Comprehensive Test Output

 
 

Test programs are notoriously buggy because they are written in relative haste. We test our programs because we know we will make mistakes and want to catch them as soon as possible, yet the test code itself rarely receives that same scrutiny. When test programs rely on binary pass-fail gates, they very often obscure problems within the test cases themselves.

For example:

  • Only the simplest test cases fit the pass-fail method well, and even then they produce nothing that explains why an output value is correct, how it was calculated, or which inputs were used (see the snippet after this list).
  • Reacting to a failed case always requires test code inspection, a notoriously time-consuming, error-prone, and frustrating process.
  • Pass-fail gates cannot easily capture software side effects, like library warnings or diagnostic messages that might explain why a test fails.
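
A minimal GoogleTest sketch of the first point. The function under test, blend, is hypothetical:

    #include <gtest/gtest.h>

    // Hypothetical function under test: linear interpolation from a to b.
    double blend(double a, double b, double t) { return a + t * (b - a); }

    TEST(BlendTest, Midpoint) {
        // A bare pass-fail gate: on failure GoogleTest reports only the two
        // compared values, not the inputs or any intermediate calculation
        // that produced them.
        EXPECT_NEAR(blend(2.0, 4.0, 0.5), 3.0, 1e-9);
    }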

Black-box and unit tests should produce text that describes inputs and calculated intermediate values. They should describe expected output values, complain when output values do not match expectations, and capture messages produced by library code when things go wrong. Displaying intermediate values and printing floating-point values at a precision greater than the pass-fail gates require also helps to diagnose calculation problems without reading the code, or at least indicates where to look for them.
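
The same hypothetical test, sketched with descriptive output added. The printed text goes to standard output and leaves the pass-fail gate untouched:

    #include <cmath>
    #include <cstdio>
    #include <gtest/gtest.h>

    double blend(double a, double b, double t) { return a + t * (b - a); }

    TEST(BlendTest, MidpointWithTrace) {
        const double a = 2.0, b = 4.0, t = 0.5;
        const double expected = 3.0;
        const double tolerance = 1e-9;
        const double actual = blend(a, b, t);

        // Describe inputs, expectation, and result at a precision greater
        // than the tolerance used by the pass-fail gate below.
        std::printf("blend inputs : a=%.15g b=%.15g t=%.15g\n", a, b, t);
        std::printf("expected     : %.15g (tolerance %.1e)\n", expected, tolerance);
        std::printf("actual       : %.15g\n", actual);

        // Complain in the captured text as well as through the gate itself.
        if (std::fabs(actual - expected) > tolerance)
            std::printf("MISMATCH: |actual - expected| = %.3e\n",
                        std::fabs(actual - expected));

        EXPECT_NEAR(actual, expected, tolerance);
    }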

Existing test programs can easily be modified to include new textual outputs. The outputs do not interfere with the pass-fail gate evaluations used by frameworks like GoogleTest. These new outputs should then be used to further inform test case evaluation:

  1. Textual test outputs are written to standard output or standard error and then captured in a file for further review.
  2. After all of the pass-fail gates succeed, the captured output should be archived as a "golden" expected result file.
  3. An automated process (as part of a Continuous Integration pipeline) may be used to compare future output files to the golden result; a minimal comparison sketch appears after this list.
  4. Future test results whose pass-fail gates succeed should be considered suspicious if the associated output file does not match the golden output.
  5. Suspicious output should be reviewed by the developer. The output may indicate that additional pass-fail gates should be created. Differences in the output may also be explained by improvements introduced into the code base. When appropriate, the golden output file should be updated to remove suspicion.
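
A minimal sketch of the comparison in steps 3 and 4, assuming the run's captured text is in test_output.txt and the archived golden copy is in test_output.golden (both file names are placeholders):

    #include <fstream>
    #include <iostream>
    #include <string>

    // Compare the fresh capture against the golden file, line by line.
    // A non-zero exit status marks the run as suspicious even when every
    // pass-fail gate succeeded.
    int main() {
        std::ifstream golden("test_output.golden");
        std::ifstream current("test_output.txt");
        if (!golden || !current) {
            std::cerr << "cannot open golden or current output file\n";
            return 2;
        }

        std::string g, c;
        int line = 0;
        for (;;) {
            const bool more_g = static_cast<bool>(std::getline(golden, g));
            const bool more_c = static_cast<bool>(std::getline(current, c));
            ++line;
            if (!more_g && !more_c)
                return 0;             // both files exhausted: outputs match
            if (more_g != more_c || g != c) {
                std::cerr << "output diverges from golden file at line "
                          << line << "\n";
                return 1;             // suspicious: flag for developer review
            }
        }
    }

A Continuous Integration job can run a comparison like this after the test suite and treat a non-zero exit status as the signal to review the differences before updating the golden file.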
 
 