Pass, Fail...and Other Results

In a perfect world, tests either pass or fail (ok - in a really perfect world, they would all pass). The problem is that there's a gray area between pass and fail with a lot of potential for confusion.

If a test fails during setup (e.g. configuration or application launch), it could be reported as a failure, but many automated tests report this as an aborted or blocked test. This helps with reporting - for example, if a suite of low-level network tests fails because of network infrastructure problems, reporting failures in the low-level tests themselves would be misleading. Reporting these as blocked or aborted indicates that the problem is elsewhere. (Note that some teams treat blocked and aborted as distinct results.)
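To make the distinction concrete, here's a minimal sketch (in Python, not tied to any particular test framework) of a harness that reports a setup problem as blocked rather than failed. The result names and the setup()/execute() methods on the test object are assumptions for illustration.

    from enum import Enum

    class Result(Enum):
        PASS = "pass"
        FAIL = "fail"
        BLOCKED = "blocked"   # setup or configuration problem - the test body never ran
        SKIPPED = "skipped"   # intentionally not run in this configuration

    def run_one(test):
        """Run a single test, keeping setup problems separate from real failures."""
        try:
            test.setup()              # e.g. configuration, application launch
        except Exception:
            return Result.BLOCKED     # infrastructure problem - not a test failure
        try:
            test.execute()
        except AssertionError:
            return Result.FAIL
        return Result.PASS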

Test results can also be reported as skipped. Consider, for example, a video card test suite that automatically selects tests based on hardware capabilities. The suite could have 100 possible tests, but not all of them will run in every configuration. This can cause confusion when reporting. Here's an example of reporting the same results in two different ways.

               Total Tests   Pass   Fail   Pass Rate
Video Card 1       100        90     10       90%
Video Card 2       100        65      7       65%

               Total Tests   Pass   Fail   Skipped   Pass Rate
Video Card 1       100        90     10        0        90%
Video Card 2       100        65      7       28        90%

The first table, of course, assumes that the pass rate is calculated as total passes / total possible tests. You could instead calculate the pass rate as pass / (pass + fail), as the second table does, but doing that without accounting for the skipped tests isn't fair to the audience of reports like this (typically management). If they see the total number of tests run change often with no explanation, they may assume you are trying to hide something. Classifying each test result is something you can do to better communicate your results.
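To make the arithmetic concrete, here's a small sketch using the Video Card 2 numbers from the tables above; it shows how the two calculations diverge once skipped tests are in the mix.

    # Video Card 2 results from the second table: 65 pass, 7 fail, 28 skipped
    results = {"pass": 65, "fail": 7, "skipped": 28}

    total_possible = sum(results.values())               # 100 tests in the suite
    executed = results["pass"] + results["fail"]         # 72 tests actually ran

    rate_vs_possible = results["pass"] / total_possible  # 65 / 100
    rate_vs_executed = results["pass"] / executed        # 65 / 72

    print(f"pass / total possible tests: {rate_vs_possible:.0%}")  # 65%
    print(f"pass / (pass + fail):        {rate_vs_executed:.0%}")  # 90%
    print(f"skipped (explains the gap):  {results['skipped']}")    # 28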

What else do you report as test results? What do those results mean?