Pass, Fail…and other


In a perfect world, tests either pass or fail (OK - in a really perfect world, they would all pass). The problem is that there's a gray area between pass and fail with a lot of potential for confusion.

If a test fails during setup (e.g. configuration or application launch), it could be reported as a failure, but many automated tests report this as an aborted or blocked test. This helps with reporting - for example, if a suite of low-level network tests fails because of network infrastructure problems, failures reported in the low-level tests would be misleading. Reporting these as blocked or aborted indicates that the problem is elsewhere. (Note that for some teams, blocked and aborted are distinct results.)
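
To make the idea concrete, here's a minimal sketch (hypothetical test interface, not any particular framework) where a setup problem is classified as blocked rather than failed:

    from enum import Enum

    class Result(Enum):
        PASSED = "passed"
        FAILED = "failed"     # the test ran and found a product problem
        BLOCKED = "blocked"   # setup/environment broke before the test ever ran
        SKIPPED = "skipped"   # deliberately not run on this configuration

    def run_one(test):
        # If setup fails, report BLOCKED rather than FAILED so infrastructure
        # problems don't show up in the report as product failures.
        try:
            test.setup()
        except Exception:
            return Result.BLOCKED
        try:
            test.execute()
            return Result.PASSED
        except AssertionError:
            return Result.FAILED
        finally:
            test.teardown()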

Test results can also be reported as skipped. Consider, for example, a video card test suite that automatically selects tests based on the hardware capabilities. The test suite could have 100 possible tests, but not all will run in every configuration. This can cause confusion when reporting. Here's an example of reporting the same results in two different ways.

 

              Total Tests   Pass   Fail   Pass Rate
Video Card 1      100        90     10      90%
Video Card 2      100        65      7      65%

 

              Total Tests   Pass   Fail   Skipped   Pass Rate
Video Card 1      100        90     10       0        90%
Video Card 2      100        65      7      28        90%

The first table, of course, assumes that the pass rate is calculated as total passes / total possible tests. You could instead calculate the pass rate as pass / (pass + fail), but without reporting the skipped tests that isn't fair to the audience of reports like this (typically management) - if they see the total number of tests run changing from report to report, they may assume you are trying to hide something. Classifying each test result, as in the second table, is a simple way to communicate test results more clearly.
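
For the arithmetic behind the two tables, here's a quick sketch using the Video Card 2 numbers:

    # Video Card 2: 65 passed, 7 failed, 28 skipped, 100 possible tests
    passed, failed, skipped, total = 65, 7, 28, 100

    rate_vs_possible = 100.0 * passed / total          # 65% - first table's calculation
    rate_vs_run = 100.0 * passed / (passed + failed)   # ~90% - second table, skipped shown separately

    print(rate_vs_possible, round(rate_vs_run))        # 65.0 90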

What else do you report as test results? What do those results mean?


Comments (7)

  1. Alan Page says:

    Something I forgot to note above – I’m talking about automated tests – where the test has to figure out what happened.

    Great commentary in the pingback above if you’re interested.

  2. Adam Goucher says:

    I keep track of the following things:

    • Passed

         – Of these passed ones, which ones are newly passing

    • Failed

         – Of these failed tests, which are new

    • Errors

    • Skipped

    Keeping track of the tests that changed category is a fount of test information. For instance, should test x be passing now? If not, well, the fact that something is working is a bug.

    Of course, I design my rigs to keep track of not just the numbers of each category, but the actual tests that are involved.

    -adam
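
    A minimal sketch of the category-diffing Adam describes above (hypothetical data shapes, not his actual rig), comparing the previous run's results to the current run's:

        def diff_categories(previous, current):
            """previous/current map test name -> category, e.g. {'test_login': 'passed'}."""
            newly_passing = sorted(t for t, cat in current.items()
                                   if cat == "passed" and previous.get(t) != "passed")
            newly_failing = sorted(t for t, cat in current.items()
                                   if cat == "failed" and previous.get(t) != "failed")
            return newly_passing, newly_failing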

  3. Jared says:

    Cool, I was going to comment to point back to my blog, but looks like the 'net did it for me 🙂

    Nice when stuff works.

  4. Anu says:

    Is it a good idea to have an "Inconclusive" field, with comments saying why a test was skipped, blocked, aborted, abandoned, or un-analyzed, instead of many different test results?

  5. Pingback: In a previous post, I mentioned that when writing automated tests, the grey area between pass and fail…
