Concerns about Test Case Yields


Shrini K had an excellent question about the concept of a Test Case Yield. Test Cases can take an extraordinary amount of effort to run effectively, and it's not uncommon for a lead, a manager, or Testers themselves to seek high bug counts to demonstrate that this effort is worthwhile.


Some might find this statement alarming, but I find it to be very true: running Test Cases is not a good way to find bugs! Does that sound backwards and unintuitive? Of course it does, so allow me to explain myself after making such a broad statement.


Test Cases are a great tool that any Tester should use. They provide a structured, repeatable method for verifying that the Expected Result is obtained for any given input. But that same structure is limiting: the Tester follows a strict script, running cases exactly as they have been run in the past. It encourages, and in fact requires, the Tester to “stay between the lines” of their Test Cases instead of creatively exploring all aspects of the program. The Tester's creativity was applied when the Test Cases were written, not when they are run. Running Test Cases is a mechanical process in which the Tester looks for bugs only in the ways defined by the Test Cases.
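To make that concrete, here is a minimal sketch (in Python, using a hypothetical `discount_price` function invented for illustration) of what a scripted Test Case looks like: the input and the Expected Result are fixed in advance, so the run can only confirm or refute that one documented expectation.

```python
def discount_price(price, percent):
    """Function under test (hypothetical): apply a percentage discount."""
    return round(price * (1 - percent / 100.0), 2)

def test_case_standard_discount():
    """A scripted Test Case: fixed input, fixed Expected Result.

    The creativity happened when this case was written; running it is
    mechanical, and it can only detect a deviation from this single
    documented expectation.
    """
    actual = discount_price(100.00, 15)
    expected = 85.00
    assert actual == expected, f"expected {expected}, got {actual}"

test_case_standard_discount()
print("PASS")
```

Any behavior outside this script, say, a negative price or a 110% discount, goes unexamined unless someone wrote a Test Case for it.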


Thus, Test Cases will find bugs only in the areas for which they were written. Each Test Case has probably been run many times before, so bugs in those areas have probably already been found and fixed. You'll still find some new bugs and regressions of old ones, but not in high numbers.


So, what are Test Cases good for?



  • Verification! Test Cases are essential to prove that ALL areas of a product are working at milestone end.

  • Finding regressions of old bugs.

  • Finding new bugs that have appeared since the previous run of a Test Case.
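The verification and regression roles above can be sketched as a tiny regression suite (Python, with a hypothetical `normalize_name` function and made-up cases): every case encodes a previously verified Expected Result, so a passing run proves those areas still work, and any failure flags a regression rather than a brand-new class of bug.

```python
def normalize_name(name):
    """Function under test (hypothetical): collapse whitespace, title-case."""
    return " ".join(name.split()).title()

# Each case pairs an input with the Expected Result that was verified
# in an earlier milestone; re-running the suite checks for regressions.
REGRESSION_CASES = [
    ("ada  lovelace", "Ada Lovelace"),
    ("GRACE HOPPER", "Grace Hopper"),
    ("  alan turing ", "Alan Turing"),
]

def run_suite():
    """Return the list of failing cases (empty list means all verified)."""
    failures = []
    for raw, expected in REGRESSION_CASES:
        actual = normalize_name(raw)
        if actual != expected:
            failures.append((raw, expected, actual))
    return failures

print("regressions:", len(run_suite()))
```

Note what the suite cannot do: it will never notice a bug in behavior that no case encodes, which is exactly why its bug yield stays low.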

Test Cases are a valuable tool; just don't expect a high Bug Count when using them. They are better suited to showing that the software works than to finding ways that it doesn't. A low bug count when running Test Cases is OK, and it is usually expected.


-Greg


Comments (1)
  1. Shrini K says:

    Running Test Cases is a mechanical process, where the Tester is restricted to only looking for bugs where there is a deviation between expected result and Actual result as documented in the test cases.

    Test cases need to serve two purposes.

    1. Verify that Expected result matches Actual result – conformance based testing.

    2. While ensuring conformance, verify that there are no side effects [the application does not do what it is not supposed to do]. This part is by and large not addressed by test cases and is left to the intuitive thinking of the tester, who must look beyond the test cases.

    Here is where bugs are lurking and escape the tester's eye. Now, does this lead to what is known as “dependence on individual excellence, hunch, intuition” to find bugs? How can this be done predictably and repeatably? I think here lies the true tester's challenge: develop a process/technique so that the “element” of individuality is minimized to the extent possible. I am not sure this can be made zero. Model-based testing seems to offer some pointers here.

    Bug counts attributed to test cases generally start high in the initial test cycles and fall as the cases are used again and again. In a way they wear out, or lose their efficiency at finding bugs; this is what is referred to as the “Pesticide Paradox”. These test cases are static in nature, while techniques like model-based testing tend to produce test cases from test patterns and set up scenarios in which test cases can be executed in random order. Bug counts in such cases tend to increase.

Comments are closed.
