Shrini K had an excellent question about the concept of Test Case Yield. Test Cases can take an extraordinary amount of effort to run effectively, so it's not uncommon for a lead, a manager, or Testers themselves to seek high bug counts to demonstrate that this effort is worthwhile.
Some might find this statement alarming, but I find it to be very true: running Test Cases is not a good way to find bugs! Does that sound backwards and unintuitive? Of course it does, so allow me to explain myself after making such a broad statement.
Test Cases are a great tool that any Tester should use. They provide a structured, repeatable method for ensuring that an expected result is obtained for a given input. This leads a Tester to follow a strict script, running test cases the same way they have been run in the past. It encourages, and in fact limits, a Tester to “stay between the lines” of their test cases instead of creatively exploring all aspects of the program. The Tester's creativity was applied when the Test Cases were written, not when they are run. Running Test Cases is a mechanical process in which the Tester looks for bugs only in the ways defined by the Test Cases.
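To make this concrete, here's a minimal, hypothetical sketch of a scripted test case in Python (the `apply_discount` function and its values are invented for illustration): the test verifies exactly one expected result for one input, and is blind to anything outside that script.

```python
def apply_discount(price, percent):
    """Apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_discount():
    # A scripted test case: one defined input, one expected result.
    assert apply_discount(100.00, 10) == 90.00

test_ten_percent_discount()  # passes

# The test stays "between the lines": it never notices that a discount
# over 100% produces a nonsensical negative price, because no test case
# was written to look there.
print(apply_discount(100.00, 150))  # -50.0
```

Running this test a hundred times will pass a hundred times, yet the negative-price bug sits untouched right next to it. That is the limitation being described here: the check only exercises what its author thought to write down.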
Thus, Test Cases will find bugs only in the areas for which they were written. Each Test Case has probably been run many times before, so bugs in those areas have likely already been found and fixed. You'll find some new bugs and regressions of old ones, but not in high numbers.
So, what are Test Cases good for?
- Verification! Test Cases are essential for proving that ALL areas of a product are working at milestone end.
- Finding regressions of previously fixed bugs.
- Finding new bugs that have appeared since the previous run of a test case.
Test Cases are a valuable tool; just don't expect a high bug count when using them. Test Cases are better suited to showing that the software works than to finding ways that it doesn't. A low bug count when running Test Cases is OK, and is usually expected.