Designing Tests

I’ve been thinking a lot recently about test case design. How can we, as professional testers, write the minimum number of tests that gives us the maximum amount of product verification and defect detection? When both control flow and input parameters (data) are taken into account, even a moderately sized software program has billions (if not trillions) of potential test cases. We obviously don’t run (or write) that many test cases – instead, we use tools like code coverage to see what code we have (and, more importantly, have not) tested, and we use techniques like equivalence class partitioning (including boundary value analysis) to minimize the number of inputs we need to test. We also use combinatorial analysis to cut down on the number of parameter interactions that need to be verified. There are other tools and techniques that can be valuable. Model-based testing can generate many of the test cases for us – but even that doesn’t make a dent in the problem.
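To make the partitioning idea concrete, here is a minimal sketch in Python. The ship_rate function, its weight limits, and its rate tiers are all invented for illustration; the point is that one representative value per equivalence class, plus the values on and just beyond each boundary, covers the input space with a handful of cases instead of an effectively unbounded number.

```python
# A minimal sketch of equivalence class partitioning with boundary value
# analysis. ship_rate() is a hypothetical unit under test; its partitions
# and limits are invented for this example.

def ship_rate(weight_kg):
    """Hypothetical unit under test: returns a shipping rate tier."""
    if weight_kg <= 0 or weight_kg > 50:
        raise ValueError("unsupported weight")
    if weight_kg <= 5:
        return "small"
    if weight_kg <= 20:
        return "medium"
    return "large"

# Instead of testing every possible weight, pick one representative value per
# equivalence class plus the values on each boundary (and just past it).
cases = [
    (0,    ValueError),  # lower boundary: invalid
    (0.1,  "small"),     # just inside the first valid class
    (5,    "small"),     # upper boundary of "small"
    (5.1,  "medium"),    # just over the boundary
    (12,   "medium"),    # representative interior value
    (20,   "medium"),    # upper boundary of "medium"
    (20.1, "large"),     # just over the boundary
    (50,   "large"),     # upper boundary of valid input
    (50.1, ValueError),  # just outside the valid range
]

def run_cases():
    for weight, expected in cases:
        if isinstance(expected, type) and issubclass(expected, Exception):
            try:
                ship_rate(weight)
            except expected:
                continue
            raise AssertionError(f"{weight}: expected {expected.__name__}")
        actual = ship_rate(weight)
        assert actual == expected, f"{weight}: got {actual}, expected {expected}"

if __name__ == "__main__":
    run_cases()
    print(f"{len(cases)} partition/boundary cases passed")
```

Nine cases here stand in for an infinite set of possible weights, and every case earns its place: each one either represents a class of equivalent inputs or probes an edge where off-by-one defects tend to hide.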

Another solution is to disagree with me that this is a problem at all. Customers don’t care about defects they never see, so some would argue that it’s only worth testing what the customer actually uses. Other testers do their test design only while they are actually testing – this works well for ad-hoc testing, but I think the approach falls down on many levels (I’ll save more on that for another post).

James Whittaker’s software “attacks” in How to Break Software are worth considering when designing test cases. The test patterns introduced by Robert Binder in Testing OO Software are another great resource for designing tests (if patterns are so widely accepted as a design tool for development, why aren’t they just as widely used in test?).
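As a hedged illustration of how an attack in that spirit might translate into code, here is a sketch of an input attack along the lines of “force the error handling to run.” The parse_port function and the hostile inputs are my own invented example, not taken from the book.

```python
# A sketch of a data-driven input attack: feed the unit under test inputs
# chosen to exercise its error handling. parse_port() is a hypothetical
# unit under test used only for illustration.

def parse_port(text):
    """Hypothetical unit under test: parse a TCP port number from a string."""
    value = int(text.strip())
    if not 1 <= value <= 65535:
        raise ValueError(f"port out of range: {value}")
    return value

hostile_inputs = [
    "",            # empty string
    "   ",         # whitespace only
    "0",           # below the valid range
    "65536",       # just above the valid range
    "-1",          # negative
    "80.5",        # non-integer
    "8O",          # letter O instead of zero
    "9" * 1000,    # absurdly long input
]

def attack_parse_port():
    # The attack finds a bug if the unit fails with anything other than the
    # documented ValueError (e.g., crashes, hangs, or returns garbage).
    for text in hostile_inputs:
        try:
            parse_port(text)
        except ValueError:
            pass  # graceful rejection is the expected behavior
        # any other exception propagates and fails the run

if __name__ == "__main__":
    attack_parse_port()
    print(f"{len(hostile_inputs)} hostile inputs handled gracefully")
```

The value of framing tests as attacks or patterns is that the design work becomes reusable: the same list of hostile inputs, or the same boundary pattern, can be pointed at the next parser or the next input field with very little new thinking.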

I’m only skimming the surface here, and I’ll likely follow up on this post once I reach a conclusion worth sharing. In the meantime, what do you consider when designing test cases? What goals do testers need to keep in mind when designing tests?