Testing Estimation

Sonal asks:

Could you discuss estimating testing for a manual/automation project? How long should the estimating itself take, and how many test cases should be designed per day (though that sounds like a bad statistic)?

Test case design and testing per se never end, but scoping is essential! So how does one do it?

Ah, one of the dreaded questions of testing. "How long will it take you to test?" There are many different ways to answer this question.

One option is to ask "When do you want to ship?" I see testing as an information-providing service. We provide information regarding our view of the application to whoever is making the ship decision - information which is typically only one of multiple pieces of data that person is considering. Probably they are also evaluating other items, such as whether competitors are about to launch a new version. And whether customers are clamoring for new functionality. And the likely impact of releasing (or not) on the company's revenues. And whether they get a bonus for shipping by a particular date. And whether the ship party has already been scheduled. *grin*

Given that it is not our job as testers to decide whether the application is ready to ship, it is perfectly valid for us to say "We will give you feedback regarding what we have looked at, and what we haven't looked at, and what we have found, until you tell us you have sufficient information for your purposes". That's as far as our responsibility goes.

(Which is not to say it is not also our job to make clear whether we think shipping in the current state is doing a disservice to our customers. If management decides to ship anyway, though, that is their call.)

If your "When do you want to ship?" question does not completely freeze your management's collective brains, they may have enough wherewithal to say something like "Thank you for providing that immensely valuable service. Let me restate my question: how long will it take you to complete enough testing for you to feel reasonably confident you have covered most of the risks and found most of the important bugs?" (If they say "all" rather than "most", I suggest replying "Infinity bazillion years" and then educating them on how no amount of testing will ever find every last problem.)

I see this as a very different question from the first one. Here they are asking you to estimate how long you think it will take you to do however much testing you think is necessary, similar to how they (hopefully) ask your developers to estimate how long coding and debugging and bug fixing will take. There is a plethora of different techniques you can use for estimating your testing tasks. My (current) favorites are Session-Based Test Management and Joel-On-Software Estimation.

James and Jon Bach have posted copious information on Session-Based Test Management. In brief, you don't define oodles of test cases before you start testing. Instead, you define a set of Test Missions - short descriptions of the testing to be done. Then you execute a series of Test Sessions, at least one per Test Mission, wherein you define and execute test cases on the fly. Just-In-Time Testing, kind of. See the SBTM site for more details.
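To make that concrete, here is a minimal sketch, in Python, of what tracking missions and sessions might look like. The mission texts, field names, and numbers are invented for illustration; SBTM itself prescribes session charters, sheets, and debriefs rather than any particular data model.

    from dataclasses import dataclass, field
    from datetime import timedelta

    @dataclass
    class Session:
        """One time-boxed block of uninterrupted testing against a mission."""
        duration: timedelta
        bugs_found: int = 0

    @dataclass
    class Mission:
        """A short description of the testing to be done."""
        description: str
        sessions: list = field(default_factory=list)

        def time_spent(self) -> timedelta:
            # Total time invested across all sessions run for this mission.
            return sum((s.duration for s in self.sessions), timedelta())

    missions = [
        Mission("Explore the import wizard with malformed input files"),
        Mission("Probe concurrent edits for data loss"),
    ]
    missions[0].sessions.append(Session(timedelta(minutes=90), bugs_found=3))

    # A quick status report: which missions have been visited, and how much
    # testing each has received so far.
    for m in missions:
        print(f"{m.description}: {len(m.sessions)} session(s), "
              f"{m.time_spent()} spent, "
              f"{sum(s.bugs_found for s in m.sessions)} bug(s) found")

A report like this also makes the "what we haven't looked at" part of your feedback easy to see: any mission with zero sessions is untested by definition.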

Joel-On-Software Estimation comes from Joel Spolsky. Here you break your work into inch-pebbles - tasks which will take at most four hours to complete. For each task you track your original estimate, your current estimate, and your elapsed time thus far. A bit of spreadsheet math calculates your time remaining. See Joel's article Painless Software Schedules for more details.
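As a rough illustration of that spreadsheet math, here is a minimal Python sketch; the task names and hours are invented, and the arithmetic (time remaining = current estimate minus elapsed time) is the whole trick:

    # One row per inch-pebble task:
    # (task, original estimate, current estimate, elapsed), all in hours.
    tasks = [
        ("Smoke-test install on a clean machine", 2.0, 2.0, 2.0),  # done
        ("Session: import wizard edge cases",     4.0, 6.0, 3.5),  # estimate grew
        ("Write regression checks for the fix",   3.0, 3.0, 0.0),  # not started
    ]

    for name, original, current, elapsed in tasks:
        remaining = max(current - elapsed, 0.0)
        print(f"{name}: {remaining:.1f}h remaining "
              f"(orig {original:.1f}h, now {current:.1f}h, spent {elapsed:.1f}h)")

    total = sum(max(current - elapsed, 0.0) for _, _, current, elapsed in tasks)
    print(f"Total remaining: {total:.1f}h")

Keeping the original estimate around rather than overwriting it is the other payoff: comparing it against where the current estimate ends up teaches you, over time, how optimistic your first guesses tend to be.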

There are any number of other estimation methods you can use as well. SBTM and JOSE are two I find useful. The specific method you use does not matter so much as that you use *some* method. Find one that works for you.
