Test Costing

Most people know that part of software planning and scheduling is developers looking at the spec and trying to guess how much it will cost. They go through the features and come up with their best guess for how long each will take (a SWAG). Then we compare these guesses to how much time we have in our schedule and see if anything is egregiously wrong. Little differences aren't too big a deal since the guesses aren't that accurate anyway, but big discrepancies, like thinking we'll need 200 days when we only have 120, require a closer look. This isn't a new concept.

What most people don't know is that it's valuable for test teams to do this as well. At Microsoft we tend to operate under the idea that test teams and dev teams are about the same size. For example, a project with 20 developers will need about 20 testers. But some projects don't really work out that way. Certain things will require less or more test work. And if the test team also tries to estimate how much testing time they'll need, we can prevent serious scheduling problems down the road. If the developers think they'll have enough time to write the features, but the test team is clearly not going to have enough time to test them, that needs to be fixed too. We can up the test team size (breaking our typical 1-1 ratio) or we can cut the features that cause the biggest test hit. That sounds strange, cutting features we have time to write but not time to test. But it makes sense: it's a disservice to our users to release poorly tested features. Plus, with the test team doing estimates, they may anticipate a serious problem nobody else thought of, and as we all know, planning for this in the spec phase is much cheaper than down the road.
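To make that concrete, here's a rough sketch in Python of the kind of back-of-the-envelope check I'm describing: add up the dev and test guesses, compare them to the schedule, and see which features carry the biggest test hit. The feature names and all the numbers are made up for illustration, not pulled from any real project.

```python
# Minimal sketch: compare per-feature dev and test estimates (in days)
# against the days available in the schedule. All numbers are hypothetical.

features = {
    # name: (dev_days, test_days)
    "import wizard": (15, 25),
    "report export": (10, 8),
    "live preview":  (30, 45),
}

dev_capacity = 60   # hypothetical dev days available
test_capacity = 60  # hypothetical test days available (the usual 1-1 ratio)

dev_total = sum(d for d, _ in features.values())
test_total = sum(t for _, t in features.values())

print(f"dev:  need {dev_total} days, have {dev_capacity}")
print(f"test: need {test_total} days, have {test_capacity}")

if test_total > test_capacity:
    # The features with the biggest test hit are the first candidates to
    # cut or simplify, even if dev has time to build them.
    by_test_cost = sorted(features.items(), key=lambda kv: kv[1][1], reverse=True)
    print("biggest test hits:", [name for name, _ in by_test_cost[:2]])
```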

The trouble is that estimating test costs is fundamentally hard. By its nature, testing is never really done. Software Engineering 101 says you don't ship a product when it's perfect, you ship it when it's “good enough”. But how do we know as testers how long it will take to get a product to the good-enough state? Our inclination is that it's never good enough; we want it to be perfect. With unlimited time my test cost estimate would be unrealistically big. This doesn't happen with developer estimates: given a feature with well-defined scope, the time it will require is pretty constant.

This is why test estimating is hard. It requires more than just experience with the type and difficulty of problems in a certain feature. You also need to understand what good enough means for that feature, and how long it will take to get to that point. I'm doing a lot of this estimating right now, and every time I do one I find I learn a lot about testing. The process of doing a test estimate is very similar to doing a dev estimate. I think through the feature, figure out the basic high-level things I'll need to do to test it, try to anticipate problems I'll run into, that kind of thing. Once I've thought through that, I have an idea of how much work it will take to test the feature, and it's pretty easy to convert that into a number (like days of work).
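As a rough illustration of that last step, here's a sketch (again with hypothetical tasks and numbers) of turning a high-level test breakdown into a day count. The risk buffer is just one way to account for the problems you anticipate but can't pin down yet; it's not a formula anyone hands you.

```python
# Minimal sketch of converting a high-level test breakdown into a day
# estimate. The tasks and numbers are hypothetical; the point is that the
# same thinking that produces the estimate also produces a test plan outline.

test_tasks = {
    # task: estimated days (SWAG)
    "basic functional pass":        3,
    "boundary and error cases":     4,
    "cross-platform/config matrix": 5,
    "automation for regressions":   6,
    "performance sanity checks":    2,
}

risk_buffer = 0.25  # padding for anticipated but fuzzy problems

raw_days = sum(test_tasks.values())
estimate = raw_days * (1 + risk_buffer)

print(f"raw estimate: {raw_days} days")
print(f"with risk buffer: {estimate:.0f} days")
```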

If you're a tester, you should try this on your own for the next feature you work on. Sit down, spend some time on it, and take a guess at how long you think it will take. Then, when you're done, see how close you were. Try this on a smaller scale on a lot of features over the course of a project. You'll get better as you do it more and more. And it's not wasted work either; it has two big benefits.

1. The high-level thinking you do about how you will test the feature converts nicely into an outline for your test plan (you do write a test plan, right?)

2. Having good estimation skills is hugely important in the software world. You'll be amazed how much it will help your career and the length of your work week (good estimation skills plus the ability to say 'no' are the best way to have a reasonable work week). Test costing is a great way to build estimation skills.

Chris