The Limitations and Purpose of Testing

I was walking the hallways yesterday and passed the office of one of the test leads on my team. He has a piece of paper in his window that says:

Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more...

If you want to lose weight, don't buy a new scale; go on a diet.

If you want to improve your software, don't test more; develop better.

(I asked him if he got this from somewhere and he said he did, but didn't remember where. My google-fu was successful, though, and found that Steve McConnell wrote that in Code Complete.)

Anyway, when I first read this I didn't like it at all. Like most testers, I am fiercely sensitive about anything that casts doubt on my technical skill or my value in the software development process, especially anything that implies I don't make software better. Most of us testers think of ourselves as the last line of defense, the people who hold the line on quality. We're always the ones telling everyone the quality isn't good enough. When I first read that quote, it set off alarm bells in that vein.

Then I stopped and thought about it, and I like it much more now. The simple truth is: no matter how good a job I do as a tester, I do not directly make the product any better. If the whole development staff went home, the testers could stick around testing the product for years, and we would end up with a very exact catalog of everything that was wrong with it, but we wouldn't have fixed a single thing. That's not our job. Our job is to find problems, gauge their severity, and prioritize them in terms of their impact on our users (and ourselves).

I have to go off on a tangent for a moment to set up a comment I'll make later (yes, I'm being a lazy writer and this is a cop-out). Every non-safety-critical piece of software of reasonable complexity ships with known bugs - usually lots of known bugs. Why this has to happen could be the subject of a very long software engineering paper, but the short version is: if you tried to fix everything, you'd never ship and you'd end up as vaporware. There is a balance to be struck here, which I'll leave for another day. I will note that this issue - which bugs need to be fixed before shipping - is always the cause of at least one very passionate discussion (read: screaming match) during the process of shipping the software.

Back on topic now. Yes, as a tester I don't directly impact the quality of the product. But what I can do is keep telling the developers "nope, it's not good enough yet - keep going." Trust me, if we weren't doing that, most developers would think they were done and everything was ready to go. Good testing doesn't make the software better; only good coding does that. But good testing is the only way to gauge what the quality of the software is, where it's broken, and how bad those breaks are. That's what the big test push just before shipping is all about: one last look to be sure we accurately understand the quality level of the software. Then everybody gets together and decides if that quality level is good enough. If it is, we ship. If it isn't, we decide where the gaps are and fix them. Then rinse and repeat.

The short version is: after some thought about how testing really works, its limitations, and its purpose, I'm secure in saying that even though I only indirectly impact the quality of the software, quality software could not ship without the work I do.


Chris