prevention v. cure (part 1)

Developer testing, which I call prevention because the more bugs devs find, the fewer I have to deal with, is often compared to tester testing, which I call detection. Detection is much like a cure: the patient has already gotten sick, and we need to diagnose and treat the illness before it sneezes all over our users. Users get cranky when they get app snot on them, and it is advisable to avoid that situation to the extent possible.

Developer testing consists of activities like writing better specs, performing code reviews, running static analysis tools, writing unit tests (running them is a good idea too; a quick sketch of one follows the list below), compiling the code, and so on. Clearly developer testing is superior to detection, for the following reasons:

1. An ounce of prevention is worth a pound of cure. For every bug kept out of the ecosystem, we decrease testing costs, and those [censored] testers are costing us a [censored] fortune. [Editor note to author: the readers may very well detect your cynicism at this point; suggest toning it down. Author note to editor: I'm a tester, and I can only contain my cynicism for a finite period; that period has expired.]

2. Developers are closer to the bug and therefore can find it earlier in the lifecycle. The less time a bug lives, the cheaper it is to remove. Testers come into the game so late, and that is another reason they cost so much.
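To keep "developer testing" from sounding abstract, here is a minimal sketch of the kind of unit test I mean. Everything in it is hypothetical: discount_price and its rules are just a stand-in for whatever code the developer happens to be writing, and Python's unittest is used only because it is compact.

import unittest


def discount_price(price, percent):
    # Hypothetical production code: apply a percentage discount to a price.
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class DiscountPriceTest(unittest.TestCase):
    # Developer-written tests that run on every build are prevention:
    # the bug never reaches a tester, let alone a user.
    def test_applies_discount(self):
        self.assertEqual(discount_price(100.00, 25), 75.00)

    def test_rejects_nonsense_percentages(self):
        with self.assertRaises(ValueError):
            discount_price(100.00, 150)


if __name__ == "__main__":
    unittest.main()

The point is that the developer who wrote discount_price can write and run something like this in minutes, long before a tester ever sees a build.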

Tester testing consists mainly of two activities: automated testing and manual testing. I'll compare those two in a future post. For now, I just want to talk about prevention v. cure: are we better off keeping software from getting sick, or should we focus on disease control and treatment?

Again, the answer is obvious: prevention is superior, so fire the testers. They come to the patient too late, after the disease has run rampant and the cure has become costly. What the heck were we thinking, hiring these people in the first place?

To be continued.