measuring testers

Yeah, I know … scary subject. But as it is review time here at the empire, this is a subject that has been front and center for both testers and the managers they report to so I’ve been asked about it a lot. I always give the same advice to test managers, but I’ve done so with much trepidation. However, I suddenly feel better about my answer because I’m in good company.

Before I give it away, let me tell you why I am feeling better about my answer. I came across a quote today while looking at the slides that Jim Larus is using for his keynote tomorrow at ISSTA (the International Symposium on Software Testing and Analysis). The quote captures exactly my advice to managers here at Microsoft who ask me how to rate their SDETs. Moreover, the quote comes from Tony Hoare who is a professional hero of mine and a friend of my mentor Harlan Mills (and a Knight, a Turing Award winner and Kyoto Prize winner). If Tony had said the opposite, I would have a whole lot of apologizing to do to the many test managers I’ve given this advice to. Whenever we disagree, you see, I am always wrong.

So here’s my advice: don’t count bugs, their severity, test cases, lines of automation, number of regressed suites or anything concrete. It won’t give you the right answer except through coincidence or dumb luck. Throw away your bug finding leader boards (or at least don’t use them to assign bonuses) and don’t ask the other testers in the group to rate each other. They have skin in this game too.

Instead, measure how much better a tester has made the developers on your team. This is the true job of a tester: we don’t ensure better software, we enable developers to build better software. It isn’t about finding bugs, because the improvement that causes is only temporary. The true measure of a great tester is that they find bugs, analyze them thoroughly, report them skillfully, and end up creating a development team that understands the gaps in its skill and knowledge. The end result will be developer improvement, and that will reduce the number of bugs and increase productivity in ways that far exceed simple bug removal.

This is a key point. It’s software developers who build software, and if we’re just finding bugs and assisting in their removal, no real lasting value is created. If we take our job seriously enough, we’ll ensure that the way we go about it creates real and lasting improvement. Making developers better, by helping them understand failures and the factors that cause them, will mean fewer bugs to find in the future. Testers are quality gurus, and that means teaching those responsible for anti-quality what they are doing wrong and where they could improve.

Here are Tony’s exact words:

“The real value of tests is not that they detect bugs in the code, but that they detect inadequacies in the methods, concentration and skill of those who design and produce the code.”

 – Tony Hoare, 1996

Now replace the word “tests” with “testers” and you end up with a recipe for your career. I imagine I’ll be examining this subject more in future posts. Follow the link above to get Jim Larus’ take on this, as well as a guided tour through some of MSR’s test technology, some of which is wide of Tony’s mark and some a bit closer.

Comments (10)

  1. Nice quote. I’m enjoying your blog so far – keep at it.

  2. great blog jw and what a great way to think about measuring the value of testing and also testers.

    I can’t help but wonder, though, for the thousands of testers out there doing bug bashes (and being measured in exactly the way you suggest not to be), what can we change in our daily activities or thinking to bring about this fundamental (but important) change?

    Bet there is no "easy button" for this, but where do you start?

  3. MSDN Archive says:

    Leave it to YOU to be the one to point out that the two side-by-side posts are at odds: PEST on the virtue of finding bugs, and this one on the virtue of not counting the finds! Clearly I owe you a beer for that one.

    It’s a conundrum, I agree, that the part we spend so much time trying to be good at is the exact part no one wants to be measured on! I think the real advice here is to get good at finding bugs BUT DON’T stop there. You need to take it to the next level and ensure that the apparatus for creating those bugs gets reprogrammed. In other words, finding bugs should lead to preventing bugs or it isn’t very valuable.

    But still…it is fun as hell…

  4. ryanboucher says:

    An interesting point but I feel that you are ignoring large slices of what a tester does.

    If the tester reviews a requirements or UI spec, suggests changes, and makes the specification more complete, that work makes the analyst better, not the coder. Naturally this will implicitly flow through to a better initial version of the software built by the coders. It’ll also flow through to the testing phases.

    I also think you’re missing one half of testing, which is “did we build the correct software?” — that is, validation. We can have the ultimate code monkey cutting perfect code from now until eternity, but if it’s not what the end user wants then it is useless. Naturally, if you are building COTS software you probably won’t have a user until the marketing department finds one.

    I agree that statistics, while interesting, shouldn’t be used to rate testers as the impact of an individual on a team is not something that can be easily or consistently defined. I disagree that the tester exists to be the support crew of the coder.

  5. MSDN Archive says:

    Thanks for your comment, Ryan. Nothing to argue about in it. But Tony’s quote doesn’t say ‘code’; it says ‘those who design and produce the code’. I should have included the others too, instead of my generalization. Good catch.

  6. Grant Holliday on What product key do I use for TFS Proxy? and How do You test code that uses the TFS…

  7. dking says:

    It appears that the link to Jim Larus’s page is incorrect. It is giving a 404 error at this time.

  8. [Nacsa Sándor, January 13 – February 3, 2009]  The subject of quality assurance is hardly known at all

  9. Mike says:

    So what metrics can be used to measure how much better a tester has made the developers?

