Effective?

How do you measure tester effectiveness? What makes one tester's actions more effective than another's?

In most cases, this question basically boils down to: how do you evaluate a tester? If, during a product cycle, John finds 10 bugs while Jane finds 50, which tester was more effective? Don't know? What if you also knew that John's test cases had 90% code coverage, while Jane's test cases had 75% code coverage?

Still don't know? What else would you need to know? The number of test cases written? The types of testing done? Feedback from team members? The number of test "artifacts" created?

This is the point in the post where I wish I could enlighten you with a proven formula for determining tester effectiveness, but I don't have the answer. The problem with measuring individuals is that measurement usually forces a behavior change. Sometimes the change is the one you expected; sometimes it isn't (see the Hawthorne Effect for a famous example).

When I started at Microsoft, my management team was silly enough to assign bug quotas - I was "expected" to find ten bugs a week. I like to exceed expectations, so I always reported at least 12-15 bugs per week. Some weeks, things would go well for me (and badly for the developer), and I'd find 20 or more bugs. However, I still only reported 12-15. I "saved" the rest of the bugs for the following week (hey - you never know when the bug-finding well will dry up!). The measurement forced me to change my behavior in a detrimental way.

I've seen teams with code coverage goals who have written suites of horrible tests that, while reaching high levels of code coverage, don't really do any testing. In fact, I have seen just about every attempt at measuring "how well a tester is testing" fail.
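To make the coverage point concrete, here's a minimal sketch (the calculate_discount function and the tests are hypothetical, invented purely for illustration): the first "test" drives every line of the code, so a coverage report shows 100%, but it asserts nothing and can never fail. The second test exercises the same lines and actually checks the results.

```python
# Hypothetical function under test, used only for illustration.
def calculate_discount(price, customer_type):
    if customer_type == "member":
        return price * 0.9
    return price

# This "test" executes every branch, so coverage reports 100% --
# but it has no assertions, so it can never fail. High coverage,
# zero testing.
def test_discount_coverage_only():
    calculate_discount(100, "member")
    calculate_discount(100, "guest")

# A real test exercises the same lines AND checks the behavior.
def test_discount_actually_tests():
    assert calculate_discount(100, "member") == 90
    assert calculate_discount(100, "guest") == 100
```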

The answer, I think, is to give up on measuring the individual and instead measure the effectiveness of the entire test team - in other words, answer the question "is our test team effective?"

Crud - I have no idea how to do that either.