Industrial Strength Exploratory Testing – part 3


Let’s pick up mythbusting where we left off.

Myth 2: There is no way to measure exploratory testing

I have heard this often from test managers: “How do we measure testers’ productivity in exploratory testing?” Scripted testing lends itself to measurement, such as the rate of tests executed per hour or the test-pass progress percentage, but how do we measure progress or productivity in exploration?

First up, there is no absolute metric that is a silver bullet for everyone. One could easily argue against any metric in a situation it does not suit. For instance, I could propose the number of bugs found per hour in exploratory testing as a metric, but that number is a function of how buggy the product was to begin with, what kind of software it is, how much churn went into it, and so on. So first we need to come up with metrics that are meaningful for a specific team. Once you determine those, it should be easy to wire them up with the tools you use to do exploratory testing.
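To make that confound concrete, here is a tiny sketch (all numbers invented) showing how a raw bugs-per-hour figure can invert once you normalize by how much the code under test actually changed:

```python
# All numbers here are invented, purely to illustrate the confound:
# raw bugs-per-hour rewards whoever happened to test the buggiest area.
sessions = [
    # (tester, hours, bugs_found, lines_changed_in_area)
    ("alice", 4.0, 6, 1200),  # exploring a high-churn area
    ("bob",   4.0, 2,  150),  # exploring a stable area
]

for tester, hours, bugs, churn in sessions:
    raw = bugs / hours
    normalized = bugs / (churn / 1000)  # bugs per 1K changed lines
    print(f"{tester}: {raw:.1f} bugs/hr, {normalized:.1f} bugs per 1K churned lines")
```

Here alice looks more productive on the raw rate, while bob actually found more bugs per unit of change, which is exactly why a metric has to fit the team and the situation.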

For instance, on my team we track progress in exploratory testing like any other task in the daily standup meeting. We report time spent, time remaining, and any adjustments made based on bug density or risk. We look at metrics like tour effectiveness (bug counts or priorities per tour), code coverage across tours to assess whether we have holes in our testing, and the ratio of user story complexity to bugs to see whether certain stories spike. Since TFS is the underlying data repository for all our testing, it is easy for us to pull this data into reports from TFS.
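As an illustration, here is a minimal sketch of a tour-effectiveness computation of the kind described above, assuming bugs have been exported to a CSV with one row per bug. The file name, column names, and priority weights are all assumptions for this example, not anything TFS produces out of the box:

```python
import csv
from collections import Counter

# Priority-weighted bug score per tour: a P1 find counts for more
# than a P3 find. Weights and CSV layout are assumptions.
PRIORITY_WEIGHT = {1: 5, 2: 3, 3: 1}

score = Counter()
count = Counter()
with open("tour_bugs.csv") as f:  # assumed columns: tour,priority
    for row in csv.DictReader(f):
        score[row["tour"]] += PRIORITY_WEIGHT.get(int(row["priority"]), 1)
        count[row["tour"]] += 1

for tour, s in score.most_common():
    print(f"{tour}: {count[tour]} bugs, weighted score {s}")
```

The same shape of computation works for the other metrics mentioned above; only the grouping key and the weighting change.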

An interesting visualization we tried, built on code coverage, was a heat map that looked something like this:

[Figure: application components colored by their code coverage]

The colors indicate how well covered each component is in terms of code coverage. You can track coverage while exploring specific requirements or user stories of your application and spot where code coverage is lacking, which tells you where to direct future testing efforts. Similar heat maps could be built on bug density or bug counts; for instance, your map could show in red the stories with a high bug density (bugs per line of code) to inform future testing.
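If it helps, here is a minimal sketch of one way to build such a view. The component names and coverage fractions are invented; real inputs would come from your coverage tool’s export rather than being hard-coded:

```python
import matplotlib.pyplot as plt
from matplotlib import cm

# Invented coverage fractions per component, for illustration only.
coverage = {
    "Login": 0.85,
    "Checkout": 0.40,
    "Search": 0.65,
    "Profile": 0.92,
    "Reporting": 0.25,
}

names = list(coverage)
values = [coverage[n] for n in names]

# Map coverage onto a red (low) to green (high) scale, heat-map style.
colors = [cm.RdYlGn(v) for v in values]

fig, ax = plt.subplots()
ax.barh(names, values, color=colors)
ax.set_xlim(0, 1)
ax.set_xlabel("code coverage fraction")
ax.set_title("Coverage heat map by component")
plt.tight_layout()
plt.show()
```

Swapping the coverage numbers for bugs-per-line-of-code gives the bug-density variant described above with no other changes.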

In summary, there are various ways that you could measure and direct your exploration efficiently. Choose metrics wisely!

Comments (5)

  1. Anonymous says:

    Why is it important to measure testers' productivity?

    For answer X, I have two follow-up questions:

    Can you accomplish X in other, better ways?

    Why is it important to X?

    For answer Y, you can ask the same follow-up questions.

    Repeat until you discover your true purposes, and work towards those instead of simplified measurements that will skew your efforts.

    [Anu] I agree. I thought it was implicit in the examples I provided that we had answered these questions to arrive at metrics that are meaningful to us, but your articulation is nicely done. That is exactly why there is no silver bullet metric that applies to all teams.

    For instance, on my team, it is important that testers optimize their testing time by looking at areas with high risk. How do I quantify risk? Via code churn: all code changes are suspect. So mapping coverage against churn is a good way to optimize my testers' time and derisk the product quality. Is it the only way? Perhaps not. 🙂
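    To make that concrete, here is a minimal sketch of the coverage-versus-churn idea; the file names and numbers are invented, and real inputs would come from version control history and a coverage run:

    ```python
    # Rank files by risk: the more a file changed and the less of it is
    # covered, the sooner it should be explored. All inputs are invented.
    files = {
        # name: (changed_lines, coverage_fraction)
        "payment.cs":  (450, 0.30),
        "cart.cs":     ( 80, 0.90),
        "shipping.cs": (300, 0.55),
    }

    risk = {name: churn * (1.0 - cov) for name, (churn, cov) in files.items()}
    for name in sorted(risk, key=risk.get, reverse=True):
        print(f"{name}: risk score {risk[name]:.0f}")
    ```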

  2. Tarun__Arora says:

    As always, good work!

  3. Hi Anu,

    Great post! Can you provide me some more information about how you build this visualization? Really interested in that!

    [Anu] Hmmm… long answer; that should be a post of its own. Will publish one soon.

    Thanks!

  4. Anonymous says:

    "Anu-tations" – Nice play on words on the title. Excellent post also.

  5. Anonymous says:

    Is that heat map shipped with the latest VS 2012?