Every time Shrini posts something, I want to write a whole post just to comment. His latest post is no exception (my response hasn't shown up yet because Shrini is one of those bloggers who insists on approving comments - no offense to Shrini, but I hate that).
Anyway, the point I want to reiterate is that anything you measure will cause change. Success depends on how you implement and monitor the measurement. It's easy to screw up, and it gets screwed up a lot. One way to improve your chances of success is to test the metrics themselves (hey - we're testers, we should be able to do that!).
Things to think about include:
- What adverse behaviors could you identify?
- What is a good result for this metric?
- How could the metric be gamed?
- Is the metric by itself accurate? Should it be normalized with another measurement?
- Is the metric defined enough to mean the same thing to all stakeholders?
If you want to play along, try this: Consider the following metric, then tear it apart and make it better. Tell me what could go wrong with it, how to make it more accurate or actionable, or any other way it could be improved. I'll post some of my thoughts early next week.
% of tasks planned versus completed for last month
This is one metric that will be used to help predict progress toward the following high-level goal:
Improve the accuracy of estimates for all projects by 20% in the next year
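To see why the "defined enough to mean the same thing to all stakeholders" question matters, here's a minimal sketch (the numbers and function name are hypothetical, just for illustration) showing how the same month of data can produce two different values depending on whether "completed" counts all finished tasks or only the ones that were actually on the plan:

```python
def planned_vs_completed(planned, completed):
    """Percent of planned tasks reported as completed."""
    return 100 * completed / planned

# Hypothetical month: 40 tasks planned, 35 tasks finished,
# but 5 of the finished tasks were unplanned additions.
planned = 40
completed_total = 35      # every task finished, planned or not
completed_from_plan = 30  # only tasks that were on the original plan

loose_reading = planned_vs_completed(planned, completed_total)       # 87.5
strict_reading = planned_vs_completed(planned, completed_from_plan)  # 75.0
```

Two stakeholders reporting "% of tasks planned versus completed" in good faith could report 87.5% and 75%, which is exactly the kind of ambiguity that testing the metric should surface before it's used to judge estimate accuracy.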
Kaner has a paper that may help guide your thinking (and which, as I re-read it, goes much deeper than this silly little blog post).