Does GQM Work?


I was typing a response to this post on Matt's blog when I realized two things.

1 - the comment forms on blogspot blow

2 - due to the length of my reply and #1 above, I thought it would be ok to respond here.

NOTE: If you don't know what GQM is and need context, read this (wikipedia), and look at Matt's previous post.

In short, the purpose of GQM is to measure something meaningful rather than the random assortment of metrics that most teams look at. Like most things in life, if you do it wrong, it doesn't work.

The OIM concept that JB responded with is hard to disagree with. It basically says "look and think about what is happening, adjust appropriately, and repeat". I don't like that the example is "Is X doing testing well" - people metrics are almost always going to be wrong. Perhaps that is why I don't get the model...?

I'll be the first to admit that GQM is easy to get wrong, but bear with an example (and one not based on individual performance).

For GQM to work, the goal needs to be SMART (Specific, Measurable, Attainable, Relevant, and Time Bound). Let's say, for example, that your team is struggling with estimates and is trying to improve. In fact, management has asked you to improve estimates and provide data! A (smart) goal could be "Improve the accuracy of estimates for all projects by 20% in the next year compared to last year."

Questions may be results based: "What percentage of tasks planned for last year or this year were completed?", or progress based: "How accurate were my initial estimates for the last milestone?", "Did our team accomplish all of its objectives for this month?". From that, you may come up with metrics like:

  • % of tasks planned versus completed for last year versus next 12 months
  • Error of estimate for last milestone: abs(actual - estimate)
  • % of tasks accomplished versus planned for last month
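To make the metrics above concrete, here is a minimal sketch of computing them, assuming a hypothetical task list with made-up estimate and actual fields (the task names and numbers are invented for illustration only):

```python
# Hypothetical task data: estimated vs. actual effort in days,
# plus whether the task was completed.
tasks = [
    {"name": "login page", "estimate": 5, "actual": 7, "completed": True},
    {"name": "search API", "estimate": 8, "actual": 8, "completed": True},
    {"name": "reporting",  "estimate": 3, "actual": 0, "completed": False},
]

# % of tasks completed versus planned
completed = sum(1 for t in tasks if t["completed"])
pct_completed = 100.0 * completed / len(tasks)

# Error of estimate, abs(actual - estimate), over completed tasks
errors = [abs(t["actual"] - t["estimate"]) for t in tasks if t["completed"]]
mean_error = sum(errors) / len(errors)

print(f"{pct_completed:.0f}% of planned tasks completed")
print(f"mean absolute estimate error: {mean_error:.1f} days")
```

Nothing fancy - the point is that each metric falls directly out of a question, rather than being collected because it happened to be easy to measure.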

But wait - metrics can be gamed. In fact, it's easy to game metrics. This may be where JB's OIM model comes in. If you think your metrics will always tell you what you think they will, you are in for a big, big surprise. The answer, of course, is that you test your metrics before using them, then watch and make sure they are measuring what you think they are. Metrics aren't special - anything you follow blindly will come back to bite you.


Comments (2)

  1. Shrini says:

    >> In short, the purpose of GQM is to measure something meaningful rather than the random assortment of metrics that most teams look at.

    Let me restate this slightly differently … The purpose of GQM is to force people to think about the reasons for, and the information they would like to get out of, metrics – before defining the metrics. Typically, what people do is pick a set of "standard" metrics and start collecting and interpreting data. Instead, the G and Q parts of the GQM model force an approach that eventually leads to metrics, rather than one where people begin with a set of metrics someone else used in their own context.

    >> The OIM concept that JB responded with is hard to disagree with.

    The beauty of this method is that it does not attempt to quantify things – rather, it recognizes human thinking and learning dynamics – hence it approaches the problem from a qualitative standpoint.

    >> For GQM to work, the goal needs to be SMART (Specific, Measurable, Attainable, Relevant, and Time Bound).

    Here is a problem with SMART kind of stuff … it is easier to define for simple tasks than for non-linear/complex ones. But when it comes to human learning/thinking aspects – SMART really fails. Some managers I know still claim that they can convert any non-linear/learning task to fit the SMART model – but one thing you must notice: when you try to make "non-measurable" things like testers' work measurable, you change the whole process to fit the measurement paradigm. That is a side effect of metrics – they change the way people work. We believe that unlike "catalysts" in a chemical reaction, these things do take part in the whole process and affect/change it in subtle ways.

    SMART-type goals have never worked for me on testing-related tasks – and where they appeared to work, they had serious side effects.

    One last thing … we must approach all software-related tasks in a different manner than typical engineering/management tasks. They involve human learning and social systems – hence they are very complex – OIM may be the method to go …

    Shrini

  2. Alan Page says:

    I prefer the shorter version of GQM, but I don’t have a problem with you rephrasing.

    I think the "beauty" of OIM is that it's just plain common sense. If you think there's no human element or social aspect to any part of developing software, I think you have a lot to learn about the industry.

    Your point about SMART is something you seem to say about everything – "If you use something the wrong way, it won't work". I 100% believe that and agree with it. If you are solving a testing problem using your "human mind", but make a mistake – perhaps it was because you were thinking about it the wrong way? In my experience, nearly every time someone had a difficult time applying something like SMART to a goal, it was because it was a poorly defined or misconceived goal in the first place.

    It may be interesting for you to read the work of James Reason – he wrote one of my favorite books on human learning and the reasons humans can make mistakes, called "Human Error".
