Software metrics primarily useful as negative indicators?


Dear Readers –


 


I was thinking about metrics, and it occurred to me that most of the metrics we commonly use in the industry are really good as negative indicators of quality, efficiency, testing, etc., but lousy positive indicators.  That is, most software metrics are really good at telling you when something is wrong with your project, but they don’t give you much assurance that the project is actually on the right track.  (I’m sure someone else has already thought of this, but I figured I’d pass on my random thought nonetheless.)


 


Code coverage is a classic example.  Code coverage is best viewed as a measure of what you’re not testing.  That is, if I have 60% code coverage, then I know that 40% of my code is not being tested at all.  However, that 60% figure gives me no assurance that the testing of that 60% is actually any good, because there can of course be a vast number of different code paths through any piece of complex code.
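To make this concrete, here is a minimal, hypothetical sketch (the function and business rule are invented for illustration): two tests together execute every line of the function, so a line-coverage tool would report 100%, yet the member-plus-coupon path combination is never exercised.

```python
# Hypothetical example: every line below is covered by the two tests,
# yet one path *combination* goes completely untested.

def apply_discount(price, is_member, has_coupon):
    """Members get 10% off; coupon holders get a further 5% off."""
    if is_member:
        price *= 0.90
    if has_coupon:
        price *= 0.95
    return round(price, 2)

# These two tests together touch every line (100% line coverage)...
assert apply_discount(100, True, False) == 90.0
assert apply_discount(100, False, True) == 95.0

# ...but the member+coupon combination is never tested.  If the (made-up)
# business rule said discounts must not stack, the bug would hide here.
```

Branch or path coverage narrows this gap a little, but the same argument applies one level up: no coverage number can certify that the assertions themselves check the right things.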


 


Bug numbers are another example.  A high rate of reported bugs is clearly a sign that something is amiss.  However, a low rate of reported bugs doesn’t necessarily mean you’re on track.  What if your QA team is off writing documents and not testing the product?  In that case a low rate of reported bugs is simply reflecting a lack of testing activity.  Similarly, a very low rate of bug fixing is often (though not always) a sign that something is amiss.  However, a high rate of bug fixing is not comforting – it may simply reflect that devs are rushing work in order to make the #’s look good.


 


Code complexity is another example.  High code complexity is generally a bad sign for your code’s maintainability (with the exception of some specific code patterns like a parsing or message handling function).  However, low code complexity doesn’t mean that your code is necessarily “good” in any other respect.  It could be utter trash, broken up into small functions.
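As a rough illustration of how such a complexity number is even produced, here is a crude, hypothetical estimator (my own sketch, not how any particular tool works): it approximates cyclomatic complexity by counting branch points per function, ignoring boolean operators, comprehensions, and other subtleties that real tools handle.

```python
# Crude cyclomatic-complexity estimate: 1 + number of branch points.
# A deliberate over-simplification, just to illustrate what the metric counts.

import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.ExceptHandler)

def cyclomatic_estimate(source):
    """Return {function_name: approximate complexity} for a source string."""
    tree = ast.parse(source)
    results = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            branches = sum(isinstance(n, BRANCH_NODES)
                           for n in ast.walk(node))
            results[node.name] = branches + 1  # +1 for the entry path
    return results

code = """
def messy(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x += i
    return x

def trivial(x):
    return x + 1
"""
print(cyclomatic_estimate(code))  # {'messy': 4, 'trivial': 1}
```

Note what the number cannot tell you: splitting `messy` into four one-branch helpers would drive every score down to 1 or 2 while leaving the design exactly as convoluted as before.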


 


Conformance of work estimates to actual time spent is another one.  If the actual time spent on a task is way longer (or shorter) than the estimate, that tells you that your estimation process is not very accurate.  However, if someone’s actual time spent on a task always closely jibes with their estimates, that’s really not very comforting unless you’re absolutely, positively sure that no other aspect of their work has been compromised as a result (i.e., there’s no “distortion,” as Robert D. Austin calls it in his excellent book Measuring and Managing Performance in Organizations, which I highly recommend to anyone interested in metrics).


 


By these comments I don’t mean to bash these metrics – they are very useful as a way of identifying potential problems and fixing them.  But they have to be viewed as specialized indicators, not numbers to be mindlessly met.  Most software metrics make great gauges but lousy controls.


 


Over & out!


Chris


 

Comments (4)

  1. You are absolutely right. It is due to this fact that so many methodology presentations sound like hot air to me. CMMI level 5 will give you a productivity increase of up to 300% and a decrease in bugs down to about 30%, I was told recently. I did not bother to ask, but exactly how can you know? What metric could you possibly use that is of any real value?

    However, I have started to meter some of the projects that I work on just for the fun of it. I am currently using a simple tool I found called Source Monitor (http://www.campwoodsw.com/sm20.html), which seems nice but not quite as thorough as I would have liked. What it does do right, though, is make it easy to keep track over time, which I hope will allow me to see tendencies. Whether or not these will correlate with my general feelings about the projects, time will tell.

  2. It all depends on the maturity of your development organisation. When you only start using metrics at the end of your development, they indeed are negative indicators. However, when you measure throughout your whole process, you can use the measurements as leading indicators and identify problems earlier, with less impact on the schedule and better final quality. For example, with code reading you can perhaps detect some modules with a higher defect density. As a result you can adapt your effort for later testing and integration of these modules, or, when things are worse, you can even decide to redesign a module.

    When organisations start using metrics in their development, they typically go through an evolution: from project to product to process metrics; from post-release to in-process metrics; from status metrics to prediction & control metrics; from univariable to multivariable; and from implicit to explicit models.

  3. Teemu Vesala says:

    It’s better to know as early as possible that you are off track than to learn it when the customer tells you that you were.

    I see metrics as predictive tools. Hopefully, customers will one day start making measurable quality requirements, and on that day we will have to be able to predict quality throughout development. The earlier we have a measurement program going, the better we can answer the customer’s question of how much their required quality costs.

    (Btw… my blog is about software quality & metrics, but unfortunately it’s in Finnish.)