About the art of predicting defect density (and not abusing absolute defect counts)

Everyone who has worked on a software project knows the problem: code gets written, code gets tested, and before you know it, bug reports start hitting the bug tracking system. Still, life is good: bugs get fixed and ultimately the software is ready to be shipped (if this were a commercial, I'd probably mention that this is a "dramatization" at this point). The interesting part is what happens after the software has shipped and the project team starts work on the next version.

When planning the new version (and I'll just keep on dramatizing things a little bit) and trying to figure out the cost of bug fixing, people are still sometimes tempted to say: "Hey, we found n bugs in the last version. So we'll probably have about n bugs again this time." Or will they? The first problem here is that differences in scope between the two versions are completely ignored. Does the new version contain as many new features as the last one? Will existing features be changed or extended? Are the new features as complex as last time?

These questions are related to the defect density of a software system, which is usually defined as:

Defect Density = Number of Defects / Thousand Lines of Code (KLOC)

Now we are looking at the number of defects in relation to the size of the codebase, which may give us an idea of what to expect for the next version. We still won't know how much code will get written, but we have some guidance on how many defects per KLOC to expect. Even this should be taken with a grain of salt, though, as several things can affect the defect density, like engineering practices (e.g. investing more in defect prevention) or (again) the new code being much more or less complex than before.
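To make the naive extrapolation concrete, here is a minimal sketch of the arithmetic; the numbers are made up and only serve to illustrate the idea of scaling last version's density by the amount of new code:

```python
def defect_density(defects: int, loc: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / (loc / 1000.0)

# Hypothetical numbers, not real project data.
v1_density = defect_density(defects=480, loc=320_000)   # 1.5 defects/KLOC

# Estimated size of the new/changed code for the next version.
v2_expected_kloc = 90

# Naive expectation: same density, scaled by the new code size
# (and all the caveats above still apply).
expected_defects = v1_density * v2_expected_kloc

print(f"v1 density: {v1_density:.2f} defects/KLOC")
print(f"naive expectation for v2: ~{expected_defects:.0f} defects")
```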

With that said, it would be very useful if one could actually predict the defect density of a system under development and, in the case of a large system, which parts of it are likely to be broken. To what extent this is actually possible is a frequently and controversially discussed topic, and there are certainly approaches to this problem which are more suitable than others. However, there is one in particular which motivated me to write this post and which, in my opinion, doesn't get as much attention as it deserves:

It is a model for predicting defect density based on the changes made to the codebase of a system. It incorporates the actual code changes (LOC added, changed and removed), the amount of time spent on the changes and the source files affected. The paper Use of Relative Code Churn Measures to Predict System Defect Density describes the model in detail, using real-world data from the Windows Server 2003 codebase. It was written by Nachiappan Nagappan and Thomas Ball, who now both work at MSR.
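To give a rough idea of what churn-based measures look like, here is a small sketch. It is my own simplification for illustration, not the authors' implementation: it computes a few relative (normalized) churn measures per file from the kind of data version control can provide, which could then feed into a statistical model of defect density. The field names and the exact set of measures are assumptions.

```python
from dataclasses import dataclass

@dataclass
class FileChurn:
    """Per-file churn data as it might be extracted from version control (illustrative)."""
    path: str
    total_loc: int      # current size of the file
    added_loc: int      # lines added across all changes
    changed_loc: int    # lines modified
    deleted_loc: int    # lines removed
    times_changed: int  # number of check-ins touching the file

def relative_churn_measures(f: FileChurn) -> dict:
    """A few relative churn measures in the spirit of the paper;
    the specific set and names here are a simplification."""
    churned = f.added_loc + f.changed_loc
    return {
        # churned code normalized by file size
        "churned_per_loc": churned / max(f.total_loc, 1),
        # deleted code normalized by file size
        "deleted_per_loc": f.deleted_loc / max(f.total_loc, 1),
        # how much churn each check-in carried on average
        "churn_per_checkin": churned / max(f.times_changed, 1),
    }

# Hypothetical example file
f = FileChurn("src/server/io.c", total_loc=4200, added_loc=350,
              changed_loc=120, deleted_loc=80, times_changed=14)
print(relative_churn_measures(f))
```

The key finding of the paper is that such relative (normalized) churn measures are much better predictors of defect density than absolute churn numbers, which nicely echoes the point about not abusing absolute defect counts.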

