DRE

I know I'm overdue for a post when Bj starts lapping me on posts.

This week, I've been finishing up my presentations for STAR and PSQT (for those who haven't presented at a conference before, presenters are expected to have their presentation complete 2-3 months before the conference date). The two conference presentations are barely related, but the piece that brings them together is something that I have been meaning to post here for a while.

For anyone who has come across this post looking for information about Dr. Dre, all I can tell you is that I'm a big fan of the Ben Folds cover of a Dre song that I can't post the title to here (and certainly not lyrics!). For everyone else, this post is about Defect Removal Effectiveness.

I talk a lot about preventing bugs, and everyone has heard ramblings about moving quality upstream, but nobody really seems to know too much about what it means. I believe that most software defects can be prevented, and that the bugs that cannot be prevented should be detected as early as possible. DRE is simply a measure of how effectively defects (bugs) are removed at each stage of the product cycle.

Note: now that I've mentioned stages of the development cycle, some scrumbut is going to proclaim me as waterfall man. Please consider "stages" any serial activity in software development. Even agile purists want to have their note card user stories in place before writing code, so please apply this concept to whatever flavor of water-spiral-v-w-x-xp-xyz-iterative-prototype-rup-nm model you choose to use.

Hold on tight for a moment while I attempt to explain DRE without the use of any fancy formatting tricks such as tables, scrolling text or lightweight ajax controls.

Say, for example, that you find 10 bugs during requirements review (ambiguous statements, conflicting statements, etc.). Note yet again that you can find these same sorts of bugs when reviewing user stories for consistency. Say that throughout the rest of the product cycle you find another 15 bugs that relate to requirements. This means that during the initial stage / phase of the product cycle you found 10 of the eventual 25 bugs that were there at the time. Grade school math tells me that your effectiveness during this phase was 40% (10/25). Is that good? I don't know - I'll tell you in a minute.

Now, let's say that while the devs were coding they found another 5 of the requirements bugs, as well as 10 errors in their own coding (due to unit tests, buddy tests or sheer luck). Let's also assume that 15 additional bugs attributed to developer error during coding were found later, in the testing phase.

This means that during the coding phase there were 40 bugs latent in the product (the 15 remaining requirements defects, the 10 dev errors found during coding and the 15 dev errors still to be found in testing). 10 coding errors were found in this phase, along with 5 requirements errors. Grade school math (which again comes in handy) says our defect removal effectiveness was 37.5% (15/40).
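If you'd rather see that arithmetic as code, here's a minimal sketch (Python, purely for illustration - the phase names and counts are just the ones from the example above):

```python
# DRE for a phase: defects found in that phase / defects latent during that phase.
def dre(found_in_phase, latent_in_phase):
    """Defect Removal Effectiveness for a single phase, as a fraction."""
    return found_in_phase / latent_in_phase

# Requirements review: 10 found out of the 25 requirements bugs present at the time.
print(f"requirements review: {dre(10, 25):.1%}")   # 40.0%

# Coding: 10 coding errors + 5 requirements errors found, out of the 40 bugs
# latent during the phase (15 remaining requirements bugs + 25 coding bugs).
print(f"coding:              {dre(10 + 5, 40):.1%}")  # 37.5%
```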

The numbers in this example are pretty close, so we can't say whether we're significantly better at one phase vs. the other. However, if you track this sort of metric throughout the product cycle, it allows you to do two important things.

  1. Measuring DRE helps you target where improvement needs to be made. If, for example, developers are introducing a huge number of defects while writing code and finding very few of them, that points to the need for additional detection techniques during that phase.
  2. Measuring DRE helps validate any sort of improvement program. Say, for example, your development team is implementing unit tests. If they track the number of defects found, they can validate the effectiveness of the time invested in writing unit tests (see the sketch just after this list).
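To make point 2 concrete, here's a rough sketch of that kind of comparison - the release names and counts are completely made up, just to show the shape of the validation:

```python
# Compare coding-phase DRE before and after the team started writing unit tests.
# All numbers below are hypothetical, purely for illustration.
releases = {
    "v1 (before unit tests)": {"found_in_coding": 8,  "latent_in_coding": 40},
    "v2 (with unit tests)":   {"found_in_coding": 22, "latent_in_coding": 40},
}

for name, counts in releases.items():
    dre = counts["found_in_coding"] / counts["latent_in_coding"]
    print(f"{name}: coding-phase DRE = {dre:.0%}")
```

If the coding-phase DRE climbs after the unit tests go in, the time invested is paying off; if it doesn't budge, that's worth knowing too.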

The big caveat (if you haven't thought of it already) is that if your bug management system doesn't record which phase a defect was introduced in, you can't track DRE. Hardly anyone I know currently tracks this, but I get more converts to this sort of thinking nearly every day.
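For what it's worth, the bookkeeping doesn't have to be fancy. Here's a sketch of what the calculation could look like if each defect carried just two extra fields - the phase it was introduced in and the phase it was found in (the phase list and sample records below are assumptions, not pulled from any real tracker):

```python
# Each defect records (phase_introduced, phase_found); both are sample data.
PHASES = ["requirements", "design", "coding", "testing", "production"]

defects = [
    ("requirements", "requirements"),
    ("requirements", "coding"),
    ("coding", "coding"),
    ("coding", "testing"),
    ("coding", "production"),
]

for i, phase in enumerate(PHASES):
    # Latent in this phase: introduced in this phase or earlier,
    # and not already found in an earlier phase.
    latent = sum(1 for intro, found in defects
                 if PHASES.index(intro) <= i and PHASES.index(found) >= i)
    found_here = sum(1 for _, found in defects if found == phase)
    if latent:
        print(f"{phase}: DRE = {found_here / latent:.0%}")
```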

ed. 9:20 pm. formatting

**boy I hope my grade school math holds up.