Instrumenting your product's source code to measure how much of it was covered during testing is a really, really smart measurement to get. If you aren't at least measuring this number, you should be. Visual Studio provides features that continue to make this easier. What does this have to do with fire? Well, code coverage measurements are to managers what fire was to cavemen. Discovering how powerful these measurements can be is enlightening! It's totally awesome! You can do so much with it! Grill up some mammoth, roast marshmallows... wait, are we talking about fire or code coverage? But just like fire, you need to understand what you have with code coverage measurements, or you will get burned. So play with fire, just don't start a forest fire: don't draw conclusions from data you don't fully understand. Here are some tips to keep it all real when dealing with code coverage numbers.
Know what you are measuring. You instrument a file, you run your tests, your code coverage is high, you think you are great. Really? Do you know what you are measuring?
How large is your file? Small files can potentially have high coverage numbers.
Are you measuring the right file? I've seen data mistakenly pulled from the wrong file, which showed coverage numbers more inflated than what was actually occurring.
What is the code and feature you are measuring? C++/C# code - easy to instrument. JScript - a bit harder to measure, but possible. XSLT - is that even possible? SQL - possible with the right tools. ETL - we're still figuring that one out ourselves. What your features do, and therefore what languages they are written in (and what technologies they use), has a huge impact on what your coverage numbers will be.
Maybe your numbers were unexpectedly low. Can all the code in the file be instrumented? Do you share the file with other teams and are you really expected to cover their code when running your tests? This leads me to my next point.
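To see why "know what you are measuring" matters, here's a minimal sketch (with made-up numbers, not from any real project) showing how file size alone skews the percentage: the same five uncovered lines look dramatic in a small file and nearly invisible in a large one.

```python
def coverage_pct(covered, coverable):
    """Percent of coverable lines hit during a test run."""
    return 100.0 * covered / coverable

# Hypothetical files: each has exactly 5 uncovered lines.
small_file = coverage_pct(covered=15, coverable=20)    # 5 missed out of 20
large_file = coverage_pct(covered=495, coverable=500)  # 5 missed out of 500

print(f"small file: {small_file:.1f}%")  # 75.0% - looks alarming
print(f"large file: {large_file:.1f}%")  # 99.0% - looks great
```

Same absolute gap in testing, wildly different percentages - which is why a raw number without file context tells you very little.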
Be very careful about setting one code coverage target number.
Your code coverage numbers on each file could vary a lot for many good reasons. Trying to hit one universal code coverage number will drive you crazy. Predict code coverage numbers per file, and base them on how many lines (blocks, arcs, whatever your base unit is) of code can actually be covered.
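A sketch of that per-file prediction idea, under assumed inputs (the block counts and the notion of "uninstrumentable" and "shared" blocks are hypothetical illustrations, not output from any real tool):

```python
def realistic_ceiling(total_blocks, uninstrumentable, shared_blocks):
    """Predict a realistic per-file coverage ceiling by excluding blocks
    that can't be instrumented and blocks owned by other teams."""
    coverable = total_blocks - uninstrumentable - shared_blocks
    return 100.0 * coverable / total_blocks

# Hypothetical file: 400 blocks total, 40 can't be instrumented,
# 100 belong to another team's shared code.
ceiling = realistic_ceiling(400, 40, 100)
print(f"realistic per-file target: {ceiling:.0f}%")  # 65%, not 100%
```

Holding this file to a universal 70% target would mean demanding more than its real ceiling - exactly the trap the one-number approach sets.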
Years ago, on some of the large teams I've been on, there was a code coverage target for everyone. Yep, all those thousands of people testing thousands of files, and we all had to have them roll up into one number: 70%. The logic seemed reasonable. With that many files, they should average out easily. The problem was that in feature teams that weren't hitting 70%, the testers were doing crazy things with their test cases to drive the numbers up. Instead of inventing test cases and scenarios based on how customers use the product, testers were creating unrealistic scenarios in order to hit some obscure line of code that may have been in the product for years and never really used. Good intentions sometimes drive the wrong behavior.
Take action based on the extremes
Honestly, as a manager, 70% CC doesn't tell me much more than that my test team is doing well at covering the dev code. The conclusion is to keep doing what they are doing. I don't take action on CC numbers unless they are extremely low or high.
In some places, like testing APIs, I expect a higher number, up into the 90% range.
When CC numbers seem too high, the action plan is investigation and root cause analysis to make sure the data is accurate.
Only when the code coverage numbers are low do I really believe my team needs to take action and focus differently on the work than what they have been doing.
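The "act on the extremes" policy boils down to a simple filter. A sketch, using hypothetical file names, numbers, and thresholds (the cutoffs are a judgment call for your team, not a standard):

```python
# Hypothetical per-file coverage numbers (percent of blocks covered).
coverage = {
    "parser.cs": 72,
    "api_client.cs": 94,
    "legacy_io.cs": 23,
    "utils.cs": 99,
}

LOW, HIGH = 40, 95  # assumed thresholds; tune per team and per file type

# Extremely low: the team should refocus its testing effort.
needs_focus = [f for f, pct in coverage.items() if pct < LOW]
# Suspiciously high: investigate and root-cause to verify the data.
needs_audit = [f for f, pct in coverage.items() if pct > HIGH]

print("refocus testing on:", needs_focus)        # ['legacy_io.cs']
print("verify data accuracy for:", needs_audit)  # ['utils.cs']
```

Everything in the middle band - the 70-ish percent files - gets left alone, which is the whole point: no action on numbers that aren't extreme.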
Those are a few of my thoughts. Code coverage is an interesting topic. I'll let everyone absorb my info above for a while before I continue with more of my thoughts and tips on this topic. It's cold outside so I think I'll go light a fire now.