More on Playing With Fire

As a continuation of my last blog entry, here are some more thoughts on code coverage measurements.

· Never roll up code coverage data without interpreting it. Without some text about what your numbers mean, people will draw different conclusions, which can then force you to provide more detail than is truly necessary. So always include key takeaways or explanations with your data. And remember that in most cases the numbers are relative, not absolute, so normalize them.

· For example: developers start writing unit tests and find their code coverage is up in the 90% range. Wow! How is that possible? My testers have been writing tests for months and are only at 45% CC. After some investigation, we found that we were reporting coverage numbers against different sets of files. The devs had 90% CC for one file, while the testers were at 45% across ten files. Once we normalized to the same set of files, the developers’ unit tests only had 2% coverage. (The first sketch after this list shows the arithmetic.)

· Hey, my developer just checked in a ton of code and now my code coverage number is down; what do I do? Well, you can't stop your developers from creating features for the product, and you can't stop them from creating new files in the process. So instead, make sure to explain why the numbers are down and what your plan is to fix it. Obviously, you'll need to run more tests on the new features. A decrease in code coverage numbers always deserves an explanation. (The second sketch after this list shows why a drop is expected.)

· Code coverage does not prove quality; it only shows how thoroughly the test team is exercising the code. Used correctly, it can help drive quality and efficiency to some extent.

· Some lines of code will get hit many times during a test pass, but hit counts aren't represented in code coverage; a line counts as covered whether it runs once or a thousand times. That doesn't mean your testers should only hit a line of code once. Your team needs to run through all the customer scenarios during a full test pass, and if the same code gets hit many times in the process, then so be it. (The third sketch after this list illustrates the point.)

· I've heard some great ideas around using code coverage to drive focused testing, especially for hot fixes. If you can map each test case to the specific lines of code it hits when run, then targeted, efficient testing becomes possible when your product needs a hot fix (a bug fix after product release): you'll know exactly which lines the developer changed for the fix, so you only need to run the test cases that map to those lines. In general, it sounds like a great idea, but I've never seen it fully implemented and proven. (The last sketch after this list outlines the core lookup.)

· If you have some old code in your product and those files generate low coverage numbers, should your team spend time writing more test cases to increase coverage on that old code? There are significant tradeoffs here. If customers have been using your product for some time and no critical issues are being reported in that area, what exactly will increasing the code coverage there solve? What's the benefit? Your testers could spend a lot of time increasing a number without increasing product quality at all.
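
To make the normalization point concrete, here's a minimal sketch of the arithmetic from the unit-test example. The file names and line counts are made up; the point is that a coverage percentage is only comparable when both sides divide by the same denominator, i.e., the same set of files.

```python
# Hypothetical per-file data: (covered lines, total executable lines).
dev_view = {"parser.cpp": (90, 100)}  # 90% coverage of a single file

# The same file plus nine larger files the unit tests never touch.
full_view = {"parser.cpp": (90, 100)}
full_view.update({f"module{i}.cpp": (0, 400) for i in range(1, 10)})

def coverage_pct(files):
    covered = sum(c for c, _ in files.values())
    total = sum(t for _, t in files.values())
    return 100.0 * covered / total

print(f"dev view, 1 file:     {coverage_pct(dev_view):.0f}%")   # 90%
print(f"normalized, 10 files: {coverage_pct(full_view):.1f}%")  # ~2.4%
```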
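
The drop after a big check-in is the same denominator effect in reverse. A quick sketch with made-up numbers:

```python
# Hypothetical totals before a large feature check-in.
covered, total = 4_500, 10_000
print(f"before check-in: {100 * covered / total:.1f}%")  # 45.0%

# 3,000 new, as-yet-untested lines land; covered lines are unchanged,
# so the same tests now cover a smaller fraction of the product.
total += 3_000
print(f"after check-in:  {100 * covered / total:.1f}%")  # ~34.6%
```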
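
On the hit-count point, basic line coverage collapses any execution count down to covered-or-not. A small sketch, using a hypothetical trace of executed line numbers:

```python
# Hypothetical trace: line 3 executes repeatedly across scenarios,
# line 7 never executes at all.
trace = [1, 2, 3, 3, 3, 4, 1, 2, 3, 5, 6]
executable_lines = {1, 2, 3, 4, 5, 6, 7}

covered = set(trace)  # repeated hits add nothing to the metric
print(f"covered lines: {sorted(covered)}")
print(f"coverage: {100 * len(covered) / len(executable_lines):.0f}%")  # 86%
```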
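
Finally, here's a minimal sketch of the hot-fix idea: a reverse index from covered lines to the test cases that hit them. Everything here is hypothetical (in practice the map would come from per-test coverage runs), but it shows the core lookup.

```python
from collections import defaultdict

# Hypothetical per-test coverage data: test name -> (file, line) pairs it executes.
test_coverage = {
    "test_login":    {("auth.cpp", 10), ("auth.cpp", 11), ("session.cpp", 5)},
    "test_logout":   {("session.cpp", 5), ("session.cpp", 6)},
    "test_checkout": {("cart.cpp", 42), ("cart.cpp", 43)},
}

# Invert it: (file, line) -> set of tests that cover that line.
line_to_tests = defaultdict(set)
for test, lines in test_coverage.items():
    for loc in lines:
        line_to_tests[loc].add(test)

# Lines the developer changed for the hot fix.
changed = {("session.cpp", 5), ("session.cpp", 6)}

# Only the tests that touch the changed lines need to run.
to_run = set().union(*(line_to_tests[loc] for loc in changed))
print(sorted(to_run))  # ['test_login', 'test_logout']
```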

Code coverage is a powerful measurement that can drive great improvements in your testing when used correctly. Make sure you understand what the numbers mean and check them twice. Accurate code coverage numbers are sometimes more elusive than you'd expect.