Dangers of using Visual Studio 2008 Team System Code Coverage Tool for Native C++

So now you know how to get coverage reports for native C++ using Visual Studio 2008 Team System (if not, read this). There are a few things you need to know before you get too excited. First of all, the only metrics you get are line and block coverage. A block is basically a statement, and each line typically consists of one or more blocks. Unless you reach 100% coverage, I think both of these metrics are pretty useless for measuring quality. For example, consider a function consisting of ten lines of code, where an IF-statement checks for an error and throws an exception if the error occurs. If the error never occurs during the test run, you still get 90% line coverage, since the other nine lines are executed. I think this is pretty common in production code: most of the code handles the common case, and fewer lines handle errors. So you get pretty high line coverage even if you never test any of the error cases.
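The ten-line scenario can be sketched like this (the function name and error condition are made up for illustration):

```cpp
#include <stdexcept>

// Hypothetical ten-line function: nine lines execute on the happy path,
// so a test suite that never triggers the error still reports 90% line
// coverage while the throw statement remains completely untested.
int ScaleValue(int raw)
{
    if (raw < 0)
        throw std::invalid_argument("raw must be non-negative"); // never hit
    int result = raw;
    result *= 2;
    result += 1;
    return result;
}
```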

Block coverage is even worse. For example consider the following line:

 SimpleClass* o = new SimpleClass();

That line produces two blocks, of which only one is covered. And there is no reasonable way to test the uncovered block, since it probably has to do with the case when the process runs out of memory.

Identifying functions that are never called at all is often considered an important part of a code coverage report. Here we have another problem with the Visual Studio tool: functions never referenced by the code are excluded from the report completely (I suspect this is because the linker removes all unreferenced symbols as part of its optimization at link time). This means the following class will report 100% coverage if it is instantiated and only GetA is called.

 class SimpleClass
{
public:
    SimpleClass(int a, int b)
        : m_a(a), m_b(b)
    { }

    int GetA() { return m_a; }
    int GetB() { return m_b; }

private:
    int m_a;
    int m_b;
};
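To see the problem concretely, here is a hypothetical test run against that class (reproduced below so the snippet stands alone). Only the constructor and GetA are ever exercised, yet the report comes back clean:

```cpp
#include <cassert>

class SimpleClass
{
public:
    SimpleClass(int a, int b) : m_a(a), m_b(b) { }

    int GetA() { return m_a; }
    int GetB() { return m_b; } // never referenced anywhere, so the linker
                               // strips it and it vanishes from the report

private:
    int m_a;
    int m_b;
};

// With only these two lines of test code, the VS2008 coverage report
// shows 100% for SimpleClass, because the untested GetB no longer exists
// in the linked binary at all:
//
//   SimpleClass s(1, 2);
//   s.GetA();
```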

So with all these potential problems, there is another tool I'd recommend you consider. It's called BullsEye. It is a more "purist" tool, so there is no way to get block or line coverage at all, basically because those metrics are poor. Instead you get decision and condition/decision coverage. Decision coverage checks that each conditional evaluates to both true and false, while condition/decision coverage additionally requires each part of a boolean expression to evaluate to both true and false. Consider the following line:

 if(a || b)

There are two different decisions ("a || b" as a whole is either true or false) but four different conditions (both "a" and "b" must each evaluate to true and to false). BullsEye also adds its instrumentation at compile time, so the GetB method in the example above is not lost: it appears in the report as an uncovered function even though it is never referenced anywhere in the code. In the initial example (ten lines with 90% line coverage) we would get 50% decision coverage, which is a much better indicator of quality.
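A small sketch of what each metric demands from your tests (the Eval wrapper is invented here purely to make the decision from the text callable):

```cpp
// Hypothetical wrapper around the "a || b" decision discussed above.
bool Eval(bool a, bool b)
{
    if (a || b)
        return true;
    return false;
}

// Decision coverage only needs the whole expression to go both ways:
//   Eval(true, false)  -> decision is true
//   Eval(false, false) -> decision is false
//
// Condition/decision coverage additionally needs each operand to take
// both values. Note that short-circuit evaluation skips "b" whenever
// "a" is true, so a case like:
//   Eval(false, true)
// is required to actually evaluate "b" as true.
```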

And on using code coverage as a quality metric... I must insist you read one of my previous posts if you haven't done so already.