The Risk of Micro-optimizations - follow-up

In yesterday's post I gave an example where an attempted micro-optimization, combined with a common C++ idiom, causes a fairly subtle bug. For those who didn't look long enough to catch it, the flaw was in these two lines:

    BOOL _fRaining:1;
    _fRaining = (dwFlags & WEATHER_RAINING);

The coder's assumption was that he could save memory by using only a single bit for the boolean value, whereas BOOL is typedef'd as an "int" and takes a full 32 bits (on a 32-bit processor). As I pointed out in the post, this was an object that would have very few instances (in my real-world case it was tied to the top-level window of an application), so the most this could even theoretically save is a handful of bytes.
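
To put the "savings" in perspective, here is a minimal sketch (the struct names Packed and Unpacked are mine) showing that a lone single-bit BOOL bitfield still occupies an int-sized storage unit on a typical 32-bit compiler:

    #include <cstdio>

    typedef int BOOL;

    struct Packed   { BOOL _fRaining : 1; };  // the "optimized" single-bit field
    struct Unpacked { BOOL _fRaining; };      // a plain BOOL

    int main()
    {
        // On a typical 32-bit compiler both print 4: the bitfield's storage
        // unit is still a whole int, so a lone flag saves nothing.
        printf("sizeof(Packed)   = %u\n", (unsigned)sizeof(Packed));
        printf("sizeof(Unpacked) = %u\n", (unsigned)sizeof(Unpacked));
        return 0;
    }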

The bug is that "(dwFlags & WEATHER_RAINING)", a common way to check whether a flag is set, evaluates to 0x02 (00000010 in binary) when the flag is set. When this value is assigned to _fRaining, which is only a single bit wide, the value is truncated and only the least significant bit is kept: the 0. Therefore, _fRaining is set to FALSE even though the flag was set.
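
Here is a complete sketch you can compile to watch the truncation happen. The class name Forecast is hypothetical, and the flag value 0x02 is assumed from the result described above:

    #include <cstdio>

    typedef int BOOL;
    #define WEATHER_RAINING 0x02  // assumed value, per the 0x02 result above

    struct Forecast               // hypothetical name for the class in question
    {
        BOOL _fRaining : 1;
    };

    int main()
    {
        unsigned long dwFlags = WEATHER_RAINING;      // the raining flag is set
        Forecast f;
        f._fRaining = (dwFlags & WEATHER_RAINING);    // 0x02 truncated to its low bit
        printf("_fRaining = %d\n", (int)f._fRaining); // prints 0: the flag was lost
        return 0;
    }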

In the comments, Rick Schaut pointed out that using "(dwFlags & WEATHER_RAINING) != 0" is more correct. This is true and would have avoided the bug. Mike Dimmick also pointed out that the compiler is going to round up to a whole byte (at minimum) anyway, and even then it's unlikely that you'll see any performance benefit due to the way processors access memory.
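
Applied to the original line, Rick's suggestion looks like this; the comparison collapses 0x02 down to exactly 1 or 0, so the single-bit field stores the right value:

    _fRaining = (dwFlags & WEATHER_RAINING) != 0;   // stores exactly 1 or 0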

There are certainly scenarios where it is useful to carefully pack variables into a struct or class to save memory or bandwidth, but, like most micro-optimizations, this is more likely to cause bugs than to produce any tangible benefit in typical code.
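
For what it's worth, here is a hypothetical sketch of when packing does pay off: when there are millions of instances rather than a handful. The names and the ten-million-cell grid are mine, and the exact layout is implementation-defined, but typical compilers pack these four bits into one byte:

    #include <cstdio>

    // Hypothetical illustration: one byte of packed flags per cell of a large
    // simulation grid, instead of four 32-bit BOOLs per cell.
    struct CellFlags
    {
        unsigned char fRaining : 1;
        unsigned char fSnowing : 1;
        unsigned char fWindy   : 1;
        unsigned char fCloudy  : 1;  // four more bits remain spare
    };

    int main()
    {
        const unsigned long cells = 10000000;  // ten million cells, not a handful
        printf("packed:   %lu bytes\n", (unsigned long)(cells * sizeof(CellFlags)));
        printf("unpacked: %lu bytes\n", (unsigned long)(cells * 4 * sizeof(int)));
        return 0;
    }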