What’s wrong with this code: a real-world example

I was working on a new feature earlier today, and I discovered that while the code worked just fine when run as a 32-bit app, it failed miserably when run as a 64-bit app.

If I were writing code that used polymorphic types (like DWORD_PTR) or something that depended on platform-specific differences, this wouldn’t be a surprise, but I wasn’t.


Here’s the code in question:

        DWORD cchString;
        DWORD cbValue;
        HRESULT hr = CorSigUncompressData(pbBlob, cbBlob, &cchString, &cbValue);
        if (SUCCEEDED(hr))
        {
            cbBlob -= cbValue;
            pbBlob += cbValue;

            if (cbBlob >= cchString)
            {
                //  Convert to unicode.
                wchar_t rgchTypeName[c_cchTypeNameMax];
                DWORD cchString = MultiByteToWideChar(CP_UTF8, 0, reinterpret_cast<LPCSTR>(pbBlob),
                                                      static_cast<int>(cchString),
                                                      rgchTypeName, ARRAYSIZE(rgchTypeName));
                if (cchString != 0 && cchString < ARRAYSIZE(rgchTypeName))
                {
                    // Ensure that the string is null terminated.
                    rgchTypeName[cchString] = L'\0';
                }
                cbBlob -= cchString;
                pbBlob += cchString;
            }
        }

This code parses an ECMA-335 SerString. I’ve removed a bunch of error checking and other code to make the code simpler (and the bug more obvious).
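For context on what CorSigUncompressData is doing, the length prefix on a SerString uses ECMA-335’s compressed unsigned integer scheme (partition II.23.2): one byte for values up to 0x7F, two bytes when the leading bits are 10, four when they are 110. Here’s a standalone sketch of that decoding (illustrative only; the real API lives in cor.h and has a slightly different signature):

```cpp
#include <cstddef>
#include <cstdint>

// Sketch of ECMA-335 (II.23.2) compressed unsigned integer decoding, the
// role CorSigUncompressData plays in the snippet above. Returns false on
// truncated or invalid input; on success, *pValue is the decoded value and
// *pcbRead is the number of bytes consumed.
bool UncompressData(const uint8_t* pb, size_t cb, uint32_t* pValue, uint32_t* pcbRead)
{
    if (cb >= 1 && (pb[0] & 0x80) == 0) {       // 0xxxxxxx: 1 byte, values 0..0x7F
        *pValue = pb[0];
        *pcbRead = 1;
        return true;
    }
    if (cb >= 2 && (pb[0] & 0xC0) == 0x80) {    // 10xxxxxx xxxxxxxx: 2 bytes
        *pValue = ((pb[0] & 0x3Fu) << 8) | pb[1];
        *pcbRead = 2;
        return true;
    }
    if (cb >= 4 && (pb[0] & 0xE0) == 0xC0) {    // 110xxxxx + 3 more bytes: 4 bytes
        *pValue = ((pb[0] & 0x1Fu) << 24) | (pb[1] << 16) | (pb[2] << 8) | pb[3];
        *pcbRead = 4;
        return true;
    }
    return false;                               // reserved encoding, or blob too short
}
```

The UTF-8 bytes of the string immediately follow the compressed length, which is why the code advances pbBlob by cbValue before converting.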

When I ran the code compiled for 32 bits, the rgchTypeName buffer contained the expected string contents. However, when I ran the 64-bit binary, I only had the first 6 characters of the string. Needless to say, that messed up the subsequent parsing of the blob containing the string.

Stepping through the code in the debugger, I saw that the cchString variable had the correct length at the point of the call to MultiByteToWideChar; however, when I stepped into the call, the value had changed from the expected length to 6!

After a couple of minutes staring at the code, I realized what was happening: through a cut-and-paste error, I had accidentally double-declared the cchString local variable. That meant that the cchString variable passed to the MultiByteToWideChar call was actually an uninitialized local variable, so it’s not surprising that the string length was bogus.

So why did this fail on 32bit code but not on 64bit?  Well, it turns out that the 32-bit compiler’s code generation had re-used the stack storage for the inner and outer cchString variables (this is safe to do because the outer cchString variable was not used anywhere outside this snippet), so when the processor pushed the uninitialized cchString variable, it happened to push the right value. Since the 64-bit compiler allocates local variables differently, the uninitialized variable was immediately obvious.
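The fix, of course, is to drop the stray DWORD so the inner line becomes an assignment to the outer variable rather than a new declaration. As a minimal, well-defined illustration of why the shadow is the culprit (both variables are initialized here, so there is no uninitialized read; the variable names are just placeholders):

```cpp
// The inner declaration creates a brand-new variable that hides the outer
// one; writes to it never reach the outer variable. This is the same
// shadowing pattern as the buggy snippet, minus the uninitialized read.
int OuterAfterShadowing()
{
    int cchString = 42;        // outer variable
    {
        int cchString = 7;     // shadows the outer one: a distinct object
        (void)cchString;       // silence "unused variable" warnings
    }
    return cchString;          // still 42; the inner write went to the shadow
}
```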

Comments (15)

  1. Jon says:

    DWORD cchString = MultiByteToWideChar( … cchString …);

    Since you use cchString in the call,  you will have received

    blah.cpp: warning C4700: uninitialized local variable 'cchString' used

  2. @Jon: I expected to see a warning as well. My guess is that something (maybe the cast?) caused the uninitialized local variable error to be suppressed. Or it's possible that my default build environment suppresses the warning.

  3. Jason says:

    > So why did this fail on 32bit code but not on 64bit?

    Don't you mean fail on 64-bit and succeed on 32-bit?

  4. JamesNT says:

    Larry!  Great to see you blogging again!!

  5. cheong00 says:

    > Larry!  Great to see you blogging again!!

    Same here. It's been a long time since we've seen you blog here.

  6. Jonathan says:

    In C#, you'd also get:

    error CS0136: A local variable named 'cchString' cannot be declared in this scope because it would give a different meaning to 'cchString', which is already used in a 'parent or current' scope to denote something else

    Of course, in C# you'd seldom need to manually calculate storage…

  7. JM says:

    Aside from bitness, another way that would hopefully expose this is building and testing the code as a debug build first, with optimizations turned off. That should encourage the compiler to not do storage folding. The 32-bit vs. 64-bit is really just you getting lucky (or unlucky, depending on your perspective) — any change in compiler setting or minor tweak to the code could have exposed the bug.

    And I agree with Jon, and I would put effort into tracking down why you didn't get a warning. "Uninitialized local variable" is *not* a warning you want suppressed for any reason other than locally, as a workaround for a compiler limitation where it can't see the variable really is initialized (and you can't add explicit initialization yourself, for some reason). If people protest a change in build settings, tell them ignoring these warnings is a potential security risk. Because it is.

    If it's a compiler bug, the static_cast<> is not the reason, if my little test program is any indication (apologies if the blog formatting turns this into mush):

     unsigned int foo(unsigned int x) {
       return x;
     }

     int main() {
       unsigned int x = 3.0;
       {
         unsigned int x = foo(static_cast<int>(x));
       }
     }
    This produces a warning when compiled with CL 16.0.

  9. @JM: Yes, but this was built with the NT build system – our debug builds are built with full optimization enabled. This ensures that our debug and release builds are as close to identical as possible – the only difference between the two is the asserts in the debug build.

    And yes, I'm working with the compiler folks to figure out why a warning wasn't generated – other uninitialized-local-variable warnings are being generated.

  9. Karellen says:

    This is also an argument for turning on warnings for shadowed variables/functions. That's caught a number of bugs in some of my code before.

  10. Ramesh Dharan says:

    The GCC compiler has -Wshadow for catching this kind of thing. This Stack Overflow question has a couple of interesting tidbits on the topic:


    "Check out warnings C6244 and C6246

    But you will need to enable automatic code analysis to get them."
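    For what it's worth, a tiny repro is enough to draw those warnings: GCC and Clang flag the inner declaration with -Wshadow, and MSVC's code analysis reports C6246 for the same pattern (flag and warning numbers as documented; the exact message text varies by compiler version):

```cpp
// Compile with: g++ -Wshadow -c shadow.cpp   (or: cl /analyze shadow.cpp)
int Shadow()
{
    int len = 10;
    {
        int len = 20;   // the shadowing declaration the warnings point at
        (void)len;
    }
    return len;         // returns 10: the outer variable was never touched
}
```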

  11. Jon says:

    "our debug builds are built with full optimization enabled"


    Doesn't this make it harder to debug crashes?

  12. Richard says:

    Do you not compile with /analyze? (implied by Ramesh's post)

  13. @Jon: Yes, it does make debugging crashes harder. But it helps make the product more reliable.

    @Richard: We have a mechanism other than /analyze – it runs the same tool chain, but slightly differently.

  14. alegr1 says:

    Prefast would find that a declaration is hidden by a nested declaration of the same name.

  15. alegr1 says:

    And, by the way, using reinterpret_cast is a bad habit. It's even worse than using a C-style cast indiscriminately.
