Security: Don’t forget to initialize the stuff you don’t care about

Lost in the excitement over privilege escalation vulnerabilities is the simple information disclosure that comes from missing garbage initialization. Everybody should by now be familiar with the SecureZeroMemory function, which ensures that buffers that used to contain sensitive information are erased, but you also have to zero out buffers before you write their contents to another location. Consider, for example, the following binary file format:

    struct FILEHEADER {
        DWORD dwMagic;
        DWORD dwVersion;
        WCHAR wszComment[256];
        DWORD cbData;
        // followed by cbData bytes of data
    };

Code that writes out one of these files might go like this:

BOOL SaveToFile(HANDLE hFile, LPCWSTR pszComment,
                DWORD cbData, const BYTE *pbData)
{
  FILEHEADER fh;
  DWORD cbWritten;
  fh.dwMagic = FILE_MAGICNUMBER;
  fh.dwVersion = FILE_VERSION;
  fh.cbData = cbData;
  return SUCCEEDED(StringCchCopyW(
             fh.wszComment, 256, pszComment)) &&
         WriteFile(hFile, &fh, sizeof(fh), &cbWritten, NULL) &&
         cbWritten == sizeof(fh) &&
         WriteFile(hFile, pbData, cbData, &cbWritten, NULL) &&
         cbWritten == cbData;
}

Do you see the security bug?

If the comment is shorter than 255 characters, then the bytes after the terminating null consist of uninitialized stack garbage. That stack garbage might contain interesting information that you didn't intend to leak into the file. Sure, it won't contain information that you already recognized as highly sensitive, such as passwords, but it still might contain information that, while less sensitive, would still be valuable to somebody looking for it. For example, depending on where the compiler decided to put local variables, you might leak an account name into those unused bytes.

I'm told that one company's networking software from a long time ago had a bug just like this one. They used a very advanced "change password" algorithm, the details of which are not important. The design was that only heavily encrypted data was transmitted on the wire. That way, somebody who sat on the network and captured packets wouldn't see anything of value. Except that they had a bug in their client: When it sent the encrypted password to the server, it forgot to null out the unused bytes in the "change password" packet. And in those unused bytes were, you guessed it, a copy of the password in plain text.

Comments (16)
  1. Arthur Strutzenberg says:

    What is interesting is that this is very similar to another forensic analysis technique taught to me by one of my instructors: analyzing slack space… not a huge chance that you'll find something, but the chance is still there!!

  2. Rhomboid says:

    I remember waaaay back in the day (maybe 1993 or so) there was a HUGE controversy around AOL because someone looked in the binary data files that it created and found all kinds of personal information. The implication to the naive was of course that they were spying on their users, snooping about on their disks for interesting data and then transmitting it back to the mothership. (This was of course when AOL was a "walled garden" that offered their own proprietary BBS+content, not standard TCP/IP internet access.) But of course no such thing was happening; they were simply dumping data structures to this cache file on disk without clearing the uninitialized bits, and so if you looked hard enough you were bound to find something interesting in there. That didn't stop the BBS rumor mill from going to town, though.

  3. J says:

    Similar to the AOL issue above, this issue bit Steam, the content-distribution program for Valve Software games like Half-Life 2.  People didn’t trust the program, so they looked into the gigantic cache file that it reserves on the hard drive to see what data Valve might be collecting.  Well, when the program reserved space for the gigantic cache, it didn’t initialize the data, so some people freaked when they saw stuff like their illegal .mp3 directories in the cache.

    Some operating systems are designed around issues like this, and zero data after you delete things.  I think VMS did that if I remember correctly.

  4. A says:

    Some operating systems are designed around issues like this, and zero data after you delete things.

    NT always zeroes newly allocated space. It’d be quite a security hole if it didn’t.

  5. Fred says:

    A long time ago, on an OS/360/370/…, this caused us a configuration management problem.

    Due to compressed schedule, we shipped a preliminary version of the software to an associate contractor prior to validation, then a QA-blessed version afterward. We wrote up the spec saying that work done with the preliminary version would be acceptable because the two tapes would be identical.

    They had to be identical, didn’t they–they were both written from the same disk file. So we didn’t check it.

    The receiver’s QA people did check it and they weren’t identical. It seems that IBM’s tape copy utility copied the in-memory data structure which contained a session-specific value (a pointer, IIRC). Our faces were a bit red and we had to revise the configuration document to say "identical except for these three bytes" and cite the relevant documentation.

  6. Fred says:

    One case when we did it right was a disk format specification. The Quality organization was responsible for receiving inspection. So they came to Engineering (me) to get the specification against which they would inspect.

    One of the things I had them do was to assure that all "unused," "spare," and "reserved" fields were zero. (Yes, the format [written before I was in charge of it] had all three kinds.)

    The QA analyst came back and asked, "If they’re unused, why do we care?"

    My response: "Someday we may need these fields. Zero will mean whatever we do now, non-zero will be the new feature. And I don’t want any disks in the inventory pretending to have the new features."

    "Oh, you’re thinking further ahead than I was."

  7. AC says:

    If I remember correctly, old versions of Office products had exactly the problem described in the article, and the problem was even operating-system dependent (which demonstrates how Office was *not* “just another Windows application”): NT-based systems initialized the OLE slack data, and 9x didn't. If somebody has better facts, please correct me.

    [You described it yourself – the two OSs initialized memory differently. How is this proof that Office is not “just another Windows application”? This behavior would affect any application. -Raymond]
  8. Gabe says:

    The issue with Office I’m pretty sure has nothing to do with Office or the OS. I seem to remember the issue is actually with disk allocation. The OLE storage layer that Office (and any OLE program) uses actually creates filesystems within files, so it allocates files in blocks (say, 4k at a time).

    Win9x would look at the allocation and just mark the disk block as used, leaving whatever bits were already on the disk. WinNT would either zero out the blocks when they were allocated, or just return zero and not actually allocate the blocks until they were written to disk.

    Since the problem was actually not in Office, the memory manager, or even the kernel, this was a nontrivial problem to fix. It would require either changing the filesystem to zero allocated blocks (which is a performance hit when you allocate a large chunk and then write it all), or changing the OLE layer so that it zeroed out all disk blocks immediately upon allocation. I think they ended up changing OLE.

  9. Norman Diamond says:

    Monday, July 03, 2006 8:22 PM by Fred

    > One case when we did it right was a disk format specification.
    > One of the things I had them do was to assure that all "unused,"
    > "spare," and "reserved" fields were zero.
    Hmm, I would have interpreted "unused" as a binding promise.  Though security would be a reason to zero out (or randomize) such fields, there wouldn’t be any other reason because the system will never use those fields.  "Spare" is troublesome.  The way that word sounds, in an executable image the spare bytes might be used for patches.  On a disk the spare blocks might be used for relocations of bad blocks.

    Meanwhile, my understanding is that IBM once specified a format for floppy disks, including specifying how many padding bytes would exist in between blocks that were used, but not specifying the contents of the padding bytes.  Not zero, not unused, not spare, not reserved, not specified.  One system wrote all 0xFF values into the padding bytes.  Another system couldn’t read the floppies because the padding bytes weren’t all 0x00.  A third system had to be found which could read the original floppies and rewrite them with zeroed padding bytes.  Fortunately I only had to do it once, and it was only about 7 floppies.  Someone else told me the reason.

    Monday, July 03, 2006 8:14 PM by Fred

    > It seems that IBM’s tape copy utility copied the in-memory data
    > structure which contained a session-specific value (a pointer, IIRC).

    This I can’t figure out.  I think the tape labels included dates but the file contents would be copied directly from disk exactly as you thought.  I suppose that if the disk file had varying length records and the tape file had fixed length records then the padding could be unspecified (and might be a security leak).

    By the way was the copying program IEBGENER?  For the most part it didn’t care if the files were on disk or tape or cards or whatever.  But it couldn’t handle an ANSI format tape.  It was 100% repeatable.  Tell IEBGENER to copy a tape formatted according to the then-ANSI standard with varying length records and watch it crash with an 0C4.

  10. James says:

    Isn’t the real bug that some previously-called function failed to zero sensitive data in its stack frame before returning?

    I think a simpler rule of thumb is: don’t forget to zero all sensitive data when you’re done with it. Arguably dealing with it when you serialize buffers to disk/network packets/whatever is more direct, but if one should zero heap-allocated memory when freeing it, one should zero stack-allocated memory too. Why should people distinguish between the two in this regard?

    [The problem is, as others have noted, what one person considers sensitive another might not. Is the name of a file on the user’s hard disk sensitive? Does this mean that you have to SecureZeroMemory your WIN32_FIND_DATA buffers? What about all the file names you added to your list box? The issue is not stack vs heap – you can have the same “information disclosure” with heap data. I just chose the stack for simplicity. -Raymond]
  11. Mike Dimmick says:

    James: yes, the virtual memory manager zeros user-mode page allocations (VirtualAlloc/VirtualCommit) before giving the page to the process. I’m not sure if it does it if the page was previously used by the same process – if it’s no longer in the working set I don’t think there’s any association with that process any longer, the OS cannot tell whether it’s safe to hand it over or not. However, the heap manager (HeapAlloc, LocalAlloc, malloc, new, etc) does not – it’s not disclosing sensitive information since the process could already read this memory. You might still get zeros if the heap manager has to allocate more virtual memory to satisfy this allocation, but you can’t rely on it.

    If you use <crtdbg.h> you can get the C run-time heap allocation code to initialise blocks (to 0xCC, IIRC, which is something like ‘int 3’, the breakpoint instruction). This is to help stop developers relying on values being initialised to zero.

    The OS has a background thread (the only one in the system with priority zero which is never boosted, so it only runs if there are fewer runnable threads than [logical] processors) called the zero page thread. This thread takes pages from the free list and overwrites them with zeros, to keep a pool (zero page list) of zero pages. This is to avoid having to hold up the application while it clears the page. If there aren’t any zero pages around already, the memory manager will try the free list. More in ‘Windows Internals, 4th Edition’.

  12. AC says:

    On the subject "was Office ‘just a normal win app’": Raymond, you’re right, in a sense "Office just uses OLE". But, at the first moment when Office "just used OLE", who from competitors was able to use it the same way?

    Not to mention the dependencies demonstrated by the solution to the bug, which, as Gabe points out, is practically impossible if you "just use OLE".

    Of course, we can say that competitors could have been better off not using OLE for document storage anyway.  But still there are interoperability issues with existing Office documents etc. My excuses for drifting away from the subject of your article.

  13. James says:

    The fact NT zeros out pages of memory before letting your app use them *doesn’t* free you from the need to initialise buffers yourself before you write out their contents to disk or network. It does mean uninitialized areas will tend to contain zero rather than the user’s data, but there’s no guarantee of that: memory managers will normally recycle pages, since that’s faster than constantly requesting and returning pages from the kernel.

    So, you allocate a 4kb buffer to do something. Your memory manager gets a page from the OS to put the buffer in, you use it, then free the buffer. Now you allocate another 4kb buffer to send something over the network – and your memory manager hands you back the same page you used previously. Because it was never transferred to another application, the NT kernel never zeros it for you – and you might just have sent the user’s credit card number over the Internet in cleartext. Whoops.

    Of course, in an ideal world you wouldn’t be saving or transmitting unused chunks of data anyway… ;-)

  14. Reinder Verlinde says:

    Consider the following C skeleton code:

           volatile char password[64]; // probably needs other attribute(s)
           aFunctionCall(password);
           SecureZeroMemory(password, 64);

    I do not think that one can guarantee that no copy of the password remains after this code runs. For instance, aFunctionCall could use registers in its calculation of a hash of the password. A heavily optimizing compiler might even use vector registers to do so. Those might give ample room for leaking sufficient bits to recover the entire password. Those registers, in turn, could easily make it into main memory when another function call saves and restores them.

  15. dave says:

    And then there’s code that uses (say) std::string from the C++ standard library to hold ‘sensitive’ data.

    Since the string implementation can reallocate its internal storage -and- can share internal storage between objects whenever the whim takes it, it gets pretty tricky to write code that guarantees erasure.

  16. BryanK says:

    Not to mention .Net’s System.String class…

Comments are closed.