What’s the point of SecureZeroMemory?


The SecureZeroMemory function zeroes out memory in a way that the compiler will not optimize out. But what's the point of doing that? Does it really make the application more secure? I mean, sure, the data could go into the swap file or hibernation file, but you need Administrator access to read those files anyway, and you can't protect yourself against a rogue Administrator. And if the memory got swapped out before it got zeroed, then the values went into the swap file anyway. Others say that it's to prevent other applications from reading my process memory, but they could always have read the memory before I called SecureZeroMemory. So what's the point?

The SecureZeroMemory function doesn't make things secure; it just makes them more secure. The issue is a matter of degree, not absolutes.

If you have a rogue Administrator or another application probing your memory, then that rogue operator has to suck out the data during the window of opportunity between the time you generate the sensitive data and the time you zero it out. This window is typically not very long, so it makes the attacker work harder to get the data. Similarly, the data can end up in the swap file only if it gets swapped out during the window between the sensitive data being generated and the data being zeroed. Whereas if you never call SecureZeroMemory, the attacker can take their sweet time looking for the sensitive information, because it'll just hang around until the memory gets reused for something else.

Furthermore, the disclosure may not be due to a rogue operative, but may be due to your own program! If your program crashes, and you've signed your program up for Windows Error Reporting, then a crash dump file is generated and uploaded to Microsoft so that you can download it and investigate why your program is failing. In preparation for uploading, the crash dump is saved to a file on the user's hard drive, and an attacker may be able to mine that crash dump for sensitive information. Zeroing out memory which contained sensitive information reduces the likelihood that the information will end up captured in a crash dump.

Another place your program may inadvertently reveal sensitive information is in the use of uninitialized buffers. If you have a bug where you do not fully initialize your buffers, then sensitive information may leak into them and end up accidentally transmitted over the network or written to disk. Using the SecureZeroMemory function when you are finished with sensitive information is a defense-in-depth way of making it harder for sensitive information to go where it's not supposed to.
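
To make the pattern concrete, here is a minimal sketch; the GetPasswordFromUser and LogOnUser helpers are hypothetical, invented for the example. Generate the sensitive data, consume it, then zero it immediately:

    #include <windows.h>

    // Hypothetical helpers, assumed for the sketch.
    void GetPasswordFromUser(char *buffer, size_t size);
    BOOL LogOnUser(const char *password);

    BOOL AuthenticateInteractively(void)
    {
        char password[128];
        GetPasswordFromUser(password, sizeof(password));
        BOOL ok = LogOnUser(password);

        // Zero the plaintext as soon as it has been consumed, in a way
        // the compiler may not optimize away. This shrinks the window
        // during which the password could leak into a crash dump, the
        // swap file, or an under-initialized buffer.
        SecureZeroMemory(password, sizeof(password));
        return ok;
    }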

Comments (37)
  1. Brian says:

    I'm curious how compilers can optimize away ZeroMemory.  Is there something special in the function signature that guarantees the operation is side-effect free?  Or do compilers just hard-code the function name?

  2. SimonRev says:

    ZeroMemory is a macro for memset.  I suppose a compiler might feel it has a priori knowledge of what memset does and optimize the call away if the buffer will receive no subsequent use.

  3. henke37 says:

    It is rather simple how it works: it can't be optimized away because the compiler doesn't know what it does.

  4. SimonRev says:

    SecureZeroMemory is implemented in a header file as a FORCEINLINE C function which treats the input pointer as volatile.  For x64 it seems to run an intrinsic on it (I didn't follow into that), and for x86 it zeroes out the values in the buffer.  Presumably the volatile qualifier is enough to keep the MSVC compiler from trying to optimize away the call.
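
    For illustration, a minimal sketch of the volatile technique described above (not the actual SDK source, which also has the x64 intrinsic path):

        #include <stddef.h>

        // Because the destination pointer is volatile-qualified, each
        // store is an observable side effect the compiler must emit,
        // even if the buffer is never read again.
        void *MySecureZero(void *ptr, size_t cnt)
        {
            volatile char *p = (volatile char *)ptr;
            while (cnt--)
                *p++ = 0;
            return ptr;
        }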

  5. Brian_EE says:

    In the crypto hardware community we refer to this as Data At Rest, even if it is in processor/device memory. The less time that sensitive data is in buffers, the less opportunity to compromise that data. As Raymond correctly points out, this doesn't secure the system by itself, but is one of many strategies put together to minimize risk.

    If you are designing such systems, the rule is to destroy any plaintext data as soon as it is consumed by the downstream process (e.g. an encryption routine). When such systems are being certified, the certifying agency typically requires stepping through the assembly code in a debugger to prove that all security-critical functions operate properly (and that they were not optimized away).

  6. Joshua says:

    @SimonRev: from the beginning, the compiler was always allowed to assume these functions and generate calls to them at will: memcpy, memmove, memset, memcmp, div, ldiv. The fact that the compiler is allowed to optimize them away follows.

  7. Adam Rosenfield says:

    Here's an example where the compiler will optimize away a call to ZeroMemory:

    #include <stdio.h>
    #include <windows.h>

    void bar(char *buf);

    void foo()
    {
        char buf[] = "foobarbaz";
        printf("%s\n", buf);
        bar(buf);
        ZeroMemory(buf, sizeof(buf));
    }

    When compiled with /O2 /Oi /Oy /GS- with the Visual Studio 2010 compiler on x86, this does not call memset or write to memory at all after calling bar(); it just runs the normal function epilog (add esp,0Ch; mov esp,ebp; pop ebp; ret).  Using /GS instead of /GS- adds a call to __security_check_cookie but otherwise does not zero out any memory.

  8. Brian_EE says:

    @Brian, as Joshua pointed out, the compiler knows about certain functions. In the example Adam provided, the compiler sees that the buffer is being written to, and then immediately destroyed at the end of the function (i.e. never read again). So the compiler assumes it can eliminate the call. This is similar to optimizing away a variable that you declare and set but never read from in the function.

  9. Mashmagar says:

    Good going, Maurits. Now we all know your sensitive information!

  10. John Doe says:

    Devil's advocate:

    A rogue administrator may set a breakpoint at any 90%-matching expansion of SecureZeroMemory, so he/she will have a better chance at, and possibly a faster way of, finding where sensitive data is than by thorough reverse engineering.

    Your application may still be rather insecure if SecureZeroMemory is invoked on an unchecked NULL pointer, or if you're zeroing several buffers and any but the last one fails because it writes outside its allocated memory bounds. And if the buffers are close enough together, they might appear in the crash dump.

    On the other hand, if you always use SecureZeroMemory where ZeroMemory would suffice, you'll give him/her a hard time.

    @Maurits, if your sensitive data is all zeros, at least you're not readily compromising it by writing all zeros over zeros, in an attempt to lure a rogue hacker into thinking "these are not the zeros you're looking for…". If you're writing all ones over all zeros, now that's suspicious! Even more suspicious would be (pseudo-)random bytes.

    @Mashmagar, security through obscurity is overtaken by mere perseverance.

  11. 12BitSlab says:

    If someone has physical access (or admin rights via comm) to your device, then you are already pwned and it is game over.

  12. Brian_EE says:

    @12BitSlab: "If someone has physical access (or admin rights via comm) to your device, then you are already pwned and it is game over."

    Depends on what your "device" is. Physical access != game over in every situation. Nor does "admin rights". Some equipment is designed to protect its contents even when the enemy has it in their hands.

  13. Zr40 says:

    @Brian_EE: But we're talking about Windows, not some hypothetical tamper-resistant device or administrator-mistrusting OS. In Windows, having Administrator privileges allows for any behavior.

  14. @Zr40

    Even if we are talking about Windows. This doesn't mean an automatic game over.

    An example is a self-service checkout in a supermarket. As a user you have physical access to it, but you are most likely supervised while using it, and you can't get access to it in your own time. Even if these are set to run with admin rights, you wouldn't be able to make use of that. And this is very relevant, because the self-service checkouts in all my local supermarkets run on Windows.

    So as Brian_EE said, it depends on what your "device" is.

  15. June says:

    @Zr40: Be that as it may, there are certainly historic cases of applications (and games) on Windows resisting tampering (for copy-protection purposes) for quite some time, even though the 'attacker' has full access. Don't assume that because a theoretical superman cracker can defeat any protection, such a cracker a) exists, b) cares about <foo> application, c) has plenty of time and patience… ;-)

  16. apz says:

    "The foobarfunction function doesn't make things secure; it just makes them more secure. The issue is a matter of degree, not absolutes. "

    This is why I find your concept of an 'airtight hatchway' so offensive – so many times people have suggested features that incrementally improve security, only to be rejected with the blind "other side of airtight hatchway" refrain.

    Next time anyone is tempted to blindly retort 'other side of airtight hatchway!', think for a second about functions such as SecureZeroMemory.

    [Defense in depth limits the damage of an existing vulnerability, but the airtight hatchway issue is whether a vulnerability exists at all. That it is possible to limit the damage of somebody who makes it past the hatchway is not the same as saying "I found a way to stop people from getting past the hatchway!" -Raymond]
  17. 12BitSlab says:

    The concept of "secure" does not exist.  System security is EXACTLY like the security of a bank vault.  The security measures exist solely to buy more time.  That is, we want to slow down the fire (in the case of a vault) or slow down the attacker (in the case of a system).  Then one must have detection, notification, and response measures in place.  Without those last three items, there is no point in having security measures in the first place.  Any attack, given enough time, will succeed.

  18. @12BitSlab says:

    Exactly – the guiding principle should be "defense-in-depth" (bank vault alarms, SecureZeroMemory), not "airtight hatchway" (UNIX root/nonroot distinction, some aspects of Windows security).

  19. Myria says:

    Windows needs an extension to VirtualLock that means "never write this data to permanent storage".  VirtualLock works for most things, but will not protect you against your data being written to disk if the user hibernates.

    If I were to design it, the extension would be that the memory is discarded if the machine would otherwise write it to disk.  Attempts to access such memory after a hibernation would trigger a STATUS_IN_PAGE_ERROR exception.  Programs that need advanced functionality can deal with advanced consequences.
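
    For reference, a minimal sketch of the VirtualLock pattern being discussed; it keeps the pages out of the pagefile but, as noted above, not out of the hibernation file:

        #include <windows.h>

        void UseSecret(void)
        {
            SIZE_T size = 4096;
            // Commit a page and pin it in physical memory so that it
            // is not written to the pagefile. Hibernation still writes
            // all of RAM to disk, which is the gap described above.
            BYTE *secret = (BYTE *)VirtualAlloc(NULL, size,
                MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
            if (secret && VirtualLock(secret, size)) {
                /* ... generate and use the sensitive data ... */
                SecureZeroMemory(secret, size); // still zero it when done
                VirtualUnlock(secret, size);
            }
            if (secret) VirtualFree(secret, 0, MEM_RELEASE);
        }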

  20. Maurits says:

    @John Doe: I never thought of it that way. Obviously the only solution is for me to find someone else's sensitive data and overwrite my own with that.

  21. Maurits says:

    My sensitive information *is* all zeros. So I wrote a SecureOneMemory function that writes all 1s and cannot be optimized away.

  22. 640k says:

    Function should be renamed to CallbackToAllPrivilegedUsers_IHaveAMemoryBufferWithSecureStuff_HereYouHaveIt(void *buf, size_t bufsize);

    You now have a less secure OS.

  23. Doug says:

    Regarding SecureZeroMemory being a red flag, note (as mentioned earlier) that SecureZeroMemory is implemented as an inline function. Basically, the only difference between SecureZeroMemory and memset is that you can be confident that SecureZeroMemory won't be optimized away. So anybody trying to catch calls to "SecureZeroMemory" will actually catch all calls to memset.

    Regarding this still being vulnerable, note that "attacker with the ability to access my device's memory WHILE I'm performing sensitive operations" is not the primary opponent (though this does make his/her life a bit more difficult). The primary opponent is "attacker with the ability to access information about my device AFTER I've performed sensitive operations".

  24. cheong00 says:

    Btw, I wonder why we have SecureZeroMemory() but not SecureRandomMemory()?

    In a pre-zeroed memory region, you can be sure that any block that isn't zero is "some data"; in a memory region with a random pattern, you can't, especially if all the "data" the secure application has is numeric (i.e. not plain text). Wouldn't that serve the purpose of "securing" the application better?

    Sure, it'll probably be slower than zeroing out memory, but I think it'd find its purpose.
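
    SecureRandomMemory does not actually exist; a hypothetical sketch of the idea, using BCryptGenRandom as a convenient source of garbage (the quality of the randomness is beside the point here), might look like this:

        #include <windows.h>
        #include <bcrypt.h>   // link with bcrypt.lib

        // Hypothetical SecureRandomMemory: prefill a buffer with random
        // bytes so that real data has no obvious boundary against a
        // zeroed background.
        BOOL SecureRandomMemory(PVOID ptr, SIZE_T cnt)
        {
            return BCRYPT_SUCCESS(BCryptGenRandom(NULL, (PUCHAR)ptr,
                (ULONG)cnt, BCRYPT_USE_SYSTEM_PREFERRED_RNG));
        }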

  25. Paradice says:

    @cheong00: what would its purpose be? If your needs are such that you want to obfuscate live data with randomness, you almost certainly want to use a trusted source for the randomness, rather than rely on a predetermined OS function.

  26. ender says:

    "An example is a self-service checkout in a supermarket. As a user you have physical access to it, but you are most likely supervised while using it, and you can't get access to it in your own time."

    You don't normally have physical access to the computer in these stations – that's usually locked behind a physical door, and the only access you have is to the barcode scanner and touchscreen, which (barring any vulnerability in the software that's running) don't allow you into the system.

  27. cheong00 says:

    @Paradice: The purpose is to obfuscate the "relevant data"'s boundary, so the randomness itself is not important at all.

  28. @640k: SecureZeroMemory is defined with FORCEINLINE, so the compiled code won't leave an entry point to discover (if I understand correctly).

  29. SimonRev says:

    I think 640k's point was that you could scan the compiled code, look for any loops that look like what SecureZeroMemory would compile to, and put breakpoints there.  Whether the actual compiled result of SecureZeroMemory is sufficiently different from that of memset that you could distinguish them is something I cannot answer.

  30. @SimonRev: If you're looking for memset loops, then the differences between a normal memset and SecureZeroMemory are trivial at best ;)

  31. bdv@inec.ru says:

    To zero sensitive memory, it's better to use custom code. Using a public Win32 API just helps the cracker determine where and when to look for passwords. He would just intercept the call and voilà!

  32. Gabe says:

    Dmitry: I don't think you get the point. Writing the code yourself may result in that code being optimized out. The SecureZeroMemory invocation will not be optimized out, and will be inlined so there is nothing to intercept.
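
    To make that concrete, a sketch of the kind of hand-rolled erase that can silently disappear:

        #include <stddef.h>

        // Once the compiler inlines this at a call site where the
        // buffer is never read again, every store is dead and may be
        // removed entirely (the loop may also be recognized as memset
        // and then eliminated for the same reason).
        static void EraseNaively(char *buf, size_t len)
        {
            for (size_t i = 0; i < len; i++)
                buf[i] = 0;
        }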

  33. John Doe says:

    Now this is getting interesting. New commenters should try to find which DLL exports SecureZeroMemory before suggesting to hook calls to it. And if you don't find it at first…

    @Maurits: ROTFLOL

    @cheong00: randomness is important for data stored on disks you're throwing away or which might be stolen. The pagefile may be considered random enough that, when your data is zeroed, the magnetism is not consistent enough to recover the exact data that was there before being zeroed.

    However, zeroed blocks will look more suspicious than random data indeed, and a more persistent hacker would probably look those up first.

    But as someone said, you'd want to use a trustworthy random number generator (not that the one the OS provides isn't, but do you trust any OS?).

  34. voo says:

    @Dmitry: The function is defined as FORCEINLINE, as already mentioned several times. So the only way to find out where the function was used is to look in the binary for code that looks similar enough to it. The problem there, obviously, is that the difference between SecureZeroMemory and memset is rather nonexistent in cases where the latter isn't optimized away.

    In the best case, your reinvented SecureZeroMemory will be as secure as the original, but I really wouldn't count on it.

  35. cheong00 says:

    @John: I'd say as long as the region is non-zero, it'll be good enough. Magnetism is usually a non-issue here, because I'm not talking about "covering memory with random bits after the memory has been used", but about "prefill the memory with random bits so the data boundary is not obvious".

    That's why I said the randomness itself is not important at all for this particular usage.

  36. John Doe says:

    @cheong00, "prefill the memory with random bits so the data boundary is not obvious".

    That only avoids static analysis of a process's memory snapshot. At runtime, or during reverse engineering, boundaries are just as easy to detect as with zeroing, which makes it just as suspicious.

  37. cheong00 says:

    Yup, neither of the ways protects the application from an RE attack. It just makes it more difficult to work out how the data is laid out when viewing a memory snapshot. Because I just want to use garbage to fill the gaps between data, the quality of the garbage itself is not important. That's why I said the "trustworthy random number generator" is not needed.

    That's all I want to say.

