You can’t even trust the identity of the calling executable


A while back, I demonstrated that you can't trust the return address. What's more, you can't even trust the identity of the calling executable. I've seen requests from people who say, "I want to check whether I'm being called from MYAPP.EXE. I'm going to make a security decision based on the result."

Although you can do this, all it does is give you more rope.

Even if you are convinced that you're being called from the expected application, you aren't any safer. An attacker can inject code into that process (say, via a global hook) and you will foolishly trust it. In the same way that you shouldn't trust who you're talking to on the phone based solely on the caller ID. Somebody could have broken into the caller's house and made the call from that phone.
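The kind of check being requested might look like the sketch below, written in Python as a cross-platform stand-in for the MYAPP.EXE scenario (the function and file names are invented for illustration):

```python
import inspect

def sensitive_operation():
    """Run only when called from a file named 'myapp.py' -- a naive check."""
    caller = inspect.stack()[1]
    # Deciding security policy from the caller's identity is exactly
    # the pattern warned against: any code running in this process
    # can bypass, patch, or spoof this check.
    if not caller.filename.endswith("myapp.py"):
        raise PermissionError("untrusted caller: " + caller.filename)
    return "secret result"
```

Since an attacker's code runs with the same privileges as the check, it can call the function from a file it chose to name myapp.py, overwrite sensitive_operation itself, or simply read whatever the function was protecting.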

Comments (15)
  1. Gabe says:

    I should add that you can’t trust your command line either. The only copy of the command line used to call your process is stored in writable memory, so there’s no way to know whether what you’re looking at has been modified.
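    Gabe is describing the Win32 GetCommandLine buffer; the same idea can be illustrated with Python’s sys.argv, which is likewise just ordinary writable memory (the path below is made up):

```python
import sys

# Any code running in the same process can rewrite the command line
# before you inspect it, so it proves nothing about how the process
# was actually started.
sys.argv[0] = "C:\\Trusted\\MYAPP.EXE"  # hypothetical "trusted" path
```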

  2. "I want to check whether I’m being called from MYAPP.EXE. I’m going to make a security decision based on the result."

    Ahahahaha. I couldn’t help but laugh when I read that, as I have a fair amount of experience with the various ways of injecting code into foreign processes.

  3. josh says:

    Ultimately you can’t even trust your own code, because someone could inject stuff into your process or hack the binary.

    Is there any sort of binary security that is safe, or is the only sure bet to rely on data that algorithmically becomes corrupt when your condition is not met?

    Or at some point do you just trust that whoever has set up your application is not malicious and can secure the environment properly?

  4. Cheong says:

    Josh: It seems you have to rely on Windows’ own memory protection, but there are ways to work around even that if some other program targets "your program" with administrative rights.

    If your program is that "security critical", perhaps you can write code that calculates an integrity checksum over the code segments and have a child process verify it periodically. Then only a custom-crafted program could modify your executing code without the user’s knowledge.

    I have a further question: can someone change code that has already been loaded into the CPU cache? If not, perhaps you can trust the process you’re running… :P

  5. I believe it’s theoretically impossible to completely prevent tampering with your code from the same privilege level, user, and logon session.

  6. Jerry Pisk says:

    Verifying checksums (hashes) of your code works only if the attacker does not have access to modify those as well. Embedding them in the code to protect against tampering doesn’t help: once somebody can modify your code, it’s trivial to change the checksums, or to remove the check entirely. (Why would your child process be any more secure? If it were, you wouldn’t need the check in the first place.) If you know where the check is, it’s trivial to change a JNZ or JZ to a JMP. And I think it is possible to change code loaded into the CPU cache: just change it in memory, mark it as dirty, and let the CPU refresh its cache.
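    Jerry’s point can be sketched with a byte string standing in for a code segment: an attacker who can write to your memory patches the code and then simply recomputes the stored checksum (the opcodes and hash choice are illustrative):

```python
import hashlib

def checksum(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

# A "code segment" beginning with JNZ (0x75), plus its embedded checksum.
code = bytearray(b"\x75\x05rest-of-code")
stored = checksum(bytes(code))
assert checksum(bytes(code)) == stored  # integrity check passes

# An attacker patches JNZ to JMP (0xEB)...
code[0] = 0xEB
# ...and overwrites the stored checksum to match the patched code.
stored = checksum(bytes(code))

assert checksum(bytes(code)) == stored  # check still "passes"
```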

  7. theorbtwo says:

    You can’t trust anything at all; so far as you know your code is being run on an emulator specifically designed to mess with you.

    At some point, you have to make assumptions, and if you want, document them.

  8. Mike Hearn says:

    This is just a rephrasing of "on most operating systems, there is no internal security". It’s not a given that you can’t trust the identity of the calling process: this is true on Windows (and Mac OS X and most flavours of Linux), but it isn’t universally true. In fact, on some Linux distros you /can/ obtain and trust the security context of the caller, because the kernel restricts the process-control APIs so that code injection is not possible. Likewise, in some cases the dynamic linker will ignore the equivalent of Win32 global hooks.

    Now, if the kernel is compromised or modified then it’s game over but in most contexts this isn’t a problem worth worrying about (for the sort of thing this guy probably wants to do anyway).

    But Raymond is right that on Windows, as it stands, you can’t really prove much about the state of the system from inside it.

  9. Nick Lamb says:

    "I’m going to make a security decision based on the result."

    This is actually game over anyway. You don’t get to make any real security decisions in this situation.

    The only time a /real/ security decision can be made is when you have a difference in privileges. For example, the firewall software has more privileges than the packets being processed, so it makes meaningful security decisions. Similarly the OS kernel has more privileges than a calling program, so it too makes meaningful security decisions.

    A called function doesn’t have any special privileges, it’s running in the context of the calling code. So anything it can do is also possible for the caller. Suppose your function is able to decrypt messages using an embedded secret key. The caller can just read the key straight out of your code, without executing any of your safeguards.
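    The embedded-key scenario can be sketched like this (the class, key, and cipher are invented; XOR stands in for real decryption):

```python
class Decryptor:
    _KEY = b"embedded-secret-key"  # hypothetical key baked into the code

    def decrypt(self, message: bytes) -> bytes:
        # Imagine safeguards here: caller checks, audit logging, etc.
        return bytes(m ^ k for m, k in zip(message, self._KEY))

# The caller runs at the same privilege level, so it can skip
# decrypt() and all its safeguards and read the key directly.
leaked = Decryptor._KEY
```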

    However, when the function call is upgraded to a Remote Procedure Call or similar, things get interesting. Now the called code can have different privileges, and it becomes meaningful to know the identity of the caller. For example some Unix systems include a kernel-supervised "credential passing" local transport for this purpose. Because the messages pass through the OS kernel, which rejects fraudulent credentials, they can really be used to make security decisions.
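    The kernel-supervised credential passing Nick mentions can be demonstrated on Linux with SO_PEERCRED (a Linux-specific sketch; both ends here belong to the same process, so the kernel reports our own pid/uid/gid):

```python
import os
import socket
import struct

# A connected pair of Unix-domain sockets, standing in for a client
# talking to a local server.
server, client = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Ask the kernel who is on the other end.  The struct ucred it fills
# in (pid, uid, gid) is supplied by the kernel, not the peer, so the
# peer cannot forge it.
creds = server.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                          struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", creds)

server.close()
client.close()
```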

  10. M Knight says:

    "But Raymond is right that on Windows, as it stands, you can’t really prove much about the state of the system from inside it."

    Gödel’s Incompleteness Theorem means you cannot prove a system from within the system. And any attempt to use some external factor to prove the system just makes the system larger and more complex.

  11. MGrier says:

    If, in a single address space, you have both Bad Code and Good Code, you’re sunk.  The Good Code can call the Good API all it wants to get the identity of the containing process (or whatever), but if the Bad Code is in the same address space, it can change the Good Code into Bad Code.  For that matter, if it’s in the same address space, it can just do the Special Operation that "only" the Good Code could do in the first place.

    Boom.

    The only way to implement a security boundary is to keep the code that possesses different levels of permission/privilege distinct from each other, and to maintain the data flow across the boundaries very strictly.  (This is why, IMO, you can never get one of these "sandboxing" technologies right without running the sandboxed things in entirely separate address spaces.)

  12. says:

    "Gödel’s Incompleteness Theorem means you can not prove a system from within the system. And any attempt to use some external factor to prove the system just makes the system larger & more complex."

    That’s a little bit of an oversimplification, and I don’t think it applies in this particular case. Gödel wasn’t saying that *no* statements could be proven or disproven, only that certain statements in a system couldn’t be proven or disproven without expanding the system (and thereby introducing a new class of statements that can’t be proven or disproven). But a theorem selected at random can be self-consistent without having to go outside the system (even if the system, like all others, is incomplete).

    Here Raymond et al. are making a statement too: "it is not possible to answer P(x) for all x", where P(x) is the question, "has code x been tampered with?". This is quite possible to prove within the bounds of the system, probably with a Cantor’s-diagonal argument: enumerate all possible x and come up with an answer for each of them. If they’re all answerable, Raymond’s statement is wrong; otherwise, it’s right.

  13. JS says:

    You can’t trust anything, really. Maybe one of the OS libraries was replaced with code that causes the computer to explode when you call it.

  14. Tito says:

    "In the same way that you shouldn’t trust who you’re talking to on the phone based solely on the caller ID. Somebody could have broken into the caller’s house and made the call from that phone."

    There are MUCH easier ways to fake caller ID than breaking into someone’s home (or stealing their cellphone).  That is one reason to always require voicemail passwords even when calling from the cell.  Spoofing this is trivial.

Comments are closed.
