If you configure a program to run in Windows 2000 compatibility mode, then it is also vulnerable to Windows 2000 security issues

We received a security vulnerability report that said, basically, that if you apply Windows 2000 compatibility mode to an application, then it becomes vulnerable to Windows 2000 security issues.

Well, yeah. Because that's what you asked for.

If you set a program to run in Windows 2000 compatibility mode, then one of the things that happens is that DLL loading follows the Windows 2000 rules. Windows 2000 predates the SafeDllSearchMode setting, so the loader always follows the rules that apply when SafeDllSearchMode is disabled.
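The security-relevant difference between the two sets of rules is where the current directory sits in the search order. Here is a toy model of the two documented orders — purely illustrative, not real API calls, and simplified (the actual loader has additional wrinkles such as `LOAD_WITH_ALTERED_SEARCH_PATH`):

```python
# Illustrative sketch: a toy model of the documented Windows DLL search
# orders. The directory names are simplified placeholders.

def dll_search_order(safe_dll_search_mode):
    """Return the (simplified) directory search order for LoadLibrary."""
    if safe_dll_search_mode:
        # Modern default: the current directory is searched *after*
        # the system directories.
        return [
            "application directory",
            "system32",
            "16-bit system directory",
            "Windows directory",
            "current directory",
            "PATH directories",
        ]
    # Windows 2000 rules (what Windows 2000 compatibility mode
    # reinstates): the current directory is searched right after the
    # application directory -- prime territory for DLL planting.
    return [
        "application directory",
        "current directory",
        "system32",
        "16-bit system directory",
        "Windows directory",
        "PATH directories",
    ]

# The difference in one line: how early the current directory is consulted.
print(dll_search_order(False).index("current directory"))  # 1
print(dll_search_order(True).index("current directory"))   # 4
```

An attacker who can drop a malicious DLL into the current directory (say, next to a document the victim double-clicks) wins under the old rules but loses under the new ones, which is exactly why SafeDllSearchMode was introduced.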

This is intentional, because one of the reasons the program was put into Windows 2000 compatibility mode is that it relies on the Windows 2000 algorithm for DLL loading. In other words, the program relies on bug-for-bug compatibility,¹ and Windows 2000 compatibility mode does its best to oblige.²

Is this a security vulnerability?

Well, the program has a security vulnerability, in the sense that it stops working when the more secure DLL loading algorithm is used. On the other hand, good luck getting the vendor to do anything to address the issue. The fact that the program requires Windows 2000 compatibility mode is a strong indication that the vendor is not going to do anything about the matter, given that it has had over fifteen years to do something about it and hasn't.

But what about if a user manually applies the Windows 2000 compatibility mode to a program that doesn't need it? Is it a security vulnerability that Windows allows the user to put a current-day program into a compatibility mode that reintroduces old security vulnerabilities? Or is this a case of "If you configure your system to be insecure, then don't be surprised that you have a security vulnerability"?

Let's look at the usual questions for evaluating whether something is a security vulnerability: Who is the attacker? Who is the victim? What has the attacker gained?

The attacker is somebody who can set a program into an insecure compatibility mode. The victim is somebody who runs the program thinking they are getting a normal program, but are instead getting an insecure program. The attacker can now compromise the program by using the old security vulnerability.

Okay, but let's take a closer look at the relationship between the attacker and the victim. If a local user applies an insecure compatibility mode to a program, it affects only that user. The attacker hasn't gained anything: they could have just written a program that does whatever they like and run it. No need to pile on the style points by employing DLL injection. In this case, the attacker is attacking himself. This is not particularly interesting.

In order to change what other users experience when they run the program, you need administrator privileges to modify the system compatibility database or edit system shortcuts. In that case, you're already on the other side of the airtight hatchway.

Compatibility shims should be applied only to address specific compatibility issues, not sprinkled onto every program you see, because some compatibility shims deliberately weaken security for compatibility's sake.
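For reference, per-user compatibility modes are recorded under a user-writable registry key, which is why a non-administrator can only affect their own account. The sketch below builds the `reg.exe` command that writes such an entry; the key path and the `WIN2000` layer name reflect my understanding of the AppCompatFlags mechanism, so verify them before relying on this:

```python
# Sketch: where per-user compatibility-mode settings live. The key path
# and the "WIN2000" layer name are stated to the best of my knowledge;
# verify against your own system before depending on them.

LAYERS_KEY = (r"HKCU\Software\Microsoft\Windows NT\CurrentVersion"
              r"\AppCompatFlags\Layers")

def reg_command(exe_path, layer="WIN2000"):
    """Build the reg.exe command that applies a compatibility layer
    to the given executable for the current user only."""
    return f'reg add "{LAYERS_KEY}" /v "{exe_path}" /t REG_SZ /d {layer} /f'

# Hypothetical example path, for illustration only:
print(reg_command(r"C:\OldApp\app.exe"))
```

Because the key is under HKCU rather than HKLM, writing it requires no elevation — and, per the argument above, affects no one but the writer.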

¹ And that isn't even the weirdest "throwback Thursday" compatibility shim. My favorite is EmulateHeap, which replaces the standard heap with an exact copy of the Windows 95 heap manager.

² Note that the compatibility shim infrastructure performs only in-process shimming. It can alter the way the process internally behaves (or how in-process components like the DLL loader behave), but it doesn't alter the security boundaries between the program and the rest of the system. So even though it weakens the security to Windows 2000 levels, it does so only to the extent that the application could have weakened security on its own (say by implementing an insecure algorithm).
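The kind of bug-for-bug dependency that a shim like EmulateHeap caters to can be illustrated with a toy allocator. This is a purely hypothetical sketch, nothing like the real Windows 95 heap; it just shows how a program can come to rely on an allocator's internal reuse policy:

```python
# Toy allocator sketch: demonstrates how a program can accidentally
# depend on allocator internals. Not the Windows 95 heap -- just a
# minimal bump allocator with a LIFO free list.

class ToyHeap:
    def __init__(self):
        self._next_addr = 0x1000   # next fresh "address" to hand out
        self._free_list = []       # LIFO stack of freed addresses

    def alloc(self, size=16):
        if self._free_list:
            return self._free_list.pop()  # reuse most recently freed block
        addr = self._next_addr
        self._next_addr += size
        return addr

    def free(self, addr):
        self._free_list.append(addr)

heap = ToyHeap()
p = heap.alloc()
heap.free(p)
q = heap.alloc()  # under this toy policy, q is the same address as p

# A program that (inadvertently) relies on q == p works fine on this
# allocator -- and breaks the moment the reuse policy changes.
assert q == p
```

Swap in any other reuse policy (FIFO, coalescing, address-ordered) and the assertion fails, which is exactly the class of breakage that forces a program into a heap-emulation shim.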

Comments (25)
  1. skSdnW says:

    How can 1 and 2 both be correct? Win95 has an undocumented flag to allocate memory that is shared by all processes. (From your TechNet article: “… that emulate the Windows 95 heap down to the very last detail”)

    1. To be specific, emulates the heap memory layout down to the very last detail. Because the shim is for programs that relied on details of the heap layout (usually inadvertently, such as the exact lifetime of memory after it has been freed).

      1. alegr1 says:

        There was that time when Windows 95 HyperTerminal stopped working after MSVCRT update, and a patch was issued to restore the heap behavior for those buggy programs.

        1. Yuhong Bao says:

          Windows 98. Windows 95 is too old to use MSVCRT.DLL, I think.

          1. skSdnW says:

            Win 95 doesn’t always have MSVCRT installed but it might be there depending on which features you selected during installation. Obviously, it is not the one from VC6 that most people depended on, probably 4.1 or 4.2.

          2. Yuhong Bao says:

            Win95 was created before MSVCRT.DLL was even introduced (in Visual C++ 4.2, I think).

    2. Joshua says:

      Since Windows 7 has shared memory there’s no reason they couldn’t implement the shared across processes flag. Obviously it would only work against processes that enabled EmulateHeap.

      The only reasonable mistake that could be depended on is memory freed retains its value until the next allocation; this would actually be true with classical heap managers. To be fair, I used to abuse that, but only with malloc and free and when I knew the library version of malloc and free implemented a classical heap (that is, never returns free heap to the OS). I didn’t ever try it with the Windows heap allocation functions. I want to say because I didn’t know you could do that, but in fact I knew you couldn’t do that.

      1. “The only reasonable mistake that could be depended on is memory freed retains its value until the next allocation.” Um, no, they depended upon a lot more than that. I vaguely recall one program that inadvertently relied on the fact that if they allocated memory, freed it, then allocated four more blocks of memory, the last block would be pointer-identical to the memory that they freed a while back. Something like that.

        1. Antonio Rodríguez says:

          It must be fun having to debug that…

          1. smf says:

            Yeah, one of the major SQL vendors (not Microsoft) had some really strange memory allocation issues. You could see it start allocating memory quickly; then it started logging internal memory inconsistency errors and connections locked up.

            Of course, we were the only customer reporting that issue (it was on their latest offering, so no wonder), and it must have been something wrong with the queries we were executing (wait, what???).

            They never fixed it. We found that defragmenting the database and log file daily and trying to get it to allocate a huge amount of memory up front would let us keep the database running.

            It turns out they have an A and B development team. The A developers write the new code, then release it and move on to their next bug creation phase. The B developers then have to hack fixes in. I’m not naming the vendor in case they sue.

        2. Yuhong Bao says:

          Which also reminded me of the VC6 small block heap debacle where the old VC5 allocator had to be added to MSVCRT. I think even some MS software was affected. I wonder if many would be considered security bugs nowadays.

          1. alegr1 says:

            That’s when Hyperterminal was crashing because of allocator change.

          2. Yuhong Bao says:

            Update: one of them seems to be double frees, which is definitely a security issue.

  2. Karellen says:

    “They could have just written a program that does whatever they like and run it. […] In this case, the attacker is attacking himself.”

    Hmmm…..thinking about this brings to mind Return Oriented Programming.[0] There can exist situations where an attacker has no way of creating new executable code on a system, or even of modifying existing executable code, but is capable of running existing executable code in new, exciting, and previously not-considered ways, as but one step in a long series of leveraging small vulnerabilities into increasingly larger ones.[1]

    It’s possible that the attacker is not the user, and that the attacker is someone who does not have the ability to write a program that does whatever they like, but can somehow change the configuration of existing programs. Just because one person can’t imagine how a given bug might be turned into a vulnerability, that doesn’t mean that no-one else on the internet can.

    [0] https://en.wikipedia.org/wiki/Return-oriented_programming
    [1] https://blog.chromium.org/2012/05/tale-of-two-pwnies-part-1.html

    1. In which case, exploiting that other vulnerable program brought them to the other side of the airtight hatchway. Once they’re on the other side, it’s not surprising they can cause havoc.

      1. Kevin says:

        I agree with just about everything you’ve written (particularly that this is a reasonable design decision for compatibility mode), but I’m not sure about the “airtight hatchway” model. Taken to its logical extreme, it suggests that we shouldn’t bother with defense in depth at all.

        (But then, this blog is for entertainment purposes only, presumes a certain level of knowledge in its audience, etc., so I suppose I should just get over it.)

        1. smf says:

          I think his point is that if you install an insecure program that doesn’t sanitize its inputs, then it’s not Windows’s fault.

          Things like ASLR make it more challenging, but you can’t stop a program from writing to its own stack. Although CPUs could sign stack frames.

          1. xcomcmdr says:

            Input sanitization is a dead end. Where do you stop “cleaning” the string you received?

            Just escape it.

  3. Pierre B. says:

    Well, the truth is that the victim would probably set the compatibility mode without knowing that it has security implications, just to make the program run. It gets worse: the vendor has no incentive to fix their issue, since there’s this handy workaround.

    Then the attacker would simply profit from the situation as a side effect.

    1. Antonio Rodríguez says:

      In practice, this would be improbable. Why does Windows get all the viruses, even when other desktop OSes have security vulnerabilities too (as evidenced by their patches)? Because Windows has a 95% usage share. If I’m going to write a worm or a piece of spyware, I’m going to target that 95%, instead of the 1-4% of the other platforms.

      In the same way, if I write malware, I’ll be aiming at a standard configuration. How many users will be running a 15-year-old version of some program under a current version of Windows? I guess there won’t be too many… If you still have doubts, see how Windows XP came out unscratched from the recent WannaCry attack, even though it is technically vulnerable. It seems that attackers didn’t bother to target an OS with less than 1% usage share.

      1. GP.Burth says:

        That depends on the attack. If it’s a generic “infect anything you see” type of attack (like WannaCry), then you are right. On the other hand, there are more targeted attacks on people, e.g. spearphishing. These could very well use knowledge that the victim uses a specific program.
        Then there are attacks like Nyetya this June, which attacked a specific program that only a small subset of companies worldwide use, but with devastating effects nonetheless (shipping company Maersk lost up to $300 million). In that case the update mechanism was compromised, something unlikely for a program needing Win2000 mode (but quite possible for XP-mode programs), but there are other ways to compromise a system. And it only takes one infected computer on a company network…

      2. xcomcmdr says:

        > see how Windows XP came out unscratched from the recent WannaCry attack

        … I wouldn’t call a BSOD loop (especially on embedded and/or medical equipment) “coming out unscratched”.
        But yes, that was quite surprising compared to what happens on Windows 7 (for example).

    2. I’ve never set the compatibility mode of something I didn’t REALLY need it for, let alone kept going further back in time when one mode didn’t work. While I might not assume there was a specific security vuln, I would absolutely assume that a broken program would be broken and vulnerable in many ways.

  4. Dwedit says:

    Can’t compatibility settings be applied to a program without administrative access just by poking the registry? This means a user mode EXE could change the DLL search order of another application.

    1. Yup, and I discussed this in the article. All you’re doing is attacking yourself. If you have medium integrity access, then you already pwn all other medium integrity processes running under the same user.

Comments are closed.
