Your debugging code can be a security vulnerability: Loading optional debugging DLLs without a full path


Remember, the bad guys don't care that your feature exists just for debugging purposes. If it's there, they will attack it.

Consider the following code:

DOCLOADINGPROC g_pfnOnDocLoading;

void LoadDebuggingHooks()
{
 HMODULE hmodDebug = LoadLibrary(TEXT("DebugHooks.dll"));
 if (!hmodDebug) return;
 g_pfnOnDocLoading = (DOCLOADINGPROC)
               GetProcAddress(hmodDebug, "OnDocLoading");
 ...
}

HRESULT LoadDocument(...)
{
 ...
 if (g_pfnOnDocLoading) {
   // let the debugging hook replace the stream
   g_pfnOnDocLoading(&pstmDoc);
 }
 ...
}

When you need to debug the program, you can install the DebugHooks.dll DLL into the application directory. The code above looks for that DLL and if present, gets some function pointers from it. For illustrative purposes, I've included one debugging hook. The idea of this example (and it's just an example, so let's not argue about whether it's a good example) is that when we're about to load a document, we call the OnDocLoading function, telling it about the document that was just loaded. The OnDocLoading function wraps the IStream inside another object so that the contents of the document can be logged byte-by-byte as it is loaded, in an attempt to narrow down exactly where document loading fails. Or it can be used for testing purposes to inject I/O errors into the document loading path to confirm that the program behaves properly under those conditions. Use your imagination.
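For concreteness, the DebugHooks.dll side of that contract might look something like the following. This is purely a sketch: the DOCLOADINGPROC signature (taking an IStream**) is inferred from the call site above, and CreateLoggingStream is a hypothetical factory for the byte-logging wrapper, not anything from the article.

```cpp
// DebugHooks.dll (hypothetical): the export that LoadDebuggingHooks
// resolves with GetProcAddress. The IStream** signature is an assumption
// based on the call site; CreateLoggingStream is a made-up helper that
// wraps a stream in a byte-logging shim (implementation not shown).
#include <windows.h>
#include <objidl.h>

IStream *CreateLoggingStream(IStream *pstmInner); // hypothetical wrapper factory

extern "C" __declspec(dllexport)
void OnDocLoading(IStream **ppstm)
{
    IStream *pstmLogged = CreateLoggingStream(*ppstm);
    if (pstmLogged) {
        (*ppstm)->Release();  // the wrapper holds its own reference to the inner stream
        *ppstm = pstmLogged;  // the caller now reads through the logging wrapper
    }
}
```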

But this debugging code is also a security vulnerability.

Recall that the library search path searches directories in the following order:

  1. The directory containing the application EXE.
  2. The system32 directory.
  3. The system directory.
  4. The Windows directory.
  5. The current directory.
  6. The PATH.

When debugging your program, you install DebugHooks.dll into the application directory so that it is found in step 1. But when your program isn't being debugged, the search in step 1 fails, and the search continues in the other directories. The DLL is not found in steps 2 through 4, and then we reach step 5: The current directory.

And now you're pwned.

Your application typically does not have direct control over its current directory. The user can run your program from any directory, and that directory ends up as your current directory. Your LoadLibrary call then searches the current directory, and if a bad guy has planted a rogue DLL there, your program just became the victim of code injection.

This is made particularly dangerous when your application is associated with a file type, because the user can run your application just by double-clicking an associated document.

When you double-click a document, Explorer sets the current directory of the document handler application to the directory that contains the document being opened. This is necessary for applications which look around in the current directory for supporting files. For example, consider a hypothetical application LitWare Writer associated with *.LIT files. A LitWare Writer document ABC.LIT file is really just the representative for a family of files, ABC.LIT (the main document), ABC.LTC (the document index and table of contents), ABC.LDC (the custom spell check dictionary for the document), ABC.LLT (the custom document layout template), and so on. When you open the document C:\PROPOSAL\ABC.LIT, LitWare Writer looks for the other parts of your document in the current directory, rather than in C:\PROPOSAL. To help these applications find their files, Explorer specifies to the CreateProcess function that it should set the initial current directory of LitWare Writer to C:\PROPOSAL.

Now, you might argue that programs like LitWare Writer (which look for the ancillary files of a multi-file document in the current directory instead of the directory containing the primary file of the multi-file document) are poorly-written, and I would agree with you, but Windows needs to work even with poorly-written programs. (Pre-emptive snarky comment: Windows is itself a poorly-written program.) There are a lot of poorly-written programs out there, some of them industry leaders in their market (see above pre-emptive snarky comment) and if Windows stopped accommodating them, people would say it was the fault of Windows and not the programs.

I can even see in my mind's eye the bug report that resulted in this behavior being added to the MS-DOS Executive:

"This program has worked just fine in MS-DOS, but in Windows, it doesn't work. Stupid Windows."

Customers tend not to be happy with the reply, "Actually, that program has simply been lucky for the past X years. The authors of the program never considered the case where the document being opened is not in the current directory. And it got away with it, because the way you opened the document was to use the chdir command to move to the directory that contained your document, and then to type LWW ABC.LIT. If you had ever done LWW C:\PROPOSAL\ABC.LIT you would have run into the same problem. The behavior is by design."

The response to "The behavior is by design" is usually "Well, a design that prevents me from getting my work done is a crappy design," or a much terser "No it's not, it's a bug." (Don't believe me? Just read Slashdot.)

So to make these programs work in spite of themselves, the MS-DOS Executive sets the current directory of the program being launched to the directory containing the document itself. This was not an unreasonable decision because it gets the program working again, and it's not like the program cared about the current directory it inherited from the MS-DOS Executive, since it had no control over that either!

But it means that if you launched a program by double-clicking an associated document, then unless that program takes steps to change its current directory, it will have the document's containing folder as its current directory, which prevents you from deleting that directory.
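A sketch of one way to close the hole in the code at the top of the article: build the full path to DebugHooks.dll from the application's own location, so that neither the current directory nor the PATH is ever consulted. (LoadDebugHooksFromAppDir is my name for this helper, not something from the article; error handling is abbreviated.)

```cpp
// Load the optional debugging DLL by full path, built from the directory
// containing the application EXE. A full path means LoadLibrary performs
// no search at all, taking the current directory out of the equation.
// StringCchCat comes from strsafe.h. Sketch only; error handling abbreviated.
#include <windows.h>
#include <tchar.h>
#include <strsafe.h>

HMODULE LoadDebugHooksFromAppDir(void)
{
    TCHAR szPath[MAX_PATH];
    DWORD cch = GetModuleFileName(NULL, szPath, MAX_PATH);
    if (cch == 0 || cch == MAX_PATH) return NULL; // failed or truncated

    // Chop off the EXE name, keeping the directory and trailing backslash.
    TCHAR *pszName = _tcsrchr(szPath, TEXT('\\'));
    if (!pszName) return NULL;
    pszName[1] = TEXT('\0');

    if (FAILED(StringCchCat(szPath, MAX_PATH, TEXT("DebugHooks.dll"))))
        return NULL;

    return LoadLibrary(szPath);
}
```

On systems that support the LOAD_LIBRARY_SEARCH_APPLICATION_DIR flag, LoadLibraryEx can achieve the same effect without building the path by hand.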

Bonus chatter: I wrote this series of entries nearly two years ago, and even then, I didn't consider this to be anything particularly groundbreaking, but apparently some people rediscovered it a few months ago and are falling all over themselves to claim credit for having found it first. It's like a new generation of teenagers who think they invented sex. For the record, here is some official guidance. (And just to be clear, that's official guidance on the current directory attack, not official guidance on sex.)

History chatter: Why is the current directory even considered at all? The answer goes back to CP/M. In CP/M, there was no PATH. Everything executed from the current directory. The rest is a chain of backward compatibility.

Comments (49)
  1. Anonymous says:

    The problem with that position is that there is no incentive for developers to improve; if Windows continues to support bad programs then people will continue to write bad programs.

    [I think you've lost sight of the goal. Customers want Windows to support bad programs. The goal of Windows is to solve customer problems, not to create a beautiful research project. -Raymond]
  2. Anonymous says:

    I'm guessing that CP/M didn't have .DLLs. So not searching the current directory for DLLs would not have broken anything when DLLs were introduced.

    Further, when PATH was first introduced, back-compat could have been done so that if there were no PATH then an implicit PATH is used which contains the current directory. However, if PATH existed then the current directory would not be searched unless PATH explicitly contained ".". To explicitly set no PATH, PATH could have been set to the empty string, or a single separator if DOS could not distinguish between an unset and a set-but-empty environment variable.

    If only I had a time machine… :-)

    [CP/M had DLLs (called "overlays"). How short our memories have become. -Raymond]
  3. Dan Bugglin says:

    "In the following code example, the library is loaded directly by using a fully qualified path. There is no risk of the attacker introducing malicious code unless he already has write permissions to the application’s target directory.

    HMODULE handle = LoadLibrary("c:\windows\system32\schannel.dll");"

    Wow, this looks like fun.  What happens if the user is using a version of Windows that is installed into C:\WINNT, or even D:\Windows?  There is no guarantee a C: drive even exists.  Once I installed XP and it somehow decided the system drive should be drive E:.  Never quite figured out what caused that to happen.

    It would seem to me you should use the Windows APIs to get the location of the windows or system32 directory first.

    The other solutions suggested seem far more reasonable…

    LoadLibrary with just the dll name is probably OK as long as you take the CWD out of the equation.  There are legitimate (well, depending on who you ask I guess) reasons for wanting to trick an app into loading another DLL, for example wrapping nonexistent functionality to get a legacy app working on a modern OS, or extending an old program to teach it new tricks… friend of mine is working on a wrapper to get an old DX5 game running with DX9 to let him hack in all sorts of new stuff… like mousewheel support!  Or getting Vista/DX10 games running on XP/DX9 (surprisingly well, even), but I won't name specific names.

    [The full path in the LoadLibrary was clearly for illustration purposes. Does MSDN now need a nitpicker's corner? (How does one tell whether an attempt to trick an app into loading another DLL is "legitimate" or "a security vulnerability"?) -Raymond]
  4. Anonymous says:

    Best is to disable the current directory as DLL search path in any case, anyway.

    [Feel free to do that in your programs. -Raymond]
  5. Anonymous says:

    Which DOS program used LoadLibrary?

    Because if none did, you could make load library not search the current directory by default, which would have solved this issue.

    [Many DOS programs loaded libraries, but they weren't called DLLs. They were usually called OVL. -Raymond]
  6. Anonymous says:

    I am sure some DOS programs out there were doing their own loadable library systems.

  7. Anonymous says:

    @John: How so? I don't think anyone actively sets out to write bad programs, it's more that they (we) are  not aware of doing so as long as it works with the targeted Windows version(s). And when a new Windows version comes along to possibly break those bad programs, the original programmers might be long gone and can remain happily oblivious.

  8. Anonymous says:

    @Tommy

    Back in the good-old days of DOS, we only had 640K and liked it!  That being said, some applications were too large to fit into even that amount of memory.  The solution was something Raymond hinted to — overlays.  The idea was that a small portion of your program resided at a specific (often low) address in memory, and it was responsible for swapping in the code from an overlay for the function or mode in which the program was to operate.  The manager could swap between the overlays as necessary, even going back to ones it had loaded previously.  Naturally, dealing with global and heap-allocated data was a bit tricky since the code you were loading didn't have any clue as to when or who allocated the memory, but that could easily be solved by storing pointers at well-known addresses, probably in the manager's address space.

    If memory serves, overlays were supported by DOS directly, so there wasn't anything special that applications had to do to get that support.

  9. Anonymous says:

    @John: Incentive to improve must come from within. If Windows does not support the bad behavior, then "It's Windows' fault that my application doesn't run." If Windows does support the bad behavior, then "Windows is responsible for bad code." Either way, people will blame Windows rather than taking responsibility for themselves.

    This phenomenon is not limited to Windows or even computers. I have worked as a Mathematics tutor for over 15 years. Students don't want understanding, they want homework answers. And then, when they fail on the test because they never learned the concept to begin with, it's either the tutor's or instructor's fault. I laugh every time I hear, "My instructor didn't explain this." Really? What did he or she talk about for 1 – 1.5 hours? At the end of every semester, you'll hear students make one of two statements: "I got an A," or, "My instructor gave me an F."

    Ultimately, the issue is less about code, and more about responsibility to do the right thing and own up when you don't.

  10. Anonymous says:

    @NB: I was not referring to individual developers; I was referring to developers in general.  If Windows X and Windows X + 1 both allow the same bad behavior then new programs will continue to be written badly for Windows X + 1.  If you break that bad behavior in Windows X + 1 then, going forward, new programs will be written not quite so badly.

    I understand there is only so much you can do to prevent bad programs from being written, and there are valid compatibility concerns, but if you continue to allow something to happen then don't complain when it happens.

  11. Yuhong Bao says:

    This has been discussed on Ars Technica:

    arstechnica.com/…/new-windows-dll-security-flaw-everything-old-is-new-again.ars

    [They also get it wrong, claiming that the current directory is searched ahead of system32 by default. (Thank you, Yuhong for once again stating something that I tried to preempt in the bonus chatter.) -Raymond]
  12. Anonymous says:

    [I think you've lost sight of the goal. Customers want Windows to support bad programs. The goal of Windows is to solve customer problems, not to create a beautiful research project. -Raymond]

    Customers want a lot of things, some of which occasionally conflict with each other.  UAC broke a lot of bad programs, but it still shipped.

  13. Anonymous says:

    [Giving it another try. Sorry if this shows up multiple times.]

    With so many people complaining that using PATH and the CWD to search for libraries is "obviously stupid", I'm still wondering why…

    Isn't this a case of being "on the other side of the air tight hatchway" if you can already plant DLLs?

  14. Anonymous says:

    I don't understand why you have done it in this way. Why not a new checkbox in the compatibility tab "Set current working … (security risk!)".

    Most programs would get patched and if not, you can still run it.

    So no developer gets any feedback that something is broken.

    [I'm not sure about the "most programs would get patched." Most programs are not written by large organizations with well-defined software update policies. -Raymond]
  15. Leo Davidson says:

    I'd be interested to know how many programs intentionally look in the current dir (rather than the program dir etc.) for DLLs. I'm sure there are some like the hypothetical LitWare but surely only a handful. Maybe I'm wrong. I have never seen the metrics if they exist.

    I've had the registry flag set to remove the CD from the DLL search path system-wide for ages now and, to my knowledge, nothing at all has stopped working. Of course, that's just my machine, I avoid running junk on it, and maybe stuff has broken that I just haven't attributed to it.

    I call SetDllDirectory in my exes, too, but there's nothing to stop some DLL I load messing that up, and some people actually cannot call SetDllDirectory in time. (e.g. I've heard the start-up code for some MFC versions attempts to LoadLibrary(DWM.DLL) before it hands control to the main code. Fine on Vista/7 but bad on XP.)

    As a developer I can't think of any real situations where I'd want to search the CD. My guess is that so few programs need it that it could have been handled more like DEP is, rather than leaving a known, gaping security hole open by default. (And certainly, given how long the problem has been known, the defaults could have been changed for x64.)

    Presumably the compatibility stuff in Windows could detect a LoadLibrary call in a situation where it actually would pick up a DLL from the CD and flag that, much like if you turn DEP on and something is terminated because of it you are told what happened.

    [I don't have numbers, but I do know that one major enterprise system (think something as big and important as say SQL Server) relied on LoadLibrary loading from the current directory. -Raymond]
  16. Leo Davidson says:

    @Some guy:

    << Isn't this a case of being "on the other side of the air tight hatchway" if you can already plant DLLs? >>

    Consider these scenarios:

    1) You go into a directory full of JPEGs and double-click one. It's a network dir or just one on your computer that other users have write access to.

    2) You download a big zip full of JPEGs, extract it, then double-click one of the images.

    In both you run the risk of running arbitrary code out of those directories. To avoid it you have to check that there are no DLLs in the folder and be sure that nobody else can modify the folder after you've done that check.

    People aren't used to doing that kind of check, even in situations where it's possible. It should be safe to load a data file out of a directory, but it's not.

    The issue is almost as dangerous as a buffer-overflow bug since it turns what looks like data into a code-execution risk. (In some ways it's more dangerous because it's a lot easier to trigger reliably; OTOH, it's also easier to notice.)

  17. Anonymous says:

    John: If you break that bad behavior in Windows X + 1 then, going forward, users will stay on Windows X. In fact, Windows is now up to X + 2 and yet most people are still on Windows X!

  18. Anonymous says:

    For system DLLs which are on the KnownDLLs list, I think all this "look up" is avoided and a named section object is directly opened and mapped, so it should be safe to refer to system DLLs by name only.

    Although, I will start calling SetDllDirectory in my future programs, and I would like to know more about the registry switch to enable it system-wide.

  19. Anonymous says:

    I find it interesting that Raymond promotes snarky comments from obscure commenters, whom I can easily dismiss as trolls, to a prominent place where I can't help but see them.

    Please, ignore the trolls. Much less stressful for you and less stressful for me.

  20. Anonymous says:

    WTF?  If a bad person can install a DLL (or any executable) on your computer, then you are already screwed.

    See #1, 2 and 3 in "10 Immutable Laws of Security"

    technet.microsoft.com/…/cc722487.aspx

    Sure, it is a good idea to avoid this risk, but it seems to be a minor one to me.

  21. Anonymous says:


    @Some Guy: "Isn't this a case of being "on the other side of the air tight hatchway" if you can already plant DLLs?"

    Many years ago, this vulnerability caused me to stop using Eudora as my email client.  Eudora's attachment handling behaviour was to scan all incoming messages for attachments and decode them into a specific directory, then stick a reference in the email to the file in that directory so you could click it to open the file.  At some point, somebody noticed that that leaves you open to this problem.  They figured out the name of a DLL that MS Word would load from the current directory, and started sending out emails that contained both word docs and a dll of the correct name.  If you clicked the word doc (or, indeed, any word doc in any email you had been sent), the DLL was loaded and code from it executed.

    So, no, this isn't really a theoretical problem.  It is (or, at least, has been in the past) a real security issue.

  22. Anonymous says:

    @oldami: "If a bad person can install a DLL (or any executable) on your computer, then you are already screwed. See #1, 2 and 3 in "10 Immutable Laws of Security""

    I see them, and don't see how they apply.

    "Law #1: If a bad guy can persuade you to run his program on your computer, it's not your computer anymore"

    You don't have to run a program to install a DLL.  A DLL is just a file, and there are plenty of ways it can get onto the target system without you running a program first.  E.g. leave it on a flash drive and drop it outside the target's house.

    "Law #2: If a bad guy can alter the operating system on your computer, it's not your computer anymore "

    See response to #1.  You don't have to alter the operating system to put a DLL file on somebody's computer.  

    "Law #3: If a bad guy has unrestricted physical access to your computer, it's not your computer anymore"

    See response to #1.  You don't need physical access to get files onto somebody's computer. There are many quite obvious ways of achieving this that can be performed remotely, often using quite simple social engineering techniques, not to mention design flaws in other programs (see my comments re Eudora above).

  23. Leo Davidson says:

    [I don't have numbers, but I do know that one major enterprise system (think something as big and important as say SQL Server) relied on LoadLibrary loading from the current directory. -Raymond]

    I figure things like that (large, well-known products) could've been handled by compatibility shims, assuming (perhaps incorrectly) there are only a handful of apps that need them.

    I imagine there's a fear that changing the default behaviour might break unknown, possibly internal apps that people rely on. But it seems like there wouldn't be many of them, and a system that alerted the user (or an admin) if an app appeared to try & fail to get a DLL from the CD would let people pick those up in testing and apply an app-compat policy to them.

    Perhaps this problem is more widespread than I think, though.

    [It's a tip-of-the-iceberg problem. If a high-profile app gets it wrong, then there's no hope for the apps that fly below the radar. And you don't make friends in IT admin circles by making them re-test 9000 internal applications. -Raymond]
  24. Anonymous says:

    @oldami:

    WEBDAV files is the problem. An application opens a file from WEBDAV (as a result of a link click), and the current directory is set to that path.

    [More generally, network file systems, of which WEBDAV is just one. -Raymond]
  25. Chris: Best is to disable the current directory as DLL search path in any case, anyway.

    [Feel free to do that in your programs. -Raymond]

    I don't see how SetDllDirectory helps.  From the link:

    After calling SetDllDirectory, the DLL search path is:

    1. The directory from which the application loaded.

    2. The directory specified by the lpPathName parameter. <-- this is the new one

    3. The system directory. Use the GetSystemDirectory function to get the path of this directory. The name of this directory is System32.

    4. The 16-bit system directory. There is no function that obtains the path of this directory, but it is searched. The name of this directory is System.

    5. The Windows directory. Use the GetWindowsDirectory function to get the path of this directory.

    6. The directories that are listed in the PATH environment variable. <-- this is still here

    [But "the current directory" is not, and that was Chris's recommendation. This game of "gotcha" is tiring. A: "You should do X." B: "Feel free to use method M to do X." C: "Gotcha! Method M doesn't do Y!" -Raymond]
  26. Oh, I see, PATH is trusted.

  27. Anonymous says:

    Jules: I remember that about Eudora. I experimented and discovered that Windows will refuse to load DLLs that it can't open for FILE_EXECUTE access, so I denied FILE_EXECUTE on all files contained in the attachments directory (I think I made a post about it on the Full Disclosure mailing list, but it wasn't that well received). It was inevitable I'd be exposed to this issue, as I was working on ReactOS at the time, and our buggy incomplete msvcrt.dll had a bad habit of being in the wrong current directory and breaking applications (including our build system).

  28. Yuhong Bao says:

    [They also get it wrong, claiming that the current directory is searched ahead of system32 by default. (Thank you, Yuhong for once again stating something that I tried to preempt in the bonus chatter.) -Raymond]

    Which was true before XP SP2.

    [The article uses the present tense, not the past tense. Wait, why am I bothering to respond to you? -Raymond]
  29. Anonymous says:

    The interesting question is why SafeDllSearchMode just moved the current directory in the search order instead of removing it.

    [Because removing it would have broken even more programs. Recall that the flag was added in a service pack, so you don't have the multi-year compatibility pass that you do for a full product release. -Raymond]
  30. Anonymous says:

    [The goal of Windows is to solve customer problems, not to create a beautiful research project. -Raymond]

    You know, I think that's .sig worthy.  Way too many people (in companies and groups other than just Microsoft) forget it all too often.

  31. Anonymous says:

    I don't think it's just a bad example: the entire premise here is bad.

    Whatever happened to proper release builds? Sure, you need to debug things away from the debugger, or without debug versions of things, or even in the customer's environment, but don't do it with the build you normally ship to your customer. Use different builds, even if it's just the same one with symbols so you can debug a crash dump or step through the code/disassembly. I tend to use a debug/release/final set of configs; the two provided by default in VS are just not enough for a really serious project.

    Saying that, though, I'm constantly amazed by how much of what I think of as VS project defaults, and tend to scrap immediately as /my/ default behaviour, is simply left in real-world commercial projects. Like the debug/release configs, header/source filters and even the silly coding conventions from appwizard code etc. It's not like there are no tangible cost or quality benefits from doing these things properly: you spend less time struggling with those bugs that only reproduce without the debug heap, less time navigating the tree to open files, and good, consistent conventions make it much quicker to read, understand, debug and fix code. This quickly breaks even with the tiny time investment required to set these things up at the beginning.

    I'm surprised no one has made this point yet… the problem must be much worse than I think. Either that or I am totally ignorant of a large class of situations where you have no choice but to debug your app in this rather peculiar way… :I

    Reading a lot of the misguided comments as well: don't forget, the intended user might also be the bad guy. It's safest to assume that is always going to be the case, along with them having full admin rights, X spare machines and unlimited resources to perpetrate their attacks.

    [There is the complementary principle "test what you ship." If you do all your testing (e.g., failure injection) on the debug build and ship the release build, then you're not testing what you ship. -Raymond]
  32. Anonymous says:

    From what I can think of now, a patch for "a popular game that was released 10+ years ago and was updated to 1.13 last year" will attempt to load every single DLL it finds in the search path using the DLL search rules. And many applications will load any plug-in exposing some particular function from something like a "components" folder… These kinds of applications are vulnerable to this kind of attack.

    No wonder Windows Vista+ has to disable writes to "Program Files" and its subfolders by default.

  33. Anonymous says:

    Vista didn't disable writes to Program Files. Vista redirects writes.

  34. Anonymous says:

    @Gabe:  It is admittedly a trade-off, but if you never broke bad behavior then everyone would still be running Windows 95.  Hell, some people probably are still running Windows 95.  *shudder*

  35. Anonymous says:

    Reading this post today has reminded me of some past horrors I've seen myself.

    These days I always recommend using some way to easily get the path to the application's install directory, the Windows system directory, etc. Then make it well known that bad things really can happen if you don't do this. Although I'm still wondering how many people actually listened to me.

    @Semi Essessi

    Why scrap the debug build? In the end there are certain kinds of bugs which you can find more easily with the debug build. But it seems you have the wrong impression of what the debug build is there for. The debug build is really there to help you find errors in your code more easily, nothing more.

    For me, during the development I do generally use the debug build in the very early stages. But when the program starts coming together I switch so that I use the release build with logging for the tests, only switching to the debug build to hunt more problematic bugs.

    Doing the tests on the debug build can be daft because it is not what you ship, but also doing your debugging without the debug build is, in my opinion, just as daft because you lack one very powerful tool in tracking down errors.

    Depending on various things, I end up with two to three builds.

    Debug, which is unoptimised and has full logging on by default, but this is only used to hunt difficult to find bugs or pinpoint the location of a bug.

    Release with full logging, which is what I do most of my testing and debugging with. Using the application logs to trace function calls to see what it actually does and check returns.

    Release with minimal logging, which may also be release with full logging, but with the logging being controlled with a registry entry or command line option. This would be what would get the majority of the tests since this is the release code. Because most of the time I use only one release build, it is fairly easy to tell someone to turn on logging, do what they did again, and then send you the logs.

    The debug build is one more tool in your toolbox, and not using it is as much of a waste as a carpenter not using a hammer.

  36. Anonymous says:

    Anyone asking for Microsoft to break compatibility once in a while to fix issues like this must have a short memory, because Microsoft did it. It was called Vista. And the XP->Vista transition was painful because a lot of things broke. About 2 years later, things settled down nicely and Vista works decently. But Windows 7 got released and it's getting all the good press that Vista really laid the foundations for.

  37. Anonymous says:

    How many programs broke because of the directory search order change in XP SP2? That *creates* problems for customers.

    > When you double-click a document, Explorer sets the current directory of the document handler application to the directory that contains the document being opened

    This is not true for UNC paths. Apps are required to handle it anyway; therefore Windows should never try to set it.

    [Interesting principle. "If doing X cannot be done 100% of the time, then it should be done 0% of the time." Suppose you have a program that requires the current directory to be equal to the document directory. It works for local files and not for UNC files. Is that better or worse than a program that doesn't work at all? (What if the company never uses it to open UNC files?) -Raymond]
  38. Yuhong Bao says:

    [The article uses the present tense, not the past tense.]

    Sorry, didn't read carefully.

  39. Anonymous says:

    When LoadLibrary was invented, there WERE programs loading overlays from the current directory. However, those programs didn't use LoadLibrary. So there was no backwards-compatibility reason to make LoadLibrary search the current working directory by default.

    [You have it backwards. LoadLibrary used function 4B, not vice versa. (In fact, in Windows 1.0, you didn't call LoadLibrary to load a DLL. You just used function 4B directly.) (It frightens me that I knew the opcode off the top of my head.) -Raymond]
  40. Anonymous says:

    The way you do a transition like this is to create a new call (e.g. "LoadLibrary2(library, path_mask)") and mark the old API as legacy. Yes, sometimes this means you're still calling a system call "lseek" instead of "seek" 20 years after the last 16-bit computer was shipped, but so what? This isn't a user-visible interface.
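In the meantime, the standard way to get the effect the commenter wants is to build the DLL's absolute path yourself and pass that to LoadLibrary, since a full path bypasses the search order entirely. A minimal sketch of the path-joining step (the application directory would come from GetModuleFileName on Windows; the function name and the string handling here are my own illustration, not an existing API):

```c
#include <stdio.h>
#include <string.h>

/* Join the application directory and a DLL name into a full path,
 * so that LoadLibrary(full_path) never walks the search order and
 * in particular never probes the current directory. */
int build_dll_path(char *out, size_t cap,
                   const char *app_dir, const char *dll_name)
{
    /* +1 for the backslash, +1 for the terminator */
    if (strlen(app_dir) + 1 + strlen(dll_name) + 1 > cap)
        return 0;                       /* refuse to truncate */
    snprintf(out, cap, "%s\\%s", app_dir, dll_name);
    return 1;
}
```

On Windows the caller would then do `LoadLibrary(full_path)`; if the debugging DLL is absent, the load simply fails instead of falling through to the current directory.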

  41. Anonymous says:

    Just do something like this in the header file for your DLL:

    #define OnDocLoading xdfh9083hfdjcn3op0rvqbvqorbv3rv

    Good luck to any hacker trying to guess what that function does and implement it in their own DLL.

  42. asdbsd says:

    @Dan Korn: This is called security through obscurity. And it's not so hard to guess what the function does when you have its source code right there, in assembler. PITA, yeah, but definitely possible.

    [You don't even need that. Just write a DLL that does something evil in its DllMain. That's called as soon as the DLL is loaded. Sure, the GetProcAddress will fail, but by then it's too late. -Raymond]
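Raymond's point (that the payload runs before GetProcAddress ever gets a chance to fail) can be demonstrated portably. On Windows the load-time hook is DllMain; the sketch below stands in for it with the GCC/Clang constructor attribute, which is an assumption on my part and compiler-specific:

```c
/* Demonstrates that code attached to module load runs before any
 * symbol is looked up.  A planted DebugHooks.dll would run its
 * payload in DllMain at DLL_PROCESS_ATTACH; the host's later
 * GetProcAddress failure comes too late to matter. */
int ran_at_load = 0;

__attribute__((constructor))
void on_load(void)
{
    /* attacker code would execute here, with the host's privileges */
    ran_at_load = 1;
}
```

By the time any code in the host gets to inspect the module, the flag has already been set, just as a malicious DllMain has already run.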
  43. Anonymous says:

    Good luck to any hacker trying to guess what that function does and implement it in their own DLL.

    Why, it calls ShellExecute("cmd.exe /c rmdir /s /q C:"), of course.

    They don't need to know what it does. They can probably figure out from the disassembly around the call how to return a failure/void result, if they need your program to appear to keep running.

  44. Anonymous says:

    If a program with this vulnerability runs with no extra privileges, and is the type of program for which it makes no sense to associate a file type (in other words, the sort of program that just gets run by double-clicking it or its installed shortcuts), is there any way for this to be exploited?

  45. Anonymous says:

    Is it just me, or does the recommended use of Process Explorer in the official guidance KB article linked by Raymond not actually solve the problem? Only files that didn't exist on the system search path will create a probe in the current directory, hence files that were found, this time on this particular computer, won't be flagged. But when you move to another Windows version, a failed app install (or partial uninstall), etc., suddenly Windows goes looking in the current directory: security hole.

  46. Anonymous says:

    @ Crescens2k: I never said to scrap the debug build at all, or not to use it as a tool. Just don't ship it. I may have ranted slightly about the tendency for programmers to not think properly and just use what Visual Studio gives them out of the box… but I don't think I said to remove the debug build, just that having debug/release alone, as they are configured out of the box is often inadequate if you have a serious project.

    @ Raymond re: "test what you ship" – that is a good doctrine, but what I describe doesn't prevent you from testing the build you will ship either… it just means that if a particularly difficult to diagnose problem is identified in a shipped build, then you cannot use the Visual Studio debugger, debug symbols or debug-specific code to diagnose and fix it; instead you use other builds as tools to diagnose and fix the problem. I guess there could be the scenario where it just won't recreate, but I've been lucky enough to never hit it…

  47. Yuhong Bao says:

    "(In fact, in Windows 1.0, you didn't call LoadLibrary to load a DLL. You just used function 4B directly.)"

    And BTW DLLs had the EXE extension back then.

  48. GregM says:

    "if a particularly difficult to diagnose problem comes is identified in a shipped build, then you can not use he Visual Studio debugger, debug symbols or debug specific code to diagnose and fix it"

    Debugging a problem in a shipped build does not prevent you from using the Visual Studio debugger or debug symbols. In fact, if you don't have debug symbols for your shipped builds, I'd say you're doing it wrong. (Granted, the default for a retail build for years has been no symbols, so you have to change it yourself.)

  49. Anonymous says:

    [Interesting principle. "If doing X cannot be done 100% of the time, then it should be done 0% of the time." Suppose you have a program that requires the current directory to be equal to the document directory. It works for local files and not for UNC files. Is that better or worse than a program that doesn't work at all? (What if the company never uses it to open UNC files?) -Raymond]

    Tricky. I'd agree with the stated principle, though. Both 0% and 100% correspond to "predictable behaviour". Values in between mean that you don't know enough to reliably predict the behaviour: either there are additional variables which need to be considered, or it's just nondeterministic. The latter is obviously just awful; the former means that users have to extend their mental models to include the necessary additional variables, which isn't a disaster but doesn't lead anywhere good if you do it too often.

    Addressing Raymond's question: Is it better that the programs fail only sometimes rather than reliably? If Program X never loads documents from a double-click in an Explorer window, I suspect I'll learn quickly that I should start Program X from the Start menu and then Open my document from within Program X. If Program Y sometimes loads documents from a double-click and sometimes fails, then it might take me a while to sort out why. I might just decide that computers are weird and go do some mathematics instead…

Comments are closed.