Using delayload to detect functionality is a security vulnerability


We saw last time that your debugging code can be a security vulnerability when you don't control the current directory. A corollary to this is that your delayload code can also be a security vulnerability, for the same reason.

When you use the linker's delayload functionality to defer loading a DLL until it is first needed, the linker injects code which calls LoadLibrary the first time you call a function in that DLL, and then calls GetProcAddress to resolve the functions you requested. If the delayload code cannot obtain a function pointer (either because the DLL loaded but the function does not exist, or because the DLL never got loaded in the first place), it raises a special exception indicating that the delayed load failed.
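
To make the mechanics concrete, here is a heavily simplified sketch of what that injected helper does on the first call. This is not the actual CRT source (the real helper, __delayLoadHelper2, does considerably more bookkeeping), and the DelayedDll structure is made up for illustration; the VcppException codes, however, are the ones defined in delayimp.h.

    #include <windows.h>
    #include <delayimp.h>   // defines FACILITY_VISUALCPP and VcppException

    // Illustrative stand-in for the per-DLL bookkeeping the real helper keeps.
    struct DelayedDll { const char* szDllName; HMODULE hmod; };

    FARPROC ResolveDelayLoadedImport(DelayedDll* dll, const char* procName)
    {
        if (dll->hmod == NULL) {
            // The DLL name is the bare name recorded in the delay-load import
            // table, so the full LoadLibrary search order applies -- including
            // the current directory if the earlier locations come up empty.
            dll->hmod = LoadLibraryA(dll->szDllName);
            if (dll->hmod == NULL) {
                RaiseException(VcppException(ERROR_SEVERITY_ERROR, ERROR_MOD_NOT_FOUND),
                               0, 0, NULL);
            }
        }
        FARPROC pfn = GetProcAddress(dll->hmod, procName);
        if (pfn == NULL) {
            RaiseException(VcppException(ERROR_SEVERITY_ERROR, ERROR_PROC_NOT_FOUND),
                           0, 0, NULL);
        }
        return pfn;  // the real helper also patches the thunk so later calls go direct
    }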

Let's look again at the order in which the LoadLibrary function searches for a library:

  1. The directory containing the application EXE.
  2. The system32 directory.
  3. The system directory.
  4. The Windows directory.
  5. The current directory.
  6. The PATH.

The code which implements the delayload functionality passes just the library name (a relative path) to LoadLibrary. (It has no choice, since all it has to work with is the library name stored in the import library.) Consequently, if the DLL you are delay-loading does not exist in any of the first four search locations, LoadLibrary moves on to location 5: the current directory.

At this point, the current directory attack becomes active, and a bad guy can inject an attack DLL into your process.

For example, the sample code below uses delayload to detect whether the functions in dwmapi.dll exist, calling them if they do. If the DwmIsCompositionEnabled function is not available, it treats desktop composition as not enabled. If the program runs on a system without dwmapi.dll, the delayload raises its exception, and the exception is caught and turned into a failure. Disaster avoided.
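
(The following is a minimal sketch of such code, not the original listing; it assumes DwmIsCompositionEnabled as the probed function and a module built with /DELAYLOAD:dwmapi.dll and linked against dwmapi.lib and delayimp.lib.)

    #include <windows.h>
    #include <dwmapi.h>
    #include <delayimp.h>

    // Recognize only the delay-load helper's failure exceptions; let anything
    // else propagate.
    LONG DelayLoadFilter(DWORD code)
    {
        if (code == VcppException(ERROR_SEVERITY_ERROR, ERROR_MOD_NOT_FOUND) ||
            code == VcppException(ERROR_SEVERITY_ERROR, ERROR_PROC_NOT_FOUND)) {
            return EXCEPTION_EXECUTE_HANDLER;
        }
        return EXCEPTION_CONTINUE_SEARCH;
    }

    BOOL IsCompositionEnabled()
    {
        BOOL enabled = FALSE;
        __try {
            // The first call into dwmapi.dll is what triggers the delay-load
            // helper's LoadLibrary("dwmapi.dll") -- and the search that goes with it.
            if (FAILED(DwmIsCompositionEnabled(&enabled))) {
                enabled = FALSE;
            }
        } __except (DelayLoadFilter(GetExceptionCode())) {
            enabled = FALSE;  // DLL or function missing: treat composition as not enabled
        }
        return enabled;
    }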

But in fact, the disaster was not avoided; it was introduced. If you run the program on a system without dwmapi.dll, then a bad guy can put a rogue copy of dwmapi.dll into the current directory, and boom, your process just loaded an untrusted DLL. Game over.

Using the delayload feature to probe for a DLL is morally equivalent to using a plain LoadLibrary to probe for the presence of a debugging DLL. In both cases, you are looking for a DLL with the expectation that there's a good chance it won't be there. But it is exactly in those "sometimes it won't be there" cases that you become vulnerable to attack.

If you want to probe for the existence of a DLL, then you need to know which directory the DLL should be in, and load the DLL by its full path in order to avoid the current directory attack.
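
A minimal sketch of that approach, for the case where the DLL, if present, lives in the System32 directory; the helper name is illustrative and error handling is kept to a minimum.

    #include <windows.h>
    #include <strsafe.h>

    // Probe for an optional DLL by building its full path explicitly, so the
    // current directory never participates in the search.
    HMODULE LoadSystemDll(const wchar_t* dllName)
    {
        wchar_t path[MAX_PATH];
        UINT len = GetSystemDirectoryW(path, MAX_PATH);
        if (len == 0 || len >= MAX_PATH) return NULL;

        if (FAILED(StringCchCatW(path, MAX_PATH, L"\\")) ||
            FAILED(StringCchCatW(path, MAX_PATH, dllName))) return NULL;

        // Absolute path: the DLL search order is not consulted at all.
        return LoadLibraryW(path);
    }

If LoadSystemDll(L"dwmapi.dll") returns NULL, the DLL simply isn't there, and no attacker-controllable directory was ever consulted.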

On the other hand, if the DLL you want to delayload is known to be installed in a directory that comes ahead of the current directory in the search path (for example, you require versions of the operating system in which the DLL is part of the mandatory install, and the directory in which it is installed is the System32 directory), then you can use delayload.

In other words, you can use delayload for delaying the load of a DLL that you know is there. But if you're using delayload to probe for a DLL's existence, then you become vulnerable to a current directory attack.

This is one of those subtle unintended consequences of changing the list of files included with an operating system. If you take what used to be a mandatory component that can't be uninstalled, and you change it to an optional component that can be uninstalled, then not only do programs which linked to the DLL in the traditional manner stop loading, but you've also introduced a security vulnerability: Programs which had used delayload under the calculation (correct at the time it was made) that doing so was safe are now vulnerable to the current directory attack.

Comments (45)
  1. Ivo says:

    Isn't the solution to call LoadLibrary with a full path before using any of the DLL's features? Then you get the benefits of delay-loading (saves you the trouble of messing with GetProcAddress) and you get full control of which DLL gets loaded.

    [This assumes you can identify all the places you use a feature (i.e., there's no obscure code path where a feature can get used without first funnelling through your "Load the library first" function.) Probably easier is to write a custom notification hook. -Raymond]
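
    A minimal sketch of the notification-hook approach, assuming the delayimp.h hook interface; the hook name and the load-only-from-System32 policy are illustrative:

        #include <windows.h>
        #include <delayimp.h>

        // For every delay-loaded DLL, load it explicitly from System32 so the
        // current directory is never searched.
        FARPROC WINAPI SafeDelayLoadHook(unsigned dliNotify, PDelayLoadInfo pdli)
        {
            if (dliNotify == dliNotePreLoadLibrary) {
                char path[MAX_PATH];
                UINT n = GetSystemDirectoryA(path, MAX_PATH);
                if (n > 0 && n + 1 + lstrlenA(pdli->szDll) < MAX_PATH) {
                    lstrcatA(path, "\\");
                    lstrcatA(path, pdli->szDll);
                    if (HMODULE hmod = LoadLibraryA(path)) {
                        // Returning the module tells the helper to use it as-is.
                        return reinterpret_cast<FARPROC>(hmod);
                    }
                }
                // Returning NULL here falls back to the default (unsafe) search;
                // a stricter hook might fail hard instead.
            }
            return NULL;
        }

        // VS2010-era delayimp.h declares the hook pointer as a writable global;
        // newer toolchains declare it const, in which case define it with an
        // initializer instead of assigning to it.
        struct HookInstaller {
            HookInstaller() { __pfnDliNotifyHook2 = SafeDelayLoadHook; }
        } g_hookInstaller;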
  2. pupu_platter says:

    What if you call GetModuleFileName after using DelayLoad or LoadLibrary and check if the directory is legit?

    This way you can also compare the module's directory to the current directory. If they are the same, and the current directory is some random location on the harddrive, you can suspect your program was attacked.

    [Um, it's kind of too late after you load the DLL. -Raymond]
  3. Joshua says:

    Ugh. Time to break backwards compatibility and stop loading DLLs from the current directory.

    [not only do programs which linked to the DLL in the traditional manner stop loading, but you've also introduced a security vulnerability]

    No, Microsoft introduced this security vulnerability. And since developers cannot defend against this one in all cases, only Microsoft can fix it.

  4. Todd Greer says:

    I presume that the way to use a custom notification hook here is to hook the dliNotePreLoadLibrary notification, try to load the DLL from a safe place, then throw an exception if it fails (sounds like a good approach). MSDN doesn't state whether it's ok to throw an exception from a custom notification hook. Is this documented anywhere?

  5. Ben Voigt says:

    @Todd: I seem to recall that the delayloader hooks provide a way to report failure (e.g. returning a NULL pointer instead of an HMODULE).  And the delayloader will translate that into an exception (possibly language-appropriate, probably SEH, depends on your compiler since it provides the delayloader).

  6. Todd Greer says:

    @Ben: The notification and failure hooks can only return 0 (meaning "proceed as usual") or a handle (meaning "I've done the work; just use this handle"). While a 0 from a failure hook will let the process continue to fail, the only way for a notification hook to fail is to throw an exception.

    The error hooks are specifically mentioned as being OK to throw exceptions from, but the documentation doesn't mention exceptions from notification hooks. I've submitted a clarification request to MSDN.
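
    For reference, a minimal sketch of a failure hook and its return-value contract, assuming the delayimp.h interface; the retry-from-System32 policy is illustrative:

        #include <windows.h>
        #include <delayimp.h>

        // Called by the delay-load helper only after its own attempt has failed.
        FARPROC WINAPI RetryFromSystem32Hook(unsigned dliNotify, PDelayLoadInfo pdli)
        {
            if (dliNotify == dliFailLoadLib) {
                char path[MAX_PATH];
                UINT n = GetSystemDirectoryA(path, MAX_PATH);
                if (n > 0 && n + 1 + lstrlenA(pdli->szDll) < MAX_PATH) {
                    lstrcatA(path, "\\");
                    lstrcatA(path, pdli->szDll);
                    if (HMODULE hmod = LoadLibraryA(path)) {
                        // A non-NULL return from dliFailLoadLib hands the helper
                        // a module to continue with.
                        return reinterpret_cast<FARPROC>(hmod);
                    }
                }
            }
            // NULL from dliFailLoadLib / dliFailGetProc lets the helper raise its
            // usual ERROR_MOD_NOT_FOUND / ERROR_PROC_NOT_FOUND exception.
            return NULL;
        }

        // Installed via: __pfnDliFailureHook2 = RetryFromSystem32Hook;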

  7. TheHims says:

    What happens when a program that is trusted by the user, and lives in %ProgramFiles%, is copied by a malicious exploiter to %Temp%, and run from there, with copies of kernel32, etc? I guess the program can check that it should only run from a protected location such as %programfiles%, but to check that, does it not need to load kernel32? Is there a way to protect against this?

  8. TheHims says:

    Just to clarify the previous comment on the trusted program: this becomes more interesting in a UAC situation, where the malicious exploiter is trying to elevate %Temp%\trusted.exe. The elevation prompt does not show the program location by default, so the user may just hit "Yes".

    [You're already on the other side of the airtight hatchway. -Raymond]
  9. a random passerby says:

    Does this vulnerability even need the existence of an optional or delay-loaded DLL? Why couldn't an attacker rename an existing DLL and replace it with one of their own which simply emulates the DLL's API?

    [You would break the application. -Pithy commenter]

    So I use DllImport to load the original DLL and pass calls along to the original DLL, in addition to whatever I choose to do to exploit the other application, or intercept its data, or whatever devious thing I wanted to do.

    [The difference is that renaming a DLL requires privileges beyond control of the current directory. -Raymond]
  10. Joshua says:

    @TheHims: then well you just got busted. FYI, kernel32 and a few others cannot be attacked but the various MSVC* dlls can be attacked just fine. Since UAC is bustable out of the box now anyway, I just don't care about that one.

  11. Windows *must* be modified to prevent DLLs loading from CWD says:

    Loading DLLs from the current directory is so uncommon, and such a pernicious security vulnerability, that it should be disabled by default. A system-wide enabling switch similar to the existing DLL load order one combined with an appcompat shim would be sufficient for backward compatibility. Security should trump compatibility.

    [The hard part is identifying all the programs that require the shim. Most of which are probably not commercially available. Result: Company has to test 9000 internal applications. -Raymond]
  12. Michael says:

    Once this potential security issue was identified, why didn't the OS add an option to LoadLibraryEx, like LOAD_LIBRARY_NO_CURRENT_DIR? This would let delayload avoid this problem altogether on a newer OS, and an old OS would simply ignore the unknown flag.
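
    (For reference: later Windows updates did add LoadLibraryEx flags along these lines, such as LOAD_LIBRARY_SEARCH_SYSTEM32. A minimal sketch, assuming an OS and SDK new enough to support that flag:)

        #include <windows.h>

        // Load a DLL searching only %windir%\System32 -- not the application
        // directory, the current directory, or the PATH. On systems without
        // support for the LOAD_LIBRARY_SEARCH_* flags, LoadLibraryEx fails
        // with ERROR_INVALID_PARAMETER rather than ignoring the flag.
        HMODULE LoadFromSystem32Only(const wchar_t* dllName)
        {
            return LoadLibraryExW(dllName, NULL, LOAD_LIBRARY_SEARCH_SYSTEM32);
        }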

  13. JonPotter says:

    "Those people are indirectly saying "Microsoft should introduce a new security vulnerability."

    Would this be any worse than intentionally leaving an existing security vulnerability unfixed?

  14. Leo Davidson says:

    [Actually, I was directing it to people who say "Microsoft should make X an optional component." Those people are indirectly saying "Microsoft should introduce a new security vulnerability." -Raymond]

    If a DLL was removed from a future version of Windows couldn't it just be added to a list of DLLs which are never to be loaded from the CD?

    (Not sure if the existing KnownDLLs list would do that already — it's usually used to list things which do exist, not which do not — but there could be another list if needed.)

    Similarly, with the Vista/7 DWM DLL being a common one to attack, Microsoft could roll out a patch to XP which added that DLL to a list, although I guess you'd risk breaking apps which loaded their own DLL with the same name. (They'd break anyway when upgrading to Vista, though… Actually, doesn't that mean every single time a new system DLL is added to the OS people have to re-test their 9000 apps? Don't people have to do that anyway with every OS update, realistically?)

  15. Brian says:

    I agree that this should be treated on the same level as Data Execution Prevention (DEP).  If a program attempts to load a DLL from the current directory, a dialog box should pop up which states, "Program X attempted to load component Y from <Current Directory>.  Please contact the vendor for an update or click here for more information."  Clicking here would say something like "add the directory to the path" or "copy the dll to the executable folder"

  16. Mijzelf says:

    When using C++ DLLs, the only way to handle a nonexistent DLL is to delay-load it, AFAIK. While it *could* be possible to import class interfaces manually, I wouldn't know how. So I suppose calling SetDllDirectory("") is the only way to be safe, then. (Which function has to be imported manually, because it's not always there.)
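
    A minimal sketch of that manual import, assuming SetDllDirectoryW exported from kernel32.dll (present from Windows XP SP1 onward, which is why the dynamic lookup is needed); the wrapper name is illustrative:

        #include <windows.h>

        typedef BOOL (WINAPI *PFN_SetDllDirectoryW)(LPCWSTR);

        // Remove the current directory from the DLL search path, if the OS supports it.
        BOOL RemoveCurrentDirectoryFromDllSearch()
        {
            HMODULE kernel32 = GetModuleHandleW(L"kernel32.dll");
            if (!kernel32) return FALSE;

            PFN_SetDllDirectoryW pSetDllDirectoryW =
                reinterpret_cast<PFN_SetDllDirectoryW>(
                    GetProcAddress(kernel32, "SetDllDirectoryW"));
            if (!pSetDllDirectoryW) return FALSE;  // function not available on this OS

            // An empty string removes the current directory from the search order.
            return pSetDllDirectoryW(L"");
        }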

  17. Random832 says:

    "Delayload is not for detecting functionality." You know that and I know that, but it won't stop being used that way until it stops working.

    It's simple. When the program starts, run through the list of delayloads and check each one for existence [without actually loading it, so as not to defeat the supported purpose]. If any of them does not exist, crash the app.

    [And then you get complaints from people whose perfectly correct app now crashes. "I do an OS check before calling into a delay-loaded dwmapi.dll, and now you just crash my app outright even though I'm doing the right thing." Can you guys go talk to the people who hate DPI virtualization and tell them to shut up? -Raymond]
  18. 640k says:

    Why do 64-bit dlls also load from current dir? If those who developed the first version of 64-bit windows had fixed this flaw once and for all, these problems would not exist in the future. Now we have to wait for 128-bit windows to fix this security flaw.

    [Porting to 64-bit is hard enough. You don't want to compound the problem by adding a bazillion subtle breaking changes. -Raymond]
  19. Whole says:

    A query. I understand why the current directory is a bad idea, but what about the PATH? The PATH often contains directories which are writable by everybody, so one could place a DLL in such a directory and execute the same attack against a program.

    [If an attacker can write to a directory on the PATH then you have the same problem. So don't add an attacker-writeable directory to the PATH. -Raymond]
  20. Random832 says:

    @Leo Davidson

    You may be right about that; it took until UAC was introduced for a lot of developers to actually start thinking about least privilege. Human nature is to stick with what is known until they are forced to change, and then they change but complain loudly.

    But still, complaining about it in these comments won't go towards changing anything. The better thing to do is rally people together and have everybody write well-constructed arguments to the relevant teams through email.

    @Raymond

    At least that application had some sense.

  21. Marquess says:

    “I agree that this should be treated on the same level as Data Execution Prevention (DEP).”

    You do realize that DEP is set to opt-in for Client editions of Windows 6.x?

  22. Jim says:

    Sorry for the off-topic comment, but you (Mr. Chen) are a hard man to get in contact with (by design, I assume).

    About your new c++ scratch program:

    Did you know that it has the behavior of getting flooded with WM_GETICON messages due to how the RegisterClass structure is set up?

    Is this a bug on Windows' part, or is it normal (by design) behavior to flood a program like this when the RegisterClass structure is set up like that?

    Thanks for understanding and for answering (hopefully)

    Jim

  23. Adam V says:

    @Joshua:

    He wasn't talking about *you*. The paragraph in question was talking about "changing the list of files included with an operating system". I think it's safe to assume Raymond is pointing this comment at Microsofties, not regular developers.

    [Actually, I was directing it to people who say "Microsoft should make X an optional component." Those people are indirectly saying "Microsoft should introduce a new security vulnerability." -Raymond]
  24. Nawak says:

    Once again, Raymond left too many "obvious" questions unanswered for me and since I am not knowledgeable enough to know the answers to all these obvious questions, I have to ask them…

    DelayLoad is done by code placed there by the *linker*.

    Where's the backwards compatibility problem here? The code isn't produced yet, so can't you just remove the current directory right now and be done with it? The program won't work? Yes, but it won't work from the beginning of its life, therefore it'll get fixed.

    I assume that the "current directory attack" is newer than DelayLoad and that's why the people designing DelayLoad didn't think of explicitly removing the current directory from the search paths. OK, fragile programs are in the wild. I would have hoped the article ended with the good news that new DelayLoad code has been introduced in VS2011 and that this problem will eventually fade away.

    And I would have hoped that tests done by MS would have shown that when DelayLoad is used, it's not by those pesky LitWare folks always producing bug-riddled applications, and that in all the cases that matter for MS's clients, a shim could be introduced to patch the old, CurrentDir-loving DelayLoad with the shiny new VS2011 DelayLoad at runtime.

    It's apparently not the case, and I wish Raymond had explained why it has not been done.

    [The explanation is in the subject line. Delayload is not for detecting functionality. It's for loading a DLL that you already know is there. -Raymond]
  25. Snoshy says:

    [Actually, I was directing it to people who say "Microsoft should make X an optional component." Those people are indirectly saying "Microsoft should introduce a new security vulnerability." -Raymond]

    That seems a bit facetious; the act of making it optional is not the one introducing the security vulnerability.  The vuln is introduced by incorrect usage of delay loading.  Which brings me to the next point…

    I see no reason for this discussion to be limited to the _detection_ of a delay loaded DLL.  This is just a specific case of the more general problem that when you're delay loading a DLL without a fully-qualified path, you're introducing a security vulnerability unless you *know* that the required DLL exists in one of the locations 1-4 in your list.  You've already just mentioned this point in the comment above, but the title of the post is still misleading.

    [If DLLs can move from the "guaranteed to exist" to the "not guaranteed to exist" category, then you will never *know* that a Windows-provided DLL exists in one of the locations on the list. (And it directly relates to the subject line, because if you *know* that the DLL exists, then you don't need to detect whether it exists.) -Raymond]
  26. Joshua says:

    [The Enterprise class application didn't take any chances. it called "SetCurrentDirectory('extension-directory')" before loading via relative paths (and all this happened before it spawned any worker threads). -Raymond]

    The way I read the document, loading via relative path is the same as by absolute path; it will not be searched. Oh, but you mean it loaded with just a name and it was deliberately in that directory. I doubt it will take all that long for said enterprise customer to realize they need to shim that app.

    OK, this can't be done in a hotfix, but it should be safe for a service pack. It will cause a lot less trouble than XP SP2 did.

  27. Crescens2k says:

    @a random passer by

    If you could do that to an application, why would you do it? This would mean you have write access to the program's application directory, which would further imply bad administration. So it is quite possible that you either already allowed the bad person admin access to your system, or there is an easier way to attack.

    This is talking about on a secure system where you have your programs installed into program files or some other secure location and bad people can't just go in and change vital files.

    @Windows *must* be modified to prevent DLLs loading from CWD

    Ok, so you feel so strongly that you name yourself this way. First, complaining about it here isn't likely to get it fixed. You'd do better to give a well-constructed argument to something like the Windows security team.

    Secondly, is it uncommon? You may think it is uncommon because it is something you wouldn't do. But as Raymond said in yesterday's comments, an Enterprise class application used LoadLibrary on a relative path. So if it makes its way into enterprise class applications, then who knows how many smaller applications rely on this behaviour. It is so easy to use a relative path to a library, assuming you are still in the application's initial working directory, since developers assume they will be executed through the link in Explorer, not through the command line.

    Yes, this can be a vulnerability, but it is only one if the developer is sloppy, so instead of running to Windows in this way, developers should first fix their bad habits so Windows doesn't have to go cleaning up after them. As we should know by now, if you change something and an application breaks, the application isn't blamed; Windows is blamed.

    [The Enterprise class application didn't take any chances. it called "SetCurrentDirectory('extension-directory')" before loading via relative paths (and all this happened before it spawned any worker threads). -Raymond]
  28. Leo Davidson says:

    "Yes this can be a vulnerability, but it is only one if the developer is sloppy"

    I disagree. It's an issue unless the developer is *exceptionally* on top of things. And even then only if they can audit every piece of third-party code they load, including code which may run before control is passed to their own code.

    Spend five minutes running a few of the programs which ship with Windows, let alone things written by third parties, under Process Monitor and you'll find a hell of a lot of exes which accidentally probe the CD for DLLs. That even includes at least one exe configured to silently auto-elevate; one where you'd think a bit of extra attention to that sort of thing had been paid.

    Expecting developers to fix this in every app seems unrealistic. It's either going to stay an issue forever or be addressed by the OS itself one day.

    Maybe the answer is for Windows 8 to tell people if their apps are relying on this deprecated behaviour so people can find them but they don't actually break; then have Windows 9 actually disable the behaviour by default. Seems the only way to me.

    It also seems funny that the vulnerability is caused by apps doing exactly what they're being told not to do here (and elsewhere), and it's too risky to change the OS because lots of unknown apps could depend on it (although only one has been cited so far)… Aren't we then admitting that developers are never going to fix this themselves?

  29. Alex Cohn says:

    @Todd: you cannot use NULL, it's reserved; but you can use INVALID_HANDLE, which is -1. This should be a good indication for all practical purposes.

    And regarding PATH: any user can open command prompt, type

     set path=c:\hacked_dll_directory;%PATH%

    and start any application she has been granted access to. In this sense, CWD is not more dangerous. As with executables, Windows simply assumes that there is .; in front of the %PATH%.

  30. "As with executables, Windows simply assumes that there is .; in front of the %PATH%."

    This leads to similar security vulnerabilities (specify a full path to CreateProcess(…) or you might not get the notepad.exe you were looking for…)

    I will note in passing that Unix-ey OSes force you to type ./configure because . is not in the PATH by default (of course, you could add it to the path.)

  31. mpbk says:

    For a while now there has been a patch from Microsoft allowing you to remove the CWD from the list of directories that are checked.  We've rolled this out company-wide.  Surprisingly few things were broken.

    1) Outlook 2003 (not 2007 or 2010).  One DLL needed to show EML attachments didn't load.

    2) CAD package SolidWorks can't load some extensions.

    3) Google Chrome uses a bizarre method to load and update itself which relied on CWD and thus broke.

    By judicious use of registry workarounds provided by the patch, we were able to allow cases 1-3 above to use CWD so long as the CWD was on the local machine and not a remote share or webdav host.

    For those who say Microsoft should fix this, well, they've provided the tools for you to right here.  It's a nice FixIt package.

    support.microsoft.com/…/2264107

    [Interesting perspective. To me, three broken Tier-1 applications = horrible. That's three huge reasons why somebody won't upgrade. Most people need only one. -Raymond]
  32. Leo Davidson says:

    @mpbk: Unfortunately, the FixIt change only configures the registry to change what happens on network drives.

    A lot of people seem to ignore the local-drive scenario, which IMO is as much of a threat. (e.g. Download a zip with 1000 JPEGs and a DLL at the bottom, extract to a local folder, then double-click a JPEG without realising the DLL was even there.)

    Also, thanks for sharing your findings. There's so little actual hard examples about this stuff which makes it difficult to judge what impact the change would have. I've only had my own machine to go on, where nothing seems to have broken so far.

    Earlier today I ran into and was reminded of the one issue/annoyance after making the change (for all drive types): RegSvr32.exe now fails to find DLLs via relative paths; you have to give it an absolute path. That might break some (shoddy) install scripts.

    IMO, RegSvr32 should be updated to cope with this and at least allow "regsvr32 .\blah.dll" to work even if "regsvr32 blah.dll" still doesn't. Right now neither works.

  33. DWalker says:

    If a bad guy can write arbitrary code into your current directory, hasn't he already pwned your machine?  Or am I missing something (which is likely)?

  34. Random832 says:

    11 Nov 2010 4:17 PM is not me.

  35. Random User 423802 says:

    @Leo Davidson RE: regsvr32

    (Most) install scripts that use regsvr32 are, by association, shoddy. Self-reg has proven to be nothing but a different demon from the same old DLL Hell. Half the time I install such a "product", it decides to XCOPY and self-reg an older version of some COM component I already have on my system, and I have to go through and fix everything it broke.

  36. Wisefaq says:

    @Raymond

    "[Interesting perspective. To me, three broken Tier-1 applications = horrible. That's three huge reasons why somebody won't upgrade. Most people need only one. -Raymond]"

    Having done a few of these desktop deployments, the reason I've seen for an application stopping an upgrade is if there is no "free" workaround/fix available.  If the customer has to buy a newer version of a product, that will delay an OS upgrade.

    Case in point, I had a 700 seat WinXP upgrade halted because Outlook 97 would not work on WinXP.

  37. Leo Davidson says:

    @DWalker59: "If a bad guy can write arbitrary code into your current directory, hasn't he already pwned your machine?"

    The current directory isn't a protected location. (We're not talking about the application's directory, e.g. in Program Files.)

    A machine shouldn't be considered pwned the moment someone extracts a zip file on it, browses to a network drive or receives an email attachment. Doing any of those may land you in a situation where there's an EXE or DLL in the current directory. Unless you intentionally run that code you should be safe from it. The problem is that doing something else on some other file in the same directory (e.g. opening a JPEG) may cause you to inadvertently execute that code (which you may not even have noticed was there among the other files (especially if file extensions are hidden)).

  38. DWalker says:

    @Leo: Ah, that makes sense.  I wasn't thinking about Program files, but thanks.

  39. DWalker says:

    @WiseFaq:  Outlook 97 ran on Windows XP.  There were a couple of patches to Outlook 97 that helped.

  40. Stefan Kanthak says:

    [You're already on the other side of the airtight hatchway. -Raymond]

    But why does Windows allow users to pass this hatch? Or is Windows not airtight at all?

    %APPDATA% and %LOCALAPPDATA% (both say DATA) were introduced many years ago, but Windows still does no "chmod A-x" on these directories (yes, I know that the owner could change it back, so it's not fool proof). Heck, there are current Microsoft products which place DLLs in these paths, and neither QA nor the famous SDL catches up!

    OTOH: Windows XP introduced SAFER alias software restriction policies 9 years ago.

    Turn it on for ALL users [0], set its default level to "Disallowed" and have it check DLLs too, but remove *.LNK from the list of executable files (else shortcuts in the start menu won't work).

    @Brian

    This is (sort of) DEP extended into the filesystem: users can't write to %SystemRoot% and %ProgramFiles% which are the only directories where they can execute from (yes, I know that this is limited to the file extensions known to SAFER, but that list can be extended).

    BTW: this setting does NOT break "Windows Logo" compliant programs.

    Turn it on, and if some application breaks, ask their vendor/developer why they don't follow a now 15 year old guidance!

    [0] including Administrators, but change their %TEMP% and %TMP% to %SystemRoot%\TEMP then, else many installers won't work.

    @Leo Davidson

    Apropos sloppy coding: on NT 5.x almost all system binaries have their copy in the "DLLCache" directory. Just run all .EXE found in there (so "DLLCache" is the "application directory", i.e. #1 in DLL search path) and check which DLLs get loaded from "DLLCache" [1], although their path is VERY WELL known (hard-coded in SFCFILES.DLL).

    It's frightening!

    Defense in depth: nope! SDL: nope!

    When done, check (for example) the registry key [HKLM\Software\Microsoft\NetSh]:

    why are all DLLs specified with just their filename there, not their very well known pathname?

    BTW: don't try to fix this using REG_EXPAND_SZ and %SystemRoot%\System32 as prefix.

    Unfortunately some intern not aware of the "better" registry data types must have written the code in NETSH.EXE, so only REG_SZ and hard coded pathnames work.

    If this sounds common: back in 2000 there was a problem with WINLOGON.EXE executing the wrong %SystemDrive%\EXPLORER.EXE and %SystemDrive%\USERINIT.EXE (see MSKB 269049). The obvious fix, "use the pathname", led to MSKB 249321.

    Fortunately you can have BOTH errors fixed: change REG_SZ to REG_EXPAND_SZ and use %SystemRoot%\System32\userinit.exe!

    Unfortunately MSFT did it "the Microsoft way", and cast that in concrete: since Windows XP the registry entry does NOT support REG_EXPAND_SZ any more. Honi soit…

    [1] you can let SAFER do the work: turn it on and specify a logging file.

    [The original question was "What's to prevent an attacker from copying an EXE to another directory, planting DLLs in the new application directory, and then running it from there?" The answer is "Well, if you assume an attacker can do that, then the attacker is working waaay too hard. Just copy rogue.exe to the target directory under the name notepad.exe and run it." -Raymond]
  41. Stefan Kanthak says:

    [Just copy rogue.exe to the target directory under the name "notepad.exe" and run it. -Raymond]

    That's why I started with %APPDATA% etc. and the question "is Windows airtight": are "exec" rights in %USERPROFILE% really necessary for Windows' proper function?

    If not, why are they (still) granted in the first place?

    "drive by" malware abuses exactly this weakness.

    The "need to know" principle was well known long before DOS (OK, that had no rights and privileges) and Windows development started.

    In Server 2008 OTOH you have to enable almost every service you want to use, most of them are "off" by default or not installed at all.

    [Remove execute permission and see what breaks. (Answer: A lot of stuff.) I'm willing to be educated on how a Web site can execute stuff out of %APPDATA% without a security prompt first. -Raymond]
  42. Wisefaq says:

    @DWalker59

    "Outlook 97 ran on Windows XP.  There were a couple of patches to Outlook 97 that helped."

    Outlook 97 would corrupt roaming profiles under Windows XP.  I don't believe Outlook 97 was supported on WinXP, so we were not able to ask Microsoft for help.

    The workaround we used at the time was:

    1. Delete the roaming profile on the end users Windows XP PC.
    2. Delete the end user's roaming profile from the server.

    3. Reset the profuser.ini file for Outlook (this triggers the Outlook settings script to run at login, which sets up the client's Outlook settings)

    4. Get the user to sign onto a Windows NT4 PC so that their outlook settings will be created.

    This is because the only way for Outlook 97 to be configured into the user's profile is via a Windows NT4 logon.

    Outlook 97 will not work on Windows XP, unless the user has first logged onto a Windows NT4 PC.

    1. Once a Windows NT4 generated profile has been created for the end user, they are able to logon to the Windows XP PC.

    (until the end user's profile is corrupted, then you start at step 1 again).

    The workaround was expensive in terms of customer downtime, and our support time.

    We encouraged the customer to migrate to Outlook 2003.  Which they weren't able to do until they upgraded their Exchange servers (v5.5 -> something newer)

  43. Stefan Kanthak says:

    Gordon Fecyk published his advice and script there:

    http://www.antiwindowscatalog.com

  44. Stefan Kanthak says:

    [Remove execute permission and see what breaks. (Answer: A lot of stuff.) -Raymond]

    Which stuff (besides "directory traversal", which doesn't really matter) breaks?

    (Not only [*]) I did exactly this back in the last century under NT4 for the first time, and have done it ever since on my own systems, without breaking any applications.

    Since XP and its SAFER rules I turn them on in every system I install (the script home.arcor.de/…/XP_SAFER.INF is part of my unattended installation). Guess what breaks there: NOTHING! (OK, I had one user who ran this script report that PDFXChange Viewer stopped working on 7 x64. The cause: PDFXChange Viewer is a 32-bit application, but installs itself into %ProgramFiles%, not %ProgramFiles(x86)%. After manual installation into the correct path the application now works. Did I already state that I don't care when non Windows Logo compliant applications break?)

    [*] At least Gordon Fecyk http://www.pan-am.ca/ was the other one.-)

    [I'm willing to be educated on how a Web site can execute stuff out of %APPDATA% without a security prompt first. -Raymond]

    "drive by" downloads/malware works this way: first a stub is dropped below %LOCALAPPDATA% (to be precise: the "Temporary Internet Files" in case of Internet Explorer, the browser cache in case of 3rd party web browsers, and %TEMP% for other applications) and second some creative method used to let the OS execute this stub.

    Do you take an IExpress installer as PoC? (Packages built with IExpress.exe extract themselves to %TEMP%\<some_dir> and typically execute a "RunDll32.Exe AdvPack.Dll,LaunchINFSection <filename>.INF,DefaultInstall". The latter is run without any "security" prompt.)

    [Pretty much every installation program assumes that %TEMP% allows execution. Many auto-updaters make the same assumption. Also, per-user application installation needs execute permission on the user profile. (It is my understanding that these "creative methods" are themselves vulnerabilities, so removing execute from %TEMP% is a defense in depth issue rather than a primary defense measure.) -Raymond]
  45. Stefan Kanthak says:

    [Pretty much every installation program assumes that %TEMP% allows execution. Many auto-updaters make the same assumption. Also, per-user application installation needs execute permission on the user profile. (It is my understanding that these "creative methods" are themselves vulnerabilities, so removing execute from %TEMP% is a defense in depth issue rather than a primary defense measure.) -Raymond]

    Guilty^Wcorrect in all cases.-)

    I'm willing to sacrifice/trade-in "per-user application installation" for security.

    That's why I wrote

    | including Administrators, but change their %TEMP% and %TMP% to %SystemRoot%\TEMP then,

    | else many installers won't work.

    Apropos auto-updaters: most of them already need some "broker" process running with elevated rights. Take MSIEXEC.EXE as example, or "automatic updates".

    There's always a price to pay: if you let (l)users have full control then their PC will most probably get infested (take a look into your MUAs SPAM folder to see the result of that policy).

    OTOH: I've setup quite a number of Windows PCs with REALLY restrictive configuration for "Joe Average" in the last 12 years. These users typically fought malware before that, with AV, but without success. Afterwards, they NEVER had a problem with malware: the method(s) I describe here clearly seem to be^W^W^WARE an effective (counter) measure, far superior to AV software.

    The only problem(s) my "victims" have to fight from time to time are badly written applications/installers, which most often just don't comply with "Windows Logo" (HP printer drivers are a notorious example).

Comments are closed.