Solutions that don’t actually solve anything


If changing a setting requires administrator privileges in the first place, then any behavior that results cannot be considered a security hole because in order to alter the setting, attackers must already have gained administrative privileges on the machine, at which point you’ve already lost the game. If attackers have administrative privileges, they’re not going to waste their time fiddling with some setting and leveraging it to gain even more privileges on the system. They’re already the administrator; why go to more work to get what they already have?

One reaction to this is to try to “secure” the feature by asking, “Well, can we make it harder to change that setting?” For example, in response to the Image File Execution Options key, Norman Diamond suggested “only allowing the launching of known debuggers.” But this solution doesn’t actually solve anything. What would a “known debugger” be?

  • “The operating system contains a hard-coded list of known debuggers. On that list are ntsd.exe, cdb.exe, and maybe windbg.exe.” Personally, I would be okay with that, but that’s because I do all my debugging in assembly language anyway. Most developers would want to use devenv.exe or bds.exe or even gdb.exe. If somebody came up with a new debugger, they would have to petition Microsoft to add it to the hard-coded list of “known debuggers” and then wait for the next service pack for it to get broad distribution. And even before the ink was dry on the electrons, I’m sure somebody somewhere would already have filed an anti-competitive-behavior lawsuit. (“Microsoft is unlawfully raising the barrier to entry to competing debugging products!”)
  • “Okay, then the program just needs to be digitally signed in order to be considered a ‘known debugger’.” Some people would balk at the $500/year cost of a code signing certificate. And should the operating system ask the user whether or not they trust the signing authority before running the debugger? (What if the debugger is being invoked on a service or a remote computer? There is nowhere to display the UI!) Actually, these were all trick questions. It doesn’t matter whether the operating system prompts or not, because the attackers would just mark their signing certificate as a “trusted” certificate. And in fact the $500/year wouldn’t stop the attackers, since they would just create their own certificate and install it as a “trusted root”. Congratulations, the only people who have to pay the $500/year are the honest ones. The bad guys just slip past with their self-signed trusted-root certificate.
  • “Okay, forget the digital signature thing, just have a registry key that lists all the ‘known debuggers’. If you’re on the list, then you can be used in Image File Execution Options.” Well, in that case, the attackers would just update the registry key directly and set themselves as a “known debugger” (see the sketch after this list). That “known debuggers” registry key didn’t slow them down one second.
  • “Okay, then not a registry key, but some other setting that’s hard to find.” Oh, now you’re advocating security through obscurity?
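
To make the registry-list rebuttal concrete, here is a minimal sketch of what the attack would look like. The "KnownDebuggers" key path and the payload path are hypothetical, invented purely for illustration; the point is that the write requires nothing beyond the administrative privileges the attacker already has.

    #include <windows.h>
    #include <string.h>

    // Hypothetical sketch: an attacker who is already an administrator
    // registers their payload as a "known debugger" with two ordinary
    // registry calls. The key path below is invented for illustration.
    void AddSelfToKnownDebuggers(void)
    {
        HKEY hKey;
        const char *debugger = "C:\\payload\\keylogger.exe";
        if (RegCreateKeyExA(HKEY_LOCAL_MACHINE,
                "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\KnownDebuggers",
                0, NULL, 0, KEY_SET_VALUE, NULL, &hKey, NULL) == ERROR_SUCCESS) {
            RegSetValueExA(hKey, "MyDebugger", 0, REG_SZ,
                           (const BYTE *)debugger,
                           (DWORD)(strlen(debugger) + 1));
            RegCloseKey(hKey);
        }
    }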

Besides, it doesn’t matter how much you do to make the Image File Execution Options key resistant to unwanted tampering. If the attacker has administrative privileges on your machine, they won’t bother with Image File Execution Options anyway. They’ll just install a rootkit and celebrate the addition of another machine to their robot army.

Such is the futility of trying to stop someone who has already obtained administrative privileges. You’re just closing the barn door after the horse has bolted.

Comments (48)
  1. BryanK says:

    I assume the situation is different if the machine’s real admin changes a setting that inadvertently lets an attacker in, right?  I don’t think this would apply to IFEO, but it may apply to certain other things.  Not sure whether that means anything as far as security is concerned, though — dumb admins can always screw up a system.

    Regarding the signing stuff: Do the same principles apply to driver signing?  Specifically, if you install your own root cert in the CA store on a machine, will drivers show a warning if they’re signed with a cert that chains to yours?  Or does the code-signing cert have to chain to one of a hardcoded list?

    (Or don’t you know, since you work on the shell and not the kernel?  If not, do you know who might?)

  2. Adam says:

    "If changing a setting requires administrator privileges in the first place, then any behavior that results cannot be considered a security hole because in order to alter the setting, attackers must already have gained administrative privileges on the machine, at which point you’ve already lost the game."

    Not necessarily. It may be that a privileged service that accepts connections from untrusted users has a bug that does not allow arbitrary code execution, but does allow an attacker to run existing code that they shouldn’t be able to reach, in order to change such a setting.

    Example:

        if (!userIsAdmin) {          /* BUG: tests the function pointer, not its result */
            return EACCESS;
        }
        changePrivilegedSetting();

    If userIsAdmin() is a function, not a variable, then the bare name userIsAdmin above decays to a non-null function pointer, so the test is always false and the user will never be denied permission to call changePrivilegedSetting().

    As noted, that’s only an example; there are other ways of achieving the same result. It doesn’t have to be a service that accepts remote connections; a setuid-root program with a similar bug could do the same thing. Also, it doesn’t have to be a function call/function pointer mix-up – there are other coding errors that could creep into a privileged program and cause the same effect, e.g. reversing a privilege test so that only non-admin users can execute the sensitive code.
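
    For comparison, a minimal corrected sketch (same hypothetical names as above); with the parentheses in place the check actually runs, and many compilers can warn about the original pointer-in-boolean-context mistake:

        if (!userIsAdmin()) {        /* call the function, don't test the pointer */
            return EACCESS;
        }
        changePrivilegedSetting();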

  3. silkio says:

    maybe the code to change the setting fits within the required size for your exploit, but a full attack doesn’t.

    hello security hole.

  4. Mark Steward says:

    Adam, in your case, changing the setting does not require "administrator privileges in the first place": the programmer just thinks it does.  The problem is actually privilege elevation (it allows an attacker to change something they shouldn’t), and is solved by correct validation.

  5. dimmik says:

    > If changing a setting requires administrator privileges […]

    (disclaimer: May be It’s not the right place to discuss my question, excuse me in that case.)

    WHY?

    Why must changing a setting require admin privileges?

    Why not give any user the ability to change his own system settings (including new DLLs, file associations, and others), except for a few that must be changed ONLY by admins?

    Well, one has to be admin to manage users, to set up firewall or router software, to set the IP address, and to change other things that affect all users of the system. Maybe also to install software for every user of the machine.

    But why not store for every user his own registry, his own place where DLLs go, and so on?

    For example, once upon a time I had to install ACDSee.

    ACDSee is not a complex utility, not a deep-in-the-system integrated tool, just a picture viewer.

    And yet I had to become admin on the computer.

    Maybe it’s because of my dumb security settings (I’m without doubt a lamer), but I was logged in as the default ‘User’.

    It’s not clear to me why third-party software has to place something into the "C:\Windows…" directory, but almost all of it does.

    Why not give the user the ability to change all settings except deep system preferences, and store them separately for every user?

    The security hole would be much smaller.

    P.S. Yes, I’m referring to the *nix security model.

    Once, as a simple non-privileged user, I wanted to install another window manager (AfterStep) on Linux, just for myself. And I did it without root rights.

  6. :'( says:

    Look who’s speaking. The company that purposefully cripples its OS to "mitigate the spread of malware".

    Hello 10 half-open outbound connection limit, hello no-raw-sockets, hello DNS client that ignores the hosts file for MS addresses…

    And hello 10-lines-of-C-code that disable these "protections". On XP, you’ve already lost. Leave it alone, please :)

    And I don’t like Vista’s direction: you’re still supposed to run installers as admin, trusting every company out there. It’ll only solve security holes, not malware spread by "traditional" means.

  7. Adam says:

    Mark:

    Sorry I didn’t make it clear; if the setting I’m talking about has a system-enforced ACL (or equivalent) that only allows Admin users to change it, then an attacker could leverage a lack of correct validation in an elevated-privilege program to gain more privileges on the system.

    Not all security holes are code injection bugs.

  8. Alun Jones says:

    dimmik: This is an ACDSee failing, not a Windows failing.  It’s perfectly possible to write an app that a restricted user can install – after all, installation consists of copying files, and changing / creating registry settings.  Those are activities that any user can do.

    Installing to your "Program Files" directory, or setting system-wide registry settings, that’s going to require admin privilege.

    Adam: If there’s a program running with admin-level privilege and allowing ordinary users to execute inappropriate admin-level actions, then that program is the flaw, not the setting it sets.

    :’): Show me those "10-lines-of-code".

  10. > Hello 10 half-open outbound connection limit, hello no-raw-sockets, hello DNS client that ignores the hosts file for MS addresses…

    > And hello 10-lines-of-C-code that disable these "protections". On XP, you’ve already lost. Leave it alone, please :)

    Really? You can disable all of those as a limited user? Do tell me how!

  10. oldnewthing says:

    "It may be that a priveliged service that accepts connections from untrused users has a bug that…"

    Then you have a bug that is a security hole. But that’s not the topic for today. As I noted, it’s a setting that only administrators can change. Your counter-example is a setting that non-administrators can change (due to a flaw).

    The topic for today is, "I have a setting that only administrators can change. This setting can take values that are insecure." My point is that this is not a bug in the setting.

  11. AC says:

    Alun Jones: "It’s perfectly possible to write an app that a restricted user can install (…) Installing to your "Program Files" directory, or setting system-wide registry settings, that’s going to require admin privilege"

    As far as I understand, MSFT still promotes that all installs go to "Program Files". Of course any user can run anything he downloads or copies, and the simplest way for a user to avoid having to fetch an administrator or get permissions is still to simply install the app somewhere in his own "My Documents" :)

    E.g. if my grandmother gets some game, and I haven’t given her an admin password, she’ll probably be able to install the game to her documents.

    There is a danger that some malware running without privileges could then infect an executable installed that way. But the ability for a user to install something for himself is a good one. MSFT probably thinks that a sandbox environment is confusing for a normal user, but I believe that sandboxing is the future. Why shouldn’t the user be allowed to install his own apps, and why shouldn’t they be nicely sandboxed, so that one can’t infect another, none can infect the system, and each can create and modify files only in its own incarnation of My Documents?

  12. ... says:

    > Really? You can disable all of those as a limited user? Do tell me how!

    Raw sockets and modifying the HOSTS file already required admin, so these measures only protect against admin-privileged malware, and are therefore useless. (Incidentally, nmap now supports lower-level access that allows even source MAC address forging, as a result of MS blocking raw sockets.) (And for the HOSTS file, you just have to search for "microsoft.com" in the DNS DLL and patch it to "xxxxxxxxx.xxx", for example.)

    99% of the users run as admin. That’s why the battle is "lost", and that’s what you have to fix. The 10-connection limit makes sense, but only for limited users. Admins shouldn’t have this limit, for the same obvious reason: they can patch the OS to get rid of it, or they can use lower-level access (see nmap).

  13. > 99% of the users run as admin. That’s why the battle is "lost", and that’s what you have to fix.

    99% of users run as admin because they view security warnings as a pointless nuisance, they want those warnings to go away, and they honestly don’t care how it happens or what the consequences are.

    Essentially, the battle is lost because the troops are incompetent, largely because they just don’t understand they’re troops in the first place. They never really wanted to be troops. Maybe they’re not really suited for it. But you still can’t blame the companies that provide their gear.

  14. Gabe says:

    mikeb, KnownDlls isn’t for security purposes. It’s there for efficiency. Since almost every process will use them, it makes sense to load them into memory at a fixed location and not have to worry about searching for them, loading them, and possibly rebasing them every time you start a process.
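
    For the curious, a minimal read-only sketch (assuming the standard NT key path) that prints what is registered there:

        #include <windows.h>
        #include <stdio.h>

        /* List the KnownDLLs entries: a performance list, not a security list. */
        int main(void)
        {
            HKEY hKey;
            char name[256], data[MAX_PATH];
            DWORD i, cchName, cbData, type;
            if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                    "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\KnownDLLs",
                    0, KEY_READ, &hKey) != ERROR_SUCCESS)
                return 1;
            for (i = 0;; i++) {
                cchName = sizeof(name);
                cbData = sizeof(data);
                if (RegEnumValueA(hKey, i, name, &cchName, NULL, &type,
                                  (BYTE *)data, &cbData) != ERROR_SUCCESS)
                    break;
                if (type == REG_SZ)
                    printf("%s = %s\n", name, data);
            }
            RegCloseKey(hKey);
            return 0;
        }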

  15. mikeb says:

    > KnownDlls isn’t for security purposes. It’s there for efficiency. <<

    You’re right.  I had believed that it was created to prevent easy trojaning by dropping a trojan DLL into a directory likely to be used as a current directory by an admin.

    It’s not such a crazy thought – see Larry Osterman’s blog entry that mentions KnownDlls’ side effect as a small security improvement:

    http://blogs.msdn.com/larryosterman/archive/2004/07/19/187752.aspx

  16. anon e. mouse says:

    Hmmm, only processes running with admin privileges ever had access to \Device\PhysicalMemory, but starting with W2K3, MS took away access to \Device\PhysicalMemory in the name of making the OS "more secure".

    Pot. Kettle. Black.

  17. oldnewthing says:

    I agree that if there are vulnerabilities, then these settings can be used as stepping stones. But is that a fault of the setting? Should all useful settings be removed because somebody might set them incorrectly?

  18. Maurits says:

    > Should all useful settings be removed because somebody might set them incorrectly?

    The usefulness of the setting should be weighed against the probability and severity of it being set incorrectly.

    If the latter outweighs the former, another way must be found.

  19. oldnewthing says:

    Is "Image File Execution Options" useful enough to justify its continued existence? You tell me.

  20. mikeb says:

    > Is "Image File Execution Options" useful enough to justify its continued existence? You tell me. <<

    I assume we’re talking about the ‘Debugger’ setting of that key (I don’t know a whole lot about the other settings). I use IFEO for 2 things:

    1) to attach a debugger to processes that are otherwise difficult (or inconvenient) to debug

    2) to let Sysinternals’ Process Explorer replace taskmgr.exe

    #1 is the intended use, but I think has little value for everyday users.

    #2 is a convenience that is not essential

    So, I’d want to keep it, but I could see regular users saying – "hey, come up with some other way to do your debugging that doesn’t open up a risk".  

    I’d offer some suggestions, but they’d probably either be worthless or if implemented cause personal inconvenience that I’d rather do without (like require a kernel debugger be attached for the setting to function – yuk).
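
    For reference, use #2 boils down to one registry value. A minimal sketch (requires admin; the Process Explorer path is an assumption, adjust it to wherever procexp lives on your machine):

        #include <windows.h>
        #include <string.h>

        /* Point the IFEO "Debugger" value for taskmgr.exe at Process
           Explorer, so launching Task Manager starts procexp instead.
           The procexp.exe path below is an assumption. */
        void ReplaceTaskManager(void)
        {
            HKEY hKey;
            const char *dbg = "C:\\Tools\\procexp.exe";
            if (RegCreateKeyExA(HKEY_LOCAL_MACHINE,
                    "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\"
                    "Image File Execution Options\\taskmgr.exe",
                    0, NULL, 0, KEY_SET_VALUE, NULL, &hKey, NULL) == ERROR_SUCCESS) {
                RegSetValueExA(hKey, "Debugger", 0, REG_SZ,
                               (const BYTE *)dbg,
                               (DWORD)(strlen(dbg) + 1));
                RegCloseKey(hKey);
            }
        }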

  21. Mike Swaim says:

    ‘As far as I understand, MSFT still promotes that all installs are to be to "Program Files". Of course any user can run anything he downloads/copies etc, and the simplest way for a user to avoid to fetch administrator/get permissions is still to simply install the app somewhere in his own "My Documents" ‘

    OneClick (or ClickOnce) applications by default do not install into Program Files. That’s nice because my users can update an app as limited users.

  22. Pavel Lebedinsky says:

    > MS took away access to \Device\PhysicalMemory in the name of making the OS "more secure"

    I think it was actually done to make the OS more stable. Applications were mapping physical memory without regard for caching attributes, which can cause TLB corruption. For the same reason, access to \Device\PhysicalMemory is discouraged even for drivers – see the comments in the MSDN page for ZwMapViewOfSection:

    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/Kernel_r/hh/Kernel_r/k111_cdad5afa-13b3-415e-96e8-688e7984a9fd.xml.asp

  23. Ugh again says:

    "99% of users run as admin because they view security warnings as a pointless nuisance, they want those warnings to go away…"

    99% of Windows developers don’t understand proper security and ask, "everybody else requires admin, so why not I?"

    How many developers know what a Limited User is? How many of us use it?

    I’ve met devs who refuse to fix crashes because they figure Windows crashes so often anyway, it’s just easier to blame Windows.

    99% of statistics are useless. But as somebody else said, don’t underestimate a developer’s will to undermine your computer’s security.

  24. Adam:

    The question here is not whether the attacker can gain more privileges on the system, it is whether he can do so without the collusion of an administrative user. Which, in this case, he has – albeit unknowingly – by virtue of that user’s failure to test the code properly.

    And from my perspective, this security hole actually *is* a code injection bug: the original code to change the setting has no bug. It is additional code injected between the original code and the end user which has the bug. Blaming the original code for the end result is simply unreasonable. Just because the attacker didn’t inject the code himself doesn’t change the mechanics of the attack.

    But that broken verification code might also have been presented to the user by the attacker, on a developer group for example. It certainly looks like an innocent mistake (we’ve all done that!), so if he’s caught he need only apologise.

    This leads me to some thoughts on open source, but I won’t go into it here.

  25. mikeb says:

    It’s interesting that Microsoft seems to be expending quite a bit of its resources in Vista trying to solve some security issues using some of the techniques Raymond indicates are fatally flawed.

    In Vista you’ll find:

    1) requiring GUI verification of some accesses that are restricted to Admins (due to Vista’s LUA/UAC policies)

    2) *All* drivers on 64-bit Vista will need to be signed with a certificate (a PIC) that can be obtained only after purchasing another certificate from Verisign for about $500 (or is it $300?). As an alternative, the driver can be signed in the WHQL program which has similar costs associated with identity certificates in addition to other testing costs.  In fairness, it appears that Microsoft might be re-evaluating the PIC program.

    In fact, Microsoft already uses some of these techniques in WinXP/Win2003, such as using registry keys to designate KnownDlls, and signing and other enforcement in Windows File Protection to try to ensure that valid binaries are in place.

    So, while all of these techniques may ultimately be fatally flawed (until the Trusted Computing Platform arrives), they must have some utility and benefit, or Microsoft is wasting a lot of people’s time and effort.

    I’m not necessarily saying that having a ‘known good debugger’ list is a good idea, but Raymond’s post seems to imply that any security solution must be perfect to be considered. If that’s the standard, then we might as well toss out our computers right here and now, or at least disconnect them from the ‘net.

    Sometimes just raising the bar is indeed worthwhile.

  26. mikeb, it’s not just raising the bar that’s worthwhile. Sometimes you can leave the bar where it is, and simply remind people how high that bar happens to be.

    Assuming that your system has a known and unfixable security flaw, it will increase customer confidence if you move the same flaw to another location and then explain why it is every bit as secure as it is going to get. There are three results.

    1. You have responded. Even though your activity has no real effect, you have demonstrated that you are listening. This is the single most important thing to do.

    2. You have explained. Those who care about your explanation will look to people they trust to understand it, and get a confirmation there.

    3. Your detractors will say "this doesn’t change anything!" because it really doesn’t.

    However, #3 only works if you *omit* #1 or #2. Action without explanation leaves the public uncertain, and they will latch onto your detractor as the authority. Explanation without action sounds like an excuse, and the public will latch onto your detractor because you really haven’t changed anything.

    But if you have taken action and explained that action, your detractors are just whining. The public has what they want: you have done something about it, and they are satisfied with the rationale for what you’ve done. So the detractor doesn’t actually do any damage; he merely advertises that you didn’t really HAVE to change or explain anything, which makes you look even better.

    As Oscar Wilde said, living well is the best revenge.

  27. silkio says:

    Raymond Says:

    > The topic for today is, "I have a setting that only administrators can change. This setting can take values that are insecure." My point is that this is not a bug in the setting.

    Not a bug, but still something that ‘hackers’ can use in an exploit. And hence, if it’s reasonably possible for someone to do so, it is not wise to create a setting like this.

    You must see this.

  28. Micah Brodsky says:

    Raymond —

    My impression of much of the security work for Vista — e.g. signed drivers only on x64 — is that it’s effectively trying to demote the horribly overused Administrator to the level of a more normal user and then add a privileged level of mandatory access control beneath. My question is, why didn’t Microsoft do that literally? E.g. hoist the entire system up and put a lightweight privileged monitor beneath? Or, hoist up user space, virtualize the disk, and establish a protected data store for the kernel and its configuration that only the kernel (or another operating system, but not administrative tools) could access? Many of the massive headaches, like not being able to disable driver signing through boot.ini because that could be modified by programs running as administrator, would disappear, since you could introduce a ‘Hyperadministrator’ able to manage these options. Also, the concern that someone could find a gap in the current, ad-hoc enforcement would seem to diminish greatly.

  29. Adam says:

    Raymond> "…in order to alter the setting, attackers must already have gained administrative privileges on the machine…"

    "The topic for today is, "I have a setting that only administrators can change. This setting can take values that are insecure." My point is that this is not a bug in the setting."

    Respectfully, I must disagree with that particular conclusion[0]. An attacker can and will exploit multiple bugs/vulnerabilities, in series, to 0wn your box.

    Forgive the non-Windows example and slight misrepresentations, but when a single vulnerability is discovered on a UNIX system, it hardly ever allows a remote attacker to gain a root shell by itself. For this reason, you will normally see the UNIX die-hard fanboys give one of the following two mitigations:

    a) Hah, it only allows a remote user to get shell access with the account of the running process. apache/ftpd/cvsd runs with limited privileges, so that’s not a problem like it would be on a Windows system, where such services run as the local system.

    b) Hah, it only allows a local user to elevate their privs to r00t. We carefully monitor the people who are allowed to log on to our server, even as limited users, so that’s not a problem like it would be with a Windows system, where everyone needs to be a domain administrator to get work done.

    Unfortunately, because each is announced separately, the fanboys seem to consistently fail to see that, combined, one of each of the above vulnerabilities is fatal, and that you need to keep on top of *all* of them to keep your system secure.

    If you have a setting which makes something else insecure, attackers will try to leverage it *in conjunction with other vulnerabilities* to help them 0wn your box.

    [0] I agree with everything else. Security by obscurity must concede to the almighty Google, and once someone has Admin, you’re toast.

  30. Driver Dude says:

    Definitely keep Image File Execution Options.

    1. The whole point of Administrator/root is to be dangerous. Sometimes it’s the only way to get things done.

    2. Remove this and somebody will discover another way to do the same thing. Never underestimate the ‘Net’s knowledge-spreading ability. Blackhats use the Net very effectively – in many ways.

    3. Removing this closes ONE hole. Fixing the underlying problems – everybody runs with Admin privs, and the ease of obtaining Admin from a Limited user – will close MANY holes.

  31. Norman Diamond says:

    What would a "known debugger" be?

    > "The operating system contain a hard-coded

    > list of known debuggers

    No kidding that such a thing would deserve anti-trust action and it isn’t what I meant.

    > "Okay, then the program just needs to be

    > digitally signed in order to be considered

    > a ‘known debugger’."

    Enough with the attacks on straw girls.  OK, you did stop.  Thank you.

    When I used the word "known" I meant known to the machine’s user[s], roughly along the following lines:

    If the administrator installed a development tool that includes a debugger, and the development tool is visible in "Start – All Programs", then the administrator likely won’t be surprised about the debugger.

    If the individual user has a way of doing the same kind of installation, then the same reasoning applies.

    A few debuggers are built into Windows.  Fine, do include them in the list.  But: (1) Don’t limit the list to those.  (2) Don’t include Solitaire or Notepad in the list unless the administrator or user took some action to put it in the list.

    For the most part, I agree that debuggers aren’t really escalations of privileges.  A user shouldn’t be allowed to debug a process that they don’t own, unless an administrator has given them privilege to do so.

    (Actually I prefer meta-privileges, in which an administrator gives the user privilege to set the privilege, so that if the user hasn’t set the privilege but accidentally does some mistyping or mismousing then the mistake won’t accidentally do damage.)

    However, even if there’s no escalation of privileges, there’s still a security risk.  A hacker might break in and become administrator; the hacker WOULD add her key logger to the list of known debuggers, and the hacker WOULD set the debugging key on an application that the real administrator uses.  This is because the hacker doesn’t know the password that the real administrator is going to input for some purpose, and the key logger will catch it.

    So restricting the list of debuggers to a list known by the machine’s user[s] isn’t a concrete security protection.  It’s a small step to help increase the chances of discovering that you’ve been hacked, plus a step to avoid damage from unintended accidents.

  32. oldnewthing says:

    I’m still not sure where this "list of debuggers" is. If I’m reading you correctly (and I’m probably not), you’re saying that the way to answer the question "Is this a valid debugger?" is to do a treewalk of the administrator’s Start menu to see if anything there is a shortcut to that same program. (What if you don’t have permission to access the administrator’s start menu? Does that mean you can’t debug anything? Does this mean that the administrator can’t "clean up" the Start menu by getting rid of rarely-used programs? What if the administrator’s start menu is redirected to another server that is unavailable?)

  33. Adam says:

    *All* useful settings? Of course not.

    But, if an option has an insecure setting, then I’d say that that *particular* setting of that *particular* option *should* be at least considered a "security issue", and looked at for possible removal from the next release.

    On reflection, the risk may be worth the functionality, especially if individual admins are given enough background to make an informed decision themselves. And admins are at least somewhat likely to read the big warning dialogs that pop up when they select that setting :)

    But to leave an insecure setting in just because it’s useful? Must … resist … cheapshot … IE ….

  34. Adam says:

    *sucks teeth* Tricky one. :)

    I presume you mean the "debugger" subkey that you’ve written about before.[0]

    Considering how open to abuse that particular setting is (something you pointed out yourself in the previous article), I’d say that having a good look at alternative ways of getting the same results would be strongly encouraged.

    Ideas:

    1) You could enable that setting only under a windows debugging kernel. Yes, you won’t be running the target programs fully "in the wild", but it’d keep a lot of people safe.

    2) Where program A calls program B, and you need to debug program B without modifying it, then:

    2a) If you can wait to attach a debugger to program B until after it reads data from program A, you could run program A under a debugger, pause it after it starts program B, and then attach another debugger to program B while it’s waiting for data from A?

    2b) If you have to attach a debugger to B before it reads any information from A, can’t you just start B straight from the debugger?

    3) Can you set up a windows debugger to follow the child on CreateProcess()? If so, could that be used to debug B?

    Of course, there may be good reasons why none of these (off-the-top-of-my-head) ideas would work, why no other more secure alternative to IFEO covers *all* of its functionality, and why the functionality *is* vital enough to keep.

    *That doesn’t stop those risks existing though, and it doesn’t mean that IFEO couldn’t be used as part of an exploit.*

    In answer to your question, I don’t know. I’ve done a reasonable amount of (ATL/COM/C++) development on Windows and never needed it. But I’ve not delved into driver programming, or some of the nastier corners of MFC/the Win32 API either. My guess would be to keep that one. It’s the kind of thing that seems like it wouldn’t have been put there in the first place unless there was a real need for it. But that doesn’t mean that I’d keep all such settings. It’s definitely a case-by-case type of decision.

    [0] http://blogs.msdn.com/oldnewthing/archive/2005/12/19/505449.aspx

  35. dimmik says:

    Alun Jones> This is an ACDSee failing, not a Windows failing.

    Maybe.

    But it looks like this is the common way for any Windows app to install itself.

    They (the apps) look for %ProgramFiles%, which points to, say, "C:\Program Files", and try to put all the necessary files there. And, of course, they fail because the user is not an admin.

    OK, maybe that’s a failure of almost every app being installed. But it seems to be a very common design flaw. ;)

    But why not point %ProgramFiles% to, say, "C:\SomeUser\Program Files" and %WINDIR% to "C:\SomeUser\Windows" and so on? And keep something like %GlobalProgramFiles% pointing to the privileged folder.

    It would be transparent for apps and for users, and they would have no need to care about admin privileges.

    My wife has no idea what folders and files are: she knows she has photos and she has a viewer, and nothing else. An exaggeration, of course, but close to reality.

    How (and why) should I explain to her that she has to change the default "C:\Program Files" to something else?

    And what if the app wants to write into the registry?

  36. meh says:

    Sooner or later you’d have apps that install to %GlobalProgramFiles%. What do you do then?

  37. Norman Diamond says:

    > If I’m reading you correctly (and I’m probably not), you’re saying that the way to answer the question "Is this a valid debugger?" is to do a treewalk

    Obviously you’re not, because on Windows XP the results of a treewalk would include Solitaire, which is an example of what I suggested shouldn’t be included in the list.

    Here are some famous lists.  "Add/Remove programs" accesses a list that’s stored somewhere.  The Run and RunOnce keys are lists that are used for a different purpose.  The list of known debuggers would be yet another list, with some obvious places where it could be stored.  It would start with a default list of debuggers that are installed by default in the Windows system, would be extended when authorized persons add development tools, and would be extended when hackers obtain sufficient privileges to hack it.

    A few decades ago on a different OS, a program called "chsh" asked if I really wanted to change my login shell to a program that wasn’t on an administratively maintained list of known shells.  Since I wasn’t root, I only had read access to the list, but it was enough.  The ways for me to accomplish what I was trying to do would be to answer "y" to the prompt and know that I was choosing an unusual shell, or for an administrator to change my account settings with or without notifying me, or for a hacker to change my account settings.  Anyway, the list was a reasonable list and it was used in a reasonable manner.

  38. Mark Steward says:

    Norman: I think a list of debuggers that’s modifiable by Administrator to protect a registry key that’s modifiable by Administrator adds so little protection it’s not worth doing.  And I think that was Raymond’s point.

    It doesn’t even stop the limited users: once you have one debugger, you can run any other debugger under it.  And, since you can recreate a program’s functionality in VBA, or copy the program and edit it, should debugging programs be a controlled resource?

    I find having a debugger as important as a programming language.  The ability to change the behaviour of a program without rewriting it is (to me) an essential part of an OS.  I often use it to force programs to install on my limited profile. (Ugh and I hate OLE’s obsession with HKLM…)

    I know many companies have policies against running your own code, but until Windows has a comprehensive way of stopping users running their own code (appsec is trivial to break), that’ll only stop the good guys.

    If your concern is somebody using IFEO to make a system act strangely, perhaps a better solution is to create a notification for when IFEO is in effect.  Then the jokers will be caught, and the serious hackers will be using more than one technique anyway.

  39. Mark Steward says:

    Norman: Whoops, sorry, I thought you were advocating it from a security point of view.  I agree that administrators, like developers ("no programmer would be stupid enough to…"), shouldn’t be trusted with their own systems.

    It’s always risky when changing a dangerous setting becomes an everyday procedure.  So to protect the admin against operator error, I’d instead suggest a property dialog that changes IFEO for you.  It could then warn (from a system-protected list of debuggers) if there’s a problem.  And perhaps IFEO should only be writeable by SYSTEM.  How’s that?

  40. Micah Brodsky says:

    How is IFEO any more dangerous than the HKLM ‘Run’ key, or the document editor associations in ‘Classes’, or any of the dozens of other ways to obfuscate an application’s presence once it’s got administrative privileges?

    (AFAIK, the privilege check for actually attaching the debugger to a process is completely independent and sound — you need rights with respect to the process.)

  41. Norman Diamond says:

    Mr. Steward, thank you for posting your second followup so quickly, so I could read it before complaining about your first one.

    Nonetheless, as a minor security measure it still increases the chance of a hack being discovered rather than remaining undiscovered.  Just a bit.

    For limited users it still helps too.  Limited users would be allowed to debug their own programs.  But if an administrator or hacker set the limited user’s options to use Solitaire as a debugger but the administrator or hacker forgot to update the list of known debuggers then the user would get a warning.  The user would know they’ve been oddly administrated.

    > It’s always risky when changing a dangerous setting becomes an everyday procedure.

    Well sure, but Visual Studio isn’t one of the programs that I reinstall every day.  Nearly every day I wish for bug fixes, but they don’t come, and reinstallation won’t help.  I wouldn’t mind if installation of Visual Studio automatically updated the list of known debuggers.  I don’t mind whether this update includes a dialog box or not — but I hope the dialog would be more understandable than a simple "Do you wish to update your JIT settings" (yes or no) without even saying which product is asking, what kind of JIT settings are affected, and what the change is.

  42. oldnewthing says:

    I don’t understand when the "XYZ is not a registered debugger. Do you want to use it anyway?" dialog box is supposed to be displayed. Is Regedit supposed to display it when you set the value in IFEO? Is the RegSetValueEx function supposed to display the message? (What if RegSetValueEx is being performed from a service or a driver?) Is it supposed to be displayed when the target program is run? (What if the process is being run as a service? Where do you display the dialog box?)

  43. Myria says:

    I think it’s important to distinguish between "security" and "safety" in this matter.  I completely agree with Raymond here.

    With any system that cares about security, there is a line in the sand you can draw between privilege levels.  Any program with access above that line can use its privilege to become god, no matter which particular privilege level it actually has.

    For example, consider being in the Administrators group, or having any of the SeTakeOwnershipPrivilege, SeTcbPrivilege, SeRestorePrivilege, SeLoadDriverPrivilege, or SeDebugPrivilege privileges.  If you have any of those rights, you can elevate your privileges in some way to become kernel.

    Even though we generally separate powers like this, they are all effectively equivalent.  It means nothing for security.

    So why bother?  It’s because security isn’t the only concern.  Safety is another important one.  This separation makes it more difficult to accidentally break something.

    NT disables privileges by default even if you have them.  Obviously, it does nothing for security, but it does a lot for safety.
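
    To illustrate (a minimal sketch using the standard token APIs): even a token that already *holds* SeDebugPrivilege must explicitly enable it before use, which is exactly a safety barrier, not a security barrier.

        #include <windows.h>

        /* Enable SeDebugPrivilege on the current process token.
           The privilege must already be present in the token; this
           only flips it from disabled to enabled (safety, not security). */
        BOOL EnableDebugPrivilege(void)
        {
            HANDLE hToken;
            TOKEN_PRIVILEGES tp;
            BOOL ok;
            if (!OpenProcessToken(GetCurrentProcess(),
                    TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &hToken))
                return FALSE;
            if (!LookupPrivilegeValueA(NULL, "SeDebugPrivilege",
                                       &tp.Privileges[0].Luid)) {
                CloseHandle(hToken);
                return FALSE;
            }
            tp.PrivilegeCount = 1;
            tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
            ok = AdjustTokenPrivileges(hToken, FALSE, &tp, 0, NULL, NULL) &&
                 GetLastError() == ERROR_SUCCESS;
            CloseHandle(hToken);
            return ok;
        }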

    Vista x86-64 driver signing is another example of missing the point.  It will anger developers and maybe users, but one thing it will not do is stop rootkits.  First of all, kernel-mode rootkits are few and far between (other than as copy protection schemes or drivers that hide cheat programs from online games).  Almost all soldiers of robot armies are infected with a user-mode "rootkit" of some kind, not a kernel rootkit.

    Second, if a bad program is already running as elevated Administrator, driver signing is not going to stop it from getting into the kernel if it *really* wants to.

    A program running as elevated Administrator can overwrite ntldr (whatever it’s called in Vista) and reboot the system.  No more driver signing check.

    A response would be to block writing to those files.  In that case, raw-open \Device\Harddisk0\Partition1 and write to the sectors containing ntldr.
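
    (A minimal sketch of that step; opened read-only here out of caution, whereas an attacker would ask for GENERIC_WRITE. The \\.\PhysicalDrive0 name is the Win32 alias for the raw first disk.)

        #include <windows.h>

        /* Any administrator can open the raw disk and reach the same
           sectors that a file-level block was supposed to protect. */
        HANDLE OpenRawDisk(void)
        {
            return CreateFileA("\\\\.\\PhysicalDrive0",
                               GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
        }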

    A response to that would be to block raw disk access.  Fine.  Create a 512 byte rootkit loader, put it in a file called rootkit.bin, and use bcdedit to add a new legacy OS entry for the file.  Set the default option to that and timeout to zero, then reboot.

    There are even tricks you can do without rebooting, but I won’t go into them here.

    Microsoft should be concentrating on ways of preventing unelevated programs from becoming elevated.  Vista does a somewhat good job of this already with UAP, but it’s not perfect.  This feature is definitely a step in the right direction, unlike driver signing.

    It’s complete futility to try to prevent anyone above the line in the sand from taking over the system.  The only way to do something like this is to require all programs that access any kind of protected data to be signed.  What scares me is that Microsoft appears to be headed in that direction.  I don’t think it will be long before executing as anything but verifiable .NET requires a signature.

    Melissa

  44. Norman Diamond says:

    > I don’t understand when the "XYZ is not a registered debugger. Do you want to use it anyway?" dialog box is supposed to be displayed.

    Thank you for your understanding of my writing.  Indeed my writing wasn’t clear enough and your subsequent question made me think.  Anyway the answer to this, in terms of what I had been thinking, was that the dialog would be displayed at the time that the program would otherwise have been opened by the debugger that is specified in the key.

    > What if the process is being run as a service?

    Yeah you got me, since this isn’t ShellExecuteEx looking up the key (or is it?), and the program is being started in a context that has no user interaction.  But I think I can come up with two or three answers.

    (1)  If the process is being run as a service and the debugger isn’t on the list of known debuggers that the administrator or hacker has registered (along with predefined debuggers that are installed with Windows), then the chances seem even higher that a hacker has interjected this key in order to get the hacker’s program to run under the local system account.  Maybe we want to log the fact that Windows is prohibiting the service from starting, and say the reason is an unauthorized debugger.

    (2)  The service could be started without using any debugger.  Again a log entry should be made.

    (3)  Are there debuggers for Windows that run unattended?  I don’t mean just unattendedly counting the number of times a breakpoint is passed before breaking, and running macros, but I mean completing their operations without even prompting the user when an unexpected situation occurs.  If the answer is no then we could treat this as an unexpected situation and prompt the user.  I’m not sure which window station we want to prompt on though.

    By the way, there’s a certain situation where a service will not start and Windows will log an event saying that the service failed to respond within 30 seconds — but the log entry is recorded a long time before 30 seconds passes.  This happened to me enough times that I finally memorized what the real meaning of the error was and then I could fix my own errors promptly.  Anyway, there is precedent for refusing to run a service when a configuration isn’t quite what it should be.

  45. Norman Diamond says:

    Submitted as a separate response so it will be easy to delete if too much teasing is still considered offensive ^_^

    Sunday, May 14, 2006 2:58 AM by Myria

    > I completely agree with Raymond here.

    You see that, Mr. Chen?  You even distinguish between "security" and "safety" like a girl.

  46. Mike Dimmick says:

    dimmik: a lot of Windows applications use COM components, both to construct their UIs and as general-purpose utilities.

    COM registration /can/ be written into HKEY_CURRENT_USER\Software\Classes, but this often isn’t done, even though the feature was introduced back in Windows 2000. You can also have registration-free COM, but this seems to be a complex feature of Windows Installer, and no-one understands Windows Installer (yes, this is a gross over-simplification, but I think the number of people who actually understand Windows Installer is tiny, and I’m not one of them).

    File associations can also be written to HKEY_CURRENT_USER\Software\Classes, and they override the local-machine settings.

    Basically, few applications actually support per-user installs.

  47. Myria says:

    Norm, I don’t think you realize that in Windows NT, every process is basically a debugger of any process it starts.  An NT process starts out as more or less an empty space (unless you fork).

    The creating process actually allocates memory inside the new process and writes to it, and it even sets the initial values of the CPU registers for the new process’s thread.  The creating process is effectively a debugger of the new process.
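
    A minimal sketch of that mechanism using only documented APIs (notepad.exe is just a placeholder target):

        #include <windows.h>

        /* Create a suspended child, then allocate and write memory inside
           it -- the same powers a debugger has, with no debug APIs at all. */
        void WriteIntoChild(void)
        {
            STARTUPINFOA si = { sizeof(si) };
            PROCESS_INFORMATION pi;
            static const char msg[] = "hello";
            LPVOID p;
            if (!CreateProcessA("C:\\Windows\\System32\\notepad.exe", NULL,
                    NULL, NULL, FALSE, CREATE_SUSPENDED, NULL, NULL, &si, &pi))
                return;
            p = VirtualAllocEx(pi.hProcess, NULL, sizeof(msg),
                               MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
            if (p)
                WriteProcessMemory(pi.hProcess, p, msg, sizeof(msg), NULL);
            ResumeThread(pi.hThread);
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
        }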

    Adding "known debuggers" is no better than WinSafer (group policy) and is trivial to break, as someone mentioned.

    Melissa

  48. Norman Diamond says:

    Friday, May 19, 2006 4:10 AM by Myria

    > The creating process actually allocates memory inside the new process and writes to it,

    You’re right, I didn’t know that.  While assuming too much, I assumed that the kernel would initiate a stub in the new process and let the parent process continue with its own operations.  I figured that the parent would interfere with the child only when the parent was designed to do so, using the handles returned.

    Now I’m wondering: even if inspection of the IFEO key is done by the parent process, and even if the debugging role is handed off from the parent process to the designated debugger, could the code that reads IFEO still warn the user?  The SHELLEXECUTEINFO structure has an hwnd which is designed for user notifications, but the SECURITY_ATTRIBUTES structure doesn’t.

    > Adding "known debuggers" is no better than

    > WinSafer (group policy) and is trivial to

    > break

    Sure it’s trivial to break deliberately, but John Robbins didn’t mention the possibility of adding Solitaire to a list of debuggers in WinSafer ^_^

Comments are closed.