Your debugging code can be a security hole

When you're developing your debugging code, remember that "it's only for debugging" is not an excuse to ignore security.

I remember one customer who asked (paraphrased)

We have a service, and for testing purposes we want to be able to connect to this service and extract the private data that the service is managing, the data that normally nobody should be allowed to see. That way, we can compare it against what we think the data should be. This is just for testing purposes and will not be called during normal operation. How do you recommend we do this?

Remember that the bad guys don't care whether the code you wrote was for normal use or for diagnostic purposes. If it's there, they will attack it.

The customer went to a lot of effort to protect this internal data, making sure that none of the service operations disclose it directly, but then in a haze of "this would make debugging easier", they lost their heads and added a debugging backdoor that gives direct access to this data that they had worked so hard to protect.

It doesn't matter how much you protect the front door if you leave the service entrance wide open.

I have a printer driver that insists on creating a log file in the root of the drive. This log file, which is world-readable, contains, among other things, the URLs of every single web page I have printed. If I log on as an administrator and delete the log file, it just comes back the next time I print a document.

I assume the printer vendor created this log file for diagnostic purposes, but it also creates a security hole. Everybody on the system can see the URL of any web page that was printed by anybody else.

Comments (18)
  1. Steve says:

    Er… I can’t understand how a list of URLs you’ve printed can be useful in debugging the driver. I also can’t forgive ANYTHING for creating something in the root, much less a driver.

    What does it do if you make a dir there of the same name? Crash?

  2. AC says:

    There are even subtler ways for security to fail: if the application has some logging feature, turned off by default, and the logging code has a buffer overflow, then even when QA checks the application, they may not know about that setting or test with it, and the final product is trivially exploitable:

    Step 1: Change the registry entry (the user will not see the change); Step 2: Use the newly opened hole; Step 3: you know already…

    So a product that intends to be secure must have all its possible settings documented. Even those which are invisible through the user interface.

    Does anybody know how many MSFT applications have undocumented settings which are specified through the registry, ini files, file names, event names, etc.?
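
    [A minimal sketch of the kind of latent hole AC describes: a logging routine built on a fixed-size buffer. An unchecked sprintf into such a buffer overflows when the logged message is attacker-controlled; bounding the write with snprintf truncates instead. The function and buffer names are illustrative, not from any real product.]

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    static void log_message_safe(char *out, size_t out_size, const char *msg)
    {
        /* snprintf writes at most out_size bytes including the
           terminating NUL, so an oversized message is truncated
           rather than smashing the stack (unlike plain sprintf). */
        snprintf(out, out_size, "LOG: %s", msg);
    }

    int main(void)
    {
        char buf[16];
        char long_msg[64];

        /* Simulate an attacker-controlled, overly long message. */
        memset(long_msg, 'A', sizeof(long_msg) - 1);
        long_msg[sizeof(long_msg) - 1] = '\0';

        log_message_safe(buf, sizeof(buf), long_msg);

        /* The buffer stays NUL-terminated and within bounds. */
        assert(strlen(buf) == sizeof(buf) - 1);
        printf("%s\n", buf);
        return 0;
    }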

  3. Actually there is no issue (from a security standpoint) at the application level, since there is no privilege elevation. In order to set the registry key you already have to have access to the machine at the user-account level, and big deal – you tricked the application into doing something at the user-account level, something you could have done yourself with your own user-account permissions anyway!

  4. Basil Hussain says:

    It annoys me no end when programs seem to have debug features still enabled in a shipping build, and therefore feel they can trample all over the root of your C: drive and C:\temp (which, of course, are probably hard-coded paths) writing diagnostic log files galore. As I write, I have 6 log files sitting in C:\ that get recreated every time I run the associated programs, plus umpteen more in the temp folder.

    Application programmers really need to remember to disable all their debug features before they ship a program!

  5. Matt says:

    I’m a total novice at C/C++ programming, but isn’t this type of situation exactly what

    #ifdef DEBUG
    // debug code
    #endif

    is for?

  6. Michael says:


    Possibly, but customers and most of your testers won’t use the debug version of your product.

    It’s pretty nice for a support tech to be able to say, "Set this registry key, rerun the app, and then send me the log file at <xyz>".
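
    [A sketch of the runtime-toggled diagnostic logging Michael describes. A real Windows application would read a registry value (e.g. with RegQueryValueEx); an environment variable stands in here so the sketch stays portable. The variable name MYAPP_DIAG_LOG is made up for illustration.]

    #include <stdio.h>
    #include <stdlib.h>

    static int diag_logging_enabled(void)
    {
        /* Off by default: the product ships silent unless support
           asks the user to flip this switch. */
        const char *v = getenv("MYAPP_DIAG_LOG");
        return v != NULL && v[0] == '1';
    }

    static void diag_log(const char *msg)
    {
        if (!diag_logging_enabled())
            return;
        /* Log to stderr rather than a world-readable file in the
           root of the drive: that is exactly the hole the article
           warns about. */
        fprintf(stderr, "[diag] %s\n", msg);
    }

    int main(void)
    {
        diag_log("starting up");  /* silent unless MYAPP_DIAG_LOG=1 */
        return 0;
    }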

  7. Jay B says:

    Too bad there’s no CSIDL_LOGFILES for use with SHGetSpecialFolderPath.

    There’s no standard for where log files get put, which is why you see some of them in C:\Windows\System32\LogFiles, some in C:\, some in the Program Files application directory, some in My Documents, etc…

  8. aoeu says:

    "I have a printer driver that insists on creating a log file in the root of the drive."

    Why not empty the file, then deny write privs to everyone? It’s kind of a crappy workaround, but I wonder how the program would deal with it. One of the first things I do on new installations of XP is to deny write access to the recent documents folder.
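
    [A sketch of aoeu's workaround: empty the log file, then deny write access so the offending program can't refill it. On Windows you would deny write access in the file's ACL (e.g. with cacls); chmod is the POSIX analogue used here, and the file name is a placeholder.]

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "unwanted.log";  /* placeholder name */

        FILE *f = fopen(path, "w");         /* truncate to empty */
        if (f)
            fclose(f);

        chmod(path, 0444);                  /* read-only for everyone */

        if (geteuid() != 0) {               /* root bypasses mode bits */
            FILE *w = fopen(path, "a");     /* further writes now fail */
            printf("append %s\n", w ? "succeeded" : "refused");
            if (w)
                fclose(w);
        }

        remove(path);                       /* clean up after the demo */
        return 0;
    }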

  9. Daev says:

    One of the points Peter Seibel makes in his compulsively readable book on Common LISP is about the programming convenience of a language based around an interactive "read-evaluate-print loop." Even LISP applications running "in the wild" have this feature, which he illustrates with a story:

    "An even more impressive instance of remote debugging occurred on NASA’s Deep Space 1 mission. A half year after the spacecraft launched, a bit of Lisp code was going to control the spacecraft for two days while conducting a sequence of experiments. Unfortunately, a subtle race condition in the code had escaped detection during ground testing and was already in space. When the bug manifested in the wild — 100 million miles away from Earth — the team was able to diagnose and fix the running code, allowing the experiments to complete. One of the programmers described it as follows:

    ‘Debugging a program running on a $100M piece of hardware that is 100 million miles away is an interesting experience. Having a read-eval-print loop running on the spacecraft proved invaluable in finding and fixing the problem.’"

    My thought upon reading this: it’s a good thing for you guys that hackers don’t have radio-telescope dishes yet.

  10. Stu says:

    How about using a public/private key pair to verify that the debug-information request comes from an authorized debugger?

  11. Mihai says:

    "Actually there is no issue (from a security standpoint) at the application level since there is no privilege elevation"

    Counter-example: an application is used for passwords management. The logging feature dumps the password in clear text somewhere.

    As an admin, I can set the registry key on the machine shared by several users, and as a result I get all their private data (Amazon, CitiBank, credit card numbers, etc.)

    This is stuff I cannot get even if I am admin.

    So, NEVER is better!

  12. If you’re the admin, then you don’t need to go to all that trouble. Just install your own keylogger.

  13. JamesW says:

    Steve: ‘I also can’t forgive ANYTHING for creating something in the root’

    There’s a well known office suite that dumps its install log files to / on OS X.

  14. "There’s no standard for where log files get put, which is why you see some of them in C:\Windows\System32\LogFiles, some in C:\, some in the Program Files application directory, some in My Documents, etc…"

    There is actually, but too many programmers are clueless about it – anything the user produces (including log files) in general goes into their profile, in this case, in a custom folder under ‘application data’.

    It has to be this way, because anything run by the user does not necessarily have the rights to write a file anywhere else – which brings us back to the other cardinal sin of programming: "always assume the user is an administrator".
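
    [A sketch of comment 14's advice: build the log path under the user's profile instead of the root of the drive. On Windows the canonical call is SHGetFolderPath with CSIDL_APPDATA; this portable sketch reads %APPDATA% (with $HOME as a POSIX fallback) instead. "Contoso" and the file name are placeholders.]

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int build_log_path(char *out, size_t out_size)
    {
        const char *base = getenv("APPDATA");   /* Windows profile dir */
        if (base == NULL)
            base = getenv("HOME");              /* POSIX fallback */
        if (base == NULL)
            return -1;   /* no profile found: fail rather than
                            falling back to the root of the drive */
        snprintf(out, out_size, "%s/Contoso/debug.log", base);
        return 0;
    }

    int main(void)
    {
        char path[512];
        if (build_log_path(path, sizeof(path)) == 0)
            printf("log file: %s\n", path);
        return 0;
    }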

  15. kbiel says:

    "There’s no standard for where log files get put"

    I disagree; there’s the Event Log service. I understand that some programs that must run on older (non-NT-kernel) Windows systems cannot rely on the event log being available, but device drivers certainly don’t fall into that category.

  16. Mark Steward says:

    How about the geniuses at my University’s computer services department, who have a huge virtual infrastructure for their "clusters" (rooms full of ICA clients)? These are network booted to W2k, with locked cases and a special shell that only allows you to connect, change volume, etc.

    And then they leave the ICA configuration file writeable, so anybody can turn on key logging.

  17. Jay B says:

    Certainly I would agree with you that logs should go into the user’s profile directory. The Event Log isn’t really suited for a wide spectrum of logging output.

    However, since there is no facility that leads developers down the righteous path (such as my post about CSIDL_LOGFILES for use with SHGetSpecialFolderPath, but ideally a more robust logging solution as yet unseen), the standard/best practice isn’t going to be conformed to.

    Make it easy to do it the right way, and the "clueless" will follow.

  18. The Windows security model is based on identity.

Comments are closed.
