Exploring the problem of creating a file that only one process can write to


A customer liaison explained that they had a customer who wanted to create a file that only one process can write to. The customer has a program that writes important information to a file, and they want to prevent the user from modifying that file. The program is running under the credentials of the logged-in user, so they cannot deny write access to the file, because that would prevent the program itself from being able to write to it. They considered locking the file by denying sharing, but that would be effective only while the program is running.

This is a difficult position right off the bat, because permissions belong to users, not to processes. Since the program is running under the user's credentials, the user has full control over the process and can gain access to the sensitive file by stealing the file handle out of the process. Any solution would therefore require the involvement of more than one set of user credentials.

One solution is to have a service. The sensitive file is accessible only to the account under which the service runs. The application contacts the service, and it is the service which writes the data to the file.
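
A minimal sketch of the client half of that arrangement, assuming the service listens on a named pipe (the pipe name `\\.\pipe\LogService` is made up, and the pipe is assumed to be a message-mode, duplex pipe). The service, running under its own account, is the only principal with write access to the log file; the client can only hand it data.

```cpp
#include <windows.h>

// Hypothetical client: hand one log entry to the logging service.
// The service account, not the user, owns the log file.
bool SendToLogService(const char* entry, DWORD length)
{
    char reply[1];
    DWORD replyLength = 0;
    // CallNamedPipe connects, writes one message, reads one reply,
    // and disconnects, all in a single call.
    return CallNamedPipeA("\\\\.\\pipe\\LogService",
                          const_cast<char*>(entry), length,
                          reply, sizeof(reply), &replyLength,
                          NMPWAIT_USE_DEFAULT_WAIT) != FALSE;
}
```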

Mind you, this is still vulnerable: The user can attack the program and manipulate the parameters that it passes to the service. The service cannot trust the data received from the program because the program could be passing false data.

The customer liaison explained that the customer wants to prevent the end users from tampering with the program's log file. The program is monitoring employee activities, and the customer has found that in many cases, employees report issues with the program, and when they debug the problem, they discover that the log files have been tampered with. To prevent tampering, they want the file to be writable only by the program generating the log.

I noted that the fact that the employee is tampering with the log file should be a significant data point in building a case against them. After all, if the user doesn't want the information to be logged, they can reformat the hard drive that contains the log file.

The customer liaison thanked us for our feedback and reported that the customer decided to use a separate account for accessing the log file. To avoid the complexity of a Windows service, the customer is simply having the program temporarily impersonate the special account, write the data to the log file, and then stop impersonating. The password for the special account is stored in an encrypted configuration file, which is how they are currently storing the password to their database.
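
The pattern the customer described looks roughly like this (the account name is a placeholder, and the password-decryption step is elided); this is the approach the next paragraphs criticize:

```cpp
#include <windows.h>

// The customer's pattern: log on as a special account, impersonate it
// long enough to write the log entry, then revert. Note that anyone
// who can debug this process can recover the password.
bool WriteLogAsSpecialAccount(const wchar_t* decryptedPassword)
{
    HANDLE token = nullptr;
    if (!LogonUserW(L"LogWriter", L".", decryptedPassword,
                    LOGON32_LOGON_INTERACTIVE,
                    LOGON32_PROVIDER_DEFAULT, &token)) {
        return false;
    }
    bool succeeded = false;
    if (ImpersonateLoggedOnUser(token)) {
        // ... open the log file and write the entry as "LogWriter" ...
        RevertToSelf();
        succeeded = true;
    }
    CloseHandle(token);
    return succeeded;
}
```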

Okay, now you've created a system that would never pass a security review.

You think you're so smart by encrypting the password, but that doesn't add any security because the program itself must be able to decrypt it. An attacker can simply set a breakpoint in the program right after the code that decrypts the password, and now they have the password in the clear. With this password, they can not only manipulate their log files, they can also manipulate the log files of other users. Your original problem was a data tampering security vulnerability, but by giving the user the password to the special account, you added spoofing (the user can impersonate the special user and do anything that special user can do), information disclosure (obtaining access to log files for other users), and denial of service (locking the log file and preventing anybody else from accessing it).

And in fact, they have this insecure system already in production, since they admitted that they are already using this technique to record the password to their database.

The customer liaison thanked us for pointing this out and will advise the customer of these additional issues.

Another approach that doesn't involve a service is to use the system event log. The program can write entries to a custom event log, and you get to take advantage of existing infrastructure to collect logs across your organization.
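
A minimal sketch, assuming an event source named "MyMonitoringApp" (a hypothetical name) was registered at install time:

```cpp
#include <windows.h>

// Write one informational entry under a registered event source.
// Centralized collection and forwarding come free with the system.
void LogToEventLog(const wchar_t* message)
{
    HANDLE source = RegisterEventSourceW(nullptr, L"MyMonitoringApp");
    if (!source) return;

    LPCWSTR strings[] = { message };
    ReportEventW(source,
                 EVENTLOG_INFORMATION_TYPE,
                 0,        // category
                 0,        // event identifier
                 nullptr,  // no user SID
                 1,        // one insertion string
                 0,        // no binary data
                 strings,
                 nullptr);
    DeregisterEventSource(source);
}
```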

We never did find out what the customer ended up doing. But we hope it wasn't the thing about putting the password in an encrypted file (and giving everybody the decryption key).

Comments (49)

  1. kantos says:

    If they need that kind of oversight they might want to consider using Terminal Services instead… that way the server can run the monitoring process as an admin and the users are sandboxed as standard users in their own Terminal Services session. That would alleviate any need to do any of this and would allow them to log using the Windows logging system… which is probably what they should have used in the first place.

  2. 12BitSlab says:

    When I have had requirements like this in the past, I send the log data off-box to a server via TCP. The program on the other end then has responsibility for writing the log files. To make this secure, one has to deal with certificates on both ends, encrypting the data stream, and other issues as well. (A bare-bones sketch, minus the security layer, follows below.)

    BTW, what the customer wants to do would be trivial on an AS/400 using owner adopted authority.
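
    (A bare-bones sketch of the TCP push described above; the host name and port are placeholders, and the certificate/TLS layer is omitted.)

    ```cpp
    #include <winsock2.h>
    #include <ws2tcpip.h>
    #pragma comment(lib, "ws2_32.lib")

    // Push one log entry to a (hypothetical) remote collector over TCP.
    // The process on the other end owns the log files.
    bool SendLogEntry(const char* entry, int length)
    {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return false;

        addrinfo hints = {}, *result = nullptr;
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        bool ok = false;
        if (getaddrinfo("logs.example.com", "5140", &hints, &result) == 0) {
            SOCKET s = socket(result->ai_family, result->ai_socktype,
                              result->ai_protocol);
            if (s != INVALID_SOCKET) {
                if (connect(s, result->ai_addr, (int)result->ai_addrlen) == 0) {
                    ok = send(s, entry, length, 0) == length;
                }
                closesocket(s);
            }
            freeaddrinfo(result);
        }
        WSACleanup();
        return ok;
    }
    ```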

    1. IanBoyd says:

      The problem with the idea of sending logs off to a logging server is that you’re still at the mercy of the client application.

      We’re already in a world where the user can debug their own program. That means they can alter logs about to be sent, or they can completely `nop` or `jmp` around logging.

      1. cheong00 says:

        There’s no need to send the log entries in plain text.

        In my experience, if there's no requirement for the users to be able to read their own records, then without the “map” to the message numbers you use, it'd be very difficult for them to modify the program to generate data in a meaningful way without generating a few suspicious entries first. If there are reports to catch honeypot messages (some messages must only appear after some other message), there would be a very high chance of catching any employee who tries to tamper with it.

        When you design it to store data on a remote server and only allow “insert” but not “update” or “delete”, I would be satisfied that such a “security measure” is enough.

  3. jake says:

    If the integrity of the log file is so important, I would investigate using a blockchain. Send new blocks out via TCP and UDP to different hosts, and keep a local copy of the bitstream on a separate raw partition :-) Let them try to modify that successfully, if they can even find it!

    1. Clockwork-Muse says:

      ….but why? The problem is that the blockchain isn’t being secured by the users (presumably the company would be managing the keys and infrastructure), so the company could still rewrite the logs whenever they feel like (since a work-factor based chain would likely be too expensive). At that point you have a distributed copy of a centrally managed log, which, while possibly nice from a recovery point of view, isn’t otherwise helpful (especially since I’d imagine most logging tools can set backup log locations). That’s ignoring the fact you have to be very careful about what you write to a publicly viewable distributed log, less so about a central one which can be access controlled.

    2. IanBoyd says:

      The problem with the client application sending logs out via UDP to be incorporated into a blockchain, is that you still have to trust the client application.

      We’re already in a world where the user is attaching debuggers to the process (to read database credentials); they can then certainly use the debugger to `nop` or alter logging before it goes out.

      Of course, we’re not talking about **security**, we’re talking about **defense in depth**.

      1. Yup. “The user can attack the program and manipulate the parameters that it passes to the service.” The user can make the program log “The user is being super-productive” instead of “The user is playing Minecraft.”

        1. ender9 says:

          Wouldn’t this be resolved by the logging program running under its own account while the user’s account is a standard user?

          1. cheong00 says:

            I think it’s supposed NOT to be a locked-down machine.

            If the user is just a standard user, and internet access goes through a proxy with proper ACLs set, there isn’t much of interest the user can do on the machine, and such a program would not be necessary. (Why measure the amount of inactive time instead of trying to measure the amount of work done?)

  4. Ken in NH says:

    There is a solution that doesn’t require special permissions or logs or anything: if you find a user is tampering with logs to hide data, fire them. If they can’t be trusted then why do you continue to employ them?

    1. PJH says:

      This. The problem should be solved via (HR) policy, not by trying to code around it.

    2. Yup, I said as much. “I noted that the fact that the employee is tampering with the log file should be a significant data point in building a case against them.”

    3. Tim says:

      I’m guessing they were more worried about the employees who were tampering with the log files and weren’t stupid enough to be so obvious that they got caught.

      1. Is it just me, or has anyone else also noticed that this is a case where the zeroth law of security (a.k.a. law #3 of Scott Culp’s ten immutable laws of security) applies? The user is in control of the system and probably has physical access too, so theoretically, defeating the system is only a matter of time (and, by extension, knowledge).

        What should be done here is:
        – Remove physical access.
        – Make it hard enough, i.e. make sure the system cannot be broken before the next audit.
        – Put audits in place.

        Debugging isn’t something that everyone can do, and there are ways to stop it, e.g. by implementing a whitelist-based app restriction policy. When the user cannot run a debugger, the user cannot debug. Then use asymmetric encryption (via the Windows API) to encrypt log entries and add a hash to them. This way, even if the log is carried off the system and the encryption key is compromised, the user still cannot readily decrypt the log. (It needs a time-consuming brute-force attack, which takes longer than the next audit’s deadline.) Keep the layout of the log unknown to the user, and he or she cannot fabricate a wholly new log either.
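
        (To illustrate the hashing half of this idea, here is a sketch using CNG. Chaining each entry's hash with the previous one, so that removing or editing an entry breaks the chain, is an embellishment for illustration, not necessarily what was meant above; the encryption half is omitted.)

        ```cpp
        #include <windows.h>
        #include <bcrypt.h>
        #pragma comment(lib, "bcrypt.lib")

        // Hash a log entry together with the previous entry's hash, so
        // the log forms a chain: altering or dropping an entry breaks
        // every hash that follows it.
        bool ChainHash(const BYTE* previousHash, DWORD previousLength,
                       const BYTE* entry, DWORD entryLength,
                       BYTE output[32]) // SHA-256 digest size
        {
            BCRYPT_ALG_HANDLE algorithm = nullptr;
            BCRYPT_HASH_HANDLE hash = nullptr;
            bool ok =
                BCryptOpenAlgorithmProvider(&algorithm,
                                            BCRYPT_SHA256_ALGORITHM,
                                            nullptr, 0) == 0 &&
                BCryptCreateHash(algorithm, &hash, nullptr, 0,
                                 nullptr, 0, 0) == 0 &&
                BCryptHashData(hash, const_cast<BYTE*>(previousHash),
                               previousLength, 0) == 0 &&
                BCryptHashData(hash, const_cast<BYTE*>(entry),
                               entryLength, 0) == 0 &&
                BCryptFinishHash(hash, output, 32, 0) == 0;
            if (hash) BCryptDestroyHash(hash);
            if (algorithm) BCryptCloseAlgorithmProvider(algorithm, 0);
            return ok;
        }
        ```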

        But ultimately, all these only make sense when the auditor has the power to remove the disrupting user. As the zeroth law says, no system can permanently withstand a persistent disrupter.

    4. Metalhed666 says:

      We are always having people in our organisation trying to use technology to solve a ‘human’ problem; firing people makes you the bad guy – if you use technology to control people, you don’t have to do the horrible (but necessary) jobs such as firing people and can offload the blame for any problems to an inanimate piece of software.

  5. Joshua says:

    This would be a whole lot easier if Windows actually had a `chmod u+s` equivalent. (Yeah, I know such a thing can’t traverse networks.)

  6. Robin Stevens says:

    The employees are apparently tampering with log files in order to frustrate the developers’ efforts to debug the program?

    Why would an employee report a bug, then deliberately prevent a dev from fixing it?

    I suspect a bad dev claiming employee tampering as an excuse for writing shoddy code in the first place and being unable to fix it in the second.

    1. The employee tampers with the log file, and then some time later reports a problem with the program. They forgot that they had tampered with the log file, or they didn’t think the tampering would be detected, or they didn’t think the tampering was the cause of the problem. (Think of the people who apply unauthorized patches to Windows and then report bugs in it.)

      1. Robin Stevens says:

        The customer is reporting that users are complaining about errors in user-monitoring software, and the devs are asserting that the users are tampering with the log.

        From the context, I would presume that the issue could be with missing entries rather than changed ones.

        How does one prove that the user tampered with the log, rather than that a buggy program failed to write correct log entries in the first place?

        This is classic he-said/she-said, and (with a cynical eye) the customer’s solution (impersonation with encrypted credentials) has the appearance of being a strong solution, yet leaves enough wriggle room that the developers can still point the finger at a tampering user.

    2. Adrian says:

      Perhaps the employee is angry about the employer monitoring their computer use, and so complains that the monitoring software is slowing everything down and reducing productivity. In an effort to make a stronger case, they tamper with the log file to make it look like it’s doing more than it actually does, or with the timestamps to make it look like it’s slower than it actually is. Their goal isn’t to get the software fixed but to get it removed.

  7. IanBoyd says:

    Authenticating to a server is always a challenge. If the client has a set of credentials, then those credentials can be snooped on.

    People will argue for using Kerberos to authenticate yourself with the remote server (i.e. SQL Server). The problem with that, of course, is:

    – it doesn’t work where Kerberos is not available
    – it doesn’t add any security

    If you’re in the world where you’re afraid of a user attaching a debugger to their own process in order to steal database credentials (i.e. username and password), then they can also debug the application to alter T-SQL queries before they are sent to the database:

    – Information disclosure: they can alter the WHERE clause of a SELECT statement to return more data (obtaining access to log files for other users).
    – Denial of service: they can delete, drop, update, alter, or damage the logs – or run a long-running query that takes an update lock (locking the log file and preventing anybody else from accessing it).
    – Spoofing: they can identify themselves as another user (the user can impersonate the special user and do anything that special user can do).

    Even worse is if users are authenticating against the database server as “themselves” (i.e. Windows integrated authentication) rather than as another user (e.g. a SQL Server login): then they can use other applications to connect to the database directly, without even having to go through the bother and expense of using a debugger. Just use one of the hundreds of applications that can connect to a SQL Server data store (e.g. Excel), and start issuing queries.

    Kerberos/Active Directory authentication doesn’t add any security in this situation. It attempts to add defense in depth, but trades away a huge amount of defense elsewhere. Requiring a separate set of credentials that are not the credentials of the user is a better way to go if you want more defense.

    But the only security comes from users not having the ability to interfere with the running code; either:

    – move the code into the context of another user (e.g. another user on same machine; another machine)
    – deny the user PROCESS_VM_READ, PROCESS_VM_WRITE, and PROCESS_VM_OPERATION [1], and change the process owner to someone else (a sketch of the first half follows after the footnote)

    [1] Inside Windows Debugging – Table 3.1 – Win32 API Support for User-Mode Windows Debuggers
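
    A rough sketch of the access-rights half of that second option, assuming the process locks itself down against a particular user SID (anyone holding SeDebugPrivilege, such as an administrator, bypasses this):

    ```cpp
    #include <windows.h>
    #include <aclapi.h>

    // Add a deny ACE to the current process's DACL so that the given
    // user is refused the access rights a user-mode debugger needs.
    bool DenyDebugAccess(PSID userSid)
    {
        EXPLICIT_ACCESSW deny = {};
        deny.grfAccessPermissions =
            PROCESS_VM_READ | PROCESS_VM_WRITE | PROCESS_VM_OPERATION;
        deny.grfAccessMode = DENY_ACCESS;
        deny.grfInheritance = NO_INHERITANCE;
        deny.Trustee.TrusteeForm = TRUSTEE_IS_SID;
        deny.Trustee.TrusteeType = TRUSTEE_IS_USER;
        deny.Trustee.ptstrName = (LPWSTR)userSid;

        PACL oldDacl = nullptr, newDacl = nullptr;
        PSECURITY_DESCRIPTOR descriptor = nullptr;
        if (GetSecurityInfo(GetCurrentProcess(), SE_KERNEL_OBJECT,
                            DACL_SECURITY_INFORMATION, nullptr, nullptr,
                            &oldDacl, nullptr, &descriptor) != ERROR_SUCCESS) {
            return false;
        }
        bool ok =
            SetEntriesInAclW(1, &deny, oldDacl, &newDacl) == ERROR_SUCCESS &&
            SetSecurityInfo(GetCurrentProcess(), SE_KERNEL_OBJECT,
                            DACL_SECURITY_INFORMATION, nullptr, nullptr,
                            newDacl, nullptr) == ERROR_SUCCESS;
        if (newDacl) LocalFree(newDacl);
        LocalFree(descriptor);
        return ok;
    }
    ```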

    1. AndyCadley says:

      If you were doing it via SQL Server, you’d only grant their accounts permission to call a specific stored procedure. There wouldn’t have to be any way they could read or modify the database other than that. It’s not a perfect solution, but to some extent that depends on what you’re trying to prevent. When I’ve seen this sort of thing before, it’s not uncommon that the users are running with full admin rights anyway, which pretty much defeats the entire purpose.

  8. Cesar says:

    That sounded a bit like the “perfect attacker fallacy”.

    Step back a bit: who is modifying the log file? Most people aren’t high-level hackers who can debug an executable armed solely with Notepad and a stopwatch.

    In the original issue, the log file is most probably a simple text file, which the user edits by hand. Moreover, the user probably wanted to tamper with it after the fact. Sending every log entry to a service running under a separate account, which then writes it to a file owned by that account, would be enough to stop more than 90% of the tampering attempts. In fact, even a simple XOR of the log file with a constant key would probably be enough to foil more than 50% of the attempts.
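
    (For concreteness, the XOR trick is only a few lines; the key bytes here are made up. It stops a Notepad edit and nothing more, since the key necessarily ships inside the binary.)

    ```cpp
    #include <cstddef>

    // Obfuscate (or de-obfuscate; XOR is its own inverse) a log buffer
    // with a fixed key. Deterrence only; not cryptography.
    void XorBuffer(unsigned char* data, std::size_t length)
    {
        static const unsigned char key[] = { 0x5A, 0xC3, 0x3C, 0xA5 };
        for (std::size_t i = 0; i < length; ++i) {
            data[i] ^= key[i % sizeof key];
        }
    }
    ```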

    Of course, the “system event log” solution (which is a variant of the “service under a separate account” solution) is probably the best one. And the “encrypted password” solution is the worst one, since while the user might not be an advanced hacker, someone who invades the user’s account might be.

    1. zboot says:

      I don’t need to be an advanced hacker. I can just find some app written by one that does what I want.

      1. smf says:

        Hackers don’t usually have access to the crappy LOB apps that embed passwords. It could be worse: you could have posted them on GitHub. https://www.wired.com/2013/01/users-scramble-as-github-search-exposes-passwords-security-details/

  9. Yukkuri says:

    Just a reminder that the poor people tasked with implementing this probably know it is stupid, but they either do it or get fired themselves.

    1. Alex Cohn says:

      And probably they also have this logging program installed on their workstations, so they have a good reason to reduce their own suffering.

  10. Matteo Italia says:

    Once we “solved” a similar situation by having a clear-text and an encrypted log file (with a trivial but nonstandard cipher). The clear-text one is easy to read and gives the user the impression that he can tamper with it; the encrypted one is for us to see when we spot some “impossible” problem potentially caused by the user (who, as always, denies touching the configuration).

    Of course this is not actually secure, but:
    – the application runs on a machine where the user has full privileges, so it’s an unwinnable game by definition;
    – the threat model is “an unskilled employee is clumsily trying to hide a mistake”, not “the NSA is trying to read reserved data”; when the maximum sophistication of your adversary is editing text files (usually leaving broken rows around) you don’t really have to worry about him hacking your executable.

    This trivial solution has indeed helped us debug “impossible” problems on several occasions (and to spot and warn some clients who were quite keen on playing this kind of trick).

    So: in some cases you have to accept that a “perfect” solution is either impossible or extremely impractical, but fortunately often good enough is actually good enough.

    1. I’ve had to do something very similar, but instead of two log files, I wrote the information “in the clear” to a text file, and then stored the same information in an alternate data stream in the same file.
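
       (A sketch of what that might look like; the stream name “shadow” is made up. Alternate data streams require NTFS, and opening the file in Notepad shows only the unnamed stream.)

       ```cpp
       #include <windows.h>
       #include <string>

       // Append the same entry to the visible file and to a hidden copy
       // kept in an alternate data stream on that same file.
       void AppendBoth(const wchar_t* path, const char* entry, DWORD length)
       {
           DWORD written = 0;

           HANDLE visible = CreateFileW(path, FILE_APPEND_DATA,
                                        FILE_SHARE_READ, nullptr, OPEN_ALWAYS,
                                        FILE_ATTRIBUTE_NORMAL, nullptr);
           if (visible != INVALID_HANDLE_VALUE) {
               WriteFile(visible, entry, length, &written, nullptr);
               CloseHandle(visible);
           }

           std::wstring stream = std::wstring(path) + L":shadow";
           HANDLE hidden = CreateFileW(stream.c_str(), FILE_APPEND_DATA, 0,
                                       nullptr, OPEN_ALWAYS,
                                       FILE_ATTRIBUTE_NORMAL, nullptr);
           if (hidden != INVALID_HANDLE_VALUE) {
               WriteFile(hidden, entry, length, &written, nullptr);
               CloseHandle(hidden);
           }
       }
       ```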

      1. Scarlet Manuka says:

        Oh, nice! I like that one, very sneaky.

  11. Karellen says:

    Hang on, isn’t this sort of thing, where you need to give users the ability to use a cryptographic key, but not the ability to access it, or to debug the processes that access it, exactly what DRM was invented for? Can’t Windows do this already?

    1. Clockwork-Muse says:

      …if Windows could do this reliably, why are there so many failing (game) DRM attempts? Because the user has sufficient privileges (i.e., is admin) to run whatever they want, including accessing the raw key. If the user isn’t admin, the proper response isn’t to give them access to the key; it’s to implement something running under a separate account that they can pass data to and from. If the user must be admin, then you have to change “separate process” to “separate physical machine”.

  12. Antonio Rodríguez says:

    In recent years, I have started to think that giving applications/bundles/suites their own security identity would help solve many problems. This is one of them. Another is the user’s App Data problem: in the current model, any application (say, a “free” game with bundled adware/spyware) can mess with other applications’ private data (maybe a browser’s stored password database), because they all run under the same user identity.

    Of course, you can argue that a malicious application, or one with a security bug, can defeat this system. But if you can convince the user to install or run a malicious app, or manage to exploit a vulnerability, well, you are already on the other side of the airtight hatchway (and even then, with this model, you don’t get unlimited access to all of the user’s data).

    1. Cesar says:

      > In recent years, I have started to think that giving applications/bundles/suites their own security identity would help solve many problems.

      That’s Android’s security model: each application has its own separate user account.

      1. Medinoc says:

        Too bad they demand rights instead of requesting them: unless you’ve rooted your phone, you can’t install/update an app without giving it ALL the rights it wants.

        1. Antonio Rodríguez says:

          Right. It’s a nice example of a good concept ruined by a detail in the implementation. Users should be in control of privileges at all times. That would make it harder for developers, but who said our work had to be easy?

        2. Alex Cohn says:

          Yes, you can. Even if the app does not ‘target Android M’ and use runtime permissions itself, you can install it, deny any permission, and try to launch the app. Your mileage may vary.

          This was first introduced in CyanogenMod in 2011.

  13. Ivan K says:

    If the employee complained to IT about the program using all the disk or whatever, then that would be a counterpoint, I reckon. Though that depends on whether the program was DoS’ing him, and other unknowns. Communication is the key.

    1. Ivan K says:

      When I first read this I had it in my mind that employees were deleting (possibly very large) log files rather than modifying them, even though the blog title in bold face font suggests otherwise. Oops. That’s a whole different kettle of fish.

      1. smf says:

        We had users changing the date and time on an msdos based handheld to try to provide evidence against parking tickets. They couldn’t have been parked there, because they were delivering elsewhere.

        I think we might have ended up writing a TSR.

        1. Joshua says:

          I’d rather that stunt work than the nonsense we have to put up with with parking tickets now.

  14. Daniel Anderson says:

    Tampering with logs is not always as bad as people seem to think.

    I worked for a company that was so paranoid they forced us to use one computer to access the internet; that computer had a guest account for us to use. They also installed a key logger on that machine so they could spy on what we did when accessing the outside world.
    The key logger just dumped everything into a file using the guest account’s rights, nothing fancy. It happened that the log file was easily accessible once you knew where it was. I looked into those logs and could see my colleagues’ passwords when they were connecting to their bank accounts and other services that required a password.

    Of course I told my friends about it and how, using Notepad, they could erase that data from those log files.

  15. ZLB says:

    Surely, this is the sort of problem that you want a service for. (Because many users can be logged in at once!)

    How about running a service which creates a process in the user’s session, as the user, inheriting a handle to an IPC object (pipe, shared memory, etc.)?

    The process then writes the logs to the IPC handle. The user would have to inject code into or debug the process to fake or block logs.

    Have a ping message on the IPC object, and have the service kill the process if the user suspends its threads. Respawn the logger process if the user kills it.

    The service can save the logs using machine-local encryption to keep them tamper-proof.

  16. Karlis says:

    I did once see another solution. It is possible to set an append permission on a file, and then open the file with only append rights.
    So the user’s software has the rights to append to the log, but cannot delete or modify anything.
    Unfortunately, we had to use custom-written software for this purpose because, for example, when PowerShell’s Add-Content cmdlet is told to append to a file, it still tries to open the file with the write permission, not the append permission, and fails.
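
    (A minimal sketch of that append-only open: request only FILE_APPEND_DATA, so the handle can add bytes at the end of the file but cannot overwrite or truncate what is already there. The file’s ACL can then grant the user “append data” without “write data”.)

    ```cpp
    #include <windows.h>

    // Open a log file for appending only. With an ACL that grants the
    // user FILE_APPEND_DATA but not FILE_WRITE_DATA, this open succeeds
    // while an ordinary "write" open is denied.
    HANDLE OpenAppendOnly(const wchar_t* path)
    {
        return CreateFileW(path,
                           FILE_APPEND_DATA,   // append access only
                           FILE_SHARE_READ,
                           nullptr,
                           OPEN_ALWAYS,
                           FILE_ATTRIBUTE_NORMAL,
                           nullptr);
    }
    ```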

  17. Ivan says:

    Why not use dedicated service and service SIDs?

    1. Yup. “One solution is to have a service.” Of course, you still have the issue of the untrusted client sending bogus data to the service.
