Evaluating the security consequences of an instance of reading past the end of a buffer

A security vulnerability report came in that laid out the following scenario: By performing the correct sequence of operations in a program's UI, you can cause code to miscalculate the location of a string, so that the string it displays in an error dialog box points past the end of a buffer. The finder reported this as a remote code execution vulnerability.

Let's evaluate the severity of this issue.

First of all, there is no writing going on. The string is read from memory and displayed on the screen in a dialog box. The string is not written to, and the contents of the string are not used to control where data gets written. Furthermore, the contents of the string are not used to determine what code gets executed.¹ There is no opportunity to mutate values in memory or cause unusual code to be executed. The claim of remote code execution seems to be unjustified. (They never did explain how this could lead to remote code execution; they just reported it as such.)

What kind of vulnerability do we have, then? Well, if the miscalculated pointer happens to point to a memory block that does not contain any null characters before reaching an invalid page, then the code that renders the string will take an access violation trying to read the string. That's a denial of service.

The miscalculated string will print a garbage string from the process's memory. If the attacker can arrange for the miscalculation to point to some memory of interest, they can extract data from the process by reading it from the dialog box. That's information disclosure.

Okay, so how bad are these issues?

Recall that in order to trigger the issue, the user needs to interact with the program in just the right way. This means either that the attacker has to socially engineer the victim into performing those unusual operations, or that the attacker already has sufficient privileges to automate those operations. But if you have sufficient charisma to socially engineer the victim into performing those operations, or sufficient privileges to perform automation, then why are you wasting your time attacking this program? Just trick the user into typing (or automate) Win+R, \\hacker-site\exploit\pwnz0rd.exe, Enter, and start counting your money.

How bad is the information disclosure? Well, who is the information being disclosed to? The information is displayed on the screen in a dialog box. It's not copied to another location within the program that might be disclosed further, nor is it sent over the network or written to a file. You're disclosing the information to the user who is running the program. In general, this is not particularly interesting when you are showing the user information that they already have access to.

In order for the information to leave the computer, somebody would have to screen-scrape it. But if somebody has the ability to screen-scrape your computer as you're using it, they are getting far more valuable information than some bytes of memory from this one program.

In other words, in order for these issues to become security issues, the attacker must already have significant powers on the computer under attack, so much so that they could use those powers to do far more valuable things than get a program to display garbage on the screen.

We thanked the finder for their report but indicated that what they had found was a bug, not a meaningful security vulnerability.

¹ Well, technically it controls what the font rendering engine does, because you're printing different characters. Theoretically, if there's a bug in the font rendering engine where, say, something bad happens if you ask it to draw a particular character, you could try to arrange for that character to be in the garbage string. Of course, an easier way would be to use that character in, say, the name of your file, so the font rendering engine crashes when it tries to put the name of the file in the title bar of the window.

Comments (29)
  1. Antonio Rodríguez says:

    Maybe the program maintains in memory information that the user isn’t allowed/supposed to see (perhaps credentials for a web service, or a private key, in clear text?). Then, the user could use the “attack” to reveal them. But, anyway, the user could do this with a memory browser, so it would only reveal the real problem (in this hypothetical case, storing clear text secrets).

    1. Simon Clarkstone says:

      How about if the program was running at a higher privilege level than the programs the user could normally run, so the user would not normally be able to view the memory of the program except via this bug? (Like how setuid and setgid programs in Linux are used to regulate access to some resources.)

      That’s contrived though, and still not remote and not code execution.

      1. Patrick says:

        There were a number of classic bugs where you could get SUID binaries to dump a world-readable core containing /etc/shadow. Or outright spit it out (for example, by having them attempt to use it as a configuration file and report syntax errors showing the contents of each line).

    2. Steve says:

      “Maybe the program maintains in memory information that the user isn’t allowed/supposed to see (perhaps credentials for a web service, or a private key, in clear text?).”

      But surely nobody would be stupid enough to let a bug that significant through into production! Ah.

      1. Clockwork-Muse says:

        Yeah, but that’s not an overrun. Assuming the password was stored hashed (which it should be, or you could just read it off the disk), the most likely thing that happened is that it was being _set_ to the password on creation. Probably, something that was done for debug/test purposes, and never got removed. The error here was faulty programmer logic, not faulty program logic.

        1. GregM says:

          >the most likely thing that happened is that it was being _set_ to the password on creation

          That’s what the article says:
          “If a hint was set in Disk Utility when creating an APFS encrypted volume, the password was stored as the hint. This was addressed by clearing hint storage if the hint was the password, and by improving the logic for storing hints.”

    3. DWalker07 says:

      But, a user can display all of memory in his own computer.

      1. DWalker07 says:

        That was a reply to “Maybe the program maintains in memory information that the user isn’t allowed/supposed to see (perhaps credentials for a web service, or a private key, in clear text?).”

  2. xcomcmdr says:

    Do people get chocolate medals if they say “remote code vulnerability” in a report ?

    Why do so many clearly bogus/fake/ridiculous security “reports” get sent to Microsoft?

    1. Clockwork-Muse says:

      Because so many people use their products. It’s like having a grocery store – eventually you’re going to be popular enough that _somebody_ is going to say “Yes, I get you’re a grocery store, now, where do you stock the hammers and nails?”.

      1. Antonio Rodríguez says:

        Furthermore, there is a (supposed) status in saying “look, Bill Gates may be the richest man on earth, but I’m smarter than him”.

    2. cheong00 says:

      Microsoft has a tradition of thanking people who give them information about vulnerabilities.

      This gives security researchers an incentive to file reports so they can get their name listed for fame. (I think it helps to have a link on the official website of a major software vendor among your achievements if you’re going to apply for a position at these companies.)

  3. Mason Wheeler says:

    “First of all, there is no writing going on.”

    That doesn’t inherently make it not a severe bug. There was no writing going on in Heartbleed, but it was still one of the most serious buffer overrun exploits of all time.

    1. GWO says:

      True, but that comes under the “Information Disclosure / Exfiltration” part of the post.

      Heartbleed was a disaster because it not only allowed information disclosure of attacker-controlled *very* *very* secret information (private keys), but also sent it over the internet to anyone who knew how to request it. This bug requires a local attacker and produces non-exfiltrated disclosure of non-attacker-controlled data.

      1. Simon says:

        Yeah, that’s the key thing. It’s certainly in the same category of bug as Heartbleed… but the seriousness of Heartbleed was three-fold… it could be exploited remotely, it revealed highly sensitive data, and it could at least partly be targeted. The bug described here, not so much.

      2. Medinoc says:

        The information returned by Heartbleed was attacker-controlled? I thought it was just whatever 64k bytes happened to follow the allocated buffer in memory… Of course, successive attacks with brand-new connections would cause such buffers to be reallocated elsewhere, allowing to probe most of the server process’s memory over time even without attacker control.

  4. smf says:

    Just because the user is able to run a program that has access to some data, doesn’t mean the user should have access to it.

    Some users are hostile, you’re lucky if you’ve avoided them.

  5. Danny says:

    OK Raymond, you've told plenty of stories throughout the years where people reported false security vulnerabilities, where they were already on the other side of the airtight hatchway, and so on. But how about one story where an actual vulnerability was reported? Because every 2 or 3 months we read about another Windows zero-day vulnerability that was used to gain access to people's or organizations' computers. When do we get one of those for a change?

    1. “Somebody reported a vulnerability in X. It was valid.” And really, I’m not allowed to write even that because I shouldn’t be saying what X is. So we’re down to “Somebody reported a vulnerability. It was valid.” And I probably am not allowed to write even that much.

      1. Danny says:

        Not even for old timers like Windows 95 or Windows 98? You know us, we love those quirky stories. I bet you are allowed to tell us a full story about one of those. It ain’t like anything you share is of any actual value from a technological point of view, but from a historical point of view it has tremendous value.

        1. Tanveer Badar says:

          Some *really* old systems could still be running one of those. Disclosing vulnerabilities would give attackers tools to reach their goal.

  6. Joshua says:

    I remember one long ago where the security bug did indeed involve the font renderer. It mattered somewhat what the characters were, but mostly what mattered was the sheer length of the string: it had to be tens of kilobytes long, so there was no possibility of exploiting it via a filename.

    On the other hand, this yielded a direct-to-kernel exploit from a large textarea in a webpage served off the internet. Thank you Nxxxxx.

    1. Joshua says:

      a large in should read a large textarea in. The blog ate the literal tag.

  7. cheong00 says:

    Well, I think it depends on the nature of the application.

    If that application reads from a database, and the next string variable after the “input” is part of a SQL statement, there could be a valid “information disclosure” type of vulnerability. (In my experience, not many companies have implemented schema- or table-based access control on their database server. Most have, at best, implemented database-level access control only. So being able to alter the SQL statement means an attacker may read data they shouldn’t be able to read.)

    1. Joshua says:

      Attach a debugger. Change the SQL statement.

      1. cheong00 says:

        I think if you count a “vulnerability” that needs a debugger to work as a vulnerability, then all applications that include parts running under an impersonation context fail.

      2. Tanveer Badar says:

        If you can attach a debugger, you are already past the hatchway. Most of the time.

  8. M says:

    “But if you have sufficient charisma to socially-engineer the victim into performing those operations, or you have sufficient privileges to perform automation, then why are you wasting your time attacking this program?”

    I think these two are far from equivalent. The “if you have sufficient privileges” argument always holds, but social engineering is tricky: some users might be fooled into clicking some icons or entering commands into a program that is already on their computer, but not into typing a URL or running a downloaded executable.

Comments are closed.
