Threat Modeling Again, Threat modeling and the firefoxurl issue.


Yesterday I presented my version of the diagrams for Firefox’s command line handler and the IE/URLMON’s URL handler.  To refresh, here they are again:

 Here’s my version of Firefox’s diagram:

 And my version of IE/URLMON’s URL handler diagram:

 

As I mentioned yesterday, even though there’s a trust boundary between the user and Firefox, my reading of the original design for the Firefox command line parsing is that this was considered an acceptable risk[1], since there is nothing that the user can specify via the chrome engine that they can’t do from the command line.  In the threat model for the Firefox command line parsing, this assumption should be called out, since it’s important.

 

Now let’s think about what happens when you add the firefoxurl URL handler to the mix.

 

For that, you need to go to the IE/URLMON diagram.  There’s a clear trust boundary between the web page and IE/URLMON.  That trust boundary applies to all of the data passed in via the URL, and all of that data should be considered “tainted”.  If your URL handler is registered using the “shell” key, then IE passes the URL to the shell, which launches the program listed in the “command” verb, replacing the %1 value in the command verb with the URL specified (see this for more info)[2].  If, on the other hand, you’ve registered an asynchronous protocol handler, then IE/URLMON will instantiate your COM object and will give you the ability to validate the incoming URL and to change how IE/URLMON treats the URL.  Jesper discusses this in his post “Blocking the Firefox“.
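A “shell”-style registration is nothing more than a few registry values.  The sketch below shows roughly what such a registration looks like; the protocol name is real, but the executable path and command line options are illustrative, not the actual Firefox registration:

```reg
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\firefoxurl]
@="Firefox URL"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\firefoxurl\shell\open\command]
@="\"C:\\Program Files\\Mozilla Firefox\\firefox.exe\" -url \"%1\""
```

When the user clicks a firefoxurl: link, the shell substitutes the raw URL for %1 and launches the resulting command line.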

The key thing to consider is that if you use the “shell” registration mechanism (which is significantly easier than using the asynchronous protocol handler mechanism), IE/URLMON is going to pass that tainted data to your application on the command line.

 

Since the firefoxurl URL handler used the “shell” registration mechanism, the URL from the internet is passed directly to Firefox’s command line handler.  But this violates the assumption that the Firefox command line handler made – that the command line was authored with the same level of trust as the user invoking Firefox.  And that’s a problem, because now you have a mechanism for any internet site to execute code on the browser client with the privileges of the user.
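To make that concrete, here’s a minimal Python sketch of what the “shell”-style invocation does: a plain textual substitution of the raw, unencoded URL into the registered command template.  The handler path and option names here are hypothetical; the point is that an unencoded double quote in the URL closes the quoted %1 argument, and everything after it becomes additional command line text:

```python
# Hypothetical command template from a "shell"-style URL handler
# registration (the real Firefox registration may differ).
TEMPLATE = '"C:\\Program Files\\handler.exe" -url "%1"'

def build_command_line(url: str) -> str:
    # The shell substitutes the URL for %1 textually; nothing
    # URL-encodes or quote-escapes the attacker-controlled string first.
    return TEMPLATE.replace("%1", url)

print(build_command_line("firefoxurl:example"))
# "C:\Program Files\handler.exe" -url "firefoxurl:example"

# An embedded quote terminates the -url argument, and the rest of the
# "URL" is parsed as brand new command line options.
print(build_command_line('test" -injected "payload'))
# "C:\Program Files\handler.exe" -url "test" -injected "payload"
```

Nothing in this path validates the URL, so whatever options the handler’s command line parser supports are reachable from a web page.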

 

How would a complete threat model have shown that there was an issue?  The Firefox command line threat model showed that there was a potential issue, and the threat analysis of that potential issue showed that the threat was an accepted risk.

When the firefoxurl feature was added, the threat model analysis of that feature should have looked similar to the IE/URLMON threat model I called out above – IE/URLMON took the URL from the internet, passed it through the shell and handed it to Firefox (URL Handler above).  

 

So how would threat modeling have helped to find the bug?

There are two possible things that could have happened next.  When the firefoxurl handler team[3] analyzed their threat model, they would have realized that they were passing high risk data (all data from the internet should be treated as untrusted) to the command line of the Firefox application.  That should have immediately raised red flags because of the risk associated with the data.

At this point in their analysis, the firefoxurl handler team needed to confirm that their behavior was safe, which they could do either by asking someone on the Firefox command line handling team or by consulting the Firefox command line handling threat model (or both).  At that point, they would have discovered the important assumption I mentioned above, and they would have realized that they had a problem that needed to be mitigated.  The actual form of the mitigation doesn’t matter for the purposes of this discussion – I believe that the Firefox command line handling team removed their assumption, but I honestly don’t know.

 

As I mentioned in my previous post, I love this example because it dramatically shows how threat modeling can help solve real world security issues.

I don’t believe that anything in my analysis above is contrived – the issues I called out above directly follow from the threat modeling process I’ve outlined in the earlier posts. 

I’ve been involved in the threat modeling process here at Microsoft for quite some time now, and I’ve seen the threat model analysis process find this kind of issue again and again.  The threat model either exposes areas where a team needs to be concerned about their inputs or it forces teams to ask questions about their assumptions, which in turn exposes potential issues like this one (or confirms that in fact there is no issue that needs to be mitigated).

 

Next: Threat Modeling Rules of thumb.

 

[1] Obviously, I’m not a contributor to Firefox and as such any and all of my comments about Firefox’s design and architecture are at best informed guesses.  I’d love it if someone who works on Firefox or has contributed to the security analysis of Firefox would correct any mistakes I’m making here.

[2] Apparently IE/URLMON doesn’t URLEncode the string that it hands to the URL handler – I don’t know why it behaves that way (probably for compatibility reasons), but that isn’t actually relevant to this discussion (especially since all versions of Firefox before 2.0.0.6 seem to have had the same behavior as IE).  Even if IE had URL encoded the URL before handing it to the handler, Firefox is still being handed untrusted input, which violates a critical assumption made by the Firefox command line handler developers.

[3] Btw, I’m using the term “team” loosely.  It’s entirely possible that the same individual did both the Firefox command line handling work AND the firefoxurl protocol handler – it doesn’t actually matter.

Comments (26)

  1. JeffCurless says:

    Based on the enormous lack of comments, perhaps it would be better to move on to another topic.  People just aren’t interested in threat modelling.  Heck, even the tetris post got more comments than most of the threat modelling ones…..

  2. Ben Fulton says:

    I would like to see the threat modeling continued.

  3. Jeff, I’ve got about 2 or 3 more posts in the series (probably the rest of the week).

    Ben, I’m not planning on stopping ’til I’m done.

  4. William Davidson says:

    This blog used to be interesting, seriously.

  5. Harry Johnston says:

    For the record, it seems to me that the critical assumption made by Firefox (and many other developers) was that a registered URI handler would only be handed legal URIs.  Note that it wouldn’t matter if the URIs were malicious, just so long as they met the syntax requirements.

    In fact the specification for calling URI handlers not only fails to require that the URI passed be legal, but actually requires that legal URIs be converted into illegal ones by improperly decoding them!

    However, your point stands; threat modeling would probably have called out the discrepancy between this (IMO reasonable) assumption and the actual specification.

  6. some one says:

    Is it this blog or threat modeling?

    If it is threat modeling, it is funny how we complain about security issues, yet when a solution comes along that is not fun to play with, like threat modeling, we dismiss it.

    If it is about the blog, I would say no – it’s good light reading. Imagine for a moment if Firefox had done something of this nature – would the issue be there? So is the issue not having some form of "Threat Modeling" in place, or was it the handling of the URL being passed?

  7. Harry, I’m not 100% sure if that’s true.  At a minimum, the firefoxurl handler team should have realized that they were going to receive non-URL-encoded strings (it’s really easy to verify this) and then built their assumptions accordingly.

    And the fact that Firefox before FF 2.0.0.6 used to have exactly the same behavior as IE when it invoked registered URL handlers leads me to believe that even if IE’s behavior is a mistake (and I’m not sure it is), someone on the Firefox team believed that IE’s behavior was correct.

  8. Harry Johnston says:

    I don’t think it was unreasonable for the firefoxurl handler team to assume that the strings being passed would be legal URIs.  Remember that encoding is mandatory in URIs; any URI with unencoded characters outside of the permitted set is illegal.  (Note that I’m not arguing that making this assumption wasn’t a mistake; it was.  But it was an easy mistake to make, and a common one.)

    Neither Firefox nor IE validates the legality of the URL as it passes from the web page to the browser, across the trust boundaries in your diagram for IE.  Personally, I think they should, but I’m told this might cause compatibility issues – the argument seems to be that browsers have accepted illegal URIs for so long that doing so is a de facto standard.  Oh well. 🙂

  9. Ben says:

    Me too, I like this topic. I just don’t have anything to add, so I’d best stay a quiet listener. 🙂

  10. Cheong says:

    The above post should be me… I was thinking of inserting "Ben:" in the beginning… but… 😛

  11. Gabe says:

    I believe that back in the olden days, the idea was that you could make URL handlers out of ordinary programs. That way you could have a URL like "telnet:foobar.com" or "mailto:foo@bar.com" and the program intended to receive it wouldn’t have to know about URLs. Of course nowadays anything designed to be a handler would know about URLs, but this facility was designed for programs written before browsers were popular.

    I don’t understand why the FireFox people didn’t just create a "-unencodedurl" command line option for untrusted input in the first place.

  12. Triangle says:

    "But this violates the assumption that the Firefox command line handler made – they assume that their command line was authored with the same level of trust as the user invoking firefox.  And that’s a problem, because now you have a mechanism for any internet site to execute code on the browser client with the privileges of the user."

    This isn’t a problem. If you’re running code at all under Windows, you fully have the authority to do anything the user can do using simulated mouse clicks and key presses. (Which I believe is entirely broken.) The problem is that you can install a URL extension that any website is able to exploit. Websites should be able to do little more than give interactive information to the user, and receive input from the user. Allowing a website to do something the user didn’t authorize is in itself a security hole.

  13. _^_ says:

    Sorry for posting twice like this, but is there some way to configure it to warn the user that they are letting arbitrary code execute on their machine whenever one of these URL extensions is invoked? Because I didn’t consider how useful it could be in certain circumstances to let a website be able to use it.

  14. Gabe: They did – they added a "-url:'<url>’" command line option.  But they stopped parsing the -url argument when they got to the closing quote (which is fine if you trust your caller).

    _^_: I believe that both Firefox and IE warn when launching an external handler, but I’m not sure.

    Triangle: The assumption made by the command line parsing folks was just fine.  The firefoxurl logic violated that assumption because now the command line could be authored by an attacker.

    The "simulated mouse clicks and key presses" problem is an interesting one – essentially it boils down to "you can’t trust the return address" (http://blogs.msdn.com/oldnewthing/archive/2004/01/01/47042.aspx) – anything that the OS can do to indicate validity of the keypresses can be spoofed.

  15. Jeff says:

    Another lurker saying thanks for the threat modelling posts. It’s your blog, write what you want, though I thought the threat modelling is fascinating and looking at a recent high profile security issue especially so.

  16. Norman Diamond says:

    "I don’t think it was unreasonable for the firefoxurl handler team to assume that the strings being passed would be legal URIs."

    Oh sure.  And it also wasn’t unreasonable for a kernel API to assume that all addresses passed to it would be legal, and it also wasn’t unreasonable for an e-mail program to assume that all HTML constructions in incoming e-mail would be legal.  OK then.  To deal with the real world you have to abandon reason.

    Threat modeling sucks.  You got a problem with that?

  17. Harry Johnston says:

    Norman: the situations aren’t comparable.  A kernel API may be called directly by malicious code, so it has to cope with invalid data.  Incoming e-mail is obviously untrustworthy.  The URI protocol handler, on the other hand, is called by a nominally trustworthy piece of code running in the same security context; if the IE developers had chosen to validate the syntax of the URI as part of the URI handler specification, there wouldn’t have been an issue.

    I’ve already said that Threat Modeling would probably have identified the problem in Firefox, by pointing out the discrepancy between the expected behaviour (of IE) and the documented behaviour.  It is clear that the bug was in Firefox.

    On the other hand, the recent Quicktime/Firefox vulnerability has made it very clear that relying on other people’s code to do the right thing is a bad idea.  Looked at this way, Larry’s original interpretation is probably the correct one; the mistake was in assuming the command line interface was trustworthy while also registering it in a way that made it available for other people’s code to (mis)use.

     Harry.

  18. Harry: FF should have never allowed script on the command line (and I believe they’ve removed it in FF 2.0.0.7).

    The command line should be treated as being just as hostile as the parameters passed into the PlaySound API, and just as hostile as a Word document located on your hard disk.

    The firefoxurl handler made the weakness associated with running scripts from the command line be blindingly obvious, but that’s not the only threat associated with the command line.

    I’m certain that with about 30 minutes of digging, I can find half a dozen elevation of privilege vulnerabilities associated with incorrect command line parsing in various applications.  For instance (going purely by memory – I’ve not bothered to look up the reference), I believe that sendmail used to have an overflow in its command line handling of the mail recipient name.  Couple that with the fact that on some systems sendmail was marked as always running as root, and that means that the mailto: url handler could be used to mount a remote root exploit.

    Blaming "other people’s code" for the vulnerability is just shifting the blame around – you MUST assume that the bad guy has full control of your environment; anything else is a recipe for disaster.

  19. Harry Johnston says:

    I don’t believe that refusing to trust the command line at all is a reasonable solution, because it blocks legitimate user functionality.

    Firefox 2.0.0.7 blocks certain command-line functionality to prevent exploitation of the quicktime issue, but it is my understanding that this was done only as a temporary emergency fix.  As a longer term solution it has been proposed that there be two separate executables, one of which provides unrestricted command line functionality and the other of which is used as the URI handler, associated with the file types, etc.

    Note that the quicktime issue also affects IE:

    <http://larholm.com/2007/09/19/quicktime-qtnext-0day-for-ie/>

     Harry.

  20. I want to wrap up the threat modeling posts with a summary and some comments on the entire process. Yeah,

  21. Hi Larry,

    I think you’re being unfair to the Firefox team here and worse than that, you’re missing a security problem.

    As far as I can tell from reading CVE-2007-3670 and its links, their mistake was to assume that IE would pass the hostile URI as a single command line parameter, as specified in the Windows API documentation.

    In fact, IE suffers from a parameter injection flaw which means that it can be manipulated into passing more than one command line argument to FF.

    If we were talking about function calls here, it’s as if Firefox registered a callback with IE but IE called a different function instead. e.g. instead of IE calling callMe(String handleUntrustedURI) it calls callMe(String nice, String safe, String params).

    In effect, IE modifies the incoming, untrusted data and disguises it as internally generated, trusted data. If it just passed the untrusted data as expected then FF couldn’t be exploited.

    It seems to me that saying that this is Firefox’s fault, is exactly analogous to blaming a database engine for a SQL injection flaw in a web application.

    Of course, the Firefox devs can make their app. more robust once they know that IE can’t be trusted to pass data without modifying it, and this is exactly what they’ve done. However, the bug in IE remains and will continue to cause security problems with any other app that registers a URI handler and can handle more than one command line argument.

    cheers,

    Richard

  22. Richard, how does IE take untrusted data and disguise it as internally trusted data?

    The documentation for the "open" verb mechanism pretty clearly says that it just hands the command line to the handler – the instant you register a command line handler, you’re saying that you are going to strictly validate your command line.

    Given that Firefox has the EXACT SAME validation error (until Firefox 2.0.0.7, when they changed it to be different from IE’s behavior), it seems likely to me that this was known to the FF team.

  23. Hi Larry,

    I was referring to the way that IE can be tricked into calling the Firefox command line with multiple parameters instead of the single parameter registered with the URL handler.

    I suspect that the FF developers didn’t think it was possible for the FF command line to be called with multiple parameters with untrusted data. This is why I likened IE’s behaviour to "disguising" untrusted data. Perhaps I should have picked a better phrase.

    I think we have different interpretations of the MSDN documentation for URL handlers. This page

     http://msdn2.microsoft.com/en-us/library/aa767914.aspx

    says that "If the specified shell\open command specified in the Registry contains a %1 parameter, Internet Explorer passes the URI to the registered protocol handler."

    %1, %2 etc. are usually used to indicate different command line arguments, e.g. in batch file handling. To me, this clearly indicates that the URI will be passed to the receiving application as a single command line argument.

    If my interpretation is correct, then any application that passes multiple arguments to a URL handler is failing to honour this contract.

    You seem to think that documentation suggests that applications should treat a URI as a command line, splitting it into multiple arguments before calling the URL handler. Perhaps you think that all of these arguments collectively are represented by the "%1" in the documentation. Is this what you believe? Is there any other MSDN documentation to support your view?

    cheers,

    Richard

  24. Ah.  I see the disconnect.  In Windows, the command line to a program is a single string (see the GetCommandLine API for details).

    For compatibility reasons, the C runtime library parses the string and presents the string as multiple arguments to the C application.  It’s entirely possible that there are command line arguments that would cause this parser to break the single command line into different strings.

    But from a Windows perspective, a command line is an opaque string.
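The distinction can be sketched in a few lines of Python.  The splitter below follows only the basic quoting rule (the real CRT and CommandLineToArgvW rules also handle backslash escapes, so this is a simplified illustration, not the actual algorithm):

```python
def split_command_line(cmdline: str) -> list[str]:
    """Split one opaque command line string into arguments using a
    simplified rule: whitespace separates arguments unless it appears
    inside double quotes.  (The real CRT rules also handle backslash
    escaping of quotes.)"""
    args, current, in_quotes = [], "", False
    for ch in cmdline:
        if ch == '"':
            in_quotes = not in_quotes
        elif ch.isspace() and not in_quotes:
            if current:
                args.append(current)
                current = ""
        else:
            current += ch
    if current:
        args.append(current)
    return args

# One opaque string in, five arguments out.
print(split_command_line('firefox.exe -url "test" -chrome "javascript:alert(1)"'))
# ['firefox.exe', '-url', 'test', '-chrome', 'javascript:alert(1)']
```

Run against the kind of command line a hostile URL could produce, the quote embedded in the "URL" has silently become an argument boundary: the single -url parameter the handler expected arrives as several unrelated arguments.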

  25. Hi Larry,

    Ok. In that case, I’ve completely misunderstood the documentation for "Registering an Application to a URL Protocol". Given the problems with URL handlers that have appeared recently, it would appear that I’m not the only one.

    Perhaps you’d be kind enough to ask the relevant team to improve the documentation.  It could be much clearer that any number of arguments can be passed to the target app including more than are listed in the handler registration. Some explicit examples with and without the use of quotes and with multiple arguments would clarify things enormously. It should also explicitly state the consequences if the URL contains unmatched quotes etc.

    Hopefully, this will prevent other developers from falling into the same trap.

    So just out of curiosity, can you use the URL handler registration to re-write the incoming command line, e.g.

    "c:\mycomplicatedthing.exe" -c -d "%2" -e "%1"

    cheers,

    Richard