Threat Modeling, Part 2 - Threats

The first post in this series discussed the first part of threat modeling: defining your assets and understanding how data flows through the design of your component.  Today, I want to talk about the threats themselves.

One of the key aspects of threats is that for a given design, the threats are static.  Change the design, and you change the threats, but for a given design, the threats don't change. 

This is actually a really profound statement, when you think about it.  For a given design, the threats against that design are unchanging.  The only thing that matters for a threat is whether or not the threat is mitigated (and the quality of the mitigation).

Another important aspect of a threat is that a threat applies to an asset.  If there's no asset affected, then it's not a threat.  One implication of this is that if you DO decide that you have a threat, and you can't figure out what asset is being threatened, then clearly you've not identified your assets correctly, and you should go back and re-evaluate your assets - maybe the asset being protected is an intangible asset like a privilege.

At Microsoft, each threat is classified based on the "STRIDE" classification scheme (for "S"poofing, "T"ampering, "R"epudiation, "I"nformation disclosure, "D"enial of service, and "E"levation of privilege).  The STRIDE classification describes the consequences of a particular threat (note that while some versions of the Microsoft documentation use STRIDE as a methodology for threat determination, it's really better thought of as a classification mechanism).
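To make the classification concrete, here's a minimal sketch in Python of how you might record STRIDE classifications in a threat model (the threat descriptions are made up for illustration).  Note that a single threat can fall into more than one category, which is why a combinable flag type fits:

```python
# Sketch: recording STRIDE classifications for threats in a threat model.
# The threat descriptions below are hypothetical examples.
from enum import Flag, auto

class Stride(Flag):
    SPOOFING = auto()
    TAMPERING = auto()
    REPUDIATION = auto()
    INFORMATION_DISCLOSURE = auto()
    DENIAL_OF_SERVICE = auto()
    ELEVATION_OF_PRIVILEGE = auto()

# A single threat can have several consequences, so Flag lets us combine them.
threats = {
    "Attacker replays a captured logon token":
        Stride.SPOOFING | Stride.ELEVATION_OF_PRIVILEGE,
    "Attacker floods the service with requests":
        Stride.DENIAL_OF_SERVICE,
    "Attacker reads the config file off the wire":
        Stride.INFORMATION_DISCLOSURE,
}

for description, classification in threats.items():
    print(f"{description}: {classification}")
```

The classification doesn't find threats for you; it just gives you a consistent vocabulary for describing what a threat's consequences would be.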

All threats are theoretical by their very nature.  The next part of threat analysis is to determine the attack vectors for that threat to ensure that there are no vulnerabilities associated with the threat.

Attack vectors come in two flavors, mitigated and unmitigated.  All the unmitigated attack vectors are vulnerabilities.  When you have a vulnerability, that's when you need to get worried - in general, a threat where all the attack vectors are mitigated isn't a big deal. 

Please note however: there may be unknown attack vectors, so you shouldn't feel safe just because you've not thought of a way of attacking the code. As Michael Howard commented in his article, that's why a good pentester (a tester skilled in gedanken experiments :)) is so valuable - they help to find vectors that have been overlooked.

In addition, mitigations may be bypassed.  Three years ago, people thought that moving their string manipulation from the stack to the heap would mitigate against buffer overruns associated with string data.  Then the hackers figured out that you could use heap overruns (and underruns) as attack vectors, which rendered the "move string manipulation to the heap" mitigation irrelevant.  So people started checking the lengths of their strings to mitigate against that threat, and the hackers started exploiting arithmetic overflows.  You need to continue to monitor the techniques used by hackers to ensure that your mitigations continue to be effective, because there are people out there who are continually trying to figure out how to bypass common mitigations.
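To make the arithmetic-overflow bypass concrete, here's a minimal sketch in Python, simulating the 32-bit unsigned arithmetic that C code of the era would have used (the buffer size and field names are made up):

```python
# Sketch of the arithmetic-overflow bypass: a length check that sums two
# attacker-controlled lengths can wrap around in 32-bit arithmetic.
MASK32 = 0xFFFFFFFF
BUFFER_SIZE = 256

def naive_length_check(header_len: int, body_len: int) -> bool:
    """Buggy check: the sum wraps around in 32-bit arithmetic."""
    total = (header_len + body_len) & MASK32   # wraps on overflow
    return total <= BUFFER_SIZE                # passes even though each part is huge

def safe_length_check(header_len: int, body_len: int) -> bool:
    """Mitigated check: reject each field individually before summing."""
    if header_len > BUFFER_SIZE or body_len > BUFFER_SIZE:
        return False
    return header_len + body_len <= BUFFER_SIZE

# An attacker-chosen pair whose 32-bit sum wraps to a small value:
# 0xFFFFFF00 + 0x200 = 0x100000100, which truncates to 0x100 == 256.
header, body = 0xFFFFFF00, 0x200
print(naive_length_check(header, body))  # True  - the overflow slips through
print(safe_length_check(header, body))   # False - the mitigation holds
```

The lesson is the same as in the paragraph above: "check the lengths" was a mitigation until the attackers found a way to make the check itself lie.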

There's also an important corollary to this: if you can't mitigate a particular threat, then you need to either decide that the vulnerability isn't significant, or you need to change your design.   And some vulnerabilities aren't significant - it depends on the circumstances.  Here's a hypothetical: let's say that your feature contains a "TypeFile()" API.  This is an API contained in a DLL that causes the contents of a file on the disk to be displayed on the console.  If the design of the API was that it would only work on files in the "My Documents" folder, but it contained a canonicalization bug that caused it to be able to display any file, then that might not be a significant vulnerability - after all, you're not letting the user see anything they don't already have access to.  On the other hand, that very same canonicalization bug might be a critical system vulnerability if the TypeFile() API were called in the context of a web server.  It all depends on the expected use scenarios for the feature; each feature (and each vulnerability) is different.
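Here's a sketch of that canonicalization issue for the hypothetical TypeFile() API (the folder location and helper names are assumptions made for illustration):

```python
# Sketch of the canonicalization issue in the hypothetical TypeFile() API.
# The naive check compares the raw path prefix; an attacker defeats it
# with ".." sequences.  Paths are illustrative.
import os

MY_DOCUMENTS = "/home/user/My Documents"   # assumed folder for this sketch

def naive_is_allowed(path: str) -> bool:
    # Vulnerable: ".." sequences aren't resolved before the prefix check.
    return path.startswith(MY_DOCUMENTS)

def canonical_is_allowed(path: str) -> bool:
    # Mitigated: canonicalize first, then check containment.
    resolved = os.path.realpath(path)
    return os.path.commonpath([resolved, MY_DOCUMENTS]) == MY_DOCUMENTS

attack = MY_DOCUMENTS + "/../../../etc/passwd"
print(naive_is_allowed(attack))      # True  - the prefix check is fooled
print(canonical_is_allowed(attack))  # False - the canonical path escapes the folder
```

Whether the naive version is a critical bug or a non-issue depends, as the paragraph above says, entirely on the context the API is called from.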

One really useful tool when trying to figure out the attack vectors against a particular threat is the "threat tree".  A threat tree (also known as an attack tree) allows you to measure the level of risk associated with a particular vulnerability.  Essentially you take a threat and enumerate the attack vectors for that threat.  If an attack vector is a vulnerability (it's not directly mitigated), you enumerate all the conditions that could occur to exploit that vulnerability.  For each of those conditions, if they're mitigated, you're done; if they're not, once again you look for the conditions that could exploit that vulnerability, and repeat.  This paper from CMU's SEI has a great example of a threat tree.  Bruce Schneier also had an article on them in the December 1999 Dr Dobb's Journal, which includes an example of an attack tree against a safe.
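The walk described above - enumerate the attack vectors and recurse into any that aren't mitigated - can be sketched as a small tree structure.  The threat and vector names below are made up:

```python
# Minimal attack-tree sketch: a threat is effectively mitigated only when
# every leaf attack vector is mitigated; the unmitigated leaves are the
# vulnerabilities.  All node names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    mitigated: bool = False        # only meaningful for leaf vectors
    children: list = field(default_factory=list)

def unmitigated_leaves(node):
    """Walk the tree and collect the leaf vectors that are vulnerabilities."""
    if not node.children:
        return [] if node.mitigated else [node.name]
    vulns = []
    for child in node.children:
        vulns.extend(unmitigated_leaves(child))
    return vulns

tree = Node("Read another user's mail", children=[
    Node("Sniff the wire", children=[
        Node("Capture unencrypted SMTP traffic", mitigated=True),  # TLS deployed
    ]),
    Node("Access the mail store directly", children=[
        Node("Exploit weak file ACLs", mitigated=False),
        Node("Steal the backup tape", mitigated=False),
    ]),
])

print(unmitigated_leaves(tree))
# ['Exploit weak file ACLs', 'Steal the backup tape']
```

The tree makes the earlier point visible: the threat at the root never changes; what changes is which leaves are mitigated.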

The really cool thing about a threat tree is that it gives you the ability to quantify the risk associated with a given attack vector, and thus gives you objective tools to use when deciding where to concentrate your mitigation efforts.  I found a GREAT slide deck here from a talk given by Dan Sellers on threat modeling, with a wonderful example of a threat tree on slide #68.
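As a sketch of that quantification, loosely in the style of the cost-annotated safe example from Schneier's article (the costs here are made up): an OR node is satisfied by its cheapest child, while an AND node needs every child, so its costs add.

```python
# Sketch: quantifying an attack tree by cost-to-attacker.
# OR nodes take the cheapest child; AND nodes sum their children.
# All node structure and cost values are hypothetical.

def cheapest_attack(node):
    kind, payload = node
    if kind == "leaf":
        return payload                      # cost of this attack step
    costs = [cheapest_attack(child) for child in payload]
    return min(costs) if kind == "or" else sum(costs)

open_safe = ("or", [
    ("leaf", 10000),                        # pick the lock
    ("leaf", 100000),                       # cut the safe open
    ("and", [                               # learn the combination:
        ("leaf", 20),                       # ...find where it's written down
        ("leaf", 40),                       # ...and get access to that location
    ]),
])

print(cheapest_attack(open_safe))  # 60 - the combination path is cheapest
```

The number itself matters less than the comparison: the cheapest path through the tree tells you which vector to mitigate first.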

In addition to the articles I mentioned yesterday, I also ran into this rather fascinating slide deck about threat modeling.  Dana's also written about this here.