We're currently in the middle of the most recent round of reviews of the threat models we did for all the new features in Vista (these happen periodically as a part of the SDL).
As usually happens in these kinds of things, I sometimes get reflective, and I've spent some time thinking about the reasons WHY we generate threat models for all the components in the system (and all the new features that are added to the system).
Way back in college, in my software engineering class, I was taught that there were two major parts to the design of a program.
You started with the functional specification. This was the what of the program: it described the reasons for the program, what it had to do, and what it didn't have to do :).
Next came the design specification. This was the how of the program: it described the data structures and algorithms that were going to be used to create the program, the file formats it would write, etc.
We didn't have to worry about testing our code, because we all wrote perfect code :). More seriously, none of the programs we worked on were complicated enough to justify a separate testing organization - the developers sufficed as the testers.
After coming to Microsoft, and (for the first time) having to deal with a professional testing organization (and components that were REALLY complicated), I learned about the third major part, the test specification.
The test specification described how to test the program: What aspects were going to be tested, what were not, and it defined the release criteria for the program.
It turns out that a threat model is the fourth major specification: it's the one that tells you how the bad guys are going to BREAK your program. The threat model is a key part of what we call SD3 - Secure by Design, Secure by Default, and Secure in Deployment. The threat model is a large part of how you ensure the "Design" part: it forces you to analyze the components of your program to see how each of them will react to an attacker.
Threat modeling is an invaluable tool because it forces you to consider how the bad guys are going to use your program to break into the system. And don't be fooled, the bad guys ARE going to use your program to break into the system.
By making you look at your program's design from the attacker's point of view, it forces you to consider a larger set of failure cases than you'd normally consider. How do you protect from a bad guy replacing one of your DLLs? How do you protect against the bad guy snooping on the communications between your components? How do you handle the bad guy corrupting a registry key or reading the contents of your files?
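To make the "bad guy replaced one of your DLLs" threat concrete, here's a minimal sketch (in Python rather than the Win32 APIs, and with made-up file names) of one common mitigation: record a known-good cryptographic digest of a component at build time, and refuse to use the component if its digest no longer matches. A real Windows component would use code signing instead, but the idea is the same.

```python
import hashlib
import tempfile
from pathlib import Path

def is_untampered(path: Path, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the
    known-good digest recorded when the component was built."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256

# Demo: record a digest for a "component", then detect a swap.
with tempfile.TemporaryDirectory() as d:
    component = Path(d) / "component.dll"          # hypothetical name
    component.write_bytes(b"original code")
    known_good = hashlib.sha256(component.read_bytes()).hexdigest()

    assert is_untampered(component, known_good)      # unmodified: check passes
    component.write_bytes(b"attacker's code")        # simulated DLL replacement
    assert not is_untampered(component, known_good)  # tampering is detected
```

The point isn't this particular check; it's that you only know you need a check like this once the threat model makes you ask the question.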
Maybe you don't care about those threats (they might not be relevant, it's entirely possible). But for every irrelevant threat, there's another one that's going to cause you major amounts of grief down the line. And it's way better to figure that out BEFORE the bad guys do :).
Now threat modeling doesn't help you with your code: it doesn't prevent buffer overflows or integer overflows, or heap underruns, or any of the other myriad ways that code can go wrong. But it does help you identify the areas you need to worry about. It may help you realize that you need to encrypt that communication, or set the ACLs on a file to prevent the bad guys from getting at it, etc.
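As an illustration of the "set the ACLs on a file" kind of mitigation, here's a small sketch in Python using POSIX permission bits as a stand-in for Windows ACLs (the file name is made up; on Windows you'd set a real ACL through the security APIs): lock a sensitive file down so only the owning account can read or write it.

```python
import os
import stat
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as d:
    # Hypothetical file holding data the bad guys shouldn't read.
    secrets = Path(d) / "settings.dat"
    secrets.write_text("connection-string")

    # Owner may read/write; group and other get nothing (mode 0600).
    os.chmod(secrets, stat.S_IRUSR | stat.S_IWUSR)

    mode = stat.S_IMODE(os.stat(secrets).st_mode)
    assert mode == 0o600  # group/other bits are all cleared
```

Again, the specific mechanism matters less than the habit: the threat model is what tells you this file is worth protecting in the first place.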
Btw, before people comment on it, yes, I know I wrote a similar post last year :). I had another aha related to it and figured I'd post again. Tomorrow, I want to go back and reflect on those early threat model posts 🙂