Threat Modeling, Part 3 - Process

Continuing the discussion on threat modeling that I started in this post (and continued in this one)...

There's a critical third part of threat modeling, and that's the process of threat modeling.

Threat modeling is a discipline - you need to start the threat modeling process early in your feature's lifetime.

Hold on, what's a feature?  I keep referring to it, but I never defined it.  For this discussion, a "feature" is just that - a feature of a product.  Features can be coarse grained (the "My Pictures" shell area), or they can be fine grained (adding transparent PNG support to Trident).  A feature can affect a single source file or a hundred different DLLs; it's up to your development team to determine what level of granularity you want to work at.

For each team, there's a "natural" boundary for a feature, but it's ultimately up to the team doing the design to decide what that boundary is.  If you make it too fine grained ("Make a new folder" vs. "Rename this folder" vs. "Copy this folder"), you may end up doing too much work (which will reduce the overall quality of the product).  On the other hand, if you make it too broad ("ntoskrnl.exe"), you're likely to end up with more components in the threat model than are manageable.  The bottom line is that your team needs to make a call about what you threat model.

And this quandary leads to the unfortunate thing about defining the process for threat modeling.  While the end product (the threat model) is fairly well defined, the process is squishy.  There is no "one true way" of performing threat modeling; for each development team, the process is slightly different.

For example, on some teams, a program manager collects the entrypoints and interfaces and writes the DFDs.

On other teams, the entire threat modeling process is handled by a single developer.  On other teams, the responsibility is split - the PM does the entrypoints and interfaces, and a developer writes the DFDs. 

Still other teams spread the responsibility around, requiring each developer and PM to do the DFDs for the entrypoints for which they're responsible.

It all depends on the dynamics of your team.  If your PMs are tightly tied into the development process, it might make sense to have them handle some of the work (and if you don't have PMs at all, the decision makes itself).  But maybe not...   IMHO, if your PM doesn't know how the internals work, it's probably not a good idea to offload the process to them - let a developer who can write do the work.

Once you've decided how you're going to apportion the workload of generating the threat model, you need to start the process.  Start by enumerating your entrypoints, then drill down to figure out what assets are involved.  Then go back and see if you need more entrypoints (or assets).  Threat modeling is inherently an iterative process; in the beginning, if you aren't coming up with a new entrypoint or a new asset at each meeting, you're likely not looking at the problem in "the right way".
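
To make that concrete, here's a rough sketch of what the skeleton looks like after a couple of iterations - just entrypoints and the assets they touch.  The names here are entirely made up, and a real team will track more than this (trust levels, the DFDs themselves, and so on):

    # A minimal skeleton of the early threat model: entrypoints and the
    # assets (protected resources) they touch.  All names are hypothetical.
    threat_model = {
        "entrypoints": {
            "RPC: OpenPlaybackSession": {"assets": ["session table", "audio device"]},
            "File: playlist parser":    {"assets": ["user's playlist file"]},
        },
        "assets": ["session table", "audio device", "user's playlist file"],
    }

    # Each iteration tends to add to both lists - early on, that's a healthy sign.
    threat_model["entrypoints"]["COM: IVolumeControl"] = {
        "assets": ["audio device", "volume setting"],
    }
    threat_model["assets"].append("volume setting")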

One thing that every group I've encountered has done during the threat modeling process is what I call the "Big Threat Brainstorming Meeting".  The BTBM comes fairly late in the threat modeling process, when the rest of the threat model is relatively mature - you've already had three or four iterations in a smaller scope (maybe a couple of people), and you think you've got most of the entrypoints and assets enumerated and the DFDs completed.  In the BTBM, you get the entire development team for the feature - dev, test, and PM - and the team brainstorms the threats against the various pieces of the feature.  You iterate down each of the entrypoints and resources for the feature and try to figure out whether there are threats against them.  For each threat, you need to identify the entrypoint and resource associated with the threat (even at this point, you may still find new entrypoints or protected resources), and spend some time figuring out whether the threat is mitigated or not.
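
If it helps to picture the output, here's a sketch of the kind of record each brainstormed threat turns into (the threat and the names are invented) - the point being that every threat gets tied back to an entrypoint and a resource, with a note on whether it's mitigated:

    from dataclasses import dataclass

    # One row of the brainstorm's output.  Every threat is tied back to the
    # entrypoint an attacker would use and the resource that's at risk, plus
    # whether (and how) it's mitigated.  The example threat is hypothetical.
    @dataclass
    class Threat:
        description: str       # what the attacker does
        entrypoint: str        # which entrypoint they come in through
        resource: str          # which protected resource is at risk
        mitigated: bool
        mitigation: str = ""   # how it's mitigated, if it is

    threats = [
        Threat(description="malformed playlist overruns the parser's buffer",
               entrypoint="File: playlist parser",
               resource="user's playlist file",
               mitigated=True,
               mitigation="parser rejects entries longer than the declared size"),
    ]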

In many ways, the "BTBM" is the core of the threat modeling process.  You really must engage the entire team for this one, because everyone has a slightly different view of their component, and it's not always clear what the interactions between the different components are - the developers who actually own the code have a much better idea than the person who's writing the threat model.

Once you've had the "big meeting", you still need to write up the results and generate threat trees for all the threats that were discovered.  And then it's time for yet another iteration of the threat model review.  You may need to do a second-generation BTBM; it depends on how comfortable you are with the completeness of your threat model.
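
For what it's worth, a threat tree doesn't have to be anything fancier than a nested outline: the root is the threat, and the children are the conditions an attacker needs (ANDed or ORed together) to realize it.  A made-up example:

    # A tiny, hypothetical threat tree.  "any" marks an OR node (one child is
    # enough for the attacker); "all" marks an AND node (every child is needed).
    threat_tree = {
        "threat": "attacker reads another user's session data",
        "any": [
            {"condition": "session table is world-readable"},
            {"all": [
                {"condition": "attacker can guess a valid session id"},
                {"condition": "session id isn't checked against the caller"},
            ]},
        ],
    }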

Eventually, the changes per iteration damp down, and you've got a finished threat model, right?

Well, no.  Actually, you don't.  What you have is an approximation of a finished threat model, because as you make changes to the code during your march to ship, the design is going to change.  And when the design changes, the threat model has to change too.  You need to ensure that whenever you change the design of any of your feature's entrypoints, you go back and revisit the threat model to make sure it still reflects the new design.  But the really cool thing about having the threat model there while you're making the changes is that updating it forces you to revisit the design change - which means you think about the security ramifications of your fix WHEN you make the fix, not after you've shipped.
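
One way to see the payoff: if each threat is tagged with its entrypoint (as in the sketch above), then the moment an entrypoint's design changes you can pull up exactly the threats that need another look.  A rough, hypothetical illustration:

    # Hypothetical threat records, the same shape as the brainstorm output above.
    threats = [
        {"description": "malformed playlist overruns the parser's buffer",
         "entrypoint": "File: playlist parser", "mitigated": True},
        {"description": "attacker reads another user's session data",
         "entrypoint": "RPC: OpenPlaybackSession", "mitigated": False},
    ]

    def threats_to_revisit(threats, changed_entrypoint):
        """List the threats tied to an entrypoint whose design just changed."""
        return [t for t in threats if t["entrypoint"] == changed_entrypoint]

    # Redesigning the playlist parser?  Review these threats along with the change.
    for t in threats_to_revisit(threats, "File: playlist parser"):
        print("Re-examine:", t["description"])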

I can't overstate how cool running through this process is.  When we had our BTBM the other day, we came up with a boatload of threats (almost all mitigated) AND we found a bunch of vulnerabilities that we hadn't considered before.  It was an invaluable experience for everyone involved.  As a result, we're far more confident that we really do understand the threats to our component.

And now we get to do it all over again for our deliverables for the next milestone.  Yay!