Code Scanning Tools Do Not Make Software Secure

There has been a lot of press recently about using ‘code scanning’ tools to find security bugs in source code. So I thought I’d share my view on these tools.

Such tools, often called static analysis tools, like those we have included in Visual Studio 2005, are very useful, but they are no replacement for human intellect. If a developer does not know how to code securely, if a designer does not know how to design secure systems, and if testers don’t know how to validate the security posture of code, tools will provide little, if any, help.

Here’s why.

1) Code analysis tools find only a small fraction of real bugs. Sure, some of them are very real and should be fixed. But simply running a tool does not mean the code is clean.

2) Code analysis tools have to keep the number of false positives low so developers are not sent on wild goose chases hunting down non-issues. Because of this high bar, many tools will miss real bugs. Hopefully, the number of real bugs missed is low, but it’s never zero.

3) A design bug will not be found by a source code analysis tool. A missed authentication step, a poorly encrypted key, or a weakly-ACLd object will rarely be caught by static analysis tools, as the sketch below illustrates.
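
To make point 3 concrete, here is a minimal, hypothetical C++ sketch of a “poorly encrypted key” design bug; the function name and scenario are invented for illustration only. The code compiles cleanly, uses no banned APIs, and contains no buffer overrun or integer overflow for a scanner to flag, yet it provides no security because the design itself is wrong:

    #include <cstddef>
    #include <iostream>
    #include <string>

    // "Protects" a secret by XORing every byte with a hard-coded constant.
    // Memory-safe, no banned APIs, no overflow, and yet no real security,
    // because the mask ships inside the binary and the transformation is
    // trivially reversible.
    std::string ProtectKey(const std::string& secret) {
        const char kXorMask = 0x5A;        // fixed mask baked into the code
        std::string out(secret);
        for (std::size_t i = 0; i < out.size(); ++i) {
            out[i] ^= kXorMask;            // obfuscation, not encryption
        }
        return out;
    }

    int main() {
        const std::string secret = "SuperSecretKey123";
        // Applying the "protection" twice recovers the plaintext, which is
        // exactly why the design is broken, yet no code scanner complains.
        std::cout << ProtectKey(ProtectKey(secret)) << '\n';
        return 0;
    }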

So allow me to explain how we use tools internally at Microsoft. We use tools for two main purposes:

1) They help scale our attack on the problem. If a new code-level bug type is found, we can often build a tool, or augment an existing tool, to search for the problematic construct and understand how bad the problem is. And in many cases, this allows us to find and fix real bugs (a deliberately simplified sketch of such a scan follows this list).

2) They are used to enforce policy. This is the most important use of tools in my mind. We have a policy under the Security Development Lifecycle (SDL) mandating what constructs can be checked into Microsoft products. We expect the developers to do the right thing in the first place (because we educate them), and then we use tools as a backstop to make sure that policy is being adhered to.
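
As a concrete, and deliberately naive, illustration of the first point, here is a hypothetical sketch of the kind of quick scan one might build when a new bad construct is identified, so its prevalence across a source tree can be measured. This is not a Microsoft tool, and it only pattern-matches text rather than parsing the language:

    #include <cstddef>
    #include <filesystem>
    #include <fstream>
    #include <iostream>
    #include <string>

    // Walk a source tree and report every line containing a given construct,
    // for example "strcpy(" or a weak-hash routine name. Real analysis tools
    // parse the language; this sketch only counts textual occurrences to
    // size the problem.
    int main(int argc, char** argv) {
        if (argc != 3) {
            std::cerr << "usage: scan <source-dir> <pattern>\n";
            return 1;
        }
        const std::string pattern = argv[2];
        std::size_t hits = 0;

        for (const auto& entry :
             std::filesystem::recursive_directory_iterator(argv[1])) {
            if (!entry.is_regular_file())
                continue;
            std::ifstream file(entry.path());
            std::string line;
            for (std::size_t lineno = 1; std::getline(file, line); ++lineno) {
                if (line.find(pattern) != std::string::npos) {
                    std::cout << entry.path().string() << ':' << lineno
                              << ": " << line << '\n';
                    ++hits;
                }
            }
        }
        std::cout << hits << " occurrence(s) of \"" << pattern << "\"\n";
        return 0;
    }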

When you check code into, say, Windows Vista, a battery of tools runs automatically to look for use of weak crypto, use of banned APIs, potential buffer overruns, integer overflows, weak access control lists (ACLs) set in code, and so on. If the tools find a bug, the check-in fails and the developer needs to fix the code. But we know from sad experience that there are many other ways of introducing vulnerabilities into software, and where the tools stop, we rely on our trained engineers and our robust processes to keep those vulnerabilities from being released to customers.
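
For illustration, here is a hypothetical C++ sketch of the kind of constructs such a check-in gate rejects, together with one way a developer might fix each. The function names and buffer sizes are invented, and the actual banned-API list and tool output are Microsoft-internal; strcpy_s is the bounded replacement available in the Visual C++ runtime:

    #include <cstdlib>
    #include <cstring>
    #include <limits>

    // Banned API: strcpy places no bound on the copy, a classic buffer
    // overrun candidate, so a gate hunting banned APIs rejects the check-in.
    void CopyUserName_bad(char* dest, const char* src) {
        strcpy(dest, src);
    }

    // Bounded replacement: the destination size travels with the call, and
    // the copy fails safely if src does not fit.
    void CopyUserName_fixed(char (&dest)[64], const char* src) {
        strcpy_s(dest, sizeof(dest), src);
    }

    // Integer overflow: count * 16 can wrap on a 32-bit build, producing an
    // undersized allocation followed by out-of-bounds writes.
    void* AllocRecords_bad(std::size_t count) {
        return std::malloc(count * 16);
    }

    // Guarded version: reject any count whose multiplication would wrap.
    void* AllocRecords_fixed(std::size_t count) {
        if (count > std::numeric_limits<std::size_t>::max() / 16) {
            return nullptr;
        }
        return std::malloc(count * 16);
    }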

To a certain extent, tools can also provide “just-in-time learning” by pointing out potential problems to developers. But personally, I don’t like that idea; I think developers should know the implications of what they are writing in the first place and use great tools too. But that’s just me!

Another take is:

"Not-so-knowledgeable-developer" + great tools == marginally more secure code

Don’t get me wrong: source code analysis tools find real bugs, and they are very useful. I love code analysis tools, but I refuse to allow developers at Microsoft, or anywhere else for that matter, to believe that such tools will fix the core problem of developers writing insecure code.

Creating secure software requires an executive-mandated, end-to-end process: ongoing education, secure design based on threats, secure coding and testing policies, penetration and fuzz testing focused on both new and old code, a credible response process, and finally a feedback loop to learn from mistakes.

And you use great tools too...

(Big thanks to Steve Lipner, Eric Bidstrup, Shawn Hernan, Sean Sandys, Bill Hilf and Stephen Toulouse for their draft comments)