Economics of the Vulnerability Finding Game

A friend of mine loaned me a book - "Hidden Order: The Economics of Everyday Life" by David Friedman. It's an interesting read - the overall point is that a lot of human behavior can be explained with economic theory. Basically, most of what we do can be explained by the fact that we see more value in doing it than in sitting on the couch. It isn't all monetary - we associate value with different things.

As some of you may know, I was once in the vulnerability finding business (I still am to some extent - just internally). I was one of the first programmers to work at Internet Security Systems, and we placed a high value on the marketing benefit of posting security bulletins about the things we'd found. I was the very first person to be publicly thanked by Microsoft for doing what came to be known as responsible disclosure, and when my name showed up - with a link to ISS' web site - on what eventually became www.microsoft.com/technet/security, we had twice as many hits on our web site as normal that month. The sales and marketing department liked me for quite a while after that. Getting our names in lights paid off. To be clear, we never once had to twist Microsoft's arm to fix anything - no threats of going public - they just fixed _everything_ we reported to them. One report got fixed in 72 hours; some more difficult, design-level issues took quite a while. Once they fixed an issue, I'd have a check waiting for the Internet Scanner, and maybe a signature for RealSecure.

At least early on, I didn't spend a lot of effort finding things - I'd stumble on them while developing the Internet Scanner, or while just writing code to explore how things worked (previously known as hacking). I also wanted to make things better for people, and since I was an early adopter of Windows NT, I wanted the platform I depended on to be secure.

In my personal system of economics, I place a lot of value on the effect someone's actions have on the rest of the world. It's been a common thread for me over time - I abandoned Aerospace Engineering when it became clear that most of the jobs available at the time involved figuring out better ways to blow things (and people!) up, and focused on bioengineering instead. That didn't pan out, so I next went on to Environmental Engineering, and had a notable success - my work contributed strongly to getting the EPA to change how they test vehicle emissions to better model real-world driving, which led to a reduction of millions of tons of pollutants per year. It's one of the reasons I write books - they don't pay all that well, but hopefully they help people make better software, and consumers get hacked less often as a result. So different people value different things, and to take a really good look at the economics of something, you need to factor in non-monetary value.

Once I went from running the Internet Scanner dev team (or at least the Windows NT version - a friend ran the UNIX side) to ISS' research team, I could create more vulnerability checks for the Scanner and signatures for RealSecure, or I could dig in and find problems that would lead to bulletins. Doing the work to create a bulletin could take a lot of time - you could spend a while poking around before you found something, then you had to work with the company to fix it, and then write it up. I often argued with my management (Mike's right - I am opinionated) that we could provide more value to the customers by doing product work than bulletins, but they wanted to see a balance. Their value system obviously placed a high value on marketing, and since Chris Klaus (founder of ISS) has done really well, I'd have a hard time arguing he's wrong (not to say I didn't debate things with him). One thing I'd also like to point out - Chris always seemed to me to be highly motivated by wanting to make things better.

Recently, we've seen an underground market for exploits develop - someone tried to sell an exploit on eBay not too long ago. Vulnerabilities are found by people with a number of different motivations. There's the market value of the exploit, which is determined by the value of the information that could be obtained by using it and by the number of people affected. There's the marketing value of an exploit, which is also a strong function of affected users - lots of exploits are found every week, and most of them go unnoticed. And there's the value of the exploit to that individual, which could be a lot of different things - the joy of solving a puzzle, a need to be disruptive, or a desire to improve the application.

From the defender's standpoint, we need to make the work needed to find an exploit exceed the value of the exploit. We can't achieve perfection, so there will always be issues. Just for argument, let's say it costs someone two person-weeks to find an exploit, and the value of that person's time, plus overhead, is around $100/hr to the company - that's 80 hours at $100/hr, or an $8,000 exploit. If the value of the exploit is more than that, it's been a profitable exercise. If less, then maybe it's time to move to easier pickings. If you can put an automated tool or mitigation in place that reduces the value of an exploit, or makes it much, much harder to find one, then the picture changes. For example, an exploit against an app running at a low integrity level (such as Protected Mode IE) can't do as much damage as one running as full admin - that exploit would sell at a discount. Or if ASLR and NX are running on a 64-bit system, where not nearly as much is on the stack, far fewer flaws are actually exploitable, and the cost of producing an exploit has just gone up by a lot. My favorite approach is to work on reducing the exploit density, but there are limits (see Watts Humphrey's work) to just how far you can go.
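To make the back-of-the-envelope math concrete, here's a minimal sketch in Python of that break-even calculation. The only numbers taken from the post are the two person-weeks and the $100/hr loaded rate; everything else (the $20,000 market value, the 75% sandbox discount, the tripled search effort, and the function names) is a made-up assumption purely for illustration.

```python
# Back-of-the-envelope attacker economics. All figures beyond the post's
# two person-weeks at $100/hr are illustrative assumptions, not real data.

def exploit_cost(person_weeks: float, hourly_rate: float, hours_per_week: float = 40) -> float:
    """Cost to the attacker of finding and weaponizing one exploit."""
    return person_weeks * hours_per_week * hourly_rate

def exploit_value(base_value: float, mitigation_discount: float = 0.0) -> float:
    """Value of the exploit after mitigations (e.g. a low-integrity sandbox)
    reduce what an attacker can do with it. 0.0 = no discount, 0.75 = 75% off."""
    return base_value * (1.0 - mitigation_discount)

# The post's example: two person-weeks at $100/hr (time plus overhead).
cost = exploit_cost(person_weeks=2, hourly_rate=100)   # 2 * 40 * 100 = $8,000

# Hypothetical: an exploit worth $20,000 on the open market...
print(exploit_value(20_000) > cost)                     # True - profitable to go hunting

# ...but if a sandbox cuts its usefulness by 75%, and ASLR/NX triple the
# effort needed to find an exploitable flaw, the economics flip:
print(exploit_value(20_000, mitigation_discount=0.75) >
      exploit_cost(person_weeks=6, hourly_rate=100))    # False - time for easier pickings
```

The point of the sketch is just that the defender doesn't have to drive the flaw count to zero - they only have to flip that inequality often enough that the attacker's time is better spent elsewhere.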

At any rate, I'd never spent any real time looking into economics - we engineers tend to look down on that sort of thing - but "Hidden Order" has caused me to look at why people do things in different ways.