Economics of the Vulnerability Finding Game

A friend of mine loaned me a book, "Hidden Order: The Economics of Everyday Life" by David Friedman. It's an interesting read - the overall point is that a lot of human behavior can be explained with economic theory. Basically, whatever we do can be explained by the fact that we see more value in doing it than in sitting on the couch. It isn't all monetary - we associate value with different things.

As some of you may know, I was once in the vulnerability finding business (still am to some extent - just internally). I was one of the first programmers to work at Internet Security Systems, and we placed high value on the marketing benefit of posting security bulletins about the things we'd found. I was the very first person to be publicly thanked by Microsoft for doing what came to be known as responsible disclosure, and the day that my name, with a link to ISS' web site, showed up in that public acknowledgment, we had twice as many hits on our web site as normal that month. The sales and marketing department liked me for quite a while after that. Getting our names in lights paid off. To be clear, we never once had to twist Microsoft's arm to fix anything - no threats of going public - they just fixed _everything_ we reported to them. One report got fixed in 72 hours, and some more difficult, design-level issues took quite a while. Once they fixed something, I had a check waiting for the Internet Scanner, and maybe a signature for RealSecure.

At least early on, I didn't spend a lot of effort finding things - I'd stumble on them while developing the Internet Scanner, or while just writing code to explore how things worked (previously known as hacking). I also wanted to make things better for people, and since I was an early adopter of Windows NT, I wanted the platform I depended on to be secure.

In my personal system of economics, I place a lot of value on the effect someone's actions have on the rest of the world. It's been a common thread for me over time - I abandoned Aerospace Engineering when it became clear that most of the jobs available at the time involved figuring out better ways to blow things (and people!) up, and focused on bioengineering. That didn't pan out, so I next went on to Environmental Engineering, and had a notable success - my work contributed strongly to getting the EPA to change how they test vehicle emissions to better model real-world driving, and this led to a reduction of millions of tons of pollutants per year. It's one of the reasons I write books - they don't pay all that well, but hopefully they help people make better software, and consumers get hacked less often as a result. So different people value different things, and to take a really good look at the economics of something, you need to factor in non-monetary value.

Once I went from running the Internet Scanner dev team (or at least the Windows NT version - a friend ran the UNIX side) to ISS' research team, I could create more vulnerability checks for the Scanner, signatures for RealSecure, or I could dig in and find problems that would lead to bulletins. Doing the work to create a bulletin could take up a lot of time - you could spend some time poking around before you found something, then you had to work with the company to fix it, and then write it up. I often argued with my management (Mike's right - I am opinionated) that we could provide more value to the customers by doing product work than bulletins, but they wanted to see a balance. Their value system obviously placed a high value on marketing, and since Chris Klaus (founder of ISS) has done really well, I'd have a hard time arguing he's wrong (not to say I didn't debate things with him). One thing I'd also like to point out - Chris also always seemed to me to be highly motivated by wanting to make things better.

Recently, we've seen an underground market for exploits develop - someone tried to sell an exploit on eBay not too long ago. We could say that vulnerabilities are found by people with a number of different motivations: the market value of the exploit, which is determined by the value of the information that could be obtained by using it and the number of people affected; the marketing value of an exploit, which is also a strong function of affected users - lots of exploits are found every week, and most of them go unnoticed; and the value of the exploit to that individual, which could be a lot of different things - the joy of solving a puzzle, a need to be disruptive, or a desire to improve the application.

From the defender's standpoint, we need to make the work needed to find an exploit exceed the value of the exploit. We can't achieve perfection, so there will always be issues. Just for argument, let's say it cost someone 2 person-weeks to find an exploit, and the value of that person's time, plus overhead, was around $100/hr to the company - that's an $8000 exploit. If the value of the exploit is more than that, it's been a profitable exercise. If less, then maybe it is time to move to easier pickings. If you can put an automated tool or mitigation in place that reduces the value of an exploit, or makes it much, much harder to find one, then the picture changes. For example, an exploit against an app running with a low integrity level (such as Protected Mode IE) can't do as much damage as one running as full admin - that exploit would sell at a discount. Or if you have ASLR and NX running on a 64-bit system, where not nearly as much is on the stack, then far fewer flaws are actually exploitable, and the cost of producing an exploit has just gone up by a lot. My favorite approach is to work on reducing the exploit density, but there are limits (see Watts Humphrey's work) to just how far you can go.
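The cost/value trade-off above can be sketched as a toy model. All the numbers here are illustrative assumptions (the 40-hour week, the rates, the example exploit values), not real market data:

```python
# Toy model of the attacker's economics described above.
# Assumptions: a 40-hour person-week, fully loaded hourly rate
# including overhead, and purely monetary motivation.

def exploit_cost(person_weeks, hourly_rate=100, hours_per_week=40):
    """Labor cost to find one exploit, in dollars."""
    return person_weeks * hours_per_week * hourly_rate

def is_profitable(exploit_value, person_weeks, hourly_rate=100):
    """Was finding the exploit worth the attacker's time?"""
    return exploit_value > exploit_cost(person_weeks, hourly_rate)

# The example from the text: 2 person-weeks at $100/hr.
print(exploit_cost(2))          # 8000

# Mitigations (low integrity levels, ASLR + NX) either cut the
# exploit's value or raise the effort needed, flipping the math.
print(is_profitable(20000, 2))  # True  - a profitable exercise
print(is_profitable(5000, 2))   # False - time for easier pickings
```

The same model covers the mitigation argument: multiply the person-weeks up, or the exploit value down, and the attacker's decision changes without any other part of the model moving.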

At any rate, I'd never spent any real time looking into economics - we engineers tend to look down on those sorts of things - but "Hidden Order" has caused me to look at why people do things in different ways.



Comments (3)

  1. MikeA says:

On the whole, I’d agree with your premise David, but there are two factors that don’t quite fit the “economics” argument for me.

First, you argue the cost/reward of finding a vulnerability.  This works well when the economies are the same, but a *lot* of vulns are being discovered in non-1st world economies (like Russia, China, etc.), where $8000 for a vuln is a *huge* amount of money.  More likely, vulns are traded for a lot less than that, so the cost/reward is pretty insignificant – the cost to find one is probably in the $100s, maybe less.  Making value on that is easier, as even a tiny attack on a very small population (say 20 successful credit card info disclosures on a US bank’s customers) makes the economics worthwhile.

The other thing is that even more people look for vulnerabilities for “fun” than for any other reason.  The forums are full of potentially very damaging XSS vulns (that’s their focus – I would assume there are others out there focused on different attack vectors).  The economics here are people showing just how bad it is out there, and in a lot of ways, when these vulns are (responsibly) disclosed, they are treated as an annoyance rather than as help in *not* allowing the situation above.

  2. OK, that’s totally fair – I was thinking of saying something about the potential for outsourcing attacks. I won’t venture a guess as to what proportion of people have which motivations.

    Actually, the economic argument remains the same, just the variables change. Even so, let’s say that in Outer Baldonia, we can get devs to work for 1/10 the price. Now let’s say that something happens to make vulns cost $80000 USD to generate. The US-based vuln finders now move on to look for vulns in a less hardened target, and even the Outer Baldonian vuln finders make more profit from easier vulns.

XSS vulns are lower-hanging fruit these days, and are an example of economics at work.

    At any rate, I wasn’t completely thinking of this from the standpoint of what it takes to put the bug hunters out of business, but just looking at the problem from the angle of economic analysis applied to my own experience based on when it was my job to do that.

  3. Someone said:

    "Personally, I am always suspicious of the motives of someone who claims to be interested in helping, but is scrambling to make sure they get credit for their actions. That’s not heroic behavior; it sounds more like ego or greed wrapped in a thin sham of altruism."

    I do somewhat agree, but I also have to acknowledge that behavior is complex, and attempting to attribute a single motivation to an action is usually oversimplifying. Causes and conditions are typically complex. Someone might be looking for their 15 minutes of fame, maybe a little cash, and be trying to make the world a better place. Or they could be, as Frank Zappa’s immortal lyrics put it – "Strictly Commercial", and are making claims of trying to improve things as a thin veneer over very banal motives. Which is which and who is who would certainly be a topic to be discussed in a far less public venue. Samuel Johnson did say in 1775 that "Patriotism is the last refuge of a scoundrel", obviously implying false patriotism, and the sentiment applies to quite a number of self-righteous sorts of proclamations.

BTW, one thing I do ask is that people be considerate of the fact that I work for Microsoft and this blog is hosted by Microsoft. No matter how much I might like you personally, or have great respect for your work and opinions, if you post a comment that veers off into something not appropriate for this site, I’m not going to allow it through here. Sorry, but I like where I work.
