Attackers, Vuln Finders and Exploits – It just ain’t fair!


I recently took a look at “The Vulnerability Disclosure Game: Are We More Secure?” (http://www2.csoonline.com/exclusives/column.html?CID=28072) by Marcus Ranum, which in turn links to “Schneier: Full Disclosure of Security Vulnerabilities a ‘Damned Good Idea'” (http://www2.csoonline.com/exclusives/column.html?CID=28073). I’ve got a lot of respect for Marcus – he’s always been on the right side of the fight, and let’s face it – anyone who goes jogging with his horse is pretty cool in my book. Not that I’d be biased towards people with horses…

That said, both of these guys have it at least partially wrong. Marcus is right that there are a lot of people out there busy being part of the problem. In my earlier post about the economics of the vulnerability finding game, I outlined some of the forces at work. If we look at the overall picture, it’s highly asymmetrical – a vuln finder spends a few thousand dollars at most finding a vuln, the vendor spends anywhere from tens of thousands to hundreds of thousands making a fix, testing it, and getting it out, and then the customers spend even more deploying the fix. All so someone gets their 2 minutes of fame (15 if they’re lucky). It just ain’t fair.
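
To make that asymmetry concrete, here’s a back-of-the-envelope sketch in C. The dollar figures are illustrative placeholders drawn from the rough ranges above, not data from any real case:

    /* Back-of-the-envelope illustration of the cost asymmetry described
     * above.  All figures are illustrative placeholders taken from the
     * rough ranges in the text, not measurements. */
    #include <stdio.h>

    int main(void)
    {
        double finder_cost      = 5e3;  /* "a few thousand" to find the vuln */
        double vendor_cost_low  = 5e4;  /* "tens of thousands" to build, test, and ship a fix */
        double vendor_cost_high = 5e5;  /* ... up to "hundreds of thousands" */

        printf("The vendor pays roughly %.0fx to %.0fx what the finder spent,\n",
               vendor_cost_low / finder_cost, vendor_cost_high / finder_cost);
        printf("and the customers collectively spend even more deploying the fix.\n");
        return 0;
    }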

If I were a CSO or CIO, I don’t think I’d spend my money with vendors who ran around giving yet more exploits to the script kiddies and other rabble that’s out there attacking my systems. The attackers have plenty of weapons to start with, and I’d rather help out the people who are part of the solution. If all these folks would just straighten up and behave responsibly, we’d be so much better off – or at least that’s the premise. Unfortunately, we wouldn’t be any better off. There are a whole lot more people who find these things, keep them secret, sell them, or use them for personal gain, and it’s all underground. The stuff the people writing bulletins find is a drop in the bucket. It’s nice that they’re generally responsible, and it’s nice that they let the vendor know about things (so they get their name in lights), but if they all took up a different job tomorrow, I don’t think we’d be noticeably any more or less secure. Unless it’s someone really smart who truly does a thorough job and works with the vendor to get a complete solution (a good example of this was David Litchfield’s analysis of the /GS switch – it’s a lot better now and he deserves credit for helping), all these little ankle-biting exploits don’t amount to much. We find a LOT more of this sort of thing internally than comes in from outside.
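
Since the /GS switch comes up above, here’s a minimal sketch of my own (not anything from Litchfield’s analysis – the function names and strings are just illustrative) of the classic stack buffer overrun that /GS targets. With /GS, the compiler places a random security cookie between the local buffer and the saved return address and checks it before the function returns, so a smash like this terminates the process instead of handing control to the attacker:

    /* Minimal illustration of the bug class the /GS stack cookie targets.
     * Build with MSVC (where /GS is on by default in recent versions), or
     * any C compiler with stack protection enabled, to see the mitigation. */
    #include <stdio.h>
    #include <string.h>

    static void greet(const char *name)
    {
        char buf[16];
        /* Unbounded copy: any argument longer than 15 characters overruns
         * buf and starts overwriting whatever sits above it on the stack -
         * with /GS that's the security cookie, and the check at function
         * exit aborts the process rather than returning on a bad cookie. */
        strcpy(buf, name);
        printf("Hello, %s\n", buf);
    }

    int main(int argc, char **argv)
    {
        greet(argc > 1 ? argv[1] : "world");
        return 0;
    }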

In a previous career, I was involved in helping clean up the environment – gee, if everyone would just recycle, landfills wouldn’t fill up so fast, we wouldn’t use up so many resources, and so on. Same sort of thing as wishing the vuln finders would all go fishing and not come back – guess what? People are for the most part going to do what is in their economic self-interest, and never mind the effect on the rest of the world. If there’s enough economic incentive, they’ll start doing the right thing. Unfortunately, there’s economic incentive to do the wrong thing in this case, but that’s the way it is. It just ain’t fair.

Then the other side of the argument is that without Full Disclosure, no one would fix anything. Maybe that’s the way it used to be, for some vendors, but things change. Used to be if I wailed my head off, people would bring me food, but that hasn’t worked for a very long time. Things change. I’m always suspicious of Things In Capital Letters. If it isn’t a proper name, then it’s probably a synonym for the One True Way, and now we’re talking dogma, not reason. The reality is that if there are people out there attacking your customers, then you’d better put a lot of effort into security, and these people running around making “wow, look how smart _I_ am” posts to mailing lists are really one of your smaller problems. The other reality is that disclosure just gives the attackers more weapons to play with, and makes the Internet a more hostile and difficult place to communicate and conduct business than it already is. It’s the tragedy of the commons on a global scale.

If the premise that nothing gets fixed without Full Disclosure were true, I would never have gotten much of anything fixed, and it’s been quite the contrary – I only went public without a fix once, and that was after the vendor told me that buffer overruns weren’t exploitable on Windows (seriously, they did tell me that). But if all those people just went fishing one day, we still wouldn’t be a lot better off – there are still real criminals to worry about. And ironically, the disclosure folks do serve a purpose in some cases – occasionally, they’ll steal an attack from the underground and bring it to light, where it will get fixed – but if they’d quietly let a responsible vendor know about the problem, it would get fixed anyway. And this business of concluding that since one person found it, everyone else will magically notice via telepathy – hogwash. Sometimes people do figure out the same things at about the same time – Laplace and Newton both figured out the foundations of calculus at around the same time – but more often, they don’t.

Bruce went on to say “… and that software companies will spend time and money fixing secret vulnerabilities. […] full disclosure is the only reason vendors routinely patch their systems.” There’s all sorts of evidence that these assertions are false. Let’s set my employer aside for a moment, along with the fact that I have direct knowledge of Microsoft not only fixing vulns no one else knows about, but also fixing things that merely look sort of like a vuln (even when we’re not completely sure) that no one else knows about. Consider the guys at OpenBSD – they just “fix the bugs”. I don’t agree with them about everything, but they’re so completely right about that, I just can’t say it loudly or often enough.

As the story goes, someone asked Willie Sutton (a famous American bank robber) why he robbed banks. “Because that’s where the money is” was the alleged reply. As much money as there is running around the Internet these days, there’s a lot of incentive for people to do bad stuff. Wishing it wasn’t so won’t get us anywhere. Facing reality and building apps that stand up to current threats will get us somewhere. Life just ain’t fair – TANSTAAFL – There Ain’t No Such Thing As A Free Lunch.


Comments (7)

  1. orcmid says:

    I think it was Leibnitz and Newton.  Laplace came later.

    I’m not sure what to make of this welcome perspective. As an outsider, I’ve never detected a vulnerability, and certainly not a hack to exploit one.

    What I had been doing is reporting phishing to the impersonated banks and sites (a thankless chore, and some institutions have no way to receive reports and actually don’t want to know about it, especially if you are not a customer, which is how I know it is a phish hook, of course). So I stopped reporting those things and just delete them. Also, the reporting process is very one-way, like some people’s bug-reporting systems, and it becomes easy to think that you are shouting into /dev/null.

    If you know that a site is hacked, or its SSL certificate is weird, or it has any other problem, it is often difficult to find a way to communicate that to the operators of the site. In fact, it is very difficult to communicate with organizations about a problem they are having and may not know about. There was an occasion when Blogger was serving up other people’s blogs for me to edit based on my cookie, and I discovered just how difficult it is to communicate with Google. I eventually sent them a fax, but I have no idea what came of that.

    So, it becomes a little like confirmation that no good deed goes unpunished.

    I have found the Microsoft security e-mail address to be very effective (although finding that address is a chore).

    OK, I have that off my chest.

    Now, in a shorter post, can you recommend exactly what you would like people who notice vulnerabilities and defects to do?  Somehow, there should be an easy way for those who simply want to be helpful and improve the cyberspace we share to report problems (and, ideally, not end up duplicating effort that has already been made).

    Can you offer some principles and practices?

  2. Ryan Russell says:

    I’m not likely to believe that vendors will do the right thing if left to their own devices when there are such glaring counter-examples from Apple and Oracle.

  3. david_leblanc says:

    I’m not going to get into throwing rocks at other vendors here, but I think this calls out a point where Marcus is right – have any of these vendors you’d characterize as counter-examples improved? Haven’t they been subject to full disclosure for 10-15 years? Isn’t this evidence that FD isn’t an effective technique?

    This goes right back to my points – the marketing value of an exploit against Apple is low (I suppose unless it were the iPod), and the anti-marketing value to them is also low. So when there are issues, they don’t show up on CNN. Thus FD doesn’t put much pressure on them – ironically, because if I did have the killer Apple exploit, how many systems am I going to be able to 0wn? Not terribly many. It all goes back to economics – if they start losing market share because they’re seeing security pressure, they’ll get better.

    I do think FD used to serve a purpose – when I got started in this in ’96, Microsoft was the only vendor we worked with who would just fix things regardless of pressure to expose it publicly. To be fair, a couple of others reacted well in at least one case I recall – AIX 10 (or 11) had re-created the dread sendmail pipe bomb, which had been fixed for quite a while. Not sure how that happened, but we let them know, and they fixed it pretty quickly.

    Another reality – unless it is a group that is really good at keeping secrets, like a 3-letter agency, it’s foolish to assume that someone is keeping something to themselves. I know of cases where pen-testers all seemed to have the same POC we were trying to fix, and I think you do, too. Some of the bulletin mills will give the exploit to their buddies before the fix is out. My take is that if it comes from outside, there’s a fair chance it is being used against customers, and we need to fix it as quickly as we can while still making sure it won’t break more than it fixes.

    The point is that it’s just about all economics – if the vendor doesn’t see that it is in their best interest to pay attention to security quality, they’re going to put resources into something else. No amount of wishing it were otherwise will change things, and I don’t think FD is going to change things either.

    That said, even though FD doesn’t really help much of anything, it’s going to happen anyway – the marketing value to the individual of "Hey wow, I’m really smart, look at me! Greetz to my buddies!" is high enough that people will do this whether it truly helps anything or not. They’ll convince themselves that it has value, and what the thinker thinks, the prover proves. How vendors react to this reality can be a measure of their after-sale quality of service.

  4. Ryan Russell says:

    That makes a lot of assumptions, like the only driver for FD is to make vendors improve, and that researchers only do things for fame.

    A counter-counter example is Microsoft. I think FD worked just fine on them. Oracle might be improving, it’s hard to tell. They may have just gone from taking 4 years to patch things to now just several? Unfortunately, we’re going to need a few years to tell. But in the meantime, we know they took 4 years for some patches as recently as 6 months ago.

    So what is motivating the anonymous vulnerability publishers?

    It’s also been pretty apparent to me that nearly every vendor needs to learn the hard way, so FD needs to be a constant thing.

  5. david_leblanc says:

    The myth I’m debunking is that the only way vendors will fix anything is when threatened with FD. I know otherwise. I also know from personal experience that Microsoft didn’t need threats to want to do the right thing. They called me one day, and basically said "Hey – looks like you’re probably going to find some interesting stuff, and we’d like to make sure we get it fixed." It’s a longer story than that, but that’s the gist of it. That was long before they were getting slammed on a regular basis.

    I also have personal experience with other vendors who weren’t doing the right thing, and when threatened with FD, still didn’t do the right thing, and after many years of getting hit with FD, still don’t do the right thing. Won’t name names here.

    This doesn’t mean that FD is completely without any redeeming qualities. It has served the useful purpose of bringing to light 0-days that were in the wild. That’s FD at its best. Maybe there are other instances where people are more likely to do the right thing when threatened, but I’m glad I don’t work for them.

    What motivates the anonymous publishers? I don’t know – I’ve never known many of them very well. I think ego is a primary driver for a lot of people, and the anonymous people probably want to be anonymous because they’re either doing other things that are improper (maybe illegal), or they have a day job and don’t want to lose it because their employer might sack them if they were known to be doing destructive things. And let’s be clear – going public, especially with a POC, when there is no fix is really the equivalent of shouting fire in a crowded theater.

    My main points are that Bruce’s claim is completely busted – too many counterexamples I personally know about for that to hold much water, and that whether it is good or bad, or whether we like it or not, FD and bulletin mills are going to continue, so railing against them won’t help. There’s also that whole freedom of speech issue.

    And let’s be real here – one of the main drivers in this whole thing is ego. It’s rare that someone is really into the common good, and posts a bunch of stuff to full_disclosure. They want everyone to see how smart they are. I know people who are smarter than 99% of all the l33t h4x0rs out there, and no one knows their name in public, but they’ve done more to improve security for millions of people than all the FD ‘bulletins’ and papers put together. If someone _really_ wants to help, go be part of the solution.

  6. Ryan Russell says:

    I don’t doubt that some vendors will do the right thing regardless. I happen to think I work for one as well. I don’t doubt that some vendors will always do the wrong thing. But then at least with FD we know that. If you’re trying to debunk the extremes, I don’t think many people will argue with you.

    I DO think that some number of vendors only do the right thing because they are bullied into it, and would rather go back to slipstreaming or not fixing the problems. Otherwise you’re trusting that corporations think “Doing what is morally right, at extra expense, for no profit? Sign me up!”

    I also don’t disagree that the percentage for ego-driven activities is in the 90’s. But I do happen to think that it’s a beautiful thing when researchers get to feed their egos, and companies get their free QA out of it.

  7. david_leblanc says:

    I’m saying that companies (and people) will do the right thing based on economic motives. Companies, like people, do display varying degrees of altruism – for example, Microsoft and its employees donate more to charity than most. To the extent that full disclosure provides an economic motive, it will encourage some level of security response. The thing is that it doesn’t tend to do that well. It also tends to do a lot of harm along with the good. Let’s look at what we get:

    1) Notification that a problem exists – that’s a benefit

    2) Some idea of how well vendors respond to problems – another benefit

    3) Speedy dissemination of attack capability

    4) Customers under widespread attack, and little ability to respond

    5) Vendor produces fixes under pressure, with increased chance of regressions, and may not fix related problems due to time pressure

    For example, I understand that some new cars have some interesting crypto problems, and someone with the right equipment can fairly trivially steal some of these cars. So would it be right to make the equipment available for $3.50, and sell it in the ghetto? That’s how FD works.

    Free QA? If we had a tester who found as few bugs as most of the external people, we’d sack them. ROTFL. That’s another myth – that FD actually provides any real value in terms of improving apps. That’s kind of like trying to reduce malaria by swatting mosquitos with your hands.

    People who want to be part of the solution find ways to deliver the first two without the downsides. People who just like disrupting things and making the Internet a more difficult place to work and do business just throw bombs and laugh at the disruption they cause – really childish, IMHO.

    One of the main points you’re missing is that we’ve gone well beyond FD – there’s an active criminal economy that trades in exploits, and FD feeds it, which means the criminals don’t have to work as hard. I’m sure the criminals think highly of FD – they get free exploit generation. "Helped criminal enterprise" isn’t something I’d want on my resume.

    It used to be that exploits had very little economic value, and it was mostly an ego thing. Things change.