What is it that makes security hard?


I’ve been asked this question numerous times, often in the guise of something like, “why can’t you guys simply fix the security problem?” or “reliability and scalability problems are understood and solvable; why can’t you do the same with security?” or my favorite variant, “what the heck keeps you interested in security when it seems you’re fighting a ‘no-win’ battle?”
First, there is little agreement around what constitutes a “security bug,” so I’ll leave that subject for another day!
Next, I’m no expert on the science behind reliability or scalability, so I’ll take it at face value that when people say these issues are “understood and solvable,” they are being honest.


So what is it that makes security hard?


It’s simple:



  • Scalability and reliability issues are man-vs-machine and machines are stupid.

  • Security is man-vs-man and humans are intelligent.

This security stuff is an ongoing arms race and chess game, and each side is constantly trying to outwit the other.  We raise the bar, and the attackers then spend time trying to defeat that bar. So we raise the bar again, and so on.  With reliability and scalability, we can understand the “adversary” and that’s that. The “enemy” won’t adapt to defeat you!


To be honest, it’s this ongoing intellectual battle that keeps me coming back to security. But it also means that no one will ever build 100% secure computer products, and that’s why we update the Security Development Lifecycle (SDL) twice a year as we learn new attack and defense techniques.

Comments (10)

  1. Osama Salah says:

    "Security is man-vs-man and humans are intelligent."

    It’s not just about intelligence.

    The "evil" man has most probably more incentive than the "good" man (think organized cyber-crime).

  2. a co-worker says:

    I think what makes it hard is the profusion of New Zealanders.

  3. MikeA says:

    >> First, there is little agreement around what constitutes a “security bug” so I’ll leave that subject for another day!

    Well, I’d agree, but I’ll take a shot nonetheless 🙂

    If you have to break "bugs" down, I’d say they really come in three flavors – functional bugs, performance bugs, and security bugs.  This totally side-steps the "bug vs. flaw" argument – for simplicity (to reduce the number of variables) I’ll just assume that the developer wanted to do the right thing, and had a design that met that goal.

    As much as I think it’s an oversimplification, and perhaps could do with another axis, the general argument from Hugh Thompson and James Whittaker* that security bugs are "side effects" of usually correctly functioning code is a good one – see "Why Security Testing Is Hard," IEEE S&P, July–August 2004, also available at [http://tinyurl.com/2jrmph] so you don’t have to pay for it 🙂

    In my book, it’s harder to find security bugs because there’s no noticeable side effect unless you specifically look or test for it.  Which leads to how I think security bugs differ from functional bugs – security bugs are inadvertent behavior that can be exploited for malicious gain (either directly, such as a DoS, or indirectly, such as elevating privileges).
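
    (A minimal, hypothetical C sketch of that "no noticeable side effect" idea – the function and names are illustrative only, not from any real product:)

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Works correctly for every input a functional test is likely to try,
     * so there is no visible side effect to notice. The security bug (no
     * bounds check) only shows up when someone deliberately probes for it
     * with a name of 32 characters or more, which smashes the stack. */
    static void greet(const char *name)
    {
        char buf[32];
        strcpy(buf, name);            /* overflow if strlen(name) >= 32 */
        printf("Hello, %s\n", buf);
    }

    int main(void)
    {
        greet("Alice");               /* functionally correct, tests pass */
        /* greet(attacker_controlled_long_string) corrupts the stack */
        return 0;
    }
    ```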

    >> Scalability and reliability issues are man-vs-machine and machines are stupid

    Now, security vs. reliability/performance I believe is an easier one to address.  Software isn’t like a machine – there are no moving parts that "wear out".  All things being equal (setting aside things like service patches, 3rd-party driver changes, etc., that may cause problems outside our control), reliability will increase in a system the more it’s tested and used over time.  The reliability bugs will be found and fixed.  Even if new bugs are introduced by those fixes (we’re human after all), they too will be found and fixed.  Performance is similar – there’s a bar that needs to be met, and changes can be made to meet that bar.  Left alone, that bar will come down, because improvements in hardware (CPU, memory, etc – thank you Dr Moore!) make it easier to meet.

    With security, on the other hand, the bar will only go in one direction – up.  We try to close off attack vectors, but without completely changing certain things (direction of stack growth, HTML+JavaScript, etc), we can only "patch" these security bugs (either in the frameworks, or by developers knowing about them).  Attackers will inevitably look for, and find, ways around the patches.  Also, there are security bugs in our future that we aren’t even considering right now.

    So, in the words of the NSA – "Attacks always get better – they never get worse".  The corollary is that the security bar is always going up, while the functional/performance bar comes down over time.

    Perhaps an oversimplification, but that’s my $0.02.

    *Disclaimer: I’m a friend and ex-colleague of Hugh and James, but I still think it’s one of the better and most easily digested explanations of security vs. functional bugs out there.

  4. ac says:

    "Security is man-vs-man and humans are intelligent."

    There’s also an issue of scale: if you maintain the program, to make it 100% "safe" you’d have to find and fix *all* the security bugs.  An attacker has to find only one.
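
    (A back-of-the-envelope sketch of that asymmetry in C – the per-bug fix rate and bug count below are made-up numbers, purely for illustration:)

    ```c
    #include <math.h>
    #include <stdio.h>

    /* Suppose a product ships with n latent security bugs, and the
     * defenders independently find and fix each one with probability p.
     * The attacker wins if even a single bug survives, so the defenders
     * must win all n times. Both p and n here are assumptions. */
    int main(void)
    {
        double p = 0.99;                 /* assumed per-bug fix rate */
        int    n = 100;                  /* assumed number of bugs   */

        double all_fixed = pow(p, n);    /* ~0.366: worse than a coin flip */
        printf("P(every bug fixed)    = %.3f\n", all_fixed);
        printf("P(attacker finds one) = %.3f\n", 1.0 - all_fixed);
        return 0;
    }
    ```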

    And then there are the user’s actions, which can be manipulated.  And UAC as it is now, in my opinion, is still done *wrong*.  The confirmation dialogs pop up *too often* – often enough that people acknowledge them automatically (even people who know what they’re doing!).  For example, to see all the processes running on the machine, I have to confirm a UAC dialog every time I open Task Manager?!?

    Nobody will read dialogs that pop up that often, and nobody will think about them.

    No offense, but I believe that, the way you did it, you added it more to be able to say "you’re guilty yourself, you clicked yes" than to really enhance the user’s security.

  5. alik levin's says:

    It’s Between Your Ears Why? Because "Security is man-vs-man and humans are intelligent." – more about

  6. Gene Naden says:

    Why is it hard to understand the training materials on the subject of security? Because if you explain it too well then the attacker can read it, understand it only too well, and find the vulnerabilities. So you can only include part of the story.

  7. For sure, it’s really difficult to analyze all the possibilities, keep backward compatibility, and know all the performance impacts in a generic OS.

    As with the responses to your blog entry http://blogs.msdn.com/michael_howard/archive/2006/05/26/address-space-layout-randomization-in-windows-vista.aspx, many people point out that other OSes have had this kind of feature for a long time before Microsoft tried to implement it.

    In any case, the Microsoft implementation does not have 8 bits of entropy (2^8 possible image bases) in every case.  You can have situations where the process has more than one DLL loaded with a trampoline instruction pointing to the same offset, so the number goes down… 😉
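
    (A rough sketch of that entropy loss in C – the uniform-256-slot model below is a simplification I’m assuming, not a statement of exactly how Vista’s ASLR behaves:)

    ```c
    #include <math.h>
    #include <stdio.h>

    /* Model: image ASLR picks one of 256 base addresses (8 bits). If m
     * independently randomized DLLs each contain a usable trampoline at
     * a known offset, one guessed address hits *some* trampoline with
     * probability about m/256, so effective entropy drops by log2(m). */
    int main(void)
    {
        const int slots = 256;                 /* 2^8 possible bases */
        for (int m = 1; m <= 8; m *= 2) {
            double p_hit = (double)m / slots;  /* chance one guess lands */
            printf("%d trampoline DLL(s): P(hit) = %.4f (~%.0f bits)\n",
                   m, p_hit, -log2(p_hit));
        }
        return 0;
    }
    ```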

    Good luck to you guys,

    Rodrigo (BSDaemon).

  8. Chris says:

    Good post.  Yet people still think an outdated, certified ‘consultant’ can protect them.  Security IS an arms race; if you’re not current, you’re out.

  9. Hi, Michael here. Every bug is an opportunity to learn, and the security update that fixed the data binding

  10. Microsoft explains how it missed a serious IE bug for NINE years or, as the company chooses to title