What makes a bug a security bug?


In my last post, I mentioned that security bugs were different from other bugs.  Daniel Prochnow asked:

What is the difference between bug and vulnerability?

In my point of view, in a production environment, every bug that may lead to a loss event (CID, image, $) must be considered a security incident.

What do you think?

I answered in the comments, but I think the answer deserves a bit more commentary, especially when Evan asked:

“I’m curious to hear an elaboration of this.  System A takes information from System B.  The information read from System A causes a[sic] System B to act in a certain way (which may or may not lead to leakage of data) that is unintended.  Is this a security issue or just a bug?”

Microsoft Technet has a definition for a security vulnerability:

“A security vulnerability is a flaw in a product that makes it infeasible – even using the product properly – to prevent an attacker from usurping privileges on the user’s system, regulating its operation, compromising data on it or assuming ungranted trust.”

IMHO, that’s a bit too lawyerly, although the article does an excellent job of breaking down the definition and making it understandable.

Crispin Cowan gave me an alternate definition, which I like much better:

Security is the preservation of:

· Confidentiality: your secret stuff stays secret

· Integrity: your data stays intact

· Availability: your systems and data remain available

A vulnerability is a bug such that an attacker can compromise one or more of the above properties.

 

In Evan’s example, I think there probably is a security bug, but maybe not.  For instance, it’s possible that System A validates (somehow) that System B hasn’t been compromised.  In that case, it might be ok to trust the data read from System B.  That’s part of the reason for the wishy-washy language of the official vulnerability definition.

To me, the key concept in determining if a bug is a security bug or not is that of an unauthorized actor.  If an authorized user performs operations on a file to which the user has access and the filesystem corrupts their data, it’s a bug (a bad bug that MUST be fixed, but a bug nonetheless).  If an unauthorized user can cause the filesystem to corrupt the data of another user, that’s a security bug.

When a user downloads a file from the Internet, they’re undoubtedly authorized to do that.  They’re also authorized to save the file to the local system.  However, the program that reads the downloaded file cannot trust its contents (unless it has some way of ensuring that the file contents haven’t been tampered with[1]).  So if there’s a parsing bug in that program, and there’s no check to ensure the integrity of the file, it’s a security bug.
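
To make that concrete, here’s a minimal sketch of the kind of parser I have in mind.  The record format (a 4-byte length followed by a payload) and every name in it are invented for illustration; the point is that the one bounds check on the untrusted length is what separates an ordinary parser from a security bug.

#include <stdio.h>

/* Hypothetical record format invented for illustration:
   a 4-byte little-endian length, followed by that many payload bytes. */
static int parse_downloaded_record(FILE *f)
{
    unsigned char header[4];
    char payload[64];
    size_t length;

    if (fread(header, 1, sizeof(header), f) != sizeof(header))
        return -1;                       /* truncated file: reject it */

    length = (size_t)header[0] |
             ((size_t)header[1] << 8) |
             ((size_t)header[2] << 16) |
             ((size_t)header[3] << 24);

    /* This length came straight from an untrusted file.  Without the
       check below, the next fread becomes an attacker-controlled buffer
       overflow, i.e. a security bug rather than just a bug. */
    if (length > sizeof(payload))
        return -1;                       /* hostile or corrupt length: reject it */

    if (fread(payload, 1, length, f) != length)
        return -1;                       /* truncated payload: reject it */

    /* ... go on to use payload[0..length-1] ... */
    return 0;
}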

 

Michael Howard likes using this example:

char foo[3];
foo[3] = 0;

Is it a bug?  Yup.  Is it a security bug?  Nope, because the attacker can’t control anything.  Contrast that with:

struct
{
    int value;
} buf;
char foo[3];

_read(fd, &buf, sizeof(buf));
foo[buf->value] = 0;

That’s a 100% gen-u-wine security bug.
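
For contrast, here’s roughly what the benign version of that second example looks like.  This is only a sketch (the function wrapper, its name, and the assumption that fd is an already-open file descriptor are mine): once the untrusted value is range-checked before it’s used as an index, the attacker no longer controls where the 0 goes.

#include <io.h>   /* _read(); on non-Windows systems, read() from <unistd.h> plays the same role */

static void store_zero_from_untrusted_input(int fd)
{
    struct
    {
        int value;
    } buf;
    char foo[3];

    if (_read(fd, &buf, sizeof(buf)) != (int)sizeof(buf))
        return;                          /* short read: give up */

    if (buf.value < 0 || buf.value >= (int)sizeof(foo))
        return;                          /* out-of-range index: reject the input */

    foo[buf.value] = 0;                  /* the attacker no longer chooses where the 0 lands */
}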

 

Hopefully that helps clear this up.

 

 

[1] If the file is cryptographically signed with a signature from a known CA and the certificate hasn’t been revoked, the chances of the file’s contents being corrupted are very small, and it might be ok to trust the contents of the file without further validation. That’s why it’s so important to ensure that your application updater signs its updates.
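
In rough terms, the trust decision that footnote describes looks something like the sketch below.  The two helper functions are hypothetical placeholders for whatever signature and revocation checking your platform provides (Authenticode on Windows, for example); this shows only the shape of the check, not a complete updater.

#include <stdbool.h>

/* Hypothetical helpers: stand-ins for whatever signature and certificate
   revocation checks your platform provides. */
bool verify_update_signature(const char *path, const char *expected_publisher);
bool certificate_is_revoked(const char *path);

/* The update file should already have been copied to a location the user
   can't modify before this runs, and nothing in it should be parsed or
   executed until the check succeeds. */
static bool update_is_trustworthy(const char *update_path)
{
    if (!verify_update_signature(update_path, "Example Software Inc."))
        return false;                    /* unsigned, or signed by someone else */

    if (certificate_is_revoked(update_path))
        return false;                    /* signed, but the certificate has been revoked */

    return true;                         /* only now is it reasonable to trust the contents */
}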

Comments (22)

  1. Ash says:

    Hi Larry,

    I’m not entirely convinced that a security bug is necessarily worse than an "ordinary" bug.  Obviously it will all depend on the bug in question (security or otherwise), but in many ways one can at least (try to) architect oneself around security bugs.

    E.g. imagine you have a database product that anyone with network access can, for some reason or other, flip over just by crafting a UDP packet.  This is obviously a bad security bug – but you can protect your database by placing a firewall in front of it – not eliminating the security bug, but at least reducing the risk of someone being able to send that UDP packet in the first place.

    On the other hand, if you have an "ordinary" bug that, say, stops your machines from starting after a certain date – then regardless of whether anyone would like to attack you or not, you’re in trouble.

    Obviously a purist might want to call the latter example an issue and not a risk – but you probably get the point I’m getting at.

    -Ash

  2. I personally think you’re still emphasizing a false dichotomy.

    Consider the string of bugs that, chained together, produced the exploitable Flash vulnerability:

    http://www.matasano.com/log/1032/this-new-vulnerability-dowds-inhuman-flash-exploit/

    In the real world, code containing something like the out of bounds dereference you showed above would be buried inside a much more complex piece of software, in which it would be infeasible to fully unravel the chain and state conclusively/provably that "this is not a security bug".

  3. Ash, I’m not saying that they are "worse".  

    I’m saying that the risk associated with a security bug is greater than the risk associated with a non-security bug, because some unauthorized person can exploit a security bug to do "bad things" (for an unspecified value of "bad things").

    Remember that at the end of the day, once a product has shipped, applying a bug fix carries a certain amount of risk, and organizations need to weigh that risk when deciding whether to take the fix.

    If a bug is a security bug, that increases the risk of NOT taking the fix.  

  4. Skywing says:

    Actually, you have to do a whole lot more than just "ensure that your application updater signs its updates".  I know this is probably not what you meant, but saying that a signature check alone is enough to trust file contents is dangerous.

    Just because code is signed does not mean that it is free of bugs.  Furthermore, even enforcing a rule that code must be signed with your key is not particularly enough to ensure that everything is kosher, depending on how your software update mechanism works.

    The problem is that, assuming you sign all of your updates, and you have an update that introduces a security bug, and then another update that fixes said security bug, a malicious user in the update server path might be able to just feed you the signed binaries *with security bug present*, which will cause a software update mechanism that consists of simply "check signature, replace file" to happily reintroduce security holes at the whim of any attacker.

    This might sound a bit farfetched, but it’s a very real problem (in fact, one that many Linux distributions with a centralized package management system that spiders out to third party mirrors that host "signed" packages are hard hit by).  The problem is made even worse when you consider that there are scenarios where you may want to allow a user to run an old version of a particular piece of software, which even happens with Microsoft software (say, if you want to run Windows Vista SP0 for a while still, even though Windows Vista SP1 is out).  Blind "new file version is higher than old file version" checks don’t really cut it either.  This tends to be even more common with third party software than with Microsoft software out in the real world, in my experience.

    The unfortunate fact is that updating software securely is very hard to do, and it’s a whole lot more complex than simply slapping a digital signature check on the whole process and calling it done.  And this also assumes that the process running the signature check has ensured that the update file is at a secure location before it checks the signature (so that a user can’t exploit a time-of-check/time-of-use race if the update was, say, running from the user’s %TEMP% directory), and all the "usual" local security problems, which is a whole other, non-trivial can of worms.

    – S

  5. Skywing, as always, you’re right.  I seriously glossed over the difficulty associated with writing an updater.  

  6. Ash says:

    I agree 100% with you up to the conclusion that a security bug increases the risk of not taking the fix.

    A security bug usually implies that someone else can do bad things to your system; however, you’re not guaranteed that someone will try that.

    A functional bug on the other hand usually implies that bad things can happen to your system when you’re using it like you’re supposed to.

    My reasoning is that you are guaranteed to use your system, but you’re not guaranteed that someone will try to hack your system. Thus the likelihood of being hit by a functional bug is theoretically higher than that of being hit by a security bug.

    Add to that the ability to mitigate security bugs to some extent by implementing other measures, and I see not keeping up to date with functional bug fixes as a bigger risk than security bugs.

    This shouldn’t take any glory or attention away from security bugs. But I’d like customers to pay more attention to fixing functional bugs than they do today. Just applying the latest service pack would be a nice start.

    -Ash

  7. Ash, I think we’re in essentially total agreement.  I’d love it if people picked up the latest service packs.  And if they kept their machines up-to-date.

    I wish for lots of things.  

  8. Norman Diamond says:

    >> Security is the preservation of:

    >> · Confidentiality: your secret stuff stays secret

    >> · Integrity: your data stays intact

    >> · Availability: your systems and data remain available

    If that is the definition of security then nearly every version of Windows that I have used is insecure.

    "A vulnerability is a bug such that an attacker can compromise one or more of the above properties"

    With that definition, vulnerability is different from insecure.  With that definition, even if every vulnerability is removed, Windows will still be insecure.  I wonder if Crispin Cowan might want to change his definitions.

    ‘If an authorized user performs operations on a file to which the user has access and the filesystem corrupts their data, it’s a bug (a bad bug that MUST be fixed, but a bug nonetheless).’

    Wow, you and I agree.  But do you know any way to persuade your employer to agree?

    > Michael Howard likes using this example:

    > char foo[3];

    > foo[3] = 0;

    > Is it a bug?  Yup.  Is it a security bug?  Nope, because the attacker can’t control anything.

    Nope or maybe not nope, because that example looks rather Flashy.

  9. Derlin says:

    As always, these discussions are very interesting.

    Should "->" be "." in foo[buf->value] = 0 in your example?

  10. Dave says:

    Depending on what’s declared between foo and the buggy assignment, that could actually be a security bug if it overwrites a security-related value.

  11. I don’t mind admitting that the example with the struct vs. the array doesn’t jump out at me as a bug at all, let alone a security bug. Heck – it’s 20 years since I wrote C in anger, and even then, "mild irritation" might have been a more accurate claim.

    I’m guessing that the second example is a security bug because it allows the bad guy to decide where he wants the 0 to go. The assumption here is that in the first scenario he can’t combine the ability to write his 0 in an unexpected place with some other exploit to produce a problem.

    Did I miss the point?

  12. Dominic – the bug allows an attacker to write a 0 to any place in memory he wants.  Just having the ability to write a 0 at an arbitrary location in memory can be used to create a remote code execution exploit even without other exploits.

    Just writing a 0 is sufficient.

  13. TimHollebeek says:

    > If that is the definition of security then nearly every version of Windows that I have used is insecure.

    This surprises you?

    Crispin is actually just quoting the Department of Defense definition of security.  There was excellent work done on the theory of security in the pre-Windows era, most of which has been forgotten and yet to be rediscovered.

  14. Jack Mathews says:

    This line strikes me:

    >>> To me, the key concept in determining if a bug is a security bug or not is that of an unauthorized actor.  If an authorized user performs operations on a file to which the user has access and the filesystem corrupts their data, it’s a bug (a bad bug that MUST be fixed, but a bug nonetheless).  If an unauthorized user can cause the filesystem to corrupt the data of another user, that’s a security bug. <<<

    I believe that UAC muddies this just a little bit now.  Since certain apps can run elevated, would you consider this sort of bug in one of those apps a vulnerability now, since it’s a different level of "user" access?  I find that interesting – that setup programs can now have "security vulnerabilities" where bugs once existed.

  15. Daniel Prochnow says:

    WOW!!!

    Some real nice arguments in the comments.

    Before I start my post, please forgive my bad indian-latin-english in advance.

    In my humble opinion, the discussion about the boundaries between a bug and a security bug may depend more on the point of view than on technical aspects.

    For those who develop software, the focus is the same one that Larry and some others defend here. In the software development universe, a security bug is still just a subclass of bug.

    But there are information security guys (me included) who handle security from the other side, dealing with threats, vulnerabilities, incidents, losses and risk analysis every day. For us, every loss (or near-loss) event caused by a bug, security-related or not, is a security issue.

    It’s clear for me the difference now.

    Thank you.

  16. Rob Meyer says:

    I think of it more as a spectrum of risk. It’s almost certainly impossible to prove that a bug absolutely can’t be exploited as a security flaw. It seems like papers explaining how to exploit things previously thought unexploitable are published all the time.

    Today’s boring bug is tomorrow’s exploitable flaw.

    So on one end you have "trivially easy for any attacker to exploit to violate security", and on the other end, "impossible for all but the most genius team of developers to see how this might be exploited in any context."

    Line this up in a quadrant style graph with risk of fixing, and I think you have a solid methodology for classifying defects and how to approach the urgency of the fixes.

  17. Artimus says:

    I think I’d consider that a genuine critical bug with some security implications rather than a genuine security bug.  This one comes down to probability and risk.

    There’s a very, very high risk that a corrupt/incorrect file will cause memory to be overwritten, causing a crash, corruption, or data loss.  That makes it a critical bug.

    On the security side, there’s absolutely nothing that an attacker could do with that bug that couldn’t be done by chance with a corrupt/incorrect file input.  Mind you, I’m not arguing that the odds of a particular "exploit" happening by chance are anywhere remotely close to the odds of an attacker producing it. I’m just saying that he doesn’t introduce any new modes of failure.  

    I’d consider a genuine security bug as one where the probability of data loss/corruption/program crash due to bad input is very low and/or where the attacker can manipulate the bug to produce modes of failure completely outside the normal scope of the bug itself.

  18. Sys64738 says:

    [Larry]

    >>> the bug allows an attacker to write a 0 to any place in memory he wants.  Just having the ability to write a 0 at an arbitrary location in memory can be used to create a remote code execution exploit even without other exploits.

    Hi Larry,

    please excuse my ignorance in security, but could you please elaborate on that?

    It sounds fascinating to me! Just writing a 0 to any place in memory allows a code execution exploit?

    How is that possible?

    Thank you!

  19. Will says:

    That zero could alter a function call; it could alter where another write stores its data (which might be the particular value the attacker needs to write); either of these could alter which branch is taken after a comparison (perhaps of permissions) or whether another function is called; it could alter how many bytes are popped off a stack; etc.

    Also, Larry’s example code may have been inside a loop or other routine which is called multiple times, allowing the attacker several chances to reroute code execution.

  20. mh says:

    I don’t think it’s valid to pick on a few lines of code as an example here.  Both of the snippets that Larry gave are basic mistakes that I would hope any sane developer would never let go into production.  Most bugs these days come from more subtle interactions between components, with unexpected side-effects of unanticipated actions being the prime cause.

    The snippets are, however, good for illustrating the point, but I wouldn’t take them too literally.

    Splitting hairs between security bugs and general bugs seems a bad thing to me.  I know for a fact that I have systems which have security bugs, but which are never used in circumstances where these bugs may manifest.  I would hope that a general bug which can occur during normal operation and which would seriously compromise availability or stability would be higher on any conceivable priority list.  The moral here I guess is that too much paranoia about security can end up being counterproductive in the longer term.
