What makes a bug a security bug?

In my last post, I mentioned that security bugs were different from other bugs.  Daniel Prochnow asked:

What is the difference between bug and vulnerability?

From my point of view, in a production environment, every bug that may lead to a loss event (CID, image, $) must be considered a security incident.

What do you think?

I answered in the comments, but I think the answer deserves a bit more commentary, especially when Evan asked:

“I’m curious to hear an elaboration of this.  System A takes information from System B.  The information read from System A causes a[sic] System B to act in a certain way (which may or may not lead to leakage of data) that is unintended.  Is this a security issue or just a bug?”

Microsoft Technet has a definition for a security vulnerability:

“A security vulnerability is a flaw in a product that makes it infeasible – even using the product properly – to prevent an attacker from usurping privileges on the user’s system, regulating its operation, compromising data on it or assuming ungranted trust.”

IMHO, that’s a bit too lawyerly, although the article does an excellent job of breaking down the definition and making it understandable.

Crispin Cowan gave me an alternate definition, which I like much better:

Security is the preservation of:

· Confidentiality: your secret stuff stays secret

· Integrity: your data stays intact

· Availability: your systems and data remain available

A vulnerability is a bug such that an attacker can compromise one or more of the above properties.


In Evan’s example, I think there probably is a security bug, but maybe not.  For instance, it’s possible that System A validates (somehow) that System B hasn’t been compromised.  In that case, it might be ok to trust the data read from System B.  That’s part of the reason for the wishy-washy language of the official vulnerability definition.

To me, the key concept in determining if a bug is a security bug or not is that of an unauthorized actor.  If an authorized user performs operations on a file to which the user has access and the filesystem corrupts their data, it’s a bug (a bad bug that MUST be fixed, but a bug nonetheless).  If an unauthorized user can cause the filesystem to corrupt the data of another user, that’s a security bug.

When a user downloads a file from the Internet, they’re undoubtedly authorized to do that.  They’re also authorized to save the file to the local system.  However, the program that reads the downloaded file cannot trust its contents (unless it has some way of ensuring that the file contents haven’t been tampered with[1]).  So if there’s a parsing bug in that program, and there’s no check to ensure the integrity of the file, it’s a security bug.
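To make that concrete, here’s a hedged sketch of what such a parsing bug might look like; the file format, structure names, and sizes are invented for this example, not taken from any real program:

#include <stdio.h>

/* Hypothetical header for a made-up file format. */
struct file_header
{
    unsigned int record_count;   /* read straight from the untrusted file */
};

int parse_file(FILE *fp)
{
    struct file_header hdr;
    char records[16][32];

    if (fread(&hdr, sizeof(hdr), 1, fp) != 1)
        return -1;

    /* Security bug: record_count is attacker-controlled, so a malicious
       file can drive this loop past the end of 'records'. */
    for (unsigned int i = 0; i < hdr.record_count; i++)
    {
        if (fread(records[i], sizeof(records[i]), 1, fp) != 1)
            break;
    }

    return 0;
}

A single bounds check on hdr.record_count before the loop is enough to close that hole.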


Michael Howard likes using this example:

char foo[3];
foo[3] = 0;     /* out-of-bounds write: the only valid indices for foo are 0 through 2 */

Is it a bug?  Yup.  Is it a security bug?  Nope, because the attacker can’t control anything.  Contrast that with:

struct
{
    int value;
} buf;
char foo[3];

/* The attacker controls whatever arrives on fd... */
_read(fd, &buf, sizeof(buf));
/* ...so buf.value is an attacker-controlled index into a 3-byte buffer. */
foo[buf.value] = 0;

That’s a 100% gen-u-wine security bug.
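For contrast, here’s a minimal sketch of one way to close that hole; the wrapping function, its name, and the error handling are mine, not part of Michael’s example:

#include <io.h>      /* _read in the Microsoft C runtime */

int set_flag(int fd)
{
    struct
    {
        int value;
    } buf;
    char foo[3];

    if (_read(fd, &buf, sizeof(buf)) != (int)sizeof(buf))
        return -1;                       /* short read: bail out */

    /* Reject anything outside foo before using it as an index. */
    if (buf.value < 0 || buf.value >= (int)sizeof(foo))
        return -1;                       /* attacker-supplied index out of range */

    foo[buf.value] = 0;
    return 0;
}

The data coming off fd is still attacker-controlled; the difference is that the attacker can no longer choose where the write lands.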


Hopefully that helps clear this up.


[1] If the file is cryptographically signed with a certificate issued by a known CA and the certificate hasn’t been revoked, the chances of the file’s contents having been tampered with are very small, and it might be ok to trust the contents of the file without further validation. That’s why it’s so important to ensure that your application updater signs its updates.
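For what it’s worth, here’s a hedged sketch of the shape of that check.  The helper functions are hypothetical stand-ins for whatever your platform actually provides (on Windows, the real work would go through Authenticode/WinVerifyTrust):

#include <stdbool.h>

/* Hypothetical helpers -- stand-ins for the platform's real signature
   verification and revocation checks. */
bool signature_chains_to_known_ca(const char *path);
bool signing_certificate_is_revoked(const char *path);

bool ok_to_trust_downloaded_file(const char *path)
{
    /* Only treat the file's contents as trustworthy if the signature
       verifies against a known CA and the signing certificate hasn't
       been revoked. */
    if (!signature_chains_to_known_ca(path))
        return false;
    if (signing_certificate_is_revoked(path))
        return false;
    return true;
}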