Linus Torvalds is “Fed up with the ‘security circus’”

There’s been a lot of discussion on the intertubes about some comments that Linus Torvalds, the creator of Linux, has made about security vulnerabilities and disclosure. Not surprisingly, there’s been a fair amount of discussion amongst the various MSFT security folks about his comments and about the comments about his comments (are those meta-comments?).


The whole thing started with a posting from Linus where he says:

Btw, and you may not like this, since you are so focused on security, one reason I refuse to bother with the whole security circus is that I think it glorifies - and thus encourages - the wrong behavior.

It makes "heroes" out of security people, as if the people who don't just fix normal bugs aren't as important.

He also made some (IMHO) unprofessional comments about the OpenBSD community, but I don’t think that’s relevant to my point.

Linus has followed up his initial post with an interview with Network World where he commented:

“You need to fix things early, and that requires a certain level of disclosure for the developers,” Torvalds states, adding, “You also don’t need to make a big production out of it.”


"What does the whole security labeling give you? Except for more fodder for either of the PR camps that I obviously think are both idiots pushing for their own agenda?" Torvalds says. "It just perpetrates that whole false mind-set" and is a waste of resources, he says.

As a part of our internal discussion, Crispin Cowan pointed out that Linus doesn’t issue security updates for Linux; instead, the downstream distributions that contain the Linux kernel issue security fixes.

That comment was the catalyst – after he made it, I realized that I think I understand the meaning behind Linus’ comments.

IMHO, Linus is thinking about security bugs as an engineer.  And as an engineer, he’s completely right (cue the /. trolls: “MSFT engineer thinks that Linux inventor is right about something!”). 

As a software engineer, I fully understand where Linus is coming from: From a strict engineering standpoint, security bugs are no different from any other bugs, and treating them as somehow “special” denigrates other bugs.  It’s only when you consider the consequences of security bugs that they become more interesting.

A non-security bug can result in an unbootable system or the loss of data on the affected machine.  And such bugs can be very, very bad.  But security bugs are special because they’re bugs that allow a third party to mess with your system in ways that you didn’t intend.

Simply put, your customers’ data is at risk from security bugs in a way that it isn’t from normal defects.  There are lots of bad people out there who would just love to exploit any security defect in your product.  Security updates are more than just “PR”; they provide critical information that customers use to help determine the risk associated with taking a fix.

Every time your customers need to update the software on their computers, they take the risk that the update will break something (that’s a large part of the reason that MSFT takes its time when producing security fixes – we test the heck out of stuff to reduce the risk to our customers).  But because the bad guys can use security vulnerabilities to compromise your customers’ data, your customers want to roll out security fixes faster than they roll out other fixes.

That’s why it’s so important to identify security fixes – your customers use this information for risk management.  It’s also why Microsoft’s security bulletins carry mitigating factors that help customers identify whether they are at risk.  For example, MS08-045, which contains a fix for CVE-2008-2257, has a mitigating factor noting that on Windows Server 2003 and Windows Server 2008 the enhanced security configuration mode mitigates the vulnerability.  A customer can use that information to know if they will be affected by MS08-045.

But Linus’ customers aren’t the users of Linux.  They are the people who package up Linux distributions.  As Crispin commented, the distributions are the ones that issue the security bulletins, and they’re the ones that work with their customers to ensure that the users of the distribution are kept safe.

By not clearly identifying which fixes are security related fixes, IMHO Linus does his customers a disservice – it makes the job of the distribution owner harder because they can’t report security defects to their customers.  And that’s why reporting security bug fixes is so important.

Edit: cleared out some crlfs

Edit2: s/Linus/Linux/ 🙂

Comments (23)

  1. Anonymous says:

    What does "ETA" mean in this context? I saw you use it the other day and seeing it again makes me wonder.

    Is it "edited the article," or something like that?

    Not "estimated time of arrival" or ETA, the Basque paramilitary nationalist organization, obviously, but those are the only two common acronyms I can think of.

    As for what Linus said, I think all that’s important is how risky the bug and the fix are. Whether it’s security or some other type of bug doesn’t matter per se. I guess that’s what he’s saying, in a way. But, as I think you suggest, security bugs tend to be more important because they let other people do things to your machine, unlike most other bugs where you have to do something. Something you presumably don’t do often, else the bug would have been discovered & fixed a lot sooner. As soon as a security issue is known about people will start trying to exploit it.

  2. Anonymous says:

    Yes, and no, you forget the disclaimer in the GPL states:

      This package is distributed in the hope that it will be useful,

      but WITHOUT ANY WARRANTY; without even the implied warranty of

      MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the

      GNU General Public License for more details.

    There are additional sections in the full text that go into more detail, Section 15: Disclaimer of Warranty, and Section 16: Limitation of Liability.

    So basically all bugs are the customer’s problem.  Live with it, or don’t use the software.

    In recent times we’ve seen non-security bugs become security bugs (I’m thinking of the Flash VM null pointer bug here), so in fact _any_ bug, could be a security bug.

    I agree that perhaps he could help out with a few pointers for each commit, but what metrics would you use to measure?

  3. Anonymous says:

    > By not clearly identifying which fixes are security related fixes, IMHO Linus does his customers a disservice…

    This is a little silly.  The Linux kernel is _not_ the sole responsibility of Linus Torvalds.

    As a Microsoft customer, I don’t perceive a disservice because Steve Ballmer doesn’t _personally_ identify which fixes are security related fixes.  And since Ballmer’s responsibilities include marketing and investor relations, Ballmer’s comments about Windows security have a greater disconnect than Linus’s comments about Linux security.  And I don’t blame Ballmer; again, his responsibilities include marketing and investor relations.

    Simply put, Linus Torvalds has no ability to deny anyone information about the Linux kernel.  He is not considered an expert on security issues in running Linux, by anyone.

    This is the second example in a week of Microsoft blogging of Microsoft engineers wagging an accusatory finger for insufficient dedication to security.

    This is fine.  Microsoft has made very impressive gains in the whole security development life-cycle.  But as a Microsoft customer who has to support 50 odd users (some odder than others), the new Microsoft security development life-cycle doesn’t fill my body with the warm sensation of total bliss.

    I still have to deal with Internet Explorer being tightly embedded into the Microsoft OS.  I would prefer it if it was not.  Obviously, my concerns don’t carry much weight with the Microsoft decision makers.

    My ability to use newer revisions of Microsoft’s OS is limited by the slowest developer in the group of enterprise-critical applications I have to support.  I cannot selectively patch Windows 2000 workstations, to take full advantage of security upgrades, at any price.

    Ranting aside, I am more interested in Microsoft engineers talking about how the experiences of Microsoft customers will be improved than in lectures to other software companies.  Especially when I have demonstrably fewer problems and fewer headaches with the products from those other companies.  Because I expect to be using products from all those companies.

  4. Leo: It should have just been Edit:  Sorry.

  5. Manuel: Microsoft *does* call out security fixes when we distribute them.  You’re right that we don’t disclose security fixes made as part of the normal bug-fixing process, but that’s because nobody has the opportunity to see those security bugs – if a tree falls in the forest, did it make a noise (if a security bug is fixed but no customer will ever see it, was it worth mentioning)?

    I’m actually NOT wagging my finger at Linus (or at least that wasn’t my intent).  My intent was to say (a) that I understand where he’s coming from and (b) to point out that even though at a software engineering level security bugs are no different from any other bugs, when you step back and look at the bigger picture there are critical differences between security bugs and other defects.

  6. Norman Rasmussen: The GPL doesn’t enter into this, IMHO.  There’s a similar disclaimer in almost every software product out there.  

    Customers that deploy software products (either FOSS or OTS) take time to test to ensure that the software will meet their needs (for enterprise customers deploying an OS, that means that they evaluate the OS carefully to ensure that the OS won’t break their line-of-business applications).  

    Similarly, when a vendor provides a bug fix for a problem, the customer goes through the same evaluation process to ensure that the fix won’t destabilize their systems.  

    Security fixes have to go through the same evaluation process.  But security fixes carry an extra level of risk associated with them, because the bad guys can and will use them to attack the customer (or the customer’s data), so customers tend to deploy security fixes more quickly than they would other bug fixes.

    If the customer doesn’t know that a particular bug fix has security implications they may choose to ignore a particular fix.

    And the Flash VM bug was extraordinary, and may be a rule changer (it’s too soon to say).  If in general someone can figure out a way to turn a DoS attack into RCE, that would be…. bad.

  7. Anonymous says:

    There’s one aspect of the ensuing discussion that I don’t think you addressed, and which is an important yet perhaps subtle point. Basically, once you start tagging certain changes as "security fixes" it implies that other changes *aren’t*.

    In reality, the usual reason some bug fixes get tagged as security fixes is because they were found by amateur, or paid, security researchers specifically looking for exploits. However, just because I am *not* a security researcher and I check in a random fix for some crash, that doesn’t mean that the change doesn’t have security implications.

    The way I interpreted Linus’ comments, and what I agree with, is that if you care, you need to have every change potentially considered for security implications, by people who care about security, and with the understanding that the typical developer (kernel, or otherwise) probably doesn’t have that mindset.

  8. Anonymous says:

    But Linus’ customers aren’t the users of Linus.

    Is this another typo (aren’t the users of Linux)?

  9. Anonymous says:

    There’s something else. About those bad, bad people who want to mess with your system:

    They don’t stick to exploiting bugs clearly identified in security bulletins.

    A bug that gets de-prioritized because the developers can’t think of a way to exploit it, the fix half-tested and tossed out as a hot-fix, a fix that no-one gets unless they call the support line or lurk on newsgroups…

    …well, you better hope those bad, bad people can’t figure out a way to exploit it either.

    Your point is that end-users won’t install fixes unless they’re sufficiently scared about what will happen if they don’t. Yeah, that’s true enough. So a 0-day exploit hits and now you have to scramble to both get a fix together, test it, *and* scare your customers into installing it. And *they* have to hope their fear didn’t just convince them to install something that will destroy their data or bring down their systems in a less-scary, more mundane, but still utterly crippling fashion.

    Using fear as a tool to get people to act right is old hat. Every bad parent knows how to do *that*. The interesting story here is whether you should let the bad scary hackers affect how you prioritize bugs.

    AFAIK, you’ve talked about this before. Some horrible "security hole" in Vista that required users to play a specially-crafted sound file while holding their microphone up to a speaker? But that’s what you get when you rank bugs stamped with "security" an order of magnitude higher than all the other bugs put together – it becomes a tempting tool for anyone with an axe to grind, or glory to seek, or just wanting to see their own personal (least-) favorite bug fixed.

    Send in the clowns…

  10. Anonymous says:

    "A non security bug can result in an unbootable system or the loss of data on the affected machine.  And they can be very, very bad.  But security bugs are special because they’re bugs that allow a 3rd party to mess with your system in ways that you didn’t intend."

    Some bugs can be worse, as someone pointed out in another forum last year:  Some bugs can cause death.  If a machine depends on an embedded system that comes with any of those famous non-warranties, that machine is unsuitable for use in hospitals, cars, airplanes, air traffic controller systems, telephone switches, etc.

    There are some environments where I agree that security bugs are more special than others.  Those would be ordinary users’ desktops, financial services companies, etc.

    P.S.  About "and about the comments about his comments (are those meta-comments?)", this metametacomment says yes.

  11. Shog9: You’re right, but a patch/fix is a roadmap to exploitation.  It’s a lot easier to reverse engineer an exploit than it is to find one from scratch.  Lurene Grenier gave a fascinating talk at Bluehat where she described how she could reverse engineer a Microsoft patch and produce a weaponized exploit in under 12 hours.  And she does that every single month.

    Btw, you’ll note that the Vista speech recognition bug isn’t yet fixed.  That’s because the exploitability of that particular bug is relatively low – the various teams *did* prioritize security bugs and that one came out fairly low on the list.  It’s on the list, but it’s not at the "we need to fix this ASAP" level.

  12. Anonymous says:

    "Leo: It should have just been Edit:  Sorry."

    Haha, so I was trying to find meaning in a typo? No need to apologise! I bet you’ve been typing "ETA" a lot lately. I find it nearly impossible to type the word "serve" without turning it into "server" (or "director" -> "directory") and I rarely even notice I’ve done it until I’m re-reading much later.

  13. Anonymous says:

    "A non security bug can result in an unbootable system or the loss of data on the affected machine."

    I have a healthy respect for you and the MSFT SDL team and process, but I also understand that availability of resources, including data, is a key security goal.  So, a bug that results in an unbootable system or the loss of data would really be a security bug.

    A UI bug that renders "funny" (your font bug, for example)?  Not a security bug.

  14. Anonymous says:

    What is the difference between a bug and a vulnerability?

    From my point of view, in a production environment, every bug that may lead to a loss event (CID, image, $) must be considered a security incident.

    What do you think?

  15. Daniel, that’s only the case if some unauthorized user can trigger the loss event.  That’s the thing that differentiates security bugs from other bugs – if an unauthorized user can trigger the event, it’s a security bug, otherwise it’s just a bug.

  16. Anonymous says:

    "He also made some (IMHO) unprofessional comments about the OpenBSD community…"

    He’s been working on his hobby for more than ten years. Why would he need to be professional?

    I don’t think you could characterize many of his most quotable statements as "professional" but people very rarely take offense at them – did anyone with the OpenBSD jibe?

  17. Dave: Actually I was offended by them.  Like it or not, Linus is a public figure.  Comments like his are inappropriate for a public forum.

  18. Anonymous says:

    if an unauthorized user can trigger the event, it’s a security bug, otherwise it’s just a bug.

    I’m curious to hear an elaboration of this.  System A takes information from System B.  The information read from System B causes System A to act in a certain way (which may or may not lead to leakage of data) that is unintended.  Is this a security issue or just a bug?

  19. Anonymous says:

    Tell me one case where Microsoft paid compensation to a client because of a bug/security issue that caused data loss.

    I doubt there is such a case. Then why pay MS for claims on paper when the actual result is the same as with a product under the GPL license?

  20. Anonymous says:

    RE <Unknown>: Tell me one case where Microsoft paid compensation to a client because of a bug/security issue that caused data loss?

    Is this somehow worse than treating security risks/bugs as though they are not a real priority?

    This industry is fueled by consumer needs and desires and what they are willing to pay for, not the ambitions of "glorified heroes".

    As a consumer, I am certainly more concerned with the case where a malicious attack can cost me serious capital.  And I am glad to hear that the limited resources at Microsoft are not being squandered on the silly idea that "a bug is a bug", or that I’d rather see a font look proper on screen than prevent my bank accounts from being drained.

    There is, among men, a human obligation to prevent or help prevent the products and tools they manufacture from being used to exploit the unsuspecting or defenseless.  If a manufacturer can reduce the risk to potential victims and refuses to do so then they are guilty of gross negligence.  Manufacturers who take an apathetic and indifferent stance while watching these crimes be committed are not only inhumane, but the attitude is against the very beliefs that a republic like the United States was founded on.  That being that the health and welfare of the people are in the hands of those same manufacturers.

  21. Anonymous says:


  22. Anonymous says:

    "Every time your customer needs to update the software on their computer, they take the risk that the update will break something (that’s a large part of the reason that that MSFT takes it’s time when producing security fixes"

    Yes, it took them 5 years to deliver something similar to the PaX patch in Linux. Don’t say ASLR is better!

    And for 4 years, your Windows XP system was fair game for hackers. "AMD64 Overflows" are easily exploited.

    Why are there still viruses in Windows? I know the useless reasons:

    1. It’s more popular so more viruses.

    2. Linux also has but people don’t care about it.

    I didn’t find it very user friendly to re-install Windows again and again just because of a simple boot sector virus. That was 5 years back; I never touched any Windows system after that horrifying experience.

    It’s easy to mix up random statements and put up something which seems logical but isn’t. I can also put up Bill Gates or Microsoft comments about the London Stock Exchange program claiming reliability. September 8?? What happened?? No comments.

    I don’t hate Microsoft, I just don’t think their stuff is worth a dollar. Even if given for free, I won’t use it.
