The Trouble with Giblets

I don't write about the SDL very much, because I figure that the SDL team does a good enough job of it on their blog, but I was reading the news a while ago and realized that one aspect of the SDL would have helped our competitors had they adopted it.


A long time ago, I wrote a short post about "giblets", and they're showing up a lot in the news lately.  "Giblet" is a term coined by Steve Lipner, and it has entered the lexicon of "micro-speak".  Essentially, a giblet is a chunk of code that you've included from a third party.  Michael Howard wrote about them on the SDL blog a while ago (early January), and now news comes out that Google's Android SDK contains giblets with known exploitable vulnerabilities.

I find this vaguely humorous, and a bit troubling.  As I commented in my earlier post (almost 4 years ago), adding a giblet to your product carries with it the responsibility to monitor the security mailing lists to make sure that you're running the most recent (and presumably secure) version of the giblet.
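Monitoring your giblets amounts to keeping an inventory of what you ship and comparing it against the advisories you track. A minimal sketch of that idea in Python, where the component names are real but the "minimum safe" versions are made up for illustration (they are not actual advisory data):

```python
# Hypothetical giblet inventory: component name -> version we ship.
shipped = {"libpng": "1.2.7", "zlib": "1.2.3"}

# Illustrative floor versions gathered from security mailing lists;
# these particular numbers are assumptions, not real advisories.
minimum_safe = {"libpng": "1.2.12", "zlib": "1.2.3"}

def parse(version):
    """Turn a dotted version string into a tuple of ints for comparison."""
    return tuple(int(part) for part in version.split("."))

def stale_giblets(shipped, minimum_safe):
    """Return the components shipping below their minimum safe version."""
    return [name for name, version in shipped.items()
            if name in minimum_safe
            and parse(version) < parse(minimum_safe[name])]

print(stale_giblets(shipped, minimum_safe))  # ['libpng']
```

The point isn't the script; it's that the comparison has to run continuously against a feed of advisories, not once at ship time.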

What I found truly surprising was that the Android development team had shipped code (even in beta) with those vulnerabilities.  Their development team should have known about the problem with giblets and never accepted the vulnerable versions in the first place.  That in turn leads me to wonder about the process management associated with the development of Android.

I fully understand that you need to lock down the components that are contained in your product during the development process; that's why fixes take time to propagate into distributions.  From watching FOSS bugs, the typical lifecycle of a security bug in FOSS code goes like this: a bug is found in the component and fixed quickly, and then, over the next several months, the fix propagates into the various distributions that contain the component.  In other words, the fix itself is made very quickly (but is largely untested), then the teams that package up each distribution consume the fix and test it in the context of their distribution.  As a result, distributions naturally lag behind fixes.  (By the way, MSFT security vulnerabilities follow roughly the same sequence: the fix is usually known within days of the bug being reported, but it takes time to test the fix to ensure that it doesn't break things, especially since Microsoft patches vulnerabilities on multiple platforms and the fix for all of them needs to be released simultaneously.)

But even so, it's surprising that a team would release a beta that contained a version of one of its giblets that was almost 4 years old (according to the original report, it contained libPNG version 1.2.7, from September 12, 2004)!  This is especially true given that the iPhone had a similar vulnerability found last year (ironically, the finder of that vulnerability was Tavis Ormandy of Google).  And I'm not picking on Google out of spite - other vendors like Apple and Microsoft were each bitten by exactly this vulnerability - 3 years ago.  In Apple's case, they did EXACTLY the same thing that the Android team did: they released a phone that contained a 3 year old vulnerability that had previously been fixed in their mainstream operating system.


So how would the SDL have helped the Android team?  The SDL requires that you track giblets in your code - it forces you to have a plan to deal with the inevitable vulnerabilities in the giblets.  In this case, the SDL would have forced the development teams to have a process in place to monitor the vulnerabilities (and of course to track the history of the component), so they hopefully would never have shipped vulnerable components.  It also means that when a vulnerability is found after shipping, they would have a plan in place to roll out a fix ASAP.  The latter is critically important because history has shown us that when one component is known to have a vulnerability, the vultures immediately swoop in to find similar vulnerabilities in related code bases (on the theory that if you make a mistake once, you're likely to make it a second or third time).  In fact, that's another requirement of the SDL: when a vulnerability is found in a component, the SDL requires that you also look for similar vulnerabilities in related code bases.

Yet another example where adopting the SDL would have helped to mitigate a vulnerability[1].


[1] Btw, I'm not saying that the SDL is the only way to solve this problem.  There absolutely are other methodologies that would allow these problems to be mitigated.  But when you're developing software that's going to be deployed connected to a network (any network), you MUST have a solution in place to manage your risk (and giblets are just one form of risk).  The SDL is Microsoft's way, and so far it's clearly shown its value.

Comments (26)
  1. Anonymous says:

    I think that often enough, simply having a formal security process with teeth in it will inevitably reduce security problems.  I get the feeling that lots of development efforts do not think about security until after the fact.  

  2. MS: I agree with you on both points.  MSFT had an extreme wake-up call several years ago, and turned the ship around (which takes a LOT of time given that many of our products are operating on an 18 month ship cycle).

    I’m hoping other developers realize this.

  3. Anonymous says:

    Yep, even the latest Linux distros don’t have such old libraries.

    BTW, MS is not the only one who does a formal security review.

    OpenBSD also does, and also fixes vulnerabilities fast, which would compensate for the risks of the "full disclosure" approach the project uses.

  4. Yuhong: I never said that OpenBSD doesn’t have a formal process.  In fact, I’ve singled them out in the past as being the only *nix distro I know of that seems to "get" the idea of security.

    And I did say that there were other methodologies other than the SDL that work.  I was just pointing out how the SDL would have never permitted this kind of vulnerability to happen.

  5. Anonymous says:


    I agree with most of what you are saying. The logic of using outdated open-source libraries baffles me.

    However, you said:

    >> In Apple’s case, they did EXACTLY the same thing that the Android team did: They released a phone that contained a 3 year old vulnerability that had previously been fixed in their mainstream operating system.

    But i say:

    The phones have not been released yet. The Android software is in early alpha stages and is not in beta yet.

  6. Muthu: It shouldn’t matter if the software is in the early alpha stages or not.  According to the news, they’ve been working on Android for YEARS.  If the Android team had a security management process, they would have noticed that Apple had a vulnerability in the iPhone and flagged it as a potential issue in the Android SDK and resolved it then.

    According to at least one report, the Android team is planning on doing a security review of their product before they ship.  That’s too late, the security reviews should start happening before they wrote the first line of code, and should be ongoing.

    There’s absolutely no excuse for shipping a 3 year old software vulnerability (that is well known to be exploited in the wild) in a product that is intended to be used on the internet.

  7. JamesNT says:

    Now that Google and Apple are getting into the big time like Microsoft is, we are about to see them eat a lot of their words regarding Windows security.


  8. Anonymous says:

    "Now that Google and Apple are getting into the big time like Microsoft is, we are about to see them eat a lot of their words regarding Windows security."

    Yep, Mac malware is becoming more common, which is why Apple added things like ASLR into Leopard.

  9. Yuhong: Yes, they added ASLR, but they didn’t turn it on for most of the "interesting" binaries in the system – including the network-facing binaries (Safari, Rendezvous, iTunes).

    So I wonder how much benefit they get from it.

  10. Anonymous says:

    BTW, if you can read APSL code, there is some ASLR code in dyld, if you are interested in seeing how it works in Mac OS X.

  11. Igor Levicki says:

    Larry, in my opinion, this whole security mess is actually a problem of code "reuse".

    I am saying "reuse" because it is not true reuse.

    True code reuse would be if, for example, Windows had only one instance of the GDI+ DLL instead of Office and Visual Studio each having their own copies, each in need of the same patch.

    Also, true code reuse (on a much larger scale of course) would be if system had only one runtime library — i.e. if there was only one instance of strcpy() in the whole OS distribution and if everything else was linked against it. Then you wouldn’t have to hunt down all those 1,000 strcpy() versions throughout the codebase and fix each one separately — you would just patch one file and be good to go.

    Feel free to replace strcpy() (which is a basic function) from the above example with more complex stuff like JPEG or PNG decode library and hopefully you will get the picture.

    That would have many benefits:

    – It would reduce the number and size of those hotfixes considerably

    – It would reduce the overall system memory footprint

    – It would improve code speed because of better code locality

    – It would free the user from the DLL Hell

    – It would allow developers to focus more on code optimization and other functional improvements instead of wasting time on propagating the same security fixes in thousands of files

    I am inviting you to perform a little test. If you have access to the whole Windows codebase pick a function or a library and try finding out how many times it has been duplicated. Then you will realize the true extent of (the lack of true) code reuse.

  12. Igor: Actually with SxS deployment, the servicing model for GDI+ is pretty clear.  The problem happened when you distribute GDI+ without using SxS.

    Apps that used SxS deployments for GDI+ didn’t have any issues with security fixes, MSFT was able to service them without requiring application involvement.

  13. Igor Levicki says:

    Larry, I mentioned GDI+ specifically because Microsoft at one point released a hotfix which searched the whole system in an attempt to find GdiPlus.dll and replace it with an up-to-date copy.

    I am curious to hear your opinion on code "reuse".

  14. Igor: I know – that hotfix was to find the apps that didn’t use SxS deployment.

    And as for code re-use, I think you believe that developing code for Windows occurs with the same level of complexity as a university project.

    It’s not (this is an understatement).  Windows is (as I understand it) the single most complicated single code base in history.

    There IS code re-use in the windows OS, and there are teams that obsessively look at code for opportunities to reuse code as much as possible.  But it’s not as simple as "pick a function or a library and finding out how many times it has been duplicated".

  15. Igor Levicki says:

    Larry, of course I do not think that Windows is simple. That would be plain stupid and I am not that stupid.

    Unfortunately the real world example says the opposite of what you are saying — for example, Vista takes 10x more disk space than XP and it doesn’t offer 10x performance or 10x more features. That can only mean code bloat which comes from the lack of proper reuse.

  16. Igor Levicki says:

    Oh, and before you say "it is 10x biggger because of all the security checks we have put in" — it is not 10x more secure than XP either 😉

  17. Igor, you’re simply being rude.

    It’s not bigger because of the security checks.  It’s bigger for lots of reasons.  The first is because you don’t know how to add – you need to discard all the files in the WinSxS directory because those files don’t take space on the disk (they’re hard links to the actual files).

    Other reasons are: Larger (higher resolution) bitmaps (media count), larger (higher resolution) icons, more icons, etc.

    It’s also not possible to relate code size with increased functionality.  Again, that’s an extremely simplistic view.

    Are there opportunities to reduce the bloat in the OS?  Of course.  But is Vista 10x as bloated as XP?  Not a chance.

  18. Anonymous says:

    "The problem happened when you distribute GDI+ without using SxS."

    Sure.  Now this kind of problem is going to happen more and more, since Microsoft advises using private assemblies instead of SxS.

    "Apps that used SxS deployments for GDI+ didn’t have any issues with security fixes"

    Sure.  But apps that use SxS can’t even be registered properly, because redirections to newer CRT assemblies conflict with the versions they were compiled against.  Welcome to DLL Hell .Net.  Applications cannot benefit from these security fixes.

  19. Igor Levicki says:

    "It’s also not possible to relate code size with increased functionality"

    I strongly disagree with that but since my previous comment wasn’t published I won’t bother to explain why.

    Whoever believes that code size doesn’t relate to functionality or performance is a bad programmer in my opinion regardless of his formal education and regardless of the size of a project he has been working on so far.

    There are countless examples of that in the software market. One only has to look around and compare.

  20. Anonymous says:

    "Welcome to DLL Hell .Net."

    I once said that SxS can be like Plug and Play. Wonderful when it works, a pain to fix when it doesn’t.

  21. Anonymous says:

    I doubt that all of the kernel enhancements in Vista require Vista to take 10 times the disk space of XP, which is why I suggested in a private email making a trimmed version of Vista. I think it depends on the edition, as the Vista installer copies all the components to the HD.

  22. Yuhong: Do you have evidence that ntoskrnl.exe for Vista is 10x the size of ntoskrnl.exe for XP?  My Vista ntoskrnl.exe is 3.5M, my test machine’s ntoskrnl is 2.5M.  That’s not a 10x difference.

    Vista is bigger than XP.  No question there.  But it’s not 10x bigger – as I mentioned above, you’re using tools that don’t understand hard links so they’re double counting disk space usage (for example, on Vista, the entire winsxs directory is almost entirely hard links, so most naive disk usage apps double count the contents of that directory)
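    The double counting Larry describes falls out of how naive tools work: they add up `st_size` for every directory entry, so a file with five hard links is counted five times. A link-aware tool deduplicates on the (device, inode) pair instead. A minimal sketch in Python:

    ```python
    import os

    def disk_usage(root):
        """Sum file sizes under root, counting each hard-linked file once.

        A naive walk adds st_size for every directory entry, so a file
        reachable through several hard links is counted several times.
        Deduplicating on the (device, inode) pair counts the underlying
        file exactly once, the way the file system actually stores it.
        """
        seen = set()
        total = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                st = os.lstat(os.path.join(dirpath, name))
                key = (st.st_dev, st.st_ino)
                if key not in seen:
                    seen.add(key)
                    total += st.st_size
        return total
    ```

    Run against a directory like winsxs, where nearly every entry is a hard link, this reports a fraction of what a naive per-entry sum does.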

  23. Anonymous says:

    Larry, you need a "Frequently [Un]Asked [but should be asked] Questions" page.  It should include answers to questions such as "What is a kernel, and what does an operating system contain in addition to a kernel?" and "What is a hard link, and why doesn’t it take up space?"

    I think you and Raymond are both taking the wrong approach to Igor Levicki.  The best way to deal with trolls (even trolls who actually know something about the topic) is to ignore them.

  24. Anonymous says:

    "Yuhong: Do you have evidence that ntoskrnl.exe for Vista is 10x the size of ntoskrnl.exe for XP?  My Vista ntoskrnl.exe is 3.5M, my test machine’s ntoskrnl is 2.5M.  That’s not a 10x difference."

    No I don’t and that is my point.

  25. Anonymous says:

    I mean, you are misinterpreting my comment on kernel bloat.

  26. Anonymous says:

    BTW, hard links are the primary reason why Vista has to be installed on an NTFS file system.

Comments are closed.
