Securing Existing Code

I just read Michael Howard's post about differentiating secure features, security features, and security response, and wanted to offer some counterpoints.

Overall, I'm in strong agreement with what he has to say – just because we're still shipping bulletins and updates doesn't mean we're not making forward progress. For one thing, those of you on the outside can't see how many incoming issues were caught by our current practices. What none of us has much visibility into, except anecdotally, is how many times someone external finds something and just abandons it because it only repros on older versions. A bulletin pointing out a bug that got fixed five years ago doesn't have much marketing value. Until we achieve perfection, I think you'll continue to see bulletins. The argument that the SDL isn't useful because it hasn't made us perfect is a straw man.

I also have to stress that these things change over time. The efforts we made to secure Office 2003 were substantial, and we had very few bulletins for the first two years. Some of the work Windows delivered in Vista was first delivered in Office 2003. The Office bar for Prefast (known internally to Office as OACR) was a lot higher than the initial SDL requirements. We annotated functions with SAL. We removed older, unsafe CRT calls, and the work we did to show that the regression rate was very low helped Windows decide they could do it, too. Unfortunately, as we've seen, all of this hard work didn't help against attack methods that were in their infancy when Office 2003 shipped.

In the past, I've pointed out that the SDL was first released just after Office 2003 shipped, and that Office 2003 met or (sometimes greatly) exceeded the SDL standards at the time it shipped. We've also shipped a lot of bulletins – no glossing over that fact. This doesn't mean that the SDL isn't useful – like any other attempt to solve a large problem, it only solves part of the problem. It's good to solve part of a problem, as it frees resources to go solve the rest of it.

Now here's where I'm going to argue with Michael – he said:

"Such releases [service packs] cannot get the full benefit of the SDL, because security is not just about bug fixes, it is a holistic property that goes beyond fixing implementation vulnerabilities to encompass sound design and defense in depth.

"Ultimately, this means that newer Microsoft code is more secure than the older Microsoft code, and that is the trend we're seeing across the board. Don't expect to see a marked drop in the vulnerability count in older code. You won't see it, because we can't dramatically improve the security of an already released product."

While we certainly can't make significant design changes in a service pack, we can tighten existing implementations, and we can engage in defense in depth. If we can disable a format by default in the version we just released, maybe we can disable it in a service pack. This isn't without pitfalls – customers who need that functionality might have some issues, which we'd hopefully minimize. I'd also remind my friend that the SDL isn't just about design-level issues – several aspects address implementation issues, and these can be addressed at any time. If we have a better fuzzer a year after we release, we should use that fuzzer against all the versions where we have a service pack available as a ship vehicle and get more solid code to customers. That's exactly what we've done with Office 2003 SP3 and Office 2007 SP1.

It's also true that solid code is solid code. If it's done really right in the first place, it's going to hold up over time. It's been a while since I last poked around the guts of the Windows security system, but there are some bits of it that haven't changed in over 15 years. It was done right, and since we didn't need to change how it worked, there was no need to touch it. That's the real key – well-engineered code written to exacting standards tends not to have security flaws, and while many aspects of the SDL help with this, you can't mandate this any more than you can legislate morality. It's something that has to be part of the individual's and the group's culture.

Maybe it's just phrasing, but I don't agree that newer code has to be more secure than older code. To use an extreme example, tossing out the Excel recalc engine would be folly. While we do sometimes need to throw out the old and rewrite new, tidy code, that's also a great way to cause regressions (like my last post's topic). What's often better is to find ways to bring existing code up to current quality standards, using current techniques. This is a lot of the design intent of things like SafeInt – you _don't_ throw out all your old code; you just use a library that non-intrusively removes exploitable integer overflows without substantially disrupting existing code. Now if what Michael _meant_ was that "older code" is something we compiled 5+ years ago to, say, 2002 standards, using the compiler we had then, and that newer code is something compiled recently and tested to current standards (no matter when it was originally written), then we agree.

However, I think we can dramatically improve the security of an already released product, and that's exactly what we just delivered in SP3. In fact, I think we're going to have to learn to continue to do this. For example, Office 2007 is going to be in mainstream support until 2012, and we'll have a large customer base using it for some time after that. I'm not going to pretend that I can forecast what the attackers are going to be up to in 2012, but I do know we need to protect customers, and to do that, we'll have to be able to deliver substantial improvements over the lifecycle of the product.

Comments (3)

  1. David has an interesting counterpoint post to my SDL post this morning. As expected he makes some valid

  2. asteingruebl says:

    Can you comment on what percentage of defects you all are finding are implementation vs. design defects?

    It's pretty clear that older code that doesn't have buffer overflows isn't going to all of a sudden have one. At the same time, older "well-written" code is more likely to have a design flaw, or be subject to a new class of attack, than newer code designed to mitigate said attack.

    When you find those design issues, they can be especially tricky to fix, especially if the flaw is part of an externally facing API/interface rather than just an internal one.

    I'm not asking for hard numbers, just curious anecdotally whether you can comment on the rate of occurrence.

  3. Two splendid posts on security, the SDL, patches, and the product lifecycle
