We're into the last week of our Security Push effort for Visual Studio Team System. We've reviewed lots of code, found a few bugs (thankfully, only a few scary bugs), verified our capabilities while operating in a hardened environment, and are wrapping up our penetration tests. It's been a different experience, and a fun one.
Given that we took a little bitty break to see Star Wars last week, I can even call it an exciting, almost magical event (Vader's "sorcerer's ways" and all that).
What did I learn?
- We stand on the shoulders of giants. I knew this already, but the Security Push process really brought into sharper relief how much we've built our product on Windows, on the CLR, on Visual Studio. Our security story is easier to tell because Windows, .NET and ASP.NET make a lot of it (never all of it, of course) easy to set up and use - it becomes more of a question of using these correctly, but we have less to invent (and make mistakes while inventing) from scratch.
- Challenge assumptions. It's easy to believe something is secure, or is supposed to operate a certain way. Yet, when you run a test or go look at the code, you suddenly realize that all is NOT as well as you believed (or perhaps wanted to believe). This applies to more than just security, of course; in general, validating the implementation against the spec and the design from time to time is a worthwhile exercise.
- Anonymous Access is a Pain In The A...pplication Programming Interface 🙂 Seriously, when you consider the additional attack surface that even a single unauthenticated/unauthorized method adds to your interface, it makes you want to find ways to close the gap completely. So, part of our effort was to make sure that any public interface requires authentication and at least minimal authorization unless it absolutely must allow anonymous access. This tends to lead into the next thought, which is:
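The "authenticated by default" rule above can be sketched in a few lines. This is a hypothetical illustration, not Team System's actual code: the `allow_anonymous` decorator, `dispatch` gatekeeper, and `WorkItemService` names are all invented for the example. The point is the direction of the default: a method must opt *out* of authentication explicitly, so forgetting to annotate a new method fails closed rather than open.

```python
# Minimal sketch (hypothetical names) of "authenticated by default":
# every public operation requires a caller identity unless it is
# explicitly marked as allowing anonymous access.

class AuthenticationError(Exception):
    pass

def allow_anonymous(func):
    """Explicit opt-out for the rare method that must accept anonymous callers."""
    func.allow_anonymous = True
    return func

def dispatch(service, method_name, caller=None):
    """Gatekeeper: refuse unauthenticated calls unless the method opted out."""
    method = getattr(service, method_name)
    if caller is None and not getattr(method, "allow_anonymous", False):
        raise AuthenticationError(f"{method_name} requires an authenticated caller")
    return method()

class WorkItemService:
    def list_items(self):          # secure by default: no annotation needed
        return ["bug 42"]

    @allow_anonymous               # the one deliberate exception
    def ping(self):
        return "ok"

svc = WorkItemService()
print(dispatch(svc, "ping"))                      # anonymous caller is allowed
print(dispatch(svc, "list_items", caller="kim"))  # authenticated caller is allowed
```

An anonymous call to `list_items` raises `AuthenticationError`, which is exactly the behavior you want when someone adds a new public method and forgets to think about who may call it.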
- You're only as strong as your weakest link. There were several things that drove this point home for me. The system we use to assign severity for threats is summed up with an appropriately-chosen acronym, the DREAD rating (Damage potential, Reproducibility, Exploitability, Affected users, Discoverability). A given threat, with multiple vectors, is rated based on the worst independent vector (there's that weakest link).
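To make the "worst independent vector" idea concrete, here is a small sketch of DREAD-style scoring. Assumptions are mine, not the team's actual tooling: I score each of the five components 1-10, average them per vector (a common convention for DREAD, though teams vary), and rate the threat by its highest-scoring vector.

```python
# Hypothetical DREAD scoring sketch, not the team's actual severity tool.
from statistics import mean

COMPONENTS = ("damage", "repro", "exploit", "affected", "discover")

def dread_score(vector):
    """Average the five DREAD components (each 1-10) for one attack vector."""
    return mean(vector[c] for c in COMPONENTS)

def threat_rating(vectors):
    """A threat is rated by its worst (highest-scoring) independent vector."""
    return max(dread_score(v) for v in vectors)

# Two vectors for the same threat: the hard-to-exploit one scores low,
# but the low-hanging fruit dominates the overall rating.
vectors = [
    {"damage": 8, "repro": 3, "exploit": 2, "affected": 9, "discover": 4},
    {"damage": 5, "repro": 9, "exploit": 8, "affected": 9, "discover": 9},
]
print(threat_rating(vectors))  # 8.0 -- the easy vector sets the severity
```

The `max` is the whole lesson: lowering the average across vectors doesn't help if one easy path remains, which is why the low-hanging fruit had to go first.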
Our general goal is not to make it impossible to attack us, but to make sure that the bar for a successful attack is as high as possible (and preferably, as discoverable and actionable as possible). So, part of the process was making sure that there was no 'low-hanging fruit' that would give an attacker an easy way in. To use the burglary metaphor, adding bars and deadbolts to the front door is kind of silly if the back door is wide open.
- Security is a process, not a step. This is another one I 'knew' already, but this Push helped remind me (and, I think, the rest of the team) that while we put extra effort in sometimes, the security of the product, and the data it safeguards, is something that we have to keep thinking about. Thoughts like "Will fixing this bug introduce any new security implications?" and "Wow, adding that feature added a whole new class of threats to the server" have started becoming more automatic, which was one of the goals (raising awareness). So, the Security Push may be wrapping up soon, but we'll still be thinking about security until long after the boxes are on the shelves. Along the same lines,
- Security is an Arms Race, not a single contest. This is sort of the flip side of the above - we can lose the security battle, but we can't ever really win it. So, while we'll do the best we can to be secure when we ship, we'll also have to continue to adapt against new threats and new classes of threats as we go. And of course, social engineering never gets old...
Overall, I think the Security Push has been a big success. We got as much or more done than I'd originally hoped (in terms of scheduled tasks), and I've seen a turnaround: people who previously seemed to almost blow off security are now thinking about it, and I see tests run (and bugs found) as a result. We're going to ask you to trust your data within Team System's virtual walls soon, and I feel better than ever that those defenses will hold.