Just because you're paranoid...

...doesn't mean they're not out to get you -- more on that in a minute.

First, I apologize to the zero or more regular readers of my blog - it's been quite some time since I posted anything. That hasn't been for lack of things to talk or ask about; it's more an issue of budgeting time to blog about them. I'll do better.

Having said that, back to paranoia. One area of testing I focus on that is not "feature-specific" is security, and I haven't blogged about that so far, so now seems like a good time. If you think "security" is a separate feature, let me know - I don't view it as such, so that might make a good separate topic later on.

As you're probably all aware, the software landscape has changed a lot over time. Security was once an afterthought at best; if you disagree, ask yourself why telnet and FTP, two fundamental protocols upon which the internet was built, are (usually) authenticated yet transmit their authentication data in cleartext. It clearly wasn't critically important - YET - that such data be protected any better than by not echoing your password to the screen, and perhaps storing passwords in encrypted files.
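To make the cleartext point concrete, here's a minimal sketch (in Python, against the placeholder host ftp.example.com, with made-up credentials) of what actually crosses the wire during an FTP login. Anyone sniffing that network segment sees the password verbatim:

```python
import socket

# FTP's control channel (RFC 959) is plain text on TCP port 21.
# "ftp.example.com", "alice", and "s3cret" are placeholders.
s = socket.create_connection(("ftp.example.com", 21))
print(s.recv(1024).decode())       # server greeting, e.g. "220 ..."
s.sendall(b"USER alice\r\n")       # the username crosses the wire in the clear
print(s.recv(1024).decode())       # e.g. "331 Password required"
s.sendall(b"PASS s3cret\r\n")      # ...and so does the password
print(s.recv(1024).decode())       # e.g. "230 Login successful"
s.close()
```

A packet capture of that session shows USER and PASS byte-for-byte - the "afterthought" era in a nutshell.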

Things have changed -- firewalled network connections are rapidly becoming the rule rather than the exception. Spam has, for many users, become the majority of their incoming mail rather than a tiny fraction (more on why spam is a security issue in a moment). Any host connected to the public internet is a target for malicious attacks - if there was ever a time when only certain hosts or networks were worth attacking, it has long since passed.

<aside>

Spam is a security issue for several reasons -- many spam messages carry viruses/trojans/worms/spyware in the message body or its attachments; others attempt to trick the recipient into revealing private data (e.g. credit card numbers, information usable for identity theft, or credentials for their banking or eBay accounts). So spam is at least as much of a security threat as any worm or door-to-door con artist.

</aside>

Security is of particular interest to me for several reasons. First, Microsoft software is automatically a target for intrusion and/or compromise. The reasons for that 'fact' are many, complex, and (some of them, at least) debatable, but I'm going to state it as a fact for now. Second, Hatteras, as a source code control and repository system, is by nature a target for attacks. Whether a given company or group writes software for sale, for their internal productivity, or even to give away, that software is valuable and must be protected from theft, sabotage, espionage, or destruction (or any combination thereof). Finally, experience has shown us time and again that it is easier to build in security from the start than it is to improve a product's security after the fact. Given that we're working on the initial release of Hatteras and the rest of Visual Studio Team System, it's very important that we think about, plan for, and build a system that's as secure as possible the first time around, or we and our customers will pay the price down the road. Mangle it badly enough, and you won't have to worry about the 'customers' part of that, because there won't BE any.

I suspect security, as it applies to software in general, and to VSTS and Hatteras specifically, will be a recurring topic for me over the next few months. Given that, I'd like to start with a question to gauge interest and help me focus on topics that will be interesting and useful:

When it comes to source control, what does "security" mean to you?

I hope there will be a couple of "flavors" of response, but don't let these stop or limit you: types/vectors of attacks, likelihood of attacks, detection and mitigation of attacks, and last (but not least), the cost of security.

I'll end (for now) by answering my own question and asking a few specific follow-up questions/requests.

To me, security for source control mainly means ensuring that the source control system is reliable and trustworthy to its legitimate users. This means that only authenticated, authorized users can read and/or alter the state and contents of the repository. It means that malicious attempts to access, alter, hinder, or destroy the repository are detected, reported, and thwarted. It also means that the source control system provides these capabilities without undue additional cost in hardware, software, or staff.
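As a rough illustration of that first requirement, here's a minimal sketch (all names hypothetical - this is the general idea, not Hatteras code) of the kind of gate a source control server might put in front of every repository operation: authenticate first, authorize against an access list second, and log every denial so attacks are detected and reported rather than silently absorbed:

```python
# All names here are hypothetical -- a sketch of the concept, not Hatteras code.

ACL = {
    "alice": {"read", "write"},   # per-user permissions on the repository
    "bob":   {"read"},
}

def log_and_alert(message):
    # A real system would write an audit log and raise intrusion alerts;
    # printing stands in for that here.
    print("SECURITY:", message)

def check_access(user, authenticated, operation):
    """Allow an operation only for an authenticated user whose ACL grants it."""
    if not authenticated:
        log_and_alert(f"unauthenticated {operation!r} attempt as {user!r}")
        return False
    if operation not in ACL.get(user, set()):
        log_and_alert(f"unauthorized {operation!r} attempt by {user!r}")
        return False
    return True

# Usage: bob may read the repository but not write to it.
assert check_access("bob", authenticated=True, operation="read")
assert not check_access("bob", authenticated=True, operation="write")
```

The logging calls are the point of the second requirement above: a denied request isn't just refused, it leaves a trail someone can act on.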

As far as answering my question yourself goes, consider the following:

  1. Has your source control system ever been attacked?
  2. How did you find out?
  3. Was the threat external or internal?
  4. Was the threat technical, social, both, or something else?
  5. What did the attack attempt to accomplish? (code theft, code injection, deletion, etc.)
  6. Was the attack successful?
  7. How did you respond to the attack?
  8. What was the cost? This might be time, money, lost data - however you measure the "cost" to your company or team.
  9. If the attack was successful, or unsuccessful but disruptive, what could have reduced the severity of the damage/disruption, or prevented it entirely?

As I hinted before, this is a complex and often lively subject, even if we apply a fairly narrow filter by framing it in the context of source control. I'm interested in it, and interested in the experiences and feedback of "y'all" for a fairly simple reason - the more we know about what "security" means to you, the more likely we'll be able to deliver a product that meets or exceeds your expectations.

I hope most of you can answer "no" to question 1, in which case the rest of the questions become pretty much academic. But I know it won't be "no" for everyone, and I suspect such attacks will increase in the future.