I'm at the BUILD conference in Anaheim this week, and I was in the platform booth when a customer asked me a question I'd never been asked before: "How do you get started with test-driven development?" My answer was simply: "Just start. It doesn't matter how much existing code you already have; just start writing tests alongside your new code. Get a good unit test framework like the one in Visual Studio, but it really doesn't matter which framework you use. Just start writing the tests."
This morning, I realized I ought to elaborate on my answer a bit.
I'm a huge fan of Test Driven Development. Of all the "eXtreme Programming" methodologies, TDD is by far the one that makes the most sense to me. I started using TDD back during Windows 7 development. I had read about TDD over the years and was intrigued by the concept, but, like the customer, I didn't really know where to start. My previous project had extensive unit tests, but we didn't follow any particular methodology when developing them. When it came time to develop a new subsystem for the audio stack in Windows 7 (the feature that eventually became the "capture monitor/listen to" feature), I decided to apply TDD while developing the feature, just to see how well it worked. The results far exceeded my expectations.
To be fair, I don't follow the classic TDD paradigm where you write the tests first, then write the code to make sure the tests pass. Instead I write the tests at the same time I'm writing the code. Sometimes I write the tests before the code, sometimes the code before the tests, but they're really written at the same time.
In my case, I was fortunate because the capture monitor was a fairly separate piece of the audio stack - it's essentially bolted onto the core audio engine. That meant I could develop it as a stand-alone system. To ensure that the capture monitor could be tested in isolation, I developed it as a library with a set of clean APIs, and the interface with the audio engine went exclusively through those APIs. By minimizing the exposed API surface of the capture monitor, I restricted the public surface I needed to test.
But I still needed to test the internal bits. The good news is that because it was a library, it was easy to add test hooks that could drive deep into the capture monitor implementation. I simply made my test classes friends of the implementation classes, so the test code could call the protected members of the various capture monitor classes. This let me build test cases that could simulate internal state changes, which in turn allowed far more thorough tests.
I was really happy with how well the test development went, but the real proof of TDD's benefits showed when the feature was deployed as part of the product.
During the development of Windows 7, there were extremely few (maybe a half dozen?) bugs found in the capture monitor that weren't first found by my unit tests. And because I had such an extensive library of tests, I was able to add regression test cases for those externally found bugs.
I've since moved on from the audio team, but I'm still using TDD - I'm currently responsible for two tools in the Windows build system/SDK and both of them have been developed with TDD. One of them (the IDL compiler used by Windows developers for creating Windows 8 APIs) couldn't be developed using the same methodology as I used for the capture monitor, but the other (mdmerge, the metadata composition tool) was. Both have been successful - while there have been more bugs found externally in both the IDL compiler and mdmerge than were found in the capture monitor, the regression rate on both tools has been extremely low thanks to the unit tests.
As I said at the beginning, I'm a huge fan of TDD - while there's some upfront cost associated with creating unit tests as you write the code, it absolutely pays off in the long run with higher initial quality and a dramatically lower bug rate.