Brian Harry wrote a post entitled “Thoughts on TDD” that I thought I was going to let lie, but I find that I need to write a response.
I find myself in agreement with Brian on many points in the post, but I disagree with his conclusion.
Not surprisingly, I agree with the things that he likes about TDD. Focusing on the usage rather than the implementation is really important, and this is important whether you use TDD or not. And YAGNI was a big theme in my “Seven Deadly Sins of Programming” series.
Now, on to what he doesn’t like.
He says that he finds it inefficient to have tests that he has to change every time he refactors.
Here is where we part company.
If you are having to do a lot of test rewriting (say, more than a couple of minutes of work to get back to green) *often* when you refactor your code, I submit that either you are testing things that you don’t need to test (internal implementation details rather than external behavior), your code perhaps isn’t as decoupled as it could be, or maybe you need a visit to refactorers anonymous.
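A minimal sketch of the distinction, in Python – the `Stack` class and both tests are invented for illustration:

```python
# Hypothetical example: a test coupled to internal details vs. one that
# checks only external behavior.

class Stack:
    def __init__(self):
        self._items = []          # internal detail; a refactor might swap this
                                  # for a linked list or a deque

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Brittle: breaks the moment we refactor the internal representation,
# even though the stack still behaves correctly.
def test_push_internal():
    s = Stack()
    s.push(42)
    assert s._items == [42]       # reaches into the implementation

# Robust: survives any refactoring that preserves behavior.
def test_push_then_pop():
    s = Stack()
    s.push(42)
    assert s.pop() == 42          # checks only the public contract
```

If most of your tests look like the second one, refactoring rarely forces test rewrites; if they look like the first, every refactoring does.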
I also like to refactor like crazy, but as we all know, the huge downside of refactoring is that we often break things. Important things. Subtle things. Which makes refactoring risky.
*Unless* we have a set of tests that have great coverage. And TDD (or “Example-based Design”, which I prefer as a term) gives those to us. Now, I don’t know what sort of coverage Brian gets with the unit tests that he writes, but I do know that for the majority of the developers I’ve worked with – and I count myself in that bucket – the coverage of unit tests written afterwards is considerably inferior to the coverage of unit tests that come from TDD.
For me, it all comes down to the answer to the following question:
How do you ensure that your code works now and will continue to work in the future?
I’m willing to put up with a little inefficiency on the front side to get that benefit later. It’s not the writing of the code that’s the expensive part; it’s everything else that comes after.
I don’t think that stepping through test cases in the debugger gets you what you want. You can verify the current behavior, sure, and do it fairly cheaply, but it does nothing for the person who has to change your code in the future and doesn’t know which conditions were important.
The second thing he doesn’t like is backing into an architecture (go read his post to see what he means).
I’ve certainly had to work with code like that before, and it’s a nightmare – the code that nobody wants to touch. But that’s not at all the kind of code you get with TDD, because – if you’re doing it right – you follow the “write a failing test, make it pass, refactor” cycle. You may miss some useful refactorings and generalizations this way, but if you do, you can refactor later, because you have the tests that make it safe to do so – and the same things that make code easy to unit test also make it easy to refactor.
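That cycle is small enough to sketch in a few lines of Python – the leap-year function and its test are hypothetical, chosen only to show the shape of it:

```python
# Hypothetical red/green/refactor walkthrough for a leap-year function.

# Step 1 (red): write the failing test first, describing the behavior
# you want before any implementation exists.
def test_leap_years():
    assert is_leap_year(2000)        # divisible by 400
    assert not is_leap_year(1900)    # divisible by 100 but not 400
    assert is_leap_year(1996)        # divisible by 4
    assert not is_leap_year(1997)    # not divisible by 4

# Step 2 (green): write the simplest code that makes the test pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3 (refactor): with the test green, restructure freely --
# rerunning test_leap_years() tells you immediately if behavior changed.
test_leap_years()
```

The architecture isn’t designed up front; it accretes one tested behavior at a time, and the tests are what keep each accretion safe.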
I also think Brian is missing an important point.
We aren’t all as smart as he is.
I’m reminded a bit of the lesson of Intentional Programming, Charles Simonyi’s paradigm for making programming easier. I played around with Intentional Programming when it was young, and came to the conclusion that it was a pretty good thing if you were as smart as Simonyi is, but it was pretty much a disaster if you were an average developer.
In this case, TDD gives you a way to work your way into a good, flexible, and functional architecture when you don’t have somebody of Brian’s talents to help you out. And that’s a good thing.