"This is great! VSTS will automatically generate unit tests for my class." ...
"I wrote one unit test for each public interface of my class. Do I need any more?" ...
"Oops! I forgot to write a unit test before I checked my code in" ...
Eric Gunnerson recently wrote about scrumbut, the phenomenon currently manifesting itself in teams everywhere (including Microsoft), whereby the Scrum methodology is modified to such a degree that failure is inevitable. Brad Wilson posted a few months earlier about scrummerfall: the specific combination of Scrum and Waterfall that ensures failure at a much faster rate than you had with Waterfall alone. Giona Moran's list of "why you might be a CrAgilist" no doubt struck a few nerves. There is a growing awareness that teams fail (or flounder) when they adopt a methodology without understanding how and why it will help them succeed.
I'm encountering this same problem with teams adopting unit tests as part of the "we're agile now" philosophy. Through some anomaly I've worked on two different v1 teams this year, so what I'm saying is skewed towards non-legacy code; "legacy" meaning anything that wasn't developed using TDD.
As much as I enjoy reading articles on CodeProject, this one entitled Stop Designing For Testability really bugs me. The logic presented within falls into the same ballpark as the quotes at the beginning of this article: that tests are an "accompaniment" to your code and should be treated as such. What's really odd is that in the first paragraph the author links to Peter Provost's excellent article on TDD in .NET, then promptly misstates the purpose of unit tests in the very next line and paragraph.
I'm going to put forth my own maxim on unit tests:
Unit tests written outside the bounds of TDD are only marginally more useful than not having unit tests at all.
Why? Because they provide a false sense of security to anyone looking at the code those tests cover. Since the accepted definition of "unit test" (a "program written to test classes ... [by] sending a fixed message and verifying the return as a predicted answer") doesn't describe what happens within the realm of TDD, we need some new names. Luckily, Brad solved this problem back in August by proposing the following:
Example Driven Design
...because the T and 2nd D in TDD are both wrong
The problem with using the term "unit tests" in the context of TDD is that TDD isn't about testing, and "unit tests" aren't QA-style tests themselves. Rather they are a way to build, or design, your class one step at a time. Upon completion, the test fixture is documentation on how your class functions, with examples for each step of the way. And if that wasn't enough, you also end up with pretty good code coverage.
If you read Jim Newkirk's book on TDD, or this example of designing a Stack, you can see that the unit tests are really examples depicting how the class works. The initial test, verifying the stack is empty upon creation, is an example of what the user can expect when they instantiate a new stack. The second test is an example of the stack's isEmpty state after an item has been pushed onto it, and so on and so forth. I'll say it again: unit tests in the context of TDD are really examples.
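To make the idea concrete, here's a minimal sketch of that Stack exercise. The original discussion is .NET, but the idea translates to any language; the Python `Stack` class and the names `is_empty`, `push`, and `pop` here are my own illustrative choices, not taken from Newkirk's example.

```python
# An illustrative sketch (not Newkirk's actual .NET code) of examples
# driving a Stack's design, one behavior at a time.

class Stack:
    def __init__(self):
        self._items = []

    def is_empty(self):
        return len(self._items) == 0

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Each "test" below is really an example of one behavior, written first
# to drive out the design of the class.

def stack_is_empty_upon_creation():
    assert Stack().is_empty()

def stack_is_not_empty_after_pushing_one_element():
    s = Stack()
    s.push(42)
    assert not s.is_empty()

stack_is_empty_upon_creation()
stack_is_not_empty_after_pushing_one_element()
```

Read top to bottom, the example functions double as documentation: each one shows a caller exactly what to expect at each step of the class's life.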
One final point about examples: without proper names they become less useful. I think that following a style of testY() leads down a slippery slope, because there is no specific relation between the example name and what the example is demonstrating. Rather than testEmpty(), why not use names like StackIsEmptyUponCreation() or StackIsEmptyAfterPushingAndPoppingOneElement()? Just as there is no longer a reason to stick to 8.3 filenames, neither is there one for using non-descriptive method names.
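Here's what that naming contrast might look like in practice, sketched with Python's stdlib `unittest` (the article's context is .NET test fixtures; this fixture, and the plain list standing in for a Stack, are my own hypothetical stand-ins).

```python
import unittest

# Hypothetical fixture contrasting non-descriptive names (test_empty)
# with names that state the behavior being demonstrated.
class StackExamples(unittest.TestCase):
    # Not: def test_empty(self): ...
    def test_stack_is_empty_upon_creation(self):
        stack = []  # a plain list stands in for a Stack here
        self.assertEqual(len(stack), 0)

    # Not: def test_push_pop(self): ...
    def test_stack_is_empty_after_pushing_and_popping_one_element(self):
        stack = []
        stack.append("item")
        stack.pop()
        self.assertEqual(len(stack), 0)

if __name__ == "__main__":
    unittest.main(exit=False)
```

When a run reports a failure, a name like `test_stack_is_empty_after_pushing_and_popping_one_element` tells you which documented behavior broke without opening the code; `test_push_pop` tells you almost nothing.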
Let's forget about 'unit tests' with their role as add-ons or afterthoughts, and stick to letting examples help us design our code.