Just As, Not Less Or More

Kate comments on my previous blog post Why I'm An SDE/T:

Writing code to break others' code is what fascinates me.  Hope I can get this kind of job in the near future.

Now comes the question: When you write a tool to test others' code, what makes you trust the code you wrote?  Since you're the ultimate tester [Ed:  Why thank you!  <g/>] (not your current role as a lead), how could you ensure your tool works fine?  Could you please shed some light on this?  Thanks a lot!

I get asked this question a lot.  I recently -- with the more-than-able assistance of Mario (*still* no blog) -- presented on just this topic to a group of testers from around my division.  The answer is simple:  treat Test code just like production code.  There's no difference between the two, really; one is shipped to external customers and the other is used by internal customers, is all.

It's amazing, really, but I see this happen over and over again:  take the best testers in the world and set them to writing code, and they promptly forget everything they know about testing.

Code is code is code.  When you write code, you have to use a process.  I'm personally a fan of Agile Development, but use whatever works for you as long as it includes collaborating with your customer to discover what they need, spending time thinking about the design, and spending time reviewing the code with others.

I insist on specifications for everything my team does.  The form varies with the size of the project -- anything from a few paragraphs to a full-on spec doc -- but I want something written down that describes what is being proposed.  The "written down" part is important as it helps focus the discussion.

We unit test everything we write.  If you haven't unit tested your code, you don't really know what it's doing.  I don't care where the code comes from or who wrote it -- if it hasn't been unit tested, I don't trust it.
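
To make that concrete, here's a minimal sketch -- in Python, purely for illustration, with a log format and helper function invented for the example.  A test tool that parses result logs is itself code, so it gets unit tests covering both the input it should accept and the garbage it must reject:

    import unittest

    def parse_result_line(line):
        """Parse a "TEST <name>: PASS|FAIL" line from a hypothetical
        test-tool log into a (name, passed) tuple."""
        prefix = "TEST "
        if not line.startswith(prefix) or ": " not in line:
            raise ValueError("malformed result line: %r" % line)
        name, verdict = line[len(prefix):].split(": ", 1)
        if verdict not in ("PASS", "FAIL"):
            raise ValueError("unknown verdict: %r" % verdict)
        return name, verdict == "PASS"

    class ParseResultLineTests(unittest.TestCase):
        def test_pass_line_is_parsed(self):
            self.assertEqual(parse_result_line("TEST CopyFile: PASS"),
                             ("CopyFile", True))

        def test_fail_line_is_parsed(self):
            self.assertEqual(parse_result_line("TEST CopyFile: FAIL"),
                             ("CopyFile", False))

        def test_malformed_line_is_rejected(self):
            with self.assertRaises(ValueError):
                parse_result_line("random log noise")

    if __name__ == "__main__":
        unittest.main()

Running the module executes the tests, so the tool's parser gets checked every time -- exactly the same discipline we expect of production code.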

All code must be reviewed by at least one other person before it can be checked into source control.  (User folders are of course not subject to this restriction.)  Key code must also go through a formal group code review.  Reviewers tend to be other testers, but more and more we're asking our Devs to review as well.  And then there's "MichaelCop":  all code checked in is subject to review by me and to the MichaelCop tasks that may follow.

The most important part of the process, though, is reviews.  We review every spec, every design, every bit of code, everyone's general approach to testing a particular feature, and every test case we write.  Reviews are the ultimate win-win situation:  the code being reviewed becomes much better, the code's author learns tips and tricks and what not to do next time, the reviewers have a golden opportunity to pass on some of their hard-won knowledge and experience, and the team as a whole grows stronger.  I would give up unit testing before I gave up reviews.

The burning question, always, is where the testing ends.  Dev writes production code.  We write test cases and tools to test that production code.  We write tests to test the tools that test the production code.  We could test the tests that test the tools that test the production code, and test those tests, and test those tests, ad absurdum, but that would just be silly.  Reviews chop the head off this recursive beast.  Diligent reviews of unit tests and other test cases are generally sufficient to ensure their quality, and hence the quality of the code they test.  If you're testing life-critical code then you want to go further, but for most of us thoroughly reviewed tests of thoroughly reviewed code are sufficient.
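
Here's one turn of that recursive crank, sketched in the same invented-for-illustration Python: the comparison oracle below stands in for part of a hypothetical test tool, and the unit tests after it are the tests that test the tool that tests the product.  Past this level, reviews take over.

    import unittest

    def results_match(expected, actual, ignore_keys=()):
        """Oracle from a hypothetical test tool: compare two result
        dictionaries, skipping keys the caller declares irrelevant."""
        keys = (set(expected) | set(actual)) - set(ignore_keys)
        return all(expected.get(k) == actual.get(k) for k in keys)

    class ResultsMatchTests(unittest.TestCase):
        """Tests for the tool that tests the product -- one level of
        recursion, after which reviews carry the load."""

        def test_identical_results_match(self):
            self.assertTrue(results_match({"rc": 0}, {"rc": 0}))

        def test_differing_results_do_not_match(self):
            self.assertFalse(results_match({"rc": 0}, {"rc": 1}))

        def test_declared_irrelevant_keys_are_skipped(self):
            self.assertTrue(results_match({"rc": 0, "ms": 5},
                                          {"rc": 0, "ms": 9},
                                          ignore_keys=("ms",)))

    if __name__ == "__main__":
        unittest.main()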

In the end, it's all code that's just as important as any other code.  Treat it as such.

 

*** Comments, questions, feedback?  Want a fun job on a great team?  Send two coding samples and an explanation of why you chose them, and of course your resume, to me at michhu at microsoft dot com.  I need a senior tester and my team needs a dev lead and program managers.  Great coding skills required for all positions.