The Programmer Tester


James Bach gave a talk at Microsoft yesterday. Among his many points, he reiterated a few he’s been making for years that got me thinking…so I thought I’d continue that thinking in this blog post.

James is concerned that Microsoft is hiring too many programmers to be testers. His view is that the “programmer testers” at Microsoft spend their entire day writing tools and test automation. This isn’t an uncommon view, so I can’t fault James for holding it. I don’t feel the need to justify the practice (too much) in this post, but I have written about this topic in the past.

The short version is that, for the majority of what we do, knowledge of CS is critical. We expect our testers to debug, diagnose, and analyze problems they run into. We expect them to recognize patterns of bugs, and to have insight into how the computer may be using the data they input.

My experience is that only a small number of testers with non-computer backgrounds figure out how to do this (there are definite exceptions – I’m a music major!). There are a lot of folks who are good at finding surface-level bugs, but they rarely recognize patterns in the bugs they’re finding. I’ve seen a lot more CS folks who are fantastic “testing with the brain” testers – unfazed by inattentional blindness – who can leverage both their brains and their coding skills.

On the other hand, James is right to some extent. I bet we have a number of “programmer testers” who are doing exactly what James thinks they are doing. They’re spending 50 hours a week creating tools and writing automation. Usually their automation isn’t very good, so they spend a lot of time “fixing” their code and adding “clever” workarounds. These folks are the exception and not the rule. I don’t like them either, but this isn’t how the majority of the testers at Microsoft are spending their day. (I’ve sort of talked about this before too).

In the end, like many other efforts, balance is the key. The best testers I’ve ever met have this balance. They know when to leverage their knowledge of computer science, and they know when to let their brain do the heavy lifting. They know what their limitations are and (if necessary) how to compensate for them. I think if James were to talk to more testers at Microsoft he’d understand that “testers who code” aren’t the problem – rather, where a problem exists, it’s a lack of balance (keeping in mind that a great number of teams and individuals have this balance already).

Btw – James, if you’re reading this: this is the 4th or 5th time I’ve heard you say that Jon couldn’t get a job at MS now because he doesn’t code. I’m pretty sure we paid him for his work on the Acceptance Testing guide :}


Comments (2)

  1. Shrini says:

    >>> The short version is that, for the majority of what we do, knowledge of CS is critical. We expect our testers to debug, diagnose, and analyze problems they run into. We expect them to recognize patterns of bugs, and to have insight into how the computer may be using the data they input.

    Can all of this be equated to "ability to code"? A non-coder could still do all this without having to write code to investigate problems uncovered during testing. I think there is confusion between the ability to code and the ability to understand/debug/investigate technical problems.

    Like James, what I find interesting is the way some people at Microsoft (I am not generalizing here) and some people in places like Google, when asked about testing, straightaway start talking about classes, mock objects, testable code, design patterns and so on… instead of models, heuristics, exploration, systems thinking, analysis, and a holistic view of how software components work in a real environment. These folks are so obsessed with code that when it comes to testing, they only code and nothing else.

    You talked about balance… that is important… but somehow in the public face of Microsoft testers (bloggers and conference speakers mainly), this balance appears to be largely missing.

    That is what James might be trying to articulate… as you might agree, there is so much to test other than code, automation, and frameworks…

    Shrini

  2. alanpa says:

    >Can all of this be equated to "ability to code"?

    No – I deliberately didn’t say that. Sure, you probably need to be able to write automation and tools, but the main skill a CS background gives you is an understanding of the system and the logic used to put it together.

    When speaking of the public face, you are generalizing. There are many testers outside of MS (although few as vocal as James) who understand the approach of MS testers. I’d be surprised if I picked a random MS tester, asked them about testing, and they mentioned classes, mocks, and patterns before going into testing concepts.

    They might, however, go into a discussion about how they are working on automatically deploying a set of customer-based network topologies on a single Hyper-V enabled server so they can then run an Exchange Server load test while monitoring for memory leaks, heap corruption, and performance issues (e.g. message throughput over time). The same scenario could be reused to automatically verify the effects of network latency (by programmatically slowing the network) on network throughput – something like the sketch below.

    etc.
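    (To make that concrete: what follows is a hypothetical sketch, not anyone’s actual harness. Every function name in it is a placeholder standing in for real Hyper-V, Exchange, and performance-counter tooling – none of these are actual APIs.)

    ```python
    import time

    # Placeholders only -- each stands in for real Hyper-V / Exchange tooling.
    def deploy_topology(host: str, name: str) -> list[str]:
        """Import and boot the VMs for a saved customer network topology."""
        raise NotImplementedError

    def set_network_latency(vms: list[str], ms: int) -> None:
        """Programmatically slow the virtual network."""
        raise NotImplementedError

    def start_load(vms: list[str], profile: str) -> None:
        """Kick off the mail load generator against the topology."""
        raise NotImplementedError

    def read_counter(vms: list[str], counter: str) -> float:
        """Sample a performance counter across the deployed VMs."""
        raise NotImplementedError

    def run_scenario(host: str, topology: str, hours: float, latency_ms: int = 0):
        """Deploy, load, and sample health counters over the duration of the run."""
        vms = deploy_topology(host, topology)
        if latency_ms:
            # The same deployment is reused for the latency variation of the test.
            set_network_latency(vms, latency_ms)
        start_load(vms, profile="heavy-mail")
        samples = []
        deadline = time.time() + hours * 3600
        while time.time() < deadline:
            samples.append({
                "msgs_per_sec": read_counter(vms, "messages delivered/sec"),
                # Steadily growing private bytes over the run hints at a leak.
                "private_bytes": read_counter(vms, "store private bytes"),
            })
            time.sleep(60)
        return samples
    ```

    The code is the easy part – the testing is in knowing what to deploy, what to measure, and what the numbers mean.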

    Without the ability to think through a necessary test case such as this (purely off the top of my head – I’m sure there are better examples), leverage the appropriate tools, and automate what is necessary, this testing becomes impossible – or at least impossible within a reasonable amount of time. Would you explore, use heuristics, or apply analysis here? Absolutely. I don’t really get why you think this situation wouldn’t use all of your favorite buzzwords.