Test Developers Shouldn't Execute Tests

This view puts me outside the mainstream of the testing community, but I feel strongly that test teams need to be divided into those who write the tests (test developers) and those who execute them (testers).  Let me be clear up front.  I don't mean that test developers should *never* execute their tests.  Rather, I mean that they should not be responsible for executing them on a regular basis.  It is more efficient for a test team to create a wall between these two responsibilities than to try to merge them.  There are several reasons why two roles are better than one.

If you require that test developers spend more than, say, 15% of their time installing machines, executing tests, and so on, you'll find a whole class of people who don't want to be test developers.  Right or wrong, there is a perception that development is "cooler" than test development.  Too much non-programming work feeds this perception and increases the likelihood that people will run off to development.  This becomes a problem because in many realms, we've already solved the easy test development problems.  If your developers are writing unit tests, even more of the easy work is already done.  What remains is automating the hard things.  How do you ensure that the right audio was played?  How do you know the right alpha blending took place?  How do you simulate the interaction with hardware or networked devices?  Are there better ways to measure performance and pinpoint what is causing the issues?  These and many more are the problems facing test development today.  It takes a lot of intellectual horsepower and training to solve these problems.  If you can't hire and retain the best talent, you'll be unable to accomplish your goals.
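To make the flavor of these problems concrete, here is a minimal sketch of what an automated check for one of them (the alpha blending question) might look like.  The function names, the captured pixel, and the tolerance are all hypothetical, and a real harness would capture pixels from the product under test rather than take them as arguments.

```python
# Hypothetical sketch: verify that source-over alpha blending took place.

def expected_blend(src, dst, alpha):
    """Standard source-over blend of two RGB pixels (0-255 channels)."""
    return tuple(round(alpha * s + (1 - alpha) * d) for s, d in zip(src, dst))

def assert_blended(captured, src, dst, alpha, tolerance=2):
    """Fail if the captured pixel is not within `tolerance` of the expected blend.

    The tolerance absorbs rounding differences between renderers; its
    value here is an assumption, not a spec.
    """
    expected = expected_blend(src, dst, alpha)
    for channel, (got, want) in enumerate(zip(captured, expected)):
        if abs(got - want) > tolerance:
            raise AssertionError(
                f"channel {channel}: got {got}, expected {want} "
                f"(blend of {src} over {dst} at alpha={alpha})")

# A 50% blend of pure red over pure blue should land near (128, 0, 128).
assert_blended(captured=(127, 0, 128), src=(255, 0, 0), dst=(0, 0, 255), alpha=0.5)
```

Even this toy version hints at the judgment calls involved (how much rounding error to tolerate, how to capture pixels reliably), which is exactly the kind of work that demands dedicated programming time.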

Even if you could get the right people on the team and keep them happy, it would be a bad idea to conflate the two roles.  Writing code requires long periods of uninterrupted concentration.  It takes a long while to load everything into one's head, build a mental map of the problem space, and envision a solution.  Once all of that is done, it takes still more time to turn that solution into code.  If there are interruptions in the middle to reproduce a bug, run a smoke test, or execute tests, the process is reset.  The mind has to page out the mental map.  When a programmer is interrupted for 10 minutes, he doesn't lose 10 minutes of work.  He loses an hour of productivity (numbers may vary but you get the picture).  The more uninterrupted time a programmer has, the more he will be able to accomplish.  Executing tests is usually an interrupt-driven activity and so is intrinsically at odds with the conditions programming requires.

Next is the problem of time.  Most items can be tested manually in much less time than it takes to automate the test.  If someone is not sandboxed to write automation, there is pressure to get the testing done quickly and thus to run the tests by hand.  The more time spent executing tests, the less time spent writing automation, which means more time must be spent running manual tests, which leaves even less time for automation...  It's a tailspin which can be hard to get out of.

Finally, there is specialization.  The more specialized people become, the more efficient they are at their tasks.  Asking someone to do many disparate things inevitably means they will be less efficient at any one of them.  Jack of all trades.  Master of none.  This is the world we create when we ask the same individuals both to write tests and to execute them repeatedly.  They are granted neither the time to become proficient programmers nor the time to become efficient testers.

The objections to this system are usually two-fold.  First, this system creates ivory towers for test developers.  No.  It creates a different job description.  Test developers are not "too good" to run tests; it is just more efficient for the organization if they do not.  When circumstances demand it, they should still be called upon for short periods of time to stop coding and get their hands dirty.  Second, this system allows test developers to become disconnected from the state of the product.  That is a legitimate concern.  The mitigation is to make sure that they still have a stake in what is going on.  Have them triage the automated test failures.  Ensure that there are regular meetings and communications between the test developers and the testers.  Encourage the testers to drive test development and not the other way around.

I know there are a lot of test managers in the world who disagree with what I'm advocating here.  I'd love to hear what you have to say.