Let's Go Bust Some Silos!

Who plans the tests for your product? Who writes them? Who executes them?

On some product teams, developers write product code. That's it. If you're lucky, they even compile the code they write. Actually *launching* the application - let alone determining whether things work the way they should - is, to these developers, Someone Else's Responsibility.

On many product teams, developers write product code and then they put that code through whatever paces they think necessary in order to be reasonably certain that it does what they think it should do. Generally these paces are much less comprehensive than their testers would like.

On a few product teams, developers write product code and they also execute bunches of tests. Many of these tests are likely automated unit tests. Some of these tests are manual exploratory sessions. Regardless of what form any particular test takes, these developers aim to eliminate every last could-find-it-simply-by-following-a-checklist, don't-want-to-waste-my-tester's-time-finding-this defect.

I've heard about the first type of developer; I've never actually seen one. (Thank goodness!) I've worked with many developers of the second type. I haven't yet found a developer of the third type, although I've worked with a few who come close.

On some product teams, testers test. That's it. If you're lucky, they ask a buddy to review their tests before they execute them. Their tests are based on specifications that bear little resemblance to the actual product. Their days are largely spent complaining about their developers, who, although rarely seen, are "obviously complete idiots since the daily builds are always junk of the highest order!"

On many product teams, testers design their tests long before there is any code to run them against. They review their tests with other testers and also with their developers. Once they start executing their tests, they find that some of the tests are no longer relevant, other tests require rework, and multitudes of new tests are necessary.

On a few product teams, testers spend time with their developers building a model of how the code works. They plan classes of tests and areas of focus rather than delineating multitudes of individual test cases. They work in a tight plan-execute-review loop, continuously feeding what they learned during the current loop into the next one. These testers look for checklist bugs as part of their larger focus on integration and system-level defects. Many of their tests are likely automated. Many others are likely manual exploratory sessions. Regardless of what form any particular test takes, these testers aim to find the most important issues first.

I've known testers of the first type. Much of my experience has been with testers of the second type. I know a few testers of the third type; they are incredibly effective and much in demand.

I characterize the first type of developers and testers as Doers. They are constantly Doing and always seem busy. Their efficacy, however, is not nearly so high as their busyness might seem to indicate.

I characterize the second type of developers and testers as Thinkers. They have discovered that whatever time they spend thinking will be more than paid back by greater efficiency and efficacy once they move on to doing. Unless, of course, they never make the transition and instead become mired in Analysis Paralysis!

I characterize the third type of developers and testers as Learners. They spend lots of time thinking, and they spend lots of time doing. They want to always be learning. The moment they stop learning - about the product, about writing or testing code, about working with their counterparts in other disciplines - they pause and make adjustments before continuing. Losing their focus on learning information that adds value to their team and product is the main bugaboo they must watch out for.

One habit all of these types share is a tendency to think in silos. Developers write product code, and possibly some quantity of tests. Testers write tests, possibly test tools, and never product code. Have you ever considered whether another arrangement might work better?

What would happen if your feature team sat down together and planned all of the work for the milestone: the product code that needs to be written and the test missions that need to be executed? And then divvied out the work however makes sense? Maybe you have a tester who can write GUI code, a task all of your developers despise. Maybe some of your tests could easily be automated at the unit level. Maybe some of your unit tests require specialized knowledge that one of your testers happens to have.

What would happen if we stopped putting people in silos and instead thought of our feature teams as groups of people, each with a particular set of skills? One person knows a lot about writing highly scalable code. Another person enjoys writing user interface glue code. Another person designs award-winning GUIs. Another person is an expert at testing security. Another person is highly skilled at finding every case the developer forgot to handle. Maybe this is all the same person. Maybe it's five people. Maybe it's fifty.

This is chock-full of unknowns, I know. I'm not saying any of this would actually work. I'm asking you to consider it, think about it. If you try it - in full or just one part - please let me know how it goes!