Let’s Go Bust Some Silos!

Who plans the tests for your product? Who writes them? Who executes them?

On some product teams, developers write product code. That's it. If you're lucky, they even compile the code they write. Actually *launching* the application - let alone determining whether things work the way they should - these developers see as Someone Else's Responsibility.

On many product teams, developers write product code and then they put that code through whatever paces they think necessary in order to be reasonably certain that it does what they think it should do. Generally these paces are much less comprehensive than their testers would like.

On a few product teams, developers write product code and they also execute bunches of tests. Many of these tests are likely automated unit tests. Some of these tests are manual exploratory sessions. Regardless of what form any particular test takes, these developers aim to eliminate every last could-find-it-simply-by-following-a-checklist, don't-want-to-waste-my-tester's-time-finding-this defect.

I've heard about the first type of developer; I've never actually seen one. (Thank goodness!) I've worked with many developers of the second type. I haven't yet found a developer of the third type, although I've worked with a few who come close.

On some product teams, testers test. That's it. If you're lucky, they ask a buddy to review their tests before they execute them. Their tests are based on specifications which bear little resemblance to the actual product. Their days are largely spent complaining about their developers, who, although rarely seen, are "obviously complete idiots since the daily builds are always junk of the highest order!"

On many product teams, testers design their tests long before there is any code to run them against. They review their tests with other testers and also with their developers. Once they start executing their tests, they find that some of the tests are no longer relevant, other tests require rework, and multitudes of new tests are necessary.

On a few product teams, testers spend time with their developers building a model of how the code works. They plan classes of tests and areas of focus rather than delineating multitudes of individual test cases. They work in a tight spin loop of plan-execute-review, continuously feeding what they learned during the current loop into the next one. These testers look for checklist bugs as part of their larger focus on integration and system-level defects. Many of their tests are likely automated. Many others are likely manual exploratory sessions. Regardless of what form any particular test takes, these testers aim to find the most important issues first.

I've known testers of the first type. Much of my experience has been with testers of the second type. I know a few testers of the third type; they are incredibly effective and much in demand.

I characterize the first type of developers and testers as Doers. They are constantly Doing and always seem busy. Their efficacy, however, is not nearly so high as their busyness might seem to indicate.

I characterize the second type of developers and testers as Thinkers. They have discovered that whatever time they spend thinking will be more than paid back by greater efficiency and efficacy once they move on to doing. Unless of course they never make the transition and instead become mired in Analysis Paralysis!

I characterize the third type of developers and testers as Learners. They spend lots of time thinking, and they spend lots of time doing. They want to always be learning. The moment they stop learning - information about the product, information about writing or testing code, information about working with their counterparts in other disciplines - they stop and make adjustments before continuing. Losing their focus on learning information that adds value to their team and product is the main bugaboo for which they must keep watch.

One habit all of these types share is a tendency to think in silos. Developers write product code, and possibly some quantity of tests. Testers write tests, possibly test tools, and never product code. Have you ever considered whether another arrangement might work better?

What would happen if your feature team sat down together and planned all of the work for the milestone: the product code that needs to be written and the test missions that need to be executed? And then you divvied out the work however makes sense? Maybe you have a tester who can write GUI code, a task all of your developers despise. Maybe some of your tests could easily be automated at the unit level. Maybe some of your unit tests require specialized knowledge which one of your testers happens to have.

What would happen if we stopped putting people in silos and instead thought of our feature teams as groups of people, each of whom has a set of skills? One person knows a lot about writing code which is highly scalable. Another person enjoys writing user interface glue code. Another person designs award-winning GUIs. Another person is expert at testing security. Another person is highly skilled at finding every case the developer forgot to handle. Maybe this is all the same person. Maybe it's five people. Maybe it's fifty.

This is chock-full of unknowns, I know. I'm not saying any of this would actually work. I'm asking you to consider it, think about it. If you try it - in full or just one part - please let me know how it goes!

Comments (4)

  1. Chris says:

    This is a great post and something that I’ve proposed at my company. We are a small team where silos make even less sense than on bigger teams.  So far it hasn’t had any traction. People have their comfort zones. The devs have started to write more tests, but only in certain parts of the release cycle when they feel they have time.

    I would love to hear of others who have made this happen.

  2. Mike Hofer says:

    Going against the grain is scary but sometimes vital in our industry. It takes courage, and always involves risk. But calculated risk isn’t always a bad thing.

    I’m not a fan of blindly adhering to "established best practices," "the Next Big Methodology (TM)," or established team models. Just because something is a best practice at Acme Corporation doesn’t mean it’s a best practice at Real World Inc. The business model is different, the staffing patterns are different, the problem domains are different, *everything* is different. You have to be willing, at some point, to accept the idea that a best practice for *them* isn’t necessarily a best practice for *you.*

    Be flexible. Do something different. Experiment. Find out what actually works, and makes you more effective. Make the leap. At the end of it all, if you find out that it didn’t work, you’ll at least come out of the experiment knowing something that you didn’t know before. And the acquisition of knowledge is never a wasted effort.

  3. Jim Bullock says:

    FWIW I do this, and sometimes undo this all the time. There are four things going on when you unsilo development:

    – Letting people contribute in line with their skills & preferences and the needs of the work. Why can’t someone called a tester who also programs, program sometimes?

    – Owning different concerns. It is hard to both do the work and assess the same work at the same time even if you are skilled in both the doing and the assessing. This is why there is a devil’s advocate in considering Roman Catholic saints. Somebody owns the concern: "Maybe we’re being fooled here."

    – Consistency and roll-ups. It is really useful to have a consistent meaning to what "code done" means or "a test" means and so on. We struggle with this even with dedicated, single-function resources. Note I said "function" not "skill." That’s related to owning concerns and having consistency. We do different things in making software. Cooking the dinner isn’t cleaning up the pots afterward although both are required. Doesn’t matter who’s doing the washing up, we’d like the same meaning for "clean."

    – When you "silo people" you solve one problem that comes from some of the people themselves. Some folks seem unable to work with people who aren’t exactly like them in what they do, how they think, how they speak, and so on.

    I’ve seen this last thing more with developers (programmers) but it’s everywhere including QA and test people. The problem is you can’t only picky & precise & definite your way to a successful product. You need to get all speculative and castles in air sometimes. To have a product, you also have to drag those castles into reality and poke at them in ways you didn’t first imagine. Both.

    "Silos" is one way to keep the dogs and cats from fighting like, well, dogs and cats. I’d prefer having only people who can appreciate others different from themselves. That’s a skill and a discipline, however. Not everyone has it. Not everyone is interested in having it.

    FWIW, Microsoft has in the past done "feature teams" and being Microsoft claimed credit for this bold and brazen "innovation" that others had been doing for a couple decades at least at that point. Look at a famous talk given by the guy who ran the early Visual C++ team that recently got up on the web. Eric Sink is his usual eloquent self about the need for people with multiple ways to contribute, and the perspective to do so. He calls them "developers" vs. "programmers."

    As a practical matter for Chris, you might want to listen to any concerns about what you have proposed in terms of owning the separate concerns, and having consistent meanings. That’s usually the problem. The silos are there to make sure that each concern gets owned, and to keep what we mean consistent. If you can address those two things with a blended team, it’s easier to blend the team. Those concerns are legitimate as well. Maybe even necessary for success.

    The other thing you might find is that what’s going on in your organization isn’t about doing the work but is about who’s in charge, specifically who gets to tell whom what to do. That’s a separate problem. I have been told by folks who have reported to me, both individual contributors and managers, that my take on this can be – um – interesting. The point is to move the boat. Who does what to move the boat is a means to an end, not an end. If you are all doing your individual "what’s" and the boat ain’t moving, figure out another solution and tell me. If I have to figure out that things aren’t working, then figure out a plan to fix it, well, I might be paying you too much.

    Tactically and pragmatically, have people always report what’s going on with two measures: their contribution in role, and their progress on the shared goal. This at least gets the whole problem on their minds. Then you can see what they do with it. So, "I did 87 tests on this stuff as we shipped three new, tested and acceptable features." Or "I wrote 87 classes that actually compile as we shipped three new, tested and acceptable features." Or "I developed 17 annotated use cases which we used to ship three matching, new, tested and acceptable features."

    There’s another potential problem here – amateur opinions – but that’s for another time.

  4. alanpa says:

    What’s more important, creating features, or testing features? Different people may give you different answers, and the answer you would probably get if you asked lies somewhere around "it depends". The answer you *want* is "neither – it’s quality that’s important".

    Sorry – it’s Saturday and I don’t know if that makes sense. The shorter and more direct version of my comment is that a non-silo’d approach can work as long as everyone *gets* what quality is. In fact, this sort of org, more than any other, would benefit from a dedicated QA person ("real" QA – not testers that we call QA). Such a person, reporting in parallel to the engineering manager, could do a lot to make sure the right things were being done by the entire team.

    Time to get more coffee and stop babbling.
