Automate This!

How much of your testing do you automate? How do you know whether you have automated enough - or too much?

My current team is taking the Automate Everything approach. This means we automate every last test case. "110% automation", as one of our executives liked to say. While certain tests may remain unautomated due to time constraints or extreme technical difficulty, we are working hard to keep that number as low as humanly possible.

Automating Everything has considerable payoffs: every test can be run on every build, ensuring our application does not regress in any way we have encountered thus far. Supporting additional operating systems (oh look - now there's a Windows XP Service Pack 42!) and languages (flash from Marketing: now we are localizing into Lower Elbonian!) requires nothing more than adding another row to the configuration matrix and kicking off another test run. Hot fixes can be tested without subjecting the test team to a fire drill. Pretty much nirvana, right?
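To make the configuration-matrix point concrete, here is a minimal sketch of what "adding another row" can look like, assuming a pytest-style suite; the platforms, locales, and the greeting_for stand-in are illustrative, not our actual harness.

```python
import pytest

# Hypothetical configuration matrix: each row is one (platform, locale)
# combination the suite runs against. Supporting a new service pack or a
# new localization means adding one more row, not writing new tests.
CONFIG_MATRIX = [
    ("winxp-sp3", "en-US"),
    ("winxp-sp42", "en-US"),    # the new service pack: one added row
    ("winxp-sp3", "elbonian"),  # Lower Elbonian: one added row
]

def greeting_for(locale):
    # Stand-in for the behaviour under test; a real run would drive the
    # application itself through the automation stack on that platform.
    translations = {"en-US": "Hello", "elbonian": "Ahoj"}
    return translations.get(locale, "")

@pytest.mark.parametrize("platform,locale", CONFIG_MATRIX)
def test_greeting_is_localized(platform, locale):
    # Every existing check now runs once per row in the matrix.
    assert greeting_for(locale) != ""
```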

Automating Everything has considerable downsides as well: Automated tests are by nature scripted, not exploratory. Even with an automation stack that injects all sorts of variability, the tests wear grooves in the areas of the product they cover and ignore everything else. When something unexpected happens they are likely to die, and even if they can recover they cannot stop what they were doing to investigate that unexpected happening. And don't forget the maintenance required to keep those tests running - effort which is not helping you find defects in your application. Say, have you had time to actually use your application yet?
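By "injects all sorts of variability" I mean something like the following sketch - randomized test data with a logged seed so failures stay reproducible; the document name and font size are illustrative, not what our stack actually varies:

```python
import random
import time

# One way an automation stack "injects variability": randomize the test data
# but log the seed, so any failure the variation uncovers can be replayed.
seed = int(time.time())
rng = random.Random(seed)
print(f"test data seed: {seed}")

document_name = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz")
                        for _ in range(rng.randint(1, 32)))
font_size = rng.choice([4, 8, 12, 72, 500])

# The script still walks the same groove - only the data changes - which is
# exactly why it will not notice, let alone investigate, the odd dialog that
# pops up somewhere else in the product.
print(f"exercising save with name={document_name!r}, font size={font_size}")
```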

On the other extreme is the Automate Nothing approach. Here every test case is executed manually by a person physically using their mouse and keyboard. This has considerable payoffs: every test can be exploratory. The entire surface of the product will likely be covered. When something unexpected happens it is easily followed up. No maintenance is required to keep the test cases up to date with changes in the application. Everybody is always using the application. Pretty much nirvana, right?

Automating Nothing has considerable downsides as well: It is unlikely that every test will be run on every build (unless you only get builds every two weeks - in which case you have my sympathies!), so regressions may not be found until long after they are introduced, if they are found at all. Supporting an additional configuration means either running another full test pass or scoping down your testing and hoping you do not miss anything important - no economy of scale benefits here! Every hot fix requires yet another full test pass. Not to mention that it can be difficult for people to stay Brain Engaged when running a test for the tenth or twentieth or two hundredth time.

I struggle. The benefits of automating are clear to me. So are the downsides. Some tests - or parts of tests - are eminently automatable. Other tests are tedious or boring to do manually. Automated tests lend themselves to spitting out data in pretty graphs, which management generally likes. Session-Based Test Management seems an effective way to leverage testers' exploratory and critical thinking skills - to keep them Brain Engaged - while also giving management the data they require. I wonder, however, whether it scales to my context.
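Part of the appeal is that exploratory sessions still produce countable data. Here is a minimal sketch of rolling up session sheets into the kind of numbers management likes, assuming the usual charter / duration / test-bug-setup breakdown; the field names and figures are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Session:
    charter: str    # the mission for this exploratory session
    minutes: int    # total session length
    pct_test: int   # % of time on test design and execution
    pct_bug: int    # % of time investigating and reporting bugs
    pct_setup: int  # % of time on setup and interruptions
    bugs: int       # bugs reported during the session

# Illustrative sessions, not real data.
sessions = [
    Session("Explore import of malformed files", 90, 60, 30, 10, 3),
    Session("Explore printing on the new locale builds", 60, 75, 10, 15, 1),
]

total_minutes = sum(s.minutes for s in sessions)
total_bugs = sum(s.bugs for s in sessions)
avg_test_pct = sum(s.pct_test for s in sessions) / len(sessions)
print(f"{len(sessions)} sessions, {total_minutes} minutes, {total_bugs} bugs, "
      f"{avg_test_pct:.0f}% of time on test design and execution")
```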

It is clear to me that Automating Everything is taking things too far. So is Automating Nothing. I have not yet found a balance I like. How about you?