Automate This!

How much of your testing do you automate? How do you know whether you have automated enough - or too much?

My current team is taking the Automate Everything approach. This means we automate every last test case. "110% automation", as one of our executives liked to say. While certain tests may remain unautomated due to time constraints or extreme technical difficulty, we are working hard to keep that number as low as humanly possible.

Automating Everything has considerable payoffs: every test can be run on every build, ensuring our application does not regress in any way we have encountered thus far. Supporting additional operating systems (oh look - now there's a Windows XP Service Pack 42!) and languages (flash from Marketing: now we are localizing into Lower Elbonian!) requires nothing more than adding another row to the configuration matrix and kicking off another test run. Hot fixes can be tested without subjecting the test team to a fire drill. Pretty much nirvana, right?

Automating Everything has considerable downsides as well: Automated tests are by nature scripted, not exploratory. Even with an automation stack which injects all sorts of variability, the tests wear grooves in those areas of the product they cover and they ignore everything else. When something unexpected happens they are likely to die, and even if they are able to recover they are not able to stop what they were doing and investigate that unexpected happening. And don't forget the maintenance required to keep those tests running - effort that is not helping you find defects in your application. Say, have you had time to actually use your application yet?

On the other extreme is the Automate Nothing approach. Here every test case is executed manually by a person physically using their mouse and keyboard. This has considerable payoffs: every test can be exploratory. The entire surface of the product will likely be covered. When something unexpected happens it is easily followed up. No maintenance is required to keep the test cases up to date with changes in the application. Everybody is always using the application. Pretty much nirvana, right?

Automating Nothing has considerable downsides as well: It is unlikely that every test will be run on every build (unless you only get builds every two weeks - in which case you have my sympathies!), so regressions may not be found until long after they are introduced, if they are found at all. Supporting an additional configuration means either running another full test pass or scoping down your testing and hoping you do not miss anything important - no economy of scale benefits here! Every hot fix requires yet another full test pass. Not to mention that it can be difficult for people to stay Brain Engaged when running a test for the tenth or twentieth or two hundredth time.

I struggle. The benefits of automating are clear to me. So are the downsides. Some tests - or parts of tests - are eminently automatable. Other tests are tedious or boring to do manually. Automated tests lend themselves to spitting out data in pretty graphs, which management generally likes. Session-Based Test Management seems an effective way to leverage testers' exploratory and critical thinking skills - to keep them Brain Engaged - while also giving management the data they require. I wonder however whether it scales to my context.

It is clear to me that Automating Everything is taking things too far. So is Automating Nothing. I have not yet found a balance I like. How about you?

Comments (14)

  1. It seems that the testing world keeps rehashing this problem of how much to automate. I think we’re all focusing on the wrong part of the problem you outlined. The problem is that few companies / teams have enough resources to do both. Because then that would be nirvana.

    If I were CIO or CTO or C..whatever O, I would say that the goal of the "test automation" team is to automate everything. The goal of the exploratory tester team should be to "explore" 100% of the application. And the goal of the quality manager would be to get the most out of both teams' efforts and reduce duplication of work.

    As the skills of the workforce change, and maybe if people become more in tune with how agile development works, we will be able to call the automation team developers and exploratory testers well… testers. Because tester does not mean button pusher; it means someone who performs a test. Once we've done it 500 times it's no longer a test, it's a script.

    It seems this comment is too long and I should have just made a blog post about it 😛 I’ll reserve that right for later.

  2. Prasanna says:

    > If I were CIO or CTO or C..whatever O, I would say that the goal of the "test automation" team is to automate everything. The goal of the exploratory tester team should be to "explore" 100% of the application. And the goal of the quality manager would be to get the most out of both teams' efforts and reduce duplication of work.

    Well, the question now becomes: how do you decide the right mix/ratio of people between the two teams?

  3. Mike Hofer says:

    I tend to agree with Jerrad. I use automated testing here, and love it, but I certainly don’t automate everything. I believe that end-user testing is absolutely essential; there’s nothing like human input.

    In addition, as you pointed out, there is a certain "stupid factor" involved in automated testing: this relentless "Must . . . Continue . . . Testing . . . Despite . . . Failure . . ." mentality to automation. It’s so Terminatoresque. And God help you if the data gets destroyed or corrupted in the process of executing the tests and you lack a decent recovery plan. (You *do* have a recovery plan, don’t you?) And then I always have the scary question…Who’s testing the tests? But that’s a whole different story.

    Also, there are certain types of problems that can *only* be caught by humans. You’ll never catch them with an automated test. "This feature is a pain in the neck to use." "These colors are too dark." "The font’s too small to read." "The help text doesn’t match the interface." You’ll never eliminate those kinds of defects with automation.

    Most importantly to me, however, (and for whatever it’s worth), I kind of think you can reach the point of diminishing returns with automated tests. Is it reasonable to spend the time to write this test? Is it cost-effective? What’s the payoff? Isn’t this something that’s more effectively caught by a human? Sure, I could spend the man-hours to write and debug the tests for something truly trivial, or I could spend the time writing solid code and delivering features. Sometimes, it’s a toss-up.

    Yes, absolutely the tests have value and I would never discard them out of hand. But some tests are *so* trivial that you can take "Automate Everything" to the point of absurdity. At that point, you’ve run the risk of sinking more money and time into maintaining the tests than you have into delivery of the software. And that’s a slippery slope to be sure.

  4. > Well the question now becomes how do you decide what the right mix/ratio of people among both teams should be?

    In an ideal world the development team *is* the test automation team, and the exploratory team is the team of integrated testers.

    Distribution and problems obtaining resources reside in the project manager's domain. We should leave that behind the great Oz curtain and move on 🙂

  5. One other thing I think needs to be fixed about automated testing is that it needs to be easier for lazy developers to do right.

    The IDE increased developer efficiency by many orders of magnitude; when are we going to see the same thing for developer testing and testing in general?

    And seeing as I know I’ve said this somewhere before I’ll just put a link here.

  6. Anutthara says:

    Hmmm….the perennial tester dilemma.

    I think the balance is highly situational and may vary from 20% to 100%. It is a combination of too many factors - is your app UI intensive, do you have a resource/time crunch, are you going to run the tests on multiple platforms, blah blah blah.

    But what I don’t appreciate is when the division passes mandates like X% automation compulsory!! And that too as a blanket rule for all releases – incremental or brand new! Guess you figured what I am saying 😉

  7. Bruce says:

    Who cares? I mean, seriously, who cares what the "correct" ratio of automated to manual test cases should be? How is that even remotely relevant?

    The only thing that should matter is:

    – What is our current testing strategy catching and, more importantly, not catching? Is that important to us? Do we need to change?

    – Are we getting the biggest bang for our testing buck?

    If you are satisfied with the answer to those questions, then you’re doing well and, if your testing happens to be 100% automated then who cares?

    I’d LOVE to have enough bodies to do every test manually and have people doing nothing but kicking the tires in weird and wonderful ways, but that’s not my reality. Nor is it the reality for a goodly chunk of software development firms in the world. Automation, besides allowing me to do things like run my regression FAR more often than I used to be able to do, saves me time. Time I can use to do those things I can’t automate, like exploratory testing.

    I’ve seen this argument drag on for far too long and, frankly, it doesn’t always reflect well on the people involved.

  8. Frank Cohen says:

    Interesting discussion, thanks for the post.

    Seems to me that you’re missing "governance" when talking about test automation. In the SOA context, governance adds a registry to announce that a service exists and a set of policies to control its use. Imagine a policy that says "You must run a test script to check that this service is correctly configured before using this service." The governance tool manages the policies, stores the functional test scripts, issues a call to run the test, and saves the results. Take a look at governance tools (BEA ALER, WebMethods X-Registry, Iona Repository, etc.) to get a better idea of how to mitigate the "test everywhere" mentality into a "test by policy" method.

    -Frank Cohen

  9. Ben Simo says:

    Automation can be a great tool when applied wisely.  I believe the trouble comes when we try to automate manual testing.  Any automation that requires more work than manual testing to do less than a manual tester is of little value.  

    I say let the thinking testers do what they do best and let the computers do what they do best. Automation needs to be applied as a tool to assist testers, not replace them. There are some things that a computer can do faster (and without complaint) than a human tester. These are the things we should seek to automate.

    The automate everything crowd often fails to value skilled testers.  This is often the same crowd that thinks we can easily outsource all testing (whether onshore or offshore).

  10. Ben Simo says:

    One more flaw in the "automate everything" or "automate 100%" approach is the assumption that we can "test everything" or "test 100%". Testing is a potentially infinite process. This requires that we decide what is most important to test. And we need to revisit this question from time to time. What we decided was most important 4 releases ago may no longer be the most important. When asked to automate 100%, I ask "100% of what?" That usually doesn’t go over very well. 🙂

    Placing our faith in automation can give us a false sense of security. People begin to believe that a passed automated test means a good quality product. Then they are surprised when a major flaw gets past the automated tests. This becomes an even bigger problem when someone has told management that we have "100% of our tests automated."

  11. Skeptical says:

    How about someone who believes that a passed manual test means a good quality product?

    I’m sorry, but this type of argument simply leads me to believe that anti-automation arguments are primarily driven by testers who fear replacement.

    Have you really met someone who would blindly go and automate every test, even manual tests, regardless of the time required and the payback received? I never have.

    Have you really met someone who would blindly say because our tests are automated we have a good quality product? I never have.

    Have you really met someone who, when they say they try to automate everything, really believes this means they’ve tested everything? I never have.

    The amusing thing is: every one of those objections could be reworded so they referred to manual testing instead.

    It’s not about manual vs automated people! Automated testing is a means to an end, which is to allow us to run regression more often. This is something that manual tests DO have difficulty doing. It’s not that automated tests are better than manual tests, but they do provide something that manual tests can’t: continuous feedback.

    If you really meet someone, someday, who believes that automation in itself guarantees quality, then don’t worry. Adding manual testing isn’t going to help them anyway.

  12. John Jimmy says:

    Great thread guys!

    Bruce –  Are we getting the biggest bang for our testing buck?

    I completely agree, coz that's what testers should do every day - deliver the best, whether automated or manual! You said it all in one question… Automated or manual, the aim of testing is to deliver the best by whatever means!

    Ben Simo – "Automation needs to be applied as a tool to assist testers, not replace them"

    Unfortunately most whatever-O’s don’t realize this, I guess!!

    Skeptical – "It’s not about manual vs automated people!" ..

    Most people (or newbies like me) get confused reading stuff out there (or by the promises of automated testing tools) that misleads us to take one side or the other. But unless you really know the app that you are testing, you will not know what's the right mix for you! That's what I realized.

    I tried to sum up whatever little knowledge I could get out of this thread. No new revelations!

    Thank you all, this is a very good thread!

  13. I heard this from Michael Bolton - Last year Mahantesh Ashok Pattan from India made the audience of the QAI conference in Delhi go silent for a while by saying "My team has been successful in achieving 100% test automation" and then surprised the crowd by saying, "What I mean by that is, we achieved 100% of whatever we wanted to automate".

    Michael Bolton, who was presenting next, was also impressed by this and introduced him to me later.

    Today, Mahantesh works for Microsoft in Hyderabad, building and heading a testing team.

    Setting a mission that is achievable based on the team’s skills might be as important as automating something.

    James Bach’s post "Manual Tests Cannot be Automated" is insightful enough to help you become situationally aware when answering people who demand automating everything.

  14. Praveen Chakravarthy says:

    IF ( Test script development takes more time AND it is NOT regression )

    THEN no automation

    IF ( Test script development takes more time AND it IS regression )

    THEN automate

    IF ( Test script development takes less time AND it is NOT regression )

    THEN automate
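
    The three rules above boil down to a single decision: skip automation only when the script is expensive to develop AND the test is not a regression test. A minimal sketch in Python (the names `development_time_high` and `is_regression` are my own; the fourth combination, cheap to develop and regression, is left unstated by the rules and falls through to "automate"):

    ```python
    def should_automate(development_time_high: bool, is_regression: bool) -> bool:
        """Decide whether to automate a test case, per the rules above.

        The only combination that rules out automation is an expensive
        script for a non-regression test; everything else gets automated.
        """
        if development_time_high and not is_regression:
            return False
        return True

    print(should_automate(True, False))   # expensive, not regression -> False
    print(should_automate(True, True))    # expensive, regression -> True
    print(should_automate(False, False))  # cheap, not regression -> True
    ```

    Whether the unstated fourth case should really default to "automate" is, of course, exactly the kind of context-dependent judgment the whole thread is about.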
