manual v. automated testing again

In my Future series I was accused of supporting both sides of the manual v. automated debate and flip-flopping like an American politician who can’t decide whether to kiss the babies or their moms. Clearly this is not an either-or proposition. But I want to offer some clarity on how I think about it.

This is a debate about when to choose one over the other and in which scenarios one can expect manual testing to outperform automated testing, and vice versa. I think the simplistic view that automation is better at regression and API testing while manual is better for acceptance and GUI testing diverts us from the real issues.

I think the reality of the problem has nothing to do with APIs or GUIs, regression or functional. We have to start thinking about our code in terms of business logic code or infrastructure code. Because that is the same divide that separates manual and automated testing.

Business logic code is the code that produces the results that stakeholders/users buy the product for. It’s the code that gets the job done. Infrastructure code is the code that makes the business logic work in its intended environment. Infrastructure code makes the business logic multiuser, secure, localized and so forth. It’s the platform goo that makes the business logic into a real application.
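To make the distinction concrete, here is a minimal sketch in Python. All names are invented for illustration: `net_price` stands in for business logic (the rule customers pay for), while `handle_price_request` stands in for the infrastructure goo (security, localization) wrapped around it.

```python
from dataclasses import dataclass

# Business logic: the pricing rule stakeholders buy the product for.
def net_price(list_price: float, discount_rate: float) -> float:
    return round(list_price * (1 - discount_rate), 2)

# Infrastructure: a stand-in for the session machinery of a real app.
@dataclass
class Session:
    authenticated: bool

# Infrastructure: security and localization around the same rule.
def handle_price_request(list_price: float, discount_rate: float,
                         session: Session, currency: str = "$") -> str:
    if not session.authenticated:                      # security concern
        raise PermissionError("login required")
    price = net_price(list_price, discount_rate)       # delegate to business logic
    return f"{currency}{price:.2f}"                    # localization concern
```

Note that the business rule itself is one line; everything else exists only to make that line work in its intended environment.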

Obviously, both types of code need to be tested. Intuitively, manual testing should be better at testing business logic because the business logic rules are easier for a human to learn than they are to teach to a piece of automation. I think intuition is bang-on correct in this situation.

Manual testers excel at becoming domain experts and they can store very complex business logic in the most powerful testing tool around, their brains. Because manual testing is slow, testers have the time to watch for and analyze subtle business logic errors. Low speed but also low drag.

Automation, on the other hand, excels at low-level details. Automation can detect crashes, hangs, incorrect return values, error codes, tripped exceptions, memory usage and so forth. High speed but also high drag. Tuning automation to test business logic is very difficult and risky. In my humble hindsight I think that Vista got bitten by this exact issue: it depended so heavily on automation when a few more good manual testers would have been worth their weight in gold.
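The gap is easy to demonstrate with a hypothetical sketch (all names invented): an automated smoke check happily verifies the low-level details it understands — no crash, sane return type, non-negative value — and still waves through a function whose business rule is subtly wrong.

```python
# Hypothetical business-logic bug: the discount is applied twice,
# so a 10% discount on 100.0 yields 81.0 instead of 90.0.
def apply_discount(price: float, rate: float) -> float:
    return price * (1 - rate) * (1 - rate)

# The kind of check automation excels at: crashes, types, bad values.
def automated_smoke_check(fn) -> str:
    try:
        result = fn(100.0, 0.1)
    except Exception:
        return "FAIL: crash"
    if not isinstance(result, float) or result < 0:
        return "FAIL: bad return value"
    # Passes: the low-level details are fine even though the
    # business rule is wrong. Only a domain expert would notice.
    return "PASS"
```

A manual tester who knows the pricing rules would spot 81.0 on sight; the smoke check, by design, cannot.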

So whether you have an API or a GUI, regress or test fresh, the type of testing you choose depends on what type of bug you want to find. There may be special cases, but the majority of the time manual testing beats automated testing in finding business logic bugs and automated testing beats manual testing in finding infrastructure bugs.

There I go again, fishing in both sides of the pond.

Comments (10)
  1. says:

    It is the nature of certain problems (ie, the interesting ones) to resist simplistic solutions.  I’m far more offended by the trend in US jurisprudence toward mandatory sentencing guidelines — and countless other examples of inappropriate de-judgmentizing — than by your clumsy [:-)] attempts to systematize the desiderata in a complex and rapidly evolving engineering discipline.

    Nil Illigitimi Carborundum, or whatever the Latin-de-queso is.  Keep exploring the gray areas in a nuanced way, please.  We demand "zero tolerance" for comments demanding black-and-white answers!

  2. Pete Schneider says:

    What do you consider to be the business logic of windows?  Some of the software systems that I’ve worked on it was clear what was infrastructure and what was business logic, but when I think about an operating system it’s not so clear to me.

    Could you give me a couple of examples?

  3. says:

    It seems to me that everything in the operating system proper is infrastructure but the administrative interface is full of business logic.

  4. sarbjitarora says:

    I kind of agree with you here.

    Though it is always difficult to decide when to automate, manual testing will always rule when it comes to localization and other such "business logic".

    The only issue is when you fail to separate business logic 🙂

  5. [Nacsa Sándor, January 13 – February 3, 2009]  The subject of quality assurance is hardly known at all

  6. [Nacsa Sándor, February 6, 2009] This Team System edition is for testing web applications and services

  7. rcohn says:

    Domain experts are vital to the success of both manual and automated testing.  We want domain experts, using the benefit of their knowledge, to craft a set of automated tests that can be sensitive to changes in the software behavior.  Being automated, those tests are now imbued with their creators’ domain knowledge and can be run by non-experts.

    By the same token, when domain experts sit down and manually test a system using their intuition, skill and experience, they can find bugs that have not yet been encoded into automated tests.  Once found, that knowledge can be retained for future automated testing.

  8. sarbjitarora says:


    Very well said.

    If we can retain that "expertise" by evolving the automated cases on a regular basis, their "intuition" can be put to better use in non-automated scenarios.

  9. I found it very easy to understand. Thanks for the post.

Comments are closed.
