The debate over scripted vs. exploratory testing continues (at least among the exploratory testers). A mailing list I'm on has had some discussion on this recently, and it has brought up a few really good points. Rather than respond on the list (from a "non-creative, far too technical, just don't get testing, script kiddy" email address), I thought I'd post my thoughts here for the rest of us.

One discussion started with (more or less) this question: "My boss / client / overlord wants me to do some testing. He has a set of scripted tests, and I want to do exploratory testing. There's only time to do one - which do I do? How do I get my boss / client / overlord to know that ET will be super-awesome?"

The answer to both questions, of course, is yes (or 42). Many years ago, on my first day of work at Microsoft, my boss (who was awful in so many ways) handed me a list of test cases and said, "Here, make sure you run these every day". As an experienced tester, I took that to mean - "if these things ever fail, we're in deep doo-doo, so make sure these work as well as can be expected, but while you're there, do some other testing as well".

I did it my way. After a few days, I got bored, and automated most of the test cases so I could think of other things to test. I randomized some data and varied some of the steps. Of the tests in the original script, I don't think any ever found a bug. But I found dozens (perhaps hundreds) of other bugs. But I was running tests from a script. Kind of?
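The "automate the script, then randomize it" approach above can be sketched roughly like this. The original test cases aren't in the post, so the function under test (`parse_quantity`) and the scripted cases below are hypothetical stand-ins, just to show the shape of the idea:

```python
import random

def parse_quantity(text):
    """Toy system under test: parse a positive integer quantity."""
    value = int(text.strip())
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# The fixed cases from the hypothetical "run these every day" script.
SCRIPTED_CASES = [("1", 1), ("42", 42), ("  7 ", 7)]

def run_scripted():
    """Replay the script exactly as written - the 'deep doo-doo' checks."""
    for text, expected in SCRIPTED_CASES:
        assert parse_quantity(text) == expected

def run_randomized(trials=100, seed=0):
    """Same checks, but with generated data and varied whitespace
    instead of the script's fixed values."""
    rng = random.Random(seed)
    for _ in range(trials):
        n = rng.randint(1, 10**6)
        text = " " * rng.randint(0, 3) + str(n) + " " * rng.randint(0, 3)
        assert parse_quantity(text) == n

run_scripted()
run_randomized()
```

The scripted run still happens every day, but once it's automated the tester's time goes to the randomized run and to thinking up the next variation - which is where the dozens of other bugs tend to turn up.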

The management / client / overlord pushback from the original question would be - "You're telling me you can either run my scripts that (to my knowledge) verify the features our customers expect to work are working, OR, you can play around with the app for a while?" If you put it that way, what do you expect them to say? If test scripts are run with tunnel vision, of course they will suck. If they're used as a guideline to make sure a few things are done during exploration, they can be quite effective. Will there be some situations (ahem - contexts) where pure ET will be superior to any guidance that a script or checklist can provide - you bet. I'm just saying (once again) that it's not a one-or-the-other choice, and that good testers know how to strike this balance (as well as communicate the choices to their management).

Comments (2)

  1. Adam Goucher says:

    In this sort of debate, I like Jon Bach’s ‘Tester Freedom Scale’ (towards the end of …). Sounds like you were told to be at the far left and moved things towards a more center-left position.

    Every person, or perhaps more importantly, every organization sits somewhere on the scale. I’ve sometimes used a variant of it when hiring to determine if the candidate would fit the culture.


  2. Alan says:

    Thanks for the comment Adam – that scale is exactly what I was thinking. I think the best testing happens anywhere BUT on the far ends of the spectrum – that is (if this were a 100 point scale), points 0 and 100.

    When I see arguments like this from people who are too far buried in either camp, they assume the other camp is at point 0 or 100.

    As usual, there’s a better explanation for everything somewhere other than in this blog.
