Exploring Fallacies


I have been having various conversations of late about exploratory testing and Session-Based Test Management. Interesting conversations. Conversations I never expected to have, in some cases.

One tester thought exploratory testing was nifty. He was also concerned that it would mean he would never code again. “I spent all this time learning to program, and now that I am exploratory testing I don’t get to do the coding I love to do?”

I am finding this to be a common misconception: that exploratory testing, or manual testing, means that nothing other than a mouse and keyboard ever comes between a tester and the product they are testing. This is patently false. An effective tester will use every tool at their disposal. If my test mission is to boundary-test every input widget in my application, spending half an hour to write a small automated script that blasts thousands of data values into every widget may well be a better use of my time – and find more bugs faster – than doing the same task manually. Especially if I watch it as it runs and so can notice unexpected side effects. (And if by some chance my testing cannot benefit at all from any tool, almost certainly some part of the rest of my job can!)
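To make that concrete, here is a minimal sketch of the sort of throwaway script I have in mind. Everything in it is invented for illustration – the endpoint, the field names, the value list – and the real mechanism for driving the widgets would depend on the application under test.

    # A quick-and-dirty boundary blaster. All names here are hypothetical:
    # the endpoint, the field names, and the value list would come from the
    # actual application and the current test mission.
    import requests

    BOUNDARY_VALUES = [
        "", " ", "0", "-1", "2147483647", "2147483648",
        "a" * 255, "a" * 256, "a" * 65536,
        "'; DROP TABLE users; --", "<script>alert(1)</script>",
        "\u0000", "日本語",
    ]

    FIELDS = ["username", "zip_code", "quantity"]  # hypothetical widget names

    for field in FIELDS:
        for value in BOUNDARY_VALUES:
            response = requests.post(
                "http://localhost:8080/submit",  # hypothetical form endpoint
                data={field: value},
                timeout=5,
            )
            # Flag anything suspicious; watching the run live catches even more.
            if response.status_code >= 500:
                print(f"{field!r} = {value!r} -> HTTP {response.status_code}")

The script does the typing, but the tester's brain is still doing the noticing.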

Another tester said he did not see how exploratory testing and SBTM could work for functional testing.

I goggled at him a bit. SBTM seems to me custom-built for functional testing as its primary goal is to provide a view onto what testing has been done and what testing is still left to do. I explained this in detail and suggested he give it a whirl for several weeks and see how it goes. If it turns out to not work for him, I reminded him, he could always go back to writing scripted test cases.

A test lead and I had a similar discussion. He was embarking on the potentially challenging task of introducing exploratory testing into his management-gets-warm-fuzzies-from-knowing-we-run-oodles-of-scripted-test-cases environment. He was curious how to ensure coverage of requirements and use cases and such when using exploratory testing.

I explained how I base my test mission on the functional areas. For example, a web application might have missions such as:

  • Login – Functionality
  • Login – Security
  • Login – Accessibility
  • Login – Globalization and Localization

I generally also have one or more test missions for use cases and user scenarios, where the number of missions depends on how many use cases/scenarios there are, how complicated they are, and into what sorts of logical groupings they fall. Similarly, I cover risks by having an explicit test mission for each risk, including in the test mission description any specific concrete issues I want to be sure are covered.
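If it helps to see the shape of that, here is a rough sketch of how the missions end up being little more than a cross product of functional areas and quality criteria, plus one explicit mission per risk. The areas, criteria, and risks below are invented for illustration, not taken from any real project.

    # Illustrative only: real areas, criteria, and risks come from the product.
    areas = ["Login", "Search", "Checkout"]
    criteria = ["Functionality", "Security", "Accessibility",
                "Globalization and Localization"]
    risks = {
        "Payment gateway timeout": "verify orders are not double-charged on retry",
        "Session fixation": "confirm session IDs rotate on login",
    }

    # One mission per area/criterion pair, plus one explicit mission per risk.
    missions = [f"{area} – {criterion}" for area in areas for criterion in criteria]
    missions += [f"Risk: {name} – {detail}" for name, detail in risks.items()]

    for mission in missions:
        print(mission)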

This test lead was planning to have his more experienced testers do exploratory testing for functional areas and system-level scenarios while his less experienced testers executed scripted test cases for areas of high risk.

Well. This seemed backwards to me, and I said so. If ever there is a case where I want my testers using their brains, and where I want to apply my most experienced testers, it is certainly high-risk areas! If a specific problem occurs which a) must never ever occur again, and b) for which the particular steps matter, I will generally create a new test mission along the lines of “Ensure these specific steps no longer cause this problem, and then look for other ways this or a similar problem could occur”. This way I know that those particular steps work, and I also get a tester’s brain looking for similar issues.

If scripted test cases were necessary in order to make my management happy, I would work with them to understand what risks they believe will be mitigated and/or what information they wish to be provided, develop a small set of scripts to cover those risks and provide that information, and then create a test mission for each script along the lines of “Run this script verbatim, and then use this script as a jumping-off point for the remainder of the test session”.

I tell you all this for several reasons. One is to provide you some thoughts to chew on, if you are pondering similar questions. Another is to provide you some insight into my current thinking, so that you can identify areas where you think I am bonkers and engage me in a conversation where you tell me so and why and we discuss from there. Another is so I can remember what I said the next time somebody asks me a similar question. <g/>


*** Want a fun job on a great team? I need a tester! Interested? Let’s talk: Michael dot J dot Hunter at microsoft dot com. Great testing and coding skills required.


Comments (9)

  1. Anaamica says:

    "I am finding this to be a common misconception, that exploratory testing, or manual testing, means nothing other than a mouse and keyboard comes between a tester and the product they are testing. This is patently false. An effective tester will use every tool at their disposal. If my test mission is to boundary test every input widget in my application, spending half an hour to write a small automated script which blasts thousands of data values into every widget may well be a better use of my time – and find more bugs faster – than doing the same task manually."

    As soon as you say write a script, it is no longer (strictly) manual testing. You are coding, maybe in a scripting language rather than a compiled language, but it’s still coding. This is completely different from manually clicking on controls and verifying the dialog box content. Manual testers may use tools to record and play back, but these tools are still naive. What we need is an intelligent tool which can change, even revolutionize, the way manual testing is done.

  2. Roger Foden says:

    I wonder if scripts are popular (in some quarters) *because* you don’t have to use your brain when executing them!

    When you design a script, don’t you have a mission in mind for the script? Surely you don’t just write a random, unrelated set of cases into a script? Perhaps the mission part of the script design is usually implicit, unstated, because the script embodies it. So when in SBTM there isn’t a script, the mission (statement) looks novel – but it’s really been there all the time.

    Is exploratory testing a way of empowering testers? Instead of saying ‘script those cases then execute them’, exploratory testing says ‘use all the tools at your disposal to give me some information about the product, especially: does it work?’. I.e. ‘use your brain’, don’t just follow a pattern!

    If musical notation is ‘like’ a test script, then executing a script is like playing the notes as written out for you; whereas exploratory testing is playing the music, without the notation. Testers who understand their art are like concert pianists.

  3. TestingGeek says:

    Excellent. I have always had problems selling the idea of exploratory testing to people who are used to scripted testing and are afraid of making a change. I think the method you have suggested in this post, i.e. reduce the number of test scripts and make them part of the test charter, seems like a very good starting point. This should be easier to sell, and once people see value in it they can move further. I would call this a step towards exploration 🙂

  4. SriSub1 says:

    I think the whole "management-gets-warm-fuzzies-from-knowing-we-run-oodles-of-scripted-test-cases" is based on the assumption that we have ‘enough’ scripts/automation that cover high-risk areas. Don’t we all start with some exploratory testing and essentially do what you suggested towards scripted tests?

    If the debate is between existing scripts and more exploration, then I would have the more experienced testers looking at the exploratory stuff, just because the probability of getting more out of that is higher than with an inexperienced tester doing so.

    If we are starting with nothing, what you suggested is what I would do too.

    Am I missing something fundamental?

  5. Michele says:

    My opinion is that scripted testing is a tool and exploratory testing is a skill.  If your company brings in a new product and wants you to test it, chances are it did not come with test cases.  Your testing skills, specifically the exploratory ones, will play a big role in your assessment of the program.  It is what will enable you to provide information about the product to "the people that matter".

    Also, having a great appreciation for Rapid Software Testing techniques, I take the definition of such testing to heart:  "Rapid Software Testing, the skill of testing any software, any time, under any conditions, such that your work stands up to scrutiny."

    And yet, I believe that scripted testing plays an important role in testing as well, especially in Regression testing.

    This is a great topic for discussion, and it usually gets a bit lively 🙂

  6. Shrini says:

    >>> scripted testing is a tool and exploratory testing is a skill.

    Well said – Michele.

    One thing I often hear about ET is "experience". I have heard people say, "to be good at ET, you need to have experience in the stuff you are exploring". I would rather prefer "skill" and "practice" to "experience".

    As a tester, one is required to have a go at a variety of things, each requiring its own domain knowledge (business, technology, etc.). Instead of focusing on gaining "experience" of each and every thing, a skilled exploratory tester would focus on building exploration skills such as modeling, questioning, and analyzing, which are more generic in nature. And to be good at ET, you require constant practice on a variety of things.

    I would say the need of the hour is more "rapid testers": people who can test anything (not necessarily software), anywhere, and under any timeline, such that their work stands up to scrutiny.

    I would emphasize the fact that ET is not a technique but an approach, and there is a great deal of ET even in scripted testing. As James Bach mentions, scripted and exploratory testing approaches are two ends of a continuum.

    Shrini

  7. P.C. says:

    Interesting post & comments.  Our company currently takes the approach for most products of Manual/Exploratory testing followed by scripted performance/load testing.  I’ve been involved in both types over the years (used to do manual testing but now getting into performance testing) & I’d say they both are very important approaches towards ensuring the product is ready for prod. I guess I don’t understand the whole manual vs. script/tool debate… seems like using both to meet your needs is logical.

    P.C.

  8. SG9 says:

    It’s always going to be difficult to make a change from a largely scripted model to a largely exploratory model when management are used to being able to monitor scripts and maybe don’t see scripted tests as being a problem. "If it ain’t broke…"

    I think there’s room for both and I hope there’s room for both as that’s the model I’m using for a current project!

    Until I can get more evidence on the benefits of exploratory testing/SBTM – how it might save money and improve the quality going out of the door – it’s always going to be a struggle to win over those who like to see scripts being written, executed and recorded…

    Simon
