Exploratory testing vs. scripted testing: Is it really only either/or?

I just left Stockholm after spending a week there. It was my second visit to Stockholm, and it is truly a remarkable city. I spoke at EuroSTAR, the largest software testing conference in Europe, and had the opportunity to meet some old friends and colleagues as well as chat with many other speakers and delegates throughout the week. My friend Steve Allott invited me to meet with the senior managers of Know IT, the largest IT consulting company in Sweden, where I discussed controlling test environments and techniques for generating real-world and random test data. Next week I will be in Denmark, where I will be teaching at the Microsoft campus in Vedbaek.

While at an after-event gathering (a really great party thrown by the wonderful folks at Know IT AB) one evening last week, I met a gentleman whom I had previously challenged regarding his comparison of the agile development model to the waterfall model. His response at the time was simply that most people in the industry were only familiar with the waterfall model, so that's what he used as a comparison. At the gathering he approached and chided me by asking if I remembered giving him such a hard time in Düsseldorf. After some light-hearted banter, I told him I only gave him such a 'hard time' because we so often see a concept compared only to its complete opposite. What I had asked at the conference was for him to compare agile approaches to other iterative development approaches. That evening he was better prepared for the conversation: he was able to discuss the advantages and disadvantages of the agile approach in relation to various other approaches, and to state when other approaches may be more relevant than an agile approach. (Yes, believe it or not, some people still understand that agile is not, and should not be, the only way!)

When I hear people talk about exploratory testing (just to clarify again, I think everyone does, and should do, explorative-type testing), it is often compared with scripted testing. But I often ask myself, "Is the testing problem really so simple that it can be solved with either an exploratory or a scripted testing approach? Or are there other approaches, methods, and techniques that I should consider when attempting to prove or disprove a hypothesis about a complex system?"

When I think of scripted testing I think of a test case (often poorly written) that is so prescriptive in its set of instructions that it leaves no ability on the part of the tester to deviate while still proving or disproving the hypothesis or purpose of the test. For example, a scripted test to me is exemplified by the following set of prescriptive steps:

  1. Click Start button and select Run... menu item
  2. Enter "iexplore" in the Open textbox and press the OK button
  3. Enter http://www.google.com in the address bar and press Enter
  4. Press ALT + q and enter "pickaxe" in the search field
  5. Click the Google Search button
  6. Verify the search results contain "Programming Ruby"

An automated version of this test in Ruby might look something like the following:

require 'watir'
include Watir

test_site = 'http://www.google.com'
ie = IE.new
ie.goto(test_site)
ie.text_field(:name, "q").set("pickaxe")
ie.button(:name, "btnG").click
if ie.contains_text("Programming Ruby")
   puts "Test Passed."
else
   puts "Test Failed!"
end

Now, there may be times when I need a prescriptive set of steps in a scripted test to validate specific requirements, or when I need absolute control over what is being tested and how it is being tested. There are other times I want to use an explorative approach to testing and test design, such as when I am not familiar with a new undocumented or under-documented feature, or when I want to evaluate the system in ways I have not previously considered.

But I also know that on the spectrum between explorative-type testing and purely scripted testing there is a wide variety of approaches to testing and test design that are extremely beneficial to the professional tester. When I talk about techniques or methods or approaches commonly used in software testing, I always discuss the specific contexts in which they are useful and the situations in which they are not applicable. I discuss strengths and weaknesses, and also the different types of information they can reveal beyond simply the discovery of potential defects.
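One example of this middle ground is a data-driven test: the procedure stays scripted, but the inputs vary, so a single test design covers many cases. The sketch below illustrates the idea in plain Ruby; the `search` method is a hypothetical stub standing in for real browser automation (such as the Watir calls above), and the term/title pairs are purely illustrative.

```ruby
# Data-driven sketch: one fixed procedure, many input/expectation pairs.
# Each search term maps to a string we expect to find in the results.
CASES = {
  "pickaxe" => "Programming Ruby",
  "camel"   => "Programming Perl"
}

# Hypothetical stub standing in for real browser automation:
# returns canned result titles keyed by search term.
def search(term)
  index = {
    "pickaxe" => ["Programming Ruby, 2nd Edition", "The Pickaxe Book"],
    "camel"   => ["Programming Perl, 3rd Edition"]
  }
  index.fetch(term, [])
end

# Run every case through the same scripted procedure.
results = CASES.map do |term, expected|
  hits = search(term)
  passed = hits.any? { |title| title.include?(expected) }
  [term, passed]
end

results.each { |term, passed| puts "#{term}: #{passed ? 'Passed' : 'Failed'}" }
```

Swapping the stub for live automation, or feeding the case table from generated or random data, moves the same fixed procedure further toward the explorative end of the spectrum without abandoning repeatability.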

From my point of view, professional testing is perhaps the most challenging discipline in the software industry. The complexity of the software, and the virtually infinite number of possible tests, surely require us to approach the problem from multiple directions, not simply from an explorative approach or a scripted approach. It is certainly easy to bucket things for simplicity (e.g., if it isn't exploratory it must be scripted). However, when we view a problem from only one or two directions, we are far more likely to miss important information (beyond simply defects) that may be critical for our managers to better analyze risk and make better business decisions. A professional tester not only realizes that there are multiple approaches to a specific situation, but also understands that each approach has advantages and disadvantages, and knows how to leverage those advantages in order to gather the appropriate information.

So, let's break out of viewing testing only within the context of testing approaches (exploratory vs. scripted), and start understanding the contexts of the software under test (the specific set of facts or circumstances surrounding a particular event or situation). Let's really learn about all the various approaches, techniques, and methods available to the professional tester, and better understand when and how to best apply those 'tools of our trade' in the pursuit of our profession.

Comments (3)

  1. brentpaine says:

    Nice article BJ. I tend to agree with you. I really don’t see the difference, nor do I care to really argue the difference.

    While we still adhere to the test scripts that we use, we also implemented Exploratory Testing, to a degree, and it was successful enough in the pilot project in which it was used to warrant further use throughout the company.

    This was a huge success story for us because our investigations had hit somewhat of a plateau, but when focusing on a particular area we were able to log a couple hundred more bugs before the product release.

    With that said, I don’t know that I would ever rely solely on ET or Ad Hoc as a QA methodology. It’s not that it doesn’t work, and it’s not that it can’t be structured (because we feel we’ve created somewhat of a hybrid that does have a certain amount of structure along with the freedom), but I think that scripted tests are a great checklist to make sure you’re seeing the forest.

    What I mean by that is, I think that exploratory is a great exercise and it has its place in the test cycle, but I can also see where it could be so focused on finding deep-rooted bugs that you miss some of the most obvious ones.

  2. Shrini says:


    What actually prompted you to write this post?

    Did you hear or see someone insisting on doing either scripted or exploratory test?

    Or is it just an example that came to your mind while discussing Agile vs. waterfall development models ….

    You are right – good testing is a continuum between the two extremes of scripted and exploratory testing approaches.

    I think the debate about scripted vs. exploratory is DEAD … unless there is a totally new twist or new angle to it …


  3. I.M.Testy says:

    Brent, spot on! I couldn’t agree with you more.

    Shrini, the problem is that some people consistently compare scripted testing to exploratory testing. The reason I wrote this was because I had just left a presentation in which the speaker made this comparison.

    I find comparing scripted tests to exploratory tests similar to the way people sometimes compare agile to waterfall development models. (Interestingly enough, very little commercial software is developed using a true waterfall model; most commercial software is developed using some iterative model. So, the comparison is virtually meaningless except in theoretical or philosophical discussions.)

    Of course, over the past few months I have been doing much more thinking about what ‘scripted testing’ is and can only conclude that scripted testing is execution of written (or recorded) validation tests derived from the specification and designed primarily to verify a product meets or adheres to specified requirements.

    In my opinion, all other types of testing are investigative in nature.

    So, I am not sure why you think the debate is dead since so many people you know continue to use the inane comparison of ‘scripted testing’ (made especially worse by the fact that ‘scripted testing’ is never clearly defined) to exploratory testing (ET).

    There are some recent studies comparing ‘scripted tests’ (those tests derived from a specification) to exploratory testing (as described by Bach, Kaner, et al.). And empirical data I’ve collected over the past 5 years closely parallels the data from a similar study done at the University of Helsinki. When I publish the white paper (around August), the information should cause rationally thinking individuals to question some common assumptions about exploratory testing, in hopes that testers will better understand the advantages and disadvantages and learn to utilize exploratory testing (ET) to augment other approaches to testing.
