I just left Stockholm after spending a week there. It was my second visit to Stockholm, and it is truly a remarkable city. I spoke at EuroSTAR, the largest software testing conference in Europe, and had the opportunity to meet some old friends and colleagues as well as chat with many other speakers and delegates throughout the week. My friend Steve Allott invited me to meet with the senior managers of Know IT, the largest IT consulting company in Sweden, where I discussed controlling test environments and techniques for generating real-world and random test data. This next week I will be in Denmark, where I will be teaching at the Microsoft campus in Vedbaek.
While at an after-event gathering (a really great party thrown by the wonderful folks at Know IT AB) one evening last week, I met a gentleman whom I had previously challenged regarding his comparison of the agile development model to the waterfall model. His response at the time was simply that most people in the industry were only familiar with the waterfall model, so that's what he used as a comparison. At the gathering he approached and chided me by asking if I remembered giving him such a hard time in Düsseldorf. After some light-hearted banter I told him I only gave him such a 'hard time' because too often we see a concept compared only against its complete opposite. What I had asked at the conference was for him to compare agile approaches to other iterative development approaches. This evening he was better prepared for that conversation: he was able to discuss the advantages and disadvantages of the agile approach in relation to various other approaches, and to state when other approaches may be more relevant than an agile approach. (Yes, believe it or not, some people still understand that agile is not, and should not be, the only way!)
Often when I hear people talk about exploratory testing (just to clarify again, I think everyone does, and should do, explorative-type testing), it is compared with scripted testing. But I often ask myself, "Is the testing problem really so simple that it can be solved with either an exploratory or a scripted testing approach? Or are there other approaches, methods, and techniques that I should consider when attempting to prove or disprove a hypothesis about a complex system?"
When I think of scripted testing I think of a test case (often poorly written) whose set of instructions is so prescriptive that it leaves the tester no room to deviate while still proving or disproving the hypothesis or purpose of the test. For example, a scripted test to me is exemplified by the following set of prescriptive steps:
- Click the Start button and select the Run... menu item
- Enter "iexplore" in the Open textbox and press the OK button
- Enter http://www.google.com in the address bar and press Enter
- Press ALT + q and enter "pickaxe" in the search field
- Click the Google Search button
- Verify the search results contain "Programming Ruby"
An automated version of this test in Ruby (using the Watir library, which drives Internet Explorer) might look something like the following:
require 'watir'  # Watir drives Internet Explorer from Ruby

test_site = 'http://www.google.com'
ie = Watir::IE.new
ie.goto(test_site)
ie.text_field(:name, 'q').set('pickaxe')   # enter the search term
ie.button(:name, 'btnG').click             # the Google Search button
if ie.contains_text('Programming Ruby')
  puts 'Test Passed.'
else
  puts 'Test Failed!'
end
Now, there may be times when I need a prescriptive set of steps in a scripted test to validate specific requirements, or when I need absolute control over what is being tested and how it is being tested. There are other times I want to use an explorative approach to testing and test design, such as when I am not familiar with a new undocumented or under-documented feature, or when I want to evaluate the system in ways I have not previously considered.
But I also know that between the extremes of explorative-type testing and purely scripted testing lies a wide variety of approaches to testing and test design that are extremely beneficial to the professional tester. When I talk about techniques or methods or approaches commonly used in software testing, I always discuss the specific contexts in which they are useful and the situations in which they are not applicable. I discuss strengths and weaknesses, and also the different types of information that can be revealed beyond simply the discovery of potential defects.
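To make one of those middle-ground approaches concrete: the prescriptive Google search script above can be turned into a data-driven test, where the same steps run against a table of query/expected-result pairs, optionally mixed with randomly generated queries in the spirit of the real-world and random test data mentioned earlier. This is a minimal sketch in Ruby; the names (SEARCH_CASES, random_query, the fragment pool) are hypothetical and illustrative, not part of any particular tool.

```ruby
# Hypothetical data-driven variant of the scripted search test: the steps
# stay fixed, but the inputs and expected results come from a table.
SEARCH_CASES = [
  # [query,    expected text in the results]
  ['pickaxe',  'Programming Ruby'],
  ['watir',    'Web Application Testing in Ruby'],
]

# Build a plausible "random" query from a pool of fragments -- a toy
# illustration of generating random test data.
FRAGMENTS = %w[ruby testing agile pickaxe watir].freeze

def random_query(rng = Random.new)
  Array.new(rng.rand(1..3)) { FRAGMENTS.sample(random: rng) }.join(' ')
end

# In a real run each pair would feed the browser-driving steps shown above;
# here we simply print what would be exercised.
SEARCH_CASES.each do |query, expected|
  puts "search #{query.inspect}, verify results contain #{expected.inspect}"
end
puts "plus a generated query: #{random_query.inspect}"
```

The point of the sketch is the separation: the scripted steps stay repeatable and auditable, while the data varies from run to run, covering cases no single prescriptive script would enumerate.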
From my point of view, professional testing is perhaps the most challenging discipline in the software industry. The complexity of software, and the virtually infinite number of possible tests, surely require us to approach the situation from multiple directions and not simply from an explorative approach or a scripted approach. It is certainly easy to bucket things for simplicity (e.g., if it isn't exploratory it must be scripted). However, when we view a problem from only one or two directions, we are more likely to miss important information (beyond simply defects) that may be critical for our managers to better analyze risk and make better business decisions. A professional tester not only realizes that there are multiple approaches to a specific situation, but also understands that each approach has advantages and disadvantages, and knows how to leverage those advantages in order to gather the appropriate information.
So, let's break out of viewing testing only within the context of testing approaches (exploratory vs. scripted), and start understanding the contexts of the software under test (the specific set of facts or circumstances surrounding a particular event or situation). Let's really learn about all the various approaches, techniques, and methods available to the professional tester, and better understand when and how to best apply those 'tools of our trade' in the pursuit of our profession.