Exploratory testing seems to be the “it” word of software testing these days. Everyone from high-profile consultants to newbie testers is talking about exploratory testing. My nature (similar to that of many testers) is to be a bit of a naysayer, and because of that, I’ve often tried to find holes in exploratory testing theory. However, I’m not one to provoke argument just for the sake of it, so I spent some time over the past few days analyzing my thoughts on ET. I’ll warn you in advance that this will likely be a long post, but I hope you take the time to read to the end.
My only real beef with ET is that everything I see written about it takes a black box approach. ET proponents describe it as “simultaneous testing and learning.” In short: try something, see what it does, and let the result guide what you try next. Yep – good stuff. The problem I see with many exploratory testers (sometimes including the experts!) is a huge amount of inefficiency in applying the learning to the testing. I don’t get a lot of chances to observe exploratory testers, and conference presentations and videos may not do their skills justice, but that doesn’t matter, since I absolutely agree that you should learn while you’re testing, and that what you learn should directly influence what you do next.
I think most good testers will agree that the most effective testing is a combination of black box and white box testing, so the next thing I did was challenge my assumption that ET is a black box approach. One common white box technique is code coverage analysis. When I’m looking at code coverage data, I learn about how my tests are executing and apply that knowledge to writing new tests. If I played with words long enough, I might be able to convince myself that ET can be applied to code coverage analysis, but I certainly wouldn’t convince anyone else. Strike one. How about code reviews? Nope – strike two. I took a step out of the metaphorical batter’s box to see if there was a time when I was using an exploratory approach while doing something inherently white box.
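That coverage feedback loop – run a test, see what it missed, write the test the gap suggests – can be sketched in a few lines. This is a minimal, hypothetical example using Python’s built-in `sys.settrace` as a stand-in for a real coverage tool; the `classify` function and its inputs are invented purely for illustration:

```python
import sys

def classify(n):
    # hypothetical component under test
    if n < 0:
        return "negative"
    return "non-negative"

executed = set()

def tracer(frame, event, arg):
    # record each source line executed inside classify
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
classify(5)                   # first test: only the non-negative path runs
sys.settrace(None)
first_pass = set(executed)

sys.settrace(tracer)
classify(-3)                  # the coverage gap suggests this second test
sys.settrace(None)

assert executed - first_pass  # the new test reached a line the first one missed
```

The point isn’t the tooling – it’s that the coverage data from the first run directly drove the design of the second test.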
POW – base hit (and thankfully, the end of the baseball metaphor). I’ve written about debugging before (heck, I’ve written debuggers!). I almost always use an exploratory approach when debugging (let me watch the state of this variable and see if it tells me anything interesting… what happens if I change this variable…). Another thing I often do when testing a new component is to exercise the entire module in the debugger. Before I even write test cases, I write a few basic tests (scripted or unscripted) and use the debugger to understand how every code path is reached and executed. I note where boundary conditions should be tested and where external data is used. I typically spend hours (and sometimes days) poking and prodding (and learning and executing) until I feel I have a good understanding of the component. The more I thought about it, the more I realized that these debugger sessions were exploratory sessions – and I love doing them. Assumption resolved – I love exploratory testing!
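Outside the debugger, that same poke-and-prod loop looks something like the sketch below. Everything here is hypothetical – `parse_quantity` and the probe inputs are invented stand-ins for whatever component is actually being explored:

```python
def parse_quantity(text):
    # hypothetical component under exploration
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

# Poke the component with boundary-ish inputs; each observation
# suggests the next probe and flags spots worth scripted tests later.
observations = {}
for probe in ["0", " 7 ", "-1", "2147483648"]:
    try:
        observations[probe] = parse_quantity(probe)
    except ValueError as e:
        observations[probe] = f"rejected: {e}"

# e.g. " 7 " is silently trimmed while "-1" is rejected -- both are
# boundary behaviors worth noting for later test cases.
```

A debugger session is the same activity with richer instrumentation: each result teaches you something, and what you learn picks the next probe.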
This reminded me that the stance of the ET zealots is not ET vs. white box testing (that was my own erroneous assumption). The ETers generally compare ET to scripted testing. But what tester would write a bunch of automated or scripted tests without taking time to understand what they were testing? (Rhetorical question – hopefully none.) Good test writers are going to use an exploratory approach to help define their test cases (manual or automated). Depending on the audience and support cycle of the product, ET approaches alone (white box or black box) may, in fact, be enough. I definitely don’t have a problem with that. For some products, a combination of scripted and exploratory approaches is needed, but exploratory approaches can definitely help in designing better scripted tests. Case closed – ET is a valuable testing approach. (Note that the arguments against scripted tests are valid. Model-based testing combats some of these arguments, but there’s a whole other post brewing in that discussion.)
I thought I was done, then I thought some more…
Good testers use exploratory techniques. They may use them exclusively, or they may use them in conjunction with another approach, but in short, good testing == exploratory testing. Good testers need to be able to experiment, learn as they test, then apply what they’ve learned to the current and future testing sessions. In fact, I can think of dozens of situations where I’d prefer ET over scripted testing. Even in a “standards” based testing organization, I think there is tremendous value in exploring as a method of influencing scripted test case design. I’m starting to love exploratory testing more and more…
Aww crud – my brain wouldn’t stop there. I love to cook too. When I cook, I experiment with ingredients, learn from my experiments and apply what I’ve learned to the current and future cooking sessions. I like to mountain bike. When I’m biking, I experiment, learn from my experiments and apply what I’ve learned to the current and future biking session. I like to golf. When I golf, I experiment, learn from my experiments and apply what I’ve learned to the current and future golf session. When I …
Live Search failed in an attempt to find anything on exploratory cooking, mountain biking, or golfing. If you’re not so pissed off at me by now that you’ve stopped reading, you realize exactly what I came up with. Exploratory testing is just… testing. I would argue that those who don’t simultaneously execute and learn aren’t really testers (or certainly aren’t good testers). In fact, I’ve found that many articles on ET can safely be read with the word “exploratory” removed.
Here, I’ll show you an example in one sentence. I love exploratory testing. Try it yourself :}