Exploratory Testing versus Ad Hoc Testing

A few weeks ago I read an interesting post on SQA Forums about exploratory testing. It was interesting not because there was anything 'new' to learn about 'exploratory' testing, but because it offered a compelling counter-argument to the common view of ad hoc testing. It is also a good read because it differentiates exploratory testing from ad hoc testing with a surprising revelation.

I, perhaps like many of you, have always assumed ad hoc testing to imply an unplanned, random approach to testing. In our internal defect tracking system at Microsoft, two entries in the How Found category are "Ad hoc (directed)" and "Ad hoc (undirected)," and I once thought...'How can random testing be directed?' Even Lee Copeland writes in his excellent book A Practitioner's Guide to Software Test Design, "...ad hoc testing which (by my definition, although others may disagree) often denotes sloppy, careless, unfocused, random and unskilled testing."

But a moderator on the SQA Forums site pointed out that the actual definition of ad hoc as an adjective (the context of its use is important, and in this context ad hoc describes the type of testing being performed) is "concerned or dealing with a specific subject, purpose, or end" and "improvised and often impromptu."

In sharp contrast to its commonly assumed connotation, the denotative implication of 'ad hoc testing' might be better interpreted as "improvised testing dealing with a specific subject or a clear purpose without previous preparation."

Now let's contrast the denotative implication of ad hoc (or bug bash) testing with exploratory testing. James Bach defines exploratory testing as "simultaneous learning, test design, and test execution." I interpret this to mean that a tester decides what to do next based on his or her cognitive ability to apply knowledge of testing techniques and methods, as well as of the domain and system space, and then performs or executes an action (or test) while learning about the capabilities or attributes of the application under test (without having a specific goal or purpose in mind other than perhaps the hope of finding a defect).

When comparing the bastardized connotative assumption of ad hoc testing with exploratory testing, I agree with the SQA Forums moderator mentioned above, who also stated that "...trying to equate exploratory testing to ad hoc testing is incorrect" and that "They [ad hoc testing and exploratory testing] are really on the opposite ends of the spectrum." But I also don't think that exploratory testing is equatable to the denotative sense of ad hoc testing. Unless, of course, exploratory testing is largely improvised, doesn't require previous preparation, and does have a purpose or deal with a specific subject; in which case exploratory testing would literally be the same as ad hoc testing (based on the denotation of that adjectival phrase in the context of software testing).

(BTW...antonyms of the word improvised include planned and predetermined, and we all know that exploratory testing is not planned or predetermined.)

Now, I don't think for a moment that this revelation about the denotation of the word ad hoc is going to change the connotative implication of the phrase ad hoc testing. I also don't think some people will ever change their opinion about exploratory testing as the magic snake oil or holy grail of testing. But this post in SQA Forums made me think a bit about the ridiculousness of the whole ad hoc/exploratory testing debate. So, to expunge the emotional and religious rebuttals between ad hoc and exploratory testing, and perhaps to expose some common ground among testers, I suggest we simply start using the phrase improvised testing to refer to any testing performed without a pre-defined test case, with the intent of learning about and/or evaluating the attributes and capabilities of a software project and exposing defects. Improvised testing...yeah...that's it! No, wait. On second thought, the word improvised may also carry negative connotations, so maybe we can call it extemporaneous testing, or maybe autoschediastical testing. Now, that sounds pretty damn impressive and sexy! Way better than simply saying testing. Who's with me?

(Oh...we do (and have for many years) perform 'directed ad hoc testing' all the time at Microsoft...but we lovingly refer to it as a bug bash! A bug bash is when we 'direct' the testing effort to focus on a specific feature area or type of testing, such as security or globalization, for a given period of time. A bug bash involves executing improvised tests with the goal or purpose of exposing defects in a feature area, or defects of a particular type or classification, often with minimal preparation.)

Comments (2)

  1. adamu says:

    Hmm. It almost makes sense. Except that with these definitions, there’s no difference between HowFound="Ad Hoc (directed)" and ="Bug Bash".


  2. I.M.Testy says:

    Hi Adam,

    Personally, I really don’t see distinctive differences between “ad-hoc (directed)” and “Bug Bash” that would warrant me tracking these as separate data points in a defect tracking system. I bet if we asked testers what the difference between them is, we would get a bunch of hand-waving and fanciful words thrown about by a few, a handful of others would stare at us like deer in headlights (thinking it is some sort of trick question, perhaps), and (hopefully) most testers would say…”you know…that’s a really good question; I’ve never really thought about it.”

    With the negative connotation associated with ad-hoc, personally I would probably pull that from the flavor list and suggest using Bug Bash or some other descriptive word/phrase to identify defects that are exposed outside the scope of more structured (although I am not sure structured is the right word here) testing approaches.

    I think the how found category in defect tracking databases is often over-loaded and quite abused. I once saw a database in which anyone could edit the How Found category and there were three separate entries for Vulcan mind meld (Vulcan mind meld, Vulcan Mind-meld, and Vulcan Mindmeld), and of course before these entries was an entry for “a little birdie told me.”
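    Duplicate free-form entries like those are easy to guard against with a simple normalization pass before reporting on the field. A minimal sketch in Python (the entry strings are just the hypothetical examples above, not data from any real tracking system):

```python
# Hypothetical sketch: collapse free-form "How Found" entries whose only
# differences are case, punctuation, or spacing. The entries below are the
# duplicate examples mentioned above, not data from a real tracking system.
import re
from collections import Counter

def normalize(entry: str) -> str:
    # Lowercase and drop everything that is not a letter or digit, so
    # "Vulcan Mind-meld" and "Vulcan Mindmeld" map to the same key.
    return re.sub(r"[^a-z0-9]", "", entry.lower())

entries = [
    "Vulcan mind meld",
    "Vulcan Mind-meld",
    "Vulcan Mindmeld",
    "a little birdie told me",
]

counts = Counter(normalize(e) for e in entries)
print(counts)  # the three Vulcan variants collapse to a single key
```

    Of course, constraining the field to a fixed pick list in the first place is the better fix; the point is only that freely editable category fields invite exactly this kind of cleanup.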

    Ultimately, the data points tracked in the defect database should provide meaningful information. If I were not going to use the information obtained in the How Found category to adjust my processes or strategy, then perhaps I might just have two choices for that category: How Found == Testing and How Found == Guessing; and I might even consider removing one of those choices! 🙂
