With the winds of change seemingly blowing developers into testing and testers into development, it feels like there are pockets of our industry where the craft of manual testing is becoming a lost and unappreciated art.
A more mustachioed Eric than I (whom you should watch out for on Halloween) helped popularize the idea that “given enough eyeballs, all bugs are shallow.” These days there’s usually also the clarification that many eyeballs that know what to look for make bugs shallow. I’ve been in many code review meetings where we went through code line by line, and the same people kept picking out the same bug patterns. Working as a team, we exposed more bugs, since the pool of known patterns to look for grew. I’m convinced that bug bashing, manual testing, exploratory testing, or whatever term you use for the hands-on work of finding bugs has a lot in common with this dynamic: the more defect patterns one person can recognize, the more they find.
The trick here is to have a comprehensive defect taxonomy at the ready before the bug bash starts. Whenever you hit a mental block, you can scan through the list and come up with new ideas for bugs. Many times when I have used this technique, the bottleneck became the time it took to log bugs, while other testers were out of ideas and spinning their wheels an hour or two into the bash.
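To make the “scan the list” step concrete, here is a minimal sketch in Python. The category names below are my own generic examples, not entries from any published taxonomy; the point is just to keep your taxonomy in a form you can flip through, or draw from at random, whenever you run dry.

```python
import random

# Illustrative defect taxonomy -- these categories are generic examples,
# not the actual entries from Kaner's appendix or any other source.
DEFECT_TAXONOMY = [
    "User interface: misleading error message",
    "Boundary conditions: off-by-one at a limit",
    "Calculation: overflow or precision loss",
    "Error handling: failure silently ignored",
    "Race conditions: assumptions about event order",
    "Load: resource exhaustion under stress",
]

def next_prompt(taxonomy=DEFECT_TAXONOMY):
    """Return a random category to jog your thinking when you hit a block."""
    return random.choice(taxonomy)

if __name__ == "__main__":
    print(next_prompt())
```

A paper checklist on your desk works just as well; the data structure matters less than having the prompts within arm's reach mid-bash.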
I wish I knew of a good public domain source, but if you don’t have a defect taxonomy already, the best list I have come across so far is in the back of “Testing Computer Software, 2nd Edition” by Cem Kaner, Jack Falk, and Hung Q. Nguyen: Appendix A, “Outline of Common Software Errors.” The book is an absolute classic, and worth every penny for the appendix alone! To make things easier, especially during a bash, I photocopied the last pages of my copy and keep them right on my desk. Another nice thing about guiding yourself this way is that you can take the title of the software error type from the list and easily reword it to fit your defect, which saves you time when logging a bug during a bash and trying to come up with a title (reducing test drag).
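The rewording tip can even be scripted. Here is a hypothetical helper (the entry text and bug detail are made up for illustration, not taken from the book) that keeps the taxonomy category as a prefix and swaps in your specific observation:

```python
def bug_title(taxonomy_entry: str, detail: str) -> str:
    """Keep the taxonomy category as a prefix and append the specific
    observation, so every logged bug gets a consistent, searchable
    title with minimal typing."""
    category = taxonomy_entry.split(":", 1)[0].strip()
    return f"{category}: {detail}"

# Illustrative entry and detail:
print(bug_title("Error handling: failure silently ignored",
                "save dialog ignores disk-full error"))
# → Error handling: save dialog ignores disk-full error
```

Consistent category prefixes also pay off later, when you want to group or triage a pile of freshly bashed bugs.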
So this is just a simple technique I’ve found useful. There is more heavyweight material out there; if you’re looking for it, I highly recommend searching for everything you can find from Jon Bach, who beats the exploratory testing drum fairly well. I’ve personally been impressed with his work, and I think he has some really valuable insights worth factoring into your approach. I enjoy being entertained at his lectures more than anything, but James Whittaker’s “How To Break Software…” books are also great reading material. From an academic perspective he takes a similar ‘we did lots of research over lots of bugs and here’s what we came up with’ approach. He lists “attacks” you can perform against the software once you understand the failure mode, and there is an appendix of those attacks as well, which is a great addition to the bag of tricks. In my experience it can take a little more work to crank through these attacks, and they lean heavily toward fault injection, which makes defending your bugs harder, but it is still great material to be familiar with.
Good luck in your next bug bash!