The Broken Skull Theory of Software Testing

I mentioned Von Koenigswald last time, and that in turn leads me to my (rather violently named) "Broken Skull Theory of Testing."

First, let's set the stage. It's 1937, and Von Koenigswald is on a dig in Java. He finds a piece of a skull and knows other pieces should be nearby. He needs to enlist local workers to dig out and bring him any pieces they find so he can assemble a complete specimen to study. Here was his solution:

"We mobilized the maximum number of collectors," stated von Koenigswald. "I had brought the fragment back with me, showed it round, and promised 10 cents for every additional piece belonging to the skull. That was a lot of money, for an ordinary tooth brought in only 1/2 cent or 1 cent. …"

Now you can easily guess what happened next:

The highly motivated crew quickly turned up the desired skull fragments. Von Koenigswald would later recall:

"... But I had underestimated the 'big-business' ability of my brown collectors. The result was terrible! Behind my back they broke the larger fragments into pieces in order to increase the number of sales! . . . "

From <https://www.bibliotecapleyades.net/ciencia/hiddenhistory/hiddenhistory08.htm>

What a shock! He set out to reward quantity over quality and got exactly what he asked for: a large quantity of low-quality results. To his credit, he managed to piece all the fragments together and finish his project, but it was a huge amount of extra work he could have avoided. Here's the final result (I think - it's very hard to dig through anthropology photos on the 'net. I kept getting directed to Bigfoot-type sites. But I digress):

[Photo of the reconstructed skull]

From <https://anthropology.si.edu/humanorigins/ha/sang2.html>

The same mistake could be made in the software testing world as well. For instance, I could push to get "more" automation scripts written. You can imagine the results if I just ask for "more." Someone could latch onto that and write scripts that do something like:

  1. Boot OneNote, type the letter 'a' on the page, verify and exit
  2. Boot OneNote, type the letters 'aa' on the page, verify and exit

And so on...
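To make that concrete, here's a rough sketch of what padding the script count might look like. None of the helper names below come from a real OneNote automation API - they are placeholders that just print what a real harness would do - but the shape is the point: fifty "new" scripts that all exercise exactly the same code path.

```python
# Hypothetical stand-ins for whatever UI-automation helpers a real
# harness would provide; here they only print, so the sketch runs anywhere.
def launch_onenote():
    print("boot OneNote")

def type_on_page(text):
    print(f"type {text!r} on the page")

def verify_page_contains(text):
    print(f"verify {text!r} is on the page")
    return True

def exit_onenote():
    print("exit OneNote")

def make_typing_case(n):
    """Build one 'script': boot, type n copies of 'a', verify, exit."""
    def case():
        launch_onenote()
        type_on_page("a" * n)
        assert verify_page_contains("a" * n)
        exit_onenote()
    return case

# Fifty scripts for the status report, near-zero new coverage.
cases = [make_typing_case(n) for n in range(1, 51)]
for case in cases:
    case()
```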

Clearly, these are not the best scripts in the world, but I would have only myself to blame for not being clear about expectations.

You can also imagine asking testers to "find bugs" and designing a reward system based on quantity alone. Picture the number of "typo in a dialog" bugs that get reported versus the kind of performance bug that takes two or three days to track down. The performance bug may be far more critical, but since I pushed for quantity, I may never get that report.

Rocket science? Hardly. It's just something I always keep in mind when designing methods to track the status of different projects. I need to remind myself to look at all the metrics available and decide which are important for this task and which do not need to be emphasized. So I always try to make expectations clear when I start a task. Helps avoid broken skulls later in the game…

Questions, comments, concerns and criticisms always welcome,

John