Contextual blindness: or How to take things completely out of context

Many testers are familiar with the concept of inattentional blindness (or at least should be, in my opinion). Basically, inattentional blindness occurs when we are so visually focused on a task or object that we completely fail to see something out of the ordinary.

But I am going to introduce my own neologism, which I will refer to as contextual blindness. Contextual blindness occurs when someone is so restrictive in their thinking, or so biased by their own opinion, that they take references to a document or study completely out of context to support their own biased argument. In essence, they ignore the original context in which the statements were made, twisting a sentence or using a statement outside its original context to support an oppositional point of view.

I too have been guilty of this when I made statements without completely researching available data or carefully reviewing empirical or factually substantiated evidence. (These days, if I don't have sufficient data or information to support a strong argument for or against something, I try to preface my statement with "I suspect..." or "in my opinion...".) However, some people seem to make a habit of making wild and often fallacious statements, often in seemingly juvenile attempts at one-upmanship. I suspect that sometimes people do this because they think they are beyond reproach; they assume they know more than others, or consider themselves to be such experts that nobody should question anything they say.

As I have gotten older and a bit wiser, I have learned to question things and to reassess my position or my ideas from time to time. I often speak with recognized industry experts, read several books and studies (often presenting contradictory approaches or perspectives), review empirical data (and just to be clear, IMHO bug count as the only data point offered as empirical data is about as useful as nipples on men), and, when I can, I try to experiment or experience new things in order to draw my own conclusions. By now you're probably asking where I am going with all this... there is a point... read on!

This evening a person sent me mail asking whether my keynote at EuroStar last year was "an attack on certification programs." She knew I gave one of the keynotes at EuroStar, but was a little shocked that I would attack certification programs. I told her that I gave one of five keynote addresses at EuroStar 2007, that my talk was entitled The Path to Professionalism: Skills of star performers, and gave her my perspective on certifications. In her response she forwarded me some mail from a distribution list, which included excerpts from a rather lively debate between two other members of the list, in which one of the participants stated,

"The Department of Defense has identified the failure of traditional testing processes, including the problem of over-documentation, as one of the top five problems in IT. The keynote speech at Eurostar, in December was an attack on certification programs such as the CSQE."

So, I reminded her that the second sentence should more appropriately read, "One of the keynote speeches at Eurostar...". I told her that Michael Bolton gave that talk. (I must say that it was the first time I had seen him speak, and his stage presence is excellent, but I wasn't overly impressed with his arguments against certification because he simply denigrated the existing programs without offering any other solutions. Several delegates at the conference later asked me about his comments, and I deferred by stating, "when in Rome." Certifications in Europe are highly valued by employers for a variety of reasons.) But the above-mentioned statement was merely misleading; it is actually the first statement that is most fallacious, and taken completely out of context.

The statement "The Department of Defense has identified the failure of traditional testing processes, including the problem of over-documentation, as one of the top five problems in IT" used the following study as a reference. So, let's take a critical look at that statement and question its truth and validity. (Because in our jobs as professional testers, understanding the correct context, critical thinking, and logical questioning are important skills!)

"All we want are the facts, ma'am"

Fact #1. The DoD did not identify anything. The study was conducted by the National Defense Industrial Association (NDIA), which is not in any way, shape, or form a part of the Department of Defense (DoD). The NDIA offers its members, "individuals from academia, government, the military services, small businesses, prime contractors, and the international community, the opportunity to network effectively with the government." There were 26 participants in this study, and only one of them represented the DoD.

Fact #2. The study did not identify "the failure of traditional testing processes." The purpose of the study was to "Identify the top 5 software engineering problems or issues prevalent within the defense industry." The 26 participants actually came up with seven issues, and they determined that one of the top issues in software engineering was that "Traditional software verification techniques are costly and ineffective for dealing with the scale and complexity of modern systems."

Fact #3. The problem of over-documentation arose in the context of a discussion of tests with a "disproportionate effort on detailed procedures." It was not one of the 5 (actually 7) problems identified; rather, the participants cited "Tests are over-documented with disproportionate effort on detailed procedures" as a possible explanation for the fifth top issue: "Traditional software verification techniques are costly and ineffective for dealing with the scale and complexity of modern systems." I suspect this statement speaks to the fact that many documented tests I have seen from under-trained testers are simply prescriptive, regimented scripts based on some interpretation of an ambiguous requirements document, rather than well-formed tests designed from an in-depth analysis of the system under test.

What the study really said...

But this person did not merely take parts of several statements in the report and munge them together in an attempt to use the findings completely out of context; this person also completely ignored (or purposefully omitted) other points discussed in relation to this specific problem, such as:

  • Over-reliance on testing alone rather than robust SW verification techniques.
  • Manual testing techniques are labor-intensive, scale poorly, and are unproductive relative to the large investment of resources.
  • Compliance-based tools do not adequately cover risks or failure conditions.
  • Tests are over-documented with disproportionate effort on detailed procedures.
  • Education, training, and certifications are inadequate to develop effective test skills.

The person also ignored (or purposefully omitted) the recommendations by the participants which included:

  • Sponsor a study of state-of-the-practice verification and testing approaches.
  • Review/update testing policies and guidance to emphasize robust, productive approaches that maximize ROI.
  • Review adequacy of verification plans/approaches early in the acquisition life cycle.
  • Emphasize skilled investigation throughout the life cycle, based on coverage, risk mitigation, high volume automation.
  • Strengthen curricula, training, certifications, career incentives for testing roles.

Now, I really don't understand how someone could read this report and misconstrue the information to imply that the failure of traditional testing processes (and it is not clear what is meant by that), or over-documentation, is one of the top 5 problems identified by the DoD; especially when the report also recommends strengthening policies and guidelines for full requirements traceability.

I have my suspicions as to why the person who made the fallacious remark above might omit all the facts and additional counterpoints in their argument. I suspect those details were not revealed because they do not support (in fact, they completely dispute) a context-driven ideology that emphasizes manual GUI testing and vehemently opposes documentation, test automation, coverage analysis, robust verification techniques, and the strengthening of certifications; essentially anything that involves the need for greater technical know-how, logical analysis, or measurable advancement of the testing profession.