The debate over whether testers need at least to understand programming concepts is still raging within the discipline. To me this debate is puzzling, because it seems to suggest that, as a professional, I don't really have to understand or be proficient in critical aspects of my trade. Even Cem Kaner noted, "I think that the next generation of testers will have to have programming skills." Actually, there was a time not so long ago when testers had to have programming skills, so it is nice that Cem now acknowledges that skill as useful in testing.
Unfortunately, even within Microsoft a few people still want to differentiate between STE and SDET by blindly assuming that STE meant non-programming testers. The fact is that the old STE ladder-level guidelines clearly listed skills such as debugging production code and designing and developing effective automation as required skills for Microsoft testers. Unfortunately, some managers chose to selectively ignore these skill requirements, and some groups chose to differentiate between GUI testers and any tester who could write code by labeling them STE and SDET respectively. (This was an abomination of job titles, in my opinion.) The new SDET competencies at Microsoft are designed, and are supposed to be implemented, to reinforce the essential skills we expect from our testers, so that a tester at a given level in their career stage in one business unit has essentially equitable skills to any other tester at the same level in any group in the company.
But people are often resistant to change, and as I wrote in my last post, some people choose to wallow in self-pity, pretend they are victims of some evil plot, hypercriticize change with dogmatic arrogance, and incessantly bemoan dubiously negative aspects of change from an often overly emotional, narrow-minded perspective. A person who moved from a testing role to program management stated, "I was a tester because I understand how users think and how they use products and I wanted to use that knowledge to make our software better." Really? We make software better by beating quality into it? Does this demonstrate a good understanding of software processes and sound business logic? I ask only because it is well known that it is much cheaper to prevent defects than to find them, and that many defects can be caught in the design process. So I am asking myself why in the world this person didn't start as a Program Manager (responsible for interpreting marketing analysis and customer feedback into requirements and product design), or become one before now. What is even more remarkable about this statement is that it doesn't acknowledge that, as a program manager, this person is now in a role that should have a direct connection to the customer and a greater impact on making our software better. A development strategy or process that emphasizes customer advocacy primarily in the testing phases is ridiculously immature and a gross waste of resources, since empirical studies have widely shown it is cheaper to prevent defects through better designs and clear requirements than to find them during a testing cycle.
The same person stated, "I wanted to keep breaking software in the incredibly fun, very effective way I had been doing." (Personally, I find API testing, which can also use a black-box approach, and white-box test design extremely fun and intellectually challenging, and both are very effective when used appropriately.) Unfortunately, this comment seems to perpetuate the myth that testers make software better by finding bugs, and it also demonstrates an extremely limited view of the role and overall potential value of software testing to an organization. This is a very narrow, antiquated (in technology time), and immature view of software testing that frames testing primarily as a bug-finding endeavor. Beizer wrote that black-box testing exercises approximately 35–65% of the product, and Marne Hutcheson and I have empirical data demonstrating that GUI testing (the type of black-box testing most people are familiar with, and the type of testing most non-technical testers are limited to performing) is not as effective as most people want to believe, and is often more costly than using a variety of approaches to software testing. Again, even Kaner notes, "Programmer productivity has grown dramatically over the years, a result of paradigmatic shifts in software development practice. Testing practice has evolved less dramatically and our productivity has grown less spectacularly. This divergence in productivity has profound implications—every year, testers impact less of the product. If we continue on this trajectory, our work will become irrelevant because its impact will be insignificant." (I strongly suspect the 'testers' Kaner is referring to in this context are primarily non-technical GUI testers, since that is the type of testing emphasized in his BBST course.)
There is no doubt that a person who does not at least understand programming concepts, or lacks an in-depth technical understanding of the system they are testing, is unable to perform various activities that may be required in the role of a professional software tester. Such a person cannot perform code reviews (which have been proven to find certain classes of defects more effectively than any other type of testing); cannot analyze code to determine which areas have not been tested, or design tests from a white-box approach to increase testing effectiveness and reduce risk; cannot debug errors and identify the root causes of defects; cannot automate tests to free up their time or reduce costs during the maintenance phase of the product lifecycle; and may not be able to adequately analyze and decompose test data. While some companies don't rely on their testers to do this type of work, these are certainly tasks that any professional tester should be able to perform.
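To make the white-box point concrete, here is a minimal sketch in Python (the function `classify_triangle` and its tests are hypothetical, invented purely for illustration). A tester limited to black-box input bashing might try a handful of obvious values, while a tester who can read the code sees that each comparison creates a distinct branch, including a degenerate boundary case, and designs one test per branch:

```python
# Hypothetical function under test: classifies a triangle by its side lengths.
def classify_triangle(a, b, c):
    # Reject non-positive sides and violations of the triangle inequality.
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a + b <= c or b + c <= a or a + c <= b:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# White-box tests: one case per branch visible in the code, including the
# boundary where the triangle inequality holds with equality (1 + 2 == 3).
assert classify_triangle(0, 1, 1) == "invalid"      # non-positive side
assert classify_triangle(1, 2, 3) == "invalid"      # degenerate boundary
assert classify_triangle(2, 2, 2) == "equilateral"
assert classify_triangle(2, 2, 3) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"
```

Reading the code reveals test cases (such as the degenerate `1, 2, 3` boundary) that a purely GUI-driven or guess-the-input approach could easily miss.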
I suppose there are some software companies that are not interested in maturing their processes or reducing long-term costs, that place no value on the intellectual property of testing artifacts, or that simply want to continue relying primarily on GUI testing to get a 'gut feel' for their product before releasing it. However, many large companies that produce software (Microsoft, Cisco, Google, Siemens, etc.) understand the value proposition that professional testers provide to organizational health, and specifically hire into testing roles people who have both broad technical skills and the common traits we tend to associate with good testers.
This post is not meant to question the need for non-technical people who have in-depth, current domain or business knowledge of the application space, or who understand market expectations and customer demands in the software engineering process. The question I ask is whether the value these individuals bring to the software development process is misplaced: would their contribution be more cost effective, and provide greater overall value to the customer, if they were in a role (other than testing) that better utilized their knowledge by contributing to defining requirements and designing high-quality software, rather than trying to beat in quality through bug finding?