Do testers need programming skills?


The debate over whether testers need to at least understand programming concepts is still raging within the discipline. To me this debate is puzzling, because it seems to suggest that as a professional I don’t really have to understand, or be fully proficient in, critical aspects of my trade. Even Cem Kaner noted, “I think that the next generation of testers will have to have programming skills.” Actually, there was a time not so long ago when testers had to have programming skills, so it is nice that Cem now acknowledges that skill as useful in testing.


Unfortunately, even within Microsoft a few people occasionally still want to differentiate between STE and SDET by blindly assuming that STE meant non-programming testers. The fact is that the old STE ladder-level guidelines clearly listed skills such as debugging production code and designing and developing effective automation as required skills for Microsoft testers. Unfortunately, some managers chose to selectively ignore these skill requirements, and some groups chose to differentiate between GUI testers and any tester who could write code by labeling them STE and SDET respectively. (This was an abomination of job titles, in my opinion.) The new SDET competencies at Microsoft are designed, and are supposed to be implemented, in a manner that reinforces the essential skills we expect from our testers, so that a tester at a given level in their career stage in one business unit has essentially the same skills as any other tester at the same level in any other group in the company.


But people are often resistant to change, and as I wrote in my last post, some people choose to wallow in self-pity, pretend they are the victim of some evil plot, hypercriticize change with dogmatic arrogance, and incessantly bemoan dubiously negative aspects of change from an often overly emotional, narrow-minded perspective. A person who moved from a testing role to program management stated, “I was a tester because I understand how users think and how they use products and I wanted to use that knowledge to make our software better.” Really? We make software better by beating quality into it? Does this demonstrate a good understanding of software processes and sound business logic? I ask these questions only because it is pretty well known that it is much cheaper to prevent defects, and that many defects can be found in the design process. So I am asking myself why in the world this person didn’t start as a Program Manager (responsible for translating marketing analysis and customer feedback into requirements and product design), or become one before now. What is even more remarkable about this statement is that it doesn’t even acknowledge that as a program manager this person is now in a role that should have a direct connection to the customer and a greater impact on making our software better. A development strategy or process that emphasizes customer advocacy primarily in the testing phases is immature and a gross waste of resources, since empirical studies have repeatedly shown that it is cheaper to prevent defects through better designs and clearer requirements than to find them during a testing cycle.


The same person stated, “I wanted to keep breaking software in the incredibly fun, very effective way I had been doing.” (Personally, I find API testing (which can also use a black-box approach) and white-box test design extremely fun and intellectually challenging, and both are also very effective when used appropriately.) Unfortunately, this comment seems to perpetuate the myth that testers make software better by finding bugs, and it also demonstrates an extremely limited view of the role and overall potential value of software testing to an organization. This is a very narrow, antiquated (in technology time), and immature view of software testing that treats testing primarily as a bug-finding endeavor. Beizer wrote that black-box testing exercises approximately 35–65% of the product, and Marne Hutcheson and I have empirical data demonstrating that GUI testing (the type of black-box testing most people are familiar with, and the type most non-technical testers are limited to performing) is not as effective as most people want to believe, and is often more costly compared with using a variety of approaches to software testing. Again, even Kaner notes, “Programmer productivity has grown dramatically over the years, a result of paradigmatic shifts in software development practice. Testing practice has evolved less dramatically and our productivity has grown less spectacularly. This divergence in productivity has profound implications—every year, testers impact less of the product. If we continue on this trajectory, our work will become irrelevant because its impact will be insignificant.” (I strongly suspect the ‘testers’ Kaner is referring to in this context are primarily non-technical, GUI testers, since that is the type of testing emphasized in his BBST course.)


There is no doubt that a person who does not at least understand programming concepts, or who lacks an in-depth technical understanding of the system they are testing, is unable to perform various activities that may be required in the role of a professional software tester. That person cannot perform code reviews (which have been proven to find certain classes of defects more effectively than any other type of testing); they cannot analyze code to determine which areas have not been tested and design tests from a white-box approach to increase testing effectiveness and reduce risk; they cannot debug errors and identify the root causes of defects; they cannot automate tests to free up their time or reduce costs during the maintenance phase of the product lifecycle; and they may not be able to adequately analyze and decompose test data, and so on. While some companies don’t rely on their testers to do this type of work, these are certainly tasks that any professional tester should be able to perform.
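To make the white-box point concrete, here is a minimal, purely hypothetical sketch (the function, its limits, and the checks are invented for illustration, not taken from any real product): a tester who reads the source sees the distinct branches and the two clamp boundaries, and can design one test per branch, whereas a tester working only from the GUI might never suspect those boundaries exist.

```csharp
using System;

static class Config
{
    // Hypothetical helper: returns a retry count clamped to [0, 10]; -1 signals invalid input.
    public static int ParseRetryCount(string text)
    {
        if (!int.TryParse(text, out int value)) return -1; // non-numeric input
        if (value < 0) return 0;                           // clamp low
        if (value > 10) return 10;                         // clamp high
        return value;
    }
}

static class ParseRetryCountTests
{
    static void Main()
    {
        // One check per branch visible in the source, including both clamp boundaries.
        Check(Config.ParseRetryCount("abc"), -1, "non-numeric");
        Check(Config.ParseRetryCount("-5"),   0, "below range");
        Check(Config.ParseRetryCount("99"),  10, "above range");
        Check(Config.ParseRetryCount("10"),  10, "upper boundary");
        Check(Config.ParseRetryCount("0"),    0, "lower boundary");
        Check(Config.ParseRetryCount("3"),    3, "typical value");
    }

    static void Check(int actual, int expected, string label)
    {
        Console.WriteLine($"{label}: {(actual == expected ? "PASS" : "FAIL")}");
    }
}
```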


I suppose there are some software companies that are not interested in actually maturing their processes or reducing long-term costs, that have no interest in the intellectual-property value of testing artifacts, or that simply want to continue relying primarily on GUI testing to get a ‘gut feel’ for their product before releasing it. However, many large companies that produce software (Microsoft, Cisco, Google, Siemens, etc.) understand the value proposition that professional testers provide to organizational health, and they specifically hire people into testing roles who have both broad technical skills and the common traits we tend to associate with good testers.


This post is not meant to question the need for non-technical people who have in-depth, current domain or business knowledge of the application space, or who understand market expectations and customer demands and needs, in the software engineering process. The question I ask is whether the value these individuals bring to the software development process is misplaced, or whether their contribution would be more cost-effective and provide greater overall value to the customer if they were in a role (other than testing) that better utilized their knowledge by contributing to defining requirements and designing high-quality software, rather than trying to beat in quality through bug finding.

Comments (11)

  1. David Drake says:

    I agree that understanding programming is essential for many types of testing.  The question is whether all testers need these skills.  Is usability testing worthwhile ever?  Good design cannot always predict the effect that results, after all.

    But it is difficult to argue with you, since some of your statements seem to rely so much on supposition of other people’s thoughts: "I highly suspect the ‘testers’ Kaner is referring to…" and "…not as effective as most people want to believe".

    Is this argument really so binary?  Couldn’t non-technical testers pair with technical testers?  Is it easier to come at a project with a lot of domain knowledge and pick up the technical side as you go, or vice versa?  And how much technical knowledge is "enough" to be considered a tester?

  2. I.M.Testy says:

    Hi David,

    I will answer your question as to whether all testers need these skills with a question. Would you go to a doctor who has no knowledge of human anatomy?

    By asking “Is usability testing worthwhile ever?” you seem to be indicating that understanding programming and understanding how to perform usability testing are mutually exclusive. Personally, I think most professional testers can perform a wide range of testing tasks and are not limited to two basic approaches.

    Actually, only one of my statements is a supposition about someone else’s comments. The single supposition I derive from Kaner’s comment is my opinion, based on my personal observations of ongoing work inside Microsoft, reviewing the content of the BBST course, and my knowledge and experience of the industry at large. If you or Cem have empirical evidence to refute my interpretation of his comment, then please share it, because I love to learn new things.

    My observation is in no way critical of Kaner or his course. In fact, I agree with Kaner that the testing discipline has not matured as rapidly as the development discipline, and I think we (professional testers) really need to figure out how to grow and mature the discipline.

    A colleague at Microsoft did a formal study of pair testing at Oregon a few years ago and discovered that it was not as effective as some would like to believe. I was quoted in the study and still suggest that pair testing may be more effective in a mentoring scenario, but his study provided pretty good evidence that two testers with similar backgrounds and knowledge did not provide a significant value-add to the testing process.

    Your question as to whether it is easier to come in with domain knowledge and pick up the technical side is a good one. Personally, I have seen many people with degrees in fields such as marine biology, anthropology, math, etc. learn the domain and the technical skills much faster than an individual who comes into testing with domain knowledge alone. (This is only an anecdotal observation on my part.)

    “…how much technical knowledge is “enough”…” is a good question to which there is no good answer. I guess we could ask: how much knowledge of physiology does a doctor need to be considered a professional doctor? Even a race car driver may not be an expert at rebuilding a car engine, but they do understand how it works and have at least enough knowledge of the system (the car they are driving) to diagnose many different types of engine or tire problems.

    Finally, I don’t think the question is really that binary. I suspect that some people and some companies will continue (at least for a while) with the status quo. But, as I suggested in my last post, I think a person should strive to become highly proficient in any undertaking, especially a career. This doesn’t mean they should become only as proficient as necessary to get by; it means they should strive to become experts at their work. In my career as a tester I find it important and rewarding to learn as much as I possibly can about my chosen discipline and the systems on which I work. To me, that is one of the key attributes of a professional tester.

  3. johnoverbaugh says:

    I am one of those testers who started out at MSFT as an STE and watched the transition. When I started, the argument was whether SDETs should be eliminated altogether; now the pendulum has fully swung the other way! I made the transition quite nicely and left as a TM with a technical background. So don’t think my comments are biased based on a lack of skill.

    When we started losing STEs due to a lack of technical ability and moving towards heavier automation, there was an overall increase in effectiveness (i.e., we could churn through test cases a lot faster). However, nothing should be absolute (<G>), and this was no exception… We lost incredible testers who had maybe script-level programming skills but were great at managing and executing a ton of manual cases. Overall, product quality went down, in my opinion, especially in consumer applications.

    I think maybe the industry’s backlash against Vista and the bugs therein is a little harsh (most everyone says the issues are due to the lack of non-technical people testing the product). I disagree in part because many of the bugs I have encountered are so obvious that ANYONE (even a total propeller-head, kernel-level SDET) should have found them! I do, however, believe that the transition to all SDETs did have a negative impact.

    On the teams I built at MS, I tended to have one or more ‘really strong SDETs’ who could develop automation frameworks and integrate with harnesses. I also had ‘really strong SDETs’ whose skills were more around quality and user experience. They could automate in C#, but the bar was the ability to use automation harnesses and code with occasional external assistance. They were hired primarily for their user empathy. This required better design in the framework, to allow the less-technical SDETs to automate their tests, which is exactly what the framework engineers wanted to do anyhow. (A rough sketch of this kind of layered framework follows at the end of this comment.)

    I found that building this type of mix resulted in the highest possible quality. Even my ‘less-technical’ SDETs were able to dig into code with devs, if needed, and understood all the programming concepts. Yet often their bugs were of the ‘user perception will be…’ variety.

    I do believe that MS has swung too far, setting a bar which is not too HIGH but which is too NARROW. If you don’t fit the very-technical, deep-programming ability niche, you don’t get hired on most teams. That’s a mistake.

    In my current organization, I have a tester who has no formal programming background. He actually does 90% of his ‘programming’ in Selenium with text-based scripts. And I wouldn’t trade him for the world! This engineer has single-handedly coded all the automation for our content-heavy web site (and he can generally code full coverage for 5-10 new pages per hour). We recently migrated to a new release of the CMS code that powers our site, and we were able to rely on that automation to cut our testing to 3 hours or so!

    I think it’d pay MS to step back and think again about the desired outcome (highest quality software at the lowest input) and ask if maybe the emphasis has been more on lowest input and less on quality–quality in terms of user experience as well as ‘test cases passing’. And some rethinking of how to reach that quality, and the skills required to do so, might help.

    Balance. It’s all about balance.

    John O.
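    A rough, purely illustrative sketch of the layered framework described above: a framework engineer hides the browser/GUI plumbing behind a tiny page API so that a less code-focused tester can add coverage for a new page as one more table entry. Every name here (ContentPage, SiteTests, the URLs, the stubbed checks) is an invented assumption, not any real Microsoft or Selenium framework.

    ```csharp
    using System;
    using System.Collections.Generic;

    // Framework side: written once by a "really strong SDET".
    class ContentPage
    {
        private readonly string url;
        public ContentPage(string url) { this.url = url; }

        // A real implementation would drive a browser (e.g. Selenium WebDriver)
        // and return the rendered title; it is stubbed here for the sketch.
        public string OpenAndGetTitle() => $"Title of {url}";

        // Stub: a real implementation would crawl the page and check link status codes.
        public bool HasNoBrokenLinks() => true;
    }

    // Tester side: adding coverage for a new page is one more row in this table.
    class SiteTests
    {
        static readonly Dictionary<string, string> ExpectedTitles = new Dictionary<string, string>
        {
            { "/products", "Title of /products" },
            { "/support",  "Title of /support"  },
            { "/about",    "Title of /about"    },
        };

        static void Main()
        {
            foreach (var entry in ExpectedTitles)
            {
                var page = new ContentPage(entry.Key);
                bool pass = page.OpenAndGetTitle() == entry.Value && page.HasNoBrokenLinks();
                Console.WriteLine($"{entry.Key}: {(pass ? "PASS" : "FAIL")}");
            }
        }
    }
    ```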

  4. I.M.Testy says:

    Hi John,

    It’s good to hear from you. We started at MS roughly at the same time, and my first job title was also STE. At that time, the policy in our group was that if your primary responsibility was to design, build, and maintain testing infrastructure then your job title was SDET and everyone else had a job title of STE. But, at the time most groups also included technical knowledge and coding skills in their STE interview process. Personally, I think the debate of titles is based on personal vanity, and is just a silly waste of time. I don’t get too hung up on job titles (STE vs SDET), I am more concerned with the overall skills and abilities a person brings to the table.

    You are a smart guy, so may I caution against jumping on the “Vista sucks because Microsoft only hires SDETs” bandwagon. People who present this argument are clearly ignorant of the fact that the majority of testers who worked on Vista were the same ones who worked on Windows XP. So, blaming the hiring practices at MS for issues in Vista only serves to expose a person’s biased ignorance and diminishes their credibility.

    Also, in my opinion there are two other pointless arguments in this debate that expose a person’s overly narrow view of the role of testing and demonstrate a blatant ignorance of the multitude of tasks required to test highly complex systems.

    First is the argument of mutual exclusivity. This argument tends to suggest that testers who know how to write automation or have programming skills are simply inept at testing from the proverbial ‘customer’ point of view. I suspect this argument is based on faulty logic along the lines of ‘good testers who perform usability testing only from the GUI can’t automate tests, therefore testers who can automate tests can’t perform good usability testing.’ (I know there have been some misguided managers who have built their teams with developer wannabes rather than looking for people who have the demonstrated traits common in many good testers and who also possess a strong technical background.)

    The second argument that some folks seem to be stuck on is the whole SDET == automation. (I know there are some people in our industry who are convinced automation is the devil’s tool; interestingly enough, some people were also vehemently skeptical about electricity, the telephone, and the automobile.) But really, the problem with this argument is again its extremely narrow view of the skills and capabilities, and of the more in-depth system knowledge, that someone with programming skills brings to the table. The questions a good manager should be asking themselves are, "What types of testing can’t a person who doesn’t understand programming concepts do?" and "Do we need to do these things to be successful?"

    Of course, if a manager’s only answer to the first question is "umm…they can’t code automated tests," then I would suggest that manager really doesn’t understand the scope of testing, the overall value that testing can provide to an organization, or how to ultimately reduce testing costs and improve overall product quality. They are probably stuck in the "let’s just hire a lot of people who mimic the customer (or at least think they do) to bang away at the GUI" twilight zone, where testing effectiveness is determined by the number of bugs found and the information provided by the test team is "hey…I found another bug."

    The key to the whole debate, and the reason I phrased the title of this post as a question, is the answer to the second question above. Each software organization essentially has to look at what types of skills and knowledge it needs in its employees to be successful in the long term.

    At Microsoft, the vision is to push quality upstream and prevent defects rather than trying to hire hordes of ‘testers’ to bang away at the GUI and find as many bugs as they can before the scheduled ship date. To achieve this goal we seek testers with greater overall technical skills and in-depth knowledge of the systems they are working on, who also possess the common traits we expect in a professional tester.

  5. David Drake says:

    That was an excellent response; I wouldn’t say that Cem Kaner’s audience is specifically non-technical, GUI-based testers.  Most of the BBST course is designed as a basis for testing theory, and I suspect that this is because he sees it as an entry into the field.  His own definition of testing states explicitly that it is a "technical investigation", and the course materials were initially developed for software engineering students.  But I think the thrust of your article is a discussion of the skills that are brought to bear by a tester, while Cem’s course (only the foundation-level course has been offered thus far) focuses less on the method and more on the theory.  As to whether he focuses on black-box testing while introducing the theory because he believes it is superior to other forms of testing, I cannot say, and neither has he said (that I know of).  So much for the facts I know about Cem Kaner’s intended audience.

    I definitely agree with you that being a proficient tester requires study of how software is constructed and how it behaves; I suppose I just fear that the pendulum swing from non-technical testing to purely automated testing might swing too far and disconnect the testing field from more subjective definitions of quality.  I don’t think that’s what you’re advocating, so I suppose that’s a concern that I bring to the discussion rather than one that I find in your statements.

  6. I.M.Testy says:

    Hi David,

    You’re absolutely right. I was not trying to denigrate Kaner or the BBST effort, and I think Cem has done a lot of great work in the past to put the discipline of testing on the radar so to speak. I was also very glad to see Cem specifically mentioning that testers in the future will need programming skills.

    Your concern about the pendulum is well grounded. I have witnessed some teams that ‘knee-jerked’ and hired mediocre programmers as testers. This is simply foolhardy. I also highly suspect that we will never see a purely automated testing environment in my lifetime; automation doesn’t write itself. Also, there are various types of automation. 80% of the automation we write at MS is below the GUI. When we write GUI automation we tend to rely heavily on abstraction layers, and also on model-based testing approaches. (A rough sketch of the model-based idea follows at the end of this comment.) Of course, we often expect each automated test to run on multiple environments, multiple platforms, and multiple languages as it is distributed to run on various machines throughout the company.

    But, you’re right. I would never advocate pure automated testing.

    Thanks for your comments…spot on!
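    To make the model-based idea a bit more concrete, here is a minimal, purely hypothetical sketch: the test holds a tiny state-machine model of a document window and walks it at random; at each step a real harness would drive the GUI through the abstraction layer and fail if the observed state does not match what the model predicts. The states, actions, and transitions are all invented for illustration and do not describe any actual Microsoft tooling.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    enum DocState { Closed, Open, Dirty }

    class ModelBasedWalk
    {
        // The model: the state the product should reach when an action is applied.
        static readonly Dictionary<(DocState From, string Action), DocState> Model =
            new Dictionary<(DocState From, string Action), DocState>
        {
            { (DocState.Closed, "New"),   DocState.Open   },
            { (DocState.Open,   "Edit"),  DocState.Dirty  },
            { (DocState.Open,   "Close"), DocState.Closed },
            { (DocState.Dirty,  "Save"),  DocState.Open   },
            { (DocState.Dirty,  "Close"), DocState.Closed }, // assume close discards edits
        };

        static void Main()
        {
            var rng = new Random(42); // fixed seed so a failing walk can be replayed
            var state = DocState.Closed;

            for (int step = 0; step < 100; step++)
            {
                // Pick any action the model says is legal in the current state.
                var legal = Model.Keys.Where(k => k.From == state).ToList();
                var chosen = legal[rng.Next(legal.Count)];
                state = Model[chosen];

                // A real harness would perform chosen.Action against the GUI here
                // (through the abstraction layer) and compare the observed state
                // to the model's prediction, failing on any mismatch.
                Console.WriteLine($"step {step}: {chosen.Action} -> {state}");
            }
        }
    }
    ```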

  7. stth10 says:

    "Do testers need programming skills?"

    It is an advantage, but when I compare the test teams I have worked with over the years, the best results have come from teams whose members had diversified skills, including individuals with strong domain knowledge rather than technical knowledge. The preferred ratio between the different skills of course depends on the context and nature of the product.

    One mandatory requirement for each test team member though is that he/she should have the mindset of a tester…

  8. I.M.Testy says:

    Hi stth10,

    Yes, I agree that each member of a test team should have the mindset of a tester, but I also think they should have the skills and knowledge to perform the complete spectrum of tasks required of a professional tester.

    For some reason people seem to assume that because a person codes, they do not have a diversified set of skills, including strong ‘domain’ knowledge. I simply don’t understand that logic.

    I think the key point we need to understand here is that there is no single best approach to software testing, and the more tools we have in our toolbox, the better service we provide to our customers.

  9. Irina Yatsenko says:

    Hello. You all sound like a very experienced crowd and I’m just "an" SDET, but this topic is of great interest to me so I’ll venture an opinion. Be merciful 🙂

    IMHO too many conversations about technical skills for testers end up with "coding". The pressure to demonstrate that you can code is so high that testers start producing totally useless tools and gadgets. Those tools usually have nothing to do with good programming practices or design.

    Automation is another area that the coding efforts should be directed to, but wait, how many people do we need to develop and maintain the automation framework? Certainly, not every SDET should be involved in that. And writing the actual feature automation is often tedious and doesn’t require much knowledge about software development (at least that’s the case in my group).

    I like the idea of a tester doing code reviews (I assume we mean the production code) and providing useful feedback to the developer as well as using the gained knowledge to drive the testing. However, doesn’t this imply that the tester should be at least as good as the developer with regard to designing and implementing software?  Where would the tester get the skill and how would he maintain it (keep up with all those dev blogs, read books about design patterns and C++ templates, write his own non-trivial code, etc)?

    On the other hand, there is also a strong pull in a completely different direction. From my personal experience I’ve observed that testers are expected to have rather broad knowledge about the systems their application works with and on (we are a client app, but we allow the user to open files directly from a server). Different SMB implementations out there – sure! SharePoint – no problem! (how to set it up, administer it, crack the databases open, and troubleshoot random config failures)  Authentication schemes – commonplace! And so on. The approach of "let’s spend some time to understand what exactly is coded into our app, then test for proper error handling" doesn’t work, because the answer is, "even if we fail because of a provider’s bug, we need to find it and push for a fix."  Why? Because the end user sees us as the point of failure, not the provider.

    Speaking of the end users: testers are also supposed to be "user advocates" and "create connections with the customers," which means participating in newsgroups, hunting down random bugs reported by customers so the bug becomes actionable and a dev can take a look at it, following up on competitor products, etc. We are also responsible for producing up-to-date documentation that reflects the actual product behaviour (versus the intended behaviour described in specs) as well as describes how to verify this behaviour.

    I’m not saying it’s impossible to have SDETs who can do all of the above and more; I truly believe they would help to ship higher-quality software… But where would those SDETs come from? Is there already an environment that would grow testers like this? (Personally, I’m thinking about moving to dev, starting all over and learning programming proper; then I’ll see whether problems found in testing still seem interesting and worth switching back.)

    Thanks for listening 🙂

  10. I.M.Testy says:

    Hi Irina,

    I agree with you that too many conversations focus on coding only, rather than programming knowledge (which to me are not the same thing). I know several elementary school children who can code, but I would say that their knowledge of designing highly complex code, effectively reviewing source code, or considering testability when writing code is limited (in most cases).

    IMHO, pressure to demonstrate coding skills by producing useless tools and gadgets primarily comes from inexperienced or non-technical managers who have no other way to evaluate skills other than having someone build another duplicate hot-key checker, or some other useless tool. Managers who don’t understand the value of in-depth technical knowledge and skills have no idea what people with those skills are capable of. This is one reason why Steven Sinofsky has said that engineering managers should be engineers themselves.

    You’re also right that we don’t want a lot of SDETs simply writing new automation frameworks. Test automation is non-trivial by any stretch of the imagination. Automation frameworks that provide an abstraction layer between the GUI and the functionality are great because they can simplify GUI automation. However, over 80% of the test automation at Microsoft is not GUI automation. Test automation in many groups is much more complex than simple scripted automation or keyword-driven automation that drives the GUI.

    A lot of teams do code reviews of both product code and test automation. Test automation is complex and demanding, and yes, there are some teams who actually ship their test automation externally! I would argue that test automation must be of even higher quality because we cannot afford failures in the automation.

    We hire primarily CS graduates, so that is where they acquire the knowledge. Some people are also self-taught. We also have a program that puts college grads from other engineering disciplines through intensive training. We are in the technology business. We must constantly maintain and grow our skills.

    I am beginning to question the whole "user advocate" and "customer connection" emphasis in testing. I think all disciplines (especially the project or program managers) should connect with the customer. What is the role of the program manager? How often do testers meet with the most important customers to understand their specific solution needs? How often are testers directly involved in designing the features? How much of my tangible deliverables (test cases (manual and automated), defect reports, test status reports, etc.) do I share with the customers who purchase our product? Personally, I think we really need to think about who the most important customers of a testing organization really are. Is it the person on the street buying or downloading the product, or is it the management team responsible for making the business decisions about what to ship, based on the information we provide directly to them?

    You are quite right that testers must have a broad set of skills, and that is what keeps this career both demanding and challenging.

  11. I. M. Testy says:

    This morning I installed Vista SP1 onto my laptop. I was pretty excited about this release because it