Motley says: "More test automation is always better"


Summary


 


Motley: A tester's job is to find bugs, so measure testers by the number of bugs they find. More test automation is always better.


 


Maven: Do not measure testers by the number of bugs they report. Think of the test team more as a quality assurance team than a quality control team. Too much test automation can be a problem due to brittle test infrastructure, too much time spent troubleshooting tests, suboptimal investment in tests, and a failure to think like a user.


______________________________


 


[Context: Motley has been having issues with a counterpart team in the company]


 


Motley: I am a little disappointed in our test team. They really haven't reported many bugs in the product lately. I feel like I have to kick them in the pants to get anything done.


 


Maven: I always thought we had a great test team. We have a bunch of really smart people, and they have made previous releases great. What is the problem?


 


Motley: Lately all the bugs are reported either by the development team or by others using the product. We pay the test team to test the product and find issues. If I were the test manager measuring them by the number of bugs they find, they would all fail this year's annual review.


 


Maven: Whoa. Before we explore the real reasons the test team is not reporting bugs, let's talk about your statement about measuring testers.


 


Motley: Why? The best way to measure a tester is by the number of bugs they find. You can easily mine the bug database for the numbers, which makes evaluating the test team straightforward.


 


Maven: Sorry, bud. Measuring testers by the number of bugs they find is a really dumb thing to do.


 


Motley: Don't EVER call me "dumb" - <pow>.


 


Maven: Ouch! Fine. I deserved that, and I apologize. Anyway, just because a tester is not reporting bugs does not mean they are not being productive. Think of the test role less as "quality control" and more as "quality assurance". The best tester can prevent a bug before it happens.


 


Motley: But they do control quality, and how would they prevent a bug before it happens? Developers hand over the code when they are ready, expecting the tester to find all the issues.


 


Maven: Maybe in the past. Now, particularly on agile teams, testers are involved in the process right from the beginning. They participate in functional specification reviews, contribute to user scenario development, take part in design and code reviews, do private buddy testing, help compute metrics like code coverage, and perform many other quality-related activities. They do not have a sole test responsibility - they are another member of the team. A good tester helps assure quality by finding issues in all aspects of the development process and by improving the team's processes to prevent bugs. The best testers find no bugs in the code because they worked to prevent them right from the beginning. As a result, it makes no sense whatsoever to measure a tester by the number of bugs they report in the bug database.


 


Motley: I guess that has some merit. The best tester may actually have a lower bug count than the poor testers. That is kind of an odd way of looking at the performance of a tester, though.


 


Maven: Odd, but reasonable. The test team should still be testing end-to-end scenarios in the product and reporting test results, however. You are saying that isn't happening? Why?


 


Motley: I talked to Morgan, the test lead for the multimedia feature. They are focusing all of their time on improving the test automation to ensure we have full coverage. However, the test infrastructure they are using is very unreliable and often reports incorrect results. Another team in the company is responsible for that infrastructure, but because our tests are a little more complicated, Morgan's team ends up troubleshooting the issues. It consumes so much time that they can't get to the tasks that matter, like attending reviews and actually testing the product.


 


Maven: A common problem when teams focus too much on automation.


 


Motley: What do you mean? Test automation is a good thing. It allows us to re-run tests without human intervention on a regular basis. Without test automation we risk breaking many areas of the product every time we make a change.


 


Maven: Do not get me wrong - automation can be a great thing. Unit testing is a form of automated testing and, as we have been discussing, something absolutely every software developer must do. You make a change, you run your tests, and perhaps tests from other teams, to ensure you have not broken any functionality. The more unit tests, the better. The higher the code coverage of those unit tests, the better.
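
[Aside: To make Maven's point concrete, here is a minimal sketch of a unit test using Python's built-in unittest module. The add function and its tests are hypothetical stand-ins for real product code.]

    import unittest

    def add(a, b):
        """Hypothetical product code under test."""
        return a + b

    class AddTests(unittest.TestCase):
        def test_adds_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_adds_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()  # re-run after every change to catch regressions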


 


Motley: You are contradicting yourself all over the place, Mr. Maven. First you said you can focus too much on automation, and now you are saying more is better. Inquiring minds want to know - which is it?


 


Maven: More unit tests - better. More scenario-based automated tests - not necessarily better. There is great value in having a basic test suite that covers the core user scenarios of the product. However, relying too much on automation at the scenario level has disadvantages:



  • If the architecture of the test infrastructure is brittle (i.e., not robust), every time the product changes, major changes to the automated tests may be required. Testers then spend more time getting the automation to work than adding real value to the product (see the sketch after this list).

  • A heavy focus on automation perpetually puts the test team behind the development team, if that is how your team is organized. If the developers design for testability, as they should, this is less of a problem because the testers can start developing automation fairly quickly. If automation is an afterthought, the developers may be on to the next feature before the test team is done, which means the feature isn't "done" according to our definition. Going back later to diagnose and debug issues is also a costly context switch, as the code is no longer fresh.

  • If the infrastructure on which the current features are built is expected to change in the next release, a rewrite of the test automation is often required. Should the team spend a large amount of time building automation infrastructure that will change in the near future? Tough call.

  • An automated test cannot think like a user, even if you inject some randomness into the tests. Over-reliance on automation leaves nasty test holes in the product that may only be discovered after shipping.
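
[Aside: A sketch of the brittleness problem from the first bullet. UiDriver and its methods are hypothetical stand-ins for a real UI automation framework; the point is the contrast between hard-coding screen details in every test and confining them to one page-object class.]

    # Hypothetical sketch; UiDriver is a stub so the example is self-contained.
    class UiDriver:
        def click_at(self, x, y):           # brittle: raw pixel coordinates
            print(f"click at ({x}, {y})")

        def click_by_id(self, element_id):  # more robust: stable identifier
            print(f"click element '{element_id}'")

    # Brittle: breaks whenever the layout shifts by a few pixels.
    def test_play_video_brittle(driver):
        driver.click_at(412, 305)  # where the Play button happens to be today

    # Page-object style: product knowledge lives in one place, so a UI
    # change means one fix here instead of edits across every test.
    class MediaPlayerPage:
        def __init__(self, driver):
            self.driver = driver

        def play(self):
            self.driver.click_by_id("play-button")

    def test_play_video_robust(driver):
        MediaPlayerPage(driver).play()

    if __name__ == "__main__":
        driver = UiDriver()
        test_play_video_brittle(driver)
        test_play_video_robust(driver)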

 


Motley: And here I thought more automation across the board was always better. You actually make some good points. I am going to have a chat with the test folks and make sure we are focusing on the right things. We should give up some work on automation if it means we ship the product sooner without causing ourselves any long-term harm.


 


Maven: Sounds great. Let me know how you make out. I'm now going to stick some tissue up my nose to stop the bleeding, thanks to you.


 


Motley: Ah, you know you deserved it.


______________________________


 


Maven's Pointer: At Microsoft, I (James) have worked with a test team that seems to spend more time diagnosing and troubleshooting organization-wide automation infrastructure than it does improving processes, working closely with developers on test plans, and manually testing the product as a user would. Automation is a tool for detecting regressions earlier in development, but it should not be used as a crutch. Given that philosophy, limit the time you spend on automation to a reasonable level and pay attention to diminishing returns.


 


Maven's Double Pointer Indirection: We talked about code coverage in the past. One mistake test teams make is demanding to own the code coverage numbers and insisting that coverage be measured against the test team's automated tests. Scenario-based test automation is not about testing individual components, modules, methods, and lines of code - it is about ensuring that a scenario works as it should. Leave code coverage metrics to the developers and measure coverage with unit tests.
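
[Aside: One hedged way to act on that advice, sketched with coverage.py, a real Python coverage tool (installed via "pip install coverage"). The module names mymodule and test_mymodule are hypothetical placeholders for product code and its unit tests.]

    import unittest
    import coverage

    # Measure only the product code, not the tests themselves.
    cov = coverage.Coverage(source=["mymodule"])  # "mymodule" is hypothetical
    cov.start()

    # "test_mymodule" is a hypothetical unit-test module name.
    suite = unittest.defaultTestLoader.loadTestsFromName("test_mymodule")
    unittest.TextTestRunner().run(suite)

    cov.stop()
    cov.save()
    cov.report()  # per-file line coverage achieved by the unit tests alone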


 


Maven's Resources: 



  • I.M. Testy's blog, from a guy in Microsoft Engineering Excellence who has been teaching about testing for a long time.

  • Alan Page's Notes and Rants, from another guy in Engineering Excellence who really knows his test material.

  • How We Test Software at Microsoft, by Page, Johnston and Rollison, Microsoft Press, ISBN: 9780735624252, August 2008.
