Types of testers and types of testing

In yesterday’s “non admin” post, Mat Hall made the following comment:

"Isn't testing the whole purpose of developing as non-admin?"

Remember, Larry is lucky enough that the REAL testing of his work is done by someone else. The last time I did any development in a team with dedicated testers, my testing was of the "it compiles, runs, doesn't break the build, and seems to do what I intended it to". I then handed it over to someone else who hammered it to death in completely unexpected ways and handed it back to me... 

Mat's right, and his comment served as a reminder that not everyone lives in an ivory tower with the resources of a dedicated test team. Mea culpa.

Having said that, I figured that a quick discussion about the kinds of testers and the types of tests I work with might be interesting.  Some of this is software test engineering 101, some of it isn’t.

In general, there are four different kinds of testing done on our products.

The first type of testing is static analysis.  Tools like FxCop and PREfast are run on our code daily by the developers and help to catch errors before they leave the developers' machines.  Gunnar Kudrjavets has written a useful post about the tools we use, which can be found here.
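To give a flavor of what these tools catch, here's a minimal sketch (my example, not one from Gunnar's post) of the kind of off-by-one bug PREfast can flag through SAL annotations:

```cpp
// Hypothetical example of a bug PREfast catches via SAL annotations:
// the annotation says the buffer holds "count" elements, but the loop
// writes one element past the end.
#include <windows.h>
#include <specstrings.h>    // SAL annotations (__out_ecount et al.)

void FillBuffer(__out_ecount(count) int *buffer, size_t count)
{
    for (size_t i = 0; i <= count; i++)   // off-by-one: writes count+1 items
    {
        buffer[i] = 0;                    // PREfast warns: possible overrun of 'buffer'
    }
}
```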

The second type is the responsibility of the developer – before a feature can be deployed, we need to develop a set of unit tests for that feature.  For some components, these tests can be quite simple.  For example, the waveOutGetNumDevs() unit test is relatively simple, because the API doesn't take any parameters and thus has a relatively limited set of scenarios.  Other components have quite involved unit tests – the unit tests in Exchange Server that exercise email delivery, for example, can be quite complicated.

In general, a unit test functions as a “sniff test” – it’s the responsibility of the developer to ensure that the basic functionality continues to work.
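To make that concrete, here's a rough sketch of what a waveOutGetNumDevs() sniff test might look like – a hypothetical test of my own devising, not our actual test code:

```cpp
// Hypothetical sniff test: waveOutGetNumDevs() takes no parameters, so the
// test simply verifies that it returns and that every device it reports
// can actually be queried for its capabilities.
#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>

#pragma comment(lib, "winmm.lib")

int main()
{
    UINT numDevs = waveOutGetNumDevs();
    printf("waveOutGetNumDevs reported %u device(s)\n", numDevs);

    for (UINT i = 0; i < numDevs; i++)
    {
        WAVEOUTCAPS caps;
        MMRESULT mmr = waveOutGetDevCaps(i, &caps, sizeof(caps));
        if (mmr != MMSYSERR_NOERROR)
        {
            printf("FAIL: device %u exists but waveOutGetDevCaps returned %u\n", i, mmr);
            return 1;
        }
    }
    printf("PASS\n");
    return 0;
}
```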

The next type of testing is component tests.  These are typically suites of tests designed to thoroughly exercise a component.  Continuing the waveOutGetNumDevs() example above, the component test might include tests that plug in and remove USB audio devices to verify that waveOutGetNumDevs() handles device arrival and removal correctly.  Typically a component covers more than a single API – all of the waveOutXxx APIs might be considered a single component, for example.
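Here's a sketch of one slice of such a component test – hypothetical, and covering only the open/close path across every reported device, since the arrival/removal cases need real (or simulated) pluggable hardware:

```cpp
// Hypothetical slice of a waveOutXxx component test: verify that every
// device waveOutGetNumDevs() reports can actually be opened and closed
// with a standard PCM format.
#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>

#pragma comment(lib, "winmm.lib")

int main()
{
    WAVEFORMATEX fmt = {};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 2;
    fmt.nSamplesPerSec  = 44100;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    UINT numDevs = waveOutGetNumDevs();
    for (UINT i = 0; i < numDevs; i++)
    {
        HWAVEOUT hwo;
        MMRESULT mmr = waveOutOpen(&hwo, i, &fmt, 0, 0, CALLBACK_NULL);
        if (mmr != MMSYSERR_NOERROR)
        {
            printf("FAIL: waveOutOpen(%u) returned %u\n", i, mmr);
            return 1;
        }
        waveOutClose(hwo);
    }
    printf("PASS: opened and closed %u device(s)\n", numDevs);
    return 0;
}
```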

And the last type of testing is system tests.  System tests exercise the product as a whole, so there won't be a waveOutXxx() system test, but the waveOutGetNumDevs() API would be tested as part of the system tests.  A system test typically involves cross-component tests – for example, testing the interaction between the mixerXxx APIs and the waveOutXxx APIs.
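As a sketch of the kind of cross-component check a system test might make (again hypothetical, not our actual test code), here's a test that opens a waveOut device and then asks the mixer APIs for the mixer corresponding to that open handle:

```cpp
// Hypothetical cross-component check: exercise the seam between the
// waveOutXxx and mixerXxx components by resolving the mixer for an
// open waveOut handle.
#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>

#pragma comment(lib, "winmm.lib")

int main()
{
    WAVEFORMATEX fmt = {};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 2;
    fmt.nSamplesPerSec  = 44100;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    HWAVEOUT hwo;
    if (waveOutOpen(&hwo, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
    {
        printf("SKIP: no wave output device available\n");
        return 0;
    }

    // Cross-component step: open the mixer associated with the waveOut handle.
    HMIXER hmx;
    MMRESULT mmr = mixerOpen(&hmx, (UINT)(UINT_PTR)hwo, 0, 0, MIXER_OBJECTF_HWAVEOUT);
    if (mmr != MMSYSERR_NOERROR)
    {
        printf("FAIL: mixerOpen on waveOut handle returned %u\n", mmr);
        waveOutClose(hwo);
        return 1;
    }

    mixerClose(hmx);
    waveOutClose(hwo);
    printf("PASS\n");
    return 0;
}
```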

System tests include both stress and non-stress tests; both are critical to the process.
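The stress flavor, crudely sketched below (and again hypothetical), looks less like "check the answer" and more like "hammer a path from several threads at once and watch for crashes, leaks, and deadlocks":

```cpp
// Hypothetical stress sketch: repeatedly open and close the default wave
// output device from several threads, looking for crashes and deadlocks
// rather than specific wrong answers.
#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>

#pragma comment(lib, "winmm.lib")

DWORD WINAPI StressThread(LPVOID)
{
    WAVEFORMATEX fmt = {};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 2;
    fmt.nSamplesPerSec  = 44100;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    for (int i = 0; i < 10000; i++)
    {
        HWAVEOUT hwo;
        if (waveOutOpen(&hwo, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL) == MMSYSERR_NOERROR)
        {
            waveOutClose(hwo);
        }
    }
    return 0;
}

int main()
{
    HANDLE threads[8];
    for (int i = 0; i < 8; i++)
    {
        threads[i] = CreateThread(NULL, 0, StressThread, NULL, 0, NULL);
    }
    WaitForMultipleObjects(8, threads, TRUE, INFINITE);
    for (int i = 0; i < 8; i++)
    {
        CloseHandle(threads[i]);
    }
    printf("Stress run completed without crashing\n");
    return 0;
}
```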

Now for types of testers.  There are typically three kinds of testers in a given organization. 

The first type of tester is the developer herself.  She's responsible for knowing what needs to be tested in her component, and it's her job to ensure that her component can be tested.  It's surprising how easy it is to end up with components that are essentially untestable, and those are usually the areas with the most horrid bugs.
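To illustrate one thing "ensuring the component can be tested" can mean in practice, here's a tiny hypothetical sketch: the component takes its device-enumeration function as a parameter, so a test can simulate an empty machine (or a machine with dozens of devices) without touching hardware:

```cpp
// Hypothetical testability seam: instead of calling waveOutGetNumDevs()
// directly, the component accepts the enumeration function as a parameter,
// so tests can substitute a fake.
#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>

#pragma comment(lib, "winmm.lib")

typedef UINT (WINAPI *GetNumDevsProc)();

// Production code passes waveOutGetNumDevs; tests pass a fake.
bool AnyOutputDeviceAvailable(GetNumDevsProc getNumDevs)
{
    return getNumDevs() > 0;
}

static UINT WINAPI FakeNoDevices() { return 0; }

int main()
{
    printf("real machine: %s\n",
           AnyOutputDeviceAvailable(waveOutGetNumDevs) ? "yes" : "no");
    printf("simulated empty machine: %s\n",
           AnyOutputDeviceAvailable(FakeNoDevices) ? "yes" : "no");
    return 0;
}
```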

The second type of tester is the test developer.  A test developer is responsible for coding the component and system tests mentioned above.  A good test developer is a rare beast; it takes a special kind of mindset to be able to look at an API and noodle out how to break it.  Test developers also design and implement the test harnesses that are used to support the tests.  For whatever reason, each team at Microsoft has its own pet favorite test harness; nobody has yet been able to come up with a single test harness that makes everyone happy, so teams tend to pick their own and run with it.  There are continuing efforts to at least rationalize the output of the various test harnesses, but that's an ongoing process.
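At its barest, a test harness is just a table of named test functions run in order, with uniform pass/fail output that tools can parse.  Here's a toy sketch of that skeleton (any real harness is, of course, far richer):

```cpp
// Toy harness sketch: a table of named test functions, run in order,
// reporting results in a uniform format.
#include <stdio.h>

typedef bool (*TestProc)();

struct TestCase
{
    const char *name;
    TestProc    proc;
};

static bool TestAlwaysPasses() { return true; }
static bool TestArithmetic()   { return 2 + 2 == 4; }

static const TestCase g_tests[] =
{
    { "AlwaysPasses", TestAlwaysPasses },
    { "Arithmetic",   TestArithmetic   },
};

int main()
{
    int failures = 0;
    for (size_t i = 0; i < sizeof(g_tests) / sizeof(g_tests[0]); i++)
    {
        bool ok = g_tests[i].proc();
        printf("%s: %s\n", ok ? "PASS" : "FAIL", g_tests[i].name);
        if (!ok) failures++;
    }
    printf("%d test(s) failed\n", failures);
    return failures;
}
```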

The third type of tester is the test runner.  This sounds like a button-presser job, but it's not – many of the best testers I know do nothing but run tests.  Their worth lies in their ability to realize that something's wrong and to serve as the first line of defense in tracking down a bug.  Since the test runner is the first person to encounter a problem, they need a thorough understanding of how the system fits together so that (at a minimum) they can determine whom to call in to look at a bug.

One of the things to keep in mind is that the skill sets for each of those jobs are different, and they are ALL necessary.  I've worked with test developers who don't have the patience to sit there installing new builds and running tests.  Similarly, most of the developers I've known don't have the patience to design thorough component tests (some do, and the ability to write tests for your own component is one of the hallmarks of a good developer, IMHO).