You Gotta Work!

Apoorva asks:

How do work assignments take place for STE's? An example would be, I was testing this product which involved client/server stuff. I was assigned to test the functionality of the server. In short, anything in the spec which was server related or came from the server dev. team, I was put in charge of by my test lead. How does it work at Microsoft? How is one assigned which part of the software/specs to test? Is there a particular method followed (such as maybe a new tester on the team will be assigned certain tasks which he might find easier to handle and slowly get into the product)? I probably am not explaining this well with an example, but I hope you get the picture.

In the main, one person is responsible for each feature in a product, and each person is responsible for at least one feature new to the in-progress version of the product. When that release goes out the door all those new features instantly become legacy features, and so each person is now responsible for whatever feature(s) of that version plus one or more features new to the next version. Over time, each tester builds up quite a portfolio of features for which they are responsible. Automated tests are the only way to effectively deal with this situation. (Well, I guess you can always just ignore those legacy features on the assumption that they aren't changing and so they don't need to be tested anymore. But that's a sure path to shipping buggy software. And I know you don't want to do that. <g/>) Some features, especially new features, require more than one person to cover them. Introducing a new file format, for example, will almost certainly require a small team of testers to bang it into shape. (Once the new format is fully tested, however, just one or part of one tester is probably sufficient thereafter.)
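To make the legacy-coverage point concrete, here is a minimal sketch of the kind of automated regression test that keeps an old feature honest once its tester has moved on to newer things. The feature and names here are purely hypothetical (not from any Microsoft product); this just uses Python's standard unittest module:

```python
import unittest

# Hypothetical legacy feature: a parser that splits "key=value" config lines.
# The function and its behavior are illustrative assumptions, not a real API.
def parse_config_line(line):
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

class LegacyConfigParserRegressionTests(unittest.TestCase):
    """Pins down a legacy feature's behavior so regressions surface
    automatically, even when no tester is actively banging on it."""

    def test_basic_key_value(self):
        self.assertEqual(parse_config_line("timeout = 30"), ("timeout", "30"))

    def test_value_containing_equals(self):
        # partition() splits on the first '=' only, so embedded '=' survives.
        self.assertEqual(parse_config_line("path=C:=odd"), ("path", "C:=odd"))

if __name__ == "__main__":
    unittest.main()
```

Once tests like these run in an automated pass, the legacy feature stays covered at near-zero ongoing cost, which is exactly what lets one tester carry a growing portfolio.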

(A "feature" pretty much lines up one-to-one with a specification, and specifications generally don't span multiple groups. So it's unusual for testers here at Microsoft to be in the situation Apoorva describes above of "You're responsible for anything in this spec that relates to A, Jane will test anything that relates to B, and Bob will take care of the rest.")

How a person gets assigned to a particular feature varies based on the person's interests and the needs of the team. The team's Test leadership tries to take each tester's desires into account when making assignments. A happy tester is an effective tester, I always say, and a person who is excited by a feature is more likely to do a good job banging on it than someone who finds the feature extremely boring and spends as little time working with it as possible. On the other hand, there is a set of features that all need to be tested, and someone won't necessarily want to work on each one. Or the person who does might have a full plate of features already. We must also take into account each tester's experience and abilities, but this is less important than you might think. As I said in Grow, Darn You, Grow!, we believe very strongly in growing and challenging our people. A less experienced tester will certainly be watched over more than a more experienced tester will be, but even brand new testers have opportunities to own important features.

New testers are acclimated in a variety of ways, everything from throwing them in and letting them learn by force of necessity to sending them through formal tester training. The most successful method, in my opinion, is pairing the newbie with a more experienced tester on the team. This gives the new recruit a well-defined person to help them get going and show them the ropes. My team does this not just for brand-new-to-testing testers but also brand-new-to-our-team testers. Regardless of how experienced a tester you are, it still takes some time to learn how your new team does things. We find this mentoring process smooths the inevitable bumps and helps people come up to speed more rapidly.

After testing a particular feature for some length of time its owner may lose interest in the feature. Or another feature may become more interesting for some reason. Or the feature may be dropped. As I said earlier, happy testers are effective testers, and so if you want to get rid of a particular feature, management usually is happy to change things around to make that happen. Here's a hint, though: your request is much more likely to be granted if you make it at logical changepoints such as during planning for the next version!

*** Comments, questions, feedback? Want a fun job on a great team? Send two coding samples and an explanation of why you chose them, and of course your resume, to me at michhu at microsoft dot com. I need a tester and my team needs program managers. Great coding skills required for all positions.

Comments (3)

  1. "After testing a particular feature for some length of time its owner may lose interest in the feature. Or another feature may become more interesting for some reason. Or the feature may be dropped."

    Also, the tester (or dev) could leave the company. I wonder how you would handle a scenario where a tester/dev who has been on the same team for many years and owns many features decides to leave. More ill luck strikes, and some changes need to be made in a feature he owns…. You get the kind of scenario I am trying to create.

    Things such as good documentation, reviews, automation of tests with a detailed log, etc. would definitely help. What other practices would you recommend that would not leave a team in trouble in the above scenario?

  2. Test documentation definitely helps when this happens. In the best case automated tests will be simple, clean, and clear, and manual tests will be documented in a test case management system. There is still a lot of other knowledge that is just in that person’s head, though, and so a transfer period where the new tester can get braindumps from the old tester is invaluable.

    In the worst case, automated tests are messy and may not even run, manual tests are never written down (yes, this signifies much bigger problems, and well-run test organizations should never get into this state), and a handoff isn’t possible for some reason. This is close to complete disaster, but there always are other people who have some familiarity with the orphaned areas — at least the area’s dev, hopefully!

    The best way to avoid this scenario — and a general good practice even if your automated and manual tests are all perfectly documented and easy to understand — is cross-training amongst your team. Pair testing (i.e., two testers in front of one computer working together to brainstorm test cases and investigate failures) is a simple way to do this (as well as being quite effective in its own right).
