Planning Does Not Necessarily Make Perfect

For the past several months I have been testing setup for my application. Actually, that's not true - it was only about two weeks ago that I started testing. The two months before that I:

  • Wrote the test specification, got it reviewed and signed off by my feature team, and had it reviewed by the other testers on my team.
  • Wrote a second test specification specifically for our integration with a partner team.
  • Learned how to set up and configure the various pieces of setup infrastructure.

Each of these I thought would be simple. Writing the test specification mostly was, once I wrapped my head around how our setup works. (It's not your standard "run an MSI".) Writing the integration test specification took weeks longer than I had expected because the partner team wanted to see the gory details of every last integration test I planned to run; it seemed to take fifty revisions for them to sign off on my spec. Setting up and configuring the infrastructure also took weeks longer than I had expected, due to a seemingly endless series of "It worked for me; I don't know why it doesn't work for you" and "That piece requires Windows Server 2003 *Enterprise*; Standard doesn't work at all" and "You're doing things differently from most partner teams, so our handy auto-configure tool won't work, and instead you will have to go through these thirty steps".

Finally I was able to start executing the first set of integration test cases. While I mostly had their steps correct, I did have to change a few things. Next I moved on to the second set of integration test cases, and I soon discovered that they required massive changes.

How could that possibly be? The test specification and test cases had been reviewed by numerous people in great detail. How could they possibly not be perfect?

Are you at all surprised? I wasn't. I learned long ago that the amount of time and effort put into reviewing a specification does not necessarily have any relation to how accurately it describes reality. This is one reason I favor Agile methodologies - they dispense with the myth that accurate advance planning is possible. This is also one reason I am experimenting with Session Based Test Management - I no longer see the point in spending time writing and reviewing long lists of test cases, many of which will no longer make sense by the time I run them, assuming, of course, that I ever do get around to running them!

I am about to embark on writing the test specification for my other feature. This will focus on three areas:

  1. The test missions for the feature.
  2. The way in which I plan to use a model-based testing-ish automation stack to enable my test machines to do something somewhat exploratory testing-ish. (See the sketch after this list.)
  3. The risks which seem likely to prevent my feature from shipping.
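
To make that second area a bit more concrete, here is a minimal sketch of the model-based testing-ish idea: describe the feature as a small state model and let the test machine take random walks through it rather than following a scripted list of steps. Everything in it - the states, the actions, the oracle - is a hypothetical stand-in for whatever the real feature exposes, not our actual automation stack. A sketch in Python, just to keep it short:

    import random

    # Hypothetical model of the feature: each state maps its legal actions to the
    # state the product should land in afterward.
    MODEL = {
        "NotInstalled": {"install": "Installed"},
        "Installed":    {"configure": "Configured", "uninstall": "NotInstalled"},
        "Configured":   {"uninstall": "NotInstalled"},
    }

    def apply_action(state, action):
        # Stand-in for driving the real product; here it simply follows the model.
        return MODEL[state][action]

    def random_walk(steps, seed=None):
        rng = random.Random(seed)
        state = "NotInstalled"
        for _ in range(steps):
            action = rng.choice(sorted(MODEL[state]))
            next_state = apply_action(state, action)
            # Oracle: the product should end up in the state the model predicts.
            assert next_state in MODEL, f"walked into unknown state {next_state!r}"
            print(f"{state} --{action}--> {next_state}")
            state = next_state

    random_walk(steps=8, seed=42)   # a different seed wanders down a different path

Because each run can take a different path, the automation keeps poking at the feature in ways nobody scripted ahead of time, which is the somewhat-exploratory part.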

I think this approach covers the myriad concerns my feature team, my test team, my management, and I have:

  1. Test missions and SBTM enable us to track which features have and have not been tested, and how much more testing we think is necessary, without requiring the full set of test cases to be determined before we start.
  2. Model-based testing gives us automated tests which can be run continuously, while also working against the tendency to train the product to pass the automated tests.
  3. Using risk to organize and prioritize my testing will, I think, help me keep the big picture in mind, focus on the most important bits, and determine how deep to go and when to stop. (Are we confident that the risk for which I am testing is sufficiently unlikely to occur, or sufficiently ameliorated? Then I'm done enough.) A small sketch of this prioritization follows below.
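
And here is an equally minimal sketch of what I mean by using risk to prioritize: score each risk by how likely it seems and how much it would hurt, work on the biggest exposures first, and stop once what remains feels acceptable. The risks, the scores, and the threshold are all made up for illustration.

    # Hypothetical risks, each scored 1-5 for likelihood and impact.
    RISKS = [
        ("Setup fails on Standard edition",              4, 5),
        ("Auto-configure tool mangles partner settings", 3, 4),
        ("Uninstall leaves stale configuration behind",  2, 3),
    ]

    DONE_ENOUGH = 6  # exposure below this feels sufficiently ameliorated

    # Exposure = likelihood * impact; test the biggest exposures first.
    for name, likelihood, impact in sorted(RISKS, key=lambda r: r[1] * r[2], reverse=True):
        exposure = likelihood * impact
        verdict = "keep testing" if exposure >= DONE_ENOUGH else "done enough"
        print(f"{exposure:>2}  {name}: {verdict}")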

I have never taken this approach before so I do not know whether it will work. If you use a similar approach, I am interested in hearing how it works for you. I am most curious to learn how it works for me!

*** Want a fun job on a great team? I need a tester! Interested? Let's talk: Michael dot J dot Hunter at microsoft dot com. Great testing and coding skills required.