Planning Does Not Necessarily Make Perfect


For the past several months I have been testing setup for my application. Actually, that’s not true – I only started testing about two weeks ago. During the two months before that I:

  • Wrote the test specification, got it reviewed and signed off by my feature team, and had it reviewed by the other testers on my team.
  • Wrote a second test specification specifically for our integration with a partner team.
  • Learned how to set up and configure the various pieces of setup infrastructure.

Each of these I thought would be simple. Writing the test specification mostly was, once I wrapped my head around how our setup works. (It’s not your standard “run an MSI”.) Writing the integration test specification took weeks longer than I had expected because the partner team wanted to see the gory details of every last integration test I planned to run; it seemed to take fifty revisions for them to sign off on my spec. Setting up and configuring the infrastructure also took weeks longer than I had expected due to a seemingly endless series of “It worked for me I don’t know why it doesn’t work for you” and “That piece requires Windows Server 2003 *Enterprise*, Standard doesn’t work at all” and “You’re doing things differently from most partner teams so our handy auto-configure tool won’t work and instead you will have to go through these thirty steps”.

Finally I was able to start executing the first set of integration test cases. While I mostly had their steps correct, I did have to change a few things. Next I moved on to the second set of integration test cases, and I soon discovered that they required massive changes.

How could that possibly be? The test specification and test cases had been reviewed by numerous people in great detail. How could they possibly not be perfect?

Are you at all surprised? I wasn’t. I learned long ago that the amount of time and effort put into reviewing a specification does not necessarily bear any relation to how accurately it describes reality. This is one reason I favor Agile methodologies – they dispense with the myth that accurate advance planning is possible. It is also one reason I am experimenting with Session Based Test Management – I no longer see the point in spending time writing and reviewing long lists of test cases, many of which will no longer make sense by the time I run them, assuming, of course, that I ever do get around to running them!

I am about to embark on writing the test specification for my other feature. This will focus on three areas:

  1. The test missions for the feature.
  2. The way in which I plan to use a model-based-testing-ish automation stack to enable my test machines to do something somewhat exploratory-testing-ish.
  3. The risks which seem likely to prevent my feature from shipping.

I think this approach covers the myriad concerns that my feature team, my test team, my management, and I have:

  1. Test missions and SBTM enable us to track which features have and have not been tested, and how much more testing we think is necessary, without requiring the full set of test cases to be determined before we start. (The first sketch after this list shows the sort of session record I have in mind.)
  2. Model-based testing gives us automated tests which can be run continuously, while also working against the tendency to train the product to pass the automated tests. (The second sketch after this list shows the idea.)
  3. Using risk to organize and prioritize my testing will, I think, help me keep the big picture in mind, focus on the most important bits, and determine how deep to go and when to stop. (Are we confident that the risk for which I am testing is sufficiently unlikely to occur, or sufficiently ameliorated? Then I’m done enough.)
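
To make the first idea concrete, here is a minimal sketch, in Python, of the sort of session record and roll-up I have in mind. The field names and the report are invented for illustration; this is not an official SBTM artifact format, just the shape of the thing:

```python
from dataclasses import dataclass, field

# A hypothetical sketch of a session record, not an official SBTM artifact.
# Each session is a time-boxed block of testing against a charter (a test
# mission); rolling sessions up by feature area shows what has and has not
# been tested, with no test-case list written up front.

@dataclass
class Session:
    charter: str                  # the mission for this session
    area: str                     # feature area the session targets
    minutes: int                  # time actually spent testing
    bugs: list = field(default_factory=list)
    notes: str = ""

def coverage_report(sessions, areas):
    """Minutes of testing per feature area; zero flags an untested area."""
    spent = {area: 0 for area in areas}
    for s in sessions:
        spent[s.area] = spent.get(s.area, 0) + s.minutes
    return spent

sessions = [
    Session("Explore single-machine setup", area="setup", minutes=90,
            bugs=["fails on Standard edition"]),
    Session("Probe the partner integration handshake", area="integration",
            minutes=60),
]
print(coverage_report(sessions, areas=["setup", "integration", "upgrade"]))
# {'setup': 90, 'integration': 60, 'upgrade': 0}   <- "upgrade" not yet touched
```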
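
And here is an equally minimal sketch of the model-based, exploratory-ish automation idea. The model, states, and actions are invented for illustration, and my actual stack is considerably more involved; the point is that a random walk over a state model takes a different path each run, so the product never gets trained to pass one fixed script:

```python
import random

# A minimal model-based-testing sketch (hypothetical; not my actual stack).
# The model is a state machine: each state maps the actions legal in that
# state to the state each action should produce.
MODEL = {
    "logged_out": {"log_in": "logged_in"},
    "logged_in":  {"log_out": "logged_out", "open_doc": "editing"},
    "editing":    {"close_doc": "logged_in"},
}

class SystemUnderTest:
    """Stand-in for the real product. A real adapter would drive the
    product's UI or API in do(); this stub just mirrors the model so
    the sketch runs on its own."""
    def __init__(self):
        self.state = "logged_out"
    def do(self, action):
        self.state = MODEL[self.state][action]

def random_walk(steps=100, seed=None):
    """Walk the model at random, applying each action to the product
    and checking that the product's state tracks the model's state."""
    rng = random.Random(seed)
    sut = SystemUnderTest()
    model_state = "logged_out"
    for _ in range(steps):
        action = rng.choice(list(MODEL[model_state]))  # explore, don't script
        sut.do(action)
        model_state = MODEL[model_state][action]
        assert sut.state == model_state, (
            f"after {action!r}: product in {sut.state!r}, "
            f"model expects {model_state!r}")

random_walk(steps=200, seed=1)
```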

I have never taken this approach before so I do not know whether it will work. If you use a similar approach, I am interested in hearing how it works for you. I am most curious to learn how it works for me!


*** Want a fun job on a great team? I need a tester! Interested? Let’s talk: Michael dot J dot Hunter at microsoft dot com. Great testing and coding skills required.

Comments (7)

  1. P.C. says:

    Just wanted to say I appreciate your blog.  I’m a newbie to testing and I’ll have to admit a lot of what you talk about is very new to me but I know reading your blog will help me in my learning process.  The biggest difficulty I have as a new tester is that most of the writings and teaching of testing I find on the internet seem very high theory, disorganized, & full of management buzzwords…and nothing like the day-to-day practical testing skills that I need to learn to be successful in my job.  When you get a new employee who has technical skills but no experience testing, what do you have them do the first week/month on the job?  What reading assignments do you give them?  Any books or webinars that are must reads/watch?

    Thanks!

  2. Michael B. says:

    I’d like to respond with something I put up on the Agile Testing list a while ago. It’s worth repeating, I think, because it’s so relevant to what you’re saying here:

    I’m reading a book at the moment called “Sensemaking in Organizations”, by Karl Weick. It contains this intriguing passage, in which the parenthetical words refer to aspects of sensemaking: “[an incident described in an earlier passage] raises the intriguing possibility that when you’re lost, any old map will do. For example, extended to the issue of strategy, maybe when you are confused any old strategic plan will do. Strategic plans are a lot like maps. They animate and orient people. Once people begin to act (enactment), they generate tangible outcomes (cues) in some context (social), and this helps them discover (retrospect) what is occurring (ongoing), what needs to be explained (plausibility), and what should be done next (identity enhancement). Managers keep forgetting that it is what they do, not what they plan, that explains their success. They keep giving credit to the wrong thing – namely, the plan – and having made this error, they then spend more time planning and less time acting. They are astonished when more planning improves nothing.”

    The testing and development effort isn’t about your plan or your documents or your script; it’s about what you think and what you do when your plan–a form of fantasy–starts to interact with reality.  This resonates with the Agile Manifesto principle where responding to change is of higher value than following a plan.  Better engagement with the customers would seem the way to go on this (since automated testing will simply allow you to reproduce your own misconceptions about the customer’s desires at greater speed).  I would suggest that the solution to your problem involves figuring out why your customers are able to find bugs that you can’t find.  What are the principles or mechanisms by which they are finding problems?  Are they different from yours?  Are their tests covering different things that you could cover in your own testing?  Is there a chance to get the customer involved immediately, as soon as some code is produced?  Is there an opportunity to improve skills, resources or information for your testers or developers?

    Cheers,

    —Michael B.

  3. micahel says:

    PC: I point new testers at my reading list [http://blogs.msdn.com/micahel/archive/2006/03/08/ReadingList.aspx], with instructions to pick a book, read a chapter, decide what they think about it, discuss it with me (especially where they disagree), and then repeat with the next chapter, and the next book, and the next blog. Also to read my Hallmarks Of A Great Tester paper [http://www.thebraidytester.com/downloads/HallmarksOfAGreatTester.pdf], look for areas where they think they are not yet great, and start to improve them. Also to test everything they can get their hands on. Is that enough to keep you busy for a week or two? <g/>

  4. micahel says:

    Michael: Thanks for your suggestion. I agree that early involvement of customers makes a big difference. Thanks for reminding me to pursue this angle.

  5. Jim Bullock says:

    "Perfect" defined how? Perfect execution thereafter, realizing the plan exactly as imagined? How incredibly boring, if such a thing were possible.

    What about "perfect" as in wringing all the insight that’s reasonable out of the information we have before we start?

    What about establishing a common vocabulary?

    What about an exercise in working together before we start with the real work – essentially cheap practice and team-building?

    What about learning which outcomes are required for you to be successful and which are expendable?

    What about building some understanding of the trade-offs in the work you are about to undertake?

    I could go on.

    In many ways, the implicit goal and goodness measure of "a plan" are both wrong. They sound kind of silly when you say them, and very silly if you poke at them a little: to figure out exactly what we’re going to do, without deviation, for quite some time in a changing world? Perfect precognition? Those two are emotionally appealing but just wrong.

    One thing I like to find out from planning is how uncertain we actually are about what we’re about to go do.

    So, "perfect" defined how? I’m reminded of the quote attributed to Dwight Eisenhower, regarding planning for the WWII D-Day invasion. It’s usually misquoted:

    "Planning is everything, the plan is nothing.”

    The somewhat more reliable quote goes:

    "In preparing for battle I have always found that plans are useless, but planning is indispensable."

    For many purposes, the more common, incorrect quote is perfect enough.

  6. P.C. says:

    Thank you Micahel! I shall take a look at your suggested resources and I’m sure that will keep me busy for awhile.

    🙂

    P.C.