Setup testing

Imagine a feature of Internet Explorer or some other browser that would automatically delete items from its cache after some set number of days or months if they haven't been used.  For the sake of argument, let's say the default value is 6 months.  So you browse some website and get some HTML pages, a few images, and maybe a cookie.  This feature would run when you start the browser, look in the cache for items older than 6 months, and delete them.  Easy to describe, relatively easy to test.
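
As a rough sketch of what that cleanup might look like (purely illustrative - the cache location, the 180-day cutoff, and the use of last-access time are my own assumptions, not how any real browser implements this):

    # Illustrative sketch of the cache-expiry idea described above.
    import os
    import time

    CACHE_DIR = os.path.expanduser("~/browser_cache")   # hypothetical cache location
    MAX_AGE_DAYS = 180                                   # the "6 month" default

    def purge_stale_cache(cache_dir=CACHE_DIR, max_age_days=MAX_AGE_DAYS):
        cutoff = time.time() - max_age_days * 24 * 60 * 60
        for name in os.listdir(cache_dir):
            path = os.path.join(cache_dir, name)
            # Treat last-access time as "last used" and delete anything older.
            if os.path.isfile(path) and os.path.getatime(path) < cutoff:
                os.remove(path)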

One of the lessons I learned during my stint testing Outlook setup is that few system administrators want to accept the default values.  Suppose you were in charge of hundreds or thousands of machines and were getting ready to roll out this new feature to your company.  What would you want the duration to be?

You can make the case for any duration, really.  Some companies with a limited hardware budget might have users with small hard drives, and would want the duration to be zero or something very short to help keep hard drive space free.  At the other end of the spectrum, some companies might want to keep the cache alive for a year, either to cut down on time spent re-downloading files or because they need to implement some sort of archiving/tracking tools.

The point is that we (Office in general, OneNote in particular) need to make this setting controllable by administrators.  We do this via a policy tool which, greatly simplifying the implementation, sets a bunch of registry keys and other settings that define the options the way the administrator wants.  Generally speaking, the admin can leave these settings alone, set them to specific values, and determine whether or not the user can override them.
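
To give a feel for what "policy wins, then user setting, then default" means in practice, here is a small Python sketch using the standard winreg module (Windows only).  The key paths and value name are made up for illustration - real Office policies live under different paths and are read by the product's own code, not a script like this:

    # Illustrative only - hypothetical key paths and value name.
    import winreg

    POLICY_PATH = r"Software\Policies\Contoso\Browser"   # admin-controlled; a policy tool writes here
    USER_PATH = r"Software\Contoso\Browser"               # per-user preference

    def _read_dword(root, subkey, name):
        try:
            with winreg.OpenKey(root, subkey) as key:
                value, _ = winreg.QueryValueEx(key, name)
                return value
        except OSError:
            return None

    def effective_cache_age_days(default=180):
        # If the admin set a policy value, it wins and the user cannot override it.
        policy = _read_dword(winreg.HKEY_LOCAL_MACHINE, POLICY_PATH, "CacheMaxAgeDays")
        if policy is not None:
            return policy
        # Otherwise fall back to the user's own setting, then the product default.
        user = _read_dword(winreg.HKEY_CURRENT_USER, USER_PATH, "CacheMaxAgeDays")
        return user if user is not None else default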

Testing these options is actually quite easy.  We have some good automation in place for this, and it is a well understood problem.  The only hard part is the time-consuming nature of the testing - this type of test pass can take weeks (even with automation), can expose design problems (what is the right set of options to configure?), and in the case of Outlook can rapidly mushroom when you start to test deploying email accounts across hundreds or thousands of machines.  Poring through logs is tremendously time-consuming, and they are often the only clues you get when things go wrong.  Allowing for email back and forth, getting reports from beta testers and narrowing down the problems can take weeks.

This is all part of "setup" testing.  The act of installing software is not nearly as easy to get right as it seems (and we always get feedback on it).  But from the testing point of view, it is very time-consuming.  It may take a week or more to narrow down one specific bug that occurs only once in a thousand runs.  And when you compare your work to that of testers who own very visible areas (say the Ribbon from Office 2007, which gets a lot of attention), you can feel overlooked.  At the end of the day, some testers can say "Look at this cool new feature we added!" and the best you can say is "Yep, setup works."  And if that cool new feature the other tester is crowing about has a bug or two to polish, that's OK.  But when setup breaks, you definitely hear about it.  Immediately.

There's not a big lesson here.  Setup testing is the foundation of all software.  You can argue that not everyone uses every feature in every product, but you can't say that about setup.  It's the one feature that everyone who uses your software uses.

Questions, comments, concerns and criticisms always welcome,

John