Software Testing Problems Part 1: The Customer

All these blog entries and not one dedicated to what I do.  The problem has been
that I have so much to say about the problem of testing software that it's taking
me a long time to organize my thoughts well enough for public consumption.  So
here is part one of an X-part series on the problems and potential solutions in
testing software. 

I could talk about testing early, regression testing, testing with check-in suites,
model-based testing, pairwise testing, exploratory testing, code coverage, complexity
metrics, manual testing, or automating all your tests, but, in truth, if I wrote
a book on software testing, chapter one would be about the customers.  If you
don't understand what the customer needs from your software, you will not be able to
verify that the implementation satisfies those needs.  And once you believe your
design meets those needs, you still have to understand how your customers will use
the software you're going to write, or you will never know where to focus your
testing and which methodologies apply best to the areas you'd like to focus on.

If you were asked to test a new product, the first question you should be able to answer
is "What is the user need for this product?"  The second is "How will my
customers use this product to satisfy these needs, and what are the most important
aspects of this usage?"   There is a subtle difference between the two questions. 
The first verifies, at a basic level, that you have designed something that serves
the purpose you intended.  The second is designed to make you think about how
people with different work styles will use and care about the feature or product
you are attempting to build.  While you will indeed want to think about how 80%
of customers will use the product to get from A to B, if your focus is on testing
you need to go beyond that to understand the realistic needs of someone who might
want to stop at C along the way. 

For a real-world example, let's pretend I'm talking about the Task List in Visual Studio. 
This feature needs to store user tasks and organize build errors so you can easily
find and fix your flaws.  I'm willing to bet we met those needs with the task
list you have now.  But if you are designing a new feature, you will still need
to ask these questions.  Now ask the second question.  Even getting at the
80% answer is complicated for this feature.  It involves knowing the size of the
average project used in Visual Studio, how many files the user has open at any time,
how many user tasks someone enters in a day, how many TODO comments users
typically insert per LOC, how many build errors are typically generated by a project
that large, how big most people keep the window, and so on.  Knowing the answers to
these questions will enable you to limit the scope of your test plan and
focus on those needs.

Once you know the answers, you can start thinking about the difficult but
realistic 20% cases.  What about the team that wants 5,000 items in their task
list?  Are the sorting and the information presented in the window's UI
sufficient for this group of customers?  Is this important enough to test? 
The reason you should spend only about 20% of your time on these scenarios,
mirroring the customer usage, is that they are difficult to unearth and it is
risky to sink too much of your time into them.  Brainstorm all you like; you still
won't have thought of everything someone could do with the Task List. 
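
To make that 80%/20% split concrete, here is a minimal sketch in Python of how you
might parameterize tests around a usage profile and then push into the stress case. 
The TaskList class, the "50 items" typical size, and the responsiveness budget are
all invented stand-ins for illustration; the real Task List is a Visual Studio UI
component and your numbers would come from actual customer data.

```python
import time
import unittest


# Hypothetical stand-in for the real Task List, which lives inside
# Visual Studio. Only the behavior under test is modeled here.
class TaskList:
    def __init__(self):
        self.items = []

    def add(self, description, priority):
        self.items.append((priority, description))

    def sorted_by_priority(self):
        return sorted(self.items)


class TaskListScenarios(unittest.TestCase):
    # 80% case: sized from an (assumed) typical usage profile of a few
    # dozen TODO comments and build errors per project.
    def test_typical_project_sorts_correctly(self):
        tl = TaskList()
        for i in range(50):
            tl.add(f"TODO item {i}", priority=i % 3)
        result = tl.sorted_by_priority()
        self.assertEqual(len(result), 50)
        priorities = [p for p, _ in result]
        self.assertEqual(priorities, sorted(priorities))

    # 20% case: the team that wants 5,000 items. Same correctness check,
    # plus a made-up responsiveness budget so a regression is visible.
    def test_large_team_task_list_stays_responsive(self):
        tl = TaskList()
        for i in range(5000):
            tl.add(f"build error {i}", priority=i % 10)
        start = time.perf_counter()
        result = tl.sorted_by_priority()
        elapsed = time.perf_counter() - start
        self.assertEqual(len(result), 5000)
        self.assertLess(elapsed, 0.5)  # arbitrary budget for illustration


if __name__ == "__main__":
    unittest.main()
```

The point isn't the specific assertions; it's that the typical-case test is shaped by
the usage questions above, while the stress test exists only because you deliberately
asked about the customers at C along the way.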

That last sentence reminds me of a lunch interview at Microsoft where I thought I would
be eating and instead ended up trying to think of all the things one could do to test a salt shaker. 
While this is a valuable question that can tell you something about a person's creativity
and ability to organize a response, it does nothing to tell you which tests you should
really be spending your time on, and how. 

josh