Using OneNote's customers as a basis for designing tests

One of the easier-to-explain testing techniques we use at Microsoft to ensure our software meets the needs of users is "persona testing."  Personas are actually used to define new features first, so let’s start there.

Suppose we wanted to add a new feature to OneNote.  Our first choice is which feature to add.  Before we can make that choice, we need to decide who the intended user for the feature will be.  We have many different types of users for OneNote, so let's start by narrowing our list down to two: a medical professional and a student.  We'll call the medical professional "Dr. Simms" and the student "Pat."  Those names become the "persona names," and when we talk about new features, from inception to implementation, we always keep Dr. Simms and/or Pat in mind.

As a semi-fictional example, let's say we have a Table Summation feature (https://blogs.msdn.com/johnguin/archive/2007/12/10/table-sum-powertoy-for-onenote.aspx).  We could say that Pat will use this to total exam scores stored in a notebook, and Dr. Simms can also use it to quickly sum billing fees.  For Pat, we need a certain measure of accuracy: scores from 0 to 100 to one decimal place is probably a good start.  But Pat also has that one oddball professor who subtracts points for wrong answers, so we know we need to ensure negative number support.  Dr. Simms is adding monetary units, so for the US market, we need two decimal places of precision.  Dr. Simms also does NOT want negative numbers.
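
To make those persona requirements concrete, here is a minimal sketch in Python.  This is not the actual power toy's code; `table_sum` is a hypothetical helper, and the `places` parameter stands in for the per-persona precision decision (one decimal place for Pat, two for Dr. Simms):

```python
from decimal import Decimal, ROUND_HALF_UP

def table_sum(values, places):
    """Sum a column of table values, rounding to the persona's precision."""
    total = sum(Decimal(str(v)) for v in values)
    quantum = Decimal(1).scaleb(-places)  # 0.1 for Pat, 0.01 for Dr. Simms
    return total.quantize(quantum, rounding=ROUND_HALF_UP)

# Pat: exam scores to one decimal place, negatives allowed
print(table_sum([87.5, 92.3, -2.5], 1))   # prints 177.3

# Dr. Simms: billing fees to two decimal places
print(table_sum([120.00, 75.50], 2))      # prints 195.50
```

Using `Decimal` rather than floating point is itself a choice the Dr. Simms persona pushes us toward: money totals should not pick up binary rounding error.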

Now we can use this data to develop a test plan.  Obviously negative numbers, integers and decimals are easy to figure.  We can also use the persona information to start designing boundary value cases - the upper and lower ends of the numbers for which we need to ensure the feature works.  For Pat, 100 seems like a logical upper bound, and -100 a logical lower bound.  Dr. Simms is a bit more interesting.  We can cap the upper bound testing at $50 trillion (my rough guess at the amount of money there is in the world), and since his billing is always positive, we can set 0 as the lower bound.  We will call this our "supported values" plan.
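
A sketch of how the supported-values plan might turn into boundary cases.  The limits are the ones guessed above, and `boundary_cases` is just the classic boundary-value-analysis recipe (the limit itself, a step inside, a step outside), not anything from the actual test plan:

```python
# Boundary values drawn from the persona limits above (hypothetical names).
PAT_BOUNDS = (-100.0, 100.0)      # grades; the oddball professor allows negatives
SIMMS_BOUNDS = (0.00, 50e12)      # billing: never negative, capped at $50 trillion

def boundary_cases(low, high, epsilon):
    """Boundary-value analysis: each limit, just inside it, and just outside it."""
    return [low - epsilon, low, low + epsilon,
            high - epsilon, high, high + epsilon]

for value in boundary_cases(*PAT_BOUNDS, epsilon=0.1):
    in_range = PAT_BOUNDS[0] <= value <= PAT_BOUNDS[1]
    print(f"{value:8.1f} -> {'supported' if in_range else 'out of range'}")
```

The "just outside" values are exactly the ones that feed the next step: deciding what the feature should do when the input is beyond what the personas need.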

Testing won't end there, though.  We don't want to merely verify the expected behavior with valid inputs.  We also have to test beyond the boundaries to include unexpected cases such as negative money, overflow of our expected upper bound of money, really bad grades (large negative numbers and the like), and currency entered with more than two decimal places.  In each of these cases, we'll call out what type of error to present.  For instance, for overflowing the largest expected numbers, we can open a dialog that explains the upper limit of the feature.  The persona information about Dr. Simms gives us the case to verify that behavior.  In other cases, like the extra precision entered with monetary units, we can ignore the extra data and not alert Dr. Simms.  Just "do the math" and present the results to two decimal places.
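
Those two decisions - loud errors for some bad inputs, silent tolerance for others - can be sketched in code.  Again, `sum_fees` and `MAX_TOTAL` are hypothetical names, and the $50 trillion cap is just the rough guess from the supported-values plan:

```python
from decimal import Decimal, ROUND_DOWN

MAX_TOTAL = Decimal("50e12")  # the rough $50 trillion cap guessed above

def sum_fees(fees):
    """Sum billing fees for the Dr. Simms persona: reject negative fees,
    silently drop precision past two decimals, and flag overflow."""
    total = Decimal("0.00")
    for fee in fees:
        amount = Decimal(str(fee))
        if amount < 0:
            raise ValueError(f"negative fee not supported: {fee}")
        # Extra precision is ignored without alerting the user.
        total += amount.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    if total > MAX_TOTAL:
        # In the real feature this would be the explanatory dialog.
        raise OverflowError("total exceeds the supported $50 trillion limit")
    return total

print(sum_fees([19.999, 5.25]))  # 19.999 truncated to 19.99; prints 25.24
```

A test plan built from the personas would assert all three behaviors: the quiet truncation, the negative-fee rejection, and the overflow message.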

We repeat this cycle with all manner of personas for all of our products, and apply the personas to each feature.  It also serves as a reality check to ensure we are creating functionality that can be used by the target audience we have in mind.

The last aspect of testing I'll mention here is hidden in at least one assumption I made above.  When we start testing this feature, there will be a bug that one of our personas will quickly discover and report.  Can you see what it is?

Questions, comments, concerns and criticisms always welcome,

John