Grading On the Curve (Why the UI, Part 8)

This is the eighth part in my series of entries in which I outline some of the reasons we decided to pursue a new user interface for Office 2007.

Over the last two posts, I've discussed the Customer Experience Improvement Program and some of the data we've collected from the program.

How do we use that data to influence the design and organization of the Office 2007 user interface?

If you plot the command usage of the Office applications on a graph, you get a curve: a handful of commands account for a huge share of the clicks, and then the number of clicks per command slowly tapers off into a long tail. We use the data represented by the curve to inform us about how often people use certain commands. The curve itself helps us visualize the usage pattern of the overall program and the average "depth" to which most people use the product.
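To make that concrete, here's a quick sketch of what plotting such a curve looks like. It's written in Python with entirely made-up click counts; the real CEIP numbers aren't something I can share, so treat the data (and the command names) as illustrative only.

    # A sketch of the kind of curve described above, with entirely
    # made-up click counts (the real CEIP data isn't public).
    import matplotlib.pyplot as plt

    # Hypothetical command -> click count data.
    usage = {
        "Paste": 9_500_000, "Save": 7_200_000, "Copy": 6_800_000,
        "Bold": 2_100_000, "Undo": 1_900_000, "Print": 850_000,
        "Insert Table": 120_000, "Watermark": 4_000, "Superscript": 3_500,
    }
    counts = sorted(usage.values(), reverse=True)

    plt.plot(range(1, len(counts) + 1), counts, marker="o")
    plt.yscale("log")  # the long tail is easier to see on a log scale
    plt.xlabel("Commands, ranked by popularity")
    plt.ylabel("Clicks")
    plt.title("Feature usage curve (synthetic data)")
    plt.show()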

Many people suggest that "you guys should optimize the UI to match the feature usage data." On the surface, this sounds like a solid idea: you could have a computer determine the organization and prominence of different features based on where they fall on the curve. It would be very scientific. The only problem? We've already designed that product, and it's called Office 2003.

Put another way: if all we want to do is design a product that matches today's pattern of feature usage--well, I don't have to do any work! Office 2003 already matches the curve exactly; we can't do any better than statistical perfection.

The real equation at work here is data + human = design. We need to take the data, analyze it, understand its shortcomings, and use it to inform a design that meets our goals. The data by itself cannot produce a UI: it has no goals of its own, and it merely reflects the DNA of a product you already shipped!


(Image: "Twice a day, I go in and swap out the tapes...")

So, back to the initial question. How can we use the data to inform the Office 2007 design? There are two less obvious ways.

One thing we do is look for desirable features that have low usage numbers. In general, this combination is a great opportunity for us to take advantage of work we've done in a previous version by helping people find useful features they don't know are there. We can measure the "desirability" of a feature in several ways: a lot of direct customer requests, questions about the missing functionality on newsgroups and message boards, and sometimes just our gut feeling that people would like a feature if only they could find it. An example of this is the feature in Word that lets you put a watermark behind a document. Lots of people ask how to do it, but can't figure out where it is. The prominent gallery of watermarks in Word 2007 has already prompted many people to comment on what a "great new feature" it is.
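If you're curious what that triage looks like in spirit, here's a small sketch of how you might flag those "hidden gems." Again, the numbers are invented, and the cutoffs (LOW_USAGE, HIGH_DEMAND) are my own placeholders, not values we actually use:

    # A sketch of the "desirable but undiscovered" heuristic: flag
    # commands whose usage is low but whose demand signals (support
    # questions, direct requests) are high. All numbers are invented.
    usage = {"Bold": 2_100_000, "Watermark": 4_000,
             "Superscript": 3_500, "Insert Table": 120_000}
    requests = {"Bold": 12, "Watermark": 900,
                "Superscript": 150, "Insert Table": 40}

    LOW_USAGE = 10_000    # assumed cutoff: "hardly anyone clicks it"
    HIGH_DEMAND = 100     # assumed cutoff: "lots of people ask for it"

    hidden_gems = [cmd for cmd in usage
                   if usage[cmd] < LOW_USAGE
                   and requests[cmd] > HIGH_DEMAND]
    print(hidden_gems)  # ['Watermark', 'Superscript']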

Of course, there are things that can derail this process. First, a low-quality or poorly designed feature will not succeed no matter how easy it is to find in the UI. Where possible, we've tried to "spruce up" old features to make them worthy of more prominent placement. Second, a bad name for a feature can turn people off from using it. Do we change the name, hoping that new people will discover it? Or do we keep it the same, knowing that it hurts discoverability but also that existing users of the feature won't be confused? It's a hard judgment call to make sometimes.

The second way we use the data is by looking for frequently used features that are hard to get to today. Any time we see this, it represents people overcoming the user interface to use a buried feature because it's that important to them. A great example of this is "superscript" in Word. In Word 2003, it must be added to the toolbar manually through customization. Yet, even as a non-default toolbar button, it gets more clicks than 30% of the buttons on the Formatting toolbar. The opportunity here is to find the things that people love, and that even more people would use if they only knew about them.
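As a rough illustration of that kind of comparison, here's the arithmetic behind a statement like "it out-clicks 30% of the default buttons." The click counts below are invented for illustration, not real Formatting toolbar data:

    # What fraction of the default toolbar's buttons does a buried
    # command out-click? Click counts are made up for illustration.
    formatting_toolbar = {   # default buttons -> clicks
        "Bold": 2_100_000, "Italic": 1_400_000, "Underline": 900_000,
        "Align Left": 300_000, "Center": 280_000, "Align Right": 9_000,
        "Highlight": 2_500, "Grow Font": 1_200, "Shrink Font": 1_000,
        "Border": 800,
    }
    superscript_clicks = 3_500  # a non-default, customized-in button

    beaten = sum(1 for clicks in formatting_toolbar.values()
                 if clicks < superscript_clicks)
    share = beaten / len(formatting_toolbar)
    print(f"Superscript out-clicks {share:.0%} of default buttons")  # 40%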

The point of both of these exercises is to reshape the feature usage curve. We want to see more people using a broader set of tools and saving time because of it.

Of course it's true that we also use the feature usage data to figure out which commands need to be ultra-efficient and which can be taken out of the product entirely, but that's really the less lofty part of our goal. Success for the Office 2007 UI means that we broaden the Office 2003 feature curve, not that we match it.