HOWTO: Analyze the data collected in a usability study

I just realised that I never got around to finishing off a series of posts on how to design and run an API usability study.

After designing and running the study, the fun part begins: making sense of all the data that you have collected.

The first thing to do is to gather together all the patterns of behaviour that you observed: for example, any problems that were experienced by two or more participants, or similar expectations that two or more participants shared. Typically I list each problem against the task that participants were working on when it occurred.
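In practice this step produces something like a simple table of observations. Purely as an illustration (the Observation record, task names and problem descriptions below are invented, and any language would do equally well), here is a minimal Java sketch of collating the notes and keeping only the problems seen with two or more participants:

    import java.util.List;
    import java.util.Map;
    import java.util.Set;
    import java.util.stream.Collectors;

    public class CollateObservations {
        // One row per observed behaviour: who saw it, on which task, and what went wrong.
        record Observation(String participant, String task, String problem) {}

        public static void main(String[] args) {
            List<Observation> notes = List.of(
                    new Observation("P1", "Write a line of text to a file", "Couldn't find a 'File' type"),
                    new Observation("P2", "Write a line of text to a file", "Couldn't find a 'File' type"),
                    new Observation("P3", "Write a line of text to a file", "Unsure which writer type to use"));

            // Group problems by task, recording which participants hit each one.
            Map<String, Set<String>> byProblem = notes.stream()
                    .collect(Collectors.groupingBy(
                            o -> o.task() + " :: " + o.problem(),
                            Collectors.mapping(Observation::participant, Collectors.toSet())));

            // Keep only the problems experienced by two or more participants.
            byProblem.forEach((problem, who) -> {
                if (who.size() >= 2) {
                    System.out.println(problem + " (" + who.size() + " participants)");
                }
            });
        }
    }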

The next question to ask is why the participants experienced these problems. To understand the reasons, I use the cognitive dimensions framework. For each problem, I try to relate it to one or more of the cognitive dimensions. I look carefully at the notes I took, at the comments participants made while working on the task, and at the code they wrote, paying particular attention to the changes they made to that code during the session. The cognitive dimensions framework provides multiple perspectives from which to view a problem.

Consider the first dimension, abstraction level. While watching a series of video clips of participants struggling to find a code snippet showing how to write a line of text to a file, I found myself asking "Is the problem due to the abstraction level of the API?". I then started looking more closely for behaviours and comments that would help me answer that question. Further probing into the data showed that many participants commented that the types they were browsing in the documentation were too "low level". The query strings participants typed when searching the documentation gave me a hint about the type of abstraction they were looking for: queries from many participants used the word "File". It began to look as if the fundamental problem was that the level of abstraction offered by the API did not match users' expectations.
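To make that kind of mismatch concrete, here is a small illustrative sketch. The API in the study was not Java's, but Java's standard file I/O is a convenient stand-in for the contrast between the stream-and-writer types participants were browsing and the task-level "File" abstraction their search queries suggest they expected:

    import java.io.BufferedWriter;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class AbstractionLevelSketch {
        public static void main(String[] args) throws IOException {
            // Lower-level route: the types involved are streams, charsets and writers.
            // None of them match the "File" concept participants were searching for.
            try (BufferedWriter writer = new BufferedWriter(
                    new OutputStreamWriter(
                            new FileOutputStream("notes.txt"),
                            StandardCharsets.UTF_8))) {
                writer.write("Hello, world");
                writer.newLine();
            }

            // Task-level route: a single call expressed in terms of a file,
            // which is the abstraction the search queries pointed towards.
            Files.writeString(Path.of("notes.txt"), "Hello, world" + System.lineSeparator());
        }
    }

The exact types don't matter here; what matters is whether the names participants see, and the number of concepts they have to assemble, line up with the vocabulary of the task they are trying to complete.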

After identifying one dimension that helps explain the issue, it's important to keep looking for other dimensions that might offer another perspective. The dimensions in the framework are not orthogonal: a change along one dimension can affect others. Continuing to look for other perspectives, it became clear that the API also suffered from a larger work step unit than participants expected. They often needed to work with more than one object to accomplish the task, typically using a factory object to create an instance of some other object. It was clear from the sequence in which participants wrote their code that this was completely unexpected.
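Again, purely as an illustration rather than the actual API from the study, the following Java sketch (with invented Connection and ConnectionFactory types) shows the difference in work step unit between what participants expected to write and what the API required:

    // Illustrative types only; not the API from the study.
    class Connection {
        private final String target;
        Connection(String target) { this.target = target; }
        void send(String message) { System.out.println(target + " <- " + message); }
    }

    class ConnectionFactory {
        static ConnectionFactory defaultFactory() { return new ConnectionFactory(); }
        Connection createConnection(String target) { return new Connection(target); }
    }

    public class WorkStepUnitSketch {
        public static void main(String[] args) {
            // What participants expected to write: one object, one conceptual step.
            Connection expected = new Connection("server-a");
            expected.send("hello");

            // What the API required: obtain a factory first, then ask it for the
            // object actually needed, enlarging the work step unit for the task.
            ConnectionFactory factory = ConnectionFactory.defaultFactory();
            Connection actual = factory.createConnection("server-a");
            actual.send("hello");
        }
    }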

After explaining all of the problems that I listed at the start in terms of the cognitive dimensions, I then go through the dimensions looking for other issues. Some issues aren't immediately obvious during the participant sessions, or aren't severe enough to stop participants from succeeding at the tasks, but they can still be significant enough to make the experience of using the API an unpleasant one. API usability isn't just about ensuring that users will be successful. Users also need to enjoy working with an API, otherwise they might not choose to use it. I call these types of issues "paper cuts". You want to avoid the "death by a thousand paper cuts" phenomenon, so I use the cognitive dimensions to look for these paper cut issues as well.

For each dimension I go through the data looking for instances where the API might not meet users' expectations with respect to that particular dimension. For these types of issues I don't limit myself to instances experienced by two or more participants. A particular paper cut might not be experienced by everybody, but if each participant experiences some subset of the paper cuts, even if it is a different subset for each person, that can be enough to tip them over the edge from wanting to use the API to not wanting to use it.

The last thing I do is collect video clips that demonstrate each of the issues I've identified and present my findings to the team. You'll notice that I do not spend time designing solutions to the problems. My job as a usability engineer is not to design an API; it's to provide the people designing the API with the information and knowledge they need to design as usable an API as they can. I do this through the cognitive dimensions framework, which I've found provides the insight and detail needed to understand why problems were experienced with an API, so that good solutions to those problems can be designed.