How we used user research data to help design Compatibility View

There have already been a few posts on Compatibility View in Internet Explorer 8, but none have gone into detail about the user research data we used to help design this feature. We collected data on Compatibility View throughout the IE8 beta releases and have made multiple decisions about its design based on data from lab studies, field studies, instrumentation, and community feedback. What I’d like to do is walk through some common questions we’ve received about the user experience of Compatibility View and explain how user data has influenced our decisions. Hopefully, this will give you some insight into our design process and how we use data to make feature design decisions.

Current design

So that we’re on the same page, let’s do a quick review. In the default layout, the current design places the Compatibility View button at the end of the Address Bar, next to the Refresh button, on sites that do not include the compatibility meta tag or HTTP header. The button has a “page” icon on it. The first two times a user refreshes a page with the button visible, a notification balloon pops up pointing to Compatibility View.

Compatibility View button in the address bar and balloon tooltip explaining what Compatibility View is.
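For context, the opt-out mentioned above is the X-UA-Compatible declaration: a site can tell IE8 which rendering mode to use, and when a page declares one, the Compatibility View button does not appear. A minimal example (the page content here is illustrative):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Tell IE8 to render this page in IE7 mode; with this declaration
       present, the Compatibility View button is not shown. The same value
       can be sent as an HTTP header: X-UA-Compatible: IE=EmulateIE7 -->
  <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
  <title>Example page</title>
</head>
<body>
  <p>Content designed for IE7 rendering.</p>
</body>
</html>
```

Sites that have already been updated for IE8 can instead declare `content="IE=8"` to lock in IE8 standards mode.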

Now that we’ve reviewed the design, let’s look at some of our top questions.

Why put the icon next to the Refresh button?

One of our top concerns with Compatibility View has always been its discoverability. If users can’t find this feature, they could have a severely compromised experience. When we first tested Compatibility View in our user research labs, it was a button on the Command Bar named “Emulate IE7.”

Old Emulate IE7 button in the command bar.

To test it, we created a task in which participants were asked to use a site that we knew would be unusable before switching to Compatibility View. When they arrived at the site and found it unusable, we asked them, “What would you do if you came to a site that looked like this?” The most common response by far was to click the Refresh button. After probing deeper into why people would try to “fix” a page in this way, most responses focused on a specific incident in the past where the participant had seen a page that was “off” somehow and refreshing the page had fixed the issue. Another common response was to look for something in the Tools menu.

Side note: We use the term “fix” a page a lot in this article because that is how our participants referred to what they were doing. We know the technical accuracy of this term is debatable in a lot of cases, but we’re sticking with how our participants thought about these scenarios.

Almost no one thought to use the “Emulate IE7” button, even though they had seen an explanation of it earlier in the session. The main problem was that, to our participants, the word “Emulate” had little to nothing to do with “fixing” sites. It was a technical description of the feature rather than a description of what users were looking for in these situations. We also knew the problem wasn’t that the button didn’t stand out enough. I mean, it was a BIG button that was totally new in the Command Bar. Our problem wasn’t that it didn’t catch users’ eyes; it was that it didn’t fit into users’ mental models.

Based on this data, we knew that the majority of users would likely look to the Refresh button when they encounter a page that needs to be “fixed.” To take advantage of this natural tendency, we placed the Compatibility View button next to the Refresh button, the most likely place a user will look when a page looks “off.” Additionally, to catch the small percentage of users who we expected to look in the Tools menu, we added a link there as well.

Why use that icon?

We knew that a button with a name as long as “Emulate IE7” was not going to work next to the Refresh button. It would take up too much space, and we already knew that the idea of emulating IE7 in IE8 was strange for most users. We brainstormed and developed a series of potential icons for Compatibility View. One challenge we faced was that communicating what Compatibility View does with only an icon is quite difficult. We tried various iterations, such as showing misaligned blocks to represent areas of the page that could be out of alignment. These versions were too abstract to communicate the general idea, and often the problem isn’t misaligned elements (e.g., menus might have the wrong background color). We decided on the “page” icon because it correlated best with how participants described pages that were not rendering correctly (e.g., “broken,” “screwed up”).

In refining the final design, we worked to make it fit as part of our family of icons in the Address Bar, including Stop, Refresh, and Go. We tried options with different colors and textures for the page but found the white page design was most recognizable as a “web page.” We used the same colors, line weights, and gradients present in the other icons to “ground” the icon in the overall Address Bar design.

When we tested the icon in the lab, we found that the majority of participants understood that the icon had something to do with the problem page they were looking at. Seeing the icon led most people to read the tooltip that explains the feature. After reading the tooltip, many participants reacted positively to the icon and felt it made sense given what the feature does.

Some reviewers felt that showing this icon on pages that were potentially fine would confuse users and make them think that the page was broken somehow, but we saw no indication of that in our studies. Participants did not even look toward the Compatibility View button unless they were in situations where it could potentially help them, which was one of our design goals.

Will people understand what it’s for?

We expect most users to understand Compatibility View and when to use it. Of our lab participants who read the Compatibility View tooltip or the notification balloon, or who tried the feature, all understood the feature’s purpose. Most importantly, everyone who understood the feature continued to use it on the sites they encountered that had issues. This was true for our lab participants and also for our field research participants.

Why have balloon notifications?

After a few rounds of testing in the lab, and feedback from participants in our field research, we still saw that some participants didn’t notice the Compatibility View button, even after they hit Refresh on a page with layout issues. Quite naturally, they got into an automatic pattern of quickly clicking Refresh and then putting their eyes right back on the page content to see if it fixed their problem. We had an important decision to make: do we add some kind of notification that lets the user know about Compatibility View when it could help them? The upside is that more people who would benefit from Compatibility View will find it. The downside is that we are adding another notification to the system.

We tested a build with the notification balloons in the lab and in our field research and found that displaying a balloon did in fact help more people discover Compatibility View. We didn’t take the decision lightly but felt the added awareness was worth the added notification.

Why have two balloon notifications?

Originally, the balloon notification showed only the first time a user refreshed a page displaying the Compatibility View icon. As I mentioned above, even showing it once was a strongly debated decision. But there were many concerns that once was not enough, so we tested showing the notification up to three times (after the first three times someone refreshed a page displaying the Compatibility View icon).

In the lab, we showed participants from one to three different sites with layout or content problems. At each site we asked them what they would do if they came to a site like this. Most clicked Refresh as we expected. What we found was that of people who saw the notifications, about two thirds reacted to the first notification and the other third reacted to the second notification. Showing the second notification caught the attention of a group of people who were very focused on the content of the page at the first site but were more likely to notice things outside of their normal pattern at the second site.

There was also a group who saw all three notifications and dismissed or ignored them all. We believe these users would not pay attention to the notification no matter how many times we showed it. This led us not to show the notification three times “to be safe”: we had no evidence that showing the notification more than twice would catch any additional users, and there is a real cost to over-communicating with users (e.g., the data in the recent E7 blog post about Action Center).
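The resulting behavior, show the balloon on the first two refreshes of a page displaying the button and never again, can be sketched as a simple capped counter. This is an illustrative sketch, not IE’s actual implementation; all names here are made up:

```python
# Illustrative sketch (not IE's actual code) of the capped balloon logic:
# show the Compatibility View balloon on the first two refreshes of pages
# that display the button, then never again.

MAX_BALLOON_SHOWS = 2  # the cap chosen based on the lab data described above

class BalloonPolicy:
    def __init__(self):
        self.shows = 0  # in the real feature this count would persist per user

    def on_refresh(self, page_shows_compat_button: bool) -> bool:
        """Return True if the notification balloon should be shown."""
        if page_shows_compat_button and self.shows < MAX_BALLOON_SHOWS:
            self.shows += 1
            return True
        return False

policy = BalloonPolicy()
print([policy.on_refresh(True) for _ in range(4)])  # [True, True, False, False]
print(policy.on_refresh(False))                     # False: button absent
```

The point of the cap is visible in the output: the third and later refreshes stay quiet, matching the finding that users who ignored two balloons ignored the third as well.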

Conclusion

I hope this gives you some insight into how we as an engineering team use user research data to make some of our design decisions. As you can see, a lot of time, research, and deliberation can go into a feature (and this was just a summary, I didn’t even get into our instrumentation data). We always want to base important decisions on the best data possible and that includes data from the lab, field, instrumentation, and community feedback. If you’d like to find out more about user research at Microsoft, please check out our user research website.

Jess Holbrook
User Experience Researcher

Ben Truelove
User Experience Designer