Product Feedback Systems

Several people on the Windows Internet Explorer team have written blog posts about our feedback mechanisms for IE9 (our use of automated telemetry in Windows, how to write a great bug, how to submit bugs, etc.).  After looking at the many similarities and obvious differences across manual feedback systems and projects, we decided to use Microsoft Connect for IE9 and to eliminate the invitation requirement for filing bugs.  In this post, I want to take a step back and talk about how we made that decision.

Every Internet Explorer user is a Windows customer.  Listening to customer feedback is vital to the success of any business.  As communication methods have evolved, how a business listens to its customers has changed as well.  Our customers have always been at the center of how we envision and design IE, but the way we listen has evolved both with technology improvements and with the way our customers communicate with each other.

Our Internet Explorer 8 beta customers asked us to look at different options for how we listen.  We took their feedback to heart and looked deeply into all aspects of our feedback systems as well as those used by other software companies and in other industries.  We also looked into research regarding the effectiveness and efficiency of various ways of listening and responding to customers.  Most importantly, we listened to real customers, real enterprises, and real developers.

Each time we start a new project we begin by stepping back and looking across the feedback from the previous project and the engineering process by which we collected it.  We challenge all assumptions, especially those based on speed, size, or connectivity.  We also talk with people to see what they like and dislike about our competitors’ product and engineering process choices to see if Microsoft’s customers might benefit from something similar or more advanced.

One of the assumptions most software organizations appear to take for granted is the need for beta releases.  In fact, some companies have even taken this assumption well beyond its commonly understood meaning. So we asked ourselves, “Should we do betas and, if so, how many?  Should we do nightly builds?  Should we continue to use Microsoft Connect or move to something else?”

A group of us sat down and thought through the customer benefits and drawbacks of various models.  Ultimately, there are two main benefits to public releases:

  1. Feedback – customers reporting bugs on what is not working correctly
  2. Readiness – testing sites with the new browser, adding newly possible features to pages, preparing product support, writing documentation and books, adjusting tooling, etc.

There is a continuum of feedback models that ranges from completely closed to completely open.  Examples of completely closed betas include new PC models, cars, toasters, etc.  Examples of completely open models include some academic open source projects, school PTA projects, etc.  We’ve chosen something near the middle: open enough to get broad feedback on quality issues, but not completely anonymous or unstructured, because we believe in responsible engineering.

From our point of view, the most important thing about working on a consumer technology like a browser is respecting people’s scarce time and energy.  You can do this while still achieving your goals: getting everybody ready for product launch (from web developers to support professionals to corporations) and getting feedback on quality issues that only surface in unique configurations.  I want to call out a few important aspects of feedback systems:

  • Having a distinct start and end to the pre-release period is important.  Without one, people would gamble their business or personal browsing on a pre-release product always behaving in a predictable way.  Having beta customers run betas forever neither respects their valuable time nor focuses that time on finding and reporting real defects.
  • How often you send out builds depends heavily on your own ability to find defects.  We looked at the pros and cons of releasing “nightly builds” to beta testers.  Because we have a professional testing organization and a broad range of hardware on which we test, we find defects daily (many prior to check-in), fix them, and look for more.  When the IE team starts running low on new types of bugs to find, we ask thousands of other professional Microsoft testers to run IE9 as their daily default browser and report any defects they find.  We call these folks “Self-host Testers,” and they find an additional set of interesting bugs.  Again, when it appears that this group is running low on new bugs, we expand the scope even further and publish things like IE9 betas.  The risk of putting out daily builds to all audiences is that multiple beta testers waste their valuable time finding the same bug because they’re simply testing incomplete code.
  • Feedback becomes less effective as you expand the group of testers.  To determine a signal-to-noise ratio, we compared IE8 bug reports that were actionable against those without enough data on which to make an effective code change.  For IE8, the signal-to-noise ratio in our bug database went from 3.1:1 to 2.6:1 when the code moved from Development to Test.  It dropped to 1.5:1 when other Microsoft testers started using it and reporting bugs.  It dropped to 0.73:1 when the rest of the pre-release Windows users inside Microsoft used it.  Finally, it dropped to just 0.64:1 across all the IE8 beta testers.  Closely related to the effectiveness of the bug reports is each group’s ability to produce a volume of effective bug reports.  The IE engineering teams produced more than five times the number of bug reports at six times the effectiveness of the beta program.  In other words, the IE engineering team was 30 times more efficient at finding actionable bugs than the IE beta testers in aggregate over the same period of time (the sketch after this list walks through that arithmetic).  Make no mistake though, the IE beta testers filed some great bug reports, and we and Microsoft’s customers are glad they did!  For instance, bugs found on websites that require account login, special credentials such as smart cards, regionally locked IP addresses, etc. are very important to us.
  • We really want to know if something doesn’t work as expected.  If you think there’s something wrong with a feature, please file a bug so we can make adjustments before IE is done.  We will manage that in a way that respects everyone’s time by helping people focus their bug reports on actual defects we haven’t yet found at Microsoft.  Bug reports come in all flavors and sizes.  They range from “I want feature x” to “Please add a whole new mode so I can use feature y this new way” to “I tried feature x in the way you said it should work, but it doesn’t work that way.”  Reports like the last example are the most useful.  When features don’t work like they should, we definitely want to know.
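To make the numbers in the third bullet concrete, here is a minimal sketch (in Python) of how the signal-to-noise ratios and the overall efficiency comparison fit together.  The ratios and the roughly five-times-volume and six-times-effectiveness figures come from the measurements above; the absolute report counts in the sketch are invented purely to reproduce those ratios and are not real data.

# Minimal sketch of the signal-to-noise arithmetic described above.
# The ratios and the 5x/6x multipliers come from the post; the absolute
# report counts below are hypothetical, chosen only to reproduce the ratios.

def signal_to_noise(actionable: int, non_actionable: int) -> float:
    """Ratio of actionable bug reports to reports without enough data to act on."""
    return actionable / non_actionable

# Hypothetical report counts per tester group, scaled to match the quoted ratios.
stages = {
    "Development":               (310, 100),  # 3.1 : 1
    "Test":                      (260, 100),  # 2.6 : 1
    "Microsoft self-host users": (150, 100),  # 1.5 : 1
    "Pre-release Windows users": ( 73, 100),  # 0.73 : 1
    "IE8 beta testers":          ( 64, 100),  # 0.64 : 1
}

for stage, (actionable, non_actionable) in stages.items():
    ratio = signal_to_noise(actionable, non_actionable)
    print(f"{stage:28s} signal-to-noise = {ratio:.2f} : 1")

# Overall efficiency compounds report volume with report effectiveness:
# ~5x the reports at ~6x the effectiveness gives ~30x more actionable bugs
# found by the engineering team than by the beta program in the same period.
volume_multiplier = 5
effectiveness_multiplier = 6
print(f"Relative efficiency: about {volume_multiplier * effectiveness_multiplier}x")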

We also looked at various feedback tools when we started the project.  These included things like Bugzilla, Mantis, Launchpad, etc.  We compared them to Microsoft Connect, which is what nearly every Microsoft product uses to get beta feedback from customers.  It turns out that these tools are all amazingly similar.

We looked more deeply at Bugzilla and projects using it because it’s a tool that at least two other browser vendors use today.  We looked at the tool itself and how it handles bug workflow both from a beta site perspective and a product engineering perspective.  We found some really interesting similarities and differences:

  1. All of them require some kind of login.  Mozilla’s Bugzilla needs an email address to log in, and so does WebKit’s.  Both warn that you may get spammed if you use your primary email account, so you should create a new email account if you want to file bugs.  By contrast, Microsoft Connect uses a Windows Live ID tied to either your current email account or a free new one.  In either case, your email address isn’t visible to people or web bots mining the site for addresses to use in spam campaigns.  We actually went one step further and require a Windows Live ID login even to query the bugs, further reducing the risk to our beta users.  Connect also works across multiple Microsoft beta projects with a single account.
  2. They all handle entering issues by area, feature, or symptom.  All have a bug entry form where you fill out the details, including repro steps, title, and so on, and all provide a way for you to assign a priority.  Both Bugzilla and Connect can be used to file great feedback or poor feedback.
  3. Bug management differs considerably across projects, though.  With IE8, we took action on every issue that was reported, and we staffed our beta with Microsoft professional product support engineers so they could learn how to support the product while we built it.  By contrast, if you filed an issue against Firefox 3.5, it would sit as Unconfirmed until someone with “Can Confirm” privileges saw it, which could take years.  If a privileged person does happen to see it, the report may eventually make its way to the engineers.
  4. Bug closure also differs between Bugzilla and Connect.  For IE8, we took action on every bug and drove it to closure.  By contrast, as of 4/26/10, Mozilla’s Bugzilla had 12,779 unconfirmed bugs.  WebKit’s had 2,616 unconfirmed bugs across all versions, but very few bugs against older versions.  Just as unemployment will never be 0%, projects under active development will always have a non-zero number of open bugs as a normal part of bug management.  Each project team needs to understand and manage its inventory of bug reports to ensure a reasonable response time for its project testers.  For a project as complex as IE, that inventory can be as large as a couple thousand active issues at any point during the project.

After doing all that research into release cadences, feedback models, tools, and bug management processes, we made some engineering decisions about how to get the best possible feedback from our beta customers.  We want to respect our customers’ time and energy, so we’re going to distribute more focused Platform Preview builds when there are new platform features ready for people to test drive.

Thanks for all the great feedback on IE8.  We’re looking forward to building a great IE9 release!

Jason Upton
Test Manager – Internet Explorer