Previously, we described our Adoption reports. These reports provide you with information on downloads, adoption rate, user ratings, and usage, which together can help you determine the popularity of your app. Adoption reports are useful, but they're just one part of the reporting tools we provide in the Windows Store. We also provide reports related to app quality, which help you measure and improve the quality of your app. In this post, program manager Kalyan Venkatasubramanian describes the Store's Quality reports and how you can use them to improve your apps.
— Antoine Leblond
Before we get started, we need to define what we mean by "quality." After all, there are many facets to determining quality—like usability, reliability, security, and so on. For the Windows Store, we chose to focus our Quality reports on providing you analytics data based on your app’s reliability as experienced by your customers.
You access the Quality reports from your app's summary page. From that page, click the Quality link, which takes you to the Quality reports screen.
These reports help you measure the quality of your app by tracking the app’s failure rate—the number of failures customers experienced. A failure is defined as the unexpected closure of an app due to one of the following reasons:
- A crash
- An unresponsive app (hang)
With these reports, you can:
- Understand the quality of your app over the different versions that were published to the Store. This tells you whether customers are having a better experience with successive versions of your app.
- Improve the quality of your app. You can do this by learning about the top failures (as seen by your customers) in the latest version published to the Store. Understanding the top failures enables you to fix them and publish updates to your app in the Store.
Understanding the quality of your app
We compute failure rates from a panel of your customers' machines (the quality panel) as the average number of failures encountered on a machine during the first 15 minutes of active usage. Looking at data from all apps in the PC ecosystem, we saw that the measured reliability of an app tends to stabilize over time: after a certain amount of usage, we see very little variation in the failure rate. For Metro style apps in the Consumer Preview, this stabilization occurs after about 15 minutes of usage. This threshold ensures that the data we give you is both accurate and timely. (Measuring over a longer period of usage would increase the amount of time we need to wait before reporting back to you.) As with the quality panel size, we'll continue to monitor this threshold as the Metro style app market evolves. Also, as we calculate failure rates, we remove any outliers to ensure they don't skew the results.
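As an illustration only, the calculation described above could be sketched in JavaScript like this. The function name, the input shape (one failure count per quality-panel machine, measured over that machine's first 15 minutes of active usage), and the outlier rule (dropping machines more than three standard deviations above the mean) are all assumptions made for the example; they are not the Store's actual implementation.

```javascript
// Sketch of a per-machine failure rate computation (illustrative only).
// perMachineFailures: hypothetical array of failure counts, one per machine,
// observed during each machine's first 15 minutes of active usage.
function failureRate(perMachineFailures) {
  if (perMachineFailures.length === 0) return 0;
  // Naive outlier rule for this sketch: drop any machine more than three
  // standard deviations above the mean. The Store's actual rule is not public.
  const mean =
    perMachineFailures.reduce((a, b) => a + b, 0) / perMachineFailures.length;
  const variance =
    perMachineFailures.reduce((a, b) => a + (b - mean) ** 2, 0) /
    perMachineFailures.length;
  const sd = Math.sqrt(variance);
  const kept = perMachineFailures.filter((n) => n <= mean + 3 * sd);
  // Average failures per machine over the remaining panel.
  return kept.reduce((a, b) => a + b, 0) / kept.length;
}

// Five panel machines: four clean sessions and one with a single crash.
console.log(failureRate([0, 0, 1, 0, 0])); // 0.2 failures per machine
```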
Here’s an example of a failure rate chart.
Improving the quality of your app
In the previous section, we discussed how you can track the change in quality of your app across versions. You are probably also interested in the top failures your customers face in the latest version of your app, so we also provide a list of the most common failures for that version, ordered by prevalence. We determine a failure's prevalence by counting the total number of occurrences across your customers.
Remember, failure rates are calculated from machines in the quality panel, which must meet stringent criteria around initial active usage of your app. The data for the most common failures list, however, comes from all customers of your app. But what if the majority of your customers have not been able to meet the usage requirements because of the failures they are experiencing? In that case the failure rate will be 0, but you will still see the top failures for the app, as shown here:
By giving you the list of the most common failures seen by your customers independent of the failure rate calculation (for example, for crashes, as shown in the picture above), and by broadening the reach of collection, we enable you to be aware of, and fix, failures seen by all of your customers. This also lets you know about and react to failures in your app early in a release.
Crashes and hangs
For crashes and hangs, we show you the 5 most common failures in the latest version of your app. The count is the total occurrences of the failure among all customers of your app. The Download link provides you with a .cab file containing the process dump for that failure.
A failure is uniquely identified by a failure name. For hangs and crashes, the failure name is broken down into the following elements:
- Problem class (NULL_CLASS_PTR_READ)
- Error Code (c0000005)
- Symbol (mydll.dll!myfunc::DoOp)
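To make the three elements concrete, here is a small JavaScript sketch that splits a failure name into them. The assumption that the elements are joined with underscores into a single string such as `NULL_CLASS_PTR_READ_c0000005_mydll.dll!myfunc::DoOp` is ours, made for illustration; the parsing relies only on the error code being an eight-digit hex status (like c0000005) flanked by underscores.

```javascript
// Hypothetical sketch: split a failure name of the assumed form
// <PROBLEM_CLASS>_<errorcode>_<symbol> into its three elements.
function parseFailureName(name) {
  // The error code is an 8-digit hex status (e.g. c0000005) between '_'s;
  // the problem class may itself contain underscores, so match greedily.
  const m = name.match(/^(.*)_([0-9a-fA-F]{8})_(.*)$/);
  if (!m) return null;
  return { problemClass: m[1], errorCode: m[2], symbol: m[3] };
}

const parsed = parseFailureName(
  "NULL_CLASS_PTR_READ_c0000005_mydll.dll!myfunc::DoOp"
);
console.log(parsed.problemClass); // NULL_CLASS_PTR_READ
console.log(parsed.errorCode); // c0000005
console.log(parsed.symbol); // mydll.dll!myfunc::DoOp
```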
Note: You can read about how we determine the root cause of a failure here. Even though that blog post is not tailored specifically for Metro style apps, it is a great read for understanding the details of how failures are collected and processed.
You can determine the reason for the crash or hang in your app by downloading the associated .cab file. The .cab file contains a process dump associated with the failure in your app. You can get the stack traces and other details for the failure from the process dump.
Prerequisites for processing the .cab file and extracting the stack traces are:
- Install WinDbg.exe on your machine.
WinDbg.exe is the recommended debugging tool to get stack traces from the process dump. If you do not have WinDbg.exe on your machine, you can get it here.
- Symbols for the application.
To get the stack traces from the process dump, you should have the symbols corresponding to the current version of your app in the Store.
Getting stack traces for crashes and hangs
These steps are not intended to be a thorough debugger tutorial. However, they will enable you to get the stack traces for failures in your app.
- Click the Download link next to the failure name for any failure (crash or hang) associated with your app. Let us assume that the failure name is:
- Save the .cab file to a location of your choice.
- Launch WinDbg.exe.
- Click File > Open Crash Dump.
- In the Open Crash Dump dialog box, browse to the location where you saved the .cab file and open it.
- Click on File > Symbol File Path and type in the path for the symbols corresponding to the version available in the Store. Check the Reload check box and click OK.
If you want to point to the publicly available symbols from Microsoft (for binaries other than those of your app), use the following format for the symbols path:
Srv*;<<your symbols path here>>
If your symbols path is c:\symbols, the equivalent path per the above guidance would be Srv*;c:\symbols
- At the prompt in the command window, type !analyze -v and press Enter.
The errors in the previous screenshot appear because the symbols for some of the DLLs could not be matched. Setting the symbols path as described above reduces the number of errors you see, but you need to be concerned only when an error relates to the DLLs and .exe files of your own app. If the errors and warnings are about binary files in your app, the debugger was not able to find the correct symbols for your app; identify the correct path where your symbols are stored and add it to the symbol file path as described above.
- The stack trace is displayed in the command window as follows:
You can see from the call stack that the failure was a “divide by zero” exception in a function called DivideByZero in FaultoidEx.Engine.dll. This corresponds to the failure name we started with, helping you understand the failure and what you can do to fix it.
JavaScript exceptions
For failures caused by JavaScript exceptions in your app, the failure name is broken down into the following elements:
- ErrorTypeText (WinRT error)
- ErrorNumber (8007007E)
- Filename_FunctionName (program.js!scenario1Run)
- Click the Download link next to the failure name for the JavaScript exception.
- Save the .cab file to a location of your choice.
- The .cab file contains a file whose name starts with ErrorInfo (the ErrorInfo file). Extract that file and save it to a location of your choice.
- Open the ErrorInfo file from the location you chose, using Notepad.
- The ErrorInfo text file has the stack traces associated with the failure. Here’s an example:
In this example, the error was due to an undefined function. The call stack leading up to the failure is also in the ErrorInfo file.
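A minimal sketch of how this class of failure arises: calling a function that was never defined throws at runtime, and for an app written in JavaScript an unhandled exception like this is what would surface in the Quality report. The names below (scenario1Run, doOperation) are hypothetical and chosen only to mirror the example above.

```javascript
// Illustrative only: scenario1Run calls doOperation, which is never
// defined anywhere, so the call throws at runtime.
function scenario1Run() {
  doOperation(); // ReferenceError: doOperation is not defined
}

try {
  scenario1Run();
} catch (e) {
  // In a running app this exception would go unhandled and be reported;
  // here we catch it just to show its type.
  console.log(e.name); // "ReferenceError"
}
```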
We believe that understanding and improving quality is critical to building a successful app. We have designed the Quality reports to provide you with useful and actionable data to improve your app. We are confident that these reports will help you prioritize improvements and deliver quick updates to your apps in the Store.
We look forward to hearing from you about your experiences using the quality and other reports in the analytics portal.