Collecting performance counter information with the Visual Studio Team System profiler

Introduction

If you have ever done performance work on Windows systems you are probably already familiar with the PerfMon tool. PerfMon gives you an overview of your system's performance and can be invaluable in the early (and sometimes the late) stages of a performance investigation. If you look at either of my two PerfMon links you can see that PerfMon is usually used to pick out the performance bottleneck when examining an application. By “performance bottleneck” I mean the resource that is being saturated by your program (CPU, network, disk). PerfMon does this by tracking some subset of performance counters that monitor different aspects of system performance. This matters because you often need different techniques and tools to investigate different bottlenecks. For example, with the Visual Studio profiler it’s usually best to use sampling mode to investigate CPU issues and instrumentation mode to investigate memory and disk issues.

With the Visual Studio Profiler we wanted to give customers an easy, integrated way to collect this performance counter information and view it alongside their performance data. This was especially important to us because with this information we can help customers analyze specific trouble areas of their program, or choose the correct profiling mode based on their particular performance bottleneck.

Turning on performance counter collection

To access performance counter collection, start by going to the session properties of your performance session. In the session properties, select the new “Windows Counters” option, shown in the screenshot below.

On this page, check the box at the top labeled “Collect Windows Counters” to add Windows performance counter collection to your next profiling run. Below that checkbox is a field labeled “Collection interval (msecs)” that controls how often the counters are collected. But before we dig into the collection interval, it is important to understand just how these counters are collected. The profiler has a concept of “marks” that show up in the data collection stream with a comment associated with each one. Unlike all other profiling data, these marks are not aggregated; they show up in chronological order in the “Marks” view of the performance report. Marks are usually added by annotating your code with the data collection APIs (native API here). When we tell the profiler to collect Windows performance counters, the counters are sampled every time one of these marks is hit. The collection interval setting tells the profiler to automatically insert marks into your program at the specified interval while it runs, so you can collect performance counter information without the hassle of adding marks to your code manually.
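If you do want marks at specific points rather than on a timer, you can insert them yourself from code. Below is a minimal native C++ sketch using the CommentMarkProfile function from the profiler's native data collection header; the surrounding function, mark IDs, and comment strings are purely illustrative, not taken from any sample application.

    // Minimal sketch: inserting profiler marks by hand. Assumes VSPerf.h,
    // the native data collection header that ships with the profiler, and
    // linking against its import library. Mark IDs, comments, and the
    // LoadPeopleRecords function below are illustrative only.
    #include <windows.h>
    #include "VSPerf.h"

    void LoadPeopleRecords()
    {
        // Emit a mark (with a comment) into the data collection stream.
        // If Windows counter collection is enabled, the selected counters
        // are sampled each time a mark like this is hit.
        CommentMarkProfile(100, TEXT("Begin loading people records"));

        // ... the work you want to bracket ...

        CommentMarkProfile(101, TEXT("Finished loading people records"));
    }

Marks inserted this way show up in the same “Marks” view as the automatically generated ones, so the filtering techniques described later apply to them as well.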

In addition to setting the collection interval, you also control which performance counters are collected. The lower section of the property page lists all the performance counters that you can select for collection. We surface the same counters as PerfMon, so if you have a favorite counter that you like to track you can be sure we’ll have it. By default we’ve included basic processor, memory, and disk usage counters; we picked these because they give a good overview of how your application is taxing the system.

Viewing performance counter data

To demonstrate how these marks and their associated counter values actually surface in the report file, we need to run the profiler and collect some data. For this example I’m using the PeopleTrax sample application, running it in sampling mode with the default set of performance counters and the default 500 ms marking interval. For my profiling scenario I’ll wait for the application to initialize, hit the “Get People” button to load the records from the database, and then shut down the application. After the application shuts down, the performance report loads automatically and I jump to the “Marks” view.

In the view above you can see the automatically collected marks, and for each mark the memory, disk, and processor usage at that point in time. This data can already tell us a lot about the program. For example, we started with a brief processor spike between marks three and five, closely followed by a memory usage spike between marks five and eight. But how can we tie these values back to the rest of the performance data we collected? After all, these are timeline values, while the rest of the data in the report is aggregated over the whole run. Luckily, we’ve provided a new filtering feature that lets you use the performance counter data to guide your performance investigation.

Filtering performance data from the marks view

Looking further into the marks data, there was another big memory spike later in the run, between marks 129 and 133. But how can we tell what the program was actually doing during that time? To start the investigation, Ctrl-click marks 129 and 133 in the Marks view so that both are selected. Then right-click one of them and choose the “Add Filter on Marks” command. A new filter control appears, docked at the top of your performance report.

The filter control has a lot of depth to it, but for now it’s enough to know that it is currently saying “only show the data collected between mark 129 and mark 133.” To actually apply the filter to the performance report, click the “execute filter” button in the toolbar (it’s the button with the green play icon). You will see a progress bar while the reanalysis is performed, and then the report reopens. Now all the aggregate data in the report reflects only what was captured between those two marks! With just a quick glance at the summary page below you can tell exactly what the application was doing and why we were seeing the memory usage spike at that point (big blue circle added by me for emphasis ;-) ). Keep in mind, though, that to get back to seeing all of your performance data you will need to clear the filter items and rerun the empty filter, or close and reopen the report.

In addition to filtering on marks, the filter grid can do some other very neat things, such as filtering on threads, by timestamps, or by time percentage (to do something like “show only the first 10% of my run”). I don’t have room to cover all the other filter types here, so I’ll have to come back and hit them in a later article.

Conclusion

Collecting performance counters is a very cool new feature that we’ve added to the profiler for Orcas. Even before you get to filtering your profiling results based on performance counters, it’s nice just to be able to collect and store those values side by side with your profiling data. And when you throw in the filtering features, it quickly becomes an invaluable tool for performance analysis.