Our mission is to understand our customers and their expectations, and to communicate current product performance relative to those expectations.
Great product performance is essential to winning customers. They want their applications to start fast, respond immediately, and consume as few machine resources as possible. If our runtime doesn’t meet their expectations, they won’t use it. It’s a simple concept, but a complicated thing to deliver.
Often, tradeoffs must be made between features and performance to strike the balance that results in the greatest customer satisfaction. Understanding customers' specific performance expectations, and where the runtime stands relative to those expectations, is essential to making the right call on these tradeoffs.
The primary focus of the CLR performance test team is to understand our customers, how they use our product, and what performance expectations they have. Using this data, we define scenarios and metrics that translate to customer experience, and we track and measure those metrics with the right set of processes and tools. Together, these metrics, processes, and tools let us track the state of runtime performance as it affects the customer and make the right calls on the tradeoffs between features and performance.
The CLR performance test team owns two types of testing: functional testing of the tools and features the team itself owns, and performance testing, which makes up the majority of the testing we do.
The functional features and tools owned by the CLR performance team are diagnostic related; that is, they are tools provided to customers for diagnosing performance issues and determining where the problems lie. Sometimes performance issues reported by customers result from incorrect use of features or from customer code that was not designed for performance, and sometimes the CLR is the root of the problem. These diagnostic tools allow customers to help themselves and, at the same time, help us find improvements that can be made in the runtime. While the CLR Performance team owns the tools, the respective feature teams in the CLR own the data those tools collect. For example, the GC team makes sure the #Gen 0 Collections count handed to the infrastructure is correct, and the CLR performance team makes sure that data makes its way through the infrastructure and is displayed correctly and presented in a meaningful way to the customer.
Performance testing is challenging and requires special knowledge and experience to do effectively. Unlike functional testing, where a feature either works or does not, performance testing is based largely on non-deterministic data. It requires constant filtering of the noise caused by fluctuations in results across machines, builds, and application executions to obtain an accurate reading of product performance. The CLR performance test team members possess the specialized skills and experience needed to make an accurate read on the product's performance, and that expertise is a resource we leverage in a number of different ways.
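The noise-filtering idea above can be sketched in a few lines. This is a minimal illustration, not the team's actual tooling: the benchmark samples, the median-based summary, and the 5% threshold are all hypothetical choices made for the example.

```python
import statistics

def summarize_runs(samples):
    """Summarize repeated benchmark runs (times in ms) so that one
    noisy execution does not skew the reading."""
    return {
        "median": statistics.median(samples),
        "stdev": statistics.stdev(samples),
        # Coefficient of variation: noise relative to the mean.
        "cv": statistics.stdev(samples) / statistics.mean(samples),
    }

def is_regression(baseline, candidate, threshold=0.05):
    """Flag a regression only when the candidate's median exceeds the
    baseline's median by more than the threshold (5% here), so ordinary
    run-to-run fluctuation is not reported as a performance loss."""
    return candidate["median"] > baseline["median"] * (1 + threshold)

# Hypothetical startup-time samples (ms) from repeated runs.
baseline = summarize_runs([102.0, 99.5, 101.2, 100.4, 98.9])
candidate = summarize_runs([109.8, 111.2, 110.5, 108.9, 112.0])
print(is_regression(baseline, candidate))  # ~10% slower -> True
```

Using the median rather than a single run, and comparing against a tolerance band rather than exact equality, is one common way to keep machine- and build-level fluctuation from masquerading as a product regression.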
CLR Performance team – Each release, the CLR performance leadership team takes the customer feedback we’ve received and works with management to identify the top N customer performance issues. Performance features are then driven to improve product performance measurably. The CLR performance test team participates in the specification, design, and code reviews for these features and plans, designs, and implements tests and scenarios that measure the appropriate performance metric(s). One important responsibility we have here is constantly tracking and holding product performance in check. Hitting our performance goals is not enough: we must hold onto the wins we create, because they can easily be consumed by the overhead of new features. And holding the line on CLR performance extends beyond the specific improvements made during a release; it also means ensuring no loss in performance from one release to the next.
CLR Feature test teams – The CLR is too large a product for the performance test team to own implementation of all performance tests. As a result, we partner with the different feature test teams to ensure sufficient performance testing coverage. By combining the feature-area expertise the feature test teams possess with the performance testing know-how the performance test team has, we get the most effective performance test coverage possible. It is the CLR performance test team’s responsibility to ensure the feature test teams have the information required to decide what needs performance testing coverage and how to test it, the tools to implement and execute that coverage, and a way to view and understand the performance of the feature areas they own.
CLR team – Due to the unique requirements of performance testing, specifically the stability of the performance test environment, developers (or anyone else making product code changes) cannot effectively measure performance on their own machines before checking in code changes. The CLR performance test team provides a way for developers to submit their changes to a stable test environment in the performance test lab. The fully automated process consumes the binaries, executes the required performance tests against them, and produces a set of results that tells the developer what performance effects, if any, the changes have on the product. The CLR performance team is responsible for the continuous operation of this system and for delivering accurate quality reads on the private builds submitted to it.
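The report a system like this hands back to a developer can be sketched as a per-metric comparison against the lab baseline. The metric names, baseline values, and tolerances below are purely illustrative assumptions, not the team's actual metric set.

```python
# Hypothetical baseline results from the stable lab environment,
# and per-metric tolerances (allowed relative growth before a
# change is called out as a regression).
BASELINE = {"startup_ms": 250.0, "working_set_mb": 48.0, "gen0_collections": 12}
TOLERANCE = {"startup_ms": 0.03, "working_set_mb": 0.05, "gen0_collections": 0.0}

def report(private_results):
    """Compare a private build's results against the lab baseline and
    report, per metric, whether the change stays within tolerance."""
    verdicts = {}
    for metric, base in BASELINE.items():
        delta = (private_results[metric] - base) / base
        if delta <= TOLERANCE[metric]:
            verdicts[metric] = "pass"
        else:
            verdicts[metric] = f"regressed {delta:+.1%}"
    return verdicts

# A private build that grew its working set but held the other metrics.
print(report({"startup_ms": 252.0, "working_set_mb": 51.5, "gen0_collections": 12}))
```

Reporting every tracked metric, rather than a single pass/fail bit, is what lets a developer see which specific aspect of performance a change affected before the code is checked in.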
.NET Framework teams – Some teams within the .NET Framework do not have a separate performance team, and those that do often lack the experience and knowledge the CLR performance team possesses. All disciplines within the CLR performance team play an integral role in helping these partner teams meet their performance goals. The CLR performance test team investigates and communicates any CLR-related performance issues affecting partner products, works with partner teams to make sure they are tracking the best metrics, and ensures the necessary performance coverage across our test beds. We also provide them with tools and a system to do their own testing where appropriate. Our level of involvement with each team varies depending on their specific needs, but our success in this area is measured by eliminating CLR performance issues as a blocker to partners reaching their goals.