Performance Testing Objectives Document Template

A concise performance testing objectives document helps me stay focused and saves time by creating a shared vision among the many hands involved - business analysts representing end users, dev teams, testers, IT folks, and more. To put together a simple performance testing objectives document template, I used material found here:

 

Here is what I have come up with.

 

  • Service level agreements (SLAs).

I use this section to set, up front, the performance requirements defined in the application specification. This helps make sure expectations are aligned with the business analysts. For example:

    • Transaction 1 UX should take y1 sec
    • Transaction 2 UX should take y2 sec
    • Transaction n UX should take yn sec
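To make this concrete, here is a minimal sketch (in Python, since the idea is stack-agnostic) of comparing measured transaction times against SLA targets. The transaction names and thresholds below are hypothetical placeholders, not values from any real specification:

```python
# Hypothetical SLA targets agreed with the business analysts, in seconds.
SLA_SECONDS = {
    "login": 2.0,     # Transaction 1: UX should take <= 2.0 sec
    "search": 1.5,    # Transaction 2: UX should take <= 1.5 sec
    "checkout": 3.0,  # Transaction n: UX should take <= 3.0 sec
}

def check_sla(measured):
    """Return (transaction, measured, target) tuples for each SLA breach."""
    return [
        (name, took, SLA_SECONDS[name])
        for name, took in measured.items()
        if name in SLA_SECONDS and took > SLA_SECONDS[name]
    ]

# Example: "search" took 2.1 sec against a 1.5 sec target, so it is flagged.
breaches = check_sla({"login": 1.8, "search": 2.1, "checkout": 2.9})
```

Keeping the targets in one table like this makes it easy to review them with the business analysts before any test runs.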
  • Detect bottlenecks to be tuned.

I use this part to describe the parts of the system that deserve most of my attention. Today's distributed systems involve a great many components - IIS, SQL Server, Active Directory, MSMQ, and more. So here I try to stay focused on what is under my control and what can be measured and tuned. This part makes sure the application's architectural characteristics have been taken into account. For example:

    • Web Services
    • LDAP queries against ADAM
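One way to find out which of these components is the bottleneck is to time each call site separately. Below is a minimal Python sketch of such a timing helper; the `time.sleep` calls are stand-ins for the real Web Service and LDAP calls, which are not shown here:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, samples):
    """Record the wall-clock time of one call under the given label."""
    start = time.perf_counter()
    try:
        yield
    finally:
        samples.setdefault(label, []).append(time.perf_counter() - start)

samples = {}
with timed("web_service_call", samples):
    time.sleep(0.01)  # stand-in for the real Web Service call
with timed("ldap_query", samples):
    time.sleep(0.01)  # stand-in for the real LDAP query against ADAM
```

Comparing the per-label samples afterwards shows which component eats the time and is worth tuning first.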
  • Determining the performance characteristics for various configuration options.

This part describes the network environments and high-level hardware characteristics to be tested. It makes sure a sanity check has been done with regard to the environments, and it helps prioritize resources. For example:

    • Test environment. All servers installed on one physical machine
    • Staging environment. Client machines with X configuration (CPU, memory); servers run in a virtual environment.
    • Production environment. You get the idea...
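It can help to keep this environment matrix in a machine-readable form so the same data drives both the document and the test harness. A small Python sketch, with entirely hypothetical machine counts and notes:

```python
# Hypothetical environment matrix; the counts and notes are placeholders.
ENVIRONMENTS = [
    {"name": "test", "machines": 1, "virtualized": False,
     "notes": "all servers installed on one physical machine"},
    {"name": "staging", "machines": 3, "virtualized": True,
     "notes": "clients with X configuration; servers virtualized"},
    {"name": "production", "machines": 8, "virtualized": False,
     "notes": "full-scale hardware"},
]

def environments_to_test(max_machines):
    """Pick the environments we can afford to run - helps prioritize resources."""
    return [e["name"] for e in ENVIRONMENTS if e["machines"] <= max_machines]
```

For instance, with budget for only three machines, `environments_to_test(3)` keeps the test and staging environments and defers production-scale runs.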

 

  • Application architecture changes

This part should briefly describe what architecture changes are to be made - for example, moving from Web Services to WCF, or from AJAX to Silverlight. This helps prioritize the metrics to be collected that will support further architectural decisions.

  • Hardware changes

This part describes the exact hardware combinations to be tested.

  • Metrics to collect

This part describes which metrics are to be collected - making sure enough data is gathered to support decisions, while also reducing the amount of data that just adds noise. Here is the simplest example:

Collect baseline metrics (time taken) for all IIS servers - the ASP.NET UI and the Web Services.
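The baseline above can be collected with a few lines of scripting. A minimal Python sketch follows; the two `time.sleep` lambdas are hypothetical stand-ins for requests to the ASP.NET UI and the Web Services:

```python
import time

def time_call(fn, samples=5):
    """Average wall-clock time of calling fn() over a few samples."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

# Baseline "time taken" per IIS-hosted piece. The callables below are
# placeholders; in a real run they would issue HTTP requests.
baseline = {
    "aspnet_ui": time_call(lambda: time.sleep(0.005)),
    "web_service": time_call(lambda: time.sleep(0.005)),
}
```

Recording only these averaged numbers keeps the baseline small: enough data to spot regressions later, without the noise of raw per-request logs.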

  • Known and documented practices

This part describes known material - pitfalls and help resources - regarding the application under test. For example, links to articles or other documentation.