BizTalk Performance Lab Delivery Guide – A Process for Conducting Performance Labs


The diagram below reflects the current process that the Microsoft BizTalk Ranger Team uses to conduct performance labs. If this diagram looks somewhat familiar, you may have seen an earlier incarnation that was part of my contribution to Doug Girard's excellent MSDN article "Managing a Successful Performance Lab".

This process is simply a recommendation, certainly not an edict. Be that as it may, it has consistently resulted in successful Performance Lab Engagements over the past year, and many of the sections that follow in this guide echo it. If you decide to use a different or modified process, adapt the content of the related sections in this guide accordingly.


BizTalk Ranger Team Perf Lab Process


The Performance Lab Process in the diagram above outlines five distinct phases that a Performance Lab Engagement follows:

Scope -> Plan -> Prepare -> Build Lab -> Execute

As with any process that a team attempts to follow, it is critically important for all parties involved to know, understand, and agree upon the steps and tasks in the process. To aid in communicating this process to other stakeholders, a PowerPoint slide with this diagram is included with the Performance Lab Bits. The Visio file for the diagram is also included in the Performance Lab Bits, to use as a starting point if you want to customize the process for a specific Performance Lab Engagement.

Click the links below to understand the Lab Process at a deeper level:

Before the Process Begins

The Scope Phase

The Plan Phase

The Prepare Phase

The Build Lab Phase

The Execute Phase

Comments (1)
  1. I really like your diagram here.

    When I do a performance lab, I include a “Data Gathering” load test in between the “Review Results” and “Tune for Performance” phases. Basically, many people like to gather tons of data during load tests. Many times, this heavy monitoring has a negative impact on the system under test. Therefore, I always suggest light monitoring (critical metrics only) during regular test runs, then during the “Data Gathering” runs, we allow heavy monitoring such as detailed perfmon logs, SQL Profiler, etc. The point of the “Data Gathering” run is to gather enough data to identify a performance bottleneck, then triage to determine which single change will be made to the system.

