BizTalk Performance Lab Delivery Guide – TOC v 0.02 – Please Comment


As promised, here is the deeper look at the Table of Contents for the BizTalk Performance Lab Delivery Guide.

As Pages are added to this site, I will update the links in this post.

Since the whole purpose of this site is to give the community a dirty read during the evolution process of the guide, please remember that everything is subject to change based upon community feedback.

Chapter 1, Introduction, explains what a BizTalk Performance Lab is, who the audience for the guide is, and why it was written.

Chapter 2, A Lab Process, walks you through the process we use to conduct BizTalk Performance Labs on the BizTalk Ranger Team.

Chapter 3, Performance Factors, enumerates the factors that affect performance, because if you don't know what to look out for, it's hard to know where to start. This chapter starts by reiterating that the factors in Wayne Clark's 2004 paper are all still valid. Factors new in BizTalk Server 2006 are then discussed.

Chapter 4, Performance Metrics, is short but important as it provides reference material for learning about the performance metrics used during a BizTalk Server Performance Lab. It's important to note that performance metrics involve many technologies besides BizTalk, such as SQL Server, IIS, COM+, and ASP.NET.

Chapter 5, Performance Bottlenecks, enumerates the types of performance bottlenecks that can be encountered and offers guidance for identifying, isolating, measuring and tuning in each specific situation.

Chapter 6, Useful Tools, will probably be the most entertaining chapter since it will give how-to information on using various powerful tools and techniques.
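
As a taste of the kind of technique that chapter will cover: tools like PerfMon and Logman.exe leave you with counter logs that usually need summarizing afterward. Here is a minimal sketch, assuming the binary .blg log has been exported to CSV (e.g. with relog.exe); the server name and sample values below are made up for illustration, not taken from the guide.

```python
# Sketch: summarize one counter from a Performance Monitor log exported to
# CSV (e.g. "relog perf.blg -f csv -o perf.csv"). Server/values are made up.
import csv
import io

# Shape of the CSV relog produces: first column is the timestamp,
# remaining columns are counter paths, one row per sample interval.
SAMPLE = """\
"(PDH-CSV 4.0)","\\\\SERVER\\Processor(_Total)\\% Processor Time"
"04/10/2007 10:00:00","42.5"
"04/10/2007 10:00:15","87.1"
"04/10/2007 10:00:30","91.3"
"""

def summarize(csv_text):
    """Return (counter_name, average, maximum) for the first counter column."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    name = rows[0][1]
    # Relog writes a blank or a single space when a sample is missing.
    values = [float(r[1]) for r in rows[1:] if r[1].strip()]
    return name, sum(values) / len(values), max(values)

name, avg, peak = summarize(SAMPLE)
print(f"{name}: avg={avg:.1f} max={peak:.1f}")
# prints: \\SERVER\Processor(_Total)\% Processor Time: avg=73.6 max=91.3
```

The same loop extends naturally to scanning many counter columns for sustained spikes, which is the sort of thing the "PowerShell Scripts Which Make use of These Tools" section presumably automates.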

Chapter 7, Engagement Documents, will provide those following the guide with a starting point from which to document the activity and planning for a performance lab engagement.

Introduction

What Is a BizTalk Performance Lab?

Who is the Audience?

Why was this Written?

A Process for Conducting Performance Labs

Before the Process Begins

The Scope Phase

Why Consider Scope?

A Document Engagement Summary

Hardware Diagram

High Level Architecture Diagram

The Plan Phase

Why Plan

Third-Party Software and Technology

Detailed Lab Hardware Stack

Detailed Lab Software Stack

Physical Space and other Logistics

The Prepare Phase

A Detailed Solution Design

The Application to be Tested

The Build Lab Phase

Build Lab Infrastructure

Configuring Third-Party Software

Configure Performance Monitoring

Run Automated Deploy

Run Automated Functional Testing

Run Automated Load Tests

Document Solution Performance Baseline

The Execute Phase

Performance Factors

Performance Factors in 2004

Performance Factors in BizTalk 2006

BAM

Host Design

Orchestrations

Receive Ports

Send Ports

The Tracking Host

The Tracking Service

Delivery Notification

Correlation

Message Size

Flat File Processing

Business Rules Engine

Performance Metrics

BizTalk Performance Metrics

SQL Server Performance Metrics

Windows Server 2003 Performance Metrics

IIS 6.0 Performance Metrics

ASP.NET Performance Metrics

Performance Bottlenecks

High-Level BizTalk System Bottlenecks

Items to Check First

Processing Host Queues

Specific Types of Performance Bottlenecks

Disk I/O Bottlenecks

CPU Bottlenecks

Memory Bottlenecks

Network I/O Bottlenecks

Database Contention Bottlenecks

MaxConnections Causing Bottlenecks

Thread Starvation Bottlenecks

Large Message Size Bottlenecks

XML Bottlenecks

Pipeline Bottlenecks

Tracking Bottlenecks

ASP.NET Bottlenecks

Errors and Exceptions Causing Bottlenecks

HTTP Bottlenecks

SOAP and Web Services Bottlenecks

MQ Series Bottlenecks

Throttling Bottlenecks

MaxReceiveInterval as Related to Latency and Bottlenecks

Useful Tools and How to Use Them

PerfMon

Tracelog.exe

Logman.exe

Log Parser

PALS

Performance Log Viewer 1.6 (toolbox)

SQLIO

Where to Obtain SQLIO

How to "Quickly" Use SQLIO

SQLIO Resources

SQLIOSIM

When To Use SQLIO vs. SQLIOSIM

Log Parser 2.2

RDCMan (toolbox)

Where to Obtain RDCMan

How to "Quickly" Use RDCMan

RDCMan Resources

The F1 Profiler

PerfConsole

PowerShell Scripts Which Make use of These Tools

Engagement Documentation Examples

Comments (6)

  1. zachbonham says:

    Rob,

    Looks like a great outline, I can’t wait to see this evolve!  

    One thing I thought could add value is commentary on load generation tools and techniques for repeatable tests. You may cover this in your section ‘Run Automated Load Tests’?

    Many times driving load against edge systems will indirectly drive load against BizTalk.  This is all well and good, but having instances of these edge applications available to load test is not always an option, or schedules conflict, or maybe it's not even desirable, etc.  Often I’ve found that I’ve had to test the integration tier without having all the application end points available.

    Sometimes picking up the message from intermediate storage (File, MSMQ, etc), is the only option and mimicking endpoints is required in order to test as best we can.

    In the past, I’ve used custom and vendor tools to generate load against endpoints (controlling arrival rates, etc).  Regardless of the tools used, the underlying end result is typically the same.

    What are some of your experiences?  Which ones work best?

    That will lead into the sticky situation of data creation.  All applications need data for test runs, what are some of the data creation techniques you’ve successfully used?

    Thanks for taking the time and putting these experiences on ‘paper’ so the rest of us can learn from them.  This will add significant value back to our business!

  2. DanD says:

    Hey there,

    I can hardly wait for you to finish all the chapters!

    Your work is much appreciated.

    Regards,

    Dan

  3. Angusf says:

    Rob,

    How about some brief worked examples – e.g. a couple of typical perf lab scenarios where you pull out the testing approach and how bottlenecks were ID’d and eliminated. Maybe a high volume messaging example and a high complexity orchestration example?

    This would maybe shed light on some of the issues that only occur in the end-to-end scenarios, never in isolated testing.

    cheers

    Angus

  4. trishakee82501 says:

    Rob,

    Found you on the internet through myspace.  This is trisha miller.

    Trisha

  5. cliffep says:

    Looks like a good framework. I am particularly interested in the tools & where to find them section, and am looking forward to when you put some meat on the bones of this section 😉

    Keep up the good work, I have registered for updates.

    Cheers,

    Pete

  6. David Zazzo says:

    Just to let you know — if the RDCMan that you're referring to is on //toolbox internally, it has now been released for external audiences.  (We shipped on Thursday.)  You can now obtain RDCMan from the Microsoft Download Center at http://www.microsoft.com/…/details.aspx.  
