Applying Operational Profiles to Demonstrate Production Readiness of an Oracle to SQL Server Database Port using Web Services

By James Cusick, Imran Riaz, Hubert Huang, Allen Tse, Murugan Gnanavel {james.cusick; imran.riaz; hubert.huang; allen.tse; murugan.gnanavel}

Wolters Kluwer Corporate Legal Services, New York, NY


A common problem in application development is porting existing and modified functionality to a new target technology platform. This paper details a recent experience porting a modified Oracle-based database implementation to a Microsoft SQL Server implementation. The port effort used simulated load to build confidence that the new target database would support current and future user load conditions. The approach was guided by an Operational Profile and by web-services-based test scripts.


A recent project at Wolters Kluwer Corporate Legal Services required a port of an online transaction processing relational database from an Oracle platform to a Microsoft SQL Server platform. The decision to migrate was based on cost, performance, application integration, and the new features and capacity of SQL Server 2008, among other factors. The focus of this paper, however, is on the specific efforts taken to ensure that the selected database platform would support current and future load expectations and would meet all technical, functional, and reliability requirements.

Several challenges presented themselves immediately. The first was to define the current workload of the legacy system being rewritten end to end. The second was to construct a test environment to simulate the load levels expected of the new platform. The third was building out the test and production environments and demonstrating their readiness. In addition, the project required a conversion from the legacy Oracle 9i database to the SQL Server 2008 database. This paper outlines the steps required to build an Operational Profile [1] driven database proving approach utilizing a web services architecture to meet these challenges, and summarizes the actual results. The work reflected here was conducted between late 2009 and August 2010 and required several technical leads, managers, developers, and testers working part-time during the course of the larger project to realize the new application launch.

Method and Approach

The application of interest in this paper is a corporate governance product delivered to customers via a SaaS (Software as a Service) model over the public Internet. The redesign of the application was triggered by several product strategy directions and technical evolution plans. The core functionality of the application was revalidated through an intensive user-centered design process [2]. Working with Microsoft, the Wolters Kluwer team developed the platform of choice for the target design, which included SQL Server as the primary database. Other technologies introduced with this platform included Microsoft’s SilverLight [3], a complete SOA (Service Oriented Architecture [4]) approach, and an automated unit testing approach. The details of the full architecture, its capabilities, design philosophy, and results are beyond the scope of this paper; the focus instead is on the preparation and realization of the SQL Server database supporting the broader application.

The application itself is backed by a database of roughly 50 GB including data, indexes, log files, and performance data. The database comprises over 200 tables, more than 500 stored procedures, and roughly 50 functions. The largest table contained 5,000,000 records as of the port.

For most systems development projects a key objective is to build a system that performs within an acceptable response time for its users at all load levels [5]. Conceptually this can be understood graphically as shown in Figure 1: as load increases, response time can be expected to increase (an assumed roughly proportional relationship between system workload and response time). In this project, efforts were directed at defining the response time goals and workload levels (via an Operational Profile), and at demonstrating the production readiness of the system at any load level using test scripts that interfaced with the web services employed by the SOA of the new application.

In order to understand this relationship between workload and response time, and to frame other goals, several core technical objectives were established. These objectives, set for the development and database teams working together, included the following:

1. Demonstrate database performance early in the lifecycle;

2. Allow time for adjustment should it be required;

3. Meet a 30% CPU utilization target on the new platform under a load equivalent to the current legacy system's;

4. Prepare for conversion and go-live end to end (including conversion performance itself);

5. Provide robust support for SQL tuning, performance results, and configuration correctness;

6. Understand performance of integration requirements to external systems.

The core success criteria for the database platform, reflecting the goals above, were defined within three factors:

1. Page Response Time (PRT)

    1. 98% of requested pages should fall within the acceptable response time window at 1X load.
    2. 95% of requested pages should fall within the acceptable response time window at 2X load.
    3. Online transaction performance and real-time reporting performance should be non-conflicting.

2. Resource Utilization

    1. CPU and memory utilization should be under 30% in normal circumstances.
    2. No disk I/O bottlenecks.
    3. During backups and any system maintenance, overall CPU utilization should be under 40% except for occasional peaks.
    4. Failover and 2X load conditions should be achievable and predictable with nominal increases in resource use.

3. Database Health

    1. No deadlocks should occur.
    2. Buffer cache hit ratio should be > 90%.
    3. The number of database connections should stay within the threshold amount and increase gradually as load increases.
    4. No significant errors should appear in the alert log during tests and baselines.
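One of the Database Health criteria above, the buffer cache hit ratio, can be checked directly against SQL Server's `sys.dm_os_performance_counters` dynamic management view, where the ratio is exposed as a raw value/base counter pair. The sketch below is a minimal illustration, not the project's actual monitoring tooling: the connection setup is a placeholder, and the ratio arithmetic is kept in a pure function so it can be verified on its own.

```python
# Sketch: checking the buffer-cache-hit-ratio criterion (> 90%) against
# SQL Server's sys.dm_os_performance_counters DMV. Connection details
# are placeholders, not the project's actual configuration.

def buffer_cache_hit_ratio(hits: float, base: float) -> float:
    """Hit ratio as a percentage; the DMV exposes the ratio as a raw
    counter value plus a 'base' counter that it must be divided by."""
    if base == 0:
        return 0.0
    return 100.0 * hits / base

QUERY = """
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Buffer cache hit ratio',
                       'Buffer cache hit ratio base');
"""

def meets_criterion(conn) -> bool:
    """True if the instance meets the > 90% buffer cache criterion."""
    rows = dict(conn.cursor().execute(QUERY).fetchall())
    ratio = buffer_cache_hit_ratio(
        rows['Buffer cache hit ratio'],
        rows['Buffer cache hit ratio base'])
    return ratio > 90.0

if __name__ == "__main__":
    # A live check would need a real connection, e.g. (hypothetical):
    # import pyodbc
    # conn = pyodbc.connect("DRIVER={SQL Server};SERVER=...;Trusted_Connection=yes")
    # print(meets_criterion(conn))
    print(buffer_cache_hit_ratio(9850, 10000))  # -> 98.5
```

Separating the arithmetic from the query keeps the criterion testable without a live instance.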


Figure 1 – Typically, as load increases response time may also increase. This chart is a theoretical layered model of response time. The approach provided in this paper addresses the specifics on how to quantify these levels precisely.

The core activities to meet these objectives included the following areas of focus:

1. Early Development Performance Testing

2. Engineering of Production Boxes

3. SQL Tuning as required

4. Treat Test environment as Production including monitoring and adjustments

5. Conversion preparation and testing

6. Production platform performance testing

These tasks were further broken down into detailed activity phases summarized in Table 1 below.




| Phase | Activities | Duration |
| --- | --- | --- |
| Planning & Analysis | 1) Define objectives; 2) Create technical approach; 3) Define load profiles; 4) Prepare environment; 5) Prepare test harness; 6) Trial testing | 6 weeks |
| Development Proving Testing | Conduct multiple rounds of proving testing on early versions of the application and database in the development and test environments, then in the production environment prior to go-live. | 4 weeks |
| Quality Assurance (QA) Proving Testing | Conduct multiple rounds of proving testing on QA versions of the application and database, in conjunction with the QA team and load testing specialists. | 3 weeks |
| Technical Adjustments | 1) Based on early test results, recommend technical, configuration, application, or database changes as required; 2) Fine-tune test procedures, scripting, and measurements | 2 weeks |
| Production Setup | 1) Work with the hosting vendor on production cluster configuration; 2) Review configuration and basic operational state | 1 week |
| Conversion Trial and Performance Testing | 1) Conduct trial runs of data conversion; 2) Carry out load testing; 3) Adjust configuration as required | 2 weeks |
| Freeze Period | Ensure no changes to the environment 2 weeks prior to go-live. | 2 weeks |
| Go-Live | Support go-live. | 2 weeks |

Table 1 – Database Proving Activities Phased Plan


Figure 2 – The role of the Operational Profile in building a target test approach.

The Operational Profile as Applied

An Operational Profile is, at its simplest, a statistical model of the predicted usage of system functions or operations. Figure 2 indicates the normative, high-level approach to creating a load profile to accomplish testing between the legacy and target platforms. In summary, data from the legacy instrumentation repository was sampled to create an abridged functional model with assigned usage probabilities (see Table 2). This model was mapped to a projected usage model taking into account the changes in functionality between the legacy and refreshed application implementations (see inset in Figure 2). Further mapping was also done between the application functional usage model and the underlying web service scripts. With the projected web-service script load model developed, the team was able to drive the available test mechanisms. This load profile then exercised the application and, in turn, the database. Instrumentation data captured during testing, together with resource and performance monitoring tools, then provided a basis for analyzing performance and pinpointing tuning opportunities.

| Module Name | Usage Percent |
| --- | --- |
| Function 1 | |
| Function 2 | |
| Function 3 | |
| Function 4 | |
| Function 5 | |
| Function 6 | |
| Function 7 | |
| Function 8 | |
| Function 9 | |
| Function 10 | |
Table 2 – A Sample Operational Profile
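A profile of this kind can be derived mechanically from the legacy instrumentation data by counting invocations per module and normalizing the counts to probabilities. A minimal sketch follows, using hypothetical module names and counts in place of the sampled legacy data:

```python
from collections import Counter

def build_profile(invocations):
    """Map each module to its usage probability (fraction of all calls)."""
    counts = Counter(invocations)
    total = sum(counts.values())
    return {module: n / total for module, n in counts.items()}

# Hypothetical sample of legacy log entries (one module name per request).
log_sample = (["search"] * 450 + ["view_entity"] * 300 +
              ["run_report"] * 150 + ["edit_entity"] * 100)

profile = build_profile(log_sample)
for module, p in sorted(profile.items(), key=lambda kv: -kv[1]):
    print(f"{module:12s} {p:6.1%}")
```

In practice the counts would come from sampling the instrumentation repository, and the resulting probabilities would then be adjusted for functionality added or retired in the new application, as Figure 2 describes.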

Application Architecture and Database Proving Environment

The application architecture is shown in Figure 3.1. This diagram indicates the SOA boundary that encloses the web service layer along with the multi-tier backend layers, exposing services to the presentation layer and other consumers via port 8080.

To visualize the database proving environment, consider a multi-tier architecture as shown in Figure 3.2. This diagram indicates the basic environment leveling and components required to carry out the database proving approach used. Essentially, the load model produced from the legacy usage data, modified with the new functions represented in the Operational Profile, drove an application harness layer. This harness consisted of scripts that call specific web services of the modules defined in the target usage profile, or sets of those services, at sufficient frequency to simulate the per-hour load rates expected from the workload profile (in production this application receives tens of thousands of page requests per hour). In addition to controlling frequency, web service invocations were randomized to simulate real-world usage patterns.
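A minimal sketch of such a harness driver is shown below: it selects the next web service by the profile's weights and paces invocations with exponentially distributed gaps, so calls arrive randomly around the target hourly rate. The service names, rates, and pacing scheme are illustrative assumptions, not the project's actual scripts.

```python
import random

def pick_service(profile, rng):
    """Weighted random choice of the next web service to invoke."""
    services, weights = zip(*profile.items())
    return rng.choices(services, weights=weights, k=1)[0]

def schedule(profile, requests_per_hour, duration_s, rng):
    """Yield (time_offset_s, service) pairs approximating the target
    hourly rate, with exponentially distributed gaps for randomness."""
    mean_gap = 3600.0 / requests_per_hour
    t = rng.expovariate(1.0 / mean_gap)
    while t < duration_s:
        yield t, pick_service(profile, rng)
        t += rng.expovariate(1.0 / mean_gap)

# Hypothetical 1X profile; a real harness would POST each call to the
# corresponding web service endpoint instead of collecting it in a list.
profile = {"search": 0.45, "view_entity": 0.30,
           "run_report": 0.15, "edit_entity": 0.10}
rng = random.Random(42)
calls = list(schedule(profile, requests_per_hour=36_000,
                      duration_s=10, rng=rng))
print(len(calls))  # roughly 100 calls in a 10-second window at 10/sec
```

Scaling `requests_per_hour` by 2, 3, 4, or 5 gives the 2X through 5X load levels used later in the project.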


Figure 3.1 – Application Architecture

Migration and Testing

The new application took as a starting point the legacy database schema and redesigned it to fit the needs of the new application. To migrate the application a mapping from legacy to new was required. This also required the staged build-out of the development, test, and production environments. The steps in this process included:

1. Migrate the database architecture

a. Build the SQL Server instance

b. Configure the server

c. Migrate the storage architecture

2. Migrate the schemas

a. To migrate the schema, custom scripts (involving Bulk Copy) were used to populate fresh, empty tables during migration; for performance collection, the Performance Warehouse tools provided by Microsoft were used.

b. Migrate the schema objects

3. Migrate the users

a. Create user accounts

b. Create roles and grant privileges

4. Migrate the data

a. Plan the data migration

b. Execute the data migration
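As one hedged illustration of step 4, a per-table transfer can be scripted around SQL Server's `bcp` bulk copy utility: export each table from the source instance in native format, then bulk-load the file into the target. The instance and database names below are placeholders, and driving `bcp` from Python is an assumption for illustration, not the project's actual migration scripts.

```python
import subprocess

def bcp_export_cmd(table, datafile, server, database):
    """bcp command to export one table to a native-format data file
    (-S server, -T trusted/Windows auth, -n native format)."""
    return ["bcp", f"{database}.dbo.{table}", "out", datafile,
            "-S", server, "-T", "-n"]

def bcp_import_cmd(table, datafile, server, database):
    """bcp command to bulk-load the file into the target table,
    committing in batches of 10,000 rows (-b)."""
    return ["bcp", f"{database}.dbo.{table}", "in", datafile,
            "-S", server, "-T", "-n", "-b", "10000"]

def migrate_table(table, source, target):
    """Export from the source instance, then load into the target."""
    datafile = f"{table}.dat"
    subprocess.run(bcp_export_cmd(table, datafile, *source), check=True)
    subprocess.run(bcp_import_cmd(table, datafile, *target), check=True)

if __name__ == "__main__":
    # Hypothetical server/database names:
    # migrate_table("Customers",
    #               source=("LEGACYSRV", "LegacyDB"),
    #               target=("NEWSRV", "NewDB"))
    print(bcp_export_cmd("Customers", "Customers.dat", "SRV", "DB"))
```

The `-T` flag matches the Windows Authentication security mode described later in the environment specification.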


Figure 3.2 – Database proving environment.

As mentioned earlier, a test harness comprising scripts to drive web service calls produced a 1X (one times base expected load) as well as 2X, 3X, 4X, and 5X loads. Eventually, even greater loads were extrapolated from the actual stress testing results (see Figure 7). In theory, the level of load within testing can then be dialed up or down as appropriate or as events might warrant. Overall system performance was observed using pre-deployed monitoring tools, and any required adjustments to the application or the database were made during the test phase and then revalidated.

In order to create the environment and execute the testing some pre-requisites were required including:

1) Target system work profile.

2) Test scripts.

3) Data conversion.

4) Environment availability schedule.

5) Developers' time for testing.

6) Readiness of functional web services.

The specific database server specification was as follows: Windows Server 2008 R2 Enterprise 64-bit operating system on an Intel Xeon E7440 CPU @ 2.4 GHz (4 processors); 16.0 GB of installed memory, with 8.0 GB allocated to the SQL Server instance, 4.0 GB reserved for the OS, and the remaining 4.0 GB allocated to a second SQL Server instance hosting the performance data collection warehouse database. The clustered SQL Server ran Microsoft SQL Server 2008 (SP1) Enterprise Edition, version 10.0.2531.0 (X64), with the security mode set to Windows Authentication.

The tools involved included:

1. Windows System Monitor (for Windows Server 2003).

2. Performance Monitor (for Windows Vista, Windows Server 2008).

3. SQL Server Activity Monitor: to monitor key parameters such as processor time, waiting tasks, database I/O, and recent expensive queries.

4. SQL Profiler.

5. Database Engine Tuning Advisor.

6. SQL Trace.

7. DMVs (Dynamic Management Views).

8. IIS logs.

9. LoadRunner.

SQL Server 2008 Performance Data Collection provides features the above tools lack: a repository for historical and baseline performance analysis and comparison, and a single performance data repository for monitoring all instances on a server. It can remain always on, and its overhead is low. We used this tool, new to SQL Server 2008, to automatically collect important performance data during the stress tests.

Performance Data Collection has three components: the management data warehouse, data collection sets, and standard reports for the system data collection sets. Performance data is collected into a cache directory and uploaded to the warehouse database every 15 minutes. The standard reports provide server activity history, query statistics history, and a disk usage summary, and can be customized as needed. A sample of the report options is provided in Figure 4.

As mentioned, stress testing was also among our objectives. Once the environment was established, the Operational Profile based functional allocations assigned, and the test harness created, essentially any load level could be produced and tested. In our work we used 2X, 3X, 4X, and 5X loads to discover the “elbow” in the response time curve. Such testing told us how much stress the new system could handle and where its limits lay. Failure at these higher levels would not by itself make the system unacceptable; it becomes an issue only if projected load growth exceeds the system's performance ceiling.


Figure 4 – Sample SQL Server Workbench monitoring tools


The results of the testing were excellent. The application scaled very well, and the database in general never consumed significant resources during regular load tests. Even under 5X load the database's resource usage remained essentially flat. Only by assuming linear extrapolation do we see system memory becoming a potential bottleneck, at roughly 20X the normally expected load. Please refer to Figures 5, 6, and 7 for details on these test results. Of special note is Microsoft's preferred metric for database processing, Batch Requests per Second; this value should be under 1,000 for a system of the class we operate. As Figure 6 indicates, even at one quarter of this threshold, system memory and CPU are barely utilized. This indicates a more than adequate resource level on the device and significant future headroom for the application.
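The kind of linear extrapolation used to project the memory bottleneck can be sketched as follows: fit the resource utilization measured at the tested load multiples, then solve for the multiple at which the fitted line reaches the resource ceiling. The measurements below are illustrative values, not the actual test data.

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    a = num / den
    return a, my - a * mx

def load_at_ceiling(load_multiples, usage_pct, ceiling_pct=100.0):
    """Load multiple at which the fitted usage line hits the ceiling."""
    a, b = fit_line(load_multiples, usage_pct)
    return (ceiling_pct - b) / a

# Illustrative memory utilization (%) at the tested load levels 1X..5X.
loads = [1, 2, 3, 4, 5]
mem = [14, 19, 24, 29, 34]
print(load_at_ceiling(loads, mem))  # -> 18.2 (projected bottleneck ~18X)
```

As the paper notes for the real data, such a projection is an extrapolation assumption, not a demonstrated limit.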


Figure 5 – System and SQL Server CPU consumption during consecutive load tests.


Figure 6 – Batch Request per Second vs Memory utilization levels during load testing.


Figure 7 – Several system parameters plotted against projected load levels. In this chart Batch Requests per Second approaches 1,000 at 4X load. The limiting factor appears to be memory resources at somewhere over 2X the maximum stress level from earlier tests. Thus, while the database shows adequate performance at 5X typical load, stressed to 20X load it is projected that system memory will be the first bottleneck encountered. Note, this is projected and not demonstrated.


The results of the planning, analysis, system design, usage profiling, and testing of this platform allowed for the successful launch of this application in late 2010 as scheduled. Since go-live there have been essentially no issues with this database platform. Daily user levels remain within the predicted range, and resource utilization on the database, which is watched daily, remains well within predicted tolerances. No anomalies have been reported. The only technical impact encountered was a brief outage caused by a mistaken configuration setting made by a vendor staff member.

The approach taken here can be utilized by any team looking to deploy a new database (or application). Several key components of this approach are easily reusable, including the use of an Operational Profile to guide testing and to determine future estimated load factors and functional distribution in testing. The step-wise approach to configuring, porting, testing, tuning, and finalizing the platform can also be followed. Finally, the test harness approach utilizing web services is without question extensible to other environments.


This paper reflects work that was triggered by the redesign of a core application, thus we thank our business partners and technical leadership for giving us an opportunity to apply the best of breed technology and methods to a very interesting set of challenges. Specifically we would like to thank our CTO, Venkatesh Ramakrishnan, for guidance on the overall goals and ideas on how to achieve them. We also owe a debt to the many developers and testers from Wolters Kluwer and Tata Consultancy Services who worked tirelessly on this project.

About Wolters Kluwer Corporate Legal Services (CLS)

As the originator of the corporate legal services industry, Wolters Kluwer CLS has been setting the standard for unrivaled service to the legal community for nearly 120 years. Combining unmatched industry expertise, a full spectrum of products and services and a personalized customer experience every time, CLS is the legal professional’s clear choice to manage, measure and define the future of legal compliance services and performance management. For more information, go to


[1] Musa, J. D., "Operational Profiles in Software-Reliability Engineering", IEEE Software, March 1993.

[2] Holtzblatt, K., Contextual Design: Experience in Real Life, Mensch & Computer 2001.

[3] Microsoft SilverLight Home Page, viewed 4/2/2011.

[4] Erl, Thomas, Service-Oriented Architecture: A Field Guide to Integrating XML and Web Services, Prentice Hall, April 26, 2004.

[5] Cusick, J., & Welch, T., "Developing and Applying a Distributed Systems Performance Approach for a Web Platform", 30th International Conference for the Resource Management and Performance Evaluation of Computing Systems, Las Vegas, NV, December 10, 2004.

