How to Size SAP Systems Running on Azure VMs

Rahat Ahmad from our Partner Architect Team has contributed this blog.

  

In one of the partner workshops last month, we had an interesting conversation around SAP sizing principles and options on Microsoft Azure.

 

As one of the prerequisites for SAP certification to run SAP applications on Azure, Microsoft benchmarked some of the virtual machine types that are certified by SAP for Azure (e.g. A6, D14). These certifications are publicly available here. SAP benchmarks are done using a specific combination of operating system, database and SAP application releases. For example, if you look here, you will find the operating system as Windows Server 2012 DC Edition, the database as SQL Server 2012 and the application as SAP EhP 5 for SAP ERP 6.0. Also note the CPU utilization of the central server, which is 99%.

So, the question is: SAP recommends sizing to be conducted targeting a maximum of 60-70% CPU utilization. As a normal practice, do we need to scale down the benchmarked SAPS while sizing for Microsoft Azure? Take any benchmark certification from SAP, let's say certification number 2014040. According to this benchmark we get 18,770 SAPS at 99% CPU utilization. If I need to map this to virtual machines (VMs) available on Azure, what are the recommendations?

 

Let me try to explain:

There are two recommended ways SAP sizing is done.

1st SAP Quicksizer: there are three options: 1. Quicksizer for Initial Sizing, 2. Quicksizer for Advanced Sizing and 3. Quicksizer for Expert Sizing. These options are used to perform user-based or throughput-based sizing. The accuracy of SAP Quicksizer results depends on the quality of the input information.

2nd Reference Sizing: sizing based on comparing ST03 and other actual productive customer performance data against another known customer system with the same or similar performance data and a known hardware configuration.

 

Example: An on-premises customer system with a database size of 1 TB and 1,000 users processes 3 million dialog steps per day in total. Of these 3 million dialog steps, 1 million are Dialog, 1 million are RFC and the balance is other task types. The month-end peak is 400,000 dialog steps/hour. ST03 data reports 400 low, 400 medium and 200 high usage users. The workload is compared to a different known customer deployment and found to be nearly identical to another customer running a dedicated DS14 VM with SQL Server datafiles and log files on Premium Storage, connected to 4 DS12 VMs running two SAP application instances per VM. Each SAP application server instance has 50 work processes. It is therefore possible to recommend a similar configuration to the on-premises customer after validating the annual growth rates etc.

Sizing for SAP systems is usually an iterative process, and arriving at the actual compute resource requirements depends closely on the quantity and quality of information available at the time of sizing. The key is to identify at what stage a customer is in their SAP project lifecycle. If a customer is starting their journey with SAP, SAP Quicksizer is used. However, if a customer is already running SAP in production, either expert sizing using Quicksizer or reference-based sizing is done.

It is critical that we clearly distinguish between sizing results derived using SAP Quicksizer and those derived from existing systems on the underlying (existing) hardware (reference sizing). When the sizing exercise is done using SAP Quicksizer, the recommended CPU utilization is already factored in. The CPU sizing results are calculated against an average target CPU utilization of 65% for throughput-based sizings (and 33.3% for user-based sizings) to achieve predictable server behavior. Ideally, you would observe 65% CPU utilization if you ran the same processes used in Quicksizer and purchased hardware to meet the CPU sizing recommendations, which are measured according to the SAP Application Performance Standard (SAPS).

As the Quicksizer tool calculates for 65% utilization, you can use this value to compare with existing benchmark results. You do not have to do any further calculations.
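The 65% adjustment described above can be sketched as a pair of conversions (the helper names are mine, not SAP's; the 0.65 factor is the Quicksizer default for throughput-based sizings quoted earlier):

```python
# Target CPU utilization SAP Quicksizer assumes for throughput-based sizings.
QUICKSIZER_TARGET_UTILIZATION = 0.65

def quicksizer_to_benchmark_saps(quicksizer_saps: float,
                                 target_utilization: float = QUICKSIZER_TARGET_UTILIZATION) -> float:
    """Scale a Quicksizer SAPS requirement (stated at target utilization)
    up to the equivalent SAPS at ~100% CPU, as published in SD benchmarks."""
    return quicksizer_saps / target_utilization

def benchmark_to_usable_saps(benchmark_saps: float,
                             target_utilization: float = QUICKSIZER_TARGET_UTILIZATION) -> float:
    """Scale a benchmarked SAPS figure (measured at ~100% CPU) down to the
    SAPS usable at the recommended target utilization."""
    return benchmark_saps * target_utilization

# Example from the text: certification 2014040 delivers 18,770 SAPS at ~100% CPU.
print(round(benchmark_to_usable_saps(18770)))  # ~12,200 usable SAPS at 65%
```

Either direction works for the comparison; the point is to compare like with like, never a 65% requirement against a 100% benchmark.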

Therefore, if we have visibility into how the sizing results (SAPS) were arrived at and what was sized using SAP Quicksizer, we are good to map to the SKUs (compute resources) available on Azure. However, if we do not have visibility into the process, we need to be conservative and add buffers before mapping to Azure VMs.

When looking into existing SAP systems, on the other hand, we need to see how the customer defines the SAPS. It is not recommended to do a simple 1:1 mapping of current on-premises CPU and RAM to Azure VM types; this will typically lead to significantly oversized solutions.

If we simply take the SAPS number of the existing hardware and map it to a VM, we need to know the CPU resource consumption of that hardware as well (an EarlyWatch Alert report can provide this information). If, however, we get a SAPS number where the customer has already scaled the SAPS down (meaning the server has 40K SAPS, but only 50% CPU is used, hence 20K SAPS are needed), then we need to take a buffer into account, since we don't want to run the servers at 100%.
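The buffer logic for a pre-derated SAPS number might look like this sketch (the 15% buffer value and function name are illustrative assumptions, not an official SAP or Azure formula):

```python
def required_saps_with_buffer(derated_saps: float, buffer: float = 0.15) -> float:
    """If the customer already scaled SAPS down to observed utilization
    (e.g. a 40,000 SAPS server at 50% CPU -> 20,000 SAPS consumed), add
    headroom so the target VMs are not driven toward 100% CPU."""
    return derated_saps * (1 + buffer)

# Example from the text: 40K SAPS server used at 50% CPU.
consumed = 40000 * 0.50
print(required_saps_with_buffer(consumed))  # 23000.0
```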

 

Coming back to mapping the SAPS requirements to VMs on Azure, let us see an example.

 

A company is running SAP EhP6 for SAP ERP 6.0. The SAPS required for this system is:

Total SAPS required: 36760

SAPS for DB: 3500

SAPS for App: 33260

Note: For simplicity, let us ignore IOPS, Storage and other typical requirements.

The following table lists a few of the VMs available on Azure (not exhaustive) along with their benchmarked 2-tier SAPS.

| VM Type | # of CPU cores | Memory size (GB RAM) | SAPS | Highspeed Local SSD (GB) | # of attachable disks to maximize IOPS |
|---------|----------------|----------------------|--------|--------------------------|----------------------------------------|
| A5      | 2              | 14                   | 1,500  | -                        | 4                                      |
| A6      | 4              | 28                   | 3,000  | -                        | 8                                      |
| A7      | 8              | 56                   | 6,000  | -                        | 16                                     |
| A8      | 8              | 56                   | 11,000 | -                        | 16                                     |
| A9      | 16             | 112                  | 22,570 | -                        | 16                                     |
| A10     | 8              | 56                   | 11,000 | -                        | 16                                     |
| A11     | 16             | 112                  | 22,570 | -                        | 16                                     |
| D11     | 2              | 14                   | 2,338  | 100                      | 4                                      |
| D12     | 4              | 28                   | 4,675  | 200                      | 8                                      |
| D13     | 8              | 56                   | 9,350  | 400                      | 16                                     |
| D14     | 16             | 112                  | 18,770 | 800                      | 32                                     |
| DS11    | 2              | 14                   | 2,338  | 28                       | 4                                      |
| DS12    | 4              | 28                   | 4,675  | 56                       | 8                                      |
| DS13    | 8              | 56                   | 9,350  | 112                      | 16                                     |
| DS14    | 16             | 112                  | 18,770 | 224                      | 32                                     |

Table 1: Azure VMs for running SAP applications.

To identify suitable virtual machines on Azure, I will need to know whether the SAPS were derived using SAP Quicksizer or from existing hardware running the same SAP system.

If the SAPS are derived using SAP Quicksizer, I will assume CPU utilization (65%) is already factored in and will map the VMs accordingly.

Since the Quicksizer SAPS are already adjusted down to 65% CPU utilization, we have two options: either scale up the required SAPS output from Quicksizer, or scale down the SAPS benchmarked for the VMs.


Option 1 – Scale-up SAP Quicksizer SAPS (@65%) and match to SAP SD 2 Tier Benchmark (@100%)

If we scale up the SAPS we get 3,500 -> 5,385 SAPS for DB and 33,260 -> 51,170 SAPS for App. We have a straightforward mapping option:

Answer: 1 x A7 (6,000 SAPS) for DB and 3 x D14 (56,310 SAPS) for App.


 

Option 2 – Scale-down SAP SD 2 Tier Benchmark (@100%) and match to SAP Quicksizer SAPS (@65%)

Alternatively, taking the scale-down option: from Table 1, we see D14 is benchmarked for 18,770 SAPS. Our requirement for the application server is 33,260 SAPS. Two D14s would give 2 * 18,770 = 37,540 SAPS, which is a little higher than our requirement, and we might be happy with that. However, since the SAPS benchmarks for VMs are done at 99% or near 100% CPU utilization, 2 x D14 is a challenge. How? When we scale the utilization down to 65%, a D14 gives approximately 12,200 SAPS, and 2 * 12,200 = 24,400 SAPS, which is less than the 33,260 SAPS required for App. Therefore, if we want to use D14 VMs, we will have to provision 3 and get 3 * 12,200 = 36,600 SAPS.

Similarly, for DB the required SAPS is 3,500. We have two options: A7, which is benchmarked for 6,000 SAPS, and D12, which is benchmarked for 4,675 SAPS. Scaling down to 65% we get approximately 3,900 SAPS for A7 and 3,040 SAPS for D12. The choice is clear here: A7 provides a little more SAPS than required, while D12 provides less. Hence, we go with the A7 VM.

Answer: 1 x A7 for DB, 3 x D14 for SAP application server
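The Option 2 arithmetic can be sketched as a small helper (VM SAPS figures come from Table 1; the function and dictionary names are mine):

```python
import math

TARGET_UTILIZATION = 0.65
# Benchmarked 2-tier SAPS at ~100% CPU, from Table 1.
VM_SAPS = {"A7": 6000, "D12": 4675, "D14": 18770}

def vms_needed(required_saps: float, vm_type: str) -> int:
    """Scale the benchmarked SAPS down to the target utilization and
    count how many VMs cover the requirement."""
    usable = VM_SAPS[vm_type] * TARGET_UTILIZATION
    return math.ceil(required_saps / usable)

print(vms_needed(33260, "D14"))  # 3 x D14 for the application layer
print(vms_needed(3500, "A7"))    # 1 x A7 for the database
```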

Note: This is just an illustration. It is possible that we choose different VMs altogether as the solution to this example for various reasons.

 

If the SAPS are derived from existing hardware running the same SAP system, then in addition to SAPS we need to consider the following:

  • CPU utilization for the system in question
  • Processor Speed of the current hardware
  • CPU Type

 

Azure VMs have different performance characteristics depending on the SKU (A1-A7, A9, A10, A11, D-Series, DS-Series, G-Series). It is advisable to have a proper understanding of the specs for these SKUs to perform better sizing/compute mapping.

 

IOPS and Disk latency

In addition to SAPS, IOPS and disk latency are also critical factors for optimally sizing compute resources in the cloud.

It is important to know the certified VMs, the type of storage supported with each VM, and the number of attachable disks to maximize IOPS. This gives us an understanding of the possible disk layouts and the maximum IOPS we can get for specific VMs. The following table provides these stats along with example disk layouts.

 

[Table: certified VMs, supported storage types, attachable disks and example disk layouts]

 

Best Practice Recommendation: Ensure database datafiles are distributed across multiple disks, and consider using Premium Storage, especially for the database log. Typically, medium-sized databases should have 8-16 datafiles distributed across 4-8 disks (approximately 2 datafiles per disk). Larger databases will benefit from more disks.
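That guideline (~2 datafiles per disk) reduces to a one-line calculation; this sketch just encodes the text's rule of thumb, not an official formula:

```python
import math

def disks_for_datafiles(num_datafiles: int, files_per_disk: int = 2) -> int:
    """Disks needed to spread datafiles at ~files_per_disk per disk."""
    return math.ceil(num_datafiles / files_per_disk)

print(disks_for_datafiles(16))  # 8 disks for a 16-datafile database
print(disks_for_datafiles(8))   # 4 disks for an 8-datafile database
```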

 

This is illustrated in the below example:

Contoso Corporation is running an SAP ERP landscape on-premises and is interested in migrating the entire landscape, which consists of Development, Quality Assurance, Production and Disaster Recovery systems, to an Azure data center. Following are the technical details of the production system:

• SAP System : SAP ERP ECC 6.0 EHP 7

• Operating System : Windows Server 2012 R2

• Database : SQL Server 2014

• Production Database server : 8 cores, 32GB RAM x 1 node

• IOPS (of Production DB server) : 7000 IOPS

• Production Database size : 2TB

• Performance Characteristics : Batch input jobs and heavy batch reports

• Production Application server : 8 cores, 32GB RAM x 3 nodes

• Production Total SAPS : 30k SAPS

Let us focus on designing a solution for the production system only (no Dev, QA or DR, and no HA, to keep the solution example simple).

We need to perform two major steps:

1. Select the deployment architecture of the SAP ERP system (3-tier). Choose appropriate VM Types for SAP ASCS/SCS, Application server and Database

2. Choose an appropriate storage type and decide how many disks are needed for the database files and the log files

 

[Table: example solution design for the production system on Azure]

 

There are multiple options available to design a solution for the given production system on Azure. The table above shows one of the options.

Let us check the requirements once again.

From the requirement statistics in the above example, the SAPS for the database and application server layers are not clear, and hence we will have to design the solution based on certain assumptions.

The SAPS split between application and database is not provided explicitly; therefore, we can take a conservative thumb rule of 70:30 for App and DB, arriving at 21,000 SAPS for App and 9,000 SAPS for DB.
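The 70:30 thumb rule from the text, as a tiny sketch (the ratio is the assumption stated above, not a fixed SAP rule):

```python
def split_saps(total_saps: float, app_share: float = 0.70):
    """Split total SAPS between application and database layers."""
    app = total_saps * app_share
    db = total_saps - app
    return app, db

print(split_saps(30000))  # (21000.0, 9000.0) for the 30K SAPS requirement
```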

The SAPS split between ASCS/SCS and App is not explicitly given either. From experience we know the ASCS/SCS instance usually does not need much SAPS, and therefore we can keep the SAPS for the ASCS/SCS instance to a bare minimum.

 

Solution - from the table:

1. VM type D11 is considered for the ASCS/SCS instance. D11 is benchmarked at 2,338 SAPS at 99% utilization; scaling down to 65% utilization we get approximately 1,520 SAPS.

2. VM type D13 is considered for the App instances. D13 is benchmarked at 11,000 SAPS at 99% utilization; scaling down to 65% utilization we get approximately 7,150 SAPS, and 3 * 7,150 = 21,450 SAPS. The total SAPS for the application layer therefore becomes 1,520 + 21,450 = 22,970 (a bit higher than the 21,000 SAPS required).

3. VM type DS14 is considered for the database instance. DS14 is benchmarked at 18,770 SAPS at 99% utilization; scaling down to 65% utilization we get approximately 12,200 SAPS. Note that other VMs could also be used for the database, e.g. D13. However, in our solution we are considering Premium Storage (a high-performance storage option which uses SSDs, unlike Standard Storage which uses HDDs), which is currently available only with DS-Series VMs. Though the required SAPS for DB is 9,000, the VM providing the nearest SAPS in the DS-Series is DS14.

4. The other requirement was 7,000 IOPS for the database server. Standard (blob) storage provides a maximum of 500 IOPS per disk. Therefore, we can take 15 x 200 GB disks to achieve 15 * 500 = 7,500 IOPS for the database files. For the database log files we can use 1 x Premium Storage P20 512 GB disk, which provides single-digit-millisecond latency.
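The IOPS arithmetic in step 4 can be sketched as follows (the one-disk headroom default mirrors the worked example above, which provisions 15 disks where 14 would meet the target exactly; the helper name is mine):

```python
import math

IOPS_PER_STANDARD_DISK = 500  # max IOPS per Standard storage data disk

def disks_for_iops(required_iops: int, spare_disks: int = 1) -> int:
    """Minimum Standard storage disks for the IOPS target, plus optional
    headroom (the worked example provisions one extra disk)."""
    return math.ceil(required_iops / IOPS_PER_STANDARD_DISK) + spare_disks

n = disks_for_iops(7000)
print(n, n * IOPS_PER_STANDARD_DISK)  # 15 disks -> up to 7500 IOPS
```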

 

 

SAPS and benchmarking

Performance optimization is a continuous effort, and the experience of running SAP systems helps in identifying performance bottlenecks and scope for improvement. However, if we are preparing to set up a new SAP system, or planning a migration from one hardware platform to another or from on-premises hardware to a cloud platform, we need to carefully consider the factors that enable proper sizing. SAPS, or SAP Application Performance Standard, is a hardware-independent unit of measurement derived from benchmarking, usually on a two-tier architecture with the Sales & Distribution (SD) process. This measurement is very important for sizing SAP workloads.

SAP applications are usually deployed in 2-tier or 3-tier architecture.

 

[Figure: 2-tier and 3-tier SAP deployment architectures]

A 2-tier deployment architecture is a central installation where the database and the application layer components are deployed on a single operating system, with multiple client systems accessing the SAP application.

A 3-tier deployment is a distributed architecture where a single database instance server supports multiple application server instances. Each instance (DB, App1, App2, ..., Appn) is deployed on its own operating system and accessed by multiple client systems. In general, 3-tier deployments are recommended for all but the smallest SAP on Azure implementations.

Each topology has different characteristics in terms of flexibility, complexity, manageability and resiliency of SAP systems.

As the deployment topologies are different, it is imperative that we perform a detailed sizing exercise based on the deployment architecture (2-tier or 3-tier). We cannot and should not use a 2-tier benchmark to size a 3-tier deployment architecture. The simple reason is the SAPS required for the DB server instance. Typically, SAP Quicksizer provides results against the 2-tier benchmark, and I feel it is biased towards application server benchmarking and somewhat misses on the DB server. Usually, SAP Quicksizer results show around 10 to 15% of total SAPS for the DB (refer to the results of the examples given in SAP Quicksizer). In real-world scenarios the ratio between application server and DB server is somewhere around 5:1 for OLTP workloads (e.g. SAP ERP) and up to 1:1 for OLAP workloads (e.g. SAP BW).

Another important point we must understand while sizing SAP systems for public cloud infrastructure is buffer provisioning. In on-premises scenarios we usually factor in user growth requirements (typically 3 to 5 years) before performing the sizing exercise. In the cloud we always have the ability and flexibility to scale up and scale out.

Care should be taken not to "oversize" SAP on Azure solutions. When procuring on-premises hardware, SAP administrators normally add a substantial performance buffer because the hardware typically has a lifecycle of 3-5 years. Such a performance buffer is not required on Azure, because resources can be added quickly and easily as needed.

 

 

 

References:


https://websmp205.sap-ag.de/~sapidb/011000358700000108102008E/QS_Best_Pract_V38_2.pdf