Benchmark: Data migration to Azure SQL Database using DMA 3.2


DMA 3.2 enables schema and data migration from an on-premises SQL Server database to an Azure SQL Database. The data migration pipeline is built atop the SQL bulk copy technology and lets you quickly and reliably move data from your source database into your target database. There are many variables that can affect the performance of this data migration: the number of cores on the machine running DMA, the speed of your internet connection, the size of the database you’re migrating, and more.

For now, let's focus on two variables: the target database's service objective (which can be set through the Azure portal, through PowerShell, or through SSMS) and the number of parallel tasks performed by the data movement pipeline (which can be set in DMA’s configuration file).

An Azure SQL Database’s service objective (e.g. S0, P1, P15) indicates the level of performance that the database is capable of. Generally speaking, you will get faster and more reliable data movement with higher service tiers and performance levels. We recommend that you set your target database to level P15, at least for the duration of the migration, to make it as quick and smooth as possible. For more information and instructions for changing your service objective, see here.

The number of parallel tasks used in data movement controls the number of batches of data that can be written in parallel to your target. The pipeline splits the rows of your source tables into batches in order to insert them into your target tables. Writing more batches in parallel consumes more DTUs on your target database, but it can also speed up data movement.

By default, the number of parallel tasks is not static. It is calculated according to the following logic: if your target database is in the Basic or Standard service tiers (Basic, S0, S1, S2, S3), we will set this number to 4. Otherwise, we use this formula (where “cores” is the number of cores on the machine running DMA):

cores >= 8 ? cores * 4 : cores * 2

So, for a 4-core machine targeting a P15 database, we would use 8 parallel tasks. For a 16-core machine targeting a P15 database, we would use 64 tasks. For a machine of any size targeting S0, we would use 4 tasks.
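The default-task-count logic above can be sketched as a small function. This is only an illustration of the rule as described, not DMA's actual code, and the function name is hypothetical:

```python
# Sketch of DMA's default parallel-task calculation, as described above.
# Basic/Standard targets get a fixed count of 4; otherwise the count
# scales with the number of cores on the machine running DMA.

BASIC_AND_STANDARD = {"Basic", "S0", "S1", "S2", "S3"}

def default_parallel_tasks(service_objective: str, cores: int) -> int:
    """Return the default number of parallel data-movement tasks."""
    if service_objective in BASIC_AND_STANDARD:
        return 4
    # Premium targets: cores >= 8 ? cores * 4 : cores * 2
    return cores * 4 if cores >= 8 else cores * 2

print(default_parallel_tasks("P15", 4))   # → 8
print(default_parallel_tasks("P15", 16))  # → 64
print(default_parallel_tasks("S0", 16))   # → 4
```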

However, if you would like to experiment, this logic can be overridden with a fixed number of your choosing. In DMA’s configuration file (Dma.exe.config, located in C:\Program Files\Microsoft Data Migration Assistant), you will see a line like this (commented out by default):

<!--<add key="QueryTableDataRangeTaskCount" value="4"/>-->

You can un-comment this line and tweak the value to control the number of tasks done in parallel by the pipeline.
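For example, to pin the pipeline to a fixed 8 parallel tasks, the un-commented line would look like this (8 is just an illustrative value):

```xml
<add key="QueryTableDataRangeTaskCount" value="8"/>
```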

Following is some sample data illustrating the effects of service objective and parallel task count on migration duration. All of these migrations used the same 220 GB source database and were run on a 16-core machine.

Note that these results were fairly specific to our database and our setup; with a different set of parameters, you may get different results. They are just meant to give you a rough idea of what the performance looks like. For example, our source database had only a small number of tables, but the tables were quite large. As a result, increasing the number of tasks actually slowed things down a little overall because we ended up parallelizing more within a table rather than across tables.

In general, it's faster to do multiple tables in parallel than it is to do multiple batches from the same table in parallel. With that in mind, a source database with a greater number of smaller tables would benefit more from an increased number of parallel tasks than the source database used in this study did. So, just be aware of this if you're experimenting with these settings.

Miscellaneous notes

There are a few other things to take note of when migrating data to Azure SQL Database through DMA:

Migrating heaps

Tables without any indexes (heaps) will be slower to migrate than tables that have indexes. This is because the lack of indexes means we cannot query the data and split it into batches for processing as quickly.

Migrating to a Standard-tier target

When trying to migrate a medium-to-large source database to a Standard-tier Azure SQL DB (e.g. S0, S1), you may get better results if you decrease the maximum batch size in DMA's configuration file (Dma.exe.config). Look for this line:

<!--<add key="MaxBatchSizeKb" value="262144"/>-->

The default is 262,144 kilobytes, or 256 megabytes. You can try un-commenting this line and setting the value to 131072 (128 megabytes). In our testing, using a max batch size of 131072 and the default number of writers (4), we migrated our 220 GB database to an S1 target in roughly 2 days and 17 hours. Doing the same with an S0 target took roughly 4 days and 10 hours. So it will work, but the process will be more efficient with a Premium-tier target.
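For instance, once un-commented, a 128-megabyte maximum batch size would look like this:

```xml
<add key="MaxBatchSizeKb" value="131072"/>
```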

