Backing up a VLDB to Azure Blob Storage

Reviewed by: Pat Schaefer, Rajesh Setlem, Xiaochen Wu, Murshed Zaman

All SQL Server versions starting from SQL Server 2012 SP1 CU2 support Backup to URL, which allows storing SQL Server backups in Azure Blob Storage. In SQL Server 2016, several improvements to Backup to URL were made, including the ability to use block blobs in addition to page blobs, and the ability to create striped backups (if using block blobs). Prior to SQL Server 2016, the maximum backup size was limited to the maximum size of a single page blob, which is 1 TB.

With striped backups to URL in SQL Server 2016, the maximum backup size can be much larger. Each block blob can grow up to 195 GB; with 64 backup devices, which is the maximum that SQL Server supports, that allows backup sizes of 195 GB * 64 = 12.19 TB.

(As an aside, the latest version of the Blob Storage REST API allows block blob sizes up to 4.75 TB, as opposed to 195 GB in the previous version of the API. However, SQL Server does not use the latest API yet.)
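The blob size limits quoted above follow directly from the block arithmetic: maximum blocks per blob times maximum block size. A quick sanity check (figures taken from the documented REST API limits, not from SQL Server internals):

```sql
-- REST API version used by SQL Server: 4 MB blocks, 50,000 blocks per blob
SELECT CAST(50000 AS bigint) * 4 / 1024 AS max_blob_gb;             -- ~195 GB

-- Latest REST API version: 100 MB blocks, 50,000 blocks per blob
SELECT CAST(50000 AS bigint) * 100 / 1024 / 1024.0 AS max_blob_tb;  -- ~4.77 TB
```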

In a recent customer engagement, we had to back up a 4.5 TB SQL Server 2016 database to Azure Blob Storage. Backup compression was enabled; even assuming a modest compression ratio of 30%, the 20 stripes we used should have been more than enough to stay within the limit of 195 GB per blob.
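The expected stripe size can be checked with simple arithmetic (a sketch; the 30% compression savings is the planning assumption mentioned above):

```sql
-- 4.5 TB database, ~30% compression savings, spread over 20 stripes
SELECT 4.5 * 1024 * 0.7 / 20 AS expected_stripe_gb;  -- ~161 GB, comfortably under 195 GB
```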

Unexpectedly, our initial backup attempt failed. In the SQL Server error log, the following error was logged:

Write to backup block blob device failed. Device has reached its limit of allowed blocks.

When we looked at the blob sizes in the backup storage container (any storage explorer tool can be used, e.g. Azure Storage Explorer), the blob referenced in the error message was slightly over 48 GB in size, only about a quarter of the 195 GB maximum blob size that Backup to URL can create.

To understand what was going on, it was helpful to re-read the “About Block Blobs” section of the documentation. To quote the relevant part: “Each block can be a different size, up to a maximum of 100 MB (4 MB for requests using REST versions before 2016-05-31 [which is what SQL Server is using]), and a block blob can include up to 50,000 blocks.”

If we take the error message literally, and there is no reason why we shouldn’t, we must conclude that the referenced blob has used all 50,000 blocks. That would mean that the size of each block is 1 MB (~48 GB / 50000), not the maximum of 4 MB that SQL Server could have used with the version of REST API it currently supports.

How can we make SQL Server use larger block sizes, specifically 4 MB blocks? Fortunately, this is as simple as using the MAXTRANSFERSIZE parameter in the BACKUP DATABASE statement. For 4 MB blocks, we used the following statement:

BACKUP DATABASE ...
TO URL = '',
   URL = '',
   -- 20 URL = '' clauses in total, one per stripe
   URL = ''
WITH COMPRESSION, MAXTRANSFERSIZE = 4194304;

This time, the backup completed successfully, with each of the 20 blobs/stripes being slightly less than 100 GB in size. We could have used fewer stripes to get closer to the 195 GB blob limit, but since the compression ratio was not known in advance, we chose a safe number.

Another backup parameter that helps squeeze more data into the limited number of blob blocks is BLOCKSIZE. With the maximum BLOCKSIZE of 65536 (64 KB), backup writes pack more evenly into the 4 MB transfers, reducing the likelihood that some blob blocks end up smaller than 4 MB.

Conclusion: If using Backup to URL to create striped backups of large databases (over 48 GB per stripe), specify MAXTRANSFERSIZE = 4194304 and BLOCKSIZE = 65536 in the BACKUP statement.
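Putting both recommendations together, a striped Backup to URL statement would look like the sketch below. The database name, storage account, and blob names are hypothetical placeholders; substitute your own, and add one URL clause per stripe (up to 64). It assumes a credential for the container has already been created (CREATE CREDENTIAL ... WITH IDENTITY = 'SHARED ACCESS SIGNATURE').

```sql
-- [MyVLDB], myaccount, and the blob names below are placeholders, not values from this engagement.
BACKUP DATABASE [MyVLDB]
TO URL = 'https://myaccount.blob.core.windows.net/backups/MyVLDB_stripe01.bak',
   URL = 'https://myaccount.blob.core.windows.net/backups/MyVLDB_stripe02.bak'
   -- ...more URL clauses as needed, up to 64 devices...
WITH COMPRESSION,
     MAXTRANSFERSIZE = 4194304,  -- 4 MB transfers, so each blob block can be 4 MB
     BLOCKSIZE = 65536;          -- 64 KB blocks pack evenly into 4 MB transfers
```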