As you may or may not know, Azure SQL Database is a great tiered offering that comes with backup retention capabilities out of the box. What is awesome about this offering is that it makes backing up your database braindead simple, and it saves you the administrative effort of creating a maintenance plan to do it. It also saves you the hassle of planning out the nuances of the full, differential, and log backups needed to achieve your recovery goals. Finally, it saves you the headache of remembering to delete aged backup files. After all, nobody wants to find out a backup failed because you ran out of disk space.
As I mentioned, this is a tiered offering, and the out-of-the-box backup capabilities are no exception. How much backup retention you get depends entirely on which tier of the database offering you choose. At the time of this post the retention is as follows:
A question I’m often asked is whether this feature is configurable on a per-database basis. Most customers I speak with would gladly pay the extra storage costs to make retention a tunable setting on the database, even if it meant sacrificing the point-in-time nature of the restore as the backups age out. Another question I’m asked is how best to take a preemptive backup before a deployment so there is a known good state to recover to if the deployment fails. The built-in backup can handle the deployment scenario just fine, but the challenge is often knowing exactly what point in time to recover to. Those responsible for a deployment rest easier knowing there is a snapshot of the database to fall back on when all else fails.
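For reference, recovering from the built-in backups is a point-in-time restore to a new database. Here is a minimal sketch using the same AzureRM cmdlets as the script later in this post; the resource group, server, database name, and timestamp are placeholders you would fill in:

# A hedged sketch of a point-in-time restore from the built-in backups.
$db = Get-AzureRmSqlDatabase -ResourceGroupName "[resource group]" `
    -ServerName "[server]" `
    -DatabaseName "[database]"

Restore-AzureRmSqlDatabase -FromPointInTimeBackup `
    -PointInTime "2016-10-01T12:00:00Z" `
    -ResourceGroupName $db.ResourceGroupName `
    -ServerName $db.ServerName `
    -TargetDatabaseName ($db.DatabaseName + "-predeploy-restore") `
    -ResourceId $db.ResourceId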
While these features (tunable retention and on-demand backup) aren’t part of the built-in backup capabilities of Azure SQL Database, there is a way to use other features of the service to achieve the same goal, namely SQL Database Copy and SQL Database Export. While not technically “backup,” these features can be used to get a consistent copy of your database for disaster purposes, hence the quotes. (NOTE: When I put the quotes there I have a mental image of Mike Myers, aka Dr. Evil, doing air quotes.)
Let’s explore these two features a bit…
SQL Database Copy is designed to make a transactionally consistent copy of your database by leveraging the built-in backup retention machinery. It first takes a copy of the tail of the transaction log, then combines that tail with the full, differential, and log backups retained behind the scenes in Azure SQL Database. All of this is done without interrupting the log chain of the current database’s retention plan (a common problem with preemptive full backups in on-premises databases).
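As a quick illustration (a hedged sketch with placeholder names, not part of the script later in this post), the copy doesn’t even have to land on the same server or resource group as the source:

# A minimal SQL Database Copy sketch; the copy can target a different server/resource group.
New-AzureRmSqlDatabaseCopy -ResourceGroupName "[source resource group]" `
    -ServerName "[source server]" `
    -DatabaseName "[source database]" `
    -CopyResourceGroupName "[destination resource group]" `
    -CopyServerName "[destination server]" `
    -CopyDatabaseName "[copy name]"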
SQL Database Export is designed to export a copy of your database to a .bacpac file, which is an encapsulated version of the database’s schema and data. While not truly a backup file like a .bak file from a traditional SQL Server database, it does contain everything you need to reconstitute your database in the event of an emergency.
Based on these two descriptions, you might ask yourself, “Why do I need both?” This is the point where the fine print is important. There are two reasons I think it’s essential to use both of these features in concert:
1. An export is only guaranteed to be transactionally consistent if nothing is writing to the database while it runs. Exporting from a copy, which is static from the moment it’s created, gives you that consistency without having to quiesce your production database.
2. A database copy is a full, online database that is billed like any other, so keeping copies around for retention gets expensive quickly. Exporting the copy to inexpensive blob storage and then deleting the copy gives you retention you control without the ongoing database cost.
Now that we have a good background on why I’m recommending this approach, here are a few PowerShell script segments to get you started. I logically break this down into three discrete steps: copy the source database, export the copy to a .bacpac file in blob storage, and delete the copy once the export completes.
# Define the source database, server, and resource group
$sourceDatabase = "[source database]"
$sourceServer = "[source server]"
$sourceResourceGroup = "[source resource group]"
# Define the output location for the bacpac
$storageAccount = "[blob account].blob.core.windows.net"
$storageContainer = "[blob container]"
$storageKeyType = "StorageAccessKey"
$storageKey = "[storage key]"
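# (Optional, and an assumption on my part rather than part of the original script: you can
# pull the storage key programmatically instead of pasting it in. The exact property shape
# varies between AzureRM versions.)
# $storageKey = (Get-AzureRmStorageAccountKey -ResourceGroupName "[storage resource group]" -Name "[blob account]")[0].Value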
# Admin credentials for the database
# (The export cmdlet expects the password as a SecureString.)
$adminUser = "[admin user name]"
$adminPassword = ConvertTo-SecureString "[admin password]" -AsPlainText -Force
# Generate a unique name for the database copy and the URI for the bacpac
$destinationDatabase = $sourceDatabase + (Get-Date).ToString("yyyyMMddHHmm")
$bacpacUri = "https://" + $storageAccount + "/" + $storageContainer + "/" + $destinationDatabase + ".bacpac"
# Execute the copy command
New-AzureRmSqlDatabaseCopy -ResourceGroupName $sourceResourceGroup `
    -ServerName $sourceServer `
    -DatabaseName $sourceDatabase `
    -CopyDatabaseName $destinationDatabase
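# (A hedged sanity check that isn't in the original script: confirm the copy exists
# before kicking off the export.)
Get-AzureRmSqlDatabase -ResourceGroupName $sourceResourceGroup `
    -ServerName $sourceServer `
    -DatabaseName $destinationDatabase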
# Note: Exporting to a bacpac requires an admin account name and password.
# If you do not include them, the command will prompt for them at execution time.
$exportRequest = New-AzureRmSqlDatabaseExport -ResourceGroupName $sourceResourceGroup `
    -ServerName $sourceServer `
    -DatabaseName $destinationDatabase `
    -StorageKeyType $storageKeyType `
    -StorageKey $storageKey `
    -StorageUri $bacpacUri `
    -AdministratorLogin $adminUser `
    -AdministratorLoginPassword $adminPassword
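# (An assumption on my part, not part of the original segments: the export request is
# asynchronous, so poll its status and wait for it to finish before deleting the copy.)
$exportStatus = Get-AzureRmSqlDatabaseImportExportStatus -OperationStatusLink $exportRequest.OperationStatusLink
while ($exportStatus.Status -eq "InProgress")
{
    Start-Sleep -Seconds 30
    $exportStatus = Get-AzureRmSqlDatabaseImportExportStatus -OperationStatusLink $exportRequest.OperationStatusLink
}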
# Now that the export is done, delete the database copy
Remove-AzureRmSqlDatabase -ResourceGroupName $sourceResourceGroup `
    -ServerName $sourceServer `
    -DatabaseName $destinationDatabase
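If you ever do need to reconstitute the database from one of these .bacpac files, the import is the mirror image of the export. Here is a rough sketch; the edition, service objective, and max size are assumptions you would tune for your own database:

# A hedged sketch of restoring from the bacpac; edition, objective, and size are placeholders.
New-AzureRmSqlDatabaseImport -ResourceGroupName $sourceResourceGroup `
    -ServerName $sourceServer `
    -DatabaseName ($sourceDatabase + "-restored") `
    -StorageKeyType $storageKeyType `
    -StorageKey $storageKey `
    -StorageUri $bacpacUri `
    -AdministratorLogin $adminUser `
    -AdministratorLoginPassword $adminPassword `
    -Edition Standard `
    -ServiceObjectiveName S2 `
    -DatabaseMaxSizeBytes 268435456000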
This is the gist of it. To save yourself some copy-and-paste headaches, and for the complete version of the PowerShell script, check out my GitHub repository.