One of the key value propositions for the Azure SQL Data Warehouse service is the ability to re-size the compute power of the database very quickly. A common pattern is to re-size the cluster before a data load to decrease load and aggregation time, then re-size again to save costs when running report/analytic workloads. The operations are all supported via T-SQL code but the process is asynchronous. Being able to monitor when the database is available is key in making this work.
Given that we're all database developers, let's look at how we can scale a database up or down via T-SQL code and then monitor when the database is scaled.
Viewing the current Service Level Objective (SLO)
A Service Level Objective (SLO) is a fancy title for defining how much compute power you have assigned to your SQL DW database. For SQL DW, we quantify this in terms of Data Warehouse Units (DWU) - a blend of the cores, memory, local storage, and network per compute node. To see the current setting for your database(s), Azure SQL provides a catalog view (sys.database_service_objectives) which returns the service tier and performance level for all databases on a logical Azure SQL Server. You can simply connect to the Master database on your logical server and run the following T-SQL to get the name and service_objective for your data warehouse databases:
SELECT
    db.[name] AS [Name],
    ds.[service_objective] AS [ServiceObject]
FROM sys.database_service_objectives ds
JOIN sys.databases db
    ON ds.database_id = db.database_id
WHERE ds.edition = 'DataWarehouse';
You should see a two-column result with the name of each data warehouse database and its current service objective.
Re-sizing your data warehouse
To change the size of your data warehouse, you can issue a T-SQL call to modify the service objective simply using the ALTER DATABASE statement and specifying the new service_objective. This is an asynchronous call - the T-SQL will return immediately but the process of resizing the cluster is happening behind the scenes.
Note: You will need to be connected to a different database than the one being re-sized. I've chosen the logical master database.
ALTER DATABASE DemoDw MODIFY ( service_objective = 'DW100' );
Monitoring the Change Request
Now that we have the database scaling, we want to be able to monitor the operation so we can resume any dependent tasks (say, a loading operation). Using some T-SQL ingenuity, we can simply poll the sys.dm_operation_status Dynamic Management View (DMV) in the master database. The sys.dm_operation_status DMV returns the operations performed on databases on the logical server (both Azure SQL Database and SQL Data Warehouse).
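Before building a polling loop, it can help to inspect the DMV directly. The sketch below is illustrative (DemoDw is the example database used in this post, and column availability should be confirmed against the sys.dm_operation_status documentation); it returns the most recent ALTER DATABASE operation against the database and its current state:

SELECT TOP 1
    major_resource_id,
    operation,
    state_desc,
    percent_complete,
    start_time,
    last_modify_time
FROM sys.dm_operation_status
WHERE resource_type_desc = 'Database'
  AND major_resource_id = 'DemoDw'
  AND operation = 'ALTER DATABASE'
ORDER BY start_time DESC;

The state_desc column will read IN_PROGRESS while the scale operation runs and COMPLETED once it finishes, which is exactly what the polling loop below keys off of.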
Using the WAITFOR DELAY T-SQL syntax, we can poll the DMV for the current status on an interval. Below is a sample script that polls for status every 5 seconds.
WHILE (
    SELECT TOP 1 state_desc
    FROM sys.dm_operation_status
    WHERE resource_type_desc = 'Database'
      AND major_resource_id = 'DemoDW'
      AND operation = 'ALTER DATABASE'
    ORDER BY start_time DESC
) = 'IN_PROGRESS'
BEGIN
    RAISERROR('Scale operation in progress', 0, 0) WITH NOWAIT;
    WAITFOR DELAY '00:00:05';
END
PRINT 'Complete';
The resulting output shows a log line for each poll of the status until the operation completes.
If you're using SSIS, you could implement this as an Execute T-SQL Statement Task at the beginning of your ETL/ELT job to re-size the service, execute your load and transform steps, and then scale right back down for low-cost dashboard/analytic consumption. Look for a post on this pattern in an upcoming blog.
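Putting the pieces together, the scale-up-and-wait portion of such a job might look like the sketch below. The service objective values and the DemoDw database name are illustrative, and the batches are separated with GO since the ALTER DATABASE call cannot run inside a user transaction:

-- Run from the master database (or any database other than DemoDw).
-- Scale up before the load; 'DW400' is an example target.
ALTER DATABASE DemoDw MODIFY ( service_objective = 'DW400' );
GO

-- Block until the asynchronous scale operation finishes.
WHILE (
    SELECT TOP 1 state_desc
    FROM sys.dm_operation_status
    WHERE resource_type_desc = 'Database'
      AND major_resource_id = 'DemoDw'
      AND operation = 'ALTER DATABASE'
    ORDER BY start_time DESC
) = 'IN_PROGRESS'
BEGIN
    WAITFOR DELAY '00:00:05';
END
GO

-- The database is now at the larger size: run loads/aggregations here,
-- then scale back down for the reporting workload, e.g.:
-- ALTER DATABASE DemoDw MODIFY ( service_objective = 'DW100' );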
Visit the SQL Data Warehouse Overview to learn more about Microsoft's scale out relational data warehouse.