UPDATED: New White-Paper on SQL Server 2014 and Azure Blob storage integration

Last April, my colleague Francesco Cogno and I published a comprehensive white paper on SQL Server 2014 and Azure Blob storage, explaining the benefits and newly enabled scenarios, proposing a new Failover Cluster-like mechanism, and describing the internal mechanics. Over the following two months we received a great deal of feedback, both externally and inside Microsoft, so I'm glad to publish an updated version that includes some minor bug fixes, additional details, and updated information. As with the previous version, this white paper has been published in the official MSDN collection for SQL Server 2014; you can download it from the link below:

SQL Server 2014 and Windows Azure Blob Storage Service: Better Together


Its chapters provide detailed explanations of:

  • Architecture
    • Learn what this new feature really means and why it is so powerful that SQL Server 2014 can now issue direct REST calls to the Azure Blob storage service. Also learn which kinds of scenarios can benefit, and understand the pros and cons of this new approach.
  • Configuration
    • Find a complete step-by-step procedure to enable and configure this mechanism, along with known pitfalls and the best practices you should be aware of.
  • Implementing a failover cluster mechanism
    • Has everyone told you that Failover Clustering is not possible in Azure IaaS? Read through Chapter (3) and you will find a way to achieve similar functionality for a SQL Server database.
  • Monitoring and troubleshooting
    • Power is nothing without control: learn in Chapter (4) how to leverage existing Azure tools and new SQL Server 2014 mechanisms to monitor, diagnose, and troubleshoot this new integration capability.
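To give a flavor of the configuration step covered in Chapter (2), the core T-SQL pattern for placing database files directly on Azure Blob storage can be sketched as follows. This is a minimal sketch: the storage account name (mystorageaccount), container name (sqldata), and database name (TestDB) are placeholders, and the Shared Access Signature secret is elided; you must generate a SAS for your own container.

```sql
-- Store the container-level Shared Access Signature (SAS) as a SQL Server credential.
-- The credential name must match the container URL exactly.
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/sqldata]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<your SAS token here>';

-- Create a database whose data and log files live directly in Azure Blob storage:
-- FILENAME is a blob URL instead of a local path.
CREATE DATABASE TestDB
ON (NAME = TestDB_data,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/sqldata/TestDB_data.mdf')
LOG ON (NAME = TestDB_log,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/sqldata/TestDB_log.ldf');
```

Each file is created as an Azure page blob, and SQL Server issues REST calls against it directly; see the paper for the full procedure and the pitfalls around SAS generation and expiry.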

Here is a short abstract of the paper:

SQL Server 2014 and Windows Azure Blob Storage Service: Better Together

Summary: SQL Server 2014 introduces significant new features that deepen its integration with Microsoft Azure, unlocking new scenarios and providing more flexibility in the database space for IaaS data models. This technical article covers SQL Server 2014 Data Files on the Azure Blob storage service in depth, starting with step-by-step configuration and then providing guidance on scenarios, benefits and limitations, best practices, and lessons learned from early testing and adoption. Additionally, a fully featured example of a new Windows Server Failover Clustering-like capability is introduced to demonstrate the power of Microsoft Azure and SQL Server 2014 when combined.

Authors: Igor Pagliai, Francesco Cogno

Technical Reviewers: Silvano Coriani, Francesco Diaz, Pradeep M.M, Luigi Delwiche, Ignacio Alonso Portillo

Published: April 2014

Last Updated: June 2014 (Revision 2)

Applies to: SQL Server 2014 and Microsoft Azure

Thanks to Microsoft's Silvano Coriani, Francesco Diaz, Pradeep M.M, Luigi Delwiche, and Ignacio Alonso Portillo, who found the time to review this work.

Remember that you can also follow me on Twitter (@igorpag). Best regards.

Comments (3)

  1. This is a really interesting and useful paper, thank you Igor. Many things have changed or improved in Azure since you wrote this update. I am very interested in getting your opinion/insight on the following:

    – It is now possible to have VMs with multiple NICs. What do you think about adding a secondary NIC to the SQL VMs and using custom routing rules to send the traffic to the storage account over the secondary NIC, thus separating the network traffic? Would it be worth it? Do VMs with multiple NICs get multiple bandwidth allocations too? Where can I find the bandwidth allocation for D and Dv2 VMs, like the table you have in the paper for the A-series?

    – Would it be possible to use Premium Storage for databases? As per the paper, SQL Server 2014 only supports page blobs, which is exactly what Premium Storage supports, but I'm unsure how the allocation would work. Since Premium Storage is allocated and billed in specific sizes, would it be possible to store several small databases on a 128 GB Premium Storage disk and upgrade it to 256 GB when needed? Or would each file (or each container) take a full 128 GB allocation? I'm a little confused with this one…

    – What do you think about an application that requires MSDTC? I was thinking of restricting the MSDTC traffic to a single TCP port and then using the same methodology to move the endpoint to the server hosting the database. For multiple databases it may be possible to use multiple VIPs on the cloud service (also something new), so that each database uses the same set of ports (SQL & MSDTC) on its own VIP…

    Your input would be appreciated, along with any other comments on things that have improved in Azure that would make this solution work better.


    1. Hi Gonzalo, thanks for your feedback.
      A few points:

      1) Adding more (virtual) NICs will not give you more physical bandwidth, so it is not useful from this perspective.
      2) I wrote this paper before Premium Storage existed; I consider Premium Storage a “must” for SQL Server.
      3) For DTC, I don’t have fresh memories of that, so I cannot comment on it.

      HTH, regards.

  2. Thomas K says:

    Awesome, thanks a lot.