Windows Azure Storage Release – Introducing CORS, JSON, Minute Metrics, and More

We are excited to announce the availability of a new storage version, 2013-08-15, that provides a variety of new functionality across Windows Azure Blobs, Tables and Queues. With this version, we are adding the following major features:

1. CORS (Cross-Origin Resource Sharing): Windows Azure Blobs, Tables and Queues now support CORS, which enables users to access and manipulate resources from a web page served from a different domain than the resource being accessed. CORS is an opt-in model that users can turn on using Set/Get Service Properties. Windows Azure Storage supports both the CORS preflight OPTIONS request and the actual CORS request. Please see the documentation for more information.
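As a sketch of the opt-in model, a CORS rule could be added to the Blob service properties with the 3.0 client library along these lines (the origin, header filters, and max-age value below are illustrative, not recommendations):

```csharp
using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();

// Read the current service properties, append a CORS rule, and write them back.
ServiceProperties serviceProperties = blobClient.GetServiceProperties();
serviceProperties.Cors.CorsRules.Add(new CorsRule()
{
    AllowedOrigins = new List<string>() { "http://www.contoso.com" }, // example origin
    AllowedMethods = CorsHttpMethods.Get | CorsHttpMethods.Put,
    AllowedHeaders = new List<string>() { "x-ms-*" },
    ExposedHeaders = new List<string>() { "x-ms-*" },
    MaxAgeInSeconds = 3600 // how long the browser may cache the preflight response
});
blobClient.SetServiceProperties(serviceProperties);
```

The same pattern applies to the Table and Queue service clients, since CORS is configured per service.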

2. JSON (JavaScript Object Notation): Windows Azure Tables now supports OData 3.0’s JSON format. The JSON format makes wire transfer more efficient, since it eliminates the predictable parts of the payload that are mandatory in AtomPub.

JSON is supported in 3 forms:

  • No Metadata – This format is the most efficient transfer, useful when the client knows how to interpret the data types of custom properties.
  • Minimal Metadata – This format contains data type information for custom properties of certain types that cannot be implicitly interpreted. This is useful for queries from clients that are unaware of the data types, such as general tools or Azure Table browsers.
  • Full Metadata – This format is useful for generic OData readers that require type definitions even for system properties and need OData information such as edit links, IDs, etc.

More information about JSON for Windows Azure Tables can be found at
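As a minimal sketch (assuming the PayloadFormat property exposed by the Storage Client Library 3.0 table client), a client can opt into one of the three JSON forms like so:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudTableClient tableClient = account.CreateCloudTableClient();

// Choose the payload format for subsequent table requests.
// TablePayloadFormat also offers Json (minimal metadata) and JsonFullMetadata.
tableClient.PayloadFormat = TablePayloadFormat.JsonNoMetadata;
```

No Metadata is the leanest choice when your own entity classes define the property types; Minimal or Full Metadata trades payload size for self-describing responses.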

3. Minute Metrics in Windows Azure Storage Analytics: Until now, Windows Azure Storage supported hourly aggregates of metrics, which are very useful for monitoring service availability, errors, ingress, egress, API usage, and access patterns, and for improving client applications; we blogged about them here. In the new 2013-08-15 version, we are introducing Minute Metrics, where data is aggregated at a one-minute level and is typically available within five minutes. Minute-level aggregates allow users to monitor client applications in closer to real time than hourly aggregates do, and to recognize trends such as spikes in requests per second. With the introduction of minute-level metrics, your storage account now contains the following tables where Hour and Minute Metrics are stored:

  • $MetricsHourPrimaryTransactionsBlob
  • $MetricsHourPrimaryTransactionsTable
  • $MetricsHourPrimaryTransactionsQueue
  • $MetricsMinutePrimaryTransactionsBlob
  • $MetricsMinutePrimaryTransactionsTable
  • $MetricsMinutePrimaryTransactionsQueue

Please note the change in table names for hourly aggregated metrics. Though the names have changed, your old data is still available via the new table names.

To configure minute metrics, please use the Set Service Properties REST API for Windows Azure Blob, Table and Queue with the 2013-08-15 version. The Windows Azure Portal does not currently allow configuring minute metrics, but portal support will be available in the future.
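Until the portal supports it, minute metrics can be enabled through the client library, which wraps Set Service Properties. A hedged sketch (the retention period and metrics level below are example values):

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();

// Enable minute metrics for the Blob service with per-API detail
// and a 7-day retention policy (both values are illustrative).
ServiceProperties serviceProperties = blobClient.GetServiceProperties();
serviceProperties.MinuteMetrics = new MetricsProperties()
{
    MetricsLevel = MetricsLevel.ServiceAndApi,
    RetentionDays = 7,
    Version = "1.0"
};
blobClient.SetServiceProperties(serviceProperties);
```

Hourly metrics are configured the same way via the HourMetrics property, and the Table and Queue service clients follow the same pattern.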

In addition to the major features listed above, this release includes the following additions to our service. A more detailed list of changes in the 2013-08-15 version can be found at

  • Copy Blob now allows a Shared Access Signature (SAS) to be used for the destination blob if the copy is within the same storage account.
  • Windows Azure Blob service now supports Content-Disposition and the ability to control response headers such as Cache-Control and Content-Disposition via query parameters included in a SAS. Content-Disposition can also be set statically through Set Blob Properties.
  • Windows Azure Blob service now supports multiple HTTP conditional headers for Get Blob and Get Blob Properties; this feature is particularly useful for access from web browsers going through proxies or CDN servers, which may add additional headers.
  • Windows Azure Blob service now allows the Delete Blob operation on an uncommitted blob (a blob created using the Put Block operation but not yet committed using the Put Block List API). Previously, the blob needed to be committed before it could be deleted.
  • Starting with the 2013-08-15 version, List Containers, List Blobs and List Queues no longer return the URL address field for the resource. This reduces the payload by omitting fields that can be reconstructed on the client side.
  • Starting with the 2013-08-15 version, Lease Blob and Lease Container return ETag and Last-Modified response headers, which the lease holder can use to easily check whether the resource has changed since it was last tracked (e.g., whether the blob or its metadata was updated). The ETag value does not change for blob lease operations, and starting with the 2013-08-15 version, container lease operations no longer change the ETag either.

We are also releasing an updated Windows Azure Storage Client Library here that supports the features listed above and can be used to exercise them. In the next couple of months, we will also release an update to the Windows Azure Storage Emulator for Windows Azure SDK 2.2. This update will support the 2013-08-15 version and the new features.

In addition to the above changes, please also read the following two blog posts that discuss known issues and breaking changes for this release:

Storage Emulator Guidance

As mentioned above, an updated Windows Azure Storage Emulator with full support for these new features is expected to ship in the next couple of months. Users attempting to develop against the current version of the Storage Emulator will receive Bad Request errors because the protocol version (2013-08-15) is unsupported. Until then, users who want to use the new features will need to develop and test against a Windows Azure Storage account to leverage the 2013-08-15 REST version.

Please let us know if you have any further questions either via forum or comments on this post.

Jai Haridas and Brad Calder

Comments (20)

  1. Ido Flatow says:


    I was trying to set the content-disposition property of an existing blob that I have in an existing storage account, but the content-disposition property does not appear in the portal when editing a blob.

    Is it currently not supported in the portal? or is it not showing because this is an old blob/storage?

  2. says:

    @Ido, the portal does not support it yet. If you do a GET on the blob – do you see it? You can use fiddler or better yet, just download from browser and you should see the content disposition in play if it has been set.



  3. Paul says:

    You mentioned a dependency on WCF Data Services for Azure Tables, but the NuGet package does not have a package dependency.   What I found was that if I upgrade the storage library to 3.0, my table storage code breaks complaining about a missing assembly (Microsoft.Data.Services.Client.dll). If I add the WCF Data Services Client package (aka Microsoft.Data.Services.Client) all is fixed.  If you have this dependency why isn't it part of the NuGet package?

    It also seems like I need the OData, Spatial & EDM packages when I just want to use the blob client library.  Why is that?

  4. says:

    @ Paul

    You are correct that the WCF Data Services dependency is not explicitly referenced by the NuGet package; installing that package (…/Microsoft.Data.Services.Client) should fix your issue. I will look into adding this dependency going forward.

    We also recommend that clients use the service layer provided in the Storage.Table namespace, as it provides many significant performance and extensibility improvements over the legacy implementation. (You can read more about the various table features here: …/announcing-storage-client-library-2-1-rtm.aspx.) If you do that, your code will have no dependency on WCF Data Services and you will not need that package.

    To your second question, the Spatial, OData, EDM, and JSON dependencies support the core table functionality. If you are only leveraging blobs or queues, you can remove them to keep deployment size smaller.

    Historically the storage client has been a single package exposing all three storage abstractions (Blobs, Tables, and Queues). From your question it seems some clients may appreciate a more segregated design allowing them to utilize only the specific storage services they require. This is good feedback to receive, and we will look into various options to address it going forward.

  5. Yann says:


    Can somebody tell me how I can make it work? E.g. Table.CreateIfNotExistsAsync returns "400 Bad Request", code: "InvalidInput" (in the storage emulator). Any ideas?



  6. Robert says:

    Great updates.

    But are the new client libraries not compatible at all with the current emulator? Or is it just the new features? Because I'm getting a 400 Bad Request error directly after updating the libraries and without changing any code.

    It seems you put the release on the stable feed on Nuget but it won't work with the current emulator there? The local emulator is vital when developing a solution based on Azure Storage.  So if I only want the release of the client library that works with the emulator, then how do I seamlessly do that via Nuget (especially if I'm a new user)?

  7. says:

    @Robert – the problem is that the emulator released a few months back does not know about the new version and hence does not support the 2013-08-15 version. The new client library supports only one version – 2013-08-15 – and is therefore unsupported by the emulator until we release an emulator update.

    @Yann – are you trying the new library with the existing emulator? That is not supported.

    We are working on updating the emulator and will try to have a release in the next month or so. However, this will not be an SDK release but a release of just the required DLLs.



  8. Simon Timms says:

    Do you have any examples of setting Content-Disposition via the SAS? I poked around at the API but I couldn't figure it out. I did get it working at the blob level, however, and that is brilliant.

  9. Manny says:

    I get 'Could not load file or assembly 'Microsoft.Data.Services.Client, Version=' after upgrading. I am only uploading to blob storage and not using table storage or queues in any way. It worked perfectly before the update. Do I need the WCF package as well? Seems rather odd.

  10. says:

    @Simon: Here is a sample code using Windows Azure Storage Client Library 3.0.

               SharedAccessBlobHeaders sasHeaders = new SharedAccessBlobHeaders()
               {
                   ContentDisposition = "Attachment; filename=anotherName.txt"
               };

               SharedAccessBlobPolicy sasBlobPolicy = new SharedAccessBlobPolicy()
               {
                   Permissions = SharedAccessBlobPermissions.Read,
                   SharedAccessExpiryTime = DateTime.UtcNow.AddDays(1)
               };

               string sasQueryParam = cloudBlockBlob.GetSharedAccessSignature(sasBlobPolicy, sasHeaders);

               Uri fullSasUri = new Uri(cloudBlockBlob.Uri, sasQueryParam);

  11. says:


    Thanks for reporting this issue. For the 3.0.0 release we had to move from System.Data.Services.Client in the GAC to Microsoft.Data.Services.Client on NuGet (…/Microsoft.Data.Services.Client). Currently there is a piece of shared code in exception translation that checks for WCF exceptions, which is why you may hit this while doing blob or queue traffic. We will be updating the package to add the NuGet dependency and working on decoupling this logic to allow blob and queue users to run without delay-loading the WCF or OData dependencies.

    In the interim please add a reference to the nuget package mentioned above.

  12. George says:

    Hi, great news. Is the JSON format supported by the OData table client? How can I switch to using JSON transfer in the existing SDK client?

  13. says:

    @George: You need to use the latest Windows Azure Client Library 3.0 mentioned in the blog.

    For more details regarding JSON and client SDK support, please refer to the following blog post:…/windows-azure-tables-introducing-json.aspx

  14. Jeroen Landheer says:

    We're now at mid-January 2014 – any news about a new storage emulator?

    One of my projects could really use the new features that have been released (Content-Disposition and CORS) but since we don't have a new emulator we're a bit stuck with this atm. (The size of the files and our internet bandwidth prevent us from developing against the cloud's storage services.)

  15. says:

    @Jeroen: We will provide a Storage Emulator preview release with support for the new version and features by end of month.



  16. acarlon says:

    Just my perspective, but I don't think that this should be a stable package on NuGet without emulator support. I spent quite a bit of time today tracking down 400 errors. Since '400 Bad Request' is such a generic error, it is not immediately obvious what is going on. Now I find that this is expected and I have to go through the messy process of rolling back packages.

  17. DJ Grossman says:

    Any word on a storage emulator that works with Microsoft.WindowsAzure.Storage 3.x?

  18. says:

    @DJ: We have a preview release of the Windows Azure Storage Emulator 2.2.1 that supports the 2013-08-15 version and works with the 3.0 client. Please see the following blog post for more information:…/windows-azure-storage-emulator-2-2-1-preview-release-with-support-for-2013-08-15-version.aspx


  19. says:

    Fantastic, thank you. 🙂

  20. says:

    Great article!
