How to maintain Azure SQL Indexes and Statistics

[2017-07-16] Note: A new version is now available, adding an option to log operations to a table, including automatic retention of the last 3 operations (can be modified).

There is a lot of work that Azure SQL saves you, and most of the users we meet seem to believe that maintaining database indexes and statistics is one of the tasks you can forget about when you migrate to Azure SQL.

Statistics objects, as in the box version of SQL Server, have "Auto Update" set to ON by default.

The condition that triggers the auto update is a modification of at least 20% + 500 rows in the table. If the condition is met, the update occurs when the optimizer needs to generate an execution plan against that table. So, apart from the fact that the update takes some time and consumes IO resources (important information if you are on a cloud PaaS database), the update reads only a sample (roughly 30%) of the rows to generate the updated statistics; therefore, the statistics might not be 100% accurate.
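To see how far each statistics object is from that threshold, you can query the `sys.dm_db_stats_properties` DMV. The sketch below computes the legacy 20% + 500 rows threshold alongside the current modification counter:

```sql
-- List user statistics with their modification counters and the
-- legacy auto-update threshold (20% of rows + 500 modifications).
SELECT OBJECT_SCHEMA_NAME(s.object_id) AS ObjectSchema,
       OBJECT_NAME(s.object_id)        AS ObjectName,
       s.name                          AS StatsName,
       sp.rows,
       sp.modification_counter,
       sp.rows * 0.20 + 500            AS AutoUpdateThreshold,
       sp.last_updated
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECT_SCHEMA_NAME(s.object_id) <> 'sys'
ORDER BY sp.modification_counter DESC;
```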

  • You can use Query Store (QDS) to force the best plan, but then it is your responsibility to keep it up to date.

So, this maintenance still needs to be done by the DBA on a regular basis, and if it has not been done, your database can suffer from poor performance for that reason alone.

I created a maintenance solution (yes, I was inspired by Ola Hallengren's solution), just that this one is lighter and suited to Azure SQL DB (it is also compatible with SQL Server 2012 and higher).

Now, all you have to do is download the T-SQL script, run it on your Azure SQL DB, and then execute the maintenance procedure.

A quick remark about the options you have:

exec AzureSQLMaintenance @operation, @mode, @logtotable

@operation: {all, index, statistics} (no default)

all: will run both index maintenance and statistics update

statistics: will run only statistics update

index: will run only index maintenance

@mode: {smart, dummy} (Default: smart)

smart: will touch only modified statistics and choose the index maintenance action by fragmentation percentage

dummy: will run through all statistics or indexes, regardless of modification or fragmentation level.

@logtotable: {0, 1} (Default: 0)

0: this feature is off.

1: the operation will be logged to the table [AzureSQLMaintenanceLog] (if the table does not exist, it will be created automatically). The log is updated as the procedure progresses, so you can monitor the run through it. For every operation, you can find detailed information about the object before it was maintained (fragmentation percentage for indexes, and modification counter for statistics). By default, only the last 3 operations are kept in the log table; older execution logs are removed automatically. This can be changed in the procedure code.
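Putting the parameters together, typical invocations look like this (the procedure and log table are the ones created by the downloaded script):

```sql
-- Full maintenance (indexes + statistics), smart mode, no logging:
EXEC AzureSQLMaintenance @operation = 'all';

-- Statistics only, touching every object regardless of modifications:
EXEC AzureSQLMaintenance @operation = 'statistics', @mode = 'dummy';

-- Index maintenance with logging enabled:
EXEC AzureSQLMaintenance @operation = 'index', @mode = 'smart', @logtotable = 1;

-- Review the log of the last runs:
SELECT * FROM [AzureSQLMaintenanceLog];
```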

Download procedure code:



Scheduling and automation:

Automating Azure SQL DB index and statistics maintenance using Azure Automation


! updated: 2016-10-27 - fixed performance issue while updating statistics.

! updated: 2016-11-14 - fixed issue with the double rebuild of indexes. (thank you JPelttari for the comment)

! updated: 2016-11-29 - adding information about scheduling and automation.

! updated: 2017-07-16 - adding an option to log operation to table with a rotation of 3 last operations.

! updated: 2018-01-15 - added reference to another post that describes how to automate this maintenance task.

Comments (31)

  1. JPelttari says:

If there are multiple fragmented indexes in a table, this script rebuilds the indexes on that table multiple times, as it rebuilds them with the ALL identifier.

    1. Aman [MSFT] says:

Thank you JPelttari for the comment. You are absolutely right.
It’s now fixed.

  2. timvdh says:

    Very helpful script, thank you. I somehow thought that index fragmentation is no big issue on Azure SQL, maybe because maintenance functionality is absent. But running this script actually made a big difference for a clustered index of a table where 80% of the records were deleted before.

    1. Aman [MSFT] says:

      Thank you for the feedback 🙂

  3. syfre says:

Thank you, it is really helpful; it also works on an on-premises server (at least SQL 2012).
I suggest you add an option to check constraints to re-trust foreign keys and constraints; it would be useful after a bcp upload of data.

    1. Aman [MSFT] says:

      Thank you for the feedback.
About constraints and foreign keys: this is a different subject that is not handled by a regular maintenance task, as you first need to disable them before you run the bcp load, and this will probably be more selective on the database.
If you meant something else, please reply to this comment.

  4. LitalSQLForums says:


  5. Bogdan Mart says:

    Thank you, Yochanan!
You saved my day. Suddenly SQL performance degraded to the extreme. Index maintenance helped a lot!

  6. Antonio says:

    Hello Yochanan and thanks for your article!!

I just tried it with a test database in Azure. It looks like the stored procedure execution stops after several minutes. Maybe it is Azure, or some other security mechanism, killing the job.

    Do you know any solution to workaround this? Any other suggestions?

    My other choice would be to use Azure Automation and Runbooks to schedule the index and statistics maintenance. As described here:


  7. ABlake says:

    Thank you for developing this! Is there a good rule of thumb for how often should this be run on the database?

    1. Hi ABlake,

it really depends on your database activity pattern and usage.
I can tell you that the more often you run this maintenance, the better.
Say you are using a local application; then you can run it at night when the application is idle.
For a global application that is used 24 hours a day, the weekend might also be a good option.


  8. Babji Talluri says:

    Hello Yochanan thanks for your article,

I have a question: why are we updating the statistics with FULLSCAN? Is it not ideal to let Azure SQL Database decide the SAMPLE size when updating stats? My understanding was that, when omitted, SQL Server uses sampling to create the statistics and determines the sample size required to create a high-quality query plan.

    Thanks in advance,
    Babji Talluri

    1. Babji Talluri says:

      Any thoughts please?

Sampling is good, but not as good as a full scan for getting a full picture of the data distribution.
For instance, when auto create statistics creates a new statistics object, it scans a sample set in order to get to an execution plan quickly.
But when we run a maintenance task, we would like to bring the database to the best shape, which we can get by updating the statistics with a full scan.
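The trade-off described in this reply maps directly onto the sampling options of `UPDATE STATISTICS`. A sketch, using a hypothetical table and statistics name:

```sql
-- Sampled update: fast, approximate picture of the data distribution
-- (this is the kind of scan the auto-update mechanism performs):
UPDATE STATISTICS dbo.Orders (IX_Orders_CustomerID) WITH SAMPLE 30 PERCENT;

-- Full scan: reads every row for the most accurate histogram,
-- which is what the maintenance procedure opts for:
UPDATE STATISTICS dbo.Orders (IX_Orders_CustomerID) WITH FULLSCAN;
```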

        1. Babji Talluri says:

Thanks for the clarification, Yochanan. When I run update stats with full scan, it consumes a lot of eDTUs and causes the application to slow down every night. Any thoughts on how to get around this?

          Thanks in advance,

Updating statistics and rebuilding indexes are intensive, IO-bound operations; therefore, we recommend running the maintenance while the application is idle (or less used by application users). A conceptual workaround is to scale up the database before the maintenance takes place, and then scale back down to the original service tier when it finishes. But it might be too much effort (and money) for a tiny benefit (note that there is no SLA for the scale operation; it can complete instantly or take up to 6 hours, depending on the data size).
I hope this information helps.
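The scale-up/scale-down workaround can be scripted. A sketch, run against the `master` database; the database name, edition, and service objectives here are examples, and the ALTER is asynchronous, so the job must wait for the scale operation to complete before starting the maintenance:

```sql
-- Scale up before the maintenance window (run against master):
ALTER DATABASE [MyDb] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P2');

-- ...once the scale operation completes, run the maintenance...
-- EXEC AzureSQLMaintenance @operation = 'all';

-- Scale back down to the original tier afterwards:
ALTER DATABASE [MyDb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');
```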

  9. John Langston says:

    We’ve implemented it across 350+ databases with the actual execution being kicked off by an automation account runbook. Very nice.

I’m very happy to hear that it helped you get better performance.
Thank you so much for this comment.

  10. Marios Philippopoulos says:


Thank you for this posting. I have started using this code for maintenance on our Azure databases.

    One change I made was to include only stats with a percentage of row modifications > 5%:

    select
    ObjectSchema = OBJECT_SCHEMA_NAME(s.object_id)
    ,ObjectName = object_name(s.object_id)
    ,StatsName = s.name
    from sys.stats s cross apply sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
    where OBJECT_SCHEMA_NAME(s.object_id) != 'sys'
    and (100 * ([sp].[modification_counter] * 1. / [sp].[rows]) > 5. or @mode = 'dummy')
    order by
    sp.last_updated asc;


    1. Thank you for your comment.
Note that on very large tables, 5 percent might mean a high number of rows need to be modified before this script triggers a stats update.

  11. Marko says:

Hello there, for some reason I cannot build the procedure on my staging environment; it gives me a connection error on build. Any clue as to why this would happen? Thanks!

    1. I cannot think of any reason why the connection could have an issue when creating a procedure in the database, can you share more information about the process and the exact exception message?

  12. mohamed says:

    Hi Yochanan
What is the Microsoft recommendation for kicking off SQL maintenance tasks: Azure Automation and Runbooks, or Azure Functions?

An Azure Automation account is one feasible option to kick off the maintenance task. Keep following our blog; I will publish a how-to article that discusses the automation setup.

  13. Very Helpful Post.
Can you let me know if this also works for Azure SQL Data Warehouse?

    1. Hi Suhas, Thank you for the feedback.
This maintenance procedure is for Azure SQL DB only.
Maintenance in a DWH is different from SQL DB.

  14. With a geo-replicated Azure SQL Database, do the statistics and indexes need to be updated on each of the primary and secondary instances individually? Or if updates are made on the primary, does that index rebuild or statistics update on the primary sync with the secondary? Is there a blog post that discusses this topic as it relates to geo-replicated Azure SQL database instances…perhaps? 😉

    1. Hi Mike,
      Thank you for reaching out to us.
A geo-replicated database is read-only, and therefore you cannot run any maintenance task against it.
However, it’s enough to run the maintenance on the primary database.

We’ve been using your script for some time. Thank you! We’ve discovered a bug (see error message below). It’s trying to rebuild a clustered index using the ONLINE option for a table that has a column of data type TEXT. According to Microsoft, this is a limitation. The rebuild of a clustered index for a table which has a column of data type image, text, or ntext needs to be done offline.

    Is this something you could fix in your script? Thank you for the consideration!

FAILED : 2725 An online operation cannot be performed for index 'IX_tb_Employee_EmployeeID' because the index contains column 'Notes' of data type text, ntext, image or FILESTREAM. For a non-clustered index, the column could be an include column of the index. For a clustered index, the column could be any column of the table. If DROP_EXISTING is used, the column could be part of a new or old index. The operation must be performed offline.

    1. Thank you for your comment! much appreciated.
The text data type has been announced as deprecated.

However, as you faced the issue, it might still be in use in some scenarios, so I created issue #5 on the GitHub project.

      I will try to follow up on this as soon as possible.

      1. I’ve already added a patch to the script on our side. I added this block right below your “update #idxBefore set …”. Keep in mind this isn’t just for TEXT columns. It’s IMAGE, NTEXT or TEXT. Thanks again!

        -- Needed to add add'l logic to handle CLUSTERED indexes on tables with LOB data types of TEXT, NTEXT and IMAGE
        -- Those indexes need to be built OFFLINE, per Microsoft
        UPDATE IB
        SET IB.OnlineOpIsNotSupported = 1
        FROM #idxBefore IB
        INNER JOIN sys.columns c ON c.object_id = IB.object_id
        WHERE IB.index_type_desc = 'CLUSTERED INDEX'
        AND c.system_type_id IN (34, 35, 99) -- image, text, ntext
