Lesson Learned #78: DataSync – Cannot enumerate changes at the RelationalSyncProvider for table ‘customertable’ – Execution Timeout Expired

Hello Team,

Yesterday, I worked on a service request with a lot of lessons learned: our customer was using Azure SQL Data Sync to synchronize around 15 million rows.

Every time they tried to synchronize the data, they received the following error message: Sync failed with the exception 'Cannot enumerate changes at the RelationalSyncProvider for table 'dbo.customertable'. Check the inner exception for any store-specific errors. Inner exception: SqlException Error Code: -2146232060 - SqlError Number:-2, Message: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding. Inner exception: The wait operation timed out For more information, provide tracing ID ‘c4a39cc2-xxxxx’ to customer support.'

This error message indicates that the Data Sync process was unable to finish synchronizing the data. Below I would like to share our troubleshooting process and the solution:


  • Using the Azure portal, we did not find high resource consumption in terms of CPU, Data IO, or Log IO. I recommend using the DMV sys.dm_resource_stats, which reports resource consumption in 15-second intervals.
  • Next, we needed to identify which process or query was taking all this time, so we enabled SQL Auditing to capture the running queries. Also, using sys.dm_exec_requests, we found a query that stayed in a suspended state the whole time. This query joins the user table dbo.customertable with the table that contains the changes captured by Data Sync, for example customertable_dss_tracking under the datasync schema.
  • I then asked our customer whether they had a maintenance plan for rebuilding the indexes and updating the statistics, and they told me they did not.
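The two checks above can be sketched in T-SQL. The DMVs sys.dm_resource_stats and sys.dm_exec_requests are as described; the column lists and the 'suspended' filter are only a starting point to adapt to your workload:

```sql
-- Resource consumption: Azure SQL Database records one row every 15 seconds.
SELECT end_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
FROM sys.dm_resource_stats
ORDER BY end_time DESC;

-- Currently running requests: look for sessions stuck in a suspended state.
SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time, r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.status = 'suspended';
```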

  • As a suggestion, we rebuilt all the indexes for this user table customertable, and after running the synchronization process again, it completed in a few seconds. If you need one, review this URL.
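A minimal maintenance sketch for the affected table (the table name is the one from this case; whether you also want FULLSCAN statistics or a scheduled job is an assumption to adapt):

```sql
-- Rebuild all indexes on the user table and refresh its statistics.
ALTER INDEX ALL ON dbo.customertable REBUILD;
UPDATE STATISTICS dbo.customertable WITH FULLSCAN;
```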


Other Lessons Learned:

  • During the troubleshooting process, our customer had configured an on-premises SQL Server environment as a member of this Data Sync group, and we noticed that this process took too much time. We found three important things that I would like to share with you:
    • In every synchronization, the Data Sync agent running as a service on-premises needs to download the data and save it in a temporary file in the Windows temp folder, so it is very important that:
      • There is enough free space for the data to be synchronized.
      • I/O throughput is key for performance. You can use Performance Monitor to review the disk transfer speed (MB/s). Please use SSDs if possible.
      • Network bandwidth also matters when downloading the file. You can use Performance Monitor to review the transfer speed.
    • On the on-premises SQL Server:
      • Keep a proper maintenance plan for the tables. If the data changes with some frequency, it is very important to have a maintenance plan (updating statistics and rebuilding indexes).
      • If possible, install the agent on the same server as SQL Server; if you run the Data Sync agent on a separate server, make sure the network connection between the two servers is as short (low-latency) as possible.
    • In terms of Azure SQL Database performance:
      • During the synchronization process, I found that Data Sync runs a FULL OUTER JOIN between the data table and its tracking table. With this join method, T-SQL reads all the rows of both tables, and depending on the number of rows and the columns selected, this can cause high waits on MEMORY_ALLOCATION, PAGEIOLATCH_SH, CXCONSUMER, and the typical ASYNC_NETWORK_IO.
        • For this reason, please run the following T-SQL to identify the stored procedure that is running and the part of it that Azure SQL Database is executing at the moment of the synchronization:
          SELECT r.*, t.*,
                 SUBSTRING(t.text, r.statement_start_offset/2 + 1,
                     ((CASE WHEN r.statement_end_offset = -1
                            THEN LEN(CONVERT(nvarchar(max), t.text)) * 2
                            ELSE r.statement_end_offset
                       END) - r.statement_start_offset) / 2 + 1) AS running_statement
          FROM sys.dm_exec_requests AS r
          CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
          • In this case, the table [dbo].[customertable] has 15 million rows and the tracking table has only 300 rows; in summary, every synchronization reads 15 million rows. In a situation like this, where CPU usage is not high but Data IO and memory usage are, the best option is one of the Premium database tiers, which offer better storage performance.
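To make the cost concrete, the change-enumeration query that Data Sync generates has roughly this shape (a simplified illustration, not the actual stored procedure; the join key CustomerId and the tracking columns are assumptions):

```sql
-- Simplified shape of the change enumeration: a FULL OUTER JOIN between the
-- 15M-row base table and the 300-row tracking table forces a scan of the
-- large table, which explains high Data IO and memory usage without high CPU.
SELECT c.*, t.sync_row_is_tombstone, t.last_change_datetime
FROM dbo.customertable AS c
FULL OUTER JOIN datasync.customertable_dss_tracking AS t
    ON c.CustomerId = t.CustomerId;
```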


