I finally got to sit down and prepare my next demo application in the OfflineApplication series. This one shows a neat feature that we added to the runtime a little after the Beta 2 release. Your feedback was the driving force for including batching support in the product; the original plan was to punt it to the next release. All in all, it worked out very well, and I am glad, thanks to you, that we did include batching in V1.
Sync Services is all about empowering the developer to control the sync operation. The same principle applies to batching support, as you will see in the demo. The runtime provides three new session parameters that you can put to work when implementing batching logic. These parameters are:
- The number of batches you think you need to finish downloading changes
- Maximum anchor value that the runtime can store for you [optional]
- Batch size in terms of number of rows [optional]
To clarify the idea, let’s consider the following example: Your client has synchronized up to an anchor of 100. The current server-side anchor is 10,000. Now say you decide to batch the changes and estimate that you will need 10 batches to complete the job. You would then set the batch count parameter to 10 and the maximum anchor to 10,000. Recall that the sync runtime needs a sync_new_received_anchor value to enumerate changes for each batch. To supply one, you would advance the anchor by an equal slice of the remaining range each time: roughly 1,090 for the first batch, 2,080 for the next, and so on, until you hit the maximum anchor value of 10,000. You get the idea.
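The anchor arithmetic above can be sketched in a few lines. This is purely illustrative: in the real product these values flow through the new-anchor command's session parameters, and the function name below is made up for the example.

```python
def batch_anchors(last_received_anchor, max_anchor, batch_count):
    """Divide the anchor range (last_received_anchor, max_anchor] into
    batch_count roughly equal slices and yield the sync_new_received_anchor
    value to use for each batch."""
    span = max_anchor - last_received_anchor
    step, remainder = divmod(span, batch_count)
    anchor = last_received_anchor
    for i in range(batch_count):
        # spread any remainder over the first few batches so the
        # final anchor lands exactly on max_anchor
        anchor += step + (1 if i < remainder else 0)
        yield anchor

# The example from the text: client at anchor 100, server at 10,000, 10 batches
print(list(batch_anchors(100, 10_000, 10)))
# → [1090, 2080, 3070, 4060, 5050, 6040, 7030, 8020, 9010, 10000]
```

The last value yielded always equals the maximum anchor, which is what tells the runtime the download is complete.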
Well, that was simple batching: it just divides the anchor range, and it is purely arithmetic. But that is only one sample implementation. I bet you are thinking of a scenario where the same row was updated 10,000 times. That approach would make ten round trips only to download the row in the last batch. Not good, right? Of course not, but we invested very little in the previous example, so we can hardly ask more of it. So let’s invest some more and change the logic to count the number of changed rows before setting the number of batches and the new anchor for each batch. The idea is simple and requires a UNION over the timestamp columns of all tables participating in sync. It takes a bit more code, as shown in this MSDN ‘How to’ document.
As you can see, you have the control to decide what goes when. I am sure your scenario may have QoS requirements or load-balancing needs that lead you to come up with more creative ideas for implementing your batching/throttling logic.
I am no longer actively blogging about Sync Technologies. Please see the Sync Team Blog for more up-to-date content.