Q&A on OCS & Sync Services for ADO.NET


Not surprisingly, we’ve been getting a lot of great questions about specific features and scenarios for our new Sync Services for ADO.NET (OCS).  Rafik has been fielding most of these on the Sync Services forums.  Since the Q&A for SQLce seemed popular, I thought I’d do the same here. 


Q: How does Sync Services compare to Merge Replication and RDA?


A: Merge Replication is our DBA oriented, feature rich, SQL Server based database “replication” product.  It’s designed for a DBA to configure a publication on the database exposing a set of tables (articles), enabling filtering and rules.  Clients then “subscribe” to the publication and receive a local database reflecting the publication.  It’s a very powerful and mature product that we will continue to invest in.  On the server side (publications), Merge Replication is a SQL Server only feature that is available in our SQL Server Workgroup, Standard and Enterprise SKUs.  Merge Replication, as a publisher, is not available in the free SQL Server Express SKU.  Merge Replication supports 2-tier sync and sync over HTTP(S) to enable internet synchronization.  However, I wouldn’t say that Merge is SOA oriented, in that you don’t have much control over what’s sent or received across the wire, nor do you have the ability to support additional transports.  It’s a fantastic end-to-end replication product that handles a lot of complicated scenarios.


Remote Data Access (RDA) is a developer oriented, simple but RAD sync technology.  It works over HTTP, so it enables internet protocol sync, but again, it’s not very SOA friendly.  While the client doesn’t directly open a connection to SQL Server, it does require the client to provide the connection string and query to be executed on the server.  So it pretty much violates the notion of keeping the server information from the client.  While it is limited, you can’t deny its popularity given its simplicity.  We continually hear of many people implementing RDA rather than Merge because their requirements are simple and RDA just worked great.  Which brings me to Sync Services for ADO.NET.


Sync Services for ADO.NET is our answer to this dilemma of having to choose between the DBA focused, powerful features of Merge, the simplicity of RDA, and the developer motivation to write their own.  When designing Sync Services we used RDA as our user model.  Debra Dove and her team did an excellent job with RDA, and we wanted to take it to the next level.  We used the provider and developer programming model of ADO.NET and wanted to leverage all the great transport, security and protocol work others were doing.  Essentially, rather than build additional solutions to existing problems, we wanted to scope the solution to what we needed to solve, and leverage others who were experts in their field.  The Sync Services for ADO.NET features are built by the same team that delivers Merge Replication.  I’ve been truly amazed at the knowledge these guys have.  It’s because of their experience that I have a hard time calling Sync Services a 1.0 product.  It’s really a culmination of all the great work from Merge, RDA, file synchronization, and even the WinFS sync work. 


The quick bullet for Sync Services: it’s a componentized synchronization framework, built on ADO.NET, but factored to provide a common sync platform for entities, files, and other formats.  Sync Services for ADO.NET is our first delivery in the Microsoft Synchronization Platform. 


The key advantage of Sync Services is its developer focus, with DBA empowerment.  Rather than force your DBA to understand all the details of how you’re going to synchronize data within your application, you engage the DBA for the portions that touch the server database.  The transports are up to you; we have a great WCF designer integrated story, but you can use other transports as well.  The local database schema/structure is up to the developer to decide, as it’s their app’s database.  Of course you can engage your DBA as well, but you’re not stuck with whatever schema a sync operation happens to hand you.  You can use just the client components with Java and Oracle on the server, or you can use just the server components with other technologies on the client.  If your data is locked up in Oracle, you can still use Sync Services to synchronize that data directly from Oracle to your client.  You don’t have to put SQL Server in the middle just to enable sync.   


The point here is regardless of what cards you’ve been dealt, we want to help you get your app done to make your users happy.  The more Microsoft products you use, the better the experience we can provide, but we’re not locking you in, or out of our platform just because something is out of your control.


Q: What’s the roadmap for Merge, RDA and Sync Services?


Merge Replication will continue to be our database replication product and will have Katmai investments in the next release.  Merge will be the DBA tool for replicating a database, and those that are already using Merge shouldn’t feel like we’re abandoning them by any means.  If Merge is working for you, don’t worry; we’re continuing to take feature work and make improvements.  In fact, many of the Sync Services features will make their way into Merge as well.  That’s about all I can say for now.  We’ll announce more about Katmai later on.


RDA will eventually be phased out.  We truly believe the Sync Services features address the simplicity of RDA without any of the limits imposed by RDA: incremental changes, the ability to synchronize several tables in one transaction while handling all the interleaving of inserts, updates and deletes, support for other ADO.NET providers, etc.  If you’re already using RDA, we’re not killing it yet, but we won’t be doing any new work there either.  We do expect to deprecate it within the next release or so as Sync Services are released.  If you haven’t yet deployed RDA, but are looking into it, definitely look at the Sync Services CTP.  If you need to go into production now and can’t wait until Sync Services are released, then by all means use RDA.  It’s not like it has so much functionality that you won’t be able to easily switch to Sync Services later on.  <g>


Sync Services is where we’re making a lot of our investments.  It’s just the first delivery in our new Microsoft Synchronization Platform.  I’ll write a different blog post on our naming.  (I love discussing naming, …not).  Sync Services are our developer oriented, SOA enabled data synchronization features.  We won’t have all the database replication features of Merge, as we really focus on synchronizing data.  Sync Services provides an end-to-end story, but is a componentized model allowing you to get in the middle, or completely replace one end. 


Q: Can I use Merge and Sync Services together?


A: As we’ve all seen with our SQLce and Express discussions, one size doesn’t fit all.  But we also don’t believe you need 10 ways to solve the same problem.  Just as Express is our entry point to our Data Service platform and SQLce is our client/embedded platform, we expect Merge and Sync Services to address the needs of two types of scenarios.  It’s likely that some scenarios, such as a branch office, may actually use both Merge and Sync Services: Merge to replicate data between the branch office and the corporate office, and Sync Services to enable branch workers to go out into the field. 


Q: Does Sync Services support N Tier?


A: Absolutely.  The beauty of the N is that it can imply one to many.  With Sync Services we have server and client providers.  Actually, we like to think of them as local and remote providers, as we currently focus on hub/spoke, but we’ll be enabling P2P as well in the future.  You can start with 2-tier and move to N-tier.  You can intermix 2-tier and N-tier based on where the clients are connecting from.  When you first start working with Sync Services you may ask why you have to specify the tables you’re synchronizing on both the client and server providers.  That’s because we want to make sure you can easily split the client and server code to different tiers.  All the “intimate” knowledge of the server can easily be moved to the mid tier, while the client provider configuration is maintained on the client.  Even the Sync Designer allows you to easily split the client and server provider code.  It’s quite sweet when you see it.  Screencast coming soon…
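To make the split concrete, here’s a minimal sketch of wiring up a SyncAgent for 2-tier versus N-tier.  The connection string and the MySales* type names are made up for illustration; only the SyncAgent/provider pattern comes from the product.

```csharp
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServerCe;

SyncAgent agent = new SyncAgent();

// The client provider always lives with the app, next to the SQLce file.
agent.LocalProvider = new SqlCeClientSyncProvider("Data Source=Sales.sdf");

// 2-tier: the server provider runs in the same process.
// agent.RemoteProvider = new MySalesServerSyncProvider();

// N-tier: the same server provider moves behind a service, and the client
// hands the agent a proxy instead (MySalesServiceClient would be a
// generated WCF proxy implementing the sync service contract).
// agent.RemoteProvider = new ServerSyncProviderProxy(new MySalesServiceClient());

agent.Configuration.SyncTables.Add(new SyncTable("Customers"));
SyncStatistics stats = agent.Synchronize();
```

The point is that nothing else in the client code changes when you move from 2-tier to N-tier; only the RemoteProvider assignment does.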


Q: What transports do you support?


A: What do you want?  As noted above, the sync team wants to get out of the transport business.  We have many bright people in Microsoft thinking about those problems.  In Orcas, the Visual Studio designers will focus on enabling WCF, but you can use Web Services, SSE, SyncML, or, if you can figure out how to convert jelly beans to .NET objects, those as well.  Essentially, you simply plug in a matching service and a proxy, and you’re good to go. 


Q: Do you support column level tracking?


A: Not yet.  For this first release, we only support row level tracking.  We are working on column level tracking, but it does require additional functionality on the server, and we wanted to make it easy for developers to start synchronizing data from their existing databases.  As with any good database design, you should think about how you partition your tables.  For various reasons, it’s good to separate images or other large blobs from your primary table into other tables and maintain a 1:1 relationship.  That’s not meant to be an excuse, and we will be implementing column level tracking in the future.  Using features like ADO.NET v3 Entities, you can roll up the 1:1 mappings into a single object, allowing your developers to work in a normal fashion while managing performance and isolation in your database.


Q: Do you support custom conflict resolvers?


A: Of course.  How could we not?  On both the server and client providers we expose a conflict event.  In that event you’ll get the client and server rows that represent the conflict.  You can simply force an overwrite, or implement very custom logic, such as determining whether the person who made the change is a manager or the owner of that particular customer account.  Based on that custom logic, you simply make the appropriate change.  All this is in managed code, with your favorite .NET language. 
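As a sketch, handling that event on the server provider looks roughly like this.  IsManager is a hypothetical helper standing in for your business logic, and serverProvider is assumed to be an already configured DbServerSyncProvider.

```csharp
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.Server;

serverProvider.ApplyChangeFailed += delegate(object sender, ApplyChangeFailedEventArgs e)
{
    if (e.Conflict.ConflictType == ConflictType.ClientUpdateServerUpdate)
    {
        // ClientChange/ServerChange are DataTables holding the conflicting
        // rows; inspect them with whatever custom logic makes sense.
        if (IsManager(e.Conflict.ClientChange))
            e.Action = ApplyAction.RetryWithForceWrite; // client wins
        else
            e.Action = ApplyAction.Continue;            // keep the server row
    }
};
```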


Q: Do you support partitioning/filtering?


A: Of course.  We don’t really expect people to synchronize terabytes of their data to all their clients.  The partitioning is the normal horizontal and vertical partitioning.  You simply provide the query that represents the filter you wish to support.  You can do joins, etc.  It’s just a query.  The client sends up as many parameter values as you need.  There’s no limitation on the number of parameters.  In fact, you can even intercept calls on the server and set sync parameter values based on other logic.  In WCF you can determine who the client is and, based on that info, send them their customers without exposing the SalesPersonId to the client, so they can’t substitute another value. 
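A rough sketch of a filtered server query; it really is just a parameterized command.  The table, column and parameter names are invented for the example, while the @sync_last/new_received_anchor parameters are the standard anchor range parameters.

```csharp
using System.Data;
using System.Data.SqlClient;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.Server;

// Server side: the incremental query is just T-SQL with your filter.
SyncAdapter customers = new SyncAdapter("Customers");
SqlCommand inserts = new SqlCommand(
    "SELECT CustomerId, Name, SalesPersonId FROM Customers " +
    "WHERE SalesPersonId = @SalesPersonId " +
    "AND CreateTimestamp > @sync_last_received_anchor " +
    "AND CreateTimestamp <= @sync_new_received_anchor");
inserts.Parameters.Add("@SalesPersonId", SqlDbType.Int);
customers.SelectIncrementalInsertsCommand = inserts;

// Client side: the value travels up as a sync parameter.  On the mid tier
// you could overwrite it based on the caller's identity instead of
// trusting what the client sent.
agent.Configuration.SyncParameters.Add(new SyncParameter("@SalesPersonId", 42));
```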


Q: Does the server track each client individually?


A: No.  This is one of the major differences when compared to Merge.  One of the powerful features of Merge is that it knows all its clients, so when a client connects, it already has the data ready for it, and it easily supports “data repartitioning”.  In Sync Services, the server has no idea who all the clients are.  The benefit here is that Sync Services doesn’t have the same scalability constraints.  You can synchronize as many clients as your server can handle queries for.  The server doesn’t necessarily know it’s a publisher; rather, it’s just answering queries.


Q: Does Sync Services support multiple publications?


A: Yes and no.  Sync Services doesn’t utilize the pub/sub model per se.  You can configure the server provider to offer 20 tables you want to synchronize.  One client simply says it cares about 3.  Another client cares about a different 3.  Another client cares about 4, which overlap the first two clients.  In fact, we also support a client dynamically adding tables.  A sales person may cover for another sales person for a week and need to bring in an additional product line.  Within the app, the developer can change the filtering query, and off they go.
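In code, that table selection is just the client’s SyncTable collection; something like the following sketch (table names invented):

```csharp
using Microsoft.Synchronization.Data;

// The server may offer 20 tables; this client opts into three.
SyncAgent agent = new SyncAgent();
agent.Configuration.SyncTables.Add(new SyncTable("Products"));
agent.Configuration.SyncTables.Add(new SyncTable("Orders"));
agent.Configuration.SyncTables.Add(new SyncTable("OrderDetails"));

// Covering for another sales person?  Opt into one more table
// before the next Synchronize() call.
SyncTable extra = new SyncTable("SpecialtyProducts");
extra.CreationOption = TableCreationOption.CreateNewTableOrFail;
agent.Configuration.SyncTables.Add(extra);
```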


Q: How are schema changes handled?


A: Unlike Merge, which is geared around replicating a database, Sync Services is geared around synchronizing data.  I’m not a big believer that, generally speaking, the DBA simply adds a column to the server, the UI automatically updates on the client, and life is good.  While it can be done, most of the time I’d bet you want some control over where and how the new element is displayed, want to add some interaction logic to the client, tab order, etc.  We really treat schema updates as an app update.  It’s a holistic update of the app overall.  The model we’ve gone with for Sync Services is the following:



  • A new requirement is defined, say AddressLine3.  The DBA would add the column to the server.  All the normal rules apply.  If the column is non-nullable, then a default should be provided. 

  • The developer involved with the sync layer would most likely create a new version of the sync service, say v2.  This means that apps that were using v1 can be slowly migrated, or at least migrated with some level of control.  If the user is in the middle of an important deal, the last thing they need is a forced software upgrade.  Ever been in the middle of something important when IT forces an update that reboots your computer or app?  Software is an enabler; it should help me achieve my goals, not fight me because IT thinks it’s important now.

  • The app developer updates their service proxy to point to v2 of the sync service, exposing the extra column.

  • In the version check code, the app author can either choose to reset the table, or execute an ALTER TABLE script locally, adding the additional column.  They may even bring down a single data call to retrieve the values for the new column on all the existing rows.

  • The developer then decides what they want to do with the new element, updating their UI, logic, etc.

So, while we didn’t implement something as simple as point-and-click, we think it tends to fit the SOA model, where apps may consume services from other apps, and they should have control over how and when they consume new schema.
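The local alter-table step in the list above might look something like this sketch.  The version-check plumbing is omitted, and the connection string and AddressLine3 column follow the hypothetical example in the list.

```csharp
using System.Data.SqlServerCe;

// After detecting the app has moved to v2 of the sync service, widen the
// local SQLce schema in place instead of resetting the table.
using (SqlCeConnection conn = new SqlCeConnection("Data Source=Sales.sdf"))
{
    conn.Open();
    SqlCeCommand alter = conn.CreateCommand();
    alter.CommandText = "ALTER TABLE Customers ADD AddressLine3 nvarchar(60) NULL";
    alter.ExecuteNonQuery();
}
```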


Q: How are constraints, keys, and other db objects brought down to the client?


A: This again falls in the category of Sync Services being about synchronizing data, not replicating a database.  Sync Services does do some schema and even database creation with SQL Server Compact Edition.  If you’re starting from scratch and you synchronize for the first time, the SQLce database will be created based on the connection string properties: name, encryption, password, etc.  It will then create all the tables the client has said it’s interested in.  Remember, just because the server exposes 20 tables doesn’t mean the client must use all of them.  The client determines which tables it wants to consume with the SyncTable collection.  When the tables are created, primary keys are created, datatypes are mapped to the client’s datatypes, and nullability is applied.  No additional indexes, constraints, defaults, etc. are applied.  There are CreatingSchema/SchemaCreated events fired where you can either initially create the schema to be used, or alter the schema after the tables are created. 
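A sketch of those two events on the client provider: shaping the schema before the tables exist, then adding what sync doesn’t bring down afterward.  Table and index names are invented, and it’s assumed (as in the CTP samples) that the event args expose the schema being built and the creating connection/transaction.

```csharp
using System.Data.SqlServerCe;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServerCe;

SqlCeClientSyncProvider client = new SqlCeClientSyncProvider("Data Source=Sales.sdf");

// Adjust the schema before the tables are created...
client.CreatingSchema += delegate(object sender, CreatingSchemaEventArgs e)
{
    e.Schema.Tables["Customers"].Columns["CustomerId"].RowGuid = true;
};

// ...and add the indexes, defaults, etc. that sync doesn't apply, after.
client.SchemaCreated += delegate(object sender, SchemaCreatedEventArgs e)
{
    SqlCeCommand cmd = (SqlCeCommand)e.Connection.CreateCommand();
    cmd.Transaction = (SqlCeTransaction)e.Transaction;
    cmd.CommandText = "CREATE INDEX IX_Customers_Name ON Customers (Name)";
    cmd.ExecuteNonQuery();
};
```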


Q: Does sync services handle parent/child/grandchild relationships?


A: Yes.  Unlike RDA, where you can only sync one table at a time, Sync Services handles the hierarchical nesting of inserts, updates and deletes.  In fact, you can even control it separately on the server from the client.  On the server, tables are placed in the SyncAdapter collection.  The order of the SyncAdapters defines the order in which updates will be applied.  Inserts and updates are done from the top down, while deletes are done from the bottom up.  The same is done on the client, in the SyncTables collection.  This allows the server to control its order, while allowing the client to control its order of updates. 
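Ordering is expressed simply by the order in which you add items; a sketch, assuming serverProvider, ordersAdapter and detailsAdapter are already built:

```csharp
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.Server;

// Server: parent before child.  Inserts/updates are applied top-down and
// deletes bottom-up, so one ordering serves both.
serverProvider.SyncAdapters.Add(ordersAdapter);
serverProvider.SyncAdapters.Add(detailsAdapter);

// Client: the SyncTables collection order plays the same role,
// independently of the server's ordering.
agent.Configuration.SyncTables.Add(new SyncTable("Orders"));
agent.Configuration.SyncTables.Add(new SyncTable("OrderDetails"));
```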


Q: Can I update only a few tables at a time?


A: Yes.  Say you want to only synchronize your lookup tables, states, codes, etc. once a day.  You can create a specific service just for your lookups, and another for your product catalog. 


Q: Can I update everything in a single operation, or can I control things more granularly?


A: Within the SyncAgent, you can utilize the SyncGroup to determine the grouping of updates.  In the previous example, you may choose to put all the lookup tables in their own individual groups.  If the connection drops while you’re synchronizing your lookups, it can pick up where it left off the next time it syncs.  However, when synchronizing orders, you probably don’t want Orders to ever go up or down without OrderDetails.  Simply put the Orders and OrderDetails tables in the same SyncGroup, and you’re all set.
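A sketch of that grouping (table and group names invented):

```csharp
using Microsoft.Synchronization.Data;

SyncAgent agent = new SyncAgent();

// Lookup tables can fail and resume independently: one group each.
SyncTable states = new SyncTable("States");
states.SyncGroup = new SyncGroup("LookupStates");

// Orders and OrderDetails should never move separately: one shared group,
// so they go up and down in the same transaction.
SyncGroup orderGroup = new SyncGroup("OrderGroup");
SyncTable orders = new SyncTable("Orders");
SyncTable details = new SyncTable("OrderDetails");
orders.SyncGroup = orderGroup;
details.SyncGroup = orderGroup;

agent.Configuration.SyncTables.Add(states);
agent.Configuration.SyncTables.Add(orders);
agent.Configuration.SyncTables.Add(details);
```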


Q: Does Sync Services support batching for large data sets?


A: Yes, but not quite yet.  We initially scoped this out of the first release, but we believe we’ll be able to get it in, so look for it sometime around March 07.


Q: How does Sync Services track changes?


A: Sync Services uses an anchor based model.  Each time a sync operation occurs it gets a reference mark from the server.  It could be the server’s DateTime, or a TimeStamp (RowVersion).  The client saves that value for the next sync operation.  Each time the client synchronizes a particular SyncGroup, it first requests a new server anchor.  It then executes the queries on the server using the last anchor as the low range and the new anchor as the high range.  This gets a consistent set of changes across several queries.  In future releases of the Microsoft Synchronization Platform we’ll be supporting a knowledge based sync model as well as the anchor based model discussed here.  Rafik does a great job explaining it in his blog.
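On a SQL Server back end, the new-anchor query can be as simple as reading the database rowversion; a sketch, assuming the tracking columns are rowversion (timestamp) based:

```csharp
using System.Data;
using System.Data.SqlClient;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.Server;

DbServerSyncProvider server = new DbServerSyncProvider();

// Fetch the new high-water mark; the incremental queries then select rows
// between @sync_last_received_anchor and @sync_new_received_anchor.
SqlCommand anchorCmd = new SqlCommand(
    "SELECT @" + SyncSession.SyncNewReceivedAnchor + " = @@DBTS");
anchorCmd.Parameters.Add(
    "@" + SyncSession.SyncNewReceivedAnchor, SqlDbType.Timestamp)
    .Direction = ParameterDirection.Output;
server.SelectNewAnchorCommand = anchorCmd;
```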


Q: How are deletes purged?


A: On the server, deletes are either kept in a tombstone table, or simply tracked by some sort of active/status flag in the primary table.  Since this version of Sync Services isn’t tightly coupled to SQL Server, we actually don’t do anything.  In general, we’d expect the DBA to write a scheduled task to purge tombstone records on their determined interval.  You can expect us to do more in the “future”, tease, tease, tease…


On the client, SQLce purges deleted records once it confirms data has been sent to the server.


Q: Can I purge old data on the client without triggering a delete on the server?


A: Yes.  While we don’t have a simple API to do this today, you can delete a bunch of rows on the client based on whatever criteria you decide, then simply “AcceptChanges” on the client before those changes are sent to the server.  Of course you could also intercept these on the server and toss the deletes to protect your server data. 


Q: Does Sync Services support low bandwidth type sync scenarios?


A: By low bandwidth, I mean: can I synchronize only the important things now, and catch up later?  Yes.  You can upload only, download only, or synchronize just a particular SyncGroup based on your own logic at the time.
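Direction is set per table, so a low-bandwidth pass might look like this sketch (table names invented):

```csharp
using Microsoft.Synchronization.Data;

// Push the critical local changes now...
SyncTable orders = new SyncTable("Orders");
orders.SyncDirection = SyncDirection.UploadOnly;

// ...and only pull reference data when bandwidth allows.
SyncTable priceList = new SyncTable("PriceList");
priceList.SyncDirection = SyncDirection.DownloadOnly;
```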


Q: When will Sync Services ship?


A: Sync Services for ADO.NET will ship at the same time Visual Studio Orcas ships.  This is currently scheduled for Q4 2007.  Note: This is not meant to be the official place to get the timeframe for Orcas, but rather just saying that our current plan is to ship Sync Services within SQL Server Compact Edition 3.5, which will ship with Orcas.


Q: Will Sync Services ship on both the desktop framework and the .NET Compact Framework?


A: Yes, but at different times.  As of today, March 16th ’07, we are scheduled to ship for the full framework, but we are not planning on shipping Sync Services for the device platform in the Orcas product.  We do plan to ship the client components for Sync Services soon after Orcas, but are still working out the schedule.  The problem is that the various .NET Compact Framework teams, including the Visual Studio for Devices and Sync teams, have a lot of work to manage across many different device platforms on a very short schedule, and we haven’t been able to get all the appropriate ship level test coverage complete.  We have designed, and done preliminary testing with, the client components working and syncing over Web Services.  We are still hopeful we can pull it in, but at this point, we’re just not ready to commit to Orcas, but rather shortly thereafter.


Q: Will Sync Services or SQL Server Compact Edition be in the .NET Framework, or .NET Compact Framework?


A: No, we are not shipping within either framework, but rather shipping as an add-on component.  Why?  Because we wanted more flexibility with our ship schedule.  SQLce will ship 2-3 times between .NET 2.0 and .NET 3.5.  While it would be nice to ride the distribution of the frameworks, with the embedded/private deployment options of SQLce and Sync Services, we felt it was better to have more flexibility with our schedule. 


Q: Are these the only questions?


A: I doubt it.  So, keep them coming, and I’ll update this FAQ as I receive them.


Thanks for all the great questions.  Keep them coming as they help us make sure we’re shipping the right features in the right order.


Steve

Comments (48)

  1. ErikEJ says:

    Hi Steve, what is the story for device development? As far as I can see Sync Services will not be available for devices in the Orcas timeframe.

  2. ErikEJ says:

    Sorry Steve, somehow overlooked: Will Sync Services ship on both the desktop framework and the .NET Compact Framework?

  3. Steve Lasker has compiled a FAQ with questions and answers people might have about using the ADO.NET

  4. Steve Lasker says:

    Hi Erik,

    I’ve added info on ship dates, including our offset ship cycle for devices.  We’re not thrilled with this plan, but it’s the best we can do for now.  We are working to pull it in, so …please stand by…

    Steve

  5. Fox-Jazz says:

    We have our own homegrown sync. Which delivers data via web services.

    It is an on-demand sync. Is the data compressed before it is sent (to/from) client.

    Also what about directory sync. We use a hash compare for each file to “extra verify” the clients run the code we expect them to.

    Ever since implementing this structure, we have had litterally NO problems with code updates and deliverables.

    We are looking into a different project, and it is important that the data is compressed before sent to the server. Or is this defined by the transport?

  6. Steve Lasker says:

    Hi Fox/Jazz

Today, we don’t do file sync.  We will be adding file sync in a future version of the overall Microsoft Sync Platform (MSP).  And today we don’t do any compression; we leave it up to the transport.  That said, I’m sure this will be a common request, and we’ll need to address it more directly than just saying it’s "not my job" and looking to the transport.  However, it won’t be in the first release.  

    Steve

  7. Q: What’s the roadmap for Merge, RDA and Sync Services A: Merge replication will continue to be our database

  8. Q: Can I use Merge and Sync Services together? A: As we’ve all seen with our SQLce and Express discussions

  9. GSR says:

    Hi Steve,

    I have downloaded Microsoft Synchronization Services for ADO.NET CTP from:

    http://www.microsoft.com/downloads/details.aspx?FamilyId=75FEF59F-1B5E-49BC-A21A-9EF4F34DE6FC&displaylang=en

    This download package contains the SQL Server Compact Edition 3.5 and I have installed the same. Before that I was having SqlCE 3.1 and able to work with that fine from C# application on Desktop.

    After installing the new one I could see V3.1 and V3.5 folders at: C:\Program Files\Microsoft SQL Server Compact Edition. But when I tried to create a new data connection from Visual Studio 2005, it directly making use of V3.1.

    Does anybody know how to make use of this newer version in my C# application with in Visual Studio 2005 and also enabling the same in SQL Server Management Studio?

    It’s an urgent requirement for me; please do spend couple of minutes in providing the solution for this.

    Thanks in advance

  10. Steve Lasker says:

    Hi GSR,

    Unfortunately 3.5 will not work with the designer features of Visual Studio 2005.  The designers are looking for a particular version, 3.1.  My suggestion is to use the 3.1 designers, but change your project reference to the 3.5 version and you should be fine.  I believe in the CTP we don’t differentiate database versions.    

    Steve

  11. This week’s question comes from Oran who asks: Hi Udi, I’m enjoying the recent discussion on Entity…

  12. Em says:

    Hi Steve, will there be a sync client provider for SQL Express. If yes – is this a priority and when could we expect to see this.

    Cheers .E

  13. Steve Lasker says:

    We have had several requests for a SQL Server Express client provider.  While not in our initial plans as we’ve been thinking about Express as the server and SQLce as the client, we will likely have something in the Sync Services for ADO.NET v2 timeframe.  Unfortunately, I can’t provide more detail as we’re still working out some of our planning schedules.  

    Steve

  14. One of the common options to synchronize several distributed sql servers to a central Sql server 2005

  15. Manish says:

    hello Mr. Steve.

    I am developing sql express-server merge replication for web synchronization using RMO. I want to rollback sync when any error occurs or user presses cancel to roll back synchronization operation.

    how can i abort or roll back or stop sychronisation so that next synchronisation can be done without problems….?

    Thanking you

    Manish

  16. Steve Lasker says:

    Hi Manish,

    I checked with Vijay, PM for Merge Replication and this is what he had to say:

    You cannot really rollback synchronization.  You can cancel the agent by killing the backgroundworker thread that is hosting the MergesynchronizationAgent.synchronize call.

    Synchronization makes incremental progress in batches and updates the synchronization watermark once for each batch so the SQL database may have already received or sent changes before you stopped the synchronize call

    Steve

  17. Troy says:

    Hi Steve,

    I’ve been Googling for an answer but haven’t found it, so I hope this isn’t a stupid question…

    How do sync services/sql compact handle data types that are supported in sql server [express] and not in sql compact ? Things like varchar(max) (or just [n]varchar > 4000 characters), XML, user defined types, etc. ?

    Thanks.

  18. Steve Lasker says:

    Hi Troy,

    Sync Services, RDA and Merge Replication all use a mapping to upsize/downsize the datatypes between SQLce and SQL Server. SQLce supports a subset/superset of datatypes.  For instance, SQLce doesn’t support both nVarChar and VarChar; it only supports nVarChar, which is the double-byte superset of VarChar.  XML becomes nText. For a complete listing of the mappings, the online docs can be found here.

    SQLce will continue to evolve its data types for the appropriate set in a compact footprint.

    Steve

  19. Allan Downs says:

    Where are we with the sync services for devices?

  20. jainmanishs says:

    when i run SnapshotGenerationAgent.GenerateSnapshot(), it fails.

    the log file generated is as follows

    ……………………….

    ……………………….

    Flushing cabinet folder

    The replication agent had encountered an exception.

    Source: Replication

    Exception Type: Microsoft.SqlServer.Replication.FciException

    Exception Message: The replication agent had encountered a file compression (cabinet) library error while calling ‘FCIDestroy()’.

    Message Code: 4

    what this error indicates .. i am not able to track what is mistake

    pls let me konw asap

    FOLLOWING IS THE CODE THAT I AM USING TO RUN SNAPSHOT AGENT

    private static EReturnValue generateSnapshot()
    {
        if (!publisherConn.IsOpen)
        {
            return EReturnValue.Failure;
        }

        EReturnValue retVal = EReturnValue.Success;
        SnapshotGenerationAgent agent = new SnapshotGenerationAgent();
        try
        {
            // SET THE STATUS EVENT
            mPublicationStatusInfo = new StringBuilder(1024); // CAPACITY IS 1024
            agent.Status += new AgentCore.StatusEventHandler(snapshotAgentStatus);
            agent.Distributor = DistributorName;
            agent.DistributorLogin = DBLogin;
            agent.DistributorPassword = DBPassword;
            agent.Publisher = PublisherName;
            agent.PublisherLogin = DBLogin;
            agent.PublisherPassword = DBPassword;
            agent.PublisherDatabase = PublicationDatabaseName;
            agent.ReplicationType = ReplicationType.Merge;
            agent.Publication = PublicationName;
            agent.GenerateSnapshot();
            retVal = EReturnValue.Success;
        }
        catch (Exception ex)
        {
    #if debug
            MessageBox.Show(ex.Message);
    #endif
            retVal = EReturnValue.Failure;
        }
        finally
        {
            writeStatusInfoInLogFile();
        }
        return retVal;
    }

    Manish

  21. jainmanishs says:

    Hello Mr. steve ,

    I am facing one more problem.

    i am not clear how first time synchronisation and later synchronisation are different.

    When first time synchronisation fails then my both publisher database and subscription database gets corrupted i mean to both are in inconsistant state. at subscriber database i am not allowed to insert update or delete.

    while delete i am having following error:

    Msg 20092, Level 16, State 1, Procedure MSmerge_disabledml_6929E0D8B2654CB68853D26BDC6EAE68, Line 8

    Table ‘[dbo].[WMArea]’ into which you are trying to insert, update, or delete data is currently being upgraded or initialized for merge replication. On the publisher data modifications are disallowed until the upgrade completes and snapshot has successfully run. On subscriber data modifications are disallowed until the upgrade completes or the initial snapshot has been successfully applied and it has synchronized with the publisher.

    Msg 3609, Level 16, State 1, Line 1

    The transaction ended in the trigger. The batch has been aborted.

    This problem happens when intial snapshot is applied for the first time. here i need to do syncronisation with existing database. the tables which are published may be already filled. but before first time sync i am emptying all the published tables. and then i am starting synchronisation for first time.

    Please provide possible solution.

    Thanking u

    Manish

  22. Partha says:

    When there is change committed in the Server Database what happens during Synchronization?

    (i) All the Data from the Server is downloaded to the client and only the changes are applied and seen in the Client Database.

    (ii) Only the changed row, is downloaded and seen in the client database.

    This task is done by the SyncAgent. Am I right?

  23. Steve Lasker says:

    Hi Partha,

    Sync models are typically patterned around a flow of the client first pushing up changes, then asking for server changes which is the model Sync Services follows.  And yes, the SyncAgent is the orchestrator of the overall flow.

    Steve

  24. jainmanishs says:

    finally i come with work around. I removed the code that is creating

    cab(zip)  from snapshot files. when i avoided compression of snapshots

    it behaves nicely. I feel that there is problem in library related to

    FCIDestroy() who is performing compression.

    By checking all performance we decided to remove compression of

    snapshot and now it works fine.

    thanks a lot for your help.

  25. jainmanishs says:

    About the trigger-related problem: I deleted all triggers related to synchronization and the problem is solved. The SQL command I used to delete them is as follows:

    strQry = new StringBuilder(
        // Step 1: build a ],[-delimited list of all merge replication triggers
        // (applied on the published tables) whose names start with 'MSmerge'.
        "DECLARE @sql nvarchar(4000);" +
        " SELECT @sql = COALESCE(@sql + '],[', '') + name FROM sys.triggers" +
        // COALESCE returns the first non-null expression among its arguments.
        " WHERE name LIKE 'MSmerge%' AND parent_class = 1;" +
        " SET @sql = '[' + @sql + ']';" +
        //" PRINT @sql;" +
        // Step 2: drop all the triggers applied on the published tables.
        " SET @sql = 'DROP TRIGGER ' + @sql;" +
        //" PRINT @sql;" +
        " EXEC sp_executesql @sql;");

    Thanks a lot Mr. Steve

    Link to the problem: http://blogs.gotdotnet.com/stevelasker/archive/2007/03/18/QAforOCS_2D00_SyncServicesForAdoNet.aspx#5143341

  26. Connectivity Cross Version Compatibility This blog post explains the Merge Replication connectivity cross

  27. stanley.broo@gbo.se says:

    Do you know of any resource (tutorial, sample app) that shows how to secure your sync service (connection string sent over the internet)?

    I would like to use SSL/HTTPS.

  28. Steve Lasker says:

    Hi Stanley,

    If you use the N-tier solution, the client doesn't have any knowledge of the server-side information.  All the client needs to know is the service address and any authentication information for the service.  The service holds the server-side connection string and any specific T-SQL statements.
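    As a rough sketch of that shape (the WCF proxy class name and database path are assumptions, not the sample's actual names), the client wraps its service proxy in a ServerSyncProviderProxy, so only the service address lives on the client:

```csharp
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServerCe;

// Hypothetical WCF client generated from the sync service contract; it
// exposes the provider operations (GetSchema, GetChanges, ApplyChanges,
// GetServerInfo) over the service boundary.
CustomerSyncServiceClient serviceProxy = new CustomerSyncServiceClient();

SyncAgent agent = new SyncAgent();
agent.LocalProvider = new SqlCeClientSyncProvider(@"Data Source=Customers.sdf");
// The proxy forwards remote provider calls over WCF; no server connection
// string or T-SQL ever reaches the client.
agent.RemoteProvider = new ServerSyncProviderProxy(serviceProxy);

agent.Configuration.SyncTables.Add("Customers");
agent.Synchronize();
```

Run the WCF endpoint over an https binding and you get the SSL transport security Stanley asked about without changing this client code.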

    Here’s a screencast,
    Going N Tier w/WCF, Synchronizing data using Sync Services for ADO.NET and SQL Server Compact Edition

    and a sample project:
    SyncNTierWithWCF Sample

    Steve

  29. Shane says:

    Hi Steve

    I’ve watched the screencasts and looked through much of the sample code available from MS and elsewhere, and I’m left wanting. I’m looking to make use of SQL Server 2008 change tracking, and the only close-to-real example code I can find is partial and difficult (impossible?) to integrate with the Visual Studio 2008 Sync Designer generated code (sample code referenced here: http://blogs.msdn.com/agujjar/archive/2008/01/09/sync-services-with-sql-2008.aspx).

    I’m looking for a real sample project that makes use of SQL Server 2008 change tracking as the "anchor" to allow for partial client updates. Any tips, advice, pointers, samples or anything else you can point me to?

  30. Steve Lasker says:

    Hi Shane,

    I know we have some internal samples.  Let me see what I can dig up.

    Steve

  31. Steve Lasker says:

    Hi Shane,

    Here’s a walkthrough that should help.  There’s a bit of a formatting issue that we’re trying to get fixed, so just copy/paste and the VS/SQL tools will clean things up.

    How to: Use SQL Server Change Tracking

    And just to help, here’s the VS patch to enable the SQL Server 2008 CTP with VS 2008

    Visual Studio 2008 Support for SQL Server 2008, Community Technology Preview

    Steve

  32. Balaji says:

    Hi,

      Is it possible to create a web application using the Microsoft Sync Framework? If so, how would I build the application? How does it check network connectivity, and how is the data automatically populated to the local database? When the network comes back, how is the data automatically moved to the server database? How do I synchronize the two databases in both online and offline mode? Can you please provide a sample web application (solution) for understanding the connectivity as well as the synchronization features?

  35. andreas.forstinger says:

    Hi Steve,

    thank you for sharing this with us.

    I've got a question though. You wrote about "low bandwidth scenarios" and the possibility of syncing only specific sync groups.

    I could not find a way to implement such behaviour, and did not get much help on this from the Synchronization Services forum. Can you point me in the right direction?

    What I want to achieve is to sync only the most important tables over a low-bandwidth connection, and do a kind of major sync when cradled or on WiFi.

    Regards,

    Andreas

  37. Steve Lasker says:

    Hi Andreas,

    Sync Groups are a means to define different transaction boundaries, which also map to different payloads sent between the local and remote providers.  So it's not really a SyncGroup that you'd use to differentiate syncing at different times, but rather a SyncAgent session.

    To sync different tables at different times, you can configure different SyncAgents.  The LookupSyncAgent could contain the list of lookup tables, such as states, codes, etc.  You could then create an OrdersSyncAgent that contains the Orders, OrderDetails, Customers, etc.

    Sync Services for ADO.NET doesn't actually contain any means to determine high- or low-bandwidth connections, but you can use a block from the Patterns & Practices team to determine cost-based synchronization.  Once you've decided it's a high- or low-bandwidth connection, you can then instantiate different SyncAgents to control which tables you wish to sync.
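    A sketch of that split (the table names and the bandwidth check are assumptions; provider construction is elided):

```csharp
using Microsoft.Synchronization.Data;

// Small agent for reference data; cheap enough for a low-bandwidth link.
static SyncAgent BuildLookupSyncAgent(ClientSyncProvider local, ServerSyncProvider remote)
{
    SyncAgent agent = new SyncAgent();
    agent.LocalProvider = local;
    agent.RemoteProvider = remote;
    agent.Configuration.SyncTables.Add("States");
    agent.Configuration.SyncTables.Add("Codes");
    return agent;
}

// Larger agent for transactional data; run it when cradled or on WiFi.
static SyncAgent BuildOrdersSyncAgent(ClientSyncProvider local, ServerSyncProvider remote)
{
    SyncAgent agent = new SyncAgent();
    agent.LocalProvider = local;
    agent.RemoteProvider = remote;
    agent.Configuration.SyncTables.Add("Customers");
    agent.Configuration.SyncTables.Add("Orders");
    agent.Configuration.SyncTables.Add("OrderDetails");
    return agent;
}

// Your own bandwidth/cost check (e.g. the P&P block) picks the agent to run.
SyncAgent agent = isHighBandwidth
    ? BuildOrdersSyncAgent(localProvider, remoteProvider)
    : BuildLookupSyncAgent(localProvider, remoteProvider);
agent.Synchronize();
```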

    Hope that helps,

    Steve

  38. gavinyan says:

    With the release of Visual Studio 2008 we also get v1.0 of ADO.NET Synchronization Services that allows y…

  39. andreas.forstinger says:

    Hi Steve,

    thank you for expanding on this.

    Regards,

    Andreas

  40. Ben says:

    In my app I want to mirror a table on my client, but without the data. I want Sync Services to bring down only the structure of the table; the client will only upload the records in this table. Two questions:

    1) Is there an option or setting so that Sync Services will only pull down the empty table?

    2) Having multiple clients, and given that the server does not distinguish each client, what is to keep there from being repeated primary key constraint violations when the clients create new records in a table that has an auto-generated identity field for a primary key?

  41. Steve Lasker says:

    Hi Ben,

    Yes, you can set the table's SyncDirection to UploadOnly; the table's schema is still created on the client, but no server data is downloaded.  It's on the SyncAgent.Configuration.SyncTables collection.

    For keys, we really don't do anything special here.  The assumption is the app will create unique keys.  You can either use GUIDs, create your own unique identifiers that combine some hashing of a client ID and an incremental number, or use identity locally but pair it with a ClientId, so on the server you wind up with a compound primary key of (ClientId, NumberId).
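    A sketch of those two key strategies (the values and counter are placeholders):

```csharp
using System;

// Option 1: GUID keys -- globally unique no matter which client creates the row.
Guid orderId = Guid.NewGuid();

// Option 2: compound keys -- each client is assigned its own ClientId once,
// rows take a local sequence number, and the server's primary key is the
// pair (ClientId, NumberId), which can never collide across clients.
int clientId = 42;                 // assigned once per client by the app
int nextLocalNumber = 0;           // local IDENTITY-style counter
int numberId = ++nextLocalNumber;  // per-row value on this client
```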

    Steve

  42. Nenad Marković says:

    How does Sync Services work with NHibernate or some other ORM tool/framework?

  43. david hary says:

    Are there error logs of the sync events?  If so, where can they be found?  Otherwise, is there a way to get more info on sync failures?

    Thanks

    David

  44. Sanand says:

    Please give me an example of how to implement a row filter in an occasionally connected smart device application using Microsoft Sync Services for ADO.NET v1, preferably in VB.

    Thanks

    Sanand  

  46. lakshmi says:

    Hi,

    In Orcas (Visual Studio 2008), which transport is more efficient:

    Synchronization Services for ADO.NET using WCF

    or

    Synchronization Services for ADO.NET using web services?

    Thanks

    Lakshmi

  47. Iain says:

    Hi,

    I cannot find an example of how to pass parameters from the client device to the WCF service, as in your answer:

    Q: Do you support partitioning/filtering?

    A: Of course.  We don't really expect people to synchronize terabytes of their data to all their clients.  The partitioning is the normal horizontal and vertical partitioning.  You simply provide the query that represents the filter you wish to support.  You can do joins, etc.  It's just a query.  The client sends up as many parameter values as you need.  There's no limitation on the number of parameters.  In fact, you can even intercept calls on the server and set sync parameter values based on other logic.  In WCF you can determine who the client is, and based on that info, send them their customers without exposing the SalesPersonId to the client to substitute another value.  
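    The parameter flow described in that answer can be sketched roughly like this (table, column, and parameter names are placeholders; in an N-tier setup the server-side commands live in the service tier, not on the client):

```csharp
using System.Data;
using System.Data.SqlClient;
using Microsoft.Synchronization.Data;

// Client side: supply the value for this sync session.
agent.Configuration.SyncParameters.Add(
    new SyncParameter("@SalesPersonId", currentSalesPersonId));

// Server side: the SyncAdapter's incremental queries reference the same
// parameter name, alongside the built-in anchor parameters.
SqlCommand selectIncrementalInserts = new SqlCommand(
    "SELECT CustomerId, CustomerName FROM Sales.Customer " +
    "WHERE SalesPersonId = @SalesPersonId " +
    "AND CreateTimestamp > @sync_last_received_anchor " +
    "AND CreateTimestamp <= @sync_new_received_anchor");
selectIncrementalInserts.Parameters.Add("@SalesPersonId", SqlDbType.Int);
selectIncrementalInserts.Parameters.Add("@sync_last_received_anchor", SqlDbType.Timestamp);
selectIncrementalInserts.Parameters.Add("@sync_new_received_anchor", SqlDbType.Timestamp);

SyncAdapter customerAdapter = new SyncAdapter("Customer");
customerAdapter.SelectIncrementalInsertsCommand = selectIncrementalInserts;
serverProvider.SyncAdapters.Add(customerAdapter);
```

The parameter values set on the client travel with the sync session to the service, and the service can also inspect or overwrite them there (e.g. substituting the authenticated caller's id), which is the interception scenario the quoted answer mentions.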

    Can you please provide a link, or an example of the client and WCF code required to achieve this? I have been trying to pass parameters using syncAgent.Configuration.SyncParameters on the client, but in the WCF service code I don't know how to access the parameters (if indeed they are accessible, which I doubt).

    Many Thanks

    Iain
