This scenario is simple and relatively frequent. A solution that is normally deployed into a single site with a single central database now needs to be accessed from many sites in many geographies over some kind of WAN, or perhaps even the public Internet. In each geography (e.g. the UK, Hong Kong) there is a cluster of users and decent infrastructure/connectivity. There are many approaches to this, but they roughly boil down to:
- Try your damnedest to keep using a central database: look at async operations, look at application-level locking vs DBMS locking, look at improved connectivity from the remote sites, look at building a more “forgiving” client (one that expects failures but tries not to bother the user with them)
- If the above won’t work, or is unlikely to, then and only then look at having “many databases”.
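To make the “forgiving client” idea concrete, here is a minimal sketch of the retry pattern it implies: wrap each database call in exponential backoff with jitter so transient WAN failures are absorbed rather than surfaced to the user. The names (`with_retries`, `TransientDBError`) are hypothetical; in practice you would catch whatever timeout/connection exception your driver actually raises.

```python
import random
import time


class TransientDBError(Exception):
    """Stand-in for whatever timeout/connection error the DB driver raises."""


def with_retries(operation, attempts=4, base_delay=0.5):
    """Run a database operation, retrying transient failures with
    exponential backoff plus jitter, so the user rarely sees an error."""
    for attempt in range(attempts):
        try:
            return operation()
        except TransientDBError:
            if attempt == attempts - 1:
                raise  # out of retries: finally surface the failure
            # Back off before retrying; jitter avoids synchronized retries
            # from many clients hammering the central server at once.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

A client built this way tolerates flaky connectivity to the central database, but it cannot hide sustained outages or high latency, which is what eventually pushes you to the second option.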
“Many databases” is an interesting and large topic. It spans such fun areas as “in-memory databases”, “read-mostly data”, “two-phase commit”, “write-through caches”, “multi-master”, etc.
We have been discussing this recently for a UK ISV, and increasingly I favor a solution built around SQL Server 2005 (Express?) in each geography plus a central SQL Server 2005. But the interesting question is how best to connect them. We have been discussing SQL Service Broker as a possible technology for this, but the other (even more obvious) approach is Peer-to-Peer Transactional Replication, which was introduced in SQL Server 2005. This is significantly different from, and better than, merge replication in SQL Server 2000.
Merge to Peer to Peer
Peer-to-Peer Transactional Replication looks very interesting and a great match for a system that is read-heavy and has non-overlapping writes. Alas, I have no experience of it. I would love to hear from you if you are using Peer-to-Peer, good and bad.
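The “non-overlapping writes” requirement deserves a concrete illustration. One common way to guarantee it (an assumption on my part, not something the post prescribes; SQL Server identity ranges or GUID keys are alternatives) is to scope primary keys to the site that created them, so no two peers ever insert or update the same row:

```python
import itertools


def make_id_factory(region, width=12):
    """Generate primary keys prefixed with the local region code,
    e.g. 'UK-000000000001'. Because each peer only ever writes keys
    in its own range, peer-to-peer replication never sees two sites
    touching the same row, so write conflicts cannot arise.
    (Hypothetical scheme for illustration only.)"""
    counter = itertools.count(1)

    def next_id():
        return f"{region}-{next(counter):0{width}d}"

    return next_id
```

For example, a factory built with `make_id_factory("UK")` yields `UK-000000000001`, `UK-000000000002`, and so on, while the Hong Kong peer generates its own disjoint `HK-…` range; every site can still read all rows replicated from the others.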