(long, but, hopefully, worth your while)
In my (non-authoritative) opinion, it’s not that web services are your only, or necessarily the best, upgrade path; it’s that web services (for the most part) don’t allow you to do things that will have no upgrade path…
With SOA (service-oriented architecture) being such a buzzword in today’s world, it’s important to identify where traditional object-oriented programming is the best choice, where SOA principles should be followed, and what technology should be used in each case.
Certain components have no requirement to perform as a service; e.g. components used by only one application. These components are frequently deployed to the same computer as the calling tier and are instantiated in the same application domain, e.g. a user interface calling its business/data layer. In these cases, tight coupling using .NET strongly typed data types is not only acceptable but is the best choice, due to performance, flexibility, security, ease of programming and maintenance, and the lack of any requirement to be a “service”.
Components that logically belong to one application but are used by another should follow SOA principles.
As Richard Turner said, SOA is an “architectural notion” that should have no connection to a specific communications protocol; it’s a way of architecting apps to have fewer dependencies on one another. Service-oriented systems are unique in that they don’t assume anything about what’s on the other end, unlike past-generation apps that were tightly coupled and platform-specific.
It’s important to note that SOA does not automatically translate to web services. One can create an SOA application that does not use web services but rather leverages other technologies, as long as SOA principles are followed.
For example, when writing services, one cannot pass objects by reference or pass callback delegates, as frequently done when using RPC technologies like DCOM or .NET Remoting. The reason for this restriction is that a service-oriented technology cannot make as many assumptions about the network as the local area network technologies DCOM or Remoting. Callbacks and object references require a callback path from the server to the client whereby the server can contact the client whenever needed. In an environment where we have to assume that services are distributed across platforms, trust boundaries and wide area networks, this sort of bi-directional connectivity very often does not exist. Clients may reside behind firewalls, or network address translation (NAT) services, or simply don’t actively listen for messages. And even if they do, security restrictions typically mandate that any sender, including those returning calls, will have to authenticate and be authorized at any endpoint. Therefore, implicit backchannels such as those established by callbacks and object references simply don’t work in a services world. Instead, such backchannels must be explicitly established using so-called “duplex” conversations.
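As a sketch of such an explicitly established duplex conversation, here is what it looks like in WCF terms (the service, callback, and type names here are hypothetical, used only for illustration):

```csharp
// The backchannel is declared explicitly via CallbackContract,
// instead of relying on an implicit remoting-style object reference.
[ServiceContract(CallbackContract = typeof(IPriceCallback))]
public interface IPriceService
{
    [OperationContract(IsOneWay = true)]
    void Subscribe(string symbol);
}

public interface IPriceCallback
{
    [OperationContract(IsOneWay = true)]
    void PriceChanged(string symbol, decimal price);
}

public class PriceService : IPriceService
{
    public void Subscribe(string symbol)
    {
        // The callback channel is part of the explicit, negotiated
        // conversation; the client opted into it when it connected.
        IPriceCallback client =
            OperationContext.Current.GetCallbackChannel<IPriceCallback>();
        client.PriceChanged(symbol, 0m);
    }
}
```

The point is that the reverse path exists only because both parties agreed to it in the contract, not because an object reference happened to leak across the wire.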
A number of respected industry experts have expressed their views on the web services vs. remoting subject matter, including Don Box (owner of core web services plumbing), Gopal Kakivaya (architect of .NET remoting), Clemens Vasters, Juval Lowy, Ingo Rammer, Michele Leroux Bustamante, and many others.
Traditionally, a number of factors were considered to choose between remoting and web services, including:
• Ability to pass rich .NET types between AppDomains, Processes or Machines (we’re talking Hashtables, true business objects, etc.)
• Visibility (defined as the number and types of clients, and the medium over which they will access the service; visibility introduces the possibility of firewalls and access to the system over the Internet)
• Scalability (including bandwidth considerations)
• Programming model
These factors are not viewed in isolation, but as a whole in the context of the system performance and functional requirements.
In the future, Windows Communication Foundation (WCF), a.k.a. Indigo, a framework for developing connected systems, will provide a single programming model unifying the web services, remoting, enterprise services, and MSMQ technologies.
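To see what that unification buys you, consider a sketch of one contract exposed over two transports at once, a web-service-style HTTP endpoint and a remoting-style TCP endpoint (addresses and type names are illustrative, assuming the WCF API as previewed at the time):

```csharp
// One contract, one implementation, several transports.
using (ServiceHost host = new ServiceHost(typeof(YourService)))
{
    host.AddServiceEndpoint(typeof(IYourService),
        new BasicHttpBinding(), "http://localhost:8000/YourService");
    host.AddServiceEndpoint(typeof(IYourService),
        new NetTcpBinding(), "net.tcp://localhost:8001/YourService");
    host.Open();

    Console.WriteLine("Service is running. Press Enter to stop.");
    Console.ReadLine();
}
```

The transport choice becomes a deployment-time binding decision rather than a rewrite of the service code, which is exactly why the remoting vs. web services decision matters less than the service-oriented shape of the code itself.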
It is recommended that, regardless of the selected RPC protocol, the application follow service-oriented principles, where exposed services:
• Are autonomous units of application logic
• Have explicit boundaries
• Share schema and contract (interface), not class
• Express compatibility through policy
So, if interop is not required and both sides use .NET, why take the hit of a web services implementation? Why not use remoting, as long as the following principles (in addition to the ones stated above) are followed:
• In/out parameters will either be simple data types or be marked with the [Serializable] attribute. WCF supports serializing types marked with [Serializable], which allows .NET remoting types to work with WCF without change (http://msdn.microsoft.com/msdnmag/issues/06/02/WindowsCommunicationFoundation/default.aspx).
• Complex data types passed in and out of the service will not have any implementation, just data definitions. Types are unique and immutable and require the sharing of an assembly, whereas schema is a description of the XML content that acts like a contract between parties and can be used without the need for an assembly. In the WCF timeframe, DataContract and DataMember attributes will be added to these classes as follows:
[DataContract]
public class YourDataClass
{
    [DataMember]
    public int YourDataMember1;

    [DataMember]
    public string YourDataMember2;

    . . .
}
The service contract will also carry the required attributes:
[OperationContract]
YourDataClass GetMyData(int someId);
. . .
• Finally, no SoapExtensions (in ASP.NET web services) or custom message sinks (in remoting) will be used, as they are known to have no supported upgrade path to WCF.
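Put together, a minimal remoting setup that respects these principles might look like the sketch below: the client shares only the interface and the serializable data-only type, never the implementation class (all type names, the port, and the URI are hypothetical; server and client would be separate programs):

```csharp
// Shared assembly: the contract (interface) and a data-only type.
[Serializable]
public class CustomerData
{
    public int Id;
    public string Name;
}

public interface ICustomerService
{
    CustomerData GetCustomer(int customerId);
}

// Server assembly: the only place the implementation lives.
public class CustomerService : MarshalByRefObject, ICustomerService
{
    public CustomerData GetCustomer(int customerId)
    {
        CustomerData data = new CustomerData();
        data.Id = customerId;
        data.Name = "Sample customer";
        return data;   // crosses the wire by value, via [Serializable]
    }
}

public static class Server
{
    public static void Main()
    {
        ChannelServices.RegisterChannel(new TcpChannel(8080), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(CustomerService), "CustomerService.rem",
            WellKnownObjectMode.SingleCall);
        Console.ReadLine();   // keep the host alive
    }
}

// Client: references only the shared assembly.
public static class Client
{
    public static void Main()
    {
        ICustomerService proxy = (ICustomerService)Activator.GetObject(
            typeof(ICustomerService),
            "tcp://server:8080/CustomerService.rem");
        CustomerData customer = proxy.GetCustomer(42);
    }
}
```

Note the SingleCall activation mode: each request gets a fresh, stateless object, which keeps the service autonomous and its boundary explicit, and leaves nothing (no callbacks, no custom sinks) that would block a later move to WCF.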
Only time will tell… But to me, remoting does have its place.