I’d like to surface a debate I had recently about the application of the four tenets, or t4T for brevity. Please note my blog disclaimer, which applies to all my posts even when I don’t explicitly mention it.
Behind object orientation there’s Algebra: the idea of classes, methods, equivalence operators… many of the concepts in OOP map onto mathematical equivalents, Abelian groups, rings, models, that kind of thing. And the ones which don’t can still be readily expressed in mathematical terms in a useful way. The advantage is that what is sound at the math level will hold for the OO counterpart as well, so I can apply predictions and proofs, or manipulate ideas, in a totally general fashion. Those rules can be fed into a compiler, which will diligently resolve and apply them in a predictable fashion. If a language did not follow the rules and definitions of OO closely, I’d lose those advantages. To be totally strict, this happens in pretty much every language not based on the functional paradigm, but that’s not my point here. What I’m trying to do is give a feeling of what it means to have a ~1:1 relationship between implementation concepts and the theory behind them.
Behind service orientation there’s Experience: choices which proved a roadblock for further evolution, issues with the scalability of distributed software, maintenance nightmares, hard or impossible integration, hopeless barriers to interoperability… all those, and many others, have been experienced, observed, studied, dealt with and resolved in different ways by a generation of architects, programmers, testers, etcetera. The t4T are a beautiful distillate of the findings in that respect: every single tenet is hermetic, dense with meaning and implications, and deserves study and debate. They have been studied and debated, and we are far from done learning from them. In my personal interpretation, I see them as a way to maintain freedom: each “rule” is an enabler, which leaves you free to do something that “traditional” thinking may have prevented. What happens if you do violate the autonomy tenet, say by sharing memory among instances? In the first iteration of your solution, nothing. If you acted in this way, it means that doing so was compatible with your present requirements (otherwise, the solution would simply have refused to work as expected). The day you face a surge in traffic, however, and you need to scale out, you’ll have to address the fact that you can’t use shared memory anymore. If you had designed your solution without shared memory from the start, you would now be ready to scale out without further work; if instead your entire solution revolves around shared memory, and you have to rip the whole thing apart to work around the problem, you are in a very bad situation. However, the real world is not black & white. Let’s say that you did your homework, and thanks to the t4T or equivalent knowledge you realize that shared memory may prevent you from scaling out the solution, should you one day need to: on the other hand, shared memory gives you advantages today that you don’t want to give up.
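To make the trade-off above concrete, here is a minimal sketch of the “keep your options open” design. My context in this post is .NET/WCF, but the pattern is language-neutral, so plain Java works fine for illustration; all the names here are invented, not from any real system. The idea: hide the shared state behind a small interface, so that today’s in-process map can later be replaced by an external, scale-out-friendly store without touching any caller.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative abstraction over "shared memory": callers never know
// whether state lives in-process or in an external store.
interface SessionStore {
    void put(String key, String value);
    String get(String key); // returns null when the key is absent
}

// Today's choice: in-process shared memory. Fast, but it ties every
// consumer to a single machine, violating the autonomy tenet.
final class InProcessSessionStore implements SessionStore {
    private final Map<String, String> data = new ConcurrentHashMap<>();
    public void put(String key, String value) { data.put(key, value); }
    public String get(String key) { return data.get(key); }
}

// The day you must scale out, you write a second implementation backed
// by a database or a distributed cache and swap it in: the callers,
// coded against SessionStore, are untouched.
```

Nothing here is WCF-specific: the point is only that when shared memory becomes the bottleneck, the cost of the switch is one new class, not a redesign.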
So you use shared memory today: but you make sure that, in your design, the switch to another mechanism can be performed at little (or no) cost, should the need arise. Are you a bad, bad, bad architect because you don’t adhere to the t4T today? Well, shared memory is particularly nasty, so I hesitate to rest my point on it. Let me add another example. You are designing a part of your backend with WCF; a couple of services will have to talk to each other. Everything happens well behind your firewall, and the data handled are totally “internal” (in the sense that they represent entities which make little sense outside the context of your company). You need good performance: you choose the binary encoding of WCF, as opposed to text XML over HTTP. This gives you whopping performance, and you know that the day you need to move one of the two services to another platform you’ll be able to fall back to XML over HTTP just by uncommenting a line in your app.config. Are you a bad, bad, bad architect? In my opinion, you are a good one: you are exploiting the advantages that the situation offers you today, but you know what to do if and when things change. On a day of good weather, you don’t keep the umbrella in your hand all the time just because it may rain: but you make sure to have one in the trunk of your car (or in the driver’s door, if you drive a Passat: colleagues are kindly requested not to make fun of me [:)]), should you see clouds gathering.
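For the curious, here is a sketch of what that “line in your app.config” could look like. The service and contract names are invented for illustration; the bindings are the standard WCF ones (netTcpBinding uses the binary encoding, basicHttpBinding speaks text XML over HTTP), and a real configuration would have more moving parts.

```xml
<!-- Hedged sketch: MyCompany.* names are invented for illustration. -->
<system.serviceModel>
  <services>
    <service name="MyCompany.OrderService">
      <!-- Today: binary encoding over TCP, fast but .NET-to-.NET. -->
      <endpoint address="net.tcp://localhost:8081/orders"
                binding="netTcpBinding"
                contract="MyCompany.IOrderService" />
      <!-- The day you need interop, comment the endpoint above and
           uncomment this one: text XML over HTTP. -->
      <!--
      <endpoint address="http://localhost:8080/orders"
                binding="basicHttpBinding"
                contract="MyCompany.IOrderService" />
      -->
    </service>
  </services>
</system.serviceModel>
```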
Bottom line: IMHO, it is inefficient to treat the t4T as religious dogma. They provide a solid thinking framework, which has immense value especially when you are considering the strategic implications of your designs: but in the end they are just a tool in your hands for making informed decisions. I can imagine a number of factors that may locally lead you to violate them: but if they helped you assess the consequences and guided you in planning contingencies, then they have already accomplished their purpose.
Bottom line 2: WCF and the t4T are here to make you more powerful and more aware. But beware, don’t get confused. WCF unifies different distributed computing technologies, and all of them will enjoy the advantages of WS-*: but this by no means implies that from this moment on every single piece of distributed software will have to be a Service. WCF makes it easy to implement a service, and certain aspects (like sharing messages rather than classes, working by contract, leveraging policies, etc.) will simply come for free, without conscious work from you: but you should expect to use this technology also to implement finer-grained objects, which from the design point of view will be closer to classic Components than to Services.
Remark: SO is NOT an evolution of OO, as I sometimes still hear. Ah, and BTW, COM+ is NOT OOP; a component may be constituted by an object, but it’s all about activation & marshalling rather than inheritance and polymorphism: runtime vs design time. I won’t elaborate here, as there’s overabundant literature on the subject, but I just wanted to point this out in order to prevent misinterpretations of the parallel I leveraged above [:)]