Less Loosely Coupled Than Meets The Eye

I don’t know that it is possible to write anything like a unitary software system in a way that is truly loosely coupled.   It’s not that you can’t make boxes and lines that are cleanly separated in all sorts of pretty ways, though that’s hard enough.  The problem is that even if you manage to do that, what you end up with is actually still pretty tightly coupled.

What do I mean?

Well, let me use Visual Studio as an example.  It’s made up of all kinds of extensions which communicate through formal interfaces.  Putting aside the warts and just looking at how it was intended to work, it seems pretty good.  You can make and replace pieces independently, and there is a story for how new features light up; it’s all pretty good.  Yes, it could be better, but let’s for the moment idealize what is there.  But is it loosely coupled, really?

Well, the answer is heck no.

These systems all share resources in fundamental ways that actually tie them together pretty tightly.  It starts with the address space, where each extension can be victimized by the others, and it goes on from there.  In fact every important resource on the machine is subject to direct interference between nominally isolated extensions.  Is this avoidable?
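To make the address-space point concrete, here is a minimal sketch (not how Visual Studio actually works; the plugin names and memory budget are invented for illustration) of two nominally isolated plugins living in one process: a hypothetical `SpellChecker` and `Debugger` that never call each other, yet one can still starve the other of memory.

```python
# Sketch: two "isolated" plugins share one process heap.
# All names and numbers here are hypothetical.

class Process:
    """Models a single address space with a fixed memory budget, in bytes."""
    def __init__(self, budget):
        self.budget = budget
        self.used = 0

    def allocate(self, plugin, nbytes):
        """Allocate on behalf of a plugin; fail if the shared budget is blown."""
        if self.used + nbytes > self.budget:
            raise MemoryError(f"{plugin}: allocation failed, {self.used} bytes already in use")
        self.used += nbytes

MB = 1024 * 1024
proc = Process(budget=100 * MB)            # one budget shared by everyone
proc.allocate("SpellChecker", 90 * MB)     # a greedy extension...
try:
    proc.allocate("Debugger", 20 * MB)     # ...victimizes an unrelated one
except MemoryError as e:
    print(e)
```

There are no lines between `SpellChecker` and `Debugger` in any architecture diagram, but the second allocation fails because of the first. The same logic applies to handles, disk bandwidth, CPU, and cache.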

I don’t think it is, actually.  I think unitary software systems are fundamentally tightly coupled when you look at them through a performance lens.  And that is why I’m so often telling people that they need to look at their overall system and see if, when it’s all put together, it has any hope of working the way you want.  Two subsystems that each (loosely) use 2/3 of the L2 cache are together going to want 4/3 of a cache… that’s not good.  There may be no lines between them in the architecture diagram, but they are going to destroy each other’s ability to work.
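The arithmetic above is trivial but worth writing down, because it is exactly the kind of whole-system check a diagram never does for you.  A tiny sketch (the 2/3 fractions are just the illustrative numbers from the text):

```python
# Sketch of the cache arithmetic above: each subsystem's working set
# as a fraction of L2.  The fractions are illustrative, not measured.
from fractions import Fraction

subsystem_a = Fraction(2, 3)   # fraction of L2 subsystem A wants
subsystem_b = Fraction(2, 3)   # fraction of L2 subsystem B wants

demand = subsystem_a + subsystem_b
print(demand)                  # 4/3: more cache than exists

# If total demand exceeds 1, the subsystems will evict each other's
# lines and both will run slower than either would alone.
assert demand > 1
```

Each subsystem passes its own budget in isolation; only the sum reveals the problem, which is the whole point.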

So, does that about wrap it up for agile then?  Should we just waterfall our way to victory?


Everything you’ve ever read about requirements changing and about not over-designing systems that are loosely specified (much less loosely coupled) still applies.  But just because you’re going agile and want to keep your flexibility is no reason not to think about what it’s all going to look like when you put it together.  Some kind of blend is needed.

If you do run into problems, that loose architecture is going to help you.  But waiting until you’re done and counting on your agility to save you at the finish line isn’t so smart either.  Agile development doesn’t prohibit you from planning out the parts that need planning and understanding your key constraints.  In fact I think it encourages this: knowing your constraints keeps you honest and lets you make adjustments smartly as you go.
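One lightweight way to blend the two, sketched below, is to write your key constraints down as executable budget checks you run every build.  The scenario names and millisecond budgets here are invented; the point is only the shape of the check.

```python
# Hypothetical sketch: key performance constraints encoded as checks.
# Scenario names and budgets are made up for illustration.

BUDGETS_MS = {"startup": 500, "open_file": 100}

def blown_budgets(measured_ms):
    """Return the scenarios whose measured time exceeds its planned budget."""
    return [name for name, ms in measured_ms.items()
            if ms > BUDGETS_MS.get(name, float("inf"))]

# Run against this build's measurements; fail the build on any regression.
print(blown_budgets({"startup": 450, "open_file": 140}))
```

Nothing about this requires waterfall: the budgets are cheap to adjust as you learn, but they force the whole-system conversation to happen every iteration instead of at the finish line.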

Performance is never loosely coupled.  Don’t be fooled by your diagram; the profiler doesn’t know where your little boxes and lines are.  Trust me 🙂

Comments (6)
  1. Frank de Groot - Schouten says:

    Recently I've seen something about the Command Query Responsibility Segregation (CQRS) pattern coined by Udi Dahan. It appears to offer more loose coupling at the expense of reuse. The only 'lines between the boxes' are a pub/sub messaging system. There are claims about the performance of these systems, but that's more a question of multi-machine scalability. Building something like Visual Studio with that pattern would probably create a hideous resource hog.

  2. PatrickSmacchia says:

    The loosely coupled trend is mainly about clean design, and hence about making maintenance easier. Maintenance is a 'static' concern, as opposed to 'dynamic' concerns such as perf issues. I wrote a post in the same vein concerning static vs dynamic dependencies.


  3. Ryan Cromwell says:

    A team committed to or concerned about the performance implications of integrated code should include integrated performance profiling as a part of their "Definition of Done".  Having that definition of done is the heart of a team's ability to deliver increments of shippable software in any agile framework or methodology (Scrum, Kanban, Lean, XP).

  4. Mat Noguchi says:

    "Performance is never loosely coupled.  Don’t be fooled by your diagram, the profiler doesn’t know where your little boxes and lines are.  Trust me :)"

    I'd go one step further and say that the underlying hardware doesn't care either. At least hardware is designed to stall to maintain correctness. I'm not sure we have the equivalent in software.


  5. ricom says:

    Thanks for the comments.

    I wanted to give a quick shout out to Mendelt Siebenga who heard my Deep Fried Bytes interview (deepfriedbytes.com/…/episode-21-talking-performance-with-performance-preacher-rico-mariani) and wrote about it on his blog (mendeltsiebenga.com/…/Software-engineering-is-NOT-like-structural-engineering.aspx).

    Mendelt and I substantially agree on approach, I think.  But I didn't feel I'd hit the nail very squarely, so I wrote this article in response to his.

  6. Mike Kelly says:

    @Ryan's point is very important.  Because you don't know a priori where performance problems will be (you may think you know, but you probably don't know all of them), build-to-build performance measurement of key scenarios, with guidelines about how much backsliding (hopefully none) is allowed, is key.  That way you're doing the right amount of work on performance issues at the right point in the project.  If you wait until the end, you're overwhelmed with issues that are sometimes quite hard to fix without significant architecture changes that would have been much easier to make earlier, before so much code grew around the current architecture.  It also helps with getting early use of the system as it's being built, if performance is reasonable.  As Rico notes, the loose coupling hopefully gives you a bit more freedom to make some tradeoffs when you find hotspots.  Thanks for this pragmatic tenet: it's never good to get too caught up in the beautiful abstractions of architecture.

Comments are closed.
