Modularization vs. Integration – Which Is Best?

Clayton Christensen’s second book, The Innovator’s Solution, presents several important theories in the realm of innovation.  Like his first book, The Innovator’s Dilemma, it should be required reading for anyone in technology, especially managers of technology.  Among the theories, one stands out as the most important and, I think, the most applicable to the world of software development.  Christensen calls it the Law of Conservation of Attractive Profits.  In essence it states that the profits in a system move over time.  When a market segment is under-served, profits are made in vertically integrated products.  When a market becomes over-served, the profits instead flow to more modular solutions.  In this post I will lay out the theory.  In a future post, I’ll apply it to software development.

For every market segment–whether PCs or winter jackets–there are certain metrics that matter.  In the world of PCs, for a long time it was speed.  People bought one computer over another because it was faster.  In an early market, the products do not sufficiently satisfy the demand for that metric.  Computers were too slow.  In these markets, there is a performance gap.  To make the fastest computer required tightly integrating the hardware, the operating system, and often the application software.  The interfaces between the parts had to be flexible so they could evolve quickly.  This meant the parts were proprietary and interdependent.  Companies trying to work on only a single part of the problem found that they couldn’t move fast enough.  Look at the world of computer workstations.  When Windows NT first tried to take on the Sun and HP workstations of the world, it wasn’t as fast.  Intel made the processors, Dell made the PCs, and Microsoft made the operating system.  By comparison, Sun made the SPARC processor, the Solaris operating system, and the SPARCstation.  It was difficult to squeeze as much speed out of an NT system as Sun could get out of its own.  Because Sun’s workstations provided more performance where the market wanted it, Sun was able to extract higher “rents” (economist-speak for profits).

Eventually every market’s primary metric is sufficiently supplied by available solutions.  Products can be said to have a performance surplus.  At this point, customers no longer make purchasing decisions based on the metric–speed–because most solutions provide enough.  Customers are willing to accept higher performance, but they aren’t willing to pay for it.  Instead, their purchasing criteria switch to metrics like price, customization, and functionality.  Modular architectures trade performance for the ability to introduce new products more quickly, lower costs, and so on.  Products become more commoditized, and it is hard to extract high rents for high performance.  However, Christensen says that the profits don’t disappear; they only shift to another location in the value chain.  The companies best able to provide the market’s new metrics will make the most money.  In the example of the workstations, once computers became fast enough, the modular solutions based around Windows NT began to make a lot more of the money.  The costs for these were lower, the ability to customize greater, and the support ecosystem (3rd-party devices and software) larger.

Looking closely, it becomes apparent that markets are a lot like fractals.  No matter how close the zoom, there is still a complex world.  Each of the modular parts is itself a market segment with its own primary metrics.  Each one is subject to the modularization/integration cycle.  When a system becomes ripe for modularization, the profits move to the integrated modules which best satisfy the new metrics.  The secret to continuing to gain attractive profits is to notice when this transition is taking place and give up vertical integration at the right moment, choosing instead to integrate across the parts of the value chain least able to satisfy customer demand.

This theory seems to explain Apple’s success with the iPod.  The PlaysForSure approach taken by Microsoft was a modular approach.  Vendors like Creative supplied the hardware.  Microsoft supplied the DRM and the player software.  Companies like Napster supplied the music.  There are three tiers and two seams between them that must be standardized.  In an emerging market where the technology on all fronts was not good enough, is it any wonder that this approach was beaten by an integrated strategy?  Of course, hindsight is 20/20, and what is obvious to us now may not have been obvious then.  Still, Apple came at the problem controlling all three pieces of the puzzle.  It was able to satisfy the metric–ease of use–much better than the competition.  We all know how that turned out.  The theory indicates that at some point the metric will be satisfied well enough and the advantage of the integrated player will dissipate.  With the move away from DRM’d music and the increasing quality of the available hardware, this day may be upon us.  Amazon’s MP3 store seems to be gaining traction.  Competitors like the Zune and the Sansa players are making some inroads in the player space.  A dis-integrated model may be emerging.

Comments (3)

  1. Mark Sowul says:

    If you’re right, then it’s too bad that MS chose to abandon the modular (PlaysForSure) model and replace it with the integrated Zune model (to say nothing of the ill will and distrust inherently generated by abandoning PFS).

  2. Shane MacLaughlin says:

    Thanks for the heads up on Christensen’s book, Steve.  A friend lent it to me a while back with a strong recommendation.  A second recommendation means I guess I should actually read it.

    Personally, I prefer being involved in more vertical markets, and as they inevitably flatten out, I move on.  The fractal analogy is a good one.  I tend to think of the leading edge of technology as being at the leaves with the heavier horizontal markets at the core.

  3. SteveRowe says:

    @Mark, that is an astute observation.

    @Shane, definitely check out this book.  The theory I describe is in chapters 5 and 6.