Nathan’s laws of software

Way back in 1997, Nathan Myhrvold (CTO of Microsoft at the time) wrote a paper entitled “The Next Fifty Years of Software” (subtitled “Software: The Crisis Continues!”), which was presented at the ACM97 conference (focused on the next 50 years of computing).

I actually attended an internal presentation of this talk; it was absolutely riveting. Nathan’s a great public speaker, maybe even better than Michael Howard :).

But an email I received today reminded me of Nathan’s First Law of Software:  “Software is a Gas!”

Nathan’s basic premise is that as machines get bigger, the software that runs on those computers will continue to grow. It doesn’t matter what kind of software it is, or what development paradigm is applied to that software.  Software will expand to fit the capacity of the container.

Back in the 1980s, computers were limited, so software couldn’t do much. Your spell checker didn’t run automatically; it needed to be invoked separately. Nowadays, the spell checker runs concurrently with the word processor.
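That shift from batch to concurrent checking is easy to picture in code. Here’s a minimal sketch (the toy dictionary and the word stream are invented for illustration) of a spell checker running on its own thread while the “editor” keeps feeding it words:

```python
import threading
import queue

# Toy word list; a real spell checker would load a much larger dictionary.
DICTIONARY = {"software", "is", "a", "gas", "the"}

def spell_check_worker(words, misspelled):
    """Background worker: pull words off the queue and flag unknown ones."""
    while True:
        word = words.get()
        if word is None:       # sentinel: the editor is done
            break
        if word.lower() not in DICTIONARY:
            misspelled.append(word)

words = queue.Queue()
misspelled = []

# The checker runs concurrently with the "editor" thread below.
checker = threading.Thread(target=spell_check_worker,
                           args=(words, misspelled))
checker.start()

# The editor keeps accepting input while the checker works in the background.
for word in "the softwre is a gas".split():
    words.put(word)

words.put(None)    # signal end of input
checker.join()

print(misspelled)  # → ['softwre']
```

The 1980s model would be the same loop run synchronously after the fact; the queue-plus-thread shape is what the extra capacity of modern machines buys.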

The “Bloatware” phenomenon is a direct consequence of Nathan’s First Law.

Nathan’s second law is also fascinating: “Software grows until it becomes limited by Moore’s Law”. 

The second law is interesting because we’re currently nearing the end of the cycle of CPU growth brought on by Moore’s law.  So in the future, the growth of software is going to become significantly constrained (until some new paradigm comes along).

His third law is “Software growth makes Moore’s Law possible”. Essentially he’s saying that because software grows to hit the limits of Moore’s Law, software regularly comes out that pushes the boundaries of existing hardware. And that’s what drives hardware sales. And the drive for ever-increasing performance pushes hardware manufacturers to make even faster and smaller machines, which in turn makes Moore’s Law a reality.

And I absolutely LOVE Nathan’s 4th law.  “Software is only limited by human ambition and expectation.”   This is so completely true.  Even back when the paper was written, the capabilities of computers today were mere pipe dreams.  Heck, in 1997, you physically couldn’t have a computer with a large music library – a big machine in 1997 had a 600M hard disk.

What’s also interesting is the effort that goes into fighting Nathan’s first law. It’s a constant fight, waged by diligent performance people against the hordes of developers who want to add their new feature to the operating system. All the developers want to expand their features. And the perf people have to fight back to stop them (or at least make them justify what they’re doing). The fight is ongoing, and unending.

Btw, check out the slides; they’re worth reading. Especially when he gets to the part where the stuff that makes you genetically unique fits on a 3 1/2″ floppy disk.

He goes on from there; at one point in his presentation, he pointed out that the entire human sensory experience can be transmitted easily over a 100Mb Ethernet connection.


Btw, for those of you who would like to watch it, there’s a link to two different streaming versions of the talk here:


Edit: Added link to video of talk.


Comments (16)

  1. Ryan says:

    "So in the future, the growth of software is going to become significantly constrained (until some new paradigm comes along)."

    I find myself wondering whether we’re going to see a re-emergence (for lack of a better term) of a time when a small number of people could write software that took full advantage of a machine. In my limited experience it seems development has gotten much easier in the last 10 years or so. (The time period covering my college years and my professional time.)

    With processing capability scaling out rather than up, there is (currently) a limited number of people who can write good code that runs in parallel threads. I’d be fall-down shocked to find out there isn’t a bunch of research being done around analyzing today’s software to find out how to parallelize common tasks. The next step is pushing the outcome of that research into the dev tools, and then… Of course, that will prove the 4th law.

  2. Chris says:

    Who puts still pictures of themselves talking in a slide show like that? I can see one or two pictures, especially if they are demonstrating something.

    Honestly, I really do *not* want to see that guy’s face every 3rd or 4th slide. Especially since it is just him talking!

    Interesting talk, and interesting post Larry.

  3. Anonymous says:

    There’s a touch of hyperbole there. I had a brand new computer in 1997 that was considered a standard desktop-class machine (not loaded out) and it had a P2-233, 64MB of RAM, and a 6GB HD. 10GB was an expensive option. No great shakes, but you could build a respectable MP3 library. (That was pre-WMA, of course.)

    Your point still stands though. My new PocketPC has almost as much capacity and performance as that system did.

  4. LarryOsterman says:

    Chris, I think that there’s a version of that that has audio that goes with it – the pictures allow it to make more sense (I think).

    I’m not sure what’s up with that though.

  5. mgrier says:

    Moore’s law is still basically going and will probably continue for a good decade or more. Moore’s law is about the number of semiconductor devices that are in a packaged unit (chip).

    We’re out of ways to make single (x86-compatible) processors faster for the most part, but we have 10-20 years of making caches larger and more effective (the storage hierarchy is more important than ever!) and of exploiting multiple execution units on a given package (multicore CPUs). Maybe VLIW will finally pay off (or IA64!) and we can get better IPC without having to go past 3-4GHz for internal CPU clocks. (And yes, there’s interesting work on asynchronous logic etc. that can pay off; it will be interesting to see if it may be effective enough to ever dislodge the ia32/x64 architectures, given the existing investment there…)

    What’s most interesting is that it looks like the relative cost between main memory and the L1 cache in the storage hierarchy is probably close to an all-time high right now. Memory busses and available bandwidth are going to be improved during the next two decades as well, so while I’ve harboured a belief for quite a while that we need to start treating physical memory the way a database treats the physical disk drive, it looks like that will not come to pass.

    But that will just highlight the problems between memory and secondary storage; there is certainly interesting work going on there to find a middle ground to the storage hierarchy but unless something really revolutionary happens (remember holographic storage?), rotating media has no replacement for very high volume storage.

    I’m sorry we don’t have Nathan around any more; I have heard plenty of good and bad things about him but the world around here was a more interesting place with people like him around.

  6. Jerry Pisk says:

    Ryan, I think the reason that so many people can "write" software these days is that the hardware is capable of running hugely inefficient code. For example, garbage collectors make writing code simple for a large number of people who would not be able to handle their own memory management – at the expense of using (wasting) a lot of hardware resources. There will always be applications that require so much processing power or so many resources that only a small number of people are capable of writing the code to handle them.

  7. vince says:

    I knew people with massively huge music collections… in 1993.

    The wonder of .mod files.

  8. Dan McCarty says:

    "Software is a Gas!"

    Not being familiar with the oldster lingo, is that a good thing or a bad thing? ;-)

  9. Dan McCarty says:

    Oh, he actually meant "a gas," like, not a solid! Sorry, scratch that last comment.

    What a gas! (Whatever that means…)

  10. LarryOsterman says:

    Dan, don’t think old people, think physics.

    Nathan’s a theoretical physicist, so…

  12. This is the fourth part in my weekly series of entries in which I outline some of the reasons we decided…

  13. This is the fourth part in my eight-part series of entries in which I outline some of the reasons we…