Amdahl’s Law: Big Speedups From Bigger Fractions


Does everyone know Amdahl’s Law?


This law talks about the best gain you can hope to achieve from a certain class of improvement.  The most common place it’s used is in talking about what gains you might get from making your algorithms more parallel (adding CPUs).  The main variables are “s”, the speedup factor, and “f”, the fraction of the code that has been improved or can be improved, depending on whether you’re doing a backwards-looking or forwards-looking analysis.


It goes something like this:


        new_time = (1 – f ) * old_time  + ( f ) * ( old_time / s )


Maybe I did too much in the way of statistics, because when I first saw it, it looked very unprofound: to my eyes it was just a weighted average.


All this says is that the fraction of the old time that I didn’t change (1 – f) is unaffected, and the fraction that I did change (f) gets divided by the speedup factor s.


I could regroup this to get it closer to the way that it’s usually stated:


       new_time = old_time * ( 1 – f  +  f/s)


       new_time / old_time =  1 – f + f/s


new_time / old_time is the inverse of the net speedup, so we can write this:


      net_speedup =   1 / (1 – f + f/s )


And that is often how Amdahl’s Law is stated.
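

If you want to play with the numbers, here’s a tiny sketch (Python, purely illustrative, and the numbers are made up) that evaluates both forms and confirms they give the same answer:

    # Amdahl’s Law evaluated both ways (illustrative numbers only).
    def new_time(old_time, f, s):
        # the (1 - f) fraction is untouched; the f fraction runs s times faster
        return (1 - f) * old_time + f * (old_time / s)

    def net_speedup(f, s):
        return 1 / (1 - f + f / s)

    old = 100.0        # seconds (made up)
    f, s = 0.25, 4.0   # speed up 25% of the work by a factor of 4

    print(new_time(old, f, s))        # 81.25 seconds
    print(old / new_time(old, f, s))  # ~1.23x overall
    print(net_speedup(f, s))          # ~1.23x, the same thing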


OK, but I only did that regrouping to show you that what I started with really was the same as Amdahl’s Law.  I think it’s easier to understand in the first form I wrote it:


        new_time = (1 – f ) * old_time + ( f ) * ( old_time / s )


It isn’t especially profound then to observe that if you speed up some of the code (fraction f) by an incredible amount, the second term basically goes away and you’re still left with all of the first term.  That’s probably how most people think about Amdahl’s Law: if you’re working on some code, the best improvement you can hope for is what the speed would be if the part you’re working on were completely removed.
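

To put a number on the best case: as s gets huge, f/s goes to zero, so the most you can ever hope for is

        net_speedup = 1 / (1 – f)        (in the limit as s goes to infinity)

So if the code you’re improving accounts for, say, 25% of the time (f = 0.25), no amount of heroics gets you past roughly a 1.33x overall win.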


That’s not especially profound, is it?  I mean, I could say it even more briefly: after you’re done you still have whatever’s left.


Deep.  🙂


Now here’s the catch:  it doesn’t work.  It’s very easy to find examples where you improved code that was responsible for only 10% of the running time and got a net gain of more than 10% — that should be impossible because the 90% that you didn’t change is still there.  So is Amdahl full of bunk?  And how could this possibly be wrong?  The math is so basic. 


And now we come to my whole reason for writing this article…


Modern processors are very complicated, and sometimes that means that doing work in one area actually causes a cost in another area.  For instance, suppose you have an algorithm that does some computation in a manner that is very unfriendly to the processor’s cache but otherwise finishes comparatively quickly; let’s suppose it takes 10% of the total time.  If you were to eliminate that code you would not only save the 10% of the time spent doing the computation, you would also still have good stuff in your cache, so the rest of your program would run somewhat faster as well.  As a result you could get much more than a 10% cost savings (pop quiz: a 10% cost savings is what speedup factor?)
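

Here’s a back-of-the-envelope sketch of that scenario (Python again, with made-up numbers, just to show the arithmetic of the collateral benefit):

    # Made-up numbers: delete a cache-hostile chunk that is 10% of the time,
    # and assume the remaining 90% then runs 15% faster thanks to a warm cache.
    old_total = 100.0
    cache_hostile = 10.0
    rest = old_total - cache_hostile

    rest_after = rest / 1.15      # the assumed collateral benefit
    new_total = rest_after        # the 10% chunk is gone entirely

    saving = 1 - new_total / old_total
    print(f"{saving:.1%}")        # ~21.7% saved, well past the naive 10% cap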


When people are making additions to their programs/systems, I often caution them to consider not only the direct cost of adding the new system but also the “collateral damage” that the new system will cause to the other systems.  Collateral costs indirectly slow things down because of cache effects, virtual memory demands, moving the disk head around more, that sort of thing.


Similarly, when removing code you might get “collateral benefits”, and they can be a big deal.  For instance, a very modest saving in one area might make the difference between the program as a whole fitting into the processor’s cache or not fitting, and the results of that could be profound.


So why does Amdahl’s Law appear not to work in these cases?  Well, in terms of Amdahl’s Law, I think of it this way: the fraction of the total work that I am improving is often much bigger than it appears.  It’s not just the work that’s done directly by the code that I am editing/deleting/whatever; it’s all the parts that are positively affected by the change I’m making.  So “f” is actually much bigger than I thought it was.


Likewise, when considering the effect of adding more parallelism to an existing system (the most common application of Amdahl’s Law), you want to consider the direct effect of having more processors do the parallel parts of the work, but you must also consider how the serial parts of the process benefit indirectly because some work was offloaded to other processors.  It’s those collateral benefits that sometimes result in what people call “super-linear gains”, e.g. you double the number of processors and get more than double the net throughput.  Some people might think that super-linear gains contradict Amdahl’s Law, but I don’t.
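

A quick made-up example of that arithmetic: go from one processor to two, and suppose the per-processor working set now fits in cache so each processor also chews through its share about 30% faster.  Then

        net throughput ≈ 2 × 1.3 = 2.6× the original

which is “super-linear” even though nothing magical happened; the effective “f” is just bigger than the parallel fraction alone.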


Lesson for us all:  Collateral costs and savings are very real, so they should be a part of your performance planning.


Secondary Lesson:  I don’t think Amdahl’s Law is wrong, but is it especially useful in planning?  I don’t think so; I lived most of my life without knowing it and I did fine.  🙂 🙂 But seriously, with all these collateral effects, can you ever really consider “f” to be anything less than 100%, or at least very close to it?  At that point the Law is only vacuously correct.

Comments (4)

  1. John Melville says:

    I think of Amdahl’s law as more of a concept than an equation. It’s a general, and somewhat obvious, guidance that says "optimize the stuff that runs a lot."

    You used this logic very frequently[1] to vindicate the GC by noting that allocations, including GC, were less than some small percentage of the running time. You seem to presume that because the runtime is small F was small, which you now opine is an incorrect assumption. The assumption is correct, not as a law but as a useful generalization: it is rare that a portion of the code with 4% of the running time has a bigger perf impact than something with 30% or more.

    You don’t call it Amdahl’s law, but it seems to be pretty ingrained in your thinking.

    [1] For an example of a post where you defend the GC based on its low % of the runtime, see: http://blogs.msdn.com/ricom/archive/2005/08/26/456879.aspx You even discuss collateral effects and dismiss them.

  2. Dan McKinley says:

    I think this post was "especially profound." Nice work.

  3. Matt says:

    you missed the other link

    http://en.wikipedia.org/wiki/Unintended_consequence

    The two together give you the most useful scientific ‘concepts’ for performance tuning.

    The engineering disciplines which flow from them are:

    Clearly define what it is you desire and test it, thus knowing which bits *should* give the biggest benefit from being improved and whether or not they actually *do*.

    Matt

  4. ricom says:

    I think John is saying that I argue both sides of this point and I’d say he’s right. The fact is you’ll go insane if you always try to consider every collateral effect of a change and so we often disregard such things. But that doesn’t make them any less real and they might come along and bite you some day. When you see unexpected results sometimes that’s exactly what did happen. Hence my article.

    But do keep in mind that I try to make only approximately correct postings anyway — mostly in the interest of keeping them remotely brief. So I reserve the right to be slightly wrong sometimes 🙂 🙂

    Happy Holidays!