Amdahl's Law: Big Speedups From Bigger Fractions

Does everyone know Amdahl's Law?

This law describes the best gain you can hope to achieve from a certain class of improvement.  The most common place it's used is in talking about what gains you might get from making your algorithms more parallel (adding CPUs).  The main variables are "s", the speedup factor, and "f", the fraction of the running time that has been improved or can be improved, depending on whether you're doing a backward-looking or forward-looking analysis.

It goes something like this:

        new_time = (1 - f) * old_time + f * (old_time / s)

Maybe I did too much in the way of statistics, because when I first saw it, it looked very unprofound: to my eyes it was just a weighted average.

All this is saying is that the fraction of the old time that I didn't change (1-f) is unaffected and the fraction that I did change (f) gets divided by the speedup factor s.
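
Here's that first form as a tiny Python sketch, just to make it concrete (the numbers are arbitrary, chosen only for illustration):

        def new_time(old_time, f, s):
            # Weighted average: the untouched fraction (1 - f) runs at the
            # old speed; the improved fraction f runs s times faster.
            return (1 - f) * old_time + f * (old_time / s)

        # Example: speed up 25% of the running time by a factor of 2.
        print(new_time(100.0, 0.25, 2.0))   # 87.5 -- a 12.5% overall saving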

I could regroup this to get it closer to the way it's usually stated:

        new_time = old_time * (1 - f + f/s)

        new_time / old_time = 1 - f + f/s

new_time / old_time is the inverse of the net speedup, so we can write this:

        net_speedup = 1 / (1 - f + f/s)

And that is often how Amdahl's Law is stated.
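
And as a quick Python check that the regrouped form agrees with the first one (same arbitrary numbers as the sketch above):

        def net_speedup(f, s):
            # Amdahl's Law in its usual form.
            return 1.0 / (1 - f + f / s)

        # The first form gave new_time = 87.5 for old_time = 100
        # (f = 0.25, s = 2), so the net speedup should be 100 / 87.5.
        print(net_speedup(0.25, 2.0))   # 1.1428571... -- they match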

OK, but I only did that to show you that what I started with was really the same as Amdahl's Law.  I think it's easier to understand in the first form I wrote:

        new_time = (1 - f) * old_time + f * (old_time / s)

It isn't especially profound, then, to observe that if you speed up some of the code (fraction f) by an incredible amount, the second term basically goes away and you're still left with all of the first term.  That's probably how most people think about Amdahl's Law: if you're working on some code, the best improvement you can hope for is what the speed would be if the part you're working on were completely removed.

That's not especially profound, is it?  I mean, I could say it even more briefly: after you're done, you still have whatever's left.

Deep.  :)
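
Just to put a number on that cap before we go on: as s gets huge, f/s vanishes and the net speedup approaches 1/(1 - f).  Sticking with the made-up f = 0.25 from the sketches above:

        # Best case: the improved 25% takes no time at all (s -> infinity).
        f = 0.25
        print(1.0 / (1 - f))   # 1.333... -- at most about a 1.33x speedup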

Now here's the catch:  it doesn't work.  It's very easy to find examples where you improved code that was responsible for only 10% of the running time and got a net gain of more than 10% -- that should be impossible because the 90% that you didn't change is still there.  So is Amdahl full of bunk?  And how could this possibly be wrong?  The math is so basic. 

And now we come to my whole reason for writing this article...

Modern processors are very complicated, and sometimes that means doing work in one area actually causes a cost in another area.  For instance, suppose you have an algorithm that does some computation in a manner that is very unfriendly to the processor's cache but otherwise finishes comparatively quickly; let's suppose it takes 10% of the total time.  If you were to eliminate that code you would save not only the 10% of the time spent doing the computation, you would also still have good stuff in your cache, so the rest of your program would run somewhat faster as well.  As a result you could get much more than a 10% cost savings.  (Pop quiz: a 10% cost savings is what speedup factor?)
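
(If you want to check your pop quiz answer, the arithmetic is one line:)

        # A 10% cost savings means new_time = 0.9 * old_time.
        print(1.0 / 0.9)   # about 1.11 -- a speedup factor of roughly 1.11x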

When people are making additions to their programs/systems I often caution them to consider not only the direct cost of adding the new system but also the "collateral damage" that the new system will cause to the other systems.  Collateral costs indirectly slow things down because of cache effects, virtual memory demands, moving the disk head around more, that sort of thing.

Similarly, when removing code you might get "collateral benefits", and they can be a big deal.  For instance, a very modest saving in one area might make the difference between the program as a whole fitting into the processor's cache or not, and the results of that could be profound.

So why does Amdahl's Law appear not to work in these cases?  Well, in terms of Amdahl's Law, I think of it this way: the fraction of the total work that I am improving is often much bigger than it appears.  It's not just the work that's done directly by the code that I am editing/deleting/whatever; it's all the parts that are positively affected by the change I'm making.  So "f" is actually much bigger than I thought it was.
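
One way to put a number on that: if you delete the code entirely, s is effectively infinite, so the measured savings tell you the effective "f" directly.  A sketch with invented numbers:

        # Profiling said the deleted code was 10% of the time (f = 0.10),
        # but suppose the program now runs in 85% of the old time thanks
        # to better cache behavior (hypothetical measurement).
        observed_ratio = 0.85              # new_time / old_time
        effective_f = 1 - observed_ratio   # new/old = 1 - f when s -> infinity
        print(effective_f)                 # ~0.15 -- the "real" f was 15%, not 10%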

Likewise, when considering the effect of adding more parallelism to an existing system (the most common application of Amdahl's Law), you want to consider the direct effect of having more processors do the parallel parts of the work, but you must also consider how the serial parts of the process benefit indirectly because some work was offloaded to other processors.  It's those collateral benefits that sometimes result in what people call "super-linear gains": e.g. double the number of processors and get more than double the net throughput.  Some people might think that super-linear gains contradict Amdahl's Law, but I don't.
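
Here's a toy version of how that can happen (all of these numbers are invented): say 90% of the work is parallel, you go from one processor to two, and the serial part also happens to speed up 3x because its working set now fits in cache:

        f = 0.90          # parallel fraction (hypothetical)
        s = 2.0           # two processors handle the parallel part
        serial_bonus = 3  # hypothetical collateral speedup of the serial part
        new_ratio = (1 - f) / serial_bonus + f / s   # new_time / old_time
        print(1.0 / new_ratio)   # about 2.07 -- more than 2x from 2x the processors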

Lesson for us all:  Collateral costs and savings are very real, so they should be a part of your performance planning.

Secondary Lesson:  I don't think Amdahl's Law is wrong, but is it especially useful in planning?  I don't think so; I lived most of my life without knowing it and I did fine.  :) :)  But seriously, with all these collateral effects, can you ever really consider "f" to be anything other than 100%, or at least very close to it?  At that point the Law is only vacuously correct.