The Rules of Code Optimization

Steve Rowe recently wrote about who you're really writing for when you write code.  His argument is essentially that your primary audience is not the compiler, but other developers.  This is something I believe strongly.

Steve also makes a point about premature optimization, and how it affects readability.  This reminded me of a list of the Rules of Optimization that I try to live by:

  • The First Rule of Code Optimization: Don't.

Optimizing code has a negative effect on readability and maintainability.  The vast majority of code doesn't need to run particularly fast, but it all needs to be maintainable.  Even programs you think are one-off and temporary have a tendency to become permanent.

Most businesses want to keep costs down, but as a rule, hardware is cheap.  Developer time is expensive.  Don't optimize for the wrong variable.

If you believe the First Rule doesn't apply to you (you're probably wrong; go read the First Rule again), then it's time to consider the Second Rule.

  • The Second Rule of Code Optimization: Don't yet.

Designing code and optimizing code are two very different operations.  Your maintenance programmers will thank you for not mixing the two.  For both optimizing and maintaining code, it is far easier to start from a solid, readable design than from code that was "written optimized" (and that assumes you can even figure out what the latter is doing).  Do not optimize code as you write it, unless you enjoy rewriting it over and over.

  • The Third Rule of Code Optimization: Profile first.

Optimizing all of the code is almost never feasible, and it is never worth it.  If you're going to spend three developer days to squeeze 10 milliseconds of performance out of your code, it's better to get them out of the inner loop of a column lookup than out of the startup splash-screen display code.

So which code is best to optimize?  You don't know.  You can make educated guesses, but that's the problem: they're just guesses.  If you guess wrong (and your first guess almost always is), you'll do more harm than good by optimizing (remember the maintainability hit).  So how do you find out?  Profile.  Run tools that tell you where your code is spending most of its time, and optimize that code.  Often, you'll find an answer you never expected.

Never optimize without profiling first.
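As a minimal sketch of this profile-first workflow, here is what measuring before optimizing might look like in Python with the standard-library profiler.  The two functions are hypothetical stand-ins for the examples above (a one-time splash screen and an inner-loop column lookup), not code from the post:

```python
import cProfile
import io
import pstats

def load_splash_screen():
    # Hypothetical startup code: runs once, cheap to execute.
    return "splash"

def column_lookup(table, key):
    # Hypothetical inner-loop code: a naive linear scan over rows.
    for row_key, row_value in table:
        if row_key == key:
            return row_value
    return None

def run():
    load_splash_screen()
    table = [(i, i * i) for i in range(2000)]
    for key in range(2000):
        column_lookup(table, key)

# Profile the whole run, then report where the time actually went.
profiler = cProfile.Profile()
profiler.enable()
run()
profiler.disable()

stream = io.StringIO()
# Sort by cumulative time and show the top five entries; optimize
# only what actually appears near the top of this list.
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

In a run like this, the report shows the lookup loop dominating and the splash screen barely registering, which is exactly the data you want in hand before touching either one.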

Code optimization is sometimes a necessary evil, but do not forget that it is an evil.  It obfuscates the true meaning of the code, and it takes developer time that could be spent on bug fixing or new features.  Some applications (such as real-time audio) can't get by without optimizing, but most applications (card games, mail programs, web apps, etc.) can get along just fine without unnecessarily complicating the code.  And when you do need to optimize, have hard data to back up that you're optimizing in the right place.

Comments (3)
  1. I mostly agree with this, especially with profile guided code optimization. I work in the real-time world where code optimization is often necessary to meet hard timing deadlines, but I’ve seen and fixed far too many bugs caused by premature optimization of areas that did not have stringent real time requirements.

    However I’ve also seen far too many system level performance problems caused by premature generalization. When you’ve picked off all the easy O(n^2) -> O(nlogn) algorithm changes then you are left with the unenviable task of removing layers of abstraction from the code.

    My rule is that you optimize code towards the end of the coding cycle but you optimize the design up front before coding even begins. I think it was Rob Short who said you don’t start building a house by nailing together a bunch of 2x4s, you start by drawing up a complete set of plans and then move on to construction. Why should software be different?
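The "easy O(n^2) -> O(nlogn) algorithm changes" this comment mentions might look something like the following sketch (a hypothetical duplicate check, chosen only for illustration):

```python
def has_duplicates_quadratic(items):
    # Naive O(n^2): compare every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_nlogn(items):
    # O(n log n): sort once, then a single scan of adjacent pairs.
    ordered = sorted(items)
    return any(a == b for a, b in zip(ordered, ordered[1:]))
```

Swaps like this improve the algorithm without obscuring the code, which is why they are the cheap wins; the painful work starts once those are exhausted.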

  2. Well, there’s optimization and optimization.  What are we optimizing for?  It sounds like Ryan is talking about optimizing for readability (though he would have us believe he’s talking about optimizing for performance)… as always, there is a balance that must be struck, and generalities give way to particulars.

    In the audio world, timeliness is a must… the thread absolutely must wake up soon enough, or the audio stream glitches.  The Samples Must Flow (apologies to Frank Herbert.)  So optimizing for time makes fundamental sense even if you have to do weird things like cache precomputed random values.

However Ryan’s post makes an important point, which is that code readability is ALWAYS helpful.  Unless you’re writing an IOCCC entry.

    … or if you’re writing bad Perl.

  3. ColinA says:

    On waterfall: This works for problems you’ve solved before, but in software if you’ve solved it before you (probably) shouldn’t be solving it again.  I could go on, and I’m sure many other problems with the analogy are obvious to you so I won’t belabor the point.  Still, I have a visceral reaction to analogies between construction engineering and software building. 🙂

    Beyond that, I agree that you can remove obvious inefficiencies (O(n^4) instead of O(n^2) on a set of millions, for instance), but I wouldn’t classify these as code optimizations, which is what Ryan’s advocating against.  Removing naiveté from the system doesn’t necessarily mean optimization in the sense that Ryan’s using the term.

Comments are closed.
