Optimize for Simplicity


For a compiler, the usual optimization targets are code size vs. speed.


The conventional wisdom is to optimize for code size, because smaller code tends to mean fewer page faults and better cache usage, and thus ultimately runs faster. (I’ve just exhausted my knowledge on that topic. Head over to Rico’s blog if you need more.)


I think an often-missed optimization goal is simplicity.


In my opinion, prefer to optimize for simplicity.   Why?



  1. I’d rather have code be correct and slow, than fast and wrong.
  2. That generally makes the whole dev/test cycle faster, thus giving you more development cycles to focus on optimizing the real bottlenecks.
  3. The gap between the performance of the (simple + easy) solution versus the (complex and “fast”) solution is probably smaller than you expect.

 


“…about 97% of the time: premature optimization is the root of all evil”. (That’s Rico, quoting Knuth, quoting Tony Hoare). I’m not advocating being dumb: common sense should still rule. For example, if you need to look things up in a data structure, don’t quadruple-index it. But at least use a single indexing scheme that covers what you expect to be the common case.  [Update, fixed quote thanks to Peter]
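
To make that concrete, here’s a minimal sketch of a single, common-case index (C++17; the SymbolTable type and its members are hypothetical, invented purely for illustration). Lookups by name get the one index; the rare lookup by id is just a linear scan, so there’s no second index to keep in sync.

```cpp
// Hypothetical sketch: one index for the common lookup, a plain scan for the rare one.
#include <optional>
#include <string>
#include <unordered_map>

struct Symbol {
    int id;
    std::string name;
    int value;
};

class SymbolTable {
public:
    void Add(const Symbol& s) { byName_[s.name] = s; }

    // Common case: indexed, O(1) on average.
    std::optional<Symbol> FindByName(const std::string& name) const {
        auto it = byName_.find(name);
        if (it == byName_.end()) return std::nullopt;
        return it->second;
    }

    // Rare case: a linear scan is simple and needs no extra bookkeeping.
    std::optional<Symbol> FindById(int id) const {
        for (const auto& kv : byName_) {
            if (kv.second.id == id) return kv.second;
        }
        return std::nullopt;
    }

private:
    std::unordered_map<std::string, Symbol> byName_;  // the single index
};
```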


 


Some practical cases:



  1. Don’t try to be fancy with threads. Don’t reach for fancy, complex lockless operations without perf measurements showing that the complexity is actually buying you speed. Just take the leaf-level lock and spend those brain cycles somewhere else (see the sketch after this list). Keep your threading bugs as simple as possible.
  2. Not every search has to be a super-smart, multiply-indexed thing. Linear searches are OK for small sets.
  3. Use conventions that your team understands (or take on the burden of helping your team understand the conventions).  Sometimes simplicity is relative.
  4. For C++, use destructor cleanup (eg, smart pointers) instead of manually managing all cleanup yourself. (There’s a second sketch of this after the list.)
  5. Don’t fight the optimizer.
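
As a purely illustrative sketch of item 1 (the class and names are invented here, not from the post): a shared counter guarded by one ordinary mutex. It’s boring and obviously correct, and only a profile showing the lock is hot would justify swapping in atomics or a lockless design.

```cpp
// Illustrative sketch: the simple, leaf-level lock.
#include <mutex>

class Counter {
public:
    void Increment() {
        std::lock_guard<std::mutex> hold(lock_);  // one obvious lock, no cleverness
        ++count_;
    }

    long Get() const {
        std::lock_guard<std::mutex> hold(lock_);
        return count_;
    }

private:
    mutable std::mutex lock_;
    long count_ = 0;
};
```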

 

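And a sketch of item 4, destructor cleanup. std::unique_ptr is C++11, which postdates this post, but it shows the same idea as the smart pointers mentioned above: every exit path releases the resource automatically, with no hand-written delete calls to forget.

```cpp
// Illustrative sketch: RAII / destructor-based cleanup.
#include <cstdio>
#include <memory>

struct Widget {
    void Frob() { std::puts("frob"); }
};

bool Process(bool earlyOut) {
    auto w = std::make_unique<Widget>();  // freed on every return path below
    if (earlyOut) {
        return false;  // no leak, no explicit delete
    }
    w->Frob();
    return true;
}
```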

The irony is that some of these simplifications are not only easier to understand, but can perform faster too.  For example, a binary search can jump all over memory and wreck your cache, whereas a linear search over a small array plays nicely with it.  Or you may try to optimize something yourself (eg, inline assembly) where the C++ optimizer could do it better.
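
Here is roughly what the two search flavors look like for a small, hot lookup table (the table and function names are hypothetical). Whether the straightforward scan actually beats the binary search at a given size is exactly the kind of thing to measure rather than assume.

```cpp
// Illustrative sketch: linear scan vs. binary search over a small sorted array.
#include <algorithm>
#include <array>

constexpr std::array<int, 8> kSmallSet = {2, 3, 5, 7, 11, 13, 17, 19};

// Simple, and streams through contiguous memory.
bool ContainsLinear(int key) {
    for (int v : kSmallSet) {
        if (v == key) return true;
    }
    return false;
}

// "Smarter", but jumps around; measure before assuming it wins on tiny sets.
bool ContainsBinary(int key) {
    return std::binary_search(kSmallSet.begin(), kSmallSet.end(), key);
}
```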


Comments (6)

  1. Peter Ritchie says:

    Oh no, not the Knuth/Hoare misquote…  Rico does point out that the quote is "…about 97% of the time: premature optimization is the root of all evil".  So people use that quote to say all premature optimization is evil; that’s a logical association fallacy.

    As you say, it also shouldn’t be used to justify not trying to find the fastest known way of doing something.

  2. jmstall says:

    I fixed the quote. That’s certainly an important qualifier.

  3. arun.philip says:

    Wouldn’t you say that optimizing for size/speed can coexist with optimizing for simplicity?

    Since each optimization is implemented by different entities, we can let the compiler worry about size/speed optimizations and let the developer worry about optimizing for simplicity?

    Addendum: On reading your post yet another time, it appears what you’re driving at is also what I’ve said – we shouldn’t try to take over the compiler’s job of size/speed optimizations.

  4. jmstall says:

    Arun:

    Sometimes you can get a simpler design that’s also smaller and faster.

    But often they’re at odds: For example, Quicksort is definitely more complex than the standard bubble-sort. But it’s very often much, much faster.

    The compiler can optimize the low-level stuff (eg, register allocation, inlining functions). But the compiler is not going to optimize a bubble-sort into a quick-sort.

  5. Here's a combinatorics quiz: If you have 2 ordered lists (lengths N, M), how many ways can they be