Trade-offs

It seems like a lot of the time we (as software developers) are forced to make trade-offs: choosing between two algorithms that solve the same problem but have different characteristics.  Sometimes it's an easy choice: option #1 is O(1) and option #2 is O(n^3).  Sometimes we're forced to compare apples and oranges: option #1 runs in O(1) time but uses O(n^2) memory, while option #2 runs in O(n) time and uses O(n) memory.
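To make that time-versus-memory flavor concrete, here's a toy sketch of my own (not tied to any particular system): answering repeated primality queries either from a precomputed table or by recomputing on every query.

```python
# Hypothetical illustration of a time/memory trade-off.

def build_prime_table(n):
    """Option #1: pay O(n) memory up front to get O(1) per query."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return is_prime  # table[k] answers "is k prime?" in O(1)

def is_prime_on_demand(k):
    """Option #2: O(1) memory, but every query costs O(sqrt(k))."""
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))
```

Neither is simply "better"; which one wins depends on how many queries you expect and how much memory you can spare.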

The thing I've been pondering lately is whether we should always be striving to make the best trade-offs, or whether we should be redefining the problems so that it's always an easy choice.

The current problem I'm struggling with in the compiler space is accuracy versus efficiency, specifically in the case of exceptional flow control.  Everybody would love a compiler that modeled the control flow of exceptions smartly enough to do really advanced optimizations (dead-store elimination, common sub-expression elimination, constant propagation, SSA, etc.) across try/catch/finally blocks (and of course get it right).  However, properly modeling such flow often results in some very complex flow graphs that tend to kill the throughput of most optimizers, especially if the algorithm happens to be O(n) or greater, where n is the number of edges.
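To see why exceptional edges are so constraining, consider a hypothetical example of mine (`compute` is just a stand-in for any operation that might throw).  A store that looks dead on the normal path can still be observed by a handler, so a precise model needs an edge from every potentially-throwing point to that handler:

```python
def compute():
    """Stand-in for any operation that might raise (hypothetical)."""
    raise ValueError("simulated failure")

def f(log):
    x = 0              # looks like a dead store on the normal path...
    try:
        x = compute()  # ...but an exceptional edge leaves here before
                       # the assignment to x completes
    except ValueError:
        log.append(x)  # the handler observes x == 0, so the "dead"
                       # store above is live along that edge
    return x
```

Remove the store to `x` and the handler breaks, so dead-store elimination has to reason about every one of those edges, and a big method can have a lot of them.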

So basically we're looking at several ideas that are 'accurate enough' to do all of the desired optimizations while still not bloating the flow graph with a large number of edges that would slow down those optimizations.  Of course, even in that statement, we need to clearly define what 'accurate enough' is and how fast is fast.  Those terms change greatly between different compilers (JIT, pre-JIT, stand-alone compiler, etc.).
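One way to frame the 'accurate enough' question is by how many exceptional edges the model adds.  Here's a minimal sketch under my own invented names (`BasicBlock`, `model_precise`, `model_coarse`); it isn't any real compiler's IR, just a way to show the two ends of the spectrum:

```python
class BasicBlock:
    """A node in a toy control flow graph (hypothetical)."""
    def __init__(self, name):
        self.name = name
        self.succs = []  # successor blocks: normal + exceptional edges

def add_edge(src, dst):
    src.succs.append(dst)

def model_precise(throwing_blocks, handler):
    # Precise model: every potentially-throwing instruction gets its own
    # exceptional edge to the handler, so a protected region with k such
    # instructions contributes k extra edges.  Accurate, but the edge
    # count (and optimizer running time) grows with k.
    for block in throwing_blocks:
        add_edge(block, handler)

def model_coarse(region_entry, handler):
    # Coarse model: one exceptional edge from the whole protected region
    # to the handler.  The optimizer must then pessimistically assume any
    # instruction in the region may or may not have executed before the
    # handler runs, but the flow graph stays small.
    add_edge(region_entry, handler)
```

The coarse model trades precision for throughput: it forgets exactly where the exception came from, which blocks some optimizations inside the try, but it keeps the graph small enough that edge-sensitive algorithms stay fast.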

--Grant