To Optimize or Not to Optimize?

So with the CLI, all compilers suddenly have a great code generator.  However, the code generator is generally where most, if not all, optimizations occur.  Some of those optimizations should only happen in the code generator because they are machine specific.  An argument could also be made that some of the non-machine-specific optimizations should not be performed either, because they make the IL harder for the code generator to analyze.  That still leaves a fairly large class of optimizations that could be performed.  Unfortunately, for right now the only ones the C# compiler performs are basic dead-code elimination and branch optimization (eliminating branches-to-branches and branches-to-next).
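A minimal sketch of the two optimizations just mentioned, in source form (the method and values here are invented for illustration; the compiler actually applies these when emitting IL, not at the source level):

```csharp
// Illustrative only: shows source patterns that trigger the two
// optimizations the C# compiler does perform on the emitted IL.
static int Example()
{
    if (false)        // constant condition: this block is dead code,
    {                 // so the compiler emits no IL for it at all
        return -1;
    }

    goto done;        // a branch to the very next instruction
done:                 // ("branch-to-next") is dropped from the emitted IL
    return 42;        // the method body effectively reduces to "return 42"
}

System.Console.WriteLine(Example()); // prints 42
```

Note that both cleanups change only the emitted IL, never the observable behavior of the method.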

Now there’s lots of research, both inside and outside of Microsoft, on improving the performance of managed code.  I think there is definitely a lot of room to grow here, but I’m not sure people are looking in the right places.  So far, none of the research I’ve seen even looks at the compiler; it’s all been either source-code transformations or runtime changes.  The C# compiler is relatively fast because it doesn’t have to do code generation, so it seems like a natural idea to do at least some of the optimizations inside the compiler.  For the rest, the compiler could at least do the time-consuming static analysis so that the JIT can quickly consume that analysis and generate better code faster.  So am I the only person to think about this, or is it just an unpopular idea that gets no attention?


Comments (4)

  1. You have been Taken Out! Thanks for the post.

  2. One argument for not implementing certain compiler optimizations could be that it’s now too late. After all, what about all the .NET code that has been written with older compilers? If an optimization can be performed in either the Jitter or the compiler, wouldn’t it make more sense to perform the optimization in the Jitter so that old code could take advantage of it as well?

  3. GrantRi says:

    I partially agree with that. I think most, if not all, optimizations can be performed in the Jitter, but some of them take a while or are just plain NP-complete. People would complain if the Jitter suddenly started taking an extra 5 minutes to JIT "Hello World". But most people wouldn’t mind the extra compilation time (as long as they knew it was for a good reason). My point is that most of the time in optimization is spent doing analysis. Why not do the analysis once (at compile time), and allow the JIT to be fast AND generate fast code?

  4. Rick Byers says:

    Another compelling argument for doing more static analysis at compile time (more compelling than performance, in my opinion) is that it provides good opportunities to identify coding errors.

    For example, I recently tracked a bug back to a typo that was essentially "int x = …; if( x == x ) …". If the compiler were doing more aggressive optimizations, it could have warned me at compile time that the conditional expression was constant, and that the else branch was unreachable code.

    See my discussion of this issue, and a response from Eric Gunnerson (C# PM) here:
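The typo described in comment 4 can be sketched as follows (the variable names and values here are invented; presumably the second operand was meant to be a different variable):

```csharp
// Hedged reconstruction of the self-comparison typo: "x == y" was
// intended, but the slip makes the condition constant, so the else
// branch can never execute.
static string Check(int x, int y)
{
    if (x == x)         // typo: always true; constant-condition analysis
    {                   // could flag this at compile time
        return "taken";
    }
    else
    {
        return "never"; // unreachable, and could be reported as such
    }
}

System.Console.WriteLine(Check(1, 2)); // prints "taken"
```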