(#ifdef DEBUG)++

In his most recent (as of this writing) blog entry, Brad Abrams writes about an idea we’ve been kicking around and which I happen to like very much.

I think, though, that adding a few more details is necessary to help get the best possible feedback.

The idea is that there would be conditional compilation sections in the IL itself; these sections would always be present in the IL, so every machine would have the capacity to run that code.  Depending on configuration, the JIT would either compile that IL into machine code or else ignore it.  By default, every developer’s box would be configured to get all the debug checks, and my mom’s machine would be configured to not get them.
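To make the proposal concrete, here is a sketch of how it might look from C#. The #ifjit syntax is entirely hypothetical (as is everything named here); the closest thing that exists today is the [Conditional] attribute, which removes the calls at compile time, so retail binaries never carry the checks at all:

```csharp
using System.Diagnostics;

class Inventory
{
    // Today: calls to a [Conditional("DEBUG")] method are stripped by the
    // C# compiler in retail builds, so the checks never reach the IL and
    // can never be re-enabled on a customer machine.
    [Conditional("DEBUG")]
    static void CheckInvariants(int count)
    {
        Debug.Assert(count >= 0, "count must be non-negative");
    }

    static void Add(int count)
    {
        // Under the proposal, a block like this would always be present in
        // the IL, and the JIT would compile it or skip it per machine
        // configuration.  Hypothetical syntax -- nothing like it exists:
        //
        //   #ifjit DEBUG_CHECKS
        //       CheckInvariants(count);
        //   #endjit

        CheckInvariants(count);
    }
}
```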

Properties of this system if it could be made to work:

  • only one version of IL is distributed
  • any machine could be configured to run the debug code if needed because it’s always distributed
  • normal machines wouldn’t pay the penalty for additional checks, developer machines would
  • some existing/redundant checks which exist solely to give developers better diagnostics would normally run only on developer machines
  • super-heavy-weight checks or logging which even developers wouldn’t normally want to pay for could be introduced if needed
  • advanced versions of the CLR could someday recompile methods on the fly (!) by administrative order, injecting the debug or logging blocks as directed into running processes (how much would you pay for that!)
  • any number of configurable block types could ultimately be supported with the same architecture

In order to avoid problems like putting the wrong code into a debug block (similar to putting necessary operations into assert statements that are compiled away) we’d want to invest in FxCop-like rules to statically evaluate the content of such blocks.  In particular “demands” would be frowned upon.
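The classic form of that bug, in today’s C#, is real work hidden inside a call that the retail build compiles away (all names here are made up for illustration):

```csharp
using System.Diagnostics;

class Example
{
    static int itemsProcessed;

    static void ProcessBroken()
    {
        // BUG: ProcessNext() does real work, but the entire Debug.Assert
        // call is compiled away in retail builds, so retail silently
        // processes nothing.
        Debug.Assert(ProcessNext() > 0, "expected at least one item");
    }

    static void ProcessFixed()
    {
        // Correct: do the work unconditionally, assert only on the result.
        int n = ProcessNext();
        Debug.Assert(n > 0, "expected at least one item");
    }

    static int ProcessNext() { return ++itemsProcessed; }
}
```

A static rule over JIT-conditional blocks would have to flag exactly this pattern: side effects whose results are consumed outside the block.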

My thesis is that this feature increases developer productivity by allowing greater checking at no end-user cost and giving easy access to that checking when it is most needed while also giving a net increase in performance for the existing libraries after retrofit.  In contrast to other complexity-adding features (and make no mistake, this is a complexity addition), my mom’s working set can be expected to get smaller, not bigger.

I believe in giving developers very sharp tools when warranted.  I think this is warranted.

Comments (16)

  1. Dmitriy Zaslavskiy says:

    Hey, as I said in Brad’s blog, I am all for it.

  2. Paul Tyng says:

    This is a GREAT idea! It makes so much more sense than having multiple different builds to send to a client, especially when I’m trying to debug a problem over the phone or something. Typically what I do is program something like this in myself, so that it’s always present and once a flag in the config file is flipped a verbose mode is turned on, but native support for this in the IL sounds like it’s the ideal solution.
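For reference, the hand-rolled pattern Paul describes looks something like the following (names hypothetical; his version reads a config-file flag where this sketch uses an environment variable). The point is that the flag test is jitted into every retail binary and taken on every call even when verbose mode is off, which is exactly the residual cost the IL-level proposal would eliminate:

```csharp
using System;

static class VerboseLog
{
    // Read once at startup; flip the flag and restart to get verbose output.
    static readonly bool Enabled =
        Environment.GetEnvironmentVariable("MYAPP_VERBOSE") == "1";

    public static void Write(string message)
    {
        // This branch ships in the retail build and executes on every
        // call; only the formatting and I/O are skipped when off.
        if (Enabled)
            Console.Error.WriteLine("[verbose] " + message);
    }
}
```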

  3. #ifdef in IL? Absolutely awesome. More power in IL and the runtime itself is great.

  4. Keith Patrick says:

    What kind of configuration are we talking about, exactly? Is a dev machine slower in the sense that running the particular app in debug mode will be slower, or is it in the sense that a developer machine has the equivalent of a checked build installed? If it’s something that can easily be turned on/off, sure, it’s a great idea that I wish Java had back when I was doing it and that I could really use right now in a C# app. But the key thing for me is not to go back to the days where having a dev machine meant that that fact alone crippled the machine in performing other daily tasks (read: playing games)

  5. Michael Ruck says:

    I think this solution would solve many problems faced during deployment today. Additionally I think any potential solution to reduce the working set for a .NET process (or to keep it low) should be taken. People still tend to think in the size of a working set, when they speak about the quality of an application.

  6. Shane King says:

    "advanced versions of the CLR could someday recompile methods on the fly (!) by administrative order, injecting the debug or logging blocks as directed into running processes (how much would you pay for that!)"

    I wouldn’t pay anything, because it’s already possible to do this today, using the profiling API of the CLR.

    Should I insert some pithy slogan about how the future is now? 😉

  7. Jerry Pisk says:

    If implemented would it mean that developers would run a debug version of Visual Studio, the framework and anything else in managed code along with debug versions of their code?

    Personally I don’t see why the final code should contain any debugging code. Metadata such as method names is a different thing; that helps you get things like stack traces in case of a crash. But debugging statements? If you need those you should go back and fix the code before distributing it…

  8. Michael Entin says:

    That is very good. What everybody was afraid of after BradA’s blog was two different versions of the CLR, such that one would have to uninstall the regular CLR and install the debug version to get the debugging features.

    One CLR version with developer checks turned on by config is great!

    However, it is unclear how this is different from customer debug probes currently available in Whidbey, could you elaborate?

  9. Bill Wert says:

    It’s easy to envision some trivial uses of this kind of thing. You call Foo.Bar(Object o). It throws a BadArgumentException and that’s it. You turn on the debugging information, and now you’re getting a log file showing what it’s parsing. This is a contrived example, and probably nothing like what they’d really implement, but you can see where this could potentially go. It’s not just debugging information for the CLR itself – it’s debugging information for the CLR’s customer – you.

    -Bill Wert

  10. It’s good, Rico. I want it.

  11. Dave says:

    Would there need to be some sort of security permission on this feature to keep rogue end-users from flipping the debug switch? Is there any concern about how this feature impacts the already easy reverse-engineering scene? I’m curious.

  12. Eric says:

    There is still the potential of bloating the IL code, which implies a bit more of a loader/JIT penalty, but in the steady-state I can see why you say it will reduce working set.

    In any case, it’s way better than separate debug/retail. The real question is how powerful do the IL ifdefs get? The static analysis won’t necessarily be easy, either.

  13. Brad Abrams says:
  14. Jerry says:

    With all due respect, it’s a nice new toy to play with as a developer. Will it really help Mom see a speed difference? Will it help you ship the product sooner, and avoid introducing new bugs (there are bound to be some)? Consider it for a future release, but is it worth spending the time on this now? What’s the measurable impact here?

    Anyway, not to be a killjoy, I love the idea as a developer. But there are tons more things I want out of Whidbey and I’m not sure this is that major.

  15. Barry Price says:

    #ifdef DEBUG has two states – on and off. If we have another variable, e.g. ERROR, defined in the assembly, we would have four states, etc.

    For each state there is potentially a different compiled IL assembly, and therefore we must associate the state (e.g. DEBUG on, ERROR off) with the compiled IL assembly (a potential total of 4).

    There also has to be a way for both the system and the user to change the state.

    Now! Each time we call a routine in a different assembly (e.g. a program calling the framework), we will have to check the current states and call the appropriate routine. This check must also go in Mom’s PC as well as the developer’s. (I have seen an implementation that does not make this runtime check – DO NOT go there.)

    You can vary the answers to "Where do we check for a change of state?" and "When can the state change be applied?" but they all impact Mom’s PC or the value of making the change.