Too much reuse

A recent user question:

I have code that maintains a queue of pending work items waiting to be completed on various different worker threads. In certain unfortunate fatal error situations I complete each of these by throwing an exception. Can I create just one exception object? Are there any issues throwing the same exception object multiple times on multiple threads?

Anyone who has ever seen this in a code review knows the answer:

catch(Exception ex)
{
   throw ex;
}

This is a classic “gotcha”; almost always the right thing to do is to say “throw;” rather than “throw ex;” – the reason being that exceptions are not completely immutable in .NET. The exception object’s stack trace is set at the point where the exception is thrown, every time it is thrown, not at the point where it is created. The “throw;” does not reset the stack trace, “throw ex;” does.

Don’t reuse a single exception object. Every time it gets thrown the stack trace will be reset, which means that any code up the stack which catches the exception and logs the trace for later analysis will almost certainly be logging someone else’s trace. Making exceptions is cheap, and you’re already in a fatal error situation; it doesn’t matter if the app crashes a few microseconds slower. Take the time to allocate as many exceptions as you plan on throwing.
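A minimal sketch of the difference (the class and method names here are purely illustrative):

```csharp
using System;

class StackTraceDemo
{
    static void Thrower() { throw new InvalidOperationException("boom"); }

    // "throw;" keeps the original throw point (Thrower) in the trace.
    public static void RethrowPreserving()
    {
        try { Thrower(); }
        catch (Exception) { throw; }
    }

    // "throw ex;" resets the trace to this method, hiding Thrower.
    public static void RethrowResetting()
    {
        try { Thrower(); }
        catch (Exception ex) { throw ex; }
    }

    static void Main()
    {
        try { RethrowPreserving(); }
        catch (Exception ex) { Console.WriteLine(ex.StackTrace.Contains("Thrower")); }

        try { RethrowResetting(); }
        catch (Exception ex) { Console.WriteLine(ex.StackTrace.Contains("Thrower")); }
    }
}
```

The first line typically prints True and the second False: after “throw ex;” the trace begins at the rethrow, and the original throw site is gone.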

Comments (71)
  1. Peter Wilson says:

    I know a lot about exceptions and exception handling, but this was actually new to me. Thank you.

  2. Doug says:

    Would it be a good idea to create a new exception and assign the caught exception to InnerException?
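For reference, the pattern Doug describes looks something like this (DoWork is just a stand-in for whatever actually failed); the wrapper gets a fresh trace at the point of the new throw, while the original trace survives intact on InnerException:

```csharp
using System;

class WrapDemo
{
    static void DoWork() { throw new FormatException("original failure"); }

    static void Main()
    {
        try
        {
            try { DoWork(); }
            catch (Exception ex)
            {
                // Wrap rather than rethrow: the inner exception keeps
                // its own stack trace, still pointing at DoWork.
                throw new InvalidOperationException("Work item failed.", ex);
            }
        }
        catch (InvalidOperationException outer)
        {
            Console.WriteLine(outer.InnerException.GetType().Name);
        }
    }
}
```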

  3. Chris B says:

I have seen this pattern more times than I can count, and it is a mistake that is made very often even by good, experienced, knowledgeable developers.  I think a lot of people don’t even realize that catch(Exception){ throw; } is valid syntax. The similarity of the C# throw statement to Java probably doesn’t help since Java requires the "throw somethingThrowable" syntax.  Also, people become so accustomed to writing catch(Exception e){ throw new Exception("Uh oh!", e); } that they assume they must "throw anException;".  It makes me wonder if there is something that the compiler (or some other verification tool) could do to make the correct pattern more obvious.  Since it is perfectly valid code and could very well be intended, a warning doesn’t really make sense.  I know new syntax is definitely low on the list of things to be considered, but the source of the confusion seems to be two behaviors from what is interpreted to be identical code.

  4. Gabe says:

    So how expensive are exceptions to throw? Are they cheap enough to use them for nonlocal transfer of control?

  5. Brian says:

I agree with Chris B that catch(Exception) { throw; } is valid and better than throw ex, but isn’t it simply better to not catch the exception at all? Doesn’t that do the same thing? In Eric’s example it’s being logged, so you need the catch, but if you’re not doing anything with it, why catch it?

  6. Tom says:

    Again, this is another great candidate for the Messages pane.

@Gabe… using exceptions (even if they were "Free") for "nonlocal transfer of control" is pure EVIL. In some environments, every exception (even those that are caught) is treated as a "support alert".

    Under "normal" circumstances the PerformanceCounters for Exceptions should remain at 0. Unfortunately even Microsoft’s CLR/BCL violates this principal. Just set "Break on Exception" to true for many application types, and watch how many exceptions you have to step through just during application startup!!!!!

  8. Gabe says:

    Brian: There are plenty of reasons why you might want to inspect an exception before possibly rethrowing it. Perhaps you know how to handle some kinds of IOExceptions but not others, or you want to look for a certain type of InnerException.

    Or you might just want to optionally ignore it, like here:

while (true)
   try {
       // attempt the operation here
   } catch (Exception) {
       if (!retry) throw;
   }

  9. Brian says:

    @Gabe… understand, but if you’re not doing anything with it, don’t catch it.  Maybe I’m just being pedantic… or not pedantic enough.

  10. Chris B says:

    @Brian,  I didn’t intend the code to be a good example of exception handling, so I omitted the actual handling (logging, wrapping, cleanup, whatever) for brevity. Sorry for not being as clear as I should have.

  11. RagnarokNRoll says:


    Your original question of how cheap they are hasn’t been looked at recently from what I could see.

    The second one is actually a better way of describing the situation.  

    "If you ever get to the point where exceptions are significantly hurting your performance, you have problems in terms of your use of exceptions beyond just the performance."

Your answer is: "depends."

  12. Jonathan says:

    "it doesn’t matter if the app crashes a few microseconds slower"

    This should go on a t-shirt or something.

  13. Gareth says:

    A little off topic but I’ll ask anyway.

I sometimes catch an exception in a utility method and then re-‘throw;’ it, but before rethrowing it I use another aspect of an exception’s ‘mutability’: the ‘Data’ property.

I add local contextual information to it. My base exception handler logs all the information it can about the exception, i.e. message, stack, type, InnerException, etc., plus the Data property.

    Is this a good idea? Do others use this pattern? Does anyone out there use Exception.Data at all?

private void ExecuteSqlExample(string sql)
{
   try
   {
       // Do stuff here
   }
   catch (Exception ex)
   {
       ex.Data["Sql"] = sql;
       throw;
   }
}

  14. DRBlaise says:

    Unfortunately, Microsoft documentation does not help the situation.  Below is the documentation in the try-catch section of Visual Studio C# MSDN Library:

    A throw statement can be used in the catch block to re-throw the exception, which has been caught by the catch statement. For example:

catch (InvalidCastException e)
{
   throw (e);    // Rethrowing exception e
}

  15. Robert Davis says:

    One extension method we have at my office is this

public static TException Rethrow<TException>(this TException exception) where TException : Exception
{
    var field = typeof(Exception).GetField("_remoteStackTraceString", BindingFlags.Instance | BindingFlags.NonPublic);
    field.SetValue(exception, exception.StackTrace);
    return exception;
}

    This allows us to do

    throw ex.Rethrow();

without resetting the stack. This is useful if we hand an exception off to a function that decides whether it should be rethrown.

  16. Banjobeni says:

    I think the problem is more that the stack trace is recorded on every throw of the exception.

    For me, the logical point where to record this info is in the very constructor of the base Exception class, not the throw statement. That way, exceptions can be completely immutable.

Of course, this has a downside. You will see all those constructors in the stack trace that are currently invoked. But these are usually only two (Exception and a direct descendant thereof) and they could even be stripped off the stack trace altogether.

@Banjobeni… Changing where the stack trace is captured would be a major breaking change. Also there are times where you want to build up a (potential) exception, and throw it at a fairly distant point.

@Eric, What I WOULD like to see is a better mechanism of getting information about the CallStack WITHOUT Exceptions being involved at all… but that is another topic.

  18. Anton Tykhyy says:

    Robert: the same effect can be achieved without reflection, FWIW

static void PreserveStackTrace (Exception e)
{
   var ctx = new StreamingContext (StreamingContextStates.CrossAppDomain);
   var mgr = new ObjectManager (null, ctx);
   var si  = new SerializationInfo (e.GetType (), new FormatterConverter ());
   e.GetObjectData (si, ctx);
   mgr.RegisterObject (e, 1, si); // prepare for SetObjectData
   mgr.DoFixups ();               // ObjectManager calls SetObjectData
}

  19. Shawn Martin says:

    I ask what the difference is between "throw" and "throw ex" as an interview question.  No one has answered it correctly yet.

It’s not the greatest question because someone may have been using throw and it’s not really very important to know what throw ex does.

    On the other hand I’ve inherited server-side code with "throw ex" sprinkled liberally throughout and it makes troubleshooting from logs very difficult.  Someone who intends to write server-side or widely-deployed client-side code probably should know about this.

  20. Gabe says:

    David V. Corbin: Would you be satisfied if the CLR provided an ExceptionLite, which is like an exception only it doesn’t capture the stack frame (for performance) and doesn’t break into your debugger or increment the "exceptions" counter? Because I’d like to be able to write the following without it being so annoying:

class NonlocalReturn : ExceptionLite {
   public object Value;
   public NonlocalReturn (object value) { Value = value; }
}

void Return(object value) { throw new NonlocalReturn(value); }

int FirstItemOverX(int x, IList<int> someList) {
   try {
       someList.ForEach(i => { if (i > x) Return(i); });
   } catch (NonlocalReturn ret) {
       return (int)ret.Value;
   }
   throw new NotFoundException();
}


  21. Robert Davis says:

David V Corbin: Stack trace info can be retrieved via the System.Diagnostics.StackTrace class; unfortunately, this relies A LOT on VS’s pdb files being in place, and the class doesn’t mesh well with System.Reflection. Which is unfortunate.

  22. Stilgar says:

    Like many others I’m surprised by this behaviour. While the feature is good in the sense that the shorter syntax is the one that should be used more often, the discoverability is extremely low.

@Robert [please feel free to contact me off-list] – I am aware of System.Diagnostics.StackTrace, unfortunately I am looking for something that is reflection-friendly [i.e. based on Metadata, NOT PDBs]

  24. @Brian

“isn’t it simply better to not catch the exception at all? Doesn’t that do the same thing? In Eric’s example it’s being logged, so you need the catch, but if you’re not doing anything with it, why catch it?”

    You are almost right. But it’s worse than that. They don’t do the same thing. Catching the exception causes all nested finally blocks to execute. This is a BAD THING if the exception is something unexpected/fatal, an indicator of corrupted/invalid program state.

    To ensure an exception is logged even if it is fatal, you don’t need to catch the exception. You just need to handle AppDomain.UnhandledException.

    This then gives you control over whether finally blocks should execute or not, which is absolutely vital if you don’t want little pieces of your program to continue trying to run (and probably deleting the wrong files) after a bug has started to run rampant.
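A minimal sketch of that approach (the logging destination is just illustrative); the handler observes the crash rather than catching it, so no catch-all block forces nested finally blocks to run in a compromised process:

```csharp
using System;

static class Program
{
    static void Main()
    {
        // Runs for exceptions that reach the top of the stack unhandled.
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            Console.Error.WriteLine("FATAL: " + e.ExceptionObject);
        };

        throw new InvalidOperationException("simulated fatal error");
    }
}
```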

I strongly advise people to read the CLR team’s advice on this. The mantra goes:

    – Write try/finally blocks much more often than try/catch blocks. Finally blocks are very safe as long as you don’t let them run in a process you know to be fatally compromised, i.e. when you have an uncaught exception.

    – Never catch anything you cannot totally handle. Rethrowing exceptions is almost always wrong. So “throw;” is not really any better than “throw ex;” – both will definitely cause finally blocks to execute during a fatal exception, with no way to stop them.

    – Don’t catch purely to log.

    – Don’t catch in an attempt to perform custom exception filtering. If you must (i.e. if you find the C# model of listing each possible known exception type in a separate catch block unmanageable), use actual exception filtering (unfortunately not available in C#, but is available in raw IL, C++/CLI and VB.NET – the BCL itself uses VB.NET to expose a general TryCatchWhen function for its own internal use.)

    Indeed, in general this is excellent advice. I would add to it that of course you have to take into account what the consequences are of finally blocks executing during a fatal crash. Consider a compiler, for example. (Our compiler is written in C++ and does not use exception handling, but imagine a compiler written in C# that does use finally blocks and their moral equivalents, like using, foreach and lock statements.) Does it really matter if the finally blocks run and even more state gets messed up?  What’s the worst that can happen?

    It is highly unlikely that the compiler process has “unsaved” user state that needs preserving. It is highly unlikely that there is a partially completed transaction that needs to be rolled back. It is highly unlikely that sensitive private user data will be compromised. It is highly unlikely that the system is going to fail to an insecure mode. We’re taking in text and producing IL for heaven’s sake. Worst thing that can happen is we crash even harder and don’t give a sensible error message. So, fail fast, or don’t fail fast, I don’t really care.

    Compare that to, say, a crash deep in the heart of a banking application when the user is in the middle of an edit. Now we have user work that might need to be preserved, pending transactions, highly sensitive personal data that needs to be protected, and so on. Running arbitrary code in finally blocks might or might not be a good thing in this scenario. Figuring out what the right thing to do here in the face of the fact that internal state is probably inconsistent can be a hard problem. Is failing fast the right thing to do? Maybe, maybe not. Maybe running the finally blocks makes things better, maybe it makes it worse. Point is: it’s a scenario you’ve got to design for, and therefore you’ve got to know the exact semantics of how exceptions propagate.

    — Eric

  25. By the way, on the subject of catching to log the exception, I’m pretty sure in BCL 4 the default recommended config will be that you are unable to catch various kinds of fatal exception. The CLR will simply let them fly straight through the try/catch block, invisibly, even if you write "catch (Exception ex)". So that won’t even work as a way of logging many fatal exceptions. So if you want to log them, you’ll have to use AppDomain.UnhandledException. (This may be out-of-date, depends what has happened since I read about this change last year).

  26. Regarding "catching for logging" of exceptions you can not really "deal with" is definately a dangerous practice. However, it is often useful (expecially for placing a breakpoint to see local [outside of try block) state).

    My approach is to have a "CHECKED" Build Configuration so that I can have this ability, but ONLY when needed.

  27. Further to Eric’s comment to my first comment…

    Like Eric says, if your finally blocks don’t touch external persistent data, then any additional damage they do will be an irrelevancy to the end user when the process has died. But it’s not just the user who might have a problem, it’s also the developer. It depends on how much you are sometimes reliant on crash dumps sent in by remote users who can only get the problem to happen on this one pesky machine, etc. (Probably not so much for a compiler developer, I guess.)

    A likely problem with catching fatal exceptions willy-nilly is that when the stack is being unwound due to a caught exception, another exception may be triggered. This tends to happen because the invalid state has occurred in some object you were actively using, and the things on the stack being cleaned up are probably touching the same data. In the CLR, where there is a second exception during stack unwinding due to a first exception, the second exception "takes over" and the first just disappears without a trace.

    So whatever you put in place to log or dump unhandled exceptions, it is probably going to be compromised if the exception does not have a clear pathway from where it was thrown, all the way back to Main. Or to put it another way: catching a fatal exception is a way of destroying useful debugging information.

    But of course it depends how easy it is to reproduce the issue back at the lab, in the debugger. If you can do that every time, then none of this matters, as you don’t need crash dumps from users. Just switch on the ‘Stop on first-chance exception’ option and break at the very first throw, before the exception can get discarded. (I usually leave that option switched on, and I detest frameworks that use exceptions as a regular communication channel. Exceptions are for exceptional situations – I don’t want to break into the debugger every time I hit F5.)

    By the way, Eric mentions C++, which reminds me that if you have that "two exceptions flying up the stack at once" situation in C++, the function unexpected() is called, which by default terminates the program. Well, at least it’s defined behaviour, I guess, which in C++ is always something to be grateful for.

  28. Gabe says:

    Why on earth would I not want my finally blocks to run? Isn’t that the whole point of writing them in the first place — to make sure that everything is in a known or consistent state even in exceptional cases? When my app crashes in the middle of an operation, I want it to rollback the transaction, unlock the record, and release the semaphore. I simply can’t imagine how it would ever be a good idea to leave a mess all over the system to be manually cleaned up just because something timed out or ran out of memory when nobody expected it to.

    Furthermore, I don’t understand the advice about exception filtering. Am I just not supposed to write my code in C# when I need to handle an exception in only certain cases? Should I be using some separate VB.Net assembly to do that? And this is all to make sure that somebody who doesn’t know anything about my code can determine whether my carefully-crafted finally blocks actually run?

  29. Marc says:



Maybe I’m too stupid for this discussion, but the reason a finally block is inserted is to have it run in both a ‘good’ and a ‘bad’ situation… so what’s the problem with the nested finally blocks being called?

I understand the discussion about not catching Exceptions if there isn’t a good reason for it, but not to catch one when it might corrupt your piece of code is in my opinion more dangerous.

The first 3 pieces of advice from the CLR team sound solid and clear, BUT the 4th sounds more like C# is not prepared for handling exceptions like it should be in a perfect sense – maybe this should be kept in silence, until C# has something to offer in this direction.

    When I was switching from Java to C#, the missing exception declarations in method level made me feel very uncomfortable – discussions like this reminds me of that.

    Of course it’s a religious question, but personally, I would be more happy to force (optionally) the C# compiler to respect exceptions which might be thrown from a method instead of hoping for a good documentation.

  30. ficedula says:


    The point is that in some situations you *can’t* ensure everything is in a known or consistent state.

    Imagine you run some C# code that calls out to a native component, and then you get an access violation. At this point the external code has done something wrong and quite possibly scribbled all over your memory. Your finally blocks can’t guarantee rolling back the transactions, unlocking the records, or whatever, at this point – the memory in your process is corrupt. And by *trying* to undo your partially-completed work you may make it worse – all your variables are now in a potentially unknown state, so that copy of the users’ data you saved away before beginning to work on it? Can’t guarantee that hasn’t been corrupted too. That copy of the users’ data file? Well, you can’t even guarantee the filename you have for it is correct any more.

    Bailing out and exiting at least guarantees you won’t corrupt the users data any more than you already have.

    OutOfMemory is fun, too, because for your finally code to run reliably, you’d have to guarantee it allocated no memory. That’s kind of tricky in C#. Are you sure your recovery code won’t cause problems if it fails half way through trying to roll something back because it couldn’t allocate another few bytes?

    OTOH, if you get a divide-by-zero exception running some purely managed code that takes parameters in from the user, the decision may well be that it’s perfectly safe to run all your unlocks/finallys/recovery code; it’s an "expected" error, in a sense; the user provided zero as an input, silly user, let’s clean up. In some software, the vast majority of exceptions fall into this situation. But it is a decision you have to make – and as Eric says, in (e.g.) banking software it’s important to roll back when you can, but ONLY when you can do so safely!

  31. CuttingEdge says:

    Interestingly enough this is the path the System.Web.Security.Membership class takes. It caches any initialization exception in a static field and rethrows it every time someone calls it while it’s in an invalid state.

    Why did the ASP.NET team take this approach? Should they fix this?

  32. CuttingEdge says:

    @Robert Davis: That is a strange way of preserving the stack trace. You can achieve the same by wrapping the original exception, so instead of doing this:

    throw ex.Rethrow();

    I would do something like:

    throw new InvalidOperationException("Descriptive error message. " + ex.Message, ex);

  33. Frank Bakker says:

@CuttingEdge, The benefit of tweaking the stack trace of the original exception is that you can rethrow the exact same exception. By wrapping it like you did you change the type from AnyException into InvalidOperationException. Handlers up the stack, however, might depend on the original type they are trying to catch.

  34. fmarguerie says:

    Using throw instead of throw ex is not enough. Unfortunately, throw does not always preserve the stack trace. See

  35. fmarguerie says:

    Sorry, I hadn’t seen someone has already added a comment about that.

  36. @Marc – "… the reason a finally block is inserted to have it run in a ‘good’ and a ‘bad’ situation…"

    I would agree with that, as long as we’re using the "Sergio Leone" breakdown:

    Good – the normal case, no exception is thrown.

    Bad – unusually, an operation could not be completed but the program state is under control, you know what the exception means, you can work around it in your code, explain the problem to the user, etc., and carry on working.

    Ugly – an exception is thrown that you never thought would be, and typically indicates that the thrower cannot restore the program state. A few exception types are always of this kind, e.g. NullReference, Overflow, etc.

    So exceptions come in two categories, but they aren’t Good or Bad. They’re Bad (recoverable) or Ugly (fatal).

    This means you are correct: finally blocks need to run in Good and Bad. They are *especially* useful in Bad, because they help you to easily wind back the program state to how it was before the aborted operation began. They help you program transactionally, with a commit as the normal case and a rollback as the exceptional case (hence appropriately expressed through exceptions).
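A sketch of that transactional shape (the names and the string "state" are purely illustrative); commit happens on the normal path, rollback in the finally block on the exceptional path:

```csharp
using System;

class TransactionalShape
{
    public static string State = "idle";

    public static void Run(bool succeed)
    {
        State = "pending";
        bool completed = false;
        try
        {
            // ... do the work ...
            if (!succeed) throw new InvalidOperationException("aborted");
            completed = true;
            State = "committed";    // commit: the normal case
        }
        finally
        {
            if (!completed)
                State = "rolled back"; // rollback: the exceptional case
        }
    }

    static void Main()
    {
        Run(true);
        Console.WriteLine(State); // committed
        try { Run(false); } catch (InvalidOperationException) { }
        Console.WriteLine(State); // rolled back
    }
}
```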

    The place where it becomes less certain is in Ugly, and I can’t put it better than ficedula did above. Actually I remembered another gotcha this morning: What if a fatal exception is thrown, but you catch it, and as a result of that a finally block throws another exception, and this time it’s one that you consider recoverable? The fact that there was originally a fatal exception in play has now been lost altogether. This is not good!

    "BUT the 4th sounds more like C# is not prepared for handling exceptions like it should be in a perfect sense – maybe this should be kept in silence, until C# has to offer something in this direction."

    There is a strong argument for keeping C# exactly as it is in this area. And anyway, no language is ever likely to be "perfect" in this area. It’s the unfortunate subject of how to deal with our mistakes or oversights. It’s never going to be perfect. This means keeping silent is a bad idea! It’s important to know exactly why and when you should take each approach. Turning out the lights does not make the room tidy.

    C# takes the purist approach:

    – Thou Shalt specify all the recoverable exception types, each in a separate catch clause.

    – Thou Shalt Not centralise exception filtering logic – it’s something you have to rethink in every case anyway, so why would you want to reuse it?

    – Thou Shalt Not specify all the fatal exception types and then assume anything else must be recoverable.

    Now check this out:

    Although it’s good that they have added a box about the destruction of debugging information, I think it’s pretty unfortunate that they don’t mention the OTHER implicit, silent non-local side effect of catching an exception in order to manually filter it: the fact that it will execute a bunch of code in finally blocks.

    Note the different instructions for VB.NET under "User-Filtered Exceptions". That’s the feature of the CLR that C# does not expose. Like I say, there are good purist reasons for that.

    The problem is what happens when people don’t accept the purist line. They try to centralise the exception filtering in a function, and they end up suggesting something that makes the situation worse.

    A better alternative – if you really *really* need to do this, e.g. you find you have to catch the same 15 recoverable exceptions in multiple places in your code – is:

// in static class X:

public static void Try(Action action, Action<Exception> caught)
{
   try
   {
       action();
   }
   catch (RecoverableType1 x) { caught(x); }
   catch (RecoverableType2 x) { caught(x); }
   catch (RecoverableType3 x) { caught(x); }
   // and so on
}

Now you can say:

X.Try(() =>
{
   // the guarded work goes here
},
caught =>
{
   // display caught.Message to user or something
});
    It has the same effect of centralising your list of known recoverable exceptions, but nobody ever has to catch the universal Exception base class. So no need to rethrow fatal exceptions.

    The one thing it can’t do is read the list of known-recoverable types in from a configuration file. They have to be hard coded. But if you really need to get around that, you could write it in VB.NET or C++/CLI, so it’s not actually a big deal.

  37. Robert Davis says:

@CuttingEdge: That is certainly the reasonable default practice. But it is not always the best option. Say for example you have a virtual method that in one case uses reflection to invoke some other method. If that method throws an exception, Reflection will wrap it in a TargetInvocationException. If you can determine that it wasn’t the reflective code that threw the exception then it’s probably a good idea to rethrow the inner exception of the TargetInvocationException. The fact that I used reflection is an implementation detail that the caller shouldn’t have to care about.
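A sketch of the unwrapping Robert describes (names are illustrative); note that a plain `throw tie.InnerException;` would reset the inner exception’s trace, which is exactly the gotcha this post is about:

```csharp
using System;
using System.Reflection;

class UnwrapDemo
{
    public static void Callee() { throw new ArgumentException("from callee"); }

    static void Main()
    {
        MethodInfo m = typeof(UnwrapDemo).GetMethod("Callee");
        try
        {
            m.Invoke(null, null); // reflection wraps the callee's exception
        }
        catch (TargetInvocationException tie)
        {
            // The reflection layer is an implementation detail; surface
            // the callee's own exception rather than the wrapper.
            Console.WriteLine(tie.InnerException.GetType().Name);
        }
    }
}
```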

  38. Gabe says:

    ficedula: In banking software, there are only three possible states for your account after a transaction: rolled back, committed, and locked (i.e. waiting for the transaction to complete). The guidance is actually more like commit when you can, but ONLY when you can do so safely, otherwise rollback. Do you want to be the sucker whose account is locked because the direct deposit app ran out of memory while processing your paycheck? I would much rather have the money later than have all access to my account blocked until a DBA can go back and manually clean up the mess!

    If I have some C# code that calls out to a native component, and then I get an access violation, it is ONLY at that point that I can determine whether to run finalizers (catch or ignore the exception) or not (Environment.FailFast).

  39. Chris B says:

This eventually reminded me of another of Eric’s posts. At one point it says:

    "Fatal exceptions are not your fault, you cannot prevent them, and you cannot sensibly clean up from them. ….There is absolutely no point in catching these because nothing your puny user code can do will fix the problem. Just let your "finally" blocks run and hope for the best."

    If I understand Daniel Earwicker and this post correctly, finally blocks are only run if code completes with no exceptions or there is a handled exception.  If an exception goes unhandled, the CLR assumes the process is in a corrupted state and does not execute the finally blocks because they could make things worse by operating on the corrupted state.  Am I misunderstanding one of these posts or are they inconsistent?

  40. Anton Tykhyy says:

    Chris: there are actually two kinds of exceptions — the bad and the ugly, as Daniel said. Anything you ‘throw’ manually, regardless of type, is a bad exception; CLR raises ugly exceptions (e.g. ThreadAbortException, OutOfMemoryException) internally. When the stack is being unwound after a bad exception, all finally blocks execute; after an ugly exception only finally blocks registered with CER execute. An ugly exception cannot be caught, it is re-raised automatically at the end of the catch block. A very few exceptions (e.g.  StackOverflowException) are so ugly that even CER blocks don’t execute, and AppDomain.UnhandledException does not run. The handling of ugly and super-ugly exceptions is regulated by CLR host policy; AFAIR the normal policy for an uncaught "bad exception" is to unload the appdomain; for ugly it is to terminate the process "gracefully" (allowing CER blocks and critical finalizers to run) and for super-ugly exceptions the normal action is to terminate the process immediately (no CER blocks or critical finalizers ever run).

  41. And, of course the "Power_Off_Exception" (may require additional hardware) is the most un-catchable of ALL.

While intended as humor, I have had people claim that "finally blocks do/should ALWAYS execute".  When asked HOW, if the power fails (or if a bullet goes through the CPU for my defense clients <grin>), I often receive a completely blank stare.

  42. Random832 says:

    I remember that Java had two different class trees for the "bad exceptions" (Exception) and the "ugly exceptions" (Error), with Throwable as the superclass for both – anyone know the reason this wasn’t followed for .NET?

  43. Robert Davis says:

Java actually has three kinds of exceptions: Exception, RuntimeException, and Error. Exceptions were designed to be documented and handled (hence the throws clauses), RuntimeExceptions were similar but didn’t have to be declared in a throws clause (like divide-by-zero), and Errors were for very serious, I’m-going-to-crash-now occurrences like stack overflow or corrupted program state.

@Random832… This has already been covered. For "Ugly" situations, you have NO IDEA what the application state may be, or what the effect of executing ANY code will be. While truly random events are much less likely in managed code than in environments like C++, this is still true.

The code that is in your finally block *may have been* overwritten, the stack may be corrupted such that any call/return invokes random code [there are actually some techniques to force the system to run malicious code!]. Even simple things like floating point calculations may return incorrect results.


    So if you don’t have any information about the state of the system, and if you don’t have any knowledge as to what the code would actually do…. WHY would you WANT to run it????

  45. Patrick says:

    I’m surprised, Eric, that you didn’t point out that exceptions are painfully expensive to throw.  If someone is worried about the cost of re-using an exception object (easy to create; expensive to throw), then it sure seems like there’s probably a fundamental perf problem in the code that’s going to be caused by throwing exceptions so regularly.  Maybe I’m missing something though… Eric, is it not horribly expensive to throw an exception?

    I normally go way out of my way to avoid an exception occurring anywhere in my code.  In .NET 1.1 it killed me that TryParse didn’t exist – it made simple form validation way too expensive if you didn’t want to write your own try parse methods that didn’t throw exceptions internally.

    Even today I go out of my way to write my own TryParse routine for GUIDs (in cases where I need to validate them, like if they’re coming from a URL).  Of course, why the Guid class doesn’t follow much the same patterns of Decimal and other classes is a mystery to me; I constantly wish Guid had a Parse and a TryParse (and New Guid() did the same thing as Guid.NewGuid)…
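    A sketch of what such a helper might look like pre-.NET 4 (TryParseGuid is a hypothetical name; Guid.Parse and Guid.TryParse did eventually ship in .NET 4):

```csharp
using System;
using System.Text.RegularExpressions;

static class GuidUtil
{
    // 8-4-4-4-12 hex digits, with optional surrounding braces.
    static readonly Regex GuidPattern = new Regex(
        @"^\{?[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}\}?$");

    public static bool TryParseGuid(string s, out Guid result)
    {
        result = Guid.Empty;
        // Cheap up-front validation avoids paying for a thrown
        // FormatException on obviously malformed input (e.g. from a URL).
        if (s == null || !GuidPattern.IsMatch(s))
            return false;
        try { result = new Guid(s); return true; }
        catch (FormatException) { return false; } // e.g. mismatched braces
    }
}
```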

  46. ficedula says:

    @Gabe: As David V. Corbin has pointed out – once you receive an unexpected exception, all bets are off. If it were an access violation, running any code at all could theoretically do *anything*. In particular, just trying to call DBTransaction.RollBack() might, if you are very unlucky, call DBTransaction.Commit(). Or run any other code in your app. You have no way of knowing; your memory is corrupt.

    Letting your app crash and burn has the advantage that at least it won’t get any *worse*.

    If "rolling back" means "telling the database on the other end of this network connection to rollback": then don’t even try. Let your app terminate. The socket will disappear and the database will be able to see that and roll back itself.

    OTOH, if you have to manually clean up in order to roll back … well, that sucks. If you’ve just received an AV, then trying to roll back could do *anything*. If you’ve received an Out-of-memory, then you can probably *try* to call a method without worrying about another bit of code running by accident, but you have no reason to expect that any given method call will actually succeed (in .NET; unmanaged code has it a bit easier here), and if it fails, the original cause of the problem may be lost due to the newly-thrown exception. If it’s a null reference exception, that’s almost as bad; it indicates that one of your references somewhere in the app, that your program invariants insist will always be non-null, is now null. All you know for sure is that something that was meant to be impossible has happened; therefore you can’t really trust anything.

    Leave the account locked, if that’s the case. Sure, a DBA will have to come and investigate manually. But what’s the alternative? A DBA is going to have to come and investigate anyway! You got an unexpected error while updating the account so you can’t be 100% sure your roll back worked…

    @Patrick: The one situation where reusing an exception makes a lot of sense, that I’ve seen before, is when you’re writing your own runtime library. The Delphi runtime (at least, it used to) created a few exception objects on startup; in particular, an instance of EOutOfMemory – so that if it *did* fail to allocate in the future, it had a pre-allocated OOM exception ready to throw. Otherwise how could you create the OOM exception – by definition, you’re not able to allocate memory 😉

    Obviously this is not really an issue if you’re coding in C# as you don’t have to worry about these low level implementation details…

  47. Anthony P says:

    I guess I’m at a loss over what you are trying to accomplish in finally blocks anyway. For example, I wouldn’t commit or roll back a database transaction in finally; commit would be within the try, rollback within the catch. Finally would essentially only contain resource cleanup; there would be nothing to change data state within that part of the code.

    To be honest, I’ve never really used that particular block that often. I catch what I can fix (more accurately, provide notification of a common error I expect to happen, such as a duplicate in a unique index field during a user registration), throw what I can’t.

    I also guess I’m "lucky" to not have worked on huge applications using untold amounts of 3rd party utilities and am usually in full control over the code base and therefore any exceptions that arise are of my own making (or at least happening in code I have full access to troubleshoot).

  48. @Anthony… Do you ever write code like:

    using (SomeType x = new SomeType())



    If so then you ARE using finally blocks!!!!

    If not, then you are most likely:

    a) Allowing IDisposable objects to live longer than they should

    b) Duplicating calls to Dispose in multiple paths

    c) Using an over-complicated means of dealing with IDisposable objects
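    For reference, the compiler expands a using statement into roughly this try/finally (a sketch; SomeType is the placeholder from above):

```csharp
// What "using (SomeType x = new SomeType()) { ... }" roughly compiles to:
SomeType x = new SomeType();
try
{
    // ... body of the using block ...
}
finally
{
    if (x != null)
        ((IDisposable)x).Dispose();
}
```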

  49. Anthony P says:

    d) Calling .Dispose() in the same block that instantiated the resource.

  50. Voo says:

    If we’re already grazing the Java discussion, I’d be really interested why the C# team didn’t impose similar rules for exceptions like the Java guys did. One of the few things I’m missing in C# is having to specify which exceptions a method can throw, and the guarantee that you handle them somewhere up the call stack.

    Well that doesn’t solve any problems automatically (I’ve seen too much catch(Exception e) code to believe that), but I personally find that rather helpful.

    So is there any specific reason why the C# team didn’t implement something similar but went for the supposedly easier way with only one exception group (compared to java with Error, Exception, RuntimeException)? Or was it just deemed not important enough for the amount of work involved? Just curious 😉

    Though other than that, we still have exactly the same problem in every programming language, and I think there can’t be one solution for every case per definitionem. If you can’t guarantee any invariants in your program any longer and have no idea how messed up the situation is, in most cases trying to fail gracefully will make things only worse. Though there are mission critical applications where just crashing also isn’t a solution (medical apparatus, ..)

  51. Anthony P says:

    To go off topic for a moment, I was wondering if Eric knew of and could recommend any (possibly upcoming?) book on C#/.NET 4, specifically highlighting what’s new or improved, sort of like how Jon Skeet handled C# in Depth.

  52. ficedula says:

    @Voo: That’s probably a Holy War style argument 😉  but one problem with enforcing which exception types a method throws, and forcing the caller to handle or pass them on, is that it’s brittle in a number of ways.

    Firstly, if you declare a virtual method (DataSource.LoadData, let’s say) that method has to be tagged with all the exception types it can throw. With such a high-level definition, it’s hard to know what to declare it as throwing. ArgumentException and … well, maybe a custom exception type, DataSourceException?

    Then you have a number of descendant classes that override it. They have to wrap all the exception types they actually *want* to throw into DataSourceExceptions. Your FileDataSource can’t throw FileNotFound exceptions and your NetworkDataSource can’t throw NetworkSocket exceptions. This is particularly annoying when a caller further up the call stack could have caught NetworkSocket exceptions and done something about it, but now can’t, because they’ve all been wrapped up inside DataSourceExceptions. Well, I suppose it could catch Exception then walk the chain of InnerExceptions to see if any of them are exception types it knows about … urgh.

    Secondly, if you ever change the exceptions a method can throw (new version of the framework/library), that breaks all the code that uses yours; it won’t compile any more. That’s bad. You could just say that the exception types a method throws are part of its signature and can’t ever change, but then you’re going to end up wrapping exceptions inside a not-entirely-suitable type again (we want to throw RegistryLockedException but version 1 of the API only read data from a file, so we have to throw FileAccessException instead even though that’s a lie…)

  53. Voo says:

    @ficedula: Uhuh I hope we don’t end there. Holy Wars should be reserved for important stuff like "where do you put your brackets" 😉

    I agree with your first point, because there are Interfaces in Java that were clearly bitten by that problem. Especially some high level interfaces where you end up throwing an exception in every method although the implementations can’t sanely throw them. Though 1.6 has improved that situation for many interfaces (JDBC comes to my mind) where you can extend Exceptions. So you still end up with one generic Exception for lots of methods, but your implementation just throws a much more specific exception.

    Though I agree  that that’s far from perfect, it’s at least less ugly than it used to be 😉

    Though I don’t agree with your second part. Maybe that’s just personal preference, but if some interface throws new exceptions I prefer if I find that out as early as possible, because I don’t like lurking unhandled exceptions in my code.

    Imho it’s not that you’re breaking your code only with one version, it’s that you find out about breaking changes much earlier. Ok, you can’t compile your code although it will work in 98% of all cases perfectly, but at least you know that there are only really bad errors (like OutOfMemoryError where you’re out of luck in any case) or programming mistakes (RuntimeExceptions like DivideByZero) where your code should crash as fast and hard as possible.

    But yeah in the end TANSTAAFL (ha, finally a situation where I could use that) and I think you brought up some good arguments and I hope we really don’t end up with a Holy War, but with a useful discussion about the pros and cons of the different approaches – though I apologize to Eric for getting a bit OT 🙂

  54. @Anthony… If you are calling .Dispose in the same block, and an exception is thrown between the call to the constructor and your explicit call to Dispose, then you will NOT invoke Dispose(). This will result in the resources being held onto until the GC runs. A VERY "BAD" practice.

  55. Gabe says:

    Voo: There’s a reason that Java is the only language with compile-time checked exceptions. To inflame the discussion a bit more and diverge even further off-topic, checked exceptions are like communism: in theory they work great (like in communes), but in practice they don’t work because they don’t scale because they fail to take human nature into account. People are lazy and will only do the minimum work needed unless they’re truly committed to the cause.

    So when a leaf function is changed to throw a new exception that needs to be handled way up at the top of your callstack, only somebody who’s really not lazy would bother to add it to the hundred methods in the middle. The other 99% of programmers would just swallow it or rethrow it as Exception, neither of which helps your cause.

    The best solution to this is probably to just document the exceptions that each method throws and have either the compiler or a static analysis tool warn you if your documentation doesn’t match reality. That way as a programmer you can easily tell if you’re going to get an exception you didn’t expect, yet the language doesn’t encourage bad behavior (at worst it encourages bad documentation).

  56. Anthony P says:

    @David, I appreciate the feedback and was not aware that (a) using instituted a try/finally structure behind the scenes or that (b) it was best practice to use a using statement when using objects that implement IDisposable. Now thanks to you and our trusty friend MSDN I have that clarification.

    In my own defense, I’ve worked alone my entire (brief) .NET programming career so the only exposure I get to best practices are through outlets such as this blog. I will admit to being guilty of leaving cleanup to the garbage collector (but not always) in the event of the rare exception and I’ll consider changing that in the future.

  57. Chris B says:

    I apologize if I am being dense, but what exactly is a "nested finally"?  I assumed it meant something like

    try{ A(); try{ B(); } finally { } } finally{ }

    but after experimenting with similar code I am starting to believe I have assumed incorrectly.  Most of my curiosity comes from trying to create a case where I could cause a finally block not to be run as was suggested (without unplugging or shooting the machine 😉 ).

  58. Anthony P says:

    I want to be sure I understand, as well. If you were to write something like this

    try
    {
       try { throw new ArgumentNullException(); }
       finally { Console.WriteLine("A"); }
    }
    finally { Console.WriteLine("B"); }

    Then because there is neither a specific catch clause to catch an ArgumentNullException nor a clause to catch more general exceptions, the exception bubbles up and crashes the application before the finally clauses have a chance to execute. However, if you wrapped the previous code in another try/(general) catch, then each of the nested finally blocks would execute.

    I just want to be sure that understanding is accurate or if there is another way to do it (to say nothing of code that could be executing in a 3rd party utility).
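    A small sketch of the second case (with an enclosing general catch, both nested finally blocks run before the handler):

```csharp
using System;

static class NestedFinallyDemo
{
    static void Main()
    {
        try
        {
            try
            {
                try { throw new ArgumentNullException(); }
                finally { Console.WriteLine("A"); }
            }
            finally { Console.WriteLine("B"); }
        }
        catch (ArgumentNullException)
        {
            Console.WriteLine("caught"); // prints A, B, then caught
        }
    }
}
```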

  59. AC says:

    So what’s the recommended practice?

    The gist I’m getting is that finally blocks are okay if the exception is Bad, but not Ugly.

    AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);

    void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
    {
       Type exception = e.ExceptionObject.GetType();
       if (exception == typeof(????) || exception == typeof(???))
       {
          // ugly exceptions .. prevent finally blocks running.
          System.Environment.FailFast("ugly exception");
       }
    }
    So what are all the Ugly exceptions we should think about?

  60. @Chris B "If I understand Daniel Earwicker and this post correctly, finally blocks are only run if code completes with no exceptions or there is a handled exception. If an exception goes unhandled, the CLR assumes the process is in a corrupted state and does not execute the finally blocks because they could make things worse by operating on the corrupted state.  Am I misunderstanding one of these posts or are they inconsistent?"

    To get this totally clear, you probably need to think through what the CLR does when a throw occurs.

    The CLR makes two separate walks back through the call stack, starting from the point of the throw all the way back to the Main method (and slightly beyond). The first walk looks for a matching catch, but doesn’t run any finally blocks during the walk.

    If a matching catch is found, a second walk is carried out which executes all the intervening finally blocks, stopping at the matching catch (NOT including any finally block at the same level as that catch), and then execution resumes in the matching catch block (any finally block under the same try executes after the catch).

    If no matching catch is found, that second walk is put on hold for the moment, and so there is an opportunity for the application to decide what happens next. Firstly, the CLR starts WER which puts up a dialog asking the user if they want to send an error report to Microsoft. Then when the user has made their choice, only then do finally blocks execute (so even the end user gets a say! They could use task manager to kill the application while the WER dialog is open, and so stop the finally blocks running.)

    But as well as starting WER, the CLR fires an event: AppDomain.UnhandledException. In this handler, you can do whatever you like. If you want to take an extreme hardline attitude to finally blocks, you can call Environment.FailFast. This would mean that whenever your app failed to catch an exception, it would drop dead without running finally blocks (very similar to what happens for certain extreme types of uncatchable exception). But on the downside, it would be quite difficult to attach a debugger to it to find out what went wrong.
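    This deferral can be observed with a tiny console sketch (run it outside the debugger; the exact dialog depends on the OS and WER settings):

```csharp
using System;

static class UnhandledDemo
{
    static void Main()
    {
        try
        {
            throw new InvalidOperationException("never caught");
        }
        finally
        {
            // No catch matches, so the second (unwinding) pass is deferred:
            // this line prints only after the WER dialog is dismissed.
            Console.WriteLine("finally ran");
        }
    }
}
```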

  61. @AC

    Firstly, bear in mind that Environment.FailFast() is a pretty "extreme prejudice" approach. It stops you attaching the debugger, for example – the process has already died. You will probably at least want to write out the stacktrace first.

    Secondly, you don’t make the decision about what is bad versus what is ugly inside that event handler. You make it by deciding to catch (for bad) or not catch (for ugly) in the surrounding code.

    So anything that escapes into this event handler is by definition already ugly. It’s something you had no idea was possible. No need to treat exceptions differently by type – as soon as they’re flying uncaught out of your Main function, you know they’re ugly. You have a bug.

    Catching an exception means "The operation didn’t complete, but in a way that I understand the implications of, and I know that I can undo any partial state changes – my finally blocks will take care of that.". Not catching an exception means "I don’t know what the hell is happening. Maybe it’s not a good idea for my finally blocks to try and clean stuff up – they might make it worse." And in the setup of your process, you can decide how to deal with the unhandled exceptions – hence the name of the event.

    So those were the facts. Now for my *opinion*: there is usually no good reason to run finally blocks once you have detected an unhandled exception. You have a bug in your code. It’s time to stop executing and call Dr Watson. Attempting to limp along may sound like a way to "save face" in front of the customer, but it may mean that you silently trash persistent data, or that it takes you months longer to actually find and fix the problem for real. Which is going to create a worse impression?

    In the worst case, an application has a try/catch (Exception ex) around everything in Main, and that is like saying, "If there’s a NullReferenceException, that’s cool – honestly, it’s, like, *totally* part of my design, dude! My finally blocks know how to cleanly roll back from that state so we don’t leave any mess on the system, I have a unit test that exercises it… seriously, I know what I’m doing!"

  62. @Daniel, an excellent post. One "pattern" I have used where there is code I want/need to run even if "Ugly" is to register specific routines so they can be executed within the context of AppDomain.UnhandledException. This provides better control than yes/no on ALL finally blocks.

    This approach is not without risk, and must be used sparingly, but for certain, well considered scenarios, it can be a life saver [especially when I am dealing with Industrial Automation…]

  63. Chris B says:

    Daniel, thanks for taking the time to answer so thoroughly.  That helps me to realize what is going on much better than I did before.  I never knew that the implications of catching an exception were so significant, but it is definitely something I’ll bear in mind in the future.

  64. Chris B says:

    Is there a reason that BadException and UglyException aren’t part of the Exception class hierarchy?  If they were, it would be possible to catch(BadException e){ Log(); Rollback(); Fix(); etc… }.  In this case, you could assume the exception’s author and the thrower knew enough about the conditions which caused the exception to say: this is bad enough that I don’t know where to go, but I do know where I am and where I’ve been. On the other hand, the CLR (or possibly the compiler) could say that catch(UglyException) will never run: no assumptions can be made about the state of the system, and it’s best if we all just leave, and leave now.

    For me, this pattern is easier to understand than having the CLR look for special exceptions that, from a type definition perspective, have no clearly defined attribute.

  65. AC says:

    Thanks @Daniel. From all the chatter it wasn’t clear that we were discussing how the runtime is the one to decide to execute / not execute the finally block for me. I’m cool with that, and agree that main methods shouldn’t catch exceptions just to keep going or try again.

    I was interpreting this part of the discussion as a "You’re not doing it correctly unless you follow pattern x" and was therefore trying to summarize.

    We don’t want our apps to just go poof when there’s an error, so it’s nice that there’s an event where we can attempt to log something (or even perhaps try to save some additional debug / state information). It might fail in some cases, but that’s ok too as long as we’re aware of it.

    Exceptions, when used correctly, are an incredible tool to making better programs. Every exception caught in dev is an exception that doesn’t make it to production.

  66. Hmm… another long comment. Unfortunately I don’t have time to edit it down so I hope it’s not too repetitive.

    @Chris B said: “Is there a reason that BadException and UglyException aren’t part of the Exception class hierarchy? “

    There kind of was an attempt at that. BadException kind-of already exists, and is called ApplicationException. But it was doomed to failure from the start. It doesn’t and cannot make sense as a concept.

    The type-aware design of catch, the fact that it catches any exception that is assignment-compatible with the specified type, tempts us with the possibility that the exception type hierarchy will be organised in some way that takes away some pain for us, e.g. those exceptions I need to catch will tend to be derived from a common base class, one which is NOT a superclass of any exceptions I wouldn’t want to catch.

    And then taking that a step further, what if there was a single common base class for all exceptions from which any application can recover? (Which is what Chris B is asking).

    So it’s instructive to look at the docs for ApplicationException:

    It says: “The exception that is thrown when a non-fatal application error occurs.”

    Now, we can certainly say some exception types are intrinsically fatal. But we cannot say that some exception types are intrinsically non-fatal and recoverable. It’s just not symmetrical. The ability to recover depends on the context. Only the caller knows whether they expect the exception to occur in a given scenario, so it’s their decision as to whether it should be considered fatal.

    It goes on to say: “The ApplicationException class differentiates between exceptions defined by applications versus exceptions defined by the system.”

    So somehow, the single base class of your custom exception indicates where the exception was thrown, AND whether it’s recoverable or not. That’s going to be even more tricky.

    And now look at the derived exception types: System.Reflection.TargetParameterCountException! So the pattern wasn’t even being followed prior to CLR version 1. Which was really no great loss.

    So now that type has a big warning label on it. The current advice (and I believe the only possible correct advice) is to derive your own exception types from Exception, the “Daddy” base class. The ability of catch to work off inheritance is basically irrelevant. Assignment compatibility of the types is not a reliable guide as to whether they would be “equivalently catch-worthy”. It is now recognised that exceptions form a flat space of peer types, that have to be treated as individuals on their own merits, not grouped in a hierarchy and dealt with ‘en masse’. (There’s probably a political analogy here…)

    When you touch the file system, especially in an interactive GUI app, you usually need to catch both IOException and SecurityException, and treat those two imposters just the same. The only common base class they have is SystemException. Which is also a base class of NullReferenceException! So that’s no good. So you always need to have multiple catch blocks, and you probably do the same thing in all the catch blocks. This requires a bit of ugly repeated ceremony around each separately-recoverable operation on the file system. (Though you can still centralise it with the wrapper method approach, where you pass in a lambda and the wrapper method calls back to it inside the appropriate try/catch logic – I gave an example somewhere above.)
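    That wrapper-method approach might be sketched like this (TryFileOperation and ShowMessage are hypothetical names; assumes .NET 3.5 for Action):

```csharp
using System;
using System.IO;
using System.Security;

static class FileOps
{
    // Centralises the repeated IOException/SecurityException ceremony
    // around each separately-recoverable file system operation.
    public static bool TryFileOperation(Action operation, out Exception error)
    {
        error = null;
        try { operation(); return true; }
        catch (IOException ex) { error = ex; return false; }
        catch (SecurityException ex) { error = ex; return false; }
    }
}

// Usage:
//   Exception err;
//   if (!FileOps.TryFileOperation(() => File.Delete(path), out err))
//       ShowMessage("Could not delete the file: " + err.Message);
```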

    If a GUI app lets the user delete a configuration file, and it fails, this probably means the user has it open in Notepad. So just tell them the file is locked and let them try again. It’s not a big deal, it’s a known condition that can happen, even though it’s not that common. So it’s ideal for uniform exception handling.

    But maybe in some situation, in a server app that entirely owns part of the file system, I need to delete a file. It absolutely has to be deleted, or else the next step my application needs to perform will be screwed before it even starts. (Or more formally: it is a precondition of the next step that the file not exist). I’ve been really careful, but what if the file is locked because I leaked a handle somewhere else? If so, the unfortunate fact is that I have a fatal bug. I have to give up, because it invalidates all my efforts to get it right, and I know that the next thing the code will try to do will also fail, probably in a totally confusing way that obscures the underlying problem. Things can only get worse if I continue.

    And all because of an IOException, which in many other kinds of software you would quite happy catch and recover from!

    So the type of an exception is never going to be a universal guide as to whether an app can recover. Hence it is an unrealistic dream to think that each exception could be given an intrinsic property (of any kind – a flag or a base class) that indicates whether you can expect to recover from it. It depends what you were trying to do, and what you need to do next.

    (Although to reiterate, it is possible to permanently classify an exception type as intrinsically fatal. Some things are just beyond the pale!)

    And so if you find yourself thinking “This should be a whole lot easier. There must be something Microsoft did wrong here – it shouldn’t be this hard, I shouldn’t have to think about all these different possibilities”, well, guess again. The present situation is probably as simple as it can ever get. The points I’m making here are not specific to C# or the CLR, they are – pretentious as it sounds – philosophical conclusions about categorization, hierarchy, relatedness, and the distinction between the known and the unknown. And you ain’t gonna fix all that with a better library!

    And even though sometimes you might get lucky and the type hierarchy turns out to be convenient, is it really a good idea to be lazy and catch a common base class? What if a future maintainer derives a new exception from that base, and it’s one that you shouldn’t be catching in this context? (What if a dynamically loaded plugin does this!?) Catching an exception type that can be derived from is a very strange thing to do. It’s just like catching Exception, really – it’s saying that you know you can handle unknown, unspecified future situations. It *could* be valid, if the derived type truly indicates a failure that can be recovered from by any caller who can recover from the previously-known types derived from the base type. But that’s yet another tough thing to get right, or to explain to a team of coworkers.

    So to police both sides, all exception classes should be sealed, as well as derived directly from Exception.
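    Following that advice, a custom exception type would look something like this (a sketch; the type name is made up):

```csharp
using System;
using System.Runtime.Serialization;

// Sealed, and derived directly from Exception, with the conventional
// constructor set.
[Serializable]
public sealed class ConfigFileLockedException : Exception
{
    public ConfigFileLockedException() { }
    public ConfigFileLockedException(string message) : base(message) { }
    public ConfigFileLockedException(string message, Exception inner)
        : base(message, inner) { }
    private ConfigFileLockedException(SerializationInfo info, StreamingContext context)
        : base(info, context) { }
}
```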

    Awesome analysis! Thanks for posting this. — Eric

  67. Gabe says:

    OK, so now you’re saying that all exceptions should be derived directly from Exception *and* all unhandled exceptions should immediately crash the app? That seems like a recipe for extremely fragile apps.

    Let’s say that instead of throwing generic IOExceptions every time they came up with a new way to prevent you from opening a file (e.g. offline file unavailable, file involved in a transaction, broken symlink), they decided to throw new OfflineFileUnavailableException, TransactedFileException, or BrokenLinkExceptions. Instead of being able to catch IOException and tell the user "Sorry, the file isn’t available in some way I can’t explain", you would have to say "Sorry, but something unanticipated happened and now you lose all your unsaved data".

    Can you imagine writing a C program that looked at the error returned by every system call and crashed if it wasn’t in a list of acceptable errors known at compile time? Raymond Chen would write a blog post about how your app required a different appcompat shim for every different version of your software!

  68. Thanks Eric, high praise indeed!


    @Gabe – For system calls, e.g. the BCL, they simply don’t do that. The possible exceptions are part of the contract. So new exceptions will not be thrown when enhancements are made to existing methods in future versions of the CLR.

    If such a breaking change is found to be necessary or desirable, then that means a major revision of the CLR, such as is happening with CLR 4.0. This is then effectively a different platform to the previous version, and old apps don’t run on it without a recompile anyway. But even then, such deliberate breaking changes should be so rare as to be practically non-existent.

    If the docs for a system call in the C runtime library say that the allowed error codes are 1, 2 and 3, then an application is very much within its rights to explode in fury if it gets back 4 instead. It doesn’t know what 4 means. To continue executing would be something of a gamble, however you look at this. Maybe it’s worth the risk, maybe it isn’t. Who knows? Fortunately system calls have a very wide user base, and so have to stick very closely to a narrow contract, so they tend to be very careful about changing what they do between versions of the platform. This is why we call it a "platform" – it’s safe to stand on, because it isn’t moving around too much.

    It gets somewhat greyer in the architecture of our own libraries, because we may want to guarantee longer-term binary compatibility and retain the flexibility to mess around with 3rd party components that we depend on, and those 3rd party components might not be as helpful as the CLR in sticking to an implied exception contract.

    Suppose you have three layers to your software. The upper layer is "Application", the middle layer is "ConfigReader" and the lower layer is "JSONParser". So the app needs to read a config file, and ultimately it’s handled by a JSON parser class.

    The middle layer is responsible for encapsulating its implementation details. One of those details is the choice of 3rd party JSON parser library. If the ConfigReader layer switches to a new vendor of JSON parsing, the upper Application layer shouldn’t be affected by this. But the new JSON parser throws different exception types.

    So yes, if you want hardcore information hiding and to fully isolate higher layers from lower layers, your middle layer must wrap exceptions, e.g. catch some specific JsonParserSyntaxException and wrap it in ConfigReaderSyntaxException, and so on for any other known recoverable exception types.

    The problem is NOT the catching and re-throwing in itself. The problem is trying to do it in a lazy way by catching the universal Exception base class.
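    The ConfigReader example might look like this in code (all type names here are hypothetical):

```csharp
using System.IO;

public class ConfigReader
{
    public Config Load(string path)
    {
        string text = File.ReadAllText(path);
        try
        {
            return JsonParser.Parse(text); // 3rd-party implementation detail
        }
        catch (JsonParserSyntaxException ex)
        {
            // Wrap only the specific, known exception type; do NOT lazily
            // catch the universal Exception base class.
            throw new ConfigReaderSyntaxException(
                "Config file '" + path + "' is malformed.", ex);
        }
    }
}
```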

  69. Chris B says:

    This is mainly to confirm my understanding of what was said above to be sure I’m not missing something.  I think good exception handling practices are something that I’ve never had the thorough understanding of that I should have, and I think its about time that I got it right.

    As far as I can tell from the above comments, the type of a managed exception is analogous to an unmanaged return code.  So where in C++ you might return E_FILE_NOT_FOUND, in C# you would throw a FileNotFoundException.  Managed exceptions have three advantages over return codes that I can see. First, the ability to include extra data in the exception object, such as a message, a stack trace, and a nested exception.  Second, the ability to automatically unwind the call stack looking for handlers.  This makes it much easier to skip stack frames which make no attempt to handle the error.  Third, it is easier to encapsulate implementation-specific information in a context-specific exception by catching the implementation-specific exception and wrapping it in a context-specific exception.

    I think my biggest misunderstanding was that managed exceptions had the additional attribute that the type hierarchy could be exploited to handle any exception in the general case, such as logging and re-throwing.  I made this assumption on the intuitive notion that since polymorphism was one of the pillars of OOP, it should apply to exceptions as well, but Daniel’s arguments did a very good job of proving this assumption to be untrue.

    Hopefully I am not missing anything else!

  70. Gabe says:

    Daniel Earwicker: I think I fell asleep and woke up in some alternate universe where exceptions are part of a contract and programs need to be recompiled to run on new versions of the platform. This is still a .Net blog, isn’t it?

  71. @Chris B – clearly you weren’t alone in having that "polymorphic exceptions" assumption – it’s designed into the CLR, the JVM and C++. So you’re in good company!

    @Gabe – You make it sound like I just unplugged you from the Matrix or something. Sorry. A C# program that targets .NET 2.0 requires the .NET framework version 2.0 to be installed. It will not attempt to run on 4.0. And prior to 4.0, only one version of .NET could be loaded into a given process. This is why you could not write shell extensions in C#: if one extension targeted 1.0 and another targeted 2.0, only one of them would be able to load into the explorer.exe process. And hence all the effort in 4.0 to provide runtime SxS support, to allow multiple versions of the framework to exist in the same process.

    There is no C# language support for *compile time* checking of exception propagation. But the absence of a Java-style ‘throws’ specification feature does not make exceptions go away. It just makes the compile time checking go away. I repeat: turning off the light does not make the room tidy.

    In a purely dynamically typed language (Lisp, Smalltalk, Perl, Python, Ruby, JavaScript…) there is no language support for static checking at all, and yet types exist. Each time in JavaScript we say obj.foo(), we are asserting that obj is something that has a property called foo containing a function object that is happy to be called with no arguments. That’s an implicit or "latent" type specification, a form of contract between the caller and the implementer. In dynamic languages, such contracts still exist – they’re just spread all around the code.

    Without a compiler to check such contracts, it’s up to the coder to do so (by writing unit tests – which is why the drive toward test driven development originated among Smalltalk users). Even in such languages, it is often possible to infer type information in a systematic way, which is how we get intellisense for JavaScript in some IDEs, and is also one way that JS engines in recent years have got a lot faster.

    In C#, the exception-throwing behaviour of a method is one aspect of it that is dynamically typed. That doesn’t mean you don’t have to get it right. It just means the compiler isn’t going to help you.

    In an analogy with inferred types and intellisense in JS, could a static analysis tool check the flow of exceptions in .NET assemblies and allow an IDE to tell us what exceptions may emerge from a given point in the code? Yes, to some extent. I’d really like that feature in Visual Studio. But it wouldn’t be complete for all circumstances, because of polymorphism: a call to a delegate, interface method/property or virtual/abstract method/property may have multiple destinations, each with its own throw statements. But for static methods in the BCL it would be excellent to be able to call up a list of what they might throw, and it should be perfectly possible to do by hunting through the IL in the assembly. (And it could warn us if it encountered polymorphic calls.)

Comments are closed.
