Exception handling guidelines…

This morning, I was reading a post on the GDN C# Language board about exception handling, and a reply got away from me, so I thought I'd put it here instead.

I was going to link to the post, but I think that might be unfair to the poster, so I'll skip it.

The discussion is about additional exception handling features in C#.


The vast majority of exceptions should not be caught, except perhaps by a last-chance handler.

In rare and very specific situations, you may want to implement recovery logic. The canonical case is getting an exception when trying to open a file, though even then, you only catch it if there's something you can do about it. Can't open a user file? Ask the user for the filename again. Can't open a system file? There's probably no way to recover from that, so catching it probably doesn't do any good.
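A sketch of that guideline (the prompt callback and file names are invented for illustration, not from the post): catch only where recovery is possible, and let the unrecoverable case propagate.

```csharp
using System;
using System.IO;

static class FileOpener
{
    // Hypothetical sketch: promptUserForFilename is an assumed callback.
    // We catch FileNotFoundException only because we can actually
    // recover: the user can supply a different name.
    public static Stream OpenUserFile(string path, Func<string> promptUserForFilename)
    {
        while (true)
        {
            try
            {
                return File.OpenRead(path);
            }
            catch (FileNotFoundException)
            {
                path = promptUserForFilename();   // recoverable: ask again
            }
        }
    }

    // No catch here: a missing system file is unrecoverable, so let the
    // exception propagate to the last-chance handler.
    public static Stream OpenSystemFile(string path)
    {
        return File.OpenRead(path);
    }
}
```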

Without any user-written exception handling, the behavior of your code WRT exceptions is fully specified. Every exception makes it up to the outermost layer, so you know about everything bad that happens.

When you start adding in exception handling, that's no longer true. A catch block that you write may be perfect for the exception you catch, which is a good thing - that's why exception handling is there. Or, it may appear to work but harbor a latent bug. I've used this example a number of times:

catch (Exception e)
{
   if (e.InnerException.GetType() == typeof(MyFileNotFoundException))
      Recover();   // use your imagination for how this works
}

This code appears to work fine - the recovery works great. But it swallows any other exception that happens in the try block.
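Two safer shapes for that handler, sketched with stand-ins for the post's hypothetical MyFileNotFoundException, Recover(), and the work being attempted: either catch the specific type directly, or rethrow anything the recovery logic doesn't match.

```csharp
using System;

class MyFileNotFoundException : Exception { }

static class SafeCatchDemo
{
    public static bool Recovered;   // stand-in for the effect of Recover()

    static void DoWork()
    {
        // Simulate a failure wrapped the way the post describes.
        throw new Exception("outer", new MyFileNotFoundException());
    }

    public static void Run()
    {
        try
        {
            DoWork();
        }
        catch (Exception e)
        {
            if (e.InnerException is MyFileNotFoundException)
                Recovered = true;   // i.e. Recover()
            else
                throw;              // don't swallow unrelated exceptions
        }
    }
}
```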

This specific example is an illustration of why wrapping is something to be approached carefully and thoughtfully, but the general point is that exception handling needs to be targeted extremely tightly. Even then it's possible to get unexpected behavior.

Or, to put it another way, the less exception handling the better. Even experienced programmers will make mistakes - I inserted 250K records into a database a while back because of a flaw in my recovery logic in a catch clause.

If you've spent time working with return codes in the unmanaged world, my experience suggests that you are likely to use too much exception-handling code in the managed world.

To go on to a couple of the specific proposals:

1) One proposal is to add try(count), which would automatically retry the block up to count times.

There are a few scenarios where this makes sense, but they're pretty rare.

My group came across a situation in unmanaged code recently. The low-level code needed to open a file, but sometimes it wasn't quite closed, so the developer wrote some retry code - it would sleep for a second, then retry - up to 10 times before failing. That code worked fine, but then one day the process hung. It turned out that the file was just missing, and the code tried for 10 seconds, then errored out.

But the calling code also implemented a retry mechanism. As did the calling code of that.

So, the system would wait for about 15 minutes before it finally decided that it couldn't find the file.
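If retry logic is genuinely warranted, one way to keep it from masking a missing file (a sketch; the timings and exception choices are assumptions, not the code from the story) is to fail fast on non-transient errors and bound the retries in exactly one layer:

```csharp
using System;
using System.IO;
using System.Threading;

static class Retry
{
    // Retry only failures that can plausibly clear up (e.g. a sharing
    // violation while the file is still being closed); fail fast when
    // the file simply isn't there.
    public static FileStream OpenWithRetry(string path, int maxAttempts)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None);
            }
            catch (FileNotFoundException)
            {
                throw;   // retrying can't make a missing file appear
            }
            catch (IOException) when (attempt < maxAttempts)
            {
                Thread.Sleep(1000);   // possibly transient: wait and try again
            }
        }
    }
}
```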

I think try(count) would quickly become a crutch for taming behavior that wasn't understood, and a source of lots of weird behavior in its own right.

A second proposal was to expand the scope of a catch clause, either making it apply to a whole method or supplying a handler method to be called when an exception arises.

The essence of exception handling is to be able to respond to a specific exception in a specific situation.  The chance that such a handler can do the right thing in all situations seems remote to me, and the chance that it would do the wrong thing seems high.

Comments (19)

  1. davkean says:

I agree. I think a retry mechanism would be misused by new programmers. I don’t think there are too many situations where it would be useful. In your example above with the file not being closed – rather than retrying the operation until the file was closed, wouldn’t it be better to work out why the file wasn’t being closed? For example, if this was allowed I could see the following code being written:


               try (10)
               {
                   File.Open(@"c:\MyFile.txt", FileMode.Open);
               }
               catch (IOException)
               {
               }


    In this situation the user doesn’t realize that File.Create returns an open FileStream, and they wonder why their subsequent call to File.Open fails. So rather than work out why, they simply add a retry mechanism that ends up relying on the garbage collector to finalize the FileStream and close the handle.

    This is why I don’t like it.
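A sketch of the fix for the scenario davkean describes, assuming the problem is simply that the FileStream returned by File.Create was never disposed: dispose it deterministically instead of retrying until the finalizer happens to run.

```csharp
using System.IO;

static class CreateThenOpen
{
    // File.Create returns an open FileStream; dispose it (here via
    // 'using') before reopening the file, rather than retrying until
    // the garbage collector closes the handle for you.
    public static FileStream CreateAndReopen(string path)
    {
        using (FileStream created = File.Create(path))
        {
            // write initial contents here if needed
        }
        return File.Open(path, FileMode.Open);
    }
}
```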

  2. Jon says:

    Personally, the biggest thing I would like with try-catch blocks is for the typed catch syntax to be recursive through the InnerException property.  As in your example above, the catch statement would be "catch (MyFileNotFoundException e)", and it would automatically recurse down through the InnerException properties of the originally thrown one.  

    It would probably have to be optional somehow I guess, maybe add a keyword to make it "catch recursively (MyFileNotFoundException e)", to prevent any breaking changes to existing code.  I do see the pattern you show above a lot in pretty much every C# project I’ve worked on, and as you rightly point out it’s easy to make that mistake and just swallow everything silently.
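Absent such syntax, a small helper can approximate the idea today (a sketch, not a real BCL API): walk the InnerException chain looking for the target type.

```csharp
using System;

static class ExceptionExtensions
{
    // Approximates Jon's 'catch recursively' as a library helper:
    // search the InnerException chain for an exception of type T.
    public static T FindInner<T>(this Exception e) where T : Exception
    {
        for (Exception current = e; current != null; current = current.InnerException)
        {
            if (current is T match)
                return match;
        }
        return null;
    }
}
```

Combined with a C# 6 exception filter, `catch (Exception e) when (e.FindInner<FileNotFoundException>() != null)` enters the handler only when the chain contains the target type, so unrelated exceptions are not swallowed.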

  3. Maurits says:

    why not just do something like

    int retries = 10;
    bool fileopen = false;
    int tryNumber = 0;

    while (!fileopen && tryNumber++ < retries)
    {
        try
        {
            // open the file here
            fileopen = true;
        }
        catch (MyFileNotFoundException e)
        {
            PrintHelpfulWarningMessage(tryNumber, retries);
        }
    }

    if (!fileopen) { DieAHorribleDeath(); }

  4. Ny says:

    Seriously, it is a waste of time to discuss this feature in C#. Whatever we have currently can be used to get the same behavior.

    Exceptional "Exception Handling" comes from good architecture of the system. There is always an efficient design out there that does good exception handling. We obviously need another layer on top to customize it for individual applications.

  5. Haacked says:

    The other problem with retry logic is that sometimes developers will retry a method or a few lines of code, but not realize that the code itself is not transactional.  For example, if the code fails, it is not necessarily the case that everything is in the same state as the first try.

    I think if you plan on adding retry logic to anything, you’d want to double check to make sure that the operations are somewhat atomic and a failure indicates that you pretty much are back in your original state.
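A contrived sketch of that hazard (all names invented): a batch insert that fails midway leaves partial state behind, so a blind retry duplicates the records that already went in.

```csharp
using System;
using System.Collections.Generic;

static class NonIdempotentBatch
{
    // Simulated 'database': partial inserts survive a mid-batch failure.
    public static readonly List<int> Inserted = new List<int>();

    public static void InsertBatch(int[] records, int failAfter)
    {
        int done = 0;
        foreach (int r in records)
        {
            if (done++ == failAfter)
                throw new InvalidOperationException("simulated mid-batch failure");
            Inserted.Add(r);   // this state is NOT rolled back on failure
        }
    }
}
```

A retry loop wrapped around InsertBatch would re-insert the records that succeeded before the failure; real code needs a transaction or an idempotency check before any retry.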

  6. Jeff Stong says:

    Eric Gunnerson has some advice about exception handling and recovery on his blog: "The vast majority…"

  7. kbiel says:

    Jon>>Personally, the biggest thing I would like with try-catch blocks is for the typed catch syntax to be recursive through the InnerException property.

    Actually, I think Eric was arguing against wrapping exceptions needlessly.  It’s been my experience that there is never a good time to be handling an InnerException.  If an exception is wrapped, then the code that wrapped it is telling you that it is unrecoverable and is attempting to give you more information to fix the problem with your code.  If the InnerException is something that should or could be handled by the consuming code, then it should not have been wrapped.

    I have seen plenty of examples where an exception was wrapped simply to say "I was here!".  That’s what the stack trace is for.

  8. Miral says:

    "I was here" wrapping exceptions are useless, yes.  But when they add information about the parameters passed into the function (such as what filename to act on, or some other internal state) then they’re priceless.

    That’s actually one of the things that irritates me the most about the current exception stack traces — they don’t show the parameter values.  Even if that was limited to native types (like ints and strings) it’d be a big help.
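A sketch of wrapping that adds exactly the kind of parameter context Miral describes (the loader method and message shape are invented for illustration): keep the original exception as InnerException, but put the parameter values, which the stack trace won't show, into the message.

```csharp
using System;
using System.IO;

static class ConfigLoader
{
    // Wrap the original exception as InnerException, and record the
    // arguments in the message so diagnosis doesn't depend on the
    // stack trace alone.
    public static string Load(string path, string section)
    {
        try
        {
            return File.ReadAllText(path);
        }
        catch (IOException e)
        {
            throw new InvalidOperationException(
                "Failed loading section '" + section + "' from '" + path + "'", e);
        }
    }
}
```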

  9. Matt says:


    Without any user-written exception handling, the behavior of your code WRT exceptions is fully specified. Every exception makes it up to the outermost layer, so you know about everything bad that happens.


    The fully specified part is correct – the outermost layer part is wrong, specifically in the case where an exception percolates from one app domain to another but cannot be serialized across the boundary.

    I beat my head against a wall every time this happens since you lose the stack trace.*

    Another reason to do it is that some of the system developers on the .NET side were either slack or too performance-focused, e.g. the 1.1 ArrayList indexer:

    public virtual object get_Item(int index)
    {
         if ((index < 0) || (index >= this._size))
         {
               throw new ArgumentOutOfRangeException("index", Environment.GetResourceString("ArgumentOutOfRange_Index"));
         }
         return this._items[index];
    }

    AAAAAARgh – ArgumentOutOfRangeException(string, object, string) exists to provide the bloody value! you have it, use it…

    Ok in this case that would trigger a box, not great (though a good case for providing an integral special case).

    But then IndexOutOfRangeException doesn’t bother to provide the index either… would it really have been that big a deal to provide an int field in this exception, ok in some cases the jit may make filling it in impossible (or at least computationally expensive) but come on – it is used by string GetChar() too.

    Hell even allowing the option of providing additional info in core exceptions as a compile / init constant would do.


    * this is all written from the point of view of someone stuck with 1.1 for a while longer – no ideas if 2.0 works round this

  10. Seth says:

    Talking about the last-chance handler. When your application is running in an unattended environment you do want to catch ALL unhandled exceptions and do two things: log as much information as possible about the exception and the state of the process and then recover from the problem (ultimately by rebooting). In the unmanaged world I have SetUnhandledExceptionFilter and MiniDumpWriteDump at my disposal but I have yet to find a perfectly matching (read: equally powerful) alternative in the managed environment.

    Currently I’m trying to get by with a watchdog application that uses MDBG to monitor the other process, stop on unhandled exceptions and dump as much information on the process that I can get (I currently managed to retrieve the exception, inner exceptions, the active thread stack, all other managed thread stacks, all AppDomains and the assemblies loaded in each AppDomain). I got this idea off one of the .NET debugging blogs if I remember well. But this all feels so amateur-like. I am craving for a .NET minidump!

  11. ericgu says:


    I thought about the thread and appdomain case when I was writing that, but I didn’t want to complicate the discussion needlessly. It is something you need to keep in mind when you’re planning how to handle exceptions at the global level, but I don’t think it invalidates my point.

  12. Jarno says:

    With regard to the "eating of exceptions bug" when handling specific inner exceptions, wouldn’t the following counter that?





    catch (Exception e)
    {
       if (e.InnerException.GetType() == typeof(MyFileNotFoundException))
          Recover();   // use your imagination for how this works
       else
          throw;
    }







    I have had some instances of really needing to recover based on an inner exception: e.g. (from memory) while hosting winword in a Windows Forms application, where on some machine configurations you get a COM exception with a specific error number, or the exceptions you get back from that now-obsolete emailer in the v1.1 System.Web namespace, where the actual error, including the SMTP error response, is a couple of levels of inner exceptions deep.

  13. Matt says:

    I thought you would have 🙂

    That certainly doesn’t invalidate the core point about avoiding anything but last-chance handlers without a very good reason, for instance that you can recover from the error, or that you want to make sure some unanticipated user input doesn’t kill the whole system, just perhaps their segment of it…

    The lack of detailed information in several of the core System exceptions does mean that it becomes much more tempting to write "I was here"* exceptions that add the calling parameters, so you can work out what went wrong faster. Were the System ones to supply more info, the temptation would be removed.

    * good name Miral 🙂

  14. wpoust says:

    It’s really important to also look at an exception handling scheme in regard to code maintenance.  It’s not too hard to write code that works today, where you feel comfortable believing the exception handling works (today).  The difficult part comes a few years from now, when you’ve moved on to another project and someone else starts making changes to your code.

    I am a firm believer in catching specific exceptions at the level they are thrown, performing the appropriate steps to put the object (or environment) in a "good" state, and throwing a more generic exception that wraps the specific exception.

    Attempting to recover from exceptions at a higher call level is a defect waiting to happen.  In your example, the Process method implementation may be changed at a later point, making it possible for MyFileNotFoundException to be thrown for several different reasons.  Plus, the recovery logic may need to be different for each case.  Essentially, by attempting to handle specific recovery outside the point of the problem, you would have to know implementation details of the lower level, and that breaks object encapsulation.

  15. DavidKAllen says:

    I agree with miral.

    I often need to rethrow the exception because there is crucial data I need to diagnose the problem. My canonical case for this is wrapping SQL calls.

    It’s nice to know where it failed, but I need to know exactly what parameters or sql were being processed at the time the failure occurred. Given that I test heavily, it is likely that we have an error that only reveals itself under unusual data combinations. Without that info I’m seriously hampered.

    Other than this situation, I can easily support what Eric says.

  16. kbiel says:

    Miral>>But when they add information about the parameters passed into the function (such as what filename to act on, or some other internal state) then they’re priceless.

    That was my point exactly.  InnerExceptions should not be handled.  They are purely informational to diagnose the problem and correct the code.  If the InnerException should be or could be handled by the consumer, then don’t wrap it, pass it on.

    So the choice I was presenting was between wrapping an exception to be left unhandled by the consumer, thus producing a visible error or not wrapping an exception if it is possible for the consumer to fix it in a try-catch block.  Try-catch blocks should not be walking the InnerException stack.  If that is happening then either the consumer is abusing the object or the producer was doing something stupid like "I was here" exception wrapping.

  17. David Levine says:

    The inner exception, or any other field such as an error code, HRESULT, or message data, should not be used for recovery purposes. Reasons: there may be multiple chained exceptions, so there is no clear indication of which inner exception should be examined for recovery purposes; the fields may have been changed by intermediate code; the value and meaning of a particular field may change over time or with environment conditions; and the only guarantee made by the runtime is the type of exception caught by the catch handler.

    Ultimately, the only block of code that has a reasonable chance of having enough detailed information to really know what the root causes of that exception might be is the code that caught the initial exception. However, this also doesn’t help much because the code at that level typically does not have sufficient information to understand the context that it is operating in to make a retry decision; typically, only the code that initiated the original request has that context. The code closest to the exception site has specific fault information but little context to evaluate it with.

    About the catch/no-catch decision… I disagree with the guidelines. A stack trace is often not sufficient for diagnostic purposes, and there are times when a stack trace is not even present. Even more compelling, a stack trace is not the same as context… what was it doing, and why was it doing it? Ultimately, the diagnostic information must be understandable by the end user so that they can figure out what they can do differently to avoid the problem the next time, or just to understand the root causes of the problem. Displaying a NullReferenceException does not tell them what they need to know: why did it fail, and more importantly, what can they do differently the next time so that the request succeeds?

    I prefer to catch-wrap-throw at all major logic decision points, each time adding additional context to the exception message, such as method parameter values or meaningful internal state. The exception type itself can be cloned so that the type caught at the top of the stack is the same as the type originally thrown.

    A weakness of the current guidelines is that they do not distinguish between the strategies that different types of software should employ. There’s a difference between writing a component that many different types of software will consume and run in many different environments versus an application that must provide logging and diagnostic information to an end user.

    In my view, the ultimate guide on how to handle exceptions is not how to make it easy for developers to write code (though this is a good goal) but how to build a product that is end-user friendly.

  18. Patrick Smacchia says:

    I put my thoughts about exception guidelines in my book Practical .NET2 and C#2 (www.PracticalDOT.NET). Here is the excerpt:

    When should you consider throwing an exception?

    The exception mechanism is generally well understood but quite often used improperly. The base principle is that an application which functions in a normal way should not throw exceptions. This forces us to define what an abnormal situation is. There are three types:

    — Those which happen because of a problem with the execution environment and can be solved by a modification to this environment (missing file, invalid password, non-well-formed XML document, network unavailability, restricted security permissions…). Here we talk of business exceptions.

    — Those which happen because of an execution environment problem which cannot be solved. For example, memory-hungry applications such as SQL Server 2005 may be limited to 2 or 3GB of address space in a 32-bit Windows process. Here we talk of asynchronous exceptions, from the fact that they are not related to the semantics of the code which raised them. To manage this type of problem, you must use advanced CLR features such as CERs (constrained execution regions) and critical finalizers. This is essentially equivalent to treating such an abnormal situation as normal! Be aware that only large servers which push the limits of their resources should encounter asynchronous exceptions and need to use these mechanisms.

    — Those which happen because of a bug and which can only be solved by a new version which fixes properly the bug.

    What to do in exception handlers?

    When you catch an exception, you can envision three scenarios:

    — You are faced with a real problem, but you can address it by fixing the conditions which caused it. For this, you may need new information (invalid password -> ask the user to reenter the password…).

    — Or you are faced with a problem which you cannot resolve at this level. In this case, the only good approach is to rethrow the exception. It is possible that there is no proper exception handler, in which case you delegate the decision to the runtime host. In console or windowed applications, the runtime host causes the whole process to terminate. Note that you can use the AppDomain.UnhandledException event, which is triggered in this situation, in order to take over the termination of the process. You can take advantage of this ‘last chance’ to save your data (as Word does), which would otherwise be lost. In an ASP.NET context, an error processing mechanism is put in place.

    — In theory, a third scenario can be envisioned: the exception that was caught represents a false alarm. In practice, this never happens.
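The AppDomain.UnhandledException 'last chance' hook mentioned above can be sketched like this (the logging is illustrative; in a console application the handler cannot stop the process from terminating):

```csharp
using System;

static class LastChance
{
    // Register a last-chance handler that logs (or saves data) before
    // the runtime host terminates the process.
    public static void Register()
    {
        AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
        {
            Exception e = args.ExceptionObject as Exception;
            Console.Error.WriteLine("Fatal: " + e);
            // last chance to save user data here, Word-style
        };
    }
}
```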

    You must not catch an exception simply to log it and then rethrow it. To log exceptions and the code that they have traversed, we recommend less intrusive approaches, such as the specialized events of the AppDomain class or analysis of the methods on the stack at the moment the exception was thrown.

    You must not use catch blocks to release the resources that you have allocated. Also be aware that, in general, only unmanaged resources are susceptible to causing problems (such as memory leaks). Code that releases resources must be placed in a finally block or in a Dispose() method. In C#, finally blocks are often implicitly created by a using block, which acts on objects implementing the IDisposable interface.
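A minimal illustration of that rule: the cleanup lives in a using block (an implicit try/finally), so it runs whether or not an exception escapes, and no catch block is involved.

```csharp
using System.IO;

static class ResourceCleanup
{
    // 'using' expands to try/finally, so reader.Dispose() runs even if
    // ReadLine throws; the cleanup never belongs in a catch block.
    public static int CountLines(string path)
    {
        using (StreamReader reader = new StreamReader(path))
        {
            int lines = 0;
            while (reader.ReadLine() != null)
                lines++;
            return lines;
        }
    }
}
```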

    Where should you put exception handlers?

    For a specific type of exception, asking this question comes down to asking yourself at which method depth this exception must be caught and what must be done about it. By method depth, we mean the number of nested calls since the entry point (generally the Main() method); the method representing the entry point is the least deep. The answer to these two questions depends on the semantics of the exception. Ask yourself, for each type of exception, at which depth your code is best able to correct the conditions which triggered the exception and resume execution, or to properly terminate the application.

    Generally, the deeper a method is, the less it should catch custom exceptions. The reason is that custom exceptions often have significance for the business of your application. Hence, if you develop a class library, you must let exceptions which are meant for the client application bubble out of the library.

    Exceptions vs. returned error code

    You may be tempted to use exceptions instead of returned error codes in your methods in order to indicate a potential problem. You must be careful, as the use of exceptions suffers from two major disadvantages:

    — The code is hard to read. In fact, to understand the code, you must manually do the work of the CLR, which consists of traversing the calls until you find an exception handler. Even if you properly separate your calls into layers, the code is still difficult to read.

    — Exception handling by the CLR is much more expensive than simply looking at an error code.

    The fundamental rule mentioned at the beginning of this section can help you make this decision: an application which functions within normal conditions does not raise exceptions.
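The BCL itself illustrates this trade-off: int.Parse signals bad input with an exception, while int.TryParse reports failure through its return value, which fits the rule that expected conditions shouldn't raise exceptions. A small sketch (the wrapper name is invented):

```csharp
using System;

static class ParseStyles
{
    // Error-code style: bad input is an expected condition here, so it
    // is reported through the return value instead of an exception.
    public static int ParseOrDefault(string text, int fallback)
    {
        return int.TryParse(text, out int value) ? value : fallback;
    }
}
```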

    Never underestimate bugs whose consequences are caught by exception handlers

    An abusive use of exceptions happens when we assume that, since we catch all exceptions, those provoked by eventual bugs will also be caught, and that this will prevent the application from crashing. This reasoning does not take into account the fact that the main nuisances from bugs are the ones that go undetected, such as nondeterministic, unexpected, or incorrect results.

    Patrick Smacchia MVP.NET
