Why Do We Still Have "Exceptions"..?

I was reading Shane’s post (well, I had more interaction with his problem than just reading his post since we sit next to each other!) and it made me realise… why do we still have exceptions!?

I mean, in this age of pre-washed salad, skinned and de-boned fish portions, and the Interweb, why is the ancient and painful (I think of handling exceptions akin to that dude from the Da Vinci Code tightening his little leg chain thingy) practice of having to “handle” exceptions still part of a developer’s life?

Now, before you put a hole through your mouse button in your frenzied attempt to leave a comment, allow me to explain.

The basic idea of any exception is that something has occurred in your application that was unexpected (such as Jesus’ likeness appearing in your comment block) and cannot be handled in an appropriate way by your application, so it falls through to the platform/runtime.

Typical causes of exceptions include dividing by zero, buffer overruns, accessing an uninstantiated variable; the list goes on. But what amazes me is that when these things occur, the only thing the OS (because it is the OS providing the lowest level of runtime services) can do is evacuate its bowels.
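
To make the complaint concrete, here's a tiny, throwaway Java example (the names are mine and mean nothing beyond this sketch): an exception that nobody handles climbs all the way out of main, and the runtime's only move is to dump a stack trace and kill the process.

```java
// A minimal sketch of the status quo: an exception nobody handles
// bubbles all the way up, so the runtime prints a stack trace and
// terminates the process with a non-zero exit code.
public class CrashDemo {
    public static void main(String[] args) {
        int[] scores = new int[0];
        int total = 0;

        // ArithmeticException ("/ by zero") escapes main: the JVM's
        // default behaviour is simply to report it and bail out.
        int average = total / scores.length;

        System.out.println("Average: " + average); // never reached
    }
}
```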

So I was thinking, why isn’t there some mechanism within the OS that is capable of handling exceptions at the last point before the inevitable user (and eventually developer) chastisement? I’m thinking that next time your application spits the dummy, instead of some nondescript box popping up, the OS (or runtime) could simply say, “The application is currently in this state (for example, trying to access object X of type Y, which is null), so I’ll ‘rewind’ myself to the last section/procedure/method that was working.” It would then let the developer (don’t tell the user, they don’t care and aren’t in a position to do anything other than be frustrated) know what happened by sending them an email (come on, email is everywhere!). And to the user, it would just say that the program stuffed up and is resetting itself back to a previous point, and that until the developer fixes the bug, they probably shouldn’t return to that feature/function.
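
Just to make the hand-waving a little more concrete, here's a rough toy sketch in Java of what a “rewinding” runtime might look like. Everything here is hypothetical (the RewindingRuntime class, the email-ish notifyDeveloper hook); it's the shape of the idea, not a real OS or CLR feature: snapshot state before a unit of work, roll back to the snapshot if an exception escapes, tell the developer the details, and give the user a plain-language heads-up.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch only: a tiny "rewinding" runtime wrapper.
// It snapshots application state before running a unit of work,
// rolls back to that snapshot if an unhandled exception escapes,
// and reports the failure to the developer rather than the user.
public class RewindingRuntime {

    // Application state kept deliberately simple: a map we can copy cheaply.
    private Map<String, Object> state = new HashMap<>();

    public void run(String featureName, Consumer<Map<String, Object>> unitOfWork) {
        Map<String, Object> checkpoint = new HashMap<>(state); // last known-good state
        try {
            unitOfWork.accept(state);
        } catch (RuntimeException ex) {
            state = checkpoint;               // "rewind" to the checkpoint
            notifyDeveloper(featureName, ex); // a real system might email or log telemetry
            notifyUser(featureName);          // plain language, no stack trace
        }
    }

    private void notifyDeveloper(String featureName, Throwable ex) {
        System.err.println("[dev] " + featureName + " failed: " + ex);
    }

    private void notifyUser(String featureName) {
        System.out.println("Sorry, " + featureName
                + " hit a problem and has been reset. Best to avoid it until it's fixed.");
    }

    public static void main(String[] args) {
        RewindingRuntime runtime = new RewindingRuntime();

        runtime.run("profile-editor", state -> {
            state.put("name", "Ada");
            String email = (String) state.get("email"); // object X of type Y, which is null
            System.out.println(email.length());         // NullPointerException
        });

        // The earlier change to "name" was rolled back along with everything else.
        System.out.println("State after rewind: " + runtime.state);
    }
}
```

Even as a toy it shows where the hard part is: deciding what “state” actually means, and what can genuinely be rolled back.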

Anyway, don’t get too bogged down in my example, but rather think to yourself: as developers, we do a lot of repeatable crap (à la having to deal with exceptions after the fact) that has been done the same way since the dawn of time, and until we protest, we will continue to do it! Think about evolution… nothing evolves unless it’s pushed to. OSes and runtimes won’t evolve out of the lazy “eh, something happened in your app, goodbye” into something way cooler unless we push them. WAY COOLER!

Comments (3)

  1. Keith Farmer says:

    Because, in general, the OS can’t rewind operations?

    Sure, you can reset local memory, etc., to a prior state with transactions, but what about operations with some remote or otherwise non-undoable component, such as formatting a disk or directing a robot to cut wood? At that point, the developer *must* provide guidance on how to compensate, if indeed it’s possible to.

    I agree, though, that a better model would be a good research project.

  2. davidlem says:

    Ahh, I agree, the use of compensation to rectify the state of a process or event is definitely not within the realm of an OS/RT (yet), but in the steps of evolution, creating default or standard compensations for well-known faults would be a great start (a rough sketch of that idea follows the comments). It’s not that this stuff wouldn’t be hard; it’s that I think we’re not pushing ourselves to fix fixable problems because human time (when applied to developers) is still seen as cheaper than innovation.

  3. SpoonsJTD says:

    I’d settle for better error handling by the OS and software I build on before it gets to my code (e.g., the ‘Invalid Pointer’ error I recently ran into while using the WMP 11.0 SDK).
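
For what it's worth, here's a rough sketch of the “default compensations” idea from comments 1 and 2, along the lines of the saga pattern. The Step interface and runWithCompensation are hypothetical names, not an existing OS or runtime facility: each step carries a developer-supplied compensating action, and if a later step blows up, the completed steps are compensated in reverse order.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Rough, hypothetical sketch of compensation: for work that cannot simply be
// rolled back in memory (formatting a disk, telling a robot to cut wood),
// the developer registers a compensating action alongside each step, and the
// runtime replays those compensations in reverse order if a step fails.
public class CompensationSketch {

    interface Step {
        void execute() throws Exception;
        void compensate(); // best-effort undo supplied by the developer
    }

    static void runWithCompensation(Step... steps) {
        Deque<Step> completed = new ArrayDeque<>();
        try {
            for (Step step : steps) {
                step.execute();
                completed.push(step);
            }
        } catch (Exception ex) {
            System.err.println("Step failed: " + ex + "; compensating...");
            while (!completed.isEmpty()) {
                completed.pop().compensate(); // reverse order, saga-style
            }
        }
    }

    public static void main(String[] args) {
        runWithCompensation(
            new Step() {
                public void execute() { System.out.println("reserve stock"); }
                public void compensate() { System.out.println("release stock"); }
            },
            new Step() {
                public void execute() throws Exception {
                    throw new Exception("payment gateway timed out");
                }
                public void compensate() { System.out.println("void payment"); }
            }
        );
    }
}
```

The runtime can supply the machinery (ordering, reverse replay), but as Keith points out, only the developer can say what a sensible compensation for a given step actually is.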