Cleaning up Async

There needs to be some concept of cleanup that takes place when an asynchronous request can't be completed. For example, when a service is shut down or a socket is closed, you know that any asynchronous operation waiting on that resource will never produce a result. Something needs to happen to those operations to prevent them from lingering forever.

The asynchronous callback pattern has no built-in concept of cancelling a request that you made previously. Even if you build in cancellation, an asynchronous call will by its very nature race to completion against any action you take. You could examine a request, see that it hasn't completed, attempt to cancel it, and then find that it completed anyway because completion occurred during that same interval of time. Consequently, cleanup has to be initiated by the request itself rather than by the caller.
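To make that race concrete, here is a minimal sketch, not from any particular library; the class and method names are invented for illustration. The request guards a single state transition, and whichever of "complete" or "cancel" takes the transition first wins; the caller's cancel can lose even if it just observed the request as pending.

```python
# Illustrative sketch only: a request with one guarded transition out of
# "pending". The caller's cancel() races against the worker's complete().
import threading

class AsyncRequest:
    def __init__(self):
        self._lock = threading.Lock()
        self._state = "pending"          # pending -> completed | cancelled
        self.result = None

    def _transition(self, new_state, result=None):
        # Only the first transition out of "pending" takes effect.
        with self._lock:
            if self._state != "pending":
                return False
            self._state = new_state
            self.result = result
            return True

    def complete(self, result):
        return self._transition("completed", result)

    def cancel(self):
        # The caller can ask for cancellation, but if the worker has already
        # completed the request, this returns False and the result stands.
        return self._transition("cancelled")


req = AsyncRequest()
# A worker thread finishes the request at some unpredictable time.
threading.Thread(target=lambda: req.complete("data")).start()
# The caller may see the request as pending and still lose the race:
if not req.cancel():
    print("cancel lost the race; result =", req.result)
```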

There are two typical patterns that asynchronous operations implement to perform cleanup. In the fencepost pattern, the operation cleans up by completing successfully but handing back a distinguished value that indicates the operation had no result, such as a null object. In the exception pattern, the operation cleans up by storing an exception caught on the worker thread, signaling completion, and then rethrowing the stored exception on the user's thread when it picks up the result.
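As a rough sketch of the two styles, assuming a hand-rolled completion object (the names here are illustrative, not from any specific framework), one result holder can support both: the fencepost pattern delivers a sentinel "no result" value, while the exception pattern stores the error and rethrows it on the thread that collects the result.

```python
# Illustrative completion object supporting both cleanup patterns.
import threading

class AsyncResult:
    def __init__(self):
        self._done = threading.Event()
        self._value = None
        self._error = None

    # Called on the worker thread.
    def set_result(self, value):
        self._value = value
        self._done.set()

    def set_no_result(self):
        # Fencepost cleanup: complete "successfully" with a sentinel value.
        self._value = None
        self._done.set()

    def set_exception(self, exc):
        # Exception cleanup: stash the error and signal completion.
        self._error = exc
        self._done.set()

    # Called on the user's thread.
    def get(self):
        self._done.wait()
        if self._error is not None:
            raise self._error          # rethrown where the result is picked up
        return self._value             # may be None, meaning "no result"
```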

The fencepost pattern is typically used for expected cases, such as shutting down, whereas the exception pattern is typically used for unexpected cases, such as I/O failures. As an example, consider a service waiting for incoming client connections on a socket. The service keeps several asynchronous accept requests outstanding at a time. When the socket is shut down, those requests need to be cleaned up, and they do so by returning null. If the socket had instead encountered a read error, cleanup would have been done by throwing an exception for each outstanding request.
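The accept scenario might look roughly like the following sketch, under the same assumptions as above: each pending accept is represented by a worker thread that delivers either a connection, None (fencepost cleanup on shutdown), or a stored exception (cleanup after an unexpected failure). The class and method names are invented for illustration, not a real framework API.

```python
# Illustrative accept service: several pending accepts, cleaned up by
# fencepost (None) on shutdown or by a rethrown exception on failure.
import socket
import threading

class AcceptService:
    def __init__(self, port, pending=3):
        self._listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._listener.bind(("127.0.0.1", port))
        self._listener.listen()
        self._listener.settimeout(0.5)          # poll so shutdown is noticed
        self._shutting_down = False
        self.requests = []                      # one (event, box) per pending accept
        for _ in range(pending):                # keep several accepts outstanding
            done, box = threading.Event(), {}
            self.requests.append((done, box))
            threading.Thread(target=self._accept_one, args=(done, box),
                             daemon=True).start()

    def _accept_one(self, done, box):
        try:
            while not self._shutting_down:
                try:
                    box["result"], _ = self._listener.accept()
                    done.set()
                    return
                except socket.timeout:
                    continue                    # no client yet; recheck for shutdown
            box["result"] = None                # fencepost: expected shutdown
        except OSError as exc:
            if self._shutting_down:
                box["result"] = None            # fencepost: close raced with accept
            else:
                box["error"] = exc              # exception: unexpected failure
        done.set()

    def shutdown(self):
        self._shutting_down = True              # pending accepts will deliver None
        self._listener.close()

    def collect(self, request):
        done, box = request
        done.wait()
        if "error" in box:
            raise box["error"]                  # rethrown on the caller's thread
        return box["result"]                    # a connection, or None after shutdown
```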

Next time: Controlling HTTP Connection Limits