Building Async Coordination Primitives, Part 5: AsyncSemaphore

In my last few posts, I covered building an AsyncManualResetEvent, an AsyncAutoResetEvent, an AsyncCountdownEvent, and an AsyncBarrier.  In this post, I’ll cover building an AsyncSemaphore class.

Semaphores have a wide range of applicability.  They’re great for throttling, for protected access to a limited set of resources, and more.  There are two public semaphore types in .NET: Semaphore (which wraps the Win32 equivalent) and SemaphoreSlim (which provides a lightweight counterpart built around Monitors).  Here we’ll build a simple async version, with the following shape:

public class AsyncSemaphore
{
    public AsyncSemaphore(int initialCount);
    public Task WaitAsync();
    public void Release();
}

The member variables we’ll need look almost exactly the same as what was needed for the AsyncAutoResetEvent, and for primarily the same reasons.  We need to be able to wake individual waiters, so we maintain a queue of TaskCompletionSource<bool> instances, one per waiter.  We need to keep track of the semaphore’s current count, so that we know how many waits can complete immediately before we need to start blocking.  And as we’ll see, there’s the potential for a fast path, so we maintain an already completed task that we can use repeatedly when the opportunity arises.

private static readonly Task s_completed = Task.FromResult(true);
private readonly Queue<TaskCompletionSource<bool>> m_waiters = new Queue<TaskCompletionSource<bool>>();
private int m_currentCount;

The constructor will just initialize the count based on the caller’s request:

public AsyncSemaphore(int initialCount)
{
    if (initialCount < 0) throw new ArgumentOutOfRangeException("initialCount");
    m_currentCount = initialCount;
}

Our WaitAsync method will also be reminiscent of the WaitAsync we wrote for AsyncAutoResetEvent.  We take a lock so as to ensure all of the work we do happens atomically and in a synchronized manner with the Release method we’ll write shortly.  Then, there are two possible paths.  If the current count is above zero, there’s still room left in the semaphore, so this wait operation can complete immediately and synchronously; in that case, we decrement the count and we return the cached completed task (this means that waiting on our semaphore in an uncontended manner requires no allocations).  If, however, the current count is 0, then we create a new TaskCompletionSource<bool> for this waiter, add it to the queue, and return its Task to the caller.

public Task WaitAsync()
{
    lock (m_waiters)
    {
        if (m_currentCount > 0)
        {
            --m_currentCount;
            return s_completed;
        }
        else
        {
            var waiter = new TaskCompletionSource<bool>();
            m_waiters.Enqueue(waiter);
            return waiter.Task;
        }
    }
}
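
To see how the two paths play out, here’s a small sketch (hypothetical usage, assuming the class as defined in this post): the first two waits on a semaphore with an initial count of 2 complete synchronously with the cached task, while the third is queued until a Release occurs.

```csharp
var sem = new AsyncSemaphore(2);

Task t1 = sem.WaitAsync(); // count goes 2 -> 1; returns the cached completed task
Task t2 = sem.WaitAsync(); // count goes 1 -> 0; returns the cached completed task
Task t3 = sem.WaitAsync(); // count is 0, so this waiter is queued

Console.WriteLine(t1.IsCompleted); // True (and no allocation occurred for this wait)
Console.WriteLine(t3.IsCompleted); // False (still queued)

sem.Release();                     // dequeues the waiter and completes its task
Console.WriteLine(t3.IsCompleted); // True
```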

The Release side of things will also look very familiar.  If there are any waiters in the queue, we dequeue one and complete its TaskCompletionSource<bool>, but we do so outside of the lock, for the same reason we did so with AsyncAutoResetEvent.  If there aren’t any waiters, then we simply increment the count.

public void Release()
{
    TaskCompletionSource<bool> toRelease = null;
    lock (m_waiters)
    {
        if (m_waiters.Count > 0)
            toRelease = m_waiters.Dequeue();
        else
            ++m_currentCount;
    }
    if (toRelease != null)
        toRelease.SetResult(true);
}

Next time, we’ll take a look at how to use such an AsyncSemaphore to implement a scoped mutual exclusion locking mechanism.