Lock-free many-producer/single-consumer patterns: A work queue of distinct events, FIFO


At last, our long national nightmare is over: The end of the miniseries on lock-free many-producer/single-consumer patterns. We'll finish up with the case where there are many producers generating work, and one consumer performing the work, and the work items must be processed in the order they were submitted.

We can build on the algorithm we used last time, when the order of processing was not important. We just have to remember to process the work items FIFO rather than LIFO.

SLIST_HEADER WorkQueue;

struct alignas(MEMORY_ALLOCATION_ALIGNMENT)
WorkItem : SLIST_ENTRY
{
    ... other stuff ...
};

void Initialize()
{
 InitializeSListHead(&WorkQueue);
}

void RequestWork(WorkItem* work)
{
 if (InterlockedPushEntrySList(&WorkQueue, work) == nullptr) {
  // You provide the WakeConsumer() function.
  WakeConsumer();
 }
}

// You call this function when the consumer receives the
// signal raised by WakeConsumer().
void ConsumeWork()
{
 PSLIST_ENTRY entry = InterlockedFlushSList(&WorkQueue);
 entry = ReverseLinkedList(entry);
 while (entry != nullptr) {
  auto item = static_cast<WorkItem*>(entry);
  entry = entry->Next; // advance before the current item is destroyed
  ProcessWorkItem(item);
  delete item;
 }
}

I leave the ReverseLinkedList function as an exercise. (Answer.)

The conditions of the exercise are that the order of operations is important, but the linked list is LIFO. We solve this by detaching the list from the WorkQueue, so that it can no longer be affected by RequestWork, and then we reverse the (now-private) list, thereby converting it from LIFO to FIFO, at which point we can process it.
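One possible implementation, as a minimal sketch (not the answer linked above, and assuming the standard SLIST_ENTRY layout from <windows.h>, where each entry carries a Next pointer): walk the detached list and push each entry onto the front of a new list.

PSLIST_ENTRY ReverseLinkedList(PSLIST_ENTRY entry)
{
 PSLIST_ENTRY reversed = nullptr;
 while (entry != nullptr) {
  PSLIST_ENTRY next = entry->Next; // remember the rest of the list
  entry->Next = reversed;          // push this entry onto the reversed list
  reversed = entry;
  entry = next;
 }
 return reversed;
}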

Any new work that gets queued up will be added to the (now-empty) WorkQueue. The first such new work will cause WakeConsumer to be called, upon which a new cycle will begin as soon as the current ConsumeWork finishes.
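As a minimal sketch of how the pieces might be wired together (the ConsumerWakeEvent handle and ConsumerThreadProc names are illustrative, not part of the pattern itself), WakeConsumer could set an auto-reset Win32 event that a dedicated consumer thread waits on:

// Illustrative only: one way to connect WakeConsumer to the consumer.
// ConsumerWakeEvent is assumed to be an auto-reset event created with
// CreateEvent(nullptr, FALSE, FALSE, nullptr).
HANDLE ConsumerWakeEvent;

void WakeConsumer()
{
 SetEvent(ConsumerWakeEvent);
}

DWORD WINAPI ConsumerThreadProc(void*)
{
 for (;;) {
  WaitForSingleObject(ConsumerWakeEvent, INFINITE);
  ConsumeWork(); // drains everything queued so far, in FIFO order
 }
 return 0;
}

Because the event is auto-reset, a wake that arrives while the consumer is already draining the list simply starts another pass once the current ConsumeWork returns.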

That's all I have on the topic of lock-free many-producer/single-consumer patterns. I hope you weren't too bored.

Comments (4)
  1. Ken in NH says:

    It looks like your while loop is missing something. Is this what you meant?

    PSLIST_ENTRY completedEntry = null;
    while (entry != null) {
     ProcessWorkItem(static_cast<WorkItem*>(entry));
     completedEntry = entry;
     entry = completedEntry->next;
     delete completedEntry;
    }

  2. Clockwork-Muse says:

    Although with the current design, if more work is queued while some is being processed, it’s going to take a new invocation of ConsumeWork to process the new stuff. This is in contrast to the earlier examples, which would process further queued work in the same invocation, and only signal to wake (or, well, it might have been processing the last one). It operates under the same constraints as the bonus chatter from two days ago.

  3. David Haim says:

    So, is this how the Win32 work queue works behind the scenes?
    I always wondered what makes the enqueuing of tasks via SubmitThreadPoolWork so fast.

    Also, this code is not exception safe (although it utilizes C++)

  4. schroedl says:

    Bored? No. Loved this series.

