Concurrency, Part 3 - What if you can't avoid the issue?

[Yesterday](https://weblogs.asp.net/larryosterman/archive/2005/02/15/373460.aspx), I talked about how you deal with concurrency issues by simply avoiding them - essentially by ensuring that your data is only ever accessed by a single thread.

But, of course, there are times when this is unavoidable.  For example, if your code is located in a DLL, there's no way of knowing how your caller's going to be calling your code.

If you're a COM component, then you can resolve the issue by marking your component apartment threaded and then checking to see if the thread on which you're being called is the thread on which you were first instantiated (the IWinHttpRequest object does this, for example).  Problem solved :)  Of course, this pushes the problem off to your caller - they're stuck with dealing with the issues of marshalling your calls to the right thread, etc. 

Of course this isn't scalable, and it's really unfriendly, so we need a more friendly solution (and one that's more scalable).  Since concurrent programming is all about protecting your data from being accessed by multiple threads, really all you need to do is ensure that only one thread in your process can access your shared (global) data at a time.

Windows provides a relatively simple mechanism for serializing code execution, EnterCriticalSection and LeaveCriticalSection (yeah, I know you already know that :)).

Again, if you're interested in protecting your data, simply initialize a critical section in your startup, call EnterCriticalSection on every entry, and LeaveCriticalSection on exit.  Problem solved - you won't have to worry about concurrency issues.  Again, you're not going to be scalable, but at least you're not forcing your clients to work overtime.  An example of a component that does this is Exchange's MAPI client DLL (or at least it did when I worked on it; it might have changed since).

But this still isn't scalable.  So the next step is to identify the fields you want to protect and implement a critical section around them.  For example, by default, each Win32 heap has a critical section associated with it.  When you call HeapAlloc(), the heap logic enters the critical section, performs the allocation and leaves the critical section.  Similarly, HeapFree() enters the critical section, performs the free, and leaves the critical section.

For a huge number of scenarios, that's sufficient - you simply identify the data that's going to be protected, wrap it in a critical section, enter the critical section before you access the data, leave the critical section when you're done, and you're good to go.

But there's a caveat to wrapping your data structures with critical sections.  It often doesn't work if you have more than one type of data structure being protected.  In fact, if your code is reasonably sophisticated, you've got a potential problem.

And that's Larry's second principle of concurrency: "Critical sections can be your best friend (unless they're your worst enemy)."  I'll talk about why this is tomorrow.

 

Edit: Added a second principle (I knew I forgot something :))