Thread Communication and Synchronization

After the last blog on Threads and Thread Priorities, readers asked for a continuation on Thread Communication and Synchronization. Before I jump in, I have to reiterate that concurrency is a complex topic. I won't try to compete with the technical books on the subject that run to hundreds of pages. Instead, I will focus on some basic examples of what is available in NETMF to support the concepts introduced in books like those. That means you can take what you learn here and still get into trouble if you are not careful.

Shared Resources – Locking Critical Sections

The most common interaction between threads is through shared resources. We can define a static resource that several threads interact with. Here is a simple example with the main thread and one additional thread, both of which use a counter (s_IncCount). The main thread prints it out, and the worker thread increments it 5 times and then checks its work.

private static int s_IncCount = 0;

public static void Main()
{
    Thread thread1 = new Thread(Thread1);
    thread1.Start();

    while (true)
    {
        Thread.Sleep(5000);
        Debug.Print("Thread Execution Count = " + s_IncCount);
    }
}

private static void Thread1()
{
    while (true)
    {
        int j = 0;
        for (int i = 0; i < 5; i++)
        {
            s_IncCount++;
            for (int ii = 0; ii < 1000; ii++)
            {
                j = s_IncCount % 5;
            }
        }
        if (j != 0)
        {
            Debug.Print("s_IncCount % 5 is: " + j.ToString());
        }
    }
}

The output of the execution of this application shows that we are getting slightly fewer than 700 increments every 5 seconds. We will use this measurement to see what the overhead of the thread synchronization mechanisms is. Also notice that the worker thread checks that exactly 5 increments have been done, and there are no instances where we see that error message. That doesn't mean there is no interaction between the two threads, but since only one of them writes to the shared resource (s_IncCount), the interactions don't have any side-effects.

[Screenshot: debug output showing slightly fewer than 700 increments every 5 seconds]

Now let’s add a second thread. This one exactly matches the first worker thread.

private static void Thread2()
{
    while (true)
    {
        int j = 0;
        for (int i = 0; i < 5; i++)
        {
            s_IncCount++;
            for (int ii = 0; ii < 1000; ii++)
            {
                j = s_IncCount % 5;
            }
        }
        if (j != 0)
        {
            Debug.Print("s_IncCount % 5 is: " + j.ToString());
        }
    }
}

We start both threads in the main thread.

Thread thread1 = new Thread(Thread1);
Thread thread2 = new Thread(Thread2);
thread1.Start();
thread2.Start();

When we run these, we immediately see that both threads are reporting problems. Thread 1 is being interrupted in the middle of making its 5 increments, and then Thread 2 picks up from there and runs some unknown number of times. The result can be seen below.

[Screenshot: debug output showing both threads reporting nonzero s_IncCount % 5 values]

Even with three threads competing for time slices, the total throughput (increments/sec) does not change. Thread swapping is very low overhead in NETMF.

So, what we are seeing is one of the classic pitfalls of a multithreaded environment, called a race condition. That means that the results of your program are determined by two threads racing against each other to complete their tasks. That is clearly not what you want. For our program to function correctly, we need the portions of each thread that work on the shared resource to complete entirely (i.e. increment all 5 times and then swap out). Stated another way, those lines of code need to execute 'atomically', as if they were one undisturbed machine instruction. Let's look at the support NETMF has for this.

First, there is the Interlocked class that you saw in the previous article. The Interlocked class lets you perform a small, prescribed set of operations on a resource atomically; the supported operations are the methods of the class. The syntax looks like this:

Interlocked.Increment(ref s_IncCount);

You might not think you even need this lock; isn't a single line of code atomic by itself? In fact, it isn't. To do the increment, the value is copied from the variable into a register, incremented there, and then copied back to the variable in three separate instructions, and the thread can be swapped out anywhere during that time.
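
As a point of reference, here is a sketch of the kind of operations Interlocked supports, applied to our counter. Each call is atomic on its own; note that Interlocked by itself would not make our group of five increments atomic.

Interlocked.Increment(ref s_IncCount);      // atomic s_IncCount++
Interlocked.Decrement(ref s_IncCount);      // atomic s_IncCount--

// Atomically swap in a new value and get the old one back:
int previous = Interlocked.Exchange(ref s_IncCount, 0);

// Atomically set the counter to 100, but only if it is still 0:
Interlocked.CompareExchange(ref s_IncCount, 100, 0);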

What if you need to do more than the Interlocked class supports (a more complex operation on the resource, or operations on several resources) and you need it all to complete before anyone else can touch those resources? That is where the lock mechanism comes in. In our example, we want to execute a number of steps on the counter and test what we did before we let anyone else have access to it. To do this, I first define an object that will be used to identify the lock:

private static object s_IncLock = new object();

Now I add a lock that looks like this in each of our threads:

private static void Thread1()
{
    while (true)
    {
        int j = 0;
        lock (s_IncLock)
        {
            Debug.Print("Lock 1");
            for (int i = 0; i < 5; i++)
            {
                s_IncCount++;
                for (int ii = 0; ii < 1000; ii++)
                {
                    j = s_IncCount % 5;
                }
            }
            if (j != 0)
            {
                Debug.Print("s_IncCount % 5 is: " + j.ToString());
            }
        }
    }
}

With these locks in place, the race condition is gone and the error message no longer shows up.

There is an interesting interaction between the lock and the thread scheduler. Without the lock, each thread would execute for its 20 ms time slice before yielding, which means many iterations of the while loop per slice. The thread scheduler, however, takes locks into consideration in the following way: when a thread asks for a lock, the scheduler checks whether anyone is waiting for that lock, and priority goes to the thread that has been waiting the longest. As a result, the two threads we have defined end up taking the lock one time each and swapping back and forth.

The output looks like this:

[Screenshot: debug output with the two threads alternating through the lock on each iteration]

Swapping out on every iteration through the while loop still does not change the number of increments in a measurable way. You can see that locking and thread swapping are REALLY low overhead.


MethodImplOptions Attribute

There is a way to lock an entire method, as opposed to a critical section within a method: the MethodImpl attribute with the MethodImplOptions.Synchronized flag. The syntax looks like this:

[MethodImpl(MethodImplOptions.Synchronized)]
public void MySynchronizedMethod()
{
    // ...
}

This locks the method so that only one thread can execute it at a time. If one thread is inside the method when it is swapped out and another thread tries to call that method, the second thread blocks.
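
As a sketch, our five-increment critical section could be wrapped in a synchronized method like this (this assumes a using System.Runtime.CompilerServices; directive; IncrementFiveTimes is a name of my own):

[MethodImpl(MethodImplOptions.Synchronized)]
private static void IncrementFiveTimes()
{
    // Only one thread at a time can be inside this method, so the
    // five increments complete as a unit.
    for (int i = 0; i < 5; i++)
    {
        s_IncCount++;
    }
}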


Monitor Class

Before we leave locks, it is worth mentioning that NETMF supports the Monitor class, which does effectively the same thing as the lock mechanism: it defines a critical section and blocks other threads from entering that section while there is a lock on it. Monitors are a little more flexible because they are not limited to using braces to define scope. With Monitors, you can exit the critical section anywhere, possibly on different paths of execution. This flexibility comes with more risk, since it is easier to end up with a path of execution where Exit(object) is never called, or where Exit(object) is called more times than Enter(object).
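
Here is a sketch of our earlier lock block rewritten with Monitor, using try/finally to guard against exactly that risk:

Monitor.Enter(s_IncLock);
try
{
    for (int i = 0; i < 5; i++)
    {
        s_IncCount++;
    }
}
finally
{
    // Every path out of the critical section releases the lock exactly once.
    Monitor.Exit(s_IncLock);
}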

The NETMF Monitor class is simpler than the full .NET Monitor implementation in that it lacks event wait handles. For those, you have to use the mechanisms described next.


Thread Synchronization – Events

At times, you want to synchronize threads directly, i.e. run this thread when another thread enters some specific state. There are two ways to do this: one general and one used specifically for UI interactions. First, let's look at events.

Events are the same mechanism that is available in the full .NET Framework, with some minor differences. You can define an event that will control the execution of a thread. The full .NET Framework has a richer semantic that supports waiting for one event, for any one of several events, or for all of several events to occur. In NETMF, we only support waiting for a single event. First you define an event, something like this:

static AutoResetEvent autoEvent = new AutoResetEvent(false);

This sets up the event in an initial unsignaled (false) state.

Now let’s add a third thread to our sample application that looks like this:

private static void Thread3()
{
    Debug.Print("Waiting for 10000 iterations");
    autoEvent.WaitOne();
    Debug.Print("Finally got the 10000 iterations");
}

This thread is entered on its Start() call, prints out the message that it is waiting for 10000 iterations, and then enters a suspended state until the event is signaled. In the other threads, we add a test after we increment:

if (s_IncCount >= 10000)
{
    autoEvent.Set();
}

When the event is Set(), the third thread is unblocked and the scheduler will pick it up quickly since it hasn’t run in some time. Thread3 writes out its second message and then exits and the other threads continue as expected.

The semantics of the AutoResetEvent class are that when it is unsignaled (false), it blocks any thread calling the WaitOne() method on that event. Once the event is signaled by someone calling the Set() method, the AutoResetEvent unblocks a waiting thread and then automatically returns to the unsignaled state, blocking once again. The ManualResetEvent supports a slightly different set of behaviors. The ManualResetEvent also unblocks threads that have a WaitOne() call pending when the event is Set(). However, it does not automatically go back into an unsignaled state once it unblocks the threads; it stays signaled until its Reset() method is called. That means that, until then, all calls to WaitOne() return immediately.
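
A quick sketch of the ManualResetEvent behavior described above (the field name is mine):

static ManualResetEvent manualEvent = new ManualResetEvent(false);

// In a waiting thread - blocks while the event is unsignaled:
manualEvent.WaitOne();

// In a controlling thread:
manualEvent.Set();    // releases all waiters and stays signaled
manualEvent.Reset();  // back to the unsignaled (blocking) state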


Thread Synchronization with the Dispatcher

If you have created a graphical UI for your application using NETMF, then you actually have another thread that is created by the CLR: the Dispatcher thread, which coordinates access to all of the UI elements in a thread-safe manner. The fact that you now have at least two threads means that you have to manage access to anything that might be a shared resource. Your UI logic shares the UI elements with this CLR-created thread, which is actually rendering them, and since there is no way to define a lock that the Dispatcher thread knows about, there is a different way to safely update the UI resources.

The way to synchronize the two threads is to not execute the code that interacts with the UI elements on your thread, but to execute it on the Dispatcher thread instead. To do this, you define a delegate that the Dispatcher will call back, you implement the delegate, and you call Dispatcher.BeginInvoke when you want the Dispatcher to invoke your method. (There is also a Dispatcher.Invoke call that is synchronous, so you need to define a maximum amount of time you will allow that call to wait for the delegate to be called.) There is a previous blog post specifically on this topic, Using the Dispatcher, which covers the Dispatcher and DispatcherTimer.
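
As a rough sketch of the pattern (the Text element statusText and the UpdateStatus callback are placeholders of mine, assuming a NETMF Presentation application where statusText was created on the Dispatcher thread):

private static Text statusText;

// This runs on the Dispatcher thread, so touching UI elements here is safe.
private static object UpdateStatus(object arg)
{
    statusText.TextContent = (string)arg;
    return null;
}

// From a worker thread, queue the update rather than touching the UI directly:
statusText.Dispatcher.BeginInvoke(
    new DispatcherOperationCallback(UpdateStatus),
    "Count = " + s_IncCount);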


Summary

This is a rather long blog post, but it is also a fairly complex subject. We have covered Interlocked, locks, MethodImplOptions, Monitors, events, and (indirectly) synchronizing with the Dispatcher. Hopefully this is helpful.

Technorati Tags: .NET Micro Framework,NETMF,Threads,Thread Communication,Thread Synchronization,Locks,InterLocked,Monitor Class,Dispatcher,AutoResetEvent,ManualResetEvent