Threads and Thread Priorities in NETMF


I started this article with the objective of providing a more up-to-date discussion of thread priorities, but I thought I should first cover a little about threads just in case. If you are already conversant with threads, jump ahead to the Thread Priorities section.

Threads are a valuable part of the .NET MF programming model. They are particularly useful in common scenarios like reading input from a sensor or communicating with other devices. With a sensor, for example, it is common to see polling loops in the main program, which are very unwieldy. In communication with a web server or client, queries and responses may arrive at any time, and integrating their handling into everything else your application has to do can make it all very confusing. Enter threads.

Threads are sets of logic that run ‘independently’ from each other. This means that my main program (one thread) executes as if in parallel with the logic of my sensor-monitoring thread and the HTTPListener thread. While threads execute independently, they are not executing at the same time; they are all managed by a scheduler. What the scheduler does is allocate a time slice to each thread to proceed in its execution. In the case of NETMF, the default time slice is 20 mSec, allocated in round-robin order. This means that if I have 2 threads, thread A will potentially get 20 mSec to execute and then be asked to yield, and then thread B gets a potential 20 mSec.

I say that they have a ‘potential’ 20 mSec because any thread that can’t do anything (i.e. is blocked) relinquishes its time back to the scheduler, which gives it to the next thread in line. This has implications for how you write your thread logic. You want to be as parsimonious as possible so that the thread gives up its slice quickly and allows other threads to execute. If you can trigger your sensor-input logic on an event (e.g. a pin going high) rather than looping in your thread to poll it, the thread will not block other processing needlessly. The other beneficial side effect is that, if the scheduler can’t find a thread that needs to run, it can put the processor into a lower power state, preserving your batteries. To give you a practical example, the SPOT watches that were written on an early version of this platform had a continually updated display, needed to be responsive to arbitrary user input, and were in constant communication with the data source to get updates, yet they did all of this on a 2% duty cycle. That means the processor was actually running only 2% of the time. Imagine the impact that had on battery life.
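To illustrate the event-driven approach, here is a minimal sketch using AutoResetEvent from System.Threading (available in both NETMF and desktop .NET). In a real NETMF app an InterruptPort handler would signal the event; here the main thread simulates the interrupt, and Console.WriteLine stands in for Debug.Print so the sketch runs anywhere:

```csharp
using System;
using System.Threading;

public class EventDrivenSensor
{
    // In a real NETMF app a pin-interrupt handler would call Set();
    // here the main thread simulates the interrupt.
    private static readonly AutoResetEvent s_dataReady = new AutoResetEvent(false);
    private static readonly AutoResetEvent s_handled = new AutoResetEvent(false);

    private static void SensorThread()
    {
        for (int reading = 1; reading <= 3; reading++)
        {
            // Blocked here, the thread hands its time slice back to the
            // scheduler instead of burning it in a polling loop.
            s_dataReady.WaitOne();
            Console.WriteLine("handled reading " + reading);
            s_handled.Set();
        }
    }

    public static void Main()
    {
        Thread sensor = new Thread(SensorThread);
        sensor.Start();

        for (int i = 0; i < 3; i++)
        {
            s_dataReady.Set();    // simulate the pin-high event
            s_handled.WaitOne();  // wait for the sensor thread's ack
        }
        sensor.Join();
        Console.WriteLine("done");
    }
}
```

Between signals the sensor thread is blocked, so it consumes none of the other threads' time.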

Let’s look at a simple example. In this example, you see that I have defined two threads and then let the main thread exit. Each thread writes an identifying string to the Output window and then performs some meaningless logic to fill up time. This reduces the number of Debug.Print() calls that actually get executed, making it easier to see what is going on. Executing this logic in the emulator on my machine produces about 10 lines of output per time slice. You would normally use a Timer to spread out the Debug.Print() calls, but when the scheduler sees that a thread is blocked waiting for a Timer callback, it will boot the thread out.

using System.Threading;
using Microsoft.SPOT;
using Microsoft.SPOT.Hardware;

namespace ThreadPriorities
{
    public class Threads
    {
        static void Main()
        {
            Thread thread1 = new Thread(XActivity);
            Thread thread2 = new Thread(YActivity);

            thread1.Start();
            thread2.Start();
        }
        static void XActivity()
        {
            int j;
            while (true)
            {
                Debug.Print("X");
                for (int i = 0; i < 200; i++)
                {
                    j = i * 3;
                }
            }
        }
        static void YActivity()
        {
            int j;
            while (true)
            {
                Debug.Print(" Y");
                for (int i = 0; i < 200; i++)
                {
                    j = i * 3;
                }
            }
        }
    }
}

The output of this program is predictable, but it demonstrates the time slicing of the scheduler:

[Output window screenshot: runs of about 10 "X" lines alternating with runs of about 10 " Y" lines]
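As mentioned above, a Timer is the usual way to space out work like these Debug.Print calls; while a thread is blocked waiting for the timer, the scheduler simply runs something else. Here is a minimal sketch of the idea (System.Threading.Timer exists in both NETMF and desktop .NET; Console.WriteLine stands in for Debug.Print, and the 100 mSec period is an arbitrary choice for illustration):

```csharp
using System;
using System.Threading;

public class TimerSpacing
{
    public static void Main()
    {
        int ticks = 0;

        // Fire the callback after 100 mSec, then every 100 mSec. Between
        // callbacks no work is runnable here, so the scheduler is free to
        // run other threads (or idle the processor).
        Timer timer = new Timer(
            delegate(object state) { Console.WriteLine("tick " + (++ticks)); },
            null, 100, 100);

        Thread.Sleep(550); // let roughly five callbacks fire
        timer.Dispose();
        Console.WriteLine("stopped");
    }
}
```

The exact number of ticks depends on scheduling, which is the point: the timer thread only runs when there is something for it to do.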

Before we leave basic threading, I want to point out another impact that threading has. Suppose a sensor triggers an event (e.g. puts a GPIO pin high) but the thread that handles it is not currently running. The scheduler will get to that thread in the course of its progress through the round-robin list. If there are multiple threads and they all take their full 20 mSec, this can take time. If, for example, you have 5 threads, the worst case approaches 100 mSec before you can respond.

Thread priorities

There are 5 thread priorities supported in NETMF (Lowest, BelowNormal, Normal, AboveNormal, and Highest); Normal is the default. You change a thread's priority by setting its Priority property, as in:

thread.Priority = ThreadPriority.AboveNormal;

Each step up or down doubles (or halves) the potential execution time. This means that if you have two threads and one is AboveNormal, the AboveNormal thread runs about twice as much as the Normal one. Let’s see what that means in our earlier example. I have reduced the priority of thread2:

thread2.Priority = ThreadPriority.BelowNormal;

You can see the impact from the output below:

[Output window screenshot: thread1 now gets roughly two runs of "X" output for each run of " Y" output]

Why would you use this? Remember the example above where we had 5 threads and the worst case was that an interrupt on one thread was not handled for 100 mSec. Now you can make that much better. If you raise the priority of the thread handling that interrupt to, say, Highest, then when the scheduler looks at the thread queue at the completion of the current thread, it is highly likely to run that thread next if it can run (i.e. the interrupt has fired). I can only say highly likely because other thread priorities could interact with this selection. The scheduler actually keeps a dynamic internal priority based not only on the priority you have set but also on how much time the thread has already had recently. This ensures that your high-priority thread, which is now getting lots of interrupts, does not make it impossible for any other thread to ever run.

Let’s look at a more complete example. Here we create a thread for each priority level, count the iterations each one completes, and print the counts to the Output window every 5 seconds. Here is the code:

using System;
using System.Threading;
using Microsoft.SPOT;

namespace ThreadingSample
{
    /// <summary>
    /// Demonstrates various threading priorities of the .NET Micro Framework.
    /// </summary>
    public static class MyThreading
    {
        private static int[] s_IncCount = new int[5];
        private static void Thread1()
        {
            while (true)
            {
                Interlocked.Increment(ref s_IncCount[0]);
            }
        }

        private static void Thread2()
        {
            while (true)
            {
                Interlocked.Increment(ref s_IncCount[1]);
            }
        }

        private static void Thread3()
        {
            while (true)
            {
                Interlocked.Increment(ref s_IncCount[2]);
            }
        }

        private static void Thread4()
        {
            while (true)
            {
                Interlocked.Increment(ref s_IncCount[3]);
            }
        }

        private static void Thread5()
        {
            while (true)
            {
                Interlocked.Increment(ref s_IncCount[4]);
            }
        }

        /// <summary>
        /// The execution entry point.
        /// </summary>
        public static void Main()
        {
            Thread[] threads = new Thread[5];

            threads[0] = new Thread(new ThreadStart(Thread1));
            threads[1] = new Thread(new ThreadStart(Thread2));
            threads[2] = new Thread(new ThreadStart(Thread3));
            threads[3] = new Thread(new ThreadStart(Thread4));
            threads[4] = new Thread(new ThreadStart(Thread5));

            threads[0].Priority = ThreadPriority.Highest;
            threads[1].Priority = ThreadPriority.AboveNormal;
            threads[2].Priority = ThreadPriority.Normal;
            threads[3].Priority = ThreadPriority.BelowNormal;
            threads[4].Priority = ThreadPriority.Lowest;

            int len = threads.Length;
            for (int i = len - 1; i >= 0; i--)
            {
                threads[i].Start();
            }

            while (true)
            {
                Thread.Sleep(5000);
                lock (s_IncCount)
                {
                    for (int i = 0; i < len; i++)
                    {
                        Debug.Print("th " + i.ToString() + ": " + s_IncCount[i]);
                    }
                    Debug.Print("");
                }
            }
        }

    }
}

Here is the output:

[Output window screenshot: each thread's iteration count is roughly double that of the next lower priority]

You can see that each priority gets about twice the time of the one below it to run. Now let’s add a little wrinkle. We will add a sixth thread, also running at the ‘Highest’ level, but this one with a Sleep() of 2 seconds every 100,000 iterations. (To try this, s_IncCount must also grow to six elements, and the new thread must be created, prioritized, and started like the others.)

private static void Thread6()
{
    while (true)
    {
        Interlocked.Increment(ref s_IncCount[5]);

        if (0 == s_IncCount[5] % 100000) // roughly the number of increments in 5 sec
        {
            Thread.Sleep(2000);
        }
    }
}

What would you expect the iteration count for this thread to look like? Having ‘Highest’ priority means that even though it is sleeping for a significant portion of the time, the scheduler will try to make up for it by running the thread as much as possible when it wakes. The results look like this:

[Output window screenshot: iteration counts with the sleeping Highest-priority thread added]

Summary

This has been a very quick look at threading and thread priorities, aimed mainly at letting you know that they are there and basically how they work. There are complications to threading that make it one of the more challenging (and interesting) parts of programming small devices. These complications include things like deadlock, starvation, livelock, and race conditions. You may have noticed, for example, that we used the Interlocked class when we updated the counters in the worker threads and a lock when we read the counters in the main thread. Since the counter array is a shared resource, there is the possibility of conflict when several threads try to access it at the same time. So there is more to know about threads, but it is not specific to the .NET Micro Framework, and there are a number of good sources for that information.
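To make the shared-resource point concrete, here is a small sketch (plain System.Threading, runnable on desktop .NET; the thread and iteration counts are arbitrary). Interlocked.Increment performs the read, add, and write as one atomic step, so no update is ever lost; a plain s_count++ is three separate operations and concurrent threads can overwrite each other's updates:

```csharp
using System;
using System.Threading;

public class SharedCounter
{
    private static int s_count = 0;

    public static void Main()
    {
        Thread[] workers = new Thread[4];
        for (int t = 0; t < workers.Length; t++)
        {
            workers[t] = new Thread(delegate()
            {
                for (int i = 0; i < 100000; i++)
                {
                    // Atomic read-modify-write on the shared counter.
                    Interlocked.Increment(ref s_count);
                }
            });
            workers[t].Start();
        }
        foreach (Thread w in workers) w.Join();

        Console.WriteLine("count = " + s_count); // always 400000
    }
}
```

Run with s_count++ instead, and on most systems the final count comes up short by however many increments were lost to races.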


Comments (14)

  1. Dimps says:

    Thank you for this post!

  2. NETMF Team Bloggers says:

    Certainly – please let me know if there are other items that you want us to cover.  I'm happy to write up whatever will be useful.  

  3. Dimps says:

    Well, just keep doing what you are doing. Every post I've read so far was great.

    I'm new to netmf and i want to know a lot of things, but it would be better for me to read manuals first.

    Currently most important thing for me is to be able to port netmf to any microcontroller / processor i choose.

    By the way, your "official info" at http://www.microsoft.com/…/switch.mspx was almost enough for me to turn my back to netmf (it says i have to pay money for every device i make, my boss would never accept that). But porting kit readme clearly states you do not have to pay.

  4. NETMF Team Bloggers says:

    Sorry for the confusion.  I am trying to get that page removed.

  5. TheJoke says:

    Great. Do you plan a follow on article on inter thread communication and thread synchronisation? You should, sure!

  6. NETMF Team Bloggers says:

    OK – I'll put that on the stack.  Good suggestion!!

  7. William says:

    Thanks:-) Is "mSec" millisecond or microsecond?  Some info on mem model and cpu cache issues in terms of races and x = x + 1 problem and double checked locking, etc. Compare and contrast to big .Net.  Any differences? Same?   Even with a single cpu, what are the things to watch for in terms any edge cases on NETMF?  TIA

  8. NETMF Team Bloggers says:

    Hi William,

      Yes mSec is milliseconds.  I am working on another installment of the threads discussion – I will try to address your questions there.  Thx.

  9. Brandon Grossutti says:

    Colin,

    we're working on a .netmf showcase project and wondering if you could email me at my firstname.lastname@gmail.com

  10. Thanks for posting. Very well written.  You should write a .NETMF book.

  11. NETMF Team Bloggers says:

    If only there were time.  :-)

    If there is another topic (or topics) that you would find useful for me to cover, please let me know and I'll get them out.

    Colin

  12. Chris says:

    Thanks for this, Colin.  I'm not quite following the second example, so I have a couple questions based on the first example:  

    1)  Quote: "…if you have two threads and one is AboveNormal, then the outcome is that the Above Normal thread is run about twice as much as the Normal."  Is that twice as *long* (looks so in the output) , or twice as *often*?    

    2)  When you reduced Thread2's priority, it appears (based on your original "…10 times for each time slice…") that Thread1 got *more* time, rather than reducing Thread2's time.  Am I seeing things?

  13. NETMF Team Bloggers says:

    Hi Chris,

        The question of twice as long or twice as often are closely linked.  The size of the time slice allocation (the quantum) always remains the same in the system.  At the end of each quantum (or when the thread can no longer run) the thread manager looks at what thread to run next.  It may be the same one (running it twice as often) which effectively makes it run twice as long.  

      In the second question, since I get about 10 cycles in a quantum, that is the minimum that any thread will get (again, unless it is blocked for some reason).  So in that example, reducing the priority of Thread2 makes it less likely that it will be selected to run the next time.  In the case above, it looks like Thread1 was run for two quanta and Thread2 for one.  

    Does that help?

  14. origin says:

    Thanks very much for this, very well written, very helpful…I have some code to rewrite 😉