StackOverflow answer – why learn multi-core programming?

I must admit, I’m addicted to Stack Overflow.  It’s a great site, being both interesting and easy to use.

Recently, I ran across this question, “Are you concerned about multicore?”.  HenryR, a PhD candidate at Cambridge, asks whether the “developer on the street” needs to concern him/herself with multi-core development practices.

Henry’s question has a few answers, including the accepted one from dmckee (a particle physicist), which I’ll focus on here.

There are many types of programs, such as

  • A simple script one might write to do some work and then throw away,
  • High-performance, highly parallel scientific applications,
  • Mainstream operating systems,
  • Line-of-business applications,
  • Web-based apps,
  • Command-line applications,
  • Applications with graphical user interfaces.

This is simply the tip of the iceberg – I’m confident that with a little brainstorming, this list would become quite large.

Dmckee’s answer is correct – but only for a small set of things.  Indeed, for a very simple application that is not CPU bound, making it multi-threaded may be more work than it’s worth.  For example, I have utilities that are not multi-threaded.

However, even utilities can take advantage of multi-threading, and I argue that it’s de rigueur for a production program to do so.

The astute reader may say, “But wait a minute!  Henry asked about multi-core, not multi-threading…”.

Yup, but to leverage multiple cores, you need to write multi-threaded programs.  Using more than one thread has advantages beyond parallelizing CPU-bound operations for better throughput.

To pre-fetch the next comment: “But if it’s not CPU bound, then why bother with multi-threading?  That’s just a waste of time!”

In my mind, there are three reasons for this:

  1. It can make your program more responsive from a user perspective.
  2. Even for things that are not CPU bound, it makes sense to do them in parallel.
  3. It can break up complex single threaded state machines into simpler, more procedural code.

In Windows, the user interface is message based: Windows sends a UI program a message for each UI event the program needs to handle.  A program with any kind of graphical UI uses the GetMessage() family of functions to retrieve them.  I don’t know the details for Linux or OS X, but I suspect they are similar in this regard.

This is conventionally done in a message loop, where a program loops on getting and then processing messages.  If there are no messages, the program can wait (block) for a new message, or do some background or idle processing.
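The shape of that loop can be sketched in a few lines.  Here’s a minimal, purely illustrative sketch in Python – the message names and the `handle()` function are hypothetical stand-ins, not a real windowing API (Windows itself uses GetMessage()/DispatchMessage()):

```python
import queue

# Stand-in for the OS-managed message queue of a UI thread.
messages = queue.Queue()

handled = []

def handle(msg):
    # In a real program this would dispatch to a window procedure;
    # here we just record which messages were processed.
    handled.append(msg)

# Pretend the OS has queued a couple of UI events, then a quit request.
for msg in ("WM_PAINT", "WM_KEYDOWN", "QUIT"):
    messages.put(msg)

# The classic get-then-dispatch message loop.
while True:
    msg = messages.get()   # blocks if no message is waiting
    if msg == "QUIT":
        break
    handle(msg)
```

Every iteration must return to `messages.get()` quickly – that single fact drives everything that follows.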

Messages include all kinds of things, but most importantly user input – mouse moves and clicks, and keyboard events.  These are often translated into windowing operations (moves, resizes, etc.) and control actions (button clicks, scrolling, etc.).  In short, the code handling the UI is one big state machine, and its state is not kept on the stack – it has to be kept explicitly somewhere else.

This has an important implication: for an application to remain responsive, it must be able to quickly and consistently service the message loop.  This is done on a single thread – indeed, many applications have only a single thread that does everything.

Windows programs have been written this way for a long time.  Some people might argue that this is a design deficiency.  That’s a topic for a different discussion about application compatibility – do remember, this pattern was set back in the very early ’90s: Windows 3.1 was released in 1992.

In any case, delays in the message loop that the user can perceive are called hangs.  There are two ways to keep the message loop from hanging.

  1. Break up all operations into pieces that execute fast enough that the message loop can always be serviced promptly.   Indeed, lots of developers have tackled the problem just this way.
  2. Move longer-running operations to another thread, or threads.

The short story is that only #2 works effectively.  Why?  Simply put: I/O – specifically disk and network I/O, though other I/O, such as GPU operations, can also be an issue.

The problem is that any disk or network I/O operation can potentially be long – hundreds or thousands of milliseconds.  Programs that do their file and network I/O on the UI thread will hang.  It is unavoidable.
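To make approach #2 concrete, here’s a small sketch in Python – again with made-up message names, not a real Windows API – showing the slow read running on a worker thread that posts a completion message back to the loop, so the UI thread keeps pumping messages while the I/O is in flight:

```python
import queue
import threading
import time

# Stand-in for the UI thread's message queue.
messages = queue.Queue()

def slow_fetch():
    # Stand-in for a disk or network read that takes hundreds of ms.
    time.sleep(0.2)
    # Post the result back to the message loop instead of blocking it.
    messages.put(("FETCH_DONE", "payload"))

# Kick off the long operation on a worker thread...
threading.Thread(target=slow_fetch, daemon=True).start()

# ...while the "UI thread" keeps handling ordinary messages meanwhile.
messages.put(("WM_PAINT", None))

log = []
while True:
    kind, data = messages.get()   # the loop is serviced promptly throughout
    log.append(kind)
    if kind == "FETCH_DONE":
        break
```

The paint message is processed immediately; the fetch result simply arrives as one more message when it’s ready.  Had `slow_fetch()` run inline on the UI thread, nothing would have been repainted for the duration of the read.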

Some may point out: “Hey!  That doesn’t matter – making it multi-threaded doesn’t speed things up!”.

Right…. and wrong…

Right in the sense that it doesn’t increase an application’s throughput – the work still has to get done.   But wrong in the sense that it absolutely, positively benefits the user.

So, for point #1, multi-threading can make your program more responsive from a user perspective because you can move potentially long-running operations off the UI thread, keeping the UI “alive” and responsive.   Trust me, your users will love you for this.

In my next post, I’ll discuss the source code for a C# program that shows how simple it is to move operations off the UI thread in WPF applications.   This doesn’t require any intricate knowledge of fine-grained synchronization or complex multi-threaded programming.   The code is also easy to understand, not state-machine driven, and easy to debug – all things conventional wisdom holds aren’t possible in multi-threaded programming.

In future posts, I’ll talk more about the example source above, points #2 and #3, and some general multi-core and multi-threaded topics.

In summary – multi-core and multi-threaded programming is about much more than simply speeding up (parallelizing) compute-bound operations.  Efficient threading can bring other benefits to your programs as well.
