Intel supporting Concurrency Runtime

Intel just disclosed that they intend to support the runtime that our team is working on here at Microsoft, the Concurrency Runtime.
Now that the news is out, it seems useful to provide at least two cents' worth of our perspective on what the runtime is and what it's for.

Here's a link to an interview with Intel's James Reinders on their plans and other things:

https://www.devx.com/go-parallel/Article/38914

Think about what James says in the article:

Right now each programming model asks "how many cores are there?" and then tries to use them all.

There's a big issue right there: each application, each process, each parallel library typically acts as if it owns the whole machine. Many apps start out by querying the OS for the number of processors and then spin up that many threads, thinking that this is efficient. Imagine a library doing this, and imagine using several such libraries together…
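
To make that concrete, here's what the pattern typically looks like in plain Win32 (an illustrative sketch of my own, not code from the article): a "parallel library" asks the OS how many processors there are and creates one thread per processor. Put two such libraries in one process and you get twice as many threads as cores, all competing for the same processors.

    #include <windows.h>
    #include <vector>

    // The worker routine for one of the library's threads.
    DWORD WINAPI LibraryWorker(LPVOID)
    {
        // ... this library's share of the work ...
        return 0;
    }

    void RunParallelLibrary()
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);   // "how many cores are there?"

        // One thread per processor, as if nobody else needed them.
        std::vector<HANDLE> threads;
        for (DWORD i = 0; i < si.dwNumberOfProcessors; ++i)
            threads.push_back(CreateThread(NULL, 0, LibraryWorker, NULL, 0, NULL));

        WaitForMultipleObjects((DWORD)threads.size(), &threads[0], TRUE, INFINITE);
        for (size_t i = 0; i < threads.size(); ++i)
            CloseHandle(threads[i]);
    }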

The OS, of course, has no choice but to create these threads and let them share the processors. Being the general scheduler it is, the OS is going to try to be fair and give every thread a little time to run: it goes around the room until every thread has had its turn, and then starts over.

A thread pool is of great use in this situation, because it throttles the number of threads in use and spreads the work evenly across them. General-purpose thread pools typically try to be very fair (for all the right reasons) and serve work items in the order they were submitted. Absent any application-specific information, that is the reasonable thing to do.
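
That's roughly what happens when you queue work items to the process-wide Win32 thread pool rather than creating threads yourself. A rough sketch (again my own, with a deliberately crude completion wait):

    #include <windows.h>

    // One queued work item; the pool picks the thread it runs on.
    DWORD WINAPI WorkItem(LPVOID context)
    {
        volatile LONG* remaining = (volatile LONG*)context;
        // ... do one piece of work ...
        InterlockedDecrement(remaining);
        return 0;
    }

    int main()
    {
        const int itemCount = 100;
        volatile LONG remaining = itemCount;

        // The pool throttles the number of threads and, broadly speaking,
        // services items in the order they were queued.
        for (int i = 0; i < itemCount; ++i)
            QueueUserWorkItem(WorkItem, (PVOID)&remaining, WT_EXECUTEDEFAULT);

        while (remaining > 0)   // crude wait, for illustration only
            Sleep(10);
        return 0;
    }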

There are many situations where this is neither necessary nor desirable. Many programming models don't care about global ordering; they just want all the work to finish as soon as possible, without spending too much time loading pages or cache lines. When you don't care about global fairness, the overall work can often be done more efficiently if you allow a task to run on the same core as the task that created it, since their working sets are likely to overlap and still be in cache.
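
One common way to get that effect (a simplified sketch of the general idea, not the Concurrency Runtime's implementation) is to give every worker its own task queue: tasks a worker spawns go onto its own queue and are taken newest-first, and a worker only steals, from the oldest end of somebody else's queue, when it has run out of local work.

    #include <deque>
    #include <functional>
    #include <mutex>
    #include <vector>

    // One queue per worker (think: per core). Back = newest, front = oldest.
    struct Worker
    {
        std::deque<std::function<void()>> tasks;
        std::mutex lock;
    };

    std::vector<Worker> workers(4);   // pretend we were granted four cores

    // A running task calls this to create a child task; the child stays on
    // its creator's queue.
    void Spawn(size_t self, std::function<void()> task)
    {
        std::lock_guard<std::mutex> guard(workers[self].lock);
        workers[self].tasks.push_back(std::move(task));
    }

    // The scheduling loop of worker 'self': run one task if any can be found.
    bool RunOne(size_t self)
    {
        std::function<void()> task;

        // Unfair but cache-friendly: take our own newest task first, since
        // its working set probably overlaps whatever we just touched.
        {
            std::lock_guard<std::mutex> guard(workers[self].lock);
            if (!workers[self].tasks.empty())
            {
                task = std::move(workers[self].tasks.back());
                workers[self].tasks.pop_back();
            }
        }

        // Only when we're out of local work do we steal, and then we take the
        // victim's oldest task, the one least likely to still be in its cache.
        for (size_t victim = 0; !task && victim < workers.size(); ++victim)
        {
            if (victim == self) continue;
            std::lock_guard<std::mutex> guard(workers[victim].lock);
            if (!workers[victim].tasks.empty())
            {
                task = std::move(workers[victim].tasks.front());
                workers[victim].tasks.pop_front();
            }
        }

        if (!task) return false;
        task();   // run outside any lock
        return true;
    }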

What we're busy building is a runtime whose lowest-level component, the resource manager, cooperatively shares processor resources (cores) among libraries implementing various concurrency-related programming models. On top of it we provide a component called the scheduler, which acts as a user-mode thread pool with both fair (FIFO) and unfair scheduling queues.
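
To give a flavor of what programming against the scheduler can look like, here's a minimal sketch written against the Concurrency namespace in <concrt.h> (SchedulerPolicy, CurrentScheduler, event). The runtime is still pre-release, so treat the exact names and signatures as illustrative rather than final.

    #include <concrt.h>
    using namespace Concurrency;

    // A light-weight task is just a function pointer plus a context pointer.
    void __cdecl MyTask(void* data)
    {
        event* done = static_cast<event*>(data);
        // ... do a chunk of work on a scheduler-owned thread ...
        done->set();
    }

    int main()
    {
        // Instead of grabbing every core on the machine, describe what we
        // need and let the resource manager hand out cores: here, anywhere
        // between one and four.
        SchedulerPolicy policy(2, MinConcurrency, 1, MaxConcurrency, 4);
        CurrentScheduler::Create(policy);

        event done;
        CurrentScheduler::ScheduleTask(MyTask, &done);   // enters the scheduler's queues
        done.wait();                                     // blocks cooperatively

        CurrentScheduler::Detach();
        return 0;
    }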

The goal of the Concurrency Runtime is to provide common infrastructure for a variety of native and managed libraries and languages that help solve the many-core challenge without creating technology silos. Microsoft, our partners, third parties, and even adventurous application developers can build and integrate libraries on top of either the resource manager or the scheduler.

If you're going to PDC in October, I have a session on the runtime and how to build libraries on top of it. I hope to see you there; it'll be a lot of fun (in a geeky kind of way...).