No, I am not referring to the times when certain runtime environments stalled because they measured CPU speed by counting ticks within a fixed interval and overflowed the counter...
I am referring to an industry workshop run by Purdue University (see here http://www.cri.purdue.edu/industryHPCWorkshop.cfm) on High Performance Computing, or HPC.
The trend towards multicore processors is more than obvious. Today we usually talk about dual or quad core, but the researchers behind this talk about more than a hundred cores in one box... not in a century, but something like three years from now.
So it is about time to think about how to optimize your applications for that. We recently prepared (and now deliver) a workshop on HPC throughout Germany (see http://www.microsoft.com/germany/technet/seminare/2007/windows-compute-cluster-server.mspx). You could argue that your client app does not need to benefit from Compute Cluster, but the underlying technologies are good to know even on a client: if you want to benefit from multicore, you are in the same game.
In my opinion we have to make it easier to develop applications in that space. Sorry, but it is too much for my brain to take care of every possible race condition out there. So what could help?
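To illustrate what my brain refuses to track at scale, here is a minimal sketch (my own Python illustration, not workshop material) of the classic lost-update race: `counter += 1` is a read-modify-write, so without a lock two threads can both read the old value and one increment gets lost.

```python
import threading

# Shared counter and a lock protecting it.
counter = 0
lock = threading.Lock()

def safe_increment(n):
    """Increment the shared counter n times, holding the lock each time."""
    global counter
    for _ in range(n):
        with lock:  # remove this, and increments can be lost under contention
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # with the lock, always 400000
```

The fix is one line here, but every shared variable in a real application needs the same discipline, and the compiler will not tell you when you forget it.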
First, functional programming enables the compiler to do the optimization for you: pure functions have no side effects, so they can be evaluated in any order, or in parallel. So Haskell seems to have a bright future.
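The key property is that a side-effect-free function gives the same answer no matter how its calls are scheduled. A small sketch in Python (my own illustration; in Haskell you would express the same idea with strategies like `parMap`):

```python
from concurrent.futures import ThreadPoolExecutor

def f(x):
    """A pure function: its result depends only on its argument."""
    return x * x + 1

data = list(range(10))

# Because f has no side effects, mapping it sequentially or in
# parallel must yield exactly the same result.
sequential = [f(x) for x in data]
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(f, data))

print(parallel == sequential)  # True
```

That guarantee is exactly what a functional-language compiler exploits: it can parallelize without asking the programmer to reason about races.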
Another cool thing I found is transactional memory, which is introduced quite nicely here: http://msdn.microsoft.com/msdnmag/issues/06/01/EndBracket/
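To give a feel for the programming model (a toy sketch of my own, not the implementation from the article), here is a minimal software-transactional-memory scheme in Python: transactional variables carry a version number, a transaction buffers its writes and remembers what it read, and commit succeeds only if nothing it read has changed in the meantime; on conflict the whole transaction simply retries.

```python
import threading

_commit_lock = threading.Lock()  # serializes commits only, not whole transactions

class TVar:
    """A transactional variable: a value plus a version number."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Transaction:
    def __init__(self):
        self.reads = {}   # TVar -> version seen when first read
        self.writes = {}  # TVar -> new value, buffered until commit

    def read(self, tvar):
        if tvar in self.writes:          # read our own buffered write
            return self.writes[tvar]
        self.reads.setdefault(tvar, tvar.version)
        return tvar.value

    def write(self, tvar, value):
        self.writes[tvar] = value

    def commit(self):
        with _commit_lock:
            # Validate: everything we read must still be unchanged.
            if any(tvar.version != v for tvar, v in self.reads.items()):
                return False              # conflict -> caller retries
            for tvar, value in self.writes.items():
                tvar.value = value
                tvar.version += 1
            return True

def atomically(action):
    """Run action(tx) until it commits without conflicts."""
    while True:
        tx = Transaction()
        result = action(tx)
        if tx.commit():
            return result

# A transfer between two accounts: either both updates happen or neither.
a, b = TVar(100), TVar(0)

def transfer(tx):
    amount = 30
    tx.write(a, tx.read(a) - amount)
    tx.write(b, tx.read(b) + amount)

atomically(transfer)
print(a.value, b.value)  # 70 30
```

The appeal is that `transfer` contains no locks at all: you state what must happen atomically, and the runtime worries about conflicts, which is exactly the kind of help my brain was asking for above.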
Christian Binder - who is doing the developer part on the roadshow - will certainly come up with some nice samples and demos in the near future.