For my inaugural blog post, I’d like to announce an exciting C++ event on June 29, 2011, in Santa Clara, CA. Microsoft and NVIDIA are teaming up to bring a full evening of C++ content.
Headlining the event is Herb Sutter from Microsoft, who’ll speak about the new C++0x standard and a new way to target GPUs called C++ AMP (Edit: added 15-June-2011). Following that we’ll have a talk on ALM tools coming in Visual Studio. We’ll then switch gears and have three speakers from NVIDIA who’ll talk about GPU computing. Find all the details below.
Herb is the chair of the ISO C++ standards committee, and author of four books: Exceptional C++, More Exceptional C++, Exceptional C++ Style, and C++ Coding Standards (with Andrei Alexandrescu).
Important: the event is free, but you must register here: http://bit.ly/june29cpp. Space is limited, so register early. Snacks and beverages will be supplied by Microsoft and NVIDIA.
See you there!
| | |
|---|---|
| Title: | C++ and GPU Computing: A Look Ahead |
| Date: | Wednesday, June 29, 2011 |
| Time: | 5:45 PM to 9:00 PM |
| Location: | NVIDIA, 2800 Scott Blvd Building E, Santa Clara, CA 95050. Marco Polo room. |
| Time | Session |
|---|---|
| 5:45 PM | Welcome and Registration |
| 6:00 PM | C++: Heterogeneous Parallelism in General, C++ AMP in Particular (Herb Sutter, Principal Architect for Windows C++) |
| 7:15 PM | ALM Tools for C++ in Visual Studio 201X (Program Manager, C++) |
| 8:00 PM | Parallel Nsight: Programming GPUs in Visual Studio |
| 8:20 PM | Parallel Programming Made Easy with CUDA 4.0 |
| 8:40 PM | Thrust: C++ Template Library for GPGPUs |
Special thanks to Marc Wolfson & Yesenia Alvarez of Microsoft, and Calisa Cole & Jennifer Anonical of NVIDIA for putting this together.
June 22, 2011 update. Here's the abstract of Herb's talk:
Title: Heterogeneous Parallelism in General, and C++ AMP in Particular
Parallelism is not just in full bloom, but increasingly in full variety. We know that getting full computational performance out of most machines — nearly all desktops and laptops, most game consoles, and the newest smartphones — already means harnessing local parallel hardware, mainly in the form of multicore CPU processing. This is the commoditization of the supercomputer.
More and more, however, getting that full performance can also mean using gradually ever-more-heterogeneous processing, from local discrete and on-die GPGPU flavors to “often-on” remote parallel computing power in the form of elastic compute clouds. This is the generalization of the heterogeneous cluster in all its NUMA glory, and it’s appearing at all scales from on-die to on-machine to on-cloud.
In this talk, Herb shares a vision of what this will mean for native software on mainstream platforms from servers to devices, and showcases upcoming innovations that bring access to increasingly heterogeneous compute resources — from vector units and multicore, to GPGPU and elastic cloud — directly into the world’s most popular native languages.