MS-MPI and HPC Pack 2008 SDK

The best-known implementation of the MPI specification is MPICH2, maintained by Argonne National Laboratory. MPICH2 is an open-source implementation of the MPI2 specification that is widely used on HPC clusters. MS-MPI is based on, and designed for maximum compatibility with, the MPICH2 reference implementation, and it ships as part of the HPC Pack 2008 SDK.

The exceptions to MPICH2 compatibility are on the job launch and job management side; the MPI APIs themselves are identical. These deviations were necessary to meet the strict security requirements of Windows HPC Server 2008, including integration with Active Directory and System Center. MS-MPI includes the complete set of MPI2 functionality as implemented in MPICH2, with the exception of dynamic process spawning and publishing. In addition, MS-MPI supports lower-latency, shared-memory communications. The shared-memory implementation, completely rewritten for Windows HPC Server 2008, improves the overall efficiency of communication between cores (especially when an application uses both shared memory and networked communication) and reduces the CPU cost of passing messages between cores. Combined with the new NetworkDirect interface in MS-MPI and the new networking stack in Windows Server 2008, the result is a significantly more efficient HPC cluster.
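Because the APIs are identical, ordinary MPI source code builds unmodified against MS-MPI. As a minimal sketch (my own illustration, not part of the SDK), the classic hello-world below uses only standard MPI calls and compiles unchanged against either MPICH2 or MS-MPI:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char* argv[])
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);               /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank        */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes  */
        MPI_Get_processor_name(name, &len);   /* host this rank runs on     */

        printf("Hello from rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();                       /* shut the runtime down      */
        return 0;
    }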

MS-MPI uses the Microsoft NetworkDirect protocol to get maximum compatibility with, and performance from, high-performance networking hardware. MS-MPI can use any Ethernet connection supported by Windows Server 2008, as well as low-latency, high-bandwidth interconnects such as InfiniBand, 10-Gigabit Ethernet, or Myrinet. Windows HPC Server 2008 also supports any network connection that has either a NetworkDirect or Winsock Direct provider. Gigabit Ethernet provides a fast, cost-effective interconnect fabric, while InfiniBand, 10-Gigabit Ethernet, and Myrinet are ideal for latency-sensitive, high-bandwidth applications. The NetworkDirect protocol bypasses Windows Sockets (Winsock) and the TCP/IP stack, using Remote Direct Memory Access (RDMA) on supported hardware to improve performance and reduce CPU overhead.
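You can measure the latency of your own fabric with a simple ping-pong loop. The sketch below (again my own illustration; the repetition count is an arbitrary choice) bounces one byte between ranks 0 and 1 and reports the average one-way latency:

    #include <mpi.h>
    #include <stdio.h>

    #define REPS 1000

    int main(int argc, char* argv[])
    {
        int rank, size, i;
        char byte = 0;
        double start, elapsed;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            if (rank == 0) fprintf(stderr, "Run with at least 2 ranks.\n");
            MPI_Finalize();
            return 1;
        }

        MPI_Barrier(MPI_COMM_WORLD);   /* line the ranks up before timing */
        start = MPI_Wtime();

        for (i = 0; i < REPS; i++) {
            if (rank == 0) {           /* rank 0 sends first, then waits  */
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {    /* rank 1 echoes everything back   */
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        elapsed = MPI_Wtime() - start;
        if (rank == 0)                 /* REPS round trips = 2*REPS hops  */
            printf("Average one-way latency: %.2f us\n",
                   elapsed / REPS / 2.0 * 1e6);

        MPI_Finalize();
        return 0;
    }

Running both ranks on one node exercises the rewritten shared-memory path; placing them on separate nodes exercises the NetworkDirect (or Winsock Direct) path, so the same binary lets you compare the two.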

The HPC Kit 2008 is a collection of technical documents, presentations, and training for developers and IT professionals: everything you need to get started with Windows HPC Server 2008 in one convenient place. Download here.

Windows HPC Server 2008 includes the Microsoft HPC Pack 2008 SDK, which currently contains:
- MPIEXEC: MPI Application Launcher (see the usage sketch after this list)
- SMPD: MPI Process Manager
- MPISYNC: MPI Trace Clock Synchronizer
- Microsoft.Hpc.VSDeployTool: MPI Deployment Tools for Visual Studio
- ETL2CLOG and ETL2OTF: MPI ETL Trace Converters
- MPI Scheduler API DLLs
- MPI Header and Library Files for i386 and AMD64
- NetworkDirect Service Provider Interface
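As a quick usage sketch, building against the SDK's header and import library and launching with MPIEXEC looks roughly like this (the paths below assume the SDK's default install location; adjust them to your setup):

    rem Build hello.c against the SDK (assumed default install path)
    cl hello.c /I"C:\Program Files\Microsoft HPC Pack 2008 SDK\Include" ^
        "C:\Program Files\Microsoft HPC Pack 2008 SDK\Lib\amd64\msmpi.lib"

    rem Launch 8 MPI ranks locally
    mpiexec -n 8 hello.exe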
 
We can use the Scheduler API to write client applications that interact with the job scheduler. Note, however, that the SDK can schedule jobs only on Microsoft HPC Server 2008 and later; it cannot schedule jobs on Microsoft Compute Cluster Server 2003 (CCS).
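For quick tests you can also submit an MPI run from the cluster's command line instead of writing against the API. A sketch is below; HEADNODE and the share path are placeholders, and the exact flag spellings can vary between releases, so run job submit /? on your installation to confirm:

    rem Submit an 8-core MPI run to the cluster's job scheduler
    job submit /scheduler:HEADNODE /numcores:8 mpiexec \\HEADNODE\apps\hello.exe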

Download the HPC Kit 2008 and start exploring parallel computing with MS-MPI. I will post more tutorials about parallelism on the Microsoft platform in this MSDN community. Stay tuned!

Cheers – RAM