Windows HPC Server 2008 R2: The next step in High Performance Computing

Today we released an important component of our Technical Computing initiative: Windows HPC Server 2008 R2 Suite. 

Windows HPC Server provides an end-to-end HPC solution that is tightly integrated with the Microsoft IT infrastructure customers already have in place today.  It provides the performance required by the toughest workloads and offers great new capabilities – such as “supercharging” Excel 2010 to run on clusters for long-running, complex calculations and using idle Windows 7 workstations as part of a “desktop compute cloud” – all at a low cost of ownership.

And as we continue to push our HPC server platform forward, we are also making progress on one of the most critical elements of the Technical Computing initiative: empowering any developer to create parallel applications on desktops, in clusters, and in public and private clouds.

Parallelism has long been the domain of high performance computing, but with the advent of multicore and manycore processors and the cloud, the need for better, simpler parallel development tools has become critical.

Using Windows HPC Server together with the parallel development tools in Visual Studio 2010 and the Windows Azure cloud platform, we’re enabling customers to more easily write parallel code, scale better on-premises, and extend their on-premises infrastructure to the cloud.  This will enable a broader group of people to harness the power of parallelism and untapped compute capacity of today’s technology for applications that ask tougher questions and solve bigger challenges.

A while back I wrote a post about some of the parallel development features of Visual Studio, including the Parallel Patterns Library, user level tasks, a parallel debugger and profiler, and other tools. 

As a parallel computing platform, Visual Studio provides an integrated development environment with high-level parallel constructs and abstractions that reduce code footprint and streamline parallel development.  It helps developers express logical parallelism and map it to physical parallelism.  With integrated parallel programming support, developers can parallelize applications and more easily increase performance on multicore machines. The debugging tool windows in Visual Studio 2010 support task models in addition to traditional thread-based programming models. It also includes profiling tools, which let you analyze and measure the degree of parallelism within an application, discover contention for resources across the system, and visualize thread distribution across cores.

Our goal is to help you build applications that seamlessly scale from client to cluster to cloud.  Distributed runtimes cover single-box multicore and manycore machines, on-premises clusters, and cloud.  Developers can build parallel applications that scale across many different infrastructures, including CPU- and GPU-scaled architectures, all from within Visual Studio 2010.

As always, Visual Studio partner solutions extend the platform. NVIDIA’s Parallel Nsight, for example, lets developers debug and analyze code running on GPUs.  And Intel’s Parallel Studio helps developers extract full performance from multicore systems.

Recently, Hanweck Associates, a financial services risk management solution provider, used Visual Studio and NVIDIA CUDA to develop GPU code for risk management solutions for top-tier hedge funds, banks, broker/dealers and other financial institutions.  Hanweck used Visual Studio and C, C++, C#, VB, and CUDA to develop real-time financial risk-management software that processes millions of messages each second and turns those calculations around in milliseconds.  All of this runs on just a handful of conventional servers with NVIDIA Tesla GPUs running Windows HPC Server.  Watch as Hanweck Associates CEO Gerald Hanweck, Jr. gives a deeper dive into the solution they built.

You can learn more about Windows HPC Server here.


Comments (3)

  1. Wil says:

    People whose applications are so CPU-intensive as to require clustering do not develop those applications as programs to be solved in Excel, whether "supercharged" or otherwise.  The use of Excel (or MATLAB, etc.) for prototyping is fine, but once you get to the point of production (and hence need a cluster), the code should run as close to the silicon as possible, and it will therefore most likely be written in C, C++, and/or FORTRAN.  The NVIDIA Tesla-based GPU framework is an appropriate solution environment for such applications; distributed spreadsheets are not.  MS should focus its efforts on what the market needs, rather than viewing every problem as a nail to be hit by its existing hammers.

  2. Rajeevbatham says:

    I have configured a WSUS 3.0 server, but I am not able to open its web console over SSL. Please tell me how to configure it for SSL in IIS. I am using Windows Server 2003 Enterprise Edition SP1.

  3. BS says:

    Wil, that's all well and good, but not all end users can code. If users are comfortable using Excel and there is a way to increase performance, I think this is a great solution.

    I've seen a lot of scientists develop large solutions in Excel because that is all they know, and they don't have the time or the desire to upskill.