I’m not quite sure why the headline above feels a bit better to me than the one my colleague wrote - “Interoperable Supercomputer Attracts Wider User Base at Leading University” - but somehow it’s a little easier on the eye. (Mind you, I thought about “What weighs 20 tons and uses the same power as 100 ice-cream machines?”)
In a nutshell, Cambridge University has historically run its advanced research calculations on a specialist Linux cluster computing system, but the people running the service recognised that the level of detailed technical knowledge required meant it was used exclusively by very specialist researchers.
Now that they’ve designed and implemented an interoperable HPC environment using Windows HPC Server 2008 and Linux, they’ve been able to open the system up to much wider use – researchers who use Windows on their desktop computers can move their projects across to a Windows-based cluster easily, whilst Linux users can still run their projects too.
Both groups of researchers are happy, and both have more choice.
Paul Calleja, Director of High-Performance Computing at the university, described what it will do when it goes into widespread use in January:
Users are more productive moving from desktop computers to a familiar HPC environment. With Microsoft we can evangelise HPC in more disciplines — lowering the barrier of entry for supercomputer use.
Why is this important? Well, as Paul puts it:
Competition is intense. Recognition depends on making a discovery first, and that comes down to how quickly you can perform complex computations.
GeekAlert: Only some of you will be interested to learn that the HPC cluster consists of 2,300 Intel processor cores across 585 Dell servers, housed in 18 rack cabinets. It has 2,600 network connections, weighs 20 tons, and draws 230 kilowatts of power.
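For the really geeky, those headline figures invite a couple of back-of-envelope sums – roughly how many cores and how many watts per server, and whether the “100 ice-cream machines” line stacks up. A quick sketch (the inputs are the figures quoted above; the derived numbers are simple arithmetic, not official specs, and the 2.3 kW ice-cream machine is just the implied comparison, not a measured one):

```python
# Back-of-envelope figures for the cluster described above.
cores = 2300          # Intel processor cores
servers = 585         # Dell servers
power_kw = 230        # sustained power draw, kilowatts

cores_per_server = cores / servers            # ~3.9, i.e. roughly quad-core
watts_per_server = power_kw * 1000 / servers  # ~393 W per server
kw_per_ice_cream_machine = power_kw / 100     # 2.3 kW, per the headline joke

print(f"~{cores_per_server:.1f} cores per server")
print(f"~{watts_per_server:.0f} W per server")
print(f"{kw_per_ice_cream_machine} kW per 'ice-cream machine'")
```

Nothing profound, but it shows the quoted numbers hang together: just under four cores and about 400 W per box is entirely plausible for servers of that era.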