Virtual Worlds

Virtualization, while becoming more and more popular, has been around for quite some time. There are some very good articles, blogs, presentations, etc. out on the Internet covering the history of virtualization. “Google” for it :). A few recommendations:

https://www.kernelthread.com/publications/virtualization/
https://www.cl.cam.ac.uk/Research/SRG/netos/papers/2003-xensosp.pdf (note the reference to [discontinued?] tests with Windows XP)

Fantastic products like the HP Superdome or the Unisys ES7000 offer virtualization at the hardware level through a feature called hardware partitioning. It allows these powerful systems to be divided into several smaller entities. These chunks, completely separated from each other, each run their own operating system in a “virtual” hardware environment. Very powerful, very well performing, very secure, very expensive.

Talking about today’s PC world and its virtualization efforts, you could of course argue that products like Virtual PC or Virtual Server, and comparable products from companies like VMware, already offer virtualization in software. Actually, I see this more as emulation than virtualization. All vendors have done a tremendous job of providing an environment that “looks” like a piece of hardware from an OS perspective: BIOS, video card, networking, and so on, some components virtualized, some emulated, to give the OS what it requires to work properly. Since this is done in software, it comes at a price. Before a call to a physical device can be executed, it has to go through several layers of additional code to ensure there is no interference with the host OS (the OS running the emulation/virtualization software). Add-ons to the guest OS (the OS that runs inside the emulation/virtualization software) allow for better performance and more streamlined access to the physical hardware. Some of these add-ons come in the form of drivers with built-in knowledge about their virtual environment. Even with the overhead imposed by the emulation/virtualization software, some guest OS configurations achieve almost the same performance and speed as an OS running on the bare metal.
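To make that layering concrete, here is a small toy sketch in C. It is purely illustrative, not any vendor’s real code: the function names and the COM1 port constant are stand-ins for the path a single guest device access takes under full emulation.

    #include <stdio.h>

    static void host_device_write(unsigned char byte) {
        /* Layer 3: the host OS driver would touch real hardware;
         * printing stands in for the physical device here. */
        printf("host device received: %c\n", byte);
    }

    static void device_model_emulate(unsigned short port, unsigned char byte) {
        /* Layer 2: the software device model decodes the access and
         * decides what the "hardware" should do with it. */
        if (port == 0x3F8)               /* COM1 data register */
            host_device_write(byte);
    }

    static void vmm_trap(unsigned short port, unsigned char byte) {
        /* Layer 1: the monitor intercepts the privileged instruction
         * so the guest never touches the host's devices directly. */
        device_model_emulate(port, byte);
    }

    static void guest_outb(unsigned short port, unsigned char byte) {
        /* What the guest thinks is a one-instruction port write
         * actually enters the stack of layers above -- the overhead
         * described in the text. An add-on driver with knowledge of
         * the virtual environment could skip the decode step by
         * calling the monitor's interface directly. */
        vmm_trap(port, byte);
    }

    int main(void) {
        guest_outb(0x3F8, 'A');   /* guest writes one byte to "COM1" */
        return 0;
    }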

The recent introduction of processor instructions to support true virtualization, in implementations from AMD (Pacifica) and Intel (VT), enables a new level of abstraction of operating systems from the physical hardware. But even without this support, products like GSX Server from VMWare or Xen from XenSource already provide abstraction from the underlying hardware beyond pure emulation.
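For the curious, here is a minimal sketch (my own, for x86 with GCC or Clang, not taken from any of the sources above) of how software can probe for these extensions: Intel VT announces VMX in CPUID leaf 1, ECX bit 5, and AMD’s Pacifica/SVM announces itself in CPUID leaf 0x80000001, ECX bit 2. Keep in mind the flag only shows what the CPU can do; firmware can still disable the feature.

    #include <stdio.h>
    #include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */

    int main(void) {
        unsigned int eax, ebx, ecx, edx;

        /* Intel VT: CPUID leaf 1, ECX bit 5 (VMX) */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
            puts("Intel VT (VMX) supported");

        /* AMD Pacifica: CPUID leaf 0x80000001, ECX bit 2 (SVM) */
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
            puts("AMD Pacifica (SVM) supported");

        return 0;
    }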

The above products already go way beyond emulation. They provide a thin layer of software between the physical hardware (processor, memory, network, video, etc.) and the operating systems running on top of it. This well-known piece of software, introduced into the mainframe world in the ’70s, is called a hypervisor. A hypervisor can also be seen as the virtualization manager for a particular system. It allows running multiple OSes, completely separated from each other, on one physical system. While this can be done (almost) completely in software, Pacifica and VT will help this concept become the dominant architecture for future operating systems. Without the need for a real host OS, different OSes or multiple instances of the same OS can run in parallel, sharing the same underlying hardware, virtualized and controlled by the hypervisor.
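The shape of that “virtualization manager” role can be sketched as a loop. The toy below is purely conceptual and not modeled on any specific product; “run until trap” is faked with a function returning an exit reason, but the structure matches the classic trap-and-resume idea: give a guest the CPU, regain control when it does something privileged, handle it, move on to the next guest.

    #include <stdio.h>

    enum exit_reason { EXIT_TIMER, EXIT_IO, EXIT_HALT };

    struct guest {
        const char *name;
        int halted;
    };

    /* Stand-in for a hardware-assisted "enter the guest" instruction:
     * run guest code until it traps back to the hypervisor. */
    static enum exit_reason run_guest(struct guest *g) {
        printf("running %s ...\n", g->name);
        return EXIT_TIMER;   /* toy: every time slice ends on the timer */
    }

    int main(void) {
        struct guest guests[] = { { "guest-os-1", 0 }, { "guest-os-2", 0 } };
        int rounds = 4;      /* run a few scheduling rounds, then stop */

        while (rounds--) {
            for (int i = 0; i < 2; i++) {
                if (guests[i].halted)
                    continue;
                switch (run_guest(&guests[i])) {
                case EXIT_TIMER: break;  /* slice used up, next guest */
                case EXIT_IO:    break;  /* would emulate the access */
                case EXIT_HALT:  guests[i].halted = 1; break;
                }
            }
        }
        return 0;
    }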

It is public knowledge that Microsoft will provide its own hypervisor implementation in the Longhorn timeframe. Steve Ballmer talked about it at this year’s Management Summit, Bob Muglia spoke about it in an interview with Computerworld, and sessions at various Microsoft technical and business events have covered virtualization and Microsoft’s plans in some detail.

Even though some of us may be no big fans of Gartner or analysts in general, Gartner has a nice set of spotlight articles about virtualization. Virtual Strategy Magazine offers an RSS feed and some really good articles about this and related topics. I have also heard that virtualization.info is up and running again.