Building School Networks for the Future – Server Infrastructure: "System for the Future"


Back in January I posted the first in this series of blogs, Building School Networks for the Future – with System Centre and Hyper-V, from Stuart Wilkie, IT Manager at Marine Academy Plymouth. That post outlined where Stuart started from; this second post, to which Stuart has contributed, looks at the next stage: using the server infrastructure to power the "system for the future".

The server platform was always going to move to the latest Windows Server 2008 R2 release, but how could it be resilient whilst remaining sustainable and fitting the desire to protect our environment? The old equipment was not going to be able to handle what we wanted to do … so out it came! From top left to bottom right, the scary day: powering down and disassembling the whole system…


There was also no way we could afford the number of physical servers we would need, nor find somewhere to put them! The answer here was Hyper-V, Microsoft's virtualisation platform, built into the operating system. After a demo from Microsoft, discussions started early on with one of its virtualisation partners, Dell, with whom Stuart already had a good working relationship.

Based on previous projects with Dell, the hardware choice was the PowerEdge R710 range, purchased through Dell partners Laptop House and Printware. Stuart jokes, "we did go a bit silly on the specification though", as these servers were configured with 146 GB of RAM and two quad-core Intel Xeon processors – a staggering amount of computing power!

However, they were talking Hyper-V, and the four systems purchased to this specification would be powering the entire system as multiple virtual servers.

“When you start to think of it that way, it doesn’t seem so silly after all. You are cramming what would usually be a whole rack of servers inside one box – now that is real environmental consciousness!”

On the local storage side, 2 × 300 GB fast drives in each server (set up as RAID 5) would provide a suitable base for the host operating system. The MD3200i SAN was chosen from Dell's range, as the academy already had its predecessor, the MD3000, for the user data. This one held 8 × 2 TB drives to ensure plenty of capacity, bearing in mind that it would host not only the Hyper-V guest OSs but also the data drives (for program installs etc.) as well as the SQL storage (more on that later).
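As a rough sanity check on that capacity – a minimal sketch of parity-RAID arithmetic, assuming the SAN's disk groups used single-parity RAID 5 (the article doesn't state which RAID level or spare policy the academy actually chose):

```python
def usable_tb(drive_count, drive_tb, parity_drives=1, hot_spares=0):
    """Usable capacity of a parity-RAID group: RAID 5 sacrifices one
    drive's worth of space to parity; hot spares hold no data at all."""
    return (drive_count - hot_spares - parity_drives) * drive_tb

# The MD3200i held 8 x 2 TB drives; RAID 5 with no hot spare is
# assumed here purely for illustration.
print(usable_tb(8, 2.0))  # 14.0 TB usable of 16 TB raw
```

Either way, there was room to spare for the guest OS volumes, the data drives and the SQL storage.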

Then out came the big pads of paper – to map out the current system and work out what the new one would look like. The key to this process was working with the team of three technicians on what roles and services were needed. Despite being in the digital age, the best way was still just to get the A3 paper out and some big pens!

The starting point was to split roles away from the two domain controllers (DCs), as they were doing pretty much everything. DHCP would move onto its own dedicated VM – two, in fact, so it could be clustered (DHCP is now supported as a clustered service in Server 2008 R2). Next, the DCs themselves. Best practice says keep a physical DC; the others can be virtualised. Two virtual DCs went into the plan, with a view to later taking the roles off the current boxes and creating a new physical DC on a repurposed unit. Next, SQL Server – also a clustered role, and it has been for some time – so that's two more VMs. The SQL cluster would host all our SQL requirements: SIMS (the Capita schools management system), Eclipse.net (library management), the System Centre applications, and any others which would come along!

System Centre Configuration Manager (SCCM) was going to be used for OS and application management. A system the size of this needs more than one server, so two of them were added to the mix.

“You really need multiple distribution points for applications and OS images when you have so many machines, particularly when the applications are so big – like Adobe CS5! Why do you have to make them so big?” jokes Stuart.

Racking up the VMs like this, the specification of the host servers makes sense. Runtime apps – the standard mapped-drive “software share” – takes a VM too. In theory there could have been two for redundancy, but it changes so little, and Hyper-V allows you to snapshot a VM, that the need wasn't really there. Another single server for web-based applications – a room-bookings application, the helpdesk software used (GLPI), and the library system – put one more VM on the list.

Server Role                    Clustered? (VM count)
-----------------------------  ---------------------
Domain Controller              Yes (2)
DHCP                           Yes (2)
Exchange Client Access         Yes (2)
SQL Server                     Yes (2)
System Centre Config Mgr       Yes (2)
System Centre Ops Mgr          Yes (2)
DFS Hosts                      Yes (2)
Application Runtime            No (1)
Web Applications               No (1)
                               No (1)
Certificate Authority          No (1)
Wyse Device Manager            No (1)
Remote Desktop Con. Broker     No (1)
Remote Desktop Web Acc.        No (1)
Remote Desktop Session Host    Yes (5)
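A quick bit of consolidation maths falls out of the role table. The per-role VM counts below are taken from the table; dividing them evenly across the four R710 hosts is an assumption made here purely for illustration (in practice Hyper-V's failover clustering decides placement):

```python
# Totals implied by the role table above.
clustered_pairs = 7   # roles marked "Yes (2)"
single_roles = 7      # roles marked "No (1)"
session_hosts = 5     # Remote Desktop Session Host, "Yes (5)"

total_vms = clustered_pairs * 2 + single_roles + session_hosts
hosts = 4             # PowerEdge R710s purchased

print(f"{total_vms} VMs across {hosts} hosts")  # 26 VMs across 4 hosts
```

Twenty-six virtual servers on four physical boxes – which is exactly the "whole rack inside one box" point Stuart made earlier.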

Now that the list was decided, it was time to start building – unpacking the boxes. As Stuart describes it, “The fun part. Every techie loves unpacking new kit, particularly shiny new server kit of this specification”.

To start with, the initial configuration of the host OSs was done, and partitioning the SAN was an absolute breeze with Dell's management and server-build tools. The SAN was split into logical disk groups, and these were assigned as iSCSI targets for the host OSs. The Hyper-V guest OSs were all located on one of the disk groups, as this allowed use of the Hyper-V clustering feature. Stuart explained, “this is where the real power of Hyper-V comes in – automatic failover and live migration”.

This process can be “tied down” by using the “preferred host” option and, ironically, thanks to some issues with power at the academy, the feature has been well tested!

“It's quite scary to watch your VMs moving around between servers as they detect load or issues”.

Separate “data” disks were also added to VMs where needed, to prevent cluttering the OS drive. These varied in size: the apps server's was relatively small compared to SCCM's (which held all the setup files and images), and the web servers' were also relatively small.

The thing Stuart really liked was how easy Hyper-V makes it to create a VM and install an OS. A couple of clicks, then sit back and watch while Server 2008 R2 installs in about 15–20 minutes.

A bit more “next, next, next” clicking through Server Manager resulted in the roles and supporting services being installed on each of the virtual servers as needed. The processes of installing SQL, SCCM and SharePoint were largely the same, particularly with the unattended-install options now available in some of the products.

“I just couldn't get over how easy the clustering process now is”, explained Stuart. “Clustering always used to be one of those ‘dark arts' in earlier OSs, but now, as is so often the case with Microsoft, it is all wizard-driven. TechNet is full of great guidance articles too, and there is a growing wealth of videos on YouTube, as well as on the Microsoft sub-sites.”

It was then time to put everything into its home in the now-empty server room. Weird, really, as it looks quite empty compared to the before shots from the start of the article.


The final piece of the jigsaw on the server side was the setup of the Remote Desktop environment. This is a topic in its own right, so Stuart has agreed to write a separate post on it and on how they set it up with Windows Thin PC / Wyse. Coming up in the next article, though, is how Marine Academy went about deploying Windows 7 across the site – and a bit more virtualisation with App-V! Stay tuned for that one…
