Hyper-V in my House - 2013

A while ago I talked about how I was using Hyper-V in my house.  These days I have quite a different configuration in place.  I updated my household deployment immediately after Windows Server 2012 was released.

I had a couple of goals with my new architecture:

  1. I wanted to minimize downtime due to hardware servicing.

    My Hyper-V server handles DHCP, DNS, Internet connectivity, hosts the kids' movies, etc…  All this means that if I need to turn it off for any reason – I have to do it after everyone else has gone to bed.  Not fun.

  2. I wanted to minimize the frequency with which I needed to service hardware.

    The leading cause of my needing to work on hardware has been hard drive failure.  So, logically: more hard drives means more weekends working on servers, and fewer hard drives means fewer weekends working on servers.

  3. I need high levels of data protection.

    My server has all the family photos on it – data loss is not an option!

  4. I need lots of storage.

    At this point in time I have about 5 TB of data on my family file server.  So realistically I need 7-8 TB of storage for my file server and all other virtual machines.

  5. I want to keep the cost down.

    Hey, no one likes to waste money!

With all of this in mind – here is what I ended up deploying:


I set up two Hyper-V servers.  Each server has a single quad-core processor (I do not use a lot of CPU), 16 GB of RAM and three 1-gigabit network adapters.  Each server also has six hard disks.  The first two disks are configured in RAID 1 using the onboard Intel RAID.  The remaining four disks are configured as a 6 TB parity space to give me the most capacity possible (note – in practice these four disks are a mix of 2 TB and 3 TB disks).
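
For anyone who wants to build a similar parity space, here is a rough sketch of the PowerShell involved.  This is not the exact script I ran – the pool and space names ("DataPool", "DataSpace") are placeholders – but it is the general shape of it on Windows Server 2012:

```powershell
# Find the physical disks that are available to be pooled
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool out of them ("DataPool" is a placeholder name)
$subsystem = Get-StorageSubSystem -FriendlyName "Storage Spaces*"
New-StoragePool -FriendlyName "DataPool" `
                -StorageSubSystemFriendlyName $subsystem.FriendlyName `
                -PhysicalDisks $disks

# Carve out a single parity space that uses all of the pool's capacity
New-VirtualDisk -StoragePoolFriendlyName "DataPool" `
                -FriendlyName "DataSpace" `
                -ResiliencySettingName Parity `
                -UseMaximumSize

# Initialize, partition and format the new space as one big NTFS volume
Get-VirtualDisk -FriendlyName "DataSpace" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"
```

Parity gives the best ratio of usable capacity to raw disk, at the cost of write performance – which comes back to bite me in lesson one below.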

I then run half of my virtual machines on each server, and use Hyper-V Replica to replicate them to the other server.
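
Enabling replication for a virtual machine is a two-cmdlet affair in PowerShell.  As a rough sketch – the VM and server names here are made up, and this assumes the other server has already been configured to accept replication (Kerberos over port 80):

```powershell
# "FileServer" and "hv-server2" are placeholder names - this assumes
# hv-server2 is already set up as a Replica server (Kerberos, port 80)
Enable-VMReplication -VMName "FileServer" `
                     -ReplicaServerName "hv-server2" `
                     -ReplicaServerPort 80 `
                     -AuthenticationType Kerberos

# Kick off the initial copy of the virtual hard disks
Start-VMInitialReplication -VMName "FileServer"
```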

This configuration gives me a high level of data protection (from both a single disk failure and an entire server failure).  It also means that if I have to replace a physical disk or fix a hardware problem with one of the servers – I just move all the virtual machines to the working server, and get to take my time fixing the broken one.
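
For the curious, a planned failover with Hyper-V Replica looks roughly like this in PowerShell.  Again, the VM name is a placeholder and this is the general sequence rather than my exact commands:

```powershell
# On the server that needs servicing (the primary):
Stop-VM -Name "FileServer"                        # the VM must be off first
Start-VMFailover -VMName "FileServer" -Prepare    # send the final changes across

# On the other server (the replica):
Start-VMFailover -VMName "FileServer"             # fail over to the replica copy
Set-VMReplication -VMName "FileServer" -Reverse   # replicate back the other way
Start-VM -Name "FileServer"                       # and bring it back online
```

Because replication is reversed at the end, the repaired server can just catch up on changes when it comes back online, rather than needing a full copy.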

I have been running this configuration for almost a year now – and for the most part it has worked just fine.  There have been three separate occasions where I needed to work on a server, and my family did not notice (well, other than the general cursing and complaining I was making while working).  That said, there have been some lessons learned along the way:

  1. Low storage IOPS can really hurt sometimes.

    When I built this system I knew that the storage throughput of the 6 TB data disk would not be great, but I accepted this as a reasonable trade-off in the space / price / performance matrix.  90% of the time it has not been an issue – but there have been a couple of times when I have been surprised by how long operations took.

  2. I need more memory.

    My previous setup was a single server with 8 GB of memory.  So two servers with 16 GB each should be huge – right?  That was my thinking when I built the system – but I was wrong.  First, I need to make sure that I do not oversubscribe my memory, so that I can run all my virtual machines off of one server when I need to.  Thankfully, dynamic memory makes this easy to do (there is a sketch of the settings after this list), and ensures that when my virtual machines are spread across both servers I get to use all the memory.  Second, though, as soon as I had the memory available I started firing up new virtual machines simply because I could – and it was not long until I was at my limit again.
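
For reference, here is roughly what those dynamic memory settings look like in PowerShell.  The numbers are illustrative only – the real trick is making sure the maximums across all virtual machines add up to less than what a single server has:

```powershell
# Illustrative values - size MaximumBytes so that every virtual machine
# still fits on one 16 GB server if the other host has to come down
Set-VMMemory -VMName "FileServer" `
             -DynamicMemoryEnabled $true `
             -StartupBytes 1GB `
             -MinimumBytes 512MB `
             -MaximumBytes 2GB
```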

Anyway – now that I have gotten this all written down, I am planning to blog about some of the experiences I have had with this setup over the last year, and the lessons learned in the process.

Cheers,
Ben