Here is a good question:
If you are using iSCSI for all of your storage needs, should you use a software iSCSI initiator inside the virtual machine to connect the iSCSI storage directly to the virtual machine, or should you connect the iSCSI storage to the management operating system (in the parent partition) and then store virtual hard disks there / use pass-through disks?
The answer is (as usual) not that simple. Generally speaking – you should always connect the iSCSI storage to the management operating system. The reason for this is that virtual machines can only connect to iSCSI devices with a software iSCSI initiator, whereas the management operating system will be able to use hardware host-bus adapters (HBAs).
That said, there is a time when you will want to use a software iSCSI initiator inside the virtual machine and connect the storage directly to the virtual machine. You will want to do this if you are trying to cluster the guest operating system inside the virtual machine (as opposed to clustering Hyper-V itself). In this scenario, having iSCSI storage connected directly to the guest operating system is the only option that works.
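For the guest-clustering case, the connection from inside the virtual machine can be scripted. A minimal sketch using the iSCSI initiator cmdlets available in Windows Server 2012 and later guests (the portal address and target IQN below are placeholders for your own SAN's values):

```powershell
# Make sure the iSCSI initiator service is running and starts automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register the SAN's target portal (10.0.0.10 is a placeholder address)
New-IscsiTargetPortal -TargetPortalAddress "10.0.0.10"

# Discover the targets the portal exposes, then connect persistently
# so the session is re-established after the guest reboots
Get-IscsiTarget
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.example:cluster-disk1" -IsPersistent $true
```

Each guest cluster node would run the same commands against the same shared target before the disk is added to the cluster.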
Some common questions that I have heard around these recommendations include:
- What if I am just using a software iSCSI initiator inside the management operating system? Does it really make any difference then?
Yes, it still matters. At the very least, a software iSCSI initiator in the management operating system will be able to take full advantage of all of the network offload / acceleration technologies supported by your physical network adapter. If you run the software iSCSI initiator in the guest operating system, you will only be able to access the subset of network offload functionality that is exposed on our virtual network adapters.
Furthermore, in the management operating system it is easier to utilize dedicated storage network adapters and network teaming in order to increase your performance.
- If I connect the iSCSI storage to the management operating system, should I then pass-through the physical disks to the virtual machines – or should I use virtual hard disks (VHDs) stored on the iSCSI device?
I have discussed this issue in the past – and I strongly believe that everyone should default to using fixed size virtual hard disks over using physical disks directly connected to virtual machines. The only exception that I would make with iSCSI storage is if you have some external process (e.g. backup software) that will be manipulating the iSCSI storage directly and is not aware of virtual hard disks.
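As a sketch of the default recommended above, a fixed-size virtual hard disk can be created and attached with the Hyper-V PowerShell module from Windows Server 2012 onwards (the path, size, and VM name here are hypothetical):

```powershell
# Create a 100 GB fixed-size VHDX on a volume backed by the iSCSI LUN
New-VHD -Path "C:\ClusterStorage\Volume1\data.vhdx" -SizeBytes 100GB -Fixed

# Attach it to the virtual machine's SCSI controller
Add-VMHardDiskDrive -VMName "FileServer1" -ControllerType SCSI `
    -Path "C:\ClusterStorage\Volume1\data.vhdx"
```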
- How bad is it to use iSCSI connected via a software iSCSI initiator inside the virtual machine?
Not that bad really. Chances are that if you are running on gigabit networking you will not really notice much of a performance difference between a software iSCSI initiator in the virtual machine and one in the management operating system. That said, a software iSCSI initiator in the virtual machine will use a lot more CPU power to achieve the same results as a hardware (or software) iSCSI initiator in the management operating system. This will eventually cause performance problems if / when your physical computer comes under high load.
Cheers,
Ben
Microsoft has done a great job with Hyper-V of keeping performance degradation minimal when iSCSI is moved inside a virtual machine. We've experimented with both our iSCSI target and initiator software, and inside a VM iSCSI runs at 50%-70% of the speed it achieves outside the VM. There are also companies like Virsto specializing in moving storage management software inside the VM. So I think we'll see less and less difference in performance every day. Just for your reference:
http://www.starwindsoftware.com
and
http://virsto.com/
Best wishes!
Anton
And, you cannot back up a VM from the management OS when you have an iSCSI disk connected to an iSCSI initiator inside the VM. If you plan to have a management OS-based backup solution backing up all of your VMs on the physical machine, do not count on the iSCSI data being backed up.
Rant over.
I have fail-over clustered Hyper-V servers and another fail-over cluster inside the VMs used for a file server, all connected to the SAN via iSCSI. I also have DFS and Shadow Copies set up inside the VM cluster. The problem is that the VM sometimes logs iSCSI errors and BSODs (caused by a BugCheck). Another issue is that Shadow Copies stop working (probably after a failover inside the VM cluster): the volume used to store shadow copies shows 100% free space, and all previous versions are lost…
Doing your iSCSI on the Hyper-V host, instead of inside the Hyper-V guest, limits your flexibility. With Hyper-V you need to shut down a guest in order to resize an attached VHD (either SCSI or IDE). If you use iSCSI inside the guest, you can usually resize a disk on the fly.
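As a sketch of that on-the-fly resize: after growing the LUN on the SAN, a Windows Server 2012 (or later) guest can pick up and use the new space without a reboot using the Storage module cmdlets (drive letter E is a placeholder):

```powershell
# Rescan so the guest notices the LUN's new size
Update-HostStorageCache

# Grow the partition to fill the enlarged disk
$max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
Resize-Partition -DriveLetter E -Size $max
```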
Also, you can usually take volume snapshots of iSCSI volumes on the storage platform, so that's one backup option. DPM also backs up data on iSCSI-connected disks, so that's another.
Great post, Ben. Can you clarify something in your first bullet point? I was under the impression that teaming iSCSI adapters was unsupported. Are you referring to teaming the management or virtual switch NICs?
One more question that I would like to be answered.
Scenario is 12 NICs in total per server.
4 teamed for LAN.
4 MPIO for iSCSI.
4 for management, CSV, Live migration, and Cluster.
You use MPIO software for the iSCSI NICs (i.e. the ones attached to the SAN unit) as they cannot be teamed, so you get the best performance possible. When you use Hyper-V you have to take one of the iSCSI NICs and create a virtual adapter from the physical one. When you create a VM you use the virtual teamed NIC (LAN side) and you can add another adapter, which is the virtual iSCSI NIC that you created. Therefore you have a VM with two NICs: one for the LAN, and one to connect to iSCSI within the VM and attach to SAN storage. This is the best way to do it, I believe.
Now the question is this;
Do we just have 1 iSCSI NIC which is used for all VMs on that host – therefore all VMs sharing just 1 gigabit connection – or is Hyper-V intelligent enough to understand that we share all four as the host does?
OR
Do we have to add four virtual iSCSI NICs – one for each physical adapter – and then use MPIO within the virtual machine to get the best performance and load balancing possible?
Serious answers on the back of a post-card please … you will be surprised how over the last year no one has been able to answer this successfully.
Thanks guys,
Mike
I currently have a VM in a failover cluster but need to present iSCSI directly to the VM vs. using VHDX files. My question is: does Hyper-V 2012 support HA of a VM that has iSCSI storage connected inside the VM?
Thanks
Simon
Hi,
Great article Ben,
Are all the recommendations here still true with Hyper-V 2012?
Thanks Ced.
@Mike B: The scenario you have described is a bit like mine.
Because every iSCSI NIC is a separate connection, presenting one of them as a vSwitch to your VM guests will carry traffic ONLY through that path, which means you will not leverage the multipath feature of your storage. So it's not a matter of Hyper-V's intelligence, because it is a networking topic 🙂
What seems rational to me is to use half of the iSCSI NICs for guest use (combined with the host's traffic) and give every guest 2 iSCSI vNICs mapped to these vSwitches. If you have performance issues then you may use all of the iSCSI NICs as vSwitches.
If you have a cluster in place then you will have to have the same iSCSI vSwitch Names on all your Hyper-V Nodes in order to retain the connections when migrating the VM(s).
Advancements in Server 2012 (and R2) and the networking stack combined with proper NICs will give you very good Storage performance in the Guests.
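The layout described above can be sketched with the Hyper-V cmdlets on Server 2012 and later (the adapter, switch, and VM names are placeholders):

```powershell
# One external vSwitch per physical iSCSI NIC, not shared with the host
New-VMSwitch -Name "iSCSI-A" -NetAdapterName "NIC3" -AllowManagementOS $false
New-VMSwitch -Name "iSCSI-B" -NetAdapterName "NIC4" -AllowManagementOS $false

# Give the guest one vNIC per iSCSI vSwitch; MPIO inside the guest
# then balances traffic across the two paths
Add-VMNetworkAdapter -VMName "FileServer1" -SwitchName "iSCSI-A"
Add-VMNetworkAdapter -VMName "FileServer1" -SwitchName "iSCSI-B"
```

As noted, the vSwitch names must match on every cluster node so the connections survive a VM migration.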
Idea: I'm very tempted to test and use Shared Storage via SMB, it gives you some new possibilities like Converged NICs with Storage traffic instead of iSCSI that needs separate NICs and SMB Multichannel which is claimed to be hyper-fast!
@Anton Kolomyetsev: Virsto doesn't exist anymore (acquired by VMware and probably assimilated into their product line).
@soumya That's true; also, you cannot migrate a VM with a pass-through disk of any kind, so we should pay attention to planning correctly before implementing.
@NZregs Great comment!
@Josh What Ben described is using a dedicated NIC for iSCSI traffic on the management operating system (i.e. the Hyper-V host), with no other traffic passing through it. I haven't seen any mention of teaming, which, as you correctly stated, is not supported (nor endorsed) for iSCSI connections because MPIO does the job better (and this is recommended by other vendors, not only by Microsoft). A NIC shared with multiple networks (secondary IPs etc.) is not a good idea for iSCSI either.
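A sketch of enabling MPIO for iSCSI on the management OS with the Windows Server 2012 cmdlets (a reboot may be required after the feature install, and the load-balance policy is a per-environment choice):

```powershell
# Install the Multipath I/O feature
Install-WindowsFeature -Name Multipath-IO

# Claim all iSCSI-attached devices for MPIO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Round-robin across the available paths
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```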
I hope these have helped you.