Networking basics for lab management – Part I

How are the virtual machines in a Lab Environment connected to each other and to other machines? Can I have physical hosts that are distributed across networks? What is the significance of network location in Team Foundation Administration Console? What if I want more control over how virtual machines are networked? If I turn on the network isolation capability, what happens to the network of an environment? We will try to answer questions such as these in this series on ‘networking basics for lab management’. Some of this information is useful for lab administrators in planning the physical network topology of the lab. Knowing how environments are networked also helps development teams understand how their applications might behave when deployed into a virtual Lab Environment.

Physical networking

Before getting into networking of environments, let us spend some time on the networking of lab infrastructure machines – physical hosts, library servers, SCVMM server, TFS server, and Test and build controller machines. Needless to say, there has to be IP connectivity between all these machines. SCVMM communicates with its agents residing on physical hosts and library servers. TFS server communicates with SCVMM and with test/build controller machines.

The clients from which you run “Microsoft Test and Lab Manager” or VSTS need IP connectivity with TFS and with test/build controller machines.

While IP connectivity between the machines is the minimum requirement, there is more you need to do to get good performance. It is highly recommended that all the physical hosts and library servers have Gigabit Ethernet connectivity to each other. This means that all of them should have Gigabit Ethernet cards, and should be directly connected to a shared Gigabit networking switch. This ensures that the data transfers between them (think of the large VMs that get copied from one machine to another) happen with acceptable performance. As an example, copying a 30 GB environment from a library server to a physical host over BITS (Background Intelligent Transfer Service – the protocol used by SCVMM to copy VMs) may take about an hour on a 100 Mbps network, and around 10 minutes on a Gigabit network. To accommodate larger labs, you can use multiple switches. The idea is to keep the end-to-end bandwidth between hosts and library servers close to a Gigabit.
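As a back-of-the-envelope check of these numbers, the sketch below estimates copy time from image size and link speed. The `efficiency` factor – the fraction of line rate an actual BITS transfer achieves – is an assumption chosen for illustration, not a measured value; real transfer times also depend on disk speed and concurrent load.

```python
def transfer_time_minutes(size_gb, link_mbps, efficiency=0.65):
    """Rough estimate of VM copy time over a network link.

    size_gb    -- image size in gigabytes (decimal GB)
    link_mbps  -- nominal link speed in megabits per second
    efficiency -- assumed fraction of line rate actually achieved
    """
    bits = size_gb * 1e9 * 8                       # total bits to move
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 60.0

# A 30 GB environment over 100 Mbps vs. Gigabit Ethernet:
print(round(transfer_time_minutes(30, 100)))    # about an hour
print(round(transfer_time_minutes(30, 1000)))   # a few minutes
```

Whatever efficiency you assume, the ratio between the two cases is fixed at 10x, which is why the switch to Gigabit hardware dominates any other tuning.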

This guidance implies that all the hosts and library servers are on the same network segment. If you have a larger lab or if you need to distribute physical machines across network segments, Lab Management allows you to do that. You can still get the desired performance by having a Gigabit network within each segment and carefully grouping the hosts that are co-located into an SCVMM host group. By allocating such host groups and co-located library servers to Team Projects, you can ensure that data transfers only happen between machines that are connected on a Gigabit network.

Networking on a host

Each physical host in the lab may be connected to multiple networks. Multiple networks are typical for separating data traffic and management traffic. For each network that a host is connected to, there is a network location that identifies that network. Using SCVMM, you can see the number of networks each host is connected to and their network locations. Select the host in the SCVMM admin console and view its properties. The following figure shows a host that has four network adapters, but is connected to only one network.


The ‘location’ of that network in this example is shown in the figure above. If the network location is empty, you can check the ‘Override discovered network location’ checkbox and type in the name of a location. Before running Lab Management, you need to ensure that all hosts are connected to a common network location.

Use the TFS admin tool to configure this common location as the ‘preferred network location’ for lab.


The preferred network location identifies the network to which all virtual machines created by Lab Management should be connected. The network adapter on a host that is connected to the preferred network location is called the preferred network adapter for that host.

PS1: In VSTS Lab Management 2010 Beta1, if you change the network location after setting it once, you have to reset IIS and restart the TFS job agent service on the TFS machine. This is fixed post-Beta1.

PS2: If you change the network location after setting it once, virtual machines that are already deployed are not affected. In other words, they remain connected to the old network location. To make them connect to the new location, you have to store the environments in the library and redeploy them. Or, more simply, an administrator can use SCVMM to manually change the connectivity of each VM.

One more thing you need to ensure for every physical host is that it has an external virtual network connected to the preferred network adapter. You can check this by using the Hyper-V manager or the SCVMM admin console. The figure below shows the Hyper-V manager on a host. By clicking on the Virtual Network Manager, you will be able to see all the virtual networks that are configured on that host. Hyper-V supports three types of virtual networks – external, internal, and private. Ensure that there is one virtual network that is marked as external and is connected to the preferred network adapter.


You can easily verify that each host in a host group is properly networked by using the Team Foundation Administration Console. When you add a host group to a Team Project Collection or when you open an existing Team Project Collection that is configured for Lab Management, you can verify that all hosts in the host group satisfy the above networking requirements. Open the Project Collection level Lab Management Settings by clicking on ‘Configure Host Groups’ as shown in the figure below, and press the ‘Verify’ button.


Networking for an environment

Now that we have the hosts physically networked and lab configured in TFS, we are all set to understand how environments are networked in VSTS Lab Management. In this post, let us focus on environments that are not network isolated.

Let us say we created an environment with two virtual machines. VSTS Lab Management ensures that each of the virtual machines has one emulated network adapter that is connected to the preferred network location, as shown in the figure below. If a new network adapter has to be created and attached to a virtual machine during the creation process, Lab Management does so.


To walk through the above figure: there are two physical hosts, A and B, and a preferred network location has been configured in Team Foundation Administration Console. One Lab Environment with two virtual machines has been created. Lab Management placed the first virtual machine (VM1) on Host A, and the second virtual machine (VM2) on Host B. N2 is the preferred network adapter on Host A, since its network location matches the lab’s preferred network location. N1 is the preferred network adapter on Host B. Lab Management connects VM1 to an external virtual network of N2 on Host A. VM2 is connected to an external virtual network of N1 on Host B.
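The matching step in the walkthrough above can be sketched as follows. The host names, adapter names, and network-location strings here are hypothetical placeholders; the sketch only models how the adapter whose network location matches the lab’s preferred location is selected on each host.

```python
def preferred_adapter(host_adapters, preferred_location):
    """Return the adapter on a host whose network location matches
    the lab's preferred network location, or None if there is none."""
    for adapter_name, location in host_adapters.items():
        if location == preferred_location:
            return adapter_name
    return None

# Hypothetical hosts from the walkthrough: the preferred location
# (call it "lab-net") is behind adapter N2 on Host A and N1 on Host B.
host_a = {"N1": "mgmt-net", "N2": "lab-net"}
host_b = {"N1": "lab-net", "N2": "backup-net"}

print(preferred_adapter(host_a, "lab-net"))  # N2
print(preferred_adapter(host_b, "lab-net"))  # N1
```

This is why the 'Verify' step described earlier matters: if `preferred_adapter` would return nothing for some host – no adapter on the preferred location, or no external virtual network bound to it – VMs placed on that host could not be wired up.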

More flexible networking for an environment

What if you wanted your environment to be connected to a second network location in addition to the preferred network location? Lab Management does not currently expose this flexibility through its APIs or client. However, you can work around this by using SCVMM. This is what you need to do: when creating the VM in SCVMM, insert a network adapter into the VM and set its network location to the second network. The following figure shows how you can do that in SCVMM – while creating a new virtual machine, configure the network location of an adapter under the ‘Configure Hardware’ step.


Now import this VM into Lab Management. When an environment is created from that VM, VSTS Lab Management inserts a new network adapter into the cloned VM with its network location set to the preferred network location. As a result, the newly created VM ends up with two network adapters – one connected to the preferred network location, and one connected to the second network location.
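A small sketch of the resulting configuration under this workaround. The clone step is modeled as simply appending one adapter on the preferred location to whatever adapters were pre-configured in SCVMM before import; the location names are hypothetical.

```python
def create_environment_vm(template_adapters, preferred_location):
    """Model of cloning a stored VM into an environment: Lab Management
    adds one adapter on the preferred network location, keeping any
    adapters configured on the VM before it was imported."""
    return list(template_adapters) + [preferred_location]

# VM imported with one extra adapter pre-configured in SCVMM:
template = ["second-net"]                     # hypothetical second network
vm_adapters = create_environment_vm(template, "lab-net")
print(vm_adapters)  # ['second-net', 'lab-net'] -- two adapters
```

Note that the extra adapter travels with the VM through the library, so every environment created from that stored VM gets both connections without further manual steps.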

Questions, Comments, Feedback?

Our goal was to provide a simple experience for the testers and developers using Microsoft Test and Lab Manager, and not to burden them with the intricacies of networking their environments. Hence, we made it easy for the most common case, where all virtual machines in an environment are connected to one network. For more complex situations, SCVMM can be used to configure the networking of a VM before it is imported into Lab Management.

We would like to hear from you. Does this networking scheme satisfy your application needs? What other features would you like to see? If you have more questions or comments, please post them to this blog.

In Part 2, I will describe what the networking of a ‘network-isolated’ environment looks like.

Comments (2)

  1. BHardister says:

    Hi, I can’t wait to start using this. I’m installing the beta now.  I’m interested in the security aspects of the product. Can you expand more on what you mean by "Multiple networks are typical for separating data traffic and management traffic."   Thanks!

  2. vijaym says:

    Thanks for your interest in the product. Once you get a chance to try it out, feel free to drop us more detailed feedback.

Regarding your question on multiple networks: hosts in production data centers are often connected to two networks – one for application traffic (e.g., for transferring transaction payloads between servers) and one for management (e.g., backups, patching, etc.). This is an example of a multiple-network scenario. If you want to mimic this kind of multi-networking between the virtual machines of a lab environment, this article describes what the current possibilities and limitations are in Lab Management.

    Please let me know what security aspects you are interested in. We will either post a comment here or a more detailed post on security depending on your questions.

    – Vijay