VSTS Pioneer TFS2010 Dogfood Server – Hardware & Topology

(This blog post is part of a series of posts on the new VSTS Pioneer TFS dogfood server.)

While running the main TFS server for ~3,500 users in Developer Division, we got a pretty good idea of the hardware required to support our enormous number of files, local versions & branches.

We knew that:

  • our SQL Server load patterns were very IO intensive

  • our SQL Server load patterns were very CPU intensive

  • our disk usage would grow by about 300% per year

  • the number of operations the server performs would increase
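As a rough illustration of the growth math (assuming "grow by about 300% per year" means each year's usage is 4x the previous year's), a few lines of Python show how quickly that compounds:

```python
def project_disk_usage(start_tb: float, annual_growth_pct: float, years: int) -> list[float]:
    """Project disk usage, starting at start_tb, growing by a fixed percentage each year."""
    usage = [start_tb]
    for _ in range(years):
        usage.append(usage[-1] * (1 + annual_growth_pct / 100))
    return usage

# Growing by 300% per year means quadrupling annually:
print(project_disk_usage(1.0, 300, 3))  # [1.0, 4.0, 16.0, 64.0]
```

At that rate a deployment quadruples its storage footprint every year, which is why being able to add capacity without migrating data mattered so much to us.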


We wanted to purchase and set up hardware that was flexible enough to scale as our needs do. For this reason we settled on two identical servers that are part of the Microsoft IT recommended systems for our needs.

According to Joe Chang, who studies SQL Server performance, “The X7460 (Dunnington) is a clear winner at 4-way for the high-call volume apps.” The 6-core CPUs also offer a pretty significant performance boost over the older quad-core CPUs:

  • “On TPC-C, the 7460 six core generated a 34% edge over the quad core AMD and 56% advantage over the quad core X7350”, and

  • “On TPC-E, the six core showed a 49% edge over the older quad core”

Virtual Machines

We are running 5 virtual machines at the moment with almost identical configurations. Since we’ll be constantly upgrading things like the .NET CLR, which often don’t support build-to-build upgrades or clean uninstalls, running the application tiers as VMs makes perfect sense. It also improves our rollback story if we have a failed upgrade. (More about upgrades in a future post.)




AT01, AT02, AT03 – TFS Application Tier

  • Windows Server 2008 R2

  • 4 virtual processors

  • 2 virtual NICs

  • 1 VHD for C$

  • 1 VHD for version control cache

TFS-TFS Connector tool runs here

  • Windows Server 2008 RTM

  • 4 virtual processors

  • 1 virtual NIC

  • 1 VHD for C$

  • 1 VHD for version control temporary workspace

Microsoft Office SharePoint 2007 SP2 (Complete install) – Web Front End, Excel Services, Search Services & Shared Services Provider

  • Windows Server 2008 R2

  • 4 virtual processors

  • 1 virtual NIC

  • 1 VHD for C$

  • Databases reside on physical SQL server


Since these servers are hosted by the DevDiv IT guys, we went with their recommendation for storage. They already have a massive storage infrastructure for build & drop servers built on Xiotech technology, so it was easy for them to add additional ISE units to meet our size & performance requirements. We ended up with the following configuration:

  • 5TB of “Balanced” storage

  • 5TB of “Performance” storage

The SQL Customer Advisory Team’s Storage Top 10 Best Practices is a great resource when planning your own TFS deployment. In particular, we considered the following:

  • Isolate log from data at the physical disk level (e.g. O$ drive is on separate spindles)

  • Consider configuration of TEMPDB database (e.g. T$ drive is on separate spindles)

  • Make sure to give thought to the growth strategy up front (e.g. We can add additional ISE units at any time and grow any drive without having to migrate data)

  • Lining up the number of data files with CPUs has scalability advantages for allocation-intensive workloads. (e.g. we have 12 data files for TempDB because we have 12 CPU cores)
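On the last point: matching TempDB data files to cores just means adding data files until there is one per core. A sketch that generates the T-SQL (the drive letter, file size, and naming below are made-up placeholders, not our actual settings):

```python
def tempdb_add_file_statements(cores: int, data_path: str = "T:\\TempDB", size_mb: int = 1024) -> list[str]:
    """Generate one ALTER DATABASE ... ADD FILE statement per additional CPU core.

    TempDB ships with one data file (tempdev), so we add cores - 1 more
    files to end up with one data file per core.
    """
    stmts = []
    for i in range(2, cores + 1):  # tempdev is file 1
        stmts.append(
            "ALTER DATABASE tempdb ADD FILE "
            f"(NAME = tempdev{i}, "
            f"FILENAME = '{data_path}\\tempdev{i}.ndf', "
            f"SIZE = {size_mb}MB);"
        )
    return stmts

stmts = tempdb_add_file_statements(12)
print(len(stmts))  # 11 additional files, for 12 total including tempdev
```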

Here’s how the virtual disks (LUNs) are configured on the servers:

SQL server (physical)

  • SQL backup dump drive

  • LocalVersion filegroup

  • OLAP data

  • All database data

  • Version filegroup

  • Transaction logs

SERVER2 (Hyper-V Host)

  • Hyper-V VMs

What’s cool about the Xiotech stuff is that it’s like Lego blocks – as you need more storage or more performance, you just buy more blocks and plug them in. What’s even cooler is that you can control the SAN using web services – the build lab uses the SANMan tool they built to move virtual disks between build servers and automate the provisioning of disk space.
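We can't show SANMan itself, but the shape of controlling a SAN through web services is roughly the following. The endpoint, operation name, and parameters here are all invented for illustration; the tier names echo the "Balanced" and "Performance" pools described above:

```python
import json
from urllib import request

SAN_ENDPOINT = "http://san-controller.example.com/api"  # hypothetical endpoint

def build_provision_request(host: str, size_gb: int, tier: str) -> dict:
    """Build the payload for a (hypothetical) 'create virtual disk' operation."""
    if tier not in ("Balanced", "Performance"):
        raise ValueError(f"unknown storage tier: {tier}")
    return {"operation": "CreateVirtualDisk", "host": host, "sizeGb": size_gb, "tier": tier}

def provision_disk(payload: dict) -> None:
    """POST the request to the SAN controller (not run here; the endpoint is invented)."""
    req = request.Request(
        SAN_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)

payload = build_provision_request("sqlserver01", 500, "Performance")
```

The point is that provisioning becomes an API call instead of a trip to the datacenter, which is what makes automation like the build lab's possible.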


At just over 400 users, we probably don’t need three AT machines in an NLB configuration. However, NLB is an important new feature of TFS2010 and it’s a scenario that we need to dogfood for ourselves.

Here are some quick facts about our topology:

  • We have a couple of “friendly name” DNS records for the NLB cluster IP, Reporting Services & Analysis Services – this allows us to change the underlying infrastructure without users having to connect to a new address.

  • We’re running Windows Server 2008 R2 RTM on our VMs & soon on our physical servers

  • A single VM was created, then sysprep’d and copied. This means that we can spin up a new VM in a very short time period. (Copy VHD, Add to Hyper-V, Start, Join Domain, Apply Updates, Done.)

  • SERVER2 is running an SMTP server that relays mail on behalf of each of the other servers. At Microsoft, the corporate mail servers will only accept mail from authenticated users that have mailboxes. By running the SMTP server as an authenticated user, we can run the application tiers as Network Service. This means that when we have to change a service account password, it doesn’t cause a TFS service interruption – just a brief SMTP interruption.

  • Since all the SQL services are running on the same box, they don’t need to run as domain accounts either – no service interruption & fewer places that need a password changed when it expires.
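The application-tier side of that relay arrangement is plain unauthenticated SMTP to SERVER2; only the relay itself holds credentials for the corporate mail servers. A sketch in Python (the relay host and addresses are placeholders):

```python
import smtplib
from email.message import EmailMessage

def build_notification(sender: str, recipient: str, subject: str, body: str) -> EmailMessage:
    """Build a plain-text notification message."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_relay(msg: EmailMessage, relay_host: str = "server2.corp.example.com") -> None:
    """Hand the message to the relay; the application tier itself never authenticates."""
    with smtplib.SMTP(relay_host) as smtp:  # anonymous connection to the local relay
        smtp.send_message(msg)

msg = build_notification("tfs@example.com", "team@example.com", "Work item changed", "Details here.")
# send_via_relay(msg)  # not run here; the relay host above is a placeholder
```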


That’s it for hardware and topology; the next post is on performance and dogfood statistics.

Comments (4)
  1. Ross Johnston says:

    This is great info. Thanks Grant!

  2. Rajesh Chellamani says:

    Thanks Grant.  We are planning on migrating to TFS 2010.  Is it possible to have the baseline Hyper-V available for download?  This will help me a lot in my POC and hands-on experimentation to understand the product.

  3. Rajesh Chellamani says:

    Oops.  I meant baseline Hyper-V VHD …

  4. Srinivas Prasad says:

    Hi Grant,

    I read through your blog and it’s wonderful stuff!

    I’m a VSTS consultant and working out a hardware requirement for one of our key clients.

    The client is planning to start off with a user base of close to 8,000 users and scale up to 25,000 users in the next 2 years. I’m unable to find any data for a deployment this massive.

    Kindly assist us in achieving the same. I look forward to your help in this regard.



Comments are closed.
