I’ve been somewhat quiet in my blog lately, posting very sparsely in 2008. Consider this the “reset”, wherein I resolve to be more frequent in my updates.
In a blog post over two years ago, I mentioned that I’d joined a new group at Microsoft and was no longer working on Visual Studio. While I was necessarily vague at the time, I can now fill in some of the details and tell you what’s new.
From early 2007 through August of 2008, I worked on Red Dog, now known to the world as Azure. It was a complete change of course for me, and I learned an amazing amount. Up to that point, my career had focused on debugging, diagnostics, operating systems and other “low level” topics. I had no real experience with big distributed systems, data centers, “cloud computing” and the like.
On the Red Dog project, I headed up a team that owned eventing, diagnostics and reporting for the Fabric Controller. We literally started at square zero, and tried to build a platform (Azure) that finally had tooling and diagnostics baked in from the beginning, rather than bolted on as an afterthought. Suffice it to say, the tools and approaches for distributed systems are quite different from the low level Sysinternals tools.
I left Red Dog in August 2008 to join the Hyper-V team. While Red Dog uses a hypervisor, it’s not the same code base that Hyper-V has today, although the two share a lineage. While on Red Dog, I had only minimal exposure to the hypervisor side of the project, so coming to the Hyper-V team was a big jump back to my “low level” roots.
On the Hyper-V team, I lead a group of developers focused on the performance and scaling aspects of the Hyper-V hypervisor component. It’s fun to be down in the bowels of operating systems and advanced CPU features. We tackle big issues like scaling Hyper-V to 32 processor cores and beyond. We deal with issues that most folks aren’t even aware of, such as NUMA (Non-Uniform Memory Access), which makes a big difference in how you set up VMs and how you schedule processor cores to run in those VMs. Another area my team owns is address space management. Imagine the complexity of any given OS’s page table management. Now consider that the hypervisor has to multiplex every VM’s page table view into the actual page tables used by the hardware. And it has to be really fast about it, with minimal lock contention, because big server machines have lots of cores. Fun stuff!
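To make that page table multiplexing a bit more concrete, here’s a toy sketch of the two-stage lookup involved: a guest virtual address first translates through the VM’s own page tables to a guest physical address, and the hypervisor then maps that to a real host physical address. All the names, the flat dictionaries, and the specific mappings below are invented for illustration; real hardware uses multi-level tables and (on newer CPUs) hardware assists for the second stage.

```python
# Illustrative two-stage address translation (hypothetical toy model, not
# Hyper-V's actual data structures).

PAGE_SIZE = 4096

# Stage 1: the guest's own page table, mapping guest virtual page numbers
# to guest physical page numbers (the memory layout the VM believes it has).
guest_page_table = {0: 2, 1: 0, 2: 3}

# Stage 2: the hypervisor's table, mapping guest physical page numbers to
# host physical page numbers -- this is the view the hypervisor multiplexes
# into the page tables the hardware actually walks.
second_stage_table = {0: 7, 2: 5, 3: 9}

def translate(gva: int) -> int:
    """Translate a guest virtual address to a host physical address."""
    gvpn, offset = divmod(gva, PAGE_SIZE)
    gppn = guest_page_table[gvpn]    # stage 1: guest virtual -> guest physical
    hppn = second_stage_table[gppn]  # stage 2: guest physical -> host physical
    return hppn * PAGE_SIZE + offset

# Guest virtual address 0x1234 is on guest virtual page 1 (offset 0x234);
# page 1 maps to guest physical page 0, which maps to host physical page 7.
print(hex(translate(0x1234)))  # -> 0x7234
```

The hypervisor’s job is to keep something equivalent to the composition of these two tables correct and fast for every VM at once, which is where the locking and scaling challenges come from.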
So that’s what I’ve been up to. I expect future posts will have meatier technical content, but I felt the need to set the starting context appropriately.