My History of Visual Studio (Part 8)

[All the other Parts: History of Visual Studio]

I can’t really talk about what was going on in the IDE without covering what was happening in the runtime because their fates are so intertwined, so even though it’s off topic a little bit, allow me to cover some details from Framework 2.0.

“Longhorn”, which became Windows Vista, was probably the single greatest influence on the Developer Division during the years when .NET Framework 2.0 and Visual Studio 2005 (collectively “Whidbey”) were being developed.  I think I could write several books on those years as a study in being too successful for your own good. 

As I wrote earlier there was a certain mania in managed code adoption during that time and, though ultimately some of those efforts had to be scrapped, many were a positive influence on the tool chain. 

Anecdote: It’s not really important to the story but just for fun I can’t resist telling you about my first day on the job when I joined the CLR performance team.  I had spent the previous seven years working in the MSN area, with a great emphasis on server workloads, data architectures, and so forth.  So I was a bit surprised when, on day one, I was told I was going to be working on our client performance problems – that was a Whisky Tango Foxtrot moment.  It pays to be flexible, I guess. :)

Client workloads were a lot harder on the .NET Framework at that time.  We did have an ngen story but it needed a lot of work.  Putting as much code as possible into readily shared DLLs is fundamentally necessary to getting reasonable memory usage on the client – much less of an issue on dedicated servers for instance.  Contrariwise, code sharing is even more essential on a Terminal Server with potentially hundreds of users running client applications.

Sharing is also vitally important to flagship applications, like Visual Studio, that are trying to get their code loaded as quickly as possible using technologies like Superfetch.  It’s a pretty simple chain of events: jitted code can’t be shared; unshared code uses more memory; use too much memory and you’re dead.  Sharing is good.  We had to do more of it.  It was doubly important because major new Framework elements were being developed while this was all going on: things like Windows Presentation Foundation (WPF) and Windows Communication Foundation (WCF), to name a few.

The Base Class Library (BCL) was gaining support for Generics (starring my favorite Nullable<T>) and XML use was exploding in all parts of the stack.  I used to joke that if angle brackets <> had never been invented I would have no performance issues to work on.  If only it were true.
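Just to make that Nullable&lt;T&gt; enthusiasm concrete for readers who weren’t following .NET 2.0 at the time, here is a minimal C# sketch (variable names are mine, purely illustrative) of the semantics that shipped: a Nullable&lt;T&gt; with no value boxes to a null reference, while one that has a value boxes as the underlying T itself rather than as a boxed Nullable&lt;int&gt;.

```csharp
using System;

class NullableDemo
{
    static void Main()
    {
        int? none = null;   // Nullable<int> holding no value
        int? five = 5;      // Nullable<int> holding 5

        // Boxing a Nullable<T> with no value yields a null reference...
        object boxedNone = none;
        Console.WriteLine(boxedNone == null);   // True

        // ...while boxing one with a value boxes the underlying T,
        // so the result is a plain boxed int, not a boxed Nullable<int>.
        object boxedFive = five;
        Console.WriteLine(boxedFive is int);    // True
        Console.WriteLine((int)boxedFive);      // 5
    }
}
```

That unified boxing behavior is part of what made Nullable&lt;T&gt; feel like a first-class value type rather than a library curiosity.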

In the universe of technologies requiring tool support we really should add a few more.  There were yet more improvements to ASP.NET, there were compelling 64-bit architectures (x64 and ia64), and, probably most difficult of all the considerations, there was SQL Server 2005 “Yukon”, which introduced SQL CLR, allowing users to write stored procedures in managed code.

Again, I could write whole books on what was going on in the runtime components, but I’m trying to stay focused on Visual Studio for this series, so I think that’s probably enough landscape-setting.  As it is, with all those changes it’s easy to see why Whidbey took nearly three years.

For starters, there were whole new project and deployment types needed to support the new ASP.NET with its own new development web server; and likewise for the SQL scenarios.  And while I’m on the topic of project and deployment, this is also where “ClickOnce” makes its debut.

ClickOnce had to permeate our project systems, but its introduction also highlighted an inherent weakness: despite the extensibility offered in the project space, there was (and indeed is) no one central place where ClickOnce deployment could be added so that all project types would benefit.  This is a classic example of why ongoing refactoring/remodeling of architecture is so important.

Personal note: my first direct interaction with the Visual Studio team during this time period was working on the performance of the Create New Project dialog – something which can never quite go fast enough to make us happy.

Ok so the project system needed work.  What else?  Well, anytime you add pretty much anything you can expect the debugger to be affected, and this was no exception.  Generics – check, needs debugging support; SQL CLR – check, needs debugging support; it was a long list.  But there was even more than just that going on: I have written earlier about the challenges of what we call “interop debugging” (where you debug both managed and unmanaged code at the same time) – in this release both the debugger and runtime teams put considerable effort into making interop debugging more reliable.  That meant taking a very hard look at all the communications logic, the locking model, and the safe-stopping points vs. the “skid” points, as I called them.  It was a huge endeavor, but as a result interop debugging got noticeably better in VS2005.

All this hard work actually had an unexpected bonus.  We also introduced a managed-code profiler in this timeframe, and, as it turns out, stopping managed code as it runs so that you can walk the stack – the way profilers like to do – tends to hit all the same kinds of problems as stopping the code in a debugging context.  As a result, much of the hard work that went into getting the debugger working resulted in a more effective profiler solution.

I should come back to SQL 2005 support – it wasn’t enough to just add project and debugging features; ADO.NET 2.0 was also making its appearance, and that required suitable designers.  Visual Studio perhaps earns its name even more than its antecedent Visual Basic because it’s Visual in many different ways – visual forms alone no longer suffice.  A new SQL engine meant data designers were a must.

As if all of that wasn’t enough, there were two brand new pieces of technology that appeared during this time as well.  These were part of our Office programming story: VSTA and VSTO.  In a few words, VSTA is tooling for ISVs to provide scripting features to their users, allowing those users to write managed-language programs that automate the product – you get a complete authoring experience with it.  The primary user is clearly Office, but in principle anyone can participate.  VSTO, in contrast, allows you to write Office extensions – intended to be written as native code against a COM API – using managed languages instead.  That poor sentence hardly does justice to the difficulty of achieving this result, and I’m glossing over the fact that there were other VSTO solutions as far back as 2003, but this is where I remember the spotlight first shining on that technology with any kind of luminosity.

But, even with all this good work, there was one thing that stands out in my memory as more important than all the others.

Edit and Continue was back – and there was great rejoicing.

[See The Documentary on Channel 9!]