My History of Visual Studio (Part 6)

[All the other Parts: History of Visual Studio]

The years 1998 to 2002 were very busy ones in the Developer Division.  I’ve previously written about “Dolphin” and I tried to give a sense of exactly how much changed during the Dolphin release and the sheer volume of work it required, but I think this period dwarfs that one in a number of ways.

Visual C++ 2.0 “Dolphin” was, by definition, “just” a C++ product. Putting aside the fact that it was designed to allow extensions for multiple languages, the scope of the system was limited to only one part of the Developer Division.  In contrast, Visual Studio .NET, which spans the four years we’re talking about, and maybe a little more if you count the foundational parts that were already in VS6, was an effort that required the engagement of the entire division.  Arguably it was the first time we ever attempted such a thing in the developer space.

Just thinking about it from the perspective of the number of people involved is telling – VC2 was the work of 100ish people.  VS.NET required more like 20 times that, for more than twice as long.

What kinds of things happened?  Well for starters the entire managed stack of what we’d call the .NET Framework had to be invented.  Not completely from scratch but pretty close.  That’s not just the runtime and the framework but also:

  • all the supporting file structures for IL and executable files, and inspectors for the same
  • a world class garbage collector, in two flavors
  • several variations of JIT compilers, as well as precompiled (ngen)
  • several new major language compilers (C#, VB.Net, and managed C++)
  • a complete “partial trust” model allowing for hosts that want a sandbox solution
  • debugging and profiling APIs for these components used heterogeneously – bringing us back to the world of soft-mode debugging the runtime
  • rich interop choices for both direct calls and COM, into and out of the framework, as well as assorted serialization and object remoting strategies
  • the entire ASP.NET stack for IIS
  • new application models:  Winforms and Webforms for this environment including a flagship design experience for both of these
  • solutions that contain projects of all these types in arbitrary combinations
  • an IDE that would handle all of these new experiences in addition to allowing unification of every major feature of all the other IDEs put together
  • an IDE with an extension model that would allow 3rd parties to do their own languages/environments, just like ours
  • lots more that I don’t even have room for

It’s quite an eye-opening experience to consider that all of these things were required to succeed and you can sort of understand why it took so long – some of the things on that list really can’t be started until others of them are substantially done, but of course the more complex later projects are likely to highlight problems in the earlier stages.  With so many people involved just the communication overhead could be daunting, and the tasks above are hardly easy in the first place.

Well I’d like this to not read quite so much like a marketing bullet list, so allow me to recall a few stories about all this from my own perspective – remember I was still in MSN here.

When I first heard about the next version of COM+, which became the .NET Framework, it was in the context of a pitch from the developer division to my MSN group asking that maybe we should consider moving some properties to ASP.NET to give feedback.  There was a lot of information to disseminate and many things were still preliminary but I remember that there was one thing that I fundamentally did not “get” about the whole thing.  It was because they were still calling it COM+ at that time and I had assumed that they were trying to come up with a new COM+ framework that was backwards compatible with the old COM+ stuff like what we used in the DTC.  When they told me that they were trying to do a compacting memory scheme in that world I thought they were completely nuts, at minimum you’d have to have proxy objects for everything so that you could move the real objects without anybody knowing.  We were maybe 45 minutes into the meeting before I realized the magnitude of what they were proposing – they wanted an entirely new object model with an entirely different memory management strategy in all new languages.  Wow.  Just, wow.
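My objection about moving objects can be made concrete. A compacting collector has to keep every reference valid while it relocates storage, and one classic answer is exactly the indirection I was imagining: clients hold stable handles while the collector is free to shuffle the underlying slots. Here is a toy sketch in Python – purely illustrative, and nothing like how the CLR’s garbage collector is actually implemented:

```python
# Toy illustration: a compacting "heap" where callers hold stable
# handles while the collector moves the underlying slots.
# Hypothetical sketch only; the real CLR works very differently.

class HandleHeap:
    def __init__(self):
        self._slots = []        # the "heap": payloads, or None for dead slots
        self._handles = {}      # handle -> current slot index
        self._next_handle = 0

    def alloc(self, payload):
        handle = self._next_handle
        self._next_handle += 1
        self._handles[handle] = len(self._slots)
        self._slots.append(payload)
        return handle

    def free(self, handle):
        self._slots[self._handles.pop(handle)] = None   # leave a hole

    def read(self, handle):
        return self._slots[self._handles[handle]]

    def compact(self):
        # Slide live slots down; only the handle table needs fixing up,
        # so callers' handles remain valid even though objects moved.
        live = sorted(self._handles.items(), key=lambda kv: kv[1])
        self._slots = [self._slots[idx] for _, idx in live]
        for new_idx, (handle, _) in enumerate(live):
            self._handles[handle] = new_idx

heap = HandleHeap()
a = heap.alloc("ads")
b = heap.alloc("mail")
c = heap.alloc("chat")
heap.free(b)          # create a hole in the middle
heap.compact()        # objects move...
print(heap.read(a), heap.read(c))   # ...but the handles still resolve
```

The point of the sketch is the cost I was worried about in that meeting: every access pays for a level of indirection, which is why doing this transparently under an existing COM-style object model seemed nuts.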

The next demo hit close to home.  It was a managed client for a web service.  They just pointed the tool at a web service and instantly had Intellisense over it using VB.NET.  It had some rough spots but definitely another wow.  The next was a quick demo where they imported a COM component, the kind we used on our web pages for ad delivery and so forth, and then started calling that with Intellisense support sweet-as-you-please using nothing but the component’s TLB.

I was so excited I got myself an early drop of the thing and started writing benchmarks.  I wrote applications that did the kind of string manipulations that our web sites usually did and sent them feedback based on my results.  A lot of them made a difference.  I had a sabbatical coming up and I ended up spending most of my six weeks reading the base documentation for what they had built, partly to provide feedback, but even more because I thought it was going to be really important to learn it.  It would turn into my next job two years later.

What about some of those other items on the list?  There’s some pretty meaty stuff there; I’d like to pick off a few and talk about them a little bit.  And since I’m so fond of debuggers, let’s start there.

It turns out that debugging managed code can be pretty tricky.  I alluded to the fact that it’s a soft-mode debugger (like VC1). I say this because it has the key property of soft-mode debuggers which is that the debuggee isn’t really stopped when you stop it.  Both the VC1 debugger and the .NET managed debugger share this but they accomplish it totally differently and for different reasons.  In managed debugging “your” code really does stop normally, but there is a debugger helper thread in the debuggee that provides access to key structures and otherwise relays important information back to the debugger, so technically the debuggee isn’t completely stopped.  

So far so good, but if that were the extent of the situation, then this wouldn’t be a very interesting discussion – maybe you could call that “nearly hard” mode or something – the real situation is a lot more complicated. One complexity is the fact that when the user tries to stop the debuggee, because it’s, say, in an infinite loop, it’s possible that the debuggee is in the middle of some runtime call and not directly executing the code the user wrote. It could be in the middle of a garbage collection for instance. If that happens you don’t really want to stop the program right away, do you? If you did, you’d find that your universe looks wrong – some of the objects have been moved, some have not, some pointers may still need to be corrected. In short, the world is not in a good state and there are lots of these temporarily-bad states that could be visible to a debugger.  Ouch.  So this is another typical soft-mode problem: when you try to stop, you kind of have to skid, get the debuggee somewhere sensible, then stop (or pretend to stop).
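The “skid” can be modeled as a cooperative stop: the debuggee only honors a stop request at well-defined safe points, so a request that arrives while the runtime is in an unsafe region (mid-collection, say) isn’t acted on until that region is finished. A hypothetical toy model in Python – vastly simpler than the runtime’s real thread-suspension machinery:

```python
# Toy model of "stop at a safe point": a stop requested while the
# runtime is in an unsafe region (here, a fake GC) is deferred until
# the region completes. Hypothetical sketch only.
import threading

stop_requested = threading.Event()
stopped_at = []   # records where the thread actually honored the stop

def safe_point(label):
    # The debuggee polls for a pending stop only at designated points.
    if stop_requested.is_set():
        stopped_at.append(label)
        return True
    return False

def debuggee():
    for step in range(5):
        # --- unsafe region: pretend the heap is mid-compaction ---
        fake_gc_work = sum(range(1000))   # no safe points in here
        # --- unsafe region over; now it's ok to stop (end of the "skid") ---
        if safe_point(f"after-step-{step}"):
            return

t = threading.Thread(target=debuggee)
stop_requested.set()   # the "debugger" asks for a stop up front
t.start()
t.join()
print(stopped_at)      # the stop only landed once a safe point was reached
```

Even in this toy, the essential property shows through: between the user clicking “pause” and the program actually pausing, some amount of the debuggee’s code gets to run.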

If that wasn’t bad enough, the managed debugging has to work in a hybrid program that’s partly managed and partly unmanaged, maybe with some threads having different combinations.  So if you try to stop one thread that’s unmanaged you should be able to do the usual hard-mode thing, but if you then try to inspect a different thread that is managed, well, that could then cause you problems unless you’re treating each thread as it needs to be treated for the kind of code it’s running at the time it’s stopped…  Oh my…

Add to this fun the fact that you often have to actually run managed code to do normal debugger things like evaluate properties and you find that our poor debugger folks had a few things on their minds while they were integrating all of this.  It’s so easy to assume those call stacks and parameter values are easy :)

Let’s pick a couple more of the more technically interesting problems from that bulleted list I started with.  One very interesting one is the Winforms designer.  Now this particular designer is interesting (perhaps unique at the time) because it needs to provide a full-fidelity visualization of the form you are authoring, including, for instance, controls on that form you have written yourself in the very same project.  These are what I like to call the “first party” controls.  It’s easy to show that in general the only way you can provide that kind of fidelity is to actually run the code in the designer.  So now we have to take the code you are writing, compile it on the sneak, load it within the IDE itself (using the very same framework of course) and then you can see your whole form, panels and all, just as it will appear in your application.  Wrap it with handles and rulers and so forth to allow direct manipulation and you have yourself one very slick designer!  Not easy to get that right.
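In spirit (and only in spirit – the real designer compiles and loads .NET assemblies, not Python), the trick looks like this: take the user’s source, compile it inside the host process, instantiate the freshly built control, and ask it to present itself. All the names below are made up for illustration:

```python
# Toy "designer host": compile user-authored control source on the fly,
# load it into the running process, and instantiate it for display.
# Illustrative sketch only; the real Winforms designer loads compiled
# .NET assemblies inside the IDE itself.

USER_SOURCE = """
class MyPanel:
    def __init__(self):
        self.title = "Order Entry"
        self.size = (320, 200)
    def render(self):
        return f"[{self.title} {self.size[0]}x{self.size[1]}]"
"""

def host_user_control(source, class_name):
    code = compile(source, "<user-project>", "exec")   # compile "on the sneak"
    namespace = {}
    exec(code, namespace)                              # load into the host
    control = namespace[class_name]()                  # run the user's own code
    return control.render()                            # full-fidelity preview

print(host_user_control(USER_SOURCE, "MyPanel"))
```

Running the user’s code inside the host is exactly what makes the preview full-fidelity – and exactly what makes the designer hard to get right, since a bug in a first-party control now crashes inside the IDE.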

What about all that new IDE integration and extensibility?  Well to make that all happen you have to painstakingly go over all of the extensibility features that are in each of the existing shells, formalize them with clean COM contracts – usually after conducting personal interviews to find out what the existing informal contracts “really mean” – and then fit those into an all new framework of interfaces and components that is itself extensible by 3rd parties.  Naturally this is a totally thankless job and it’s as likely as not that everyone involved will say you did it wrong no matter how careful you are.  It’s kind of like tax-assessment: you know you have it perfectly fair when everyone hates it equally.  But in the end it was super-successful; there are literally dozens (if not hundreds) of language extensions available for Visual Studio – you really could do this even if you didn’t work in Redmond!

I could keep writing about how technically impressive Visual Studio .NET was, maybe I could even win a debate with the thesis “Visual Studio .NET was the most technically impressive release ever” but the fact of the matter is a lot of people didn’t like it; history isn’t all sunshine and roses after all.  I think I’d be remiss if I didn’t talk about at least some of the sore points so I’m going to hit one squarely on the head. 

A lot of VB programmers did not want VB.NET at all and liked VS.NET just as much (i.e. they didn’t like it at all).

Why?  Well the answer to that question is probably a whole book right there but let me boldly make some guesses.

First, VB.NET was not the language they wanted.  The runtime changes presented a challenge, there was Winforms to learn for instance, but I think those might have been more acceptable if the language itself had been more VB-ish.  Traditionally there had always been a compiled version of BASIC and an interpreted version at the same time.   VB.NET decidedly had that compiled-language feel and that didn’t sit well with those that wanted an interpreted feel.  I think a language that was more like “Iron Basic” (e.g. like our IronPython language but Basic instead of Python, still targeting .NET) would have been well received.  It scripts like a dream, it has direct access to .NET objects, you can run it in a little immediate window if you like, and change anything at all about your program on the fly.  I suspect we would have loved to deliver such a language but during that time we simply didn’t yet know how to do so.

Second, VS.NET was not the IDE they wanted.  They were used to something smaller, tailor-made for VB that had genetically evolved for VB users, and this wasn’t it.  In the initial version of the integrated shell, Edit and Continue wasn’t working for the .NET languages, leading to the astonishing first-time situation that the C++ system had Edit and Continue and the RAD programming languages did not!

I think the net of all this was that there were divided loyalties among that generation of VS users – I think that still persists actually.  It was impossible to not acknowledge VS.NET as a great technical achievement but it was also impossible to say that every customer was pleased with the direction.

Whatever else you say though, the 2002 offering became the new foundation for tools innovation at Microsoft and it began a new era in IDE development here.  One in which a key distinguishing factor was the presence of rich graphical designers for virtually every development task.   It was in this release that simply throwing up a text editor and calling it good stopped being enough.  Arguably, even now, Visual Studio’s bevy of designers is a key aspect of its success.

Not long after Visual Studio .NET I returned to the Developer Division; I’ll pick up the story at the “Whidbey” release in the next part.

[See The Documentary on Channel 9!]