My History of Visual Studio (Part 4)

[All the other Parts: History of Visual Studio]

I didn’t really intend to write one of these per day but here it is day 4 and we’re still going strong.  I’d like to take a moment to thank the many people who commented on this series either here, or on their own blogs, or elsewhere.  I’m watching :)

I started the series by saying that this is only “My History” and I’d really love to see your histories, however much you’d care to write, even if it’s just a fond memory or a memorable aggravation.   Many of us went through the same programmer-generation-defining experiences, so it’s fun to share them.  For instance, I was really surprised just how many people I knew spent far too much time exploring the Mandelbrot Set.  Why we all felt compelled to generate almost exactly the same pictures on our own computers, often with our own custom math library, is something I may never understand :)

OK, back to My History of Visual Studio.  I want to go back a little bit again, like I did last time, and cover something I skipped that I wish I hadn’t, especially because there’s a fun story that goes with this one.

As it happens, at that time, and for several years after, the big show was the Software Development Conference, usually the spring one.  One of the popular attractions was a competition between the major tools vendors, where they were all given the same challenge: produce from “scratch” (using just whatever came with their tools, including samples) some kind of modest applet that you could build in about half an hour while the audience watched.  Usually there were several “levels” and you’d get points for each level.  One year it was a bitmap viewer.

After one such conference, where we had competed (and won) with Dolphin, we got to thinking that this was actually not such an exotic situation to find yourself in – needing to cobble together some code from pieces you had lying around – and we wanted to drive code reuse as a theme in Olympus, so we decided we should have something like Code Gallery.  This could very well be the birth moment of what we’d call Code Snippets today.  We had no idea how it should be built exactly but we imagined you should be able to point it at pretty much any chunk of code, make a blob, make some parts of it be variables, and then you should be able to put it elsewhere in the same or a different project.  We gave it to The New Guy (you know who you are :)
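For flavor, here’s a tiny sketch of that “make some parts of it be variables” idea.  Everything in it – the $placeholder$ syntax, the function names, Python itself – is invented for illustration and has nothing to do with how the feature was actually built:

```python
# Hypothetical sketch of the Code Gallery idea: capture a chunk of code,
# mark some literals as variables, then re-expand it in another project.
# The $placeholder$ syntax and these function names are made up.
import re

def make_snippet(code: str, variables: dict[str, str]) -> str:
    """Generalize a literal chunk of code by replacing each chosen
    token (whole words only) with a $placeholder$."""
    for placeholder, literal in variables.items():
        code = re.sub(rf"\b{re.escape(literal)}\b", f"${placeholder}$", code)
    return code

def expand_snippet(snippet: str, values: dict[str, str]) -> str:
    """Substitute concrete values back into the placeholders."""
    return re.sub(r"\$(\w+)\$", lambda m: values[m.group(1)], snippet)

# Capture a chunk, generalizing the loop variable and the bound...
chunk = "for (int i = 0; i < 10; i++) { total += data[i]; }"
snippet = make_snippet(chunk, {"index": "i", "count": "10"})

# ...then drop it into a different project with different names.
expanded = expand_snippet(snippet, {"index": "j", "count": "n"})
```

The word-boundary match matters: replacing the bare “i” naively would also mangle “int”.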

I can’t believe how mean we were – “Here, take this vague uncooked idea that touches most of the important systems and go build it.  Let us know how you do.”

I think he did ok though.

The funny thing is, we figured that this would give us an unfair advantage over our competitors at the next SD conference but we also figured that having a long list of features we were banned from using (remember everyone is watching so it’s easily enforceable) was kind of a badge of honor and it really was a cool feature.

Having re-read the above I feel it’s important to remind my readers that a preposition is something you should never end a sentence with.

We’re going to move forward in time now, but we’ve come to a part of the story where I didn’t have first-hand knowledge.  In 1995 I changed jobs, and for 7 years I worked in MSN land.  I was busy developing technology there, but I still had a lot of friends telling me what was going on, and some of it was pretty amazing.  I think I can best describe this period with an allusion.  This is historical fiction but it kind of hits the key points in a fun way.

<fade to black>

A long time ago on a campus far, far away…

<cue music>


Episode 5.0:
The Great Landgrab

<scrolling text>

It has been many years since the Visual Tools first came on the scene and now the many factions are vying to be “on top” as customers clamor for a Grand Unification.  All the contenders are widely regarded as Evil by the other contenders and the whole thing looks to an outsider like it belongs in a Dilbert cartoon.

The VB Consortium claims that they have a crucial edge because their shell provides the most immediate feedback, and their excellent design-time experience is a must for any successful Unification.  But the Compiled Language Bloc, led by the notorious C++ team, claims they already have a shell that is hosting multiple languages and that the VB shell is ill-equipped to handle things like “The Mixed Mode Nightmare.”

Meanwhile, unbeknownst to these factions, another, newer, shell has been born.  This Child Shell was the host of a little known system called “Blackbird”, but the Internet was coming, and all those web-developers would need tooling too.  And, perhaps, Interdev could restore balance…

</scrolling text>

OK I’ve had my fun.  But there are some key truths in there.  There were various competitive shells and we needed to choose one.  But of course choosing one was really quite impossible because what we really needed was to cherry pick features from each of them and then consolidate the result.  A critical choice here is “which shell do you start with?” and that is largely what the fuss was all about.

Visual Interdev was literally in its infancy when this was going on and I think it’s fair to say that it, ultimately, became the base shell that integrated the others.  There are several reasons for this, mostly technical (there may be non-technical ones too but I don’t know them so they’ll have to be part of someone else’s history).

One reason was that there was a hope of having a public extensibility story that made sense with that codebase.  The VB shell had very limited extensibility, and the VC++ shell, while it had tons of it (proven by the number of languages it was able to host), had a serious flaw:  when we tried to let 3rd parties extend it they completely failed, because the contract between the shell and the packages was both tight and complicated.  There were many interesting design tensions there; let me give you just a taste of one.

VC++ did not use COM for its extensions; in fact it would have been anathema for us to do so.  Our philosophy was that we needed to be able to debug COM when it wasn’t working, not rely on it to boot.  But here’s the flip side – if you don’t use a registry-based activation model like COM has, then you suffer two key problems:  #1 your contracts don’t have clean separation, so 3rd parties will go crazy trying to use them, and #2 you’ll find you have to load all of your extensions just to know what they do – something you ultimately can’t afford.

In the end the VC++ shell failed the test because, while it was designed to be extensible from the start, it was designed to be extensible by us, and not by 3rd parties.  It was designed for a smallish number of extension packages which would always be loaded at startup, and that was something we couldn’t afford – Visual Studio is actually in the business of not loading extensions while giving the illusion that they are loaded, including cute tricks like enabling/disabling menu items on behalf of extensions that are not loaded.  I haven’t name-dropped up to this point but I can’t not mention the guy who is most responsible for solving these problems, and I’m about to link you to his video, so you can give Doug Hodges a word of thanks if you ever meet him (which I highly recommend).
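The lazy-loading trick described above can be sketched roughly like this – all names here (Shell, the registry layout, FormatterPackage) are invented, and this toy is simplified far beyond the real package model:

```python
# A hypothetical sketch of demand-loaded extensions: the shell reads each
# package's declared commands from a registry (never touching package code),
# draws menu items on the package's behalf, and only instantiates the
# package the first time one of its commands actually runs.
class Shell:
    def __init__(self, registry):
        self.registry = registry   # package name -> {"commands": [...], "factory": callable}
        self.loaded = {}           # packages actually instantiated so far

    def menu_items(self):
        # Menus come straight from registered metadata, so no package
        # code has to load just to draw (or enable/disable) the items.
        return [cmd for entry in self.registry.values() for cmd in entry["commands"]]

    def invoke(self, command):
        for name, entry in self.registry.items():
            if command in entry["commands"]:
                if name not in self.loaded:            # demand-load on first use
                    self.loaded[name] = entry["factory"]()
                return self.loaded[name].execute(command)
        raise KeyError(command)

class FormatterPackage:
    def execute(self, command):
        return f"{command}: done"

shell = Shell({"formatter": {"commands": ["Format Document"],
                             "factory": FormatterPackage}})
menu = shell.menu_items()   # menu exists before any package has been loaded
result = shell.invoke("Format Document")   # first use loads the package
```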

I think it’s fair to say, though, that at the time not everyone agreed this was the right choice – I would not have agreed myself.  But in retrospect, I would have been wrong.  And I was working on Sidewalk anyway, so nobody asked me :)

Now, getting everyone onto a new shell doesn’t happen overnight and as I mentioned, the ultimate winner of this contest was still just nascent at this time. So it would be a while before we’d see the changes that were in the works.  The Visual Studio brand was alive at this point but it wasn’t much more than some lashed together boxes.

There were other pretty cool things going on at this time as well.

I have to mention Visual J++ at this point because I think it was just too darn important to the future of Visual Studio and the .NET Framework to omit.  Remember we had been thinking about simpler language models, and simpler programming models, for years, at virtually every offsite.  But how do we move our users to it?  Why would they want to change languages? What might motivate that?  How do you sell the benefits? And then along came Java.

I’m going to keep the discussion focused on interesting technical things that affected the history of VS; the rest of the checkered history of our relationship with Java I have no first-hand, or even second-hand, knowledge of anyway.

Java is a managed language, and that alone drives a bunch of choices; additionally, while it doesn’t strictly require a JIT compiler, JIT compilation is pretty much the best (and only practical) way to get world-class performance out of a JVM.  You might think, so what? Whatever. Managed schmanaged, it compiles to native and runs, what’s the big deal?

Well, it’s true that running managed code does look a lot like running regular native code, but the similarity is only superficial from the tool’s perspective.  Let me talk about it from the perspective I know best – debugging – to highlight those differences.  Keep in mind that the reason I’m mentioning this now is that I think Visual J++ was the system that forced us to think about these things and get solutions in place as early as we did.  .NET would have many of the same problems, and so the experience would be helpful, even if the code and people were all different.

OK, one problem with managed memory is that there’s a garbage collector, and good garbage collectors compact, and that means objects move around.  OK, so what, so objects move around; the contents of some pointer variables might change or something, right?  That happens all the time while debugging anyway – it’s called “running” – how is this a new thing?

Well, for instance, if you’re debugging and you tell the debugger “stop anytime the contents of this global variable changes” then what it likes to do is say “OK, the address of that global variable is 0x12345678, I’ll just use a hardware register to tell the CPU to interrupt me if the contents of that memory are altered, and I’m done.”  Oops, that doesn’t work so well anymore: the contents might be altered by the garbage collector at pretty much any time without actually logically changing (the global just points to the same object, which is now somewhere else), and worse still, the global itself might move, so that lovely fixed address you used to have, 0x12345678, could itself change at any time.  Joy.  We’ll have to handle all of that.
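Here’s a toy illustration of that failure mode – this is invented, not how any real debugger or collector works, but it shows why pinning a raw address is hopeless when the heap compacts:

```python
# Toy compacting heap: objects are addressed slots, a compaction slides
# every live object to a fresh address, and only stable handles survive.
class Heap:
    def __init__(self):
        self.slots = {}          # address -> value
        self.handles = {}        # stable handle -> current address
        self.next_addr = 0x1000

    def alloc(self, handle, value):
        self.slots[self.next_addr] = value
        self.handles[handle] = self.next_addr
        self.next_addr += 16
        return self.handles[handle]

    def compact(self):
        # Slide every live object to a new address, like a compacting GC.
        new_slots, addr = {}, 0x2000
        for handle, old in self.handles.items():
            new_slots[addr] = self.slots[old]
            self.handles[handle] = addr
            addr += 16
        self.slots = new_slots

    def read(self, handle):
        return self.slots[self.handles[handle]]

heap = Heap()
watched_addr = heap.alloc("g_total", 42)   # a naive debugger pins this address
heap.compact()

address_still_valid = watched_addr in heap.slots   # the object moved away
value_unchanged = heap.read("g_total") == 42       # nothing logically changed
```

The address-based watch is now stale even though, logically, nothing happened; a debugger has to track the object through something handle-like instead.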

OK, so, data moves, great.  What about JIT compilation, does that do anything important?  It sure does. In regular code, or even code that’s been compiled in the normal way to p-code (we supported that, remember?), the address of code doesn’t change from run to run.  It might be relocated as a chunk, but within any given DLL the offsets remain the same.  That means one set of symbolic information is all you need to find any given method, or to determine which method contains the current instruction pointer.  When you JIT, that doesn’t work;  instead you have to remember where you put every given method from run to run, and you can’t assume the method exists as native code just because you loaded the IL for it – it might not be jitted yet – so that means that if a user asks for a breakpoint you have to wait for it to be jitted and then insert it…
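That “wait for it to be jitted” dance is often called a pending breakpoint.  A minimal sketch of the bookkeeping, with invented names and none of the real machinery:

```python
# Hypothetical pending-breakpoint bookkeeping: if a method has no native
# code yet there is nothing to patch, so the request is recorded and bound
# later, when the runtime reports that the JIT has produced code for it.
class Debugger:
    def __init__(self):
        self.pending = set()     # methods requested before they were jitted
        self.bound = {}          # method -> native address, once bound

    def set_breakpoint(self, method, jitted_code):
        if method in jitted_code:
            self.bound[method] = jitted_code[method]   # bind immediately
        else:
            self.pending.add(method)                   # defer until JIT

    def on_jit_complete(self, method, native_addr):
        # The runtime notifies the debugger each time a method is jitted.
        if method in self.pending:
            self.pending.remove(method)
            self.bound[method] = native_addr

dbg = Debugger()
dbg.set_breakpoint("Main", jitted_code={})   # not jitted yet: goes pending
dbg.on_jit_complete("Main", 0x5000)          # JIT runs; breakpoint binds now
```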

All this jitting and data motion makes an already complicated task, like coming up with a call stack that shows method names and parameter values, a lot more challenging.  But it had to be done and those challenges were met.  I think VJ++ drove a lot of that.

I think the need to interrogate the runtime state to find object information, and to get notification of things like garbage collection, is most likely the reason why they chose a soft-mode debugging strategy, and why ultimately the .NET Framework got the same strategy.  Personally, I would have gone hard-mode all the way, but it wasn’t my call, and armchair quarterbacks don’t have to deal with the Blitz so I should probably just shut up about that.  Likewise, the JVM provided great experience for building a garbage collector.

I think I’ll change gears a bit for the end of this posting.  While all this was going on – managed future, shell consolidation, and so on – the individual language teams continued to produce very cool things.  We started using Visual Basic’s new ability to create compiled executables to make web pages written in interwoven BASIC, that looked exactly like .asp files, but that we preprocessed into VB projects and compiled into ISAPI DLLs.  And you could debug them with a native debugger and get seamless cross-language debugging!  Very cool.
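The preprocessing step is the fun part of that scheme: interleaved markup and code get flattened into straight code, where the literal HTML becomes write calls.  The real tool emitted VB for compilation into an ISAPI DLL; this toy (invented delimiters aside, the <% %> style matches .asp) just shows the shape of the transformation:

```python
# Toy ASP-style preprocessor: literal markup becomes Response.Write calls,
# and <% ... %> code blocks pass through as code. Output here is a list of
# generated statements; the real tool generated a whole VB project.
import re

def preprocess(page: str) -> list[str]:
    lines, pos = [], 0
    for m in re.finditer(r"<%(.*?)%>", page, re.S):
        literal = page[pos:m.start()]
        if literal:
            lines.append(f'Response.Write("{literal}")')   # markup -> write call
        lines.append(m.group(1).strip())                   # code passes through
        pos = m.end()
    if page[pos:]:
        lines.append(f'Response.Write("{page[pos:]}")')    # trailing markup
    return lines

page = '<h1>Total</h1><% total = price * qty %><p>Done</p>'
code = preprocess(page)
```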

And, in the C++ space, another miracle was occurring.  I have almost no idea how they pulled this off, even though I did get a few peeks at the secret sauce.  Those guys took incremental linking and compiling one step further and delivered general-purpose Edit and Continue for C++, maybe for the first time ever in any production system.  My 20-minute test failures on web servers running C++ suddenly had dreamy debugging.

I was lucky enough to be one of the few who could call the guy who wrote the code and say to him: “DO YOU HAVE ANY IDEA HOW MUCH TIME YOU JUST SAVED ME!  I LOVE YOU!!”

It was that good.  Like “Thunder.”

[See The Documentary on Channel 9!]

Comments (6)

  1. Stefan Olson says:

    Edit and continue was another game changer.  It can take a long time to get an application into the position where you want to test/fix something and being able to make a change and compile and check it without restarting really was completely mind blowing. Whoever implemented that deserves a medal!

    It is a bit frustrating that we have to go back to living without edit and continue in C#.  It’s not available in Silverlight, for changes to xaml files in WPF or if you’re running 64-bit and have forgotten to make all projects x32.

    However, I can understand how technically difficult it is to implement and why it’s not right at the top of the priority list.


  2. Robert says:

    Very interesting, thanks.

    I’m surprised that you’d find the time for this, what with all that VS2010 optimization still to do? 😉

  3. AG says:

    I’m just going to say that around ’92 I was working for a British company on their development team for something very similar to Visual Basic 3, but based on OS/2 Warp 3.

    We had the IDE very similar conceptually and visually to VB, we had controls very similar to VB’s OCXs, and we had the development cost … ohh yeah!

    I remember it very well, it was around 15,000 pounds a seat.

    Just by chance (ahem… ahem…) I passed by a software shop, and when I learned that VB3 cost 99 pounds flat, I couldn’t resist and bought it for my own secret use and delight.

    Since then I couldn’t stop using that beloved IDE.  Can this be defined as "love at first sight"?

  4. Keith Patrick says:

    One note about Edit and Continue (and someone with a better memory than I can correct me if I’m wrong) – IBM’s VisualAge had something along those lines earlier. I cut my teeth on VA for Java, but VisualAge for Smalltalk was the same IDE (that was cool in its own right, if not a resource behemoth) that allowed for dynamic edit/stack frame reload per-method.

    Funny thing is, outside a brief moment around 2002 when I recall E&C working with the code I wrote, I don’t ever use that feature. It broke in ASP.Net at some point (or never worked), and then only got implemented on one of the languages I use, so I just debug around it.

  5. ricom says:

    Virtually every interpreted version of BASIC MS ever shipped after the 80s allowed edit and continue.  Other systems had it too.  It was basically unheard of in a native-code system though — it’s a lot easier to do it in an interpreted environment.
