Visual Studio 2010 Performance Part 2: Text Editor


Part 1 of this series talked about the startup problems we face.  In Part 2, I want to talk about the editor.


Many people have reported that editing with the new editor is slower. I’ve experienced the same thing myself, so I certainly do not want to accuse anyone of hallucinating, but I thought it might be interesting to understand why that might happen, especially since this new editor is supposed to be better than the old one.


Why is this editor better and how is it better?


I’m glad you asked 🙂


It’s actually better in pretty much every editing way. The data structures associated with the new editor do not require complex locking algorithms for access, and it delivers logical “micro-snapshots” of the editor buffer that never change – each is effectively a copy of the buffer at an instant in time. This is a fantastic situation if you are, for instance, a background compiler. Previously, to get a consistent snapshot, the entire buffer had to be copied, potentially on every keystroke!
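

To make the snapshot idea concrete, here is a minimal sketch of the pattern in C#. It is only an illustration of the immutable-snapshot approach (the type names and shapes are hypothetical, not the actual editor API), and a real implementation would share structure between snapshots rather than copying strings the way this toy does.

    using System;
    using System.Threading;

    // Toy illustration of the "micro-snapshot" idea; NOT the real editor API.
    // The buffer hands out immutable snapshots, so a background consumer (a
    // compiler, say) reads a consistent view while the UI thread keeps editing.
    sealed class TextSnapshot
    {
        private readonly string _text;
        private readonly int _version;

        public TextSnapshot(string text, int version) { _text = text; _version = version; }

        public int Version { get { return _version; } }
        public int Length  { get { return _text.Length; } }
        public string GetText() { return _text; }
    }

    sealed class TextBuffer
    {
        // A real implementation would share structure between snapshots (think
        // piece table) instead of copying the whole string; this only shows the contract.
        private TextSnapshot _current = new TextSnapshot(string.Empty, 0);

        public TextSnapshot CurrentSnapshot { get { return _current; } }

        public void Insert(int position, string text)
        {
            TextSnapshot old = _current;
            _current = new TextSnapshot(old.GetText().Insert(position, text), old.Version + 1);
        }
    }

    static class SnapshotDemo
    {
        static void Main()
        {
            TextBuffer buffer = new TextBuffer();
            buffer.Insert(0, "class C { }");

            TextSnapshot snapshot = buffer.CurrentSnapshot;  // hand this to a background worker
            ThreadPool.QueueUserWorkItem(delegate { Console.WriteLine(snapshot.GetText()); });

            buffer.Insert(0, "// typing continues ");        // does not disturb the snapshot
            Console.ReadLine();
        }
    }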


And if that’s not enough, there’s a lot more. Another big tax on the editor is region management. Regions are used to track everything from bookmarks to breakpoints – and more. Those squiggles you see? Those are all regions, and there can be thousands of them. That is far beyond what the original editor design anticipated, and those algorithms were starting to degenerate into quadratic behavior. You could see this yourself in the old editor: simply scrolling around near the end of a large file tended to get slower.


Those are just two areas, but generally where the old editor had quadratic performance we were able to improve things to linear, and in some places we were able to do even less work by processing only the visible segments of the text buffer. All goodness.
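

As a rough sketch of what “processing only visible segments” means (illustrative types only, not the editor’s actual region manager), the idea is to pay the per-region cost only for regions that intersect the viewport:

    using System;
    using System.Collections.Generic;

    // Hypothetical sketch: do the expensive per-region work (layout, drawing
    // squiggles, and so on) only for regions that intersect the viewport instead
    // of for every region in the file.
    struct Region
    {
        public int Start;
        public int Length;
        public int End { get { return Start + Length; } }
    }

    static class RegionDemo
    {
        // A real implementation would keep regions in an interval tree so this
        // lookup itself is O(log n + k); the important part is that the work in
        // the loop below happens only for the k regions you can actually see.
        static IEnumerable<Region> VisibleRegions(IEnumerable<Region> regions, int viewStart, int viewEnd)
        {
            foreach (Region r in regions)
                if (r.End >= viewStart && r.Start <= viewEnd)
                    yield return r;
        }

        static void Main()
        {
            List<Region> regions = new List<Region>();
            for (int i = 0; i < 10000; i++)
                regions.Add(new Region { Start = i * 10, Length = 5 });

            // Viewport showing offsets 50000..51000: only ~100 regions need any work.
            foreach (Region r in VisibleRegions(regions, 50000, 51000))
            {
                // draw squiggle / adornment for r ...
            }
        }
    }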


So why is the thing slower if it’s so much better?


Well, let me elaborate a little, but before I answer that directly let me tell you which things are almost certainly not the problem.


Not the problem #1: WPF


With the exception of some cases where we found that remoting WPF primitives over terminal server connections was slow, generally the WPF system is more than capable of keeping up with the editor. If you are seeing sluggish typing, WPF is almost certainly not to blame.


Not the problem #2: Editor Primitive Operations


The basic insert primitives are blazing fast, as low as a few microseconds even in very large files. If you’re seeing a problem, the chances that the text buffer is letting you down are slim to none.


OK, so if it’s not those things, what is it? There are two good bets.


Compatibility Shims


The new editor is managed, and it has its own brand-new managed interface. You can call it directly. However, there is a lot of existing code that knows how to talk to the OLD editor, and the old editor had a COM interface with its own particular API. We didn’t want to port all of the editor-related code to the new editor’s managed interfaces – for several reasons, but perhaps the most important is that there’s just so much of it. So we created a compatibility layer – shims – that allows old callers to use the new editor. That’s also important for our partners out there. That’s all well and good, but of course it means you’re forced to pay for COM interop where there was none before, and those shims have to emulate the old editor’s behavior even when that wasn’t exactly the fastest choice in the west.
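

A conceptual sketch of a shim may help. The interfaces below are made up for illustration (they are not the real VS interop interfaces); the point is that every legacy call pays for the translation, for a possible COM-interop transition, and for emulating the old editor’s semantics:

    using System;
    using System.Runtime.InteropServices;

    // Made-up interfaces that only illustrate the shim concept.
    [ComVisible(true)]
    interface IOldEditorBuffer                 // what legacy (often native) code expects to call
    {
        void ReplaceText(int line, int column, string text);
    }

    interface INewEditorBuffer                 // the new, direct managed API
    {
        int GetPositionFromLineColumn(int line, int column);
        void Insert(int position, string text);
    }

    // The shim: legacy callers keep using the old surface, and every call is
    // translated onto the new buffer, emulating the old semantics along the way.
    sealed class OldBufferShim : IOldEditorBuffer
    {
        private readonly INewEditorBuffer _buffer;

        public OldBufferShim(INewEditorBuffer buffer) { _buffer = buffer; }

        public void ReplaceText(int line, int column, string text)
        {
            // Emulating the old editor's line/column semantics adds work on every call.
            int position = _buffer.GetPositionFromLineColumn(line, column);
            _buffer.Insert(position, text);
        }
    }

    // Converted code skips the shim entirely:  newBuffer.Insert(position, text);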


So sometimes improving editor responsiveness has meant converting more code to the direct interface, thereby avoiding the shims. The trick is to convert the code that actually needs converting rather than going crazy trying to convert all of it in one go.


OK, but who are all these users? What are we talking about here?


That brings us to the second point.


Event Listeners


Another reason the editor can get bogged down is that various clients can register for notifications of interesting events. This is actually very popular (understatement), and important things like the languages’ IntelliSense services want to be notified of various goings-on in the buffer. Now it turns out that the editing primitives are so fast that the bulk of the cost of an edit is actually in these event processors – sometimes because they are using shims, sometimes because they are over-aggressive in subscribing to things, and sometimes because they have problems of their own that are only incidentally related to editing.
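

Here is a hypothetical sketch of that effect (illustrative names, not the real editor API): the insert itself is cheap, but every subscriber to the change notification runs before the keystroke is “done”, so the slowest listener sets the pace.

    using System;
    using System.Diagnostics;
    using System.Threading;

    // Toy buffer whose Changed event fans out synchronously to all listeners.
    sealed class Buffer
    {
        private string _content = string.Empty;

        public event EventHandler Changed;

        public void Insert(string text)
        {
            _content += text;                     // the actual text manipulation: microseconds
            if (Changed != null)
                Changed(this, EventArgs.Empty);   // then every listener runs, one after another
        }
    }

    static class ListenerDemo
    {
        static void Main()
        {
            Buffer buffer = new Buffer();

            // Cheap listener: fine.
            buffer.Changed += (s, e) => { /* nudge a bookmark span */ };

            // Expensive listener: does far more work than the edit warrants.
            buffer.Changed += (s, e) => Thread.Sleep(5);   // stand-in for reparsing a whole file

            Stopwatch sw = Stopwatch.StartNew();
            buffer.Insert("a");
            sw.Stop();
            Console.WriteLine("One 'keystroke' took {0} ms", sw.Elapsed.TotalMilliseconds);
        }
    }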


One way you can see this for yourself is to try your editing operations in a file named ‘foo.txt’ rather than ‘foo.cs’ or ‘foo.vb’ or whatever the case may be. This disables many of the event listeners and gives you a truer feel for what the editor itself is doing. Even that isn’t perfect, because there are still listeners for bookmarks and other things that apply even to plain text files.


What are we doing about it?


Well I think I’ve already alluded to it. Make better use of the underlying editor primitives. Move more things off the shims where the cost is high. Reduce the cost and number of listeners generally. And of course make sure that our text and region management primitives stay nice and fast.


What can you do?


If you see the editor behaving dumbly, enter bugs. They really help!

Comments (17)

  1. GrayShade says:

    The selection gradient and the menus are pretty slow. Can’t WPF be blamed for this?

  2. tobi says:

    Why do you think the gradients are slow? Maybe calculating the position for the selection boxes is slow. You cannot know.

  3. JD says:

    As Rico can appreciate, all I care about is the scenarios that are important to me.

    If, out of the box, VS2010 is slower editing C# files, it’s slower. If I am less productive, that sucks. If startup time is worse than the already slow time, I won’t be impressed.

    I also use ReSharper, so I will judge that too, but I’ll rate the core IDE based on its own merits. So far most of the blog entries about the next Visual Studio seem to be about lowering expectations about its performance. That’s too bad.  

    I hope you are using metrics that correspond to things I care about.

  4. @JD:

    I don’t think Rico’s point is to lower expectations.  I think he’s trying to point out that perceived typing performance isn’t great right now, but, most importantly, *it can be fixed*, given the path that he outlined (moving people off the shims and improving the time spent in responding to various editor events).  The problem isn’t that the new editor is inherently less efficient than the old, or that WPF is that big of a problem (as most people tend to assume); the problem also isn’t really a mystery – Rico, Cameron, and others have a good understanding of where the time is going, and what we have to improve.

    Also, I read the startup article not as an intent to lower expectations, but to explain that the time from startup until the app is ready to respond is less important than the time from startup until the project is open and you can type in the editor. It’s kind of like saying the time from turning your computer on until you get the login screen is less important than the time from turning it on until you’ve logged in and can actually use the computer.

  5. GrayShade says:

    @tobi:

    Virtually any text editor needs the same box calculations, so that shouldn’t be a reason for poor performance. Also, the selection problem is seen mostly on computers with poor graphics cards.

    I’d also ask about the zooming in the new Parallel Stacks window; it’s a pain to watch.

  6. ricom says:

    Ultimately, if the editor is slow, it won’t matter why.  It’s just slow.  Any excuses I might offer would be worthless anyway.

    But I do think it’s helpful to understand what is going on, where the problems lie and where they don’t.  That’s the point of the posting.

    One of the interesting things about performance work is that it’s "equal opportunity".  Politics and personal taste don’t enter into it.  I just go after whatever comes to the top of the profile.

  7. Troy says:

    ricom: Good reply. I am a bit too tired to comment on the whole post, but I will do so when I feel a bit less drained.

    Needless to say, though, I am really happy that you are the person entrusted with the VS2010 performance issues.

  8. Rico just put up an interesting post on editor performance, so if you have a few minutes, go check it out

  9. Thank you for submitting this cool story – Trackback from DotNetShoutout

  10. Will says:

    "One of the interesting things about performance work is that it’s "equal opportunity".  Politics and personal taste don’t enter into it.  I just go after whatever comes to the top of the profile."

    Hmmm.  But the guy doing the profiling gets to choose the workload he’s running.

    If your workload isn’t like ours, then aren’t we just as out of luck as if you were cherry-picking the profiler’s results?

  11. Fred Morrison says:

    All the more reason to avoid VS 2010 until Service Pack 1.  Come to think of it, waiting for Service Pack 1 of *ANY* Microsoft product is a wise idea.

  12. tobi says:

    Then you’d better not use Linux, because there won’t ever be a service pack that can fix up _that_ crap.

  13. pingpong says:

    Ok, after reading part 2 of this series I’m getting the general message: "VS2010 sucks perf-wise, but there are sophisticated reasons for that".

    Why on Earth did you release this as ‘beta’?

  14. SimplyGed says:

    I like reading these blogs. It gives me an insight into how MS are developing their products, the problems they are seeing and how they are addressing them.

    I try to download BETA software when I can and use it as often as possible (at the moment I’m running VS2010 on Windows 7 RC). I have to say, I love using both – especially, in VS2010, the ability to move the tool windows across multiple monitors. But I realise one thing about both products – they are not RTM!! Now, to me, this means a few things, but mostly:

    1. They will contain bugs

    2. Performance won’t be the best

    @Fred Morrison:

    Saying you won’t use the software until SP1 leaves you at a disadvantage, IMHO. But I understand that some people don’t have the resources to run BETA and RTM software. It is time consuming as well.

    Personally, I want to get onboard as early as I can and, if possible, help to iron out the problems that I am seeing by letting MS collect data on what I am doing (and when things go wrong).

    @Will:

    No one can possibly predict every possible use of the software, so the feedback/data collection features help MS to see my issues based on my usage.

    If the tables were turned, would you want to get as much real-world data on how your products were being used and the problems that were occurring?

    @Rico:

    I’d really like to know more about debugger performance in VS2010. What steps are being taken in that direction? Does it use the MEF framework to manage functionality? Will it be easy to extend?

    I’m looking forward to the next drop of VS2010 – I just wish they were as regular as the Windows 7 drops that kept appearing (even if they were not official!)

  15. jalf says:

    @pingpong: Perhaps *because* it is beta. I’d be more worried if they release the final version with the same performance characteristics.

    The point of a beta isn’t, generally speaking, to impress.

    And drawing conclusions about having to wait for SP1 is silly as well. You don’t know 1) that the problems won’t be fixed in RTM, or 2) that they *will* be fixed in SP1. They could be fixed next week, or they could never be fixed. In neither case is "wait for SP1" going to be a sensible strategy.

  16. Steven Padfield says:

    Hey Rico,

    Regarding the shims…

    Not sure if this is related, but a year or two ago I was working on a .NET product that used legacy ADO, and I found that there was a severe performance bottleneck during instantiation of COM objects (notably ADO connections and recordsets).  Using the CLR Profiler, I tracked this down to the allocation of dozens of strings per instantiation due to license checking.

    Apparently the RuntimeLicenseContext class is allocating around 6.8 KB for each COM object instantiation, about 4.1 KB of which are strings (chiefly due to multiple calls to get the filename of the interop assembly).  Setting LicenseManager.CurrentContext to a DesigntimeLicenseContext solves this problem.  In my attached sample, it improves the instantiation time for ADODB.Connections by 37%.  (Runtime license context = 1205.100 us per instance; Designtime license context = 759.067 us per instance; release build of test was run 5 times on a fresh boot, best and worst times were removed to calculate average).

    Anyway, your post reminded me of this and I thought I would take a shot in the dark on the off chance this might be related.

    BTW, is there an explanation behind this phenomenon?  Is the runtime license context supposed to be so slow?  Is it allowed to use the designtime license context in production software?

    Thanks,

    Steven

    ———————

    // Requires a COM reference to the ADODB interop assembly (Microsoft ActiveX Data Objects).
    using System;
    using System.ComponentModel;   // LicenseManager, DesigntimeLicenseContext
    using System.Diagnostics;

    static class Program
    {
        static void Main()
        {
            const int count = 10000;

            Test(1000, true, null);                    // warm up the interop path first

            Test(count, false, "runtime");             // default RuntimeLicenseContext

            LicenseManager.CurrentContext = new DesigntimeLicenseContext();
            Test(count, false, "designtime");          // same work, design-time license context

            Console.Write("Press enter to exit…");
            Console.ReadLine();
        }

        private static void Test(int count, bool warmup, string licenseContextType)
        {
            if (warmup)
                Console.WriteLine("Warming up…");
            else
                Console.WriteLine("Running {0} iterations with {1} license context…", count, licenseContextType);

            Stopwatch timer = new Stopwatch();
            timer.Start();
            for (int i = 0; i < count; i++)            // run exactly 'count' instantiations
                new ADODB.Connection();
            timer.Stop();

            if (!warmup)
            {
                Console.WriteLine("{0:0.000000} us per call", (double)timer.ElapsedMilliseconds * 1000 / count);
                Console.WriteLine();
            }
        }
    }

  17. Ryan Molden [MSFT] says:

    >The selection gradient and the menus are pretty slow. Can’t WPF be blamed for this?

    I can’t speak for the gradients (though they do cause problems over remote desktop), but I can speak to the menus.  Menu perf is something I am interested in.  There are a few things that can account for (but not entirely excuse) the menu perf that have nothing (or little) to do with WPF:

    1:  Our menus don’t have their children generated until the very first time you drop them.  We have tons of menus (way more than you see at any given time) and generating all their children eagerly would be a HUGE perf hit.  This causes a ‘first drop’ penalty where it takes slightly longer on the first drop than all subsequent drops.

    2:  We are an extensible system.  This means that menu contributions come from far and wide.

    3:  VS is still (and likely will be for a long time) a pull system in terms of command state updating.  All existing packages are written to expect the IDE to QueryStatus them to get the current state of the commands they contribute (i.e. visible, enabled, current text, etc.).

    4:  Right before we display a menu we need to run through all items on the menu (again, there are generally MANY more than you see at any given time).  For each item we need to make sure its state is up to date.  This entails QS calls into the contributors in many cases, and that can take arbitrarily long.  I have seen QS handlers that create tool windows or try to contact remote servers.  This is a BAD idea for a QS handler; I file high-priority bugs on them every time I see it happening (the current champion, who shall remain nameless, takes over 3 seconds to respond to the first QS call on the drop of a menu!!).  A rough sketch of a well-behaved handler follows this list.

    5:  Packages may be written in managed or native code, thus any time we call into a package we may or may not be paying an interop transition cost (it is generally impossible to tell whether there will be a transition or not).

    6:  The CLR can do…interesting things during interop transitions.  For instance, during Dev10 we noticed that every time it did an interop transition, if there were any RCWs eligible for cleanup, it would eagerly clean them up.  That is fine for some systems, but in systems where you have lots of short-lived RCWs you end up spending the vast majority of your time doing this cleanup.  Changes were made in the CLR to help us avoid this penalty, so it shouldn’t be in play in Beta1 (or RTM).

    7:  WPF is not a synchronous rendering technology like Win32.  When you ‘invalidate’ a region it isn’t immediately repainted; instead the system takes note of a bounding region that is dirty, and when the render thread gets around to its next render pass it re-renders what it needs to and then swaps the current display for the newly rendered one.  For the most part this should be instantaneous, but complex visual changes can cause it to take slightly longer than we would like.
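
    Here is the rough sketch promised in point 4, assuming the managed OleMenuCommand/BeforeQueryStatus pattern; the GUID, command ID, and cached-state helper below are placeholders for illustration only.

        using System;
        using System.ComponentModel.Design;
        using Microsoft.VisualStudio.Shell;

        // Sketch of a well-behaved command: BeforeQueryStatus runs synchronously
        // every time the menu is about to drop, so it must stay cheap - consult
        // cached state, set the flags, and get out.
        static class WellBehavedCommand
        {
            public static void Register(OleMenuCommandService mcs)
            {
                // Placeholder GUID/ID; real commands use the package's command set.
                CommandID id = new CommandID(new Guid("00000000-0000-0000-0000-000000000000"), 0x0100);
                OleMenuCommand command = new OleMenuCommand(delegate { /* execute */ }, id);

                command.BeforeQueryStatus += delegate(object sender, EventArgs e)
                {
                    OleMenuCommand cmd = (OleMenuCommand)sender;
                    cmd.Enabled = CachedIdeState.HasActiveDocument;   // cheap, already-cached state
                    cmd.Visible = true;
                    // Do NOT create tool windows, touch disk, or contact servers here.
                };

                mcs.AddCommand(command);
            }

            private static class CachedIdeState
            {
                public static bool HasActiveDocument { get { return true; } }   // stand-in
            }
        }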

    Add all these things together and menu perf is a battle we are always fighting.  Beta1 perf here was certainly not where anyone wanted it to be, and I think we are making great strides toward some good fixes for RTM, but that is about all the detail I can go into 🙂  Do feel free to contact me directly (alias is first initial of first name + last name @ microsoft.com), and I will periodically check this blog as it seems to be a hotspot for perf discussion 🙂

    Ryan