Back, back, way back … two years ago, when we started thinking about what Whidbey should be, I spent some time thinking about what makes a good feature for C# and Visual Studio, and what our focus should be for the product. I captured my thoughts in two emails, which I'm going to share here on my blog.
Looking back now that we've released Beta 1, I think we did a pretty amazing job. I'm really stoked about where Anders and the rest of the C# language design team have taken the language, and also about all the great stuff we've been doing in the IDE to make writing all that new C# code that much more fun. A lot of the credit for the good stuff we've been doing for the C# developer in the IDE has to go to our IDE PM Anson Horton and the C# IDE dev team led by Jay Bazuzi.
So without further ado, I’d like to know how you think Whidbey measures up against ‘The Yardstick’ …
There's a lot of focus around the Product Unit and Division on new features for the Whidbey product. I've been asked my opinion on several of the new feature ideas. Some of them I'm really excited about, some I'm less excited about, and some I think would be a step backwards for the product. I want to share with you the criteria I use to evaluate features in VS, the yardstick against which I measure a feature:
Good features save programmers a significant amount of their time.
That sounds pretty simple, so let me expand a bit. To save a programmer a significant amount of time, a feature must save time in a task on which the programmer currently spends a lot of time. This is the classic lesson of optimization: a 5% saving in a task which takes 50% of a programmer's time is much more valuable than a 90% saving in a task which takes only 1% of it. A relevant example is the proposed feature to automatically add using clauses and references to C# source files and projects. At first glance this looks great: the user is getting help with a task which previously had to be done manually. But let's take a look at that task. An average C# source file has around a dozen using statements; an average project, about a dozen referenced DLLs. Say it takes around a minute to add each one (a very liberal estimate) and you've saved the user maybe 10 minutes per source file and 10 minutes per project. Sounds great. But how long does the average project or source file live? I'd say the average source file lives through several months of active development. Saving me 10 minutes over several months does not get me excited.
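The arithmetic behind that classic optimization lesson is worth making explicit. A quick sketch (using the same percentages as the paragraph above, and a hypothetical helper name):

```python
# Overall fraction of total programmer time saved =
#   (fraction of time spent on the task) x (fractional speedup of that task)

def overall_saving(task_share: float, task_speedup: float) -> float:
    """Return the fraction of *total* time saved by speeding up one task."""
    return task_share * task_speedup

# A 5% saving on a task that takes 50% of your time:
big_task = overall_saving(0.50, 0.05)    # 0.025, i.e. 2.5% of total time

# A 90% saving on a task that takes only 1% of your time:
small_task = overall_saving(0.01, 0.90)  # 0.009, i.e. 0.9% of total time

# The modest improvement to the big task wins by almost 3x.
assert big_task > small_task
```

The point is that the *share* term dominates: a feature aimed at a 1% task can never move overall productivity by more than 1%, no matter how well it works.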
Another aspect of good features is that their value should more than justify their costs. When I talk about costs, what I mean is the usability cost of the feature. Every feature incurs a usability cost. These include:
Cost of invoking the feature
This includes the number of keystrokes or mouse clicks required to get from the point where the user wants to invoke the feature, to invoke it, and to return to where they left off. To add value, the UI cost of the feature must be significantly less than the UI cost of not using the feature and doing the task the old-fashioned manual way. One feature which has been talked about is an auto-correct feature for grammar errors: adding a button to click which will add that missing semi-colon for you. Compare that to the 'manual' cost of navigating to the error (F8 and a few arrow keys) and hitting the semi-colon key. Is there a saving here? A few keystrokes, maybe. But then ask yourself: how much of your time do you spend fixing compile errors anyway? Fixing compile errors is a 1% task, so any kind of auto-correct feature, even if it worked perfectly all the time (which it can't; you can't program in ESP), would not significantly improve developer productivity.
Cost of NOT Invoking a feature
This one is a little more subtle, but often it is even more important than the cost of invoking a feature. If you get a pop-up, or a smart tag, or heaven forbid a freaking paper clip advertising a feature, then every time you don't use the feature, the feature has cost you. There's a reason that advertising costs money: our attention is a valuable commodity. Every time some UI pops up, we cost the user distraction time. Even an additional menu item, or a check box in a dialog, costs the user attention. I cannot stress this one enough: we should treat the user's attention as the valuable commodity that it is.
Cost of Understanding a feature
This is related to the cost of learning a new feature, but is subtly different. To use a feature effectively, the user must have confidence that when it is invoked, they will get the results they expect. Once you know that a feature exists, how much do you need to know about it to have confidence that it will do what you want? I think it is often better to implement a feature in a way which is predictable from a very small set of rules than to attempt to code in a million special cases which each save a bit more time. If the user can't predict what the feature does, they'll stop using it. Simple is often better, even if there are some obvious cases in which small improvements could be made.

One example of a feature with a high understandability cost, for me, is the use of regular expressions in the find dialog. I'm not a regular expression guru, so I will often stick to simpler searches whose results I'm sure of rather than construct some uber-expression to find exactly what I want. Am I arguing against regular expression searching? Absolutely not: the value of the feature, for those who understand it, easily justifies the cost.

Another example is smart indenting of tabs. After a few minutes of writing code with smart indenting turned on, I came to the conclusion that I had no idea what it was doing. Code was being indented in ways that I could not easily predict, and even when it got it right (i.e. the way I wanted), I'd always stop for a moment after hitting Enter to see where the cursor ended up before continuing to type. It may have been doing some great things, but I couldn't predict the algorithm it was using. Now I switch to block indenting when installing a new build. It doesn't always get it right. In fact it is often wrong, but it is always close, and more importantly, it is always 100% predictable by the little gears in my brain. Now I never pause after hitting Enter. I continue without any breaks in the flow of my typing (often adding an extra Tab or Shift+Tab) after hitting the Enter key. It is not worth it for me to have the 'smart' tab indenting algorithm taking up valuable space in my meager brain, and the 'saving' of the occasional tab character is not worth the cost of crippling my typing speed. Getting 80% of perfect in a predictable way is way better than getting 95% or even 100% of the savings in a less predictable way.
Cost of Screen Real Estate
Screen real estate is precious. The more useful information the user sees, the better. When the user is learning the IDE, a list of features (via menus and dockable windows) is useful information, because the task at hand is learning the IDE. However, once you start coding, the useful information is code, debug data, and debug state; a list of UI features is no longer interesting. The first thing I do when I install a new build is configure the IDE to show the maximum amount of useful information for my tasks. No toolbars. No docking windows except the solution explorer, call stack, and locals windows. All other windows are either undocked (if they contain useful information, I want to see a useful quantity of it) or hidden if they don't contain useful information (most other windows). Any new feature had better be worth the screen real estate that it consumes.
I've spent a lot of time talking about the costs of a feature, and not a lot about the benefits. This is intentional. Often the right design decision is to not change anything. This is probably the most important lesson I've learned from watching Anders in the C# language design meetings. Often an area of the language is raised in which there appear to be some real gains to be made. We could prevent the user from making a common coding error, or we could improve the usability of the language for a certain problem domain. Then, after thinking really hard through all of the options for addressing the issue, the answer is to do … nothing! Even though the problem is valid, and it looks like we should be able to add something to improve the situation, after careful reflection there is no way to address the issue which does not cost more than the solution gains. Many of the most important and valuable decisions I've seen Anders make are decisions to not add complexity to the language unless the added value justifies the complexity. In borderline cases, he almost always chooses nothing over adding something which is marginal.