Are some features more harmful than helpful?

We've been talking a lot about the suggestions everyone has been asking us to make.  As you may or may not have guessed, many of the suggestions are things that have come up internally and have already been given a lot of thought.  The reasons we don't have them in this version are many, but the main ones are:

  1. We don't have time to implement that feature even though we'd like to
  2. We actually don't think the feature is a good thing, and we believe adding it would be detrimental

The first is something that I think everyone can understand.  We have limited resources (time, developers, QA, etc.) and there is only so much we can do.  Large features that affect many teams use a large amount of those resources, and we'll only commit to them if we see immense user benefit.  In general we'll schedule work to provide the greatest user benefit.  There's no set way of doing this, but (for example) we might pick 5 really great features rather than 1 amazing one because we can do the 5 in the same amount of time and together they are better than that 1 other feature.  Of course, figuring out how many resources something will take is extremely difficult, and estimates get updated as work progresses.  Sometimes a feature takes 1/10 the time you thought, sometimes three times longer.

The second is something that I think people understand but don't necessarily think about.  While they see a benefit to the suggestion, they don't consider that there might be a downside.  I do a lot of my coding in C++ (unfortunately) and I see that as a language whose design philosophy was “that looks cool!  Is it fast or powerful?  Yes!!  Then let's add it.”  This is generally why people talk about C++ as a shotgun that not only comes preloaded but is also pre-aimed at your feet.  Its features can be used amazingly effectively in some situations, but they tend to get abused to no end, leading to nigh-impossible-to-understand code with bugs that can be horrendous to find.  I'm also someone who followed the development of Java (and other languages) quite closely.  You could see that in their design process they looked carefully at the features that other languages had and asked “is that feature something that is normally used well, or normally abused?”  If it fell into the latter category then they said “we're not going to include that.  Even if it is a burden for the programmer who would use it well, it's better for the platform overall.”  This is a philosophy that I agree with.  Why?  Because if you leave something out and later realize that you need it, you can always add it.  If, instead, you had added these features, you'd be stuck: it's very, very difficult to remove something from a language, as you may break existing code.

Many features we've looked at could help programmers in special situations, but we feel that they would lead developers to create bad code and APIs.  An example of this is the “optional/named parameters” argument.  When you have a language with optional parameters you can end up with APIs whose methods take 50 parameters.  (See the Office object models for an example of this.)  The reason this is generally bad is that these 50 parameters have relationships that you can't express, like “if you give the font argument, then you must also specify size, but not window layout or brush stroke”.  These kinds of constraints are things that should be encapsulated elsewhere.  The argument people generally make for wanting this feature is that it lets them interoperate with those APIs.  However, if we added the feature then more of these APIs would breed and you could get a proliferation of bad APIs.
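To make this concrete, here's a minimal sketch of the shape we worry about (written in present-day C# syntax, since optional parameters arrived in later versions of the language; all of the names are invented for illustration and don't come from any real API).  The first class is what tends to grow out of optional parameters; the second encapsulates the related arguments in a small type, so the invalid combinations simply can't be expressed.

    // Hypothetical sketch only; these names don't come from any real API.
    public class Document
    {
        // The shape that optional parameters tend to breed: the signature can't
        // say "if you pass 'font' you must also pass 'size', but not 'layout'".
        public void Render(
            string text,
            string font = null,
            int size = 0,
            string layout = null,
            string brush = null /* ...and dozens more */)
        {
            // ...
        }
    }

    // The constraint can instead be encapsulated in a small type, so an
    // invalid combination simply can't be constructed in the first place.
    public class FontSettings
    {
        private readonly string _font;
        private readonly int _size;

        public FontSettings(string font, int size)  // font and size always travel together
        {
            _font = font;
            _size = size;
        }

        public string Font { get { return _font; } }
        public int Size { get { return _size; } }
    }

    public class BetterDocument
    {
        // Layout and brush settings would live in their own types, passed only
        // to the methods that actually need them.
        public void Render(string text, FontSettings font)
        {
            // ...
        }
    }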

That's an example of a language feature that we worry about.  There are also issues with our tools that we worry about.  For example, one thing we're ambivalent about is allowing users to collapse regions in Visual C#.  What we've seen is that people end up with “#region fields, #region methods, etc.” which they then collapse to get a class that is only about 10 lines long.  If you expand it you end up with a 200k file (no joke).  Because we've made it so easy to hide the complexity, you end up thinking that your class isn't complex, and you wind up with an impossibly complex object.  If we didn't have this then frankly you'd be forced to break up your object to make it less unwieldy.
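For the sake of illustration, here's roughly what that layout looks like (the class name and members below are invented).  Collapsed in the editor it reads as a handful of lines, which is exactly what hides how large the type has actually become.

    // Hypothetical example of the "#region everything" layout described above.
    public class OrderManager
    {
        #region Fields
        // ...hundreds of fields...
        private int _pendingCount;
        #endregion

        #region Properties
        // ...dozens of properties...
        public int PendingCount { get { return _pendingCount; } }
        #endregion

        #region Methods
        // ...thousands of lines of methods...
        public void Process()
        {
            _pendingCount++;
        }
        #endregion
    }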

Another tools feature that we worry about is Edit & Continue.  Richard Grimes has a very interesting article on it where he argues that it actually leads to poor development and design skills.  I somewhat agree with this view.  As I posted about earlier, when I'm in OCaml/Java/C# I never use a debugger.  Why?  Because I tend to follow development processes that make it unnecessary.  When things go wrong I tend to just sit down and think about it for about 5 minutes, after which I'm pretty sure I know what the issue is.  A quick test will usually confirm it.  I'll then fix up the issue and add tests to make sure that it won't happen again.  When I use the debugger I tend to find the area where the problem actually manifests itself, and I'm then tempted to fix it right there.  By doing this I might be overlooking the fact that something else way before actually screwed something up, and I'm fixing it in the wrong place.
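Here's a small, made-up illustration of that trap (none of these names refer to real code).  The crash shows up in the formatting method, so the temptation at the breakpoint is to patch it there with a null check, even though the real defect is the lookup upstream that silently hands back null.

    using System;
    using System.Collections.Generic;

    // All names here are invented purely for illustration.
    class Customer
    {
        public string Name;
    }

    static class Repository
    {
        static readonly Dictionary<int, Customer> _byId =
            new Dictionary<int, Customer> { { 1, new Customer { Name = "contoso" } } };

        // The real defect lived here: an unknown id silently came back as null.
        // Fixing it at the source (and adding a test for the invalid-id case)
        // removes the whole class of failures instead of patching every caller.
        public static Customer Load(int id)
        {
            Customer c;
            if (!_byId.TryGetValue(id, out c))
                throw new ArgumentException("Unknown customer id: " + id);
            return c;
        }
    }

    static class Formatting
    {
        // Where the crash actually shows up.  At a breakpoint here it's tempting
        // to add "if (c == null) return string.Empty;", which hides the upstream
        // bug rather than fixing it.
        public static string Describe(Customer c)
        {
            return c.Name.ToUpper();
        }
    }

With the lookup fixed, an invalid id fails loudly at the source and the symptom in the formatting code never appears at all.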

The best fixes that I've ever made to code have come about because I didn't just patch a couple of lines and move on.  Instead I went and talked to my peers about it.  I say what I think is wrong, and how I think it should get fixed.  Their response is usually “waitaminute... that doesn't make any sense.”  or  “wait... how did this work before?”  or “if you do that, Foo will break.  You should be fixing it here instead so that the problem is actually gone.”  or, even better, “wait... this is just fundamentally broken.  Of course this would fail.  We really need to rethink this and do it correctly.”   That last statement has happened a lot in the Whidbey time frame, and our code base has become much, much better because of it.  Long-standing, deep-seated bugs were rooted out because we didn't just fix a symptom; we tracked it down to the actual source of the bug (usually a way-overcomplicated class), ripped it out, and replaced it with something far simpler and more understandable that we were then much more confident in.

Note: this is my opinion :-)  Many people disagree with me on this.  There are tons of people who live in the debugger (and who create very good code).  They would probably be able to leverage Edit & Continue quite effectively.  However, again, this is one of those areas where we have to ask “are we doing more harm than good?”  If, in the end, we feel the feature does more good than harm (i.e. the feedback from you is “I understand the risks, but this is something that makes me a better developer”), then we'll do it, because we want C# developers as a whole to be more productive.

This post came out a lot longer than I expected.  But I thought it might be helpful to understand that we do think very carefully about new features, and it doesn't boil down to “let's just do it because we can”.  If you have any questions or issues with anything I've said, let me know!