Rick Byers wrote (some time ago):
Thanks for the awesome post Eric. I’d be interested in hearing more detail about the sorts of things that cause features to be rejected. Is it common to reject a feature that you think would be valuable only because of syntactic compatibility limitations (parser ambiguity, breaking change, etc)?
What are your thoughts on how language evolution should work in general (outside the confines of C#)? Do you think it would be possible to have languages that could more readily accept the type of extensions you’ve wanted to make to C# but couldn’t?
For example, do you think there would be value in a language that added a layer of abstraction between the syntax presented to the user and the persisted form? E.g., if a language were stored on disk as an XML representation of the parse tree, then you could evolve the language (add keywords, etc.) and rely on the IDE tools to intelligently present the code to the user.
I’ve been saving up this one for a while now.
It’s common for us to reject some features because they aren’t along the lines of our language philosophy.
It’s also fairly common for us to reject a feature because we can’t come up with a good syntax for the feature. Sometimes this is because we just don’t like the constructs we come up with, because they are ugly, or they don’t really make things simpler for the user, or they don’t cover the right scenarios. The syntax we can use is heavily constrained by the existing structure of the language. Take a look at your keyboard, look at all the special characters, and tell me which ones aren’t already used for something in C#. The list is very short, so we are constrained by the operators that are available. We’re also constrained by whether our change would be breaking, and in what situations things would be breaking. C# 2.0 has no major breaking changes, and though that isn’t an absolute rule for us, it’s certainly a goal. Adding new keywords is, in general, a bad thing to do.
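To illustrate the keyword problem (my sketch, not part of the original post; `CalculateYield` is a made-up helper): C# 2.0 iterators needed a way to say “produce the next value,” but reserving a new keyword like `yield` outright would have broken every existing program that used `yield` as an identifier. The language instead made it a *contextual* keyword, special only in the position where no identifier could legally appear.

```csharp
using System.Collections.Generic;

class Example
{
    static int CalculateYield() => 42; // hypothetical helper for illustration

    static void LegacyCode()
    {
        // Legal in C# 1.0 and still legal today: "yield" is an ordinary
        // identifier here. Reserving it would have been a breaking change.
        int yield = CalculateYield();
    }

    static IEnumerable<int> Evens(int count)
    {
        for (int i = 0; i < count; i++)
            yield return 2 * i; // "yield return": contextual keyword, so
                                // existing uses of "yield" keep compiling
    }
}
```

The same trick shows up elsewhere in C# 2.0 (`where`, `partial`): new syntax is wedged into positions where it cannot collide with identifiers in existing code, which is exactly the kind of constraint described above.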
Finally, we’re constrained by what the runtime can/will implement, and whether things can be implemented across languages. Some features only make sense if they’re done in all the languages, but that means all languages need to agree before we do it.
Rick also asked about language evolution.
There are different opinions about this. Some believe that languages should never change. Others believe that they should be able to extend their language at will. An extreme example of this is Intentional Programming.
I think I’m one of the few people around who have actually played around with intentional programming. Conceptually, it’s interesting, but in the real world, I think the “everybody designs their own language” approach is challenging at best. One can envision a world where the user representation is extensible but the underlying representation is standard, but I think that’s a bad world to be in. It may be great for you, but it’s probably not good for your team, or for the poor guy who takes over your code two years from now. And there’s a lot to be said for the “code in a text file” world.
We have well-defined ways for users to add functionality – through classes, methods, interfaces, etc. I think that languages should only consider adding features when there is an obvious shortcoming in solving the problem through existing functionality. At that point, you need to understand those shortcomings and determine whether a language feature is the right way to address them.
So I’m not big on extensible languages. Existing facilities, such as macros in C++, do have their uses, but they are a disaster from a readability standpoint (both for the compiler and for the developer).