SDR Time


Development has currently come to a complete halt for me.  This is partly due to the fact that i’m a lazy bum, and i’d rather loaf around than earn my keep.  Beyond that, though, we’re currently meeting with our C# Customer Council to conduct a Strategic Design Review (SDR), and it’s pretty much infinitely more useful for me to be taking part in this process than to be coding.  Why?  Well, it helps to start out by talking about what an SDR is really about.  As you’ve seen from many posts of mine, the C# team is desperate for feedback of all sorts from our users, and the SDR process is about the most intimate way we go about getting that feedback.  Rather than using things like blogs, which use electronic communication to reach tons of users, we spend several days with a small number of developers in very cozy rooms where everything is done face to face.  For me, this time is completely about shutting up and just listening.  We meet with this group periodically, and we feel that they’re a good representation of our users.  i will rarely talk during the SDR (unless something comes up that i am absolutely the best person to address), and instead i want to get maximum value out of the chance to learn what these developers are looking for and what they find unsatisfactory about our current tools and language.  Any and all feedback is welcome (and encouraged), and this provides a level of closeness that we’ve found tough to replicate through any other forum.

We’re always rolling around many different ideas and trying to determine what we think is the most important stuff to work on, and with the aid of these people we can reevaluate our positions. This is a time for us to hear: “omg!!  Why on earth are you working on that?  That is the most useless thing i’ve ever seen.  Why aren’t you working on this other thing instead since this is causing pain for tons of businesses and developers and any help here would be far more beneficial than anything else you could do!”.  Or they might say: “yes!  That’s a great start!  But in my business it would be useless in its current form.  However, if you just tweaked this slightly then it would fit the bill”.

And, as one member even commented (paraphrased): “dogfooding only gets you so far.  It lets you build software that works for companies exactly like MS.  But we’re not like MS, and you need to understand how our needs differ.”

One of the big eye-openers is how little these people care about the things that i am passionate about.  Instead they are passionate about areas that i am either unexcited about or, worse yet, areas that i know almost nothing about.  This is a necessary thing for me to be reminded of, and it tells me that i need to do a whole lot of learning about an enormous number of technologies in order to really be able to produce software that these people find useful.  i’m a language guy who loves building tools to help out developers.  i like working mainly on what we describe as “code-focused” tools, i.e. the features that interact with the user as they’re editing and are really about allowing you to create great, maintainable code.  However, for many of the participants here, it’s the domain-specific business logic that they’re always thinking about, and working with the code is just a means to an end.  Instead, they get the most benefit from the high-level APIs that make implementing that domain-specific logic far easier.  It’s also about easing the pain of connecting the different domains that they’re going to run into when creating the software their clients need: for example, connecting databases to web front ends, to installing rich apps, to providing thin clients for support teams to use, etc. etc. etc.

So the interesting questions we ask ourselves are: “Is it possible to further the language in ways that end up making that job much easier?”   “What about the tools?”  “What about the APIs?”

This SDR gives us a good gut feeling about what we should be working on, and it gives us focus for further designs in the coming future.  Then, as time passes, we gradually expand this information out to larger and larger circles, eventually leading up to our first public disclosure.  At that point we’ll have done a fair bit of work with all sorts of customers to see the different needs that people have, and we’ll have prioritized in a way that we feel provides the maximum benefit to the most consumers.  Then we let the full community weigh in and tell us how they feel about this.  Usually, things seem to be pretty much in agreement.  The choices we’ve made are generally in sync with most users.  There are usually slight differences in how important people think things are, but in general it’s a good match and is pretty easy to fix up.   Now, one of the times where the community definitely felt that our decision was incorrect was in our decision to not do Edit-&-Continue.  Once we expanded out to the community we realized our error in not supplying this tool.  With the enormous feedback we received, we realized that our set of priorities was decidedly suboptimal and needed to be drastically corrected if we really wanted to provide tools our customers would find useful.

i’m still completely committed to making VS2005 great and getting it out of our hands and into yours in a timely fashion.  i think you’re going to love Beta2, and the final release will be even better.  But once it’s in your hands, then i can’t wait to start working on all the stuff that’s going to come next.  i think you’re going to be blown away and are going to find the future of C# to be a very bright and promising one.



Oh, and just so you know, i love all forms of communication, and i’m going to use as many as i can handle to get to know what the community wants.  So please continue to send me the feedback that you’ve been so generously providing up to now.  Trust me when i say that *every* single message is listened to, and we will consider all of them when making our future plans.  If you have ways that you think the platform should develop, or if you’re unhappy about the choices we’ve made so far, please continue to let us know.  Together we can end up making the best C# possible! 🙂



Edit: I was wrong to imply that the customer council told us to not do E&C. Rather, when we asked for prioritization they wanted refactorings over E&C, and *we* felt that we couldn’t pull off both in one product cycle.  However, given the huge response from the community (as well as the customer council telling us it was important), we reprioritized our feature set later on in the game so we could do both of them.


Comments (25)

  1. Come say hi when you have a chance…

  2. We didn’t say, ‘Don’t do edit-&-continue.’ We said, ‘Don’t prioritize it above refactoring support like rename.’ The team did them both…. Excellent! 🙂

  3. CyrusN says:

    Mark: You’re absolutely correct, and i should have been more clear. i’ll edit the above to state that.

  4. I’m going to say again "Please make nullable types sane"!

    When I filed bugs in the feedback center I got back responses that were so far removed from reality as I see it that for a long time I just decided to ignore the issue because there was no way that the developers were going to see my point of view. But I recently realized that I’d never bothered to actually explain my credentials and why I feel that I have some credibility when it comes to talking about nullable types.

    You see, one of the very first things I did in C#, about three years ago I think (I haven’t checked the date in my source control history yet) was produce nullable versions of int, bool, DateTime, long, short, etc etc for a DAL layer I was working on. Since then those types have been in daily use by every developer in my company (a team of 4-5 people over the course of those years). I’ve had to explain the behavior of my nullable types to those developers, fix things they found confusing, and try to explain the problems that cannot be fixed without help from the language.

    When I say "people will find this behavior confusing", it’s because I’ve given types with that behavior to developers and *watched* them be confused by it. Or had to fix the same bug once every few months because the behavior is so counterintuitive that even after having it explained and grasping the problem, they’ll still make the same mistake a few months later.

    So I believe I ought to have some level of credibility when I say that the nullable types in C# 2.0 are significantly worse than what could be implemented three years ago without any help from the language.

    And having had this realization that actually I should have some credibility, I’ve decided to do what I can to fight for a less confusing behavior, including giving feedback in every forum I can find. I intend to go back and revisit my (closed By Design) bugs in the feedback center to comment to this effect, and also leave comments like this when I see C#-related people asking for feedback.

    The current behavior of nullable types WILL be a huge source of pain to developers, myself and my team in particular but also anyone else who uses them. Please, I know it’s late in the game, but please consider revisiting this design! If you don’t fix it now, you NEVER can.

    If you’re not sure what my issues actually are, here are the biggest:

    (int) (object) (int?) 1;            // boxing/unboxing through object doesn’t round-trip cleanly

    (int?) (object) 1;                  // nor does unboxing a plain int into an int?

    int? i = condition ? 1 : null;      // won’t even compile: no common type between int and null

    if ((object) (int?) null != null) print("WRONG");   // a boxed null nullable isn’t a null reference

    The first three of these are real sources of pain with my current nullable types. The fourth one is handled correctly in my wrappers but I KNOW that it would be causing bugs all the time if it weren’t.

    Also, methods and interfaces on the underlying types really should be lifted onto the nullable type. This also caused lots of complaints until I implemented it on my wrappers. Who wants nullable types that can’t be sorted or formatted right?
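    To make the wrapper approach concrete, here is a minimal sketch of what such a hand-rolled, pre-generics nullable int might look like (the names and details are illustrative guesses, not Stuart’s actual code), including the kind of “lifting” of CompareTo and ToString he describes:

    ```csharp
    using System;

    // Illustrative sketch of a hand-rolled nullable int wrapper.
    public struct NullableInt : IComparable
    {
        private readonly int value;
        private readonly bool hasValue;

        // The zeroed default struct conveniently represents null.
        public static readonly NullableInt Null = new NullableInt();

        public NullableInt(int value)
        {
            this.value = value;
            this.hasValue = true;
        }

        public bool HasValue { get { return hasValue; } }

        public int Value
        {
            get
            {
                if (!hasValue) throw new InvalidOperationException("Value is null.");
                return value;
            }
        }

        // "Lifting" the underlying type's members so the wrapper can be
        // sorted and formatted like a plain int.
        public int CompareTo(object other)
        {
            if (other is NullableInt)
            {
                NullableInt o = (NullableInt)other;
                if (!hasValue) return o.hasValue ? -1 : 0;   // nulls sort first
                if (!o.hasValue) return 1;
                return value.CompareTo(o.value);
            }
            throw new ArgumentException("Not a NullableInt.");
        }

        public override string ToString()
        {
            return hasValue ? value.ToString() : "";
        }
    }
    ```

    With IComparable and ToString lifted through like this, an array of wrappers sorts and formats sensibly, which is exactly the behavior being asked for from the built-in nullable types.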

  5. CyrusN says:

    Stuart: I absolutely understand where you are coming from, and i’ve forwarded your comments in full to the rest of the language design team.

    I really hope we can make things better here.

  6. Adel says:

    You are right, you should focus on developing tools that will make developers’ lives easier. Directory services developers lack tools that deal with group policy objects in Active Directory. I’d really appreciate it if you could point me to an existing tool that performs that function.

    Adel

    email:adel.husain@aramco.com

    Saudi Arabia.

  7. Dr Pizza says:

    "Instead, they get the most benefit from the high-level APIs that make implementing that domain-specific logic far easier"

    Related to this, an oldie (but a goodie!).

    Using C# is immensely painful because the runtime doesn’t have a decent set of container classes. This issue has repercussions in C# itself (because C# "knows" about certain bits of the runtime, like String, and iteration), and has even broader repercussions at the higher API level. Not only does C# not have a quality set of containers, but the runtime forces higher level APIs to make the same bad design decisions, thanks to the poor design of e.g. IList.

    C# lacks good parallel programming support. The async model in Cw may be useful, but I’d be more interested in something closer to OpenMP directives.

    When one is doing pinvoke programming and one wishes to marshal structures to and from buffers, I want more control over how I can define the layout of the buffer. (To be honest, I’ve not much looked at .netfx 2, so perhaps these things are fixed). In particular, I want a way to specify that an array’s size is determined by another member of the class. In other words, you need to make SizeParamIndex non-retarded (currently, amongst other failings, it works only on parameters).
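    For reference, here is roughly what SizeParamIndex looks like today on a parameter (the DLL and function names below are hypothetical); the complaint is that nothing comparable exists for a struct field whose length lives in a sibling field:

    ```csharp
    using System.Runtime.InteropServices;

    static class Native
    {
        // SizeParamIndex is honored here because "data" is a parameter:
        // the marshaler reads the array length from parameter index 1 ("count").
        [DllImport("mylib.dll")]  // hypothetical native library
        public static extern void Consume(
            [MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 1)] int[] data,
            int count);

        // There is no equivalent for a *field*: you cannot say "this array's
        // marshaled size comes from that other field of the same struct",
        // which is the limitation being complained about here.
    }
    ```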

  8. CyrusN says:

    DrPizza: "Using C# is immensely painful because the runtime doesn’t have a decent set of container classes. "

    Well, you already know what my answer is going to be on this 🙂

    The fact that you mentioned the word "set" in your sentence is especially humorous to me.

  9. FWIW, I blogged a longer writeup of my issues with nullable types here:

    http://sab39.dev.netreach.com/Blog/12?vobId=172&pm=18

  10. CyrusN says:

    Stuart: Thanks very much for that writeup. It has been forwarded as well, and will be discussed.

    I really appreciate you going through this time to express your frustration. It’s far easier to make certain arguments when you can point to customers and clearly understand the problem they’re having.

  11. Cyrus, I can’t thank you enough for taking the time to forward my concerns and taking them seriously enough to get them considered.

    I imagine that since it’s so late in the game, the chances of getting any really big changes made are slim (I’d LOVE to be wrong of course!). With that in mind, at least one of the problems can be fixed with virtually no downside: "test() ? 1 : null" is currently illegal code so there’s no chance of breaking working code by fixing it.

  12. CyrusN says:

    Stuart: I can’t promise anything. However, i can tell you that this is being taken very seriously.

  13. Dr Pizza says:

    "The fact that you mentioned the word "set" in your sentence is especially humorous to me. "

    😀

    But seriously, if that single attribute were changed to be made more general it would make my life SO much easier.

    I mean, it might actually make it practical to call many of the more interesting Win32 APIs from C#!

  14. Dr Pizza says:

    And it looks to me like nullable types are bullshit.

    They seem to me to be a workaround to an issue that other languages (e.g. Java, C++) don’t have, because those languages let one create any type on the heap (or at least, close enough to "any type", in the case of Java’s primitive wrappers).

    This provides nullability for "value" types and "reference" types alike (either by effectively having no value types, as in Java, or by effectively having no reference types, as in C++).

    The "nullable types" approach chosen by C# introduces new semantics that differ significantly from those of plain ol’ reference types. Yet those reference type semantics are IMNSHO the ones we really want for nullable types; they’re certainly the ones we want when communicating with databases and so on (of course, databases shouldn’t have nullable columns anyway, so the issue shouldn’t really arise often in any case). The difference in the semantics seems frankly inexplicable.

    What nullable types SHOULD have been used for is to make NullReferenceExceptions rarer. The normal reference declaration "Type name;" should create a variable or field that *cannot* be null. If you need for some reason a reference that may not refer to anything, then *that* is what a "nullable type" should be used for. In this regard, C#’s references should be much more like C++’s references (which equally aren’t nullable). Converting from a nullable reference to a non-nullable reference should require an explicit (and checked, throwing a NullReferenceException if it fails) cast.

  15. DrPizza: I agree with you that it would be really cool to have a construct for *NON*-nullable values of reference types as well as for nullable values of value types. I feel that that’s much less urgent at this point because there’s no construct being introduced in 2.0 that would prevent adding that behavior in the future.

    On the other hand, if the current design of C# 2.0 stands, there’s no way to get sane nullable value types at any point in the future ever.

    (Just because, here’s a suggested syntax for non-nullable reference types:

    string s1 = null;

    string! s2 = "hello";

    string! s3 = null; // compiler error

    I like the idea of the parallel between "int?" as a nullable integer and "string!" as a non-nullable string. However, I’m not 100% sure that this wouldn’t result in any syntax ambiguities…)

  16. CyrusN says:

    Stuart: Non-null reference types are another thing on the list of features that we are always thinking about. And yes, we would probably go with the ! syntax mentioned above.

    It would have been cleaner for:

    Foo

    to always mean "non-null Foo" regardless of whether it was a value type or reference type, and then just have:

    Foo?

    mean "possibly null Foo" for either value or reference types.

    But, at this point, we could never make that change to the language given the enormous amount of code out there that this would then break.

  17. Non-null ref types are interesting.

    For example,

    class Foo {

    string! x;

    }

    would have to be illegal, because there’s no legitimate initial value for the variable (unless string is somehow made special so that the default can be "", or the default is just to call a no-arg ctor if there is one, which both have their own issues).

    Presumably "x is string!" would be equivalent to "x != null && x is string"… actually I’m not sure what "is" does with null today so that may be redundant.

    Another interesting consequence is that the recommended pattern of using "as" to avoid the double-check implicit in an "is" followed by a cast, doesn’t work. In fact, a non-nullable type can’t appear on the right hand side of "as" at all, just like value types.

    string x;

    if (x != null) {

    string! y = (string!) x;

    }

    Doesn’t look like there’s an easy way to avoid the double-null-check there. Still, a null check is much quicker than a type check, so it’s probably not a big deal.

    All of this stuff hints that non-nullable types probably can’t be implemented with just a NonNullable<T> struct in the framework. At the very least, if you try to do it that way, you’ll hit most of the same issues as Nullable<T> hits today, and a bunch of interesting new ones 😉

  18. Dr Pizza says:

    "It would have been cleaner for:

    Foo

    to always mean "non-null Foo" regardless of whether it was a value type or reference type, and then just have:

    Foo?

    mean "possibly null Foo" for either value or reference types."

    Yes, precisely. That’s what you should do.

    "But, at this point, we could never make that change to the language given the enormous amount of code out there that this would then break."

    Balls.

    Providing a translation tool would be relatively easy and would fix most issues (turn uninitialized references to nullable ones, leave initialized ones alone).

    And providing a mechanism to compile using the rules of an older version of the language would also be easy to do, and solve all the issues for those unwilling to upgrade. It’s the approach Java has used for a long time now, and it works well.

    Frankly, I don’t understand where this mindset that you can’t break any code has come from. It’s done under the misguided impression that it’s somehow better for the users, but what it actually means is that we’re left with a language with less consistency and more peculiarities to work around. There may be some small short-term win, but there’s certainly a long-term loss. And it’s probably not best for you guys either; working around e.g. a failure to reserve an identifier that you later want to use as a keyword by using context-sensitive keywords and adding yet more constructs to the language makes your job harder too.

    What’s all the more confounding is that neither of C#’s real competitors (VB/VB.NET, Java) have anything like the same aversion to breaking changes, in spite of having at least an order of magnitude more code written in them than C#. VB traditionally hasn’t even provided any kind of mechanism to ease the upgrade path (the VB6 to VB.NET converter being something of an anomaly in this regard), yet has never been hurt because of this.

  19. Dr Pizza says:

    "class Foo {

    string! x;

    }

    would have to be illegal, because there’s no legitimate initial value for the variable (unless string is somehow made special so that the default can be "", or the default is just to call a no-arg ctor if there is one, which both have their own issues). "

    Fair enough. Make it illegal. It’s illegal in C++ (whose references aren’t nullable), so it’s not the end of the world. In C++ reference members must be initialized in the init list of constructor(s), viz:

    struct X

    {

    int& i;

    X(int& i_) : i(i_)

    {

    }

    };

    C#’s references will, unlike C++’s, remain rebindable. Which brings me on to another issue; the ability to overload == should be removed because it doesn’t work at all nicely. It’s fine in C++ (for obvious reasons). It’s craptastic in C#. Either remove it, or introduce unambiguous syntax for reference and value equality. e.g. == for value equality only (a compile-time error to try to use it for comparisons between value types or nulls/nullable types), === for reference equality.

  20. Stuart: "Non-null ref types are interesting."

    Yup. i’m about to blog about this, but i’m going to go through your post first.

    "For example,

    class Foo {

    ….string! x;

    }

    would have to be illegal, because there’s no legitimate initial value for the variable (unless string is somehow made special so that the default can be "", or the default is just to call a no-arg ctor if there is one, which both have their own issues)."

    It actually is possible; however, there are other issues to be taken care of. I’ll address this in my post.

    "Presumably "x is string!" would be equivalent to "x != null && x is string"… actually I’m not sure what "is" does with null today so that may be redundant."

    Today, "null" is not anything, so the semantics would stay the same. This is different from Java, where "null" is anything.

    "Another interesting consequence is that the recommended pattern of using "as" to avoid the double-check implicit in an "is" followed by a cast, doesn’t work. In fact, a non-nullable type can’t appear on the right hand side of "as" at all, just like value types.

    string x;

    if (x != null) {

    ….string! y = (string!) x;

    }"

    It’s quite possible we could optimize this out, or provide a language construct to do the same. However, null checks are exceedingly cheap (something like one instruction), so it’s not clear that this is necessary; the CPU is going to do a completely acceptable job here.

    "Doesn’t look like there’s an easy way to avoid the double-null-check there. Still, a null check is much quicker than a type check, so it’s probably not a big deal.

    All of this stuff hints that non-nullable types probably can’t be implemented with just a NonNullable<T> struct in the framework. At the very least, if you try to do it that way, you’ll hit most of the same issues as Nullable<T> hits today, and a bunch of interesting new ones ;)"

    struct NonNullable<T> where T : class

    would be quite *un*-ideal (for many of the same reasons that Nullable<T> is un-ideal). For example, you would have a public no-arg constructor that would, by default, initialize the internal T field to null. Whoops! Right off the bat you’re allowing a null T when the whole point was to not allow one.
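    To make that “whoops” concrete, here’s a minimal sketch of such a struct (my illustration, not a real proposed type): the struct’s implicit zero-initialization bypasses any constructor check, so the null sneaks in anyway.

    ```csharp
    using System;

    // Sketch: why a NonNullable<T> struct can't actually guarantee non-null.
    public struct NonNullable<T> where T : class
    {
        private readonly T value;

        public NonNullable(T value)
        {
            // The explicit constructor can enforce the invariant...
            if (value == null) throw new ArgumentNullException("value");
            this.value = value;
        }

        public T Value
        {
            get
            {
                // ...but new NonNullable<T>() / default(NonNullable<T>)
                // zero-initializes the field, so the "impossible" null
                // still gets in through the back door.
                if (value == null)
                    throw new InvalidOperationException("Uninitialized NonNullable<T> holds null.");
                return value;
            }
        }
    }
    ```

    `new NonNullable<string>()` compiles just fine and silently contains null, defeating the guarantee the type exists to provide.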

  21. DrPizza: I’ll probably do a post on backwards compatibility later, but i’ll address your points here first.

    "Balls."

    To you, maybe. However, there are a hell of a lot of other customers who don’t feel the same way, and we have to strike a balance here.

    "Providing a translation tool would be relatively easy and would fix most issues (turn uninitialized references to nullable ones, leave initialized ones alone)."

    And what translates all the source code examples out there? What goes and translates the thousands of books already written and on the shelf? What goes through and reeducates the millions of programmers and lets them know that the old way they thought of the language is no longer valid? There is benefit in not changing the language already shipped and instead just adding new things to it that can then be learned independently.

    Say, for the sake of example, we did go down your path and provided these tools to change all your source code. Now, during the course of a person’s work they run across a bit of code that just says something like:

    string s;

    They now have to ask themselves: "OK… is this pre-C# 3.0 code or post-C# 3.0 code? Does that mean non-nullable string, or nullable string…"

    "And providing a mechanism to compile using the rules of an older version of the language would also be easy to do, and solve all the issues for those unwilling to upgrade. It’s the approach Java has used for a long time now, and it works well."

    AFAIK, Java has only provided that for asserts and enums, not for changing one of the fundamental ways that types are represented in source.

    "Frankly, I don’t understand where this mindset that you can’t break any code has come from. It’s done under the misguided impression that it’s somehow better for the users, but what it actually means is that we’re left with a language with less consistency and more peculiarities to work around. There may be some small short-term win, but there’s certainly a long-term loss. And it’s probably not best for you guys either; working around e.g. a failure to reserve an identifier that you later want to use as a keyword by using context-sensitive keywords and adding yet more constructs to the language makes your job harder too."

    Then you should talk to more customers. Retraining is massively expensive for them, and as they will be interfacing with other customers (who may or may not be using the new C#), it has the potential for *enormous* confusion.

    "What’s all the more confounding is that neither of C#’s real competitors (VB/VB.NET, Java) have anything like the same aversion to breaking changes, in spite of having at least an order of magnitude more code written in them than C#. VB traditionally hasn’t even provided any kind of mechanism to ease the upgrade path (the VB6 to VB.NET converter being something of an anomaly in this regard), yet has never been hurt because of this."

    VB has been massively hurt by not keeping backwards compat. It’s basically the #1 issue that their customers are upset about. We’d rather not go down that path since we’ve seen how unhappy people are about it.

    Sure, it does make the language not as nice as it could be. That’s definitely a shame. But I’m OK with having some cruft if it means that our customers can adopt and use the language (especially the large number of them who need to use C# 1.0 and 2.0 at the same time).

  22. class Foo {

    string! x = y;

    string! y = x;

    }

    How’s that for an evil case? I don’t know what your proposed solution is, but that’s something it’ll need to handle, since it’s permitted for regular strings (personally, if I’d been designing the language, I’d have been inclined to have initializer dependencies detected and circularities disallowed, but that’s water under the bridge now).

    Or how about:

    class Foo {

    public static string! x = Bar.x + " World";

    }

    class Bar {

    public static string! x = Foo.x + " Hello";

    }

    With regular strings, I think you get non-deterministic behavior there depending on whether Foo or Bar gets referenced first. But what happens with string!s?
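    For what it’s worth, here’s what the plain-string version of that circular case actually does (a small sketch reusing the Foo/Bar names above, minus the hypothetical ! syntax): whichever type initializer runs first observes the other’s field as null, which string concatenation silently treats as empty.

    ```csharp
    using System;

    class Foo
    {
        public static string x = Bar.x + " World";
    }

    class Bar
    {
        public static string x = Foo.x + " Hello";
    }

    class Program
    {
        static void Main()
        {
            // Reading Foo.x first triggers Foo's initializer, which triggers
            // Bar's, which reads Foo.x while it is still null (treated as "").
            Console.WriteLine("[" + Foo.x + "]");
            Console.WriteLine("[" + Bar.x + "]");
        }
    }
    ```

    Whichever order the initializers happen to run in, Foo.x always ends in " World" and Bar.x in " Hello", but the prefix each one sees depends on who ran first; a non-nullable string! would leave no such silent out.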

    FWIW, I completely 100% agree with you about not breaking existing code (which is one of the reasons I feel so strongly we need to fix nullable types NOW, rather than as a breaking change in 3.0). I have other things I’d like to see in C# also (covariant return types; co/contravariant generics; a construct like x?.y that’s equivalent to "x == null ? null : x.y"; Foo<T> where T : enum; the ability to define other members of an enum type (like Java permits); an automatic Parse(string) method on every enum type; etc) but all these can be added later without breaking existing code (and when the time comes to start designing C# 3.0 I’ll be right there giving feedback for it). But right now I’m just going to push the nullable type issue as hard as I can, if only I can find where the buttons are 🙂

  23. For those of you who don’t read the comments made on other posts of mine, you might be unaware about…