Be a language designer…


I started writing a normal blog post – well, as normal as any blog post of mine ever is – and then I decided to let you do the hard work. So here’s the situation.

You’re part of the C# language design team thinking about the next version of C# (ie the version after VS 2005). You get the following email:

When I’m writing code, I often come across a situation where my code throws an exception because another component called with a null parameter. I’d like a way to prevent that from happening.

What are the pros and cons of such a feature? What are the ramifications of adding it to the language? What would have to change? What is your recommendation? Exactly what would such a feature look like?

Comments (55)

  1. Big Dawg says:

    How about a [NullNotAccepted] attribute on a parameter, enforced by the compiler wherever possible and otherwise by the runtime when the method is invoked. Well, that’s a pretty bad idea, but I think the whole concept is bad.

  2. Kjell Holmgren says:

    Instead of introducing a change to the language, why not create a NotNullAttribute that would be recognized at compile time and have the compiler expand that to do a simple check at the beginning of the method (when applied to a parameter) and at any assignment when it is declared “ref” or “out”?
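    A minimal sketch of the expansion this describes, written out by hand; the NotNullAttribute and the injected check are hypothetical, since no compiler performs this today:

    ```csharp
    using System;

    // Hypothetical marker attribute: the compiler would recognize it and
    // expand a null check at the beginning of the method body.
    [AttributeUsage(AttributeTargets.Parameter)]
    public sealed class NotNullAttribute : Attribute { }

    public static class Example
    {
        // What the author would write:
        public static int Measure([NotNull] string text)
        {
            // What the compiler would inject (shown here by hand):
            if (text == null)
                throw new ArgumentNullException("text");
            return text.Length;
        }
    }
    ```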

  3. David Levine says:

    My quick take on this: use attributes that can constrain the value and/or range of arguments. The constraints can be both by inclusion and exclusion. The attributes would look much like the interop marshalling attributes.

    The attributes are optional and therefore backwardly compatible, and are to be used by the compiler to detect invalid invocations of the method at compile time. Runtime checks can also be performed but at additional cost (e.g. DEBUG vs. RELEASE semantics).

    Benefits: because it is enforced by the compiler and optionally by the runtime the behavior is consistent. It also provides opportunities for instrumentation tools, and for intellisense to aid in generating code that correctly invokes a method.

    Cons: May result in inelegant, ugly, hard-to-read-and-understand code. It may be, no, will be, version-brittle as new implementations of an assembly change the attributes. Some type of versioning policy will need to be defined and implemented.

    The compiler will have to change to support this, new metadata will have to be defined, and if the runtime supports it, features will have to be added to the runtime (the JIT, etc.).

  4. Timothy Fries says:

    An attribute is definitely the way to go. However, there’s a lot of room for flexibility. I’d argue against only creating a NotNullAttribute, and instead give attributes a lot more "magic", such as the ability to add prefix and/or suffix code to a method, with access to the method’s parameter list — or if that’s a perf issue, just a specified parameter.

    That would allow the implementation of NotNullAttribute to be done in C# rather than magically within the compiler like ObsoleteAttribute is (which I’ve wanted to subclass with my own warning message many times but can’t because it’s "magic"), and would also allow users the power to write their own validation attributes for whatever other constraints they might want to place on the parameter data.

  5. Daniel O'Connell says:

    Interesting… well, let’s see.

    Let’s start at the beginning. To add a non-null constraint to the language requires some work not only at the language level, but also at the runtime level (this is a multi-language system, after all). The runtime has to be able to enforce non-null, or a method could still receive a null value from another component. So, some metadata and verifier enforcement is required, or a type must be used with constraints in code. Without that, the entire feature is worthless. Defining it so that only the compiler itself enforces it is too weak a solution to justify adding it.

    At the language level, things are a little harder. While you can simply emit the metadata on parameters marked non-null, you have to add definite-assignment checks to the compiler. Definite assignment is a bit harder, and the compiler would not be able to implicitly cast between nullable and non-nullable variants of a type. Conversion between a variable and the non-null variant would have to be explicit. This conversion would have to perform a null check and throw an exception at the point of conversion.

    As for syntax, attributes are not acceptable. Attributes define metadata but should not be used for language features. They are less obvious than language syntax and, frankly, for compiler writers they are a PITA.

    Since we are looking at a value constraint, this basically asks for a type with a non-null constraint. Something like a value type without value semantics. One solution may be a structure which contains a reference to the given object – NonNull&lt;T&gt; where T : class, for example – which would not accept a null value. This could take a page from nullable types and use ! to express cardinality and use member hoisting, etc.

    The pros of this are greater expressiveness of parameter constraints, specifically non-nullability. This could make life easier in some situations and certainly makes the language more correct.

    The cons are the addition of syntax, added verifier constraints or types, increased language complexity, and the loss of backwards compatibility.

    The concept has potential, but may not be worth the complexity without full parameter constraint implementation.
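    A hedged sketch of the NonNull&lt;T&gt; structure described above: a wrapper that refuses null at a single, explicit conversion point. The name and operators are invented for illustration.

    ```csharp
    using System;

    // A struct wrapping a reference that cannot be null once constructed.
    public struct NonNull<T> where T : class
    {
        private readonly T reference;

        public NonNull(T reference)
        {
            if (reference == null)
                throw new ArgumentNullException("reference");
            this.reference = reference;
        }

        public T Value
        {
            get { return reference; }
        }

        // The explicit conversion performs the null check and throws at the
        // conversion site, as the comment suggests.
        public static explicit operator NonNull<T>(T reference)
        {
            return new NonNull<T>(reference);
        }

        // Widening back to the plain reference type is always safe.
        public static implicit operator T(NonNull<T> wrapper)
        {
            return wrapper.Value;
        }
    }
    ```

    Note that default(NonNull&lt;T&gt;) still zero-initializes the struct without running the constructor, so the guarantee has a hole that a library-only solution cannot close without verifier support.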

  6. Mike Dimmick says:

    Well, let’s see, what does the code look like?

    if (null == (object)param1)
        throw new ArgumentNullException("param1");
    // repeat for further arguments

    So it’s clearly achievable within the language and library at present. This pattern can easily be reduced to a function:

    static void EnsureArgNotNull(object o, string name)
    {
        if (null == o)
            throw new ArgumentNullException(name);
    }

    EnsureArgNotNull(param1, "param1");
    EnsureArgNotNull(param2, "param2");
    // etc.

    I’d say that it isn’t much of a hardship to keep typing this, although the repetition of the name is a little annoying.

    What syntax might we use? If we’re going to implement it as a language feature, I’d prefer a keyword (albeit a context-sensitive one) to a compiler-interpreted attribute:

    void MyFunc( object notnull o );

    although you could use a pseudo-custom attribute:

    void MyFunc( [NotNull] object o );

    Implementation-wise, you could either add it to the compiler, generating a prologue in the method body, or to the metadata and have the JIT generate the code for you. If you go the metadata route you probably would want to use an attribute instead – this makes it easy for developers to opt in regardless of which language they’re using.

    Enhanced compilers could read the metadata and produce an error if the code passes null for that argument, or a warning if one or more of many possible paths will pass null.

  7. Scot Boyd says:

    I would probably not change the language. I would suggest that if they don’t like the code clutter, they should refactor their implementation into two methods: one that verifies input, and one that does the real work. Example:

    public int Foo(int X, int Y, Bar Z)
    {
        if (Object.ReferenceEquals(Z, null))
        {
            // create and throw exception
        }
        if (X < x_min)
        {
            // create and throw exception
        }
        if (Y < y_min)
        {
            // create and throw exception
        }
        return FooImp(X, Y, Z);
    }

    private int FooImp(int X, int Y, Bar Z)
    {
        // Very pretty algorithm uncluttered by
        // brutish and distracting parameter checking
    }

    Sometimes code extruded automatically by the compiler is nice, sometimes it just creates a new layer that obscures what’s really going on and has to be documented to death. With more than one C# compiler around the differences can really add up. I’d hate to see two different C# compilers fill out attribute-based exceptions with different properties.

    (plus, where does the stack trace start from?)

  8. Daniel O'Connell says:

    I think the point of the feature would be to extend the semantics to disallow nulls and make that apparent in code from both sides. Basically to shift these runtime errors to compile time. If its a good idea or not is another question, but just saying that checking in code is the same thing is pretty off mark, IMHO.

  9. John says:

    Simple…

    Just remove the exception feature 😉

    Seriously though, I think you have to ask why someone can call your function with a null parameter and what does "prevent that from happening" mean.

    It’s possible that the calling code was compiled before the called code, thus making the notnull attribute less effective.

    Does making the CLR do the null check satisfy the requirement? I don’t think so. Is the motivation laziness or something else?

    It’s hard to answer without knowing the specific requirements and motivation.

  10. Scot Boyd says:

    >Basically to shift these runtime errors to compile time. If its a good idea or not is another question, but just saying that checking in code is the same thing is pretty off mark, IMHO.

    Null references are only occasionally detectable at compile-time. The code to check and throw still must exist.

  11. Mark says:

    You guys at Microsoft should talk to each other a bit more. 🙂 Have you seen this? http://blogs.msdn.com/cyrusn/archive/2004/06/03/147778.aspx

  12. Confused says:

    "a situation where my code throws an exception because another component called with a null parameter."

    I don’t understand – another component called what? His code?

  13. John Lam says:

    Are you guys thinking about pre-/post-conditional assertions à la Eiffel? That’s a fairly heavyweight feature to add if the goal is simply to add runtime null checks in a declarative fashion. However, I truly believe that this would be a very useful feature to have in C#.

    During my AOP days, many of the useful cases that I implemented in my experimental runtime aspect weaver were exactly those: pre / post condition assertions. This would be an excellent addition to C#.

  14. AndrewSeven says:

    I’m not sure I understand exactly what the user wants.

    I recommend contacting him/her and refining it a little bit.

    Does he/she want to:

    1. Automate the parameter validation and exception generation?

    2. Use a NonNullable type like Cyrus mentions?

    3. Specify a default value for the parameter that will be used if a null is passed?

    1 and 2 seem similar in this context.

    I could see using #3 (as long as it is not compiled into the calling code), but I can do it now… most of my overloaded functions end up calling a version that has all parameters and understands nulls and default values.
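    A sketch of the overloading pattern described above, with invented names: the public overloads all funnel into one version that takes every parameter and treats null as "use the default".

    ```csharp
    using System;

    public static class Greeter
    {
        public static string Greet()
        {
            return Greet(null, null);
        }

        public static string Greet(string name)
        {
            return Greet(name, null);
        }

        // The one method that has all parameters and understands nulls.
        public static string Greet(string name, string greeting)
        {
            if (name == null) name = "world";
            if (greeting == null) greeting = "Hello";
            return greeting + ", " + name + "!";
        }
    }
    ```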

  15. Matthew Douglass says:

    I think all the attribute based suggestions are missing a point — the user says, "my code throws an exception because another component called with a null parameter. I’d like a way to prevent that from happening."

    None of the attribute-based suggestions prevent the exception from being thrown, they just automate the code.

    The only suggestion I’ve seen here that could really prevent the exception is the NonNull template, but that’s a library feature not a language feature.

  16. Matthew W. Jackson says:

    If you are asking about having a data-type that would never support a null reference, I would suggest "Type!" syntax, which would expand to "NonNullable<Type>".

    However, I’m not sure that I completely agree with adding this to C#, as it’s really not useful without support in the framework. It’s bad enough that System.Data isn’t being updated to support Nullable&lt;T&gt;, but it’s not hard to write a wrapper to convert DBNull to null and vice versa. Unless breaking changes were introduced throughout the BCL, I don’t see this happening.

    On the other hand, if you’re asking about a way to remove the need to check for null arguments, I’m all for decorating parameters with attributes, but I definitely wouldn’t stop at checking for null values.

    I’m all for allowing for a wide range of custom parameter validations that would append a prologue to the beginning of the method. There are several attributes I would define, such as [NotNull], [NotNullOrEmpty], [RestrictRange], etc (I’m sure there are better names).

    I already have a set of static methods for validating arguments, but this would move the check from the top of the method to the parameter definition, which would clear up the body of the method for describing what the method DOES, not what the caller should be passing.
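    One plausible shape for the static validation helpers mentioned above; the class and method names are invented for illustration, roughly matching the [NotNull], [NotNullOrEmpty], and [RestrictRange] attributes proposed:

    ```csharp
    using System;

    // Static argument-validation helpers of the kind the attributes would
    // replace: each check throws the appropriate Argument* exception.
    public static class ArgCheck
    {
        public static void NotNull(object value, string name)
        {
            if (value == null)
                throw new ArgumentNullException(name);
        }

        public static void NotNullOrEmpty(string value, string name)
        {
            if (value == null)
                throw new ArgumentNullException(name);
            if (value.Length == 0)
                throw new ArgumentException("Argument may not be empty.", name);
        }

        public static void InRange(int value, int min, int max, string name)
        {
            if (value < min || value > max)
                throw new ArgumentOutOfRangeException(name);
        }
    }
    ```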

  17. Kristoffer Sheather says:

    I would suggest adding something along the lines of Eiffel’s precondition/postcondition system; in addition, I would suggest a way of defaulting a parameter’s value to a specific value if a certain value is passed for the parameter in question.

    Regards,

    Kristoffer Sheather.

  18. adam says:

    When I have a method that takes a reference, in most cases I don’t want to be passed null. In very few places is null allowed. So whatever the syntax, it should err toward less code decoration to achieve the effect (should the developer want it).

    So maybe the class should have the attribute and the exceptions can override this method by method [1].

    Other options on the ReferencesNotNullAttribute could be scope, i.e. do you want/need checks on public, private, protected, etc. You would of course want the inverse attributes where passing null is the norm.

    Having such attributes begs the question, "do we only want to test for null"? If we want the compiler to read and understand the attributes then should we have additional checks [2]?

    You could have pre- and post-conditions for single arguments (only post for ref/out makes any sense) or for the whole method. Whether the argument list is &lt;generic&gt; or object or matches the method signature is up for discussion. I have used what I hope is the .NET 2.0 short-cut for delegates as arguments to the attribute c’tors; without it the code would get very messy.

    There is obviously static/non-static-ness to consider. I’m not sure what arguments would go into the post-condition – the class instance object, maybe?

    Just some thoughts.

    adam

    [1]

    [ReferencesNotNull]
    public class Class1
    {
        public void fn1(object o) {}
        public void fn2([AllowNull] object o) {}
    }

    [2]

    public class Class2
    {
        public void fn3([PreCondition(Validate)] DateTime dtm) {}

        void Validate(DateTime dtm)
        {
            // condition
        }

        [PreCondition(MethodValidate)]
        public void fn4(int a, int b) {}

        void MethodValidate(int a, int b)
        {
            // condition
        }
    }

  19. Ovidiu says:

    Having seen the Comega compiler preview on http://research.microsoft.com/comega/ I’d say that’s the way to go: most of the people here seem to agree that an attribute is the way to go, and that the check can be done at compile time in a way similar to definite assignment checks. Instead of "[NotNull] object o" or "notnull object o", why not use "object! o"? 🙂

  20. Stephen says:

    Non-nullable types appeal rather more than expanding attributes (which are sounding suspiciously like pre-processor Macros in some comments above).

    The debate reminded me of another recurring chore – implementing IDisposable. The following is taken from http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconImplementingDisposeMethod.asp

    // Allow your Dispose method to be called multiple times,
    // but throw an exception if the object has been disposed.
    // Whenever you do something with this class,
    // check to see if it has been disposed.
    public void DoSomething()
    {
        if (this.disposed)
        {
            throw new ObjectDisposedException(GetType().FullName);
        }
    }

    The consequences of forgetting to do the check could well be an attempt to access a null member variable (post-Dispose, that is). I guess this situation points towards pre-condition/post-condition checks.

    Am not familiar with Eiffel, but I imagine these behave similarly to NUnit’s SetUp/TearDown attributes, right?

  21. Erno says:

    A default value for a (null) parameter is no solution, as this would make the component responsible for creating a relevant object, which is very hard in many cases.

    Attributes to do pre-conditions (ranges) can only be added when runtime support is added AND when the pre-conditions are part of the interface, so that versioning issues are consistent with the world as we know it.

  22. Luc Cluitmans says:

    I agree with Daniel O’Connell that attributes are not the way to go, though they were my first reaction.

    After reading his entry I thought about something really cool: mark a type as non-nullable by tacking a ‘!’ to it, just like a ‘?’ makes valuetypes ‘nullable’. So, declare your method for instance as ‘void MyMethod(string! aargh){}’ to indicate that the argument aargh must not be null.

    Well OK, Cyrus of course came up with that idea already two months ago, as was mentioned above, see http://blogs.msdn.com/cyrusn/archive/2004/06/03/147778.aspx

  23. Ejor says:

    And why not generalize a little.

    Like in Eiffel ( http://www.eiffel.com ) I want to have pre- and post-conditions on my arguments and internal state.

    I think the problem is not only [NullNotAccepted]; I’d want something like:

    void MethodValidate(int a)
    {
        [precond]
        a > 0 && a < 10
        ……

  24. Null reference exceptions are by far the greatest source of bugs I experience. A solution that moved some of the handling of nulls into compile-time checking, while deferring any undecidable cases to runtime would be a boon.

    The attribute approach seems ugly to me, and the approach using type! notation seems to me the most attractive.

    In fact, the Nice language approach of making all reference types non-nullable by default is very appealing. It’s quite rare that I actually want a nullable reference type, and it would be better if I explicitly declared it as being so (using the type? notation).

    I realise, however, that making reference types non-nullable by default would be a backwards-compatibility nightmare.

  25. One quick sidenote: the ?? syntax from nullable value types would be very useful for reference types also.

    Instead of this expression:

    object foo = (bar == null ? baz : bar);

    one could use this expression:

    object foo = bar ?? baz;

  26. joc says:

    In my experience, now that array bounds are checked, NullReferenceException is the most frequent cause of program failure. So it definitely needs to be tackled. And there is only one effective way: at the type level, so that things can be caught at compile time.

    Whatever the syntax (though I like "string!" to balance "int?") things to be considered would be:

    – type compatibility: string! is assignable to string, the inverse is not true and needs a cast (runtime check)

    – definite assignment for fields: at the end of every constructor, all the fields of non-nullable types should be assigned; inside a constructor a non-nullable field is not usable unless/until it is assigned (on all paths)

    – the "new" operator: returns a non nullable type

    – optionally (but this is a point which is more delicate to formalize): when the compiler can prove that a variable of regular type contains a non-null value in a particular region of code, promote the variable to a non-nullable type in this region. Example

    string s;
    if (s != null)
    {
        // s is considered of type string! here
    }
    else
    {
        // s is of type string here
    }

    I clearly realize that proving non-nullability is not feasible in the general case, but it is at least in cases of the form "cond1 && cond2 && cond3" where one of the conds is of the form "x != null", inside if, while and do statements.

    – optionally 2 (if the former is implemented): emit a (disengageable) warning when accessing a member (through the "." notation) on a variable of regular type (a disengageable error would be even better, but I am afraid people are going to throw stones at me :). Example:

    if (s != null)
    {
        int i = s.Length; // OK
    }
    else
    {
        int i = s.Length; // Warning
    }

    Is it worth the effort? IMO, yes, definitely. Many, many bugs would be caught at compile-time.

    Will existing code be broken? No; if the syntax is carefully chosen, it is a pure addition.

    Will existing code benefit from it? No, but future code will, and this means code written for the ten years or so to come.

    Does it have to be in the CLS? Preferably.

    Does it mean that all languages should be modified? Not necessarily. Not all languages need compile-time type safety. For example, VB.NET could emit run-time checks when dealing with imported non-nullable types.

  27. IlkkaP says:

    Having object as a parameter is quite a risk anyway, so the developer should be prepared to handle the risk. What if a caller sends another kind of object than the function expects? I guess a cast somewhere would break the code. So saying object is like saying ‘I’m prepared to expect strange things’. And IMHO, this includes null objects too. If the parameter is strongly typed, I’d conclude quite easily that a null is not a good idea, since the function declares that it expects that kind of object. I understand this is quite strict, but a strongly typed system should be.

    Automatic parameter validation is another story. I like Eiffel’s pre/postconditions and Ada’s declarative range checking. But these basically just "write" code for the developer, unless the compiler can do static checking (which is not usually the case). Often daunting, granted. To help the developers (on both sides), it would be good to have declarative parameter validation that can be seen on both sides. It’s no good if the caller cannot see (for example with Intellisense) what parameter checks are made by the method.

    For this, I would see the checking as a qualifier of the parameter, not a template or attribute, like out and ref already are. Something like an ‘in’ qualifier (as the opposite of out) would mean required/must exist (just like out). I don’t see any substantial changes for the CLS/CLR in this case (looking at the SSCLI), since it looks, to me, like the out qualifier. It would not break the existing code base. For interoperability, the host language might not even think about seeing/handling/implementing ‘in’; if it is outside the language’s semantics, it could still ignore it and pass null.

    For value types, it could get more complicated.

    My dream would be to implement three-valued logic (also to get rid of the &lt;type&gt;?, which is, IMHO, a kludge that creates another kind of logic semantics). For value types/arrays you need range checking, validation lists… but this is off topic.

  28. Jim Argeropoulos says:

    You can have [NotNull] today if you use extensible C# http://www.resolvecorp.com/

  29. Thomas says:

    As far as I can tell, the desire for non-nullability really rests on strings. It would be nice if strings could never be null, like other value types. It would save an extra check.

    Using a generic is nonsense IMO for a couple of reasons. Firstly, it wouldn’t apply to value types, except the hybrid string. Secondly, it would be nonsensical with reference types.

    If you have:

    NonNullable<RefType> foo;

    What is foo initially?

    An attribute that throws an exception if a null is passed into a procedure has promise as a minor time saver. However, since 90% of my development is against a database, checking for null references doesn’t really seem like that much more of a pain.
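    Thomas’s "what is foo initially?" objection can be made concrete: a wrapper struct can refuse null in its constructor, but default-initialization bypasses constructors entirely, leaving a hidden null inside. NonNullable&lt;T&gt; here is hypothetical.

    ```csharp
    using System;

    // A struct that refuses null in its constructor...
    public struct NonNullable<T> where T : class
    {
        private readonly T reference;

        public NonNullable(T reference)
        {
            if (reference == null)
                throw new ArgumentNullException("reference");
            this.reference = reference;
        }

        public T Value
        {
            get { return reference; }
        }
    }

    public static class Demo
    {
        public static bool DefaultHoldsNull()
        {
            // ...but no constructor runs here; the struct is zero-initialized,
            // so the "non-nullable" wrapper silently contains null.
            NonNullable<string> foo = default(NonNullable<string>);
            return foo.Value == null;
        }
    }
    ```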

  30. Darren Oakey says:

    YAY – my favorite topic! I thought I’d write something ’bout it, but bunged it on http://programming.darrenoakey.info – too long to put here.

  31. How would you implement null argument checking

  32. Daniel O'Connell says:

    Scot:

    >>Basically to shift these runtime errors to compile time. If its a good idea or not is another question, but just saying that checking in code is the same thing is pretty off mark, IMHO.

    >Null references are only occasionally detectable at compile-time. The code to check and throw still must exist.

    Not necessarily. Much of it will require explicit casting (thus moving the runtime error up to the point where the bug exists, not where the code errors). However, as joc showed, it would be entirely possible to generate provably non-null codepaths by using definite assignment, casting, and the assumption that new never returns null. It would require some work, but it is doable.

    Thomas: foo initially is unassigned, just as existing variables are. The compiler would error out if you use foo *without* definite assignment. The compiler already does this. Once foo is assigned by new, an explicit cast, or whatnot, then its value cannot be assigned null. Also, String!? would probably best be restricted, though who knows.

    This also has value *beyond* string: it shows that a null reference is not allowed for a given parameter. I suspect interfaces will be the most popular usage, actually, though string would be common. Using a generic moves most of the functionality *out* of the language and into the type system, which makes life easier on everyone because other languages could use it, either directly or by employing the normal generic type system. It wouldn’t be CLS compliant for the foreseeable future, however.

    Everyone:

    As a whole, I don’t think attributes are a good idea in any situation in any language. They are minor annotations designed to add metadata; that doesn’t make them a good idea for adding functionality. Major problems with them include:

    1) They don’t apply to locals

    2) They can be redefined further down

    3) They can get lost in a mass of other attributes

    4) They don’t look like code. They don’t feel like code. They are pretty likely to not be as immediately obvious as actual language syntax.

    5) It’s too easy to stuff in attributes with reckless abandon. With syntax you have to think; with attributes you can just slam in what you want. That is a bigger risk than one would think.

    When you want to consider new features, one must always consider syntax without regard for attributes, or you will introduce something less than optimal (in this case by missing the semantics of non-null locals).

  33. Daniel O'Connell says:

    Darren:

    A comment, since your blog requires me to log in to comment (bleh). Your comment comparing a single compiler being sufficient to a parachute… that might stand up *if* the plane was flying at about 10 feet, because the parachute only works *sometimes*. Not requiring it in all languages means that you *still* have to perform the null check and no bug is fixed; instead you have an idiotic

    notnull string variable;
    if (variable == null)
        throw new NullReferenceException();

    In that case, I would say throw the damn feature out; it’s pure detriment.

  34. James Curran says:

    I’m not fully convinced on how many of these errors could be caught at compile-time. At the very least, some (and, I believe, most) will still need to be caught at run-time, which leads us to what the writer is really asking for.

    Presently, his code is throwing an exception when a caller uses a null object.

    If this feature is implemented, his code will throw an exception when a caller uses a null object. So, what’s the difference??

    Basically, he wants us to build into the language a way of saying "This is YOUR fault, not MINE!".

    Is that really an important goal?

  35. Daniel O'Connell says:

    James:

    It aids in debugging, and gives code semantic nullability statements instead of documented ones. While this feature will still cause exceptions, it helps by pushing the error to the caller. By doing that you increase the likelihood that the caller will understand what went wrong. For NullReferenceException, many think that you shouldn’t issue an ArgumentNullException and instead simply let the NullReferenceException bubble up. Because of this, it’s possible the NRE will occur sometime *after* the object construction, thus making it far less clear what happened. In essence, instead of his code throwing the exception, the code at the call site would throw the exception instead, making it clear that it is the caller’s fault. That makes life easier all around. It’s better when the compiler can tell the caller that he might have to deal with an error (by virtue of a cast).

    Most code can be null-proof if written after this feature is added, which is why Eric makes the comment in his follow-up post that it might have been a good idea for v1 instead of v3. It would be difficult to do without null guarantees in the BCL, but as I’ve suggested, parallel methods would help (potentially with compiler support, potentially not). It could still be quite the mess though.

    Also, a side effect of this would be definite not-null variables; you could have

    string! x = "marvin";

    and the compiler knows that x will *never* be null. And since string is sealed, the compiler would be free to emit call instructions instead of callvirt, or any other optimizations the compiler can manage using that knowledge (assuming, of course, call is faster than callvirt; one would suspect so, but it’s certainly not guaranteed, and that’s something to look into). It also means you never have to bother with a null check (it’s minor, but removing every null check from your code could become a significant change).

    But the real value is that it says "This cannot be null", period. Without this type of support the language has no way of saying that expressly. You can debate the actual value quite a lot, but I don’t think it is as simple as just moving the blame.

  36. Remove null from languages entirely.

    Null is used in several cases:

    Base case for recursive types

    —————————–

    For instance, null is used to mark the tail for linked lists. This is a mistake. Use inheritance thus:

    class Node {}
    class Tail {}
    class Element { Node next; }

    Blank fields in a database

    ————————–

    This version is way over-used. You should be absolutely sure that your field should be nullable (nulls occur much too often, even in professional databases) and mark it explicitly with something like Nullable&lt;int&gt; so that those who use the value are clued in directly.

    Temporary values to dumb compilers.

    —————————————-

    I’m sure you’ve seen the following:

    Var a;
    if (condition)
        a = 1;
    else
        a = 2;
    print(a);

    Many compilers report that a "may not have been initialized", but any human can see that this usage is fine. More sophisticated compilers can easily explore every path and decide if a variable really might be uninitialized.

    EVERYBODY – check out languages like SML, OCaml, and Haskell, where NULL DOESN’T EXIST. You can accomplish a lot more, and have fewer errors.

    If anyone has a situation in which null is absolutely necessary, email me, I’d love to see it.

    -Geoff

    http://gweb.org

  37. The first case should read:

    abstract class Node {}
    class Tail : Node {}
    class Element : Node { Node next; }
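    For reference, the option types Geoff alludes to in SML, OCaml, and Haskell can be sketched in C# as well: absence becomes an explicit case the consumer must handle. This Option&lt;T&gt; is illustrative only, not a framework type.

    ```csharp
    using System;

    // An option type: a value is either Some(x) or None, and there is no
    // way to obtain the "missing" case's value without an explicit error.
    public abstract class Option<T>
    {
        public abstract bool HasValue { get; }
        public abstract T Value { get; }

        public static Option<T> Some(T value) { return new SomeOption(value); }
        public static Option<T> None() { return new NoneOption(); }

        private sealed class SomeOption : Option<T>
        {
            private readonly T value;
            public SomeOption(T value) { this.value = value; }
            public override bool HasValue { get { return true; } }
            public override T Value { get { return value; } }
        }

        private sealed class NoneOption : Option<T>
        {
            public override bool HasValue { get { return false; } }
            public override T Value
            {
                get { throw new InvalidOperationException("None has no value."); }
            }
        }
    }
    ```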

  38. Darren Oakey says:

    Daniel:

    re: parachute – I see what you’re saying, but don’t agree at all. Suppose we add nonnull to C#, but not VB – so we define a function in C# –

    > public nonnull string Concat( nonnull string a, nonnull string b )

    If you call that from C#, the compiler will prevent you ever calling with a null parameter. Yay.

    If you call from VB – it’s exactly like you’d defined the function at the moment, ie

    > public string Concat( string a, string b )

    without having a guard on a or b inside. – so, if you send a null to it, you'll get an exception. ie – it will work EXACTLY as it currently works… (without people putting guards on each parameter)

    So – you have to be careful going across languages.

    Big Deal.

    99.99999% of shops out there will only use exactly one language – they’ll be a C# shop, or a VB shop.

    Occasionally, for various reasons, people will take advantage of the multi-language abilities. However – any shop that says "write code in anything you want" – deserves every problem they get.

    Most people will either a) design in layers, and say this particular layer is in this language – or b) design in subsystems, and say this subsystem is in this language. So – your coding standards just have to remember that you have to check nulls before you send them, (or you'll get an exception somewhere deep in code).

    So – net effect – in the very rare case where we use multiple languages in the same development – we have the situation where an exception will eventually be thrown if we send a null variable to a nonnull function – exactly as most people code right now (because most people DON’T explicitly put guards on every single parameter on every single function)

    So, in 99.9999% of cases, we get a positive benefit, and in the remaining 0.0001% cases we are stuck with the same horrible situation that we have now. Bummer.

  39. Daniel O'Connell says:

    The problem is that 98.9999% of the cases probably use a commercial or MS-provided component written in a different language. If the 99.9% rule worked, we wouldn't need the CLS.

    I doubt I would be upset if it wasn’t CLS compliant, but I would not be pleased if VB or C++ didn’t work.

    Anyway, a net positive is a good thing, but a reduced net positive pretty much results in a feature without the attractiveness that a language-independent one would have. Being a positive isn't enough; it has to be a substantial one, IMHO.

  40. Daniel O'Connell says:

    Oh, and the bit I forgot. The split means that libraries that *don't* use non-null, due to their being written in VB or whatever, take the feature entirely away from C#. It's unfortunate, but it is something that has to be considered.

  41. I see Mitch Denny hasn’t posted here yet; see his blog for his analysis of this problem:

    http://notgartner.com/posts/525.aspx

  42. Robert says:

    The whole concept is bad! You’d just be masking the problem. Why are you calling with a null parameter in the first place? Get at the root of your problems!

    Though I LOVE C#, indiscriminate use of exception handling, and the ability to pass objects around without worrying about "who owns the memory", lead to spaghetti code and messy practices!

    Some obsolete languages, like Objective-C, won’t complain if you dereference a NULL pointer! That’s really, really bad.

  43. Daniel O'Connell says:

    Robert: The core of the concept, IMHO, is getting the compiler to the point where it will error out if you could be passing a null. Having a semantic rule that says you cannot pass null would be the fix you want, wouldn’t it?

  44. Matthew W. Jackson says:

    Just a thought…and this doesn’t solve the problem of the BCL not having a concept of non-nullable throughout the framework…

    Rather than implementing such a feature as NonNullable<T>, which has the uninitialized-struct problem I mentioned here ( http://weblogs.asp.net/ericgu/archive/2004/08/17/215779.aspx#216078 ), could this be implemented if C# and/or .NET supported subtypes?

    Languages such as Pascal, Delphi, and Ada (and other non-Pascalesque languages, I'm sure) allow you to define one type to be a subtype of another, and the compiler can check for several problems rather than relying on checks at runtime.

    NonNullable would just be a subtype of a reference where the value of null was not allowed. It would be similar to having a UInt32 with its range restricted to between 1 and UInt32.MaxValue.

    I have often wished I could restrict primitives to a subrange, but I can see where this would be very hard to make CLS-compliant. How would languages that automatically promote numbers to bigger versions when they overflow work with other languages that require a subset type? There are a lot of problems with requiring all CLS languages to support this functionality.
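
    A library-only approximation makes both the idea and the uninitialized-struct hole concrete. This is a sketch, not an existing API; the NonNull<T> name is hypothetical, and it requires C# 2.0 generics:

```csharp
using System;

// Hypothetical NonNull<T> wrapper: the constructor refuses null, so a
// properly constructed value is guaranteed non-null for any method
// that accepts NonNull<T> instead of a bare reference.
struct NonNull<T> where T : class
{
    private readonly T value;

    public NonNull(T value)
    {
        if (value == null)
            throw new ArgumentNullException("value");
        this.value = value;
    }

    public T Value
    {
        get
        {
            // The hole: default(NonNull<T>) bypasses the constructor,
            // leaving the field null, so a runtime check is still needed.
            if (value == null)
                throw new InvalidOperationException("NonNull<T> was never initialized");
            return value;
        }
    }
}
```

    A true language-level subtype, as in Pascal or Ada, could close that hole, because the compiler could forbid the all-zero default value outright.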

  45. Daniel O'Connell says:

    Primitive subranges are an interesting idea…I wonder what kind of typing could be used to achieve that…

    A single language system could probably handle it easier than .NET would. Trying to design so that VB, et al can use it safely as well is tricky. I wouldn’t want double checking, after all, ;).

  46. Marcus Fansom says:

    How about removing null from the language? If C# didn’t have null, people would have to create explicit null objects and they’d be able to tailor behavior as they need to.

    Well, yes, it is a little late for that but we could do it in whatever language eventually replaces C#.
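
    What Marcus describes is the classic Null Object pattern. A minimal sketch (the ILog/NullLog names are illustrative, not from any existing library):

```csharp
using System;

interface ILog
{
    void Write(string message);
}

class ConsoleLog : ILog
{
    public void Write(string message) { Console.WriteLine(message); }
}

// The "null object": safe to call, silently does nothing.
class NullLog : ILog
{
    public void Write(string message) { }
}

class Component
{
    private readonly ILog log;

    // Callers that don't want logging pass a NullLog rather than null,
    // so DoWork can call Write without any null check.
    public Component(ILog log) { this.log = log; }

    public void DoWork() { log.Write("working"); }
}
```

    The tradeoff is the one raised in the thread: every interface needs its own hand-written do-nothing implementation.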

  47. Matthew W. Jackson says:

    In this case, fixing it in one language doesn’t change the fact that the framework is still littered with methods that don’t indicate whether null is a valid value or not.

    And with all the work being put into moving APIs to the .NET Framework, it is already much too late to make such a drastic change.

  48. Daniel O'Connell says:

    Marcus:

    I think people would be rather crabby about having to create null objects constantly. It's simply not a good use of my time in most cases.

  49. Brooke says:

    Robert:

    I don’t know what you imagine yourself to be saying when you call Objective-C "obsolete" but it’s obvious you don’t know Objective-C.

    Objective-C is a strict superset of C. If you dereference a null pointer, you're using the "C" part of Objective-C. You'll get the same effect as if you were doing it in a plain C program.

    Objective-C adds a Smalltalk-style dynamic object model on top of C. One interesting feature is that a message sent to a nil object returns nil — it doesn’t dereference a null pointer and it doesn’t cause a runtime error.

    That’s a language design feature that Smalltalkers envy. You might want to read Nevin Pratt’s "A Generalized Null Object Pattern" in the Smalltalk Chronicles to learn more about it.

    You might want to invest some time gaining minimal competence in Objective-C. You won’t understand the benefits until you’ve put in the work, if then. (See Paul Graham on the "Blub Paradox".) But at least you’d be able to explain why dereferencing a null pointer produces a runtime error in Objective-C and why sending a message to nil does not.

  50. Brooke says:

    Robert:

    I posted my previous comment before reading the link to your page. There you mention you’ve worked for several large corporations. You give names (IBM, Adobe, Disney, Apple, Olivetti, …) and you post links. All of the links point to the home page for your past employers with one exception: Apple is linked to "Jerkcity.com".

    Would it be fair to say you didn’t leave there on good terms?

    A few paragraphs below you describe Visual Studio in glowing terms and offer a comparison with another development environment. Interestingly the comparison is not with IBM's Eclipse or IDEA from JetBrains, both of which are widely used, widely admired and widely taken to represent best-of-breed development environments not from Microsoft. The single comparison you make is between Visual Studio and "Apple's crude system for its antiquated Objective-C language".

    That's so odd. If you want to compare Visual Studio to the competition, you'd be better off comparing it to other development environments for Windows or the best development environments out there on any platform. Were you really thinking the top priority was to win over all the Objective-C programmers using OS X? And did you think you'd win them over just by using the words "crude", "antiquated" and "obsolete" without any further discussion?

    I don’t know what transpired between you and Apple, but you sure seem bitter about it. And you express that bitterness in _very_ specific ways — it’s all directed towards Apple’s use of the "antiquated/obsolete" Objective-C language and the "crude tools" Apple provides for working with it.

    It would be laughable if it weren't so sad. Were you let go because you couldn't program in Objective-C and the new talent saw you as less than ideal for the pace and direction of their future efforts? That would explain your bitterness and the form in which you express it and your ignorance of the language. But it would also single you out as the world's most unreliable source on the very topics you seem most interested to discuss.

    And it's embarrassing. For example, you go on to say:

    "I never fail to make Apple programmer’s jaws drop when I demonstrate Visual Studio .NET to them (and I usually get excuses like `that’ll be there in the next release’)"

    But you don’t provide a single example, leaving a reader to guess what you have in mind. Hoping to learn more, I looked at the Visual Studio 2005 Beta home page at

    http://lab.msdn.microsoft.com/vs2005/

    There I found that the #1 suggestion from beta testers is "edit and continue support for C#". The response from Microsoft is "We are actually targetting implementing Edit and Continue for C# for future release. So keep your figures crossed :-)"

    Talk about irony. Apple has provided edit and continue support for Objective-C since 2003. Microsoft is "targeting" edit and continue support for C# in a release subsequent to Visual Studio 2005, and even then there’s no commitment — you have to keep your fingers crossed.

    Your conclusion: "Microsoft’s system looks years ahead."

    Can you see how that looks to a disinterested reader?

    If you were badly treated by Apple, and you blame it on their adoption of Objective-C with OSX, I’m sorry to hear it and I wish you well writing raves for Visual Studio.

    But your uninformed and unsubstantiated expressions of negative sentiments towards Objective-C undermine your credibility and embarrass you. If you can do better, do yourself a favor and try.

  51. Joe Cheng says:

    As Damien said, the Nice programming language has solved this problem:

    http://nice.sourceforge.net/safety.html#id2429032

    Since it’s based so closely on Java, they of course needed to provide interop with existing Java libraries:

    http://nice.sourceforge.net/manual.html#optionTypesJava

    Would sure be nice if someone would write a Nice# compiler… :-)
