Who wants non-nullable types (I do, I do!)?

Many people were intrigued at Tech-Ed when Anders revealed the deep language integration we were giving to the new System.Nullable<A> type.  I could go more in depth into how it works, but for now I’ll just briefly explain it.  Nullable<A> attempts to give a type that everyone can use when trying to represent value types that have no value.  This concept routinely comes up in database programming, where you can have a field like ‘age’ that can be set to ‘null’, which indicates that you don’t know what the value actually is.

Basically we’ve special-cased Nullable<A> to be a mixin of the form: Nullable<A> : A.  (We also let you simplify how you type it by writing “A?” to mean the same thing.)  So where you need an A you can use an A?, and you can treat an A? (more or less) as an A.  For example you could do:

            int? a;

            int? b;

            int? c = a + b;

I.e. we’ve allowed you to use the ‘+’ operator on the int? the same way you’d use it on an int.  If either of the arguments is unset then c will be unset.
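
In terms of the Nullable<A> type itself (HasValue, Value, and GetValueOrDefault are the actual members; the surrounding code is just an illustrative sketch), the lifted ‘+’ behaves like this:

```csharp
int? a = null;   // unset
int? b = 2;      // set

int? c = a + b;  // lifted '+': if either operand is unset, the result is unset
Console.WriteLine(c.HasValue);            // False

int? d = 1 + b;  // both operands set, so the result is set
Console.WriteLine(d.Value);               // 3

Console.WriteLine(a.GetValueOrDefault()); // 0 (fallback value for an unset int?)
```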

There’s been a lot of debate over many parts of this feature, but one of the things that has come up is that we seem to be multiplying the number of ways you can say that something is null, and we’ve also added confusion into how all the different ‘null’ values interact.  What’s interesting is that we haven’t helped say something that many people want to say, namely that something is not null.  Checking that something is not null and dying because of it is such a common behavior that I ended up writing something to handle it for me:

    public static class Argument
    {
        public static class Validate
        {
            public static void NotNull(object argument, string name)
            {
                Debug.Assert(argument != null);

                if (argument == null)
                {
                    throw new ArgumentNullException(name);
                }
            }

            //other validation methods
        }

        //other things you can do on an argument
    }


You would then use this by writing: 

Argument.Validate.NotNull(foo, "foo");
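
In a real method the guard calls would sit right at the top; something like the following sketch (Process and its parameters are made up for illustration):

```csharp
public void Process(string data, Stream output)
{
    Argument.Validate.NotNull(data, "data");
    Argument.Validate.NotNull(output, "output");

    // Past this point both arguments are known to be non-null.
    byte[] bytes = Encoding.UTF8.GetBytes(data);
    output.Write(bytes, 0, bytes.Length);
}
```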

I’d much rather have a built-in language feature, compile-time checking, and BCL support for that kind of construct.  I.e. the ability to say something like:

string! s

Where the ! means “can’t be null”.  You could then decorate your methods, ending up with things like public string! SubString(int start, int end); etc.

You could automatically pass an A! anywhere you needed an A, and using the ! operator you could convert an A into an A! (throwing if the object was null).
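
To make the proposal concrete, here’s a sketch of what the conversions might look like (this is hypothetical syntax; none of it compiles today):

```csharp
string! s = "hello";      // non-null by construction
string  t = s;            // string! -> string: always safe, implicit

string maybeNull = Console.ReadLine();   // ReadLine can return null
string! definitelySet = !maybeNull;      // string -> string!: a runtime check,
                                         // throws if maybeNull is null
```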

This, like const, would require work but would be so valuable for the compile-time verification of what is such a common problem.  Threading this through the BCL would make it safer, potentially catch bugs, and would benefit everyone (except the poor BCL authors who would have to retrofit their code).  But really, would the cost be that great?  You’d be able to _get rid of_ null reference checks everywhere.  If the compiler were really nice it would look for these issues and give you warnings.  I.e.:

if (null == foo)  //warns: foo can never be null, expression will always be false

return foo; //warns: return type of the method is a string but you always return a string! (a non-null string, not a statement of exclamation 😉 ) consider having your method return type be string! instead.

What do you think?  Useful?

Edit: Eric discusses Nullable<A> as well

Edit: Luke discusses it as well, and delves into the issues integrating with the current BCL.

Comments (53)

  1. A programmer after my own heart.

    My original thought would be to just allow an implicit conversion of a nullable type to bool for the purposes of if checking… see here


    But now that I’ve read this… I think that non-nullable types would rock. It would almost be like going back to the days of user defined class instances being allocated on the stack… sometimes I miss the C++ days.

  2. damien morton says:

    I’m with you – nullable should go hand in hand with non-nullable.

    In fact, non-nullable types have behavior that is a strict subset of normal variables, and so introducing them should be a problem-free exercise.

    nullable types are a superset of the behaviour of normal types, and changing a normal variable to a nullable variable can potentially cause problems.

  3. Giampiero says:

    I think the usefulness of something like this is questionable. For example, what happens when you set a string! to the return value of a function that returns string, and that value is sometimes null? There is no compile-time check for that, so you still have to check that value in your code everywhere (unless you want ArgumentNullExceptions) and the usefulness is lost. So from a re-usability standpoint this idea works great: I grab an API and it declares object! to be one of the parameters, so I know I cannot pass a null value. But this means that anything I get from an outside method would have to be checked by me before I could pass it into this API method. So that doesn’t really help the consumer of the API, does it?

  4. I like it 🙂

    Something that I would like to have in C# is const, but I could live with A!

    Another thing that I would like to see in the framework is an INullable interface, just like the one in System.Data.SqlTypes, but I would like to have it in the System namespace. There is an INullable today, but it’s internal and used by Nullable<T>. The reason why I want an INullable interface with an IsNull property is that I could then create null objects, for example NullCustomer. Martin Fowler wrote about Null objects in his refactoring book, and I have used them in my solutions and they are very useful.

  5. As someone who works with databases and financial software – where 0 and null are two different beasts entirely – I cannot wait for my pain to end.

    I’m currently using types I created because the SqlTypes weren’t serializable. (And Cyrus, if you have any pull at all, would you ask SOMEONE there why all the base types are sealed? Please? Argh..)

    Here’s waiting anxiously.

  6. Nick: The C++ days… *shudder* 🙂

    Giampiero: you would not be allowed to set a string! to a function that returns a string. You would have to explicitly check for null in that case (by using something like the ! operator I talked about above). It could look something like this:

    string! str = !! methodThatReturnsAPotentiallyNullValuedString();

    That would become a runtime check. If you got an instance back it would be ok; if you got a null value back it would throw. Similar to how casting can throw today at runtime.

    The benefit is within your own code, and if the APIs are used you will, in general, have less checking to do than today. (Note: today you’d have to check that the return value was not null as well.)

  8. AT says:

    1. What will be an Exception error message for this test case ?

    A a = null;

    A! na = !a; // <-- ??

    Will it be a meaningless System.NullReferenceException??

    How do you expect a user to be able to decrypt this message and fix the origin if his boss is asking him to complete a report in 10 minutes ;o)??

    I expect more user-friendly exceptions to be given to the user if you are unable to avoid them.

    2. Also I do not see big benefits for this.

    It’s about the same as what Java users have to do with (MyObject) Collection.get(i), without actual knowledge of the data inside the collection.

    The only benefit is early null error detection if the value is not used immediately. But I always check values before storing them anywhere. This ! operator will simply be a shortcut for this.

    MSFT could possibly implement it at the compiler level, without changing the CLR, as a Microsoft Extension ;o)

  9. Giampiero, as the consumer of that API you have to do that check anyway, because if you don’t, the API will throw a NullReferenceException (or ArgumentNullException) when you pass in the null value.

  10. Brian: I’ll look into it and see if there’s someone who can answer your question for you.

  11. AT: what is the use of the exception you get in the:

    "string a = (string)o;" case?

    InvalidCastException. Not very helpful. But you can use it plus a stack trace to figure out what’s going wrong and fix it.

    The help you get is similar to generics. By having this syntax you remove the need to place an:

    if (object.ReferenceEquals(null, someArgument)) {
        throw new ArgumentNullException("someArgument");
    }
    at every single entrypoint in your code. Not only that, but you need to make sure your documentation reflects that that’s what will happen.

    Having that directly in the signature means that you don’t have to do that check _ever_. The consumer also knows now that there will be a problem passing null.

    Imagine the following case. v1 of an API takes a string and allows it to be null (the API says it’s undefined what will happen with a null argument). v2 no longer allows it to be null and throws in that case. You call that API only extremely rarely with a null value and you expect it to work as before. However, now you get some random crash in a critical service that you have to track down and fix. With strong support in the runtime/BCL/language, you instead have a compile-time check that will tell you flat out: this argument can not, under any circumstances, take a null value. You must perform that check yourself.

    By doing that check yourself you protect against unknown exceptions getting thrown to you at runtime that would cause you to crash.

    I think the !! operator would just give you a NullReferenceException. If you want better messages then just type out:

    if (object.ReferenceEquals(foo, null)) {
        throw new SpecializedException("very wrong bad!");
    }
  12. Kael Rowan says:

    Definitely! In my personal ideal world, C# would have originally had support for nullable types and then all types/arguments *not* defined as nullable would be non-nullable by default. In other words, even reference types would always have to be set to an instance of an object unless they are declared as nullable (string?, ArrayList?, even object?).

    In the vast majority of cases, APIs always require instances of objects as arguments and rarely do they accept null references. Not only would I (and my development peers) LOVE to have non-nullable types, we would love to have an option to compile ALL types as non-nullable unless specified otherwise (such as string?, ArrayList?, or even object?). I would gladly change my signatures to void foo(object? nullableObj) when a null reference is acceptable if it would save me hundreds of lines of if (nullableObj == null) throw new ArgumentNullException("…");

    To the person(s) above who object to exceptions being thrown when null references are set to a non-nullable type (i.e. string! myStr = null;), I say you would have had to check for null manually and throw an ArgumentNullException or NullReferenceException yourself, or if you didn’t do the check then you’d get a NullReferenceException when you tried to access the null reference (i.e. myStr.Length). If your problem was calling an API that was changed to accept a non-nullable argument, then you would have gotten an ArgumentNullException after passing null to the API anyway.

    Having compile-time checking for the possibility of null being set to non-nullable types would be great and prevent tons of runtime errors, but even without compile-time checking the check could be automatically put in place at runtime which would save developers significant development time and increase robustness by taking the burden of null checks off of the developer.


  13. Phil Weber says:

    Hi, Cyrus: Luke Hutteman (of SharpReader fame) proposed this very idea yesterday: http://www.hutteman.com/weblog/2004/06/02-181.html

    +1 from me!

  14. Cyrus,

    I think this is a nice feature, but I cannot see its usefulness. I have never encountered a time when I needed to keep something from being null.

    Why pass the argument through the method in your post? Mostly because you’re unsure of what you get, since you would get it from the user in some way or something. All in all, an untrusted call.

    This just takes the responsibility for the value not to be null and places it in the caller’s hands. It forces the caller to use the non-nullable type. It’s also harder to trace the exception if you don’t have the PDB.

    You talk about the versioning case where V1 allows null and V2 doesn’t. This would cause a throw when sending null. This is where side-by-side execution comes into play. You will always use the V1 assembly.

    Switching to V2 without having read the docs and without a unit test case exercising null calls to this in your code?

    tsk tsk tsk.

    Again, it’s nice syntactic sugar, but then again, too much sugar is bad for your teeth. 😉

  15. Giampiero says:

    Ok, I got it now. It took me a while (and a few rebuttal examples that didn’t pan out) to realize where the benefit is.

    But one more thing: what happens to the BCL APIs that don’t use this construct (of which there are a lot)? If you don’t change the APIs, what is the point in having this? If you do change the APIs, you break backwards compatibility because everyone would need to use that new operator.

    Basically it just seems to me that this makes things easier on the API writer’s end and harder on the consumer’s end (more casts). A language convenience like this could be nice but really doesn’t let you do something that you couldn’t do before (unlike Nullable<T>). <Opinion>Seems like more work than the gain.</Opinion>

  16. David Larsson says:

    Synchronicity… I had this exact discussion today about the usefulness of non-nullable types that complement nullable types. I want it!

    There are issues, of course… A non-null type could not be a member of a struct, since it must have a constructor that sets all members to their default values (null for ref types). A non-null class member would have to be initialized in the constructor of the class, but how would one guarantee that the member isn’t accessed (indirectly, via some other method called from the constructor) before the class has finished constructing? So, it seems that you wouldn’t be able to guarantee with 100% certainty, even if your program is verifiably type safe, that an instance of a non-null type is never null, but I still think this is a useful feature.

  17. Hallgrim says:

    I do too, I do too!

    Non-nulls force types to define their behaviour when their value is unknown, instead of letting everyone assume what the behaviour is.

    A good example is numbers. Instead of checking for null:

    if (number != null && number < maximum) …

    The behaviour for a "null number" is defined by, for instance, NaN. It will always return false for comparisons, so the example becomes:

    if (number < maximum) …
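
For doubles this already works in C# today: every ordered comparison against double.NaN is false, so no separate check is needed (a small illustrative sketch):

```csharp
double number = double.NaN;  // the "unknown" number
double maximum = 100.0;

Console.WriteLine(number < maximum);   // False
Console.WriteLine(number >= maximum);  // False
Console.WriteLine(number == number);   // False - NaN compares unequal even to itself
```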

  18. Orion Adrian says:

    One confusion I’m seeing is that cardinality is being confused with reference versus value (i.e. heap allocated versus stack allocated). They aren’t actually the same thing, but by default a cardinality of 0 or 1 is associated with reference variables and a cardinality of exactly 1 is associated with value types. They don’t have to be and in my opinion they shouldn’t be.

    IMO, by default all objects should have a cardinality of exactly 1 (!). So string should be the same as string! and int should be the same as int!.

    As a programmer, I shouldn’t really care how variables are stored (i.e. stack versus heap). That’s really the compiler’s problem. But I always care about the structure of my data (i.e. its types and those variable’s cardinalities).

    Orion Adrian

  19. Omer: Interesting. Almost every single API I write cannot deal with null inputs. This affects me in 3 ways. I must:

    a) check for null and throw in my methods

    b) keep my docs up to date to tell consumers of this problem

    c) fix crashes that occur because I forgot to check if a value was non-null before sending it into the library.

    With the new system I just write:

    foo(string! s) and I get:

    a) No need to check for null

    b) No need to keep docs up to date. The signature is the doc

    c) No need to fix NullReference or ArgumentNull crashes

    As the consumer of the API I no longer need to:

    a) Read the docs to know what set of values are allowed

    I do however have to:

    a) Check my values to make sure they’re not null before I send them to the API. However, this is not new; I had to do this before. Now I’m just forced to do the safe thing rather than finding out weeks from now that I was doing something wrong when an exception gets thrown.

    Compile-time safety is a very very good thing IMO. I’d rather have static type checking than runtime failures any day 🙂

  20. Orion: Could you explain a little bit more about what it means for an object to have cardinality? I think I understand, but I’d appreciate more information.

    Also, why should things, by default, be allowed to be valueless? It seems safer and more clear to make them valued by default and to only explicitly make them valueless if the need arises.

  21. AT says:


    I feel that the change you proposed, at C#’s current stage, will complicate things much more than it benefits users.

    If you have problems checking input params, this can be because you use the wrong code to do it. All param validation inside methods can be a one-line call to a helper class implementing the Bouncer pattern (see http://c2.com/cgi/wiki?BouncerPattern or http://24.odessa.ua/java/Validation.java as an example)

    Something like :

    public boolean doFoo(Object param1, Method param2, Object[] args) throws NullPointerException {
        …
    }

    Also you are reducing problem too much. There is a lot of other checks that must be performed for params passed.

    a) String must be not empty or s.trim() not empty

    b) Array must be non-empty

    c) All values in array must pass validation – like a non-null or like a)

    d) Values for two (or more) params must be non-conflicting – like min<=max or each value in [min,max] range

    etc ..

    You will be unable to solve all of these problems at the compiler level.

    But introducing a Validation class will unify the exceptions raised across the entire project.

    Possibly some rules for FxCop or another source analyzer could allow several kinds of errors to be diagnosed easily.

    If you worry too much about performance – you are currently living in 2004, and if you really need this you can check all params only in debug builds.

  22. AT: Excellent points. BTW this has _nothing_ to do with performance. Also, see in my message how I implemented that pattern.

    Note though: with the current model both the API consumer and producer must agree on this contract and ensure that it is maintained. Of course, this is no different from any other API contract; however, in this case it is such a common occurrence (literally hundreds of calls and methods I must document about this single issue) that the saving to both consumer and producer would be great (IMO).

    Also, why would you limit types in that way? People wanted nullable structs because they found structs too limiting in not being able to be null. Why not allow symmetry in the type system? Nullable/non-nullable versions of every type?

  23. AT: I like your idea of a single unified location to verify arguments. That’s what my ‘Argument’ class is intended to provide.

    I see this more as a completeness argument (and maybe I should have pitched it that way) with the added benefit that you get strong compile-time safety against a common class of problems.

  24. damien morton says:

    AT: you are right that non-null declarations can’t possibly completely validate parameters, but consider this: what is one of the most commonly encountered exceptions in Java and C#? It’s the null pointer exception thrown at runtime.

    By having verifiable declarations on method parameters and return values, your Validation.NonNull() methods only need to be inserted when a method is called with potentially null values.

    I know from my experience that the C# codebases I work on tend to grow null checks in far too many places. Being able to declare a method as not accepting potentially null arguments will mitigate that null-check growth, and allow the compiler to do the work for you.

  25. Anecdotally, the number of bugs/crashes I’ve had to deal with as a programmer due to dereferencing null has been enormous. In fact it’s almost always an issue of:

    a) multi-threading woes

    b) memory fudging (problems with allocating/freeing)

    c) simple simple logic errors concerning null’s

    The issues that are actually complex and difficult to weed out are far rarer and less time-consuming than just dealing with these. Any system that can remove this class of issue for me at compile time is a big win in my book.

  26. damien morton says:


    You could even have the compiler infer when variables can never be null:

    i.e. the following code should compile and run just fine.

    string! foo(string s)
    {
        if (s == null)
            return "";

        return s;
    }
    AT: Another point: while I wrote the validation code, I don’t like it. There’s already duplication in it that I want to refactor, namely the passing of both the argument and its name. It’s small, but it is a smell and it’s more bookkeeping for me to keep track of. The internal call also won’t update my XML docs, nor will it make my callers verify that their values are not null before passing them in. All in all we’ve pushed the issue to failing at runtime, which always has the chance of getting missed. 🙁

  28. Orion Adrian says:

    Cardinality is the number of objects in a set. Every variable you have is essentially a set (not a mathematical set). Basically every variable has a minimum number of values and a maximum number of values it can contain.

    Value variables in C# must be there (minimum cardinality of 1 and maximum cardinality of 1 often written (1,1) or !)

    Reference variables in C# may be there ( (0,1) or ? ) .

    Arrays in C# have a cardinality of (0,*) or *, which means 0 or more. The double use of * is often confusing, but the two mean different things. The first * means many (or more than 1) and the second * is short for a cardinality of (0,*). Though the maximum cardinality of an array (as opposed to an ArrayList) can be specified to be some number N.

    string[12] has a cardinality of (0,12) in that it can contain anywhere between 0 and 12 values.

    Now a simplification can be made in the language (some already have this) to allow any variable to have arbitrary cardinality. For example:

    string[2,12] would be an array of strings that can have between 2 and 12 values. Anything else would be an error.

    One nicety about this particular technique is that arrays and non-arrays merge. There’s only one set of concepts. For instance, what’s the difference between an empty array and a null array? Not much (there should be none). The compiler should take care of the initialization on first use.

    So if an object is null, it’s Empty. If an array has 0 elements it’s also Empty. The test becomes the same. Even if we just limit ourselves to (?,!,*,+) we still come out ahead.

    I’ve blogged about this and included a few comments on casting. Hopefully it will make a little more sense. If it doesn’t just drop me a line at: oadrian@at@hotmail.com . Remove the extra characters.

    Orion Adrian
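
The empty-array versus null-array point above is exactly where today’s C# makes callers pay; a small illustrative sketch of the asymmetry:

```csharp
string[] empty = new string[0];
string[] missing = null;

Console.WriteLine(empty.Length);     // 0
foreach (string s in empty) { }      // fine: the loop body simply never runs

// missing.Length or foreach over missing would throw a NullReferenceException,
// so callers end up writing 'if (arr != null)' checks everywhere.
```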

  29. Steve Perry says:

    My only problem with nulls comes down to displaying values (on a form or on the web). How many times do I have to write (sorry for the VB code).

    If not isnull(databaseField) then
        …
    End If

    What I want to write is

    textbox1.text = format(ifnull(databaseField, "0"), "C")

    Basically: if databaseField is null, return "0"; otherwise return databaseField.


  30. AT says:

    Steve: There is a built-in function in VB for Access – Nz(checkValue, valueIfNull). You can create your own or use the built-in one if it exists.

  31. Orion Adrian says:

    "My only problem with nulls comes down to displaying values (on a form or on the web). How many times do I have to write (sorry for the VB code).

    If not isnull(databaseField) then
        …
    End If

    What I want to write is

    textbox1.text=format(ifnull(databaseField,"0"),"C") "

    The problem I see with this is why the field is null in the first place. This seems to be more a problem with the constraints (or lack thereof) placed on the data and less a problem with displaying it. $0.00 and null are not the same. This isn’t something I’d like to see even if it were simple to implement.

    Ask this question, "Why is databaseField null and not 0, and if it’s null why would I want to display it as 0?"

    Orion Adrian

  32. Steve: C# also has support for this through the Null-Coalescing operator ??. It works in the following manner:

    expr = expression1 ?? expression2

    If expression1 evaluates to a non-null value then it is the value of ‘expr’. If expression1 evaluates to a null value then expression2 is the value of ‘expr’. So in C# you could write:

    textbox1.text = format(databaseField, "C") ?? "$0.00";
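
The operator also chains, which makes multi-level fallbacks compact (the variable names here are just illustrative):

```csharp
string fromConfig = null;
string fromEnvironment = null;

// The first non-null operand wins.
string value = fromConfig ?? fromEnvironment ?? "$0.00";
Console.WriteLine(value); // $0.00
```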

  33. Orion: Because there’s a difference between your view and the underlying representation behind the scenes. It would be just as reasonable to have:

    textbox1.text = format(databaseField, "C") ?? "Squashed Cockroach";

    However, it’s important for the underlying data to have the ability to say "this does not have a cost" as opposed to "this cost is nothing".

  34. Orion Adrian says:

    "textbox1.text = format(databaseField, "C") ?? "Squashed Cockroach";

    However, it’s important for the underlying data to have the ability to say "this does not have a cost" as opposed to "this cost is nothing"."

    This I agree with. I guess I should have been clearer. I just wanted to express that in this specific example the problem wasn’t in the formatting, it was in the data structure. However, how to display null is always a problem. That, and whether to display it at all.

    Orion Adrian

  35. Orion: Fascinating information about the cardinality of things. I’ve never thought of variables as sets before, but having that consistency would be great across the entire language.

    Also, thanks much for the articles. I’ll see what I can do about spreading that information around here so we can consider it for future language improvements.

    Knowing how much people care about these things goes a long way in deciding what we’ll be doing in the future.

  36. Steve Perry says:

    My point was not to say there is no difference between $0.00 and NULL. My point was that there is no easy way (in VB.NET; I guess there is in C#) to convert this null value into something that I can display to the user (i.e. N/A or $0.00 or "VALUE NOT SET" or whatever is required).

    By the way, I wrote my own function which emulates FoxPro’s NVL to do this, but it requires 5 overloaded functions (boolean, date, integer, string, decimal). I would rather have it built in.

  37. Neil Conway says:

    This is nit-picking, but a "null" value in C# does not really model the NULL concept in a (standards conformant) SQL RDBMS. Per the SQL standard,

    NULL = NULL evaluates to NULL (_not_ "true")

    NULL = x evaluates to NULL for all x (even if there are values of x that happen to be NULL)

    NULL <> NULL also evaluates to NULL (not "false")

    You get the point. The justification is that NULL means "unknown value", so it isn’t known whether two unknown values are equal to one another.

  38. Neil Conway says:

    Ah, ok — Cyrus has beaten me with the clue stick offline, so I now _really_ understand how Nullable in C# is going to work. So ignore the previous post 🙂

    (That said, it seems to me that overloading the "null" literal to mean both "unknown" and "all-zero-pointer" is asking for confusion…)

  39. Neil: I’ve got my own reservations about this issue 🙂

    That said, I think the readers might appreciate an example of where this could be quite confusing given our current understanding of how null works in C# now. Would you like to show that?

  40. RichB says:

    Please please please implement A!. Scratch nullable types, A! is 10 times more useful.

  41. Nicole Calinoiu says:

    I’m with AT on this one. Non-nullable types would be lovely, and I’d certainly use them. However, assuming that you operate under finite time and resources like the rest of us, I’d much rather see implementation of a well-integrated, declarative pre-condition framework before/instead of non-nullable types.

    From a purely personal point of view, null values are actually the simplest of my validation hurdles. I validate early, often, and very thoroughly, and I cannot think of a single case in which I would accept a non-null value that should not be subjected to further restrictions. However, implementing all this is currently quite a bit of work. The effort involved could be dramatically reduced, particularly wrt communicating the rules to consumers and verifying that validation has actually been applied, if an appropriate pre-condition framework were in place.

    So that’s anal-retentive, paranoid me. What about normal people? <g> You mentioned in your comment #147989 that null reference exceptions are one of the most common classes of exceptions. This wouldn’t surprise me in the least; far too many folks barely validate at all, so of course they’re having problems with unexpected nulls. Even for strict validators, it’s just too easy to miss a validation point, so these will creep in despite our best intentions.

    That said, at least a null reference exception has a reasonable chance of being caught in dev or test. Even if the problem does make it into production, it’ll probably be relatively easy to trace and fix _before_ the damage runs too deep. Failure to apply other types of restrictions can be much more difficult to find and fix, and the chances are much greater that they will have security implications. (Please note that the two previous sentences are meant to represent sweeping generalizations. I’m well aware that there are glaring exceptions on both sides of the fence. <g>)

    All in all, I’d expect that there are many more problems (some of which will never show up as exceptions) in production code today due to failure to properly validate non-nulls than there are null reference problems. It’s difficult to group the former because they can manifest in so many ways, but the balance shouldn’t necessarily be weighted toward the latter just because they’re more obvious or easier to count.

    Adding a pre-condition framework would help address both classes of validation, even if it doesn’t go quite so far wrt enforcing non-null values. Adding non-nullable types would only address one problem, and might even hurt wrt the other since it could potentially increase the chances that folks forget about applying any validation at all. Of course, adding both would be fantastic!

  42. Isaac Gouy says:

    Cyrus, <b>you can experience the joys of non-nullable types</b> by writing code for the JVM and using Java libraries.

    In the <a href="http://nice.sourceforge.net/manual.html#optionTypes">Nice language</a> standard types <i>are</i> non-nullable. If we want to allow null values then we must declare an option type (String is non-nullable; ?String is nullable).


    void main(String[] args){
        ?String someNull = null;
        ?String someString = "Some String";
        println( foo(someNull) );
        println( foo(someString) );
    }

    String foo(?String s){
        if (s != null)
            return s;

        return "Some Null";
    }

    // output
    Test>java -jar test.jar
    Some Null
    Some String


    Of course, there are compile-time checks:


    String foo(?String s){
        if (s != null)
            return s;

        return s;
    }

    // output
    >nicec --sourcepath .. -a t.jar test
    nice.lang: parsing
    test: parsing
    test: typechecking
    test.nice: line 12, column 7:
    s might be null


  43. AT says:

    Isaac, Nicole – Microsoft has already implemented an FxCop-like validator at the IL level, fully compatible with the current CLR runtime, using attributes.

    See http://research.microsoft.com/~maf/Papers/non-null.pdf

    I hope they will be able to move it from research into production quickly.

    Note – in addition to simple null-checking, they were able to reveal a lot of other problems.

  44. Isaac Gouy says:

    Thanks AT, I’ve seen that paper. It’s a pity that nullable is taken to be the norm, maybe that’s inevitable when the implementation relies on attributes.

    It’s almost worth installing JRE just to try out Nice 😉

  45. Nicole Calinoiu says:

    AT: The Fahndrich and Leino paper describes the _partial_ design and implementation of a static checker. This is potentially very, very far from putting into production a complete non-nullable type system in as tightly integrated a manner as originally described by Cyrus.

    To be honest, I would see quite a bit of the work described in the paper as an almost equally good starting point for a general declarative validation framework. Many (most?) of the design challenges addressed would be very similar if one were to simply substitute the more general notion of "is valid" for its "is not null" subset.

    At any rate, I’m very glad that folks at Microsoft are working on this sort of thing, but it doesn’t change my opinion in any way regarding my preference for implementation of a declarative validation framework before non-nullable types.

  46. Matt says:

    This in fact is/was part of X#/Xen/C-omega. It is actually a very useful construct. It gives code using that particular variable the assurance that it will never be null.

    We did indeed look at adding this to C# in Whidbey alongside the new nullable types. However, we realized that most code written to date in the C# language (APIs, etc.) already encodes non-null semantics using just the normal reference types and runtime checks. Adding a new non-null type would lead to people choosing either to continue to use plain reference types as parameters, etc., or to use the new non-null types to encode non-nullness. The resulting codebases/frameworks would eventually grow unwieldy because both systems would be in use at the same time.

    We chose to forgo it for the time being. It would have been better to introduce this right from the get go.


  47. Orion Adrian says:

    <q>We chose to forgo it for the time being. It would have been better to introduce this right from the get go. </q>

    Isn’t there a way to get around this now? Even if it’s just part of the documentation only, it would at least allow tools to analyze the code more easily in the future, and the signature itself would give you very valuable information without having to do a lookup (which I do all the time because this information isn’t readily available at the "intellisense level").

    Orion Adrian

  48. For those of you who don’t read the comments made on other posts of mine, you might be unaware about…