Why is a comparison of a value type with null a warning?

A reader (Petar Petrov) recently asked me a question that I didn't quite know how to answer:

Why is a comparison of a value type against null a warning? I definitely think it should be a compiler error.

So I asked the C# compiler team and here's the explanation (please welcome today's special guest Eric Lippert):

Why is it legal?

It's legal because the lifted comparison operator is applicable. If you are comparing an int to null then the comparison operator that takes two int?s is applicable.
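To make the lifted-operator point concrete, here is a minimal sketch. There is no `==` operator taking an `int` and a `null` literal, but the lifted operator on `int?` is applicable, so the `int` is implicitly converted to `int?` and compared; the Microsoft C# compiler reports this as warning CS0472:

```csharp
int x = 42;

// The operator actually chosen is the lifted bool operator ==(int? a, int? b).
// A non-nullable int is never null, so the result is always false, and the
// compiler emits warning CS0472 rather than an error.
bool result = (x == null);

System.Console.WriteLine(result); // prints "False"
```

The code compiles and runs; the warning is the compiler's way of saying the comparison is legal but almost certainly not what you meant.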

Why is it a warning?

It's a warning because the comparison always results in "false".

Why is that not an error?

Let's turn that around -- why should it be an error?  Why should any comparison which the compiler knows the answer to be an error?  That is, if you think this should be an error, then why shouldn't
if (123 == 456)
be an error?  Or for that matter, why shouldn't
if (false)
be an error?

Three reasons why none of these things should be errors:
First, the argument that this is work for me. The spec is complicated enough already and the implementation is divergent from the spec enough already; let's not be adding even more special cases that I can then get wrong for you.

Second, the argument from design. By design, C# is an "enough rope" programming language -- we do not try to constrain you to writing only meaningful programs. Rather, we let you write almost any program, and then give you warnings when it looks like you might be entangling yourself in the rope we gave you. If you don't like that, choose a language that gives you less flexibility.  (This is part of the impetus behind the push towards more declarative programming languages; declarative programming languages are less likely to contain senseless commands because they consist of descriptions of how things are desired to be.)

Third, the argument about generated code. We do not disallow statements like if (12 == 13) { whatever... } because not all code is typed in by humans. Some of it is generated by machines, and machines often follow the same rigid rules generating code that compilers do consuming it. Do we really want to put the burden upon machine-generated-code providers that they must jump through the same constant-folding hoops that the C# compiler does in order to avoid compiler errors? Do we want to make machine-generated code not only have to be syntactically and grammatically correct C# code, but also _clever_ and _well-written_ C# code? I don't think we do; I think that makes the job of both the code producer and the compiler writer harder without any corresponding gain in safety or productivity.


Comments (4)

  1. ysweet says:

    If you have a generic with an unconstrained type parameter T, it is really useful that C# allows comparison to null even if T can be a value type. I even used default(T) == null as a test of whether T is a reference type.

  2. That’s a good point, by the way!

  3. madduck says:

    If we all did the right thing and compiled with warnings-as-errors, this point would be moot.

    Of course, generated code would probably want to compile without it.

  4. Lucas says:

    @ysweet Be careful with that check for reference types. For nullable value types it will return true, but they are still value types, not reference types.
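The exchange above can be demonstrated with a short sketch (the helper name `LooksLikeReferenceType` is made up for illustration):

```csharp
using System;

// For an unconstrained T, default(T) == null compiles because the comparison
// can be lifted. It is true for reference types, false for ordinary value
// types -- but also true for Nullable<T>, which is still a value type.
static bool LooksLikeReferenceType<T>() => default(T) == null;

Console.WriteLine(LooksLikeReferenceType<string>()); // prints "True"
Console.WriteLine(LooksLikeReferenceType<int>());    // prints "False"
Console.WriteLine(LooksLikeReferenceType<int?>());   // prints "True", yet int? is a value type
```

A more robust check would be `!typeof(T).IsValueType`, which classifies `int?` correctly.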
