NullReferenceException or ArgumentNullException


The API Design Guidelines encourage developers to check all their arguments and thereby avoid throwing a NullReferenceException.  If an argument is null and the contract of the method forbids null arguments, an ArgumentNullException should be thrown. 


 


I get pushback on this periodically from developers who feel the extra checks they are forced to implement are redundant.  Consider the following code:


 


        public void PrintValue(object o)


        {


            //Insert possibly deep call stack…


            Console.WriteLine(o.ToString());


        }


 


 


If you pass an instance that happens to be null, nothing will blow up or AV the process… you will simply get a NullReferenceException.  That is way better than the equivalent thing in Win32, so what is the problem?  The problem is developer productivity.  If I have been up all night working on a program and I hit a NullReferenceException from 10 levels deep in Microsoft code, I am likely to assume the bug is in the platform rather than my code (Occam’s razor notwithstanding).  A few more hours later, and maybe after a call to PSS, I will realize my mistake, but I will certainly not be amazingly happy with the platform. 


 


Suppose the author of the PrintValue() method followed our design guidelines:


 


        public void PrintValue(object o)


        {


            if (o == null) throw new ArgumentNullException();


            //Insert possibly deep call stack…


            Console.WriteLine(o.ToString());


        }


 


Now, in the exact same circumstance I get an ArgumentNullException that specifically tells me I passed a null value where one was not expected.  I immediately know the error is in my code, and I can quickly find and fix my bug.  Although I may not be able to attribute it to exactly this instance, overall I am much happier and more productive on the platform.
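To make the difference concrete, here is a rough sketch of what the calling side sees (the Demo class and its Main are mine, not part of the guidelines; I also pass the parameter name to the exception so the caller can see which argument was bad):

```csharp
using System;

class Demo
{
    // Guarded variant, as in the second example above, with the
    // parameter name supplied to the exception.
    static void PrintValue(object o)
    {
        if (o == null) throw new ArgumentNullException("o");
        Console.WriteLine(o.ToString());
    }

    static void Main()
    {
        try
        {
            PrintValue(null);
        }
        catch (ArgumentNullException e)
        {
            // ParamName pinpoints the offending argument immediately.
            Console.WriteLine("Bad argument: " + e.ParamName);
        }
    }
}
```

With the unguarded version, the same call would instead surface a NullReferenceException from wherever o.ToString() happened to execute, with no parameter name attached.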


 


 


What do you think?  Is this kind of error checking helpful in the .NET Framework and WinFX?

Comments (44)

  1. I use this a lot, and I enforce it in code reviews. However, I never use the parameterless constructor. I always use something like:

    throw new ArgumentNullException("paramName", resourceManager.GetString(blah));

  2. Yes! This is a good practice and I’ve experienced this at work. Actually, when I first wrote my component in .NET, my developers ran into this problem. At that point I realized that throwing ArgumentNullException, like the rest of the .NET Framework does, would be a great idea. And it worked! It reduced the number of questions rolling in to my IM :-)

    But Brad, I think this practice is quite common among developers now!

  3. David M. Kean says:

    I think it is helpful, and use it within my own code a lot. However I often wonder what sort of performance improvement would be gained, in say a tight loop, by not checking the argument.

    For example, I notice that in the ControlPaint class (System.Windows.Forms), any method that takes a System.Drawing.Graphics argument never checks it for null. Is this really going to increase drawing speed?

  4. Steve Willer says:

    Sorry, I don’t see the difference that much. I can see the point you make, for sure … but the names of the exceptions are basically the same from my perspective. There’s no indication that one is basically an OS-level sigsegv, while the other is a design-by-contract argument assertion.

  5. Yes, it’s definitely a huge advantage and should be done, except in very special performance situations.

  6. Aaron says:

    Yes it’s an advantage, but this is the sort of thing developers should be able to declare with design-by-contract, not implement in each case.

  7. I do that also. The sooner you catch the bug, the better. Also, this is the kind of feature that you would like to see in the compiler itself, using custom attributes for instance:

    public void PrintValue([NonNull] object o)

    {

    // the compiler will add the check here

    Console.WriteLine(o.ToString());

    }

    The custom attribute nonnull tells the compiler to add non-null checking.
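As a sketch of that idea: the attribute itself is easy to declare today, but the C# compiler does not act on it, so the check in the method body below is what a hypothetical compiler (or an IL post-processor) would be expected to emit; everything here is illustrative, not a shipping feature.

```csharp
using System;

// Hypothetical marker attribute; nothing in the shipping compiler consumes it.
[AttributeUsage(AttributeTargets.Parameter)]
sealed class NonNullAttribute : Attribute { }

class Example
{
    public void PrintValue([NonNull] object o)
    {
        // The check a compiler honoring [NonNull] would inject:
        if (o == null) throw new ArgumentNullException("o");
        Console.WriteLine(o.ToString());
    }
}
```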

  8. Albert says:

    No, there’s hardly any advantage. The ArgumentNullException can still come from deep in the framework, and it can either be a problem in my code or your code. Both exceptions tell me that something was null when it wasn’t expected. The only extra information that ArgumentNullException carries is that a function parameter was null, as opposed to a local or member variable. This is cool if the exception is thrown by my immediate callee; otherwise it has little value.

  9. Aleksei Guzev says:

    I don’t understand what the problem is. The call stack is always dumped, so it’s easy to see what has happened to the program and the system.

    A method should check for nulls if using null is an error. But often it’s quite natural to use null, and the method should accept the null gracefully.

    Moreover, the programming system could provide some protection against such errors. Facilities such as preconditions should help.

    The problem in .NET is the existence of untyped references (System.Object) and "null" being an instance of every(!) class. Why does a method declared to accept MyType:

    void Method1 (MyType mt);

    receive "null", which I did not declare as a possible instance of MyType? If "null" is an instance of MyType, why does it not respond to the interface I declared for MyType? If NullReferenceException is the response of "null", why have I no means to declare another behaviour, for example "do nothing"? This would make code more modular and shorter, and therefore more maintainable.

    I used to skimp on cycles, and checking every reference every time it falls into view seems out of the question. I write short methods, thus the call stack is deep. Some parameters are passed through unchanged, and this will lead to multiple checks of every reference.

    Often references are surely not null. Sometimes I write two methods: checked and unchecked. The latter is called when parameters are surely not null. The former simply checks the parameters and calls the unchecked one. Usually I declare the unchecked variants as internal.

  10. Chris Nahr says:

    Getting a NullReferenceException at the earliest possible point is certainly useful, but you know what would be MUCH more useful? A bit of "design by contract" so we don’t have to write all those null checks manually!

    Since most reference parameters can never be null, the Framework should check all such parameters for null when entering a method, and _automatically_ throw an exception if one is null. An optional attribute could be introduced to specify which ref params _may_ be null.

    For bonus points, auto-generate XML comments for the null reference exceptions!

  11. Nils Jonsson says:

    I always do this in my source, and it irritates me to see it missing from other people’s.

  12. Anonymous says:

    Ashutosh Nilkanth’s Blog » NullReferenceException or ArgumentNullException

  13. Rob Walker says:

    > I have been up all night working on a program and I hit a

    > NullReferenceException from 10 levels deep into Microsoft code

    > I am likely to assume the bug is in the platform rather than

    > my code

    If a function is taking an argument and eventually passing it to another function that requires a non-NULL argument then the non-NULL part of the contract is implicitly moved up the call stack.

    From this point of view the top level function you called should surely have checked for NULL as well.

    Personally I don’t think the extra check adds much value since it doesn’t do anything early enough.

    If it was explicit in the method’s signature (such as a [required] attribute) and hence appeared in the documentation and intellisense it would be much more valuable.

  14. Best Practice – yes. Not thoroughly enforced, though.

    I say – remove the parameterless constructor for any argument related exception.

  15. Dmitriy Zaslavskiy says:

    This approach is very hard to make consistent and therefore is not worth the effort.

    Consider the cases where you pass a class/struct that contains members which are null. Also consider func A calling func B, where func B throws InvalidArgument; it’s a little ugly to get an InvalidArgument exception for an argument you never supplied.

  16. Jerry Pisk says:

    Aleksei, if you don’t want your objects to allow null values then declare them as structs. There are performance issues associated with doing that, so you have to decide what’s more important.

  17. JD says:

    I disagree with most here.

    An ArgumentNullException in the direct API I call is very helpful, much more so than a NullReferenceException somewhere down the line.

    Often we end up setting properties or initializing an object, and then reusing that object. Having a failure happen during initialization makes it more likely to be detected before it hits the field, can avoid a lot of wasted time tracking down which initialization step failed, and gives better and more direct feedback to the developer on what they can do to fix the problem. This is huge.

    Another benefit is that a NullReferenceException doesn’t tell you enough. OK, even if I have a callstack, what info am I getting from a customer (internal or external) who hit this bug? With a NullReferenceException, all I have is the method: not the location within the method, not the name of the variable that was null, not anything. I’m still trying to figure out the cause of the darn exception. With ArgumentNullException, there’s no question; I know exactly the parameter in question. I can probably determine how it was passed to the failing method as well.

    So count me as a big believer in ArgumentNullException. Even in internal APIs between components of the same project (avoiding the absurdity of doing it in tight loops), doing so has saved me and my team a lot of time.

    Design-by-contract would be nice as well, but given what we have (and what’s likely to be done before Whidbey ships) I wholeheartedly support earlier and more precise detection of failures.

  18. Dan Kuhn says:

    I’m a big believer in providing as much information as possible in error handling. We are actually implementing this rule in our source code analyzer to enforce a coding standard. What we have found is that no one (and rarely even ourselves) implements this. If you apply it blindly it seems very excessive. It seems there might be some instances where it is most appropriate. Some of the considerations we have made are:

    – public and potentially protected methods – not private (although MSDN states private and protected – which seems opposite of what I would want),

    – not in event handlers where inputs are many times passed via system events

    – not in properties where arguments are typically just assigned to a private field

    What are people’s thoughts on when to apply this standard? Is the above list too limited? Not limited enough? Or is there no general rule that can be applied?

  19. Yes, checking arguments in public APIs is professional and expected by users. Checking in internal APIs is not as important. In VG.net we use static methods, so we can write this:

    Check.ArgumentNotNull(o, "o")

    which throws the correct exception with an error message. Because C# does not support macros, the call stack is always one level deeper than it should be, to save a bit of typing.
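A minimal sketch of such a helper (the class and method names match the comment, but the body is my assumption, not VG.net's actual implementation):

```csharp
using System;

static class Check
{
    // Centralizes the guard so callers get a one-line check.
    public static void ArgumentNotNull(object argument, string argumentName)
    {
        if (argument == null)
            throw new ArgumentNullException(argumentName);
    }
}

class Logger
{
    public void PrintValue(object o)
    {
        Check.ArgumentNotNull(o, "o");  // throws ArgumentNullException("o") when o is null
        Console.WriteLine(o.ToString());
    }
}
```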

  20. Chris Nahr says:

    Jerry: Bad advice! Never declare types as structs just so they can’t be null. There are not just performance issues, structs have different semantics.

    When you pass a class parameter to a method, only a reference is passed and any modification to that parameter’s data is visible to the caller.

    When you pass a struct parameter to a method, its entire data is copied, so modifications are NOT visible to the caller.

  21. Aleksei Guzev says:

    In fact this comes down to unit testing. The algorithms should ensure some arguments are non-null. But the end user does not need to know what error happened and where. The user needs speed and usability. This information is useful during the debug stage, but in release those statements may be removed. Conditional compilation should help.

    And about structs… Why should I change to a value type just to show it cannot be null? And what should I do if this is a non-nullable subclass of a pretty nullable one?

  22. This kind of check could be useful in debug mode…

    Most of these kinds of checks are not that useful at compile time, but for maintenance/operations in recovery situations where you don’t have the luxury of hours of debugging…

  23. Darrell says:

    I agree with Stefan. ArgumentNullException is a "developer" exception, in that it shows the developer he/she coded something wrong or something was unexpectedly null. In theory, the exception should never occur once the code is complete, since the developer would have checked their own code before then and done something appropriate. Not sure how the check statement could be optimized away though.

  24. It is easy to optimize away check statements. To continue my previous example, you can also write:

    Check.ArgumentNotNullDebug(…)

    and it will be optimized away in the Check class:

    [Conditional("DEBUG")]

    public static void ArgumentNotNullDebug(object argument, string argumentName)

    {

    if (argument == null) throw new ArgumentNullException(argumentName);

    }

    If you have VG.net installed, look at the Check class in the documentation.

    http://www.vgdotnet.com

  25. Aleksei Guzev says:

    Yes. "[Conditional("DEBUG")]" is what will help to have early alerts and not waste cycles on double checking. Surely this will lower the chance of early detection of an error during normal operation, but I think if an error has occurred on the user side, it will not be easy to fix in a minute or two anyway. And again, the user will not be able to fix the error, so the detailed info is almost useless to them. On the debugging stage, when you are running through the code dozens of times, the seconds lost deciding what happened accumulate to minutes and hours. That’s the place where this info is useful. And it’s more useful still if available at compile time. The compiler should be able to alert if a null could trickle into a method which is not ready to accept nothing. That is why the language should provide some means to declare whether a method can accept null, or even whether a variable may reference nothing.

  26. drebin says:

    Brad – I gotta say, I’m glad there are people like you propagating good technique. If I’ve learned one lesson early in my career (from a really smart guy), it’s this: "Write your code as if someone else is going to be working with it 2 years from now." That has far-reaching implications in terms of variable naming, exception handling and documentation. Ever since I adopted this policy, I’ve helped MYSELF out countless times when I’ve needed to work on code I wrote a year ago, and it helps with other developers too. When I need to fix someone else’s code, instead of dealing with int x = cobc(34,24); I work with the more self-documenting format of something like int intTotalOrders = CalcOrdersByCustomer(34,24); where, at a glance, I at least have an idea of what’s going on. The couple of minutes that saves adds up over the length of a project.

    Point is, ANY time you can be more clear or more succinct in your code – do it, almost at any cost. Do it for yourself. Do it for your coworkers. Do it for your reputation. I have seen horrible code, and then met the guy a few years later – and I had zero respect from the moment I met him.

    I tell other developers that there is nothing more important than clarity of code and how manageable you leave your project. Unless there is a completely crazy deadline, you shouldn’t skimp on these.

  27. brian says:

    We use Debug.Assert(o != null, "o cannot be null") to check all input parameters, but of course these get compiled out of release assemblies.

    I have mixed feelings about the use of this since junior developers may believe they are including a runtime check, forgetting that this is removed in production code.

    However, it does mean that for developer productivity there is something there to aid debugging, and the code is a little more compact than null checking and exception throwing.

    I wondered what anyone’s opinion was on this use of debug asserts?
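The trade-off brian describes can be put side by side; this sketch is mine, reusing the guard from the post's example:

```csharp
using System;
using System.Diagnostics;

class Widget
{
    // Debug-only: the Debug.Assert call is compiled out of builds without
    // the DEBUG symbol, so production callers get a NullReferenceException
    // later instead of an early, precise failure.
    public void PrintValueAsserted(object o)
    {
        Debug.Assert(o != null, "o cannot be null");
        Console.WriteLine(o.ToString());
    }

    // Always-on: release callers still get the precise ArgumentNullException.
    public void PrintValueChecked(object o)
    {
        if (o == null) throw new ArgumentNullException("o");
        Console.WriteLine(o.ToString());
    }
}
```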

  28. Kannan Goundan says:

    Null checks are not always a development-time-only thing. When writing public library routines, you’ll want to keep the check in there for your production release as well.

    The reason some people find that most code lacks this check is that it takes too damn long to type. It’s also a pain to keep it updated as the interface changes. I have a feeling that it’s not even worth the benefit.

    A compiler-inserted check would improve things, except that this is something that should be checked at compile time and not at runtime. For those worried that, like with the C++ ‘const’ modifier, it’ll become a burden, notice that in C++ a bare type is non-const and therefore the looser requirement by default. If non-null references were the default, then the default would be the stricter requirement. If you want to accept nulls, you should be forced to say so explicitly. This would be much less of a burden in practice than C++’s "const". See (http://nice.sourceforge.net/manual.html#optionTypes).

    An advantage of using a modifier is that the interface becomes self documenting. You don’t have to bloat the @param documentation with "this parameter cannot be null".

    Of course, adding this feature will totally break almost all existing C# code, so don’t expect to see it added any time soon.

  29. Aleksei Guzev says:

    Kannan: yes. Unfortunately C# will not provide the facility.

    Now they discuss how to catch those quirky nulls. But where do we get them? I often get nulls while dealing with collections. Hashtable returns null if it maps the given key to null or if the key is not in the map’s domain. But I think in the latter case the hashtable should throw an exception.

    …Ah, they say "exception" means something exceptional, extraordinary happened, but the absence of a value in a hashtable is usual practice. "Exception" is not a felicitous name. This could be named "Signal". The hashtable *signals* if the key is not in the domain…

    The item property should be declared as

    template <T,K>

    T Item[K key];

    While the key is not in the domain, the hashtable *cannot* return any object of type T. But it *may not* return anything except an object of type T, according to the declaration. Thus it should not return anything. The only way to leave the method without returning an object is raising a signal or throwing an exception, name it as you wish.

    They say throwing an exception affects performance. But double, triple, endless checking for null affects performance too. While exception handling only slows entering and leaving the outermost method using the dangerous reference, checking for null runs in every method in the call chain, and finally an exception is thrown anyway if the reference is null.

    Another source of nulls I hate is System.Xml. It returns an empty string when the given element does not have the requested attribute at all. Those empty strings are actually nulls reincarnated in text form.

    The code would have a good structure if it had separate blocks for every situation: one block for the case when the attribute is present, other blocks receiving control on signals. Actually we do write those blocks with ifs, nested ifs, and else-ifs.

    It would be much better to close off these sources of nulls and return null only when there actually is a null.

  30. Making Hashtable throw an exception says:

    Making maps throw an exception would be a "cleaner" way of doing things, but there are a couple of problems with that.

    1. Exception handling syntax is really verbose in C#.

    It would be nice if you could say something like:

    String val = map.Get("abcd")

    : catch(BadKeyException e) { Print("Bad key"); }

    Layout-based syntax could also help a little.

    2. Returning null makes things faster.

    The most common alternative is to check for the key’s presence before every retrieval. That’s essentially doing two lookups (two hash computations or binary tree traversals) each time.

    It might be faster to just rely on the exception. I hope it’s possible to sprinkle some JIT whole-program-optimization magic to speed up the handling of commonly occurring exceptions. If it’s too hard to recognize these situations automatically, I don’t think a [ThrowsOften] attribute is too much of a burden on a programmer who really wants performance. Also, the stack trace generation is a huge waste of time and so it would be essential for the JIT to eliminate that process when possible.
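The double lookup being described looks like this with Hashtable (this snippet is mine, just to illustrate the cost; note that relying on an exception instead is not even possible here, since Hashtable's indexer returns null for a missing key rather than throwing):

```csharp
using System;
using System.Collections;

class LookupDemo
{
    static void Main()
    {
        Hashtable map = new Hashtable();
        map["a"] = "apple";

        // Two traversals of the table: one for ContainsKey, one for the indexer.
        if (map.ContainsKey("b"))
            Console.WriteLine(map["b"]);
        else
            Console.WriteLine("missing");   // prints "missing"
    }
}
```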

  31. James Bellinger says:

  31. If performance is really a problem, wrap what would throw a NullReferenceException in a try { } catch, and throw an ArgumentNullException when the NullReferenceException occurs. Frankly, I expect the JIT realizes that you’ve already checked and doesn’t check on its own (it should, if it doesn’t).
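A rough sketch of the wrapping being suggested (my code; note the parameter name must be supplied by hand, and the catch only maps the failure correctly if o is the only reference dereferenced in the try block):

```csharp
using System;

class Printer
{
    public void PrintValue(object o)
    {
        try
        {
            Console.WriteLine(o.ToString());
        }
        catch (NullReferenceException)
        {
            // Re-surface the low-level failure as a contract violation on "o".
            throw new ArgumentNullException("o");
        }
    }
}
```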

  32. James Bellinger says:

    The stack trace is really handy a lot of the time, in my opinion. One might want to catch an exception and log it, to send to a developer or whatnot, for instance.

  33. James Bellinger says:

    Actually, a smart thing that could be done WRT the stack trace…

    One could have some sort of flag bit on each exception handler down the chain which says if it accessed the stack trace, with the assumption for complex things the JIT doesn’t want to investigate being ‘yes’… most exception handling is pretty simple stuff, at least within the catch { } block, so this might be feasible. Though, exceptions really are supposed to be for *exceptional* stuff, so there’s really no reason to optimize them in my opinion. Like NullReferenceException/ArgumentNullException. If you get those, you’re done. The End. Fix your code. I can’t really think of any exceptions which aren’t like that.

    Speaking of exceptions, I must say the liberal use of them is one of the best features of .Net. Using ordinary Win32, one finds that tons of methods don’t provide much if any information about what on earth is wrong, leading one to just get mad and do something else unless dealing with the problem is a requirement in the short term. Managed DirectX has this flaw, in that it throws DirectXExceptions which are completely useless and non-descriptive. I hope they fix it at some point.

  34. Kannan says:

    James:

    I’m assuming your three posts were in response to my post about performance (with throwing exceptions on lookup failures).

    1. I’m not sure I understand what you’re saying here. Throwing two exceptions can’t be faster than throwing one.

    2. Yes, the stack trace is handy. That’s why it’s hard to eliminate it. What I was hoping for is a JIT that can detect when a stack trace is never accessed.

    3. One of the reasons exceptions are for "exceptional" events is because they are inefficient. If they weren’t, you’d be able to use them more often. The hash lookup failure is an example of a place where an exception would be "cleaner" but was avoided for practical purposes. I once wanted to use exceptions all over the place so I created a "Signal" subclass of "java.lang.Throwable" and nulled out the call that generates the stack trace. It would be nice if I didn’t have to do this stuff explicitly (since it now introduces the difficult issue of "should I throw an exception or a signal?").

  35. Aleksei Guzev says:

    But there is a need to leave a method without returning any object. Let’s look backward. We already have large systems providing low-level (sometimes hardware) support for strongly typed programming languages. Those systems often use both exceptions and signals. While exceptions have been for exceptional stuff, signals are lightweight variants that work in the situations I described. Signals can be optimized for frequent use, while exceptions provide a detailed stack trace and other error information.

    Maybe introducing signals in later versions of .NET will not affect existing code, especially as .NET provides side-by-side execution 😉

  36. Kannan Goundan says:

    Aleksei:

    I do use signal-style exceptions. In Java, it’s just a matter of deriving from Throwable and eliminating the stack trace generation. The only reason I use them is for speed. I can’t think of any other reason to have two types of "throwable" objects. So, in essence, it’s a performance hack that leaks into your design.

    Take Map.Lookup(), for example. Should you throw an exception or a signal upon lookup failure? That depends entirely on how the map is being used and so it’s difficult for the writer of the Map class to make the decision.

    Also, providing a signal class might encourage premature optimization and you’ll end up without stack traces. Though it would be possible to provide the line of code that caused the exception, a lot of the time you’re going to get something useless like "MapDefaultImpl._CachedSubtreeLookup(): 103".

    A half-solution would be to make signals like checked exceptions so at least the programmer is somewhat aware of where the exception could happen.

  37. Aleksei Guzev says:

    Kannan: I see you do use signal style. Actually, you posted the previous message while I was typing my last one 😉

    It’s not my design; it was already in use before my time.

    Yes, differentiating between signals and exceptions is a performance hack. I admit, I use a lot of performance hacks. Sometimes performance hacks are essentially what they pay for.

    They say exceptions are good and we like them, but the technique affects performance. Performance degradation is the objection I have been hearing for decades. Read the comments to this post.

    On the other hand, if we want different map objects to behave differently, then we have to create different map classes, one for each behaviour, just as we do for List and Queue. The chances are those variants of map will be derived from Map and override a virtual Map.LookUp method 😉 One map should signal while another throws exceptions. If your task states that the absence of a value for the given key is a logical error, then you should use the class which throws. But when you create something like a cache and have to create an object if it’s not present in the hashtable, then you will use the class which signals.

    I think the class libraries and .NET itself should support as many programming styles as possible. Having a lot of classes is not really a problem.

    Another solution is including possible signals in the signature of the LookUp method; then we could overload methods varying only the signal types and count. Something like this:

    // this method throws exception

    Map.LookUp(K key);

    // while this signals

    Map.LookUp(K key) signals(NotFound);

    This will reduce the count of classes in the library, but actually I don’t like this.

    Slightly different classes, with good static methods for conversion between them, seems a good solution.

  38. Aleksei Guzev says:

    Note that in many cases signals do not propagate farther than the direct caller of the signaling method, unlike exceptions, which often bubble up to the operating system.

  39. Bug Hunting and NullReferenceException