Should we produce warnings for unused/uninitialized internal fields?

You may have noticed as you’ve been developing C# applications that if you have an internal field of an internal type which you never use, we give you a warning about it. In fact, there are a number of warnings that we give for such fields, depending upon whether the field was never read or written, initialized but not read, or read but not initialized (and therefore always set to its default value.)
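To make the categories concrete, here is a minimal sketch (the class and field names are invented; the warning numbers shown are the ones the Microsoft C# compiler uses for these cases):

```csharp
// An internal class whose fields illustrate each flavor of warning:
internal class Widget
{
    private int neverTouched;     // never read or written: CS0169 "field is never used"
    private int writeOnly = 42;   // initialized but never read: CS0414 "assigned but its value is never used"
    private int readOnly;         // read but never assigned: CS0649 "never assigned to,
                                  // and will always have its default value"

    public int Value() { return readOnly; }
}
```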

Now, obviously we ought not to give similar warnings for public fields on public types. The thing doing the reading or writing is highly likely to be an external assembly which may not even have been written yet, so it would be silly to give a warning. (And similarly, we suppress this warning on internals if we are building a module that is going to be linked to another module; the reads and writes might be in the other module.  We also suppress it if the internal members of the assembly are visible to an external assembly.)

The C# 2.0 compiler was a bit weak though. If you had an unused public field on an internal class, the compiler would say, well, that’s a public field, suppress the warning. But a “public” field on an internal class is not actually visible to anything in the outside world (again, modulo the parenthetical above) so really we could produce the warnings here.

There were a bunch of cases like that. A few months ago I tightened up the C# 3.0 compiler so that we now detect and warn for all the possible cases where we can actually deduce that the field is not used/not initialized.

Naturally this means that people started getting warnings on code which had previously compiled without warnings, so other teams at Microsoft that test pre-release builds of the compilers started to notice. This change has raised a few questions amongst users on other teams, which might motivate changing this behaviour again. It is particularly hard on users who compile with “all warnings are errors” turned on.

Two scenarios in particular have been brought to my attention.

First, more and more classes are now being written specifically to be used with private reflection and/or lightweight codegen. That is, something outside of the C# language is going to be manipulating the fields of a particular class, and it is by design to have the field unread/unwritten by the static portion of the program. It’s irksome to have a warning about something which is by design.

Second, more and more classes are now being written where a field is initialized but never read, but the field is initialized to a finalizable object, and the finalizer effectively “reads” the field and does work on it.

There are a number of possible responses to these scenarios:

1) Roll back the behaviour to that of C# 2.0. That doesn’t actually solve any of these problems; we still potentially give warnings which the user doesn’t want.  But we give warnings in fewer scenarios.  We can also make the plausible statement “this is what the compiler has always done, you can live with it”. 

The downside of course is that we would be eliminating warnings which the user may want. The whole point of this exercise in the first place was to produce useful warnings.

2) Do nothing. Tell the people who are raising these objections to live with it.  They can turn off the warning programmatically if they want. We strike a blow for language purity and ignore the real-world objections of people who are using features external to the language like reflection.

3) Give the “unread” warning only if the field is of a type known to have no finalizer. (Or, weaker, suppress the “unread” warning only if the field is of a type known to have a finalizer.)

4) Eliminate all the warnings altogether.  Declare a new design principle: warnings should be only for patterns which are highly likely to be  dangerous mistakes.  The warnings in question are just helpful reminders that you might have dead code to eliminate.  FXCOP produces this warning already, so if people want to see it, they can use FXCOP or the static analysis tool of their choice.

5) Create some new attributes that let people document how a field is intended to be used. Be smart about looking at the attribute, and if we see attributes which say “this struct is intended for use in p/invoke”, or “this field is to be accessed via private reflection”, and so on, then suppress the warnings.

6) Do something else that Eric hasn’t thought of.
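To give a flavor of what option 5 might look like in practice — the attribute names below are purely invented for illustration, not a proposed or existing framework API:

```csharp
using System;

// Hypothetical marker attributes; these names are made up for this sketch.
[AttributeUsage(AttributeTargets.Field)]
public sealed class AccessedViaReflectionAttribute : Attribute { }

[AttributeUsage(AttributeTargets.Struct)]
public sealed class UsedForPInvokeAttribute : Attribute { }

internal class Entity
{
    // The compiler would see the attribute and suppress its
    // "never assigned / never read" warnings for this field.
    [AccessedViaReflection]
    private string hydratedByOrm;
}
```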

Anyone have thoughts or opinions on this?


Comments (39)

  1. I vote for choice #5.

    This gives the control to the developers and also provides a minimum of documentation.

    Especially those cases where the – not obviously – read field is read by reflection or within the finalizer are hard to track down. Encouraging documentation via attributes is an advantage.

  2. foxyshadis says:

    So basically #4 is to shift some warnings into a "notice" class, of relatively harmless items that should never impact compilation?

    #5 certainly sounds nice, but I could just imagine the headaches of implementing it in the compiler and in any code that uses it. Meta attributes have a tendency to pile up and get annoying quickly, although they’re certainly clearer than a pragma directive. (Not that my opinion is important, just musing.)

  3. Joe says:

    1, 2 and 4 are all OK, I’d vote for 4.

    3 is too difficult for average developers, and IMHO 5 is a misuse of attributes, which are more to do with runtime behaviour.

  4. Mark says:

    6. Let the developer select a warning and mark it as ignored/resolved, storing the results as part of the build process (rather than using attributes in the code). Somewhat similar to #2, but at least it gives the developer some control over which warnings are important to them and means the warning gets flagged. Still possibly best left for FxCop.

  5. I would have to go with number 5, as it leaves a bit of a trail for maintenance programmers who may not have access to the code that accesses the field.

  6. Aaron says:

    I vote for #5 as well, or for Mark’s suggestion about making these warnings configurable.  Please don’t do #4!  Those warnings have saved my bacon more times than I can count; 9 times out of 10 those warnings tell me that I accidentally made an assignment to the wrong field, or did something stupid like making a public property get itself instead of its data field.  Those are the biggest nightmares to debug because they’re usually due to some minor typo or late-night brain-fart.

    I realize that’s not necessarily in the best interests of language purity, but practical concerns shouldn’t be ignored either.  I think it’s best to design for the common case (those of us for whom unused/unassigned/prematurely-used fields are *likely* to be errors) and provide only on-demand support for the edge cases (people using Reflection as an integral part of their design, as opposed to simple plug-in systems or a tool for overcoming the limitations of 3rd-party libraries).

    Also keep in mind that the people reading and commenting in here have an active interest in these matters, whereas the majority of C# programmers on the front lines probably haven’t even heard of FXCop and think that static analysis should be handled by the electrical contractors.  I do use that tool sometimes, but wading through the enormous and ominous piles of warnings it generates is far more time-consuming than seeing a small handful of compiler warnings.

  7. Andrew says:

    Really depends on the whole set of cases you are considering.

    I was leaning towards 5, but then I went back and reconsidered in the context of the example you gave.

    i.e. you have an internal class that has a field marked as public. Effectively the public tag has already stated that they want the field to be publicly available, so adding an additional attribute to say this again in a different way seems a bit redundant. (Call it [YesIReallyMeantPublicAttribute].)

    Which I suppose puts me in the #1 camp.

  8. Aaron says:

    Sorry, I also wanted to reply to Joe’s statement about attributes only being appropriate for runtime behaviour – I can’t honestly claim to know the true motivation behind them but I still think this couldn’t be further from the truth.  Attributes are simply metadata – there just happen to be several parts of the .NET Runtime that check for these attributes and change their behaviour accordingly, much like how metadata is used in other applications.

    Attributes *might* indicate some preference for runtime behaviour, but they might also be design-time (the attributes in System.ComponentModel), assembly information (System.Reflection), security requirements (System.Security.*), all sorts of wacky and unexpected stuff (System.Web or System.Web.Services), or in some cases they may even affect the compiler, such as with FlagsAttribute.

    Whether or not it’s the optimal solution, it’s a perfectly valid use of attributes.

  9. Eric Lippert says:

    Aaron: yes, I am taking into consideration the fact that the people who comment have an enormous self-selection bias.  They are likely to be the people who are very far from the average programmer in terms of interest in language design issues, opinionatedness, etc.

    But that’s what I want.  I want strongly held opinions from smart people who have thought about it. Those are the kinds of people I want designing tools for people who want to use tools, rather than think about tools. That doesn’t mean that I’m going to do exactly what you guys say, but I very much take your opinions seriously.

    I figured that #5 would be the most popular option. Unfortunately, it is also the least likely to happen, since we do not have time to introduce more classes into the .NET framework, design them, test them, do security reviews, blah blah blah.  The compiler has a lot of moving parts already; adding more to external assemblies is not likely to fly with management this late in the cycle.  I will make the best case I possibly can for it, but right now #2 is the lowest-cost option and therefore the one most likely to happen.

  10. Mars Saxman says:

    I vote for #5: let the programmer document their intent. It’s kind of an ‘extern "C"’ for the new millennium.

  11. David says:

    I vote for #4. If it’s not messing with the compilation, why is the compiler warning anyway? FxCop is the place for this type of item. I’d vote against 1-3 and 5 because developers don’t know everything (what, shocking, I know); what’s not important now may be important in a few years (look at security vulnerabilities). The same applies here.

  12. Bruno Marques says:

    I also think #5 is the best choice. I appreciate this kind of warning. And if our code starts producing more warnings than before, it’s just a matter of adding an attribute in a few places. I deal with code which extensively uses that reflection approach, and I wouldn’t mind having those warnings around here.

  13. Luke says:

    If time+money is an issue, I’d say go with #2 for now and see about going #5 in a later version.

    I would argue for a move towards code that can have very powerful static analysis done to it.  There is much discussion and hype and enthusiasm for test-driven development these days, but it strikes me that testing is by definition an incomplete mechanism; 100% code coverage != 100% state coverage.  I am very much in favor of low-friction, non-crufty language enhancements that could help out static analysis tools.  If I can prove that a variable is never null, there is no need to have a test for it being null.  Maybe this is a pipe dream, but I would love to see more "provable" code in the future.

    It seems that in instances where a field is accessed by methods not easy to discover by static analysis, it is very important to document how that field is used.  There are plenty of stories online these days about how Ruby’s extensive metaprogramming can cause unpredictable, hard-to-debug behavior.  #5 seems like a reasonable, small step towards alleviating this potential problem.

  14. Zac says:

    I don’t use C#, so my comments may be gibberish.  But my impression on reading this is:

    1) sucks

    2) sucks ever-so-slightly less

    3) seems like a kluge

    4) seems okay, but seems kind of like the ‘inverse’ of #2, and thus smacks of the same "you deal with it" attitude as your deliberate phrasing of #2.  Doesn’t seem to jibe with the spirit of adding the warnings in the first place.  But it might be more correct than #2: how would you implement the warnings from 2.0 if you were doing it today?

    5) seems like you’re inviting people to cause themselves headaches later.  Is there going to be any "validation" of this metadata?  Well, not at compile time, so if they want it, they’ll have to add it later.  But they won’t.  So it will go stale as the code is maintained, and then later, they’ll be getting warnings for some of the cases they want to be warned about, but not the rest (where they have the stale attributes).  And then they’ll say "why won’t the compiler warn me if my attributes might be wrong".  And then it looks a lot like #4, for some yet-to-be-defined acronym in place of ‘FXCOP’.  So I say "feh" to #5.

    If it makes sense (see my first line about gibberish), could you have an OnlyWarnForTheStuffWeWarnedAboutInTwoPointOh compiler option?  I know, yuck, but such is the price of "backwards compatibility".

    @Luke, if you could "prove" that a variable is never null without actually testing it, I’d give you a cookie.  I would likely not give you a job.  Search the web for the phrase "proved it correct".

  15. Luke says:

    @Zac: I have discussed the topic of provability in quite a bit of depth, so please don’t make snap judgments like you appear to have done.  Nice is an example of a language that guarantees NullPointerExceptions are never thrown.  I never said I could prove that any variable is not null without testing; static analysis can make use of if statements.  Your allusion to Knuth’s comment about proving his code correct but not debugging it is nice, but is quite irrelevant; Nice is a refutation of the point I think you’re making.  Also see the Ada language.  Note with my mention of Ada that "non-cruftiness" is high on my priority list.

  16. roberthahn says:

    I would suggest adding flags to the compiler.  C# 2.0’s warnings would be the default level.  Add more levels: no warnings; critical warnings (a subset of 2.0’s warnings); the default (C# 2.0 standard); then one (or more) verbose levels as required.

    I’m not familiar with the C# toolchain here, so I can’t gauge the impact on people’s toolsets – I would assume that if you have command-line access to the compiler, you can pass in a -w[0-3] as a compiler argument.  IDEs would have to be modified to add a control to set a warning level.

    You can also use this strategy and simply re-prioritize the warnings the compiler generates (based on how critical they are) so that the *number* of warnings emitted approximates C# 2.0’s verbosity.

  17. #4. Another case where a field is initialized but never read is when using the record/replay semantics of mocking frameworks. In order to record a getter, you need to do something like "var foo = Bar.Quux;". I think this is not serious enough to generate a compiler warning, and that static analysis is where we want this sort of automated check.

  18. John Knoeller says:

    I’d say 4) is the only correct solution, but with a caveat.  Treating non-use as a ‘warning’ just seems incorrect to me.  It’s a bit of information about how to improve your code, but it’s only a warning if the unused field is unintentional, and chances are it’s not.

    More likely you just haven’t gotten around to using it YET.  (This happens to me all the time.)  Possibly the code has drifted to the point where it’s no longer used, but C# isn’t old enough for that to be the most common case yet.

    The most common case is that the code is in flux, and the fields are there for future use.  The warnings become a thing that you have to code around, requiring extra metadata (or worse – unused code!) in order to suppress a misbehavior of the compiler.  The C++ compiler has a bunch of these now.  (Warning, I autogenerated a GUID for your interface – yeah, I know.  I MEANT YOU TO!!!)

    So what you REALLY need to do is option 6.  Take everything that you treat as a ‘warning’ now that is really LINT and change it to a level 5 notification that you have to ENABLE to see, and/or that ‘treat warnings as errors’ doesn’t treat as an error.

    Incidentally, more and more I use C# to generate type information that I then run through tlbexp to get .tlb files that I then include in C++.  I find this a far superior way to define interfaces that I want to implement in C++, and MIGHT want to use in managed code in the future.  Much cleaner than MIDL or Attributed C++.  Though I’m unhappy that MS seems to have ceased improving MIDL; it desperately needs something like MarshalAs().


  19. davkean says:

    We already have #5 in FxCop. It’s called the SuppressMessageAttribute – we use this to indicate that we shouldn’t warn on a particular construct.

    I personally believe (and not because of my vested interest in FxCop ;)) that C# should remain pure, and avoid having intimate knowledge about particular Framework classes and leave that to Code Analysis tools.

    Unfortunately, there are particular source constructs that we simply cannot check. For example, unused constants are particularly hard to detect at the IL level, whereas, at the AST level the compiler can detect this with 100% certainty (excluding reflection).

  20. Chris Nahr says:

    I think #2 is good enough.  I certainly don’t want to have fewer warnings — running FxCop is always an extra step, and it’s better to have those warnings right away.  Creating a new attribute for what can be done with a pragma and a comment seems like overkill, though.

  21. Stefan Wenig says:

    I’m voting for #6, if that’s not asking too much 😉

    Really, #5 would be a good option, and it would be up to anybody who doesn’t like it to just not use it. If #5 is not possible, you’d probably be right to go with #2 for now.

    Actually, any system that makes suppressing certain warnings easier by indicating intentions rather than specifying warning numbers would make the code both less ugly (since they usually destroy indentation, those #pragmas are quite evil) and arguably more portable between compilers. Make sure those attributes are inheritable, because I’d like to be able to combine them with my own attributes. (For instance, if I write code that initializes some field, I’m quite likely to provide an attribute too. If I can inherit this from your attribute, the user won’t have to specify two of them.)

    Compiling with "treat warnings as errors" is everybody’s own decision; it should not be used as a reason to cripple the warning system of C#.

    Talking about attributes, are there any plans to get rid of the limitations for attributes in c#/the CLR?

    – no order

    – no generics

    – no lambdas

    – no compound attributes

    I’m aware you’re not responsible for the CLR’s restrictions, but C# could in fact overcome all of these (except the order, but that could be simulated by compound attributes) by deriving attribute classes per instance. I’ll include a sample transformation:

    class C {
      [Invariant<string> (s => Regex.IsMatch (s, "[a-zA-Z]+"))]
      public string Name { get; set; }
    }

    would become:

    class C {
      public string Name { get; set; }

      private class InvariantAttribute_someGuid : InvariantAttribute<string> {
        public InvariantAttribute_someGuid () : base (s => Regex.IsMatch (s, "[a-zA-Z]+")) {}
      }
    }

    With that simple transformation, anything would become possible with attributes!

    (Actually, in C# it’s not possible to derive a generic type from Attribute, not even an abstract generic type. However, without trying I’d guess that abstract generic attributes are fine with the CLR, since the concrete attribute type has to close the base type anyway. Is that correct?)

  22. Mel Grubb says:

    Definitely go with #5.  I’d much rather have the compiler make me stop and think about each instance of this problem and decide whether it truly is a problem or not.  An attribute like those used to tell static code analysis to shut up about individual items (for example: yes, I know "RegEx" is an abbreviation) would be perfect.

  23. Laura T. says:

    Unused/unread vars are, IMHO, quite common at early stages of development. Warnings are too "scary" in these cases, as the vars could be placeholders or something else, and still they do not affect generation of a correct program.

    For pure CSC.EXE, I’m for #4. The compiler must do compiling, and anything that does not affect correct program generation should be left to other tools. It might also result in faster compiling. IMHO, checking for unread fields fits here. Maybe a single warning "WARNING: Static analysis suggested. Unread/unused… things found here" could help.

    My #6 for CSC+IDE combination:

    I’d like a combination of compiler/IDE that marks these cases as TODO. A compiler/IDE-injected TODO attribute in these cases would keep me on track with what I need to check.

    After it’s all done, I could turn the TODO flags off or remove the vars.

  24. Peter Ritchie says:

    Based on your description of the added warnings (unused public field on an internal class), I’m unable to reproduce a situation without a warning (at warning level 4, which I assume is what you meant).


    internal class MyClass
    {
      public int field;
    }

    MyClass myClass = new MyClass();

    …always generates CS0649 on "field".

    Apart from that, I vote for option 2.  If the field is truly not being used, anyone complaining about a new warning about a clear design anomaly either doesn’t want to be told they did something wrong or doesn’t want to have to bother fixing it…  Neither is a reason to avoid adding the warning.  I don’t consider expecting a public field in an internal class to be accessed via Reflection a valid design reason to avoid this warning.

  25. Kevin Dente says:

    +1 for roberthahn’s proposal. A new warning level (call it 5) where these types of warnings are reported (while leaving the default at 4) allows people to enable these types of warnings while preserving the previous behavior for existing code bases.

  26. Sadek Drobi says:

    I prefer no warnings; this kind of thing is the tools’ responsibility. For example, ReSharper does it so well, so it might be a behaviour of Orcas.

    but I join Stefan Wenig regarding attributes; a lot of good tricks are ruled out because of these limitations



    [LazyLoadedProperty<T>(delegate(T target, PropertyInfo property){
      // code to fetch the lazy loaded property
    })]

  27. says:

    1: Hate it… give me the information

    2: Rude, but #3 suggestion

    3: Nope, too vague and hard to describe… probably too hard to get right all the time in the compiler.

    4: REALLY hate it, FxCop is too "optional"… we’ve got enough code out there.

    5: Okay, #2 suggestion

    6: Add another keyword to C# that indicates that the field is reflectable instead of using an attribute.  If done correctly, it could be used to allow "safe" reflection of private fields in lower-trust environments.  As for what the keyword should be: friend or visible sound nice

    public class Foo
    {
      visible int zot;
    }

  28. Eric Lippert says:

    Peter Ritchie: I apologize; I did not exactly characterize the problem in my original description.  The set of "missing" warnings is hard to characterize succinctly.

    You can reproduce the problem by having an internal class with a public field which is written but not read.  That gives no warning, but it could.

    An internal class with a public field which is not written DOES give a warning in C# 2.0.
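    A minimal repro sketch of that case (the names here are invented) would be:

    ```csharp
    // Internal class with a public field that is written but never read:
    // the C# 2.0 compiler issues no warning here, though it could.
    internal class Logger
    {
        public string lastMessage;

        public void Log(string message)
        {
            lastMessage = message;   // write only; nothing ever reads the field
        }
    }
    ```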

  29. meissnersd says:

    I vote for #4.  

    Situations that might or might not be mildly bad, but where the compiler does not have enough information to tell, are exactly what FxCop and tools like lint are for.

    It would be nice if FxCop were bundled into the next release of Visual Studio, with a simple interface to "Build at FxCop level".  Of course FxCop should have a nice way to be told in the code, "I meant to do that, move along".

  30. FxCop is far harder to use than the standard compiler in the IDE.  I’ve definitely used these compiler messages usefully before, so #4 (remove them) seems unfortunate.  Given that people use treat warnings as errors, and probably use it in automated build scripts which you should probably avoid breaking, emitting false positives is not nice.  #5 (use attributes) seems crufty, but if you consider the possible uses of write-only or uninitialized variables (usually reflection), then you’re probably already dealing with an audience unfazed by an extra attribute.

    I’d say a combination of #4 and #5 is most attractive: please don’t show these warnings as "real" warnings which might break warnings-are-errors people and distract others, but don’t remove them; just place them in a lower warning class which by default is not shown, and can’t (easily) be configured to be treated as an error.  Then potentially add the attributes nevertheless – an attribute signifying that the odd design is intentional.  These attributes have value anyway, and you’d only need them to hide errors in a really explicit mode which devs need to proactively activate.

  31. The first thing I thought of before even seeing your list of ideas was an attribute based approach.

  32. Paul Batum says:

    This is a pertinent issue to my organization. We use a custom OR-mapping framework that uses reflection as described in your first scenario. We get many instances of the warning for unused/uninitialized fields and unfortunately sometimes our developers end up with the bad habit of ignoring compiler warnings – ANY warnings. It would be excellent if something like Option #5 could be implemented.

  33. Finnsson says:

    I would vote for #5 iff (if and only if) this attribute can solve some other compiler problems, like inlining or type-inference optimization – not just a warning. An attribute like [External] to flag that the field is used in external (or unsafe) code could be used. The problem is System.Reflection(.Emit), which can really mess things up, but I don’t think that everyone else should pay the price just because I like to play around with Reflection.Emit. Using attributes/keywords to make optimizations possible (or rather switch off optimizations) is already incorporated with the volatile keyword, so using attributes/keywords to solve Reflection.Emit or unsafe-code problems wouldn’t really change the feeling of the language – but it would have the huge downside that it would break a lot of C# 2.0 code…

    On a similar note: I get warnings about unused fields in private nested structs if I only reach these fields indirectly in unsafe code using pointer-manipulations and pointer-casts to get to them (which is possible and "safe" if I have a fixed layout on the struct). E.g.:

    Struct1* s1Ptr = &s;

    Struct2* s2Ptr = (Struct2*)s1Ptr;

    In this case I would get a warning about the fields in Struct2 not being used, even though they might be used. What I am trying to say is that #3 is an OK solution, since you are giving warnings in too many cases already anyway, and I think developers realize that you can’t think of all the strange cases. If you don’t like to add even more possibly incorrect warnings, I would vote for roberthahn’s solution (i.e. one more warning level).

  34. I vote #2. There was never a guarantee that the GC wouldn’t collect unused fields after their last use, was there? (It does, after all, reserve the right — and exercises that right! — to collect an object even while one of its own methods is running.) If it’s that important that the reference stay alive beyond its last use, the class can do a GC.KeepAlive() on it in the destructor.

  35. Welcome to the XXVIII Community Convergence. In these posts I try to wrap up events that have occurred…

  36. Brad Bellomo says:

    I vote #5.  We use reflection a lot, and it violates standard information hiding, so it is vitally important developers reading the code can identify these fields.  An attribute would solve both problems, and conveniently issue a warning when the attribute is forgotten.

  37. Dennis says:

    Basically what’s needed in my opinion is greater flexibility and control in the hands of the developer, while at the same time keeping the tedium to a minimum.

    But another way is to look at the settings for warning levels as being slightly too crude. Assuming all warnings have a severity level, they also have a separate type of impact on your code. CS0649 etc. isn’t necessarily a warning – but that can only be determined on a case-by-case basis.

    #5 works very well in that it would play nicely with documentation – signalling that this property will be modified in ways beyond static analysis, etc. It’s also the only proposed solution with case-by-case flexibility.

  38. Victor Wilcox says:

    I definitely like #5.  It combines the requirement that someone properly plan / explicitly define those properties… something that should be minor if they are going to be using something as hardcore (and often not warranted in use) as reflection, takes no runtime clock cycles, and still properly punishes the lazy [omitted] on my dev team who like to copy/paste entire classes and only use part of them.

  39. cody says:


    IIRC, in C++ there was also something like UnusedParameter which was there to suppress the warning.