Debunking another myth about value types

Here's another myth about value types that I sometimes hear:

"Obviously, using the new operator on a reference type allocates memory on the heap. But a value type is called a value type because it stores its own value, not a reference to its value. Therefore, using the new operator on a value type allocates no additional memory. Rather, the memory already allocated for the value is used."

That seems plausible, right? Suppose you have an assignment to, say, a field s of type S:

s = new S(123, 456);

If S is a reference type then this allocates new memory out of the long-term garbage collected pool, a.k.a. "the heap", and makes s refer to that storage. But if S is a value type then there is no need to allocate new storage because we already have the storage. The variable s already exists and we're going to call the constructor on it, right?

Wrong. That is not what the C# spec says and not what we do. (Commenter Wesner Moise points out that yes, that is sometimes what we do. More on that in a minute.)

It is instructive to ask "what if the myth were true?" Suppose it were the case that the statement above meant "determine the memory location to which the constructed type is being assigned, and pass a reference to that memory location as the 'this' reference in the constructor". Consider the following struct, used in a single-threaded program (for the remainder of this article I am considering only single-threaded scenarios; the guarantees in multi-threaded scenarios are much weaker.)

using System;
struct S
{
    private int x;
    private int y;
    public int X { get { return x; } }
    public int Y { get { return y; } }
    public S(int x, int y, Action callback)
    {
        if (x > y)
            throw new Exception();
        callback();
        this.x = x;
        callback();
        this.y = y;
        callback();
    }
}

We have an immutable struct which throws an exception if x > y. Therefore it should be impossible to ever get an instance of S where x > y, right? That's the point of this invariant. But watch:

static class P
{
    static void Main()
    {
        S s = default(S);
        Action callback = () => { Console.WriteLine("{0}, {1}", s.X, s.Y); };
        s = new S(1, 2, callback);
        s = new S(3, 4, callback);
    }
}

Again, remember that we are supposing the myth I stated above to be the truth. What happens?

* First we make a storage location for s. (Because s is an outer variable used in a lambda, this storage is on the heap. But the location of the storage for s is irrelevant to today's myth, so let's not consider it further.)
* We assign a default S to s; this does not call any constructor. Rather it simply assigns zero to both x and y.
* We make the action.
* We (mythically) obtain a reference to s and use it for the 'this' to the constructor call. The constructor calls the callback three times.
* The first time, s is still (0, 0).
* The second time, x has been mutated, so s is (1, 0), violating our precondition that X is not observed to be greater than Y.
* The third time s is (1, 2).
* Now we do it again, and again, the callback observes (1, 2), (3, 2) and (3, 4), violating the condition that X must not be observed to be greater than Y.

This is horrid. We have a perfectly sensible precondition that looks like it should never be violated because we have an immutable value type that checks its state in the constructor. And yet, in our mythical world, it is violated.

Here's another way to demonstrate that this is mythical. Add another constructor to S:

    public S(int x, int y, bool panic)
    {
        if (x > y)
            throw new Exception();
        this.x = x;
        if (panic)
            throw new Exception();
        this.y = y;
    }

We have

static class P
{
    static void Main()
    {
        S s = default(S);
        try
        {
            s = new S(1, 2, false);
            s = new S(3, 4, true);
        }
        catch (Exception ex)
        {
            Console.WriteLine("{0}, {1}", s.X, s.Y);
        }
    }
}

Again, remember that we are supposing the myth I stated above to be the truth. What happens? If the storage of s is mutated by the first constructor and then partially mutated by the second constructor, then again, the catch block observes the object in an inconsistent state. Assuming the myth to be true. Which it is not. The mythical part is right here:

Therefore, using the new operator on a value type allocates no additional memory. Rather, the memory already allocated for the value is used.

That's not true, and as we've just seen, if it were true then it would be possible to write some really bad code. The fact is that both statements are false. The C# specification is clear on this point:

"If T is a struct type, an instance of T is created by allocating a temporary local variable"

That is, the statement

s = new S(123, 456);

actually means:

* Determine the location referred to by s.
* Allocate a temporary variable t of type S, initialized to its default value.
* Run the constructor, passing a reference to t for "this".
* Make a by-value copy of t to s.
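If it helps to see that expansion as ordinary C#, here is a sketch; the Init method is a hypothetical stand-in for the constructor body, since C# does not let you invoke a real constructor as a plain method:

```csharp
using System;

struct S2  // hypothetical struct standing in for S
{
    public int X, Y;

    // Hypothetical stand-in for the constructor body:
    public void Init(int x, int y)
    {
        if (x > y) throw new Exception();
        X = x;
        Y = y;
    }
}

static class LoweringDemo
{
    static S2 s;

    static void Main()
    {
        // Conceptual expansion of "s = new S2(123, 456);":
        S2 t = default(S2);  // allocate a temporary, zero-initialized
        t.Init(123, 456);    // run the "constructor" against the temporary
        s = t;               // by-value copy into the real storage location
        Console.WriteLine("{0}, {1}", s.X, s.Y);  // prints "123, 456"
    }
}
```

Notice that s itself is only ever written by the final copy; no half-initialized state is observable through it.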

This is as it should be. The operations happen in a predictable order: first the "new" runs, and then the "assignment" runs. In the mythical explanation, there is no assignment; it vanishes. And now the variable s is never observed to be in an inconsistent state. The only code that can observe x being greater than y is code in the constructor. Construction followed by assignment becomes "atomic"(*).

In the real world, if you run the first version of the code above you see that s does not mutate until the constructor is done. You get (0,0) three times and then (1,2) three times. Similarly, in the second version s is observed to still be (1,2) in the catch block; only the temporary was mutated when the exception happened.
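To make that concrete, here is a self-contained variant of the first program (the struct is renamed, and the callback logs into a list instead of writing to the console):

```csharp
using System;
using System.Collections.Generic;

struct S3  // same shape as S above; the ctor fires the callback three times
{
    private int x;
    private int y;
    public int X { get { return x; } }
    public int Y { get { return y; } }

    public S3(int x, int y, Action callback)
    {
        if (x > y)
            throw new Exception();
        callback();
        this.x = x;
        callback();
        this.y = y;
        callback();
    }
}

static class ObservedDemo
{
    public static List<string> Run()
    {
        var log = new List<string>();
        S3 s = default(S3);
        // s is an outer variable, so it is hoisted into a display class on the heap.
        Action callback = () => log.Add(string.Format("{0}, {1}", s.X, s.Y));
        s = new S3(1, 2, callback);  // logs "0, 0" three times
        s = new S3(3, 4, callback);  // logs "1, 2" three times
        return log;
    }
}
```

Each construction mutates only the temporary; the closed-over s is updated in a single by-value copy after the constructor returns, which is why the callback never observes a half-built value.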

Now, what about Wesner's point? Yes, in fact if it is a stack-allocated local variable (and not a field in a closure) that is declared at the same level of "try" nesting as the constructor call then we do not go through this rigamarole of making a new temporary, initializing the temporary, and copying it to the local. In that specific (and common) case we can optimize away the creation of the temporary and the copy because it is impossible for a C# program to observe the difference! But conceptually you should think of the creation as a creation-then-copy rather than a creation-in-place; that it sometimes can be in-place is an implementation detail that you should not rely upon.


(*) Again, I am referring to single-threaded scenarios here. If the variable s can be observed on different threads then it can be observed to be in an inconsistent state because copying any struct larger than an int is not guaranteed to be a threadsafe atomic operation.


Comments (40)
  1. Stuart says:

    But what are you supposed to do if your desired invariant is one that "0, 0" cannot meet?

    You are supposed to either not use a value type, abandon that invariant, or put more stuff in the struct that enables you to deal with the situation.

    For example, suppose you have a value type that represents a handle. You might put some logic in the non-default constructor that verifies with the operating system that the arguments passed in to the constructor allow the internal state to be set to a valid handle. Code which receives a copy of the struct from an untrusted caller cannot assume that a non-default constructor has been called; an untrusted caller is always allowed to create an instance of a struct without calling any methods on the struct that ensure that its internal state is valid. Therefore when writing code that uses a struct you are required to ensure that the code properly handles the case where the struct is in its default "all zero" state. 

    It is perfectly acceptable to have a flag in the struct which indicates whether the non-default constructor was called and the invariants were checked. For example, the way that Nullable<T> handles this is with a flag that indicates whether the nullable value was initialized with a valid value or not. The default state of the flag is "false". If you say "new Nullable<int>()" you get the nullable int with the "has value" flag set to false, rather than the value you get with "new Nullable<int>(0)", which sets the flag to true.
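    A sketch of that pattern (the handle type, its name, and its elided validation are all hypothetical):

```csharp
using System;

// Hypothetical handle type illustrating the "was the real constructor run?" flag:
struct SafeHandleValue
{
    private readonly bool isValid;  // false in the default "all zero" instance
    private readonly IntPtr handle;

    public SafeHandleValue(IntPtr handle)
    {
        // imagine OS-level validation of 'handle' here
        this.handle = handle;
        this.isValid = true;  // only ever set by the non-default constructor
    }

    public bool IsValid { get { return isValid; } }

    public IntPtr Handle
    {
        get
        {
            if (!isValid)
                throw new InvalidOperationException("default instance has no handle");
            return handle;
        }
    }
}
```

    Just as with Nullable<int>, default(SafeHandleValue) reports IsValid == false, so consuming code can detect the all-zero state.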

    – Eric

  2. Bill P. Godfrey says:

    Somewhere in the back of my mind, I had this inkling that new struct could be used (inside an unsafe block) to create a pointer to a struct instance (like malloc), like the myth discussed above. I wonder what's the C# equivalent of C++'s new, that would return a pointer to newly allocated "heap" memory for a struct instance.

    (Should my descent into insanity ever prompt me to want to do such a thing.)

    Sure, that's easy. To get a pointer to a heap allocated instance of struct S, simply create a one-element array "new S[] { new S(whatever) }", and then use the "fixed" statement to obtain a pointer to the first element of the array. And there you go; you've got a pointer to a pinned, heap-allocated struct. – Eric
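    In code, the recipe Eric describes looks something like this (Point is a hypothetical small struct used for the demonstration; the code must be compiled with /unsafe):

```csharp
using System;

struct Point  // hypothetical small struct
{
    public int X, Y;
    public Point(int x, int y) { X = x; Y = y; }
}

static class HeapPointerDemo
{
    public static unsafe int ReadThroughPointer()
    {
        // A one-element array gives us heap-allocated storage for one Point.
        Point[] box = new Point[] { new Point(1, 2) };

        // Pin the array so the GC cannot move it, then take a pointer
        // to its first (and only) element: a pinned, heap-allocated struct.
        fixed (Point* p = &box[0])
        {
            return p->X + p->Y;  // returns 3
        }
    }
}
```

    The pointer is only valid inside the fixed block; once the block exits, the array is unpinned and the GC is free to move it again.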

  3. Igor Ostrovsky says:

    Is there a reason why you cannot do this instead?

    * Determine the location referred to by s.
    * Set the memory behind s to all zeros (i.e., clear all fields)
    * Run the constructor, passing a reference to s for "this".


    How does that technique solve the problem? You still end up with situations in which violations of a given invariant might be observable from outside the constructor. And don't forget about exceptions. Suppose a constructor throws an exception halfway through construction, and you catch the exception. The memory could be half initialized and half still zero if we did it your way. – Eric

  4. Igor Ostrovsky says:

    Nevermind, ignore my comment above.

    I am still surprised that the C# compiler does this, though. I would have thought that calling into unknown code from within a constructor is the anti-pattern that breaks the example code.

    It is a bad pattern and you should probably not do it. The code shown is deliberately contrived so as to more clearly demonstrate the issue. The rules of the language are designed so that you don't run into this sort of problem in realistic, subtle, non-contrived situations, not so that you can do obviously dangerous stuff like invoke arbitrary functions from inside ctors. – Eric

  5. dimkaz says:

    Eric, without your callback, JIT could have optimized this assignment out.

    Perhaps, to give them the benefit of the doubt, that's what people meant.

    Even so, as a statement about the general case it is incorrect.

  6. Shuggy says:


    My view is that people rarely quote things like this as a simplification, but instead as a misunderstanding.

  7. M.E. says:

    So, would you mind explaining what (in your mind) structs ARE for? I think the reason why these myths persist is because people want rules of thumb for when structs should be used instead of classes, and one possible rule of thumb is, "When you want to avoid dynamic allocation of memory." But if that's not valid, then when IS a good time to use a struct instead of a class?

    "Dynamic allocation of memory" is a strange thing to want to avoid. Almost everything not done at compile time allocates memory! You add x + y, and the result has to go somewhere; if it cannot be kept in registers then memory has to be allocated for the result. (And of course registers are logically simply a limited pool of preallocated memory from which you can reserve a small amount of space temporarily.)

    Sometimes the memory is allocated on the stack, sometimes it is allocated on the heap, but believe me, it is allocated. Whether the result is a value type or a reference type is irrelevant; values require memory, references require memory, and the thing being referenced requires memory. Values take up memory, and computation produces values.

    The trick is to work out what exactly it is that you're trying to avoid when doing a performance optimization. Why would you want to avoid dynamic allocation? Allocation is very cheap! You know what's not cheap? Garbage collection of a long-lived working set with many small allocations forming a complex reference topology. Don't put the cart before the horse; if what you want to avoid is expensive collections then don't say that you're using structs to avoid dynamic allocation, because you're not. There are lots of strategies for making garbage collections cheaper; some, but not all of them, involve aggressively using structs.

    To address your actual question: you use value types when you need to build a model of something that is logically an immutable value, and, as an implementation detail, that consumes a small fixed amount of storage. An integer between one and a hundred is a value. True and false are values. Physical quantities, like the approximate electric field vector at a point in space, are values. The price of a lump of cheese is a value. A date is a value. An "employee" type is not a good candidate for a value type; the storage for an employee is likely to be large and mutable, and the semantics are usually referential; two different variables can refer to the same employee.

    – Eric

  8. Yogi says:


    Use Value Types (structs) when you want Value Type semantics i.e. 'Copy Value' during assignments. I have never felt the need to avoid 'dynamic allocation of memory' and reached 'structs' as the tool to address the requirement.

  9. M.E. says:

    I was not encouraging the "avoiding dynamic allocation" mindset, I was simply saying that some people (people who grew up on C) think that way. It's analogous to the belief that high-performance code must be written in assembly (which I'm aware is also not true).

    To me, the value type vs. reference type decision when defining a new type of object really encompasses four decisions:

    1. Do I want the type to be immutable?

    2. Do I want the type to be nullable?

    3. Do I want to make derived objects?

    4. Do I want to pass the object by value or by reference?

    It is my contention that if I could control 1-3 individually, I would hardly ever care about 4, and I would mix and match 1-3 depending on my needs.

    I can already control #3 for objects with the sealed keyword, so I can always make a reference type either sealed or unsealed, but I can't make a value type unsealed (sometimes inconvenient, but I can't say I yearn for it).

    For nullability, I sometimes want my value type variables to be nullable (which I can do with the nullable operator), but sometimes I want variables containing reference types to be non-nullable (which I can't do, but sometimes makes sense, e.g. for a name field, or for a collection).

    Value types are always "immutable" in a certain sense, but reference types can never be made immutable. If only I had a readonly keyword that means "all instance variables of this object must be readonly", that would take care of #1 (I've often wished for this, I'm sure you have a ready explanation why this was considered and rejected).

    Do you see what I mean? It seems like a relatively uninteresting implementation detail (pass by value vs. pass by reference) is being overloaded to drive much more interesting design decisions.

    If I could write "public sealed readonly class CostOfALumpOfCheese" and then declare a non-nullable variable "CostOfALumpOfCheese! cheeseCost" (where '!' is the opposite of '?'), I would never say "Hey, I should really take this public sealed readonly class and change it to a public struct."

  10. Gabe says:

    M.E.: now with .NET 4, you also have to consider variance. A struct is always invariant, meaning that you can't return an IEnumerable<Foo> from a function that is supposed to return an IEnumerable<object> if Foo is a struct, but you can if Foo is a class.
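    Gabe's variance point can be sketched as follows (the Cast helper is just an illustrative name; it boxes each element by hand, which is what the missing covariance forces you to do):

```csharp
using System;
using System.Collections.Generic;

public static class VarianceDemo
{
    public static IEnumerable<object> FromClasses()
    {
        string[] strings = { "a", "b" };
        return strings;  // fine: IEnumerable<T> is covariant and string is a class
    }

    public static IEnumerable<object> FromStructs()
    {
        int[] ints = { 1, 2 };
        // return ints;    // compile error: int is a struct, so no variance applies
        return Cast(ints); // must convert (box) each element instead
    }

    public static IEnumerable<object> Cast(IEnumerable<int> xs)
    {
        foreach (int x in xs)
            yield return x;  // boxing happens here, one allocation per element
    }
}
```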

    Another consideration is size. Even if a value is immutable, nonnullable, and sealed, if it's 100 bytes in size then you still probably want it to be a class because it would be pretty inefficient as a struct.

  11. Shuggy says:

    I am surprised that more people haven't mentioned a key difference between reference types and value types: that value types (can be) blittable. Barring strings/stringbuilders (which are designed quite carefully to allow it if desired), this is only possible with structs.

    Another facet of them, with similar technical reasons to the blittability, is that they have no overhead in size terms beyond their payload (and optional packing) and being multiples of 8 bits.

  12. Alexey Kurakin says:

    Hi, Eric.

    You wrote: "you use value types when you need to build a model of something that is logically an immutable value, and, as an implementation detail, that consumes a small fixed amount of storage"

    But, for example, the types System.Drawing.Rectangle and System.Windows.Rect are both structures, yet at the same time they have methods which mutate their content (for example, the Inflate method).

    Do you know why these two types from WinForms and WPF were made mutable structures?

    And maybe you can point out some cases when it is actually OK to make mutable value types?

  13. Stephan Leclercq says:

    While I do agree that creating a temp and copying it to its final destination is the way to go, I do not agree with the demonstration, and especially not with the example.

    Clearly, your example code is severely flawed. If your invariant is that x <= y, then you cannot call the callback between the assignment of this.x and this.y. If you do so, you are calling an external method while having some object whose invariant is not maintained. A truly invariant-checking compiler would insert a check of the invariant both before and after each call of the callback, and your program would throw an exception.

    What your code does is provide a pre-condition on the arguments of the constructor, not a real invariant.

  14. Simon Buchan says:

    I think this shows that no one, anywhere, has ever understood value types :). As a C++ veteran (in the PTSD sense 🙂 ), I would have expected the "unexpected" behaviour. I get that you want to reduce unexpected behaviour, but this seems to be trying a little too hard.

    BTW, this is easily demonstrated by:

    using System;

    unsafe struct Foo
    {
        public Foo(int dummy)
        {
            fixed (Foo* pinThis = &this)
                Console.WriteLine(new IntPtr(pinThis).ToString("X"));
        }

        static void Main()
        {
            var foo = new Foo(1);
            Console.WriteLine(new IntPtr(&foo).ToString("X"));
        }
    }

    This rule suggests "this" is in practice fixed in a value-type constructor; is it not simply for consistency with other function-like members?

  15. Rim says:

    "as an implementation detail, that consumes a small fixed amount of storage"

    I find this the most pressing argument for using structs, since to my mind this 'detail' makes structs play nice with interop and (graphics) hardware. I realize this is probably a niche case, but when choosing between classes and structs I find I first consider whether I need a fixed memory layout. My typical second criterion is whether using structs would make life easier on the GC, which can be an important issue on the Compact Framework, but your treatise has me seriously doubting what I think I know about deterministic finalization.

  16. marc says:

    M.E. wrote:

    > If I could write "public sealed readonly class CostOfALumpOfCheese" and then declare a non-nullable variable

    > "CostOfALumpOfCheese! cheeseCost" (where '!' is the opposite of '?')

    Yes, that is it. A programmer should not have to deal with such implementation details as value or reference types. It should be up to the compiler / JITter to take sealed readonly "classes" in non-nullable variables which are relatively small in size and optimize them as it sees fit by making value types out of them. But a programmer should not need the distinction. We much more need a distinction between mutable and immutable (as you specified with readonly) types than we need a distinction between value types and reference types.

    Stephan Leclercq wrote:

    > Clearly, your example code is severely flawed. If your invariant is that x>y, then you cannot call the callback between the

    > assignment of this.x and this.y.

    why? The called function has no way of getting at the intermediate values (I think). Just as Eric wrote, you get 0,0 three times (since the values will be copied after the constructor is done and no external code can access the intermediate copy). 0,0 satisfies the invariant (remember, x>y results in an exception, meaning the invariant is that x is less than or equal to y, which 0,0 satisfies).

    If you use structs, you have to accept the fact that someone can make a default struct which should satisfy your invariants, or you made a wrong design choice.

  17. Adam says:

    You don't need to construct a new instance of S over the top of an old one to get that invariant to fail for one of those callbacks. Just pass (1, 1) and the invariant will fail for the second callback.

    One good reason to carefully manage dynamic memory allocation is when you're using the Compact Framework. For example take a look at:…/713396.aspx

  18. george says:

    Surely the x and y referred to in the conditional test are the parameters to the constructor and NOT the private member fields of the structure?  One would have to use 'this.x' and 'this.y' to refer to the member fields.  Thus, I don't see a case here where x is > y and any exception should be thrown.  What am I missing?

  19. RichB says:

    [pedants corner…]

    > because copying any struct larger than an int is not guaranteed to be a threadsafe atomic operation

    For the CLI, an IntPtr is atomic, not an int. For C#, an int (like other 32-bit values) is guaranteed atomic.

    So for a 32-bit CLR, 32-bit values are atomic, whereas for a 64-bit CLR, any 64-bit value is atomic.

    ….according to the specs at any rate….

  20. Gabe says:

    marc: Hopefully you will read through the posts again and see that value types have significant semantic differences from reference types, such that you can't turn a class into a struct without breaking things. The whole point of a value type is that it doesn't have any memory overhead (so an array of a million ints takes 4MB instead of 12MB), meaning that it doesn't include storage for the monitor (to enable the "lock" statement) or type information (to enable things like co-/contravariance).

    What the runtime *could* do is optimize reference types to allocate them on the stack instead of the heap when it knows that there's no danger of the reference escaping the current method. However, heap allocation is no more expensive than stack allocation in the CLR (as opposed to C++, where allocating on the heap can be expensive), so the optimization only reduces the load on the garbage collector. Presumably since this is a non-trivial optimization to detect (you'd have to prove that no method of the object stores a this-reference anywhere) and may not make things much faster, it's not done at the moment.

  21. Wesner Moise says:

    public static void RunSnippet()
    {
        ValueTypeObject x = new ValueTypeObject(1, 2);
    }

    .method public hidebysig static void RunSnippet() cil managed
    {
        .maxstack 3
        .locals init (
            [0] valuetype MyClass/ValueTypeObject x)
        L_0000: nop
        L_0001: ldloca.s x
        L_0003: ldc.i4.1
        L_0004: ldc.i4.2
        L_0005: call instance void MyClass/ValueTypeObject::.ctor(int32, int32)
        L_000a: nop
        L_000b: ret
    }


    C++ allows the compilers to construct directly on the storage of the local variable being initialized.

    In addition, I see no evidence in the example output from Reflector of any additional temporary variable being created to store the initial constructed value.

  22. Wesner Moise says:

    The temporary is stored on the stack, but this becomes a CLR issue, not a C# language issue, as to whether to enable the optimization of directly initializing a previously unused variable. The example in the blog post is not ideal because the local variable is hoisted into a compiler-generated display class.

    You make an excellent point Wesner, one which I should have called out in my original article. As an optimization we *can* often initialize "in place" and do so, but only when the consequences of that choice are unobservable. I'll update the text; thanks for the note!

    – Eric

  23. marc says:

    Gabe: I know about the semantic differences, and the more I read about them, the less I think we should bother a programmer with them.

    So I am not proposing to change C# to be value type / reference type agnostic; it was mostly a comment on the language design as a whole, that such a difference should not be made at all. It is too late to do this in C#. The compiler / the CLR could detect if an instance requires monitor and/or type information and provide the storage space if needed. This would basically mean performing the boxing only once if an instance needs to be a reference type but is simple enough to be a value type.

    I still believe that having the assignment and equality operator meaning different things for value / reference types is a source of many (way many) bugs.

  24. Gabe says:

    marc: I'm not sure what you're proposing. Are you suggesting a system like C++ where types are neither value nor reference, but the value/reference aspect is determined at the point of use? Surely you don't want that because it just shifts the problem from the time a type is created to every time it's used!

    Are you instead suggesting that all types should be what are currently considered to be reference types, and make the compiler and runtime responsible for optimizing them where possible to be merely values? If so, the optimization would be extremely rare. A publicly available array of ints, for example, would have to always be an array of references to ints because you never know if some code in another assembly might want to get a reference to one of those ints. Many OO systems don't have value types, and I'm not sure that many of them even attempt this optimization.

  25. Stuart says:

    "Are you instead suggesting that all types should be what are currently considered to be reference types, and make the compiler and runtime responsible for optimizing them where possible to be merely values?"

    I know you weren't talking to me, but I believe I have an answer that makes sense and possibly (?) has some merit. I'm not sure if this is what marc was proposing or not.

    But seems to me that the distinction that's valuable to the programmer is "immutable or not" rather than "value or not". An immutable sealed reference type like string might as well be a value type; a mutable value type is – well, in my opinion – confusing enough that, personally, I'd have no problem making them simply forbidden outside of unsafe code.

    So if there were a way to declare, say, "readonly sealed class X" and have the "readonly" modifier enforce that the class must have value semantics – that is, only have readonly fields and all fields must be of readonly types themselves (and perhaps no finalizer) – then for *those specific types* (and with some other caveats) it perhaps make sense to elide the distinction between value and reference type and make it a purely runtime implementation detail.

    In practice, there are other complications with an approach like that; for example, the question of nullability (an immutable reference type can be null; an immutable value type cannot). If we grant that both are semantically "values", shouldn't the nullability question be separate from the storage mechanism? For that matter, why should default(string) be null rather than ""?

    My thought would be that each "readonly" type ought to be able to be used with the ? suffix for nullability, but also that it ought to be able to declare its own default value if it is NOT used with that suffix. And that, as a result, there should not be the restriction that every value type has to accept "all zeros" as a legitimate value; it can declare its own default.

    The CLR would also need a low-level immutable array type in order to support making "string" one of these language-level readonly types.

    All in all, I think it might be a very worthwhile thing to do if someone were redesigning C# from scratch, but I don't think it can be done in a way that's both sane and backward-compatible, because at minimum you'd have to turn every instance of "string" into "string?"…

  26. M.E. says:

    > The whole point of a value type is that it doesn't have any memory overhead (so an array of a million ints takes 4MB

    > instead of 12MB), meaning that it doesn't include storage for the monitor (to enable the "lock" statement) or type

    > information (to enable things like co-/contravariance).

    Now, how am I supposed to know that by declaring something as a value type, I say nothing at all about how that value is allocated in memory, but I DO say something about what extra information the system stores with the object? Eric claims that the first is none of my business, but if so, why is the second my business?

    > Are you suggesting a system like C++ where types are neither value nor reference, but the value/reference aspect is

    > determined at the point of use?

    This does seem like the right model to me.

    > My thought would be that each "readonly" type ought to be able to be used with the ? suffix for nullability, but also that it
    > ought to be able to declare its own default value if it is NOT used with that suffix. And that, as a result, there should not be
    > the restriction that every value type has to accept "all zeros" as a legitimate value; it can declare its own default.

    This is an interesting suggestion, because one often does run into situations (as in the article) where you want a struct to follow some invariant, but the default values would violate that invariant.  Having a readonly keyword for classes would be oh so nice . . .

  27. ficedula says:

    >My thought would be that each "readonly" type ought to be able to be used with the ? suffix for nullability, but also that it ought to be able to declare its own default value if it is NOT used with that suffix. And that, as a result, there should not be the restriction that every value type has to accept "all zeros" as a legitimate value; it can declare its own default.

    That's a performance nightmare; granted you might not care about that in some circumstances, but it's still a concern I would expect the CLR team to be worried about.

    Right now newing up an array with 1,000,000 elements is a fairly straightforward task: grab the necessary amount of memory and make sure it's zeroed (and newly allocated memory from the OS will be zeroed already. If not, writing zero to a large contiguous block of memory is super fast). If the struct has non-zero default values (2 for the first field, 12 for the second, 0x35dffe2 for the third) the runtime's new[] code has to loop through writing the default values into each location.

    This is particularly painful since it has to be done even if the various elements of the array are only going to be overwritten with some non-default values shortly afterwards! The same applies to fields in classes, you can't just zero out the memory when constructing an object (something that, as above, might already have been done for you in advance), the runtime has to go through and fill in a bunch of default values – which you'll probably go and overwrite in your constructor anyway.

  28. Stuart says:

    Eek! That's a very good point. Ouch.

    I hate when reality messes with my perfectly good theories! 😉

  29. configurator says:

    ficedula: Isn't there a command to blit a certain byte array repeatedly 1,000,000 (or any N) times?

  30. ficedula says:

    configurator: You could optimise the fill, yes. It's still going to be more complex than filling with zeroes; rather than writing zero to every location, you have to fetch the default values based on the type being allocated; and rather than writing data in blocks of 16+ bytes at a time, you may end up having to write it in smaller chunks if your struct is an inconvenient size.

    That aside, since it *is* possible to "pre clear" your unused memory to zeroes, you lose out fairly significantly there. As I mentioned, memory straight from the OS allocator will be zero-filled already so you currently don't have to do anything before using it. Memory that's not come straight from the OS and was already allocated, you could arrange for that to be cleared to zero as part of garbage collection (when you're touching that memory already and it's fresh in the CPU cache, so the update is practically 'free'.) Compared to that, doing *any* unnecessary work filling in default values is a loss.

  31. Just a guy. says:

    Structs are for the most part a waste of time, and I fail to see much use for them.

    Okay I'm being unkind. Is it me, or do structs get elevated status solely from "A class is a reference type. A struct is a value type. This is different because …. ", and that leads one erroneously to think that a struct is on equal footing with a class. To the untrained eye structs _seem_ to be like classes in so very many ways.

    To be blunt (and unkind again) in an OO language, structs are C hangovers. Necessary, but nobody talks about int or a double as much as they talk about structs.

    benefit: I can allocate 1,000,000 structs faster than I can allocate 1,000,000 objects. Nifty (I guess)

    point of difference: structs get automatically shallow copied on assignment, or when passed as parameters.

    I suppose my real gripe is only that structs can contain references, and if I had access to the spec back in the day, I would have snuck in "Structs containing references should be outlawed" or just purloined the page outright. Can you spot the issue below? Do you know the output? There's nothing magical or tricky, but this code has different semantics if S is a class versus a struct.

    S s = new S();
    s.SA = 123;
    s.SC = new C { CA = 12, CB = 23 };
    S prime = s;
    prime.SA = 999;
    prime.SC.CA = 222;

    I'm not beating on Eric or the compiler team here, or on anyone really. It just goads me that the only difference between class C and struct S is that the former doesn't have (shallow) copy on assignment semantics. e.g. structs have an automagic ICloneable MemberwiseClone() built in (with all the lovely defects of the shallow copy).
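    The puzzle in the snippet translates directly to C, where struct assignment is also a shallow copy and a pointer field plays the role of the reference-type field. The names below mirror the C# snippet; this is an analog, not the original C#:

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    struct C { int CA; int CB; };
    struct S { int SA; struct C *SC; };  /* SC stands in for a reference-type field */

    int main(void) {
        struct S s = {0};
        s.SA = 123;
        s.SC = malloc(sizeof *s.SC);
        s.SC->CA = 12;
        s.SC->CB = 23;

        struct S prime = s;   /* shallow copy: SA is duplicated, SC still points to the same C */
        prime.SA = 999;
        prime.SC->CA = 222;

        /* s.SA is untouched, but s.SC->CA changed through the shared pointer */
        printf("%d %d\n", s.SA, s.SC->CA);  /* prints "123 222" */
        free(s.SC);
        return 0;
    }
    ```

    The value field diverges after the copy while the pointee is silently shared, which is the "subtle defect" the comment describes: with a class, s.SA would also have become 999.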

    I'm not going to get too ornery about it, but the subtle defect is discernible by us only on close inspection, and is almost completely lost on a beginner. In fact I'm so used to by-reference semantics that I will fail to notice that s.SA != prime.SA. I'd also wager that most of the people I've worked with would never spot the difference, especially when buried under 1,000,000 lines of other code. I don't hover my cursor over every variable and check class vs struct, and I certainly don't code review every line of code checked in. Gosh I'm glad I use NUnit!

    I like source-code trivia. It's a fun pastime, and I can impress my co-workers. But in reality, it's sometimes a bad thing that this is a fact in the first place. The fact that I can subtly break a program by changing 'struct' to 'class' in a declaration is hard to swallow. The compiler doesn't complain because it's still a legal program. I already know Eric's arguments against adding warnings to the compiler, but anytime there's a reference type inside a struct, and that reference type doesn't have value-type characteristics (e.g. strings are fine), I want the compiler to sternly scold me for being downright "dangerous". (Dangerous is what the kids these days like to call code that has the potential to shoot someone in the foot, but then kids these days go a bit over the top.)

    Structs, when used correctly, are fine. I really don't have a beef with them except that they're too permissive: they're allowed to act too much like classes, except for all these really small caveats. It's so easy to add a reference type to a struct that you probably don't notice the error when you make it. Values with reference types and references with value types. What is a beginner to think? Stack vs Heap is really the least of their concerns. Truly, in C#, storage is somewhat moot. I don't recall Python or other languages caring so much about where in memory a variable happens to be stored. Yes it matters in C++, but not so much in C#.

    Unless required for some sort of interop, I mostly avoid structs and make immutable reference types instead. At the end of the day all I lose is some automagic copy semantics, but I can quite easily craft up an immutable reference type, so I don't really care if it's copied by value or by reference. When's the last time you really noticed a difference between a string and an int? Both have value-type semantics, which is what matters in the end.

    PS Thanks for another good post. 🙂

  32. Gabe says:

    What's wrong with structs that contain references? Not only are popular types like BigInteger and KeyValuePair implemented as structs that contain references, so are innumerable Enumerator types for the collections you use every day.

    As for whether you should care about Stack vs Heap, I believe that's Eric's point: you shouldn't care. I don't think I've ever had to; that's the BCL authors' job.

    When should you make your own value types? Almost never. Your average client/database/server app developer can probably go their whole career without having to make one. And when you do, they should probably be immutable. Note that your typical Enumerator is a mutable struct that contains references.

    So what about other languages that don't care to make the distinction? Look at Java: It has a few built-in value types (ints, floats, etc.) but you can't create your own. That means you can have an array of 1000000 doubles and it will take 8MB, but if you want an array of Complex (2 doubles), you need to create a 4MB (or 8MB) array of references, then allocate a million 16-byte objects, each with 8 (or 16) bytes of overhead. The solution is usually to create two separate arrays (one for reals and one for imaginaries), then rewrite all the code to use the hack.
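    Gabe's arithmetic can be checked with a quick C sketch, since C can express both layouts. The numbers below assume 8-byte pointers (a typical 64-bit platform) and ignore per-object headers, so the reference-based figure is actually an underestimate for Java or .NET:

    ```c
    #include <stdio.h>

    struct Complex { double re, im; };

    int main(void) {
        /* array of values: 1,000,000 * 16 bytes of payload, stored inline */
        size_t inline_bytes = 1000000 * sizeof(struct Complex);

        /* array of references: an 8-byte pointer per element, plus a separately
           allocated 16-byte object each (headers not counted here) */
        size_t ref_bytes = 1000000 * (sizeof(struct Complex *) + sizeof(struct Complex));

        printf("%zu %zu\n", inline_bytes, ref_bytes);  /* 16000000 24000000 */
        return 0;
    }
    ```

    Even before object headers, the reference layout costs 50% more memory and scatters the payload across a million separate allocations, which is why the two-parallel-arrays hack shows up in Java numeric code.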

    Python is even worse — it has no value types whatsoever. Every int, float, or complex you create is a new object on the heap with all the overhead that entails. It's so bad there are numerous C modules (like NumPy) to allow you to create and manipulate arrays of primitive numeric types.

    In other words, if C# didn't have value types, you'd be stuck having to use hacks like creating them in C when you really do need them.

  33. configurator says:

    ficedula: I'm not sure that garbage collection always happens when memory is fresh in the CPU cache. You'd need a damn good GC for that…

    You can always write data in blocks of 16 bytes. Suppose your struct is 5 bytes – simply concatenate 16 copies, and then you've got an 80-byte block you can blit all over the place – or does quick blitting require that your 16-byte blocks are all identical?
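    One common way to implement configurator's idea, sketched in C: seed the buffer with one copy of the pattern, then repeatedly memcpy the already-filled prefix onto the rest, doubling the filled region each time. The fill then takes O(log n) bulk copies regardless of the pattern size:

    ```c
    #include <stdio.h>
    #include <string.h>

    /* fill dst (len bytes) with a repeating pattern (pat_len bytes) by doubling */
    static void pattern_fill(unsigned char *dst, size_t len,
                             const unsigned char *pat, size_t pat_len) {
        if (len == 0 || pat_len == 0) return;
        size_t filled = len < pat_len ? len : pat_len;
        memcpy(dst, pat, filled);               /* seed with one copy of the pattern */
        while (filled < len) {
            size_t chunk = filled < len - filled ? filled : len - filled;
            memcpy(dst + filled, dst, chunk);   /* double the filled prefix (no overlap) */
            filled += chunk;
        }
    }

    int main(void) {
        unsigned char pat[5] = {1, 2, 3, 4, 5};  /* an "inconvenient" 5-byte struct */
        unsigned char buf[23];
        pattern_fill(buf, sizeof buf, pat, sizeof pat);
        for (size_t i = 0; i < sizeof buf; i++) printf("%d", buf[i]);
        printf("\n");  /* 12345 repeated, truncated to 23 bytes */
        return 0;
    }
    ```

    This sidesteps the register-size problem ficedula raises, though it still loses to a plain zero fill, which needs no source data at all.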

  34. ficedula says:

    configurator: The idea is that when you garbage collect, the objects that get collected will be touched as part of the collection process. (At least, based on my understanding of the way the .NET GC works). After marking all the objects that are 'live' the GC then compacts the live objects – as part of this process it'll be walking over all the objects [in the generation being collected] which will generally bring that memory into the CPU cache. Zeroing it at that point is potentially cheap since you're touching it anyway. It's not that the GC only runs when the memory is already in the cache – but that the process of GC will *bring* it into the cache anyway [so take advantage of that to clear it out].

    (I have no idea whether the .NET GC actually does preclear the memory at this point; but it seems like a logical option that they might have investigated. The costs might have turned out not to be worthwhile under real world conditions.)

    Writing data in blocks of 16-bytes: right, but your pattern is now 80 bytes long. Older x86 machines don't *have* 80 bytes of registers available for general copying of data! Newer x86 and x86-64 machines do in the form of SSE registers, but then you ideally need to be writing data to 16-byte aligned destinations, and you're taking up more registers. Writing zeros just means clearing one SSE register and then blasting those 16 bytes out again and again.

    (If I had to write a runtime for a language that supported default values for value types I'd certainly look at doing all these sort of things. I'd probably prefer to be writing a runtime for a system that just allowed me to zero everything out though…!)

  35. configurator says:

    @ficedula: I haven't touched assembly in a while, but I remember there being a call that did something like memset for large areas of memory, with any given piece of data. I could be wrong though.

  36. ficedula says:

    @configurator: There's REP STOSD – but that only sets 4-bytes at a time, so you can set a 4-byte pattern, but no larger. x86-64 has REP STOSQ which writes 8-bytes at a time, but again, you can only set an 8-byte pattern. Great for setting a region of memory to zero (or all 0xffffffff, or 0x80808080, or whatever), but no use for setting any larger pattern. In order to set larger patterns, you need to hold the pattern in registers and write your own loop to copy each register into memory in turn. Your pattern still has to fit into available registers.

    (You can also use REP MOVSD to copy a region of memory from one place to another, but (a) that's slower, because it's memory-to-memory rather than register-to-memory, and (b) To copy an 80-byte pattern 1000 times over into an 80000 byte region, you'd need to set up and call REP MOVSD 1000 times … and have your 80-byte pattern set up in memory first as the copy source.)

    (On modern PCs, it turns out that while the x86 string instructions (REP MOVS/STOS) are very compact – one instruction to blast as much data as you want from place to place – they're actually not as fast as using the SSE registers which can move 16-bytes at a time and can also be given cache hints.)

  37. Gabe says:

    configurator & ficedula: Going on about how to initialize arrays with constant bit patterns is pointless, because it's not something you would likely want. If you didn't have to have the all-0 default constructor for value types you would want to have your own constructor called for each instance (like in C++), not have some arbitrary bit pattern copied to each element.

  38. Just a guy says:

    @Gabe there's a difference between confusing specs (IMO structs with embedded reference types) and some functionality that takes advantage of the quirks in a confusing spec. The latter, as great as it may be, does not excuse the former. There are innumerable examples of creative code that takes advantage of a language defect, but so what?

    Or are you implying the main driver of this 'feature' is to enable enumerators to work a little better?

    In any event, if you read my comment, it was:

    reference types – check

    value types without references – check

    value types with references – wtf?

    @ficedula … since this is conjecture about stuff we know nothing of, I'd wager memory isn't cleared until it's requested. The GC, we're told, is meant to be as quick as possible, and zeroing memory that may never be used by the program again doesn't seem like a good use of time. I'd file it under YAGNI (You Ain't Gonna Need It).

  39. ficedula says:

    @Gabe: The use-case would be that you could set defaults for all the fields which maintained sensible invariants that weren't necessarily based on all zero values … even if for the struct to be used 'for real' you then initialised the fields in a constructor to more 'useful' values. I'd agree that this isn't beneficial *enough* to justify all the effort needed to implement the feature though.

    @Just a guy: You could well be right; I'm just speculating on a possible optimisation. I'd expect the CLR team to have thought about it and decided to implement or not based on how it affects real-world workloads … possibly it's just not worth doing, period.

  40. Anonymous says:

    I just wonder if this is way overboard… I mean, at a top level, isn't that enough for typical development?

Comments are closed.
