Implementation-defined behaviour

As I've mentioned several times on this blog before, C# has been carefully designed to eliminate some of the "undefined behaviour" and "implementation-defined behaviour" that you see in languages like C and C++. But I'm getting ahead of myself; I should probably start by making a few definitions.

Traditionally we say that a programming language idiom has undefined behaviour if use of that idiom can have any effect whatsoever; it can work the way you expect it to or it can erase your hard disk or crash your machine. Moreover, the compiler author is under no obligation to warn you about the undefined behaviour. (And in fact, there are some languages in which programs that use "undefined behaviour" idioms are permitted by the language specification to crash the compiler!)

An example of undefined behaviour in C, C++ and C# is writing through a pointer that you have no business writing to. Doing so might cause a fault that crashes the process. But the memory could be valid; it could be an important data structure of the runtime software. It could be code that you are now overwriting with code that does something else. And hence anything can happen: you're rewriting the running software into software that does something else.

By contrast, an idiom has implementation-defined behaviour when the compiler author has several choices about how to implement the feature and must pick one. As the name implies, implementation-defined behaviour is at least defined. For example, C# permits an implementation to either throw an exception or produce a value when an integer division overflows, but the implementation must pick one. It cannot erase your hard disk.
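
To make that concrete, here is a sketch of the one integer division in C# that can overflow: `int.MinValue / -1`. The `Divide` helper is hypothetical, written just for this illustration. In an unchecked context the specification allows the implementation either to throw an arithmetic exception or to produce the left operand; either outcome is conforming.

```csharp
using System;

static class DivisionOverflow
{
    // int.MinValue / -1 mathematically yields 2147483648, which does not
    // fit in an int. In an unchecked context the C# specification lets an
    // implementation either throw an ArithmeticException (or a subclass)
    // or produce the left operand; both are conforming behaviours.
    public static string Divide(int n, int d)
    {
        try
        {
            unchecked { return (n / d).ToString(); }
        }
        catch (ArithmeticException ex)
        {
            return ex.GetType().Name;
        }
    }

    static void Main()
    {
        // On the .NET runtime this typically reports an overflow exception,
        // but returning "-2147483648" would also be a conforming result.
        Console.WriteLine(Divide(int.MinValue, -1));
    }
}
```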

All that said, for the rest of this article I'm not going to make a strong distinction between these two flavours of underspecified behaviour. The question I'm actually interested in addressing today is:

What are some of the factors that lead a language design committee to leave certain language idioms as undefined or implementation-defined behaviours?

The first major factor is: are there two existing implementations of the language in the marketplace that disagree on the behaviour of a particular program? If FooCorp's compiler compiles M(A(), B()) as "call A, call B, call M", and BarCorp's compiler compiles it as "call B, call A, call M", and neither is the "obviously correct" behaviour, then there is a strong incentive for the language design committee to say "you're both right" and make it implementation-defined behaviour. This is particularly the case if FooCorp and BarCorp both have representatives on the committee.
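
To see what the two compilers would disagree about, here is a small sketch (the names A, B and M are hypothetical) that logs the order of the calls. C# itself pins argument evaluation down to left-to-right, so this program's log is fixed; in a language that leaves the order implementation-defined, a conforming compiler could produce either log.

```csharp
using System;
using System.Collections.Generic;

static class EvaluationOrder
{
    public static string Run()
    {
        var log = new List<string>();
        int A() { log.Add("A"); return 1; }
        int B() { log.Add("B"); return 2; }
        void M(int a, int b) { log.Add("M"); }

        // C# specifies left-to-right evaluation of arguments, so the log
        // is always "A B M". Under an implementation-defined ordering,
        // "B A M" would be an equally conforming result.
        M(A(), B());
        return string.Join(" ", log);
    }

    static void Main() => Console.WriteLine(Run());
}
```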

The next major factor is: does the feature naturally present many different possibilities for implementation, some of which are clearly better than others? For example, in C# the compiler's analysis of a "query comprehension" expression is specified as "do a syntactic transformation into an equivalent program that does not have query comprehensions, and then analyze that program normally". There is very little freedom for an implementation to do otherwise. For example, you and I both know that

from c in customers
from o in orders
where c.Id == o.CustomerId
select new {c, o}

and

from c in customers
join o in orders on c.Id equals o.CustomerId
select new {c, o}

are semantically the same, and that the latter is likely to be more efficient. But the C# compiler never, ever turns the first query expression syntax into a call to Join; it always turns it into calls to SelectMany and Where. The runtime implementation of those methods is of course fully within its rights to detect that the object returned by SelectMany is being passed to Where, and to build an optimized join if it sees fit, but the C# compiler does not make any assumptions like that. You will always get a call to SelectMany out of the first syntax, and never a call to Join. We wanted the query comprehension transformation to be syntactic; the smart optimizations can be in the runtime.

By contrast, the C# specification says that the foreach loop should be treated as the equivalent while loop inside a try block, but allows the implementation some flexibility. A C# compiler is permitted to say, for example "I know how to implement the loop semantics more efficiently over an array" and use the array's indexing feature rather than converting the array to a sequence as the specification suggests it should. A C# implementation is permitted to skip calling GetEnumerator.
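
As a sketch, the expansion the specification describes looks roughly like the following; the try/finally exists so that the enumerator's Dispose method is called even if the loop body throws.

```csharp
using System;
using System.Collections.Generic;

static class ForeachExpansion
{
    public static int SumWithForeach(IEnumerable<int> items)
    {
        int sum = 0;
        foreach (var item in items)
            sum += item;
        return sum;
    }

    // Roughly the expansion the specification describes: a while loop
    // inside a try block, with the enumerator disposed in the finally
    // clause. A compiler may instead index directly over an array.
    public static int SumExpanded(IEnumerable<int> items)
    {
        int sum = 0;
        IEnumerator<int> e = items.GetEnumerator();
        try
        {
            while (e.MoveNext())
            {
                int item = e.Current;
                sum += item;
            }
        }
        finally
        {
            e.Dispose();
        }
        return sum;
    }

    static void Main()
    {
        var data = new[] { 1, 2, 3 };
        Console.WriteLine(SumWithForeach(data) == SumExpanded(data));
    }
}
```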

A third factor is: is the feature so complex that a detailed breakdown of its exact behaviour would be difficult or expensive to specify? The C# specification says very little indeed about how anonymous methods, lambda expressions, expression trees, dynamic calls, iterator blocks and async blocks are to be implemented; it merely describes the desired semantics and some restrictions on behaviour, and leaves the rest up to the implementation. Different implementations could reasonably do different codegen here and still get good behaviours.

A fourth factor is: does the feature impose a high burden on the compiler to analyze? For example, in C# if you have:

Func<int, int> f1 = (int x)=>x + 1;
Func<int, int> f2 = (int x)=>x + 1;
bool b = object.ReferenceEquals(f1, f2);

Suppose we were to require b to be true. How are you going to determine when two functions are "the same"? Doing an "intensionality" analysis -- do the function bodies have the same content? -- is hard, and doing an "extensionality" analysis -- do the functions have the same results when given the same inputs? -- is even harder. A language specification committee should seek to minimize the number of open research problems that an implementation team has to solve! In C# this is therefore left to be implementation-defined; a compiler can choose to make them reference equal or not at its discretion.

A fifth factor is: does the feature impose a high burden on the runtime environment?

For example, in C# indexing past the end of an array is well-defined; it produces an array-index-was-out-of-bounds exception. This feature can be implemented with a small -- not zero, but small -- cost at runtime. Calling an instance or virtual method with a null receiver is defined as producing a null-was-dereferenced exception; again, this can be implemented with a small but non-zero cost. The benefit of eliminating the undefined behaviour pays for the small runtime cost. But determining whether an arbitrary pointer in unsafe code is safe to dereference would have a large runtime cost, and so we do not do it; we move the burden of making that determination to the developer, who, after all, is the one who turned off the safety system in the first place.
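
A small sketch of both well-defined failures, using a hypothetical `Classify` helper that runs an action and reports what happened:

```csharp
using System;

static class WellDefinedFailures
{
    // Hypothetical helper: run an action and report how it finished.
    public static string Classify(Action action)
    {
        try { action(); return "no exception"; }
        catch (Exception ex) { return ex.GetType().Name; }
    }

    static void Main()
    {
        int[] a = new int[3];
        string s = null;

        // Indexing past the end of an array is well-defined: it throws,
        // rather than reading whatever memory happens to follow the array.
        Console.WriteLine(Classify(() => { _ = a[10]; }));   // IndexOutOfRangeException

        // Calling an instance member on a null receiver is also defined.
        Console.WriteLine(Classify(() => { _ = s.Length; })); // NullReferenceException
    }
}
```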

A sixth factor is: does making the behaviour defined preclude some major optimization? For example, C# defines the ordering of side effects when observed from the thread that causes the side effects. But the behaviour of a program that observes side effects of one thread from another thread is implementation-defined except for a few "special" side effects. (Like a volatile write, or entering a lock.) If the C# language required that all threads observe the same side effects in the same order then we would have to restrict modern processors from doing their jobs efficiently; modern processors depend on out-of-order execution and sophisticated caching strategies to obtain their high level of performance.
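
As a sketch of the kind of "special" side effect the exception carves out: pairing a volatile write with a volatile read orders an ordinary write so that the observing thread is guaranteed to see it. The field and method names here are hypothetical.

```csharp
using System;
using System.Threading;

static class SideEffectOrdering
{
    // The volatile flag is one of the "special" side effects whose
    // cross-thread ordering the language does guarantee; the ordinary
    // write to payload is published by the volatile write to done.
    static volatile bool done;
    static int payload;

    public static int Run()
    {
        var worker = new Thread(() =>
        {
            payload = 42;   // ordinary write...
            done = true;    // ...made visible by the volatile write
        });
        worker.Start();

        while (!done)       // volatile read
            Thread.Yield();

        // Because the volatile write/read pair orders the payload write
        // before the flag, this read is guaranteed to observe 42.
        return payload;
    }

    static void Main() => Console.WriteLine(Run());
}
```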

Those are just a few factors that come to mind; there are of course many, many other factors that language design committees debate before making a feature "implementation defined" or "undefined".

Comments (12)
  1. Arnshea says:

    Another excellent post.

    "Particularly this is the case if FooCorp and BarCorp both have representatives on the committee." – LOL

  2. Mike says:

    Excellent post as always! I notice that almost all of your arguments seem to be justifications for leaving behavior implementation defined, e.g. "let the implementers pick the behavior that works best for their situation". How does this translate into the much looser statement of "let the implementation do whatever it wants without restriction"? Other than high run-time costs is there ever a good reason for *completely* undefined behavior?

  3. Carl D says:

    @mike Within the C and C++ languages, the justification for undefined behavior almost always came down to Eric's reason 5: high burden on the runtime or reason 6: loss of a major optimization opportunity (on some platform) by constraining the implementation at all.  IMO this problem is much worse on languages like C and C++ that aim to target a very wide range of architectures, some (or even many) of which are not representative of modern CPU/memory designs.

  4. qrli says:

    Func<int, int> f1 = (int x)=>x + 1;

    Func<int, int> f2 = (int x)=>x + 1;

    bool b = object.ReferenceEquals(f1, f2);

    I guess you meant

       object.Equals(f1, f2)

    because if it is ReferenceEqual then it is reference equality.

  5. Damien says:

    @qrli – no, I'm guessing he meant reference equality – he's asking whether the compiler should be forced to determine that both lambda expressions are equal, lambda expressions are immutable, so only one instance is required to exist.

  6. Farhan says:

    Real good blog, kind of Sapolsky lecture….

  7. David H. says:

    @Farhan: Joel Spolsky at the business end and Eric Lippert at the technical one — imagine that!

    The quality of posts such as this one — a majority of posts on this blog, IMO — is also in their uniqueness. When was the last time you read about this subject in particular, in such detail? Thanks, Eric.

  8. Farhan says:

    @ David

    I was talking about Robert Sapolsky (maybe you fully disagree now) 😉

  9. Jon Skeet says:

    For enquiring minds who were intrigued by Eric's mention of integer division overflow, the relevant section of the C# 4 spec is 7.8.2.

    Division by 0 is well-defined to always throw System.DivideByZeroException.

    However, int.MinValue / -1 and long.MinValue / -1L always throw an ArithmeticException in a checked context; in an unchecked context it's implementation-defined. (It's stricter than Eric mentions, too – the exception must be ArithmeticException or a subclass if it's thrown at all, and if the implementation decides to return a value, it must be the left-hand operand.)

    Good stuff as always.

  10. JonK says:

    "And in fact, there are some languages in which programs that use "undefined behaviour" idioms are permitted by the language specification to crash the compiler!"

    Cool – tell us more. Are we talking bondage&discipline languages like Ada (which I've always avoided) or research languages or…

  11. CarlD says:

    @JonK – I believe that C++ nested template instantiation that exceeds the nesting depth limit of the compiler might be an example of something that's permitted by the C++ standard to crash the compiler.  I'd have to check the standardese to be sure though.

  12. Wolfram says:

    Thanks for the interesting article!

    One thing I did not understand: You say 'a foreach loop is like a while loop inside a try block'. Why? What is a try block for when there is no catch or finally clause?



Comments are closed.