**User:** Recently I found out about a peculiar behaviour concerning division by zero in floating point numbers in C#. It does not throw an exception, as with integer division, but rather returns an "infinity". Why is that?

**Eric:** As I've often said, "why" questions are difficult for me to answer. My first attempt at an answer to a "why" question is usually "because that's what the specification says to do"; this time is no different. The C# specification says to do that in section 4.1.6. But we're only doing that because that's what the IEEE standard for floating point arithmetic says to do. We wish to be compliant with the established industry standard. See IEEE standard 754-1985 for details. Most floating point arithmetic is done in hardware these days, and most hardware is compliant with this specification.
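The contrast is easy to see in C# directly; here is a minimal sketch (the class and variable names are mine, not from the specification):

```csharp
using System;

class DivideByZeroDemo
{
    static void Main()
    {
        double num = 1.0, zero = 0.0;

        // IEEE 754: floating-point division by zero yields well-defined values.
        Console.WriteLine(double.IsPositiveInfinity(num / zero));   // True
        Console.WriteLine(double.IsNegativeInfinity(-num / zero));  // True
        Console.WriteLine(double.IsNaN(zero / zero));               // True (0/0 is NaN)

        // Integer division by zero throws instead.
        int i = 1, z = 0;
        try { Console.WriteLine(i / z); }
        catch (DivideByZeroException) { Console.WriteLine("threw"); }
    }
}
```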

**User:** It seems to me that division by zero is a bug no matter how you look at it!

**Eric:** Well, since clearly that is not how the members of the IEEE standardization committee looked at it in 1985, your statement that it must be a bug "no matter how you look at it" must be incorrect. Some industry experts do not look at it that way.

**User:** Good point. What motivated this design decision?

**Eric:** I wasn't there; I was busy playing Jumpman on my Commodore 64 at the time. But my educated guess is that **it is desirable for all possible operations on all floats to produce a well-defined float result**. Mathematicians would call this a "closure" property; that is, the set of floating point numbers is "closed" over all operations.

Positive infinity seems like a reasonable choice for dividing a positive number by zero. It seems plausible because of course the limit of 1 / x as x goes to zero (from above) is "positive infinity", so why shouldn't 1/0 be the number "positive infinity"?

Now, speaking *as a mathematician*, I find that argument specious. A thing and its limit need not have any particular property in common; it is fallacious to reason that just because, say, a sequence has a particular limit that a fact about the limit is also a fact about the sequence. Mathematically, "positive infinity" (in the sense of a limit of a real-valued function; let's leave transfinite ordinals, hyperbolic geometry, and all of that other stuff out of this discussion) is not a number at all and should not be treated as one; rather, it's a terse way of saying "the limit does not exist because the sequence diverges upwards".

When we divide by zero, essentially what we are saying is "solve the equation x * 0 = 1"; the solution to that equation is not "positive infinity", it is "I cannot because there is no solution to that equation". It's just the same as asking to solve the equation "x + 1 = x" -- saying "x is positive infinity" is not a solution; there is no solution.

But speaking *as a practical engineer* who uses floating point numbers to do an imprecise approximation of ideal arithmetic, this seems like a perfectly reasonable choice.

**User:** But surely it is impossible for the hardware to represent "infinity".

**Eric:** It certainly is possible. You've got 32 bits in a single-precision float; that's over four billion possible floats. All bit patterns of the form

?11111111???????????????????????

are reserved for "not-a-number" values. That's over sixteen million possible NaN combinations. Two of those sixteen million NaN bit patterns are reserved to mean positive and negative infinity. Positive infinity is the bit pattern 01111111100000000000000000000000 and negative infinity is 11111111100000000000000000000000.
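Those bit patterns can be verified from C# by round-tripping a float through BitConverter; this is just an illustrative sketch:

```csharp
using System;

class BitPatterns
{
    // Reinterpret a float's bits as an int and render them as a 32-char binary string.
    static string Bits(float f) =>
        Convert.ToString(BitConverter.ToInt32(BitConverter.GetBytes(f), 0), 2)
               .PadLeft(32, '0');

    static void Main()
    {
        Console.WriteLine(Bits(float.PositiveInfinity)); // 01111111100000000000000000000000
        Console.WriteLine(Bits(float.NegativeInfinity)); // 11111111100000000000000000000000
        Console.WriteLine(Bits(float.NaN));              // exponent all ones, nonzero fraction
    }
}
```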

**User:** Do all languages and applications use this convention of division-by-zero-becomes-infinity?

**Eric:** No. For example, C# and JScript do but VBScript does not. VBScript gives an error if you do that.

**User:** Then how do language implementors get the desired behaviour for each language if these semantics are implemented by the hardware?

**Eric:** There are two basic techniques. First, many chips which implement this standard allow the programmer to make float division by zero an exception rather than an infinity. On the 80x87 chip, for example, you can use bit two of the precision control register to determine whether division by zero returns an infinity or throws a hardware exception.

Second, if you don't want it to be a hardware exception but do want it to be a software exception, then you can check bit two of the status register after each division; it records whether there was a recent divide-by-zero event.

The latter strategy is used by VBScript; after we perform a division operation we check to see whether the status register recorded a divide-by-zero operation; if it did, then the VBScript runtime creates a divide-by-zero error and the usual VBScript error management process takes over, same as any other error.
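C# cannot read the FPU status word directly, but the flavor of that "let it succeed, then check" strategy can be sketched in managed code. Note this is my rough approximation, not VBScript's actual implementation, and checking the result rather than the status word cannot distinguish a true divide-by-zero from an overflow that also produced an infinity:

```csharp
using System;

public static class CheckedDivision
{
    // Emulates the "let the division succeed, then check" strategy in managed
    // code. A script engine reads the FPU status word instead; here we inspect
    // the result, which is infinite or NaN when the divisor was zero. (Rough
    // approximation only: an overflow can also produce an infinity.)
    public static double Divide(double numerator, double denominator)
    {
        double result = numerator / denominator;
        if (double.IsInfinity(result) || double.IsNaN(result))
            throw new DivideByZeroException();
        return result;
    }
}
```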

Similar bits exist for other operations that seem like they might be better treated as exceptions, like numeric overflow.

The existence of the "hardware exception" bits creates problems for the modern language implementor, because we are now often in a world where code written in multiple languages from multiple vendors is running in the same process. Control bits on hardware are the ultimate "global state", and we all know how irksome it is to have global, public state that random code can stomp on.

For example: I might be misremembering some details, but I seem to recall that Delphi-authored controls set the "overflows cause exceptions" bit. That is, the Delphi implementors did not use the VBScript strategy of "try it, allow it to succeed, and check to see whether the overflow bit was set in the status register". Rather, they used the "make the hardware throw an exception and then catch the exception" strategy. This is deeply unfortunate. When a VBScript script calls a Delphi-authored control, the control flips the bit to force exceptions but it never "unflips" it. If, later on in the script, the VBScript program does an overflow, then we get an unhandled hardware exception because the bit is still set, even though the Delphi control might be long gone! I fixed that by saving away the state of the control register before calling into a component and restoring it when control returns. That's not ideal, but there's not much else we can do.

**User:** Very enlightening! I will be sure to pass this information along to my coworkers. I would be delighted to see a blog post on this.

**Eric:** And here you go!

That's one more reason why I strongly prefer Decimal over Double "by default" (i.e. when there are no other clear reasons to prefer one over another), and recommend the same to those new to C# - because Decimal has no INF or NAN values, and all arithmetic (including division by zero) is always checked.

(The other reason is that there's no such Decimal value x for which (x+1)==x, while there are plenty such Double values. Regardless of the rationale for such values, people often forget about this little peculiarity of float/double, and it can be extremely confusing for them when they actually run into it.)
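Both properties are easy to demonstrate; a small sketch (my own example):

```csharp
using System;

class DecimalVsDouble
{
    static void Main()
    {
        // decimal has no infinities or NaNs: division by zero always throws.
        decimal one = 1.0m, zero = 0.0m;
        try { Console.WriteLine(one / zero); }
        catch (DivideByZeroException) { Console.WriteLine("decimal threw"); }

        // For double there are plenty of values where (x + 1) == x, because
        // adjacent doubles near 1e16 are 2 apart, so adding 1 rounds away:
        double x = 1e16;
        Console.WriteLine(x + 1 == x);   // True

        // No decimal value has that property within decimal's range:
        decimal m = 1e16m;
        Console.WriteLine(m + 1 == m);   // False
    }
}
```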

Say hello to Mr. User from me! As always, he's got some interesting questions...

"Positive infinity seems like a reasonable choice for dividing a positive number by zero. It seems plausible because of course the limit of 1 / x as x goes to zero is "positive infinity", so why shouldn't 1/0 be the number "positive infinity"?"

-----

While your further statements about limits are all reasonable, this paragraph is simply untrue: the limit of 1/x as x decreases to 0 (a so-called right-hand limit) diverges toward 'positive infinity'; however the limit of 1/x as x increases to 0 (left-hand limit) is *negative* infinity.

Good point. I'll clarify the text. -- Eric

Based on this I'd reject 1/0 := positive infinity simply for the inconsistency, all arguments about what it means to have a limit, divergent or not, aside.

My personal opinion is that it would have been better to have a specific NaN reserved to mean "undefined"; this NaN would logically be none of positive, negative, zero or infinite. But then again, I understand that the infinity value almost always crops up in scientific calculations exactly at the point where something really is diverging to positive infinity, so I see why this was a reasonable, if not entirely formally justifiable, choice. -- Eric

Actually INFs are not NaNs if you go by the _isnan function in C++ (MS VC8, that is).

@Deskin Miller: a good way to illustrate this is the equation x*y = 1, which is the formula for the standard hyperbola.

http://demo.activemath.org/ActiveMath2/search/show.cmd?id=mbase://LeAM_calculus/functions/ex1_rat_function

I hated that overflow behaviour back in my Delphi days. It always seemed to take over at the most senseless times, like in the middle of a checksum calculation, even when I had that option turned off in the project settings (obviously because some package in the dependency chain was turning it back on).

I'm pretty sure I can even remember an instance where I'd explicitly put a block of code after a {$Q-} and it STILL threw a runtime exception because some opaque internal component had turned it back on. Ridiculous.

On the subject of division by zero, though, I've been bitten a few times by the infinity result. The most recent happened when populating a chart (using doubles as point values), and instead of "gracefully" throwing an exception, it happily added the infinite values, causing the chart component to hang and the entire program to become non-responsive. Took me nearly a day to track that one down. I know it's all in the IEEE spec, but I do kind of wish that C# had some compiler option or special operator to throw an exception instead.

I believe signalling on division by zero is the best approach. Perhaps I am somewhat biased as one of the current maintainers of the Delphi compiler, but if you don't deal with the division by zero when and where it happens, it can bleed through the rest of your calculations, and depending on what calculations you did, it may not be clear any longer where exactly you divided by zero.

Delphi programmers must similarly go to great lengths to reset the FPU control word, in particular to set Extended (80-bit) precision back, after various meddling MSVC RTLs modify it after a LoadLibrary call, an ActiveX component, etc.

Usually we fix it by saving away the state of the control register before calling into a component and restoring it when control returns. That's not ideal, but there's not much else we can do.

LOL. Dude, I feel your pain. -- Eric

It doesn't even have to be different languages to create problems. If you use Direct3D you need to specify a specific flag to the initialize function in order not to mess with the FPU state (although it's not the exception flags but another FPU flag).

I have the same thoughts as the mathematician. Especially in engineering-type applications, "division by zero" usually means something that can be digested further, rather than an exception.

Although we can catch the exception and do something as follow-up (better than nothing), it does not tell us what the actual result is (+Inf, -Inf or NaN?). Fortunately C# applies the IEEE floating point standard; otherwise such things could be difficult to implement.

And this is the engineering thinking: http://www-h.eng.cam.ac.uk/help/tpl/programs/Matlab/NaNInF.html

@Pavel Minaev,

I agree that using Decimal would be a better choice, or at least more familiar ground, for most people who started in the days of C/C++, Pascal (not yet Delphi), and assembler, because for them the "zero-divide error", being a hardware, or at least a very low-level, "gut response" of the computer, is almost as traditional as the notion of the Earth revolving around the Sun.

But for the same very people, the types (or, rather, the *words*) "float" and "double" are the first to come to mind when thinking of floating-point arithmetic (isn't the Decimal type new to .NET/C#? I'm not too sure). So my question is: why not use Decimal for all that "new age", 16-million-NaN-values stuff, and let the "good old" float and double behave "as before"?

But then again: the answer is, as Eric points out, "because the specifications, and the standards, say so," I guess...

Speaking from a mathematical perspective, it actually is possible and sometimes useful to extend the real field R to include the limits of sequences like 1,2,3... and -1,-2,-3... This can be done in two different ways — by introducing two "infinities" (+∞ and -∞), giving R the topology of a closed interval, or a single infinity ∞ to which both sequences converge, giving R the topology of a circle (this construction is analogous to the Riemann sphere). In both these constructions, operations like 1/∞ can be defined to have a definite result, although this breaks the algebraic structure of R.

http://en.wikipedia.org/wiki/Extended_real_number_line

Yes, I know. That's why I deliberately called out that I was not considering mathematical systems in which infinities are numbers, like Cantor's system of transfinite ordinals, or geometries that have "a point at infinity", and so on. -- Eric

"float" and "double" are historically associated specifically with IEEE binary floating-point standard, which is also the one including INF and NaNs. In other words, people who started with C/C++ (and Java) would actually be more likely to expect the behavior of division by zero as described, and not get an exception. So it really isn't quite "new age".

On the other hand, System.Decimal is new (not the idea, but this particular implementation - so far as I know, it doesn't match VT_DECIMAL, nor any other pre-existing decimal spec), so its semantics aren't guided by any standard.

This is news to me. In what way does it not exactly match the VT_DECIMAL spec? It had better match that spec exactly, because the compiler does compile-time decimal arithmetic on decimal constants by making VT_DECIMALs and calling the OLE Automation decimal math routines. If you know something that I don't about this, please let me know. -- Eric

So far as I can tell, it is specifically designed following the "principle of least astonishment" - consider:

- No magic NaN or INF values.

- All operations are checked for overflow and throw if any such happens. Division by zero also throws. Underflow, however, is permitted, and the result is 0.

- "Implicit zeros" are not permitted - that is, you cannot specify the position of the floating point such that it goes "beyond the edge" of the sequence of decimal digits defined by the significand, even though the exponent field size allows for such values. The consequence of this is "guaranteed sane" behavior for decimal arithmetic, such that it's never the case that (a+1)==a, which, as I mentioned earlier, can happen for float/double.

- All results are rounded (to allowed number of decimal places) using banker's rounding, minimizing accumulation of rounding errors.

- Explicit zeros after decimal point are permitted, and are preserved in arithmetic operations - so 1+2=3, but 1.0+2.0=3.0. This can be used to encode precision information, and preserve it when passing numbers around and operating on them.
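A few of those bullet points, demonstrated (my own example, not from any spec):

```csharp
using System;

class DecimalSemantics
{
    static void Main()
    {
        // Overflow throws rather than quietly producing an infinity:
        decimal big = decimal.MaxValue;
        try { decimal r = big * 2m; }
        catch (OverflowException) { Console.WriteLine("overflow threw"); }

        // Banker's rounding: midpoints round to the nearest even number.
        Console.WriteLine(Math.Round(2.5m));   // 2
        Console.WriteLine(Math.Round(3.5m));   // 4

        // Trailing zeros are preserved, carrying precision information:
        Console.WriteLine(1m + 2m);            // 3
        Console.WriteLine(1.0m + 2.0m);        // 3.0
    }
}
```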

Decimal is one little bit of .NET and C# that I really appreciate being there since 1.0, and presented in an accessible way (compare to Java's BigDecimal...), and that I think is very much unappreciated and undervalued by many, and definitely not getting the praise that it deserves.

My only pity is that floating-point literals are double by default when no suffix is specified, but that is a really minor nit, and understandable for historical reasons (C++ guys probably wouldn't appreciate it if literals quietly changed meaning, or if they had to suffix their doubles with "d" everywhere).

@Pavel,

Ok, I'm convinced now; besides, I personally use Decimal quite often (compared to others whose code I have read so far). One little question that remains: does System.Decimal's implementation utilize the 80x87 floating-point co-processor (or whatever part of modern multi-core CPUs plays that role)? If it does not, then there you have the reason why System.Decimal is so badly undervalued and, as you say, does not get the praise it deserves... apart from the legacy code that has been "translated" from C++ or Java and has "float" and "double" all over the place, and no Decimal at all.

Decimal arithmetic is actually done in integers behind the scenes. -- Eric

@Pavel:

"My only pity is that floating-point literals are double by default when no suffix is specified, but that is a really minor nit, and understandable for historical reasons."

There is absolutely no reason to use float in new software (of course compatibility with existing software is always a good reason). double and float are the same in every respect except precision. And you should always want better precision. So it makes perfect sense that by default 3.14 is a double rather than a float.

There are some very narrow corner cases where the smaller memory footprint of a float, or its very slightly better performance makes it a good choice, but you should leave that choice to experts (which most programmers are not, judging by the huge amount of questions/posts on the web regarding floating point arithmetic; sadly I stumble upon code like "if (denominator == 0.0)" far too often).

Consider that even WPF, which surely should strive for small memory footprint and good performance, uses double all over the place, rather than float.

Just realized you may have wanted decimal literals rather than double by default! While I understand your love for the decimal type, it is a lot slower than double. And by a lot, I mean as much as 100x slower for addition (and "only" 10x for division).

So since most software works well with double, it makes sense to keep it the default (again consider WPF as an example and imagine how it would perform if it used decimal all over the place).

To give you another perspective on the double vs float argument.

We develop an application where switching from float to double, and thus doubling the memory footprint, would have a large impact.

First of all, our floats are used for 3D coordinates which are transformed into device coordinates. A float has more than enough precision for this. One problem you could face is a model that has two sub-components with different scales (or orders of magnitude): one on a microscopic scale, one on a cargo-ship scale. When fitting the whole model into the display you would of course not see the microscopic part, but when zooming in onto that part, adjusting the scale of the model-view matrix by incrementally multiplying, it could very well be that the model-view matrix components accumulate a large error relative to the microscopic part. This could result in strange display behaviour.

But fortunately for us, none of our models is like that 🙂

The memory we allocate for the model needs to be contiguous at one point or another - agreed, this doesn't scale well, but well enough for our application - so doubling its size decreases the odds of finding such a contiguous section.

So, we use float for this particular application, and are happy with it.

For other parts, which involve signal representation and processing, we do use doubles for calculations, but once the values find their way to a serialized format, we convert to float.

@Pavel -

System.Decimal is a wrapper over VT_DECIMAL, at least in the 32-bit "ROTOR" version of the CLR. I can easily imagine that it's been re-implemented in the 64-bit CLR. AFAIK, System.Decimal and VT_DECIMAL are always 100% identical.

@Eric: I'm definitely wrong here, but I'm not sure where I picked up the idea that Automation DECIMAL is somehow different from System.Decimal. Now that I look at the descriptions of both in MSDN, it's clear that they are the exact same thing. I hadn't actually looked at Automation DECIMAL before, though, so apparently I picked up that bit of misinformation from some of the older (and worse) C# books that got me started.

I'm not terribly surprised that I (and, apparently, someone else) got it wrong, as I recall Automation DECIMAL being a fairly obscure thing - most people knew that it was there, but I don't recall ever seeing a detailed description of what it's for and how it actually works outside MSDN reference articles. Probably because everyone just used VT_CURRENCY for money, and especially because VB6 had a Currency type but no Decimal type - even though it could handle VT_DECIMAL variants.

@bypasser, @Kristof

I'm not arguing that decimal should always be used in favor of float/double. It also exhibits the same rounding problems that are inherent to any limited-precision floating types, it's larger, and it's significantly slower. I was only saying that its behavior is "more common sense" to the vast majority of people out there, and therefore it's a better candidate for a "default" real type, if there can even be such a thing. I'd rather use a slow application that works as specified (because its author understood how it _actually_ works) than a fast application that has subtle rounding-related bugs because its author has never read "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (or a suitable replacement, like Jon's C#-specific article on the same thing).

It's somewhat ironic that you have to repeatedly explain why (float)1.1 + (float)1.2 != (float)2.3 - see SO for plenty of examples - but the same people readily understand why 1.0m/3.0m = 0.33333...m, and why multiplying it back by 3m won't give you 1.0m. Probably because all of us tortured a calculator with such things at some point, and because, somehow, binary integers are much easier to comprehend than binary fractions...
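The calculator analogy is easy to reproduce; a short sketch:

```csharp
using System;

class Roundtrip
{
    static void Main()
    {
        decimal third = 1.0m / 3.0m;
        Console.WriteLine(third);                 // 0.3333333333333333333333333333
        Console.WriteLine(third * 3.0m);          // 0.9999999999999999999999999999
        Console.WriteLine(third * 3.0m == 1.0m);  // False, just like on a calculator

        // The double version of the same phenomenon, hidden in base 2:
        Console.WriteLine(0.1 + 0.2 == 0.3);      // False
    }
}
```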

I definitely wouldn't use Decimal for vertex coordinates in a graphical application, or for measurements in an engineering application - perf is much more likely to be an issue there, and double is good enough (in fact, quite often, even float is plenty).

All in all, it would probably be best if the choice always had to be explicit, with no default at all. If you specifically want float or double, say so ("d" and "f"). If you specifically want decimal, say so ("m"). If you do not know which one you want, then you should probably stop and think for a moment, because the choice may have some far-reaching implications.

As a curiosity of note, here's a quite real bug in the .NET Framework that I've run into just now, that got there because someone, somewhere, forgot about INF and NAN values. This one is interesting because it is fairly unusual - it has nothing to do with IEEE floating-point arithmetic, or, indeed, with numbers at all...

Compile the following code as a DLL:

```csharp
public class Foo
{
    [System.ComponentModel.DefaultValue(double.NaN)]
    public double Bar;
}
```

Next, try running sgen.exe (the XmlSerializer precompiler) on it, with the /k option to keep the generated code (or, alternatively, just try to create an XmlSerializer instance for typeof(Foo)):

```
sgen.exe /k foo.dll
```

You'll get the following cryptic error message:

```
Microsoft (R) Xml Serialization support utility
[Microsoft (R) .NET Framework, Version 2.0.50727.3038]
Copyright (C) Microsoft Corporation. All rights reserved.
Error: Unable to generate a temporary class (result=1).
error CS0103: The name 'NaN' does not exist in the current context
```

If you look at the generated code, sure enough, you see this:

```csharp
if (((global::System.Double)o.@Bar) != NaN)
```

Which, of course, references an in-scope variable, field or property "NaN", which is undefined. It should clearly be double.NaN here, but apparently someone just did Double.ToString(), forgetting about the corner cases. The same problem exists if you put +INF or -INF there.

I was actually quite surprised to see that there, because I always thought that XmlSerializer uses CodeDOM to generate its output, and CSharpCodeGenerator handles NaNs and other special values properly (try it!). Apparently, I was wrong.
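The root cause is easy to reproduce: Double.ToString() on NaN yields the bare string "NaN", which is not a valid C# expression by itself. A hypothetical literal-emitting helper (my own sketch, not the actual serializer code) would need to special-case these values:

```csharp
using System;
using System.Globalization;

public static class NanLiteral
{
    // Hypothetical helper (not the actual serializer code): emit a string
    // that is a valid C# expression for any double value.
    public static string ToCSharpLiteral(double value)
    {
        if (double.IsNaN(value)) return "double.NaN";
        if (double.IsPositiveInfinity(value)) return "double.PositiveInfinity";
        if (double.IsNegativeInfinity(value)) return "double.NegativeInfinity";
        return value.ToString("R", CultureInfo.InvariantCulture);
    }

    public static void Main()
    {
        // The naive approach produces "NaN", which compiles only if some
        // variable named NaN happens to be in scope:
        Console.WriteLine(double.NaN.ToString(CultureInfo.InvariantCulture)); // NaN
        Console.WriteLine(ToCSharpLiteral(double.NaN));                       // double.NaN
    }
}
```

(As an aside, the generated comparison `x != NaN` would be wrong even if it compiled, since every comparison with NaN other than != is false; the right check is double.IsNaN.)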

In my opinion the only thing that's really wrong with the IEEE floating-point specification is the names. +INF, -INF, +0, -0 have little to do with infinities and zeros in practice. The very concept of positive or negative zero makes no sense mathematically. But the values themselves make perfect sense within the context of floating point arithmetic. The +0 and -0 values do not mean zero at all, but rather they mean "a value too small to be represented." Similarly, +INF and -INF do not mean infinity. They simply mean "a value too large to be represented." So division by zero is never an issue with floating point, because floating point doesn't have a zero. It just has two small values called (unfortunately) +0 and -0.

Understood in this way, the way mathematical operations are defined to work on these values makes perfect sense.

Anyway, that's just the impression I'm under. I'm not an expert on this subject.

Where do you find the VT_DECIMAL spec? All I could find was this: http://msdn.microsoft.com/en-us/library/ms221061.aspx which really doesn't say anything about rounding or how exceptions are handled.

And I would like to add that I prefer my divisions by zero to not raise exceptions.

There's a bunch of API functions for DECIMAL arithmetic, but their documentation seems to be very laconic:

http://msdn.microsoft.com/en-us/library/ms221612.aspx

Jeffrey L. Whitledge -- Nice post, I didn't realize that there is no plain zero in floating-point, only positive or negative. It does make sense when you think of it that way.

Pavel -- Talking about defaults, maybe we should also have to explicitly specify the sign of zero. 0.0 would be illegal, it would have to be +0.0D or -0.0D

Isn't it sad to see how often Decimals are labelled "exact" while Doubles are labelled "approximate"? Of course, as some of you already pointed out, both are approximate and Doubles are more precise (per unit of storage) and more efficient than Decimals. The only "advantage" of Decimals is that they are highly biased (and as a result compromised) towards financial calculations. I think it is better to educate people than to fudge numbers towards the expectations of the not sufficiently educated.

With regards to "zero" and "infinity" I agree there is probably a naming issue here that has a negative contribution to the discussion: perhaps we should be talking about plus and minus underflow and overflow, rather than zero and infinity.

Alex - Decimals are EXACT in that what you see is what you get. As has already been discussed, if you set a Decimal to 1.1 it is EXACTLY 1.1. With a Double this is NOT the case, 1.1 is only approximately 1.1 and this can lead to some unintuitive comparisons such as 1.1 + 1.2 not equal to 2.3.

I agree with you that Decimals are highly biased to financial calculations, but that is the whole point. A large percentage of code is geared to finance. I also like the terms you suggest: plus and minus underflow and overflow.

> I agree with you that Decimals are highly biased to financial calculations, but that is the whole point.

I would actually disagree. I think that Decimal isn't specifically biased towards financial calculations. Rather, it is biased towards any calculation wherein input is supplied by the user in the form of a decimal number, output is also expected to be provided to the user in the form of a decimal number (hence the name "decimal", rather than VB's "currency" - the latter, being fixed point, was quite specifically biased towards financial), and precision matters. It just so happens that financial calculations are a very typical scenario where this is the case, but it is by no means the only one.

I hope I don't have to explain why 1.1 + 1.2 != 2.3 🙂 Still though, Pavel is right: Decimal calculations just make more sense, and not just for financial calculations, but for all calculations. The fact that I know why float and double calculations produce unintuitive results does not make them any less unintuitive. And I am still likely to miss the nuances of those results in my applications.

The fact is that the IEEE floating point standard was created specifically for situations where a small amount of correctness is worth sacrificing for a significant gain in performance. However, most applications written today do not fall into that category. Most applications are not CPU-bound on mathematical operations, and most cannot tolerate discrepancies such as 1.1 + 1.2 != 2.3. If your application is so bound and can tolerate such discrepancies, by all means use double. Otherwise, do yourself (and those who support your application in production) a favor and use Decimal instead.

@DRBlaise - No, both Decimal and Double are inexact, but both can represent certain numbers exactly. It just happens that the numbers represented exactly by Decimal have a base-10 representation that terminates within 28 or 29 digits, while those represented exactly by Double have a base-2 representation that terminates within 53 (binary) digits.

This makes Decimal a natural choice for financial work, since currency values from the real world are always chosen to have values that are represented exactly in base-10.

@Pavel - aren't Decimal and Currency different types? They're different things in OLE automation - I'd imagine that the VB Currency type maps to the OLE automation currency type, not to decimal.

> @Pavel - aren't Decimal and Currency different types?

They are. That's precisely my point: VB6 Currency was really mostly useful only for money - IIRC, it was a fixed-point decimal float with 4 decimal digits after the point - whereas Decimal is much more generic than that (though it also covers all scenarios Currency did), and its name reflects that.

> a fixed-point decimal float

Oops. That's what happens when you start using terms without remembering what they're actually supposed to mean. Scratch the "float" there, please, and pretend that you never heard that bit from me, ever 😉

It's even possible to handle infinite ranges in C#: http://alicebobandmallory.com/articles/2009/10/20/infinite-ranges-in-c

I'm surprised to see so much animosity towards NANs and INFs. The nice thing about the IEEE floating-point standard is that if you don't like them then you are supposed to be able to adjust the runtime environment to cause things like x/0.0 and sqrt(-1.0) to fire exceptions -- in C/C++ you use _controlfp.

But they are definitely useful. You can use NANs to mark a variable as uninitialized. And x/0.0 giving infinity is critical in some calculations to make them come out right without excessive special casing. If you are calculating the resistance of parallel circuits then it's something like 1/(1/R1+1/R2) and if either R1 or R2 is zero then the correct answer is zero. With x/0.0 giving infinity, and x/infinity giving zero this all works magically. That, in a nutshell, is why IEEE math was carefully designed that way.
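The parallel-resistance formula really does fall out of the IEEE rules with no special casing; a minimal sketch (the class and method names are mine):

```csharp
using System;

public static class ParallelResistance
{
    // 1/(1/R1 + 1/R2): with IEEE rules, a zero resistance needs no special
    // case, because 1/0.0 is +Infinity, +Infinity plus a finite value is
    // still +Infinity, and 1/+Infinity is 0.
    public static double Parallel(double r1, double r2) =>
        1.0 / (1.0 / r1 + 1.0 / r2);

    public static void Main()
    {
        Console.WriteLine(Parallel(10.0, 10.0)); // 5
        Console.WriteLine(Parallel(0.0, 10.0));  // 0
    }
}
```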