There are functions like `_isnan`, `_isnanf`, `_finite`, and `_fpclass` for detecting that a floating point value is one of the special values like NaN, but how do you actually generate one of these values?

You can access these values from the `std::numeric_limits` template.

std::numeric_limits<float>::infinity();  // positive infinity
std::numeric_limits<float>::quiet_NaN(); // non-signalling NaN

Wait, where's negative infinity? The compiler folks provided these handy little definitions for when you need to generate a special value (as opposed to merely detecting one), and for which the `numeric_limits` template comes up short.

DECLSPEC_SELECTANY extern const float FLOAT_POSITIVE_INFINITY = ((float)(1e308 * 10));
DECLSPEC_SELECTANY extern const float FLOAT_NEGATIVE_INFINITY = ((float)(-1e308 * 10));
DECLSPEC_SELECTANY extern const float FLOAT_NaN = ((float)((1e308 * 10)*0.));

Disclaimer: Applies to Microsoft Visual Studio. Your mileage may vary. Use the template when available.

**Bonus chatter**: Note that you must use functions like `_isnan` to detect special values, because floating point special values behave very strangely in comparisons. (For example, NaN does not compare equal to itself!)

I thought the various NaNs were defined by setting one or more of the high-order bits of the value. (I’ve long since forgotten the exponent/mantissa representation of floats; I just "remember" enough to worry about accumulated roundoff errors when doing certain scientific functions.)

More precisely, there are a lot of different NaN values. If there are N mantissa bits, then there are 2 ^ N – 1 different NaN values, and half of those are "signaling" and half of those are "quiet." For double, N = 53, there are 9007199254740991 different NaN values.

acq: yes, but even if you *do* have the same NaN it won’t compare equal to itself, which is a fact not fully captured in your statement.

Not present in the current version of MSVC, but for C rather than C++ it’s worth mentioning that C99 and [not a final standard, but almost certainly, since it tends to defer to C on these things] C++0x add two macros to math.h, INFINITY and NAN, which are a positive infinity and a quiet NaN respectively [of type float] (I guess you’re expected to write ‘-INFINITY’ for negative infinity), in addition to standardizing isnan, fpclassify, and isfinite.

It was not the subject of my statement. I wanted to point out that there is no "one NaN", not even "two NaNs", but a very large number of them.

And I think I erred with the number, not counting positive and negative variants. I think there are actually 18e15 different NaNs representable inside of one IEEE double.

Programmers should understand that if they use "quiet_NaN()" value they are using only one out of 18e15 NaNs.

Still, not all NaNs are created equal, only a few values are generated in FPU instructions.

I don’t think that is correct. AFAIK the infinities aren’t equal to themselves either.

“Wait, where’s negative infinity?”

It’s “-std::numeric_limits<float>::infinity()” surely?

[I wouldn’t put it past IEEE floating point rules to make −infinity() different from negative_infinity(). -Raymond]

@Nathan

That is of no relevance here. There are only two infinities (positive and negative) as upper and lower bound of any real number sequence. Cardinal numbers ain’t got nothing to do with it.

[Does IEEE floating point have projective infinity? I can’t remember and I can’t be bothered to check. -Raymond]

@acq: If I recall correctly, one intent behind having many NaNs, combined with the fact that NaNs propagate (e.g. 1+NaNx returns NaNx), was to allow a system to initialize every floating point value in a program to a unique NaN. Because of NaN propagation, if the result of a calculation is a NaN, you can determine which variable (or at least one of them) was undefined in the calculation based on the mantissa bits of the result.

@Raymond: No, IEEE floating point has only affine infinity. The original 8087 supported both affine and projective infinity, IIRC.

The 8087 has an Infinity Control bit. I recall 0 is projective, 1 is affine. When they created the IEEE standard they decided that one infinity mode was enough, and they chose affine because it is less surprising.

A previous generation of hackers used lunch-structured expressions like "split-p soup?".

I’m more in favour of inquiries such as "isNaan?".

∞ = ∞

-∞ = -∞

-(∞) = -∞

-(-∞) = ∞

∞ ≠ -∞

I just tried this in C#:

var ∞ = double.PositiveInfinity;

and got:

Unexpected character: ∞

Shame. What’s the use of having a unicode-capable language if you can’t do that? :)

You actually don’t need to use _isnan() — you can write your own, if for some reason _isnan() isn’t available:

bool isnan(float x) { return x != x; }

And if you don’t mind using non-portable hacks, you can do stuff like this:

const uint32_t FLOAT_POSITIVE_INFINITY_BITS = 0x7f800000;

const float FLOAT_POSITIVE_INFINITY = *(const float *)&FLOAT_POSITIVE_INFINITY_BITS;

And so forth for the other constants.

Or you could use the standard function isnan() from math.h. However, this was only standardised 11 *years* ago now, so it’s still possible that some old or unmaintained compilers don’t have it yet.

Oh, and all symbols named "is*" are reserved for future extensions to the C library, so it’s probably not a good idea to define your own "isnan()".

http://stackoverflow.com/questions/228783/what-are-the-rules-about-using-an-underscore-in-a-c-identifier

Note that some compilers have bugs regarding expressions like `x != x` for floating-point values, so implementing your own isNaN just in terms of self-inequality can and does backfire with real-world, mainstream compilers. Sad, but true.

@Raymond

Is it useful to have a separate projective infinity? A “projective” infinity is really just coming from the one-point compactification of the reals, so if you really wanted you could always stereographically project your values onto the circle and check that way! But it seems to me that this would almost always come about in doing arithmetic calculations where such a distinction wouldn’t be so useful?

@stuart

For some reason I did not see the . in your statement and read it as “double positive infinity” which sounds much more interesting than double.PositiveInfinity.

[I make no value judgement as to whether projective infinity is useful or not. I just vaguely recalled that it was supported somewhere. Carl D filled in the gaps and noted that it existed in the 8087. -Raymond]

FYI, projective infinity also existed on the 287, but nowadays all infinities are affine.

"FYI, projective infinity also existed on the 287, but nowadays all infinities are affine."

Support was in fact eliminated in the 387 to comply with the final standard released in 1985.

BTW, don’t assume that a 286 processor means a 287 coprocessor, or that a 386 processor means a 387 coprocessor. Early 386 computers often had a 287 socket since the 387 was not yet available, and Intel later made a version of 387 that fit into the 287 socket called the 80287XL.

but is it *countably infinite* ?

Unicode or not, there’s some logic to only allow *letters* at the beginning of an identifier (possibly followed by other characters, numbers and underscores being popular). A built-in ∞ numeric literal would be pretty cool, though. Where can I contact the language team?

Also, -0 ≠ 0.

@zero: What exactly do you mean? If you’re using two’s complement then -0 is equal to 0 because they’re both represented by the same binary number. I’m not sure if the C standard specifies the representation for numbers (too lazy to look it up, but probably not), but I don’t know any compiler that doesn’t use it (at least VS08 and gcc do).

Maybe it’s just a misunderstanding though..

@Voo: -0 and 0 are sometimes used to denote how a number was rounded. Such as -0.000001 would round to -0 rather than 0 and 0.0000001 would round to zero. I know that can be important with thermodynamic research for whatever reason.

However, almost all compilers treat the two as equal.

@Voo: And we’re talking about IEEE floating-point numbers, which aren’t twos-complement, they’re sign+exponent+mantissa. So they do have different bit patterns to represent positive and negative zero.

OK, here you go. IEEE 754 single-precision floating point format:

Sign (bit 0), Exponent (bits 1–8), Fractional part (bits 9–31):

S EEEEEEEE FFFFFFFFFFFFFFFFFFFFFFF

Examples:

0 00000000 00000000000000000000000 = 0

1 00000000 00000000000000000000000 = -0

0 11111111 00000000000000000000000 = Infinity

1 11111111 00000000000000000000000 = -Infinity

0 11111111 00000100000000000000000 = NaN

1 11111111 00100010001001010101010 = NaN

One of the nice things about IEEE 754 is that floating point value bit patterns, reinterpreted as integers, are ordered. This can be very useful when you want a ‘near as dammit equal’ function. Decide how many ‘floating point quanta’ apart counts as equal, cast the pointers to the floats as integer pointers, then check the dereferenced integers do not differ by more than your criterion.

The following may also be of help when dealing with floating point problems in debug builds:

http://support.microsoft.com/kb/94998

(Trapping Floating-Point Exceptions)

For a few cases this trick has saved me time.

@vcsjones: However, almost all compilers treat the two as equal.

If that’s true, VS is an exception.

Karellen: Microsoft does not and as far as I know will not support C99. As far as they’re concerned, C++ is more important. Only changes that C++ adopts formally are liable to be included. There are numerous bugs and suggestions on Visual Studio Connect that have all been closed as Won’t Fix.

Incidentally, Raymond, browsers swallow your template instantiation parameter. <PRE> doesn’t stop the browser trying to parse it and picking up <float> as an unknown HTML tag.

[Oops, fixed the angle brackets. -Raymond]

Raymond, what about NaN, +INF, -INF, etc. for the double type? Could anybody please write the appropriate constants for double?

Raymond,

All those constants are already declared in ymath.h. Why do you have to reinvent the wheel?

std::_Dconst _FInf = {{ 0x0000, 0x7F80 }};

std::_Dconst _FNan = {{ 0x0000, 0x7FC0 }};

std::_Dconst _FSnan = {{ 0x0001, 0x7F80 }};

std::_Dconst _Inf = {{ 0x0000, 0x0000, 0x0000, 0x7FF0 }};

std::_Dconst _Nan = {{ 0x0000, 0x0000, 0x0000, 0x7FF8 }};

std::_Dconst _Snan = {{ 0x0001, 0x0000, 0x0000, 0x7FF0 }};

std::_Dconst _LInf = {{ 0x0000, 0x0000, 0x0000, 0x8000, 0x7FFF }};

std::_Dconst _LNan = {{ 0x0000, 0x0000, 0x0000, 0xC000, 0x7FFF }};

std::_Dconst _LSnan = {{ 0x0001, 0x0000, 0x0000, 0x8000, 0x7FFF }};

[(1) At the top it says “/* ymath.h internal header */”. You know how I feel about undocumented behavior. Especially since it isn’t documented whether these are float or double values. (Could be either.) (2) My ymath just declares them extern const but doesn’t define them. Maybe I have an old ymath. -Raymond]

zero: I don’t have any other versions here to test this out further, but VC++ 08 (with /fp:precise) treats +0 (0x00000000) and -0 (0x80000000) as equal when compared as floats (and correspondingly for double precision +0 and -0), which I believe is also what the standard requires.

1a. I agree with you. It is internal and undocumented.

But I would copy and rename those constants into your own header. The _Dconst union and the { 0x0000, 0x7FC0 } notation make the bit pattern visually evident, which is useful when working with floating point numbers. Just compare it with the notation "((float)((1e308 * 10)*0.))".

1b. It is pretty clear and even commented well

_FInf, Fxxx are floats,

_Inf, xxx are doubles,

_LInf, Lxxx are long doubles.