A double-precision floating point number carries fifteen significant digits. What does this really mean?

I multiplied 0.619207 by 10,000,000 and got 6192069.999999991 instead of 6192070. That's only six significant digits; where's my fifteen?

Talking about significant digits is really just a shorthand for talking about relative precision. "Fifteen significant digits" means that the representational resolution is one part in 10^{15}. It doesn't mean that the first fifteen digits are correct. (If you spend more time with numerical analysis, you can even see people talking about things like "five and a half significant digits". If the meaning of "significant digits" were literal, how could you have half a digit?)

The relative error in the above computation is 9 / 6192070000000000 = 1.5 × 10^{-15}, which is consistent with about fifteen significant digits. (And that's assuming the decimal representations are exact, which they aren't.)
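As a quick sanity check, here is the same computation in Python, which uses the same IEEE-754 doubles; this is an illustration added here, not part of the original question, and the exact printed digits may vary slightly by environment, but the relative error lands in the expected range:

```python
# Reproducing the computation from the question with IEEE-754 doubles.
product = 0.619207 * 10_000_000
expected = 6_192_070.0

# Relative error on the order of 1e-15, i.e. about fifteen
# significant digits of relative precision.
rel_error = abs(product - expected) / expected
assert rel_error < 1e-14
```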

Even if you take a literalist interpretation of significant digits, the values are still good to fifteen digits. Remember that 0.99999... = 1, and therefore the values

6192069.999999991

6192069.999999999…

agree to fifteen significant digits, as promised.

Now, if you're a hyperliteralist and refuse to accept that 0.99999... = 1, then you are forced to accept that the only possible numbers of significant digits are zero and infinity. Consider a computation whose result is the value 1 exactly, and that the computation is performed to *N* significant digits (with *N* > 0). Since you do not accept that 0.9 agrees with 1.0 to even one significant digit, the only values that agree with 1.0 to at least one significant digit must be of the form "one point something". Let's call the result 1 + ε with 0 ≤ ε < 1. Now subtract this result from 2.0, yielding 1 − ε. Again, since you do not accept that 0.9 agrees with 1.0 to even one significant digit, in order for this result to be good to *N* significant digits (*N* > 0), the result must be of the form "one point something". Let's call that result 1 + δ with 0 ≤ δ < 1.

Therefore, we have 1 − ε = 1 + δ and therefore, ε = −δ. Since both δ and ε are greater than or equal to zero, the only way for this equation to be satisfied is to have ε = δ = 0. Consequently, the only number which is equal to 1 to any nonzero number of significant digits (if you subscribe to the hyperliteral definition of significant digits) would be 1 itself. In other words, the only positive number of significant digits is infinity. And I think we all agree that if the only valid numbers of significant digits are zero and infinity, then the whole concept of significant digits would become kind of silly.

That's why significant digits don't use the hyperliteralist definition.

"Remember that 0.99999… = 1, and therefore the values

6192069.999999991

6192069.999999999…

agree to fifteen significant digits, as promised."

No, you are saying that "0.9…" is the same as 0.1, which is not true. It's equivalent to 1.0. You're off by a decimal place.

Well, at least whoever posted that question wasn't blaming Windows. (I hope, anyway.) That result comes directly from the processor's floating-point computation instructions (specifically, FMUL) and the fact that the fractional part of a number is stored as a base-2 value, not base-10 as it's written (IEEE-754).

(In other words, you get a similar result when you represent 1/3 in writing by 0.33333333, multiply it by 3, get 0.99999999, and then claim you should get 1. You don’t get 1 because 1/3 is not exactly 0.33333333, and you don’t get 6192070 because 0.6192070 is not exactly whatever bit pattern the system used for it.)
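For the curious, Python can display the exact value a double actually stores; this is an illustration of the point above added in editing, not part of the original comment:

```python
from decimal import Decimal
from fractions import Fraction

# Decimal(float) prints the exact base-2 value the double stores for 0.619207.
stored = Decimal(0.619207)
assert stored != Decimal("0.619207")   # the double is close, but not exact

# Same idea as the 1/3 analogy: 0.33333333 is not exactly one third.
assert Fraction("0.33333333") != Fraction(1, 3)
```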

BradC: No, he’s saying that 0.9… is the same as 1. Adding 0.9… to 6192069 gives 6192070, which is the result the user expected.

It’s Microsoft’s fault for choosing to base Windows on buggy processors.

;-)

This effect has nothing to do with significant digits and everything to do with the binary representation of floating point numbers. BryanK has it right.

[You would have the same problem with significant digits even if you used decimal. “I compute 1.0000 / 3.0000 * 3.0000 and get 0.9999 back. But I did the computation to five significant digits, and this has none!” -Raymond]

I get 6192069.999999999 in ghci… Maybe inaccuracies in the code to convert to decimal? A double should actually have almost 16 digits (53 bits) over most of its range.

Of course, you have to understand that operations will eat away at that. 1 + 2^-53 – 1 has zero correct digits.
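That claim is easy to verify; a minimal Python check (added for illustration), assuming round-to-nearest-even, the IEEE-754 default:

```python
# 2**-53 is exactly half an ulp at 1.0, so 1 + 2**-53 rounds back to 1.0
# under round-to-nearest-even, and the subtraction leaves nothing.
assert (1.0 + 2.0**-53) - 1.0 == 0.0

# One more bit and the addend survives the rounding:
assert (1.0 + 2.0**-52) - 1.0 == 2.0**-52
```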

I agree that

6192069.999999991

6192069.999999999…

agree to fifteen significant digits (simply because the first fifteen places are identical)

BUT this has nothing to do with 0.99999… = 1. It appears that in the article he’s saying that

(ignore these digits)999999…

is mathematically the same as

(ignore these digits)100000

which is rubbish. I apologize if that’s not what you’re trying to say, Raymond. To expose the error, let’s show one of those hidden digits, shall we?

(ignore)699999…

(ignore)610000

WRONG

(ignore)699999…

(ignore)700000

CORRECT

[I’m saying that 6192070.000000000… and

6192069.999999999… are equal. Since

6192069.999999991 and

6192069.999999999 agree to fifteen significant digits, then you have to admit that

6192069.999999991 and

6192070.000000000 also agree to fifteen significant digits. -Raymond]

"Now, if you’re a hyperliteralist and refuse to accept that 0.99999… = 1, then you are forced to accept that the only possible numbers of significant digits are zero and infinity. Consider a computation whose result is the value 1 exactly, and that the computation is performed to N significant digits (with N > 0)"

1) A real hyperrealist will refuse to accept any computation that does any rounding

2) That "(with N > 0)" is a bit superfluous. It would take a surrealist to perform computations rounding to N ≤ 0 digits.

I have the same chance of winning as someone that actually purchases a lottery ticket… to 8 significant digits.

I believe that there is a basic problem with all of this discussion. The word "significant" is being confused with "precision." In the original question:

I multiplied 0.619207 by 10,000,000 …

there are to my eyes 7 significant digits (6 from the first number and 1 from the second). This is easier to see when converted to scientific notation:

6.19207 x 10^-1 * 1.0 x 10^7

Now these 7 significant digits are multiplied using 15 digits of precision (based on the double type) which gives:

… 6192069.999999991 …

which when rounded to 7 significant digits gives us 6192070.0 (or 6.19207 x 10^6) which is the expected result. We should not confuse the precision of the computation with the significance of the numbers.
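This precision-versus-significance distinction can be sketched in Python with a round_sig helper, introduced here purely for illustration (it is not a standard library function):

```python
import math

def round_sig(x: float, n: int) -> float:
    """Round x to n significant digits (illustrative helper, not a library API)."""
    if x == 0:
        return 0.0
    return round(x, n - 1 - math.floor(math.log10(abs(x))))

# Fifteen digits of precision, rounded to the 7 significant digits
# the inputs actually carried, recovers the expected answer:
assert round_sig(6192069.999999991, 7) == 6192070.0
```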

Oh, the joy of discrete numbers in a continuous world – I certainly have no argument with what you present, but you’ve left out some of the necessary formalisms (that I hazily remember). (I think)

Where you declare that X − ε == X for any sufficiently small ε, don’t you have to constrain the relation to being non-transitive ? If I define ε2 as being 1/2 ε, I could then state that X − ε2 == X + ε2 for some precision that is a function of ε. I can also state that Y − ε2 == Y + ε2 , and then I can state that Y == X + ε2, or (rewritten) X + ε2 == X + 2ε2.

So I can now rewrite all of this as

X − ε2 == X + ε2 == X + 2ε2, but this is wrong because X − ε2 != X + 2ε2.
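The non-transitivity being described can be shown concretely; a small Python sketch (added for illustration) with an arbitrarily chosen tolerance eps:

```python
# "Equal to within eps" is reflexive and symmetric but not transitive.
eps = 1e-9

def approx(x: float, y: float) -> bool:
    return abs(x - y) <= eps

a, b, c = 0.0, 0.9e-9, 1.8e-9
assert approx(a, b) and approx(b, c)   # each neighboring pair "agrees"
assert not approx(a, c)                # but the two ends do not
```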

I remember this from courses a looooonnnnnnggg time ago – I’ve used it in practice in some cases where very slow moving inputs to a system were being incorrectly integrated (the theoretical sum of an infinite number of infinitesimally small quantities is not zero!)

But I can’t remember the formalism that wraps around this – I know it’s entangled with the discrete nature of the problem and the fact that the (effectively continuous) set of results isn’t transitive, but since you’ve wandered down this thorny garden path…… wanna go a little further….. ?????? In short, dealing with single instances of the discrete/continuous space conundrum is pretty simple (I think), but it’s possible to get seriously messed up when dealing with sums and powers – I once had a client insist that .Net was junk, because iteratively adding a penny to a sum 100B times did not produce 1B :-)
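The penny experiment, scaled down to a million iterations so it runs quickly (a sketch added in editing; the client's run used vastly more iterations). The drift is already visible because 0.01 has no exact binary representation:

```python
# Repeatedly adding a value that is not exactly representable in binary
# accumulates rounding error with every addition.
total = 0.0
for _ in range(1_000_000):
    total += 0.01

assert total != 10_000.0              # the rounding has drifted the sum
assert abs(total - 10_000.0) < 1e-3   # though only slightly at this scale
```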

Significant digits just give an "error margin," which I find easier to understand than counting digits. Say you want 6192070 to 15 significant digits. What you are really saying is that the answer should be 6192070 plus or minus 0.000000005. Put another way, the 15th digit could be off by half either way. Some possibilities include:

6192069.999999995 = 6192070 − 0.000000005

6192069.999999997 = 6192070 − 0.000000003

6192070.000000000 = 6192070 + 0.000000000

6192070.000000002 = 6192070 + 0.000000002

6192070.000000005 = 6192070 + 0.000000005

This implies that a number with N significant digits is completely correct when rounded to N digits. There is no guarantee that the first N digits will be correct before the rounding takes place.

Using terminals connected to an IBM mainframe, many moons ago, the Rexx language natively did what I think they called "pure arithmetic". I amused myself by asking for something like 52! (52 factorial), which has about 67 digits, and Rexx displayed them all.

I assumed the answer was correct since it didn’t end with a string of zeros.

(Rexx was used for scripts and such… what we use DOS and vbscript/wscript for these days.)

Rexx would also calculate and display all of the digits for 150! (about 260 digits) and I was impressed.

But, uh… 52-factorial does end in a string of zeros. You have to multiply by 50, by 40, by 30, by 20, and by 10 (and, since those give an even number, you have to add a zero for each of the multiplications by 45, 35, 25, 15, and 5). That’s at least ten zeros, perhaps more. :-P
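Python integers are arbitrary precision, like Rexx's "pure arithmetic", so the zero count is easy to check directly (a verification sketch added in editing); by Legendre's formula the count is ⌊52/5⌋ + ⌊52/25⌋ = 12:

```python
import math

s = str(math.factorial(52))
trailing_zeros = len(s) - len(s.rstrip("0"))

# Factors of 5 are the scarce ones; each pairs with an abundant factor
# of 2 to contribute one trailing zero.
assert trailing_zeros == 52 // 5 + 52 // 25 == 12
assert len(s) == 68   # "about 67 digits" -- it is actually 68
```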

(Lisp also does bignum (they call it "arbitrary-precision") math, BTW. So do bc and dc, IIRC.)

The .999999…. = 1.0 argument comes up in higher math, too… and the same skeptics emerge.

Cantor’s diagonalization proof of the uncountability of [0,1] doesn’t work so well in base two, because you have to worry about things like 0b0.011111… == 0b0.1 (one-half)

The usual avoidance technique is to

1) use a higher order base than binary (like ten, which most people are familiar with :P)

2) stay away from the first digit (0) and the last digit (9)

For example, it suffices to prove the uncountability of the decimal numbers whose expansions match the regex:

0.[38]+

Pedantic corrections:

> 0.[38]+

^0.[38]+$

> The number of zeros in a number

That is, the number of TRAILING zeroes. There’s no easy way to count interior zeros like the first zero in 60660000 AFAIK.

By "didn’t end wit a bunch of zeros", I meant that the answer Rexx gave for 52! didn’t start with 15 or 20 digits of precision and end with the remaining 45 or 50 digits all being zero. And the result of 150! didn’t end with 250 zeros.

Interesting stuff, though.

Obligatory: 2+2=5 for sufficiently large values of 2

http://en.wikipedia.org/wiki/Two_plus_two_make_five

[sarcastic] Wouldn’t it be easier to just convert all measurements & monetary values used by everyone in the world to base 2? [/sarcastic]

Base eight, maybe. Most of us have eight fingers, after all.

Mark Mullin, did you tell your friend that he should be using the Decimal type? It would allow him to add .01 about 10^28 times before losing precision. Anybody doing arithmetic with monetary figures should be using it.

Indeed, if the original questioner performed 0.619207m * 10000000m with .Net, they would have gotten 6192070 as they expected.
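The same experiment can be run with Python's decimal module, standing in here for .NET's Decimal (the 0.619207m syntax above is C#); base-10 arithmetic represents 0.619207 exactly, so the product comes out exact:

```python
from decimal import Decimal

# Constructing from strings keeps the base-10 values exact.
result = Decimal("0.619207") * Decimal("10000000")
assert result == Decimal("6192070")   # exact; no binary rounding involved
```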

Ah *ding* <– light bulb going on

I propose that we build a processor that works natively with fractions. :)

I’m surprised nobody has linked to the classic David Goldberg paper, "What Every Computer Scientist Should Know About Floating-Point Arithmetic".

http://docs.sun.com/source/806-3568/ncg_goldberg.html

It is sadly true that many people these days don’t really know how floating point actually works.

I guess the number->string function is between a rock and a hard place: if it only shows 15 digits, then you could potentially print out two numbers that have the same string representation, but differ in a numeric comparison (since the 16th digit is partially significant).

So you can either confuse people by printing out extra slightly off digits, or you can confuse people by violating a=b IFF str(a)=str(b). But then again, making exact comparisons of floating point values is kind of stupid anyway…
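The rock-and-a-hard-place trade-off is easy to demonstrate; a Python sketch (added for illustration) using math.nextafter, available since Python 3.9, to obtain an adjacent double:

```python
import math

a = 0.1
b = math.nextafter(a, 1.0)   # the very next representable double above a

assert a != b                        # distinct values...
assert f"{a:.15g}" == f"{b:.15g}"    # ...indistinguishable at 15 digits
assert f"{a:.17g}" != f"{b:.17g}"    # 17 digits suffice to round-trip a double
```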

"Now, if you’re a hyperliteralist and refuse to accept that 0.99999… = 1"

If one refuses to accept that 0.(9) = 1, he is not a hyperliteralist. He just doesn’t know maths ;-) 0.(9) = 9/9, and it IS 1, no matter whether you accept it or not.

However, the reasoning for the "hyperliteralists" is just sweet, enjoyed it!

Nice entry. What the numbers represent is much more significant to me than the decimal point placing. For example, if we are counting people, then 0.619207 is not a good operationalisation (does that mean people without limbs? any specific limb? are they alive?), whereas the 10,000,000 makes sense. If ever you see decimal points used in research about humans… trace how that decimal point arrived and what it means. Apologies for going slightly off topic; I have ‘issues’ with decimal point usage in the human sciences….

@::wendy::

Yes, of course, you always have to keep in mind what the numbers represent, and what makes sense. If you just use any mathematical method you know, without thinking about the usefulness of the results, bogus results are guaranteed.

I’ve found a great blog entry about it:

http://blogs.msdn.com/ericlippert/archive/2004/02/05/68174.aspx

Also, you always have to keep in mind that just cutting off the decimal point and everything after it is seldom sensible when converting a floating point value to an int. Nearly every time, proper rounding solves the issues you would otherwise have.

@steveg

Thank you for the interesting links. I always thought the only numeric bases used today were 2, 8, 10, 16, but now I know better :-)

@David Walker

If Rexx impresses you, you should really try out bc, "an arbitrary precision calculator language". For all your precision needs …

@Maurits

"Most of us have eight fingers, after all." – do you live in a Matt Groening cartoon? ;-)

@steveg: that’s why there are 60 minutes in an hour, too. :)

David is at risk of falling into the decimal vs binary trap there: just because the number doesn’t end in a string of zeros in decimal form, does not mean it hasn’t lost a load of bits from the binary. (To take a trivial example, round 1024 off to three bits of precision, and you still have 1024.)

A small company once fell foul of this: their customer database stored credit card numbers as ‘numbers’ – in this case, *floating point* numbers. Slightly over 15 digits of precision, 16 digit numbers – so all their credit card numbers were rounded off, losing the last couple of bits! Worst of all, the error wasn’t immediately visible: by rounding to the nearest multiple of 8 or 16, they weren’t getting exact multiples of 10 (all the credit card numbers ending in 0 would have made the problem obvious much sooner).

It had a happy ending, though: thanks to the first-step checksum on card numbers, apparently their IT guy was able to ‘repair’ all the numbers for them.
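A Python sketch of the failure mode (added for illustration): integers above 2^53 do not survive a round trip through a double, which is exactly where 16-digit card numbers live:

```python
card = 9_999_999_999_999_999          # a hypothetical 16-digit "card number"
assert card > 2**53                   # beyond the exact-integer range of a double
assert int(float(card)) != card       # the round trip silently altered it

fifteen = 999_999_999_999_999         # 15 digits still fit exactly
assert int(float(fifteen)) == fifteen
```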

Sunday, June 18, 2006 1:29 AM by Roger

>> I have the same chance of winning as

>> someone that actually purchases a lottery

>> ticket… to 8 significant digits.

>

> Actually, no.

Roger beat me to it, but I can still add the example I was planning for Ray Trent:

000000012345

and

000000012346

are not equal to 11 significant digits, they’re only equal to 4 significant digits. (I’m not an expert on numerical analysis and mostly only know when it’s time to get help from an expert, but this case seems to be an exception. In exchange, Mr. Trent owes me one correction on device drivers, some year ^^)

Friday, June 16, 2006 1:49 PM by Mark Mullin

> Oh, the joy of discrete numbers in a

> continous world

Quantum physicists disagree ^^

> Where you declare that X − ε == X for any

> sufficiently small ε, don’t you have to

> constrain the relation to being

> non-transitive ?

Yes. Computational equality doesn’t equal mathematical equality. In fact, it’s less equal than you thought it was.

IBM is working on standardizing base-10 floating point numbers:

http://www2.hursley.ibm.com/decimal/

For a lot of non-physics calculations (e.g. money) it’s much more convenient.

It means that someone is handwaving.

Double precision IEEE floating point has 52 significant bits, and plays a trick to give 53 for normalized numbers. 53 * log10(2) is just under 16, but that doesn’t mean any number (in range) with 16 significant decimal digits can be exactly represented. In fact, very few can. Only those which, if expressed as a fraction in lowest terms, have a denominator that’s a power of two can be… and since 10 = 5 * 2, most of the time you’re out of luck.
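float.as_integer_ratio makes the power-of-two-denominator rule visible; a quick Python check (added for illustration):

```python
from fractions import Fraction

# 0.1 = 1/10 has denominator 10 = 2 * 5 in lowest terms, so the double
# cannot store it exactly; what it stores is num / 2**55:
num, den = (0.1).as_integer_ratio()
assert den == 2**55
assert Fraction(num, den) != Fraction(1, 10)

# 0.625 = 5/8 has a power-of-two denominator, so it is stored exactly:
assert Fraction(*(0.625).as_integer_ratio()) == Fraction(5, 8)
```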

James: You’re right, of course, but I knew from research that Rexx does "pure arithmetic" and isn’t limited to 15 significant digits.