One more thing -- I said earlier that the VBScript float-to-string algorithm is a little bit different from the JScript algorithm. We can demonstrate this quite easily by comparing the outputs of two nigh-identical programs:

' VBScript

print 9.2 * 100.0 < 920.0

print 919.9999999999999 < 920.0

print 920.0000000000001 > 920.0

' JScript

print(9.2 * 100.0 < 920.0);

print(919.9999999999999 < 920.0);

print(920.0000000000001 > 920.0);

As you'd expect, the last two comparisons in each program evaluate to true. But why does the first also evaluate to true? Because of the very issues we've been talking about in the last five parts -- 9.2 cannot be exactly represented as a float. There is some representation error. When the float is multiplied by 100, the representation error is also multiplied by 100, and that is enough to make the product slightly smaller than 920.
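You can see the magnified error directly in modern JavaScript (which, like JScript, uses IEEE 754 doubles) by asking for more digits than the default conversion shows:

```javascript
// 9.2 has no exact binary representation as a double, so the stored value
// is slightly below 9.2. Multiplying by 100 scales that representation
// error up by 100 as well, which is enough to land below 920.
const product = 9.2 * 100.0;

console.log(product < 920.0);         // prints true
console.log(product.toPrecision(20)); // reveals digits the default conversion hides
```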

If that's the case then why do these programs produce different output?

print 9.2 * 100.0

print 919.9999999999999

print 920.0000000000001

print(9.2 * 100.0);

print(919.9999999999999);

print(920.0000000000001);

The VBScript program produces 920, 920, 920. The JScript program produces 919.9999999999999, 919.9999999999999, 920.0000000000001. What is up with that?

The JScript algorithm for converting floats to strings is designed to preserve as much precision as possible. Since 919.9999999999999 and 920.0 have different binary representations as floats, they have different string representations.

The VBScript algorithm, on the other hand, assumes that if you have 919.9999999999999 or 920.0000000000001, what has probably happened is that you've run into a floating-point error-accrual issue, and it rounds the value back to the correct value for you when it displays the string.

This heuristic means that VBScript (paradoxically) loses a small amount of precision and yet displays more accurate results in typical cases. The downside is that VBScript is unable to display full precision when you really DO want to represent 919.9999999999999. Such cases are quite rare, though, and the error introduced in such cases is tiny.
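The real VBScript algorithm isn't shown here, but the effect of its heuristic can be sketched in JavaScript by rounding to 15 significant digits before printing -- 15 is my assumption, chosen because a double always survives a round-trip through 15 decimal digits:

```javascript
// A sketch of a VBScript-style display heuristic (not the actual algorithm):
// rounding to 15 significant digits collapses values that differ from the
// "intended" number only in the last ulp or two.
function displayLikeVBScript(x) {
  return Number(x.toPrecision(15)).toString();
}

console.log(displayLikeVBScript(9.2 * 100.0));       // "920"
console.log(displayLikeVBScript(919.9999999999999)); // "920"
console.log(displayLikeVBScript(920.0000000000001)); // "920"
```

With this heuristic all three values display as 920, matching the VBScript output above, while JScript's shortest-round-trip conversion keeps them distinct.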

I dunno -- using the ubiquitous 'Bill's money' example, if Bill has 40 billion and I can maintain some of this precision, say, 39.999999 billion, then I get a thousand dollars, which is a lot of money to me. 🙂

In serious terms I'd agree -- if you've got infinitesimal precision concerns, you probably shouldn't be using VBScript.

With all this concern about precision in floating point numbers, and my own experience with simple financial calculations going awry, I suggest a SafeFloat that keeps track of its own error. I have made such a thing.

Generally speaking, my naive SafeFloat is a float paired with a second float indicating the error term. Operations on SafeFloats also calculate the resulting error term. I am certain that less naive implementations are possible, and they could (should?) be implemented in hardware.

Knowing a float's error makes the float more precise, in a way. The most obvious use of the error term is in displaying the value in your favorite base. In my own financial calculations (base 10), I do not have to fiddle with significant digits because SafeFloat already knows how many significant digits there are. I do not have to worry about doing +/- epsilon comparisons to see if two values are "equal"; instead, SafeFloat knows two values are equal when their error ranges overlap.

SafeFloat can even throw an exception when your error gets bigger than your value. This can certainly help with the (BigNumber + SmallNumber) - BigNumber problem with floats.
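The commenter's SafeFloat isn't shown, but a minimal sketch of the idea in JavaScript might look like the following. The class name, the worst-case error-propagation rules, and the half-ulp accounting for each operation's own rounding are all my assumptions, not the commenter's actual design:

```javascript
// Half an ulp of x: the maximum rounding error a single double operation
// producing x can introduce.
const halfUlp = (x) => Math.abs(x) * Number.EPSILON / 2;

// Hypothetical value-plus-error-term number, propagating worst-case bounds:
//   (a ± ea) + (b ± eb) -> (a + b) ± (ea + eb + rounding)
//   (a ± ea) * (b ± eb) -> (a * b) ± (|a|·eb + |b|·ea + ea·eb + rounding)
class SafeFloat {
  constructor(value, error = 0) {
    this.value = value;
    this.error = error;
    // The commenter's "error bigger than value" check:
    if (this.value !== 0 && this.error > Math.abs(this.value)) {
      throw new RangeError("error term exceeds the value itself");
    }
  }
  add(other) {
    const v = this.value + other.value;
    return new SafeFloat(v, this.error + other.error + halfUlp(v));
  }
  mul(other) {
    const v = this.value * other.value;
    const e = Math.abs(this.value) * other.error +
              Math.abs(other.value) * this.error +
              this.error * other.error + halfUlp(v);
    return new SafeFloat(v, e);
  }
  // Two values are "equal" when their error ranges overlap.
  equals(other) {
    return Math.abs(this.value - other.value) <= this.error + other.error;
  }
}

// 9.2 carries at most half an ulp of representation error; 100 is exact.
const a = new SafeFloat(9.2, halfUlp(9.2));
const b = new SafeFloat(100, 0);
console.log(a.mul(b).equals(new SafeFloat(920, 0))); // prints true
```

This makes the 9.2 * 100 example from the post compare equal to 920, because the accumulated error bound covers the one-ulp gap between the computed product and 920.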

SafeFloat may not be a beautiful choice in languages without operator overloading. And I am not sure that SafeFloat is your best choice for highly performant applications, but it is at least a good substitute when debugging.

Last night I was thinking about this subject and came up with an interesting variation for representing floats. What kind of tradeoff would one get if two bits were used to specify which base was used for the exponent part? You could, for example, use:

00 – base 2

01 – base 3

10 – base 5

11 – base 7

If you took the bits necessary for this off other parts intelligently, would you end up with greater precision over the whole range of numbers or less precision? And what would the ramifications be of trying to compare and manipulate floats which had different number bases? Could the operations be performed internally as though working with fractions instead of the full (decimal, binary, whatever) expansions? You know, least common denominator, multiplying numerators separately from denominators, and so on.

I'm not suggesting this is actually practical, by any means; it certainly would be slower than a single-base system. But consider that many numbers which previously had precision error could be stored exactly. Consider 1/3: there is representation error in the other bases, but not in base 3.
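As a toy illustration of the idea (the encoding and function names below are entirely made up, not a real format): store a mantissa, a 2-bit base tag, and an exponent, and 1/3 becomes exactly representable as mantissa 1 with a base-3 exponent of -1, even though a binary double can only approximate it. Decoding such a value back to an exact rational -- rather than to a double, which would reintroduce rounding -- also points toward the fraction arithmetic the comment mentions:

```javascript
// Toy multi-base float: value = mantissa * base^exponent, where a 2-bit
// tag selects the base (00 -> 2, 01 -> 3, 10 -> 5, 11 -> 7).
const BASES = [2, 3, 5, 7];

// Return the exact rational value as [numerator, denominator], so no
// binary rounding creeps back in during decoding.
function toFraction(tag, mantissa, exponent) {
  const b = BASES[tag];
  return exponent >= 0
    ? [mantissa * Math.pow(b, exponent), 1]
    : [mantissa, Math.pow(b, -exponent)];
}

// A binary double can only approximate 1/3, but the stored triple
// (mantissa = 1, base tag = 1 for base 3, exponent = -1) is exact:
const [num, den] = toFraction(1, 1, -1);
console.log(num === 1 && den === 3); // prints true
```

From here, comparing or combining values stored in different bases could indeed work like fraction arithmetic: find a common denominator, operate on numerators, and so on.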

I just realized that because this scheme I suggested would have multiple encodings for many expansions, it’s unlikely to make up for in precision what it loses in depth.

In any case, thanks for the great series, Eric. It was a very useful and stimulating read.