Show Me the Money - Floating Point Numbers and Accuracy

Floating point numbers have some accuracy problems. The reason is fractions that convert to repeating values. Most students understand that 1/3 translates to .333333 repeating in an infinite sequence. But not many of them know, until it bites them, that in the binary system used by computers 1/10 is a repeating fraction.
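
You can make that visible in a few lines. Here is a minimal VB.NET sketch (the module and variable names are just my illustration) that adds 0.1 ten times and never quite reaches 1:

Module TenthsDemo
    Sub Main()
        ' 0.1 has no exact binary representation, so each addition
        ' carries a tiny error that accumulates.
        Dim sum As Double = 0.0
        For i As Integer = 1 To 10
            sum += 0.1
        Next
        Console.WriteLine(sum = 1.0)          ' False
        Console.WriteLine(sum.ToString("R"))  ' 0.9999999999999999
    End Sub
End Module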

This really shows up in comparisons. Take a look at the following code:

For y As Double = 1 To 2 Step 0.01

That loop never lands on 2 exactly. Because 0.01 is a repeating fraction in binary, the rounding error accumulates with every addition and y steps past 2 without ever equaling it. Any loop that tests for equality with 2 runs forever, and even this For loop may run one more or one fewer iteration than you expect. You have to use a While or Do Until loop with a greater than or equal ( >= ) test to make it reliable. This is the kind of thing that makes using Double or other floating point numbers as loop counters a generally bad idea.
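
Two safer patterns, sketched in VB.NET (the variable names are my own):

Module SafeLoops
    Sub Main()
        ' Pattern 1: exit on a >= condition instead of equality.
        Dim y As Double = 1.0
        Do Until y >= 2.0
            ' ... work with y here ...
            y += 0.01
        Loop

        ' Pattern 2 (better): count in integers and derive the Double,
        ' so no rounding error accumulates in the loop counter.
        For i As Integer = 100 To 199
            Dim y2 As Double = i / 100.0
            ' ... work with y2 here ...
        Next
    End Sub
End Module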

A number of different approaches have been used to handle the round-off problems caused by binary repeating fractions. Years ago Perkin-Elmer (a company better known for other things) built a computer whose hardware floating point operations used a "sticky bit" to help round both up and down, which resulted in much more accurate numbers.

While keeping track of large numbers of decimal places is important in science, in business it is all about the pennies. And pennies can be hard to track as well.

Some programming languages avoided the problem entirely. DIBOL (the "B" stands for "Business") didn't support floating point numbers at all. Programmers used integers and had to keep track of where the decimal point was supposed to be. As I recall it was accurate to quite a few digits. You paid for that accuracy with performance and with requiring smarter programmers.
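
The same scaled-integer idea works in any language. Here is a sketch in VB.NET (my own illustration, not DIBOL) that keeps money as a count of cents with an implied decimal point:

Module CentsDemo
    Sub Main()
        Dim priceInCents As Long = 1999                          ' $19.99
        ' 7% tax, computed and rounded entirely in integer cents.
        Dim taxInCents As Long = (priceInCents * 7 + 50) \ 100
        Dim totalInCents As Long = priceInCents + taxInCents
        ' Put the implied decimal point back only for display.
        Console.WriteLine("Total: ${0}.{1:00}", totalInCents \ 100, totalInCents Mod 100)
    End Sub
End Module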

COBOL had a decimal type for money operations, as was mentioned. Other languages and platforms still do. The .NET Framework (and all of the languages - VB, C#, etc. - that run on it) has a Decimal data type that gets special processing so it doesn't lose those pennies. As a general rule you don't want to use it for non-money operations, for performance reasons.
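
A tiny VB.NET comparison (the names are my own) shows what that special processing buys you:

Module DecimalDemo
    Sub Main()
        Dim d As Double = 0.0
        Dim m As Decimal = 0D
        For i As Integer = 1 To 1000
            d += 0.1      ' binary floating point: error accumulates
            m += 0.1D     ' Decimal: 0.1 is stored exactly
        Next
        Console.WriteLine(d)  ' something like 99.9999999999986 - not 100
        Console.WriteLine(m)  ' 100.0
    End Sub
End Module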

I used to use a change making program assignment (given a dollar amount, break it down into pennies, nickels, dimes, and so on) to help students see the difficulties involved in handling round-off errors. I assume that is fairly common. If you have them do it once with Doubles and then let them do it again with Decimals it can be an eye opener, especially if you have a class discussion about why the results differ.
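
A sketch of the Decimal version in VB.NET (the denominations and names are my own example). Converting to integer cents first keeps the coin math exact; with Double, that conversion itself can come out a penny off for many amounts:

Module ChangeMaker
    Sub Main()
        Dim amount As Decimal = 2.67D   ' $2.67
        ' Convert to integer cents first so the coin math is exact.
        Dim cents As Integer = CInt(amount * 100D)
        Dim denominations() As Integer = {100, 25, 10, 5, 1}
        Dim names() As String = {"dollars", "quarters", "dimes", "nickels", "pennies"}
        For i As Integer = 0 To denominations.Length - 1
            Console.WriteLine("{0}: {1}", names(i), cents \ denominations(i))
            cents = cents Mod denominations(i)
        Next
    End Sub
End Module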

BTW, banks still worry about lost fractions of a penny. A program manager from a large New York bank told me about a programmer who looked for transactions that resulted in fractions of a penny and rerouted those fractions to his own account. Even tiny amounts add up if you add them billions of times. The programmer made off with millions after a few months. I believe they did catch him, though.