Let us try the following T-SQL script on SQL Server:

declare @a float = 300000000000000000000000000000000000

select @a + 50 - @a

select @a - @a + 50

go

Guess what the results are? The first is **0**, and the second is **50**.

Floating point data is approximate; therefore, not all values in the data type range can be represented exactly. When you add a small number to a very big number, the small number may simply be lost in the end result.

As end-users, we want consistent and correct results. However, the float data type cannot give a "correct" result, since the stored value is not exact, and query results over float data are not consistent either. In the above example, when we change the order of the + and - operators, we see a different result. Suppose we want to aggregate float values, such as sum(floatcolumn): SQL Server may choose to scan the whole table in parallel with several threads and then sum the partial results together. In that case, the order of the data values is random, so the end result will be random too. In one of our customers' cases, he ran the same query multiple times and got a totally different result each time.
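Since SQL Server's float is an IEEE 754 double precision value, the same representation Python uses for its float, the effect can be sketched outside T-SQL as well (a minimal illustration, not SQL Server itself):

```python
# SQL Server float(53) and Python float are both IEEE 754 doubles,
# so the magnitude-absorption effect reproduces exactly.
a = 3e35  # 300000000000000000000000000000000000

# The gap between adjacent doubles near 3e35 is about 3.7e19,
# so adding 50 to it cannot change the stored value.
print(a + 50 - a)  # 0.0  -- the 50 is absorbed before the subtraction
print(a - a + 50)  # 50.0 -- the huge values cancel first
```

The only thing that changed between the two lines is evaluation order, which is exactly why a parallel scan can return different sums for the same data.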

So, please try to avoid using the float data type, especially when you want to do aggregation on float columns. A workaround is to convert the values to a numeric type before aggregating. For example, suppose x is a float column:

you can use sum(cast(x as decimal(30,4))) to get a consistent result.
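As a rough sketch of what the cast buys you, Python's Decimal can stand in for SQL Server's decimal type (the quantize call and half-up rounding here are assumptions approximating the cast to a fixed scale of 4, not SQL Server's actual code path):

```python
from decimal import Decimal, ROUND_HALF_UP

def cast_to_decimal_30_4(x: float) -> Decimal:
    # Stand-in for T-SQL cast(x as decimal(30,4)): fix the scale at 4
    # places, rounding half away from zero as SQL Server's cast does.
    return Decimal(repr(x)).quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP)

vals = [0.1] * 10
print(sum(vals))                                   # 0.9999999999999999 -- float drift
print(sum(cast_to_decimal_30_4(v) for v in vals))  # 1.0000 -- same in any order
```

Once every value sits on the same fixed scale, addition is exact and the sum no longer depends on the order in which the rows are visited.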

The explanation of the problem with float, illustrated with an example, was very helpful, especially for beginners.

Thank you!

Heh... "it depends". Compared to the original number, 50 is so incredibly trivial that it may not matter in your calculation. If that's within the tolerance of whatever you're doing, then it doesn't matter.

Also, let's talk about another "evil" datatype... DECIMAL. What's 1 divided by 3 times 3?

DECLARE @D1 DECIMAL(3,0), @D2 DECIMAL(18,0),@F FLOAT

SET @D1 = 3

SET @D2 = 3

SET @F = 3

SELECT 1/@D1*3.0, 1/@D2*3.0, 1/@F*3.0

SELECT 1/@D1*3, 1/@D2*3, 1/@F*3

Go ahead... try it... see what you come up with.

Heh... now, is it that the DECIMAL datatype is bad? Nope... no more than FLOAT is bad. It all depends on what you use it for and how you use it. Just like a calculator does in the background, if you calculate using more decimal places than needed and then round, the code above will produce the correct answer just like FLOAT does.
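For readers without a server handy, the same one-third-times-three behaviour can be sketched with Python's Decimal as a stand-in (Python's default 28-digit context differs from SQL Server's precision rules, so the exact digits are illustrative):

```python
from decimal import Decimal, getcontext

print((1 / 3) * 3)  # 1.0 -- float's binary rounding errors happen to cancel here

third = Decimal(1) / Decimal(3)  # 0.3333... truncated to 28 significant digits
print(third * 3)                 # 0.9999999999999999999999999999, not 1

getcontext().prec = 50           # calculate with more places than needed...
third = Decimal(1) / Decimal(3)
print((third * 3).quantize(Decimal("1")))  # ...then round: 1
```

This is exactly Jeff's point: with enough working digits plus a final rounding step, the fixed-point type gives the right answer too.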

Like I said... it all "Depends". 😉

Talking of "evil" is a bit excessive. Floats have their legitimate uses, though their users should indeed be aware of their imprecision, and floats should indeed be avoided in certain contexts like accounting where exactness matters.

But:

- floats are not that inexact and not that random

- if the data is indeed imprecise, why not use float? For instance, when storing a set of measurements from an analog device?

- sum(cast(x as decimal(30,4))) is in most cases LESS accurate than sum(x)! So here, instead of inconsistent results, you'd get consistent but much less precise results! It is a question of prioritizing: do you prefer more consistent or more precise results? Consistency should not (always) be valued above everything else.

I have never used floats for anything before. In fact, I have rarely even heard of them. I don't remember discussing them in any of the numerous classes I took, and they aren't even mentioned in the index of the two SQL books I have handy here. I don't understand why someone would use them.

I've used the decimal type for accounting-related work in the past, but have not yet used the float type, mainly because of its imprecision. The fact that certain fractional values, even just 2 or 3 places past the decimal point, can't be accurately represented by float makes it undesirable in accounting situations.

Like Bard, I have wondered about what some appropriate uses for the float data type might be. Does anyone have any real-world examples, just from curiosity?

Float is widely used in telemetry applications, where bandwidth is limited and use of a decimal type would be considered a luxury.

The engineers who work with telemetry have a keen understanding of the limitations of float, and don't perform edge-case computations with them like the contrived example in this post.

What do they teach in classes these days??

What a float gives you is _more_ precision over a wider range of values. The cost of that flexibility is that there are cases where the results of operations are non-deterministic.

If you need a column that can store values in the range of .000000000000001 as well as values in the range of 10000000000000000, then your choice is to throw 17 bytes of decimal storage at it, or to use an 8-byte float and recognize that the float can't store the sum of those two numbers.
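A quick sketch of that limitation, using Python floats (IEEE 754 doubles, the same format as an 8-byte SQL float):

```python
small, big = 1e-15, 1e16

# Each value fits comfortably in a double on its own, but their exact sum
# does not: the gap between adjacent doubles near 1e16 is 2.0, so the
# 1e-15 contribution vanishes entirely.
print(big + small == big)  # True
```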

Jeff's example, while a little misleading due to implicit casts, points out that there are also "quirks" with the decimal data type: namely, a decimal can't store values that can't be represented in decimal form, e.g. 1/3. What I think he's getting at, which is a different issue from the one the article raises, is that within the same precision (number of bits), there are some numbers that can be represented in base 2 (i.e. in float format) but not in decimal format, and vice versa.

-a.

Precision is something one should always be aware of, but by your definition all data types are "evil". Tinyint is "evil": just try to store 500 in it and see what I mean. One has to be aware of the limitations; this isn't new. While taking matrix classes over 30 years ago, our instructor had us "solve" a matrix using real and double variables with a method that is generally accurate. We got wildly different results. He then solved the matrix manually using a different process, and we found that neither "solution" was close to the correct one.

This was our introduction to "not" well-formed matrices and the perils of combining division, addition, and subtraction on a computer.

You need to pick the right solution for the right problem. Here is an example where float is definitely the better option than decimal.

declare @tb table(val float)

declare @i int

select @i=12345678+3456780+5000678

insert into @tb values(1.2345678)

insert into @tb values( .3456780)

insert into @tb values( .5000678)

select sum(val), sum(cast (val as decimal(30,4))), @i from @tb

Result:

2.0803136, 2.0804, 20803136

So, float is exactly precise in this (seven-decimal-place) example, and decimal fails miserably (using your constraints). Just think if I were measuring the diameter of molecules in mm! Context! Context! Context!
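The same comparison can be sketched in Python, with Decimal quantization standing in for the cast to decimal(30,4) (an approximation of the cast's half-up rounding, not SQL Server itself):

```python
from decimal import Decimal, ROUND_HALF_UP

vals = [1.2345678, 0.3456780, 0.5000678]
float_sum = sum(vals)
dec_sum = sum(Decimal(repr(v)).quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP)
              for v in vals)

print(round(float_sum, 7))  # 2.0803136 -- float keeps all seven decimal places here
print(dec_sum)              # 2.0804    -- the cast to scale 4 threw precision away
```

With values this small and this close in magnitude, nothing gets absorbed, so float is the accurate one and the fixed scale is what loses information.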

Uioped1 said:

"If you need a column that can store values in the range of .000000000000001 and values in the range of 10000000000000000 then your choice is to throw 17 bytes of decimal storage at it"

Not exactly correct: you need to throw 34 bytes at it to store both of those values accurately. You also get an idea of the range of accuracy float has when you change to that level (change the table definition back to using float):

--declare @tb table(val float)

declare @tb table(val decimal(34,17))

declare @i bigint

select @i=12345678+3456780+5000678+10000000000000000

--Will overflow if I try to add the decimal places (0000000) needed to the last number

insert into @tb values(1.2345678)

insert into @tb values( .3456780)

insert into @tb values( .5000678)

insert into @tb values(10000000000000000)

select sum(val), sum(cast (val as decimal(34,17))), @i from @tb

With float, this produces:

1E+16, 10000000000000002.08031359999999994, 20803136

If you ever did need to aggregate a column of floating point numbers and were worried about loss of precision from intermediate computations, you would need a function that first ranks the numbers in ascending order before summing. That way, by the time the small numbers have been dealt with, the intermediate sum has not lost anything due to rounding.

I don't know how you'd do this in SQL. It would probably need to be implemented as part of the language in order to achieve any kind of speed.
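For what it's worth, outside SQL this exists as a library routine: Python's math.fsum keeps multiple compensated partial sums (Shewchuk's algorithm), so the result is correctly rounded regardless of input order; simple ascending-order summation, by contrast, is not always enough. A sketch using the values from the original post:

```python
import math

vals = [3e35, 50.0, -3e35]

print(sum(vals))                # 0.0  -- naive left-to-right sum absorbs the 50
print(math.fsum(vals))          # 50.0 -- exact regardless of input order
print(math.fsum(sorted(vals)))  # 50.0 -- still exact after reordering
```

Note that for this particular input, sorting alone would not save the naive sum: the 50 is absorbed by whichever huge value it meets first; only the compensated summation recovers it.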

OK, in my last post I said 34 bytes, but I'm not sure how many bytes are actually used. If you ask for 34 digits of precision, the decimal values can be stored in 17 bytes if each byte holds two digits. My recollection was that each decimal digit is stored in 1 byte.

Allen, processing in ascending order before summing doesn't work in some situations. Here are two examples that process in ascending order, and one example of processing in the correct order. The problem with this approach is that rounding errors would still cause problems with large ranges of numbers.

I thought this command would work (select @i=@i+val from @tb order by abs(val) desc,val), but it doesn't. The cursor version does work but isn't too efficient.

declare @tb table(val float primary key not null)

declare @i float

declare @i1 float

declare @C cursor

set @i=0

insert into @tb values(300000000000000000000000000000000000)

insert into @tb values(-300000000000000000000000000000000000)

insert into @tb values(-30)

insert into @tb values(50)

select sum(val) from @tb

select @i=@i+val from @tb

set @i=0

select val from @tb order by abs(val) desc,val

select @i=@i+val from @tb order by abs(val) desc,val

select @i

set @i=0

Set @C=cursor for select val from @tb order by abs(val) desc,val

open @c

fetch next FROM @c into @i1

WHILE @@FETCH_STATUS = 0

BEGIN

set @i=@i+@i1

-- print cast(@i as varchar(32)) + ': ' + cast(@i1 as varchar(32))

fetch next FROM @c into @i1

END

select @i

Willful ignorance of datatypes at best and thoroughly misleading at worst. Evil is in every datatype if you don't know how to use them correctly.

declare @a float = 300000000000000000

select @a + 50 - @a, @a - @a + 50

declare @b float = 30000000000000000

select @b + 50 - @b, @b - @b + 50

Float is useful when you need to perform geometric calculations, but Decimal is often better at arithmetic operations.

declare @a float, @b float

declare @x decimal(38,18), @y decimal(38,18)

Set @a = 0.00000001

Set @b = 100000000

Set @x = 0.00000001

Set @y = 100000000

--float correctly calculates product, but decimal doesn't

select @a*@a*@b*@b [float], @x*@x*@y*@y [decimal]

--Decimal correctly calculates sum, but float doesn't

select @a+@a+@b+@b [float], @x+@x+@y+@y [decimal]

One more vote for "Decimal is more evil than Float" from me.

There is nothing wrong with Float, but there is a lot wrong with peoples' expectations of operations involving floating point (and decimal) numbers. Let's not encourage that ignorance too much or people might begin to even complain about percentages in pie charts not adding up to 100%...

A good rule of thumb I've found is ratios: float is far better than decimal for calculating what percentage something is of something else, because the internal representation is floating point; although it lacks decimal's coverage, it is actually more accurate than a decimal with the equivalent number of places.

My rule of thumb is quantifiable amounts in decimal, ratios in float.

Try to get a sum of percentages to add up to 1 when using decimal!
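A small sketch of that frustration, with Python's Decimal standing in for a fixed-scale decimal column (the 4-place scale is just an example):

```python
from decimal import Decimal, ROUND_HALF_UP

# Split a whole into three equal shares, each stored at a fixed 4-place scale:
share = (Decimal(1) / Decimal(3)).quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP)
print(share)      # 0.3333
print(share * 3)  # 0.9999 -- the fixed scale guarantees the shares never total 1
```

Each rounded share is off by a hair, and with a fixed scale those hairs never cancel, so the total is systematically short of 1.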

You'd be better off calling the MONEY datatype evil; I believe it is probably responsible for way more havoc than FLOAT. Those four decimal places don't really cut it when dealing with percentage markups on small units. No one in their right mind uses MONEY to store dollar values on which calculations will be done. A tad misleading for a data type name, no?

It's always seemed wrong-headed to me that float and real are called approximate numeric data types but decimal and numeric are called exact numeric data types. I have mentored lots of folks, and almost everybody (including myself when I was a mere lad) thinks this means that there is something magical about decimal and numeric types, and something evil about float and real.

The difference isn't in the "exactness" of the data. As many have pointed out, both exact and approximate types have to round off data when a value cannot be represented exactly in the precision available. The difference is really that float and real data allow you to pick only a precision, and try to manage the scale automatically based on the calculation being performed. This is what leads to the discrepancies observed by Qingsong.

Decimal and numeric data, however, allow you to specify both the precision and the scale ... exactly. It is the scale that is exact, not the data values themselves.

If you are following what I'm saying, you won't be surprised to hear me say that I agree with the people who have said that all of the data types have to be understood to be used properly. Some people have given good examples here of times when float seems to be a much better choice than decimal, and there are certainly times (working with localized currency comes to mind) when you'd better be able to specify the scale exactly.

I hope this is helpful to somebody - just trying to do my part to demystify that word "exact."

I am sort of new to SQL Server and work with financial systems. This information was very helpful. Thanks for the post.

Just to add something interesting to the discussion, please compare the behaviour of the two types in the following example:

declare @a float

set @a=5000000

select @a+0.01-@a;

declare @b decimal (10,2)

set @b=5000000

select @b+0.01-@b;
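For readers who want to check this comparison without a server, here is a sketch in Python, whose float is the same IEEE 754 double as SQL Server's float and whose Decimal stands in for decimal(10,2):

```python
from decimal import Decimal

a = 5000000.0
float_result = a + 0.01 - a
# 0.01 has no exact binary representation, and near 5000000 the doubles are
# spaced about 1e-9 apart, so the round trip cannot return exactly 0.01.
print(float_result == 0.01)  # False

b = Decimal("5000000")
dec_result = b + Decimal("0.01") - b
print(dec_result)  # 0.01, exactly
```

The float answer is close (within about a billionth), but only the decimal version gives back precisely the 0.01 that was added.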