I'm a math student and just know the basics of CS. In pure math, every decimal number has an equivalent representation in bicimal, but on a finite machine you get floating point errors. If you can't accurately represent numbers like 0.1 in floating point, how do programmers get around this limitation?
If you really need to test equality between floating point numbers, you need to work out an appropriate epsilon given their magnitudes. If the error is within that epsilon, you can call it a day. Alternatively, look into ULPs or some exact representation (slower, since it doesn't run directly on hardware).
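A minimal sketch of the ULP idea in Python (the helper name and tolerance are illustrative, and `math.ulp` needs Python 3.9+):

```python
import math

def within_ulps(a, b, max_ulps=4):
    # Treat two floats as equal when they differ by at most a few
    # units in the last place, scaled to the larger magnitude.
    return abs(a - b) <= max_ulps * math.ulp(max(abs(a), abs(b)))

print(0.1 + 0.2 == 0.3)            # False: classic rounding error
print(within_ulps(0.1 + 0.2, 0.3)) # True: off by only 1 ULP
```

The standard library's `math.isclose` does much the same thing with a relative tolerance instead of ULP counting.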
Computers are bad at this. Whenever you’re using decimal values while programming, be aware of the implementation and take care.
Landon Myers
>Whenever you’re using decimal values while programming, be aware of the implementation and take care.
Or better yet stick to base 2.
There is hardly ever a good reason to do mathematics in base 10. Base 10 is no better than, say, base 7. Just because humans happen to have 10 fingers doesn't mean it's a magical number.
In some ways base 2 results actually look nicer, because you can plot them more easily on a grid.
Austin Anderson
Kenty is that you?
Tyler Collins
As several have stated, you can round the values, and for very serious scientific computing there are data structures that store the fraction itself rather than its decimal equivalent. A third option: pretty much every programmer learns relatively early to avoid strict equality when comparing floats.
E.g. instead of checking "floatA == floatB", you might write "near(floatA, floatB)", where near() is a function accepting two floats and returning true iff the values are within some acceptable range of each other (e.g. 0.0000001)
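A minimal sketch of such a near() helper in Python (the absolute tolerance is just the example value from above; real code would usually scale it to the magnitudes involved):

```python
def near(a, b, tolerance=0.0000001):
    # True iff the two floats are within `tolerance` of each other.
    return abs(a - b) < tolerance

print(0.1 + 0.2 == 0.3)    # False
print(near(0.1 + 0.2, 0.3))  # True
```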
In practice, extremely high float precision is rarely required. NASA is able to predict the landing position of a spacecraft on another object within a few feet using just 15 digits of precision - I imagine, then, that few applications would require even that modest level of precision, which can easily be captured with a modestly sized data structure (probably a Double?)
Luis Parker
You also don't need much precision because measurements generally aren't that precise either.
Why would you need to add 0.1 to 0.2 in the first place? Are they two temperature deltas you're trying to add? Who's to say those deltas weren't really 0.0983 and 0.2038, and they just got rounded off by your thermometer?
NASA can do some amazing measurements, but I doubt any measurement they've ever made was so precise it needed anything more than a regular float to represent it.
Brody Powell
If you require really high precision you can use 4 bits to represent each decimal digit (binary-coded decimal).
James Thomas
>There is hardly ever a good reason to do mathematics in base 10. There is always a good reason to do mathematics in base 10, and that is that everyone else can read it easily. Anyway, the problems with float comparison apply no matter what base you use.
Bentley Nelson
For money: decimal number packages. For science: approximate numbers. For math: symbolic representation.
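For the money case, Python's standard `decimal` module illustrates the idea: it stores base-10 digits exactly instead of approximating them in binary.

```python
from decimal import Decimal

# Binary floats drift; Decimal keeps exact base-10 digits.
print(0.10 + 0.20)                        # 0.30000000000000004
print(Decimal("0.10") + Decimal("0.20"))  # 0.30
```

Note the values are constructed from strings; `Decimal(0.1)` would inherit the binary rounding error of the float literal.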
Joseph Bennett
>the problems with float comparison apply no matter what base you use.
No they don't.
As long as you stick to fractions that can be represented exactly in base 2 floating point (i.e. 1/2, 1/4, 3/8, etc.), there are no rounding errors. OP's rounding error occurs because 1/10 has no exact representation in base 2 floating point.
It's no different in decimal, where you can safely convert 1/10, 1/100, 196/1000, etc. to decimal floating point, but not 1/3, 1/7, etc. People are just more used to getting 0.9999999999 when adding 0.3333333333 to 0.6666666666, but the problem is exactly the same as in OP's example.
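A quick Python check of this claim, comparing dyadic fractions (denominators that are powers of two) with 1/10:

```python
# Dyadic fractions are exact in binary floating point, so arithmetic
# on them incurs no rounding at all.
print(0.5 + 0.25 == 0.75)  # True: all three are exactly representable

# 0.1, 0.2, and 0.3 are not exactly representable, so the rounded
# inputs don't quite sum to the rounded expectation.
print(0.1 + 0.2 == 0.3)    # False
```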
Bentley Wright
Simple. You don't store the value as float.
Isaac White
>No it doesn’t. Yes it does. It’s not realistic to restrict yourself to the set of numbers that can be represented exactly in base 2. Are you really saying that a program would never have a reason to divide anything by 3? Get outta here.