You better explain this
javascript is trash
10^16 > 2^53
both of you have no clue
it's a computer problem, not a JS one. Python has the same issue, for instance. You can work around it in code, but at bottom it's because the literal gets rounded when it's parsed and float numbers are stored in 64 bits
>The closest 64 bit floating point value to 1.9999999999999999 is 2.0, because 64 bit floating point (so-called "double precision") has a 53-bit significand (52 stored bits plus an implicit leading 1), which is about 15-17 significant decimal digits. So the literal 1.9999999999999999 is just another way of writing 2.0
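You can see the rounding happen at parse time, e.g. in a node or browser console:
// the literal is converted to the nearest double before anything else runs
console.log(1.9999999999999999 === 2); // true
console.log(1.9999999999999999); // prints 2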
Try with ===
it also happens in c#
Python at least makes a distinction between floats and integers.
true
1.9999999999999999 and 2 have the same representations as floats so they are equal. JavaScript numbers are all floats.
>JavaScript numbers are all floats
not true, you just don't have that trivial control over it, look up how asm.js works
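The trick asm.js relies on is visible in plain JS: bitwise operators coerce their operands to 32-bit integers, which is what its x|0 annotations mark. For instance:
// |0 truncates to int32, so engines can use real integer registers here
console.log(Math.pow(2, 31) | 0); // -2147483648, the value wrapped around to int32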
>Node.js
There's your problem.
>floating point operations confuse and scare me
why would that make any difference here
2 floats are equal when their difference is too small for the format to even represent: both literals round to the same bit pattern. Sure it leads to that funny result, but exact equality gives you weird shit anyway:
0.1 + 0.2 == 0.3
false
And that's just worse.
I would assume that JS uses regular floating point numbers in this case; the IEEE 754 standard rounds any number that can't be represented exactly to the nearest floating point value.
>it's a computer problem
No, ABSOLUTELY not, it is a "Problem" of the IEEE 754 floating point standard.
If you use a CAS this won't happen.
#include <iostream>

int main() {
    // the float literal 1.9999999999999999f is rounded to 2.0f at compile time
    if (1.9999999999999999f == 2.0f)
        std::cout << "equal\n";
}
>I would assume that JS uses regular floating point numbers in this case, the IEEE 754 standard means rounding a number which can not be expressed to the nearest floating point number.
you are correct
In javascript (be it running in your browser or under node.js), all numbers are internally represented using floating point.
*all* numbers, even if they look integer in your code.
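You can watch the integer precision run out once values pass 2^53 (quick sketch, standard behaviour):
// above 2^53 a double can no longer represent every integer
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991, i.e. 2^53 - 1
console.log(9007199254740992 === 9007199254740993); // true, both literals round to 2^53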
I heard about some bank which decided to switch their backend to node, and only discovered their horrible mistake once the project was already completed.
> bank
> switch their backend to node
Kek.
>all numbers are internally represented using floating point
and this is why you should never trust anything you read on Jow Forums
How do computers handle precise numbers then?
Fixed point arithmetic, usually not worth the memory cost
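Rough sketch of the idea, assuming two decimal digits of precision (names made up for illustration):
// fixed point: keep values as plain integers scaled by a constant factor
const SCALE = 100; // two decimal places
const toFixed2 = x => Math.round(x * SCALE); // 19.99 -> 1999
console.log(toFixed2(19.99) + toFixed2(0.01)); // 2000, i.e. exactly 20.00
// multiplication needs a rescale, since each factor carries one SCALE
const mulFixed = (a, b) => Math.round((a * b) / SCALE);
console.log(mulFixed(toFixed2(1.5), toFixed2(2)) / SCALE); // 3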
It's not a problem. It's by design. Use appropriate technology.
This is true but only if you are using the data type double, which is the default floating-point data type in C#. If you need greater precision, use decimal.
While I think there are many good criticisms of javascript, this is not something unique to javascript.
how does a CAS even calculate floating point values?
What about it?
That's how floating point works, mang.
That's why you always compare floating points like this:
if (Math.abs(2 - 1.9999999999999) < 0.0001) {
    // we can consider those numbers equal (absolute value, so the order doesn't matter)
}
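A fixed epsilon falls apart for very large or very small values though; the usual fix is a relative tolerance (hypothetical helper, not from any library):
// scale the tolerance with the magnitude of the operands
function approxEqual(a, b, relTol = 1e-9, absTol = 1e-12) {
  const diff = Math.abs(a - b);
  return diff <= Math.max(absTol, relTol * Math.max(Math.abs(a), Math.abs(b)));
}
console.log(approxEqual(0.1 + 0.2, 0.3)); // true
console.log(approxEqual(1e20, 1e20 + 1)); // true, at this magnitude both are the same double anyway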
abstract types like Rational (2 integers)
Rationals with arbitrary-precision integers
symbolic computation and avoiding rounding until necessary
All of these are trade-offs of speed and complexity for precision.
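The first option as a toy sketch in JS, using BigInt so numerator and denominator can grow without overflow (illustrative only, not a real library):
// exact rational arithmetic: a value is a reduced pair of big integers
function gcd(a, b) { return b === 0n ? (a < 0n ? -a : a) : gcd(b, a % b); }
function rat(num, den) { const g = gcd(num, den); return { num: num / g, den: den / g }; }
function add(x, y) { return rat(x.num * y.den + y.num * x.den, x.den * y.den); }
const third = rat(1n, 3n);
console.log(add(add(third, third), third)); // { num: 1n, den: 1n }, exactly 1, no 0.999...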
TIL FPU registers are 80-bit
>inb4 autistic mathfags attempting to "prove" 0.999... = 1
I'm so glad we covered this in like week 3 of comp sci mathematics. all these clueless people here shitting on node or whatever never learned proper computer math... very sad.
does anybody have a good filter so I don't have to see these absolutely retarded floating point precision posts anymore?
...
1/3 = .33333...
3/3 = .99999...
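Or the long way: let x = 0.999..., then 10x = 9.999..., subtract to get 10x - x = 9, so 9x = 9 and x = 1.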
It is the same thing, autist. 0.99999999999... and 1 denote the same real number; there can't be two elements in the set that are the same thing.
you make this thread every day. if you want arbitrary precision you need to use an arbitrary precision type. most programs do not need it, so the default in most languages is the standard hardware-supported floating point.
Well, you're just making the problem worse by explicitly telling the compiler to interpret those two numbers as floats rather than doubles; floats have an even narrower representation, so your 1.999... was rounded to 2.0 at compile time even before the comparison.
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Should be mandatory reading for any "coder"
perso.ens-lyon.fr
And afterwards any reasonable person should realise floats are the devil and instead use an integer representing the value at the minimum level of precision, e.g. milli or nano.
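e.g. timestamps kept as integer nanoseconds (sketch, values made up):
// BigInt nanoseconds: pure integer math, nothing ever rounds away
const t0 = 1700000000000000000n;
const t1 = t0 + 1n;
console.log(t1 - t0); // exactly 1n; as doubles, t0 + 1 would equal t0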
I dunno lol. Incidentally, yesterday I read this
learnxinyminutes.com
but both sides are the same type, maybe you should read more carefully
It calculates in theoretically infinite precision by treating numbers as elements of the rationals (any given rational can be stored in a finite amount of space), or alternatively of countable subfields of the real numbers.
Yes, it is true by definition, no proof needed.
Will this get posted every day from now on?