What did the computer mean by this?

Attached: Screen Shot 2018-08-21 at 04.14.29.png (260x84, 9K)


Means you're a brainlet and should learn that computers work in base 2.

read the IEEE floating-point documentation

>base 2 causes floating point inaccuracy

literal r*ddit brainlet, read the IEEE 754 spec

>read documentation
No.

This shit is explained EVERYWHERE. Any book has a page or two going "SOME DUMB FUCKING IDIOTS DON'T UNDERSTAND FLOAT, SO HERE'S A REMINDER YOU FUCKING REPROBATE", and you STILL post this question right here on GEEEE

>mom, just look at my edgy reply xD

>IEEE 754
So base 2? Retard.

stop chimping out and give an answer

>uses floating point numbers
>surprised they work as intended
What did he mean by this?

0.1 + 0.2 > 0.3 as it should be with IEEE 754 doubles. Read the fucking specs.

also 1.0 + 2.0 == 3.0

To cover a wide range of values, floating point numbers are internally stored as a sign, an exponent and a mantissa. The mantissa is a fraction between 1 and 2 that is multiplied by two raised to the exponent.

So in order to add two floating point numbers, one of them first has to be shifted so that their exponents match. Bits can be lost while shifting, so the result isn't always exact.

I hope that the noobs in this thread can understand that. You can do exact calculations in base two, but when register size limits the number of digits, you have to make do.
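
If you want to see those fields yourself, here's a quick Python sketch (the helper name and output format are mine, not anything from the standard):

```python
import struct

def decompose(x: float):
    """Split an IEEE 754 double into its raw sign, biased exponent, and mantissa bits."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF      # 11-bit exponent, biased by 1023
    mantissa = bits & ((1 << 52) - 1)    # 52 stored mantissa bits, leading 1 is implicit
    # value = (-1)**sign * (1 + mantissa / 2**52) * 2**(exponent - 1023)
    return sign, exponent, mantissa

print(decompose(1.0))   # (0, 1023, 0): exponent 2**0, empty mantissa
print(decompose(0.1))   # biased exponent 1019, i.e. 2**-4, mantissa just under 0.6 of a unit
```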

I don't get your post OP.

Attached: file.png (323x77, 6K)

convert 0.1 to binary and you'll see that it has an infinitely repeating portion, like 1/3 in decimal. Because computers can't store an infinite amount of data to represent one number, we compromise: floats that don't convert to binary exactly may come out a bit greater or less than we intend.

Attached: OneTenthLongDivision.png (230x482, 9K)
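
The long division in that picture takes a few lines of code to reproduce. A sketch (the function name is mine): it emits the binary digits of a fraction, and for 1/10 you can watch the 0011 block repeat forever:

```python
def binary_fraction(num: int, den: int, digits: int) -> str:
    """First `digits` binary digits of num/den after the binary point,
    computed by the same long division as in the picture."""
    out = []
    for _ in range(digits):
        num *= 2
        out.append(str(num // den))
        num %= den
    return "0." + "".join(out)

print(binary_fraction(1, 10, 20))   # 0.00011001100110011001 -- the 0011 block repeats
```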

You suck at math, you had to subtract 9090 from 10000.

Don't worry, that just means it's fast and efficient.

Well wow. You guys must be great at parties.

multiply by 10 and convert to integer before comparing you retard

It means that you're too stupid to figure out how to import and use an arbitrary-precision arithmetic library.

No need for that, just use the decimal type.
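
For example, Python's standard-library decimal module does exactly that (other languages have equivalents, e.g. a built-in decimal type in C#). Note you must construct from strings, or the binary error sneaks back in:

```python
from decimal import Decimal

print(Decimal("0.1") + Decimal("0.2"))                    # 0.3, exactly
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
print(Decimal(0.1))   # passing a float leaks its true binary value back in
```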

wow, people talking about computers on a computer board must be great at parties ahah!!

Not him but that's not a solution in performance critical applications

abs(a-b)
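
Spelled out: compare against a tolerance instead of with ==. A Python sketch (the 1e-9 tolerance is an arbitrary pick for illustration; math.isclose uses a relative tolerance by default):

```python
import math

a = 0.1 + 0.2
b = 0.3

print(a == b)               # False: the doubles differ in the last bit
print(abs(a - b) < 1e-9)    # True: equal within an absolute tolerance
print(math.isclose(a, b))   # True: relative tolerance, usually the better default
```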

>their languages are too primitive to handle this bullshit on their own
Pathetic.

Attached: 15425634735934.png (547x579, 12K)

It's still stupid

Name 1 (one) language that does that automatically when you use the comparison operator on floats.

I fail to see the problem.

back to computer architecture class nigger

0.1 + 0.2 is 0.3
The values should be equal, and 0.1 + 0.2 > 0.3 should be false. 0.3 is not greater than 0.3.

>0.1 + 0.2 is 0.3
Why do you think so?

>he doesn't read documentation
lol everyone laugh at this retard

For the same reason that 1 + 2 = 3. Because math

Did a stacey and/or Karlie wander onto Jow Forums somehow? What the fuck? Did a Jow Forumsentooman orbiter send you here or something?

0.100000001490116119384765625 + 0.20000000298023223876953125 is not equal to 0.300000011920928955078125.
The math is correct here though.

>Because math
You mean arithmetic? But it deals with numbers.
As in, mathematical abstraction.
I see no evidence that
> 0.1 + 0.2 > 0.3
implements this abstraction.
Since the OP mentioned
>computer
I've assumed that these numbers represent floating point arithmetic.
In this case, it makes perfect sense.

But I'm not adding 0.100000001490116119384765625 to 0.20000000298023223876953125. I'm adding 0.1 to 0.2.

this.

Haskell, I think.

>I’m adding 0.1 to 0.2.
Please enlighten me, how are you adding numbers that don't exist?

are you using a decimal machine? are you using Babbage's difference engine?

>0.1 and 0.2 don’t exist
wew

You'd need an infinite amount of memory to store them so yeah.

the computer doesn't store 0.1 and 0.2 precisely

>Why do you think so?
Because that is how arithmetic works. The sentence
>0.1+0.2=0.3
has a value of True in arithmetic.

>As in, mathematical abstraction.
>I see no evidence that
> 0.1 + 0.2 > 0.3
>implements this abstraction.
Is this a joke? You have no evidence that a sentence containing only numbers and arithmetical operators is a mathematical abstraction?
Tell me, you own a mac, don't you?

"my computer can't represent those values" is a wildly different statement from "those numbers don't exist", brainlet.

>I'm adding 0.1 to 0.2
You mean you're adding
0.1000000000000000055511151231257827021181583404541015625 to 0.200000000000000011102230246251565404236316680908203125 ?

what kind of hardware are you using? is it a decimal machine?
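
Nobody types those digits by hand, by the way. Python's decimal and fractions modules will print the exact value a double actually stores:

```python
from decimal import Decimal
from fractions import Fraction

# Passing a float (not a string) exposes the exact stored value:
print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
print(Fraction(0.1))  # 3602879701896397/36028797018963968, denominator 2**55
```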

>look mom, today I learned about floating points!

What does that have to do with anything I said? I never stated my computer could represent those numbers, I just said you're wrong (whichever of the morons I quoted you are).

>Is this a joke?
No.
>You have no evidence that a sentence containing only numbers and arithmetical operators is a mathematical abstraction?
Considering the context - yes, I have no evidence.
If I see
> 0.1 + 0.2 > 0.3
in any computer language, I assume floating point arithmetic if it's not stated otherwise.

>0.1+0.2 = 0.30000000000000004
Wtf

you have not replied to me yet, so that's not quite right. can you tell us how your machine stores numbers? what kind of hardware are you using?

>0.999... = 1
Wtf.
What did real life mean by this?

Computers store floating point in a different manner.
youtu.be/jTSvthW34GU
Watch at 33:20 onward. Become enlightened young Jow Forumsman.

basically this

Attached: jeknhc8mqghz.jpg (599x712, 43K)

Computers aren't true calculators. A calculator can't assign a value of False to a mathematically sound statement, yet here is an example of a computer doing exactly that.

>can you tell us how your machine stores numbers?
The same way yours does: imprecisely. I just don't pretend this is actual math. OP is right: that statement is wrong.

That has absolutely nothing to do with the previous statement.
>0.1+0.2 = 0.30000000000000004
This is a representation problem and that statement is false in mathematics. It's wrong. It's a lie.
>0.999... = 1
This is not a representation problem. There are theorems that demonstrate this. It's a true statement.

I never said otherwise. The statement is still false. l2read.

Maybe you morons should stop learning from your Code.* programming tutorials and actually pick up a math book.

Stop trying to be pedantic. You just look retarded.

>That has absolutely nothing to do with the previous statement.
>0.1+0.2 = 0.30000000000000004
>This is a representation problem and that statement is false in mathematics. It's wrong. It's a lie.
>0.999... = 1
>This is not a representation problem. There are theorems that demonstrate this. It's a true statement.

I was just baiting, but what's your point anyway?
You're not even the one I was replying to.

>reee stop using proper definitions

>reee the definition in this domain is different than the definition in a different domain

The domain is mathematics, are you stupid?

wew lad that was hard

Attached: Screen Shot 2018-08-21 at 16.39.51.png (440x76, 13K)

Math isn't Technology, go to /sci/

You're right, math doesn't have this problem.

Come on, why don't you reply to , afraid of exposing your autism for everyone to see?

"Math" says your are right on the high level based upon what's given to you. However, your programming language says your wrong, and the underlying math as well.

If I say 1+12=1, you'd assume I'm mad. If I say mod(1+12,12)=1, we'd all be just fine.

Essentially the language you're using is compiling what you think is 0.1, 0.2, and 0.3 into something you didn't quite expect. Read about its interpreter or compiler and you'll see somewhere it's stated that what you enter as your inputs is stored differently than what you expect. That is all. Look up methods to circumvent this in said language and you should be fine. Weakly typed languages are more apt to do this.

Samefagging, this guy knows what's up.

>read the IEEE 754 spec
The wise man also follows his own advice

This.
And this.
Also this.
Absolutely NOT this.

this must be b8

Nope!

>This is a representation problem and that statement is false in mathematics.
No. You are very confused. Nobody claimed that 0.1 is a real number...

0.1+0.2 = 0.30000000000000004
A perfectly valid mathematical statement if the numbers are members of the mathematically well defined floating point numbers.

Next you are gonna tell me 1+1=0 is false in the finite field where you calculate mod 2.

You can implement a CAS in a computer based on binary, are you dumb?

>decimal 0.1 can't be represented in binary - surprising
>ternary 0.1 can't be represented in decimal - meh
Before complaining about math try learning it first.

Number precision.

You get 64 or whatever bits to work with for both storing and doing calculations in a computer more or less efficiently.

So what do you do even just for the real numbers between 0 and 0.1? There are infinitely many numbers there. There are multiple approaches, like figuring out how many places after the decimal point you can represent exactly, cutting off the rest, and saying "well, we can't do this, done".

But the standard approach chosen for floating point numbers is to allocate the bits as sign [1 bit] - exponent [11 bits] - mantissa [52 bits stored, plus one implicit leading bit]. en.wikipedia.org/wiki/Double-precision_floating-point_format

This method has the nice property of giving finer precision in the range around zero at the expense of being less accurate for numbers of larger magnitude. But you can't rely on it being absolutely exact; it's an approximation except for specific numbers.
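
You can watch the spacing grow. Python 3.9+ exposes math.ulp, the gap between a double and the next representable one (a small sketch):

```python
import math

# Absolute spacing between adjacent doubles grows with magnitude;
# relative precision stays roughly constant at about 2**-52.
for x in (1.0, 1000.0, 1e15):
    print(x, math.ulp(x))

print(math.ulp(1.0) == 2**-52)   # True
```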

>decimal 0.1 can't be represented in binary
It can't be represented by IEEE 754 floating point numbers.
Which neither means that no other floating point standard exists which could do this nor that there is anything limiting a CAS operating on a binary data storage.

Not sure if troll or actually claiming fp inaccuracy is not the issue with op's image. Either way very stupid.

>fp inaccuracy is not the issue with op's image.
The "issue" is with the IEEE 754 definition of floating point numbers, it has LITERALLY nothing to do with computers operating on binary storage.

Are you dense? Do you think it's practical to use a CAS for every arithmetic operation? Do you know how ALUs work? Jesus Christ, the level of smug idiocy.

>Do you think it's practical to use a cas for every arithmetic operation?
No, especially numerical algorithms suffer enormously.
>Do you know how ALUs work?
Yes. I am perfectly aware that CAS are incredibly slow compared to FP arithmetic, as FP arithmetic is done directly in hardware.

What does that have to do with anything though?
You claimed that the base 2 data storage of a computer causes these issues (or at least supported that statement); that is plain wrong.

smbc-comics.com/?id=2999

No, your question was just stupid. You compared a representation error to a statement provable through a theorem. When I pointed this out, you caved back with "I was just baiting". Whatever. I don't know what else you want me to say; go learn math.

>However, your programming language says you're wrong, and the underlying math as well.
There is no underlying math where that statement is true. There is something close. This isn't even a subset: arithmetic says that statement is false and the language says it's true. Therefore, the language is incompatible with arithmetic.

>Nobody claimed that 0.1 is a real number...
I do. It's a number defined by mathematics.
>A perfectly valid mathematical statement if the numbers are members of the mathematically well defined floating point numbers.
The statement is not compatible with the axioms of arithmetic, therefore it is false in normal math. Unless you mean a separate math system, in that case we have to agree that every single operator doesn't necessarily mean the same thing, so = means different things in your system and in the normal one. Is that what you mean?

>doesn't understand that binary storage can represent floating point.
It's over. OP is a great b8ter or has a dunce hat sewn into his skull. Everyone go home.

>The statement is not compatible with the axioms of arithmetic
There are no "arithmetic axioms"; modern math is based upon ZFC.

>therefore it is false in normal math
False.

>Unless you mean a separate math system, in that case we have to agree that every single operator doesn't necessarily mean the same thing, so = means different things in your system and in the normal one. Is that what you mean?
Kinda, but it isn't a different system of maths, both can operate under the exact same axioms, just different definitions. Symbols often mean different things in math.
Floating point arithmetic is normal math, just for floating point operations the + operator is "overloaded" and has a different meaning.

>doesn't understand that binary storage can represent floating point
I do perfectly well.
They also can represent numbers in a CAS.

>There are no "arithmetic axioms"
en.wikipedia.org/wiki/Peano_axioms#First-order_theory_of_arithmetic

>False.
So you're saying a system that says a statement has a true value, while that statement has a false value in math, is not incompatible with math? Provide proof.

>Kinda, but it isn't a different system of maths, both can operate under the exact same axioms, just different definitions.
How the fuck is "using different definitions" not a different system of maths? So you redefine whatever you want to the point where you derive false statements, but you say you're still under the same system? Ok, so apparently your system can derive P and !P. Totally makes sense.

quora.com/Why-is-0-1 0-2-not-equal-to-0-3-in-most-programming-languages

"It has nothing to do with the languages usually, but on the underlying representation the languages use for representing numbers.

BY default most but not all languages use the fastest representations. On the x86 this is usually 32 or 64 bit binary integers.

Some languages that are a bit more particular, like COBOL or PL/I or Delphi, give you a choice, binary or decimal or character math.

In COBOL you can declare a variable to be PIC 9999.9999 and it will do decimal math so that 0.1 plus 0.2 gives you exactly 0.3. But that will take like ten times longer than doing a binary floating-point add. You have a choice, exact math or fast math.

Some really old CPU's like IBM mainframes had instructions for decimal and binary math, so there the speed difference isn't quite so large.

To go into detail: in a binary fraction, 0.1 is not representable exactly, it's represented as 1/16 plus 1/32 plus 1/256 plus 1/512 plus 1/4096 plus 1/8192 plus 1/65536 and so on. going out that far you get 0.09999084472, that's with sixteen binary fraction digits. So you can get close to 0.1 but not exactly. It's confusing, since most languages store the inexact value, but when you go to print the value out, the output routine rounds it to the nearest number, which may be 0.1. Confusing.

If you really need exact math, see if the language has an exact decimal type or a set of functions or methods for doing exact math."

First search result. Now go fuck off, you have your answer. The language doesn't represent numbers in a CAS. Either assert your precision as a previous user mentioned, or go find a programming language that does represent all your numbers in a CAS. Use the SAGE tool; it's powerful and can verify extremely large and small numbers.

If you aren't happy, build your own language and compiler from scratch, because no one else cares.

>en.wikipedia.org/wiki/Peano_axioms#First-order_theory_of_arithmetic
Yeah? I know the Peano axioms, but that is not how MODERN math works; you prove these "axioms" from ZFC.

>So you're saying a system that says a statement has a true value, while that statement has a false value in math, is not incompatible with math?
I don't know what that is supposed to mean.
There is no "normal math" the statement 0.1+0.2>0m3 is true if the numbers are floating point numbers and false if they are real.

>How the fuck is "using different definitions" not a different system of maths?
Because they are compatible.

>So you redefine whatever you want to the point where you derive false statements
No.

>Ok, so apparently your system can derive P and !P.
Only if I add axioms. I didn't do that, I defined things.
Also it is impossible to prove that I can't derive P and !P under ZFC.

>It can't be represented by IEEE 754 floating point numbers.
In English, explain what a calculator does that a computer cannot.

>quora.com/Why-is-0-1 0-2-not-equal-to-0-3-in-most-programming-languages
>"It has nothing to do with the languages usually, but on the underlying representation the languages use for representing numbers.
>BY default most but not all languages use the fastest representations. On the x86 this is usually 32 or 64 bit binary integers.
>Some languages that are a bit more particular, like COBOL or PL/I or Delphi, give you a choice, binary or decimal or character math.
>In COBOL you can declare a variable to be PIC 9999.9999 and it will do decimal math so that 0.1 plus 0.2 gives you exactly 0.3. But that will take like ten times longer than doing a binary floating-point add. You have a choice, exact math or fast math.
>Some really old CPU's like IBM mainframes had instructions for decimal and binary math, so there the speed difference isn't quite so large.
>To go into detail: in a binary fraction, 0.1 is not representable exactly, it's represented as 1/16 plus 1/32 plus 1/256 plus 1/512 plus 1/4096 plus 1/8192 plus 1/65536 and so on. going out that far you get 0.09999084472, that's with sixteen binary fraction digits. So you can get close to 0.1 but not exactly. It's confusing, since most languages store the inexact value, but when you go to print the value out, the output routine rounds it to the nearest number, which may be 0.1. Confusing.
>If you really need exact math, see if the language has an exact decimal type or a set of functions or methods for doing exact math."
>First search result. Now go fuck off, you have your answer. The language doesn't represent in CAS. Either assert your statement as a previous user mentioned.
You are dumb, holy shit. ANY Turing complete language can implement a CAS. This has nothing to do with language or hardware, you dumb moron. It's about interpreting a bit pattern.

>that a computer cannot
My computer can do that though?
Just ask a CAS it will give you the exact answer.

>a calculator
Different hardware, I think they often represent numbers decimally and calculate on them.
Some more advanced calculators implement a basic CAS.

>but that is not how MODERN math works, you prove these "axioms" from ZFC.
>prove
>axioms
user...

>There is no "normal math"
Mate are you shitting me right now? Let me make this explicit for you: unless otherwise stated, I am referring to the system defined by Peano's Axioms. According to this system, OP's statement is false.

>Because they are compatible.
No, they aren't. The statement "0.1+0.2>0.3" has a false value in mathematics and a true value in floating point arithmetic. You derive one conclusion and its opposite. That is the meaning of being incompatible.

>No.
Expand.

>Only if I add axioms. I didn't do that, I defined things.
And that changed the system. The system is composed of axioms and definitions for symbols. You changed the latter, so the system is different.

>Also it is impossible to prove that I can't derive P and !P under ZFC.
Then your retarded system doesn't follow ZFC.

>I think they often represent numbers decimally
well then
if I supply an equation with a decimal number
just hold on here with me
HOW ABOUT THE ENGINE REPRESENT THE NUMBER DECIMALLY WHEN DOING THE CALCULATION BY FUCKING DEFAULT

Is there a language that rounds to something before comparing? I'm a C programmer, and there are a lot of annoyances that come with newer languages; my first is non-ASCII characters (yes I am racist). Second, forced floating, or integer-to-float conversion in a number of languages. Different languages make different assumptions; I wish they had an "assumption table".

In practice I'd probably use integers ONLY when comparing two numbers unless absolutely necessary, but by then the rounding error would be insignificant to my goal.
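
That integer approach, sketched in Python (the scale factor is an example choice; this only works when you know a fixed decimal scale up front):

```python
SCALE = 10  # work in tenths, so 0.1, 0.2, 0.3 become the integers 1, 2, 3

a = round(0.1 * SCALE)
b = round(0.2 * SCALE)
c = round(0.3 * SCALE)

print(a + b == c)          # True: 1 + 2 == 3, no rounding error left
print(0.1 + 0.2 == 0.3)    # False: the same comparison on raw doubles
```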

No, the default is doing math wrong. Brainlets will defend this.

>user...
Yes, you "prove" the "peano axioms" notice the ".

>I am referring to the system defined by Peano's Axioms
Fine.

>According to this system, OP's statement is false.
Yeah, unless you consider the context and realize OP meant that all these numbers are the numbers defined by the IEEE 754 standard.

>The statement "0.1+0.2>0.3" has a false value in mathematics
Only if you define 0.1, + and so on...

>and a true value in floating point arithmetic
Yeah shocking.
x+2=5 is true for some x and false for others. HOW CAN THAT HAPPEN????

>You derive one conclusion and its opposite.
x+2=5 for x=3 and x+2≠5 for x=4.
MATH BTFO.

>Expand.
My definition just has to be consistent.
If I define "+" to mean something for one type of number and something for a different type that is no contradiction.
Google : operator overloading.

>Then your retarded system doesn't follow ZFC.
What?

>You changed the latter, so the system is different.
If by "system" you mean type of number, then you are correct.

>Just ask a CAS
in what fucking sense does a programming language not have an obligation to solve a mathematical equation with the same precision because some neckbeard said "but that's HARD"

Attached: Screen Shot 2018-08-21 at 9.49.19 AM.png (250x36, 8K)

>HOW ABOUT THE ENGINE REPRESENT THE NUMBER DECIMALLY WHEN DOING THE CALCULATION BY FUCKING DEFAULT
This is done for important technical reasons.
In fact IEEE 754 is in some sense optimal, as it approximates the numbers as closely as possible. In finite memory there will ALWAYS be this problem; it is INHERENT to all computation. A calculator WILL face the exact same problems, just for other numbers.
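
That optimal rounding is also why the two sides of OP's comparison land exactly one representable step apart. A quick check in Python (using float.hex and math.ulp):

```python
import math

print((0.1 + 0.2).hex())   # 0x1.3333333333334p-2
print((0.3).hex())         # 0x1.3333333333333p-2, one bit below
print((0.1 + 0.2) - 0.3 == math.ulp(0.3))   # True: exactly one ulp apart
```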

>math wrong
No. Just a different number system.

Then go take your pure mathematics over to another board. We don't need your level of precision. Come to think of it, what do you need such precision for? Why not just assert its precision and move on? Do you have another way of representing 0.1 without speed penalties? What problem has this imposed on your project? Are you arguing this as a problem in and of itself (since no other programmers are being held up by this issue, having found solutions quickly)?

>in what fucking sense does a programming language not have an obligation to solve a mathematical equation with the same precision because some neckbeard said "but that's HARD"
Almost no calculations require exact accuracy, and for those that do you have decimal types or a CAS.

>you should require a library in order to subtract two decimal numbers accurately
what did he mean by this