Vitalik - LMAO

a cult of personality and a one-time fortune -- and sharding is still 6+ years away
Lmao

Attached: C32583E2-DF47-4D32-ABBA-7E07D0BA3C3D.png (581x678, 73K)

What are you even trying to do?

467217.5

How do you do it with a, b, and c?

how does this make sense?

in the first example you do one addition and one division.

in the second you do one subtraction, one division and then one addition.

how can the second one be faster when there is an extra operation??

this has to be a troll

I had honestly never seen the second algorithm before, but if you try it once it's pretty obvious why it's better.

I saw this yesterday, can somebody please explain it? In terms of computer architecture, the second method requires 1 more operation and 1 more register. In terms of mental capacity, it requires remembering two numbers. The only way this makes sense to me is that it *might* reduce the chance of an overflow happening, but it requires sorting the numbers so that a is less than b.
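For what it's worth, the overflow point is real for fixed-width integers. A minimal C sketch, assuming 32-bit int (the variable values are mine):

#include <stdio.h>
#include <limits.h>

int main(void) {
    int a = INT_MAX - 2;
    int b = INT_MAX;

    /* (a + b) / 2 would overflow here: the sum exceeds INT_MAX
       before the division happens, which is undefined behavior. */

    /* a + (b - a) / 2 keeps every intermediate value in range,
       as long as a <= b. */
    printf("%d\n", a + (b - a) / 2); /* prints INT_MAX - 1 */
    return 0;
}

Which is also why the sorting matters: a <= b keeps b - a non-negative and in range.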

What the FUCK is Vitalik talking about? I'm an ETH holder but it genuinely worries me that he's 'boggled' about this.

electronics.stackexchange.com/questions/22410/how-does-division-occur-in-our-computers

I think what he's getting at is that in the first option you're dividing a and b added together, whereas in the second option you're only dividing the difference b-a.

b-a is a smaller number than a+b, so a computer needs fewer iterations to divide it. Adding a back on in the second algorithm has a negligible impact on the algorithm's performance. Therefore, the second algorithm is faster than the first at calculating the average of two numbers.
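(For the record, the two expressions are algebraically identical, so only the intermediate values differ:

a + (b - a)/2 = (2a + b - a)/2 = (a + b)/2

Any speed or accuracy difference comes from how the intermediates are computed, not from the result.)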

This will be totally dependent on the architecture and the compiler for that architecture. A person who programs at any level above assembly shouldn't be concerned about the efficiency of division operations, because you simply don't know how the operation will occur in practice, at the architectural or machine code level.

That would be a reasonable assumption, except that in the comments he says he was talking about mental efficiency.

Ok, then what he's saying is that this method allows you to divide a smaller number rather than a larger number, which is easier for most people.

What's easier: (388+389)/2, or 388 + (389-388)/2 = 388 + (1/2)?

He's saying that if you're computing something in your head and the numbers are close together, it's better to do it the way he says. Just check the example: adding two large but similar numbers in your head is less efficient than adding half the difference, because you can ignore most of the digits when working in your head or by hand.

Let's try another example: can you calculate in your head the average of
a=98765432
b=98765430
?
You should be able to, it's trivial, and you didn't add up the two numbers, so what DID you do in your head?

Your a is larger than your b, so you'll get a negative fraction, but I get you.

For this case you describe, obviously the second method. But for two arbitrary numbers... 23421 and 24311... you first have to sort the numbers so you know which is bigger, and then remember 1) the smaller one, and 2) the difference, and then divide the smaller difference, then add them anyway. It's far more difficult.

It's nice that he considers these things, but his mind being boggled is weird.

If you only had a pen and paper, doing b-a and performing long division on that amount would be quicker than doing long division on a+b. He specifically mentioned numbers which are close together btw.

>so what DID you do in your head?
Picked the middle number of 30 and 32 (31)

if b and a are very close together, you might not get a very accurate answer, depending on how much precision the program can store for each number.

to add to this, computers can lose precision when subtracting two numbers that are very close together: the leading digits cancel and what's left is mostly rounding error (catastrophic cancellation, which comes from how computers store numbers). In general you will get a more accurate answer by turning a subtraction of close numbers into an addition where you can.
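A minimal C sketch of the cancellation effect, using single-precision floats so it shows up within a few digits (the specific values are mine):

#include <stdio.h>

int main(void) {
    float a = 1.0000001f;
    float b = 1.0000002f;

    /* Both values pick up rounding error just from being stored as
       32-bit floats; the subtraction cancels the matching leading
       digits and leaves mostly that error behind. */
    printf("b - a = %.10g (true difference: 1e-07)\n", (double)(b - a));
    return 0;
}

On an IEEE-754 machine this prints roughly 1.19e-07, about 19% off, even though each input was stored to about 7 digits of accuracy.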

people say AI is improving but it still needs to be spoonfed how to average two numbers. Pathetic.

the difference in computational power required for the two cases is negligible, even if you iterate millions of times. Even more so with today's computers. Vitalik is pure autism
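If anyone wants to check instead of arguing, here's a crude timing harness in plain standard C (expect an optimizing compiler to flatten both loops into nearly the same code, which is sort of the point):

#include <stdio.h>
#include <time.h>

int main(void) {
    volatile double sink = 0; /* keep the stores from being optimized away */
    long n = 100000000L;

    clock_t t0 = clock();
    for (long i = 0; i < n; i++) sink = (i + (i + 2.0)) / 2.0;
    clock_t t1 = clock();
    for (long i = 0; i < n; i++) sink = i + ((i + 2.0) - i) / 2.0;
    clock_t t2 = clock();

    printf("(a+b)/2:   %.2fs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("a+(b-a)/2: %.2fs\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}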

Imagine working with him as a general office monkey and being spotted doing (a+b)/2. Then he goes on a rant about efficiency and you get scolded. You agree, keep cucking till evening only to go home and read that he tweeted about how you are a brainlet.

Integers or floating point?

It depends, but implemented in silicon the latter is not efficient. With integers, (a+b)/2 does not even need the division: it's a single 1-bit right shift.

Also, a and b do not "tend to be together" in practice.

but dividing by 2 is just about the cheapest operation there is

Attached: 1533564332295.jpg (657x527, 50K)
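A sketch of that, plus the standard bit trick for when the sum might wrap (unsigned C; the helper name is mine):

#include <stdio.h>

/* (a + b) >> 1 is a single shift, but the intermediate sum can wrap.
   (a & b) + ((a ^ b) >> 1) is a well-known rewrite that never
   overflows: the shared bits, plus half of the differing bits.
   Both round the average down. */
unsigned avg(unsigned a, unsigned b) {
    return (a & b) + ((a ^ b) >> 1);
}

int main(void) {
    printf("%u\n", avg(6u, 2u));                   /* 4 */
    printf("%u\n", avg(0xFFFFFFFFu, 0xFFFFFFFDu)); /* 4294967294 */
    return 0;
}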

And then you do the same thing the next day

Are you retarded? Why do you need to sort them or see which is bigger?

Consider 5 and 3.

5 + (3-5)/2 = 5 + (-1) = 4
3 + (5-3)/2 = 3 + (+1) = 4

WOW MAGIC. NEGATIVE NUMBERS EXIST
gtfo fucking humanities

also
>then divide the smaller difference
>different differences between the same numbers
I hate how stupid you are

the absolute state of Jow Forums

So clever. The suggestion is that this formulation is easier to compute in one's head. It should be obvious that, when averaging two large positive numbers for example, arbitrarily ending up with either an addition or a subtraction depending on the order of your operands is not easier to mentally compute. 'Sorting' them first means you always end up with exactly one subtraction and one addition.
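A tiny sketch of that sorted form in C (swap first so the difference is never negative):

#include <stdio.h>

/* Midpoint via the "sorted" recipe: force a <= b up front, so b - a
   is never negative and the final step is always an addition. */
double mid(double a, double b) {
    if (a > b) { double t = a; a = b; b = t; } /* swap */
    return a + (b - a) / 2.0;
}

int main(void) {
    printf("%g\n", mid(24311, 23421)); /* 23866 */
    printf("%g\n", mid(3, 5));         /* 4 */
    return 0;
}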

You can't even write English, you dumb pajeet. If you can count past 10, go test each method with two large numbers instead of 3 and 5.

(2+6)/2
8/2 = 4

2 + (6-2)/2
2 + 4 / 2
6/2 = 3


?????

>2 + 4 / 2
= 2 + 2 = 4 (division binds tighter than addition)

>american education system