Highest you can count

What's the highest number you can count to using a computer and how would you do it?

Attached: Fibonacci.sh.png (721x725, 102K)


9,223,372,036,854,775,807

It's basically as many digits as you can fit in your memory. So if you were to buy a server with like 2TB of RAM, you can count absurdly high. Fill your memory entirely with 9s and there you go.

>9,223,372,036,854,775,807
You don't think there's a way to count past this?

An unsigned 64 bit int can store between 0 and 2^62 - 1 = 18446744073709551615

But then you use a "bigint" or something like it, and it's only bounded by memory.

Sorry, that's 2^64 - 1.
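For what it's worth, the fixed-width wraparound is easy to demonstrate from Python's stdlib `ctypes`, whose fundamental types do no overflow checking. A quick sketch:

```python
import ctypes

# c_uint64 keeps only the low 64 bits, like a hardware register.
max_u64 = ctypes.c_uint64(2**64 - 1)
print(max_u64.value)                              # 18446744073709551615
print(ctypes.c_uint64(max_u64.value + 1).value)   # wraps around to 0

# Signed 64-bit overflow wraps to the most negative value.
max_i64 = ctypes.c_int64(2**63 - 1)
print(ctypes.c_int64(max_i64.value + 1).value)    # -9223372036854775808
```

Plain Python ints never do this; the masking only shows up when you force the value into a machine-width type.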

Whatever you have enough memory to store.

Attached: 2018-12-09-225334_1366x768_scrot.png (1366x768, 754K)

about 2^96 billion on this machine

>LARBS
Imagine being so lazy and dumb

Come to think of it, you could have a floating point number with a bigint as an exponent. That'd be even bigger than if you would just fill your memory with a single bigint.

Can confirm, tried to add one to it, and it wrapped around to -9,223,372,036,854,775,808.

>insulting people for not messing with their config files all day
Imagine being so autistic.
Not with a 64 bit signed integer or whatever.

This, and then add compression

2^(number of bits in your RAM)

/thread

This is a good idea. 64 bit mantissa and a bigint exponent. But it wouldn't let you count, if you wanted to count up by one at a time. So a regular bigint would work.
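A toy sketch of that mantissa-plus-bigint-exponent idea, with a hypothetical `BigFloat` class (every name here is made up for illustration):

```python
# Hypothetical representation: value = mantissa * 2**exponent,
# where the exponent is an unbounded Python int.
class BigFloat:
    def __init__(self, mantissa: int, exponent: int):
        self.mantissa = mantissa   # fixed-width significand
        self.exponent = exponent   # arbitrary-precision exponent

    def __repr__(self):
        return f"{self.mantissa} * 2**{self.exponent}"

# 2**(10**20): far bigger than any bigint you could actually store in RAM.
huge = BigFloat(1, 10**20)
print(huge)   # 1 * 2**100000000000000000000

# The catch from the thread: you cannot count with this. Adding 1 to
# 2**(10**20) changes a low-order bit the tiny mantissa cannot represent.
```

This is exactly the thread's point: the representation reaches much bigger magnitudes, but it can't increment by one, so for counting a plain bigint wins.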

Theoretically, if you were to make your own OS, you could store 2^(However many bits left in memory)

Wrong.
You can count up to however much your total RAM allows.
Look up add-with-carry. You are not limited by your CPU's word size.
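A sketch of what add-with-carry looks like in software, assuming 64-bit limbs stored little-endian in a list (roughly the scheme real bignum libraries like GMP use, though they do it in assembly):

```python
WORD = 64
MASK = (1 << WORD) - 1   # 0xFFFF_FFFF_FFFF_FFFF

def add_limbs(a, b):
    """Add two little-endian lists of 64-bit limbs, propagating the carry."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s & MASK)   # low 64 bits stay in this limb
        carry = s >> WORD      # the overflow feeds the next limb
    if carry:
        out.append(carry)      # the number grew past its last word
    return out

# 2**64 - 1 plus 1 overflows the first limb and carries into a second one.
print(add_limbs([MASK], [1]))   # [0, 1], i.e. 0 + 1 * 2**64
```

Each increment only touches one word unless a carry ripples, which is why the word size bounds the digit, not the number.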

en.wikipedia.org/wiki/Rapira
over (2 EXP 1015)

Also, you could use disk to help store your number.

2^68719476736 - 1 on my machine.

We shall see

Attached: Selection_023.png (1694x465, 59K)

Yes.

My understanding of it is that one clock cycle of the CPU can handle one 64-bit word at maximum, something along those lines. So the idea is to use multiple clock cycles to process the one number, right? But how is this done? How could I implement this in my Fibonacci script? I would need variables that are over 2^63-1. I'm willing to switch to a different language or whatever, really.

Add with carry.

Seems to be mainly using virtual memory, so depending on how long this takes I'll have to see.

Did you just output a whole bunch of zeroes? That's basically what the script will do in bash after counting to 2^63-1 and overflowing a few times.

What language are you using? Most languages have big integer libraries available.

I'm using Bash.

Python uses bigints.
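So the Fibonacci script ports over directly; a minimal sketch in Python, where `int` silently grows past 2^63 - 1:

```python
def fib(n):
    """Return the n-th Fibonacci number (F(0)=0, F(1)=1) using plain ints."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# F(93) is the first Fibonacci number that no longer fits in a signed 64-bit int.
print(fib(93))               # 12200160415121876738
print(fib(93) > 2**63 - 1)   # True
```

No special library needed: the same loop that overflows in bash just keeps going in Python.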

unix.stackexchange.com/questions/202778/bash-how-to-calculate-very-long-numbers-in-bash

Seems python is only using ~5 GB, I'll probably leave it running overnight

I'm finding that out as we speak. I'll write the script again in .py

Attached: bigints.png (725x407, 12K)

Bookmarked. Thanks.

No, it's the Python script to the right that multiplies by 10 million. Python does not have integer overflows because everything is an object.

You can store a number that's arbitrarily large. It doesn't have to use a regular type, you can just use data as a number. Want to make a 500-Gigabyte number? You can! There's almost never a reason to, though, and for numbers that large it's easier to lose precision and use scientific notation.
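Treating raw data as a number is a one-liner in Python via `int.from_bytes`; a small sketch, scaled down from 500 GB to a few bytes:

```python
# Any byte string is also an integer: here 4 bytes become 0x01020304.
raw = bytes([1, 2, 3, 4])
n = int.from_bytes(raw, "big")
print(n)   # 16909060

# It round-trips back to the same bytes.
assert n.to_bytes(4, "big") == raw

# A "500-gigabyte number" is the same idea with len(raw) = 500 * 10**9.
```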

3.7 GB

I wonder how compressible a 500GB number would be if not already compressed

Literally as large as your RAM can go. Use a number library that isn't tied to a fixed Integer or Long. Python does this by default: IIRC, when an integer grows past a certain point it converts itself into the arbitrary-precision representation instead of a machine Long.

It depends on the number.

With 2 TB of RAM you can count to roughly 2^(2^44), since 2 TiB is 2^44 bits. Then since it's got so many repeating bytes you can compress that into a file that basically says "2TB of 1s". So in theory you could count to a number high enough that the number of exponent operators in the compressed file fills your RAM. Which would be fucking huge.

Let's say you have 2TB of RAM all set to 1s. You compress that into however much RAM it takes to say "2TB of 1s". Then you repeat that value throughout your RAM, and compress THAT into "2TB of (2TB of 1s)", and repeat THAT throughout your RAM. Then you repeat this so you have "2TB of (2TB of (2TB of 1s))" and so on and so on.

I don't know how compression is expressed and how many bytes precisely it takes to store that kind of information but what you would be left with is an impossibly long but finite string of 1s.
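At toy scale, that nesting can be sketched as a recursive run-length description, where each level repeats the previous level's bit pattern k times (a made-up encoding, just to show the description stays tiny while the value explodes):

```python
def expand(desc):
    """desc is either an int (a literal bit pattern) or ("rep", k, inner)."""
    if isinstance(desc, int):
        return desc
    _, k, inner = desc
    pattern = expand(inner)
    width = pattern.bit_length()
    # Concatenate the pattern's bits with themselves k times.
    out = 0
    for _ in range(k):
        out = (out << width) | pattern
    return out

# "4 copies of (4 copies of 0b11)": a couple of tuples describe 32 one-bits.
desc = ("rep", 4, ("rep", 4, 0b11))
print(bin(expand(desc)))   # 0b11111111111111111111111111111111
```

Each extra nesting level adds a constant-size tuple to the description but multiplies the expanded bit-length, which is the thread's point about exponent operators stacking up.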

>haskell

Attached: bignumber.png (1366x768, 39K)

Busy beaver number of however much memory you have available. Good luck calculating that.

Nice, you think you know how compression works

try unsigned

Simply define a compression algorithm which allows an infinite repetition to be compressed. Then you can count to infinity.

>while 1 ==1
>Not saying while True

fill with 9s? you know computers store in binary right? you should fill the mem with 1s

imagine recognizing larbs...

Define counting. If you mean incrementing by 1, counting to the largest number that fits in 16 GiB of RAM will take longer than the age of the Universe.

Attached: 1543716103284.png (480x490, 345K)

you can do counting on your disk, not only in RAM, dimwits

I was going to say 2^64 but what are bigints? Do you literally just start counting from 0 but keep the previous 2^64 in memory and add to it?

lol, PHP

Attached: 2018-12-10_14-55.png (830x551, 47K)

Save yourself a name lookup and just use 1

use Scheme or Python, they provide bignums by default (probably some other langs as well)
alternatively use some bignum implementation for other languages

>counting to the largest number that fits in 16 GiB of RAM will take longer than the age of the Universe

Attached: 1544177019028.jpg (1200x1000, 160K)

You can keep doing binary addition till you run out of memory

>Do my homework pls Jow Forums
Fuck off

Run this and test.

Attached: Untitled.png (534x216, 7K)

Compress while you're adding.

there's a limit on addressable memory on x64 processors though, so you'd need a group of programmers and mathematicians to get around that limit

This thread has shown me that there are 0 actual programmers on /g/

Attached: 1544464259117.png (720x1026, 187K)

Assume a 5 GHz CPU that only takes 1 clock cycle to increment an arbitrarily large number.

It would take 116 years to count to the largest 64-bit number:
(2^64) / (5 × 10^9 Hz) ~ 116 years

Counting up to the largest 65-bit number would take 233 years.

What about 1 MiB = 8 Mib?

So 2^(2^20 * 8) / (5 × 10^9 Hz)
~ 2.5 × 10^2525205 years

OK, that's way too long. The largest number countable in 14 billion years on this hypothetical CPU?

bits = log2((14 × 10^9 years) * ( 5 × 10^9 Hz))
~ 90 bits is the largest number the hypothetical CPU could completely enumerate, from 0 to 2^90 - 1, in the age of the Universe

en.wikipedia.org/wiki/Landauer's_principle

The minimum energy required to switch a bit is about 2.75 zJ (zeptojoules)

The estimated total mass-energy (in Joules) of the observable universe is 4 × 10^69
physicsoftheuniverse.com/numbers.html

If you used up the entire Universe, a perfect computer could count up to
(4 × 10^69 Joule) / (2.75 zeptojoule)
~ 1.455×10^90
or just about
log2(1.455×10^90) ~ 300 bits
or ~ 38 bytes
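Those back-of-the-envelope figures can be re-checked in a few lines of Python, under the same assumptions (5 GHz, one increment per cycle, ~14 billion years, 2.75 zJ per bit flip, 4 × 10^69 J in the observable universe):

```python
import math

HZ = 5e9
YEAR = 365.25 * 24 * 3600     # seconds per year

print(2**64 / HZ / YEAR)      # roughly 117 years for a 64-bit counter

# Increments possible in the age of the Universe on this hypothetical CPU.
counts = 14e9 * YEAR * HZ
print(math.log2(counts))      # a bit under 91, so ~90 fully enumerable bits

# Landauer limit: total bit flips the Universe's mass-energy could pay for.
flips = 4e69 / 2.75e-21
print(flips, math.log2(flips))   # ~1.45e90 flips, ~300 bits
```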

*giggles* well, I don't know, let me try.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
That's about it!

Cringe

>piping to cowsay
how did i never think of this

>If you used up the entire Universe
so where would that computer be stored? fucking reddit brainlet

Linux looks very interesting, even if some of the screen colours and menu options appear to be a little out of the ordinary.

But you are missing a vital point, a point which takes some experience and depth of knowledge in the field of computers. You see, when a computer boots up, it needs to load various drivers and then load various services. This happens long before the operating system and other applications are available.

Linux is a marvellous operating system in its own right, and even comes in several different flavours. However, as good as these flavours are, they first need Microsoft Windows to load the services prior to use.

In Linux, the open office might be the default for editing your wordfiles, and you might prefer ubuntu brown over the grassy knoll of the windows desktop, but mark my words young man - without the windows drivers sitting below the visible surface, allowing the linus to talk to the hardware, it is without worth.

And so, by choosing your linux as an alternative to windows on the desktop, you still need a windows licence to run this operating system through the windows drivers to talk to the hardware. Linux is only a code, it cannot perform the low level function.

My point being, young man, that unless you intend to pirate and steal the Windows drivers and services, how is using the linux going to save money ? Well ? It seems that no linux fan can ever provide a straight answer to that question !

May as well just stay legal, run the Windows drivers, and run Office on the desktop instead of the linus.

I'd just like to interject for a moment. What you're referring to as Linux,
is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux.
Linux is not an operating system unto itself, but rather another free component
of a fully functioning GNU system made useful by the GNU corelibs, shell
utilities and vital system components comprising a full OS as defined by POSIX.

Many computer users run a modified version of the GNU system every day,
without realizing it. Through a peculiar turn of events, the version of GNU
which is widely used today is often called "Linux", and many of its users are
not aware that it is basically the GNU system, developed by the GNU Project.

There really is a Linux, and these people are using it, but it is just a
part of the system they use. Linux is the kernel: the program in the system
that allocates the machine's resources to the other programs that you run.
The kernel is an essential part of an operating system, but useless by itself;
it can only function in the context of a complete operating system. Linux is
normally used in combination with the GNU operating system: the whole system
is basically GNU with Linux added, or GNU/Linux. All the so-called "Linux"
distributions are really distributions of GNU/Linux.

You're new.

no my friend YOU are new. the funny thing about this copypasta though is that some of it is true: windows has a lot of rights to software and drivers in the linux kernel, which is why they pay the foundation so much and even have a seat on their committee

Attached: 1541074942824.jpg (1024x1024, 176K)

As big as your HDD allows for.

Attached: 1357282843960.png (637x431, 171K)

This isn't my homework dude chill

You are the new one.

>being this new

import sys

# sys.maxsize stands in for MAXINT; the ints themselves are unbounded.
for i in range(sys.maxsize):
    print(i)