Why aren't 128-bit computers a thing?

Attached: Bits+and+Bytes.jpg (280x237, 14K)

they are

Why should they be?

But they are.

that's just too many bits. we just can't handle them

because even 64 bits is too many for most tasks

Wtf do you need so many bits for

Bait but I'll bite. Consumer 64-bit PCs only use 48 of those bits for memory addresses. We don't need 128-bit address spaces. For real, you can't afford it anyway.

More power I guess?

Why did GPU manufacturers drop support for indexed colors?

Increased word size means more overhead for many things. If you want to play with a 1-byte value on a 128-bit machine you either have 15 bytes of wasted space in memory or you have slower memory lookups from non-aligned data

128 bit memory addresses also means you need 128 bit pointers, which slows things down and takes up more memory than 64 or 32-bit ones
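A minimal C sketch of the pointer/alignment overhead, assuming a typical LP64 compiler; there are no native 128-bit pointers on mainstream hardware, so the 128-bit case is just simulated with a 16-byte aligned field:

#include <stdio.h>
#include <stdint.h>

/* A node with one byte of payload plus one "pointer". */
struct node32  { uint8_t tag; uint32_t next; };                        /* 32-bit pointer */
struct node64  { uint8_t tag; uint64_t next; };                        /* 64-bit pointer */
struct node128 { uint8_t tag; _Alignas(16) unsigned char next[16]; };  /* pretend 128-bit pointer */

int main(void) {
    /* Alignment padding grows with pointer width, so the same logical
       data takes more memory and cache: typically 8, 16 and 32 bytes. */
    printf("32-bit pointers:  %zu bytes per node\n", sizeof(struct node32));
    printf("64-bit pointers:  %zu bytes per node\n", sizeof(struct node64));
    printf("128-bit pointers: %zu bytes per node\n", sizeof(struct node128));
    return 0;
}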

it's physically impossible to go beyond 68 bits at the CPU level. There's a theoretical barrier named candice-sugon that essentially describes how the interactions between electrons at that level can no longer be guaranteed - resulting in erratic behaviour value loss etc.

what's the point of indexed colors if you can fit every color in 24 bits

>it's physically impossible to go beyond 68 bits at the CPU level
this sounds like a blatant misinterpretation of some fact
perhaps it's physically impossible at a certain die size and clock speed, but obviously not in general
I could build a gigantic 128 bit 1-hertz machine out of telegraph relays or vacuum tubes instead of transistors right now and it would work just fine, aside from being uselessly slow and expensive

delete this it is so wrong

wow i did not know this. thx makes sense i guess

out of all the posts in the thread you react to the only one that's false

Because 64 bits are mostly just there for memory access above 4 GiB. At 64 bits (48, actually) the addressable memory pool is X-box huge and there is no need to extend further for... a few decades.

That being said - your CPU is most likely IEEE 754 capable and has SSE and AVX extensions, so it already has 128- and 256-bit registers.

Using a memory bus wider than 64 bits is problematic, because for each memory channel you need to pull out at least 64 wires on the motherboard, die and other circuitry. At 4 channels that goes to 256 traces + control signals.

Why do you think that "quad-channel" HEDT boards also have humongous sockets? Because "quad channel" is essentially a 256-bit memory bus traced out of the CPU.

With that in mind, the 'bitness' of the CPU is a pointless number at this point. Any AVX-512 CPU is "512 bit", technically.

Attached: 1524800098125.jpg (268x312, 28K)

Your consumer 64-bit CPUs already deal in 256-bits for certain instructions.
Neo-Jow Forums...

uh yeah i can??
i have 4 64 bit pcs so thats like 256 bits right there

why would you need to address more memory than the 256 tebibytes that you can address with 48 bits?

Holy shit you know nothing

the human eye cant see over 64 bit silly

because it makes the CPUs bigger and adds no tangible benefits.
The only reason we jumped to 64-bit at all was so you could store pointers to memory beyond the 4 GiB that a 32-bit pointer can reach.
Most uses for registers larger than 64-bit involve rapid 3D math where you have to perform the same operation on 16 or more values packed together in one 512-bit register
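A rough sketch of what that looks like in practice, using the AVX intrinsics from <immintrin.h> (assumes an x86-64 compiler with AVX enabled, e.g. gcc -mavx; the 512-bit variants look the same with _mm512 names):

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float out[8];

    /* One 256-bit register holds eight packed floats; a single
       instruction adds all eight lanes at once. */
    __m256 va = _mm256_loadu_ps(a);
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vsum = _mm256_add_ps(va, vb);
    _mm256_storeu_ps(out, vsum);

    for (int i = 0; i < 8; i++)
        printf("%g ", out[i]);
    printf("\n");
    return 0;
}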

Itanium

This. AVX and more.

Because it is not only about addressing RAM.

Pointer size != Register size
Fucking brainlet, you're the true nu-Jow Forums

Certain instructions != Memory addressing

Uhh gigabyte is offensive to me. I prefer the term gigglybit, thank you.

t. Atari marketers

layman terms please

one thing, bloat. 64 bits was a lot of bloat, but can you imagine 128 bits?

why aren't 821-bit computers a thing?

>calling people neo-Jow Forums when you can't even read the post you're quoting (or the OP of the thread you're replying to) that specifically refers to memory addressing
He's completely right. 128-bit general purpose registers just don't have much of a point right now, and all those buzzwords and extensions you're reading off the marketing keynote for your latest gamer shit already have their own dedicated wide registers, as such extensions have had since practically the beginning of computing.

do powers of 2 actually matter for word size? I know you'd want it to be an integer number of bytes, but is there anything in principle wrong with, say, 40-bit or 72-bit registers?

no, historically speaking
en.wikipedia.org/wiki/Word_(computer_architecture)

>64 bits? What do you need 32 for?

Because 512-bit computers are a thing? For memory addressing even 64 bits is overkill, x86-64 uses 48, and arithmetic units have used wider operands since, well, forever really.

Attached: 1348710092154.jpg (500x329, 54K)

If we converted our entire galaxy into RAM, 64 bits would still be enough to address it.

It's just easier that way, lots of early small computers were 12- or 18-bit, a good chunk of mainframes were 36-bit systems, there were 48-bit and 60-bit systems as well like pic related.

Attached: CDC_6600.jc.jpg (4098x2853, 2.81M)

For residential use?
That level of memory allocation isn't needed yet

>Those wires
I love messy cables

This
We need 2megabit processors

Diminishing returns.

Actually performance would most likely decrease due to the worsened alignment issues and bigger pointer size.

I'm sure by the time you need to address more than 16 exabytes, CPU arch would have caught up. 128+ bit vector ops have been a thing for decades.

We don't have much use for integers that large. We certainly don't need pointers that large. SIMD extensions are the bread and butter of modern CPUs, and they're basically the only way "128-bit" and "256-bit" processing can be leveraged in a way that actually yields performance benefits.

Negative returns then.

Negative bit computers when?

It exists, it's called ternary.

Attached: binary vs ternary.png (396x346, 8K)

I mean -32bit cpu

there is no need, like there is no need for ipv8

homepage.divms.uiowa.edu/~jones/ternary/
Read from "21-trit words" onward.

The main application for that would just be high-speed computation with low power consumption.

Our current power-of-two convention arises from microprocessors starting out as 8-bit, which is effectively the smallest useful size for a microcomputer, giving you 256 opcodes and letting you easily work with 7-8 bit text like ASCII. That logically scaled to 16/32 by enabling 16-bit ops on pairs of 8-bit registers, like on the 6809 and Z80, and on from there.

Modern CPU arch all came up from the bottom, not down from the top.
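A tiny C sketch of that register-pairing idea; the names h, l and hl are just illustrative, not taken from any real emulator:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t h = 0x12, l = 0x34;               /* two 8-bit "registers" */

    /* Pair them into one 16-bit value, the way the Z80 treats
       H and L as the 16-bit pointer HL. */
    uint16_t hl = (uint16_t)((h << 8) | l);
    printf("HL = 0x%04X\n", (unsigned)hl);    /* 0x1234 */

    /* And split it back apart. */
    printf("H = 0x%02X, L = 0x%02X\n", (unsigned)(hl >> 8), (unsigned)(hl & 0xFF));
    return 0;
}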

There is, signed 32-bit on a 64-bit cpu.

Because longer instructions and wider words wouldn't really bring great advantages.

In the '80s, people thought a more complex instruction set would bring big advantages, so CISC (Complex Instruction Set Computer) was a thing.

Today CISC instruction sets mostly get decoded into RISC-like micro-ops inside the CPU.


Of course 64-bit brings some (minor) advantages over 32-bit. But the trade-off for 128-bit would not be great, because you have to change the whole architecture for this.


Imagine you drive a Ferrari within a city.
You couldn't really go "full throttle", could you?
There are too many bottlenecks like traffic lights.
Of course "faster is better", but you have to take the bigger picture into consideration.

Fuck off, it wouldn't even run 2 tabs in Chrome.

A more practical example of ternary logic is SQL:

CREATE TABLE t ( i Integer );
INSERT INTO t (i)
VALUES (1), (2), (NULL);

SELECT COUNT(*)
FROM t
WHERE i = 1;
-- result: 1

SELECT COUNT(*)
FROM t
WHERE i <> 1;
-- result: 1 (the NULL row is neither equal nor unequal to 1)

SELECT COUNT(*)
FROM t
WHERE i <> 1 OR i IS NULL;
-- result: 2

it was that way long before that though; the 8-bit byte was standardized on 32-bit mainframes, minicomputers were generally using 16-bit words, and early microprocessors had plenty of variation in bit width depending on their market positioning and architectural influences

Powers of two are natural units to use at any level of hardware design, and it certainly bleeds into software as well. The simple reason is that if you're using a set of bits to represent something, the number of possible combinations or states that system can be in is a power of two. Conversely, the number of bits you need to implement some kind of multiplexer is the ceiling of the log_2 of the number of inputs you want to select between.

The particular makeup of addresses and word sizes as powers of 2 is no coincidence either; both make the most effective use of the hardware, and they fit together in an elegant, fine-grained system. For example, a 64-bit register can be split into two 32-bit halves, those into four 16-bit pieces, and so on. If you get far enough into processor implementation you'll see that hardware is a complex nest of breaking apart and recombining words and bit arrays of all sizes, but powers of 2 are as ubiquitous a pattern there as 2.718 or 1.618 are in other kinds of systems.

As far as address space goes, 64 bits is just the smallest power of two that fully covers the space practically everyone would want. And that address space is also split up in various ways, like mapping code to one region and data to another, which is more of a function of the operating system, but it benefits from having such fine-grained hardware underneath.

The whole thing is kind of like a Russian doll, where you can appreciate that powers of 2 are just a natural consequence of (a) 2 being very, very much easier to represent digitally than any other base, and (b) every piece of the system enjoying the same simple indexing pattern.
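A small C sketch of that ceil(log_2) point; bits_needed here is just an illustrative helper, not a standard function:

#include <stdio.h>

/* Smallest number of bits such that 2^bits covers n distinct states,
   i.e. ceil(log2(n)) done with plain integer math. */
static unsigned bits_needed(unsigned long long n) {
    unsigned bits = 0;
    unsigned long long reach = 1;
    while (reach < n) {
        reach <<= 1;
        bits++;
    }
    return bits;
}

int main(void) {
    printf("8-to-1 mux needs %u select lines\n", bits_needed(8));   /* 3 */
    printf("256 opcodes need %u bits\n", bits_needed(256));         /* 8 */
    printf("4 GiB of bytes needs %u address bits\n",
           bits_needed(4ULL * 1024 * 1024 * 1024));                 /* 32 */
    return 0;
}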

>Today CISC gets mostly emulated by RISC.
CISC at the ISA level is still valuable because it increases instruction density. RISC is used at the microarchitectural level only because it's easier to pipeline.

Can't you just install more bits from the internet?

Yes.

I agree OP, this is shameful; 128-bit integers aren't even available in C unless you use compiler extensions with gcc on a 64-bit machine.
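For the curious, a minimal sketch of what that extension looks like (assumes GCC or Clang on a 64-bit target; __int128 isn't standard C and printf has no conversion for it, so the result is printed as two 64-bit halves):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Multiply two 64-bit values without losing the upper half. */
    uint64_t a = 0xFFFFFFFFFFFFFFFFULL;
    uint64_t b = 0xFFFFFFFFFFFFFFFFULL;
    unsigned __int128 product = (unsigned __int128)a * b;

    uint64_t hi = (uint64_t)(product >> 64);
    uint64_t lo = (uint64_t)product;
    printf("high: 0x%016llx\nlow:  0x%016llx\n",
           (unsigned long long)hi, (unsigned long long)lo);
    return 0;
}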

This, more than 90 bits are dangerous for our universe.
Also muh bit bloat.

Ah, I was wondering about this too, thanks for the info

I mean, if you plan on storing a memory address in a register then obviously the register needs to be wide enough. In that sense, they're related, if not necessarily equivalent. The only benefit I can see in 128-bit systems might be for floating-point precision.

8-bit microprocessors like the Z80 would commonly use two 8-bit registers to store 16-bit pointers.

each byte of RAM needs to have a unique address or "location", and the number of possible locations depends on the CPU's "bits"

a 4-bit CPU has locations for 16 bytes of RAM (2x2x2x2)

a 16-bit CPU can have 65,536 bytes (64 KiB) of RAM (2^16)

the reason we jumped from 32 bits to 64 bits is that at 32 bits you can access at most 4,294,967,296 bytes (about 4.3 GB) of RAM

today's 64-bit CPUs can use up to 52 bits for RAM locations, which translates into about 4,500,000 GB of RAM, so we don't really have a reason to upgrade soon, as more "bits" means a completely new CPU architecture
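Quick sanity check of those numbers, just a throwaway C sketch with the widths mentioned above:

#include <stdio.h>

int main(void) {
    /* A flat byte-addressed space with w address bits covers 2^w bytes:
       4 -> 16 B, 16 -> 64 KiB, 32 -> 4 GiB, 48 -> 256 TiB, 52 -> ~4.5 million GB. */
    unsigned widths[] = {4, 16, 32, 48, 52};
    for (int i = 0; i < 5; i++) {
        unsigned long long bytes = 1ULL << widths[i];
        printf("%2u address bits -> %llu addressable bytes\n", widths[i], bytes);
    }
    return 0;
}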

>each byte of RAM needs to have a unique address or "location", and the number of possible locations depends on the CPU's "bits"
That's not actually true.

What's sugon?

>this eyelet can't tell the difference between 65536 shades of gray

I cannae handle that many bits, capt'n

They are? Just look at RISC.

Would just make pointers take up twice the memory and offer no other benefits.
64-bit integers are big enough. If you ever find an application where they are not, then it's time to use a bigint library anyway.

this user gets it

What RISC would that be?

You fucks are wrong: more bits = more power.

Computer Manufacturer, I need your most powerful Computer.

RISC-V?