When are 128-bit CPUs coming out, Jow Forums?

Attached: images(9).jpg (678x452, 49K)

Other urls found in this thread:

quora.com/Why-arent-there-128-bit-CPUs
msdn.microsoft.com/en-us/library/windows/desktop/aa366556(v=vs.85).aspx
fujitsu.com/global/about/resources/news/press-releases/2017/0404-01.html
extremetech.com/computing/53982-inside-amds-opteron-processor/2

never
most of your 64-bit ram is filled with zeroes. we should go back to 16 bit and quadruple our program complexity.

>He fell for the 3.16912650E+29GB of RAM meme

>He fell for the 65,536B of RAM meme

The x-bit of a system is basically an indication of the address space the processor can address. Currently even x86-64 processors don't use the entire 64 bits; AFAIK only 48 bits of virtual address space are implemented.
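
To see this on a typical x86-64 Linux box, a minimal sketch (hedged: output varies by OS and allocator, and assumes canonical user-space addresses):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    void *p = malloc(1);
    uintptr_t addr = (uintptr_t)p;

    /* Pointers are 64 bits wide... */
    printf("sizeof(void *) = %zu bits\n", sizeof(void *) * 8);
    /* ...but user-space addresses fit within the low 48 bits */
    printf("heap address   = 0x%016" PRIxPTR "\n", addr);
    printf("bits above 47  = 0x%" PRIxPTR "\n", addr >> 47); /* prints 0 */

    free(p);
    return 0;
}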


Addressing 128 bits is ridiculous and won't happen this century, or most probably ever, for the majority of use cases.

When we start converting entire planets into RAM.

ITT: People that don't realize that 32-bit vs. 64-bit has to do with more than memory addressing.
If it was just memory addressing, phone CPUs wouldn't have jumped to 64 bit.
As one example of an advantage, 64-bit registers actually reduce overhead by allowing arguments to be passed in registers rather than on the stack.
64-bit integers and floats allow computing with larger numbers in a single operation (your processor actually has some 128-bit ops and registers, especially for multimedia).
64-bit processors also generally have more bandwidth between different parts.
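
A minimal sketch of the single-operation point (assuming GCC or Clang on x86-64 Linux; exact codegen is compiler- and ABI-dependent):

#include <stdint.h>

/* On a 64-bit build this compiles to a single 64-bit IMUL, with a and b
 * arriving in registers (RDI/RSI under the System V ABI). A 32-bit build
 * passes both arguments on the stack and needs a multi-instruction
 * sequence of 32-bit multiplies and adds for the same result. */
uint64_t mul64(uint64_t a, uint64_t b) {
    return a * b;
}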

But it'll probably be a while before we see 128-bit CPUs. Transitioning from 8 bit to 16 bit happened fast. It was a bit longer to jump to 32 bit. And that stuck around for a couple decades before we got to 64 bit. Increasing bits sees diminishing returns.

I guess 128-bit between 2025 and 2040. Might happen sooner if some processor company wants to try to get ahead of the game.

>As one example of an advantage, 64-bit registers actually reduce overhead by allowing arguments to be passed in registers rather than on the stack
Depends entirely on the calling convention.
>64-bit integers and floats allow computing with larger numbers in a single operation (your processor actually has some 128-bit ops and registers, especially for multimedia).
Doesn't depend on memory bus width, regular x86 could do that as well.
>64-bit processors also generally have more bandwidth between different parts.
Not really.

>ITT: People that don't realize that 32-bit vs. 64-bit has to do with more than memory addressing

If I weren't educated enough I would say the same thing. However, it's easy to do 64-bit arithmetic and floating-point operations on 32-bit CPUs; in fact it happened and still happens.

the only applications that would benefit from 128 bit would be minecraft and ARK

>If it was just memory addressing, phone CPUs wouldn't have jumped to 64 bit.
It has everything to do with memory space, you retard. In 32-bit systems user space gets 3 GB out of the 4 GB.

> I guess 128-bit between 2025 and 2040
I am 100% sure you don't understand what you're talking about

What program would want to store a max of 3.4028237e+38?

Also to achieve 128-bit transistors would need to be the size of an atom, and quantum mechanics doesn't like that much.

>Also to achieve 128-bit transistors would need to be the size of an atom,


another retard, that's what happens when webdevs talk about computer architecture

based

> Depends entirely on the calling convention.
GCC on 64-bit passes arguments in registers far more often than on 32-bit. Common 32-bit x86 calling conventions pass at most two or three arguments in registers (most pass everything on the stack), while the 64-bit System V ABI passes the first six integer arguments in registers. Registers on 32-bit ABIs are, naturally, restricted to data of 32 bits or fewer -- anything larger requires putting it on the stack. This, overall, means much less usage of the stack and much more usage of registers on 64-bit CPUs.
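
A minimal illustration (sum6 is a made-up name; exact codegen depends on the compiler):

/* Under the x86-64 System V ABI, all six arguments arrive in
 * RDI, RSI, RDX, RCX, R8 and R9 -- no stack traffic at all.
 * Under 32-bit cdecl, all six get pushed onto the stack. */
long sum6(long a, long b, long c, long d, long e, long f) {
    return a + b + c + d + e + f;
}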
> Doesn't depend on memory bus width, regular x86 could do that as well.
It's more than the memory bus width; it's the size of the registers (where memory addressing is constrained by register size), the ALU, the datapath, etc.
For various reasons, memory addressing tends to use the same number of bits as the general-purpose registers. Moving to a 64-bit memory bus generally brings with it a move to 64-bit registers. You could have 64-bit registers with a 32-bit bus width, or vice versa, but it causes asymmetry, performance issues, and/or odd restrictions.
>Not really.
Yes, really. Data has to get from one part of the CPU to another. 32-bit CPUs have 32-bit data paths, 64-bit CPUs have 64-bit data paths.
> However, it's easy to do 64-bit arithmetic and floating-point operations on 32-bit CPUs; in fact it happened and still happens
And it's more efficient to do it on 64-bit CPUs.

You can address more than 4 GB of physical memory on 32-bit CPUs. For example, Physical Address Extension gives you 36-bit physical addresses.

So, yeah...
ITT: People that don't realize that 32-bit vs. 64-bit has to do with more than memory addressing.
Essentially: 64-bit CPUs are optimized to work with data up to 64 bits wide. 32-bit CPUs are optimized to work with data up to 32 bits wide.
Anyone that thinks it's just memory addressing has no fucking idea what they're talking about.

Most processors aren't even 64-bit. There is a difference between x64 amd 64-bit processors. 128-bit is decades away.

There is nothing that prevents a 32-bit CPU from having 64-bit registers and doing native logic/integer operations. In fact, 32-bit CPUs respected the IEEE 754 standard completely and had floating-point units with 64-bit registers for operands and outputs inside a 32-bit architecture.

And currently 64-bit systems natively support SIMD operations on operands much wider than 64 bits.
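
For example, a hedged sketch using AVX2 intrinsics (assumes an AVX2-capable CPU and a compiler flag like -mavx2; the intrinsics come from immintrin.h):

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m256i a = _mm256_set_epi64x(4, 3, 2, 1);    /* four 64-bit lanes */
    __m256i b = _mm256_set_epi64x(40, 30, 20, 10);
    __m256i sum = _mm256_add_epi64(a, b);         /* one 256-bit-wide add */

    long long out[4];
    _mm256_storeu_si256((__m256i *)out, sum);
    printf("%lld %lld %lld %lld\n", out[0], out[1], out[2], out[3]);
    return 0;
}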

The problem is inherently and strongly related to the address space.

And doing wider-than-32-bit processing didn't use the "general purpose" ALUs, but rather provided limited resources for that capability.

Consider that there are 8 registers for SSE in 32-bit CPUs, but 16 in 64-bit CPUs, and that even on 64-bit CPUs with 16 of them, it restricts you to using only 8 of them in 32-bit mode.
There are underlying reasons for that. I'm not going to spend the half hour to explain that in a Jow Forums post. You COULD still technically do that on a 32-bit CPU, but because of various implementation reasons, you typically didn't (you'd be using a not-insignificant amount of die space for that logic that only gets used a minority of the time). But when moving to 64-bit CPUs, you get those underlying capabilities for "free", so filling out the implementation is marginal and thus justified.

>The problem is inherently and strongly related to the address space

Before 64-bit memory addressing caught on, you had a huge mix of 64-bit and 32-bit features on non-general-purpose processors, and the 64-bit features were becoming more and more popular. Scientific computing often made use of 64-bit registers, for example, and a lot of industries were begging for more 64-bit features on mainstream CPUs even after they already had PAE.

If the problem was inherently and strongly related to memory addressing, we'd be using 32-bit CPUs with PAE today, not 64-bit CPUs.

Perhaps testament to this fact is that it was AMD that designed AMD64 to make a more competitive CPU, whereas Intel was just fine and dandy selling x86 CPUs with PAE.

When you kill yourself.

>x64 amd
you couldn't have fucked that up more if you tried

Not anytime soon. We don't even have a use for a 128-bit CPU.

Read this.
quora.com/Why-arent-there-128-bit-CPUs

>You COULD still technically do that on a 32-bit CPU, but because of various implementation reasons, you typically didn't (you'd be using a not-insignificant amount of die space for that logic that only gets used a minority of the time)


I understand, you're talking about, for example, if you want 64-bit addition in a 32-bit ALU: you can divide the operands into two words and add the carry bit to the second half. For mul/div it would be a little harder, and you'd add an FSM to supervise the whole operation.
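
In C for clarity, a minimal sketch of that carry trick, mirroring what a 32-bit ALU does in two steps:

#include <stdint.h>
#include <stdio.h>

uint64_t add64_via_32(uint32_t a_lo, uint32_t a_hi,
                      uint32_t b_lo, uint32_t b_hi) {
    uint32_t lo = a_lo + b_lo;
    uint32_t carry = (lo < a_lo);        /* unsigned wraparound => carry out */
    uint32_t hi = a_hi + b_hi + carry;   /* the add-with-carry step */
    return ((uint64_t)hi << 32) | lo;
}

int main(void) {
    /* 0x00000001FFFFFFFF + 1 = 0x0000000200000000 */
    printf("0x%016llx\n",
           (unsigned long long)add64_via_32(0xFFFFFFFFu, 1u, 1u, 0u));
    return 0;
}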

Guess what: you can have a 64-bit ALU inside a 32-bit architecture very easily. Nothing at the architectural level prevents you from having that, and you can do native 64-bit ALU operations inside a 32-bit CPU.

The only change you need here is loading two 32-bit words sequentially from memory into the 64-bit operand registers, and that's actually what happens currently with AVX on 64-bit architectures: you can just lock the bus and load whatever width you want into registers that are wider than the system bus.


AGAIN, the problem is inherently related to the address space. PAE and other techniques were just hacks to improve things without breaking compatibility, and with PAE every process would still see only a 32-bit space.

Implementation issues

and
is the winner
32-bit CPU: 32-bit registers
64-bit CPU: 64-bit registers
end of story

Attached: 1486162916963.jpg (720x960, 70K)

>I understand, you're talking about, for example, if you want 64-bit addition in a 32-bit ALU: you can divide the operands into two words and add the carry bit to the second half. For mul/div it would be a little harder, and you'd add an FSM to supervise the whole operation.
And other issues, such as data paths, primitive structures, issues regarding modularity, programmer friendliness, etc. Overall, symmetric designs are preferred.

> AGAIN, the problem is inherently related to the address space. PAE and other techniques were just hacks to improve things without breaking compatibility, and with PAE every process would still see only a 32-bit space
And applications could still use more than 4 GB of memory under PAE, through file mappings: msdn.microsoft.com/en-us/library/windows/desktop/aa366556(v=vs.85).aspx
And the limitations of PAE that did exist on an architectural level could have been overcome without switching to an effectively new architecture, and at much reduced cost.
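
A hedged Win32 sketch of the file-mapping trick (error handling omitted; the sizes are made up, and whether a >4 GB pagefile-backed section is allowed depends on the OS edition): create a section bigger than the address space, then slide a small view over it.

#include <windows.h>

int main(void) {
    /* An 8 GB section backed by the paging file; the size is split into
     * high/low DWORDs, so 8 GB is high = 2, low = 0 */
    HANDLE hMap = CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                    PAGE_READWRITE, 2, 0, NULL);

    /* Map one 64 MB window at a time; only the window consumes the
     * process's 32-bit address space */
    DWORD64 offset = 0x100000000ULL;   /* start 4 GB into the section */
    void *view = MapViewOfFile(hMap, FILE_MAP_WRITE,
                               (DWORD)(offset >> 32), (DWORD)offset,
                               64 * 1024 * 1024);

    /* ... use the window, unmap, slide it elsewhere ... */
    UnmapViewOfFile(view);
    CloseHandle(hMap);
    return 0;
}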

Anyway, just look at the marketing material of the time to understand the motivations.

fujitsu.com/global/about/resources/news/press-releases/2017/0404-01.html
An early 64-bit system
Note that nowhere does it discuss the benefits of 64-bit addressing, but it sure as hell stresses performance.
Note that at this time servers weren't necessarily having issues with memory constraints; they were having issues scaling vertically.

extremetech.com/computing/53982-inside-amds-opteron-processor/2
Check out this page, detailing AMD64
>Fred Weber says the x86-64 instruction set provides the code density of CISC with the register usage and internal execution simplicity of RISC.
>All along, AMD has heavily promoted the advantages of 64-bit computing .. its architecture shines while processing large datasets, and also speeds up various 32-bit algorithms with 64-bit ports

Also, look at pic related. N64 used 64-bit processing for performance reasons.

Attached: ultra64.jpg (644x907, 156K)

...

>what is a tyo

Oh, also note, a lot of early 64-bit servers chose to run 32-bit OSes. This completely negates the memory-addressing advantage, but it was done anyway because of the performance benefits.

>And other issues, such as data paths, primitive structures, issues regarding modularity, programmer friendliness, etc. Overall, symmetric designs are preferred.


All the techniques I mentioned are done in the microarchitecture; they're transparent to software and all the details are completely hidden.


> And applications could still use more than 4gb of memory under PAE, through file mappings: msdn.microsoft.com/en-us/library/windows/desktop/aa366556(v=vs.85).aspx

WTF is this? This is another topic.

Moon-sized Intel CPUs when?

> WTF is this? This is another topic
Your argument is that the motivation for 64-bit was enabling access to more than 4 GB of memory.
You don't need 64-bit to get more than 4 GB of memory. PAE allows the OS to use more than 4 GB, and file mappings allow software to request more than 4 GB from the OS.
Seriously, look at my examples, and go ahead and look at more on your own. The desire for increased performance was at least as important as 64-bit memory addressing. 32-bit CPUs and OSes already had ways around the 4 GB memory limit. The early 64-bit CPUs were all about increased performance. Few computer systems at the advent of 64-bit computing were close to approaching the 4 GB limit, and those that were used exotic architectures anyway.
64-bit memory addressing didn't become relevant until it hit mainstream CPUs, because Microsoft had used PAE support as a differentiating factor between consumer and enterprise editions of Windows.

Consider that Windows 2000 Datacenter edition, for example, supported 32GB of RAM despite being a 32-bit OS.

PAE is inefficient, and I'm pretty sure it doesn't work for userspace programs (ARM's LPAE doesn't).

File mapping (I never studied Windows, but I studied the Linux kernel), if I get this right, is another topic. If you have a file that's 1 TB you can easily load small parts of it into the process address space and deal with them normally even on 32-bit systems. This is not a problem at all; of course it will be slower if it's heavily modified, as with DBs or similar applications.

So file mapping will work just the same at the kernel level on a 32-bit system with 4 GB of RAM as on a 64-bit system with 4 GB of RAM. This is not related to the address-space problem at all.

> Consider that Windows 2000 Datacenter edition, for example, supported 32GB of RAM despite being a 32-bit OS

I know. Again, this was a hack like many other hacks that were injected into the shitty x86 architecture to improve things without breaking the whole system, but it is in the end a dirty hack, and every process still sees only a 32-bit space.

It's a

This is why the x32 ABI is superior:
>64-bit calling conventions and ISA features with 32-bit pointers for better cache utilization
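
A hedged sketch of what that buys you (assumes a Linux toolchain with x32 support installed):

#include <stdio.h>

/* Build the same file three ways and compare:
 *   gcc -m64  p.c   -> 8-byte pointers, 64-bit registers
 *   gcc -mx32 p.c   -> 4-byte pointers, but still the full x86-64
 *                      register set and calling convention
 *   gcc -m32  p.c   -> 4-byte pointers, legacy 32-bit ISA
 */
int main(void) {
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    printf("sizeof(long)   = %zu\n", sizeof(long)); /* 4 under x32 */
    return 0;
}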

Never. A 128-bit address space is absurd. But we do need a new general-purpose arch with more than 16 general-purpose registers, wider than 64 bits.

When true 64-bit is out.

kek

rv128 is specified, but is basically a placeholder - nobody's planning to actually use it in the near-term, it's more of a "never say never" thing.

Vector and SIMD primitives give us wider ALUs and more of them without worrying about cache explosion caused by huge addresses.

Never
Quantum cpus will revolutionize the market
We'll start running qubits

I doubt we'll see commercially available 128-bit CPUs in the near future.
The reason is that, quite simply, it's not a priority at this point. In the following years we'll definitely see more parallelism (both in CPUs and GPUs), slightly higher frequencies and more efficient power management, all of which are easier to implement and can still bring noticeable performance improvements.
However, the sad truth is that right now the greatest bottleneck is by far shit software. We are using 10 times the resources we were using 15 years ago to do the same thing, except with slow, "pretty" interfaces.

x64 AND 64-bit.
My fucking bad

A cryptographic program
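
For instance, bignum and crypto libraries constantly do 64x64 -> 128-bit widening multiplies. A hedged sketch using GCC's unsigned __int128 extension (one MUL instruction on x86-64; mul_limbs is a made-up name):

#include <stdint.h>
#include <stdio.h>

/* Multiply two 64-bit limbs into a 128-bit product, the basic building
 * block of big-number arithmetic. A true 128-bit CPU could do this to
 * 128-bit limbs in one operation. */
void mul_limbs(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo) {
    unsigned __int128 p = (unsigned __int128)a * b;
    *hi = (uint64_t)(p >> 64);
    *lo = (uint64_t)p;
}

int main(void) {
    uint64_t hi, lo;
    mul_limbs(UINT64_MAX, UINT64_MAX, &hi, &lo);
    printf("hi=0x%016llx lo=0x%016llx\n",
           (unsigned long long)hi, (unsigned long long)lo);
    return 0;
}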

>using 10 times the resources we were using 15 years ago to do the same thing, except with slow, "pretty" interfaces.

Fucking deep, man... made me think.

Loving this thread
I know x86_64 actually only uses 48 bits of its 64-bit virtual addresses, but I don't know how many bits wide the ALU is. I take it the ALU is 64 bits wide? It wouldn't make sense to me if it was only 48 bits wide.

Also, since x86_64 is a CISC arch, wouldn't it make sense for engineers to make specialized 128-bit extensions for specialized applications instead of designing a new 128-bit arch just for the sake of it? I really don't see a reason to have a 128-bit CPU, since the extra amount of cache it would use would be massive. However, accelerating certain algorithms with 128-bit extensions might actually be worth it.