So which possibility do you think might come true, anons?

Attached: naples_to_rome_mpc-potential-ideas.png (1020x1587, 199K)

Other urls found in this thread:

servethehome.com/hardware-behind-the-amd-epyc-and-xilinx-alveo-boxx/
computermachines.org/joe/publications/pdfs/hpca2017_exascale_apu.pdf

The top one.

>one GPU chiplet and 48 Zen cores
Literally retarded. Now imagine 16 Zen cores and THREE GPU chiplets with 1070-like performance.

And you're feeding the GPU with what exactly?

>putting DDR trash into the CPU
Retard.

>So which possibility do you think might come true, anons?
Intel 10nm monolithic die BTFOing everything

Attached: 1542512216982.jpg (250x246, 9K)

>FPGA
That would be interesting...
servethehome.com/hardware-behind-the-amd-epyc-and-xilinx-alveo-boxx/

>450mm2 die standing a chance against 1200mm2 of AMD silicon

Lmao

The first is already a reality. The third might become a reality in the future for hyperscalers, but not anytime soon, since AMD doesn't need to release anything more than the first one.

The third one is prime semi-custom design material though, which AMD will gladly do if you fork over the cash.

First is already essentially confirmed

Second is gonna be way too bandwidth-starved for the GPU. Probably makes no sense from a power/thermal budget standpoint either (~50W budget for the GPU?).
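
Quick back-of-the-envelope on the feeding problem. Everything below is assumed numbers (DDR4-3200 dual channel for the package, the GTX 1070's public GDDR5 config for comparison), not anything from the slide:

[code]
# Sketch: shared DDR4 on the package vs. a discrete 1070's dedicated GDDR5.
# All figures are assumptions: DDR4-3200 dual channel, 1070 = 8 Gbps GDDR5 on a 256-bit bus.

def peak_bw_gbs(transfers_per_s, bus_bits):
    """Peak bandwidth in GB/s for one memory interface."""
    return transfers_per_s * (bus_bits / 8) / 1e9

ddr4_dual = 2 * peak_bw_gbs(3200e6, 64)   # ~51.2 GB/s, and the CPU cores share it
gtx1070   = peak_bw_gbs(8000e6, 256)      # ~256 GB/s, dedicated to the GPU

print(f"DDR4-3200 dual channel: {ddr4_dual:.1f} GB/s (shared)")
print(f"GTX 1070 GDDR5:         {gtx1070:.1f} GB/s (dedicated)")
print(f"~{gtx1070 / ddr4_dual:.0f}x less bandwidth before the CPU even takes its cut")
[/code]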

Third could maybe happen if AMD cooperates with Xilinx (Altera clearly no longer an option), but:
- there has been no public confirmation about GenZ/CCIX.
- Xilinx seems to want to be the SoC themselves with Everest/ACAP/whatever.
- it would need a one-off substrate design (if not special mobos) to be actually able to give the FPGA SERDES/PHYs decent access to IO, since piggybacking off the IO die for everything is not going to be what every customer wants.

Skylake and presumably Cascade Lake XCC dies are like 700mm^2, anon. There is nothing stopping them from aiming for the Moon with Ice Lake other than getting infinitesimal yields.

>Cascade Lake XCC dies are like 700mm^2
and their yields are absolute PISS on 14nm++++++++++++++++++++++++. Their yields will be even worse on 10nm.

Ice Lake is already leaked to be 48 cores; that can't be over 500mm2 tops. Their 10nm yields are horrendous even for puny-sized dies.

Cascade Lake AP implies that they are able to at least get 24/28 working dies somewhat consistently. But yeah the fact that they can't even sell 2c laptop chips with the iGPU working on 10nm ain't a great sign.

Attached: CLAP, please.jpg (1024x719, 72K)

>System on a chip

Soon you'll just have to buy a CPU adapter which you bolt into the case instead of a mainboard.

I went and did the math on that. Attempting to make 700mm^2 silicon on the failed 10nm node, with the estimated defect density the dual-core parts were getting, spits out an estimated yield of 0.13%.

In other words, basically impossible to manufacture.
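
For anyone who wants to poke at the numbers: a minimal sketch of the standard Poisson yield model, Y = exp(-D*A). The defect density is my guess, backed out so a 700mm^2 die lands near that 0.13%; it is not a published Intel figure.

[code]
import math

def poisson_yield(area_mm2, defects_per_cm2):
    """Fraction of dice with zero defects under a Poisson defect model."""
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

D = 0.95  # assumed defects/cm^2 for the broken 10nm process (guess, not Intel data)
for area in (100, 450, 700):
    print(f"{area} mm^2 -> {poisson_yield(area, D) * 100:.2f}% perfect dice")
# 100 mm^2 -> ~38.7%, 450 mm^2 -> ~1.4%, 700 mm^2 -> ~0.13%
[/code]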

That's for a perfect chip though, and nothing's stopping them from selling gobs of chips with 60-70% of the cores working. 32/48 would still be more than what Cascade Lake XCC can pull off, even if it would be wasteful.
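
Same toy model extended to salvage: treating each core as an independent chunk of area, the share of dice with at least 32 of 48 cores clean stays huge even when perfect dice are basically nonexistent. The per-core probability below is an assumption, not a measured number:

[code]
import math

def at_least_k_good(n_cores, k, p_core_good):
    """P(at least k of n cores are defect-free), cores treated as independent."""
    return sum(math.comb(n_cores, i) * p_core_good**i * (1 - p_core_good)**(n_cores - i)
               for i in range(k, n_cores + 1))

p = 0.88  # assumed chance a single core (plus its slice of uncore) comes out clean
print(f"Perfect 48-core dice:      {p**48 * 100:.2f}%")                       # ~0.2%
print(f"Dice with >=32 good cores: {at_least_k_good(48, 32, p) * 100:.1f}%")  # ~100%
[/code]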

That would end up exactly like GF100 did: yields so horrific that having a chip that DIDN'T have a major flaw was a fucking unicorn, and entire wafers would come off the line with nothing usable on them. Note that each wafer runs about $60-70K apiece. That's a shitton of money to waste on a wafer with no good dies on it.

>since piggybacking off the IO die for everything is not going to be what every customer wants.
Custom I/O die.

>each wafer runs about $60-70K apiece
What are you smoking? Unless you are talking about opportunity cost in terms of perfect yields of Xeons on a less ambitious die, you are off by over an order of magnitude, and that's at 3rd-party foundry prices, not in-house.

I don't want chiplets on ryzen 3.

Possible, but masks for a 400-500 mm^2 die are not exactly cheap to make, and even that's cheap compared to the design and validation work.

The GPU one might be neat to see, but the problem with it is: what is AMD going to do about video out? The package doesn't have any open pins for that kind of shit.

It's just a top-level block diagram highlighting the flexibility of the chiplet design. There would most certainly be display outputs leading out of the GPU, but that's not important here.

Aren't 1 & 2 just configurations of Zen 2? The 1st being the Epyc and non-APU Ryzen parts, and the 2nd being the 3XXXG parts?

not big enough

Attached: file.png (1041x1069, 117K)

...In 2022

APUs already exist and will continue to. We might see FPGAs added to server chips.

I need a movie streaming site for free to watch a movie without all the debit stuff

Memory bandwidth starved much?

there I fixed it

Attached: file.png (1044x1070, 153K)

Bottom case is the best-case scenario for users.
Middle is the best-case scenario for... somebody. Don't know who would want a gimped GPU coprocessor besides laptopfags.
Top is the best case for anyone who actually needs their CPU to do shit.

Nothing, other than the process being complete dogshit and slower than 14nm.

Hyperscalers can bankroll pretty much anything, including cocaine parties for execs, as long as they get winning TCO numbers.

> cpu to displayport adapter

This is what I'd love to see on an updated X399 platform: Vega 20/Navi with 2 stacks of HBM2 (preferably full 8GB stacks), freeing up all 64 PCIe 4.0 lanes for storage, other GPUs, etc.

Attached: Real Shit.png (1020x492, 43K)
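
Rough math on what "all 64 lanes for storage" actually buys, assuming PCIe 4.0's 16 GT/s per lane with 128b/130b encoding and ignoring protocol overhead:

[code]
# ~1.97 GB/s usable per PCIe 4.0 lane (16 GT/s, 128b/130b encoding), overhead ignored.
gbs_per_lane = 16e9 * (128 / 130) / 8 / 1e9

for lanes in (4, 16, 64):
    print(f"x{lanes}: ~{lanes * gbs_per_lane:.0f} GB/s")
# x4: ~8 GB/s per NVMe drive, x64: ~126 GB/s total, i.e. 16 unswitched NVMe x4 drives
[/code]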

Scratch that, I'd love to see HBM3 near-memory (L4) on Ryzen, Threadripper, and Epyc. Having a huge faster-than-DDR cache would massively improve latency-sensitive workloads.
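
Toy AMAT (average memory access time) sketch for the "HBM as L4" idea. Every number here is a placeholder assumption, and HBM's latency advantage is itself argued about later in the thread:

[code]
def amat_ns(l4_hit_rate, l4_latency_ns, dram_latency_ns):
    """Average latency for requests that miss the on-die caches."""
    return l4_hit_rate * l4_latency_ns + (1 - l4_hit_rate) * dram_latency_ns

baseline = amat_ns(0.0, 0.0, 90.0)    # no L4: everything goes out to DDR4 (assumed 90 ns)
with_l4  = amat_ns(0.75, 60.0, 90.0)  # assumed 75% hit rate in an HBM L4 at 60 ns
print(f"No L4:  {baseline:.0f} ns average")
print(f"HBM L4: {with_l4:.0f} ns average ({(1 - with_l4 / baseline) * 100:.0f}% lower)")
[/code]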

I presume that's what they're working towards. I could at least see a future Epyc revision adopting this, as it would accelerate database workloads significantly, which is one of the few workloads where AMD currently loses out to Intel on the server side.

It sure is interesting, but packing a lot of shit onto a HUGE socket might be a really expensive way of doing it. Chiplets are the way to go, and they have to make it into AMD's 7nm GPUs within a generation or 2.

HBM is too high latency to serve as L4.

>Using a retard's guess instead of the real paper by AMD engineers.
computermachines.org/joe/publications/pdfs/hpca2017_exascale_apu.pdf

Source? All the data I've been able to scrape up on HBM specs says that HBM1 was already lower latency than DDR4 by a large margin (while also having absurdly greater bandwidth), and HBM2 only improves on that.

superior design

Attached: amd zeenor.png (1694x542, 62K)