Intel to support Super PCIe 5.0 while AMD is stuck on 4.0

HAH HAHA AH AHAHA HA AH AH

Attached: 1539007387448.jpg (711x457, 75K)

>tfw stuck on 64GB/s
nooooooo

Wow, so amazing. Intel will finally release a 10nm CPU with PCIe 5.0... in 2021.

Attached: 02.jpg (600x536, 81K)

congrats you can load up your tranny porn quicker.

GOD DAMN IT

My Ryzen 2600 is stuck at PCIe 3 and Intel has had PCIe 4 since 2017 and PCIe 5 since last year?!??!??!??!?!??!?!??!

I FEEL SCAMMED NOOO

Attached: 6446505575_a9d2b2eb3c_z.jpg (427x640, 149K)

Except he can't, because PCI bandwidth isn't the bottleneck to loading tranny porn, just like it isn't for anything else.

are we already getting all the speed we can out of 3.0?

does it happen in practice?

What exactly is saturating PCIe 4? It'll probably be a while before 5 matters. My motherboard has 3.0. Do we even have anything that can use up all of that?

techpowerup.com/reviews/NVIDIA/GeForce_RTX_2080_Ti_PCI-Express_Scaling/6.html

A brand new RTX 2080 Ti only loses 2-3% of its performance when going from 3.0 x16 to x8.
A full 16 lanes for a GPU won't be a bottleneck for a very long time. This will alleviate the issue of limited lanes on mainstream platforms though. With upcoming PCIe 4.0 processors on AMD's side you can run a top-tier GPU at x8 and get full performance, leaving you that many more lanes free. x4 NVMe drives only require the equivalent bandwidth of two 4.0 lanes.
It'll let you run more high-end NVMe SSDs alongside a top-tier GPU without needing to use any of the lanes going through the mobo chipset.
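Back-of-the-envelope for the lane math, if anyone wants to check it (Python sketch; the per-lane figures are the usual approximations after 128b/130b encoding, not exact spec numbers):

# Approximate usable bandwidth per lane, per direction, in GB/s,
# after 128b/130b encoding overhead.
PER_LANE = {"3.0": 0.985, "4.0": 1.969}

def link_bw(gen, lanes):
    """One-way bandwidth of a PCIe link in GB/s."""
    return PER_LANE[gen] * lanes

# A GPU at 4.0 x8 matches 3.0 x16, and a 3.0 x4 NVMe drive fits in two 4.0 lanes.
print(link_bw("3.0", 16), link_bw("4.0", 8))   # ~15.75 vs ~15.75
print(link_bw("3.0", 4), link_bw("4.0", 2))    # ~3.94 vs ~3.94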

No 5.0 compliant hardware will exist for years yet. OP is just shitposting.

Attached: lanes.png (1365x2001, 553K)

The actual limit you'll run into pretty quickly these days is the number of lanes. A 2080 Ti is the first GPU to show a performance decrease when it's running at PCIe 3 x8. That leaves a lot of bandwidth in a x16 slot.

Biggest annoyance with PCIe is that an AM4 motherboard will typically have ONE usable PCIe3 x16 slot. Put something in the second and the two actual PCIe3 slots run at x8. The third "x16" slot is in reality a PCIe2 x4 slot which shares bandwidth with the x1 slots, which are also PCIe2. So... if you want a GPU and an HDMI capture card and a 10 Gigabit network card, you can't. But you can buy the Threadripper and get ze lanes.

If I could choose between a board with one PCIe4 slot and a board with three PCIe3 x16 slots actually capable of running at PCIe3 x16 speeds, then I'd take the latter.

Attached: IMG_0242.jpg (800x1200, 578K)

Next intel arch is 4.0. liar faggot.

Amazing, too bad their cpu design fucking sucks right?

MFW still on old pcie 3.0

Cannon lake is 4.0.

Attached: 1502790639694.jpg (882x758, 324K)

Bulldozer was stuck on pcie 2.0, but ryzen isn't the modern bulldozer at all.

It probably will be soon with NVMe SSDs.

The M.2 form factor is limited to 4 lanes with a maximum of 3938.4MB/s on PCIe 3.0.

The latest Samsung can reach 3600MB/s for sequential reads I think.
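That 3938.4MB/s number is just the line rate: 8 GT/s per lane, 128b/130b encoding, four lanes. Quick check (link/transaction layer overhead not included, which is why real drives top out a bit lower):

# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding, M.2 capped at x4.
gt_per_s = 8e9
payload_bits_per_lane = gt_per_s * 128 / 130
lanes = 4
mb_per_s = payload_bits_per_lane * lanes / 8 / 1e6
print(mb_per_s)   # ~3938.46 MB/s, matching the quoted cap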

OP is a faggot regardless. Intel supports the least amount of PCIe lanes on its CPU and uses a chipset to support the rest on HEDT.

>Do we even have anything that can use up all of that?
every nvme ssd can max it out

We haven't even maxed out 3 yet.

Making lanes faster is easier than adding lanes.
Let's say it takes 4 lanes of PCIe 3.0 to not bottleneck an NVMe drive. With 4.0 you only need 2 lanes to do the same job, so you can add another NVMe drive without increasing the pin count on the CPU side. Go with PCIe 5.0 and now you can have 4 drives running at full speed.
PCIe 4.0/5.0 was never really about the need for faster speeds for GPUs; it's more for enterprise to shove more devices onto a single CPU without losing speed.
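Same argument in numbers (rough Python sketch; the drive needing ~3.5GB/s is just an assumed example, and the per-lane figures simply double each revision):

import math

LANE_GBPS = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}   # per lane, one direction

def lanes_needed(drive_gbps, gen):
    """Lanes needed to not bottleneck a drive, rounded up to a real link width."""
    n = math.ceil(drive_gbps / LANE_GBPS[gen])
    return 1 << (n - 1).bit_length()   # links come in x1/x2/x4/x8/x16

for gen in LANE_GBPS:
    print(gen, lanes_needed(3.5, gen))   # 3.0 -> 4, 4.0 -> 2, 5.0 -> 1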

AMD can just change the I/O die on the CPU without needing to modify the chiplet.

Intel is fucked beyond recognition because of this.

Attached: 1528705187294.png (900x980, 104K)

Currently no technology found in desktops can max out pci-e 3.0

an NVMe drive can max out a pcie 3.0 1x slot

Mellanox ConnectX-6 EN 200Gb/s NIC requires either PCIe 3.0 x32 or PCIe 4.0 x16.

So you buy an M.2 NVMe SSD and then use a converter to fit it in a x1 slot...........

If you had a faster PCIe revision you could get away with something like that.
You're only going to get so many lanes on a desktop platform, so why blow 4 lanes on a single drive when with PCIe 4.0 or 5.0 you can get away with one lane and not bottleneck?

>Ryznen

PCIE 5.0 is a meme standard. It will require a huge amount of analog technology that can't be shrunk on any node.

PCIE 3.0 was successful because it was only a minor upgrade from 2.0 and in general good enough up until now.
PCIE 4.0 will already be a stretch over longer traces.

>PCIE 3.0 was successful because it was only a minor upgrade from 2.0
Every PCIe revision doubles the bandwidth compared to the previous. PCIe 3.0 was as much of an upgrade coming from 2.0 as 4.0 is from 3.0. Same with 5.0 coming from 4.0.

More like, on newer PCIe versions you need to supply less PCIe lanes to each m.2 slot, making more lanes available for more m.2 slots or other interfaces.

makes sense, the next intel family will be released along with zen 3+, which will probably support pcie 5.0

That is not entirely true. PCIe 3.0 only raised the signaling rate from 5 GT/s to 8 GT/s and relied on a denser encoding for the rest of the doubling. That requires less analog technology than actually doubling the signaling rate.
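Per-lane numbers with the encoding included, going by the published specs (2.0 is 5 GT/s at 8b/10b, 3.0 is 8 GT/s at 128b/130b, and 4.0/5.0 just double the signaling rate):

# Per-lane throughput, one direction, in GB/s.
def lane_gbps(gt_per_s, payload_bits, total_bits):
    return gt_per_s * payload_bits / total_bits / 8   # bits -> bytes

print(lane_gbps(5, 8, 10))       # PCIe 2.0: 0.5 GB/s
print(lane_gbps(8, 128, 130))    # PCIe 3.0: ~0.985 GB/s (signaling only went up 1.6x)
print(lane_gbps(16, 128, 130))   # PCIe 4.0: ~1.969 GB/s (signaling rate actually doubles)
print(lane_gbps(32, 128, 130))   # PCIe 5.0: ~3.938 GB/s (and doubles again)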

>Biggest annoyance with PCIe is that a AM4 motherboard will typically have ONE usable PCIe3 x16 slot.
Only if you're talking about a cheap $60 mobo.
Just put another $40 into it and it's solved for your case. It would still be cheaper than Intel.
But I don't see why you would need x16 for HDMI capture or 10Gbit, since x1 should suit both well. With maybe the exception of HDMI, but only if you're capturing it RAW/lossless (for which x4 is enough). Well, I'm talking about PCIe 4 since that's what they're going to release now. Still x2/x8, no reason to have three x16 in your case.

It doesn't matter anyway since AMD doesn't have thunderbolt. I can only see this being useful for thunderbolt hardware that requires a ton of bandwidth like eGPUs

Well, eventually, once Intel has their chiplet plans done, they can too.
Not anytime soon though.

Intel are now a solid 3-4 YEARS behind AMD.

AMD is going to beat Intel to PCIe 4.0 in the server market (where it matters most) by at least a full year, even if it's just 100+ GbE controllers being used initially.

AMD can release a PCIe 5.0 IO die whenever they want without being constrained to compute chiplet release cycles, so it's also unlikely that Intel will beat them there either.

The bigger issue is that PCIe 4.0 only goes ~7"-8" over PCB without repeater chips (read as: expensive as fuck), and PCIe 5.0 is going to be even more pathetically limited.

good news for mini ITX enthusiasts

>The bigger issue is that PCIe 4.0 only goes ~7"-8" over PCB without repeater chips (read as: expensive as fuck), and PCIe 5.0 is going to be even more pathetically limited.

Can't they place the CPU right next to the slots like on old 486 and 386 boards?

You can end up saturating it if you have a low lane count and lots of PCIe/M.2 stuff plugged in and doing stuff non-stop. If you only have a GPU, 3.0 is good enough.

Wow! Can't wait for none of my PCIE5 devices to work properly because every brand new device uses a slightly different style of polling and the implementation that intel uses is even more slightly different than them!

at 2030

>intel to only have 1 PCIe 5.0 slot, 1 4.0, and the rest 3.0 because distance
>all vertical gpus will be 3.0 because distance

Deal with it miner.

While keeping clearance for a CPU cooler? Good luck with that.

They could use a solution like in laptops.

they will sell delidded cpus to minimize the distance

what's the point of Super PCIe 5.0 if your cpus are crap

Heat transfer gets worse the longer the heatpipes get, and desktop CPUs dissipate way more heat. Watercooling could be a solution, but it would drive up costs and there's always the risk of pump/radiator failure/leakage/whatever defect some shitty companies can come up with.

The only reason CPU coolers don't already shift the pipes off to a diagonal is that they want to avoid lever action on the PCB, but boards have already started shipping with reinforced PCIe slots, so it's an easy solution as soon as someone decides to put weight behind it.

laptop tier stuff is laptop tier for a reason.

They can easily dissipate 35-65W. The XPS 15 fiasco/fix proved that for many laptops the only thing holding them back is the poor build quality, because changing out the TIM and applying it properly stops all throttling and efficiently cools the chips within tjunc max

>implying anyone gives a shit about a standard that cant be a bottleneck because barely any current tech can max out a PCIE slot

>watercooling
or
>offset heatsink
or
>pcie extender cables

After accounting for bus and peripheral protocol overhead, a single 100 GbE interface pretty much saturates a PCIe 3.0 x16 connection. Enterprise users wanting 2*100 GbE, 200 GbE, and even 400 GbE is far and away the biggest reason for the speed increases. Enterprise NVMe controllers and devices were getting close to PCIe 3.0 x4 limits, but that was a pretty distant second concern.

Consumers with m.2 NVMe drives running QD1-QD2 loads and wanting 2% higher framerates on their $800 gaymen GPUs are not the target audience here.
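For anyone checking the 100 GbE claim, the margin really is thin once protocol overhead is counted (rough sketch; the ~15% TLP/DLLP overhead is an assumed ballpark, not a spec value):

# 100 GbE payload vs. usable PCIe 3.0 x16 bandwidth, one direction.
nic_gbps = 100e9 / 8 / 1e9                    # 12.5 GB/s of Ethernet traffic
link_raw = 8e9 * 128 / 130 * 16 / 8 / 1e9     # ~15.75 GB/s raw on 3.0 x16
overhead = 0.15                               # assumed TLP/DLLP/header overhead
print(nic_gbps, link_raw * (1 - overhead))    # 12.5 vs ~13.4, not much headroom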

yeah, i totally need that.
- Sent from my $100,000 server

> pcie extender cables as a workaround for pcie signal length constraints

Attached: confused basketball american.jpg (800x450, 18K)

>>pcie extender cables
Won't that cause the same issues?

Attached: 1542027477262.png (807x745, 205K)

> speeds given as sums of both duplex directions

this kikery will never not annoy me.

are you saying one lane is only 500MB/s in one direction?

wait for GenZ

What is the point?

the point is bigger is better

noooo bros! it was our turn .........

Attached: ayy.png (649x255, 120K)

The chart claims "32 GB/s" bandwidth for PCIe 3.x x16, which is the sum of the ~16 GB/s flowing in each direction.
Nobody tries to tell you that you have 2-gigabit Ethernet ports on the back of your PC.
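i.e. the per-direction math those charts are doubling (3.0 spec numbers):

# PCIe 3.0 x16: ~15.75 GB/s per direction; marketing adds both directions together.
per_direction = 8e9 * 128 / 130 * 16 / 8 / 1e9
print(per_direction)       # ~15.75 GB/s each way
print(per_direction * 2)   # ~31.5 GB/s, i.e. the chart's rounded "32 GB/s"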

10nm wont be delayed again

Attached: FAGGOT.jpg (679x758, 54K)

Just imagine the kinda cpu bottlenecks you could achieve with pcie 5

PCIe 5 already sounds like vaporware. Like those CD drives in the 2000s that claimed 10,000 RPM but everyone stuck with standard speeds.

Intel is only implementing PCIe 5.0 on their HPC platforms (only for backboning) while their consumer-tier and enterprise stuff will implement PCIe 4.0.

PCIe 5.0 will likely not be on enterprise/consumer-tier platforms until the late 2020s.

Just buy new motherboard goys. and buy new one after DDR5 come out

Intelfag so dumb

no please...

Attached: 0000000002.jpg (638x599, 123K)

>late 2020s
>implying they'll still be in business

Attached: 1506678304349s.jpg (124x101, 2K)

Cool let me know when Intel actually has 10nm, see you in 2020

>Intel is only implementing PCIe 5.0 on their HPC platforms (only for backboning)
What have they actually promised? It is absolutely not in Cascade Lake / CLAP for all of 2019, and the credibly near-term parts of their tech roadmap are in shambles.

Are they promising it for Cooper Lake? (which will "launch" sometime between 11:45 and midnight next New Year's Eve and be available to customers with names not rhyming with Jewgle or YidCrook maybe late Spring next year) That seems rather optimistic for a spec just now reaching ratification.

don't nvme ssds only use x4 connections