PCIE 14 years later still shit

>2004
>boards with PCIE appear
>2005
>boards with 2 PCIe x16 slots appear, operate at x8 when both are populated
>2006
>board manufacturers make boards that support x16 when both slots are populated
>2018
>board manufacturers go back to 2004

Why? Why do we still have inferior technology?

Attached: agp_pci_pcie_pciex1_bus.jpg (600x364, 79K)

Other urls found in this thread:

supermicro.com/Aplus/motherboard/EPYC7000/H11SSL-NC.cfm
toshiba.semicon-storage.com/us/product/storage-products/enterprise-hdd/mg07acaxxx.html
seagate.com/www-content/datasheets/pdfs/barracuda-pro-12-tbDS1901-7-1707US-en_US.pdf
hgst.com/products/hard-drives/ultrastar-he12

Gen-Z or nothing.

nice trips

niggers

It's not board manufacturers, Intel decided that 16 PCIe lanes are good enough for consumers, and if you need more you buy a Xeon. Board manufacturers have the choice to split them between slots as they see fit, x8 x8 being the most logical choice for two slots.

Blame the CPU mfgrs, they sell last year's shit with a new sticker continuously.
Props to AMD, with their TR4 and SP3 sockets, though.

Well one reason was patent fuckery with those PLX switch chips that you used to see on multi-GPU boards, they went from costing $20 to over $100 and stayed there. Other was AMD and Intel loading PCI-E lanes on-die like they did with memory controllers. There's a lot more pressure to push down heat and die area on the CPU than there was on the northbridge, so they a.) dumped lanes, and b.) used lots of lanes as an upsell/market segmentation tactic for server CPUs.

>Intel decided that 16 PCIe lanes are good enough for consumers
Yep, that's part of what fucked the rollout of NVME.

Does it even matter? Are there any benchmarks that suggest you need a full 16x lane for GPUs?

OP here. Here's my use case:
>PCIE x16 3.0 for main GPU
>PCIE x16 3.0 for RAID controller
>PCIE x4 2.0 for VM GPU
The reason why I need a dedicated RAID controller is too long of a story and it deserves its own thread.

Yeah, nah... I didn't want to elaborate, but fine.

I bought 4 HDDs. Gonna upgrade to SSDs in 10 years when they don't cost like they're made for pink-haired liberals with daddy's war chest of colonization gold. Wanted to use them to make a RAID10. After all, the controller on the mobo supports it, right? No, not really. It's a software RAID controller that only populates the selected drives with metadata that is decipherable by dmraid. The actual work of making the RAID run is done by the CPU through a "RAID" driver. That's at least the happy sunshine story on Windows.

On other systems, you're hard-pressed to find support for this pretend-RAID hack. AMD commissioned some clown company to write a proprietary driver for their 9xx chipsets, but they only released binaries for RHEL6. For X370 and adjacent models, the driver has binaries for RHEL7 and Ubuntu 16.04, with sources published under another goofball proprietary license, but they don't compile against kernel 4.14. I need at least that kernel version for my GPU to work.
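
For what it's worth, you can poke at that fakeraid metadata with stock dmraid and no vendor blob at all. A sketch (run as root, and check what your distro packages first):
dmraid -r    # list the member disks and the metadata format found on them
dmraid -s    # show the discovered RAID sets and their status
dmraid -ay   # activate the sets through device-mapper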

GPU isn't the only thing eating your PCIe lanes.
NVME takes x4 per device, a good dedicated USB 3.1 controller takes x8, a good SAS III controller takes x8, a multi port 10Gb NIC takes x8.

1 GPU (x16) + 2 NVME (x4 each = x8) + USB HBA (x8) + SAS HBA (x8) + NIC (x8) = 48 lanes.
So let's just say 64 lanes is the minimum requirement for a workstation tier machine.

Why not just run it in AHCI, and let the OS handle the array?
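
For reference, the plain md route is about three commands. The device names and config path below are placeholders, not a recipe for your exact box:
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]   # build the RAID10 from the four disks
mkfs.ext4 /dev/md0                                                 # or whatever filesystem you prefer
mdadm --detail --scan >> /etc/mdadm.conf                           # record it so it assembles on boot (path varies by distro)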

I'd like to, but I'm already short on CPU performance. I had to downclock my 8350 to 2GHz just so it wouldn't overheat under constant workload. I want to upgrade to a Ryzen 2700, but I'd still like to avoid taxing my CPU with handling the RAID.

>So let's just say 64 lanes is the minimum requirement for a workstation tier machine.
You aren't likely to max out all those lanes, but if you were you would have to go for a threadripper or expensive xeon.

>my 8350
Ah, I see.

Yep, a full on RAID controller is your only real option.

>hardware RAID

Do you see a problem?

>RAM still shit
>CPU still not at 10GHz
>HDD still not over 12TB
>GPU still slow and useless
>Internet still 1GBit/s
why do I live in that time...

Should mdadm show up in htop for CPU load, or how do I check how much processing power it takes?
Because I run an 8x8TB RAID6 on a dual-core APU used as an HTPC playing 1080p with no performance issues.

Yeah, you could avoid all the stupid fuckery that this user went through while not surrendering any performance of significance. If you have hardware problems you can drop the drives into another machine and read them. You're guaranteed to have monitoring of the array, since the OS is doing it, and it's as easy to monitor as anything else in your system. As opposed to getting some proprietary firmware monitoring thing, where you have a dim idea of what it does and need a bunch of glue to be notified if there's a problem. That's if you get any monitoring at all; cheap cards don't bother. Does it do any scrubbing? How does it react in the face of errors?

"Oh, yes", you say, "but my expensive PROPER REAL GROWN-UP hardware RAID card solves all these problems!" One, I'm skeptical. Two, if you dump the thing you'll still simplify your life and save a thousand bucks.

Hardware RAID has been deprecated for ten years, and it's only ever used anymore because of voodoo and cargo-cultism.
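
To make the monitoring point concrete, this is roughly all md needs (mail address and md0 are placeholders):
mdadm --monitor --scan --daemonise --mail=you@example.com   # background daemon that mails you on degraded/failed events
echo check > /sys/block/md0/md/sync_action                  # kick off a scrub of the array
cat /sys/block/md0/md/mismatch_cnt                          # inspect the result once the check finishes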

No need to fret over GHz. My 8350 can handle heavy loads even under 1GHz. But still, package compilation takes forever.
Yes, it should be there. Problem is, I don't run an HTPC. Here are my workloads:
>Firefox, usually on Facebook and playing Netflix
>Telegram and Discord in bg
>there's a game playing, Civ5 or CSGO
>Jenkins server runs on this machine, tending to package compilation for my own distro
Both of these posts are from me. So you're saying I should have another host take on the IO load, and rely on the ethernet pipe to access this data? What if the RAID is faster than the ethernet?
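
Easy enough to check against the ~118 MB/s you actually get out of 1GbE. A rough sequential read test, assuming the array lives at /dev/md0 and fio is installed:
hdparm -t /dev/md0   # crude timed sequential read
fio --name=seqread --filename=/dev/md0 --rw=read --bs=1M --direct=1 --runtime=30 --time_based   # less crude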

Hehe, boards that had both AGP 8x and PCI-E x16 were rare. Boards that had both buses running at full speed (aka not gimped) were rarer still. Boards that allowed AGP/PCI-E and PCI cards to be used all at once (aka surround display view)? There was only one, and I've got it: the ASRock 939Dual-SATA2. Full-speed AGP 8x, full-speed PCI-E x16, and a PCI card, all usable in one go for a nice 9-display output (regular desktop usage, not gaming).

You should just use software RAID. When you have a bunch of cores, the overhead of it vanishes down into the noise. Especially since the disk subsystem takes milliseconds to respond, any work the CPU is doing for RAID calculations happens for free, during the time the CPU would otherwise just waste waiting on the disk subsystem.
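
If you want a number for how cheap the parity math actually is, the kernel benchmarks its own implementations when the raid modules load (these lines only appear if raid456/xor have been loaded, and the exact wording varies by kernel):
dmesg | grep -iE 'raid6:|xor:'   # prints lines like "raid6: ... gen() N MB/s" -- far faster than any spinning disk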

Wasn't AMD coming out with some 40-lane shit in consumer chips? Anyway, from this thread I take it Intel just released something with only 16 lanes. They've really shot themselves in the fucking foot with this one: now that every SSD is either on-motherboard M.2 or PCI-E anyway, you've already lost the ability to run at x16.
Throw in Thunderbolt, four more lanes per port, but at this point I don't think anyone other than Apple gives a shit about Thunderbolt, not even Intel themselves. Apple'll be pissed if they can't get their hands on chips for their MacBooks that can actually handle making every USB-C port Thunderbolt capable.

Fine... I accept defeat.

Yes, the TR4 socket runs up to 64 lanes and quad-channel RAM, if I recall.
And the SP3 socket runs 128 lanes and octa-channel RAM.

Especially if you have a fakeraid card that offloads to the CPU anyway, you won't lose anything. Software RAID on the CPU will get you exactly what you would have had with that RAID card. Only it works properly, is integrated into your OS, is much better tested because millions of Linux users use it, gets updated along with your kernel, etc etc.

Yeah, back in the day hardware RAID was practical because CPU and RAM were limited performance-wise; now you can get an 8-core CPU and 8GB of RAM pretty fucking cheap. Software RAID does use CPU for RAID5 parity calculation, but with today's fast CPUs that's not bad. Also, due to the nature of hardware RAID, a lot of your SMART programs will not see each drive attached to the array, merely the single virtual volume that gets presented to Windows. With software RAID, the SMART software sees each drive, so you can better tell if any problems turn up. But no matter what, hardware or software, use a UPS. It will save your ass in the long run.
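
Concretely: md members are just plain disks, while drives hidden behind a hardware controller need a pass-through flag. The megaraid bit below is an example for LSI cards; check smartctl's -d option for whatever card you actually have:
smartctl -a /dev/sdb                 # md member: full SMART data, no tricks
smartctl -a -d megaraid,0 /dev/sda   # drive sitting behind a MegaRAID controller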

>Should mdadm be listed in htop for cpu load
No. As the name suggests, mdadm is the ADMINISTRATION tool for multi-disk arrays. It sets up some mappings in the kernel, maybe sticks around to monitor for kernel events, but that's it. If there's a significant CPU hit from using raid, it'll show up as kernel threads, not mdadm.
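
So to see the actual cost, look at /proc/mdstat and the per-array kernel threads (names follow the mdX_raidY pattern):
cat /proc/mdstat                             # array state, resync/check progress
ps -eo pid,comm,%cpu | grep -E 'md[0-9]+_'   # e.g. md0_raid6, md0_resync -- that's where the CPU time shows up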

I mean, why not test it?
The one thing I'd do, if I were you, is set up a small SSD, 128GB or so, as an NV cache for your array.
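
One way to do that on top of an md array is lvmcache (bcache is the other usual option). A rough sketch; the device names, VG/LV names and the 100G size are all placeholders:
pvcreate /dev/md0 /dev/sdf                              # sdf = the small SSD
vgcreate vg0 /dev/md0 /dev/sdf
lvcreate -n data -l 100%PVS vg0 /dev/md0                # bulk LV on the array
lvcreate --type cache-pool -L 100G -n cpool vg0 /dev/sdf
lvconvert --type cache --cachepool vg0/cpool vg0/data   # attach the SSD as a cache in front of the array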

>And the SP3 socket runs 128 lanes and octochannel RAM.
>8 full length 16x PCI-E slots all running at 16x
MUH DICK
>128 PCI-E 1x slots attached to the motherboard through flex-cables
UNF

I mean, just look at this shit.
supermicro.com/Aplus/motherboard/EPYC7000/H11SSL-NC.cfm

> Epyc mobo with anything fewer than 32 DIMM slots

Attached: my disgust.jpg (453x439, 85K)

I'm sure there's a quad socket server board out there, just for you.

The H310 chipset makes no fucking sense. Six PCIe 2.0 lanes?!

> hold my beer

Attached: AS-1123US-TR4_top_detailed.jpg (1600x857, 1.1M)

What the fuck are you cooling your CPU with? A sheet of paper?

>those two SFF-8639 backplane slots

Attached: 1443825179560-0.jpg (477x450, 52K)

So if the i7 isn't for workstations, and isn't really for gamers either, what is it for? Just for capitalizing on consumers? Intel can't be THAT evil now, can they?...

>Just for capitalizing on consumers?
>Intel can't be THAT evil now, can they?...

Attached: 0606dc9e03f7bd8f603e121d2d8f9bb36b98fb7a8a2540ff35757bd10a91d8c0.jpg (548x420, 46K)

Motherboard manufacturers can at least help by adding PCIe switches (like a PEX 8747). That will of course not increase the bandwidth to the CPU, but it can help in some scenarios:
>data transfer from one card to another happens at full x16 speed rather than x8: useful for SLI/CF
>if both cards aren't used at the same time, either can get x16 to the CPU rather than being limited to x8 at all times
There used to be boards like this, but it seems they aren't made anymore for the newest CPUs, as in Coffee Lel and Ryzen. The only option now if you want more PCIe lanes is Skylel X or Threadripper.
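
If you're not sure whether a board you already own has one of those switches, the PCI topology gives it away:
lspci -tv   # a PEX 8747 shows up as a PLX/Broadcom PCI bridge with the GPU slots hanging under it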

Reading through the thread reveals that most people who disagree or don't see the problem in OP's question only use their PC for gamez and p0rnz. Nothing wrong with using a PC for that, but don't scale your weak teenage use case onto professional users of any kind.

A high-tier Intel CPU SHOULD have many more PCI-e lanes, but they don't. And it seems only logical that the i7 has no real place these days, as any "professional" usage forces you into Xeons or, god forbid, the i9 platform mess.

Just use Threadripper retard.

He's pointing out how badly Intel has fucked the dog on this issue.

Yeah, it actually was sarcastic. Brian Krzanich (I had to look his name up) is a good-for-nothing director who doesn't care about computers at all. He cares about money, which from a shareholder's perspective is fine, and perfect for the average American short-term investment ideal.

But there's also a reason why I had to google "CEO of Intel", whilst I can throw Lisa Su's name right out there. It's not like Krzanich hasn't been in the media the last few months.

I do, I'm just giggling that people still refuse to go AMD in order to get good products. But there's that whole thing about the AMD fanbase: it's crap. They've been the underdog so long that most of the loyal users who are also loudly active on the internet are inbred fuckheads. That is probably the single biggest obstacle for AMD; they need a newer and way better fanbase. But IMO it's steadily getting there. Now they just have to not fuck something up big, like a 1c/2t Threadripper, or like that fucking X299 Intel shit.

Well, AMD has had a rough few years since the days of the Phenom II.
Hopefully they can banish the FX and A series to the ash heap of history so they no longer taint their name.

>2008
Board manufacturers make 1337GAYMUR-branded motherboards with PXE, dual LAN with hardware firewalls and cable-distance testers, semi-integrated audio with decent output quality and a stereo/beamforming mic with hardware background-noise cancelling, and maintenance-mode LEDs highlighting ports and slots only when needed.
>2018
Board manufacturers make workstation-branded motherboards with dual x8 and no PXE, a single shitty LAN that somehow manages to produce high CPU load when active, integrated audio with grorious nippon caps that do literally nothing, and RGB LEDs randomly thrown around pulsating a rainbow in time with your music.

I have no doubt that Lisa Su is smart enough to make it clear to everybody in the marketing department that they'll be sent to South Sudan if they even suggest utilizing that name.

The shitty thing is that you can have a dual SP3 board, but that board is seriously lacking in actually accessible PCIE lanes. Granted, all its slots are fully wired, but you have to go to a Supermicro proprietary form factor motherboard to get more.

Attached: H11DSi-NT_spec.jpg (261x222, 32K)

Likely due to the fuckhuge size of the SP3 socket, even on an EATX board.

What is wrong with PCIE? Please post some cons

It's the number and style of implementation. Read through the thread

Complaining about PLX/Broadcom price gouging and Intel's market segmentation isn't really a justification for calling PCIe inferior tech.

PCIe lanes on the CPU add complexity, since you are connecting directly to the die.
We are finally getting the new revisions with PCIe 4 and PCIe 5; the expectation is that around 2020 they will be on boards. PCIe 5 will effectively double the transfer rate, meaning it will surpass x16 with x8 lanes.

Just get Threadripper; it has enough lanes for your use cases.

>Wanted to use them to make a RAID10.
this is your problem

I've posted six posts in this thread, not counting this one. In none of them did I, or the person I was discussing with, imply any inferiority of PCI-e. Not even with the most Americanized social-justice-warrior mindset can you possibly read deep enough between the lines to conclude that I at any point hinted at any negative aspect of PCI-e.

It works, I enjoy it. I do however complain about Intel, who thought nobody would complain if they cut corners and saved just a little in production cost by cutting PCI-e lanes.

PCIe 4 could conceivably show up as early as late next year, but if you expect PCIe 5 anytime remotely soon, you're gonna be horribly disappointed.

Just because they defined the electrical standards for it doesn't mean anybody knows how to make 32+ Gb/s transceivers (that can deal with a foot or more of PCB and two connector hops) cheap enough to put in mainstream parts.

My assumption is that even when it is rolled out, it will only work on the slot or two closest to the CPU sockets and be supported only by the latest Mellanox etc. server cards.

>muh style
this isn't /fa/
You sound retarded.

>>HDD still not over 12TB
lol

>Toshiba MG07ACA14TE
14TB 3.5 drive
toshiba.semicon-storage.com/us/product/storage-products/enterprise-hdd/mg07acaxxx.html

>Seagate barracuda pro
12TB 3.5 drive
seagate.com/www-content/datasheets/pdfs/barracuda-pro-12-tbDS1901-7-1707US-en_US.pdf

>HGST Ultrastar He12
12TB 3.5 drive
hgst.com/products/hard-drives/ultrastar-he12

Don't know. AFAIK, since the PCIe 5 specs are already defined, PCIe 4 will be ignored and PCIe 5 will be adopted instead.

There is pressure in the industry to either add lanes or increase per-lane bandwidth, what with direct-attached NVMe and newer video cards. SSD RAID only makes it worse, and it's an enterprise feature.
Thus I believe within 2 years we will see consumer boards with updated interfaces.

>buy more than 1 computer
>network them together
>basically have as many PCIE lanes as you like in your supercomputer beowulf cluster

Attached: f26987448.png (792x792, 457K)

based
capitalism wins again

Is there even a reason to have x16 on PCI-E 3.0?

It doesn't matter if it's split into x8 and x4, because it's still miles faster than PCI-E 1.0, which itself was barely a bottleneck.

>pice4 will be ignored and pcie5 will be adopted.

that simply won't happen.
PHY complexity/cost increases far more than linearly with bandwidth, and vendors won't go from 8 Gb to 32 Gb PCIe in a single hop.

We won't see PCIe 5.0 until Google (or somebody similarly influential) decides it needs 400 GbE to the host and not just between switches, and that's only if Intel, AMD, etc. haven't already fully integrated network controllers of that speed by then.

>milliseconds
Is a perceivable stutter. Multitasking of CPUs does not exist.

If you have a RAID array that actually utilizes 16x 3.0 lanes, you should be running a server/workstation/HEDT platform, and those typically don't have a shortage of PCIe.

>32x 40x40x20 high rpm fans
imagine the noise from a whole rack of these

I feel you, my 9370 gets dangerously close to the thermal limit at ~80% load... with a 240mm radiator.

I just limit heavy sustained loads to a few cores. At least I'm prepared to cool threadripper, which has a slightly lower TDP and a way higher thermal limit.

>Is there even a reason to have x16 on PCI-E 3.0?

Yes

>barely bottlenecked

Even your shitty use case will be fucked a couple years down the road. Like it or not even regular dipshits like you will end up needing that bandwidth later on down the line.

this is why they're leaving intel

>>board manufacturers go back to 2004
Have you tried buying AMD?

Sas is dead

You haven't heard that they've officially announced they're developing an in-house replacement for Intel? It's not going to be x86; it looks like it'll be ARM. It's probably going to be a Facebook machine, and hipsters will embrace it for being much easier to develop cross-Apple-product "apps" for.

trips confirm

time to overthrow the pci-e dictatorship

Attached: VivaLaRevoución.jpg (590x290, 43K)

>SAS is dead
SAS is king of HDD storage.

NVME for flash, SAS for HDD, you need both.
Hammer and Anvil, dingus.

I don't think any laptop cpu supports more than 16 lanes.

Really though, if you want lanes you really need to go with workstation hardware with xeons or threadripper. e.g. the iMac Pro has 48 lanes, plenty to power its thunderbolt ports and other shit.

The fact that his name in my language basically means "to fuck up" greatly amuses me.

>Does it even matter?
Yes. It does. I've got an ATX motherboard with the usual number of PCI-e slots. The manual has an entire page of "PCI_4 cannot be used if PCI_3 and/or PCI_2 is populated" and so on. Then there's a chapter on what speeds which slots run at when other slots are populated. I don't even get why they put all those PCI-e slots on the board when you can't put things in more than half of them before the other ones stop working. It's a total scam.
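
You can at least check what each slot really negotiated instead of trusting the manual's matrix. The 01:00.0 address below is just an example; substitute your own devices:
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'      # advertised vs negotiated width/speed
cat /sys/bus/pci/devices/0000:01:00.0/current_link_width
cat /sys/bus/pci/devices/0000:01:00.0/max_link_width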

>I don't even get why they put all those PCI-e slots on the board when you can't put things in more than half of them before the other ones stop working.

Maybe to have enough slots even when using multiple 2 or 3 slot graphics cards?

>pic related is what Gen Z thinks is their destiny
>t. Gen Z tard
Lmao at you losers