>2004
>boards with PCIe appear
>2005
>boards with 2 PCIe x16 buses appear, operate at x8 when both are populated
>2006
>board manufacturers make boards that support x16 when both buses are populated
>2018
>board manufacturers go back to 2004
It's not board manufacturers; Intel decided that 16 PCIe lanes are good enough for consumers, and if you need more you buy a Xeon. Board manufacturers can split those lanes between slots as they see fit, with x8/x8 being the most logical choice for two slots.
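If you want to see what width your slots actually negotiate once both are populated, a rough sketch like this works on Linux through sysfs (assumes the current_link_width/max_link_width attributes are exposed, which isn't the case for every device):

import glob, os

# dump negotiated vs. maximum link width for every PCIe device;
# Linux-only, and some devices don't expose these attributes
for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    cur = os.path.join(dev, "current_link_width")
    cap = os.path.join(dev, "max_link_width")
    if not (os.path.isfile(cur) and os.path.isfile(cap)):
        continue
    try:
        with open(cur) as c, open(cap) as m:
            print(f"{os.path.basename(dev)}: running x{c.read().strip()} of x{m.read().strip()}")
    except OSError:
        pass  # some functions refuse to report link state

lspci -vv shows the same thing under LnkCap/LnkSta if you'd rather not script it.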
Nathan Price
Blame the CPU mfgrs, they sell last year's shit with a new sticker continuously. Props to AMD, with their TR4 and SP3 sockets, though.
Adam Thomas
Well, one reason was patent fuckery with those PLX switch chips you used to see on multi-GPU boards; they went from costing $20 to over $100 and stayed there. The other was AMD and Intel moving PCIe lanes on-die like they did with memory controllers. There's a lot more pressure to push down heat and die area on the CPU than there was on the northbridge, so they a) dumped lanes, and b) used lots of lanes as an upsell/market segmentation tactic for server CPUs.
Justin Jones
>Intel decided that 16 PCIe lanes are good enough for consumers
Yep, that's part of what fucked the rollout of NVMe.
Lucas Bell
Does it even matter? Are there any benchmarks that suggest you need a full x16 link for GPUs?
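For what it's worth, the theoretical per-direction numbers are easy to work out (back-of-envelope only, ignoring packet overhead beyond the line encoding):

# per-direction PCIe bandwidth from transfer rate and line encoding;
# real throughput lands a bit lower once protocol overhead is counted
RATE_GT  = {"1.x": 2.5, "2.0": 5.0, "3.0": 8.0}          # GT/s per lane
ENCODING = {"1.x": 8 / 10, "2.0": 8 / 10, "3.0": 128 / 130}

def gbytes_per_s(gen, lanes):
    return RATE_GT[gen] * ENCODING[gen] * lanes / 8       # Gbit/s -> GB/s

for gen, lanes in [("3.0", 16), ("3.0", 8), ("2.0", 4)]:
    print(f"PCIe {gen} x{lanes}: ~{gbytes_per_s(gen, lanes):.2f} GB/s")

So x8 3.0 still gives you roughly 7.9 GB/s each way; whether a GPU actually saturates that outside of synthetic transfers is the real question.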
Brayden Adams
OP here. Here's my use case:
>PCIe x16 3.0 for main GPU
>PCIe x16 3.0 for RAID controller
>PCIe x4 2.0 for VM GPU
The reason I need a dedicated RAID controller is too long a story and deserves its own thread.
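Quick lane budget against the 16 CPU lanes mentioned above, just to show why this doesn't fit on a mainstream board (chipset-hung lanes ignored, and the widths are what the slots ask for, not what they'd negotiate):

# my slot wishlist vs. the 16 CPU lanes a mainstream part exposes
slots = {
    "main GPU (PCIe 3.0 x16)":        16,
    "RAID controller (PCIe 3.0 x16)": 16,
    "VM GPU (PCIe 2.0 x4)":            4,
}
wanted, available = sum(slots.values()), 16
print(f"lanes wanted: {wanted}, CPU lanes: {available}, short by: {wanted - available}")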