WHAT the fuck is going on with the X570 chipset?

WHY is a chipset on 55nm using LESS power than X570 on 14nm, despite driving the same devices?
And now ASUS proved X470 can enable PCIe 4.0 without retarded power usage.
Rumors suggest the chipset has something to do with Zen 2's reworked Infinity Fabric.
What's the real reason?

Attached: x570 power.jpg (700x441, 82K)

Other urls found in this thread:

overclock.net/forum/6-intel-motherboards/1242211-faq-does-my-p67-z68-motherboard-s-support-pci-express-3-0-a.html
anandtech.com/show/14639/no-amd-still-isnt-enabling-pcie-4-on-300400-series-boards
gamersnexus.net/guides/3482-amd-x570-vs-x470-x370-chipset-comparison
twitter.com/NSFWRedditImage

PCIe 4.0 is all about signal integrity
X470 could not sustain a viable connection for a meaningful amount of time

then why is ASUS enabling it on X470 boards?
And it seems to work just fine.

>15W TDP
>oh no it consumes ~10W!
duh, it's just turning up the voltage for a stronger signal, plus it's not much

*WHHHHHHRRRRRRRRRRRRR*
NO MOMMY SU PLEASE SAVE ME

>TDP doesn't matter
X570 has no reason for the increased power usage.
It's pointless over X470.

The i5 7600K was the limit of computer technology. Everything that happened beyond that is a failed experiment.

*NEW* Power consumption doesn't matter!

I guarantee you that working and working well are entirely separate concepts.
If you have even a basic understanding of the X570 lineup, you know that boards have to be overdesigned, driving cost up, because PCIe 4.0 will not run as intended without shielding and good cooling. I absolutely would not expect X470 or the cheaper X570 boards to be able to saturate the bandwidth without being forced to revert to 3.0 speeds.

>working well
>separate concepts
i require proofs.
ASUS said it can reach full 4.0 speeds on X470.

That's too vague. Full speed on what lanes, and for how long? It's not all of them, and it's not forever. If you have a board with a bunch of PCB layers already, I don't think it would be totally garbage. You just wouldn't want to be running a PCIe 4.0 SSD.

>it wont work Goys
>buy x570
the AMD shilling has gone too far.

Enjoy having all your data corrupted by your piece of shit mobo.

I really don't think X570 is worth it, either.
However, none of the business decisions that AMD and the board partners are making would make any sense if the overdesign weren't necessary. More expensive motherboards sell worse, so selling X470's high end as X570's midrange would be a waste of time.

>NEW x470 doesn't matter

You haven't presented proofs showing that it works. Companies always throw around that kind of claim and then bury it under 9999 footnotes saying it's actually not true.

just imagine the power consumption on the new threadrippers

>working and working well are entirely separate concepts
In digital transmission it's virtually the same.
People here ignore the fact that some slots are routed directly to the CPU while others are routed via the chipset. The chipset itself is a PCIe device and provides PCIe lanes for other devices. While X470 is connected via PCIe 3.0 and provides 2.0 lanes, X570 is connected via 4.0 and provides 3.0 lanes, which obviously makes a difference.
Motherboard vendors can only enable PCIe 4.0 for slots that are routed directly to the CPU, and it will only work if the signal integrity of those slots is sufficient. Commonly it won't be; only the PCIe slot closest to the CPU may work at 4.0 speeds, simply due to the short distance to the CPU. Supporting the 4.0 standard was not a requirement for X470 boards. It is a requirement for X570 boards, which is why they are more expensive and have more complex routing.
Regarding the X570 power consumption: probably both PCIe 4.0 and AMD's talent for making housefires out of nothing contributed to it.
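For reference, since lane generation keeps coming up: per-lane bandwidth roughly doubles every PCIe generation. A back-of-envelope sketch (the rates and encodings are from the PCIe specs; the uplink labels are just illustration):
[code]
# Usable PCIe bandwidth per lane, per direction, by generation.
GENS = {
    # gen: (raw rate in GT/s, encoding efficiency)
    1: (2.5, 8 / 10),     # 8b/10b
    2: (5.0, 8 / 10),     # 8b/10b
    3: (8.0, 128 / 130),  # 128b/130b
    4: (16.0, 128 / 130), # 128b/130b
}

def gb_per_s(gen, lanes=1):
    rate, eff = GENS[gen]
    return rate * eff / 8 * lanes  # GT/s -> GB/s

for gen in GENS:
    print(f"gen{gen} x1: {gb_per_s(gen):.2f} GB/s")
print(f"X470 uplink (3.0 x4): {gb_per_s(3, 4):.2f} GB/s")
print(f"X570 uplink (4.0 x4): {gb_per_s(4, 4):.2f} GB/s")
[/code]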

*and provides 4.0 lanes
the X570 is just the I/O die built on 14nm instead of 12nm and provides (just like the CPU) 16 PCIe 4.0 lanes.

What the fuck, why?
I thought it was just a hub redistributing bandwidth; what do they need so many PCIe 4.0 lanes for?

>WHY is a chipset on 55nm using LESS power than X570 on 14nm, despite driving the same devices?
X570 is more than likely an emergency solution just to get boards out the door, since ASMedia hasn't been able to produce the real chipsets on time. For all we know, that's also the reason the Ryzen 3000 launch was delayed as long as it was. More than likely, once it was obvious how long ASMedia needed, AMD just took its I/O die, dubbed it X570 and pushed it to board partners just to have something to launch this year.
>And now ASUS proved X470 can enable PCIe 4.0 without retarded power usage.
That's a completely different thing and has nothing to do with X470. X470 has no 4.0 support whatsoever; it's just about allowing the CPU to enable 4.0 on the lanes connected directly to it, so it wouldn't affect chipset power consumption in the least.
>Rumors suggest the chipset has something to do with Zen 2's reworked Infinity Fabric.
Probably just some retards extrapolating from the fact that X570 is Ryzen's I/O die.

are you fucking stupid?

>I thought it was just a hub redistributing bandwidth; what do they need so many PCIe 4.0 lanes for?
Wat. How would it "redistribute" that bandwidth without having lanes to distribute it to?

>yes bro we need that PCIe 4 for 5GB/s SSDs to shitpost with anime pictures online, or training muh naked waifu generator "AI"
and who would take ASUS's word over AMD's for muh PCIe? It's not officially supported, so you're tossing a coin.

Taking AMD's word is not about it not working. AMD has said that PCIe 4.0 may very well work on many non-X570 boards, but they don't think it should be enabled because it creates a confusing support situation where it isn't obvious which boards will or won't support it.

>go x570 your house catches on fire
>go b450 your computer doesn't boot
damn..... so this is the power of AMD

And what's your problem, faggot? You'd have to be a retard to buy X470 especially now, knowing that PCIe 4 is not supported and it was never advertised to be supported.
Stop acting like you're entitled to shit. If ASUS can make it work, it's a risk, but go for it; AMD made the proper decision marketing-wise and support-wise. They're growing too fast to introduce new features on chipsets they're not actively developing on. (Support =/= Development)
As a first-gen user it sucks, but that's reality; you have to remember companies only think about the P word.

How did Intel 6-series and X79 chipsets work with CPU-supplied PCIe 3.0 on Ivy Bridge processors, then? That's a doubling of bandwidth, after all.

inb4
>This one is different! AMD is always right!

Intel is the one you'd expect to pull these things. Yet that didn't happen during the Ivy Bridge transition. There is certainly something fishy here.

How would it provide new lanes from thin air? You can't just connect a chipset via x4 PCIe 4.0 and provide 16 more PCIe 4.0 lanes. Where would the chipset get that bandwidth? AFAIK there are no new interfaces introduced with Ryzen 3000, like DMI on Intel; the chipset is still connected via PCIe.
Oh, and the CPU does provide 24 lanes, 4 of which are occupied by the chipset, so you are bullshitting.

>And what's your problem, faggot? You'd have to be a retard to buy X470 especially now, knowing that PCIe 4 is not supported and it was never advertised to be supported.
>Stop acting like you're entitled to shit.
That's literally the stance of a typical Inturd fanboy during the launch of Z370.

>Intel 6 Series
They didn't? Don't know about X79; it probably supported PCIe 3.0 from the very start, because Sandy Bridge-E supported it to begin with.

>buyers remorse: the chip

Who acted entitled? I merely explained the reasoning.
>You'd have to be a retard to buy X470 especially now, knowing that PCIe 4 is not supported
If you aren't interested in PCIe 4.0, then it's hardly retarded to buy X470.

Wat. It's a switch. You can get the full PCIe 4.0 x4 bandwidth to any one connected device at a time. Yes, the total combined bandwidth to all the devices connected to the chipset can't exceed 4.0 x4, but for many applications that doesn't actually matter.
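A minimal sketch of that, assuming a made-up device mix (only the uplink math comes from the spec rates):
[code]
# The chipset as a PCIe switch with an oversubscribed x4 uplink.
UPLINK_GB_S = 16.0 * (128 / 130) / 8 * 4  # PCIe 4.0 x4, ~7.88 GB/s

devices = {            # hypothetical peak demand in GB/s
    "nvme_ssd": 7.0,   # a 4.0 x4 SSD
    "10gbe_nic": 1.25,
    "sata_disks": 0.5,
}

busiest = max(devices.values())
combined = sum(devices.values())
print(f"uplink ceiling: {UPLINK_GB_S:.2f} GB/s")
print(f"busiest single device fits: {busiest <= UPLINK_GB_S}")   # True
print(f"everything at once fits:    {combined <= UPLINK_GB_S}")  # False
[/code]
Any one x4 device gets full speed; only when everything bursts at once does the uplink become the bottleneck, and most desktop workloads never do that.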

That's not Ryzen 1

I don't know if it was you, but currently I'm arguing with
>the X570 is just the I/O die built on 14nm instead of 12nm and provides (just like the CPU) 16 PCIe 4.0 lanes.
this bullshit

The 7600K wasn't part of Ryzen 1 silly.

Attached: hitman2.png (798x449, 181K)

Because "the X570 chipset" is just repurposed faulty I/O dies designed for Zen 2 CPUs. AMD chose to save money by using them for X570 instead of having ASMedia develop a proper bespoke chipset, and this is the result. No matter how well AMD are doing in general, they always manage to fuck up somehow and gimp their products. With Zen 2 it's the housefire chipset, which both requires a fan and costs a fortune because the dies were supposed to go inside expensive CPUs, resulting in HEDT motherboard pricing. Hell, not even that: the highest-end X570 boards are more expensive than X299 or X399 boards ever were.

Attached: CIA.jpg (556x575, 41K)

>AMD Unboxed
>DX12

At least try to make a post that isn't pure shit.

botnet on duty

Attached: SLACKWARE.png (640x480, 8K)

>My CPU is (only) better at running dated games, truly the pinnacle of technology!

t. never played a game with DX12

>DX12 BAD!
>because it takes advantage of more than four cores and more than four cores is BAD!
>buy quadcore i5s goy!
Circular logic; moving on.

>HWUnboxed BAD!
>except when they're bashing Vega then they're alright
Cretins, the fucking lot of you.
Damage control. Intel sucks even at gayms now.

>except when they're bashing Vega then they're alright

Imagine making a card that's so shit that even the people you pay to shill it tell everyone that it's shit.

>because it takes advantage of more than four cores and more than four cores is BAD!

t. never played a game with DX12

You've never played a game with DX12 because you don't play games you just shill Intel on /v/.
Also, I'm sorry but last I checked the only people paying anyone for positive reviews are Intel, e.g. PC Perspective and Principled Technologies. But, you know, nice try.

You are literally in denial.

overclock.net/forum/6-intel-motherboards/1242211-faq-does-my-p67-z68-motherboard-s-support-pci-express-3-0-a.html

As long as 6-series motherboards have no PCIe switches (or the switches can be bypassed, if that works), PCIe 3.0 is available.

Sandy Bridge-E/EP CPUs are only certified for PCIe 2.0. They are capable of PCIe 3.0 speeds, but it's unofficial. Still, Intel did not actively prohibit PCIe 3.0 implementation on X79 motherboards, even though there was good reason to doubt X79 motherboards were PCIe 3.0-ready (trace-wise). In the end, most motherboards can run PCIe 3.0 without issues when paired with Ivy Bridge-E/EP, and many can activate PCIe 3.0 on Sandy Bridge-E without problems.

You can see that there are two things going on here. One is support for 3.0 on older motherboards that allegedly "may not have good enough traces" for 3.0 itself; this is even the case for lower-end motherboards without PCIe switches. The second is that even though Intel's PCIe controller (so it's not about the traces anymore) is not officially certified for 3.0, Intel did not actively prohibit PCIe 3.0 operation on SB-E CPUs. What AMD did here (or is planning to do, since the AGESA that blocks 4.0 operation is not yet released) is inexcusable, especially since a representative previously claimed otherwise.

I'm sorry bro, when you play a DX12 game in your life we can talk shit all day and have fun debates and banter. Until then, stfu.

What are you even arguing, though?
>You can't just connect a chipset via x4 PCIe 4.0 and provide 16 more PCIe 4.0 lanes.
Yes, you can. You won't get the full bandwidth from all of them at the same time, but you do get 16 lanes, so you can connect more devices.

AYYMD finally has good CPUs and decent GPUs so they need to have something for the hardcore AMDrone masochist.

So what motherboard should I buy now for a Ryzen 7 3700X? Help a noob out.

>dx12 makes gaymes bad
>source: me

Attached: a.png (361x334, 87K)

>dx12 makes games good
>source: AMD Unboxed and AMD's official gaming website

>dx12 makes games good
To the extent that it raises framerates, it sure doesn't hurt.

and also raises input lag to the point where it makes games unplayable, which hurts a bit

...

MSI B450 Gaming Pro Carbon AC. Or at least after they release a working BIOS for it.

Are you saying that DX12 has intrinsically worse input lag, rather than some specific engine utilizing it badly? If so, citation needed.

I've got Forza 7, Battlefield 1, Warhammer Vermintide 2 and Doom 2016, and I'm planning on getting Metro Exodus and Total War: Three Kingdoms, all of which use either DX12 or Vulkan, and these are just the ones I know about; I don't really keep track of this shit.
You don't play video games. Go back to /v/.

Well, the same applies to AM4 boards, but another requirement is added on top of the absence of switches: trace length. PCIe 4.0 has tight limits on reach and requires repeaters if the slot is physically located too far away. The longer the distance, the weaker the signal.
It's up to vendors to enable the feature.
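Rough numbers, if anyone wants a feel for how tight that is. The 28 dB channel loss budget at 8 GHz (Nyquist for 16 GT/s) is from the PCIe 4.0 spec; the per-inch losses and connector overhead are ballpark assumptions that vary a lot with PCB material and stackup:
[code]
# Back-of-envelope PCIe 4.0 trace-length check.
BUDGET_DB = 28.0         # total channel loss budget at 8 GHz (per spec)
CONNECTOR_VIAS_DB = 7.0  # assumed overhead for slot, vias, breakout

def max_trace_inches(loss_db_per_inch):
    return (BUDGET_DB - CONNECTOR_VIAS_DB) / loss_db_per_inch

print(f"cheap FR4    (~2.0 dB/in): {max_trace_inches(2.0):.1f} in")
print(f"low-loss PCB (~1.0 dB/in): {max_trace_inches(1.0):.1f} in")
# A redriver/retimer resets the budget mid-route, which is how slots far
# from the CPU get 4.0, and part of why X570 boards cost more.
[/code]
On cheap FR4 you run out of budget after roughly 10 inches, which is about where the bottom slots on an ATX board sit.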

Can you please not copy paste promotional announcements from AMD's website, thank you

The options menu in Battlefield V literally warns you, something along the lines of "activating DX12 will introduce input lag".

Promotional? What the fuck? Do you really have THAT hard a time believing THAT SOME PEOPLE ACTUALLY FUCKING PLAY VIDEO GAMES?
Holy shit the absolute state of Intel drones.
This is my work PC so obviously it's not installed but I've got like 30 hours in this one back on my 1500X.

Attached: forza7.png (1280x994, 1.22M)

Input lag is not as obvious in Forza because your inputs are not as snappy as they would be with a mouse or keyboard, but I guarantee you it's there.

Does Vulkan have this issue? Because Doom is a fucking fast-paced game and I don't feel jack shit in terms of input lag. And both Vulkan and DX12 are based on Mantle, so if it's really an API issue and not an implementation issue, it should exist in both.

>Well, the same applies to AM4 boards, but another requirement is added on top of the absence of switches: trace length. PCIe 4.0 has tight limits on reach and requires repeaters if the slot is physically located too far away. The longer the distance, the weaker the signal.
>It's up to vendors to enable the feature.

Nope, not the same:

anandtech.com/show/14639/no-amd-still-isnt-enabling-pcie-4-on-300400-series-boards

AMD is actively thwarting PCIe 4 support by gimping the AGESA. If the AGESA is gimped, the vendors are literally powerless.

I watched a Buildzoid mobo overview vid yesterday, and in it he said that the X570 chipset doesn't have any proper power-saving capabilities, it "either runs or it does not", which is part of the reasoning behind the chipset fan: make sure it doesn't overheat and crash your system.

Yes, and Battlefield V seems to be the specific game that has that problem, rather than DX12 being intrinsically problematic.

You girls seem to be debating PCIe 4 signal integrity without mentioning path length. PCIe 4 is probably working just fine to the first slot on ASUS boards. I have no idea if they enabled it on any other slots; probably not. It's surely not enabled for the bottom x16 (electrically x4) slot or the x1 slots, since those aren't connected directly to the CPU. Anyway, it does depend on the quality of the traces and things like that. ASUS are probably confident that their X470 boards' traces are good enough to do PCIe 4 to the first x16 slot, and it's probably fine.

It could, of course, be that ASUS is enabling PCIe 4 to the first slot knowing full well that there's basically nothing that will go in that slot which does PCIe 4 as of now anyway. High-end GPUs can barely max out PCIe 3.0 x8.

This. P67/Z68 can support PCIe 3.0 on the first slot too with Ivy Bridge CPUs. Intel did not actively block this behaviour.

I felt it in Doom, felt it in Battlefield Hardline, felt it in BFV; you can google "nausea Doom", "headaches DX12", "input lag DX12" and you'll get tons of results.

I don't know what else to tell you.

Attached: doom.png (709x555, 64K)

No graphics API is more or less likely to produce input lag in a video game. It's more likely tied to the way the game is designed (from a graphics pipeline perspective) and how it loads the GPU, and how the developer goes about optimizing their game to provide a smooth framerate.

Pretty sure most of those people fixed the motion sickness by cranking the FOV slider; nothing to do with input lag, just tunnel vision. Worked for me at least.
Also, I've heard no one else complain about input lag in any of these games, ever. I don't feel anything like that in Vermintide 2 either, and that's first-person DX12.
Maybe it's only true for certain hardware configurations, or you can only feel it on high-refresh-rate monitors? Because I'm on a 75Hz ultrawide IPS, so there's already a 5ms lag there.

Hey senpai, tell that to him, not me. I've not even felt this shit.

Intel sucks at everything really.

Attached: maxresdefault.jpg (1280x720, 162K)

Keep talking shit, bitch motherfucker.

I'm talking shit? What? I'm being nice here. Are you fucking autistic?

Working about 100% better than the Intel implementation

Attached: index.jpg (197x255, 13K)

>No graphics API is more or less likely to produce input lag in a video game.
Not exactly true; how much control a graphics API gives you over vsync behavior definitely affects input lag. I only have personal experience with OpenGL, so I don't know how the other APIs fare in this regard, but my main input-lag problem is that I can't make my "front-most" thread wait for the next vsync before it starts producing the next frame.
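For what it's worth, this is the kind of thing I mean, as a minimal sketch assuming the pip-installable glfw and PyOpenGL packages. glFinish() right after the swap is one blunt workaround: it blocks the render thread until the swap actually completes, so input gets sampled right after vsync instead of the driver queueing frames ahead:
[code]
# Sketch: block the render loop on the real vsync to cut input lag.
import glfw
from OpenGL.GL import glClear, glClearColor, glFinish, GL_COLOR_BUFFER_BIT

assert glfw.init()
window = glfw.create_window(640, 480, "latency test", None, None)
glfw.make_context_current(window)
glfw.swap_interval(1)  # vsync on

while not glfw.window_should_close(window):
    glfw.poll_events()            # sample input right before rendering
    glClearColor(0.1, 0.1, 0.1, 1.0)
    glClear(GL_COLOR_BUFFER_BIT)  # real rendering would go here
    glfw.swap_buffers(window)
    glFinish()  # wait for the swap to finish, so the next poll_events()
                # runs right after vsync instead of 1-2 frames early
glfw.terminate()
[/code]
The cost is that you lose CPU/GPU overlap, which is exactly the control I wish the API exposed properly.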

>it goes from 4W nvme load to 8W nvme load
WOW WHAT THE FUCK THIS IS TOO MUCH MY HOUSE IS ON FIRE WTF

Attached: 1536612142306.png (504x400, 91K)

That's a rebranded 6600K, and it's one of the most cucked pieces of shit CPUs out there.
t. 6600K goy
Where are our big copper heatsinks on X570? Instead there's a little 9000rpm fan.

I'm not sure whether the devices connected to the chipset can communicate with each other without going through the bottleneck of the CPU-X570 link (limited to 4 PCIe 4.0 lanes); otherwise the other posters are correct, and the bandwidth will be limited to PCIe 4.0 x4.

There is also a small error: the I/O die/X570 supports 24 PCIe 4.0 lanes, 4 of which are reserved for the connection between CPU and chipset, leaving 20 lanes on each side.

The CPU uses 16 for the GPU and 4 for, usually, an NVMe SSD (or other I/O).
The chipset gives 8+4+4 = 16 lanes in various configurations, but also 4 SATA 6Gb/s ports (probably connected through the missing 4 lanes).
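Quick bookkeeping sketch of those numbers (lane counts per the GamersNexus table linked below; the grouping labels are mine):
[code]
# Lane budget of the Zen 2 I/O die / X570.
TOTAL_LANES = 24

cpu_side = {"gpu": 16, "nvme": 4, "chipset_uplink": 4}
chipset_side = {"uplink_to_cpu": 4, "general_purpose": 8 + 4 + 4, "sata_6gb_s": 4}

assert sum(cpu_side.values()) == TOTAL_LANES
assert sum(chipset_side.values()) == TOTAL_LANES
print("usable CPU lanes:    ", TOTAL_LANES - cpu_side["chipset_uplink"])     # 20
print("usable chipset lanes:", TOTAL_LANES - chipset_side["uplink_to_cpu"])  # 20
[/code]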

gamersnexus.net/guides/3482-amd-x570-vs-x470-x370-chipset-comparison

Attached: AMD-X570-Chipset-Details-and-Specs_1-1480x832.png (1480x832, 708K)

1 GPU = 16 lanes
1 10GbE NIC = 4 lanes
1 NVMe = 4 lanes

You are now eating 24 PCIe lanes and are unable to add anything more without stepping on toes. 32+ should have been the standard a long time ago, but Intel decided lane count was a great segmentation method.
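That build against AM4's 20 usable CPU lanes, as a quick sketch (demand numbers are the ones from the list above):
[code]
# GPU + 10GbE + NVMe vs. the 20 CPU lanes left after the chipset uplink.
demand = {"gpu": 16, "10gbe_nic": 4, "nvme": 4}  # 24 lanes wanted
CPU_LANES = 20

overflow = sum(demand.values()) - CPU_LANES
print(f"wanted: {sum(demand.values())}, direct from CPU: {CPU_LANES}, "
      f"forced through the chipset: {overflow}")  # 4 lanes share the uplink
[/code]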

The claim that AMD reuses I/O dies is complete bullshit. And no, peripherals can't communicate without the CPU, as that requires access to RAM, which is directly connected to the CPU.

That's 8€ more a year if you run your PC 24/7, 365 days a year, at an electricity cost of €0.31 per kWh.
who cares
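The arithmetic, for anyone who wants to check it. The ~3W delta is my assumption to make the 8€ figure work out; the thread's own 4W-vs-8W NVMe-load numbers would give ~11€:
[code]
# Yearly cost of a few extra watts, running 24/7.
EXTRA_WATTS = 3.0        # assumed delta; use 4.0 for the NVMe-load figures
EUR_PER_KWH = 0.31
HOURS_PER_YEAR = 24 * 365

extra_kwh = EXTRA_WATTS * HOURS_PER_YEAR / 1000
print(f"{extra_kwh:.1f} kWh/year -> {extra_kwh * EUR_PER_KWH:.2f} EUR/year")
# 3 W -> ~8.15 EUR/year; 4 W -> ~10.86 EUR/year
[/code]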

Attached: belly.jpg (1600x900, 85K)

X570 provides PCIe 4.0 on all the lanes; the X470 BIOS update only provides PCIe 4.0 to the x16 slot the GPU is connected to. The rest of the lanes, provided by the X470 chipset, are PCIe 2.0 (not even 3.0, but 2.0).

Also, iirc, the X570 chipset is basically the I/O chiplet from the Ryzen CPU, repurposed.

The problem isn't the power usage but the fact that it runs so hot it needs active cooling.

NOOOOOO 4 CORES 4 EVA

Attached: 1497883390389.jpg (267x297, 18K)

Shrinking dies often causes them to run hotter, though I hardly find that a problem.