i've discovered that assigning a gayme's affinity to all odd/even cores improves performance a bit. i'm guessing it's because smt/hyperthreading is shit, or is windows at fault here? disabling smt negates this but prevents the machine from sleeping for some reason. r5 1600, win7 btw. any difference on win10/intel?
is smt/hyperthreading really worth the ~20% multithread increase with a potential decrease in gaymen fps?
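if you want to replicate the odd/even trick without clicking through task manager every launch, here's a rough sketch against the plain win32 api. to be clear, the pid-from-argv handling and the assumption that smt siblings sit on adjacent logical processors are mine, not gospel - verify against your own layout first.

// affinity.c - minimal sketch: pin an already-running process to every even
// logical processor, i.e. one hardware thread per physical core on the usual
// layout where SMT siblings are adjacent. usage (hypothetical): affinity.exe <pid>
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    DWORD pid = (DWORD)strtoul(argv[1], NULL, 10);
    HANDLE proc = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION,
                              FALSE, pid);
    if (!proc) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    DWORD_PTR proc_mask, sys_mask;
    GetProcessAffinityMask(proc, &proc_mask, &sys_mask);

    // keep only the even-numbered logical processors the system actually has
    DWORD_PTR new_mask = 0;
    for (unsigned i = 0; i < sizeof(DWORD_PTR) * 8; i += 2)
        new_mask |= ((DWORD_PTR)1 << i) & sys_mask;

    if (!SetProcessAffinityMask(proc, new_mask))
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
    else
        printf("new affinity mask: 0x%llx\n", (unsigned long long)new_mask);

    CloseHandle(proc);
    return 0;
}

if compiling isn't your thing, start /affinity 555 game.exe from cmd does roughly the same for a freshly launched process (the mask is hex, 555 = every even logical processor out of 12).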
Also, it doesn't help that you're running windows 7, as they cut off updates before ryzen cpu support was added. So, i'm pretty sure the scheduler is thrashing the fuck out of the cores w.r.t. thread assignment.
Stop fucking with this, it's not made for retards like you
Jason Bell
>it's AMD loser realizes his mistake episode
Gabriel Wright
SMT is always a double-edged sword. Some programs will benefit tremendously from it, others will see small losses. Very few programs will lose a lot from it, which is why it's turned on by default.
If you're a gamer, you should find a cpu with incredible single core (or dual core) performance, because most games aren't optimized for large numbers of cores.
High core counts are best for running virtual machines and server stuff.
Jeremiah Peterson
>So, i'm pretty sure the scheduler is thrashing the fuck out of the cores w.r.t. thread assignment.
Windows 7 should already be capable of reading the CPU topology from the ACPI tables, doe.
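for reference, you can just ask windows what topology it thinks it has, no vendor scheduler patch involved. rough sketch using the old GetLogicalProcessorInformation call (present on win7); the printf formatting and the choice to only print core/numa entries are mine, nothing official:

// topo.c - minimal sketch: dump the physical core -> logical processor mapping
// as windows reports it from the firmware tables.
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    DWORD len = 0;
    GetLogicalProcessorInformation(NULL, &len);   // first call just reports the needed size

    SYSTEM_LOGICAL_PROCESSOR_INFORMATION *info = malloc(len);
    if (!info || !GetLogicalProcessorInformation(info, &len)) {
        fprintf(stderr, "GetLogicalProcessorInformation failed: %lu\n", GetLastError());
        return 1;
    }

    for (DWORD i = 0; i < len / sizeof(*info); i++) {
        if (info[i].Relationship == RelationProcessorCore) {
            // Flags == 1 means this core exposes more than one logical processor (SMT)
            printf("core: mask=0x%llx smt=%s\n",
                   (unsigned long long)info[i].ProcessorMask,
                   info[i].ProcessorCore.Flags ? "yes" : "no");
        } else if (info[i].Relationship == RelationNumaNode) {
            printf("numa node %lu: mask=0x%llx\n",
                   (unsigned long)info[i].NumaNode.NodeNumber,
                   (unsigned long long)info[i].ProcessorMask);
        }
    }

    free(info);
    return 0;
}

note that on a single-die ryzen the CCX split doesn't show up as numa nodes; if memory serves it only appears as two separate L3 entries under RelationCache, which is part of why the scheduler argument exists in the first place.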
Chase Powell
i want to thank you for providing an on-topic post instead of babbling about random shit
John Allen
what if I want to play multiple copies of a game at the same time? I can play 6 atm but would like to scale up to 16.
multiboxers pay for performance, so why are there no metrics for us?
Christopher Sullivan
I welcome the idiot to the party... Magically, everyone forgot the Intel Stutterfest(tm) even with their "high-end" 7700k 4corelets.
>Windows 7
It does, but it doesn't know how to treat an unknown CPU because the scheduler hasn't got patches for that μArch. The scheduler doesn't know where to assign the main threads and where the children (I know winblows doesn't have child processes, I'm borrowing the term from POSIX).
Colton Roberts
'Ping' is reflective of the best-case latency you're going to see between threads running on two separate physical cores. There's tons of communication between cores related to multi-threading, etc. A game engine is going to have tons of core-to-core communication, which is why you're seeing a ~20% bump if you locate the threads on the same physical CCX. You're looking at a more-than-double increase in latency if they aren't, which is why you see a ~20% performance drop when you just let windows' retarded scheduler fuck around w/ the threads.
Windows 7 doesn't properly support Ryzen because its scheduler doesn't take this into consideration and threads get thrashed about. So you have two things to keep in mind: you're running windows 7, and ryzen really behaves like a numa architecture. Think about it as though you have two sockets w/ 3 cores each. You want to do cross-socket communication as little as possible due to the added latency.
Make sense?
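if you want to see the CCX penalty yourself instead of taking my word for it, a crude ping-pong between two pinned threads will show it. rough sketch only - the iteration count, the busy-wait, and which logical cpu numbers land on which CCX are all assumptions you'd want to check against your own topology (on a 1600 with adjacent smt siblings, 0 vs 2 should be same-CCX and 0 vs 6 cross-CCX, but don't trust that blindly):

// c2c.c - minimal sketch of a core-to-core round-trip ("ping-pong") latency test.
// usage (hypothetical): c2c.exe <logical cpu A> <logical cpu B>
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERS 1000000L

static volatile LONG flag = 0;

static DWORD WINAPI ponger(LPVOID param)
{
    // pin this thread to the second cpu, then answer every ping with a pong
    SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)param);
    for (long i = 0; i < ITERS; i++) {
        while (flag != 1)
            YieldProcessor();            // wait for the ping
        InterlockedExchange(&flag, 0);   // pong back
    }
    return 0;
}

int main(int argc, char **argv)
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s <cpuA> <cpuB>\n", argv[0]);
        return 1;
    }
    int a = atoi(argv[1]), b = atoi(argv[2]);

    HANDLE t = CreateThread(NULL, 0, ponger,
                            (LPVOID)((DWORD_PTR)1 << b), 0, NULL);
    SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)1 << a);

    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    for (long i = 0; i < ITERS; i++) {
        InterlockedExchange(&flag, 1);   // ping
        while (flag != 0)
            YieldProcessor();            // wait for the pong
    }
    QueryPerformanceCounter(&end);
    WaitForSingleObject(t, INFINITE);
    CloseHandle(t);

    double secs = (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    printf("avg one-way hop: ~%.1f ns\n", secs * 1e9 / (2.0 * ITERS));
    return 0;
}

run it a few times per pair and compare; the cross-CCX pair should come out noticeably worse.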
Austin Smith
multi-what?
>mfw poorfags
Cooper Campbell
Did you even read the article you linked? It's all about Windows 7 lacking USB XHCI support until drivers are installed, and the motherboard only supporting XHCI rather than EHCI. It doesn't mention a word about CPU topology.
>It does, but it doesn't know how to treat an unknown CPU because the scheduler hasn't got patches for that μArch.
The point is that the ACPI tables describe the CPU topology, so the OS shouldn't require a driver for each and every specific CPU model just to understand its layout. There may be other issues, like using an optimal idle loop and such, but that's a separate matter.
Leo Jenkins
??????????????? excuse me this is Jow Forums not /v/ piss off
Logan Lopez
>'Ping' is reflective of the best-case latency you're going to see between threads running on two separate physical cores. There's tons of communication between cores related to multi-threading, etc.
Thanks, Cap'n Obvious.
What I meant was: what does it measure, specifically? Taking a cache line from another core? Pulling a cache line from core A to core B and back again? APIC latency? Something else entirely?
Jonathan Phillips
>this is Jow Forums not /v/ piss off
Discussing the factors that affect gayming performance is safely in Jow Forums territory.
Brandon Turner
you're also gay, so you should head over to /lbgt/
the topology cannot tell the scheduler how to treat each core with every load. Just remember the infamous windows 7 scheduler patch that gave Bulldozer's CMT implementation a 5% performance jump. Don't forget that SMT and CMT threads share resources with their physical partners; placing the wrong load on the "weak" thread can cause many more hazards and nops, especially with FP instructions.
>What I meant was: what does it measure, specifically?
no one knows, those metrics are not valid unless AMD verifies them.
There are several cases where you should check the latencies:
cross-CCX cache r/w
cross-CCX context switching
data forwarding from one core to another core on the other CCX
...and even then you won't have a clear picture, because all those latencies can be avoided with a simple scheduler patch.
>excuse me
you are excused.
Liam Richardson
>the topology cannot tell the scheduler how to treat each core with every load.
That's exactly what it does. It's the whole point of it.
>Just remember the infamous windows 7 scheduler patch that gave Bulldozer's CMT implementation a 5% performance jump.
That's because AMD's topology description of Bulldozer presented each "core" as an independent core, which is kind of understandable since the standard description format has (or had) no conception of CMT. Zen, however, has no such exotic attributes, so it fits nicely into the standard topology descriptions.
>the "weak" thread
There is no weak thread. Both threads of a core are symmetric.
Samuel Green
>no one knows, those metrics are not valid unless AMD verifies them.
It's not AMD's graph.