IT'S OFFICIAL: PERFORMANCE DOESN'T MATTER

Attached: JtawADR.jpg (1080x1709, 528K)

INTODDLERS BTFO!

Attached: 1532551463229.gif (540x304, 1.52M)

But Intel, you're shit at architecture.

Defined by who has the fewest flaws

>performance literally doesn't matter
Jesus Christ.

For latency-sensitive applications, they might actually be superior (for now they are, we'll see with Zen 2).

>used to be defined by clock speed
Ya don't say there bud

Attached: p4.jpg (1250x1243, 155K)

>architecture
isn't that one of Intel's biggest problems right now?

But are they, though? Take the 12c part. You're only inherently hit with that chiplet latency if your task can expand to 12 threads but requires low latency between each and every thread. If your task only needs low latency between 8 threads, a smart scheduler can put related threads on the same chiplet, even the same CCX.

Of course there are tasks that benefit from low latency between all threads, but then you have to consider that AMD's multithreaded performance is significantly better than Intel's; the 3900X trounces the 9920X. So Intel would only win in very specific scenarios where the marginal latency advantage outweighs AMD's overall performance advantage. It'd be so specific that I'd say the only way it'd be pronounced is, ironically, through synthetic tests designed to exploit low latency across many threads.

But all of this is predicated on AMD getting Microsoft to make the NT scheduler halfway decent. It'll be interesting to see how it shakes out, but AMD could do some powerful stuff with good software. Right now Threadripper can game decently if you disable chiplets to force execution on one, but if AMD can motivate MS, Windows should be able to give a game, or other low-latency workload, exclusive control of a chiplet and move other threads to other chiplets.
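You don't even have to wait for MS to fix the scheduler to test this; you can pin a process to one CCX by hand today. Minimal sketch, assuming cores 0-3 land on the same CCX (that numbering is firmware-dependent, check lstopo before trusting it):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the calling process to logical CPUs 0-3. The assumption that
 * these four make up one CCX is mine; verify it on your own board. */
int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 4; cpu++)
        CPU_SET(cpu, &set);

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPUs 0-3; no cross-CCX hops for this process\n");
    return 0;
}

Launch your game from a wrapper like this (or just use taskset) and you get the "one chiplet only" behavior without touching BIOS.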

Unless those apps are confined to a certain number of cores, then yes, you are correct.
This was the case in the 1900s, though, not now.

They are. I'm on mobile and can't link shit, but the audiofags had poor results with any Ryzen at very low buffer sizes (realtime audio).

I can also attest to this through gaming. If I set 4+0 in BIOS I get vastly better mouse response and marginally better smoothness (not because of fewer cores but because of lower driver latency). SMT is obviously off.

techreport.com/review/33531/amd-ryzen-7-2700x-and-ryzen-5-2600x-cpus-reviewed/7

(((((((((Intel)))))))))

That's kind of my point, though. AMD has been making steady progress on the inherent latency of their platform, based on the benches you posted and their Computex presentation (assuming they aren't totally full of shit). And the CCX/chiplet latency problem should be solvable through software: improving schedulers to place latency-sensitive threads as close to each other as possible.

Right now people get better performance by disabling a CCX or entire chiplets, but with improved scheduling, it should be possible for Windows/Linux to just put threads on only one CCX/chiplet if they need fast inter-thread latency. Instead of disabling cores to force local execution, Windows would just say "Oh, this is a game or other low-latency task, I'll put its threads close to each other and move other threads onto other cores." Basically, I'm questioning how much of the latency is actually inherent to the architecture, and how much is due to immaturity and poor software support.
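The topology info a smarter scheduler would need is already exposed, by the way. Rough sketch that prints which logical CPUs share each L3 (on Zen, sharing an L3 means same CCX); this assumes the usual Linux sysfs layout and that index3 is the L3, both worth double-checking on your box:

#include <stdio.h>

/* Walk the CPUs and print each one's L3-sharing set. CPUs listed
 * together are the ones a scheduler should co-locate a game's
 * threads on. Stops at the first CPU that doesn't exist. */
int main(void)
{
    char path[128], buf[128];
    for (int cpu = 0; cpu < 256; cpu++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/cache/index3/shared_cpu_list",
                 cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            break;
        if (fgets(buf, sizeof(buf), f))
            printf("cpu%d shares L3 with: %s", cpu, buf);
        fclose(f);
    }
    return 0;
}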

>Performance used to be defined by clock speeds and cache sizes.
Indeed, like AMD64 proved, right?

My God, they really have nothing to grasp at anymore

I've tested 4+0 in BIOS vs. numproc in Windows and I've definitely preferred 4+0 for input response. You can't schedule away the architecture's "flaws," you can only try to make it better. If you wanted to set the scheduler for low latency but not have to disable cores, you would have to force NUMA, which would basically defeat the purpose of moar cores. In the DAWBench VI results, the 8700K was substantially ahead of the 2700X, and it will always be that way due to monolithic vs. non-monolithic. Sure, you could have shitloads of cache, but you'd still have inter-CCX communication (I'm assuming they're still using CCXs); see the ping-pong sketch at the end of this post.
I have a friend who plays Osu! on Linux with a TR occasionally and he constantly complains about the latency. This is with Linux's scheduler, not the garbage Windows one, so that goes to show the architecture simply cannot be fixed with scheduling.
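If anyone wants to check the inter-CCX cost themselves instead of arguing, here's a rough ping-pong sketch: two pinned threads bounce a flag and you time the round trips. The core pairs (0/1 same CCX, 0/4 across CCXs) are assumptions about the topology, adjust for your chip:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ROUNDS 1000000

static _Atomic int flag;

static void pin(int cpu)                 /* pin the calling thread */
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *ponger(void *arg)
{
    pin(*(int *)arg);
    for (int i = 0; i < ROUNDS; i++) {
        while (atomic_load(&flag) != 1)  /* wait for ping */
            ;
        atomic_store(&flag, 0);          /* pong */
    }
    return NULL;
}

static double bounce_ns(int cpu_a, int cpu_b)
{
    pthread_t t;
    struct timespec t0, t1;

    atomic_store(&flag, 0);
    pthread_create(&t, NULL, ponger, &cpu_b);
    pin(cpu_a);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {
        atomic_store(&flag, 1);          /* ping */
        while (atomic_load(&flag) != 0)  /* wait for pong */
            ;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    pthread_join(t, NULL);

    return ((t1.tv_sec - t0.tv_sec) * 1e9
            + (t1.tv_nsec - t0.tv_nsec)) / ROUNDS;
}

int main(void)
{
    printf("same CCX  (0<->1): %.0f ns/round trip\n", bounce_ns(0, 1));
    printf("cross CCX (0<->4): %.0f ns/round trip\n", bounce_ns(0, 4));
    return 0;
}

Build with gcc -O2 -pthread. You'll see the gap in nanoseconds; whether that gap matters for your workload is the actual argument here.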

Said the company whose architecture is full of security exploits

Yeah nice architecture

>I've tested 4+0 in BIOS vs. numproc in Windows and I've definitely preferred 4+0 for input response.
Again, with scheduler improvements, you'd be able to get the same results without manually disabling cores for other tasks.
>You can't schedule away the architecture's "flaws,"
I never implied that. I said you can schedule tasks that are latency-dependent and don't require more than 4/8 cores, or would suffer more from the latency than they'd benefit from moar cores, onto a single CCX/chiplet. I also pointed out that the inherent latency issues DAWBench VI highlights were mitigated with architectural improvements as the platform matured between Zen and Zen+, and were likely improved upon again in Zen 2, which means Intel's latency advantage is shrinking.

Overall, Intel's latency advantage is shrinking in lightly threaded applications, and diminishing in heavily threaded tasks as AMD is simply able to outperform through sheer moar cores and better multithreaded performance. That leaves a slim area of advantage where tasks are latency-sensitive and scale beyond 4/8 cores (introducing latency spikes on AMD), but don't scale so much that they exhaust Intel's core capacity, or suffer more from the latency hits than they benefit from faster cores.
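And on Windows you can already do this per-process from userland instead of via BIOS. Minimal sketch using SetProcessAffinityMask; the mask 0xF (cores 0-3 making up one CCX) is my assumption about the layout:

#include <windows.h>
#include <stdio.h>

/* Restrict the current process to the cores in the mask. Child
 * processes inherit the mask, so run your game through a launcher
 * that does this and you get the 4+0 effect without disabling
 * anything in firmware. */
int main(void)
{
    if (!SetProcessAffinityMask(GetCurrentProcess(), 0xF)) {
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    printf("process restricted to cores 0-3\n");
    return 0;
}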

*NEW* Cores doesn't matter
*NEW* Clock Speeds doesn't matter
*NEW* Cache Size doesn't matter

Attached: 1558930639076.png (1070x601, 816K)

based

I M P L E M E N T A T I O N

YAMEROOOOOO

Attached: 1542027477262.png (807x745, 205K)

actually quite funny

>it's all about the implementation bro
>leaks your data to some shitter with an AWS spun up on the same machine as you
>but our performance...

unironically based

Maybe they should have spent all their recent time innovating on CPU performance rather than innovating CPU backdoors for the ZOG state.

I still use a fucking Xeon E5450 and 8GB DDR2 and can run Witcher 3 at 60 FPS at Ultra with my RX 580 Nitro+. Step the fuck up Intel

>our (((10nm*))) process delivers a regression in clockspeeds, what are we gonna dooo?!

youtube.com/watch?v=61i2iDz7u04

based

You and your friend are brainlets. The latency difference is measured in nanoseconds, not milliseconds. The effect of the increased latency is that certain algorithms with large amounts of sequential data dependency will have decreased throughput, not that you experience greater latency in muhgames. If the game is running at the same FPS, then it will have the same level of input delay.

OK, I'll give you that. But either way, as far as raw mouse response goes, it would be very difficult to schedule the interrupts, drivers, OS calls, etc. to be as low-latency as possible without making a niche RTOS. Windows and Linux won't rewrite their schedulers just for Ryzen.

>M-MUH ARCHITECTURE!
>Ignore the fact that Core is swiss cheese right now
Which Zen2 should I get? Thinking maybe the 3800x

Attached: bgHome1903x875NNN.jpg (1600x736, 184K)

Those nanoseconds scale into milliseconds, brainlet.
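Back of the envelope, with both numbers being made-up assumptions purely to show the scaling:

#include <stdio.h>

/* Illustrative only: an assumed ~100 ns cross-CCX penalty times an
 * assumed 10,000 dependent transfers per frame. Neither number is a
 * measured figure. */
int main(void)
{
    double penalty_ns     = 100.0;    /* assumed extra cost per hop   */
    double hops_per_frame = 10000.0;  /* assumed dependent hops/frame */

    printf("extra cost: %.1f ms per frame (60 fps budget is 16.7 ms)\n",
           penalty_ns * hops_per_frame / 1e6);
    return 0;
}

Whether a real game actually does thousands of cross-CCX dependent hops per frame is exactly what's in dispute, but that's how nanoseconds turn into milliseconds.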

lmao gaymurs are fucking retarded

3900X, it's gonna crush even the 10 core intel garbage housefire.

TOP KEK

Attached: yLMsmJ8.jpg (575x575, 65K)

The 3800X will have no chiplet-to-chiplet latency, which is nice. Wait for overclocking results.

That's not what they're saying, you idiots.

They're saying cores, clock speeds and cache sizes don't matter

"""""implementation"""""" bro ;)

BASED

They're right about clock speeds. IPC matters more, something AMD BTFOs them in as well.

yeah, that's not how it works, communist AMDrone.

Wtf I love communism now.

BASED
A
S
E
D

Post your face when you never owned an Intel product.

Attached: index.jpg (229x220, 8K)

Based

Attached: 1521516636756.jpg (300x300, 17K)

I don't give a shit about the AMD vs Intel thing (it's a CPU lmao) but this seems like maximum coping from Intel.

And it only changed when we couldn’t compete in those other metrics.

Also, when we can compete again (if we survive all this), it will change back.

>maximum coping from Intel.
Of course.
Their new cores have been optimized to cope.

Attached: Hardware cope.png (1160x250, 68K)

Architecture
> 14nm+++++++++
Workload
> 8 cores forever
Implementation
> ZombieLoad

Attached: 1490323756520.jpg (800x600, 259K)

By workload they mean you're supposed to use only software built with Intel's compiler :^)

>AMD Athlon
>AMD Turion X2 2GHz laptop
>Apple Aidsbook with Core2 Duo (needed it for imgayOS development)
>Intel Core i7 860
>AMD Zen 1800X

My upgrade cycle is around 5-6 years, but now that I have more money, I think I might drop that down to 2-3 years. Waiting for the next Zen 2 TR.

Attached: 1558895762920.jpg (900x1200, 182K)

OOOF

Single-core perf is a combination of clock speed, IPC, and cache. To say "IPC matters more" doesn't make much sense.
AMD's architectural IPC improvements, combined with a node shrink (and one going from a low-power to a high-power node at that), compound to mean Intel is absolutely fucked.
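In numbers (made-up figures, just to show how the product works):

#include <stdio.h>

/* Toy model: single-thread throughput ~ clock * IPC. The clocks and
 * the +15% IPC below are illustrative assumptions, not real
 * Skylake/Zen 2 data. */
int main(void)
{
    double intel_ghz = 5.0, intel_ipc = 1.00;  /* assumed baseline   */
    double amd_ghz   = 4.6, amd_ipc   = 1.15;  /* assumed IPC uplift */

    printf("Intel: %.2f  AMD: %.2f  (relative single-thread perf)\n",
           intel_ghz * intel_ipc, amd_ghz * amd_ipc);
    return 0;
}

A clock deficit gets erased fast once the IPC and node advantages multiply in; that's the "compound" part.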

Today, in the real world, it's defined by paying companies to optimise their software specifically for your hardware instead of for both you and the competition.

Shush goy

based

Attached: 1527941367525.jpg (269x493, 45K)

i.4cdn.org/wsg/1558931960706.webm

>Osu! on Linux with a TR occasionally and he constantly complains about the latency.
Sorry bud, but for something you'd actually feel in the controls, we're talking about 50ms of latency here, and the interconnects on the TR are not 100km long.

>architecture, workload and implementation
But they suck at those. Why do you think we have all these mitigations?

Jesus christ almighty, these fuckers are making it easy for me to choose shit for my next build

Does anyone have more of these? Need to populate a folder so once this gets released I can properly make fun of Inlel.

Attached: 1558946089649_1.png (801x1500, 1.05M)

Attached: 1457294956960.jpg (801x1500, 177K)

oy vey

Nice cope

b a s e d

Marketers trying to sound philosophical and progressive, when in reality they're just making themselves look like idiots

Attached: 1497848607524.jpg (5098x1500, 929K)

Friendly reminder that
Intel Inside = ISRAEL INSIDE

FUCK INTEL AND FUCK ISRAEL

How are they wrong though?

Intel Inside is already worrying enough as a slogan.

Yeah, but it's clearly a lie.
In reality it's your intel leaking outside your CPU because of vulnerabilities.

Releasing a literal 8-core housefire at 5GHz pretty much contradicts their own words.

>retarded
3700x 65W is where it's at

Worse binning, too much silicon lottery

>released
>"the processor is launching in Q4 of this year"
Releasing means you can actually buy it, not this "auction 80 of them 7 months from now" joke they're trying to pull off again.

Bazonka

Jow Forums is too much of a brainlet to understand the true definition of real-life performance. You're all so focused on benchmarks and numbers that you've forgotten why Intel became the world leader in the desktop CPU market in the first place.

Lies and bribes?

oh, right...

Last intel cpu I had was a 386.

Attached: bnm.png (438x342, 150K)

Still remember when Japan raided Intel's offices.

Hahahahaha yikes!

Based and redpilled.

Brainlet here, plis explain.
What exactly is implementation?

Cringe but redpilled

With marketing, bribes, and other unethical practices.
Intel becoming so hegemonic had nothing to do with their engineering competence.

>architecture
But Skylake is almost 4 years old.

>osu! on linux
you know the latency is immense due to wine/pulseaudio, right? it's a known issue.

ABSOLUTELY BASED AND SATANIAPILLED

Attached: (COMIC1☆12) [MOSQUITONE. (Great Mosu)] The Archdemon In Love (Gabriel DropOut) [English] {Tanjoubi (2508x2730, 2.91M)

No, it's defined by vendor-specific optimizations in software and underhanded, anti-competition business strategies

You don’t have to use pulse. apulse is a thing.

You can get way better latency by tuning pulse and patching the wine-pulse driver; however, it seems to depend on your hardware (and is pretty hard to set up, fuck compiling wine). ALSA only gives slightly better results than pulse, but it's still way worse than Windows and really noticeable.
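For anyone trying it, the usual starting point is shrinking the buffers in /etc/pulse/daemon.conf; the values below are illustrative and your card may not cope with them:

; /etc/pulse/daemon.conf -- illustrative low-latency settings; assumes
; your hardware handles small fragments without xruns
default-fragments = 2
default-fragment-size-msec = 5

Then restart pulse and test; if you get crackling, back the fragment size off.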

source?

it's right there in the goddamn image (and filename) you imbecile

How embarrassing can they get? They already tried to do this with the 8086k.

They're kikes, they're legit sociopaths.