How do we fix our dogshit latency?

>How do we fix our dogshit latency?

Attached: gamecache.png (608x450, 141K)

Other urls found in this thread:

en.wikichip.org/wiki/intel/core_i7/i7-5775c
browser.geekbench.com/v4/cpu/14442914
browser.geekbench.com/v4/cpu/14535508
spectrum.ieee.org/tech-talk/semiconductors/processors/4-things-to-know-about-the-biggest-chip-ever-built

what does latency actually affect?

AMD has lower latency than Intel on products with >8 cores.

should be spelled MOAR

Honestly they should start throwing L4 cache on these things to reduce the power density. Chip design at 7nm seems like a different beast.

Smoothness and input response in games.

I would be interested to see what Intel does with their next generation since they will have to extend their ringbus to deal with more total cores
>input response in games
Bait

How is it bait if zen 2 has nearly double the interrupt latency?

>in games
That's where you're wrong, neckbeard kiddo

>I would be interested to see what Intel does with their next generation since they will have to extend their ringbus to deal with more total cores
they already have mesh and monolithic dies with 28 cores right now?
ringbus apparently works at least up to 10 cores and maybe more; not sure if moving to a smaller process node would make a difference for this. but really, more than 8 or 10 (or 12 if you're pushing it) cores for mainstream desktop is just pointless and wasteful for >99% of users, so I'm not even sure they'll bother desu, they might just let AMD have the marketing win.
>inb4 muh content creation
there already is X-series and Xeon-W for this shit if you're a professional, and if not, really, who gives a shit if their 30-second render of a video for instagram once a week (already way more rendering than the average user does) takes 25 seconds instead. people would rather have an overall faster, snappier experience, better battery life on mobile, etc

just add 16gb of l3 cache
thank me later amd

> input response in games.
Throughput at the cost of latency has been a trend in tech forever. Machines from the last millennium respond so fast they make Notepad in Windows feel like you're typing over SSH with 500ms ping. This thread is just hilarious bait.

A microsecond slower interrupt doesn't matter especially for gaming when GPUs and displays add 20-50ms of latency.
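Rough numbers to put that in perspective (everything below is assumed for illustration, not measured on any particular setup):
[code]
/* Back-of-the-envelope: a ~1 us interrupt latency delta vs the display chain.
   All figures are assumptions for illustration. */
#include <stdio.h>

int main(void) {
    double interrupt_delta_us = 1.0;           /* assumed extra interrupt latency */
    double display_chain_ms   = 30.0;          /* assumed render + scanout + panel */
    double frame_ms_60hz      = 1000.0 / 60.0; /* 16.7 ms per frame at 60 Hz */

    double delta_ms = interrupt_delta_us / 1000.0;
    printf("extra interrupt latency: %.3f ms\n", delta_ms);
    printf("display chain:           %.1f ms (%.0fx larger)\n",
           display_chain_ms, display_chain_ms / delta_ms);
    printf("one 60 Hz frame:         %.1f ms (%.0fx larger)\n",
           frame_ms_60hz, frame_ms_60hz / delta_ms);
    return 0;
}
[/code]
Even if the interrupt delta were 10x worse it would still be buried under the frame interval alone.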

>Smoothness
I'm not aware of AMD hardware having inherent jitter and stutter issues (ones not caused by immature drivers or microcode etc). A 1600 + 480 has been damn smooth for me.
Also a modern gaymer runs Discord, Spotify, Steam, probably also a real browser, plus whatever mandatory background services and telemetry Microsoft forces down your throat etc etc. The extra cores definitely help to handle that load and maintain a smooth experience.

Maximum throughput.
Caching only means you have multiple buffers you can sideload into for more throughput, but the latency is still the same.

Then again modern hardware is so stupidly fast that it's only an issue if you go for specific benchmarks for it, i.e. disregard faster modern instruction sets and go for something like single-core performance.
Crysis is still a good benchmark because of that, since outside of new instruction sets CPUs have not gotten much faster at single-threaded work.

>MAKE MORE CACHE
that was literally intel """""""philosophy""""""" between 2006 and 2017

Anything you can't cache or prefetch, say you have a big database in memory with random access patterns, latency becomes a bottleneck over time.
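A toy pointer-chase makes this easy to see for yourself; the sketch below (buffer size and iteration count are arbitrary) walks a random cyclic permutation so the prefetcher can't help, and every dependent load pays roughly full DRAM latency:
[code]
/* Toy pointer-chase: a random permutation defeats the prefetcher, so each
   dependent load pays close to full DRAM latency. Sizes are arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     (1 << 26)   /* 64M entries * 8 bytes = 512 MiB, far beyond any cache */
#define STEPS (1 << 24)

int main(void) {
    size_t *next = malloc(N * sizeof *next);
    if (!next) return 1;

    /* Build a random single-cycle permutation (Sattolo's algorithm). */
    for (size_t i = 0; i < N; i++) next[i] = i;
    srand(1234);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;              /* j < i keeps it one big cycle */
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (size_t s = 0; s < STEPS; s++) p = next[p];  /* chain of dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per access (p=%zu)\n", ns / STEPS, p);  /* print p so the loop isn't optimized out */
    free(next);
    return 0;
}
[/code]
Build with something like cc -O2 chase.c; on a typical desktop the per-access time lands somewhere around 70-100 ns, orders of magnitude worse than a cache hit.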

Not memory latency

It's funny because more cache is always effective, it just has a high cost in terms of die area.

With any performant program, if you're regularly touching memory outside your current cache contents you've already lost.
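The classic demo of "already lost" is the same loop in two access orders; the sketch below (matrix size picked arbitrarily) sums the same 512 MiB of doubles row-major, streaming through cache lines, and column-major, hitting a new line almost every access:
[code]
/* Same arithmetic, two access orders: row-major streams through cache
   lines, column-major strides by a full row (64 KiB) per access. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 8192   /* 8192*8192 doubles = 512 MiB */

static double elapsed_ms(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void) {
    double *m = malloc((size_t)N * N * sizeof *m);
    if (!m) return 1;
    for (size_t i = 0; i < (size_t)N * N; i++) m[i] = 1.0;

    struct timespec t0, t1;
    double sum = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t r = 0; r < N; r++)              /* row-major: sequential addresses */
        for (size_t c = 0; c < N; c++) sum += m[r * N + c];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("row-major:    %.0f ms (sum=%.0f)\n", elapsed_ms(t0, t1), sum);

    sum = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t c = 0; c < N; c++)              /* column-major: huge stride */
        for (size_t r = 0; r < N; r++) sum += m[r * N + c];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("column-major: %.0f ms (sum=%.0f)\n", elapsed_ms(t0, t1), sum);

    free(m);
    return 0;
}
[/code]
The column-major pass is typically several times slower despite doing identical arithmetic, and no amount of extra cache fixes an access pattern like that once the working set is big enough.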

It's already been fixed in zen 2 with the larger L3 cache. It basically means you can use 2400MHz CL15 RAM and have similar input latency to 3200MHz CL15 RAM, though having that 3200MHz CL15 RAM is always better as IF is still somewhat linked to RAM speed/latency.

What WAY too many people are confusing is their monitor response times with "input latency". That can only effectively be fixed by going oldschool CRT or oled especially if you want more than 60 FPS.

Attached: 13053644169l (1).jpg (1918x1078, 602K)
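
For reference, first-word latency is just CAS cycles divided by the actual memory clock (half the transfer rate), so CL15 at those speeds works out as below. This is only the CAS part; the rest of the memory latency (row activation, controller, fabric) doesn't change with the DIMM's rated CL:
[code]
/* First-word latency = CAS cycles / actual clock = CL * 2000 / MT/s, in ns. */
#include <stdio.h>

int main(void) {
    int cl = 15;
    int speeds[] = { 2400, 3000, 3200 };   /* DDR4 transfer rates in MT/s */

    for (int i = 0; i < 3; i++)
        printf("DDR4-%d CL%d: %.2f ns to first word\n",
               speeds[i], cl, cl * 2000.0 / speeds[i]);
    return 0;
}
[/code]
That's 12.5 ns vs about 9.4 ns, a difference of a few nanoseconds on a total memory latency of 60-80+ ns.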

Imagine if Intel replaced the GPU with L4 cache. Ahhhh, yes.

everyone knows l1 cache is what matters

No it wasn't, Intel still has lower interrupt latency which is what matters in games.

They already had an L4 cache on the 5xxx series desktop chips, which had iGPUs too.

en.wikichip.org/wiki/intel/core_i7/i7-5775c

with in-spec (i.e not warranty voiding) memory frequency

browser.geekbench.com/v4/cpu/14442914
>82ns

browser.geekbench.com/v4/cpu/14535508
>78.4ns

amd could make 1 ccd ryzens with even more cache and mark them as the ultimate gayminge cpus

You picked X299, which is quad channel and are comparing it to a consumer dual channel chipset from AMD.

Come on, you know that's not apples to apples.

Attached: 2019-08-31 09_48_43.png (532x503, 119K)

that's an 8-core CPU, read the post again carefully

The Intel RAM is clocked at 1319MHz vs 1600MHz on the AMD and has 2ns higher memory latency. You just proved yourself wrong. Intel has WAY more headroom to lower memory latency than AMD.

It doesn't matter how many CCDs there are. At least not in the way you think it does. CCXs are what cause high latency. But more CCDs = more CCXs = greater chance of interCCX communication.

Attached: 4L_pNyD4ozQ.jpg (822x777, 207K)

Yes it is an 8 core CPU, you picked x299 which uses quad channel memory and inherently has higher latency though.

The 9700k that I posted, is also an 8 core CPU, but it's on Z390 and using dual channel memory, which is why it gets sub 60ns.

do you not understand what "in-spec" means or is this a bait post?

Who runs enthusiast platforms in-spec?

Moar cache
Moar FCLK

>if you just compare totally different products, AMD wins

wow shocking

How much lower do you think it is? Even if you have to make 10000 trips to memory for a single interrupt the difference wouldn't add up to something a human could detect.

the whole advantage of quad channel memory is higher memory bandwidth from interleaving, and yet it gets 34.9 GB/s vs 40.5 GB/s on the dual channel AMD platform
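For scale, theoretical peak is channels × 8 bytes per transfer × MT/s; the rates below are assumptions roughly matching the two setups, not the exact kits in the screenshots:
[code]
/* Peak DRAM bandwidth = channels * 8 bytes per transfer * MT/s. */
#include <stdio.h>

int main(void) {
    /* assumed configurations, not the exact kits from the screenshots */
    double dual_mts = 3200, quad_mts = 2666;

    printf("dual channel DDR4-%.0f:  %.1f GB/s peak\n",
           dual_mts, 2 * 8 * dual_mts / 1000);
    printf("quad channel DDR4-%.0f: %.1f GB/s peak\n",
           quad_mts, 4 * 8 * quad_mts / 1000);
    return 0;
}
[/code]
So neither measured number is anywhere near its ceiling, which is why the quad channel platform posting the lower figure looks off.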

It very well does add up, the difference was night and day even at capped FPS.

Attached: 4L_cYxMsC0J.jpg (2448x3264, 1.78M)

Take a look at the average interrupt to DPC latency.

Attached: 4L_2FeXZ9od.png (754x200, 25K)

Which is why you know it's bullshit, because even a stock 9700k gets ~39GB/s memory bandwidth.

AIDA64 memory bench is far better for this type of comparison than fucking geekbench.

Except it's nowhere near as noticeable as monitor response time and monitor input latency which dipshits often confuse for "muh X CPU has higher latency".

IPS monitors, which make up over 90% of the monitors in use, can have response times of 20+ms and even more in input latency. Those things combined can give users over 50ms of latency, which can very easily be felt, as human eyes can discern individual changes in motion as short as 10ms.

Point is zen 2 core latency is the least of your worries if you really give a shit about latency.

The 27" nanoIPS from LG that came out last month has input latency and pixel response time that's better than any other IPS on the market.
1 of 2

Attached: 2019-09-03 10_25_32.png (664x634, 37K)

2 of 2

Attached: 2019-09-03 10_25_59.png (956x800, 110K)

Everyone playing competitive games is already using 240Hz displays, boomer. Display lag is obvious so it goes without saying. CPUs are much less documented as it's way beyond the average retard's understanding.

no, because you want the number of transactions high, not the latency low. running stuff in parallel means throughput. same goes for the web, facebook doesn't care if their server needs 1 or 2ms, it is all about serving millions of users.
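That's basically Little's law (in-flight requests = throughput × latency); a quick sketch with made-up numbers:
[code]
/* Little's law: in-flight requests = target throughput * per-request latency.
   Doubling latency just means you need twice the concurrency, not half the users.
   All numbers are hypothetical. */
#include <stdio.h>

int main(void) {
    double target_rps = 1e6;                 /* hypothetical: a million requests/s */
    double latencies_ms[] = { 1.0, 2.0 };

    for (int i = 0; i < 2; i++) {
        double in_flight = target_rps * latencies_ms[i] / 1000.0;
        printf("%.0f ms per request -> need %.0f requests in flight for %.0f req/s\n",
               latencies_ms[i], in_flight, target_rps);
    }
    return 0;
}
[/code]
As long as you can keep enough requests in flight, per-request latency barely dents total throughput.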

These latencies are multiple orders of magnitude smaller than anything noticeable by a human being. Get a faster responding display.

>t. 60hz ultra settings vsync gamer ryzen user windows 10 apologist

Which is pants-on-head retarded as:
1.) The pixel response time can't keep up (unless CRT/OLED)
2.) Human vision maxes out at 10ms changes (ie 100FPS)
3.) To get ANYWHERE near this render speed you have to drop down to 720p or play some dumb boomer game like csgo with minecraft graphics

see

interesting, does the latency drop if you don't use all the channels or is it determined by how many it can address to begin with?

I haven't seen anyone running X299 without quad channel, so I honestly don't know.

I would imagine the latency would drop assuming the IMC isn't fucking retarded.

>every workload using a database is throughput oriented

Be honest with me, do you even play games?

I don't even remember how many times I've argued with retards like you, you people don't even know which latency it is you're fucking trying to fix.

Irrelevant

So the answer is "no." Thanks.

You know the main thing that contributes to DPC latency is Nvidia graphics cards right?

The answer has nothing to do with the thread. Everyone posting in here could have never played games in the last year for all you know. It doesn't invalidate the criticism of this thinly veiled intel COPE™ thread.

Oh wow, I see how it is.

Your opinions are worthless, please stop now.

Now remove the GPU and make MOAR cache. This would be absolutely sick and practical since the iGPU is worthless on enthusiast class processors.

Yup, what a lot of people really love to ignore is that those on 1080p don't even have a CPU bottleneck. Anything from an RX 570 all the way up to an RX 5700, it doesn't matter what CPU you use as long as it's at least a hexa-core with 12 threads.

Then those who do cash out for 1440p gaming ALSO don't have a CPU bottleneck unless they're stupid enough to actually buy a 2080ti.

This isn't an opinion, just a hard to grip fact of life for the average braindead intard.

Attached: 1080p_Ultra.png (1369x1385, 81K)

>ultra quality settings in a CPU thread
Like I said, your opinions don't matter.

You know these are measured in microseconds right? Even the most insane latency freaks never claim they can detect something like that.

And you know that CPU cycles are measured in nanoseconds or lower? There's no way anyone could ever notice that.

Who plays on low/medium settings at 1080p even if you have an Rx 570?

Your game goes through millions of CPU cycles per frame. Your input goes through a handful of DPCs.

Competitive players

Those thousands of DPCs halt the entire system to get serviced, where microseconds begin to matter.
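For a sense of scale, here's the cycle budget with everything assumed (clock speed, DPC count, and DPC cost are made-up round numbers, not measurements):
[code]
/* Cycle budget per frame vs cycles spent servicing DPCs. All figures assumed. */
#include <stdio.h>

int main(void) {
    double clock_hz       = 4.0e9;          /* assumed 4 GHz core */
    double frame_s        = 1.0 / 144.0;    /* ~6.9 ms frame at 144 FPS */
    double dpc_s          = 10.0e-6;        /* assumed 10 us average DPC service time */
    double dpcs_per_frame = 20.0;           /* assumed */

    double frame_cycles = frame_s * clock_hz;
    double dpc_cycles   = dpcs_per_frame * dpc_s * clock_hz;

    printf("cycles per frame: %.0f\n", frame_cycles);
    printf("cycles in DPCs:   %.0f (%.1f%% of the frame)\n",
           dpc_cycles, 100.0 * dpc_cycles / frame_cycles);
    return 0;
}
[/code]
Whether a few percent of the frame matters is the whole argument here; a single 200 µs spike landing on the game thread mid-frame is a different story than the average case.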

>playing bibeo gays is now a sport
How soon before we have competitive wanking competitions?

Because latency is just a metric, it has no inherent meaning. There isn't a single benchmark where zen 2 performs any worse than zen+. Also zen 2 has better single-core latency than intel as it is. And CCX latency doesn't really matter outside heavy multi-core, which amd already beats intel at even with the extra latency.

Attached: IMG_9659.jpg (966x602, 58K)

He made more than you'll ever make

Attached: 4L_ZJECuRL9.png (597x780, 612K)

So you're saying it affects your frame rate by interrupting the game? Then latency is only one component of total throughput. Buy whatever CPU gets you the most FPS.

There's way more to a game's performance than just framerate. Frame time deltas are what actually matter and low latency is the only way to achieve it.

So you're saying that somehow a higher latency makes the frame time less stable? There's no way you can back that idea up. It's nonsense.

>there already is X-series and Xeon-W
You mean the 3900x, 3950x and Threadrippers.
Intel's overpriced shit is nothing but a waste of money.

He sucked more cock than I ever will and he's a literally who? to me.

Nvidia drivers universally cause frequent 200us DPC latency spikes. Does that make them unusable for esports?

Also I'm looking forward to competitive wanking. I'm the biggest wanker you'll ever meet.

I thought that AMD CPUs had more stable frametimes?

CPUs with more cores and threads tend to have more stable frame times because the game threads are less likely to be interrupted by background tasks and because any asset streaming on a job system has more unused cores to delegate to.

Only due to more cores. Intel doesn't have CCXs or distant memory controllers, so they inherently have less jitter. Plus you can overclock to your heart's desire to further reduce latency. Stuttering is usually caused by misbehaving and unnecessary drivers like IME that AMD doesn't have, but on Intel come by default with the bloatware chipset drivers.

Unless you think there are some frames that go down 100x more latency dependent paths than others there is no reason latency would affect frame rate stability.

>This thread is just hilarious bait.
it's kind of lame. op probably still uses hdd for his boot drive.

The occasional game I play doesn't need all that crap.

Man all my hype for intel just instantly died after seeing this. I'm not getting a fucking 2080ti just to play video games. Fuck that.

Can you post the 1440p Result?

Not him but he's mostly right: at 1440p the gains from a 3600 to an i9-9900K are so minimal that unless you have a 2080ti the GPU will become the bottleneck. Bar a few meme titles like far cry new dawn.

Attached: ACO_VeryHigh_1440p.png (1370x1412, 59K)

No game average? Lame

My latency is awesome

Attached: memory.jpg (540x518, 97K)

AMD context switch cost is like one tenth of that of Intel.
If anything, Intel has a lot of catch up to do.

Attached: 1557122770727.png (876x556, 358K)

Attached: vs.jpg (1920x1080, 528K)

NOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO

Meanwhile @ Intel

Attached: lakes.png (800x554, 132K)

>My latency is awesome
Not everyone has 16-17-17-39 CR2. Is memory latency what you're talking about?
Computer Architecture? Do you speak it?

>Not everyone has 16-17-17-39 CR2

It's actually a pair of 3200 b-die sticks. It took me 4 months of non-stop tinkering to get it to where it is now. It was CL14 3200 originally.

>My latency is awesome
Your CPU may be a bit faster and have more cores, but it spends more clock cycles waiting for ram.
As you can see, my ram is only DDR3, but it operates closer to my clock speed, giving me more efficiency and less 'latency'.
9-10-9-28 vs 16-17-17-39. Looks like your CPU has to wait 39 cycles during a fresh (non-cached) read.

Hmm... should I post my AIDA64 results?
Personally, I think you wasted your money, as my system was ... free.

Attached: Real_Latency.png (836x412, 69K)

That's what the higher clock speed is for, you absolute fucking retard.

You're absolute fucking retarded if you think nanosecond latency on the processor level affects input latency.
You're confusing ns with ms.

>GPUs and displays add 20-50ms of latency.
Even a bad setup is like 20ms max on a 60Hz screen at 60 FPS. That's render time + panel response time.
At 144Hz you're doing around 8-10ms. At that point, even the latency of slow USB devices becomes noticeable.
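Quick arithmetic on the display floor (the panel response below is an assumed 4 ms, real panels vary a lot):
[code]
/* Frame interval + assumed panel response at different refresh rates. */
#include <stdio.h>

int main(void) {
    double refresh[] = { 60, 144, 240 };
    double panel_ms  = 4.0;                  /* assumed pixel response time */

    for (int i = 0; i < 3; i++)
        printf("%3.0f Hz: %.1f ms frame + %.1f ms panel = %.1f ms floor\n",
               refresh[i], 1000.0 / refresh[i], panel_ms,
               1000.0 / refresh[i] + panel_ms);
    return 0;
}
[/code]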

>machines from the last millennium respond so fast
they respond fast because they used monolithic chip designs, and so the CPU never had to communicate far (order of ns) to do almost anything
spectrum.ieee.org/tech-talk/semiconductors/processors/4-things-to-know-about-the-biggest-chip-ever-built

Nowadays chiplet design has become extremely popular, at the cost of latency.

1G cache by 2022