New Cinebench R20 results are coming in. Where the fuck is AMD?

Attached: Capture.jpg (676x1235, 151K)

AMD is antisemitic.

>beat by an i3

Embarrassing desu

Huuur but it is UNFAIR AND CHEATING that Intel works with the people who make professional software to ensure that their products get the best performance. Just like how Nvidia is cheating by using CUDA when everyone should be forced to use OpenGL to ensure that every benchmark is "fair".

9700k btfo

Doesn't it use an Intel-developed raytracing engine for its benchmark? Wouldn't that naturally favor Intel?

what is it even called? Bree Bree? it's open sores
embree.github.io/downloads.html
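Since people are asking what Embree actually is: it's Intel's open-source ray tracing kernel library, which R20's renderer reportedly sits on top of. Calling it directly looks roughly like this; a minimal sketch against the Embree 3 C API (one triangle, one ray; the scene setup and lack of error handling are mine, not anything out of Cinebench). Builds with something like gcc tri.c -lembree3 -lm.

[code]
#include <embree3/rtcore.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Device + scene own the BVH that Embree builds with its SSE/AVX kernels. */
    RTCDevice device = rtcNewDevice(NULL);
    RTCScene scene = rtcNewScene(device);

    /* One triangle in the z = 1 plane. */
    RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
    float* v = (float*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_VERTEX, 0,
                                               RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
    unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_INDEX, 0,
                                                       RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
    v[0] = -1; v[1] = -1; v[2] = 1;
    v[3] =  1; v[4] = -1; v[5] = 1;
    v[6] =  0; v[7] =  1; v[8] = 1;
    idx[0] = 0; idx[1] = 1; idx[2] = 2;
    rtcCommitGeometry(geom);
    rtcAttachGeometry(scene, geom);
    rtcReleaseGeometry(geom);
    rtcCommitScene(scene);

    /* One ray from the origin straight down +z. */
    struct RTCIntersectContext ctx;
    rtcInitIntersectContext(&ctx);
    struct RTCRayHit rh = {0};
    rh.ray.dir_z = 1.0f;
    rh.ray.tfar = INFINITY;
    rh.ray.mask = 0xFFFFFFFFu;
    rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;
    rtcIntersect1(scene, &ctx, &rh);

    puts(rh.hit.geomID != RTC_INVALID_GEOMETRY_ID ? "hit" : "miss");
    rtcReleaseScene(scene);
    rtcReleaseDevice(device);
    return 0;
}
[/code]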

Embree? Like Embryo.

So it measures how well AVX, AVX2 and AVX-512 work on the CPU. Given that Ryzen issues 256-bit AVX ops as two 128-bit µops instead of having native 256-bit units, this will disadvantage AMD for sure.
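For reference, this is the kind of loop where that difference shows up; a hedged sketch with plain AVX2/FMA intrinsics (function and array names invented for illustration; build with -mavx2 -mfma). On Zen/Zen+ every 256-bit op below gets cracked into two 128-bit µops, while Intel cores since Haswell issue each as a single µop:

[code]
#include <immintrin.h>

/* out[i] += a[i] * b[i], 8 floats per iteration with AVX2 + FMA.
   n is assumed to be a multiple of 8 to keep the sketch short. */
void fma_avx2(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);   /* one 256-bit load */
        __m256 vb = _mm256_loadu_ps(b + i);
        __m256 vo = _mm256_loadu_ps(out + i);
        vo = _mm256_fmadd_ps(va, vb, vo);     /* fused multiply-add */
        _mm256_storeu_ps(out + i, vo);
    }
}
[/code]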

>intel has a 20% single core advantage
wtf amd poozen shills told me it was almost the same!!

>8600k rekting every other CPU

Attached: 1551635835406.jpg (400x400, 19K)

The new test is a measure of AVX advantage.

Proprietary things like CUDA lock software to the hardware of one (1) specific vendor (Nvidia), so depending on CUDA is the same as spreading your ass for Nvidia to rape it.
Meanwhile OpenGL, Vulkan, Freesync and most other standards AMD develops/promotes are open, so every hardware vendor can support them.
Nvidia tries to create a soft-locked monopoly while AMD does not (yet; it's still a company which would rape you anally if it could).

i'm so glad I used cinebench scores to determine which hardware to buy. I find myself constantly using cinebench at work. I'd go on about how it's such a great productivity tool, but gosh, that'd be like going on about how great Gentoo is or how Linux is just a kernel: it doesn't need to be said

Well this R20 is going to make it worse.

Almost no one uses AVX in their programs (for good reason, as most programs can't or don't need to vectorize their computations), and those that do are usually scientific ones.
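Which is also why the few programs that do ship AVX code usually guard it behind a runtime check instead of requiring it; a minimal sketch using GCC/Clang's __builtin_cpu_supports (the dispatch messages are placeholders):

[code]
#include <stdio.h>

int main(void) {
    __builtin_cpu_init();                  /* populate the CPU feature flags */
    if (__builtin_cpu_supports("avx2"))
        puts("dispatching AVX2 kernels");
    else if (__builtin_cpu_supports("sse2"))
        puts("dispatching SSE2 kernels");  /* baseline: every x86-64 chip has SSE2 */
    else
        puts("scalar fallback");
    return 0;
}
[/code]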

yep, it's unfair to use AVX or even SIMD instructions, people should avoid them even when they're 100x faster because poozen sucks so it just wouldn't be fair
IT'S NOT FAIR REEEE

Attached: noooooo amdbros this cant be happening.png (1135x792, 98K)

But who cares about 120 vs 100 fps lol

>cuda lock software to tbr hardware of one(1) specific vendor(nvidia)
Because AMD didn't accept the licensing offer Nvidia extended to them like a decade ago.

>OpenGL, Vulkan
Nvidia's engineers actually head the Khronos committees that work on these, though AMD has some engineers on them too. And in the first place, Mantle was designed by Johan Andersson from EA DICE; he has talked extensively about his journey, going from designing the specification to shopping for hardware vendors to implement it. AMD was the only one to bite, so they get points, but people need to stop pretending they led the charge.

>Freesync
Is proprietary, but royalty free. Freesync is a brand and also the technology on the Radeon GPUs that interacts with VESA's Adaptive Sync protocol to enable VRR. G-Sync Compatible does the same thing, also proprietary.

>Nvidia tries to create a soft-locked monopoly while AMD does not
Except they don't. Like I said above, Nvidia offered AMD a CUDA license for "cents per GPU", according to both Nvidia and AMD, and AMD didn't even call back about the offer. Nvidia invented GPGPU as a platform and invested a ton of money into it. OpenCL was literally a shitty clone made to compete against it because Apple didn't like the soft vendor lock. It's fair to say they deserve to license it out for a reasonable fee. The funniest part is that the article below is about PhysX and other CUDA shit, and AMD's rep said they won't support CUDA because they don't like closed software, while literally at the same time promoting the INTEL-OWNED, PROPRIETARY Havok physics framework.

extremetech.com/computing/82264-why-wont-ati-support-cuda-and-physx
>ATI would also be required to license PhysX in order to hardware accelerate it, of course, but Nvidia maintains that the licensing terms are extremely reasonable—it would work out to less than pennies per GPU shipped.

>AMD
>single thread
Athlon days are long gone, they will never surpass Intel again in single thread performance.

woah an actual high IQ post on Jow Forums for once

Attached: rtx 2060.png (500x770, 46K)

No one uses AVX, that's the point. R15 measured SSE2, which most programs actually used regularly. Thus the scores reflected the CPU's real performance. But now you have a CPU like the i3-7350K beating a Ryzen 2700X. The benchmark becomes quite meaningless.

AMD does still do good things. I'm not trying to trash them but there needs to be some push back on the insane fanboyism that paints AMD as Jesus or something

>Vulkan
They were the only ones to help out Johan. Without AMD we probably wouldn't have Vulkan and DX12, since the industry needed a concrete proof of concept.

>Freesync
AMD did advocate for VESA to move the adaptive sync technology from eDP to the full DP spec, under a new name. And even if the certification sucked and we had nothing but lemons for like 2 years, they promoted Freesync so VRR became more affordable for everyone, and now the market is in a pretty good state. That said, Nvidia invented VRR for PCs and launched working products the day AMD was showing off canned prototypes.

>GPGPU
ROCm is a nice attempt I guess, but like many things they do, kind of half-assed. There's decent technical work but little advocacy, as usual. Hopefully they'll carry on.

but the i3-7350k is better in single core than the 2700x lol even without avx

You're confusing different metrics here. This benchmark is a measure of AVX feature sets.

technically correct is the best kind of correct: the post

?
cpu-monkey.com/en/cpu_benchmark-cinebench_r15_single_core-7
the i3-7350k beats it even in r15 single core

This thread is about R20 not R15.

see:

I would never buy Intel pozzed garbage even if it was infinitely better for just $1

Attached: riptel.jpg (1280x720, 131K)

>Because AMD didn't accept the licensing offer Nvidia extended to them like a decade ago.
based AMD

>concern trolling

>>Single thread results
Now post multi thread.

I'm not concerned about shit. I think AMDfags are pompous fucking newfags and they should shut the fuck up, because they're ignorant and have no idea what happened before they joined the hobby a few years ago. The company itself is fine ethically, but I think the same about Nvidia and many other tech companies too, except maybe Intel. I will say, though, that AMD's suffering until recently is a result of their mid-to-late-2000s leadership's arrogance: turning their nose up at the Nvidia merger, turning their nose up at CUDA, overpaying for ATI and then acting like they were big dawgs, breaking years of their own pricing convention and overpricing GCN gen 1. A bunch of dumb shit by really shitty leadership. Rory and Lisa really cleaned things up, thankfully, but they should get over the sour grapes and look at licensing CUDA, though it may be too late since it's a lot more valuable than it was in its infancy, when Nvidia was looking for other hardware partners.

literally

Attached: 1551972116184.gif (1100x600, 465K)

>The company itself is fine ethically but I think the same about Nvidia
impressive opinion

Let's try that again.

Attached: R20v2.png (657x2466, 293K)

post .jpg

It took you three tries to post that image and you know what user, I'm proud that you did it. Good job, you. Never give up.

How the fuck do people manage to get Ryzen CPUs so massively OC'd? No matter what I try I can't make my 2700X break past the 4.20 niggahurtz barrier. The temps are high but under control, staying just under 80C. I guess MSI has shit VRMs.

A relatively significant portion of them are silicon lottery winners (until Intel). I have a Ryzen 3 1200 that can run 4.1GHz @ 1.32-ish volts perfectly stable.

I had a 1600X do 4.2GHz on the same mobo, so maybe, but obviously 8 cores need more power.

>silicon lottery winners (until Intel)
*unlike Intel

Anyone who says "open sores" is usually low IQ

Sub-ambient cooling

Insane voltages. Anything over 1.36V for daily usage on Zen+ is guaranteed to kill it in a few months, although some fanboys insist it's fine.

He said Ryzen, not entry level Intel

rekt

nah that's BS, I had my 1600X running at 1.4V no problem for a year. I'm pretty sure AMD said anything above 1.45V is the danger zone; my mobo seems to agree, since it highlights anything >=1.45V in red but will actually default to around 1.408V or so on the auto setting, even at stock clocks where the CPU boosts up to 4.00GHz most of the time anyway.

Because intel shills (((shut it down)))

5ghz

Attached: Cine r20 8700k 5ghz.png (1042x1929, 110K)

INTEL DOMINATION

AYYMDPOORFAGS CONFIRMED ON SUICIDE WATCH

silicon lottery & insane cooling.

I browse that site regularly; those Germans do like to spend outrageous amounts of money on their virgin tech.

>yep it's unfair to use AVX or even SIMD instructions
It's only fair when you test for maximum power consumption ;)))

Nooo, how do I cope?

Attached: 1700x.png (356x378, 24K)

>compiled with Intel proprietary shit
>gets shocked when AMD doesn't do well

I hope the tech world calls out the bullshit.

one X-series better than 2x EPYC

yeah ok

This is only single-threaded, for fuck's sake. Bet they got paid by Intel.

Wonder how these results will look after the SPOILER "mitigations" start rolling in. Not like speculative execution will have an impact on IPC or anything, right?
And post the fucking multithread results as well when you do this shit.

lol what is with those results? there's no consistency between core counts or clock speeds in the same product family. The perf scaling is garbage.

It's going to be months or even years before the slow pace of the Cinebench team turns R20 into a reliable metric.

>It's going to be months or even years before the slow pace of the Cinebench team turns R20 into a reliable metric.

expect AMD to get shit thrown at them at GDC for not using R20

>single thread
>2019
Why are jews still doing this?
Isn't a ''new'' bench supposed to be built for the future?
Oh rite
Can't have goy tech advance
Worry and program forever about and for single-core, single-thread
I'm really close to totally abandoning any kind of tech and just going off to farm fucking pears and tomatoes

Attached: 125163.jpg (1080x1080, 189K)

what's with the whore?

Cinebench has always defaulted to the multicore benchmark.
The single-core benchmark has to be manually enabled and serves for legacy comparisons.

I'm saying that by this age there should not be single-threaded CPUs, get me?

There will always be some enterprise or corporate legacy software that requires high IPC single threaded performance. Especially things that need to be executed real time and cannot be reasonably multi-threaded.

what is that site

calm down, only a few more weeks until CB R20 DOESN'T MATTER!!! you know it's true

>i3 @ 4.8ghz
>i5 @ 5.4ghz
is this even possible?

sure, with insane voltages for the sake of testing

i3s are basically the same as their bigger brothers but with fewer cores or without HT; don't see why they can't hit 5GHz like the rest. The 5.4GHz on the i5s must have won the silicon lottery though.

Odd it's not on there; my Poozen 7 gets just over 2000 in Cinebench, that is 9900K territory.

PBO on
XMP
3200MHz C14
NH-D15

Also X470 Taichi

Fucking motherboard manufacturers removing 2 SATA ports going from X370 to X470.

This. We need equality of outcome, not meritocracy.
Tear it all DOWN!

>Please acknowledge my FUD

>1600X
This is 1st gen Ryzen, 1.4V is probably OK, but Zen+ has lower limits

Does it matter? Everybody is using M.2 drives nowadays

Single thread. Intel is still better at single thread. This isn't news.

Voltage isn't really the issue; the main issue is heat. If you're seeing 90C @ 1.2V, that's a real issue. If you're seeing 40C @ 1.4V, that's not an issue at all. Of course, I wouldn't trust >60C @ 1.4V at full load.

>40c @ 1.4v
Except that's impossible unless you live in Alaska.

The point is, you can have high voltage if you have your temp under control.

True, but only up to a reasonable limit. Electromigration still occurs at faster rates the higher the voltage, regardless of temperature (unless we're talking about sub-ambient cooling).
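The usual back-of-the-envelope for that is Black's equation for electromigration lifetime, where A, n and E_a are empirical fit constants (nothing here is an AMD-published number):

MTTF = A \, J^{-n} \exp\!\left(\frac{E_a}{k T}\right)

J is current density (which climbs with voltage), T is absolute temperature, k is Boltzmann's constant. Cold buys you margin through the exponential, but the J^{-n} term still eats lifetime as voltage pushes more current, which is the point.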

That's R15, you retard.

>implying I care about meme R20 Ayymd shilling garbage

you did enough to make a post about it and cited the wrong benchmark to make it appear you were close to a 9900k lol

BS, you can easily OC a Ryzen to 4.1GHz even with the stock cooler, but it won't go past that even in a chiller; it's all silicon lottery + your mobo's VRMs and power delivery capabilities.

>Where the fuck is AMD?

Attached: 1551946516712.jpg (3072x1728, 1010K)

Obviously if you're overclocking then you're paying attention to your cooling. If some clueless Linus Tech Tips watching kid wanting to feel cool clocks their Ryzen to 4.2 GHz all cores on automatic voltage+stock cooler and ends up killing their chip then that isn't really the processor's fault. The same is going to end up happening with any processor.

Given that the dude who took that screenshot actually wrote the Ryzen DRAM Calculator, his system is pretty heavily tuned.

>Single thread
>R7 2700x @ 4.4 is equal to i9-7980XE @ 4.5
>who would've thought that with similar IPC, higher-clocking CPUs are faster?!

I don't see the problem here. The only issue is that Ryzens so far can't clock as high; we'll see what 7nm brings to the table.

That's a problem of silicon being NTC (negative temperature coefficient), but thermal throttling should save the chip regardless.

Are you implying it's fake, you dumb coping Intbecile fuck?

No. What I'm saying is that using that as a guideline for typical Ryzen performance is unwise, given the extreme tuning done to it.

What you don't get is that this isn't a showcase of the 2700X, but rather a preview of things to come. Zen 2. Zen 2 THREADRIPPER Mark III. Inturd is ded.

The most you're going to get from extreme RAM tuning a Ryzen 2xxx series is 10-15% sooooo what's the problem here, newfag?

I see a 32-core, 4 channel Ryzen Threadripper at 4Ghz performing about as well as you'd expect an AMD product to perform against Intel products.

youtube.com/watch?v=CZ0SxpGzbw0

If the leaks are any indication, 5GHz is coming.

But I'll be happy with 4.5-4.7GHz for typical overclocks too.

2700X beating the 8700K at housefire speeds/temps using an Intel-pozzed benchmark. Zen 2 will be interesting here for sure.

>single thread

Attached: muhsct.jpg (960x686, 255K)

Going off how shitty Vega 7 overclocks, 7nm will mostly bring better power efficiency, not these BS 16-core 5.0GHz clocks the leaks show.

Vega isn't a high-clock design.