Apple A13 matches the i7-8700K

It matches it in single-core performance.

Could this possibly be the beginning of the end of x86 and even desktops altogether?

Attached: C22CFA25-6DFF-44E3-AA6E-E0944E6E2FF8.jpg (830x334, 34K)

8700K score

Attached: 0ADA3D53-B0A6-4B25-A6B1-C055A03D91C6.jpg (800x965, 56K)

No. First off, the benchmarks are synthetic, and I'd need to see a real, equivalent workload to believe it completely.

Second, a need and demand for a "bigger and more powerful machine" will always exist.

Finally, way too much is tied to x86 right now.

The situation may be different in 5 years, but as of now, no, at least not for "power users".

>Cache-biased benchmark weighted heavily toward integer performance.

Gearbench?

As in, the "every Android manufacturer has been caught faking scores for it due to the open nature of the system, but take our word for it, we're totally not doing it on iOS either" Gearbench??

cool, now you can update your MyFitnessPal much faster

Yes, but the same way that computers used to take up a whole room and now fit in a single case, this may change things to the point where you can get current high-end desktop performance from a tablet. Desktops would then be relegated to professional users only, and would cost much more if you do want better-performing parts that take advantage of the increased size.

Not saying I want this to happen, btw, but it looks likely to me given how fast mobile is catching up with desktop.

How can you “fake scores”, except maybe by overclocking the CPU when a synthetic load is detected? And how much of a difference does that overclock make, 10%? It still basically matches the 8700K.

Tell me more about this. And desu I would also like to see cross-referencing between different benchmarks, but I don't know of any other benchmark that can test both x86 and ARM.

Back to your designated shitting street iJeet.

Attached: 1546397931966.png (817x1337, 322K)

Reminder that OP is a ban evading street shitter evading 500 years of bans.

Attached: 1557335492455.png (1276x667, 378K)

A serious question from someone who knows nothing about ARM CPUs: can they be clocked higher with proper cooling (i.e., the same as PC CPUs)? Could they be made bigger, to include more transistors for faster speeds/more cores, in a desktop setting?

I actually dislike Apple, but they always have the most powerful mobile CPUs; that's just a fact.

I'm not who you are thinking of, and if anything I would be an Apple hater more than a fanboy (though I'm neither in reality; I have no brand loyalty). I just posted this because it's crazy if true.

And they run at 100% usage all the time.
iOS is so poorly optimized that it forces the CPU to max out just to idle.

You actually shit in the street and Apple CPUs get destroyed by everything from Pentium IIIs to Qualcomms, that's just a fact.
Get a fucking life or consider suicide, street shitter.

Attached: 1547978745372.webm (640x290, 2.43M)

Sure, why not.

There may be an architectural limit on the size of each core, but there is no reason why they couldn't add more cores. These CPUs are basically dual-core CPUs, so I imagine with a higher core count they could match x86 in all areas. And there's obviously no reason why they can't be cooled; phones already do it now with heat pipes.

>4W ARM CPU matches 95W x86 CPU
yeah no, that's not how it works.


First off, you're comparing cross-architecture (ARM vs x86) and you're ALSO comparing cross-operating-system (iOS vs Windows).

If you ACTUALLY think a mobile CPU with a power envelope THAT small is gonna compare to a full desktop CPU in performance, you need to seek mental help.

Apple didn't suddenly break the laws of physics OR discover a whole new way to compute. You're just trying to compare two things that simply aren't directly comparable, at least not in the manner you're attempting to compare them.

>How can you “fake scores” except maybe overclock the CPU when a synthetic load is detected?
If it's a popular benchmark, then the calculations are known beforehand. One way is that the phone detects that the benchmark is being run and provides hardcoded results.
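
The detect-and-boost variant you mentioned would look roughly like this in practice. Purely illustrative C sketch, NOT any vendor's actual code; the package names and the frequency value are made up for the example, while the cpufreq sysfs path is the real one on Linux/Android:

#include <stdio.h>
#include <string.h>

/* Hypothetical whitelist of benchmark package names. */
static const char *benchmarks[] = {
    "com.primatelabs.geekbench",
    "com.antutu.ABenchMark",
};

static int is_benchmark(const char *pkg) {
    for (size_t i = 0; i < sizeof benchmarks / sizeof *benchmarks; i++)
        if (strcmp(pkg, benchmarks[i]) == 0)
            return 1;
    return 0;
}

/* Raise the per-core frequency ceiling through the real cpufreq
   sysfs node (needs root, which a vendor system service has). */
static void set_max_freq_khz(int cpu, long khz) {
    char path[96];
    snprintf(path, sizeof path,
             "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_max_freq", cpu);
    FILE *f = fopen(path, "w");
    if (f) { fprintf(f, "%ld\n", khz); fclose(f); }
}

/* Imagine this being called by a system service on every app switch. */
void on_foreground_app(const char *pkg) {
    if (is_benchmark(pkg))
        for (int cpu = 0; cpu < 8; cpu++)
            set_max_freq_khz(cpu, 2840000); /* pin to max, ignore the normal thermal policy */
}

Either way the benchmark binary itself runs unmodified, which is why this kind of thing is hard to catch from inside the app.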

You are schizophrenic

>Wastes time on stupid animation to open app
>Literally hangs for 1 second

You are shitting in the street.

Attached: 1539874452950.png (602x1300, 1.18M)

Extreme hand-optimization.
It makes a benchmark no longer representative of anything since nothing else is so optimized.

You forget that it's a 7nm CPU with two big cores running at about 2.66 GHz. The 8700K is 14nm, has three times the high-performance core count, and runs at 3.7 GHz base. All of that adds up.

There is also Hyper-Threading, which further drives up the power consumption. And on top of that there are architectural improvements for power efficiency; the 8700K is already a couple of years old, and it's basically just Skylake.
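
A back-of-the-envelope with the standard dynamic-power relation P ≈ n_cores × C × V² × f makes the gap less mysterious. Minimal C sketch; the two voltage figures are assumptions picked for illustration, not measurements:

#include <stdio.h>

int main(void) {
    double cores = 6.0 / 2.0;   /* 8700K big cores vs A13 big cores */
    double freq  = 3.7 / 2.66;  /* clock ratio, GHz */
    double volt  = 1.15 / 0.85; /* ASSUMED core voltages, illustration only */

    /* Dynamic power scales linearly with core count and frequency,
       quadratically with voltage. */
    printf("expected power ratio: ~%.1fx\n", cores * volt * volt * freq);
    return 0;
}

That prints ~7.6x from scaling alone, before the node difference (14nm vs 7nm), the uncore, and the fact that 95W is a TDP rather than a measured draw widen it further.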

I wonder if it would make sense for Apple to sell CPUs to competitors. Considering that nobody buys an iPhone because of performance, and that AFAIK Samsung already does that with screens, it could work and Apple could make some neat bucks. Brb applying as Apple CEO.

>gookbench
>literally written by an apple fantard
>caught running his scam dozens of times already
>forced to flee to canada to dodge prison time

Attached: 1566862613009.png (874x982, 175K)

Run an x264 encode on an iPhone and see what kind of speed you get.

No android OEM would want to downgrade to Apple's street shitter CPUs.

No one cares about your 30kbps, 2%-quality, utter shit encodes.

>gookbench
>the whitest, most anglo name possible
Can't trust whitey.

I wasn't aware of that.

I don't think that's actually happening, though. Geekbench has a record of being consistent with the actual design of CPUs. None of the scores are crazy when you look at the architectures. For example, Apple's CPUs always outperform their Snapdragon counterparts, and this is reflected in the architectural differences between the two: Snapdragon cores are always physically smaller, and the size difference is roughly proportional to the difference in scores.

Everything basically checks out. But again, if there are other cross-platform benchmarks available, it would be good to look at those results too.

It would actually be quite fast and pretty power-efficient, since there is probably a hardware H.264 encoder in the A13, as in many ARM SoCs.

Isn't that run by dedicated hardware blocks on the CPU? Of course, if you miss those you aren't gonna have good performance in that specific workload. However, that does not affect general performance.

Attached: 1568622875503.jpg (700x516, 28K)

x264 encoding is a specific use case that usually has dedicated hardware support. Considering that encoding any video on a phone is a fringe use case, it's not unreasonable to not give a fuck about it when designing a phone CPU.

Obviously he is talking about software encoding; otherwise you could just use the dGPU's dedicated hardware encoder on your desktop, and that would crush the iPhone too.

>gookbench
Find me a better source, and then we'll talk.

You find it. I'm just as interested in seeing it as you are.

God I hope so. Not necessarily getting rid of desktops, but we need to get off of the Intel/AMD duopoly ASAP. ARM and RISC-V are the future.

>encoding video is a fringe use case
>what is video recording
>what is streaming
>what is FaceTime
Yeah encoding video is very fringe

Great thread OP.

Software encoding with the CPU is sort of a fringe use case.

You only bother when you're looking for production-quality encoding; otherwise most people are fine with the quick-and-dirty hardware encoder settings.

>debunked benchmark
Try again street shitter.

Attached: 1549167658220.png (675x780, 99K)

Alright, you have a point. When I thought about encoding video, the only thing I thought of was the autists who download raw Blu-rays and encode them themselves instead of just getting a decent release.
No, he's right, in my post I was actually talking about hardware encoding.

>the only thing that I thought was the autists who download raw blurays and encode them themselves instead of just getting a decent release
that's software encoding.

No one encodes from a remux or disc and uses hardware encoding, that's just fucking retarded.

>Geekbench

Attached: smug-3d-anime-girl.webm (380x480, 1.41M)

I didn't know that. Is hardware encoding just some dirty hack that sacrifices quality for speed (similar to, e.g., using a Phong lighting model instead of real ray tracing)? Or is there some other reason?

I am more interested in real-world performance than benchmarks.

How gullible are you... servers, supercomputers, render farms: they all still take up a whole room.

Hardware encoding uses fixed encoding settings on a dedicated hardware block that ONLY does those functions with those particular settings.

Software encoding allows you to finely tune your encode, which is what those autists are doing.
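
For example, the actual x264 library API gives you knobs a fixed-function block simply doesn't expose. Minimal C sketch; the preset/CRF values are arbitrary examples and error handling is trimmed:

#include <stdio.h>
#include <x264.h>

int main(void) {
    x264_param_t param;

    /* Pick a speed/quality preset and a psy tuning - already two
       knobs hardware encoders don't give you. */
    if (x264_param_default_preset(&param, "veryslow", "film") < 0)
        return 1;

    param.i_width  = 1920;  /* frame geometry must be set before opening */
    param.i_height = 1080;
    param.i_csp    = X264_CSP_I420;

    param.rc.i_rc_method   = X264_RC_CRF; /* constant-quality rate control */
    param.rc.f_rf_constant = 18.0f;       /* lower = better quality, slower */

    x264_param_apply_profile(&param, "high");

    x264_t *enc = x264_encoder_open(&param);
    if (!enc)
        return 1;
    /* ... feed x264_picture_t frames through x264_encoder_encode() ... */
    x264_encoder_close(enc);
    return 0;
}

A hardware block bakes one or two of these choices into silicon; that flexibility is what the autists are paying CPU time for.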

>Geekbench
OOF

realworldtech.com/forum/?threadid=136526&curpostid=136666

x264 is a software encoder.

Not an Apple chip, but here's an Intel vs. ARM comparison with PCMark
winbuzzer.com/2019/05/28/windows-10-on-arm-snapdragon-8cx-beats-intel-i5-in-breakthrough-benchmark-xcxwbn/

They mean H.264 and you know it.

The problem with real-world performance is that optimization can make a bigger difference than actual computing power. It's not a good medium for comparison.

Did you understand anything you read? Because it literally says in the post that desktops may be relegated to professional use only, just like room-sized computers are relegated to professional use only in the form of servers.

real world performance is the only performance that matters street shitter

Real world?

See 3DMark Ice Storm Unlimited Physics, a longer-running benchmark:

notebookcheck.net/Intel-Core-i5-8250U-SoC-Benchmarks-and-Specs.242172.0.html

notebookcheck.net/Qualcomm-Snapdragon-855-SoC-Benchmarks-and-Specs.375436.0.html

notebookcheck.net/Apple-A12-Bionic-SoC.331518.0.html

2nd generation vs 3rd generation 14nm. And I wonder how much of that is a compiler comparison.

3DMark is a GPU benchmark

Attached: 1567830035546.gif (248x200, 314K)

What did he mean by this?

The physics part runs entirely on the CPU

I forgot about that, you’re right.

I just checked it out, but this is a multi-core score; of course the CPU with more cores will win.

>this is a multi core score, of course the CPU with more cores will win.
You are fucking brain dead.

Not an argument

Compare them in desktop applications then faggot. I can't believe I'm defending Intel. What timeline is this?

>poole
m00t's brother?

Apple A-series isn't on desktop systems yet, although I've heard they have plans to go in that direction. Here's a bench between another ARM chip and an Intel mobile processor if you're interested

They still fall short in multi-core, so don't expect to see desktop performance from the mobile chips. But there is nothing stopping Apple, ARM, or some other company from developing a 6-core variant with hyperthreading for use in desktops. It may even outperform x86 in all aspects in that case.

He's right, you know. The A12 uses a scheme similar to big.LITTLE, in that it has high-performance cores and low-power cores that are locked into a low-power state. The 855 has a 1+3+4 layout (1 big prime core at max boost, 3 big high-performance cores, 4 small cores). The A12 just has 2 high-performance cores and 2 low-power cores. The 855 has a natural advantage no matter how you cut it, since the A12's high-performance cores are only about 50-60% faster in the best case, whereas the 855 straight up has 100% more high-performance cores.
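
Doing the arithmetic on that (a sketch in C; the 1.55x per-core figure is just the midpoint of the 50-60% claim above, not a measurement):

#include <stdio.h>

int main(void) {
    /* Throughput in units of "one 855 big core". */
    double a12  = 2 * 1.55;          /* 2 Vortex cores, ~55% faster each */
    double s855 = 1 * 1.0 + 3 * 1.0; /* 1 prime core + 3 performance cores */

    printf("A12 big-core throughput: %.2f\n", a12);  /* 3.10 */
    printf("855 big-core throughput: %.2f\n", s855); /* 4.00 */
    return 0;
}

4.0 vs 3.1: more mid-strength cores beat fewer fast ones in a multi-core score, before the little cores even enter into it.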

The A12X with 4 high-performance cores does much better.
notebookcheck.net/Apple-A12X-Bionic-SoC.354571.0.html

Not quite 50% better, but that's probably because of the low-power cores on both SoCs.

Intel still does much better despite only having 4 cores with HT. The A12X in the iPad Pro shouldn't thermally throttle in a single bench, so I'd say it's fairly representative of what you can expect from them in a similar chassis.

some nigger compared 2 memory read opcodes and how fast they executed and """ bench marked """ them

wew

Do you know why Samsung's Exynos 9820, with their own M4 cores that are very similar to Apple's Vortex cores in the A12, falls so far behind the 855 in PCMark despite having a similar score to Apple in Geekbench? The Exynos 9820 even has two middle cores that the A12 lacks, yet it still falls behind.

Is it just software optimization, and the lack of it for the Exynos because comparatively few users have it? Or is there something else going on?

Geekbench is literally made by an Apple employee.

IIRC the Exynos 9820 had some weird boost delay, so it didn't hit its boost clocks in short workloads. AnandTech did an overview, but the chip just seemed flawed based on performance outside of Geekbench; I believe it even loses to the Snapdragon 855 in a lot of cases. Even after they tried to improve its boosting behavior, it only fared a little better. My guess is the designers focused on one specific thing, like a fast cache, which would benefit cache/memory-optimized programs, like certain benchmarks.

Then why does it show their biggest competitor as having a similar score to them?

Geekbench is useless for cross-architecture comparisons. They do a ton of ARM optimizations whose analogues aren't done on x86, and the scoring system is arbitrarily different.

>My guess is the designers focused on one specific thing, like a fast cache, which would benefit cache/memory-optimized programs, like certain benchmarks
Damn, if that's the case then that's pretty bad.

But I personally don't think that is the reason. I don't think Samsung would be stupid enough to do that; they don't really have a reputation for being that retarded. Secondly, Geekbench is pretty well-rounded AFAIK; I'm not aware of it favoring one specific thing in a CPU.

I think it's mostly just a software issue, desu. Exynos chips are basically a niche product, because almost everyone uses either Snapdragon or Apple.

Though I'm not willing to dismiss your theory right away; I think you could very well be right. Maybe you can check out en.wikichip.org/wiki/samsung/microarchitectures/m4 and see if the problem is indeed what you think it is.

ShillBench

Attached: 20190920_213032.jpg (1282x1459, 597K)

Read

I'm a retard in terms of hardware; it's just nonsense conjecture based on what I've read of the benchmark's characteristics rather than any sort of knowledge of hardware.

geekbench.com/doc/geekbench5-cpu-workloads.pdf

geekbench.com/doc/geekbench4-cpu-workloads.pdf

The workloads are small enough to fit in cache, and they avoid complex floating-point math and RAM-heavy tasks.
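
You can see how much cache-resident workloads flatter a CPU with a quick micro-demo (illustrative C sketch; the sizes and rep counts are arbitrary, POSIX timing, build with -O2):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Sum an array of the given size repeatedly, return effective GB/s. */
static double sweep(size_t bytes, int reps) {
    size_t n = bytes / sizeof(int);
    int *buf = malloc(bytes);
    if (!buf) return 0.0;
    for (size_t i = 0; i < n; i++)
        buf[i] = (int)i;

    volatile long sink = 0; /* keep the loop from being optimized away */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < reps; r++)
        for (size_t i = 0; i < n; i++)
            sink += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(buf);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    return (double)reps * bytes / sec / 1e9;
}

int main(void) {
    /* 32 KiB fits in L1 on most cores; 256 MiB spills to DRAM. */
    printf("L1-resident: %.1f GB/s\n", sweep((size_t)32 << 10, 100000));
    printf("DRAM-bound:  %.1f GB/s\n", sweep((size_t)256 << 20, 10));
    return 0;
}

The same loop runs several times faster when the working set fits in cache. If a benchmark's hot data never leaves L1/L2, a phone core with big caches looks as good as a desktop core; a workload that streams through hundreds of MB is far less kind.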

>the benchmark runs better on Apple CPUs when compiled with Xcode for iOS and compared to another ARM CPU
>therefore comparing Apple CPUs to x86 using it is sound

Attached: comment_ylSZ2Zu3TYzs4GtDTtYuvJFpERK3ck2Y.jpg (900x900, 113K)

>sees thread
>"This is going to be Geekbench, right?"
>yep, it's Geekbench
Worst benchmark ever. I've seen a Celeron running OS X beat an i9 running Windows on it. The versions of it on Apple platforms just multiply their result by 10 or something.

If this chart is real, now Apple shills get to explain why their OS lags so badly single-tasking while Microsoft's heapashit OS manages to fly by with no noticeable lag on an 8700K.

It scores about the same on the chip of their biggest competitor, which is not running any Apple code.

Never said the latter

The Apple Bionics win in SPEC benches by a mile; it's not just for show, the hardware is real.

anandtech.com/show/13392/the-iphone-xs-xs-max-review-unveiling-the-silicon-secrets/4

Forgot the pic

Attached: B04C2267-F727-423B-B5E7-A7BB11576EC5.jpg (405x533, 45K)

>PajeetTech
Your street shitting site's already been debunked iJeet.

Post those results then, I'd like to see them.
That just eliminates RAM latency as a factor. It's still impressive that the actual cores themselves are on par. Good point, though.

ARM cores are a tiny fraction of the size of x86 cores due to less legacy baggage (seriously, x86 decoder units are CHONKY), so the typical datacenter use for them is as embedded controllers/coprocessors for specific tasks. AWS uses their Nitro SoCs for this: hypervisor tasks are handed off to the ARM chip, so effectively 100% of the x86 cores are used for EC2 instances. The typical ARM-as-main-CPU approach is cramming a gorillion ARM cores into the same die space as an x86 CPU for scale-out workloads (which is what most people selling ARM servers do).

A 2080 Ti is better in performance/W than an A13 in the right task...

The memory controller is a huge part of real-world CPU performance.

or maybe intel just sucks lmao u ever think of that?

/thread

Attached: 1550056208392.png (942x810, 425K)

But pure core-to-core, without the hardware encoding units, they actually shouldn't be bigger, as the Apple cores are 7-wide and the 8700K, for reference, is “only” 5-wide.

>Yes, but the same way that computers
but that still happens

Attached: msingleton_180612_2663_0005.jpg (1200x800, 163K)

>it may change things to the point where you can get current high-end desktop performance from a tablet, and desktops will be relegated to professional users only

It doesn't say desktops won't be used anymore; it says the same thing will happen to them that happened to room-sized computers.

>runs the bench in L1 private cache
>gets insane performance because desktop CPUs couldn't be bothered to have that much private cache just for dick-waving around

How come the iPhones perform slower, then?

Because Apple engineers literally shit in the street. Apple is simply decades behind, in engineering and sanitation.