iPhone XS outperforms Intel Xeon in CPU performance

The Apple A12 CPU (iPhone XS) is faster in single-threaded performance than Intel's latest Xeon.

SPECint2006 403.gcc benchmark:
>Apple A12 (2.5 GHz): 44.56
>Xeon 8176 (3.8 GHz): 31.00

SPECint2006 464.h264ref benchmark:
>Apple A12 (2.5 GHz): 66.59
>Xeon 8176 (3.8 GHz): 64.50
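Since the two chips run at very different clocks, here is a rough back-of-the-envelope Python sketch that normalizes the scores above by frequency. The normalization is my own simplification (it ignores memory, turbo behavior, and compiler flags), not anything from the Anandtech articles.

# Rough per-GHz comparison using the SPECint2006 subtest scores quoted above.
# Treat this as a sanity check, not a real IPC measurement.
results = {
    "403.gcc":     {"Apple A12": (44.56, 2.5), "Xeon 8176": (31.00, 3.8)},
    "464.h264ref": {"Apple A12": (66.59, 2.5), "Xeon 8176": (64.50, 3.8)},
}

for bench, chips in results.items():
    print(bench)
    for chip, (score, ghz) in chips.items():
        print(f"  {chip}: {score:.2f} at {ghz} GHz -> {score / ghz:.2f} per GHz")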

Anandtech's iPhone reviewer is new, so although he correctly identified this shocking result, he didn't make a single catchy graph; instead you have to compare the SPECint2006 results across two different articles:
1) iPhone XS Review
anandtech.com/show/13392/the-iphone-xs-xs-max-review-unveiling-the-silicon-secrets/4
2) Intel Xeon results:
anandtech.com/show/12694/assessing-cavium-thunderx2-arm-server-reality/7

The end of x86 is upon us.
The end of Intel is also upon us.
This changes everything.

Attached: Apple California.png (824x468, 355K)

Of course, we already had a hint of this result back in September, when early benchmarks showed the iPhone XS beating an Intel Xeon-powered iMac Pro, but Jow Forums dismissed it as an irrelevant benchmark.

In fact the thread was scat bombed by Apple haters.

This literally changes everything.

Attached: Dnp2zMNUUAAPHMC[1].jpg (1084x804, 139K)

Important to note, the Apple A12 is the first commercially available 7nm piece of silicon, so it gives us a good idea of what to expect from AMD's upcoming 7nm CPUs.

Intel is still stuck at 14nm from 2014 with no progress in sight.

Attached: 1294436354922.png (468x494, 31K)

It changes very little, because Apple doesn't currently make desktop processors and every good ARM processor designer has been bought by Apple.

>Xeon 8176
>165W
>A12
>5W
What jewish magic is Apple using?

When will Apple drop Intel and use their own CPUs in their computers? I feel like it's only a matter of time. It would also be an easy way to kill off Hackintosh.

It's all those American foreskins they're using.

This is the price for Holocaust.

The Xeon makes up for it with moar coars. For Xeon workloads you want a coar hoard. Also AMD sucks because of its single-core perf.

Fucking retard.
Comparing the GHz of RISC and CISC, and even multiplying it by the number of cores...

Nobody gives a shit about benchmarks because they're mostly manipulated. Now name any Windows program that runs on ARM CPUs.

It consumed over 4 watts on some of those benchmarks; multiply that by 28 cores for the Xeon.
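For what it's worth, here is a minimal score-per-watt sketch in Python based on that claim. The ~4 W figure for the A12 and the TDP/28 approximation for the Xeon's per-core power are assumptions for illustration (real single-core power on the Xeon is higher than TDP divided by cores), so this is only an order-of-magnitude check.

# Rough score-per-watt for the 403.gcc numbers from the OP.
# Assumed power: ~4 W for the A12 during the run (as claimed above), and
# 165 W TDP / 28 cores for the Xeon 8176, which understates real
# single-core power (uncore, turbo). Illustration only.
chips = {
    "Apple A12": (44.56, 4.0),
    "Xeon 8176": (31.00, 165.0 / 28),
}

for name, (score, watts) in chips.items():
    print(f"{name}: {score:.2f} at ~{watts:.1f} W -> {score / watts:.1f} per watt")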

docs.microsoft.com/en-us/windows/arm/

Nice try

spec.org/cpu2017/Docs/overview.html#Q22

Q22. What will happen to SPEC CPU2006?
Three months after the announcement of CPU2017, SPEC will require all CPU2006 results submitted for publication on SPEC's web site to be accompanied by CPU2017 results. Six months after announcement, SPEC will stop accepting CPU2006 results for publication on its web site.

After that point, you may continue to use SPEC CPU2006. You may publish new CPU2006 results only if you plainly disclose the retirement (the link includes sample disclosure language).

I assume it's some misleading microbenchmark, or the iMac was throttling like all Apple products.

>SPECint2006 403.gcc
>spec.org/cpu2006/Docs/403.gcc.html

So when will they be in Apple laptops?

Now try running the mobile chipset on any workload lasting longer than a moment.

the magic first conjured by the Cunt Crusher™

>Tfw 7nm Threadripper

Attached: 042A3C45589946448E63DC6317EAC646.gif (640x360, 2.84M)

CPU2017
spec.org/cpu2017/Docs/benchmarks/525.x264_r.html
525.x264_r uses the Blender Open Movie Project's "Big Buck Bunny", Copyright 2008, Blender Foundation / www.bigbuckbunny.org. Each workload uses a portion of the movie.

To save space on the SPEC CPU media, the movie is first decoded to YUV format in a (non-timed) setup phase, using the decoder 'ldecod' from the H.264/AVC reference software implementation. (The H.264/AVC encoder was used in SPEC CPU2006 benchmark 464.h264ref.)

spec.org/cpu2006/Docs/464.h264ref.html
Foreman (foreman_qcif.yuv): a standard sequence used in video compression, consisting of 120 frames with resolution 176x144 pixels.
SSS (sss.yuv): a sequence from a video game, consisting of 171 frames with resolution 512x320 pixels
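To put those CPU2006 inputs in perspective, here is a quick Python sketch of how much raw 4:2:0 YUV data each 464.h264ref sequence actually is; the arithmetic is mine, derived only from the resolutions and frame counts quoted above.

# Raw size of the 464.h264ref input sequences, assuming 4:2:0 YUV
# (1.5 bytes per pixel per frame). Resolutions and frame counts come from
# the spec.org descriptions quoted above.
def yuv420_bytes(width, height, frames):
    return int(width * height * 1.5) * frames

sequences = {
    "Foreman (176x144, 120 frames)": (176, 144, 120),
    "SSS (512x320, 171 frames)":     (512, 320, 171),
}

for name, (w, h, frames) in sequences.items():
    print(f"{name}: {yuv420_bytes(w, h, frames) / 1e6:.1f} MB of raw YUV")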

>What jewish magic is Apple using?
They cheat in benchmarks at the hardware and kernel level.
Synthetic benchmarks are retarded anyway; just look at real-life usage and see how poorly iPhones behave.
I don't have a smartphone btw, Android is trash too.

POO IN LOO
No one gives a shit about your fruit cult cheating at benchmarks again. Let me know when lagPhones can type text without lagging.

Attached: 1528656062054.jpg (643x960, 256K)

When will Jow Forums autoban anyone who posts Geekbench scores to compare chips from two different architectures?

Wait are you actually fucking for real?

This is SPEC2006 as run by Anandtech, but SPEC2006 is outdated; the new CPU2017 suite uses massive workloads closer to real-world performance.

Anandtech is just using an outdated benchmark, chosen for its low memory loads and thermal constraints.
But muh Anandtech.

Can someone redbull me on ARM and RISC? AFAIK nothing inherently prevents RISC from being used at higher voltages for high-performance tasks, but most of the info is theoretical, and there aren't many IRL examples besides a few 'server grade' ARM chips, which didn't seem to perform that well, and I have no idea about their voltage scaling. If ARM can be fleshed out to perform the same as x86 at lower voltages for everything except the high end, it would be a boon for notebooks.
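Not ISA-specific, but here is a minimal Python sketch of the voltage/frequency trade-off, assuming the usual dynamic-power approximation P ≈ C·V²·f. The capacitance and both operating points are made-up illustrative numbers, not measurements of any real chip.

# Dynamic CMOS power scales roughly as P ~ C * V^2 * f, which is why running
# wider and slower at lower voltage wins on perf/W regardless of ARM vs x86.
# C_EFF and the two operating points below are illustrative, not real silicon data.
def dynamic_power(c_eff, volts, freq_hz):
    return c_eff * volts ** 2 * freq_hz

C_EFF = 1e-9  # effective switched capacitance in farads (made up)

phone_like   = dynamic_power(C_EFF, volts=0.75, freq_hz=2.5e9)
desktop_like = dynamic_power(C_EFF, volts=1.20, freq_hz=3.8e9)

print(f"2.5 GHz @ 0.75 V: {phone_like:.2f} W")
print(f"3.8 GHz @ 1.20 V: {desktop_like:.2f} W")
print(f"{desktop_like / phone_like:.1f}x the power for {3.8 / 2.5:.2f}x the clock")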

ITT: x86fags in denial

Xeon also has HT while A12 does not.