Surprisingly, the 6W Gemini Lake (Atom) Pentium N5000 outperforms the Kaby Lake (Core) Pentium 4410Y

twitter.com/FanlessTech/status/1016445311869509634

>Surprisingly, the 6W Gemini Lake (Atom) Pentium N5000 outperforms the Kaby Lake (Core) Pentium 4410Y

OH NO NO NO NO

Attached: 600px-intel_pentium_silver_logo_(2017).png (600x600, 120K)

Other urls found in this thread:

cpubenchmark.net/cpu.php?cpu=AMD A10 Micro-6700T APU
cpubenchmark.net/compare/Intel-Pentium-4410Y-vs-Intel-Pentium-Silver-N5000/3134vs3204
youtube.com/watch?v=5pgXUcYYMds
youtube.com/watch?v=8nZrozt2Rmg
notebookcheck.net/Acer-Swift-1-SF114-32-N5000-SSD-FHD-Laptop-Review.303606.0.html

i-it's nothing guys, nothing to see here

S-SAGE!!!1

Attached: 1524147471130.jpg (267x297, 18K)

DELETE THIS THREAD !

NOOOOOOOOO

Attached: 1506977173618.jpg (882x758, 324K)

not surprised at all.
AMD has been avoiding CMT and SMT on low core+ low TDP designs because managing the resources for the second thread or second core in the same "cluster" take a huge amount of space for logic and gets you more consumption for a mere performance boost.
that's why jaguar puma and all those low TDP CPUs from amd can cost pennies, have high core count and consume almost to nothing.
A 4 core jaguar went toe to toe with some Atoms on better node and 3-4 times the cost of the AMD cpu(for multicore performance)

This is what a 4.5W TDP part from April 2014 could do:
cpubenchmark.net/cpu.php?cpu=AMD A10 Micro-6700T APU

now compare this with intel's shitty offerings from 2017 and 2018
cpubenchmark.net/compare/Intel-Pentium-4410Y-vs-Intel-Pentium-Silver-N5000/3134vs3204

I had all those faggots telling me that AMD was bankrupt and finished, yet they have been getting a shitload of money from embedded and custom contracts thanks to Jaguar and its derivatives.
Only AMD could have pulled off an AMD64 tablet, but the jews didn't let them.

Based animuposter

Attached: Screenshot at 15-08-26.png (626x207, 27K)

Wrong thread bois. Newer cpu with same TDP and same price has better performance. Literally nothing wrong this time.

>Atom
>ever
Intlel is fucking pathetic

Intel cucks are getting desperate.

>Number of samples 3, 4

this

q6600fags almost btfo

Attached: Untitled-1.png (632x500, 28K)

You can't compare those directly though. The AMD one is a tablet-specific SoC without PCIe, SATA etc., it's more like Atom x5/x7.
Also, the real-world performance for all of them is heavily dependent on cooling (any of those CPUs will show wildly different results on a test board with a fan that allows it to maintain boost clocks indefinitely and in a plastic tablet with a 2x2" aluminum plate for cooling)

What does PCIe have to do with the performance?
All you have to do is add a root complex and voilà, you have 4/8/12/16/32 PCIe lanes.
Don't forget that AMD's newest SoCs run with IF (Infinity Fabric), which connects all the peripherals.

>you can't compare one sub 6W CPU to another when Intel loses
Hm.
That user is correct, though. AMD was still competing well against Intel's 22nm process while on a 32-28nm process when it came to mobile CPUs.

Thanks man.
I would like to add that AMD has a full portfolio of patents on low-power CPUs and SoCs.
I will remind everyone that Intel filed lawsuits against Cyrix (same as they did with Transmeta).
Cyrix went bankrupt.
Cyrix's patents were bought by AMD.
AMD, in 2004 iirc, released the first sub-1-watt x86 SoC in the world, a.k.a. the AMD Geode.
AMD might not have had the fab advantage back then, but they had the know-how to build smartphone and tablet chips; the jews wanted the market for themselves, though, so they flooded it with their free chips and killed AMD's chance to penetrate it.
That's why you'll never see a respectable x86 SoC in phones.

It has to do with TDP and cost. 4410Y is a fully featured 2C/4T babby lake CPU with dual channel memory, 10 PCIe lanes, 3 simultaneous video outputs, a dozen USB ports etc., which requires quite a lot of chip real estate and supporting circuitry. Ironic how you can attach a fucking 1080M and a couple NVMe SSDs to a processor that's as slow as molasses.

>All you have to do is add a root complex and violin you have 4 8 12 16 32 pcie lanes.
It only works if you want to add more endpoints, not performance. Running a x4 NVMe SSD or a x16 GPU through a switch that only has a x2 connection to the CPU is completely pointless.
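The uplink bottleneck described here is just min() arithmetic. A minimal sketch, assuming PCIe 3.0 and an approximate per-lane throughput figure (~0.985 GB/s after 128b/130b encoding overhead); the x2-uplink switch scenario is hypothetical:

```python
# Rough sketch of why a narrow switch uplink caps downstream devices.
# The per-lane number is an approximation for PCIe 3.0 after line-coding
# overhead; the topology here is a made-up example.

PCIE3_GBPS_PER_LANE = 0.985  # ~GB/s usable per PCIe 3.0 lane (approx.)

def effective_throughput(device_lanes: int, uplink_lanes: int) -> float:
    """A device behind a switch can never exceed the uplink's bandwidth."""
    device_bw = device_lanes * PCIE3_GBPS_PER_LANE
    uplink_bw = uplink_lanes * PCIE3_GBPS_PER_LANE
    return min(device_bw, uplink_bw)

# x4 NVMe SSD behind a switch with only a x2 uplink: capped at ~2 GB/s
print(round(effective_throughput(4, 2), 2))   # ~1.97
# x16 GPU behind the same x2 uplink: still capped at the same ~2 GB/s
print(round(effective_throughput(16, 2), 2))  # ~1.97
```

So a switch multiplies endpoints, not bandwidth: everything downstream shares the uplink.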

You can compare it to Atom x5/x7 as I've said. They have roughly the same feature set.

Although I wonder if the lack of AMD-based tablets compared to Atom based ones has something to do with TDP shenanigans, like with those old A-series desktop APUs that ran noticeably hotter than Pentiums/i3s despite nominally having the same 65W TDP.

Actually had a chuckle, thanks user.

Attached: oh-no-no-no.jpg (1280x720, 186K)

WOAH

2.7 GHz Turbo outperforms the non-turbo 1.5 GHz variant.

WOAHHHHHHHHH

With moar coars too.

Why Intel made an extremely thermally limited CPU without frequency scaling is beyond me. Has a more expensive package, slower memory and no wireless MAC, too, so I don't even know who it's aimed at.

Shill, do you mind explaining to me why Core i laptops seem to rise in price (I got an i5 5200U for $400 2.5 years ago), while for the same price nowadays I could only get a Pentium or Core M device?
Is Intel stretching the low-power cheap CPU segment, shrinking the lineup of strong mobile CPUs and raising their prices?

Atoms are OK as long as you don't game in public. Do you play video games, user?

OOF IDF shills on suicide watch in this thread

>Atoms are OK as long as you don't game in public

And as long as you don't do any remotely demanding work. I've got a chink tablet with an Atom x5-something and it's unbearably slow in Coreldraw and Autocad. Perfectly fine for shitposting though.

You're retarded, Cherry Trail/Braswell is slow and outdated

The newer-generation Gemini Lake with Goldmont Plus CPU cores is a totally different beast altogether, with close to 100% better performance.

I want a desktop with this processor and a gtx 1050ti please.

Got my mom a laptop with an i3 8130U for $350 before tax, just gotta look around and temper expectations.

Are Gemini Lake tablets/laptops even out yet? I'm not a time traveler.

Also they're not called Atoms.

Only 1 at the moment, from the Chinese brands; more coming soon.

youtube.com/watch?v=5pgXUcYYMds

youtube.com/watch?v=8nZrozt2Rmg

notebookcheck.net/Acer-Swift-1-SF114-32-N5000-SSD-FHD-Laptop-Review.303606.0.html

1 Acer notebook with Pentium Silver N5000

They are not branded Atoms but certainly come from the Atom family

That horrible IPC, wtf

> certainly come from the Atom family

It's a continuation of Atom, but it's not a mere re-use of the cores, there are significant internal changes even between (((Goldmont))) and (((Goldmont Plus))). So Intel is free to call them whatever they like.

Why would you use a PCIe SSD in a low-power device?
SATA is sufficient for such devices; it doesn't require any power-hungry controller because it uses digital signals, and it reduces the cost of the board and the peripherals.
I have no idea why you mention NVMe as something good.
It's not. It's a fast interconnect using PCIe and it's suitable for some use cases.
It doesn't give a better experience in other applications and it comes with great costs, both engineering and financial.
So please, if you want to compare 2 things, do it with a sane example.
Even on laptops PCIe is power hungry.
The ridiculousness of your post explodes when you mention a 1080M and a couple of NVMe SSDs.
Show one case where a 6W CPU/SoC/APU is paired with a couple of NVMe SSDs and a 1080M.

So this is the power of 10nm?

Hoo boy, remember the IPC of the original Atom family?

Attached: Untitled-1.png (613x500, 20K)

Low power 7nm Zen is just about here and you deflect with 10 year old shit?

1 sample is enough, CPUs aren't animals.

>doesn't require any power hungry controller because it uses digital signals

sata does not require any controller?
pcie does not use digital signals?
What the fuck am I reading?

Go read something about NVMe vs SATA and realize that the former is actually simpler.
NVMe drives tend to be more power-hungry simply because they're several times faster, not because the interface is inherently less energy-efficient.

>The ridiculousness of your post explodes when you mention 1080m and a couple nvme ssds.
>Shiw one case where a 6w cpu/soc/apu is paired with a couple of nvme ssds and a 1080m.
I should've probably written "SARCASM" everywhere in capital letters and included a le funny maymay image for you autists because apparently the word "Ironic" didn't tip you off.

Attached: 1411874802622.png (226x207, 121K)

T. Intel Boomer
Intel are scrambling to get back to where they used to be and failing

>Low power 7nm Zen is mentioned nowhere in the OP
>Low power 7nm Zen is mentioned nowhere by AMD
>Low power 7nm Zen is absolutely irrelevant to shit IPC on Atoms

fuck I'm posting this from a 1700x but you AMD autists are insufferable.

Mate I own a 1600x and 2700x and rocked i5 and i7 for almost a decade
Intel done goofed hard with this CPU and basically everything since zen2

Attached: 1530640035508.jpg (1024x584, 43K)

god, 1st/2nd gen Atom was so fucking garbage they should not have even released it.

You can get a Ryzen 2500U for around $400 counting inflation, m8.

It's not the IPC that's bad. It's the process and architecture being power hungry, requiring it to run at absurdly low clocks to fit in 6W.

You could probably do a 4c/4t Raven Ridge at around 2.3GHz boost with 2 GPU CUs for almost 6W instead.
But it looks like AMD isn't bothering with 6-8W TDP chips until Zen2, which makes sense really.

To be fair you're comparing 2.5W to 6W. That also looks like passmark, which is basically just fake. But yeah, they were bad regardless.

Ryzen2, not Zen2. Zen2 arch isn't out to consumers yet.

But AMD shills on Jow Forums told me that Intel is absolutely finished and can't produce anything more.

>passmark
>bad

Oops, I meant Zen2xxx.
Zen 2 on 7nm is gonna kick ass in all segments.
Zen 1 on desktops was basically a beta test, so now we know it needs good quality, fast memory with as low latency as possible.
RAM-SSD hybrids will solve this eventually, as will more tightly integrated SoCs.

thermal constraints cause variance.

>pissmark

>pcie does not use digital signals?
I cannot believe that there are people here who express their opinion while having zero clue what they are talking about.
Useless cunt, answer this: is pic related binary?

Attached: pcie.jpg (476x312, 107K)

>sata does not require any controller?
let's continue shitting on this clueless fag.
First off, LVDS signals need analog circuits to read them and decide the binary representation of the incoming differential signal.
Second, I never said that SATA doesn't need a controller. A SATA controller, because SATA is just a serial binary bus unlike PCIe, needs a scrambler, a decoder and some logic. That's all. SATA doesn't need a PHY.
Ethernet needs a PHY.
Wifi needs a PHY.
PCIe needs a PHY.
Guess what? The protocols that require a PHY don't "speak" in 0s and 1s; they use a completely different signal. It's just called an electric signal.
A SATA controller is cheaper, smaller, easier to integrate (ever heard of analog VLSI, faggot? maybe not, because you are busy posting screenshots of your riced VM) and it consumes next to nothing compared to a PCIe PHY that has to take the differential signal, process the 2 lanes to get a binary stream and then proceed to descrambling, decoding and so on.
Do you know what else "differential signal" means? Fourier, motherfucker. That's what it means.
I haven't designed components for PCIe, but I have designed components for both ethernet and wifi as described in the respective standards. You have no idea what a clusterfuck of technologies you need to decode a simple DS stream, or even worse, design a simple BPSK receiver.
>I should've probably written "SARCASM" everywhere
I didn't know whether you were being stupid or sarcastic.
I saw the stupid at the beginning and assumed the rest of it was the same.
(I am not yelling at you for being ignorant. I am yelling at you for not, at least, searching the web about PCIe. You just implied that PCIe is binary and then wrote a novel around it.)
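
For what it's worth, the "simple BPSK receiver" mentioned above can be sketched as a coherent correlator. A toy illustration, assuming away everything that makes real receivers hard (carrier/timing recovery, filtering, noise); all parameters here are arbitrary:

```python
import math

# Toy BPSK modulator/demodulator. Bit 1 -> carrier, bit 0 -> carrier
# shifted 180 degrees. The receiver correlates each bit period against
# a reference carrier; the sign of the correlation decides the bit.
# Parameters are arbitrary and carrier/timing sync is assumed perfect.

SAMPLES_PER_BIT = 32
CYCLES_PER_BIT = 4  # carrier cycles per bit period

def carrier(n: int) -> float:
    return math.cos(2 * math.pi * CYCLES_PER_BIT * n / SAMPLES_PER_BIT)

def modulate(bits):
    out = []
    for b in bits:
        sign = 1.0 if b else -1.0
        out.extend(sign * carrier(n) for n in range(SAMPLES_PER_BIT))
    return out

def demodulate(samples):
    bits = []
    for i in range(0, len(samples), SAMPLES_PER_BIT):
        corr = sum(samples[i + n] * carrier(n) for n in range(SAMPLES_PER_BIT))
        bits.append(1 if corr > 0 else 0)
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
assert demodulate(modulate(data)) == data
```

Even this noise-free toy hints at the point: the received waveform is an analog signal, and digital bits only appear after the correlate-and-decide step.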

Running a Pentium N4200 and it's actually pretty alright on Linux in 4k.

I know that faggot.

>you're comparing 2.5W to 6W
N270 wasn't a SoC; it was paired with a 2-chip 945GSE chipset that consumed several more watts, so the total power consumption was well over 10 watts (but it was cheap lol).

Yes, it is binary. You're confusing the type of information with its physical representation. Sure, PCIe uses more complex physical signals than simple TTL levels, but it transmits binary digital data nonetheless.

Are you drunk? SATA also uses LVDS signaling, just like PCIe.

Time to upgrade my desktop soon...

You gotta consider the alternatives that existed at the time. U-series Core 2 Solos were expensive and not all that much faster, and AMD basically had nothing below 10 watts except embedded-only Geodes. The original concept for Atom as an ultra-cheap CPU for non-demanding purposes may be abandoned now, but OG Atoms had a fairly long and successful life in netbooks and POS machines.

The first desktop Atom boards that paired 4W CPUs with 25W ancient desktop chipsets were pure embarrassment though.

The same thing happens with USB, and the same thing happens with SATA.
D+ is the opposite of D-.
In every serial bus you send both the data and its *not* value, to avoid interference and minimize noise.
You can do the same with SPI and I2C just for a more robust interconnect... and your whole "differential" signal is demodulated with a XOR.
In noisy, high-bandwidth serial buses where you have to send the complementary signal, the "PHY" consists of several pull-up/down resistors, a small termination resistor, and that's it.
Do you know why? Because over D+ you send 0s and 1s.

In PCIe you don't have D+ or D-.
You have a differential pair. Each pair has a main signal and its complement, not in terms of 0s and 1s.

This doesn't happen in almost all SATA specs and USB. Only the latest ones use higher frequencies and are prone to errors even with minimal interference... and that's it, they just added more complex decision making in the decoding of the signal in order to transmit the 0s and 1s faster. The best examples are the latest USB 3.x standards and SATA 3.2.
So, just by reading the D+ of either USB or SATA, you are reading the raw bitstream.

I know PCIe's standard uses spread-spectrum modulation for its clocks, but I am not sure what kind of modulation it uses for the rest of the signals.
If I could stay longer, I'd even argue about why PCIe is not a bus.

>In PCIe you don't have D+ or D-.
Yes you do. They're just marked "p" and "n" instead of "+" and "-". It's literally the exact same differential pair principle as SATA, or HDMI, or pretty much any other current meter-range, high-speed serial interface that uses differential signaling, with variations only in voltage/current/terminating resistance values, DC bias and timing specifics.

> just by reading the D+ of either USB or SATA, you are reading the raw bitstream.
You can't "read" a single pin. You can probably read the voltage between D+ and ground, but then you're just using LVDS as a plain TTL signal and losing on the EMI-canceling properties of the differential pair.

>why PCI-e is not a bus
It's a bus mostly because everyone is used to the word bus. It's more like a star otherwise.
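
The "read the pair, not the pin" point can be shown with a toy model. Everything below is invented for illustration (voltage levels, swing, common-mode offset); real PCIe/SATA links additionally scramble the data and apply 8b/10b or 128b/130b line coding:

```python
# Toy model of differential signaling: the receiver decides each bit
# from the sign of (p - n), so identical noise added to both wires
# (common-mode interference) cancels out. Voltages are made up.

def encode(bits, swing=0.5, common_mode=1.0):
    """Produce (p, n) voltage pairs for a bit stream."""
    pairs = []
    for b in bits:
        d = swing if b else -swing
        pairs.append((common_mode + d / 2, common_mode - d / 2))
    return pairs

def decode(pairs):
    """Receiver: compare the two wires, ignore absolute voltages."""
    return [1 if p > n else 0 for p, n in pairs]

bits = [1, 0, 0, 1, 1]
pairs = encode(bits)
# Inject the same noise on both wires; the comparison is unaffected:
noisy = [(p + 0.3, n + 0.3) for p, n in pairs]
assert decode(noisy) == bits
```

Reading D+ alone against ground would work in this noise-free toy too, but any common-mode shift would then corrupt the bits, which is exactly the EMI immunity the pair buys you.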

Worth it in energy savings alone.

Even if I ran the CPU at 100% all the time, it would take me like 5 years to save enough energy to justify a $200 upgrade.

Man electricity is like $1/kwh here. Plus its hot as fuck, so tdp and thermals matter a lot here

Two and a half 60W incandescent light bulbs, which I assume to be the difference between a non-GPU-loaded LGA775 system and this mobile 6W-CPU system, cost ~$200 USD to run 24 hours a day for a year.
The 6W-CPU system might use 10W in total, coming to ~$13.15 in electricity for a year.

A complete functional Q6600 system can be bought for $50-75, and costs in entirety $250-275 to run 24 hours a day for a year (without monitor or other peripherals)
The cheapest Pentium Silver N5000 system I could find costs $300, totaling $315 for a full year of use.

So no, it's not actually worth it even if the system only uses 10 watts
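
The figures above are internally consistent if you back out the rate the poster must have assumed, roughly US-average pricing (~$0.15/kWh, an inference, and nowhere near the quoted "$1/kWh"):

```python
# Back-of-the-envelope check of the yearly running-cost figures above.
# The $0.15/kWh rate is inferred from the quoted numbers, not stated
# anywhere in the post.

RATE = 0.15  # $/kWh, assumed
HOURS_PER_YEAR = 24 * 365

def yearly_cost(watts: float) -> float:
    return watts / 1000 * HOURS_PER_YEAR * RATE

print(round(yearly_cost(150), 2))  # 2.5 x 60W bulbs -> ~$197/yr
print(round(yearly_cost(10), 2))   # 10W system -> ~$13/yr
```

At $1/kWh the 150W delta alone would cost over $1300/yr, which is why the break-even argument flips completely depending on local rates.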

you're lazy and bad at math.

>$1/kWh
Kill your politicians and energy company owners
>thermals matter a lot
Then a shitty cheap laptop is not what you're looking for.
The battery will cook itself and the CPU will always be throttling. The board components will suffer and degrade faster than intended.

>Man electricity is like $1/kwh here.

Have you tried running your PC from electrical grid instead of man-electricity?

>you're lazy and bad at math.

No, I just live in a country with non jew power companies.

I can't. Unless I draw over a certain ridiculous amount of power, you can't change which company you use.