Linux BTFO

So... the 4.20 kernel is finally out and it is 26% slower than the 4.19 kernel.

LOL!

Enjoy those slideshow graphics and that stuttering video playback, boys!

archive.vn/WYjb7

Attached: 1521640757764.jpg (600x515, 121K)

Other urls found in this thread:

wiki.archlinux.org/index.php/Downgrading_packages
ibm.com/blogs/systems/ibm-power9-scale-out-servers-deliver-more-memory-better-price-performance-vs-intel-x86/
extremetech.com/computing/202519-arm-based-chip-can-run-for-decades-on-one-set-of-batteries
acer.com/ac/en/US/content/predator-model/NH.Q3GAA.001

Nice link, you fucking moron

just like people on weed are slower than sober people.

Thanks intel

This. Fuck druggies and fuck Linus Sebastian.

This. Fuck Intel.

>420
>is a slow, buggy piece of shit

Attached: goldface.jpg (228x221, 10K)

>downgrade linux
Wow, that was hard!

>downgrade
Rolling distros won't let you downgrade.

You can run an LTS Kernel.
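
If you're on Arch, that's something like this (package names are the current Arch ones, and I'm assuming GRUB; adjust for your distro/bootloader):

# install the LTS kernel and its headers
sudo pacman -S linux-lts linux-lts-headers
# regenerate the GRUB config so the LTS entry shows up in the boot menu
sudo grub-mkconfig -o /boot/grub/grub.cfg

Then just pick linux-lts from the boot menu.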

>just like people on weed are slower than sober people.
Intel and Loonix BTFO. AMD + Win = Masterrace.

BLAZE IT

sure, but it's missing so many new drivers that your life will be a pain.

Plus it has no security protections.

Let me guess, this only affects intel systems, right?

>Linux BTFO
Intel BTFO, you mean?

Attached: 1452463320840.png (1755x1080, 776K)

anon...
wiki.archlinux.org/index.php/Downgrading_packages

Attached: Screenshot_20190101_194659.png (710x322, 22K)
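
tl;dr of that page, assuming the old package is still sitting in your pacman cache (the version string here is just an example, use whatever you actually have cached):

# reinstall the cached 4.19 kernel package
sudo pacman -U /var/cache/pacman/pkg/linux-4.19.12-1-x86_64.pkg.tar.xz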

>AMD
No thanks. You can miss me with that PSP.

Ah yes, all those new drivers.

For stuff like the uh......
and no security.. like.. uhh...

>intel systems specifically are slowed down because of spectre mitigations
>"linux btfo"

Attached: faggot-17.jpg (705x435, 230K)

>Linux kernel
Yes, Linux is a kernel. No need to specify it each time. What else would Linux be?

Yeah man because intel ime is miles better, right?

>missing so many new drivers my life will be a pain

but my system is stable, anon, why would i update drivers?

>For stuff like the uh......
Sound card chips, wifi and BT. None of those work on 4.14 with my laptop. Support was added in 4.18 and 4.19.

>and no security.. like.. uhh...
It has no Meltdown/Spectre mitigations.

>entirely new kernel released
>"your life will become pain unless you upgrade to this"

Attached: a.gif (500x419, 239K)

Why do you think I am showing support to Intel? Were you neglected as a child?

>but my system is stable, anon, why would i update drivers?
you should upgrade your pizza box, son. it can't even play Crysis.

There seem to be people who are confusing it with an operating system. Kind of stupid, I know, but they are apparently out there.

Attached: rms-17.jpg (480x451, 43K)

|
|
|>
|
|
|
|

Windows has different mitigations and it's not as affected.

>what is disabling the security fix through kernel parameters
Mods should ban all these /v/irgins coming to infest Jow Forums
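
For reference, on 4.19/4.20 the knobs look something like this (the exact set depends on your kernel version, check Documentation/admin-guide/kernel-parameters.txt before copying blindly):

# /etc/default/grub -- disable the big mitigations, at your own risk
GRUB_CMDLINE_LINUX_DEFAULT="quiet nopti spectre_v2=off spec_store_bypass_disable=off"

then regenerate the config with grub-mkconfig -o /boot/grub/grub.cfg and reboot.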

On Arch the LTS kernel is at 4.19.13 or some shit, you'd never even notice and it's a small price to pay for performance.

4.19 is LTS, you fucking moron

>intards need to run systems full of security exploits to get the same performance as an AMD system now

Attached: 1543286940603.png (613x263, 327K)

You can group together the operating systems that use Linux as Linux or Linux-based, but because they are so different there is no real point in doing so.

huh?

>core/linux-lts 4.14.90-1 (60.8 MiB 105.3 MiB)

You fag I use AMD, I'm just saying OP is retarded

Yes, I'm sure Microsoft simply has the better programmers, who can add Spectre mitigations with less performance impact. I'm entirely sure it's not at all that Intel has convinced them that this or that little detail surely doesn't need mitigating.

They're both slow garbage. When will the desktop world drop x86? It's funny, because the vast majority of computers running Linux are using RISC architectures. The desktop market is behind with regard to both ISA and OS.

Not for long. 4.20 LTS will replace it soon.

Windows devs couldn't spell spectre, let alone patch it.

>They're both slow garbage. When will the desktop world drop x86?
But the entire point of using x86 processors is that they are in fact the fastest. I'd love for you to prove me wrong, but it is what it is.

Elaborate on how the mitigations are different and how Windows is "not as affected".

x86 has been RISC for like a decade now, you mong. Just because it translates cisc code to risc ops doesn't make it CISC.

EOL is December 2020, by then the trash with the new kernels will be fixed

windows is not as slow on the same hardware.
Firefox now stutters when I play a 4K video yet it played videos just fine before 4.20.

>Intel
>4.20
BLAZE IT!

Attached: lmao.jpg (1200x600, 62K)

>stupidly believing a temporary pre-release problem affects the final release
>stupidly proposing using kernel 4.19 when two versions of 4.19 have this problem while 4.20 didn't
here's what happened for those of you who haven't been paying attention: a security patch-set for Intel which caused a 20-30% slowdown did make it into the 4.20 git tree and two release candidates. It was also back-ported to 4.19; I suspect 4.19.7 or .8 had it. Linus threw a fit about it and made the patch-set optional for those who want it; it's not on by default in the latest 4.19.13 kernel and it never was in the 4.20 final release.
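
You can check what your running kernel actually applies through sysfs, which has been there since 4.15:

# one file per vulnerability, each shows the active mitigation
grep . /sys/devices/system/cpu/vulnerabilities/*

and if I remember the knob right, the now-optional STIBP behaviour is selected with spectre_v2_user= (on/off/prctl/seccomp/auto) on the kernel command line.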

Of course.

4.20 won't be an LTS release; the next LTS will probably be 4.24 or 5.x. That's kind of sad, because 4.19 really is a disaster of a kernel in so many ways. It's one to be skipped: for example, there are problems with amdgpu in 4.19 that aren't in 4.18 or 4.20 but are still present in the latest 4.19.x. The networking stack in 4.19 is also buggy, which causes problems with setups like a bridge with a bond and non-bond interfaces.

Attached: official.april_29087673_507445669656732_511888843880792064_n.jpg (1080x809, 79K)

>But the entire point of using x86 processors is that they are in fact the fastest.
No. Not at all. Are you retarded?
No, it isn't. You are the mong here. You sound like someone who reads headlines and nothing more. x86 is CISC. Just because it uses slow algorithms to act as a RISC CPU doesn't make it RISC. I bet you think WINE makes GNU Windows.

We told you to stop buying Intel but you didn't listen.

Attached: Cover-tom2.jpg (565x600, 40K)

Windows has literally had the same slowdown issues with spectre

"x86" is no less CISC now than it was three decades ago. Intel µops may or may not be RISC, but x86 sure isn't. That being said, Intel's µops have actually grown more CISCy again as years have passed since the P6, now corresponding almost 1:1 with x86 instructions in the fused domain (which is why the simpler single-op decoders have grown more powerful). AMD's µops have been just as CISCy since the K7 days.

What? There were complaints of 5-35% performance hits from the Wangdows mitigations.

>windows is not as slow on the same hardware
Give solid examples.
>Firefox now stutters when I play a 4K video yet it played videos just fine before 4.20.
I recommend mpv with youtube-dl on any OS.
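
Something like this; with youtube-dl in your PATH mpv picks it up automatically (the URL is a placeholder, obviously):

# cap the stream at 4K and let youtube-dl pick the best matching formats
mpv --ytdl-format='bestvideo[height<=?2160]+bestaudio/best' 'https://www.youtube.com/watch?v=...'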

>No. Not at all. Are you retarded?
Please point out a faster one, then.

x86 USED to be 100% CISC but somewhere along the line intel/amd adopted risc cores that just translate cisc code to get more IPC uplifts. Current zen+ is essentially like 90% RISC at this point. How else do you think AMD is going to get pic related to operate at around just 3 fucking watts per core at 2.5 GHz? CISC alone would never be able to achieve even a fraction of that kind of insane performance/watt.

Attached: amd_rome-678_678x452.png (678x541, 525K)

that's a big chip

>one
One what? Do you have any idea what we are talking about? One ISA? That is a ludicrous way to phrase such a question.

>Just because it uses slow algorithms to act as a RISC CPU doesn't make it RISC.
That's a pretty retarded statement, though. The very reason why people reading only the headlines believe that x86 is RISC is because x86 implementers figured out a way to take the techniques that made RISC faster and use them for x86 as well, like pipelining and out-of-order execution. So it's not using "slow algorithms". I agree that doesn't actually make it RISC, of course, but there's a reason, and not an entirely unreasonable one, why people believe it is.

...

If you don't think it's slow then you haven't used it directly or have little experience with other ISAs.

you can freely choose your kernel on arch
if you want, you can keep running 4.19 until new packages won't support it anymore, which will be a long time from now, and even then you could still keep using it
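
this. a sketch of how you'd pin it, assuming stock pacman: add the kernel package to IgnorePkg in /etc/pacman.conf and full upgrades will skip it:

# /etc/pacman.conf
IgnorePkg = linux

pacman will still warn you it's skipping the package on every -Syu, so you won't forget it's pinned.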

>intel/amd adopted risc cores that just translate cisc code to get more IPC uplifts
That's exactly what I just said it doesn't do, though. Read this again:
>That being said, Intel's µops have actually grown more CISCy again as years have passed since the P6, now corresponding almost 1:1 with x86 instructions in the fused domain (which is why the simpler single-op decoders have grown more powerful). AMD's µops have been just as CISCy since the K7 days.
What they adopted are specific implementation techniques that originated in RISC implementations (see ), but that doesn't actually make it RISC.
>How else do you think AMD is going to get pic related to operate at around just 3 fucking watts per core at 2.5 GHz?
Thanks to the µop cache, plain and simple. It's not the µops that consume power (even though they correspond almost exactly 1:1 to x86 instructions), but the actual decoders.

Pedants like you are why we can't have nice things.

>4 (You)s

>If you don't think it's slow then you haven't used it directly or have little experience with other ISAs.
Again, please point out one single other current CPU implementation that is faster, then.

If we go down to the bare circuitry both processors essentially do the same thing, excluding all the cisc to risc translation that goes on in x86. In fact AMD was ready to go full blown RISC with K12 but chose not to because the real money was in x86 software compatibility like always.

Windows is fucking diamonds for qualcum right now given how close their A7X series cores are getting to modern x86. If qualcum can deliver and make ARM on laptops relevant again they can more easily lock people into windows RT 2.0.

>CPU implementation that is faster
If you can phrase this question correctly I can. But I don't know what you actually mean by "faster".

>but chose not to because the real money was in x86 software compatibility like always.
No, the safe money was. And since AMD was struggling financially at the time, they went with the safe money.

The question is completely correctly phrased. You may interpret "faster" as meaning "runs the majority of any significant collection of software faster". If you want to be precise, we can look at SPEC benchmarks, but I'll leave the choice to you.

No, because there are too many independent variables in the way you have phrased the question. I would like a precise question.

>cisc to risc translation that goes on in x86.
But for the third time, there exists no significant translation. On both Intel and AMD, µops (in the fused domain) correspond almost exactly 1:1 to x86 instructions. Yes, there do exist microcoded instructions, but those are either instructions that no compiled piece of software has used for 30 years, or administrative instructions like TLB invalidation (which are microcoded in RISC implementations as well).

imho they could still have done well with K12 if they had waited for the A76 and done some custom work on it to sell to servers. Jim Keller was with them, so they would have at least gotten insane performance/watt gains anyway. They would have given up the consumer market to intel but gained the enterprise market.

Okay, show me a RISC processor that has a higher SPECint score than the fastest x86 implementation. Is that precise enough for you?

You can't, can you?

No, because that is just one method that you have arbitrarily presented.

ibm.com/blogs/systems/ibm-power9-scale-out-servers-deliver-more-memory-better-price-performance-vs-intel-x86/

I honestly find that hard to believe; a 100% CISC CPU is essentially a piece of shit that would require refrigerant cooling to run past 3 GHz on a single core. RISC processors like the Cortex-M3 can run on AA batteries for decades. It can't be a coincidence that AMD's 8-core processors are now able to run on fucking laptops.

extremetech.com/computing/202519-arm-based-chip-can-run-for-decades-on-one-set-of-batteries

>No, because that is just one method that you have arbitrarily presented.
Which is exactly why I gave you the option of choosing your own data, faggot.
>ibm.com/blogs/systems/ibm-power9-scale-out-servers-deliver-more-memory-better-price-performance-vs-intel-x86/
That, however, is a pretty stupid piece of data, since it's chosen, benchmarked and presented by IBM themselves, on IBM-specific software. Second, it's showing multi-threaded performance, which is hardly the metric of performance for an ISA. Third, it focuses on price-to-performance, not on absolute performance (and the whole reason why x86 performs so badly in that regard is because of IBM's retarded licensing options for their own software that they tested lmao baka desu fampai).

>Which is exactly why I gave you the option of choosing your own data, faggot.
Reading what you wrote below, you can see why that is an issue. You can see why "fastest" is a nonsense way of putting it. Unless you can objectively define it, don't use it as a metric.

amd mobile chips are 4c/8t

>a 100% CISC CPU is essentially a piece of shit that would require refrigerant cooling to run past 3 GHz on a single core
Please tell me all about whose ass you pulled that statistic from. As I've said previously, the reason x86 has held up is simply because Intel and AMD figured out ways to use the same techniques that were pioneered by RISC (tight pipelining, most specifically) and apply them to CISC. The techniques in question were invented for RISC processors, but that (apparently) didn't mean that they couldn't be used for CISC as well. It just required more transistors.
>RISC processors like the Cortex-M3 can run on AA batteries for decades.
Well yeah, if you want a 50 MHz in-order architecture, I'm sure it can. I'm sure an 80486 reimplemented on a modern process node could, too. A Cortex-A76 would not run for decades on AA batteries, just like a Core i7 wouldn't.

Of course the question is complex, but since you kept harping on how much faster your RISC processors were, I'd assume you would be able to back that up other than with cherry-picked information from their manufacturers themselves.

So, it's not too different from the previous releases?

>other than with cherry-picked information from their manufacturers themselves.
They are objective benchmarks, you do realise. A source does not discredit data.

The issue isn't 4.20, it's that intel processors are badly designed.

I double checked, that's an 8-core senpai in a 17" laptop.

acer.com/ac/en/US/content/predator-model/NH.Q3GAA.001

Attached: Predator-Helios-500-PH517-51_sku_main.png (536x536, 149K)

>A source does not discredit data.
It absolutely does when the source has a strong vested interest in making one side of the data look as good as possible. I'm sure it would be theoretically possible to delve into the test and see whether it's useful or not (the latter being the more likely case), but even leaving aside that nobody has time for that, it wouldn't be possible to do so without having the hardware at hand. Which is exactly why third-party testing is so important.

>It absolutely does
No it doesn't. That is a logical fallacy.

>it's not a mobile chip
>putting a desktop cpu in a laptop
enjoy your 90 minutes of battery life
retarded gamershit
who thought this was a good idea?
either way i guess you're right

I find it annoying that only eight keys have the blue trim.

It's not a logical fallacy at all. Source credibility has very little to do with pure logic.

I mean, I'm sure their data shows *something* objectively. The question is just exactly what it does show, what conclusions can be drawn from it, how useful it is to extrapolate it to actual, real-world workloads, and what they're not showing. I'm absolutely sure you could prove objectively that POWER9 is faster than any x86 implementation at the task of executing POWER ISA instructions, but that's not a very useful dictum, and the results from your link are quite likely to not be much more useful.

Good thing I can hotswap kernels anytime with manjaro
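
For reference, Manjaro's tool for that is mhwd-kernel (assuming the syntax hasn't changed):

# list the kernels you have installed
mhwd-kernel -li
# install the 4.19 series alongside the current kernel
sudo mhwd-kernel -i linux419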

>Source credibility
This isn't about taking it at face value. It is data, right there in front of you.
>I'm absolutely sure you could prove objectively that POWER9 is faster than any x86 implementation at the task of executing POWER ISA instructions, but that's not a very useful dictum, and the results from your link are quite likely to not be much more useful.
Maybe now you will see why nobody will ever agree on a definitive answer for "fastest" and the best you will get is "fastest" for a certain domain or application.

>A source does not discredit data.
x86 is the fastest CPU. There, I said it, so that means it's true, right?

Do you have data?

You're highly underestimating the energy efficiency of AMD processors right now. The 2700 has a TDP of just 65W, and that's with XFR enabled up to 4.1 GHz. At 3.2 GHz on all cores I wouldn't be surprised if it only consumed 45W at full load. Sure, you'll get shit battery life if you run Prime95 and FurMark at the same time, but for normal laptop tasks I don't see why it couldn't crank out at least 3 hours of web browsing/office stuff.

>Maybe now you will see why nobody will ever agree on a definitive answer for "fastest" and the best you will get is "fastest" for a certain domain or application.
Like I said, I never pretended that the question was a simple one, but since you kept harping on how RISCs are "obviously" faster than x86 without even feeling the need to substantiate the claim, it would stand to reason that you would have a reason to say so. Are you saying that IBM's own data is your only such reason?

naeun is perfect

Yes, it's all in my head, but I've heard that doesn't discredit it.

>it would stand to reason that you would have a reason to say so. Are you saying that IBM's own data is your only such reason?
It is A reason. I imagine you're just pretending to be dense at this point.
Show it to me, then.

Fucking Dell Optiplexes and their bad board capacitors.

>2700u has a TDP of 15W
>2700 has a TDP of 65W
>2700x has a TDP of 105W
that's actually pretty impressive