INTEL IS BECOMING BULLDOZER

>Intel is dangerously close to having Piledriver levels of performance
phoronix.com/scan.php?page=article&item=l1tf-early-look&num=3
What is WRONG with this company?

Attached: 1534400234313.png (602x320, 15K)

Other urls found in this thread:

ark.intel.com/products/137979/Intel-Core-i7-8559U-Processor-8M-Cache-up-to-4_50-GHz
youtu.be/FF8BVE-ayc0
twitter.com/NSFWRedditVideo

Pajeets

They spent more than a decade without serious competition (since the Core 2, really). They kept making improvements for a while, but after Sandy Bridge they kinda shrugged, said "Eh, that's good enough", and rested on their laurels. Now all of a sudden they get kicked in the nuts three times at once (Spectre/Meltdown, their inability to get any process past 14nm out the door, and AMD releasing competitive CPUs again).

They've probably lost a lot of know-how over the years and now they're scrambling to get back on the horse and are finding they can't get it back.

I wonder what happened to Intel's L4 128MB eDRAM

>god emperor took over
best time line

>tsmc, amd ceo, nvidia ceo, samsung btfoing intel
*chinks took over

>SMT disabled
Well, what did you think was going to happen on a heavily threaded workload?

Attached: 15331351057981.jpg (492x306, 24K)

>They've probably lost a lot of know-how over the years
speaking of this, there's a lot of talk about how companies simply lost good chip designers over the years because they gambled on Moore's law rather than trying to optimise instructions and the like, and now that there are serious problems with the shrinking process, most companies are in a bit of a shit spot

L3 cache? What do you need L2 cache for?

>Piledriver was actually on-par with Sandy Bridge after removing Intel's dodgy hacks which traded security for performance.

They still have it, Apple gets most of the CPUs with it though.
ark.intel.com/products/137979/Intel-Core-i7-8559U-Processor-8M-Cache-up-to-4_50-GHz

youtu.be/FF8BVE-ayc0

They haven't actually improved since the Pentium 3.
NetBurst was terrible, and the Haifa team just kept pushing the P3 to new nodes; when the NetBurst bubble finally burst, Intel just started making desktop-TDP versions of the same.
It's almost guaranteed Piledriver is actually faster than Sandy now, with all the security protections enabled on both.

Intel did dodgy hacks to make their performance better, AMD didn't - and they say cheaters never prosper...

that's what you are disabling when it is full of holes like Swiss cheese.
guess how many Intel Shekel Zions are going to have this feature disabled because they host very important services

>moronix

Here they come...

>Piledriver
Piledriver was a very small core compared to a core from the i-series.
It was supposed to offer great scalability, which in 2011 terms meant an affordable 16-core solution on a 32nm node.
It was a beast for I/O; that's why it made it into many GPGPU-focused servers and many Green500 supercomputers.
In games, or in gaming situations where you had to stream and whatnot, Piledriver rekt the 3770K and some of the "HEDT" parts.
There's a review from Tek Syndicate.
>hurr durr muh tek syndicate shill
don't forget, faggots, that those benchmarks were done by Wendell,
and they shoah Piledriver being better when you could saturate it.

FOR THE LOVE OF GOD, STOP POSTING THIS CRAP ALREADY!

They have arrived. The intel damage control task force.

Attached: intel_tech_lead.png (1260x892, 55K)

ENOUGH

When will Intel put their asses in gear and fix their shit for good?

Attached: I hate all of you.jpg (225x350, 26K)

In 4 years.

In that period of time, even those ridiculous rumors of Apple going full ARM will have come true.

Apple won't go full ARM, they have no experience in massive monolithic dies.
AMD is now crushing Intel in HEDT/Workstation and mid level desktop.

Apple will likely transition their smaller MacBooks to ARM with an x86 translation layer. Intel will give Apple an x86 license in exchange for a really long-term (10-15 year) supply contract for HEDT CPUs, so Intel keeps Apple as a customer for the high-end gear while Apple effectively gets x86 for themselves. Unlike other potential competitors to Intel, Apple doesn't sell parts, so it's not like giving Nvidia an x86 license (which will NEVER happen)

>similar to intel 14nm
>intel doesn't even have viable 10nm yet

Back then it was true.
That slide is old.
It's before GF cancelled 20nm altogether
It's before Samsung and TSMC cancelled 20nm outside expensive runs for mobile SoCs only
It's before Samsung and GlobalFoundries partnered up for 14nm
It's before IBM paid GlobalFoundries $2 billion to take their fabs away.
It's before Intel screwed the pooch with their first gen 14nm
It's before Intel dropped Tick-Tock
It's before Intel royally fucked 10nm.

Huh. Early 2017 for TSMC 10nm was actually spot on.
They're gonna hit 7nm way sooner though

Finally Intel-aviv is finished, the AMD age is here and no one can stop it. Praise Lisa Su.

>Intel will give Apple an x86 license
They need an AMD64 license as well, which belongs to AMD.

They probably will license a Zen core design and build some custom bullshit on top of that.

...meanwhile, outside of Jow Forums:

>...the technology firm [ARM] had sold 15 billion microchips in 2015, which was more than US rival Intel had sold in its entire history.

Attached: 1525995570426.jpg (498x573, 49K)

Not exactly a fair comparison. ARM sells licenses to outside firms and those firms sell to consumers. There are probably hundreds of companies selling ARM chips into many different markets.

>Not exactly a fair comparison.
How fitting for a topic like Intel!

I need L2 cache for really good avx512 performance, intel

Spreadtrum and VIA are both making properly licensed 64-bit x86 in China, so either AMD is open to licensing AMD64 or Intel can sub-license it. In either case, Intel is apparently the only gatekeeper keeping other players out of the market.

Unironically dissolve Intel then, humanity will greatly benefit from this.

No need to dissolve (plenty of elected officials will object to that anyway, since local jobs = votes)
Just force Intel and AMD to license x86+ia32+AMD64 under FRAND terms to anyone inside USA.
Allow export to allied countries and force them to license to companies in allied countries, but FRAND pricing need not apply (other FRAND principles in place).

Breaking Intel into like 10 companies that are around AMD's size would be interesting. Maybe the original Intel would remain as a shell to hold on to Intel's properties but they would have to negotiate in good faith with any outside firm that wanted to use their property. Their fabs could be turned into a separate company that can produce chips for any outside firm.

To add, maybe even 16-bit x86 (8086/80186/80286) should be freely available (as an ISA), with only ia32+AMD64 needing licenses.

The only reason those are fine to make is because they never leave the country of origin. They are manufactured and sold solely in China. Apple wants to be able to sell globally.

>disable an essential function instead of using an already available patch
>WOW PERFORMANCE SUX INTELAVIV IS DEAD
2 rupees for you sir

Intel might allow it since even though Apple will sell globally, their CPUs are only ever going to be in Apple products, soldered down.
If Apple's making their own, they likely wouldn't even share a common pincount with anything Intel.

It wouldn't be like you could just grab one and use it in some other PC.

you know that hyperthreading is an attack vector for L1TF, and that Intel themselves said to disable it, right?

It would require Intel to renegotiate their x86 cross-patent agreement with AMD, which would put Intel at an even greater disadvantage (since they're now negotiating with a much stronger company than before, and the one that holds the x86-64 extension patents).
The state the processor is in is not the problem; it's where it's being sold. AMD could easily file suit if it's found to have violated their x86-64 extensions, which are implicitly needed for modern computers, doubly so for Apple, who dropped 32-bit support from their OS a few years back.

Disabling HT is the only way to assure full security of your systems, otherwise it's just another available attack vector.
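For what it's worth, recent Linux kernels report their own verdict on the L1TF mitigation and expose a runtime SMT switch. A minimal Python sketch (assuming a 4.19+ kernel that actually exposes these sysfs files; older kernels won't have them):

```python
from pathlib import Path

# sysfs knobs exposed by Linux 4.19+; their presence is an assumption
# about your kernel, so absence is handled gracefully.
L1TF = Path("/sys/devices/system/cpu/vulnerabilities/l1tf")
SMT = Path("/sys/devices/system/cpu/smt/control")

def read_knob(path: Path) -> str:
    """Return the knob's contents, or 'not reported' on older kernels."""
    return path.read_text().strip() if path.exists() else "not reported"

if __name__ == "__main__":
    print("L1TF mitigation:", read_knob(L1TF))
    print("SMT state:", read_knob(SMT))
    # To actually disable SMT at runtime (as root):
    #   echo off > /sys/devices/system/cpu/smt/control
```

If the L1TF line ends with "SMT vulnerable", the kernel itself is telling you HT is still an open attack vector on that box.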

The sad part is that Piledriver was good value for its performance: it cost less than an i5-3570K while offering the multicore performance of a locked 3770, and the Intel chips are expensive as hell and have a high TDP.

Attached: 041.jpg (1024x683, 198K)

>when you find out that Ice Lake xeons are going to compete with zen 3 EPYCs

Attached: 1533922785515.jpg (600x582, 68K)

That was before the mitigation. You don't need to disable it anymore.

Totally not FUD.

You have to be competitive to compete I think. At this rate Intel's going to be competing with refurbished Bulldozer rigs.

well, Lisa said that Zen 2 EPYCs were built to compete with Ice Lake (this was before Intel gutted 10nm),
but Ice Lake will come out around the same time as Zen 3, so by then Intel will be 2 gens behind

Attached: roadmap.jpg (1920x1042, 234K)

L2 cache? What do you need L1 cache for? Processor registers are perfectly sufficient.

>four xeon gens in 3 years
>two of those within 6 months of each other
lol

>intel
>allowing more competition
Pick one. They already cucked nvidia out of the chipset market using licensing shit in the past.

Design oversights from reusing circuit logic and designs from the 1990s are finally catching up.

These attack vectors were unthinkable at the time the groundwork for modern x86 CPUs was being laid.

Nobody except pure researchers gave a shit about it. Those flaws carried over for years into future chips ("if it ain't broken/problematic, don't fix it").

Some bored security geeks, a.k.a. gray/white hats, decided to go after CPU-level exploitation after exhausting software- and network-level exploits. They started to discover these flaws and brought them out into the light. The earlier pure researchers who warned about these "flaws" back in the day are finally vindicated.

3 sockets a year keeps the goyim in fear

>when you find out that 7nm Zen2 EPYCs will compete with another 14nm Skylake re-release because Ice Lake still can't be fabbed with anything more than literal 1.7% yields
It's going to be a nice year for AMD.

I came

Attached: Intel GPU.jpg (700x363, 46K)

Kek, 10nm was supposed to be EOL by now.

>teaser 2 years before the product is even ready
oh raja.

not an intel fag, but "full mitigation" is a useless meme, and the other 2 patch methods have pretty much zero impact on performance.

I was wondering why they're switching the i7 from 6c/12t to 8c/8t, but if they recommend not using HT, I guess it makes a lot of sense for them to go with more cores instead of HT wherever they can.
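The 8c/8t vs 6c/12t trade-off is easy to inspect yourself. A minimal Python sketch (Linux-only, since it parses /proc/cpuinfo and uses sched_getaffinity) that reports physical cores vs logical CPUs, so you can tell whether SMT is actually contributing threads on your box:

```python
import os

def logical_cpus() -> int:
    """Logical CPUs available to this process (SMT siblings included)."""
    return len(os.sched_getaffinity(0))

def physical_cores() -> int:
    """Count distinct (package, core) pairs from /proc/cpuinfo (Linux only)."""
    cores = set()
    pkg = core = None
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                key, _, val = line.partition(":")
                key = key.strip()
                if key == "physical id":
                    pkg = val.strip()
                elif key == "core id":
                    core = val.strip()
                    cores.add((pkg, core))
    except OSError:
        pass
    # Fall back to the logical count if cpuinfo lacks core ids (some VMs).
    return len(cores) or logical_cpus()

if __name__ == "__main__":
    p, l = physical_cores(), logical_cpus()
    smt = "SMT on" if l > p else "SMT off or absent"
    print(f"{p} physical cores, {l} logical CPUs ({smt})")
```

On an 8c/8t part both numbers match; on a 6c/12t part with HT enabled the logical count is double the physical one.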

>Kek 10nm was supposed to be eol by now.
it is

combined with intel's solid track record of GPU driver updates

Attached: keks.gif (200x150, 1.01M)

it isn't; we'll see 10nm (12nm) for years to come, since all the 10nm delays pushed back everything that was supposed to come after it too

nooo...my dream...

Attached: 50e49105c273b8e4fc261b5a126545df.gif (500x267, 913K)

Those dual Opteron boards now look attractive.

The full mitigation to this issue is to disable HT.

But muh task manager thingies

Attached: moar cores.png (418x465, 67K)

>intel will be 2 gens behind
SHREEEEEEEEEEEEEEEEEEEEE

And it will be really expensive...

Attached: 1523433903969.jpg (1038x1000, 140K)

>fairness
>intel

Would be far cheaper for Apple to just buy ryzen from AMD.

When will Zen 3 release for consumers?

Attached: 1496981869617.png (8597x7288, 1.13M)

DELET

I think he means that other fabs are already going 7nm.

intel's 10nm isn't
the others are

It feels like the early 2000s are returning: Intel building monstrous high-clocked Pentiums while AMD produces future-proof multicore CPUs and pumps out cores. They'll match Intel's clock performance in 2 generations. Zen 2 will hit 4.5+ GHz and has 10-15% more IPC, so it may not even need 5 GHz to match Intel's CPUs, and Intel won't be able to release anything but refreshes.

Can Intel even design a completely new architecture?

VMs don't matter

of course they can, even more so now that they've hired Keller,
but if they don't unfuck their fabs, even a new arch won't save them

>Intel is becoming Prescott/Gallatin
FTFY

Intel is becoming Tualatin

They have a long history of IEDs

Hyperthreading doesn't matter

2020. Followed by a new architecture after Intels new scalable chip.

>Intels new scalable chip
Intel glue?

inb4 infinity mayo interconnect

>mfw L1 Terminal Fault

Attached: with_jews.jpg (565x555, 69K)

>delid chip
>demap kernel memory
>delet cache rows
>dethread hyperthreds
The Intel ride never stops.

>inb4 next exploit requires enabling only one core to be fully protected

AMD's Zen architecture is fundamentally more advanced. The only reason it's "slower" is the inferior process. Once Zen gets its first major architecture revision (Zen 1 > Zen 2) and moves onto the new 7nm process, you will see performance matching and passing current 14nm Intel.

Due to its design, Zen is not only cheaper to fab and segment, it's also more power efficient, which is a massive killer at the datacenter/workstation level.

AMD will be able to pump out massive numbers of cores and threads and sell them for dirt cheap while being close in single-thread performance, faster in multithread performance, and more power efficient compared to Intel's arch on an identical process.

The datacenter is where the real money is, and AMD is set to steal most of that market share, which will cause their stock to skyrocket and Intel's stock to tank.

AMD is now offering a 32-core/64-thread processor on THE DESKTOP, and it's only $1800.

Every prosumer is buying that chip. Zen 2 is likely going to increase the core/thread count by 50% or more at every price segment as well.

This will rapidly boost the number of cores every person is using, and eventually consoles will get Zen too, to the point where game APIs will start to increase their threading, which currently only utilizes about 6-8 cores.

Assassin's Creed Origins has unusually high thread saturation, and you saw a 16-core Threadripper chip decimate the top-of-the-line 8700K 6c/12t gaming chip, because it couldn't compete with a 16/32 chip when all threads are being used.

That's the future. Now Nvidia is offering NVLink on their upcoming RTX series graphics cards, which should drastically improve SLI latency/scaling.

I'm guessing some game devs will soon make another CRYSIS-type game to push the graphics envelope, leveraging flagship RTX cards with NVLink and an Intel/AMD chip with 28-32 cores to see just how crazy they can go.

fucking learn to write properly, you can put more than one sentence per line

Learn to suck dick properly

Probably a redditor esl.

Apple doesn't want you to use AMD Ryzen.

Attached: allen_paltrow_steve_jobs_2.jpg (500x333, 22K)

you probably meant Zen 2, Ryzen 3 instead
that's the next gen built on 7nm, releasing Q1 2019

toppest of keks