Shills fuck off

ITT: We try to objectively weigh the pros and cons of using either. Personal experience has priority.

Attached: 87223E52-CDDE-45F8-99E4-282BE7B1BA83.jpg (1202x673, 447K)


Used both without complaints. Tend to favor AMD more recently due to price/performance. Just a consumer looking for value without brand allegiance.

i only use laptops nowadays so amd is a non-starter

Currently using a 1231 xeon on lga1150

Practically no hiccups aside from issues with GeForce drivers.

Seriously considering buying into ryzen 3000 after the first batch of reviews come out.

Intel:
>strong single-core performance, enough that ryzen 3000 is just catching up, if they even do (wait for 7/7 to see)
>weaker multicore performance
>questionable whether multicore performance will become more valuable in the next 5 years
>greater security vulnerabilities
>real world implications of these vulnerabilities unknown
>full mitigations for these vulnerabilities reduce performance in synthetic benchmarks, but not to the same extent in video games
>unquestionably greater software support
>worse quality mobos for the same price (compared to, say, b450)
>supposedly will cut prices 15% on or around 7/7

Still using a 2600K and never had a single issue with it, before that I used Phenom II, Athlon XP, Sempron Socket A, they all had random bluescreens and crappy performance.
Never going back to AMD.

Currently the Ryzen lineup looks good on paper, with high synthetic benchmark scores. But it seems to have high latency that will diminish its performance in real-world usage. Games, web browsing and Photoshop are all I use my PC for.

What exactly are these vulnerabilities? I have no clue

AMD is more unstable at launch, is finicky with RAM, and they take their sweet time to patch their CPUs to not crash with productivity software.

Spectre, Meltdown, ZombieLoad: many vulnerabilities were found in Intel's CPUs in the last few years, maybe because they are less secure, maybe because they are more popular and that's why hackers prioritized them.
But in the end, Intel CPUs got worse with time, because the patches for these vulnerabilities had to slow them down a little.

>Personal experience has priority.
>Small sample size has priority

With a high clocked quad core you won't see any buckling until you start adding many many layers in PS.

Adobe support for multicore is inefficient at best.

2700x/3600x for all rounder
9900K if you're a cashed-up retard with a very good cooling system, preferably a custom closed loop or better with exotic coolants
Nothing else is worth buying right now; intcel made their own i5 and i7 brands a laughing stock since Zen

Intel:
-High refresh gaming (although this advantage could disappear when Zen 2 releases)
-Adobe media encoder can really take advantage of the iGPU

AMD:
-Great multithreaded performance at a great price
-AM4 is futureproof

Both make great CPUs. Please fuck off with the brand loyalty and other forms of tribalism.

The whitepapers are available online. Intel CPUs implemented a series of "shortcuts" that can be exploited, the same attacks kinda work on AMD, but only some of them.
A super dumbed-down example would be that you have your CPU take a number, do a bunch of calculations back to back and then get a result. These shortcuts made it so, after you ran those calculations once, the second time the CPU would go ahead and say "no need to calculate everything again if the numbers are the same" and bam, instant result. That can be exploited in a number of ways.
Intel's implementations are apparently pretty hardwired into their architecture, so you have to disable a lot of shit if you're worried about security. Fixing them for good will probably take some pretty serious redesigns. AMD I do believe has pretty much fixed it with Zen2 on a hardware level.
If you're paranoid, don't use an Intel CPU for stuff like banking, personal accounts, important information or serious shit. If it's just for gaymen and shitposting, then no worries

*by the way when I said to fuck off with the tribalism I meant to the tech community, not OP.

i've never used intel in my life up until several (5ish?) years ago, because it was just too expensive.
at the time i was working for a software company and i bought a 4790k because my amd at the time was too unstable, hot and slow running multiple virtual machines. i've never regretted it, it paid for itself pretty soon. never, and i mean never ever, did i have any problem with it, unlike the amds i had before. maybe it was just newer/better generations of motherboards, memory and os-es, but it still serves me well with the usual work and casual gaming i do, tho i never applied any of the shit patches.

i always buy tech based on value for money, i can't understand anyone over 12 waging brand wars.

My main machine for gaming and encoding is a Ryzen 1700X. My file/print server is running a Haswell Pentium dual core in an LGA1150 socket Supermicro board with ECC RAM. Both work fantastic, and the only thing AMD hasn't had up until the upcoming 3xxx Ryzen chips are boards with guaranteed out-of-the-box ECC compatibility. Now that the Asus WS X570 Ace is a thing, I'll be switching my server over to a 3950X and 64GB of DDR4 ECC.

Sucks about all the Intel flaws tho.

I'm putting this here since I don't see enough people discussing this. It's about Intlel's latest microcode gimping

There is literally no reason to get intel anymore, not even for TSX for RPCS3's sake. This affects 6th gen through 9th gen - 6700k, 7700k, 8700k and 9900k among others.

A new microcode update released about a month ago works around a TSX erratum, and the workaround causes a major regression in performance; you're SOL if you have an updated bios with the latest microcode.

Pic related

Attached: intel warning.png (919x1256, 194K)

Intel is only good for high refresh rate gaming.
AMD is better for everything else.

The majority of people will never max out a modern CPU released in the last 2 years. Only power users and enthusiasts will be able to.
CPU gaming benchmarks are retarded because they all use a 2080 Ti or similar, a level of GPU power that 99% of people won't own for the next few years.

Sounds like RPCS3 accidentally relied on a CPU bug, Intel fixed it as part of the MDS patches, and now the RPCS3 developers are working on not relying on it (github.com/RPCS3/rpcs3/pull/6056). People are probably not discussing it because it's not a big deal

I haven't had an AMD CPU since the Athlon XP 3200+, I was pretty happy with it in general. I've been on Intel quad cores ever since, now with a 4790K running 4.7GHz. This one is a good CPU and served well, but its age is showing even for games.

Going by the Zen 2 results we're seeing, I don't think buying anything from Intel is going to be worth it. 9900K is expensive for only 8 cores and the worst part is that it also comes with security holes. It's not like I'd throw one away because of the speculative execution vulnerabilities if I already had one, as I have not thrown my 4790K away, but buying a brand new CPU with so many known holes which require performance-degrading patches just seems retarded, especially since it's not a stellar deal at the price tag it has anyway. 3900X at about the same price comes with 50% more cores, no (or much fewer) security vulnerabilities and apparently similar single-thread performance and obvious superiority when all cores are used.

brokefag whose cpu/mobo just died. should i reuse ddr3 for 8350 system or spend more on low-end ryzen?

Intel has lower latencies across the board, making it more suitable for realtime applications such as audio or gaming.

I've seen the 1600 for $80 at Micro Center.

TSX is the major advantage Intel has in RPCS3

we understand you kid no need to spam

not

>Intel:

pros: good

cons: expensive

>AMD

pros: cheap

cons: shit

Hello there. It appears you woke up from hibernation. Ryzen is out and FX isn't the newest lineup from them anymore.

Doesn't seem to be much of an improvement from FX

Attached: 1561708909548.jpg (719x568, 152K)

This, but opposite, because muh precious gaymes, and softwares.

> Phenom II:
- cheapest quad core at the time
- OCable
- supports ECC
- system can be upgraded to FX-8300
> Core i7:
- Supports AES
- Non-K supports IOMMU
- Supports AVX
- Better performance

I don't get why would you want to discuss 2011 CPUs. Are you making a retro gaming rig to play GTA IV, OP?

Intel is expanding in features. AVX-512 is reaching consumers with Ice Lake and can increase throughput by up to 16x, while AMD is only just now catching up to AVX2 and only cares to do micro-optimizations for the base x86 instruction set. That's something games would probably benefit from, but not content production or anything related to video work.

AMD still has terrible performance for stuff like BMI and BMI2 (pdep, pext) and plenty of other x86 instructions. AMD is objectively slower in many, many cases.

agner.org/optimize/instruction_tables.pdf

Whenever you bring up AVX2 benchmarks or anything of the sort to AMD kids you'll always hear "YEA WELL AVX DOESN'T COUNT" type of shit from them. Stuff like FFmpeg is objectively faster on Intel due to its AVX2 and AVX-512 performance.

AMD has the "moar cores" approach to solving their shortcomings and isn't actually innovating. It's like copying Intel's homework with better handwriting but not with better substance. AMD is adopting and polishing things that Intel has already moved on from, though if you are fine with being about 2 or 4 years behind then AMD tends to be the cheaper option. Especially if you're just a normie that browses the web and plays games and doesn't ask anything more of their processor than that.

Many of the speculative execution issues that happen with Intel means next to nothing for the average consumer. Those of you playing games and browsing the web have the smallest realistic attack surface with this. The ones that actually have to worry are the massive as fuck IT centers that have to pass security audits and not the gamer that just plays Fortnite or the Boomer browsing Jow Forums or the 3d artist or video producer that runs premiere 90% of the time.

Obvious b8. No h8 just try harder m8.

what can you tell us about the implications of the graph you posted

These are the CPUs I (well, my family) have had in the last 10 years.
>Intel Core 2 Duo P8600
>AMD Phenom II X4 N970
>AMD Phenom II X3 (I don't remember actually, since I gifted that PC to friend, since I accidentally their PC)
>Intel Celeron E3200
>Intel Celeron E1500
>Intel Atom N270
>Intel Core i5 3210M
>Intel Core i5 4260U
>AMD Some Bulldozer based laptop, E-series chip
>AMD Ryzen 5 2500U

What can I say.
Phenom was better than Core 2.
Bulldozer was better than Celeron (Core 2). Their low-end CPUs were actually pretty good for the price and power consumption (first laptop under $270 I had that could last 5 hours).
Core i was better than Bulldozer.
Ryzen is better than Core i.
Atom N270 is worse than Bulldozer or Core 2.
Phenom was probably better than Bulldozer.
Atom (early)

intel doesn't do avx512 without throttling

That the CPU is poorly designed and that by adding more complexity to try and keep up with Intel's absurdly superior performance, they've fucked it up and made it perform badly in real time applications.

do you have a measurement for this bad performance in real time applications?

do expand on your claims.
How is it poorly designed?
How does it perform badly in real-time applications?

the throttling is still clearly outweighed by the benefits of vectorization. Also, you only throttle if you actually use ZMM registers. If you stick to XMM registers you don't get a speck of throttling.

If I get a ~3% throttle and a 16x speedup then that's still an incredibly huge gain.

lemire.me/blog/2018/08/15/the-dangers-of-avx-512-throttling-a-3-impact/

Okay now this is epyc b8. I r8 8/8 m8.

but AVX-512 ran slower by 3%
what is even the point of it?

It causes frame latency in gaming.

can you provide proof that shows, without a shadow of a doubt, that it is caused by memory latency and not the game's runtime?

>If I get a ~%3 throttle and get a x16 speedup then that's still an incredibly huge gain.
do i gotta do the math for you

A ~3% slowdown for 16x throughput

You get input lag when you play real-time applications like games.

Dont bother, he's just talking out of his ass.

If the fucking CPU lags, then the game should lag you mong XD

IF THE CPU TAKES MORE TIME TO ACCESS THE MEMORY THEN THE GAME IS GOING TO FUCKING LAG WHAT IS IT THAT YOU DON'T UNDERSTAND MOTHERFUCKER

Thanks for confirming my fears that you have no proof of your claims. Lying on the internet? How disgusting.

IF MY CPU LAGS THEN MY GAME IS GOING TO LAG FFS WHAT MORE DO YOU WANT ME TO SAY PLEASE UNFUCK YOUR SHIT

It is a known fact that an Intel CPU is less bottlenecked by low RAM speeds, but in general Intel has lower latency. If a Ryzen CPU has fast RAM, the latency gap is significantly smaller.

Dickens, please refrain from using capital letters and use arguments you can substantiate with evidence.

Is there any Core i7 that is slower than a Phenom II?

Can you elaborate on the implications of 80 ns memory access latency versus 50 ns in the context of CPU game processing / GPU frame rendering at, say, 144hz (6.94 ms per frame)? Also, I would appreciate a deep dive regarding the nuances of AMD/Intel CPU cache sizes and how that could also factor into latency.

No.

>real-time applications like games.
Please stop using terms you don't understand

intel used to be the best performance AND best value
ryzen1 was only good for some multi-core use (many apps still relied on single core and incel optimizations - ie Adobe)
but now we are switching to ryzens for some upgrades
and most old intels are still doing fine with everything we need
t. small dev studio, coders and 2d/3d artists

For many, many years I bought only AMD and ATi products, so I was chuffed to bits when the former acquired the latter. We're talking the AMD K7 era and the ATi Rage series, and I used AMD exclusively until December 2017, when a friend gave me a macbook air after my ideapad shit the bed and I needed a laptop for uni while short on cash. That was my very first Intel CPU since my old 75MHz Pentium 1 Packard Bell in the mid 90s.

I was actually reasonably impressed with its performance: it compiled packages much faster than my ideapad's AMD A8, and even the integrated gpu played back video much smoother than the ideapad's integrated R5 with the exact same mpv config and same distro (Ubuntu 17.04). So last November, when a really nice gaming laptop went on sale, I was faced with a decision: the AMD version with some Ryzen cpu and RX gpu, or the Intel-NVidia version with an i7-8750H and GTX 1060 (I don't remember the exact AMD model numbers, as the laptop's AMD version has already had a refresh with a Ryzen 5 3550H and RX 560X and I can't find any info on the original). I went with the Intel version.

I have never been into gaymen, only occasionally playing some mariokart/Mario party or smash bros with friends, and I was buying this laptop mostly as a desktop replacement, but I decided to keep the win10 partition (albeit drastically shrunk) to give it a go sometime. In Linux this thing fucking flies and everything just werks. Literally none of the mucking about in configs or compiling kernel modules manually to enable features inexplicably turned off by default, etc. Even the free Nvidia drivers aren't ass.

I think I'm becoming an Intel goy and nvidiot. And I don't really mind. Click subscribe to be notified of future blog posts! kek

that leak is with a garbage beta bios - ryzen3 ram latency has actually improved over ryzen2, no way it's below ryzen1/on par with a lowend athlon

wait for real tests with up-to-date retail products