Is this even possible?

Attached: threadripper.png (730x630, 733K)

Yes. Scalability is a major design criterion for the EPYC/Ryzen architecture.

The latest EPYC CPUs, Rome, go all the way up to 64c/128t per CPU. Threadripper is based on EPYC, so yeah, AMD can totally do this if they want.

Based as fuck. I'm gonna buy one for my new shitposting rig.

TR is just EPYC with higher clocks and some features disabled. It's possible, but a 64c TR would probably be very expensive.

>still no mITX options
Damn, I fell for the meme.

>but a 64c TR would probably be very expensive.

Which workloads would realistically make use of 64c/128t?

I can name a few algorithms where, if you wanted to run large-scale versions of them, something like this might be good.

In practice, though, this CPU will have the most applications in the field of compensation. As for what kind of women will care, who knows.

Encoding 64/n videos at once,
where n = the number of cores per encode before returns significantly diminish.
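
Rough sketch of what that looks like, assuming ffmpeg is installed; the folder name, codec, and the 16x4 job/thread split are made up for illustration:

```python
# Hypothetical sketch: encode many videos concurrently, a few ffmpeg
# processes at a time. 64 cores / 4 threads per encode = 16 parallel jobs.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

JOBS = 16            # concurrent encodes
THREADS_PER_JOB = 4  # cores per encode before returns diminish

def encode(src: Path) -> int:
    dst = src.with_name(src.stem + "_out.mkv")
    cmd = [
        "ffmpeg", "-y", "-i", str(src),
        "-c:v", "libx264", "-threads", str(THREADS_PER_JOB),
        str(dst),
    ]
    return subprocess.run(cmd, capture_output=True).returncode

videos = sorted(Path("clips").glob("*.mp4"))  # made-up input folder
with ThreadPoolExecutor(max_workers=JOBS) as pool:
    for src, rc in zip(videos, pool.map(encode, videos)):
        print(src.name, "ok" if rc == 0 else f"failed ({rc})")
```

Threads are fine here despite the encoding being CPU-bound, because each worker just waits on its own ffmpeg process; the actual number crunching happens outside Python.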

Attached: 4bOIMQi.png (876x890, 316K)

All kinds of datacenters must be foaming with excitement after the last few years' CPU developments.

>tfw not having a 1GW consuming CPU

>Very expensive
At this point it's just AMD waving their collective dick; price doesn't matter that much.

We are at the point where Intel just doesn't have the engineering to compete. The only reason they aren't bankrupt is because Intel is deeply embedded in the military-industrial surveillance state.

>tfw not having a 1GW consuming CPU
not while browsing Jow Forums. Also, "1GW" would be okay considering Intel's CPUs would draw a lot more power for the same performance.

Running a butt-fuck ton of VMs.

Work recently jumped on the EPYC CPU train with a remote server, specifically so we could turn 4 individual servers into 1 that still has all the capability the old 4 did and more. Everyone still gets 4 dedicated cores per instance, and the IPC is probably higher, so they're getting a minor speed bump, aside from the new SSD array being faster, too.

From a personal-user standpoint, I'd still say this is fucking grand. I could imagine some power dev who wants to run VMs with multiple different builds of their software could really flex this: spool up more machines with identical resources, automate, and test more builds at once. A QA tester could do similar. Or an MMO gold miner/farmer could do more with less, too: script up their actions and run more instances of the same game on different accounts.
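
Something like this, as a minimal QEMU/KVM sketch in Python; base.qcow2 and the instance/resource counts are hypothetical:

```python
# Hypothetical sketch: spool up N identical throwaway VMs from one base
# image. -snapshot discards disk writes on exit, so every instance can
# safely share the same base.qcow2.
import subprocess

BASE_IMAGE = "base.qcow2"   # hypothetical prebuilt test image
INSTANCES = 8               # 8 VMs x 4 cores = 32 of your 64 cores

procs = []
for _ in range(INSTANCES):
    procs.append(subprocess.Popen([
        "qemu-system-x86_64",
        "-enable-kvm",
        "-smp", "4",            # 4 dedicated cores per instance
        "-m", "4096",           # 4 GiB RAM each
        "-snapshot",            # throw away disk writes on exit
        "-drive", f"file={BASE_IMAGE},format=qcow2",
        "-display", "none",
    ]))

for p in procs:
    p.wait()
```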

Lots of computer graphics stuff. Rendering is the most well known because it can be packaged into a tool like Cinebench.

> Make scalable design
> oh wow its scalable

First off, it always depends on the software.
So I guess music/video/art production.
More scientific workloads.
Software development.
In some cases it could even help in a space shuttle, but I don't know what kind of components they usually use there.

>a space shuttle, but I don't know what kind of components they usually use there

They use ones that are a few years old at least, since everything that gets launched into space needs to undergo more rigorous testing than you would believe. Take the cameras attached to the latest Mars rover: at the time of the launch, much better ones were available, but there wouldn't have been time to test them.

A whole space shuttle has the processing power of a modern smartphone. You don't need much for that shit.

If you don't need all the features of EPYC it'd make a beastly home server with a ton of virtualisation options.

I wonder what the power consumption is going to be like. Is an air cooler going to be able to handle such a beast even at stock? EPYC gets away with air cooling because, even with its high core/thread count, it's clocked far lower than Threadripper.

Yes. In theory, when Moore's law finally hits the wall in the next 3-6 years, we will have to rely on parallelism to keep pushing. Few other technologies are as consumer-friendly or as cost-effective as a PC, so by the year 2030 one could have 512 cores across four CPUs. Sounds retarded, but we started a bit late on replacing our current architecture.


GPUs will have to rely on things like sub-chips handling lighting (raytracing is the current example).

Consumer CPUs have jumped only a few percent each generation for the last 7 years. GPUs have had a steadier 20-30% jump per generation, which will slowly go away as well. It's only a matter of time before we have to change what qualifies as a benchmark for comparing one GPU to the next. We already do that with CPUs by heavily focusing on the specific tech a CPU supports and how many cores it has, while acting like we are not just comparing more cores to fewer.

I for one cannot wait for 90% of applications to be properly multi-threaded.

At work we have a 2990WX workstation for deep learning (with multiple Titans) and computational fluid dynamics, and it was such a massive jump compared to the dual-socket Sandy Bridge Xeons, thanks to all the PCIe lanes and the massive number of cores.

Zen 2 TR will have even more PCIe lanes, PCIe 4.0, and double the cores, so I'm probably gonna try to convince the finance people to buy a workstation with one of these as well.

>Which workloads would make use of 64c/128 threads realistically?
3D rendering, physics and electrical simulation, compiling. You could probably even take 56 cores or so and have them act as your GPU if you wrote a driver.
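
Rendering is the textbook case because every pixel is independent. A toy sketch of that (Mandelbrot in pure Python, nothing project-specific assumed):

```python
# Toy illustration of why rendering scales so well with cores: each row
# of pixels is independent, so the work splits cleanly across a pool.
from multiprocessing import Pool

WIDTH, HEIGHT, MAX_ITER = 800, 600, 100

def row(y: int) -> list[int]:
    out = []
    for x in range(WIDTH):
        c = complex(3.5 * x / WIDTH - 2.5, 2.0 * y / HEIGHT - 1.0)
        z, it = 0j, 0
        while abs(z) <= 2 and it < MAX_ITER:
            z, it = z * z + c, it + 1
        out.append(it)
    return out

if __name__ == "__main__":
    with Pool() as pool:  # one worker per core by default
        image = pool.map(row, range(HEIGHT))
    print("rendered", len(image), "rows")
```

Double the cores, roughly halve the render time; that's why Cinebench-style workloads eat up a 64c part.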

Writing and maintaining multithreaded code is not easy in most languages.
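
Case in point, the classic lost-update race (pure Python sketch; CPython's GIL can mask it in practice, but the data race is real):

```python
# Minimal illustration of why shared-memory threading is hard: an
# unsynchronized counter can lose updates, and the lock that fixes it
# also serializes the hot path.
import threading

counter = 0
lock = threading.Lock()

def unsafe_add(n: int) -> None:
    global counter
    for _ in range(n):
        counter += 1      # read-modify-write: threads can interleave here

def safe_add(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:
            counter += 1  # correct, but now effectively serial

threads = [threading.Thread(target=unsafe_add, args=(100_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # can come out below 800000 when updates are lost
```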

Yes, but it will underperform with only half of EPYC's memory channels.

Threadrippers have no coherent IF links, so they are single-socket only. Because of that, they also have half the memory and I/O bandwidth. They are not compatible with Socket SP3 despite having the same pin layout, and the memory controller isn't compatible with registered DIMMs or LRDIMMs.

It's for professional-tier stuff that doesn't need registered DIMMs and LRDIMMs, or the 128 PCIe lanes of an EPYC SP.

Basically a high-end workstation (engineering, scientific, medical and 3D graphics) or a mid-tier NAS/SAN box.

my body is ready

This pic has aged poorly.

I don't fucking care. I just like having many cores.

>muh air cooling
we're leaving consumer territory, boyo. air cooling is for gamers.

Yes, and that's why you have shit like GNU parallel.
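
Same idea sketched in Python rather than the shell, since GNU parallel is basically "run independent processes on every core" (crunch() is a made-up stand-in for any CPU-bound job):

```python
# Skip shared-memory threading entirely and fan work out over
# independent processes, GNU-parallel style.
from multiprocessing import Pool

def crunch(n: int) -> int:
    return sum(i * i for i in range(n))  # placeholder CPU-bound work

if __name__ == "__main__":
    with Pool() as pool:                 # one worker per core by default
        results = pool.map(crunch, [10**6] * 128)
    print(len(results), "jobs done")
```

No locks, no races; the OS scheduler spreads the processes across however many cores you've got.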

you should just consider killing yourself

>In theory, when Moore's law finally hits the wall in the next 3-6 years
>Consumer CPUs have jumped only a few percent each generation for the last 7 years
Hurrrr
Moore's law has been dead since 2012, you pseudo-intellectual.

>more cores than Intel's share price

Attached: 1534645053074.jpg (690x720, 77K)

>Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years.
So just add another core to the die: transistors doubled. Moore never said anything about single-core shit.
You retards should stop repeating Reddit memes. Moore's law is still valid.

Learn how to Google before you write a wall of retard, or learn what Moore's law actually is. Moore's law has been dead, retard.