Outside of servers what actual use case is there where a 32 core CPU would be useful for daily use...

Outside of servers what actual use case is there where a 32 core CPU would be useful for daily use? Seems limited to specific CPU-intensive tasks or running a shitload of VMs

Attached: threadripper-2-800x450.jpg (800x450, 23K)

Other urls found in this thread:

youtube.com/watch?v=lN5mxFfkr7g&t=25s
youtube.com/watch?v=J3ue35ago3Y
youtube.com/watch?v=jkhBlmKtEAk
shadertoy.com/view/4ld3DM
twitter.com/AnonBabble

installing gentoo

Video editing/transcoding
CAD work

Gentoo

VMs, home workstation stuff (rendering in Blender specifically, since it loves Threadrippers), calculating light mapping and occlusion culling (if you're a game dev, that shit takes literal ages in some scenes), and also converting your whole animu library to another codec and/or format.

wouldn't a GPU do most of that?

The top threadripper only has 16 cores. Has SMT though, so you get 16c/32t. Only EPYC gets you 32 real cores (32c/64t).

Anyway, the core count isn't the only reason for it; people who want massive amounts of memory (or memory bandwidth) or a big pile of PCIe lanes are also interested, even if they don't need that many cores.

>installing gentoo
Compiling anything large will scale very well to many cores. That being said, 32 still seems like overkill unless it is for some weird server that compiles everything for a large number of developers.

>The top threadripper only has 16 cores
OP is referring to Threadripper 2. You've been living under a rock for the last 24 hours.

Attached: NDG5wmN (1).jpg (608x1070, 796K)

Windows VM, Mac VM, Linux VM, BSD VM, Android VM, iOS VM, and the host, each getting 4 cores, with the host taking an extra 4.

>wouldn't a GPU do most of that?
Not really. I'm talking about baking a light map, not doing real-time stuff, hence why it takes ages. It's also software dependent, and most of the time you can't even do it yourself outside the engine due to how the systems in some game engines work.
For example, Beast, which is used in Unity, doesn't have any support for CUDA or OpenCL when baking. Same goes for UE3 and UE4.
Baking reflection probes also uses the CPU only, and you have to bake all that stuff to get the pretty reflections in the game, otherwise it would slow things down significantly in the actual game running on the player's machine.

Nvidia and AMD are doing some cool real-time raytracing and other crap nowadays, but we will probably not see that any time soon in engines. For now we bake things so they run smoothly.

>Seems limited to specific CPU-intensive tasks or running a shitload of VMs
Sounds like you answered your own question there friendo.

Attached: 5714923+_5902469af4bf151ecc2abc645026cbac (1).jpg (523x720, 90K)

Threadripper is fucking worthless.
EPYC on the other hand has my workplace seriously considering replacing a few Intel HA clusters that are getting old. Consolidating rack space with more core-dense shit will eventually pay for itself.

That is a weird mix. I usually see people running multiple VMs all with the same OS.

so you can have a virtual machine inside a virtual machine inside a virtual machine inside a virtual machine inside a virtual machine inside a virtual machine inside a virtual machine inside a virtual machine inside a virtual machine inside a virtual machine inside a virtual machine inside a virtual machine inside a virtual machine inside a virtual machine inside a virtual machine inside a virtual machine inside a virtual machine inside a virtual machine inside

We are getting dangerously close to a CPU core count where it might make sense to start doing certain things we delegated to the GPU on the CPU again... One cycle out of a modern x86 CPU can do a fuckton of useful "work" compared to a single cycle out of a GPU compute node or shader. Raytracing is a pretty good example of something that is complex enough to justify the use of a complex x86 instruction set: it enables advanced control flow, caching techniques, and accelerated arithmetic, and it can talk to main memory at extremely low latency to build out massive ephemeral data structures which are seamlessly paged out to intermediate storage as needed. Putting this work on a GPU places lots of unnecessary constraints on the approaches and even the basic algorithms which may be used. If you have 32 cores in your CPU and you were to dedicate 18 of those to a ray tracing engine, I bet you could come up with a visually satisfying result and decent refresh rates while using the GPU only as a 2D frame buffering and delivery solution.
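Not anyone's actual engine, just a minimal sketch of what that would look like: a single hard-coded sphere traced on the CPU, scanlines handed out to 18 worker threads (the post's number), and the "GPU" reduced to displaying the resulting greyscale image. Everything in it (resolution, scene, output format) is made up for illustration.

// CPU-side ray tracing sketch: 18 worker threads pull scanlines off an
// atomic counter, shade them, and the result is dumped as a PGM image.
#include <atomic>
#include <cmath>
#include <cstdio>
#include <thread>
#include <vector>

struct Vec { double x, y, z; };
static Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Shade one pixel: shoot a ray from the origin through the image plane,
// return a simple facing-ratio brightness if it hits the sphere.
static unsigned char shade(int x, int y, int w, int h) {
    Vec dir = {(x - w / 2.0) / h, (y - h / 2.0) / h, 1.0};
    Vec center = {0.0, 0.0, 3.0};              // sphere 3 units in front
    double r = 1.0;
    // Ray/sphere intersection: solve |t*dir - center|^2 = r^2 for t.
    double a = dot(dir, dir);
    double b = -2.0 * dot(dir, center);
    double c = dot(center, center) - r * r;
    double disc = b * b - 4 * a * c;
    if (disc < 0) return 30;                   // background
    double t = (-b - std::sqrt(disc)) / (2 * a);
    Vec hit = {dir.x * t, dir.y * t, dir.z * t};
    Vec n = sub(hit, center);
    double facing = -n.z / std::sqrt(dot(n, n));   // headlight shading
    if (facing < 0) facing = 0;
    return (unsigned char)(40 + 215 * facing);
}

int main() {
    const int W = 640, H = 480, THREADS = 18;  // 18 of the 32 cores
    std::vector<unsigned char> img(W * H);
    std::atomic<int> next_row{0};              // work queue: one row per grab

    auto worker = [&] {
        for (int y; (y = next_row.fetch_add(1)) < H;)
            for (int x = 0; x < W; ++x)
                img[y * W + x] = shade(x, y, W, H);
    };

    std::vector<std::thread> pool;
    for (int i = 0; i < THREADS; ++i) pool.emplace_back(worker);
    for (auto& t : pool) t.join();

    // Dump a greyscale PGM; the GPU's only job would be to blit this.
    std::FILE* f = std::fopen("frame.pgm", "wb");
    if (!f) return 1;
    std::fprintf(f, "P5\n%d %d\n255\n", W, H);
    std::fwrite(img.data(), 1, img.size(), f);
    std::fclose(f);
}

Obviously a real engine needs acceleration structures, bounces, SIMD and so on, but the row-stealing pattern is how you'd keep 18+ cores busy on it.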

>THIS IS WHAT NVIDIA ACTUALLY FEARS

The software [defined] rendering scene may be upon us once again.

I'm going to use my TR2 32 core machine as a portable mATX VFIO monsterbox. Cerberus case, X499M Taichi (when it releases), 7nm AMDGPU for the Windows GPU, and the lowest end AMDGPU for host GPU. Put the TR2 chip under the LiqTech TR4 280mm, slap the 140mm Sterrox Noctua fans on it, set it to 4GHz locked, and enjoy 300w of peak power draw, silence, >muh vidya, and freetardation at the same time.

Nvidia likes EPYC (and thus Threadripper) though, and unironically recommends them in combos for machine learning with their 30-thousand-dollar Teslas.
I don't get where this meme of company X wanting company Y dead comes from; leave it for console wars, or just use it as bantz on Intel since they are in deep shit. Also remember that even though we have many manufacturers out there that specialize in smaller and simpler chips, there are only 3 big companies, and if any of them goes under the others are fucked due to anti-monopoly laws, which makes it an undesirable outcome for any of them.

Btw pic happened, both keep shitposting together and making fun of intel.

Attached: ryzen33.jpg (1285x965, 225K)

Don't forget: Unreal Tournament 2004 in software rendering mode was playable on an AMD Athlon CPU back then. Imagine the visual result you could get in 2018 with one of these monsters.

UT2004 is on GOG, and I have it. I also have a lowly 8 core 1700. May want to try this out when I get home and post results.

>32 core cpu
>4GHz
>non specialized cooling
pick two

Yeah, I am going to do the same. Quake 2 has some software rendering options too. I need to dig out my old books and labs on this shit. We came up with so many cool algorithms and data structures trying to make this kind of thing work on hardware from the early 2000s. That was the timeframe when everyone just went full retard into the 3D frameworks and GPU vendor pipelines. One wonders where we would be if the GPU never happened and we all had to keep pushing the algorithmic limits of what the hardware could do over the last 15 years.

Why do you need 32 cores for gaming? Even if you want the crazy pcie lane count for dual GPUs and an NVMe drive, you would be better off with one of the lower core count chips. This smells like a larp.

Attached: 0a54c052f95a4e45846226a1fbd3cfb1.png (1024x576, 515K)

>Outside of servers what actual use case is there
Server tier performance for more users. People that will be interested in Threadripper:
> Game devs
> Video editors
> Scientists running huge simulations
> etc

A 2700X is more than enough for you user...

Any developer. Have you fucks ever had >5 visual studio processes running at once? It would scream on this thing.

>Have you fucks ever had >5 visual studio processes running at once
I'll admit to rarely ever having more than a couple open at the same time.

That being said... 32 cores still seems like overkill, even with a bunch of msvs instances open and something building in the background. Even 16 cores seems a little bit silly. Nobody is going to spend $2000 on a CPU for a single developer when a $400 CPU would be perfectly adequate.

I'll get it for my own personal machine, or if I employed someone who asked for it. Otherwise, yes, I think it's overkill for most. Just having the cores sitting on my desk will make me want to experiment with ways to leverage them, so that is a benefit in my eyes.

Would love one for video editing and animation.

MEGATASKING

Seems like this is the state of the art and it's insanely cool.

Attached: Screen Shot 2018-06-06 at 7.33.02 PM.png (724x839, 830K)

Put this shit on like 28 cores, leave 1 for the game loop and 3 to run whatever else, and you are in an interesting place I think.

>light coming through a door
>900x500
Wow...so this is the power of AMD Threadripper...

This picture is probably from an ancient graphics textbook. Raytracing has been used for offline rendering for decades; it's pretty standard for animated films. Real-time applications like videogames have always been out of reach for consumer hardware though. I don't really see Threadripper making it a reality, considering NVIDIA and Microsoft have already teamed up to provide low-level real-time raytracing support on Volta.

eh, I sometimes had 3 on my dual core box and it was fine

mining

Attached: 1482201678038.jpg (500x581, 37K)

Starting to sound like NVDA shilling around here...

I hate to say this, but I think the biggest audience for this chip is the youtubers, especially the ones that post daily.

CAD and other real-time 3D workload stuff. Render farms, VR, 4K and 8K content; there are so many things.

>youtube.com/watch?v=lN5mxFfkr7g&t=25s
This concerns the nvidia jew

user that picture is from some ancient shitty OpenGL rendering book.
I have a truckload of these books and can probably find that same shitty teapot picture. To be more precise, there's a fucking teapot function in OpenGL that I used a lot back in my uni days.
Also, Nvidia is actually doing a real-time ray tracing thing; we even had some videos about it a while ago.
AMD is also developing something by the looks of it, so user isn't lying. You're just retarded or too young to be posting about this sort of stuff, or a literal marketeer who knows jack shit about the subject.
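For anyone curious, the "teapot function" he means is (as far as I remember) glutSolidTeapot from the old GLUT utility library, so technically GLUT rather than core OpenGL. Rough sketch of the classic uni exercise, assuming you have the GLUT and OpenGL dev headers around:

// Minimal fixed-function GLUT teapot, book-era style.
// Build with something like: g++ teapot.cpp -lglut -lGLU -lGL
#include <GL/glut.h>

static void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glRotatef(20.0f, 1.0f, 0.0f, 0.0f);   // tilt it a bit so it reads as 3D
    glutSolidTeapot(0.5);                 // the built-in Utah teapot
    glutSwapBuffers();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(640, 480);
    glutCreateWindow("utah teapot");
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_LIGHTING);                // default fixed-function light
    glEnable(GL_LIGHT0);
    glutDisplayFunc(display);
    glutMainLoop();
}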

youtube.com/watch?v=J3ue35ago3Y
youtube.com/watch?v=jkhBlmKtEAk

showing people that you 'built' your pc with the most expensive lego pieces in order to play overwatch

>literally who
kys

just a couple of quad cores is enough to serve millions of users in a day

A single 32 core server is a fucking meme

That picture is from an academic paper published in 1997 regarding the Metropolis light transport algorithm. It has nothing to do with traditional rendering techniques and is probably way beyond your current understanding.

Why are you so concerned?

What game

You won't get clicks for your shitty channel.

Those techniques are deprecated compared to what's being used nowadays. You're picking a student whitepaper from 1997, as you said yourself, and using it to prove you're smarter than other people on Jow Forums. Jesus christ, my dude.
Just because you like a company, you don't have to become a marketeer for free and talk bullshit about things that aren't used at all in video games.
Also, that is absolutely not real-time ray tracing, and you can't compare it with real-time graphics, mr smart pants, because a single frame would take quite a lot longer to render than what would be considered "optimal" for a real-time application; there's a budget in milliseconds for how long each frame can take to be calculated before everything goes to shit.

They're both risking overlap, with people buying their high-end workstation chips in place of server chips and taking the risk of no ECC. I wonder if it will bite them in the ass.

As for use case, I think that's basically been covered everywhere. Video editing/rendering and multi-VM systems will do well.

I have no particular love for NVIDIA, but it is literally a fact that they are producing specialized hardware for ray tracing in their next generation of GPUs and it's already supported in DirectX.

I wish AMD would do the same and hopefully a free graphics API will follow suit, but sadly it's not the case yet. I did hear Khronos is working on the latter though.

fpbp
/thread

Liinus Sebastian, you need us, you desire us.
There's no escaaape Linus, we will rip your threads Liinus.

Attached: tureadorippa.jpg (2223x1233, 286K)

>128 PCIe lanes
>much cheaper than Xeon
>use saved cash for buying more nvidia GPUs
It's a win-win for everyone, except Intel.

threadripper supports ECC though

Fantastic post

Multiboxing eve.

Multiple VMs

if the base/boost clocks AMD gave Anandtech are real, then the 32c one will outperform an EPYC 7601 in non-memory-bound stuff

Funny that people weren't willing to say that about Intel's CPUs till after Ryzen BTFO them at Computex.

Intel shills are so transparent.
>OMG LOOK INTEL HAS 20% HIGHER CLOCKS BUT IT COST TWICE THE PRICE!!!!
>TECHNICALLY ITS FASTER AND SUPERIOR SO THEREFORE ITS THE BEST!

>WHO CARES IF THE CPU COSTS 2K + AND THE MOBO IS 1k+ ITS FUCKING 20+ CORES
>AMD BTFOOOOO

>wait... wait...
FUCKING AMD MADE A 32 CORE CPU!??? WHAT FUCKING BRAINLETS WHAT KIND OF CONSUMER WILL UTILIZE THAT HAAAAAH ITLL PROBABLY COST MORE THAN A AVERAGE DESKTOP BUILD HAAAAAH

>AMD BTFOOOOO :^)

Shh.
You will anger the rabbi.

I must agree that this is impressive, but the holy grail is still on zen2, and that's what I'm waiting for

Perhaps I wasn't clear enough when I specified my usecases. I'm a developer, I do a lot of media transcoding for various reasons (backing up stuff for my family, 200gb+ music collection that grows all the time, etc), I love to play around in 3D modeling/rendering as a hobby, etc. I have a lot of very heavy CPU-intensive tasks that I often leave running in the background while doing other things.
On top of that, I want to run Linux as a daily driver but retain a Windows VM because I do game when I have the time. So, VFIO, dedicate a CCX to the VM, leave 24 cores to the host system.
As for cooling, 2000 series Ryzen sits at around 100w for 4GHz under maximum load, as per the Guru3D review. Anandtech reports similar results in their "whoops we fucked up the Ryzen results" article, pic related. 400w is not totally insane for a 280mm cooler with full coldplate coverage, it'd just be pushing the radiator to its limit, given good fans. So, the Enermax LiqTech 280mm cooler could almost certainly do it.
I promise I'm not larping. Just have very few hobbies that don't involve powerful computing hardware.

Attached: pudding.png (650x337, 39K)

my current computer is estimated to take slightly over 500 days to compute usable results for protein simulation

inb4: normies can't use that

derp muh games

Not true at all. CPUs only have a few SIMD lanes per core, while GPUs have thousands, essentially saving you billions in costs and time if you're a big company.

Any algorithm can be parallelized, given enough 'passes'.

nobody needs 32 cores for desktop work

>Any algorithm can be parallelized
Wrong.

t. corelet

>Wrong.
Wrong.

shadertoy.com/view/4ld3DM

can someone explain to a brainlet what practical use(s), if any, there are for your average consumer for 32 cores and 64 threads?

If you want an example of the most normalfag task where you would see a huge performance boost, imagine compressing several gigabytes of your vacation pictures within several seconds to a couple of minutes.

you can, for example, use 20 cores to run your micropajeet designated operating system, 8 cores for your loli ecchi blu ray rips, and 1 core for your work since Blender is singlethreaded. The other 3 cores are spares.

raytracing

In a really odd way (Ryzens do it too, IIRC): it corrects 1-bit errors, but 2-bit errors (that are supposed to be logged and shut the machine down) just get ignored.

So I can play minecraft at 60 fps

Parallelise 1+2+3+4 then.

thread 1: 1 + 2 -> a
thread 2: 3 + 4 -> b
sync threads
thread 3: a + b -> output
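Here's that pseudocode as actual runnable C++, just to show the shape (obviously a dumb way to add four numbers; join() is the "sync threads" step):

#include <iostream>
#include <thread>

int main() {
    int a = 0, b = 0;
    std::thread t1([&] { a = 1 + 2; });   // thread 1: 1 + 2 -> a
    std::thread t2([&] { b = 3 + 4; });   // thread 2: 3 + 4 -> b
    t1.join();                            // sync threads
    t2.join();
    std::cout << a + b << "\n";           // thread 3: a + b -> output (10)
}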

2-bit errors never happen though

Adding up members of a set is a classic map operation, retard.

Only a tiny fraction of problems is parallelizable, and only a tiny fraction of THOSE problems scales beyond 4 cores, according to Amdahl's law. Sorting algorithms for example only see a speedup from N log(N) to log(N), and that's in the best case.
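For anyone who hasn't seen Amdahl's law written out: speedup on n cores is 1 / ((1 - p) + p/n), where p is the fraction of the work that actually parallelizes. Tiny sketch (the fractions and core counts are just example numbers) showing how fast it plateaus:

#include <cstdio>

int main() {
    const double fractions[] = {0.50, 0.90, 0.99};  // parallel share of the work
    const int cores[] = {2, 4, 8, 16, 32};
    for (double p : fractions) {
        std::printf("p = %.2f:", p);
        for (int n : cores)
            std::printf("  %2dc -> %5.2fx", n, 1.0 / ((1.0 - p) + p / n));
        std::printf("\n");
    }
}

Even 90% parallel code stays below a 10x speedup no matter how many cores you throw at it, which is the "doesn't scale beyond a few cores" point.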

Actually can't be parallelized:
a + b = x
x + c = y
y + x = z
recursively, n times.

Reduce, actually.
And it only gets down to log(N) steps, and only if you have as many cores as there are numbers/2.

I mean, if you need to sort ten billion things, mergesort will parallelize nicely onto 32+ threads, regardless of latency.
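Rough sketch of that idea, not a tuned implementation: recursive mergesort where the left half goes to a std::async task until a depth budget runs out (depth 5 gives up to 32 concurrent sorts), then std::inplace_merge stitches the halves back together. Sizes, seed, and depth are arbitrary.

#include <algorithm>
#include <future>
#include <random>
#include <vector>

void psort(std::vector<int>& v, std::size_t lo, std::size_t hi, int depth) {
    if (hi - lo < 2) return;
    std::size_t mid = lo + (hi - lo) / 2;
    if (depth > 0) {
        // Sort the left half on another thread while we do the right half.
        auto left = std::async(std::launch::async,
                               psort, std::ref(v), lo, mid, depth - 1);
        psort(v, mid, hi, depth - 1);
        left.wait();
    } else {
        std::sort(v.begin() + lo, v.begin() + mid);
        std::sort(v.begin() + mid, v.begin() + hi);
    }
    std::inplace_merge(v.begin() + lo, v.begin() + mid, v.begin() + hi);
}

int main() {
    std::vector<int> v(1 << 22);
    std::mt19937 rng(42);
    for (int& x : v) x = static_cast<int>(rng());
    psort(v, 0, v.size(), 5);   // depth 5 -> up to 2^5 = 32 concurrent sorts
    return std::is_sorted(v.begin(), v.end()) ? 0 : 1;
}

The merges near the top of the recursion are still mostly serial, which is exactly why you don't see a clean 32x even on an embarrassingly big input.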

>""""""only"""""" N log(N) to log(N)
lol

Sorting algorithms need as many cores as there are objects to sort/2 to achieve the theoretical speedup. And it's still an immense waste of resources.

>you only sped up my sorting algorithm by 32x what a complete waste

Except you won't see a 32x speedup.

For the average Jow Forums poster? Fucking useless and overkill. Basically a "I have a lot of money so I'm going to waste it so I can shitpost on Jow Forums"

Machine vision too.
Also, in CAD it's really only rendering that may be CPU intensive.

Here's quicksort for example.

Attached: Speedup-Estimate-of-Parallel-Quick-Sort.png (598x364, 45K)

Software rendering and encoding result in far higher output quality. 32 solid cores make shit like encoding 4K video in AV1 a sane task.

Basically, everything that isn't embarrassingly parallel isn't worth the hassle to parallelize.

Cycles renders burning down your house

Attached: 5845ca7c1046ab543d25238b.png (750x730, 104K)

You're still seeing a 4x speedup with 16 cores, at the lowest performance bound.
The sudden flatlining could be explained by the hardware (8 cores but 16 threads?).

Not true. Histopyramids are regularly used in image processing, and they turn otherwise non-parallel problems into fully parallel ones.

Threadrippers aren't so much for gaming as an alternative for CAD work etc. However, I use a 1950X OC'd to 4.2GHz for a virtual gaming server where I host 4 virtual gaming PCs from the one desktop, so I can basically get desktop performance off a thin client.
I wouldn't really be interested in 32c, but if they release another 16c model with higher clocks I'd get it.

Perfect Shadow of the Colossus PS2 emulation

How does that work? I suppose you can't just RDP into that thing?

It basically is RDP. I can't remember what hypervisor was used etc., but you just log on to the virtual server from your client and boot up a VM. All the processing is done by the server, so the client is just peripherals and a monitor.

everything you can imagine
considering the fact that AMD lets you disable as many cores as you want via Windows, you can game while rendering and having a VM running, and the CPU won't even sweat