Can shit like this be used for gaming?

Been poking around the net about gpgpu and whatnot.
Can you run your os and games off these?
Can you combine gpgpu and video gpu?

Attached: IMG_3064-geforce-1024x768 (1).jpg (1024x768, 262K)


I reckon this, someone answer please

and never come back

GPGPU can't actually output video

man, do you guys think kettle corn is better than regular popcorn? im more of a sweet than salty guy tbqh

Attached: toilet_paper_dilemma.jpg (600x604, 35K)

Could one use a single 2080 for video and, like, eight older 1080s in some four-way or eight-way GPGPU SLI to increase "CPU" performance? Or assign side-thread background processes to a GPGPU?
A quick Google search says running the OS on the GPU alone is impossible

you can suck my dick with your tongue and the roof of your mouth in 2 way SLI

Sure, I would reckon to have one 2080 for output and four 1080s to boost overall OS and games performance... if that even works.

BLAST PROCESSING
Poking around on YouTube regarding playing games or rendering on a mining rig didn't come up with much.
I wanna fuck around with SolidWorks and MATLAB as well as game

Depends on the card, you can run gpgpu workloads on consumer gpus.

Most gpus made for gpgpu workloads that have video out you can game on, but they don't perform better than normal gpus. That's because gpus made for gpgpu have higher double precision performance and more (but not faster) VRAM, neither of which matters much for gaming.

Attached: nvidia_c870_1.jpg (450x309, 24K)

chinkvidya is also charging 9 grand for a tesla v100.

Well my idea was more to get one modern gaming card and four older gpgpu or gpu cards to assist with more CPU type processes whilst gaming or solidtwerking

I used to have mining rigs and I do gpgpu computing.

Most mining rigs use USB risers, which are electrically PCIe x1. So you get increased latency and decreased bandwidth, and neither of those is something you want when gaming.
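Back-of-envelope numbers on why x1 risers hurt, a hedged sketch: the exact figures depend on which PCIe generation the riser actually negotiates, and the "gen 1 riser vs gen 3 x16 slot" comparison below is an assumption, not something from the thread.

```python
# Rough per-direction PCIe bandwidth math (ignores protocol overhead).
def pcie_bandwidth_mb_s(gt_per_s, encoding_num, encoding_den, lanes):
    """Gigatransfers/s * line-code efficiency * lanes -> MB/s (1 MB = 1e6 bytes)."""
    bits_per_s = gt_per_s * 1e9 * (encoding_num / encoding_den)
    return bits_per_s / 8 * lanes / 1e6

# A cheap USB riser is electrically x1; assuming it links at gen 1 (2.5 GT/s, 8b/10b):
riser_gen1_x1 = pcie_bandwidth_mb_s(2.5, 8, 10, 1)      # ~250 MB/s
# A GPU in a full-length gen 3 slot (8 GT/s, 128b/130b, 16 lanes):
gpu_gen3_x16 = pcie_bandwidth_mb_s(8.0, 128, 130, 16)   # ~15,750 MB/s
print(riser_gen1_x1, gpu_gen3_x16)
```

Mining barely touches the bus, so x1 is fine there; a game streaming textures over a ~60x narrower link is a different story.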

You can game on them, but only with one card at a time if you use risers.

And you can't SLI 8-12 GPUs for "megagigaultra" performance either so yeah

If you want to do gpgpu computing as a consumer, buy a consumer card. If you need anything else you'll also know what you need and why you need it.

I use a 1080Ti for my gpgpu workloads, and it's pretty sweet. I could add another one but my gpgpu workloads don't scale that well on multiple GPUs and SLI with 1080Ti is not worth it either due to heat, noise, and me not having any need for it anyways.

>And you can't SLI 8-12 GPUs for "megagigaultra" performance either so yeah

Reeeeeeee

Yeah, the $9000 Tesla card is way overpriced... They're trying to ban server farms from using 1080s or 2080s because Asian merchants

You can do that with consumer GPUs, no problem. I used to game on one 280X while the other six or whatever it was were mining.

If you want to mix a Tesla with a GTX card or something like that, you'll likely have driver issues and headaches, though I haven't tried it.

Unless you need a quadro, get a geforce card (or an AMD card)

Depends on the occasion but if you're at a fair where they're making it in those giant pots kettle corn is amazing.

Cool, excellent
Does it really make much of a difference? I guess that's also what I wonder.
Maybe I get like one 2080 plus eight models of something from two generations ago like a 980, dual Xeons, fuckton of RAM.
I mean GPUs are way beyond the CPU now, but how does the OS or game take advantage of six GPUs in a mining config or however you force it to work (how do you force it to work like you describe while gaming?)

Also curious if I can mix and match new and old GPUs like you can when using one for PhysX

Ah, you were mining with the other six.
Not really what I'm looking for, yeah. Want that megagigaultra performance.
Maybe a server PC

This will not perform better than the latest top of the line consumer GPUs in gaming. GPGPU setups are intended for parallel processes (e.g., image processing, simulation, scientific computing). These systems still need a CPU to perform all the fundamental processes that cannot be parallelized (hardware interrupts, OS services, process management). All CUDA cards allow multiple processes to run at the same time (multiple grids), so you can effectively have graphics and compute processes running at the same time on the same card.

You'll want to put the GPU you're gonna use for gaming in a PCI-e slot, preferably the one closest to the CPU. If you want to use another one for PhysX, that one should be in a PCI-e slot too. The other ones can be connected with USB risers. If you try to mine or do gpgpu workloads with 250W cards without risers they will throttle because they are too close together. SLI faces the same problem. Not a lot of people talk about it, and I only realized when I tried to mine without risers.

I don't know how drivers work with nvidia but with AMD it was no problems running a setup like that.

You want to set the CPU affinity manually so you give each GPU doing gpgpu stuff one CPU core. The rest can be used for gaming or whatever.
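On Linux you can script that pinning instead of clicking through Task Manager. A minimal sketch, assuming GPU worker N simply gets core N (that layout, and the Linux-only `os.sched_setaffinity` call, are my assumptions, not the poster's exact setup):

```python
import os

def plan_affinity(n_gpus, total_cores):
    """Give each GPU worker one dedicated core; leave the rest for gaming.
    Returns ({gpu_index: core}, set_of_free_cores)."""
    assert n_gpus < total_cores, "need cores left over for the game"
    pinned = {gpu: gpu for gpu in range(n_gpus)}
    free = set(range(n_gpus, total_cores))
    return pinned, free

pinned, free = plan_affinity(4, 12)
print(pinned)  # {0: 0, 1: 1, 2: 2, 3: 3}
print(free)    # cores 4..11 stay available for the game

# Inside GPU worker number `my_gpu`, apply it to the current process (Linux only):
# os.sched_setaffinity(0, {pinned[my_gpu]})
```

Each compute process then stops fighting the game for the same cores; on Windows the equivalent is `SetProcessAffinityMask` or Task Manager's affinity dialog.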

Depending on your workload you might want to run one instance per GPU, but if it's something like rendering in SolidWorks or whatever then I suppose that's not an option. In such a case, I'm not sure if you could use the 2080 for gpgpu stuff after you've done gaming if the process is already running.

asrockrack.com/general/news.asp?id=128

Would a server PC offer hypergigaultra performance that a mining rig can't?

That's what I figured, thank you
I'm a dumbass, didn't read the part where you said the other six were for mining. I think you're right that it would be hard to assign a GPGPU to something already running, except for perhaps video capture or uh Discord or something

>want that
>Me gagigaultra performance

Is the workload massively parallel? Unless it is, you're always better off buying the fastest single GPU you can find and the fastest RAM with the lowest latency.

One of the workloads I have takes about 2 weeks to compute on a 1080Ti. Using 4 1080Tis only makes it about 60% faster.
The solution for me is to have 4 machines, each with one 1080Ti, and each running its own two-week compute job. After two weeks they will finish 4 different jobs, and in that case it's just as good as if I had one computer that could do them sequentially one after the other in the same two weeks.
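Those numbers line up with Amdahl's law. A hedged back-of-envelope, assuming the poster's "60% faster on 4 GPUs" is accurate and the workload fits the simple serial-fraction model:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: overall speedup when a fraction p of the work
    runs perfectly parallel across n workers and the rest stays serial."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(speedup, n):
    """Invert Amdahl's law: which parallel fraction p explains an
    observed speedup on n workers?"""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n)

p = parallel_fraction(1.6, 4)   # 4 GPUs, only 60% faster
print(p)                        # ~0.5: only about half the work scales across GPUs
print(amdahl_speedup(p, 8))     # ~1.78x: even 8 GPUs barely beat 4
```

Which is exactly why one independent two-week job per machine beats one giant multi-GPU box here: independent jobs have p = 1 by construction.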

Xeons have lower clocks and lower single core performance so unless your workload is massively parallel don't bother.

Dual socket Xeon systems have more latency, and each CPU can only access half the RAM directly. Newer systems where the CPUs share all the RAM have more latency too, and passing the data through the bus reduces bandwidth, so lower performance in some workloads.

My dream was to just have like 7 or 8 GPGPUs working as a hyper CPU that wouldn't need to be assigned to jack shit

Thanks for the info, it's very interesting
My plan is to make a nice Ramdisk gaming rig then I suppose.

I mean, this is retarded and all, and you should definitely fuck off to but it COULD work if one was to create a game engine capable of offloading things (physics, particle effects, etc.) to GPGPU. Nothing like that currently exists, but it would make a cool uni thesis.

This. Pathetic /v/irgins…

Attached: 1474759105542.png (375x500, 223K)

Well, that's not feasible with current technology. Additionally, some tasks are not parallelizable, so I would still like to have a system with a hierarchy of slower parallel processing and faster single threads. For example, you could have a GPU and a CPU, but the CPU would have a couple of fast cores with 4-6 more slower ones to allow for a better hierarchy (of course this results in overhead, but as a thought experiment).

Keep in mind that you have to load what ever you're gonna put in that ramdisk from a HDD/SSD. So if you want to decrease loading times in a game, first you have to read that entire shit into the ramdisk. If you do it at boot, then it's gonna slow down your machine after boot and until it's finished. I've done it but it's not worth it.

Some programs that don't utilize RAM very well can benefit from a ramdisk. I put the entire installation directory and config files of Chrome into a ramdisk once and it increased my performance in a very heavy web application compared to just normal SSD + RAM.
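The Chrome trick above boils down to "copy the install dir into tmpfs and point the app at the copy". A rough sketch of the staging step, assuming a Linux box where `/dev/shm` is a tmpfs mount (the demo below uses throwaway temp dirs instead of a real install):

```python
import shutil
import tempfile
from pathlib import Path

def stage_to_ramdisk(src, ramdisk="/dev/shm"):
    """Copy a directory into a tmpfs mount so later reads come from RAM.
    On most Linux systems /dev/shm is already mounted as tmpfs."""
    dst = Path(ramdisk) / Path(src).name
    shutil.copytree(src, dst, dirs_exist_ok=True)  # Python 3.8+
    return dst

# Demo against a throwaway directory instead of a real game/Chrome install:
src = Path(tempfile.mkdtemp()) / "game"
src.mkdir()
(src / "config.ini").write_text("fullscreen=1")
staged = stage_to_ramdisk(src, ramdisk=tempfile.mkdtemp())
print((staged / "config.ini").read_text())  # fullscreen=1
```

Note the catch from the post above: this copy still has to be read off the SSD/HDD first, so you only win on the second and later reads, and everything in tmpfs is gone on reboot.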

I think it would be really amazing with the current power of GPUs.
But potentially a PS3 Cell-level debacle.
Allegedly the PS4 can handle tons more physics than PC for some reason as well, need to find the article again.
Yes

I was hoping to boot win10 off a DVD for perfect copy but the bastards only enabled USB boot. Maybe this will be a partial freetard/steam machine
Chrome sounds like a good idea.
Is reboot restore rx a good program, anyone?
Preferred ramdisk programs?

Also since yall are around
Do y'all think the mobile i9 and mobile rtx 2080 SLI laptops will be good
I'm amazed at how mobile GPUs are nearly identical now, but aren't mobile CPUs still gimped?

Would 512 gigs of ram noticeably improve my quality of life

The laptops will be good, and really the reason that mobile CPUs are gimped is because of thermals.
It's very difficult to make a laptop that is a reasonable thickness and won't instantly overheat.
The i9 2080 laptops will probably not be worth the premium over an i7 1070 laptop.

>Would 512 gigs of ram noticeably improve my quality of life
Unless you have an SSD that loads entirely into RAM at boot time, no.

idk check on the internet

SLI sucks for esports, it increases input lag.

>mobile gpus
HAHAHAHAHAHAH
HAHAHAHAHAHAHAHAHHAHAHAHAH

Attached: 1483651519783.gif (225x249, 808K)