Jow Forums's thoughts on the concept of Modular Graphics Cards?

Attached: modular-graphics-card-nvidia-radeon-design-2018-dave-delisle-davesgeekyideas.jpg (740x450, 162K)

kill yourself

where is the cooler?

Seems like a good idea until you think about it. Or have ever thought about it. Or have ever thought about anything remotely similar.

if that's what you want to do then why not just build the sockets onto the mainboard silly

Attached: 9d5.png (2688x2688, 173K)

Latencies.

>anything remotely similar
Tell that to /pcbg/

This is retarded.

>just buy a new card
>just consume
go be a kike somewhere else

/paul/ everybody

Why not ask dave?

davesgeekyideas.com/2018/11/19/modular-graphics-card-nvidia/

That kind of thing is probably better for compute cards. If it's desirable for anything anyway.

no heatsink.....

Also note the lack of VRM and fan connectors

>Modular Compute Cards.
Too bad Intel gave up on that idea.

Modular sounds pretty stupid but graphics cards using standard parts would be nice.

I'm curious to know why a modular graphics card would be such a pic-related idea?

Attached: 1512864245244.png (951x972, 241K)

>kid builds PC
>can’t afford the $1400 worth of shit plus a $1400 GPU
>instead gets mum to buy him a $400 base model GPU, 2gb of ram by using 4x 512mb sticks filling all slots
>year later he can finally afford to upgrade GPU
>already $400 in so just upgrades the components
>replaces processor with the highest available at the time of purchase, can’t upgrade to current because the chipset on the baseboard won’t allow it
>upgrades RAM to 8GB in four 2GB sticks, uses all slots as forced by config so the original 512mb sticks are now useless
>old processor is useless
>all up he spent $400 originally plus an extra $1400 to upgrade. So rather than spending $1800 and having two working GPUs (one baseline from a year ago and one high end today), he's spent the same amount and has a high end from a year ago plus some useless, worthless chips no one would ever pay for
Sounds fucking brilliant. Kids will lap this shit up, and their parents will never let them buy a new board because “well can’t you just upgrade the one we bought you?”
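Napkin math on that scenario, if anyone wants to check it (all dollar figures are the illustrative numbers from the post above, nothing official):

[code]
# Rough comparison of the two upgrade paths from the greentext above.
# Dollar figures are the made-up numbers from that post, nothing more.

# Path A: hypothetical modular card, upgraded piecemeal
modular_initial = 400    # base modular GPU with cheap processor + 4x 512MB modules
modular_upgrade = 1400   # newer (but already year-old) processor module + 4x 2GB VRAM
modular_total = modular_initial + modular_upgrade
# leftovers: old processor module and four 512MB sticks, worth roughly nothing

# Path B: ordinary cards, cheap one now, flagship later
regular_initial = 400    # baseline card today
regular_upgrade = 1400   # whole new high-end card a year later
regular_total = regular_initial + regular_upgrade
# leftovers: a complete working baseline card you can resell or hand down

print(f"modular: ${modular_total}, ends on last year's high end (baseboard caps the chipset)")
print(f"regular: ${regular_total}, ends on a current flagship plus a spare working card")
[/code]

Same $1800 either way, which is the whole point of the post above.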

Gpus are much hotter than cpus and require direct die contact. LGA sockets wouldn't work well. Gpu manufacturers already have enough trouble with drivers, you want casuals to troubleshoot vbios and daughter boards? Rather why not just make a universal or dedicated gpu socket and DP motherboards?

The only promising thing about it would be being able to bin VRAM and core silicon.

>Rather why not just make a universal or dedicated gpu socket and DP motherboards?
I've always wondered why they didn't do this; I've been told it's because GDDR5 onwards requires precise PCB traces, and replaceable DIMMs would introduce too much latency.

>Rather why not just make a universal or dedicated gpu socket and DP motherboards?
You would be stuck with whatever display standard your motherboard shipped with
That would really suck if you wanted something like HDMI 2.1 or DP 1.5 or some other future standard as it would mean a new motherboard

That can't be right. The layout of memory VRMs on gpus is usually an afterthought. Also there's always HBM.

That really doesn't happen very often. Your chipset features and the limits of a vrm would become a reason to upgrade more frequently. You could say the same about PCI-e and DIMM slots after all.

An easy solution would be single slot pci-e cards that are just display controllers.

This whole thread was a ploy for a gimmicky website
Fuck, Jow Forums sucks. Jow Forums as a whole sucks now. Fuck this gay earth.

Attached: aYYp6xV_460s.jpg (460x493, 51K)

What about Zero insertion force PGA instead of LGA?
What about a cooler akin to pic related?

Attached: Heat_Sink.jpg (1492x1024, 785K)

>SODIMM memory
>as a GPU's VRAM

Are you mentally retarded?

From a purely theoretical standpoint it's really neat to me, as I would be able to shoot for insane memory levels for use with Redshift and Houdini (especially the latter, where point caches for ultra high end fluid sims can be over 100GB and have to spill out of GPU RAM into much slower system RAM).

In practice though it would probably get bungled: either support would be dismal and end quickly, or it wouldn't work as well as a regular GPU

He isn't talking about VRMs
The actual memory dies need to be really close to the GPU because signals take time to travel
If traces are too long they introduce undesired latency; if traces are mismatched in length, some bits arrive too early or too late
With HBM this is even more important, and some of HBM's improvements come from the fact that the memory and GPU share the same interposer
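Some rough numbers on that (the FR-4 velocity factor and the 8 Gb/s per pin figure are ballpark assumptions, not from any datasheet):

[code]
# Back-of-envelope: why GDDR trace length and matching matter.
# Assumed numbers are illustrative, not taken from any real board.
c = 3e8                 # speed of light in vacuum, m/s
velocity_factor = 0.5   # signal speed in FR-4 is roughly half of c
v = c * velocity_factor # ~1.5e8 m/s, i.e. about 15 cm per nanosecond

data_rate = 8e9                            # GDDR5-class signalling, 8 Gb/s per pin
unit_interval = 1 / data_rate              # 125 ps per bit
mismatch_for_one_bit = v * unit_interval   # trace length delta worth a full bit time

print(f"bit time: {unit_interval * 1e12:.0f} ps")
print(f"length mismatch worth one bit: {mismatch_for_one_bit * 1e3:.1f} mm")
# -> roughly 19 mm; real layouts match far tighter than that, which is why
#    socketed VRAM sitting several cm from the die is such a hard sell.
[/code]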

You know there are aftermarket GPU coolers like the Morpheus, right? Also, at that point I think water cooling makes a lot more sense.

Bad idea

Maybe ok if you can choose one of a couple cooling options, the same as some CPUs ship without heatsinks, but otherwise bad idea.

Those were less "add another CPU as a coprocessor" and more "it's your whole PC, just with all inputs and outputs through this connector".

Ah gotcha, the DIMM slot itself introduces latency. I wonder at what speed that becomes an issue? We're not far from 4000 MHz DDR5 being used with APUs, and GDDR5 is between 4000 and 8000 MHz effective.

GDDR is usually labeled as 4x the actual clock speed of the memory whereas DDR is 2x, but are they actually comparable in terms of latency at the same frequency?
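Quick sketch of what those labels work out to; the CAS cycle counts here are placeholders just to show the conversion, not real datasheet timings:

[code]
# "Effective MHz" on the box vs the clock the memory actually runs at,
# and why latency is better compared in nanoseconds than in cycles.
# CAS cycle counts below are placeholders, not real timings.

def actual_clock_mhz(effective_mhz, transfers_per_clock):
    return effective_mhz / transfers_per_clock

def cas_in_ns(cas_cycles, clock_mhz):
    return cas_cycles / clock_mhz * 1e3  # cycles / MHz -> nanoseconds

ddr4_clock = actual_clock_mhz(3200, 2)    # DDR4-3200: 2 transfers/clock -> 1600 MHz
gddr5_clock = actual_clock_mhz(8000, 4)   # GDDR5 "8000 MHz": 4 transfers/clock -> 2000 MHz

print(f"DDR4-3200 actual clock:  {ddr4_clock:.0f} MHz")
print(f"GDDR5-8000 actual clock: {gddr5_clock:.0f} MHz")

# same hypothetical CAS of 16 cycles on each:
print(f"16 cycles on DDR4-3200:  {cas_in_ns(16, ddr4_clock):.1f} ns")   # 10.0 ns
print(f"16 cycles on GDDR5-8000: {cas_in_ns(16, gddr5_clock):.1f} ns")  #  8.0 ns
[/code]

Same cycle count, different wall-clock latency, so cycle counts at different clocks aren't directly comparable.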

it'll cost 10x as much and be 10x slower than a flagship nvidia

>Gpus are much hotter than cpus and require direct die contact
This is plainly false because the TIM most brands use is worse than "top" TIMs like Noctua NT-H1, Gelid GC Extreme, probably NT-H2, etc.

Any soldered heatspreaders would have very little impact on the temps

Let's not get into TIM. Just in terms of TDP, even a midrange 180W GPU is roughly twice the TDP of a 91W i7 or R7.

What I would prefer is getting a stock card with a standardised system for installing a cooler of my choosing, so I don't have to worry about some fans I'd have no chance of replacing if they die.

What about introducing a new module form factor: V-DIMM?
V-DIMMs should not only be 1.5 times the size of regular DIMMs, they should also have enough pins to warrant the size.

Modular GPUs would be dumb. PCBs are probably 15% of the total cost to manufacture, so any cost savings to the consumer would be eaten up by the cost of making them compatible, not to mention the design limitations.

>91W i7
>91W
oy vey

>at that point I think water cooling makes a lot more sense.
lets see, a $70 cooler or a $300 loop for gpu alone. hmmm. i think i'd rather take up 4 pci slots that will never be used for $70 rather than going water.
>hurr aio mod
hurr it's still more than a $70 Morpheus, especially considering the Morpheus is nearly identical to a custom loop in performance.

>pic related
oh wait, i did go morpheus with my 64 and it keeps the core in the 50's.

Attached: DSC_0332.jpg (2304x1536, 1.65M)

>save $10 in plastic to strangle $500+ in silicon with shitty outdated buses and interfaces five years down the road
Fucking why

Would PCI-e x16 be better?

What about PCIe 5.0 x12?

Jesus Christ the absolute state of Jow Forums and nu-males

>implying 3Dfx didn't try this with Rampage before they were bought out
You could swap the chips because it was on a bed of BGA pins. Fuck nVidia.

Attached: Screenshot_20190417-153052_Chrome.jpg (1440x2960, 1.56M)

I installed a Morpheus II Core, or however it's called, with a single new Noctua F12 fan, since these are 30€ each; I will add a second one later. I also put a case fan there to have something blowing air. Temps go up to 64°C at full fan speed.
How much cooler does yours run?