How dead is SLI or NVLink?

engines stopped supporting it, it's dead

I saw the JayzTwoCents video too

for GAYMING, it's dead. But NVLink for productivity is still possible.

>How dead is SLI
"Only 1% use it" levels of dead. All that money and hassle just to get micro stuttering

GPUs run too hot as is, and the only reason to chase fps that high is lower input lag, which SLI makes worse.

So it's basically always been pointless

sli has been dead since 3dfx died.
novidia didn't change it one bit; they never developed such technology in the first place, and they let it die a slow death.
>engines stopped supporting it
that's the memery novidia wants you to believe.
you don't have to "support" it explicitly. you are already creating batches of work as part of OpenGL (I can't speak for D3D since I've never used it), which makes distributing those batches across devices easier than it is in regular CPU programming.
AMD is working towards that model of assigning jobs to different chips as part of their "everything's a chiplet" strategy, and that's why they filed a patent for a new hardware scheduler for the next generation of GPUs.
>"Only 1% use it" levels of dead.
when something is stuck at 2000s levels of tech, you never expect it to work.
novidia even wanted you to have matching cards, same stickers and all, for this shit to work. they tried scamming their customers into buying the ultra-gaymen SLI connector, and once that cow was milked they threw them away, just like they did with the PhysX crowd.
otoh, AMD managed to have it working across different GPUs (of the same gen, of course) and even between APUs and dGPUs.
if AMD could do that with 1/100th of the R&D budget and novidia didn't, imagine what kind of nonsense is going on in novidia's R&D departments.
>GPUs run too hot as is
>So its basically always been pointless
cpus ran too hot too, back when they were monolithic single-core behemoths chasing higher clocks to increase performance.
huge chips require very sophisticated techniques to turn parts of the die on and off, and/or very high voltage to propagate the clock from one side of the die to the other. your ~700mm2 novidia 2080 Ti could be broken down into 2-3 chiplets, with roughly half the total consumption and far fewer heat problems.
take a look at the 3900X: the I/O die consumes ~13 W, and the rest of the ~95 W package power goes to the two 6c/12t chiplets.
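
back-of-the-envelope math on that split, taking those figures at face value:

$$(95 - 13)\,\mathrm{W} = 82\,\mathrm{W} \;\Rightarrow\; \tfrac{82}{2} = 41\,\mathrm{W\ per\ chiplet} \;\Rightarrow\; \tfrac{41}{6} \approx 6.8\,\mathrm{W\ per\ core}$$

i.e. each chiplet only has to dissipate ~41 W over its own little die, which is exactly the heat-density argument.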

Multicore adds massive lag. You can feel it in BF, and Carmack talks about it in lectures.

It's shit for fast games. We're talking like 1/5th of a second, look at a clock, that's massive. The only reason consoles have 4 cores is that they're cheap, failed productivity CPUs.

Even a 9900K with all cores disabled besides one, clocked high, performs better in everything besides BFV.

yeah, that's bullshit.
carmack is a good programmer, but not a cpu architect.
in multicore designs you make specific compromises regarding cache coherence and memory consistency.
those two introduce latencies when sharing data between many cores, or even across distributed systems.
>you can feel it in bf
I don't know what gamen feel with their bf, but no human bean can feel anything that's less than 10ns.
you can't even feel/see/whatever a cache miss, which lasts 10 to 20 times longer than a request to bring data from another core's cache.
I doubt carmack said such horrible bs; if he did, he's completely wrong.
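
to put a number on "sharing data between cores", here's a crude cross-core ping-pong sketch (no core pinning, nothing rigorous, just to show the handoff cost is on the order of tens to a few hundred nanoseconds on a typical desktop):

// crude cross-core ping-pong latency sketch (order of magnitude only)
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    constexpr int kIters = 1'000'000;
    std::atomic<int> flag{0};   // 0: main thread's turn, 1: pong thread's turn

    std::thread pong([&] {
        for (int i = 0; i < kIters; ++i) {
            while (flag.load(std::memory_order_acquire) != 1) {}  // wait for main
            flag.store(0, std::memory_order_release);             // hand it back
        }
    });

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {
        flag.store(1, std::memory_order_release);                 // hand to pong
        while (flag.load(std::memory_order_acquire) != 0) {}      // wait for the reply
    }
    auto t1 = std::chrono::steady_clock::now();
    pong.join();

    double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
    // each iteration is one round trip, i.e. two cache-line handoffs between cores
    std::printf("round trip: %.1f ns\n", ns / kIters);
}

build with -O2 and you typically land somewhere in the tens to low hundreds of nanoseconds, i.e. orders of magnitude below anything a human can notice.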

No it's not, lol. Carmack straight up says using more than one core isn't suitable for VR or esports in his 2017 Texas university lecture.

But hey, he just invented 3D, so he must be wrong. Actually play modern BF, it's unresponsive as fuck compared to the older games and CS:GO. Multicore rendering isn't what you think it is, it's for loading data ahead of time at a delay, lol. They don't make single-core anymore, but they still make dual-core. The only reason higher core counts look better is that they put newer-gen cores in them, like how an 8700K isn't the same core as an 8350K, the 8350K is a 7th-gen core.

You got played, learn2tune. A two-stroke always wins. If you wait a bit, something like a 10350K will be a 2 or 4 core on an 8th/9th-gen core and optimal, lawl.

Humans can perceive 2 ms, read up on it. Reaction time doesn't matter, since that's the same for everyone; it's about perception.

You're a dumb poor cunt with too many cores. Sell your shit to an architect or engineer or youtuber and shut up.

>gamen
gaymen
>no human bean can feel anything that's less than 10ns.
humans can't even sense differences at the millisecond level. movies run at 24 fps, and if you factor in online gayming, where every action carries several tens of ms of latency, there's no chance anyone can sense something thousands of times smaller than a few ms.
that's just to clear some things up.
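for scale, the arithmetic behind that:

$$\frac{1\,\mathrm{s}}{24\ \mathrm{frames}} \approx 41.7\,\mathrm{ms\ per\ frame}, \qquad \frac{41.7\,\mathrm{ms}}{10\,\mathrm{ns}} \approx 4\times 10^{6}$$

one movie frame lasts millions of times longer than the nanosecond-scale latencies being argued about.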
>he just invented 3d
no, he didn't invent anything. he combined algorithms that already existed to do the lighting and the fov.
those algorithms date back to the 60s.
the other mumbo-jumbo about intel products is irrelevant to this topic, and I don't see what validity it adds to the rest of the bullshit you typed.

>Humans can perceive 2ms
>The accepted figures for mean simple RTs for college-age individuals have been about 190 ms for light stimuli and about 160 ms for sound stimuli
ncbi.nlm.nih.gov/pmc/articles/PMC4456887/
are scientific articles accepted here, or do I have to quote some tabloid?
inteltards are becoming dumber and dumber... I know that intel has financial issues and zero product portfolio for the foreseeable future, but god damn it, they don't have to hire complete retards to shill for them. they can do better.

I haven't used SLI since early this year, so my knowledge isn't entirely up to date. When SLI works it tends to work well, but it was facing major issues. One of the problems, which NVIDIA actually solved with NVLINK, was the limited bandwidth from one card to another. The old SLI bridges (even HB) were actually very slow and the cards needed a PCIe 16x slot each in order to perform well in SLI in all cases, but mainstream platforms do not have sufficient PCIe lanes. So NVLINK on Turing was a legitimate upgrade due to the huge bandwidth increase, but alas it cannot really solve the 2nd problem, namely that a lot of modern game engines simply are not SLI friendly.

SLI works by rendering frames on each GPU in an alternating fashion: one GPU renders the even frames and the other renders the odd ones. A lot of modern engines simply aren't built to run this way; they have become too complex, with too many inter-frame data dependencies on the GPU side, for driver-side SLI profile hacking to work. Devs simply have not made engines which are friendly towards alternate frame rendering, which is what SLI uses. There's also almost no support from devs for SLI itself; I assume they simply do not give a fuck since it always had few users. This is what is essentially killing SLI: modern games do not support it and have at the same time become too complex and unfriendly for driver-side profiles to hack in multi-GPU support.
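
For anyone wondering what "alternate frame rendering" actually amounts to, here's a toy sketch (record_commands/submit/present are made-up stand-ins, not any real driver or engine API):

// toy AFR loop: even frames on GPU 0, odd frames on GPU 1
#include <cstdint>
#include <cstdio>

struct Gpu { int id; };   // stand-in for a device handle and its queues

// hypothetical stand-ins for real graphics API calls, for illustration only
void record_commands(Gpu& g, uint64_t frame) { std::printf("frame %llu -> GPU %d\n", (unsigned long long)frame, g.id); }
void submit(Gpu& g, uint64_t frame)          { (void)g; (void)frame; }   // kick off rendering on that GPU
void present(uint64_t frame)                 { (void)frame; }            // scan out in submission order

int main() {
    Gpu gpus[2] = {{0}, {1}};
    for (uint64_t frame = 0; frame < 8; ++frame) {
        Gpu& gpu = gpus[frame % 2];   // alternate GPUs frame by frame
        record_commands(gpu, frame);
        submit(gpu, frame);
        present(frame);
        // the catch: anything frame N+1 needs from frame N (TAA history,
        // reprojection buffers, GPU-driven culling results, ...) now lives
        // on the *other* GPU and has to be copied across the bridge. those
        // inter-frame dependencies are exactly what modern engines are full of.
    }
}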

VR SLI would be another application of multi-GPU (1 GPU per eye) and there are actually a few games which support it (the Serious Sam games and Talos Principle) and work great, but that's it. Other VR titles do not support it at all and there's no such thing as SLI driver profiles for VR, despite the fact that using 2 GPUs for 2 eyes would make a lot of sense.

>movies are at 24 fps and if you factor in the online gayming, where you have severa 10s of ms of latency within every action
neither of which can produce anything remotely close to "smooth"

>"smooth"
we are entering audioretard territory, now.

I am wondering, though: how come a 2/4/6/8/12-core CPU supposedly produces latency, yet a 2000-core GPU (SPs or CUDA cores), which obeys the same principles, works with "laggier" RAM, gets its data over a packet-based interconnect that, as we all know, guarantees nothing about latency, uses an intermediate buffer before the data are copied into VRAM (search for PCIe BAR), and runs at half or less the frequency of a CPU, doesn't introduce any lag noticeable by the gayman and his BF?

To top it off, take into consideration the IRQs needed to signal back and forth when a data transfer or any other task has to be started or ended... and IRQs are asynchronous, as we all know... right?

this discussion reminds me of a face-to-face argument I had with an appletard in 2014.
at that time I had a Dell Studio XPS 1645 from 2010, he had a 2014 tardbook (Air, iirc), and he claimed that his display was better.
metrics from a review with light sensors and whatnot showed that his 1-month-old tardbook had a worse display than my 4-year-old laptop at the same price point.
his final words were "it feels better".
I don't know if he went home and cried himself to sleep, but he got off my back with the bragging about his shitty purchase.

>we are entering audioretard territory, now.
not with things like 24 fps and online games; both of those are the absolute bare minimum to be tolerable, and that's after movies take care not to move the camera around too fast and use copious amounts of motion blur to make sure it's not too painful to look at, and after games use all kinds of tricks to extrapolate and anticipate the movement of online players so it doesn't look too janky

>both of those are absolute bare minimum to be tolerable
that's 18fps.
>games use all kinds of tricks to try to extrapolate and anticipate the movement of online players to make their movement not look too janky
that's done for online gayming and has nothing to do with multicore systems; it's because you play with other retards who are 500 miles away or more.

i'm not the one you were talking to about multicore systems; i'm only concerned with your claim that 24 fps and online games are imperceptibly fine, which is total bullshit

>we are entering audioretard territory, now
It's like the gold-coated audio cables where you can totally "feel" the difference. Some people are just happy with whatever overpriced shit they're being marketed
>trust me bruh I can totally FEEL the difference even if objectively there is none

>It's like the gold-coated audio cables where you can totally "feel" the difference
gold-coated contacts, like audio jacks, have lower contact resistance when mated than harder metals do.

>which is total bullshit
gaymen cannot cope with the fact that every game that is not turn-based runs at a fixed tick resolution, whether that's 5 ms or 50 ms.
this is independent of drawing frames. e.g. you draw 2 frames back to back within 10 ms, 5 ms each, while the AI is designed to update or make a new decision every 50 ms.
thus your game renders at 200 fps while your in-game "clock" only generates 20 different states per second.
your "lag" is going to stay at 50 ms no matter what you do, even if you render at 2000 fps.
gaymen and CS are two totally different things, and Jow Forums is a consumerist board.
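
roughly what that frame/tick decoupling looks like in code (a generic fixed-timestep loop, not taken from any particular engine):

// generic fixed-timestep loop: the simulation ticks at 20 Hz (50 ms),
// rendering runs as fast as it can; input-to-state latency is bounded
// by the tick, not by the frame rate.
#include <chrono>
#include <cstdio>

int main() {
    using clock = std::chrono::steady_clock;
    constexpr auto kTick = std::chrono::milliseconds(50);   // 20 game states per second

    auto last = clock::now();
    auto accumulator = clock::duration::zero();
    int ticks = 0, frames = 0;

    while (ticks < 100) {                      // run ~5 seconds of simulated time
        auto now = clock::now();
        accumulator += now - last;
        last = now;

        while (accumulator >= kTick) {         // advance the game state in fixed steps
            accumulator -= kTick;
            ++ticks;                           // update_game_state() would go here
        }

        ++frames;                              // render_frame() would go here
    }
    std::printf("%d ticks, %d frames\n", ticks, frames);
}

no matter how large "frames" gets, the state of the world still only changes 20 times per second.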

i understand the difference between fps and a game engine's tick rate

Yeah, you can totally feel the difference in audio quality

the audioretard market is not about gold-coated connectors, it's about witchcraft.
audioretards think that vinyl is pure recorded sound while a digital representation at a proper sampling frequency is not.
there's even an audio format that offers infinite bit-rate, but none of the audioretards want to hear about it, because "muh bad digital".
audioretards even believe that different usb drivers produce less noise on the digital data transmitted over usb, and a wood-coated usb cable is audioretard-approved.
the resistance in a connector is described by basic physics, and in signal theory you can even model the "filter" it creates that attenuates part of the signal.

that's not the point of gold-plated connectors; the point is that gold doesn't tarnish as easily as most metals, so the contacts stay clean longer, providing a more reliable connection.
the difference in conductivity is basically nothing, especially since they only use a very, very thin plating of gold; it's not like the whole cable, or even the pins, are solid gold.

>there's even an audio format that offers infinite bit-rate
explain yourself

AFAIK, nvidia doesn't make a lot of money in gaming; their main business is AI or clusters or some other bullshit. That might explain why they seem perfectly happy to screw gaymen over repeatedly.

>AFAIK, nvidia doesn't make a lot of money in gaming
NVIDIA makes around half of all their money from gaming and I'm pretty sure it's the largest single sector they're operating in.

>sli is dead since 3dfx died.
This. It never really worked well... except for the Voodoo 2. I've got two Voodoo 2 cards and have recently been playing games on them, and it's surprisingly stable, no stutter, etc. It's the only time I've ever used a multi-GPU setup that worked so well. I do wonder if it worked as well as it did because DX1-7 doesn't have pixel shaders.

Your article on reaction time has literally nothing to do with what can be perceived.

You are trolling or retarded, I can't tell.

there are dirt-cheap metals out there that don't tarnish.
gold is used in connectors because it's much softer than, say, steel, and when you press two gold surfaces together they form a larger contact area than steel would.
there's even a formula for calculating the resistance of such a connection from the contact area; see below.
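
for reference, the standard constriction-resistance model (Holm's formula; I'm assuming that's the formula meant here, the post doesn't name it). for a single circular contact spot of radius $a$, resistivity $\rho$, contact force $F$ and hardness $H$ of the softer metal:

$$R_c \approx \frac{\rho}{2a}, \qquad a \approx \sqrt{\frac{F}{\pi H}}$$

a softer metal (lower $H$) deforms into a larger spot under the same force, so $R_c$ drops, which is the point above.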
>explain yourself
there was a format that expresses each sample as a +1, a -1 or, of course, a 0 relative to the previous one.
what does this mean?
it means that when you capture an amplitude A at frequency X, you store a value, e.g. 1; if the next amplitude is greater, you store a +1, and the other cases respectively.
you can then set your bit rate as high as you want.
with this you can store as many increments per second as your equipment lets you.
the difference from regular encodings is that you don't depend on fitting samples into 16- or 32-bit words.
I'll post the name when I find it; I haven't read about it since 2015.
check again
>perceived
perception has to produce feedback in order to be measured.
how could you measure RTT if you never get the packet back?

>check again
wrong quote, it was meant for him

>it was meant for him
god damn it. anyway, novidia is a gaymen company.

sli is dead because multi-GPU support is getting integrated directly into the graphics API layer

>there was a new format that expressed the amplitude of each sample as a +1 or -1 or ofc 0. ...
that would need to be converted to an absolute voltage in the end, so you'll still be limited by how many bits of accuracy your DAC has
i'm not sure where this would be useful over float32 PCM, which provides a shitton of breathing room

>bits of accuracy your dac has
not at all: you don't have bits, just increments and decrements. you only need an amplifier and a circuit that changes the output amplitude based on the value that is sent.
>i'm not sure where this would be useful over float32 pcm
recording and storage of the original piece.
I gotta find the spec. brb

Use some fucking punctuation, you illiterate. God.

i'm just trying to imagine how that would work/how it would actually be implemented
especially, how would you actually record and reproduce this 'arbitrary precision'?
and is it more efficient than PCM? because if not, what's the point? keep in mind that PCM's precision affects only the noise floor, nothing else, and the audible limits of that are well known, just like how recording sounds above 20 kHz (so sampling above 40 kHz) is pointless for unmanipulated playback for human listening

also, what you describe isn't arbitrary precision; it just doesn't decouple frequency from noise floor like PCM does. the maximum range of values you can possibly get is coupled to the sampling rate. i think there might be something missing from your description, though, because i don't see how it would work in practice. like, what happens with a positive sawtooth wave, where you'll see more '+' samples than '-' samples? your capture would just constantly drift up. how does it account for drift?
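
for what it's worth, here's how i read the description: plain 1-bit delta modulation with a fixed step (not claiming this is whatever format the other anon means). it shows exactly the tracking/slope-overload problem you're asking about:

// minimal 1-bit delta modulation: each sample is stored as +1 or -1
// (step up or step down) relative to the decoder's running estimate.
// a fixed step size means steep edges overload it and the estimate lags.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const double step = 0.05;                            // fixed quantizer step
    const int n = 200;
    std::vector<int> bits(n);
    double estimate = 0.0;                               // encoder's copy of the decoder state

    // encode a "positive sawtooth": slow ramp up (0.02/sample), instant reset every 50 samples
    for (int i = 0; i < n; ++i) {
        double x = std::fmod(i * 0.02, 1.0);             // input signal
        bits[i] = (x > estimate) ? +1 : -1;              // store only the sign of the error
        estimate += bits[i] * step;                      // track the input with fixed steps
    }

    // decode: just integrate the bit stream with the same step
    double y = 0.0;
    for (int i = 0; i < n; ++i) {
        y += bits[i] * step;
        if (i % 25 == 0)
            std::printf("i=%3d  input=%.3f  decoded=%.3f\n", i, std::fmod(i * 0.02, 1.0), y);
    }
    // the ramp is trackable (0.05 per sample > 0.02 per sample), but at each sawtooth
    // reset the estimate can only fall by 0.05 per sample, so it lags ~20 samples
    // behind: that's the slope-overload/drift problem in a nutshell.
}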

Vulkan and DX12 multi-GPU is even more dead than SLI. It is not supported in major engines and most devs don't bother to add support since it's even more work on their side than typical SLI would be, as the driver won't be holding your hand with some profile NVIDIA makes.
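
for completeness, the Vulkan-side hook for this is device groups (core since 1.1). a minimal enumeration sketch, error handling omitted; actually splitting the rendering across the group is where the extra work described above comes in:

// list Vulkan device groups, i.e. sets of physical GPUs the driver
// exposes as linkable for explicit multi-GPU (error handling omitted)
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_1;                 // device groups are core in 1.1

    VkInstanceCreateInfo ci{};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &count, nullptr);
    std::vector<VkPhysicalDeviceGroupProperties> groups(count);
    for (auto& g : groups) g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
    vkEnumeratePhysicalDeviceGroups(instance, &count, groups.data());

    for (uint32_t i = 0; i < count; ++i)
        std::printf("device group %u: %u GPU(s)\n", i, groups[i].physicalDeviceCount);
    // creating one VkDevice over a multi-GPU group is the easy part; the app then
    // has to split rendering itself (per frame, per eye, split-frame...), which is
    // exactly the work most devs don't bother doing.

    vkDestroyInstance(instance, nullptr);
}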

3dfx SLI and NVIDIA SLI aren't even the same thing at all. They only share the acronym and nothing more, not even the full name (Scan Line Interleave and Scalable Link Interface). NVIDIA just named their multi-GPU tech 'SLI' for the recognition, it has nothing to do with what Voodoo cards were doing. 3dfx SLI had the GPUs rendering individual scanlines in each frame, so the GPUs are working on the same frame at the same time while NVIDIA SLI in practice only really worked with alternate frame rendering where each GPU just renders complete frames in alternating fashion.
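
the difference in one line each (purely illustrative):

#include <cstdio>

// Scan Line Interleave (3dfx): the GPUs split work *within* a single frame
int gpu_for_scanline(int y)  { return y % 2; }

// alternate frame rendering (how NVIDIA SLI was used in practice): split *across* frames
int gpu_for_frame(int frame) { return frame % 2; }

int main() {
    for (int y = 0; y < 4; ++y) std::printf("scanline %d -> GPU %d\n", y, gpu_for_scanline(y));
    for (int f = 0; f < 4; ++f) std::printf("frame %d -> GPU %d\n", f, gpu_for_frame(f));
}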

it never was alive to begin with - dead

Clevo and other vendors can't do NVLink on mobile, that's how dead it is

I have a 2008 Clevo I'm going to do up for the lols

I think I can get like a 4 GHz Core 2 Quad in it, and maybe even some obscure MXM Quadro from 2017, lol, since it has the widest MXM slot and power support

Going to be brutally hard work modding the BIOS though, but it will be hilarious if I get another 5 years out of it for $400

I think a 4 GHz Core 2 Quad is roughly a 3.3 GHz i5

Two 2080s would run VR smooth as butter on high settings