OH NO NO NO NO NO NO NO NO NO NO NO
AHAHAHAHAHAHAHAHAHAHAHAHAHAHA
2080ti bros ww@
Source? With several different benchmarks if possible, otherwise it's bullshit.
Even though I know AMD shit sucks, I prefer it to get BTFO factually.
NVIDIA claims 16x/32x more integer throughput (INT8/INT4) with their new TURING architecture
openbenchmarking.org
that's half true, but also half false.
the integer performance is due to the tensor cores.
tensor cores are made for certain kinds of workloads, e.g. matrix multiplication.
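for the unaware: a tensor core does a fused matrix-multiply-accumulate, D = A*B + C, on small low-precision tiles with a wider accumulator. a minimal numpy sketch of that operation (illustrative only — tile size and dtypes here are examples, this is not how you program the hardware):

```python
import numpy as np

# Tensor cores compute D = A @ B + C on small matrix tiles:
# A and B in low precision (e.g. fp16), C/D accumulated in fp32.
# 4x4 tiles chosen for illustration.
A = np.arange(16, dtype=np.float16).reshape(4, 4)
B = np.eye(4, dtype=np.float16)
C = np.ones((4, 4), dtype=np.float32)

# Emulate the wide accumulate by upcasting before the multiply.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D[0, 0])  # 1.0 (A[0,0]=0 times identity, plus the 1 from C)
```

the point is that one hardware instruction does the whole tile's multiplies and adds at once, which is why the throughput numbers only show up on matmul-shaped workloads.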
en.wikipedia.org
Performance doesn't matter.
we saw that in the quake II ray tracing benchmarks.
>int4
lmao who thinks this is relevant
>1650 beats 1080
that's all I needed to know
In a synthetic benchmark, yes it does.
that shit python machine-learning program that eats one cpu core needs gpu acceleration, or else you get 0.00001x the performance.
OpenCL lacks support for tensor core hardware.
In Volta/Turing, Nvidia added a separate INT32 pipeline that executes concurrently with FP32, mainly for fast address math (int32 is what computes array indices on every memory access).
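concretely, that int32 path is churning through index math like this on every array access. trivial row-major addressing sketch (illustrative):

```python
# Row-major flattened index: every a[i][j] access costs integer ops
# (i * row_stride + j) before the load even happens — this is the
# address math a dedicated INT32 pipeline overlaps with FP32 work.
def flat_index(i: int, j: int, row_stride: int) -> int:
    return i * row_stride + j

rows, cols = 4, 8
buf = list(range(rows * cols))  # row-major layout
# element (2, 3) lives at 2*8 + 3 = 19
assert buf[flat_index(2, 3, cols)] == 19
print(flat_index(2, 3, cols))  # 19
```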
>synthetic benchmarks
wtf i love nvidia now
>haha yeah my team is winning!!!!
>clpeak
ah yeah you must be the fag that kept spamming the same shit back when vega fe launched
you do know that clpeak doesn't represent anything remotely close to anything in the real world, eh?
3,5 = 4
>BENCHMARKS DON'T MATTER
github.com
>A synthetic benchmarking tool to measure peak capabilities of opencl devices. It only measures the peak metrics that can be achieved using vector operations and does not represent a real-world use case
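that's literally all a peak benchmark is: an unrolled stream of independent multiply-adds with no memory traffic and no branching. a toy cpu version of the idea (the number it prints means nothing outside the loop itself, which is exactly the complaint about clpeak):

```python
import time

def peak_mads(iters: int) -> float:
    """Toy 'peak throughput' kernel: back-to-back multiply-adds on
    register values, zero memory access — nothing a real workload
    looks like, but it produces a big impressive-sounding number."""
    a, b, c = 1.000001, 0.999999, 0.5
    t0 = time.perf_counter()
    for _ in range(iters):
        c = a * c + b  # one multiply-add worth of work
    dt = time.perf_counter() - t0
    return iters / dt  # "MAD ops per second"

rate = peak_mads(1_000_000)
print(f"{rate / 1e6:.1f} M mad/s (synthetic, proves nothing)")
```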
You had me at "Integer"
ARE THERE ANY REAL WORLD BENCHMARKS OUT YET?! DON'T GIVE ME FUCKING RELATIVE FPS! I WANT 99th, 1st AND 0.1st PERCENTILE FRAMETIMES IN DEMANDING AAA TITLES, NOT SYNTHETIC BENCHMARKS!
>khrishna raj
32x theoretical
3x real world
nvidia marketing team at it again.
here today gone tomorrow