Does AMD have anything similar to Nvidia's technology, or will it lag behind again?

youtube.com/watch?v=7Yn09UHWYFY

Attached: metro.jpg (1920x1080, 1.88M)

Other urls found in this thread:

youtu.be/GJfEy-kMdFc
computerbase.de/2018-05/mifcom-miniboss-titan-v-test/3/#diagramm-baikal-ray-tracing-radeon-pro-renderer-in-1920-1080
twitter.com/ZigguratVertigo/status/977225599805669376
twitter.com/ZigguratVertigo/status/1031681286551871493

It seems like this RTX stuff is closed-source, proprietary technology, so even if AMD had the hardware it couldn't run Nvidia's implementation. It would either have to offer developers an alternative open standard that would be promptly ignored, or run the raytracing in software like any other GPU without dedicated hardware, at a massive performance cost.

As for the tech itself, it's basically a small number of rays being cast per pixel, then a neural-network-driven smart denoiser filter cleaning up the result. AMD could implement something similar, but it would need something comparable to both tensor cores and dedicated raytracing cores on its GPUs, which I doubt will come any time soon.

I expect AMD will simply not bother competing with the RTX lineup and will instead just compete with the GTX lineup (2050, 2050 Ti, 2060, 2060 Ti and under), focusing on perf/dollar and power efficiency with 7nm Vega.
Let Nvidia keep the ultra high end raytracing market.

i didn't see anything in that video that would justify buying a new gpu. grafix fags are the absolute worst.

Anyone know how much faster this is than CPU raytracing, now that we've got 4 GHz 32-core CPUs?

Since it's not real full raytracing, but rather a small number of raytracing samples with an AI denoiser filter, we don't know yet. They didn't show benchmarks, only figures like "10 times faster".
This implementation seems to be real-time friendly; I'm not sure how it could be done in software mode, since the RT cores are dedicated hardware just for raytracing.
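
Roughly the idea, as a toy CPU sketch (nothing like Nvidia's actual pipeline; their denoiser is a trained neural net, here a dumb 3x3 box blur stands in for it, and rand() stands in for a 1-sample-per-pixel trace):

#include <cstdio>
#include <cstdlib>
#include <vector>

// Toy greyscale framebuffer version of "few rays + denoise".
// renderNoisy() fakes a 1-sample-per-pixel raytrace (just noise around 0.5),
// denoise() fakes the neural filter with a plain 3x3 box blur.
static std::vector<float> renderNoisy(int w, int h) {
    std::vector<float> img(w * h);
    for (float& px : img)
        px = 0.5f + 0.5f * (std::rand() / float(RAND_MAX) - 0.5f); // noisy estimate
    return img;
}

static std::vector<float> denoise(const std::vector<float>& in, int w, int h) {
    std::vector<float> out(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            int n = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int xx = x + dx, yy = y + dy;
                    if (xx < 0 || yy < 0 || xx >= w || yy >= h) continue;
                    sum += in[yy * w + xx];
                    ++n;
                }
            out[y * w + x] = sum / n; // average away the per-pixel noise
        }
    return out;
}

int main() {
    const int w = 320, h = 180;
    std::vector<float> noisy = renderNoisy(w, h);
    std::vector<float> clean = denoise(noisy, w, h);
    std::printf("center pixel: noisy=%.3f denoised=%.3f\n",
                noisy[(h / 2) * w + w / 2], clean[(h / 2) * w + w / 2]);
    return 0;
}

The real trick is that the neural filter uses extra buffers (normals, albedo, motion vectors) to stay sharp where a blur like this would smear everything.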

youtu.be/GJfEy-kMdFc
Graphically this is the only impressive game I've seen that's built from the ground up for this tech
Shame about the terrible gameplay Remedy have had since Alan Wake and quantum shit (which had voxel-based faux ray tracing)

The fact is, as they said, they already have 21 games with ray tracing tech

i bet my ass virginity that they're calling dynamic lighting "ray tracing"

Real full ray tracing was already a thousand times faster on GPU, and that was with Volta, i.e. with tensor cores but without this dedicated RT hardware.

>Real full ray tracing was already a thousand times faster on GPU
I thought that was only with cards that had double precision perf though.

If it traces rays to render, it's ray tracing, regardless of your gay ass virginity.

Which the Titan V has.

Is there anything worth playing these days?

I'm grandpa tier old, still play BF4, new games are all kinda shitty.

Nah, RTX is, but DXR is available to both AMD and Nvidia since it's an MS product. RTX is simply Nvidia's implementation of DXR on their cards.

I expect AMD will release their budget version of it, which doesn't quite work right, in the next few months. Jow Forums will hail it as good guy AMD open source, Nvidia is done, etc., but the implementation will be shoddy and buggy. Nvidia's solution will gradually get phased out because whatever proprietary software comes with RTX to make DXR run will end up being bloatware (like HairWorks); they'll pay to ham-fist it into a few games.

In 3 or 4 years games will start using this tech properly, using neither AMD's nor Nvidia's solution but likely a third-party middleware group's solution, like they did with Havok.

RTX uses proprietary RT cores that are dedicated raytracing hardware. AMD will probably have no answer to that and will be forced to run DXR in software mode at a massive performance cost.

AYYMD can't compete with 10 Giga Rays/s

Attached: 2018-gdc-realtime-raytracing-techniques-for-integration-into-existing-renderers-39-638.jpg (638x360, 25K)

You'd be surprised, they usually cobble something together. AMD cards were better than Nvidia's at raytracing for a long time anyway (LuxRays benchmarks etc.), so I don't think they'll be as far behind on this as people think.

AMD has always been better at compute than Nvidia; we'll see if AMD comes up with its own dedicated raytracing hardware or with something else.

The only thing that is walled off is OptiX. And I am willing to bet most implementations for games will be done through DirectX and Vulkan, which AMD is not walled out of.

Anything that is part of DXR or the Microsoft implementation should be compatible with any DX12-compatible hardware; it will just lack the dedicated hardware Nvidia has. I expect AMD to come up with either their own implementation or something that won't be adopted, and as said, we'll end up seeing this in every game in a couple of years through some third-party implementation that's API- and vendor-agnostic.
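
For what it's worth, checking for DXR support is just a regular D3D12 feature query, nothing vendor-specific. A rough sketch (Windows-only, needs the DXR-era SDK; whether non-RTX hardware reports a tier at all is up to the driver):

#include <windows.h>
#include <d3d12.h>   // link with d3d12.lib
#include <cstdio>

// Ask D3D12 whether the default adapter exposes a DXR raytracing tier.
int main() {
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 __uuidof(ID3D12Device),
                                 reinterpret_cast<void**>(&device)))) {
        std::printf("no D3D12 device\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    bool dxr = SUCCEEDED(device->CheckFeatureSupport(
                   D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5)))
               && opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;

    std::printf("DXR raytracing tier 1.0: %s\n", dxr ? "supported" : "not supported");
    device->Release();
    return 0;
}

Same query works on any vendor's card; the tier just tells you the driver exposes DXR, not whether it runs on dedicated hardware or a slow compute path.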

There is nothing stopping AMD from implementing the DXR API in hardware. RTX is just Nvidia's implementation of it. The problem AMD has is that it doesn't have the hardware ready, and a software path isn't feasible. Navi is probably going to be a nice graphics card, but for people who want to run games on high graphics settings, Navi is just not going to be an option. Graphics you would consider very nice are going to be Nvidia-only in next year's titles. That is probably going to cause real damage.

>Not playing DOTA
>what are you doing still playing shooters?
>wait for the new BF

computerbase.de/2018-05/mifcom-miniboss-titan-v-test/3/#diagramm-baikal-ray-tracing-radeon-pro-renderer-in-1920-1080

>AYYMD better at compute

TOP KEK

STOP POSTING FAKE NEWS, AYYMDPOORFAGS

It'll be like 5 games with the option to turn it off, like pretty much every other game they've jammed stuff into.

PhysX, HairWorks? They didn't get that many games. They're usually entirely pointless too. PhysX added some cool shit to Borderlands 2, but it didn't make or break the game.

>Titan V vs Vega 64
>$2999 MSRP vs $499 MSRP
>Vega above the 1080 Ti and at half the Titan V

AYYMD BTFO LMAO

> AMD cards were better than Nvidia's at raytracing for a long time anyway (LuxRays benchmarks etc.
What if...
it's another GPU die on the same card just for RT? That would make it cheaper than one huge NV die.

>proprietary shit
AMD will be late, but its alternative will most likely be an open standard.

RTX is being added to Unity and Unreal. It's going to be as simple as dragging some new entities into your scene and flipping a few switches. If you want to use full PhysX you literally have to build your entire game around it. They're not even comparable implementation experiences.

There is a difference between tracing rays against a single on-screen triangle and counting how many intersect it,

versus basically real-time raytracing, where you need to render the entire scene along with everything in it, which not even supercomputers can do at the moment.

Very accurate

how about a separate PCIe device exclusively for raytracing
so you can get a cheapo GPU + Advanced Ray Tracer 4000 and get moar fps than a 2070, for example

All of Nvidia's consumer cards have double precision cut down to 1/32 rate, unlike the Quadros, which run it at full rate.

The Unity engine has already supported AMD's version of ray tracing for like 6 months now.

current games don't fucking have ray tracing, and i won't be buying this for nvidia-exclusive shitgames

>raytracing hardware

It's FP32 matrix math, aka retarded tensor cores artificially limited to INT. There is nothing NEW about this.

they mean in development

Because, just like Nvidia PhysX, it won't be needed. But enjoy half your fucking chip being used for this gimmick.

That's because Nvidia is only shooting rays at a single on-screen triangle, moron.
Aka they can do thousands of rays, not even millions, once actual intersections are involved.

I haven't looked at game engines in ages, but: Maybe Ageia will make it. :)

Given their multi-GPU track record, I wouldn't bank on it.

Reminder that Nvidia quotes 10 giga rays per second for the 2080 Ti. Even that is a small ray budget per frame compared to proper offline raytracing, and they only get away with the grainy image it produces thanks to their AI denoise filter. It's a very smart trick.

In principle you have to test each triangle (including off-screen triangles, for secondary rays) to see if it intersects the ray.
With 10 million triangles, if you test every triangle, you could be down to a thousand rays per second.
Acceleration structures like a BVH bring down the number of intersection tests, but so far we have seen no information on how fast the RT cores can traverse a BVH or how many triangle tests per ray typically remain.
Acceleration structures help reduce the number of ray/triangle tests, but the drawback is that they have to be rebuilt when the geometry changes, e.g. for skinning, dynamic hair/vegetation etc.
We have no information either on the cost of rebuilding the BVHs each frame.
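
To make the "test each triangle" part concrete, here's the brute-force path as a sketch (standard Möller-Trumbore intersection, nothing Turing-specific). The loop over every triangle in closestHit() is exactly the O(N) work per ray that a BVH exists to avoid:

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                            a.z * b.x - a.x * b.z,
                                            a.x * b.y - a.y * b.x}; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Triangle { Vec3 v0, v1, v2; };

// Möller-Trumbore ray/triangle intersection: returns hit distance t, or -1 on miss.
static float intersect(Vec3 orig, Vec3 dir, const Triangle& tri) {
    const float EPS = 1e-7f;
    Vec3 e1 = sub(tri.v1, tri.v0);
    Vec3 e2 = sub(tri.v2, tri.v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return -1.0f;        // ray parallel to triangle plane
    float inv = 1.0f / det;
    Vec3 tvec = sub(orig, tri.v0);
    float u = dot(tvec, p) * inv;
    if (u < 0.0f || u > 1.0f) return -1.0f;
    Vec3 q = cross(tvec, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    float t = dot(e2, q) * inv;
    return (t > EPS) ? t : -1.0f;
}

// Brute force: every ray tests every triangle, so 10M triangles means ~10M tests per ray.
// A BVH cuts the tests per ray down to roughly O(log N), which is what the RT cores
// are supposed to traverse in hardware.
static float closestHit(Vec3 orig, Vec3 dir, const std::vector<Triangle>& scene) {
    float best = -1.0f;
    for (const Triangle& tri : scene) {
        float t = intersect(orig, dir, tri);
        if (t > 0.0f && (best < 0.0f || t < best)) best = t;
    }
    return best;
}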

So it's just a cheap ASIC they tacked on and are charging exorbitant prices for

ProRender is so dogshit. It's amazing to me that AMD decided to devote resources to it while simultaneously abandoning any relationship they had with production-ready vendors like OTOY.
I get mad just thinking about it.

Raytracing has to be on the GPU, next to the shaders, for low latency.

Yep, that's why it's a scam and overpriced as fuck: retarded die size and an ancient optimized 16nm process (12nm my ass).

I'm willing to bet there's huge CPU overhead too, so it needs an insanely expensive i9 or Threadripper just to run it, let alone at 60 fps (with drops, stutters, fireflies and other visual artifacts, as seen in the demos).

Exactly. I understand tacked-on bespoke cores for media encoding/decoding, for example, but cores JUST for raytracing? Now that's a pretty big gamble.

That's NOT why it's overpriced, user.
Nvidia literally inflated the price based purely on PR marketing.

Remember the PICA PICA demo that Nvidia said was running on a DGX (a 4x Titan V, $150k system)?
It was literally the main point of leatherman's presentation that ONE RTX card can surpass that DGX.
Turns out the creator of the demo said he was running it on a single GPU
twitter.com/ZigguratVertigo/status/977225599805669376
and that Nvidia insisted on using a DGX.

Nobody is gonna buy it anyway; the launch titles will flop and have horrible technical and performance problems.
For non-gaming stuff it's obviously great, but it's no longer accessible to the majority of gamers.
Most people will just keep their Pascals or get a GTX 2060 or 2050 Ti 4GB with GTX 1080 level performance.

I really doubt the 2060 will have 40% more performance,
considering people are expecting an 8 to 10% difference between generations.

The demo's creator says:
>NVIDIA Turing improves Project PICA PICA's raytracing by up to 6X compared to Volta. Powered by @SEED's Halcyon R&D engine and measured on prototype hardware, early drivers and unmodified code. Slides will be online soon. #GraphicsReinvented

twitter.com/ZigguratVertigo/status/1031681286551871493

I dunno, it doesn't even look that good. Graphics don't seem to have meaningfully improved in 5 years.