Real-time ray tracing advancements

When Volta was announced and demos were being shown, we had some nice wholesome threads sharing real-time ray tracing tech.
Let's do that again.

I know there are some nice ones running in WebGL on Shadertoy as well.

youtube.com/watch?v=LyH4yBm6Z9g

Attached: 2007 ray tracing.jpg (1998x1120, 220K)

What happened to unlimited detail engine?

impossible to integrate in game engine

they seem to have made some programmer-graphics quality game demos for their 'hologram room' with animated models and rudimentary lighting
youtube.com/watch?v=4uYkbXlgUCw
youtube.com/watch?v=gAKmqv4smaU

looks like the guy himself edits these videos together pretty poorly compared to typical, professionally produced snakeoilware promotional material, so I assume it has to be real

Attached: 1534965420837.jpg (1080x1080, 54K)

They started that shit 15 years ago and current hardware still can't run it at a decent frame rate? I thought screen resolution was the only bottleneck. It would really change VR, but I'm starting to think it's a giant hoax.

>We describe a machine learning technique for reconstructing image sequences rendered using Monte Carlo methods. Our primary focus is on reconstruction of global illumination with extremely low sampling budgets at interactive rates. Motivated by recent advances in image restoration with deep convolutional networks, we propose a variant of these networks better suited to the class of noise present in Monte Carlo rendering. We allow for much larger pixel neighborhoods to be taken into account, while also improving execution speed by an order of magnitude. Our primary contribution is the addition of recurrent connections to the network in order to drastically improve temporal stability for sequences of sparsely sampled input images. Our method also has the desirable property of automatically modeling relationships based on auxiliary per-pixel input channels, such as depth and normals. We show significantly higher quality results compared to existing methods that run at comparable speeds, and furthermore argue a clear path for making our method run at realtime rates in the near future.

research.nvidia.com/publication/interactive-reconstruction-monte-carlo-image-sequences-using-recurrent-denoising

disneyresearch.com/publication/deep-learning-denoising/

raytracey.blogspot.com/2017/12/freedom-of-noise-nvidia-releases-optix.html

It's just a point cloud renderer.
The problem for it would seem to be VRAM and not processing power.

To get high quality graphics with it, essentially what you're doing is a voxelized deferred renderer where you're doing processing in voxel space instead of screen space.

Well screen space at 1920x1080 is only about 2 million pixels per frame buffer.
To have enough voxels that they only appear as at most 1 pixel when you're up close to them, in an entire scene you're talking uh... billions.
a 1920x1080 frame buffer (which you'll have a dozen or more of in deferred rendering) is about 66Mb, afaik. Or 4x larger for 10bit HDR.

For 15 billion voxels on screen, and say a dozen frame buffers each, you're talking.. I don't actually know. 480Mb each? But you also have world space data for each voxel. But either way, you're talking a nearly 10x increase in VRAM, minus some optimizations that it could open up.
The highest VRAM we have on a card currently is 32GB. Though with HBCC, you can roughly double that afaik (the latest CoD allocates 14GB+ on the 8GB cards).
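If anyone wants to sanity check the buffer math, here's a quick back-of-the-envelope in C++. The target formats and the "dozen G-buffer targets" count are my own assumptions, not anything official:

// buffer_math.cpp -- back-of-the-envelope render target sizes (assumed formats, not official numbers)
#include <cstdio>

int main() {
    const double pixels = 1920.0 * 1080.0;        // ~2.07 million pixels

    const double rgba8_bytes   = pixels * 4.0;    // 4 bytes/pixel: one 8-bit RGBA target
    const double rgba16f_bytes = pixels * 8.0;    // 8 bytes/pixel: one half-float HDR target

    std::printf("RGBA8   target: %.1f MB (%.0f megabits)\n", rgba8_bytes / 1e6, rgba8_bytes * 8.0 / 1e6);
    std::printf("RGBA16F target: %.1f MB\n", rgba16f_bytes / 1e6);
    std::printf("12 RGBA8 targets (a fat deferred G-buffer): %.1f MB\n", 12.0 * rgba8_bytes / 1e6);
    return 0;
}
// prints roughly: 8.3 MB (66 megabits), 16.6 MB, and 99.5 MB for the dozen targets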

Currently, from what I've seen of it, this "issue" is gotten around by... basically not having lighting effects, so all these various voxels are instanced to use less VRAM. If you want better lighting, you need to not instance them.

The bigger issue is just animations, how inefficient it is to have pixel-level detail for something that's 1/100th of a pixel, and how you blend them together far away.
There's also another problem that YouTube's video compression doesn't really capture, and that is that the various voxels sort of z-fight.
You would need some sort of special fixed function renderer to blend the nearby voxels together at distance, afaik. So a dedicated ASIC.
Not worth it for all the downsides. Look how good polygons look nowadays.

Over 2 years prior to that without tensor cores.
youtube.com/watch?v=FbGm66DCWok
This work was largely done by the same guy who did ATI's real time ray tracing demo on the 2x HD4870 GPUs, btw.
He recently released this: youtube.com/watch?v=6xE3J56pabk which is the same concept but at movie quality, though not quite real time.

their infinity engine is, as pointed out, not possible to make a game out of

However, it did get used in industry due to how they managed to get the point cloud working fast and dense.

their vr stuff is actually interesting.
My guess is it's doing something similar to
youtube.com/watch?v=Jd3-eiid-Uw
but with multiple projectors so your hand does not noticeably obscure something you are trying to touch.

This shit works amazingly well for making you believe something is there.

got a link to the ATI real time ray tracing? would love to see it, as I only really know the Brigade stuff and that was on Titans, and am not aware of other earlier rtrt efforts

youtube.com/watch?v=V0TyhQiY2pQ
Well it's a bit much to call it real time ray tracing.

As I understand it...:
What they did was ray trace the lighting for every frame and store it in compressed voxels.
For each frame of the animation, they have voxelized lighting data. TeraScale's ability to compress those voxels is what allowed them to store all of this.
Each frame, it loads the new voxels for that frame and lights from those voxels.

So really it's more like VXGI, except instead of ray cones in real time it was actual ray tracing, precomputed, to create a voxel blob for each frame.

The idea was that they could have ray traced quality lighting in interactive movies. You'd have a lot pre-scripted, so lighting could be pre-computed. But you could still watch it from different camera angles. I don't think you could actually interact, since changing where an object is would make it not match up with the precomputed lighting.

But this is very much a precursor to VXGI.
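For anyone wondering what "lighting from those voxels" means at runtime, here's a tiny sketch of trilinearly sampling a baked irradiance voxel grid at a world position. The grid layout and names are made up for illustration, not what the demo actually did:

// voxel_lookup.cpp -- hypothetical sketch: sample a baked irradiance voxel grid (C++17)
#include <vector>
#include <array>
#include <algorithm>
#include <cmath>

struct VoxelGrid {
    int nx, ny, nz;                        // grid resolution
    float cellSize;                        // world-space size of one voxel
    std::vector<std::array<float, 3>> rgb; // baked irradiance per voxel (one frame of the animation)

    std::array<float, 3> at(int x, int y, int z) const {
        x = std::clamp(x, 0, nx - 1); y = std::clamp(y, 0, ny - 1); z = std::clamp(z, 0, nz - 1);
        return rgb[(z * ny + y) * nx + x];
    }

    // Trilinear lookup: this is the "light from the voxels" step done per shaded point.
    std::array<float, 3> sample(float wx, float wy, float wz) const {
        float gx = wx / cellSize, gy = wy / cellSize, gz = wz / cellSize;
        int x0 = (int)std::floor(gx), y0 = (int)std::floor(gy), z0 = (int)std::floor(gz);
        float fx = gx - x0, fy = gy - y0, fz = gz - z0;
        std::array<float, 3> out{0.0f, 0.0f, 0.0f};
        for (int dz = 0; dz <= 1; ++dz)
            for (int dy = 0; dy <= 1; ++dy)
                for (int dx = 0; dx <= 1; ++dx) {
                    float w = (dx ? fx : 1 - fx) * (dy ? fy : 1 - fy) * (dz ? fz : 1 - fz);
                    std::array<float, 3> v = at(x0 + dx, y0 + dy, z0 + dz);
                    for (int c = 0; c < 3; ++c) out[c] += w * v[c];
                }
        return out;
    }
};

int main() {
    std::array<float, 3> constant{1.0f, 0.5f, 0.25f};
    VoxelGrid g{2, 2, 2, 1.0f, std::vector<std::array<float, 3>>(8, constant)};
    std::array<float, 3> c = g.sample(0.5f, 0.5f, 0.5f); // inside a constant grid -> the constant color
    return c[0] > 0.0f ? 0 : 1;
}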

Very interesting. I did some reading up on ray tracing and there's an incredible amount of hacks/speedups they do to actually mimic the lighting effects and interactions with objects as they should be. There's a lot of statistical pre-baking that happens at the light/surface interaction level, especially as it relates to various materials. Ultimately I am reading about lots and lots of convincingly visible hacks that are quite necessary given the limitations of memory and compute. With this being said, are you aware of, and can you give any pointers to, any cutting edge research that seeks to fundamentally rewrite the way ray tracing is done from the ground up? That is where my interests are ultimately going to lead me, I think, because I feel there is a lot of wagon-circling going on, and now that is making its way into the hardware stack.

The idea of totally reformulating the approach is intriguing to me.

Well, it's the same with shaders.
Instead of having really tiny individual polygons for microscopic surface imperfections, you have material properties.
Ray tracing is the same. You have material properties that say what the surface is like at a microscopic, chemical, and physical level, so the rays know how to react beyond simply the major angle of the surface.
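As a concrete toy example of "material properties tell the ray how to react": Schlick's approximation of Fresnel reflectance derives how much light reflects specularly at a given angle from nothing but the material's index of refraction. The Material fields here are illustrative assumptions, not any particular engine's model:

// material_sketch.cpp -- toy material description plus Schlick's Fresnel approximation
#include <cstdio>
#include <cmath>
#include <algorithm>

struct Material {
    float baseColor[3]; // albedo
    float roughness;    // 0 = mirror-smooth microsurface, 1 = very rough
    float metalness;    // 0 = dielectric, 1 = metal
    float ior;          // index of refraction for dielectrics
};

// How much light reflects specularly at a given view angle, from the IOR alone.
float fresnelSchlick(float cosTheta, float ior) {
    float f0 = (ior - 1.0f) / (ior + 1.0f);
    f0 *= f0;
    return f0 + (1.0f - f0) * std::pow(1.0f - std::clamp(cosTheta, 0.0f, 1.0f), 5.0f);
}

int main() {
    Material plastic{{0.8f, 0.1f, 0.1f}, 0.4f, 0.0f, 1.5f};
    const float angles[] = {1.0f, 0.5f, 0.1f};
    // Grazing angles reflect far more than head-on ones -- behaviour the renderer
    // gets "for free" from a couple of material parameters instead of micro-geometry.
    for (float c : angles)
        std::printf("cos(theta)=%.1f  Fresnel=%.3f\n", c, fresnelSchlick(c, plastic.ior));
    return 0;
}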

What the tensor cores are doing isn't really a hack. "Denoising" is a fancy sort of smarter blurring.
Ultimately though... you still need A LOT of rays each frame, not just each second.
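That "smarter blurring" is roughly a cross-bilateral filter: average the neighbours, but weight down the ones whose guide value (depth, normal, albedo) is too different, so edges survive the blur. Toy 1D sketch of the idea only; the actual trained denoisers are obviously far more involved:

// bilateral_1d.cpp -- toy "smarter blur": average neighbours, but only the ones whose
// guide value (think depth or normal) is close, so hard edges are preserved.
#include <cstdio>
#include <cmath>
#include <vector>
#include <algorithm>

std::vector<float> bilateral(const std::vector<float>& noisy, const std::vector<float>& guide,
                             int radius, float sigmaGuide) {
    std::vector<float> out(noisy.size());
    for (int i = 0; i < (int)noisy.size(); ++i) {
        float sum = 0.0f, wsum = 0.0f;
        int lo = std::max(0, i - radius), hi = std::min((int)noisy.size() - 1, i + radius);
        for (int j = lo; j <= hi; ++j) {
            float d = guide[i] - guide[j];
            float w = std::exp(-(d * d) / (2.0f * sigmaGuide * sigmaGuide)); // edge-stopping weight
            sum += w * noisy[j];
            wsum += w;
        }
        out[i] = sum / wsum;
    }
    return out;
}

int main() {
    // Noisy shading with a hard edge in the middle; the guide channel knows where the edge is,
    // so the left half and right half get smoothed separately instead of bleeding together.
    std::vector<float> noisy = {0.9f, 1.1f, 0.8f, 1.2f, 0.1f, -0.1f, 0.2f, 0.0f};
    std::vector<float> guide = {1, 1, 1, 1, 5, 5, 5, 5};
    for (float v : bilateral(noisy, guide, 3, 0.5f)) std::printf("%.2f ", v);
    std::printf("\n");
    return 0;
}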

Are you aware of any prominent research papers or algorithms that try to fundamentally reinvent ray tracing from the ground up?

Er no not really, other than voxel cones instead of rays.
DICE developers have some good papers on that.

en.wikipedia.org/wiki/Cone_tracing
Concept of beam/cone tracing itself is not exactly new.

But personally I'm much more on board with voxel cone tracing for the next 5-10 years. Though afaik, it can't do highly detailed mirror reflections.
It just makes more sense to me to do progressively smaller and more detailed cones than rays.
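For reference, the core loop of a voxel cone trace looks basically like this. A minimal sketch that assumes a prefiltered, mipmapped voxel grid already exists; sampleVoxelsAtSize is a placeholder stub, not a real API:

// cone_trace_sketch.cpp -- hypothetical skeleton of one voxel cone trace (not any engine's real code)
#include <cstdio>
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };
struct Sample { float r, g, b, a; };  // radiance + occlusion, as stored in the prefiltered voxel mips

// Stub standing in for a mip-mapped 3D texture fetch: returns a thin uniform "fog" so this
// compiles and runs; a real version would sample the voxelized scene at the mip level whose
// voxel footprint matches 'diameter'.
Sample sampleVoxelsAtSize(const Vec3&, float diameter) {
    float a = std::min(1.0f, 0.02f * diameter);
    return {0.6f, 0.5f, 0.4f, a};
}

// March one cone: the sampling footprint grows with distance, so far-away lighting is read
// from coarser, pre-blurred mips instead of shooting thousands of individual rays.
Sample traceCone(Vec3 origin, Vec3 dir, float apertureTan, float maxDist, float voxelSize) {
    Sample acc{0, 0, 0, 0};
    float t = voxelSize;                                  // start one voxel out to avoid self-hits
    while (t < maxDist && acc.a < 0.99f) {
        float diameter = std::max(voxelSize, 2.0f * apertureTan * t);
        Vec3 p{origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
        Sample s = sampleVoxelsAtSize(p, diameter);
        float w = 1.0f - acc.a;                           // front-to-back alpha compositing
        acc.r += w * s.a * s.r;
        acc.g += w * s.a * s.g;
        acc.b += w * s.a * s.b;
        acc.a += w * s.a;
        t += diameter * 0.5f;                             // step grows with the cone width
    }
    return acc;
}

int main() {
    Sample s = traceCone({0, 0, 0}, {0, 0, 1}, std::tan(0.3f), 100.0f, 1.0f);
    std::printf("accumulated rgb = (%.2f %.2f %.2f), occlusion = %.2f\n", s.r, s.g, s.b, s.a);
    return 0;
}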

Thank you user for all of your helpful info.

>fundamentally reinvent ray tracing from the ground up?
what the fuck does that even mean? how can you reinvent something that's as clearly defined as ray tracing?

Yeah, it's a weird question. But I mostly got what user meant I think.

They had 4 Voltas to do it vs a single Turing chip...

?

>cone tracing
is the increase in complexity even worth it? doesn't sound like it would be any faster than ray tracing

66 Megabits or Megabytes? and 256x2x2 colors = 66x4 == 264 ?

>shills
>influencer
huh, really makes you think

Attached: 1531002237462.png (973x628, 304K)

A $200 PS4 can play GoW at 4K 60fps.
A $2000 VGA can only play games at 1080p and sub-30fps.
What's the point

I remember this, it was running with 4x ATI 3870s to render.

We need more optimized console ports without Nvidia or Intel logo sponsored gaemz.

it's faster. It works on today's hardware. It works on a 7970, really, if you don't have much else going on to use up GPU time.
But yeah, it's immensely complicated programming compared to ray tracing.

Not really what this thread is for? Pretty off topic.
But yeah uh... this is going to increase, not decrease it.
Real ray tracing would decrease it.
Hybrid rendering is a massive increase in development time.

I was talking per channel.

GoW? Isn't that Xbox? And it's the One X that does it at 4K.
But yes, GoW looks and ran relatively nicer than these demos did.

That doesn't change the fact that ray tracing is developing and improving over the years. You can't say that raster rendering looks better than good movie CGI, which is ray traced. The goal is to be real time ray traced one day.
Or who knows. Maybe rasterization will get so good that real time ray tracing will be pointless. GoW did look so nice that it seems we're near the level of, say, 10 year old ray tracing with rasterization.
Iron Man, Transformers, and The Dark Knight are 10-11 year old movies. I'd say today's rasterization is close to those.

Some console ports strangely seem to have their performance and graphics gimped when going to PC nowadays, despite consoles basically having Polaris GPUs.

rasterization only gets good on the back of complexity, which creates technical debt; today's rendering pipelines are way too complicated as they are
and some things you just can't do well in raster like proper transparency

if it's faster then why doesn't nVidia have cone tracing cores instead of ray tracing cores?

Because it's not real 4K, it's lower res upscaled to 4k and it's not 60fps, it's 30. And it is using graphical settings similar to medium or low compared to the PC version.
Real talk, do you own any consoles? I've played on my friend's PS4; the input lag, frame pacing, frame rate and graphics are terrible. You get what you pay for with a console.

There's no defending Njewdia's new GPUs, though.

presumably their new RTU ASIC only works for rays and not cones?
Or more likely, simply because their new thing is impossible on current hardware and they want to segment the market.

Hybrid rendering is even more complicated in its current state here.
True raytracing, on the other hand, is rather simple.

The Xbox One X does native 4K and 60fps, what are you talking about?

Generally the 60fps "4K" games on the Xbox One X are something like 1620-1780p upscaled, or they are something like 1920x2160 which gets "reconstructed" into 3840x2160, and so on.
There are no true 4K 60fps games on the Xbox One X that I'm aware of. At least not really high end graphics ones. If there are, point them out.

youtube.com/watch?v=cuCwyIBOapY I wonder how this scales to larger scenes and more lights.

That's definitely been one of the cruxes of rasterization. You get something working with one material, one light, but it doesn't always scale.
You get volumetric fog working in a certain condition, but it doesn't work everywhere.

Attached: firefox_2018-08-24_05-33-32.png (1264x934, 654K)

Unironically kill yourself if you care about this.

>Buy our brand new expensive CPU so that 10% of the screen can get 5% more realistic in your video games

Fuck you. Nobody needs better shadows, nobody needs better lighting, there's nothing to gain.

*GPU

Off topic.
The topic is not RTX. It's just ray tracing in general.
>Fuck you. Nobody needs better shadows, nobody needs better lighting, there's nothing to gain.
Why are you on a tech board when you're a Luddite?

Mora 2011, "Naive Ray Tracing: A Divide-and-Conquer Approach"

>exploiting lighting to cheat
huh
don't aimbots/ESPs literally use rays to check for positions?

Attached: confused m14.png (1060x871, 82K)

Reminder that OP is a retarded twitter russian amdrone attention whore who's being BTFO in /pcbg/

This doesn't look much better than the 2011 demo. What the hell.
youtu.be/9yy18s-FHWw

so something like a lighting bake.

You can do this in blender, or at least something like it, where all the lighting is baked and you can walk around a scene real time.

A version of this is what I thought nvidia was doing: as they can get rays relatively fast and dense enough to push it through their AI denoise to shit out something that looks halfway decent, or at least better than non rayed scenes; take a shadow or reflection, bake it, and wait for the next one, decoupling ray traced components from non ray traced; shadows, you could just have them go low resolution and get higher res as more detail comes in...

but nope, though they showed off full raytraced multiple times, it's all single room low fps or reflections only, some shadows were seen but honestly, I'm not holding my breath.

really want to see someone with a 2080 go through some of these demos to show off what they are doing.

I thought the AI denoise was more along the lines of: it sees a pixel over here, it knows that the pixel 4 spots away is that shade, and it has the info for the texture, so it interpolates what the shade would be. That's why highlights tend to come in after a short time.

ray tracing is simple, the code to do it fits legibly on the back of a business card, the problem is how much processing power it needs, so how we deal with ray tracing is finding new ways to constrict it.

my question is: have there been any breakthroughs in the constriction?
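For anyone who hasn't seen it, the business card claim is only a mild exaggeration. Here's a bare-bones tracer: one sphere, one hard-coded light, one ray per pixel, no acceleration structure. A toy to show the core loop, nothing like a production renderer:

// tiny_tracer.cpp -- the whole "core" of a ray tracer: shoot a ray per pixel, intersect, shade.
// Everything hard-coded; build with: g++ -O2 tiny_tracer.cpp
#include <cstdio>
#include <cmath>
#include <algorithm>

struct V { float x, y, z; };
V operator+(V a, V b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
V operator-(V a, V b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
V operator*(V a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(V a, V b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
V norm(V a) { return a * (1.0f / std::sqrt(dot(a, a))); }

// Ray/sphere intersection: solve |o + t*d - c|^2 = r^2 for the nearest t > 0.
bool hitSphere(V o, V d, V c, float r, float& t) {
    V oc = o - c;
    float b = dot(oc, d), disc = b * b - (dot(oc, oc) - r * r);
    if (disc < 0) return false;
    t = -b - std::sqrt(disc);
    return t > 1e-3f;
}

int main() {
    const int W = 256, H = 256;
    const V camera{0, 0, -3}, center{0, 0, 0}, lightDir = norm({1, 1, -1});
    std::printf("P3\n%d %d\n255\n", W, H);                // greyscale PPM image to stdout
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            // One primary ray through each pixel of a simple pinhole camera.
            V dir = norm({(x - W / 2.0f) / W, -(y - H / 2.0f) / H, 1.0f});
            float t, shade = 0.1f;                        // dim background
            if (hitSphere(camera, dir, center, 1.0f, t)) {
                V p = camera + dir * t;
                V n = norm(p - center);
                shade = 0.1f + 0.9f * std::max(0.0f, dot(n, lightDir)); // Lambert shading
            }
            int g = (int)(255 * std::min(shade, 1.0f));
            std::printf("%d %d %d ", g, g, g);
        }
    return 0;
}

Everything hard comes after this: many bounces per pixel, millions of triangles, materials, and noise. The BVHs, importance sampling and denoisers people keep bringing up are exactly the "constraining" part.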

Battlefield V RTX 2080 Ray Tracing

>youtube.com/watch?v=VK7lL3E2LVc

>impossible to integrate in game engine

Unless you are Ken Silverman

youtube.com/watch?v=uCtgtF52nAQ

A lighting bake, except it was baked for every frame, and stored in voxels instead of a texture map with the voxel data interpreted as lighting and shadows in real time.

The missing component was actually generating those voxels from ray tracing in real time like VXGI and such do.

BFV Actually looks rather nice. Seems higher detail than the RTX launch demo.
But the framerate... "it's at 60fps". That's not high enough.

The pretending that planar reflections don't exist is triggering the fuck out of me, though.

Digital Foundry actually accidentally broke NDA by saying none of these games reached close to 144fps with RTX on, even at 1080p apparently. They said it's not something you'd really want to enable on such a monitor without Gsync, as much as some of Jow Forums will shill and say that microstutters and screen tearing is the true and proper $1200 GPU experience.

What do you think it means?
They don't mimic how light actually functions in the real world using 'ray tracing'. It's just a hacky-ass algorithm that mimics the outcome. As such, I'm wondering if anyone has come up with a new hacky algorithm that is just as effective but more efficient to compute...

Get the question now?

> ray tracing is simple, the code to do it fits legibly on the back of a business card, the problem is how much processing power it needs, so how we deal with ray tracing is finding new ways to constrict it.

You are so full of shit. You've obviously never actually looked at source code for a raytracing program. You need to stop posting, you pretentious retard.

Attached: ray-tracing-n.jpg (720x540, 62K)

just 2 examples.

again, ray tracing itself is relatively simple; the difficulty comes in the way you constrain it, as basic ray tracers take fuckloads of computational power compared to ones that constrain it.

Attached: render.png (960x540, 555K)

You've found a major issue... I did some research last night and found another. Something wasn't making sense to me about ray tracing and how it marries with the pipeline so I looked into it a little further.
youtu.be/8H0czanVspM?t=35m32s
comes from this presentation :
youtu.be/X2firbdhkvg?t=29m33s

So, listen to Jensen when he refers to this "hybrid" pipeline. He says... :
> To achieve this, we're using every aspect of the GPU "effective ops"
This is understandable because there's no way in hell they could compute all of these layers of lighting with tracing cores within the time of one frame. Even at 60fps that's 16ms. Imagine 90fps: 11ms in which they have to compute all of this.

At :
youtu.be/X2firbdhkvg?t=47m34s
He mentions super resolution....
Well yes, this is where you break down the marketing. They are not doing ray tracing at full resolution... They are doing it at way less than that and then using tensor cores to "upsample" the lighting effects.

So, if you looked at the actual lighting output of the raytracing cores once combined with the raster render it would look like a low resolution point cloud... Then the DLSS smooths it out.

Performance :
youtu.be/8H0czanVspM?t=26m56s
Note that the Starwars trailer takes 45ms per frame to render w/ ray tracing.
That's way below 60fps. That's 22fps. So, obviously the amount of time ray tracing takes per frame is variable based on how high res and detailed it is. So, there's no way in hell you're getting high detail ray tracing in an actual game. You're getting super low res ray trace (almost a teaser) that is getting smoothed out and upscaled :
youtu.be/X2firbdhkvg?t=37m38s
> Infallible may you comment on this?
In this video he details what's behind what you end up seeing :
> Once we generate the half resolution image w/ the hit results we use it to (((synthesize))) the full resolution image

Attached: gaytrace.png (3348x1760, 2.95M)

Thank you very much user

Attached: 1534066304076.png (451x619, 392K)

Yes, this was my primary find from my research found here :
Which was that the image on the right is what the actual ray trace cores produce : A noisy point cloud looking lighting result. Then they use the tensor cores and probably cuda cores with a statistical process to guesstimate a smoothed out clean image. This just clicked to me when I was thinking about how in the world they could compute in real time (with high fps) layers of lighting effects. They don't..They produce a super rough estimate and then guesstimate using a statistical process what the other neighboring pixels look like. This is why they introduced DLSS because the ray trace hybrid pipeline results probably look like ass compared to when they're disabled. Nothing in the current GPU pipeline produces shitty results like the ray trace output because they have solid textures in the rasterizer correct?
> as they can get rays relatively fast and dense enough to push it through their AI denoise to shit out something that looks halfway decent
Wtf... this just occurred to me. Is this standard practice, or do the pros for movies do much higher sampling per pixel / generate way more rays so the image isn't so noisy? So there's essentially a quality slider for ray tracing, and the higher quality you set it to, the lower your FPS.

Which brings me to my final comment: this all has to be done within a variable FPS? What of these dweebs with 144Hz monitors pushing 144fps? That's 7ms per frame. Meanwhile, the pic shows it takes 45ms for the Star Wars film. So essentially, gay trace will completely destroy your FPS and performance? It's the weakest link and slowest thing in the chain. At high FPS, they essentially have to crank the quality of the ray trace output way the fuck down so the results are ready in time.

Attached: timing.png (1752x986, 1.04M)

Yeah, at 10 gigarays per second, that's 10 billion rays per second.
That's something like 83 rays per frame at 60fps.
At that many rays per frame, it should be able to pretty much do real time raytracing without the need for hybrid rasterization.

Also look at Vega 64 with its 400 megarays/s. That's 25 times less rays per second yet look at the nearly real time raytracing results it achieves with Pro Render.

There is something very fishy about how Nvidia calculates its claimed ray-ops/s. I think they must be counting the denoising into that and saying "this is how many rays you'd need per second to achieve that quality, even though we don't use nearly that many rays".

>They are doing it at way less than that and then using tensor cores to "upsample" the lighting effects.
That was known. The denoising and hybrid rasterization is the big thing here.
And yeah, the UE4 demo was at 22fps, and from what I could tell the 1 turing card rendering it looked worse

> Once we generate the half resolution image w/ the hit results we use it to (((synthesize))) the full resolution image
I've heard people saying that DLSS is 1080p getting upscaled to 4k. But they said it was 4k on the presentation. I hope they weren't just lying.
If they say 4k DLSS it should mean a 4k render done with DLSS AA to make it look even nicer and fix the shimmies.
If they were comparing 1080p upscaled to 4K with DLSS to 4K with 4x TXAA, that's absolutely fucked. As shady as Nvidia are being, I can't believe that's what they mean.

Also something worth noting is that the Turing running the UE4 demo has far more artifacts.
I could only spot 1 artifact with the 4x Titan Vs running it.
Like you get blinking red lights on the storm trooper's chest piece because there just aren't enough rays to capture the red light reflection. The red lights which are being reflected from the elevator are NOT blinking.

>you could just have them go low resolution and get higher res as more detail comes in...
>but nope, though they showed off full raytraced multiple times, its all single room low fps or reflections only, some shadows were seen but honestly, i'm not holding breath.

Yep, this is the fundamental issue with this shit.
Essentially, ray tracing produces higher quality results the longer the camera [frame] is still and nothing is moving, because the rays have a chance to accumulate into a non-noisy output result. However, the minute you move, a new frame has to be rendered, and at even 60fps this will result in a very low res ray tracing result, as it doesn't have a lot of time to produce the ray trace results at even a decent resolution. At 144fps (the 144Hz $1k monitor you ponied up for), fucking forget about it. The amount of rays they can resolve in 7ms and bake into the buffer would look like ass. So ray tracing output quality is probably fluctuating all over the place in any decent 60fps render. The ray trace result has to be ready just in time for the frame render. So how the hell are they syncing pipelines with a fluctuating, non-deterministic FPS and something as complicated as ray tracing? This is why they didn't produce benchmark results. The benchmarks will be all over the fucking place.
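That accumulate-while-still, reset-when-you-move behaviour is basically a per-pixel exponential moving average that gets thrown away on disocclusion. Rough sketch of the idea, a simplification of my own rather than DXR's or any engine's actual code:

// temporal_accum.cpp -- toy model of temporal accumulation for 1-sample-per-pixel lighting
#include <cstdio>
#include <cstdlib>
#include <algorithm>

struct PixelHistory {
    float value  = 0.0f; // lighting accumulated over previous frames
    int   frames = 0;    // how long this history has been valid
};

// Blend a new noisy sample into the history. If reprojection failed (camera or object moved
// and the pixel was disoccluded), the history is discarded and quality falls back to raw
// noise until it re-converges over the following frames.
float accumulate(PixelHistory& h, float noisySample, bool reprojectionValid) {
    if (!reprojectionValid) h.frames = 0;
    h.frames = std::min(h.frames + 1, 32);   // cap the history so it can still adapt
    float alpha = 1.0f / h.frames;           // weight of the newest sample
    h.value = (1.0f - alpha) * h.value + alpha * noisySample;
    return h.value;
}

int main() {
    PixelHistory h;
    std::srand(7);
    auto noisy = [] { return 0.5f + 0.4f * (std::rand() / (float)RAND_MAX - 0.5f); };
    for (int f = 0; f < 10; ++f)             // camera still: the estimate converges toward 0.5
        std::printf("still frame %d: %.3f\n", f, accumulate(h, noisy(), true));
    std::printf("after movement: %.3f  <- history reset, back to one noisy sample\n",
                accumulate(h, noisy(), false));
    return 0;
}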

GODDAYMIT NVIDIA, THIS SHIT IS A GIMMICK

> youtu.be/X2firbdhkvg?t=44m40s
Take a look at the end of this presentation on directx ray tracing @ 44:40
> With ray tracing, we run into tradeoffs :
> do you want noise
> do you want ghosting
> do you want high performance
> what's the right tradeoff
> In our demo, not much is moving so our temporal projections work well and we were able to reduce ghosting... Going forward we're trying more complex movement and we'll have to make different tradeoffs there:
> SOMETHINGS MOVE BACK TO RASTERIZER

HOLY SHIT MAN... So, this is a complicated fucking mess in a real world gaming solution. THIS is a glorified hack

Attached: 1529873279372.jpg (249x203, 8K)

I forgot the link
youtu.be/wofE5Bu009g?t=11s
Here is what, I'm assuming, Vega FE looks like when it's drawing rays. Though it could be a weaker card.

Since Nvidia claims to be 25x faster, it should be able to jizz out rays so fast that you don't even need the hybrid rasterization.
Look how fast 400 megarays(I'm assuming) gets to a reasonable level of detail.
With 10 gigarays they should be able to simply denoise (like in the OctaneRenderer example) and completely skip rasterization with no ghosting and such.

I remember when Vega was first shown with ProRender, a lot of my artist friends were wowed at the ray tracing previews they could get in almost real time instead of having to use raster previews.

>1 or 2 frames per second.
>30 or 60 frames per second.
I still believe Turing is too immature for raytraced games, but the difference between a preview render and a videogame is insane.

higher quality, more physics effects to take into account.

you can fake quite a lot of what ray tracing does, but when it comes to movies, they have render farms that can crunch the data, effectively taking months of computer time for single frames.

BOOM, there it is.. holy shit.
> Yeah, at 10 gigarays per second, that's 10 billion rays per second. That's something like 83 rays per frame at 60fps.

So this is why leather jacket man stated this weird per second figure. It sounds epic until you break it down to FPS.
> 83 rays per frame at 60fps..
LMFAO @4k resolution.

> At that many rays per frame, it should be able to pretty much do real time raytracing without the need for hybrid rasterization.
Explain yourself. I have a 3840x2160 4k monitor. 8 million pixels. What in the hell are 83 rays going to do for lighting in a frame with that many pixels? Aren't the rays shot from a pixel from a view frame defined by a camera with the res of your monitor? 83 rays per 8 million pixels? Can you explain what's going on here?

> And yeah, the UE4 demo was at 22fps, and from what I could tell the 1 turing card rendering it looked worse
This was my overarching point... , this gaytrace feature tanks the shit out of FPS. Who the fuck would enable this feature if it drops your fps down to 22fps? from say 100fps?

> Storm trooper demo
Here's the thing I also don't get.. Given this amazing new feature, they now have to process things that aren't in the camera view port. Doesn't this shoot processing way the fuck up and tank FPS?

Again, the Microsoft guy in relation to ray tracing directX12 stated :
With ray tracing, we run into tradeoffs :
> do you want noise
> do you want ghosting
> do you want high performance
> what's the right tradeoff
> In our demo, not much is moving so our temporal projections work well and we were able to reduce ghosting... Going forward we're trying more complex movement and we'll have to make different tradeoffs there:
> SOMETHINGS MOVE BACK TO RASTERIZER

So, this is like one big clusterfuck of a beta-level feature now stamped into hardware that drags down performance and quality for meme shadows.

I can't believe this shit right now esp. given the $.

Take the render equation and search for new integration or approximation methods.

jo.dreggn.org/path-tracing-in-production/2018/index.html
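For anyone following along, the render equation being approximated is Kajiya's:

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i

Path tracing, cone tracing, photon mapping and the rest are just different estimators for that integral; "new integration or approximation method" means finding a cheaper way to evaluate it.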

Here's the thing and I'm sure in your profession vs gaming, you understand what I'm saying. This is how professional rendering works. Your goal is to achieve a certain quality per frame and you simply wait for however long it takes to produce that super high quality frame. This is achievable via ray tracing and the acceleration of ray tracing is thus crucial to professional renders.

> ON THE OPPOSITE END OF THE SPECTRUM
Are gamers, who have been convinced the goal is high FPS. In a high FPS environment, you are incredibly constrained on the amount of time you have for a particular frame. In this paradigm, the idea of ray tracing is in direct contradiction with a quality gaming experience. Ray tracing now directly reduces FPS to produce anything of value. THIS IS WHY LEATHER JACKET MAN DIDN'T PRODUCE BENCHMARKS and there are little to no details about these cards. Ray trace cores and tensor cores are all but useless in a high quality gaming experience, and we're talking about a max of 60fps.
> Since Nvidia claims to be 25x faster, it should be able to jizz out rays so fast that you don't even need the hybrid rasterization.
Bullshit. Of course they need a hybrid rasterizer which is why DirectX12 is so closely involved.
> Look how fast 400 megarays(I'm assuming) gets to a reasonable level of detail.
I see no time counter. A second passes by quite fast. In one second, they'd have to produce 60 unique frames that could be hugely varied. I want to see what a pro render tool produces on GPU accelerated ray tracing in 16ms. I bet it looks like shit.

> With 10 gigarays they should be able to simply ..
You still need to clarify 83 rays per frame for a 4k resolution 8 million pixel image. I'm not understanding your excitement for such a low ray count for that # of pixels.

> I remember when Vega was first shown with ProRender,..
Professional rendering is not gaming. Again, show me what a result looks like after 16ms.

youtube.com/watch?v=n0vHdMmp2_c
youtube.com/watch?v=BpT6MkCeP7Y

3.0 was done on 2 titans,
the first was 720p 60

The Titan had 4499.7 single precision, 1300 double.
For ray tracing, assuming near perfect scaling happens, that's 9000 single and 2600 double across the two cards. At least as far as real time goes, my understanding is half precision is more than enough, possibly even quarter precision.

amd has currently 22000-25000 half precision
nvidia has 10-12k single precision (they gimp half and double now)

the Titan V has 12-15k single, 6-7.5k double and 25-30k half precision, with a wild card of 110k tensor core (which I think is a 2x2 matrix of half precision)

personally I would take the brigade 3.0 as a render mode without any denoise, I find that acceptable and would look like film grain, effectively adding to a photorealistic feel.

Brigade 3 also seems to have more ray points than nvidia's fucking 'ray acceleration' engine, so this, run through a denoiser, may look even better than it already does.

So, ray tracing in gaming cards is a gimmick because it's either going to tank your FPS to shit or you will get shitty lighting effects at high FPS. A literal gimmick. At the pricing they put these cards at, it's like they wanted to convince gaymen to pay quadro level pricing and go enterprise V100 pricing on quadros. The exception being that gaytrace acceleration is perfectly fit for pros but completely unfit for gaymen at the moment. Quadros went from a starting price of $800 to $2,300 and magically, the 2080 is $800 (the price of a prior quadro card).

Jensen is absolutely fucking the shit out of consumers.

I think that's actually with a much older WX card like a 7100 and not Vega.
I can't find one that shows someone using Vega and how fast it jizzes out rays to the preview.

I really think if RTX has 25x the rayops per second as Vega, it should be able to do at least 30fps real time ray tracing with the denoising WITHOUT any rasterization.

>difference between preview render and videogame is insane.
When it comes to ray tracing, it is not different. Rays are rays.
If you could render a preview in your modeling program in real time at ray traced quality, then you can do the same in the game.
Speaking of that, did they demo Quadro RTX being used in render previews? That would have been at Siggraph. I didn't watch the other thing.

youtube.com/watch?v=XByzjj9JTBM They do show this in real time. There's not a breakdown showing the actual ray casting before the denoising, though.
There's clearly not enough rays given the artifacts, though. But at 10 gigarays per second, there should be enough.
I wish they showed it without the denoising.
There shouldn't be those problems with the tight edges, the creases, flickering when each pixel is sending out 80 rays per frame at 60fps. And this is likely 30fps, so 160.

youtube.com/watch?v=yVflNdzmKjg Here we go.
Uh. Doesn't look like many more rays. More, but not 25x more. And I'm far from sure that the Pro Render one was Vega with its 0.4Gigarays to begin with.

No that's at 1080p.
And it's 80.3 rays per pixel per frame at 60fps.
That should be sufficient to do real time ray tracing without a hybrid renderer. But you can see from their demos that it clearly isn't, and clearly doesn't do that many rays per second.

> I would take the brigade 3.0 as a render mode without any denoise, I find that acceptable and would look like film grain, effectively adding to a photorealistic feel.
No way. That'd be annoying.

Attached: firefox_2018-08-24_15-41-44.png (966x395, 28K)

> ray tracing in gaming cards is a gimmick

Just because AMD can't do it doesn't mean it's a gimmick.

giga ray implies

1 ray
1 kiloray = 1,000 rays
1 megaray = 1,000,000 rays
1 gigaray = 1,000,000,000 rays
10 gigarays = 10,000,000,000 rays

10,000,000 rays per ms
~166,666,667 per 60fps frame
~69,444,444 per 144fps frame
1 ray should be 1 pixel

1920x1080
2,073,600 rays needed per frame
3840x2160
8,294,400 rays needed per frame

my understanding is also that instead of shooting rays from the light source, they shoot rays from the camera and see where they bounce, which wastes less rendering power.

so either they are fucking around with names or I'm missing something, which I'm willing to admit to.

youtube.com/watch?v=9A81NeQgJFE

youtube.com/watch?v=-T072CeODs8

youtube.com/watch?v=E4aF5KeaGFc

The issue I have is that none of these demos has real world gaming scenarios. Most of the objects are static. Also, there is no FPS counter or live action. I want to see snapshots of what a render looks like with 16ms of render time allocated in an active scene. Then, i'll know what I already know which is : You're either going to get a shitty quality picture or you're going to have to reduce performance to 30fps. This is not questionable. The presentation I linked to from microsoft states this :
> High quality image and shitty fps
OR
> low quality image and high fps
As such :
> Some shit ultimately has to go back to the rasterizer
Which is why they call it a hybrid pipeline. This complicates game dev because they probably have to do gay trace on top of all the rasterizer hacks so that in worst case quality scenarios (high fps), you can dynamically go back to the rasterized result w/o gay trace.

I'm seeing through all the marketing bullshit at the moment and it jives completely as to why they're being tight lipped on performance. This is a goddamn quadro card that has no business being sold as a gaming card. Nvidia is trying to cash the fuck in on pros and gamers.

look at the Brigade demos: better than game engines, but noisy.
I want to know, if you applied AI noise reduction, what the result would be on the Brigade engine.

the way it's being used by nvidia is more of a gimmick than anything else at this point. the faggots show off real time ray traced scenes, showing it doing it in real time for months, and when it gets closer and closer, we fucking get 'it's only reflections and some shadow assists'

something is fucked between the demos we have been seeing and what they are telling us now.

>30fps
This is absolute shit for gaming dude..
My garbage entry level Maxwell card does 80fps at 2K resolution in the games I play. Do you grasp how bad this is for gaming? I have no doubt this is a great feature for pro rendering, but that is tangential to what people want out of a gaming card. It's like this shit was an afterthought for how to deal with processors that didn't make the cut for Quadro.

Possibly annoying, however the way I see it, we have single GPUs that are 2-3 times more powerful, and with the tensor cores up to 10 times more powerful, so the amount of noise would be far less; and if AI noise reduction does anything, I would take a film-grain-like filter for a few generations till we do have the power to do it without issue.

The camera resolution is your screen resolution. It's your view port. The pixels map 1 to 1.
10 gigarays = 10*10^9 per sec
60fps result = ~166,666,667 rays per frame
> 1920x1080 = 2073600 pixels
80 rays per pixel per frame @ 60fps
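Same arithmetic in code, taking the marketing number at face value (which, as discussed, it probably isn't):

// ray_budget.cpp -- rays per pixel per frame if the 10 gigarays/s figure were literal
#include <cstdio>

int main() {
    const double raysPerSecond = 10e9;             // claimed: 10 gigarays/s
    const double pixels1080p   = 1920.0 * 1080.0;  // 2,073,600
    const double pixels4k      = 3840.0 * 2160.0;  // 8,294,400

    const double rates[] = {30.0, 60.0, 144.0};
    for (double fps : rates) {
        double perFrame = raysPerSecond / fps;
        std::printf("%3.0f fps: %.0fM rays/frame -> %.1f rays/pixel @1080p, %.1f @4K\n",
                    fps, perFrame / 1e6, perFrame / pixels1080p, perFrame / pixels4k);
    }
    return 0;
}
// roughly: 30fps ~160 rays/pixel @1080p, 60fps ~80, 144fps ~33, and a quarter of that at 4K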

K, that's not so bad. I had a completely different idea.

They didn't have a lot of time to implement it into the new games. As the technology matures and when they get enough time it will get better.

Pro Rendering is not gaming.
For one, the objects are still
For two, there is no FPS counter
A second passes by quickly. 1FPS is not what people game at. You're judging the quality at, at best, a 500ms analysis... 2FPS. In a game, you have multiple objects moving at 60FPS (MINIMUM). Show me a pro render result after 16ms. I bet it looks like shit. Which is my point.

both of those demos are 60fps.
as far as motion goes, that would fall back on the cpu not so much the gpu, and if the gpu is doing the ray tracing, most of the bullshit like post processing effects would now be handled with physics through ray tracing.

a game playing and moving around shouldn't be too much worse than the demo having the camera move around, though there is a Brigade demo where they have mannequins and a car drives through them with physics attached to the mannequins.

Hi I am I agree with you.

Is the rtx the most frustrating release in the history of cards? It's just non stop conjecture on meaningless graphs with arbitrary units of measurement and hidden factors. Everything Nvidia has released on the cards is doing nothing to quell this.

>Is the rtx the most frustrating release in the history of cards?

No. Only AMD shills believe this.

No I mean every non nvidia demo they are showing off has this hybrid shit, and its sucking all the dick.
youtube.com/watch?v=ak4L_GR5fYY

for mixed ray tracing to work, they need to decouple the gameplay from the rays, bake results and do it again; an update every 4-8 frames isn't bad, especially if they never make the reflections or anything a focal point, and the shadows could just go low resolution when moving and get higher the less movement there is.

minimal performance uptick, and the only way they get the big numbers is by bullshitting the figures as much as possible, then charging 800-1200 dollars for features that are FAR the fuck off from prime time?

considering nvidia has nothing here worth the money, and amd will, at the very least with Navi, be performance competitive with the 1080 Ti from a die shrink alone, I'd go amd at this point if I had anything less than a 1080, and hard skip next gen if I had anything more.

What demos? There are no FPS counters on any of the official demos with the RTX cards.. Zero. There are no performance demos. The only video out with an FPS counter of the RTX cards was a leak someone sneakily recorded on the demo rigs. In it, you see the FPS almost half of what it is w/ gay trace off and there are multiple people who state the FPS on a 2080ti w/ ray tracing on @ 1080p was around 30fps. Yes, you heard me right. The real world numbers in a game when you turn on meme gay trace is :
30fps @ 1080p. This actually makes sense when you consider a quality high enough to present to a user on a screen and all the other shit you must do after you get the ray trace results. 33ms... So, they showed the starwars trailer which Nvidia officially said took 45ms to render a frame. That's 22fps and that's w/o a lot of motion. Now imagine a full motion FPS.. Seems the max they could do is bump it to 30fps and lop off 12ms to present a good enough image w/ the ray tracing effects. This is my point and a point Made during Microsoft's DXR presentation : With ray tracing in real-time you have to make a tradeoff between :
> FPS
> Image quality
This trade off is so big that the engineer himself said :
> SOMETHINGS HAVE TO GO BACK TO THE RASTERIZER AS A RESULT

This is for RT gaming not some canned bullshit marketing demo. They showed all the gaming footage in slomo on stage and all of the post jensen demos don't have an FPS counter, are played in slomo, and resolution isn't specified.

The only logical reason for doing this is because the FPS is shit when you enable this which is proven and stated.

I've finally discovered what's behind your reasoning. Blowing my mind right now.

>for mixed ray tracing to work, they need to decouple the gameplay from the rays, bake results and do it again; an update every 4-8 frames isn't bad, especially if they never make the reflections or anything a focal point, and the shadows could just go low resolution when moving and get higher the less movement there is.
Dude you can't do this in high action scenes. It's called ghosting and artifacts :
> an update every 4-8 frames isn't bad
That's fuckin horrid and that's not real-time as it's marketed. Real-time means per frame.

20fps vs 144fps with a really high quality reflection bake

in a fast action scene with shadows, unless you're having a fast-paced action scene set somewhere along the lines of youtube.com/watch?v=uZU0g7edxuI

the faster you go, the less you notice. Kind of like nvidia's 'broken' drivers that somehow don't have AF, have no shadows or limit them in the distance, and keep them out just long enough for most benchmarks to be done before adding the effects back in; you usually need screenshots to compare because in real time it's VERY hard to notice.

in the case of shadows and AO, you could effectively bake everything and only update when the player is going to interact with it.
as for fast action, with a reflection bake it's never going to be an issue; that said, you are never going to move fast enough for it to be an issue anyway. the only bit where it may be a problem would be something like trees in wind and the shadows cast from leaves, but making them lower resolution and coming in higher would likely be far better than aliased-to-hell-and-back shadow maps.

decoupling it also has the effect of increasing frame rate, because some heavy rendering things are now done in a different dedicated section.

I'll take raytracing-assisted over real time ray tracing if it gets me 144Hz+ with better effects, over 20fps or less with effects that aren't good enough to justify the unplayable frame rate.

Thank you for your thoughts.
Trying to average this all out, what I am arriving at is: there's no way they're shipping this shit with such horrid FPS numbers. Thus, they have to be doing some 'hacks' to get this to be performant. I think this is where the 'hybrid rendering' aspect comes in. They most likely are doing regular rendering with no gay trace and then overlaying the gay trace on top of it with statistical mapping. This allows FPS preservation with a minimal hit and a dynamic range of ray trace quality. They're definitely not doing full-on real-time ray tracing. It's an overlay on top of a regular rasterized image.
> Hybrid pipeline.

they are going to do something like ray trace dynamic lights; they have shown this off in, I believe, a game already, but that is a scene that has a small amount of shadows at a low enough fidelity that the blur of the shadow looks better than hard coded, but isn't great

they have shown reflections, where they can effectively focus all the rays on singular points, kind of pointless if you ask me, but it's there nonetheless. The big tech demos where everything is going at once are all set up in ways that hammer fps, and while they do have better visuals, they are not worth the expense for said visuals.

I honestly want some of these demos in people's hands so we can see impacts and frame differences.

till this happens, for quite a lot of nvidia's ray tracing shit i'm willing to just fully call it bullshit at worst and about 3-4 gens away from prime time ready.

There's more I could probably say but its conspiracy theory territory.

>in the case of shadows and AO, you could effectively bake everything and only update when the player is going to interact with it.
We have been doing that for 3D shit for the last 20 years by now. With few exceptions where parts of the shadow/light system use polygonal shadow volumes.
The end result is that you can have particles, but not lighting, so you can have smoke: but fire isn't going to look pretty without the dancing lights on the environment.

Which is why something like First Encounter Assault Recon from 2005 still doesn't look dated, when modern games can barely do particles and lighting at that level, even if the rest of the groundwork looks a lot better. First Encounter Assault Recon ends up looking so much better in motion, and that is what your brain perceives.
It's such a shame. Static might give us games like Tekken, but not moving pictures of particles and light.

>youtube.com/watch?v=9A81NeQgJFE
Very impressive, but also seems to suggest the real gigarays per second figure is not what they claim.
>for mixed ray tracing to work, they need to decouple the gameplay from the rays, bake results and do it again; an update every 4-8 frames isn't bad, especially if they never make the reflections or anything a focal point, and the shadows could just go low resolution when moving and get higher the less movement there is.
That's how Nvidia's demo last year worked. It looked like shit in motion. Especially when the camera moved. Full of artifacts.

Agreed. A couple more weeks until some more details come out. Even then, I think it's going to take months to comb through this convoluted mess of an architectural (((concept))).

I might pre-order one, but it's for dev purposes and not gaymen. I'm concerned as to what is independently computable and what's not, though... As in, how many different things you can have going on, with how tightly fused this stuff is to an SM. I'm going to review the cancel/return policy and likely fire up an order. The 2070 is out because they deleted NVLink from it. So, an $800 2080 FE... If things don't turn out the way I want them, I'm returning it unopened for a full refund. No risk in pre-ordering. I'm ordering it because it's an affordable Quadro of this variant (no way I'm paying $2,300). As a gaming card, it's a joke.

The frustration comes from the fact that they're openly lying and being ridiculously manipulative. They could be up front and get better results in sales, development, and branding. They're choosing to be assholes however. The only reason why people care is because these new features could be big if its put in more people's hands, transparency is given to how everything works, and how it performs. Let your customers try out new things w/ it and figure out things you don't see as a company. This would progress the tech much faster and accelerate the timeline on ray tracing. Instead, they greedily aim for assrape. People are pissed at the potential they see in the tech that is being chained by Nvidia's greedy ass non-technical staff. I have some pretty interesting compute flows and algorithms I'd like to try out. However, I'm torn because I am not a fan of these prices at all. It takes time to learn a new architecture/APIs/etc. I'm 100% excited about this until I look at the price and that pisses me off to a degree you can't imagine. It brings back the recent and infuriating pain of the coinfag fiasco and the assrape everyone in the seller chain exacted against customers. It reminds me of Nvidia's partner program bullshit that everyone had to call them out on and get terminated. It reminds me of how Nvidia played dumb about the pricing even as retailers raped the shit out of gamers. And what makes people especially mad is that Nvidia had a huge and unexpected windfall of profits from this coin bullshit and even then they aren't satisfied and satiated. Instead the greedy ass business department decides :
> Wow, if we could rape them that hard then imagine what we could do going forward..

Having no other company or alternative at the moment puts people at their mercy. I know they're a corp and profits come first but this shit feels disgusting. Enough such that I'd scrap my dev plans all together for these cards and for some generations to come.

The Arnold demo is being run with no denoiser.

en.wikipedia.org/wiki/Bounding_volume_hierarchy

en.wikipedia.org/wiki/Möller–Trumbore_intersection_algorithm

Ray trace cores are basically an ASIC for these algorithms. CUDA cores are still being used to calculate the ray traced shading; usually a GPU has huge FLOPS but bad branching, while a CPU has good branching but low FLOPS, and the RT cores help with the branching part of ray tracing.

GigaRays depends on scene complexity, but RT cores get 6 to 10x the performance.

The real advance of RTX is that graphics and compute are becoming the same thing. Usually GPU rendering uses only the compute units; having no access to the graphics part of the GPU (texture processing, rotation or translation) means you need to implement your own graphics libraries for graphics manipulation and duplicate memory for the graphics model and the render scene. RTX makes interactive graphics and ray trace compute the same thing, with the same memory representation.
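Since the wiki links got posted, this is the Möller-Trumbore ray/triangle test itself, the kind of small branchy kernel that BVH traversal hammers and that the RT cores are presumably hardwiring. Textbook algorithm, not Nvidia's implementation:

// moller_trumbore.cpp -- textbook ray/triangle intersection, the inner loop under BVH traversal
#include <cstdio>
#include <cmath>

struct V { double x, y, z; };
V sub(V a, V b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
V cross(V a, V b) { return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x}; }
double dot(V a, V b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true and the hit distance t if the ray (orig, dir) hits triangle (v0, v1, v2).
bool mollerTrumbore(V orig, V dir, V v0, V v1, V v2, double& t) {
    const double EPS = 1e-9;
    V e1 = sub(v1, v0), e2 = sub(v2, v0);
    V p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;   // ray is parallel to the triangle plane
    double inv = 1.0 / det;
    V s = sub(orig, v0);
    double u = dot(s, p) * inv;               // first barycentric coordinate
    if (u < 0.0 || u > 1.0) return false;
    V q = cross(s, e1);
    double v = dot(dir, q) * inv;             // second barycentric coordinate
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * inv;                     // distance along the ray
    return t > EPS;
}

int main() {
    double t;
    bool hit = mollerTrumbore({0, 0, -1}, {0, 0, 1},                  // ray marching along +z
                              {-1, -1, 0}, {1, -1, 0}, {0, 1, 0}, t); // triangle in the z=0 plane
    std::printf("hit=%d t=%.2f\n", hit, t);                           // expect hit=1 t=1.00
    return 0;
}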

got a link to that demo?

you may not have noticed, but since 2014, possibly 2012, game developers have been refusing to bake even non-interactable environments, preferring to do them in real time even though it kills performance.

also, FEAR does look dated, fairly dated, however its gameplay beats out most new games. largely, art direction and atmosphere are what make a game look like this, as most engines already have the tools to do it inside.
people like shitting on The Division's graphical downgrade, but it still does effects better than most games do
youtube.com/watch?v=TLaD_EqB_t4

granted, I don't play many brand new games anymore, so my ability to pull examples really comes down to remembering the best ones I have seen.

youtube.com/watch?v=31_xMpNT-cE

So :
> Hybrid Rendering
This feature slams the whole GPU with a combined process and overlays it for appearance over a traditional rasterized image and then denoises it for final effect.

Mixing both pipelines makes a lot of new things possible, such as ATAA.

research.nvidia.com/sites/default/files/pubs/2018-08_Adaptive-Temporal-Antialiasing/adaptive-temporal-antialiasing-preprint.pdf

>Primary surface aliasing is a cornerstone problem in computer graphics. The best known solution for offline rendering is adaptive supersampling. This was previously impractical for rasterization renderers in the context of complex materials and scenes because there was no way to efficiently rasterize sparse pixels. Even the most efficient GPU ray tracers required duplicated shaders and scene data. While DXR solves the technical challenge of combining rasterization and ray tracing, applying ray tracing to solve aliasing by supersampling was nontrivial: knowing which pixels to supersample when given only 1spp input, and reducing the cost to something that scales are not solved by naively ray tracing.

Yep, and this makes it clear that they're doing an overlay over a traditional rasterized image and then doing denoising, in the fact that you can see artifact pixel-level sparkling in the backdrop due to the ray trace results. This would be highly annoying in a game, and I must note that the only scenes they show reflections and a high amount of detail in are scenes with a low amount of moving objects. At 4:26, that whole scene is static beyond the guy walking around, his shadow, and the reflections therein. And even then you have pixel flicker.

Zero pixel flickering when they turn RTX off. It occurs immediately when they turn it on. And I unironically like the RTX off image. It doesn't have as much detail, but it's consistent and the flat feel is better. Glossing everything up feels like a gimmick. In the real world, you actively ignore reflections and shadows... they're environmental noise that usually isn't essential to processing information. It's like a natural artifact in and of itself.

Again, in all of these demos they make sure to slow things down and have a low amount of movement. Even then you can see pixel flicker.

I think this is a great and impressive first step, but it's literally like buying a dev board. It's great for developers, but it's sort of a fleecing for gaymers. No developer cares unless there are consumers. So this is Nvidia getting the ball rolling on a long series of versions that will occur until this is refined. In one year, on 7nm alone, there should be a big improvement. As the software/dev tools/Vulkan aren't even solidified yet, I feel like it would be a paperweight until 2019 anyway.

I believe nvidia wants 100x faster ray tracing by 2025.
Changing GTX to RTX: graphics moving from rasterization to ray tracing.

Thank you for the link.
I get exactly what is going on now at a high level and a detailed level. At a high level :

> Generate BVH
fork :
In an SM and across multiple SMs :
> Pipe BVH to rasterizer pipeline for traditional rendering
> Pipe BVH to ray trace pipeline for ray trace algo
> Utilize other components such as cuda cores for divergence handling/etc
> Overlay ray trace noisy result over rasterized image
> Denoise using new "AI" algo [tensor cores]
> Apply ATAA
> Spit out frame
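Or in code-shaped pseudocode (my reading of that flow, with made-up function names, not an actual API):

// hybrid_frame.cpp -- skeleton of the per-frame flow above; every function is an empty
// placeholder just to show the ordering, not a real renderer.
struct Scene {}; struct GBuffer {}; struct NoisyRT {}; struct Image {};

void    refitBVH(Scene&)                          {}             // "Generate BVH" / keep it in sync with animation
GBuffer rasterizeGBuffer(const Scene&)            { return {}; } // traditional raster pipeline
NoisyRT traceRays(const Scene&, const GBuffer&)   { return {}; } // RT cores traverse, CUDA cores shade / handle divergence
Image   composite(const GBuffer&, const NoisyRT&) { return {}; } // overlay the noisy RT result on the raster image
Image   denoise(const Image&)                     { return {}; } // "AI" denoise on the tensor cores
Image   antialias(const Image&)                   { return {}; } // ATAA pass
void    present(const Image&)                     {}             // spit out the frame

void renderFrame(Scene& scene) {
    refitBVH(scene);
    GBuffer gbuf  = rasterizeGBuffer(scene);
    NoisyRT noisy = traceRays(scene, gbuf);       // a few rays per pixel, not full path tracing
    Image frame   = antialias(denoise(composite(gbuf, noisy)));
    present(frame);
}

int main() { Scene s; renderFrame(s); return 0; }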

youtube.com/watch?v=tjf-1BxpR9c

The majority of obvious artifacts are toward the end.

Heh. This is true. And I'm largely one of those...

I mean they stated themselves they're doing hybrid rendering. lol.

Only in the Porsche demo does it appear to be pure ray tracing with denoising, and not hybrid rendering.
That's a VERY limited scene.
It just highlights that they aren't really 10 Gigarays per second.

Mixed pipelines are doing great things. We're using tons of ray tracing already; screen space reflections, for example, are technically a ray trace.

The coding for it is getting so convoluted and difficult though, whereas ray tracing is simpler. I really can't say I'm too excited about hybrid rendering.

RT cores, aka the RTU, are for voxelization, and not actual ray tracing, afaik.

Yeah I wonder what it'd be if they took the approach to just say "These aren't a huge leap for today's games, but it's a step toward a better future".

But a better future for whom? Because it'll be a while before a card 10x more powerful than the 2080 Ti really makes that "dream" a reality. And then you're talking about throwing a lot of electricity, even at 5nm, at something that in many cases would look no different than prebaked lighting, depending on how your game works.

It's like someone else mentioned about Fear. It's still pretty gorgeous. Art style does a lot, as much as I said I hate prebaking myself.