Is Ray tracing a meme? Should I get any of the new cards if I don't care about it?

Attached: geforce-rtx-2080-ti-social-1200x627-fb-800x396.jpg (800x396, 32K)

yes and no.

Lol, Nvidia advertised it as a $1,200 gaming card.

It's the future! Better buy that Ti card!

Attached: 1534973964758.jpg (600x1152, 143K)

It's a meme. Watch the trailers of the games that use it; just use some nice SSAO and you're done. I won't sacrifice 999999 FPS for a barely noticeable effect.

Yes, I recommend you buy a couple of RTX 2080 Tis; they can do so much more than ray tracing!

Who gives a fuck? PC will continue to get neutered by companies so they can multi-plat on the consoles for more money. There are going to be maybe three companies that ever take advantage of it.

For $1,200 plus the price of other components I can buy a 4K TV, a PS4 Pro/Xbox One S, and a big comfy sectional.
>All so a shitty console port can look 2-5% better than a console
Gaming at that price is straight-up stupid.

Depends.

Do you want 4k60+? 1440p300?

SSAO is a meme, dipshit.

It was a meme before it was ever created.

>ALL THE LEAKS WERE WRONG

The card actually is more expensive and performs worse than expected.

The usefulness and necessity of ray tracing can be argued the same way as ambient occlusion: you never know you need it until you try it for the first time, and then you proceed to disable the feature. You can make do without it.

I'm not sure why I expected this board to understand the significance of real-time ray tracing. I swear you're all from /v/ and only see GPUs as gaming toys.

999

>I swear you're all from /v/ and only see GPUs as gaming toys.
Because the vast, overwhelming majority of people only use GPUs for gaming. Fucking deal with it.

It's as significant as a unified shader model is.
But SM2.0 was shit.
2.0 was shit.
2.0a was shit.
2.0b was shit.
It didn't get half decent until 3.0.

And how long did it take to go from 2.0 to 3.0? Three years? Seven years after the basically fixed-function SM 1.0?
Oh, and don't forget how Nvidia held back SM 3.0 for fucking years with their shitty rebranded GPUs, so it was really more like six years after a unified shader model was introduced that it finally became this amazing feature we couldn't do without that completely changed graphics.

5 years later, Nvidia will still be releasing rebranded "RT2.0" GPUs when it's only good on "RT3.0" on the high end models. But developers will only use "RT2.0" because lots of people got scammed into buying the shitty rebrands.
And we're only on 1.0 now and it's shit.

Plus, Nvidia is basically moving us to a new fixed function raytracing model. It'll be a while before we have a proper unified raytracing model, as far as I'm aware. I'm not sure on the specifics here.

>Is Ray tracing a meme?

Ray tracing is definitely not a meme. It's the standard rendering method used in practically every CGI animated film in the past decade.

The following video is geared towards Blender, but it describes the differences between ray tracing and rasterization rendering commonly used in games today.

youtube.com/watch?v=fAiai0fCBOw

So explain to us loathsome plebs what it's good for then?

Why don't you read the post directly above yours?

Post a link to where I can pre-order a 2080 Ti for $999. The cheapest one on Newegg is $1149.99 (plus tip).

I did. What I got out of that was that it's mostly useful for... games.

Anon... ray tracing is the standard rendering method for CGI films and visual effects.
youtube.com/watch?v=IJ77a0erU4w

Not samefag, but isn't Nvidia's ray tracing AI-based/supported/analyzed for each game and every possible frame, then supplied to the customer?
What the fuck does this have to do with movies?

Yeah I got that, but that's a pretty niche application when we're talking about a general consumer item. The post I answered scoffed at people thinking this was only about teh gayms, but most people aren't making CGI films.

It's a meme. What's in the new Nvidia cards is a partial, kick-starter ray tracing pipeline. My understanding is that only the primary rays, or a partial segment of the ray trace, are processed in the RT cores. The rest is processed through a series of tricks in the traditional pipeline. It's why they had to reorganize the architecture to share more of the cache/registers. So when you enable ray tracing, you're going to cause a performance hit. It isn't completely parallel or self-sufficient. It's like a teaser/beta-test feature on a dev board. They're giving the market just enough to test their design, fix the kinks, and whet your appetite. It's still a gimmick that will take multiple generations to smooth out. The hilarious part is that they're charging the take-my-money crowd a premium for beta testing this shit. I tip my hat to them for exploiting the situation on hard mode. Ride Pascal to 7nm and see what the full market offers in 2019. By then, Nvidia will have released all of the benchmarks and details for you to fully assess their new architecture. Prices will likely fall as well.

The easiest way to think of ray tracing is to look around you, right now. The objects you’re seeing are illuminated by beams of light. Now turn that around and follow the path of those beams backwards from your eye to the objects that light interacts with. That’s ray tracing.

Historically though, computer hardware hasn’t been fast enough to use these techniques in real time, such as in video games. Moviemakers can take as long as they like to render a single frame, so they do it offline in render farms. Video games have only a fraction of a second. As a result, most real-time graphics rely on another technique, rasterization.

With rasterization, objects on the screen are created from a mesh of polygons that create 3D models of objects. In this virtual mesh, the corners of each polygon — known as vertices — intersect with the vertices of other triangles of different sizes and shapes. A lot of information is associated with each vertex, including its position in space, as well as information about color, texture and its “normal,” which is used to determine the way the surface of an object is facing.

Computers then convert the triangles of the 3D models into pixels, or dots, on a 2D screen. Each pixel can be assigned an initial color value from the data stored in the triangle vertices.
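
To make the rasterization half of that concrete, here's a toy C++ sketch of the idea: project a triangle's vertices onto the screen, then fill the covered pixels by interpolating the per-vertex data. The camera model, the triangle and every name in it are made up for illustration; this is not any engine's actual code.

// A minimal sketch of the rasterization step described above: project the
// triangle's vertices onto the screen, then fill the covered pixels by
// interpolating the per-vertex data. Illustrative toy only.
#include <cstdio>

struct Vec3 { float x, y, z; };

// Toy perspective projection onto a W x H screen (camera at the origin, looking down +z).
static void project(const Vec3& v, int W, int H, float& sx, float& sy) {
    sx = (v.x / v.z * 0.5f + 0.5f) * W;
    sy = (v.y / v.z * 0.5f + 0.5f) * H;
}

// Twice the signed area of the 2D triangle (a, b, c); doubles as an edge function.
static float edge(float ax, float ay, float bx, float by, float cx, float cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

int main() {
    const int W = 16, H = 16;
    Vec3 tri[3]   = { {-1, -1, 3}, {1, -1, 3}, {0, 1, 3} };  // one triangle of the mesh
    Vec3 color[3] = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };     // per-vertex data

    float px[3], py[3];
    for (int i = 0; i < 3; ++i) project(tri[i], W, H, px[i], py[i]);
    float area = edge(px[0], py[0], px[1], py[1], px[2], py[2]);

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float cx = x + 0.5f, cy = y + 0.5f;              // pixel center
            // Barycentric weights: the pixel is inside the triangle iff all are >= 0.
            float w0 = edge(px[1], py[1], px[2], py[2], cx, cy) / area;
            float w1 = edge(px[2], py[2], px[0], py[0], cx, cy) / area;
            float w2 = edge(px[0], py[0], px[1], py[1], cx, cy) / area;
            bool inside = w0 >= 0 && w1 >= 0 && w2 >= 0;
            // The pixel's initial color comes from interpolating the vertex data.
            float r = inside ? w0 * color[0].x + w1 * color[1].x + w2 * color[2].x : 0;
            putchar(inside ? (r > 0.5f ? 'R' : '#') : '.');
        }
        putchar('\n');
    }
}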

Ray tracing is different. In the real world, the 3D objects we see are illuminated by light sources, and photons can bounce from one object to another before reaching the viewer's eyes.

Light may be blocked by some objects, creating shadows. Or light may reflect from one object to another, such as when we see the images of one object reflected in the surface of another. And then there are refractions — when light changes as it passes through transparent or semi-transparent objects, like glass or water.

Ray tracing captures those effects by working back from our eye. It traces the path of a light ray through each pixel on a 2D viewing surface out into a 3D model of the scene.
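
And here is the ray tracing half as a toy C++ sketch: one primary ray per pixel, fired from the eye out into a scene containing a single sphere, shaded by the direction toward a light. The scene and shading are deliberately minimal stand-ins; the comments mark where a real tracer would fire the secondary shadow, reflection and refraction rays described above.

// A minimal sketch of the backwards tracing described above: one primary ray
// per pixel, from the eye out into a toy scene with a single sphere, then
// shaded by the direction toward a light. Purely illustrative names and scene.
#include <cmath>
#include <cstdio>

struct Vec { float x, y, z; };
static Vec   sub(Vec a, Vec b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec a, Vec b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec   norm(Vec a)        { float l = std::sqrt(dot(a, a)); return {a.x / l, a.y / l, a.z / l}; }

// Distance along the ray to the sphere, or -1 if the ray misses it.
static float hitSphere(Vec orig, Vec dir, Vec center, float radius) {
    Vec oc = sub(orig, center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0) return -1;
    float t = -b - std::sqrt(disc);
    return t > 0 ? t : -1;
}

int main() {
    const int W = 40, H = 20;
    Vec eye{0, 0, 0}, sphere{0, 0, 5}, light{5, 5, 0};

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Primary ray: from the eye through this pixel of the 2D viewing surface.
            Vec dir = norm({(x - W / 2.0f) / H, -(y - H / 2.0f) / H, 1});
            float t = hitSphere(eye, dir, sphere, 1.5f);
            if (t < 0) { putchar('.'); continue; }           // ray escaped: background

            // Hit point and surface normal there.
            Vec p{eye.x + dir.x * t, eye.y + dir.y * t, eye.z + dir.z * t};
            Vec n = norm(sub(p, sphere));
            Vec toLight = norm(sub(light, p));

            // Simple diffuse shading. A real tracer would fire secondary rays from p
            // here: a shadow ray toward the light, plus reflection/refraction rays.
            float diffuse = std::fmax(0.0f, dot(n, toLight));
            putchar(" .:-=+*#%@"[(int)(diffuse * 9.99f)]);
        }
        putchar('\n');
    }
}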

Attached: glass_bowl_fancy_wallpaper.jpg (1280x1024, 614K)

>What the fuck does this have to do with movies?
Compare Overwatch gameplay to their animated movies. Do you have any idea how long animated movies take to render? The key here is that it's being rendered in real time.
youtube.com/watch?v=gTbWVt2_OWc

You don't buy this shit for games, especially since it's a beta test. Eventually it would make games look like those films though.

^this guy knows what's going on.
Can you comment on the fact that this meme ray trace pipeline in its current form is just a kick-starter? I'm hearing that non-primary rays are still processed the way they used to be in the graphics pipeline/CUDA cores/SMs, through hacky emulation. This is why they had to change the architecture (see pic).

Attached: gay_trace_secondary_rays.jpg (1600x898, 97K)

>IMPLYING YOU COULD BUY ONE EVEN IF YOU WANTED TO

TOO LATE KID.

SOLD OUT

unironically this image sums it up.

But it is being assisted by AI, isn't it?
Isn't that the whole point?

>You don't buy this shit for games, especially since it's a beta test. Eventually it would make games look like those films though.
I agree with that at least. Sounds like $1,200 worth of meme to me.

Yes, it is being assisted by AI. I honestly don't know how the AI is helping, though. The only thing I know about AI and ray-traced rendering is that there is AI denoising, which lets you get away with fewer iterations (and render faster) as long as the AI cleans up the noise nicely.
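
To get a feel for that trade-off, here's a toy C++ sketch: a low-sample Monte Carlo estimate of each pixel is noisy, and a crude neighbour-averaging filter (a stand-in for the actual learned denoiser, which is far more sophisticated) smooths it out at the cost of some detail. All numbers and names here are invented for illustration.

// A rough sketch of that trade-off: a Monte Carlo estimate from few samples is
// noisy, and a post-filter (here a crude neighbour average standing in for the
// learned denoiser) trades the noise for a smoother image. Toy numbers only.
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int pixels = 16;
    const int samplesPerPixel = 4;             // the "fewer iterations" case
    std::mt19937 rng(42);

    // Ground truth is a smooth gradient; each sample is truth plus random noise,
    // mimicking the variance of a low-sample path-traced pixel.
    std::vector<float> truth(pixels), noisy(pixels);
    for (int i = 0; i < pixels; ++i) {
        truth[i] = i / float(pixels - 1);
        std::normal_distribution<float> sample(truth[i], 0.3f);
        float sum = 0;
        for (int s = 0; s < samplesPerPixel; ++s) sum += sample(rng);
        noisy[i] = sum / samplesPerPixel;      // the raw, noisy estimate
    }

    // "Denoise": average each pixel with its neighbours.
    std::vector<float> denoised(pixels);
    for (int i = 0; i < pixels; ++i) {
        float sum = 0; int n = 0;
        for (int j = i - 1; j <= i + 1; ++j)
            if (j >= 0 && j < pixels) { sum += noisy[j]; ++n; }
        denoised[i] = sum / n;
    }

    for (int i = 0; i < pixels; ++i)
        std::printf("truth %.2f  noisy %.2f  denoised %.2f\n",
                    truth[i], noisy[i], denoised[i]);
}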

Ray tracing creates a real-world-accurate model of light to deliver complex, physically-accurate shadows, reflections and refractions that current-gen video game rasterization rendering methods cannot dream of.

Attached: rasterization-vs-ray-tracing-n.jpg (720x540, 44K)

From what I read, all games are analysed frame by frame with supercomputers at Nvidia HQ to get ray data or some shit.
Then that data is provided to end users by being included in the drivers.
So it's not all done in real time by your GPU; it leans on a massive amount of pre-supplied data.
Now, if I understood this correctly, will this help with rendering movies?

>will this help with rendering movies?
Yes.
Like, Overwatch movies use Redshift, but each frame will take a fuckload of time to render. Now imagine the final product being instantly visible in real-time like it was a game. Now games can also look like those movies because raytracing is now instant.
youtube.com/watch?v=7VrPjVSPmKw

>what is FE premium?
Hello, underage. That said, the third parties are absolutely going to price gouge this time round.

The RT cores also supply a buffer showing which triangles are intersected by which rays, to aid in shading.

The problem is just how this whole pipeline works and how slow it is.
There are too many stages. You have to do the RT ASIC, THEN tensor core usage, THEN your shaders and THEN tensor cores again.
And there doesn't appear to be any mechanism to do these asynchronously with multiple stages running concurrently in order. Yes the new Volta/Turing CUDA cores support async compute, but that's only for the shaders themselves and without concurrency.

What we need is, fuck... I don't know. I want to say we need something where ray tracing is done in parallel with shading, and the shader can wait for the result and immediately use it in the middle of a shader program.
At the moment you're arguably better off simply doing ray tracing on the ordinary cores: each ray is slower, but there's less staging. We just need better improvements to voxel cone tracing. I have a feeling that is what AMD has been working on for Navi, and we'll see it in next-gen consoles.
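
Back-of-the-envelope C++ for why those serialized stages hurt: if the stages must run strictly back to back every frame, frame time is their sum; if they could overlap across frames, steady-state throughput would only be limited by the slowest stage. The per-stage latencies below are invented for illustration, not measurements of Turing.

// Back-of-the-envelope arithmetic for the staging problem above. The per-stage
// latencies are invented for illustration; they are not measurements of Turing.
#include <algorithm>
#include <cstdio>

int main() {
    // Hypothetical per-frame latencies (ms) for the stages listed above:
    // RT traversal, tensor-core denoise, shading, tensor-core post-process.
    double stages[] = {8.0, 5.0, 10.0, 6.0};

    // If each frame runs the stages strictly back to back, frame time is their sum.
    double serial = 0, slowest = 0;
    for (double s : stages) { serial += s; slowest = std::max(slowest, s); }

    // If the stages could overlap across frames (true pipelining), steady-state
    // throughput would instead be limited only by the slowest stage.
    std::printf("serialized: %.1f ms/frame (%.0f fps)\n", serial, 1000.0 / serial);
    std::printf("pipelined:  %.1f ms/frame (%.0f fps)\n", slowest, 1000.0 / slowest);
}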

I've been hearing people say that the tensor cores are virtual and so is the ray tracing, but I don't think so.
There appears to be an actual ASIC for it, something I'd call a sparse octree voxelization accelerator.
en.wikipedia.org/wiki/Sparse_voxel_octree
You can visualize it as many cubes, where you subdivide the cubes more finely only where more accuracy is needed.
Say you had a bear. You'd have one big cube for the center of its body. That's one unit of measurement, very fast to test. Then as you add the roundness, you're adding more, smaller cubes. Then cubes going down and out for the arms and legs. Then smaller cubes to round them out. Then even smaller cubes for the fingers, etc.
If you shoot a ray through this bear, you'd otherwise be stepping through way more voxels; with the octree, the center of its body is mostly one big cube plus a few extras around it to round it out.
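
Here's a toy C++ sketch of that idea: one cube that only subdivides where the model actually has detail, so big uniform regions stay a single voxel and queries there terminate after a step or two. Purely illustrative; this is not Nvidia's structure or anyone's production code.

// A toy sparse voxel octree: one cube that is only subdivided where the model
// has detail. Illustrative only; not Nvidia's structure or any production code.
#include <cstdio>
#include <memory>

struct Node {
    std::unique_ptr<Node> child[8];             // absent children = not subdivided
};

// Mark a point as occupied, subdividing the cube around it down to maxDepth.
// The cube is the axis-aligned box [x, x+size) x [y, y+size) x [z, z+size).
void insert(Node& n, float x, float y, float z, float size,
            float px, float py, float pz, int depth, int maxDepth) {
    if (depth == maxDepth) return;
    float half = size / 2;
    int ix = px >= x + half, iy = py >= y + half, iz = pz >= z + half;
    int idx = ix | (iy << 1) | (iz << 2);
    if (!n.child[idx]) n.child[idx] = std::make_unique<Node>();
    insert(*n.child[idx], x + ix * half, y + iy * half, z + iz * half, half,
           px, py, pz, depth + 1, maxDepth);
}

// How deep must a query descend at this point? Coarse regions answer after a
// step or two; only the detailed corners cost the full depth.
int depthAt(const Node& n, float x, float y, float z, float size,
            float px, float py, float pz, int depth) {
    float half = size / 2;
    int ix = px >= x + half, iy = py >= y + half, iz = pz >= z + half;
    int idx = ix | (iy << 1) | (iz << 2);
    if (!n.child[idx]) return depth;            // no finer detail here: stop
    return depthAt(*n.child[idx], x + ix * half, y + iy * half, z + iz * half,
                   half, px, py, pz, depth + 1);
}

int main() {
    Node root;                                  // the one big cube: [0,1)^3
    // "Bear's paw": fine detail in one corner only.
    insert(root, 0, 0, 0, 1, 0.9f, 0.9f, 0.9f, 0, 5);

    std::printf("levels traversed near the detailed corner: %d\n",
                depthAt(root, 0, 0, 0, 1, 0.9f, 0.9f, 0.9f, 0));
    std::printf("levels traversed in the big uniform middle: %d\n",
                depthAt(root, 0, 0, 0, 1, 0.4f, 0.4f, 0.4f, 0));
}

Most of the "bear" gets answered at the top of the tree, which is the whole point.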

If they were virtual tensor cores, they should just be intrinsic shader functions in the same function call, like rapid packed math. I don't think they are.

That graph for their cache and shared mem doesn't make sense either. Why is the load/store unit split? Is one for the tensor cores and the other for the CUDA cores?

Pretty misrepresentative to act like these RTX cards will do render farm level of ray tracing in real time. They do not.

There are supposed to be reference data points for how a higher-resolution frame looks in a game, which the "AI" checks against to produce higher-quality anti-aliasing.
I, uh, will have to see more. But regardless, you don't need that level of AA at 4K; 1x SMAA is good. And at lower resolutions it might not offer much of a speed advantage, and might not look better without enough pixels to work with and blend. I think there are a number of telling factors as to why they only showed DLSS at 4K, despite it being useless at 4K.

I'm an engineer by profession, but not in this field.
This looks impressive, that is fine, and the AI workaround idea is clever. However, I still cannot justify buying it now.
The only thing I can compare it to is PhysX, which I never used in games because I never noticed it and it tanked my frames.
I don't render anything and have no clue about it; from my knowledge, people buy faster/better GPUs primarily for gaming. This is not new technology, just a new way of implementing it, and it is exclusive to Nvidia.
AMD has, from my knowledge, dibs on supplying the APUs for the new consoles, which means the consoles won't get it.
When you are developing a game, you want to increase your profits and sell on all platforms. Which means this feature may be included as optional on PC, but it is not needed by any means.
Most PCs are toasters with old GPUs. Their users cannot afford this, and neither will I for that matter, because the price is just absurd. And those PCs are the majority.
From everything I can gather about the subject, the AI approach is a fine way to do it, but it is also a problem that will directly impact adoption.
I see no reason to pay that money for something that won't be useful to me in any way except having some nice shadows, which I mainly turn off/reduce for FPS gain, and which ray tracing will dip even more.
There is no reason for me, a standard user, to get this. At all. The technology is fine, it's cool, but at this moment it's a gimmick few will use, and even fewer actually, unironically, need.

Will it be good at mining?

Right, exactly.
I'm a graphics programmer myself. I think the new Quadro cards are great, but these being gaming cards is a complete scam, an attempt to rip people off to justify a price they're not remotely worth.

They couldn't sell a card that's only 20-30% faster than the 1080 Ti for $1,200, so they made up some new snake-oil marketing around it to sell it to idiots.

>The only thing I can compare it to is PhysX, which I never used in games because I never noticed it and it tanked my frames.
I remember the PhysX in CoD Modern Warfare on my 9600 GT. The blowing newspapers looked amazing.
That card was a lemon, so within a year I got a 5770 instead, which completely blew the fuck out of the comparable Nvidia card at the time and was ten times quieter, but I no longer had the blowing newspapers.
It pissed me off when I figured out that effect was easily possible on the 5770 and would have run better on it, but it was simply locked out by proprietary bullshit.
Hence, well, I don't know a lot about programming for Nvidia's proprietary features. I don't give a shit about them.

>AMD has, from my knowledge, dibs on supplying the APUs for the new consoles, which means the consoles won't get it.
Yeah, I can't really see MS or Sony going to Nvidia after their sour history with them.
AMD has their own "deep learning" support in Vega 7nm, and likely in Navi, with virtual tensor cores which use vastly less die space.
They do lack the sparse octree voxelization accelerator, but I'm not too sure that's even a big deal. You can denoise off the fucking depth buffer and with stenciling.

I don't know.

All that computational power for graphics that barely look any better.

Ray tracing in a video game context is a fucking meme.

You don't buy it now. 10 years maybe, but you have to also understand that it's beyond just shadows. Everything you see in life is a reflection, and raytracing emulates that and will just inherently make shit look better just by having light on it. But like, the tech isn't there yet for the average consumer and I don't know if the devs who can afford this are going to spend their time implementing it. For me, an individual who renders, but isn't a big studio who can afford Quadros, I'm going to see if the render benchmarks are insanely superior to the 1080ti and if they are then it'll be an easy purchase. I'll be able to work so much faster. I do see it as a gimmick right now for games, but like, I can understand they still have to attempt to market it for gamers obviously. It's just in its infant stage right now and they're doing a shitty job at explaining it. To be honest I don't really blame them though because they could have very well said fuck gaming with this line, but they gotta make that money I guess.

It's looking good so far support-wise though from the render side at least. youtube.com/watch?v=Nr4_VAHTj_w

>Ray tracing captures those effects
No, it doesn't. Right now the best they can do is selective ray tracing.

I understand it's not just about shadows, but at the present moment it's not much more.
As I said, I have no clue about rendering/design, so I'll leave that to others and won't say shit.
My point is, like you said, why buy it now? It offers little and costs way too much, at least to me.

You can go back to any point in time and you will see that these animated movies look way better than gameplay. Games never look like this because it takes a lot of effort to make them that way.

youtube.com/watch?v=AyT8LOjrnWE
4K Quake looks just as impressive as it would with this meme lighting. You're unironically not staring at the details on rocks, doorknobs, and shadows while you're running around trying to get a max frag count. They're prematurely exporting a Quadro-level feature to the consumer gaming market in hopes of converting it to the old Quadro margins and increasing prices/margins in all of their other vertical markets. It's a cash grab. It stinks of it. Ray tracing is still emulated as fuck and cut down, even in their meme cores.

Artificial supply manipulation... the same shit they pulled during the memecoin crisis. They're faking market conditions to see whether or not their new market segment exists: FULL-ON MILKING OF EARLY ADOPTERS.

Essentially it's a more advanced form of pre-baking.
They compress the higher-res/refined pre-bake into a NN and stamp it into the game on demand. Nothing is being computed in real time; it's pre-computed. It's a hack and it won't scale across all games. It's a cloud-computing gimmick they'll have to start charging game companies for, which will restrict its usage and availability to (YOU).

Goddamn, anon. Spot-on insights and analysis.
This is my thinking too, minus the understanding you have about the interaction of the different pipeline segments. I knew some fuckery was afoot. IMO, yes, they need to break this out into a completely separate dedicated ASIC that has low-latency access to a coherent shared cache, which stores the BVH in snapshots that the respective pipeline elements sync on for their ray tracing/shading compute. Something AMD is a lot more suited for than the monolithic-die trash Nvidia is pushing, much like Intel.

My understanding is that they keep the BVH in an upper tier of cache shared among different parts of the pipeline, like in the pic. This is why they beef up the L1/L2 and higher cache stores? Because they're doing a lot more time-sensitive sharing/syncing among different portions of the pipeline in order to couple the output of the ray trace and tensor core pipelines. This is also why I think there was a demonstrated performance hit when they enabled it: dynamic pipeline syncing. Neither the tensor nor the RT cores are virtual. They are dedicated ASIC portions on the die, interfaced at a high level with the SMs and the shader pipeline.
> Why is the load/store unit split? Is one for the tensor cores and the other for the CUDA cores?
I think it was for:
> Finally, Nvidia CEO Jensen Huang mentioned the ability of Turing to do simultaneous floating-point and integer calculations, at 14 TFLOPS/TIPS each. Graphics cards have to do a lot of address generation when loading textures, and the simultaneous FP + INT calculations could provide a serious benefit to performance, even in games that don't use ray-tracing.

I have no clue how this relates to ray tracing/tensor cores, but I feel it is related. So it appears they broke the architecture out into new pipeline portions, but the syncing/coherence probably causes a hit to overall performance. It's why they keep stressing ray tracing on/off and DLSS on/off, and have delayed benchmarks. They're going to try to stress performance with and without. They tightly infused this stuff deep into their traditional GPU pipeline and it causes performance hits.
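
For anyone who hasn't met the term, here's a toy C++ sketch of what a BVH actually is: a tree of bounding boxes over the geometry, traversed so that a ray only tests primitives whose boxes it enters. The tree below is hand-built and all names are made up; real BVHs are constructed and laid out far more carefully, but this is the structure the posts above describe being parked in shared cache.

// A toy BVH: a tree of bounding boxes over the geometry, traversed so a ray
// only tests primitives whose boxes it actually enters. The tree is hand-built
// and the names are made up; real BVH builders are far more sophisticated.
#include <algorithm>
#include <cstdio>
#include <utility>

struct Box { float lo[3], hi[3]; };

// Standard slab test: does the ray orig + t*dir (t >= 0) pass through the box?
bool hitBox(const Box& b, const float orig[3], const float dir[3]) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / dir[a];
        float t0 = (b.lo[a] - orig[a]) * inv;
        float t1 = (b.hi[a] - orig[a]) * inv;
        if (inv < 0) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmax < tmin) return false;
    }
    return true;
}

struct Node {
    Box box;
    int left, right;      // child indices, or -1 for a leaf
    int primitive;        // leaf payload: which triangle/object it holds
};

// A hand-built three-node tree: a root box enclosing two leaf boxes.
Node bvh[] = {
    { {{-2, -2, 0}, {2, 2, 10}}, 1, 2, -1 },    // root
    { {{-2, -2, 0}, {0, 2, 10}}, -1, -1, 0 },   // leaf: object 0 (left half)
    { {{ 0, -2, 0}, {2, 2, 10}}, -1, -1, 1 },   // leaf: object 1 (right half)
};

void traverse(int i, const float orig[3], const float dir[3]) {
    if (!hitBox(bvh[i].box, orig, dir)) return;            // whole subtree skipped
    if (bvh[i].left < 0) {                                 // leaf reached: this is
        std::printf("test object %d\n", bvh[i].primitive); // where triangles get tested
        return;
    }
    traverse(bvh[i].left, orig, dir);
    traverse(bvh[i].right, orig, dir);
}

int main() {
    float orig[3] = {-1, 0, -1}, dir[3] = {-0.01f, 0.02f, 1};  // ray heading down +z, left side
    traverse(0, orig, dir);                                    // prints: test object 0
}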

Attached: meme_trace_power_Vr.png (620x388, 50K)

>Ray tracing
More like 1080p silky smooth 20 fps tracing

>Should I get it
Are you able to throw over $1.2k into the trash? If not, then either wait for the GTX 1180 Ti or buy two GTX 1080 Tis if you just want to use the GPUs for gaming.

It is literally confirmed that if you turn ray tracing on you will not even get 60 fps, and that's only at 1080p. And yes, this is the RTX 2080 Ti, which can't even give you 60 fps at 1080p.

The whole card is a joke. A huge price for something that is not ready for gamers. People who spend over $1k on a GPU don't play at 1080p and want much more than 60 fps.

As for the improvement in normal games, you'll have to wait for reviews. The card itself looks pretty bad, and even $800 would be too much for an RTX 2080 Ti.

I took his
>I see no reason to pay that money for something that won't be useful to me in any way except having some nice shadows, which I mainly turn off/reduce for FPS gain, and which ray tracing will dip even more.
as being what you get NOW with this early generation.
In the future, sure, ray tracing is insane.
You get depth of field, caustics, reflections, and all sorts of realistic effects for "free", physically simulated.

But you need a GPU that's at least 8x more powerful and lower latency than the 2080 Ti, so we're looking at at least four years, and then another five years before a big enough install base has it.

I keep up to date on this shit, and I still wasn't expecting the "early adoption" phase for ray tracing GPUs to come until 2020.

That's not right. DLSS has nothing to do with ray tracing. It's post processing AA.

From what I heard, the AIB cards aren't even expected to come to market for 1-3 months. So some of these preorders are 3 months in advance.

They basically said it in their reveal the other day. With graphs as well. There are clearly separated stages.
So even though the tensor cores operate at, say, 100 TFLOPS of very limited operations, you have to read and write through atomics and you have extra function calls you're waiting on. You lose the low latency of a single shader call.
That drop from 100 fps to 34 fps in Tomb Raider is likely not even the card being "stressed"; it's likely the latency of having 3-4 pipelines which must run in order instead of everything being on multipurpose cores.
In Nvidia's demo from last year on Volta, they "got around" a lot of these issues by having ray traced lighting data be many frames behind. It increased FPS, but it induced artifacts when the camera or objects move quickly.
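
A toy C++ illustration of that frames-behind trade-off: blend each frame's fresh (cheap, noisy) lighting into an accumulated history. A heavy history weight effectively raises the sample count, but the displayed value lags behind any sudden change, which is exactly the ghosting you get under fast motion. The blend factor and the step change are made-up numbers.

// A toy version of that trade-off: blend each frame's fresh (cheap, noisy)
// lighting into an accumulated history. The blend factor and the step change
// are made-up numbers, purely for illustration.
#include <cstdio>

int main() {
    float history = 0.0f;          // accumulated lighting value for one pixel
    const float keep = 0.9f;       // keep 90% history, take 10% fresh per frame

    for (int frame = 0; frame < 12; ++frame) {
        // The "true" lighting jumps from 0 to 1 at frame 4 (a light turns on,
        // or the camera swings onto a bright surface).
        float fresh = frame >= 4 ? 1.0f : 0.0f;

        history = keep * history + (1.0f - keep) * fresh;
        std::printf("frame %2d  true %.1f  displayed %.2f\n", frame, fresh, history);
    }
    // The displayed value only slowly catches up after the change; that lag is
    // the ghosting/artifact problem under fast motion.
}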

I mean they could just be lying in the slides. Maybe they are "virtual" tensor cores. I doubt it, though.

>This is not new technology, just a new way of implementing it, and it is exclusive to Nvidia.
Correct. All of the render techniques, including ray tracing, use a slew of hacks in order to be reasonably performant. The hacks change and get better with time, and sometimes are placed in hardware.
> Comments about how game developers approach development
Correct. Given the shit-tier pricing of these new cards, only a small minority will end up with them, and it won't be worth the extra development cost. That AI cloud bullshit will require a large amount of computing that Nvidia will have to charge game developers for. They're not going to do that for free, and they might even charge per game title for it in the future (which is why you never tether your hardware to someone's cloud service).
> nice shadows which I mainly turn off/reduce for FPS gain
You're not paying attention to this shit in high action gaming anyway.
It's a detail you never notice.

Until they properly implement this shit without causing a performance hit or taking up die space, I see it going nowhere. This shit belongs in a dedicated ASIC that runs completely in parallel without degrading performance.

Samefagger. I've seen this exact reply to you on half your comments.

> I think the new Quadro cards are great, but these being gaming cards is a complete scam, an attempt to rip people off to justify a price they're not remotely worth. They couldn't sell a card that's only 20-30% faster than the 1080 Ti for $1,200, so they made up some new snake-oil marketing around it to sell it to idiots.

Exactly, but didn't they go way up on Quadro pricing as well? $2,300 for an entry-level RTX Quadro that's essentially a beefed-up 1080 Ti. That said, I understand ray tracing is such a common and central part of a Quadro user's workload that the ray tracing speedups here are possibly worth the added money...

> It pissed me off when I figured out that effect was easily possible on the 5770 and would have run better on it, but it was simply locked out by proprietary bullshit.
And this is the biggest thing I hope vulkan resolves. I'm sick of the proprietary bullshit.
> Yeah, I can't really see MS or Sony going to Nvidia after their sour history with them.
Can you link to what caused this?

It's stochastic path tracing with an NN kernel denoiser. See the Brigade engine for an example, but fill in the dots from the earlier examples.

no and no

>So even though the tensor cores operate at, say, 100 TFLOPS of very limited operations, you have to read and write through atomics and you have extra function calls you're waiting on. You lose the low latency of a single shader call.
Exactly. Syncing pipelines is a motherfucker, as is cache coherency, especially if you have to do it at a subsystem level.
> That drop from 100 fps to 34 fps in Tomb Raider is likely not even the card being "stressed"; it's likely the latency of having 3-4 pipelines which must run in order instead of everything being on multipurpose cores.
Exactly. Essentially this is a beta-level architecture with crucial elements that still need to be ironed out. As such, it has an incredible amount of costly syncing and coherency.
> I mean they could just be lying in the slides.
They aren't lying. They just aren't telling the full, detailed truth. It is in fact a parallel pipeline, but it also has tons of costly sync points. I am thankful for the product launch: it has pushed me to do a great deal of research to understand GPUs, rasterization, ray tracing, etc. However, I also know computer architecture, and I know a lot of fuckery is likely present, which is why they are slim on details. Anywhere you see the words simultaneous and parallel, increased cache sizes, or meme performance names is where they needed hackery to make all this work.

Not samefag. I've seen this informed tripfag in several threads now and his posts reflect that he knows a good deal, so I'm probing him with my best-formed inquiries. I've given up on the GeForce 20 series, but I still want to understand what makes it tick and the broader feature functionality in general.

>My understanding is that they keep the BVH in an upper tier of cache shared among different parts of the pipeline, like in the pic. This is why they beef up the L1/L2 and higher cache stores
Yeah. Looks like L2, and the RTU is operating on some reserved L2 space for the scene hierarchy, frame accumulation, etc.

>Graphics cards have to do a lot of address generation when loading textures, and the simultaneous FP + INT calculations could provide a serious benefit to performance, even in games that don't use ray-tracing.
So one L1 for FP and one for INT? Makes sense I guess.
I'm actually unaware myself whether GCN can also only do FP and INT simultaneously, instead of FP and FP or INT and INT.

But it's interesting that tensor cores aren't shown anywhere here.

>but the syncing/coherence probably causes a hit to overall performance.
Yep, this is my guess. There is a delay between the pipelines, and having so many different pipelines means that without breaking things up asynchronously (which introduces artifacts like in their demo a year ago), you get terribly slow frame times. Not exactly stressing, but there are just so many compounding levels of latency.
You'd have to google how long you can expect to wait for atomics to become visible between two separate programs on a GPU. I haven't looked into this in a while because it was so slow and inconsistent across GPUs that I gave up on it.
If they somehow made their USC... It's notable that AMD is going the other way, with multipurpose cores that do machine learning ops in your typical unified shaders.
I'm actually getting even more hopeful for AMD, if this is the best Nvidia could do to improve on Volta after a year.

Eh, I think for actual movie-production ray tracing the $10,000 Quadro RTX 8000 is pretty justifiable.
Though as far as I'm aware, companies like Pixar were still doing software rendering on CPUs to get absolutely accurate and perfect renders.

Thank you for the thought-out and full replies, anon. Screencapping a number of these as pointers for follow-on research. I'm not buying GeForce 20 because it's an immature architecture sold at a ridiculous premium, but I am happy Nvidia took the risk and put it out there. I look forward to full details about the Turing microarchitecture, how these new cores work, and how they were integrated. I hope they don't stay tight-lipped. The benchmarks are going to be a shit show, it seems, which is why they're delaying the NDA lift so long. You'd probably need a week-long course to properly discuss this new architecture. I was excited about it for compute, but these prices are ridiculous. I'm better off coming up with some new hacky method of achieving the same thing on traditional CUDA cores. I could buy two or three Pascal cards for this price.

>ITT: Brainlets who don't even know how ray tracing works compared to rendering methods employed in games for the past 15 years
It allows for some nice optical effects that you'd otherwise have to cheat a lot to achieve. It's a pretty cool concept allowing for more detailed, more accurately rendered scenes.

The average /v/oddler won't care because they're all ADHD kids who won't notice the difference when half the screen is taken up by HUDs, hit markers, social media integration and twitch live stream face cams or whatever the fuck else they are wasting their time with.

The problem is, it doesn't matter that rasterization is cheating when it looks fucking good.
The Xbox One X and PS4 games shown at E3 this year have far more realistic and better graphics than Nvidia's demos on their $1,200 GPU.

I look forward to real-time ray tracing becoming the norm. Programming for it is far, far easier in most cases; it's so much simpler. But it requires far more powerful hardware than this. I probably won't start serious ray tracing programming for another 3-5 years, and probably won't release something using it for another 5-8 years after that.

>when it looks fucking good.
But it doesn't, unless your scene is very simplistic or you employ a number of additional processing steps to mimic certain lighting effects such as ambient occlusion, extra scene rendering when mirrors or translucent objects appear, shadow mapping, light mapping etc.
>E3
>games
Thanks for supporting my hypothesis of the /v/oddler.
>Programming for it is far far easier for most cases.
This is very true.

>Is Ray tracing a meme?
No.
>Should I get any of the new cards if I don't care about it?
No.
But if you're gullible enough to blindly follow the advice on an anonymous forum, then sure, go ahead.

How long would it take for this tech to become actually viable?
Pessimistic: a decade. Optimistic: half a decade.
This card is akin to the Voodoo, in a sense.

They are using the ray tracing tech only on shadows and literally nothing else,
and even then the performance tanks to below 40 fps.
It's been a meme for 30+ years now and it will remain a meme.

As much a meme as PhysX is.
10 years later and enemies still fly when hit by a 9mm bullet and limbs still act as if bodies were rag-dolls.

Should be faster than the unified shader model timeline, I think.

It should be viable by, say, the 3000 or, more likely, the 4000 series of RTX cards. It depends whether there's a new release every year again (Fermi -> Kepler -> Maxwell -> Pascal averaged a bit over a year, didn't it?) or whether we get 30-month cycles like this past generation.
We know the 3000 series is almost surely coming next year, so at least it's not a 30-month wait for this cycle.
So it's at least years before you have a new top-end card that might be decent enough at this, and then more years still until there are $200 cards equal to that top-end card, so that more people have it.
And at that point, the 2080 Ti is going to be obsolete.
And even then we're talking light ray tracing effects. For fully real-time, real ray tracing without much rasterization, at least 10 years.

I don't really see a time where we can actually get rid of rasterization, is the sad thing.
Ray tracing offers a lot of benefits, like doing depth of field, caustics, etc., for "free". But when you have hard polygon edges that you're denoising to get your outlines for what's what, you don't get that benefit... The more I think about that, the more I question the point of all this.

I hope someone writes a good article on the drawbacks of hybrid rendering and how much of the benefits of ray tracing we're losing out on here for such an absurd amount of effort. It's hard for me to articulate them myself and pictures would help others understand.
We really are limited to "better lights and shadows" here, which is not at all what is great about ray tracing. And better lights and shadows can be done through rasterization fine... lmao.

Partial ray tracing is a waste of time and effort. Nobody wants to pay that much extra just for slightly improved reflections; unless you can use ray tracing for everything, what's the point?

The ones on Newegg aren't Founders Editions, you fucking retard. Not to mention the hilarity of calling someone underage for "not knowing" about the FE tax, when that was only introduced two years ago with Pascal (and was bullshit then too, because third party cards were all as or more expensive than it anyway). Seems like you're the underage one, friend.

>$1200 for a video card
Jesus Christ. I'm glad I got over this gaming fad years ago.

I pretty much agree with you, sadly.

I'm torn and all. As much as I don't like Nvidia as a company, they have some amazing engineers and this is, well, a step.
But this step sucks.

I'm much more interested in the machine learning geared operations than this ray tracing crap.
I hope AMD's machine learning shit delivers.

Except for some salvaged parts, that's more than I spent on my entire current computer, and I can still pretty much run the games that are being shat out these days.

It's not a meme but you shouldn't ever get anything first gen.

In the future it will even simplify the code. Realistic shadows, illumination, reflection, refraction and all the other optical effects are basically free with ray tracing. But it's still prohibitively expensive, and their hybrid approach is a clusterfuck. I'd wait for proper dedicated real-time pure ray tracing cards. RTX is going to be a meme for at least a decade.

No NVLink for the RTX 2070. Kek.
youtube.com/watch?v=QoePGSmFkBg

>PS4 Pro/Xbox One S

Why do you hate perfectly good money?
That same $1,200 plus the price of other components (still waiting for you to put a price tag on those) gets you a good gaming PC and a decent 1080p monitor, with a few credits to spare.

Or were you just shilling for consoles?

>AMD has, from my knowledge, dibs on supplying the APUs for the new consoles, which means the consoles won't get it.
AMD has its own way of doing it via async compute, and the same goes for TF workloads; AMD simply doesn't need dedicated hardware so far, but they will get some in Navi.
The problem is that devs will have an increasing amount of work to do now, because it's getting a lot harder to render for both AMD and Nvidia, and I'm not quite sure how much money Nvidia can spend to completely displace async and reimplement it their own way... hence why Unity provided an extension for Nvidia, but it has the AMD approach as its backbone.

For now, at these prices, ray tracing is as much of a meme as VR, even more so. Wait one more generation for devs to actually figure it out and optimize it, and for prices to come down. Hopefully AMD can release something competitive by then so it won't be an Nvidia monopoly all over again.