Why trace rays (points) to fill up an area, instead of tracing polygons (areas)?

the number of points you'll need grows with the number of pixels you want to render, while the polygon count stays constant.

Attached: coherence-1.png (1905x1080, 2.79M)

Want to know the answer too, have a bump

You want every pixel to be correct, and since polygons are often larger than one pixel, you would get a lot of artifacting: the shading would reveal the mesh structure. It would look like the bad self-shadowing of older games.

i guess it's also less suitable for curved geometry.

would still be interesting to compare the performance of ray vs polygon.

this is already done. it's called beam tracing.

Raytracing allows for physically based rendering, i.e. photorealism. Polygons are fundamentally limited.

/thread

No you wouldn't. You think polygons are entirely 2d or something?

Your computer renders using bitmaps... and besides, raytracing already does trace geometry. That's what the intersection testing is for: trace the geometry and map the projection to your buffer. Not sure what else you could be suggesting.
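
here's roughly what that mapping looks like, a minimal sketch of generating the primary ray through a pixel (Vec3, the -z camera convention and the function name are all just made up for the example):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Direction of the primary ray through pixel (px, py) of a
// width x height image, pinhole camera looking down -z with a
// vertical field of view fovY (radians).
Vec3 primaryRayDir(int px, int py, int width, int height, float fovY) {
    float aspect  = float(width) / float(height);
    float tanHalf = std::tan(fovY * 0.5f);
    // Map the pixel center into [-1, 1] screen space.
    float sx = (2.0f * (px + 0.5f) / width - 1.0f) * aspect * tanHalf;
    float sy = (1.0f - 2.0f * (py + 0.5f) / height) * tanHalf;
    Vec3 d { sx, sy, -1.0f };
    float len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    return { d.x/len, d.y/len, d.z/len };   // normalized direction
}
```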

Isn't this obvious? Or am I missing something here? A whole polygon's surface does not have the same lighting. That's why you need to calculate lighting for every point on the surface, not just once for the surface itself.

Because rays more accurately represent the way light works.

Alright, but games today literally have a billion polygons on screen at the same time.
I mean, there is space here to experiment.

Which is incorrect from a physics standpoint.
You know, because light is a wave.

oh christ this fucking thread is really attracting all the brainlets, huh

no they don't, you're confusing PBR rendering with actual polycount

and there isn't really any space here to experiment; what you are suggesting is called vertex lighting, and it looks like ass unless you are specifically going for the mid-'90s 3D game look. memetracing could not make it any better.

If I had to guess, besides the fact that the picture in the OP looks like shit, it's probably a lot more computationally expensive. Analytically projecting a polygon onto other polygons is much harder than just intersecting a bunch of lines. Ray tracing is also easy to just throw more parallelism at.
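
e.g. the whole render loop is embarrassingly parallel, something like this sketch (tracePixel() is a hypothetical stand-in for the actual tracing):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for a full per-pixel trace (cast ray, shade, pack RGBA).
uint32_t tracePixel(int x, int y) { return uint32_t(x ^ y); /* placeholder */ }

std::vector<uint32_t> render(int width, int height) {
    std::vector<uint32_t> framebuffer(size_t(width) * height);
    // Pixels share no state, so the loop parallelizes trivially.
    #pragma omp parallel for schedule(dynamic)
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            framebuffer[size_t(y) * width + x] = tracePixel(x, y);
    return framebuffer;
}
```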

yeah I gave a (you) to one of them

it scales better
OP's method would look okay if you could render so many polygons that each one is roughly the size of a pixel, but it would look awful in any game running on existing hardware.

you can interpolate inside the polygon when rasterizing it.
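
e.g. a rough sketch of that interpolation with barycentric weights (illustrative types, Gouraud-style):

```cpp
// Per-vertex colors (e.g. from vertex lighting) blended across the
// triangle with barycentric weights w0 + w1 + w2 == 1.
struct Color { float r, g, b; };

Color interpolate(Color c0, Color c1, Color c2, float w0, float w1, float w2) {
    return { w0*c0.r + w1*c1.r + w2*c2.r,
             w0*c0.g + w1*c1.g + w2*c2.g,
             w0*c0.b + w1*c1.b + w2*c2.b };
}
```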

Sure, but I assume you'd ultimately get a less accurate result, and performance would still be worse than the basic solution. Then again, I don't know how many polygons a high-end game has in a scene or how such a solution would perform.

>polygon per pixel
you're missing the point, that's exactly what it tries to avoid

>You think polygons are entirely 2d or something?
Polygons are by definition entirely two dimensional.

I think you are full of shit but w/e

Brainlet.
Raytracing tries to imitate how light works; it's the best we can do.
Rasterisation only tries to make the end result look somewhat like the real thing.

but raytracers cast millions of rays to approximately fill an area. instead you could just cast a single rectangle and let it split up when it hits stuff.

from a mathematical viewpoint there's no difference, both will sample the same data (except the rays will be noisy, because mathematically you'd need infinitely many points to fill an area, while in practice past some sample count there's no visible difference).

a polygon-to-polygon intersection test is orders of magnitude more complex than just casting a ray at a triangle
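
for scale, the whole ray-triangle test is this much code (standard Möller-Trumbore; the Vec3 helpers are just for the example):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and the hit distance t if ray (orig, dir) hits triangle v0v1v2.
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t) {
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;      // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;                   // first barycentric coord
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;                 // second barycentric coord
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > eps;                              // hit in front of the ray
}
```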

Okay retard, then how do you propose to accurately model things like diffuse shadows? You'd need to cast thousands of those rectangles with some random angles added, otherwise it looks like the picture in the OP.
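
i.e. something like this sketch: average a bunch of jittered shadow rays toward random points on an area light (occluded() is a hypothetical stand-in for a shadow-ray test):

```cpp
#include <random>

struct Vec3 { float x, y, z; };

// Hypothetical stand-in: shadow ray from 'from' toward 'to', true if blocked.
bool occluded(Vec3 from, Vec3 to) { (void)from; (void)to; return false; /* placeholder */ }

// Visible fraction of a parallelogram area light (corner + two edges)
// as seen from point p; 0 = fully shadowed, 1 = fully lit. Averaging
// many jittered shadow rays is what produces the soft penumbra.
float softShadow(Vec3 p, Vec3 corner, Vec3 edgeU, Vec3 edgeV, int samples) {
    std::mt19937 rng(1234);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    int visible = 0;
    for (int i = 0; i < samples; ++i) {
        float u = uni(rng), v = uni(rng);   // random point on the light
        Vec3 q { corner.x + u*edgeU.x + v*edgeV.x,
                 corner.y + u*edgeU.y + v*edgeV.y,
                 corner.z + u*edgeU.z + v*edgeV.z };
        if (!occluded(p, q)) ++visible;
    }
    return float(visible) / float(samples);
}
```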

Serious ray tracing software actually casts cones. Also, they use path tracing.

I'm not talking about some naive implementation like Whitted's algorithm, but about professional solutions like pic related.

Attached: lrg.jpg (500x613, 397K)

Alright, I think I get what you mean now.
You don't cast complex shapes because in the end you still have to calculate discrete pixels. There is no point in calculating it algebraically, because you are not interested in the final light function, only in samples (pixels). Also, it's practically impossible to solve a non-trivial scene this way even if you only use rays. That's why you use the Monte Carlo method to approximate reflections.
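
the Monte Carlo part really is just this (radianceSample() is a hypothetical one-random-path estimate):

```cpp
#include <random>

// Hypothetical stand-in: radiance estimate from one random light path.
float radianceSample(float u1, float u2) { return u1 * u2; /* placeholder */ }

// Monte Carlo in one breath: a pixel's value is an integral you can't
// solve in closed form, so average N random samples of it; the noise
// falls off as 1/sqrt(N).
float pixelValue(int spp) {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float sum = 0.0f;
    for (int i = 0; i < spp; ++i)
        sum += radianceSample(uni(rng), uni(rng));
    return sum / float(spp);
}
```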

Still, if you are really interested in the subject, I recommend the book I posted above. It covers all of these topics and can answer your questions. You can just read the high-level/physics overview.

Inb4 2B's ass has more polygons than the whole super mario 64 game

Unless you're bouncing off a perfectly smooth reflective surface the light will scatter.

This is quite easy to do with rays because you just make multiple rays bouncing in different directions.
With beams ("polygons") I have no idea how to do it but it's probably quite complicated.

But you also need to calculate orders of magnitude more rays, no?

I'm /ic/ with some basic programming skills:

Does Jow Forums actually know how lighting works, or the mathematical formulas needed to render objects, or is it all something else, or just using a ready-made library from a big studio?

If programmers have to program lighting from scratch (and know light theory), then why do they never do art or even photography?

State of the art in production AAA path tracing rendering:
jo.dreggn.org/path-tracing-in-production/2018/index.html

At first raytracing was just "oh dude, throw rays", until the rendering equation came along; years later Monte Carlo methods (borrowed from nuclear simulations) were used to solve it [graphics.stanford.edu/papers/veach_thesis/]
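
For reference, the rendering equation everyone keeps mentioning (Kajiya's formulation), written as LaTeX since it's math, not code:

```latex
% Outgoing radiance = emitted + reflected: the reflected part integrates
% incoming radiance over the hemisphere, weighted by the BRDF f_r and
% the cosine foreshortening term.
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Path tracing estimates that integral with Monte Carlo: average f_r * L_i * (w_i . n) / pdf(w_i) over randomly sampled directions w_i.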

Here's a book close to what the big render engines do:
pbr-book.org/

But computer graphics currently ignores deeper foundations of light transport than the rendering equation.

This PhD thesis tried to find a more rigorous foundation for light transport rendering using math and physics, but nobody cares about it today:

isgwww.cs.uni-magdeburg.de/graphics/projects/dissertation/lessig_dissertation.pdf

Attached: 4E9E9954-10A1-42B8-8502-50375E856180.jpg (749x410, 41K)

So it's applying formulas from physics, right?
I'm glad I'm not a programmer because I can't understand any of the formulas, but judging from the pictures and descriptions it's more or less the light theory I already knew and try to apply "by intuition" while drawing. Graphics and lighting are hard.

>Serious ray tracing software actually casts cones.

Nope. They cast differential rays.
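
i.e. roughly this (a simplified sketch; field names loosely follow the RayDifferential idea from the PBR book):

```cpp
// A ray plus two offset rays through the neighboring pixels; at a hit,
// the spread between them tells you the pixel's footprint on the
// surface (useful for texture filtering, among other things).
struct Vec3 { float x, y, z; };
struct Ray  { Vec3 o, d; };          // origin, direction

struct RayDifferential {
    Ray main;    // the ray actually being traced
    Ray rx, ry;  // rays offset by one pixel in x and in y
};
```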

>instead of tracing polygons
because it's $current_Year and not the 90s

Attached: renderex1.gif (416x155, 20K)

So a vertex shader?

>$current_Year
got me to reply.

Well, right. I mean you just sample rays from cones, IIRC.

That's just bounding volume intersection testing. You're literally describing a very common implementation of raytracing at a very high level, unless you can be more specific.

Ok wait, maybe I understand you now. That's basically voxel cone tracing. VXGI uses it.

ok retard, imagine u have ur points from the raycasts, now connect the dots and interpolate the values between them.

seriously, if u need such simple things explained, dont bother. im not going to draw it for you
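
fwiw, "connect the dots" in code is just this kind of thing: trace every Nth pixel and bilinearly interpolate the rest (a sketch, assuming a scalar brightness buffer). it also shows the problem: it blurs exactly the edges and shadows you traced rays to get.

```cpp
#include <cstddef>
#include <vector>

// Upscale a sparse sw x sh grid of traced samples to a full-resolution
// (sw*factor) x (sh*factor) image by bilinear interpolation.
std::vector<float> upsample(const std::vector<float>& sparse,
                            int sw, int sh, int factor) {
    int w = sw * factor, h = sh * factor;
    std::vector<float> full(size_t(w) * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float fx = float(x) / factor, fy = float(y) / factor;
            int x0 = int(fx), y0 = int(fy);
            int x1 = x0 + 1 < sw ? x0 + 1 : x0;   // clamp at the border
            int y1 = y0 + 1 < sh ? y0 + 1 : y0;
            float tx = fx - x0, ty = fy - y0;
            float a = sparse[size_t(y0)*sw + x0], b = sparse[size_t(y0)*sw + x1];
            float c = sparse[size_t(y1)*sw + x0], d = sparse[size_t(y1)*sw + x1];
            full[size_t(y)*w + x] = (a*(1-tx) + b*tx)*(1-ty)
                                  + (c*(1-tx) + d*tx)*ty;
        }
    return full;
}
```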

perhaps use spheres

as i see it, the more complex the scene's geometry, the more splitting will happen and the worse the performance. ray casting can be seen as an "already split" version of sampling.

>the number of points you'll need grows with the number of pixels you want to render, while the polygon count stays constant.
Wat. Are you just pretending to be retarded? The polygons are rasterized to pixels, the number of which grows with resolution.