Sunday, January 30, 2022

What is direct lighting (next event estimation) in a ray tracer? Part 1 of 3

Part 2 of 3

Part 3 of 3

In Ray Tracing in One Weekend, we took a very brute force and, we hope, intuitive approach that didn't use "shadow rays", or as some of the whipper-snappers say, "next event estimation".  Instead we scattered rays and they got "direct light" (light that is emitted by a radiant object, hits one surface, and is scattered to the camera) implicitly by having a ray randomly scatter and happen to hit the light source.

Direct light with shadow rays can make some scenes much more efficient, and 99%+ of production ray tracers use shadow rays.  Let's look at a simple scene where the only things in the world are two spheres: one diffusely reflective with reflectivity R, and one that absorbs all incoming light but emits light uniformly in all directions, so it looks white.  If we render just the light source, let's assume its color is RGB = (2,2,2).  (For now, assume RGB = (1,1,1) is white on the screen, and anything above (1,1,1) "burns out" to also be white-- this is a tone mapping issue, which is its own field of study!  But we will just truncate above 1 for now, which is the best tone mapping technique when measured by quality over effort :) )
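
In code that truncation is a one-liner.  Here is a minimal sketch with a hypothetical color struct (not the one from the book):

#include <algorithm>

struct color { double r, g, b; };   // hypothetical RGB triple; channels can exceed 1

// The "best tone mapping by quality over effort": truncate each channel at 1.
color truncate_to_display(const color& c) {
    return { std::min(c.r, 1.0), std::min(c.g, 1.0), std::min(c.b, 1.0) };
}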

To color a pixel in the "brute force path tracer" paradigm, the color of a point on an object is:

color = reflectivity * weighted_average(color coming into surface)

The weighted average says not all directions are created equal-- some influence the surface color more than others.  We can approximate that weighted average above by taking some random rays, weighting them, and averaging them:


For the six rays above, the weighted average is


color = reflectivity*(w1*(0,0,0) + w2*(0,0,0) + w3*(2,2,2) + w4*(2,2,2) + w5*(0,0,0) + w6*(0,0,0)) / (w1+w2+w3+w4+w5+w6)

So, with equal weights, that is reflectivity*(2/3, 2/3, 2/3), a very light color.  If an extra ray had happened to hit the light it would be reflectivity*(1,1,1), and if one more had missed, reflectivity*(1/3, 1/3, 1/3).
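
Here is what that computation looks like as code-- a minimal sketch, not code from the book, where each sample is a (weight, incoming color) pair that your tracer would fill in:

#include <utility>
#include <vector>

struct color { double r, g, b; };   // hypothetical RGB triple

// Weighted average of the incoming colors, attenuated by reflectivity.
// samples holds one (weight, incoming color) pair per scattered ray.
color shade_brute_force(const color& reflectivity,
                        const std::vector<std::pair<double, color>>& samples) {
    color sum{0, 0, 0};
    double weight_sum = 0;
    for (const auto& s : samples) {
        sum.r += s.first * s.second.r;
        sum.g += s.first * s.second.g;
        sum.b += s.first * s.second.b;
        weight_sum += s.first;
    }
    // color = reflectivity * weighted_average(color coming into surface)
    return { reflectivity.r * sum.r / weight_sum,
             reflectivity.g * sum.g / weight_sum,
             reflectivity.b * sum.b / weight_sum };
}

With all six weights set to 1, two (2,2,2) samples and four (0,0,0) samples give exactly the reflectivity*(2/3, 2/3, 2/3) above.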

So we see why there is noise in ray traced images, but also why the bottom of the sphere is black-- no scattered rays from there can hit the light.

A key property of this sort of Monte Carlo is that any distribution of rays can be used, and any weighting function can be used, and the picture will usually be "reasonable".  Surfaces attenuate light (with reflectivity-- their "color") and they preferentially weight some directions over others.

Commonly, we will just make all the weights one, and make the ray directions "appropriate" for the surface type (for a mirror, for example, only one direction is chosen), as sketched below.
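
A sketch of the mirror case, with a hand-rolled vec3 rather than the book's:

struct vec3 { double x, y, z; };   // hypothetical 3D vector

// For a mirror only one direction matters: send a single weight-1 ray in the
// reflected direction.  v is the incoming ray direction, n is the unit normal.
vec3 mirror_direction(const vec3& v, const vec3& n) {
    double d = v.x*n.x + v.y*n.y + v.z*n.z;                    // v dot n
    return { v.x - 2*d*n.x, v.y - 2*d*n.y, v.z - 2*d*n.z };    // v - 2(v.n)n
}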

This scatter-and-hope approach is a perfectly reasonable way to compute shading from a light.   However, it gets very noisy for a small light (or, more precisely, a light that subtends a small angle, like our biggest light, the Sun).

A completely different way to compute direct lighting is to view all those "missed" rays as wasted, and to only send rays that don't yield a zero contribution.  But to do that, we need to somehow figure out the weighting so that we get the same answer.

What we do instead is just send rays toward the light source, and count the ones that hit (if a ray hits some other object, like a bird between the shaded sphere and the light, it's a zero).
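
A sketch of that counting (the occlusion query "blocked" is a placeholder for whatever hit test your tracer already has, not a function from the book):

#include <functional>
#include <vector>

struct vec3 { double x, y, z; };   // hypothetical 3D vector

// Fraction of shadow rays from point p that actually reach the light.
// light_points are sampled points on the light; blocked(p, q) is a stand-in
// for the scene's occlusion test (true if anything sits between p and q).
double visible_fraction(const vec3& p,
                        const std::vector<vec3>& light_points,
                        const std::function<bool(const vec3&, const vec3&)>& blocked) {
    int hits = 0;
    for (const vec3& q : light_points)
        if (!blocked(p, q)) ++hits;    // a bird in the way makes this sample zero
    return double(hits) / double(light_points.size());
}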

So the rays are all random, but also only in directions toward the light.  Now the magic formula is

color = MAGICCONSTANT*reflectivity*(w1*(0,0,0) + w2*(0,0,0) + w3*(2,2,2) + w4*(2,2,2) + w5*(0,0,0) + w6*(0,0,0)) / (w1+w2+w3+w4+w5+w6)

 What is different is:

1. What directions are chosen?

2. What are the weights?

3. What is the MAGICCONSTANT?  (It will depend on the geometry for that specific point-- for example, it will have some distance-squared attenuation in it.)

The place where we have freedom is 1: choose some way to sample directions toward the sphere.  For example, pick random points on the light sphere and send rays to them.  Then 2 and 3 are math.  The bad news is that the math is "advanced calculus" and looks intimidating because of the formulas, and because 3D actually is confusing.  The good news is that the advanced calc can be done methodically.  More on that in a future post.
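
For instance, here is a sketch of that "pick random points on the sphere" idea, for a light sphere with center c and radius r (the weights and MAGICCONSTANT that go with this choice are exactly the calculus deferred to the later posts):

#include <cmath>
#include <random>

struct vec3 { double x, y, z; };   // hypothetical 3D vector

// Uniformly random point on the light sphere: a random unit vector (three
// Gaussians, normalized) scaled by the radius and offset by the center.
vec3 random_point_on_light(const vec3& c, double r) {
    static std::mt19937 rng{std::random_device{}()};
    std::normal_distribution<double> gauss(0.0, 1.0);
    vec3 d{gauss(rng), gauss(rng), gauss(rng)};
    double len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    return { c.x + r*d.x/len, c.y + r*d.y/len, c.z + r*d.z/len };
}

Each shadow ray then goes from the shaded point toward one of those points.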

But here's a great cheat!  Just use some common sense for guesses on 2 and 3 (checking the quality of your guesses against the more brute force technique) and your pictures will look pretty reasonable!  That is the beauty of weighted averages.

2 comments:

Unknown said...

Informative post!
Looking forward to more.
The reflectivity is the albedo/color of the surface, I take it?

Phil Dutré said...

"...or as some on the whipper-snappers say "next event estimation""

IIRC, the original "next event estimation" name comes from the nuclear engineering literature of the 60s - neutron transport and all that - that also delved into Monte Carlo radiative transport.

Not sure who introduced this into graphics, but I have a vague recollection it was somewhere during the early 90s, so rather early in the whole MC rendering story.