Saturday, October 20, 2018

Flavors of Sampling in Ray Tracing

Most "ray tracers" these days are what we used to call "path tracers" or "Monte Carlo ray tracers".   Because they all have about the same core architecture, this short hand is taking over.

These ray tracers have variants but a big one is whether the ray tracing is:
  • Batch: run your ray tracer in the background until done (e.g., movie renderers)
  • Realtime: run what you can in 30ms or some other frame time (e.g. game renderers go RTX!)
  • Progressive: just keep updating the image to make it less noisy as you trace more rays (e.g., a lighting program used by an artist in a studio or engineering firm)
For batch rendering, let's pretend for now that we are only getting noise from antialiasing the pixels, so we have a fundamentally 2D program.   This typically has something like the following for each pixel (i,j):

pixel_color = (0,0,0)
for s = 0 to N-1
       u =  random_from_0_to_1()
       v =  random_from_0_to_1()
       pixel_color += sample_color(i+u, j+v)
pixel_color /= N

That "sample_color()" does whatever it does to sample a font or whatever.    The first thing that bites you is the diminishing return associated with Monte Carlo methods:   error = constant / sqrt(num_samples).     So to halve the error we need 4X the samples. 

Don Mitchell has a nice paper that explains that if you take "stratified" samples, you get better error rates for most functions.    In the case of 2D with edges, the error = constant / num_samples.   That is LOADS better.   In graphics, a common way to stratify samples is called "jittering", where you usually take a perfect square number of samples in each pixel: N = n*n.   This yields the code:

pixel_color = (0,0,0)
for s = 0 to n-1
       for t = 0 to n-1
           u =  (s + random_from_0_to_1()) / n
           v =  (t + random_from_0_to_1()) / n
           pixel_color += sample_color(i+u, j+v)
pixel_color /= (n*n)

Visually the difference in the pattern of the samples is more obvious.   There is a very nice blog post on the details of this sampling here, and it includes this nice figure from Suffern's book:


 It turns out that you can replace the random samples not only with jittered samples, but with a regular grid, and you will still converge to the right answer.   But better still, you can use quasi-random samples, which makes this a quasi-Monte Carlo method (QMC).   The code is largely the same!   Just replace the (u,v) part above.    The theory that justifies it is a bit different, but the key thing is the points need to be "uniform" and not tend to clump anywhere.   These QMC methods have been investigated in detail by my friend and coworker Alex Keller.   The simplest QMC methods are, like jittering, best accomplished if you know N (but N doesn't need to be restricted to a perfect square).   A famous and good one is the Hammersley point set.   Here's a picture from Wolfram:
It's regular and uniform, but not too regular.
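
If you want to play with it, here's a minimal sketch in C++ (my own throwaway code, not from any particular library): the first coordinate is i/N and the second is the base-2 radical inverse of i.

#include <utility>
#include <vector>

// base-2 radical inverse (van der Corput): reflect the bits of i about
// the binary point, so 1 -> 0.5, 2 -> 0.25, 3 -> 0.75, ...
double radical_inverse_base2(unsigned int i) {
    double inv = 0.0, f = 0.5;
    while (i) {
        if (i & 1) inv += f;
        f *= 0.5;
        i >>= 1;
    }
    return inv;
}

// the N-point Hammersley set: evenly spaced in u, bit-reversed in v
std::vector<std::pair<double,double>> hammersley(int N) {
    std::vector<std::pair<double,double>> pts(N);
    for (int i = 0; i < N; i++)
        pts[i] = { double(i)/N, radical_inverse_base2(i) };
    return pts;
}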

In 2D, these sampling patterns (jittering and QMC) are way, way better than pure random.   However, there is no free lunch.   A general path tracer faces more than a 2D problem.   The program might look like this:

For every pixel
    for every sample
          pick u,v for screen
          pick a time t
          pick an a,b on lens
          if ray hits something
                    pick a u',v' for light sampling
                    pick a u",v" for BRDF sampling
                    recurse

If you track your random number generation, you'll find that three diffuse bounces with shadow rays use nine random numbers.    You could think of a ray path through the scene as something that gets generated by a function:

ray_path = get_ray_path(u1, u2, u3, u4, u5, u6, u7, u8, u9)

So you sample a nine-dimensional hypercube randomly and map those nine-dimensional points to ray paths.   Really this happens in some procedural recursive process, but abstractly it's a mapping.   This means we run into the CURSE OF DIMENSIONALITY.   Once the integral is high-dimensional, if the integrand is complex, STRATIFIED SAMPLING DOESN'T HELP.    However, you will notice (almost?) all serious production ray tracers do add stratified sampling.   Why?

The reason is that for many pixels, the integrand is mostly constant except for two of the dimensions.   For example, consider a pixel that is:

  • In focus (so lens sample doesn't matter)
  • Not moving (so time sample doesn't matter)
  • Doesn't have an edge (so pixel sample doesn't matter)
  • Is fully in (or out of) shadow (so light sample doesn't matter)
What does matter is the diffuse bounce.   So it acts like a 2D problem.   So we need the BRDF samples, let's call them u7 and u8, to be well stratified. 

QMC methods here typically automatically ensure that various projections of the high-dimensional samples are themselves well stratified.    Monte Carlo methods, on the other hand, typically try to get this property for some projections, namely the 2 pixel dimensions, the 2 light dimensions, the 2 lens dimensions, etc.   They typically do this as follows for the 4D case (a code sketch follows the list):

  1. Create N "good" light samples on [0,1)^2 
  2. Create N "good" pixel samples on [0,1)^2
  3. Create a permutation of the integers 0, ..., N-1
  4. Create a 4D pattern using the permutation, where the ith sample is (light1[i], light2[i], pixel1[permute[i]], pixel2[permute[i]]).
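
Here's a minimal sketch of that recipe in C++ (the names and helpers are mine; I'm using std::shuffle for the permutation):

#include <algorithm>
#include <array>
#include <numeric>
#include <random>
#include <vector>

struct vec2 { double x, y; };

// steps 1 and 2: N = n*n jittered ("good") samples on [0,1)^2
std::vector<vec2> jittered(int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> r01(0.0, 1.0);
    std::vector<vec2> p;
    for (int s = 0; s < n; s++)
        for (int t = 0; t < n; t++)
            p.push_back({ (s + r01(rng)) / n, (t + r01(rng)) / n });
    return p;
}

// steps 3 and 4: pair the light set with a shuffled pixel set
std::vector<std::array<double,4>> pad_to_4d(int n, std::mt19937& rng) {
    std::vector<vec2> light = jittered(n, rng);
    std::vector<vec2> pixel = jittered(n, rng);
    std::vector<int> permute(light.size());
    std::iota(permute.begin(), permute.end(), 0);
    std::shuffle(permute.begin(), permute.end(), rng);
    std::vector<std::array<double,4>> samples(light.size());
    for (int i = 0; i < (int)light.size(); i++)
        samples[i] = { light[i].x, light[i].y,
                       pixel[permute[i]].x, pixel[permute[i]].y };
    return samples;
}

The shuffle is what decouples the two 2D patterns, so the good light stratification doesn't correlate with the good pixel stratification.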
So, a pain :)   There are bunches of papers on doing Monte Carlo, QMC, or "blue noise" uniform random point sets.   But we are not quite done yet.

First, we don't know how many dimensions we have!   The program recurses and terminates by hitting a dark surface, exiting the scene, or Russian roulette.    Most programs degenerate to pure random after a few bounces so that we can sort of know the dimensionality.

Second, we might want a progressive preview in the renderer, where the image gets better as you wait.   Here's a nice example.

So you don't know what N is in advance!   You want to be able to add samples potentially forever.  This is easy if the samples are purely random, but not so obvious if you are doing QMC or stratified sampling.   The default QMC answer is to use Halton points.    These are designed to be progressive!     Alex Keller at NVIDIA and his collaborators have found even better ways to do this with QMC.   Per Christensen, Andrew Kensler, and Charlie Kilpatrick at Pixar have a new Monte Carlo sampling method that is making waves in the movie industry.   I have never implemented these and would love to hear your experiences if you do (or have!)
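
For the curious, here's a minimal sketch of Halton points (my code; it generalizes the base-2 radical inverse above to any base). The ith sample uses a different prime base per dimension, and unlike Hammersley it never needs to know N:

// radical inverse in an arbitrary base; base 2 matches the function above
double radical_inverse(unsigned int i, unsigned int base) {
    double inv = 0.0, f = 1.0 / base;
    while (i) {
        inv += f * (i % base);
        f /= base;
        i /= base;
    }
    return inv;
}

// the ith 2D Halton sample, for i = 0, 1, 2, ... forever:
//   u = radical_inverse(i, 2);
//   v = radical_inverse(i, 3);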

           

Monday, June 4, 2018

Sticking a thin lens in a ray tracer

I need to stick an "ideal thin lens" in my ray tracer.   Rather than a real lens with some material, coating, and shape, it's an idealized version like we get in Physics 1.

The thin lens has some basic properties that have some redundancy but they are the ones I remember and deploy when needed:
  1. a ray through the lens center is not bent
  2. all rays through a point that then pass through the lens converge at some other point
  3. a ray through the focal point (a point on the optical axis at distance f from the lens, where f is the focal length) will be parallel to the optical axis after being bent by the lens
  4. all rays from a point a distance A from the lens will converge at a distance B on the other side of the lens and obey the thin lens law: 1/A + 1/B = 1/f
 Here's a sketch of those rules:

[figure: sketch of the four thin-lens rules]

So if I have a (purple) ray with origin a that hits the lens at point m, how does it bend?   It bends towards point b no matter what the ray direction is.   So the new ray is:

p(t) = m + t (b-m)

 So what is point b?

It is in the direction of point c from a, but extended by some distance.  Since we know the center of the lens c, we can use the ratios of the segment lengths to extend to it:

b = a + (c-a)  (B+A) / A?

So what is (B+A) /A? 

We know 1/A + 1/B = 1/f, so B = 1/(1/f-1/A).   So the point b is:

b = a + (c-a) (1/(1/f - 1/A) + A)/A
  = a + (c-a) (1/(A/f - 1) + 1)
  = a + (c-a) (A/(A - f))

OK, let's try a trivial case.   What if A = 2f?

b = a + (c-a) (2f/(2f-f)) = a + 2(c-a)

That looks right (symmetric case-- A = B there) 

So final answer, given a ray with origin a that hits a lens with center c and focal length f at point m, the refracted ray is:

p(t) = m + t( a + (c-a) (A/(A - f)) - m)

There is a catch.   What if B < 0?   This happens when A < f, where the lens forms a virtual image rather than a real one.  Address that case when it comes up :)
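
Putting the final answer into code, here's a minimal sketch (assuming a vec3 type with the usual operators; the names are mine):

// given ray origin a, lens hit point m, lens center c, focal length f,
// and A = distance from a to the lens plane, return the bent direction
vec3 thin_lens_bend(const vec3& a, const vec3& m, const vec3& c,
                    double f, double A) {
    // b = a + (c - a) * A/(A - f); the refracted ray is p(t) = m + t*(b - m)
    vec3 b = a + (c - a) * (A / (A - f));
    return b - m;   // normalize if your ray code expects unit directions
}

// note this blows up when A == f and hits the B < 0 case when A < f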


Thursday, May 31, 2018

Generating uniform random rays that hit an axis-aligned box


For some tests you want the set of "all" rays that hit a box.   If you want to stratify, this is somewhat involved (and I don't know that I have done it nor seen it).   Chris Wyman and I did a JCGT paper on doing this stratified in a square in 2D.   But often stratification isn't needed.   When in doubt, I never do it-- the software is always easier to deal with un-stratified, and as soon as the dimension gets high most people don't bother because of the Curse of Dimensionality.

We would like to do this with as little math as possible.   First let's consider any side of the box (this would apply to any convex polyhedron if we wanted something more general).    If the box is embedded in the set of all possible uniform rays, any ray that hits the box will enter at exactly one point on the surface of the box, and all points are equally likely.   So our first job is to pick a uniform point on the surface.   We can use the technique Greg Turk used to seed points for texture generation on a triangular mesh:

The probability of picking each side of a box with side lengths X, Y, Z is that side's area over the total area.   The side areas are:

XY, YZ, ZX (2 of each)

We can do cumulative area and stuff it in an array of length 6:

c_area[0] = XY
c_area[1] = c_area[0] + XY
c_area[2] = c_area[1] + YZ
c_area[3] = c_area[2] + YZ
c_area[4] = c_area[3] + ZX
c_area[5] = c_area[4] + ZX

Now normalize it so it is a cumulative fraction:


for (int i = 0; i < 6; i++) 
    c_area[i] /= c_area[5]

Now take a uniform random real r() in [0,1):

int candidate = 0;
float ra = r();
while (c_area[candidate] < ra) candidate++;

Now candidate is the index to the side.

Let's say the side is in the xy plane and x goes from 0 to X and y goes from 0 to Y.   Now pick a uniform random point on that side:

vec3 ray_entry(X*r(), Y*r(), 0);

Now we have a ray origin.   What is the ray direction?  It is not uniform in all directions.   These are the rays that hit the side, so the density is proportional to the cosine of the angle with the normal-- they are Lambertian!   This is not obvious, and I will punt on justifying it for now.

So for the xy plane one above, the normal is +z and the ray direction is a uniform random point on a disk projected onto the hemisphere:

float radius = sqrt(r());       // sqrt makes the density uniform on the disk
float theta = 2*M_PI*r();
float x = radius*cos(theta);
float y = radius*sin(theta);
float z = sqrt(1-x*x-y*y);      // lift the disk point up to the hemisphere
ray_direction = vec3(x,y,z);    // cosine (Lambertian) distributed

Now we need that for each of the six sides.

We could probably find symmetries to have 3 cases, or maybe even a loop, but I personally would probably not bother because me trying to be clever usually ends poorly...
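
That said, here's a minimal sketch of the whole thing for a [0,X] x [0,Y] x [0,Z] box, with only the two z-normal sides written out (assumes a vec3 type and a uniform r() in [0,1); the other four sides permute the components the same way):

#include <cmath>

void random_ray_hitting_box(double X, double Y, double Z,
                            vec3& origin, vec3& direction) {
    // cumulative side areas: two each of XY, YZ, ZX
    double c_area[6] = { X*Y, 2*X*Y, 2*X*Y + Y*Z, 2*X*Y + 2*Y*Z,
                         2*X*Y + 2*Y*Z + Z*X, 2*X*Y + 2*Y*Z + 2*Z*X };
    for (int i = 0; i < 6; i++) c_area[i] /= c_area[5];
    int side = 0;
    double ra = r();
    while (c_area[side] < ra) side++;

    // cosine-weighted direction in the side's frame (+z = into the box)
    double radius = sqrt(r()), theta = 2*M_PI*r();
    double x = radius*cos(theta), y = radius*sin(theta);
    double z = sqrt(1 - radius*radius);

    if (side == 0) {          // z = 0 face, inward normal is +z
        origin = vec3(X*r(), Y*r(), 0);
        direction = vec3(x, y, z);
    } else if (side == 1) {   // z = Z face, inward normal is -z
        origin = vec3(X*r(), Y*r(), Z);
        direction = vec3(x, y, -z);
    }
    // ... sides 2-5 (the x- and y-normal faces) are analogous
}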




Wednesday, March 14, 2018

Egyptian estimates of PI

I saw a neat tweet about the estimate the Egyptians used for PI.

This is all my speculation, and maybe a math history buff can enlighten me, but they would discover the D^2 dependence pretty naturally.   (The Egyptian rule takes the area of a circle with diameter D to be ((8/9)D)^2, which implies PI is approximately 4*(8/9)^2, about 3.1605.)   Including the constant before squaring is, I would argue, just as natural as having it outside the parentheses, so let's go with that for now.   So was there a nearby better fraction?   How well did the Egyptians do?   A brute force program should tell us.

We will use the ancient programming language C:

#include <stdio.h>
#include <math.h>

int main() {
    double min_error = 10;
    for (int denom = 1; denom < 10000; denom++) {
        for (int num = 1; num < denom; num++) {
            double approx = 2*(double)num/(double)denom;
            approx = approx*approx;   /* PI ~= (2*num/denom)^2 */
            double error2 = M_PI - approx;
            error2 = error2*error2;
            if (error2 < min_error) {
                min_error = error2;
                printf("%d/%d %f\n", num, denom, approx);
            }
        }
    }
    return 0;
}


This produces output:

1/2 1.000000
2/3 1.777778
3/4 2.250000
4/5 2.560000
5/6 2.777778
6/7 2.938776
7/8 3.062500
8/9 3.160494
23/26 3.130177
31/35 3.137959
39/44 3.142562
109/123 3.141252
148/167 3.141597
4401/4966 3.141588
4549/5133 3.141589
4697/5300 3.141589
4845/5467 3.141589
4993/5634 3.141589
5141/5801 3.141590
5289/5968 3.141590
5437/6135 3.141590
5585/6302 3.141590
5733/6469 3.141590
5881/6636 3.141591
6029/6803 3.141591
6177/6970 3.141591
6325/7137 3.141591
6473/7304 3.141591
6621/7471 3.141591
6769/7638 3.141591
6917/7805 3.141592
7065/7972 3.141592
7213/8139 3.141592
7361/8306 3.141592
7509/8473 3.141592
7657/8640 3.141592
7805/8807 3.141592
7953/8974 3.141593
8101/9141 3.141593
8249/9308 3.141593
8397/9475 3.141593
8545/9642 3.141593


So 8/9 (the Egyptian value) was already pretty good, and you need to get to 23/26 before you do any better!   I'd say the Egyptians did extremely well.

What if they had put the constant outside the parens?   How well could they have done?   We can change the approximation lines above to:

double approx = 4*(double)num/(double)denom;   /* no squaring this time */

(the printf already prints approx, so it doesn't need to change).

And we get:

1/2 2.000000
2/3 2.666667
3/4 3.000000
4/5 3.200000
7/9 3.111111
11/14 3.142857
95/121 3.140496
106/135 3.140741
117/149 3.140940
128/163 3.141104
139/177 3.141243
150/191 3.141361
161/205 3.141464
172/219 3.141552
183/233 3.141631
355/452 3.141593


So 7/9 is not bad!  And 11/14 is even better (4*(11/14) is the classic 22/7).   So no clear winner here on whether the rational constant should be inside the parens or not.


Friday, December 29, 2017

Rendering the moon, sun, and sky

A reader asked me about rendering the moon using a path tracer.   This has been done by several people, and what's coolest about it is that you can do the whole thing with four spheres and not a lot of data (assuming you don't need clouds, anyway).

First, you will need to deal with the atmosphere, which is most easily handled spectrally rather than in RGB, because scattering has simple wavelength-based formulas.   But you'll also have an RGB texture for the moon, so I would use the lazy spectral method.

Here are the four spheres-- the atmosphere sphere and the Earth share the same center.   Not to scale (speaking of which, choose sensible units like kilometers or miles, and I would advise making everything a double rather than a float).

The atmosphere can be almost arbitrarily complicated, but I would advise making it all Rayleigh scatterers with constant density.  You can add more complicated mixtures and densities later.   To set the constant density, just try to get the overall opacity about right.   A random web search yields this image from Martin Chaplin:


 http://www1.lsbu.ac.uk/water/images/sun.gif

This suggests an overall transmission of something between 0.5 and 0.7 (which is probably good enough-- a constant atmospheric model is probably a bigger limitation).   In any case I would use the "collision" method, where the atmosphere looks like a solid object to your software and the exponential attenuation is implicit.

For the Sun you'll need the spectral radiance for when a ray hits it.   If you use the lazy binned RGB method and don't worry about absolute magnitudes (because you'll tone map later anyway), you can eyeball the above graph and guess that for the bands [400-500, 500-600, 600-700] nm you can use [0.6, 0.8, 1.0].   If you want to maintain absolute units (not a bad idea-- it's good to do some unit tests on things like the luminance of the moon or sky), data for the sun is available in lots of places, but be careful to make sure it is spectral radiance, or convert it to that (radiometry is a pain).

For the moon you will need a BRDF and a texture to modulate it.   For a first pass use Lambertian, but that will not give you the nice constant-color moon.   This paper by Yapo and Cutler has some great moon renderings, and they use the BRDF that Jensen et al. suggest:

Texture maps for the moon, again from a quick google search are here.

The Earth you can make black or give it a texture if you want Earth shine.   I ignore atmospheric refraction -- see Yapo and Cutler for more on that.

For a path tracer with a collision method, as I prefer, and implicit shadow rays (so directions toward the sun are more likely to be sampled, but all rays are just scattered rays), the program would look something like this:

For each pixel
     For each viewing ray, choose a random wavelength
          send it into the (moon, atmosphere, earth, sun) list of spheres
          if it hits something, scatter according to the pdf (simplest would be half isotropic and half toward the sun)

The most complicated object above is the atmosphere sphere, where the probability of a collision per unit length is proportional to 1/lambda^4.    I would make the Rayleigh scattering isotropic just for simplicity, but using the real phase function isn't that much harder.
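
Here's a minimal sketch of that wavelength-dependent collision probability (the coefficient is a made-up knob to tune for overall opacity, not measured data, and r() is a uniform random in [0,1)):

#include <cmath>

double rayleigh_sigma(double lambda_nm) {
    const double sigma_at_550 = 0.01;   // per km; tune for overall opacity
    double ratio = 550.0 / lambda_nm;
    return sigma_at_550 * ratio*ratio*ratio*ratio;   // ~ 1/lambda^4
}

// distance to the next scattering event in a constant-density atmosphere;
// if it is past the atmosphere boundary the ray escapes unscattered
// (this is where the exponential attenuation is implicit)
double collision_distance(double lambda_nm) {
    return -log(1.0 - r()) / rayleigh_sigma(lambda_nm);
}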

The picture below from this paper was generated using the techniques described above with no moon-- just the atmosphere.



There-- brute force is great-- get the computer to do the work (note, I already thought that way before I joined a hardware company).

If you generate any pictures, please tweet them to me!


Wednesday, December 6, 2017

Lazy spectral rendering

If you have to do spectral rendering (using light wavelengths, and not just RGB, for internal computations), I am a big fan of making your life simpler with two lazy moves:

1. Each ray gets its own wavelength
2. Use a 3-element piecewise-constant approximation for most of the spectra, and make all the XYZ tristimulus stuff implicit

First, here's how to do it "right".   You can skip this part-- I'll put it in brown so it's easy to skip.  We want some file of RGB pixels, like sRGB.   Look up the precise definition of sRGB in terms of XYZ.   Look up the precise definition of XYZ (if you must do that because you are doing some serious appearance modeling, use Chris Wyman's approximation).   You will have three functions of wavelength x(), y(), and z().   X, for example, is:

X = k*INTEGRAL x(lambda) L(lambda) d-lambda

If you use one wavelength per ray, pick it randomly and do Monte Carlo: lambda = 400 + 300*r01(), so pdf(lambda) = 1/300

X =approx=  k*300*x(lambda) L(lambda)

You can use the same rays to approximate Y and Z because x(), y(), and z() partially overlap.
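
In code, the one-sample estimator might look like this sketch (xbar(), L(), and r01() are stand-ins for your matching function, the ray's spectral radiance, and your random number generator):

double xbar(double lambda);   // your CIE x-bar matching function
double L(double lambda);      // spectral radiance carried by the ray
double r01();                 // uniform random in [0,1)

// one-sample Monte Carlo estimate of X for a single ray
double estimate_X(double k) {
    double lambda = 400 + 300*r01();           // pdf(lambda) = 1/300
    return k * 300 * xbar(lambda) * L(lambda);
}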

Now read in your model and convert all RGB triples to spectral curves.    How?   Don't ask me.   Seems like overkill so let's be lazy.

OK now let's be lazier than that.    This is a trick we used to use at the U of Utah in the 1990s.   I have no idea what its origins are.   Do this:

R =approx= L(lambda)

where lambda is a random wavelength in [600,700]nm

Do the same for G, B with random wavelengths in [500,600] and [400,500] respectively.

When you hit an RGB texture or material, just assume that it's a piecewise constant spectrum with the same spectral regions as above.   If you have a formula or real spectral data (for example, Rayleigh scattering or an approximation to the refractive index of a prism) then use that.
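
In code, the lazy lookup is just a band test (a sketch, with my names):

// treat an RGB triple as a piecewise-constant spectrum over
// [400,500), [500,600), [600,700) nm
double spectral_value(double lambda_nm, double R, double G, double B) {
    if (lambda_nm < 500.0) return B;   // blue band
    if (lambda_nm < 600.0) return G;   // green band
    return R;                          // red band
}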

This will have wildly bad behavior in the worst case, but in practice I have always found it to work well.   As an empirical test in an NVIDIA project, I tried it on a simple case: the Macbeth Color Checker spectra under flat white light.   Here's the full spectral rendering using the real spectral curves of the checker and the XYZ->RGB conversion and all that done "right":

[image: xyz.png]

 And here it is with the hack, using just 3 piecewise-constant spectra for the colors and the RGB integrals above.

[image: rgb.png]

That is different, but my belief is that the difference is no bigger than the intrinsic errors in input data, tone mapping, and display variation in 99% of situations.   One nice thing is that it's pretty easy to convert an RGB renderer to a spectral renderer this way.



Sunday, April 9, 2017

Email reply on BRDF math

I got some email asking about using BRDFs in a path tracer and thought my reply might be helpful to those learning path tracing.

Each ray tracing toolkit does this a little differently.   But they all have the same pattern:

color = BRDF(random direction) * cosine / pdf(random direction)

The complications are:

0. That formula comes from Monte Carlo integration, which is a bit much to wrap your mind around.

1. The units of the BRDF are a bit odd, and it's defined as a function over the sphere cross sphere which is confusing

2. pdf() is a function of direction and is somewhat arbitrary, though you get less noise if it is shaped something like the BRDF.

3. Even once you know what pdf() is for a given BRDF, you need to be able to generate random directions so that they are distributed like pdf()

Those four together are a bit overwhelming.   So if you are in this for the long haul, I think you just need to really grind through it all.   #0 is best absorbed in 1D first, then 2D, then graduate to the sphere.
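
To make #0 concrete, here's the classic first case worked out (a sketch, assuming a vec3 type): a Lambertian BRDF (albedo/pi) sampled with a cosine-weighted pdf (cosine/pi), where everything but the albedo cancels in the formula above.

// color = BRDF * cosine / pdf
//       = (albedo/pi) * cosine / (cosine/pi)
//       = albedo
vec3 lambertian_color(const vec3& albedo) {
    return albedo;   // the cancellation is why this pdf choice is low-noise
}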