Most "ray tracers" these days are what we used to call "path tracers" or "Monte Carlo ray tracers". Because they all have about the same core architecture, this shorthand is taking over.

These ray tracers have variants but a big one is whether the ray tracing is:

- **Batch**: run your ray tracer in the background until done (e.g., movie renderers)
- **Realtime**: run what you can in 30ms or some other frame time (e.g., game renderers go RTX!)
- **Progressive**: just keep updating the image to make it less noisy as you trace more rays (e.g., a lighting program used by an artist in a studio or engineering firm)

For batch rendering, let's pretend for now that the only noise comes from antialiasing the pixels, so we have a fundamentally 2D program. This typically has something like the following for each pixel (i,j):

pixel_color = (0,0,0)
for s = 0 to N-1
    u = random_from_0_to_1()
    v = random_from_0_to_1()
    pixel_color += sample_color(i+u, j+v)
pixel_color /= N

That sample_color() does whatever it does to sample the scene. The first thing that bites you is the **diminishing returns** associated with Monte Carlo methods: error = constant / sqrt(num_samples). So to halve the error we need 4X the samples.
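A quick way to convince yourself of that sqrt law is to estimate a known integral with plain Monte Carlo at N and 4N samples and compare RMS errors over many trials. This is a minimal Python sketch; the integrand is just an arbitrary toy function, not anything from a renderer:

```python
import math
import random

def mc_estimate(f, num_samples, rng):
    # Plain Monte Carlo estimate of the integral of f over [0,1).
    return sum(f(rng.random()) for _ in range(num_samples)) / num_samples

def rms_error(f, exact, num_samples, trials, rng):
    # Root-mean-square error of the estimator over many independent runs.
    sq = sum((mc_estimate(f, num_samples, rng) - exact) ** 2 for _ in range(trials))
    return math.sqrt(sq / trials)

rng = random.Random(1)
f = lambda x: x * x  # integral over [0,1) is exactly 1/3
e_n = rms_error(f, 1 / 3, 100, 400, rng)
e_4n = rms_error(f, 1 / 3, 400, 400, rng)
print(e_n, e_4n)  # 4x the samples should roughly halve the error
```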

Don Mitchell has a nice paper that explains that if you take "stratified" samples, you get better error rates for most functions. In the case of 2D with edges, the error = constant / num_samples. That is LOADS better. In graphics, a common way to stratify samples is called "jittering", where you usually take a perfect-square number of samples in each pixel: N = n*n. This yields the code:

pixel_color = (0,0,0)
for s = 0 to n-1
    for t = 0 to n-1
        u = (s + random_from_0_to_1()) / n
        v = (t + random_from_0_to_1()) / n
        pixel_color += sample_color(i+u, j+v)
pixel_color /= N
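To make the payoff concrete, here is a runnable Python sketch comparing pure random and jittered sampling at the same sample count. The edge function is a made-up stand-in for sample_color on a pixel crossed by an edge, with a known average of 0.5:

```python
import math
import random

def edge(u, v):
    # Toy stand-in for sample_color: a pixel split by a diagonal edge.
    # Its true average over the unit square is exactly 0.5.
    return 1.0 if u + v < 1.0 else 0.0

def random_estimate(num_samples, rng):
    # Pure random sampling, as in the first code block.
    return sum(edge(rng.random(), rng.random()) for _ in range(num_samples)) / num_samples

def jittered_estimate(n, rng):
    # Jittered sampling: one sample in each cell of an n x n grid (N = n*n).
    total = 0.0
    for s in range(n):
        for t in range(n):
            u = (s + rng.random()) / n
            v = (t + rng.random()) / n
            total += edge(u, v)
    return total / (n * n)

def rms(estimator, trials, rng):
    # Root-mean-square error against the known answer 0.5.
    return math.sqrt(sum((estimator(rng) - 0.5) ** 2 for _ in range(trials)) / trials)

rng = random.Random(7)
n = 8  # 64 samples either way
err_rand = rms(lambda r: random_estimate(n * n, r), 300, rng)
err_jit = rms(lambda r: jittered_estimate(n, r), 300, rng)
print(err_rand, err_jit)  # jittered error should be clearly smaller
```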

Visually, the difference in the pattern of the samples is more obvious. There is a very nice blog post on the details of this sampling here, and it includes a nice figure from Suffern's book.

It turns out that you can replace the random samples not only with jittered samples, but with a regular grid, and you will converge to the right answer. But better still, you can use quasi-random samples, which makes this a quasi-Monte Carlo (QMC) method. The code is largely the same! Just replace the (u,v) part above. The theory that justifies it is a bit different, but the key thing is that the points need to be "uniform" and not tend to clump anywhere. These QMC methods have been investigated in detail by my friend and coworker Alex Keller. The simplest QMC method is, like jittering, best accomplished if you know N (but it doesn't need to be restricted to a perfect square). A famous and good one is the Hammersley point set. Here's a picture from Wolfram:

It's regular and uniform, but not **too** regular.
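The standard construction is short enough to sketch in Python: the ith of N points is (i/N, base-2 radical inverse of i). Note that N appears in the construction, which is why you need to know it up front:

```python
def radical_inverse_base2(i):
    # Reflect the binary digits of i about the binary point:
    # 1 -> 0.5, 2 -> 0.25, 3 -> 0.75, 4 -> 0.125, ...
    x, f = 0.0, 0.5
    while i:
        if i & 1:
            x += f
        i >>= 1
        f *= 0.5
    return x

def hammersley(num_points):
    # The 2D Hammersley point set: (i/N, base-2 radical inverse of i).
    return [(i / num_points, radical_inverse_base2(i)) for i in range(num_points)]

pts = hammersley(16)
```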

In 2D, these sampling patterns (jittering and QMC) are **way way better** than pure random. However, there is no free lunch. In a general path tracer, it is more than a 2D problem. The program might look like this:

for every pixel
    for every sample
        pick u,v for screen
        pick a time t
        pick an a,b on lens
        if ray hits something
            pick a u',v' for light sampling
            pick a u",v" for BRDF sampling
            recurse

If you track your random number generation, and you take 3 diffuse bounces with shadow rays, that will be nine random numbers. You could think of a ray path through the scene as something that gets generated by a function:

    ray_path get_ray_path(u, u', u", u'", u"", u""', ...)

So you sample a nine-dimensional hypercube randomly and map those nine-dimensional points to ray paths. Really this happens in some procedural recursive process, but abstractly it's a mapping. This means we run into the **CURSE OF DIMENSIONALITY**. Once the integral is high dimensional, if the integrand is complex, **STRATIFIED SAMPLING DOESN'T HELP**. However, you will notice that (almost?) all serious production ray tracers do add stratified sampling. Why?

The reason is that for many pixels, the integrand is mostly constant except for two of the dimensions. For example, consider a pixel that is:

- In focus (so the lens sample doesn't matter)
- Not moving (so the time sample doesn't matter)
- Not on an edge (so the pixel sample doesn't matter)
- Fully in (or out of) shadow (so the light sample doesn't matter)

What does matter is the diffuse bounce. So it **acts like** a 2D problem, and we need the BRDF samples, let's call them u7 and u8, to be well stratified.
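To see that effect numerically, here is a hedged Python sketch. The integrand is a made-up 9D function that, like the pixel described above, really only depends on two of its dimensions; jittering just those two dimensions while leaving the rest pure random recovers the 2D win:

```python
import math
import random

def integrand(x):
    # Hypothetical 9D integrand that only depends on dims 7 and 8 (u7, u8),
    # like a pixel where only the diffuse bounce matters. True average: 0.5.
    return 1.0 if x[6] + x[7] < 1.0 else 0.0

def estimate(n, stratify, rng):
    # n*n samples; optionally jitter dims 7 and 8 over an n x n grid.
    total = 0.0
    for s in range(n):
        for t in range(n):
            x = [rng.random() for _ in range(9)]  # pure random in all 9 dims
            if stratify:                          # ...but stratify dims 7 and 8
                x[6] = (s + rng.random()) / n
                x[7] = (t + rng.random()) / n
            total += integrand(x)
    return total / (n * n)

def rms(stratify, trials, rng, n=8):
    # Root-mean-square error against the known answer 0.5.
    return math.sqrt(sum((estimate(n, stratify, rng) - 0.5) ** 2
                         for _ in range(trials)) / trials)

rng = random.Random(5)
e_rand = rms(False, 300, rng)
e_strat = rms(True, 300, rng)
print(e_rand, e_strat)  # stratifying the two dims that matter wins
```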

QMC methods here typically automatically ensure that various projections of the high dimensional samples are themselves well stratified. Monte Carlo methods, on the other hand, typically try to get this property for some projections, namely the 2 pixel dimensions, the 2 light dimensions, the 2 lens dimensions, etc. They typically do this as follows for the 4D case:

- Create N "good" light samples on [0,1)^2
- Create N "good" pixel samples on [0,1)^2
- Create a permutation of the integers 0, ..., N-1
- Create a 4D pattern using the permutation, where the ith sample is (light1[i], light2[i], pixel1[permute[i]], pixel2[permute[i]])
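Here is one way that recipe might look in Python. The jittered patterns standing in for the "good" 2D samples are my own choice for illustration; any well-stratified 2D pattern would do:

```python
import random

def jittered(n, rng):
    # An n x n jittered pattern on [0,1)^2, one sample per grid cell.
    return [((s + rng.random()) / n, (t + rng.random()) / n)
            for s in range(n) for t in range(n)]

def padded_4d(n, rng):
    # Combine two good 2D patterns into one 4D pattern using a random
    # permutation, so both 2D projections stay well stratified.
    N = n * n
    light = jittered(n, rng)   # N "good" light samples on [0,1)^2
    pixel = jittered(n, rng)   # N "good" pixel samples on [0,1)^2
    perm = list(range(N))
    rng.shuffle(perm)          # a permutation of the integers 0, ..., N-1
    return [(light[i][0], light[i][1],
             pixel[perm[i]][0], pixel[perm[i]][1]) for i in range(N)]

samples = padded_4d(4, random.Random(3))
```

Both the light projection (dims 1-2) and the pixel projection (dims 3-4) of the result land one sample in every grid cell, which is exactly the property we wanted.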

So a pain :) There are bunches of papers on doing Monte Carlo, QMC, or "blue noise" uniform random point sets. But we are not quite done yet.

First, we don't know how many dimensions we have! The program recurses and dies by hitting a dark surface, exiting the scene, or Russian roulette. Most programs degenerate to pure random after a few bounces so that we can sort of know the dimensionality.

Second, we might want a progressive preview on the renderer, where the image gets better as you wait. Here's a nice example.

So you don't know what N is in advance! You want to be able to add samples potentially forever. This is easy if the samples are purely random, but not so obvious if doing QMC or stratified sampling. The default QMC answer is to use **Halton points**. These are designed to be progressive! Alex Keller at NVIDIA and his collaborators have found even better ways to do this with QMC. Per Christensen, Andrew Kensler, and Charlie Kilpatrick at Pixar have a new Monte Carlo sampling method that is making waves in the movie industry. I have never implemented these and would love to hear your experiences if you do (or have!)
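For reference, the classic Halton construction is short enough to sketch. This is the standard textbook radical-inverse definition, not any of the newer methods mentioned above:

```python
def radical_inverse(i, base):
    # Reflect the base-b digits of i about the "decimal" point.
    x, f = 0.0, 1.0 / base
    while i:
        x += (i % base) * f
        i //= base
        f /= base
    return x

def halton(i):
    # The ith 2D Halton point, bases 2 and 3. Unlike Hammersley, no N is
    # needed: you can keep generating points forever, which is exactly
    # the progressive property we want.
    return (radical_inverse(i, 2), radical_inverse(i, 3))

pts = [halton(i) for i in range(100)]
```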