Tuesday, March 12, 2019

Making your BVH faster


I am no expert in making BVHs fast, so just use this blog post as a starting point.   But there are some things you can try if you want to speed things up.   All of them involve tradeoffs, so test them rather than assuming they are faster!

1. Store the inverse ray direction with the ray.   The most popular ray-bounding box hit test assumes this (it is used in both the Intel and NVIDIA ray tracers as far as I know):
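A minimal sketch of that style of slab test, assuming the ray stores inv_direction = 1/direction (computed once when the ray is created) and a vec3 with operator[]:

bool hit_box(const vec3& box_min, const vec3& box_max,
             const vec3& origin, const vec3& inv_direction,
             float t_min, float t_max) {
    for (int a = 0; a < 3; a++) {
        float t0 = (box_min[a] - origin[a]) * inv_direction[a];
        float t1 = (box_max[a] - origin[a]) * inv_direction[a];
        if (inv_direction[a] < 0.0f) { float tmp = t0; t0 = t1; t1 = tmp; }
        if (t0 > t_min) t_min = t0;                // shrink the interval
        if (t1 < t_max) t_max = t1;
        if (t_max <= t_min) return false;          // empty interval: miss
    }
    return true;
}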


Note that if the ray direction were passed in you would have 6 divides rather than 6 multiplies.

2.  Do an early out in your ray traversal.   This is a trick used in many BVHs, but not the one I have in the ray tracing minibook series.   Martin Lambers suggested this version to me, which is not only faster, it is cleaner code.
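A sketch of that version, in the style of the minibook bvh_node (box, left, and right members assumed):

bool bvh_node::hit(const ray& r, float t_min, float t_max, hit_record& rec) const {
    if (!box.hit(r, t_min, t_max))
        return false;                           // early out: skip both children
    bool hit_left = left->hit(r, t_min, t_max, rec);
    // if the left child hit, the right child only needs to beat that hit
    bool hit_right = right->hit(r, t_min, hit_left ? rec.t : t_max, rec);
    return hit_left || hit_right;
}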



3. Build using the surface area heuristic (SAH).   This is a greedy algorithm that minimizes the sum of the areas of the bounding boxes in the level being built.    I based mine on the pseudocode in this old paper I did with Ingo Wald and Solomon Boulos.    I used simple arrays for the sets and the quicksort from Wikipedia for the sort.
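A sketch of the greedy sweep along one axis (n >= 2), assuming the object bounding boxes are already sorted by centroid along that axis, and assuming the minibook-style aabb and surrounding_box() plus a hypothetical surface_area() helper:

#include <vector>

int best_sah_split(const std::vector<aabb>& boxes) {
    int n = (int)boxes.size();
    std::vector<float> right_area(n);
    aabb accum = boxes[n-1];
    for (int i = n - 1; i >= 0; i--) {        // right-to-left: area of boxes[i..n-1]
        accum = surrounding_box(accum, boxes[i]);
        right_area[i] = surface_area(accum);
    }
    int best = 1;
    float best_cost = 1e30f;
    accum = boxes[0];
    for (int i = 1; i < n; i++) {             // left-to-right: try each split point
        accum = surrounding_box(accum, boxes[i-1]);
        float cost = surface_area(accum)*i + right_area[i]*(n - i);
        if (cost < best_cost) { best_cost = cost; best = i; }
    }
    return best;   // objects [0,best) go in the left child, [best,n) in the right
}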





Sunday, February 17, 2019

Lazy person's tone mapping

In a physically-based renderer, your RGB values are not confined to [0,1] and you need to deal with that somehow.

The simplest thing is to clamp them to [0,1].   In my own C++ code:
inline vec3 vec3::clamp() {
    if (e[0] < real(0)) e[0] = 0;
    if (e[1] < real(0)) e[1] = 0;
    if (e[2] < real(0)) e[2] = 0;
    if (e[0] > real(1)) e[0] = 1;
    if (e[1] > real(1)) e[1] = 1;
    if (e[2] > real(1)) e[2] = 1;
    return *this;
}


A more pleasing result can probably be had by applying a "tone mapping" algorithm.   The easiest is probably Erik Reinhard's "L/(1+L)" operator from Equation 3 of this paper.

Here is my implementation of it.   You still need to clamp because of highly saturated colors, and purists won't like my luminance formula (1/3, 1/3, 1/3), but never listen to purists :)

void reinhard_tone_map(real mid_grey = real(0.2)) {
    // Using even weights (1/3 each) for luminance.   This is more robust
    // than standard NTSC luminance for saturated colors.
    // The Reinhard tone mapper first scales the value we want to be
    // "mid grey" to 0.2, and then applies the L/(1+L) formula that
    // controls the values above 1.0 in a graceful manner.
    real scale = real(0.2)/mid_grey;
    for (int i = 0; i < nx*ny; i++) {
        vec3 temp = scale*vdata[i];
        real L = real(1.0/3.0)*(temp[0] + temp[1] + temp[2]);
        real multiplier = real(1)/(real(1) + L);
        temp *= multiplier;
        temp.clamp();
        vdata[i] = temp;
    }
}

This will slightly darken the dark pixels and greatly darken the bright pixels.   Equation 4 in the Reinhard paper will give you more control.   The cool kids have been using "filmic tone mapping" and it is the best tone mapping I have seen, but I have not implemented it (see the title of this blog post).



Thursday, February 14, 2019

Picking points on the hemisphere with a cosine density


NOTE: this post has three basic problems.  It assumes Properties 1 and 2 are true, and there is a missing piece at the end that keeps us from showing anything :)

This post results from a bunch of conversations with Dave Hart and the twitter hive brain.  There are several ways to generate a random Lambertian direction from a point with surface normal N.   One way is inspired by a cool paper by Sbert and Sandez where they simultaneously generated many form factors by repeatedly selecting a uniformly random 3D line in the scene.   This can be used to generate a direction with a cosine density, an idea first described, as far as I know, by Edd Biddulph.

I am going to describe it here using three properties, none of which I have a concise proof for.   Any help appreciated!   (I have algebraic proofs-- they just aren't enlightening-- hoping for a clever geometric observation.)


Property 1: Nusselt Analog: uniform random points on an equatorial disk, projected up onto the sphere, have a cosine density.   So in the figure, the red points, if all of them are projected, have a cosine density.

Property 2: (THIS PROPERTY IS NOT TRUE-- SEE COMMENTS) The red points in the diagram above, when projected onto the normal, will have a uniform density along it:
Property 3: For random points on a 3D sphere, as shown (badly) below, when projected onto the central axis they will be uniform along that axis (this is Archimedes' hat-box theorem).


Now if we accept Property 3, we can generate a random point on a sphere by first choosing a random angle uniformly, phi = 2*PI*urandom(), and then choosing a random height from negative 1 to 1, height = -1 + 2*urandom().

In XYZ coordinates we can convert this to:
z = -1 + 2*urandom()
phi = 2*PI*urandom()
x = cos(phi)*sqrt(1-z*z)
y = sin(phi)*sqrt(1-z*z)

Similarly, from Property 2, given a random point (x,y) on a unit disk in the XY plane, we can generate a direction with cosine density when N = the z axis:

(x,y) = random on disk
z = sqrt(1-x*x-y*y)

To generate a cosine direction relative to a surface normal N, people usually construct a local basis, ANY local basis, with tangent and bitangent vectors B and T and change coordinate frames:

get_tangents(B,T,N)
(x,y) = random on disk
z = sqrt(1-x*x-y*y)
direction = x*B + y*T + z*N

There is finally a compact and robust way to write get_tangents.   So use that, and your code is fast and good.   
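I believe that compact way is the branchless one from Duff et al.'s JCGT paper "Building an Orthonormal Basis, Revisited".   A sketch of it, assuming vec3 has x()/y()/z() accessors:

void get_tangents(vec3& b, vec3& t, const vec3& n) {
    float sign = copysignf(1.0f, n.z());
    float a = -1.0f / (sign + n.z());
    float bxy = n.x() * n.y() * a;                // shared off-diagonal term
    b = vec3(1.0f + sign * n.x() * n.x() * a, sign * bxy, -sign * n.x());
    t = vec3(bxy, sign + n.y() * n.y() * a, -n.y());
}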

But can we show that using a uniform random point on the sphere lets us do this without tangents?

So  we do this:
P = (x,y,z) = random_on_unit_sphere
D = unit_vector(N + P)
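In code, a self-contained sketch, with urandom(), dot(), and unit_vector() assumed from above; the rejection loop is one easy way to get a uniform point on the sphere:

vec3 random_on_unit_sphere() {
    while (true) {
        vec3 p(2*urandom() - 1, 2*urandom() - 1, 2*urandom() - 1);
        if (dot(p, p) < 1)
            return unit_vector(p);   // uniform point inside, pushed to the surface
    }
}

vec3 random_cosine_direction(const vec3& N) {
    return unit_vector(N + random_on_unit_sphere());
}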

So D is the green dot while (N+P) is the red dot:


So is there a clever observation that the green dot is either (1) uniform along N when projected onto it, or (2) uniform on the disk when projected down?







Tuesday, February 12, 2019

Adding point and directional lights to a physically based renderer


Suppose you have a physically based renderer with area lights.   Along comes a model with point and directional lights.   Perhaps the easiest way to deal with them is to convert them to a very small spherical light source.   But how do you do that in a way that gets the color right?   If you have things set up so they tend to be tone mapped (a potential big no-no in a physically-based renderer), meaning that a color of (1,1,1) will be white, and (0.2,0.2,0.2) a mid-grey (gamma-- so not 0.5-- the eye does not see intensities linearly), then it is not so bad.

Assume you have a spherical light with radius R at position C and emitted radiance (E,E,E).    When it is straight up from a white surface (so the sun at noon at the equator), you get approximately this equation for the color at a point P:

reflected_color = (E,E,E)*solid_angle_of_light / PI

The solid angle of the light is its projected area on the unit sphere around the illuminated point, or approximately:

solid_angle_of_light = PI*R^2 /distance_squared(P,C)

So

 reflected_color = (E,E,E)*(R / distance(P,C))^2

If we want the white surface to be exactly white then

E = (distance(P,C) / R)^2

So pick a small R (say 0.001), pick a point in your scene (say the one the viewer is looking at, or (0,0,0)), and set E as in the equation above.


Suppose the RGB "color" given for the point source is  (cr, cg, cb).    Then just multiply the (E,E,E) by those components.     
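Putting that together, a minimal sketch of the conversion, assuming the vec3 class used elsewhere on this blog and a hypothetical sphere_light struct:

// hypothetical struct for the small spherical replacement light
struct sphere_light { vec3 center; float radius; vec3 emitted; };

sphere_light point_light_to_sphere(const vec3& light_pos, const vec3& rgb,
                                   const vec3& reference_point) {
    float R = 0.001f;                                 // small enough not to be visible
    float d = (light_pos - reference_point).length();
    float E = (d / R) * (d / R);                      // the equation for E above
    return { light_pos, R, E * rgb };                 // scale by the light's color
}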

Directional lights are a similar hack, but a little less prone to problems of whether falloff was intended.   A directional light is usually the sun, and it's very far away.    Assume the direction is D = (dx,dy,dz).

Pick a big distance (one that won't break your renderer) and place the center in that direction:

C = big_constant*D

Pick a small radius, again one that won't break your renderer.   Then use the equation for E above.

Now sometimes the directional source is just the Sun, and it will look better if you give it the same angular size as the Sun.   If your model is in meters, then just use distance = 150,000,000,000m.   OK, now your program will break due to floating point.   Instead pick a somewhat big distance and use the right ratio of Sun radius to distance:

R = distance *(695,700km / 149,597,870km)
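A sketch for the Sun case, reusing the hypothetical sphere_light struct from the point light sketch above:

sphere_light directional_light_to_sphere(const vec3& D, const vec3& rgb,
                                         const vec3& reference_point,
                                         float big_constant) {
    vec3 C = big_constant * unit_vector(D);              // far away along D
    float R = big_constant * (695700.0f / 149597870.0f); // Sun radius over Sun distance
    float d = (C - reference_point).length();
    float E = (d / R) * (d / R);                         // same equation for E as above
    return { C, R, E * rgb };
}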

And your shadows will look ok.




Saturday, October 20, 2018

Flavors of Sampling in Ray Tracing

Most "ray tracers" these days are what we used to call "path tracers" or "Monte Carlo ray tracers".   Because they all have about the same core architecture, this short hand is taking over.

These ray tracers have variants but a big one is whether the ray tracing is:
  • Batch: run your ray tracer in the background until done (e.g., movie renderers)
  • Realtime: run what you can in 30ms or some other frame time (e.g. game renderers go RTX!)
  • Progressive: just keep updating the image to make it less noisy as you trace more rays (e.g., a lighting program used by an artist in a studio or engineering firm)
For batch rendering let's pretend for now we are only getting noise from antialiasing the pixels, so we have a fundamentally 2D program.   This typically has something like the following for each pixel (i,j):

pixel_color = (0,0,0)
for s = 0 to N-1
       u = random_from_0_to_1()
       v = random_from_0_to_1()
       pixel_color += sample_color(i+u, j+v)
pixel_color /= N

That "sample_color()" does whatever it does to sample a font or whatever.    The first thing that bites you is the diminishing return associated with Monte Carlo methods:   error = constant / sqrt(num_samples).     So to halve the error we need 4X the samples. 

Don Mitchell has a nice paper that explains that if you take "stratified" samples, you get better error rates for most functions.    In the case of 2D with edges, the error = constant / num_samples.   That is LOADS better.   In graphics a common way to stratify samples is called "jittering", where you usually take a perfect square number of samples in each pixel: N = n*n.   This yields the code:

pixel_color = (0,0,0)
for s = 0 to n-1
       for t = 0 to n-1
           u = (s + random_from_0_to_1()) / n
           v = (t + random_from_0_to_1()) / n
           pixel_color += sample_color(i+u, j+v)
pixel_color /= (n*n)

Visually the difference in the pattern of the samples is more obvious.   There is a very nice blog post on the details of this sampling here, and it includes this nice figure from Suffern's book:


It turns out that you can replace the random samples not only with jittered samples, but also with a regular grid, and you will converge to the right answer.   But better still you can use quasi-random samples, which makes this a quasi-Monte Carlo method (QMC).   The code is largely the same!   Just replace the (u,v) part above.    The theory that justifies it is a bit different, but the key thing is the points need to be "uniform" and not tend to clump anywhere.   These QMC methods have been investigated in detail by my friend and coworker Alex Keller.   The simplest QMC method is, like the jittering, best accomplished if you know N (but it doesn't need to be restricted to a perfect square).   A famous and good one is the Hammersley point set.   Here's a picture from Wolfram:
It's regular and uniform, but not too regular.
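A sketch of generating the 2D Hammersley points: u is i/N, and v is the base-2 radical inverse of i, which is just the bits of i mirrored about the binary point:

#include <cstdint>

float radical_inverse_base2(uint32_t bits) {
    bits = (bits << 16u) | (bits >> 16u);                               // swap halves
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u); // swap bytes
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u); // swap nibbles
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u); // swap pairs
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u); // swap bits
    return float(bits) * 2.3283064365386963e-10f;                      // divide by 2^32
}

void hammersley(int i, int N, float& u, float& v) {
    u = float(i) / float(N);
    v = radical_inverse_base2((uint32_t)i);
}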

In 2D these sampling patterns, jittering and QMC, are way, way better than pure random.   However, there is no free lunch.   In a general path tracer, it is more than a 2D problem.   The program might look like this:

For every pixel
    for every sample
          pick u,v for screen
          pick a time t
          pick an a,b on lens
          if ray hits something
                    pick a u',v' for light sampling
                    pick a u",v" for BRDF sampling
                    recurse

If you track your random number generation and you take 3 diffuse bounces with shadow rays, that will be nine random numbers.    You could think of a ray path through the scene as something that gets generated by a function:

ray_path get_ray_path(u, u', u", u'", u"", u""', ....)

So you sample a nine-dimensional hypercube randomly and map those nine-dimensional points to ray paths.   Really this happens in some procedural recursive process, but abstractly it's a mapping.   This means we run into the  CURSE OF DIMENSIONALITY.   Once the integral is high dimensional, if the integrand is complex, STRATIFIED SAMPLING DOESN'T HELP.    However, you will notice (almost?) all serious production ray tracers do add stratified sampling.   Why?

The reason is that for many pixels, the integrand is mostly constant except for two of the dimensions.   For example, consider a pixel that is:

  • In focus (so lens sample doesn't matter)
  • Not moving (so time sample doesn't matter)
  • Doesn't have an edge (so pixel sample doesn't matter)
  • Is fully in (or out of) shadow (so light sample doesn't matter)
What does matter is the diffuse bounce.   So it acts like a 2D problem.   So we need the BRDF samples, let's call them u7 and u8, to be well stratified. 

QMC methods here typically automatically ensure that various projections of the high dimensional samples are themselves well stratified.    Monte Carlo methods, on the other hand, typically try to get this property for some projections, namely the 2 pixel dimensions, the 2 light dimensions, the 2 lens dimensions, etc.   They typically do this as follows for the 4D case:

  1. Create N "good" light samples on [0,1)^2 
  2. Create N "good" pixel samples on [0,1)^2
  3. Create a permutation of the integers 0, ..., N-1
  4. Create a 4D pattern using the permutation, where the ith sample is (light1[i], light2[i], pixel1[permute[i]], pixel2[permute[i]]) -- see the sketch below.
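A sketch of that recipe, with hypothetical sample2/sample4 structs and std::shuffle doing the permutation (the "good" 2D sets could be jittered or anything else):

#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

struct sample2 { float u, v; };                       // one 2D sample
struct sample4 { float u1, v1, u2, v2; };             // one padded 4D sample

std::vector<sample4> pad_4d(const std::vector<sample2>& light,
                            const std::vector<sample2>& pixel,
                            std::mt19937& rng) {
    int N = (int)light.size();
    std::vector<int> perm(N);
    std::iota(perm.begin(), perm.end(), 0);           // 0, 1, ..., N-1
    std::shuffle(perm.begin(), perm.end(), rng);      // random permutation
    std::vector<sample4> out(N);
    for (int i = 0; i < N; i++)
        out[i] = { light[i].u, light[i].v, pixel[perm[i]].u, pixel[perm[i]].v };
    return out;
}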
So a pain :)   There are bunches of papers on doing Monte Carlo, QMC, or "blue noise" uniform random point sets.   But we are not yet quite done.

First, we don't know how many dimensions we have!   The program recurses and dies by hitting a dark surface, exiting the scene, or Russian Roulette.    Most programs degenerate to pure random after a few bounces to make it so we can sort of know the dimensionality.

Second, we might want a progressive preview on the renderer where it gets better as you wait.   Here's a nice example.

So you don't know what N is in advance!   You want to be able to add samples potentially forever.  This is easy if the samples are purely random, but not so obvious if doing QMC or stratified.   The default QMC answer is to use Halton points.    These are designed to be progressive!     Alex Keller at NVIDIA and his collaborators have found even better ways to do this with QMC.   Per Christensen and Andrew Kensler and Charlie Kilpatrick at Pixar have a new Monte Carlo sampling method that is making waves in the movie industry.   I have not ever implemented these and would love to hear your experiences if you do (or have!)
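For reference, a sketch of the radical inverse behind Halton points; dimension d uses the d-th prime as its base, and unlike Hammersley you never need to know N, so you can keep adding samples:

float radical_inverse(int i, int base) {
    float inv_base = 1.0f / float(base);
    float f = inv_base;
    float result = 0.0f;
    while (i > 0) {
        result += f * float(i % base);    // peel off digits of i in the given base
        i /= base;
        f *= inv_base;
    }
    return result;
}

// 2D Halton point number i: (radical_inverse(i, 2), radical_inverse(i, 3))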

           

Monday, June 4, 2018

Sticking a thin lens in a ray tracer

I need to stick an "ideal thin lens" in my ray tracer.   Rather than a real lens with some material, coating, and shape, it's an idealized version like we get in Physics 1.

The thin lens has some basic properties.   They have some redundancy, but these are the ones I remember and deploy when needed:
  1. a ray through the lens center is not bent
  2. all rays through a point that then pass through the lens converge at some other point
  3. a ray through the focal point (a point on the optical axis at distance f from the lens, where f is the focal length) will be parallel to the optical axis after being bent by the lens
  4. all rays from a point a distance A from the lens will converge at a distance B on the other side of the lens and obey the thin lens law: 1/A + 1/B = 1/f
 Here's a sketch of those rules:




So if I have a (purple) ray with origin a that hits the lens at point m, how does it bend?   It bends towards point b no matter what the ray direction is.   So the new ray is:

p(t) = m + t (b-m)

 So what is point b?

It is in the direction of point c from a, but extended by some distance.  We know the center of the lens c, so we can use the ratios of the segments to extend to it:

b = a + (c-a) (B+A) / A

So what is (B+A) /A? 

We know 1/A + 1/B = 1/f, so B = 1/(1/f-1/A).   So the point b is:

b = a + (c-a) (1/(1/f-1/A) + A)/A =
b = a + (c-a) (1/(A/f - 1) + 1) =
b = a + (c-a) (A/(A - f))

OK, let's try a couple of trivial cases.   What if  A = 2f?

b = a + (c-a) (2f/(2f-f)) = a + 2(c-a)

That looks right (symmetric case-- A = B there) 

So final answer, given a ray with origin a that hits a lens with center c and focal length f at point m, the refracted ray is:

p(t) = m + t( a + (c-a) (A/(A - f)) - m)

There is a catch.   What if B < 0?   This happens when A < f, where the rays diverge after the lens (a virtual image).   Address that case when it comes up :)
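In code, a sketch of the final answer, assuming the lens center is c, n is the unit optical axis pointing toward the ray origin's side, and ray/vec3 are the classes used elsewhere on this blog (the A < f catch shows up as the sign flip in A - f):

ray bend_through_thin_lens(const vec3& a, const vec3& m,
                           const vec3& c, const vec3& n, float f) {
    float A = dot(a - c, n);                  // distance from a to the lens plane
    vec3 b = a + (c - a) * (A / (A - f));     // the point the bent ray converges toward
    return ray(m, b - m);                     // p(t) = m + t*(b - m)
}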


Thursday, May 31, 2018

Generating uniform Random rays that hit an axis aligned box


For some tests you want the set of "all" rays that hit a box.   If you want to stratify, this is somewhat involved (and I don't know that I have done it nor seen it).   Chris Wyman and I did a JCGT paper on doing this stratified in a square in 2D.   But often stratification isn't needed.   When in doubt I never do it-- the software is always easier to deal with un-stratified, and as soon as dimension gets high most people don't bother because of the Curse of Dimensionality.

We would like to do this with as little math as possible.   First let's consider any side of the box (this would apply to any convex polyhedron if we wanted something more general).    If the box is embedded in the set of all possible uniform rays, any ray that hits the box will enter at exactly one point on its surface, and all points are equally likely.   So our first job is to pick a uniform point on the surface.   We can use the technique Greg Turk used to seed points for texture generation on a triangular mesh:

The probability of each side, for a box of side lengths X, Y, Z, is its area over the total area.   The side areas are:

XY, YZ, ZX (2 of each)

We can do cumulative area and stuff it in an array of length 6:

c_area[0] = XY
c_area[1] = c_area[0] + XY
c_area[2] = c_area[1] + YZ
c_area[3] = c_area[2] + YZ
c_area[4] = c_area[3] + ZX
c_area[5] = c_area[4] + ZX

Now normalize it so it is a cumulative fraction:


for (int i = 0; i < 6; i++) 
    c_area[i] /= c_area[5]

Now take a uniform random real r() in [0,1):

int candidate = 0;
float ra = r();
while (c_area[candidate] < ra) candidate++;

Now candidate is the index to the side.

Let's say the side is in the xy plane and x goes from 0 to X and y goes from 0 to Y.   Now pick a uniform random point on that side:

vec3 ray_entry(X*r(), Y*r(), 0);

Now we have a ray origin.   What is the ray direction?   It is not uniform in all directions.   These are the rays that hit the side, so the density is proportional to the cosine of the angle to the normal-- so they are Lambertian!   This is not obvious.   I will punt on justifying that for now.

So for the xy side above, the normal is +z and the ray direction is a uniform random point on a disk projected onto the hemisphere:

float radius = sqrt(r());
float theta = 2*M_PI*r();
float x = radius*cos(theta);
float y = radius*sin(theta);
float z = sqrt(1-x*x-y*y);
ray_direction = vec3(x,y,z);

Now we need that for each of the six sides.

We could probably find symmetries to have 3 cases, or maybe even a loop, but I personally would probably not bother because me trying to be clever usually ends poorly...
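For completeness, here is a sketch of the straightforward version for the first two sides, assuming the r() helper above and the ray/vec3 classes used elsewhere on this blog; the other four sides are the same pattern with the coordinates permuted:

ray random_ray_hitting_box(float X, float Y, float Z) {
    float c_area[6] = { X*Y, 2*X*Y, 2*X*Y + Y*Z, 2*X*Y + 2*Y*Z,
                        2*X*Y + 2*Y*Z + Z*X, 2*X*Y + 2*Y*Z + 2*Z*X };
    for (int i = 0; i < 6; i++)
        c_area[i] /= c_area[5];                    // normalize to cumulative fractions
    int side = 0;
    float ra = r();
    while (c_area[side] < ra) side++;              // pick a side by area
    float radius = sqrt(r());                      // Lambertian direction in the
    float theta = 2*M_PI*r();                      // side's local frame
    float x = radius*cos(theta);
    float y = radius*sin(theta);
    float z = sqrt(1 - x*x - y*y);
    if (side == 0)                                 // z = 0 side, inward normal +z
        return ray(vec3(X*r(), Y*r(), 0), vec3(x, y, z));
    if (side == 1)                                 // z = Z side, inward normal -z
        return ray(vec3(X*r(), Y*r(), Z), vec3(x, y, -z));
    // sides 2..5 (x = 0, x = X, y = 0, y = Y) are analogous
    return ray(vec3(0,0,0), vec3(0,0,1));          // placeholder for the other cases
}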