Monday, June 4, 2018

Sticking a thin lens in a ray tracer

I need to stick an "ideal thin lens" in my ray tracer.   Rather than a real lens with some material, coating, and shape, it's the idealized version we get in Physics 1.

The thin lens has some basic properties that are somewhat redundant, but they are the ones I remember and deploy when needed:
  1. a ray through the lens center is not bent
  2. all rays through a point that then pass through the lens converge at some other point
  3. a ray through the focal point (a point on the optical axis at distance f from the lens, where f is the focal length) will be parallel to the optical axis after being bent by the lens
  4. all rays from a point a distance A from the lens will converge at a distance B on the other side of the lens and obey the thin lens law: 1/A + 1/B = 1/f
 Here's a sketch of those rules:




So if I have a (purple) ray with origin a that hits the lens at point m, how does it bend?   By rule 2, all rays from a converge at some point b, so the ray must bend toward b no matter what its direction is.   So the new ray is:

p(t) = m + t (b-m)

 So what is point b?

It is in the direction of point c from a, but extended by some distance.   We know the center of the lens c, and going from a to c covers axial distance A while going from a to b covers A+B, so we can scale by that ratio:

b = a + (c-a) (B+A) / A

So what is (B+A) /A? 

We know 1/A + 1/B = 1/f, so B = 1/(1/f-1/A).   So the point b is:

b = a + (c-a) (1/(1/f-1/A) + A)/A
  = a + (c-a) (1/(A/f - 1) + 1)
  = a + (c-a) (A/(A - f))

OK let's try a trivial case.   What if A = 2f?

b = a + (c-a) (2f/(2f-f)) = a + 2(c-a)

That looks right (symmetric case-- A = B there) 

So final answer, given a ray with origin a that hits a lens with center c and focal length f at point m, the refracted ray is:

p(t) = m + t( a + (c-a) (A/(A - f)) - m)

There is a catch.   What if B < 0?   This happens when A < f.   I'll address that case when it comes up :)
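
In code the bend is only a few lines.   Here's a minimal sketch (the vec3 struct and operator helpers are my stand-ins, not from any particular ray tracer), ignoring the B < 0 catch above:

struct vec3 { double x, y, z; };
vec3 operator+(vec3 u, vec3 v) { return {u.x + v.x, u.y + v.y, u.z + v.z}; }
vec3 operator-(vec3 u, vec3 v) { return {u.x - v.x, u.y - v.y, u.z - v.z}; }
vec3 operator*(double s, vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

// Given ray origin a at distance A in front of the lens, lens center c,
// focal length f, and hit point m on the lens, return the bent direction
// b - m, where b = a + (c-a) * A/(A-f).
vec3 bend_through_lens(vec3 a, vec3 c, double f, double A, vec3 m) {
    vec3 b = a + (A / (A - f)) * (c - a);
    return b - m;    // new ray is p(t) = m + t*(b - m)
}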


Thursday, May 31, 2018

Generating uniform random rays that hit an axis-aligned box


For some tests you want the set of "all" rays that hit a box.   If you want to stratify, this is somewhat involved (and I don't know that I have done it or seen it done).   Chris Wyman and I did a JCGT paper on doing this stratified in a square in 2D.   But often stratification isn't needed.   When in doubt I never do it-- the software is always easier to deal with un-stratified, and as soon as the dimension gets high most people don't bother because of the Curse of Dimensionality.

We would like to do this with as little math as possible.   First let's consider any side of the box (this would apply to any convex polyhedron if we wanted something more general).   If the box is embedded in all possible uniform rays, any ray that hits the box will enter at exactly one point on its surface, and all points are equally likely.   So our first job is to pick a uniform point on the surface.   We can use the technique Greg Turk used to seed points for texture generation on a triangular mesh:

The probability of each side for a box of side lengths X, Y, Z is its area over the total area.   The side areas are:

XY, YZ, ZX (2 of each)

We can do cumulative area and stuff it in an array of length 6:

c_area[0] = X*Y;
c_area[1] = c_area[0] + X*Y;
c_area[2] = c_area[1] + Y*Z;
c_area[3] = c_area[2] + Y*Z;
c_area[4] = c_area[3] + Z*X;
c_area[5] = c_area[4] + Z*X;

Now normalize it so it is a cumulative fraction:


for (int i = 0; i < 6; i++) 
    c_area[i] /= c_area[5];

Now take a uniform random real r() in [0,1):

int candidate = 0;
float ra = r();
while (c_area[candidate] < ra) candidate++;

Now candidate is the index to the side.

Let's say the side is in the xy plane and x goes from 0 to X and y goes from 0 to Y.   Now pick a uniform random point on that side:

vec3 ray_entry(X*r(), Y*r(), 0);

Now we have a ray origin.   What is the ray direction?   It is not uniform in all directions.   These are the rays that hit the side, so the density is proportional to the cosine of the angle to the normal-- they are Lambertian!   This is not obvious.   I will punt on justifying that for now.

So for the xy-plane side above, the normal is +z and the ray direction is a uniform random point on a disk projected onto the hemisphere:

float radius = sqrt(r());          // sqrt gives uniform area density on the disk
float theta = 2*M_PI*r();
float x = radius*cos(theta);
float y = radius*sin(theta);
float z = sqrt(1-x*x-y*y);         // project the disk point up onto the hemisphere
ray_direction = vec3(x,y,z);

Now we need that for each of the six sides.

We could probably find symmetries to have 3 cases, or maybe even a loop, but I personally would probably not bother because me trying to be clever usually ends poorly...
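
Putting the pieces together for one side, here's a minimal sketch (the vec3 struct and the r() helper are stand-ins for whatever your renderer uses); the other five sides follow the same pattern with the axes permuted:

#include <math.h>
#include <stdlib.h>

double r() { return rand() / (RAND_MAX + 1.0); }    // uniform in [0,1)

struct vec3 { double x, y, z; };

// A uniform random ray entering the box [0,X]x[0,Y]x[0,Z] through
// its z = 0 side (the candidate == 0 case above).
void sample_ray_z0(double X, double Y, vec3& ray_entry, vec3& ray_direction) {
    ray_entry = {X * r(), Y * r(), 0.0};
    double radius = sqrt(r());               // uniform point on the unit disk...
    double theta = 2.0 * M_PI * r();
    double x = radius * cos(theta);
    double y = radius * sin(theta);
    double z = sqrt(1.0 - x * x - y * y);    // ...projected up to the hemisphere
    ray_direction = {x, y, z};               // Lambertian about the +z normal
}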




Wednesday, March 14, 2018

Egyptian estimates of PI

I saw a neat tweet about the estimate the Egyptians used for PI (the Rhind papyrus takes the area of a circle with diameter D to be ((8/9)D)^2, which amounts to PI =approx= 4*(8/9)^2 =approx= 3.16).

This is all my speculation, and maybe a math history buff can enlighten me, but the D^2 dependence is something they would discover pretty naturally.   Including the constant inside the parentheses (before squaring) is, I would argue, just as natural as having it outside, so let's go with that for now.   So was there a nearby better fraction?   How well did the Egyptians do?   A brute force program should tell us.

We will use the ancient programming language C:

#include <math.h>
#include <stdio.h>
int main() {
    double min_error = 10;
    for (int denom = 1; denom < 10000; denom++) {
        for (int num = 1; num < denom; num++) {
            double approx = 2*(double)num/(double)denom;
            approx = approx*approx;
            double error2 = M_PI-approx;
            error2 = error2*error2;
            if (error2 < min_error) {
                min_error = error2;
                printf("%d/%d %f\n", num, denom, 4.0*num*num/((double)denom*denom));
            }
        }
    }
    return 0;
}


This produces output:

1/2 1.000000
2/3 1.777778
3/4 2.250000
4/5 2.560000
5/6 2.777778
6/7 2.938776
7/8 3.062500
8/9 3.160494
23/26 3.130177
31/35 3.137959
39/44 3.142562
109/123 3.141252
148/167 3.141597
4401/4966 3.141588
4549/5133 3.141589
4697/5300 3.141589
4845/5467 3.141589
4993/5634 3.141589
5141/5801 3.141590
5289/5968 3.141590
5437/6135 3.141590
5585/6302 3.141590
5733/6469 3.141590
5881/6636 3.141591
6029/6803 3.141591
6177/6970 3.141591
6325/7137 3.141591
6473/7304 3.141591
6621/7471 3.141591
6769/7638 3.141591
6917/7805 3.141592
7065/7972 3.141592
7213/8139 3.141592
7361/8306 3.141592
7509/8473 3.141592
7657/8640 3.141592
7805/8807 3.141592
7953/8974 3.141593
8101/9141 3.141593
8249/9308 3.141593
8397/9475 3.141593
8545/9642 3.141593


So 8/9 was already pretty good, and you need to get to 23/26 before you do any better!   I'd say the Egyptians did extremely well.

What if they had put the constants outside the parens?   How well could they have done?   We can change two of the lines above to:

double approx = 4*(double)num/(double)denom;  //approx = approx*approx;

and the printf to:

printf("%d/%d %f\n", num, denom, 4*float(num)/float(denom));

And we get:

1/2 2.000000
2/3 2.666667
3/4 3.000000
4/5 3.200000
7/9 3.111111
11/14 3.142857
95/121 3.140496
106/135 3.140741
117/149 3.140940
128/163 3.141104
139/177 3.141243
150/191 3.141361
161/205 3.141464
172/219 3.141552
183/233 3.141631
355/452 3.141593


So 7/9 is not bad!   And 11/14 is even better (that's 22/7 in disguise).   So there's no clear winner here on whether the rational constant should be inside the parens or not.


Friday, December 29, 2017

Rendering the moon, sun, and sky

A reader asked me about rendering the moon using a path tracer.   This has been done by several people and what's coolest about it is that you can do the whole thing with four spheres and not a lot of data (assuming you don't need clouds anyway).

First, you will need to deal with the atmosphere, which is most easily handled spectrally rather than in RGB because scattering has simple wavelength-based formulas.   But you'll also have an RGB texture for the moon, so I would use the lazy spectral method.

Here are the four spheres-- the atmosphere sphere and the Earth share the same center.   Not to scale (speaking of which, choose sensible units like kilometers or miles, and I would advise making everything a double rather than float).

The atmosphere can be almost arbitrarily complicated, but I would advise making it all Rayleigh scatterers with constant density.   You can add more complicated mixtures and densities later.   To set the constant density, just try to get the overall opacity about right.   A random web search yields this image from Martin Chaplin:


http://www1.lsbu.ac.uk/water/images/sun.gif

This suggests an overall transmission somewhere between 0.5 and 0.7 (which is probably good enough-- the constant-density assumption is probably a bigger limitation of the atmospheric model).   In any case I would use the "collision" method, where the atmosphere looks like a solid object to your software and exponential attenuation is implicit.

For the Sun you'll need the spectral radiance for when a ray hits it.   If you use the lazy binned RGB method and don't worry about absolute magnitudes (because you'll tone map later anyway), you can eyeball the above graph and for [400-500, 500-600, 600-700]nm use something like [0.6, 0.8, 1.0].   If you want to maintain absolute units (not a bad idea-- it's good to do some unit tests on things like the luminance of the moon or sky), data for the sun is available lots of places, but be careful to make sure it is spectral radiance or convert it to that (radiometry is a pain).

For the moon you will need a BRDF and a texture to modulate it.   For a first pass use Lambertian, but that will not give you the nice constant-color moon.   This paper by Yapo and Cutler has some great moon renderings, and they use the BRDF that Jensen et al. suggest.

Texture maps for the moon, again from a quick google search, are here.

The Earth you can make black, or give it a texture if you want earthshine.   I ignore atmospheric refraction-- see Yapo and Cutler for more on that.

For a path tracer with a collision method, as I prefer, and implicit shadow rays (so the sun directions are more likely to be sampled, but all rays are just scattered rays), the program would look something like this:

For each pixel
    For each viewing ray, choose a random wavelength
        Send it into the (moon, atmosphere, earth, sun) list of spheres
        If it hits, scatter according to the pdf (simplest would be half isotropic and half to the sun)

The most complicated object above is the atmosphere sphere, where the probability of a collision per unit length is proportional to 1/lambda^4.   I would make the Rayleigh scattering isotropic just for simplicity, but using the real phase function isn't that much harder.
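
As a concrete sketch of that collision method (the constants here are made-up tuning knobs, not measured values-- scale them until the overall transmission lands near the 0.5-0.7 above):

#include <math.h>
#include <stdlib.h>

double r01() { return rand() / (RAND_MAX + 1.0); }

// Distance to the next collision in a constant-density Rayleigh atmosphere,
// with extinction scaling as 1/lambda^4.  sigma_ref is the (hypothetical)
// extinction per km at lambda_ref nm.
double collision_distance(double lambda_nm) {
    double sigma_ref = 0.01, lambda_ref = 550.0;
    double ratio = lambda_ref / lambda_nm;
    double sigma = sigma_ref * ratio * ratio * ratio * ratio;
    return -log(1.0 - r01()) / sigma;    // exponential attenuation is implicit
}

If the sampled distance is past the ray's exit point from the atmosphere sphere, the ray escapes without scattering.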

The picture below from this paper was generated using the techniques described above with no moon-- just the atmosphere.



There-- brute force is great-- get the computer to do the work (note, I already thought that way before I joined a hardware company).

If you generate any pictures, please tweet them to me!


Wednesday, December 6, 2017

Lazy spectral rendering

If you have to do spectral rendering (so light wavelengths and not just RGB internal computations) I am a big fan of making your life simpler by doing two lazy moves:

1. Each ray gets its own wavelength
2. Use a 3-element piecewise constant approximation for most of the spectra, and make all the XYZ tristimulus stuff implicit

First, here's how to do it "right".   You can skip this part-- I'll put it in brown so it's easy to skip.   We want some file of RGB pixels, like sRGB.   Look up the precise definition of sRGB in terms of XYZ.   Look up the precise definition of XYZ (if you must do that because you are doing some serious appearance modeling, use Chris Wyman's approximation).   You will have three functions of wavelength x(), y(), and z().   X is, for example:

X = k*INTEGRAL x(lambda) L(lambda) d-lambda

If you use one wavelength per ray, do it randomly and do Monte Carlo: lambda = 400 + 300*r01(), so pdf(lambda) = 1/300

X =approx=  k*300*x(lambda) L(lambda)

You can use the same rays to approximate Y and Z because x(), y(), and z() partially overlap.
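
In code that estimator is a one-liner.   Here's a sketch where x_bar and L are crude stand-ins (the real thing would use tabulated matching functions, or Wyman's fits, and your renderer's returned radiance):

#include <stdlib.h>

double r01() { return rand() / (RAND_MAX + 1.0); }

// Crude box stand-in for the CIE x() matching function.
double x_bar(double lambda) { return (lambda > 550 && lambda < 650) ? 1.0 : 0.0; }

// Stand-in for the spectral radiance the path tracer returns for this ray.
double L(double lambda) { return 1.0; }

// One-sample Monte Carlo estimate of X = k*INTEGRAL x(lambda) L(lambda) d-lambda
double estimate_X(double k) {
    double lambda = 400.0 + 300.0 * r01();    // pdf(lambda) = 1/300 on [400,700)
    return k * 300.0 * x_bar(lambda) * L(lambda);
}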

Now read in your model and convert all RGB triples to spectral curves.    How?   Don't ask me.   Seems like overkill so let's be lazy.

OK now let's be lazier than that.    This is a trick we used to use at the U of Utah in the 1990s.   I have no idea what its origins are.   Do this:

R =approx= L(lambda)

where lambda is a random wavelength in [600,700]nm

Do the same for G, B with random wavelengths in [500,600] and [400,500] respectively.

When you hit an RGB texture or material, just assume that it's a piecewise constant spectrum with the same spectral regions as above.   If you have a formula or real spectral data (for example, Rayleigh scattering or an approximation to the refractive index of a prism) then use that.
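
Here's the whole lazy scheme as a sketch (the names are mine, not from any particular renderer):

#include <stdlib.h>

double r01() { return rand() / (RAND_MAX + 1.0); }

struct RGB { double r, g, b; };

// Each ray carries a single wavelength, uniform in [400,700)nm.
double random_wavelength() { return 400.0 + 300.0 * r01(); }

// Treat an RGB reflectance as piecewise constant over three bands:
// B covers [400,500), G covers [500,600), R covers [600,700).
double reflectance_at(RGB albedo, double lambda) {
    if (lambda < 500.0) return albedo.b;
    if (lambda < 600.0) return albedo.g;
    return albedo.r;
}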

This will have wildly bad behavior in the worst case.   But in practice I have always found it to work well.   As an empirical test for an NVIDIA project, I tried it on a simple case: the Macbeth Color Checker spectra under flat white light.   Here's the full spectral rendering using the real spectral curves of the checker and XYZ->RGB conversion and all that done "right":

xyz.png

And here it is with the hack, using just the 3 piecewise constant spectra for the colors and the RGB integrals above.

rgb.png

That is different, but my belief is that the difference is no bigger than the intrinsic errors in input data, tone mapping, and display variation in 99% of situations.   One nice thing is that it's pretty easy to convert an RGB renderer to a spectral renderer this way.



Sunday, April 9, 2017

Email reply on BRDF math

I got some email asking about using BRDFs in a path tracer and thought my reply might be helpful to those learning path tracing.

Each ray tracing toolkit does this a little differently.   But they all have the same pattern:

color = BRDF(random direction) * cosine / pdf(random direction)

The complications are:

0. That formula comes from Monte Carlo integration, which takes a bit to wrap your mind around.

1. The units of the BRDF are a bit odd, and it's defined as a function over the sphere cross sphere, which is confusing.

2. pdf() is a function of direction and is somewhat arbitrary, though you get less noise if it is kind of like the BRDF in shape.

3. Even once you know what pdf() is for a given BRDF, you need to be able to generate random_direction so that it is distributed like pdf().

Those 4 together are a bit overwhelming.   So if you are in this for the long haul, I think you just need to really grind through it all.   #0 is best absorbed in 1D first, then 2D, then graduate to the sphere. 
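
To make the pattern concrete, here's the simplest case worked as a sketch (toy names of mine, not any particular toolkit's API): a Lambertian BRDF paired with a cosine-weighted pdf, where the cosines and pis all cancel.

#include <math.h>

// color = BRDF * cosine / pdf
// Lambertian BRDF = albedo/pi; cosine sampling gives pdf = cos(theta)/pi,
// so the weight collapses to just the albedo.
double scatter_weight(double albedo, double cos_theta) {
    double brdf = albedo / M_PI;
    double pdf = cos_theta / M_PI;
    return brdf * cos_theta / pdf;    // == albedo
}

That cancellation is exactly why cosine sampling is the usual pairing for Lambertian surfaces.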

Wednesday, December 28, 2016

Bug in my Schlick code

In a previous post I talked about my debugging of refraction code.   In that ray tracer I was using linear polarization and used these full Fresnel equations:

Ugh, those are awful.   For this reason, and because polarization doesn't matter that much for most appearance work, most ray tracers use R = (Rs+Rp)/2.   That's a very smooth function, and Christophe Schlick proposed a nice simple approximation that is quite accurate:

R = R0 + (1-R0)(1-cosTheta)^5

where R0 = ((1-n)/(1+n))^2 is the reflectance at normal incidence.

A key issue is that Theta is the **larger** angle.   For example, in my debugging case (drawn with limnu, which has some nice new features that made this easy):

The 45 degree angle is the one to use.   This is true on the right and the left-- the reflectivity is symmetric.   In the case where we only have the 30 degree angle, we need to convert to the other angle using Snell's Law: Theta = asin(sqrt(2)*sin(30 degrees)).

The reason for this post is that I have this wrong in my book Ray Tracing in One Weekend:


Note that the first case (assuming outward normals) is the one on the left, where the dot product is cos(30 degrees).   The "correction" is messed up.   So why does it "work"?   The reflectances are small for most theta, and they will be small for most of the incorrect thetas too.   Total internal reflection will still be right, so the visual differences will be plausible.
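
For reference, here's a sketch of the corrected logic (mine, not the book's exact code): apply Schlick with the larger angle, converting via Snell's law when the ray starts in the denser medium.

#include <math.h>

double schlick(double cosine, double ref_idx) {
    double r0 = (1 - ref_idx) / (1 + ref_idx);
    r0 = r0 * r0;
    return r0 + (1 - r0) * pow(1 - cosine, 5);
}

// If we only know the angle inside the denser medium (index n), convert
// to the cosine of the larger, outside angle via Snell's law first.
double outside_cosine(double cos_inside, double n) {
    double sin_outside = n * sqrt(1.0 - cos_inside * cos_inside);
    if (sin_outside >= 1.0) return 0.0;    // total internal reflection; schlick(0) == 1
    return sqrt(1.0 - sin_outside * sin_outside);
}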

Thanks to Ali Alwasiti (@vexe666) for spotting my mistake!