Friday, December 29, 2017

Rendering the moon, sun, and sky

A reader asked me about rendering the moon using a path tracer.   This has been done by several people and what's coolest about it is that you can do the whole thing with four spheres and not a lot of data (assuming you don't need clouds anyway).

First, you will need to deal with the atmosphere, which is most easily handled spectrally rather than in RGB because scattering has simple wavelength-based formulas.   But you'll also have an RGB texture for the moon, so I would use the lazy spectral method (described in the next post below).

Here are the four spheres-- the atmosphere sphere and the Earth share the same center.   Not to scale (speaking of which, choose sensible units like kilometers or miles, and I would advise making everything a double rather than float).
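For concreteness, here is a minimal sketch of that setup in kilometers using doubles.   The radii and distances are rough textbook values, and the vec3/sphere structs are just stand-ins for whatever types your renderer already has:

// Minimal stand-in types; substitute your renderer's own vec3/sphere.
struct vec3 { double x, y, z; };
struct sphere { vec3 center; double radius; };

// Rough physical values, in kilometers, all doubles.
const double earth_radius      = 6371.0;
const double atmosphere_radius = earth_radius + 100.0;   // ~100 km thick shell (my assumption)
const double moon_radius       = 1737.0;
const double sun_radius        = 696000.0;
const double moon_distance     = 384400.0;               // Earth center to Moon center
const double sun_distance      = 1.496e8;                 // Earth center to Sun center

const vec3   earth_center{0.0, 0.0, 0.0};
const sphere earth      {earth_center, earth_radius};
const sphere atmosphere {earth_center, atmosphere_radius};  // same center as the Earth
const sphere moon       {{moon_distance, 0.0, 0.0}, moon_radius};
const sphere sun        {{0.0, sun_distance, 0.0}, sun_radius};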

The atmosphere can be almost arbitrarily complicated, but I would advise making it all Rayleigh scatterers with constant density.  You can also add more complicated mixtures and densities.   To set the constant density, just try to get the overall opacity about right.   A random web search yields this image from Martin Chaplin:


 http://www1.lsbu.ac.uk/water/images/sun.gif

This means something between 0.5 and 0.7 (which is probably good enough-- a constant-density atmosphere is probably a bigger limitation).   In any case I would use the "collision" method, where the atmosphere looks like a solid object to your software and exponential attenuation is implicit.
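If it helps, here is one way to pick that constant scattering coefficient: decide what fraction of light should survive a straight trip through the whole atmosphere and invert Beer's law.   This is just a sketch-- the 100 km thickness is my assumption, and the formula takes the transmitted fraction however you read the number off the graph:

#include <cmath>

// Choose a constant scattering coefficient so that a ray going straight
// through the whole constant-density atmosphere is attenuated to the
// target transmittance T: exp(-sigma * thickness) = T.
double sigma_from_transmittance(double T, double thickness_km) {
    return -std::log(T) / thickness_km;   // units: 1/km
}

// Example: sigma_from_transmittance(0.6, 100.0) is about 0.005 per km.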

For the Sun you'll need the spectral radiance for when a ray hits it.   If you use the lazy binned RGB method and don't worry about absolute magnitudes (because you'll tone map later anyway), you can eyeball the above graph and use [0.6, 0.8, 1.0] for the [400-500, 500-600, 600-700] nm bins.   Maintaining absolute units is not a bad idea either-- it's good to do some unit tests on things like the luminance of the moon or sky.   Data for the sun is available in lots of places, but be careful to make sure it is spectral radiance, or convert it to that (radiometry is a pain).
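A tiny sketch of that lazy binned sun, with the eyeballed numbers treated as relative (not absolute) radiance:

// Relative spectral radiance of the sun in three 100 nm bins, eyeballed
// from the plot above; absolute scale is ignored since we tone map later.
double sun_radiance(double lambda_nm) {
    if (lambda_nm < 500.0) return 0.6;   // 400-500 nm
    if (lambda_nm < 600.0) return 0.8;   // 500-600 nm
    return 1.0;                          // 600-700 nm
}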

For the moon you will need a BRDF and a texture to modulate it.   For a first pass use Lambertian, but that will not give you the nice constant-color moon.   This paper by Yapo and Cutler has some great moon renderings, and they use the BRDF that Jensen et al. suggest.

Texture maps for the moon, again from a quick Google search, are here.

The Earth you can make black, or give it a texture if you want earthshine.   I ignore atmospheric refraction-- see Yapo and Cutler for more on that.

For a path tracer with a collision method (as I prefer) and implicit shadow rays (so sun directions are more likely to be sampled, but all rays are just scattered rays), the program would look something like this:

For each pixel
     For each viewing ray, choose a random wavelength
          send it into the (moon, atmosphere, earth, sun) list of spheres
          if it hits something, scatter according to the pdf (simplest would be half isotropic and half toward the sun)

The most complicated object above would be the atmosphere sphere where the probability of hitting per unit length would be proportional to (1/lambda^4).    I would make the Rayleigh scattering isotropic just for simplicity, but using the real phase function isn't that much harder.
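As a sketch, the atmosphere's collision logic might look like the following: scale an assumed reference coefficient (sigma_ref, my placeholder) by (550/lambda)^4 and sample an exponential free path; if the sampled distance is shorter than the ray's segment inside the atmosphere, a scattering event happens there.

#include <cmath>
#include <random>

// Rayleigh-like coefficient: proportional to 1/lambda^4, anchored at an
// assumed reference value sigma_ref (1/km) at 550 nm.
double sigma_scatter(double lambda_nm, double sigma_ref) {
    double r = 550.0 / lambda_nm;
    return sigma_ref * r * r * r * r;
}

// Sample a collision distance along a ray segment of length segment_km that
// lies inside the constant-density atmosphere.  Returns true (and sets t_km)
// if the ray scatters before leaving the segment.
bool sample_collision(double lambda_nm, double sigma_ref, double segment_km,
                      std::mt19937& rng, double& t_km) {
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    double sigma = sigma_scatter(lambda_nm, sigma_ref);
    t_km = -std::log(1.0 - u01(rng)) / sigma;   // exponential free path
    return t_km < segment_km;
}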

The picture below from this paper was generated using the techniques described above with no moon-- just the atmosphere.



There-- brute force is great-- get the computer to do the work (note, I already thought that way before I joined a hardware company).

If you generate any pictures, please tweet them to me!


Wednesday, December 6, 2017

Lazy spectral rendering

If you have to do spectral rendering (so light wavelengths and not just RGB internal computations) I am a big fan of making your life simpler by doing two lazy moves:

1. Each ray gets its own wavelength
2. Use a 3-element piecewise-constant approximation for most of the spectra, and make all the XYZ tristimulus stuff implicit

First, here's how to do it "right".   You can skip this part-- I'll put it in brown so it's easy to skip.  We want some file of RGB pixels, like sRGB.   Look up the precise definition of sRGB in terms of XYZ.   Look up the precise definition of XYZ (or, if you must do this because you are doing some serious appearance modeling, use Chris Wyman's approximation).   You will have three functions of wavelength x(), y(), and z().   X, for example, is:

X = k*INTEGRAL x(lambda) L(lambda) d-lambda

If you use one wavelength per ray, do it randomly and do Monte Carlo: lambda = 400 + 300*r01(), so pdf(lambda) = 1/300

X =approx=  k*300*x(lambda) L(lambda)

You can use the same rays to approximate Y and Z because x(), y(), and z() partially overlap.
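As a sketch, the estimator for one pixel might look like the following, where xbar/ybar/zbar stand in for the CIE curves (tabulated data or an analytic fit) and radiance_along_ray is whatever your path tracer returns for a single-wavelength ray; both are assumed to exist elsewhere, and the constant k is dropped:

#include <random>

// Assumed to exist elsewhere: CIE color matching functions and the
// renderer's single-wavelength radiance estimate.
double xbar(double lambda_nm);
double ybar(double lambda_nm);
double zbar(double lambda_nm);
double radiance_along_ray(double lambda_nm);   // L(lambda) for one sample

// Monte Carlo estimate of X, Y, Z for one pixel using n rays, each with its
// own uniformly chosen wavelength in [400, 700] nm (pdf = 1/300).
void estimate_xyz(int n, std::mt19937& rng, double& X, double& Y, double& Z) {
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    X = Y = Z = 0.0;
    for (int i = 0; i < n; ++i) {
        double lambda = 400.0 + 300.0 * u01(rng);
        double L = radiance_along_ray(lambda);
        X += 300.0 * xbar(lambda) * L;   // dividing by pdf = multiplying by 300
        Y += 300.0 * ybar(lambda) * L;
        Z += 300.0 * zbar(lambda) * L;
    }
    X /= n; Y /= n; Z /= n;
}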

Now read in your model and convert all RGB triples to spectral curves.    How?   Don't ask me.   Seems like overkill so let's be lazy.

OK now let's be lazier than that.    This is a trick we used to use at the U of Utah in the 1990s.   I have no idea what its origins are.   Do this:

R =approx= L(lambda)

where lambda is a random wavelength in [600,700]nm

Do the same for G and B with random wavelengths in [500,600] and [400,500] respectively.

When you hit an RGB texture or material, just assume that it's a piecewise constant spectrum with the same spectral regions as above.   If you have a formula or real spectral data (for example, Rayleigh scattering or an approximation to the refractive index of a prism) then use that.
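Putting the lazy version together, a sketch might look like this: every RGB material is read through a three-bin lookup, and each primary ray carries one wavelength drawn from the bin of the channel it is estimating.   The helper names here are mine, not anything standard:

#include <random>

// Treat an RGB triple as a piecewise-constant spectrum:
// B covers 400-500 nm, G covers 500-600 nm, R covers 600-700 nm.
double rgb_to_spectrum(double r, double g, double b, double lambda_nm) {
    if (lambda_nm < 500.0) return b;
    if (lambda_nm < 600.0) return g;
    return r;
}

// Pick the wavelength for one primary ray: choose the channel being
// estimated, then a random wavelength inside that channel's 100 nm bin.
double sample_channel_wavelength(int channel /*0=R, 1=G, 2=B*/, std::mt19937& rng) {
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    double lo = (channel == 0) ? 600.0 : (channel == 1) ? 500.0 : 400.0;
    return lo + 100.0 * u01(rng);
}

// The red pixel value is then just the average of L(lambda) over rays whose
// lambda came from [600, 700] nm, and likewise for green and blue.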

This will have wildly bad behavior in the worst case.   But in practice I have always found it to work well.   As an empirical test in an NVIDIA project I tested it on a simple case, the Macbeth Color Checker spectra under flat white light.   Here's the full spectral rendering using the real spectral curves of the checker and XYZ->RGB conversion and all that done "right":

[Image: xyz.png -- the full spectral rendering]

And here it is with the hack, using just 3 piecewise-constant spectra for the colors and the RGB integrals above.

[Image: rgb.png -- the lazy piecewise-constant version]

That is different, but my belief is that the difference is no bigger than the intrinsic errors from input data, tone mapping, and display variation in 99% of situations.   One nice thing is that it's pretty easy to convert an RGB renderer to a spectral renderer this way.



Sunday, April 9, 2017

Email reply on BRDF math

I got some email asking about using BRDFs in a path tracer and thought my reply might be helpful to those learning path tracing.

Each ray tracing toolkit does this a little differently.   But they all have the same pattern:

color = BRDF(random direction) * cosine / pdf(random direction)

The complications are:

0. That formula comes from Monte Carlo integration, which takes a bit to wrap your mind around.

1. The units of the BRDF are a bit odd, and it's defined as a function over the sphere cross the sphere, which is confusing.

2. pdf() is a function of direction and is somewhat arbitrary, though you get less noise if it is similar in shape to the BRDF.

3. Even once you know what pdf() is for a given BRDF, you need to be able to generate random directions so that they are distributed like pdf().

Those 4 together are a bit overwhelming.   So if you are in this for the long haul, I think you just need to really grind through it all.   #0 is best absorbed in 1D first, then 2D, then graduate to the sphere. 
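As one concrete instance of the pattern above (not the only way to do it), here is a sketch for a Lambertian BRDF with cosine-weighted hemisphere sampling; with that pdf the cosine and the pdf cancel exactly, which is the whole point of matching the pdf to the BRDF:

#include <cmath>
#include <random>

constexpr double kPi = 3.14159265358979323846;

struct vec3 { double x, y, z; };

// Cosine-weighted hemisphere sample around the z axis: pdf(dir) = cos(theta)/pi.
// (Rotating it into the actual shading frame is assumed to happen elsewhere.)
vec3 cosine_sample_hemisphere(std::mt19937& rng) {
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    double r1 = u01(rng), r2 = u01(rng);
    double phi = 2.0 * kPi * r1;
    double r = std::sqrt(r2);
    return { r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0 - r2) };
}

// The general one-sample pattern: color = BRDF(dir) * cos(theta) / pdf(dir).
// For Lambertian with this pdf, BRDF = albedo/pi and pdf = cos/pi, so the
// estimate collapses to albedo * (radiance arriving from the sampled direction).
double lambertian_estimate(double albedo, double radiance_from_sampled_dir) {
    double brdf = albedo / kPi;
    double cos_theta_over_pdf = kPi;     // cos / (cos/pi) = pi
    return brdf * cos_theta_over_pdf * radiance_from_sampled_dir;
}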

Wednesday, December 28, 2016

Bug in my Schlick code

In a previous post I talked about my debugging of refraction code.   In that ray tracer I was using linear polarization and used these full Fresnel equations:

Ugh those are awful.   For this reason and because polarization doesn't matter that much for most appearance, most ray tracers use R = (Rs+Rp)/2.    That's a very smooth function and Christophe Schlick proposed a nice simple approximation that is quite accurate:

R = R0 + (1-R0)(1-cosTheta)^5

where R0 is the reflectance at normal incidence, ((n1-n2)/(n1+n2))^2.

A key issue is that the Theta is the **larger** angle.   For example in my debugging case (drawn with limnu which has some nice new features that made this easy):

The 45 degree angle is the one to use.   This is true on the right and the left-- the reflectivity is symmetric.   In the case where we only have the 30 degree angle, we need to convert to the other angle using Snell's Law: Theta = asin(sqrt(2)*sin(30 degrees)).
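A sketch of that angle bookkeeping in code: given the cosine on the dense side and the relative index (sqrt(2) is just the example value from the figure), convert to the cosine on the sparse side with Snell's law, bail out on total internal reflection, and only then apply Schlick with the larger angle.

#include <cmath>

// Schlick's approximation, using the cosine of the LARGER of the two angles
// (the angle on the lower-index side of the interface).
double schlick(double cos_theta, double r0) {
    return r0 + (1.0 - r0) * std::pow(1.0 - cos_theta, 5.0);
}

// Reflectance when the ray travels inside the denser medium (relative index n,
// e.g. sqrt(2) above) and hits the interface at the angle theta_dense.
// Snell: sin(theta_sparse) = n * sin(theta_dense).
double reflectance_from_dense_side(double cos_theta_dense, double n) {
    double sin2_sparse = n * n * (1.0 - cos_theta_dense * cos_theta_dense);
    if (sin2_sparse > 1.0)
        return 1.0;                              // total internal reflection
    double cos_theta_sparse = std::sqrt(1.0 - sin2_sparse);
    double r0 = ((n - 1.0) / (n + 1.0)) * ((n - 1.0) / (n + 1.0));
    return schlick(cos_theta_sparse, r0);
}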

The reason for this post is that I have this wrong in my book Ray Tracing in One Weekend:


Note that the first case (assuming outward normals) is the one on the left, where the dot product is cos(30 degrees).  The "correction" is messed up.    So why does it "work"?    The reflectances are small for most theta, and they will be small for most of the incorrect theta too.   Total internal reflection will still be right, so the visual differences will be plausible.

Thanks to Ali Alwasiti (@vexe666) for spotting my mistake!

Sunday, September 18, 2016

A new programmer's attitude should be like an artist's or musician's

Last year I gave a talk at a CS education conference in Seattle called "Drawing Inspiration from the Teaching of Art".   That talk was aimed at educators and argued that historically CS education was modeled on math and/or physics education, that this was a mistake, and that we should instead base it on art.   I expected a lot of pushback, but many of the attendees had a "duh-- I have been doing that for 20 years" reaction.

This short post is aimed at students of CS but pursues the same theme.   If you were an art or music student your goal would be to be good at ONE THING in TEN YEARS.   For a music student that might be singing, composition, music theory, or playing a particular instrument.   Any one of those things is very hard.   Your goal would be to find the one thing you resonate with and then keep pushing your skills with lots and lots of practice.  Sure, you could become competent in the other areas, but your goal is to be a master of one.   Similarly, as an artist you would want to become great at one thing, be it printmaking, painting, drawing, pottery, sculpture, or art theory.   Maybe you become great at two things, but if so you are a Michelangelo-style unicorn and more power to you.

Even if you become great at one thing, you become great at it in your own way.    For example, in painting, Monet wanted Sargent to give up using black.   It is a good thing Sargent didn't.   This painting with Monet's palette would not be as good.   And Monet wouldn't have wanted to do that painting anyway!

Computer Science is not exactly art or music, but the underlying issues are the same.   First, it is HARD.   Never forget that.   Don't let some CS prof or brogrammer make you think you suck at it because you find it hard.   Second, you must become a master of the tools by both reading/listening and playing with them.   Most importantly, find the "medium" and "subject" where you have some talent and where it resonates with you emotionally.   If you love writing cute UI javascript tools and hate writing C++ graphics code, that doesn't make you a flawed computer scientist.   It is a gift in narrowing your search for your technical soul mate.   Love Scheme and hate C#, or vice-versa?   That is not a flaw but another productive step on your journey.   Finally, if you discover an idiosyncratic methodology that works for you and gets lots of pushback, ignore the pushback.   Think van Gogh.    But keep the ear-- at the end of the day, CS is more about what works :)

Friday, July 15, 2016

I do have a bug

I questioned whether this was right:



I was concerned about the bright bottom and thought maybe there was a light tunnel effect.   I looked through my stuff and found a dented cube, and its bottom seemed to show total internal reflection:
The light tunnel effect might be happening, and there is a little glow under the cube, but you can't see it through the cube.   Elevating it a little does show that:

Digging out some old code and adding a cube yielded:
This is for debugging, so the noise is just because I didn't run to convergence.   That does look like total internal reflection on the bottom of the cube, but the back wall is similar to the floor.   Adding a sphere makes it more obvious:
Is this right?   Probably.



Wednesday, July 13, 2016

Always a hard question: do I have a bug?

In testing some new code involving box intersection I prepared a Cornell Box with a glass block, and the first question is "is this right?"    As usual, I am not sure.    Here's the picture (done with a bajillion samples so I won't get fooled by outliers):


It's smooth anyway.   The glass is plausible to my eye.   The strangest thing is how bright the bottom of the glass block is.   Is it right?    At first I figured it was a bug.   But maybe that prism operates as a light tunnel (like fiber optics), so the bottom is the same color as a diffuse square on the prism top would be.   So now I will test that hypothesis somehow (Google image search?   Find a glass block?) and if that phenomenon is right and of about the right magnitude, I'll declare victory.