Tuesday, April 14, 2015

Sampling a disk revisited

In almost all variants of Monte Carlo renderers, point samples are taken and averaged.  The most common example is a square pixel with a box filter (i.e., the average over the pixel is taken).  Underlying whatever screen coordinates you use, there is some "get ith point in square" or "get N points in square" function.   The square is usually [0,1)^2.  These functions always seem to add more ugliness to programs than I would expect, but we'll save that for another time.   Additionally, it's really multidimensional sampling (e.g., antialiased motion blur is 3D), a topic we will also defer.

There are five basic strategies for getting 2D samples on the unit square:
  1. Regular (lattice, sometimes rotated)
  2. Random
  3. Jittering (stratified sampling)
  4. QMC
  5. Optimized
Of these I suspect optimized (introduced by Don Mitchell back in the day) is the best, but that is still a topic of research.  I'm going for easy, so I will use jittered.

Jittering is usually done with a perfect-square number of samples because the samples form an SxS grid that is "jittered", or perturbed.  Pseudocode for jittered sampling with S^2 samples (e.g., S = 4 gives 16 samples) is:

Vec2 get_sample(int s, int sqrt_num_samples)
        int i = s % sqrt_num_samples
        int j = s / sqrt_num_samples
        return Vec2((i+drand48())/sqrt_num_samples, (j+drand48())/sqrt_num_samples)
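The same thing as compilable C++ (a sketch, assuming the Vec2 struct above; drand48 is from <cstdlib>):

    #include <cstdlib>   // drand48

    // Jittered sample s of sqrt_n*sqrt_n: pick the grid cell, then perturb within it.
    Vec2 get_sample(int s, int sqrt_n) {
        int i = s % sqrt_n;    // cell column
        int j = s / sqrt_n;    // cell row
        return Vec2{ (i + drand48()) / sqrt_n, (j + drand48()) / sqrt_n };
    }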

If we are going to sample a disk (like a lens), people typically use the unit disk (radius 1) centered at (0,0).    If we transform a sample (u,v) on [0,1)^2 to the disk, we can't do this:

Vec2 transform_to_disk(Vec2 on_square)
        theta = 2*PI*on_square.x
        r = on_square.y
        return Vec2(r*cos(theta), r*sin(theta))

That clumps samples near r=0: a uniform r puts as many samples inside the small disk r < 1/2 as in the much larger annulus r > 1/2.  Instead we need r = sqrt(on_square.y), which compensates for the distortion: the area inside radius r grows like r^2, so taking r = sqrt(v) makes the fraction of samples inside radius r equal to r^2, which is exactly uniform area density.
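In code, the corrected mapping might look like this (a sketch assuming the Vec2 above; M_PI is the POSIX pi constant):

    #include <cmath>    // cos, sin, sqrt

    // Map (u,v) in [0,1)^2 to the unit disk with uniform area density.
    Vec2 transform_to_disk(Vec2 on_square) {
        double theta = 2*M_PI*on_square.x;   // angle stays uniform
        double r = sqrt(on_square.y);        // sqrt compensates for area growing as r^2
        return Vec2{ r*cos(theta), r*sin(theta) };
    }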

However, there is a "concentric" mapping from the square to the disk that some say is better, and Dave Cline sent me (see the previous blog post) a nicely small piece of code for it, so I finally tried it.
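Since that code lives in the other post, here is a sketch along the lines of the concentric map (my reconstruction of the Shirley-Chiu construction, not necessarily Dave's exact code).  It sends concentric squares to concentric circles, which distorts less than the polar mapping above:

    // Concentric map: concentric squares go to concentric circles.
    Vec2 to_disk_concentric(Vec2 on_square) {
        double a = 2*on_square.x - 1;    // remap [0,1)^2 to [-1,1)^2
        double b = 2*on_square.y - 1;
        if (a == 0 && b == 0) return Vec2{0, 0};
        double r, phi;
        if (a*a > b*b) {                 // left and right wedges of the square
            r = a;
            phi = (M_PI/4)*(b/a);
        } else {                         // top and bottom wedges
            r = b;
            phi = (M_PI/2) - (M_PI/4)*(a/b);
        }
        // A negative r works out: (r, phi) and (-r, phi+pi) are the same point.
        return Vec2{ r*cos(phi), r*sin(phi) };
    }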

Here is an image with random sampling and 25 samples per pixel.
Random sampling 25 samples per pixel
I also tried 49 and 100 samples per pixel with the two different mappings to the disk discussed above.  I zoomed in (using cut and paste, so not a very precise test-- in my experience, if a precise test is needed, the effect is not big).  These are zoomed in 5X with pixel duplication:


25 samples per pixel, random, jittered, and jittered with concentric map.

49 samples per pixel, random, jittered, and jittered with concentric map.

100 samples per pixel, random, jittered, and jittered with concentric map.

All in all, for this case I do think the concentric map makes a SLIGHT improvement, but the big leap is clearly from jittering at all.   Still, Dave Cline's code is so simple that I think it is worth using.

Sunday, April 12, 2015

Rocket Golfing: Morgan McGuire's new iOS video game

I was (and still am) addicted to Angry Birds, but Rocket Golfing has displaced it for me.   This is Morgan McGuire's new iOS game that is just out, though I have been play testing it for the last month.  Here's his promo video:



I love it.   It's not only a really good game, it has an unbounded number of procedural levels.   I hope Morgan will publish how he did that.    I'm sure many of you are familiar with Morgan's work on console games and realistic rendering research, which is the context I know him from; we worked together at NVIDIA Research.   So I was surprised that he made a 2D kid/adult game with an educational bent that is so fun.

A personal thing I really like about it is that it takes me back to my first "wow" moment with video games, when I saw the coin-operated Space Wars over three decades ago.  The graphics were simple, but it had terrific gravitational physics and I really liked the gameplay.  Here's a screenshot from Wikipedia:
Screen Shot from Space Wars (Wikipedia)

Only later did I find out that Space Wars was the grandchild of the game Spacewar!, which is so old that it was released the year before I was born!   To give you an idea of how primeval programming was then, check out this quote from the Wikipedia article: "After Alan Kotok obtained some sine and cosine routines from DEC, Russell began coding, and, by February 1962, had produced his first version."  Apparently Spacewar! was played a lot at the University of Utah, and that is part of the Atari origin story.   I do like Morgan's game better, but I especially love the connections to OLD school computer graphics.   In addition, its space trivia brings back the excitement of the space race from when I was growing up (which thankfully seems to be getting new life, or so I hope).  I also like that I don't need to pump quarters into it-- it's pay once (12 quarters) with no in-app purchases or ads or any of that stuff.

Tuesday, April 7, 2015

Pixar's old tileable textures

Pixar's terrific 1993 tileable textures are now online.   850MB.




Friday, April 3, 2015

Cameras in a distribution ray tracer

You need two things in a distribution ray tracer camera.  The first is to gather light over a lens (a disk) rather than from a point.  The second is to be able to move and rotate the lens.

A very generic way to generate viewing rays for pixel (i,j) of a square image (W = H) in version 0.1 of the ray tracer, looking down the -z axis, is:

       vec3 ray_direction(-1.0 + 2.0*i/(W-1.0), -1.0 + 2.0*j/(H-1.0), -1.0);   // maps pixel (i,j) onto [-1,1]^2 on the z = -1 plane

The ray origin is just (0,0,0).   That can produce an image like this (see last blog posting). 
Distribution ray tracer with 100 non-stratified samples and a simple pinhole camera
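For concreteness, the whole pinhole loop might look like this (a sketch; trace is a stand-in for whatever your renderer does with a ray):

    // Pinhole camera: one ray per pixel (i,j), all from the origin.
    for (int j = 0; j < H; j++) {
        for (int i = 0; i < W; i++) {
            vec3 ray_origin(0.0, 0.0, 0.0);
            vec3 ray_direction(-1.0 + 2.0*i/(W-1.0), -1.0 + 2.0*j/(H-1.0), -1.0);
            // pixel(i,j) = trace(ray_origin, ray_direction);
        }
    }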

Now we just need to make the ray origin a random point on a radius-R disk in the xy plane centered at the origin:

   vec3 random_xy_disk() {
       vec3 p;
       do {   // rejection sampling: try points in the square until one lands in the disk
           p = vec3(2*drand48()-1, 2*drand48()-1, 0);
       } while (dot(p,p) > 1);
       return p;
   }

   vec3 origin = R*random_xy_disk();
   ray_direction *= focus_distance; // distance to where things are in focus
   ray_direction -= origin;

Yes, there are more efficient and compact ways to do that.  We're into getting the program done ASAP here :).   Now all the rays for a given pixel will converge at the focus distance along -z: the point focus_distance*ray_direction is the same for every sample in the pixel, and every new ray passes through it regardless of which lens point we picked, so geometry on that plane is sharp while everything nearer or farther blurs.

Now we need to move the camera lens to position "eye" (yeah, bad legacy naming).   We want to focus on a point at position "lookat" (there are many other viewing systems, like specifying the gaze direction; it's a matter of taste-- just use one!).   We need a "view up vector", which for a person would be any vector from the center of the skull up through their center-parted hair.  This will allow us to make an orthonormal basis uvw:

     vec3 w = eye - lookat;  // z axis goes behind us
     vec3 u = cross(vup, w); // points right (cross(w, vup) would flip the image)
     vec3 v = cross(w, u);   // points up
     float focus = w.length();   // focus on the lookat point
     u.MakeUnitVector();
     v.MakeUnitVector();
     w.MakeUnitVector();


Add a horizontal field of view so we can handle non-square images, and we get:
        
        float multiplier = tan(hfov/2);   // hfov in radians
        ray_origin = 0.2*random_xy_disk();   // 0.2 is the lens radius
        vec3 ray_direction(-1.0 + 2.0*float(i)/(W-1.0),
                           (float(H)/float(W))*(-1.0 + 2.0*float(j)/(H-1.0)),
                           -1.0);
        ray_direction *= vec3(multiplier, multiplier, 1.0);   // apply the field of view
        ray_direction = focus*ray_direction - ray_origin;     // converge at the focus plane
        ray_origin = eye + ray_origin.x()*u + ray_origin.y()*v + ray_origin.z()*w;   // add eye to land in world space
        ray_direction = ray_direction.x()*u + ray_direction.y()*v + ray_direction.z()*w;
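Wrapped up as a camera, the whole thing might look like this (a sketch; the Camera class and get_ray names are mine, and vec3 and random_xy_disk are as above):

    // Hypothetical camera class tying the pieces together.
    class Camera {
    public:
        vec3 eye, u, v, w;                       // lens center and orthonormal basis
        float lens_radius, focus, multiplier;    // multiplier = tan(hfov/2)
        int W, H;

        // Generate one primary ray for pixel (i,j).
        void get_ray(int i, int j, vec3& origin, vec3& direction) {
            vec3 lens_point = lens_radius*random_xy_disk();
            vec3 dir(-1.0 + 2.0*float(i)/(W-1.0),
                     (float(H)/float(W))*(-1.0 + 2.0*float(j)/(H-1.0)),
                     -1.0);
            dir *= vec3(multiplier, multiplier, 1.0);   // field of view
            dir = focus*dir - lens_point;               // converge at the focus plane
            origin = eye + lens_point.x()*u + lens_point.y()*v + lens_point.z()*w;
            direction = dir.x()*u + dir.y()*v + dir.z()*w;
        }
    };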



And that gives:
Depth of field, viewing,  100 unstratified samples.