
Sunday, May 3, 2015

Some light art at the Albany airport

Waiting for a plane, I checked out the art gallery.  It was not only impressive; it appears there are people who like caustics even more than rendering nerds do!   The LIT show is detailed at the link here.  Some highlights are below.

A moving caustic by Yael Erel

Another moving caustic by Yael Erel

The reflector causing the caustic

A shadow of a tangle of wires by Larry Kagan

Some wild shadows by Victoria Palermo


Tuesday, April 14, 2015

Sampling a disk revisited

In almost all variants of Monte Carlo renderers there are various point samples that are averaged.  The most common example is a square pixel with a box filter (i.e., the average within the pixel is taken).  Underlying whatever screen coordinates you use, there is some "get ith point in square" or "get N points in square" function.   The square is usually [0,1)^2.  These always seem to add more ugliness to programs than I would expect, but we'll save that for another time.   Additionally, it's really a multidimensional sampling problem (e.g., antialiased motion blur is 3D), a topic we will also defer.

There are five basic strategies for getting 2D samples on the unit square:
  1. Regular (lattice, sometimes rotated)
  2. Random
  3. Jittering (stratified sampling)
  4. QMC
  5. Optimized
Of these I suspect optimized (introduced by Don Mitchell back in the day) is the best, but that is still a topic of research.  I'm going for easy, so I will do jittered.

Jittering is usually done with a perfect square number of samples because it is an SxS grid that is "jittered" (perturbed).  Pseudocode for jittered sampling with S^2 samples (e.g., for S = 4, 16 samples) is:

Vec2 get_sample(int s, int sqrt_num_samples)
        int i = s % sqrt_num_samples
        int j = s / sqrt_num_samples
        return Vec2((i + drand48()) / sqrt_num_samples, (j + drand48()) / sqrt_num_samples)
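
For concreteness, a typical per-pixel loop that uses it might look like this (a minimal sketch; pixel_to_screen and trace are hypothetical stand-ins for whatever your renderer calls those steps):

    // average S*S jittered samples for pixel (i, j)
    int S = 4;                                        // sqrt of samples per pixel
    vec3 pixel_color(0, 0, 0);
    for (int s = 0; s < S*S; s++) {
        Vec2 uv = get_sample(s, S);                   // jittered point in [0,1)^2
        ray r = pixel_to_screen(i + uv.x, j + uv.y);  // hypothetical: pixel offset -> viewing ray
        pixel_color += trace(r);                      // hypothetical: shade the ray
    }
    pixel_color *= 1.0/float(S*S);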

If we are going to sample a disk (like a lens), people typically use a unit disk centered at (0,0) with radius 1.    If we transform a sample (u,v) on [0,1)^2 to the disk, we can't do this:

Vec2 transform_to_disk(Vec2 on_square)
        theta = 2*PI*on_square.x
        r = on_square.y                       // WRONG: see below
        return Vec2(r*cos(theta), r*sin(theta))

This is because it will clump up samples near r = 0.  Instead we need r = sqrt(on_square.y), which compensates for the distortion.
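
In code the corrected mapping is just that one change (same pseudocode style as above):

Vec2 transform_to_disk(Vec2 on_square)
        theta = 2*PI*on_square.x
        r = sqrt(on_square.y)                 // sqrt undoes the area distortion so samples are uniform
        return Vec2(r*cos(theta), r*sin(theta))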

However, there is a "concentric" mapping from the square to the disk that some say is better, and Dave Cline sent me (see the previous blog post) a nicely small piece of code for it, so I finally tried it.
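
For reference, here is a sketch of what that concentric-style mapping looks like (my reconstruction in C++-ish code, not necessarily the exact code Dave sent): concentric squares map to concentric circles, which keeps stratified samples well separated.

    // Map (u,v) in [0,1)^2 to the unit disk, concentric style.
    Vec2 to_unit_disk(double u, double v) {
        double a = 2.0*u - 1.0;                   // move to [-1,1]^2
        double b = 2.0*v - 1.0;
        if (a == 0.0 && b == 0.0) return Vec2(0.0, 0.0);   // avoid 0/0 at the center
        double r, theta;
        if (a*a > b*b) {                          // which "wedge" of the square are we in?
            r = a;
            theta = (PI/4.0)*(b/a);
        } else {
            r = b;
            theta = (PI/2.0) - (PI/4.0)*(a/b);
        }
        return Vec2(r*cos(theta), r*sin(theta));  // a negative r just flips to the opposite side
    }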

Here is an image with random sampling and 25 samples per pixel.
Random sampling 25 samples per pixel
I also tried 49 and 100 samples per pixel with the two different mappings to the disk discussed above.  I zoomed in (using cut and paste, so not a very precise test-- in my experience, if a precise test is needed, the effect is not big).  These are zoomed in 5X with pixel duplication:


25 samples per pixel, random, jittered, and jittered with concentric map.

49 samples per pixel, random, jittered, and jittered with concentric map.

100 samples per pixel, random, jittered, and jittered with concentric map.

All in all, for this case I do think the concentric map makes a SLIGHT improvement.  But the big leap is clearly from jittering at all.   Still, Dave Cline's code is so simple that I think it is worth it.

Sunday, April 12, 2015

Rocket Golfing: Morgan McGuire's new iOS video game

I was (and still am) addicted to Angry Birds, but Rocket Golfing has displaced it for me.   This is Morgan McGuire's new iOS game that is just out, but I have been play-testing it for the last month.  Here's his promo video:



I love it.   It's not only a really good game, it has an unbounded number of procedural levels.   I hope Morgan will publish how he did that.    I'm sure many of you are familiar with Morgan's work on console games and realistic rendering research, which is the context I know him from when we worked together at NVIDIA Research.   So I was surprised he made a 2D kids/adults game with an educational bent that is so fun.

A personal thing I really like about it is that it takes me back to my first "wow" moment with video games, when I saw the coin-operated Space Wars over three decades ago.  Its graphics were simple, but it had terrific gravitational physics and I really liked the gameplay.  Here's a screenshot from Wikipedia:
Screen Shot from Space Wars (Wikipedia)

Only later did I find out that Space Wars was the grandchild of the game Spacewar!, which is so old that it was released the year before I was born!   To give you an idea of how primeval programming was then, check out this quote from the Wikipedia article: "After Alan Kotok obtained some sine and cosine routines from DEC, Russell began coding, and, by February 1962, had produced his first version."  Apparently Spacewar! was played a lot at the University of Utah, and that is part of the Atari origin story.   I do like Morgan's game better, but I especially love the connections to OLD school computer graphics.   In addition, its space trivia brings back the excitement of the space race when I was growing up (and that excitement thankfully seems to be getting new breath, or so I hope).  I also like that I don't need to pump quarters into it-- it's pay once (12 quarters) with no in-app purchases or ads or any of that stuff.

Tuesday, April 7, 2015

Pixar's old tileable textures

Pixar's terrific 1993 tileable textures are now online.   850MB.




Friday, April 3, 2015

Cameras in a distribution ray tracer

You need two things in a distribution ray tracer camera.  The first is to gather light over a lens (a disk) rather than from a point.  The second is to move and rotate the lens.

A very generic way to generate viewing rays for pixel (i,j) for a square image with W = H in version 0.1 of the ray tracer, looking along the -z axis, is:

       vec3 ray_direction(-1.0 + 2.0*i/(W-1.0), -1.0 + 2.0*j/(H-1.0), -1.0);
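
(As a quick sanity check: pixel (0,0) gets direction (-1,-1,-1) and pixel (W-1,H-1) gets (1,1,-1), so the image plane is the square [-1,1]^2 sitting at z = -1.)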

The ray origin is just (0,0,0).   That can produce an image like this (see last blog posting). 
Distribution ray tracer with 100 non-stratified samples and a simple pinhole camera

Now we just need to make the ray origin a random point on the radius R XY disk centered at the origin:

   vec3 random_xy_disk() {
       vec3 p;
       do {
           p = vec3(2*drand48()-1,2*drand48()-1,0);
       } while (dot(p,p) > 1);
       return p;
   }

   vec3 ray_origin = R*random_xy_disk();
   ray_direction *= focus_distance;  // distance to where things are in focus
   ray_direction -= ray_origin;

Yes, there are more efficient and compact ways to do that.  We're into getting the program done ASAP here :).   Now all the rays for a given pixel will converge at the z distance focus_distance.

Now we need to move the camera lens to position "eye" (yeah, bad legacy naming).   We want to focus on a point at position "lookat" (there are many other viewing systems, such as specifying the gaze direction; it's a matter of taste-- just use one!).   We also need a "view up vector" vup, which for a person would be any vector from the center of the skull up through the part in their center-parted hair.  This lets us make an orthonormal basis uvw:

    vec3 w = eye - lookat;   // z axis goes behind us
    vec3 u = cross(vup, w);  // points right
    vec3 v = cross(w, u);    // points up
    focus = w.length();      // focus at the lookat point
    u.MakeUnitVector();
    v.MakeUnitVector();
    w.MakeUnitVector();


Add a horizontal field of view so we can handle non-square images, and we get:
        
        float multiplier = tan(hfov/2);
        ray_origin = 0.2*random_xy_disk();   // 0.2 is the lens radius
        vec3 ray_direction(-1.0 + 2.0*float(i)/(W-1.0), -1.0*H/W + 2.0*float(j)/(W-1.0), -1.0);
        ray_direction *= vec3(multiplier, multiplier, 1.0);
        ray_direction = focus*ray_direction - ray_origin;
        ray_origin = ray_origin.x()*u + ray_origin.y()*v + ray_origin.z()*w;
        ray_direction = ray_direction.x()*u + ray_direction.y()*v + ray_direction.z()*w;
        ray_origin += eye;   // move the lens to the eye position



And that gives:
Depth of field, viewing,  100 unstratified samples.
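
For my students, here is a minimal sketch that collects the pieces above into a single function (my consolidation, not production code; eye, lookat, vup, hfov, W, H, the 0.2 lens radius, and a ray(origin, direction) constructor are the same assumptions as above):

    // Sketch: generate one depth-of-field viewing ray for pixel (i, j).
    ray get_camera_ray(int i, int j) {
        vec3 w = eye - lookat;            // basis: w behind, u right, v up
        vec3 u = cross(vup, w);
        vec3 v = cross(w, u);
        float focus = w.length();         // focus at the lookat point
        u.MakeUnitVector(); v.MakeUnitVector(); w.MakeUnitVector();

        float multiplier = tan(hfov/2);
        vec3 ray_origin = 0.2*random_xy_disk();   // 0.2 is the lens radius
        vec3 ray_direction(-1.0 + 2.0*float(i)/(W-1.0), -1.0*H/W + 2.0*float(j)/(W-1.0), -1.0);
        ray_direction *= vec3(multiplier, multiplier, 1.0);
        ray_direction = focus*ray_direction - ray_origin;

        // rotate into the uvw basis and translate the lens to the eye
        ray_origin = ray_origin.x()*u + ray_origin.y()*v + ray_origin.z()*w + eye;
        ray_direction = ray_direction.x()*u + ray_direction.y()*v + ray_direction.z()*w;
        return ray(ray_origin, ray_direction);
    }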

Friday, March 27, 2015

Easy distribution ray tracer

I am having my graphics class at Westminster College (not a new job-- I am teaching a course for them as an adjunct-- so go buy my apps!) turn their Whitted-style ray tracer into a distribution ray tracer.   I think the way we are doing it is the easiest way possible, but I still managed to butcher the topic in lecture, so I am posting this mainly for my students.   But if anybody tries it or sees something easier, please weigh in.   Note these are not "perfect" distributions, but neither are uniform or Phong, really :)

Suppose we have a basic ray tracer where we send a shadow ray and a reflection ray.
The basic code (where rgb colors and geometric vectors are both vec3) will look like:
vec3 rayColor(ray r, int depth) {
    vec3 color(0)
    if (depth > maxDepth) return color
    if (hitsScene(r)) {  // somehow this passes back state like hitpoint p
         if (Rs > 0) { // Rs is rgb specular reflectance
                ray s(p, reflect(r, N))
                color += Rs*rayColor(s, depth+1)   // weight by the specular reflectance
         }
         if (Rd > 0) { //Rd is rgb diffuse reflectance
                 ray shadow(p, L)   // L is the direction to the one light
                 if (not hitsScene(shadow)) {
                       color += Rd*lightColor*max(0, dot(N,L) )
                 }
          }
      }
      else 
             color = background(r)
      return color
}
               
           
The three spheres we add to get the fuzzy effects.
Now let's assume we want to fuzz things up to get soft shadows and glossy reflections, and that we want diffuse rays for bounce lighting (even one level of depth would help).   We just need to pick random points in three spheres, one for each effect.   Let's assume we have a function rs() that returns a random point in a unit sphere centered at the origin (see the last blog post).   All we need to do is randomly perturb each shadow and specular reflection ray, and we can generate another diffuse ray that will get bounce light.  This is:

vec3 rayColor(ray r, int depth) {
    vec3 color(0)
    if (depth > maxDepth) return color
    if (hitsScene(r)) {  // somehow this passes back state like hitpoint p
         if (Rs > 0) { // Rs is rgb specular reflectance
                // radius_specular is a material parameter where 0 = perfect
                ray s(p, reflect(r, N) + radius_specular*rs())
                color += Rs*rayColor(s, depth+1)   // weight by the specular reflectance
         }
         if (Rd > 0) { //Rd is rgb diffuse reflectance 
                ray shadow(p, L + radius_shadow*rs())   // L is the direction to the one light
                if (not hitsScene(shadow)) {
                       color += Rd*lightColor*max(0, dot(N,L) )
                }
                ray diffuse(p, N + rs()) // assumes N is unit length
                color += Rd*rayColor(diffuse, depth+1)
          }
      }
      else 
             color = background(r)
      return color
}

Easy random point in sphere

Often you need a uniform random point in a sphere.   There are good methods for doing that with deterministic runtime, good stratification, and all that, but usually you just want something easy that works, and rejection sampling is usually the way to go.    The idea is to take a larger domain that contains the target one and repeatedly "throw darts" until one hits the target.
First let's do this in 2D with a circle at center c = (xc, yc) and radius R.  Now let's assume we have a random number generator r() that returns floats in [0,1) (like drand48() in most C environments or Math.random() in Java).


do {  // pick a random point in the square until it is inside the circle
      x = xc - R + 2*R*r()
      y = yc - R + 2*R*r()
} while (sqr(x - xc) + sqr(y - yc) > sqr(R))
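
(Each iteration accepts with probability equal to the area ratio, pi/4 ≈ 0.785, so it takes about 1.27 tries on average; in 3D the volume ratio is pi/6 ≈ 0.524, or roughly 1.9 tries.)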

For 3D, the extension just adds a dimension:


do {  // pick random point in cube until it is inside the sphere
      x = xc - R + 2*R*r()
      y = yc - R + 2*R*r()
      z = zc - R + 2*R*r()
} while (sqr(x - xc) + sqr(y - yc) + sqr(z- zc) > sqr(R))
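
For the rs() used in the distribution ray tracer above (a random point in the unit sphere centered at the origin), the same idea in the C++-ish style of the earlier posts would be (a minimal sketch assuming the vec3 class and drand48()):

   vec3 rs() {
       vec3 p;
       do {
           // pick a random point in the [-1,1]^3 cube
           p = vec3(2*drand48() - 1, 2*drand48() - 1, 2*drand48() - 1);
       } while (dot(p, p) > 1);   // keep only points inside the unit sphere
       return p;
   }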