# Pete Shirley's Graphics Blog

## Tuesday, February 24, 2015

A nice new paper from Wyman, Hoetzlein, and Lefohn of NVIDIA Research digs deeper into the idea of a shadow map sampled in screen space. The paper and video are online. This basic idea has been kicking around for over a decade, and most people thought it was very promising. This paper seems to finally make the idea live up to that promise (watch the video!).

I think this will be a big paper: my biggest surprise in talking to video game programmers has been how much effort they put into shadows, and yet shadows are still the source of their most annoying dynamic artifacts. I look forward to seeing good shadows in games!

## Saturday, February 14, 2015

### A C++ vec3 class

I'm teaching an intro graphics class as an adjunct at Westminster College (and really liking it; it is a great environment and I would recommend it as a very good place to go as an undergrad). The assignments are mostly ray tracing, and I have been writing my solutions from scratch so I can understand what is needed for them. My first decision was programming language. The students are all using either Python or Java, so I didn't want to use either of those. I tried Swift, which I am enamored with, but I ran into a compiler bug early on and decided some old battle-tested language would be better. I almost went with C, but operator overloading is too much to give up. So I went with C++. In the past I have always gone with the heuristic:

*If it has the same representation (e.g., 3D Cartesian vectors and 3D rgb colors), that doesn't mean it is the same class. What IS the thing?*

This can be taken to an extreme where a *length* and a *time* are not floats but rather different types, and an operator *length/time* returns a *velocity*. Jim Arvo and Brian Smits (and probably many others) experimented with this and found the C++ typing became a little too cumbersome to manage. They both thought some real language support for SI units etc. might be a good idea, but rolling your own was probably too hard in practice.

On the opposite end of the spectrum are languages like GLSL, which I have been using a lot lately, where representation determines type and most graphics variables are *vec3* or *float*. If you add a color to a surface normal, that is your problem.

I decided to go the GLSL route simply because I am liking it in practice, and verbose variable names make errors less likely. So for the first time in at least a decade I wrote a new C++ vector class, vec3. The first question was "*float, double, or templated?*" I tried templated, but it is too much typing. For example, here is a cross product:

vec3<double> v = vec3<double>::cross(u, w); // ugly

So I just went with double (precision problems are a pain). Some typedef REAL might be wise, but I was too lazy.

I'll be curious to see if I consider this a mistake by the end of the semester!

Here's the resulting class:

```cpp
#ifndef VEC3_HPP
#define VEC3_HPP

#include <cmath>
#include <iostream>

template <class T> class Vec3
{
private:
    // A Vec3 simply has three properties called x, y and z
    T x, y, z;

public:
    // ------------ Constructors ------------

    // Default constructor
    Vec3() { x = y = z = 0; }

    // Three parameter constructor
    Vec3(T xValue, T yValue, T zValue) { x = xValue; y = yValue; z = zValue; }

    // ------------ Getters and setters ------------

    void set(const T &xValue, const T &yValue, const T &zValue)
    {
        x = xValue;
        y = yValue;
        z = zValue;
    }

    T getX() const { return x; }
    T getY() const { return y; }
    T getZ() const { return z; }

    void setX(const T &xValue) { x = xValue; }
    void setY(const T &yValue) { y = yValue; }
    void setZ(const T &zValue) { z = zValue; }

    // ------------ Helper methods ------------

    // Method to reset a vector to zero
    void zero() { x = y = z = 0; }

    // Method to normalise a vector
    void normalise()
    {
        // Calculate the magnitude of our vector
        T magnitude = sqrt((x * x) + (y * y) + (z * z));

        // As long as the magnitude isn't zero, divide each element by the
        // magnitude to get the normalised value between -1 and +1
        if (magnitude != 0)
        {
            x /= magnitude;
            y /= magnitude;
            z /= magnitude;
        }
    }

    // Static method to calculate and return the scalar dot product of two vectors
    //
    // Note: The dot product of two vectors tells us things about the angle between
    // the vectors. That is, it tells us if they are pointing in the same direction
    // (i.e. are they parallel? If so, the dot product will be 1), or if they're
    // perpendicular (i.e. at 90 degrees to each other) the dot product will be 0,
    // or if they're pointing in opposite directions then the dot product will be -1.
    //
    // Usage example: double foo = Vec3<double>::dotProduct(vectorA, vectorB);
    static T dotProduct(const Vec3 &vec1, const Vec3 &vec2)
    {
        return vec1.x * vec2.x + vec1.y * vec2.y + vec1.z * vec2.z;
    }

    // Non-static method to calculate and return the scalar dot product of this
    // vector and another vector
    //
    // Usage example: double foo = vectorA.dotProduct(vectorB);
    T dotProduct(const Vec3 &vec) const
    {
        return x * vec.x + y * vec.y + z * vec.z;
    }

    // Static method to calculate and return a vector which is the cross product
    // of two vectors
    //
    // Note: The cross product is simply a vector which is perpendicular to the
    // plane formed by the first two vectors. Think of a desk like the one your
    // laptop or keyboard is sitting on. If you put one pencil pointing directly
    // away from you, and then another pencil pointing to the right so they form
    // an "L" shape, the vector perpendicular to the plane made by these two
    // pencils points directly upwards.
    //
    // Whether the vector is perpendicularly pointing "up" or "down" depends on
    // the "handedness" of the coordinate system that you're using.
    //
    // Further reading: http://en.wikipedia.org/wiki/Cross_product
    //
    // Usage example: Vec3<double> crossVect = Vec3<double>::crossProduct(vectorA, vectorB);
    static Vec3 crossProduct(const Vec3 &vec1, const Vec3 &vec2)
    {
        return Vec3(vec1.y * vec2.z - vec1.z * vec2.y,
                    vec1.z * vec2.x - vec1.x * vec2.z,
                    vec1.x * vec2.y - vec1.y * vec2.x);
    }

    // Easy adders
    void addX(T value) { x += value; }
    void addY(T value) { y += value; }
    void addZ(T value) { z += value; }

    // Method to return the distance between two vectors in 3D space
    //
    // Note: This is accurate, but not especially fast - depending on your needs
    // you might like to use the Manhattan Distance instead:
    // http://en.wikipedia.org/wiki/Taxicab_geometry
    // There's a good discussion of it here:
    // http://stackoverflow.com/questions/3693514/very-fast-3d-distance-check
    // The gist is, to find if we're within a given distance between two vectors
    // you can use:
    //
    //   bool within3DManhattanDistance(Vec3 c1, Vec3 c2, float distance)
    //   {
    //       float dx = abs(c2.x - c1.x);
    //       if (dx > distance) return false; // too far in x direction
    //
    //       float dy = abs(c2.y - c1.y);
    //       if (dy > distance) return false; // too far in y direction
    //
    //       float dz = abs(c2.z - c1.z);
    //       if (dz > distance) return false; // too far in z direction
    //
    //       return true; // we're within the cube
    //   }
    //
    // Or to just calculate the straight Manhattan distance you could use:
    //
    //   float getManhattanDistance(Vec3 c1, Vec3 c2)
    //   {
    //       float dx = abs(c2.x - c1.x);
    //       float dy = abs(c2.y - c1.y);
    //       float dz = abs(c2.z - c1.z);
    //       return dx + dy + dz;
    //   }
    static T getDistance(const Vec3 &v1, const Vec3 &v2)
    {
        T dx = v2.x - v1.x;
        T dy = v2.y - v1.y;
        T dz = v2.z - v1.z;
        return sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Method to display the vector so you can easily check the values
    void display()
    {
        std::cout << "X: " << x << "\t Y: " << y << "\t Z: " << z << std::endl;
    }

    // ------------ Overloaded operators ------------

    // Overloaded addition operator to add Vec3s together
    Vec3 operator+(const Vec3 &vector) const
    {
        return Vec3(x + vector.x, y + vector.y, z + vector.z);
    }

    // Overloaded add and assign operator to add Vec3s together
    void operator+=(const Vec3 &vector)
    {
        x += vector.x;
        y += vector.y;
        z += vector.z;
    }

    // Overloaded subtraction operator to subtract a Vec3 from another Vec3
    Vec3 operator-(const Vec3 &vector) const
    {
        return Vec3(x - vector.x, y - vector.y, z - vector.z);
    }

    // Overloaded subtract and assign operator to subtract a Vec3 from another Vec3
    void operator-=(const Vec3 &vector)
    {
        x -= vector.x;
        y -= vector.y;
        z -= vector.z;
    }

    // Overloaded multiplication operator to multiply two Vec3s componentwise
    Vec3 operator*(const Vec3 &vector) const
    {
        return Vec3(x * vector.x, y * vector.y, z * vector.z);
    }

    // Overloaded multiply operator to multiply a vector by a scalar
    Vec3 operator*(const T &value) const
    {
        return Vec3(x * value, y * value, z * value);
    }

    // Overloaded multiply and assign operator to multiply a vector by a scalar
    void operator*=(const T &value)
    {
        x *= value;
        y *= value;
        z *= value;
    }

    // Overloaded divide operator to divide a vector by a scalar
    Vec3 operator/(const T &value) const
    {
        return Vec3(x / value, y / value, z / value);
    }

    // Overloaded divide and assign operator to divide a vector by a scalar
    void operator/=(const T &value)
    {
        x /= value;
        y /= value;
        z /= value;
    }
};

#endif
```
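As an aside, the strongly-typed extreme mentioned earlier (where dividing a *length* by a *time* yields a *velocity*) can be sketched in a few lines. The type names below are my own illustration, not code from Arvo's or Smits's experiments:

```cpp
#include <cassert>

// Minimal sketch of dimension-aware types: lengths and times are distinct
// types rather than raw doubles, and dividing them produces a velocity.
// Adding a Length to a Time simply does not compile.
struct Length   { double meters; };
struct Time     { double seconds; };
struct Velocity { double mps; };

inline Velocity operator/(Length d, Time t)
{
    return Velocity{d.meters / t.seconds};
}
```

The cumbersomeness they hit shows up quickly: every combination of units and operators has to be spelled out by hand, which is why they concluded real language support would be needed.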

## Saturday, January 31, 2015

### Lighting effects on Instagram

I'm teaching intro graphics as an adjunct at Westminster College this semester and am gathering photos of lighting effects to show in class. I am also going to start some marketing for our photo apps on their own Instagram feeds, so I thought I would use the graphics project to get some Instagram experience; I am using my personal Instagram account for that. If you post photos of good tutorial value, could you please tag them with the effect (e.g., "#colorbleeding") and tag me so I see it, and I will repost. Thanks in advance. Here is an example of the type of image I am gathering:

## Friday, January 23, 2015

### Some real world color bleeding and glossy reflection

I saw this effect in my basement recently and it's one of those rare "big" color bleeds. As a bonus there is glossy reflection on the ceiling so I decided it was worth a picture.

## Wednesday, December 17, 2014

### Winner of U of Utah Ray Tracing class image contest

Yesterday I attended Cem Yuksel's end-of-semester image contest for his ray tracing class. The images were all very impressive (it's clearly a good class... see the link above), and I found the winning image, by Laura Lediaev, so impressive that I asked her if I could post it here. Here's her description:

*This is a fun scene with candy. There are two main components to this scene - the glass teapot, and the candies. I spent over 30 hours creating this teapot practically from scratch. I started with the Bezier patch description, which I used to create a mesh, and went to work duplicating surfaces, shrinking them to create the inner surfaces, doing some boolean work for cutting out holes, then fusing together all the seams vertex by vertex. The candies started out as a single candy prototype which I sculpted starting from a cube. I then created a huge array of candy copies and used a dynamics simulation to drop the candies into one teapot, and onto the ground in front of the other teapot. The backdrop is just a ground with a single wall (a.k.a. an infinite plane). I have two area lights, and an environment image which is creating the beige color of the ground and some interesting reflections. Can you spot the reflection of a tree in the left teapot handle? The challenge with rendering this scene is all the fully specular paths, which are rays that connect the camera to a light while only hitting specular surfaces such as glass or mirrors. The only way to do this using the rendering methods that we learned in the class is brute force path tracing which takes an extraordinary amount of time. The image has roughly 30,000 samples per pixel.*

## Tuesday, December 16, 2014

### Cool website using hex color

A 24-bit RGB triple such as red (255, 0, 0) is often represented as a hex string: each channel has 16^2 = 256 possible values, so two hex digits per channel and six digits total suffice. Recall that the hex digits are (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F). So the color (255, 0, 0) would be (FF, 00, 00), or as a string FF0000. By convention people put a "#" at the front to tag it as hex: #FF0000. Alain Chesnais made me aware of a clever site that uses the fact that times are also 6 digits when seconds are included. For example, 101236 is 36 seconds after 10:12. If one interprets that as a hex color, it is valid (note that no digit pair ever exceeds 59, so out of a maximum of 255 the resulting colors are all somewhat dark). There is a website that makes this more concrete so you can start internalizing hex codes. The dark ones, anyway! Here's a screenshot.

As it gets closer to the minute rollover you'll get a dark blue.

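As a concrete illustration, the clock-to-color mapping can be written in a few lines (this is my own sketch, not code from the site):

```cpp
#include <cstdio>
#include <string>

// Format a clock time HH:MM:SS as the string "#HHMMSS", whose decimal
// digit pairs are then read as if they were hex bytes. Since no pair
// exceeds 59 (0x59 = 89 out of 255), every resulting color is fairly dark.
std::string timeToHexColor(int h, int m, int s)
{
    char buf[8];
    std::snprintf(buf, sizeof buf, "#%02d%02d%02d", h, m, s);
    return std::string(buf);
}
```

For example, `timeToHexColor(10, 12, 36)` gives `"#101236"`, which as a color is RGB (0x10, 0x12, 0x36) = (16, 18, 54): a dark, slightly blue gray.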

## Monday, December 8, 2014

### Empirical confirmation of diffuse ray hack

Benjamin Keinert sent a nice note regarding an earlier post on quickly getting "lambertianish" rays. He empirically confirmed the distribution is exactly lambertian. Cool, and thanks Benjamin. I find such demonstrations more convincing than proofs, which can have subtle errors (fine, yes, I am an engineer). I include his full email with his permission.

I think I found a simple informal "engineer's style proof" that it is indeed lambertian (assuming the sample rejection method yields a uniform distribution on a sphere, which it should).

Instead of using a rejection method, I picked uniformly distributed points on a sphere using spherical Fibonacci point sets [2] and constructed a cosine hemisphere sampling variant of it.

Without loss of generality, it should be sufficient to show that the mapping is lambertian for a single normal (0,0,1), given uniformly distributed points on a sphere and rotational invariance.

Sorry, rapid prototyping code, oldschool OpenGL, PHI = (sqrt(5.0)*0.5 + 0.5):

```cpp
// PDF: 1/(4*PI)
float3 uniformSampleSphereSF(float i, float n) {
    float phi = 2*PI*(i/PHI);
    float cosTheta = 1 - (2*i+1)/n;
    float sinTheta = sqrt(1 - cosTheta*cosTheta);
    return float3(cos(phi)*sinTheta, sin(phi)*sinTheta, cosTheta);
}

// PDF: cos(theta)/PI
float3 cosineSampleHemisphereSF(float i, float n) {
    float phi = 2*PI*(i/PHI);
    float cosTheta = sqrt(1 - (i+0.5)/n);
    float sinTheta = sqrt(1 - cosTheta*cosTheta);
    return float3(cos(phi)*sinTheta, sin(phi)*sinTheta, cosTheta);
}

[...]

void test() {
    [...]
    // Enable additive blending etc.
    [...]
    uint n = 1024;
    glBegin(GL_POINTS);
    for (uint i = 0; i < n; ++i) {
        glColor4f(0,1,0,1); // Green
        float3 p = normalize(uniformSampleSphereSF(i, n) + float3(0,0,1));
        glVertex3fv(&p[0]);

        glColor4f(1,0,0,1); // Red
        float3 q = cosineSampleHemisphereSF(i, n);
        glVertex3fv(&q[0]);
        // Additive blending => Yellow == "good"
    }
    glEnd();
}
```

This little function results in the attached image (an orthogonal projection of the cosine-distributed points on the hemisphere, which yields uniformly distributed points on a circle).

With some more effort one can show analytically that normalize(uniformSampleSphereSF(i, n) + float3(0,0,1)) = cosineSampleHemisphereSF(i, n), instead of relying on additive blending.

[1] http://psgraphics.blogspot.de/2014/09/random-diffuse-rays.html

[2] Spherical Fibonacci Point Sets for Illumination Integrals, Marques et al.
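For reference, the rejection method Benjamin assumes (from the post in [1]), together with the normalize(sample + normal) hack itself, can be sketched self-containedly as follows. This is my own sketch, not Benjamin's code; the check at the end uses the fact that a cosine-weighted hemisphere has E[cos θ] = 2/3, so the average z component of many samples should land near 2/3:

```cpp
#include <cmath>
#include <random>

struct V3 { double x, y, z; };

// Rejection method: draw points in the [-1,1]^3 cube, keep the first one
// inside the unit ball, and push it out to the sphere's surface. This is
// the uniform sphere distribution the argument assumes.
V3 uniformSphere(std::mt19937 &rng)
{
    std::uniform_real_distribution<double> u(-1.0, 1.0);
    for (;;) {
        double x = u(rng), y = u(rng), z = u(rng);
        double len2 = x*x + y*y + z*z;
        if (len2 > 1e-12 && len2 <= 1.0) {
            double inv = 1.0 / std::sqrt(len2);
            return V3{x*inv, y*inv, z*inv};
        }
    }
}

// The hack under test: add the unit normal (here (0,0,1)) to a uniform
// sphere sample and normalize; the claim is the result is cosine
// (lambertian) distributed about the normal.
V3 lambertianDir(std::mt19937 &rng)
{
    V3 s = uniformSphere(rng);
    double x = s.x, y = s.y, z = s.z + 1.0;
    double inv = 1.0 / std::sqrt(x*x + y*y + z*z); // nonzero with prob. 1
    return V3{x*inv, y*inv, z*inv};
}

// Average z over n samples; for a cosine-weighted hemisphere this
// should converge to 2/3.
double meanZ(int n, unsigned seed)
{
    std::mt19937 rng(seed);
    double sum = 0;
    for (int i = 0; i < n; ++i) sum += lambertianDir(rng).z;
    return sum / n;
}
```

Running `meanZ(200000, 42)` lands within about a standard error of 2/3, a cruder statistical cousin of Benjamin's picture-based check.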

