Monday, November 5, 2007

Hybrid algorithms

Some movies are made with a "hybrid" algorithm where rasterization is used for the visibility pass and ray tracing is used for some or all of the shading. This raises two questions:
  1. Why is this done rather than using just one algorithm?
  2. Would this technique be good for games?
First, question 1: the reason it makes sense to use both is that rasterization is fantastic at dealing with procedural geometry. It is the basic Reyes loop that is still going strong after almost 25 years: for each patch, dice into micropolygons while applying displacements. If you generate a billion micropolygons, no problem, as they are never stored; the frame buffer is where the action is. Now suppose you want to do ambient occlusion; the best way to do that is ray tracing. But ray tracing against procedural geometry is slow. Using Pharr et al.'s caching technique is probably the best-known method, but an alternative is simply not to apply the displacements and to ray trace less geometry, as PDI did for Shrek 2 (see their SIGGRAPH paper for details). That idea goes back at least to Cohen's 85 paper, where detailed geometry is illuminated by a coarse model.
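
To make the Reyes point concrete, here is a minimal sketch of that streaming dice-and-shade loop in C++ flavor (the Patch/Grid types and helper functions are hypothetical stand-ins, not any particular renderer's API):

// A hedged sketch of the Reyes-style streaming loop: each patch is diced,
// displaced, shaded, and sampled into the frame buffer, then discarded,
// so a billion micropolygons never need to be stored at once.
for (Patch& p : scene.patches()) {
    Grid g = dice(p, shadingRate(p, camera));   // dice into micropolygons
    displace(g);                                // apply displacement shader
    shade(g);                                   // run the surface shaders
    sample(g, framebuffer);                     // visibility/sampling pass
}   // g is gone here; only the frame buffer persists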

Now question 2: would this be good for games? Nobody knows. I will give two arguments, one for and one against. The argument for is again procedural geometry. There probably will not be so much of it that it won't fit in core, but doing the visibility pass with rasterization will still help locality, something like a DX10 geometry shader can be used, and efficient use of caches is where the action is and will continue to be. Now the argument against: Whitted-style ray tracing has good locality too, so such complexity is not needed, and once games use ray tracing they may as well use it for visibility for software simplicity. My bet is on the latter, but I sure wouldn't give odds. If graphics teaches us anything, it is that predicting the future is difficult!

Sunday, October 21, 2007

Probabilistic reflection/refraction

In response to the previous post, Thomas said he would choose probabilistically. If you are sending many rays per pixel, I definitely agree this is the way to go. I believe that approach was first suggested in Kajiya's 86 path tracing paper.

color trace(ray r, int depth)
    color c
    if (r hits scene at point p)
        c = direct_light(p)
        if (depth < maxdepth)
            if (random() < ks)    // reflect with probability ks...
                c += trace(reflected_ray, depth + 1)
            else                  // ...otherwise refract
                c += trace(refracted_ray, depth + 1)
    else
        c = background(r)
    return c

Note that the weighting is implicit: if you trace reflected_ray a fraction ks of the time, that is just like tracing it every time and weighting it by ks. And as Thomas said, the best images come when you use the Fresnel equations.
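
For the Fresnel version, here is a rough C++-style fragment using Schlick's approximation to set the reflection probability (Schlick is my choice for brevity; cos_theta, ior, and the trace interface are assumed from the pseudocode above):

#include <cmath>
#include <cstdlib>

// Schlick's approximation to the Fresnel reflectance of a dielectric with
// relative refractive index ior, given the cosine of the incident angle.
double schlick(double cosine, double ior) {
    double r0 = (1 - ior) / (1 + ior);
    r0 = r0 * r0;
    return r0 + (1 - r0) * pow(1 - cosine, 5);
}

// At the hit point: reflect with the Fresnel probability; the weighting is
// again implicit in how often each branch is taken.
double p_reflect = schlick(cos_theta, ior);
if (drand48() < p_reflect)
    c += trace(reflected_ray, depth + 1);
else
    c += trace(refracted_ray, depth + 1);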

Wednesday, October 17, 2007

Ray tree pruning

A time-honored way to speed up your glass is "ray tree pruning", and you should do it if you aren't already. The way Whitted originally implemented his paper was like this:

color trace(ray r, int depth)
    color c
    if (r hits scene at point p)
        c = direct_light(p)
        if (depth < maxdepth)
            c += ks*trace(reflected_ray, depth + 1) + kt*trace(refracted_ray, depth + 1)
    else
        c = background(r)
    return c

If you hit glass, that will be two branches per recursion level. The first efficiency improvement would be to send each ray only if its coefficient (ks or kt) is non-zero, but that would not help you for glass. Instead, you need to kill whole paths down the recursion tree once they are sufficiently attenuated. For example, if kt = 0.05, then after 3 transmissions the cumulative coefficient is (0.05)^3 ≈ 0.0001 and can be ignored. To implement this, you need to change the call to include an attenuation argument:

color trace(ray r, int depth, float atten)

and add

if (depth < maxdepth)
    // prune: recurse only while the cumulative weight is significant
    if (atten*ks.max_component() > minatten)
        c += ks*trace(reflected_ray, depth + 1, atten*ks.max_component())
    if (atten*kt.max_component() > minatten)
        c += kt*trace(refracted_ray, depth + 1, atten*kt.max_component())
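
Putting the two fragments together, here is a sketch of the whole routine in C++ flavor (the color/ray types, the scene hit test, and the cutoff constant minatten are assumptions of mine):

// Ray tree pruning: subtrees are killed once their cumulative attenuation
// is negligible, so deep glass recursions terminate early.
color trace(ray r, int depth, float atten) {
    color c(0, 0, 0);
    hit_record rec;
    if (scene.hit(r, rec)) {
        c = direct_light(rec.p);
        if (depth < maxdepth) {
            if (atten * ks.max_component() > minatten)
                c += ks * trace(reflected_ray(r, rec), depth + 1,
                                atten * ks.max_component());
            if (atten * kt.max_component() > minatten)
                c += kt * trace(refracted_ray(r, rec), depth + 1,
                                atten * kt.max_component());
        }
    } else {
        c = background(r);
    }
    return c;
}
// top level: each eye ray starts at full significance
color pixel_color = trace(eye_ray, 0, 1.0f);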

You will be amazed at how much faster a scene like Whitted's glass ball is.

Friday, August 24, 2007

Ray Tracing Shading Language

Steve Parker and friends at Utah have developed a ray tracing shading language (RTSL) described in this pdf file. I am very excited about this work for entirely selfish reasons. I like ray tracing partly because of the elegant code. Now add ray packets and SSE and voila-- it makes DirectX code look lovely in comparison. The initial results are that ray packets and SSE can be relegated to a compiler with little or no loss of performance, and the code is pretty sweet looking. I have been looking over the shoulder of this project, writing some RTSL code, and it really seems to work as well as reported in the paper. A surprise to me is that it is much nicer to write than C++, because you avoid the blasted C++ header files.

Monday, August 13, 2007

Grid creation paper

Here is a pdf file of a new paper by Thiago Ize, Steve Parker, and myself. It will appear in Ulm at RT07. It works out some theory on how to build grids within grids, as a follow-on to the classic Jevans and Wyvill paper. Its main practical result is that for a two-level grid of small triangles, you should use N**(1/5) subdivisions in each dimension at the top level, and then in occupied cells use M**(1/3), where M is the number of primitives in that cell. For single-ray code this seems to work quite well. There is also some analysis for long, skinny triangles (like you might get subdividing a cylinder) which says grids are bad at such scenes. That being said, BVHs with AABBs and k-d trees are probably even worse!
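
As a concrete reading of that recipe (the rounding is my guess; see the paper for the constants it actually recommends):

#include <cmath>

// Per-dimension resolution of the top-level grid from the total primitive
// count N, and of an occupied cell's subgrid from that cell's count M,
// following the N**(1/5) and M**(1/3) result above.
int topLevelRes(int N) { return (int)ceil(pow((double)N, 1.0 / 5.0)); }
int cellRes(int M)     { return (int)ceil(pow((double)M, 1.0 / 3.0)); }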

Friday, August 10, 2007

Results of new poll are in

This poll was better designed but still had one big flaw: it should have had more years, since people clumped at 2011.

For those readers who have not written a ray tracer, I recommend Suffern's new book. He covers a ton of details that are not in other books. I saw a copy at SIGGRAPH and it lives up to the potential shown by all those chapters he's let me use in classes over the years.

Friday, August 3, 2007

New poll

Well, people are clearly divided on the last poll. But as pointed out, it was ambiguous, so here's a new poll that is more straightforward.

Monday, July 30, 2007

Halftones

Bram's comments on the last article got me thinking about halftones. This is probably obvious to most people, but it was news to me. Without good color profiles and calibrated monitors, you of course can't get predictable appearance. Things like sRGB will help with the first, but 99% of monitors are not calibrated. With halftoning, though, you can get predictable greyscales (assuming LCDs are well behaved about halftone patterns-- CRTs sure are not). For example, here are two images with a 25% and a 50% greyscale. (And re the last article, note which is more of a middle grey.) It just doesn't matter what your gamma is (assuming your black point is dark!) for predicting what these greys are.
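
Here is a minimal sketch of how such images can be made with fixed 2x2 patterns, so the average reflectance is exactly 25% or 50% of white no matter what the gamma is (PGM output is just my choice for brevity):

#include <cstdio>

// Write 25% and 50% halftone greys as PGM images. The grey level comes from
// the fraction of full-white pixels, so it is gamma-independent (assuming
// the display resolves the pattern and black is dark).
int main() {
    const int n = 128;
    FILE* f = fopen("grey25.pgm", "w");
    fprintf(f, "P2\n%d %d\n255\n", n, n);
    for (int y = 0; y < n; y++)
        for (int x = 0; x < n; x++)   // one pixel in four is white
            fprintf(f, "%d\n", (x % 2 == 0 && y % 2 == 0) ? 255 : 0);
    fclose(f);
    f = fopen("grey50.pgm", "w");
    fprintf(f, "P2\n%d %d\n255\n", n, n);
    for (int y = 0; y < n; y++)
        for (int x = 0; x < n; x++)   // checkerboard: half the pixels white
            fprintf(f, "%d\n", ((x + y) % 2 == 0) ? 255 : 0);
    fclose(f);
    return 0;
}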

This basic idea is often used for gamma estimation-- the best example is Greg Ward's great gamma image.

Sunday, July 29, 2007

Gamma and textures

Gamma correction issues are always a pain. A review: given 256 grey levels, it makes sense to make them perceptually uniform. Thus a 20% grey intensity (one fifth the physical intensity of white) should be stored at around 127, as it is perceptually halfway between black and white. But what happens when you want to use such an image as a texture map in a physically based renderer? You need to "de-gamma" the image. If you are using somebody else's renderer, should you do that as a preprocess, or might the renderer do it? For my own purposes in that situation I have made three textures of the grey squares in the Macbeth ColorChecker chart. The grey square in position 3 from the left (i.e., the third darkest) should look mid-grey in your final render. The 1.0 image has 0.19*255 in that square, so it is not gamma corrected (i.e., the byte values are linear in intensity). Of course, to make this all more complicated, a renderer might or might not do something different depending on whether or not the texture carries a color profile.
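
A minimal sketch of the de-gamma step itself (a pure power law with gamma = 2.2 is assumed here; real sRGB has a short linear toe near black):

#include <cmath>

// Convert an 8-bit gamma-encoded texel to linear intensity before the
// renderer uses it. A byte of 127 maps to roughly 0.22 linear, while the
// "1.0" image above stores round(0.19*255) = 48 directly.
float degamma(unsigned char b, float gamma) {
    return powf(b / 255.0f, gamma);
}
// usage: float linear = degamma(texel, 2.2f);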

Saturday, July 21, 2007

New poll online

Here is a poll to narrow down what the non-random sample of people who look at this blog think. I am most interested in the percentage that say never (I am guessing 40%), but I am also interested in the distribution of the other answers.

Saturday, July 14, 2007

Plücker coords considered harmful

Christer Ericson has a terrific article on why scalar triple products are usually preferable to Plücker coords here. I couldn't agree more. Christer has a book on collision detection that I have been meaning to order for some time, and his blog content encourages me to do so.
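
The core of the argument fits in a few lines. A sketch of my own (not Christer's code): for a ray with origin p and direction d, the sign of the triple product [d, a-p, b-p] says on which side of segment (a,b) the ray passes, which is the same information a Plücker permuted inner product gives, with no 6D machinery:

// Scalar triple product a . (b x c): the signed volume that replaces a
// Plücker inner product in edge-orientation tests.
struct vec3 { double x, y, z; };
vec3 sub(vec3 a, vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
vec3 cross(vec3 a, vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
double triple(vec3 a, vec3 b, vec3 c) { return dot(a, cross(b, c)); }

// On which side of edge (a, b) does the ray (p, d) pass?
double side(vec3 p, vec3 d, vec3 a, vec3 b) {
    return triple(d, sub(a, p), sub(b, p));
}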

Sunday, July 1, 2007

Ratatouille

I saw Ratatouille with the family this weekend. It is the first CG movie that we all really liked a lot. Brad Bird is really on fire-- I think I liked this movie better than The Iron Giant, which was terrific. One thing I especially liked was the lack of big-name actors who are not voice actors. Watch the Toy Story movies and then watch some old Disney (e.g., The Jungle Book) and you will see just how much using screen actors for voice talent hurts animated movies. Spongebob and the Simpsons would be awful if they used Tom Hanks and John Travolta. I hope Ratatouille makes a zillion dollars and shows that the director and studio are enough to bring in the crowds.

In addition to being a terrific movie, the rendering passed some threshold for me. Previous CG movies have always looked flat to me, with Shrek 2/3 being better due, I assume, to their use of indirect bounces. But Ratatouille had even more depth. It is the best-looking CG movie I have seen, by a long shot-- as much depth as live action, I think. I have no idea what is different, but I have to suspect global illumination, given my prejudices. If it was done with hand-placed fill lights, then the artists must be on performance-enhancing drugs! If anybody knows, please reply.

Saturday, June 2, 2007

Top utility papers of all time

This is heavily biased toward rendering, but here are what I consider the most useful papers at present. Many great papers have of course been obviated by improvements so this list is far from a "best paper" list. These are papers worth reading now.

Recursively generated B-spline surfaces on arbitrary topological meshes, Catmull and Clark, 1978. No sign of fading away!

An improved illumination model for shaded display, Whitted, 1980. Beware the 1/r^2 mistake, but otherwise about as perfect a paper as there is!

Distributed Ray Tracing, Cook et al., 1984. There are some sampling details you can find elsewhere, but all the basic ideas are here.

Ray Casting for Modeling Solids, Roth, 1982. A sweet CSG algorithm.

An Image Synthesizer, Perlin 1985. See his later paper and his webpage for more info on noise.

(more coming later-- suggestions welcome)

Monday, May 28, 2007

My top utility papers

Some of the papers I have worked on have been influential (e.g., Maciel's I3D one), but most of these have been superseded by improvements in subsequent papers, or are mainly of theoretical interest. Here is my current list of the most useful papers that are also easy to implement. I will include Mike Stark's b-spline paper as soon as I find an online copy. Note I will not post a list of papers that were a dumb idea! And also, for you new people in the field, take note that half of these were rejected the first time, so never assume the reviewers know what they are talking about (or that they don't! Just read reviews with a scientist's skepticism).

A Ray Tracing Framework for Global Illumination, GI 91. This paper repeats much of Arvo and Kirk's metahierarchy work, which unfortunately I wasn't aware of at the time, but it also spells out how to manage the various abstract classes in a distribution ray tracer. It also includes a better way to sample the disk, later spelled out in a jgt paper. Most of this stuff is now standard practice, but it is good if you are new to the field.

Direct Lighting Calculation by Monte Carlo Integration, EGRW 91. This paper lays out what to do when you have more than one light. The follow-on TOG 96 article has some more details, but except for the light grid I think most of them are not worth the added complexity. I do think there has been little subsequent progress on the "thousands of lights" problem, which is a shame, but the Cornell folks have recently been doing lots of interesting work on it.

A Practitioners’ Assessment of Light Reflection Models, Pacific Graphics 97. Section 5.1 is especially useful. If you have a bump or displacement map, you really only need a BRDF for the subsurface part, and this one is as simple as they get.

The irradiance volume, CG&A 97. I think that, like ambient occlusion, this technique has tons of problems but in practice is "good enough".

A Non-Photorealistic Lighting Model For Automatic Technical Illustration, SIG 02. Section 4.2 has all you need to know. Simple hack, seems to work.

Interactive Ray Tracing for Isosurface Rendering, Viz 98. This just plain works. Steve Parker's group still uses the same technique and it shows no signs of losing its utility as it scales so well.

An Anisotropic Phong BRDF Model, JGT 00. This works surprisingly well in practice. It is no good for cloth though-- I would love to see a similar model for cloth.

A Spatial Post-Processing Algorithm for Images of Night Scenes, JGT 02. This still needs to be extended to animation.

Photographic Tone Reproduction for Computer Graphics, SIGGRAPH 02. The first simple model works well on almost all images I have seen. Erik Reinhard et al.'s book is definitely worth picking up for putting this model in context.

An efficient and robust ray-box intersection algorithm. There is also source code at the JGT site. This paper made me hate IEEE FP because of negative and positive zero.
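
For the curious, the negative/positive zero issue in a couple of lines (an illustration of mine, not the paper's code):

#include <cstdio>

// IEEE FP: dividing by +0.0 and -0.0 gives infinities of opposite sign, so
// invDir = 1/d swaps which slab plane is "near" when d is a negative zero.
// The paper's test orders [t0, t1] so the result is still correct.
int main() {
    double z = 0.0;
    printf("%f %f\n", 1.0 / z, 1.0 / -z);   // prints inf -inf
    return 0;
}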

Optimizing Ray-Triangle Intersection via Automated Search, RT06. This is really just Moller-Trumbore 2. I was shocked that the automated search actually worked, and unsurprised the subsequent hand tuning did too.

Image Synthesis Using Adjoint Photons, Graphics Interface 06. This paper is how I now think about path tracing, and it maps directly into some pretty sweet code. The author list is long, but Boulos and Morley are really the two principal guys behind it. It takes advantage of the same properties of light as the dual photography paper. Jared Johnson, Solomon Boulos, Austin Robison and I have extended this to linearly polarized light, which we will write up soon. I am hoping that a clean extension to fluorescence is also possible.

Ray Tracing Deformable Scenes using Dynamic Bounding Volume Hierarchies, ACM TOG 07. I have been advocating the BVH for years-- maybe I am just too lazy to really learn k-d trees. But the real advantage of the BVH, in my opinion, is dynamic scenes. Note that the presentation of the packet culling is a bit weak; for more details see this tech report.

Tuesday, May 1, 2007

Meta-h index rankings

A couple of us have been trying to make a more objective (note that does not mean more accurate!) ranking of CS research departments. We're using Google Scholar data combined with the h-index. There are still lots of mistakes, I am sure. It is interesting that Greenberg and Hanrahan are in a dead heat for the highest rating in graphics. Not worth drawing too many conclusions from, but the data is interesting. I'd like to add more universities and research groups, so feel free to send me updates and suggestions.

The data is here

Saturday, April 21, 2007

tor@brown siggraph page

Tim Rowley is once again making his very useful SIGGRAPH preview page. I am especially intrigued by the first paper in the list: Solid Texture Synthesis from 2D Exemplars.

Friday, March 23, 2007

Rainbows in clouds

When the sun is behind you and you look at some rainstorms, you see a rainbow. Since the droplets in a cloud are (I assume) spherical, why don't we see rainbows in clouds? My hypothesis is that they are there, but they are superimposed on a bright cloud where secondary scattering dominates, so the rainbow is washed out for lack of contrast. Recently I was on a flight around noon and could see the shadow of my plane below, and when the clouds became thin enough that primary scattering probably dominated, there was a perfect circular rainbow with the dark terrain below it. So why no rainbows from thin clouds when the viewer is not on a plane? My guess is that the sky behind them is bright. But I am unsure. Another possibility is that what I saw was not a rainbow. But the ring's angular radius looked about right...

Getting to Ulm from the US

For those interested in attending RT07 (see previous post), I found that going through Frankfurt was really easy-- the airport has a train station that will take you straight to downtown Ulm. If you do that, be aware that the train has a reserved seating system (like an airplane or Amtrak). Hope to see you there!

Monday, March 19, 2007

IRT 07

The interactive ray tracing symposium will be in Ulm, Germany this year:

IRT 07 site

SCHEDULE
JUNE 14, 2007 SUBMISSION DEADLINE
JULY 12, 2007 REVIEWS DUE
JULY 19, 2007 AUTHOR NOTIFICATION
AUGUST 2, 2007 CAMERA-READY COPY
SEPTEMBER 10-12, 2007 SYMPOSIUM IN ULM

Thursday, March 15, 2007

Bounding Volume Hierarchy

If I were trapped on a desert island with only one data structure allowed, I would bring my trusty BVH. I have liked the BVH for some time in spite of all the k-d tree advocates (see the SIGGRAPH 05 notes for a discussion from before ray packets were integrated), despite BVHs being "slow". One reason is simplicity: dividing on objects leads to cleaner implementations than dividing space, and it leads to O(N) bounded memory use. My basic software rule is that KISS rules! Another reason is that BVHs handle many dynamic models well, as has been shown by the collision detection community. It turns out they can also be fast.

Solomon Boulos and Ingo Wald have done a bunch of nice work to packetize a BVH (a ray packet is a set of rays that are traced together). The first optimization (and a huge one) is to descend the BVH as soon as any ray in the packet hits a box; maintaining a "first not dead ray" index makes this more efficient. Another optimization is to use interval arithmetic to test a whole packet against a box for a conservative trivial reject. See Solomon's tech report for details (it is really simple even though it sounds fancy, and it has a nice mapping to C++ code). Note that both of these optimizations are algorithmic and mostly orthogonal to SSE optimizations (the "descend if one hits" optimization makes SSE less helpful).
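
A sketch of my own of the descend-if-any-hits test with a first-active-ray index (the Packet and Box types and hitsBox are assumed):

// Return the index of the first ray in the packet that hits the box, or
// packet.size() if none does. Rays before firstActive already missed an
// enclosing box, and since BVH child boxes nest inside their parent's box,
// those rays cannot hit this one and are skipped. One hit is enough to
// descend into the node, with the returned index as the new firstActive.
int firstHit(const Packet& packet, const Box& box, int firstActive) {
    for (int i = firstActive; i < packet.size(); i++)
        if (hitsBox(packet.ray(i), box))
            return i;
    return packet.size();
}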

Finally, the BVH traversal has a low penalty for false positives on boxes, so conservative boxes like you get for moving objects, as well as the lower intrinsic coherency of secondary packets, are not so bad for the BVH.

An important disclaimer: I have previously (in the early 90s) been an advocate of the k-d tree, and later the grid. So maybe this is just the end of the first cycle in my opinion!

It does help if you use the surface-area heuristic (SAH) to build; in fact, the BVH is what the SAH was first invented for. Nobody has ever justified to my satisfaction why the greedy SAH algorithm works so well.
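
For reference, the greedy cost being minimized at each candidate split is just relative surface area times primitive count, summed over the two children (constants dropped; a sketch, not any particular builder's code):

// Greedy SAH cost of splitting a node into children with surface areas
// saLeft/saRight and primitive counts nLeft/nRight. The area ratio is the
// (geometric) probability that a ray hitting the parent also hits a child.
double sahCost(double saLeft, int nLeft,
               double saRight, int nRight, double saParent) {
    return (saLeft / saParent) * nLeft + (saRight / saParent) * nRight;
}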

Wednesday, March 14, 2007

New ray packet paper

Solomon Boulos and friends have shown that ray packets may work for secondary rays. Check out the first paper here. As much as I hate the software engineering implications of ray packets, I do like that they can make things faster.

Saturday, March 10, 2007

Environment map formats

There is no "natural" mapping between the sphere of directions and the square, so there is no standard way to parameterize your environment map. In the original paper by Blinn and Newell, a cylindrical map was used. There are many such cylindrical maps out there, and they have long been used for map projection. For software renderers these are more intuitive than cube maps or the disk-on-square parameterization. In a physically based renderer, you would like to importance sample the environment map. This is easiest if each texel in the map subtends the same solid angle as every other texel. There is one cylindrical map that does this for a given rectangle. A well-known version of this projection is the Peters projection.

Here's an example projection. The rectangle has parameters [0,1]^2. Let's map u to the angle phi (longitude): u = phi/(2*pi) = atan2(y,x)/(2*pi), assuming your direction (x,y,z) is a unit vector and z is "up". Now we need v = f(z). The simplest such mapping is v = (z+1)/2. Is there area distortion? What is the area of a given pixel in the texture map-- or rather, the differential area? Say we have a differential square du*dv in the texture map. The corresponding area on the sphere will be sin(theta) * dtheta(v) * dphi(u).

dphi(u) = 2*pi*du, which is a constant multiple of du, so we can ignore it.

Since z = cos(theta), the other mapping is
(cos(theta) + 1)/2 = v
so differentiating both sides:
-sin(theta)*dtheta/2 = dv

Since sin(theta)*dtheta is proportional to dv, the differential solid angle is proportional to du*dv, and each texel does subtend the same solid angle. So this simplest of mappings is also a good one! I am not sure why anybody uses any other in a software renderer.
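
In code, the round trip looks like this (a sketch; z is "up" as above, and atan2's negative range is wrapped into [0,1)):

#include <cmath>

// Equal-area cylindrical map: unit direction (x,y,z) <-> (u,v) in [0,1]^2.
void dirToUV(double x, double y, double z, double& u, double& v) {
    double phi = atan2(y, x);                 // in (-pi, pi]
    if (phi < 0) phi += 2 * M_PI;
    u = phi / (2 * M_PI);
    v = (z + 1) / 2;                          // each texel: equal solid angle
}

void uvToDir(double u, double v, double& x, double& y, double& z) {
    double phi = 2 * M_PI * u;
    z = 2 * v - 1;                            // z = cos(theta)
    double s = sqrt(fmax(0.0, 1 - z * z));    // sin(theta)
    x = s * cos(phi);
    y = s * sin(phi);
}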

Saturday, February 17, 2007

Bye-bye z-buffer?

The main advantage of the z-buffer, in my opinion, is that you can stream geometry through it, so companies like Pixar can make images of HUGE models. However, in an interactive setting you don't want to do that, so games use pretty small polygon sets that can fit in main memory. The image quality in games will be better once ray tracing is interactive. When will that be?

Intel has just demonstrated an 80-core teraflop chip that looks ideal for ray tracing. According to recent tests, full ray tracing is about 20 times slower than it needs to be on a 2GHz dual-core Opteron 870. If Intel's numbers are realistic (we'll have to wait to see what the memory implications are), then the slightly smaller Larrabee system will be plenty fast enough for good ray tracing on a single chip.

What does this mean? I predict it means the z-buffer will soon exist only in software renderers at studios, so if you are an undergraduate, don't bother learning OpenGL unless you need to use it in a project or job soon. I doubt OpenGL will die entirely due to legacy issues, but it is not something to learn unless you have to. I speak as a person who spent much of my early years writing FORTRAN code!

Left and right-handed coordinate systems

We have hit that time in my class again where we set up viewing frames. I continue to shun left-handed coordinates in my software, but I do see why they are popular.

Let's suppose that when looking "into" the screen, you want the y axis to point up and the x axis to point to the right. If you have a right-handed system, then the z axis will come out of the screen, and negative z will go into the screen. If you implement a z-buffer system you will initialize it with z = -infinity. In a ray tracer, the viewing rays will have a negative z component. It does seem "nicer" to have positive z go into the screen. But at what cost?

We could have the y axis point down, as is done in many device coordinate systems. That seems worse to me than having z point out of the screen.

We can use a left-handed system. This is equivalent to adding one reflection matrix to your system. Whether that is worth it is a matter of taste. To me it is not, as it confuses me more than the negative-z problem. My brainwashing from courses 20 years ago and from geometric software in the intervening years is all in favor of right-handed systems, while the "which way is z" issue has a lot less cognitive cement for me, so I choose to live with negative z going into the screen. Interestingly, the perspective matrix doesn't change either way!
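
For concreteness, here is the usual right-handed frame construction (a sketch assuming a vec3 with normalize and cross helpers):

// Build the viewing frame: w points out of the screen toward the eye, so
// the camera gazes along -w and depth grows toward negative z.
vec3 w = normalize(eye - lookat);   // out of the screen
vec3 u = normalize(cross(up, w));   // points right
vec3 v = cross(w, u);               // points up; (u, v, w) is right-handed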

Friday, February 9, 2007

Pixar's Monte Carlo Patent done?

Pixar patented Monte Carlo for graphics back in the mid-1980s, and for reasons I don't know the patent wasn't issued (#4897806) until January 1990. That is in the transitional phase of patent law as I understand it, so it expired 17 years after being issued-- i.e., last month. So is Monte Carlo now fair game in the US? I would think so, but there are also patents 5025400 and 5239624 that last until 2009 and 2011 respectively. To my eye they do not really claim anything new that was not in the original patent or the fabulous 1984 SIGGRAPH paper. Anyone have any more informed wisdom on this?

Tuesday, January 30, 2007

And a not-so-new paper

The architecture of our rendering system is described in a GI 06 paper. Morley and Boulos should be considered co-first authors of this paper and deserve the bulk of the credit for the system. I consider this paper a turning point in my thinking about realistic rendering, because it addresses media as a first-class citizen of rendering in a practical way. The system doesn't really know the difference between media and surfaces, and overlapping media are not a problem. So you can add an atmosphere, and then a cloud, and neither the atmosphere nor the cloud need be aware of the other. Brute force is not an insult-- just ask quicksort!

New paper

I'll indicate new papers we do on this blog with a short post.

My student Margarita Bratkova just completed a paper on her work on panoramic maps. A tech report version is here.

Monday, January 22, 2007

Intro Graphics Course

For the first time in several years I am teaching Intro Graphics. I am basing the course on the Reyes pipeline as described in the superb original paper. Here is my course web page. If you would like to follow along and do the assignments, let me know and I'll link to your page. Each week there will be one assignment that should take 1-4 hours or so.