Saturday, February 17, 2007

Bye-bye z-buffer?

The main advantage of the z-buffer in my opinion is that you can stream geometry through it so companies like Pixar can make images of HUGE models. However, in an interactive setting you don't want to do that, so games use pretty small polygon sets that can fit in main memory. The image quality in games will be better once ray tracing is interactive. When will that be?

Intel has just demonstrated an 80-core teraflop chip that looks ideal for ray tracing. According to recent tests, full ray tracing is about 20 times slower than it needs to be on a 2 GHz dual-core Opteron 870. If Intel's numbers are realistic (we'll have to wait and see what the memory implications are), then the slightly smaller Larrabee system will be plenty fast enough for good ray tracing on a single chip.

What does this mean? I predict the z-buffer will soon exist only in software renderers at studios, so if you are an undergraduate, don't bother learning OpenGL unless you need to use it in a project or job soon. I doubt OpenGL will die entirely due to legacy issues, but it is not something to learn unless you have to. I speak as a person who spent much of my early years writing FORTRAN code!

Left and right-handed coordinate systems

We have hit that time in my class again where we set up viewing frames. I continue to shun left-handed coordinate systems in my software, but I do see why they are popular.

Let's suppose when looking "into" the screen, you want the y axis to point up and the x axis to point to the right. If you have a right-handed system, then the z axis will come out from the screen. Negative z will go into the screen. If you implement a z-buffer system you will initialize it with z = -infinity. In a ray tracer, the viewing rays will have a negative z component. It does seem "nicer" to have positive z go into the screen. But at what cost?
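To make the depth-test consequence concrete, here is a minimal sketch (hypothetical names, plain Python): with negative z going into the screen, "closer" means a larger (less negative) z, so the buffer starts at -infinity and keeps the maximum.

```python
# Depth test in a right-handed system where negative z goes into the screen.
# The buffer starts at -infinity; a fragment wins if its z is LARGER
# (less negative), i.e. closer to the eye.
W, H = 4, 3
zbuffer = [[float("-inf")] * W for _ in range(H)]

def try_write(x, y, z, zbuffer):
    """Update the buffer and return True if this fragment is closest so far."""
    if z > zbuffer[y][x]:
        zbuffer[y][x] = z
        return True
    return False

try_write(1, 1, -5.0, zbuffer)   # first fragment always wins
try_write(1, 1, -2.0, zbuffer)   # closer (less negative) -> wins
try_write(1, 1, -9.0, zbuffer)   # farther -> rejected; buffer keeps -2.0
```

With positive z into the screen, the only change is initializing to +infinity and keeping the minimum, which is the whole "which way is z" trade-off in one line.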

We can have the y axis point down as is done in many device coordinate systems. That seems worse to me than having z point out from the screen.

We can have a left-handed system. This is equivalent to adding one reflection matrix to your system. Whether that is worth it is a matter of taste. To me it is not, as it confuses me more than the negative-z problem. My brain-washing from many courses 20 years ago and geometric software in the intervening years is all in favor of right-handed systems, while the "which way is z" issue has a lot less cognitive cement for me, so I choose to live with negative z going into the screen. Interestingly, the perspective matrix doesn't change either way!
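The "one reflection matrix" point can be sketched directly (illustrative code, not from the original post): converting between the two conventions is just negating z, i.e. multiplying by diag(1, 1, -1), which has determinant -1 and so flips handedness.

```python
# Converting between left- and right-handed conventions is one reflection:
# negate z. Minimal sketch with plain tuples as points.

def flip_z(p):
    """Reflect a point across the z = 0 plane (a handedness change)."""
    x, y, z = p
    return (x, y, -z)

# The reflection matrix diag(1, 1, -1) has determinant 1 * 1 * -1 = -1,
# the signature of a handedness flip.
det = 1.0 * 1.0 * -1.0

# Applying it twice is the identity: the two conventions differ by
# exactly one such matrix, no more.
p_rh = (0.0, 0.0, -1.0)          # one unit "into" the screen, right-handed
p_lh = flip_z(p_rh)              # same point with positive z into the screen
assert flip_z(p_lh) == p_rh
```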

Friday, February 9, 2007

Pixar's Monte Carlo Patent done?

Pixar patented Monte Carlo for graphics back in the mid-1980s, and for reasons I don't know the patent wasn't issued (#4897806) until Jan 1990. That is in the transitional phase of patent law as I understand it, so it expired 17 years after being issued-- i.e., last month. So is Monte Carlo now fair game in the US? I would think so, but there are also patents 5025400 and 5239624 that last until 2009 and 2011 respectively. They seem to my eye to not really claim anything new that was not in the original patent or the fabulous 1984 SIGGRAPH paper. Anyone have any more informed wisdom on this?

Tuesday, January 30, 2007

And a not-so-new paper

The architecture of our rendering system is described in a GI '06 paper. Morley and Boulos should get co-first authorship for this paper and deserve the bulk of the credit for the system. I consider this paper a turning point in my thinking about realistic rendering. The reason is that it addresses media as a first-class citizen of rendering in a practical way. The system doesn't really know the difference between media and surfaces, and overlapping media are not a problem. So you can add an atmosphere, and then a cloud, and neither the atmosphere nor the cloud need be aware of each other. Brute force is not an insult-- just ask quicksort!

New paper

I'll indicate new papers we do on this blog with a short post.

My student Margarita Bratkova just completed a paper on her work on panoramic maps. A tech report version is here.

Monday, January 22, 2007

Intro Graphics Course

For the first time in several years I am teaching Intro Graphics. I am basing this course on the Reyes pipeline as described in the superb paper. Here is my course web page. If you would like to follow along and do the assignments let me know and I'll link to your page. Each week there will be one assignment that will take 1-4 hours or so.

Friday, December 15, 2006

Gamma correction

Most graphics monitors have a "gamma" that gives a rough approximation to their non-linearity. For example, if we use an internal color space where all of R, G, B are stored as 0 to 1 values, then the physical spectrum coming from the red part of a pixel would be:

output_red( lambda ) = max_output_red(lambda) * pow( R, gamma )

where gamma is typically a number around 2.0. Suppose gamma is 2.0. If we want to display a red that is 1/4 the maximum, the R we would store in a file to send to the display would be 0.5.

Note that all of this is not to make up for some "defect" in the monitor. When you use 8-bit per channel color representations, there are only 256 gray levels available. If you space these out so they LOOK evenly spaced to a human observer, that makes a perceptually uniform set of 256 grays and banding is minimized. But since human intensity perception is non-linear (think of the "middle gray" photographer's card, which is physically about 1/5 the reflectance of white paper), the encoding, and thus the monitor's response, must be non-linear as well.