Monday, November 5, 2007

Hybrid algorithms

Some movies are made with a "hybrid" algorithm where rasterization is used for the visibility pass and ray tracing is used for some or all of the shading. This raises two questions:
  1. Why is this done rather than using just one algorithm?
  2. Would this technique be good for games?
First, question 1: the reason it makes sense to use both is that rasterization is fantastic at dealing with procedural geometry. This is the basic Reyes approach that is still going strong after almost 25 years: for each patch, dice into micropolygons while applying displacements. If you do a billion micropolygons, no problem, as they are never stored; the frame buffer is where the action is. Now suppose you want to do ambient occlusion; the best way to do that is ray tracing. But ray tracing against procedural geometry is slow. Pharr et al.'s geometry caching technique is probably the best known method. An alternative is simply not to apply the displacements and to trace rays against coarser geometry, as PDI did for Shrek 2 (see their SIGGRAPH paper for details). That idea goes back to at least Cohen's 1985 paper, where detailed geometry is illuminated by a coarse model.
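The ambient occlusion part is simple to state: at each shading point, cast rays over the hemisphere and see what fraction escape. Here is a minimal Monte Carlo sketch; `occluded_fn` is a placeholder for whatever ray query the renderer's geometry (diced, cached, or coarse) actually answers:

```python
import math
import random

def sample_hemisphere(normal):
    """Uniformly sample a unit direction in the hemisphere around `normal`."""
    # Rejection-sample a point in the unit ball, then flip it
    # into the hemisphere of the normal and normalize.
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        if 0.0 < sum(c * c for c in d) <= 1.0:
            break
    if sum(a * b for a, b in zip(d, normal)) < 0.0:
        d = tuple(-c for c in d)
    length = math.sqrt(sum(c * c for c in d))
    return tuple(c / length for c in d)

def ambient_occlusion(point, normal, occluded_fn, n_samples=256):
    """Estimate the unoccluded fraction of the hemisphere above `point`.
    `occluded_fn(origin, direction)` is a hypothetical ray query returning
    True if a ray from `origin` in `direction` hits any geometry."""
    hits = 0
    for _ in range(n_samples):
        d = sample_hemisphere(normal)
        if occluded_fn(point, d):
            hits += 1
    return 1.0 - hits / n_samples

# Toy occluder: a wall blocking the +x side of the hemisphere,
# so roughly half of the sampled rays should be occluded.
def half_space_blocker(origin, direction):
    return direction[0] > 0.0

random.seed(7)
ao = ambient_occlusion((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), half_space_blocker)
```

With the toy occluder, `ao` comes out near 0.5; a fully open hemisphere gives 1.0 and a fully blocked one gives 0.0. In production the expensive part is hiding inside `occluded_fn`, which is exactly why tracing those rays against full procedural detail hurts.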

Now question 2: would this be good for games? Nobody knows. I will give two arguments, one for and one against. The argument for is again procedural geometry. There will not be so much geometry that it won't fit in core, but doing visibility with rasterization will still help locality, something like a DX10 geometry shader can be used, and efficient use of caches is where the action is and will continue to be. Now an argument against: Whitted-style ray tracing has good locality too, so such complexity is not needed, and once games use ray tracing for shading they may as well use it for visibility for software simplicity. My bet is on the latter, but I sure wouldn't give odds. If graphics teaches us anything, it is that predicting the future is difficult!