Some movies are made with a "hybrid" algorithm where rasterization is used for the visibility pass and ray tracing is used for some or all of the shading. This raises two questions:
- Why is this done rather than using just one algorithm?
- Would this technique be good for games?
First, question 1: the reason it makes sense to use both is that rasterization is fantastic at dealing with procedural geometry. It is the basic Reyes approach that is still going strong after almost 25 years: for each patch, dice into micropolygons while applying displacements. If you do a billion micropolygons, no problem, as they are never stored; the frame buffer is where the action is. Now suppose you want to do ambient occlusion; the best way to do that is ray tracing. But ray tracing on procedural geometry is slow. Pharr et al.'s caching technique is probably the best-known method. An alternative is simply not to apply the displacements and to use less geometry for the rays, as PDI did for the Shrek 2 movie (see their SIGGRAPH paper for details). That idea goes back to at least Cohen's 1985 paper, where detailed geometry is illuminated by a coarse model.
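To make the dicing idea concrete, here is a minimal sketch of the dice/displace/shade/discard pattern, using hypothetical names rather than any production renderer's API. The point is the lifetime: nothing persists for a ray tracer to intersect later, which is exactly why the shading pass has to fall back to caching or coarser un-displaced geometry.

```cpp
// A minimal sketch of the dice/displace/shade/discard pattern (hypothetical
// names, not any production renderer's API). A patch becomes a transient
// grid of displaced micropolygon vertices that are shaded, splatted into
// the frame buffer, and immediately forgotten.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// toy parametric patch: a piece of a unit sphere
Vec3 evalPatch(float u, float v) {
    float th = u * 3.14159f, ph = v * 3.14159f;
    return { std::sin(th) * std::cos(ph),
             std::sin(th) * std::sin(ph),
             std::cos(th) };
}

float displacement(float u, float v) {   // procedural surface detail
    return 0.05f * std::sin(40.0f * u) * std::sin(40.0f * v);
}

float shade(const Vec3& n) {             // stand-in; ambient occlusion rays would go here
    return std::fmax(0.0f, n.z);
}

void splat(const Vec3& p, float c) {     // stand-in for a frame buffer write
    std::printf("(%g, %g, %g) -> %g\n", p.x, p.y, p.z, c);
}

void renderPatch(int nu, int nv) {
    for (int i = 0; i < nu; ++i)
        for (int j = 0; j < nv; ++j) {
            float u = i / float(nu - 1), v = j / float(nv - 1);
            Vec3 n = evalPatch(u, v);    // on a unit sphere, normal == position
            float d = displacement(u, v);
            Vec3 p = { n.x + n.x * d, n.y + n.y * d, n.z + n.z * d };
            splat(p, shade(n));
            // the micropolygon vertex dies here: nothing is retained
            // for a ray tracer to intersect later
        }
}

int main() { renderPatch(16, 16); }
```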
Now question 2: would this be good for games? Nobody knows. I will give two arguments, one for and one against. The argument for is again procedural geometry, though in games there will not be so much of it that it won't fit in core. Still, doing it with rasterization will help locality, and something like a DX10 geometry shader can be used; efficient use of caches is where the action is and will continue to be. Now an argument against: Whitted-style ray tracing has good locality too, and thus such complexity is not needed. Once games use ray tracing they may as well use it for visibility for software simplicity. My bet is on the latter, but I sure wouldn't give odds. If graphics teaches us anything, it is that predicting the future is difficult!
8 comments:
Another argument for a hybrid approach in the context of film rendering is how cheap motion blur and depth of field are. With a REYES renderer you just transform your samples into image space and then interpolate the values along the motion vector, and similarly for depth of field. This is of course incorrect, since you assume the shading is constant over the frame time delta, but this generally isn't a problem. Texture locality is another big win. Film assets tend to have multiple gigs of texture data; it's not uncommon for a hero character to have upwards of 40 gigs of textures. This may be excessive for what is actually required to maintain an acceptable fidelity in the final frame, but it's cheaper to pay an artist to just paint everything and then let the renderer decide what is required than to train an artist to work within the bounds of the rendering algorithm being employed.
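A rough sketch of that motion-blur trick, with stand-in names (nothing here is a real renderer's API): shade a sample once, then distribute that one shaded color along its image-space motion vector at jittered times.

```cpp
// Sketch of image-space motion blur (hypothetical names): one shaded color
// is distributed along the sample's screen-space motion vector at jittered
// times, holding shading constant over the shutter interval.
#include <cstdio>
#include <random>

struct Sample2D { float x, y; };

void accumulate(float x, float y, float r, float g, float b) {
    // stand-in for adding a weighted sample to the frame buffer
    std::printf("sample at (%.2f, %.2f), color (%g, %g, %g)\n", x, y, r, g, b);
}

void splatWithMotionBlur(Sample2D open, Sample2D close,    // screen positions at
                         float r, float g, float b,        // shutter open/close
                         int nTimeSamples, std::mt19937& rng) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    for (int i = 0; i < nTimeSamples; ++i) {
        float t = (i + uni(rng)) / nTimeSamples;           // jittered shutter time
        accumulate(open.x + t * (close.x - open.x),        // interpolate along the
                   open.y + t * (close.y - open.y),        // motion vector...
                   r, g, b);                               // ...reusing one shaded color
    }
}

int main() {
    std::mt19937 rng(7);
    splatWithMotionBlur({10.0f, 10.0f}, {14.0f, 11.0f}, 1.0f, 0.0f, 0.0f, 8, rng);
}
```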
I think there are a lot of positive arguments for hybrid rendering in games. For one, it's easier to sell ray tracing to the game development community: as far as they are concerned, everything is pretty much the same except a large set of effects just got easier to implement or became more correct. Once the game development community begins to understand ray tracing, perhaps they can slowly be weaned off rasterization implementations, at least where it makes sense. Texture locality is important in this domain too, perhaps even more so. Being able to render with texture data out of core is very appealing, due to both the large environments in games and the cost of on-card memory. It's easy to throw more memory at the problem in off-line rendering, but games require a lot of complicated LOD techniques, many of which are dependent on texture locality.
- Ryan
ryan geiss?
personally i think it'll be a while before we see bona fide ray tracing in games, even selectively. the first step is finding efficient gpu traversal algorithms which don't need a stack (or the development of gpus with stacks). it's certainly true that game developers aren't shy about using ray tracing in restricted situations (e.g. parallax mapping, which is really just isosurface intersection via ray marching).
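for anyone who hasn't seen it framed that way, here's a minimal cpu sketch of the idea (hypothetical names; the real thing lives in a pixel shader): march the tangent-space view ray through a heightfield until it first dips below the stored height, then shade with the shifted coordinates.

```cpp
// CPU sketch of parallax-style mapping as ray marching (hypothetical names;
// a real version lives in a pixel shader): march the tangent-space view ray
// through a heightfield until it first dips below the stored height.
#include <cmath>
#include <cstdio>

float heightAt(float u, float v) {       // stand-in for a height texture fetch
    return 0.5f + 0.5f * std::sin(10.0f * u) * std::sin(10.0f * v);
}

// march from (u, v) along the tangent-space view direction (vx, vy, vz);
// writes the uv where the ray first falls below the heightfield
void parallaxMarch(float u, float v, float vx, float vy, float vz,
                   int steps, float* outU, float* outV) {
    float depth = 0.0f, stepDepth = 1.0f / steps;
    float du = -vx / vz * stepDepth;     // uv offset per depth step
    float dv = -vy / vz * stepDepth;
    for (int i = 0; i < steps; ++i) {
        if (depth >= 1.0f - heightAt(u, v)) break;   // below the surface: hit
        u += du; v += dv; depth += stepDepth;
    }
    *outU = u; *outV = v;                // shade with these shifted coordinates
}

int main() {
    float u, v;
    parallaxMarch(0.5f, 0.5f, 0.3f, 0.1f, 1.0f, 32, &u, &v);
    std::printf("shifted uv: (%g, %g)\n", u, v);
}
```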
If you remember news from Intel, the design of GPUs is about to change radically (especially given the games industry names Intel have been aggressively hiring recently). Why have a programmable pipeline when you can just program your own pipeline? Realtime raytracing is about to get a new lease of life.
What's the exact reference for the SIGGRAPH paper you're referring to? Thanks
Hi-- the reference is:
http://portal.acm.org/citation.cfm?id=1015748
thanks
Measured in terms of performance per watt (and almost no other measure matters for high-end realtime graphics), hardware-assisted rasterization with multisampling is an incredibly efficient way to cast antialiased primary rays. A modern GPU can probably render roughly 1-million-polygon scenes at 2560x1600 with multisampling at well over 100 Hz. Game developers are reluctant to give up any performance at all, so if you rely on the purity or cleanliness of ray tracing algorithms to win them over, it will be a long time and a slow transition to casting all primary rays "for simplicity". I think this will only happen when the rendering performance is "good enough".
New visual effects are a better motivation for ray tracing, but the problem is that most ray tracing "looks" can be and are rendered convincingly with rasterization techniques. E.g., games use precomputed light maps for global illumination today, and it looks great. To improve on this with ray tracing, you need to posit lots of highly dynamic objects, and then you've got to build the acceleration structure for those objects really darn fast, and that's a linear-time problem likely to be bandwidth-bound. So you'd like to be on a platform with a throughput-oriented, high-bandwidth memory subsystem... like the GPU. And if you're on a GPU, why not rasterize those primary rays while you're at it...
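One concrete version of that linear-time, bandwidth-bound pass is refitting an existing BVH after the geometry deforms rather than rebuilding it; the sketch below uses a hypothetical node layout, not any particular API. Each node is touched once in a bottom-up sweep, so the cost is essentially one pass over memory.

```cpp
// Sketch of a linear-time BVH refit after deformation (hypothetical node
// layout, not any particular API). Children are stored after their parent,
// so one reverse sweep visits children before parents: each node is touched
// exactly once, and the cost is essentially one pass over memory.
#include <algorithm>
#include <vector>

struct AABB {
    float lo[3] = {  1e30f,  1e30f,  1e30f };
    float hi[3] = { -1e30f, -1e30f, -1e30f };
    void grow(const AABB& b) {
        for (int a = 0; a < 3; ++a) {
            lo[a] = std::min(lo[a], b.lo[a]);
            hi[a] = std::max(hi[a], b.hi[a]);
        }
    }
};

struct Node {
    AABB box;
    int left = -1, right = -1;           // child indices; -1 marks a leaf
    int firstTri = 0, triCount = 0;
};

AABB boundTriangles(int /*first*/, int /*count*/) {
    // stand-in: a real version scans the deformed vertices of the leaf's triangles
    AABB b;
    for (int a = 0; a < 3; ++a) { b.lo[a] = 0.0f; b.hi[a] = 1.0f; }
    return b;
}

void refit(std::vector<Node>& nodes) {
    for (int i = (int)nodes.size() - 1; i >= 0; --i) {
        Node& n = nodes[i];
        if (n.left < 0)                  // leaf: rebound its triangles
            n.box = boundTriangles(n.firstTri, n.triCount);
        else {                           // interior: merge child boxes
            n.box = nodes[n.left].box;
            n.box.grow(nodes[n.right].box);
        }
    }
}

int main() {
    std::vector<Node> nodes(3);
    nodes[0].left = 1; nodes[0].right = 2;       // root with two leaves
    nodes[1].triCount = nodes[2].triCount = 1;
    nodes[2].firstTri = 1;
    refit(nodes);                                // one linear pass per frame
}
```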
I think there are other "hybrid" approaches that make sense. For example, use a traditional shadow map (created with Z-only rasterization, insanely fast on today's GPUs) that catches most shadows, then cast rays only in the ambiguous pixels. Because you have the RT "fallback", you can be much more casual about sizing your shadow map and about clever tricks like cascaded shadow maps, etc., than today's developers have to be.
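A minimal sketch of that idea, with all names hypothetical: trust the shadow map wherever the depth comparison is decisive, and spend a shadow ray only when the receiver depth lands inside a conservative uncertainty band around the stored depth.

```cpp
// Sketch of the shadow map + ray fallback (all names hypothetical): trust
// the map where the depth comparison is decisive, and spend a shadow ray
// only inside a conservative uncertainty band around the stored depth.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float shadowMapDepth(float u, float v) {        // stand-in for a depth texture fetch
    return 0.5f + 0.1f * std::sin(20.0f * u) * std::sin(20.0f * v);
}

bool occludedByRay(const Vec3&, const Vec3&) {  // stand-in for one shadow ray
    return false;
}

float shadowFactor(const Vec3& p, const Vec3& toLight,
                   float u, float v, float lightSpaceDepth) {
    float stored = shadowMapDepth(u, v);
    const float band = 0.01f;                // loose bias band: the ray fallback
                                             // lets shadow map sizing be casual
    if (lightSpaceDepth < stored - band) return 1.0f;   // clearly lit
    if (lightSpaceDepth > stored + band) return 0.0f;   // clearly shadowed
    return occludedByRay(p, toLight) ? 0.0f : 1.0f;     // ambiguous: ask a ray
}

int main() {
    Vec3 p{0, 0, 0}, l{0, 0, 1};
    std::printf("shadow factor: %g\n", shadowFactor(p, l, 0.3f, 0.7f, 0.55f));
}
```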
I still think ray tracing won't hit full stride until rasterization hits its limits. If you compare computer games from five years ago, like Battlefield 1942, to Crysis, it's a major jump in visual quality. I think we can expect a lot from GPUs in the next few years. It wouldn't be a major change in architecture, for example, if new GPUs started supporting 64x supersampling like we see in movies today. That alone would produce some very competitive images. Also, with subdivision surfaces now becoming mainstream on the GPU as fully deformable meshes, the fact that ray tracing can't build acceleration structures for these animating meshes equally fast makes it less appealing. If I understand correctly, deforming meshes can be instanced much more easily with a GPU, since you only need one copy in memory and can reposition it on the fly for every pose, while ray tracing needs a copy in memory for each unique instance. I guess this becomes less important as memory gets cheaper and cheaper, but those acceleration structures still have to be built.
I see this as one of the negative aspects of a hybrid renderer. Using a ray tracer, you need an acceleration structure no matter what: you can't just rasterize the animated geometry without spending the time rebuilding the acceleration structures for everything that might be ray traced. I see real-time skinning as one of the biggest hurdles in ray tracing. The Unreal engine demoed a crowd of a thousand creatures running down a road, each in a unique animation pose while taking up a small footprint in memory. I would find that difficult to implement efficiently in a ray tracer.
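To spell out why that crowd is hard, here is a sketch under stated assumptions (all structures below are hypothetical stand-ins): a rasterizer can skin one shared rest-pose mesh per instance on the fly and discard the result, while a ray tracer generally has to materialize every instance's skinned vertices, and refit each one's acceleration structure, before the first ray is traced, because any ray may hit any creature.

```cpp
// Sketch of the crowd problem (hypothetical structures): a rasterizer skins
// one shared rest-pose mesh per instance on the fly and discards the result,
// while a ray tracer has to materialize every instance's skinned vertices
// (and refit each acceleration structure) before the first ray is traced.
#include <vector>

struct Vec3 { float x, y, z; };
struct Mesh { std::vector<Vec3> restPose; };

Vec3 skin(const Vec3& v, int instance) {     // stand-in for per-instance bones/pose
    return { v.x + 0.1f * instance, v.y, v.z };
}

// rasterizer-style: stream each instance through, skin, emit, forget
void rasterizeCrowd(const Mesh& m, int nInstances) {
    for (int i = 0; i < nInstances; ++i)
        for (const Vec3& v : m.restPose)
            (void)skin(v, i);                // would be emitted, never stored
}

// ray-tracer-style: every skinned copy must exist before any ray is traced
std::vector<std::vector<Vec3>> bakeCrowd(const Mesh& m, int nInstances) {
    std::vector<std::vector<Vec3>> baked(nInstances);
    for (int i = 0; i < nInstances; ++i) {
        baked[i].reserve(m.restPose.size());
        for (const Vec3& v : m.restPose)
            baked[i].push_back(skin(v, i));
        // ...and each baked[i] still needs its BVH rebuilt or refit
    }
    return baked;
}

int main() {
    Mesh m{ { {0, 0, 0}, {1, 0, 0}, {0, 1, 0} } };
    rasterizeCrowd(m, 1000);                 // constant memory footprint
    auto baked = bakeCrowd(m, 1000);         // 1000 skinned copies in memory
}
```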