Saturday, February 17, 2007

Bye-bye z-buffer?

The main advantage of the z-buffer, in my opinion, is that you can stream geometry through it, so companies like Pixar can make images of HUGE models. However, in an interactive setting you don't want to do that, so games use fairly small polygon sets that fit in main memory. The image quality in games will be better once ray tracing is interactive. When will that be?
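To make the streaming point concrete, here is a rough sketch of the z-buffer inner loop (my own simplification; the Fragment and Triangle types and the rasterize() helper are placeholders, not any particular API). Each triangle is rasterized and depth-tested on its own and then thrown away, which is exactly why the whole scene never has to fit in memory.

```cpp
// Sketch of why the z-buffer streams: each triangle is processed
// independently and then discarded, so the scene never needs to be
// resident all at once.  Types and rasterize() are placeholders.
#include <limits>
#include <vector>

struct Fragment { int x, y; float depth; float color[3]; };
struct Triangle { /* vertex positions, attributes, ... */ };

// Hypothetical helper (placeholder body): returns the pixels one
// triangle covers; a real version would use edge functions.
std::vector<Fragment> rasterize(const Triangle&) { return {}; }

void renderStream(const std::vector<Triangle>& stream, int width, int height,
                  std::vector<float>& zbuffer, std::vector<float>& framebuffer) {
    zbuffer.assign(width * height, std::numeric_limits<float>::infinity());
    framebuffer.assign(width * height * 3, 0.0f);
    for (const Triangle& tri : stream) {          // one triangle at a time
        for (const Fragment& f : rasterize(tri)) {
            int i = f.y * width + f.x;
            if (f.depth < zbuffer[i]) {           // closest fragment wins
                zbuffer[i] = f.depth;
                for (int c = 0; c < 3; ++c)
                    framebuffer[i * 3 + c] = f.color[c];
            }
        }
    }   // tri can be forgotten here; nothing about it is needed again
}
```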

Intel has just demonstrated an 80-core teraflop chip that looks ideal for ray tracing. According to recent tests, full ray tracing is about 20 times slower than it needs to be on a 2GHz dual-core Opteron 870. If Intel's numbers are realistic (we'll have to wait to see what the memory implications are), then the slightly smaller Larrabee system will be plenty fast enough for good ray tracing on a single chip.
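For what it's worth, the back-of-the-envelope scaling behind that claim looks like the sketch below. It naively assumes per-core throughput comparable to the Opteron's, which the research chip's much simpler cores almost certainly don't have, so treat it as optimism with an error bar.

```cpp
// Naive scaling of the numbers quoted above; the constants come from the
// post, the "comparable per-core throughput" assumption is mine.
#include <iostream>

int main() {
    const double shortfall    = 20.0;  // "about 20 times slower than it needs to be"
    const int    cores_now    = 2;     // dual-core Opteron 870
    const int    cores_future = 80;    // Intel's teraflop research chip

    const double cores_needed = shortfall * cores_now;   // ~40 comparable cores
    std::cout << "comparable cores needed: " << cores_needed << "\n"
              << "headroom on 80 cores:    " << cores_future / cores_needed
              << "x\n";
}
```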

What does this mean? I predict it means the z-buffer will soon exist only in software renderers at studios, so if you are an undergraduate, don't bother learning OpenGL unless you need to use it in a project or job soon. I doubt OpenGL will die entirely due to legacy issues, but it is not something to learn unless you have to. I speak as a person who spent much of my early years writing FORTRAN code!

6 comments:

Sebastian Mach said...

I am sure ray tracing is the (currently only) future. That's good for me, since I have been studying it as a hobby for nearly a year now (you can see my efforts here: http://greenhybrid.net).

On the other hand, (HW) rasterizing has a huge 'lobby' and I think it will survive the next 5-10 years. I don't know exactly, but I'd guess 80-95 percent of all professional game developers have never written even a simple Whitted-style ray tracer.


greets,

Sebastian Mach
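For anyone wondering what "a simple Whitted-style ray tracer" amounts to, here is a minimal sketch: spheres only, one point light, hard shadows, and recursive mirror reflection. The scene, camera, and constants are purely illustrative.

```cpp
// A minimal Whitted-style ray tracer: primary rays, shadow rays, and
// recursive mirror reflection over a tiny hard-coded sphere scene.
// Writes a plain PPM image to stdout.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec { double x, y, z; };
Vec operator+(Vec a, Vec b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec operator-(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec operator*(Vec a, double s) { return {a.x * s, a.y * s, a.z * s}; }
double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec norm(Vec a) { return a * (1.0 / std::sqrt(dot(a, a))); }

struct Sphere { Vec center; double radius; Vec color; double reflectivity; };

// Nearest positive root of |o + t*d - center|^2 = radius^2.
bool hit(const Sphere& s, Vec o, Vec d, double& t) {
    Vec oc = o - s.center;
    double b = dot(oc, d), c = dot(oc, oc) - s.radius * s.radius;
    double disc = b * b - c;
    if (disc < 0) return false;
    double root = std::sqrt(disc);
    t = -b - root;
    if (t < 1e-4) t = -b + root;
    return t > 1e-4;
}

const std::vector<Sphere> scene = {
    {{0, 0, -3}, 1.0, {1, 0.2, 0.2}, 0.3},         // red, slightly mirrored
    {{0, -101, -3}, 100.0, {0.8, 0.8, 0.8}, 0.0},  // huge sphere as a floor
};
const Vec light = {5, 5, 0};

Vec trace(Vec o, Vec d, int depth) {
    double best = 1e30; const Sphere* obj = nullptr;
    for (const Sphere& s : scene) {
        double t;
        if (hit(s, o, d, t) && t < best) { best = t; obj = &s; }
    }
    if (!obj) return {0.1, 0.1, 0.3};                  // background color
    Vec p = o + d * best;
    Vec n = norm(p - obj->center);
    Vec l = norm(light - p);

    // Shadow ray: is the light visible from the hit point?
    bool shadowed = false;
    for (const Sphere& s : scene) {
        double t;
        if (hit(s, p, l, t)) { shadowed = true; break; }
    }
    double diff = shadowed ? 0.0 : std::fmax(0.0, dot(n, l));
    Vec color = obj->color * (0.1 + 0.9 * diff);       // ambient + diffuse

    // Whitted's step: recursively trace the mirror direction.
    if (depth > 0 && obj->reflectivity > 0) {
        Vec r = norm(d - n * (2 * dot(d, n)));
        Vec refl = trace(p, r, depth - 1);
        color = color * (1 - obj->reflectivity) + refl * obj->reflectivity;
    }
    return color;
}

int main() {
    const int W = 256, H = 256;
    std::printf("P3\n%d %d\n255\n", W, H);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            Vec d = norm({(x - W / 2.0) / W, (H / 2.0 - y) / H, -1.0});
            Vec c = trace({0, 0, 0}, d, 3);
            std::printf("%d %d %d\n", (int)(255 * std::fmin(c.x, 1.0)),
                        (int)(255 * std::fmin(c.y, 1.0)),
                        (int)(255 * std::fmin(c.z, 1.0)));
        }
}
```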

Peter Shirley said...

Hi Sebastian-- those images look good. And remember I am an academic, so 5-10 years to me is soon! I agree with your reasoning.

Unknown said...

I really wish more people would lobby for the big NV to include some robust ray/triangle intersection units in their multi-mega-transistor stream processors.

Why aren't the academics pushing for it?

PS. http://www.fractographer.com/propaganda/rrt.jpg
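For context, the kind of fixed-function unit being asked for above would evaluate something like the Möller-Trumbore ray/triangle test sketched below. This is just an illustrative software version, not anything any GPU vendor has announced.

```cpp
// Sketch of the work a ray/triangle intersection unit would do: the
// Moller-Trumbore test.  Returns true (and the hit distance t) if the
// ray o + t*d crosses triangle (v0, v1, v2).  Illustrative only.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

bool rayTriangle(Vec3 o, Vec3 d, Vec3 v0, Vec3 v1, Vec3 v2, float& t) {
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(d, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;    // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(o, v0);
    float u = dot(s, p) * inv;                 // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(d, q) * inv;                 // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;                      // distance along the ray
    return t > eps;
}

int main() {
    float t;
    bool h = rayTriangle({0, 0, 0}, {0, 0, -1},          // ray down -z
                         {-1, -1, -2}, {1, -1, -2}, {0, 1, -2}, t);
    std::printf("hit=%d t=%.3f\n", h, h ? t : 0.0f);     // expect hit=1 t=2.000
}
```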

Unknown said...

Actually that's a bit unfair to say to you specifically, since I know you & Wald are big proponents :) I meant academia in general.

BTW I've met Wald while he was in Cape Town at Afrigraph :D

Peter Shirley said...

Hi Thomas. I have always been quite skeptical about making GPUs do ray tracing, even if you add features. I am just too skeptical of their memory architecture for a non-z-buffer system (though it would make me happy to be wrong).

I'd rather have the CPU makers start making their cores more ray tracing friendly. As Intel has a very strong ray tracing group in-house, I bet that will happen...

Unknown said...

The latency-hiding architecture of a GPU seems naturally suited to the incoherent nature of Monte Carlo simulations to me; having said that, I certainly wouldn't mind if Intel dedicated some logic to the cause, and am well aware of Reshetov et al.'s awesomeness :)

BTW, I should take a moment to invite you to the ompf forums: http://ompf.org/forum/ (many people in the industry hang out there, and now and then there's a hot preprint to be gotten :)