Tuesday, January 30, 2007

And a not-so-new paper

The architecture of our rendering system is described in a GI 06 paper. Morley and Boulos should be co-first authors and deserve the bulk of the credit for the system. I consider this paper a turning point in my thinking about realistic rendering, because it addresses media as first-class citizens of rendering in a practical way. The system doesn't really know the difference between media and surfaces, and overlapping media are not a problem: you can add an atmosphere, and then a cloud, and neither needs to be aware of the other. Brute force is not an insult-- just ask quicksort!
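A minimal sketch of why overlapping media need not know about each other (the `HomogeneousMedium` class and exponential free-flight sampling here are my illustrative assumptions, not the paper's actual interface): each medium along a ray independently proposes a tentative scattering distance, and whichever event is nearest -- a medium interaction or the surface hit -- wins. Adding a cloud inside an atmosphere is then just one more entry in the list.

```python
import math
import random

class HomogeneousMedium:
    """Toy homogeneous medium with exponential free-flight sampling."""
    def __init__(self, sigma_t):
        self.sigma_t = sigma_t  # extinction coefficient

    def sample_distance(self, rng):
        # Invert the transmittance CDF: t = -ln(1 - u) / sigma_t
        return -math.log(1.0 - rng.random()) / self.sigma_t

def next_event(media, surface_hit_t, rng):
    """Each overlapping medium proposes a distance independently;
    the closest proposal (or the surface hit) is the next event."""
    best_t, best = surface_hit_t, "surface"
    for m in media:
        t = m.sample_distance(rng)
        if t < best_t:
            best_t, best = t, m
    return best_t, best

rng = random.Random(0)
atmosphere = HomogeneousMedium(sigma_t=0.1)
cloud = HomogeneousMedium(sigma_t=2.0)
t, event = next_event([atmosphere, cloud], surface_hit_t=5.0, rng=rng)
```

Note that neither medium ever queries the other; overlap is handled entirely by the min over proposed distances.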

9 comments:

  1. Maybe brute force is not the best word-- how about "simple and uninformed"?

  2. I've got a couple of implementation questions.

    First, how do you handle sampling overlapping media? Do you rely on independent samples for each medium? That loses some stratification. Or do you try to use a single sample, which (I think) requires finding all the entrances and exits of the media, in order.

    Second, you talk about using a pdf that describes the directions to the luminaires. How do you compute this pdf? I can imagine sampling the luminaire area, then casting the ray and intersecting only against luminaires. Summing the probability densities of the intersected points (times the appropriate geometry terms) should give the overall probability density for casting a ray in that direction.

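The computation the commenter describes can be sketched like this (the tuple representation of a luminaire hit is hypothetical): each intersected point's area density is converted to a solid-angle density with the usual distance-squared-over-cosine geometry term, and the contributions along the ray are summed.

```python
def direction_pdf(hits):
    """Solid-angle pdf of a sampled direction, given every luminaire
    intersection along the ray. Each hit is (pdf_area, dist, cos_theta),
    with cos_theta measured between the luminaire normal and the ray.

    pdf_omega = sum over hits of pdf_area * dist^2 / cos_theta
    """
    total = 0.0
    for pdf_area, dist, cos_theta in hits:
        if cos_theta > 1e-6:  # grazing hits contribute essentially nothing
            total += pdf_area * dist * dist / cos_theta
    return total

# A ray that passes through two overlapping luminaires:
pdf = direction_pdf([(0.25, 2.0, 0.8), (0.10, 3.0, 0.5)])
```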
  3. Indeed, we just rely on independent samples; the lost stratification isn't that helpful. I don't find that stratification helps except for optically thin media anyway, because multiple scattering dominates pretty quickly. And yes, O(1/sqrt(n)) convergence means LOTS of samples!

    For the pdf we just tag things (e.g., the aperture in a ceiling-light case). In the long run I expect that to be heavily human-aided, with a GUI.

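The O(1/sqrt(n)) remark is easy to check numerically on a toy integral: quadrupling the sample count should roughly halve the RMS error. (The integrand and sample counts below are arbitrary choices for illustration.)

```python
import math
import random

def mc_estimate(n, rng):
    # Monte Carlo estimate of the integral of x^2 on [0, 1] (exact: 1/3).
    return sum(rng.random() ** 2 for _ in range(n)) / n

def rms_error(n, trials=200, seed=1):
    # RMS error of the n-sample estimator over many independent trials.
    rng = random.Random(seed)
    sq_errs = [(mc_estimate(n, rng) - 1.0 / 3.0) ** 2 for _ in range(trials)]
    return math.sqrt(sum(sq_errs) / trials)

# O(1/sqrt(n)): 4x the samples should give roughly 2x less error.
ratio = rms_error(100) / rms_error(400)
```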
  4. Thanks for the answer.

    With my pdf question I guess I'm trying to get at how you actually generate the direction sample.

    Assume I have two tagged objects (like the aperture). When they are projected onto the hemisphere over which I'm sampling, the two objects overlap. Since they overlap, I can't consider them independently; the pdf in the overlap region is actually some mixture of the two.

    Direct lighting solves this problem (via the visibility test) by giving the closer tagged object a weight of 1 in the overlap region and the more distant object a weight of 0. This is essentially MIS with something like the maximum heuristic.

    In the paper you argue that your approach is simpler because you don't have to do direct lighting. However, at some point you still have to solve the overlapping tagged object problem. How do you do that?

  5. Hi Justin. You are absolutely correct. This issue was first raised by Eric V in his MIS paper. Each object you need to sample needs a pdf(direction) function. That way, once you choose a direction, you evaluate the pdf of that direction for every object you might have sampled. That is definitely the downside of this approach, in both code complexity and runtime. But for small numbers of lights it's no big deal. For large numbers, a research problem!

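The scheme described in that answer can be sketched as follows (the `SampleableObject` interface and its discrete-direction toy pdf are hypothetical stand-ins, not the system's real code): pick one tagged object, sample a direction from it, then evaluate every object's pdf(direction) and average -- which is exactly what makes overlapping tagged objects consistent.

```python
import random

class SampleableObject:
    """Hypothetical tagged object: proposes directions and reports its
    pdf for any direction (a crude discrete-direction toy)."""
    def __init__(self, dirs):
        self.dirs = dirs  # maps direction name -> pdf value (toy model)

    def sample_direction(self, rng):
        names = list(self.dirs)
        return rng.choices(names, weights=list(self.dirs.values()))[0]

    def pdf(self, direction):
        return self.dirs.get(direction, 0.0)

def sample_mixture(objects, rng):
    # Choose one tagged object uniformly and sample a direction from it...
    obj = rng.choice(objects)
    d = obj.sample_direction(rng)
    # ...then evaluate EVERY object's pdf for that direction; the true
    # density is the average, so overlap regions are weighted correctly.
    pdf = sum(o.pdf(d) for o in objects) / len(objects)
    return d, pdf

aperture = SampleableObject({"up": 0.5, "left": 0.5})
lamp = SampleableObject({"up": 0.5, "right": 0.5})
d, pdf = sample_mixture([aperture, lamp], random.Random(0))
```

In the overlap ("up"), the mixture density is (0.5 + 0.5) / 2 = 0.5, higher than either non-overlapping direction -- the cost being one pdf evaluation per tagged object per sample, which is the code-complexity and runtime downside mentioned above.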
  6. I don't think the computational and conceptual requirements for MIS are all that steep; one thing I can't work out, however, is how specular BRDFs (with delta functions) mix with MIS :/
