Mattias Rabb showed me a really neat trick a few years ago that this post will share. He found it in a 1967 paper by Coleman that I had never seen. Kudos to Mattias for not only finding this paper, but for managing to find this trick in it (I would not have understood it had he not told me what was there). It gives a way to compute an unbiased random hitpoint for a light particle traveling through a medium with nonuniform density.
Recall that for a uniform density volume this can be done analytically. Given a canonical random number X (e.g., a call to drand48()), and a medium with interaction coefficient s (the probability of a collision in a small distance d is sd), we have distance = -log(1-X)/s (see Dutre et al., p 241 for a derivation of this classic result).
In a photon tracing program we would use this as follows:
X = drand48()                             // canonical random number in [0,1)
d = -log(1-X)/s                           // sampled collision distance
newposition = rayorigin + d*raydirection  // assumes unit-length direction
scatter or absorb at newposition
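As a sanity check, here is a minimal self-contained C sketch of that sampling step (the density value and sample count are arbitrary choices of mine, not from the post); the sample mean of the distances should converge to the analytic mean free path 1/s:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    double s = 2.0;       /* interaction coefficient (arbitrary test value) */
    int n = 1000000;      /* number of samples */
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double X = drand48();        /* canonical random number in [0,1) */
        sum += -log(1.0 - X) / s;    /* sampled collision distance */
    }
    /* the exponential distribution with rate s has mean 1/s */
    printf("sample mean = %f, expected = %f\n", sum / n, 1.0 / s);
    return 0;
}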
Now suppose s(x,y,z) is a function of position, like a call to noise or whatever. Usually we would resort to ray marching with small steps that assume s is locally constant. This is a pain in the same family as ray epsilons. But the Coleman paper has a lovely trick when s(x,y,z) is always less than a known smax. Pretend the medium is all constant smax, take a step as above, and sometimes reject the result:
d = 0
while (1) {
    X = drand48()
    d += -log(1-X)/smax                  // tentative step as if the density were smax
    Y = drand48()
    newposition = rayorigin + d*raydirection
    if (s(newposition)/smax > Y)         // accept with probability s/smax
        break                            // real collision: scatter or absorb here
}                                        // otherwise a null collision: keep marching
This takes a bunch of random steps and is unbiased (I would love to see a clean proof of this; I don't follow Coleman's). I think it saved us something like a 5X speedup in our adjoint photon tracer.
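To make the unbiasedness claim concrete, here is a small self-contained C sketch (my own test harness, not from the post) that runs the rejection loop against a density that is secretly constant at a*smax, so the answer is known analytically; the sample mean of the collision distances should match the mean free path 1/(a*smax):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define SMAX 4.0   /* known upper bound on the density (arbitrary test value) */
#define A    0.25  /* test density is the constant A*SMAX */

/* Test density along the ray: constant, so the answer is known analytically.
   A real density would vary with position (noise, etc.). */
static double s_of(double d) {
    (void)d;
    return A * SMAX;
}

int main(void) {
    int n = 1000000;
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double d = 0.0;
        for (;;) {
            d += -log(1.0 - drand48()) / SMAX; /* step as if the density were SMAX */
            if (s_of(d) / SMAX > drand48())    /* accept with probability s/SMAX */
                break;                         /* real collision */
        }                                      /* null collisions just keep marching */
        sum += d;
    }
    /* mean free path of a uniform medium with density A*SMAX is 1/(A*SMAX) */
    printf("sample mean = %f, expected = %f\n", sum / n, 1.0 / (A * SMAX));
    return 0;
}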
Comments:
It seems strange to take -log(1-X) when X is a uniform random variable on 0..1. (1-X is also a uniform random variable on 0..1). Why not just take -log(X)?
X is uniform on [0,1), and log(0) is undefined. Taking 1-X gives you a value in (0,1], and that interval is valid for the log function.
That is a very interesting trick! And the simple restriction on the noise function (that it has some max value) is quite reasonable for those of us who work with volumes that aren't always easily integrable. Thank you for sharing this gem!
Intuitive argument:
I'm sure you already thought of this, but intuitively the method works by treating the medium as uniform with density smax, except that only a fraction of the particles are visible, namely s(x,y,z)/smax.
Proof sketch:
While not very rigorous, and maybe not that clean either, here's a sketch of a possible proof in three stages: (1) start with the uniform density case, (2) progress to two densities, and (3) inductively extrapolate to arbitrary density:
1) The method works for uniform densities as follows: Let's say that the density of the medium is a*smax (where 0 <= a <= 1). Then, if we compute the expected distance to a scatter event, we get a stochastic sum of segment lengths. Each term has the form
-log(1-x)/smax
with a different x each time (but each term has the same expected value). The probabilities for the terms being added follow a geometric series:
P = 1, (1-a), (1-a)^2 ...
Thus, the expected sum of the series is the expected value of a term times the sum of the geometric series (1/a):
E[-log(1-x)/smax] / a
This is exactly what we would get if we just performed the standard calculation using density a*smax.
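Filling in the arithmetic of that step (my notation; it uses the fact that E[-\log(1-x)] = 1 for x uniform on [0,1)):

E[N] = \sum_{k=1}^{\infty} P(N \ge k) = \sum_{k=0}^{\infty} (1-a)^k = \frac{1}{a}

E[D] = E[N] \cdot E\!\left[\frac{-\log(1-x)}{s_{max}}\right] = \frac{1}{a} \cdot \frac{1}{s_{max}} = \frac{1}{a\,s_{max}}

which is the mean free path of a uniform medium with density a*smax. (The step lengths are independent of the acceptance tests here, so the expectation factors this way; this is Wald's identity.)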
2) We can extend this result to a two-density case fairly readily. Let the density of the medium be a*smax from d=0 to d=d0, and b*smax for d>d0.
Clearly the first segment (from 0 to d0) will have the right density of samples, because it is just the uniform case.
We can ensure the right density in the second segment by gathering up all the samples that make it past d0, restarting them at d0, and computing new scattering events using the uniform density method.
However, we would not need to gather up the samples and restart them (compute new scattering distances) if the distribution of samples that make it past d0 in the first step is the same as it would be after restarting at d0. Astonishingly, this is the case, regardless of where the samples start from. This remarkable fact is due to the self-similarity of the exponential falloff function: if we shift and scale the exponential function, it remains the same:
e^(-d) = e^(-d0) * e^(-(d - d0))
3) Since d0 is arbitrary, we can make it as small as we want, and inductively stack more intervals behind it, making an arbitrary density.
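Stated in probability terms (my restatement of Dave's observation), this self-similarity is the memorylessness of the exponential distribution: conditioned on surviving past d0, the remaining distance has the same distribution as a fresh sample:

P(D > d \mid D > d_0) = \frac{e^{-s_{max} d}}{e^{-s_{max} d_0}} = e^{-s_{max}(d - d_0)} = P(D > d - d_0)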
~Dave Cline
clinedav@gmail.com
I know this is an old post, but great explanation Dave.
The technique you describe is much older, originating from neutron tracking techniques in nuclear physics. It is called Woodcock tracking.
Check out, for instance, this paper:
http://nis-lab.is.s.u-tokyo.ac.jp/~egaku/sigasia10/abstsigasia10.html
It contains an explanation of the technique and a kD-tree-optimized version of it, along with a proof of unbiasedness, and of course references to the relevant previous work.
Thanks Oskar Elek! I was unaware of that 2010 paper and it is very good.