SimplePic app

Friday, November 21, 2014

And a mistake in the original ray tracing paper

More fun trivia on the original ray tracing paper.  Whitted put in attenuation with distance along the reflection rays.


This is a very understandable mistake.  First, the fact that radiance doesn't fall off as distance squared is confusing when you first hit it in a graphics class, and there were no graphics classes with ideal specular reflection then; Whitted was inventing it!  Second, I think the picture with the bug probably looks better than it would without: the fading checkerboard in the specular reflection looks kind of like a brushed metal effect.  So it's a good hack!
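For the record, here is the standard argument (my summary, not notation from the paper): radiance is invariant along a ray in a vacuum, while the inverse-square falloff applies to irradiance from a point source:

$$
L(\mathbf{x}, \vec{\omega}) \;=\; L(\mathbf{x} + t\vec{\omega},\; \vec{\omega})
\qquad \text{versus} \qquad
E \;=\; \frac{I \cos\theta}{d^2}.
$$

The surface you see in a mirror doesn't get dimmer with distance; it just subtends a smaller solid angle, so attenuating radiance along reflection rays adds a falloff that isn't physically there.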
Whitted's famous ray tracing image.  From Wikipedia.
There are two lessons to draw from this.  First, when bugs look good, think about how to make them into a technique.  Second, when you make a mistake in print, don't worry about it: if somebody is pointing it out decades later, you probably wrote one of the best papers of all time.  And if the paper is not great, it will disappear and nobody will notice the mistake!

Thursday, November 20, 2014

Another fun tidbit from the original ray tracing paper

The paper also used bounding volume hierarchies (built by hand).    It's amazing how well this paper stands up after so long.
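Here is a minimal sketch of the idea in Python (my illustration; Whitted grouped objects and chose bounding spheres by hand rather than generating them in code, and the little scene below is made up):

```python
import math
from dataclasses import dataclass, field

@dataclass
class Sphere:
    center: tuple   # (x, y, z)
    radius: float

def hits_sphere(s, o, d):
    """True if ray o + t*d (t >= 0, d unit length) intersects sphere s."""
    oc = tuple(o[i] - s.center[i] for i in range(3))
    b = sum(oc[i] * d[i] for i in range(3))
    c = sum(oc[i] * oc[i] for i in range(3)) - s.radius * s.radius
    disc = b * b - c
    if disc < 0:
        return False
    return -b - math.sqrt(disc) >= 0 or -b + math.sqrt(disc) >= 0

@dataclass
class BVHNode:
    bound: Sphere                                  # hand-chosen bounding sphere
    children: list = field(default_factory=list)   # inner node: child BVHNodes
    objects: list = field(default_factory=list)    # leaf: actual primitives

def traverse(node, o, d, out):
    """Collect primitives whose enclosing bounds the ray touches."""
    if not hits_sphere(node.bound, o, d):
        return                # one test culls the whole subtree
    out.extend(obj for obj in node.objects if hits_sphere(obj, o, d))
    for child in node.children:
        traverse(child, o, d, out)

# "Built by hand": the modeler picks the grouping and the bounds.
scene = BVHNode(
    bound=Sphere((0, 0, -10), 6.0),
    children=[
        BVHNode(bound=Sphere((-2, 0, -10), 2.1),
                objects=[Sphere((-2, 0, -10), 2.0)]),
        BVHNode(bound=Sphere((2, 0, -10), 1.6),
                objects=[Sphere((2, 0, -10), 1.5)]),
    ],
)

hits = []
traverse(scene, (0, 0, 0), (0, 0, -1), hits)
print(len(hits), "candidate object(s)")
```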


Cloud ray tracing in 2050

My recent reread of Turner Whitted's paper from 35 years ago made me think about what kind of ray tracing one will be able to do 35 years from now.  (I am likely to barely see that.)

First, how many rays can I send now?  Of the fast ray tracers out there I am most familiar with OptiX from NVIDIA running on their hardware.  Ideally you go for their VCA Titan-based setup.  My read of their speed numbers is that for "hard" ray sets (incoherent path sets) on real models, a single VCA (8 Titans) can get over one billion rays per second (!).

Now, how many pixels do I have?  The new computer I covet has about 15 million pixels.  So the VCA machine and OptiX ought to give me around 100 rays per pixel per second even on that crazy hi-res screen.  So at 30fps I should be able to get ray tracing with shadows and specular inter-reflections (so Whitted-style ray tracing) at one sample per pixel.  And at that pixel density I bet it looks pretty good.  (OptiX guys, I want to see that!)

How would it be for path-traced previews with diffuse inter-reflection?  100 rays per pixel per second probably translates to around 10 samples (viewing rays) per pixel per second.  That is probably pretty good for design previews, so I expect this to have impact in design now, but it's marginal, which is why you might buy a server and not a laptop to design cars and the like.  The arithmetic is sketched below.
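A back-of-the-envelope version of the numbers above (the ~10 rays per path conversion is implicit in the paragraph above, not a measurement):

```python
# Back-of-the-envelope ray budget (round numbers throughout).
rays_per_sec = 1.0e9      # one VCA (8 Titans) on incoherent rays
pixels = 15.0e6           # the hi-res screen I covet
fps = 30

rays_per_pixel_per_sec = rays_per_sec / pixels            # ~67, call it ~100
rays_per_pixel_per_frame = rays_per_pixel_per_sec / fps   # ~2

# A Whitted-style sample is a few rays (eye + shadow + reflection),
# so that is roughly one sample per pixel per frame at 30fps.

rays_per_path = 10        # assumption: a diffuse path costs ~10 rays
paths_per_pixel_per_sec = rays_per_pixel_per_sec / rays_per_path   # ~7-10

print(rays_per_pixel_per_sec, rays_per_pixel_per_frame, paths_per_pixel_per_sec)
```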

In 35 years what is the power we should have available?  Extending Moore's law naively is dangerous due to all the quantum limits we hear about, but for Monte Carlo ray tracing in a progressive-rendering design-preview scenario, there is no reason you couldn't use as many computers as you could afford, and network traffic wouldn't kill you.

The overall Moore's law of performance (yes, that is not what the real Moore's Law is, so we're being informal) is that historically the performance of a single chip has doubled every 18 months or so.  The number of pixels has only gone up about a factor of 20 in the last 35 years, and there are limits of human resolution there, but let's say it goes up another factor of 10.  If the performance Moore's law continues, due to changes in computers or, more likely, lots of processors on the cloud for things like Monte Carlo ray tracing, then we'll see about 24 doublings, which is a factor of about 16 million.   To check that: Whitted did about a quarter million pixels per hour, so let's call that a million rays an hour, which is about 300 rays per second.   Our 16 million figure would predict about 4.8 giga-rays per second today, which for Whitted's scenes I imagine people can easily get now.   So in 2050, per pixel (assuming 150 million pixels), we should have ray tracing (on the cloud?  I think yes) of 10-20 million paths per second per pixel.
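Spelling the extrapolation out (all round numbers, and the same ~10 rays per path guess as before):

```python
# 35 years of 18-month doublings.
doublings = 35 * 12 / 18            # ~23.3; round it to 24
growth = 2 ** 24                    # ~16.8 million

# Sanity check against 1979: ~1e6 rays/hour is ~300 rays/sec.
whitted_rays_per_sec = 1.0e6 / 3600               # ~278
predicted_today = whitted_rays_per_sec * growth   # ~4.7e9 rays/sec -- matches
                                                  # the ~1e9+ we can buy now

# Project today's 1e9 rays/sec forward to 2050.
rays_2050 = 1.0e9 * growth                        # ~1.7e16 rays/sec
pixels_2050 = 150.0e6
rays_per_pixel = rays_2050 / pixels_2050          # ~1e8 rays/sec per pixel
paths_per_pixel = rays_per_pixel / 10             # ~10-20 million paths/sec/pixel
print(predicted_today, rays_per_pixel, paths_per_pixel)
```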

What does this all mean?  I think it means unstratified Monte Carlo (path tracing, Metropolis, bidirectional, whatever) will be increasingly attractive.  That is in fact true for any Monte Carlo algorithm in graphics or non-graphics markets.  Server vendors: make good tools for distributed random number generation and I bet you will increase the server market!  Researchers: increase your ray budget and see what algorithms that suggests.  Government funding agencies: move some money from hardware purchase programs to server-fee grants (maybe that is happening: I am not in the academic grant ecosystem at present).
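On the distributed random numbers point, here is one way to hand out independent streams to cloud workers with no coordination (a sketch; the hash-based seeding scheme is my illustration, not any vendor's tool):

```python
import hashlib
import random

def stream_for(job_id: str, pixel: int) -> random.Random:
    """A deterministic, decorrelated random stream per (job, pixel).

    Hashing the identifiers means every worker can reconstruct exactly
    the samples it owns without talking to any other machine.
    """
    h = hashlib.sha256(f"{job_id}:{pixel}".encode()).digest()
    return random.Random(int.from_bytes(h[:8], "little"))

# A worker renders its slice of pixels reproducibly:
rng = stream_for("render-42", pixel=123456)
u1, u2 = rng.random(), rng.random()
print(u1, u2)
```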

Wednesday, November 19, 2014

Path tracing preview idea

In reviewing papers from the 1980s I thought again of my first paper submission in '87 or so.  It was a bad idea and rightly got nixed.  But I think it is now a good idea.  Anyone who wants to pursue it, please go for it and put me in the acknowledgements: I am working on 2D for the next couple of years!

A Monte Carlo ray tracer samples a high-dimensional space.  As Cook pointed out in his classic paper, it makes practical sense to divide the space into 1D and 2D subspaces and parametrize them to [0,1] for conceptual simplicity.  For example:

  1. camera lens
  2. specular reflection direction
  3. diffuse reflection direction (could be combined with #2, but in practice don't for glossy layered materials)
  4. light sampling location/direction
  5. pixel area
  6. time (motion blur)
Each of these can not only be parametrized to [0,1]^2, it can be made uniform by looking at the random number seeds that feed into it.  This yields:
  1. (u1, u2)
  2. (u3, u4)
  3. (u5, u6)
  4. (u7, u8)
  5. (u9, u10)
  6. u11
You can just stratify each of these 2D sets to lower noise, and that can help a lot.  For example, generate the first pair with a stratified pattern (see Andrew Kensler's paper at Pixar); a minimal sketch of plain jittering is below.
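(Kensler's correlated multi-jittered patterns are better than this; the sketch below is the simplest jittered version, just to fix the idea.)

```python
import random

def jittered_2d(n, rng=random):
    """n*n stratified samples: one uniform point per cell of an n-by-n grid."""
    return [((i + rng.random()) / n, (j + rng.random()) / n)
            for i in range(n) for j in range(n)]

samples = jittered_2d(4)   # 16 stratified (u1, u2) pairs on [0,1)^2
```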


All sorts of fun and productive games are played to improve the multidimensional properties of the sample sets across those dimensions, with QMC being one particular approach to that, and Cook suggesting magic squares.  The approach in that bad paper was to pretend there is no correlation between the pairs of dimensions (sketched in code after the list below).   So use:

  1. (u1, u2)
  2. (u1, u2)
  3. (u1, u2)
  4. (u1, u2)
  5. (u1, u2)
  6. u1
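In code, the collapse looks something like this (toy_radiance is a made-up stand-in for a real path tracer's per-sample shade function):

```python
import math

def toy_radiance(pixel, lens, specular, diffuse, light, subpixel, time):
    # Stand-in integrand: any function of the sample coordinates will do
    # for illustrating the idea.
    u, v = light
    return 0.5 + 0.5 * math.sin(10.0 * u * (pixel + 1)) * v

def collapsed_sample(pixel, u1, u2):
    # Every 2D decision gets the SAME pair, so the pixel estimate becomes
    # a deterministic function of just (u1, u2).
    uv = (u1, u2)
    return toy_radiance(pixel, lens=uv, specular=uv, diffuse=uv,
                        light=uv, subpixel=uv, time=u1)

print(collapsed_sample(0, 0.3, 0.7))
```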
Now we have a 2D sampling problem and we can use Warnock-style adaptive sampling (see the Wikipedia article on the Warnock algorithm).

The samples can be regular or jittered-- just subdivide when the 4-connected or 8-connected neighbors don't match.  Start with 100 or so samples per pixel.
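Here is the simplest recursive variant in code (the paragraph above describes comparing neighbors on an initial grid of ~100 samples; the corner-test version below is my simplification, using collapsed_sample() from the previous sketch):

```python
def adaptive(f, x0, y0, x1, y1, depth=0, tol=0.05, max_depth=6):
    """Warnock-style refinement of f over a rectangle of the (u1, u2) square:
    subdivide where the corner values disagree, average where they don't."""
    corners = [f(x, y) for (x, y) in ((x0, y0), (x1, y0), (x0, y1), (x1, y1))]
    if depth >= max_depth or max(corners) - min(corners) < tol:
        return sum(corners) / 4.0
    xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    kids = [(x0, y0, xm, ym), (xm, y0, x1, ym),
            (x0, ym, xm, y1), (xm, ym, x1, y1)]
    return sum(adaptive(f, *k, depth + 1, tol, max_depth) for k in kids) / 4.0

# For pixel p, f is the collapsed estimator from the previous sketch:
# estimate = adaptive(lambda u1, u2: collapsed_sample(p, u1, u2), 0, 0, 1, 1)
print(adaptive(lambda u, v: float(abs(u - v) > 0.5), 0.0, 0.0, 1.0, 1.0))
```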

Yes, this approach has artifacts.  But Monte Carlo is often used now for a noisy preview, where the designer hits "go" on a final render once the setup is right.  The approach above gives structured artifacts instead of noise, and the key question for preview (a great area to research now) is how to get the best image for the designer as the image progresses.

Note this will be best for one bounce of diffuse only, but that is often what one wants, and replacing multi-bounce with an ambient term is itself just another artifact.  The question for preview is how best to trade off artifacts for time.  Doing it unbiased just makes the artifact noise.

Let me know if anybody tries it.  If you have a Monte Carlo ray tracer it's a pretty quick project.

Tuesday, November 18, 2014

Choosing research topics

A graduate student recently asked me for some advice on research and I decided to put some of the advice down here.  One of the hardest things to do in research is figure out what to work on.  I reserve other topics like how to manage collaborations for another time.

What are your career goals?

To decide what to work on, first decide what goal any given piece of work leads to.  I roughly divide career paths for researchers into these:
  1. Academic researcher
  2. Academic teacher
  3. Industrial researcher
  4. Industrial advanced development
  5. Start-up company
I am still trying to figure out what I most want to do on that list, and it is surely that way for most people.  So projects that further more than one goal, or a mix of projects, are fine.


What are you good at?

This should be graded on a curve.  Obvious candidates are:
  1. math
  2. systems programming
  3. prototyping programming and toolkit use
  4. collaborating with people from other disciplines (biology, econ, archeology)

What do you like?

This is visceral.  When you have to make code faster, is it fun or a pain?   Do you like making 2D pictures?  Do you like wiring novel hardware together?


Where is the topic in the field ecosystem?

How hot is the topic, and if hot, will it be played out soon?   If lots of people are already working on it you may be too late.  But this is impossible to predict adequately.  Something nobody is working on, or worse, something out of fashion, is hard to get published, but find a minor venue or tech report it and it will get rewarded, sometimes a lot, later.

How does it fit into your CV portfolio?

This is the most important point in this post.  Look at your career goals and figure out what kind of CV you need.    Like stocks and bonds in the economic world, each research project is a risk (or it wouldn't be research!) and you want a long term goal reached by a mix of projects.  Also ask yourself how much investment is needed before you know whether a particular project has legs and will result in something good.


What is my advantage on this project?

If the project plays to your strengths then that is a good advantage (see "What are you good at?" above).  But access to unique data, unique collaborators, or unique equipment is a lot easier than being smarter than other people.  Another advantage can be to just recognize something is a problem before other people do.  This is not easy, but be on the lookout for rough spots in doing your work.  In the 1980s most rendering researchers produced high dynamic range images, and none of us were too sure what to do with them, so we scaled them.  Tumblin and Rushmeier realized before anybody else that this wasn't an annoyance: it was a research opportunity.    That work is also an example of how seminal work can be harder to publish but does get rewarded in the long run, and tech reporting it is a good idea so you get unambiguous credit for being first!

How hard is it?

Will it take three years full time (like developing a new OS or whatever) or is it a weekend project?  If it's a loser, how long before I have a test that tells me it's a loser?


What is the time scale on this project?


Is the project something that will be used in industry tomorrow, in a year, 3 years, 10 years?  There should be a mix of these in your portfolio as well, but figure out which appeals to you personally and lean in that direction.

Now let's put this into practice.  


Suppose you have a research topic you are considering:
  1. Does it help my career goals? 
  2. Am I good at it?
  3. Will it increase my skills?
  4. Will it be fun?
  5. Is the topic pre-fashion, in-fashion, or stale?
  6. What is my edge in this project?
  7. Where is it in the risk/reward spectrum?
  8. How long before I can abandon this project?
  9. Is somebody paying me for this?
  10. Is this project filling out a sparse part of my portfolio?
If you have a project firing on all of these then yes, do it.     The ones I would attend to most are #1 and #10.  For #1, look at the CVs of people who just got jobs that you want and see if you are on pace.  Things may be different in four years, but it's as good a heuristic as we can get.  #10 keeps your options more open and exposes you to less risk.

Suppose you DON'T have a project in mind and that is your problem.  This is harder, but these tools can still help.  I would start with #6.  What is your edge?  Then look for open problems.  If you like games, look for what sucks in games and see if you bring anything to the table there.  If your lab has a great VR headset, think about whether there is anything there.  If you happen to know a lot about cars, see if there is something to do with cars.  But always avoid the multi-year implementation-heavy projects with no clear payoff unless the details of that further your career goals.  For example, anyone who has written a full-on renderer will never lack for interesting job offers in the film industry.   If you are really stuck, just embrace Moore's law and project resources out 20 years.  Yes, it will offend most reviewers, but it will be interesting.  Alternatively, go to a department that interests you, go to the smartest person there, and see if they have interesting projects they want CS help with.

 

Publishing the projects

No matter what you do, try to publish the project somewhere.  And put it online.  Tech report it early.   If it isn't cite-able, it doesn't exist.  And if you are bad at writing, start a blog or practice some other way.   Do listen to your reviewers, but not like it is gospel.  Reviewing is a noisy measurement and you should view it as such.  And if your work has long-term impact, the tech report will get read and cited.  If not, it will disappear as it should.  The short-term hills and valleys will be annoying, but keep in mind that reviewing is hard and unpaid and thus rarely done well-- it's not personal.


But WAIT.  I want to do GREAT work.


Great work is sometimes done in grad school.    But it's high-risk high-reward, so it's not wise to do it unless you aren't too particular about career goals.   The good news in CS is that most industry jobs are implicitly on the cutting edge, so it's always a debate whether industry or academia is more "real" research (I think it's a tie, but that's opinion).   But be aware of the trade-off you are making.  I won't say it can't be done (Eric Veach immediately comes to mind), but I think it's better to develop a diversified portfolio and have at most one high-risk high-reward project as a side project.  In industrial research you can get away with such research (it's exactly why industrial research exists), and after tenure you can do it, but keep in mind your career lasts 40-50 years, so taking extreme chances in the first 5-10 is usually unwise.

Sunday, November 16, 2014

A fun tidbit from the original ray tracing paper

We're about 35 years past Whitted's classic ray tracing paper.  It's available here.  One thing that went by people at the time is that he proposed randomized glossy reflection in it.
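In modern terms the idea is to jitter the mirror direction.  A minimal sketch (my construction, not the formulation from the paper):

```python
import math
import random

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def glossy_reflect(d, n, fuzz, rng=random):
    """Mirror direction of unit vector d about unit normal n, jittered by a
    random offset of radius fuzz (fuzz=0 gives ideal specular reflection)."""
    dn = sum(di * ni for di, ni in zip(d, n))
    mirror = tuple(di - 2.0 * dn * ni for di, ni in zip(d, n))
    # Rejection-sample a point in the unit ball, scale it by the gloss radius.
    # (A real renderer would re-sample if the result went below the surface.)
    while True:
        p = tuple(2.0 * rng.random() - 1.0 for _ in range(3))
        if sum(c * c for c in p) < 1.0:
            break
    return normalize(tuple(m + fuzz * c for m, c in zip(mirror, p)))

d = normalize((1.0, -1.0, 0.0))                    # incoming direction
print(glossy_reflect(d, (0.0, 1.0, 0.0), fuzz=0.2))
```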


Always be on the lookout for things current authors say are too slow!

Note that Rob Cook, always a careful scholar, credits Whitted for this idea in his classic 1984 paper.