Wednesday, February 17, 2016

Ray-Object Intersection Repository at RTR

Until Eric H's comment on a previous post, I was somehow unaware of this page.   It really rocks and will be my first stop when I look at triangles again (next week?).   Here is the page.

Sunday, February 14, 2016

New simple ray-box test from Andrew Kensler

Andrew Kensler and Thiago Ize have been helping me tune up my ray-box code, trying to find a sweet spot of fast and simple.   I've updated a few test cases here and added the new method Andrew sent me.   Andrew's new code is a super-simple version that on some compilers/machines is very competitive with the Amy Williams method, and you don't have to muck up your ray class.    It's not as fast under clang on my Mac (though hard-coding the swap helped a little), but it's so simple it will be my go-to method for the time being.

If there are NaNs the compares will fail and it returns false.   It handles the inf cases right.    Very clean!
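The code itself appeared as an image in the original post.   A sketch of a slab test in that style (one divide, two multiplies, and a swap per axis; the `vec3`, `ray`, and `aabb` types here are stand-ins for my classes, not Andrew's exact code):

```cpp
#include <algorithm>  // std::swap

struct vec3 { float e[3]; float operator[](int i) const { return e[i]; } };
struct ray  { vec3 origin, direction; };

struct aabb {
    vec3 _min, _max;
    // Slab test: clip the ray's [tmin, tmax] interval against each
    // pair of axis-aligned planes.  A comparison against NaN is false,
    // so degenerate intervals fall out as a miss; dividing by a zero
    // direction component yields +/-inf, which the interval math
    // handles correctly.
    bool hit(const ray& r, float tmin, float tmax) const {
        for (int a = 0; a < 3; a++) {
            float invD = 1.0f / r.direction[a];
            float t0 = (_min[a] - r.origin[a]) * invD;
            float t1 = (_max[a] - r.origin[a]) * invD;
            if (invD < 0.0f)
                std::swap(t0, t1);           // keep t0 as the near plane
            tmin = t0 > tmin ? t0 : tmin;
            tmax = t1 < tmax ? t1 : tmax;
            if (tmax <= tmin)
                return false;                // interval emptied: no hit
        }
        return true;
    }
};
```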

Saturday, February 6, 2016

ray-box intersection and fmin

In working on the code for the next mini-book, I have followed the philosophy of using the simplest code and trusting that it is good enough.   For bounding volume hierarchies I tried making a simpler ray-box test than the one people usually use.   I was especially impressed with this post by Tavian Barnes.

This is the simplest ray-box test I was able to write:
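(The code was shown as an image in the original post.   The gist, reconstructed with hypothetical `vec3`/`ray`/`aabb` types, was a bare fmin/fmax slab test with no precomputed state on the ray:)

```cpp
#include <cmath>  // std::fmin, std::fmax

struct vec3 { float e[3]; float operator[](int i) const { return e[i]; } };
struct ray  { vec3 origin, direction; };

struct aabb {
    vec3 _min, _max;
    // The simplest version: no stored inverses or sign bits, just
    // fmin/fmax to order the two slab intersections on each axis.
    bool hit(const ray& r, float tmin, float tmax) const {
        for (int a = 0; a < 3; a++) {
            float ta = (_min[a] - r.origin[a]) / r.direction[a];
            float tb = (_max[a] - r.origin[a]) / r.direction[a];
            tmin = std::fmax(std::fmin(ta, tb), tmin);
            tmax = std::fmin(std::fmax(ta, tb), tmax);
            if (tmax <= tmin)
                return false;
        }
        return true;
    }
};
```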

The overall code seemed pretty slow, so I tried some older, more complicated code that stores the signs of the ray propagation directions as well as the inverses of the ray direction components:
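(That code was also an image in the original post.   The idea, following the Williams et al. test, is that the ray carries precomputed inverse directions and their sign bits, so the box test needs no divides, swaps, or min/max calls.   A sketch, with hypothetical type and member names:)

```cpp
struct vec3 { float e[3]; float operator[](int i) const { return e[i]; } };

// Ray that precomputes 1/direction and the sign bit of each inverse
// component once, at construction time.
struct ray {
    vec3 origin, direction, inv_direction;
    int sign[3];
    ray(vec3 o, vec3 d) : origin(o), direction(d) {
        for (int a = 0; a < 3; a++) {
            inv_direction.e[a] = 1.0f / d[a];
            sign[a] = inv_direction[a] < 0.0f;
        }
    }
};

// Box stores bounds[0] = min corner, bounds[1] = max corner, so the
// sign bit selects the near plane directly -- no swap needed.
struct box {
    vec3 bounds[2];
    bool hit(const ray& r, float t0, float t1) const {
        float tmin  = (bounds[r.sign[0]][0]     - r.origin[0]) * r.inv_direction[0];
        float tmax  = (bounds[1 - r.sign[0]][0] - r.origin[0]) * r.inv_direction[0];
        float tymin = (bounds[r.sign[1]][1]     - r.origin[1]) * r.inv_direction[1];
        float tymax = (bounds[1 - r.sign[1]][1] - r.origin[1]) * r.inv_direction[1];
        if (tmin > tymax || tymin > tmax) return false;
        if (tymin > tmin) tmin = tymin;
        if (tymax < tmax) tmax = tymax;
        float tzmin = (bounds[r.sign[2]][2]     - r.origin[2]) * r.inv_direction[2];
        float tzmax = (bounds[1 - r.sign[2]][2] - r.origin[2]) * r.inv_direction[2];
        if (tmin > tzmax || tzmin > tmax) return false;
        if (tzmin > tmin) tmin = tzmin;
        if (tzmax < tmax) tmax = tzmax;
        return tmin < t1 && tmax > t0;
    }
};
```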

This was a lot faster in the overall ray tracer, so I started digging into it a bit.   I wrote an isolation test that sends lots of rays from near the corner of a box in random directions (so a little less than 1/8 should hit).    It occurred to me that fmax and fmin might be slower due to their robust NaN handling, and that does appear to be the case.    Here is the test I used on my old MacBook Pro compiled with -O.   Perhaps it gives a little too much amortization advantage to the long code because it runs 100 trials for each ray, but maybe not (think big BVH trees).


The runtimes on my machine are:

traditional code: 4.3 s
using mymin/mymax: 5.4 s
using fmin/fmax: 6.9 s
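The mymin/mymax in the middle row are the plain ternaries sketched below (names are mine; my exact code may have differed cosmetically).   Unlike fmin/fmax they make no promises about NaN arguments, which is presumably what lets the compiler emit a bare compare instead of NaN-checking code:

```cpp
// Plain ternary min/max: no NaN guarantees, so a compiler can emit a
// single comparison (e.g. minss/maxss on x86) rather than the
// NaN-propagating logic that fmin/fmax require.
inline float mymin(float a, float b) { return a < b ? a : b; }
inline float mymax(float a, float b) { return a > b ? a : b; }
```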

As Tavian B points out, the properties of fmax and NaN can be used to a programmer's advantage, and vectorization might make the system max functions a winner.   But the gaps between all three versions were bigger than I expected.

Here is my code.   If you try it on other compilers/machines, please let us know what you see.

Monday, February 1, 2016

Motion blur in a ray tracer

I implemented the first change in my ray tracer for my next mini-book.    I had forgotten how easy motion blur is to add to a ray tracer, for linear motion anyway.    The ray gets a time it exists at, the camera generates each ray at a random time, and the object intersection takes a time in.
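Concretely, the ray class just grows a time member, something like this (class layout and member names are a sketch, not necessarily my exact code):

```cpp
struct vec3 { float e[3]; };

// A ray now remembers the instant it exists at.  The camera stamps
// each ray with a random time in the shutter interval, and the
// intersection code reads it back through time().
class ray {
public:
    ray() {}
    ray(const vec3& a, const vec3& b, float ti) : A(a), B(b), _time(ti) {}
    vec3 origin() const    { return A; }
    vec3 direction() const { return B; }
    float time() const     { return _time; }
private:
    vec3 A, B;
    float _time;
};
```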

For a sphere whose center moves linearly, the change is just to make the center a function of time:
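(The code was an image in the original post.   A sketch of a sphere whose center lerps between two endpoints, with hypothetical names; the hit routine just calls center(r.time()) wherever it used to read a fixed center:)

```cpp
struct vec3 {
    float e[3];
    vec3 operator+(const vec3& v) const { return {{e[0]+v.e[0], e[1]+v.e[1], e[2]+v.e[2]}}; }
    vec3 operator-(const vec3& v) const { return {{e[0]-v.e[0], e[1]-v.e[1], e[2]-v.e[2]}}; }
    vec3 operator*(float t) const      { return {{e[0]*t, e[1]*t, e[2]*t}}; }
};

// Sphere whose center moves linearly from center0 at time0 to
// center1 at time1; times outside [time0, time1] extrapolate.
struct moving_sphere {
    vec3 center0, center1;
    float time0, time1, radius;
    vec3 center(float time) const {
        return center0 + (center1 - center0) * ((time - time0) / (time1 - time0));
    }
};
```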

To test it I had the diffuse spheres undergo a random vertical motion.   This was a rare time my code worked on the first run (after many typos on the way to compilation).


Sunday, January 31, 2016

I've decided to go ahead with a part 2 of the mini-book

The mini-book Ray Tracing in One Weekend got more interest than I expected.   Apparently ray tracing really is hot.   The book just gets a brute-force path tracer with spheres up, so I decided it would be worth making a part 2 (to be written!).   I am seeking comments on what is missing from the list.   Here is my first cut:

Chapter 1: Motion Blur
Chapter 2: A Bounding Volume Hierarchy (BVH)
Chapter 3: Solid Texture Mapping
Chapter 4: Image Texture Mapping
Chapter 5: Volumes
Chapter 6: A Polished Material
Chapter 7: Environment Maps
Chapter 8: Shadow Rays
Chapter 9: Triangle Meshes


I would consider that a full-featured ray tracer.   That is not to say it would have all features!

Friday, January 29, 2016

Some of Andrew Glassner's thoughts on publishing

Andrew Glassner left a reply on an earlier post that seemed too interesting to let sit in a comments section, so here it is in its entirety.   I didn't ask Andrew first, so keep in mind that he wrote this as a comment (not that I would change anything in an edit!).

Andrew's comment:

I like what you're doing here. To me, the interesting thing is that you're trying to avoid extra work that's been pushed on authors with the decline of the traditional publication model. That decline has been hailed as "democratizing" publication, which is true in some ways, but mostly not.

Here's what I mean. In the 1960's (as I understand it), to publish a book with math and figures, you'd type the book on your typewriter (or have someone do it for you from your longhand notes). You'd leave blank spaces for equations and figures, which you'd draw in by hand. The publisher would then often hire someone to redo your figures, and they'd lay out and typeset your book to make it look great.

Then TeX and LaTeX came along, and authors realized they could make their own pages. And they did. Before long, publishers required this, because why should they pay someone when they can get the author to do it for free? So now the author had to learn LaTeX and go through the sweat of typesetting math. But the publisher would still often hire an artist to redraw the figures. My first few books were developed this way.

Then drawing tools got better, and more authors started using them, and again the publishers chose to make that all but mandatory (if you swore you couldn't make figures, they would hire an artist, but I know that at least sometimes you gave up some royalties in return).

Then indexing packages became more widespread. And so, rather than hire a professional indexer (and this is a much harder job than you might imagine, if you haven't done it yourself), that too became the author's job.

And so it went, with authors now essentially required to produce beautiful, camera-ready works, with headers and footers and footnotes and properly formatted bibliographies and on and on.

The author's job went from "write a manuscript, indicate the math, provide sketches for the figures, and let the publisher take it from there," to the far more demanding "produce the entire book in every detail."

Since most people can get the tools required, this is indeed democratizing, in that most anyone can make a professional-looking book. On the other hand, it means the author must have enough leisure (or paid) time to learn the tools, and then devote significant more time to produce all the content to a professional standard. This is pretty much the opposite of democratizing - it says only people who have a lot of available time can afford to produce a book with equations and figures that holds up to modern standards.

You've found some great ways to reduce this burden. Well done! I've taken a similar attitude in my own works, when I can. I'm now very happy to illustrate my work with hand-drawn pictures, as long as they're clear and do the job. I wish I could get away from fiddling with LaTeX, but programs like the one at http://webdemo.myscript.com/#/demo/equation are a big help.

It's interesting to me that we've gone from the author's job being almost completely about writing words and merely indicating the rest, to the author becoming a self-contained and fully staffed publishing house (with typesetter, artist, indexer, etc.), to now where you're shedding some of those tasks (e.g., pagination) and automating others (e.g., screenshots of code with automatic coloring).

What you're doing really does make it easier to write a book (not merely cheaper for a publisher to print it), and I say "Hooray!" The easier it becomes to produce quality materials, the better off we all become.

Thursday, January 28, 2016

Figures in my new mini-book

In my new mini-book I used a new (to me) fun process.  The process, which I am referring to as agile writing, is based on the best low-pain tools I could find.   It would not be unfair to say I wrote a minimum viable book (MVB).   The tools are mainly google docs, limnu drawings, and screen captures of vim with auto-coloring.   The code was super easy because vim auto-coloring works really well, IMO:

Note that google docs has a nice crop interface so you don't need to screen capture precisely.

The white-boarding in limnu took a little getting used to, but works very well.   Here's an example of a screenshot from limnu pasted into google docs:

I asked Dave Hart and Grue Debry for tips on how to do figures, and what is funny is that their advice mainly boiled down to "do what you do on a real whiteboard".   That advice may seem content-free but it's not; I had to change my mentality to "fast and loose" and not be too uptight.    I know this from using real whiteboards, but somehow I wasn't in that mode with a tablet.   Once I just pretended it was a whiteboard and not a traditional computer drawing program, I got much better results.   They wrote up the advice at their site.