tag:blogger.com,1999:blog-83502570637731446002019-06-18T02:00:55.390-07:00Pete Shirley's Graphics BlogPeter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.comBlogger237125tag:blogger.com,1999:blog-8350257063773144600.post-23475120989493435512019-06-04T08:41:00.000-07:002019-06-04T08:41:17.975-07:00How bright is that screen? And how good a light is it?I have always wanted a "fishtank VR" fake window that is bright enough to light a room. Every time a new brighter screen comes out I want to know whether it is bright enough.<br /><br />Apple <a href="https://www.macrumors.com/2019/06/03/apple-unveils-32-inch-6k-pro-display-xdr/">just announced a boss 6K screen</a>. But what caught my eye was this:<br /><br />"...and can maintain 1,000 nits of full-screen brightness indefinitely."<br /><br />That is very bright compared to most screens (though not all-- the<a href="http://hdr.sim2.it/hdrproducts/hdr47es6mb"> SIM2 sells one that is about 5X</a> that, but it's a small market device at present). How bright is 1000 nits?<br /><br />First, let's get some comparisons. "Nits" is a measure of <a href="https://en.wikipedia.org/wiki/Luminance">luminance</a>, which is an objective approximation to "how bright is that". A good measure if I want a VR window is sky brightness. 
<a href="https://en.wikipedia.org/wiki/Orders_of_magnitude_(luminance)">Wikipedia has an excellent page on this, from which I grab this table</a>:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-H7uX9Su1kI0/XPaOmbxKycI/AAAAAAAACWE/LSpkoTfZsloVromuZTPmE3k24t_WHXOxgCLcBGAs/s1600/Screen%2BShot%2B2019-06-04%2Bat%2B9.30.03%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1070" data-original-width="1554" height="275" src="https://1.bp.blogspot.com/-H7uX9Su1kI0/XPaOmbxKycI/AAAAAAAACWE/LSpkoTfZsloVromuZTPmE3k24t_WHXOxgCLcBGAs/s400/Screen%2BShot%2B2019-06-04%2Bat%2B9.30.03%2BAM.png" width="400" /></a></div><br /><br /><br />Note that the Apple monitor is almost at the cloudy day luminance (and the SIM2 is at the clear sky luminance). So we are very close!<br /><br />Now as a light source one could read by, the *size* of the screen matters. Both the Apple monitor and the SIM2 monitor are about half a square meter in area. How many *lumens* does the screen put out? That is how we measure light bulbs.<br /><br /><a href="https://www.lowes.com/pd/SYLVANIA-100-Watt-EQ-Bright-White-Light-Fixture-CFL-Light-Bulbs-4-Pack/1000074263?cm_mmc=src-_-c-_-prd-_-lit-_-bing-_-lighting-_-new_dsa_lit_183_ceiling_fans_%26_landscape_lighting-_-ceiling%20fans-_-0-_-0&k_clickID=bi_272638719_1309518539938445_81844947436519_dat-2333644710524915:loc-190_c_66460&msclkid=1c73d660812a19e892728a960e765e6c&utm_source=bing&utm_medium=cpc&utm_campaign=NEW_DSA_LIT_183_Ceiling%20Fans%20%26%20Landscape%20Lighting&utm_term=ceiling%20fans&utm_content=LIT_Ceiling%20Fans_DSA">This 100 Watt EQ bulb is 1600 lumens</a>. So that is a decent target for a screen to read by. So how do we convert a screen of a certain area with a luminance to lumens? 
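Before grinding through the algebra, the unit bookkeeping can be sketched as a tiny C++ calculation (a sketch that assumes the screen acts as a diffuse emitter; the 1000 nits and roughly half a square meter are the Apple monitor's numbers from above):

```cpp
// For a diffuse (Lambertian) emitter:
//   luminous exitance (lm/m^2) = pi * luminance (nits, i.e. cd/m^2)
//   total flux (lumens)        = exitance * area (m^2)
double screen_lumens(double luminance_nits, double area_m2) {
    const double kPi = 3.14159265358979323846;
    return kPi * luminance_nits * area_m2;
}
```

For 1000 nits over 0.5 square meters this gives a bit under 1600 lumens, i.e., in the neighborhood of that 100 Watt EQ bulb.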
As graphics people, let's remember that for a diffuse emitter we know something about the total luminous flux E (in lumens) leaving the surface:<br /><br />luminance = luminous flux divided by (Pi times Area)<br /><br />So 1000 = E / (0.5*PI) = E/1.6 (about).<br /><br />So lumens = E = about 1600. That is about a 100 watt bulb. So I think you could read by this Apple screen if it's as close as you would keep a reading lamp. If one of you buys one, let me know if I am right or if I am off by 10X (or whatever), which would not surprise me!<br /><br /><br /><br /><br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com1tag:blogger.com,1999:blog-8350257063773144600.post-42745166445622013852019-03-12T07:57:00.000-07:002019-03-12T07:57:03.320-07:00Making your BVH faster<br />I am no expert in making BVHs fast, so just use this blog post as a starting point. But there are some things you can try if you want to speed things up. All of them involve tradeoffs, so test them rather than assume they are faster! <br /><br /><b>1. Store the inverse ray direction with the ray. </b> The most popular ray-bounding box hit test assumes this (used in both the Intel and NVIDIA ray tracers as far as I know):<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-wnyycQ1gGCo/XIfFb-cSTZI/AAAAAAAACU0/I5mqOq2TGAAtswxCBRrD_xQmbyGWIxmNgCLcBGAs/s1600/Screen%2BShot%2B2019-03-12%2Bat%2B8.42.37%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="343" data-original-width="1374" height="156" src="https://4.bp.blogspot.com/-wnyycQ1gGCo/XIfFb-cSTZI/AAAAAAAACU0/I5mqOq2TGAAtswxCBRrD_xQmbyGWIxmNgCLcBGAs/s640/Screen%2BShot%2B2019-03-12%2Bat%2B8.42.37%2BAM.png" width="640" /></a></div><br />Note that if the ray direction were passed in you would have 6 divides rather than 6 multiplies.<br /><br /><b>2. 
Do an early out in your ray traversal.</b> This is a trick used in many BVHs, but not the one I have in the ray tracing minibook series. Martin Lambers suggested this version to me, which is not only faster but also cleaner code.<br /><br /><img alt="" class="media-image" data-height="476" data-width="618" height="246" src="https://pbs.twimg.com/media/D1bKEwiVYAARNl4.png:large" style="margin-top: 0px;" width="320" /><br /><br /><b>3. Build using the surface area heuristic (SAH). </b> This is a greedy algorithm that minimizes the sum of the areas of the bounding boxes in the level being built. I based mine on the pseudocode in <a href="http://www.cs.utah.edu/~shirley/papers/bvh.pdf">this old paper </a>I did with Ingo Wald and Solomon Boulos. I used simple arrays for the sets and the <a href="https://en.wikipedia.org/wiki/Quicksort#Lomuto_partition_scheme">quicksort from wikipedia</a> for the sort.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-5_4L1ZaN2ZM/XIfIFTy7G7I/AAAAAAAACVA/xfguIrPWbmADZr3l_dbaK0PsvJYufUn2gCLcBGAs/s1600/Screen%2BShot%2B2019-03-12%2Bat%2B8.53.50%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1384" data-original-width="1370" height="640" src="https://4.bp.blogspot.com/-5_4L1ZaN2ZM/XIfIFTy7G7I/AAAAAAAACVA/xfguIrPWbmADZr3l_dbaK0PsvJYufUn2gCLcBGAs/s640/Screen%2BShot%2B2019-03-12%2Bat%2B8.53.50%2BAM.png" width="632" /></a></div><br /><br /><br /><br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com6tag:blogger.com,1999:blog-8350257063773144600.post-71608472969154621642019-02-17T06:20:00.003-08:002019-02-17T06:20:29.123-08:00Lazy person's tone mappingIn a physically-based renderer, your RGB values are not confined to [0,1] and you need to deal with that somehow.<br /><br /><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">The simplest thing is to 
clamp them to zero to one. In my own C++ code:</span></span><br /><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">inline vec3 vec3::clamp() {<br /> if (e[0] < real(0)) e[0] = 0;<br /> if (e[1] < real(0)) e[1] = 0;<br /> if (e[2] < real(0)) e[2] = 0;<br /> if (e[0] > real(1)) e[0] = 1;<br /> if (e[1] > real(1)) e[1] = 1;<br /> if (e[2] > real(1)) e[2] = 1;<br /> return *this;<br /> }</span></span><br /><br />A more pleasing result can probably be had by applying a "tone mapping" algorithm. The easiest is probably Eric Reinhard's "L/(1+L)" operator from Equation 3 of <a href="http://www.cs.utah.edu/~reinhard/cdrom/">this paper</a>.<br /><br />Here is my implementation of it. You still need to clamp because of highly saturated colors, and purists won't like my luminance formula (1/3, 1/3, 1/3), but never listen to purists :)<br /><br /><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">void reinhard_tone_map(real mid_grey = real(0.2)) {</span></span><br /><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">// using equal weights for luminance. 
This is more robust than standard NTSC luminance</span></span><br /><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">// The Reinhard tone mapper first maps the value we want to be "mid gray" to 0.2,<br />// and then applies the L/(1+L) formula, which rolls off values above 1.0 gracefully.<br /> real scale = (real(0.2)/mid_grey);<br /> for (int i = 0; i < nx*ny; i++) {<br /> vec3 temp = scale*vdata[i];<br /> real L = real(1.0/3.0)*(temp[0] + temp[1] + temp[2]);<br /> real multiplier = real(1)/(real(1) + L);<br /> temp *= multiplier;<br /> temp.clamp();<br /> vdata[i] = temp;<br /> }</span></span><br /><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">}</span></span><br /><br />This will slightly darken the dark pixels and greatly darken the bright pixels. Equation 4 in the Reinhard paper will give you more control. The cool kids have been using "filmic tone mapping" and it is the best tone mapping I have seen, but I have not implemented it (see the title of this blog post).<br /><br /><br /><br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com1tag:blogger.com,1999:blog-8350257063773144600.post-77077567966972147952019-02-14T06:24:00.001-08:002019-02-15T08:53:22.121-08:00Picking points on the hemisphere with a cosine density<br />NOTE: this post has three basic problems. It assumes properties 1 and 2 are true, and there is a missing piece at the end that keeps us from showing anything :)<br /><br />This post results from a bunch of conversations with Dave Hart and the twitter hive brain. There are several ways to generate a random Lambertian direction from a point with surface normal <b>N</b>. 
One way is inspired by a <a href="https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=Integral+Geometry+Methods+for+Form+Factor+Computation+&btnG=">cool paper by Sbert and Sandez </a>where they simultaneously generated many form factors by repeatedly selecting a uniformly random 3D line in the scene. This can be used to generate a direction with a cosine density, an <a href="http://amietia.com/lambertnotangent.html">idea first described</a>, as far as I know, by Edd Biddulph.<br /><br />I am going to describe it here using three properties, each of which I don't have a concise proof for. Any help appreciated! (I have algebraic proofs-- they just aren't enlightening-- hoping for a clever geometric observation).<br /><br /><br /><span style="color: blue;"><b>Property 1</b></span>: <a href="https://en.wikipedia.org/wiki/View_factor#Nusselt_analog">Nusselt Analog</a>: uniform random points on an equatorial disk projected onto the sphere have a cosine density. So in the figure, the red points, if all of them are projected, have a cosine density.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-shfOTbIGrsQ/XGVwmCcodxI/AAAAAAAACT8/Un_kJ1g2AdcK8g7j-gBbeJImEBlP35tAACLcBGAs/s1600/Screen%2BShot%2B2019-02-14%2Bat%2B6.13.15%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="538" data-original-width="702" height="153" src="https://1.bp.blogspot.com/-shfOTbIGrsQ/XGVwmCcodxI/AAAAAAAACT8/Un_kJ1g2AdcK8g7j-gBbeJImEBlP35tAACLcBGAs/s200/Screen%2BShot%2B2019-02-14%2Bat%2B6.13.15%2BAM.png" width="200" /> </a></div><div class="separator" style="clear: both; text-align: left;"> <span style="color: blue;"><b>Property 2</b></span>:<span style="color: red;"><b> (THIS PROPERTY IS NOT TRUE-- SEE COMMENTS)</b></span> The red points in the diagram above, when projected onto the normal, will have a uniform density along it: </div><div class="separator" style="clear: both; 
text-align: center;"><a href="https://3.bp.blogspot.com/--dG5BpbaDZU/XGVwmMZM_sI/AAAAAAAACUE/KPf1Aa_HohcxpE8xt0xeu3kTWPIPVdLxQCLcBGAs/s1600/Screen%2BShot%2B2019-02-14%2Bat%2B6.30.04%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="636" data-original-width="1416" height="143" src="https://3.bp.blogspot.com/--dG5BpbaDZU/XGVwmMZM_sI/AAAAAAAACUE/KPf1Aa_HohcxpE8xt0xeu3kTWPIPVdLxQCLcBGAs/s320/Screen%2BShot%2B2019-02-14%2Bat%2B6.30.04%2BAM.png" width="320" /></a></div> <b><span style="color: blue;">Property 3:</span></b> For random points on a 3D sphere, as shown (badly) below, their projections onto the central axis will be uniform along that axis.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-DIvIsF0s4HM/XGVwmEo_SeI/AAAAAAAACUA/3yUnEmI3FUsK0C7QyZNccVUrdIr140xPACLcBGAs/s1600/Screen%2BShot%2B2019-02-14%2Bat%2B6.23.41%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="734" data-original-width="1408" height="166" src="https://4.bp.blogspot.com/-DIvIsF0s4HM/XGVwmEo_SeI/AAAAAAAACUA/3yUnEmI3FUsK0C7QyZNccVUrdIr140xPACLcBGAs/s320/Screen%2BShot%2B2019-02-14%2Bat%2B6.23.41%2BAM.png" width="320" /></a></div><br /><br />Now if we accept Property 3, we can generate a random point on a sphere by first choosing a uniform random angle phi = 2*PI*urandom(), and then choosing a random height from -1 to 1, height = -1 + 2*urandom()<br /><br />In XYZ coordinates we can convert this to:<br />z = -1 + 2*urandom()<br />phi = 2*PI*urandom()<br />x = cos(phi)*sqrt(1-z*z)<br />y = sin(phi)*sqrt(1-z*z)<br /><br />Similarly, from<span style="color: blue;"><b> property 1</b></span>, given a random point (x,y) on a unit disk in the XY plane, we can generate a direction with cosine density when<b> N </b>= the z axis:<br /><br />(x,y) = random on disk<br />z = sqrt(1-x*x-y*y)<br /><br />To generate a 
cosine direction relative to a surface normal <b>N,</b> people usually construct a local basis, ANY local basis, with tangent and bitangent vectors <b>B</b> and<b> T</b> and change coordinate frames:<br /><br />get_tangents(B,T,N)<br />(x,y) = random on disk<br />z = sqrt(1-x*x-y*y)<br />direction = x*B + y*T + z*N<br /><br />There is <a href="https://r.search.yahoo.com/_ylt=AwrWmjlEdGVcI1IAsVgPxQt.;_ylu=X3oDMTByb2lvbXVuBGNvbG8DZ3ExBHBvcwMxBHZ0aWQDBHNlYwNzcg--/RV=2/RE=1550181572/RO=10/RU=https%3a%2f%2fgraphics.pixar.com%2flibrary%2fOrthonormalB%2fpaper.pdf/RK=2/RS=n0VYXkXI0KUFpNw48SpuMV5K0p8-">finally a compact and robust way to write get_tangents</a>. So use that, and your code is fast and good. <br /><br />But can we show that using a uniform random point on the sphere lets us do this without tangents?<br /><br />So we do this:<br />P = (x,y,z) = random_on_unit_sphere<br />D = unit_vector(N + P)<br /><br />So <b>D </b>is the green dot while (<b>N+P</b>) is the red dot:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-eFLsRQdX7JQ/XGV5ePMRosI/AAAAAAAACUc/pU9bCnk7KBEQ_ruZnHRHw3MFYavee6ITwCLcBGAs/s1600/Screen%2BShot%2B2019-02-14%2Bat%2B7.19.50%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="771" data-original-width="812" height="189" src="https://3.bp.blogspot.com/-eFLsRQdX7JQ/XGV5ePMRosI/AAAAAAAACUc/pU9bCnk7KBEQ_ruZnHRHw3MFYavee6ITwCLcBGAs/s200/Screen%2BShot%2B2019-02-14%2Bat%2B7.19.50%2BAM.png" width="200" /></a></div><br /><div class="separator" style="clear: both; text-align: left;"> So is there a clever observation that the green dot is either 1) uniform along <b>N, </b>or 2) uniform on the disk when projected?</div><br /><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br 
/></div><div style="text-align: left;"><br /></div><br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com4tag:blogger.com,1999:blog-8350257063773144600.post-91806245435701945472019-02-12T11:08:00.000-08:002019-02-12T11:08:20.120-08:00Adding point and directional lights to a physically based renderer<br />Suppose you have a physically based renderer with area lights. Along comes a model with point and directional lights. Perhaps the easiest way to deal with them is to convert them to a very small spherical light source. But how do you do that in a way that gets the color right? <b> IF</b> you have things set up so they tend to be tone mapped (a potential big <b>no</b> in a physically-based renderer), meaning that a color of (1,1,1) will be white, and (0.2,0.2,0.2) a mid-grey (gamma-- so not 0.5-- the eye does not see intensities linearly), then it is not so bad.<br /><br />Assume you have a spherical light with radius R at position C and emitted radiance (E,E,E). When it is straight up from a white surface (so the sun at noon at the equator), you get this (approximate) equation for the color at a point P:<br /><br /><i>reflected_color = (E,E,E)*solid_angle_of_light / PI</i><br /><br />The solid angle of the light is its projected area on the unit sphere around the illuminated point, or approximately:<br /><br /><i>solid_angle_of_light = PI*R^2 /distance_squared(P,C)</i><br /><br />So<br /><br /><i> </i><i>reflected_color = (E,E,E)*(R / distance(P,C))^2</i><br /><br />If we want the white surface to be exactly white then<br /><br /><span style="color: lime;"><i>E = (distance(P,C) / R)^2</i></span><br /><br />So pick a small R (say 0.001), pick a point in your scene (say the one the viewer is looking at, or 0,0,0), and set E as in the <span style="color: lime;">green </span>equation<i><br /></i><br /><br />Suppose the RGB "color" given for the point source is<i> (cr, cg, cb). 
</i> Then just multiply the (E,E,E) by those components.<i> </i><br /><br />Directional lights are a similar hack, but a little less prone to problems of whether falloff was intended. A directional light is usually the sun, and it's very far away. Assume the direction is D = (dx,dy,dz)<br /><br />Pick a big distance (one that won't break your renderer) and place the center in that direction:<br /><br />C = big_constant*D<br /><br />Pick a small radius. Again one that won't break your renderer. Then use the <span style="color: lime;">green</span> equation above.<br /><br />Now sometimes the directional sources are just the Sun and will look better if you give them the same angular size as the Sun. If your model is in meters, then just use distance = 150,000,000,000m. OK now your program will break due to floating point. Instead pick a somewhat big distance and use the right ratio of Sun radius to distance:<br /><br />R = distance *(695,700km / 149,597,870km)<br /><br />And your shadows will look OK. <br /><br /><br /><br /><br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com0tag:blogger.com,1999:blog-8350257063773144600.post-66121498707381835492018-10-20T08:33:00.001-07:002018-10-21T20:40:33.453-07:00Flavors of Sampling in Ray TracingMost "ray tracers" these days are what we used to call "path tracers" or "Monte Carlo ray tracers". Because they all have about the same core architecture, this shorthand is taking over.<br /><br />These ray tracers have variants, but a big one is whether the ray tracing is:<br /><ul><li><b>Batch</b>: run your ray tracer in the background until done (e.g., movie renderers)</li><li><b>Realtime</b>: run what you can in 30ms or some other frame time (e.g. 
game renderers go RTX!)</li><li><b>Progressive</b>: just keep updating the image to make it less noisy as you trace more rays (e.g., a lighting program used by an artist in a studio or engineering firm)</li></ul>For batch rendering let's pretend for now we are only getting noise from antialiasing the pixels so we have a fundamentally 2D program. This typically has something like the following for each pixel (i,j):<br /><br />pixel_color = (0,0,0)<br />for s = 0 to N-1<br /> u = random_from_0_to_1()<br /> v = random_from_0_to_1()<br /> pixel_color += sample_color(i+u,j+v)<br />pixel_color /= N <br /><br />That "sample_color()" does whatever it does to sample a font or whatever. The first thing that bites you is the <b>diminishing return </b>associated with Monte Carlo methods: error = constant / sqrt(num_samples). So to halve the error we need 4X the samples. <br /><br />Don Mitchell has a <a href="http://mentallandscape.com/Papers_siggraph96.pdf">nice paper</a> that explains that if you take "stratified" samples, you get better error rates for most functions. In the case of 2D with edges the error = constant / num_samples. That is LOADS better. In graphics a common way to stratify samples is called "jittering", where you usually take a perfect square number of samples in each pixel: N = n*n. This yields the code:<br /><br />pixel_color = (0,0,0)<br />for s = 0 to n-1<br /> for t = 0 to n-1<br /> u = (s + random_from_0_to_1()) / n<br /> v = (t + random_from_0_to_1()) / n<br /> pixel_color += sample_color(i+u,j+v)<br />pixel_color /= N<br /><br />Visually the difference in the pattern of the samples is more obvious. 
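A self-contained C++ sketch of that jittered loop (here sample_color is just a stub integrand standing in for whatever the renderer actually evaluates):

```cpp
#include <random>

// Stub for the per-sample shading function, so the loop below
// is runnable on its own; a real renderer traces a ray here.
double sample_color(double u, double v) { return u + v; }

// Average n*n jittered (stratified) samples over pixel (i, j):
// one uniform random sample inside each of the n*n strata.
double jittered_pixel_color(int i, int j, int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> rand01(0.0, 1.0);
    double sum = 0.0;
    for (int s = 0; s < n; s++) {
        for (int t = 0; t < n; t++) {
            double u = (s + rand01(rng)) / n;
            double v = (t + rand01(rng)) / n;
            sum += sample_color(i + u, j + v);
        }
    }
    return sum / (n * n); // divide by N = n*n samples
}
```

With the stub integrand the expected value for pixel (0,0) is 1.0, and the stratified estimate lands very close to it even for modest n.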
There is a <a href="https://blogs.sap.com/2018/02/11/abap-ray-tracer-part-5-the-sample/">very nice blog post </a>on the details of this sampling here, and it includes this nice figure from Suffern's book:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-0G-ZcQNovEk/W8tA3sM6U7I/AAAAAAAACRw/Yu1Gmo4_tQsyzDzHqRX5di5ycWWTL7tgwCLcBGAs/s1600/Screen%2BShot%2B2018-10-20%2Bat%2B8.50.00%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1350" height="212" src="https://3.bp.blogspot.com/-0G-ZcQNovEk/W8tA3sM6U7I/AAAAAAAACRw/Yu1Gmo4_tQsyzDzHqRX5di5ycWWTL7tgwCLcBGAs/s400/Screen%2BShot%2B2018-10-20%2Bat%2B8.50.00%2BAM.png" width="400" /></a></div><br /> It turns out that you can replace the random samples not only with jittered samples, but with a regular grid and you will converge to the right answer. But better still you can use quasi-random samples, which makes this a quasi-Monte Carlo method (QMC). The code is largely the same! Just replace the (u,v) part above. The theory that justifies it is a bit different, but the key thing is the points need to be "uniform" and not tend to clump anywhere. These QMC methods have been investigated in detail by my friend and coworker <a href="https://research.nvidia.com/person/alex-keller">Alex Keller</a>. The simplest QMC method is, like the jittering, best accomplished if you know N (but it doesn't need to be restricted to a perfect square). A famous and good one is the Hammersley Point set. 
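The 2D Hammersley point set is tiny to implement; a sketch (the i-th of N points is (i/N, phi2(i)), where phi2 is the base-2 radical inverse that mirrors the bits of i about the binary point):

```cpp
#include <utility>
#include <vector>

// Base-2 radical inverse: reflect the binary digits of i about the
// binary point, giving a value in [0,1).
double radical_inverse_base2(unsigned int i) {
    double result = 0.0, f = 0.5;
    while (i) {
        if (i & 1u) result += f;
        f *= 0.5;
        i >>= 1;
    }
    return result;
}

// The N-point 2D Hammersley set on [0,1)^2.
std::vector<std::pair<double, double>> hammersley2d(int N) {
    std::vector<std::pair<double, double>> pts;
    pts.reserve(N);
    for (int i = 0; i < N; i++)
        pts.push_back({double(i) / N, radical_inverse_base2(i)});
    return pts;
}
```

Note that, like jittering, this needs N up front: the i/N coordinate bakes it in.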
Here's a picture from <a href="http://mathworld.wolfram.com/HammersleyPointSet.html">Wolfram</a>:<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-sCzvjaVNrcY/W8tCqAcQq9I/AAAAAAAACR8/HTzHfJ4Uvn8BPAWTBkeWzFiFsPYT1zkhQCLcBGAs/s1600/Screen%2BShot%2B2018-10-20%2Bat%2B8.58.22%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="242" data-original-width="238" src="https://4.bp.blogspot.com/-sCzvjaVNrcY/W8tCqAcQq9I/AAAAAAAACR8/HTzHfJ4Uvn8BPAWTBkeWzFiFsPYT1zkhQCLcBGAs/s1600/Screen%2BShot%2B2018-10-20%2Bat%2B8.58.22%2BAM.png" /></a></div>It's regular and uniform, but not <b>too</b> regular.<br /><br />In 2D these sampling patterns, jittering and QMC, are <b>way way better </b>than pure random. However, there is no free lunch. In a general path tracer, it is more than a 2D problem. The program might look like this:<br /><br />For every pixel<br /> for every sample<br /> pick u,v for screen<br /> pick a time t<br /> pick an a,b on lens<br /> if ray hits something<br /> pick a u',v' for light sampling<br /> pick a u",v" for BRDF sampling<br /> recurse<br /><br />If you track your random number generation and you take 3 diffuse bounces with shadow rays, that will be nine random numbers. You could think of a ray path through the scene as something that gets generated by a function:<br /><br />ray_path get_ray_path(u, u', u", u'", u"", u""', ....)<br /><br />So you sample a nine dimensional hypercube randomly and map those nine-dimensional points to ray paths. Really this happens in some procedural recursive process, but abstractly it's a mapping. This means we run into the <b>CURSE OF DIMENSIONALITY</b>. Once the integral is high dimensional, if the integrand is complex, <b>STRATIFIED SAMPLING DOESN'T HELP</b>. However, you will notice (almost?) all serious production ray tracers do add stratified sampling. Why? 
<br /><br />The reason is that for many pixels, the integrand is mostly constant except for two of the dimensions. For example, consider a pixel that is:<br /><br /><ul><li>In focus (so lens sample doesn't matter)</li><li>Not moving (so time sample doesn't matter)</li><li>Doesn't have an edge (so pixel sample doesn't matter)</li><li>Is fully in (or out of) shadow (so light sample doesn't matter)</li></ul>What does matter is the diffuse bounce. So it <b>acts like</b> a 2D problem. So we need the BRDF samples, let's call them u7 and u8, to be well stratified. <br /><br />QMC methods here typically automatically ensure that various projections of the high dimensional samples are themselves well stratified. Monte Carlo methods, on the other hand, typically try to get this property for some projections, namely the 2 pixel dimensions, the 2 light dimensions, the 2 lens dimensions, etc. They typically do this as follows for the 4D case:<br /><br /><ol><li>Create N "good" light samples on [0,1)^2 </li><li>Create N "good" pixel samples on [0,1)^2</li><li>Create a permutation of the integers 0, ..., N-1</li><li>Create a 4D pattern using the permutation where the ith sample is light1[i], light2[i], pixel1[permute[i]], pixel2[permute[i]].</li></ol>So a pain :) There are bunches of papers on doing Monte Carlo, QMC, or "blue noise" uniform random point sets. But we are not yet quite done.<br /><br />First, we don't know how many dimensions we have! The program recurses and dies by hitting a dark surface, exiting the scene, or Russian Roulette. Most programs degenerate to pure random after a few bounces to make it so we can sort of know the dimensionality.<br /><br />Second, we might want a progressive preview on the renderer where it gets better as you wait. Here's a <a href="https://www.youtube.com/watch?v=82SsR4s6BEM">nice example</a>.<br /><br />So you don't know what N is in advance! You want to be able to add samples potentially forever. 
This is easy if the samples are purely random, but not so obvious if doing QMC or stratified. The QMC default answer is to use <b>Halton Points</b>. These are designed to be progressive! Alex Keller at NVIDIA and his collaborators have <a href="http://web.maths.unsw.edu.au/~josefdick/MCQMC_Proceedings/MCQMC_Proceedings_2012_Preprints/100_Keller_tutorial.pdf">found even better ways to do this with QMC</a>. Per Christensen and Andrew Kensler and Charlie Kilpatrick at Pixar have a <a href="https://graphics.pixar.com/library/ProgressiveMultiJitteredSampling/paper.pdf">new Monte Carlo sampling method </a>that is making waves in the movie industry. I have not ever implemented these and would love to hear your experiences if you do (or have!).<br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com1tag:blogger.com,1999:blog-8350257063773144600.post-37357470809973350402018-06-04T07:27:00.002-07:002018-06-04T07:27:30.206-07:00Sticking a thin lens in a ray tracerI need to stick an "<a href="https://en.wikipedia.org/wiki/Thin_lens">ideal thin lens</a>" in my ray tracer. 
Rather than a real lens with some material, coating, and shape, it's an idealized version like we get in Physics 1.<br /><br />The thin lens has some basic properties that have some redundancy, but they are the ones I remember and deploy when needed:<br /><ol><li>a ray through the lens center is not bent</li><li>all rays through a point that then pass through the lens converge at some other point</li><li>a ray through the focal point (a point on the optical axis at distance from lens <b><i>f</i></b>, the focal length of the lens) will be parallel to the optical axis after being bent by the lens</li><li>all rays from a point a distance <i><b>A</b></i> from the lens will converge at a distance <b><i>B</i></b> on the other side of the lens and obey the thin lens law: <i>1/<b>A</b> + 1/<b>B</b> = 1/<b>f</b></i></li></ol> Here's a sketch of those rules:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-DHTTQMA8wUQ/WxRfeZlRn9I/AAAAAAAACQw/qKJSB_5hSQwMhGPtZ2CeXVnBtanf6pv0ACLcBGAs/s1600/Screen%2BShot%2B2018-06-03%2Bat%2B3.36.24%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="408" data-original-width="1018" height="256" src="https://2.bp.blogspot.com/-DHTTQMA8wUQ/WxRfeZlRn9I/AAAAAAAACQw/qKJSB_5hSQwMhGPtZ2CeXVnBtanf6pv0ACLcBGAs/s640/Screen%2BShot%2B2018-06-03%2Bat%2B3.36.24%2BPM.png" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><br />So if I have a (purple) ray with origin <i><b>a</b></i> that hits the lens at point <i><b>m</b></i>, how does it bend? It bends towards point<i> <b>b</b></i> no matter what the ray direction is. 
So the new ray is:<br /><br /><i><b>p</b>(t)<b> = m + </b>t<b> (b-m)</b></i><br /><br /> So what is point <i><b>b?</b></i><br /><br />It is in the direction of point <i><b>c</b></i>, but extended by some distance. We know the center of the lens <i><b>c</b></i>, so we can use the ratios of the segments to extend to it:<br /><br /><i><b>b = a + (c-a) </b>(B+A)/A</i><br /><br />So what is <i>(B+A)/A?</i><br /><br />We know 1/A + 1/B = 1/f, so B = 1/(1/f - 1/A). So the point <i><b>b</b></i> is:<br /><br /><i><b>b = a + (c-a) </b>(1/(1/f - 1/A) + A)/A</i><br /><i>&nbsp;&nbsp;<b>= a + (c-a) </b>(1/(A/f - 1) + 1)</i><br /><i>&nbsp;&nbsp;<b>= a + (c-a) </b>(A/(A - f))</i><br /><br />OK, let's try a sanity check. What if A = 2f?<br /><br /><i><b>b = a + (c-a) </b>(2f/(2f - f)) = <b>a + </b>2<b>(c-a)</b></i><br /><br />That looks right (symmetric case-- A = B there). <br /><br /><span style="color: red;"><span style="font-size: large;">So final answer, given a ray with origin <i><b>a</b></i> that hits a lens with center <i><b>c</b></i> and focal length <i>f </i>at point <i><b>m</b></i>, the refracted ray is:</span></span><br /><br /><span style="color: red;"><span style="font-size: large;">p(t) = m + t(<i><b>a + (c-a) </b><i>(A/(A - f)) - <b>m</b></i></i>)</span></span><br /><br />There is a catch. What if <i><i>B < 0? </i></i>This happens when<i><i> A < f. 
</i></i>Address that case when it comes up :)<br /><br /><br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com0tag:blogger.com,1999:blog-8350257063773144600.post-61217046379187892162018-05-31T18:20:00.001-07:002018-05-31T18:20:40.142-07:00Generating uniform Random rays that hit an axis aligned box<br />For some tests you want the set of "all" rays that hit a box. If you want to stratify, this is somewhat involved (and I don't know that I have done it, nor seen it done). Chris Wyman and I did a <a href="http://jcgt.org/published/0006/02/03/">JCGT paper on doing this stratified in a square in 2D</a>. But often stratification isn't needed. When in doubt I never do it-- the software is always easier to deal with un-stratified, and as soon as dimension gets high most people don't bother because of the <a href="https://en.wikipedia.org/wiki/Curse_of_dimensionality">Curse of Dimensionality</a>.<br /><br />We would like to do this with as little math as possible. First let's consider any side of the box (this would apply to any convex polyhedron if we wanted something more general). If the box is embedded in all possible uniform rays, any ray that hits the box will enter at exactly one point on the surface of the box, and all points are equally likely. So our first job is to pick a uniform point on the surface. We can use the technique Greg Turk used to seed points for texture generation on a triangular mesh:<br /><br />The probability of each side, for a box with side lengths X, Y, Z, is its side area over the total area. 
The side areas are:<br /><br />XY, YZ, ZX (2 of each)<br /><br />We can do cumulative area and stuff it in an array of length 6:<br /><br /><span style="font-family: "Courier New", Courier, monospace;">c_area[0] = XY</span><br /><span style="font-family: "Courier New", Courier, monospace;">c_area[1] = c_area[0] + XY</span><br /><span style="font-family: "Courier New", Courier, monospace;">c_area[2] = c_area[1] + YZ</span><br /><span style="font-family: "Courier New", Courier, monospace;">c_area[3] = c_area[2] + YZ</span><br /><span style="font-family: "Courier New", Courier, monospace;">c_area[4] = c_area[3] + ZX</span> <br /><span style="font-family: "Courier New", Courier, monospace;">c_area[5] = c_area[4] + ZX </span><br /><br />Now normalize it so it is a cumulative fraction:<br /><br /><br /><span style="font-family: "Courier New", Courier, monospace;">for (int i = 0; i < 6; i++) </span><br /><span style="font-family: "Courier New", Courier, monospace;"> c_area[i] /= c_area[5]</span><br /><br />Now take a uniform random real <span style="font-family: "Courier New", Courier, monospace;">r()</span> in [0,1)<br /><br /><span style="font-family: "Courier New", Courier, monospace;">int candidate = 0;</span><br /><span style="font-family: "Courier New", Courier, monospace;">float ra = r(); </span><br /><span style="font-family: "Courier New", Courier, monospace;">while (c_area[candidate] < ra) candidate++;</span><br /><br />Now <span style="font-family: "Courier New", Courier, monospace;">candidate</span> is the index to the side.<br /><br />Let's say the side is in the xy plane and x goes from 0 to X and y goes from 0 to Y. Now pick a uniform random point on that side:<br /><br /><span style="font-family: "Courier New", Courier, monospace;">vec3 ray_entry(X*r(), Y*r(), 0);</span><br /><br />Now we have a ray origin. What is the ray direction? It is <b>not</b> uniform in all directions. These are the rays that hit the side. 
So the density is proportional to the cosine of the angle to the normal-- so they are Lambertian! <b>This is not obvious</b>. I will punt on justifying that for now.<br /><br />So for the xy-plane side above, the normal is +z and the ray direction is a uniform random point on a disk projected onto the hemisphere:<br /><br /><span style="font-family: "Courier New", Courier, monospace;">float radius = sqrt(r());</span><br /><span style="font-family: "Courier New", Courier, monospace;">float theta = 2*M_PI*r();</span><br /><span style="font-family: "Courier New", Courier, monospace;">float x = radius*cos(theta);</span><br /><span style="font-family: "Courier New", Courier, monospace;">float y = radius*sin(theta);</span><br /><span style="font-family: "Courier New", Courier, monospace;">float z = sqrt(1-x*x-y*y);</span><br /><span style="font-family: "Courier New", Courier, monospace;">ray_direction = vec3(x,y,z);</span><br /><br />Now we need that for each of the six sides.<br /><br />We could probably find symmetries to have 3 cases, or maybe even a loop, but I personally would probably not bother because me trying to be clever usually ends poorly...<br /><br /><br /><br /><br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com0tag:blogger.com,1999:blog-8350257063773144600.post-82638163917974846242018-03-14T09:03:00.005-07:002018-03-14T09:03:44.801-07:00Egyptian estimates of PII saw a neat <a href="https://twitter.com/fermatslibrary/status/973181926684192769">tweet</a> on the estimate the Egyptians used for PI.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-qf2qj6VFGRc/WqlFBzUBOjI/AAAAAAAACP8/epaQ1P0XAG8HgVCDNOW4wu4TlYDjphmVwCLcBGAs/s1600/Screen%2BShot%2B2018-03-14%2Bat%2B9.51.03%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="939" data-original-width="1220" height="246" 
src="https://4.bp.blogspot.com/-qf2qj6VFGRc/WqlFBzUBOjI/AAAAAAAACP8/epaQ1P0XAG8HgVCDNOW4wu4TlYDjphmVwCLcBGAs/s320/Screen%2BShot%2B2018-03-14%2Bat%2B9.51.03%2BAM.png" width="320" /></a></div>This is all my speculation, and maybe a math history buff can enlighten me, but the D^2 dependence they should discover pretty naturally. Including the constant before squaring is, I would argue, just as natural as having it outside the parentheses, so let's go with that for now. So was there a nearby better fraction? How well did the Egyptians do? A brute force program should tell us.<br /><br />We will use the ancient programming language C:<br /><br /><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">#include <math .h=""><br />#include <stdio .h=""><br />int main() {<br /> double min_error = 10;<br /> for (int denom = 1; denom < 10000; denom++) {<br /> for (int num = 1; num < denom; num++) {<br /> double approx = 2*double(num)/double(denom);<br /> approx = approx*approx;<br /> double error2 = M_PI-approx;<br /> error2 = error2*error2;<br /> if (error2 < min_error) {<br /> min_error = error2;<br /> printf("%d/%d %f\n", num, denom, 4*float(num*num)/float(denom*denom));<br /> }<br /> }<br /> }<br />}</stdio></math></span></span><br /><br />This produces output:<br /><br /><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">1/2 1.000000<br />2/3 1.777778<br />3/4 2.250000<br />4/5 2.560000<br />5/6 2.777778<br />6/7 2.938776<br />7/8 3.062500<br />8/9 3.160494<br />23/26 3.130177<br />31/35 3.137959<br />39/44 3.142562<br />109/123 3.141252<br />148/167 3.141597<br />4401/4966 3.141588<br />4549/5133 3.141589<br />4697/5300 3.141589<br />4845/5467 3.141589<br />4993/5634 3.141589<br />5141/5801 3.141590<br />5289/5968 3.141590<br />5437/6135 3.141590<br />5585/6302 3.141590<br />5733/6469 3.141590<br />5881/6636 3.141591<br />6029/6803 3.141591<br />6177/6970 3.141591<br />6325/7137 
3.141591<br />6473/7304 3.141591<br />6621/7471 3.141591<br />6769/7638 3.141591<br />6917/7805 3.141592<br />7065/7972 3.141592<br />7213/8139 3.141592<br />7361/8306 3.141592<br />7509/8473 3.141592<br />7657/8640 3.141592<br />7805/8807 3.141592<br />7953/8974 3.141593<br />8101/9141 3.141593<br />8249/9308 3.141593<br />8397/9475 3.141593<br />8545/9642 3.141593</span></span><br /><br />So 7/8 was already pretty good, and you need to get to 23/26 before you do any better! I'd say the Egyptians did extremely well.<br /><br />What if they had put the constants outside the parens? How well could they have done? We can change two of the lines above to:<br /><br /><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">double approx = 4*(double)num/(double)denom; // approx = approx*approx;</span></span><br /><br />and the printf to:<br /><br /><span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">printf("%d/%d %f\n", num, denom, 4*(float)num/(float)denom);</span></span><br /><br />And we get:<br /><br /><span style="font-family: "Courier New", Courier, monospace;"><span style="font-size: x-small;">1/2 2.000000<br />2/3 2.666667<br />3/4 3.000000<br />4/5 3.200000<br />7/9 3.111111<br />11/14 3.142857<br />95/121 3.140496<br />106/135 3.140741<br />117/149 3.140940<br />128/163 3.141104<br />139/177 3.141243<br />150/191 3.141361<br />161/205 3.141464<br />172/219 3.141552<br />183/233 3.141631<br />355/452 3.141593</span></span><br /><br />So 7/9 is not bad! And 11/14 even better. 
So no clear winner here on whether the rational constant should be inside the parens or not.<br /><br /><br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com14tag:blogger.com,1999:blog-8350257063773144600.post-30096303809156714572017-12-29T04:04:00.002-08:002017-12-29T04:10:58.417-08:00Rendering the moon, sun, and skyA reader asked me about rendering the moon using a path tracer. This has been done by several people and what's coolest about it is that you can do the whole thing with four spheres and not a lot of data (assuming you don't need clouds anyway).<br /><br />First, you will need to deal with the atmosphere, which is most easily dealt with spectrally rather than in RGB because scattering has simple wavelength-based formulas. But you'll also have an RGB texture for the moon, so I would use the <a href="https://psgraphics.blogspot.com/2017/12/lazy-spectral-rendering.html?showComment=1513514197944">lazy spectral method</a>.<br /><br />Here are the four spheres-- the atmosphere sphere and the Earth share the same center. Not to scale (speaking of which, choose sensible units like kilometers or miles, and I would advise making everything a double rather than float).<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-SO4639I4nIw/WkYousC-4bI/AAAAAAAACOk/ebEwaVMJg2wyoIMlxqKtol9pkOVBK3TnQCLcBGAs/s1600/Screen%2BShot%2B2017-12-29%2Bat%2B4.33.46%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="822" data-original-width="1356" height="193" src="https://3.bp.blogspot.com/-SO4639I4nIw/WkYousC-4bI/AAAAAAAACOk/ebEwaVMJg2wyoIMlxqKtol9pkOVBK3TnQCLcBGAs/s320/Screen%2BShot%2B2017-12-29%2Bat%2B4.33.46%2BAM.png" width="320" /></a></div>The atmosphere can be almost arbitrarily complicated but I would advise making it all Rayleigh scatterers with constant density. You can also add more complicated mixtures and densities. 
To set the constant density just try to get the overall opacity about right. A random web search yields this image from Martin Chaplin:<br /><br /><br /> <img alt="http://www1.lsbu.ac.uk/water/images/sun.gif" class="transparent" src="http://www1.lsbu.ac.uk/water/images/sun.gif" /><br /><br /><br /><br /><br /><br /><br /><br />This suggests an overall transmittance somewhere between 0.5 and 0.7 (which is probably good enough-- a constant-density model is probably a bigger limitation). In any case I would use the "collision" method where the atmosphere looks like a solid object to your software and exponential attenuation will be implicit.<br /><br />For the Sun you'll need the spectral radiance for when a ray hits it. If you use the lazy binned RGB method and don't worry about absolute magnitudes because you'll tone map later anyway, you can eyeball the above graph and guess that for the [400-500, 500-600, 600-700] nm bins you can use [0.6,0.8,1.0]. If you want to maintain absolute units (not a bad idea-- it's good to do some unit tests on things like the luminance of the moon or sky), data for the sun is available in lots of places, but be careful to make sure it is spectral radiance or convert it to that (radiometry is a pain).<br /><br />For the moon you will need a BRDF and a texture to modulate it. For a first pass use Lambertian, but that will not give you the nice constant-color moon. 
<a href="https://www.cs.rpi.edu/~cutler/publications/yapo_gi09.pdf">This paper</a> by Yapo and Culter has some great moon renderings and they use the BRDF that <a href="http://graphics.stanford.edu/~henrik/papers/nightsky/nightsky.pdf">Jensen et al.</a> suggest:<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-JgpHccacxsM/WkYr_u6q5oI/AAAAAAAACOw/AEEsAxyUEzYG34stBrX2mTGOBwBsGi4aQCLcBGAs/s1600/Screen%2BShot%2B2017-12-29%2Bat%2B4.49.59%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="422" data-original-width="830" height="202" src="https://3.bp.blogspot.com/-JgpHccacxsM/WkYr_u6q5oI/AAAAAAAACOw/AEEsAxyUEzYG34stBrX2mTGOBwBsGi4aQCLcBGAs/s400/Screen%2BShot%2B2017-12-29%2Bat%2B4.49.59%2BAM.png" width="400" /></a></div><br />Texture maps for the moon, again from a quick google search are <a href="http://planetpixelemporium.com/earth.html">here</a>.<br /><br />The Earth you can make black or give it a texture if you want Earth shine. I ignore atmospheric refraction -- see Yapo and Cutler for more on that.<br /><br />For a path tracer with a collision method as I prefer, and implicit shadow rays (so the sun directions are more likely to be sampled but all rays are just scattered rays) the program would look something like this:<br /><br />For each pixel<br /> For each viewing ray choose random wavelength<br /> send into the (moon, atmosphere, earth, sun) list of spheres<br /> if hit scatter according to pdf (simplest would be half isotropic and half to sun)<br /><br />The most complicated object above would be the atmosphere sphere where the probability of hitting per unit length would be proportional to <a href="http://hyperphysics.phy-astr.gsu.edu/hbase/atmos/blusky.html">(1/lambda^4)</a>. 
I would make the Rayleigh scattering isotropic just for simplicity, but using the real phase function isn't that much harder.<br /><br />The picture below <a href="https://graphics.stanford.edu/~boulos/papers/gi06.pdf">from this paper</a> was generated using the techniques described above with no moon-- just the atmosphere.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-P3kb6z5GbXI/WkYvOWPmxpI/AAAAAAAACO8/eeRuHZvbUhsMe97Vvv1WcUH9OSpSsi_8ACLcBGAs/s1600/Screen%2BShot%2B2017-12-29%2Bat%2B5.02.34%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1082" data-original-width="822" height="400" src="https://1.bp.blogspot.com/-P3kb6z5GbXI/WkYvOWPmxpI/AAAAAAAACO8/eeRuHZvbUhsMe97Vvv1WcUH9OSpSsi_8ACLcBGAs/s400/Screen%2BShot%2B2017-12-29%2Bat%2B5.02.34%2BAM.png" width="303" /></a></div><br /><br />There-- brute force is great-- get the computer to do the work (note, I already thought that way before I joined a hardware company).<br /><br />If you generate any pictures, please tweet them to me!<br /><br /><br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com2tag:blogger.com,1999:blog-8350257063773144600.post-37253364045939119882017-12-06T21:10:00.000-08:002017-12-06T21:10:10.680-08:00Lazy spectral renderingIf you have to do spectral rendering (so light wavelengths and not just RGB internal computations) I am a big fan of making your life simpler by doing two lazy moves:<br /><br />1. Each ray gets its own wavelength<br />2. Use a 3-element piecewise-constant approximation for most of the spectra, and make all the XYZ tristimulus stuff implicit<br /><br />First, here's how to do it "right". <span style="color: #b45f06;">You can skip this part-- I'll put it in brown so it's easy to skip. We want some file of RGB pixels like sRGB. Look up the precise definition of sRGB in terms of XYZ. 
Look up the precise definition of XYZ (if you must do that because you are doing some serious appearance modeling, use Chris Wyman's <a href="http://jcgt.org/published/0002/02/01/">approximation</a>). You will have three functions of wavelength x(), y(), and z(). X is for example:</span><br /><span style="color: #b45f06;"><br /></span><span style="color: #b45f06;">X = k*INTEGRAL x(lambda) L(lambda) d-lambda</span><br /><span style="color: #b45f06;"><br /></span><span style="color: #b45f06;">If you use one wavelength per ray, do it randomly and do Monte Carlo: lambda = 400 + 300*r01(), so pdf(lambda) = 1/300</span><br /><br /><span style="color: #b45f06;">X =approx= k*300*x(lambda) L(lambda)</span><br /><br /><span style="color: #b45f06;">You can use the same rays to approximate Y and Z because x(), y(), and z() partially overlap.</span><br /><br /><span style="color: #b45f06;">Now read in your model and convert all RGB triples to spectral curves. How? Don't ask me. Seems like overkill so let's be lazy.</span><br /><br />OK now let's be lazier than that. This is a trick we used to use at the U of Utah in the 1990s. I have no idea what its origins are. Do this:<br /><br />R =approx= L(lambda)<br /><br />where lambda is a random wavelength in [600,700]nm<br /><br />Do the same for G, B with random wavelengths in [500,600] and [400,500] respectively.<br /><br />When you hit an RGB texture or material, just assume that it's a piecewise constant spectrum with the same spectral regions as above. If you have a formula or real spectral data (for example, Rayleigh scattering or an approximation to the refractive index of a prism) then use that.<br /><br />This will have wildly bad behavior in the worst case. But in practice I have always found it to work well. As an empirical test in an NVIDIA project I tested it on a simple case, the <a href="http://www.babelcolor.com/colorchecker.htm">Macbeth Color Checker</a> spectra under flat white light. 
Here's the full spectral rendering using the real spectral curves of the checker and XYZ->RGB conversion and all that done "right":<br /><br /><b id="docs-internal-guid-a40abe80-2f5a-3aba-4ded-3c8b58d6b7db" style="font-weight: normal;"><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><img alt="xyz.png" height="400" src="https://lh4.googleusercontent.com/c1bzfBAZ7i8wlBwcl6chbXOfkaW1Yj6rXUpYMr_wI4DVw6hme8PVC9esn5Fj81KzPjyjCcbGYTnZztt-Z7JnhoqFkvDyT3syuZ85_ajrZtHSdLhihXSFWUJbS7ii1xtnngKhrDWi" style="-webkit-transform: rotate(0.00rad); border: none; transform: rotate(0.00rad);" width="600" /></span></b><br /><br /> And here it is with the hack using just 3 piecewise-constant spectra for the colors and the RGB integrals above.<br /><br /><b id="docs-internal-guid-a40abe80-2f5b-4844-e514-6f6bef058d21" style="font-weight: normal;"><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><img alt="rgb.png" height="400" src="https://lh6.googleusercontent.com/QG4MWh5Sn9EZSrBPyfLDkqpkR0yJ1uI2po0CFy_6eyPH80ia0J_JM7wwPxoCVD4tlAvfMz1pITpccR0OrnZb-lrWaaCjhZ7X1M9ze4qVLecg92pe2lGWk3OkHSGWnVjnXTeMlXCJ" style="-webkit-transform: rotate(0.00rad); border: none; transform: rotate(0.00rad);" width="600" /></span></b> <br /><br />That is different, but my belief is that the difference is no bigger than the intrinsic errors in input data, tone mapping, and display variation in 99% of situations. 
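A minimal sketch of the lazy scheme above, with each primary estimated from one random wavelength in its 100 nm band and RGB data treated as a 3-bin piecewise-constant spectrum (the helper names are mine):

```cpp
#include <cassert>
#include <cstdlib>

// Uniform random double in [0,1).
double r01() { return rand() / (RAND_MAX + 1.0); }

// Wavelength for one ray: band 0 = blue [400,500), band 1 = green
// [500,600), band 2 = red [600,700), all in nanometers.
double sample_band(int band) {
    return 400.0 + 100.0 * band + 100.0 * r01();
}

// Treat an RGB triple as a piecewise-constant spectrum with the
// same three bins, and evaluate it at a wavelength.
double spectrum_at(const double rgb[3], double lambda_nm) {
    if (lambda_nm < 500.0) return rgb[2];  // blue bin
    if (lambda_nm < 600.0) return rgb[1];  // green bin
    return rgb[0];                         // red bin
}
```

Anything with a real spectral formula (Rayleigh scattering, dispersion) is simply evaluated at the ray's wavelength instead.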
One nice thing is it's pretty easy to convert an RGB renderer to a spectral renderer this way.<br /><br /><br /><br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com2tag:blogger.com,1999:blog-8350257063773144600.post-6203983354577556672017-04-09T09:29:00.002-07:002017-04-09T09:29:35.880-07:00Email reply on BRDF mathI got some email asking about using BRDFs in a path tracer and thought my reply might be helpful to those learning path tracing.<br /><br />Each ray tracing toolkit does this a little differently. But they all have the same pattern:<br /><br />color = BRDF(random direction) * cosine / pdf(random direction)<br /><br />The complications are:<br /><br /><div>0. That formula comes from Monte Carlo integration, which takes a bit to wrap your mind around.</div><div><br /></div>1. The units of the BRDF are a bit odd, and it's defined as a function over the sphere cross sphere, which is confusing<br /><br />2. pdf() is a function of direction and is somewhat arbitrary, though you get less noise if it is similar in shape to the BRDF.<br /><br />3. Even once you know what pdf() is for a given BRDF, you need to be able to generate random_direction so that it is distributed like pdf<br /><br />Those 4 together are a bit overwhelming. So if you are in this for the long haul, I think you just need to really grind through it all. #0 is best absorbed in 1D first, then 2D, then graduate to the sphere. Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com3tag:blogger.com,1999:blog-8350257063773144600.post-45048990915286521732016-12-28T18:59:00.002-08:002016-12-28T18:59:36.458-08:00Bug in my Schlick codeIn a previous post I talked about <a href="http://psgraphics.blogspot.com/2015/07/debugging-refraction-in-ray-tracer.html">my debugging of refraction </a>code. 
In that ray tracer I was using linear polarization and used these full <a href="https://en.wikipedia.org/wiki/Fresnel_equations">Fresnel equations</a>:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-kAyCotSZ-LI/WGR4PVx0v5I/AAAAAAAACNE/CPfgCH03to4o-tlmaB2wchas2yDnsHlJACLcB/s1600/Screen%2BShot%2B2016-12-28%2Bat%2B7.41.57%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="215" src="https://2.bp.blogspot.com/-kAyCotSZ-LI/WGR4PVx0v5I/AAAAAAAACNE/CPfgCH03to4o-tlmaB2wchas2yDnsHlJACLcB/s400/Screen%2BShot%2B2016-12-28%2Bat%2B7.41.57%2BPM.png" width="400" /></a></div>Ugh those are awful. For this reason and because polarization doesn't matter that much for most appearance, most ray tracers use R = (Rs+Rp)/2. That's a very smooth function and <a href="http://www.labri.fr/index.php?n=Annuaires.Profile&id=Schlick_ID1084917791">Christophe Schlick</a> proposed a <a href="https://en.wikipedia.org/wiki/Schlick's_approximation">nice simple approximation</a> that is quite accurate:<br /><br />R = R0 + (1-R0)(1-cosTheta)^5<br /><br />A key issue is that the Theta is the **larger** angle. For example in my debugging case (drawn with <a href="http://limnu.com/">limnu</a> which has some nice new features that made this easy):<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-xPvgUIAb8aU/WGR5VhV6V_I/AAAAAAAACNI/WZ9iveZ89xUOj9VQ-dKOZpl3gJIg30GkwCLcB/s1600/Screen%2BShot%2B2016-12-28%2Bat%2B7.38.36%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="182" src="https://2.bp.blogspot.com/-xPvgUIAb8aU/WGR5VhV6V_I/AAAAAAAACNI/WZ9iveZ89xUOj9VQ-dKOZpl3gJIg30GkwCLcB/s400/Screen%2BShot%2B2016-12-28%2Bat%2B7.38.36%2BPM.png" width="400" /></a></div>The 45 degree angle is the one to use. This is true on the right and the left-- the reflectivity is symmetric. 
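In code, Schlick's approximation in its standard form, R = R0 + (1-R0)(1-cosTheta)^5 with R0 computed from the two refractive indices, looks like this (variable names are mine):

```cpp
#include <cassert>
#include <cmath>

// Schlick's approximation to the Fresnel reflectance.
// cos_theta is the cosine of the *larger* of the two angles,
// i.e. the angle on the lower-index side of the interface.
double schlick(double cos_theta, double n1, double n2) {
    double r0 = (n1 - n2) / (n1 + n2);
    r0 = r0 * r0;  // reflectance at normal incidence
    return r0 + (1.0 - r0) * std::pow(1.0 - cos_theta, 5.0);
}
```

At normal incidence this reduces to R0 (about 0.04 for air to glass), and at grazing incidence it goes to 1, as the full Fresnel equations do.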
In the case where we only have the 30 degree angle, we need to convert to the other angle by using Snell's Law: Theta = asin(sqrt(2)*sin(30 degrees)).<br /><br />The reason for this post is that I have this wrong in my book <a href="https://www.amazon.com/Ray-Tracing-Weekend-Minibooks-Book-ebook/dp/B01B5AODD8">Ray Tracing in One Weekend</a> :<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-fTMquKFKXvg/WGR6rnBEMpI/AAAAAAAACNY/-74OkZzt9XUgJSU1d0tbheo3bmSIBultQCLcB/s1600/Screen%2BShot%2B2016-12-28%2Bat%2B7.53.03%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="141" src="https://1.bp.blogspot.com/-fTMquKFKXvg/WGR6rnBEMpI/AAAAAAAACNY/-74OkZzt9XUgJSU1d0tbheo3bmSIBultQCLcB/s400/Screen%2BShot%2B2016-12-28%2Bat%2B7.53.03%2BPM.png" width="400" /></a></div><br />Note that the first case (assuming outward normals) is the one on the left where the dot product is the cos(30 degrees). The "correction" is messed up. So why does it "work"? The reflectances are small for most theta, and it will be small for most of the incorrect theta too. Total internal reflection will be right, so the visual differences will be plausible.<br /><br />Thanks to Ali Alwasiti (@vexe666) for spotting my mistake!<br /><br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com1tag:blogger.com,1999:blog-8350257063773144600.post-60534587112404480352016-09-18T07:44:00.004-07:002016-09-18T07:44:52.159-07:00A new programmer's attitude should be like an artist's or musician'sLast year I gave a talk at a <a href="http://www.ccsc.org/northwest/2015/program.html">CS education conference </a>in Seattle called "Drawing Inspiration from the Teaching of Art". That talk was aimed at educators and said that historically CS education was based on that of math and/or physics, and that was a mistake and we should instead base it on art. 
I expected a lot of pushback but many of the attendees had a "duh-- I have been doing that for 20 years" reaction.<br /><br />This short post is aimed at students of CS but pursues the same theme. If you were an art or music student your goal would be to be good at ONE THING in TEN YEARS. For a music student that might be singing, composition, music theory, or playing a particular instrument. Any one of those things is very hard. Your goal would be to find that one thing you resonate with and then keep pushing your skills with lots and lots of practice. Sure you could become competent in the other areas, but your goal is to be a master of one. Similarly as an artist you would want to become great at one thing, be it printmaking, painting, drawing, pottery, sculpture, or art theory. Maybe you become great at two things, but if so you are a Michelangelo-style unicorn and more power to you.<br /><br />Even if you become great at one thing, you become great at it in your own way. For example in painting <a href="http://www.phaidon.com/agenda/art/articles/2015/february/12/what-happened-the-day-sargent-painted-monet/">Monet wanted Sargent to give up using black</a>. It is so good that Sargent didn't do that. <a href="https://upload.wikimedia.org/wikipedia/commons/a/a4/Madame_X_(Madame_Pierre_Gautreau),_John_Singer_Sargent,_1884_(unfree_frame_crop).jpg">This painting</a> with Monet's palette would not be as good. And Monet wouldn't have wanted to do that painting anyway!<br /><br />Computer Science is not exactly art or music, but the underlying issues are the same. First, it is HARD. Never forget that. Don't let some CS prof or brogrammer make you think you suck at it because you think it's hard. Second, you must become master of the tools by both reading/listening and playing with them. Most importantly, find the "medium" and "subject" where you have some talent and where it resonates with you emotionally. 
If you love writing cute UI javascript tools and hate writing C++ graphics code, that doesn't make you a flawed computer scientist. It is a gift in narrowing your search for your technical soul mate. Love Scheme and hate C# or vice-versa. That is not a flaw but again is another productive step on your journey. Finally, if you discover an idiosyncratic methodology that works for you and gets lots of pushback, ignore the pushback. Think van Gogh. But keep the ear-- at the end of the day, CS is more about what works :)<br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com1tag:blogger.com,1999:blog-8350257063773144600.post-61495316900273965582016-07-15T22:08:00.001-07:002016-07-15T22:08:30.650-07:00I do have a bugI questioned whether this was right:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-yrK1bRpzJ5s/V4m-c_7-lvI/AAAAAAAACLE/uOXsqXHJpaA_ZWaU9N7KnM0Hk2jO7abWQCLcB/s1600/testglass.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://1.bp.blogspot.com/-yrK1bRpzJ5s/V4m-c_7-lvI/AAAAAAAACLE/uOXsqXHJpaA_ZWaU9N7KnM0Hk2jO7abWQCLcB/s320/testglass.png" width="320" /></a></div><br /><br />I was concerned about the bright bottom and thought maybe there was a light tunnel effect. 
I looked through my stuff and found a dented cube, and its bottom seemed to show total internal reflection:<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-agDPZvs9VRc/V4m_0vBHSFI/AAAAAAAACLY/GYYhvYPlNg4CWtEsqWT31OKSfewt3M67wCLcB/s1600/FullSizeRender%25282%2529.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://1.bp.blogspot.com/-agDPZvs9VRc/V4m_0vBHSFI/AAAAAAAACLY/GYYhvYPlNg4CWtEsqWT31OKSfewt3M67wCLcB/s320/FullSizeRender%25282%2529.jpg" width="318" /></a></div>The light tunnel effect might be happening and there is a little glow under the cube, but you can't see it through the cube. Elevating it a little does show that:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-bLsEZTnWfXY/V4nBJziT1-I/AAAAAAAACLo/-Mao9gD9QRQnWmjmIpY3qjrJFgvIkDpgwCLcB/s1600/FullSizeRender%25281%2529.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="293" src="https://2.bp.blogspot.com/-bLsEZTnWfXY/V4nBJziT1-I/AAAAAAAACLo/-Mao9gD9QRQnWmjmIpY3qjrJFgvIkDpgwCLcB/s320/FullSizeRender%25281%2529.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"></div>Digging out some old code and adding a cube yielded:<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-kVO0ck0Vt3Q/V4nAtM9R17I/AAAAAAAACLg/5oTCRyVRg9I7sr-xb7VRsDU9JIqHbyhRACLcB/s1600/blocks.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://1.bp.blogspot.com/-kVO0ck0Vt3Q/V4nAtM9R17I/AAAAAAAACLg/5oTCRyVRg9I7sr-xb7VRsDU9JIqHbyhRACLcB/s320/blocks.png" width="320" /></a></div>This is for debugging, so the noise is just because I didn't run to convergence. That does look like total internal reflection on the bottom of the cube, but the back wall is similar to the floor. 
Adding a sphere makes it more obvious:<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-djreGQhJ0yM/V4nBCvWjgMI/AAAAAAAACLk/sj1RNqxS43w27v747l4MF6X7KVi2obD5gCLcB/s1600/sphere.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://2.bp.blogspot.com/-djreGQhJ0yM/V4nBCvWjgMI/AAAAAAAACLk/sj1RNqxS43w27v747l4MF6X7KVi2obD5gCLcB/s320/sphere.png" width="320" /></a></div>Is this right? Probably.<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"></div><br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com5tag:blogger.com,1999:blog-8350257063773144600.post-41981563215626204292016-07-13T07:15:00.002-07:002016-07-13T07:15:26.087-07:00Always a hard question: do I have a bug?In testing some new code involving box-intersection I prepared a Cornell Box with a glass block, and the first question is "is this right?". As usual, I am not sure. Here's the picture (done with a bajillion samples so I won't get fooled by outliers):<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-CHHxFUxhSvQ/V4ZMEvfN89I/AAAAAAAACKk/tckiqdIk8Ts6kIRJepdw0E_h9gQFrvDnQCLcB/s1600/testglass.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://3.bp.blogspot.com/-CHHxFUxhSvQ/V4ZMEvfN89I/AAAAAAAACKk/tckiqdIk8Ts6kIRJepdw0E_h9gQFrvDnQCLcB/s400/testglass.png" width="400" /></a></div><br />It's smooth anyway. The glass is plausible to my eye. The strangest thing is how bright the bottom of the glass block is. Is it right? At first I figured bug. But maybe that prism operates as a light tunnel (like fiber optics) so the bottom is the same color as a diffuse square on the prism top would be. So now I will test that hypothesis somehow (google image search? find a glass block?) 
and if that phenomenon is right and of about the right magnitude, I'll declare victory.Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com0tag:blogger.com,1999:blog-8350257063773144600.post-66991087084395576602016-06-07T19:22:00.000-07:002016-06-07T19:22:45.832-07:00A sale on LimnuThe collaborative white-boarding program I used for my <a href="https://www.amazon.com/Ray-Tracing-Weekend-Minibooks-Book-ebook/dp/B01B5AODD8/ref=zg_bs_3937_2">ray tracing e-books</a> has finished their enterprise team features that your boss will want if you are in a company. It's normally $8 a month per user but if <a href="https://limnu.com/limnu-for-teams-is-here/">you buy in the next week it is $4 per month for a year</a>. Looks like that rate will apply to any users added to your team before or after the deadline as well. I love this program-- try it!Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com0tag:blogger.com,1999:blog-8350257063773144600.post-46775462540856054302016-05-15T09:24:00.001-07:002016-05-15T09:24:17.681-07:00Prototyping video processingI got a prototype of my 360 video project done in Quartz Composer using a custom Core Image filter. I am in love with Quartz Composer and Core Graphics because it is such a nice prototyping environment and because I can stay in 2D for the video. 
Here is the whole program:<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-O6pT0S3cEaM/VzihjUCAELI/AAAAAAAACKE/9DJaTd4WmFAvNf8QTcgWTwAsTiBHz7xCgCLcB/s1600/Screen%2BShot%2B2016-05-15%2Bat%2B10.16.34%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="226" src="https://4.bp.blogspot.com/-O6pT0S3cEaM/VzihjUCAELI/AAAAAAAACKE/9DJaTd4WmFAvNf8QTcgWTwAsTiBHz7xCgCLcB/s400/Screen%2BShot%2B2016-05-15%2Bat%2B10.16.34%2BAM.png" width="400" /> </a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div style="text-align: left;">A cool thing is that I can use a still image for debugging, where I can stick in whatever calibration points I want in Photoshop. Then I just connect the video part and no changes are needed-- the Core Image filter takes an image or video equally happily and the Billboard displays either the same way.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">The filter is pretty simple and is written in approximately GLSL:</div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-s-ot1DAIigo/VzihjsoJR-I/AAAAAAAACKM/3c9o6tSOO90V1vz87LnJSHo5APB_1_xhQCKgB/s1600/Screen%2BShot%2B2016-05-15%2Bat%2B10.17.06%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="208" src="https://4.bp.blogspot.com/-s-ot1DAIigo/VzihjsoJR-I/AAAAAAAACKM/3c9o6tSOO90V1vz87LnJSHo5APB_1_xhQCKgB/s400/Screen%2BShot%2B2016-05-15%2Bat%2B10.17.06%2BAM.png" width="400" /></a></div><div style="text-align: left;"><br /></div><div style="text-align: left;">One thing to be careful about is the return range of atan (GLSL atan is the atan2 we know and love).</div><div style="text-align: left;"><br /></div><div style="text-align: left;">I need to test this with some higher-res equirectangular video.
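On that atan caveat: the two-argument atan (atan2) returns angles in (-pi, pi], so the longitude needs a shift before it can index the texture. A rough C++ sketch of the lookup math (the function and variable names are mine, not the actual filter's, and I assume +z is "up" with v linear in the polar angle, the usual equirectangular-video convention):

```cpp
#include <cassert>
#include <cmath>

// Map a unit-length direction (x, y, z) to equirectangular texture
// coordinates (u, v) in [0,1]^2. These conventions are assumptions,
// not the filter's actual code.
void dirToEquirectUV(double x, double y, double z, double& u, double& v) {
    const double pi = 3.14159265358979323846;
    double phi = std::atan2(y, x);   // in (-pi, pi] -- the range to watch
    u = (phi + pi) / (2.0 * pi);     // shift longitude into [0, 1)
    v = std::acos(z) / pi;           // polar angle, mapped linearly
}
```

Forgetting the shift on phi is exactly the kind of bug that shows up as a half-image seam.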
Preferably with a fixed viewpoint and unmodified time. If anyone can point me to some I would appreciate it.</div><br /><br /><b>What resolution is needed for 360 video?</b> (2016-05-14)<br /><br />I got my basic 360 video viewer working and was not pleased with the resolution. I've realized that people are serious when they say they need very high res. I was skeptical of these claims because I am not that impressed with 4K TVs relative to 2K TVs unless they are huge. So what minimum res do we need? Let's say I have the following 1080p TV (we'll call that 2K to conform to the 4K terminology-- 2K horizontal pixels):<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-xswswqKpvXw/VzdexRmF5OI/AAAAAAAACJw/DUReWt3GV70XzG8MHMoFSF5H36ESzhiAQCKgB/s1600/living_room_room_style_sofa_tv_interior_39259_3840x2160.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="225" src="https://1.bp.blogspot.com/-xswswqKpvXw/VzdexRmF5OI/AAAAAAAACJw/DUReWt3GV70XzG8MHMoFSF5H36ESzhiAQCKgB/s400/living_room_room_style_sofa_tv_interior_39259_3840x2160.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Image from <a href="https://wallpaperscraft.com/">https://wallpaperscraft.com</a></td></tr></tbody></table>If we wanted to tile the wall horizontally with that TV we would need 3-4 of them. For a 360 surround we would need 12-20. Let's call it 10 because we are after approximate minimum res. So that's 20K pixels horizontally to get to "good" surround video. 4K is much more like NTSC.
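The back-of-the-envelope arithmetic above can be spelled out as a sketch (the 36-degree angular width of one TV is my stand-in for the "10 TVs per surround" estimate, not a measured number):

```cpp
#include <cassert>
#include <cmath>

// Horizontal pixels needed to tile a full 360-degree surround at the
// same pixel density as one TV that spans tvAngleDeg of the view.
int surroundWidthPixels(int tvWidthPixels, double tvAngleDeg) {
    return static_cast<int>(std::round(360.0 / tvAngleDeg * tvWidthPixels));
}
```

A 2K TV covering roughly 36 degrees gives surroundWidthPixels(2000, 36.0) = 20000, the ~20K figure above.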
As we know, in some circumstances that is good enough.<br /><br /><a href="https://developers.facebook.com/videos/f8-2016/optimizing-360-video-for-oculus/?pnref=story">Facebook engineers have a nice talk on some of the engineering issues these large numbers imply.</a><br /><br />Edit: <a href="https://twitter.com/renderpipeline/status/731543630872236033">Robert Menzel pointed out on Twitter</a> that the same logic is why 8K does suffice for current HMDs.<br /><br /><b>equirectangular image to spherical coords</b> (2016-05-12)<br /><br />An <a href="https://en.wikipedia.org/wiki/Equirectangular_projection">equirectangular</a> image, popular in <a href="http://support.video-stitch.com/hc/en-us/articles/203657036-What-is-equirectangular-">360 video</a>, is a projection in which areas on the rectangle match the corresponding areas on the sphere. Here it is for the Earth:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-pPcmphNNw5M/VzTJlb-pySI/AAAAAAAACJc/E004_lefdSExXGO3o6Z4yZqPsz02UIA6ACLcB/s1600/Equirectangular_projection_SW.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="201" src="https://1.bp.blogspot.com/-pPcmphNNw5M/VzTJlb-pySI/AAAAAAAACJc/E004_lefdSExXGO3o6Z4yZqPsz02UIA6ACLcB/s400/Equirectangular_projection_SW.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Equirectangular projection (source <a href="https://en.wikipedia.org/wiki/Equirectangular_projection">wikipedia</a>)</td></tr></tbody></table>This projection is much simpler than I would expect.
The area on the unit-radius sphere from theta_1 to theta_2 (I am using the graphics convention that theta is the angle down from the pole) is:<br /><br /><i>area = 2*Pi*integral sin(theta) d_theta = 2*Pi*(cos(theta_1) - cos(theta_2))</i><br /><br />In Cartesian coordinates this is just:<br /><br /><i>area = 2*Pi*(z_1 - z_2)</i><br /><br />So we can just project the sphere points horizontally, parallel to the xy plane, onto the unit-radius cylinder and unwrap it! If we have such an image with texture coordinates (u,v) in [0,1]^2, then<br /><br /><i>phi = 2*Pi*u</i><br /><i>cos(theta) = 2*v - 1</i><br /><br />and the inverse:<br /><br /><i>u = phi / (2*Pi)</i><br /><i>v = (1 + cos(theta)) / 2</i><br /><br />So yes, this projection has singularities at the poles, but it's pretty nice algebraically!<br /><br /><b>spherical to cartesian coords</b> (2016-05-12)<br /><br />This is probably easy to google if I had used the right keywords. Apparently I didn't.
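Before the derivation, here is where we will end up, collected as code (a sketch; theta is measured down from the +z pole as elsewhere in these posts, and the function names are mine):

```cpp
#include <cassert>
#include <cmath>

// Spherical (rho, theta, phi) -> Cartesian (x, y, z),
// with theta the angle down from the +z pole.
void sphericalToCartesian(double rho, double theta, double phi,
                          double& x, double& y, double& z) {
    x = rho * std::cos(phi) * std::sin(theta);
    y = rho * std::sin(phi) * std::sin(theta);
    z = rho * std::cos(theta);
}

// Cartesian -> spherical; atan2 handles the quadrants for phi.
void cartesianToSpherical(double x, double y, double z,
                          double& rho, double& theta, double& phi) {
    rho = std::sqrt(x * x + y * y + z * z);
    theta = std::acos(z / rho);  // cos(theta) = z / rho
    phi = std::atan2(y, x);
}
```

A round trip through both functions should reproduce the original point to floating-point precision, which makes a handy unit test.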
I will derive it here for my own future use.<br /><br />The three formulas I remember learning in the dark ages:<br /><br /><i>x = rho cos(phi) sin(theta)</i><br /><i>y = rho sin(phi) sin(theta)</i><br /><i>z = rho cos(theta)</i><br /><br />We know this from geometry, but we could also square everything and sum it to get:<br /><br /><i>rho = sqrt(x^2 + y^2 + z^2)</i><br /><br />This lets us solve for theta pretty easily:<br /><br /><i>cos(theta) = z / sqrt(x^2 + y^2 + z^2)</i><br /><br />Because sin^2 + cos^2 = 1 we can get:<br /><br /><i>sin(theta) = sqrt(1 - z^2/(x^2 + y^2 + z^2))</i><br /><br />Phi we can also get from geometry using the ever-useful atan2:<br /><br /><i>phi = atan2(y, x)</i><br /><br /><b>Advice sought on 360 video processing SDKs</b> (2016-05-06)<br /><br />For a demo I would like to take some 360 video (panoramic, basically a moving environment map) such as that in this image:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-xbyO9ghdkzk/Vyy61stz0_I/AAAAAAAACJA/n4q2nWFNQU0xaD1jkFi5e766HJkrN_CjwCLcB/s1600/krokus_helicopter_big.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="138" src="https://2.bp.blogspot.com/-xbyO9ghdkzk/Vyy61stz0_I/AAAAAAAACJA/n4q2nWFNQU0xaD1jkFi5e766HJkrN_CjwCLcB/s320/krokus_helicopter_big.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">An image such as you might get as a frame in a 360 video (<a href="http://www.airpano.com/files/krokus_helicopter_big.jpg">http://www.airpano.com/files/krokus_helicopter_big.jpg</a>)</td></tr></tbody></table>And I want to
select a particular convex quad region (a rectangle will do in a pinch):<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-RWrUjTt39cE/Vyy62R0kjoI/AAAAAAAACJI/osACYlrv3AQ1mof8beFobsYmXJde8YQ_QCKgB/s1600/Screen%2BShot%2B2016-05-06%2Bat%2B9.28.58%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="139" src="https://2.bp.blogspot.com/-RWrUjTt39cE/Vyy62R0kjoI/AAAAAAAACJI/osACYlrv3AQ1mof8beFobsYmXJde8YQ_QCKgB/s320/Screen%2BShot%2B2016-05-06%2Bat%2B9.28.58%2BAM.png" width="320" /></a></div><br /><br />And map that to my full screen.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-04Aubf5P4Wk/Vyy61oq7JRI/AAAAAAAACJI/RtfWVnsP1pgwG3FLBS4QMG_kraWhamtTgCKgB/s1600/Screen%2BShot%2B2016-05-06%2Bat%2B9.34.16%2BAM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="269" src="https://1.bp.blogspot.com/-04Aubf5P4Wk/Vyy61oq7JRI/AAAAAAAACJI/RtfWVnsP1pgwG3FLBS4QMG_kraWhamtTgCKgB/s320/Screen%2BShot%2B2016-05-06%2Bat%2B9.34.16%2BAM.png" width="320" /></a></div><br />A canned or live source will do, but if live the camera needs to be cheap. 
MacOS-friendly preferred.<br /><br />I'm guessing there is some terrific infrastructure/SDK that will make this easy, but my google-fu is so far inadequate.<br /><br /><b>Machine learning in one weekend?</b> (2016-05-03)<br /><br />I was excited to see the title of this quora answer: <a href="https://www.quora.com/What-would-be-your-advice-to-a-software-engineer-who-wants-to-learn-machine-learning-3/answer/Alex-Smola-1?srid=3mq4&share=0cdee82c">What would be your advice to a software engineer who wants to learn machine learning?</a> However, I was a bit intimidated by the length of the answer.<br /><br />What I would love to see is <i><b>Machine Learning in One Weekend</b></i>. I cannot write that book; I want to read it! If you are a machine learning person, please write it! If not, send this post to your machine learning friends.<br /><br />For machine learning people: my <a href="http://www.amazon.com/Ray-Tracing-Weekend-Minibooks-Book-ebook/dp/B01B5AODD8">Ray Tracing in One Weekend</a> has done well and people seem to have liked it. It basically finds the sweet spot between a "toy" ray tracer and a "real" ray tracer, and after a weekend people "get" what a ray tracer is, and whether they like it enough to continue in the area. Just keep the real stuff that is easy, skip the worst parts, and use a real language that is used in the discipline. Make the results satisfying in a way that is similar to really working in the field. Please feel free to contact me about details of my experience.
<b>Level of noise in unstratified renderers</b> (2016-04-25)<br /><br />When you get noise in a renderer, a key question, often hard to answer, is: is it a <i>bug or just normal outliers?</i> With an unstratified renderer, which I often favor, the math is more straightforward. Don Mitchell has a <a href="http://mentallandscape.com/Papers_siggraph96.pdf">nice paper on the convergence rates of stratified sampling</a>, which are better than the inverse-square-root rate of unstratified sampling.<br /><br />In a brute force ray tracer it is often true that a ray either gets the color of the light <i>L</i>, or a zero because it is terminated in some Russian Roulette. Because we average the <i>N</i> samples, the actual computation looks something like:<br /><br /><i>Color = (0 + 0 + 0 + L + 0 + 0 + 0 + 0 + L + .... + 0 + L + 0 + 0) / N</i><br /><br />Note that this assumes Russian Roulette rather than downweighting. With downweighting there are more non-zeros and they are things like <i>R*R'*L</i>. Note this assumes <i>Color</i> is a float, so pretend it's a grey scene or think of just one component of RGB.<br /><br />The expected color is just <i>pL</i> where <i>p</i> is the probability of hitting the light. There will be noise because sometimes luck makes you miss the light a lot or hit it a lot.<br /><br />The standard statistical measure of error is <i>variance</i>. This is the average squared error. Variance is used partially because it is meaningful in some important ways, but largely because it has a great math property:<br /><br /><i>The variance of a sum of two independent random quantities is the sum of the variances of the individual quantities</i><br /><br />We will get to what is a good intuitive error measure later. For now let's look at the variance of our <i>"zero or L"</i> renderer.
For that we can use the definition of variance:<br /><br /><i>the expected (average) value of the squared deviation from the mean</i><br /><br />Or in math notation (where the average or expected value of a variable <i>X</i> is <i>E(X)</i>):<br /><br /><i>variance(Color) = E[ (Color - E(Color))^2 ]</i><br /><br />That is mildly awkward to compute, so we can use the most commonly used and super convenient variance identity:<br /><br /><i>variance(X) = E(X^2) - (E(X))^2</i><br /><br />For a single sample we know <i>E(Color) = pL</i>. We also know that <i>E(Color^2) = pL^2</i>, so:<br /><br /><i>variance(Color) = pL^2 - (pL)^2 = p(1-p)L^2</i><br /><br />So what is the variance of <i>N</i> samples (<i>N</i> is the number of rays we average)?<br /><br />First, the sum is a bunch of these independent, identically distributed samples, so the variance is just the sum of the individual variances:<br /><br /><i>variance(Sum) = Np(1-p)L^2</i><br /><br />But we don't sum the colors of the individual rays-- we average them by dividing by <i>N</i>. Because variance is about the square of the error, we can use the identity:<br /><br /><i>variance(X / constant) = variance(X) / constant^2</i><br /><br />So for our actual estimate of pixel color we get:<br /><br /><i>variance(Color) = (p(1-p)L^2) / N</i><br /><br />This gives a pretty good approximation to squared error. But humans are more sensitive to contrast, and we can get close to that with the relative square-root-of-variance. Trying to get closer to intuitive absolute error is common in many fields, and the square-root-of-variance is called standard deviation. Not exactly expected absolute error, but close enough and much easier to calculate.
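As a numerical sanity check of the <i>p(1-p)L^2 / N</i> result, here is a quick brute-force simulation sketch (the values of p, L, N, the seed, and the trial count are arbitrary test choices, not from any real scene):

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Estimate the variance of the N-sample average of a "zero or L"
// estimator by brute force, for comparison with p*(1-p)*L*L / N.
double simulatedVariance(double p, double L, int N, int trials) {
    std::mt19937 rng(12345);                 // fixed seed for repeatability
    std::bernoulli_distribution hitLight(p); // each ray hits with prob p
    double sum = 0.0, sumSq = 0.0;
    for (int t = 0; t < trials; ++t) {
        double color = 0.0;
        for (int i = 0; i < N; ++i)
            if (hitLight(rng)) color += L;
        color /= N;                          // the pixel's averaged color
        sum += color;
        sumSq += color * color;
    }
    double mean = sum / trials;
    return sumSq / trials - mean * mean;     // E(X^2) - (E(X))^2
}
```

With p = 0.1, L = 1, N = 100 the closed form gives 0.0009, and the simulated value should agree to within sampling error.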
Let's divide by <i>E(Color)</i> to get our approximation to relative error:<br /><br /><i>relative_error(Color)</i> is approximately <i>Q = sqrt((p(1-p)L^2) / N) / (pL)</i><br /><br />We can do a little algebra to get:<br /><br /><i>Q = sqrt((p(1-p)L^2) / (p^2 L^2 N)) = sqrt((1-p) / (pN))</i><br /><br />If we assume a bright light, then <i>p</i> is small, and<br /><br /><i>Q is approximately sqrt(1/(pN))</i><br /><br />So the perceived error for a given <i>N</i> (<i>N</i> is the same for a given image) ought to be approximately proportional to the inverse square root of pixel brightness, so we ought to see more noise in the darks.<br /><br />If we look at an almost-converged brute force Cornell box we'd expect the dark areas to look a bit noisier than the bright ones. Maybe we do. What do you think?<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-CzT2gF8cyqE/Vx6AIE0AYjI/AAAAAAAACIk/tlqNKKmrMjooRu1RIfoSjNu4NsSj5pdIwCLcB/s1600/boxnoise.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://1.bp.blogspot.com/-CzT2gF8cyqE/Vx6AIE0AYjI/AAAAAAAACIk/tlqNKKmrMjooRu1RIfoSjNu4NsSj5pdIwCLcB/s400/boxnoise.png" width="400" /></a></div><br /><br /><b>Debugging by sweeping under rug</b> (2016-04-03)<br /><br />Somebody already found several errors in my <a href="http://www.amazon.com/Ray-Tracing-Rest-Your-Minibooks-ebook/dp/B01DN58P8C/ref=sr_1_5?ie=UTF8&qid=1459719260&sr=8-5&keywords=peter+shirley">new minibook</a> (still free until Apr 5 2016).
There are some pesky black pixels in the final images.<br /><br />All Monte Carlo ray tracers have this as their main loop:<br /><br />pixel_color = average(many many samples)<br /><br />If you find yourself getting some form of acne in the images, and this acne is white or black, so that one "bad" sample seems to kill the whole pixel, that sample is probably a huge number or a NaN. This particular acne is probably a NaN. Mine seems to come up once in every 10-100 million rays or so.<br /><br />So, big decision: chase the bug down, or just sweep it under the rug by checking for NaNs, killing them, and hoping this doesn't come back to bite us later. I will always opt for the lazy strategy, especially when I know floating point is hard.<br /><br />So I added this:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-1ogUYO6ApzE/VwGNl3rhMzI/AAAAAAAACH8/mCWYUb17TCEOg--t1afsZTUdSUY6A6eRg/s1600/Screen%2BShot%2B2016-04-03%2Bat%2B3.39.14%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="135" src="https://3.bp.blogspot.com/-1ogUYO6ApzE/VwGNl3rhMzI/AAAAAAAACH8/mCWYUb17TCEOg--t1afsZTUdSUY6A6eRg/s400/Screen%2BShot%2B2016-04-03%2Bat%2B3.39.14%2BPM.png" width="400" /></a></div><br />There may be some isNaN() function supported in standard C-- I don't know. But in the spirit of laziness I didn't look it up. I like to chase these with low-res images because I can see the bugs more easily. It doesn't really make it faster-- you need to run enough total rays to randomly trip the bug.
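For the record, C99 does provide an isnan macro in &lt;math.h&gt;, and C++11 has std::isnan in &lt;cmath&gt;; the sweep-under-the-rug check amounts to something like this (a sketch, not the book's actual code):

```cpp
#include <cassert>
#include <cmath>

// Replace a NaN sample with zero so one bad ray can't poison the
// whole pixel average. NaN is the only value for which x != x is
// true, so the self-comparison trick also works without <cmath>.
inline double deNaN(double x) {
    return std::isnan(x) ? 0.0 : x;
}
```

Applying deNaN to each sample before it goes into the running average is enough to keep the black-pixel acne out of the final image.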
This worked (for now!):<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-q2-GWG9iNQM/VwGN8Oi4IHI/AAAAAAAACIA/L8sLvH8l4uE1idfUrgcfcElxN1d9oaRcg/s1600/bug.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="100" src="https://3.bp.blogspot.com/-q2-GWG9iNQM/VwGN8Oi4IHI/AAAAAAAACIA/L8sLvH8l4uE1idfUrgcfcElxN1d9oaRcg/s400/bug.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Left: 50x50 image with 10k samples per pixel (not enough for the bug). Middle: 100k samples per pixel. Right: with the NaN check.</td></tr></tbody></table><br /><br />Now if you are skeptical you will note that by increasing the number of samples 10X I went from 0 bugs to 20+ bugs. But I won't think about the possibly troublesome implications of that. MISSION ACCOMPLISHED!