tag:blogger.com,1999:blog-83502570637731446002024-03-13T03:04:23.402-07:00Pete Shirley's Graphics BlogPeter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.comBlogger253125tag:blogger.com,1999:blog-8350257063773144600.post-67569167894896366352023-12-12T09:34:00.000-08:002023-12-12T09:38:00.597-08:00Working old software is underappreciated<div style="text-align: justify;"><span style="color: #fff2cc;"><br /></span></div><p style="text-align: left;"><span style="color: #fff2cc;"> </span></p><p id="docs-internal-guid-7400ac4d-7fff-0871-8c4d-50801e2e2a1a" style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">Working old software is underappreciated</span></p><span style="color: #fff2cc;"><br /></span><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">A decision all programmers have faced is whether or not to replace old working software. </span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">There are usually good reasons to replace, especially extensibility. 
Old code is brittle </span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">and often contains parts nobody understands. Sometimes it contains inefficiencies for modern</span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> hardware and you wouldn’t design it that way now.</span></p><span style="color: #fff2cc;"><br /></span><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">My experience is that most programmers err on the side of rewriting when they shouldn’t.</span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">And this is not a small effect: they err a lot in that direction. I am no exception to this. 
</span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> I think there are many reasons for this systematic error, but one is programmer personality…</span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> we like to create. Better to build a new creative building than to paint an old one that is boring. </span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">But I think the big reason is we tend to underrate the best quality of old software: it works.</span></p><span style="color: #fff2cc;"><br /></span><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">A key invisible thing about old software is that it has survived many battles </span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; 
vertical-align: baseline; white-space: pre;">nobody remembers and is empirically robust to many situations. </span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> The key is that old software has survived under selection pressure. </span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">As this famous Ape 2.0 has shown, this is VERY powerful:</span></p><span style="color: #fff2cc;"><br /></span><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: center;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"><span style="border: medium; display: inline-block; height: 501px; overflow: hidden; width: 373px;"><img height="501" src="https://lh7-us.googleusercontent.com/dyJ9edEjqWFvFFrfcveOqfWaZGnBYa2Mlz548lvV54_yTmWDY8MjhBirRzMndYZDgVeT9CClLTCkc0t1QhET_NRHgGOjMILWXWzMaIv4fPQRR3ZObZd3uk94jyVxsr_eDT-uWPmR1fBU9f0xw1lVjHw" style="margin-left: 0px; margin-top: 0px;" width="373" /></span></span></p><span style="color: #fff2cc;"><br /><br /></span><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: 
pre;"> If you start from scratch, it may take years to develop that intrinsic robustness. </span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">Being more wary of deleting old code is a cousin of this maxim:</span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> </span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> </span><span style="color: #fff2cc;"><i><span face="Arial, sans-serif" style="background-color: transparent; font-size: 12pt; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">“Don't ever take a fence down until you know the reason it was put up” (Chesterton)</span></i></span></p><span style="color: #fff2cc;"><br /></span><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">Let’s turn to an Oryx and Crake style genetic engineering problem: suppose you had the DNA </span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; 
text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">skills to design creatures. Should you improve on this ridiculous, </span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">poorly designed creature (Photo: Samuel Blanc):</span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: center;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"><span style="border: medium; display: inline-block; height: 328px; overflow: hidden; width: 304px;"><img height="328" src="https://lh7-us.googleusercontent.com/f1BOnWtSKr-DEOAyQKXKs3NCAr5eydM6d-zbqVRrzIaPtDrzA_aOZ9B2bhWbtWSUrduruk9W0Gsp1CpdIvQMSYjBA9IlW78BYqI37zuSm2PDRHwu9QdWemEUSWrsYyQ75kB762MLePZmDNJyzXojbLc" style="margin-left: 0px; margin-top: 0px;" width="304" /></span></span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">I mean it has wings but can’t fly?! Some users may need that feature. 
</span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> It swims but has feathers that can get wet?! Why not make it like a dolphin? </span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"> Sure that creature might be better than version 1.0, but it might need many </span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">mods as it “dies” in various ways.</span></p><span style="color: #fff2cc;"><br /></span><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">Another poorly designed creature that should be redesigned? 
No brains and all spikes:</span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: center;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"><span style="border: medium; display: inline-block; height: 310px; overflow: hidden; width: 395px;"><img height="310" src="https://lh7-us.googleusercontent.com/zBUVlzZ4SscYnBpotWzJjze-vz9VTVNiCRKEnfnN6lMHyJA03j8_AFVzUKjkm57CLSLEUs738gvRcVvS-EHVzpms5AZJbly9iv56Iog9XnBZl8GcJN2Ry2_8tY9GB30kFWNq2Qh3K4ujcCc1zNb0odk" style="margin-left: 0px; margin-top: 0px;" width="395" /></span></span></p><span style="color: #fff2cc;"><br /></span><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">Make the brain bigger. Make the legs longer. Give it thumbs.</span></p><span style="color: #fff2cc;"><br /></span><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">Having had a hedgehog as a pet, I am the first to admit they are ridiculous. 
</span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">But the hedgehog has needed</span><a href="https://homeandroost.co.uk/blog/the-history-of-hedgehogs/" style="color: #fff2cc; text-decoration: none;"><span face="Arial,sans-serif" style="-webkit-text-decoration-skip: none; background-color: transparent; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration-skip-ink: none; text-decoration: underline; vertical-align: baseline; white-space: pre;"> little revision for 15 million years</span></a><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">!</span></p><span style="color: #fff2cc;"><br /></span><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">While the penguin and hedgehog are both very robust, if they traded continents both </span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">would probably die. 
So sometimes the environment changes enough that starting from </span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">scratch is a good thing. But never underestimate all the hidden talents in an </span></p><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: left;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;">evolved creature or piece of software! Please read more about it in my new book:</span></p><span style="color: #fff2cc;"><br /></span><p style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: center;"><span face="Arial,sans-serif" style="background-color: transparent; color: #fff2cc; font-size: 12pt; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre;"><span style="border: medium; display: inline-block; height: 636px; overflow: hidden; width: 454px;"><img height="450" src="https://lh7-us.googleusercontent.com/T0H7JGqI4fvfTddmQiYphyGB5LLpy01_ejHFZptUm7U3FVDT1X54KbxysaLiNq1git8oGFWYSdOLz8OgSRG1K6OnHZ1jl-dZD5Pha15m58kE6p8i3LLhXmvXk7unt28EGa1TRPSG2SVASHmi6Pvlot4=w321-h450" style="margin-left: 0px; margin-top: 0px;" width="321" /></span></span></p><span style="color: #fff2cc;"><br /><br /></span>Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com0tag:blogger.com,1999:blog-8350257063773144600.post-65667652720614417502022-06-07T08:43:00.001-07:002022-06-07T08:43:12.350-07:00Hello World for Smart Lights<p> </p><p>I got three Philips Hue bulbs and a Philips Hue Bridge 
(basically a hub you need to attach to your router) and tried various Python interfaces to control them. I found one I really like and here is a hello world:</p><p> </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijYev6mkr6y5sOii_tTVqpmE5KLpOHBnRNUraj6UrdaQ7HD0WbFUTtRtPIz5ejYz66zzrzTxcqueNEBRxS5mnjMRj8cls8SfU8fIc_updtWttg2YLnBKIKG8ueQdoaK5hdSYbhBr6THz8AyscZ81UwkMg5HGioWu9sMZrIZpb7ifRJwPq_2h2M8A/s818/Screen%20Shot%202022-06-07%20at%209.28.04%20AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="446" data-original-width="818" height="286" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijYev6mkr6y5sOii_tTVqpmE5KLpOHBnRNUraj6UrdaQ7HD0WbFUTtRtPIz5ejYz66zzrzTxcqueNEBRxS5mnjMRj8cls8SfU8fIc_updtWttg2YLnBKIKG8ueQdoaK5hdSYbhBr6THz8AyscZ81UwkMg5HGioWu9sMZrIZpb7ifRJwPq_2h2M8A/w525-h286/Screen%20Shot%202022-06-07%20at%209.28.04%20AM.png" width="525" /></a></div><br /> <p></p><p>(Yes, you could in theory run that program and control my bridge, but part of the Philips security is that you need to be on the same network as the bridge)</p><p>Each bulb is set using an HSB system that has (256^2, 256, 256) settings, and the program above cycles through the hues -- here are four stops along the way:</p><div class="separator" style="clear: both; text-align: center;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-ZdJsIdESwFYL9BIYQuLn89vtHu2Q4iRLqPme4RTs4YGDI_fqisM52bpW-6wHzL_uqLBcwi43XI0yelMCipOYlqV3vzbd8OTboWVPScUJB1Gn-9OaSTFVPyDaNiYXUHRClwycHxQrUeLluUnbqNHN5_1KMfRttv1I-2v9LtH9J81AwgDnoEZolQ/s3024/Image%20from%20iOS.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="3024" data-original-width="3024" height="120" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-ZdJsIdESwFYL9BIYQuLn89vtHu2Q4iRLqPme4RTs4YGDI_fqisM52bpW-6wHzL_uqLBcwi43XI0yelMCipOYlqV3vzbd8OTboWVPScUJB1Gn-9OaSTFVPyDaNiYXUHRClwycHxQrUeLluUnbqNHN5_1KMfRttv1I-2v9LtH9J81AwgDnoEZolQ/w120-h120/Image%20from%20iOS.jpg" width="120" /> </a><img border="0" data-original-height="3024" data-original-width="3024" height="119" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1JkqNjCx9wLc-6faDY_8QdDz2pWlE2MPfsaK1iddDO2gIEpNEDEmt9XJxNRS6ysKda5pIrqo5CBNtYvZQD7ECp9SNVeTVF1HBKxHSeYIF56gWbnR0lWVAncFTeuWBZN8SWP3LVrjL9bRo639DHrV3eFQP8MaSnUlTMl9wUWYrOnf5lXeA86y3KA/w119-h119/Image%20from%20iOS%20(1).jpg" width="119" /><img border="0" data-original-height="3024" data-original-width="3024" height="120" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmnfTG0VuPnm-u3ZMJoUDCAvLO9uIlJ1qXRHeTeAwqYB3q1XVYLp9vkrOrSOB9iaa16JbmET3XjfpnDTIrZJR1n8XjYiHWSQtQGTAtRrq4K_T9net8-kSzV5C4qbze9nOpQu6M4vO9rrF9fLj3m87ojZbIbjC-glAcVyIImznyfKeWUh4-1e1WFQ/w120-h120/Image%20from%20iOS%20(3).jpg" width="120" /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjRfKuoNhVoOS02pvNOKSpLbSxPS5RQ-JgNreHa2ytATYTT3sjoRAnbniYOOIqGtfkDjEO1RRok3nDJH-qqrSS5VKHvh5H4yu6krs25q3cGZenb5-2OABBatQJMyPrP-Vvfw-JZtKgB68l6rt_yA_cx3lGkyFeQd_ccz23C6EZ0pQ53VuMnBs81w/s3024/Image%20from%20iOS%20(2).jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="3024" data-original-width="3024" height="121" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjRfKuoNhVoOS02pvNOKSpLbSxPS5RQ-JgNreHa2ytATYTT3sjoRAnbniYOOIqGtfkDjEO1RRok3nDJH-qqrSS5VKHvh5H4yu6krs25q3cGZenb5-2OABBatQJMyPrP-Vvfw-JZtKgB68l6rt_yA_cx3lGkyFeQd_ccz23C6EZ0pQ53VuMnBs81w/w121-h121/Image%20from%20iOS%20(2).jpg" width="121" /></a></div> <br /><div class="separator" style="clear: both; text-align: center;"></div></div>I found this to be quite fun and am thinking if I ever teach Intro Programming again I will 
base it on the Internet of Things-- the execution of commands becomes quite concrete!<br /><div class="separator" style="clear: both; text-align: center;"></div><p><br /></p>Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com0tag:blogger.com,1999:blog-8350257063773144600.post-58834290140279359622022-02-01T08:22:00.004-08:002022-02-01T14:59:23.384-08:00Direct Light 3: what goes in the direct light function?<p><a href="http://psgraphics.blogspot.com/2022/01/what-is-direct-lighting-next-event.html">Part 1 of 3</a></p><p><a href="http://psgraphics.blogspot.com/2022/01/direct-light-2-integrating-shadow-rays.html">Part 2 of 3</a><br /></p><p> </p><p> In the previous two posts I discussed why one might want direct lighting as a separate computation. But what goes in that function?</p><p>If we are just sampling a single light, then we need to bite the bullet and do a Monte Carlo integration with actual probability density functions. These functions can either be defined over the area of the light, or the angles the light is seen through. You can dive into all of that later-- it looks intimidating but is mainly like learning anything new with bad terminology and symbols-- just eat the pain :)</p><p>But for now, we can do it the easiest way possible and "just take my word for it". After you have implemented this once, the written descriptions will make more sense.</p><p>First, pick a random point uniformly on the surface of the light. 
Chapter 16 of <a href="http://www.realtimerendering.com/raytracinggems/unofficial_RayTracingGems_v1.9.pdf">this book</a> will show you how to do this for various shapes.</p><p>The magic integral you need to solve, often referred to as the "rendering equation", is this:</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgoeHhj159WKBMoHj6VNfTMc0TtiE7ijWvPYNAqBP1mUMEoF2mqqt0WNXBqLH7E2KHjxP7VU0Lsa0HVN3rHKeCVNO3HUvc95K17I7Ccq0eamy1GtETZ1C4EdMpATpGMVq2qqCQtbVH-KmU4k9FHYuLEtaRxR6VF9yL0dGxF7sHEzPOY3-NbMz5WJQ=s1672" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1622" data-original-width="1672" height="429" src="https://blogger.googleusercontent.com/img/a/AVvXsEgoeHhj159WKBMoHj6VNfTMc0TtiE7ijWvPYNAqBP1mUMEoF2mqqt0WNXBqLH7E2KHjxP7VU0Lsa0HVN3rHKeCVNO3HUvc95K17I7Ccq0eamy1GtETZ1C4EdMpATpGMVq2qqCQtbVH-KmU4k9FHYuLEtaRxR6VF9yL0dGxF7sHEzPOY3-NbMz5WJQ=w443-h429" width="443" /></a></div><br /> The key formula you evaluate is in red. Area is the total area of the light. The "rho" is the BRDF and for a diffuse (matte) surface that is "rho = reflectance / pi". So pick a random point<b> y</b> uniformly on the light, see if <b>x</b> can "see" <b>y</b> by tracing a "shadow" ray from <b>x</b> to <b>y</b> (or vice-versa). That's it! It will be noisy for spherical lights (half the time, <b>y</b> is on the back and so will be shadowed, and the<i> cosb</i> will vary a lot), but it will work well for polygonal lights. Worry about that later-- just get it to work.<p></p><p>What about multiple lights? Three choices:</p><ol style="text-align: left;"><li>do one of these computations for each light and add them</li><li>pick one of the N lights at random, do the computation for that one light, and multiply by N</li><li>implement reservoir sampling like all the cool kids do (it is a new technique and works really well)</li></ol><p>I endorse option 2 as your first step. 
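To make the formula in red concrete, here is a minimal sketch in Python (all names are mine, illustrative only, not from any particular renderer) of a one-sample estimate for a spherical light shining on a diffuse surface:

```python
import math
import random

def sample_on_sphere(center, radius, rng):
    """Uniform random point on a sphere's surface (normalized-Gaussian trick)."""
    while True:
        d = [rng.gauss(0.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in d))
        if n > 1e-8:
            return [center[i] + radius * d[i] / n for i in range(3)]

def direct_light(x, n_x, reflectance, light_center, light_radius,
                 emitted, visible, rng):
    """One-sample estimate of Area * emitted * rho * cos_a * cos_b / dist^2,
    with rho = reflectance/pi (diffuse) and y chosen uniformly on the light."""
    y = sample_on_sphere(light_center, light_radius, rng)
    d = [y[i] - x[i] for i in range(3)]
    dist2 = sum(c * c for c in d)
    w = [c / math.sqrt(dist2) for c in d]             # unit direction x -> y
    n_y = [(y[i] - light_center[i]) / light_radius for i in range(3)]
    cos_a = sum(n_x[i] * w[i] for i in range(3))      # cosine at the surface
    cos_b = -sum(n_y[i] * w[i] for i in range(3))     # cosine at the light
    if cos_a <= 0.0 or cos_b <= 0.0 or not visible(x, y):
        return 0.0                                    # back of the light, or shadowed
    area = 4.0 * math.pi * light_radius * light_radius
    rho = reflectance / math.pi                       # diffuse BRDF
    return area * emitted * rho * cos_a * cos_b / dist2
```

Averaging many of these samples converges to the direct light; option 2 above amounts to picking one of the N lights uniformly, calling this once, and multiplying the result by N.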
</p><p> <br /></p><p><br /></p>Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com4tag:blogger.com,1999:blog-8350257063773144600.post-68364446922920294232022-01-31T13:01:00.005-08:002022-02-01T14:57:55.782-08:00Direct Light 2: integrating shadow rays into a path tracer<p><a href="https://psgraphics.blogspot.com/2022/01/what-is-direct-lighting-next-event.html"><span style="font-family: verdana;">Part 1 of 3</span></a></p><p><span style="font-family: verdana;"><a href="http://psgraphics.blogspot.com/2022/02/direct-light-3-what-goes-in-direct.html">Part 3 of 3</a><br /></span></p><p><span style="font-family: verdana;"> </span></p><p><span style="font-family: verdana;">In my last post, I talked about using shadow rays to sample a light source directly. Here is our old lovely but noisy path tracer:</span></p><div style="text-align: left;"><i><span style="font-family: courier;">vec3 ray_color(ray r)<br /></span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> if (r hits an object)</span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> r = ray(hitpoint, generate a scattered direction from the surface)</span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> return emitted(hitpoint) + reflectance*ray_color(r)<br /></span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> else</span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> return background_color(r)</span></i></div><p><span style="font-family: verdana;">This works great provided the light emitting objects are big, but otherwise we get a lot of noise. 
We also are crossing our fingers we don't get an infinite recursion.</span></p><p><span style="font-family: verdana;"><br /></span></p><p><span style="font-family: verdana;">Let's assume we have a magic function <i><span><b>direct_light(vec3 p, reflectance_info ref).</b></span></i> Can we just do this to the return line above:</span></p><p> <i><span style="font-family: courier;">return emitted(hitpoint) + direct(hitpoint, ref_info) + reflectance*ray_color(r)</span></i></p><p><span style="font-family: verdana;"><span>NOPE! That would double count direct because that scattered ray r might hit the light and get the emitted part. So this is better:</span></span></p><p><span style="font-family: courier;"> <span style="font-family: courier;"> <i>return direct(hitpoint, ref_info) + reflectance*ray_color(r)</i></span></span></p><p><span style="font-family: courier;"><span style="font-family: courier;"> BUT, if you see the light source in the picture, it will be black! </span></span></p><p><span style="font-family: verdana;"><span><span>And before we fix that, another problem is that if the surface is a perfect mirror, the direct lighting itself will be too noisy because only one point on the light matters.</span></span></span></p><p><span style="font-family: verdana;"><span><span>So we need something like a "sees the light" flag on a ray.</span></span></span></p><div style="text-align: left;"><i><span style="font-family: courier;">vec3 ray_color(ray r)<br /></span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> vec3 color = (0, 0, 0)<br /></span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> if (r hits an object)</span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> if (r.should_see_lights)</span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> color += emitted(hitpoint)<br /></span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> r = ray(hitpoint, generate a scattered direction from the surface, should_r_see_lights_flag)</span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> if (r.should_see_lights) <br /></span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> color += reflectance*ray_color(r)</span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> else</span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> color += </span></i><i><span style="font-family: courier;">direct(hitpoint, ref_info) + reflectance*ray_color(r)</span></i><i><span style="font-family: courier;"></span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> else</span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> color += background_color(r)</span></i></div><div style="text-align: left;"><i><span style="font-family: courier;"> return color <br /></span></i></div><p><span style="font-family: verdana;"><span><span>Man that is pretty ugly! But I don't know a much better way. Be sure to set the viewing ray flags to "should_see_lights = true".</span></span></span></p><p><span style="font-family: verdana;"><span><span> Are we done? No. For perfect mirrors, should_see_lights scattered rays should be set true. For diffuse reflectors, false. For glossy objects, it depends on the size of the light source. Here,<a href="https://cseweb.ucsd.edu/~viscomp/classes/cse168/sp21/readings/veach.pdf"> a lovely technique from Eric Veach</a> is often used. I would go with the "balance heuristic"-- it is easiest. 
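As a toy illustration of the balance heuristic (a hedged 1D sketch with made-up pdfs, not renderer code): each sample is weighted by its own pdf's share of the combined pdfs, so neither strategy's bad cases dominate.

```python
import math
import random

def balance_weight(p_this, p_other):
    """Balance heuristic: a sample drawn from pdf p_this gets weight
    p_this / (p_this + p_other)."""
    return p_this / (p_this + p_other)

def mis_integrate(f, n, rng):
    """Estimate the integral of f over [0,1], combining one sample from each
    of two strategies per iteration: p1(x) = 1 (uniform) and p2(x) = 2x."""
    total = 0.0
    for _ in range(n):
        x1 = rng.random()                    # sample from p1(x) = 1
        x2 = math.sqrt(1.0 - rng.random())   # sample from p2(x) = 2x (inverse CDF, never 0)
        total += balance_weight(1.0, 2.0 * x1) * f(x1) / 1.0
        total += balance_weight(2.0 * x2, 1.0) * f(x2) / (2.0 * x2)
    return total / n
```

In a renderer, p1 and p2 would instead be the light-sampling and BRDF-sampling pdfs evaluated for the same scattered direction.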
There is a wonderful figure from the paper that shows why this technique is needed:</span></span></span></p><p><span style="font-family: courier;"><span style="font-family: courier;"></span></span></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgTKf1VbGfduZWL07jK9qdzMpJmzA3LxO-dm7wlyfJn_jnMck5Phm9yPR8lA1I0T0mqp6T9MghsdQj71EDRtmJPkd0nQkFheyllSCJviKX6bp90L_16vKXv_I47UHvclcbT2Mqig0CK15KVxjLWWZY4OMBh3IbyzXiOjilp61LQLhgfuyTZ92Lbtw=s1736" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1466" data-original-width="1736" height="468" src="https://blogger.googleusercontent.com/img/a/AVvXsEgTKf1VbGfduZWL07jK9qdzMpJmzA3LxO-dm7wlyfJn_jnMck5Phm9yPR8lA1I0T0mqp6T9MghsdQj71EDRtmJPkd0nQkFheyllSCJviKX6bp90L_16vKXv_I47UHvclcbT2Mqig0CK15KVxjLWWZY4OMBh3IbyzXiOjilp61LQLhgfuyTZ92Lbtw=w554-h468" width="554" /></a></div> <p></p><p><span style="font-family: verdana;"><span><span>So are we done? No, there is one more issue. Does the background "emit" light? Isn't it a light source? The answer is you can do it either way. But if it is a light source, and you can hit it, we can get rid of the </span></span><span><span><i><span> if (r hits an object)</span></i> branch-- the background is always hit! But if it has infinite radius, where is the hitpoint? 
I would say these design decisions are not obvious and I am on the fence even after trying all of them.</span></span></span></p><p><span style="font-family: verdana;"><span><span>Next time: what goes in here: </span></span><i><span>direct(hitpoint, ref_info)</span></i></span></p><i><span style="font-family: courier;"></span></i><span style="font-family: courier;"><span style="font-family: courier;"><i><span style="font-family: courier;"></span></i></span></span><p><span style="font-family: courier;"><span style="font-family: courier;"><i><span style="font-family: courier;"> </span></i> <br /></span></span></p><p><span style="font-family: courier;"><span style="font-family: courier;"> </span></span></p><p><span style="font-family: courier;"><span style="font-family: courier;"> <br /></span></span></p><p><span style="font-family: courier;"><span style="font-family: courier;"> </span><br /></span></p><p><span style="font-family: courier;"><br /></span></p><p><span style="font-family: courier;"><br /></span></p><p> </p><p><br /></p><p><br /></p><p><br /></p>Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com2tag:blogger.com,1999:blog-8350257063773144600.post-29795187307491539712022-01-30T10:16:00.001-08:002022-02-01T14:54:25.654-08:00What is direct lighting (next event estimation) in a ray tracer? Part 1 of 3<p><a href="https://psgraphics.blogspot.com/2022/01/direct-light-2-integrating-shadow-rays.html">Part 2 of 3</a></p><p><a href="http://psgraphics.blogspot.com/2022/02/direct-light-3-what-goes-in-direct.html">Part 3 of 3</a><br /></p><p>In <a href="https://raytracing.github.io/">Ray Tracing in One Weekend</a>, we took a very brute force and, we hope, intuitive approach that didn't use "shadow rays", or as some of the whipper-snappers say, "next event estimation". 
Instead we scattered rays and they got "direct light" (light that is emitted by a radiant object, hits one surface, and is scattered to the camera) implicitly by having a ray randomly scatter and happen to hit the light source.</p><p>Direct light with shadow rays can make some scenes much more efficient, and 99%+ of production ray tracers use shadow rays. Let's look at a simple scene where the only things in the world are two spheres, one diffusely reflective with reflectivity R, and one that absorbs all light but emits light uniformly in all directions, so it looks white. So if we render just the light source, let's assume it is color RGB = (2,2,2). <i>(for now, assume RGB=(1,1,1) is white on the screen, and anything above (1,1,1) "burns out" to also be white-- this is a tone mapping issue which is its own field of study! But we will just truncate above 1 for now, which is the best tone mapping technique when measured by quality over effort :) ).</i><br /></p><p>To color a pixel in the "brute force path tracer" paradigm, the color of an object is:</p><p>color = reflectivity * weighted_average(color coming into surface)</p><p>The weighted average says not all directions are created equal-- some influence the surface color more than others. 
We can approximate that weighted average above by taking some random rays, weighting them, and averaging them:</p><p><br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhysEQQ-SXUx43IWtgWJvgHTlrnjcutLhu0pS-VkTpcneDjZg1GywC3BLPTecoO70jx3toOhDPU6jmt5cPSI1hFIq_6n-YJRlOcuAPFu13jqRx0lvzj8Y0ElTJ5xo-YNyrVsfo12sZvjYp3LF2KvoEpIsew1SSzNnLQBLbCEYFud5NKeTmKqi7wDg=s1080" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="730" data-original-width="1080" height="312" src="https://blogger.googleusercontent.com/img/a/AVvXsEhysEQQ-SXUx43IWtgWJvgHTlrnjcutLhu0pS-VkTpcneDjZg1GywC3BLPTecoO70jx3toOhDPU6jmt5cPSI1hFIq_6n-YJRlOcuAPFu13jqRx0lvzj8Y0ElTJ5xo-YNyrVsfo12sZvjYp3LF2KvoEpIsew1SSzNnLQBLbCEYFud5NKeTmKqi7wDg=w461-h312" width="461" /></a></div><p></p><p>for the six rays above, the weighted average is </p><p><br /></p><p><i>color = reflectivity*(w1*(0,0,0) + w2*(0,0,0) + w3*(2,2,2) + w4*(2,2,2) + w5*(0,0,0) + w6*(0,0,0)) / (w1+w2+w3+w4+w5+w6)</i></p><p>So <i>reflectivity*(2/3, 2/3, 2/3)</i>, a very light color. If an extra ray had happened to hit, it would be <i>reflectivity*(1,1,1)</i>, and if one more had missed, <i>reflectivity*(1/3, 1/3, 1/3)</i>.</p><p>So we see why there is noise in ray tracing, but also why the bottom of the sphere is black-- no scattered rays can hit the light. </p><p>A key thing about this sort of Monte Carlo is that any distribution of rays can be used, and any weighting functions can be used, and the picture will usually be "reasonable". Surfaces attenuate light (with reflectivity-- their "color") and they preferentially average some directions over others.</p><p>Commonly, we will just make all the weights one, and make the ray directions "appropriate" for the surface type (like for a mirror, where only one direction is chosen). </p><p>The above is a perfectly reasonable way to compute shading from a light. 
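That six-ray weighted average is only a few lines of code. A quick Python sketch (the helper name is mine), using the (2,2,2) light color from the example:

```python
def weighted_average_color(reflectivity, samples, weights):
    # samples: one RGB tuple seen along each scattered ray
    # weights: one weight per ray
    # Returns reflectivity * (sum of w_i * sample_i) / (sum of w_i),
    # the weighted-average formula for shading a point.
    total_w = sum(weights)
    avg = [sum(w * s[k] for w, s in zip(weights, samples)) / total_w
           for k in range(3)]
    return tuple(reflectivity * c for c in avg)
```

With all weights one and two of six rays hitting the (2,2,2) light, this returns reflectivity*(2/3, 2/3, 2/3), matching the numbers above.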
However, it gets very noisy for a small light (or more precisely, a light that subtends a small angle, like our biggest light, the Sun).</p><p>A completely different way to compute direct lighting is to view all those "missed" rays as wasted, and only send rays that don't yield a zero contribution. But to do that, we need to somehow figure out the weighting so that we get the same answer.</p><p>What we do there is just send rays toward the light source, and count the ones that hit (if they hit some other object, like a bird between the shaded sphere and the light, it's a zero).</p><p>So the rays are all random, but also only in directions toward the light. Now the magic formula is</p><p> <i>color = <span style="color: red;"><b>MAGICCONSTANT*</b></span>reflectivity*(w1*(0,0,0) + w2*(0,0,0) + w3*(2,2,2) + w4*(2,2,2) + w5*(0,0,0) + w6*(0,0,0)) / (w1+w2+w3+w4+w5+w6)</i></p><p> What is different is:</p><p>1. what directions are chosen?</p><p>2. what are the weights?</p><p>3. What is the MAGICCONSTANT? // will depend on the geometry for that specific point-- like it will have some distance squared attenuation in it<br /></p><p>The place where we have freedom is 1. Choose some way to sample directions toward the sphere. For example, pick random points on the sphere and send rays to them. Then 2 and 3 will be math. The bad news is that the math is "advanced calculus" and looks intimidating because of the formulas and because 3D actually is confusing. The good news is that that advanced calc can be done methodically. More on that in a future post.</p><p>But great cheat! Just use some common sense for guesses on 2 and 3 (checking the quality of your guesses with the more brute force technique) and your pictures will look pretty reasonable! 
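To make the guessing concrete, here is one common instantiation of that magic constant, sketched in Python for a diffuse surface with rays sent to uniformly chosen points on the light. The geometry term (the two cosines, the light area, and the distance-squared attenuation) and the Lambertian 1/pi are my choices of convention, so treat this as a sketch rather than the formula:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def direct_light_sample(emitted, reflectivity, to_light, dist,
                        n_surface, n_light, light_area, visible):
    # One shadow-ray sample toward a uniformly chosen point on the light.
    # For this strategy the "MAGICCONSTANT" is the geometry term:
    #   cos(theta_surface) * cos(theta_light) * area / distance^2,
    # which comes from converting the uniform 1/area pdf on the light
    # into a pdf over directions. The 1/pi is the Lambertian BRDF factor.
    if not visible:                      # the "bird in the way" case
        return (0.0, 0.0, 0.0)
    cos_s = max(0.0, dot(n_surface, to_light))
    cos_l = max(0.0, dot(n_light, tuple(-x for x in to_light)))
    g = cos_s * cos_l * light_area / (dist * dist)
    return tuple(r * e * g / math.pi for r, e in zip(reflectivity, emitted))
```

For a single fully visible light seen head-on, the geometry term reduces to area/distance², which matches the intuition that far-away or edge-on lights contribute less.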
That is the beauty of weighted averages.<br /></p><p> </p><p><br /></p><p><br /></p><p><br /></p>Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com2tag:blogger.com,1999:blog-8350257063773144600.post-8154714277846324282021-12-30T09:44:00.001-08:002021-12-30T09:44:19.606-08:00What is an "uber shader"?<p> I am no expert on "uber shaders", but I am a fan. These did not make much sense to me until recently. First let's unpack the term a little bit. The term "shader" in graphics has become almost meaningless. To a first approximation it means "function". So a "geometry shader" is a function that modifies geometry. A "pixel shader" is a function that works on pixels. In context those terms might mean something more specific. </p><p>So "uber shader" is a general function? No.</p><p>An uber shader is a very specific kind of function: it is one that evaluates a very particular <a href="https://en.wikipedia.org/wiki/Bidirectional_reflectance_distribution_function">BRDF</a>, usually in the context of a particular rendering API. The fact that it is a BRDF implies this is a "physically based" shader, so is ironically much more restricted than a general shader. The "uber" refers to it being the "only material model you will ever need", and I think for most applications, that is true. The one I have the most familiarity with (the only one I have implemented) is the <a href="https://autodesk.github.io/standard-surface/">Autodesk Standard Surface</a>.</p><p>First let's get a little history. Back in ancient times people would classify surfaces as "matte" or "shiny" and you would call a different function for each type of surface. Every surface would somehow have a name or pointer or whatever to code to call about lighting or rays or whatever. So they had different behavior. 
Here is a typical example of some materials we used in <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.45.1962">our renderer three decades ago</a>:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhjy54DORJ7GkgkGjoUwgUV9Sg1DLQwndul6o4GmrSfRO-mAkBriT6h1XvAspuGmbLoY81uw4sP8r-BR5nJM8FEYoFmiDxvkDo2fQ5KqoHf-BB21Mu42vNDTQcr9Bs5q_8_zjuJTMV5toE9fvNf5yIpWyrovkhdd8-1iloJ8PzsGBGBQ_PEnp19HQ=s1316" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1316" data-original-width="848" height="640" src="https://blogger.googleusercontent.com/img/a/AVvXsEhjy54DORJ7GkgkGjoUwgUV9Sg1DLQwndul6o4GmrSfRO-mAkBriT6h1XvAspuGmbLoY81uw4sP8r-BR5nJM8FEYoFmiDxvkDo2fQ5KqoHf-BB21Mu42vNDTQcr9Bs5q_8_zjuJTMV5toE9fvNf5yIpWyrovkhdd8-1iloJ8PzsGBGBQ_PEnp19HQ=w413-h640" width="413" /></a></div><br /><p>But sometime in the late 1990s some movie studios started making a single shader that encompassed all of these as well as some other effects such as retro-reflection and sheen and subsurface scattering. (I don't know who came up with this idea first, but I think Sing-Choong Foo, one of the BRDF measurement and modeling pioneers that I overlapped with at Cornell, did one at PDI in the late 1990s... this may have been the first... 
please comment if you know anything about the history, which really ought to be documented).</p><p>Here is the Autodesk version's conceptual graph of how the shader is composed:</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEiuaL1GnCaA49k9CG-JPTZ9z0nAQvurNNaB2GktiaWt31ey_VGSALDgm3DB_43lg-MmwWdflz6B3dpsYOPD-gK9-syQBHrDl4menglSUa125jUDrlHyqMwsBo9HSa19w-u7M6KcLAdlecZJRzzQMTaS_yn64GM8NRaRSLQ96EIE7EpEqbFgfPiDJA=s1814" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1424" data-original-width="1814" height="502" src="https://blogger.googleusercontent.com/img/a/AVvXsEiuaL1GnCaA49k9CG-JPTZ9z0nAQvurNNaB2GktiaWt31ey_VGSALDgm3DB_43lg-MmwWdflz6B3dpsYOPD-gK9-syQBHrDl4menglSUa125jUDrlHyqMwsBo9HSa19w-u7M6KcLAdlecZJRzzQMTaS_yn64GM8NRaRSLQ96EIE7EpEqbFgfPiDJA=w640-h502" width="640" /></a></div>So a bunch of different shaders are added in linear combinations, and the weights may be constant or may be functions. This looks a bit daunting. Let's show how you would make a <b>metal</b> (like copper!): First set opacity=1, coat=0, metalness=1. 
This causes most of the graph to be irrelevant:<p></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEiD9QrgiDkCpieD4WlRKeROccKs-3-4E1BoBdaDQf_O9ksSCRSnyvgLgrW__H4iwKjEUpMp1B2CM3M_1cqGucZKHB1fzkUYPmWuAcwOa8sLHSorY18-_3EYeev7lhf4MmAKay9r3V09JdSfKWXCIN-fKpnUQtIKKfvHDp0xdrsqH1Ae5qsfMDhflw=s1814" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="580" data-original-width="1814" height="204" src="https://blogger.googleusercontent.com/img/a/AVvXsEiD9QrgiDkCpieD4WlRKeROccKs-3-4E1BoBdaDQf_O9ksSCRSnyvgLgrW__H4iwKjEUpMp1B2CM3M_1cqGucZKHB1fzkUYPmWuAcwOa8sLHSorY18-_3EYeev7lhf4MmAKay9r3V09JdSfKWXCIN-fKpnUQtIKKfvHDp0xdrsqH1Ae5qsfMDhflw=w640-h204" width="640" /></a></div>Now let's do a <b>diffuse</b> surface. Opacity=1, coat = 0, metalness=0, specular=0,transmission=0,sheen=0,subsurface=0. Phew! Again most of the graph drops away:<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgyyD9UPDzIOYj9DMMDO9FRQR3tyQj9m41-FPQq0sU_McFXzw9wr8nLo60lEml6Om7v6JgJlhQtBYpy0A2kR7Sw__wUEkKpzdqMI1gkctb0N_1EhkqniLElcnJcvp8-fSyJ5cgXZlI1oQ0h2cAxLcQCYfb9VZCaTo6FkEL4ywsLN__N-FeMC_Q-MA=s1814" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1195" data-original-width="1814" height="422" src="https://blogger.googleusercontent.com/img/a/AVvXsEgyyD9UPDzIOYj9DMMDO9FRQR3tyQj9m41-FPQq0sU_McFXzw9wr8nLo60lEml6Om7v6JgJlhQtBYpy0A2kR7Sw__wUEkKpzdqMI1gkctb0N_1EhkqniLElcnJcvp8-fSyJ5cgXZlI1oQ0h2cAxLcQCYfb9VZCaTo6FkEL4ywsLN__N-FeMC_Q-MA=w640-h422" width="640" /></a></div><br /><p>So why has this, for the most part, won out over categorical shaders that are different? Having implemented the above shader along with my colleague and friend Bob Alfieri, I really like it for streamlining software. Here is your shader black box! 
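A toy version of that linear-combination structure in Python (this illustrates the blending idea only; it is not the actual Autodesk graph, and the lobe colors stand in for full BRDF evaluations):

```python
def lerp(a, b, t):
    # Blend two RGB tuples; t may be a constant or a texture lookup.
    return tuple((1.0 - t) * x + t * y for x, y in zip(a, b))

def uber_color(diffuse, metal, coat_color, metalness, coat):
    # Blend the dielectric base toward the metal lobe, then
    # layer the coat on top, as in the conceptual graph above.
    base = lerp(diffuse, metal, metalness)
    return lerp(base, coat_color, coat)
```

Setting metalness=1 and coat=0 collapses everything to the metal lobe, which is the copper case; metalness=0.5 gives a half-metal, half-diffuse blend.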
Further, you can point to the external document and get data in that format. </p><p>But I suspect that is not the only reason uber shaders have taken over. Note that we could have set <i>metalness=0.5 </i>above. So this thing is half copper metal and half pink diffuse. Does that make any sense as a physical material? Probably not. And isn't the whole point of a BRDF to restrict us to physical materials? I think such unphysical combinations serve two purposes:</p><ol style="text-align: left;"><li><b>Artistic expression.</b> We usually use physically-based BRDFs as a guide to keep things plausible and robust. But an artistic production like a game or movie might look better with nonphysical combinations, so why not expose the knobs!</li><li><b>LOD and antialiasing</b>. A pixel or region of an object may cover more than one material. So the final pixel color should account for both BRDFs. Combining them in the shading calculation allows sparser sampling.</li></ol><p>Finally, graphics needs to be fast both in development and production. So the compiler ecosystem is here. I don't know so much about that, which is a credit to the compiler/language people who do :)<br /></p><p><br /></p><p><br /></p>Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com17tag:blogger.com,1999:blog-8350257063773144600.post-64071791672717950342021-08-06T09:50:00.002-07:002021-08-06T09:50:36.112-07:00A call to brute force<p> In the writing of the <a href="https://www.amazon.com/Fundamentals-Computer-Graphics-Steve-Marschner/dp/0367505037/ref=sr_1_3?dchild=1&keywords=steve+marschner&qid=1628268204&sr=8-3">new edition of Marschner's and my graphics text</a>, we tried to add more "basics" on light and material interaction (I don't mean BRDF stuff-- I mean more dielectrics) and more on brute force simulation of light transport. 
In the "how would you maximally brute force a ray tracer" discussion, we wrote this:</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAKgojC3I9s_k1j4XLlLOBuJ7bE0eClbNxPzGYzxSYGAZoaeT39kmxRTsOF8pAqPkkT6yxWkIyAowd0-4Be0oauLnj_WAMz_vFA2C68SNcffP3VXQ9b4YGdPid6vvhYsEa0PQFnbM3FQ/s1646/Screen+Shot+2021-08-06+at+10.29.02+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1022" data-original-width="1646" height="398" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAKgojC3I9s_k1j4XLlLOBuJ7bE0eClbNxPzGYzxSYGAZoaeT39kmxRTsOF8pAqPkkT6yxWkIyAowd0-4Be0oauLnj_WAMz_vFA2C68SNcffP3VXQ9b4YGdPid6vvhYsEa0PQFnbM3FQ/w640-h398/Screen+Shot+2021-08-06+at+10.29.02+AM.png" width="640" /></a></div> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1hSeOrMvYIP-4ric4VPTrfSV_EXdA_eV5EWzBed0p6y3PQ-TsB3iqN9pbjs65hUjhbol-dGipyzjobShACYRxePNhq8SYQ2Aktii09ercGcIxGTWbO9MI9Q9rlILxked6tqjAg_RxdQ/s1574/Screen+Shot+2021-08-06+at+10.29.38+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="808" data-original-width="1574" height="328" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1hSeOrMvYIP-4ric4VPTrfSV_EXdA_eV5EWzBed0p6y3PQ-TsB3iqN9pbjs65hUjhbol-dGipyzjobShACYRxePNhq8SYQ2Aktii09ercGcIxGTWbO9MI9Q9rlILxked6tqjAg_RxdQ/w640-h328/Screen+Shot+2021-08-06+at+10.29.38+AM.png" width="640" /></a></div><br />Basically, it's just a path tracer run with *only* smooth dielectrics and Beer's Law. If you model a scene with all the microgeometry and allow some of it to absorb, you can get colored paint with a rough surface. 
Here are a few figures from my thesis (and it was an old idea then!):<p></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiatrMUUn4zvYS4R-x0kV5cgoCUg35q39o0Y01bhU1z5-bHqVZ1zFx79LpP8kySCG3q7yOffOMmYNTJzoiJfZvfTCOEyTOEXZOfB1N7TWY6Gas84srR2CPdwvMGrKjotI9oaFWVAwxA_A/s1050/Screen+Shot+2021-08-06+at+10.35.36+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="782" data-original-width="1050" height="297" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiatrMUUn4zvYS4R-x0kV5cgoCUg35q39o0Y01bhU1z5-bHqVZ1zFx79LpP8kySCG3q7yOffOMmYNTJzoiJfZvfTCOEyTOEXZOfB1N7TWY6Gas84srR2CPdwvMGrKjotI9oaFWVAwxA_A/w400-h297/Screen+Shot+2021-08-06+at+10.35.36+AM.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtZnk0_KXP_4P8hDNwyDxTL5rqQFEbjrsGcbEB4nbTsL-Rt1yQrvEwMww8o5zjUo1ChWx8K9HS7uyD9L8G5aunbPRvi3LxjxWzdW3aXS6fdd32tP_fSxIE5YLQH4uoGBtu8MGU3M5x-g/s1360/Screen+Shot+2021-08-06+at+10.35.28+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="818" data-original-width="1360" height="384" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtZnk0_KXP_4P8hDNwyDxTL5rqQFEbjrsGcbEB4nbTsL-Rt1yQrvEwMww8o5zjUo1ChWx8K9HS7uyD9L8G5aunbPRvi3LxjxWzdW3aXS6fdd32tP_fSxIE5YLQH4uoGBtu8MGU3M5x-g/w640-h384/Screen+Shot+2021-08-06+at+10.35.28+AM.png" width="640" /></a></div><br /><br />Doing the brute force path tracing is slow and dealing with the micro-particles is slow, so we invent BRDFs and other bulk properties, but that is all for efficiency. 
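Beer's Law itself is one line per color channel. A Python sketch (the absorption coefficients in the test are made-up numbers):

```python
import math

def beer_attenuation(color_in, sigma_a, distance):
    # Beer's Law: light traveling `distance` through an absorbing medium
    # falls off exponentially, independently in each color channel.
    # sigma_a: per-channel absorption coefficients.
    return tuple(c * math.exp(-s * distance)
                 for c, s in zip(color_in, sigma_a))
```

Longer paths absorb more, so the varied path lengths through the microgeometry are what turn a flat absorption coefficient into the look of colored rough paint.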
When we wrote this, which is a classic treatment people use in the classroom all the time, we were thinking it was just for education and for reference solutions (like <a href="http://www.eugenedeon.com/">Eugene d'Eon</a> has done for skin for example), but since Monte Carlo path tracing is almost infinitely parallel, why not do this on a huge crowd-sourced network of computers (in the spirit of <a href="https://foldingathome.org/">folding@home</a>)?<p></p><p></p><p>I am thinking of images of ordinary things whose microgeometry would be easy to model. For example, a lamp shade with a white lining:</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj17xfa9BJQEHfM-BqIpX9u7NDTf91qgCdwPP1T4sPY_5gYH4DxDAzHyWebenUj3dDHHG0rJEjzp2J2Sc71ffLPpWTMYxJdNIlCSFIoXcAixec1uF0tAgEpou_tG8z8OLFzw7mgBUu7mQ/s1104/Screen+Shot+2021-08-06+at+10.44.59+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="738" data-original-width="1104" height="268" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj17xfa9BJQEHfM-BqIpX9u7NDTf91qgCdwPP1T4sPY_5gYH4DxDAzHyWebenUj3dDHHG0rJEjzp2J2Sc71ffLPpWTMYxJdNIlCSFIoXcAixec1uF0tAgEpou_tG8z8OLFzw7mgBUu7mQ/w400-h268/Screen+Shot+2021-08-06+at+10.44.59+AM.png" width="400" /></a></div><br />Another example of something very complex visually that might be modeled procedurally (https://iswallquality.blogspot.com/2016/07/fresh-snow-close-up-wallpaper-hd.html):<p></p><p> </p><p><img id="img" src="https://3.bp.blogspot.com/-CfRBkQQFLUk/V3iYCTNju3I/AAAAAAAAriU/kgrO8KYkV9kBq_nqF-5bnk5Xx3qd_ZMpQCHM/s1600/falling-snow-desktop.jpg" style="height: 427px; width: 766px;" /> <br /></p><p>Or a glass of beer or... 
(on and on).</p><p>So what would be needed:</p><ol style="text-align: left;"><li>some base path tracer with a procedural model we could all install and run</li><li>some centralized job coordination server that doled out random seeds and combined results </li><li>an army of nerds willing to do this with idle cycles rather than coin mining</li></ol><p>I don't have the systems chops to know how to best do this. Anyone? I will take the discussion to twitter! <br /></p><p> </p><p><br /></p><p><br /></p><p><br /><br /> <br /></p><p><br /></p><p><br /></p>Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com2tag:blogger.com,1999:blog-8350257063773144600.post-2607030849520181452021-04-14T10:58:00.000-07:002021-04-14T10:58:26.911-07:00I am not a believer in trying to learn two things at once.<p> I just spent this week learning to program with OpenGL in Python using the fabulous <a href="http://pyopengl.sourceforge.net/">PyOpenGL</a>. I was <b>barely</b> able to do it. I have written a whole textbook on computer graphics and it derives the OpenGL matrix stack and the viewing model conceptually. The OpenGL graphics pipeline model, if I recall accurately, I was also barely able to learn! I think there is a zero percent chance I could learn the ins and outs of calls to and behavior of PyOpenGL and the underlying conceptual model.</p><p>One reason I am pretty sure of that is that I once tried learning TensorFlow, Python, and neural nets. It did not go well. Once I was comfortable with Python and kind of understood neural nets, I tried again (about a year later) with PyTorch. It was not pretty. Finally, I implemented a neural net in C, then one in Python, from scratch. It was painful, but I made it. <b>Barely</b>. 
(<i>And thanks to some advice from Shalini Gupta, Rajesh Sharma, and Hector Lee</i>). Then I used PyTorch and managed to train a network from scratch. Again, <b>barely</b>.</p><p>My empirical experience is that it takes a focused and concerted effort to learn anything really new. And if it were any harder, I don't think I could do it. And if I am trying to learn two things at once (particularly an API and the concepts/algorithms that the API is abstracting for me) then <b>forget it.</b></p><p><b>Conclusion: I<i> should never try to learn two things at once. If you have trouble with that, break it down. It may seem like it takes longer, but nothing is longer than never learning it!</i></b></p><p><b><i>Corollary: </i></b><i>when you see somebody quickly picking up packages and wondering why you don't, maybe you are just like me, or maybe they are really something special. Either way, if you develop competence in a technical area, you are in the most fortunate 0.1% of humans, so take a bow. You need to find a way that works for you.</i><br /></p>Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com12tag:blogger.com,1999:blog-8350257063773144600.post-68782745459579570962020-12-07T15:40:00.001-08:002020-12-07T15:40:42.584-08:00Debugging refraction in a ray tracer<p> Almost everybody has trouble with getting refraction right in a ray tracer. I have a previous blog post on <a href="http://psgraphics.blogspot.com/2015/07/debugging-refraction-in-ray-tracer.html">debugging refraction direction if you support boxes</a>. But usually your first ray tracer just supports spheres, and when the picture just looks wrong and/or is black, what do you do?</p><p>So if you are more disciplined than I usually am, write a visualization tool that shows you in 3D the rays generated for a given pixel and you will often see what the problem is. 
Like this <a href="http://www.rtvtk.org/~cgribble/research/papers/gribble12ray.pdf"> super cool program</a> for example.</p><p>But if you are a stone age person like me:</p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p><img alt="Pebbles Flintstone -- Evangelist!" class="n3VNCb" data-noaft="1" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUc_Sn9PSduEjnSv4CRXyr9HLLKXeOXUN7Jw55VWk9wxerPS2Vj3JD5jwZpBTD-o62KZOPx_jXI7f-fKCJXQZa4isZgtuxQi4wpET_9hfGPehAA5DrWv0gR1Ghbr_GpDVyE9NwIlffjl-u/s320/flintstonecomputer.jpg" style="height: 320px; margin: 0px; width: 198px;" /> </p><p>Then you have exactly two debugging tools: 1) printf() and 2) output various frame buffers. For debugging refraction, I like #1. First, create some model where you know exactly what the behavior of a ray and its descendants should be. Real glass reflects and refracts. Let's get refraction right. So comment out any possibility of reflection. A ray goes in, and refracts (or if that is impossible, prints that).</p><p>Now let's set up the simplest ray and single sphere possible. The one I usually use is this:</p><p> The viewing ray <span style="color: #2b00fe;"><b>A</b></span> from the camera starts at the eye and goes straight along the minus Z axis. 
I assume here it is a unit length vector but it may not be depending on how you generate them.<br /></p><p> </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixaWn0IbKSx8gWgQXSY4zpkkyM4QG-IsBrzlcxxmW4b8v7mXY3jT_GITHPkytdEEogapjPqj3WkFXNDZUR9npc-O8efYBgI4qKmPddkH36eWdFrfB1MDKDj57cqe9fMr56nNYMIU2peQ/s2483/Screen+Shot+2020-12-07+at+4.24.04+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1003" data-original-width="2483" height="258" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixaWn0IbKSx8gWgQXSY4zpkkyM4QG-IsBrzlcxxmW4b8v7mXY3jT_GITHPkytdEEogapjPqj3WkFXNDZUR9npc-O8efYBgI4qKmPddkH36eWdFrfB1MDKDj57cqe9fMr56nNYMIU2peQ/w640-h258/Screen+Shot+2020-12-07+at+4.24.04+PM.png" width="640" /> </a> <br /></div><br />How do you generate that ray? You could hard-code it, or you could instrument your program to take a parameter or command line argument or whatever for which pixel to trace (like -x 250 -y 300 or whatever). If you do that you may need to be careful to get the exact center-- like what are the pixel offsets? That is why I usually just hard code it. 
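For the record, here is a sketch of the intersection math for exactly this hard-coded setup (unit sphere at the origin, eye at (0,0,3), direction (0,0,-1)). At normal incidence the refracted direction equals the incoming direction for any index of refraction, so the expected values are easy to state. The helper name is mine:

```python
def hit_sphere(center, radius, origin, direction):
    # Smallest positive t where origin + t*direction hits the sphere,
    # or None on a miss. The 1e-6 epsilon skips self-intersection.
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - disc ** 0.5) / (2.0 * a)
    if t <= 1e-6:
        t = (-b + disc ** 0.5) / (2.0 * a)
    return t if t > 1e-6 else None
```

From (0,0,3) the first hit is at t=2, i.e. (0,0,1); recursing from there straight down the axis exits at (0,0,-1), which is what the checklist that follows expects.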
Then let the program recurse and make sure that you get:<p></p><p><br /></p><p><span style="color: #2b00fe;"><b>A</b></span> hits at <span style="color: #660000;"><b>Q</b></span> which is point (0,0,1)</p><p>The surface normal vector at Q is (0,0,1)</p><p>The ray is refracted to create <span style="color: #2b00fe;"><b>B</b></span> with origin <span style="color: #660000;"><b>Q</b></span> and direction (0,0,-1)</p><p><span style="color: #2b00fe;"><b>B</b></span> hits at <span style="color: #660000;"><b>R</b></span> which is point (0,0,-1)</p><p>The surface normal at<span style="color: #660000;"><b> R</b></span> is (0,0,-1)</p><p>The ray is refracted to create <span style="color: #2b00fe;"><b>C</b></span> with origin <span style="color: #660000;"><b>R</b></span> and direction (0,0,-1)</p><p>The ray <span style="color: #2b00fe;"><b>C</b></span> goes and computes a color <b>H</b> of the background in that direction. </p><p>The recursion returns that color <b>H</b> all the way to the original caller.</p><p>That will find 99% of bugs in my experience.<br /></p><p><br /></p>Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com5tag:blogger.com,1999:blog-8350257063773144600.post-28795837597364742172020-11-17T10:17:00.000-08:002020-11-17T10:17:12.656-08:00How to write a paper when you suck at writing<p>Most papers are bad and almost unintelligible. Most of my papers are bad and almost unintelligible. The ubiquity of this is evidence that writing good papers must be hard. So what am I to do about this if I suck at writing? This is not false modesty; I do suck at writing. But I am an engineer, so there must be a way to apply KISS here. KISS works. This blog post describes my current methodology to produce a paper. 
My version of the famous Elmore Leonard <a href="https://quoteinvestigator.com/2017/11/14/skip/">quote</a> is:</p><p><i><b>Concentrate on the parts most readers actually look at.</b></i></p><p>Your title, figures, and their captions should work as a stand-alone unit. Write/make these first. Doing this in the form of a Powerpoint presentation is a good method. Explaining your paper to a friend using the white board and then photographing what evolves under Q&A is a good method. Then ruthlessly edit. Ask yourself: what is this figure conveying? Each idea should have exactly one figure, and you should know exactly what each of those ideas is.</p><p>Let's do an example. Suppose I were to write a paper on why it is better to feed your cats in the evening than in the morning. First, you should have a figure on what is a cat, along with a caption.</p><p> </p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="File:Scheme cat anatomy-io.svg - Wikimedia Commons" class="n3VNCb" data-noaft="1" src="https://upload.wikimedia.org/wikipedia/commons/2/20/Scheme_cat_anatomy-io.svg" style="height: 279.742px; margin: 0px auto; width: 542px;" title="A cat" /></td></tr><tr><td class="tr-caption" style="text-align: center;">This is a cat<br /></td></tr></tbody></table><p> <br /></p><p><br /></p><p></p><p></p><p></p><p></p><p></p><p></p><p>Don't assume your reader knows much of anything (see caveat at end of article). Now the figure above has details that are not relevant to the point, so you probably need a different one. 
Getting exactly the right figure is an iterative process.<br /></p><p>If my key reason for not feeding the cat is that in the morning they will wake me earlier each day, I need some figure related to that, such as:</p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="Does Your Cat Wake You Up? - Cat Tree UK" class="n3VNCb" data-noaft="1" src="https://cattree.uk/wp-content/uploads/2017/02/Cat-Waking-Me-Up.jpg" style="height: 330.863px; margin: 0px auto; width: 542px;" /></td></tr><tr><td class="tr-caption" style="text-align: center;">The problem with a 6am feeding, is the cat starts thinking about it before 6am<br /></td></tr></tbody></table><p> <br /></p><p>(credit <a href="https://cattree.uk/does-your-cat-wake-you-up/">cattree</a>)</p><p>Finally, you will need a figure that gets across the idea that feeding the cat in the evening works. </p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><img alt="https://img.buzzfeed.com/buzzfeed-static/static/2014-12/24/13/enhanced/webdr12/enhanced-buzz-20034-1419447316-20.jpg?downsize=600:*&output-format=auto&output-quality=auto" src="https://img.buzzfeed.com/buzzfeed-static/static/2014-12/24/13/enhanced/webdr12/enhanced-buzz-20034-1419447316-20.jpg?downsize=600:*&output-format=auto&output-quality=auto" style="margin-left: auto; margin-right: auto;" /></td></tr><tr><td class="tr-caption" style="text-align: center;">Feeding the cat before bedtime is effective<br /></td></tr></tbody></table><p></p><p>(credit <a href="https://www.buzzfeed.com/annas31/17-kittens-who-understand-your-exhaustion-after-ea-942y?utm_term=.svDkkDg68">buzzfeed</a>)</p><p> </p><p>So you now have an argument sequence where the reader can "get" the point of your paper. 
Your actual paragraphs should make your case more airtight, but 90% of the battle is getting the reader to understand what you are even talking about. "What is this?" should never be a struggle, but usually is.</p><p>For the paper writing itself, write to the figures. First, add paragraphs that speak to the point of the figure and reference it. Then, add paragraphs as needed to make the point convincing. Each paragraph should have a definite purpose, and you should be able to say explicitly what that is. The results section should convince the reader that, for example, cats do in fact behave better when fed in the evening.<br /></p><p>Now, there is a caveat for peer-reviewed papers. Reviewers often think that if something is clear, it must not be academic. They will want you to omit the first figure. This is an example of "you can't fight city hall". But make things as clear as they will let you. If this irritates you, suck it up, and do writing any way you want to in books, blog posts, and emerging forms of expression that you have full control over.</p><p>So, in summary, make the figures the skeleton of the paper and do them first. An important point that applies to more than this topic: develop your own process that works for YOU; mine may or may not be that process for you. Final note-- I feed my cats in the morning.<br /></p><p><br /></p>Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com4tag:blogger.com,1999:blog-8350257063773144600.post-42147538206670072982020-08-19T10:18:00.003-07:002020-08-19T10:18:44.099-07:00Grabbing a web CSV file <br />
I am tutoring an undergrad non-CS person on Python and we tried to graph some online data. This turned out to be pretty painful to get to work just because of versioning and encodings and my own lack of Python experience. But in the end it was super-smooth code because Python packages rule.<br />
<br />
Here it is. The loop over counter is super kludgy. If you use this, improve that ;)<br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">import csv<br />import urllib.request<br />import codecs<br />import matplotlib.pyplot as plt<br /><br />url = 'https://data.london.gov.uk/download/historic-census-population/2c7867e5-3682-4fdd-8b9d-c63e289b92a6/census-historic-population-borough.csv'<br /><br />response = urllib.request.urlopen(url)<br />cr = csv.reader( codecs.iterdecode(response, 'utf-8') )<br /><br />counter=0<br /><br />x = []<br />y = []<br /><br />for row in cr:<br /> if counter == 3:<br /> for j in range(2,len(row)):<br /> x.append(1801+10*j)<br /> y.append(int(row[j]))<br /> plt.plot(x, y)<br /> plt.title(row[1])<br /> plt.xlabel('year')<br /> plt.ylabel('population')<br /> plt.show()<br /> print(row[1])<br /> counter=counter + 1<br /></span><br />
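For what it's worth, here is one way to improve the counter loop: read all the rows and index the one you want. This is just a sketch of the row-selection logic, run on a small made-up in-memory CSV (the borough names and numbers below are invented) so it works without the network fetch:

```python
import csv
import io

# Stand-in for the downloaded file so the logic runs offline; the
# borough names and numbers here are made up for illustration.
data = ("code,name,1801,1811\n"
        "E1,City of London,129000,121000\n"
        "E2,Westminster,158000,162000\n")

rows = list(csv.reader(io.StringIO(data)))
row = rows[2]  # index the row you want instead of counting inside the loop
years = [1801 + 10 * j for j in range(len(row) - 2)]
population = [int(v) for v in row[2:]]
print(row[1], years, population)
```

The plotting calls then work unchanged on `years` and `population`.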
<br />
<br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com0tag:blogger.com,1999:blog-8350257063773144600.post-29890487311806738152020-07-02T09:25:00.000-07:002020-07-02T09:25:17.653-07:00Warping samples paper at rendering symposium<br />
<a href="http://casual-effects.com/research/Hart2020Sampling/index.html">Here is a paper at EGSR</a> being presented tomorrow at the time of this writing.<br />
<br />
It deals with sampling lights. The paper is pretty daunting because it has some curve fitting which always looks complicated and it works on manifolds so there are Jacobians running around. But the basic ideas are less scary than the algebra implies, and they are clean in code. I will cover a key part of it in a simple case.<br />
<br />
If you are doing direct lighting from a rectangular light source, this is the typical Monte Carlo code for shading a point <b>p</b>:<br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">pick point q on light with PDF p(q) = pdf_q</span><br />
<span style="font-family: "Courier New", Courier, monospace;">if (not_shadowed(p, q))</span><br />
<span style="font-family: "Courier New", Courier, monospace;"> d = unit_vector(q-p) </span><br />
<span style="font-family: "Courier New", Courier, monospace;"> light += BRDF * dot(d, n_p) * dot(-d, n_q) * emitted(q) / (distance_squared(p,q) * pdf_q)</span><br />
<br />
Most renderers, or most of mine anyway, pick the point uniformly on the rectangle, so pdf_q = 1/area. There are more sophisticated methods that try to pick the PDF intelligently. This is hard in practice. See the paper for some references to cool work.<br />
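To make the pseudocode concrete, here is a minimal runnable sketch of that estimator in Python, with a toy geometry of my own choosing (a 1x1 light one unit above the shaded point, no shadowing, and BRDF and emitted radiance folded to 1 — none of this is from the paper):

```python
import math
import random

# Toy setup: shaded point p at the origin with normal +z; a 1x1 light in
# the plane z = 1 facing -z.  BRDF, emission, and the shadow test are omitted.
def geometric_term(qx, qy):
    d2 = qx * qx + qy * qy + 1.0    # distance_squared(p, q)
    cos_p = 1.0 / math.sqrt(d2)     # dot(d, n_p), n_p = +z
    cos_q = cos_p                   # dot(-d, n_q); equal by symmetry here
    return cos_p * cos_q / d2

def direct_light(n, rng):
    area = 1.0                      # uniform sampling: pdf_q = 1/area
    total = 0.0
    for _ in range(n):
        qx = rng.uniform(-0.5, 0.5)
        qy = rng.uniform(-0.5, 0.5)
        total += geometric_term(qx, qy) * area   # dividing by pdf_q = multiplying by area
    return total / n
```

The estimate converges to the same number a brute-force quadrature of the integrand gives, which is a handy sanity check when you change the sampling PDF.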
<br />
But we can also just try to make it one or two steps better than a constant PDF. The first obvious candidate is a bilinear PDF, which just computes this weight at each corner:<br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">weight = BRDF * dot(d_corner, n_p) * dot(-d_corner, n_corner) * emitted(corner) / distance_squared(p,corner)</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;">The PDF gets 4 weights and needs to be normalized so it is a PDF, and you need to pick points from it. This is an algebraic book-keeping job. But not bad (see the paper!). The disruption to the code is no big deal. For my particular C++ implementation here is my sampling code (which is a port of David Hart's shadertoy-- see that for the code in action-- this is just to show you it's no big deal)</span><br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">// generate a sample from a bilinear set of weights </span><br />
<span style="font-family: "Courier New", Courier, monospace;">vec2 sample_quad(const vec2& uv, const vec4& w) {<br /> real u = pdf_u(uv.x(), w);<br /> real v = pdf_v(uv.y(), u, w);<br /> return vec2(u,v);<br />} </span><br />
<span style="font-family: "Courier New", Courier, monospace;"><br /></span>
<span style="font-family: "Courier New", Courier, monospace;">// helper functions</span><br />
<span style="font-family: "Courier New", Courier, monospace;">inline real pdf_u(real r, const vec4& w) {<br /> real A = w.z() + w.w() - w.y() - w.x();<br /> real B = 2.*(w.y()+w.x());<br /> real C = -(w.x() + w.y() + w.z() + w.w())*r;<br /> return solve_quadratic(A,B,C);<br />}<br /><br />inline real pdf_v(real r, real u, const vec4& w) {<br /> real A = (1.-u)*w.y() + u*w.w() - (1.-u)*w.x() - u*w.z();<br /> real B = 2.*(1.-u)*w.x() + 2.*u*w.z();<br /> real C = -((1.-u)*w.y() + u*w.w() + (1.-u)*w.x() + u*w.z())*r;<br /> return solve_quadratic(A,B,C);<br />}</span><br />
<span style="font-family: "Courier New", Courier, monospace;"><br /></span>
<span style="font-family: "Courier New", Courier, monospace;">// evaluate the PDF</span><br />
<span style="font-family: inherit;"><span style="font-family: "Courier New", Courier, monospace;">inline real eval_quad_pdf(const vec2& uv, const vec4& w ) {<br /> return 4.0 *<br /> ( w.x() * (1.0 - uv.x()) * (1. - uv.y())<br /> + w.y() * (1.0 - uv.x()) * uv.y()<br /> + w.z() * uv.x() * (1. - uv.y())<br /> + w.w() * uv.x() * uv.y()<br /> ) / (w.x() + w.y() + w.z() + w.w());<br />} </span></span><br />
<br />
<span style="font-family: inherit;">The same basic approach works on triangles and works if you sample in a solid angle space. I hope you try it! </span><br />
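To see there really is no magic here, the same bookkeeping in a standalone Python sketch (my own port with my own names, not code from the paper; the corner order is (u,v) = (0,0), (0,1), (1,0), (1,1) to match the C++ above):

```python
import math

def solve_quadratic(A, B, C):
    # Root of A t^2 + B t + C = 0 lying in [0,1] for this problem
    # (here B >= 0 and C <= 0); written in a numerically stable form.
    if abs(A) < 1e-12:
        return -C / B
    disc = math.sqrt(max(B * B - 4.0 * A * C, 0.0))
    return -2.0 * C / (B + disc)

def sample_bilinear(r1, r2, w):
    # w = (w00, w01, w10, w11): weights at corners (u,v) = (0,0),(0,1),(1,0),(1,1)
    w00, w01, w10, w11 = w
    total = w00 + w01 + w10 + w11
    # invert the marginal CDF in u, then the conditional CDF in v
    u = solve_quadratic((w10 + w11) - (w00 + w01), 2.0 * (w00 + w01), -total * r1)
    v = solve_quadratic((1-u)*w01 + u*w11 - (1-u)*w00 - u*w10,
                        2.0 * ((1-u)*w00 + u*w10),
                        -((1-u)*(w00 + w01) + u*(w10 + w11)) * r2)
    return u, v

def bilinear_pdf(u, v, w):
    w00, w01, w10, w11 = w
    total = w00 + w01 + w10 + w11
    return 4.0 * (w00*(1-u)*(1-v) + w01*(1-u)*v + w10*u*(1-v) + w11*u*v) / total
```

With equal weights it degenerates to uniform sampling with PDF 1, which is an easy sanity check.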
<span style="font-family: inherit;"><br /></span>
<span style="font-family: inherit;"><br /></span>
<br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com0tag:blogger.com,1999:blog-8350257063773144600.post-45082637702823537962020-04-24T16:59:00.001-07:002020-04-24T16:59:51.628-07:00Debugging refraction in a sphere<br />
Here is a setup to help debug refraction using a sphere and a particular ray. I have to track single rays like this <b>every time</b> I implement refraction. Here is the setup:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKEHGBf3U9CEjwzjNQ2E42bLTkF2kizXUrPgJrwUfpE_yeJY8EpQKRfA2xIbvvXcSHmA9rubeYLChyphenhyphenP_eY7jkDY3riKlurJkGW2kKZfRrDSlxowtIZsgVTt3xX6fL8HBxbPH65PEIFZA/s1600/Screen+Shot+2020-04-24+at+4.43.11+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1201" data-original-width="1600" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKEHGBf3U9CEjwzjNQ2E42bLTkF2kizXUrPgJrwUfpE_yeJY8EpQKRfA2xIbvvXcSHmA9rubeYLChyphenhyphenP_eY7jkDY3riKlurJkGW2kKZfRrDSlxowtIZsgVTt3xX6fL8HBxbPH65PEIFZA/s640/Screen+Shot+2020-04-24+at+4.43.11+PM.png" width="640" /></a></div>
So let's put a sphere of radius 1 with center (0,0,0). Now all of the action is in XY so I will leave off the Zs. First, what is <b>A</b>?<br />
<br />
If it is to hit where the surface of the sphere is at 45 degrees, it has to have y component cos(45) = sqrt(2)/2 . So the origin of the ray should be o = (-something, sqrt(2)/2). And v = (1, 0). <b>B</b>, the reflected direction should have origin (-sqrt(2)/2, sqrt(2)/2) and direction (0,1).<br />
<br />
What about <b>C</b>? It has origin (-sqrt(2)/2, sqrt(2)/2) and direction (1+sqrt(2)/2, -sqrt(2)/2) which has a length of about 1.84776. So <b>C </b>= (0.9239, -0.3827) approximately.<br />
<br />
But what refractive index produces such a <b>C</b>? We can see that two of the sides have length 1, so the two angles are the same, and we can see that sin(theta) = (sqrt(2)/2) / 1.8478 = 0.38268. So the refractive index must be the ratio of sin(45) to that: refractive_index = 1.8478.<br />
<br />
So what is <b>D</b>? By symmetry it must leave at 45 degrees so <b>D</b> = (1, -1) / sqrt(2).<br />
<br />
And finally <b>E</b> is <b>C</b> with its x component negated (a reflection across the Y axis), so <b>E </b> = (-0.9239, -0.3827) approximately <br />
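These numbers are easy to check with a few lines of Python. This is a sketch with my own little refract helper (the standard vector form of Snell's law), using the index 1.8478 derived above:

```python
import math

def refract(d, n, eta):
    # d: unit incident direction, n: unit normal on the incident side,
    # eta: n_incident / n_transmitted
    cos_i = -(d[0]*n[0] + d[1]*n[1])
    sin_t2 = eta * eta * (1.0 - cos_i * cos_i)
    if sin_t2 > 1.0:
        return None  # total internal reflection
    k = eta * cos_i - math.sqrt(1.0 - sin_t2)
    return (eta*d[0] + k*n[0], eta*d[1] + k*n[1])

s = math.sqrt(2.0) / 2.0
A = (1.0, 0.0)                        # incident direction hitting at (-s, s)
C = refract(A, (-s, s), 1.0/1.8478)   # entering the glass
D = refract(C, (-1.0, 0.0), 1.8478)   # exiting at (1,0); inward-facing normal
```

Printing C and D reproduces (0.9239, -0.3827) and (1, -1)/sqrt(2) to within rounding.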
<br />
<br />
<br />
<br />
<br />
<br />
<br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com0tag:blogger.com,1999:blog-8350257063773144600.post-77543009317674658272020-03-13T18:31:00.002-07:002020-03-14T20:34:24.768-07:00Fresnel Equations, Schlick Approximation, Metals, and DielectricsIf you look at a renderer you will probably see some references to "Schlick" and "Fresnel" and see some 5th order polynomial equations. This is all about smooth metals and "dielectrics".<br />
<br />
You will notice that the reflectivity of smooth surfaces varies with angle. This is especially easy to see on water:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container"><tbody>
<tr><td style="text-align: center;"><img alt="Three Worlds.jpg" data-file-height="382" data-file-width="261" height="382" src="https://upload.wikimedia.org/wikipedia/en/8/85/Three_Worlds.jpg" style="margin-left: auto; margin-right: auto;" width="261" /></td><td class="tr-caption" style="text-align: center;"></td><td class="tr-caption" style="text-align: center;">Note how the reflectivity increases as the incident angle of the viewing direction to the normal increases. <a href="https://en.m.wikipedia.org/wiki/Three_Worlds_(Escher)">Work by Escher.</a></td></tr>
</tbody></table>
<br />
The fractions of light transmitted and reflected add to one. This is true tracing the light in either direction:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2smcLyKZl8dRC7puHirk0mcIJC81W5tmMyBMPNWhCAbFTYjifPMCVRB-Y8MT_JkTs-qCvk8nPC5uGGGyjpI8uuJVMvkA3sIJOjSa35LcX2R8enyfnYvUVao_6oANnE-ABQDmXmETCHg/s1600/Screen+Shot+2020-03-13+at+5.19.08+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="637" data-original-width="1600" height="254" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2smcLyKZl8dRC7puHirk0mcIJC81W5tmMyBMPNWhCAbFTYjifPMCVRB-Y8MT_JkTs-qCvk8nPC5uGGGyjpI8uuJVMvkA3sIJOjSa35LcX2R8enyfnYvUVao_6oANnE-ABQDmXmETCHg/s640/Screen+Shot+2020-03-13+at+5.19.08+PM.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><b>Left</b>: the light hitting the water divides between reflected and refracted based on angle (for this particular one it is 23% reflected). <b>Right</b>: the eye sees 77% of whatever color comes from below the water and 23% of whatever color comes from above the water.</td></tr>
</tbody></table>
<br />
<br />
So in a ray tracer we would have the color of the ray be<br />
<br />
<b><span style="font-family: "courier new" , "courier" , monospace;">color = 0.77*color(refracted_ray) + 0.23*color(reflected_ray)</span></b><br />
<br />
But how do we compute those reflected and refracted constants?<br />
<br />
These are given by the <a href="https://en.wikipedia.org/wiki/Fresnel_equations">Fresnel Equations</a>. For a non-metal like water, all that you need for them is the <a href="https://en.wikipedia.org/wiki/Refractive_index">refractive index</a> of the material. Back in the dark ages, I was very excited about these equations and used them in my ray tracer. Here they are from that wikipedia page:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyXxdVRRGV-OlTrptMRCxWnuXUS8q5omhlXpFkSV5cgwVYgfKnEbpcSSvADnChL6wUqnFv-2VwG2TbHL5V2cd01kIKBjqSuI7GaYzPPm5mjWbQJpjaGVmJTfrmL9Q8ENLvfaAOMA243A/s1600/Screen+Shot+2020-03-13+at+5.40.46+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="526" data-original-width="1268" height="265" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyXxdVRRGV-OlTrptMRCxWnuXUS8q5omhlXpFkSV5cgwVYgfKnEbpcSSvADnChL6wUqnFv-2VwG2TbHL5V2cd01kIKBjqSuI7GaYzPPm5mjWbQJpjaGVmJTfrmL9Q8ENLvfaAOMA243A/s640/Screen+Shot+2020-03-13+at+5.40.46+PM.png" width="640" /></a></div>
<br />
Here R is the reflectance, theta is the angle to the normal of the incident direction, n1 is the refractive index of the material the incident light is in, and n2 is the refractive index of the material the light is going into. But there are two of them?! One is for one type of polarization, and one is for the other. The "types" are relative to the surface normal. This is a cool topic, but we would need to delve into some pretty serious optics we won't use (though many applications where optical accuracy is needed do, and <a href="https://cgg.mff.cuni.cz/~wilkie/Website/Home.html">Alexander Wilkie </a>has been doing great work in that area). So what if I don't have polarization in my ray tracer? I used to just average them:<br />
<br />
R = (R_s + R_p) /2<br />
<br />
So how do those look? Here is from that same wikipedia page as above:<br />
<br />
<img alt="" class="mw-mmv-final-image svg mw-mmv-dialog-is-open" crossorigin="anonymous" height="532" src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f6/Fresnel_power_air-to-glass.svg/2560px-Fresnel_power_air-to-glass.svg.png" width="640" /><br />
<br />
Well there are some cool things there. Especially that R_p goes to zero for one angle. This is <a href="https://en.wikipedia.org/wiki/Brewster%27s_angle">Brewster's Angle </a>and why I have polarized sun glasses-- so I can get x-ray vision into water! But I won't simulate that in a ray tracer.<br />
<br />
So what does R = (R_s + R_p) /2 look like for the example above? I'll type it into grapher on my mac, where x is theta_i in radians:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhA86HZxv3d9MRFp4bAgirAvZhfG2x5mfNigwL-F9tCo6iIS5L0305BwhLQzIojF6ilk0cQV-GiGS-VXFsw-JgHSxZ7ieyYpJJzZghD_LoiXI0CVkQt0E5gm-BF8OSJsJ2R-n7OIVYh7g/s1600/Screen+Shot+2020-03-13+at+6.48.25+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="764" data-original-width="1564" height="312" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhA86HZxv3d9MRFp4bAgirAvZhfG2x5mfNigwL-F9tCo6iIS5L0305BwhLQzIojF6ilk0cQV-GiGS-VXFsw-JgHSxZ7ieyYpJJzZghD_LoiXI0CVkQt0E5gm-BF8OSJsJ2R-n7OIVYh7g/s640/Screen+Shot+2020-03-13+at+6.48.25+PM.png" width="640" /></a></div>
<br />
<br />
Wow, that is certainly a complicated, expensive function for such a smooth curve. The x axis is in radians so 90 degrees is x = PI/2 = 1.57. So 0 to 1.57 is the interesting part.<br />
<br />
Now I ran that code for years. But Christophe Schlick has some sweet papers in the 1990s about using simple approximations to smooth functions in rendering. I mean why are we solving the approximate equation (we threw away polarization) with exact, expensive equations??! His e<a href="https://en.wikipedia.org/wiki/Schlick%27s_approximation">quation for approximate fresnel reflection</a> was almost immediately adopted by all of us. Here it is from that wikipedia page:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkEZBnqfDJEgxWiugWzanBYaZjBIcVzNQXLgsQ2gGM2PSRGjbA59IzAZRMnGHqY3l3E1-jV6jJjRwXMMZXOQDDuf4boX_ErHDEn4JCfG0ySSkRSbs561iJcERQ0ZQxj7SY05bAeCh_Aw/s1600/Screen+Shot+2020-03-13+at+6.54.35+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="282" data-original-width="1400" height="128" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkEZBnqfDJEgxWiugWzanBYaZjBIcVzNQXLgsQ2gGM2PSRGjbA59IzAZRMnGHqY3l3E1-jV6jJjRwXMMZXOQDDuf4boX_ErHDEn4JCfG0ySSkRSbs561iJcERQ0ZQxj7SY05bAeCh_Aw/s640/Screen+Shot+2020-03-13+at+6.54.35+PM.png" width="640" /></a></div>
That R_0 is the reflectance for theta=0, so light going straight at the surface. That is right where transparent surfaces are most clear. It is just the big fancy equations above which both collapse to that for theta=0. If I graph that too for the n=1.5 I get:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqBdcAsDwNasJrFVHQJpq5ZqXCzVNFnutIyXePDKr9VXnWVXWCu2FGWdo9HTpvjg-xGLaspBySujCV7JKJ0likiNhjCKEqb2NHOJgj6S6v7K5Ulg_PR2lNSELMfHsrixO3po-P0uSvOg/s1600/Screen+Shot+2020-03-13+at+6.58.52+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1044" data-original-width="1600" height="416" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqBdcAsDwNasJrFVHQJpq5ZqXCzVNFnutIyXePDKr9VXnWVXWCu2FGWdo9HTpvjg-xGLaspBySujCV7JKJ0likiNhjCKEqb2NHOJgj6S6v7K5Ulg_PR2lNSELMfHsrixO3po-P0uSvOg/s640/Screen+Shot+2020-03-13+at+6.58.52+PM.png" width="640" /></a></div>
<br />
OK that is close! And a lot easier and cheaper.<br />
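Here is the comparison in a few lines of Python — a sketch, with the exact version being the unpolarized average above and the approximation being Schlick:

```python
import math

def fresnel_unpolarized(cos_i, n1, n2):
    # Exact Fresnel reflectance averaged over s and p polarizations.
    sin_t2 = (n1 / n2) ** 2 * (1.0 - cos_i * cos_i)
    if sin_t2 > 1.0:
        return 1.0  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t2)
    r_s = ((n1*cos_i - n2*cos_t) / (n1*cos_i + n2*cos_t)) ** 2
    r_p = ((n1*cos_t - n2*cos_i) / (n1*cos_t + n2*cos_i)) ** 2
    return 0.5 * (r_s + r_p)

def schlick(cos_i, n1, n2):
    # Schlick's approximation: R0 at normal incidence plus a fifth-power falloff.
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_i) ** 5
```

For n = 1.5 glass both give exactly 0.04 at normal incidence and both go to 1 at grazing, with the two curves tracking closely in between.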
<br />
So where do you get n1 and n2? If one of the surfaces is air use 1.0 for that. From that wikipedia article on refractive index above we have:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLGjdfuhELfUYMTkSlCNESLL0dkWRn5e0uwo7w8oc9CSIy8LeCahHk_PhrpBq03idZHCUNd9svLah6jcpx1cYbYiC6ZZEEe8mZat_QTcafdvHOLiZg9HDm24ku_dNDWhR0XHGCJXxmGw/s1600/Screen+Shot+2020-03-13+at+7.01.20+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="813" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLGjdfuhELfUYMTkSlCNESLL0dkWRn5e0uwo7w8oc9CSIy8LeCahHk_PhrpBq03idZHCUNd9svLah6jcpx1cYbYiC6ZZEEe8mZat_QTcafdvHOLiZg9HDm24ku_dNDWhR0XHGCJXxmGw/s400/Screen+Shot+2020-03-13+at+7.01.20+PM.png" width="202" /></a></div>
<br />
When in doubt, use 1.5.<br />
<br />
Now what happens if you want the rainbows you can see along edges in glass? That is <a href="https://en.wikipedia.org/wiki/Dispersion_(optics)">dispersion</a> and it is a pain in RGB renderers... you probably need more samples than 3 in color and you need to convert to a spectral renderer. This has been done a few times (here is a <a href="http://people.eecs.berkeley.edu/~cecilia77/graphics/a6/">fun one </a>with lovely images) but I wont here!<br />
<br />
Note that this same equation is used not only for transparent surfaces but also for the reflective coat of "diffuse-specular" surfaces. Here is one way to think of these surfaces-- a chunk of glass/plastic/whatever with junk in it:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQiW5aeMLLnDdNqrhr7pg9zTex_AJ4Zj8U0ErVzv2FPCwvUQPbiAr1jNntKWOdqt3iZ0oqE7AOmwxzztQKwRy8e0NR4UXaIv32gkbrGyVHSjLy7LG-ArA0kCYmDN42rJKWrWnu0cUJHw/s1600/Screen+Shot+2020-03-13+at+7.15.15+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="654" data-original-width="782" height="333" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQiW5aeMLLnDdNqrhr7pg9zTex_AJ4Zj8U0ErVzv2FPCwvUQPbiAr1jNntKWOdqt3iZ0oqE7AOmwxzztQKwRy8e0NR4UXaIv32gkbrGyVHSjLy7LG-ArA0kCYmDN42rJKWrWnu0cUJHw/s400/Screen+Shot+2020-03-13+at+7.15.15+PM.png" width="400" /> </a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
Note that the *surface* (specular) component is just like glass-- use the Schlick for that! What refractive index? When in doubt, 1.5 :)<br />
<br />
Now what about metals? Does the above apply? No, they are nothing like the Fig 2.11 above. All the action is at the surface. But the Fresnel equations are more complex-- literally-- the refractive index of metals is <b>complex</b>! This is sometimes written as <i>real refractive index</i> and<i> conductivity </i>(the real and imaginary part).<br />
<br />
Here are the equations:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhP8B1vztuE0922yHJ12N1ZnP0VzF5DvMUApnq5c0uXNcaDDwQfRJa5Z1HlMk6jY14OxcFdoecc3MdSW8PSbxpC_94etAPgGYhAiOg0DrtDJ-fviECPG8JoY79M6R4rMcbjc3fmUhwC_Q/s1600/Screen+Shot+2020-03-13+at+7.21.12+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1002" data-original-width="1600" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhP8B1vztuE0922yHJ12N1ZnP0VzF5DvMUApnq5c0uXNcaDDwQfRJa5Z1HlMk6jY14OxcFdoecc3MdSW8PSbxpC_94etAPgGYhAiOg0DrtDJ-fviECPG8JoY79M6R4rMcbjc3fmUhwC_Q/s640/Screen+Shot+2020-03-13+at+7.21.12+PM.png" width="640" /></a></div>
<br />
Wow. Those are even worse (with k, the conductivity, set to zero, you get the non-conductor ones). So for some applications, this is needed. But not for most graphics programs. I rarely use them. Note that they are real valued. So can we use Fresnel? The answer is <b>yes</b>! So where do I get <i>n</i> and <i>k</i>? I implemented them in the dark ages and tried them for a few metals. Here is one:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUQfKNsEhxvTKefCn1x9p6hEujMXPq2gtTZ1lQ3lp4KtBBDVeY05HHX_puLtd5I5l3nENcVByhapHspOq2SN-migxop5Xid-etv8bCAWhWShOuJ4BKPqJlTjdN7e9-hHoWYeNYSQ0_Iw/s1600/Screen+Shot+2020-03-13+at+8.04.30+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="998" data-original-width="1580" height="404" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUQfKNsEhxvTKefCn1x9p6hEujMXPq2gtTZ1lQ3lp4KtBBDVeY05HHX_puLtd5I5l3nENcVByhapHspOq2SN-migxop5Xid-etv8bCAWhWShOuJ4BKPqJlTjdN7e9-hHoWYeNYSQ0_Iw/s640/Screen+Shot+2020-03-13+at+8.04.30+PM.png" width="640" /></a></div>
<br />
<br />
That looks like we can probably use the Schlick equations but with different base cases for R, G, B. So where do we get the normal incident reflectance? We could get it by evaluating the full equations at normal incidence. But the data isn't widely available and a lot of metals are alloys of unknown composition. So we make R0 up (like grab it from an image-- it's an RGB color, but be careful about gamma!)<br />
<br />
<b><span style="font-family: "courier new" , "courier" , monospace;">vec3 R0 = (guess by looking)</span></b><br />
<b><span style="font-family: "courier new" , "courier" , monospace;">vec3 R = R0 + (1-R0)*pow(1-cosine,5)</span></b><br />
<br />
But with RGB not just floats. This is because the refractive index for metals can vary a lot with wavelength (that is why some metals like gold are colored).<br />
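In code the metal version is just the same Schlick formula applied per channel. A sketch, where the gold R0 below is an eyeballed guess of the kind described above, not measured data:

```python
def schlick_rgb(cos_i, r0):
    # Per-channel Schlick: r0 is the RGB reflectance at normal incidence.
    f = (1.0 - cos_i) ** 5
    return tuple(c + (1.0 - c) * f for c in r0)

gold_r0 = (1.0, 0.71, 0.29)  # guessed-by-looking linear RGB, mind the gamma
```

At normal incidence it returns R0 itself, and at grazing every channel goes to white, which matches how real metals behave.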
<br />
Now most graphics programs have two Schlick Equations running around-- one for dielectrics that is float and set by a refractive index, and one that is RGB and set by an RGB R0. This seems odd at first, because it is, but it makes sense if you go searching for the refractive index of glass, then gold (try it!).<br />
<br />
Naty H writes:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-TOg0w1BWl_7BDVPwJUfeSoW44k-j-SJ3j4i2rupetf74fvwi-qyntCc8foZQXs36B0jK54MRzRgFKdhzUQlLYqhd97lgq-kL3UtgZk20sbqOkbt61kUgeZFrxy7-1RdkxbUGBALTSQ/s1600/Screen+Shot+2020-03-13+at+8.44.55+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="962" height="173" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-TOg0w1BWl_7BDVPwJUfeSoW44k-j-SJ3j4i2rupetf74fvwi-qyntCc8foZQXs36B0jK54MRzRgFKdhzUQlLYqhd97lgq-kL3UtgZk20sbqOkbt61kUgeZFrxy7-1RdkxbUGBALTSQ/s400/Screen+Shot+2020-03-13+at+8.44.55+PM.png" width="400" /></a></div>
I think this is good advice by Naty.<br />
<br />
<i>For the curious</i>-- does light refract into metal? Yes it does, but it gets absorbed quickly. This is why a gold film is transparent if it is thin enough. Here is <a href="https://en.wikipedia.org/wiki/Snell%27s_law">information on refraction for metals</a> (I have never implemented this) using Snell's law: <br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEingVDRlr1C8yYR0XKoNzTu0mg6KSdLlqQ-sKBIcOM-4vUqm48vPEORS_0kruXXqwWIOJ0Z2Uifp0n74uTPEJCaUSUI25e9VVCjbd-PpckRoWTTPnYmZyBwuq-4N_Sa1bbFOvObrkvWmA/s1600/Screen+Shot+2020-03-13+at+8.12.17+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="276" data-original-width="1600" height="110" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEingVDRlr1C8yYR0XKoNzTu0mg6KSdLlqQ-sKBIcOM-4vUqm48vPEORS_0kruXXqwWIOJ0Z2Uifp0n74uTPEJCaUSUI25e9VVCjbd-PpckRoWTTPnYmZyBwuq-4N_Sa1bbFOvObrkvWmA/s640/Screen+Shot+2020-03-13+at+8.12.17+PM.png" width="640" /></a></div>
Thanks to Alain Galvan for pointing out some errors!<br />
Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com0tag:blogger.com,1999:blog-8350257063773144600.post-76659566751283153932019-10-16T19:24:00.002-07:002019-10-16T19:24:56.099-07:00A reader responds<div dir="auto">
Hi Pete,</div>
<div dir="auto">
<br /></div>
<div dir="auto">
I have some nits with your blog post, in particular two points:</div>
<div dir="auto">
<br /></div>
<div dir="auto">
1)
ALAN: I found it ironic that you wrote this while you are currently
learning something new (neural networks...) while I agree you want to
balance learning new things with getting stuff done, I think it’s
important to always learn new things.</div>
<div dir="auto">
<br /></div>
<div dir="auto">
2)
good at one thing vs. multiple. This one struck a chord with me, I
think I’m not that good at any one thing: There are people in graphics
let alone math with way better math chops than me, same for physics / ME
/ etc. There are also people with better low level (optimizing code)
and high level (software architecture) programming skills than me. I
think it is precisely that I am pretty good at multiple different things
that has made me successful - most people that are really good at math
are terrible programmers, and great coders are often bad at math. I
actually think a mixture of skills is super valuable! In games technical
artist, or programmers with an eye for art, are super valuable for this
same reason.</div>
<div dir="auto">
<br /></div>
<div dir="auto">
I think a lot of the advice seemed good and resonated with me, but those two points did not.</div>
Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com0tag:blogger.com,1999:blog-8350257063773144600.post-41013391855798827212019-09-28T17:04:00.000-07:002019-09-28T17:04:21.185-07:00How to succeed as a poor programmerIn a <a href="http://psgraphics.blogspot.com/2016/09/a-new-programmers-attitude-should-be.html">previous post </a>I outlined why programmers should strive to be good at one thing rather than everything.<br />
<br />
In this post I discuss how to be an asset to an organization when you are not a good programmer. First, for the record:<br />
<br />
<b>I am a poor programmer.</b> Ask anyone who has worked on group projects with me.<br />
<br />
But the key compensation is I am aware I am a poor programmer. I do not try to do anything fancy. I follow some heuristics that keep me productive IMO. I think these heuristics are in fact good for all programmers, but are essential for those of us in the bottom quartile.<br />
<br />
<ol>
<li><a href="https://en.wikipedia.org/wiki/KISS_principle"><b>KISS</b></a>. Always ask first something the extreme programming movement advocates: what is the simplest thing that could <i>possibly</i> work? Try that and if it doesn't work, try the second simplest thing.</li>
<li><a href="https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it"><b>YAGNI</b></a>. When it comes to features and generality, "<b>Y</b>ou <b>A</b>in't <b>G</b>onna <b>N</b>eed <b>I</b>t".</li>
<li><b>ALAN. </b><b>A</b>void<b> L</b>earning<b> A</b>nything <b>N</b>ew<b>. </b> Various languages and environments and tools come and go, and each of them requires learning a new set of facts, practices, and a mentality. And they will probably go away. Only learn something new when forced. In 1990 I learned C++. In 2015 I learned Python. Good enough.</li>
<li>Make <b>arrays</b> your goto data structure. Always ask "how would a FORTRAN programmer do this?". During the course of your lifetime, you should occasionally use trees, but view them like binge drinking... regular use will damage your liver.</li>
<li>Never pretend you understand something when you don't, and never bluff. Google first, then ask people for what to read, and finally ask for a tutorial from a colleague. We are all ignorant about almost all things. </li>
<li>Be aware that most coding advice is bad. Think about whether there is empirical evidence that a given piece of advice is true.</li>
<li>Avoid linking to other software unless forced. It empirically rarely goes well. </li>
<li>Try to develop a comfort zone in some area. Mine is geometric C++ code that calls random numbers. Don't try to be an expert in everything or even most things.</li>
</ol>
Finally, let the computer do the work; Dave Kirk talks about the elegance of brute force (I don't know if it is original with him). This is a cousin of KISS. Hardware is fast. Software is hard. Bask in the miracle of creation in software; you make something with <i>behavior</i> out of bits. If you are bad at programming, you are still programming, something that very few people can do.Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com35tag:blogger.com,1999:blog-8350257063773144600.post-23475120989493435512019-06-04T08:41:00.000-07:002019-06-04T08:41:17.975-07:00How bright is that screen? And how good a light is it?I have always wanted a "fishtank VR" fake window that is bright enough to light a room. Every time a new brighter screen comes out I want to know whether it is bright enough.<br />
<br />
Apple <a href="https://www.macrumors.com/2019/06/03/apple-unveils-32-inch-6k-pro-display-xdr/">just announced a boss 6K screen</a>. But what caught my eye was this:<br />
<br />
"...and can maintain 1,000 nits of full-screen brightness indefinitely."<br />
<br />
That is very bright compared to most screens (though not all-- the<a href="http://hdr.sim2.it/hdrproducts/hdr47es6mb"> SIM2 sells one that is about 5X</a> that but it's a small market device at present). How bright is 1000 nits?<br />
<br />
First, let's get some comparisons. "Nits" is a measure of <a href="https://en.wikipedia.org/wiki/Luminance">luminance</a>, which is an objective approximation to "how bright is that". A good measure if I want a VR window is sky brightness. <a href="https://en.wikipedia.org/wiki/Orders_of_magnitude_(luminance)">Wikipedia has an excellent page on this that I grabbed this table from</a>:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyqiQcZ9JhpxUrkYZF3EFFhY5uPis5_-eg4VtKJJD6fdJPwgfT2NJK_4suqx-b8X5elrgT0aGwXDHvGRktD9ugxU6IrHdctCYyXlK1OMjzoYK3n4o-FJZOPk8uF9K1mAxEPU8GsOJkLw/s1600/Screen+Shot+2019-06-04+at+9.30.03+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1070" data-original-width="1554" height="275" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyqiQcZ9JhpxUrkYZF3EFFhY5uPis5_-eg4VtKJJD6fdJPwgfT2NJK_4suqx-b8X5elrgT0aGwXDHvGRktD9ugxU6IrHdctCYyXlK1OMjzoYK3n4o-FJZOPk8uF9K1mAxEPU8GsOJkLw/s400/Screen+Shot+2019-06-04+at+9.30.03+AM.png" width="400" /></a></div>
<br />
<br />
<br />
Note that the Apple monitor is almost at the cloudy day luminance (and the SIM2 is at the clear sky luminance). So we are very close!<br />
<br />
Now as a light source one could read by, the *size* of the screen matters. Both the Apple monitor and the SIM2 monitor are about half a square meter in area. How many *lumens* is the screen? This is how we measure light bulbs.<br />
<br />
<a href="https://www.lowes.com/pd/SYLVANIA-100-Watt-EQ-Bright-White-Light-Fixture-CFL-Light-Bulbs-4-Pack/1000074263?cm_mmc=src-_-c-_-prd-_-lit-_-bing-_-lighting-_-new_dsa_lit_183_ceiling_fans_%26_landscape_lighting-_-ceiling%20fans-_-0-_-0&k_clickID=bi_272638719_1309518539938445_81844947436519_dat-2333644710524915:loc-190_c_66460&msclkid=1c73d660812a19e892728a960e765e6c&utm_source=bing&utm_medium=cpc&utm_campaign=NEW_DSA_LIT_183_Ceiling%20Fans%20%26%20Landscape%20Lighting&utm_term=ceiling%20fans&utm_content=LIT_Ceiling%20Fans_DSA">This 100 Watt EQ bulb is 1600 lumens</a>. So that is a decent target for a screen to read by. So how do we convert a screen of a certain area with a luminance to lumens? As graphics people let's remember that for a diffuse emitter we know wsomething about luminous exitance (luminous flux):<br />
<br />
luminance = luminous flux divided by (Pi times Area)<br />
<br />
So 1000 = lumens / (PI*0.5), so lumens = 1000*PI*0.5, which is about 1600.<br />
<br />
That is almost exactly the 100 watt bulb (1600 lumens). So I think you could read by this Apple screen if it's as close as you would keep a reading lamp. If one of you buys one, let me know if I am right or if I am off by 10X (or whatever) which would not surprise me!<br />
<br />
<br />
<br />
<br />
Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com3tag:blogger.com,1999:blog-8350257063773144600.post-42745166445622013852019-03-12T07:57:00.000-07:002019-03-12T07:57:03.320-07:00Making your BVH faster<br />
I am no expert in making BVHs fast so just use this blog post as a starting point. But there are some things you can try if you want to speed things up. All of them involve tradeoffs so test them rather than assume they are faster! <br />
<br />
<b>1. Store the inverse ray direction with the ray. </b> The most popular ray-bounding box hit test assumes this (used in both the Intel and NVIDIA ray tracers as far as I know):<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisZBqzhl2hODIUQCD7UAuhAMgZE1IQfWFLMhinM6rjvSwIrgxkpQ9Q8Q43CAdYE3e9nEbc96inCwa2h1Kpd2yp3FOzVdiabS-izaOmclEduAJ9xURH4l_G2-ykcjo_0Va_VHXqwCAljQ/s1600/Screen+Shot+2019-03-12+at+8.42.37+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="343" data-original-width="1374" height="156" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisZBqzhl2hODIUQCD7UAuhAMgZE1IQfWFLMhinM6rjvSwIrgxkpQ9Q8Q43CAdYE3e9nEbc96inCwa2h1Kpd2yp3FOzVdiabS-izaOmclEduAJ9xURH4l_G2-ykcjo_0Va_VHXqwCAljQ/s640/Screen+Shot+2019-03-12+at+8.42.37+AM.png" width="640" /></a></div>
<br />
Note that if the ray direction were passed in you would have 6 divides rather than 6 multiplies.<br />
<br />
<b>2. Do an early out in your ray traversal.</b> This is a trick used in many BVHs but not the one I have in the ray tracing minibook series. Martin Lambers suggested this version to me which is not only faster, it is cleaner code.<br />
<br />
<img alt="" class="media-image" data-height="476" data-width="618" height="246" src="https://pbs.twimg.com/media/D1bKEwiVYAARNl4.png:large" style="margin-top: 0px;" width="320" /><br />
<br />
<b>3. Build using the surface area heuristic (SAH). </b> This is a greedy algorithm that minimizes the sum of the areas of the bounding boxes in the level being built. I based mine on the pseudocode in <a href="http://www.cs.utah.edu/~shirley/papers/bvh.pdf">this old paper </a>I did with Ingo Wald and Solomon Boulos. I used simple arrays for the sets and the <a href="https://en.wikipedia.org/wiki/Quicksort#Lomuto_partition_scheme">quicksort from wikipedia</a> for the sort.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtKS9c2namsc7EGXC_dpU-vDNsYpxnRcmk5gMy7_cjI1tHPlB-mmK06zSQuorHLmjw6YlNQGST-97eFoZ3L3TQKSecZtth5nYl2YT0FlPZLTEfYCJri_9bTIwZVnpSE0mqu531Hsog7g/s1600/Screen+Shot+2019-03-12+at+8.53.50+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1384" data-original-width="1370" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtKS9c2namsc7EGXC_dpU-vDNsYpxnRcmk5gMy7_cjI1tHPlB-mmK06zSQuorHLmjw6YlNQGST-97eFoZ3L3TQKSecZtth5nYl2YT0FlPZLTEfYCJri_9bTIwZVnpSE0mqu531Hsog7g/s640/Screen+Shot+2019-03-12+at+8.53.50+AM.png" width="632" /></a></div>
<br />
<br />
<br />
<br />
Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com8tag:blogger.com,1999:blog-8350257063773144600.post-71608472969154621642019-02-17T06:20:00.003-08:002019-02-17T06:20:29.123-08:00Lazy person's tone mappingIn a physically-based renderer, your RGB values are not confined to [0,1] and you need to deal with that somehow.<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">The simplest thing is to clamp them to zero to one. In my own C++ code:</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">inline vec3 vec3::clamp() {<br /> if (e[0] < real(0)) e[0] = 0;<br /> if (e[1] < real(0)) e[1] = 0;<br /> if (e[2] < real(0)) e[2] = 0;<br /> if (e[0] > real(1)) e[0] = 1;<br /> if (e[1] > real(1)) e[1] = 1;<br /> if (e[2] > real(1)) e[2] = 1;<br /> return *this;<br /> }</span></span><br /><br />
A more pleasing result can probably be had by applying a "tone mapping" algorithm. The easiest is probably Eric Reinhard's "L/(1+L)" operator from Equation 3 of <a href="http://www.cs.utah.edu/~reinhard/cdrom/">this paper</a>.<br />
<br />
Here is my implementation of it. You still need to clamp because of highly saturated colors, and purists won't like my luminance formula (1/3, 1/3, 1/3), but never listen to purists :)<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">void reinhard_tone_map(real mid_grey = real(0.2)) {</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">// using even values for luminance. This is more robust than standard NTSC luminance</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">// Reinhard tone mapper is to first map a value that we want to be "mid gray" to 0.2// And then we apply the L = 1/(1+L) formula that controls the values above 1.0 in a graceful manner.<br /> real scale = (real(0.2)/mid_grey);<br /> for (int i = 0; i < nx*ny; i++) {<br /> vec3 temp = scale*vdata[i];<br /> real L = real(1.0/3.0)*(temp[0] + temp[1] + temp[2]);<br /> real multiplier = ONE/(ONE + L);<br /> temp *= multiplier;<br /> temp.clamp();<br /> vdata[i] = temp;<br /> }</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">}</span></span><br />
<br />
This will slightly darken the dark pixels and greatly darken the bright pixels. Equation 4 in the Reinhard paper will give you more control. The cool kids have been using "filmic tone mapping" and it is the best tone mapping I have seen, but I have not implemented it (see the title of this blog post).<br />
<br />
<br />
<br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com2tag:blogger.com,1999:blog-8350257063773144600.post-77077567966972147952019-02-14T06:24:00.001-08:002019-02-15T08:53:22.121-08:00Picking points on the hemisphere with a cosine density<br />
NOTE: this post has three basic problems. It assumes properties 1 and 2 are true, and there is a missing piece at the end that keeps us from showing anything :)<br />
<br />
This post results from a bunch of conversations with Dave Hart and the twitter hive brain. There are several ways to generate a random Lambertian direction from a point with surface normal <b>N</b>. One way is inspired by a <a href="https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=Integral+Geometry+Methods+for+Form+Factor+Computation+&btnG=">cool paper by Sbert and Sandez </a>where they simultaneously generated many form factors by repeatedly selecting a uniformly random 3D line in the scene. This can be used to generate a direction with a cosine density, an <a href="http://amietia.com/lambertnotangent.html">idea first described</a>, as far as I know, by Edd Biddulph.<br />
<br />
I am going to describe it here using three properties, each of which I don't have a concise proof for. Any help appreciated! (I have algebraic proofs-- they just aren't enlightening-- hoping for a clever geometric observation).<br />
<br />
<br />
<span style="color: blue;"><b>Property 1</b></span>: <a href="https://en.wikipedia.org/wiki/View_factor#Nusselt_analog">Nusselt Analog</a>: uniform random points on an equatorial disk projected onto the sphere have a cosine density. So in the figure, the red points, if all of them projected, have a cosine density.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEiuYGxprpIVKQuldfiyKtBR48-8A586OgnvTk_Il6ns5fw0BjGGaMYkBjEzTx52cNDhhAao_2Jx_J9Y6kUPIwLtG3Bx1cbesXL1EN59iLJdkNgURX2GqjrjGm7P95gtUNvjTKpWunGQ/s1600/Screen+Shot+2019-02-14+at+6.13.15+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="538" data-original-width="702" height="153" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEiuYGxprpIVKQuldfiyKtBR48-8A586OgnvTk_Il6ns5fw0BjGGaMYkBjEzTx52cNDhhAao_2Jx_J9Y6kUPIwLtG3Bx1cbesXL1EN59iLJdkNgURX2GqjrjGm7P95gtUNvjTKpWunGQ/s200/Screen+Shot+2019-02-14+at+6.13.15+AM.png" width="200" /> </a></div>
<div class="separator" style="clear: both; text-align: left;">
<span style="color: blue;"><b>Property 2</b></span>:<span style="color: red;"><b> (THIS PROPERTY IS NOT TRUE-- SEE COMMENTS)</b></span> The red points in the diagram above when projected onto the normal, will have a uniform density along it: </div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSFgggCrpKHyloSpyx0iYfuP1CFEogYhoUp-aC-hyqqb9ZyMpI5XlDvAWjd8tPI8y6WBeOUwythYx0_dlpgcVTyjkM_kMGLxUNm80H9zd6TaicOK-mYsaNDjRnP4vEhyphenhyphenfcqcAooVuB-A/s1600/Screen+Shot+2019-02-14+at+6.30.04+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="636" data-original-width="1416" height="143" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSFgggCrpKHyloSpyx0iYfuP1CFEogYhoUp-aC-hyqqb9ZyMpI5XlDvAWjd8tPI8y6WBeOUwythYx0_dlpgcVTyjkM_kMGLxUNm80H9zd6TaicOK-mYsaNDjRnP4vEhyphenhyphenfcqcAooVuB-A/s320/Screen+Shot+2019-02-14+at+6.30.04+AM.png" width="320" /></a></div>
<b><span style="color: blue;">Property 3:</span></b> For random points on a 3D sphere, as shown (badly) below, they when projected onto the central axis will be uniform on the central axis.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1kHIgQT44VCL3n1rWOvYl-kqpXmuMlLZAw4Y4uMctczlyzO7raDeeGcK2MI5VvFwwTEfxIMuqxWYgl3hOYtDY60-tyVKrhSZ3PmW-XMIhtXgdrJFvNIsuLkpyK3vt86-BM46HuOwhTg/s1600/Screen+Shot+2019-02-14+at+6.23.41+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="734" data-original-width="1408" height="166" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1kHIgQT44VCL3n1rWOvYl-kqpXmuMlLZAw4Y4uMctczlyzO7raDeeGcK2MI5VvFwwTEfxIMuqxWYgl3hOYtDY60-tyVKrhSZ3PmW-XMIhtXgdrJFvNIsuLkpyK3vt86-BM46HuOwhTg/s320/Screen+Shot+2019-02-14+at+6.23.41+AM.png" width="320" /></a></div>
<br />
<br />
Now if we accept Property 3, we can generate a random point on a sphere by first choosing a random phi uniformly, phi = 2*PI*urandom(), and then choosing a random height from negative 1 to 1, height = -1 + 2*urandom()<br />
<br />
In XYZ coordinates we can convert this to:<br />
z = -1 + 2*urandom()<br />
phi = 2*PI*urandom()<br />
x = cos(phi)*sqrt(1-z*z)<br />
y = sin(phi)*sqrt(1-z*z)<br />
<br />
Similarly, from<span style="color: blue;"><b> property 2</b></span>, given a random point (x,y) on a unit disk in the XY plane, we can generate a direction with cosine density when<b> N </b>= the z axis:<br />
<br />
(x,y) = random on disk<br />
z = sqrt(1-x*x-y*y)<br />
<br />
To generate a cosine direction relative to a surface normal <b>N,</b> people usually construct a local basis, ANY local basis, with tangent and bitangent vectors <b>B</b> and<b> T</b> and change coordinate frames:<br />
<br />
get_tangents(B,T,N)<br />
(x,y) = random on disk<br />
z = sqrt(1-x*x-y*y)<br />
direction = x*B + y*T + z*N<br />
<br />
There is f<a href="https://r.search.yahoo.com/_ylt=AwrWmjlEdGVcI1IAsVgPxQt.;_ylu=X3oDMTByb2lvbXVuBGNvbG8DZ3ExBHBvcwMxBHZ0aWQDBHNlYwNzcg--/RV=2/RE=1550181572/RO=10/RU=https%3a%2f%2fgraphics.pixar.com%2flibrary%2fOrthonormalB%2fpaper.pdf/RK=2/RS=n0VYXkXI0KUFpNw48SpuMV5K0p8-">inally a compact and robust way to write get_tangents</a>. So use that, and your code is fast and good. <br />
<br />
But can we show that using a uniform random point on the sphere lets us do this without tangents?<br />
<br />
So we do this:<br />
P = (x,y,z) = random_on_unit_sphere<br />
D = unit_vector(N + P)<br />
<br />
So <b>D </b>is the green dot while (<b>N+P</b>) is the red dot:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjANhlSMtXFkTic6loI87XmDydUCnVRtzhg4UmGBGkKDPnsAxlx55h6zyilHaC80iyJvAf43lccNWBltHl4zxXH_sSS4ht9lFDYQpv4n2fOt9IVhJ3EQG2vQEMqnZ_F-GLpPZpqBulzeg/s1600/Screen+Shot+2019-02-14+at+7.19.50+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="771" data-original-width="812" height="189" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjANhlSMtXFkTic6loI87XmDydUCnVRtzhg4UmGBGkKDPnsAxlx55h6zyilHaC80iyJvAf43lccNWBltHl4zxXH_sSS4ht9lFDYQpv4n2fOt9IVhJ3EQG2vQEMqnZ_F-GLpPZpqBulzeg/s200/Screen+Shot+2019-02-14+at+7.19.50+AM.png" width="200" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: left;">
So is there a clever observation that the green dot is either 1) uniform along <b>N, </b>or 2) uniform on the disk when projected?</div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div style="text-align: left;">
<br /></div>
<br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com5tag:blogger.com,1999:blog-8350257063773144600.post-91806245435701945472019-02-12T11:08:00.000-08:002019-02-12T11:08:20.120-08:00Adding point and directional lights to a physically based renderer<br />
Suppose you have a physically based renderer with area lights. Along comes a model with point and directional lights. Perhaps the easiest way to deal with them is to convert them to a very small spherical light source. But how do you do that in a way that gets the color right? <b> IF</b> you have things set up so they tend to be tone mapped (a potential big <b>no</b> in a physically-based renderer), meaning that a color of (1,1,1) will be white, and (0.2,0.2,0.2) a mid-grey (gamma-- so not 0.5-- the eye does not see intensities linearly), then it is not so bad.<br />
<br />
Assume you have a spherical light with radius R at position C and emitted radiance (E,E,E). When it is straight up from a white surface (so the sun at noon at the equator), you get this equation (about) for the color at a point P<br />
<br />
<i>reflected_color = (E,E,E)*solid_angle_of_light / PI</i><br />
<br />
The solid angle of the light is its projected area on the unit sphere around the illuminated point, or approximately:<br />
<br />
<i>solid_angle_of_light = PI*R^2 /distance_squared(P,C)</i><br />
<br />
So<br />
<br />
<i> </i><i>reflected_color = (E,E,E)*(R / distance(P,C))^2</i><br />
<br />
If we want the white surface to be exactly white then<br />
<br />
<span style="color: lime;"><i>E = (distance(P,C) / R)^2</i></span><br />
<br />
So pick a small R (say 0.001), pick a point in your scene (say the one the viewer is looking at, or 0,0,0), and set E as in the <span style="color: lime;">green </span>equation<i><br /></i><br />
<br />
Suppose the RGB "color" given for the point source is<i> (cr, cg, cb). </i> Then just multiply the (E,E,E) by those components.<i> </i><br />
<br />
Directional lights are a similar hack, but a little less prone to problems of whether falloff was intended. A directional light is usually the sun, and it's very far away. Assume the direction is D = (dx,dy,dz)<br />
<br />
Pick a big distance (one that won't break your renderer) and place the center in that direction:<br />
<br />
C = big_constant*D<br />
<br />
Pick a small radius. Again one that won't break your renderer. Then use the <span style="color: lime;">green</span> equation above.<br />
<br />
Now sometimes the directional sources are just the Sun and will look better if you give them the same angular size as the Sun. If your model is in meters, then just use distance = 150,000,000,000m. OK now your program will break due to floating point. Instead pick a somewhat big distance and use the right ratio of Sun radius to distance:<br />
<br />
R = distance *(695,700km / 149,597,870km)<br />
<br />
And your shadows will look ok <br />
<br />
<br />
<br />
<br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com0tag:blogger.com,1999:blog-8350257063773144600.post-66121498707381835492018-10-20T08:33:00.001-07:002018-10-21T20:40:33.453-07:00Flavors of Sampling in Ray TracingMost "ray tracers" these days are what we used to call "path tracers" or "Monte Carlo ray tracers". Because they all have about the same core architecture, this shorthand is taking over.<br />
<br />
These ray tracers have variants but a big one is whether the ray tracing is:<br />
<ul>
<li><b>Batch</b>: run your ray tracer in the background until done (e.g., movie renderers)</li>
<li><b>Realtime</b>: run what you can in 30ms or some other frame time (e.g. game renderers go RTX!)</li>
<li><b>Progressive</b>: just keep updating the image to make it less noisy as you trace more rays (e.g., a lighting program used by an artist in a studio or engineering firm)</li>
</ul>
For batch rendering let's pretend for now we are only getting noise from antialiasing the pixels so we have a fundamentally 2D program. This typically has something like the following for each pixel (i,j):<br />
<br />
pixel_color = (0,0,0)<br />
for s = 0 to N-1<br />
u = random_from_0_to_1()<br />
v = random_from_0_to_1()<br />
pixel_color += sample_color(i+u,j+v)<br />
pixel_color /= N <br />
<br />
That "sample_color()" does whatever it does to sample a font or whatever. The first thing that bites you is the <b>diminishing return </b>associated with Monte Carlo methods: error = constant / sqrt(num_samples). So to halve the error we need 4X the samples. <br />
<br />
Don Mitchell has a <a href="http://mentallandscape.com/Papers_siggraph96.pdf">nice pape</a>r that explains that if you take "stratified" samples, you get better error rates for most functions. In the case of 2D with edges the error = constant / num_samples. That is LOADS better. In graphics a common way to stratify samples is called "jittering" where you usually take a perfect square number of samples in each pixel: N = n*n. This yields the code:<br />
<br />
pixel_color = (0,0,0)<br />
for s = 0 to n-1<br />
for t = 0 to n-1<br />
u = (s + random_from_0_to_1()) / n<br />
v = (t + random_from_0_to_1()) / n<br />
pixel_color += sample_color(i+u,j+v)<br />
pixel_color /= n*n<br />
<br />
Visually the difference in the pattern of the samples is more obvious. There is a <a href="https://blogs.sap.com/2018/02/11/abap-ray-tracer-part-5-the-sample/">very nice blog post </a>on the details of this sampling here, and it includes this nice figure from Suffern's book:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcOq-sUlq6TQjBFIbKI8nKtK3uSHWiyCP7irM4HU2sOuaf_h6IVmbMyS3UdWM-JraqKULIqX-YuN8b_TjWAjQZ2aXk_R8I2NuwdDzI0yyvy_dgd-I5KyiV86HIFs1IEF0SUD6rAoUSog/s1600/Screen+Shot+2018-10-20+at+8.50.00+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="1350" height="212" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcOq-sUlq6TQjBFIbKI8nKtK3uSHWiyCP7irM4HU2sOuaf_h6IVmbMyS3UdWM-JraqKULIqX-YuN8b_TjWAjQZ2aXk_R8I2NuwdDzI0yyvy_dgd-I5KyiV86HIFs1IEF0SUD6rAoUSog/s400/Screen+Shot+2018-10-20+at+8.50.00+AM.png" width="400" /></a></div>
<br />
It turns out that you can replace the random samples not only with jittered sample, but with a regular grid and you will converge to the right answer. But better still you can use quasi-random samples which makes this a quasi-Monte Carlo method (QMC). The code is largely the same! Just replace the (u,v) part above. The theory that justifies it is a bit different, but the key thing is the points need to be "uniform" and not tend to clump anywhere. These QMC methods have been investigated in detail by my friend and coworker <a href="https://research.nvidia.com/person/alex-keller">Alex Keller</a>. The simplest QMC method is, like the jittering, best accomplished if you know N (but it doesn't need to be restricted to a perfect square). A famous and good one is the Hammersley Point set. Here's a picture from <a href="http://mathworld.wolfram.com/HammersleyPointSet.html">Wolfram</a>:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmY71dlb6yqEWEw1KSqMkCgYDvHGGsGA-jS66B9LtVxNHiYQJawKEqyPVya7fR51tgquuZ2VY3fYCa9QFeugLxG4EfSghRzWydCN61NSonu6Rq-2Zc_4rhj2e7R-RgrI3dOhUHh0wiEw/s1600/Screen+Shot+2018-10-20+at+8.58.22+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="242" data-original-width="238" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmY71dlb6yqEWEw1KSqMkCgYDvHGGsGA-jS66B9LtVxNHiYQJawKEqyPVya7fR51tgquuZ2VY3fYCa9QFeugLxG4EfSghRzWydCN61NSonu6Rq-2Zc_4rhj2e7R-RgrI3dOhUHh0wiEw/s1600/Screen+Shot+2018-10-20+at+8.58.22+AM.png" /></a></div>
It's regular and uniform, but not <b>too</b> regular.<br />
<br />
In 2D these sampling patterns, jittering and QMC are <b>way way better </b>than pure random. However, there is no free lunch. In a general path tracer, it is more than a 2D problem. The program might look like this:<br />
<br />
For every pixel<br />
for every sample<br />
pick u,v for screen<br />
pick a time t<br />
pick an a,b on lens<br />
if ray hits something<br />
pick a u',v' for light sampling<br />
pick a u",v" for BRDF sampling<br />
recurse<br />
<br />
If you track your random number generation and you take 3 diffuse bounces with shadow rays that will be nine random numbers. You could think of a ray path through the scene as something that gets generated by a function:<br />
<br />
ray_path get_ray_path(u, u', u", u'", u"", u""', ....)<br />
<br />
So you sample a nine dimensional hypercube randomly and map those nine-dimensional points to ray paths. Really this happens in some procedural recursive process, but abstractly it's a mapping. This means we run into the <b>CURSE OF DIMENSIONALITY</b>. Once the integral is high dimensional, if the integrand is complex, <b>STRATIFIED SAMPLING DOESN'T HELP</b>. However, you will notice (almost?) all serious production ray tracers do add stratified sampling. Why? <br />
<br />
The reason is that for many pixels, the integrand is mostly constant except for two of the dimensions. For example, consider a pixel that is:<br />
<br />
<ul>
<li>In focus (so lens sample doesn't matter)</li>
<li>Not moving (so time sample doesn't matter)</li>
<li>Doesn't have an edge (so pixel sample doesn't matter)</li>
<li>Is fully in (or out of) shadow (so light sample doesn't matter)</li>
</ul>
What does matter is the diffuse bounce. So it <b>acts like</b> a 2D problem. So we need the BRDF samples, let's call them u7 and u8, to be well stratified. <br />
<br />
QMC methods here typically automatically ensure that various projections of the high dimensional samples are themselves well stratified. Monte Carlo methods, on the other hand, typically try to get this property for some projections, namely the 2 pixel dimensions, the 2 light dimensions, the 2 lens dimensions, etc. They typically do this as follows for the 4D case:<br />
<br />
<ol>
<li>Create N "good" light samples on [0,1)^2 </li>
<li>Create N "good" pixel samples on [0,1)^2</li>
<li>Create a permutation of the integers 0, ..., N-1</li>
<li>Create a 4D pattern using the permutation where the ith sample is light1[i], light2[i], pixel1[permute[i]], pixel2[permute[i]].</li>
</ol>
So a pain :) There are bunches of papers on doing Monte Carlo, QMC, or "blue noise" uniform random point sets. But we are not yet quite done.<br />
<br />
First, we don't know how many dimensions we have! The program recurses and dies by hitting a dark surface, exiting the scene, or Russian Roulette. Most programs degenerate to pure random after a few bounces to make it so we can sort of know the dimensionality.<br />
<br />
Second, we might want a progressive preview on the renderer where it gets better as you wait. Here's a <a href="https://www.youtube.com/watch?v=82SsR4s6BEM">nice example</a>.<br />
<br />
So you don't know what N is in advance! You want to be able to add samples potentially forever. This is easy if the samples are purely random, but not so obvious if doing QMC or stratified. The QMC default answer are to use <b>Halton Points</b>. These are designed to be progressive! Alex Keller at NVIDIA and his collaborators have <a href="http://web.maths.unsw.edu.au/~josefdick/MCQMC_Proceedings/MCQMC_Proceedings_2012_Preprints/100_Keller_tutorial.pdf">found even better ways to do this with QMC</a>. Per Christensen and Andrew Kensler and Charlie Kilpatrick at Pixar have a <a href="https://graphics.pixar.com/library/ProgressiveMultiJitteredSampling/paper.pdf">new Monte Carlo sampling method </a>that is making waves in the movie industry. I have not ever implemented these and would love to hear your experiences if you do (or have!)<br />
<div class="page" title="Page 1">
<div class="layoutArea">
<div class="column">
<span style="font-family: "nimbusromno9l"; font-size: 9.000000pt;"><br /></span>
</div>
</div>
</div>
Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com1tag:blogger.com,1999:blog-8350257063773144600.post-37357470809973350402018-06-04T07:27:00.002-07:002018-06-04T07:27:30.206-07:00Sticking a thin lens in a ray tracerI need to stick a " <a href="https://en.wikipedia.org/wiki/Thin_lens">ideal thin lens</a>" in my ray tracer. Rather than a real lens with some material, coating, and shape, it's an idealized version like we get in Physics 1.<br />
<br />
The thin lens has some basic properties that have some redundancy but they are the ones I remember and deploy when needed:<br />
<ol>
<li>a ray through the lens center is not bent</li>
<li>all rays through a point that then pass through the lens converge at some other point</li>
<li>a ray through the focal point (a point on the optical axis at distance from lens <b><i>f</i></b>, the focal length of the lens) will be parallel to the optical axis after being bent by the lens</li>
<li>all rays from a point a distance <i><b>A</b></i> from the lens will converge at a distance <b><i>B</i></b> on the other side of the lens and obey the thin lens law: <i>1/<b>A</b> + 1/<b>B</b> = 1/<b>f</b></i></li>
</ol>
Here's a sketch of those rules:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikCwI7ex3d1l7Gq5gGjcK2wuizrO0D-S0J6j3Sa3l_WQHDM9ZPhqGuPB39FF4AQ6S1Q523GjqnVhOH1koM25y6ab9mLiQe1v11YJ51ZzfS27D-04bT9Bg-ENlaUxzMHeA-fwCsYZzmOw/s1600/Screen+Shot+2018-06-03+at+3.36.24+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="408" data-original-width="1018" height="256" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikCwI7ex3d1l7Gq5gGjcK2wuizrO0D-S0J6j3Sa3l_WQHDM9ZPhqGuPB39FF4AQ6S1Q523GjqnVhOH1koM25y6ab9mLiQe1v11YJ51ZzfS27D-04bT9Bg-ENlaUxzMHeA-fwCsYZzmOw/s640/Screen+Shot+2018-06-03+at+3.36.24+PM.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
So if I have a (purple) ray with origin <i><b>a</b></i> that hits the lens at point <i><b>m</b></i>, how does it bend? It bends towards point<i> <b>b</b></i> no matter what the ray direction is. So the new ray is:<br />
<br />
<i><b>p</b>(t)<b> = m + </b>t<b> (b-m)</b></i><br />
<br />
So what is point <i><b>b?</b></i><br />
<br />
It is in the direction of point <i><b>c</b></i>, but extended by some distance. We know the center of the lens <i><b>c</b></i>, so we can use the ratios of the segments to extend to that:<br />
<i><b><br /></b></i>
<i><b>b = a + (c-a) </b> (B+A) / A?</i><br />
<i><b><br /></b></i>
So what is <i>(B+A) /A? </i><br />
<i><b><br /></b></i>
We know 1/A + 1/B = 1/f, so B = 1/(1/f-1/A). So the point <i><b>b</b></i> is:<br />
<br /><b></b><i><b></b></i>
<i><b>b = a + (c-a) (</b>1/(1/f-1/A) + A)/A =</i><br />
<i><b>a + (c-a) </b>(1/(A/f - 1) + 1) =</i><br />
<i><b>a + (c-a) </b>(A/(A - f))</i><br />
<br /><b></b><i><b></b></i>
OK let's try a couple of trivial cases. What if A = 2f? <br />
<br /><b></b><i><b></b></i>
<i><b>b = a + (c-a) </b>(2f/(2f-f)) = </i><i><b>a + </b>2<b>(c-a) </b></i><br />
<br />
That looks right (symmetric case-- A = B there) <br />
<br /><b></b><i><b></b></i>
<span style="color: red;"><span style="font-size: large;">So final answer, given a ray with origin <i><b>a</b></i> that hits a lens with center <i><b>c</b></i> and focal length <i>f </i>at point <i><b>m</b></i>, the refracted ray is:</span></span><br />
<br />
<span style="color: red;"><span style="font-size: large;">p(t) = m + t( <i><b><i><b>a + (c-a) </b></i></b><i>(A/(A - f)) - <b>m</b>)</i></i></span></span><br />
<i><i><br /></i></i>
There is a catch. What if <i><i>B < 0? </i></i>This happens when<i><i> A < f. </i></i>Address that case when it comes up :)<br />
<br />
<br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com0tag:blogger.com,1999:blog-8350257063773144600.post-61217046379187892162018-05-31T18:20:00.001-07:002018-05-31T18:20:40.142-07:00Generating uniform random rays that hit an axis-aligned box<br />
For some tests you want the set of "all" rays that hit a box. If you want to stratify, this is somewhat involved (and I don't know that I have done it or seen it done). Chris Wyman and I did a <a href="http://jcgt.org/published/0006/02/03/">jcgt paper on doing this stratified in a square in 2D</a>. But often stratification isn't needed. When in doubt I never do it-- the software is always easier to deal with un-stratified, and as soon as dimension gets high most people don't bother because of the <a href="https://en.wikipedia.org/wiki/Curse_of_dimensionality">Curse of Dimensionality</a>.<br />
<br />
We would like to do this with as little math as possible. First let's consider any side of the box (this would apply to any convex polyhedron if we wanted something more general). If the box is embedded in all possible uniform rays, any ray that hits the box will enter at exactly one point on the surface of the box, and all points are equally likely. So our first job is to pick a uniform point on the surface. We can use the technique Greg Turk used to seed points for texture generation on a triangular mesh:<br />
<br />
The probability of each side, for a box of side lengths X, Y, Z, is its area over the total area. The side areas are:<br />
<br />
XY, YZ, ZX (2 of each)<br />
<br />
We can do cumulative area and stuff it in an array of length 6:<br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">c_area[0] = XY</span><br />
<span style="font-family: "Courier New", Courier, monospace;">c_area[1] = area[0] + XY</span><br />
<span style="font-family: "Courier New", Courier, monospace;">c_area[2] = area[1] + YZ</span><br />
<span style="font-family: "Courier New", Courier, monospace;">c_area[3] = area[2] + YZ</span><br />
<span style="font-family: "Courier New", Courier, monospace;">c_area[4] = area[3] + ZX</span> <br />
<span style="font-family: "Courier New", Courier, monospace;">c_area[5] = area[4] + ZX </span><br />
<br />
Now normalize it so it is a cumulative fraction:<br />
<br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">for (int i = 0; i < 6; i++) </span><br />
<span style="font-family: "Courier New", Courier, monospace;"> c_area[i] /= c_area[5]</span><br />
<br />
Now take a uniform random real <span style="font-family: "Courier New", Courier, monospace;">r()</span> in [0,1)<br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">int candidate = 0;</span><br />
<span style="font-family: "Courier New", Courier, monospace;">float ra = r(); </span><br />
<span style="font-family: "Courier New", Courier, monospace;">while (c_area[candidate] < ra) candidate++;</span><br />
<br />
Now <span style="font-family: "Courier New", Courier, monospace;">candidate</span> is the index to the side.<br />
<br />
Let's say the side is in the xy plane and x goes from 0 to X and y goes from 0 to Y. Now pick a uniform random point on that side:<br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">vec3 ray_entry(X*r(), Y*r(), 0);</span><br />
<br />
Now we have a ray origin. What is the ray direction? It is <b>not</b> uniform in all directions. These are the rays that hit the side. So the density is proportional to the cosine of the angle to the normal-- so they are Lambertian! <b>This is not obvious</b>. I will punt on justifying that for now.<br />
<br />
So for the xy-plane side above, the normal is +z and the ray direction is a uniform random point on a disk projected onto the hemisphere:<br />
<br />
<span style="font-family: "Courier New", Courier, monospace;">float radius = sqrt(r());</span><br />
<span style="font-family: "Courier New", Courier, monospace;">float theta = 2*M_PI*r();</span><br />
<span style="font-family: "Courier New", Courier, monospace;">float x = radius*cos(theta);</span><br />
<span style="font-family: "Courier New", Courier, monospace;">float y = radius*sin(theta);</span><br />
<span style="font-family: "Courier New", Courier, monospace;">float z = sqrt(1-x*x-y*y);</span><br />
<span style="font-family: "Courier New", Courier, monospace;">ray_direction = vec3(x,y,z);</span><br />
<br />
Now we need that for each of the six sides.<br />
<br />
We could probably find symmetries to reduce to 3 cases, or maybe even a loop, but I personally would probably not bother because my trying to be clever usually ends poorly...<br />
<br />
<br />
<br />
<br />
Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com0tag:blogger.com,1999:blog-8350257063773144600.post-82638163917974846242018-03-14T09:03:00.005-07:002018-03-14T09:03:44.801-07:00Egyptian estimates of PII saw a neat <a href="https://twitter.com/fermatslibrary/status/973181926684192769">tweet</a> about the estimate the Egyptians used for PI.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiz8YaUmlTEexT2mLuig6D5PKPOQu1KacNe9xLLED0nLvIXWkk9v_0bR1R_ywcoREjlPAbGSAr-K9928qlnYfCuHgjCHym89niTW4zEWxVDYKflcTVojo9aaqEfpKtNgAc8gxwYr13dXA/s1600/Screen+Shot+2018-03-14+at+9.51.03+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="939" data-original-width="1220" height="246" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiz8YaUmlTEexT2mLuig6D5PKPOQu1KacNe9xLLED0nLvIXWkk9v_0bR1R_ywcoREjlPAbGSAr-K9928qlnYfCuHgjCHym89niTW4zEWxVDYKflcTVojo9aaqEfpKtNgAc8gxwYr13dXA/s320/Screen+Shot+2018-03-14+at+9.51.03+AM.png" width="320" /></a></div>
This is all my speculation, and maybe a math history buff can enlighten me, but they should have discovered the D^2 dependence pretty naturally. Including the constant before squaring is, I would argue, just as natural as having it outside the parentheses, so let's go with that for now. So was there a nearby better fraction? How well did the Egyptians do? A brute force program should tell us.<br />
<br />
We will use the ancient programming language C:<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">#include <math .h=""><br />#include <stdio .h=""><br />int main() {<br /> double min_error = 10;<br /> for (int denom = 1; denom < 10000; denom++) {<br /> for (int num = 1; num < denom; num++) {<br /> double approx = 2*double(num)/double(denom);<br /> approx = approx*approx;<br /> double error2 = M_PI-approx;<br /> error2 = error2*error2;<br /> if (error2 < min_error) {<br /> min_error = error2;<br /> printf("%d/%d %f\n", num, denom, 4*float(num*num)/float(denom*denom));<br /> }<br /> }<br /> }<br />}</stdio></math></span></span><br />
<br />
This produces output:<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">1/2 1.000000<br />2/3 1.777778<br />3/4 2.250000<br />4/5 2.560000<br />5/6 2.777778<br />6/7 2.938776<br />7/8 3.062500<br />8/9 3.160494<br />23/26 3.130177<br />31/35 3.137959<br />39/44 3.142562<br />109/123 3.141252<br />148/167 3.141597<br />4401/4966 3.141588<br />4549/5133 3.141589<br />4697/5300 3.141589<br />4845/5467 3.141589<br />4993/5634 3.141589<br />5141/5801 3.141590<br />5289/5968 3.141590<br />5437/6135 3.141590<br />5585/6302 3.141590<br />5733/6469 3.141590<br />5881/6636 3.141591<br />6029/6803 3.141591<br />6177/6970 3.141591<br />6325/7137 3.141591<br />6473/7304 3.141591<br />6621/7471 3.141591<br />6769/7638 3.141591<br />6917/7805 3.141592<br />7065/7972 3.141592<br />7213/8139 3.141592<br />7361/8306 3.141592<br />7509/8473 3.141592<br />7657/8640 3.141592<br />7805/8807 3.141592<br />7953/8974 3.141593<br />8101/9141 3.141593<br />8249/9308 3.141593<br />8397/9475 3.141593<br />8545/9642 3.141593</span></span><br />
<br />
So 8/9 was already pretty good, and you need to get to 23/26 before you do any better! I'd say the Egyptians did extremely well.<br />
<br />
What if they had put the constants outside the parens? How well could they have done? We can change two of the lines above to:<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">double approx = 4*double(num)/double(denom);//approx = approx*approx;</span></span><br />
<br />
and the printf to:<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">printf("%d/%d %f\n", num, denom, 4*float(num)/float(denom));</span></span><br />
<br />
And we get:<br />
<br />
<span style="font-family: "Courier New", Courier, monospace;"><span style="font-size: x-small;">1/2 2.000000<br />2/3 2.666667<br />3/4 3.000000<br />4/5 3.200000<br />7/9 3.111111<br />11/14 3.142857<br />95/121 3.140496<br />106/135 3.140741<br />117/149 3.140940<br />128/163 3.141104<br />139/177 3.141243<br />150/191 3.141361<br />161/205 3.141464<br />172/219 3.141552<br />183/233 3.141631<br />355/452 3.141593</span></span><br />
<br />
So 7/9 is not bad! And 11/14 even better (4*11/14 is 22/7, the classic approximation). So no clear winner here on whether the rational constant should be inside the parens or not.<br />
<br />
<br />Peter Shirleyhttp://www.blogger.com/profile/17871569418798062417noreply@blogger.com35