Thursday, December 30, 2021

What is an "uber shader"?

I am no expert on "uber shaders", but I am a fan.  They did not make much sense to me until recently.  First, let's unpack the term a little.  The term "shader" in graphics has become almost meaningless.  To a first approximation it means "function".  So a "geometry shader" is a function that modifies geometry, and a "pixel shader" is a function that works on pixels.  In context, those terms might mean something more specific.

So "uber shader" is a general function?  No.

An uber shader is a very specific kind of function: one that evaluates a very particular BRDF, usually in the context of a particular rendering API.  The fact that it is a BRDF implies it is a "physically based" shader, so it is ironically much more restricted than a general shader.  The "uber" refers to it being the "only material model you will ever need", and I think for most applications that is true.  The one I am most familiar with (the only one I have implemented) is the Autodesk Standard Surface.

First, a little history.  Back in ancient times, people would classify surfaces as "matte" or "shiny", and you would call a different function for each type of surface.  Every surface would have a name or pointer or whatever to the code to call for lighting or rays, so different surfaces had different behavior.  Here is a typical example of some materials we used in our renderer three decades ago:

But sometime in the late 1990s, some movie studios started making a single shader that encompassed all of these, as well as other effects such as retro-reflection, sheen, and subsurface scattering.  (I don't know who came up with this idea first, but I think Sing-Choong Foo, one of the BRDF measurement and modeling pioneers I overlapped with at Cornell, did one at PDI in the late 1990s.  This may have been the first; please comment if you know anything about the history, which really ought to be documented.)

Here is the Autodesk version's conceptual graph of how the shader is composed:

So a bunch of different shaders are added in linear combinations, and the weights may be constants or functions.  This looks a bit daunting.  Let's show how you would make a metal (like copper!).  First set opacity=1, coat=0, metalness=1.  This makes most of the graph irrelevant:

Now let's do a diffuse surface: opacity=1, coat=0, metalness=0, specular=0, transmission=0, sheen=0, subsurface=0.  Phew!  Again, most of the graph drops away:
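To make the blending structure concrete, here is a minimal sketch.  The "lobe" values and parameter names are placeholders of my own, not real BRDF evaluations or the actual Standard Surface parameterization; only the linear-combination structure mirrors the graph above:

```python
# Toy sketch of the uber-shader blend: lobes are combined linearly by
# weights like metalness and opacity.  The "lobes" here are just RGB
# constants standing in for real BRDF evaluations.

def uber_shade(diffuse, metal, metalness, opacity):
    # metalness linearly blends the dielectric and metal branches;
    # opacity scales the whole result
    lobe = [(1 - metalness)*d + metalness*m for d, m in zip(diffuse, metal)]
    return [opacity*c for c in lobe]

pink   = (1.0, 0.5, 0.7)     # stand-in diffuse reflectance
copper = (0.95, 0.64, 0.54)  # stand-in conductor response

# metalness=1: only the metal branch survives, as in the copper example
print(uber_shade(pink, copper, metalness=1.0, opacity=1.0))
# metalness=0.5: the unphysical-but-useful half-and-half blend
print(uber_shade(pink, copper, metalness=0.5, opacity=1.0))
```

With metalness=1 the diffuse branch gets weight zero, which is exactly the "most of the graph is irrelevant" observation above.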

So why has this, for the most part, won out over distinct categorical shaders?  Having implemented the above shader along with my colleague and friend Bob Alfieri, I really like it for streamlining software: here is your shader black box!  Further, you can point to the external document and get data in that format.

But I suspect that is not the only reason uber shaders have taken over.  Note that we could have set metalness=0.5 above, so the thing is half copper metal and half pink diffuse.  Does that make any sense as a physical material?  Probably not.  And isn't the whole point of a BRDF to restrict us to physical materials?  I think such unphysical combinations serve two purposes:

  1. Artistic expression.  We usually do physically-based BRDF as a guide to keeping things plausible and robust.  But an artistic production like a game or movie might look better with nonphysical combinations, so why not expose the knobs!
  2. LOD and antialiasing.  A pixel or region of an object may cover more than one material, so the final pixel color should account for both BRDFs.  Combining them in the shading calculation allows sparser sampling.

Finally, graphics needs to be fast in both development and production, so a compiler ecosystem has grown up around these shaders.  I don't know much about that, which is a credit to the compiler/language people who do :)


Friday, August 6, 2021

A call to brute force

In writing the new edition of Marschner's and my graphics text, we tried to add more "basics" on light and material interaction (I don't mean BRDF stuff -- I mean more dielectrics) and more on brute-force simulation of light transport.  In the "how would you maximally brute force a ray tracer" discussion we wrote this:


Basically, it's just a path tracer run with *only* smooth dielectrics and Beer's Law.  If you model a scene with all the microgeometry and allow some of it to absorb, you can get colored paint with a rough surface.  Here are a few figures from my thesis (and it was an old idea then!):

Doing the brute-force path tracing is slow, and dealing with the micro-particles is slow, so we invent BRDFs and other bulk properties -- but that is all for efficiency.  When we wrote this classic treatment, which people use in the classroom all the time, we were thinking it was just for education and for reference solutions (like Eugene d'Eon has done for skin, for example).  But since Monte Carlo path tracing is almost infinitely parallel, why not do this on a huge crowd-sourced network of computers (in the spirit of folding@home)?

I am thinking of images of ordinary things whose microgeometry would be easy to model.  For example, a lamp shade with a white lining:

Another example of something very complex visually that might be modeled procedurally:


Or a glass of beer or... (on and on).

So what would be needed:

  1. some base path tracer with a procedural model we could all install and run
  2. some centralized job coordination server that doled out random seeds and combined results 
  3. an army of nerds willing to do this with idle cycles rather than coin mining
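For what it's worth, item 2 is conceptually tiny: because Monte Carlo estimates from independent seeds combine by simple averaging, the server mostly just hands out seeds and averages what comes back.  Here is a toy sketch (render_sample is a stand-in of my own invention, not a real path-traced pass):

```python
import random

def render_sample(seed, npixels=4):
    # stand-in for one worker's path-traced pass: one noisy estimate per pixel
    rng = random.Random(seed)
    return [rng.random() for _ in range(npixels)]

def combine(results):
    # independent unbiased estimates combine by simple averaging
    n = len(results)
    return [sum(r[i] for r in results) / n for i in range(len(results[0]))]

seeds = range(1000)  # the coordination server just doles these out
image = combine([render_sample(s) for s in seeds])
# each pixel converges toward the true mean (0.5 for this toy integrand)
```

The real work is all in items 1 and 3.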

I don't have the systems chops to know the best way to do this.  Anyone?  I will take the discussion to Twitter!


Wednesday, April 14, 2021

I am not a believer in trying to learn two things at once.

 I just spent this week learning to program with OpenGL in Python using the fabulous PyOpenGL, and I was barely able to do it.  I have written a whole textbook on computer graphics, one that derives the OpenGL matrix stack and the viewing model conceptually.  Even the OpenGL graphics pipeline model, if I recall accurately, I was barely able to learn!  I think there is a zero percent chance I could learn the ins and outs of PyOpenGL's calls and behavior at the same time as the underlying conceptual model.

One reason I am pretty sure of that is that I once tried learning TensorFlow, Python, and neural nets all at once.  It did not go well.  Once I was comfortable with Python and kind of understood neural nets, I tried again (about a year later) with PyTorch.  It was not pretty.  Finally, I implemented a neural net in C, then one in Python, from scratch.  It was painful, but I made it.  Barely.  (And thanks to some advice from Shalini Gupta, Rajesh Sharma, and Hector Lee.)  Then I used PyTorch and managed to train a network from scratch.  Again, barely.

My empirical experience is that it takes a focused and concerted effort to learn anything really new.  And if it were any harder, I don't think I could do it.  And if I am trying to learn two things at once (particularly an API and the concepts/algorithms the API is abstracting for me), then forget it.

Conclusion: I should never try to learn two things at once.  If you have that trouble too, break it down.  It may seem like it takes longer, but nothing takes longer than never learning it!

Corollary: when you see somebody quickly picking up packages and wonder why you don't, maybe they are just like me, or maybe they are really something special.  Either way, if you develop competence in a technical area, you are among the most fortunate 0.1% of humans, so take a bow.  You just need to find a way that works for YOU.

Monday, December 7, 2020

Debugging refraction in a ray tracer

 Almost everybody has trouble getting refraction right in a ray tracer.  I have a previous blog post on debugging refraction direction if you support boxes.  But usually your first ray tracer just supports spheres, and when the picture looks wrong and/or is black, what do you do?

If you are more disciplined than I usually am, write a visualization tool that shows you in 3D the rays generated for a given pixel, and you will often see what the problem is.  Like this super cool program, for example.

But if you are a stone age person like me:

Pebbles Flintstone -- Evangelist! 

Then you have exactly two debugging tools: 1) printf() and 2) outputting various frame buffers.  For debugging refraction, I like #1.  First, create a model where you know exactly what the behavior of a ray and its descendants should be.  Real glass reflects and refracts.  Let's get refraction right, so comment out any possibility of reflection: a ray goes in and refracts (or, if that is impossible, prints that).

Now let's set up the simplest ray and single sphere possible.  The one I usually use is this:

 The viewing ray A from the camera starts at the eye and goes straight along the minus Z axis.  I assume here it is a unit-length vector, but it may not be depending on how you generate your rays.


How do you generate that ray?  You could hard-code it, or you could instrument your program to take a parameter or command-line argument for which pixel to trace (like -x 250 -y 300 or whatever).  If you do that, you may need to be careful to get the exact center -- what are the pixel offsets?  That is why I usually just hard-code it.  Then let the program recurse and make sure that you get:

A hits at Q which is point (0,0,1)

The surface normal vector at Q is (0,0,1)

The ray is refracted to create B with origin Q and direction (0,0,-1)

B hits at R, which is point (0,0,-1)

The surface normal at R is (0,0,-1)

The ray is refracted to create C with origin R and direction (0,0,-1)

The ray C goes and computes a color H of the background in that direction. 

The recursion returns that color H all the way to the original caller

That will find 99% of bugs, in my experience.
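The walkthrough above can be scripted.  This is just a hedged sketch under my assumed setup (unit sphere at the origin, eye at (0,0,2), refractive index 1.5, helpers written from the standard formulas), not anyone's production code.  At normal incidence refraction leaves the direction unchanged, so every step is checkable by hand:

```python
import math

def refract(d, n, eta):
    # d: unit incident direction, n: unit normal opposing d, eta = n_in/n_out
    cos_i = -sum(di*ni for di, ni in zip(d, n))
    sin2_t = eta*eta*(1.0 - cos_i*cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta*di + (eta*cos_i - cos_t)*ni for di, ni in zip(d, n))

def hit_unit_sphere(o, d):
    # nearest t > eps along o + t*d hitting the unit sphere at the origin
    b = 2.0*sum(oi*di for oi, di in zip(o, d))
    c = sum(oi*oi for oi in o) - 1.0
    disc = b*b - 4.0*c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    if t < 1e-6:
        t = (-b + math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

eye = (0.0, 0.0, 2.0)
A = (0.0, 0.0, -1.0)                       # viewing ray straight down -z
Q = tuple(e + hit_unit_sphere(eye, A)*a for e, a in zip(eye, A))
assert Q == (0.0, 0.0, 1.0)                # A hits at Q
B = refract(A, Q, 1.0/1.5)                 # normal at Q is Q itself; enter glass
assert B == (0.0, 0.0, -1.0)               # straight through at normal incidence
R = tuple(q + hit_unit_sphere(Q, B)*b for q, b in zip(Q, B))
assert R == (0.0, 0.0, -1.0)               # B hits at R
C = refract(B, tuple(-r for r in R), 1.5)  # exit; flip normal to oppose B
assert C == (0.0, 0.0, -1.0)               # C continues to the background
```

If any assert fires, a printf() at that step tells you which part of your refraction code to stare at.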

Tuesday, November 17, 2020

How to write a paper when you suck at writing

Most papers are bad and almost unintelligible.  Most of my papers are bad and almost unintelligible.  The ubiquity of this is evidence that writing good papers must be hard.  So what am I to do if I suck at writing?  This is not false modesty; I do suck at writing.  But I am an engineer, so there must be a way to apply KISS here.  KISS works.  This blog post describes my current methodology for producing a paper.  My version of the famous Elmore Leonard quote is:

Concentrate on the parts most readers actually look at.

Your title, figures, and their captions should work as a stand-alone unit.  Write/make these first.  Doing this in the form of a PowerPoint presentation is a good method.  Explaining your paper to a friend at a whiteboard and then photographing what evolves under Q&A is another good method.  Then ruthlessly edit.  Ask yourself: what is this figure conveying?  Each idea should have exactly one figure, and you should know exactly what each of those ideas is.

Let's do an example.  Suppose I were to write a paper on why it is better to feed your cats in the evening than in the morning.  First, you should have a figure on what a cat is, along with a caption.


This is a cat

Don't assume your reader knows much of anything (see caveat at end of article).  Now the figure above has details that are not relevant to the point, so you probably need a different one.  Getting exactly the right figure is an iterative process.

If my key reason for not feeding the cat in the morning is that they will wake me earlier each day, I need some figure related to that, such as:

The problem with a 6am feeding is that the cat starts thinking about it before 6am

(credit cattree)

Finally, you will need a figure that gets across the idea that feeding the cat in the evening works.
Feeding the cat before bedtime is effective

(credit buzzfeed)


So you now have an argument sequence where the reader can "get" the point of your paper.  Your actual paragraphs should make your case more airtight, but 90% of the battle is getting the reader to understand what you are even talking about.  "What is this?" should never be a struggle, but it usually is.

For the paper writing itself, write to the figures.  First, add paragraphs that speak to the point of the figure and reference it.  Then, add paragraphs as needed to make the point convincing.  Each paragraph should have a definite purpose, and you should be able to say explicitly what that is.  The results section should convince the reader that, for example, cats do in fact behave better when fed in the evening.

Now, there is a caveat for peer-reviewed papers.  Reviewers often think that if something is clear, it must not be academic.  They will want you to omit the first figure.  This is a case of "you can't fight city hall".  But make things as clear as they will let you.  If this irritates you, suck it up, and then write any way you want in books, blog posts, and emerging forms of expression that you fully control.

So, in summary, make the figures the skeleton of the paper and do them first.  An important point that applies to more than this topic: develop your own process that works for YOU; mine may be that process for you, or it may not.  Final note -- I feed my cats in the morning.

Wednesday, August 19, 2020

Grabbing a web CSV file

I am tutoring an undergrad non-CS person in Python, and we tried to graph some online data.  This turned out to be pretty painful to get working, just because of versioning and encodings and my own lack of Python experience.  But in the end it was super-smooth code, because Python packages rule.

Here it is.  The loop over counter is super kludgy.  If you use this, improve that ;)

import csv
import urllib.request
import codecs
import matplotlib.pyplot as plt

url = ''

response = urllib.request.urlopen(url)
cr = csv.reader( codecs.iterdecode(response, 'utf-8') )

x = []
y = []

# grab row 3 of the CSV; columns 2 onward are the data points
counter = 0
for row in cr:
    if counter == 3:
        for j in range(2, len(row)):
            x.append(j)
            y.append(float(row[j]))
    counter = counter + 1

plt.plot(x, y)
plt.show()
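In the spirit of that invitation, one way to drop the counter kludge is enumerate, which hands you the row index directly.  A self-contained sketch (using an in-memory CSV of my own invention so it runs stand-alone, assuming row 3 is still the one you want):

```python
import csv
import io

# enumerate replaces the manual counter; the row layout here is made up
data = "a,b\n1,2\n3,4\nname,date,10,20,30\n"
cr = csv.reader(io.StringIO(data))

x, y = [], []
for counter, row in enumerate(cr):
    if counter == 3:
        for j in range(2, len(row)):
            x.append(j)          # column index as the x value
            y.append(float(row[j]))

print(x, y)  # -> [2, 3, 4] [10.0, 20.0, 30.0]
```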

Thursday, July 2, 2020

Warping samples paper at rendering symposium

Here is a paper at EGSR being presented tomorrow at the time of this writing.

It deals with sampling lights.  The paper is pretty daunting because it has some curve fitting, which always looks complicated, and it works on manifolds, so there are Jacobians running around.  But the basic ideas are less scary than the algebra implies, and it is clean in code.  I will cover a key part of it in a simple case.

If you are doing direct lighting from a rectangular light source, this is the typical Monte Carlo code for shading a point p:

pick point q on light with PDF p(q) = pdf_q
if (not_shadowed(p, q))
    d = unit_vector(q-p)
    light += BRDF * dot(d, n_p) * dot(-d, n_q) * emitted(q) / (distance_squared(p,q) * pdf_q)

Most renderers, or most of mine anyway, pick the point uniformly on the rectangle, so pdf_q = 1/area.  There are more sophisticated methods that try to pick the PDF intelligently.  This is hard in practice.  See the paper for references to cool work.

But we can also just try to make it one or two steps better than a constant PDF.  The first obvious candidate is a bilinear PDF, which just computes this at each corner:

weight = BRDF * dot(d_corner, n_p) * dot(-d_corner, n_corner) * emitted(corner) / distance_squared(p,corner)

The PDF gets 4 weights and needs to be normalized so it is a PDF, and you need to pick points from it.  This is an algebraic book-keeping job, but not bad (see the paper!).  The disruption to the code is no big deal.  For my particular C++ implementation, here is my sampling code (which is a port of David Hart's shadertoy -- see that for the code in action; this is just to show you it's no big deal):

// generate a sample from a bilinear set of weights
// generate a sample from a bilinear set of weights
vec2 sample_quad(const vec2& uv, const vec4& w) {
    real u = pdf_u(uv.x(), w);
    real v = pdf_v(uv.y(), u, w);
    return vec2(u, v);
}

// helper functions
inline real pdf_u(real r, const vec4& w) {
    real A = w.z() + w.w() - w.y() - w.x();
    real B = 2.*(w.y() + w.x());
    real C = -(w.x() + w.y() + w.z() + w.w())*r;
    return solve_quadratic(A, B, C);
}

inline real pdf_v(real r, real u, const vec4& w) {
    real A = (1.-u)*w.y() + u*w.w() - (1.-u)*w.x() - u*w.z();
    real B = 2.*(1.-u)*w.x() + 2.*u*w.z();
    real C = -((1.-u)*w.y() + u*w.w() + (1.-u)*w.x() + u*w.z())*r;
    return solve_quadratic(A, B, C);
}

// evaluate the PDF
inline real eval_quad_pdf(const vec2& uv, const vec4& w) {
    return 4.0 *
        ( w.x() * (1.0 - uv.x()) * (1. - uv.y())
        + w.y() * (1.0 - uv.x()) * uv.y()
        + w.z() * uv.x()         * (1. - uv.y())
        + w.w() * uv.x()         * uv.y()
        ) / (w.x() + w.y() + w.z() + w.w());
}
The same basic approach works on triangles, and it works if you sample in solid-angle space.  I hope you try it!
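As a quick sanity check on the normalization book-keeping, here is a Python port of eval_quad_pdf above (vec4 swapped for a plain tuple, an assumption for brevity).  A midpoint-rule sum over the unit square should come out to 1 for any positive corner weights:

```python
def eval_quad_pdf(u, v, w):
    # Python port of the C++ eval_quad_pdf: bilinear interpolation of the
    # four corner weights, normalized so the PDF integrates to 1
    wx, wy, wz, ww = w
    return 4.0 * (wx*(1 - u)*(1 - v) + wy*(1 - u)*v
                + wz*u*(1 - v)       + ww*u*v) / (wx + wy + wz + ww)

w = (0.1, 0.7, 0.4, 2.0)  # arbitrary positive corner weights
n = 200
total = sum(eval_quad_pdf((i + 0.5)/n, (j + 0.5)/n, w)
            for i in range(n) for j in range(n)) / (n*n)
print(total)  # ~1.0: the PDF is correctly normalized
```

The midpoint rule is exact per cell for a bilinear integrand, so the sum lands on 1 to within floating-point rounding.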