I am no expert on "uber shaders", but I am a fan. These did not make much sense to me until recently. First let's unpack the term a little bit. The term "shader" in graphics has become almost meaningless. To a first approximation it means "function". So a "geometry shader" is a function that modifies geometry. A "pixel shader" is a function that works on pixels. In context those terms might mean something more specific.
So "uber shader" is a general function? No.
An uber shader is a very specific kind of function: it is one that evaluates a very particular BRDF, usually in the context of a particular rendering API. The fact that it is a BRDF implies this is a "physically based" shader, so it is ironically much more restricted than a general shader. The "uber" refers to it being the "only material model you will ever need", and I think for most applications that is true. The one I have the most familiarity with (the only one I have implemented) is the Autodesk Standard Surface.
First, a little history. Back in ancient times people would classify surfaces as "matte" or "shiny" and call a different function for each type of surface. Every surface somehow had a name or pointer to the code to call for lighting or rays or whatever, so different surfaces had genuinely different behavior. Here is a typical example of some materials we used in our renderer three decades ago:
But sometime in the late 1990s some movie studios started making a single shader that encompassed all of these as well as some other effects such as retro-reflection, sheen, and subsurface scattering. (I don't know who came up with this idea first, but I think Sing-Choong Foo, one of the BRDF measurement and modeling pioneers I overlapped with at Cornell, did one at PDI in the late 1990s... this may have been the first... please comment if you know anything about the history, which really ought to be documented.)
Here is the Autodesk version's conceptual graph of how the shader is composed:
So a bunch of different shaders are added in linear combinations, and the weights may be constants or may be functions. This looks a bit daunting. Let's show how you would make a metal (like copper!): first set opacity=1, coat=0, metalness=1. This causes most of the graph to be irrelevant:

Now let's do a diffuse surface: opacity=1, coat=0, metalness=0, specular=0, transmission=0, sheen=0, subsurface=0. Phew! Again most of the graph drops away:
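To make that collapse concrete, here is a toy sketch in C++, not the actual Standard Surface implementation: the lobe functions are placeholders and the parameter names only loosely follow the spec. The point is just that metalness=1 discards the whole dielectric branch, and zeroing the other weights leaves only diffuse.

```cpp
// Toy sketch only -- not the real Standard Surface code. The eval_* functions
// are stand-ins for full BRDF lobes (a real shader also takes view/light
// directions, roughness, IOR, and so on).
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 lerp(Vec3 a, Vec3 b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

struct SurfaceParams {
    float opacity, coat, metalness, specular, transmission, sheen, subsurface;
    Vec3  base_color;
};

// Placeholder lobes; in the real shader these are conductor Fresnel + GGX,
// a dielectric specular layer, a clear coat, etc.
Vec3 eval_metal(const SurfaceParams& p)    { return p.base_color; }
Vec3 eval_diffuse(const SurfaceParams& p)  { return p.base_color; }
Vec3 eval_specular(const SurfaceParams&)   { return { 1, 1, 1 }; }
Vec3 eval_coat(const SurfaceParams&)       { return { 1, 1, 1 }; }

Vec3 shade(const SurfaceParams& p) {
    // Dielectric stack: diffuse base with a specular layer weighted on top.
    // (Transmission, sheen, and subsurface would weight further lobes here.)
    Vec3 dielectric = lerp(eval_diffuse(p), eval_specular(p), p.specular);
    // metalness=1 throws away the dielectric stack (the copper case);
    // metalness=0 throws away the metal lobe (the pink diffuse case).
    Vec3 base = lerp(dielectric, eval_metal(p), p.metalness);
    // Coat is layered on top; opacity scales the final result.
    Vec3 out = lerp(base, eval_coat(p), p.coat);
    return { out.x * p.opacity, out.y * p.opacity, out.z * p.opacity };
}

int main() {
    SurfaceParams copper { 1, 0, 1, 0, 0, 0, 0, { 0.95f, 0.64f, 0.54f } };
    SurfaceParams pink   { 1, 0, 0, 0, 0, 0, 0, { 1.0f,  0.5f,  0.6f  } };
    Vec3 a = shade(copper), b = shade(pink);
    std::printf("copper: %.2f %.2f %.2f\n", a.x, a.y, a.z);
    std::printf("pink:   %.2f %.2f %.2f\n", b.x, b.y, b.z);
    return 0;
}
```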
So why has this, for the most part, won out over a zoo of categorically different shaders? Having implemented the above shader along with my colleague and friend Bob Alfieri, I really like it for streamlining software: here is your shader black box! Further, you can point people to the external specification document and accept data in that format.
But I suspect that is not the only reason uber shaders have taken over. Note that we could have set metalness=0.5 above. So this thing is half copper metal and half pink diffuse. Does that make any sense as a physical material? Probably not. And isn't the whole point of a BRDF to restrict us to physical materials? I think such unphysical combinations serve two purposes:
- Artistic expression. We usually use physically based BRDFs as a guide to keep things plausible and robust. But an artistic production like a game or movie might look better with nonphysical combinations, so why not expose the knobs!
- LOD and antialiasing. A pixel or region of an object may cover more than one material, so the final pixel color should account for both BRDFs. Combining them in the shading calculation allows sparser sampling, as sketched below.
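Here is a rough illustration of that second point (my own sketch, with a made-up stand-in shading function, not a real BRDF): a pixel that is half copper and half pink diffuse can either be supersampled and the colors averaged, or the uber-shader parameters themselves can be prefiltered to metalness=0.5 and shaded once.

```cpp
// Toy illustration of prefiltering uber-shader parameters for LOD /
// antialiasing. shade_r() is a made-up stand-in, not a real BRDF.
#include <cstdio>

struct Params { float metalness, base_r, base_g, base_b; };

float shade_r(const Params& p) {            // red channel only, for brevity
    return p.base_r * (0.5f + 0.5f * p.metalness);
}

int main() {
    Params copper { 1.0f, 0.95f, 0.64f, 0.54f };
    Params pink   { 0.0f, 1.00f, 0.50f, 0.60f };

    // Option A: supersample -- shade each material, then average the colors.
    float supersampled = 0.5f * shade_r(copper) + 0.5f * shade_r(pink);

    // Option B: prefilter the parameters (metalness = 0.5, blended base color)
    // and shade once. Not identical in general because shading is nonlinear,
    // but usually close enough, and it needs only one shader evaluation.
    Params blended { 0.5f,
                     0.5f * (copper.base_r + pink.base_r),
                     0.5f * (copper.base_g + pink.base_g),
                     0.5f * (copper.base_b + pink.base_b) };
    float prefiltered = shade_r(blended);

    std::printf("supersampled %.3f   prefiltered %.3f\n", supersampled, prefiltered);
    return 0;
}
```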
Finally, graphics needs to be fast both in development and in production, so a whole compiler ecosystem has grown up around these shaders. I don't know much about that, which is a credit to the compiler/language people who do :)
I'm finding it hard to remember the details...
Sing wrote a few different shaders while working on VFX. They do have the components you describe. IIRC, the biggest improvement was the improved BRDF. One of the reasons this worked well at PDI was because we had a separation visible in the UI that allowed any parameter to be textured with a separate map shader, which nicely separated the roles of the technical shader writer from the artistic texturing process.
I find it interesting that you don't mention anything about the hardware issues that led to "uber shaders" including compilation and execution limitations. I think it was the GPU world that first wrote single function shaders, and then that was taken over to film.
Ugh, for some reason it didn't get my name even though I was logged in. This is Dan Wexler, author of the PDI renderer at that time.
I would like to hear more about the hardware history. I really ignored uber shaders until a couple of years ago.
I believe the term uber, at least for me, comes from Ronen Barzel's JGT article in 1997 on lighting - where he describes an "uber light shader". This article was included in the Apodaca/Gritz Advanced RenderMan book.
I wrote the "uber" material shader at PDI in 1998/99. After Antz we had a collection of different materials which didn't always work well together, so for Shrek I wrote one material to rule them all. We called it "base" because we could combine shaders in different ways, so the notion was that no shading would be allowed in the combiners and all the shading would happen at the "base" level of the material graph. But we considered calling it "uber". As Dan points out, procedural textures (not shading) were already a different kind of shader in the PDI system. A few other materials hung on, but by Shrek 2 and for about 15 years there were really only two materials: base and a specialized version for hair.
"base" was also an early physically based material, and would be at home in a PBR set-up today outside of a few features we had to have to help compensate for the fact that we weren't using any area lights - only point light sources back then. This all lasted until about 4 years ago when it was all rewritten for the new ray-tracer.
(That was Jono Gibbs btw).
Is this the 97 JGT paper? I am not seeing "uber" but I might be missing it. I am now curious about that etymology. https://www.ronenbarzel.org/papers/lighting.pdf
(Jono again)
Oh, interesting. The term "uber" was then just in the 2000 Advanced RenderMan book which includes an update to that 1997 paper. Ronen credits Larry Gritz for the implementation included, which is called "uberlight.sl". This is a light shader, not a material shader, but the notion that we should combine the various shaders which grew organically at the various studios into "uber" versions applies to lights as well as materials. So maybe Larry originated the term, or knows who did.
LG here. I wish I could give a definitive answer, but hey, that was getting close to 25 years ago!
It's possible that I was the true originator of the term? It's also possible that I heard it around from one or multiple sources that I've since forgotten, and my role was merely to help popularize the term via using it in the name of the shader I wrote to help illustrate Ronen's chapter of my book.
The fact that Jono and Dan remember "uber" from before the book's publication might mean that it was one of those ideas "in the air" before any of us wrote it down, probably something we were all joking about in the hallways at SIGGRAPH as we compared notes on what we were doing at different studios. It could also have been a term I heard from others at Pixar, though I'm pretty sure none of our shaders used that word as their name before I wrote uberlight; the term may have been used informally to describe the kind of shader that had a ton of functionality built into a single shader rather than having a dozen specialized separate shaders.
My recollection was that the whole point of an uber shader was to make artists' lives easier when they had to make changes to a scene. In the case of a light, for example, you may think you need a spotlight at first, then as the shot progressed through review, it became clear you wanted barndoor controls. It's a PITA to delete the light and add a new one, versus having the one light shader that could do both, and you just flipped a parameter. Even worse was the combinatorics: If you had one light shader that implemented blockers, and another light shader that implemented cookies, then the lighting artist would have to request a change if they had a situation that needed both, and such a shader didn't exist. So the shader writer would make one light shader that had all the options, that could be independently enabled or disabled.
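To make the combinatorics concrete, here is a toy sketch (my illustration, not uberlight.sl itself, and the feature names are just examples): one light with independently toggled features stands in for a whole matrix of specialized light shaders.

```cpp
// Toy sketch: one configurable light instead of 2^N specialized light shaders.
// The feature bodies are elided; only the structure matters here.
struct UberLight {
    bool  barndoors = false;   // clip the beam with four planes
    bool  blocker   = false;   // cut out a region of the light
    bool  cookie    = false;   // shape the beam with a projected texture
    float intensity = 1.0f;
};

float light_contribution(const UberLight& l /*, surface point, etc. */) {
    float c = l.intensity;
    if (l.barndoors) { /* attenuate outside the barndoor planes */ }
    if (l.blocker)   { /* attenuate inside the blocker region   */ }
    if (l.cookie)    { /* multiply by the projected cookie map  */ }
    // "I now also need barndoors" becomes a parameter flip, not a new shader.
    return c;
}
```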
One thing I am 100% certain of, though, is that it's not in any way related to hardware. The term was in wide use by the end of the 90's, long before there were shaders in hardware.
A little follow-up from a side conversation I was having with Jono about this...
I found a partial email exchange from March 1998 between myself, Ronen Barzel, and Tom Lokovic (who I think was my officemate at the time at Pixar), in which I appear to be asking them to give me feedback about the uberlight shader. I use the term without explanation, so clearly by early 1998 there was a collection of people who knew what that term meant without needing to define it.
That's the earliest "uber" appears in any email that I still have a record of.
I feel like it's the kind of term we would casually use while sitting around the lunch table talking about shaders. I have a feeling I'm not the first one to say it out loud, but I may well be the first to publish a shader with that name or use it in a public talk.
Sorry, one more bit of archeology: It appears that I distributed the uberlight shader source code as part of a SIGGRAPH 1998 course I taught, so that means lots of people had heard the term before the ARMan book was published in late 1999.
That doesn't really answer whether I had heard the term elsewhere prior to that, but it does push the date of wide introduction to the lingo of shader writers to no later than mid 1998.
Awesome archeology! Thank you!
About the word "uber": one year when I was on the SIGGRAPH papers committee (I think it was 2004 or 2005), I was the primary reviewer for an accepted paper that had the word "uber" in the title.
That was new for me at the time, and being European, slightly offensive ;-) I consulted with the chair, and he agreed the title should be changed. So the paper got a new title, removing the word uber.
The term was in regular use by the time hardware shading was complex enough to need to borrow it. In that context we used it to mean a shader that was highly configurable via either compile-time or uniform parameters. Usually compile-time was (and presumably still is) favored because you wanted all the optimizations the compiler could give you.
An ubershader in this environment usually requires some reflection API to be used post-compilation to figure out which resources (textures, uniforms, uniform buffers, per-vertex data, render targets) the main program needs to configure before rendering anything with the shader.
The main value of the highly configurable shader was that the contract between the application and the shader content was sophisticated enough to develop a lot of new shaders without having to rebuild or change the plumbing of the app. You could get some of the same effects by adhering to a strict contract with a bunch of simple shaders that work in that system. Effectively, that's what shaders with #define or compile time constants are, but having them all in the same source keeps the contracts from diverging.
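As a rough illustration of the compile-time approach (C++ with the C preprocessor standing in for HLSL/GLSL #defines; not any particular engine's system): one ubershader source is compiled many times with different flags, and each variant is pruned down to only the features it enables.

```cpp
// Toy compile-time-configured ubershader: the build system or shader cache
// compiles this one source many times with different -D flags, e.g.
//   -DENABLE_COAT, -DENABLE_SHEEN
// and a reflection step then reports which inputs each variant actually uses.
#include <cstdio>

struct Inputs { float base, coat, sheen; };

float shade(const Inputs& in) {
    float c = in.base;
#ifdef ENABLE_COAT
    c += in.coat;    // this lobe only exists in variants compiled with it
#endif
#ifdef ENABLE_SHEEN
    c += in.sheen;   // likewise for sheen
#endif
    return c;
}

int main() {
    std::printf("%.2f\n", shade({ 0.5f, 0.2f, 0.1f }));
    return 0;
}
```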
Peter,
One reason ubershaders became popular is their association with deferred shading. Deferred shading extends the Z-buffer into a more general G-buffer. While a Z-buffer only stores the depth of the closest surface at each pixel, a G-buffer also stores shader parameters (color, metalness, UVs, etc.). After all the polygons have gone through the G-buffer pass, shading is done: the G-buffer properties for each pixel are passed into the shader to determine color. This avoids computing colors for surfaces that may be hidden behind other surfaces. The tradeoff is that a G-buffer uses a lot of memory. Most major game engines have implemented deferred shading.
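A minimal sketch of that two-pass structure (my illustration, with a made-up G-buffer layout and stand-in shading, not any particular engine's):

```cpp
// Toy deferred shading: pass 1 keeps the closest fragment's shader parameters
// per pixel; pass 2 shades each covered pixel exactly once.
#include <cstdio>
#include <vector>

struct GBufferTexel {
    float depth = 1e30f;                 // "nothing here yet"
    float r = 0, g = 0, b = 0;           // base color
    float metalness = 0, roughness = 0;  // uber-shader parameters, not colors
};

struct Fragment { int x, y; float depth, r, g, b, metalness, roughness; };

int main() {
    const int W = 4, H = 4;
    std::vector<GBufferTexel> gbuf(W * H);

    // Pass 1 (geometry): depth-test fragments, store parameters of the winner.
    Fragment frags[] = { { 1, 1, 0.5f, 0.95f, 0.64f, 0.54f, 1.0f, 0.3f },
                         { 1, 1, 0.9f, 1.00f, 0.50f, 0.60f, 0.0f, 0.8f } };
    for (const Fragment& f : frags) {
        GBufferTexel& t = gbuf[f.y * W + f.x];
        if (f.depth < t.depth)
            t = { f.depth, f.r, f.g, f.b, f.metalness, f.roughness };
    }

    // Pass 2 (shading): hidden fragments were stored and overwritten, never shaded.
    for (const GBufferTexel& t : gbuf) {
        if (t.depth >= 1e30f) continue;                   // empty pixel
        float lit = t.r * (0.5f + 0.5f * t.metalness);    // stand-in shading
        std::printf("shaded pixel: %.3f\n", lit);
    }
    return 0;
}
```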
Zap here.
The Autodesk Standard Surface owes a fair chunk of its design to the 3ds Max Physical Material, which in turn was a piece of work I did based loosely on the Disney principled BRDF as well as some of the other materials that were popular at the time.
Back in the day I was personally trying to build this whole system where you were combining atomic BSDFs into actual usable materials.
It was very nifty and kinda like photoshop layers, where each layer decided how much of the below layers were visible. So you could layer a dielectric on top of a diffuse, and it became a coating.
But in the end, I basically found, a few things:
a) It was too easy to make nonsense materials
b) It needed user education - in the style of "everything has fresnel"
c) You effectively ended up starting with the same "glossy over diffuse" set of shading layers each time, and tended to always add the same stuff. You ended up using the same things all the time; it was only a matter of how many of them you were using.
Secondly, having a fully flexible system was somewhat tricky to implement for different rendering backends. A light-loop based HLSL shader is quite different from what a path tracer does, or what OSL closures do....
Basically, in a fit of frustration, I threw my nifty layering BSDF system in the trashcan, retired to a mountaintop, levitated a bit in the Lotus position and came up with the Physical Material.
At about that time, Arnold was moving to a more physically accurate set of shaders, and they effectively adopted the Physical Material under the name Standard Surface (an Arnold naming convention) and added a few things (thin film and sheen).
And then we started to try to standardize this, and advocate its use not only inside of Autodesk, but in standard systems like MaterialX et al.
Having a "black box" shader that "does everything you need, but not more", is actually much easier to implement. You can for example write your HLSL version once. The number of times you truly *need* totally flexible BSDF layering are actually few and far between. But where you need something pretty commong, reflecting what most real-worl materials do, something like Standard Surface helps a lot.
/Z
Oh - forgot to say.
The justification for making metalness a float is to allow it to be texture-mapped and support anti-aliased pixels.
Metalness should, strictly speaking, be either 1 or 0 (and in my previous similar work for mental ray, the mia_material, it was a boolean - but booleans were not textureable in mental ray), so the ability to texture it was the main motivator for it being a float. I always regret not putting in a warning if a user set it to something like a fixed value of 0.5.... :)
/Z