I am no expert on "uber shaders", but I am a fan. These did not make much sense to me until recently. First let's unpack the term a little bit. The term "shader" in graphics has become almost meaningless. To a first approximation it means "function". So a "geometry shader" is a function that modifies geometry. A "pixel shader" is a function that works on pixels. In context those terms might mean something more specific.
So "uber shader" is a general function? No.
An uber shader is a very specific kind of function: one that evaluates a very particular BRDF, usually in the context of a particular rendering API. The fact that it is a BRDF implies this is a "physically based" shader, so it is ironically much more restricted than a general shader. The "uber" refers to it being the "only material model you will ever need", and I think for most applications, that is true. The one I have the most familiarity with (the only one I have implemented) is the Autodesk Standard Surface.
First let's get a little history. Back in ancient times people would classify surfaces as "matte" or "shiny" and you would call a different function for each type of surface. Every surface would somehow have a name or pointer or whatever to code to call about lighting or rays or whatever. So they had different behavior. Here is a typical example of some materials we used in our renderer three decades ago:
But sometime in the late 1990s some movie studios started making a single shader that encompassed all of these as well as some other effects such as retro-reflection and sheen and subsurface scattering. (I don't know who came up with this idea first, but I think Sing-Choong Foo, one of the BRDF measurement and modeling pioneers that I overlapped with at Cornell, did one at PDI in the late 1990s... this may have been the first... please comment if you know anything about the history, which really ought to be documented.)
Here is the Autodesk version's conceptual graph of how the shader is composed:
So a bunch of different shaders are added in linear combinations, and the weights may be constants or may be functions. The graph looks a bit daunting, so let's show how you would make a metal (like copper!): first set opacity=1, coat=0, metalness=1. This causes most of the graph to be irrelevant:
So why has this, for the most part, won out over a menagerie of categorical shaders? Having implemented the above shader along with my colleague and friend Bob Alfieri, I really like it for streamlining software. Here is your shader black box! Further, you can point to the external document and get data in that format.
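The copper setting above can be sketched in a few lines. This is a toy, not the real Standard Surface: the parameter names loosely follow its spirit, the lobes are stand-in scalars rather than full BRDF evaluations, and the layering is simplified to straight linear blends.

```python
# Toy uber-shader skeleton: every material is the same function with
# different weights. Lobe values are plain scalars standing in for RGB
# BRDF evaluations; names are illustrative, not Autodesk's parameters.

def uber_brdf(metalness, coat, opacity, metal_lobe, dielectric_lobe, coat_lobe):
    """Blend the component lobes by the weight parameters."""
    base = (1.0 - metalness) * dielectric_lobe + metalness * metal_lobe
    surface = (1.0 - coat) * base + coat * coat_lobe
    return opacity * surface

# Copper-ish setup: opacity=1, coat=0, metalness=1 makes everything
# except the metal lobe irrelevant.
copper = uber_brdf(metalness=1.0, coat=0.0, opacity=1.0,
                   metal_lobe=0.95, dielectric_lobe=0.5, coat_lobe=0.04)
# copper == 0.95: only the metal lobe survives the weights
```

The point of the black box: the renderer calls one function for every surface, and "which material is this?" becomes "what are the weights?".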
But I suspect that is not the only reason uber shaders have taken over. Note that we could have set metalness=0.5 above. So this thing is half copper metal and half pink diffuse. Does that make any sense as a physical material? Probably not. And isn't the whole point of a BRDF to restrict us to physical materials? I think such unphysical combinations serve two purposes:
- Artistic expression. We usually do physically-based BRDF as a guide to keeping things plausible and robust. But an artistic production like a game or movie might look better with nonphysical combinations, so why not expose the knobs!
- LOD and antialiasing. A pixel or region of an object may cover more than one material. So the final pixel color should account for both BRDFs. Combining them in the shading calculation allows sparser sampling.
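The second purpose can be sketched concretely. Again a hedged toy with scalar stand-ins for RGB: a pixel straddling a metal/diffuse boundary gets a fractional metalness (e.g. the coverage fraction), and one shader evaluation returns the mixed result instead of forcing a per-sample choice between two shaders.

```python
# Toy lobes: scalars stand in for RGB BRDF values; names are illustrative.

def eval_metal(n_dot_l):      # "shiny copper" lobe
    return 0.95 * n_dot_l

def eval_diffuse(n_dot_l):    # "pink diffuse" lobe
    return 0.5 * n_dot_l

def shade(metalness, n_dot_l):
    # One evaluation returns the blend, so metalness can act as a
    # coverage weight when a pixel footprint spans two materials.
    return (1.0 - metalness) * eval_diffuse(n_dot_l) + metalness * eval_metal(n_dot_l)

half = shade(0.5, 1.0)   # half copper metal, half pink diffuse
```

Unphysical as a single material, but as a pixel average it is exactly what antialiasing wants, and it takes one sample instead of many.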
Finally, graphics needs to be fast both in development and in production, so the shader compiler ecosystem matters here too. I don't know so much about that, which is a credit to the compiler/language people who do :)