"So, I have this shader that does normalmapping, and this other shader that does skinned animation. How can I use them both to render an animated normalmapped character?"
Welcome, my friend, to one of the fundamental unsolved problems of graphics programming...
It seems a reasonable expectation that if you have two effects which work separately, it should be easy to combine them, no? After all, that's exactly what happens in a program like Photoshop. I can add a drop shadow, then a blur, then apply a contrast adjustment, and everything Just Works™. Why not the same for GPU shaders?
To understand the problem, we need to understand the underlying programming model. An app like Photoshop has a very simple model:
- Data is a bitmap image (a 2D array of color values)
- Filters are functions which take in one bitmap and output a different bitmap
- Any number of filters can be stacked by passing the output from one filter as the input of another
This programming model is conceptually simple, mostly because it only deals with one data type. It is trivial to chain filters together when their input and output are the same type.
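The single-type model can be sketched in a few lines of Python (the bitmap representation and filter names here are purely illustrative):

```python
# A sketch of the Photoshop-style model: every filter maps bitmap -> bitmap,
# so any sequence of filters composes automatically. (Names are illustrative;
# a "bitmap" is just a 2D list of grayscale values here.)

def brighten(bitmap):
    """A filter: takes a bitmap, returns a new bitmap."""
    return [[min(255, pixel + 40) for pixel in row] for row in bitmap]

def invert(bitmap):
    return [[255 - pixel for pixel in row] for row in bitmap]

def apply_filters(bitmap, filters):
    # Chaining works because every filter's input and output types match.
    for f in filters:
        bitmap = f(bitmap)
    return bitmap

image = [[0, 100], [200, 255]]
result = apply_filters(image, [brighten, invert])
```

Any filter can slot into the chain at any position, which is exactly the property GPU shaders lack.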
The GPU shader programming model is more flexible, and thus more complex. In the (slightly simplified) pipeline diagram, the blue boxes represent input data, the red boxes are customizable processing operations, and the yellow box is the final output. Let's walk through what happens each time you draw something, starting at the left of the diagram and working to the right:
- The GPU reads vertex data and indices
- These values are combined to form one or more triangles
- Your vertex shader program runs once for each vertex
  - Inputs: vertex data + effect parameter values
  - Outputs: position + colors + texture coordinate values
- The GPU takes the position values output by the vertex shader, and works out what screen pixels are covered by each triangle
- It interpolates color and texture coordinate values over the surface of the triangle, generating smooth gradients between the three corner vertices
- Your pixel shader program runs once for each pixel covered by the triangle
  - Inputs: colors and texture coordinates (produced by interpolating the vertex shader outputs) + effect parameter values + textures
  - Outputs: color
- The color produced by the pixel shader is combined with the previous color at that location in the rendertarget, using a user-specified blend function
- The resulting color is stored into the output rendertarget
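The stages above can be modeled on the CPU with a toy sketch (heavily simplified, and the function names are illustrative, not a real shader API). The point is the shape of the data flow: the vertex shader runs per vertex, the GPU interpolates its outputs, and the pixel shader runs per covered pixel.

```python
# Toy CPU model of the pipeline stages (illustrative names, not a real API).

def vertex_shader(vertex, params):
    # Inputs: vertex data + effect parameters. Outputs: position + color.
    x, y, color = vertex
    return {"position": (x * params["scale"], y * params["scale"]),
            "color": color}

def interpolate(a, b, t):
    # The GPU generates smooth gradients between vertex shader outputs.
    return a + (b - a) * t

def pixel_shader(interpolated_color, params):
    # Inputs: interpolated values + parameters (+ textures). Outputs: a color.
    return min(255, interpolated_color * params["brightness"])

# Two vertices along one triangle edge; shade three "pixels" between them.
v0 = vertex_shader((0, 0, 0), {"scale": 2})
v1 = vertex_shader((4, 0, 200), {"scale": 2})
colors = [pixel_shader(interpolate(v0["color"], v1["color"], t),
                       {"brightness": 1})
          for t in (0.0, 0.5, 1.0)]
```

Even in this toy version, notice how many distinct data types are in play compared with the single-bitmap Photoshop model.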
Yikes! Note that although the final output is a 2D bitmap (same as for a Photoshop filter), the input is a combination of vertex data, indices, effect parameters, and textures. The input and output types are not the same, which means there is no generalized way to pass the output from one shader as the input of another, and thus no way to automatically combine multiple shaders.
In fact, the only universal way to combine two shaders is to understand how each works individually, then write a new shader that contains all the functionality you are interested in. This is the price we pay for flexibility. Because shader programs can do so many different things in so many different ways, the right way to merge them is different for every pair of shaders. For instance, to use animation alongside normalmapping, it is necessary to animate the tangent vectors used by the normalmap computation, which requires changes to both the animation and normalmapping shader code.
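To make the skinning + normalmapping example concrete, here is a toy 2D sketch (illustrative Python, not real shader code) of why the merged shader must transform more than just positions: the skinning stage has to rotate the normal and tangent too, or the tangent basis used by the normalmap math is wrong for the animated pose.

```python
# Why merging needs a new shader: the skinning code must animate the tangent
# frame used later by the normalmapping code. (2D toy; names illustrative.)

def rotate90(v):
    # Stand-in for a bone transform: rotates a 2D vector 90 degrees.
    x, y = v
    return (-y, x)

def skin(position, normal, tangent, bone_transform):
    # A combined shader transforms ALL of these, not just the position.
    return (bone_transform(position),
            bone_transform(normal),
            bone_transform(tangent))

position, normal, tangent = (1, 0), (0, 1), (1, 0)
position, normal, tangent = skin(position, normal, tangent, rotate90)
# normal and tangent now match the animated pose, so downstream
# normalmapping code would light the surface in the correct tangent space.
```

If the original normalmapping shader assumed static tangents, neither it nor the animation shader can be reused unmodified; the merged version is genuinely new code.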
However, there are specific cases in which Photoshop style layering is possible, if you impose extra constraints on the programming model by restricting all your shaders to work in a similar way:
- When processing rectangular 2D regions (most often fullscreen), you can feed the output rendertarget from one drawing operation as an input texture to another. This is only possible when all the interesting work is done in the pixel shader, with SpriteBatch often used to provide the vertex data, indices, and vertex shader. Restricting the geometry pipeline to 2D quads enables Photoshop style composition using a separate shader per layer. Check out this sample for an example.
- If you have several pixel shaders which produce different color values for the same model, and the desired way to combine these colors is a simple arithmetic operation (typically addition, multiplication, or interpolation), you can draw several times directly to the backbuffer, one pass per shader, and use alpha blending to combine the shader outputs. The first pass will typically use opaque blending with standard depth buffer states, while subsequent passes use some other blend function, depth compare set to equal, and depth writes disabled. This can be a good solution for scenes with large numbers of lights, where each pass adds the contribution of a single light to the color already in the backbuffer. See this sample.
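The first technique above, chaining rendertargets, can be sketched on the CPU like this (illustrative names; a "texture" is just a 2D list, and each "draw" runs a pixel shader over every pixel of a fullscreen quad):

```python
# Sketch of rendertarget chaining: each pass's pixel shader samples the
# previous pass's rendertarget as its input texture. (Names illustrative.)

def draw_fullscreen(pixel_shader, input_texture):
    height, width = len(input_texture), len(input_texture[0])
    return [[pixel_shader(input_texture, x, y) for x in range(width)]
            for y in range(height)]

def blur_shader(texture, x, y):
    # Averages each pixel with its horizontal neighbours (clamped at edges).
    row = texture[y]
    samples = [row[max(0, x - 1)], row[x], row[min(len(row) - 1, x + 1)]]
    return sum(samples) / 3

def darken_shader(texture, x, y):
    return texture[y][x] * 0.5

scene = [[0, 90, 0], [90, 0, 90]]
pass1 = draw_fullscreen(blur_shader, scene)    # scene -> rendertarget A
pass2 = draw_fullscreen(darken_shader, pass1)  # rendertarget A -> backbuffer
```

Because every pass maps texture to texture, this restricted subset of the pipeline regains the composability of the Photoshop model.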
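The second technique, multipass rendering with alpha blending, can be sketched as follows (illustrative names; colors are simple per-channel values, and each pass stands in for drawing the same geometry with a different shader and blend state):

```python
# Sketch of multipass lighting: the first pass writes its color opaquely,
# then each later pass ADDS one light's contribution to the color already
# in the backbuffer. (Names illustrative.)

def blend_opaque(source, dest):
    return source                       # first pass: replace

def blend_additive(source, dest):
    return min(255, source + dest)      # later passes: accumulate

def draw_pass(backbuffer, shader_output, blend):
    return [blend(src, dst) for src, dst in zip(shader_output, backbuffer)]

backbuffer = [0, 0, 0]
ambient = [20, 20, 20]    # pass 1: ambient term, opaque blend
light_a = [100, 50, 0]    # pass 2: first light, additive blend
light_b = [0, 60, 200]    # pass 3: second light, additive blend

backbuffer = draw_pass(backbuffer, ambient, blend_opaque)
backbuffer = draw_pass(backbuffer, light_a, blend_additive)
backbuffer = draw_pass(backbuffer, light_b, blend_additive)
# backbuffer is now [120, 130, 220]
```

This only works because addition is the exact combining operation we wanted; the blend unit offers a small fixed menu of such operations, not arbitrary code.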
In a previous job I designed a system for automatically combining shader fragments in more flexible ways than are possible using rendertargets or alpha blending. This worked well, but it was complex both to implement and to extend with new shader fragments, and there were still many things it could not handle.
So there you have it. Combining shaders turns out to be harder than you might expect, and is usually a manual process. But on the plus side, I guess this makes good job security for us shader programmers 🙂