My previous post described how resampling a signal can cause aliasing problems. The worst problems occur when dramatically reducing the number of samples used to represent a signal, or when the source includes lots of high frequency detail. Specifically, there is a magic value called the Nyquist frequency, which is half the rate at which you are taking output samples. If the source signal contains information at a higher frequency than this threshold, you will have aliasing problems.

To put this another way: in order to avoid aliasing, you must take at least twice as many output samples as the finest detail in your input signal.
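The Nyquist threshold is easy to demonstrate numerically. Here is a minimal Python sketch (the 9 Hz and 10 Hz figures are illustrative choices, not from the post): sampling a 9 Hz sine at 10 samples per second, where the Nyquist frequency is only 5 Hz, produces exactly the same values as a negated 1 Hz sine. The high frequency has aliased down to a low one, and nothing in the samples can tell the two apart.

```python
import math

rate = 10.0  # output samples per second; Nyquist frequency is rate / 2 = 5 Hz

# Sample a 9 Hz sine -- well above Nyquist, so it must alias.
samples = [math.sin(2 * math.pi * 9 * (n / rate)) for n in range(10)]

# The alias appears as a 1 Hz tone: sin(2*pi*9*t), sampled at 10 Hz,
# matches -sin(2*pi*1*t) at every single sample point.
alias = [-math.sin(2 * math.pi * 1 * (n / rate)) for n in range(10)]

assert all(abs(a - b) < 1e-9 for a, b in zip(samples, alias))
```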

That leaves us with basically just two ways to avoid aliasing:

1. Take more output samples
2. Or have less fine detail in the input signal, either by:
   1. Smoothing (aka. blurring) the input data to remove fine detail
   2. Or smoothing (aka. filtering) on the fly as we read our sample values

Option 2.1 can often be applied to the input data as a preprocess, which makes it very efficient. But this is no good if we need to sample the data at different frequencies depending on the situation (for instance, textures often need to be scaled by different amounts depending on their distance from the camera). If we pre-smooth our input data to match the lowest frequency we will ever sample it at, the result will be excessively blurry when sampled at higher frequencies. Or if we pre-smooth to match a higher frequency, we will still get aliasing when sampling at the lower rate.
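A sketch of option 2.1 in Python, assuming a 1D signal stored as a list (`presmooth` and `downsample` are hypothetical names for illustration, not from the post). Note how the blur radius is baked in up front, which is exactly why one preprocessed copy cannot serve several different sampling rates:

```python
def presmooth(signal, radius):
    """Box-blur the input as a preprocess, removing detail finer than ~radius.
    A box filter is the simplest stand-in for any low-pass filter."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - radius)
        hi = min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def downsample(signal, step):
    """Point-sample every step-th value. Safe only if the signal was
    pre-smoothed to match this rate."""
    return signal[::step]

# Blur once up front; after that, each output sample costs a single read.
blurred = presmooth([0.0, 1.0] * 8, 2)
small = downsample(blurred, 4)
```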

Option 2.2 can dynamically adjust to different sampling frequencies, but tends to be expensive to implement, as it must average many different samples from slightly different locations in the input.
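Option 2.2 can be sketched the same way (`filtered_sample` is again a hypothetical name). The filter width is chosen per lookup, so it adapts to any sampling rate, but the cost per output sample grows with the width:

```python
def filtered_sample(signal, pos, radius):
    """Average several nearby input taps at read time. The radius can be
    picked per lookup to match the current output rate -- but every output
    sample now costs up to (2 * radius + 1) input reads, which is why
    this option tends to be expensive."""
    lo = max(0, pos - radius)
    hi = min(len(signal), pos + radius + 1)
    return sum(signal[lo:hi]) / (hi - lo)

# Downsample by 4, widening the filter to match the coarser output rate.
signal = [float(n % 2) for n in range(16)]
output = [filtered_sample(signal, i, 2) for i in range(0, 16, 4)]
```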

A third option is to declare this whole digital sampling business a mug’s game and refuse to play at all. In some situations it is possible to work entirely with mathematical equations, transforming one signal into another by applying mathematical transformations to the equations which describe them. This approach inherently avoids digital approximation, so will not produce any aliasing, but the math tends to get very complex. It is not widely used in realtime computer graphics, but many offline renderers (eg. RenderMan) work this way.
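To make the analytic idea concrete, here is a toy sketch assuming a signal simple enough to integrate in closed form: a step edge that is 0 below `edge` and 1 above it (`step_coverage` is a hypothetical name for illustration). A point sample of this edge can only ever return 0 or 1, but the closed-form integral over an interval returns the true fractional coverage directly, so there is no sampling step to alias:

```python
def step_coverage(edge, x0, x1):
    """Exact average of a step edge (0 below `edge`, 1 above) over [x0, x1],
    computed analytically rather than by point sampling."""
    if x1 <= edge:
        return 0.0
    if x0 >= edge:
        return 1.0
    return (x1 - edge) / (x1 - x0)

# An interval half covered by the edge gets exactly 0.5 -- a value no
# single point sample of the step could ever produce.
value = step_coverage(0.5, 0.0, 1.0)
```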

Ok, that’s enough theory. Next, let’s get practical…

This seemed like the perfect time to plug my ancient distance field sample (I'll do that anyway – http://www.xnainfo.com/content.php), but I just realized I'm not sure it actually fits into either 2 or 3. I guess it's something in between, reinterpreting the input as parameters for equations that interpolate smoothly and still preserve much detail. Anyway, while I'm rambling, wouldn't you agree it'd make for a great built-in font processor? 🙂

Could you give some examples of where Renderman is doing analytic stuff to avoid aliasing? You don't have to explain how Renderman is doing it, just where so I can go look it up in the spec or something.

I don't know Renderman well enough to have spec references, but my understanding was this is mostly in the shader functions, which are usually written to evaluate things like lighting by integrating a function over a region of space rather than as a single point sample.