Back when multisampling first showed up, GPU hardware was slow, so triangle counts were low. Complex curved shapes were approximated by a small number of faceted triangles, each of which was big enough to cover many screen pixels.

This created an odd situation where early articles about multisampling focused on how it could smooth jaggies along the straight edges of these enormous triangles. We’d become so used to the faceted appearance of computer graphics that we could geek out over how beautifully antialiased our triangle edges were, without ever stopping to notice “*whoah, that dude’s entire head is made of just 5 triangles!*”

But hardware improved. Triangle counts went up. A high-end modern game typically uses somewhere between 4000 and 6000 triangles for a main character.

Consider a typical gameplay situation:

- The character is drawn in such a way that it occupies 1/8 the height of the screen
- The game is rendering at 720p, so the character is 90 pixels high
- Since characters are taller than they are wide, that works out to roughly 2000 screen pixels
- Assume half of our 6000 triangles are backfacing
- We are left with 3000 triangles, covering 2000 output pixels
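The arithmetic in that list is easy to check in a few lines (these are the example figures from above, not measured data):

```python
# Back-of-envelope version of the bullet points above, using the
# article's example figures (not measured data).
screen_height = 720                 # rendering at 720p
char_height = screen_height // 8    # character fills 1/8 of the screen height
char_pixels = 2000                  # rough on-screen footprint from above
triangles = 6000
front_facing = triangles // 2       # assume half are backfacing

print(char_height)                  # 90
print(front_facing)                 # 3000
print(front_facing / char_pixels)   # 1.5 triangles per output pixel
```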

With this much detail in our models, faceted geometry is a thing of the past. Instead we have a new problem: we just got Nyquisted!

Remember: to avoid aliasing we must take at least twice as many output samples as our highest frequency input signal. But here we have FEWER output pixels than input triangles. It is intuitively obvious that this must cause aliasing. With fewer pixels than triangles, some triangles will inevitably not appear in the output image at all. As the object moves, which triangles are lucky enough to get themselves a pixel will randomly vary, so the image shimmers with aliasing as individual triangles pop in and out of view.
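A tiny 1D sketch (my own illustration, not from the article) makes the popping concrete: lay 30 equal-width triangles across 20 pixels, sample at pixel centers, and see which triangles survive as the strip shifts by a fraction of a pixel:

```python
def visible_triangles(num_tris, num_pixels, offset):
    """Which of num_tris equal-width triangles, laid out in a 1D strip
    across num_pixels pixels, receive at least one pixel-center sample
    when the strip is shifted by `offset` pixels."""
    tri_width = num_pixels / num_tris   # less than 1 pixel per triangle
    seen = set()
    for px in range(num_pixels):
        center = px + 0.5 - offset      # pixel-center sample position
        tri = int(center // tri_width)  # which triangle it lands inside
        if 0 <= tri < num_tris:
            seen.add(tri)
    return seen

a = visible_triangles(30, 20, 0.0)
b = visible_triangles(30, 20, 0.3)   # sub-pixel camera motion
print(len(a))   # 20: at most 20 of the 30 triangles can get a pixel
print(a == b)   # False: a different subset survives after the shift
```

Which triangles are "lucky" is entirely down to where the pixel centers happen to fall, so a 0.3 pixel shift swaps in a different subset: that is the shimmer.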

In fact there is a narrow sweet spot for how many triangles a model should contain. For best quality, you want each triangle to cover exactly two screen pixels. If triangles are smaller than this, you cross the Nyquist threshold and get aliasing. If larger, your silhouette will appear polygonal and faceted.

It is not entirely intuitive that the sweet spot is two pixels per triangle as opposed to just one, yet this is true. Consider a circle being approximated as a series of straight line segments. As you increase the number of segments, the circle becomes more perfectly round. By the time each line segment reaches two pixels in length, the circle is perfect. Adding more line segments beyond this point will not give any improvement to the curvature of the shape.
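You can sanity-check this with the sagitta, the maximum gap between a chord and the arc it approximates, which shrinks with the square of the segment length. A quick sketch (my own illustration, not from the article):

```python
import math

def sagitta(radius, segment_length):
    """Maximum deviation of a chord of the given length from the circular
    arc it approximates (roughly segment_length**2 / (8 * radius))."""
    half_angle = math.asin(segment_length / (2 * radius))
    return radius * (1 - math.cos(half_angle))

# Approximating a 100-pixel-radius circle:
print(sagitta(100, 2))    # ~0.005 px: far below half a pixel, looks round
print(sagitta(100, 20))   # ~0.5 px: the faceting is on the edge of visibility
```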

It is obviously impractical to keep all our triangles exactly the same size on screen, given that models must be drawn at different sizes as they or the player move around the world. So how can we draw high detail geometry without aliasing?

Multisampling (or supersampling if you can afford it) helps by giving more headroom before we hit the Nyquist threshold, but this alone is not enough to avoid all geometry aliasing.

Normalmaps can be a powerful technique. We have great features for avoiding aliasing when resampling textures, so any time we can take fine detail out of our geometry and replace it with a normalmap, that will help control aliasing. We often think of normalmaps as purely a performance optimization (replacing expensive geometry with a cheap texture map) but they can also boost visual quality (replacing a data representation that cannot be easily filtered with one that supports anisotropic filtering and mipmaps).

Finally, it is important to consider level-of-detail model variants. When the object is far away, replacing that 6000 triangle model with a simpler 1000 or 500 triangle alternative will not only boost perf, but also reduce geometry aliasing artifacts.
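A typical LOD scheme simply switches models based on projected size; the thresholds and model names below are made-up examples for illustration, not recommendations:

```python
# Hypothetical LOD table: (minimum on-screen height in pixels, model name),
# sorted from most to least detailed.
LODS = [(200, "hero_6000_tris"), (80, "mid_1000_tris"), (0, "far_500_tris")]

def pick_lod(projected_height_px):
    """Return the most detailed model whose threshold the projected
    size still meets."""
    for min_height, model in LODS:
        if projected_height_px >= min_height:
            return model
    return LODS[-1][1]

print(pick_lod(400))   # hero_6000_tris
print(pick_lod(90))    # mid_1000_tris
print(pick_lod(30))    # far_500_tris
```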

*Moral: when it comes to triangle counts, more is not always better! Beware of trying to draw more triangles than you have screen pixels to represent. That way lieth the land of aliasing.*

What about the WP7 gpu? How many polys are too many? I have an animated model with about 500 polys and I would like to have 8 of them walking around at once. Would I have to radically reduce the poly count? Will there be any difference to the cpu/gpu skinning overhead if I animate all 8 models at once (all being at the same frame of the animation)?

Interesting stuff – I guess the 2px ideal means both horizontally and vertically, since images / the screen are a sampling system along two axes.

Do you know offhand if the benefits of texture filtering (anisotropic, bilinear, etc.) fall apart when the triangles are pixel size or smaller? E.g. would a UV mapped texture still get smoothly filtered across a grid of sub-pixel triangles as if it were a single quad, or do you start getting moire and sparkly junk?

Great question Luke!

Texture filtering itself works fine when triangles are very small. Geometry aliasing is a subtly different issue: the size of each triangle must get rounded either up to one pixel or down to zero, and that can cause all those aliasing problems regardless of how well the texture itself was filtered. It won't help to have the texture hardware making a great choice of what mip level to use for that 1×1 triangle, if this entire triangle is randomly flickering in and out of existence as the camera moves!

Think of it this way. Draw a textured quad at exact size, so each texel maps to exactly one screen pixel. Now shrink it to half size. The texture filtering hardware will realize that each output pixel is covering a 2×2 region of the source texture, so it will select the half size mip level and all is good.

Now consider if you instead drew this same image as a grid of 1×1 quads, each mapped to a single texel of the source image. At the original size, this will produce identical results to the first, single-quad version. But if you scale it to half size, each of those 1×1 quads now covers only a fraction of an output pixel! That's not possible, so what actually happens is that some of the quads round down to zero size and are discarded, while the others are rounded up to cover an entire output pixel.

The mip selection hardware is smart enough to realize that it should still be using the half-size mip level, but instead of texture coordinates covering a 2×2 region of the source image, it now has a single quad, rounded up to one pixel in size, whose texture coordinates cover only a 1×1 portion of the source image. Exactly which quads are discarded vs. rendered is basically random, and will change as the camera or object moves, so you get aliasing as the texture coordinates bounce around unpredictably from one frame to the next.
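That texture-coordinate bouncing can be sketched in 1D (my own illustration of the scenario above): a row of one-texel quads drawn at half size, where each output pixel center lands inside exactly one quad and inherits that quad's texel coordinate:

```python
def sampled_texels(num_texels, offset):
    """Draw a row of num_texels one-texel quads at half size, i.e. into
    num_texels // 2 output pixels. Return the texel index each output
    pixel ends up sampling, for a given sub-pixel scroll offset."""
    out = []
    for px in range(num_texels // 2):
        center = (px + 0.5) * 2 - offset   # back-project to quad space
        out.append(int(center))            # which one-texel quad owns it
    return out

print(sampled_texels(16, 0.0))   # [1, 3, 5, 7, 9, 11, 13, 15]
print(sampled_texels(16, 0.6))   # [0, 2, 4, 6, 8, 10, 12, 14]
```

Half the texels are skipped outright, and which half survives flips with sub-pixel motion, which is exactly the frame-to-frame sparkle described above.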

This is only a problem when your triangles get so small that some of them may not be rendered at all.

Turning geometry into normal maps does help with aliasing, in the sense that high-frequency details (tiny bits of geometry) get turned into samples that are band-limited. I found pages 194-199 of the book "The Computer Image" helpful for understanding this, especially page 199, where the author notes that you can get essentially infinitely high frequencies when sampling a synthetic scene, since your samples can change drastically with a tiny shift in position. He points out that we can't really apply the Nyquist limit when sampling synthetic geometry. Turning the geometry into a normal map risks losing details (super-fine details will either randomly get captured or not), so you're just shifting the problem to the map's generation. But once you have the map (and some reasonable way to filter it – you can't simply average, like with color maps), you know you can then avoid aliasing pops and whatnot.
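The "you can't simply average" point is easy to demonstrate (my own sketch, not from the comment): component-wise averaging of unit normals, which is what ordinary color-map mipmap generation does, yields a shortened vector. Simply renormalizing it throws away the bumpiness that the shortening encodes; techniques such as Toksvig filtering instead use that lost length to widen the specular lobe.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

# Two unit normals from neighbouring texels of a bumpy surface.
n1 = normalize((0.5, 0.0, 1.0))
n2 = normalize((-0.5, 0.0, 1.0))

# Component-wise average, as color-map mipmapping would compute it:
avg = tuple((a + b) / 2 for a, b in zip(n1, n2))
print(math.sqrt(sum(c * c for c in avg)))   # ~0.894, no longer unit length
```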

BTW, you might like this related blog post: blog.selfshadow.com/…/specular-showdown