Texture filtering

Any time we draw graphics using textures, we must figure out how to map one grid of pixels (the texture) onto a different grid of pixels (the screen). This process is called filtering.


The Identity Transform

The pixel mapping is trivial if:

  • The texture is not being scaled
  • The texture is not rotated, or only rotated in 90 degree increments
  • The texture is being drawn at an integer destination location
  • The texture may be flipped horizontally or vertically (a flip just reverses the copy order, so it doesn't break the alignment)
  • There are no more complex transforms (shears, perspective, etc)

In this case the source and destination pixel grids line up exactly, so pixel values can be copied directly across.
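
A rough sketch of this trivial case in Python (illustrative only, with textures as plain lists of rows and made-up function names): every destination pixel is an exact copy of one source pixel, and a flip merely reverses the copy order.

```python
# Illustrative only: textures as plain lists of rows, no real rendering.
# In the identity case every destination pixel is an exact copy of one
# source pixel; flips merely reverse the copy order.

def blit_identity(texture, flip_h=False, flip_v=False):
    rows = texture[::-1] if flip_v else texture
    return [row[::-1] if flip_h else list(row) for row in rows]

tex = [[1, 2],
       [3, 4]]

assert blit_identity(tex) == [[1, 2], [3, 4]]               # straight copy
assert blit_identity(tex, flip_h=True) == [[2, 1], [4, 3]]  # mirror
```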


Filtering In Theory

When the source and destination pixel grids do not line up, we must somehow compute new destination pixel values from the source data. In mathematical terms, you can think of this as a two-step process:

  • Based on the source pixel values, try to guess what the original continuously varying analog image might have been
  • Quantize this imaginary analog image back down to digital format, sampling at the destination pixel locations based on our chosen definition of what a pixel is

Of course this isn't actually implemented as two stages: both parts are combined in a single computation.

It can sometimes also be useful to think of filtering as a choice between two different operations:

  • When scaling up an image, we must invent new destination pixel values by trying to guess what should go in between the source pixels
  • When scaling down an image, we must combine several source pixels to produce a single destination value

There are many ways this can be done. For instance, when I resize an image in Paint.NET I can choose between three filtering modes: nearest neighbor, bilinear, or bicubic. Many algorithms are tuned to work with specific types of image: for instance, hqx filtering gives great results when scaling up retro game graphics, but would be a terrible choice for digital photographs.
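
In one dimension the two operations look something like this (a toy Python sketch with hypothetical helper names, using simple averaging; real filters use more sophisticated weightings):

```python
# Toy 1D sketch of the two operations: scaling up invents in-between
# values, scaling down combines several source values into one.

def scale_up_2x(samples):
    out = []
    for a, b in zip(samples, samples[1:]):
        out.append(a)
        out.append((a + b) / 2)   # invented in-between value
    out.append(samples[-1])
    return out

def scale_down_2x(samples):
    # average pairs of source samples into one destination value
    return [(samples[i] + samples[i + 1]) / 2
            for i in range(0, len(samples) - 1, 2)]

assert scale_up_2x([0, 2, 4]) == [0, 1.0, 2, 3.0, 4]
assert scale_down_2x([0, 2, 4, 6]) == [1.0, 5.0]
```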


Filtering In Practice

Realtime graphics hardware supports just four filtering modes:

  • Point (also called nearest neighbor)
  • Linear (also called bilinear)
  • Mipmapping (also called trilinear)
  • Anisotropic

These are specified by three SamplerState properties:

  • MinFilter controls how textures are minified (scaled down)
  • MagFilter controls how textures are magnified (scaled up)
  • MipFilter controls whether mipmapping is used

The attentive reader may wonder why the TextureFilter enum also includes GaussianQuad and PyramidalQuad options. Ignore these: they're a historical legacy. I guess someone once thought they might be useful, but I don't know of any graphics cards that actually bother to implement them!

Note that these are just the built-in filtering modes. Any time a shader performs a texture lookup, the hardware automatically applies these filtering computations and returns a filtered value. If you want some other kind of filtering, you can implement it yourself in a shader by doing several texture lookups at slightly different locations to read the values of multiple source pixels, then applying your own filtering computation. This is more expensive than the built-in hardware filtering, but necessary if you want to use more advanced filters such as Gaussian blur or edge detection.
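
Here's that idea sketched in Python rather than shader code (hypothetical names; a real shader would do the same thing with texture lookups at offset coordinates): take several samples around the target location and combine them with your own weights.

```python
# Sketch of custom filtering via multiple lookups: several samples around
# the target location, combined with our own (Gaussian-like) weights.

def sample(texture, x):
    # clamp-to-edge point sample, standing in for a hardware texture fetch
    return texture[max(0, min(len(texture) - 1, x))]

def blurred_sample(texture, x):
    weights = (0.25, 0.5, 0.25)
    offsets = (-1, 0, 1)
    return sum(w * sample(texture, x + o) for w, o in zip(weights, offsets))

tex = [0, 0, 4, 0, 0]
assert blurred_sample(tex, 2) == 2.0   # 0.25*0 + 0.5*4 + 0.25*0
```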


Point Sampling

Point sampling uses a trivially simple filter function. For each destination pixel, it rounds to the closest matching location in the source pixel grid, then takes the value of that single source pixel. Implications:

  • When magnifying an image, pixels become large and blocky
  • When shrinking, many source pixel values are discarded, which causes aliasing and shimmery speckled noise
  • When drawing sprites at fractional destination positions, point sampling has the same effect as if you rounded to an integer location before drawing
  • When rotating sprites, some pixel locations will round up while others round down, which causes shimmery aliasing
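
In code, point sampling amounts to a round and a single array read. A minimal Python sketch (hypothetical names; real hardware works in texture coordinates and handles edges via the sampler's wrap/clamp addressing mode, which this glosses over):

```python
def point_sample(texture, u, v):
    # round to the nearest source pixel, clamping at the edges
    height, width = len(texture), len(texture[0])
    x = max(0, min(width - 1, round(u)))
    y = max(0, min(height - 1, round(v)))
    return texture[y][x]

tex = [[10, 20],
       [30, 40]]

assert point_sample(tex, 0.2, 0.1) == 10   # rounds to (0, 0)
assert point_sample(tex, 0.8, 0.1) == 20   # rounds to (1, 0)
```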

For example, here is a textured terrain using point sampling:


Note how the closest pixels (where the texture is magnified) are large and blocky, while the distant hills (where the texture is minified) look like they're covered in random green and brown noise as opposed to a proper texture! This noise looks especially ugly when the camera moves.


Linear Filtering

For each destination pixel, linear filtering works out the closest matching location in the source pixel grid. It then reads four pixel values from around this location, and interpolates between them. Implications:

  • When magnifying an image, it produces smooth gradients in between pixels
  • When shrinking, four source pixels are averaged to produce each destination value, so images can be shrunk to half size before any source pixels are discarded
  • When rotating or drawing sprites at fractional destination positions, averaging between adjacent pixels creates the illusion of smooth movement with subpixel accuracy, but at the cost of a slight blurring
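
A minimal Python sketch of the same computation (hypothetical names; real hardware samples at texel centers and honors the sampler's addressing mode, both of which this ignores): the fractional position controls how much weight each of the four neighbors receives.

```python
def linear_sample(texture, u, v):
    # read the four source pixels around (u, v) and blend by distance
    height, width = len(texture), len(texture[0])
    x0, y0 = int(u), int(v)
    x1, y1 = min(x0 + 1, width - 1), min(y0 + 1, height - 1)
    fx, fy = u - x0, v - y0
    top    = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

tex = [[0, 4],
       [8, 12]]

assert linear_sample(tex, 0.5, 0.0) == 2.0   # halfway between 0 and 4
assert linear_sample(tex, 0.5, 0.5) == 6.0   # center of all four pixels
```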

The same terrain as above, using linear filtering:


Note how the closest pixels are smoothed out, and the mid ground is less noisy than with point sampling, but the distant hills still look pretty nasty. This is because with a maximum of four source samples per destination (two vertical and two horizontal), we can only shrink to half size before we have to start skipping some source pixels. Scaling to less than half size still causes the same aliasing as with point sampling.

We could fix this by averaging more than four source pixels, right? But sampling pixel values is expensive, and graphics cards have to balance the conflicting goals of high visual quality and high performance. Linear filtering is a classic compromise: good enough to look great when used wisely, but still far from perfect.

Consider, for instance, this 3x3 image, scaled up using point sampling:


If I scale it up on my GPU using linear filtering, I get:


That's not horrible, but also not exactly beautiful. If I scale up the same image in Paint.NET using a bicubic filter, I get a much smoother result:


Paint.NET produces better results than my GPU because it uses a more sophisticated filter algorithm that examines more than four source pixels when computing each destination value.

Coming up: mipmaps, anisotropic filtering, and the challenges presented by alpha cutouts and sprite sheet borders...
