Long time no post. But don’t worry, I’m not dead yet…
One of the more surprising things about texture filtering is how easily it can read past the edge of the texture. Consider a 4×1 texture containing four color values:
A B C D
If we draw this using SpriteBatch with an integer destination location and no scaling or rotation, these values will be copied directly to the screen. But if we draw with scaling, rotation, or a fractional destination location, the output colors will be computed by filtering between the four input values. As long as our source texture coordinates lie in the range 0-1, these filtering computations should never read past the edge of the texture, right?
(you can probably guess this isn’t right, on account of how much of this article is still to read 🙂)
Consider what happens if we scale up to cover an 8×1 destination region. The output pixels will be:
A (A+B)/2 B (B+C)/2 C (C+D)/2 D ???
What should go in that last cell?
- We could just repeat D again, but that seems kind of odd. If we are drawing a tile map with the same sprite repeated many times to cover a grid, it would cause an ugly discontinuity where one tile joins the next.
- We could continue the interpolation pattern, wrapping around to fill this last cell with (D+A)/2. This would look good along the joins between tiles, but not so good if we were drawing a character where it could cause colors from one side of the sprite to wrap around and appear on the other!
The right solution depends on what you are drawing, so there is no universally correct behavior. You can choose which you want by setting the SamplerState.AddressU and AddressV properties to TextureAddressMode.Clamp or TextureAddressMode.Wrap.
Many people assume these addressing modes are only important if your texture coordinates go outside the range 0-1, but as we see here, filtering can read past the edge of the texture even when the source texture coordinates lie inside it.
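The two address modes are easy to model in a few lines of Python. This is a simplified sketch of 1-D linear filtering, not actual GPU or XNA code, with the texel values A, B, C, D standing in as the numbers 10, 20, 30, 40:

```python
import math

def sample_1d(texels, x, mode):
    """Linearly filter a 1-D texture at texel-space position x.
    mode mimics TextureAddressMode: 'clamp' or 'wrap'."""
    i0 = math.floor(x)
    frac = x - i0
    if mode == 'clamp':
        addr = lambda i: min(max(i, 0), len(texels) - 1)
    else:  # wrap
        addr = lambda i: i % len(texels)
    return texels[addr(i0)] * (1 - frac) + texels[addr(i0 + 1)] * frac

texture = [10, 20, 30, 40]  # A, B, C, D

# Scaling 4 texels up to 8 output pixels samples at steps of half a texel.
clamp = [sample_1d(texture, j * 0.5, 'clamp') for j in range(8)]
wrap  = [sample_1d(texture, j * 0.5, 'wrap')  for j in range(8)]

print(clamp)  # [10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0, 40.0] -> last cell repeats D
print(wrap)   # [10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0, 25.0] -> last cell is (D+A)/2
```

The only place the two modes disagree is that final sample at position 3.5, which is exactly the "???" cell from the table above.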
AddressU and AddressV are usually set to the same value, but they do not have to be. For instance the sky.fx shader from our Generated Geometry sample sets AddressU to Wrap and AddressV to Clamp, because the sky is a cylinder which should wrap from left to right, but we do not want stray pixels from the bottom of the ground wrapping around and appearing directly overhead!
Scaling is not the only way to run into these issues. Consider what happens if we slide our sprite half a pixel sideways. Now the four output colors are:
(A+B)/2 (B+C)/2 (C+D)/2 ???
Same dilemma. Should that last cell contain D, or (D+A)/2?
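The same toy model answers this too: sliding the texture half a texel means sampling at positions 0.5, 1.5, 2.5, 3.5, and that last position needs a texel beyond D. Again, a hypothetical Python sketch rather than real GPU behavior, with A–D as 10–40:

```python
import math

def sample_1d(texels, x, mode):
    """Linear filtering with 'clamp' or 'wrap' addressing (a model, not GPU code)."""
    i0 = math.floor(x)
    frac = x - i0
    n = len(texels)
    addr = (lambda i: min(max(i, 0), n - 1)) if mode == 'clamp' else (lambda i: i % n)
    return texels[addr(i0)] * (1 - frac) + texels[addr(i0 + 1)] * frac

texture = [10, 20, 30, 40]  # A, B, C, D

# Half-texel shift: sample positions 0.5, 1.5, 2.5, 3.5.
shifted_clamp = [sample_1d(texture, j + 0.5, 'clamp') for j in range(4)]
shifted_wrap  = [sample_1d(texture, j + 0.5, 'wrap')  for j in range(4)]

print(shifted_clamp)  # [15.0, 25.0, 35.0, 40.0] -> last cell is (D+D)/2 = D
print(shifted_wrap)   # [15.0, 25.0, 35.0, 25.0] -> last cell is (D+A)/2
```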
Drawing at fractional positions is especially confusing because we intuitively expect the result to be something like:
A (half alpha) (A+B)/2 (B+C)/2 (C+D)/2 D (half alpha)
But that’s not how it works. Exercise for the reader: what would go wrong if it did?
I think the confusing part is that there isn’t actually any such thing as drawing at a fractional screen position. Screen pixels can either be written or left alone: you can’t change just half a pixel (at least not without alpha blending, which introduces its own set of problems). So when we ask the GPU to draw at a fractional location, it rounds to the nearest integer, then slides the texture in the opposite direction to make the output appear fractionally positioned. This produces coordinate values outside the original range.
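Here is a rough model of that snap-and-slide. The exact rasterization rules vary by API and are glossed over here; the assumed pixel columns are just for illustration, but the arithmetic shows why the sample positions end up past the edge:

```python
# Drawing a 4-texel-wide sprite at requested fractional position x = 0.5.
draw_x = 0.5

# The GPU can only write whole pixels; assume columns 1..4 get covered.
# Each written pixel maps back into the texture shifted by -draw_x.
sample_positions = [pixel - draw_x for pixel in range(1, 5)]

print(sample_positions)  # [0.5, 1.5, 2.5, 3.5]
# The last sample lands at 3.5, half a texel past D (texel 3),
# so filtering must fetch a texel that lies outside the source image.
```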
"But I want my sprite to be rendered like in your last example, with half alpha on both edges!"
Easy: just make sure the outermost pixels of your texture have zero alpha (give it a single pixel transparent border). Now when you filter using TextureAddressMode.Clamp, the edges will smoothly fade to zero no matter how things are positioned.
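A quick way to convince yourself this works, modeling just the alpha channel in Python (sample_clamp here is a stand-in for clamp-mode linear filtering, not an XNA API):

```python
import math

def sample_clamp(texels, x):
    """Linear sampling with clamp addressing, like TextureAddressMode.Clamp."""
    i0 = math.floor(x)
    frac = x - i0
    clamp = lambda i: min(max(i, 0), len(texels) - 1)
    return texels[clamp(i0)] * (1 - frac) + texels[clamp(i0 + 1)] * frac

# Alpha channel of a 4-texel sprite padded with a 1-texel transparent border.
alpha = [0.0, 1.0, 1.0, 1.0, 1.0, 0.0]

print(sample_clamp(alpha, 0.5))   # 0.5 -> the edge fades smoothly toward zero
print(sample_clamp(alpha, -2.0))  # 0.0 -> clamping past the border stays transparent
```

However far filtering reads past the sprite, clamp addressing only ever repeats the transparent border texel, so the edges fade out cleanly.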
"Couldn’t I do that automatically using TextureAddressMode.Border?"
Why yes, indeed you could. But border address mode is not supported by all graphics cards.
"What about rotation?"
Hopefully by now it will be obvious that filtered rotation can read from outside the source texture region, just like scaling and fractional positioning.