Here’s a subtle improvement I bet you would never notice if I didn’t point it out, but which should save much pulling-out-of-hair for those who previously ran into this issue…
XNA supports two different color formats: byte values ranging from 0 to 255, or floating point values ranging from 0 to 1. The Color struct had constructor overloads accepting either format:
```csharp
Color(byte r, byte g, byte b);
Color(float r, float g, float b);
```
Well and good, until someone tries something like:
```csharp
Color x, y;
Color z = new Color(x.R + y.R, x.G + y.G, x.B + y.B);
```
That seems straightforward, but falls foul of an unfortunate interaction between two parts of the C# type system:
- When you do math on 8-bit or 16-bit types, C# automatically promotes the result to a 32-bit integer. Although x.R and y.R are bytes, the result of x.R + y.R is an int.
- When you pass int values to a method that has byte and float overloads, C# overload resolution chooses the float version: there is no implicit conversion from int to byte, and int -> float is an implicit conversion, so only the float overload is applicable.
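This interaction is easy to reproduce outside XNA. The sketch below uses a hypothetical Paint class (an illustrative stand-in, not the real XNA Color type) with the same pair of overloads, and reports which one gets called:

```csharp
using System;

class Paint
{
    // Same overload pair as the original XNA Color constructors
    // (the class and method names here are illustrative only).
    public static string Create(byte r, byte g, byte b) => "byte overload";
    public static string Create(float r, float g, float b) => "float overload";
}

class Program
{
    static void Main()
    {
        byte r1 = 100, r2 = 100;

        // Passing bytes directly is an exact match for the byte overload.
        Console.WriteLine(Paint.Create(r1, r2, r1));                 // byte overload

        // Arithmetic on bytes yields int. There is no implicit int -> byte
        // conversion, but there is an implicit int -> float conversion,
        // so the float overload wins -- with values 255x too large.
        Console.WriteLine(Paint.Create(r1 + r2, r1 + r2, r1 + r2)); // float overload
    }
}
```

Note that r1 + r2 is an int even though both operands are bytes and the result would fit in a byte; the implicit int-to-byte conversion only exists for constant expressions.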
Result: your 0-255 byte values are silently interpreted on the 0-1 float scale, so everything ends up 255 times larger than you intended. Colors saturate to pure white. Attempted alpha fades saturate to fully opaque. Confusion ensues.
In Game Studio 4.0, we changed the Color constructor overloads to take ints rather than bytes:
```csharp
Color(int r, int g, int b);
Color(float r, float g, float b);
```
- If you pass bytes, you get the int version, which gives the same result as before
- If you pass floats, you get the float version
- If you pass the result of doing math on bytes, you get the int version, which was what you wanted all along!
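Rerunning the earlier sketch with the 4.0-style signatures shows why the fix works. Again, Paint is an illustrative stand-in for the real Color type:

```csharp
using System;

class Paint
{
    // 4.0-style overloads: int replaces byte
    // (class and method names are illustrative only).
    public static string Create(int r, int g, int b) => "int overload";
    public static string Create(float r, float g, float b) => "float overload";
}

class Program
{
    static void Main()
    {
        byte r1 = 100, r2 = 100;

        // byte -> int is implicit, and int is a better conversion target
        // than float (int converts implicitly to float, not vice versa),
        // so byte arguments and byte arithmetic both land here.
        Console.WriteLine(Paint.Create(r1, r2, r1));                 // int overload
        Console.WriteLine(Paint.Create(r1 + r2, r1 + r2, r1 + r2)); // int overload

        // Floats are an exact match for the float overload, as before.
        Console.WriteLine(Paint.Create(0.5f, 0.5f, 0.5f));           // float overload
    }
}
```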