I had an email this morning about Managed Direct3D 'breaking' the math functions in the CLR. The person who wrote it had discovered that this method:
public void AssertMath()
{
    double dMin = 0.54797677334988781;
    double dMax = 4.61816551621179;
    double dScale = 1 / (dMax - dMin);
    double dNewMax = 1 / dScale + dMin;
    Debug.Assert(dMax == dNewMax);
}
behaved differently depending on whether or not a Direct3D device had been created. It worked before the device was created, and failed afterwards. Naturally, he assumed this was a bug, and was concerned. Since I've had to answer questions like this multiple times now, it pretty much deserves its own blog entry.
The short of it is that this is caused by the floating point unit (FPU). When a Direct3D device is created, the runtime changes the FPU to suit its needs: by default it switches the FPU to single precision, while the CLR's default is double precision. Direct3D does this because single precision has better performance than double precision (naturally).
Now, the code above works before the device is created because the FPU is still running in double precision. Once you create a Direct3D device, the FPU is switched to single precision, and there are no longer enough digits of precision to carry out the calculation above exactly. Thus the 'failure'.
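You can see the same effect without Direct3D at all by redoing the arithmetic in C#'s float type. This is only a rough sketch of what the FPU switch does (the x87 single-precision mode isn't exactly the same as rounding every value to an IEEE float), but it shows how the equality survives in double precision and gets lost at roughly seven significant digits:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        // Double precision: enough significant digits for the
        // scale round trip to recover dMax.
        double dMin = 0.54797677334988781;
        double dMax = 4.61816551621179;
        double dScale = 1 / (dMax - dMin);
        double dNewMax = 1 / dScale + dMin;
        Console.WriteLine("double: {0}", dMax == dNewMax);

        // Single precision: each intermediate result is rounded to
        // roughly 7 significant digits, so the round trip no longer
        // reproduces the original value.
        float fMin = (float)dMin;
        float fMax = (float)dMax;
        float fScale = 1 / (fMax - fMin);
        float fNewMax = 1 / fScale + fMin;
        Console.WriteLine("float:  {0}", fMax == fNewMax);
    }
}
```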
Luckily, you can avoid all of this by simply telling Direct3D not to touch the FPU at all. When creating the device, pass the CreateFlags.FpuPreserve flag to keep the CLR's double precision and have your code behave as you expect.
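Device creation with the flag might look something like this. This is just a sketch: the adapter number, device type, vertex processing flag, present parameters, and the renderForm control are placeholders for whatever your application actually uses.

```csharp
using Microsoft.DirectX.Direct3D;

PresentParameters presentParams = new PresentParameters();
presentParams.Windowed = true;
presentParams.SwapEffect = SwapEffect.Discard;

// FpuPreserve tells Direct3D to leave the FPU in the CLR's
// default double precision state. renderForm is assumed to be
// the Form (or Control) you are rendering into.
Device device = new Device(0, DeviceType.Hardware, renderForm,
    CreateFlags.SoftwareVertexProcessing | CreateFlags.FpuPreserve,
    presentParams);
```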