Remember, my complete blog is now at: http://www.SebbyLive.com/ (http://www.sebbylive.com/post/2007/10/FSX-AccelerationSP2---DX9-Bloom-Versus-DX10-HDR.aspx)
Yay! Time for another post about some of the upcoming graphical changes in FSX SP2/Acceleration. This time I take a look at the topic of Bloom (as in DX9) vs HDR (High Dynamic Range, which is DX10 only). From some of the forum posts I have read, there seems to be a lot of speculation as to what the difference really is, and whether there is actually any difference at all (you just have to love those conspiracy theories). The point of this post is to give a side-by-side comparison of the two approaches, and then to briefly discuss what the differences actually are from a more technical point of view...
The two videos below compare the DX9 Bloom with the DX10 HDR...
DX10 HDR (Download Here)
DX9 Bloom (Download Here)
If you take a close look at the videos, you can see some visible but subtle differences between the two clips... But the hot question is: what really happens behind the scenes?
Although the techniques used in DX9 and DX10 are different, they both aim to reproduce the same effect. To keep this simple, bloom or HDR is the effect that occurs when lights of various intensities affect the viewer (generally by darkening or oversaturating the image). The best way to picture the effect is to imagine that you are driving on a bright sunny day and that you approach a dark tunnel. From a distance, the inside of the tunnel appears black, because the bright intensity of the sun forces our eyes to adjust themselves to the ambient light intensity. This means that even though there is light inside the tunnel, our eyes are adjusted to brighter intensities and the lesser intensities get filtered out (essentially the same thing as playing with the exposure setting on a camera). The reverse is also true... If you are inside the tunnel looking towards the exit, it will generally appear quite bright, mostly white and washed out. This is due to the reverse process: the overall ambient lighting is low, the eye adjusts itself accordingly, and any bright source of light will have a tendency to appear saturated.
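The tunnel analogy boils down to a single multiply-and-clamp, which can be sketched in a few lines. This is just an illustration, not engine code; the luminance and exposure values below are made up for the example.

```python
def expose(luminance, exposure):
    """Exposure-style adaptation: scale scene luminance by the viewer's
    adaptation level, then clamp to the displayable 0..1 range."""
    return max(0.0, min(1.0, luminance * exposure))

# Eyes adapted to bright sunlight (low exposure): the dim tunnel interior
# collapses toward black.
print(expose(0.05, 0.2))   # 0.01 -> reads as black

# Eyes adapted to the dark tunnel (high exposure): the bright exit
# saturates and washes out to white.
print(expose(5.0, 4.0))    # clamps to 1.0 -> washed out
```

The same scene luminance lands at opposite ends of the displayable range depending purely on the adaptation level, which is exactly the effect the rendering tries to mimic.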
An exact, detailed technical description of what HDR is and how it can be represented could get quite long. Maybe the best and easiest approach is to simply refer to and quote the relevant Wikipedia article (http://en.wikipedia.org/wiki/High_dynamic_range_rendering):
Preservation of detail in large contrast differences
One of the primary features of HDRR is that both dark and bright areas of a scene can be accurately represented. Without HDR (sometimes called low dynamic range, or LDR, in comparison), areas that are too dark are clipped to black and areas that are too bright are clipped to white. These are represented by the hardware as floating point values of 0.0 and 1.0 for pure black and pure white, respectively.
Graphics processor company nVIDIA summarizes one of HDRR's features in three points:
Bright things can be really bright
Dark things can be really dark
And details can be seen in both
Accurate preservation of light
Without HDRR, the sun and most lights are clipped to 100% (1.0 in the framebuffer). When this light is reflected the result must then be less than or equal to 1, since the reflected value is calculated by multiplying the original value by the surface reflectiveness, usually in the range 0 to 1. This gives the impression that the scene is dull or bland. However, using HDRR, the light produced by the sun and other lights can be represented with appropriately high values, exceeding the 1.0 clamping limit in the frame buffer, with the sun possibly being stored as high as 60000. When the light from them is reflected it will remain relatively high (even for very poor reflectors), which will be clipped to white or properly tonemapped when rendered.
Likewise when light passes through a transparent material, the light that passes through has a lower brightness than when the light entered. An example of the differences between HDR & LDR rendering can be seen in the images to the right, from Valve's Half-Life 2: Lost Coast which uses their game engine "Valve Source". In the example pictures, with low dynamic range rendering, much less light passes through the stained glass, causing the scene to be darker. The reason for this is that when light passes through a transparent material, it lowers the light’s brightness. In a simple example, say the stained glass can block 40% of the light. Since the highest value of the low dynamic range light is 1.0, this means a brightness of 0.6 is illuminating the other side. The high dynamic range light is perhaps 100, which means a brightness of 60 is illuminating the other side.
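The stained-glass arithmetic from the quoted passage is worth seeing side by side. A minimal sketch (the HDR sun value of 100 comes straight from the quote; everything else is just the multiplication it describes):

```python
transmittance = 0.6   # the glass blocks 40% of the incoming light

# LDR: the sun is clipped to the framebuffer maximum of 1.0 before
# the transmittance multiply, so the interior ends up dim.
ldr_sun = 1.0
ldr_through = ldr_sun * transmittance   # 0.6

# HDR: the sun keeps a physically large value through the multiply;
# clamping to the display range happens only at the very end.
hdr_sun = 100.0
hdr_through = hdr_sun * transmittance   # 60.0 -> still extremely bright
displayed = min(hdr_through, 1.0)       # 1.0 -> the window still glows white

print(ldr_through, displayed)
```

The key point is the order of operations: LDR clamps first and multiplies second, so bright sources lose their energy before it can propagate through the scene.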
As I have mentioned, both the DX9 and the DX10 approach aim to accomplish the same goal but use different approaches.
DirectX 10 VS DirectX 9:
The easiest implementation is actually the one taken under DirectX 10. As mentioned above, true lighting intensities cannot be represented by a simple 0->1 range, and DX10 guarantees floating-point texture support. The implementation takes advantage of this feature and does all the rendering directly to a floating-point texture. We then use this texture and apply standard texturing operations such as a blur filter (to create a bloom/glow) and a star filter, which is used to create the star-like patterns that generally occur when light traverses an imperfect medium such as glass.
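To make the pipeline concrete, here is a deliberately tiny 1-D sketch of that idea: render into a floating-point buffer (values may exceed 1.0), pull out the over-bright energy, blur it, and add the glow back before clamping for display. The real engine works on 2-D textures with HLSL shaders; the threshold and 3-tap box filter here are stand-ins I chose for illustration, not FSX's actual kernels.

```python
def bright_pass(buf, threshold=1.0):
    # Keep only the over-bright energy that should bloom.
    return [max(0.0, v - threshold) for v in buf]

def box_blur(buf):
    # 3-tap box filter standing in for a real separable Gaussian blur.
    out = []
    for i in range(len(buf)):
        window = buf[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out

def tone_map(buf):
    # Final clamp to the displayable 0..1 range.
    return [min(1.0, v) for v in buf]

hdr_buffer = [0.2, 0.3, 8.0, 0.3, 0.2]   # one very bright pixel (e.g. the sun)
glow = box_blur(bright_pass(hdr_buffer))
final = tone_map([base + g for base, g in zip(hdr_buffer, glow)])
print(final)   # neighbours of the bright pixel get lifted: that is the bloom
```

Because the buffer can hold 8.0 rather than a clipped 1.0, the bright pixel has enough leftover energy after the bright pass to visibly bleed into its neighbours.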
Under DirectX 9, the back-end (blurring and filtering) is mostly the same. The main difference is that we do not use floating-point textures. I can already hear the conspiracy theories: that we ignored the DX9 support for floating-point textures in order to force users to upgrade to Vista and purchase new hardware. This is far from true...
The main problem under DX9 is that support for floating-point textures was introduced late and is an optional feature (although most Shader 3.0 cards have some support for it). In addition, alpha blending and filtering on floating-point textures are not required by DX9, and most cards do not support them. These features are not strictly necessary, but they can make a huge performance difference when we do the actual blurring and filtering passes (without them, we would potentially have to filter the texture manually within the shader, which is not the most efficient approach). Since proper floating-point texture support was so sparse under DX9, and DX10 guaranteed support for the features we needed to implement proper HDR, we decided to keep the floating-point implementation DX10-only.
So what do we do under DX9? We separate our Bloom rendering into two distinct passes, which are applied only to blooming objects. In the first pass, we capture the color of the pixel as it would appear during regular rendering. In the second pass, instead of capturing the color of a pixel, we capture its intensity (which is in essence a multiplier on the base color). We then use these two sets of information to complete the bloom filtering in a similar way to the DX10 HDR. The main advantage of this approach is that it can be implemented on any video card that has support for shaders and render targets. There are a few drawbacks:
The intensity map is sampled at a lower resolution (1/4 resolution) in order to save bandwidth.
There is a small overhead due to the need to render two passes for every Bloom-emitting object.
Using an 8-bit value to represent the intensity does not give us as much fine-grained detail as a floating-point texture. Consequently, the DX9 Bloom has a tendency to get "hot" really fast and has less of the warm glow feeling that the DX10 HDR effect gives.
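That last drawback, the coarseness of an 8-bit intensity, is easy to demonstrate. This sketch quantizes the intensity multiplier into one byte; the maximum multiplier of 16.0 and the encoding are assumptions I picked for illustration, not the values FSX actually uses.

```python
MAX_MULTIPLIER = 16.0   # assumed top of the intensity range (illustrative)

def encode_intensity(multiplier):
    # Quantize the multiplier into one byte (0..255), as an 8-bit
    # render target forces us to do.
    return round(min(multiplier, MAX_MULTIPLIER) / MAX_MULTIPLIER * 255)

def decode_intensity(byte):
    # Recover the multiplier the shader will actually see.
    return byte / 255 * MAX_MULTIPLIER

# A floating-point texture could carry 1.03 through untouched; the 8-bit
# path snaps it to the nearest representable step.
stored = decode_intensity(encode_intensity(1.03))
print(stored)                 # ~1.004, the 1.03 is gone
print(MAX_MULTIPLIER / 255)   # ~0.063: the smallest intensity step available
```

With steps that coarse across the whole intensity range, neighbouring pixels jump between discrete bloom levels instead of ramping smoothly, which is one way to read "gets hot really fast" compared to the continuous floating-point values available under DX10.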