Rendertarget changes in XNA Game Studio 2.0

The bad news:

If you had a program using rendertargets that worked with the XNA Framework 1.0, it might not still work with 2.0.


The good news:

Things are actually much more consistent now, honest!

Let me explain...


How rendertargets used to work (1.0)

On Windows:

  • Each rendertarget lives in a separate piece of video memory
  • After you select the rendertarget, you can draw onto that video memory
  • When you are done drawing, you call GetTexture to reuse that same area of video memory as a texture
  • You can draw onto the same rendertarget as many times as you like, and its contents will always remain valid
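In rough code, that Windows flow looks something like this (a sketch only: myRenderTarget, DrawScene, and spriteBatch are illustrative names, and the exact 1.0-era signatures may differ slightly):

```csharp
// Select the rendertarget, then draw into its area of video memory.
device.SetRenderTarget(0, myRenderTarget);
device.Clear(Color.Black);
DrawScene();

// Switch back to the backbuffer, then reuse that memory as a texture.
device.SetRenderTarget(0, null);
Texture2D texture = myRenderTarget.GetTexture();

spriteBatch.Begin();
spriteBatch.Draw(texture, Vector2.Zero, Color.White);
spriteBatch.End();
```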

On Xbox:

  • All rendertargets share a single special piece of EDRAM memory
  • This means only one of them can physically exist at a time
  • When you finish drawing to a rendertarget, the GraphicsDevice.ResolveRenderTarget method copies from EDRAM to a separate area of texture memory
  • You can then use this texture in any way you like
  • But EDRAM is now being reused by some other rendertarget!
  • This won't work like you expect:
    • Draw to backbuffer (EDRAM contains what you just drew)
    • Switch to rendertarget
    • Draw to rendertarget (EDRAM contains what you just drew)
    • Resolve rendertarget (RenderTarget.GetTexture() contains a copy of what you drew)
    • Switch back to backbuffer (problem! the act of selecting a different rendertarget has overwritten what you previously drew to the backbuffer, so the EDRAM no longer contains that backbuffer image)
  • The rules in summary:
    • Any time you change rendertarget, the contents of EDRAM are overwritten, so all previous rendertargets (including the backbuffer) are clobbered
    • Rendertarget data which was resolved into the associated texture remains valid, however
    • This is ok:
      • Draw to rendertarget A
      • Draw to rendertarget B
      • Draw textures from rendertargets A and B onto rendertarget C
    • But this is not:
      • Draw to rendertarget A
      • Draw to rendertarget B
      • Switch back to A and continue drawing over the top of it
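The "ok" pattern can be sketched like so (rtA, rtB, rtC and the Draw helpers are made-up names; this uses the 2.0-style API, where the resolve happens automatically when you switch targets):

```csharp
device.SetRenderTarget(0, rtA);
DrawSceneA();                      // rtA's resolved texture stays valid...

device.SetRenderTarget(0, rtB);    // ...even though this switch clobbers EDRAM
DrawSceneB();

device.SetRenderTarget(0, rtC);
spriteBatch.Begin();
spriteBatch.Draw(rtA.GetTexture(), Vector2.Zero, Color.White);
spriteBatch.Draw(rtB.GetTexture(), Vector2.Zero, Color.White);
spriteBatch.End();
device.SetRenderTarget(0, null);

// The "not ok" version would now call device.SetRenderTarget(0, rtA) and draw
// over the top of it, expecting the earlier contents to still be in EDRAM.
```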

Problem with the 1.0 behavior:

It was far too easy to write a program that worked fine on one platform, but then rendered incorrectly when run on the other!


How rendertargets work now (2.0)

By default:

  • You get what used to be the Xbox behavior
  • On Xbox, it works exactly the same as before
  • On Windows, we automatically clear your rendertargets at the right times to emulate the Xbox behavior
  • This is fast on both platforms (Clear is very cheap)

If you don't like that default:

  • You can specify a different RenderTargetUsage
    • RenderTarget2D constructor parameter
    • To change it for the backbuffer, use the GraphicsDeviceManager.PreparingDeviceSettings event to alter GraphicsDeviceInformation.PresentationParameters.RenderTargetUsage
  • Specify RenderTargetUsage.PreserveContents to get what used to be the Windows behavior
    • Works exactly the same as before on Windows
    • On Xbox, we automatically copy data back from the resolved texture into EDRAM to restore its contents when you change rendertarget
    • This is not cheap! Use it if you must, but be aware of the performance penalty
  • Specify RenderTargetUsage.PlatformContents to get the exact same behavior as 1.0, which is different on Xbox versus Windows
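A sketch of both approaches (written in 2.0-era C#, so a named handler rather than a lambda; treat the exact RenderTarget2D constructor overload as an assumption):

```csharp
// In your Game constructor: hook the event so you can alter the
// backbuffer's RenderTargetUsage before the device is created.
graphics = new GraphicsDeviceManager(this);
graphics.PreparingDeviceSettings += PrepareDeviceSettings;

void PrepareDeviceSettings(object sender, PreparingDeviceSettingsEventArgs e)
{
    e.GraphicsDeviceInformation.PresentationParameters.RenderTargetUsage =
        RenderTargetUsage.PreserveContents;
}

// For an individual rendertarget, pass the usage to the constructor:
RenderTarget2D rt = new RenderTarget2D(GraphicsDevice, 256, 256, 1,
    SurfaceFormat.Color, RenderTargetUsage.PreserveContents);
```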

Shawn recommends:

  • If at all possible, use the default RenderTargetUsage.DiscardContents mode. This gives good performance and consistent behavior on both platforms.

Other good stuff:

  • In 2.0, you no longer need to call the GraphicsDevice.ResolveRenderTarget method. In fact you can't, because we removed it. We now do this automatically when you switch away from the rendertarget.
  • In 2.0, we now support multiple simultaneous rendertargets (MRT) on Xbox.
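A minimal MRT sketch (rtColor and rtNormal are illustrative names; your pixel shader would write to the COLOR0 and COLOR1 outputs):

```csharp
// Bind one target per index; a single draw pass fills both of them.
device.SetRenderTarget(0, rtColor);
device.SetRenderTarget(1, rtNormal);

DrawGeometry();

device.SetRenderTarget(0, null);
device.SetRenderTarget(1, null);
```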

Comments (17)

  1. MartSlot says:

    Great post Shawn, this really helped me understand the differences! 🙂

    If you happen to have some time left some day, could you write about how you do the MRT stuff on the 360 in XNA 2.0? I know it looks and works the same on 360 and pc in the XNA framework. I’m really curious as to how it works inside the framework itself, because afaik, the 360 does not support MRT like the pc does.

  2. Ultrahead says:

    "This is ok … But this is not"

    That example is plain and simple, so it explains it all quite ok. Always works … thanks!


    Is there any extreme case where this cannot be avoided? I’ll follow your advice and use "DiscardContents", but I’m asking for learning purposes …

    "… you no longer need to call the GraphicsDevice.ResolveRenderTarget method …"

    So no more situations like: "Why am I getting a blank …?" / "Did you resolve the render target?".

    "I’m really curious as to how it works inside the framework itself, because afaik, the 360 does not support MRT like the pc does."


  3. ShawnHargreaves says:

    > afaik, the 360 does not support MRT like the pc does.

    Easy: the 360 does support MRT in hardware! We just didn’t expose that in v1, because we didn’t have time to finish all the (rather large and complex) driver code needed to actually make it work.

  4. ShawnHargreaves says:

    > "RenderTargetUsage.PreserveContents"


    > Is there any extreme case where this cannot be avoided? I’ll follow your advice and use "DiscardContents", but I’m asking for learning purposes …

    I can’t think of any, but there are certainly some situations (like doing image feedback from one frame to the next) where the preserve behavior can be useful. If you found yourself having to implement a preserve-like behavior by using two buffers and manually drawing the contents of one over the other each time, you might as well just simplify your code by setting the PreserveContents flag to ask us to take care of that for you.

  5. Ultrahead says:

    I remember a technique to obtain a better framerate (losing some quality) by doing something like that with two halves of images: one from the new frame and the other from the old one. So you kinda render a moving "average" each frame: you just preserve the last image and "merge" in a new half (so you render half the screen on each Draw call).

    It’s like the method used to cast images in old TVs, I guess.

    I have to check it, but maybe it was explained on the ShaderX4 book.

  6. MartSlot says:

    “Easy: the 360 does support MRT in hardware! We just didn’t expose that in v1, because we didn’t have time to finish all the (rather large and complex) driver code needed to actually make it work.”

    I’m sorry, what I was trying to refer to was the ‘predicated tiling’ I read about. I found a set of powerpoint slides* in the meantime that talk about it, and I think I understand how it works now.

    I am curious as to the use of predicated tiling in other situations. It’s supposedly necessary in case of 720p and MSAA, but since the 360’s EDRAM is only 10MB large, it seems it’s also not possible to get a full HD resolution backbuffer and depth-stencil buffer into it. Does that mean that tiling is also used with 1080p resolution?

    Given the number of uses of the predicated tiling, I’m really interested to know how much of a performance hit it gives? I read that it’s only the vertices of polygons on two different tiles that have to be recalculated, but there’s the cost of copying as well, or is that negligible? Is the total cost high enough to search for alternatives to MRT, MSAA and 1080p?

    Sorry about all the questions, I just get excited by reading about technology and ‘tricks’ like these 😉


  7. ShawnHargreaves says:

    Predicated tiling is used any time the combined rendertarget(s) + depth buffer are bigger than 10 megabytes. The XNA Framework handles this entirely automatically (unlike the native API, where you have to write code specifically to make this work).

    Performance is usually a lot better than you would expect. The fillrate is not changed at all, and while vertex processing workload goes up, this is usually not the bottleneck so it may not have any measurable difference at all. Predicated tiling is only really going to hurt you on games which are:

    a) GPU limited

    b) Vertex shader limited within the GPU (ie. have a lot of very high polycount models and/or complex vertex shaders)

  8. CatalinZima says:

    In the current model, what happens with the DepthBuffer ? Is it still discarded when changing the RenderTarget?

  9. ShawnHargreaves says:

    The depth buffer matches the rendertarget behavior. It’s preserved if the rendertarget is, or discarded otherwise.

    Stencil is never preserved, though (not for any particularly good reason: we just didn’t have time to make that work 🙂

  10. CatalinZima says:

    And if I’m using MRTs, and the 0 RT is set to Preserve, while the others are not, the Depth will also be saved, right? is this true on the Xbox also?

  11. CatalinZima says:

    And one more question (sorry for spamming you with questions):

    Is the depth buffer preserved from one RT to another?

    I set an RT, draw stuff, and when I set a new RT, will the depth buffer (when using the new RT) be the same, or will it be cleared?

  12. CatalinZima says:

    The reason I need the DepthBuffer to be preserved is the following:

    In your deferred rendering slides, you mention using Stencil tests to lower the number of pixels affected by each individual light.

    But if the depthbuffer is cleared when changing from the Gbuffer creation to the light processing, this is no longer possible. Any ideas on how to solve this problem?

  13. ShawnHargreaves says:

    For that technique you would need to enable the preserve contents mode, yes.

    I’m not sure that would actually give you a perf gain on Xbox, though. I’m guessing restoring the depth contents is going to be slower than however much you gain through the stencil tests.

    Deferred rendering is actually quite an awkward fit for the 360, because of the limited size but very fast EDRAM hardware design. The technique as a whole may work ok, but I think that light volumes idea is probably more work than it is worth.

  14. smgorden says:

    When trying to implement the RenderToTarget feature in 2.0, I used the same setup as in a 1.0 application. The only changes I made were as described above: I removed the calls to ResolveRenderTarget, and selected the old default of PreserveContents, attempting to get my previous behavior. It all compiles, but when I run the program I get the error "The active render target and depth stencil surface must have the same pixel size and multisampling type" when I call Clear on the first render target. I'm not sure how there would be a difference, though. Any suggestions on resolving this mismatch?

  15. clouddream says:


    I could use this code to get a rendertarget's surface pointer when I use RenderTargetUsage.DiscardContents:

       // needs: using System.Reflection;
       public unsafe IntPtr GetSurface(RenderTarget renderTarget)
       {
           // Reflect into the private surface pointer field
           Type type = typeof(RenderTarget);
           FieldInfo fi = type.GetField("pRenderTargetSurface",
               BindingFlags.NonPublic | BindingFlags.Instance);

           object ptr = fi.GetValue(renderTarget);
           return new IntPtr(Pointer.Unbox(ptr));
       }

    However, if I set RenderTargetUsage.PreserveContents, I get the result 0x00000000.

    Can you tell me why? Thanks!

  16. ShawnHargreaves says:

    clouddream: using reflection to access internal implementation details is not a supported scenario, sorry 🙂  This code will not work reliably across different platforms and different versions of the XNA Framework.

  17. DanNeedsHelp says:

    I still don't understand how to preserve the contents of the backbuffer after switching render targets. I fully understand that I need to react to the "GraphicsDeviceManager.PreparingDeviceSettings event to alter GraphicsDeviceInformation.PresentationParameters.RenderTargetUsage", but I don't know how to actually do this.  I guess it's a C# thing that I am unfamiliar with.  How do I actually access and alter this information?  Can someone show me a code snippet?
