The role of the Windows Display Driver Model in the DWM

The Problem

Ever since the advent of dedicated graphics processors, even the old-school ones that only accelerated GDI blits, programming against them has resembled programming against the main CPU/memory system before there was virtual memory or interruptible/preemptible processes.  That is, you had to manage all the video memory yourself, and count on your graphics instructions not being interrupted.  Specifically, DirectX applications have always needed to deal with not getting the video memory they need, and with "surface lost" messages from video memory that got kicked out for one reason or another.  This puts a major burden on the programmer, and, probably even more importantly, makes for a very poor ecosystem for running multiple video-memory-intensive applications, because the likelihood of their cooperating sensibly on resource management is virtually nil.
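That contract can be made concrete with a toy model in Python. Everything here is illustrative: the names are made up and none of this is a real DirectX API; the point is just the shape of the problem, a fixed pool where allocation hard-fails and surfaces can be invalidated out from under the application.

```python
# Toy model of the pre-WDDM contract, with made-up names (not real
# DirectX APIs): a fixed pool of video memory where allocation can
# hard-fail and surfaces can be "lost" out from under the application.

class VideoMemory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.surfaces = []

    def alloc(self, size):
        """Return a surface, or None on failure -- no virtualization."""
        if self.used + size > self.capacity:
            return None
        surface = {"size": size, "lost": False}
        self.used += size
        self.surfaces.append(surface)
        return surface

    def evict_all(self):
        """Simulate e.g. a mode switch kicking everything out of VRAM."""
        for s in self.surfaces:
            s["lost"] = True    # the app must detect this and recreate
        self.surfaces.clear()
        self.used = 0

vram = VideoMemory(capacity=8)
a = vram.alloc(6)   # succeeds
b = vram.alloc(6)   # fails: only 2 units left, nothing to fall back on
vram.evict_all()
print(b is None, a["lost"])     # True True
```

Every application on the system has to carry this recovery logic itself, which is exactly why multiple such applications cooperate so poorly.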

Well, the DWM is a DirectX application with a couple of unique challenges in this arena:

  • The memory requirements of the DWM vary widely, because they vary directly with the number of windows the user has open.  There are known typical usage patterns, but the user certainly isn't, and cannot be, limited to N open windows.
  • The DWM operates in an environment where other DirectX applications run: video playback, WPF applications, windowed games (by the way, Vista "inbox" games like Solitaire, etc., are now written in DirectX), and so on.  In fact, the DWM is responsible for the final presentation of those applications.  So it's critical that such DirectX applications "play well together" and play well with the DWM.

The above challenges don't mesh well with the DirectX programming model described in the first paragraph.

Enter WDDM

It's the Windows Display Driver Model (WDDM, formerly known as LDDM) that makes all of this viable.  WDDM is the new DirectX driver model for Windows Vista and beyond.  From the perspective of the DWM it does three main things:

  1. Virtualizes video memory.
  2. Allows interruptibility of the GPU.
  3. Allows DirectX surfaces to be shared across processes.

The surface sharing feature is key for redirection of DirectX applications, but that's the topic of a later post.  Here we're going to discuss the first two.  There are other motivators for, and certainly a lot more details on the WDDM, but those aren't as immediately relevant to the DWM as what's discussed here.

Virtualizing Video Memory

With the WDDM, graphics memory is virtualized.  This means that, just as with system memory, if there is a demand for memory and the memory is all allocated, then secondary storage is turned to, and the system manages all the paging algorithms and mechanics for faulting the secondary storage into the primary storage when it needs to be operated on.  In the case of video memory, the primary storage is video memory, and the secondary storage is system memory.

In the event that a video memory allocation is required and both video memory and system memory are full, the WDDM and the overall virtual memory system then turn to disk for video memory surfaces.  This is an extremely unusual case, and performance would suffer dearly, but the point is that the system is robust enough to allow this to occur and for the application to reliably continue.
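A minimal sketch of the idea, assuming a simple LRU eviction policy (the real WDDM video memory manager's policies are of course far more sophisticated, and the tier sizes here are arbitrary units):

```python
# Sketch of WDDM-style virtualization: allocations never hard-fail.
# Instead, least-recently-allocated surfaces are demoted from video
# memory to system memory, and in the rare worst case on to disk.
from collections import OrderedDict

class VirtualizedVram:
    def __init__(self, vram_units, sysmem_units):
        self.caps = {"vram": vram_units, "sysmem": sysmem_units}
        self.tiers = {"vram": OrderedDict(),
                      "sysmem": OrderedDict(),
                      "disk": OrderedDict()}

    def _used(self, tier):
        return sum(self.tiers[tier].values())

    def _demote(self, src, dst):
        # Evict the least-recently-placed surface to the next tier down.
        name, size = self.tiers[src].popitem(last=False)
        self._place(dst, name, size)

    def _place(self, tier, name, size):
        if tier != "disk":  # disk is treated as unbounded here
            while self._used(tier) + size > self.caps[tier]:
                self._demote(tier, "sysmem" if tier == "vram" else "disk")
        self.tiers[tier][name] = size

    def alloc(self, name, size):
        self._place("vram", name, size)   # always succeeds

v = VirtualizedVram(vram_units=8, sysmem_units=8)
for i in range(4):
    v.alloc(f"window{i}", 4)   # 16 units of surfaces into 8 units of VRAM
print(list(v.tiers["vram"]))    # ['window2', 'window3']
print(list(v.tiers["sysmem"]))  # ['window0', 'window1']
```

The application-visible consequence is the one the post describes: `alloc` never fails and no surface is ever "lost"; demotion is invisible except as a performance cost.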

The upshot of all of this is that applications no longer need to greedily grab all the memory they might need, since they aren't guaranteed true video memory anyhow and can always be paged out.  This brings the goal of a cooperative set of DirectX applications much, much closer to reality.  It also means that there are effectively no more "surface lost" messages from DirectX, and no failed allocations.

From the DWM's perspective, this is all absolutely key because the DWM can and will allocate memory, and those memory allocations will be done in conjunction with allocations for other applications on the system, putting the "right" surfaces into the true video memory, and paging in and out as necessary.  Now, naturally, this is a little bit of a naive viewpoint, since this is the first generation of this virtualizer, but we're observing it to be doing quite well, and it will keep improving.

Interruptibility of the GPU

So, memory's virtualized; that's good.  But what about those little computrons that run around the GPU doing stuff?  Can one application's GPU commands be preempted by another application's?  Prior to WDDM, they could not.  With WDDM, they can be.  This is referred to as WDDM scheduling: WDDM arbitrates usage of the GPU, handing computation to the different applications requesting it.  In order to do this, WDDM must be able to interrupt a computation in progress on the GPU and context-switch in a different process's operation.  WDDM defines two levels of interruptibility to support this.

  • Basic Scheduling - this is the granularity of scheduling achievable in DirectX 9 class WDDM drivers and hardware, and means that an individual primitive and an individual shader program cannot be interrupted, and must run to completion before a context switch.
  • Advanced Scheduling - this is achievable in DirectX 10 class WDDM drivers and hardware, and here the GPU can be interrupted within an individual primitive and within an individual shader program, leading to much finer-grained preemptability.  Note that while DX10 supports advanced scheduling, it's not a requirement for DX10 -- that is, only certain hardware will support it.

The Desktop Window Manager uses DirectX 9, and thus Basic Scheduling.  So it's possible for an application that makes errant use of the GPU, running complex shader programs across large primitives, to glitch the DWM.  We have yet to see such applications, but there no doubt will be some that do this either unintentionally or by design.  Nonetheless, we don't believe that this will be a common issue.
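The practical difference between the two granularities can be sketched as a toy latency model (the cost units and names are illustrative, not real driver behavior): under Basic Scheduling, the worst-case wait before a context switch is the cost of the longest single command, while under Advanced Scheduling it is bounded by the scheduler's quantum.

```python
# Toy model of the two WDDM preemption granularities. Under "basic"
# scheduling a command (a primitive / shader invocation) must run to
# completion before a context switch; under "advanced" scheduling the
# GPU can be interrupted mid-command. Costs are arbitrary time units.

def worst_case_latency(command_costs, mode, quantum=1):
    """Longest wait before another app's work can get onto the GPU."""
    if mode == "basic":
        return max(command_costs)   # stuck behind the longest command
    elif mode == "advanced":
        return quantum              # can switch at (almost) any point
    raise ValueError(mode)

# One errant, expensive shader program over a large primitive:
costs = [2, 3, 50]
print(worst_case_latency(costs, "basic"))     # 50 -- the DWM can glitch
print(worst_case_latency(costs, "advanced"))  # 1
```

This is exactly why the errant-shader case above can glitch a Basic Scheduling system: the DWM's own composition work simply has to wait its turn.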

Comments (30)
  1. LinWinOverlord says:

    Wow, this is very interesting. It also explains why it is necessary to have a different model, especially with Windows Presentation Foundation being implemented over DX10 surfaces.

  2. Lenin and the Party on the future version of Microsoft Windows: the desktop window manager

  3. Greg Schechter has an interesting write-up on the Role of WDDM in the Desktop Window Manager.

  4. Piethein Strengholt says:

    Could you tell us what we’ve seen so far? What is used in 5308 and 5342? Flip 3D doesn’t look very nice: no anti-aliasing, and the preview thumbnails look a bit blurry. Do you have plans to improve this? And what’s next?

  5. Raiker says:

    Oleg Mikhailik, hooray, hooray, hooray! The Party and the motherland won’t forget you! And you should read about the DWM at 😉

    Greg Schechter, thanks for this article, very nice story.

  6. 息乐园 says:

    The role of the Windows Display Driver Model in the DWM

  7. Jerry Mead says:

    Interesting piece, many thanks. I’d love to have been present at the early "Is this actually do-able" meetings.

  8. Stephane Rodriguez says:

    It seems to me there is at least one more challenge not being addressed above : GPU heat. What about Vista DWM on laptops, Tablet PC and other form factors? What about a combined multi-core board with DWM in terms of heat?

  9. Sherrod Segraves says:

    How does this work with multiple monitors, or even multiple video cards?

    If I want the full UI experience with three or four monitors, would I need to look for a quadruple-head video card, or could I use two dual-head cards?

  10. asdf says:

    What do you mean by "there are effectively no more "surface lost" messages from DirectX"? "Effectively" in that sentence looks to me like there are still surface-lost messages, but they only occur because you allow applications to allocate more memory than can actually be committed somewhere. I hope I am wrong and there is none of this behavior at all.

  11. Aleko says:

    So, the WDDM would allocate a buffer for each window. Sounds simple, but how much memory to allocate? When the window is resized, the requirements would change, and reallocating the buffer on the fly would be absurd. So… allocate enough space for a fullscreen-sized buffer?

    If so, then:

    1 window @ 1280×1024 x 32bpp = 5.2MB

    10 windows = 52MB!

    This can’t be right.
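    [For what it's worth, the arithmetic above checks out, assuming the resolution meant is 1280×1024 at 32bpp, i.e. 4 bytes per pixel:]

```python
# Per-window buffer arithmetic, assuming a fullscreen-sized buffer at
# 1280x1024 and 32 bits (4 bytes) per pixel.
width, height, bytes_per_pixel = 1280, 1024, 4
per_window = width * height * bytes_per_pixel
print(per_window / 2**20)        # 5.0 -> ~5 MiB per fullscreen buffer
print(10 * per_window / 2**20)   # 50.0 -> ~50 MiB for ten windows
```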

  12. Frederik Slijkerman says:

    It is right… that’s why MS didn’t try this in the days of 8 MB RAM – Windows 95.

  13. Princess says:

    Aleko:  52MB?  It’s far worse than you think 🙂

    Background apps don’t receive WM_PAINT messages when they are revealed by a window moving above them.  This necessitates that the window contents be buffered.  For this buffer to always be available, the application can’t render directly to it, or you would see incomplete rendering.  So each window needs to have 2 buffers – one for rendering, one for the DWM to access.  Unless something sneaky is going on involving serialisation of rendering… Greg?

    Resizing is an interesting question though.  Does Vista allow the window contents to be shown when resizing?  Because that sounds hard to do without frequent reallocations or fullscreen buffers.  I guess you could fake it by performing an imaging operation to scale the original backbuffer onto the on-screen location whilst resizing.  Then you could properly redraw the window using its new size after the resizing drag is completed.  That way you only need to reallocate the backbuffer once.

  14. Michael says:

    Eh… the memory is to be expected.  If you want the next generation of apps, get the next generation of hardware.

  15. Jevan says:

    Piethein — we’re actively working on improving the visual quality of Flip3D and thumbnails.  Unfortunately, whereas games have the luxury of turning on all sorts of visual-quality knobs like multisampled edge antialiasing, mipmapping and anisotropic filtering, each one of these features comes at a significant cost.  Being the DWM, we have to coexist with all the other apps on the desktop, which means we need to be very careful about constraining our CPU and GPU resources (both runtime and memory usage), and we can’t afford to enable these features.  Greg (or I) can delve into more details in a later blog post.

    Aleko — yes, you’re right.  A fullscreen window at 1280×1024 takes up 5MB.  When you consider that modern video cards currently have only 128MB or 256MB of memory, it is obvious why WDDM’s virtualization model is so important.  We need to allow these surfaces to be paged out from video memory to system memory to make room for other windows’ surfaces or other DX apps’ GPU resources.

    Princess — Greg will be covering this in later planned posts, but I can briefly answer now.  You are correct that we need two surfaces, but not because of any locking/synchronization problems (we can read from a surface as an app is writing to it if we want).  It is because the GPU can only render from surfaces visible to it, which means the surface has to reside either in video memory or in "non-local video memory" (aperture space).  Due to a number of constraints we create the GDI sprite wherever we want in system memory and then transfer it to GPU-visible memory when necessary.

  16. I’m really enjoying this series of articles. No one else seems to be covering this in such detail so your blog is a unique goldmine of Avalon goodness. Thank you and keep up the great posting!

  17. Lorenzo says:

    Like Stephane Rodriguez points out: what about Laptops? Battery life and heat? Using the GPU when on battery brings my laptop battery life from 3.30 hrs to 1.30…

  18. Here’s a list of topics that I have posted on (with active links) or expect to post on (without links)…

  19. igor1960 says:

    While the WDDM idea is interesting from both the user-experience and development aspects, I feel it introduces a conflicting aspect into Windows development.

    Before WDDM, each window, on receiving a WM_PAINT message and/or otherwise through its own DC, was responsible for drawing its own content on its own surface. Therefore, as each window belongs to a separate process, developed by different developers, that process was “responsible” for the proper rendering of its drawing content, and so it could be optimized, debugged, etc. independently of the other UI components running on the desktop.

    Now, with WDDM, an extra layer is introduced: the window painting described above is effectively performed into WDDM-supplied bitmap buffers, managed by WDDM, and as a result those buffers are transferred by WDDM to the desktop.

    This extra WDDM layer may be seen as an advantage, as it allows “visual effects” to be produced, since WDDM at any moment in time maintains the content of all windows running on the system.

    However, it may alternatively be seen as an inefficient extra burden on the system, as it introduces one more memory- and processor-hungry element.

    I could point to several scenarios in which a system utilizing WDDM will obviously “suffer” performance-wise compared to standard XP (for example, each window now has to respond to WM_PAINT messages and redraw its full content, even if it is not visible at all), but I will limit myself to the following scenario. Let’s assume:

    — I currently have a DirectShow application A that is playing some content at a 30fps refresh rate on a standard XP system (no WDDM);

    — I have a 60Hz Adapter/Monitor system;

    — I have just enough processing power on the XP system to run 2 applications A simultaneously;

    — as I have a 60Hz Adapter/Monitor system and I’m running 2 applications A, each updating the monitor at 30fps, my system perfectly displays the content of both applications to the user.

    Now let’s examine that scenario on a WDDM system:

    — I have the same DirectShow application A playing some content at a 30fps refresh rate, now on WDDM;

    — I have a 60Hz Adapter/Monitor system;

    — I do not have enough processing power on the WDDM system to run 2 applications A simultaneously, because each of those applications now passes its content to WDDM, which requires extra processing power to combine and render the content from the 2 applications;

    — as I have a 60Hz Adapter/Monitor system and I’m running 2 applications A, each updating the monitor at less than 30fps, my system skips some frames.

    Conclusion: while user experience is very important, the first requirement of an OS is to provide the most efficient and fastest way to deliver content; if such delivery suffers in order to achieve some “visual effects”, the efficiency aspect should prevail and the system should allow those effects to be turned off.  Therefore Vista, while providing WDDM as the default for users not concerned with speed of execution, should also provide an option to turn WDDM completely off (an XP-compatible mode), and it would be nice to do that not just at the OS level, but per application and possibly per module/call.

  20. As mentioned in earlier posts, by far the most important aspect of the DWM is the fact that application…

  21. Windows Display Driver Model – from what I understand, it has the ability to share DirectX memory regions…

  22. avalite says:

    Hello. Seema Ramchandani here, PM of the Avalon 2d & 3D graphics team. Many people have…

  23. When talking about WPF during the Windows Vista ISV Touchdown training a lot of people were interested…


  24. I’ve been getting a lot of the same performance questions over the last few months regarding…

  25. SilverLite says:

    Hello. Seema Ramchandani here, PM of the Avalon 2d & 3D graphics team. Many people have asked me

Comments are closed.
