When WinForms met Game Loop

WinForms and XNA have quite different ideas about how a program should run.

XNA assumes that games are always in motion, moving and animating. It constantly cycles around the game loop, calling your Update and Draw methods. This is convenient for most games, but wasteful if the program reaches a static state, because XNA will keep calling Draw even though the image has not actually changed.

WinForms assumes that programs tend to just sit there doing nothing, and only wake up when the user interacts with them by pressing a key, clicking the mouse, etc. It uses an event based programming model, where input and paint requests are delivered to the program via the Win32 message queue. If no messages arrive, the program runs no code at all and thus consumes no CPU.

When you combine XNA with a WinForms app, such as in our WinForms Graphics Device sample, you must decide whether you want to follow the XNA model, or the WinForms model, or something in between. Are you a game that happens to include some WinForms UI controls, or a WinForms app that happens to include some 3D XNA graphics rendering?

Our sample actually demonstrates two such options. Its SpriteFontControl does not animate, so it uses the standard WinForms programming model where Draw is only called in response to Win32 paint messages. SpinningTriangleControl, on the other hand, is animated using the second technique described below.


I'm a polite WinForms app, so please tick me whenever you have a spare moment

The WinForms way to arrange for an animated control to be redrawn at regular intervals is to use a Timer object. Add this field to your control:

    Timer timer;   // System.Windows.Forms.Timer

Set it going in your Initialize method:

    timer = new Timer();
    timer.Interval = (int)TargetElapsedTime.TotalMilliseconds;
    timer.Tick += Tick;
    timer.Start();


Pros:

  • Timer events are delivered through the Win32 message queue, so this plays nice with the rest of the UI system
  • Scales well if you have many animating controls at the same time
  • Gracefully yields unused CPU between timer ticks


Cons:

  • Because this plays so nice with the rest of the system, it doesn't always deliver the smoothest animation
  • Timer events may be delayed or discarded if the system is under heavy load
  • Timers have limited accuracy


I'm lazy and want to write the least code possible 🙂

Our WinForms Graphics Device sample animates its SpinningTriangleControl by hooking the WinForms Application.Idle event:

    Application.Idle += Tick;

This event is raised after processing Win32 messages, whenever the queue becomes empty.  Using it for animation is a bit of a hack, and only works if the Tick method calls Invalidate:

  • Application.Idle is raised
  • Tick calls Invalidate
  • Invalidate sends WM_PAINT to the control
  • WM_PAINT is processed, and the control redraws itself
  • There are no more messages in the queue, so Application.Idle is raised again
  • Rinse, lather, repeat

If the Tick method did not call Invalidate, there would only be a single Idle event rather than a steady stream of them.
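The cycle above can be modeled with a toy message queue (purely illustrative; these are hypothetical types, not the real Win32 pump), which makes it clear why Invalidate is what keeps the Idle events flowing:

```csharp
using System;
using System.Collections.Generic;

static class IdleCycleModel
{
    // Simulate: Idle fires -> Tick invalidates -> WM_PAINT is processed ->
    // the queue empties -> Idle fires again. Returns how many redraws
    // a given number of Idle events cause.
    public static int RunIdleEvents(int idleEvents, bool tickInvalidates)
    {
        var queue = new Queue<string>();
        int redraws = 0;

        for (int i = 0; i < idleEvents; i++)
        {
            // Application.Idle raised: run Tick.
            if (tickInvalidates)
                queue.Enqueue("WM_PAINT");   // Invalidate queues a paint message

            // Pump the queue until it is empty again.
            while (queue.Count > 0)
            {
                if (queue.Dequeue() == "WM_PAINT")
                    redraws++;               // OnPaint: the control redraws itself
            }

            if (!tickInvalidates)
                break;   // nothing was queued, so Idle is never raised again
        }

        return redraws;
    }
}
```

With invalidation the model produces one redraw per Idle event; without it, the stream stops after the first Idle event, exactly as described above.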


Pros:

  • Similar to the next option, but less code to implement it


Cons:

  • It's a hack
  • The next option does the same thing in a more robust way


I'm a game, dammit!  I want to run flat out, tick tock tick tock, hogging all the CPU

The built-in Microsoft.Xna.Framework.Game class customizes how Win32 messages are processed, taking over the message loop so the game can run as smoothly as possible in the gaps between message processing. You can do the same thing if you want this behavior. First, hook the Application.Idle event:

    Application.Idle += TickWhileIdle;

Whenever the Win32 message pump becomes idle, we go into an infinite loop, calling Tick as fast as possible for maximum game performance and smoothness. But we must check to see if a Win32 message has arrived, so we can break out of the loop if there is UI processing to be done:

    void TickWhileIdle(object sender, EventArgs e)
    {
        NativeMethods.Message message;

        // Flags = 0 means PM_NOREMOVE: check for a message without consuming it.
        while (!NativeMethods.PeekMessage(out message, IntPtr.Zero, 0, 0, 0))
        {
            Tick(sender, e);
        }
    }

WinForms does not expose the native Win32 PeekMessage function, so we must import this ourselves:

    static class NativeMethods
    {
        [StructLayout(LayoutKind.Sequential)]
        public struct Message
        {
            public IntPtr hWnd;
            public uint Msg;
            public IntPtr wParam;
            public IntPtr lParam;
            public uint Time;
            public System.Drawing.Point Point;
        }

        [DllImport("user32.dll")]
        [return: MarshalAs(UnmanagedType.Bool)]
        public static extern bool PeekMessage(out Message message, IntPtr hWnd, uint filterMin, uint filterMax, uint flags);
    }


Pros:

  • Being selfish gives us the smoothest possible animation


Cons:

  • Doesn't play so nice with the rest of the UI system
  • Gets tricky if you have more than one animating control at the same time
  • Hogs 100% of CPU time, which isn't so great for multitasking scenarios


Invalidate vs. Draw

Any of the above techniques will provide repeated calls to our Tick method. But to display animation, Tick must somehow cause the control to redraw itself.

The easiest way is for Tick to call Invalidate.  This tells Win32 "hey, whenever you get a moment, could you please send a WM_PAINT message to this control?"  That triggers the GraphicsDeviceControl.OnPaint method, which calls BeginDraw to set up the device, then calls the main Draw method, and finally calls EndDraw to present the image onto the screen.

For maximum smoothness you may wish to avoid Win32 message processing entirely, and have Tick directly call BeginDraw, Draw, and EndDraw (the same way the existing OnPaint method does) instead of Invalidate.
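A sketch of that direct approach (the Begin/Draw/End methods here are stubs that just record the call order; in the actual sample these are GraphicsDeviceControl members, with BeginDraw returning an error string on failure):

```csharp
using System;
using System.Collections.Generic;

class DirectDrawControl
{
    // Records the call order so the flow is visible; in the real control
    // these methods would talk to the GraphicsDevice.
    public List<string> Calls = new List<string>();

    // Stand-ins for the sample's members: BeginDraw returns null on success,
    // or an error string (e.g. device lost) on failure.
    string BeginDraw() { Calls.Add("BeginDraw"); return null; }
    void Draw()        { Calls.Add("Draw"); }
    void EndDraw()     { Calls.Add("EndDraw"); }

    // Tick renders immediately instead of calling Invalidate,
    // skipping the WM_PAINT round trip entirely.
    public void Tick(object sender, EventArgs e)
    {
        string beginDrawError = BeginDraw();

        if (string.IsNullOrEmpty(beginDrawError))
        {
            Draw();
            EndDraw();
        }
    }
}
```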


Fix that timestep, fix it good

So we've chosen one of the above techniques, and hooked it up to call our Tick method.  But our animation is not smooth, because Tick is not called at a steady rate!  The third technique will usually call Tick more steadily than the first, but none of them can guarantee how often this will occur.  Whenever Tick is called we must examine the clock to see how much actual time has passed, adjusting our update logic and animation playback accordingly.

First, we must decide whether we want fixed or variable timestep update logic.

Variable timestep is easy. Just use a Stopwatch to see how much time has passed, perform the appropriate update calculations, then redraw the control.
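In code, variable timestep might look like this (a sketch: `rotation` is a hypothetical piece of animation state, the rate of half a turn per second is arbitrary, and the pure `AdvanceRotation` helper exists only to keep the arithmetic visible):

```csharp
using System;
using System.Diagnostics;

class VariableTimestepControl
{
    Stopwatch stopWatch = Stopwatch.StartNew();
    TimeSpan lastTime;
    float rotation;   // hypothetical animation state

    // Pure helper: advance an angle at half a turn (pi radians) per second.
    public static float AdvanceRotation(float rotation, float elapsedSeconds)
    {
        return rotation + elapsedSeconds * (float)Math.PI;
    }

    public void Tick(object sender, EventArgs e)
    {
        // Measure how much real time has passed since the last Tick.
        TimeSpan currentTime = stopWatch.Elapsed;
        float elapsedSeconds = (float)(currentTime - lastTime).TotalSeconds;
        lastTime = currentTime;

        // Scale the update by the measured elapsed time.
        rotation = AdvanceRotation(rotation, elapsedSeconds);

        // Invalidate();   // in a real control, queue a repaint here
    }
}
```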

Fixed timestep logic is implemented like so:

    Stopwatch stopWatch = Stopwatch.StartNew();

    readonly TimeSpan TargetElapsedTime = TimeSpan.FromTicks(TimeSpan.TicksPerSecond / 60);
    readonly TimeSpan MaxElapsedTime = TimeSpan.FromTicks(TimeSpan.TicksPerSecond / 10);

    TimeSpan accumulatedTime;
    TimeSpan lastTime;

    void Tick(object sender, EventArgs e)
    {
        TimeSpan currentTime = stopWatch.Elapsed;
        TimeSpan elapsedTime = currentTime - lastTime;
        lastTime = currentTime;

        if (elapsedTime > MaxElapsedTime)
            elapsedTime = MaxElapsedTime;

        accumulatedTime += elapsedTime;

        bool updated = false;

        while (accumulatedTime >= TargetElapsedTime)
        {
            // Perform the appropriate fixed-length update calculations here.

            accumulatedTime -= TargetElapsedTime;
            updated = true;
        }

        if (updated)
            Invalidate();
    }
The MaxElapsedTime check prevents the program from running a huge number of catch-up updates after it has been paused, for instance while sitting at a breakpoint in the debugger.
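To see what the clamp buys, the accumulator arithmetic can be pulled out into a pure function (an illustrative helper, not part of the sample; it uses a fresh accumulator per call just to count the update steps a single Tick would run):

```csharp
using System;

static class TimestepMath
{
    static readonly TimeSpan TargetElapsedTime = TimeSpan.FromTicks(TimeSpan.TicksPerSecond / 60);
    static readonly TimeSpan MaxElapsedTime = TimeSpan.FromTicks(TimeSpan.TicksPerSecond / 10);

    // How many fixed update steps does one Tick run for a given real elapsed time?
    public static int UpdatesForElapsed(TimeSpan elapsed)
    {
        if (elapsed > MaxElapsedTime)
            elapsed = MaxElapsedTime;   // the clamp, as in the Tick method above

        int updates = 0;
        TimeSpan accumulated = elapsed;

        while (accumulated >= TargetElapsedTime)
        {
            accumulated -= TargetElapsedTime;
            updates++;
        }

        return updates;
    }
}
```

Even after a five second pause, a Tick runs at most six catch-up updates (0.1 seconds worth at 60 Hz) instead of three hundred.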

If you are using the second of the previously mentioned ticking techniques, remove the "if (updated)" check from around the Invalidate call, so Invalidate runs on every Tick and the stream of Idle events never dries up.

Comments (17)

  1. filc says:

What about a second option for the timer? Instead of using the standard Windows timer (low accuracy) you can use System.Timers.Timer. The server timer is designed for multithreaded stuff (i.e. not blocking the UI) and is more accurate.

  2. ShawnHargreaves says:

    System.Timers.Timer is only significantly better than a WinForms timer if you let it dispatch on a worker thread, rather than setting its SynchronizingObject property. If you give it a SynchronizingObject, it dispatches via the message queue just like WinForms.

    But running D3D render code on worker threads isn't really a good way to go!

  3. filc says:

Why not? I would rather have it running on a completely different thread than the UI one. The biggest problem with XNA running in WinForms is that (as you described in your examples) any single operation on the UI will block the render, or the render will eat your CPU and block your UI. That's the biggest issue you will run into, especially if your Update/Draw cycle must be very precise. BTW the accuracy of the Windows timer is limited to 55ms, which is pretty much useless for serious applications.

    1. Brett says:

      Nope, it’s not limited to 55ms. It wasn’t then and it isn’t now. QueryPerformanceCounter, on a 3Ghz CPU for example, has a resolution of 333 nanoseconds.

  4. Kevin Gadd says:

    D3D render code on a worker thread works fine for me, unless you mean the specific kind of worker thread that System.Timers.Timer uses?

  5. ShawnHargreaves says:

    > Why not?

    – You now have to synchronize all shared state with the async render thread

    – System.Timers.Timer will issue reentrant callbacks, which D3D certainly does not want!

    – You cannot create or reset D3D device when not on the UI thread

    Using D3D (whether via XNA or not) in a WinForms app is really no different to any other kind of UI rendering technology. The best way to keep things snappy is to run short tasks (which includes frame rendering) directly on the UI thread, but move lengthy processing tasks to worker threads. Doing that the other way around (putting drawing on a worker thread) will lead to a world of pain and unpleasantness.

    > BTW The accuracy on windows timer is limited to 55ms

    The timer /defaults/ to 55ms, but that can be changed by the app.

  6. Tom says:

    I can't accept that it's necessary to be selfish in order to obtain the smoothest possible animation.  There are games that run rock-solid at 60fps without using any more CPU (or GPU) than they need to get the job done.  Live for Speed is an excellent example.  In my opinion, if a game uses 100% CPU no matter how much processing capability is available, it had better deliver some proportionate benefits.  Otherwise the person responsible has failed as a programmer.  Multi-core CPUs originally saved us from such failures, but sadly more and more people are finding ways to waste each and every core.

    As an incredibly talented professional programmer, how can you condone such behavior?

The only case where this could be acceptable is on a system that is 100% dedicated to running the game and does not vary its power consumption in response to load.  But few if any modern systems satisfy this criterion.

  7. Fraser says:

Tom, see Raymond Chen's article on 100% CPU usage: blogs.msdn.com/…/10097861.aspx

If you continue to supply instructions to the CPU it will continue to try and run through them as fast as possible. If you've paid upwards of £500 for the latest and greatest CPU, why would you want it running at anything less than full speed?

  8. Edding3000 says:

    Tom, The battle for fps is a constant one: cpu vs gpu!

    If you aren't setting a frame limit, that is!

    Gpu limited -> cpu not at 100%, gpu 100%

    Cpu limited -> cpu at 100%, gpu not at 100%

  9. ShawnHargreaves says:

    > As an incredibly talented professional programmer, how can you condone such behavior?

    In my experience, Windows does not provide sufficiently accurate timer/thread functionality for a game that sleeps or yields unused CPU time to run smoothly across all machines. At least I have never seen anyone manage to do that successfully!

    So games on Windows have exactly three options:

    – Be graceful, yield time, and accept that you won't always run entirely smoothly on all machines.

    – Be aggressive, run flat out, and maximize smoothness at the cost of CPU. I believe this is the right thing for almost all games when they are in the foreground, and this is what XNA Game class does by default.

    – Defer the issue to the driver, and limit your framerate by using a blocking Present call. This may or may not yield unused CPU depending on the driver. The downside is that your update logic is then synced to the GPU refresh rate, so you need to program your game to adapt to whatever that refresh rate might be. You can get this behavior in XNA by switching to variable timestep mode while leaving vsync turned on.

    1. Brett says:

      Running flat-out at 100% on a medium to high end consumer CPU today may or may not be a good idea, depending on how the power management aspects are configured — and with the latest Intel CPUs on expert-level boards, there is no such thing as a “default” setting. Going to 100% on one of these systems can cost up to 200W and, even with good cooling, put out temperatures exceeding 60 or 70 degrees Celsius. Unless you happen to live in a cooler region, I would not advise doing this. I am doing it right now on an Intel 6850, installed on an ASUS RoG X99 board, using the example code suggested above. Even with a massive fan (Maker 8) and 10 case fans providing optimal air flow, the CPU is nearly at 70C and is starting to heat up my room 🙂 I have tried using D2D and D3D with worker threads and Shawn is correct, there are sacrifices, but DirectX today is very forgiving compared to how it was 10 years ago. Careful use of background threads and swapchains, in conjunction with the FrameLatencyWaitableObject:

      m_SwapChainWaitHandle.SafeWaitHandle = New Microsoft.Win32.SafeHandles.SafeWaitHandle(Direct3D.SwapChain.FrameLatencyWaitableObject, True)

      m_SwapChainLatencyCallback = ThreadPool.UnsafeRegisterWaitForSingleObject(m_SwapChainWaitHandle,
      New WaitOrTimerCallback(AddressOf UpdateThreadProc),
      Nothing, -1, False)

      Will give you rock solid 60hz (or more, if you have GSync display) updates if you carefully manage your physics and input code. I suggest throwing Input onto another thread by using a SynchronizationContext and NativeWindow to listen out for (as an example) WM_INPUT messages. Apparently this is not recommended today and Tasks are preferred but I have no experience there.

  10. Tom says:

    Fraser, I believe you've misunderstood Raymond's post.  He's saying that a compute-bound task should run at 100% CPU for maximum efficiency.  This is absolutely true.  But games are not compute-bound (given fast enough processors) unless they're perfectly scalable, which they rarely are!  Notice how Raymond ended his post with "You don't want to be a task which consumes 100% CPU even when there's nothing going on. That'd just be kicking puppies."  If your game is using 100% CPU ***without delivering proportionate benefits***, then you're kicking puppies.

    To answer your question of why I'd want my expensive CPU or GPU running at anything less than full speed, it's simple.  Modern CPUs and GPUs vary their power consumption in response to varying load, and not just in two steps of idle vs. load.  Higher power consumption means more electricity consumed (which costs money and hurts the environment), higher temperatures (heating the room during hot summer is not good), higher noise as fans have to spin up, decreased battery life if running on battery, and decreased longevity of the system.  Laptops are notoriously poor at running under max load, but even consoles suffer from this problem.  Think of how many RRODs were accelerated by particularly intensive games.

    Edding3000, it's stupid to render more frames than the display can display unless you're benchmarking.  It's not just a battle between CPU-limited vs. GPU-limited.  Unless your game is perfectly scalable, it should be frame limited either now or at some point in the future.  If you're CPU-limited or GPU-limited rather than being frame-limited, how are you going to have the consistent frame rate that smooth gameplay demands?

    Imagine that your game is running as smoothly as possible on the best gaming machine there currently is.  The game is at its limit of scalability.  Getting a better CPU or GPU would not provide any visible benefit to the user.  Now imagine it running on the ultimate gaming machine of 10 years from today.  It's not doing anything more for the user.  Is it using max CPU or GPU?  It shouldn't be!  This is my point.

    Shawn, thanks for responding.  Try Live for Speed sometime.  It runs perfectly smoothly even on older machines.  I agree that if there are background things going on on the system, and the system doesn't have enough spare processing capacity to handle them and the game at the same time, the game might run a little more smoothly if it's hogging the CPU.  But I see the benefit being so small compared to the downsides presented by the waste.  I wish games would not waste CPU while being frame limited.

    1. Brett says:

      Wow — should have scrolled down and read your reply before I wrote mine. Well, I am writing to a past you, from the future. CPUs don’t run as hot as we expected back in 2010, but a 12-core Intel with all cylinders blazing is hot and noisy… and yes, it heats your room remarkably well.

      Kicking puppies D:

  11. Tom says:

    Warcraft 3 is another great example of a game that runs perfectly smoothly without wasting CPU or GPU.  CPU temps don't even rise perceptibly on my i7 860 while running it, and GPU temp rises only a few degrees.  Task Manager shows substantial CPU usage on only one core, which is not maxed out but varying from 20-50%.  The CPU probably isn't even ramping up to its normal clock speed the whole time to run this.

  12. xcorporation says:

    Hello Shawn,

    Nice articles. 🙂

  13. aschultz says:

    WPF seems to be the future direction of Windows apps… any chance we'll get the ability to embed XNA rendertarget data inside of WPF applications sometime soon? The WPF devs added the D3DImage class for interfacing with DirectX surfaces (and avoiding airspace issues), but XNA doesn't expose any of the hooks necessary for doing this.

  14. Fernando says:

    I know this is very old, but I have an additional detail that may be useful. This method is very accurate, but consumes way too much CPU. I found that adding a Thread.Sleep(1) when you do an update helps reduce the CPU usage MASSIVELY, and has minimal impact, even at 60 FPS. More than 1 millisecond, though, is much more likely to generate stutters (although maybe it can work at lower FPS).
