Introduction to WPF 4 Multitouch

This tutorial recaps the multitouch features in WPF 4, as of the Beta 2 release.
I have also included two basic samples to get you jumpstarted with working code.

A Multitouch Backgrounder

Multitouch is simply an abstraction provided by the OS (or a platform) that routes touch input to an application.
The OS can expose multitouch input at different levels of control and detail. For example, Windows 7 exposes multitouch data in three modes:

  • Raw touch provides access to all touch messages. Aggregation and interpretation of the messages are left to the application. This level of access is useful for programs that require raw access to all the primitives, possibly for custom interpretation and handling of the messages. For example, Corel Paint It Touch and Windows 7 Paint require this level of control to implement drawing. Other possible consumers of this level of detail are custom control or platform vendors.
  • Gestures are a convenient abstraction over raw touch.
    The platform interprets all the lower-level events and translates them into pre-defined gestures, then notifies the application that a gesture has occurred. The most common gestures are pan, zoom, rotate, and tap.
    Gestures offer a very easy programming model, but they have a limitation: gesture engines tend to handle only one gesture at a time (for example, rotate or zoom, but not rotate and zoom together).
  • Manipulation and inertia.
    Manipulation is a superset of gestures; anything you can do with gestures, you can do with manipulations, but you gain greater granularity and flexibility. Of course, the trade-off is that manipulation is a pinch harder to program than gestures, but don't worry, both are straightforward.
    Inertia is an extension to manipulation that adds physics support to make your manipulations smooth and realistic.

If you are not familiar with multitouch, I recommend reading the introductory articles on multitouch in Windows 7 before continuing.

Now that you are a touch expert, we can focus on WPF's support.

Multitouch in WPF 4

WPF 4 includes support for raw touch and manipulation (with some inertia support).  This support extends throughout the platform; UIElement, UIElement3D, and ContentElement have all been tweaked to support raw-touch and manipulation.

Post Beta 2, WPF 4 will also support touch in some of the controls (for example, ScrollViewer). The list of controls and the level of support are not yet finalized, so don't hold it against me; I will update this post as soon as I get details.

Approach #1: Raw-touch in WPF 4

Again, raw multitouch support begins at UIElement, UIElement3D, and ContentElement.
All of these types now support TouchDown, TouchUp, TouchMove, TouchEnter, and TouchLeave events.
These are routed events; TouchDown, TouchMove, and TouchUp also have tunneling (Preview) counterparts.

  • public static readonly RoutedEvent TouchDownEvent;
  • public static readonly RoutedEvent TouchEnterEvent;
  • public static readonly RoutedEvent TouchLeaveEvent;
  • public static readonly RoutedEvent TouchMoveEvent;
  • public static readonly RoutedEvent TouchUpEvent;

 

If you drill down through these events, you will find they all carry a TouchEventArgs parameter that holds a TouchDevice member and can get you a TouchPoint (via GetTouchPoint). The TouchPoint is the meaningful data: its TouchAction tells you whether it was an Up, Down, or Move, and its Position tells you where the touch happened.

I have included a class diagram below; the names are pretty descriptive.

[Class diagram: Touch]

Handling raw touch in WPF is really as simple as listening for these events and reacting to the points and the actions.
Unlike manipulation, where you have to opt in by setting the IsManipulationEnabled property to true, event notifications for raw touch are available without an explicit opt-in.
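
To make that concrete, here is a minimal sketch of a raw-touch handler (it is not part of the sample below). It assumes a Canvas named "canvas" whose TouchMove event is wired to this handler, plus the usual System.Windows.Controls, Shapes, Media, and Input namespaces; it simply drops a small dot wherever a finger moves:

    void canvas_TouchMove(object sender, TouchEventArgs e)
    {
        // GetTouchPoint returns the TouchAction and the Position relative to the given element
        TouchPoint touchPoint = e.GetTouchPoint(canvas);
        if (touchPoint.Action == TouchAction.Move)
        {
            // draw a small dot at the touch position
            var dot = new Ellipse { Width = 6, Height = 6, Fill = Brushes.SteelBlue };
            Canvas.SetLeft(dot, touchPoint.Position.X - 3);
            Canvas.SetTop(dot, touchPoint.Position.Y - 3);
            canvas.Children.Add(dot);
        }
    }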

A sample application for raw touch

Of course, for raw touch I had to create the canonical helloMT drawing pad.   

Disclaimer: I took the code written by Sela to demonstrate the .NET wrappers for Windows 7 multitouch and simply ported it to WPF 4. Taking their apps and porting them to WPF 4 was about a 15-minute exercise.

Download the source code.  

When running the app, simply drag multiple fingers across the window and the drawing pad will draw strokes that follow your fingers' movements.

Approach #2: Manipulation in WPF 4
Manipulation in WPF 4 is an opt-in behavior.  There is a simple process to handle manipulation events in any WPF element:

  1. Set IsManipulationEnabled=true on the element you are touch-enabling. You can do this from XAML or from code. 

    <Image x:Name="image" Width="200" IsManipulationEnabled="True" Source="Windows7.png" />

  2. [Optional] Subscribe to ManipulationStarting and set your ContainerElement. 
    The ContainerElement is the UI element to which all manipulation calculations and events are relative. If you do not set a ContainerElement, the UI element that is firing the event will be used. This works well for Zoom/Scale, but for Translate or Rotate manipulations you should set a ContainerElement, or else the UI will flicker and be jumpy. This is not a UI glitch; it happens because a single manipulation fires multiple deltas, so you would be recording movements relative to the UI element that is itself being moved. Not cool!

    In ManipulationStarting, you can also set your ManipulationMode to control the manipulations you are allowing. You can select from All | None | Rotate | Translate | Scale | TranslateX | TranslateY. If you don’t override it, the default is All.

    Finally, if you want to do single-hand rotations, you can set a Pivot that your UI element will rotate around (hinted at in the commented line in the code below).

      void image_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
     {   
            //canvas is the parent of the image starting the manipulation;
            //Container does not have to be parent, but that is the most common scenario
             e.ManipulationContainer = canvas; 
            // you could set the mode here too 
            // e.Mode = ManipulationModes.All;             
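            // [illustrative] for a single-finger rotation you could also set a pivot here, e.g.:
            // e.Pivot = new ManipulationPivot(new Point(100, 100), 50);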
     }
    

  3. Subscribe to the ManipulationDelta event on the UI element (or an element higher in the visual tree, since the event is routed).  ManipulationDelta is where all the action happens.  I have included a “default” implementation below, with comments.

     void image_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
        //this just gets the source. 
        // I cast it to FrameworkElement because I wanted to use ActualWidth for the center. You could try RenderSize as an alternative
        var element = e.Source as FrameworkElement; 
        if ( element != null ) 
        { 
            //e.DeltaManipulation has the changes 
            // Scale is a delta multiplier; 1.0 means no change (so 1.1 == grow 10%, 0.8 == shrink 20%) 
            // Rotation is the rotation delta, in degrees
            // Translation is the pan offset, in device-independent pixels 
    
            var deltaManipulation = e.DeltaManipulation; 
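            // note: this assumes element.RenderTransform is a writable MatrixTransform
            // (for example, one assigned in XAML or at startup)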
            var matrix  = ((MatrixTransform)element.RenderTransform).Matrix;            
            // find the old center; arguably this could be cached 
            Point center = new Point(element.ActualWidth / 2, element.ActualHeight / 2);
            // transform it to take into account transforms from previous manipulations 
            center = matrix.Transform(center); 
            //this will be a Zoom. 
            matrix.ScaleAt(deltaManipulation.Scale.X, deltaManipulation.Scale.Y, center.X, center.Y); 
            // Rotation 
            matrix.RotateAt(e.DeltaManipulation.Rotation, center.X, center.Y);             
            //Translation (pan) 
            matrix.Translate(e.DeltaManipulation.Translation.X, e.DeltaManipulation.Translation.Y);
    
            ((MatrixTransform)element.RenderTransform).Matrix = matrix; 
    
            e.Handled = true;
        }
    }
    

That is how simple manipulation is. All the raw-touch data, translated into these simple delta matrices!
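
To tie the steps together, here is a hedged sketch (not code from the downloadable sample) of how the handlers above might be wired up from code-behind; the MainWindow class name is an assumption, and the canvas used in ManipulationStarting is assumed to be the image's parent declared in XAML:

    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();

            // the ManipulationDelta handler above works against a MatrixTransform,
            // so give the image a writable one up front
            image.RenderTransform = new MatrixTransform();

            // step 1 set IsManipulationEnabled="True" in XAML; now hook up the handlers
            image.ManipulationStarting += image_ManipulationStarting;
            image.ManipulationDelta += image_ManipulationDelta;
        }

        // ... image_ManipulationStarting and image_ManipulationDelta as shown above ...
    }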

Enhancing Manipulation with Inertia

Inertia adds physics to a manipulation to make it feel more natural.  As expected, it works on all UI elements that support manipulation. The way to think of inertia is that it carries through the physical momentum of a manipulation. For example, if you are implementing a translation manipulation that moves an image across the X-axis, inertia will continue the manipulation a bit after the fingers lift, decelerating at a rate you define, simulating the momentum and friction that bring the translation to a stop.

To add support for inertia, we simply update our existing code: listen for one new event, and then add a little code to handle inertia in our ManipulationDelta handler.

  1. Subscribe to ManipulationInertiaStarting. 
    This event is similar to ManipulationStarting, but it fires when the fingers are lifted and the inertia phase of a manipulation is about to begin. In the event handler we configure the inertia behaviors on the event args. For inertia, the interesting properties include:

    • ExpansionBehavior – controls resize (zoom) inertia; it decelerates in DIPs per millisecond squared. 

    • TranslationBehavior – controls translation (pan) inertia; it decelerates in DIPs per millisecond squared.

    • RotationBehavior – controls rotation inertia; it decelerates in degrees per millisecond squared.

    • InitialVelocities is read-only; it gives you the velocities calculated from the previous stage of the manipulation. You can use these values to calculate your own behaviors. 


      Here is the code to add our desired behaviors for inertia:

       void canvas_ManipulationInertiaStarting(object sender, ManipulationInertiaStartingEventArgs e)
      {                
              // Decrease the velocity of the element's movement by 
              // 10 inches per second every second.
              // (10 inches * 96 DIPs per inch / (1000 ms per second)^2)
              e.TranslationBehavior = new InertiaTranslationBehavior()
              {
                  InitialVelocity = e.InitialVelocities.LinearVelocity,
                  DesiredDeceleration = 10.0 * 96.0 / (1000.0 * 1000.0)
              };
      
              // Decrease the velocity of the element's resizing by 
              // 0.1 inches per second every second.
              // (0.1 inches * 96 DIPs per inch / (1000 ms per second)^2)
              e.ExpansionBehavior = new InertiaExpansionBehavior()
              {
                  InitialVelocity = e.InitialVelocities.ExpansionVelocity,
                  DesiredDeceleration = 0.1 * 96.0 / (1000.0 * 1000.0)
              };
      
              // Decrease the velocity of the element's rotation by 
              // 2 rotations per second every second.
              // (2 * 360 degrees / (1000 ms per second)^2)
              e.RotationBehavior = new InertiaRotationBehavior()
              {
                  InitialVelocity = e.InitialVelocities.AngularVelocity,
                  DesiredDeceleration = 720 / (1000.0 * 1000.0)
              };
              e.Handled = true;                  
      }
      

    • You may notice I did not override the ManipulationContainer. That is not required here; the manipulation reuses the ManipulationContainer we set during the ManipulationStarting event.

  2. [Optional] Now, we could add code to our ManipulationDelta event handler for inertia. 
    This is optional: if you run the code at this point, inertia is already working, but you will notice there are no boundaries (the images fly off the screen).  So, just as an example, I will add code to detect the boundaries and stop the inertia when we reach them.

     void image_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
        // ... this is the same code as above in our ManipulationDelta handler
        //     (get the FrameworkElement, null-check it, build and apply the matrix) ...
            ((MatrixTransform)element.RenderTransform).Matrix = matrix; 
    
            e.Handled = true;
    
            // Here is the new code. 
            // We are only checking boundaries during inertia; in the real world, we would check all the time 
    
            if (e.IsInertial)
            {
                Rect containingRect = new Rect(((FrameworkElement)e.ManipulationContainer).RenderSize);
    
                Rect shapeBounds = element.RenderTransform.TransformBounds(new Rect(element.RenderSize));
    
                // Check if the element is still completely inside the container.
                // If it is not, stop the (inertial) manipulation.
                if (!containingRect.Contains(shapeBounds))
                {
                    //Report that we have gone over our boundary 
                    e.ReportBoundaryFeedback(e.DeltaManipulation); 
                    // comment out the Complete() call below to see the window 'shake' or 'bounce' 
                    // (similar to Win32 windows when they reach a boundary); this feedback comes for free in .NET 4                
                    e.Complete();
                }
            }          
        }
    }
    
    
    
    That is it. Our image viewer now has inertia support, and we have full control over the deceleration, rotation rates, etc. 
    

A sample application for manipulation and inertia

[Screenshot: Image Viewer manipulation sample]

This sample uses the code above to manipulate the images on the canvas. 

Download the source code

The viewer supports scaling, translating, and rotating the images using multitouch. There is also inertia support as you execute any manipulation.

Mixing and matching approaches

In WPF 4, raw touch and manipulation are not mutually exclusive (this is different from Win32).
You can enable both raw touch and manipulation at the same time on any UI element.

The table below explains how logic is handled for scenarios with different options enabled.

| Manipulations Enabled | TouchDown is Handled | GotTouchCapture is Handled | User Logic | WPF Logic |
| --- | --- | --- | --- | --- |
| None | No | No | None | Promoted to mouse |
| None | Yes | No | Handled as touch by the user | None |
| None | Yes | Yes | Handled as touch by the user | None |
| Enabled | No | No | None | 1. Handled by the manipulation logic; the TouchDevice is captured. 2. The manipulation logic handles the GotTouchCapture event and manipulation events are reported. |
| Enabled | Yes | No | Handled as touch by the user. The user has to explicitly capture the touch device. | The manipulation logic handles the GotTouchCapture event and manipulation events are reported. |
| Enabled | Yes | Yes | 1. Handled as touch by the user. 2. The user has to explicitly capture the touch device. 3. GotTouchCapture is handled by the user; the user has to explicitly call AddManipulator to invoke manipulation. | None |
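
To make the last row concrete, here is a hedged sketch (not code from the samples; the handler names are mine) of an element that has IsManipulationEnabled="True" but handles TouchDown and GotTouchCapture itself, so it must capture the touch device and explicitly hand it to the manipulation pipeline:

    void image_TouchDown(object sender, TouchEventArgs e)
    {
        var element = (UIElement)sender;
        // we are handling the touch ourselves, so we must capture the device explicitly
        element.CaptureTouch(e.TouchDevice);
        e.Handled = true;
    }

    void image_GotTouchCapture(object sender, TouchEventArgs e)
    {
        var element = (UIElement)sender;
        // because GotTouchCapture is handled, the manipulation logic will not pick the
        // device up on its own; AddManipulator feeds it into the manipulation pipeline
        Manipulation.AddManipulator(element, e.TouchDevice);
        e.Handled = true;
    }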

 

Summary

This tutorial provided a basic introduction to multitouch in WPF. As you have seen, WPF supports both raw touch and manipulation (with inertia) across WPF UI elements.  Using WPF's new touch support, you can accomplish quite a bit with just a few lines of code. The support complements and integrates quite well with the rest of the platform.