WPF Manipulation Basics

Here's the second post by Drake, this time on the Manipulation features. Sample attached.

WPF Manipulation Basics

Things to know before we start

Understanding how routed events work will provide additional insight for this tutorial: https://msdn.microsoft.com/magazine/cc785480.

In the previous post we looked at the basics of the primitive Touch APIs. That was a good starting point for conceptual material, but when developing your application it’s recommended that you use the Manipulation APIs. In addition to providing a richer user experience, they also help ensure consistent behavior across Multi-Touch enabled applications.

The Manipulation APIs are built on top of the Touch APIs, and generally using one implies that you will not be using the other. In fact, if you mark the TouchDown event as handled you will not receive any of the Manipulation events. Essentially, the Manipulation events consume the Touch events and do extra work for you in order to provide a richer set of information about the manipulation. Touch can be viewed as a subset of Manipulation.
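For example (a hypothetical snippet, not part of the sample we build below; someElement is just a placeholder name), marking the Touch event as handled on an element suppresses its Manipulation events:

// Hypothetical: because the Manipulation events consume the Touch events,
// marking TouchDown as handled means ManipulationStarting, ManipulationDelta,
// and ManipulationInertiaStarting will not be raised for this element.
someElement.TouchDown += (sender, e) => e.Handled = true;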

The Manipulation APIs allow you to manipulate an object in multiple ways at the same time: you can be notified of changes in the scale, translation, and rotation of an object, which is a significant addition over the Touch APIs. Lastly, the Manipulation APIs enable you to add inertia to an object.

Goals

The goal of this tutorial is to create a small sample that illustrates the Manipulation events by using them to scale, rotate, translate, and add inertia to a rectangle.

Let’s get started

Create a new WPF Application in Visual Studio 2010 Beta 2 or later and name it ManipulationSample.

For those who have read the previous post, the next part will be a repeat, but for completeness I will re-post it anyway. Add a new User Control called TouchableThing to the project. In the TouchableThing.xaml file, replace the Grid with a Canvas; we use a Canvas because its default layout is more desirable here. Then add a Rectangle to the Canvas. See Fig 1.

 

Fig 1.

<UserControl x:Class="ManipulationSample.TouchableThing"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
             mc:Ignorable="d"
             d:DesignHeight="300" d:DesignWidth="300">
    <Canvas>
        <Rectangle
            Name="BasicRect"
            Width="200" Height="200"
            Fill="Orange" Stroke="Orange" StrokeThickness="1"
            IsManipulationEnabled="true"/>
    </Canvas>
</UserControl>

 

We give the Rectangle a name so we can access it in the code-behind, and we make it large enough to hit test with a finger. Notice the last property being set on the Rectangle: IsManipulationEnabled. Setting this property to true enables the Rectangle (or any control on which it is set) to bubble Touch events. Remember that the Manipulation events are really just aggregated Touch events, so setting this property is still required. Since the Manipulation events are also bubbled, they do not have to be handled by the control that has the property set to true; we will see how to handle this case shortly.

Now we have to add the event listeners for the Manipulation events. I could have done this from XAML, but I chose to do it in the code-behind file; the decision was arbitrary. In the TouchableThing.xaml.cs file, find the constructor and add event listeners for the ManipulationStarting, ManipulationDelta, and ManipulationInertiaStarting events. Using tab completion in the editor will auto-generate names and stubs for you. See Fig 2.

 

 

Fig 2.

 

public partial class TouchableThing : UserControl
{
    public TouchableThing()
    {
        InitializeComponent();

        this.ManipulationStarting += this.TouchableThing_ManipulationStarting;
        this.ManipulationDelta += this.TouchableThing_ManipulationDelta;
        this.ManipulationInertiaStarting += this.TouchableThing_ManipulationInertiaStarting;
    }

    void TouchableThing_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
    {
    }

    void TouchableThing_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
    }

    void TouchableThing_ManipulationInertiaStarting(object sender, ManipulationInertiaStartingEventArgs e)
    {
    }
}
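As an aside, if you would rather wire the handlers up in XAML (the alternative mentioned above), the attributes on the UserControl element would look roughly like this sketch, using the handler names from Fig 2 (the "..." stands for the namespace declarations from Fig 1):

<UserControl x:Class="ManipulationSample.TouchableThing"
             ...
             ManipulationStarting="TouchableThing_ManipulationStarting"
             ManipulationDelta="TouchableThing_ManipulationDelta"
             ManipulationInertiaStarting="TouchableThing_ManipulationInertiaStarting">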

 

Notice that all the event arguments have different types. This was not the case with the Touch event arguments.

I have also created fields to assist in performing the transformations required for this sample. Again, a Matrix is a viable alternative to using transforms, but I find transforms simpler to use (a rough Matrix-based sketch appears later, after the Fig 5 discussion). To do the translation, scale, and rotation we will need a TransformGroup and the individual transforms, so let’s add them. Lastly, we need to set the RenderTransform on BasicRect. See Fig 3.

Fig 3.

public partial class TouchableThing : UserControl
{
    private TransformGroup transformGroup;
    private TranslateTransform translation;
    private ScaleTransform scale;
    private RotateTransform rotation;

    public TouchableThing()
    {
        InitializeComponent();

        // ... event listeners from Fig 2 ...

        this.transformGroup = new TransformGroup();
        this.translation = new TranslateTransform(0, 0);
        this.scale = new ScaleTransform(1, 1);
        this.rotation = new RotateTransform(0);

        this.transformGroup.Children.Add(this.rotation);
        this.transformGroup.Children.Add(this.scale);
        this.transformGroup.Children.Add(this.translation);

        this.BasicRect.RenderTransform = this.transformGroup;
    }
}

Since the children of the TransformGroup, as well as the individual transforms, are bound via dependency properties, we only need to deal with the individual transforms from here on out. Take note of the order the children are added in: first the rotation, then the scale, then the translation. Applying them in this order requires the least amount of math; otherwise we would have to do additional transformations before applying the values in the ManipulationDelta event handler.
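If you want to see the composed result concretely, a transform exposes its combined matrix through its Value property; a quick, hypothetical check (not part of the sample) would be:

// Hypothetical check: the group's Value is the matrix product of its children,
// composed in the order they were added (rotate, then scale, then translate).
System.Diagnostics.Debug.WriteLine(this.transformGroup.Value);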

Now for the fun part. In the ManipulationStarting event handler we will set a property on the event args called ManipulationContainer, and the event args will be passed to subsequent Manipulation events. Setting this property is essential for getting the coordinate space correct. In the Touch sample we did not have to do this because we had access to methods that let us get the touch position relative to a specific element. In the Manipulation APIs this translation is handled for you, and the values will be relative to the ManipulationContainer. See Fig 4.

Fig 4.

void TouchableThing_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
{
    e.ManipulationContainer = this;
}

This will become more obvious when we implement the ManipulationDelta event handler.

In the ManipulationDelta event handler we need to add the logic that will scale, translate, and rotate the rectangle. We can handle these one at a time and in any order; the order I chose was arbitrary. We can do this because the TransformGroup will apply the transforms (its children) in the order they were added to the group, which was done in the constructor. See Fig 5.

Fig 5.

void TouchableThing_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    // The center never changes in this sample, although we always compute it.
    Point center = new Point(
        this.BasicRect.RenderSize.Width / 2.0, this.BasicRect.RenderSize.Height / 2.0);

    // Apply the rotation at the center of the rectangle.
    this.rotation.CenterX = center.X;
    this.rotation.CenterY = center.Y;
    this.rotation.Angle += e.DeltaManipulation.Rotation;

    // Scale is always uniform, by definition, so X and Y will always have the same magnitude.
    this.scale.CenterX = center.X;
    this.scale.CenterY = center.Y;
    this.scale.ScaleX *= e.DeltaManipulation.Scale.X;
    this.scale.ScaleY *= e.DeltaManipulation.Scale.Y;

    // Apply the translation.
    this.translation.X += e.DeltaManipulation.Translation.X;
    this.translation.Y += e.DeltaManipulation.Translation.Y;
}

Notice that we are using the DeltaManipulation property of the event arguments. There is also a CumulativeManipulation property, but, as its name suggests, it is cumulative over the lifetime of the manipulation. Remember that we set the ManipulationContainer property in the ManipulationStarting event handler; the values of the DeltaManipulation property are relative to that manipulation container. All we really have to do is keep adding the deltas to the current values.
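As an aside, here is a rough sketch of the Matrix-based alternative mentioned back in the Fig 3 discussion. It is not part of the attached sample; it assumes BasicRect's RenderTransform has been set to a single MatrixTransform (initialized to the identity matrix) instead of the TransformGroup, and the handler name is just a placeholder:

// Sketch only: drive one MatrixTransform from the deltas instead of a TransformGroup.
// Assumes: this.BasicRect.RenderTransform = new MatrixTransform(Matrix.Identity);
void MatrixBased_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    var transform = (MatrixTransform)this.BasicRect.RenderTransform;
    Matrix matrix = transform.Matrix;

    // ManipulationOrigin is already relative to the ManipulationContainer we set earlier.
    matrix.RotateAt(e.DeltaManipulation.Rotation,
        e.ManipulationOrigin.X, e.ManipulationOrigin.Y);
    matrix.ScaleAt(e.DeltaManipulation.Scale.X, e.DeltaManipulation.Scale.Y,
        e.ManipulationOrigin.X, e.ManipulationOrigin.Y);
    matrix.Translate(e.DeltaManipulation.Translation.X, e.DeltaManipulation.Translation.Y);

    transform.Matrix = matrix;
}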

Now on to inertia. Inertia is the feature that lets you flick something and watch it move across the screen, and similarly for rotation and scaling. At a high level, here is how it works: the ManipulationInertiaStarting event is raised when a manipulation is in progress and a “flick” is detected; this is all handled for you. In the ManipulationInertiaStarting event handler you get the opportunity to set the initial velocity and deceleration for the object in motion. You set these by changing the behavior properties on the event arguments; there is a separate initial velocity and deceleration for each behavior: translation, expansion (scale), and rotation. Once these values are set and the handler returns, the system will periodically raise the ManipulationDelta event, passing in newly calculated values based on the initial velocity and deceleration. Again, all of this is done for you. Here is how we implement the inertia. See Fig 6.

Fig 6.

void TouchableThing_ManipulationInertiaStarting(object sender, ManipulationInertiaStartingEventArgs e)
{
    e.TranslationBehavior = new InertiaTranslationBehavior();
    e.TranslationBehavior.InitialVelocity = e.InitialVelocities.LinearVelocity;
    // Decelerate at 10 inches per second squared (96 DIPs per inch, 1000 ms per second).
    e.TranslationBehavior.DesiredDeceleration = 10.0 * 96.0 / (1000.0 * 1000.0);

    e.ExpansionBehavior = new InertiaExpansionBehavior();
    e.ExpansionBehavior.InitialVelocity = e.InitialVelocities.ExpansionVelocity;
    // Decelerate at 0.1 inches per second squared.
    e.ExpansionBehavior.DesiredDeceleration = 0.1 * 96.0 / (1000.0 * 1000.0);

    e.RotationBehavior = new InertiaRotationBehavior();
    e.RotationBehavior.InitialVelocity = e.InitialVelocities.AngularVelocity;
    // Decelerate at 720 degrees per second squared.
    e.RotationBehavior.DesiredDeceleration = 720.0 / (1000.0 * 1000.0);
}

The last thing we need to do with these events is prevent the rectangle from going off screen. To do this we get the bounding rectangle of the ManipulationContainer and the transformed bounds of the rectangle, and check whether one is still inside the other. We use the IsInertial property of the ManipulationDelta event arguments to determine whether the callback comes from inertia or from direct user manipulation. When the condition is met we call the Complete method on the event args, which ends the manipulation. This code can go at the beginning or the end of the ManipulationDelta handler. See Fig 7.

 

Fig 7.

void TouchableThing_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    Rect containerBounds =
        new Rect(((FrameworkElement)e.ManipulationContainer).RenderSize);
    Rect objectBounds = this.transformGroup.TransformBounds(
        new Rect(this.BasicRect.RenderSize));

    if (e.IsInertial && !containerBounds.Contains(objectBounds))
    {
        e.Complete();
    }

    // ... transform logic from Fig 5 ...
}

This is the home stretch. All we have to do now is map the namespace and add the user control to MainWindow.xaml. See Fig 8.

Fig 8.

<Window x:Class="ManipulationSample.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525"
        xmlns:custom="clr-namespace:ManipulationSample">
    <custom:TouchableThing />
</Window>

Run the application and enjoy.

In closing

I have covered the bare basics and attempted to highlight the sticking points that I encountered while learning this material. There is an amazing video (https://microsoftpdc.com/Sessions/CL2) from the Professional Developers Conference (PDC) by Anson Tsao and Robert Levy that covers this material in more depth; if you have read this far I highly recommend watching it. They do a deep dive into the material and showcase some new controls. Here is the link to Anson’s blog on Touch, https://blogs.msdn.com/ansont/archive/2009/12/03/multi-touch-in-wpf-4-part-1.aspx, and of course thanks to Lester for allowing me to post these blog entries. I had fun creating these samples and I hope they help reduce the learning curve when getting started with Multi-Touch and WPF.


 

ManipulationSample.zip