Multi-touch in WPF 4 Part 3 – Manipulation Events

One of the most common uses of multi-touch input is panning, zooming and rotation. In WPF 4, the easiest way to implement these gestures is by using the Manipulation events on UIElements. Manipulation events also support simple inertial physics for a more fluid user experience.

You may well wonder why we call these ‘manipulation’ events rather than gestures, like Win32’s WM_GESTURE messages. With manipulation events, we interpret the multi-touch input as directly manipulating the on-screen elements, as if you were using your fingers to move, rotate and stretch physical objects. Manipulation events report translation, rotation and scaling transformations simultaneously, whereas with WM_GESTURE you can only get one of these transformation components at a time. Thus with WM_GESTURE, you cannot pan and zoom at the same time.

There are typically 2 ways of using manipulation events: 1) to interpret the events for panning/zooming/rotating content such as maps and images, 2) to interpret the events for manipulating on-screen elements, such as organizing a deck of cards or moving puzzle pieces. Depending on your scenario, you will decide which elements to enable manipulation on and which element to use as the manipulation container.

Manipulation Events

The manipulation events on a UIElement form a state machine for the interaction sequence: ManipulationStarting is raised first, followed by ManipulationStarted, then a stream of ManipulationDelta events while the fingers move, optionally ManipulationInertiaStarting when the last finger is lifted, and finally ManipulationCompleted.


Getting Started

There are 3 simple concepts that you need to understand to get started with manipulations.

First, you need to enable the IsManipulationEnabled boolean property on elements that are to be manipulated. Setting this property to true on a UIElement starts hit-testing of touch events on the element and raises manipulation events when you drag the element with one or more touch contacts.

Second, you need to handle the ManipulationStarting event and specify the ManipulationContainer for the interaction. The ManipulationStarting event is a routed event that is raised from a UIElement with IsManipulationEnabled set to true. This event is raised after the first touch down on the UIElement and before any other manipulation events. This event is used to configure the manipulation processing logic.

The ManipulationContainer is an element that acts as the frame of reference for the manipulation transformation calculations. You can imagine the manipulation container as the physical surface that the manipulated elements move against. The manipulation container cannot be changed during the interaction sequence between ManipulationStarting and ManipulationCompleted.

Last, you need to handle at least the ManipulationDelta event to respond to the transformations calculated from the finger movements. The ManipulationDelta event reports pan, zoom, and rotate as separate components of the transformation. The transformation is reported both as a delta since the last event (DeltaManipulation) and since the start of the manipulation (CumulativeManipulation).
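As a quick sketch (the handler name here is just an illustration, not part of the sample below), a ManipulationDelta handler can read both forms of the transformation from the event arguments:

private void OnManipulationDelta(object sender,
    ManipulationDeltaEventArgs e)
{
    // Change since the last ManipulationDelta event
    Vector deltaTranslation = e.DeltaManipulation.Translation;
    double deltaRotation = e.DeltaManipulation.Rotation;

    // Accumulated change since the start of the manipulation
    Vector totalTranslation = e.CumulativeManipulation.Translation;
    Vector totalScale = e.CumulativeManipulation.Scale;
}

Whether you use the delta or the cumulative values depends on whether you incrementally update a transform on each event (as in the example below) or recompute it from the element’s state at the start of the manipulation.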

Basic example

In this example, we enable multi-touch manipulation of a red rectangle element on a canvas. You can move the rectangle with 1 finger and rotate and zoom with 2 or more fingers. The following XAML snippet shows hooking the manipulation events and enabling manipulation on the rectangle.

<Canvas x:Name="_canvas"
        ManipulationStarting="_canvas_ManipulationStarting"
        ManipulationDelta="_canvas_ManipulationDelta">
    <Rectangle IsManipulationEnabled="True"
               Fill="Red" Width="100" Height="100"/>
</Canvas>

We will be applying a render transform to the rectangle. The transform is relative to the canvas, so we will use the canvas as the manipulation container.

private void _canvas_ManipulationStarting(object sender,
    ManipulationStartingEventArgs e)
{
    e.ManipulationContainer = _canvas;
    e.Handled = true;
}

We handle the ManipulationDelta event to apply the render transformation to the rectangle. First we retrieve the rectangle as the original source of the manipulation event. Then we extract the current render transformation as a MatrixTransform.

To calculate the effects of the manipulation, we apply the scale, rotation and translation transformation components from the event arguments to the current render transform. In matrix calculations, the order of transformation (multiplication) is important. We need to apply scaling and rotation first, centered at the manipulation origin, before moving the rectangle based on the translation component.

private void _canvas_ManipulationDelta(object sender,
    ManipulationDeltaEventArgs e)
{
    var element = e.OriginalSource as UIElement;

    var transformation = element.RenderTransform
        as MatrixTransform;
    var matrix = transformation == null ? Matrix.Identity :
        transformation.Matrix;

    // Scale and rotate first, centered at the manipulation
    // origin, then translate
    matrix.ScaleAt(e.DeltaManipulation.Scale.X,
                   e.DeltaManipulation.Scale.Y,
                   e.ManipulationOrigin.X,
                   e.ManipulationOrigin.Y);
    matrix.RotateAt(e.DeltaManipulation.Rotation,
                    e.ManipulationOrigin.X,
                    e.ManipulationOrigin.Y);
    matrix.Translate(e.DeltaManipulation.Translation.X,
                     e.DeltaManipulation.Translation.Y);

    element.RenderTransform = new MatrixTransform(matrix);
    e.Handled = true;
}

That’s it!

In the next parts of the series, I’ll talk about inertial movement and single finger rotation with manipulation pivot.

Comments (9)

  1. zephyr2b says:

    I have a problem following your code examples when integrating this in more complex code.

    When I tried this after PDC09 it seemed to work well until I tried a more complex example.

    In XAML, I create a Window consisting of a Canvas. In that Canvas a have a smaller canvas that has 2 children: a rectangle and an image. I let my outer Canvas handle the Manipulation events and in the ManipulationStarting eventhandler I set the container to the outer canvas.

    My problem: the rectangle is scaled, rotated and translated properly, but when I try to rotate or scale the image it jumps away.

    Could you help me in understanding and solving this problem? I have no clue anymore. I would owe you if you could help me since it is very important for me and my customer.

    Source code in .zip can be found here:


  2. AnsonT says:

    The problem is that there are a number of coordinate transformations between the Rectangle and the layout root from the various Canvas.Left/Top properties. The render transform is relative to the parent element. So the manipulation transformation is calculated in the LayoutRoot’s coordinate system, while the render transform is relative to the inner canvas, along with the offset introduced by the Canvas.Left/Top properties.
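    One way to compensate (a sketch, assuming the element’s direct parent is the inner canvas and that you otherwise follow the ManipulationDelta handler from the post) is to map the manipulation origin from the container’s coordinate space into the parent’s before using it as the center of rotation and scaling:

    var parent = VisualTreeHelper.GetParent(element) as UIElement;
    // Map the origin from the manipulation container's coordinate
    // space into the parent's, where the render transform applies
    Point origin = _canvas.TranslatePoint(e.ManipulationOrigin, parent);
    matrix.ScaleAt(e.DeltaManipulation.Scale.X,
                   e.DeltaManipulation.Scale.Y,
                   origin.X, origin.Y);
    matrix.RotateAt(e.DeltaManipulation.Rotation,
                    origin.X, origin.Y);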

  3. zephyr2b says:

    Thanks for your reply. I’m afraid I don’t really get it right. Are you saying I need to recalculate the e.ManipulationOrigin used in the RotateAt and ScaleAt towards the parent canvas (=LayoutRoot)?

    Could you, please, find some time to correct my example into a correctly working sample?  That would be very kind of you.


  4. AnsonT says:

    You should not be using both Canvas.Left/Top with RenderTransform for the transformation.

  5. zephyr2b says:

    Hey Anson,

    Thanks again for the quick response. In my test project I was able to get it working properly, so now I will change this in the customer’s project. Superb!

    I can hardly wait until WPF4 is officially released.


  6. Taran says:

    Thanks a lot ..It works for me

  7. Eduardo says:

    How can you make a different gesture with touch events, that is, without using manipulation events and without having WPF interpret the gesture?


  8. Murali says:

    Hi Anson,

    Your code gave me a good start on working with manipulation events, but I have an issue when a child is added to the canvas.

    I have a TextBlock inside a canvas. When I apply manipulation events to the canvas, I want the TextBlock to translate along with the canvas. Can you please guide me?

    plz post ur answer to

  9. ashish says:

    How to reset all the manipulations after doing manipulations?