I was at the University of York yesterday and had a quick conversation about the opportunity presented by the Microsoft Kinect SDK and Bing Maps. The conversation was specifically about Bing Street Side View Maps and OpenStreetMap, and interacting with them through a NUI using the Kinect device.
This conversation was interesting because yesterday I posted a blog about the new WPF for Surface 2.0, so in light of that conversation I wanted to share some further detail on using the PrimeSense sensor with WPF.
To help explain this I want to share the following video from InfoStrat.
Using the same techniques that allow us to use WPF on Surface, we can now use depth camera hand tracking to control multi-touch applications. Here is a very rough proof-of-concept in which Joshua Blake, Microsoft Surface MVP and author of the Multitouch in .NET (WPF 4 & Surface SDK) book, controls the InfoStrat.VE WPF 4 multi-touch control using a depth camera.
Josh simply controls Bing Maps with the Kinect 3D-sensing technology.
In this multi-touch application Josh displays outlines of the tracked hands to provide better feedback about what is going on; he also uses OpenNI and NITE from PrimeSense.
The tracked hands can participate in all of the multi-touch manipulations and gestures that you’ve already written for your touch application. You can even interact using hand tracking and touch at the same time in the same window. The code that enables this is part of the InfoStrat MotionFx open-source project, which is available from http://motionfx.codeplex.com/
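To illustrate why this works, here is a minimal sketch of a standard WPF 4 manipulation handler. Because MotionFx promotes tracked hands into the WPF touch input stack, a handler like this responds the same way whether the contacts come from fingers or from depth-camera hand tracking. The `MapHost` class name and the XAML wiring shown in the comments are my own illustrative assumptions; the event types and `Matrix` calls are the standard WPF 4 manipulation APIs.

```csharp
using System.Windows;
using System.Windows.Input;
using System.Windows.Media;

// Hypothetical host element for illustration. Attach in XAML, e.g.:
//   <Border IsManipulationEnabled="True"
//           ManipulationStarting="OnManipulationStarting"
//           ManipulationDelta="OnManipulationDelta" ... />
public partial class MapHost
{
    private void OnManipulationStarting(object sender, ManipulationStartingEventArgs e)
    {
        // Report manipulation deltas relative to this element.
        e.ManipulationContainer = (IInputElement)sender;
        e.Handled = true;
    }

    private void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
        var element = (FrameworkElement)sender;
        var matrix = ((MatrixTransform)element.RenderTransform).Matrix;
        var origin = e.ManipulationOrigin;

        // Pan, zoom and rotate around the manipulation origin -- the same
        // gesture code runs for touch contacts and tracked hands alike.
        matrix.RotateAt(e.DeltaManipulation.Rotation, origin.X, origin.Y);
        matrix.ScaleAt(e.DeltaManipulation.Scale.X, e.DeltaManipulation.Scale.Y,
                       origin.X, origin.Y);
        matrix.Translate(e.DeltaManipulation.Translation.X,
                         e.DeltaManipulation.Translation.Y);

        element.RenderTransform = new MatrixTransform(matrix);
        e.Handled = true;
    }
}
```

Nothing in the handler knows where the touch points came from, which is the point: existing gesture code carries over to motion tracking unchanged.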
The video above shows that it is feasible to use the WPF touch stack and the Surface SDK as a unified platform for both multi-touch and motion-tracking input modalities.