This Kinect for Windows v2 tutorial series will help you build an interactive Kinect app using Visual Studio and C#. You are expected to have basic experience with C# and know your way around Visual Studio. Throughout the series, you will learn how to set up your Kinect for Windows v2 sensor and dev environment, how to track skeletons and hand positions, how to manipulate the data from the colour and infrared sensors and the microphone array, how to recognize hand gestures, and how to put it all together in a deployable Windows app.
Level: Beginner to Intermediate
Welcome to part 3 of the Kinect With Me Kinect for Windows development series! Last time we started getting into the code with an intro to body tracking and overlaying shapes on the body in real time. This week’s tutorial will show how easy it is to add gesture control to an app – it only takes six lines of code!
To really get a feel for the range of gesture controls, let’s start off with a blank grid app. By default, the grid app has a main menu similar to the Windows 8 start screen, with tiles that you can select to enter the next level of content. For the sake of simplicity for this tutorial we’ll leave our content blank, but you can customize it however you want.
Step 1: Set Up Your App for Kinect Use
You should already know how to do this from Part 2 of this tutorial series, but there are a couple more things you must do:
- Add a reference to Microsoft.Kinect.Xaml.Controls
- Add a using directive for Microsoft.Kinect.Xaml.Controls
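After those two steps, the top of App.xaml.cs should look something like this. The Kinect directive is the only addition; the other using directives shown here come from the default project template and may vary slightly with your Visual Studio version:

```csharp
// App.xaml.cs
using Windows.ApplicationModel;
using Windows.ApplicationModel.Activation;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Navigation;
using Microsoft.Kinect.Xaml.Controls; // added for KinectRegion and KinectUserViewer
```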
Step 2: Create a KinectUserViewer and a KinectRegion
NOTE: For the next steps we’re working in the App.xaml.cs file, inside the if (rootFrame == null) condition.
Once you’ve set up your app using the steps in Part 2, the next thing we’re going to do is create a user viewer, which allows the user to see through the eyes of the Kinect. As discussed in the last part of Kinect With Me, it is important for the user to be able to see what the Kinect sensor is recognizing. The KinectUserViewer class shows the sensor’s infrared channel and highlights the shape of users’ bodies as they are recognized. We give it a horizontal and vertical alignment (top center is an intuitive place for users to look and doesn’t get in the way of the application functionality), and give it an appropriate height and width. We’ll also add a KinectRegion, which is the primary class to add KinectInteraction experiences to an application.
KinectUserViewer kinectUserViewer = new KinectUserViewer()
{
    HorizontalAlignment = HorizontalAlignment.Center,
    VerticalAlignment = VerticalAlignment.Top,
    Height = 100,
    Width = 121
};
KinectRegion kinectRegion = new KinectRegion();
Step 3: Add Content to App on Launch
Finally we’ll create a new grid and add our region and user viewer to it. Now here comes the important part: instead of leaving the default code, which sets Window.Current.Content = rootFrame, we’re going to set the content of our kinectRegion to rootFrame, and set Window.Current.Content to the grid. By doing this, we make the entire rootFrame accessible and controllable by hand gestures. And since the hand gestures are already built as part of the Kinect SDK and enabled by KinectRegion, that’s all we have to do!
Grid grid = new Grid();
grid.Children.Add(kinectRegion);
grid.Children.Add(kinectUserViewer);
kinectRegion.Content = rootFrame;

// Place the grid, rather than the frame, in the current Window
Window.Current.Content = grid;
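Putting all three steps together, the relevant part of the OnLaunched method in App.xaml.cs might look roughly like this. This is a sketch assuming the default grid app template (whose start page is typically named GroupedItemsPage); the surrounding boilerplate may differ slightly depending on your Visual Studio version:

```csharp
protected override void OnLaunched(LaunchActivatedEventArgs e)
{
    Frame rootFrame = Window.Current.Content as Frame;

    if (rootFrame == null)
    {
        rootFrame = new Frame();

        // Show the sensor's infrared view and highlight recognized users
        KinectUserViewer kinectUserViewer = new KinectUserViewer()
        {
            HorizontalAlignment = HorizontalAlignment.Center,
            VerticalAlignment = VerticalAlignment.Top,
            Height = 100,
            Width = 121
        };

        // KinectRegion enables the built-in hand-cursor interactions
        // for everything inside its Content
        KinectRegion kinectRegion = new KinectRegion();
        kinectRegion.Content = rootFrame;

        Grid grid = new Grid();
        grid.Children.Add(kinectRegion);
        grid.Children.Add(kinectUserViewer);

        // Place the grid (not the frame) in the current Window
        Window.Current.Content = grid;
    }

    if (rootFrame.Content == null)
    {
        // GroupedItemsPage is the grid app template's start page
        rootFrame.Navigate(typeof(GroupedItemsPage), e.Arguments);
    }

    Window.Current.Activate();
}
```

Note that the user viewer is added to the grid after the region, so it renders on top of the app content rather than behind it.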
Now when you run your app you should see the user viewer in the top centre of the screen. When the Kinect recognizes you, you’ll be highlighted, and when you raise your hand the app should engage and display your hand cursor on the screen. Give it a try! You should be able to grab and scroll around the app, and select tiles by pushing your hand toward the sensor.
That’s it! Who knew enabling gesture control was so easy?
Sage Franch is a Technical Evangelist at Microsoft and blogger at Trendy Techie.