This article looks at how to leverage the new Kinect for Windows v2 device to create a simple executive toy using Unity and the Kinect Unity plugin. The aim is to demonstrate how easy it is to use Kinect in your Unity projects and how to build something simple quickly. This is a step-by-step guide to creating the Kinect Wall app and how it works.
Keywords: Unity3D, Kinect, Shaders, Cg, Geometry Shaders, Vertex Shaders, Fragment Shaders, Textures
I am taking a tutorial approach for this article, where I will show you step by step how to build the application. The application source code will be available on Codeplex, free to use in any way you like.
- Unity 3D Pro – unfortunately, because we are accessing unsafe code, we need the Pro version
- Kinect SDK
- Kinect Unity Plugin
The first step is to start Unity and create a new Project. Give it a name and click create.
The next step is some minor admin in the project. The main thing I do here is create the following folders so I can keep the project a little neater:
Now we begin to set up the items we need to create to make this all work. We will need the following:
- Empty Material
- Scene – the main one with our camera, lights, etc.
- Textures – the textures we are going to use for our bricks
- Shader – this does all the depth and body index processing as well as the geometry generation
- Main Script – this spins up the Kinect device and passes information through to the Shader for processing.
We now need to add the Kinect for Windows v2 assets by going to Assets –> Import Package –> Custom Package and selecting the Kinect v2 Unity package. This imports all the required assets into the project so we can access the sensor. We also need to enable unsafe code; the way I do this is by adding a file called smcs.rsp to the Assets folder. This is a simple text file containing the single line -unsafe.
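The smcs.rsp file is simply a list of extra flags passed to the Mono C# compiler, so its entire contents are just the one line:

```
-unsafe
```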
Next we create the MainScript file, which retrieves all the information from the sensor and passes it to the Shader. Inside MainScript there are a number of default methods: Start, which is used for initialisation, and Update, which is called once per frame. Inside the Start function we are going to initialise the sensor and some of the storage structures for the streams of information. First I create some local variables to store the sensor state:
The KinectSensor object gives us access to the various data streams from the device. Because we want the outline of the player and the depth information, we need access to the DepthFrame and BodyIndex streams.
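The initialisation described above can be sketched as follows. This is a minimal version assuming the Windows.Kinect namespace from the Kinect v2 Unity plugin; the field names are my own, not necessarily those used in the Codeplex source.

```csharp
using UnityEngine;
using Windows.Kinect;

public class MainScript : MonoBehaviour
{
    private KinectSensor _sensor;
    private DepthFrameReader _depthReader;
    private BodyIndexFrameReader _bodyIndexReader;
    private ushort[] _depthData;
    private byte[] _bodyIndexData;

    void Start()
    {
        _sensor = KinectSensor.GetDefault();
        if (_sensor == null)
            return;

        // Open a reader for each of the two streams we need.
        _depthReader = _sensor.DepthFrameSource.OpenReader();
        _bodyIndexReader = _sensor.BodyIndexFrameSource.OpenReader();

        // Allocate storage based on the frame dimensions the sensor reports.
        FrameDescription depthDesc = _sensor.DepthFrameSource.FrameDescription;
        _depthData = new ushort[depthDesc.Width * depthDesc.Height];

        FrameDescription bodyDesc = _sensor.BodyIndexFrameSource.FrameDescription;
        _bodyIndexData = new byte[bodyDesc.Width * bodyDesc.Height];

        if (!_sensor.IsOpen)
            _sensor.Open();
    }
}
```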
We get the information from the readers in the update loop in the following manner:
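A minimal sketch of that update loop, continuing the illustrative fields from above. AcquireLatestFrame returns null when no new frame is ready, and the frames are disposable, hence the using blocks:

```csharp
void Update()
{
    if (_depthReader != null)
    {
        using (DepthFrame frame = _depthReader.AcquireLatestFrame())
        {
            if (frame != null)
                frame.CopyFrameDataToArray(_depthData);
        }
    }

    if (_bodyIndexReader != null)
    {
        using (BodyIndexFrame frame = _bodyIndexReader.AcquireLatestFrame())
        {
            if (frame != null)
                frame.CopyFrameDataToArray(_bodyIndexData);
        }
    }
}
```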
This reads the information into an array that I can then send to the Shader, either through a Texture or via a ComputeBuffer. To kick-start the shader I need to set all the parameters and then tell it to start drawing using the following:
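A hedged sketch of the ComputeBuffer route. The shader property names ("_DepthBuffer", "_Width") and the use of OnRenderObject are my own choices for illustration; structured buffer elements must be 4-byte aligned, so the ushort depth values are widened to ints before upload:

```csharp
public Material wallMaterial;
private ComputeBuffer _depthBuffer;
private int[] _depthAsInt; // shader-friendly copy of the ushort depth values

void InitBuffer(int pixelCount)
{
    _depthBuffer = new ComputeBuffer(pixelCount, sizeof(int));
    _depthAsInt = new int[pixelCount];
}

void OnRenderObject()
{
    // Upload the latest frame data to the GPU.
    for (int i = 0; i < _depthData.Length; i++)
        _depthAsInt[i] = _depthData[i];
    _depthBuffer.SetData(_depthAsInt);

    // Set the shader parameters and kick off procedural drawing:
    // one point per depth pixel, expanded into a tile by the geometry shader.
    wallMaterial.SetBuffer("_DepthBuffer", _depthBuffer);
    wallMaterial.SetInt("_Width", 512);
    wallMaterial.SetPass(0);
    Graphics.DrawProcedural(MeshTopology.Points, _depthData.Length);
}

void OnDestroy()
{
    if (_depthBuffer != null)
        _depthBuffer.Release();
}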
We now have everything we need in the MainScript. To see the complete source code, with both methods of getting data to a Shader as well as getting the BodyIndex information, have a look at the Codeplex source where the complete project is available.
Remember to save your scene into the scene folder. I then add some items to the scene: the first is a few lights to provide a clean effect for the wall, and the next is an empty game object (GameObject –> Create Empty / Ctrl-Shift-N). I now add my main script into my Scripts folder, select the mainScriptObject, and drag the script onto it.
What I need to do now is create a shader and a material that uses this shader. I create the shader first, then the material, and drag the material onto the mainScriptObject's shader material property. Now we start looking at the shader itself. The shader handles all the tile generation for the scene. I will be leveraging a geometry shader to perform all the object handling, mainly for performance: processing the textures would take too long outside of the shader.
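Before diving into the full listing, here is a stripped-down sketch of the structure such a shader takes: the vertex stage places one point per depth pixel (pushed out by its depth value), and the geometry stage expands each point into a quad. The buffer and property names match the illustrative C# earlier and are assumptions, not the article's final shader:

```
Shader "Custom/KinectWallSketch"
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma target 5.0
            #pragma vertex vert
            #pragma geometry geom
            #pragma fragment frag
            #include "UnityCG.cginc"

            StructuredBuffer<int> _DepthBuffer;
            int _Width;

            struct v2g { float4 pos : SV_POSITION; };
            struct g2f { float4 pos : SV_POSITION; };

            // Vertex stage: one point per depth pixel, displaced by depth.
            v2g vert(uint id : SV_VertexID)
            {
                v2g o;
                float x = id % _Width;
                float y = id / _Width;
                float depth = _DepthBuffer[id] / 1000.0; // mm to metres
                o.pos = float4(x, y, depth, 1);
                return o;
            }

            // Geometry stage: expand each point into a small quad (the "tile").
            [maxvertexcount(4)]
            void geom(point v2g input[1], inout TriangleStream<g2f> stream)
            {
                const float h = 0.5; // half tile size
                float2 offsets[4] = { float2(-h,-h), float2(-h,h),
                                      float2(h,-h),  float2(h,h) };
                g2f o;
                for (int i = 0; i < 4; i++)
                {
                    o.pos = mul(UNITY_MATRIX_VP,
                                input[0].pos + float4(offsets[i], 0, 0));
                    stream.Append(o);
                }
            }

            fixed4 frag(g2f i) : SV_Target
            {
                return fixed4(1, 1, 1, 1);
            }
            ENDCG
        }
    }
}
```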
The shader itself is below, and I have commented as much of the code as possible to make it easy to understand. Currently the depth from the depth buffer isn't functioning as I had hoped; I will post an update as soon as I have tweaked it.
With that all done you can now run your scene with the Kinect v2 connected, and you should see the wall pushing out when you walk in front of it.
Video Link – https://www.youtube.com/watch?v=cFsPluBZn1s
Using Kinect v2 with Unity3D is extremely easy, and the APIs are pretty much identical to the .NET APIs. Probably the biggest lesson is the benefit of using Shaders to handle the large volume of information in the various image streams you receive from the device.
Edit: I have uploaded a zip file of the complete project to Codeplex.