Updated SDK, with HTML5, Kinect Fusion improvements, and more


I am pleased to announce that we released the Kinect for Windows software development kit (SDK) 1.8 today. This is the fourth update to the SDK since we first released it commercially one and a half years ago. Since then, we’ve seen numerous companies using Kinect for Windows worldwide, and more than 700,000 downloads of our SDK.

We build each version of the SDK with our customers in mind—listening to what the developer community and business leaders tell us they want and traveling around the globe to see what these dedicated teams do, how they do it, and what they most need out of our software development kit.

The new background removal API is useful for advertising, augmented reality gaming, training and simulation, and more.

Kinect for Windows SDK 1.8 includes some key features and samples that the community has been asking for, including:

  • New background removal. An API removes the background behind the active user so that it can be replaced with an artificial background. This green-screening effect was one of the top requests we’ve heard in recent months. It is especially useful for advertising, augmented reality gaming, training and simulation, and other immersive experiences that place the user in a different virtual environment. (A sketch of the API wiring appears after this list.)
  • Realistic color capture with Kinect Fusion. A new Kinect Fusion API scans the color of the scene along with the depth information, so it can capture the color of an object together with its three-dimensional (3D) model. The API also produces a texture map for the mesh created from the scan. This feature provides a full-fidelity 3D model of a scan, including color, which can be used for full-color 3D printing or to create accurate 3D assets for games, CAD, and other applications.
  • Improved tracking robustness with Kinect Fusion. An improved algorithm makes it easier to scan a scene. With this update, Kinect Fusion is better able to maintain its lock on the scene as the camera position moves, yielding more reliable and consistent scanning.
  • HTML interaction sample. This sample demonstrates implementing Kinect-enabled buttons, simple user engagement, and the use of a background removal stream in HTML5. It allows developers to use HTML5 and JavaScript to implement Kinect-enabled user interfaces, which was not possible previously—making it easier for developers to work in whatever programming languages they prefer and integrate Kinect for Windows into their existing solutions.
  • Multiple-sensor Kinect Fusion sample. This sample shows developers how to use two sensors simultaneously to scan a person or object from both sides, making it possible to construct a 3D model without having to move the sensor or the object! It demonstrates the calibration between two Kinect for Windows sensors, and how to use Kinect Fusion APIs with multiple depth snapshots. It is ideal for retail experiences and other public kiosks that do not have an attendant available to scan by hand.
  • Adaptive UI sample. This sample demonstrates how to build an application that adapts itself depending on the distance between the user and the screen, from gesturing at a distance to touching a touchscreen. The algorithm in this sample uses the physical dimensions and positions of the screen and sensor to determine the best ergonomic position on the screen for touch controls, as well as ways the UI can adapt as the user approaches the screen or moves farther away from it. As a result, the touch interface and visual display adapt to the user’s position and height, which enables users to interact with large touchscreen displays comfortably. The display can also be adapted for more than one user. (A simplified, hypothetical illustration of this distance-based adaptation also appears after this list.)
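
For the new background removal feature, the effect is exposed through the Developer Toolkit rather than the core runtime. The following is a minimal sketch of the typical wiring, assuming the BackgroundRemovedColorStream class from the toolkit’s Microsoft.Kinect.Toolkit.BackgroundRemoval assembly; the member names follow the toolkit’s BackgroundRemovalBasics sample, so check the installed sample for the exact signatures. It also assumes the sensor’s color, depth, and skeleton streams are already enabled with matching formats and the sensor has been started.

    // Sketch: green-screen a single tracked user with the toolkit's background removal stream.
    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit.BackgroundRemoval;

    public class GreenScreen
    {
        private readonly KinectSensor sensor;
        private readonly BackgroundRemovedColorStream backgroundRemovedStream;
        private Skeleton[] skeletons;

        public GreenScreen(KinectSensor sensor)
        {
            this.sensor = sensor;

            // Wrap the sensor and ask for 640x480 color and depth.
            this.backgroundRemovedStream = new BackgroundRemovedColorStream(sensor);
            this.backgroundRemovedStream.Enable(
                ColorImageFormat.RgbResolution640x480Fps30,
                DepthImageFormat.Resolution640x480Fps30);
            this.backgroundRemovedStream.BackgroundRemovedFrameReady += this.OnBackgroundRemovedFrameReady;

            this.sensor.AllFramesReady += this.OnAllFramesReady;
        }

        private void OnAllFramesReady(object sender, AllFramesReadyEventArgs e)
        {
            // Feed depth, color, and skeleton data into the background removal stream.
            using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
            {
                if (depthFrame != null)
                    this.backgroundRemovedStream.ProcessDepth(depthFrame.GetRawPixelData(), depthFrame.Timestamp);
            }

            using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
            {
                if (colorFrame != null)
                    this.backgroundRemovedStream.ProcessColor(colorFrame.GetRawPixelData(), colorFrame.Timestamp);
            }

            using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
            {
                if (skeletonFrame == null) return;

                if (this.skeletons == null || this.skeletons.Length != skeletonFrame.SkeletonArrayLength)
                    this.skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];

                skeletonFrame.CopySkeletonDataTo(this.skeletons);
                this.backgroundRemovedStream.ProcessSkeleton(this.skeletons, skeletonFrame.Timestamp);

                // Tell the stream which player to cut out (here: the first tracked skeleton).
                foreach (Skeleton skeleton in this.skeletons)
                {
                    if (skeleton != null && skeleton.TrackingState == SkeletonTrackingState.Tracked)
                    {
                        this.backgroundRemovedStream.SetTrackedPlayer(skeleton.TrackingId);
                        break;
                    }
                }
            }
        }

        private void OnBackgroundRemovedFrameReady(object sender, BackgroundRemovedColorFrameReadyEventArgs e)
        {
            using (BackgroundRemovedColorFrame frame = e.OpenBackgroundRemovedColorFrame())
            {
                if (frame == null) return;

                // BGRA pixels; alpha is zero wherever the background was removed,
                // so the buffer can be composited over any artificial backdrop.
                byte[] maskedPixels = frame.GetRawPixelData();
            }
        }
    }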
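
The Adaptive UI sample’s core idea can also be shown with plain code that is independent of the Kinect APIs: map the user’s distance from the screen to an interaction mode and a control scale. The snippet below is a hypothetical illustration only; the zone boundaries, growth factor, and helper names are invented for this sketch and are not the values or types used by the sample.

    // Hypothetical illustration of distance-based UI adaptation.
    // The thresholds (0.75 m, 2.0 m) and the scaling rule are made up for this sketch.
    using System;

    public enum InteractionZone { Touch, Near, Far }

    public static class AdaptiveUi
    {
        // Pick an interaction mode from the user's distance to the screen.
        public static InteractionZone ZoneForDistance(double metersFromScreen)
        {
            if (metersFromScreen < 0.75) return InteractionZone.Touch; // close enough to touch
            if (metersFromScreen < 2.0)  return InteractionZone.Near;  // gesture with medium-sized controls
            return InteractionZone.Far;                                // gesture with large controls and text
        }

        // Grow controls roughly linearly with distance so they stay legible from afar.
        public static double ControlScale(double metersFromScreen)
        {
            const double baseScale = 1.0;      // scale at touch range
            const double growthPerMeter = 0.6; // invented growth factor
            return baseScale + growthPerMeter * Math.Max(0.0, metersFromScreen - 0.75);
        }

        // Place touch controls near the user's shoulder height, clamped to the screen's extent,
        // so a shorter or taller user gets controls at a comfortable reach.
        public static double TouchControlHeight(double shoulderHeightMeters, double screenBottomMeters, double screenTopMeters)
        {
            return Math.Min(screenTopMeters, Math.Max(screenBottomMeters, shoulderHeightMeters));
        }
    }

In a real application the distance and height would come from a tracked skeleton joint (for example the head or shoulder center), and the results would drive layout: which controls are shown, how large they are, and where on the screen they are placed.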

We also have updated our Human Interface Guidelines (HIG) with guidance to complement the new Adaptive UI sample, including the following:

Design a transition that reveals or hides additional information without obscuring the anchor points in the overall UI.

Design UI where users can accomplish all tasks for each goal within a single range.

My team and I believe that communicating naturally with computers means being able to gesture and speak, just like you do when communicating with people. We believe this is important to the evolution of computing, and are committed to helping this future come faster by giving our customers the tools they need to build truly innovative solutions. There are many exciting applications being created with Kinect for Windows, and we hope these new features will make those applications better and easier to build. Keep up the great work, and keep us posted!

Bob Heddle, Director
Kinect for Windows

Comments (11)

  1. Benjamin Anderson says:

    Wonderful! Thank you so much for this! I have been a long-time Kinect enthusiast.

    But, please, please allow us control over the darned LED in the motor drivers. Please!

    I am going crazy: green…off…green…off…green…off…green…off

    Also, please allow for the infrared camera stream to be used with face tracking.

    My application is being used in the dark because it is controlling a media application and people watch movies in low-light situations. I have added EmguCV face recognition but cannot see a face to crop in the dark.

    The infrared camera bytes do not work with the face tracking.

    Great job team!!

  2. geodome says:

    Hi,

    I am keen to test and develop apps for the Kinect on Windows. I am also new to Windows programming. I am unsure which Kinect software to download, given there are three:

    1) Kinect for Windows SDK v1.8

    2) Kinect for Windows Developer Tool Kit v1.8

    3) Kinect for Windows Run-Time v1.8

    Please advise. What purpose does each one serve? Why is there a need for so many 'flavours'?

    Cheers

  3. bart says:

    Download the first two: install the Kinect for Windows SDK v1.8, then the Kinect for Windows Developer Tool Kit v1.8, and you are good to go.

  4. lostinmyworld says:

    Hello,

    1) Kinect for Windows SDK v1.8

    The Kinect for Windows SDK includes the drivers for Windows 7, Windows Embedded Standard 7, and Windows 8 desktop apps. It supports applications built with C++, C#, or VB.

    This is a MUST for your computer to detect Kinect properly.

    2) Kinect for Windows Developer Tool Kit v1.8

    This includes:

    Kinect Studio, Kinect Fusion API, Background Removal API, JavaScript APIs with HTML5, Face Tracking API, and Visual Studio Controls using Kinect.Toolkit

    This is a MUST for developers. It has everything you need for your Visual Studio projects to use Kinect features and functions. Install this and start developing!

    3) Kinect for Windows Run-Time v1.8

    This is for a PC that only needs to run Kinect applications, not develop them. It provides only the components necessary to run Kinect applications.

    4) You must have a Kinect sensor and a PC with Windows…

    Was this helpful?

  5. ashim says:

    Does Microsoft provide student training (in Kinect)?

  6. vinayak says:

    Hi, what is the difference between the Kinect for Windows sensor and the v2 sensor? If I start developing apps for Kinect for Windows SDK v1.8, can I use them with v2 sensors? Is there compatibility? Please let me know.

  7. Kinect for Windows Team says:

    Hello Vinayak,

    There is no forward compatibility between the SDK 1.8 and the v2 sensor.

    Thank you.

  8. 김동희 says:

    By any chance, are there differences from version 1.5?

    Can the BackgroundRemoval from version 1.8 also be used in version 1.5?

  9. Kinect for Windows Team says:

    Hello 김동희

    Are you asking if you can use background removal in previous versions (like 1.8) of the current public preview Kinect for Windows SDK 2.0?

  10. Shayan Ali Akbar says:

    Hi,

    My question is regarding the noise present around the edge of the point cloud/mesh obtained from KinectFusion. This is probably a limitation of the depth sensor. This axial and lateral noise can be modeled as described in the paper "Modeling Kinect Sensor Noise for Improved 3D Reconstruction and Tracking" by Chuong V. Nguyen et al., found here:

    http://www.researchgate.net/…/0912f50e633228e84e000000.pdf

    This paper is somewhat old, and I was wondering whether these noise models have been adopted in the new KinectFusion version in the SDK 2.0.

    Thanks,

    Shayan

    1. Thank you for your interest. Please direct this technical question to our public Kinect for Windows v2 SDK forum, where you can exchange ideas with the Kinect community and Microsoft engineers. You can also browse existing questions or ask a new question by clicking the Ask a question button.

      Access the forum at https://social.msdn.microsoft.com/Forums/en-US/home?forum=kinectv2sdk&filter=alltypes&sort=lastpostdesc.