What happens when you mix developers, designers, and dancers? This time, it’s a virtual reality dance cube!
You read that right – in the latest development to come out of Microsoft’s emerging hack culture, a four-foot cube hosting an interactive dance party appeared at Seattle’s Decibel music festival last week. With 5 CPUs, 5 projectors, and 4 Kinect sensors, the team created a virtual reality dance portal through which people on opposite sides of the cube could dance together and interact in ways not otherwise possible. “What really makes it exciting,” says Abram Jackson, a Program Manager at Microsoft, “is that it’s got this sense of physical presence with this virtual world, and it’s so cool that you can interact with these digital objects in sort of a physical way.” Check out this video to see the dance cube in action:
One of the exciting things about this project, aside from the fact that it reflects the overall shift in company attitude since Satya took the helm, is that it demonstrates the full range of possibilities of the Kinect. This July, Microsoft began shipping its Kinect for Windows v2 sensor – and boy, does it ever pack a punch. The original Kinect for Windows gave developers a way to integrate richer environmental information into their apps and create smarter, more interactive experiences for users, but Kinect v2 takes this to the next level with improved hardware and new features that open up an exciting new spectrum of opportunities for developers. Here’s a look at how improved skeletal tracking, an expanded field of view, and new active infrared capabilities expand what’s possible for applications in augmented reality, 3D visualization, machine learning – and virtual dance environments.
Compared to the original Kinect for Windows, the v2 sensor can now track 6 bodies at a time, with 25 joints per person (compared to 20 with the original sensor). One of the most notable added joints sits at the top of the torso, enabling shoulder-movement tracking. As a result of these changes, tracked positions are more anatomically correct, more bystander involvement is possible in group interaction scenarios, and evaluation of body position is more accurate and consistent.
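Those joint counts correspond to a fixed enumeration in the Kinect for Windows SDK. Here’s a rough sketch of the v2 joint set – in Python for readability rather than the SDK’s C++/C#, with names mirroring the SDK’s JointType enumeration:

```python
from enum import Enum

# The 25 joints the Kinect v2 skeletal tracker reports per body.
# The five additions over v1's 20 joints are SPINE_SHOULDER (top of
# the torso, enabling shoulder tracking) plus the hand-tip and thumb
# joints on each hand.
class JointType(Enum):
    SPINE_BASE = 0
    SPINE_MID = 1
    NECK = 2
    HEAD = 3
    SHOULDER_LEFT = 4
    ELBOW_LEFT = 5
    WRIST_LEFT = 6
    HAND_LEFT = 7
    SHOULDER_RIGHT = 8
    ELBOW_RIGHT = 9
    WRIST_RIGHT = 10
    HAND_RIGHT = 11
    HIP_LEFT = 12
    KNEE_LEFT = 13
    ANKLE_LEFT = 14
    FOOT_LEFT = 15
    HIP_RIGHT = 16
    KNEE_RIGHT = 17
    ANKLE_RIGHT = 18
    FOOT_RIGHT = 19
    SPINE_SHOULDER = 20   # new in v2: top-of-torso joint
    HAND_TIP_LEFT = 21
    THUMB_LEFT = 22
    HAND_TIP_RIGHT = 23
    THUMB_RIGHT = 24

MAX_BODIES = 6  # v2 fully tracks six bodies simultaneously

print(len(JointType))  # 25 joints per tracked body
```

In the real SDK, each frame delivers these joints per tracked body with a position and a tracking state (tracked, inferred, or not tracked), so apps can decide how much to trust each joint.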
Depth sensing has also been upgraded, with better depth fidelity and an expanded field of view. With these features the v2 sensor gives you improved 3D visualization, a better ability to see small objects, improved overall clarity, and a larger captured scene area – complemented by an upgraded 1080p color camera. Fitness, wellness, and entertainment scenarios alike will benefit from these improvements, which enable better augmented reality applications in high definition.
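In the SDK itself, converting depth pixels into 3D points is handled for you by its coordinate-mapping APIs, but the underlying math is worth a look. This Python sketch back-projects a depth pixel into camera space using a simple pinhole model; the intrinsics are ballpark v2 figures (512×424 depth resolution, roughly a 70° horizontal field of view), not calibrated values from a real sensor:

```python
import math

# Rough v2 depth-camera figures -- illustrative, not calibrated.
DEPTH_W, DEPTH_H = 512, 424
FOV_X_DEG = 70.6
# Focal length in pixels from the horizontal field of view.
FX = (DEPTH_W / 2) / math.tan(math.radians(FOV_X_DEG / 2))
FY = FX  # assume square pixels
CX, CY = DEPTH_W / 2, DEPTH_H / 2  # principal point at image center

def depth_to_camera_space(u, v, depth_mm):
    """Back-project depth pixel (u, v) to a 3D point in meters."""
    z = depth_mm / 1000.0
    x = (u - CX) * z / FX
    y = (CY - v) * z / FY  # flip so +y points up
    return (x, y, z)

# A center pixel measured at 2000 mm lands 2 m straight ahead.
point = depth_to_camera_space(256, 212, 2000)
```

Doing this for every pixel in a frame yields the point cloud behind 3D visualizations like the Cube’s shared virtual space.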
Finally, the sensor’s new active infrared capabilities allow it to see in the dark and produce a lighting-independent view of the scene. This eliminates the need to account for lighting-based variations during image processing, which greatly simplifies machine learning tasks. As an added bonus, you can now use the full-color and infrared modes at the same time.
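To see why a lighting-independent view simplifies processing, consider a basic segmentation task: because active IR intensity doesn’t shift with room lighting, a single fixed threshold can work across conditions, with no per-frame exposure compensation. The 4×4 grid below is made-up data standing in for the sensor’s 16-bit IR image:

```python
# Hypothetical IR frame: dim background, one bright region (e.g. a
# reflective marker). Values are invented, standing in for 16-bit
# active-IR intensities.
IR_FRAME = [
    [ 800,  820,  810,  790],
    [ 805, 9500, 9600,  815],
    [ 795, 9400, 9550,  800],
    [ 790,  810,  805,  795],
]

# A fixed threshold suffices because active-IR readings don't vary
# with ambient lighting -- the same cutoff works day or night.
THRESHOLD = 5000

mask = [[1 if px > THRESHOLD else 0 for px in row] for row in IR_FRAME]
bright_pixels = sum(map(sum, mask))  # 4 pixels in the bright region
```

With a visible-light camera, the same pipeline would need exposure or white-balance compensation before any threshold could be trusted.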
Whether you’re developing an interactive desktop app or a multi-faceted art installation like Microsoft’s Cube, you don’t want to ignore the Kinect. To learn more about programming for Kinect, check out the Microsoft Virtual Academy course, Programming Kinect for Windows v2 Jump Start. Want to order your sensor? Here’s how.