Kinect for life

This week we held an event showcasing medical technology innovation in partnership with Kingston University, the University of Surrey, Brunel University and Microsoft.

Given the advances made possible by Microsoft’s Kinect for Windows, medical professionals and researchers are exploring how computer vision and natural user interfaces can enhance healthcare.

Confirmed sessions:

  • Fall detection system, Dr Dimitrios Makris, Kingston University
  • Facial expression recognition from 3D data, Dr Hongying Meng, Brunel University
  • Controlling a smart home, Dr Francisco Florez Revuelta, Kingston University
  • Concept to Commercialisation – A strategy for business innovation 2011–2015, Graham Worsley, Lead Technologist in the Assisted Living Innovation Platform, Technology Strategy Board
  • Kinect for Medical and Non-gaming applications: developments at the University of Surrey, Dr Kevin Wells, University of Surrey

Confirmed speakers:

  • Graham Worsley, Technology Strategy Board
  • Prof Malcolm Sperrin, Royal Berkshire Hospital
  • Dr Dimitrios Makris, Kingston University
  • Dave Brown, Microsoft
  • Prof Paolo Remagnino, Kingston University
  • Dr Kevin Wells, University of Surrey
  • Dr Hongying Meng, Brunel University
  • Dr Francisco Florez Revuelta, Kingston University
  • Tim Craig, Smart Care UK

Kinect use examples

One of those areas is robotics. The Kinect sensor uses an infrared laser projector and an infrared camera to measure the distance between the sensor and a large number of points in the scene, creating a 3D map of its environment.

In games it is used to create a representation of your skeleton; in robotics it is used, for example, to build a 3D map of a room so that a robot can navigate through it without colliding with objects. All of these features are accessible through the Kinect SDK, which lets you access the post-processed data (for example skeleton positions) or tap into the raw data if you need to. This is a task that just a few years ago would have required months of PhD-level work.
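
To make that concrete, here is a minimal C# sketch (assuming the Kinect for Windows SDK v1 and its Microsoft.Kinect assembly) that enables the skeleton and depth streams and prints the tracked head position. It is illustrative only, not the exact code behind the demos below.

```csharp
using System;
using System.Linq;
using Microsoft.Kinect;

class SkeletonReader
{
    static void Main()
    {
        // Grab the first connected Kinect sensor.
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.SkeletonStream.Enable();   // post-processed data: tracked skeletons
        sensor.DepthStream.Enable();      // raw data: per-pixel depth, if you need it

        sensor.SkeletonFrameReady += (o, e) =>
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;

                Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                Skeleton person = skeletons.FirstOrDefault(
                    s => s.TrackingState == SkeletonTrackingState.Tracked);
                if (person == null) return;

                // Joint positions are in metres, relative to the sensor.
                SkeletonPoint head = person.Joints[JointType.Head].Position;
                Console.WriteLine("Head at X={0:F2} Y={1:F2} Z={2:F2}", head.X, head.Y, head.Z);
            }
        };

        sensor.Start();
        Console.ReadLine();   // keep reading frames until Enter is pressed
        sensor.Stop();
    }
}
```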

RoboSavvy have used the skeletal tracking functions in the SDK to create a small demo combining a Kinect sensor and a small humanoid robot called Robobuilder, to demonstrate the capabilities of both.

http://www.youtube.com/watch?v=3AcFeX3REDk

http://www.youtube.com/watch?v=zOg_yyX3Hok

In this sample, we read the Kinect’s skeletal data and make a humanoid robot mimic the person’s pose.

The flowchart is quite simple:

Step 1 – Read the positions of the person’s joints (in our case the shoulder, elbow and wrist) from the Kinect.

Step 2 – Using some trigonometry, calculate the joint angles of the body, to determine for example whether the arms are raised.

Step 3 – Send that information to the servos (the motors that move the robot) to position them so that the robot mimics the person’s movements. (The legs are not tracked, otherwise the robot would fall off the table.)
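
As a sketch of steps 2 and 3 (illustrative only: the elbow-angle maths is standard, but SendServo and the 0–1023 position range are placeholders rather than the actual Robobuilder protocol):

```csharp
using System;
using Microsoft.Kinect;

static class MimicArm
{
    // Angle at the elbow, i.e. between the upper arm and the forearm, in degrees.
    static double ElbowAngle(SkeletonPoint shoulder, SkeletonPoint elbow, SkeletonPoint wrist)
    {
        // Vectors elbow->shoulder and elbow->wrist.
        double ux = shoulder.X - elbow.X, uy = shoulder.Y - elbow.Y, uz = shoulder.Z - elbow.Z;
        double vx = wrist.X - elbow.X,    vy = wrist.Y - elbow.Y,    vz = wrist.Z - elbow.Z;

        double dot  = ux * vx + uy * vy + uz * vz;
        double lenU = Math.Sqrt(ux * ux + uy * uy + uz * uz);
        double lenV = Math.Sqrt(vx * vx + vy * vy + vz * vz);

        return Math.Acos(dot / (lenU * lenV)) * 180.0 / Math.PI;
    }

    static void Mimic(Skeleton person)
    {
        double angle = ElbowAngle(
            person.Joints[JointType.ShoulderRight].Position,
            person.Joints[JointType.ElbowRight].Position,
            person.Joints[JointType.WristRight].Position);

        // Map 0-180 degrees onto the servo's position range (placeholder values).
        int servoPosition = (int)(angle / 180.0 * 1023);
        SendServo("right_elbow", servoPosition);   // hypothetical command to the robot
    }

    static void SendServo(string joint, int position)
    {
        // In the real demo this would write a packet to the robot's serial port.
        Console.WriteLine("servo {0} -> {1}", joint, position);
    }
}
```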

In addition, to keep track of the person when they move from side to side, we created a cool gadget that makes the Kinect track you wherever you go.

The magic behind it is simple: we look at the person’s head and rotate the Kinect so that the head stays at the centre of the field of view.
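
A minimal sketch of that control loop, assuming a hypothetical pan servo under the sensor (SetPanServo is a placeholder for the real turntable command): if the head drifts away from the centre of the frame, nudge the servo in proportion to the error.

```csharp
using System;
using Microsoft.Kinect;

static class HeadTracker
{
    static double panAngle = 0;   // current angle of the hypothetical pan servo, in degrees

    static void Track(Skeleton person)
    {
        SkeletonPoint head = person.Joints[JointType.Head].Position;

        // Horizontal angle of the head relative to the sensor's optical axis.
        double error = Math.Atan2(head.X, head.Z) * 180.0 / Math.PI;

        // Simple proportional control: turn a fraction of the error each frame.
        if (Math.Abs(error) > 2.0)   // small dead band to avoid jitter
        {
            panAngle += 0.3 * error;
            SetPanServo(panAngle);   // placeholder for the real turntable command
        }
    }

    static void SetPanServo(double degrees)
    {
        Console.WriteLine("pan servo -> {0:F1} deg", degrees);
    }
}
```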

This is a preview of technology that could in future be used, for example, to perform remote surgery or to send robots into dangerous areas.

More information on our Kinect + Robot project can be found here: http://robosavvy.com/forum/viewtopic.php?t=8026

Microsoft Robotics Developer Studio

There is currently no standard for controlling robots.

Most small robots use a low-power microcontroller similar to an Arduino. This is something like a computer, but much less powerful; it is, however, well suited to communicating with sensors, controlling motors, recharging batteries and so on.

There is a huge variety of these microcontrollers, and even when the more common types are used, robot manufacturers usually create their own software to operate their robots.

As a result, a program written for a given robot often has to be completely rewritten to work with another brand of robot, even if the two are nearly identical at a hardware level.

This is where Microsoft Robotics Developer Studio steps in and closes that gap. Robot manufacturers, or users themselves, can write small software modules for each robot that act as a translator between MRDS and the robot’s own control system.

This means that in MRDS a command, for example to make a humanoid robot step forward, is identical across several brands of robot.
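
To illustrate the idea in plain C# (this is not actual MRDS service code; MRDS expresses it through DSS services and contracts, and the names here are invented for the example), each manufacturer-specific module implements the same interface, so the code issuing a “step forward” command never changes:

```csharp
// Illustrative only: interface and class names are invented for this example.
interface IHumanoidWalk
{
    void StepForward();
}

class RobobuilderWalk : IHumanoidWalk
{
    public void StepForward()
    {
        // Translate the generic command into Robobuilder's own motion protocol.
    }
}

class OtherBrandWalk : IHumanoidWalk
{
    public void StepForward()
    {
        // Translate the same command into another manufacturer's protocol.
    }
}

class Application
{
    static void Walk(IHumanoidWalk robot)
    {
        // The application never needs to know which brand of robot it is driving.
        robot.StepForward();
    }
}
```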

With MRDS, robots can talk to each other, talk to sensors from different manufacturers, or be supervised by a master process (a kind of hypervisor) that makes sure everything is working as expected. MRDS also enables interoperability with complex functionality hosted on the PC, such as speech recognition.

Another advantage of MRDS is that it is accessible to a wide range of users, regardless of their expertise. Beginners can build robot behaviours using the Visual Programming Language (VPL), while advanced users can work with textual programming (any .NET language) to make the most of MRDS.

As an example, we can control one of our best-selling robots, the Robobuilder, using voice commands via Windows’ built-in speech recognition. This is achieved simply by dragging a few boxes in the MRDS visual programming tool; once all the boxes are connected, the robot becomes capable of understanding what we tell it to do.
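
In the demo this is done graphically in VPL, but the equivalent in a .NET language looks roughly like the sketch below, using the System.Speech recogniser; the command words and the Robot.* calls are placeholders for the Robobuilder-specific module.

```csharp
using System;
using System.Speech.Recognition;

class VoiceControl
{
    static void Main()
    {
        var recognizer = new SpeechRecognitionEngine();

        // A tiny grammar of the commands the robot understands.
        var commands = new Choices("walk forward", "stop", "wave");
        recognizer.LoadGrammar(new Grammar(new GrammarBuilder(commands)));

        recognizer.SpeechRecognized += (s, e) =>
        {
            Console.WriteLine("Heard: " + e.Result.Text);
            switch (e.Result.Text)
            {
                case "walk forward": Robot.WalkForward(); break;  // placeholder robot commands
                case "stop":         Robot.Stop();        break;
                case "wave":         Robot.Wave();        break;
            }
        };

        recognizer.SetInputToDefaultAudioDevice();
        recognizer.RecognizeAsync(RecognizeMode.Multiple);
        Console.ReadLine();
    }
}

static class Robot
{
    public static void WalkForward() { /* send the motion command to the robot */ }
    public static void Stop()        { /* ... */ }
    public static void Wave()        { /* ... */ }
}
```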