The Power of PixelSense™

Today marks the release of a video the Surface team put together highlighting the power of PixelSense™. Microsoft’s PixelSense, in the new Samsung SUR40 for Microsoft Surface, allows a display to recognize fingers, hands, and objects placed on the screen, enabling vision-based interaction without the use of cameras. The individual pixels in the display see what’s touching the screen, and that information is immediately processed and interpreted.

Think of it like the connection between the eye and the brain. You need both, working together, to see. In this case, the eye is the sensor in the panel: it picks up the image and feeds it to the brain, our vision input processor, which recognizes the image and does something with it. Taken as a whole, this is PixelSense technology.
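
If you want to see that split in code, here’s a tiny sketch that separates the two roles into a capture step (the eye) and an interpretation step (the brain). The function names and the canned frame are invented for the illustration and don’t reflect any real PixelSense interface.

```python
from typing import List

Frame = List[List[int]]  # one grid of raw in-pixel sensor readings

def capture_frame() -> Frame:
    """The 'eye': the in-panel sensors report what is touching the
    screen. Here we return a canned 2x2 frame; real hardware streams
    these continuously."""
    return [[0, 0],
            [0, 9]]

def interpret(frame: Frame) -> str:
    """The 'brain': the vision input processor turns the raw readings
    into something an application can act on."""
    bright = sum(value > 5 for row in frame for value in row)
    return f"{bright} bright pixel(s) detected"

print(interpret(capture_frame()))  # -> "1 bright pixel(s) detected"
```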
We’ve gone behind the scenes to show you how the technology was created and some of the people involved. It’s a little longer than most web videos, but we wanted to go deeper than usual and really explain what’s going on.

In conjunction with the video, let’s walk through the high-level steps of how PixelSense actually works (a rough code sketch of the processing steps follows the list):

  1. A contact (finger/blob/tag/object) is placed on the display.
  2. The IR backlight unit provides light (through the optical sheets, LCD, and protection glass) that hits the contact.
  3. Light reflected back from the contact is seen by the integrated sensors.
  4. Sensors convert the light signal into an electrical signal/value.
  5. Values reported from all of the sensors are used to create a picture of what is on the display.
  6. The picture is analyzed using image processing techniques.
  7. The output is sent to the PC. It includes the corrected sensor image and various contact types (fingers/blobs/tags).
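
To make the last few steps concrete, here is a minimal sketch in Python of the kind of processing the vision input processor might perform for steps 5 through 7: take the values reported by the sensors as an image, find the bright connected regions, and label each one as a contact. This is an illustration, not the actual PixelSense implementation; the frame values, the intensity threshold, and the finger-versus-blob size cutoff are all invented for the example, and real tag recognition (which involves decoding a printed pattern) is omitted.

```python
from collections import deque

# Hypothetical 8x8 frame of sensor values (step 5): each number is the
# IR reflection intensity one in-pixel sensor reported (0 = no light
# reflected back, 9 = strong reflection). The two bright regions stand
# in for a fingertip and a larger object resting on the display.
FRAME = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 8, 9, 0, 0, 0, 0, 0],
    [0, 9, 8, 0, 0, 7, 7, 7],
    [0, 0, 0, 0, 0, 7, 8, 7],
    [0, 0, 0, 0, 0, 7, 7, 7],
    [0, 0, 0, 0, 0, 7, 7, 7],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
]

THRESHOLD = 5        # assumed cutoff: intensities above this count as a contact
FINGER_MAX_AREA = 6  # assumed size cutoff separating fingers from larger blobs

def find_contacts(frame):
    """Step 6: group bright neighboring pixels into contacts
    (connected-component labeling via breadth-first flood fill)."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    contacts = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] > THRESHOLD and not seen[r][c]:
                # Flood-fill outward from this bright pixel.
                pixels, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] > THRESHOLD
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                contacts.append(pixels)
    return contacts

def classify(pixels):
    """Step 7: a toy classification by area alone; the real system
    also recognizes tags and reports richer per-contact data."""
    return "finger" if len(pixels) <= FINGER_MAX_AREA else "blob"

for pixels in find_contacts(FRAME):
    ys = [y for y, _ in pixels]
    xs = [x for _, x in pixels]
    center = (sum(ys) / len(pixels), sum(xs) / len(pixels))
    print(f"{classify(pixels)} at {center}, area={len(pixels)} px")
```

Running it prints one line per contact with its type, center, and area in pixels, loosely mirroring the per-contact report that step 7 describes being sent to the PC.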

Right now, PixelSense is available only in the Samsung SUR40 for Microsoft Surface, and we believe it’s going to change the way you interact with touch-enabled content.