Surface Development Part 1: What is the Microsoft Surface?

This post is the beginning of a four-part series on developing for the Microsoft Surface.  But before you can write cool apps for it, you should understand what it is. 

The Surface is a coffee-table-sized touch computer that responds to natural hand gestures and real-world objects.  It uses a vision system with five cameras to sense input.  The 30-inch diagonal display lets several users gathered around the table see the screen at once, enabling highly collaborative experiences.  Users interact with content directly by touch, "grabbing" digital information with their hands, and Surface recognizes many points of contact simultaneously, not just one finger as on a typical touch screen.  Finally, because Surface literally sees what touches it, it can also recognize physical objects, which opens the door to many compelling experiences.

Windows 7 will make multi-touch computing much easier on ordinary computers, so what makes Surface different?  Two major things:

  1. Surface supports *massive* multi-touch capabilities: it can track more than 52 simultaneous contacts.  How many contacts a touch-enabled computer running Windows 7 can handle depends on its hardware, but it comes nowhere near that scale. 
  2. Surface can react to tagged objects placed on it (a minimal sketch of how both contacts and tags show up in the Surface SDK follows this list). 
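
To make those two points concrete, here is a minimal sketch of how a Surface application sees contacts and tags.  It assumes a WPF project built against the Surface SDK 1.0, a window derived from SurfaceWindow, and the Contacts/Contact types from the Microsoft.Surface.Presentation namespace; the member names here are recalled from that SDK rather than quoted from documentation, so treat it as illustrative rather than definitive.

```csharp
using System.Windows;
using Microsoft.Surface.Presentation;           // Contacts, Contact, ContactEventArgs
using Microsoft.Surface.Presentation.Controls;  // SurfaceWindow

public partial class Window1 : SurfaceWindow
{
    public Window1()
    {
        InitializeComponent();

        // Every finger, blob, or tagged object touching the table raises its
        // own ContactDown event -- there is no single "primary" contact as
        // there is with a mouse.
        Contacts.AddContactDownHandler(this, OnContactDown);
    }

    private void OnContactDown(object sender, ContactEventArgs e)
    {
        Contact contact = e.Contact;

        // Where this particular contact landed, relative to the window.
        Point position = contact.GetPosition(this);

        if (contact.IsTagRecognized)
        {
            // A tagged physical object was placed on the table
            // (covered in more depth later in this series).
        }
        else if (contact.IsFingerRecognized)
        {
            // One of potentially dozens of simultaneous finger contacts,
            // each tracked independently by the vision system.
        }
    }
}
```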

In the remaining posts in this series, I will dive into the Surface SDK and discuss three things that you should understand to get started with coding Surface applications:

  1. Surface controls (and how close they are to WPF controls)
  2. The ScatterView class (previewed briefly after this list)
  3. The classes that enable Surface to react to tagged objects
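
As a small preview of item 2, a ScatterView can be built up from code like any other WPF ItemsControl, and everything placed inside it can be dragged, rotated, and resized by touch with no extra work.  This is a rough sketch assuming the Microsoft.Surface.Presentation.Controls namespace from the Surface SDK; the later posts will walk through it properly.

```csharp
using System.Windows.Controls;
using Microsoft.Surface.Presentation.Controls;  // ScatterView, ScatterViewItem

public static class ScatterPreview
{
    // Builds a ScatterView with one item that users can drag, rotate,
    // and resize with their fingers -- the control handles all of that.
    public static ScatterView Build()
    {
        var scatterView = new ScatterView();

        scatterView.Items.Add(new ScatterViewItem
        {
            Content = new TextBlock { Text = "Drag, spin, or resize me" }
        });

        return scatterView;
    }
}
```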