How creating Surface apps is different, part 1

In winter/spring of this year I was asked to do some application-development training for our internal teams in China and Japan as well as our external partners. In addition to my own experiences developing Surface apps, I talked with many team members about what they thought future developers should know. One result of this process was a list of ways developing for Surface is different from developing for the desktop. Over the next two posts I’ll share this list. I admit some of these things seem pretty obvious after you hear them. That’s often the mark of a very usable piece of information (at other times it can be the mark of restating the obvious). Here goes:

No screen orientation

The assumption that a computer display has a single orientation starts high in the system and goes down deep. Even the projector has what it thinks is “up.” The OS, UI frameworks, and development tools all assume you want an application where everything is oriented the same way.

We rely heavily on WPF’s ability to rotate user interface elements in any direction you want. This ability to set a “transform” at any level of your UI is one of the primary reasons we decided to use WPF as the main platform for Surface application development. In developing for Surface we often put the bulk of the UI in a “UserControl” so it can be replicated for multiple users and oriented to face each of them. The Photos demo is a good example: each photo or video is a UserControl, and a transform set on each one scales, positions, and rotates the photo however the user wants.
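To make that concrete, here is a minimal WPF sketch of the kind of transform the Photos demo sets on each item. This is not the actual demo code; the PhotoLayout class, its method, and its parameters are assumptions of mine, standing in for however the demo tracks the user’s manipulation. Only the transform classes themselves are standard WPF APIs.

    // Minimal WPF sketch: scale, rotate, and position a photo UserControl
    // so it can face a user on any side of the table.
    // "PhotoLayout" and "ApplyUserTransform" are hypothetical names.
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Media;

    public static class PhotoLayout
    {
        public static void ApplyUserTransform(UserControl photo,
                                              double scale,
                                              double angleDegrees,
                                              Point position)
        {
            // Compose the three manipulations the user can perform.
            var transform = new TransformGroup();
            transform.Children.Add(new ScaleTransform(scale, scale));
            transform.Children.Add(new RotateTransform(angleDegrees));
            transform.Children.Add(new TranslateTransform(position.X, position.Y));

            // Pivot scaling and rotation around the control's center so it
            // turns naturally under the user's fingers.
            photo.RenderTransformOrigin = new Point(0.5, 0.5);
            photo.RenderTransform = transform;
        }
    }

Because the transform is set on the UserControl itself, everything inside it rotates and scales along with it, which is what lets each copy of the UI face its own user.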

Designing a user interface without an orientation is also very difficult. It usually takes a few iterations before none of the UI elements implies an orientation. A close look at the demo applications will reveal some places where the design assumes the user is on one side of the Surface.

Multiple “mouse pointers”

There is a lot of similarity between a single “contact” on the Surface and the mouse pointer on a regular PC screen. Dragging your finger across the Surface is very similar to dragging a mouse. Unfortunately, conventional computing systems are built to expect just one mouse pointer. Even if you connect multiple mice to a single PC, you still get just one pointer on the screen.

Fortunately, WPF is flexible enough to let us feed Surface "contact" events into its event stream. So in addition to mouse events, your UI will see events generated from Surface interaction. WPF does not do much more for you at this point, though. What your UI does with a bunch of contacts moving over it is up to you. A paint application can simply draw lines on the screen that follow the contacts it sees. An application like Photos has to do some math across all the contacts on a photo so it responds intuitively to the user. This can be a lot more complicated than life in the single-mouse world. A goal of the SDK is to simplify this for Surface application developers by providing controls that give you the behavior you want without your having to handle all the events directly. Robert Levy will talk more about this in his posts.
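Here is a rough sketch of the bookkeeping the paint example above needs once there can be many contacts at once. The class, the method names, and the idea of receiving a contact id plus a position are assumptions of mine standing in for whatever contact events the SDK actually raises; only the WPF drawing calls are standard APIs.

    // Sketch of "many pointers" bookkeeping for a finger-paint scenario:
    // track each contact by id and draw a segment as it moves.
    // Wire these methods up to the SDK's contact down/changed/up events.
    using System.Collections.Generic;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Media;
    using System.Windows.Shapes;

    public class FingerPaintSurface
    {
        private readonly Canvas _canvas;
        // Last known position of each active contact, keyed by contact id.
        private readonly Dictionary<int, Point> _lastPoint = new Dictionary<int, Point>();

        public FingerPaintSurface(Canvas canvas) { _canvas = canvas; }

        public void OnContactDown(int contactId, Point position)
        {
            _lastPoint[contactId] = position;   // start a new stroke
        }

        public void OnContactMoved(int contactId, Point position)
        {
            Point previous;
            if (!_lastPoint.TryGetValue(contactId, out previous))
                return;                         // unknown contact; ignore

            // Draw a short segment from the previous position to the new one.
            _canvas.Children.Add(new Line
            {
                X1 = previous.X, Y1 = previous.Y,
                X2 = position.X, Y2 = position.Y,
                Stroke = Brushes.Black,
                StrokeThickness = 4
            });

            _lastPoint[contactId] = position;
        }

        public void OnContactUp(int contactId)
        {
            _lastPoint.Remove(contactId);       // stroke finished
        }
    }

The dictionary keyed by contact id is the essential difference from the single-mouse world, where one “last point” variable would be enough.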

More in Part 2.