Creating compelling user interfaces for embedded devices

Let’s get started with a disclaimer: I’m a developer. I write code; that’s what I do and that’s what I’m good at – which means my user interface design skills are somewhat lacking. I do a reasonable job of building forms-based user interfaces for tools (CEFileWiz being an example), but that’s about as far as UI design/development goes for me. I’m a whizz at building operating system images, COM objects, device drivers, native code web services, and so on – most of this you would consider to be plumbing, something that the user typically doesn’t (and shouldn’t) see.

When MFC (Microsoft Foundation Classes) shipped with support for CDialog and CFormView I was super happy: generating (some form of) application user interface became simple. I could drag controls from a toolbox onto a blank form and then write the “code behind” the controls, as in the sketch below. A very similar developer experience exists for C#/VB WinForms development in the managed world today.
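Here’s a minimal sketch of what that “code behind” pattern looks like in MFC – the dialog layout lives in a resource template placed with the dialog editor, and the developer writes only the handlers, wired up through the message map. CMyDialog, IDD_MYDIALOG, and IDC_MYBUTTON are illustrative names I’ve invented for this example, not from any shipping code:

```cpp
// Minimal MFC "code behind" sketch: the layout lives in the .rc resource
// file; the developer only writes handlers for the controls.
#include <afxwin.h>

#define IDD_MYDIALOG  101   // hypothetical dialog template resource id
#define IDC_MYBUTTON  1001  // hypothetical button id placed in the designer

class CMyDialog : public CDialog
{
public:
    CMyDialog() : CDialog(IDD_MYDIALOG) {}

protected:
    // The handler "behind" the button dragged onto the form.
    afx_msg void OnMyButtonClicked()
    {
        AfxMessageBox(_T("Button clicked"));
    }
    DECLARE_MESSAGE_MAP()
};

// The message map routes the button's click notification to the handler.
BEGIN_MESSAGE_MAP(CMyDialog, CDialog)
    ON_BN_CLICKED(IDC_MYBUTTON, OnMyButtonClicked)
END_MESSAGE_MAP()
```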

If I needed a “fancy” user interface, that typically required some form of control subclassing and handling the paint message for the control by hand. This means that all of the user interface is really code driven, or to put it another way, the UI is “developer driven”: changing the way the user interface looked required changes to code. From a Win32 C/C++ developer’s perspective, all user interface elements are completely code driven. If you need some controls in the client area of your application’s window, you need to create them by hand (CreateWindow), place them by pixel coordinate, and subclass as appropriate to give the look and feel that makes sense for your device. In some cases controls may need to be animated – a drop-down menu that scrolls open, for example. Coding an animated drop-down list requires creating a timer or thread and animating the control on each “tick” – that can be a lot of work, and even more work if the UI needs to change at some point in the future.
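To make that concrete, here’s a minimal, self-contained sketch of the “animate it yourself” pattern just described: a child list box created with CreateWindow, positioned in raw pixels, and scrolled open on WM_TIMER ticks. This is plain desktop Win32 C/C++ for illustration – identifiers like IDT_DROPANIM and the step/height values are my own, not from any shipping code – but the same pattern applies on Windows Embedded CE:

```cpp
#include <windows.h>

#define IDT_DROPANIM  1     // hypothetical timer id
#define DROP_WIDTH    120
#define DROP_HEIGHT   100   // final height of the open list
#define DROP_STEP     10    // pixels revealed per tick

static HWND g_hList;        // the child control being animated
static int  g_curHeight;

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_CREATE:
        // Create the list collapsed; position and size are raw pixels.
        g_hList = CreateWindow(TEXT("LISTBOX"), NULL,
                               WS_CHILD | WS_VISIBLE | WS_BORDER,
                               10, 10, DROP_WIDTH, 0,
                               hwnd, NULL, GetModuleHandle(NULL), NULL);
        g_curHeight = 0;
        SetTimer(hwnd, IDT_DROPANIM, 30, NULL);  // ~33 ticks per second
        return 0;

    case WM_TIMER:
        if (wParam == IDT_DROPANIM)
        {
            // Reveal a little more of the list on each tick.
            g_curHeight += DROP_STEP;
            if (g_curHeight >= DROP_HEIGHT)
            {
                g_curHeight = DROP_HEIGHT;
                KillTimer(hwnd, IDT_DROPANIM);   // animation finished
            }
            SetWindowPos(g_hList, NULL, 0, 0, DROP_WIDTH, g_curHeight,
                         SWP_NOMOVE | SWP_NOZORDER);
        }
        return 0;

    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int)
{
    WNDCLASS wc = { 0 };
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = TEXT("DropAnimDemo");
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    RegisterClass(&wc);

    CreateWindow(TEXT("DropAnimDemo"), TEXT("Drop-down animation"),
                 WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                 CW_USEDEFAULT, CW_USEDEFAULT, 300, 200,
                 NULL, NULL, hInst, NULL);

    MSG msg = { 0 };
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return (int)msg.wParam;
}
```

Notice that every pixel, tick rate, and easing decision lives in this code – if a designer wants the list to open faster, bounce, or fade, they have to come back to the developer.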

If we look at the desktop UI development experience (Windows Presentation Foundation, for example), an application is really built by two groups of people, “designers” and “developers”. Each gets to focus on the work they are good at, and the two roles are cleanly separated: the designer only needs to care about the user interface look and feel and doesn’t care about the code behind the user experience. Animation, timelines, shading, fonts, and images are all taken care of by the designer using tools like Microsoft Expression Blend. The developer then focuses only on writing the code behind the user experience and doesn’t need to care about what the UI looks like or even how the UI is implemented – the code in this case could be loading data sets to be displayed in the UI, talking to web services, or calling operating system APIs. Note that the developer doesn’t get directly involved in how the data is actually displayed to the user, but is involved in how the data is handled in the application. The developer is really focused on the plumbing, not on the UI. The link between designer and developer is XAML (eXtensible Application Markup Language), with Expression Blend (for the designer) and Visual Studio (for the developer) both working against the same markup.
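As a hypothetical illustration of that split (the class name, control names, and event handler below are made up for this example), the designer and developer share a XAML file like this one – the designer restyles and animates the controls freely in Expression Blend, while the developer’s code-behind only sees them through x:Name and event hookups:

```xml
<!-- The designer owns the look (layout, brushes, animation); the developer
     wires up behavior in code-behind via x:Class and the event names. -->
<Window x:Class="DemoApp.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Designer/Developer demo">
    <StackPanel>
        <!-- The designer can restyle or animate this without touching code... -->
        <Button x:Name="LoadButton" Content="Load data"
                Click="LoadButton_Click"/>
        <!-- ...while the developer's code-behind fills this list with data. -->
        <ListBox x:Name="ResultsList"/>
    </StackPanel>
</Window>
```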

You could imagine that having a similar designer/developer story for building compelling user experiences on Windows Embedded CE devices would be awesome. This week, Kevin Dallas, General Manager for the Windows Embedded Business, is showing a keynote demo at Computex; the demo shows the types of user experiences that could be built by decoupling the designer and developer experiences.

Since you are probably not at Computex, we’ve recorded a video of the demo, along with a brief interview with one of the Program Managers and one of the Developers on the Windows Embedded CE Shell team who focus on applications and user experience.


User Interface Technologies for Windows Embedded CE

- Mike