We’ve come a long way in engineering Windows 7 since we first provided an engineering preview of the release, and of the work we are doing to support the touch interface paradigm, back at the D: All Things Digital conference. We chose to kick off the discussion about engineering Windows 7 with touch scenarios because we know this is a long-lead effort that requires work across the full ecosystem to fully realize the benefit. For Windows 7, touch support builds on the advances in input technology that began with the Tablet PC work on Windows XP. Touch in Windows 7 requires improvements in hardware, driver software, the core Windows user experience, and of course application support. By having this support in an open platform, consumers and developers will benefit from a wide variety of choices in hardware, software, and different PC form factors. Quite a few folks have been a little skeptical of touch, often commenting about having fingerprints on their monitor or something along those lines. We think touch will become broadly available as the hardware evolves, and while it might be the primary input for some form factors (such as a wall-mounted display in a hospital, kiosk, or point of sale), it will also richly augment many scenarios, such as reading on a convertible laptop or a “kitchen PC”. One of my favorite experiences recently was watching folks at a computer retailer try one of the currently available all-in-one touch desktops and then move to another all-in-one and continue to interact with the screen—except the PC was not interacting back. The notion that you can touch a screen seems to be becoming second nature rather quickly. This post is our first dedicated to the subject. It is a joint effort by several people from the touch team, mostly Reed Townsend, Dave Matthews, and Ian LeGrow. -Steven
Windows Touch is designed to enhance how you interact with a PC. For those of us who have been living and breathing touch for the last two years, we’re excited to be able to deliver this capability to people using Windows 7. In this post we’re going to talk about what we’ve done to make Windows touchable. We approached this from a number of different directions: key improvements to the core Windows UI, optimizing key experiences for touch, working with hardware partners to provide robust and reliable touch PCs, and providing a multitouch platform for applications.
Making Windows Touchable
With Windows 7 we have enriched the Windows experience with touch, making touch a first-class way to interact with your PC alongside the mouse and keyboard. We focused on common activities and refined them thoughtfully with touch in mind. You will have the freedom of direct interaction, like being able to reach out and slowly scroll a web page then flick quickly to move through it. With new touch-optimized applications from creative software developers you will be able to immerse yourself as you explore your photos, browse the globe, or go after bad guys in your favorite games.
While providing this touchable experience, we made sure you are getting the full Windows 7 experience and not a sub-set just for touch. We’ve been asked if we are creating a new Touch UI, or “Touch Shell” for Windows – something like Media Center that completely replaces the UI of Windows with a version that is optimized for touch. As you can see from the beta, we are focused on bringing touch throughout the Windows experience and delivering optimized touch interfaces where appropriate. A touch shell for launching only touch-specific applications would not meet customers’ needs – there would be too much switching between “touch” mode and Windows applications. Instead, we focused our efforts on augmenting the overall experience so that Windows works great with touch.
We took a variety of approaches – some broad, and some very targeted to support this goal:
- Touch gestures: Windows 7 has a simple set of touch gestures that work in many existing applications. These include the basics of tap and drag, as well as scroll, right-click, back, forward, zoom, and rotate. More details on how gestures work are described later.
- Improved high DPI support: Windows 7 has improved high DPI support (see High DPI blog post). The broad benefit to touch is that UI elements are rendered closer to their intended size – usually larger – which makes small buttons, links, and other targets easier to access with touch.
- Improved window management: The updated taskbar and windows arranging features go a long way towards making Windows easier to use with touch. There have been several subtle but critical touch optimizations:
- The taskbar buttons and thumbnails are ideally sized for pressing with touch, and specific behaviors are tuned for touch input. For example, the Jump Lists can be accessed with a simple drag up from the taskbar, and when opened with touch, the shortcuts in the Jump Lists are drawn with extra vertical spacing to make them easier to select.
- Aero Peek has been tuned to work with touch – the show desktop button is twice as wide (the only visual sign you are on a Windows Touch PC) and instead of hovering (which you can’t do with touch), a press-and-hold on the button activates Aero Peek.
- Sizing and positioning windows is easy with Aero Snap – just drag a window to a screen edge. Furthermore, this was tuned with special touch thresholds so that you don’t have to drag to the absolute edge of the screen – a better balance for touch usage.
- Refinements to key experiences: The top browsing and media activities were refined to provide an optimized touch experience. IE8 includes support for the core touch gestures (scrolling, back, forward, zoom) as well as an optimized address bar that opens by dragging down, and extra spacing in favorites and history lists when opened with touch for easy selection. In Windows Media Player, the transport controls (play, pause, etc.) have larger clickable areas even though they still look the same size – so that they are easier to touch.
- Touch keyboard: The on-screen keyboard has been optimized for touch with glow key feedback that’s visible when your finger is covering the letter and multitouch support for natural typing behavior and key combinations. It’s designed for quick usage, like entering a URL.
Overall, the Windows Touch features are designed to work together to deliver a great end-to-end touch experience. For example, the goal with IE8 was to deliver a seamless touch browsing experience; this includes the panning, zooming, URL entry, and several interface enhancements. For this reason, all the new touch features require the presence of a multi-touch digitizer – more on that further down.
The Windows Touch gestures are the basic actions you use to interact with Windows or an application using touch. As we noted above, because the gestures are built into the core of Windows, they are designed to work with all applications, even ones that were never designed with touch in mind.
Our mantra with gestures has been “Predictable + Reliable = Habits”. To be predictable the action should relate to the result – if you drag content down, the content should move down. To be reliable, the gesture should do roughly the same action everywhere, and the gesture needs to be responsive and robust to reasonable variations. If these conditions are met then people are far more likely to develop habits and use gestures without consciously thinking about it.
We’ve intentionally focused on this small set of system-wide gestures in Win7. By keeping the set small we reduce misrecognition errors – making them more reliable. We reduce latencies since we need less data to identify gestures. It’s also easier for all of us to remember a small set! The core gestures are:
- Tap and Double-tap – Touch and release to click. This is the most basic touch action. You can also double-tap to open files and folders. Tolerances are tuned to be larger than with a mouse. This works everywhere.
- Drag – Touch and slide your finger on screen. Like dragging with a mouse, this moves icons around the desktop, moves windows, selects text (by dragging left or right), etc. This works everywhere.
- Scroll – Drag up or down on the content (not the scrollbar!) of a scrollable window to scroll. This may sound basic, but it is the most used (and most useful – it’s a lot easier than targeting the scrollbar!) gesture in the beta according to our telemetry. You’ll notice details that make this a more natural interaction: the inertia if you toss the page and the little bounce when the end of the page is reached. Scrolling is one of the most common activities on the web and in email, and the ability to drag and toss the page is a perfect match for the strengths of touch (simple quick drags on screen). Scrolling is available with one or more fingers. This works in most applications that use standard scrollbars.
- Zoom – Pinch two fingers together or apart to zoom in or out on a document. This comes in handy when looking at photos or reading documents on a small laptop. This works in applications that support mouse wheel zooming.
- Two-Finger Tap – tapping with two fingers simultaneously zooms in about the center of the gesture or restores to the default zoom – great for zooming in on hyperlinks. Applications need to add code to support this.
- Rotate – Touch two spots on a digital photo and twist to rotate it just like a real photo. Applications need to add code to support this.
- Flicks – Flick left or right to navigate back and forward in a browser and other apps. This works in most applications that support back and forward.
- Press-and-hold – Hold your finger on screen for a moment and release after the animation to get a right-click. This works everywhere.
- Or, press-and-tap with a second finger – to get a right-click, just like you would click the right button on a mouse or trackpad. This works everywhere.
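To make the distinctions above concrete, here is a toy sketch (in Python, with made-up thresholds, not the values Windows actually uses) of how travel distance and speed can separate a tap from a drag from a flick:

```python
from dataclasses import dataclass
import math

@dataclass
class Stroke:
    points: list          # (x, y) finger positions, in millimeters
    duration_ms: float    # total contact time

# Illustrative thresholds only -- not Windows' actual tuned values.
TAP_MAX_TRAVEL_MM = 2.0
FLICK_MIN_SPEED_MM_PER_MS = 0.5

def classify(stroke: Stroke) -> str:
    """Classify a single-finger stroke as 'tap', 'flick', or 'drag'."""
    (x0, y0), (x1, y1) = stroke.points[0], stroke.points[-1]
    travel = math.hypot(x1 - x0, y1 - y0)
    # Small total travel: treat as a tap, with a looser tolerance than a mouse.
    if travel <= TAP_MAX_TRAVEL_MM:
        return "tap"
    # Fast movement relative to duration: a flick; otherwise a drag.
    speed = travel / max(stroke.duration_ms, 1e-6)
    return "flick" if speed >= FLICK_MIN_SPEED_MM_PER_MS else "drag"
```

A real recognizer looks at the full point stream and timing, but the underlying idea is the same: a small gesture set with simple, well-separated criteria keeps recognition unambiguous.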
For touch gestures, seeing them in action is important so here is a brief video showing the gestures in action:
In order to make the gestures reliable, we tuned the gesture detection engine with sample gesture input provided by real people using touch in pre-release builds; these tuned gestures are what you will see in the RC build. We have a rigorous process for tuning. Similar to our handwriting recognition data collection, we have tools to record the raw touch data from volunteers while they perform a set of scripted tasks. We collected thousands of samples from hundreds of people. These data were then mined looking for problems and optimization opportunities. The beauty of the system is that we can replay the test data after making any changes to the gesture engine, verifying improvements and guarding against regression in other areas.
This has led to several important optimizations. For example, we found that zooms and rotates were sometimes confused. Detecting zoom gestures only in applications that don’t use rotation has resulted in a 15% improvement in zoom detection.
Further analysis showed that many short gestures were going unrecognized. The gesture recognition heuristics needed to see 100ms or 5mm worth of data before making a decision about what gesture the user was performing. The concern that originally led to these limits was that making a decision about which gesture was being performed too early would lead to misrecognition. In fact, when we looked at the collected user data, we found we could remove those limits entirely – the gesture recognition heuristics performed very well in ambiguous situations. After applying the change and replaying the collected gesture sample data, we found zoom and rotate detection improved by about 6% each, and short scrolling improved by almost 20%!
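The replay workflow described above can be sketched as a small harness (hypothetical code, shown only to illustrate the idea of replaying labeled recordings against two builds of a recognizer and measuring the change per gesture):

```python
def recognition_rate(recognizer, samples):
    """Replay labeled gesture recordings and measure accuracy.

    samples: list of (stroke_data, expected_gesture) pairs collected
    from volunteers; recognizer: callable stroke_data -> gesture name.
    """
    correct = sum(1 for data, expected in samples if recognizer(data) == expected)
    return correct / len(samples)

def compare(old, new, samples, labels=("zoom", "rotate", "scroll")):
    """Report the per-gesture accuracy change between two recognizer builds."""
    report = {}
    for label in labels:
        subset = [s for s in samples if s[1] == label]
        if subset:
            report[label] = recognition_rate(new, subset) - recognition_rate(old, subset)
    return report
```

The value of this setup is exactly what the post describes: any change to the engine can be validated against the whole corpus at once, catching regressions in one gesture caused by an improvement to another.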
Gestures are built into the system in such a way that many applications that have no awareness of touch respond appropriately. We have done this by creating default handlers that simulate the mouse or mouse wheel. Generally this gives a very good experience, but there are applications where some gestures don’t work smoothly or at all. In these cases the application needs to respond to the gesture message directly.
In Windows, several experiences have been gesture enabled. We’ve spent a considerable amount of effort on IE8 – ensuring scrolling and zooming are smooth and that back and forward are at your fingertips. Media Center, which is a completely custom interface ideally suited to touch, added smooth touch scrolling in galleries and the home screen. The XPS Viewer has gesture support that could become a model for many document viewing apps. Scrolling and zoom work as you would expect. When zooming out beyond a single page, pages start to tile so you can view many at a time. When zoomed out in that fashion, double tapping on any page jumps back to the default view of that page. A two-finger tap restores the view to 100% magnification. These predictable behaviors become habit forming quickly.
Working with the Hardware Ecosystem
A major benefit of the Windows ecosystem is diversity – PCs come in all shapes and sizes. To help ensure that there is a great Windows Touch experience across the many different types of PCs we have defined a set of measurements and tests for Windows Touch that are part of the Windows Logo. We’ve been working with touch hardware partners since the beginning of Windows 7 to define the requirements and ensure they are ready for launch.
Our approach has been to provide an abstraction of the underlying hardware technology. We’ve specified requirements for the quantitative aspects of the device, such as accuracy, sample rate, and resolution, based on what is needed to successfully enable touch features. For example, we have determined the necessary accuracy values for a device so people can successfully target common UI elements like close boxes, or what sample rate and resolution are required to ensure quality gesture recognition.
The requirements form the basis for the Windows Touch logo program. For consumers, the logo tells you that the PC and all of its components are optimized for Windows. The component-level logo, which is what we grant to touch digitizers, helps the OEMs choose a device that will deliver a great touch experience.
Based on the quantitative requirements, we built an interactive test suite that includes 43 separate tests, all validating the core requirements under different conditions. There are single point accuracy tests at various locations on the screen, including the corners, which are often harder for accuracy but critical to Windows. There are also several dynamic tests where accuracy is measured while drawing lines on the screen – see the screenshot below of Test 7. In this test, two lines are simultaneously drawn using touch along the black line from the start to the end. The touch tracings must remain within 2.5 mm of the black line between the start and end points. The first image below shows a passing test where the entire tracing is green (apologies for the fuzziness – these are foot-long tracings from a large screen that have been scaled down).
Figure 1: A passing line accuracy test from the Windows 7 Touch logo test tool
Not all devices pass the tests. Below is a screenshot of a device that is failing. This one has some noise – notice the deviation from the line in red. These errors need to be resolved before it would receive the logo. Errors like this can result in misrecognized gestures.
Figure 2: A failing line accuracy test from the Windows 7 Touch logo test tool
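The pass/fail criterion for these line tests boils down to a point-to-segment distance check against the 2.5 mm tolerance. A minimal sketch of that check (illustrative, not the actual logo test tool):

```python
import math

TOLERANCE_MM = 2.5  # the logo requirement described above

def distance_to_segment(p, a, b):
    """Perpendicular distance from point p to segment a-b (all in mm)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def trace_passes(trace, start, end, tolerance=TOLERANCE_MM):
    """A tracing passes if every reported point stays within tolerance."""
    return all(distance_to_segment(p, start, end) <= tolerance for p in trace)
```

A noisy digitizer like the one in Figure 2 fails because at least one reported point strays beyond the tolerance band.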
To ensure repeatability of the tests, we’ve built a set of plastic jigs with tracing cut-outs, see photo below. This particular jig is used for 5 of the tests and measures accuracy while tracing an arc.
The testing tool is available to our partners now; we’re working closely with several of them to help tune the performance of their devices to meet the requirements and deliver a great touch experience. We have set up an in-house testing facility that will be testing every device submitted for logo.
With the Release Candidate, OEMs and IHVs will be able to finalize the logo process for systems designed for Windows 7. Today we already have several hardware partners that have provided us with devices and drivers for testing.
Windows Touch for Software Developers
We also want to talk a little about the touch platform for software developers. Windows 7 provides a rich touch platform for applications. We’ve already mentioned gestures; there’s also a lower-level platform that gives developers complete control over the touch experience. We think about it in a Good-Better-Best software stack.
The “good” bucket is what touch-unaware applications get for free from Windows 7. Windows provides default behaviors for many gestures, and will trigger those behaviors in your application in response to user input. For example, if someone tries touch scrolling over a window that is touch-unaware, we can detect the presence of various types of scrollbars and will scroll them. Similarly, when the user zooms, we inject messages that provide an approximation of the zoom gesture in many apps. As a developer you can ensure that the default gestures work just by using standard scrollbars and responding to ctrl-mouse wheel messages.
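One way to picture the default handling: a vertical pan can be accumulated and re-emitted as whole mouse-wheel notches that a touch-unaware app already understands. This sketch uses the real Win32 `WHEEL_DELTA` constant (120 per notch) but an assumed pixels-per-notch value; it is an illustration of the idea, not the actual default handler:

```python
WHEEL_DELTA = 120          # wheel units per notch, as defined by Win32
PIXELS_PER_NOTCH = 30      # assumed drag distance per notch, for illustration

class PanToWheel:
    """Accumulate vertical pan movement and emit whole wheel notches."""
    def __init__(self):
        self.residual = 0.0  # movement not yet big enough for a full notch

    def update(self, dy_pixels):
        """Feed a pan delta; return the signed wheel delta to inject."""
        self.residual += dy_pixels
        notches = int(self.residual / PIXELS_PER_NOTCH)
        self.residual -= notches * PIXELS_PER_NOTCH
        return notches * WHEEL_DELTA
```

The residual carries sub-notch movement across frames, so a slow steady drag still produces scrolling instead of being rounded away.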
The “better” bucket is focused on adding direct gesture support and other small behavior and UI changes to make apps more touch-friendly. For instance, there is a new Win32 window message, WM_GESTURE (preliminary MSDN docs), that informs the application a gesture was performed over its window. Each message contains information about the gesture, such as how far the user is scrolling or zooming and where the center of the gesture is.
Applications that respond to gestures directly have full control over how they behave. For example, the default touch scrolling is designed to work in text-centric windows that scroll primarily vertically (like web pages or documents); dragging horizontally does selection rather than scrolling. In most applications this works well, but if an app has primarily horizontal scrolling then the defaults would have to be overridden. Also, for some applications the default scroll can appear chunky. This is fine with a mouse wheel, but it feels unnatural with touch. Apps may also want to tune scrolling to end on boundaries, such as cells in a spreadsheet, or photos in a list. IE8 has a custom behavior where it opens a link in a new tab if you drag over it rather than click it.
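Boundary snapping of the kind described (stopping on a cell or photo edge) can be sketched with a constant-deceleration model: project where inertia would stop the scroll, then round to the nearest item boundary. The deceleration constant here is an arbitrary illustration, not a platform value:

```python
def snap_scroll(offset, velocity, item_height, decel=0.002):
    """Project where an inertial scroll would stop, then snap it to the
    nearest item boundary (e.g. a photo row or spreadsheet cell).

    offset: current scroll position in pixels
    velocity: release velocity in pixels/ms (signed)
    decel: constant deceleration in pixels/ms^2
    Stop distance under constant deceleration is v^2 / (2 * decel).
    """
    direction = 1 if velocity >= 0 else -1
    stop = offset + direction * (velocity ** 2) / (2 * decel)
    return round(stop / item_height) * item_height
```

An app applying this would animate toward the returned position instead of letting the default inertia stop mid-item.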
In addition to gestures, there are subtle optimizations applications can make for touch if they check to see if touch is in use. Many of the subtle touch behavior optimizations in Windows were enabled in this manner. Larger Jump List item spacing for touch, larger hot spots for triggering window arranging, and the press and hold behavior on the desktop Aero Peek button with touch are all features written with the mouse in mind, but when activated via touch use slightly different parameters.
Applications or features that fall into the “best” bucket are designed from the ground up to be great touch experiences. Apps in this bucket would build on top of WM_TOUCH – the window message that provides raw touch data to the application. Developers can use this to go beyond the core system gestures and build custom gesture support for their applications. They can also provide visualizations of the touch input (e.g. a raster editing application), build custom controls, and other things we haven’t thought of yet!
We also provide a COM version of the Manipulations and Inertia APIs from Surface. The Manipulations API simplifies interactions where an arbitrary number of fingers are on an object into simple 2D affine transforms and also allows for multiple interactions to be occurring simultaneously. For instance, if you were writing a photo editing application, you could grab two photos at the same time using however many fingers you wanted and rotate, resize, and translate the photos within the app. Inertia provides a very basic physics model for applications and, in the example above, would allow you to “toss” the photos and have them decelerate and come to a stop naturally.
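The core math behind a two-finger manipulation is compact: the centroid movement gives translation, the finger-distance ratio gives scale, and the angle change gives rotation. A sketch of that derivation (this is not the actual COM Manipulations API, just the underlying geometry):

```python
import math

def manipulation_delta(old_pts, new_pts):
    """Derive (translation, scale, rotation) from two touch points
    moving between frames -- the kind of 2D transform a manipulation
    processor reports to the application."""
    (o1, o2), (n1, n2) = old_pts, new_pts
    # Translation: movement of the midpoint between the two fingers.
    old_c = ((o1[0] + o2[0]) / 2, (o1[1] + o2[1]) / 2)
    new_c = ((n1[0] + n2[0]) / 2, (n1[1] + n2[1]) / 2)
    translation = (new_c[0] - old_c[0], new_c[1] - old_c[1])
    # Scale: ratio of the distances between the fingers.
    old_vec = (o2[0] - o1[0], o2[1] - o1[1])
    new_vec = (n2[0] - n1[0], n2[1] - n1[1])
    scale = math.hypot(*new_vec) / math.hypot(*old_vec)
    # Rotation: change in the angle of the finger-to-finger vector.
    rotation = math.atan2(new_vec[1], new_vec[0]) - math.atan2(old_vec[1], old_vec[0])
    return translation, scale, rotation
```

Applying the returned transform to an on-screen object each frame is what lets a photo follow the fingers as it is dragged, stretched, and twisted all at once.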
We’ve previously demonstrated Microsoft Surface Globe, an interactive globe done in partnership with the Surface effort. Spinning the globe works as you would expect from a real-world globe, but with a touchable globe you can grab and stretch the view to zoom in, rotate, and move the view around. Interacting with the globe and exploring the world is the majority of the UI, and it is exceedingly easy to use with touch. Other features like search and adding markers to the map have also been designed with touch in mind.
Here’s another video to get an idea of what we’re talking about:
We’re eagerly looking forward to seeing new touch-optimized user interfaces and interactions. If you’re thinking about writing touch applications or adding touch support to your existing app, you should start with the MSDN documentation and samples.
We’ve noted several touch updates in the RC. If you have the Windows 7 Beta you can experiment with touch using a PC that supports multiple touch points. Please note that the multitouch PCs available today were developed while the Windows 7 requirements were still being defined, so while we believe they can support Windows 7’s requirements, only the maker of the PC can provide the logoed drivers for Windows 7 and support the PC on Windows 7. Keeping that caveat in mind, today there are a few multitouch PCs on the market:
- HP TouchSmart All-in-One PCs (IQ500 series & IQ800 series)
- HP TouchSmart tx2 Tablet PC
- Dell Latitude XT or XT2 Tablet PC
To enable multitouch capabilities on these PCs running the Windows 7 Beta you will need to make sure you have the latest multitouch beta drivers. Remember these are pre-release drivers and are not supported by Microsoft, Dell or HP. And again, they still need to pass through the Windows Logo process we described above before they are final.
- For HP TouchSmart All-in-One PCs: The pre-release driver is available from Windows Update. After you have installed the Windows 7 Beta, open Windows Update from the Start menu. You might have to click the “Check for Updates” link on the left so it will find the driver; it is Optional right now, so you’ll have to select it before it will install. Alternatively you can download it from the NextWindow website.
- For the Dell Latitude XT, XT2 and HP TouchSmart tx2 Tablet PCs: the drivers are available on N-Trig’s website. N-Trig is the company that makes the digitizer in these PCs. You should read the release notes; they cover some limitations you should be aware of, like no pen support – that will be fixed by RC – and how to switch between Windows Vista and Windows 7.
We often get asked about single-touch PCs. Will they work with Windows 7? There are many types of hardware available for touch and many screens and PCs can provide single touch (usually based on resistive touch technology). A single-touch PC will have the same functionality on Windows 7 as it does on Vista, but this functionality will not be extended to the Windows 7 capabilities. As we noted earlier, Windows Touch in Windows 7 is comprised of a collection of touch enhancements, several of which require multitouch, that work together to deliver a great end-to-end touch experience.
As form factors change and the demands of our user interfaces change, input methods change and grow as well. We’re excited about the unique benefits touch offers the user, and the new places and new ways it enables PCs to be used. We expect PCs of all form factors and price points to provide touch support and so it makes sense that these PCs will be able to take advantage of the full range of Windows 7 capabilities.
Windows 7 is designed to provide efficient ways to use multitouch for the most common and important scenarios, while being a natural and intuitive complement to the mouse and keyboard people use today.
Keep in Touch!
– Windows Touch Team