Probably the most memorable comment from a snooker commentator was Ted Lowe during a Pot Black match in the late 1960s. Acknowledging the fact that in those days many viewers didn’t have a color television, he helpfully noted “for those of you watching in black and white, the pink ball is next to the green”. The question is: can I stay on the ball when I’m writing in black and white?
While Ted’s comment might initially seem a bit daft, it was actually helpful. With the green ball on its spot at the baulk end of the table, any snooker fan would know which ball he meant. Unfortunately, the same can’t be said of my current skirmishes with the new Microsoft Accessibility Standard (MAS) that’s just come into operation. Yes, it’s a good thing that we now have a fully documented and very necessary standard that details how to achieve optimum accessibility for all users of our content. But putting it into practice here at p&p is an interesting exercise.
The overall rule is simple enough: all documentation must be usable by every type of reader, using every type of input and output device, without mandating a specific type of hardware. In other words, we must not force a user to use a mouse or keyboard (they may be using a touch screen device or some other type of input device), or any type of screen or output device (they may be using a screen reader designed for those with low or no vision).
For many years before becoming a ‘Softie, I specialized in conference presentations and articles on maximizing web site accessibility: web pages that work fine with no script support (such as in all types of screen readers), and that work without depending on color – a bit like Ted Lowe’s snooker conundrum. In fact, one of my slide decks had a title page consisting of six large buttons in various shades of gray, with an instruction below that said “Press the green button to start the presentation”.
So I should be well versed in the technicalities of accessibility, but it’s not so easy when you come to put it into practice with our relatively complex documents. In many cases, the stuff I create is architectural and developer guidance that talks concept and implementation, so it’s just a case of text and schematics. OK, so the schematics aren’t usable by those with low or no vision, but we do explain their contents in the surrounding text as best we can. And we never depend on colors because our guidance is typically designed to be printed in monochrome as a book, as well as published as HTML on MSDN.
However, my current project is a series of Hands-On Labs related to the guide Building Hybrid Applications in the Cloud on Windows Azure. These are, of course, pages and pages of procedures with steps consisting of “click the Whatever button” and “press the Whoknowswhich key”. Except now they aren’t because we can’t do this any more. It contravenes the MAS rules. I guess it always did, but we never really noticed because the labs are aimed at knowledgeable developers who we assume are familiar with Visual Studio and know the equivalent key presses, menus, and shortcuts.
I wondered if we would be exempt because we’re just using Visual Studio, and the people who write the docs for it would have covered all the accessibility bases already. But that’s not a reasonable approach – we need to make sure that we abide by the rules and offer the most accessible content to the widest range of users.
One solution suggested by MAS is to include all options for each user action, such as “Press Alt-Whatever followed by the Another key (or right-click and then click Whatever, or select Whatever from the Thatone menu)”. But imagine how that will pan out when there are twenty steps in the procedure where each one requires three user actions, and Visual Studio has four different ways of carrying out each action.
Another alternative is to provide a table of equivalent actions. We’d need five columns, one each for mouse, keyboard, shortcut menu, shortcut key combination, and screen tap; and there’d be a row for every action in the whole lab. The table could easily run to several pages, and would be unlikely to be anywhere near useful.
We finally decided on the third option: be non-specific. Use words such as “select” and “choose” that don’t imply any specific input device or method. For example, “Select Yes please and then choose OK”. The selecting could be done with a mouse in a drop-down list, with the arrow keys, by tapping with a finger, or through a voice-actuated screen reader; the OK could be a mouse click, the Enter key, a screen tap, voice recognition, or the equivalent on any other configured input device.
However, there’s another issue. We can’t use relative visual position as a guide to a task, because some devices may not lay out the screen contents in the same way, and some users will be listening through an audio screen reader. Therefore, instructions such as “In the left pane of the screen, click Whatever” or “Select the Another option below the textbox” won’t comply with the rules, and some judicious re-wording is required to indicate which control we mean when there are several similar ones.
As I worked on the Hands-On Labs last week, I decided to make life easier for users by highlighting the required button, textbox, list, or option with a big red circle on the related screenshot for each step. That will make the newly added vagueness easier to cope with, though it’s hard to see how it will help users with low or no vision. And I’ve yet to fully grasp the validity of using words such as “look” and “see” when guiding users through the steps and actions.
But it will certainly be interesting to discover how long it takes me to get used to not automatically typing “Click” at the start of each step…