A while back I described some of my experiences with LiveRegions in a WinJS app, at http://blogs.msdn.com/b/winuiautomation/archive/2013/08/04/an-accessibility-case-study-reading-list-part-5-live-regions.aspx. I recently got to do some more experiments involving LiveRegions, so I thought it’d be worth sharing my thoughts on those.
My situation was that I had a “label” HTML element, and it was marked as aria-live="assertive". When the element first had text set on it and was made visible, the Narrator screen reader announced the visual text just fine. Yet after the element had been hidden and its text cleared, when it was later made visible and its text set again, Narrator said nothing. So this is what I did…
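As a rough sketch of the setup, the markup and show/hide logic looked something like the following. (The id and function names here are my own placeholders for illustration, not code from the actual app.)

```html
<!-- A status label marked as an assertive live region, initially hidden. -->
<label id="statusLabel" aria-live="assertive" style="display: none;"></label>

<script>
  // Set the label's text and make it visible. Changing the contents of a
  // live region is what leads to a LiveRegionChanged event being raised
  // through UI Automation.
  function showStatus(message) {
    var label = document.getElementById("statusLabel");
    label.textContent = message;
    label.style.display = "block";
  }

  // Hide the label and clear its text, ready for the next announcement.
  function hideStatus() {
    var label = document.getElementById("statusLabel");
    label.style.display = "none";
    label.textContent = "";
  }
</script>
```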
1. Run the AccEvent SDK tool
If Narrator doesn’t say anything, then this might mean that the LiveRegionChanged event didn’t actually get raised by my UI. If the event isn’t being raised, then Narrator can’t know to announce anything to the user. As it happens, AccEvent did report that the event was raised, and the name of the element raising the event was the displayed text. So things looked correct up to this point.
2. Is the text on my element ever changed?
In the past, how a screen reader reacted to a LiveRegionChanged event could be affected by whether the text had actually changed since the previous LiveRegionChanged event was raised on the same element. If the text hadn’t changed, the screen reader might assume the event was an unintended duplicate of the earlier event, and ignore it. So as a test, whenever I made the text visible in the UI, I appended an incrementing digit to the end of the string. (I didn’t want to append spaces to the end of the string, as a screen reader might trim those anyway before deciding how to react to the event.) In my case I did find that the dynamically changing string was spoken, even though the fixed string wasn’t. So this told me that my events were reaching Narrator. My next step was to somehow get the fixed string spoken.
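The incrementing-digit test above can be sketched as a small helper. (This is a diagnostic trick only, not production code, and the names are mine rather than from the original app.) Each call produces a string that differs from the previous one, so a screen reader can't dismiss the event as a duplicate of the last one.

```javascript
// Wrap a fixed message so that each announcement is textually unique.
function makeUniqueAnnouncer() {
  var counter = 0;
  return function (message) {
    counter += 1;
    // Append an incrementing digit rather than trailing spaces, since a
    // screen reader might trim whitespace before comparing strings.
    return message + " " + counter;
  };
}

var announce = makeUniqueAnnouncer();
console.log(announce("Download complete")); // "Download complete 1"
console.log(announce("Download complete")); // "Download complete 2"
```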
3. Run the Inspect SDK tool
I was pretty much out of ideas at this point, so I decided to run the Inspect SDK tool and poke around in my UI with it. I love Inspect, as I can learn a ton about how my UI is represented through UI Automation (UIA) to screen readers and other AT tools. The thing that really caught my attention was that my label element was represented as two elements through UIA. The UI framework had created a text element which contained another text element, and both of these elements had an accessible name of the text being shown visually. So this got me wondering whether the existence of two elements rather than one was somehow impacting Narrator’s decision on whether to announce anything in response to the LiveRegionChanged event.
So, I kept looking through MSDN, and happened to notice aria-atomic="true" at http://msdn.microsoft.com/en-us/library/windows/apps/Hh700329.aspx. From http://msdn.microsoft.com/en-us/library/windows/apps/hh968001.aspx…
“If aria-atomic is explicitly set to true, assistive technologies will present the entire contents of the element.”
Perhaps if I set aria-atomic to true on my label element, Narrator might account for the existence of both text elements rather than only the parent (which has the aria-live property set), and so the end result might change for the user. Sure enough, with both aria-live="assertive" and aria-atomic="true", Narrator spoke the visual text just fine whenever it was made visible.
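Putting it together, the fix amounted to one extra attribute on the label. (The id is my own placeholder again.)

```html
<!-- aria-live="assertive" has changes announced promptly; aria-atomic="true"
     asks the assistive technology to present the element's entire contents,
     which accounts for the nested text element the UI framework generated. -->
<label id="statusLabel" aria-live="assertive" aria-atomic="true"></label>
```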
4. Beware focus
One other thing worth being aware of is that if keyboard focus moves at around the same time as the live region text changes, the screen reader’s announcement relating to the focus change can interrupt or take priority over the live region announcement. Having taken the above steps, the LiveRegion label behaved great. Whenever the text appeared visually on the screen, Narrator would announce it to the user.