Ten questions on programmatic accessibility


I’ve been thinking about a few of the questions I’ve heard over the last three months, on the subject of programmatic accessibility. Some of these questions relate to the fundamentals of accessibility which affect any apps with UI, and some would only affect app devs who build their own custom controls. I find the variety of the questions to be pretty fascinating, and I’d like to share some of the discussion in what’s hopefully an easy-to-digest, bite-size form.

So below is my take on a few questions relating to programmatic accessibility. (I expect some of my responses to these questions might be different in the future as related technologies evolve.) Most of the details are specific to XAML Windows Store apps, but many of the principles apply to all apps with UI.

All the best for 2016.



P.S. A quick note before we start…

A Windows app exposes its UI programmatically through the UI Automation (UIA) API. The UI framework being used, (such as XAML or WinJS,) will do lots of work on your behalf to provide a default representation of your UI through the UIA API. You, as the app developer, can then enhance this default representation, in order to make the experience as helpful and efficient as possible for all your customers.

UI Automation itself will propagate the programmatic representation of your app’s UI, over to apps using the UIA client API. Assistive Technology (AT) tools such as screen readers and magnifiers can use the UIA client API to access that programmatic representation, and go on to expose the information in whatever way is most helpful to your customers. The Windows Narrator screen reader, Magnifier app and On-Screen Keyboard features all use the UIA client API.

If you know of someone who’d benefit from a new AT tool which would enable them to interact with a Windows device in some new way, perhaps UIA could help you build that tool.


1. The Narrator screen reader doesn’t notice something in my app’s UI. Where do I start the investigation?

Narrator is a UIA client app, and what it knows about your app’s UI is based on what it can learn using the UIA client API. So this includes learning about all the properties of elements in your UI, (such as name, control type and bounding rectangle,) what behaviors your elements support, (such as whether they can be invoked, expanded or scrolled,) and what events are being raised by your UI, (such as when an element acquires keyboard focus or its name or value changes). When you find a UIA client app like Narrator doesn’t seem to notice an element in your UI, it’s best to point the Inspect SDK tool at the UI, to see what Inspect reports about the UI.

Check whether your UI element is being exposed in the Raw view of the UIA tree of elements. (Details of the Raw view and Control view of the UIA tree at I'll say this only once - Your customers need an efficient experience.) If your element is not being exposed through the Raw view of the UIA tree, then the element is not being exposed through UIA at all, and there’s no way that a UIA client app like Narrator can make your customer aware of the element. If the element is in the Raw view of the UIA tree, but not in the Control view, then it is being exposed through UIA, but effectively being marked as something that’s not interesting to your customer. So Narrator decides not to make your customer aware of the element.

If the element is in the Control view of the UIA tree, yet still Narrator isn’t making your customer aware of it, check the values of the properties on the element. Do any of those properties seem to not match what you would expect? For example, is the element shown visually on the screen, yet its IsOffscreen property is true?

Whenever I’ve built UI, I always point the Inspect SDK tool at it before using AT tools with the UI. The first thing I’m interested in is whether all the elements I want my customers to be aware of are being exposed through the Control view of the UIA tree.

The screenshot below shows the Inspect tool reporting elements in the High Contrast settings page that are exposed through the Control view of the UIA tree.


Figure 1: Selecting the Control view in Inspect’s Options menu, to have Inspect show all elements that are exposed through the Control view of the UIA tree.


2. My XAML Rectangle has a Click handler. Why isn’t it being exposed through UIA?

This question’s rather a continuation of #1.

Most UI frameworks available for building Windows apps do a lot of work for you to expose some programmatic representation of your app’s UI. For example, if you build apps with XAML, WinJS, WPF, WinForms or Win32, you can point the Inspect SDK tool at the app, and learn how your UI is being exposed through UIA. (An exception here is DirectX. You have to implement the UIA provider interface yourself to make your DirectX app programmatically accessible.)

When building your app, if you use standard controls that are available through the UI framework, you get a huge head-start on making your app programmatically accessible. For example, if you add a XAML Button control to your app, you’ll find it exposed through the UIA Control view, its properties will specify such things as its control type and location, and it can be programmatically invoked. In fact, if it has text shown on the button, that text will be exposed programmatically as the name of the element, and it’s usually fair to say that the control is accessible by default.

Note: If your button shows an image rather than text, then you will have to take steps to provide a helpful, concise and localized accessible name for the button. Tips on doing that are at Giving your XAML element an accessible name: Part 1 - Introduction.
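As a minimal sketch of one such step, (the “SaveButton” resource and image path here are hypothetical,) the accessible name can come from a localized string resource via x:Uid:

```xaml
<!-- Hypothetical sketch: a button showing only an image. The accessible
     name comes from a string resource whose key is
     "SaveButton.[using:Windows.UI.Xaml.Automation]AutomationProperties.Name". -->
<Button x:Uid="SaveButton">
    <Image Source="Assets/Save.png" />
</Button>
```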

If you’re considering building UI that doesn’t use standard controls, it’s really, really important to weigh up the pros and cons of doing that. An example of the complications around doing that is in this #2 question.

Someone had used a Rectangle in a XAML app to create something that looked visually like a button. He’d then added a Click handler to it, in order to be able to respond to user action at the Rectangle. This enabled his customers who could use a mouse or touch to leverage the button-like thing on the screen, but the UI was not programmatically accessible through UIA. The XAML framework doesn’t consider a Rectangle to be the sort of UI that’s important enough to the user to be exposed with its own element in the UIA tree. 

And very importantly, this isn’t all just about programmatic accessibility; the UI needs to be keyboard accessible too. A Rectangle with a Click handler isn’t accessible to someone who only uses the keyboard, because the Rectangle won’t be able to get keyboard focus.

So in this situation, the first question would be – if it behaves like a button, why not use a standard Button control? A Button can be styled in XAML or CSS to look like pretty much anything you want, (including, say, a Rectangle). So add a Button and style it, (not forgetting high contrast styling). You’ll get a ton of accessibility for free, such as programmatic properties, behaviors, events and keyboard accessibility.
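As a sketch of that idea, (the handler name and brush here are hypothetical,) a Button can be given a template that renders as a simple rectangle while keeping all of the Button’s built-in accessibility:

```xaml
<!-- Hypothetical sketch: a standard Button restyled to look like a plain
     rectangle. It still supports the UIA Invoke pattern, keyboard focus
     and focus-related events. Focus visuals and high contrast styling
     would need to be added back into this template in a real app. -->
<Button Content="Close" Click="CloseButton_Click">
    <Button.Template>
        <ControlTemplate TargetType="Button">
            <Grid>
                <Rectangle Fill="{ThemeResource SystemControlBackgroundAccentBrush}" />
                <ContentPresenter HorizontalAlignment="Center" VerticalAlignment="Center" />
            </Grid>
        </ControlTemplate>
    </Button.Template>
</Button>
```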


Tip: Check whether your element can be programmatically invoked, through the two steps using Inspect below. (And note that while adding a Click event handler to some types of controls might make your element programmatically invokable, adding a Pointer-related event handler will probably leave the element as not programmatically invokable.)


(i) Check that the IsInvokePatternAvailable property is true.



Figure 2: Inspect showing that an element has an IsInvokePatternAvailable property of true.


(ii) If the IsInvokePatternAvailable property is true, go to the Actions menu and invoke the element. 

Figure 3: Using the Inspect tool to programmatically invoke an element on the High Contrast settings page.


3. Why doesn’t my ComboBox or list item have an accessible name?

If you’re using a standard control from the UI framework, and showing text on the control, then often that text will be exposed through UIA as the accessible name of the control.

Nothing’s more pleasing than “Accessible by default”.

In some cases you will have to take action to have a helpful, concise and localized accessible name on a UI element. Sometimes that’s because the element is showing an image rather than text, and the UI framework can’t know what the best accessible name would be. And sometimes you have to set the name because of constraints in the way the UI framework behaves today.

I listed a few different ways to set the accessible name up at the blog series starting at Giving your XAML element an accessible name: Part 1 - Introduction. While I listed a few ways, in most cases it’s pretty straightforward to set up the accessible name of an element in a Windows Store app. That is, add a localized string resource, (just as you’d do if the text was shown visually,) and then add a reference to the string from wherever the element is defined.

Over the last couple of months I did get some questions related to adding accessible names to ComboBoxes and list items, so the application of a few of the techniques available for adding an accessible name is described below.


The first ComboBox

The first question related to setting an accessible name for a standard ComboBox. The dev had suggested adding an event handler to the ComboBox, and when some event relating to the ComboBox is received, programmatically setting an accessible name in code-behind. While technically I expect that would work, and perhaps might be necessary for some custom UI, if this is a standard ComboBox control, it’s way less work just to do the two steps of adding a localized string resource and referencing that resource from the UI definition.


For example, with a XAML ComboBox, first add the helpful, concise and localized resource…

      name="ComboBoxBirds.[using:Windows.UI.Xaml.Automation]AutomationProperties.Name"
      <comment>Accessible name of the Birds ComboBox</comment>


And then add the x:Uid markup to the ComboBox definition, to link the ComboBox to the string…
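For example, a sketch of that markup, (with the ItemsSource binding assumed from the snippet later in this post):

```xaml
<!-- The x:Uid matches the "ComboBoxBirds" prefix of the string resource,
     so the resource's value becomes the ComboBox's accessible name. -->
<ComboBox x:Uid="ComboBoxBirds" ItemsSource="{x:Bind Birds}" />
```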



The second ComboBox

This question related to the accessible name of a ComboBox when a custom Header is used. The dev said that when a regular Header is used with the ComboBox, Narrator would speak the Header fine, but when a custom ComboBox.Header contains a TextBlock, Narrator didn’t announce that text.

So after getting that question, I tried a quick test with the ComboBox class. I created a ComboBox with the accessible name defined with the Header property in the ComboBox tag, and as the dev pointed out, Narrator announced that just fine. When I pointed the Inspect tool at the UI I’d built, I could see the accessible name exposed on the ComboBox element…



Figure 4: Inspect showing the accessible name of the ComboBox.


I then changed the XAML to have a ComboBox.Header tag inside the ComboBox, and a TextBlock for the label. When I did that, the ComboBox element lost its accessible name as shown by Inspect below, and that’s why Narrator said nothing… 


Figure 5: Inspect showing that the ComboBox has no accessible name.


So I then added some “LabeledBy” markup to explicitly tell the XAML framework that the accessible name of the ComboBox should be picked up from the TextBlock showing its label. (By the way, please excuse the fact that some screenshots here show the results of an earlier test where I was building a font-related ComboBox, while some code snippets relate to a "Birds" ComboBox. I wrote some of this post just after getting back from a wildlife refuge.)

<ComboBox x:Name="ComboBoxBirds" ItemsSource="{x:Bind Birds}" DisplayMemberPath="Source"
    AutomationProperties.LabeledBy="{Binding ElementName=ComboBoxBirdsHeaderLabel}">
    <ComboBox.Header>
        <TextBlock Name="ComboBoxBirdsHeaderLabel" x:Uid="ComboBoxBirds" />
    </ComboBox.Header>
</ComboBox>


That made a name available through UIA again, and Narrator could then access and announce it. I don’t know if there’s any automatic way to have the contents of the ComboBox.Header be repurposed as the label of the ComboBox, but the above steps got things working as required.

The list items

In attempting to set the accessible name on XAML list items, the dev had used the common approach of overriding the ToString() method in their C# class associated with the list items. In addition to this, they had also attempted to use databinding to bind the AutomationProperties.Name property in their DataTemplate. Having done that, they did not find the accessible names set on the list items as expected.

The answer to this issue was to only use the ToString() override, and not attempt any additional databinding of the list items’ accessible names. While in other situations databinding the AutomationProperties.Name property can be a great way to expose whatever the current accessible name is on some element, for list items in a C# app, the ToString() override is the way to go.
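A sketch of that approach, with a hypothetical Bird item class:

```csharp
// Hypothetical item class backing the list. With no accessible name set
// through other means, the XAML framework falls back to ToString() when
// exposing the list item's UIA Name property.
public class Bird
{
    public string Name { get; set; }
    public string Source { get; set; }

    // Return helpful, concise, (and in a real app, localized,) text to
    // be used as the list item's accessible name.
    public override string ToString()
    {
        return Name;
    }
}
```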


4. My TextBlock is a “LiveRegion”, but still my customer isn’t told when it's updated to present some critically important information. Why?

Your app can declare some element, such as a TextBlock, to be a “LiveRegion”, in order for your customer to be notified of some very important text being set on the element, when the customer isn’t interacting directly with the element at the time the text is set.

Details on using LiveRegions in XAML apps can be found at Let your customers know what's going on by leveraging the XAML LiveSetting property, and that post describes some reasons why the user might not be notified as expected of the text being set. I do still get questions on this topic, so I think it’s worth calling out how valuable the AccEvent SDK tool can be when working with LiveRegions.

The AccEvent tool can show me that a LiveRegionChanged event is being raised immediately after the critical text has been set on the element. If no LiveRegionChanged event is being raised, then an AT tool might not be made aware of the change, and so not inform your customer of some critically important information.
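In a XAML app, the update might be sketched like this, (assuming a hypothetical TextBlock named StatusText whose LiveSetting property has been set in markup):

```csharp
// Set the critically important text, then raise the UIA LiveRegionChanged
// event so that AT tools are made aware of the change.
StatusText.Text = "Connection lost";

AutomationPeer peer = FrameworkElementAutomationPeer.FromElement(StatusText);
if (peer != null)
{
    peer.RaiseAutomationEvent(AutomationEvents.LiveRegionChanged);
}
```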

So use AccEvent to verify that the LiveRegionChanged event is being raised as expected, and to check what properties are set on the element at that time. For example, has the important text already been set on the element, and is the IsOffscreen property false? It can also be interesting to learn what other events are being raised by your UI around the same time as the LiveRegionChanged event. One dev I talked to found that a FocusChanged event was raised immediately after the LiveRegionChanged event, and Narrator chose to only make the user aware of the keyboard focus change.



Figure 6: AccEvent showing that LiveRegionChanged events are being raised by some UI.


5. Why is Narrator completely oblivious to me tabbing through my custom controls?

On two occasions over the last few months, I was asked by people who’d built their own custom UI, why Narrator didn’t say anything as they tabbed through that custom UI. As you and I know, the first thing we’d ask is whether it’s appropriate to use custom UI, given that many standard controls will provide keyboard accessibility and programmatic accessibility by default. But let’s assume here that the custom UI is required in these apps, and the devs had done the work to show some helpful keyboard focus feedback around the elements as the user pressed (say) the Tab key or arrow keys. (Not forgetting that the helpful keyboard focus feedback needs to stay very helpful when a light-on-dark or dark-on-light theme is active.)

In order for a UIA client app like Narrator to inform your customer when keyboard focus has moved between your custom elements, your app needs to raise a UIA FocusChanged event. In the case of a C#/XAML app, this might be done with…

    AutomationPeer peer = FrameworkElementAutomationPeer.FromElement(<The name of the XAML element>);
    if (peer != null)
    {
        peer.RaiseAutomationEvent(AutomationEvents.AutomationFocusChanged);
    }

Neither of the apps I was looking at here was raising that event. So the apps were updated to raise the event, and the devs could use the AccEvent SDK tool to verify the event was now getting raised as expected. But in one case, that was only half the story. Elements exposed through UIA have certain keyboard focus-related properties set on them, and the values of those properties must match the elements’ keyboard behaviors.

One property of interest is IsKeyboardFocusable. For any element that can get keyboard focus, IsKeyboardFocusable must be true. (And not surprisingly, it must be false if the element can’t get keyboard focus.) The other property of interest is HasKeyboardFocus. This property is set based on whether the element has keyboard focus at the time the property is queried. For custom XAML UI which is doing all the work itself to manage its keyboard accessibility, the UI will need a custom AutomationPeer which returns the appropriate values for IsKeyboardFocusable and HasKeyboardFocus. (And note that you’ll want these values set to the appropriate current values by the time the FocusChanged event is raised.)
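A sketch of those overrides, for a hypothetical CustomControl which tracks its own focus state:

```csharp
// Hypothetical peer for custom UI that manages its own keyboard focus.
// The *Core overrides here back the UIA IsKeyboardFocusable and
// HasKeyboardFocus properties.
public class CustomControlAutomationPeer : FrameworkElementAutomationPeer
{
    public CustomControlAutomationPeer(CustomControl owner) : base(owner) { }

    protected override bool IsKeyboardFocusableCore()
    {
        // This element can accept keyboard focus.
        return true;
    }

    protected override bool HasKeyboardFocusCore()
    {
        // HasLogicalFocus is a hypothetical member tracking whether this
        // control currently has the keyboard focus.
        return ((CustomControl)Owner).HasLogicalFocus;
    }
}
```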

Once the app with the custom UI was setting the two related properties and raising the FocusChanged event, Narrator would announce the element acquiring keyboard focus as required when the user tabbed through the UI.



Figure 7: Inspect showing that a button declares that it can get keyboard focus, but doesn’t have keyboard focus at the moment.


6. What do I have to do to enable my customer using Narrator to invoke my context menu through touch alone?

I was asked by someone building a XAML Windows Store app, how can a customer using Narrator on a touch device, (and with no keyboard,) invoke a context menu in his UI? The touch scenario is particularly interesting here, as when you don’t have a keyboard, you can’t press the Context Menu key or Shift+F10.

From the perspective of a UIA client app, that client app is interested in whether it can call a ShowContextMenu() method on some UIA object. (That method might be available through the IUIAutomationElement3 and IUIAutomationTextRange2 interfaces.) But the handy thing for XAML app developers is that they don’t have to care about all that. All the app dev needs to do is implement a RightTapped event handler on the UI which presents the context menu. Once the RightTapped event handler is set up, the XAML UI framework will connect things up such that the handler gets called when a UIA client calls the ShowContextMenu() method.

So typically all the XAML app dev needs to do is…


        <Button RightTapped="Button_RightTapped" …  />


        private async void Button_RightTapped(object sender, RightTappedRoutedEventArgs e)
        {
            // I've been right tapped, so show the context menu...
        }


And in fact the XAML app dev who originally asked the question had already done everything required to make their context menu accessible to a Narrator user on a touch machine. Once Narrator was moved over to the element of interest, a 2-finger double-tap gesture resulted in the context menu appearing.


7. How should I expose the contents of my empty table cells?

This was an interesting one. The dev had built some very custom table-based UI, and had done a lot of work to support the UIA Table and Grid patterns. All that work had paid off, and his customers using Narrator could use table-related keyboard shortcuts to navigate the rows and columns in the UI.

The question was, should the cells that are empty still be exposed through the UIA tree? After all, it might seem more efficient for the customer to only reach cells which visually present information.

My first thought was, are those empty cells conveying useful information despite not showing any contents visually? For example, say I built an app which presents the following table visually to my customers who are sighted. In the first column, two cells contain checkmarks and two cells are empty.



Figure 8: Table with two columns and four rows, including two empty cells.


The empty cells enable my sighted customers to know at a glance whether I saw either a Bittern or Kingfisher. So it seems appropriate for the programmatic representation of those cells to let my customers who interact with that representation know the same information just as efficiently. That is, the empty cells should be exposed through the Control view of the UIA tree, (just as the not-empty cells are,) and the cells should have a helpful, concise and localized name of (say) “Not spotted”.

It would be helpful to have the start of the accessible name be different from the start of the accessible name of the cells that aren’t empty. For example, by having the cells names be “Spotted” or “Not spotted”, my customer could quickly move through the cells and know immediately as the name starts to be spoken the state of the cell. (Alternatively, if the column has a header of “Spotted”, perhaps the cell names might be “Yes” and “No”.)

And of course, I’ve not sufficiently considered the full end-user experience here, as using “Spotted” for the accessible name of the cell could lead to all sorts of confusion once I add a row for Woodpeckers.

But whatever the helpful, concise and localized names you choose, by having the empty cells exposed through UIA, you’re providing a consistent experience during table navigation. Imagine if your customer issues a command to move to a cell in the “next” row or “next” column, and they were moved past one or more empty cells. How do they know where they are now? Instead, if a customer using Narrator knows they’ve moved three columns when they did CapsLock+F3 three times, that provides a fast and predictable experience.
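One way of wiring that up might be to bind each cell’s accessible name to a localized label supplied by the item, (the SpottedLabel and CheckmarkImage properties here are hypothetical):

```xaml
<!-- Sketch: the cell's accessible name is "Spotted" or "Not spotted"
     depending on the item's state, regardless of whether a checkmark
     image is shown visually. -->
<DataTemplate>
    <Grid AutomationProperties.Name="{Binding SpottedLabel}">
        <Image Source="{Binding CheckmarkImage}" />
    </Grid>
</DataTemplate>
```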

Nothing’s more pleasing than “Predictable”.


8. Is there anything I need to do related to the size of the text shown in my app?

This question isn’t related to programmatic accessibility, but I felt it was such a great question that I had to include it here.

The dev had built a Windows Store app and experimented with the “Change the size of text, apps and other items” setting in the Settings app on the desktop. They’d verified that the UI in their app looked great at different display scalings. And in many cases on the desktop, an app dev doesn’t need to take any more action around accounting for display scaling.

A couple of situations where I have needed to take additional action on the desktop, related to the following…

1. Supplying images at multiple resolutions, (because I didn’t want the quality of my app’s visuals to degrade due to low resolution images being scaled up).

2. Accounting for the current scaling in some custom input pointer event handling. I don’t remember where I had to account for that, or even if it’s necessary any more, but if you’re basing calculations on pointer location events, it’s worth verifying that those calculations aren’t affected by display scaling.


Today the settings affecting text size on the Windows phone are different from the settings affecting text size on the desktop, and I’ve found that phone apps are more prone to problems with text scaling due to hard-coded UI sizes which don’t account for the scaling. I’ve put thoughts on that towards the end of Building accessible Windows Universal apps: Other important accessibility considerations.

A final note on this question, on the desktop there’s also a setting of “Change only the text size” in the classic Control Panel. This allows your customer to set the size of text in specific parts of UI, such as menus or the title bar. Most apps won’t have to do anything to account for this, but if you’ve built some very custom UI where you control the size of text in such things as menus and title bars, then you’d want to consider those settings. 


9. Why are my attempts to set keyboard focus to particular elements not working?

This question came from someone trying to programmatically set keyboard focus to two different elements in their XAML app, and in neither case was keyboard focus actually ending up on the elements.

In the first case, the dev was trying to set focus to a TextBlock, but TextBlocks aren’t keyboard focusable, and so XAML won’t allow keyboard focus to be set there. If TextBlocks were focusable, then as a customer tabs around your UI, they’d be taken to lots of static text labels which they can’t interact with. This would make the keyboard experience inefficient. Using the keyboard should be a very fast experience, often faster than using mouse or touch.

AT tools like screen readers provide their own ways to enable your customers to access elements like TextBlocks which aren’t keyboard focusable.

In the second case, the dev was attempting to set keyboard focus to a dialog when the dialog is displayed, but finding that keyboard focus ended up being put on a TextBox contained inside the dialog. This is again due to deliberate action taken by the XAML UI framework, which moves keyboard focus from the dialog on to some keyboard focusable element inside the dialog. If keyboard focus was left on the dialog, then in almost all cases the user would have to press the Tab key to move keyboard focus away from the dialog itself and on to a control where they want to work, (such as the TextBox). The keyboard user wants the UX to be very efficient, and so expects keyboard focus to be placed at a control where they’re likely to want to start working first.

And note that while XAML was moving the keyboard focus to the TextBox in this case, perhaps that control isn’t the most likely place where your customer wants to start working. How would XAML know what the best control is in your UI for initial keyboard focus? But you, as the app dev, do know which control in the dialog is commonly going to be where the user wants to start working, so set keyboard focus to that control when the dialog appears.
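For example, a sketch with a hypothetical ContentDialog containing a SearchTextBox:

```csharp
// When the dialog opens, explicitly move keyboard focus to the control
// where the customer is most likely to want to start working.
private void MyDialog_Opened(ContentDialog sender, ContentDialogOpenedEventArgs args)
{
    SearchTextBox.Focus(FocusState.Programmatic);
}
```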


9.1 Why isn’t my Back button being exposed through UIA?

This question related to why the Back button in an app wasn’t being exposed through the UIA tree. (Perhaps the thoughts on question #1 above might be useful here.) And as interesting as the Back button question was, I’m not going to talk about it here, because it led to another question related to keyboard accessibility that I think is extremely important but which sometimes ends up being an afterthought. So I thought I’d slip in some mention of that other question here.

And the other question is, how efficient is the experience that you provide for your customers who only use the keyboard? We know it’s critical that your customers can access all your app’s great functionality when using only the keyboard, but how fast can they do that?

Say your Back button can be reached by your customers who only use the keyboard, but it may take (say) ten presses of the Tab key to reach the button. Your customers will likely find this a really frustrating experience. After all, a Back button is a very heavily used button.

So to solve this, provide a keyboard shortcut to access the Back button’s functionality without requiring the user to spend a while moving keyboard focus to the button. Many apps use the Backspace key or Alt+LeftArrow as shortcuts to the Back functionality. In fact, consider providing keyboard shortcuts for all the heavily used features in your UI. For example, Ctrl+F (say) beats having to tab over to a search field, and Ctrl+B beats tabbing to a Bold button.
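A sketch of one way to react to Alt+LeftArrow in a Windows Store app, (this assumes a typical page-based app navigating through a Frame):

```csharp
// Listen for accelerator keys at the window level, so the Back shortcut
// works regardless of where keyboard focus happens to be.
Window.Current.CoreWindow.Dispatcher.AcceleratorKeyActivated += (sender, e) =>
{
    // Alt+LeftArrow arrives as a SystemKeyDown with the menu (Alt) key held down.
    if (e.EventType == CoreAcceleratorKeyEventType.SystemKeyDown &&
        e.VirtualKey == VirtualKey.Left &&
        e.KeyStatus.IsMenuKeyDown &&
        this.Frame.CanGoBack)
    {
        this.Frame.GoBack();
        e.Handled = true;
    }
};
```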

Nothing’s more pleasing than “Efficient".


10. I want to build a new AT tool to help someone I know. Should I use the managed .NET UIA API or the native Windows UIA API?

I knew the behaviors of the managed .NET UIA API and native Windows UIA API were similar but not exactly the same, so I recently looked into what components are shared between the two APIs. I learnt that both APIs use the UIAutomationCore.dll that’s in the \Windows\System32 folder, and this is a very important point. It means that any enhancements made to this component over time will be leveraged by both the managed .NET UIA API and the native Windows UIA API.

As far as I know, there are two reasons why the two APIs might still on occasion behave differently. The .NET UIA API can take some action specific to certain control types, which isn’t taken by the Windows UIA API. I don’t think this commonly impacts the relative behaviors of the two APIs, but can do once in a while. (For example, it seems there’s a difference in behavior when interacting with a WinForms ToolStrip control, as discussed at https://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/2363de9f-3bf0-48dc-a914-1eef5bcf1482/toolstriptextbox-toolstripcombobox-not-automated?forum=windowsaccessibilityandautomation).

The other reason why I expect there might occasionally be a difference in behaviors, is that (again, as far as I know,) the .NET UIA API uses the CUIAutomation object, whereas a client of the Windows UIA API can use either the CUIAutomation object or CUIAutomation8 object. When using the CUIAutomation8 object, the API is more resilient against unresponsive providers. For example, if a UIA client is using the CUIAutomation object, and it queries for UI data in an app that’s hung, the call to get the data might not return. But if a CUIAutomation8 object is being used, it’s much more likely that the call will return a timeout error to the client.
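For example, with a managed wrapper generated from UIAutomationCore.dll by tlbimp.exe, (the interop namespace name below is whatever was chosen at generation time,) the more resilient object might be created like this:

```csharp
using interop.UIAutomationCore;

// Creating a CUIAutomation8 object, rather than CUIAutomation, gets the
// client behavior that's more resilient against unresponsive providers.
IUIAutomation uiAutomation = new CUIAutomation8();
IUIAutomationElement rootElement = uiAutomation.GetRootElement();
```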

Also, I know of one case where the data returned when using the CUIAutomation8 object is different from that returned when using a CUIAutomation object. The post at http://blogs.msdn.com/b/winuiautomation/archive/2015/10/07/how-come-i-can-t-find-a-text-pattern-for-notepad-when-inspect-tells-me-it-s-there.aspx describes how you can get a Text pattern from an Edit control when using the CUIAutomation8 object.

But going back to the question of which API to use, I’d say that often it’s a case of using whichever you feel most comfortable coding against. If you’re a C# .NET developer, and you want to use the managed .NET UIA API, go for it. If you’re more familiar with C++/COM development, use the native Windows UIA API. Given that they both use the same UIAutomationCore.dll, you’ll often get the same results.

If you’re using the managed .NET UIA API, and you find your UIA client calls need to be more resilient against unresponsive apps, (or you do hit one of those cases where you don’t get the data you expect from a call,) you could try using the native Windows UIA API, and compare the results. If the results from the native Windows UIA API are more helpful in your case, it’s then a case of deciding if the work to move to the native Windows UIA API from the managed .NET UIA API is justified. I’ve written a lot of UIA client C# code using a managed wrapper around the native Windows UIA API, generated by the tlbimp.exe tool, but the interface in that managed wrapper’s API isn’t the same as the managed .NET UIA API. There is an old managed wrapper for the native Windows UIA which is similar to the managed .NET UIA API, but I’ve never used it. Some details on using a managed wrapper with the native Windows UIA API are at So how will you help people work with text? Part 1: Introduction.

All in all, while neither UIA API is perfect, I do feel that they’re both really powerful, and can be valuable tools when building an AT solution for someone you know.


Comments (18)
  1. tim says:

    Dear Guy,

    Besides the PowerPoint same bounding box issue, I recently found that the EXCEL7 (Excel 2016) TextPattern breaks when you try to .Clone, and no TextUnits are available. Any chance I can work with you and the Office team to resolve these issues? They are critical for us. I am located in Bellingham and can quickly hop over to Seattle. Would love to buy you lunch to chat on this. Pls contact me at tim at loqu8.com. Added you on LinkedIn.



  2. Hi Tim, could you post code snippets which demonstrate the problem with the cloning of the TextPattern in Excel 2016? Hopefully I'll be able to find contacts who can investigate this further.



  3. tim says:

    I'll post the snippets. Regarding the PowerPoint issue, I believe it is being tracked as OfficeMain:2556280 – but I do not have any contact in the owning team, or visibility into who tested it. It would be nice to know if this is being looked at, or if there is any hope of resolution in the near term. It is a critical issue for us (as well as the Excel one).

  4. tim says:

    For EXCEL7 (Excel 2016), try pasting "媒体盘点奇葩公司那些事:员工训练亲吻垃圾桶" into a cell. Then, I use the following:

    1. To get the range

           var textPattern =   // (assignment elided in the original comment)

           if (textPattern != null)
           {
               var range = textPattern.RangeFromPoint(pt);
               if (range != null)
               {
                   rangeToTarget(range);
               }
           }

    2. In rangeToTarget, I have

           if (targetSource == "EXCEL7")
           {
               // Excel issues:
               // 1. range.Clone does not work in Excel
               // 2. Expand to Enclosing Unit never works, stays at the cell contents level
               onNotify("uia: " + range.GetText(-1));

               range.ExpandToEnclosingUnit(TextUnit.TextUnit_Character);
               onNotify("uia (char): " + range.GetText(-1));

               range.ExpandToEnclosingUnit(TextUnit.TextUnit_Word);
               onNotify("uia (word): " + range.GetText(-1));

               range.ExpandToEnclosingUnit(TextUnit.TextUnit_Paragraph);
               onNotify("uia (para): " + range.GetText(-1));

               try
               {
                   var rangeClone = range.Clone();
               }
               catch (Exception ex)
               {
                   onNotify("uia: " + ex.Message + Environment.NewLine + ex.StackTrace);
               }
           }


  5. tim says:

    3. The result in my debug log is

    CaptureXY: {X=79,Y=423}

     source: EXCEL7

     processName = EXCEL

     processPath = C:\Program Files (x86)\Microsoft Office\root\Office16\EXCEL.EXE

    process ProductVersion = 16.0.6366.2036

    process FileVersion = 16.0.6366.2036

    uia: 媒体盘点奇葩公司那些事:员工训练亲吻垃圾桶

    uia (char): 媒体盘点奇葩公司那些事:员工训练亲吻垃圾桶

    uia (word): 媒体盘点奇葩公司那些事:员工训练亲吻垃圾桶

    uia (para): 媒体盘点奇葩公司那些事:员工训练亲吻垃圾桶

    uia: Exception from HRESULT: 0x800A03EC

      at UIAutomationClient.IUIAutomationTextRange.Clone()

      at Loqu8.Capture.Windows.HybridCapture.rangeToTarget(IUIAutomationTextRange range) in E:\Loqu8\Projects\intuition\src\Loqu8\Loqu8.Capture\Windows\HybridCapture_uia.cs:line 214

    0000.000s – Exception from HRESULT: 0x800A03EC

      at UIAutomationClient.IUIAutomationTextRange.Clone()

      at Loqu8.Capture.Windows.HybridCapture.rangeToTarget(IUIAutomationTextRange range) in E:\Loqu8\Projects\intuition\src\Loqu8\Loqu8.Capture\Windows\HybridCapture_uia.cs:line 265

      at Loqu8.Capture.Windows.HybridCapture.uiaScanCapture(CaptureMode mode) in E:\Loqu8\Projects\intuition\src\Loqu8\Loqu8.Capture\Windows\HybridCapture_uia.cs:line 112

    4. For the GetText results, I would have expected something more like:

    uia: 媒体盘点奇葩公司那些事:员工训练亲吻垃圾桶

    uia (char): 媒

    uia (word): 媒体盘点奇葩公司那些事:

    uia (para): 媒体盘点奇葩公司那些事:员工训练亲吻垃圾桶

    5. The range.Clone error, which is less critical to me than the TextUnit issues, is 0x800A03EC

    6. Finally, when I put the point outside of the cell, even if there is overlap visually into the next cell, rangeFromPoint does not work. Ideally, it should be able to get what you see on the screen.

  6. tim says:

    PowerPoint is really close, TextUnit works fine to the character level, but the bounding box returned is always the same and the character TextUnit is always the first one. rangeFromPoint is most likely failing because of the bounding box issue. Going to the next character with Move works fine, but again the bounding box is the larger one, so one cannot even iterate through the boxes to find the right one under the point.

    Code snippet:

           if (targetSource == "mdiClass")     // PowerPoint
           {
               // For PowerPoint, the bug is that the BoundingRectangle is always the maximum one
               // for the entire range, even when we are looking at smaller segments. This means
               // we cannot compute a rangeFromPoint.
               onNotify("uia: " + range.GetText(-1));
               var doubles = range.GetBoundingRectangles();
               if (doubles.Length >= 4)
               {
                   var rect = new System.Windows.Rect(
                       new System.Windows.Point(doubles[0], doubles[1]),
                       new System.Windows.Size(doubles[2], doubles[3]));
                   onNotify("uia: " + range.GetText(-1) + " (" + rect.ToString() + ")");
               }

               range.ExpandToEnclosingUnit(TextUnit.TextUnit_Character);
               onNotify("uia (char): " + range.GetText(-1));
               doubles = range.GetBoundingRectangles();
               if (doubles.Length >= 4)
               {
                   var rect = new System.Windows.Rect(
                       new System.Windows.Point(doubles[0], doubles[1]),
                       new System.Windows.Size(doubles[2], doubles[3]));
                   onNotify("uia: " + range.GetText(-1) + " (" + rect.ToString() + ")");
               }

               range.Move(TextUnit.TextUnit_Character, 1);
               onNotify("uia (char): " + range.GetText(-1));
               doubles = range.GetBoundingRectangles();
               if (doubles.Length >= 4)
               {
                   var rect = new System.Windows.Rect(
                       new System.Windows.Point(doubles[0], doubles[1]),
                       new System.Windows.Size(doubles[2], doubles[3]));
                   onNotify("uia: " + range.GetText(-1) + " (" + rect.ToString() + ")");
               }
           }

    uia: 媒体盘点奇葩公司那些事:员工训练亲吻垃圾桶

    uia: 媒体盘点奇葩公司那些事:员工训练亲吻垃圾桶  (381,619,528,148)

    uia (char): 媒

    uia: 媒 (381,619,528,148) <— 🙁

    uia (char): 体

    uia: 体 (381,619,528,148) <— 🙁

  7. tim says:

    For Edge, things seem to work well when I get to a TextPattern-enabled element. When I get to a link, I have to use TextChild, and parse through the bounding boxes (it would be nice to get a TextPattern here instead). However, from time to time, I run into huge blocks of text that are only TextChild capable. For text past the window size (when I scroll), the bounding box fails to return and I can no longer fake my rangeFromPoint. Actually, when this happens the ones that used to work also stop returning bounding boxes (maybe I'm working it too much).

    This is my TextChild rangeFromPoint

           private void rangeFromPoint(tagPOINT pt, IUIAutomationTextRange range)
           {
               // TODO: if Chinese, do it character by character, otherwise word by word

               //onNotify("uia: " + range.GetText(-1));
               int moved = 0;
               // do loop below is basically rangeFromPoint
               do
               {
                   moved = 0;
                   var doubles = range.GetBoundingRectangles();
                   if (doubles.Length >= 4)
                   {
                       var rect = new System.Windows.Rect(
                           new System.Windows.Point(doubles[0], doubles[1]),
                           new System.Windows.Size(doubles[2], doubles[3]));
                       onNotify("uia: " + range.GetText(-1) + " (" + rect.ToString() + ")");
                       if (pt.y > rect.Bottom)
                       {
                           moved = range.Move(TextUnit.TextUnit_Line, 1);
                       }
                       else if (pt.x > rect.Right)
                       {
                           moved = range.Move(TextUnit.TextUnit_Character, 1);
                       }
                   }
                   else
                   {
                       onNotify("uia: No bounding rect");
                   }
               } while (moved != 0);
           }


    For an example TextChild webpage that seems like it should be TextPattern enabled, take a look at windows.microsoft.com/…/antivirus-partners – isTextChildPatternAvailable is true but isTextPatternAvailable is false. The AutomationId is bodyContentPane. Maybe there is a way of getting a TextPattern from a TextChild so that I can use rangeFromPoint from there?

  8. tim says:

    Maybe this is a problem of text inside a div?

  9. tim says:

    For the Microsoft antivirus page, the problem is that the text is inside "aside", which makes the text TextChild only. However, if you look at the Bing search results (the snippet), there are issues there too that somehow make it only TextChild. For a real-life example of what we need, consider news.qq.com/…/029547.htm – that main text only ever presents itself as TextChild available. Not sure why.

  10. Hi Tim,

    I've been experimenting with text ranges in Excel 2016.

    I can repro the unexpected results using ExpandToEnclosingUnit(). In my tests, the results of the call seem to depend on whether the cell has entered a state where the caret is displayed in the cell. (I don't know what that state is called, but I mean when you double click in the cell, or move keyboard focus over to it and press F2.) When I do that, and leave the mouse cursor over the cell, I can switch back to my test UIA client app, call RangeFromPoint(), and get a degenerate text range as expected. From that, I can call ExpandToEnclosingUnit(TextUnit.TextUnit_Character) and get the character next to the mouse cursor. I can then use the other text units to expand beyond the character.

    Note that it looks like the ':' character in the string you supplied isn't considered to be a word breaking character. (If I do Ctrl+Right/LeftArrow, then the caret doesn't jump to be next to the ':'.) If I replace the ':' with a space, then I get different results expanding to TextUnit_Word and to TextUnit_Paragraph.

    But if I've not set the caret into the cell content before calling RangeFromPoint(), I seem to get a range containing the entire cell's content, and that's contrary to what msdn.microsoft.com/…/ee696219(v=vs.85).aspx says about returning a degenerate range. So I'll log a bug about this, and see if I can find someone in Office who can follow up with that.

    Interestingly, I find that the state of the cell before the call also affects the Clone() call. If I've not placed the caret in the cell, then Clone() throws an exception just like you said. But if I've placed the caret in the cell before switching back to my UIA client app, then the Clone() call works, and I can expand the cloned range to the enclosing units. Also, I find that RangeFromPoint() works as expected with text overflowing the cell, if the caret has been placed in the cell with the text first.

    While all this unexpected behavior is being investigated, I wonder if there's any way to temporarily work around the issue. I don't know enough about Excel to know what the options are. Simulating a double click of the left mouse button would presumably work, but I realize that's really ugly, could have unacceptable side-effects, and wouldn't work if you want to access text overflowing a cell. But if in some cases it is acceptable, then this would appear to work around the two most significant issues you've raised with Excel 2016.

    As part of trying to get you an update on the above, I'll also see if I can find someone who can supply an update on the PowerPoint issue, and comment on the Edge behavior you're encountering.
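    For reference, the call sequence I'm describing above looks something like this with a tlbimp-generated wrapper of UIAutomationCore.dll. (A sketch, not production code: the pattern id value 10014 is UIA_TextPatternId, the point coordinates are placeholders, and error handling is omitted.)

    ```csharp
    using System;
    using UIAutomationClient; // tlbimp-generated wrapper of UIAutomationCore.dll

    class RangeFromPointSample
    {
        const int UIA_TextPatternId = 10014;

        static void Main()
        {
            var uia = new CUIAutomation();

            // Hit-test at a screen point lying over the cell of interest.
            var pt = new tagPOINT { x = 500, y = 400 };
            IUIAutomationElement element = uia.ElementFromPoint(pt);

            var textPattern =
                (IUIAutomationTextPattern)element.GetCurrentPattern(UIA_TextPatternId);
            if (textPattern != null)
            {
                // Expected: a degenerate (empty) range at the point...
                IUIAutomationTextRange range = textPattern.RangeFromPoint(pt);

                // ...which expands to the single character nearest the point.
                range.ExpandToEnclosingUnit(TextUnit.TextUnit_Character);
                Console.WriteLine(range.GetText(-1));
            }
        }
    }
    ```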



  11. tim says:

    Guy, thank you for checking into these things. With regards to Excel, the moment you enter Edit mode things are good – because the class is EXCEL6, not EXCEL7 (the tool bar edit window also works; its class is EXCEL<). Clone will also work when you are in EXCEL6 mode.

  12. tim says:

    Regarding PowerPoint, I have been working with Jesse Harvey. Are you by chance in Redmond? I would love to buy you lunch sometime and show you what we have been working on.

  13. alek says:

    Is there any way to know in code whether the user has Narrator switched on or off?

    1. Hi Alek, there’s no supported way of knowing whether Narrator’s running. I don’t know what’s prompting the question in your situation, but every now and again this question’s asked with the intent of delivering two UI experiences; one with Narrator not running, and one with Narrator running. In general, we avoid building two such experiences. Rather we build a single UI which can be consumed and controlled visually or via a screen reader.

  14. Nasko says:

    Hi Guy,
    First, I want to thank you for the great post. It has been of great help to me. However, I'm stuck on the following issue and could not find an answer on how to resolve it.

    So, basically I have created a custom control that derives from Control – it is a UWP control. After implementing its automation peer (the peer inherits FrameworkElementAutomationPeer), everything was perfect – Narrator was reading everything as expected. However, I decided to test whether it works on Windows Phone. When I turn on Narrator on Windows Phone, the BoundingRectangle is visualized as expected and again Narrator reads everything as expected, but I realized that some of my handled events, like PointerPressed, PointerEntered etc., are no longer invoked. My control really depends on the logic implemented in those events, and after turning on the Windows Phone Narrator they no longer work (if Narrator is turned off, everything works as expected).

    Could you please provide me some information on why these events are no longer triggered when Narrator is turned on, and whether there is a suitable approach to make the logic in my events execute and work together with Narrator?


    1. Hi Nasko,

      I don’t think I’m going to be much help here, but here are a few thoughts anyway…

      I can’t explain the difference in behavior when your control’s running on the desktop and on the phone. While I know great progress has been made in having the same platform code running on all devices, (which means components such as Narrator, UI Automation, and the XAML framework will largely behave the same,) I don’t know if there are still specific details which are platform dependent. Your experience suggests that that is the case, given that your control behaves differently.

      Having said that, if you were controlling Narrator with touch on the desktop, I’m actually surprised that the control was working as well as it was for you there. When Narrator’s running on a touch device, it intercepts touch input, and determines whether your customer has performed a gesture which maps to one of Narrator’s commands. For example, a right-swipe to move Narrator to the next element, or a double- or triple-tap to trigger a control’s primary or secondary action. While your customer is issuing these commands through touch gestures, they won’t want whatever UI happens to lie beneath their finger to be independently reacting to that input. Otherwise your customer might do a double-tap to invoke the element where Narrator currently is, and at the same time invoke an unrelated button beneath their finger. As such, in general Narrator will block the touch input, such that related pointer events don’t directly reach the controls where the input is made.

      This can mean that a control’s pointer-related event handler won’t get called in response to the gesture. Instead, when Narrator’s running on a touch device, all interaction with controls is made programmatically through the UIA API. Often the actual work done to respond to the customer’s action will be taken by the XAML framework. For example, say I present a scrollable ListView, and my customer issues Narrator’s scroll gesture. Narrator calls into the UIA API, and asks for the list to be scrolled. UIA then calls into the XAML framework, and XAML responds by scrolling my ListView. I didn’t have to do any work to make that happen, which is how I like it. If my customer reaches a button, and issues Narrator’s gesture to invoke the button, then Narrator calls into the UIA API and asks for the button to be invoked, (through the UIA Invoke pattern). UIA then calls into the XAML framework and asks for the button to be invoked. XAML then calls the button’s Click handler, and that handler responds in the same way as if the button had been invoked using mouse, keyboard or touch (ie touch input when Narrator’s not running). I know in the past, XAML wouldn’t call a pointer-related event handler in response to an attempt to programmatically invoke the button through UIA. I don’t know if that’s changed since I last looked into it. (From what you’ve said, maybe this is an area where there is a difference in how some component behaves on the phone relative to the desktop.)

      So what you may want to do is consider whether it’s practical to add other event handlers to the control, which do get called by XAML in response to Narrator’s touch gestures. For example, replace PointerPressed with Click. (It is worth noting that apps want to be fully keyboard accessible too, and use of pointer-related event handlers alone can impact that.) If you feel it just isn’t possible today to have all your functionality called through Narrator gestures and UIA patterns, I can pass your feedback on to the relevant platform teams. In the past I’ve encountered a control that exposed multiple actions through regular mouse or touch input, and not all of them could be accessed at the control through Narrator gestures. (I think Narrator could invoke, expand and scroll the control, but there was additional functionality in the control for which there was no matching Narrator gesture.) So in this case the additional functionality was made accessible through some other UI. The goal was to have all the app’s functionality accessible to all customers, and the goal was met, despite the entry point to the functionality not being identical.
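      To make that concrete, here’s the kind of change I mean, sketched for a hypothetical UWP control. (MyActionControl and DoPrimaryAction are made-up names; the point is just that Click fires for a programmatic UIA Invoke while PointerPressed may not.)

      ```csharp
      using Windows.UI.Xaml.Controls;

      // A hypothetical control whose primary action must be reachable by
      // mouse, keyboard, touch, and Narrator alike.
      public sealed class MyActionControl : Button // Button's peer exposes the UIA Invoke pattern
      {
          public MyActionControl()
          {
              // Raised for mouse, keyboard, touch, and also when Narrator
              // invokes the control programmatically through UIA.
              Click += (s, e) => DoPrimaryAction();

              // Raised only for direct pointer input; Narrator's touch
              // gestures may never reach this handler, so keep only
              // pointer-specific niceties here.
              PointerPressed += (s, e) => { /* pointer-specific feedback only */ };
          }

          void DoPrimaryAction()
          {
              // The control's real work goes here.
          }
      }
      ```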



      1. Nasko says:

        Hi Guy,

        Thank you for the detailed information. It has really helped me a lot to understand how exactly the Narrator was working in touch. You gave me some ideas that I will try to implement in my project.

        As always your answer was great. Thanks a lot again. 🙂


  15. Petar Mladenov says:

    Hi Guy,

    Very useful post, so big thanks for this contribution.

    I have a question regarding Narrator and how it reads Selection events. If we have a typical ItemsControl whose peer is an ISelectionProvider, and its items (their containers) have peers which are ISelectionItemProviders – this is easy. Focusing and selecting the items is usually enough to get Narrator to read “selected”, “nonselected”, “selected item 3 of 5” etc.

    How about more complex scenarios, where the focus is not on the items? For example, custom controls like AutoSuggestBox (https://docs.microsoft.com/en-us/uwp/api/Windows.UI.Xaml.Controls.AutoSuggestBox) – focus is always in the TextBox, but the selection is in the drop down list (the list of suggestions). How does Narrator read the selection in the drop down list?

    Let’s say we have a couple of similar custom controls (they inherit from Control, not ItemsControl). We tried the following tricks to get Narrator to read the selection of items which cannot be focused:

    – the control’s peer is an ISelectionProvider, and we fire automation selection events from the peer.
    – the items’ peers are ISelectionItemProviders, and we fire automation selection events and property-changed events for IsSelected.
    – we tried adding the child peers as direct children of the ISelectionProvider peer.
    – we fire the AutomationFocusChanged event.
    – we tried various types of AutomationControlType for the custom control – Group (like the AutoSuggestBox), ComboBox (like Silverlight’s AutoCompleteBox), List, Custom, Edit.
    – we delegated GetPatternCore() to the child ListBox peer’s GetPatternCore() in the case when the pattern is Selection.
    – we even tried hacks like the TextBox’s peer being a fake ISelectionProvider, delegating the Selection methods and members to the ListBox in the dropdown list.

    We actually tried various combinations of these, but we failed to get Narrator to read the selection. Do you see anything we’ve missed, or anything we could try additionally?

    Thank you in advance,


Comments are closed.
