The perils of styling visuals

This post describes the hazards of building custom UI, and how some of these hazards can be avoided by basing the custom UI on fully accessible standard controls. The sample results described in the post are based on HTML and CSS hosted in the Edge browser.


A few days ago I was talking with a dev who’d built UI which hosted controls that looked like push buttons, but in some ways behaved like radio buttons. So when one button was clicked with the mouse, its visual state would change to reflect that the app was in some particular state. Only one of the two buttons could appear in that visual state, and so clicking on one button might result in the other button’s visuals changing to somehow look inactive.

This is a situation where it’s really important to consider the meaning of the UI.

In this particular case, the controls are intended to behave exactly like radio buttons, even though they don’t visually appear like standard radio buttons. So the question is, how might those controls be implemented, such that they provide a fully functioning and predictable experience for all customers, regardless of how the customers interact with their device?


Leverage what the platform already does for you

So I considered what I might do if I wanted to build two controls that looked like buttons, but behaved like radio buttons.



Figure 1: Two controls that visually appear as buttons, and where the colors showing on the controls are different.


One option would be for me to use two standard push buttons. By doing that, I’d get a lot of accessibility for free. For example, I could interact with the controls via the keyboard, and a screen reader like Narrator could programmatically invoke the buttons. In response to a button being invoked, I could change the visuals for the other of the two buttons. That seems like a great way to leverage all sorts of useful functionality provided by the platform, right?

But let’s think about this from my customer’s perspective.

Say my customers who use Narrator reach the first control. They’re told that they’ve reached a button, but aren’t told anything about the state of the button. This isn’t acceptable, as they must be told about the control’s state when they reach it. Maybe I could address that by trying to add support for some UIA pattern which lets my customer know that the button is toggled or selected, but my customer is still not made aware that there’s any relationship with the other control. So why would they expect some interaction with that first control to have any impact on the other control?
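In HTML, that approach might look something like the sketch below, marking each button as a toggle with aria-pressed. (This markup is my own illustration, not the dev’s actual code.)

```html
<!-- Each button now exposes a pressed state, but nothing tells
     assistive technology that the two buttons are related, or that
     pressing one will un-press the other. -->
<button aria-pressed="true">Yes</button>
<button aria-pressed="false">No</button>
```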

My sighted customers might assume there’s a relationship between the controls given their visual presentation, and might assume the two controls are going to behave like radio buttons. So they tab to the first control, and then try to use the arrow keys to move between the controls, because that’s the standard way of moving keyboard focus between radio buttons. But their attempt to move between the controls fails, because standard push buttons don’t react to arrow key presses at all.

Note: There’s some very interesting information on how standard control types typically react to keyboard input, in the “Keyboard Interaction” sections at WAI-ARIA Authoring Practices 1.1. So check out the Radio Group details there, and finally put those days of wondering how controls are meant to behave, behind you.

The upshot is that while it would be technically possible for me to use standard push buttons for my UI, and make a whole bunch of changes to them such that they behave more like radio buttons for all my customers, that sounds like way more work than I want to have to do. So is there a more efficient way for me to do this?


If it quacks like a radio button…

Ok, I want to build UI that contains two controls that behave like radio buttons. They behave like radio buttons for my sighted customers, and for my customers who use screen readers, touch input, mouse input, the keyboard, eye gaze input, speech input, or switch input. Basically – my controls are radio buttons for all my customers.

The only interesting thing about these radio buttons is that their visuals on the screen don’t look like traditional radio buttons.

So if my UI’s meaning is conveyed fully through the use of standard radio buttons, I’m going to use standard radio buttons. By doing this, my customers using Narrator will learn of the state of the control as soon as they reach it, and will know that by changing the control’s state, other controls in the same radio group will be affected. And my customers using the keyboard will know that an arrow key press will move them to another control in the radio group, and will select the radio button that’s getting keyboard focus. And a press of the tab key will move keyboard focus out of the radio group.
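A minimal standard radio group in HTML might look like this. (The ids and group name here are hypothetical, chosen just for illustration.)

```html
<!-- Giving both inputs the same name makes them one radio group:
     arrow keys move selection within the group, Tab moves focus out
     of it, and screen readers report the group position automatically. -->
<input type="radio" id="optYes" name="optGroup" value="Yes" checked />
<label for="optYes">Yes</label>

<input type="radio" id="optNo" name="optGroup" value="No" />
<label for="optNo">No</label>
```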

I get tons of accessibility support from the platform by default, simply by using standard radio buttons.

So all that’s left for me to do is to style the visuals, so that the controls appear on the screen in some unconventional way that I happen to think is really cool.

Note: Before I’d do this in a shipping app, I would think twice about why I want to present these unconventional visuals. After all, what’s wrong with showing radio buttons that look like radio buttons? And by default, I like to avoid work where I can. But let’s assume here that there’s a great reason why the custom visuals are required.



Figure 2: Two standard radio buttons showing default visuals, and which provide a predictable experience for all my customers.


The perils of visually styling controls

I’ve said many times that if you present custom visuals, it’s best to use standard controls that are accessible by default, and style them as required, rather than trying to make fully custom controls accessible. But I must confess, styling controls isn’t something I’ve had much experience with, and so I don’t know what sort of challenges can be hit in practice. So I had a go at styling two HTML radio buttons, to make them look more like push buttons when hosted in Edge.

I thought this would be pretty easy. I’d just go to the web, look up stuff on styling radio buttons, and be set. But this turned out to be far from easy. Everything I tried based on what I found on the web left the controls inaccessible. The keyboard interaction would be broken, as was the programmatic accessibility. The stuff I found on the web might leave the controls fine for my sighted customers who interact with the device through touch or mouse, but many of my other customers would not be able to use the UI. That’s no good to me.

This is a reminder of how important it is to pay attention to the keyboard and programmatic accessibility of UI, once steps have been taken to override its default visual representation.

So I set to work doing my own styling of the controls. The resulting HTML and CSS are included at the end of this post.


Disclaimer: After careful analysis, I have determined that there are exactly 1.5 million things I don’t know about CSS. So you may well spot a number of ways my CSS could be improved. The CSS I wrote was at least sufficient for me to do my experiment.


Applying styling that left the controls usable for all my customers

Typically a radio button will be defined with an input tag, of type “radio”, and some accompanying text label. One critically important point here is that the input element must not be hidden. If it is hidden, it won’t be exposed through the UI Automation (UIA) API, and screen readers won’t know it’s there. This completely breaks the experience for my customers using Narrator. So I won’t be hiding the radio buttons.

What I did do however, is make the radio buttons completely transparent. This means they’re still exposed to screen readers, and still fully interactable via the keyboard. Goodo. I could then concentrate on updating the visuals associated with the radio buttons’ labels, to reflect whether a radio button had keyboard focus and whether it was selected. (This experiment isn’t actually complete, as I didn’t get round to styling for when the mouse cursor’s hovering over the control. But hopefully that work would be similar to what I did for handling keyboard focus.)
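As a sketch, the difference between hiding the input and making it transparent comes down to something like this. (The class name matches my experiment’s CSS at the end of this post.)

```css
/* Don't: display:none (or visibility:hidden) removes the input from
   the accessibility tree, so screen readers never see it. */
/* .myradio { display: none; } */

/* Do: opacity:0 leaves the input focusable and exposed through UIA,
   while the styled label provides the visuals on top of it. */
.myradio {
  opacity: 0;
  position: absolute;
}
```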

For this experiment, I picked colors which had a strong contrast between text and background, and I chose to give a selected control a darker background than the unselected control.

The experiment left the controls reacting to keyboard input in exactly the way described at Radio Group, and my sighted users are given feedback to let them know which control is selected, and where keyboard focus is.



Figure 3: The two controls, with the control on the right visually appearing selected.



Figure 4: The two controls, with the control on the right visually appearing selected and with keyboard focus.



My next step is to point the Inspect SDK tool at the controls, and learn about their programmatic representation.



Figure 5: The Inspect SDK tool reporting the programmatic representation of my radio buttons, as exposed through the UIA API. Inspect’s Action menu is open, in preparation for programmatically changing the selection state of one of the radio buttons.


There are a number of interesting things reported about the controls.


1. The Name property of the control matches the text shown visually on the screen.

2. The ControlType property is RadioButton, and this sets my customers’ expectations around how the control behaves.

3. The BoundingRectangle property matches the visuals shown on the screen. (At least it’s very close.)

4. The control supports the UIA SelectionItem pattern and so my customers using the Narrator screen reader can learn about the current state of the control as soon as they reach it.


Another interesting point is that I could get Inspect to report all this information about the control by hovering the mouse cursor over the control. When I first tried doing this, I found Inspect’s hit-testing hit the text label instead of the control, which isn’t what I wanted. By setting the pointer-events CSS property on the label, I avoided that issue.
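In CSS that’s one declaration on the label rule; the value none here is my assumption of what was set.

```css
/* Let hit-testing and clicks pass through the label to the
   transparent radio button positioned underneath it. */
.mylabel {
  pointer-events: none;
}
```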


And having verified that the programmatic representation seems pretty good when reported by Inspect, I then tried interacting with the controls with Narrator through touch input. I prodded the first control to verify that the hit-testing was working. Having done that, I did two right-swipe gestures to move Narrator to the second control. When Narrator moved to that second control, I heard the following:


“No, radio button, double tap to select, non-selected, 2 of 2”.


This is exactly what I wanted to hear.



Figure 6: Narrator’s highlight at the second control.


One additional step I took was to make sure the current state of the control is exposed when a high contrast theme is active. When I first tried that, I found that both controls had the same visuals despite one of the controls being selected, and so that needed to be fixed. I resolved this by adding a high contrast media query to the CSS. By doing that, Edge will automatically apply the appropriate system colors for the controls’ text and background, based on the colors in the active theme. This is handy for me, as I don’t have to care about whether the theme uses black, white or any other color for control text and background. I just relax in the knowledge that Edge will respect my customer’s choice.



Figure 7: The controls showing system colors when a high contrast theme is active.


All in all, this seemed like success. I’d built UI which behaved exactly like radio buttons for all my customers, but which had custom visuals applied.


I have to say that there were a couple of things which weren’t quite as I would have preferred, but I don’t think they have a significant impact on the user experience.

1. The labels themselves are exposed as elements in the Control view of the UIA tree. This means it’s possible for Narrator to move to those text elements. This isn’t serving any useful purpose to my customers, and could be an irritation. This issue is why I said above that I did two right-swipe gestures to move Narrator to the second control. The first right swipe moved Narrator to the first control’s text label, and the second swipe moved Narrator to the second control.

2. When an element supports the UIA SelectionItem pattern, it exposes details on exactly what related containing element supports the Selection pattern. A classic example of where these patterns are used relates to lists. A list element might support the Selection pattern, and its list items support the SelectionItem pattern. In the UI I’d built, when I pointed Inspect to the second control, I’m told that the related element supporting the Selection pattern is the first control, and that really doesn’t match the meaning of my UI.


I didn’t find a way of avoiding the above issues, but like I said, they didn’t seem to significantly impact the experience.



All the snippets I’d found on the web relating to creating controls which had custom visuals yet behaved like radio buttons, might have been fine for my sighted customers who use mouse or touch input, but left the UI unusable for my other customers.

While I’ve not done an exhaustive test, my own experiment into the visual styling of standard radio buttons was encouraging to me in that it reinforced my belief that by default, standard controls should be used to create UI even when the controls’ visual representation is being customized. I’m not claiming that the work to do that is trivial, (and the results still need careful verification,) but if you really want to build UI that all your customers can use, in general it’s going to be less work to base the experience on accessible standard controls that best match the meaning of your UI.


And a useful side effect of my experiment is that there are now only 1.49 million things I don’t know about CSS.



P.S. Below is the HTML and CSS I built for my experiment. I’ve found that if I copy all the HTML and CSS into notepad, I get unexpected line feed characters which prevent it getting loaded up in Edge as expected. (Same goes for the double quotes too.) So a global replace of the spaces and double quotes is necessary if you want to load this up in Edge. I’d say it’s worth the effort, given the hours of fun you can have pointing Inspect to the radio buttons.


.rbContainer {
  float: left;
  position: relative;
  width: 80px;
  height: 20px;
  margin: 4px;
}

.myradio {
  opacity: 0;
  position: absolute;
  top: 0px;
  left: 0px;
  width: 80px;
}

.mylabel {
  position: absolute;
  top: 0px;
  left: 0px;
  width: 80px;
  text-align: center;
  border-style: solid;
  border-color: black;
  border-width: 1px;
  background: #E0E0E0;
}

.myradio:checked + label {
  background-color: #404040;
  color: white;
}

.myradio:focus + label {
  border-color: white;
  outline-style: solid;
  outline-color: black;
  outline-width: 2px;
}

@media screen and (-ms-high-contrast) {
  .myradio:checked + label {
    background-color: Highlight;
    color: HighlightText;
  }
}


<div class="rbContainer">
  <input type="radio" id="rbYes" name="rbGroup" class="myradio" value="Yes" checked="checked" />
  <label for="rbYes" class="mylabel">Yes</label>
</div>

<div class="rbContainer">
  <input type="radio" id="rbNo" name="rbGroup" class="myradio" value="No" />
  <label for="rbNo" class="mylabel">No</label>
</div>


