Differences and stats for screen reader testing.

Originally published at a11ynews.substack.com

Online Navigation

Online navigation works quite similarly to offline navigation; only the tools differ greatly. For online navigation, we mainly have zoom, screen magnifiers, and screen readers (or SR for short). Zoom and screen magnifiers are pretty easy to wrap your head around, either because you tested them on purpose or accidentally hit Ctrl while scrolling and got jump-scared by gigantic UI elements: they make things big. Checks out.

But screen readers are more intimidating at first. You boot them up and suddenly everything starts talking! Most of us are not particularly fond of our devices issuing unexpected noises, because that's usually a bad sign.

While text-to-speech (or TTS for short) is the intended functionality of TTS screen readers (shocking, I know), it is overwhelming at first. Monotone, technical-sounding voices are simply not a joy to listen to, especially when you have to concentrate on understanding what exactly they are describing to you. On another note: Braille displays peacefully coexist with TTS screen readers and are a staple for web access!

It’s time to talk about the oh-so-dreaded screen reader testing!

Screen Reader Output

So screen readers read what's on the screen? Wrong! They read what you wrote in your code! While a software tester who uses a mouse and screen might not notice that your menu's exit button is actually a styled div, the SR will. And for its users, that button will not work.
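To make that concrete, here is a minimal sketch of the problem (the class and function names are invented for illustration): a styled div exposes no role, name, or keyboard support to assistive tech, while a native button gets all of that for free.

```html
<!-- Looks like a close button, but assistive tech only sees a
     generic container: no button role, no keyboard focus,
     no Enter/Space activation. -->
<div class="close-button" onclick="closeMenu()">X</div>

<!-- A native button is exposed with the role "button" and an
     accessible name, is focusable, and reacts to Enter/Space. -->
<button type="button" onclick="closeMenu()">Close menu</button>
```

The first rule of ARIA applies here: if a native HTML element already provides the semantics and behavior you need, use it instead of rebuilding it from generic elements.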

This is where the accessibility tree comes into play. The accessibility tree is how SR and other assistive tech users navigate through a website, climbing along heading levels and structures to find the desired information. The accessibility tree is sprouted by the browser based on the DOM (shorthand for Document Object Model) tree and accessed by platform-specific Accessibility APIs.

The DOM tree contains objects representing all the markup’s elements, attributes, and text nodes. This is precisely why following the h1, h2, h3 … heading structure is important. When you skip heading levels, you cut off the branches that assistive tech needs for a sound climbing route.
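As a quick sketch of that climbing route (the headings are invented for illustration), compare an outline that descends one level at a time with one that skips levels:

```html
<!-- Sound route: each heading descends at most one level. -->
<h1>Starter Pokemon Guide</h1>
  <h2>Choosing a Starter</h2>
    <h3>Electric Types</h3>
    <h3>Normal Types</h3>
  <h2>Training Tips</h2>

<!-- Broken route: jumping from h1 straight to h4 cuts off the
     branch, so SR users navigating by heading level can get lost. -->
<h1>Starter Pokemon Guide</h1>
  <h4>Choosing a Starter</h4>
```

Most screen readers let users jump directly between headings of a given level, so a skipped level reads like a missing rung on a ladder.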

Example

But not all screen readers are the same. Let's briefly go over the differences in screen reader output: below, two popular screen readers read the same thing in the same browser, and each outputs something different. Feel free to copy the code, download NVDA, and follow along.

Starter Pokemon Table

Testing setup

  • NVDA 2024.1.0.31547 in Chrome v126.0.6478.127 on Windows 11 Enterprise
  • JAWS 2022.2204.20.400 in Chrome v126.0.6478.127 on Windows 11 Enterprise

The HTML

<table>
<caption>Choose your Starter Pokemon?</caption>
<tr>
  <th>Person</th>
  <th>Pokemon</th>
  <th>Type</th>
</tr>
<tr>
  <th>Julia</th>
  <td>Pikachu</td>
  <td>Electro</td>
</tr>
<tr>
  <th>Laura</th>
  <td>Eevee</td>
  <td>Normal</td>
</tr>
<tr>
  <th>Audience</th>
  <td>Magikarp</td>
  <td>Water</td>
</tr>

The CSS

table {
  font-size: 1.5rem;
  margin: 50px;
  min-height: 200px;
  min-width: 500px;
}

tr:first-child > th {
  background-color: lightblue;
  border-bottom: 1px solid darkblue;
}

td {
  text-align: center;
}

NVDA will say:

“Choose your Starter Pokémon Table with 4 rows and 3 columns Choose
your Starter Pokémon caption”

and JAWS will say:

“Table with 4 rows and 3 columns Choose your Starter Pokémon”

Same table, same code, same browser, yet two different outputs. That doesn't mean one is better than the other; they both get the job done. But be aware that there are some nuances between them.
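One way to reduce such nuances (not part of the snippet tested above, just a suggested refinement) is to mark each header's direction explicitly with the scope attribute, so every screen reader knows whether a th labels a column or a row:

```html
<table>
  <caption>Choose your Starter Pokemon?</caption>
  <tr>
    <th scope="col">Person</th>
    <th scope="col">Pokemon</th>
    <th scope="col">Type</th>
  </tr>
  <tr>
    <!-- scope="row" ties this header to the cells on its right -->
    <th scope="row">Julia</th>
    <td>Pikachu</td>
    <td>Electro</td>
  </tr>
</table>
```

With scope set, screen readers can reliably announce the matching header as you move between cells, instead of guessing the association from the table's layout.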

So Which Screen Reader Setup Is Best for Testing?

Classic UX answer: it depends. Firstly, on your target demographic, but more tangibly, on the operating systems you are developing for. Every year, WebAIM releases a survey of screen reader users, asking which SR and browser combination they use.

The latest survey was conducted between December 2023 and January 2024. Go check the Screen Reader User Survey #10 Results! While you’re at it, check out the WebAIM Million report as well. The Million report audited 1 million websites to give us a benchmark of progress in web accessibility over the years and the most common WCAG failures. (It’s still low contrast btw.)

Screen Reader Testing Summary:

JAWS and NVDA came out on top with 40.5% and 37.7% respectively. VoiceOver took third place on the podium with 9.7%. As for browsers, the big three are Google Chrome, Microsoft Edge, and Mozilla Firefox.

[Chart: results for primary desktop or laptop screen reader]

[Chart: results for browser usage with primary screen reader]


Great read, I appreciate the effort in explaining screen readers so clearly. It’s interesting how NVDA and JAWS differ — do you think developers should focus on one or test across all?

Thank you! According to the WebAIM survey, the most popular combinations are JAWS with Chrome, NVDA with Chrome, JAWS with Edge, NVDA with Firefox, and VoiceOver with Safari. Ideally, devs would test all of these combinations.

For B2B applications, I would prioritize JAWS because most employers provide this one to employees.

For B2C/individual use, NVDA and VoiceOver seem most popular. But don't forget about the system-built-in ones!

NVDA is my go-to for side projects and the one I recommend to start out with. It is free to download and open source.
