Part of a Whole

My name is Nicolas Steenhout.
I speak, train, and consult about inclusion, accessibility and disability.

Listen to the A11y Rules Podcast. And become a patron on Patreon.

Automated Testing Tools

Many accessibility experts rant and rave against automated accessibility testing tools. They claim that these tools are useless. They further claim that the criteria are too complex to evaluate properly without human judgement. They are not wrong. But we should be careful not to throw the baby out with the bathwater. These tools can be beneficial as a quick indicator that something’s not right.

In other words, passing the automated test does not guarantee that the site is accessible. But failing the test generally indicates that there are issues to address on the site (although the tools don’t always get it right). Part of the problem is that assessing accessibility properly requires a keen understanding of each criterion.

For example, consider alternative text for images: the alt attribute of the img tag. We’ve been drilled to include an alt attribute for images. The reason is simple: screen readers can tell there is an image, but are unable to decode what the image looks like. So we provide some text to describe the image. It may look like this:

<img src="tree.jpg" alt="Photo of a weeping willow on the side of a pond" />

Or, if the design uses spacer images (setting aside that it’s bad practice to do so), it might look like this:

<img src="spacer.gif" alt="spacer" />

The first example is a good way to use alternative text; the second one isn’t, because for each instance of the spacer image, the screen reader will announce "spacer". This quickly becomes cumbersome.

Yet an automated testing tool will see that an alt attribute has been declared for the spacer image, and report that the page meets the guideline!
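To make the limitation concrete, here is a minimal sketch of the kind of check many automated tools perform. This is a hypothetical, deliberately simplified example (real tools parse the DOM rather than using regexes): it only verifies that the alt attribute exists, so both the meaningful willow description and the useless "spacer" text pass, while only the image with no alt attribute at all fails.

```python
import re

# Naive patterns for illustration only -- a real tool would parse the DOM.
IMG_TAG = re.compile(r"<img\b[^>]*>", re.IGNORECASE)
ALT_ATTR = re.compile(r'\balt\s*=\s*"([^"]*)"', re.IGNORECASE)

def check_alt_text(html):
    """Return a (tag, verdict) pair for every <img> found in the markup."""
    results = []
    for match in IMG_TAG.finditer(html):
        tag = match.group(0)
        alt = ALT_ATTR.search(tag)
        if alt is None:
            results.append((tag, "FAIL: missing alt attribute"))
        else:
            # The tool only checks that the attribute is present.
            # It cannot judge whether the text is actually meaningful.
            results.append((tag, "PASS"))
    return results

page = '''
<img src="tree.jpg" alt="Photo of a weeping willow on the side of a pond" />
<img src="spacer.gif" alt="spacer" />
<img src="logo.png" />
'''

for tag, verdict in check_alt_text(page):
    print(verdict, "-", tag)
```

Running this, the first two images both "PASS" even though the spacer's alt text is worthless to a screen reader user; only the third, with no alt attribute, fails. That gap between "attribute present" and "text useful" is exactly where human judgement is needed.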

So we must use a combination of knowledge, understanding of the requirements, and human judgement to provide a good assessment. We have problems when people who don’t have both knowledge and understanding rely on these testing tools. If their site passes the automated test, they think they are out of the woods. But that is not necessarily the case. As with many things, the real answer is "it depends". It depends on the level of accessibility sought. It depends on subtle differences that can often only be evaluated by a knowledgeable human.

Of course, you might suspect accessibility experts of dismissing automated tools because they want to stay in business. It would be an easy assumption to make. But in my experience, most accessibility experts are more interested in seeing accurate testing tools, which would arguably help create a more accessible web, than in maintaining some kind of perceived monopoly on knowledge.