How we implemented scheduled accessibility testing

Posted in: Accessibility

This article was first published as part of Global Accessibility Awareness Day 2024 at the University of Bath.

In the Digital Content and Development team, digital accessibility is at the forefront of everything we do. As a University we are required to abide by the Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018, which means our content must comply with the Web Content Accessibility Guidelines (WCAG) version 2.2. We also like to go beyond the minimum criteria to make sure the site is accessible to as many people as possible. This is why accessibility is one of our most important considerations when designing and developing new features for the University of Bath website.

All our development work goes through a review process before being pushed to production. Part of these reviews involves checking for accessibility issues. While we manage to prevent many issues from going live, some still slip through the net.

As well as accessibility issues in our code, we must also consider the accessibility of our content. Most departments and teams are responsible for creating and publishing their own web pages, and not everyone is aware of all the ways to make web content accessible, which means issues do arise.

Discovery work

To help us monitor missed accessibility issues on the site, we decided we needed to implement scheduled accessibility testing. We started the process with discovery work on how we were going to do this and what we were going to test for. We looked at manual testing reports that others had published online to get an idea of what they tested for and how their results were laid out.

From our discovery work we found that we would need to conduct two different types of test: automated and manual. Automated testing would involve running through the site using automated accessibility testing tools, while manual testing would need a checklist based on the Web Content Accessibility Guidelines (WCAG), an internationally recognised set of recommendations developed by the World Wide Web Consortium (W3C) for improving web accessibility.

For automated testing we looked at which tools would be best for the job. From our research we found that the browser plugins Wave, ARC and Axe, along with the W3C Markup Validator, would give us the best coverage. Some plugins detect issues that others don't, which is why we needed a spread of tools.
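
As an aside for anyone wanting to script these checks rather than run them page by page in the browser, the sketch below shows one possible way to run the axe-core ruleset against a single URL using Playwright and the @axe-core/playwright package. This is an illustrative assumption about tooling rather than a description of our own setup; we ran the plugins interactively.

```typescript
import { chromium } from 'playwright';
import { AxeBuilder } from '@axe-core/playwright';

// Load a page in a headless browser and report any axe-core rule violations.
async function scanPage(url: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  const results = await new AxeBuilder({ page }).analyze();
  for (const violation of results.violations) {
    console.log(`${violation.id}: ${violation.help} (${violation.nodes.length} instances)`);
  }

  await browser.close();
}

scanPage('https://www.bath.ac.uk/').catch(console.error);
```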

We decided that we would pick a random sample of pages from each different content type across the site. Key pages, such as our ten most viewed, would be tested every time, and we would also test pages maintained by other teams outside our content management system, such as our jobs platform.
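
As a rough sketch of how such a sample could be put together (the page record shape, sample size and helper names here are illustrative assumptions, not our actual data model), you could always include the key pages and then add a few random pages per content type:

```typescript
// Hypothetical page record - the field names are assumptions for illustration.
interface Page {
  url: string;
  contentType: string;
}

// Fisher-Yates shuffle so every page in a content type has an equal chance of being picked.
function shuffle<T>(items: T[]): T[] {
  const copy = [...items];
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy;
}

// Key pages are always included; each content type then contributes a small random sample.
function buildSample(allPages: Page[], keyPages: Page[], perType = 3): Page[] {
  const keyUrls = new Set(keyPages.map((p) => p.url));
  const byType = new Map<string, Page[]>();
  for (const page of allPages) {
    if (keyUrls.has(page.url)) continue; // avoid double-testing key pages
    const bucket = byType.get(page.contentType) ?? [];
    bucket.push(page);
    byType.set(page.contentType, bucket);
  }

  const sample = [...keyPages];
  for (const pages of byType.values()) {
    sample.push(...shuffle(pages).slice(0, perType));
  }
  return sample;
}
```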

Producing the checklist

Once the discovery phase was complete, we moved on to putting together the plan and checklists for automated and manual accessibility testing. Using what we had found in discovery, we worked through the WCAG criteria and built our checklist from them.

For manual testing, we changed our approach: rather than testing random pages from different content types, we would instead choose a department and test a sample of its pages from each content type. This way we'd be able to spot accessibility errors, produce a report and help educate content writers on how to prevent accessibility issues in future.

We also created some guidance on how to do the tests as well as templates for our team to record their results. Feedback from different members of the team meant that these went through several iterations during the review stage of this project.

Our first round of testing

From our discovery we had decided to carry out automated testing on the last Friday of every month. We decided that I would carry out the automated testing myself, since the issues it picks up relate to front-end development. The first automated test looked at the key pages on our website and those with the most views according to Google Analytics. It proved useful, picking up issues such as HTML validation errors. However, one day did not feel long enough, so we decided to allow a longer timeframe for our next automated test.
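
For context, HTML validation is something that can also be checked programmatically. The sketch below queries the W3C Nu Html Checker's JSON interface for a single page; this is just one possible way to do it, not the process we followed, and the User-Agent string is a placeholder.

```typescript
// Ask the W3C Nu Html Checker (validator.w3.org/nu/) to validate a live page
// and print any errors from its JSON report. Requires Node 18+ for built-in fetch.
interface ValidatorMessage {
  type: string;    // 'error' or 'info'
  message: string;
  lastLine?: number;
}

async function validateHtml(url: string): Promise<void> {
  const endpoint = `https://validator.w3.org/nu/?doc=${encodeURIComponent(url)}&out=json`;
  const response = await fetch(endpoint, {
    headers: { 'User-Agent': 'accessibility-testing-sketch' }, // placeholder UA string
  });
  const report = (await response.json()) as { messages: ValidatorMessage[] };

  for (const msg of report.messages) {
    if (msg.type === 'error') {
      console.log(`Line ${msg.lastLine ?? '?'}: ${msg.message}`);
    }
  }
}

validateHtml('https://www.bath.ac.uk/').catch(console.error);
```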

A few weeks later it was time for our first round of manual testing, which we had agreed would be carried out every 13 weeks. We met before the process began to discuss how to carry out the testing and split into pairs, each consisting of a member of the Development team and a member of the Content team. Rhian Griggs, Head of Digital, had given us a list of departments to look at, and we decided who was going to look at what. We began testing on the Tuesday, all working on the same set of pages to begin with: the key pages on the University site. Working on the same pages meant we could all learn the process together and make sure we were consistent in our approach.

What we learned from testing

At the end of the manual testing, we all got together and had a retrospective meeting where we discussed what went well, what didn’t go so well and what we would improve in the future.

Carrying out the testing allowed us all to learn more about accessibility. The Content team was able to learn more about the developer side of accessibility while developers were able to learn more about accessible content. It also allowed the two teams to work even more closely together than we usually do.

Rhian Griggs, UX Designer Sam Street, and I also had meetings with our Assistive Technology Team, who helped us understand how assistive technology is used by students and staff at the University. For example, we learnt which screen readers are best for testing, which meant we could test the website more thoroughly.

It was interesting to see the code issues that were picked up during automated and manual testing. We pride ourselves on having an accessible website, but nobody is perfect, and accessibility testing allowed us to pick up on issues we might never have noticed otherwise. We have recorded all these issues and are working through them in order of priority.

It was also useful to see the common content accessibility issues that popped up, such as poor alt text, badly named calls to action and broken links. Seeing these common issues will help us understand what education is needed to help members of the University produce more accessible web content.
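
Poor alt text still needs human judgement, but some of these content issues can be spotted mechanically. As a rough illustration (the list of vague phrases is an assumption for the example, not an official checklist), a short snippet run in the browser console can flag missing alt attributes and generic link text:

```typescript
// Flag images without alt attributes and links whose visible text is a vague call to action.
const vaguePhrases = ['click here', 'read more', 'more', 'here'];

document.querySelectorAll('img').forEach((img) => {
  if (!img.hasAttribute('alt')) {
    console.warn('Image missing alt attribute:', img.src);
  }
});

document.querySelectorAll('a').forEach((link) => {
  const text = link.textContent?.trim().toLowerCase() ?? '';
  if (vaguePhrases.includes(text)) {
    console.warn('Vague call to action:', `"${text}"`, link.href);
  }
});
```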

Improvements for next time

When we were doing manual testing, we used aspects of automated testing to help with the process; for example, we ran the automated testing plugins on each page we tested. We felt we had used too many automated tools, which meant we spent a lot of time switching between them and had less time to focus on content issues. Some of the tools were aimed at front-end developers, so their error messages often left members of the team confused. As a result, we have decided to switch to the Silktide browser plugin for our next round of manual testing. This means we will only be using one tool, and we have found that Silktide presents issues in a much simpler way for people who are not front-end developers.

Automated testing will instead be carried out every quarter, but we will give ourselves a week rather than a day to complete it. As most of the pages on our site are built from templates, the same issues tend to repeat across pages, and we do not change our templates often enough to warrant monthly automated testing. When we are doing automated testing, we will continue to use a range of accessibility testing tools so that we pick up as many errors as possible.

Templates for our accessibility testing reports will be moved from our University Wiki to Microsoft Word. Our Wiki sits behind the University network, so when working from home you need a VPN to access it. We encountered issues with lost work when the VPN disconnected but the page stayed open, so people did not realise their changes had not been saved. It is also much easier to share Word documents than to export PDFs from the Wiki.

We are working on improving our manual testing template and producing guidance for accessibility testers, which will allow us to present content issues in a better way. Our next manual accessibility test will take place on 21 May, when we will use the new template and apply the lessons from what didn't go so well last time. We have learned that accessibility testing is not a small task, and we expect to learn more from each test as we adapt our processes over time.
