How to Test for Accessibility
Creating accessible new code and content is a priority for ASU developers and editors. But testing existing content is equally valuable: even the most knowledgeable of us make mistakes.
One of the most effective things you can do to improve the accessibility of your websites and applications is to add a mandatory QA stage to your team's development and publication processes, so that problems are caught and fixed before going live.
At ASU, all new and redesigned web-based applications and sites should comply with the Web Content Accessibility Guidelines (WCAG) 2.1, Level AA. See the Best Practices for more on how to meet WCAG requirements.
What to test
When auditing a website or application for accessibility, you don't need to test every page or screen. Usually, you will evaluate:
- the landing page
- one representative interior page
- any page that contains an unusual feature or unique layout
- the most popular pages (based on analytics)
How often to test
Continuous monitoring by a scanning service such as Siteimprove offers the best indication of how accessible your website or application is and of when it might be time for more stringent testing.
If continuous scanning is not available, plan to test your entire website or application once a year. Perform additional tests after any redesigns or large content changes.
How to test
Evaluating websites and applications for accessibility can (and often should) involve multiple levels of testing:
1. Automatic scanning
Automated testing tools such as WAVE and Siteimprove give a rough idea of the overall accessibility of a site or application and are a valuable first step in any evaluation. However, automated tools can identify only 20-30% of accessibility issues, so they should not be relied on alone.
In addition to the two listed above, many other accessibility browser extensions and tools exist.
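To illustrate the kind of check these scanners automate, here is a minimal sketch (an illustrative example, not how WAVE or Siteimprove are actually implemented) that uses Python's standard library to flag `img` tags missing alt text, one of the most common issues automated tools catch. Real scanners run hundreds of such rules.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags that lack an alt attribute (relates to WCAG 1.1.1)."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # Decorative images may legitimately use alt="", so only flag
            # images where the attribute is missing entirely.
            if "alt" not in attr_map:
                src = attr_map.get("src", "(no src)")
                self.issues.append(f"Missing alt attribute: {src}")

checker = AltTextChecker()
checker.feed('<img src="logo.png" alt="ASU logo"><img src="chart.png">')
for issue in checker.issues:
    print(issue)  # prints: Missing alt attribute: chart.png
```

A check this simple cannot judge whether existing alt text is meaningful, which is exactly why manual review (step 2 below) is still required.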
2. Manual checklist
The most accurate accessibility evaluations use a combination of automated and manual testing. The ASU Web Accessibility Audit walks you step-by-step through a full accessibility audit and covers most of the basics.
3. User testing
Testing with assistive technologies such as screen readers is standard practice in full accessibility evaluations. User testing can identify barriers that other tests are incapable of finding, and it is particularly useful when people with disabilities are included.
The desktop screen readers used most often in testing are JAWS and NVDA. VoiceOver is also frequently used, particularly with responsive sites that will be viewed on iPhones:
- JAWS for PC: used by 46.6% of desktop screen reader users. The demo version permits 40 minutes of use; you must restart your computer for an additional 40 minutes. Used with IE, Firefox, and Chrome (in order of prevalence).
- NVDA for PC: used by 31.9% of desktop screen reader users. Free. Used with Firefox, Chrome, and IE (in order of prevalence).
- VoiceOver for Mac: used by 11.7% of desktop screen reader users. Free on Mac. Used with Safari and Chrome (in order of prevalence). Widely used in mobile iOS with Safari.
The UTO UX Research team offers user testing that includes people with disabilities. Their services help improve user experiences through the analysis of real users interacting with products.
4. Expert review
Some third-party suppliers can evaluate websites and applications for accessibility and even remediate issues. Contact the ASU Web Accessibility team for suggestions on reputable vendors.
The ASU Web Accessibility team also performs a limited number of accessibility evaluations for university projects.
Tools for developers
In addition to the tools described above, developers need tools they can rely on during day-to-day development. Some of the best include:
- The accessibility features in Chrome DevTools
- WebAIM Color Contrast Checker
- Paul J. Adam's Bookmarklets for Accessibility Testing
- W3C Nu Markup Validation Service, especially with WCAG Parsing Bookmarklet
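Color contrast checkers like WebAIM's apply the contrast formula defined in WCAG 2.1. As a sketch of what that computation involves, the following Python implements the relative-luminance and contrast-ratio formulas from the specification:

```python
def relative_luminance(rgb):
    """WCAG 2.1 relative luminance of an sRGB color given as (r, g, b) in 0-255."""
    def channel(c):
        c = c / 255
        # Per WCAG: linearize each sRGB channel before weighting.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio (lighter + 0.05) / (darker + 0.05).

    WCAG 2.1 Level AA requires at least 4.5:1 for normal text
    and 3:1 for large text.
    """
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background yields the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Because the ratio always divides the lighter luminance by the darker one, the result is the same whichever color is treated as foreground.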