Voice Should Be User-Friendly—to All Users
When we think of accessibility for people with disabilities, we tend to think of physical limitations such as the inability to see, hear, or use one's hands. But cognitive impairments, which affect memory, attention, or the ability to use language, present accessibility issues of their own.
Cognitive disabilities are both diverse and common; they include dyslexia, autism, ADHD, intellectual disabilities, aphasia, and dementia. Like everyone else, these users can benefit from the enormous number of services and the incredible amount of information on the Web. But many (if not most) Web sites are difficult for people with cognitive disabilities to use. This issue is becoming particularly important as organizations (including companies, nonprofits, and governments) rely more and more on the Web to connect with their customers and their citizens.
The World Wide Web Consortium has recently recognized the need for standards in this area and has formed a task force within the Web Accessibility Initiative with the goal of developing standards for Web accessibility for people with cognitive disabilities. Because of the wide range of disabilities, the task force is currently focusing its work on user research and gap analysis, documenting the kinds of users who might benefit from standards and finding relevant existing standards and best practices. Later steps will include recommendations for techniques to improve accessibility.
As speech becomes more integrated into the Web, and as voice-based systems in general become more widely used for customer service, these systems' accessibility for people with cognitive disabilities looms larger, because users are expected to produce and understand speech and language. Speech difficulties can arise from motor conditions such as Parkinson's disease or from neurological problems such as aphasia; even people who can speak may have difficulty producing speech quickly enough to meet a system's timeout settings. Users with cognitive disabilities can also have problems understanding system output that are similar to, but more severe than, those experienced by the general population: they may have difficulty remembering or focusing attention on long sets of menu items, for instance, or understanding poorly constructed or jargon-filled menu options.
Fortunately, those working on making voice systems more amenable to users with cognitive disabilities don't have to start from scratch; the long history of research in the speech community on the usability of voice interfaces has produced guidelines that are directly applicable. The Association of Voice Interaction Design (AVIxD) has put together a design guidelines wiki with many recommendations for voice user interface (VUI) design in general that are even more important for users with cognitive disabilities.
The AVIxD suggestions on reducing cognitive load include the following:
• Don't overload short-term memory.
• Put important things first—and putting them last isn't bad, either.
• Avoid combining ideas in a single question.
• Similarly, avoid long menu options.
• Chunk information for greater understanding.
All of these are particularly helpful for users with cognitive disabilities. Beyond these specific design recommendations, guidelines recommending that users be able to customize their VUI, by asking for more time to respond or for the system to slow down prompts, will also be valuable. Of course, there won't be a one-size-fits-all set of recommendations. For example, a natural language UI that lets users speak in their own words might be well suited for users with memory issues who have difficulty with menu trees, but it could prove a nightmare for users with aphasia who can say only one or two words at a time.
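To make two of the guidelines above concrete, here is a minimal sketch of how a menu-driven VUI might chunk a long option list and let a user request extra response time. This is purely illustrative: the function and constant names are hypothetical, not part of the AVIxD guidelines or any real VUI framework, and the specific chunk size and timeout values are assumptions.

```python
# Hypothetical sketch of two VUI guidelines:
# - "Chunk information": present at most CHUNK_SIZE menu options per prompt.
# - Customization: let a user ask for more time to respond.
# All names and values here are illustrative assumptions.

CHUNK_SIZE = 3           # keep menus short to avoid overloading short-term memory
DEFAULT_TIMEOUT = 5.0    # seconds to wait for a spoken response
EXTENDED_TIMEOUT = 15.0  # for users who ask for more time


def chunk_menu(options, chunk_size=CHUNK_SIZE):
    """Split a long option list into short, memorable chunks."""
    return [options[i:i + chunk_size]
            for i in range(0, len(options), chunk_size)]


def build_prompts(options, more_time=False):
    """Return (prompt_texts, timeout) for a chunked spoken menu."""
    timeout = EXTENDED_TIMEOUT if more_time else DEFAULT_TIMEOUT
    chunks = chunk_menu(options)
    prompts = []
    for i, chunk in enumerate(chunks):
        text = "Say one of: " + ", ".join(chunk)
        if i + 1 < len(chunks):  # more chunks remain
            text += ". Or say 'more' for more choices"
        prompts.append(text + ".")
    return prompts, timeout


prompts, timeout = build_prompts(
    ["billing", "appointments", "prescriptions",
     "directions", "hours", "agent"],
    more_time=True)
# Six options become two short prompts, with an extended 15-second timeout.
```

A real system would also apply the other recommendations, such as placing the most frequently needed options at the start (or end) of each chunk.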
Finally, in some cases even the best-designed system will be unable to address users' needs, and a user will need to talk to an agent. Although it's important for anyone to be able to reach a human, it's especially important for users with cognitive disabilities, who may not be able to use the automated system at all. These users don't always have the Web as an alternative: outbound notifications are increasingly common for emergency announcements, medical appointment reminders, and prescription notifications, and these do not always have a Web-based counterpart.
Anyone who is interested in this area is invited to review and comment on the task force documents. This work is at a very early stage, and it should be exciting to see how VUI design and cognitive accessibility develop together going forward.
Deborah Dahl, Ph.D., is principal at speech and language consulting firm Conversational Technologies and chair of the World Wide Web Consortium’s Multimodal Interaction Working Group. She can be reached at firstname.lastname@example.org.