What Usability Testing Can't Tell Us
Anyone who knows me (or even the name of my company) knows I have been advocating for usability testing for speech-enabled applications for more than a decade. During the past few years, I’ve been happy to see more awareness and acceptance of usability testing, both within the speech community and among clients.
This is clearly a positive development, but I have also noticed a troubling tendency: some people believe that running a usability test is the only step they need to take to guarantee a successful speech deployment. For some organizations, the prevailing view seems to be that no matter what they do wrong, usability testing has the power to fix it. This inflated view of usability testing as a universal cure-all is perhaps more dangerous than not doing usability testing at all. When the overblown expectations for usability testing inevitably go unmet, clients tend to throw the baby out with the bathwater and reject the entire design process.
So my goal here is to restate my ideas about the value of usability testing and remind us all what usability testing can and can’t tell us.
First, a definition, because the term “usability testing” is used to mean many different things: When I say usability testing, I mean a method in which we observe representative users interacting with a realistic version of an application under controlled conditions and then collect their opinions about that interaction. By limiting ourselves to just this sort of situation, we impose methodological controls that allow us to ignore certain factors during testing.
Methodological controls also place serious limitations on the conclusions we should draw from a usability test. Because we test under controlled conditions, usability testing does not take into account real-world variables, like telephony problems or background noise. Test participants are not susceptible to real-world distractions, like reading their email, crying babies, or ringing doorbells. Usability testing also does a poor job of capturing real-world urgency or other emotional states—it’s far different to pretend you need to call tech support in the usability lab than to actually need to call because your Internet connection is down at a particularly inopportune moment.
In short, usability testing does not address many vital conditions that exist when real customers interact with real speech applications. Therefore, viewing usability results as directly predictive of real-world customer behavior, application performance, or customer satisfaction is risky. Usability results give us strong hints about these aspects, but they don’t tell the whole story, especially when we’ve skimped on other aspects of the user-centered design process (more on this shortly).
What usability testing can tell us is whether people can quickly and easily figure out how to use the speech application to accomplish tasks that are important to them and to your business. When we observe people struggling in a usability test, we know it’s not because they’re distracted or because of a bad phone connection. We know exactly what tasks participants are trying to complete, so we can clearly evaluate whether they were successful. And most important, we have the opportunity to ask people what they were thinking during the interaction and how they felt about it. You can’t get this combination of data from any other kind of evaluation, and it’s absolutely invaluable.
The beauty of usability testing is that methodological controls allow us to focus on what customers can do under certain conditions and to hear their immediate reactions and opinions of this controlled interaction. The interactions we observe tell us what customers do, and the opinions assign a value to the interaction. The combination of observing interactions and soliciting opinions can reduce risk for organizations because it suggests which design issues are meaningful to customers.
Sadly, organizations often reap only a portion of the risk-reduction benefits of usability testing by squeezing one in—if there’s time—late in a project or waiting until there is a known problem before running a test. Yes, any usability data is better than none, but consider how much more valuable such data can be if it is part of a strategic plan for designing and deploying useful, usable speech applications that customers use willingly. By conducting user research up front, designing to user needs, performing usability testing to validate design decisions, and then redesigning based on usability feedback, organizations can make speech a comfortable, intuitive, and engaging mode of self-service interaction.
Susan Hura, Ph.D., is principal and founder of SpeechUsability, a VUI design consulting firm. She can be reached at email@example.com.