Failure to Test: Detestable
Despite Good Intentions …
It isn't difficult to find a bad IVR. There is an apparent abundance of confusing, poorly produced, difficult-to-use IVRs. Thousands upon thousands of them exist out there, sitting as if in ambush behind some telephone number, awaiting their unlucky callers.
The funny thing is this: not one of those systems was flippantly conceived. In other words, each and every bad IVR represents someone's sincere and best effort to create an effective voice user interface. All of these IVRs came into being after some very intelligent, methodical, well-intentioned people carefully analyzed the system's requirements and put forth what appeared to be the very best design they could devise to solve their self-service problem. Yet all of these systems ended up being bad.
Pride and Prejudice
How can this be?
There is a very human tendency to believe, particularly among the most experienced, intelligent, and competent of us, that we are capable of reasoning our way to a problem's solution. The fact is, this is often true.
On the other hand, when members of an IVR design team answer the call to "come let us reason together," they often bring with them a certain, almost hubristic prejudice. To be sure, they analyze. They construct theories and arguments. They discuss and defend their ideas against counterarguments, and they eventually make design decisions. And when the deliberative process is said and done, they are typically confident that the process has identified the essential components of the best of all possible designs. Once consensus is reached on design, the project moves onward toward implementation.
But will it work?
Again, good designs often result from the "analytical" approach. But how does one know one has a good design? Regardless of how thorough the analysis or how unanimous the consensus, how can designers be sure that their design will actually work?
Let's take a theoretical example. All the king's horses and all the king's men are summoned together to design the basic call flow of an IVR application. Everything is considered: all the necessary functionality is included; menus are suggested; and diagrams are created. Everything starts to look pretty good. Everyone involved agrees: the basic system will answer the call whereupon it will say the following (whatever) things. The user will then respond in the following (whatever) ways and consequently be efficiently moved on through the system to task completion, all the while enjoying a fabulous user experience.
It is often under such blue skies that the basic organization of the system, its call flow, gets decided.
It never hurts to ask…
What's wrong with this rosy picture? The problem is that it marginalizes the fact that call flow designs embody the designers' expectations about how the user will respond to the system. All call flows thus embody the designers' predictions about human behavior. One might think that a call flow is fairly basic and hard to "get wrong." But even the simplest call flow designs may include assumptions about user behavior that turn out to be unfounded.
There is really only one way to determine how users will navigate through a call flow: expose actual users to the call flow under production-like circumstances and quantify their responses as they move through the design. Validating a call flow is not a particularly expensive or time-consuming endeavor. Most systems can be thoroughly assessed in a matter of days. Failing to test call flow assumptions in a speech application is truly reckless. If call flow problems are not identified early on, they will greatly complicate matters during later usability testing. If there is no pre-production usability testing, and call flow problems make their way into production, they can completely subvert an application.
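What does it mean to "quantify" callers' responses as they move through a design? One simple approach is to log the sequence of menu nodes each test caller visits and then tally task-completion rates and drop-off points. The sketch below illustrates that idea; the node names, session data, and `summarize` function are hypothetical, invented purely for illustration, not part of any particular IVR platform.

```python
from collections import Counter

# Hypothetical test data: each session is the sequence of call-flow nodes a
# test caller visited, ending at "task_complete" or at "abandoned" (hang-up).
sessions = [
    ["greeting", "main_menu", "balance", "task_complete"],
    ["greeting", "main_menu", "transfer", "task_complete"],
    ["greeting", "main_menu", "main_menu", "abandoned"],  # looped, then hung up
    ["greeting", "main_menu", "balance", "task_complete"],
    ["greeting", "main_menu", "transfer", "abandoned"],
]

def summarize(sessions):
    """Return (task-completion rate, count of nodes where callers abandoned)."""
    completed = sum(s[-1] == "task_complete" for s in sessions)
    # The node just before an abandonment is where the caller gave up; a hot
    # spot here suggests the design mispredicted user behavior at that node.
    drop_offs = Counter(s[-2] for s in sessions if s[-1] == "abandoned")
    return completed / len(sessions), drop_offs

rate, drop_offs = summarize(sessions)
print(f"Task completion: {rate:.0%}")        # 60%
print(f"Drop-off points: {dict(drop_offs)}")
```

Even a tally this crude turns the designers' assumptions into testable predictions: if callers keep abandoning at a node the design assumed was self-evident, the call flow, not the caller, needs to change.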
Who is to blame when a call flow doesn't work? While there is a tendency among some designers to "blame the user," the user is never responsible. After all, users, in one sense, cannot make a mistake: they generally just do what they think they are supposed to do. If a system's designer did not accurately predict the behaviors that users actually exhibit, the failure lies not in the behavior of the user but in the designer's ability to predict that behavior.
Testing: It's not just for others anymore
To the extent that a call flow works, it has to possess some degree of accuracy in its predictions of user behavior. Since no one has a magic crystal ball or even holds any tremendous advantage over others when it comes to predicting human behavior, the only truly reasonable approach to ensure that a call flow works is to test it.
Almost every VUI designer I know will publicly recommend thorough usability testing throughout the project lifecycle. It is a pity that so many designers seem to think that they are not subject to their own advice.