
With Voice Design, Why Don’t People Expect More?


It is often said that the key to happiness is low expectations. If that’s true, then there must be a lot of people who are happy with the voice interfaces they encounter.

One of the things I regularly see in usability testing is that when participants are asked their opinion of a system or how it performed, they are generally positive, even when the testers know the system is awful. Why is this so? Are they trying to please the tester? Are they so conditioned to lousy systems that they’ve simply accepted them? Do they truly not know that the systems could be better?

I think the answer is a little of all three. The first circumstance—eager-to-please participants—is more or less a given in test situations. So let’s look at the other two.

In an introductory user-experience course I taught, students were asked to perform a heuristic evaluation of a website, and one of the criteria was “it performed as expected.” I was amazed at how many people gave me a variation on “I expected it to be bad and it was; therefore, it performed as expected” and awarded full points. Granted, these were industry people looking to further their skills, so maybe they are more jaded than the general population. But you could also argue that their bias should have run the other way: they knew how the design could be better, and therefore should have expected more.

Every voice designer I know has had people who, upon learning what they do, tell them, “Oh, I hate those things,” or, “They never understand me.” This is strong evidence for the conditioned-to-lousy-systems hypothesis. When I started in this field, I genuinely thought that since we were making such strides in understanding what works and what doesn’t, the overall impressions would change. Sadly, I think that for every good system great designers put out there, there are still at least 10 lousy ones. Why the lousy ones persist is a whole other can of worms, but it is often tied to a shortage of well-trained designers and to companies that still insist on short-changing design and vetoing design recommendations for any number of reasons.

The third question above is closely related to the second, and it may give us the most insight. We recently had a project where the company knew the existing system was loaded with problems. They wanted us to test that one as a baseline, redesign the system, and then test the new one. All parties were somewhat surprised at how often people said they liked the original system and felt it could do what they needed it to do, in spite of evidence clearly showing their struggles.

When planning the test of the redesigned system, we decided we’d have each person make a final call into the old one and repeat one of the tasks. While we figured we wouldn’t learn anything else useful about the old system at that point, we suspected those final calls would yield some value. And boy were we right!

The direct comparison between the two gave us a goldmine of feedback. Many participants were able to clearly articulate the advantages of the new system over the old. And when test participants describe a system in the same words the design team uses, those words carry more weight with the client because they come from the participants. While participants had good things to say about the new system initially, after the final call into the old system, those good things turned into raves. It was clear to them that the difference was huge.

Initial reactions to the new system were better than initial reactions to the old, but having our testers use both back to back pushed ratings of the new system even higher and sent ratings of the old system in the opposite direction. The A/B comparison let them clearly experience both ends of the spectrum; they all said they wanted the new system.

This is a truly valuable testing technique that can help sell clients on change. It’s why so many companies use before-and-after pictures to sell products. Knowing that things can be better and actually experiencing the difference helps set expectations.

As designers we want to provide better solutions. And we want people to expect them, too.


Jenni McKienzie has worked as a consultant with SpeechUsability, a VUI design consulting firm, since 2013. Previously she held positions at Travelocity and Intervoice. She is also a founding board member of the Association for Voice Interaction Design.
