
Traps Leading from Good Intentions to Bad Automation

Nobody ever makes a goal of providing bad customer service. Yet, judging from consumer complaints and the generally poor quality of automated customer service systems, bad service happens far more often than anyone in the contact center industry wants to admit.

Building a truly outstanding automated customer service system takes hard work and discipline. Creating a bad system merely takes hard work. There are a number of pitfalls which can derail good intentions and lead to a poor end result.

Automating Only to Save Money
Customer service automation is an important technique for controlling costs, but just as importantly, it can lead to better customer service. We've found that for a well-designed system, customers actually would rather use the self-service option than speak to an agent. This leads to a triple win: the customer gets better service, the company saves money, and the project manager gets promoted.

But when a company approaches building customer service automation solely from the perspective of saving money, the opportunity to improve service is often missed. The end result is often more expensive, since customers are less willing to use an automated system which doesn't serve their needs.

Measuring Success with the Wrong Criteria
Determining the success or failure of a project is crucial, and the measurable goals of the project will steer nearly every design and implementation decision. Often, however, the success criteria do not really reflect the business goals of the project, or reflect them in a distorted way.

For example, if the business case was built around both saving money and improving service, but only metrics related to saving money are measured, then the goal of improving customer service might as well not exist.

Not Measuring Customer Impact
Even if improved customer service isn't a formal criterion of success for the project, at the very least it needs to be measured. As the management aphorism says, "you can't manage what you can't measure."

Quantifying the impact of automation on the customer service experience will at least provide a window into what side-effects the project may be creating, and provide the data to possibly change course and head off disaster. Moving forward without this data is like trying to steer a ship through a fogbank without radar. You think you're making great progress, right up to the moment when you hit an iceberg.

Often, no attempt is made to proactively measure the impact on callers of a change to customer service automation, on the mistaken belief that it can't be done until the new system is up and running and taking live customer calls. This myth serves to keep questions of customer impact out of design discussions until the project is complete and it is too late to make changes.

Measuring Customer Impact the Wrong Way
Data might not mean what you think it means, or worse, it can be misleading. Data needs to be gathered from representative callers, the measurements have to be meaningful, and the results must be placed in the proper context.

A classic mistake is testing a new automated system by using company employees. At first blush, it appears to be an efficient and economical way to get feedback, but company employees are very different from customers. They know how the process is supposed to work, and they know the company jargon.

It is also tempting to use "free" statistics from an IVR or ACD to measure the quality of customer service, but many of these measures are at best proxies, and at worst misleading. For example, hold time is an important predictor of customer satisfaction, but it is not the only factor. A call center manager who offers agents large bonuses if hold times drop may discover unintended consequences, such as agents abruptly disconnecting customers during busy times.

Even with good data, it is hard to draw conclusions without context. To reach meaningful conclusions, you need high-quality baseline or benchmark data: either data you gathered the same way at a different time, or external benchmark data collected in a way comparable to your own.
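As a rough illustration of what that context looks like in practice, here is a small sketch comparing an automation containment rate measured after a change against baseline data gathered the same way beforehand. The figures and the simple two-proportion test are invented for the example, not drawn from any real deployment.

    import math

    def two_proportion_z(successes_a, total_a, successes_b, total_b):
        """Two-proportion z-test: is rate B different from baseline rate A?"""
        p_a = successes_a / total_a
        p_b = successes_b / total_b
        pooled = (successes_a + successes_b) / (total_a + total_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
        return (p_b - p_a) / se   # |z| above roughly 2 suggests a real difference

    # Invented figures: calls completed entirely in automation ("contained"),
    # measured the same way before and after a prompt change.
    baseline_contained, baseline_calls = 4_100, 10_000   # 41.0% containment before
    current_contained, current_calls = 4_350, 10_000     # 43.5% containment after

    z = two_proportion_z(baseline_contained, baseline_calls,
                         current_contained, current_calls)
    print(f"z = {z:.2f}")   # about 3.6 here, so the improvement is unlikely to be noise

Without the baseline row, the 43.5% figure by itself says very little; with it, the comparison at least tells you whether the change moved the needle or you are just looking at noise.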

Misaligned Incentives for Staff and Vendors
People tend to do what you reward them for doing. A simple rule of management, but one which is often overlooked. If the incentives for the staff and vendors don't match the project goals, then the goals will likely not be met.

For example, if improving customer service is a project goal (and it should be), then the project team should have incentives to meet that goal, just as they have incentives to complete the project on time and meet technical performance criteria.

Some vendors will balk at contractual obligations to meet customer service improvement goals, but others (those who are more confident in their abilities) will be willing to include this. Regardless of the details, if there's no incentive to meet all the project goals, including service quality, the goals without incentives will inevitably take second place.

Focusing on Plumbing, not VUI
A rule of thumb is that 20% of the work in building an automated customer service system will be in the voice user interface (VUI), and the remaining 80% will be systems integration, infrastructure, and other non-interface-related work. It is natural, then, to focus management efforts and planning on the 80% of the project which is plumbing.

This is a mistake, since success or failure of the project will almost certainly be determined by the work which goes into designing and implementing the VUI.

In a customer service environment, you need to persuade the caller to use the automated options, rather than go to an agent or a competitor. The ultimate success of the project will depend on whether callers actually use the automated system, not on whether it meets the technical requirements.

Underestimating the Difficulty of Good Design
Good design of a customer service application isn't always easy, and thinking that it will be simple is almost guaranteed to cause problems.

When a customer calls a toll-free phone number, he or she is asked a series of questions, each of which must be answered within a few seconds in order to continue. For many (if not most) callers, this will be the first call in a long time, so there is no chance to train them in how to use the system.

A good design must enable complete novices to immediately choose the correct option at each step, provide a way to go back in case of a mistake, and offer a way to break out to a human. It should also let frequent callers get their tasks done efficiently. In addition, the system should enhance the company's brand image, leave customers feeling they had a satisfying experience, and ideally, help persuade them to buy more products or services.
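To make those requirements concrete, here is a minimal sketch of the menu logic: one clear question per step, a way to go back after a wrong turn, and an immediate escape to a human. The menus, prompts, and key assignments are invented for illustration; a real system would of course be built on an IVR platform rather than a script.

    # Invented menu flow illustrating the goals above: one clear question per
    # step, "9" to go back after a wrong turn, and "0" to reach a human at any
    # point.
    MENUS = {
        "main": {
            "prompt": "For account balances, press 1. To report a lost card, press 2.",
            "1": "balances", "2": "lost_card",
        },
        "balances": {
            "prompt": "For checking, press 1. For savings, press 2.",
            "1": "done", "2": "done",
        },
        "lost_card": {
            "prompt": "To cancel your card and mail a replacement, press 1.",
            "1": "done",
        },
    }

    def run_menu(get_keypress):
        history = []            # lets the caller back up after a mistake
        current = "main"
        while current != "done":
            menu = MENUS[current]
            key = get_keypress(menu["prompt"] + " To go back, press 9. For an agent, press 0.")
            if key == "0":
                return "agent"                  # immediate escape to a human
            if key == "9" and history:
                current = history.pop()         # go back one step
            elif key in menu:
                history.append(current)
                current = menu[key]
            # Anything else: stay put and repeat the prompt instead of failing.
        return "completed"

    # Scripted keypresses stand in for a live caller: wrong turn, back, recover.
    script = iter(["2", "9", "1", "1"])
    print(run_menu(lambda prompt: next(script)))    # -> "completed"

The code is trivial; the hard part, and the part that determines success, is choosing wording and menu structure so that a first-time caller picks the right digit within a few seconds.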

VUI is not GUI
People with experience designing and building Graphical User Interfaces (GUIs) for web sites or desktop software often fail to appreciate the important differences between the GUI world and the VUI world:

  1. In a VUI, there are often a lot of infrequent or first-time callers. If you test a VUI by giving more than one or two tasks to the same person, that person is no longer a novice. In a GUI, people often do dozens of different tasks with the same application, and are willing to spend hours learning it.
  2. Even a partial VUI implementation is useful, and a good approach to implementing a Big Project. Partial GUI implementations are often useless: what good is a spreadsheet which adds but doesn't multiply?
  3. Every VUI usability flaw costs money, directly out-of-pocket from the company, since every problem will cause some people to go to an agent or give up entirely. GUI design flaws don't usually cost money directly, and as long as the software achieves an acceptable level of usability, there's often no benefit to fixing smaller problems.
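To see why, consider a back-of-envelope calculation. Every figure below is hypothetical, but the arithmetic shows how quickly a flaw that nudges even a small share of callers out to an agent turns into real money.

    # Back-of-envelope cost of a single VUI flaw, using invented figures.
    calls_per_month = 500_000          # hypothetical call volume
    extra_optout_rate = 0.02           # flaw pushes 2% more callers to an agent
    agent_cost_per_call = 5.00         # hypothetical fully loaded agent cost ($)
    automated_cost_per_call = 0.25     # hypothetical cost of a contained call ($)

    extra_agent_calls = calls_per_month * extra_optout_rate
    monthly_cost = extra_agent_calls * (agent_cost_per_call - automated_cost_per_call)
    print(f"{extra_agent_calls:,.0f} extra agent calls, ${monthly_cost:,.0f} per month")
    # -> 10,000 extra agent calls, $47,500 per month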

Inflexible Specifications
If I could figure out the magic way to create a flawless specification every time, Bill Gates would no longer be the richest man in the world.

Let's face it, every project specification, no matter how good, will need to be changed. This is a fact of life.

So treat your vendors as partners in the project's success. Have the flexibility to make changes as they are needed, rather than discovering--after the vendor delivers a good implementation of a flawed specification--that something wasn't right.

Assuming Small Changes Will Have Small Impact
"The prototype sounds great, but Marketing wants to make just one little change. You see, they're big on calling customers 'partners,' so we wanted to change the first prompt to say, 'If you're a BigBank partner, press one. If you want to establish a new account, press two.'"

That sound you hear is millions of BigBank customers scratching their heads as they wonder if they qualify as "partners" or not. Odds are, their next act will be to dial zero to get to an agent.

Of course, it is entirely possible that most BigBank customers are familiar with this odd bit of marketing lingo. It is also possible that most won't have a clue what the computer expects them to do. The only way to know for sure is to test both designs.

In the world of customer service, even seemingly small changes can sometimes have an enormous impact. The wrong word in a prompt can send millions of callers rushing to select the wrong option, even though it seemed perfectly reasonable to the people who suggested it.
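Testing both designs does not have to be elaborate. A side-by-side test with representative callers (not employees) and a simple tally of who chose the intended option is often enough. The sketch below is invented purely to show the shape of such a comparison, and it assumes the original prompt said "customer."

    # Invented results from a side-by-side test of the two wordings: how many
    # representative test callers chose the intended option on the first try.
    results = {
        "If you're a BigBank customer, press one": {"correct": 19, "participants": 20},
        "If you're a BigBank partner, press one":  {"correct": 11, "participants": 20},
    }

    for wording, r in results.items():
        rate = r["correct"] / r["participants"]
        print(f'{rate:4.0%} chose correctly: "{wording}"')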

Fragmentation
Fragmentation happens when a company builds its customer service automation out of many small applications, without ensuring that the overall experience is consistent and high-quality.

Callers are subjected to a maze of options and prompts, recorded using different voices, and often with no way back after landing in the wrong application. The companies are also poorly served, since they have no way to know how the overall system performs, and the data which might point out problems are scattered across multiple systems.

Building a customer service system piece-by-piece is a useful strategy. But companies need to be careful to make sure that the overall service and image are consistent with what they want their customers to experience. The whole is not merely the sum of the parts.
