What To Do When Callers Aren't Using Speech
What do you do when callers aren't using speech? Speech deployments are expensive. From the initial meetings and presale activity, to closing the deal, to implementing and tuning the deployment, scarce resources are spent and expectations for success are high. Predicted ROIs come from the estimated user adoption rate of the deployed system, and when the actual adoption rate falls below expectation, the pain is felt by everyone involved in the project.
So what do you do when user adoption rates are lower than expected? Can they be improved?
User adoption rates can be improved - with the understanding that the improvement process will take focused time and energy, and that you may not be able to reverse all of the negative impacts of a less-than-optimal speech deployment. Beginning the improvement process requires uncovering the root causes of the missed adoption rates. Mapping out how the speech solution was built and deployed, and comparing that against a proven blueprint for speech deployments, accomplishes this. There is no rabbit to be pulled from the magic hat: when done correctly, speech deployments meet or exceed their targeted adoption rates. The price paid to correct a speech implementation is often higher than the original price would have been to get it right.
While there are several proven methods for delivering successful speech recognition, those methods have broad-reaching requirements for an organization. Cross-departmental cooperation, participation, and ownership are essential, and they can be challenging to achieve within the window of opportunity. Budgets come and go, tactical business requirements become strategic mandates, and the twists and turns of the corporate structure make it difficult to execute a successful speech deployment. The challenge extends beyond the deploying customer - the vendor delivery channel also adds to the complexity of achieving the speech automation goals. Software, hardware, services, contracts - the who, what, when, where, and how - are all variables in the mix, and it can be difficult to control enough of them to achieve the targeted plan.
How applicable is this in the real world?
The eight-step process helps an organization identify the root causes of lower-than-anticipated adoption rates. One organization had originally rolled out speech with much success: initial adoption rates were solid, and customer feedback was positive. As time passed and more applications rolled out, customers became dissatisfied with their speech automation experience, and speech adoption rates dropped significantly.
What happened? Through focused sessions with the customer, several root-cause elements were identified. Leading the list were the lack of a company-wide speech strategy and the lack of a customer-centric approach. These created inconsistencies in how telephony automation was delivered: some applications were touchtone, while others were speech. Callers found it confusing to know when to speak their account number and when to key it in. Another issue involved measuring the success of the speech program. The business objectives centered on reducing call times and other internal measurements, while measurements from the customer's perspective were not considered.
The most damaging of the findings was that the voice user interface design did not meet with the approval of the majority of callers. Like it or not, every speech implementation has a persona, and once introduced, that persona takes hold. You always read about designing the best user interface, but you rarely see focused discussion of how to reverse a poorly designed voice persona that is already in the market. The persona of a speech application can affect the actual brand of your company. If you mess it up, you risk damaging the most sacred asset there is: your brand. Reversing the damage is difficult, if not impossible.
Putting out a less-than-optimal speech application can be like getting a bad sunburn - it hurts. It can have long-term negative impacts, and it can look bad to observers. Of course the sunburn is preventable, but an "I told you so" does little to lessen the pain.
In the example above, the speech project measurements for success were re-written to reflect a more customer-focused set of criteria. Automation recording processes were implemented to allow more proactive monitoring of the callers' experiences, and user interface modifications were made based on those observations. While the implementation of a corporate speech strategy was not feasible, those responsible for the speech solutions took a proactive role in driving improved user interface design for all of the telephony automation. They re-designed many of their speech applications to flow better with the non-speech applications that existed. To improve the persona, they toned down its personality and eliminated calling it by name in their applications and in their customer literature. These actions resulted in measurable improvements in user adoption rates, but the costs of the mistakes were severe: customer dissatisfaction, significant resources spent fixing the issues, a loss of executive-level confidence in speech as a viable technology, and long-term negative impacts on the corporate brand. In this case, the cost of doing speech correctly would have been much less than the cost to repair the damage.
Along with controlling the impact of the persona, the speech deployment cannot be static. The speech engine must be able to learn and adapt to spoken words and dialects. The user interface design must be flexible enough to handle changes based on initial testing and deployment. The application design and workflow must be modular enough to allow for immediate changes if testing or customer feedback dictates them. The dynamic nature of a speech deployment mandates that companies plan for change and prepare the organization up front for that change. Minimizing how those changes impact organizational resources is key to success in speech deployments.
Integrating a speech-automated caller feedback application into the overall speech solution is one innovative approach in this area. When combined with the right call center data integration, an organization can intelligently query callers on their completed automation experience. Of course the dialog and user interface design of the feedback application must be done well or all of this is for naught. When done correctly, this application provides immediate quantifiable feedback on many elements of the core automation. Keys to a successful implementation of this feedback tool include:
- A simple set of queries, allowing the customer to complete the process in less than one minute
- A distinct persona from the core automation
- Selective use of the query (you would not want to force every customer through the query every time they called)
- Solid reporting mechanisms on the results
- Timely implementation of the obvious changes needing to be made
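The "selective use" principle above can be sketched in code. The following is a minimal, hypothetical illustration - the function name, sample rate, and cooldown period are assumptions for the sake of the example, not part of any vendor's API - showing one way an IVR platform might gate which callers are offered the short feedback query.

```python
import random
import time

# Illustrative assumptions: survey roughly 10% of eligible calls, and
# never re-survey the same caller within 30 days.
SAMPLE_RATE = 0.10
COOLDOWN_SECONDS = 30 * 24 * 3600

# caller_id -> timestamp of the last survey offer (in-memory for this sketch)
_last_surveyed = {}

def should_offer_survey(caller_id, call_completed, now=None, rng=random.random):
    """Return True if this caller should hear the short feedback query."""
    now = time.time() if now is None else now
    # Only query callers who completed the automation experience.
    if not call_completed:
        return False
    # Skip callers surveyed too recently.
    last = _last_surveyed.get(caller_id)
    if last is not None and now - last < COOLDOWN_SECONDS:
        return False
    # Random sampling keeps survey volume low and avoids annoying
    # frequent callers with the query on every call.
    if rng() >= SAMPLE_RATE:
        return False
    _last_surveyed[caller_id] = now
    return True
```

A gate like this keeps the query from becoming its own source of caller dissatisfaction: frequent callers are not interrogated on every call, and incomplete sessions are never surveyed.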
Observed results of this type of tool in deployment found that automation rates rose measurably. This held true both for new applications and for existing applications where the technique was applied.
A customer-centric view of speech technology does not just apply to how the speech system is built and deployed, it also applies to how the speech solution is positioned with the end-user customers. One of the most telling examples of this was observed when an organization deployed speaker verification technology as a front-end to its existing speech deployment. The organization took great care in integrating the existing call flow with the speaker-verification call flow, and usability testing showed the combined application to be very customer friendly. The new speaker verification-enabled application rolled out to customers and user adoption plummeted. Worse yet, competitors seized the opportunity to grab the customers who were dissatisfied with their automation experience.
The end-customer base was not prepared for the introduction of the new technology. There was no integrated outbound marketing campaign in place to explain the huge benefits of speaker verification technology: security and quicker customer identification. These were differentiating benefits over the competition. In the absence of customer communication, callers became confused and turned to live operators, or worse, the competition. A customer-centric approach is essential in every step of the speech solution, from solution conception to solution delivery. In this example, the only way to bring adoption rates back up was to take down the speaker verification deployment. The business lost credibility with its customer base and with its executive sponsors.
What do you do when callers aren't using speech? Go back to the core principles of successful speech deployments. Figure out where things went wrong. Learn from those experiences, and do it better next time.
Ted is Avaya's director for strategic planning and product management for voice automation. His career focus has been on the strategic impact of technology on customer service. Ted is routinely published in industry periodicals and is a frequent conference speaker in the areas of speech recognition and voice automation.