Capitalize on Customer Conversations with Speech Analytics

For years, speech analytics has been used worldwide by security organizations to help government agencies identify potential risks and threats. In the past two years, contact centers have begun to use speech analytics applications to capture and structure customer communications. The applications analyze the structured data to identify customer trends and insights, with the goal of improving service quality and customer satisfaction and generating new revenue.

Do You Know What I Mean?
By Blade Kotelly, Edify

If you’ve ever used a really good speech-recognition system, you know that it’s good because it walks the line between formal and colloquial, between casual and serious. A North American system that says, “Please hold, while I verify that information” sounds stilted, whereas a cooler, friendlier “Hold on” not only suffices in a modern context but also reduces the overall interaction time of the call. If we simply take a successful North American English speech system and translate its phrases without regard for the obvious translation issues, as well as the subtler issue of where in the application each phrase is spoken, the system could sound disjointed and be ineffective.

So instead of thinking “translation,” think “localization.” The same application deployed in Miami for Spanish speakers may need to be tuned differently for Spanish speakers in Southern California or Puerto Rican Spanish speakers in New York. Knowing your audience means more than just speaking the same language on a technical level; it also means understanding how they speak when they’re talking to automation.

For instance, while English speakers might say “112 Main Street” as “one, one, two Main Street,” a Spanish speaker in the U.S. may re-order the information and say the equivalent of “the street Main, one, one, two” or “one, one, two, the street Main.” Or in the case of “112 South First Street,” the Spanish speaker may say “the street one, one, two main, first to the South.” And different Spanish populations will use very different words to express the same concept, so it’s important to identify synonyms when constructing the recognition grammars.
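The combinations described above can be thought of as alternative paths through a recognition grammar. The following sketch, which assumes no particular grammar toolkit and uses hypothetical names and English stand-ins for the Spanish words, shows how orderings and synonyms multiply the utterances a grammar must accept:

```python
from itertools import product

# Hypothetical synonym set: different Spanish-speaking populations may use
# different words for the same concept ("street" and "calle" stand in for
# the real regional variants a production grammar would list).
STREET_WORDS = ["street", "calle"]
NUMBER = "one one two"   # digits spoken one at a time
NAME = "main"

def address_variants():
    """Enumerate the utterance orderings a grammar should accept
    for the address '112 Main Street'."""
    variants = set()
    for (street,) in product(STREET_WORDS):
        variants.add(f"{NUMBER} {NAME} {street}")      # "one one two Main Street"
        variants.add(f"the {street} {NAME} {NUMBER}")  # "the street Main, one one two"
        variants.add(f"{NUMBER} the {street} {NAME}")  # "one one two, the street Main"
    return variants

for utterance in sorted(address_variants()):
    print(utterance)
```

Even this toy example yields six accepted phrasings from two synonyms and three orderings; a real multi-dialect grammar grows quickly, which is why the synonym work has to be done up front.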

There is also a difference between the way a native U.S. Spanish speaker wants to receive information and the way an immigrant Spanish speaker does. Immigrants learn many new ways to handle familiar tasks and often experience the frustration of not being understood. You’ll find that systems for an immigrant Spanish-speaking population need to provide more verbose confirmations, and may need to state explicitly that the information being given is the same information an agent would provide, in order to satisfy this type of caller.

So what’s happening? Native and non-native Spanish speakers living in the U.S. are influenced differently by the media, by what others around them are saying and by their own experiences. The tendency of a non-native speaker to mix and match the various ways they hear information presented, or to need information conveyed in different ways, is a natural phenomenon that requires designers to consider localization when designing a speech system for multilingual users.

Blade Kotelly is the chief VUI designer and director of the Edify Design Collaborative Worldwide.

There are three major analysis techniques and outputs from speech analytics:

  1. Keyword or Key Phrase Identification – the speech analytics application identifies themes hidden in customer interactions. Some applications reflect only concepts and ideas “pre-identified” by the enterprise. Other systems will report on themes requested by the enterprise and also highlight high frequency phrases and topics.
  2. Emotion Detection – the application indicates if the customer/agent was happy or unhappy during the interaction. This will give an organization some idea of the customer’s satisfaction level. It will also allow the company to analyze the impact of pleasant and unpleasant agents upon customers.
  3. Talk Analysis – the system captures the impact of talking/silence patterns, such as agents putting customers on hold or representatives “talking over” customers during conversations. The application can measure the frequency of these activities.

Speech Analytics Benefits
Today, more than 95 percent of the customer communications that flow through contact centers go to waste because enterprises do not have tools for capturing, analyzing and using this information. The opportunity costs for companies are huge – customers are not shy about sharing their thoughts on product improvements, competitors and new product ideas. Customers also frequently tell contact center representatives about things they want to buy. And customers tell companies (the ones willing to listen) when they are unhappy and about to jump ship.

Agents sometimes try to pass this information on to their immediate supervisors, but even when they do, few contact centers have formal processes for making use of customer insights at all, let alone on a timely basis. Besides, it’s one thing to relay the thoughts of one or two customers, which is all an agent can do; it’s another to have a system that collects, analyzes and identifies a broad range of trends that affect the entire enterprise.

The potential benefits of speech analytics applications go far beyond the boundaries of the contact center. Structured data is valuable for all customer-facing departments, operations and even senior executives. Anyone interacting with customers needs to know what they want and need.

Speech Analytics Applications Are Maturing
I’m not suggesting that speech analytics applications are perfect. They are, however, already good enough to make real and quantifiable contributions to corporations that invest in the new technology and commit to best practices. Speech analytics is an emerging technology and its recognition capabilities are still maturing; the accuracy of these applications improves as the size of the underlying data sets increases.

Speech Analytics Market
There are two categories of vendors selling speech analytics applications to contact centers. The first are the stand-alone vendors: CallMiner, Inc., Nexidia and Utopy. The second are quality management/liability recording vendors that have integrated speech analytics as a component of their suites. All of the vendors in the latter category are using technology developed by a third party. These vendors include: Dictaphone, Envision, etalk, Magnetic North, Mercom, NICE Systems, Verint Systems, Voice Print International, Inc. and Witness Systems.

The Future for Speech Analytics
During the next few years, speech analytics will play a critical role in opening up contact centers by structuring customer communications and sharing this enriched information with relevant decision makers throughout the enterprise. The projected payback from speech analytics is six to nine months, but the benefits are far more than the sum of the financial gains. Enterprises that implement best practices to accompany their speech analytics initiatives will realize enhanced customer satisfaction and loyalty, improved productivity and agent satisfaction, and increased sales and profitability.

Donna Fluss is the principal of DMG Consulting LLC, delivering customer-focused business strategy, operations and technology for Global 2000 and emerging companies. Fluss, a recognized leader and contact center visionary, is a highly sought-after writer and speaker. She is the author of the industry-leading annual “Quality Management/Liability Recording Product and Market Report” and “The Real-Time Contact Center,” published in August 2005. Contact Fluss at donna.fluss@dmgconsult.com.
