
What Speech Technology Buyers Really Want: How to Meet the Needs of Enterprise Customers


Finding a manager willing to take on the expense is difficult. Traditionally, enterprise applications were sold to department managers who understood how their group functioned and what impact a new software program would have on it. Because a voice interface is horizontal, its benefits cut across those traditional boundaries, yet vendors still need someone to advocate for their solution. “We see three distinct customer audiences: (1) contact centers; (2) chief innovation officers or chief digitalization officers; and (3) usually a few developers in organizations who are searching for new cool technology,” says Avaamo’s Chakravarthy.

Using Voice for Data Analytics

While enterprise speech application development work is challenging, a few corporations have taken it on successfully, with help from third-party specialists. In business for a decade, Srijan Technologies, which has 370 employees, is a speech systems integrator with customers in 10 countries around the globe. In the spring of 2018, one of Malaysia’s largest and oldest telecom enterprises approached the consultancy. The company had 11,000 network sites supporting 2G, 3G, and 4G wireless technology and serving 14 million subscribers in the country. Each day, the management team examined company performance by pulling information from Netezza, its data warehousing solution. However, extracting that information consumed a great deal of the executives’ time; they wanted quick answers to relatively simple questions and found the dashboards cumbersome to work with.

So the telco replaced the screen-driven dashboards with a conversational voice interface. Srijan used Microsoft’s Azure Bot Service to create chatbots that answer queries about key performance indicators such as market share, customer loyalty, and sales revenue. Srijan tied together several software components: Azure Bot Service, SQL DB, Azure App Service, Azure Active Directory, and React-based mobile applications. Business stakeholders now access current operational and financial metrics, charts, and insights via their smartphones; for instance, they can ask for the month’s revenue and compare it to the previous year’s or to competitors’ financial performance. The executives now spend more time evaluating data and less time extracting it.
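To make the pattern concrete, here is a minimal sketch of a KPI chatbot built with the Bot Framework SDK for Node.js that underlies Azure Bot Service. It is not Srijan’s actual code; the fetchMonthlyRevenue() helper and the revenue figure are placeholders standing in for the real query layer (the SQL database behind Azure App Service, with Azure Active Directory handling sign-in for the React-based mobile clients).

```typescript
// Minimal sketch of a KPI chatbot on the Bot Framework SDK ("botbuilder").
// fetchMonthlyRevenue() is a hypothetical stand-in for the warehouse query layer.
import { ActivityHandler, TurnContext } from 'botbuilder';

// Hypothetical data-access helper; in practice this would hit the SQL back end.
async function fetchMonthlyRevenue(month: string): Promise<number> {
  return 42_000_000; // placeholder figure, not real data
}

export class KpiBot extends ActivityHandler {
  constructor() {
    super();
    this.onMessage(async (context: TurnContext, next) => {
      const text = (context.activity.text || '').toLowerCase();
      if (text.includes('revenue')) {
        const revenue = await fetchMonthlyRevenue('current');
        await context.sendActivity(
          `Revenue for the current month is ${revenue.toLocaleString()}.`
        );
      } else {
        await context.sendActivity(
          'I can report KPIs such as market share, customer loyalty, and sales revenue.'
        );
      }
      await next(); // continue the Bot Framework middleware chain
    });
  }
}
```

A voice front end would simply pass the transcribed utterance into the same message handler, which is what lets the executives ask for a number instead of hunting for it in a dashboard.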

Easing Agrarian Data Entry

Founded in 2016, AgVoice has six employees and is based in Atlanta. The company builds mobile voice applications for the agriculture industry. The solution is geared to individuals whose jobs keep them on the go, such as plant inspectors or veterinarians who care for large domestic animals, and it has some unique design requirements. To let users leave the device in their pocket, AgVoice needed to include robust noise cancellation. The application has a cloud back end and a mobile front end, but workers are without network connectivity for about 15% of their day, so the start-up needed to give users a way to access data in those circumstances.

These individuals typically have long checklists to complete at each site. The system enables them to keep their hands free as they enter data. The system time-stamps each entry and notes the location. Capturing such information enables food growers and distributors to gain more insight into their supply chains so they can optimize resource usage.
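The following sketch illustrates one common way a mobile client can handle this pattern: stamp each voice-entered checklist item with time and location, hold it in a local queue while the device is offline, and upload once connectivity returns. The names (uploadEntry, isOnline, CaptureQueue) are illustrative assumptions, not AgVoice’s actual design or API.

```typescript
// Illustrative offline-first capture queue: each entry is time-stamped and
// geotagged, held locally while the device has no network, and synced later.

interface ChecklistEntry {
  siteId: string;
  field: string;        // e.g., "leaf condition" or "animal temperature"
  value: string;        // transcribed from the spoken input
  recordedAt: string;   // ISO 8601 timestamp
  location: { lat: number; lon: number };
}

export class CaptureQueue {
  private pending: ChecklistEntry[] = [];

  // Stamp and enqueue an entry regardless of connectivity.
  record(siteId: string, field: string, value: string, lat: number, lon: number): void {
    this.pending.push({
      siteId,
      field,
      value,
      recordedAt: new Date().toISOString(),
      location: { lat, lon },
    });
  }

  // Push queued entries to the cloud back end while the network is available.
  async flush(
    isOnline: () => boolean,
    uploadEntry: (e: ChecklistEntry) => Promise<void>
  ): Promise<void> {
    while (this.pending.length > 0 && isOnline()) {
      await uploadEntry(this.pending[0]); // placeholder for the real sync call
      this.pending.shift();               // remove only after a successful upload
    }
  }
}
```

Because every entry carries its own timestamp and coordinates, the order and origin of the data survive even when the upload happens hours after the inspection.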

The solution does require a fair amount of customization. AgVoice offers subscription models that run from $50,000 to $200,000 per year, depending on the complexity of the system and the number of users.

Speech Ecosystem Still Growing

Speech suppliers have made great progress in improving their systems’ conversational capabilities. Now businesses would like to extend these features to enterprise business processes.

Currently, that work is vexing because the speech application development ecosystem is immature. A few leading companies have worked with third parties to develop custom applications. Longer term, more building blocks are expected to fall into place, extending the reach of such applications much further.

Paul Korzeniowski is a freelance writer who specializes in technology issues. He has been covering speech technology issues for more than two decades, is based in Sudbury, Mass., and can be reached at paulkorzen@aol.com or on Twitter @PaulKorzeniowski.

Where Speech Engines Still Need to Improve

Synonym Detection: People use different words to express the same idea. Speech systems have been working to recognize groups of words, such as “Pay my bill.” The next step is to tie together the groupings that have virtually the same meaning.
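A toy illustration of that step is mapping several phrasings onto a single intent; the phrase lists and intent names here are invented for the example, not taken from any vendor’s system.

```typescript
// Toy illustration of tying synonymous phrasings to a single intent.
const intentSynonyms: Record<string, string[]> = {
  pay_bill: ['pay my bill', 'settle my account', 'make a payment'],
  check_balance: ['check my balance', 'how much do i owe', 'current balance'],
};

function resolveIntent(utterance: string): string | undefined {
  const normalized = utterance.trim().toLowerCase();
  for (const [intent, phrases] of Object.entries(intentSynonyms)) {
    if (phrases.some((p) => normalized.includes(p))) {
      return intent; // "Pay my bill" and "Settle my account" resolve to the same intent
    }
  }
  return undefined; // fall back to a statistical model or a clarifying prompt
}
```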

Better Understanding of Tone and Nuance: Sometimes what a person literally says is not as important as how they say it. Sarcasm, for instance, couches a negative comment in mild or humorous language. Speech algorithms often misinterpret such subtle distinctions.
