Nexidia Bulks Up Latest Version of Speech Analytics Software
Nexidia is readying the release of Nexidia Interaction Analytics 11.0, an enterprise solution that processes and analyzes the big data gathered through its Neural Phonetic Speech Analytics technology.
According to Nexidia, the company invented the process of rapidly searching audio known as phonetic indexing. Nexidia's Neural Phonetic solution integrates phonetic indexing with Large Vocabulary Continuous Speech Recognition (LVCSR), which is used to build transcripts and to understand how a word or phrase relates to other words or phrases. The combination of technologies provides word-level transcription, phonetic indexing, and sentiment scoring.
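To make the idea concrete, here is a minimal sketch of phonetic indexing, not Nexidia's implementation: audio is represented as time-stamped phonemes, and a text query is converted to phonemes via a pronunciation lexicon and matched against that index, which is why even out-of-vocabulary words can be found. The lexicon entries and the toy call index below are invented for illustration.

```python
# Hypothetical pronunciation lexicon mapping words to phoneme sequences
# (ARPAbet-style symbols, invented for this example).
LEXICON = {
    "refund": ["R", "IY", "F", "AH", "N", "D"],
    "cancel": ["K", "AE", "N", "S", "AH", "L"],
}

def phonetic_search(index, query):
    """Return start times where the query's phonemes occur in the index.

    `index` is a list of (phoneme, start_time_seconds) pairs, as a
    phonetic decoder might emit for a call recording.
    """
    target = LEXICON[query]
    phones = [p for p, _ in index]
    hits = []
    for i in range(len(phones) - len(target) + 1):
        if phones[i:i + len(target)] == target:
            hits.append(index[i][1])  # time of the first matching phoneme
    return hits

# Toy "call recording" index: the caller says "refund" starting at 2.4s.
call_index = [("HH", 2.0), ("AY", 2.2),
              ("R", 2.4), ("IY", 2.5), ("F", 2.6),
              ("AH", 2.7), ("N", 2.8), ("D", 2.9)]

print(phonetic_search(call_index, "refund"))  # [2.4]
```

Because the search runs over phonemes rather than decoded words, it trades the semantic context of an LVCSR transcript for speed and robustness to words the recognizer has never seen; combining both, as the article describes, gives searchable transcripts plus phonetic recall.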
"Taking all the rocket science out of that, we changed from one way of understanding the acoustics of call interactions to another way," says Ryan Pellet, the company's chief strategy officer. "That new way is through neural networks. Neural phonetics speech analytics allows us to do that better and faster and more accurately."
Nexidia Interaction Analytics 11.0's features include:
- advanced word clouds that can display which key phrases are spoken predominantly by customers or agents;
- the ability to filter word clouds or analysis based on high or low customer sentiment scores, speaker, or phrase location within a call;
- graphical display of the related phrases and relative quantification of occurrence that can be manipulated by the user to dynamically explore the relationships of key terms;
- automatic call categorization with a top 10 view;
- sentiment analysis that allows calls to be sorted by positive or negative sentiment and to be viewed by how sentiment trends within a call;
- call transcription that can be viewed when replaying a call recording; and
- drag-and-drop structured query creation capabilities that use automatically surfaced phrases to build advanced categorization of interactions with business logic.
"We're raising word level transcription, we're producing a phonetic index which we've always done, and we have sentiment scoring. Because of the way we're processing audio, we have some machine learning algorithms that are running all the time to be able to figure out what kinds of things are related to sentiment," Pellet explains.
Sentiment analysis draws on four main categories of signals. The first category, audio, looks at call characteristics such as how fast and how loud someone is talking. The second category centers on laughter, typically a sign that a conversation is going well.
The third piece is in understanding natural words that are getting used. If someone is unhappy, there are thousands of combinations of words used to indicate discontent.
Finally, the fourth area focuses on cross-talk. If people are interrupting each other during a conversation, that typically means that things aren't going very well. However, if this is combined with laughter, it could mean the opposite.
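The four signal categories above can be sketched as a simple scoring function. This is an illustrative toy, not Nexidia's model: the weights are invented, and a production system would learn them with machine learning, as Pellet describes.

```python
def sentiment_score(speech_rate, loudness, laughter, negative_words, cross_talk):
    """Score a call segment; higher means more positive sentiment.

    speech_rate, loudness: normalized 0..1 (1 = very fast / very loud)
    laughter: count of laughter events detected
    negative_words: count of discontent phrases detected
    cross_talk: count of interruptions
    """
    score = 0.0
    score -= 0.5 * speech_rate + 0.5 * loudness  # fast, loud speech skews negative
    score += 1.0 * laughter                      # laughter skews positive
    score -= 0.8 * negative_words                # discontent vocabulary
    # Cross-talk is usually negative -- unless it co-occurs with
    # laughter, which the article notes can mean the opposite.
    score += (0.3 if laughter else -0.6) * cross_talk
    return score

# A tense call: fast, loud, no laughter, interruptions, angry phrasing.
tense = sentiment_score(0.9, 0.8, laughter=0, negative_words=3, cross_talk=2)
# A friendly call: relaxed pace, laughter, some playful interruption.
friendly = sentiment_score(0.3, 0.4, laughter=2, negative_words=0, cross_talk=1)
assert tense < friendly
```

Scoring segments rather than whole calls is what makes it possible to view how sentiment trends within a call, one of the features listed above.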
"All of those things are put into machine learning to be able to figure out sentiment, and the addition of sentiment, queue, LSCVR, phonetics, and indexing transcriptions adds a whole another specter of analysis that can be done on calls."
Nexidia Interaction Analytics 11.0 also provides advanced metrics that support performance and quality management as well as agent evaluation functionality.
"One of the things that we're injecting into the market is that a business user can also use speech analytics and interaction