
Affectiva Releases Emotion-Based API

Affectiva on Tuesday launched its cloud-based API for measuring emotion in recorded speech to beta users.

When the company started developing emotion recognition technology 15 years ago, it began delivering software development kits for users to integrate into their apps, says Abdo Mahmoud, product manager for speech at Affectiva. "Since then, there has been a lot of interest in a lot of vertical markets to add additional capabilities in addition to the speech signals."

The new cloud-based API was developed using an existing deep learning-based framework together with expert data collection and labeling methodologies. That, coupled with the company's existing emotion recognition technology for analyzing facial expressions, makes Affectiva the first AI company to measure emotions across both face and speech, according to Mahmoud.

"There was a lot of interest in people being able to understand emotions from speech alone when [facial clues] aren't available, such as when a person is on a phone call," Mahmoud adds. "This is an automated system for understanding emptions from speech. Analyses are delivered in near real time."

Because analyses arrive that quickly, a call can be automatically escalated to the next level of customer care when a person becomes frustrated with an automated voice response system or a human agent, Mahmoud says.
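On the integrator's side, that escalation decision could be as simple as thresholding the scores a service like this returns. A sketch assuming the hypothetical response format above, with the threshold chosen purely for illustration:

    ANGER_THRESHOLD = 0.7  # illustrative tuning value, not from Affectiva

    def should_escalate(segments: list[dict]) -> bool:
        """Escalate when the last few analyzed segments show sustained anger."""
        recent = segments[-3:]
        return bool(recent) and all(
            seg.get("anger", 0.0) >= ANGER_THRESHOLD for seg in recent
        )

    # Example scores as they might arrive from a speech-emotion service.
    segments = [
        {"start": 0.0, "end": 2.5, "anger": 0.81},
        {"start": 2.5, "end": 5.0, "anger": 0.76},
        {"start": 5.0, "end": 7.5, "anger": 0.92},
    ]
    if should_escalate(segments):
        print("Routing call to the next tier of customer care")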

Affectiva's API observes changes in speech paralinguistics (tone, volume, speed, and voice quality) to distinguish anger, laughter, and arousal, as well as the speaker's gender, in conversations.
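The article doesn't reveal Affectiva's feature pipeline, but the paralinguistic cues it names correspond to standard acoustic measures. A rough sketch using the open-source librosa library, where the mapping from measures to cues is an assumption for illustration:

    import librosa
    import numpy as np

    def paralinguistic_features(path: str) -> dict:
        """Crude acoustic proxies for tone, volume, speed, and voice quality."""
        y, sr = librosa.load(path, sr=16000)

        # Tone: fundamental frequency (pitch) tracked with pYIN.
        f0, _, _ = librosa.pyin(
            y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
        )

        # Volume: root-mean-square energy per frame.
        rms = librosa.feature.rms(y=y)[0]

        # Speed: non-silent bursts per second as a speaking-rate proxy.
        bursts = librosa.effects.split(y, top_db=30)

        return {
            "mean_pitch_hz": float(np.nanmean(f0)),
            "pitch_variability": float(np.nanstd(f0)),  # voice-quality proxy
            "mean_volume": float(rms.mean()),
            "bursts_per_second": len(bursts) / (len(y) / sr),
        }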

"More often than not, humans' interactions with technology are transactional and rigid," said Rana el Kaliouby, Affectiva co-founder and CEO, in a statement. "Conversational interfaces like chatbots, social robots, or virtual assistants could be so much more effective if they were able to sense a user's frustration or confusion and then alter how they interact with that person. By learning to distinguish emotions in facial expressions, and now speech, technology will become more relatable, and eventually, more human.

"Action could be taken to more quickly appease a disgruntled customer after he or she expresses anger on the phone, or a vehicle's navigation system could discover that the driver is experiencing a burst of road rage and react accordingly, just to name a few examples," el Kaliouby added. "Ultimately, the ways in which socially and emotionally aware technology will enrich our lives is endless. Affectiva's new API puts us that much closer."

Through Affectiva's beta program, speech classifiers will be continuously developed and improved so that emotions in speech can be identified in real time in conversations. The goal is a multimodal emotion AI platform that can distinguish emotions across multiple communication channels. Expanding Affectiva's emotion recognition technology to speech, in addition to facial expression, opens Emotion AI to a variety of new use cases and markets.

"We are working with many partners to develop it and enhance it," Mahmoud says. "Our goal is to come out of beta early next year. This year we are working with partners who can improve and integrate it. We are exploring various use cases, not only in the call center, but also in media and in advertising."

