Smartmedical Launches Web Empath API to Identify Emotion from Speech
Japanese firm Smartmedical has launched its Web Empath API for developers, capable of analyzing multiple vocal properties, such as intonation, pitch, speed, and volume, to identify emotions in real time, regardless of language.
"Just by adding sample code to a Web site, the API enables developers to incorporate our vocal emotion recognition technology into various applications," said Takaaki Shimoji, a Smartmedical board director, in a statement. "The API will help developers innovate by creating their own customized applications. There is a lot of potential for inventive use."
Empath was first developed to assist with the delivery of mental healthcare services. In 2013, it was used in NTT DoCoMo's Tohoku reconstruction project to evaluate care workers' mental health conditions.
In addition to mental healthcare, Empath is used in robotics (with Yukai Engineering), lighting systems (with Philips), virtual reality (with Intelligence), and contact center services (with TMJ).
"In the past 10 years, the number of emotion recognition start-ups has increased, but Smartmedical is unique in having developed multiple real use cases for our signature technology. The API will expand our fields with imaginative developers all around the world," Shimoji added.
Because any program that can send .WAV files to the API can use it, the service can be integrated with multiple platforms, including Windows, iOS, and Android. The API can also serve as an emotion sensor in machine-to-machine and Internet of Things applications.
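As a rough illustration of that workflow, the sketch below builds an HTTP POST request carrying a .WAV recording. The endpoint URL and the API-key header are placeholder assumptions for illustration only; Smartmedical's actual interface, parameter names, and authentication scheme may differ.

```python
import urllib.request

# Placeholder endpoint -- NOT Smartmedical's published URL.
EMPATH_URL = "https://api.example.com/empath/v1/analyze"


def build_empath_request(wav_path: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a POST request uploading a .WAV file.

    The header names here are illustrative assumptions about how such
    an API might authenticate and receive audio data.
    """
    with open(wav_path, "rb") as f:
        audio_bytes = f.read()

    return urllib.request.Request(
        EMPATH_URL,
        data=audio_bytes,
        method="POST",
        headers={
            "Content-Type": "audio/wav",
            "X-API-Key": api_key,  # assumed auth scheme
        },
    )
```

A caller would then pass the prepared request to `urllib.request.urlopen` and parse the emotion scores from the JSON response, in whatever format the real API documents.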