The 2014 Speech Luminaries
poured on smart watches and glasses, but neither takes us away from the visual/tactile-centric interface that we have been using since the introduction of the touch smartphone UI about six years ago," he said in a statement.
Bouzid calls voice the most natural, least demanding interface. He maintains that as we become increasingly dependent on mobile devices, voice will only become more important.
And he's not wasting any time. Xowi officially launched in June 2013, and a month later it already had a prototype of the Smart Voice Badge, a small wearable device that lets users make and receive phone calls, access weather and traffic information, send and receive text messages, hear Facebook and Twitter updates, bid on items on eBay, listen to RSS feeds, play music, dictate notes, and more, just by speaking and listening.
The Voice Badge pairs with the user's Android or iOS smartphone via a companion app and Bluetooth connection. A Microsoft Windows Phone version is due for release later this year. The badge is also compatible with any voice assistant already on the market.
While the device launched with support for many functions, Xowi also offers an open application programming interface that will enable developers to add functionality and content.
Prior to founding Xowi, Bouzid worked in product development at Genesys and Angel (acquired by Genesys in early 2013) and as an engineer at Tell-Eureka and Unisys. Today, he serves as senior director of business solutions at Genesys and has coauthored Don't Make Me Tap: A Common Sense Approach to Voice Usability, released in 2013.
Though still in the early stages of development, the Xowi product is already being met with anticipation.
Bill Scholz, president of the Applied Voice Input/Output Society, calls the Xowi badge "a well-thought-out and highly useful contribution to the wearable technology industry.
"Wearable computing is emerging as a significant new trend in our industry, and Xowi is superbly positioned to take full advantage of the growing enthusiasm in this area," he adds.
The Standard Bearer
Deborah Dahl, Principal, Conversational Technologies
Emotion Markup Language (EmotionML) became the industry standard for representing emotions within speech applications in late May, and Deborah Dahl is behind much of the effort to get it drafted and approved through the World Wide Web Consortium (W3C). Dahl chairs the W3C's Multimodal Interaction Working Group, the committee responsible for the standard, which can be applied to any number of speech applications that gather or convey emotions. Dahl's committee published the first working draft of the standard in 2009.
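To give a sense of what the markup looks like, here is a minimal EmotionML annotation sketched from the examples in the W3C specification; the category vocabulary referenced ("big6") is one of several emotion vocabularies the standard defines, and the confidence value is illustrative:

```xml
<emotionml version="1.0"
           xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <!-- Annotate a stretch of speech as anger, with moderate confidence -->
  <emotion>
    <category name="anger" confidence="0.7"/>
  </emotion>
</emotionml>
```

A speech analytics system emitting this markup could pass it to any EmotionML-aware consumer, since both sides agree on the vocabulary named in the category-set attribute.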
EmotionML's main goal is to set the coding framework for computer systems to represent and process emotion data and to