Speech Technology Magazine


For Greater Experiences, Combine Voice with Other Interfaces

Combining voice with emerging technologies, such as emotion sensing, eye tracking, and gestures, creates more personable and personal experiences, Strategy Analytics concludes.
Posted Aug 22, 2017

While there is no one ideal human-machine interface (HMI) across all devices and platforms, the focus of HMI needs to be on specific use cases and context, meeting emerging user needs, enhancing usability, and requiring minimal cognitive effort, according to a new report from the User Experience Strategies (UXS) service at Strategy Analytics.

The report also suggests that combining voice with emerging interface technologies, such as emotion sensing, eye tracking, and gesture recognition, holds the key to creating more personable and personal user experiences.

Continued advances in artificial intelligence, coupled with machine learning and the ever-expanding Internet of Things, are creating more personal and personable experiences. While this is primarily driven by voice, joining voice with other emerging HMIs, such as gestures, eye tracking, and emotion sensors, can help detect what users want to do without a vocal command, the research found.

"AI is becoming more central to our everyday lives. For example, emotion sensors can provide greater context to a user's command when speaking to a digital assistant," said Christopher Dodge, associate director of the User Experience Strategies service at Strategy Analytics and author of the report, in a statement. "All of these future HMIs revolve around in-home use with scattered use cases across the phone, in-car, and wearables. Flexibility is key."

Other findings in the report, titled "UXS Technology Planning Report: Human Machine Interface: Moving Towards the Invisible Experience," include the following:

  • The ideal contexts for eye-tracking HMI span the car, the phone, and the home, all of which can cause line-of-sight interference from other objects. This HMI will have the greatest impact on devices from which users frequently read, although most concerns center on accuracy and positioning.
  • Through analyzing and assessing users' current emotional states, emotion sensors could be best used to detect tired users, especially while driving, and provide actions or recommendations.
  • Thought-controlled sensors could be best aimed at thought-to-text dictation and at controlling various elements of the smart home, including lighting. This concept holds the greatest uncertainty, however, owing to consumer privacy concerns, namely the fear that their thoughts could always be tracked, and to questions about how commands are initiated and concluded through thought.

"To allay consumer fears around false positives, ensuring a system asks before acting will make the user feel in control of the experience," concluded Chris Schreiner, director of syndicated research at Strategy Analytics. "As HMIs move from mature to future, less cognitive effort from the user for single usage scenarios is required, while more machine intelligence is needed for validation and error recovery."
