
The 2017 State of the Speech Technology Industry: Assistive Technology


“The trends toward accessibility are changing,” says Matthew Janusauskas, program manager at the American Foundation for the Blind. “It used to be you had to have third parties provide these capabilities; now they are being included directly in the platforms. It used to be that you had to search for assistive technologies. There is greater awareness of the issue. And there is great allure for technologies like Siri that help those who are fully sighted, not just people who are visually impaired.”

Assistive technologies are also becoming more user-friendly today, Janusauskas adds. “There are new services and products being brought to the market all of the time. Speech input and output accuracy continues to increase; they are more forgiving of people’s accents. Speech synthesizers are becoming more human-sounding.”

Not only do the synthesizers sound less robotic, but in some instances they can also very closely mimic the user's own voice by drawing on available recordings of the speaker, as in the case of the late film critic Roger Ebert. Through CereProc's CereVoice Me voice cloning service, recordings of Ebert's television and radio shows were used to re-create a text-to-speech version of his own voice. Other companies, such as Acapela, offer the same capabilities today.

But not all assistive devices rely on large, computer-generated interfaces.

The OrCam MyEye assistive technology device communicates visual information through a small, intuitive smart camera mounted on the wearer's eyeglasses, instantly and discreetly reading printed text from virtually any surface, including newspapers, books, computer and smartphone screens, restaurant menus, supermarket product labels, and street signs. The artificial vision device also recognizes faces and identifies products, even the denominations of dollar bills, relaying what the camera sees through a tiny earpiece speaker. For faces, the wearer (with assistance) records a name with a face on an initial sighting; the device then uses facial recognition to identify that person on subsequent viewings. A stripped-down, less expensive version serves as a text reader but does not offer the facial and product recognition features. The device, first introduced in the United States in 2015, is still in a controlled beta release.

“We want to make sure that our customers are happy with it and that we fix whatever needs fixing,” says Yonatan Wexler, OrCam’s executive vice president of research and development. “We’ve just started to expand. We are working on additional functionality.”

Just after Thanksgiving 2016, Project Vive launched a crowdfunding campaign for its Voz Box, which lets users construct full sentences using small movements, such as those of a knee, wrist, or finger. The campaign was designed to raise money to provide the Voz Box to 10 people in need.

And similar assistive technologies are in development in China and elsewhere, providing even greater market potential for an already blossoming sector.


Phillip Britt is a freelance writer in the Chicago area. He can be reached at spenterprises@wowway.com.
