Apple Adding Voice-Enabled Accessibility Features
Apple is set to launch several voice-enabled software features for cognitive, vision, hearing, and mobility accessibility.
Later this year, nonspeaking individuals will be able to type to speak during calls and conversations with Live Speech, and those at risk of losing their ability to speak can use Personal Voice to create a synthesized voice that sounds like them for connecting with family and friends. For users who are blind or have low vision, Detection Mode in Magnifier offers Point and Speak, which identifies text and reads it out loud.
"At Apple, we've always believed that the best technology is technology built for everyone," said Tim Cook, Apple's CEO, in a statement. "Today we're excited to share incredible new features that build on our long history of making technology accessible so that everyone has the opportunity to create, communicate, and do what they love."
With Live Speech on iPhone, iPad, and Mac, users can type what they want to say to have it spoken out loud during phone and FaceTime calls and in-person conversations. Users can also save commonly used phrases to chime in quickly during lively conversation with family, friends, and colleagues.
For users at risk of losing their ability to speak, such as those with ALS (amyotrophic lateral sclerosis) or other conditions that can progressively impact speaking ability, Personal Voice lets them create voices that sound like them. Users create a Personal Voice by reading along with a randomized set of text prompts to record 15 minutes of audio on iPhone or iPad. This speech accessibility feature uses on-device machine learning to keep users' information private and secure, and it integrates with Live Speech so users can speak with their Personal Voice when connecting with loved ones.
Point and Speak in Magnifier helps users with vision disabilities interact with physical objects that have text labels. For example, while using a household appliance such as a microwave, Point and Speak combines input from the Camera app, the LiDAR Scanner, and on-device machine learning to announce the text on each button as users move their fingers across the keypad. Point and Speak is built into the Magnifier app on iPhone and iPad, works with VoiceOver, and can be used with other Magnifier features, such as People Detection, Door Detection, and Image Descriptions, to help users navigate their physical environments.
Additional voice-based features being introduced include the following:
- Voice Control adds phonetic suggestions for text editing so users who type with their voice can choose the right word out of several that might sound alike, like do, due, and dew. Additionally, with Voice Control Guide, users can learn tips and tricks about using voice commands as an alternative to touch and typing across iPhone, iPad, and Mac.
- For VoiceOver users, Siri voices sound natural and expressive, even at high rates of speech feedback; users can also customize the rate at which Siri speaks to them, with options ranging from 0.8x to 2x.