
Curbing Speech Data Overreach

Ann Thymé-Gobbel, director of user experience/user interface design at Loose Cannon Systems, a provider of voice-enabled mobile devices, believes there’s an urgent need to review and update regulations to explicitly call out and define voice, speech, and artificial intelligence/natural language data of every kind.

“This includes recorded voice files, text interpretations, and dialogue information gathered in every environment and context, including via phone dictation, Internet of Things devices, virtual assistants, interactive voice response devices, and dedicated devices. There needs to be a stronger focus on storing and accessing such data to enable approved actions, like performance enhancements, while hindering potential data breaches and uses of personal information not agreed to by each user,” she says.

Janice Mandel, a communications strategist, Open Voice Network ambassador, and member of the Open Voice Network Ethical Use Community, is also concerned about the lack of regulation for voice-based information companies.

“Though the components of voice technology have been around for years, voice as an industry continues to evolve along with its use cases. U.S. government regulators need assistance understanding its ramifications in order to protect citizens’ rights and support industry development,” she says.

Jon Stine, executive director of the Open Voice Network, an industry association dedicated to the development of industry standards for voice assistance, says that the good news is that nearly all U.S. companies are deeply aware of and in alignment with GDPR and CCPA.

“There are also numerous U.S. state-based privacy legislative efforts under way, many of which are a local legislative response to GDPR and CCPA,” he says.

The problem, however, is that bellwether laws like these don’t specifically define protections for or rules around speech data.

“There needs to be a specific focus on extending and refining current regulations to explicitly apply to all media and data types by calling out voice or speech,” Thymé-Gobbel says. “Laws for recording phone calls should also be reviewed and understood by anyone working with voice data. And important questions need to be answered. Do these laws apply to voice assistants? Should they? What if part of the conversation is with an assistant in your home that’s connected to a server in another state or country? What if you transfer out to speak to a person at some point? Different U.S. states have different consent laws; how does that apply, if at all?”

More Government Involvement Needed

Nava Shaked, an artificial intelligence and natural language processing expert from Holon Institute of Technology in Israel, says state and national governments need to get more involved in this issue.

“There should be regulation regarding collecting, storing, and reusing speech data, the same way we demand it for any other type of data. But speech is unique because it is also biometric and because it can be manipulated to create fake speech and can be reused destructively once in the wrong hands,” she cautions.

Igor Jablokov, CEO of AI technology provider Pryon, says one reason oversight is lagging significantly is that governments don’t want to stifle artificial intelligence innovation.

“Technology, and AI in particular, is also advancing so quickly that even people in the industry are challenged to keep up. Plus, many regulatory organizations are woefully underfunded and simply don’t have the resources to achieve their goals,” he says.

Like many others, Shaked fears that governments and users are not fully aware of the power of speech data and speech applications.

“Smart speakers are now in most homes. And COVID-19 has made it clear that speak-don’t-touch apps will be used more and more,” she says. “We need regulation desperately so that we will not wake up in five years and discover what we learned about social network influence and data used to affect state processes and world elections as well as major political and commercial decisions.”

The experts are particularly concerned about the passive collection and usage of speech data.

“Because of technology’s rapid advancement, we cannot rely on companies alone to make the ethical decisions,” Shaked says. “The challenge will be creating and passing legislation in a way that will not delay technology from happening or scare users from cooperating. This act of balance created GDPR and CCPA, so we have a foundation to start with.”

Many agree that what’s most needed is consistency in legislation and regulation of the voice industry.

“The lack of federal privacy legislation in the United States threatens to create a patchwork quilt of mismatched state legislation, which poses a potential legal minefield for enterprises,” Stine says.

Trusting companies to do the right thing in this matter could lead to disappointment. “It’s probably not realistic to think that companies will properly protect voice data on their own. The only motivation they might have to do that would be getting bad publicity if they’re caught, which isn’t a particularly strong motivation,” Dahl says. “That’s why we need government regulations.”
