Curbing Speech Data Overreach
But not everyone is in lockstep with this assessment. Bianca Rose Phillips, an attorney in Australia who specializes in global digital health law and innovation and hosts The Voice of Law podcast, feels that there’s no pressing need for additional state/national government intervention.
“The voice tech community may prove their ability to self-regulate with standards that comply with present-day laws, including artificial intelligence regulations and frameworks,” Phillips says.
A Legislative Road Map
Of course, the actual nuts and bolts of crafting and passing legislation aren’t so easy when you take a closer look. “The issue of consumer privacy protection and voice assistant data is a detailed web of transparency, data accuracy, informed consent, rights of access and erasure, rights of restricted processing, data portability, and many other issues,” Stine says.
Developing effective regulation, many believe, first requires a clear and consistent definition of voice/speech data. For lawmaking clarity, Shaked says voice data should be broken down into several areas that are treated differently rather than always lumped under one umbrella. These include the following:
• Voice collected over telephone conversations for speech analytics (semi-active collection).
• Voice biometrics used in private or governmental applications for identification (active collection).
• Voice collected over voice search interactions in Google and other infotainment platforms for analytics and segmentation (passive collection).
• Speech used in social media apps and personal assistant dialogues for segmentation and customer service (passive collection).
“Questions that lawmakers need to ask about each of these categories include: Who holds the data? How much data should they share? How is the data protected? And how aware or in control is the individual of the data he or she produces?” Shaked says.
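Shaked's four categories, together with the questions lawmakers would ask of each, can be sketched as a small data model. This is purely an illustrative sketch; the class names, fields, and example values are invented here, not drawn from any actual regulatory schema:

```python
from dataclasses import dataclass
from enum import Enum

class CollectionMode(Enum):
    ACTIVE = "active"            # user knowingly provides speech (e.g., biometric enrollment)
    SEMI_ACTIVE = "semi-active"  # e.g., telephone conversations mined for speech analytics
    PASSIVE = "passive"          # e.g., voice search or assistant dialogues collected in the background

@dataclass
class VoiceDataCategory:
    name: str
    mode: CollectionMode
    holder: str           # Who holds the data?
    shared_with: list     # How much data should they share, and with whom?
    protections: list     # How is the data protected?
    user_controls: list   # How aware or in control is the individual?

# Hypothetical answers, one entry per category from the article
categories = [
    VoiceDataCategory("telephone speech analytics", CollectionMode.SEMI_ACTIVE,
                      "contact-center vendor", [], ["encryption at rest"], ["opt-out"]),
    VoiceDataCategory("voice biometrics", CollectionMode.ACTIVE,
                      "bank or government agency", [], ["template protection"], ["enrollment consent"]),
    VoiceDataCategory("voice search", CollectionMode.PASSIVE,
                      "platform operator", ["ad partners"], ["access logging"], ["history deletion"]),
    VoiceDataCategory("assistant dialogues", CollectionMode.PASSIVE,
                      "assistant provider", [], ["pseudonymization"], ["review and delete"]),
]

# Treating the categories differently starts with filtering on collection mode
passive = [c.name for c in categories if c.mode is CollectionMode.PASSIVE]
```

The point of separating the categories this way is that a single rule (say, mandatory consent prompts) that fits active biometric enrollment maps poorly onto passive background collection, where the individual may not know a recording happened at all.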
Mandel echoes those thoughts.
“Effective legislation will require companies to understand when and how to ask users for permission to collect their data, what they may do with that data, and for how long they retain it,” she says.
Conversely, attempts to protect people’s rights related to speech data always carry the danger of political and legislative overreach and of unintended negative consequences in the wake of enacted regulation.
“There’s a big twofold risk here: enacting regulation that hinders valid use while also not accurately addressing nefarious use,” cautions Thymé-Gobbel.
According to Phillips, there is also uncertainty regarding how present-day legal constructs apply to the field of voice.
“Unless future regulations would seek to clarify and simplify processes for voice tech companies, the regulations would predictably add to the cost and time involved in creating voice technologies and getting them to market,” Phillips explains.
Consider, too, that if companies are legally obligated to guard data with stricter security measures, it could be more difficult for smaller companies to absorb the associated costs, Dahl notes.
For these and other reasons, Jablokov believes it will take time to create a cohesive regulatory strategy concerning voice data.
“Until then, it requires the private sector to self-police. The private sector cares about consumer backlash if people suddenly feel uneasy about having these technologies in their homes and workplaces,” he says.
Speech-based information companies and their customers don’t have to wait for Uncle Sam or any individual state to put laws in place to regulate voice data privacy and security. Businesses can take proactive steps now, and consumers can pressure these companies to follow best practices.
Dahl recommends that companies follow these steps:
• Users should have to give their consent for recordings.
• They should be told what will be done with their voice data.
• They should have the ability to ask to have their data deleted at any time.
• They need to know how their data will be safeguarded.
• Users’ stored voice data should not identify them.
• Licenses that give companies a user’s consent to recording need to be written in clear language and should be renewed periodically.
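Dahl's checklist maps naturally onto a per-user consent record. The sketch below is a minimal, hypothetical illustration of how such a record might enforce consent, purpose disclosure, deletion on request, pseudonymous storage, and periodic renewal; none of these names come from an actual product or law:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional
import uuid

@dataclass
class VoiceConsentRecord:
    """One user's consent state for voice recording (field names are illustrative)."""
    # Stored voice data should not identify the user: key it by a random pseudonym
    pseudonym: str = field(default_factory=lambda: uuid.uuid4().hex)
    consented: bool = False                  # explicit consent required before recording
    stated_purposes: tuple = ()              # what will be done with the voice data
    safeguards_disclosed: bool = False       # user was told how the data is protected
    consent_date: Optional[date] = None
    renewal_period_days: int = 365           # plain-language license, renewed periodically

    def grant(self, purposes, safeguards_disclosed, today):
        """Record consent along with the disclosed purposes and safeguards."""
        self.consented = True
        self.stated_purposes = tuple(purposes)
        self.safeguards_disclosed = safeguards_disclosed
        self.consent_date = today

    def needs_renewal(self, today):
        """Consent lapses if missing or older than the renewal window."""
        if not self.consented or self.consent_date is None:
            return True
        return (today - self.consent_date).days > self.renewal_period_days

    def request_deletion(self):
        """Users may ask to have their data deleted at any time."""
        self.consented = False
        self.stated_purposes = ()
        self.consent_date = None

rec = VoiceConsentRecord()
rec.grant(["wake-word improvement"], safeguards_disclosed=True, today=date(2021, 1, 1))
stale = rec.needs_renewal(date(2022, 6, 1))  # past the one-year window, so renewal is due
```

A real system would of course also need audit trails and actual data deletion behind `request_deletion`, but the structure shows how each checklist item becomes a concrete, checkable field rather than a line in a privacy policy.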
“Companies should also indicate when and how speech data will be destroyed as well as when consumers will be notified about these practices,” Adams adds.
Additionally, enterprises should create internal rules and follow ethically responsible practices that can be modeled from existing laws.
“Carefully study GDPR, CCPA, and relevant state privacy legislation line for line,” Stine suggests. “Secondly, pursue a detailed study of the voice-specific issues that may touch one or more of the specific GDPR/CCPA issues. Third, work closely with a leader in voice and consumer privacy, such as the good folks at Microsoft.”
Offering and maintaining transparency with consumers is also crucial.
“Being aboveboard is key: Companies should do active and truthful education about what happens to their customers’ data, how it’s used, and why that’s important to the user’s own successful experience,” Thymé-Gobbel says.
Lastly, users need to be given clear and detailed choices.
“These choices shouldn’t be limited to merely keep versus delete your data. Consumers need to be given more control over what to do based on that data, and they need to have clear options for what is recorded and how it will be used,” adds Thymé-Gobbel. “In addition, voice assistants should provide easy access for users to the complete history of past voice interactions in their accounts: not just text results of some of them, but recordings and text interpretations of each one that triggered the mic.”
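Thymé-Gobbel's last point, a complete per-trigger history, could be modeled as a log that pairs every audio capture with its text interpretation, including misfires. This is a hypothetical sketch under that assumption; the class and field names are invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MicTriggerEvent:
    """One mic activation: the recording, its interpretation, and the outcome."""
    timestamp: datetime
    audio_ref: str      # pointer to the stored recording, so the user can replay it
    transcript: str     # the assistant's text interpretation of that recording
    action_taken: str   # what the assistant actually did in response

class InteractionHistory:
    """Per-account log in which every trigger appears, not only 'successful' ones."""
    def __init__(self):
        self._events = []

    def record(self, event):
        self._events.append(event)

    def full_history(self):
        # Complete history, including triggers that produced no action
        return list(self._events)

    def delete_all(self):
        # Supports the keep-versus-delete choice at the account level
        self._events.clear()

history = InteractionHistory()
history.record(MicTriggerEvent(datetime(2021, 5, 1, 9, 0),
                               "blob://rec-001", "play jazz", "started playback"))
history.record(MicTriggerEvent(datetime(2021, 5, 1, 9, 5),
                               "blob://rec-002", "(unintelligible)", "no action"))
```

Keeping the unintelligible trigger in the log is the crux of the recommendation: a user reviewing the history sees every time the mic opened, not just the interactions the assistant chose to surface.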
Erik J. Martin is a Chicago area-based freelance writer and public relations expert whose articles have been featured in AARP The Magazine, Reader’s Digest, The Costco Connection, and other publications. He often writes on topics related to real estate, business, technology, healthcare, insurance, and entertainment. He also publishes several blogs, including martinspiration.com and cineversegroup.com.