Tips for Reviewing Voicebot Vulnerability


Amazon’s Alexa, Apple’s Siri, Google Assistant, and other similar products are listening to more people than ever before. For proof, consider that the number of voice assistant users in the United States is forecast to exceed 162 million by 2027, up from 145 million in 2023, according to Insider Intelligence data. Insider Intelligence also found that around 46 million Millennials and 45 million Gen Zers are regularly using voice assistants today. Market Research Future expects the voice assistant market to grow to $30.7 billion by 2030, up from just $4.6 billion in 2022 and $3.5 billion in 2021.

But note that nearly 7 in 10 consumers worldwide are either somewhat or very concerned about their privacy online, according to the International Association of Privacy Professionals’ “Privacy and Consumer Trust Report 2023,” with 57 percent of consumers believing that artificial intelligence poses a major threat to their privacy. Many of these worries pertain to the increasingly ubiquitous and pervasive presence of voicebots and voice assistants used by companies, especially in customer service capacities. People are becoming more aware of and troubled by the collection, storage, and use of their data by corporate America. There’s long been apprehension among consumers that voice assistants could be constantly recording rather than only when activated, leading to fears about unauthorized access to personal and sensitive information. And unease about the protection of voice data from potential breaches or hackers persists, too.

Why User Concerns Persist

Experts agree that there are several reasons for consumers to have trepidation about voice assistants related to data security, privacy, and intrusion.

“People worry that their personally identifiable information (PII) may not be secure and that bad actors could potentially gain access to that information,” says Nikola Mrkšić, cofounder and CEO of PolyAI. “Many aren’t used to talking to sophisticated voice assistants, so much of their concern stems from not yet understanding, and thereby not trusting, the technology to protect their sensitive information.”

Neal Keene, chief technology officer of Gryphon.ai, agrees.

“Three prominent fears I hear are ‘It’s always listening,’ ‘What’s it collecting?’ and ‘Who has access to my information?’ It comes down to one word: trust,” he says.

Certainly, consumers find the notion of devices constantly listening unsettling, as it raises concerns that private conversations could be inadvertently overheard, recorded, and potentially misused by the companies responsible for the voice assistants or by unauthorized third parties gaining access.

Robert Wakefield-Carl, senior director of Innovation Architects at TTEC Digital, says fundamental human emotions often drive these apprehensions and trust issues.

“Bias. Lack of control. Fear of the unknown. Paranoia of loss. Absence of empathy. These are all feelings that consumers express when talking to AI or using virtual assistance,” he explains. “They have the feeling that they can’t control the conversation and are forced into artificial dialogue with a machine and not the human they were calling for.”

Consumers also fret about unintentional activation of voice assistants, which could lead to the recording of private conversations, raising concerns about the extent to which collected information is shared with third parties, used for targeted advertising, or potentially disclosed to other parties without consent.

“This can, in turn, lead to fears of being profiled or monitored based on personal data,” Damian Edwards, commercial manager at Omnie AI, points out. “Consumers are also unlikely to fully understand how their data is used or how to control their privacy settings effectively.”

Indeed, legal and ethical concerns surrounding voice assistant technology abound, heightened by media reports that highlight instances of voice assistants recording without permission or sharing private conversations with unintended recipients.

“Others have increased concerns because voice assistance is more personalized today. Now we’ve seen the likes of Alexa being able to determine whose voice within a family or household is speaking to it,” says Leon Gordon, CEO of Onyx Data. “And with the amount of deepfakes happening today, there’s nervousness around how the audio for voice assistance is going to be stored. Will it be used to train existing models? How secure is the data behind it? And how intrusive can these devices that effectively listen out for keywords be?”

Are These Worries Overblown?

Many experts believe the concerns surrounding voice assistant privacy and data security are well-founded due to past breaches, unintended data sharing, and companies occasionally flouting data regulations.

“Consumers’ concerns about the privacy of their voice are very real and reasonable,” says David Notowitz, president of the National Center for Audio and Video Forensics. “Many devices, such as cars, toys, and home devices, have horribly negligent security, are easy for most computer experts to hack, and can easily transmit audio recordings to foreign states. Some devices have Bluetooth and wireless capabilities so the signal is flowing through the air. A lot of the components to these parts and maybe the devices themselves are developed in China, a country that can implant, undetected, whatever they want into devices. These capabilities could be programmed and engineered on a chip [and] no one would ever know they are present.”

Edwards insists that the level of fear consumers should have about data security and safety related to voice assistants is a nuanced issue.

“There are, of course, significant benefits to the use of voice assistants, such as convenience, accessibility, and the ability to control smart home devices. However, these benefits do come with risks about eavesdropping and data breaches,” Edwards adds. “Consumers should not be overly cautious to the point of avoiding technology, but they should also not be complacent about these risks. They should maintain a healthy level of concern regarding data security and privacy and keep informed about how voice assistants work, how they collect data, and how this is used and stored.”

Routine interactions, such as asking for a weather update or getting a stock quote, should warrant no serious doubts, Keene says, “but customers should be cautious before sharing more sensitive data like health information or financial account details unless they are confident on how this information is collected, stored, and redacted. Fears of eavesdropping and unauthorized recordings are unwarranted from the larger providers of these services but must be allayed through user control of data and preferences and transparency.”

Remember that voice assistants are largely closed systems that are trained on internal information and act as conduits to other technical functions, experts agree.

“If there is some sort of data breach, it’s typically going to be because of a flaw in, for example, the company’s CRM, not the voice assistant itself,” Mrkšić maintains. “The voice assistant is an interface that can’t be used as an entry point to access that kind of sensitive data. As a data processor, the voice assistant itself is secure, but the tech ecosystem that it is a part of could potentially run the risk of leaking data or becoming compromised during a cyberattack.”

Instead of being complacent, consumers should be encouraged to take proactive steps to secure their data and devices in general, including reviewing their privacy settings and securing their home networks and accounts by using strong, unique passwords and two-factor authentication where available.

“Consumers should also remain informed regarding developments in technology and data privacy and demand transparency in terms of how companies use their data,” Edwards suggests. “The enforcement of more robust data protection laws can ensure that consumers’ rights are protected, which again creates a safer environment for the use of voice assistants.”

What Companies Can Do to Improve Voice Data Safety

Organizations can do more to ensure that users feel safe when talking to their voice assistants and protect the data gathered from being used irresponsibly. Keene advises taking the following steps:

  • Educate customers on how your voice assistant works. “Tell them what you are listening for, what you are recording, and who has access. Letting customers know that the voice assistant is listening for wake words to start a session and is not recording every conversation will help build trust,” Keene says.
  • Let users know when the voice assistant is listening. Use voice prompts, disclosures, lights, or other obvious signals to indicate the voice assistant is hearing what is being said.
  • Keep data only as long as needed. “Maintaining a well-documented, automated data retention policy and deleting data in a timely and consistent manner is necessary, and in many places, the law,” Keene adds.
  • Secure the information you store. Companies should anonymize or pseudonymize user data whenever feasible, removing or encrypting personally identifying details so the data cannot be linked to any particular individual without additional, separately stored information.
  • Be transparent. “Regularly remind users of the privacy policy and how you handle audio recordings and transcripts,” Keene continues.
  • Give users control of their data. Make preference management tools and controls for capturing consent and opt-outs available.
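Two of the steps above, automated data retention and pseudonymization, lend themselves to a brief illustration. The sketch below is a minimal example, not any vendor’s implementation; the record fields, 30-day retention window, and keyed-hash approach are all assumptions chosen for demonstration.

```python
import hashlib
import hmac
from datetime import datetime, timedelta, timezone

# Hypothetical retention window and server-side secret (assumptions for illustration).
RETENTION_DAYS = 30
PSEUDONYM_KEY = b"server-side-secret-key"

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with a keyed hash so records can be
    correlated internally but not linked to a person without the key."""
    token = hmac.new(PSEUDONYM_KEY, record["user_id"].encode(), hashlib.sha256).hexdigest()
    return {
        "user_token": token,         # stable pseudonym, not reversible without the key
        "timestamp": record["timestamp"],
        "intent": record["intent"],  # keep only what the assistant needs
        # raw audio, transcripts, and the user_id are deliberately dropped
    }

def purge_expired(records: list[dict], now: datetime) -> list[dict]:
    """Automated retention: delete anything older than the documented window."""
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["timestamp"] >= cutoff]
```

Keeping the pseudonym stable (rather than random per record) lets a company honor later deletion requests for a given user while still never storing the raw identifier alongside interaction data.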

Edwards echoes many of these best practices.

“Most importantly, companies should enhance transparency. This includes providing easy-to-understand privacy policies and regular updates about any changes to data usage practices,” he says. “Furthermore, companies should obtain explicit consent from users before collecting, using, or sharing their data, especially for purposes that are not directly related to the functionality of the voice assistant.”

Seconding the latter is Samuel Danby, head of voice at his company.

“The principle of explicit consent is crucial in respecting user autonomy. Companies should ensure that users are fully informed and have control over their data, offering clear options to opt in or out of data collection,” Danby says. “This practice, reinforced by user education on privacy settings, empowers individuals to take charge of their digital interactions.”

Additionally, companies should adopt robust security measures like encryption, secure data storage solutions, and routine security audits, implement two-factor authentication, and limit data collection to only essential information required for the voice assistant’s functionality, Edwards adds.

Wakefield-Carl says a few other steps are also needed.

“Create an AI advisory board and oversight committee that is privy to all knowledge fed to the AI and provide escalation to a human agent at any point in the conversation,” he says. “Also, ensure that [personally identifiable information] and [protected health information] data are not part of the common knowledge consumed by the AI for assisting customers and that any input from them is stored securely and not accessible to learning notebooks.”

It’s also smart for companies operating voice assistants to adhere to legal frameworks, such as the European Union’s General Data Protection Regulation (GDPR) and similar privacy regulations in several U.S. states.

“These regulations set a high standard for data protection, mandating that companies handle voice data with the utmost care. By complying with these laws, companies not only avoid potential fines but also signal to their users that they are committed to the highest levels of data security and privacy,” Danby says.

Other Best Practices to Follow

Businesses don’t have to stop there, either. They can adopt additional strategies to better protect voice assistant data and consequently enhance user privacy, experts agree. Here are some additional tips:

  • Assure callers that steps are in place to safeguard their data. “People want reassurance that their data will only be accessed for its intended use. Voice assistants should deliver this information in natural, straightforward language, not confusing legal jargon,” Mrkšić advises.
  • Undertake regular software updates to better protect user data and privacy.
  • Limit access to recordings and data only to employees who need this information. “These strict access rules can prevent unauthorized access or misuse of data,” Edwards says.
  • Regularly refresh security protocols and use technologies such as blockchain for maintaining immutable data records.
  • Improve your company’s security culture. “Implement a culture of privacy by design and build privacy and security features into core architecture, including voice assistants,” Keene recommends. “In addition, perform regular security audits and address vulnerabilities promptly and transparently.”
  • Develop a robust incident response plan to address security breaches quickly and efficiently, including a plan for notifying users on how to protect themselves from potential harm.
  • Engage with security researchers and the broader community to identify and fix vulnerabilities, “and encourage the identification and reporting of security issues,” Edwards adds.
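The access-limiting advice above is commonly implemented as a role-based check paired with an audit trail, so that every request for a recording is both gated and logged. A minimal sketch follows; the role names and record identifiers are hypothetical, not any vendor’s schema.

```python
# Role-based access check with an audit trail (illustrative only; the
# allowed roles and recording IDs are assumptions for demonstration).
ALLOWED_ROLES = {"privacy_officer", "qa_reviewer"}  # hypothetical roles

# Each entry records (role, recording_id, access_granted) for later audits.
audit_log: list[tuple[str, str, bool]] = []

def can_access_recording(employee_role: str, recording_id: str) -> bool:
    """Grant access only to roles that need it, and log every attempt,
    including denials, so audits can spot probing for unauthorized data."""
    granted = employee_role in ALLOWED_ROLES
    audit_log.append((employee_role, recording_id, granted))
    return granted
```

Logging denied attempts alongside granted ones is the point of the design: a spike in denials is often the earliest signal of misuse that a security audit can catch.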

What else needs to happen in the near- and long-term future for consumers to feel more secure and assured when using voice assistants? Plenty, experts insist.

“First, continued advancements in encryption, data protection, and voice recognition technology can make voice assistants more secure and reduce the risk of unauthorized access or data breaches,” Edwards says. “Improvements in voice biometrics can ensure that voice assistants respond only to authorized users. The development and enforcement of stricter data protection and privacy regulations globally can also drive higher standards across the industry.”

Better informing the public on how voice assistants work and the risks they carry can ease concerns going forward as well. And implementing and adhering to industrywide standards and best practices establishes a foundation for protection and fosters consumer trust.

“Educating users on privacy controls and adhering to strict privacy laws like the GDPR are key steps. The recent EU AI Act represents a significant move toward aligning voice technology providers on data processing standards, fostering a more transparent and trustworthy industry. The EU AI Act also helps bridge the gap between responsibly acting technologies and those that are not, ensuring a level playing field,” Danby states. “But the responsibility lies with both companies and regulatory bodies to maintain vigilance, suggesting that while improvements are likely, privacy concerns will persist, requiring ongoing attention.”

In addition, companies should improve their communication with customers by clearly indicating when they are interacting with a bot, assuring them of the security of their information, and providing assistance as needed.

“Just saying ‘Thank you for calling’ is not enough; instead, the verbiage needs to be something like ‘Thank you for calling TTEC Digital. I am Louis, your secure virtual agent, and I am here to help you, but you can reach a human agent at any time,’” Wakefield-Carl says.

Although no single solution will completely alleviate concerns about voice assistant security, the widespread deployment of these sophisticated automated systems in consumer-facing applications, such as phone-based customer support, will gradually increase consumers’ exposure to them and improve resolution rates. This, in turn, will help consumers feel more confident about the capability and security of voice assistants.

“The more people interact with intelligent voice assistants as time passes, the more they will trust them,” Mrkšić predicts.

Voice assistant technology will also surely evolve and grow more sophisticated as artificial intelligence advances.

“The difference between success or failure for your company in this space depends on your ability to deliver trusted experiences,” Keene says. “This voice assistant trust concern is likely to remain a significant issue. Technology evolves rapidly, but so do privacy risks. As long as voice assistants handle personal data, consumers will rightly demand stringent safeguards. However, with concerted efforts, we can move toward a future where voicebots coexist seamlessly with user privacy.” 

Erik J. Martin is a Chicago area-based freelance writer and public relations expert whose articles have been featured in AARP The Magazine, Reader’s Digest, The Costco Connection, and other publications. He often writes on topics related to real estate, business, technology, healthcare, insurance, and entertainment. He also publishes several blogs, including martinspiration.com and cineversegroup.com.
