Conversational Assistants and Privacy

Many conversational assistants—software that speaks and listens to humans using voice, text, and graphics—use data analysis software to extract and interpret information from the sound of your voice and the words you speak. Your voice contains information about who you are, where you live, how you look, and how you feel. You might think of your voice as just a way to convey information to others, but speaking to a conversational assistant poses many challenges and risks, including these:

  • Conversational assistants can reveal sensitive and personal information about you: your identity, location, age, gender, ethnicity, health condition, personality traits, and mood. This information can be used for beneficial or harmful purposes, depending on who accesses and uses it. (The short code sketch following this list illustrates how readily such attributes can be extracted from a recording.)
  • Conversational assistants can introduce bias and discrimination into decision-making processes. Some voice analyzers might favor certain accents or dialects over others or might misinterpret a speaker’s emotions or intentions based on voice tone or pitch, which can skew decisions made with data from these analyzers.
  • Conversational assistants might store and share voice data without your consent or knowledge and make you vulnerable to hacking or spoofing attacks. Be wary of conversational assistants that record your voice for “training and analysis purposes.”
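
To make the first of these risks concrete, here is a minimal sketch, assuming Python and the open-source librosa audio library (the filename utterance.wav is a placeholder), of how little code it takes to pull speaker-revealing features out of a short recording:

```python
import numpy as np
import librosa

# Load a few seconds of speech; the filename is a placeholder.
# 16 kHz mono is typical for voice applications.
y, sr = librosa.load("utterance.wav", sr=16000)

# Fundamental frequency (pitch): its average and range correlate with
# perceived gender, age, and emotional arousal.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
pitch = f0[voiced_flag]  # keep only frames where speech is voiced
print(f"mean pitch: {np.nanmean(pitch):.1f} Hz, "
      f"range: {np.nanmin(pitch):.1f}-{np.nanmax(pitch):.1f} Hz")

# Mel-frequency cepstral coefficients (MFCCs): a standard input to
# speaker-identification, accent, and emotion classifiers.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print("MFCC feature matrix:", mfcc.shape)  # (13, number_of_frames)
```

Features this simple already carry identifying signal: pitch statistics track perceived gender and age, and MFCCs are the raw material for speaker-identification models, which is why even “anonymous” voice recordings can often be traced back to a person.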

Trust is an essential ingredient in any scenario in which conversational assistants create natural and meaningful interactions between humans and machines. To earn this trust, the developers and organizations behind conversational assistants must protect users by upholding established rights and fostering positive social values.

Governments and government agencies in both the European Union and the United States have developed guidelines, recommendations, and laws to protect personal data, including data used by conversational assistants. Some of the most important developments are summarized in the table below.

The Open Voice Network (OVON) TrustMark Initiative (https://openvoicenetwork.org/trustmark-initiative/) is seeking to establish a set of guiding principles to ensure that conversational assistants follow an ethical path. It has outlined a vision for trustworthy conversational assistants consisting of the six pillars described in the chart below.

The Open Voice Network is also developing an online self-assessment maturity model for organizations that wish to see how their current structure and strategies line up with the TrustMark Initiative’s guiding principles. I encourage every organization to review OVON’s public training class and to post the TrustMark logo on its website.

James A. Larson is a senior scientist at Open Voice Network. He can be reached at jim42@larson-tech.com.
