Are the Brakes About to Be Applied to Speech-Enabled AI?

Much of the media spotlight on artificial intelligence focuses on generative AI, whether text created by the likes of ChatGPT or Bard or images created by DALL-E and similar generators. But AI is already being used across many industries with significantly positive results.

In contact centers, speech-enabled AI can yield real-time insights about why customers are calling and, via sentiment analysis, how they’re feeling about those calls. Additionally, contact centers use real-time transcripts as inputs for call categorization and agent performance scoring.
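As a rough illustration of what per-utterance sentiment scoring can look like, here is a minimal Python sketch that uses the open-source VADER analyzer as a stand-in for whatever model a production platform actually ships; the sample utterances and the labeling thresholds are hypothetical.

```python
# Minimal sketch: scoring caller sentiment per transcript utterance.
# Uses the open-source VADER analyzer (pip install vaderSentiment) as a
# stand-in for a production model; utterances and labels are illustrative.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

transcript = [
    "Hi, I'm calling about a charge on my bill I don't recognize.",
    "I've already explained this twice and nobody has fixed it.",
    "Okay, thank you, that actually solves my problem.",
]

for utterance in transcript:
    # compound ranges from -1 (most negative) to +1 (most positive)
    compound = analyzer.polarity_scores(utterance)["compound"]
    label = ("negative" if compound < -0.05
             else "positive" if compound > 0.05
             else "neutral")
    print(f"{label:8s} ({compound:+.2f})  {utterance}")
```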

But speech-enabled AI is most promising in its ability to guide agents through a call in real time, providing coaching (“Speak a little slower, the caller is becoming confused”) and surfacing the right knowledge at key points, without the agent having to take the time to search, or know how best to search, for it. And then there’s the golden ticket that all organizations with contact centers have been chasing for the past three decades: self-service that customers accept and even choose over speaking with a human agent. Agent guidance and self-service are major cost reducers and, when effective, improve the customer experience.
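To make the guidance idea concrete, here is a minimal rule-based sketch of the trigger logic an agent-assist layer might run over a streaming transcript; the cue phrases, the pace threshold, and the knowledge-base entries are all hypothetical stand-ins for a production pipeline.

```python
# Minimal sketch of rule-based agent guidance over a streaming transcript.
# The cue phrases, pace threshold, and knowledge-base entries are all
# hypothetical stand-ins for a production agent-assist system.

CONFUSION_CUES = ("i don't understand", "wait, what", "can you repeat")
KNOWLEDGE_TRIGGERS = {
    "reset my password": "KB-1042: Password reset walkthrough",
    "cancel my plan": "KB-2210: Retention offers and cancellation steps",
}

def coach(utterance: str, speaker: str, words_per_second: float) -> list:
    """Return coaching and knowledge prompts for one transcribed utterance."""
    prompts = []
    text = utterance.lower()
    if speaker == "caller" and any(cue in text for cue in CONFUSION_CUES):
        prompts.append("Coach: speak a little slower; the caller sounds confused.")
    if speaker == "agent" and words_per_second > 3.5:  # hypothetical pace limit
        prompts.append("Coach: your speaking pace is above the recommended limit.")
    for trigger, article in KNOWLEDGE_TRIGGERS.items():
        if trigger in text:
            prompts.append(f"Suggested knowledge: {article}")
    return prompts

print(coach("Wait, what do you mean by proration?", "caller", 2.0))
print(coach("I want to cancel my plan today", "caller", 2.2))
```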

According to Grand View Research (whose figures most analysts closely match), the global market for artificial intelligence in contact centers is conservatively expected to reach $7.1 billion by 2030, with a compound annual growth rate of 23.1 percent from 2023 to 2030.

And as this column often reminds readers, regulations around data privacy and security will come into play, usually following the technology rather than preparing for it. The legal pitfalls surrounding copyright protection of the publicly available works on which generative AI is built are obvious and constantly mentioned in the media. Contrast that with recordings made during calls to contact centers, where every call is front-ended with a recording disclosure to cover states with two-party consent laws.

In a move that could have major implications for data privacy and corporate transparency, Google is facing a class-action lawsuit for allegedly eavesdropping on customer support calls made to a telecom company. The lawsuit, filed in California federal court, claims that Google’s Contact Center AI (CCAI) technology violates the California Invasion of Privacy Act (CIPA) by recording conversations without informed consent from all parties involved.

At the heart of the case lies the plaintiffs’ assertion that they, as customers of the unnamed telecom, believed their support calls were private exchanges between themselves and the company representative. Unbeknownst to them, they allege, Google was also listening in, using CCAI to analyze the conversations and improve its AI models.

The lawsuit hinges on the interpretation of CIPA, which prohibits recording or eavesdropping on confidential communications without the consent of all parties. The plaintiffs argue that customer support calls, especially those involving sensitive personal information, fall within the scope of the law. Google, on the other hand, is likely to argue that CCAI acts as an extension of the telecom company’s customer service operations, making the current consent obtained by the company sufficient.

This case raises several crucial questions:

  • Who owns the data? If companies record customer support calls and use CCAI to analyze them, who owns the resulting data, and how may it be used?
  • Transparency and consent: Are customers adequately informed, through the current recording consent messages, about the involvement of third-party AI technologies in their calls? The case also reinforces a question that had been floating around for several years before this lawsuit: Should explicit consent be required from all parties before recording?
  • Balancing AI advancement with privacy rights: Can AI innovation be achieved without compromising individual privacy?

This legal threat will force organizations to take preemptive actions to protect themselves from unplanned disruptions to their operations. They may require explicit consent to record calls and to share those recordings, with personally identifiable information redacted, with their AI vendors.
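As one illustration of what a redaction pass involves, here is a minimal Python sketch; the regex patterns (U.S. phone numbers, email addresses, Social Security numbers) are deliberately simplified and cover only a fraction of what a production redaction engine must catch.

```python
# Minimal sketch: redacting common PII patterns from a transcript before
# sharing it with an AI vendor. The regexes are deliberately simplified;
# production redaction needs far broader coverage (names, addresses, card
# numbers) and typically model-based entity detection on top of patterns.
import re

PII_PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII value with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Sure, my number is 415-555-0123 and my email is pat@example.com."))
```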

The outcome of this lawsuit could have far-reaching consequences for both Google and its competitors. If found liable, Google could face significant financial penalties, and all speech-enabled AI vendors could be forced to revise their data collection practices.

Higher-accuracy redactions, tokenization of data, and stricter governance of data collection all seem to be in the near future for the speech-enabled AI industry. Whether it can continue to innovate at its current blinding pace remains to be seen.
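Tokenization, in this context, means swapping sensitive values for opaque stand-ins before data leaves the organization, with a secure mapping that only authorized systems can reverse. Here is a minimal sketch of the idea; the in-memory dictionary is a hypothetical stand-in for a hardened, access-controlled token vault.

```python
# Minimal sketch of PII tokenization: sensitive values are replaced with
# opaque tokens before transcripts are shared, and the raw values stay in
# a vault the AI vendor never sees. The in-memory dicts are hypothetical
# stand-ins for a hardened, access-controlled token store.
import secrets

class TokenVault:
    def __init__(self):
        self._forward = {}   # raw value -> token
        self._reverse = {}   # token -> raw value

    def tokenize(self, value: str, kind: str) -> str:
        """Return a stable opaque token for a sensitive value."""
        if value not in self._forward:
            token = f"<{kind}:{secrets.token_hex(4)}>"
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, token: str) -> str:
        """Reverse the substitution; only authorized systems may call this."""
        return self._reverse[token]

vault = TokenVault()
token = vault.tokenize("Jane Doe", "NAME")
print(f"Account holder is {token}.")   # what the vendor receives
print(vault.detokenize(token))         # authorized reversal inside the org
```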

Kevin Brown is an enterprise architect at Miratech with over 25 years of experience designing and delivering speech-enabled solutions. He can be reached at kevin.brown@miratechgroup.com.
