Ethics and Algorithms—Exploring the Implications of AI
The Ethics of Voice and Sentiment Analysis
Thanks to advances in voice analytics, medical professionals armed with the right algorithm can determine whether a person is suffering from depression or even dementia simply by analyzing speech patterns. In many ways, this is an exciting development, one that could allow people who lack access to medical or psychiatric care to be properly diagnosed.
But what can a tool like this do in the wrong hands? Employers could, theoretically, use voice analytics to weed out candidates who may suffer from a disease. “A company could determine that the person they are interviewing is likely to suffer from a debilitating disease, like cerebral palsy, and not hire them,” says Brian Garr, senior creative technologist for Virgin Voyages.
In October 2018, VoiceSense announced an AI-driven solution that it says can streamline recruitment processes by automatically screening applicant interviews and objectively identifying top candidates. VoiceSense says its technology can assess more than 200 prosodic speech parameters and build an AI-driven personality profile of an individual’s working characteristics, including temperament, ambition, cooperation, communication, dependability, and creativity, among others. At the time of the announcement, Yoav Degani, CEO of VoiceSense, told Speech Technology that whether job applicants know they are being recorded and what the recordings are being used for “depends on the privacy regulations in each country as well as the privacy policies of the recruiting firms and HR departments using our technology. In some countries, notifying candidates that the calls are recorded is sufficient. In other countries, especially those in Europe, candidates can request that all HR data is shared with them.”
Sentiment analysis solutions are quickly becoming fixtures in call centers. Companies can better read an individual’s mood and outlook, which could help them respond more empathetically to an angry customer reaching out to the call center. But here, again, some potentially unethical ripple effects may arise. “Vendors may use their understanding about a person’s emotions to prey on consumers,” says Deborah Dahl, principal at Conversational Technologies and co-chair of the SpeechTEK Conference. For instance, they may use scare tactics to convince older consumers to buy services that they do not need. Or a cable company may choose not to offer a customer a free upgrade if the sentiment analysis finds that the person isn’t serious about canceling his subscription.
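To make the cable-company scenario concrete, here is a minimal sketch of how a call center might score a transcribed utterance and gate a retention offer on it. Real systems use trained acoustic and text models; the tiny keyword lexicon, function names, and threshold below are assumptions for illustration only.

```python
# Toy sentiment scorer for a transcribed caller utterance. Production
# systems use trained models; this lexicon approach is illustrative.
NEGATIVE = {"angry", "cancel", "terrible", "frustrated", "refund"}
POSITIVE = {"thanks", "great", "happy", "resolved", "helpful"}

def score_sentiment(transcript: str) -> float:
    """Return a score in [-1, 1]; negative means an unhappy caller."""
    words = [w.strip(".,!?") for w in transcript.lower().split()]
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    total = neg + pos
    return 0.0 if total == 0 else (pos - neg) / total

def should_offer_retention_deal(transcript: str, threshold: float = -0.5) -> bool:
    """The ethically fraught decision from the article: only callers who
    sound serious about canceling get the free upgrade."""
    return score_sentiment(transcript) <= threshold
```

The ethical problem is visible right in the code: two customers asking for the same thing receive different offers based solely on how upset the algorithm thinks they sound.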
Accuracy is also an issue. “An AI model will never deliver 100% accurate results,” notes Ronald Schmelzer, managing partner at Cognilytica, an AI market research firm. “It builds a probabilistic, not a deterministic, model, so there is always a margin of error included in the results.” That may be a trade-off companies are willing to accept when it comes to marketing offers, but the already questionable ethics of using the technology in a healthcare or hiring setting become even higher stakes when you realize the algorithm may be wrong.
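Schmelzer’s distinction between probabilistic and deterministic models can be shown in a few lines. The labels, numbers, and threshold below are hypothetical; the pattern itself, abstaining and routing to a human when confidence is too low for a high-stakes decision, is a common safeguard.

```python
# A classifier emits probabilities, not certainties. One safeguard in
# high-stakes settings is to abstain below a confidence threshold and
# defer the decision to a human reviewer.
def classify(probabilities: dict, high_stakes: bool = False,
             threshold: float = 0.9) -> str:
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if high_stakes and confidence < threshold:
        return "defer_to_human"  # don't let the algorithm decide alone
    return label

# A marketing offer can tolerate a 70%-confident guess...
print(classify({"will_churn": 0.7, "will_stay": 0.3}))  # prints "will_churn"
# ...but the same 70% confidence in a health or hiring screen should not
# decide on its own.
print(classify({"at_risk": 0.7, "not_at_risk": 0.3}, high_stakes=True))
```

The margin of error never disappears; the design question is who bears its cost when the model is wrong.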
Alexa in the Courtroom
By now, most users of virtual assistants like Amazon Echo and Google Home realize that the devices are always listening—and recording. They “listen” to conversations like trained dogs, eagerly sitting by their owner’s side, ready to jump into action when needed. A small piece of code on the device waits for trigger words, like “Alexa,” and then turns the system on. Complex machine learning algorithms sleep in the background; once a trigger word is spoken, the machine learning application wakes up, and the system records the entire interaction. But these recording capabilities open up an ethical Pandora’s box.
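The wake-word pattern described above—a cheap, always-on check that activates the heavyweight recording pipeline—can be sketched in a few lines. The class and wake-word list are illustrative, not any vendor’s actual implementation.

```python
# Sketch of the wake-word pattern: a lightweight detector runs
# continuously, and everything after the trigger word is captured.
WAKE_WORDS = {"alexa", "computer"}

class Assistant:
    def __init__(self):
        self.recording = []
        self.awake = False

    def hear(self, word: str) -> None:
        word = word.lower()
        if not self.awake:
            # The "small piece of code" that waits for trigger words.
            if word in WAKE_WORDS:
                self.awake = True          # wake the ML pipeline
        else:
            self.recording.append(word)    # captured and sent upstream
```

The ethical exposure lives in that last line: how aggressively the trigger check fires, and how much audio is kept afterward, determines what ends up on a vendor’s servers.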
Speech recordings are important to vendors. To improve systems’ response rates, suppliers spend significant resources tuning their solutions. They collect oodles of data and build various algorithms with the goal of responding correctly to every inquiry. The more recordings collected, the more responsive the system becomes. So they want as many recordings as possible and cast a wide net when determining when to turn on the system. This became apparent to the masses when news broke that Alexa recorded a family’s dinner conversation and sent it to a random email contact.