
Overcoming Bias Requires an AI Reboot


Artificial intelligence (AI) is becoming much more common as an embedded feature in customer service technologies, including those that are voice-enabled.

Companies are using AI to handle billions of customer communications annually. The AI systems can respond to the most common and easiest queries, such as “what is my account balance,” leaving human agents free to handle more complex customer service issues. The AI systems are designed not only to understand customer queries but also to determine the best way to respond, whether that means providing a direct answer, asking a clarifying question, passing the interaction on to a live agent, placing an order, or something else.
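
To make that concrete, the sketch below shows one simple way such routing logic might look. The intents, confidence threshold, and handler names are hypothetical illustrations, not any vendor's actual implementation.

```python
# Hypothetical sketch of the routing decision described above.
# Intents, thresholds, and handler names are illustrative assumptions.

def route_query(intent: str, confidence: float) -> str:
    """Decide how an AI assistant should handle a customer query."""
    SELF_SERVICE_INTENTS = {"account_balance", "store_hours", "order_status"}

    if confidence < 0.5:
        return "ask_clarifying_question"   # the model is unsure what was asked
    if intent in SELF_SERVICE_INTENTS:
        return "answer_directly"           # common, easy queries stay automated
    return "escalate_to_live_agent"        # complex issues go to a human


print(route_query("account_balance", 0.92))   # answer_directly
print(route_query("billing_dispute", 0.81))   # escalate_to_live_agent
print(route_query("unknown", 0.30))           # ask_clarifying_question
```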

Programming systems to make those decisions and take those actions is no easy process, and it is subject to the same failings as the humans charged with that task. As such, bias can creep into the algorithms, particularly because AI is only as good as the training data that is fed into it, and that data can include historical or social inequities with regard to gender, race, sexual orientation, or other factors.
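
As a rough illustration of how inequity in historical data can surface, the sketch below compares how often a favorable outcome label appears for each demographic group in a training set. The column names and values are made up for the example.

```python
# Hypothetical check for bias baked into training data: compare how often a
# favorable historical outcome appears for each demographic group.
# Column names and values are illustrative, not real customer data.
import pandas as pd

training_data = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [ 1,   1,   0,   0,   0,   1,   0,   1 ],
})

# Approval rate per group; a large gap suggests the historical labels
# themselves carry inequity that a model trained on them would learn.
rates = training_data.groupby("group")["approved"].mean()
print(rates)
print("Gap between groups:", rates.max() - rates.min())
```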

At a time when many companies are expanding their use of AI, being aware of those biases and mitigating them is an urgent priority.

“This is an important question because we are making increasingly critical decisions in AI,” says Sagie Davidovich, CEO and cofounder of SparkBeyond, a provider of automated research engines using AI. “You want to have accountability, transparency, and inclusiveness.”

“AI speech recognition systems have been found to contain biases that discriminate against particular demographics of people,” says Ray Walsh, digital privacy expert at ProPrivacy.com. “This is troubling because these kinds of algorithms are often deployed on platforms used by all citizens. As a result, these technologies could actively be resulting in the re-expression of prejudice or discrimination.”

Left undetected, bias in AI can lead to harmful discriminatory practices, distorted campaign results, regulatory violations, or the loss of public trust.

Nowhere was this more evident than with the ill-fated launch of Microsoft’s Twitter bot Tay in 2016. Microsoft described Tay as an experiment in conversational understanding, one where the more Tay was engaged, the smarter it would become, learning to engage people through “casual and playful conversation.”

But as soon as Tay went live, people started tweeting all sorts of misogynistic and racist remarks at the bot. Tay began repeating those racist, misogynistic sentiments back to users. Microsoft pulled the plug on the technology after just 16 hours in circulation.

The Tay incident was certainly not the first time something like this happened. Automated technology has also been blamed for bias in the lending industry for years. Financial institutions turned to automation in droves to speed up loan approval and denial decisions, presumably without the same biases that can creep into human thought processes.

A study by WP Engine supports this notion, at least in theory. Among survey respondents, 92 percent said they believed AI provides an opportunity to examine both organizational and data-related bias, and 53 percent said they believed AI provides an opportunity to overcome human bias.

But the study also found that nearly half (45 percent) thought that bias in AI could cause an unequal representation of minorities.

In the end, automated systems in the lending process were indeed found to be biased because they used data and variables that tended to discriminate against mortgage applicants who belonged to ethnic minorities.
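
One common way such bias is audited, sketched below with made-up numbers, is the “four-fifths” disparate-impact check, which flags any group whose approval rate falls below 80 percent of the most favored group’s rate.

```python
# Hypothetical audit of a lending model's decisions using the "four-fifths"
# disparate-impact rule of thumb: a group's selection rate should be at least
# 80% of the most favored group's rate. All numbers are illustrative.

decisions = {
    # group: (applications, approvals) -- made-up figures for the sketch
    "group_x": (1000, 620),
    "group_y": (1000, 410),
}

rates = {g: approved / total for g, (total, approved) in decisions.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "OK" if ratio >= 0.8 else "POTENTIAL DISPARATE IMPACT"
    print(f"{group}: approval rate {rate:.0%}, ratio to best {ratio:.2f} -> {flag}")
```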

Help on the Horizon

More powerful computing systems and more readily available data have eliminated some of the inherent bias of earlier technology, but many experts agree that bias persists in today’s systems.

“We need to start thinking about bias in our AI systems,” Davidovich says.
