Overcoming Bias Requires an AI Reboot

As an example, Abdallat cites IBM’s Deep Blue computer, noting that it can play chess at a grandmaster level but is incapable of playing tic-tac-toe.

Abdallat views AI as a complement to people rather than a replacement for them.

Experts agree that the best way to minimize bias in speech systems’ AI is to include people from as many different backgrounds as possible in development.

“A crucial factor is to ensure that there does not exist any bias in the data or knowledge base that the AI system is going to learn from,” Tomar says. “While this might sound like an easy issue, it is very difficult to ensure this because the humans evaluating these datasets might have inherent biases themselves.”
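Auditing a dataset for that kind of skew can be as simple as comparing each group’s share of the data against a target distribution. What follows is a minimal Python sketch of the idea; the “accent” field, the tolerance value, and the toy corpus are illustrative assumptions, not drawn from Fluent.ai’s actual tooling:

```python
from collections import Counter

# Illustrative sketch only: check a speech corpus's metadata for
# demographic skew before training. The "accent" attribute and the
# tolerance are hypothetical, not from any specific toolkit.
def audit_balance(samples, attribute, tolerance=0.2):
    """Flag groups whose share of the data deviates from a uniform
    split by more than `tolerance` (relative to the uniform share)."""
    counts = Counter(sample[attribute] for sample in samples)
    total = sum(counts.values())
    uniform_share = 1.0 / len(counts)
    flagged = {}
    for group, n in counts.items():
        share = n / total
        if abs(share - uniform_share) > tolerance * uniform_share:
            flagged[group] = round(share, 3)
    return flagged

# Toy corpus dominated by one accent group
corpus = ([{"accent": "US"}] * 800
          + [{"accent": "Indian"}] * 150
          + [{"accent": "Scottish"}] * 50)
print(audit_balance(corpus, "accent"))
# {'US': 0.8, 'Indian': 0.15, 'Scottish': 0.05} -- all far from a uniform 1/3
```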

Beyond the data, Tomar suggests evaluating the underlying models as well. “A model could attach importance to a particular feature relating to a discriminatory factor but not necessarily relevant to the problem at hand,” he says. “For these reasons, we need to come up with stringent measures of fairness in AI and how to evaluate those.”
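One widely used measure of the kind Tomar describes is demographic parity, the gap in positive-outcome rates between groups. Below is a minimal sketch of how it can be computed; the variable names are illustrative, and a real evaluation would weigh several complementary metrics:

```python
# Minimal sketch of one common fairness measure, demographic parity:
# the gap in positive-prediction rates between demographic groups.
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate across groups.
    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions"""
    rates = {}
    for pred, grp in zip(predictions, groups):
        n_pos, n_tot = rates.get(grp, (0, 0))
        rates[grp] = (n_pos + pred, n_tot + 1)
    positive_rates = [pos / tot for pos, tot in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Example: a model that accepts 70% of group A but only 40% of group B
preds = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0,
         1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
groups = ["A"] * 10 + ["B"] * 10
print(demographic_parity_gap(preds, groups))  # 0.3
```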

Thomas Westerling, director of scientific strategy and business development at Aiforia Technologies, a provider of deep learning AI and cloud solutions for the medical industry, says that in creating AI for speech systems, developers need to “capture the wealth of variance of speech patterns of human beings,” including those who have speech impediments.

Corporate Moves

A number of efforts toward AI fairness have been mounted by large corporations like Google and Microsoft in recent years, but even smaller firms are actively pursuing ways to limit bias in AI systems.

“At Fluent, we ensure that all the data we collect represents the diversity of the target demographics in a balanced manner,” Tomar says. “This allows us to design systems that work well with the majority of our target users.”

Another solution could be to develop new models that require less training for new languages and can quickly adapt to users, according to Tomar. “At Fluent.ai, this has been a fundamental motivating factor behind the development of our end-to-end spoken language understanding (SLU) systems. Our SLU system can learn any language or a mix of languages quickly and also allows the end user to adapt the model with very little feedback directly on the device.”

Pegasystems, a provider of cloud software for customer engagement, is also exploring ways to eliminate bias in AI. Earlier this year, the company launched Ethical Bias Check, a new capability for the Pega Customer Decision Hub.

According to the company, the new feature simulates AI-driven customer engagement strategies before they go live, flagging possible biases and unwanted discrimination by using predictive analytics to simulate the likely outcomes of any strategy.

The company claims to be the first to offer always-on bias detection across all customer engagements on all channels, including those that are speech-based.

After setting their testing thresholds, clients receive alerts when the bias risk reaches unacceptable levels, such as when the audience for a particular offer skews toward or away from specific demographics.
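To make the mechanism concrete, here is a rough Python sketch of threshold-based bias alerting. It illustrates the general idea only and is not Pegasystems’ implementation; the demographic categories, shares, and threshold are all assumptions:

```python
# Hedged illustration of threshold-based bias alerts: compare an
# offer's audience makeup to the overall customer base and flag any
# demographic whose share drifts past the configured threshold.
# This is NOT Pegasystems' code, just a sketch of the concept.
def bias_alerts(audience_shares, baseline_shares, threshold=0.10):
    """Return demographics whose audience share deviates from the
    baseline population share by more than `threshold` (absolute)."""
    return {
        demo: (audience_shares.get(demo, 0.0), base)
        for demo, base in baseline_shares.items()
        if abs(audience_shares.get(demo, 0.0) - base) > threshold
    }

baseline = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
audience = {"18-34": 0.55, "35-54": 0.35, "55+": 0.10}  # offer skews young
print(bias_alerts(audience, baseline))
# {'18-34': (0.55, 0.3), '55+': (0.1, 0.3)} -- both trigger an alert
```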

“As AI is being embedded in almost every aspect of customer engagement, certain high-profile incidents have made businesses increasingly aware of the risk of unintentional bias and its painful effect on customers,” says Rob Walker, Pegasystems’ vice president of decision management and analytics.

Pegasystems’ Ethical Bias Check operates across channels. All of the AI’s decisions are automatically screened for bias, whether they result in a marketing offer delivered on the web, a promotion placed in an email, or a customer service recommendation made via an agent or chatbot. The bias protection can even adjust as strategies and offers change.

Companies can set acceptable thresholds for any element that could cause bias, such as age, gender, or ethnicity.

How well Pegasystems’ Ethical Bias Check or any other evolving bias mitigation technology works remains to be seen. But most observers agree that companies will need to keep combating bias in AI systems for the foreseeable future.

Phillip Britt is a freelance writer based in the Chicago area. He can be reached at spenterprises@wowway.com.
