Overcoming Bias Requires an AI Reboot

This applies as much to voice systems as it does to other AI-enabled technologies.

“The biases contained in voice recognition systems are primarily caused by pre-existing social biases contained within the datasets used to train the AI algorithms,” Walsh says. “How much bias is contained within an AI speech recognition system largely depends on the amount of inherent societal or algorithmic bias contained within the data assets used to develop those algorithms.”

According to Vikrant Tomar, founder and chief technology officer of Fluent.ai, a provider of AI-enabled voice interfaces, there are two layers of bias in speech systems.

• Usability: Speech systems work well in only a handful of languages and, even then, only for a narrow set of dialects and accents. Because of this, many people around the world cannot use speech recognition at all.

• Response level: Most modern speech systems combine automatic speech recognition (speech-to-text) with natural language understanding, either of which can be susceptible to a degree of bias. A failure to understand certain languages, accents, or dialects is one example. Systems can also be biased toward the questions men tend to ask and will therefore map even women’s queries to more male-oriented domains. (A simple per-group error check is sketched after this list.)
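
To make such disparities visible, teams typically measure error rates separately for each speaker group. Below is a minimal sketch in Python; the group labels, sample fields, and the wer_by_group helper are illustrative assumptions rather than any particular toolkit's API. The only real machinery is a standard word-level Levenshtein distance used to compute word error rate (WER).

from collections import defaultdict

def edit_distance(ref, hyp):
    # Word-level Levenshtein distance between two token lists.
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def wer_by_group(samples):
    # samples: dicts with a "group" label (accent, dialect, gender, ...),
    # a human "reference" transcript, and the recognizer's "hypothesis".
    errors, words = defaultdict(int), defaultdict(int)
    for s in samples:
        ref = s["reference"].split()
        errors[s["group"]] += edit_distance(ref, s["hypothesis"].split())
        words[s["group"]] += len(ref)
    return {g: errors[g] / max(words[g], 1) for g in errors}

# A large WER gap between groups is the usability bias described above.
samples = [
    {"group": "en-US", "reference": "turn on the lights", "hypothesis": "turn on the lights"},
    {"group": "en-IN", "reference": "turn on the lights", "hypothesis": "turn on the flights"},
]
print(wer_by_group(samples))  # {'en-US': 0.0, 'en-IN': 0.25}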

“Beyond ASR and NLU, even if the system is able to understand the user’s query, it has to answer the query based on a certain knowledge base (internal or external),” Tomar explains. “Such knowledge bases might have their own biases. Therefore, it is important to ensure fairness in the data/knowledge bases used for training.”

According to Walsh, a primary problem caused by such biases is that they render algorithms unfit for their intended purposes. A biased algorithm will work poorly for any user who falls outside the narrow parameters it was built around.

Biases in algorithms also come from the developers who build them: even when it is unintentional, developers carry their own biases, shaped by their backgrounds and experiences, into the systems they create, experts agree.

“Simply put, AI models are biased because the data that feeds them is biased,” says Chris Doty, content marketing manager at RapidMiner, a provider of artificial intelligence for analytics teams. “For example, if you have a speech recognition system trained only on speakers of American English, it’s going to struggle with…people from Australia, speakers of non-standard varieties of American English, non-native speakers, etc.”
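
One way to act on Doty's point before training even begins is to audit the corpus itself. The Python sketch below tallies a hypothetical "dialect" metadata field and flags any value that falls under a chosen share of the data; the field name, the 5 percent floor, and the synthetic counts are all assumptions to be adapted to a real corpus.

from collections import Counter

def coverage_report(metadata, field, floor=0.05):
    # Flag attribute values that fall below a minimum share of the corpus.
    counts = Counter(row[field] for row in metadata)
    total = sum(counts.values())
    return {value: (n, n / total, n / total < floor)
            for value, n in counts.most_common()}

# Synthetic corpus skewed toward American English, as in Doty's example.
corpus = ([{"dialect": "en-US"}] * 900 +
          [{"dialect": "en-AU"}] * 60 +
          [{"dialect": "en-IN"}] * 40)
for dialect, (n, share, flagged) in coverage_report(corpus, "dialect").items():
    print(f"{dialect}: {n} clips ({share:.1%})" + (" UNDER-REPRESENTED" if flagged else ""))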

Doty adds that bias can also creep in over time through model drift, the technical term for the disconnect that occurs when the real world changes but a model’s static training data does not, so the model gradually becomes a poorer match for reality.
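
What drift monitoring might look like in practice: the sketch below assumes you log one numeric property of incoming audio, such as utterance duration, for both the training set and live traffic, and it uses SciPy's two-sample Kolmogorov-Smirnov test (scipy.stats.ks_2samp) to flag when the two distributions diverge. The synthetic numbers and the 0.01 significance threshold are illustrative.

import random
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

def has_drifted(train_values, live_values, alpha=0.01):
    # A small p-value means live data no longer looks like the training data.
    stat, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha, stat, p_value

# Synthetic illustration: live utterances run longer than the training set did.
random.seed(0)
train = [random.gauss(3.0, 0.5) for _ in range(1000)]  # seconds per utterance
live = [random.gauss(3.6, 0.5) for _ in range(1000)]
drifted, stat, p = has_drifted(train, live)
print(f"drifted={drifted}, KS statistic={stat:.3f}, p={p:.2e}")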

“Ultimately, models are only as good as the data that is used to train them,” Doty says. “Ensuring you have a wide range of relevant, up-to-date data is key, especially when it comes to speech projects.”

What Can Be Done?

Most experts agree that biases in AI cannot be fully eliminated but can be minimized by having a diverse array of developers involved in system design from the beginning.

But even then, most developers have similar backgrounds, so their biases—intended or not—tend to be reflected in any system they build, says Timothy Summers, president of Summers & Co., a cyber strategy and organizational design consulting firm. That’s why facial recognition, speech, and other AI-based systems will fail more often when they are used with more diverse populations.

The more diverse the developers, the more diverse the AI they will develop, Summers says. “You need to embrace a more diverse approach for developing these systems. You need to innovate responsibly. You need to co-create with the community that you are trying to serve.”

“Solving the problem is extremely tricky, and there has not been any definitive proof that it can completely be solved at the moment,” Walsh laments.

“One serious problem is that of expectation, of what AI can really do. At the end of the day, an AI system is educated and trained to solve a particular problem, and that is pretty much its entire universe,” says A.J. Abdallat, founder and CEO of Beyond Limits, an artificial intelligence company. “These systems are not humans who can freely interact with their environment, go to libraries, call up experts on the phone, perform experiments, and test hypotheses. They have a myopic view of the world. They are machines, not people.”
