
The Dilemma of AI Transparency


When Google’s Duplex ventured into the uncanny valley last year, making phone calls that did not disclose its artificial nature, it prompted significant blowback, including opinion and commentary from AVIOS. Google subsequently announced that the system would henceforth identify itself as artificial intelligence (AI).

It appears that humans trust algorithms less than they trust other humans, even when the algorithms perform better. We break promises made to AI more often than promises made to other humans, and we believe we are smarter than AI. While we may deceive ourselves in this brief twilight of human intellectual hegemony, our prejudices can negatively impact how we interact with AI in areas of importance, like healthcare or investment decisions. The result is an interesting dilemma: when we know we’re interacting with AI, the transparency itself can produce suboptimal outcomes.

In a recent study, written up in Nature, humans were paired with either an AI or other humans in rounds of the prisoner’s dilemma, a game theory classic:

Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge, but they have enough to convict both on a lesser charge. Each prisoner is given the opportunity either to betray the other by testifying that the other committed the crime, or to cooperate with the other by remaining silent.

Assume three possible outcomes: (1) mutual betrayal: A and B betray each other, and each serves two years in prison; (2) single betrayal: A betrays B while B remains silent, so A is set free and B serves three years (or vice versa, with B betraying A); (3) mutual silence: A and B both remain silent, and each serves only one year in prison, on the lesser charge.
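
To make the payoff structure concrete, here is a minimal Python sketch of the game above. The sentence values come straight from the scenario; the PAYOFFS dictionary and the play_round helper are my own encoding, not anything from the study:

```python
# Payoff matrix for the prisoner's dilemma described above.
# Each entry maps (A's move, B's move) to (A's sentence, B's sentence)
# in years of prison; lower is better.
PAYOFFS = {
    ("betray", "betray"): (2, 2),   # (1) mutual betrayal
    ("betray", "silent"): (0, 3),   # (2) A goes free, B serves three years
    ("silent", "betray"): (3, 0),   # (2) vice versa
    ("silent", "silent"): (1, 1),   # (3) mutual silence, lesser charge only
}

def play_round(move_a: str, move_b: str) -> tuple[int, int]:
    """Return the sentences (in years) handed to players A and B."""
    return PAYOFFS[(move_a, move_b)]
```

Note the tension built into these numbers: whatever the other player does, each player individually serves less time by betraying, yet mutual silence beats mutual betrayal for both.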

In this study, human subjects played a multiple-trial game against either another human or an algorithm and were randomly assigned to one of four conditions: (1) playing with a human they knew to be a human; (2) playing with a human they believed to be an AI; (3) playing with an AI they knew to be an AI; and (4) playing with an AI they believed to be a human.

The AI learned the cooperative optimum (mutual silence) pretty quickly, as did humans who believed they were playing with other humans. But when humans were paired with what they thought was an AI, they messed up: their distrust of AI caused them to make bad decisions.

The AI also learned to expect less from human partners. It started each game with the aspiration that it would not be ratted out, but given continued defections, it changed its strategy. The AI in this study learned that humans were not to be trusted.
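
The paper’s exact learning algorithm is beyond the scope of this column, but the behavior described, starting out optimistic and lowering expectations after repeated defections, can be illustrated with a toy aspiration-based rule. This is my own sketch, not the study’s algorithm; the cooperation threshold and update rate are illustrative assumptions, and it reuses the play_round helper from the sketch above:

```python
class AspirationLearner:
    """Toy agent: cooperates while outcomes meet its aspiration and
    defects once repeated betrayals drag the aspiration down.
    An illustrative sketch, not the algorithm used in the study."""

    def __init__(self, aspiration: float = 1.0, rate: float = 0.2):
        # Start out hoping for the one-year sentence of mutual silence.
        self.aspiration = aspiration
        self.rate = rate

    def choose(self) -> str:
        # Cooperate while the agent still expects cooperation to pay.
        return "silent" if self.aspiration <= 1.5 else "betray"

    def update(self, my_sentence: int) -> None:
        # Nudge the aspiration toward the sentence actually received.
        self.aspiration += self.rate * (my_sentence - self.aspiration)

# Pair the agent with a distrustful partner who always betrays.
agent = AspirationLearner()
for _ in range(10):
    sentence, _ = play_round(agent.choose(), "betray")
    agent.update(sentence)
print(agent.choose())  # "betray": the agent has learned not to trust
```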

The authors write: “We used cooperation in a social dilemma as a proxy for efficiency, to capture situations where cooperation would lead to the best possible result, but can be compromised by a temptation not to cooperate, or a belief that the partner will not cooperate. Help desks operated by bots may provide a good example: While trusting the bot to help might lead to a quicker and easier resolution, humans may nevertheless decide to wait for human help due to a prejudice against the bot.”

I personally believe that transparency trumps efficiency in the long run, because people will eventually get over their prejudices toward AI. I think the reason we have these prejudices is that AI has been pretty bad at most tasks in our lifetimes, so we pound zero to get an operator. To pass the Turing test, AI will have to dumb itself down.

Given that we may soon be outclassed, I believe it is imperative to add this law to the laws of robotics: “A robot or an AI interacting with humans must disclose that it is not a human.” We don’t want to have to rely on Blade Runner’s Voight-Kampff test. That movie, by the way, was set in 2019. 

Phil Shinn is chief technology officer of ImmunityHealth and principal of the IVR Design Group. He’s been building speech and text apps since 1984 for dozens of companies, including HeyAnita, Morgan Stanley, Genesys, Bank of America and Citigroup. Shinn has a Ph.D. in linguistics, holds five patents, and has served on the board of AVIOS and ACIXD.
