Deepfakes Raise Ethical Questions
With great power comes great responsibility. It’s a centuries-old adage that gained renewed popularity as a central theme in the Spider-Man comics and movies.
It’s also been said that absolute power corrupts absolutely—the notion being that power corrupts, and the more power a person holds, the more corrupted that person becomes.
While both of these statements were originally put forth in political contexts, they could just as easily be applied to the world of technology. Those who control the technology hold the power, and they need to wield it with great care.
We all know that’s not always the case. Any technology—no matter how well-intentioned—can usually be exploited by those who wish to do us harm. Splitting the atom can generate tremendous power to light our homes, but it can also level cities and kill millions. The phone, email, and the internet can all do tremendous good, but in the hands of a criminal, these things can do tremendous harm as well. Voice technologies are no different.
The ability to reproduce the human voice can give a voice to the voiceless, but criminals can also use it for nefarious purposes, as our cover story, “Deepfakes: The Latest Trick of the Tongue,” points out. The feature shines a spotlight on several highly publicized criminal schemes where voice deepfakes were used to steal huge sums of money from corporate coffers. Smaller-scale schemes are just as common, and do great harm to those who can least afford it.
This was the case a few years ago when crooks used a deepfake of my niece’s voice (God only knows where they got the original voice recording) to convince my father that she had been arrested and needed $7,500 for bail. The reproduction was so convincing that my father fell for it and sent the money to a strange address in Miami. My brother was able to intervene and stop the delivery of the package, but we found out later that many other seniors aren’t so lucky. We got the money back, but others have lost thousands, and in some cases tens of thousands, of dollars in the same scam involving voice deepfakes of their friends and relatives.
This is a clear-cut, black-and-white example of voice deepfakes being used for evil. Other examples fall more into gray areas.
One such case involved the voice of the late TV personality, chef, and adventurer Anthony Bourdain. In Roadrunner, a documentary about his life, Bourdain says, “You are successful, and I am successful, and I’m wondering: Are you happy?”
But Bourdain never said those words, at least not aloud. They came from an email he sent to a friend before his suicide in 2018 and were re-created in Bourdain’s voice using artificial intelligence. It is one of three lines in the biopic put into Bourdain’s mouth by deepfake technology. Some critics argued that the film’s producers had an obligation to tell viewers that some of the voice content had been manufactured.
Also central to the discussion about deepfakes is the question of consent. Should the people whose voices are being re-created have the right to say whether their voices can be used in such a way? And what about how their voice samples are collected, stored, and possibly shared with others?
People have clearly been unsettled by Hollywood’s use of deepfakes, and even more unsettled by their use in scams like the one that befell my father, and with good reason. Part of it is a fear of the unknown: the technology is still so new that its wider implications have yet to be determined.
Answering many of the ethical questions will not be easy either. It will require collaboration among government, industry, the academic community, and scores of others.
The government has already started to step in unilaterally. Last year alone, 17 U.S. states introduced legislation seeking to regulate AI technology. Even the federal government has begun devoting resources to researching AI questions.
Technology providers and those in the AI content community are also promising to take up the issue. Adobe, Twitter, Microsoft, and Intel, along with several media companies, have already launched their own initiatives, but more work is clearly needed to address the ethical considerations in AI voice technology. Let’s do it quickly, before more seniors are swindled into sending money to bail out relatives in nonexistent trouble.
Leonard Klie is the editor of Speech Technology magazine. He can be reached at firstname.lastname@example.org.