Video: How Biometrics Can Detect Deepfakes
Learn more about biometrics and deepfakes at the next SpeechTEK conference.
Read the complete transcript of this clip:
Ben Cunningham: We call these deepfakes, right? Think of the best-known example: a deepfake is like you may have seen on television, where someone has a recording of Barack Obama speaking, and it sounds just like him, but it's someone else actually creating those words. Synthetic speech, synthetic videos, which we believe present a really credible threat to, honestly, our way of life. I know that sounds really big, but imagine someone posts an incendiary video of one country's leader that enrages another country.
Anyway, I think maybe we can all agree that they're dangerous. The way to detect them is with things like liveness detection and synthetic speech detection; basically, you're looking for any anomaly. It's really interesting how our deep voice biometrics work. Instead of just asking, does this match the arithmetic of another call, or the probabilities, the way I think about it is the system actually tries to reverse-engineer what the person's throat had to look like in order to produce that sound.
So if it's a recording, it's going to really, really impact how that sounds, right? There's distortion in there. Evolution took millions of years to develop our voicebox, and someone doing a deepfake isn't thinking about that. They're only trying to fool your ears. They're not trying to fool something that knows your anatomy. So when we look at deepfakes, things like that, our technology will show, well, someone would have to have an eight-foot-tall neck to make that sound. It's really based on what's possible with the nerves and muscles in your larynx. But detecting that is a challenge. It goes beyond just authentication.
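The "eight-foot-tall neck" idea can be illustrated with basic acoustics (this is a toy sketch, not Pindrop's actual system). A neutral vocal tract behaves roughly like a quarter-wavelength resonator, so its first formant is about F1 ≈ c / (4L). Inverting that gives the tract length implied by an observed first formant, which can be checked against the range that is anatomically possible; the function names and thresholds below are illustrative assumptions.

```python
# Toy plausibility check based on the quarter-wavelength resonator model:
# for a uniform tube closed at one end, F1 ~= c / (4 * L), so an observed
# first formant implies a vocal tract length of L = c / (4 * F1).
SPEED_OF_SOUND = 343.0  # m/s, in air at roughly room temperature

def implied_tract_length_m(f1_hz: float) -> float:
    """Vocal tract length (meters) implied by a first-formant estimate."""
    return SPEED_OF_SOUND / (4.0 * f1_hz)

def is_plausible_human(f1_hz: float,
                       min_len_m: float = 0.10,
                       max_len_m: float = 0.22) -> bool:
    """Adult human vocal tracts run roughly 10-22 cm; audio whose formants
    imply a length far outside that range suggests synthesis or distortion."""
    return min_len_m <= implied_tract_length_m(f1_hz) <= max_len_m

# A typical neutral vowel: F1 near 500 Hz implies a ~17 cm tract. Plausible.
print(is_plausible_human(500.0))   # True
# Distorted audio with F1 at 100 Hz would imply a ~86 cm "neck". Implausible.
print(is_plausible_human(100.0))   # False
```

The real system presumably models far more of the larynx than one formant, but the principle is the same: a forger tuning audio to fool the ear has no reason to keep these physical constraints consistent.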
With authentication you're matching one to a known other, right? It's one-to-one. Whereas with deepfakes, you have to ask, is this anybody else, which is a really tough problem to solve. So you're going to have to look pretty scrupulously at the things you can't hear, rather than the things you can hear, just because it sounds the same, right? Those inaudible characteristics our phoneprinting technology picks up on, you know, 1,800-some-odd things that the human ear can't hear, so it's going to take a level of, essentially, machine learning and artificial intelligence to be able to discern this really quickly from those types of bits and bytes of audio.
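The one-to-one versus "is this anybody else" distinction can be sketched with toy voiceprint vectors. The snippet below uses cosine similarity on made-up four-dimensional embeddings; it is a hedged illustration of the two decision problems, not the actual phoneprinting feature set (which the speaker says spans some 1,800 inaudible characteristics).

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def authenticate(probe, enrolled, threshold=0.8):
    """One-to-one: does the probe match this single known voiceprint?"""
    return cosine(probe, enrolled) >= threshold

def looks_synthetic(probe, genuine_population, threshold=0.5):
    """Open-set check: if the probe is far from *every* genuine sample,
    flag it as anomalous, and therefore possibly synthetic."""
    best = max(cosine(probe, g) for g in genuine_population)
    return best < threshold

# Made-up 4-dim "voiceprints" standing in for the real inaudible features.
enrolled = [0.9, 0.1, 0.3, 0.2]
probe_real = [0.88, 0.12, 0.28, 0.22]   # close to the enrolled speaker
probe_fake = [-0.5, 0.9, -0.4, 0.1]     # unlike any genuine sample
population = [enrolled, [0.7, 0.2, 0.4, 0.1]]

print(authenticate(probe_real, enrolled))       # True: one-to-one match
print(looks_synthetic(probe_fake, population))  # True: anomalous vs all genuine
```

Note the asymmetry the speaker describes: authentication only compares against one known print, while synthetic-speech detection has to rule out similarity to everything genuine, which is the harder, open-set problem.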