Interpol to Add Voice Recognition to Its Investigative Tools
We’ve all seen grainy security videos and thought, “There’s no way you can identify anyone from that!” Well, voice recognition has made it possible to identify the “bad guy” even when you can’t see his face. While voice recordings are admissible as evidence, some skepticism remains. According to a post on the European Union’s website, the EU-funded SIIP (Speaker Identification Integrated Project) aims to put an end to any doubts about voice recognition in the courtroom “with an innovative probabilistic, language-independent identification system. This system uses a novel Speaker-Identification (SID) engine and a Global Info Sharing Mechanism (GISM) to identify unknown speakers who are captured in lawfully intercepted calls, recorded crime or terror arenas, social media and any other type of speech source.”
SIIP sets itself apart from the competition by fusing multiple speech-recognition algorithms from different vendors, covering speaker model, gender, age, language, and accent. This, according to the EU’s Community Research and Development Information Service (CORDIS), “results in highly reliable and confident detection, keeping false positives and false negatives to the minimum.”
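To see why fusing several engines can cut both false positives and false negatives, consider a minimal sketch of score-level fusion. This is purely illustrative, not SIIP’s actual engine: the weights and the decision threshold are hypothetical, and real systems use calibrated probabilistic models rather than a simple weighted average.

```python
# Illustrative score-level fusion across several speaker-recognition
# engines (e.g. voice model, accent, language). All values hypothetical.

def fuse_scores(scores, weights):
    """Weighted average of per-engine similarity scores, each in [0, 1]."""
    return sum(w * s for s, w in zip(scores, weights)) / sum(weights)

def is_match(scores, weights, threshold=0.7):
    """Declare a speaker match only when the fused score clears the threshold."""
    return fuse_scores(scores, weights) >= threshold

# One engine alone reports a strong match (0.9), which could be a false
# positive; requiring agreement across engines tempers the decision.
scores  = [0.9, 0.4, 0.5]    # per-engine similarity scores
weights = [0.5, 0.25, 0.25]  # hypothetical reliability weights
print(fuse_scores(scores, weights))  # fused score is 0.675, below threshold
print(is_match(scores, weights))
```

The same logic works in the other direction: several moderately confident engines agreeing can push the fused score over the threshold even when no single engine is certain, which is how fusion also reduces false negatives.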
CORDIS says the system has already demonstrated its value in real cases, including identification of speakers on social media. It also reports that SIIP may join other Interpol central biometric databases such as fingerprint, face, and DNA.
The project isn’t due to end until April 2018, but the development phase is complete. The system is expected to be ready for commercial release soon, and CORDIS recommends that the EU and Interpol create a spin-off company to handle marketing, sales, customization, maintenance, and future development.
Service providers lose more than USD 38.1 billion to voice fraud annually, according to the Communications Fraud Control Association (CFCA). In a voice market where margins are declining, any loss to fraud is too much. Service providers must take action or risk going out of business.
If you own a smart speaker, you know that it can be fun trying to trick Alexa, Siri, or Google into doing or saying something it shouldn't—like obeying your friend who imitates your voice commands. While such ruses are fun and harmless, the truth is that bad actors are undoubtedly attempting trickery of a more nefarious nature, and voice-controlled systems (VCSs) and speech recognition systems (SRSs) can be fooled via clever techniques.