
Listening for Lies


As a double agent for the Soviet Union and later for Russia, Aldrich Ames passed two polygraph screenings. That he was able to do so underscored the inadequacy of so-called lie detection technology, and yet, when FBI agent Robert Hanssen was uncovered as a mole in early 2001, the FBI immediately imposed lie detector tests on 500 employees. Still, despite its fallibility, lie detection remains, in many eyes, a valuable tool. With that in mind, Harrow Council, one of 33 local government authorities in the greater London area, implemented a pilot program using voice risk analysis (VRA) technology to combat housing benefits fraud among its 214,000 residents.

Benefits fraud occurs when individuals file false claims—such as lying about their income or filing a claim from a vacant address. Costs to the government can be steep. For instance, a couple in Dukinfield, England, claimed $60,000 in Housing and Council Tax benefits over 10 years on a house for which they paid no rent. The British government reports that such fraud amounted to more than $1.4 billion last year alone.

So, it’s little surprise that Harrow Council, noting more than $500,000 in fraudulent claims during 2006, wanted to install countermeasures.

Using about $128,000 in funding from the Department for Work and Pensions, Harrow Council began a year-long pilot of Digilog’s Advanced Validation Solutions (AVS) in its call center in May. The solution uses VRA technology—which analyzes a speaker’s vocal frequency and stress—supplied by the Israeli company Nemesysco. As of September, Harrow Council says it has saved roughly $225,000 in benefits payments. The solution has also helped identify 126 incorrectly awarded single-person council tax discounts worth nearly $80,000 and has referred 304 claims for review, 47 of which were found to be invalid.

What’s notable is the extensive role trained human operators play in using the solution. "[The technology] is just an element within the overall risk assessment of a conversation," says Lior Koskas, business development director at Digilog. "What we train time and time again is to use this as an indicator. It can guide you through a conversation, but at the end of the day, you need to rely on what the technology is telling you and [your training] as an operator."

As part of the solution’s installation, operators are taught—over three to seven days depending on seniority and specific duties—30 techniques to spot inconsistencies within a narrative. For instance, scammers often have a problem remembering tense. Take an individual who contacts his insurance company to report his vehicle stolen when, in fact, he drank too much hot toddy and sank it in the village duck pond. Naturally, the fraudster would construct a fiction surrounding the theft of his car. So when the insurance agent asks, "How did you secure your vehicle?" the intrepid driver would know to say he locked it. But often, liars forget to use the past tense—after all, the theft never occurred—and will instead say, "I lock my car."
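To make the tense signpost concrete, the sketch below scans a transcribed answer about a past event for present-tense slips. It is a simplified, hypothetical check written around this article's example; Digilog trains operators to catch such cues in live conversation, and its actual linguistic techniques are not published as code.

```python
import re

# Toy illustration of one linguistic "signpost": a claimant describing a
# past event (a theft that supposedly already happened) who slips into the
# present tense. The patterns below are invented for this example.
PRESENT_TENSE_SLIPS = [
    r"\bI lock\b",
    r"\bI park\b",
    r"\bI leave\b",
    r"\bI drive\b",
]

def tense_slip_flags(answer: str) -> list:
    """Return any present-tense phrases found in an answer about a past event."""
    return [p for p in PRESENT_TENSE_SLIPS if re.search(p, answer, re.IGNORECASE)]

# The fraudster's slip from the example above is flagged...
print(tense_slip_flags("I lock my car every night."))     # one pattern matches
# ...while a consistent past-tense account is not.
print(tense_slip_flags("I locked my car that evening."))  # no matches
```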

"So just imagine that you’re an operator with 30 different signposts to detect deception within the spoken word, together with the technology in the background," Koskas says. "You’ve got a very robust method to assist the conversation."

Before the start of each conversation, the system plays a Data Protection Notice: "This conversation is being recorded for the purposes of fraud detection and may be analyzed later to check the details given." When a claim comes through, the operator conducts the conversation with a risk assessment form in front of her. Naturally, claimants with benefit problems enter the conversation already stressed, so the system immediately establishes a baseline stress level. Only if the claimant’s agitation increases dramatically over that baseline during the conversation does the system note a possible risk by beeping in the operator’s ear. The operator then assesses whether that high-risk alert is relevant to the claim. If it is, she marks it on her risk assessment form, flagging the case for further investigation. If the number of relevant risks stays below a set threshold, the operator expedites the claim.
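That call flow amounts to a simple decision loop: establish a baseline, alert on dramatic stress spikes, let the operator judge relevance, and compare the tally against a threshold. The sketch below illustrates the idea; the field names and numeric thresholds are assumptions for illustration, since the actual AVS scoring and the contents of the operator's form are proprietary.

```python
from dataclasses import dataclass

# Minimal sketch of the call flow described above. The values of spike_ratio
# and max_acceptable_risks are assumed, not Digilog's real parameters.
@dataclass
class CallRiskAssessment:
    baseline_stress: float          # established at the start of the call
    spike_ratio: float = 1.5        # how far over baseline counts as "dramatic" (assumed)
    max_acceptable_risks: int = 2   # fast-track threshold (assumed)
    relevant_risks: int = 0         # high-risk alerts the operator judged relevant

    def segment_beeps(self, segment_stress: float) -> bool:
        """True if this segment's stress spikes far enough over baseline to beep."""
        return segment_stress > self.baseline_stress * self.spike_ratio

    def mark_relevant(self) -> None:
        """The operator judged the alert relevant and notes it on the risk form."""
        self.relevant_risks += 1

    def decision(self) -> str:
        """Fast-track the claim if relevant risks stay below the threshold."""
        return "fast-track" if self.relevant_risks < self.max_acceptable_risks else "refer for review"

# Example: baseline stress 0.4; the second segment spikes, the operator judges
# the alert relevant, but the claim still stays below the threshold.
call = CallRiskAssessment(baseline_stress=0.4)
for stress in (0.45, 0.75):
    if call.segment_beeps(stress):   # 0.75 > 0.4 * 1.5, so the operator hears a beep
        call.mark_relevant()
print(call.decision())               # "fast-track"
```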

Ultimately, the implementation of the voice risk assessment technology is designed to speed up the claims process by swiftly identifying the genuine ones. "If you’re submitting the claim and, [by] the end of the conversation, we don’t spot any linguistic behavior and the technology said there were no areas of risk, we would be able to fast-track the claim," Koskas says. "So instead of going through weeks of documentation that cost a lot of money and time, we would be able to send it to payment immediately."

Digilog’s VRA offering is not new technology. U.K. motor vehicle insurer Highway Insurance Holdings has used it for the past few years, and in September reported that the solution has contributed $22.4 million in savings on false claims since 2002. As a soft benefit, Highway Insurance also reduced call center staff turnover by 75 percent. Employees, no longer just number-crunchers, became part of a process that required a great deal of decision-making, and job satisfaction naturally increased.

CONTROVERSIAL INSTALLATION
The decision by Harrow Council to implement the solution amounts to a government endorsement of the technology. So far the solution has proved accurate, efficient, and stable, and it has been unobtrusive to the IT platforms currently in use. Yet its implementation has not been without controversy. When the solution was first rolled out, the London-based Trades Union Congress (TUC) published a report entitled "Lies, Damned Lies, and Lie Detectors" that criticized Harrow Council’s pilot program. The chief complaint was that "lie detector tests face ‘false negatives’—liars who are not detected, and ‘false positives’—honest people accused of lying." Additionally, the report cites a lack of scientific data on VRA technology.

"Our stand is if genuine, it’s virtually impossible to come up as a risk statement," Koskas contends. He emphasizes the variety of factors—both human and technological—that combine to formulate each risk assessment. "If you came up with the end of the initial conversation as high risk, we have the second tier to double check and make sure that risk is relevant. There’s no scientific proof to accuracy for this combined solution, other than the results that our customers are publicly talking about."

He adds that in Highway Insurance’s case the number of customer complaints dropped from 4 percent to less than 1 percent. "Which is proof to us that what we do is fair, ethical," Koskas says. "Otherwise, we’d get complaints going through the roof."

Despite the TUC’s concerns, British local governments are lining up to implement Digilog’s AVS solution. Both Birmingham and Lambeth councils launched the solution in September, and six more councils are waiting in the wings.

And though the brouhaha centers on the technology’s accuracy, the success of Digilog’s solution hinges on the way VRA combines technology with the operators’ trained judgment. "We would never endorse using technology and making decisions solely on the technology," Koskas says. "Because it’s not enough."
