Diagnosing Disease with Speech Analytics
The first known treatise on medical diagnostics was written down in ancient Egypt about 4,000 years ago. Hippocrates (eponym of the famous Hippocratic Oath) attempted to diagnose disease for the ancient Greeks somewhere around 2,400 years ago. Still, with something as complex as the human body and its interactive relationship with its environment, thousands of years of trial and error have not provided us with the perfect diagnostic tool. There are people among us, however, who are trying to change that. They are not claiming perfection, or even that perfection is possible, but rather they are taking the steps to move humanity in that direction by opening that diagnostic window just a little bit more and allowing us to peek inside.
Their tool of choice? Voice. Medical practitioners have used vocal analysis for diagnosis for years. The Diagnostic and Statistical Manual of Mental Disorders (DSM) has included speech sounds and language usage among its diagnostic criteria for at least 50 years (e.g., the DSM-II, published in 1968, lists “talkativeness” and “accelerated speech” as two common symptoms of what was then called manic depression, now bipolar disorder). And that makes sense. When we produce sounds, a complex system is at work involving our lung power, energy level, physiology, and mental state, all of which can be affected by illness. We speak differently in different moods, mind-sets, and emotional states, especially those at the extreme ends of the spectrum. But what about diagnosing something like heart disease, migraines, or Parkinson’s disease using only the sound of someone’s voice? We couldn’t possibly detect such things just by listening.
Well, of course, humans cannot. But, as with inventions like the microscope, the telescope, and the stethoscope, technology can help us to see more—or in this case, to listen better. Speech analysis is being used to assess recordings of human voices to help detect conditions such as depression, Parkinson’s, Alzheimer’s, heart disease, concussions, migraines, and PTSD, and even to gauge suicide risk. That list is expected to grow. This may sound like a dream come true, but there are always concerns.
Alexa, What’s My Diagnosis?
So what exactly is speech analysis capable of detecting? Amazon has applied for a patent to analyze speech for signs of illness, such as a sore throat, so that it may tailor ads to someone with that illness, say for a specific medicine. In other words, Alexa knows when you have a cold. On a much more serious note, we may also be closer to preventing some suicides by using speech analysis to predict suicide risk: Researchers are studying the vocal characteristics of people at risk of becoming suicidal in the hopes of developing smartphone-based apps or other software that could detect changes imperceptible to the human ear.
One of the companies helping detect disease through voice is Canary Speech. It holds several patents on mathematical models that analyze speech using some of the more than 2,500 biomarkers found in the sub-language elements of human speech—“sub-language” meaning they are independent of specific words. Any recorded speech can be analyzed, even past recordings of people, and the analysis can be done with a recording of as few as 300 words.
Speech analysis could enhance a clinician’s ability to listen to patients, says Canary Speech CEO Henry O’Connell, adding that “by selecting, through guided machine learning, the proper biomarkers to create the optimum model,” such analysis can augment the clinician’s natural senses. Canary Speech has done the heavy lifting of analyzing thousands of recordings, matched with health data, to identify those biomarkers. Canary’s software can lend credence to a hunch a clinician has about a client or draw the clinician’s attention to something they didn’t even notice, such as an unusually depressed mood that isn’t readily apparent even to the trained ear but is picked up by software able to detect minute changes in vocal qualities. The software is not making a diagnosis, per se, but simply providing more data to the clinician.
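To make the idea of word-independent, “sub-language” features concrete, here is a minimal sketch of how a few basic acoustic measures can be computed from a waveform. It uses only NumPy and a synthetic tone in place of a real voice recording; the three features shown (energy, zero-crossing rate, and a rough autocorrelation-based pitch estimate) are generic textbook illustrations, not Canary Speech’s proprietary biomarkers, and the function name is hypothetical.

```python
import numpy as np

def acoustic_features(signal: np.ndarray, sample_rate: int) -> dict:
    """Compute a few simple word-independent acoustic measures.

    These are illustrative stand-ins for the kinds of sub-language
    features a clinical speech-analysis system might track.
    """
    # Root-mean-square energy: a rough proxy for vocal effort/loudness.
    rms = float(np.sqrt(np.mean(signal ** 2)))

    # Zero-crossing rate: fraction of samples where the sign flips;
    # tends to rise with noisiness or breathiness.
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal))) > 0))

    # Fundamental frequency (pitch) via the autocorrelation peak.
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    min_lag = sample_rate // 400  # ignore implausible pitches above 400 Hz
    peak_lag = int(np.argmax(corr[min_lag:]) + min_lag)
    f0 = sample_rate / peak_lag

    return {"rms_energy": rms, "zero_crossing_rate": zcr, "pitch_hz": f0}

# Usage: a synthetic 150 Hz tone stands in for a recorded voice sample.
sr = 8000
t = np.linspace(0, 0.25, int(sr * 0.25), endpoint=False)
voice = 0.5 * np.sin(2 * np.pi * 150 * t)
print(acoustic_features(voice, sr))
```

A production system would compute features like these over short sliding frames of real audio and feed far richer sets of them into trained models; this sketch only shows the general shape of turning raw sound into numbers a model can use.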