
What Speech Can Reveal About Your Health


Ever since Eric Bland’s article “Cough into Your Cell Phone, Get Diagnosis” ran on the Discovery Channel’s Web site in November, the story has spread virally. Imagine being diagnosed in the comfort of your own home. As we weathered flu season and fear of pandemics, newly funded research to diagnose respiratory illnesses with software that analyzes coughs seemed just what the doctor ordered. That this funding was coming from the Gates Foundation only made the report more enticing, especially to those searching for the next killer application. 

Actually, this advanced method of diagnosis is the anti-killer. Pneumonia is a leading cause of death among both children and adults around the world, especially in developing nations. 

Suzanne Smith of Speech Technology and Applied Research (STAR) Analytical Services, a Bedford, Mass., company developing cough-analyzing software for disease diagnosis, says her firm received the Gates Foundation grant to develop “low-cost diagnostics for priority health conditions when there are no other close-by diagnostics available.” People in these areas could have cell phones, she notes in the article. 

STAR aims to identify the cause and severity of respiratory illnesses by adapting speech analysis algorithms to characterize coughs. Doctors already classify coughs they hear as wet or dry. The hope is that acoustic landmarks can further identify cough substructures to help screen for pneumonia and perhaps even hint at whether a bacterial or viral infection is present. “[The cough] is the most common symptom when a patient presents, and we are relying on good old technology from the 19th century,” Smith says in the article, referring to the stethoscope. 
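To make the idea of acoustic landmarks a bit more concrete, here is a minimal sketch of the kind of feature extraction such a system might begin with. The file name, sampling rate, and feature choices are all assumptions for illustration, built on the open-source librosa library rather than on STAR’s actual algorithms:

```python
# A toy feature extraction pass, assuming a recording saved as "cough.wav"
# and the open-source librosa library; the feature set is illustrative
# and is not STAR's actual method.
import librosa
import numpy as np

# Load the recording, resampled to 16 kHz (a common telephony-grade rate).
y, sr = librosa.load("cough.wav", sr=16000)

# A few standard acoustic descriptors that could serve as crude "landmarks":
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # spectral envelope shape
zcr = librosa.feature.zero_crossing_rate(y)               # noisiness (a wet vs. dry cue?)
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # spectral "brightness"

# Summarize each descriptor over time into one fixed-length feature vector.
features = np.concatenate([mfcc.mean(axis=1), [zcr.mean(), centroid.mean()]])
print(features.shape)  # (15,)
```

From summary vectors like this one, a screening tool could look for patterns that separate wet from dry coughs; whether those patterns also track bacterial versus viral infection is exactly what clinical validation would have to establish.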

But technical and strategic barriers stand in the way of this solution. The healthcare industry can be notoriously slow to adopt change and newfangled devices. 

Even in my analog world, I can’t count the number of times healthcare providers have asked me to cough and say “Aaah” while we were physically connected by a stethoscope. What if we started collecting those vocalizations and linked the data to subsequent health status? I can see the benefits of having speaker-dependent cough models—as well as speaker-independent ones—for decision support in choosing appropriate treatments. Healthcare professionals could also be shown the benefits of not having to be in the same physical space as their patients.

Three years ago, another View from AVIOS column focused on the untapped home healthcare market and encouraged application developers to take a closer look at the use of speech technologies in systems designed for telehealth. Successful use of speech in cooperation with other telehealth technologies can yield elegant solutions to real-world problems. 

By overcoming these obstacles, we can lower healthcare costs and reach underserved populations. The global market opportunity is ripe: telehealth services, including remote patient monitoring, treatment, and education, are increasingly becoming reimbursable. So consider how you can improve the health of your business while improving the health of the world. 

Imagine the role speech technologies could play in the future of healthcare to improve diagnosis and patient outcomes. Healthcare in the future might involve screenings in emergency rooms or even shopping malls. To get there, existing mobile voice technologies must be extended. Fortunately, the following process is already inherent in the speech technology life cycle (a rough code sketch follows the list):

  • collect human speech and nonspeech data;
  • use vocalization analysis to identify regions of interest; and
  • apply statistical algorithms for classification.
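As a rough end-to-end sketch of those three steps, the toy below uses synthetic noise bursts in place of collected recordings, a simple energy threshold to mark regions of interest, and an off-the-shelf scikit-learn classifier for the final labeling. Every name, threshold, and label in it is hypothetical; it shows the shape of such a pipeline, not a working diagnostic:

```python
# Toy illustration of the three-step cycle: synthetic "recordings" stand in
# for collected data, an energy threshold marks regions of interest, and a
# standard classifier does the labeling. All values here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def regions_of_interest(signal, frame=256, threshold=0.02):
    """Step 2: keep frames whose RMS energy exceeds a threshold."""
    n_frames = len(signal) // frame
    frames = signal[: n_frames * frame].reshape(n_frames, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return frames[rms > threshold]

def features(frames):
    """Summarize the active frames with a few coarse statistics."""
    if len(frames) == 0:
        return np.zeros(3)
    return np.array([
        frames.std(),                                      # energy spread
        np.abs(np.diff(np.sign(frames), axis=1)).mean(),   # zero-crossing proxy
        len(frames),                                       # amount of activity
    ])

# Step 1: "collect" labeled recordings (noise bursts as stand-ins).
rng = np.random.default_rng(0)
X = [features(regions_of_interest(rng.normal(0, s, 8000)))
     for s in rng.uniform(0.01, 0.1, 200)]
y = [0 if x[0] < 0.05 else 1 for x in X]  # hypothetical wet/dry label

# Step 3: apply a statistical classifier to the feature vectors.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([X[0]]))
```

In a real system, of course, the labels would come from clinician-confirmed diagnoses rather than a threshold, and the features would be far richer; the point is only that the life cycle above maps cleanly onto tools developers already have.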

While the provider-patient relationship will change, this new model of medicine is meant to augment, not replace, the healthcare provider. 

Analyzing speech and other vocalizations can provide a noninvasive way to probe at least our respiratory and nervous systems. Studies of clear versus conversational speech already have correlates in clinical domains, such as Parkinson’s disease and sleep deprivation. We might be able to tell when patients are on or off their medications, overmedicated, or self-medicating. Increasingly, it’s not just what we say or the way we say it that counts, but also how we feel when we open our mouths. So let’s see if speech technology can be adapted to tell whether we need to take two aspirin and call in the morning or voice-dial 911. Will we soon be able to cough into our cell phones and get an immediate diagnosis? Maybe not, but this germ of an idea is nothing to sneeze at.


Lorin Wilde, Ph.D., is chief technology officer of Wilder Communications, a firm specializing in novel applications of speech technology, and a member of the board of directors of AVIOS. She can be reached at wildercom@gmail.com or in her consulting role at LorinW@s-t-a-r-corp.com.
