Speech Technology Magazine

 

Speech Technology Could Aid in Dementia Detection

New algorithm looks at speech patterns to help identify early stages of Alzheimer's disease, dementia, and aphasia.
By Phillip Britt - Posted Jan 22, 2018

Engineers and professors at the Stevens Institute of Technology in Hoboken, N.J., have developed a software tool that uses natural language processing and machine learning-based algorithms to help identify early signs of neurological disorders, including Alzheimer's disease, dementia, and aphasia.

Rajarathnam Chandramouli, professor of electrical and computer engineering at Stevens, developed the application with fellow professor K.P. Subbalakshmi and computer engineering Ph.D. candidate Zongru Shao.

The algorithm works by analyzing speech and writing, mining for patterns or linguistic cues that might hint at health issues.

Chandramouli explains that when people begin to develop Alzheimer's disease, dementia, or aphasia, their speech changes: syntax and vocabulary shift toward much simpler words, phrases, and sentence structures.
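The kinds of linguistic cues described above can be approximated with simple lexical measures. The sketch below is purely illustrative (it is not the Stevens team's method): it computes three common proxies for vocabulary and syntactic simplification from a text sample, using only Python's standard library.

```python
import re

def lexical_features(text):
    """Compute simple lexical-complexity cues from a text sample.

    Illustrative proxies only: type-token ratio (vocabulary diversity),
    mean word length, and mean sentence length. Lower values suggest
    simpler word and syntax usage.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not words or not sentences:
        return {"ttr": 0.0, "mean_word_len": 0.0, "mean_sent_len": 0.0}
    return {
        "ttr": len(set(words)) / len(words),           # unique words / total
        "mean_word_len": sum(map(len, words)) / len(words),
        "mean_sent_len": len(words) / len(sentences),  # words per sentence
    }

rich = "The intricate melodies reverberated, evoking memories of distant summers."
plain = "It was nice. It was good. We had fun."
print(lexical_features(rich))   # higher diversity, longer words and sentences
print(lexical_features(plain))  # the "simpler" profile
```

A real system would track how these measures drift over time for one speaker, rather than comparing two unrelated texts as shown here.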

To test the method, the team acquired clinical data from several settings and 800 machine-learning data models. That data included transcribed recorded interviews with confirmed Alzheimer's, dementia, and aphasia patients as well as interviews with a control group unaffected by the three disorders. Interview subjects described pictures and discussed their lives, among other topics.

"The group was interested because these mental healthcare issues present challenging healthcare problems," Chandramouli says. "Many of the medical technologies [used for diagnoses of these diseases], like MRIs, are very expensive. Some in rural areas don't have access to these technologies; for others, insurance companies will deny their claims."

As a result, many of these diseases go undiagnosed in the early stages, when they are more easily (and more inexpensively) treated, Chandramouli says.

The software can access speech via very inexpensive devices, such as Google Home and Amazon Echo, or even smartphone recordings. Any of the captured speech could be uploaded to the cloud for examination. In addition to the use of speech, the software can also look at text in social media posts and email, for example, to help detect changes to simpler word and syntax usage, according to Chandramouli.

The Stevens team created and ran new proprietary big-data algorithms to scan the text databases and compare them against each other, searching for differences between the transcripts of healthy and afflicted subjects. Among the algorithms now being tested, the best performer can already distinguish Alzheimer's patients from healthy individuals with 85 to 90 percent accuracy.
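The proprietary algorithms are not public, but the general idea of separating transcripts into "healthy" and "affected" groups by their linguistic features can be sketched with a minimal nearest-centroid classifier. Everything below is hypothetical: the labels, the feature choices (type-token ratio and mean sentence length), and the numbers are invented for illustration.

```python
# Minimal nearest-centroid classifier over transcript feature vectors.
# Illustrative only -- not the Stevens team's algorithm or data.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(samples):
    """samples: dict mapping label -> list of feature vectors."""
    return {label: centroid(vecs) for label, vecs in samples.items()}

def classify(model, vec):
    """Return the label whose centroid is closest to vec."""
    return min(model, key=lambda label: distance(model[label], vec))

# Hypothetical [type-token ratio, mean sentence length] features.
training = {
    "control":  [[0.72, 14.0], [0.68, 15.5], [0.75, 13.2]],
    "affected": [[0.45, 6.1], [0.50, 7.3], [0.42, 5.8]],
}
model = train(training)
print(classify(model, [0.70, 14.8]))  # near the control centroid
print(classify(model, [0.47, 6.5]))   # near the affected centroid
```

A production system would use far richer features and a properly validated model; the point here is only the shape of the task: extract features from transcripts, learn group profiles, and assign new samples to the nearest profile.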

Chandramouli has applied for a patent, and a paper on the research is slated to be published in the next couple of months in the academic journal Expert Systems with Applications.

