
Speech Technology Could Aid in Dementia Detection

Engineers and professors at the Stevens Institute of Technology in Hoboken, N.J., have developed a software tool that uses natural language processing and machine learning-based algorithms to help identify early signs of neurological disorders, including Alzheimer's disease, dementia and aphasia.

Rajarathnam Chandramouli, professor of electrical and computer engineering at Stevens, developed the application with fellow professor K.P. Subbalakshmi and computer engineering Ph.D. candidate Zongru Shao.

The algorithm works by analyzing speech and writing, mining for patterns or linguistic cues that might hint at health issues.

Chandramouli explains that as Alzheimer's disease, dementia, or aphasia begins to take hold, a person's speech, syntax, and vocabulary shift toward simpler words, phrases, and sentence structures.
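The article does not describe the team's exact features, but the short Python sketch below illustrates the kind of simplification cues such a tool might compute from a transcript: average sentence length, vocabulary diversity, and average word length. The specific features and the helper name are illustrative assumptions, not the Stevens team's method.

```python
# Illustrative sketch only: generic proxies for "simpler language," not the
# Stevens team's actual feature set.
import re

def simplicity_features(transcript: str) -> dict:
    """Compute rough lexical/syntactic proxies for language simplification."""
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    words = re.findall(r"[A-Za-z']+", transcript.lower())
    if not words or not sentences:
        return {"avg_sentence_len": 0.0, "type_token_ratio": 0.0, "avg_word_len": 0.0}
    return {
        # Shorter sentences can indicate simpler sentence structures.
        "avg_sentence_len": len(words) / len(sentences),
        # A lower type-token ratio suggests a smaller active vocabulary.
        "type_token_ratio": len(set(words)) / len(words),
        # Shorter words on average can indicate simpler word choices.
        "avg_word_len": sum(len(w) for w in words) / len(words),
    }

print(simplicity_features("The picture shows a boy. He takes a cookie. The water runs."))
```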

To test the method, the team acquired clinical data from several settings and tested 800 machine learning models against it. That data included transcribed recorded interviews with confirmed Alzheimer's, dementia, and aphasia patients, as well as interviews with a control group unaffected by the three disorders. Interview subjects described pictures and discussed their lives, among other topics.

"The group was interested because these mental healthcare issues present challenging healthcare problems," Chandramouli says. "Many of the medical technologies [used for diagnoses of these diseases], like MRIs, are very expensive. Some in rural areas don't have access to these technologies; for others, insurance companies will deny their claims."

As a result, many of these diseases go undiagnosed in the early stages, when they are more easily (and less expensively) treated, Chandramouli says.

The software can access speech via very inexpensive devices, such as Google Home and Amazon Echo, or even smartphone recordings. Any of the captured speech could be uploaded to the cloud for examination. In addition to the use of speech, the software can also look at text in social media posts and email, for example, to help detect changes to simpler word and syntax usage, according to Chandramouli.

The Stevens team created and ran new, proprietary big data algorithms to compare the text databases against each other, searching for differences between the transcripts of healthy and afflicted subjects. Among the group of algorithms now being tested, the best performer can already distinguish Alzheimer's patients from healthy individuals with 85 to 90 percent accuracy.
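The team's algorithms themselves are proprietary and not described in the article. As a rough sketch of the general task, the example below trains a simple TF-IDF and logistic-regression text classifier to separate toy stand-in transcripts of affected and unaffected speakers and reports its accuracy; the data, features, and model choice are all illustrative assumptions.

```python
# Minimal sketch of transcript classification with scikit-learn.
# The toy transcripts and the TF-IDF + logistic regression pipeline are
# illustrative stand-ins, not the Stevens team's algorithms or clinical data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy stand-ins for interview transcripts (1 = affected, 0 = control).
transcripts = [
    "the boy he um take the thing the cookie and fall",
    "she wash dish water go over the sink it wet",
    "um the the lady she do the the dishes",
    "the boy is reaching for the cookie jar while the stool tips over",
    "his sister is asking him to hand her a cookie from the jar",
    "the mother is drying dishes and has not noticed the overflowing sink",
]
labels = [1, 1, 1, 0, 0, 0]

X_train, X_test, y_train, y_test = train_test_split(
    transcripts, labels, test_size=0.33, stratify=labels, random_state=0
)

# Word n-gram features feeding a simple linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```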

Chandramouli has applied for a patent, and a paper on the research is slated for publication in the next couple of months in the academic journal Expert Systems with Applications.


Related Articles

How to Optimize Your Content for Voice Search

Keeping up with the changing whims of search engines has always been a Sisyphean task, but now a new obstacle has been tossed into the always-evolving SEO mix: voice search. There is some good news, however, for brands and marketing agencies looking to optimize content for voice search. Much of what they need to do is simply home in on many of the same rules they have used to achieve SEO success in the past.

When Dolphins Attack: How Voice Assistants and Speech Recognition Software Can be Fooled

If you own a smart speaker, you know that it can be fun trying to trick Alexa, Siri, or Google into doing or saying something it shouldn't, like obeying your friend who imitates your voice commands. While such ruses are fun and harmless, the truth is that bad actors are undoubtedly attempting trickery of a more nefarious nature, and voice-controlled systems (VCSs) and speech recognition systems (SRSs) can be easily fooled via clever techniques.