The chief medical officer and cofounder of Linus Health discussed how changes in voice may help serve as early indicators for late-life cognitive deficits. [WATCH TIME: 3 minutes]
"What’s different here is we can offer quantitative metrics of that, which is immensely powerful. But I don’t think it’s doing anything else than what a really good clinician does, other than put the numbers to it. We then make it scalable and more broadly useful."
At the 2022 Alzheimer’s Association International Conference (AAIC), July 31 to August 4, in San Diego, California, data from a multimodal machine learning investigation showed that including voice/speech features increases the accuracy of cognitive status classification. The DCTClock, an FDA-listed cognitive assessment solution designed by Linus Health, was administered as part of a large-scale, longitudinal study of 495 participants and analyzed alongside scores on the Mini-Mental State Examination (MMSE), a widely used instrument in the field.
Linus’ subsequent assessment, the Digital Clock and Recall (DCR), was also incorporated into the analysis and was found to correlate more strongly with the MMSE than the DCTClock (r = 0.43 vs 0.38). Results from multilayer perceptron artificial neural network (ANN) models indicated that combining acoustic features with cognitive assessments achieves greater classification accuracy for the healthy (area under the curve [AUC], 0.95), mild cognitive impairment (AUC, 0.93), and Alzheimer disease (AUC, 0.97) groups than models using either acoustic features or cognitive assessments alone.
Senior investigator Alvaro Pascual-Leone, MD, PhD, believes the findings serve as a reminder of the potential of analyzing voice and speech. Pascual-Leone, the chief medical officer and cofounder of Linus Health, recently spoke with NeurologyLive® to discuss whether these findings change the way neurologists care for late-life patients, as well as some of the vocal signs and signals that may hint at underlying cognitive decline.