The director of IT and Neuroinformatics Development at the Buffalo Neuroimaging Analysis Center provided thoughts on artificial intelligence’s role in neuroimaging and treating multiple sclerosis.
Once regarded as a general phenomenon, artificial intelligence (AI) is beginning to show clinicians the specific capabilities it has to offer within neurology. At the Buffalo Neuroimaging Analysis Center (BNAC), the focus remains on applying quantitative image analysis methods to neuroimaging data to better characterize the onset, progression, and treatment of neurological diseases, mainly multiple sclerosis (MS).
Michael Dwyer, PhD, director, IT and Neuroinformatics Development, BNAC, and assistant professor of neurology and biomedical engineering, University at Buffalo, has headed several of these projects at the center. Highlights of his work include the development and validation of a method for detecting and quantifying demyelination and remyelination in vivo, the development of a method that dramatically improves on the precision of conventional tissue-specific atrophy measurement in clinical routine, and the investigation of the influence of the MRI “connectome” on cognition in MS.
Dwyer believes that the possibilities with AI and neuroimaging are endless and extend beyond MS alone. In a new iteration of NeuroVoices, Dwyer provided context on the work being done at the center, the potential of these AI-based modalities, and the ways clinicians can benefit most from them.
Michael Dwyer, PhD: AI is a loaded term within the field and means a lot of different things to a lot of people. There’s not a bright line between what’s a classical approach and what’s AI. In general, the idea is that we’re trying to use techniques that can behave a bit more like the way humans do, in that they can learn over time. They get better as they see examples and are trained on things, as opposed to the classic algorithm. Ten years ago, we’d write something like “find the thalamus by looking for this edge strength in this contrast, and find something that’s this bright,” or something [along those lines]. Now, instead, we can take the training data, make lots of examples, and use a system that keeps looking at those answers to teach itself how to get better at identifying a specific structure, for example. There are a million different applications, but that’s the basic idea. Instead of writing these algorithms in a recipe way, we can use artificial intelligence to do it in a way that’s more robust.
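The contrast Dwyer describes, a hand-written "recipe" versus a system that tunes itself from labeled examples, can be illustrated with a deliberately toy sketch. This is not BNAC's actual method; the fixed threshold, the labeled samples, and the brute-force threshold search are all invented for illustration.

```python
# Toy illustration (not BNAC's actual approach): a hand-coded rule
# versus a decision rule "learned" from labeled examples.

def classical_rule(intensity):
    # Hand-written recipe: "a thalamus voxel is anything brighter
    # than 0.5" -- a fixed, hypothetical cutoff baked into the code.
    return intensity > 0.5

def learn_threshold(samples):
    # samples: list of (intensity, is_thalamus) training pairs.
    # Try each observed intensity as a candidate cutoff and keep the
    # one that classifies the most training examples correctly --
    # the learning-from-examples idea in miniature.
    best_t, best_correct = 0.0, -1
    for t in sorted(x for x, _ in samples):
        correct = sum((x > t) == label for x, label in samples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# On a scanner where thalamus voxels happen to sit above ~0.3, the
# fixed recipe misses dimmer thalamus voxels, while the learned
# cutoff adapts to the examples it was shown.
train = [(0.2, False), (0.25, False), (0.35, True), (0.4, True), (0.6, True)]
t = learn_threshold(train)
print(classical_rule(0.35))  # False: the fixed rule misses this voxel
print(0.35 > t)              # True: the learned cutoff catches it
```

Real systems replace the single threshold with a deep network over whole image volumes, but the shift is the same: the behavior comes from training data rather than hand-tuned constants.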
We know that AI has a lot of capabilities, including the discovery of new areas or things. At BNAC, we’re closely associated with the neurology clinic here, so we see a lot of the day-to-day needs. One of the spaces where AI can shine is processing data that may not be perfect research quality. We do these scans all the time in research settings to find aspects of the disease that we can measure, for example, brain atrophy. We measure the loss of tissue over time in clinical trials, where we have nice, highly standardized MRI protocols that have been constant over the years of the study. There, we can take an hour of scanning time, which just doesn’t happen in clinical routine. One of the biggest challenges that I see is that the metrics we’re able to get in clinical trial and research settings aren’t always translatable to the clinic. AI is a tool that can help bridge that gap.
We’ve been trying to use AI to translate the existing metrics that are being used in clinical trials to work on clinical-quality imaging instead of making new ones. One of the biggest things we’ve done recently is called DeepGRAI, which is deep gray rating via AI. Essentially, it’s a tool to measure thalamic atrophy. We know that thalamic atrophy is extremely important in MS. It’s a strong predictor of clinical disability and cognition. The thalamus tends to atrophy relatively steadily over the [course of] disease, so it’s also potentially a good biomarker for treatment response, but it’s not measurable in clinical routine. We took 4000 scans from all kinds of different centers and were able to use that data to train an artificial intelligence system to automatically locate and delineate the thalamus on a T2-FLAIR image. This is the lowest common denominator of imaging; it’s an image that anybody who goes to any clinic pretty much anywhere in the world for MS will receive. Now we have a tool that can make these clinical research measures, but on clinical routine images.
My work is in the multiple sclerosis arena, so I can’t speculate too much, but I do know that MS is similar to other diseases in that we have changes that happen quickly and are easy to spot. These are things like new lesions that show up on a brain scan. A clinician or radiologist will catch that and say, “You have some new activity.” At the same time, we have this brain atrophy, which is much more insidious and, in many cases, particularly progressive MS, even more important. These are subtle changes that take a long time and are not visible to the naked eye. The rate of brain atrophy is about 0.4% per year. No radiologist can see that over a meaningful period. Maybe 5 years, 10 years down the road, you can see it.
The challenge in MS and other diseases with that slow, progressive activity is first to know whether it’s happening. You need the tools to measure that and to tell whether you’re a person who is at higher risk of developing more disability over time. These kinds of measures of brain atrophy are good markers of disease progression. A lot of times, when a clinician is making a treatment decision, they’re weighing risk and benefit. You want to know that prognosis before making the decision. The third point is to know whether you’re seeing a treatment response: is this treatment slowing that brain atrophy? The bottom line is that the tools currently out there don’t let a clinician make that assessment for years, even though you want to know it early. Those kinds of questions go beyond just MS. We have similar questions in Alzheimer disease, and there are potential changes in Parkinson disease that I’m sure could benefit from these AI tools. The real benefit of them in this context is that they can be really robust. It puts things into place that can be translated, so that these ivory tower research tools that we’ve always had in our arsenal can finally start to percolate out to real clinical use.
Transcript edited for clarity.