Applying Artificial Intelligence to Monitor Multiple Sclerosis Disease Progression

Michael Dwyer, PhD, director of IT and Neuroinformatics Development at the Buffalo Neuroimaging Analysis Center, provided insight on how artificial intelligence techniques may be used to monitor disease progression in multiple sclerosis.

This is part 2 of a 2-part interview.

An up-and-coming phenomenon, artificial intelligence (AI) technologies have the potential to transform many aspects of patient care, as well as administrative processes within provider, payer, and pharmaceutical organizations. There are several types of AI, including machine learning, which learns from experience without being explicitly programmed, and deep learning, which learns from raw or nearly raw data without the need for feature engineering.

Researchers at the Buffalo Neuroimaging Analysis Center (BNAC) have been at the forefront of developing and validating AI-like algorithms to improve patient care, specifically for those with multiple sclerosis (MS). At the 2023 Americas Committee for Treatment and Research in Multiple Sclerosis (ACTRIMS) Forum, February 23-25, in San Diego, California, Michael Dwyer, PhD, presented a talk on the use of AI and MRI for MS care, and how each can be applied. In his presentation, Dwyer discussed several areas of potential, including MR acquisition, image segmentation, diagnosis, prognosis, and others.

At the forum, Dwyer, director of IT and Neuroinformatics Development at BNAC, spoke on the increased exposure to AI and whether these techniques should be a part of medical schooling going forward. Additionally, he discussed the many ways AI can help both patients with MS and their clinicians in terms of tracking long-term disease progression and identifying erroneous patterns.

NeurologyLive®: Should learning AI be incorporated more into the neurology education space?

Michael Dwyer, PhD: That's a good question; I don't know that I have a short answer for it. I think they should be aware, so I would answer that in two ways. First, I'm going to give a very boring answer, because I think there's no replacement for the basics: basic statistics, basic familiarity with how to do hypothesis testing. AI is a wonderful, powerful tool, but it is also so powerful that it can fool us very easily.

We've seen a lot of AI techniques that seemed promising and then fizzled out because the statistical foundations weren't necessarily there, we didn't test them properly, or we trained them on one dataset and then they didn't translate or work on another one. I think that it should be part of a more holistic statistical and general research methods framework. Clinicians and the general public don't need to know how to do deep learning. You don't need to know how to sit there and program something in pytorch. What I think they need to know how to do right now, because of the explosion that we were talking about, is to separate the baby from the bathwater: how to recognize a reliable AI tool, what they can trust and what they can't.

The editors of Radiology, and the newer journal Radiology: Artificial Intelligence, have released the CLAIM guidelines, checklists to help clinicians and publishers ensure AI is properly and ethically used in these areas. That kind of thing is very important for people to understand. What's good AI? What's bad AI? How much should you use it, and how should you use it? Take ChatGPT, that's something everybody is so excited about. It is an amazing tool if you're trying to write a document. If you look at clinicians and the real world of day-to-day clinical work, they spend a lot of their time filling out templates. Where they fill out forms, ChatGPT-type technology can probably help make those templates much better. But you have to use it in a way where an expert is reviewing everything that it's saying, and you can't rely on it. That's the key. We see the negative where students use it to write their term papers. But it's just a tool that can be used badly or used to great value. We need to balance those things.

Are there ways in which AI can be used specifically to monitor disease progression?

There are a couple of areas where it can help with that. A lot of people think about AI as replicating what humans do. We train a model to do something faster, or maybe more reliably, but not fundamentally differently. That's called supervised learning: we tell it what we want it to learn. Unsupervised learning is where we tell an AI tool to look at data and see if it can find patterns. There have been some really interesting advances in that with clustering, for example. Arman Eshaghi and his group in the UK were able to identify latent clusters of MS, different types of disease pathology, and say, "this may have a different progression going forward." If we can identify those kinds of subtypes early, we can potentially intervene earlier and know whether people have different responses to different treatments.
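To make the unsupervised-learning idea concrete, here is a minimal clustering sketch in Python. The "patient feature" vectors are synthetic stand-ins (not real measurements, and not the method Eshaghi's group used); the point is only that groups can emerge from data without anyone labeling them first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, synthetic per-patient feature vectors (imagine, say,
# lesion load and an atrophy measure) drawn from two latent groups.
group_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))
group_b = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(50, 2))
data = np.vstack([group_a, group_b])

def kmeans(points, k, iters=20):
    """Plain k-means: alternate nearest-centroid assignment and centroid update."""
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of the points assigned to it.
        centroids = np.array([points[labels == c].mean(axis=0) for c in range(k)])
    return labels, centroids

labels, centroids = kmeans(data, k=2)
# The two latent groups are recovered without ever telling the algorithm
# which point belongs where -- that is the "unsupervised" part.
```

In a supervised setting, by contrast, each patient would come with a known outcome label and the model would be trained to reproduce it.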

Another area is being able to synthesize data from a lot of different places. Humans are good at analysis and reduction; we're not always great at putting together lots of data points in the same way. These AI tools can be a very helpful assistant to integrate genomics, connectomics, other serum markers, and imaging all together to make predictions based on lots of data points, as opposed to the kind of clinical algorithms where we just look at 2 or 3 things. That's another potential way that we can [use AI], and we are already starting to see a shift in that.
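As a toy illustration of that integration step, the sketch below concatenates measurements from several hypothetical modalities into one joint feature vector and applies a linear score. Every name, value, and weight here is an invented assumption; a real model would learn its weights from data rather than have them written in.

```python
import numpy as np

# Hypothetical per-patient measurements from different sources -- the
# names and values are illustrative, not real clinical data.
imaging  = np.array([0.8, 0.3])   # e.g. lesion volume, atrophy measure
serum    = np.array([0.5])        # e.g. a blood biomarker level
genomics = np.array([0.0, 1.0])   # e.g. two risk-allele indicators

# Integration step: one joint feature vector, instead of a clinical
# rule of thumb that looks at only 2 or 3 of these values.
features = np.concatenate([imaging, serum, genomics])

# A linear score stands in for whatever model sits on top;
# these weights are made up for the sketch.
weights = np.array([0.4, 0.2, 0.3, 0.1, 0.5])
risk_score = float(features @ weights)
```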

I should mention, too, that deep learning has been a big buzzword for a while. What sets deep learning apart from traditional machine learning is that it learns on raw data. With traditional machine learning, it would still learn the rules, but you had to decide what it was going to look at. You had to take an image and say, "I'm going to measure the thalamus, the cortex, the amount of lesions," or, "I'm going to take a clinical assessment, the EDSS, and break it into these 4 scores or these specific subscores." With deep learning, you feed it much rawer data. For us, it works on raw MRI. Instead of extracting those features, we just tell it to look at who's progressing and who's not and try to find predictors from the images. There are people doing gait analyses, people doing wearables, and it's all from the raw data, so we don't have to have somebody sit there and be a gatekeeper of the information and say, "These are the features we should pull out."
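The contrast he describes between hand-engineered features and raw-data input can be sketched as two pipelines. The 8x8 "scans" and the handcrafted features below are purely hypothetical stand-ins; the sketch only shows what each approach hands to the downstream model.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in for a batch of 100 raw scans: tiny 8x8 "images".
scans = rng.normal(size=(100, 8, 8))

# --- Traditional machine learning: a human decides the features. ---
def handcrafted_features(scan):
    # An expert picks what to measure, e.g. overall intensity and its
    # spread; everything else in the image is thrown away.
    return np.array([scan.mean(), scan.std()])

feature_table = np.stack([handcrafted_features(s) for s in scans])

# --- Deep-learning-style input: the raw pixels go in directly. ---
# No gatekeeping step: the model itself gets to decide which pixel
# patterns predict "progressing vs. not progressing".
raw_inputs = scans.reshape(len(scans), -1)

print(feature_table.shape)  # (100, 2)  -- 2 expert-chosen numbers per scan
print(raw_inputs.shape)     # (100, 64) -- every pixel available to the model
```

The cost of the first pipeline is exactly the risk mentioned next: if the expert's chosen features average away a subtle but informative detail, the model can never recover it.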

It's very powerful there because it can pick up on things we miss. It can pick up on subtleties that we may not see: maybe one part of the thalamus is important but another isn't, and when we just extract the whole thalamus, we lose that. It's potentially powerful, and it's a very exciting field. We're going to see a lot more going forward. But we need to be careful. We need to go in with our eyes open. We need to go step by step and validate these tools carefully. But there's tremendous value here.

Transcript edited for clarity.

© 2024 MJH Life Sciences

All rights reserved.