Kristina Simonyan, MD, PhD, DrMed, and Davide Valeriani, PhD, discussed the translational potential of DystoniaNet, and its potential to be adjusted for use in additional disease states.
This is the second of a 2-part interview. For part 1, click here.
Recently, Kristina Simonyan, MD, PhD, DrMed, and Davide Valeriani, PhD, conducted a study of the artificial intelligence (AI)-based machine learning platform they developed to evaluate its ability to correctly diagnose dystonia, a disorder that has been notoriously challenging to diagnose.
The data showed that the platform could identify the condition with 98.8% accuracy in a matter of 0.36 seconds. Specifically, it showed 98.2% accuracy in diagnosing laryngeal dystonia, 100% in diagnosing cervical dystonia, and 98.1% in diagnosing blepharospasm, while referring 3.5% of patients for further examination.
Simonyan, who is the director of Laryngology Research at Mass Eye and Ear, an associate neuroscientist at Massachusetts General Hospital, and associate professor of Otolaryngology-Head and Neck Surgery at Harvard Medical School, noted the significance of these findings to NeurologyLive with the context that dystonia is the third most common movement disorder, affecting more than 300,000 people in the United States alone.
To find out more about the potential of this platform, NeurologyLive spoke with Simonyan as well as Valeriani, who is a postdoctoral research fellow in the Dystonia and Speech Motor Control Laboratory at Mass Eye and Ear and Harvard Medical School.
Kristina Simonyan, MD, PhD, DrMed: With any research, there should be further validation, even with larger clinical studies. We are in the process of conducting such studies; we have been collaborating with clinics across the United States to launch studies with prospective testing of this biomarker in patients who come to the clinic.
But in terms of its use, it's quite straightforward because it's cloud-based software and can be accessed from anywhere with an internet connection. Basically, it relies only on brain MRI that is already clinically acquired in many places. The wait time for MRI acquisition, given the schedules of a clinic and radiology department, may well be longer than the processing of the data. In fact, there is no preprocessing for the user to do: the MRIs from the scanner just need to be uploaded to this cloud-based platform, and the diagnosis is output as a probability of having dystonia, of not having dystonia, or of needing further evaluation. That was another advancement we incorporated into this platform, a so-called dynamic range, where the platform refers the patient for further evaluation when the certainty of the diagnosis is below 10%. We incorporated this dynamic range to reduce AI-based errors and to have more collaboration between clinicians and AI, rather than AI dictating what the diagnosis is.
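The referral logic Simonyan describes can be thought of as a simple triage rule layered on top of the model's probability output. The sketch below is purely illustrative, not the published DystoniaNet implementation; the function name, the certainty calculation, and the 10% margin as a scaled-certainty cutoff are assumptions made for the example.

```python
# Hypothetical sketch of the "dynamic range" referral rule described above.
# Names and the certainty formula are illustrative assumptions, not the
# actual DystoniaNet code.

def triage(prob_dystonia: float, margin: float = 0.10) -> str:
    """Map a model probability to a clinical recommendation.

    Certainty is measured as the distance from the 0.5 decision
    boundary, rescaled to [0, 1]. When certainty falls below
    `margin`, the case is referred back to the clinician rather
    than auto-labeled.
    """
    certainty = abs(prob_dystonia - 0.5) * 2  # 0 = maximally uncertain
    if certainty < margin:
        return "refer for further examination"
    return "dystonia" if prob_dystonia >= 0.5 else "no dystonia"

print(triage(0.988))  # confident positive -> "dystonia"
print(triage(0.52))   # near the boundary -> referral
```

The key design point is the one Simonyan emphasizes: borderline cases are handed back to the clinician instead of being forced into a binary label, which is how the platform referred 3.5% of patients for further examination in the study.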
Davide Valeriani, PhD: I want to add that even though we need to do this extra clinical work to validate it, we have already started doing it. In the paper we published, we mentioned this in the supplementary information: we validated it across a number of different scanner manufacturers, scanner protocols, and scanning parameters. We really wanted to see whether any of these parameters were influencing the decision of the algorithm, and they weren't.
The part of the results that was most interesting and surprising to me was that we trained DystoniaNet entirely on images from 3-Tesla MRI scanners, which are research-grade scanners. But then we tested DystoniaNet on clinical scanners, which are usually less accurate than research scanners in terms of image acquisition. It was equally accurate on both, suggesting that we could really translate it from day 1 to the clinics and start looking at how accurate it would be in a prospective study.
Kristina Simonyan, MD, PhD, DrMed: We obviously need to do more work. As with any research or clinical testing tool, there is always room for improvement. We are preparing to conduct studies in which we would be able to provide clinicians with yet another level of diagnostic accuracy by incorporating differential diagnosis. The challenge is that dystonia's symptoms overlap with those of other neurological and non-neurological conditions; multiple sclerosis is one example. Many patients have dystonic symptoms due to secondary causes. The list of differential diagnoses is very, very long, but we have to start somewhere.
We are thinking about the most commonly misrepresented and misdiagnosed diseases, where we can start testing this new platform for differential diagnosis and incorporate it into future iterations, so that the clinician would get not only the probability of a dystonia diagnosis, let's say a 73% probability of dystonia, but also the probability of a differential diagnosis with other disorders. That is one of the extensions we are working toward, and we have some exciting results there, too, on treatment outcomes: on how well these biomarkers can predict the outcome before the treatment is given, which would cut a significant amount of healthcare costs, time, and effort. Those are the kinds of extensions we are currently working on.
Davide Valeriani, PhD: I want to add that we have to remember that diagnosing dystonia is currently very challenging and can take more than 10 years for a patient. What we are really doing here is implementing a tool that could accelerate this process and provide an answer to the patient in a timely manner, and then also help identify the best treatment for that person. That is what we are moving toward as a next step.
Kristina Simonyan, MD, PhD, DrMed: That said, this is absolutely not intended to replace neurologists, radiologists, primary care physicians, laryngologists, or whatever specialists this patient may see, and there is a wide range. That would be clinically incorrect. That is not how blood tests work, and that is not how AI tests should work. This is meant to be a diagnostic platform that provides an objective measure of the disease against normative values. It should be taken as such, to help in decision-making, not to replace the physician or clinician who sees the patient. It is there to provide the objective measure, the biomarker, that we currently lack, to aid in clinical decision-making. I want to stress as much as I can that this is meant to improve clinical practice, not replace it, because the training of physicians, speech-language pathologists, or any clinicians cannot be replaced.
Davide Valeriani, PhD: I would say it is entirely possible. We already have data supporting this platform in other relevant and similar disorders in this area. Machine learning has been broadly applied in the medical domain across various disorders, but so far it has mostly targeted the more common ones; Parkinson disease is one example.
But the problem when we use machine learning for rare or less common diseases is really the lack of data available to train our models. In a sense, dystonia gave us an opportunity to train this machine-learning algorithm because we had a large data set of neuroimages available. But we also made the development of DystoniaNet open to the possibility of extending it to other disorders, in the sense that, for example, we took away all the preprocessing required and started from raw structural MRI. Every patient can get a structural MRI, and these data are available for many different disorders.
We are currently testing this on some of these disorders, too, and we are seeing very promising results, above 80% accuracy. We want to see how well this architecture, this sequence of layers of DystoniaNet, would generalize to other disorders. Of course, we would need to retrain part of it, because the problem would now be diagnosing a person with, let's say, Parkinson disease versus a healthy control. But the core components of DystoniaNet would remain the same, and that would allow us to expand and extend this architecture to other disorders.
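Valeriani's point, reusing a trained core while retraining only the task-specific part, is a standard transfer-learning pattern. The toy sketch below illustrates the idea in outline only; the class, methods, and weight values are invented for illustration and bear no relation to the real DystoniaNet code, which is a deep neural network rather than the simple weighted sums shown here.

```python
# Illustrative sketch (not the published DystoniaNet code) of reusing a
# trained architecture for a new disorder: keep the learned core feature
# extractor fixed and retrain only the task-specific classifier head.

class DiagnosticNet:
    def __init__(self, core_weights, head_weights):
        self.core_weights = core_weights  # learned once, e.g. on dystonia data
        self.head_weights = head_weights  # task-specific classifier weights

    def extract_features(self, raw_mri):
        # Stand-in for the shared convolutional core; here just weighted sums.
        return [w * sum(raw_mri) for w in self.core_weights]

    def with_new_head(self, new_head_weights):
        # Build a model for a new task (say, Parkinson vs healthy control)
        # that shares the frozen core but swaps in a retrained head.
        return DiagnosticNet(self.core_weights, new_head_weights)

dystonia_net = DiagnosticNet(core_weights=[0.2, 0.5], head_weights=[1.0, -1.0])
parkinson_net = dystonia_net.with_new_head(new_head_weights=[0.3, 0.7])
# The core is shared between both models; only the head differs.
```

The design choice this illustrates is the one stated in the interview: because the core feature representation is kept, only a fraction of the model needs retraining per disorder, which matters when labeled data for a rare disease are scarce.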
Kristina Simonyan, MD, PhD, DrMed: We are very excited about the results, and we hope that clinicians will find this interesting and useful in their practice. We will keep moving forward, working closely with clinicians to gather feedback on their confidence in using this tool, and we will listen closely to that feedback to further improve the platform so that it better fits their needs. Hopefully, this can make a real clinical impact on diagnosing this disorder, and potentially extend to other disorders as well, since we take a somewhat different approach in constructing this algorithm and in how we use data previously collected in other disorders.
Transcript edited for clarity.