NeuroVoices: Christian Meisel, MD, PhD, on Forecasting Seizures

December 9, 2020
Marco Meglio

Marco Meglio, Associate Editor for NeurologyLive, has been with the team since October 2019. Follow him on Twitter @marcomeglio1.

Christian Meisel, MD, PhD, department of neurology, Universitätsmedizin Berlin, and Berlin Institute of Health, discussed the landscape for devices that forecast seizures, including the use of multi-modal wristband sensors.

At the American Epilepsy Society (AES) Annual Meeting, December 4–8, 2020, Christian Meisel, MD, PhD, presented data from a study that found that multi-modal wristband sensor data from easy-to-use, non-invasive devices, in combination with deep learning, can provide statistically significant and clinically useful seizure forecasting. He and colleagues applied deep learning networks such as long short-term memory (LSTM) and 1D convolutional (1DConv) models to data from the Empatica E4 wristband device for 69 persons with epilepsy (PWE).

Results showed that seizure forecasting was significantly better than chance for 43.5% of patients, yielding a mean improvement over chance (IoC) of 28.5 (±2.6) and a mean sensitivity of 75.6 (±3.8). In addition, the mean prediction horizon was 1896 (±101) seconds, which was deemed enough time to give reasonable advance warning of seizures.

Meisel, department of neurology, Universitätsmedizin Berlin, and Berlin Institute of Health, understands the significance of being able to correctly predict seizures and the improved outcomes that come with it. As part of our NeuroVoices series, Meisel sat down to discuss his study, the landscape for seizure forecasting devices, challenges in creating these devices, and the long-term importance of his results.

NeurologyLive: Can you just give me a little bit more background, your study and how it came about?

Christian Meisel, MD: Our study was motivated by the potential benefits for patients and clinicians that seizure risk assessments or seizure forecasting may have. These benefits have long been known. This is a field that has had an active research community for several decades. The benefits are also pretty clear. If you ask a patient with epilepsy what they’re most concerned about, they will usually tell you it’s the unpredictability of seizures. This was recently confirmed in a study by the Epilepsy Foundation. If there were some way to tell these patients when seizure risk is high or low, it would help them plan their days better and potentially avoid certain activities. Also, for clinicians, it would give them a better objective measure of when seizure risk is high and low, thus allowing them to target their therapies better. For example, a clinician could think about titrating and targeting therapies to periods when risk is high, and then maybe lowering them when risk is low. Those are the obvious benefits that motivated our study.

In the study, we used a wearable device called the Empatica E4, which monitors certain data modalities, including actigraphy, temperature, blood volume pulse, and electrodermal activity. We equipped some patients in the EMU with this device during monitoring and recorded over several days. Along with this device, we had the gold standard data of EEG and video, so we knew exactly when a seizure started and when it ended. We then used machine learning to train only on the wearable data and see whether that data alone would be sufficient to predict when a seizure would occur. We found that this is principally feasible, and that we were able to predict the seizures better than chance probability in roughly 43% of patients.
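To make the setup concrete, here is a minimal, illustrative sketch (not the authors' code) of how multi-channel wristband recordings can be cut into fixed windows and labeled against EEG/video-confirmed seizure onsets for training a forecaster. All names, sampling rates, and the 60-second window length are assumptions for the toy example; only the forecast horizon of roughly 1800 seconds echoes the study's reported value.

```python
import numpy as np

def make_windows(signals, seizure_onsets, fs=4, win_s=60, horizon_s=1800):
    """Segment multi-channel wristband data into fixed labeled windows.

    signals: array of shape (n_channels, n_samples), e.g. rows for
             actigraphy, temperature, blood volume pulse, and EDA.
    seizure_onsets: sample indices of EEG/video-confirmed seizure starts.
    A window is labeled 1 ("pre-seizure") if a seizure begins within
    horizon_s seconds after the window ends, else 0.
    """
    win = fs * win_s
    horizon = fs * horizon_s
    X, y = [], []
    n_samples = signals.shape[1]
    for start in range(0, n_samples - win + 1, win):
        end = start + win
        X.append(signals[:, start:end])
        # positive label when any seizure onset falls inside the horizon
        y.append(int(any(end <= t < end + horizon for t in seizure_onsets)))
    return np.stack(X), np.array(y)

# toy example: 4 channels, 1 hour at 4 Hz, one seizure at the 40-minute mark
rng = np.random.default_rng(0)
sig = rng.standard_normal((4, 4 * 3600))
X, y = make_windows(sig, seizure_onsets=[4 * 2400])
print(X.shape, int(y.sum()))  # → (60, 4, 240) 30
```

A sequence model such as an LSTM would then be trained on these windows; the labeling step above is the part that encodes "forecasting" as a supervised problem.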

We found not only that this is feasible, but that performance peaked when all of the sensor data was used. We could also confirm, by using time-matched seizure surrogate data, that forecasting was not driven by time of day or by vigilance state. We observed similar results for focal and generalized onset seizures. The most interesting finding was that the algorithm tended to become better the more patients we trained on. By including more data, the algorithm became better. That is an encouraging finding: it suggests extending this approach, potentially collecting more data and collaborating with other groups, could improve this algorithm further.

What does the treatment landscape for devices that predict seizures look like? Is this something that is under observed?

There are several efforts in the field. Usually we have to distinguish between seizure detection and seizure forecasting. Both are an active field of research. With regard to seizure forecasting, there was 1 particularly influential study several years ago with a device that used implanted electrodes to directly record brain activity (intracranial EEG). That study showed that you can actually forecast seizures really well. However, it is not currently available as a treatment or as a monitoring option. There are several efforts, some funded, some sponsored by the Epilepsy Foundation, to try to solve this question exactly. What devices do we need? What would perform best? What are the algorithms that give you the best benefit and outcomes? To my knowledge, there is no real device that would offer all of these things yet, but hopefully, sometime soon in the future.

What are some of the challenges in creating and utilizing these devices, especially in terms of sensitivity?

That was historically something that the field needed to learn, and in the early 2000s, there was a large effort which led to a consensus on how to evaluate and assess the performance of such an algorithm. This is important because you have to carefully control against random prediction and have various controls in order to truly determine whether the forecasting is statistically significant. The clinical utility of such a thing is important too. Even if you have something that statistically may predict a seizure, it’s not clear that this will be clinically useful. If there are a lot of false alarms, people may not take the device seriously anymore. In fact, it may be harmful. The starkest example is when a false alarm prompts someone to call 9-1-1.

We’re definitely not there yet. It is important to evaluate the level of accuracy, which includes sensitivity, but also specificity and false alarm rate. When we reach that point, then it will be clinically useful. This is also something that we’ll have to keep in mind as we work on these things.
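The controls Meisel describes can be illustrated with a minimal evaluation sketch (again, not the study's actual methodology). One common baseline in the seizure forecasting literature is a random predictor that spends the same fraction of time in warning; the improvement over chance (IoC) is then the sensitivity minus that chance level. All variable names and the toy data below are hypothetical.

```python
import numpy as np

def forecast_stats(y_true, y_pred):
    """Minimal evaluation sketch for windowed seizure forecasts.

    y_true: 1 where a window actually precedes a seizure, else 0.
    y_pred: 1 where the algorithm raised a warning, else 0.
    Sensitivity is the fraction of pre-seizure windows caught.
    A random predictor warning the same fraction of the time would,
    on average, achieve a sensitivity equal to the time-in-warning,
    so improvement over chance (IoC) = sensitivity - time-in-warning.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    sensitivity = (y_pred[y_true == 1] == 1).mean()
    false_alarm_rate = (y_pred[y_true == 0] == 1).mean()
    time_in_warning = y_pred.mean()
    ioc = sensitivity - time_in_warning
    return sensitivity, false_alarm_rate, ioc

# toy example: 10 windows, 4 pre-seizure; warnings catch 3 of the 4
y_true = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0, 0, 0]
sens, far, ioc = forecast_stats(y_true, y_pred)
print(round(sens, 2), round(far, 2), round(ioc, 2))  # → 0.75 0.17 0.35
```

The false alarm rate is the quantity Meisel flags as decisive for real-world acceptance: a forecaster can be statistically significant yet still warn so often that patients stop trusting it.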

You mention in the conclusions that these are initial results and a step toward future evaluation. How might your data influence future studies?

First off, our study is a feasibility study, which shows that it is statistically possible. I don’t think we’re at the point to claim that this is clinically useful. To move toward clinical applicability, we have to improve this further. We have to validate this on independent data. The validation aspect is an important point to prove. Secondly, there are various ways in which we can move forward. One of the results I previously mentioned is the dependence on data size. If we combine data and increase the data sets, potentially from other labs, and continue to build longer recordings that cover longer periods of time, then we can improve this algorithm further. At least, that’s what our data suggest. I also think this can benefit from being more personalized. Our goal was to think of an out-of-the-box algorithm that you could install on such a watch and then run right away. We know from a lot of other studies that these algorithms perform best if they are trained on the individual patient.

One way to picture this is starting out with the algorithm, wearing the watch, and over time it learns your patient-specific patterns and thus improves significantly. For this, we need longer data. That is another way of moving forward. Lastly, it could be beneficial if we target these methods to more homogeneous patient groups that have similar seizure types, reflected in signals from the autonomic nervous system and in actigraphy. I think there’s also room to improve the models further. We explored a particular deep learning model called LSTM, but there are other ways in which we can improve this further and benchmark more elaborate, better-performing models.

Is Empatica the most widely used device in this landscape? Would you say there’s a standard device that is most commonly used?

There are different devices. I don’t want to advocate for 1 or the other, but in this case, we worked with this 1 because it’s a research device that allows us access to the data. There are other watches and wearables that may allow you to do that as well. I know the same company has a device for seizure detection, in particular generalized tonic-clonic seizures. There are other smartwatches and wearable devices, such as EMG trackers, seizure diaries, and wearable EEG systems, that could also allow you to do something like this. These are all worthwhile to explore, and there are efforts in this direction, so hopefully soon we will know which of them perform best. Potentially, the best performance may come from a device that includes different modalities of data, such as seizure diaries, medication effects, triggers, and the physiology data that was used in our study. That combined approach may be the most promising.

Transcript edited for clarity.

