Chris Kalafatis, MD, MRCPsych, discussed a new tool, the Integrated Cognitive Assessment, developed by Cognetivity, and its potential for patients with dementias and multiple sclerosis.
Cognetivity Neurosciences recently announced that its Integrated Cognitive Assessment (CognICA) has met the requirements of regulation 21 CFR 882.1470 (Class II exempt medical device) following review by the FDA. Designed for studies of dementias, the assessment can be performed remotely on an iPad, takes only 5 minutes to complete, and can now be marketed as a medical device for commercial distribution in the US.
Further, the CognICA is powered by artificial intelligence (AI), extending what assessments can capture compared with standard-of-care pen-and-paper tests. The assessment is unique in that patients taking it are meant to feel as if they are playing a game: they are asked to determine whether each picture contains an animal or not.
We sat down with Chris Kalafatis, MD, MRCPsych, chief medical officer, Cognetivity; and consultant in Old Age Psychiatry, South London & Maudsley NHS Foundation Trust and Affiliate of King’s College London, to discuss the format of the assessment, as well as how it can be implemented in the clinical setting. Kalafatis highlighted the importance of the test, which has a high sensitivity to early-stage cognitive impairment, while simultaneously avoiding cultural and educational bias.
Chris Kalafatis, MD, MRCPsych: This is a 5-minute, computer-based, self-administered cognitive test that we have validated across more than 10 National Health Service (NHS) trusts in the UK. One hundred natural images—50 containing animals and 50 that do not—are presented sequentially and rapidly. The subject is then simply asked to indicate whether they see an animal in the picture by tapping right or left on an iPad.
The reason we chose animal images is the brain’s innately strong response to animal stimuli, so pure evolutionary principles are at play here. Herein also lies part of the novelty of the assessment: we use image statistics to characterize each image individually. Images vary in difficulty, and this categorization is accounted for and is key in our analysis. All images are grayscale because color can easily give away whether there is an animal in the picture. The data science component is important in how we generate results. We employ an AI model that improves the classification of cognitive impairment and provides a probability of cognitive impairment.
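The pipeline described here (per-image responses, image-level difficulty derived from image statistics, and a model that outputs a probability of impairment) can be illustrated with a minimal sketch. This is not Cognetivity's actual model; the features, field names, logistic link, and coefficients below are hypothetical assumptions for illustration only.

```python
import math

def impairment_probability(responses, weights=(-2.0, 3.5, 1.0), bias=-0.5):
    """Combine per-image results into a probability of cognitive impairment
    using a simple logistic model (illustrative only, not CognICA's method).

    `responses` is a list of (correct, difficulty, reaction_time) tuples:
    correct is 1/0, difficulty is in [0, 1] (hypothetically derived from
    image statistics), reaction_time is in seconds. The weights and bias
    are hand-set placeholders, not fitted parameters.
    """
    n = len(responses)
    # Difficulty-weighted accuracy: correct answers on easy images count more,
    # so errors on easy images pull this score down sharply.
    weighted_acc = sum(c * (1.0 - d) for c, d, _ in responses) / n
    mean_rt = sum(rt for _, _, rt in responses) / n
    mean_difficulty = sum(d for _, d, _ in responses) / n
    w_acc, w_rt, w_diff = weights
    # Higher accuracy lowers the score; slower responses raise it.
    z = bias + w_acc * weighted_acc + w_rt * mean_rt + w_diff * mean_difficulty
    return 1.0 / (1.0 + math.exp(-z))  # logistic link -> probability in (0, 1)
```

In a fitted model, the coefficients would be learned from labeled data rather than hand-set as here; the point of the sketch is only that per-image difficulty can be folded into a single calibrated probability.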
The standard pen-and-paper tests that we've been using for close to half a century bring an evidence base that is old, well understood, and effective. But essentially, they are a little outdated; these cognitive tests have widely known limitations, and that is exactly why we developed the [CognICA], in order to tackle them.
Conventional pen-and-paper tests have a ceiling effect, meaning patients who have good premorbid cognitive reserve or are well educated will score almost perfectly; that's not very good for the assessment of milder, subclinical problems. Typically, those tests, as I alluded to, are bound by language and educational bias, so they don't translate very well across different demographics. They also carry significant cultural considerations, so they don't translate very well across different cultures. And typically, they can be learned; this is really important, because it limits our capacity to use them frequently to monitor whether patients are progressing. They also require face-to-face administration by trained specialists.
The ICA does address these limitations, and these are hypotheses that we have proven and published in peer-reviewed journals. The ICA doesn't have a ceiling effect; it can measure and quantify early, mild cognitive decline. It's not affected by language or education, so it can be used in large population screening, and for exactly the same reason, the AI model that we use must be able to be applied across cultural boundaries; it must generalize. This is really, really important. We found that the test, which we validated in a multicultural population in the UK, can be taken and applied to a completely different population, in this case, an Asian population with very different demographics and cultural attributes. [The test] can work as well as it did in the previous population, and there’s no need to re-norm the test with new normative data or revalidate it in a different population; [we] just change the language.
Also, the practice effect of the test is negligible, and it’s highly reliable. This is really important if one wants to monitor cognitive trajectories in high resolution. The test results are automated, so we avoid interpretation biases and transcription errors. The test is also developed to be easily interoperable with electronic notes and online platforms. Last, but not least, this is a quick test, 5 minutes, but that is not the be-all and end-all; it must also be easy to use, in order to improve and maintain patient engagement. The ICA is intrinsically gamified. Users see it as a game, which is really positive, and this is reflected in the feedback that we have received so far.
It is approved for face-to-face administration and is an excellent test that can be used as an end-to-end assessment across the memory pathway, and this is important. It should start from primary care. Five or 6 years ago, discussing risk-based population screening would have been sort of a “no-no,” absolutely forbidden. Now we are in the realm of risk-based population screening: we have new disease-modifying treatments that are approved, of which at least 1 is approved for prodromal dementia, so early dementia. We need to identify those people, and we need to do this early and in a standardized way.
The test can be used as an excellent screening tool in primary care, and because of its interoperability, it can help primary care physicians talk to the specialists they would normally refer [patients to], and the specialist can actually use the test to help with the diagnosis. Equally important is the monitoring of patients, something most health systems are lacking at the moment; it can be done remotely, and we can help monitor patients more regularly than we do now. This is really important if one wants to understand whether treatments are working, so response to treatment, but also whether patients are progressing and how rapidly.
My message to patients would be that the test can be used in at-risk groups, just as one would take blood pressure on a yearly basis to understand the risk of cardiovascular disease, and, if needed, monitor it in an efficient and objective way. Of course, for individuals with a known diagnosis, I think it can help with understanding mild cognitive impairment very specifically and catching dementia early, when it really matters.
We have also looked at individuals with multiple sclerosis. Multiple sclerosis is known to exacerbate cognitive problems, mainly [affecting] attention, and we have found that the test highlights cognitive impairment in patients with multiple sclerosis very accurately, and more efficiently than standard-of-care pen-and-paper tests, as well as monitoring how one's cognition improves after different interventions. In this case, we found that the test can map improvements from both physical rehabilitation and neurorehabilitation, and it also correlates extremely well with levels of fluid biomarkers of the disease. As fluid biomarkers improve, test scores improve at the same time, which shows that the test can monitor, again, objectively.
Primary care clinicians need to know that it will really standardize their practice. Typically, our practice in primary care is ad hoc and quite variable. Quite often, we only take a little bit of medical history from the patient and/or family, and the tests that we use are quite crude. The test will give them, as I mentioned, a good objective assessment of one's cognition and help them refer more accurately. The interoperability helps with communicating with other clinicians, and it gives them back clinical time because the test doesn't need to be done by a doctor; it can simply be done by a trained healthcare professional. For the specialists, I think it's important to mention that it's an accurate, reliable test of global cognition, and it can help monitor patients much more regularly than existing cognitive tests.
Transcript edited for clarity.