In an assessment including 23 patients, the novel and standardized quick neurologic examination showed excellent interrater reliability and agreement, suggesting good validity.
An assessment of a novel and standardized quick neurologic examination (QNE) tool has demonstrated its validity for the neurologic evaluation of patients in the pediatric intensive care unit (PICU).1
Presented at the 2022 American Academy of Neurology (AAN) Annual Meeting, April 2-7, in Seattle, Washington, the tool showed excellent interrater reliability, agreement, and validity in an assessment that included 23 participants evaluated by varying combinations of raters. The work was presented by Michael Miksa, MD, PhD, attending pediatric intensivist, The Children's Hospital at Montefiore, and assistant professor of pediatrics, Albert Einstein College of Medicine, and colleagues.
Ultimately, between nurse and physician raters, the agreement, assessed by tolerating a total score difference of 1 or 2 points, was 68% (tolerance, 1) and 86% (tolerance, 2; n = 22), with strong interrater reliability (weighted κ = 0.92; κ = 0.24 [95% CI, 0.08-0.40]; P = .004). Additionally, when evaluating these scores in stratified groups (lower, 0-5; middle, 6-11; and upper, 12-16), the agreement between raters was 83% (tolerance, 0).
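The tolerance-based agreement metric described above is simple to state precisely: two raters "agree" on a patient when their total scores differ by no more than the tolerance. A minimal sketch (not the study's code; the paired scores below are invented for illustration):

```python
# Illustrative sketch of percent agreement within a tolerance, as described
# in the article. A pair of ratings agrees when the absolute difference
# between the two raters' QNE totals is <= the tolerance. Scores are made up.

def percent_agreement(rater_a, rater_b, tolerance):
    """Share (as a percentage) of paired scores within `tolerance` of each other."""
    pairs = list(zip(rater_a, rater_b))
    agreeing = sum(1 for a, b in pairs if abs(a - b) <= tolerance)
    return 100 * agreeing / len(pairs)

# Hypothetical QNE totals (0-16) for the same 8 patients from two raters
nurse     = [14, 9, 16, 5, 12, 7, 10, 15]
physician = [13, 9, 14, 6, 12, 9, 10, 16]

print(percent_agreement(nurse, physician, tolerance=1))  # 75.0
print(percent_agreement(nurse, physician, tolerance=2))  # 100.0
```

Note how widening the tolerance can only raise the agreement figure, which is why the article reports 68% at a tolerance of 1 but 86% at a tolerance of 2.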
“Common practice is to perform serial neurologic exams in the PICUs, [which] include pupillary response and the Glasgow Coma Scale (GCS) or Full Outline of Un-Responsiveness (FOUR) score, with shortcomings in assessing more subtle cerebral dysfunction,” Miksa et al wrote. “The QNE tool has excellent reliability and good validity. It will likely improve appropriate neurologic assessment and communication between different providers in the PICU setting.”
The investigators, who also included Apirada Thongsing, MD, found 70% agreement and good correlation (weighted κ = 0.4; κ = 0.27 [95% CI, -0.28 to 0.82]; P = .236) in the 10 individuals who were evaluated by a pediatric neurologist and a resident. The FOUR and GCS scores collected at the same time by the raters also showed good correlation with the QNE scores (FOUR: r² = 0.744; GCS: r² = 0.877).
Thongsing explained to NeurologyLive® that the QNE's chief advantage is speed. The standard neurological exam, she said, can take between 5 and 10 minutes, but the QNE can be done in less than 1 minute. "If we can prove that this is a good tool that is reliable and valid, I think any institution can use it—and not just in the PICU, but also in other divisions, including the emergency room, or even pediatric hospitals. And even though we say this is for pediatrics, I also think this applies to adults as well. So it has a lot of application."
The QNE tool evaluates 4 neurologic domains—level of consciousness, communication, motor function, and cranial nerves—with each domain scored from 0 to 4 (worst to best). As such, total scores range from 0 to 16. The included PICU patients were aged 2 to 21 years, had a neurologic diagnosis, and were evaluated by different raters, including registered nurses, residents, and attending physicians. Validity was determined by comparing categories between neurologists and nonspecialists, with the comparison between QNE and FOUR and GCS scores conducted via simple linear regression.
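The scoring scheme described above can be sketched in a few lines. This is a minimal illustration assuming only what the article states (4 domains scored 0-4, totals 0-16, stratified into lower 0-5, middle 6-11, and upper 12-16 bands); the function and key names are hypothetical, not from the study:

```python
# Minimal sketch of the QNE scoring arithmetic described in the article.
# Domain names and function names are illustrative assumptions.

DOMAINS = ("consciousness", "communication", "motor", "cranial_nerves")

def qne_total(scores):
    """Sum the 4 domain subscores; each subscore runs 0 (worst) to 4 (best)."""
    assert set(scores) == set(DOMAINS), "all 4 domains are required"
    assert all(0 <= scores[d] <= 4 for d in DOMAINS), "subscores are 0-4"
    return sum(scores[d] for d in DOMAINS)

def qne_band(total):
    """Map a 0-16 total to the stratified category used for rater agreement."""
    if total <= 5:
        return "lower"
    if total <= 11:
        return "middle"
    return "upper"

exam = {"consciousness": 4, "communication": 3, "motor": 4, "cranial_nerves": 3}
total = qne_total(exam)
print(total, qne_band(total))  # 14 upper
```

The band boundaries matter for the 83% stratified agreement reported earlier: two raters land in the same category whenever their totals fall in the same band, even if the raw totals differ.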
"The future goal is obviously that we would like to have more data on more patients and we would like to assess it in a different department," Thongsing told NeurologyLive®. "Another thing is that we would like to be able to correlate between the score that we have and the outcome after the patient is discharged. So we're working on that in the long term."
Previously published findings in Critical Care Medicine showed similar results for the QNE tool. In that prior validation effort, a pediatric neurologist described exam findings in free text for 6 patients, whose presentations ranged from normal neurological exams to severe impairment.2 Miksa and colleagues noted that the narrative was then translated into QNE subscores by consensus of 2 attending physicians. “Based on wording used by the pediatric neurologist for the individual levels of subscores, the domain descriptions were amended. In a subsequent study, using only this revised scoring tool, the inter-rater reliability was assessed in 18 subjects with 78% agreement between 2 separate physician raters,” they wrote.
In that 18-patient assessment, reliability was also deemed very good (weighted κ = 0.88; κ = 0.17 [95% CI, -0.04 to 0.38]; P = .0468). When comparing scores determined by neurologists with those of nonspecialists in 6 patients who had been assessed by a pediatric neurologist and a resident, the investigators reported 67% agreement and very good correlation (weighted κ = 0.86; κ = 0.42 [95% CI, 0.04-0.8]; P = .004). Miksa et al noted in that assessment that the lower correlation coefficients were attributable to the small number of patients and inexperience with the scoring.2