
Michigan Publishing

Test-Retest Reliability and Interpretation of Common Concussion Assessment Tools: Findings from the NCAA-DoD CARE Consortium

Overview of attention for article published in Sports Medicine, November 2017

About this Attention Score

  • In the top 5% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (94th percentile)
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

  • 71 X users
  • 2 Facebook pages

Citations

  • 138 citations (Dimensions)

Readers on

  • 185 readers (Mendeley)
Title: Test-Retest Reliability and Interpretation of Common Concussion Assessment Tools: Findings from the NCAA-DoD CARE Consortium
Published in: Sports Medicine, November 2017
DOI: 10.1007/s40279-017-0813-0
Pubmed ID:
Authors: Steven P. Broglio, Barry P. Katz, Shi Zhao, Michael McCrea, Thomas McAllister, CARE Consortium Investigators

Abstract

Concussion diagnosis is typically made through clinical examination and supported by performance on clinical assessment tools. Performance on commonly implemented and emerging assessment tools is known to vary between administrations even in the absence of concussion. The objective of this study was to evaluate the test-retest reliability of commonly implemented and emerging concussion assessment tools across a large, nationally representative sample of student-athletes.

Participants (n = 4874) from the Concussion Assessment, Research, and Education (CARE) Consortium completed annual baseline assessments on two or three occasions. Each assessment included measures of self-reported concussion symptoms, motor control, brief and extended neurocognitive function, reaction time, oculomotor/oculovestibular function, and quality of life. Consistency between years 1 and 2 and between years 1 and 3 was estimated using intraclass correlation coefficients or Kappa, together with effect sizes (Cohen's d). Clinical interpretation guidelines were also generated using confidence intervals to account for non-normally distributed data.

Reliability for the self-reported concussion symptoms, motor control, and brief and extended neurocognitive assessments from year 1 to year 2 ranged from 0.30 to 0.72, while effect sizes ranged from 0.01 to 0.28 (i.e., small). Reliability for the same measures over the year 1-3 interval ranged from 0.34 to 0.66, with effect sizes ranging from 0.05 to 0.42 (i.e., small to less than medium). The year 1-2 reliability for the reaction time, oculomotor/oculovestibular function, and quality-of-life measures ranged from 0.28 to 0.74, with effect sizes from 0.01 to 0.38 (i.e., small to less than medium).

This investigation noted less than optimal reliability for most common and emerging concussion assessment tools. Despite this finding, their use is still necessitated by the absence of a gold-standard diagnostic measure, with the ultimate goal of developing more refined and sound tools for clinical use. Clinical interpretation guidelines are provided so that clinicians can apply these measures with a known degree of certainty.
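To make the reliability statistics in the abstract concrete, the sketch below shows how a two-session intraclass correlation coefficient (ICC(2,1), two-way random effects, absolute agreement) and a Cohen's d for year-to-year baseline change might be computed. This is an illustrative sketch only, not the CARE Consortium's analysis code: the data are simulated, and the specific ICC model and pooled-SD effect-size formula are assumptions.

```python
# Minimal sketch (assumptions noted above): test-retest ICC(2,1) and Cohen's d
# for two annual baseline administrations of a hypothetical symptom score.
import numpy as np

def icc_2_1(y1, y2):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement."""
    x = np.column_stack([y1, y2]).astype(float)
    n, k = x.shape                                         # subjects, sessions
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)    # between-subject SS
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)    # between-session SS
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols  # residual SS
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

def cohens_d(y1, y2):
    """Effect size for year-to-year change, pooled-SD denominator (assumed formula)."""
    pooled_sd = np.sqrt((np.var(y1, ddof=1) + np.var(y2, ddof=1)) / 2)
    return (np.mean(y2) - np.mean(y1)) / pooled_sd

# Simulated data: total symptom scores at two annual baselines (hypothetical).
rng = np.random.default_rng(0)
year1 = rng.poisson(5, size=200).astype(float)
year2 = 0.6 * year1 + rng.poisson(2, size=200)   # imperfect year-to-year consistency
print(f"ICC(2,1) = {icc_2_1(year1, year2):.2f}, Cohen's d = {cohens_d(year1, year2):.2f}")
```

Values in roughly the 0.3-0.7 range from such a calculation would correspond to the "less than optimal" reliability the authors describe.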

X Demographics

The data shown below were collected from the profiles of the 71 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 185 Mendeley readers of this research output.

Geographical breakdown

Country     Count   As %
Unknown     185     100%

Demographic breakdown

Readers by professional status     Count   As %
Student > Ph. D. Student           25      14%
Student > Master                   20      11%
Student > Bachelor                 18      10%
Student > Doctoral Student         17      9%
Other                              15      8%
Other                              36      19%
Unknown                            54      29%

Readers by discipline              Count   As %
Medicine and Dentistry             29      16%
Sports and Recreations             21      11%
Neuroscience                       18      10%
Psychology                         14      8%
Nursing and Health Professions     13      7%
Other                              21      11%
Unknown                            69      37%
Attention Score in Context

This research output has an Altmetric Attention Score of 49. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 29 May 2019.
All research outputs:                          #831,995 of 24,920,664 outputs
Outputs from Sports Medicine:                  #763 of 2,873 outputs
Outputs of similar age:                        #17,093 of 331,710 outputs
Outputs of similar age from Sports Medicine:   #25 of 46 outputs
Altmetric has tracked 24,920,664 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 96th percentile: it's in the top 5% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 2,873 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 55.1. This one has received more attention than average, scoring higher than 73% of its peers.
Older research outputs will score higher simply because they have had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 331,710 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 94% of its contemporaries.
We're also able to compare this research output to the 46 others from the same source that were published within six weeks on either side of this one. This one is in the 47th percentile: 47% of its contemporaries scored the same as or lower than it.
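
For readers curious how an age-matched figure such as "higher than 94% of its contemporaries" can be derived, the sketch below ranks one output's score against outputs published within six weeks on either side of it. This is a simplified illustration with synthetic scores; it does not reproduce Altmetric's actual scoring or tie-handling.

```python
# Minimal sketch (synthetic data): percentile rank of a score within an
# age-matched window of contemporary outputs.
import numpy as np

def percentile_rank(score, contemporary_scores):
    """Share of contemporaries scoring the same as or lower than `score`."""
    contemporaries = np.asarray(contemporary_scores, dtype=float)
    return 100.0 * np.mean(contemporaries <= score)

rng = np.random.default_rng(1)
# Hypothetical window of 331,710 contemporaries; the lognormal shape is assumed.
window_scores = rng.lognormal(mean=0.5, sigma=1.5, size=331_710)
print(f"Score 49 ties or beats {percentile_rank(49, window_scores):.0f}% of this window")
```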