INVESTIGATING IMPACT EXPOSURE AND FUNCTIONAL NEUROLOGICAL STATUS IN COLLEGIATE FOOTBALL PLAYERS

by

Alexander David Rebchuk

B.Kin., The University of British Columbia, 2013

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Kinesiology)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

June 2016

© Alexander David Rebchuk, 2016

Abstract

A single head impact in sport can cause an acute concussion, whereas repetitive head impacts are suspected to cause chronic neurological impairment. However, the diagnostic accuracy of concussion assessment tools is not well understood and sparse research evidence exists regarding the neurological implications of repetitive head impacts. The objective of this thesis was to investigate repetitive head impacts, including impact detection technology and neurocognitive function, over the duration of a collegiate football season. Thirty-five healthy participants were recruited from a collegiate football program for a three-part study. Participants adhered an impact detection sensor (xPatch, X2 Biosystems) to their right mastoid process prior to each game and practice. As well, they completed a weekly battery of neurological testing that included the graded symptom checklist, standardized assessment of concussion, balance error scoring system and King-Devick test. In experiment 1, we investigated the accuracy of the xPatch to classify each detected event as an impact or non-impact. We matched each event to game video and assigned a true positive, false positive, true negative or false negative classification. The sensitivity of the sensor was 77.6%, specificity was 70.4% and overall accuracy was 75.1%. Additionally, we determined that impact count is strongly correlated with cumulative head kinematic load, i.e. cumulative linear acceleration (r2=0.98), cumulative rotational acceleration (r2=0.98) and cumulative rotational velocity (r2=0.99). In experiment 2, we explored the relationship between alterations in neurological status and repetitive head impact exposure using linear mixed models. The number of head impacts sustained was significantly related to the number and severity of symptoms in participants, but not to any other indicator of neurological status. In experiment 3, we investigated the diagnostic accuracy of each neurological test using receiver operating characteristic curves and corresponding area under the curve values. The diagnostic accuracy for the graded symptom checklist was high (0.76-0.93), the King-Devick Test was moderate (0.64-0.80), and the standardized assessment of concussion and balance error scoring system were poor (0.47-0.71). In summary, this thesis identified limitations in current impact detection technology, provided evidence of a link between repetitive head impacts and symptomatology, and determined that the graded symptom checklist can accurately diagnose concussion.

Preface

The research presented in chapter 4 was conducted at the athletic facilities of the University of British Columbia (UBC), University of Calgary, University of Alberta, University of Saskatchewan, University of Regina, University of Manitoba, Université Laval, St. Francis Xavier University, and Saint Mary’s University, using methods approved by UBC’s Clinical Research Ethics Board (H11-02306). A manuscript detailing this work has yet to be prepared. This work was conducted in collaboration with Dr. Jean-Sébastien Blouin, Dr. Gunter Siegmund and Harrison Brown. Dr. Blouin was the senior investigator. I was responsible for all major areas of study design, concept formation, data collection and data analysis. Mr. Brown was involved with study design, concept formation, and data collection. Dr. Blouin was involved with study design, concept formation and analysis advisory. Dr. Siegmund was involved with concept formation and analysis advisory.

The research presented in chapter 5 was conducted at the Sensorimotor Physiology Lab at UBC, and the athletic facilities of UBC and Université Laval, using methods approved by UBC’s Clinical Research Ethics Board (H11-02306). A manuscript detailing this work has yet to be prepared. This work was conducted in collaboration with Dr. Jean-Sébastien Blouin, Dr. Gunter Siegmund, Harrison Brown and Dr. Michael Koehle. Dr. Blouin was the senior investigator. I was responsible for all major areas of study design, concept formation, data collection and data analysis. Mr. Brown was involved with concept formation and data collection. Dr. Blouin was involved with study design, concept formation, and analysis advisory. Dr. Siegmund was involved with analysis advisory. Dr. Koehle was involved with analysis advisory.

The research presented in chapter 6 was conducted at the Sensorimotor Physiology Lab at UBC, and the athletic facilities of UBC and Université Laval, using methods approved by UBC’s Clinical Research Ethics Board (H11-02306). A manuscript detailing this work has yet to be prepared. This work was conducted in collaboration with Dr. Jean-Sébastien Blouin, Dr. Gunter Siegmund, Harrison Brown and Dr. Michael Koehle. Dr. Blouin was the senior investigator. I was responsible for all major areas of study design, concept formation, data collection and data analysis. Mr. Brown was involved with concept formation, analysis advisory, and data collection. Dr. Blouin was involved with study design, concept formation, and analysis advisory. Dr. Siegmund was involved with concept formation and analysis advisory. Dr. Koehle was involved with analysis advisory.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgements
Dedication
Chapter 1: Introduction
1.1 Current Definition of Concussion
1.2 Pathophysiology of Concussion
1.3 Nomenclature: mTBI or Concussion?
Chapter 2: Review of Literature
2.1 Head Impact Monitoring Technologies in Football
2.1.1 Typical Impact Kinematics in Collegiate Football
2.1.2 Head Impact Telemetry System
2.1.3 xPatch Sensor
2.2 Subconcussive Impacts in Sport
2.2.1 Single Season Neurological Alterations and Head Impact Exposure
2.2.1.1 Animal Studies
2.2.1.2 Human Studies
2.3 Concussion Assessment Tools
2.3.1 Sport Concussion Assessment Tool – 3rd Edition (SCAT3)
2.3.1.1 Graded Symptom Checklist
2.3.1.2 Standardized Assessment of Concussion
2.3.1.3 Balance Error Scoring System
2.3.1.4 King-Devick Test
Chapter 3: Objectives and Hypotheses
3.1 Aims
3.2 Hypotheses
3.2.1 Hypothesis for Aim #1
3.2.2 Hypothesis for Aim #2
3.2.3 Hypothesis for Aim #3
Chapter 4: Impact Exposure in Collegiate Football
4.1 Validation of a Wearable Impact Monitoring Technology
4.1.1 Introduction
4.1.2 Methods
4.1.2.1 Study Participants
4.1.2.2 Biomechanical Measurements
4.1.2.3 Statistical Analysis
4.1.3 Results
4.1.3.1 Sensitivity, Specificity, Positive and Negative Predictive Values
4.1.3.2 Correlation of Cumulative Head Impact Kinematics
4.1.3.3 Distribution of Impact Confirmation Types
4.1.4 Discussion
4.1.5 Conclusion
4.2 Head Impact Exposure in Canadian Collegiate Football
4.2.1 Introduction
4.2.2 Methods
4.2.2.1 Biomechanical Measurements
4.2.2.2 Statistical Analysis
4.2.2.3 Missing Data Extrapolation
4.2.3 Results
4.2.3.1 Differences in Hour Impact Exposure by Session Type
4.2.4 Discussion
4.2.5 Conclusion
Chapter 5: Exploratory Models of Neurological Function and Repetitive Head Impacts
5.1 Introduction
5.2 Methods
5.2.1 Biomechanical Measurements
5.2.2 Neurological Status Measurements
5.2.3 Statistical Analysis
5.2.3.1 Differences in Neurological Function in Exposure Groups
5.2.3.2 RHI Effect on Neurological Status Models
5.3 Results
5.3.1 Exposure Groups Differences In Neurological Status
5.3.2 RHI Effect on Neurological Status
5.4 Discussion
5.5 Conclusion
Chapter 6: Diagnostic Accuracy of Concussion Assessment Tools in a Collegiate Population
6.1 Introduction
6.2 Methods
6.2.1 Concussion Diagnostic Tools
6.2.2 Statistical Analysis
6.3 Results
6.3.1 Area Under the Curve Values
6.4 Discussion
6.5 Conclusion
Chapter 7: Conclusion
Bibliography
Appendices
Appendix A - SCAT3
Appendix B - King-Devick Test
Appendix C - Alternative prompts used in the SAC
Appendix D - Instructions to Setup Linear & Hierarchical Mixed Models
Appendix E - Instructions to Setup Repeated Measures Receiver Operating Characteristic Curves
Appendix F - Pairwise p-Values for Comparison Between Diagnostic Test for All Cases
Appendix G - Pairwise p-Values for Comparison Between Diagnostic Test for Select Cases
Appendix H - Collinearity (r) Values Between Diagnostic Test for Select Cases
Appendix I - Collinearity (r) Values Between Diagnostic Test for Select Cases

List of Tables

Table 4.1. Event type classification, using xPatch detection algorithm and visual confirmation of event types.
Table 4.2. Classification of game events that were visually reviewed by investigators.
Table 4.3. Accuracy of the xPatch detection algorithm for different ranges of impact PLA values.
Table 4.4. r2 values for cumulative impact kinematic variables of visually confirmed impacts.
Table 4.5. r2 values for cumulative impact kinematic variables for all impacts.
Table 4.6. Time spent in each team session type in each year of the study.
Table 4.7. Average cumulative head kinematics by team session type for a collegiate football team (Canadian football).
Table 4.8. Impact exposure per hour for different team session types.
Table 5.1. Reported absolute scores for the neurocognitive battery.
Table 6.1. Diagnostic tests of concussion used in the neurocognitive battery and the potential ranges of their absolute scores.
Table 6.2. Summary of concussion diagnostic tools.
Table 6.3. AUC values for ROC curves generated from commonly used concussion assessment tools.

List of Figures

Figure 1-1. The time course of ionic and metabolic disturbances following a mTBI event.
Figure 2-1. BESS conditions.
Figure 4-1. Correlation between impact count and cumulative head kinematic variables.
Figure 4-2. Boxplot between events that were determined valid by investigator and events that the sensor’s algorithm classified as valid.
Figure 4-3. Impact count per hour between different team session types.
Figure 5-1. Schematic of the null hypothesis for hierarchical mixed models.
Figure 6-1. A sample ROC curve using change scores from the King-Devick Test (select cases).

List of Abbreviations

mTBI  Mild Traumatic Brain Injury (i.e. concussion)
RHI  Repetitive Head Impacts
CTE  Chronic Traumatic Encephalopathy
HIT  Head Impact Telemetry
PLA  Peak Linear Acceleration
cPLA  Cumulative Peak Linear Acceleration
PRA  Peak Rotational (Angular) Acceleration
cPRA  Cumulative Peak Rotational (Angular) Acceleration
PRV  Peak Rotational (Angular) Velocity
cPRV  Cumulative Peak Rotational (Angular) Velocity
GCS  Glasgow Coma Scale
SCAT3  Sport Concussion Assessment Tool – 3rd Edition
GSC  Graded Symptom Checklist
SAC  Standardized Assessment of Concussion
BESS  Balance Error Scoring System
mBESS  Modified Balance Error Scoring System
fBESS  Balance Error Scoring System – Foam Trials
ROC  Receiver Operating Characteristic
AUC  Area Under the Curve
NCAA  National Collegiate Athletic Association
CIS  Canadian Interuniversity Sport

Acknowledgements

I would like to thank my co-advisors, Dr. Jean-Sébastien Blouin and Dr. Gunter Siegmund, for inspiring and encouraging me to pursue my passions throughout my time at UBC. I am grateful for their encouragement and the autonomy to pursue research I am passionate about. Together they have taught me numerous skills that will be essential to my future growth as a physician and scientist. Their questions and critiques have taught me to think and reflect in depth before forming an opinion. They’ve reminded me that you’ll always be proud of your work if you seek perfection. I will be forever grateful for the time and energy they have put into my development as a young researcher.

I would also like to thank my third committee member, Dr. Michael Koehle, for providing guidance and advice from his unique perspective as a clinician/researcher. His clinical insights have been integral to the development of this thesis. Dr. Koehle’s insights will certainly influence my future work as a clinician.

I would like to thank Mr. Harrison Brown, a friend and colleague. Without Mr. Brown’s guidance in the study design and his tremendous assistance with data collection, the work presented within this thesis would not have been possible.

Most importantly I would like to thank the participants. Without their participation none of this work would have been possible. Participants certainly contribute to the advancement of scientific knowledge as much as any researcher or academic. I am incredibly grateful for their time and willingness to participate in near-daily data collection for three months as part of this thesis.

Special thanks are owed to my fellow graduate students, undergraduate students and post-docs from the Sensorimotor Physiology Lab for their help, advice, encouragement, and in many cases involvement, during the development of this thesis.

Lastly, I would like to thank the Canadian Chiropractic Research Foundation, MEA Forensic Engineers & Scientists, and the Canadian Institutes of Health Research for providing the funding and tools that made this research possible.

Dedication

To my mother, father and sister. This is only possible because of your love and support.
Chapter 1: Introduction

During a typical competition, contact sport athletes are exposed to multiple cranial impacts of varying magnitudes. Collegiate football players sustain approximately 1177±773 impacts per season, with the actual number being dependent upon their status (i.e. starter or bench player) and position played (Gysland et al., 2012; Bailes et al., 2013). The average peak linear acceleration (PLA) of each impact is 28g, but impacts can range in magnitude from 10g to 200g (Reynolds et al., 2015). The increased exposure to head impacts in this population increases their risk of sustaining a neurological injury. Since a single biomechanical force that directly or indirectly transmits forces to the head can result in a transient neurological impairment, it is hardly surprising that 4.4-5.5% of collegiate football players sustain a concussion in any given season (Guskiewicz et al., 2000; McCrory et al., 2013).

The high prevalence of reported concussions in contact sport led to the establishment of datasets outlining the risk of concussion in different sports, including different levels of competition for a given sport. In collegiate football there are 0.52-0.81 concussions for every 1000 athletic exposures (AEs; n.b. one athletic exposure is equivalent to one athlete participating in one game or practice). At the high school level, the risk of concussion is similar, with 0.48-1.03 concussions for every 1000 AEs (Clay et al., 2013). Comprehensive data on professional football are lacking, but a retrospective review found 0.38-0.42 concussions per game in the National Football League, which is an estimated 4.1-4.6 concussions per 1000 AEs (Yengo-Kahn et al., 2016). Similar risks for concussion have been reported in hockey (1.55-21.52 concussions per 1000 AEs), lacrosse (0.28-1.08 concussions per 1000 AEs), soccer (0.13-0.49 concussions per 1000 AEs) and rugby (1.8-7.97 concussions per 1000 hours of participation) (Clay et al., 2013).

In the past, athletes accepted concussions as a normal aspect of contact sport, often referring to them in colloquial terms such as ‘ringers’ or ‘seeing stars’. It was not uncommon for football players to remain in a game even after reporting these symptoms (Fainaru-Wada & Fainaru, 2013). Over the past two decades we have come to understand that concussion is in fact a serious neurological injury, going so far as to refer to it as ‘mild traumatic brain injury (mTBI).’

Currently, the diagnosis of concussion is highly subjective due to the heterogeneous presentations of signs, symptoms and impact kinematics for each concussion. Many neurophysiological and neuropsychological tests have been developed to aid clinicians with diagnosing concussions, yet a gold standard diagnostic technique remains elusive (McCrory et al., 2013). With the advancement of real-time impact monitoring technologies, investigators tried to use kinematic data from a single impact to predict the risk of concussion. However, they were unsuccessful, and impact kinematics from a single impact were deemed irrelevant in terms of clinical outcomes (Guskiewicz & Mihalik, 2011).

Potentially more concerning than concussions is the suspected risk of chronic neurological damage and impairment caused by repetitive head impacts (RHI) in sport. RHI have been previously described in the literature as subconcussive impacts. For nearly a century RHI have been observed to contribute to cognitive and motor deficits (Martland, 1928).
Martland (1928) first described the condition as “punch drunk” due to its prevalence in boxers. In 1937, Millspaugh changed the clinical terminology to dementia pugilistica. It is believed that dementia pugilistica is a form of chronic traumatic encephalopathy (CTE) specific to boxers (Bailes et al., 2013).

CTE is a neurodegenerative disease that presents as pronounced atrophy in the cerebral cortex and limbic system. It leads to significant cognitive and motor impairments, as well as mood and behavioural changes (Baugh et al., 2012). It is suspected that each head impact induces micro-traumas, which eventually accumulate and become permanent neurological damage (Baugh et al., 2012). Interestingly, CTE has been observed in football players with and without a history of clinically diagnosed concussion, suggesting that an asymptomatic individual playing contact sport can still sustain neurological damage. Gavett et al. (2011) conservatively estimated that 3.7% of professional football players would develop CTE as a result of RHI. However, the relationship between RHI and CTE is currently unclear as longitudinal studies are lacking.

Concussion and RHI in sport is an evolving field and there is much more to be learned. Notably, there is a need to develop a gold-standard diagnostic test for concussion, develop tools that accurately measure head impact kinematics, and determine the relationship between RHI and neurological impairment and disease. These investigations are certainly warranted and pertinent in Canada since 244,000 and 1.3 million Canadians play football and hockey, respectively, per annum (Statistics Canada, 2005). This thesis intends to provide information that will help advance the field of concussion and RHI in sport.

1.1 Current Definition of Concussion

Concussion is defined clinically as a transient, self-limiting neural dysfunction resulting from a rapid acceleration-deceleration of the cranium (McCrory et al., 2013). Concussions are short-duration injuries: eighty to ninety percent resolve without therapeutic or pharmaceutical intervention in 7-10 days (McCrory et al., 2013).

Concussions are a complex pathophysiological process of the brain. It is suspected that the metabolic imbalances following a concussion persist longer than clinical neurological deficits (Barkhoudarian et al., 2011; Guskiewicz et al., 2007; Giza & DiFiori, 2011).

1.2 Pathophysiology of Concussion

Neurological damage occurs when forces transmitted to the cellular microstructure of the brain exceed the natural limits of the cytoskeleton (Viano et al., 2005). The initial microtubule damage following an impact event persists for 6-24 hours. However, more concerning are the concurrent biochemical disruptions. These biochemical disruptions occur 25-50ms after an impact and potentially lead to secondary axon damage and subsequent neuron death (Bigler, 2005; Giza & Hovda, 2001). The biochemical disruptions follow a stereotyped metabolic cascade that disrupts homeostasis within the brain. Initially, neurotransmitters are indiscriminately released, followed by ionic imbalances and adenosine triphosphate (ATP) depletion, as depicted in Figure 1-1 (Barkhoudarian et al., 2011; Hovda, 1996). The metabolic cascade concludes with neuron and glial cell death in regions of the brain that absorbed the impact forces (Hovda, 1996).

Figure 1-1. The time course of ionic and metabolic disturbances following a mTBI event. The vertical axis represents deviations from normal (100%). K+ represents potassium; Ca2+ represents calcium; CMRgluc represents cerebral glucose consumption. From © Cantu, RC (2000). Neurologic Athletic Head and Spine Injuries. Philadelphia, PA: WB Saunders Company. Page 80-100. By permission from author.

The microtubule damage resulting from kinematic forces transmitted to the brain triggers a positive feedback loop of potassium ion efflux into the extracellular space that depolarizes the cellular membrane (Barkhoudarian et al., 2011; Giza & Hovda, 2001). Following cellular membrane depolarization, excitatory neurotransmitters (primarily glutamate) are released indiscriminately (Hovda, 1996; Barkhoudarian et al., 2011). Glutamate binds to N-methyl-D-aspartate receptors in neurons, which causes an opening of potassium and calcium ligand-gated ion channels, further depolarizing the cellular membrane (Giza & Hovda, 2001; Hovda, 1996). The rapid influx of calcium ions damages axonal structural integrity and is suspected of triggering neuron death via proteolysis, over-production of free radicals and activation of apoptotic signals (Hovda, 1996; Giza & DiFiori, 2011).

The ATP-dependent sodium-potassium pumps of neurons quickly deplete their ATP reserves while attempting to restore the normal resting ion gradient (Giza & DiFiori, 2011; Barkhoudarian et al., 2011). Within 30 minutes of an impact, this hyper-metabolic state forces damaged neurons to revert to glycolytic ATP-generating mechanisms, ultimately resulting in increased lactate levels and neural acidosis (Giza & DiFiori, 2011; Barkhoudarian et al., 2011; Hovda, 1996). Furthermore, during the hyper-metabolic period cerebral blood flow (CBF) is uncoupled from its auto-regulation with glucose consumption, promoting anaerobic glycolysis. This further worsens the ATP deficits and resulting neural acidosis (Giza & Hovda, 2001). The neural acidosis leads to mitochondrial swelling and cerebral edema, both of which damage axons (Barkhoudarian et al., 2011; Giza & Hovda, 2001; Hovda, 1996).

Six hours following an impact, the hyper-metabolic excitation phase transitions to a metabolic depression phase (Barkhoudarian et al., 2011; Giza & Hovda, 2001). This phase is characterized by glucose depression that can last for up to 2-4 weeks (Barkhoudarian et al., 2011; Giza & DiFiori, 2011). It has been suggested that this phase is responsible for the clinically observed neurological deficits in concussed patients post-injury (Hovda, 1996; Giza & Hovda, 2001).

1.3 Nomenclature: mTBI or Concussion?

Confusion and debate exist regarding the difference between the terms concussion and mild traumatic brain injury (mTBI). The terms are often used synonymously in the literature. There is a slight preference for concussion to be used in a sporting context, and mTBI to be used in an acute-care setting. For the purpose of this thesis the terms will be used interchangeably.

Chapter 2: Review of Literature

2.1 Head Impact Monitoring Technologies in Football

This section of the literature review focuses on the HIT system and xPatch impact monitoring technologies because of their common usage in RHI and concussion research. Whereas other impact detection technologies exist, for example mouth guard sensors, the xPatch and HIT system have been used almost exclusively in research concerning RHI in football (Crisco et al., 2010; Crisco et al., 2011; Crisco et al., 2012; Gysland et al., 2012; McAllister et al., 2012; Reynolds et al., 2015; Beckwith et al., 2012; McCuen et al., 2015).
Impact monitoring technologies were first developed for research in sport during the 1970s. Initial data collection technologies were limited by obtrusive data collection hardware, limited accelerometer technologies and the ability to only store data for a few impacts (Reid et al., 1971; Moon et al., 1971). This research was important for proof of concept; however, the tools were limited in their ability to measure head acceleration in multiple players simultaneously. In 2000, Naunheim et al. embedded triaxial accelerometers within the padded vertex of hockey and football helmets to record peak linear accelerations. This methodology improved upon earlier designs but still required players to wear obtrusive equipment for data storage (Naunheim et al., 2000).

Following these advances, Viano and Pellman’s group published a series of papers from reconstructed National Football League (NFL) impacts (Viano et al., 2005). Impacts were visually recorded from two or more camera angles to establish kinematic information and then reconstructed in a laboratory setting with instrumented Hybrid-III dummies (Denton ATD, Inc., Milan, OH; Viano et al., 2005). This design improved upon previous methods of quantifying impact kinematics, but still contained limitations. It was an indirect measure of head kinematics, was unable to monitor impacts in real time and was unable to quantify a high volume of impacts due to the time-consuming process required to reconstruct each impact.

In 2003, a group at Virginia Polytechnic Institute and State University (Virginia Tech) in conjunction with Simbex (Lebanon, NH) developed the Head Impact Telemetry (HIT) system and began on-field implementation (Duma et al., 2005). Although a significant advancement in sport-related impact research, the HIT system had limitations. The most significant was that the HIT system required participants to wear football helmets, thus restricting its application in other contact sports. Further limiting the HIT system were reported inaccuracies in its measurements of head impact kinematics (Siegmund et al., 2015; Beckwith et al., 2012).

The xPatch sensor (X2 Biosystems, Seattle, WA) was developed in 2010 as an alternative technology to quantify head impact kinematics in sport. It adheres to the head rather than being embedded within helmets, thereby increasing its applicability in RHI and concussion research. An important aspect of these technologies is their ability to use proprietary algorithms to classify each detected event as an impact or non-impact event (e.g. helmet removal).

Although numerous groups have investigated the accuracy of the HIT system and xPatch to measure head kinematics, studies investigating the accuracy of their detection algorithms in a field setting have not been published to our knowledge (Siegmund et al., 2015; Beckwith et al., 2012; Jadischke et al., 2013; McCuen et al., 2015; Siegmund et al., 2015). This raises doubts regarding the accuracy of previously published longitudinal impact exposure values and restricts the clinical utility of these technologies. Impact events may be missed (false negatives) and non-impact events may be mistaken for impact events (false positives). Investigating and, if required, improving the accuracy of these detection algorithms is of paramount importance for researchers and clinicians.
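To make the classification terminology above concrete, the sketch below shows how sensitivity, specificity, predictive values and overall accuracy follow from the four video-confirmation counts (true/false positives and negatives). It is a minimal illustration of the standard definitions, not the analysis code used in this thesis, and the example counts are hypothetical.

```python
def detection_accuracy(tp, fp, tn, fn):
    """Summarize a binary impact-detection confusion matrix.

    tp: events the algorithm kept that video confirmed as impacts
    fp: events the algorithm kept that video showed were not impacts
    tn: events the algorithm rejected that were truly non-impacts
    fn: events the algorithm rejected that were actually impacts
    """
    return {
        "sensitivity": tp / (tp + fn),            # impacts correctly kept
        "specificity": tn / (tn + fp),            # non-impacts correctly rejected
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts, for illustration only (not the thesis data).
print(detection_accuracy(tp=450, fp=80, tn=190, fn=130))
```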
2.1.1 Typical Impact Kinematics in Collegiate Football

Our initial understanding of the biomechanics of football impacts comes from laboratory reconstruction of impacts using Hybrid-III dummies (Viano et al., 2005). Viano et al. used a computer model to estimate the forces transmitted to cranial structures during each impact. A typical impact was found to last 15-20ms and displaced deep brain structures by 4-5mm (Viano et al., 2007; Viano et al., 2005). The ventricles were suspected to partially dampen the resulting torque of the cerebral hemispheres about the brainstem, thereby limiting shearing deformation of neural tissue (Ivarsson et al., 2000).

Field kinematic data from helmet-based accelerometer arrays (i.e. the HIT system) have given insight into the kinematics and frequency of head impacts in the collegiate football population. Data from 184,358 unique head impact events, from 254 participants across 3 different National Collegiate Athletic Association (NCAA) football programs, found that the median head impact was 20.2g PLA and 1197 rad/s² peak rotational acceleration (PRA), whereas the 95th percentile head impact was 49.6g PLA and 3145 rad/s² PRA (Crisco et al., 2012). The median and 95th percentile PLA and PRA significantly differed between player positions (Crisco et al., 2012). A recent study using a sensor (xPatch) adhered to the skin overlying the mastoid process found that the average impact in a collegiate population (n=28; NCAA athletes) was approximately 28g PLA and 5500 rad/s² PRA (Reynolds et al., 2015). In practices when participants did not wear shoulder or thigh pads, the average impact PLA and PRA were significantly lower at 21.7g and 3899 rad/s², respectively (Reynolds et al., 2015). It must be noted that the ability of these systems to accurately measure PLA and PRA has been questioned (Siegmund et al., 2015). These inaccuracies are discussed further in subsection 2.1.2 (HIT system) and subsection 2.1.3 (xPatch).

Head impact kinematic data exist for sports other than football, for example ice hockey (Mihalik et al., 2011) and soccer (McCuen et al., 2015). Data also exist for youth football (Broglio et al., 2011). However, due to high participation rates and ease of access to NCAA football athletes, most research has been conducted within that population. The most recent data indicate that the NCAA has 72,788 football student-athletes per annum and Canadian Interuniversity Sport (CIS) in Canada has 1,581 football student-athletes per annum (CIS-SIC, 2015; Irick, 2015). The population of study for this thesis will be collegiate football players.

2.1.2 Head Impact Telemetry System

The HIT system uses six spring-mounted linear accelerometers (8 bit; 1000 Hz/channel) embedded within an MxEncoder that is fit into a football helmet to record impact kinematics for 40ms (12ms pre-trigger, 28ms post-trigger) whenever a sensor exceeds a 10g PLA threshold (Duma et al., 2005; Siegmund et al., 2015). The HIT system allows for data to be wirelessly transmitted in real time to a sideline computer or stored internally (Duma et al., 2005). Data collected by the HIT system are run through a proprietary algorithm, developed by Crisco et al. (2004), to evaluate PLA and PRA at the estimated head center of gravity, impact location, Gadd Severity Index (GSI) and Head Injury Criterion (HIC).
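The GSI and HIC are named above but not defined in this passage; for reference, their standard definitions (added here for context, not drawn from the thesis) are, with a(t) the resultant linear acceleration in g and time in seconds:

```latex
\mathrm{GSI} = \int_{0}^{T} a(t)^{2.5}\, dt
\qquad
\mathrm{HIC} = \max_{t_1 < t_2} \left\{ (t_2 - t_1) \left[ \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} a(t)\, dt \right]^{2.5} \right\}
```

where the HIC search window (t2 − t1) is conventionally limited to 15 or 36 ms.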
The HIT system improved significantly upon earlier methods of head impact kinematic monitoring, but limitations remained. Laboratory testing found the HIT system to have a relative error of 136% (60% SD) for PLA measurements of impacts to the facemask region, which is concerning as 29.3% of all football impacts occur in this region (Pellman et al., 2003; Siegmund et al., 2015). An independent research group confirmed the lack of reliability in PLA measurements for facemask region impacts, finding an absolute error of >100% for PLA measurements at three different impact velocities to this region (4.9m/s, 7.3m/s and 9.1m/s; Jadischke et al., 2013).

The HIT system’s inability to accurately measure head kinematics is not confined to the facemask region. In a small investigation of 28 impacts to the helmet shell (velocity = 9.3m/s), the absolute PLA error rate exceeded 15% in the majority of impacts (Jadischke et al., 2013). Siegmund et al. (2015) found similar performance when the HIT system was tested across twelve impact locations at five velocities (n=878 impacts). Across all impacts the HIT system had a relative error of 20% (SD: 50%; Siegmund et al., 2015). The performance of the HIT system varied with impact location: relative error ranged from -1% (SD: 12%) to 136% (SD: 60%) depending upon impact location (Siegmund et al., 2015). These findings were similar to those of Beckwith et al. (2012), who found that when helmets were impacted (n=54) across four impact sites (facemask, front boss, rear boss, side) at four velocities (4.4m/s, 7.4m/s, 9.3m/s, 11.2m/s), the relative error of PLA measurements from the HIT system ranged from 0.1-38.9%. Furthermore, the mean absolute angular error in detecting the impact site was 42°±33° (range: 7°-111°; Siegmund et al., 2015).

These studies cast doubt on the HIT system’s ability to provide accurate and reliable information from a single head impact. Furthermore, these studies reported instances when the HIT system misclassified impact events as non-impact events, further limiting the usefulness of the HIT system as a clinical tool (Siegmund et al., 2015). These findings urge caution when interpreting RHI data collected by the HIT system.

2.1.3 xPatch Sensor

The xPatch sensor contains a 3-axis linear accelerometer that can detect ±200g per axis at a sampling rate of 1000Hz, and a 3-axis gyroscope that can detect ±2000°/s per axis at a sampling rate of 850Hz. The xPatch measures 31×17×8mm, weighs 9g and is designed to be adhered to the skin over the right mastoid process using an adhesive patch. The xPatch records 100ms of data (10ms pre-impact, 90ms post-impact) for each event it senses over 10g PLA.

An initial validation study found that the xPatch overestimates PLA by 64±41% and overestimates PRA by 370±456% for single events (Siegmund et al., 2015). Furthermore, a small pilot study (n=1) found that the xPatch is displaced 3mm (SD: 0.7mm) relative to a mouth-guard based sensor during controlled head impacts at 9.3±2g PLA (Wu et al., 2016). This is unsurprising as a mouth-guard is attached to the teeth, which are embedded within the mandible, whereas the xPatch is adhered to the skin overlying the mastoid process. These studies suggest that kinematic variables determined by the xPatch for a single head impact event are unreliable, limiting its usage as a clinical impact-monitoring tool. However, the xPatch may be useful as a research tool. When individual impacts detected by the xPatch were pooled into a single dataset, linearity (R² = 0.85) of PLA data was found (Siegmund et al., 2015). This suggests that the xPatch may be a reliable measure of relative PLA (a toy illustration of pooling per-impact peaks into cumulative exposure follows below).
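The sketch below makes that pooling idea concrete: per-impact peaks are aggregated into the cumulative exposure measures used later in this thesis (cPLA, cPRA, cPRV) and related to impact count. It is my illustration under assumed, hypothetical data, not the thesis's analysis code.

```python
import numpy as np

# Hypothetical per-impact records for one athlete-week:
# (peak linear accel [g], peak rotational accel [rad/s^2], peak rotational vel [rad/s])
impacts = np.array([
    [22.0, 1400.0, 9.1],
    [31.5, 2600.0, 14.3],
    [18.2, 1100.0, 7.8],
])

impact_count = len(impacts)
cpla, cpra, cprv = impacts.sum(axis=0)   # cumulative PLA, PRA, PRV for the week

print(impact_count, cpla, cpra, cprv)

# Across many athlete-weeks, the strength of the count-versus-load relationship
# can be summarized with the squared Pearson correlation (r^2).
counts = np.array([12, 35, 8, 54, 21], dtype=float)      # impacts per athlete-week
cpla_w = np.array([310.0, 960.0, 200.0, 1500.0, 580.0])  # cumulative PLA per athlete-week
r = np.corrcoef(counts, cpla_w)[0, 1]
print("r^2 =", r ** 2)
```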
That being said, it is concerning that during ex vivo testing (cadaver head drops) the xPatch classified 49% of impacts as invalid, suggesting potential limitations with the detection algorithm (Siegmund et al., 2015).

Independent validation studies are needed to confirm the reliability and accuracy of the xPatch. Initial results suggest that the xPatch may have utility as a tool to measure longitudinal RHI exposure. However, caution must be taken when interpreting and reporting PLA values. Further in vivo studies should be conducted to determine the accuracy of the xPatch impact detection algorithm in the field.

2.2 Subconcussive Impacts in Sport

The discovery of CTE upon autopsy of a former professional football player initiated the hypothesis that RHI contributes to later-life neurodegeneration (Omalu et al., 2005). Since 2005, dozens of CTE cases have been found in deceased ex-professional football players. This led the academic community to develop a working hypothesis that RHI is a contributing factor to later-life neurological impairments and disease (Baugh et al., 2012). To date, comprehensive longitudinal studies investigating potential links between RHI and neurological alterations are lacking.

It has been established that contact sport athletes sustain impacts with enough mechanical force to injure axon integrity, but without reaching the threshold of concussion (Bailes et al., 2013). These impacts have previously been referred to as subconcussive impacts in the literature; more recently they have been referred to as repetitive head impacts (RHI). Collegiate football players sustain approximately 1177±773 subconcussive impacts, up to a reported maximum of 2492, per season (Gysland et al., 2012; Crisco et al., 2011).

A dose-response relationship has been found between cumulative RHI and later-life cognitive and neuro-behavioural impairments (Montenigro et al., 2016). Participants self-reported the number of years in football, position played, level played and the estimated percentage of games started. Montenigro et al. (2016) used these self-reports to estimate lifelong RHI based on average values from published studies that quantified RHI per season, position and level of play. The researchers found that the baseline threshold for a dose-response relationship varied between 2723 and 6480 impacts, depending on the outcome measure of interest (Montenigro et al., 2016). On average, the risk of later-life neurological impairment doubled with approximately every 2800 impacts sustained above the baseline threshold (Montenigro et al., 2016).

Montenigro’s findings provide evidence supporting the theory that a relationship exists between RHI and neurological impairment. The nature of this relationship, and whether it can be detected within a single season, remains unclear (Baugh et al., 2012).

2.2.1 Single Season Neurological Alterations and Head Impact Exposure

Although an estimated dose-response threshold for neurological impairment and RHI has been established, the model used to establish the relationship relied upon a retrospective design (Montenigro et al., 2016). Based on the research conducted to date, there is no consensus regarding a relationship between RHI and neurological status within active players. Some groups speculate that complex cognitive tasks, such as concussion assessment tests, can detect subtle neurological deficits resulting from an accumulation of cranial impacts (Bigler, 2005; McAllister et al., 2012).
Alternatively, other groups speculate that concussion assessment tests lack the sensitivity to detect subtle neurological deficits resulting from repetitive impact trauma (Gysland et al., 2012; Miller et al., 2007). Further compounding these concerns are methodological weaknesses in the longitudinal studies that investigated relationships between RHI and neurological status in active football players. These weaknesses include not quantifying impact exposure (Miller et al., 2007), grouping participants by subjective diagnosis rather than objective measures (Talavage et al., 2014; Breedlove et al., 2014), testing participants at random intervals (Talavage et al., 2014) or testing participants many weeks following their last contact exposure (McAllister et al., 2012). This is disconcerting, since the establishment of a relationship between RHI and neurological impairment in active athletes could potentially prevent chronic injury in this population.

It is known that the average impact magnitude is equivalent across shoulder-pads practices, full-pad practices and games, but the number of impacts sustained is significantly different (Reynolds et al., 2015). Impact frequency is greatest during games and decreases from full-pad practices to shoulder-pads practices to helmet-only practices (Reynolds et al., 2015). If a relationship between RHI and neurological impairments in active athletes is established, policy changes, such as restricting the number of contact practices per season, could be easily implemented to protect the health of contact sport athletes.

2.2.1.1 Animal Studies

Animal models have proven useful in testing the effect of RHI. Importantly, they have established that RHI can cause neurodegeneration in later life. In a study of rats that received cranial insults spaced 48 hours apart, the multiple-insult rats demonstrated significantly worse sensorimotor and cognitive impairment than single-insult and sham rats (Aungst et al., 2014). Interestingly, the single- and multiple-insult rats displayed equivalent neurodegeneration in both ipsilateral and contralateral hippocampi (Aungst et al., 2014). In another study, mice that received multiple mild head insults displayed anatomical damage in the optic nerve, cerebellum, corticospinal tract, lateral lemniscus and corpus callosum (Xu et al., 2016). The intensity of axonal degeneration was dependent upon the severity and frequency of cranial insult (Xu et al., 2016).

A major limitation of animal studies is their inability to determine whether a cranial insult given to the mouse or rat is equivalent to a concussive insult in humans (Xu et al., 2016; Aungst et al., 2014). Regardless, these studies provide crucial evidence in support of a link between the intensity and frequency of RHI and later-life neurological impairments and degeneration.

2.2.1.2 Human Studies

The work of Breedlove and Talavage’s group has been influential in the literature in determining a relationship between RHI and neurological dysfunction in active athletes. Their initial finding was that fMRI task performance was related to the number of impacts sustained (Breedlove et al., 2012). This appeared to suggest a potential link between neurophysiology and impact exposure; however, the functional neurological consequence of this relationship was unknown (Breedlove et al., 2012).
In 2014, when investigating RHI and neurological functional changes in a cohort of football players over a single season, Talavage and colleagues showed that 50% of subjects who presented as clinically healthy exhibited neurological deficits as determined by the Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT). Subjects displayed these abnormal evaluations when tested at random intervals throughout the season (Talavage et al., 2014). In a follow-up study, Breedlove et al. (2014) reported that 54.5% of clinically healthy football players exhibited neuropsychological deficits when measured in-season, as determined by ImPACT. Small sample sizes and inappropriate grouping of participants limited the findings of these studies: participants were grouped by subjective variables such as clinical concussion diagnosis, rather than objective variables such as impact exposure.

The work of Breedlove and Talavage appeared to establish that neurological alterations could be detected in active, clinically healthy, contact sport athletes. Yet a review of the literature shows conflicting evidence for this claim. Some researchers argue that football players can remain clinically healthy while exhibiting neurological deficits (Breedlove et al., 2014; Talavage et al., 2014), while others argue that neurological status remains unaltered during a single season (Miller et al., 2007; McAllister et al., 2012).

No relationship between head impact exposure and neurological status was found in a study that compared football players to non-contact control athletes (e.g. track team), except for a correlation between impact PLA and reaction time (McAllister et al., 2012). In their discussion, McAllister et al. (2012) postulated that deficits for other measures of neurological function (e.g. a memory task) would have been observed if their post-season testing had been conducted immediately following the last impact exposure rather than 26 days after it, thereby emphasizing the methodological weaknesses of their work.

Gysland et al. (2012) observed no change in the Standardized Assessment of Concussion (SAC) between collegiate football players tested before and after completion of a football season. Miller et al. (2007) unexpectedly found an improvement in SAC performance (p=0.0003) amongst 67 non-concussed football players between preseason, midseason (bye-week) and postseason test dates. However, Miller et al. did not quantify impact exposure and treated all participants as a homogeneous group even though 33 participants were on the starting team and 25 were on the non-starting team. The potential differences in impact exposure between participants may have been an unexamined covariate in their study.

Evidence in soccer players suggests a potential dose-dependent relationship between impact exposure (i.e. headers) and neurological dysfunction (Lipton et al., 2013). Emerging evidence suggests that as few as 10 soccer headers may cause a transient dysfunction in vestibular processing (Hwang et al., 2016). Oculomotor function, as measured by near point convergence, has been shown to increase by 29-38% compared to baseline values following RHI exposure in collegiate football players (Kawata et al., 2016). When investigating potential relationships between RHI and neurological status in collegiate football players, Gysland et al. (2012) found that balance deficits correlated with cumulative PLA magnitude and that the number of impacts greater than 90g PLA correlated with reported symptoms.
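Relationships of this kind, repeated weekly exposure paired with repeated test scores within the same athletes, are what the linear mixed models mentioned in the abstract and used in chapter 5 are designed for. The sketch below is an illustration only, with hypothetical column names and generated data; it is not the model specification from Appendix D.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per athlete per week (illustrative only).
rng = np.random.default_rng(0)
athletes = np.repeat([f"a{i:02d}" for i in range(8)], 10)        # 8 athletes x 10 weeks
impacts = rng.poisson(25, size=athletes.size)                    # weekly impact counts
severity = 0.1 * impacts + rng.normal(0, 1, size=athletes.size)  # toy symptom outcome

df = pd.DataFrame({"athlete": athletes,
                   "weekly_impacts": impacts,
                   "symptom_severity": severity})

# Random intercept per athlete, fixed effect of weekly impact count.
result = smf.mixedlm("symptom_severity ~ weekly_impacts",
                     data=df, groups=df["athlete"]).fit()
print(result.summary())
```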
These studies appear to suggest that RHI accrued during a single season of football may alter neurological function. Yet the relationship between RHI and neurological function, as well as whether these findings are clinically significant, is not yet clear.

Imaging modalities have also been used to investigate the effects of RHI. Independent studies have shown damage to the dorsolateral prefrontal cortex (DLPFC) following repetitive cranial impact exposure (Czerniak et al., 2015; Lipton et al., 2009). DLPFC damage was found to be partially predictive of cognitive deficits, specifically executive function deficits (Czerniak et al., 2015; Lipton et al., 2009).

Human research conducted to date appears to suggest the possibility of a link between RHI and neurological impairment. Whether this link requires many years to progress into a detectable impairment or whether it can be detected in active athletes is still unknown. More research is needed to establish a potential link, if any, between RHI and neurological impairment and degeneration. Caution must be exerted when interpreting current findings between RHI and neurological function, in active or former players, as research has only begun to control for covariates such as genetic variability and substance abuse (e.g. anabolic steroids).

2.3 Concussion Assessment Tools

In the hopes of developing a gold-standard diagnostic test for concussion, researchers have explored many different ideas, including blood and cerebrospinal fluid biomarkers (Jeter et al., 2013), neuropsychological tests (Barr & McCrea, 2001; Putukian, 2011), computerized neuropsychological tests (Van Kampen et al., 2006), measures of postural stability (Hunt et al., 2009; McCrea et al., 2003), oculomotor tests (Galetta et al., 2011) and electrophysiological examinations (Gaetz & Bernstein, 2001). Imaging techniques, such as single photon emission computed tomography, positron emission tomography and functional magnetic resonance imaging, have also been explored for their effectiveness in diagnosing concussion. The diagnostic accuracy of these numerous methods and tests has yielded mixed results. Complicating matters are factors such as age, gender, fatigue, practice effects, low test-retest reliability, and psychological distress (Putukian, 2011). Currently, a gold standard diagnostic technique remains elusive (McCrory et al., 2013).

The lack of a gold-standard diagnostic technique for concussion is concerning since 52% of concussions are known to go unreported by athletes, for either fear of removal from play or a belief that the injury was not serious enough to report (McCrea et al., 2004). Not only do we lack a method of accurately differentiating whether a patient is healthy or concussed, but we have concussed patients hiding their symptoms from medical practitioners, placing them at risk of Second Impact Syndrome, a very rare and often fatal condition (Bey & Ostick, 2009).

2.3.1 Sport Concussion Assessment Tool – 3rd Edition (SCAT3)

The SCAT3 is a concussion assessment tool that was developed during deliberations at the 4th International Conference on Concussion in Sport, held in Zurich in November 2012. The SCAT3 is intended to be used by healthcare practitioners involved in the care of injured athletes at all levels of sport. The SCAT3 was made freely accessible and the authors encouraged its distribution to all relevant healthcare practitioners.
The SCAT3 incorporates validated neuropsychological tests of concussion to provide data on symptomatology and functional impairments that clinicians can incorporate into their diagnostic decision (McCrory et al., 2013). The SCAT3 comprises the Glasgow Coma Scale (GCS), Maddocks Score, Graded Symptom Checklist (GSC), Standardized Assessment of Concussion (SAC), Neck Examination, Balance Examination (i.e. modified balance error scoring system [mBESS]) and Coordination Examination. A full copy of the SCAT3 can be found in Appendix A.

2.3.1.1 Graded Symptom Checklist

There are many symptoms that are common to concussion, for example dizziness, fatigue, headache and sensitivity to light (Guskiewicz et al., 2003). The GSC evaluates the perceived severity of 22 symptoms common to concussion using a 0-to-6 Likert scale. A score of zero represents 'normal' symptomatology (Lovell & Collins, 1998). The GSC takes approximately 2-3 minutes to administer and provides a summative score for the number of symptoms experienced and total symptom severity (see Appendix A for a copy of the GSC). The GSC is incorporated into commonly used concussion assessment tools including the SCAT3 and the NFL Sideline Concussion Assessment Tool (NFL, 2014; McCrory et al., 2013).

GSC scores remain significantly elevated up to five days post-concussion when compared to healthy controls (McCrea et al., 2003). The sensitivity of the GSC is 0.89 immediately post-injury but decreases to 0.10 five days following injury, although specificity remains constant at 1.00 (McCrea et al., 2004). McCrea et al. (2004) did not report receiver operating characteristic (ROC) curves in their studies. The internal consistency of the GSC has been reported to be 0.88-0.94 (Lovell et al., 2006).

A concern with the effectiveness of the GSC as a diagnostic tool is that its accuracy is dependent upon athletes being honest and forthcoming in disclosing their symptoms to medical practitioners. Future work should improve upon the work of McCrea et al. (2004) and build ROC curves to assess the diagnostic ability of the GSC in a population of active athletes.

2.3.1.2 Standardized Assessment of Concussion

The Standardized Assessment of Concussion (SAC) is a clinical test that was specifically designed to assess concussed patients. Its length and content were designed so that it could be implemented by clinicians with no previous psychometric expertise. It was designed to assess the domains of cognitive function most sensitive to concussion, specifically immediate memory, delayed recall, orientation and concentration (McCrea et al., 1998). Independent studies have confirmed that the SAC is sensitive to neurological impairment in concussed athletes (Broglio & Puetz, 2008; McCrea et al., 1998; McCrea et al., 2003). The SAC has been incorporated into both the SCAT3 and NFL Sideline Concussion Assessment Tool (McCrory et al., 2013; NFL, 2014).

The SAC has four sections (i.e. immediate memory, delayed recall, orientation and concentration) and takes 5 minutes to administer. Each section is scored independently; however, the aggregate score (out of 30) is often reported. A lower score indicates worse cognitive function and impaired global cognitive ability (Belanger & Vanderploeg, 2005). A copy of the SAC can be found in Appendix A.
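Although this thesis does not reproduce its analysis scripts, the ROC analyses referred to throughout this section can be illustrated with a brief sketch. The example below is illustrative only: the diagnosis labels and GSC severity scores are fabricated, and scikit-learn is assumed to be available; it is not the analysis code used in this thesis.

```python
# Minimal sketch: estimating an ROC curve and AUC for a symptom-based test
# such as the GSC, using fabricated scores (not study data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical labels: 1 = clinically diagnosed concussion, 0 = non-concussed control
diagnosis = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
# Hypothetical GSC total symptom severity scores for the same athletes
gsc_severity = np.array([34, 21, 15, 8, 6, 3, 2, 1, 0, 0])

fpr, tpr, thresholds = roc_curve(diagnosis, gsc_severity)  # ROC operating points
auc = roc_auc_score(diagnosis, gsc_severity)               # area under the ROC curve

for cut, sens, spec in zip(thresholds, tpr, 1 - fpr):
    print(f"cut-off >= {cut}: sensitivity = {sens:.2f}, specificity = {spec:.2f}")
print(f"AUC = {auc:.2f}")
```

Each threshold of the symptom score yields one sensitivity-specificity pair; the AUC summarizes diagnostic accuracy across all possible cut-offs, which is why it is the measure compared across tests in this thesis.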
When young athletes performed the SAC on two occasions 60 days apart, athletes that were administered the test three additional times between day 0 and day 60 did not demonstrate improved scores compared to the control group (Valovich-McLeod et al., 2004). This suggests that the SAC is resistant to learning effects with serial administration.

SAC performance is significantly worsened in concussed patients up to 48 hours post-injury, but subtle deficits may persist for up to five days post-injury (McCrea et al., 2003). The sensitivity of the SAC is 0.80 immediately post-injury, but decreases rapidly to 0.31 within 24 hours post-injury and to 0.18 five days following injury (McCrea et al., 2004). The specificity of the SAC is 0.91 immediately post-injury and remains high at 0.93 up to five days post-injury (McCrea et al., 2004). Conflicting area under the curve (AUC) values for the SAC have been published. Galetta et al. (2015) reported an AUC value of 0.66 (95% CI: 0.53-0.79) whereas Barr and McCrea (2001) reported an AUC value of 0.91-0.94 (95% CI not reported). These conflicting findings suggest uncertainty in the true diagnostic accuracy of the SAC.

The SAC may be beneficial as a diagnostic test for concussion immediately following a suspected injury; however, its usefulness as a diagnostic test in the hours to days following a suspected injury is poor. This is reflected in the low sensitivity values reported for the SAC when used 1-5 days post-injury by McCrea et al. (2004). More work assessing the accuracy of the SAC in differentiating concussed and healthy patients is necessary to evaluate its effectiveness as a concussion assessment test.

2.3.1.3 Balance Error Scoring System

The Balance Error Scoring System (BESS) was developed by researchers at the University of North Carolina - Chapel Hill and is the current gold standard for assessing static postural stability in concussed athletes (Valovich-McLeod et al., 2012; Hunt et al., 2009). The BESS requires participants to balance in three stances (double-leg, single-leg, tandem) on both a firm and foam surface for 20s with their hands on their hips and eyes closed (see Figure 2-1).

Figure 2-1. BESS conditions. (a) Double-leg firm surface. (b) Single-leg firm surface. (c) Tandem firm surface. (d) Double-leg foam surface. (e) Single-leg foam surface. (f) Tandem foam surface.

The BESS takes approximately five minutes to administer. Participants are assigned one point for each error (e.g. hands lifted off iliac crest) up to a maximum of 10 points per condition (Guskiewicz et al., 2001). A modified version of the balance error scoring system (mBESS) that excludes the three foam conditions is currently used in the SCAT3 and the official NFL Sideline Concussion Assessment Tool (McCrory et al., 2013; Guskiewicz et al., 2001; NFL, 2014). Since a single examiner scores the BESS subjectively, it is prone to measurement error. A review article found moderate intra-rater (ICC = 0.60-0.92) and inter-rater reliability (ICC = 0.57-0.85) for the BESS (Bell et al., 2011). The test-retest reliability of the BESS is moderate, with reported values of 0.70-0.80 (Putukian, 2011). The complete BESS protocol and error list can be found in Appendix A.

BESS scores remain significantly increased up to 24 hours post-concussion but return to baseline levels within 72 hours post-injury (McCrea et al., 2003; Bell et al., 2011).
The sensitivity of the BESS is 0.34 immediately post-injury, and decreases to 0.10 five days following injury (McCrea et al., 2004). The specificity fluctuates between 0.91-0.96 depending on the time of testing (McCrea et al., 2004).

The subjectivity of the BESS scoring procedure, its low sensitivity and the short window of detectable impairment following a concussion cast doubt on its usefulness as a concussion assessment tool. An ROC analysis of its ability to differentiate concussed and healthy patients should be conducted to support, or provide evidence against, its continued inclusion in concussion assessment toolkits.

2.3.1.4 King-Devick Test

The King-Devick Test (KDT) is a rapid numerical reading test that evaluates saccadic eye movements, language function and attention by having participants respond to external visual cues while measuring task completion time (Marinides et al., 2015; Galetta et al., 2011). The KDT involves reading aloud a string of digits from left to right on three progressively challenging test cards as accurately and as fast as possible (see Appendix B for the KDT and its instructions). The commonly reported KDT outcome is the total time to complete all 3 trials. The KDT takes approximately 2 minutes to administer, is unaffected by participant fatigue and has a high degree of test-retest reliability (ICC = 0.97; Galetta et al., 2011). The KDT exhibits a moderate learning effect upon re-test: control participants improve by 1.9s (range: 0.9-7.4s) between pre-exercise and post-exercise (Galetta et al., 2011). Clinical research studies suggest that KDT completion times worsen with concussion by an average of 4.4s-5.9s (range: -0.9s to 28.1s) compared to baseline times (Galetta et al., 2011; Leong et al., 2015).

In a meta-analysis of 15 studies (n=314), the sensitivity of the KDT to detect concussion immediately following injury was 0.86 and the specificity was 0.90 (Galetta et al., 2015). The AUC value for an ROC curve was reported as 0.89 (95% CI: 0.85-0.96), suggesting a high degree of diagnostic accuracy (Galetta et al., 2015). The sensitivity, specificity and AUC of the KDT to detect concussion at various post-injury time points (for example 24, 48, 72 or 120 hours post-injury) have not been investigated.

The current research on the KDT suggests a high diagnostic accuracy when it is used immediately post-injury. However, its effectiveness as a concussion assessment tool used in the clinical context remains unknown. The performance of the KDT in differentiating concussed and healthy patients 24-120 hours following an injury requires further investigation.

Chapter 3: Objectives and Hypotheses

The primary objective of this thesis was to investigate RHI, including impact detection technology, and neurocognitive function over the duration of a single season in a collegiate football population. The accuracy of a RHI monitoring tool was investigated, single-season RHI exposure in Canadian football was quantified, relationships between RHI and functional neurocognitive status were explored and the diagnostic accuracy of multiple concussion assessment tests was investigated. The findings of this thesis will help inform and educate researchers, medical professionals, policy-makers and athletes.

The thesis presents novel data regarding RHI in Canadian collegiate football and presents data pertaining to the accuracy of an impact-monitoring tool.
It is imperative that the accuracy of impact monitoring tools is well understood if researchers want to have a true understanding of the magnitude and frequency of head impacts in sport.

The exploratory models presented within this thesis pertain to the longitudinal effect of RHI on symptomatology and on physiological and cognitive aspects of neurological function. Collegiate athletes have a concurrent commitment to academics while participating in sport. As such, it is important that we understand the relationship between neurological impairment and RHI in this population.

This thesis presents novel data regarding the accuracy of commonly used diagnostic tools for concussion that do not require advanced training or invasive procedures. The findings of this research will inform clinicians as to which tests they should use in the clinic, especially if working with a collegiate athlete population.

3.1 Aims

The aims of this thesis are as follows:

1. To determine the accuracy of the post-processing algorithm of an impact measurement tool that attaches to the skin, which evaluates detected signal characteristics to classify detected events as valid impacts or artefacts.

2. To determine whether repetitive head impact exposure is related to alterations of neurological status in collegiate football players, as measured by functional tests of neurological status.

3. To evaluate the effectiveness of concussion assessment tools that can be applied by clinicians in the field in differentiating individuals with a diagnosed concussion from non-concussed individuals.

3.2 Hypotheses

The hypotheses for the preceding aims are as follows:

3.2.1 Hypothesis for Aim #1

The impact detection algorithm will perform with less than 100% accuracy.

3.2.2 Hypothesis for Aim #2

The cumulative number of head impacts will be a significant predictor of neurological impairment as measured by change from baseline performance using clinical measures of neurological status (i.e. GSC, SAC, BESS, and KDT).

3.2.3 Hypothesis for Aim #3

Concussion assessment tests evaluating cognitive measures (i.e. KDT, SAC) and physiological measures (i.e. BESS) will have a significantly higher diagnostic accuracy, as measured by area under a receiver operating characteristic curve, than a symptomatology measure of concussion (i.e. GSC), when using a clinical diagnosis as the gold standard.

Chapter 4: Impact Exposure in Collegiate Football

4.1 Validation of a Wearable Impact Monitoring Technology

This section of chapter 4 addresses the accuracy of the xPatch sensor. The following section (4.2) quantifies impact exposure in Canadian collegiate football.

4.1.1 Introduction

The cumulative number of repetitive head impacts (RHI) sustained during an athletic career has been found to be a predictor of later-life neurological impairment (Montenigro et al., 2016; Lipton et al., 2013). Investigators related the in-vivo neurological status of former football players to estimates of their lifetime impact exposure. Impact exposure estimates were tabulated using previously established RHI values for different sub-populations of football players, for example different levels of play and different positional groups. Therefore, the models were reliant upon the accuracy of these previously established datasets. However, a pilot study found that a technology used to quantify RHI exposure operates with less than 90% accuracy (Rebchuk et al., 2015).
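The exposure-estimation approach described above can be illustrated with a brief, purely hypothetical sketch. The collegiate per-season value is the published figure cited elsewhere in this thesis (approximately 1,177 impacts per season; Gysland et al., 2012); the high-school value and the years of play are placeholders, not data from any study.

```python
# Illustrative sketch only: estimating lifetime head impact exposure from
# previously published per-season impact norms, in the spirit of the
# exposure-estimation approach described above. All numbers are placeholders
# except the collegiate per-season mean (~1,177 impacts; Gysland et al., 2012).
years_played = {"high school": 4, "collegiate": 4}          # hypothetical career
impacts_per_season = {"high school": 600, "collegiate": 1177}  # high school value assumed

lifetime_estimate = sum(years_played[level] * impacts_per_season[level]
                        for level in years_played)
print(f"Estimated lifetime impact exposure: {lifetime_estimate} impacts")
```

The point of the sketch is that any such estimate is only as good as the per-season norms it draws upon, which is why the accuracy of the tools used to establish those norms matters.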
Normative datasets for RHI in collegiate football were established using the Head Impact Telemetry (HIT) system, a technology embedded within football helmets that quantifies head kinematics (Broglio et al., 2011; Crisco et al., 2011; Gysland et al., 2012; Crisco et al., 2012; Crisco et al., 2010). When measuring peak linear acceleration (PLA), the HIT system has been shown to have relative error rates as high as 136% (Siegmund et al., 2015). Independent groups have shown that the HIT system exceeds 15% error in PLA measurements for the majority of impacts recorded (Jadischke et al., 2013). These findings appear to suggest a limited ability for the HIT system to accurately quantify impact exposure.

Another sensor that has been used to quantify RHI exposure, and associated kinematics, in sport is the xPatch (X2 Biosystems, Seattle, WA). The xPatch is adhered to the head rather than embedded within a helmet like the HIT system. It is postulated that by attaching the sensor to the skin, rather than embedding it within a helmet, one obtains a more accurate measure of head kinematics. Yet, as seen with the HIT system, the xPatch has difficulties in accurately quantifying impact kinematics (McCuen et al., 2015; Siegmund et al., 2015). The xPatch has been shown to overestimate peak linear acceleration (PLA) and peak rotational acceleration (PRA) for a single impact event by 64±41% and 370±456%, respectively (Siegmund et al., 2015).

A crucial aspect of impact monitoring technologies, especially in the context of quantifying cumulative RHI exposure, is their proprietary detection algorithm. These algorithms classify each detected event as an impact or artefact (i.e. non-impact). A typical artefact is helmet removal by the participant. It is known that current impact monitoring technologies have difficulties in accurately evaluating head kinematics; however, to our knowledge the accuracy of these detection algorithms has not been investigated in the field. Even without field validation, numerous published studies have assumed that these detection algorithms were reliable and used them to quantify RHI exposure (i.e. the cumulative number of impacts sustained) in sport. This is concerning since, in a laboratory setting, these algorithms misclassified 49% of impacts as invalid (Siegmund et al., 2015).

The goal of this study is to evaluate the accuracy of the xPatch sensor's impact detection algorithm. We hypothesize that the algorithm will perform with less than 100% accuracy. It is imperative that these detection algorithms are validated so researchers can reliably establish accurate RHI values for different populations in sport.

4.1.2 Methods

The following methods were used for the study in section 4.1 of this thesis.

4.1.2.1 Study Participants

In 2014 and 2015, football players were recruited from a Canadian Interuniversity Sport (CIS; i.e. Canada's equivalent to the NCAA) collegiate football team to participate in this study. In year one (i.e. 2014), 21 players participated. In year two (i.e. 2015), 14 players participated, 6 of whom had participated in 2014. Data from year one and year two were combined into a single dataset. No participant had a history of developmental disorder, neurological disorder, or severe traumatic brain injury. Eight unique participants were diagnosed with concussion during the course of the study. Four concussions occurred during year one and four occurred during year two.
The experimental protocol was approved by the University of British Columbia Clinical Research Ethics Board (H11-02306), and conforms to the Declaration of Helsinki. Participants wore the xPatch sensor for each practice and game. Hereafter, we defined instances when the entire team was participating in either a practice or game as a team session.

4.1.2.2 Biomechanical Measurements

The xPatch sensor was chosen for investigation in this study due to its applicability in non-helmeted contact sports (e.g. rugby, soccer). The sensor contains a 3-axis linear accelerometer that detects ±200g per axis at a sampling rate of 1000Hz, and a 3-axis gyroscope that detects ±2000°/s per axis at a sampling rate of 850Hz. The sensor measures 31×17×8mm and weighs 9g. The sensor records 100ms of data (10ms pre-event, 90ms post-event) for each detected event. The sensor uses a proprietary algorithm to calculate various impact metrics and evaluates whether a detected event was an impact or artefact (i.e. non-impact) via cross-correlation analysis.

In this study we collected peak linear acceleration (PLA), peak rotational acceleration (PRA), peak rotational velocity (PRV), and the binary classification of event type (i.e. valid or invalid impact) from events detected by the sensor. The sensors were adhered to participants' right mastoid processes using double-sided adhesive tape. Each participant was assigned a unique sensor at the beginning of the season.

4.1.2.3 Statistical Analysis

Investigators recorded the commencement and termination time of all team sessions. Post-processing of data excluded events that occurred outside of team sessions, including during halftime of games. Impacts less than 10g PLA were excluded as they have been found to be inconsequential (Mihalik et al., 2007). The sensor is not incorporated into mandatory football equipment, yet for this study we achieved 93.6% compliance for participants wearing the sensor.

Detected events were time-stamped, stored internally by the sensors and uploaded to the Impact Monitoring System Software (X2 Biosystems, Seattle, WA) on a laptop following each team session. Data were de-identified and exported to Matlab for analysis (Version R2014a, The MathWorks Inc., Natick, MA).

Investigators (ADR and two research assistants) matched each detected event with game video using the event time-stamp. Investigators then visually classified each event as a valid, invalid or indeterminate impact. The three investigators evaluated different events. However, for events in which an investigator was uncertain of the classification, they collaboratively discussed the event and reached a consensus classification. Valid classifications were given to events that investigators deemed to be football-related impacts. Invalid classifications were given to events that investigators deemed to be unrelated to football impacts, for example helmet removal, changing direction while running, etc. Indeterminate impacts were events that investigators were unable to view on film (e.g. the participant was not visible in the camera frame) or when investigators were uncertain of whether a football-related impact occurred and a consensus classification could not be reached. Investigators were able to slow down, rewind and magnify the film while classifying events. Each event was then classified as true positive (TP), false positive (FP), false negative (FN), true negative (TN) or excluded as per Table 4.1.
Table 4.1. Event type classification, using the xPatch detection algorithm and visual confirmation of event types.

                                        Investigator Visual Classification Using Video
xPatch Event Detection Algorithm        Impact Event           Non-Impact Event       Indeterminate Event
  Valid Impact                          True Positive (TP)     False Positive (FP)    Excluded
  Invalid Impact                        False Negative (FN)    True Negative (TN)     Excluded

Following classification, events were de-assigned from participants and a pooled dataset was created. This ensured that the event detection algorithm was investigated independent of participants. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy were calculated using Equation 1. Binomial proportion 95% confidence intervals were calculated using the Clopper-Pearson interval method.

Equation 1. Calculations for sensitivity, specificity, PPV, NPV and accuracy.

\mathrm{Sensitivity} = \frac{TP}{TP + FN} \qquad \mathrm{Specificity} = \frac{TN}{TN + FP}

\mathrm{PPV} = \frac{TP}{TP + FP} \qquad \mathrm{NPV} = \frac{TN}{TN + FN}

\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}

Coefficients of determination (r2) were calculated between kinematic variables and impact count using visually confirmed impact events (true positives and false negatives). Cumulative peak linear acceleration (cPLA), cumulative peak rotational acceleration (cPRA) and cumulative peak rotational velocity (cPRV) for a full season were the kinematic variables of interest. Cumulative impact count for a full season was used as the predictor variable. This analysis was repeated for all practices and games using events that were deemed valid by the detection algorithm (true positives and false positives, with false negatives excluded). These data are presented for comparison.

A Mann–Whitney U Test was performed to determine whether the PLA distribution of impacts deemed valid by the detection algorithm was significantly different from the PLA distribution of visually confirmed impacts.

4.1.3 Results

We observed 37,062 unique events from 126 team sessions (n=52 in year 1; n=74 in year 2). Of the events detected, 28,680 occurred during practices while 8,382 occurred during games. Of the events detected during games, 1,142 were classified as indeterminate. Table 4.2 outlines the classifications given to game events that investigators could visually review.

Table 4.2. Classification of game events that were visually reviewed by investigators.

Event Classification    Number of Events
True Positive           3693
True Negative           1745
False Positive          734
False Negative          1068
Total                   7240

4.1.3.1 Sensitivity, Specificity, Positive and Negative Predictive Values

In this analysis, only data collected during games (n=8 in year 1, n=13 in year 2) were analyzed. Twenty-six participants played during at least one game (n=15 in year 1, n=11 in year 2) and were included in the analysis. The remaining participants were excluded from analysis.

For game events that were visually reviewed, the sensor's sensitivity was 77.6% (95% CI: 76.4-78.8%) and specificity was 70.4% (95% CI: 68.6-72.2%). The corresponding false negative rate (FNR) was 22.4% (95% CI: 21.2-23.6%) and false positive rate (FPR) was 29.6% (95% CI: 27.8-31.4%). The sensor's PPV was 83.4% (95% CI: 82.3-84.5%) and NPV was 62.0% (95% CI: 60.2-63.8%). The overall accuracy of the detection algorithm was 75.1% (95% CI: 74.1-76.1%).

In a post-hoc analysis we binned events into percentile groupings based on their PLA value and calculated the sensitivity, specificity, FPR, FNR, PPV, NPV and accuracy for each bin. The results are presented in Table 4.3.
Percentiles were used because linearity exists for PLA measurements if the data are pooled (Siegmund et al., 2015).

Table 4.3. Accuracy of the xPatch detection algorithm for different ranges of impact PLA values.

PLA Percentile      0 - <25th     25th - <50th    50th - <75th    75th - <95th    ≥95th
PLA Range (g)       10 - <12.8    12.8 - <18.2    18.2 - <30.8    30.8 - <66.4    ≥66.4
Number of Events    1810          1810            1810            1448            362
Sensitivity         66.7%         72.0%           79.9%           87.8%           77.9%
Specificity         73.0%         70.5%           66.9%           63.1%           81.1%
PPV                 68.0%         80.3%           87.1%           91.6%           92.6%
NPV                 71.9%         60.1%           54.4%           53.1%           54.9%
FNR                 33.3%         28.0%           20.1%           12.2%           22.1%
FPR                 27.0%         29.5%           33.1%           36.9%           18.9%
Accuracy            70.1%         71.4%           76.5%           83.4%           78.7%

4.1.3.2 Correlation of Cumulative Head Impact Kinematics

During games, 4,761 unique impacts occurred that were visually confirmed (i.e. true positives and false negatives). Coefficient of determination values were calculated using cPLA, cPRA, cPRV and impact count, as acquired over the course of a single season for visually confirmed impacts. These data are presented in Table 4.4. For the analysis, each participant was considered as a unique case containing all four variables. The range of values for impact count was 3-781 impacts, for cPLA was 82.7-2.3×10^4 g, for cPRA was 1.9×10^4-4.3×10^6 rad/s2, and for cPRV was 5.1×10^3-1.1×10^6 rad/s.

Table 4.4. r2 values for cumulative impact kinematic variables of visually confirmed impacts.

                  Impact Count    PLA (g)    PRA (rad/s2)    PRV (rad/s)
Impact Count      1               0.98       0.98            0.99
PLA (g)           0.98            1          0.99            0.99
PRA (rad/s2)      0.98            0.99       1               0.99
PRV (rad/s)       0.99            0.99       0.99            1

Figure 4-1 plots the relationship between impact count and cPLA, cPRA, and cPRV for all participants using visually confirmed impacts (true positives and false negatives).

Figure 4-1. Correlation between impact count and cumulative head kinematic variables. Blue diamonds represent participants. These data include all visually confirmed impacts (true positive and false negative impacts).

When including all team sessions, 17,103 impacts were deemed valid by the xPatch detection algorithm (i.e. true positives and false positives, with false negatives excluded). Coefficient of determination values for cPLA, cPRA, cPRV and impact count, using impacts acquired over the course of a single season that were deemed valid by the detection algorithm, are presented in Table 4.5.

Table 4.5. r2 values for cumulative impact kinematic variables for all impacts.

                  Impact Count    cPLA (g)    cPRA (rad/s2)    cPRV (rad/s)
Impact Count      1               0.90        0.78             0.82
cPLA (g)          0.90            1           0.96             0.98
cPRA (rad/s2)     0.78            0.96        1                0.99
cPRV (rad/s)      0.82            0.98        0.99             1

4.1.3.3 Distribution of Impact Confirmation Types

During games, 7,240 unique events occurred that were classified by investigators. A Mann–Whitney U Test indicated that no statistical difference exists between events that were deemed valid by investigators (i.e. true positives and false negatives) and events that the sensor's algorithm classified as valid (i.e. true positives and false positives, with false negatives excluded), Z = -0.466, p = 0.641. Boxplots for these data are shown in Figure 4-2. No significant differences were found when separate Mann–Whitney U Tests were performed on individual participants' data.

Figure 4-2. Boxplot between events that were determined valid by investigators and events that the sensor's algorithm classified as valid.
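As a point of reference for the calculations defined in Equation 1, the following minimal Python sketch reproduces the reported game-event metrics from the counts in Table 4.2. It is illustrative only and is not the analysis code used in this thesis (the analysis was performed in Matlab); statsmodels' proportion_confint is used here as one available implementation of the Clopper-Pearson interval.

```python
# Minimal sketch: classification metrics (Equation 1) from the Table 4.2 counts,
# with a Clopper-Pearson 95% confidence interval for sensitivity as an example.
from statsmodels.stats.proportion import proportion_confint

TP, TN, FP, FN = 3693, 1745, 734, 1068

sensitivity = TP / (TP + FN)                 # ~0.776
specificity = TN / (TN + FP)                 # ~0.704
ppv = TP / (TP + FP)                         # ~0.834
npv = TN / (TN + FN)                         # ~0.620
accuracy = (TP + TN) / (TP + TN + FP + FN)   # ~0.751

# Clopper-Pearson ("beta") interval for sensitivity
sens_ci = proportion_confint(TP, TP + FN, alpha=0.05, method="beta")

print(f"sensitivity = {sensitivity:.3f}, 95% CI = ({sens_ci[0]:.3f}, {sens_ci[1]:.3f})")
print(f"specificity = {specificity:.3f}, PPV = {ppv:.3f}, NPV = {npv:.3f}, accuracy = {accuracy:.3f}")
```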
4.1.4 Discussion

It is well established that impact-monitoring technologies, both the xPatch and the HIT system, have difficulties in accurately quantifying head kinematics (Siegmund et al., 2015; McCuen et al., 2015; Beckwith et al., 2012; Jadischke et al., 2013). Previous studies have described inaccuracies with impact detection algorithms during laboratory tests but did not investigate their on-field performance (Siegmund et al., 2015). A lack of on-field validation studies is surprising, as these detection algorithms are crucial for RHI research and clinical applications of these technologies. Here, we sought to determine the accuracy of the detection algorithm for a skin-based impact monitoring technology.

The overall accuracy of the xPatch's impact detection algorithm was moderate (75.1%), confirming our hypothesis that the xPatch's impact detection algorithm performs with less than 100% accuracy. This finding indicates that the sensor correctly classifies three out of every four detected events, but conversely, incorrectly classifies one out of every four detected events. Based on this finding, we suggest caution when interpreting impact count data measured by the xPatch and raise doubts regarding the accuracy of previously reported RHI values from studies that used the xPatch sensor. Future work should investigate the HIT system's detection algorithm since this technology has been frequently used to establish RHI exposure values in football.

Sensitivity and PPV increased with increasing impact PLA percentiles, whereas NPV decreased and specificity remained fairly constant. This suggests that both the waveform pattern and waveform magnitude potentially contribute to inaccuracies within the detection algorithm. The higher values found for sensitivity (77.6%) and PPV (83.4%) compared to specificity (70.4%) and NPV (62.0%) appear to suggest that the detection algorithm preferentially favours false positive results compared to false negative results. This is in line with current thinking in concussion management, whereby it is recommended to remove athletes from play if any concussion or head injury is suspected (McCrory et al., 2013). If these technologies are being used clinically to help inform removal-from-play decisions, this finding may be beneficial. However, when the sensor's error in quantifying PLA is considered alongside the high FPR (29.6%) reported here, clinicians may only have moderate confidence when interpreting data from single events detected by the xPatch sensor (Siegmund et al., 2015). If this sensor is used to monitor single events for specific kinematic characteristics, unnecessary removal-from-play decisions may result. This may jeopardize the trust between medical practitioners, athletes, and coaches. Medical practitioners who use this tool in the field to monitor impact exposure in real-time should interpret data cautiously.

A potential limitation of the sensitivity, NPV and accuracy values reported within this study is that some events went undetected. It is possible that a few impacts over 10g PLA did not trigger the xPatch sensor. This may have led to the number of false negatives reported being lower than the actual number of false negative occurrences. Thus, our sensitivity, NPV and accuracy values may be slightly elevated. In a laboratory investigation, however, the xPatch sensed all impact events (Siegmund et al., 2015).
This suggests that very few, if any, impact events may have been missed during our data collection and gives confidence to the sensitivity, NPV and accuracy values reported.

The strong coefficients of determination (r2) observed between cumulative impact count and cPLA, cPRA and cPRV lend support to the theory suggested by other researchers that impact count is an adequate measure of cumulative RHI exposure in sport (Reynolds et al., 2015; Montenigro et al., 2016). Although the r2 values between impact count and cumulative head kinematics ranged between 0.98-0.99 for visually confirmed impacts, these values decreased to 0.78-0.90 when investigating impacts deemed valid by the xPatch's detection algorithm. This finding re-iterates the importance of improving the accuracy of impact detection algorithms. Without improved detection algorithms, or visually reviewing every detected event, there is a relatively high degree of error when estimating cumulative head kinematic variables based on impact count.

Strong correlations between cumulative kinematic variables and impact count suggest that impact count is a suitable measure when quantifying RHI exposure in sport. Since impact count is a relatively simple measure to quantify, this finding is quite promising. An accurate event detection algorithm will allow for the establishment of population-specific RHI exposure values in sport, and will improve investigations into the relationship between RHI and neurological impairments. We suggest the development of a sport-specific machine-learning algorithm where multiple investigators classify the same detected events for a given sport. This would ensure robustness of the algorithm in detecting waveforms that are typical of an impact in a given sport, such as headers in soccer. Without advancement of impact detection algorithms, researchers will remain restricted in their ability to quantify RHI in sport.
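One possible shape for the sport-specific, video-labelled classifier suggested above is sketched below. It is illustrative only: the features (peak linear acceleration and pulse duration), the labels and the model choice are fabricated placeholders, scikit-learn is assumed to be available, and nothing here reflects an implemented or validated algorithm.

```python
# Illustrative sketch of a video-labelled impact/artefact classifier.
# All features and labels below are fabricated placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical training data: one row per detected event
# [peak linear acceleration (g), pulse duration (ms)]
features = rng.uniform([10, 2], [80, 20], size=(200, 2))
# Hypothetical video-confirmed labels: 1 = football impact, 0 = artefact
labels = rng.integers(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, features, labels, cv=5)  # cross-validated accuracy
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```

In practice, the full 100ms waveforms recorded by the sensor, rather than two summary features, would likely be needed to separate sport-specific impacts (e.g. soccer headers) from artefacts such as helmet removal.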
4.1.5 Conclusion

This section gives pause to current work investigating relationships between RHI and neurological impairment in sport. We showed that impact count strongly correlates to cPLA, cPRA and cPRV if the measure of impact count is accurate. When relying upon the xPatch's detection algorithm to quantify impact count, rather than visually confirming impacts, these correlations were weakened. The accuracy of the investigated impact detection algorithm was found to be 75.1%.

This work confirmed a previously suggested hypothesis that impact count is an appropriate measure of impact exposure; however, we argue that this theory only holds if confidence exists in the tool used to quantify impact count. Future work should aim to improve event detection algorithms. Improved detection algorithms should provide an accurate measure of impact count, thus allowing for accurate predictions of head kinematics over a given time period for contact sport athletes.

4.2 Head Impact Exposure in Canadian Collegiate Football

As discussed and shown in section 4.1, limitations with current head impact monitoring technologies exist. Yet when used cautiously, these data provide context regarding repetitive head impact (RHI) exposure in sport. Whereas RHI exposure in American football at the collegiate and high school levels has been well investigated, RHI exposure in Canadian football, at the professional, collegiate or high school level, has not been quantified to our knowledge. This section provides novel data on RHI in Canadian football at the collegiate level.

4.2.1 Introduction

Sport-related concussion has been recognized as a serious health concern for all contact sport athletes (McCrory et al., 2013). It is estimated that 4.4-5.5% of football players will sustain a concussion each season (Guskiewicz et al., 2000). However, concussions are transient injuries; the vast majority (90%) resolve in 7-10 days without intervention (McCrory et al., 2013). Potentially more concerning than concussion are the long-term implications of RHI in sport. Football players accumulate hundreds of subconcussive impacts (defined in subsection 2.2) each football season, the cumulative effects of which remain unclear. We are only now beginning to understand the long-term implications of RHI exposure.

The discovery of chronic traumatic encephalopathy (CTE) in autopsies of former professional football players evoked the hypothesis that RHI contributes to later-life neurodegenerative disease (Omalu et al., 2005). Recent work suggests that RHI exposure (i.e. the number of impacts sustained) over a football career can increase the risk of later-life neurological impairments in a dose-response relationship (Montenigro et al., 2016). It is imperative that we accurately quantify head impact exposure in all sport to allow for accurate risk assessment of later-life neurological impairment.

Previous research has quantified RHI exposure in American football at the high school and collegiate (NCAA) level, including differences in RHI exposure between player positions (Crisco et al., 2010; Crisco et al., 2012; Crisco et al., 2011; Broglio et al., 2011; Mihalik et al., 2007; Reynolds et al., 2015). It has been reported that American collegiate football players sustain 1,177±773 subconcussive impacts, up to a maximum of 2,492, per season (Gysland et al., 2012; Crisco et al., 2011).

Canadian football is similar to American football but with a few notable differences. The Canadian field is 10yd longer, 11.15yd wider, and has 10yd longer end zones. Under Canadian rules, players line up 1yd apart at the line of scrimmage, rather than one football length (~0.3yd) apart. Furthermore, Canadian rules allow 12 players on the field rather than 11, and permit three downs, instead of four, to advance the ball 10 yards. Canadian rules typically result in more pass-oriented, rather than run-oriented, offensive strategies. Canada has 1,581 players per year at the collegiate level as well as 9 professional teams and 19 semi-professional teams (CIS-SIC, 2015). The goal of this study was to quantify RHI exposure in Canadian collegiate football.

4.2.2 Methods

General participant information was previously described in subsection 4.1.2.1. In this section, participants who failed to wear the xPatch for >20% of team sessions were excluded from analysis. This exclusion criterion is based on a previous study that used the xPatch to quantify impact exposure in the field (Reynolds et al., 2015).

4.2.2.1 Biomechanical Measurements

The xPatch sensor, hereafter referred to as the sensor, was chosen for investigation in this study. The sensor's specifications are discussed in detail in subsection 4.1.2.2. In this study we investigated impact count, peak linear acceleration (PLA), peak rotational acceleration (PRA) and peak rotational velocity (PRV) as our kinematic variables of interest. We excluded events that were deemed invalid by the xPatch's event detection algorithm, although we recognize that the accuracy of this algorithm is moderate at 75.1% (discussed in section 4.1).
Each participant was assigned a unique sensor at the beginning of the season. Participants wore the sensor on their right mastoid process, attached using double-sided adhesive tape.

Events were time-stamped, stored internally by the sensors and uploaded to the Impact Monitoring System Software (X2 Biosystems, Seattle, WA) on a laptop following each team session. Data were de-identified and exported to Matlab for analysis (Version R2014a, The MathWorks Inc., Natick, MA). Additional statistical analysis was conducted in IBM SPSS Statistics for Macintosh (Version 23.0, IBM Corp., Armonk, NY).

4.2.2.2 Statistical Analysis

Investigators (either ADR or HJB) were present at each team session. Team sessions were defined as instances when the entire team was participating in either a practice or game. The start and end time of each team session was recorded, and investigators assigned a team session type designation for each team session. Team sessions were designated as a helmet-only practice, shoulder-pads practice, full-pads practice (shoulder pads and thigh pads) or game. These designations are consistent with those of Reynolds et al. (2015). Post-processing of data excluded events that occurred outside of team sessions, such as during halftime of a game. Impacts less than 10g PLA were excluded as they have been found to be inconsequential (Mihalik et al., 2007).

Cumulative PLA (cPLA), cumulative PRA (cPRA), cumulative PRV (cPRV) and impact count accrued over the length of a season are reported. Data were extrapolated to represent perfect compliance as described in subsection 4.2.2.3.

Participant-specific impact count, cPLA, cPRA and cPRV for each team session type were divided by the length of time of data collection for each respective team session type to generate an impact exposure per minute per team session type variable. As a result of imperfect compliance (93.6%) for wearing the sensor and coaching decisions, we did not have data for every participant for all four team session types. For helmet-only team sessions we collected data from 27 participants, for shoulder-pads team sessions we collected data from 29 participants, for full-pads team sessions we collected data from 31 participants, and for games we collected data from 25 participants. We had 22 participants with complete data sets (i.e. data for each team session type).

Mean impact exposure per hour values and 95% confidence intervals were calculated for each team session type. Running time was used in these analyses, e.g. we included time when a coach was speaking between drills during a practice. All participants with data for a given team session type were included in our analysis.

A repeated measures analysis of variance was performed to determine if differences in impact exposure per hour existed between team session types. The 22 participants with complete datasets were used for this analysis. A Tukey post-hoc analysis, with a Bonferroni correction applied, was performed to further investigate between-group differences. These analyses were performed using the non-extrapolated dataset and were repeated for each impact variable (cPLA, cPRA, cPRV and impact count). A priori alpha values were set at 0.05.
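A minimal sketch of this per-hour exposure calculation and the repeated-measures comparison is shown below. It is illustrative only: the season totals in the data frame are hypothetical, pandas and statsmodels are assumed to be available, and the thesis analyses were actually performed in Matlab and SPSS.

```python
# Illustrative sketch only (hypothetical data). Season totals per player and
# team session type are converted to impacts per hour, then compared with a
# repeated-measures ANOVA.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

sessions = pd.DataFrame({
    "player": ["P01"] * 4 + ["P02"] * 4 + ["P03"] * 4,
    "session_type": ["helmet", "shoulder", "full", "game"] * 3,
    "impacts": [40, 45, 160, 95, 25, 60, 140, 120, 55, 70, 200, 150],  # hypothetical
    "minutes": [1800, 950, 2400, 1700] * 3,                            # hypothetical
})
sessions["impacts_per_hr"] = sessions["impacts"] / (sessions["minutes"] / 60.0)

# AnovaRM expects one observation per player per session type (a balanced design)
result = AnovaRM(data=sessions, depvar="impacts_per_hr",
                 subject="player", within=["session_type"]).fit()
print(result)
```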
4.2.2.3 Missing Data Extrapolation

Previous studies that used the xPatch reported less than 100% compliance with respect to participants wearing the sensor (Reynolds et al., 2015). This is likely because the sensor is not incorporated into mandatory football gear and participants may forget to have the sensor placed prior to the commencement of a team session. Therefore, we developed an algorithm to extrapolate perfect compliance with respect to cumulative load per season. The extrapolated data are provided for reference to published studies reporting cumulative RHI in collegiate football per season.

To address missing impact data, investigators first determined whether the sensor was working properly (i.e. providing reliable data) for each team session. Investigators reviewed raw data from every team session where impact exposure values were 3 standard deviations greater than the average for that team session type. Data were excluded if they were atypical of a team session, for example if all recorded events occurred in a 5-minute period. Investigators then summed all outlier-removed data (PLA, PRA, PRV and count) and length of time (minutes) for each team session type for every participant. In this way, we generated an impact exposure per minute per team session type (IE/min/type) for each participant, and for each variable of interest.

For team sessions when a participant failed to wear the sensor, and team sessions with outlier data removed, missing exposure was estimated as IE/min/type multiplied by the duration (minutes) of that team session. For team sessions when data were partially collected (e.g. the sensor died prematurely or became dislodged), missing impact exposure was estimated as IE/min/type multiplied by the duration (minutes) of missing data. This value was then added to the raw data collected for the team session of interest.
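The extrapolation rule described above can be illustrated with a brief sketch. All names and numbers below are placeholders for a single hypothetical participant, not study data.

```python
# Illustrative sketch of the compliance-extrapolation rule: exposure per minute
# per team session type (IE/min/type) multiplied by the minutes of missed data,
# added to the observed totals.

# Observed (outlier-removed) totals for one participant, by team session type
observed_impacts = {"helmet": 40, "shoulder": 35, "full": 150, "game": 90}
observed_minutes = {"helmet": 900, "shoulder": 600, "full": 1500, "game": 800}

# Minutes of team sessions this participant missed (sensor not worn / data lost)
missed_minutes = {"helmet": 60, "shoulder": 0, "full": 120, "game": 45}

extrapolated = {}
for session_type, impacts in observed_impacts.items():
    rate_per_min = impacts / observed_minutes[session_type]   # IE/min/type
    extrapolated[session_type] = impacts + rate_per_min * missed_minutes[session_type]

season_total = sum(extrapolated.values())
print({k: round(v, 1) for k, v in extrapolated.items()}, round(season_total, 1))
```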
4.2.3 Results

Three participants dropped out of the study and one participant only wore the sensor for 17.8% of team sessions. These data were excluded from analysis. Data were collected from 126 team sessions (n=52 in year 1; n=74 in year 2), which had a total duration of 14,535 minutes. Helmet-only practices comprised 26.3% of total team session time, shoulder-pads practices were 13.9%, full-pads practices were 34.3% and games were 25.5%. Table 4.6 outlines the minutes observed for each team session type in year one and year two of the study. For the duration of the study we extrapolated a total of 22,399 impacts.

Table 4.6. Time spent in each team session type in each year of the study. All values are given in minutes.

          Helmet Only    Shoulder-Pads    Full-Pads    Game    Total
Year 1    653            547              3394         1401    5995
Year 2    3169           1481             1586         2304    8540
Total     3822           2028             4980         3705    14535

The average number of impacts sustained per season was 722.6 (SD: 386.9). The mean per-season cPLA was 17437.6g (SD: 9076.7g), cPRA was 3.2×10^6 rad/s2 (SD: 1.9×10^6 rad/s2) and cPRV was 8.1×10^5 rad/s (SD: 4.6×10^5 rad/s). The cumulative impact exposure values for each team session type are given in Table 4.7.

Table 4.7. Average cumulative head kinematics by team session type for a collegiate football team (Canadian football). Standard deviations are reported in brackets.

                 Impact Count     cPLA (g)            cPRA (rad/s2)           cPRV (rad/s)
Helmet Only      113.2 (168.2)    2275.4 (3190.9)     4.1×10^5 (6.1×10^5)     1.1×10^5 (1.6×10^5)
Shoulder-Pads    102.2 (103.6)    22478.0 (2115.5)    4.2×10^5 (4.2×10^5)     1.1×10^5 (1.0×10^5)
Full-Pads        382.7 (194.8)    9088.8 (4432.6)     1.7×10^6 (8.2×10^5)     4.2×10^5 (2.0×10^5)
Game             183.9 (202.5)    5151.0 (5846.0)     9.6×10^5 (1.2×10^6)     2.4×10^5 (4.6×10^5)

4.2.3.1 Differences in Impact Exposure per Hour by Session Type

Impact exposure per hour and 95% confidence intervals were determined for each team session type. These values are given in Table 4.8. We suspect that variations in position played, experience and desire for contact amongst our participants led to the broad range of confidence intervals observed.

Table 4.8. Impact exposure per hour for different team session types. Confidence intervals (95%) are given in brackets.

                 Impacts (hit/hr)    PLA (g/hr)             PRA (rad/s2/hr)              PRV (rad/s/hr)
Helmet Only      2.2 (1.2-3.2)       40.0 (24.8-55.2)       6888.2 (3972.1-9804.3)       1786.9 (1114.7-2459.1)
Shoulder-Pads    4.9 (3.2-6.5)       108.8 (77.3-140.2)     20082.1 (14242.7-25921.6)    5157.7 (3688.7-6626.7)
Full-Pads        7.6 (5.8-9.5)       180.6 (143.1-218.0)    33103.3 (25963.6-40242.9)    8387.2 (6600.5-10173.9)
Game             8.5 (6.0-11.0)      244.7 (167.4-322.0)    45619.6 (30980.4-60258.8)    11465.9 (7820.4-15111.4)

With respect to impact count, a significant effect of team session type was found, F(3,19) = 37.56, p < 0.001. A Tukey post-hoc analysis revealed that all pairwise differences between team session types were significant (p < 0.05), except between full-pads and game team session types (p = 1.000). These data are presented in Figure 4-3.

Our results for the cPLA, cPRA and cPRV per hour variables followed the same pattern as impact count. For cPLA, cPRA and cPRV per hour, a significant effect (p < 0.05) of team session type was observed. Tukey post-hoc analyses again revealed significant pairwise differences between team session types (p < 0.05), with the exception that no significant differences were found between full-pads and game team session types.

Figure 4-3. Impact count per hour between different team session types. Light grey lines show data for individual participants, dark blue data show sample mean values with 95% confidence intervals. Red crosses represent participant-specific values for the team session type.

4.2.4 Discussion

Although current impact-monitoring technologies are limited in their ability to accurately measure head kinematics and detect impact events, they can provide approximate values, and thus context, for RHI exposure in sport. Here we present data concerning impact exposure in Canadian collegiate football. We present extrapolated data to ensure that our findings fit in the context of previous studies reporting cumulative impact exposure per season in collegiate football.

We observed 722.6 (SD: 386.9) impacts per season, which is less than the 974 (SD not reported) impacts per season Reynolds et al. (2015) observed while using the xPatch in NCAA collegiate football. We suspect that we observed fewer impacts per season than those reported in American collegiate football due to the increased field spacing in Canadian football. However, it is well known that player position can influence the number of impacts sustained in a single season (Crisco et al., 2011).
Thus, individual differences in playing position and experience between participants involved in these studies may be the cause of the observed difference in impact count per season.

Similar to Reynolds et al. (2015), we found that helmet-only team sessions have the lowest exposure rates (2.23 impacts/hr), followed by shoulder-pads team sessions (4.87 impacts/hr), and then by full-pads team sessions (7.65 impacts/hr). Game team sessions had the highest exposure rate (8.53 impacts/hr), but this did not reach statistical significance when compared to exposure rates in full-pads team sessions. This was likely a result of some participants being only minimally involved in games. In a post-hoc analysis, the principal investigator (ADR) grouped participants by those who started and played frequently in games, and those who did not. Participants were deemed to be starters if they were in the starting offensive or defensive line-up (excluding special teams) for 50% or greater of the games in which they dressed. The starters averaged 12.7 (95% CI: 9.4-16.1) impacts/hr during games, whereas non-starters averaged 3.8 (95% CI: 2.0-5.6) impacts/hr. These data suggest that impact exposure increases between full-pads practices and games, confirming Reynolds et al.'s (2015) finding, although this increase is dependent upon the status (i.e. starter or non-starter) of the participant.

We suspect that the observed trends in impact exposure load per hour for impact count, cPLA, cPRA and cPRV between team session types were similar because of the strong correlation between impact count and the cumulative kinematic variables (presented in subsection 4.1.3.2).

Our data improve on the work of Reynolds et al. (2015) in that we provided impact exposure per hour, an objective measure of time, whereas Reynolds et al. (2015) provided impact exposure per practice, a subjective unit of time. Reporting impact exposure per hour gives context to individuals who may be unfamiliar with collegiate football.

Our findings suggest that by altering the type of team session for a given day, week or season, a coach can modulate impact exposure and thus modulate head injury risk. Recently, NCAA sub-divisions have begun to regulate the number of full-pads practices teams can hold during a season (Belson, 2016). Our study provides data to support the notion that policy changes, such as limiting the number of contact practices, can reduce the cumulative impact burden in collegiate athletes. Further work should focus on whether these policy changes, and the resultant changes to impact burden, have any effect on the rate of head injuries in collegiate sport.

4.2.5 Conclusion

This section investigated the impact exposure loads in Canadian collegiate football. We generated a normative dataset of RHI values for Canadian collegiate football, which is imperative for researchers investigating the relationship between cumulative RHI and later-life neurological impairment and disease. Our observed season-long cumulative impact loads were slightly less than those reported in American collegiate football, but within the range of previously reported values. Our findings suggest that altering a team session type can reduce the impact burden on athletes, which may reduce their risk of head injury.

By classifying impact exposure for different team session types per hour, rather than per team session, we advanced the methodology of Reynolds et al. (2015) and established a more externally valid dataset.
The impact load per hour values we generated should help inform coaches, policy-makers and sports medicine practitioners of the impact burden placed on collegiate football players during a given team session.

Chapter 5: Exploratory Models of Neurological Function and Repetitive Head Impacts

Here, we sought to investigate potential relationships between repetitive head impacts (discussed in chapter 4) and functional neurological status over the course of a single collegiate football season. In section 4.1 we showed that cumulative head impact kinematics could be reliably estimated from impact count. Therefore, in this chapter we used impact count as our measure of interest for repetitive head impacts (RHI).

We acknowledge that our measure of impact count contains intrinsic error as a result of inaccuracies of the detection algorithm (shown in section 4.1). The xPatch sensor is neither 100% specific nor 100% sensitive to head impacts. Therefore, false positives and false negatives are included within the models presented in this chapter.

Although impact detection algorithms display inaccuracies, we currently lack a better-validated tool to measure impact count. The measure of impact count provided by the xPatch gives a general estimate of a participant's impact exposure. This estimate can be used to investigate general trends between RHI and neurological status. This chapter intends to contribute evidence to the discussion regarding the neurological implications of repetitive head impact exposure in sport, rather than establish any definitive relationships. As impact detection technologies continue to improve, the relationship between neurological status and repetitive head impacts will become better understood. With this research field in its infancy, it is important to present our results to establish general trends that will help guide future research in the field.

5.1 Introduction

Sport-related concussion has been recognized as a serious health concern for all contact sport athletes (McCrory et al., 2013). An estimated 5.5% of football players sustain a concussion each season, but these injuries are transient and athletes typically recover in 7-10 days (Guskiewicz et al., 2000; McCrory et al., 2013). Recently, it has become evident that the greater risk to contact sport athletes may be the long-term neurological implications of repetitive head impacts. The discovery of chronic traumatic encephalopathy (CTE) in professional football players led to a substantial volume of research regarding RHI in football, including the finding that collegiate football players sustain approximately 1177±773 impacts per season (Omalu et al., 2005; Gysland et al., 2012). It is suspected that these impacts induce cerebral micro-traumas, which compound into later-life neurological impairments (Bailes et al., 2013).

The popular press has stressed the relationship between RHI and later-life neurological diseases such as CTE (Fainaru-Wada & Fainaru, 2013). However, research supporting this theory is in its infancy. Emerging research suggests a potential dose-response relationship between cumulative RHI and later-life neurological impairment, although these studies are limited by their retrospective designs (Montenigro et al., 2016; Lipton et al., 2013). Prospective studies are currently focused on establishing a relationship between RHI and neurological impairment during a single season.
Recent prospective studies have found some evidence in support of the hypothesis that neurological impairments occur during a football season. Balance deficits were found to correlate with the cumulative magnitude of impact exposure, impacts greater than 90g were predictive of increased symptomatology and fMRI task performance was dependent upon the number of impacts sustained (Gysland et al., 2012; Breedlove et al., 2012).

In contrast, some prospective studies have presented results suggesting no link between RHI and neurological impairment over a single season. When non-contact control athletes and football players were compared using a computer-based neuropsychological test (i.e. the ImPACT test), no differences in neurological status were found (McAllister et al., 2012). Furthermore, two independent studies found no difference between pre- and post-season cognitive performance in collegiate football players, as evaluated by the Standardized Assessment of Concussion (SAC; Gysland et al., 2012; Miller et al., 2007). That being said, the conclusions of these studies were restricted since the researchers did not quantify RHI.

To date, studies that have quantified RHI appear to suggest a link between RHI and neurological impairment over a football season. However, further evidence must be presented before any reliable relationships are established, as conflicting data exist in the literature. This chapter seeks to improve upon the design of previous studies by quantifying RHI, increasing sample sizes, investigating different domains of neurological function and measuring neurological function on a weekly basis. Based on previous work, we hypothesize that the number of impacts sustained will be a significant predictor of neurological impairment as measured by cognitive function, oculomotor function, standing balance, and symptomatology.

5.2 Methods

General participant information was previously described in subsection 4.1.2.1. For this study, participants who wore the xPatch for <75% of team sessions or completed <70% of neurological tests, when asked by the investigators, were removed from analysis. All data collected were stored securely with the senior investigator (JSB), kept confidential and not shared with the team medical staff or coaching staff.

5.2.1 Biomechanical Measurements

The xPatch sensor was chosen to quantify RHI in this study. The sensor's specifications are discussed in subsection 4.1.2.2. Here, we used the cumulative number of impacts sustained (cNI) as the independent variable of interest, i.e. the predictor variable. We used cNI as our predictor variable because impact count is strongly correlated to cumulative head kinematics (see subsection 4.1.3.2). As well, previous work has suggested that cNI predicts the risk of later-life neurological impairment (Montenigro et al., 2016). For our measure of cNI we included all events that were deemed valid by the xPatch algorithm. We recognize, however, that the sensitivity and specificity of the algorithm are 77.6% and 70.4%, respectively, thereby adding internal error to our models.

Each participant was assigned a unique sensor at the beginning of the season. Sensors were adhered to participants' right mastoid process using double-sided adhesive tape. Events were time-stamped, stored internally by the sensors and uploaded using the Impact Monitoring System Software (X2 Biosystems, Seattle, WA) on a laptop following each team session.
Data were de-identified and exported to Matlab for analysis (Version R2014a, The MathWorks Inc., Natick, MA). Statistical models were constructed in IBM SPSS Statistics for Macintosh (Version 23.0, IBM Corp., Armonk, NY).  5.2.2 Neurological Status Measurements  Prior to the start of training camp, participants underwent baseline neurological testing using a neurocognitive battery comprising the Graded Symptom Checklist (GSC), Standardized Assessment of Concussion (SAC), King-Devick Test (KDT) and Balance Error Scoring System (BESS). The GSC evaluates the perceived number and severity of symptoms most common to concussion (Lovell & Collins, 1998). The SAC is a clinical test specifically designed to assess the domains of cognitive function presumably sensitive to concussion, such as immediate memory, delayed recall, orientation and concentration (McCrea et al., 1998). The KDT is a rapid numerical reading test that evaluates saccadic eye movements, language function and attention by having participants respond to external visual cues (Marinides et al., 2015; Galetta et al., 2011). The BESS was designed to assess static postural stability in concussed athletes (Valovich-McLoed et al., 2012; Hunt et al., 2009). These tests are discussed in more detail in section 2.3.  The variables of interest that we collected from the neurocognitive battery are given in Table 5.1. Since the SAC was designed with only four versions of alternative prompts, and we were performing more than four repeated measures, we generated additional prompts for the immediate memory and delayed recall sections. Alternative word lists were generated from a random word generator and alternative numerical strings were generated by a custom Matlab code. The full list of the prompts used for the SAC are found in Appendix C.    53 Table 5.1. Reported absolute scores for the neurocognitive battery. Test Variable Absolute Score GSC Number of Symptoms Endorsed (GSCnm) 0 - 22 Total Severity of Symptoms (GSCsv) 0 - 132 SAC Composite Score 0 - 30 KDT Completion Time 0 - ∞ BESS Summed Number of Errors (BESStot) 0 - 60 Summed Number of Errors for trials #1-#3, i.e. firm trials (mBESS) 0 - 30 Summed Number of Errors for trials #4-#6, i.e. foam trials (fBESS) 0 - 30  The BESS is known to have moderate intra-rater reliability (ICC = 0.60-0.92) and moderate inter-rater reliability (ICC = 0.57-0.85; Bell et al., 2011). To obtain a more reliable BESS score, we recorded every trial using a video camera and had three independent raters (i.e. BESS scoring committee) that were blind to the participants score the trial. Raters were required to agree upon a score for each trial. If agreement was not reached, the principal investigator (ADR) scored the trial. The investigator (ADR) was only required to score one trial. Raters were permitted to stop, rewind, or re-watch videos as many times as required. Unbeknownst to the BESS scoring committee, we had them re-rate two random videos per participant to determine ICC values for the committee. The intra-committee ICC values were very high (0.96 and 0.98) depending on the year of data analyzed, thereby giving us confidence in the reported BESS scores. Participants with lower limb injuries (e.g. ankle sprain) did not complete the BESS since lower limb injuries produce worsened BESS scores compared to healthy controls (Docherty et al., 2006).  Participants were asked to complete the neurocognitive battery each week for the duration of the football season including a baseline and post-season evaluation. 
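Regarding the alternative SAC prompts described above: the numerical strings were produced with custom Matlab code. A minimal sketch of one way such strings could be generated is given below; the function name, the use of 3- to 6-digit strings (mirroring the standard SAC digits-backwards item) and the hyphenated formatting are illustrative assumptions rather than the actual script used.

% Minimal sketch of generating alternative digit strings for the SAC
% digits-backwards (concentration) item. Save as generateSacDigitStrings.m.
% String lengths of 3-6 digits mirror the standard SAC format; the function
% name, seeding and formatting are illustrative, not the original script.
function prompts = generateSacDigitStrings(seed)
    rng(seed);                              % make each alternative form reproducible
    lengths = 3:6;                          % one digit string per length
    prompts = cell(numel(lengths), 1);
    for k = 1:numel(lengths)
        digits = randi([0 9], 1, lengths(k));
        while any(diff(digits) == 0)        % avoid adjacent repeated digits
            digits = randi([0 9], 1, lengths(k));
        end
        prompts{k} = regexprep(num2str(digits), '\s+', '-');  % e.g. '4-9-2'
    end
end

For example, calling generateSacDigitStrings(3) would return one reproducible set of four strings for a given alternative form.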
Efforts were made to have participants complete the test battery within the same 72-hour period each week (typically following game exposure). The neurocognitive testing protocol took approximately 20 minutes to complete and was performed by investigators in a quiet room, free of extraneous distractions, at the team's practice facilities. The neurocognitive tests were administered in the same sequential order each week: GSC, followed by the SAC, then KDT, then BESS.

5.2.3 Statistical Analysis

The xPatch sensor is not incorporated into mandatory football gear, yet we achieved a 96.7% compliance rate for participants wearing the sensor. We built an algorithm to extrapolate perfect compliance with respect to the number of impacts sustained during each team session. This algorithm is discussed in depth in subsection 4.2.2.4.

Change scores were computed for each component of the neurocognitive battery (i.e. dependent measure) by subtracting baseline scores from absolute test scores as described in Gysland et al. (2012). Reporting change scores controlled for within-subject variations in baseline values. All models described in this chapter used change from baseline scores as their dependent variable input.

5.2.3.1 Differences in Neurological Function in Exposure Groups

To investigate whether cNI affected neurocognitive status, we grouped participants into a low, moderate or high cNI group post hoc. For each participant we divided their cNI for the season by the length of time (in days) between the first day of study and the last day of impact exposure, i.e. 68 days in year 1 and 108 days in year 2. Participants were assigned to the low (0th - <33.3rd percentile), moderate (33.3rd - <66.7th percentile) or high (≥66.7th percentile) impact exposure group based on their ranking.

Hierarchical mixed models were constructed to explore fixed effect differences between groups with respect to neurological status over time. Random effects for the covariates of time and repeated test number (except in the GSCnm and GSCsv models) were included within the models. We included the effect of repeated test number to account for any learning effect with repeated neurocognitive battery administration. Separate models were constructed for each component of the neurocognitive battery (e.g. GSCnm). Data were analyzed with maximum likelihood estimation (Shek & Ma, 2011).

Hierarchical mixed models combine two stages of analysis. The first stage approximates subject-specific longitudinal profile estimates using linear regression functions and generates subject-specific effects (Seltman, 2009). In the second stage of analysis, multivariate regression techniques relate subject-specific estimates to fixed group effects (i.e. impact exposure group). It follows that between-group differences are due to between-group variability rather than intra-subject variability. Hierarchical mixed models were used due to their ability to include unbalanced data, specifically incomplete neurocognitive battery datasets and neurocognitive battery testing performed at different time points (Verbeke & Molenberghs, 2000; Seltman, 2009).

The hierarchical model used for analysis in this study was as follows:

NC_{pr} =
\begin{cases}
(\beta_0 + b_{1p}) + (\beta_L + b_{2p})\,t_{pr} + \varepsilon_{pr}, & \text{low exposure group}\\
(\beta_0 + b_{1p}) + (\beta_M + b_{2p})\,t_{pr} + \varepsilon_{pr}, & \text{moderate exposure group}\\
(\beta_0 + b_{1p}) + (\beta_H + b_{2p})\,t_{pr} + \varepsilon_{pr}, & \text{high exposure group}
\end{cases}

Where,
• NCpr is the score for the component of the neurocognitive battery under study (e.g. SAC) for a given participant (p) at a given re-test interval (r).
• β0 is the mean baseline score for the component of the neurocognitive battery under study.
• βL is the mean growth rate for the low exposure group.
• βM is the mean growth rate for the moderate exposure group.
• βH is the mean growth rate for the high exposure group.
• tpr is the time (in days) for the pth participant at the rth measurement, with the first day of baseline testing defined as day 0.
• b1p is the participant-specific deviation about the baseline value.
• b2p is the participant-specific deviation in growth rate.
• εpr is a residual error component.

The null hypothesis tested was that βL = βM = βH. The alternative hypothesis was that the group growth rates were not all equal, i.e. βΘ differed for at least one Θ ∈ {L, M, H}, tested at α = 0.05. Figure 5-1 provides a schematic of the hierarchical mixed model used.

Figure 5-1. Schematic of the null hypothesis for hierarchical mixed models. Exposure groups are shown in red (level 2 of analysis), the blue boxes represent participants (level 1 of analysis), and the green boxes represent repeated measures of the neurocognitive battery component. The dotted lines indicate which measurements are correlated.

5.2.3.2 RHI Effect on Neurological Status Models

To investigate the effect of cNI on neurological status, we constructed linear mixed models that explored how neurological function co-varied with cNI within participants. A separate model was constructed for each component of the neurocognitive battery (e.g. GSCnm). The dependent variable was the neurocognitive change score and the predictor variable was cNI (from baseline to the time of neurocognitive battery completion). Data were analyzed with maximum likelihood estimation (Shek & Ma, 2011). In the models for the SAC, KDT, BESStot, mBESS, and fBESS the covariate of repeated test number was included to account for any learning effect with repeated neurocognitive battery administration.

5.3 Results

We excluded eight participants from analysis. Three participants dropped out of the study before the end of the season. Four participants failed to complete 70% of requested neurocognitive batteries, respectively completing 36.4%, 63.6%, 68.8% and 56.2% of the requested batteries. One participant failed both the required compliance for the sensor (17.8% completed) and the neurocognitive battery (12.5% completed). The included participants (n=27) completed 93.7% of requested GSCnm, GSCsv and SAC tests, 93.4% of requested KDT tests and 90.0% of requested BESStot, mBESS and fBESS tests. Their compliance for wearing the xPatch was 96.7%.

Seven unique participants (four in year 1 and three in year 2) were diagnosed with a concussion during data collection. For both hierarchical and linear mixed models we excluded instances when a participant was concussed to prevent any contamination of the neurocognitive test score due to a transient impairment of neurological function. Specifically, we excluded 22 cases for the GSCnm, GSCsv, KDT and SAC, and 21 cases for the BESStot, mBESS and fBESS.

5.3.1 Exposure Group Differences in Neurological Status

For GSCnm, a significant effect (p=0.026) was found for differences between exposure groups, but a non-significant date×group interaction was observed (p=0.644). The highest exposure group reported on average 2.1 more symptoms than the low exposure group (p=0.008). For GSCsv, the fixed effect of exposure group approached significance (p=0.070), and the date×group interaction did not reach significance (p=0.546). For the SAC, KDT, BESStot, mBESS, and fBESS, neither the fixed effect of exposure group nor the date×group interactions reached statistical significance.
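As a concrete illustration of the within-participant models described in subsection 5.2.3.2, whose results follow in subsection 5.3.2, a minimal sketch is given below. The thesis fit these models in SPSS; the MATLAB fitlme call, the long-format table, the variable names and the random-intercept structure are illustrative assumptions intended only to make the model specification explicit.

% Minimal sketch of the subsection 5.2.3.2 models: change-from-baseline score
% regressed on cumulative number of impacts (cNI), with repeated test number
% as a covariate and a participant-level random intercept, fitted by maximum
% likelihood. (The thesis used SPSS; the table layout, variable names and
% random-effects structure shown here are illustrative assumptions.)
tbl = readtable('neuro_long_format.csv');         % one row per participant x test session
tbl.changeScore = tbl.score - tbl.baselineScore;  % change from baseline (Gysland et al., 2012)

% For the GSCnm and GSCsv models the test-number covariate would be dropped,
% mirroring the model descriptions in the text.
lme = fitlme(tbl, ...
    'changeScore ~ cNI + testNumber + (1|participant)', ...
    'FitMethod', 'ML');                           % maximum likelihood estimation

disp(lme.Coefficients)   % fixed-effect estimates and p-values, e.g. the cNI slope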
5.3.2 RHI Effect on Neurological Status  The cumulative number of impacts sustained (cNI) was found to be a significant predictor of both the number (GSCnm; p = 0.031) and severity of reported symptoms (GSCsv; p = 0.046). The number of symptoms reported was found to increase by 3.9 with each 1000 accumulated impacts, however the cNI predictor variable explained only 2.28% of this increase. The reported severity of symptoms was found to increase by 7.5 with each 1000 accumulated impacts, with   58 the cNI predictor variable explaining only 8.44% of this increase. Cumulative number of impacts sustained did not reach significance for predicting SAC (p = 0.257), KDT (p = 0.851), BESStot (p = 0.768), mBESS (p = 0.634) and fBESS (p = 0.500) change scores.   5.4 Discussion  Here we investigated the relationship between the cumulative number of RHI sustained and different aspects of neurological status, including cognitive function (i.e. KDT, SAC), balance (i.e. BESS) and symptomatology (i.e. GSC). We determined that the number of head impacts sustained is related to the number and severity of symptoms reported. However, less than 10% of the alterations in symptomatology was explained by the number head impacts sustained. The weekly neurological evaluations performed within this study are a novel approach in research investigating relationships between RHI and neurological impairment. This approach improved upon previous research designs, and provides better insight into the alterations of neurological status throughout a collegiate football season.   Our models investigating differences between impact exposure tertiles only showed a significant group effect for the number of symptoms reported (GSCnm). For GSCnm the high exposure group displayed 2.1 more symptoms than the low exposure group confirming our hypothesis that the highest exposure group would display worsened neurological impairment compared to the moderate and low impact exposure tertiles. A group effect for the severity of symptoms reported (GSCsv) was trending towards significance (p = 0.070) lending further support to a potential relationship between RHI and symptomatology. However, this pattern was not observed for any other component of the neurocognitive battery.   Our models that used cNI as a predictor variable found further evidence of a link between RHI and the number and severity of symptoms endorsed. The cNI sustained at the time of neurocognitive battery testing was a significant predictor of the number (p = 0.031) and severity (p = 0.046) of symptoms endorsed. Although interesting, this finding may be due to many other   59 confounding variables. This is suggested by the low percentages of variability in GSCnm and GSCsv alterations explained by nCI.   It appears that symptomatology is the branch of neurological function with the most detectable alterations following impact exposure. Our finding that the number of impacts sustained was related to symptomatology was similar to Gysland et al.’s (2012) finding in which the number of impacts over 90g where shown to correlate to the number of symptoms reported. Cases studies of early stage CTE report increased depression, irritability, memory impairment and difficulty concentrating, and a dose-dependent relationship has been shown between risk of depression and RHI (Baugh et al., 2012; Montenigro et al., 2016). All of these symptoms are investigated within the GSC. 
Taken together these findings appear to suggest that repetitive head impacts lead to impairment of neurological function as assessed by the GSC.   However many factors, in addition to neurological alterations, could contribute to the number and severity of reported symptoms. Participants who sustain a higher number of impacts are likely more involved in the football games, which might lead to increased non-specific symptoms such as fatigue and neck pain. Therefore, we cannot say with certainty that the relationship GSCnm, and GSCsv, and the number of impacts sustained represents altered neurological function.   Our findings lend further support to the work of Gysland et al. (2012) and Miller et al. (2007), which found no relationship between nCI and SAC performance. We conclude that no relationship exists between RHI and cognitive function, as evaluated by the SAC.   Interestingly, our findings contradict those of Gysland et al. (2012) who found a relationship between increased RHI and worsened BESS performance, whereas we found no relationship. The difference in our findings may be due to the subjectivity of the BESS scoring procedure. Gysland’s group had the lead author score each BESS trials, whereas we had a group of three raters blind to the study score each BESS trial. The intra-class correlations for our BESS scoring committee was very high, specifically 0.96-0.98 for BESStot, 0.98-0.99 for mBESS and 0.93-  60 0.95 for fBESS. A single individual scoring BESS tests displays much lower reliability, with intra-class correlations ranging between 0.60-0.92 (Bell et al., 2011). The low reliability of a single investigator scoring the BESS may have contaminated the findings of Gysland et al. (2012), especially as they did not report the reliability of their scoring procedure. We argue that the lack of association between cNI and BESS performance observed here likely reflects the true relationship between these variables.    Other than symptomatology, we did not find any relationships between nCI and our measures of neurological function. Following the premise that the accumulation of RHI over multiple years increases the risk of neurological impairment, we suspect that our measures of neurological function (i.e. SAC, KDT and BESS) were too crude to detect any cognitive, oculomotor or balance impairments resulting from RHI accumulated in a single football season (Montenigro et al., 2016). However, there is the alternative possibility that cognition, oculomotor function and balance, as measured by the SAC, KDT and BESS, are not related to impact exposure. The anatomical damage caused by RHI may not be severe enough to result in functional neurological deficits (Xu et al., 2016). Additionally, factors other than RHI (e.g. genetics, substance abuse) may influence an individual’s predisposition to developing neurological impairment following impact exposure. We were unable to investigate the influence of these factors in our study due to our small sample size. These additional factors may explain why CTE only develops in 3.7% of professional football players (Gavett et al., 2011). Future research should use more sensitive measures of cognition, oculomotor function and balance when investigating relationships between RHI and neurological function.   Our discovery that cNI is related to GSCnm and GSCsv raises many questions. How long these symptoms persist past the end of a season, and whether these symptoms worsen over multiple years is unknown. 
Future work should observe athletes for multiple seasons and test them at consistent intervals year-round. This will allow for a better understanding of the longitudinal progression of these relationships. As well, any future work should include non-contact athletes as controls to limit any potentially confounding effects such as fatigue and musculoskeletal injuries. Lastly, we suggest that future work uses models with nCI as the predictor variable rather   61 than investigating group differences. Impact exposure in athletes is very heterogeneous as many factors contribute to the number of impacts an athlete sustains. If we are to establish definitive relationships between head impacts and neurological function it is important to avoid subjective groupings.   5.5 Conclusion  This chapter provides novel data to the current efforts to determine the relationship between RHI and neurological impairments. Here, we established that the cumulative number of impacts sustained predicts the number and severity symptoms experienced during a football season. This partially confirmed our hypothesis that cumulative head impacts sustained affects neurological function. Whether this relationship compounds over multiple seasons or leads to later-life neurological impairment (e.g. chronic headaches, depression, etc.) is unknown and requires further investigation.   We did not find any relationship between the cumulative number of impacts sustained and performance in other domains of neurological function. Our measures of balance, oculomotor function and cognition were not related to the number of head impacts sustained. It is important to note that this work was exploratory and larger studies are needed before making any definitive conclusions. Caution must be exerted when interpreting the results of this chapter since many covariates (e.g. age, medications, family medical history) were ignored in order to increase robustness of our models.     62 Chapter 6: Diagnostic Accuracy of Concussion Assessment Tools in a Collegiate Population   This chapter investigates the diagnostic accuracy of the concussion assessment tests that were used in chapter 5 to evaluate neurocognitive function. From a clinical perspective, it is important to understand how well these tests perform in regards to differentiating concussed patients from non-concussed controls in a cohort of collegiate athletes. The current literature contains information about the sensitivity and specificity of these tests but information regarding their diagnostic accuracy is limited. Therefore, clinicians are currently restricted when interpreting the results of these tests.   6.1 Introduction   Sport-related concussion has been recognized as a serious health concern facing contact sport athletes, yet a gold standard diagnostic test for concussion remains elusive (McCrory et al., 2013). Many different tests and techniques for diagnosing concussion have been explored including blood and cerebral spinal fluid biomarkers (Jeter et al., 2013), paper and pencil neuropsychological tests (Barr & McCrea, 2001; Putukian, 2011), computerized neuropsychological tests (Van Kampen et al., 2006), measures of postural stability (Hunt et al., 2009; McCrea et al., 2003), oculomotor tests (Galetta et al., 2011) and electrophysiological examinations (Gaetz & Bernstein, 2001). However, no test or technique has yielded perfect diagnostic accuracy. 
Factors such as age, gender, practice effects, low test-retest reliability and psychological distress have limited the accuracy of these tests (Putukian, 2011).   The current consensus based diagnostic tool for concussion is the SCAT3 (McCrory et al., 2013). However, recent research has questioned the accuracy of the individual components that comprise the SCAT3. Some components of the SCAT3 are reported to have diagnostic accuracies, as measured by area under receiver operating characteristic curves, as low as 66% (Galetta et al., 2015). The components of the SCAT3 most frequently used to diagnose concussion include the Graded Symptom Checklist (GSC), Standardized Assessment of   63 Concussion (SAC), and the Balance Error Scoring System (BESS). The GSC and SAC are currently used in the NFL Sideline Concussion Assessment Tool, whereas the BESS is the current gold standard test for assessing static postural stability in concussed athletes (Valovich-McLoed et al., 2012; Hunt et al., 2009; NFL, 2014).   The King-Devick Test (KDT) has recently increased in popularity as a diagnostic test for concussion. It is an oculomotor test that is currently used in professional football and hockey leagues (Galetta et al., 2015). Numerous studies, discussed in detail in section 2.3, have shown the GSC, SAC, BESS, and KDT to be sensitive to concussion (Bell et al., 2011; McCrea et al., 2003; Galetta et al., 2011; Leong et al., 2015; McCrea et al., 2004; Broglio & Puetz, 2008; McCrea et al., 2003). However, the diagnostic accuracy of these tests has not been thoroughly investigated.   Receiver operating characteristic (ROC) curves plot sensitivity versus false positive rate of a diagnostic test when there is a binary outcome (i.e. diseased or non-diseased). The area under the curve (AUC), taken from ROC curves, is used as a measure of diagnostic accuracy. AUC values allow for meaningful interpretations of, and comparisons between, diagnostic tests (Hajian-Tilaki, 2013).   For the SAC, conflicting AUC values have been published. Galetta et al. (2015) reported an AUC of 0.66 (95% CI: 0.53-0.79) whereas Barr and McCrea (2001) reported an AUC of 0.91-0.94. A meta-review found the AUC for the KDT to be 0.89 (95% CI: 0.85-0.96). To our knowledge AUC values for the GSC and BESS have not been reported. Furthermore, the AUC reported for the KDT was determined from studies that evaluated the KDT score immediately post-concussion, and not in the clinical context (e.g. hours-to-days post suspected injury).   The consensus statement from the 4th International Consensus Conferences on Concussion in Sport held in Zurich, November 2011, recommended that sport medicine clinicians use the SCAT3 as a diagnostic aid for concussion (McCrory et al., 2013). However, the diagnostic accuracy of the components of the SCAT3 has scarcely been investigated. Here we sought to   64 evaluate which component of the SCAT3 is the most effective at differentiating concussed patients from healthy patients. Furthermore, we evaluated the diagnostic accuracy of the KDT due to its increasing popularity amongst clinicians and the suggestion that future versions of the SCAT include an oculomotor test (McCrory et al., 2013). We hypothesized that that tests evaluating cognitive measures (i.e. KDT, SAC) and physiological measures (i.e. BESS) will have a significantly higher diagnostic accuracy than symptomatology (i.e. GSC), when using a clinical diagnosis as our gold standard.   
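To make the ROC and AUC quantities used throughout this chapter concrete, the sketch below computes an ROC curve, its AUC and a Hanley and McNeil (1982)-style 95% confidence interval for a generic diagnostic score on toy data. It is a simplified, single-observation-per-athlete illustration and does not implement the repeated-measures ROC estimation (Liu & Wu, 2003) used in the analyses reported later in this chapter.

% Illustrative sketch: ROC curve, AUC and a Hanley & McNeil (1982) standard
% error-based 95% CI for a generic diagnostic score. The labels and scores
% are toy data; this single-level example does not reproduce the
% repeated-measures approach (Liu & Wu, 2003) used in the thesis analyses.
labels = [ones(1,10), zeros(1,40)];           % 1 = concussed, 0 = non-concussed
scores = [randn(1,10) + 1.2, randn(1,40)];    % higher score -> more likely concussed

[fpr, tpr, ~, auc] = perfcurve(labels, scores, 1);   % ROC points and AUC

n1 = sum(labels == 1);                        % number of concussed cases
n2 = sum(labels == 0);                        % number of non-concussed controls
q1 = auc / (2 - auc);                         % Hanley & McNeil (1982) terms
q2 = 2*auc^2 / (1 + auc);
se = sqrt((auc*(1 - auc) + (n1 - 1)*(q1 - auc^2) + (n2 - 1)*(q2 - auc^2)) / (n1*n2));
ci = max(min(auc + [-1, 1]*1.96*se, 1), 0);   % approximate 95% CI, clamped to [0, 1]

fprintf('AUC = %.2f (95%% CI: %.2f-%.2f)\n', auc, ci(1), ci(2));
plot(fpr, tpr); xlabel('False positive rate'); ylabel('Sensitivity');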
6.2 Methods  General participant information was previously described in subsection 4.1.2.1.   In this chapter, participants were assigned a concussed state if diagnosed by the team physician. Only one physician was involved in the diagnoses of participants. The physician used the Zurich Consensus statement on concussion in sport to determine whether a participant was, or was not, concussed (McCrory et al., 2013). The physician did not perform the SCAT3 as part of his clinical evaluation. However, the physician integrated participants’ reported symptomatology into their diagnostic decision. The physician, in conjunction with athletic therapists and physiotherapists, used the SCAT3 to monitor the longitudinal recovery of concussed participants. This physician was blind to the study and practiced independently of the researchers. Data were stored as described in subsection 5.2.1.  6.2.1 Concussion Diagnostic Tools  Prior to the start of training-camp, participants underwent baseline neurological testing using a neurocognitive battery. The neurocognitive battery consisted of the Graded Symptom Checklist (GSC), Standardized Assessment of Concussion (SAC), King-Devick Test (KDT) and Balance Error Scoring System (BESS). We subdivide the BESS component into total BESS (BESStot), BESS trials performed on firm ground (mBESS) and BESS trials performed on a foam surface (fBESS). BESStot represents the BESS protocol as originally designed and mBESS represents   65 how the BESS is currently used as part of the SCAT3 and NFL Sideline Concussion Assessment Tool (McCrory et al., 2013; NFL, 2014; Valovich-McLeod et al., 2004). We investigated fBESS because in concussed populations the foam trials produce more errors than firm surface trials (i.e. mBESS, Bell et al., 2011).   The methodology used is identical to that discussed in subsection 5.2.3. Briefly, participants completed the neurocognitive battery each week for the duration of the football season including a baseline and post-season evaluation. The neurocognitive testing protocol took approximately 20 minutes to complete and was performed by investigators in a quiet room, free of extraneous distractions, at the team’s practice facilities. An overview of the GSC, SAC, BESS and KDT are presented in Table 6.2.   We investigated the diagnostic accuracy of each component of the neurocognitive battery using both absolute and change from baseline scores. Change scores were used to control for within subject variation in baseline values. By generating change scores we were able to investigate alterations in neurological function rather than absolute neurological status. Change scores were computed by subtracting baseline scores from absolute scores as described by Gysland et al. (2012).   We additionally investigated absolute scores because they reflect situations when a patient sees a physician in clinic without having conducted any baseline testing. The allowed ranges of absolute scores for each components of the neurocognitive battery are given in Table 6.1.           66 Table 6.1. Diagnostic tests of concussion used in the neurocognitive battery and the potential ranges of their absolute scores. Test Variable Absolute Score GSC Number of Symptoms Endorsed (GSCnm) 0 - 22 Total Severity of Symptoms (GSCsv) 0 - 132 SAC Composite Score 0 - 30 KDT Completion Time 0 - ∞ BESS Summed Number of Errors (BESStot) 0 - 60 Summed Number of Errors for trials #1-#3, i.e. firm trials (mBESS) 0 - 30 Summed Number of Errors for trials #4-#6, i.e. 
foam trials (fBESS) 0 - 30  6.2.2 Statistical Analysis  Data were collected manually and entered into Microsoft Excel (Version 14.6, Microsoft Corporation, Redmond, WA) to be de-identified and exported to Matlab (Version R2014a, The MathWorks Inc., Natick, MA) for sorting and analysis. ROC curves were generated using SAS (University Edition, SAS Institute Inc., Cary, NC), whereas AUC confidence intervals were generated in Matlab using a custom code.   ROC curves were constructed using the methods of Liu & Wu (2003). A sample ROC curve is shown in Figure 6-1. Briefly, generalized linear mixed models were used to estimate the probabilities of a positive or negative concussion state within the population and the estimated probabilities were used to calculate the AUC (Liu & Wu, 2003). Confidence intervals were calculated for each AUC value using the methods described in Hanley and McNeil (1982). When generating ROC curves for the SAC, KDT, BESStot, mBESS and fBESS, the number of times the test protocol had been completed was inputted as a covariate in order to control for potential learning effects.     67  Figure 6-1. A sample ROC curve using change scores from the King-Devick Test (select cases). An ROC curve plots false positive rate versus sensitivity. The green line indicates the reference diagnostic likelihood (i.e. 50% chance of correctly diagnosing the patient).   In addition to each diagnostic test in Table 6.1, we constructed ROC curves and calculated the AUC for models that contained multiple predictor variables. The first model we build was GSCfull, which was comprised the GSCnm and GSCsv. Our rationale for this model was that if a medical practitioner were to investigate the number of symptoms, it is very easy to collect the severity of symptoms. Our aim was to determine whether the individual components of the GSC (i.e. number and severity of symptoms) performed as well as the complete GSC (i.e. GSCfull). Our second model was the SCAT3, which comprised the GSCnm, GSCsv, SAC, and mBESS. Our rationale for this model was to determine whether combining multiple neurological tests, which evaluate different functional domains, improved diagnostic accuracy.    Our first analysis compared the AUC between absolute and change scores for every component of the neurocognitive battery. We used the methodology of Hanley & McNeil (1983) for this analysis because we were comparing AUCs calculated from the same sample population. Our second analysis compared the AUC for each component of the neurocognitive battery, separately   68 for change and absolute scores, in a pairwise manner using the methodology of Hanley & McNeil (1983). For all analyses, a priori alpha levels were set at 0.05.   Previous studies indicated that the absolute scores of our diagnostic tests should return to baseline (i.e. change score of zero) within 5 days following a concussive event (McCrea et al., 2003). Therefore, we removed all cases when a participant had a positive concussed state 6 days or more after the injury, and repeated the two analyses. We reasoned that patients with suspected concussions are likely to see a medical practitioner within the first 5 days following their suspected injury and we wanted to characterize the performance of these diagnostic tests in this time period (0-5 days post-suspected concussive event). These analyses are referred to as ‘select cases’, whereas the analyses with all data is referred to as ‘all cases.’   69 Table 6.2. Summary of concussion diagnostic tools. 
Measure Functional Domains Description Score Range Time to Administer Graded Symptom Checklist  (GSC) Concussive Symptoms Participants rate the presence and perceived severity of 22 symptoms common to concussion using a Likert scale. Higher scores indicate more concussive symptoms. Number of Symptoms: 0-22  Severity of Symptoms: 0-132 2-3 minutes Standardized Assessment of Concussion  (SAC) Cognitive function: • Immediate Memory • Delayed Recall • Orientation • Concentration A brief neurocognitive assessment that can be implemented by physicians without any previous psychometric expertise. Lower scores indicate worse cognitive function and impaired global cognitive ability. Total score: 0-30 5 minutes Balance Error  Scoring System  (BESS) Postural stability Clinical assessment of postural stability in three stances (double-leg, single-leg, tandem) on two surfaces (firm & foam); 20s per stance. Higher scores indicate impaired performance.  The BESStot comprises all 6 conditions. The mBESS comprises only the 3 firm surface conditions. The fBESS comprises only the 3 foam surface conditions. Participants are given 1 point per error.  BESStot = 0-60  mBESS = 0-30  fBESS = 0-30 5 minutes King-Devick Test  (KDT) Oculomotor function: • Saccadic eye movements • Language function • Attention Reading aloud a string of digits from left to right on three progressively challenging cards while measuring task completion time. Completion time: 0-∞ 2 minutes  70 6.3 Results  In our dataset, we had eight unique participants with a physician-diagnosed concussion. Four concussions occurred in year one and four occurred in year two of the study. The average age of participants in the study was 21.1±1.7y. Concussed (21.0±1.8y) and non-concussed (20.9±1.8y) participants did not significantly differ in age (p=0.858).   Due to the repeated-measure design of our study protocol, we observed 23 positive cases of concussion for the GSCnm, GSCsv, SAC and KDT. This encapsulates concussed participants tested at multiple times points throughout the duration of their concussion. In other words, each participant completed the GSC, SAC and KDT 2.9 times throughout the duration of their concussion. Eleven of these cases were within five days post-injury. For the BESStot, mBESS and fBESS we observed 22 positive cases of concussion, i.e. each participant completed the BESS 2.8 times throughout the duration of their concussion. Ten of these cases were within five days post-injury. There was one fewer positive case for the BESS than the GSC, SAC and KDT because a participant was unable to complete the BESS component of the neurocognitive battery due to a lower limb injury.   6.3.1 Area Under the Curve Values   We determined the AUC for each diagnostic test for all four conditions: i) All Cases – Absolute Score, ii) Select Cases – Absolute Score, iii), All Cases – Change Score and iv) Select Cases – Change Score. Recall that the Select Cases category removed cases when a participant had a positive concussion status but was tested more than 5 days post-injury. These AUC values are presented in Table 6.3.       71 Table 6.3. AUC values for ROC curves generated from commonly used concussion assessment tools. Blue indicates high diagnostic accuracy (AUC >0.90), green indicates moderate diagnostic accuracy (AUC >0.70-0.90), yellow indicates low diagnostic accuracy (AUC >0.50-0.70), and red indicates very low diagnostic accuracy (AUC <0.50). 
Diagnostic Test | All Cases, Absolute Score | Select Cases, Absolute Score | All Cases, Change Score | Select Cases, Change Score
GSCnm | 0.76 (0.64-0.88) | 0.91 (0.79-1.00) | 0.76 (0.65-0.88) | 0.92 (0.80-1.00)
GSCsv | 0.75 (0.63-0.87) | 0.93 (0.82-1.00) | 0.74 (0.62-0.86) | 0.91 (0.79-1.00)
SAC | 0.48 (0.36-0.60) | 0.49 (0.32-0.67) | 0.63 (0.50-0.76) | 0.64 (0.46-0.82)
BESStot | 0.59 (0.46-0.77) | 0.58 (0.39-0.77) | 0.57 (0.44-0.70) | 0.65 (0.46-0.84)
mBESS | 0.61 (0.48-0.74) | 0.71 (0.52-0.89) | 0.59 (0.46-0.72) | 0.67 (0.48-0.85)
fBESS | 0.55 (0.42-0.68) | 0.47 (0.29-0.65) | 0.59 (0.46-0.72) | 0.65 (0.46-0.83)
KDT | 0.66 (0.53-0.78) | 0.64 (0.46-0.82) | 0.69 (0.57-0.82) | 0.80 (0.64-0.96)
GSCfull | 0.76 (0.64-0.88) | 0.93 (0.82-1.00) | 0.76 (0.64-0.88) | 0.92 (0.80-1.00)
SCAT3 | 0.78 (0.66-0.90) | 0.92 (0.80-1.00) | 0.78 (0.66-0.90) | 0.93 (0.81-1.00)

For each component of the neurocognitive battery we compared the AUC between absolute and change scores. Components with a significantly higher AUC for change scores included the SAC (all cases; p = 0.003), fBESS (select cases; p = 0.037) and KDT (select cases; p = 0.008); the SAC (select cases) also trended towards significance (p = 0.091). For the other components of the neurocognitive battery there was no difference in AUC between absolute and change scores.

The AUC for each component of the neurocognitive battery was compared in a pairwise manner. Select cases and all cases were evaluated independently. For all cases, GSCnm, GSCsv and GSCfull had significantly higher (p<0.05) AUC than BESStot, mBESS and fBESS. The SCAT3 had a significantly higher AUC than the SAC, BESStot, mBESS and fBESS. The complete p-values for the all cases comparisons are given in Appendix F. For select cases, GSCnm, GSCsv, GSCfull and SCAT3 had significantly higher AUC than the SAC, BESStot, mBESS and fBESS. The complete p-values for the select cases comparisons are given in Appendix G.

6.4 Discussion

Here we investigated the diagnostic accuracy of commonly used concussion assessment tools, which evaluate different aspects of neurological function including cognition (i.e. KDT, SAC), balance (i.e. BESS) and symptomatology (i.e. GSC). We determined how well these tests performed in the first five days following a suspected injury (i.e. select cases), and how they performed throughout the full recovery from concussion (i.e. all cases). Furthermore, we determined how the diagnostic accuracy of these tools varied depending upon whether change from baseline or absolute scores were used. Change scores reflected the usage of these concussion assessment tools in an athletic setting, whereas absolute scores reflected their usage in a clinic.

Our analysis revealed that change scores provide higher diagnostic accuracy than absolute scores for the SAC (all cases), fBESS (select cases) and KDT (select cases). Although statistical significance was not reached for the other concussion assessment tools, there was a general trend towards higher AUCs (i.e. better diagnostic accuracy) for change scores compared to absolute scores. The SAC (AUC: 0.49) and fBESS (AUC: 0.47) performed worse than chance when absolute scores were used. The current consensus-based expert recommendation advises that baseline testing be conducted when possible but that it is not necessary (McCrory et al., 2013). Regardless, some organizations (e.g. NCAA) have mandated baseline testing and many research studies have used change from baseline scores as their dependent variables of interest (NCAA, 2015; Galetta et al., 2015; Gysland et al., 2012).
Our findings suggest that baseline testing should be conducted whenever possible as baseline testing allows for patient-referenced values of normal neurological function during diagnostic decisions.   Our study found a trend of higher diagnostic accuracy for the select cases compared to all cases. Previous research has shown that the GSC returns to baseline within 5 days post-concussion, the   73 SAC returns to baseline in 3 days and the BESS returns to baseline within 3 days (McCrea et al., 2003). As well, the sensitivity of the GSC, SAC and BESS peaks immediately post-concussion and decreases in the subsequent 5 days post-injury (McCrea et al., 2004). It is likely that individuals remain concussed even as their functional neurological status returns to baseline because the metabolic imbalances of a concussion persist longer than clinical neurological deficits (Barkhoudarian et al., 2011; Guskiewicz et al., 2007; Giza & DiFiori, 2011). We suspect that our all cases dataset is contaminated by positive concussed cases that have returned to baseline neurological function. Our findings emphasize the importance of early screening for concussion following a suspected injury.   Previous research has shown that the sensitivity of the SAC to concussion is moderate (0.80) immediately post-injury, but rapidly decreases (0.34) within 24 hours post-injury (McCrea et al., 2003). It is therefore unsurprising that we found the SAC to have a low diagnostic accuracy (AUC = 0.64) within 5 days following a concussive event. Our finding is consistent with that of Galetta et al.’s (2015) who found that SAC had a diagnostic accuracy of 0.66, but conflicts with the finding of Barr and McCrea (2001) who reported a diagnostic accuracy of 0.91-0.94. Based on our data, and those of Galetta’s group, we suggest that the SAC is a poor diagnostic tool of concussion in a clinical context and question its continued usage in the SCAT3.   We observed low diagnostic accuracy for the KDT (AUC = 0.64-0.69), except when change scores were evaluated within 5 days following a concussive event (AUC = 0.80). The derived diagnostic accuracy in this situation is slightly less than that previously reported (AUC = 0.89; Galetta et al., 2015). We suspect that this is a result of our inclusion of data up to 5 days post-injury whereas Galetta’s group only included tests performed immediately post-injury. Our findings in conjunction with those of Galetta et al. (2015) suggest a moderate-to-high diagnostic accuracy of the KDT. Thereby supporting its relevancy as a concussion assessment tool, if baseline testing is performed and well documented. Furthermore, there was a trend of higher diagnostic for the KDT compared to the SAC, although it did not reach statistical significance. This observation was also shown by Galetta et al. (2015) and provides evidence in support of the inclusion of the KDT within the SCAT3.   74   We found the BESStot, mBESS and fBESS to have poor diagnostic accuracy, which mirrors previous research in which the BESStot was found to have a low sensitivity (0.34-0.10) post-injury (McCrea et al., 2004). In our study the BESStot, mBESS and fBESS all performed significantly worse than a simple measure of a patient’s symptomatology. It is well established that evaluating BESS performance is highly subjective and scorers display low reliability (Bell et al., 2011). 
Researchers are attempting to solve this issue through the use of objective measurement tool to evaluate BESS performance, however we solved it by using a three-person scoring committee (Brown, 2013). The committee displayed high intra-class correlation (ICC: 0.96-0.98) thereby giving us confidence that our results reflect the true effectiveness of the BESS as a diagnostic tool. Our findings indicate that the BESS does not perform well at diagnosing concussion and we question its inclusion in the SCAT3. Future work should attempt to develop a better measure of postural stability in concussed patients.    Contrary to our hypothesis, the GSC was the most accurate diagnostic test for concussion. The number of symptoms endorsed and the severity of symptoms were both significantly more accurate at diagnosing concussion than the BESS, mBESS and fBESS. Furthermore, when investigating change scores from select cases the GSCnm, GSCsv and GSCfull almost reached significance with respect to being more accurate diagnostic tests of concussion than the KDT (p = 0.08, p = 0.08 and p = 0.06, respectively). Our standard for classifying whether a participant was concussed or non-concussed, i.e. a sport physician diagnosis, partially relied upon participants reported symptomatology during their clinical evaluations, which may have contributed to the high AUC observed for GSCnm and GSCsev.   Although the GSC displayed promising diagnostic accuracy, its effectiveness is dependent upon patients being honest and forthcoming when disclosing their symptoms to medical practitioners. We are confident that participants provided us with an honest assessment of their symptoms since we did not share our observations with the team medical staff. This allowed participants to share information with us that if shared with the team medical staff may have resulted in the participant being removed from play for monitoring or further concussion evaluations. This is   75 unsurprising, as it is well known that athletes often hide their symptoms from medical practitioners (McCrea et al., 2004). However, this complicates the interpretation of GSC in a clinical context. Clinicians must exert their judgement and determine whether they can trust a patient’s GSC evaluation. The GSC should only be considered an accurate diagnostic tool if the clinician is confident that the patient is being honest when undergoing the evaluation.    It is currently recommended that clinicians perform a variety of tests that evaluate different domains (e.g. cognitive, symptomatology, balance, etc.) when assessing patients with a suspected concussion. We determined that the SCAT3 had a higher diagnostic accuracy than any other concussion assessment tool evaluated, but that these findings were non-statistically significant due to the low power of our dataset. The collinearity between the SCAT3 and GSCnm and GSCsv was 0.77 and 0.82, respectively. This suggests that the high diagnostic accuracy of the SCAT3 is likely due to its high reliance upon the GSCsv and GSCnm input variables.   With respect to diagnostic accuracy in the field, we recommend that physicians perform the GSC and KDT. Our results independently support previous research that concluded that the KDT is very effective at diagnosing concussion immediately following injury (Galetta et al., 2015). Additionally, we found that the GSC has a very high diagnostic accuracy (AUC = 0.93) up to five days post-injury. 
Together these tests take less than 5 minutes to complete and can be performed in an athletic setting. Therefore we suggest that sport medicine practitioners use the GSC and KDT together as a screening tool to determine whether an athlete is potentially concussed and in need of further medical evaluation.   A potential limitation of this study was the method used to classify participants’ status as concussed or non-concussed. Since no gold standard exists for a concussion diagnosis, we used the same subjective standard as other studies, i.e. a physician specializing in sports medicine (Barr & McCrea, 2001; Galetta et al., 2015). It is likely that our AUC values would change if a different physician were used to classify participants. Indeed different physicians may preferentially emphasize distinct aspects of neurological function (e.g. symptomatology, cognition, balance, etc.) when making concussion diagnoses. Therefore, we encourage readers to   76 consider the reported AUC confidence intervals when interpreting our results. That being said, using a single physician prevented different individual interpretations of the diagnostic criteria, thereby strengthening the design. Another potential limitation of our study was that participants had to actively seek a physician-evaluation to be assessed for a concussion. We assumed that all other participants were non-concussed. It is likely that some participants with a non-concussed status were actually concussed since approximately half of concussions in collegiate football go unreported (McCrea et al., 2004). This may have contaminated our analysis, likely depressing AUC values.   In conclusion, we found that including the BESS and SAC components when performing the SCAT3 provides no better diagnostic accuracy than the GSC, however they require an additional time and resource commitment. Researchers should aim to develop a more accurate test of postural stability and cognitive ability to replace the BESS and SAC, respectively. As well, future work should evaluate the diagnostic accuracy of additional concussion assessment tests, for example blood biomarkers. The moderate-to-low AUC values observed for the diagnostic tests investigated, other than GSC, reflects the fact that no gold standard test currently exists for concussion.  6.5 Conclusion  This chapter provided novel data regarding the diagnostic accuracy of current clinical concussion assessment tests using ROC curve analyses. When using change from baseline scores we observed AUC values for the SAC (0.63-0.64) that were similar to previously reported values, whereas the AUC values we observed for the KDT (0.69-0.80) were slightly less than those previously reported (Galetta et al., 2015). The findings of this chapter emphasize a need for a more reliable test of balance and executive function in concussed patients.   Our hypothesis that concussion assessment tests reliant upon cognitive (i.e. KDT, SAC) and physiological measures (i.e. BESS) have significantly higher diagnostic accuracy than symptomatology measures of concussion (i.e. GSC) was proven false. We determined that the   77 number of symptoms endorsed and the severity of symptoms is the most accurate diagnostic tool of concussion. This is concerning since athletes are not always forthcoming and honest when discussing their symptoms with medical practitioners (McCrea et al., 2004). 
Medical practitioners should work to educate athletes about the risk of concussion, in the hopes of improving their willingness to report symptomatology.     78 Chapter 7: Conclusion   This thesis investigated topics related to repetitive head impacts and concussion. Specifically, this thesis investigated: i) the accuracy of a head impact-monitoring tool, ii) the relationship between RHI and neurological status, and iii) the diagnostic accuracy of concussion assessment tools. The population of study for this thesis was collegiate football players.  This thesis determined that a skin-based impact-monitoring tool has moderate accuracy in differentiating impact from non-impact events via a detection algorithm. It was concluded that these tools continue to be important for research use, but caution should be exerted if interpreting data from these technologies in a clinical context. Future work should aim to develop a more accurate impact detection algorithm. We showed that cumulative impact kinematics could be confidently estimated from an accurate measure of impact count. This suggests that impact count adequately quantifies longitudinal impact exposure in sport.   A more accurate and more robust detection algorithm may allow researchers to establish stronger relationships between RHI and neurological function. We suggest the creation of a machine-learning model for impact detection algorithms. Multiple investigators should classify the same events to prevent potential biases within the algorithm and impacts from multiple sports should be reviewed due to potential differences in impact waveforms between sports. Furthermore, the accuracy of other impact monitoring technologies that have been used to measure RHI in sport (e.g. HIT system) should be investigated. This would provide researchers with context regarding the accuracy of previously published RHI datasets that used these technologies.   This thesis established a potential relationship between the cumulative number of impacts sustained and number, and severity, of symptoms reported. Although exploratory, this finding is a novel discovery in the efforts to investigate the relationship between RHI and neurological impairment. Future work should aim to investigate neurological status and RHI over multiple seasons, preferably for an entire athletic career. This would provide insight into how neurological alterations, such as symptomatology, compound over multiple seasons.   79  We concluded that the Standardized Assessment of Concussion, Balance Error Scoring System and King-Devick Test are too crude to measure any cognitive, postural or oculomotor alterations, respectively, that may occur following repetitive head impact exposure during a single football season. This conclusion follows the assumption that microscopic neurological alterations result due to impacts accrued throughout a football season (Bailes et al., 2013). However, it is possible that cognitive, postural and oculomotor function are unaffected by repetitive head impact exposure. We suggest using more sensitive measures of cognition, oculomotor function and balance in any future studies that intend to investigate relationships between RHI and neurological function. To balance practicality and sensitivity of neurological measures, we speculate that it would be best to test participants monthly or quarter-yearly. This would allow for a more intensive testing protocol while still evaluating longitudinal profiles of neurological function.   
This thesis reported novel data regarding the diagnostic accuracy of clinical concussion assessment tools. Symptoms, both number endorsed and severity, best differentiate concussed patients from non-concussed controls. Our findings may inform clinicians how to weigh the results they obtain for each component of the SCAT3 when deciding whether a patient is concussed or healthy. The results of this thesis emphasize the need for improved concussion assessment tools. We suspect that the BESS is too crude of a task to differentiate subtle, and transient, balance impairments that result from concussion. With the ease of accelerometer data collection, we recommend investigating whether measures of postural sway better differentiate concussed patients from healthy controls. Furthermore, we recommend that future iterations of the SCAT include the KDT, potentially as a replacement for the SAC, which itself was shown to be a poor diagnostic tool of concussion. The diagnostic accuracy and ease of use of the KDT suggest that it may be beneficial as a clinical concussion assessment tool.  Potential limitations of this thesis include the small sample size and convenience sampling. Our study population was thirty-five collegiate athletes from a single football program. A larger sample size would provide more power to the analyses conducted in chapter 5 and chapter 6,   80 whereas having participants from different football programs would increase the external validity of our findings. Furthermore, future research should investigate both male and female populations to determine whether our findings are sex-specific. Lastly, to increase the robustness of our findings in chapter 5 and chapter 6 we did not control for many covariates such as drug use (e.g. anabolic steroids), medications and medical history. Once broad relationships between RHI and neurological impairment are established, it is important to determine how covariates affect these relationships.    In conclusion, this thesis presents novel findings in multiple areas of research regarding concussion and RHI in sport. The findings of this thesis will benefit researchers and clinicians in the field of RHI and concussion in sport. The work presented in chapter 4 informs researchers that cumulative head kinematics can be estimated from an accurate measure of impact count, thus future projects involving cumulative RHI may only need to quantify impact count. With respect to clinicians, this chapter concluded that the xPatch technology is not reliable for monitoring individual impacts. The work presented in chapter 5 suggests that future projects consider the cost/benefit of using the SAC, BESS or KDT to monitor neurological status. Furthermore, this chapter concluded that future studies should investigate RHI and symptomatology over multiple seasons to better understand their relationship. Lastly, the work presented in chapter 6 informs clinicians that the most accurate diagnostic tool, of those investigated, for concussion is a patient’s reported symptomatology. When determining whether a patient is concussed or healthy, physicians should place a greater emphasis on the patient’s reported symptomatology rather than any changes in cognitive, postural or oculomotor function. This thesis represents the culmination of a broad two-year investigation into repetitive impacts and concussion in sport. It is my hope that the findings of this thesis eventually lead to improved health and safety of contact sport athletes.      81 Bibliography  Aungst, S. 
et al., 2014. Repeated mild traumatic brain injury causes chronic neuroinflammation,     changes in hippocampal synaptic plasticity, and associated cognitive deficits. Journal of      Cerebral Blood Flow and Metabolism, 34, 1223-1232.  Bailes, J., Petraglia, A., Omalu, B. & Talavage, T., 2013. Role of subconcussion in mild     repetitive mild traumatic brain injury. Journal of Neurosurgery, 119,1235-1245.  Barkhoudarian, G., Hovda, D. & Giza, C., 2011. The Molecular Pathophysiology of Concussive     Brain Injury. Clinical Journal of Sports Medicine, 30, 33-48.  Barr, W. & McCrea, M., 2001. Sensitivity and specificity of standardized neurocognitive testing     immediately following sports concussion. Journal of the International Neuropsychological       Society, 7, 693-702.  Baugh, C. et al., 2012. Chronic traumatic encephalopathy: neurodegeneration following     repetitive concussive and subconcussive brain trauma. Brain Imaging and Behaviour, 6,       244-254.  Beckwith, J., Greenwald, G. & Chu, J., 2012. Measuring Head Kinematics in Football:     Correlation Between the Head Impact Telemetry System and Hybrid III Headform.     Annals of Biomedical Engineering, 40(1), 237-248.  Belanger, H. & Vanderploeg, R., 2005. The neuropsychological impact of sport-related     concussion: A meta-analysis. Journal of the International Neuropsychological Society, 11,     345-357.  Bell, D., Guskiewicz, K., Clark, M. & Padua, D., 2011. Systematic Review of the Balance Error     Scoring System. Sports Health, 3(3), 287-295.  Belson, K., 2016. Ivy League Moves to Eliminate Tackling at Football Practices. The New York     Times, 1 March.  Bey, T. & Ostick, B., 2009. Second Impact Syndrome. Western Journal of Emergency Medicine,     10, 6-10.  Bigler, E., 2005. Neuropsychology and clinical neurosceince of persistent post-concussive     syndrome. Journal of the International Neuropsychological Society, 11, 345-357.  Breedlove, K. et al., 2014. Detecting Neurocognitive and Neurophysiological Changes as a     Result of Subconcussive Blows in High School Football Athletes. Athletic Training and Sports     Health Care, 6, 1-9.  Breedlove, E. et al., 2012. Biomechanical correlates of symptomatic and asymptomatic    82    neurophysiological impairment in high school football. Journal of Biomechanics, 45, 1265-    1272.  Broglio, S. et al., 2011. Cumulative Head Impact Burden in High School Football. Journal of     Neurotrauma, 28, 2069-2078.  Broglio, S. & Puetz, T., 2008. The Effect of Sport Concussion on Neurocognitive Function, Self-    Report Symptoms and Postural Control: A Meta-Analysis. Sports Medicine, 38(1), 53-67.  Brown, H.J., 2013. Development and Validation of an Objective Balance Error Scoring System     (Master’s Thesis). University of British Columbia, Vancouver, British Columbia.  CIS-SIC, 2015. Sport By Sport AFA (2013-14) Public. Richmond Hill, ON: Canadian     Interuniversity Sport.  Clay, M., Glover, K. & Lowe, D., 2013. Epidemiology of concussion in sport: a literature     review. Journal of Chiropractice Medicine, 12, 230-251.  Crisco, J. et al., 2010. Frequency and Location of Head Impact Exposures in Individual     Collegiate Football Players. Journal of Athletic Training, 45(6), 549-559.  Crisco, J. et al., 2012. Magnitude of Head Impact Exposures in Individual Collegiate     Football Players. Journal of Applied Biomechanics, 28(2), 175-183.  Crisco, J. et al., 2011. Head impact exposure in collegiate football players. Journal of     Biomechanics, 44, 2673-2678.  Czerniak, S. 
Czerniak, S. et al., 2015. A resting state functional magnetic resonance imaging study of concussion in collegiate athletes. Brain Imaging and Behavior, 9(2), 323-332.
Docherty, C., Valovich-McLeod, T. & Shultz, S., 2006. Postural Control Deficits in Participants with Functional Ankle Instability as Measured by the Balance Error Scoring System. Clinical Journal of Sport Medicine, 16(3), 203-208.
Duma, S. et al., 2005. Analysis of Real-time Head Accelerations in Collegiate Football Players. Clinical Journal of Sports Medicine, 15(1), 3-8.
Eierud, C. et al., 2014. Neuroimaging after mild traumatic brain injury: Review and meta-analysis. NeuroImage: Clinical, 4, 283-294.
Fainaru-Wada, M. & Fainaru, S., 2013. League of Denial: The NFL, Concussions and the Battle for Truth. 1st ed. New York: Three Rivers Press.
Gaetz, M. & Bernstein, D., 2001. The Current Status of Electrophysiologic Procedures for the Assessment of Mild Traumatic Brain Injury. The Journal of Head Trauma Rehabilitation, 16(4), 386-405.
Galetta, K. et al., 2011. The King–Devick test and sports-related concussion: Study of a rapid visual screening tool in a collegiate cohort. Journal of the Neurological Sciences, 309, 34-39.
Galetta, K. et al., 2015. The King-Devick test of rapid number naming for concussion detection: meta-analysis and systematic review of the literature. Concussion, 1(2), ePub.
Gavett, B., Stern, R. & McKee, A., 2011. Potential Late Effect of Sport-Related Concussive and Subconcussive Head Trauma. Clinical Journal of Sport Medicine, 30, 179-188.
Giza, C. & DiFiori, J., 2011. The pathophysiology of sport-related concussion: An update on basic science and translational research. Sports Health, 3, 46-51.
Giza, C. & Hovda, D., 2001. The Neurometabolic Cascade of Concussion. Journal of Athletic Training, 36, 228-235.
Guskiewicz, K. et al., 2003. Cumulative Effects Associated With Recurrent Concussion in Collegiate Football Players. Journal of the American Medical Association, 290(19), 2549-2555.
Guskiewicz, K. & Mihalik, J., 2011. Biomechanics of Sport Concussion: Quest for the Elusive Injury Threshold. Exercise and Sport Sciences Reviews, 39(1), 4-11.
Guskiewicz, K. et al., 2007. Measurement of Head Impacts in Collegiate Football Players: Relationship Between Head Impact Biomechanics and Acute Clinical Outcomes After Concussion. Neurosurgery, 61, 1244-1253.
Guskiewicz, K., Ross, S. & Marshall, S., 2001. Postural Stability and Neuropsychological Deficits After Concussion in Collegiate Athletes. Journal of Athletic Training, 36(3), 263-273.
Guskiewicz, K., Weaver, N., Padua, D. & Garrett, W., 2000. Epidemiology of Concussion in Collegiate and High School Football Players. American Journal of Sports Medicine, 28(5), 643-650.
Gysland, S. et al., 2012. The Relationship Between Subconcussive Impacts and Concussion History on Clinical Measures of Neurologic Function in Collegiate Football Players. Annals of Biomedical Engineering, 40(1), 4-22.
Hajian-Tilaki, K., 2013. Receiver Operating Characteristic (ROC) Curve Analysis for Medical Diagnostic Test Evaluation. Caspian Journal of Internal Medicine, 4(2), 627-635.
Hanley, J. & McNeil, B., 1982. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143, 29-36.
Hanley, J. & McNeil, B., 1983. A method of comparing the areas under receiver operating characteristic curves derived from the same cases. Radiology, 148, 839-843.
Hovda, D., 1996. Metabolic Dysfunction. In R. Narayan & J.P.J. Wilberger, eds. Neurotrauma. New York, NY: McGraw-Hill Health Professions Division. 1459-1478.
Hunt, T., Ferrara, M., Bornstein, R. & Baumgartner, T., 2009. The reliability of the Modified Balance Error Scoring System. Clinical Journal of Sport Medicine, 19, 471-475.
Hwang, S. et al., 2016. Vestibular Dysfunction following Sub-Concussive Head Impact. Journal of Neurotrauma, ePub.
Irick, E., 2015. Student-Athlete Participation: 1981/82-2014/15. Indianapolis, IN: NCAA The National Collegiate Athletic Association.
Ivarsson, J., Viano, D., Lovsund, P. & Aldman, B., 2000. Strain relief from the cerebral ventricles during head impact: experimental studies on natural protection of the brain. Journal of Biomechanics, 33(2), 181-189.
Jadischke, R. et al., 2013. On the accuracy of the Head Impact Telemetry (HIT) System used in football helmets. Journal of Biomechanics, 46, 2310-2315.
Jeter, C. et al., 2013. Biomarkers for the Diagnosis and Prognosis of Mild Traumatic Brain Injury/Concussion. Journal of Neurotrauma, 30, 656-670.
Katayama, Y., Becker, D., Tamura, T. & Hovda, D., 1990. Massive increases in extracellular potassium and the indiscriminate release of glutamate following concussive brain injury. Journal of Neurosurgery, 73, 889-900.
Kawata, K. et al., 2016. Association of Football Subconcussive Head Impacts With Ocular Near Point of Convergence. JAMA Ophthalmology, ePub.
Leong, D. et al., 2015. The King-Devick test for sideline concussion screening in collegiate football. Journal of Optometry, 8(2), 131-139.
Lipton, L. et al., 2009. Diffusion-Tensor Imaging Implicates Prefrontal Axonal Injury in Executive Function Impairment Following Very Mild Traumatic Brain Injury. Radiology, 252(3), 816-824.
Lipton, M. et al., 2013. Soccer Heading Is Associated with White Matter Microstructural and Cognitive Abnormalities. Radiology, 268(3), 850-857.
Liu, H. & Wu, T., 2003. Estimating the Area under a Receiver Operating Characteristic (ROC) Curve For Repeated Measures Design. Journal of Statistical Software, 8, 1-18.
Lovell, M. & Collins, M., 1998. Neuropsychological assessment of the college football player. Journal of Head Trauma Rehabilitation, 13, 9-26.
Lovell, M. et al., 2006. Measurement of symptoms following sports-related concussion: reliability and normative data of the post-concussion scale. Applied Neuropsychology, 13(3), 166-174.
Marinides, Z. et al., 2015. Vision testing is additive to the sideline assessment of sports-related concussion. Neurology, 5(1), 25-34.
Martland, H., 1928. Punch Drunk. Journal of the American Medical Association, 91, 1103-1107.
McAllister, T. et al., 2012. Cognitive effects of one season of head impacts in a cohort of collegiate contact sport athletes. Neurology, 78(22), 1777-1784.
McCrea, M. et al., 2003. Acute Effects and Recovery Time Following Concussion in Collegiate Football Players. Journal of the American Medical Association, 290(19), 2556-2563.
McCrea, M. et al., 2004. Unreported Concussion in High School Football Players. Clinical Journal of Sports Medicine, 14(1), 13-17.
McCrea, M. et al., 1998. Standardized assessment of concussion (SAC): on-site mental status evaluation of the athlete. Journal of Head Trauma Rehabilitation, 13(2), 27-35.
McCrory, P. et al., 2013. Consensus statement on concussion in sport: the 4th International Conference on Concussion in Sport held in Zurich, November 2012. British Journal of Sports Medicine, 47, 250-258.
McCuen, M. et al., 2015. Collegiate women's soccer players suffer greater cumulative head impacts than their high school counterparts. Journal of Biomechanics, 48(13), 3720-3723.
Meaney, D. & Smith, D., 2011. Biomechanics of Concussion. Clinics in Sports Medicine, 30, 19-31.
Mihalik, J., Bell, D., Marshall, S. & Guskiewicz, K., 2007. Measurement of Head Impacts in Collegiate Football Players: An Investigation of Positional and Event-Type Differences. Journal of Neurosurgery, 61(6), 1229-1235.
Mihalik, J. et al., 2011. Collision Type and Player Anticipation Affect Head Impact Severity Among Youth Ice Hockey Players. Pediatrics, 125(6), 1394-1401.
Miller, R., Adamson, G., Pink, M. & Sweet, J., 2007. Comparison of Preseason, Midseason, and Postseason Neurocognitive Scores in Uninjured Collegiate Football Players. American Journal of Sports Medicine, 35(8), 1284-1288.
Millspaugh, J., 1937. Dementia pugilistica. US Naval Medical Bulletin, 35, 297-303.
Montenigro, P. et al., 2016. Cumulative Head Impact Exposure Predicts Later-Life Depression, Apathy, Executive Dysfunction, and Cognitive Impairment in Former High School and College Football Players. Journal of Neurotrauma, ePub.
Moon, D., Beedle, C. & Kovacic, C., 1971. Peak head acceleration of athletes during competition. Medicine & Science in Sports & Exercise, 3, 44-55.
Naunheim, R., Standeven, J. & Lewis, L., 2000. Comparison of impact data in hockey, football and soccer. Journal of Trauma and Acute Care Surgery, 48, 938-941.
NCAA, 2015. Concussion guidelines: Diagnosis and Management of Sport-Related Concussion Guidelines. [Online] Available at: http://www.ncaa.org/health-and-safety/concussion-guidelines [Accessed 20 August 2015].
NFL, 2014. NFL Sideline Assessment Tool. [Online] Available at: http://static.nfl.com/static/content/public/photo/2014/02/20/0ap2000000327057.pdf [Accessed 8 July 2015].
Omalu, B. et al., 2005. Chronic traumatic encephalopathy in a National Football League Player. Neurosurgery, 57, 128-134.
Pellman, E. et al., 2003. Concussion in professional football: location and direction of helmet impacts. Neurosurgery, 53, 1328-1341.
Putukian, M., 2011. Neuropsychological Testing as It Relates to Recovery From Sports-related Concussion. PM&R, 3, 425-432.
Rebchuk, A., Brown, H., Siegmund, G. & Blouin, J., 2015. Measuring Football Head Impacts: Sensitivity and Specificity of the xPatch Sensor. Second Annual AAN Sport Concussion Conference. Denver, CO, 2015. American Academy of Neurology.
Reid, S., Tarkington, J., Epstein, H. & O'Dea, T., 1971. Brain tolerance to impact in football. Surgery, Gynecology, and Obstetrics, 133, 929-936.
Reynolds, B. et al., 2015. Practice type effects on head impact in collegiate football. Journal of Neurosurgery, 4, 1-10.
Seltman, H., 2009. Experimental Design and Analysis. http://www.stat.cmu.edu/~hseltman/309/Book/Book.pdf [Accessed 22 August 2015].
Shahim, P. et al., 2014. Blood Biomarkers for Brain Injury in Concussed Professional Ice Hockey Players. JAMA Neurology, 71(6), 683-692.
Shek, D. & Ma, C., 2011. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations. The Scientific World Journal, 11, 42-76.
Siegmund, G., Bonin, S., Luck, J. & Bass, C., 2015. Validation of a Skin-Mounted Sensor for Measuring In-Vivo Head Impacts. In International Research Council on the Biomechanics of Injury. Lyon, 2015. IRCOBI Conference.
Siegmund, G. et al., 2015. Laboratory Validation of Two Wearable Sensor Systems for Measuring Head Impact Severity in Football Players. Annals of Biomedical Engineering, 44(4), 1257-1274.
Siman, R. et al., 2015. Serum SNTF Increases in Concussed Professional Ice Hockey Players and Relates to the Severity of Postconcussion Symptoms. Journal of Neurotrauma, 32, 1-7.
Statistics Canada, 2005. Sport Participation in Canada. Ottawa: http://www.statcan.gc.ca/pub/81-595-m/81-595-m2008060-eng.htm [Accessed 19 August 2015].
Talavage, T. et al., 2014. Functionally-Detected Cognitive Impairment in High School Football Players without Clinically-Diagnosed Concussion. Journal of Neurotrauma, 31, 327-338.
Valovich-McLeod, T. et al., 2004. Serial Administration of Clinical Concussion Assessments and Learning Effects in Healthy Young Athletes. Clinical Journal of Sport Medicine, 14(5), 287-295.
Valovich-McLeod, T., Bay, R., Lam, K. & Chhabra, A., 2012. Representative Baseline Values on the Sport Concussion Assessment Tool 2 (SCAT2) in Adolescent Athletes Vary by Gender, Grade, and Concussion History. The American Journal of Sports Medicine, 40(4), 927-933.
Van Kampen, D. et al., 2006. The "Value Added" of Neurocognitive Testing After Sports-Related Concussion. The American Journal of Sports Medicine, 34(10), 1630-1635.
Verbeke, G. & Molenberghs, G., 2000. Linear Mixed Models for Longitudinal Data. New York, NY: Springer-Verlag.
Viano, D., Casson, I. & Pellman, E., 2007. Concussion In Professional Football: Biomechanics of the Struck Player - Part 14. Journal of Neurosurgery, 61(2), 313-328.
Viano, D. et al., 2005. Concussion in Professional Football: Brain Responses by Finite Element Analysis: Part 9. Neurosurgery, 57, 891-916.
Wu, L. et al., 2016. In Vivo Evaluation of Wearable Head Impact Sensors. Annals of Biomedical Engineering, 44(4), 1234-1245.
Xu, L. et al., 2016. Repetitive mild traumatic brain injury with impact acceleration in the mouse: Multifocal axonopathy, neuroinflammation, and neurodegeneration in the visual system. Experimental Neurology, 275(3), 436-449.
Yengo-Kahn, A., Johnson, D., Zuckerman, S. & Solomon, G., 2016. Concussions in the National Football League. The American Journal of Sports Medicine, 44(3), 801-811.

Appendices

Appendix A - SCAT3

From © McCrory, P. et al. (2013). Consensus statement on concussion in sport: the 4th International Conference on Concussion in Sport held in Zurich, November 2012. British Journal of Sports Medicine, 47, 250-258. SCAT3. By permission from publisher.

[The SCAT3 form (Sport Concussion Assessment Tool, 3rd edition) is reproduced here in full over four pages: sideline assessment (Glasgow Coma Scale and Maddocks questions), background and symptom evaluation, cognitive assessment (SAC), balance and coordination examinations, administration instructions, and athlete information including return-to-play guidance and the scoring summary.]
If you are not familiar with the SCAt3, please read through these instructions carefully. this tool may be freely copied in its current form for distribution to individuals, teams, groups and organizations. Any revision or any reproduction in a digital form re-quires approval by the Concussion in Sport Group. NOTE: the diagnosis of a concussion is a clinical judgment, ideally made by a medical professional. the SCAt3 should not be used solely to make, or exclude, the diagnosis of concussion in the absence of clinical judgement. An athlete may have a concussion even if their SCAt3 is “normal”.What is a concussion? A concussion is a disturbance in brain function caused by a direct or indirect force to the head. It results in a variety of non-specifi c signs and / or symptoms (some examples listed below) and most often does not involve loss of consciousness. Concussion should be suspected in the presence of any one or more of the following:   - Symptoms (e.g., headache), or - Physical signs (e.g., unsteadiness), or - Impaired brain function (e.g. confusion) or - Abnormal behaviour (e.g., change in personality). Sideline ASSeSSmenTindications for emergency management noTe: A hit to the head can sometimes be associated with a more serious brain injury. Any of the following warrants consideration of activating emergency pro-cedures and urgent transportation to the nearest hospital: - Glasgow Coma score less than 15 - Deteriorating mental status - potential spinal injury - progressive, worsening symptoms or new neurologic signsPotential signs of concussion? if any of the following signs are observed after a direct or indirect blow to the head, the athlete should stop participation, be evaluated by a medical profes-sional and should not be permitted to return to sport the same day if a concussion is suspected.Any loss of consciousness?  Y  n“if so, how long?“ Balance or motor incoordination (stumbles, slow / laboured movements, etc.)?  Y  nDisorientation or confusion (inability to respond appropriately to questions)?  Y  nloss of memory:  Y  n“if so, how long?“ “Before or after the injury?" Blank or vacant look:  Y  nVisible facial injury in combination with any of the above:  Y  nSCAT3™Sport Concussion Assessment Tool – 3rd editionFor use by medical professionals onlyglasgow coma scale (gCS)Best eye response (e)no eye opening 1eye opening in response to pain 2eye opening to speech 3eyes opening spontaneously 4Best verbal response (v)no verbal response 1incomprehensible sounds 2inappropriate words 3Confused 4oriented 5Best motor response (m)no motor response 1extension to pain 2Abnormal fl exion to pain  3Flexion / Withdrawal to pain  4localizes to pain 5obeys commands 6glasgow Coma score (e + v + m) of 15GCS should be recorded for all athletes in case of subsequent deterioration.1name Date / Time of Injury:Date of Assessment:examiner:notes: mechanism of injury (“tell me what happened”?):Any athlete with a suspected concussion should be removed From PlAy, medically assessed, monitored for deterioration (i.e., should not be left alone) and should not drive a motor vehicle until cleared to do so by a medical professional. no athlete diag-nosed with concussion should be returned to sports participation on the day of injury.2 maddocks Score3“I am going to ask you a few questions, please listen carefully and give your best effort.”Modifi ed Maddocks questions (1 point for each correct answer)What venue are we at today?  0 1Which half is it now? 0 1Who scored last in this match? 
0 1What team did you play last week / game? 0 1Did your team win the last game? 0 1maddocks score of 5Maddocks score is validated for sideline diagnosis of concussion only and is not used for serial testing.259group.bmj.com on August 10, 2015 - Published by http://bjsm.bmj.com/Downloaded from   90  SCAT3 Sport ConCuSSion ASSeSment tool 3 | PAge 2  © 2013 Concussion in Sport GroupCogniTive & PhySiCAl evAluATionBACkgroundname: Date: examiner: Sport / team / school: Date / time of injury:Age: Gender:  m  FYears of education completed:  Dominant hand:   right  left  neitherHow many concussions do you think you have had in the past? When was the most recent concussion? How long was your recovery from the most recent concussion? Have you ever been hospitalized or had medical imaging done for a head injury? Y  nHave you ever been diagnosed with headaches or migraines?  Y  nDo you have a learning disability, dyslexia, ADD / ADHD?  Y  nHave you ever been diagnosed with depression, anxiety or other psychiatric disorder? Y  nHas anyone in your family ever been diagnosed with any of these problems? Y  nAre you on any medications? if yes, please list:  Y  nSCAT3 to be done in resting state. Best done 10 or more minutes post excercise. SymPTom evAluATion 3 how do you feel? “You should score yourself on the following symptoms, based on how you feel now”.none mild moderate severeHeadache 0 1 2 3 4 5 6“pressure in head” 0 1 2 3 4 5 6neck pain 0 1 2 3 4 5 6nausea or vomiting 0 1 2 3 4 5 6Dizziness 0 1 2 3 4 5 6Blurred vision 0 1 2 3 4 5 6Balance problems 0 1 2 3 4 5 6Sensitivity to light 0 1 2 3 4 5 6Sensitivity to noise 0 1 2 3 4 5 6Feeling slowed down 0 1 2 3 4 5 6Feeling like “in a fog“ 0 1 2 3 4 5 6“Don’t feel right” 0 1 2 3 4 5 6Difficulty concentrating 0 1 2 3 4 5 6Difficulty remembering 0 1 2 3 4 5 6Fatigue or low energy 0 1 2 3 4 5 6Confusion 0 1 2 3 4 5 6Drowsiness 0 1 2 3 4 5 6trouble falling asleep 0 1 2 3 4 5 6more emotional 0 1 2 3 4 5 6irritability 0 1 2 3 4 5 6Sadness 0 1 2 3 4 5 6nervous or Anxious 0 1 2 3 4 5 6Total number of symptoms (Maximum possible 22) Symptom severity score (Maximum possible 132)Do the symptoms get worse with physical activity?  Y  nDo the symptoms get worse with mental activity?  Y  n self rated  self rated and clinician monitored clinician interview  self rated with parent inputoverall rating: if you know the athlete well prior to the injury, how different is the athlete acting compared to his / her usual self? Please circle one response:no different very different unsure N/A4 Cognitive assessmentStandardized Assessment of Concussion (SAC) 4orientation (1 point for each correct answer)What month is it?  0 1What is the date today?  0 1What is the day of the week?  0 1What year is it?  0 1What time is it right now? (within 1 hour) 0 1orientation score of 5immediate memory List Trial 1 Trial 2 Trial 3 Alternative word listelbow 0 1 0 1 0 1 candle baby fingerapple 0 1 0 1 0 1 paper monkey pennycarpet 0 1 0 1 0 1 sugar perfume blanketsaddle 0 1 0 1 0 1 sandwich sunset lemonbubble 0 1 0 1 0 1 wagon iron insectTotalimmediate memory score total of 15Concentration: digits BackwardList Trial 1 Alternative digit list4-9-3 0 1 6-2-9 5-2-6 4-1-53-8-1-4 0 1 3-2-7-9 1-7-9-5 4-9-6-86-2-9-7-1 0 1 1-5-2-8-6 3-8-5-2-7 6-1-8-4-37-1-8-4-6-2 0 1 5-3-9-1-4-8 8-3-1-9-6-4 7-2-4-8-5-6Total of 4Concentration: month in reverse order (1 pt. 
for entire sequence correct)Dec-nov-oct-Sept-Aug-Jul-Jun-may-Apr-mar-Feb-Jan 0 1Concentration score of 58 SAC delayed recall4delayed recall score of 5Balance examinationDo one or both of the following tests.Footwear (shoes, barefoot, braces, tape, etc.) Modified Balance Error Scoring System (BESS) testing5Which foot was tested (i.e. which is the non-dominant foot)  left  rightTesting surface (hard floor, field, etc.) ConditionDouble leg stance: errorsSingle leg stance (non-dominant foot): errorstandem stance (non-dominant foot at back): errorsAnd / orTandem gait6,7time (best of 4 trials):  seconds 6Coordination examinationupper limb coordinationWhich arm was tested:  left  rightCoordination score of 17neck examination:range of motion tenderness upper and lower limb sensation & strengthFindings: 5Scoring on the SCAT3 should not be used as a stand-alone method to diagnose concussion, measure recovery or make decisions about an athlete’s readiness to return to competition after concussion. Since signs and symptoms may evolve over time, it is important to consider repeat evaluation in the acute assessment of concussion.260group.bmj.com on August 10, 2015 - Published by http://bjsm.bmj.com/Downloaded from   91  SCAT3 Sport ConCuSSion ASSeSment tool 3 | PAge 3  © 2013 Concussion in Sport GroupinSTruCTionS Words in Italics throughout the SCAt3 are the instructions given to the athlete by the tester.Symptom Scale“You should score yourself on the following symptoms, based on how you feel now”.to be completed by the athlete. in situations where the symptom scale is being completed after exercise, it should still be done in a resting state, at least 10 minutes post exercise.For total number of symptoms, maximum possible is 22.For Symptom severity score, add all scores in table, maximum possible is 22 x 6 = 132.SAC 4immediate memory“I am going to test your memory. I will read you a list of words and when I am done, repeat back as many words as you can remember, in any order.” Trials 2 & 3:“I am going to repeat the same list again. Repeat back as many words as you can remember in any order, even if you said the word before.“Complete all 3 trials regardless of score on trial 1 & 2. Read the words at a rate of one per second. Score 1 pt. for each correct response. Total score equals sum across all 3 trials. Do not inform the athlete that delayed recall will be tested.Concentrationdigits backward“I am going to read you a string of numbers and when I am done, you repeat them back to me backwards, in reverse order of how I read them to you. For example, if I say 7-1-9, you would say 9-1-7.” If correct, go to next string length. If incorrect, read trial 2. One point possible for each string length. Stop after incorrect on both trials. The digits should be read at the rate of one per second.months in reverse order“Now tell me the months of the year in reverse order. Start with the last month and go backward. So you’ll say December, November … Go ahead”1 pt. for entire sequence correctdelayed recallthe delayed recall should be performed after completion of the Balance and Coor-dination examination.“Do you remember that list of words I read a few times earlier? Tell me as many words from the list as you can remember in any order.“ Score 1 pt. for each correct responseBalance examinationModified Balance Error Scoring System (BESS) testing 5This  balance  testing  is  based  on  a modified  version  of  the  Balance  Error  Scoring System (BESS)5. 
A stopwatch or watch with a second hand is required for this testing.“I am now going to test your balance. Please take your shoes off, roll up your pant legs above ankle (if applicable), and remove any ankle taping (if applicable). This test will consist of three twenty second tests with different stances.“(a) double leg stance: “The first stance is standing with your feet together with your hands on your hips and with your eyes closed. You should try to maintain stability in that position for 20 seconds. I will be counting the number of times you move out of this position. I will start timing when you are set and have closed your eyes.“(b) Single leg stance: “If you were to kick a ball, which foot would you use? [This will be the dominant foot] Now stand on your non-dominant foot. The dominant leg should be held in approximately 30 de-grees of hip flexion and 45 degrees of knee flexion. Again, you should try to maintain stability for 20 seconds with your hands on your hips and your eyes closed. I will be counting the number of times you move out of this position. If you stumble out of this position, open your eyes and return to the start position and continue balancing. I will start timing when you are set and have closed your eyes.“ (c) Tandem stance: “Now stand heel-to-toe with your non-dominant foot in back. Your weight should be evenly distributed across both feet. Again, you should try to maintain stability for 20 seconds with your hands on your hips and your eyes closed. I will be counting the number of times you move out of this position. If you stumble out of this position, open your eyes and return to the start position and continue balancing. I will start timing when you are set and have closed your eyes.”Balance testing – types of errors1. Hands lifted off iliac crest2. opening eyes3. Step, stumble, or fall4. moving hip into > 30 degrees abduction5. lifting forefoot or heel6. remaining out of test position > 5 seceach of the 20-second trials is scored by counting the errors, or deviations from the proper stance, accumulated by the athlete. the examiner will begin counting errors only after the individual has assumed the proper start position. The modified BeSS is calculated by adding one error point for each error during the three 20-second tests. The maximum total number of errors for any single con-dition is 10. if a athlete commits multiple errors simultaneously, only one error is recorded but the athlete should quickly return to the testing position, and counting should resume once subject is set. Subjects that are unable to maintain the testing procedure for a minimum of five seconds at the start are assigned the highest possible score, ten, for that testing condition. oPTion: For further assessment, the same 3 stances can be performed on a surface of medium density foam (e.g., approximately 50 cm x 40 cm x 6 cm). Tandem gait6,7Participants are instructed to stand with their feet together behind a starting line (the test is best done with footwear removed). Then, they walk in a forward direction as quickly and as accurately as possible along a 38mm wide (sports tape), 3 meter line with an alternate foot heel-to-toe gait ensuring that they approximate their heel and toe on each step. Once they cross the end of the 3m line, they turn 180 degrees and return to the starting point using the same gait. A total of 4 trials are done and the best time is retained. Athletes should complete the test in 14 seconds. 
Athletes fail the test if they step off the line, have a separation between their heel and toe, or if they touch or grab the examiner or an object. In this case, the time is not recorded and the trial repeated, if appropriate.Coordination examinationupper limb coordinationFinger-to-nose (FTN) task: “I am going to test your coordination now. Please sit comfortably on the chair with your eyes open and your arm (either right or left) outstretched (shoulder flexed to 90 degrees and elbow and fingers extended), pointing in front of you. When I give a start signal, I would like you to perform five successive finger to nose repetitions using your index finger to touch the tip of the nose, and then return to the starting position, as quickly and as accurately as possible.”Scoring: 5 correct repetitions in < 4 seconds = 1Note for testers: Athletes fail the test if they do not touch their nose, do not fully extend their elbow or do not perform five repetitions. Failure should be scored as 0.references & Footnotes1. this tool has been developed by a group of international experts at the 4th in-ternational Consensus meeting on Concussion in Sport held in Zurich, Switzerland in november 2012. the full details of the conference outcomes and the authors of the tool are published in the BJSm injury prevention and Health protection, 2013, Volume 47, issue 5. the outcome paper will also be simultaneously co-published in other leading biomedical journals with the copyright held by the Concussion in Sport Group, to allow unrestricted distribution, providing no alterations are made.2. mcCrory p et al., Consensus Statement on Concussion in Sport – the 3rd inter-national Conference on Concussion in Sport held in Zurich, november 2008. British Journal of Sports medicine 2009; 43: i76-89.3. maddocks, Dl; Dicker, GD; Saling, mm. the assessment of orientation following concussion in athletes. Clinical Journal of Sport Medicine. 1995; 5(1): 32 – 3.4. mcCrea m. Standardized mental status testing of acute concussion. Clinical Jour-nal of Sport medicine. 2001; 11: 176 – 181. 5. Guskiewicz Km. Assessment of postural stability following sport-related concus-sion. Current Sports medicine reports. 2003; 2: 24 – 30.6. Schneiders, A.G., Sullivan, S.J., Gray, A., Hammond-tooke, G. & mcCrory, p. normative values for 16-37 year old subjects for three clinical measures of motor performance used in the assessment of sports concussions. Journal of Science and Medicine in Sport. 2010; 13(2): 196 – 201.7. Schneiders, A.G., Sullivan, S.J., Kvarnstrom. J.K., olsson, m., Yden. t. & marshall, S.W.  The  effect  of  footwear  and  sports-surface  on dynamic  neurological  screen-ing in sport-related concussion. Journal of Science and medicine in Sport. 2010; 13(4): 382 – 386261group.bmj.com on August 10, 2015 - Published by http://bjsm.bmj.com/Downloaded from   92  SCAT3 Sport ConCuSSion ASSeSment tool 3 | PAge 4  © 2013 Concussion in Sport GroupAThleTe inFormATion Any athlete suspected of having a concussion should be removed from play, and then seek medical evaluation.Signs to watch forProblems could arise over the first 24 – 48 hours. 
The athlete should not be left alone and must go to a hospital at once if they: - Have a headache that gets worse - Are very drowsy or can’t be awakened - Can’t recognize people or places - Have repeated vomiting - Behave unusually or seem confused; are very irritable - Have seizures (arms and legs jerk uncontrollably) - Have weak or numb arms or legs - Are unsteady on their feet; have slurred speechremember, it is better to be safe. Consult your doctor after a suspected concussion.return to playAthletes should not be returned to play the same day of injury.When returning athletes to play, they should be medically cleared and then follow a stepwise supervised program, with stages of progression. For example:rehabilitation stage Functional exercise at each stage of rehabilitationobjective of each stageno activity physical and cognitive rest recoverylight aerobic exercise Walking, swimming or stationary cycling keeping intensity, 70 % maximum predicted heart rate. no resistance trainingincrease heart rateSport-specific exercise Skating drills in ice hockey, running drills in soccer. no head impact activitiesAdd movementnon-contact training drillsprogression to more complex training drills, eg passing drills in football and ice hockey. may start progressive resistance trainingexercise, coordination, and cognitive loadFull contact practice Following medical clearance participate in normal training activitiesRestore confidence and assess functional skills by coaching staffreturn to play normal game playThere should be at least 24 hours (or longer) for each stage and if symptoms recur the athlete should rest until they resolve once again and then resume the program at the previous asymptomatic stage. resistance training should only be added in the later stages. if the athlete is symptomatic for more than 10 days, then consultation by a medical practitioner who is expert in the management of concussion, is recommended.medical clearance should be given before return to play. notes:ConCuSSion injury AdviCe(To be given to the person monitoring the concussed athlete) this patient has received an injury to the head. A careful medical examination has been carried out and no sign of any serious complications has been found. recovery time is variable across individuals and the patient will need monitoring for a further period by a responsible adult. Your treating physician will provide guidance as to this timeframe.if you notice any change in behaviour, vomiting, dizziness, worsening head-ache, double vision or excessive drowsiness, please contact your doctor or the nearest hospital emergency department immediately.other important points: - Rest (physically and mentally), including training or playing sports  until symptoms resolve and you are medically cleared - no alcohol - no prescription or non-prescription drugs without medical supervision.  
Appendix B - King-Devick Test

Participants held test cards at a normal reading distance. Investigators recorded completion time using a stopwatch (iPhone, Apple, Cupertino, CA). During baseline testing participants performed the KDT twice and their best time was used as their baseline score.

Instructions:
1. Test cards were explained to participants: "There are three test cards that increase in difficulty. I will be timing how quickly you can read aloud the numbers on each card and keeping track of any errors that you make. You cannot use your hand or finger on the card to help you follow the number pattern. I will start the time when you read the first number and I will stop the time when you finish saying the last number at the bottom right-hand corner. Then we will continue on to the next cards. Again, you are to read the numbers as quickly as you can without making any errors."
2. If a subject made an error and quickly corrected it, no error was recorded. An error was recorded for each omission, addition and reversal.
3. Timing stopped once a participant finished a test card; this allowed time for the participant to flip to the next test card.
4. The total time (the cumulative time to complete all the test cards) was recorded.

From © Galetta et al. (2011). The King–Devick test and sports-related concussion: Study of a rapid visual screening tool in a collegiate cohort. Journal of the Neurological Sciences, 309, 34-39. Figure 1. By permission from publisher.

Appendix C - Alternative prompts used in the SAC

Random words were generated from http://listofrandomwords.com/index.cfm?blist. Random numbers were generated using a custom Matlab script.
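The generation script is not reproduced in the thesis. As a minimal illustration only (assuming, as in most of the lists below, strings of three to six non-repeating, non-zero digits), such prompts could be produced in Matlab as follows:

```matlab
% Illustrative sketch only (not the thesis script): generate one week's set of
% digits-backward strings of length 3, 4, 5 and 6 from non-repeating, non-zero digits.
rng('shuffle');                              % seed the generator from the clock
lengths = 3:6;
strings = cell(1, numel(lengths));
for i = 1:numel(lengths)
    digits = randperm(9, lengths(i));        % unique digits drawn from 1-9
    strings{i} = sprintf('%d', digits);      % e.g. '493'
end
fprintf('%s - %s - %s - %s\n', strings{:});  % e.g. 493 - 3814 - 62971 - 718462
```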
Word Lists (v1 – used in year 1 data collection)
Week 1 (baseline): elbow, apple, carpet, saddle, bubble
Week 2: candle, paper, sugar, sandwich, wagon
Week 3: baby, monkey, perfume, sunset, iron
Week 4: finger, penny, blanket, lemon, insect
Week 5: lamp, snowball, potato, gumball, pumpkin
Week 6: shipyward, topic, boxer, toolshed, alter
Week 7: tango, chilli, pepper, wagon, monkey
Week 8: penny, fossil, silver, turbine, runner
Week 9: tandem, rocket, dinner, painter, marble
Week 10: zebra, sleepy, racket, lighter, channel
Week 11 (post-season): iron, sunset, perfume, monkey, baby

Word Lists (v2 – used in year 2 data collection)
Week 1 (baseline): granite, oyster, yoga, jasmine, bourbon
Week 2: decoy, liquor, viper, fossil, iron
Week 3: baguette, falcon, cactus, novice, pigeon
Week 4: finger, penny, blanket, lemon, insect
Week 5: lamp, giraffe, potato, basket, pumpkin
Week 6: sugar, psychic, burger, impact, candle
Week 7: tango, chilli, perfume, wagon, monkey
Week 8: penny, fossil, silver, lentil, runner
Week 9: tandem, rocket, dinner, painter, marble
Week 10: zebra, sleepy, bubble, lighter, oyster
Week 11: iron, sunset, perfume, monkey, baby
Week 12: royal, crafty, brisket, swagger, messy
Week 13: navy, camel, seduce, brisket, pelvis
Week 14: permit, argyle, silence, broker, pilsner
Week 15: otter, petrol, affair, champaign, carbon
Week 16: collage, ballade, fortress, boxer, taffy
Week 17: phantom, feather, pony, arcade, gouda

Number Lists (v1 – used in year 1 data collection)
Week 1 (baseline): 493 – 3814 – 62971 – 718462
Week 2: 629 – 3279 – 15286 – 539148
Week 3: 526 – 1795 – 38527 – 831964
Week 4: 415 – 4968 – 61843 – 724856
Week 5: 493 – 3814 – 62971 – 718462
Week 6: 629 – 3279 – 15286 – 539148
Week 7: 526 – 1795 – 38527 – 831964
Week 8: 415 – 4968 – 61843 – 724856
Week 9: 493 – 3814 – 62971 – 718462
Week 10: 526 – 1795 – 38527 – 831964
Week 11: 415 – 4968 – 61843 – 724856

Number Lists (v2 – used in year 2 data collection)
Week 1 (baseline): 496 – 5712 – 93871 – 471985
Week 2: 629 – 3279 – 15286 – 539148
Week 3: 526 – 1795 – 38527 – 831964
Week 4: 415 – 4968 – 61843 – 724856
Week 5: 925 – 8174 – 59413 – 195746
Week 6: 243 – 7598 – 73162 – 725618
Week 7: 178 – 2936 – 39476 – 172493
Week 8: 451 – 7936 – 69351 – 469712
Week 9: 947 – 3265 – 54328 – 387296
Week 10: 846 – 5127 – 17439 – 897134
Week 11: 946 – 8135 – 72583 – 725839
Week 12: 427 – 6198 – 65927 – 347896
Week 13: 487 – 7846 – 73852 – 638594
Week 14: 629 – 5389 – 75486 – 196342
Week 15: 296 – 3914 – 47825 – 428352
Week 16: 135 – 2713 – 94637 – 892651
Week 17: 824 – 1897 – 23654 – 865937

Appendix D - Instructions to Set Up Linear & Hierarchical Mixed Models

Here, we describe the procedure for performing the linear and hierarchical mixed model analysis presented in chapter 5. This appendix is to be consulted by future students or investigators who intend to perform a similar analysis.

Before performing a linear or hierarchical mixed model analysis, the work of Verbeke & Molenberghs (2000) should be consulted. Their work provides a detailed explanation of these models, including the underlying mathematical concepts. The works of Seltman (2009) and Shek & Ma (2011) should additionally be consulted if performing the analysis in SPSS.
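The chapter 5 models were specified and fit in SPSS, as outlined below; the thesis does not include a Matlab implementation. Purely as an illustration of the two model structures described at the end of this appendix (a factor, or hierarchical, design using impact-exposure tertile, and a covariate, or linear, design using impact count), a rough equivalent using Matlab's fitlme, with hypothetical file and variable names, might look like the sketch below. It uses fitlme's default residual structure and does not reproduce the unstructured repeated-measures covariance selected in SPSS.

```matlab
% Illustration only: the thesis fit these models with the SPSS MIXED procedure.
% File, table and variable names below are hypothetical.
data = readtable('weekly_testing.csv');
data.participant = categorical(data.participant);
data.tertile     = categorical(data.tertile);   % impact-exposure tertile (factor)

% Hierarchical (factor) design: tertile, week and their interaction as fixed
% effects, with a random intercept for each participant.
hmm = fitlme(data, 'symptomSeverity ~ tertile*week + (1|participant)');

% Linear (covariate) design: cumulative impact count as a continuous predictor.
lmm = fitlme(data, 'symptomSeverity ~ impactCount + week + (1|participant)');

disp(hmm)
disp(lmm)
```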
The following is a step-by-step outline of the procedure used to conduct the analysis within this thesis:

• Ensure the following data are collected for each case: participant ID, time of testing, repeated test number, independent (predictor) variables, dependent variable(s).
• Import data into SPSS.
• Build the models.
  o Open the SPSS syntax editor.
  o Construct the desired syntax using the skeleton provided by Shek & Ma (2011).
    ! Note: in the syntax, 'by' refers to factors, whereas 'with' refers to covariates.
  o Use the unstructured covariance type.

Within this thesis, we used a factor design when investigating group (i.e. impact exposure tertile) effects. These were referred to as hierarchical mixed models. The effect of impact exposure independent of tertile ranking was investigated using a covariate design. These were referred to as linear mixed models. If requested, ADR can provide examples of the syntax used in the analyses for this thesis.

Appendix E - Instructions to Set Up Repeated Measures Receiver Operating Characteristic Curves

Here, we present the methodology for the repeated measures ROC curve analysis presented in chapter 6. This appendix is to be consulted by future students or investigators who intend to perform a similar analysis. If lacking prior experience with ROC curves, it may be beneficial to consult an introductory statistics textbook that discusses ROC curves before proceeding.

Before performing any analysis, the work of Liu & Wu (2003) should be consulted. Their paper outlines the mathematical validation for ROC curves generated from repeated measures designs. They also provide a macro, written in SAS, to perform the analysis. The work of Hanley & McNeil (1983) should also be consulted. Their paper provides the mathematical validation for comparing ROC curves derived from dependent populations.

The following is a step-by-step outline of the procedure used in this thesis to conduct the analysis. Data for the analysis must consist of a predictor variable (e.g. heart rate) and a binary classifier variable (i.e. alive or dead). The predictor variable is the output of the diagnostic test under investigation, whereas the binary classifier is the state (diseased or non-diseased) of the patient as determined by a gold standard.

• Ensure the following variables are collected for each case: participant ID, repeated test number, confounding variables (e.g. time, age, sex), predictor variable (e.g. diagnostic test score) and binary state classification (diseased or non-diseased).
• Export data to Microsoft Excel.
• Download SAS.
  o SAS provides a University Edition at no cost to students.
    ! http://www.sas.com/en_us/software/university-edition.html
  o Follow the instructions on the SAS website.
• Set up code (i.e. macros) in SAS.
  o Create a macro with the code provided by Liu & Wu (2003), i.e. the glimmroc macro.
  o Create a macro for the glimmix procedure.
    ! Contact ADR or visit http://www-personal.umich.edu/~kwelch/genmod/glimmix.sas to obtain a copy of the glimmix macro.
  o Embed the glimmix macro within the first line of the glimmroc macro.
• Perform the ROC analysis in SAS:
  o Create a new script.
  o The first line of the code should state %include followed by the directory to the glimmroc macro (with the glimmix macro embedded).
  o Use PROC IMPORT to import the data for analysis.
  o Set up the glimmroc macro variables: y = diseased state, x_list = fixed predictor variables, z_list = independent variables for the random effect, c_s_r = ar(1) [select the autoregressive covariance structure].
  o Run the analysis to generate the AUC output.
• Note: if running the analysis in SAS University Edition, the ROC curve plots will not populate and an error will occur. Regardless, an AUC value will be generated.
  o To prevent this error from occurring, remove the code under the '%get ROC Curve' and '%get sensitivity and specificity' sections of the glimmroc macro.
• Evaluate whether AUC values are statistically different.
  o This was performed using a custom Matlab script based on the work of Hanley & McNeil (1983); a minimal sketch of this comparison is given below.
  o Inputs: the AUC values to compare, and a 3-column data array (column 1 = diseased state, columns 2 and 3 = predictor variable from the two diagnostic tests being compared).
  o A copy of the code can be provided by ADR.
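The custom script itself is not reproduced in the thesis. The following is a minimal sketch of the Hanley & McNeil (1983) critical-ratio test referenced in the last step above; it assumes the two AUC estimates and their standard errors have already been obtained (e.g. from the SAS output), and that the correlation term r has been looked up from Hanley & McNeil's table using the average AUC and the average between-test correlation of the raw scores.

```matlab
function [z, p] = compare_correlated_aucs(auc1, se1, auc2, se2, r)
%COMPARE_CORRELATED_AUCS  Hanley & McNeil (1983) test for two AUCs from the same cases.
%   auc1, auc2 : the two AUC estimates
%   se1,  se2  : their standard errors
%   r          : correlation between the two AUC estimates (from Hanley &
%                McNeil's table, based on the average AUC and the average
%                between-test correlation of the raw scores)
z = (auc1 - auc2) / sqrt(se1^2 + se2^2 - 2*r*se1*se2);   % critical ratio
p = erfc(abs(z) / sqrt(2));                              % two-sided p-value
end
```

For example, compare_correlated_aucs(0.85, 0.04, 0.60, 0.06, 0.40) returns the critical ratio and the corresponding two-sided p-value for the difference between the two curves.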
Appendix F - Pairwise p-Values for Comparison Between Diagnostic Tests for All Cases

          GSCnm   GSCsv   GSCfull  SAC     mBESS   fBESS   BESStot  KDT     SCAT3
GSCnm     -       0.270   0.493    0.055   0.022   0.021   0.010    0.196   0.381
GSCsv     0.270   -       0.276    0.091   0.033   0.036   0.017    0.250   0.212
GSCfull   0.493   0.276   -        0.059   0.020   0.021   0.009    0.182   0.358
SAC       0.055   0.091   0.059    -       0.336   0.298   0.235    0.220   0.042
mBESS     0.022   0.033   0.020    0.336   -       0.494   0.349    0.125   0.003
fBESS     0.021   0.036   0.021    0.298   0.494   -       0.316    0.115   0.002
BESStot   0.010   0.017   0.009    0.235   0.349   0.316   -        0.076   0.000
KDT       0.196   0.250   0.182    0.220   0.125   0.115   0.076    -       0.129
SCAT3     0.381   0.212   0.358    0.042   0.003   0.002   0.000    0.129   -

Appendix G - Pairwise p-Values for Comparison Between Diagnostic Tests for Select Cases

          GSCnm   GSCsv   GSCfull  SAC     mBESS   fBESS   BESStot  KDT     SCAT3
GSCnm     -       0.431   0.463    0.003   0.013   0.011   0.008    0.081   0.415
GSCsv     0.431   -       0.395    0.004   0.013   0.011   0.008    0.079   0.359
GSCfull   0.463   0.395   -        0.003   0.010   0.008   0.006    0.063   0.429
SAC       0.003   0.004   0.003    -       0.426   0.484   0.482    0.100   0.007
mBESS     0.013   0.013   0.010    0.426   -       0.434   0.400    0.152   0.002
fBESS     0.011   0.011   0.008    0.484   0.434   -       0.492    0.110   0.001
BESStot   0.008   0.008   0.006    0.482   0.400   0.492   -        0.109   0.000
KDT       0.081   0.079   0.063    0.100   0.152   0.110   0.109    -       0.061
SCAT3     0.415   0.359   0.429    0.007   0.002   0.001   0.000    0.061   -

Appendix H - Collinearity (r) Values Between Diagnostic Tests for All Cases

          GSCnm   GSCsv   GSCfull  SAC     mBESS   fBESS   BESStot  KDT     SCAT3
GSCnm     -       0.919   0.953    0.124   0.154   0.152   0.202    0.184   0.767
GSCsv     0.919   -       0.995    0.079   0.206   0.159   0.232    0.321   0.826
GSCfull   0.953   0.995   -        0.094   0.197   0.166   0.233    0.298   0.827
SAC       0.124   0.079   0.094    -       0.040   0.371   0.238    0.173   0.160
mBESS     0.154   0.206   0.197    0.040   -       0.169   0.685    0.087   0.488
fBESS     0.152   0.159   0.166    0.371   0.169   -       0.829    0.152   0.542
BESStot   0.202   0.232   0.233    0.238   0.685   0.829   -        0.152   0.680
KDT       0.184   0.321   0.298    0.173   0.087   0.152   0.152    -       0.393
SCAT3     0.767   0.826   0.827    0.160   0.488   0.542   0.680    0.393   -

Appendix I - Collinearity (r) Values Between Diagnostic Tests for Select Cases

          GSCnm   GSCsv   GSCfull  SAC     mBESS   fBESS   BESStot  KDT     SCAT3
GSCnm     -       0.932   0.958    0.173   0.165   0.070   0.161    0.348   0.771
GSCsv     0.932   -       0.996    0.158   0.223   0.126   0.223    0.443   0.821
GSCfull   0.958   0.996   -        0.157   0.213   0.126   0.221    0.434   0.823
SAC       0.173   0.158   0.157    -       0.125   0.415   0.188    0.053   0.056
mBESS     0.165   0.223   0.213    0.125   -       0.186   0.710    0.166   0.524
fBESS     0.070   0.126   0.126    0.415   0.186   -       0.815    0.221   0.521
BESStot   0.161   0.223   0.221    0.188   0.710   0.815   -        0.239   0.690
KDT       0.348   0.443   0.434    0.053   0.166   0.221   0.239    -       0.501
SCAT3     0.771   0.821   0.823    0.056   0.524   0.521   0.690    0.501   -
