UBC Theses and Dissertations
A Robust and Sensitive Voxel-Based Method for Measuring Cortical Thickness Change using Fuzzy Correspondence

by

Saurabh

B.Tech., International Institute of Information Technology, 2007

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

MASTER OF APPLIED SCIENCE

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Biomedical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

November 2014

© Saurabh, 2014

Abstract

The human cerebral cortex is a deeply folded structure of grey matter and is responsible for maintaining cognitive functions. Cortical thickness has emerged as an important surrogate biomarker in many neurodegenerative diseases. In this thesis, we propose a longitudinal method called LCT (Longitudinal Cortical Thickness) for measuring cortical thickness changes over time in magnetic resonance scans. We adopt a voxel-based approach rather than a surface-based one to gain computational efficiency and sensitivity to subtle changes, but aim to achieve the scan-rescan reproducibility of surface-based methods. Existing surface-based methods are very time-consuming, and the topological and smoothness constraints imposed during surface reconstruction may blunt longitudinal sensitivity. Also, longitudinal processing requires establishing anatomical correspondence across time, and most current methods use deformable registration for this purpose, which requires setting many parameters that can directly affect the measurements. In contrast, LCT establishes cortex-specific matches by using three intuitive features defined on the cortical skeleton: normal direction, spatial coordinates and shape context.
We also introduce the concept of fuzzy correspondence, which allows a skeletal point in one scan to be partially matched to multiple points in another scan, thereby enhancing the stability of the matches. LCT was evaluated using three longitudinal datasets: 1) same-day pairs of scans of 15 subjects, to test for scan-rescan reproducibility; 2) paired scans of 50 Alzheimer's disease (AD) and 50 healthy subjects taken one year apart, to test our method's ability to distinguish between AD and normal; and 3) paired scans of 100 secondary progressive multiple sclerosis (MS) subjects taken two years apart, to test for sensitivity to change over time. Tests on the scan-rescan dataset show that LCT is comparable in scan-rescan reproducibility to two other state-of-the-art methods: FreeSurfer and minimum line integral (MLI). Tests on the AD dataset show that LCT detected larger group differences between AD and normal subjects than FreeSurfer and MLI. Further, the results on the MS dataset demonstrate that LCT is more sensitive to change over time and produced stronger correlations between cortical thickness changes and changes in clinical scores than FreeSurfer, which did not produce any statistically significant correlations.

Preface

This thesis is an original and independent work by the author, Saurabh. All of the work presented henceforth was conducted in the MS/MRI research group at the University of British Columbia, Point Grey campus.

Chapter 1: Figure 1.1 was taken from the 20th edition of Gray's Anatomy of the Human Body, and the book is cited for reference.

Chapter 2: Figures 2.1, 2.2, 2.3 and 2.4 were taken from the original paper, and the paper is cited for reference.

Chapters 4 and 5: An earlier version of Chapters 4 and 5 was published at a major MS conference, as listed below:

Garg, S., Tam, R., Li, D. K. B., Roy-Hewitson, C., Zhao, Y., Freedman, M., Traboulsee, A., 2013.
Cortical atrophy in SPMS and its association with cognitive impairment and whole brain and T2 lesion volume over two years. In Proceedings of the Congress of the European Committee for Treatment and Research in Multiple Sclerosis.

The more recent version of Chapter 4, including Section 4.1.4, was published in the XXIV Brazilian Congress of Biomedical Engineering and was presented orally. The paper was also selected as one of the five finalists for the Cândido Pinto de Melo Award, which is awarded in honor of the founder of the Brazilian Society of Biomedical Engineering:

Garg, S., Traboulsee, A., Li, D. K. B., Tam, R., 2014. A robust and sensitive voxel-based method for measuring cortical thickness change using fuzzy correspondence. In Proceedings of the XXIV Brazilian Congress of Biomedical Engineering.

The AD dataset used in this thesis was obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu), and the MS data used in this thesis was provided by the MS/MRI research group, UBC.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgements
1 Introduction
  1.1 Cerebral Cortex
  1.2 Magnetic Resonance Imaging of Neurodegenerative Diseases
  1.3 Overview of Alzheimer's Disease
    1.3.1 Clinical Score for Alzheimer's Disease
  1.4 Overview of Multiple Sclerosis
    1.4.1 Clinical Scores for Multiple Sclerosis
  1.5 Motivation
    1.5.1 Challenges
  1.6 Thesis Contributions
2 Related Work
  2.1 Literature Review
    2.1.1 Cross-sectional Methods vs. Longitudinal Methods
    2.1.2 Surface-based Methods vs. Voxel-based Methods
    2.1.3 Definitions of Cortical Thickness
3 Pipeline
  3.1 Cortical Thickness Measurement Pipeline
    3.1.1 Preprocessing
    3.1.2 Registration
    3.1.3 Paired Inhomogeneity Correction
    3.1.4 Brain Extraction
    3.1.5 Cerebellum Mask
    3.1.6 Brain Tissue Classification
    3.1.7 Non-cortical Grey Matter Segmentation
    3.1.8 Thickness Estimation
4 Thickness Change Analysis
  4.1 The Proposed Thickness Method
    4.1.1 Correspondence Features
    4.1.2 Thickness Computation
    4.1.3 Cross-sectional Thickness Measurements
    4.1.4 Regional Analysis and Buried Sulci
    4.1.5 Effect of Brain Extraction
5 Experiments and Results
  5.1 Subjects and Imaging Data
  5.2 Experiments
    5.2.1 Scan-rescan Reproducibility
    5.2.2 Clinical Relevance
    5.2.3 Assessment of Thickness Measurements on AD Dataset
    5.2.4 Assessment of Thickness Measurements on MS Dataset
  5.3 Results
    5.3.1 Scan-rescan Reproducibility
    5.3.2 Distinguishing between AD and Normals over a Year
    5.3.3 Correlations with Clinical Scores on AD Dataset
    5.3.4 Thickness Changes at Global Level on MS Dataset over Two Years
    5.3.5 Thickness Changes at Global and Lobe Level in the MS Dataset over Two Years
    5.3.6 Comparison of Speed
6 Conclusions and Future Work
  6.1 Summary and Conclusions
  6.2 Limitations and Future Work
Bibliography

List of Tables

2.1 Classification of representative cortical-thickness measurement methods into four categories: cross-sectional, longitudinal, surface-based, and voxel-based.
4.1 List of parameters.
5.1 Scan-rescan reproducibility.
5.2 Mean cortical thickness change (%) over 1 year computed by 3 methods.
5.3 Spearman correlation coefficient (p-value) between changes in cortical thickness measurements using different methods.
5.4 Spearman correlation of thickness change with the change in MMSE score.
5.5 Mean cortical thickness change (%) over 2 years computed by our proposed method and FreeSurfer on the MRIs of 100 SPMS subjects.
5.6 Pearson correlation coefficient (r) between thickness measurements of FreeSurfer and our method over 2 years computed on the MRIs of 100 SPMS subjects.
5.7 Spearman correlation coefficient (ρ) of clinical scores over 2 years with % change in thickness measurements computed in 100 SPMS subjects using FreeSurfer and our method.

List of Figures

1.1 Human cerebral cortex.
2.1 Thickness measurement based on distance along the normal.
2.2 Thickness measurement based on distance along the shortest path.
2.3 Laplacian-based thickness measurement.
2.4 Schematic diagram of minimum line integral thickness measurement.
3.1 Block diagram of the implemented pipeline for measuring changes in cortical thickness.
3.2 An example of results after the preprocessing step.
3.3 An example of rigidly registering the two time points.
3.4 An example of a brain extraction mask overlaid on the input registered scan.
3.5 An example of extraneous tissue left by the brain extraction method in some slices.
3.6 A sample sagittal slice of the extracted brain.
3.7 Intensity histogram of the MR scan showing three peaks.
3.8 An example of results from probabilistic segmentation of the brain using FSL's FAST.
3.9 An example of results after removing non-cortical grey matter from the grey matter segmentation.
4.1 Measuring cortical thickness change at matched skeletal points.
4.2 Shape context computation and matching.
5.1 Power analysis with bootstrapping on the AD and normal data.

List of Abbreviations

AD: Alzheimer's disease
ADNI: Alzheimer's Disease Neuroimaging Initiative
BET: Brain Extraction Tool
BVF: brain volume fraction
CNS: central nervous system
CLADA: cortical longitudinal atrophy detection algorithm
CLASSIC: consistent longitudinal alignment and segmentation for serial image computing
CSF: cerebrospinal fluid
DiReCT: diffeomorphic registration based cortical thickness
DOF: degrees of freedom
EDSS: Expanded Disability Status Scale
FC: fuzzy correspondence
FSL: FMRIB Software Library
FAST: FMRIB's Automated Segmentation Tool
FLIRT: FMRIB's Linear Image Registration Tool
GM: grey matter
HD: Huntington's disease
ICS: inner cortical surface
LCT: Longitudinal Cortical Thickness
MCI: mild cognitive impairment
MLI: minimum line integral
MMSE: Mini-Mental State Examination
MR: magnetic resonance
MS: multiple sclerosis
MSFC: Multiple Sclerosis Functional Composite
N3: nonparametric nonuniform intensity normalization
OCS: outer cortical surface
PASAT: Paced Auditory Serial Addition Test
PPMS: primary-progressive multiple sclerosis
PRMS: progressive-relapsing multiple sclerosis
RRMS: relapsing-remitting multiple sclerosis
SPMS: secondary-progressive multiple sclerosis
T2LV: T2 lesion volume
WM: white matter

Acknowledgements

First of all, I offer my sincerest gratitude to my supervisor, Dr. Roger Tam, for his continuous support throughout my graduate studies and research. His encouragement and guidance throughout the research project, as well as his painstaking effort in proofreading the drafts, are greatly appreciated. I want to express my deep thanks to my committee members, Dr. Purang Abolmaesumi, Dr. Rafeef Abugharbieh and Dr. Anthony Traboulsee, for taking time out of their busy schedules to serve on my committee. Special thanks go to Dr. David Li, Dr.
Anthony Traboulsee, Dr. Yinshan Zhao and Andrew Riddehough for discussing ideas, answering questions, and providing feedback on my research. This thesis would not have been possible without their support and encouragement. I would also like to thank Ken Bigelow and Vilmos Soti for their excellent technical assistance in the lab. Further, I would like to thank my fellow labmates and friends in the MS/MRI Research Group: Lisa Tang, Tom Brosch, Youngjin Yoo and Chunfang Wang, for the stimulating discussions, memories and all the good times in the lab. Last but not least, special thanks go to my family for their endless love and support. Words cannot express how grateful I am to my mother and father for all of the sacrifices they have made on my behalf.

Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: Abbott, AstraZeneca AB, Bayer Schering Pharma AG, Bristol-Myers Squibb, Eisai Global Clinical Development, Elan Corporation, Genentech, GE Healthcare, GlaxoSmithKline, Innogenetics, Johnson and Johnson, Eli Lilly and Co., Medpace, Inc., Merck and Co., Inc., Novartis AG, Pfizer Inc., F. Hoffmann-La Roche, Schering-Plough, Synarc, Inc., as well as non-profit partners the Alzheimer's Association and the Alzheimer's Drug Discovery Foundation, with participation from the US Food and Drug Administration. Private sector contributions to ADNI are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Disease Cooperative Study at the University of California, San Diego.
ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of California, Los Angeles.

Chapter 1: Introduction

1.1 Cerebral Cortex

The human brain is made up of numerous nerve cells known as neurons. A neuron consists of a cell body, dendrites, and an axon. The dendrites and axons are fibrous parts of neurons that connect the cell bodies of neurons to form a network. These nerve fibers are covered with a fatty layer of tissue called the myelin sheath. The fat content makes the myelin appear white, and it forms the white matter (WM) of the brain. The grey matter (GM), on the other hand, largely consists of the cell bodies of neurons, which lack the fatty tissue and hence appear grey. The grey matter can be divided into two parts based on location: the cerebral cortex, which is the superficial part of the grey matter, and the deep grey matter, which is found deep within the brain. The cerebral cortex is the distinctive, deeply wrinkled outer sheet-like structure shown in Figure 1.1. Whilst white matter modulates the distribution of action potentials, acting as a relay and coordinating communication between different brain regions, grey matter is primarily associated with processing and cognition. The human cerebral cortex has about 20 billion neurons (Drachman, 2005), which carry out the highest levels of mental functioning and complex functions such as memory, language, abstraction, creativity, judgment, emotion and attention.

Figure 1.1: Human cerebral cortex. Images taken from the 20th edition of Gray and Lewis (1918).

Besides neurons, the cerebral cortex also contains support cells called neuroglia, which maintain homeostasis and provide support and protection to the neurons. Two types of support cells, astrocytes and microglia, are known to play an important role in neurodegenerative diseases. Astrocytes transport nutrients to neurons and play an important role in waste removal.
If, due to genetic or environmental factors, astrocytes malfunction, the waste disposal system is stalled. This results in the formation of clumps of proteins, and cell function is compromised. Microglia, on the other hand, are special immune cells that can detect damaged or unhealthy neurons, clear dead neurons and destroy foreign bodies through phagocytosis. Due to abnormal hyperactivity of microglia, the immune system can mistakenly attack and destroy healthy body tissue and damage the myelin sheath, which in turn affects neurons and neuronal transmission. The reason behind overactive microglia is not known.

1.2 Magnetic Resonance Imaging of Neurodegenerative Diseases

Neurodegenerative diseases such as Alzheimer's disease (AD) and multiple sclerosis (MS) are characterized by deterioration of neurons or their myelin sheath, disrupting the transmission of sensory information, movement control, and more. These diseases hamper various bodily activities and can be life-threatening. The causes of these diseases are still under investigation. At present, management involves detecting the disease early and treating the symptoms. An early diagnosis helps to control the disease and to provide a better quality of life to the patient. Magnetic resonance (MR) imaging has proven useful in detecting early structural changes of the brain in many neurological disorders. Examining structural changes in the brain helps in making an informed diagnosis about the presence or absence of neurological disease. A benefit of MR imaging is that it is non-invasive: unlike imaging methods such as X-rays, it does not use ionizing radiation but a strong magnetic field instead. No harmful side effects from the strong magnetic field are known, although MRIs cannot be acquired from people with pacemakers, and pregnancy and phobias can also preclude scanning.
Apart from these limitations, MR imaging has proven helpful in providing in-vivo assessment of the brain and can show tissue damage in the brain over time.

MR scans have been used to measure whole brain volume, lesion count, cortical thinning, etc. in subjects suffering from neurodegenerative disease. Thinning of the cortex has been linked to various neurological disorders such as Alzheimer's disease (AD) (Lerch and Evans, 2005; Li and Shen, 2010), frontotemporal dementia (Das et al., 2009), multiple sclerosis (MS) (Nakamura et al., 2011) and Huntington's disease (HD) (Rosas et al., 2008). In this thesis, I have developed software algorithms to detect subtle changes in the thickness of the cortex, with the aim of detecting these changes early and thereby assisting in the diagnosis of disease. We focus on measuring longitudinal change in cortical thickness as a potential imaging biomarker for AD and MS.

1.3 Overview of Alzheimer's Disease

Alzheimer's disease is the leading cause of dementia and is characterized by progressive impairment of cognitive function. Intellectual functions such as memory, comprehension, and speech deteriorate gradually over time. Cognitive decline also occurs in natural aging, but the rate of decline is much slower in normal aging than in Alzheimer's disease. Although current Alzheimer's treatments cannot stop the disease from progressing, they can temporarily slow the worsening of dementia symptoms. Biological markers for Alzheimer's disease based on non-invasive methods are highly desirable, not only for diagnosis but also for monitoring disease progression.

Over the last decade, many studies have measured whole brain atrophy, grey matter atrophy and white matter lesion load. These studies suggest a significant correlation between physical disability and whole brain atrophy.
However, cognitive decline shows only weak to no correlation with whole brain atrophy. Previous studies (Lerch and Evans, 2005; Li and Shen, 2010) have shown evidence of decreased cortical thickness in AD. However, the association of change in cortical thickness with change in clinical score on a large dataset is not well studied and needs further investigation. Since grey matter is known to be responsible for cognitive function in human beings, our motivation was to study how cortical thickness changes with the severity of the disease.

1.3.1 Clinical Score for Alzheimer's Disease

Mini-Mental State Examination

The Mini-Mental State Examination (MMSE) is a quantitative measure developed by Folstein et al. (1975) and is most commonly used to estimate the severity of cognitive impairment in adults with Alzheimer's disease and other forms of dementia. It is used to monitor cognitive decline in an individual over time, and to document the subject's response to treatment. The MMSE provides measures of orientation, immediate memory, short-term memory and language functioning. It is used only as a screening tool rather than a diagnostic tool.

1.4 Overview of Multiple Sclerosis

Multiple sclerosis is a chronic autoimmune disease that attacks the central nervous system (CNS), resulting in a wide range of symptoms. Four different courses of MS have been identified: relapsing-remitting multiple sclerosis (RRMS), secondary-progressive multiple sclerosis (SPMS), primary-progressive multiple sclerosis (PPMS) and progressive-relapsing multiple sclerosis (PRMS). The different courses are distinguished by how defined the periods of attack are and whether there is any recovery after an attack. RRMS is the most common disease course and is characterized by complete or nearly complete recovery between attacks; later, a significant percentage of RRMS patients convert to SPMS.
In the SPMS course, there is little or no recovery and the disease continues to worsen steadily.

Traditionally, MS was considered a white matter disease in which myelin degrades and nerve fibers are left exposed, causing problems in motor coordination and loss of senses. Much of the focus in MS until now has been on the RRMS course, with the majority of studies focusing on white matter lesions and whole brain atrophy. As SPMS is clinically and pathologically different from RRMS, subjects diagnosed with SPMS may experience different atrophy rates than those with RRMS. Therefore, we wanted to study structural changes in the brains of SPMS subjects, who have a more progressive form of MS. Moreover, white matter lesions do not singularly account for the neurological impairment associated with MS. To explain the loss of cognition, recent studies have started looking at the grey matter and suggest that significant grey matter atrophy is observed in MS (Fisher et al., 2008). Grey matter damage and atrophy have been shown to occur early in the course of RRMS (Chard et al., 2002) and may be related to disability. Some recent publications suggest that the first attack actually happens in GM (Schutzer et al., 2013). These attacks in GM take longer to notice, as symptoms associated with grey matter dysfunction in MS are less obvious than symptoms due to white matter dysfunction, but the structural signs can be present early on without any symptoms. However, it is still unclear whether grey matter atrophy occurs subsequent to white matter pathology or as a consequence of direct pathological mechanisms.

1.4.1 Clinical Scores for Multiple Sclerosis

Multiple Sclerosis Functional Composite

The most commonly used clinical score to quantify disability in MS is the Expanded Disability Status Scale (EDSS). However, the EDSS does not take cognitive dysfunction into account and places too much emphasis on the ability to walk. Also, the EDSS is non-linear in terms of progression.
The Multiple Sclerosis Functional Composite (MSFC) was developed by Fischer et al. (1999) to measure clinical dimensions that were not emphasized in existing scales, e.g. cognition. The MSFC comprises quantitative functional measures of three key clinical dimensions of MS: leg function/ambulation using the timed 25-foot walk (T25W), arm/hand function using the 9-hole peg test (9HPT), and cognitive function using the Paced Auditory Serial Addition Test (PASAT). The T25W is a quantitative mobility and leg function performance test; the average time taken to walk a clearly marked 25-foot course and back is used as part of the MSFC score. In the 9HPT, the patient is asked to pick up nine pegs one at a time and put them in nine holes as quickly as possible, then remove them one at a time and place them back into the box. This is repeated for both hands, and the total time to complete the task is recorded. This test quantitatively measures upper extremity function. Finally, the PASAT is a measure of cognitive function that specifically assesses auditory information processing speed and flexibility, as well as calculation ability. It was initially developed by Gronwall (1977) to monitor the recovery of patients who had sustained mild head injuries. Later, the test was adapted for use with MS patients by Rao et al. (1989), and the measure has been widely used in MS studies during the last decade.

1.5 Motivation

Evaluating structural changes of the cortex using MR scans has shown considerable potential not only for early diagnosis (Querbes et al., 2009) but also for evaluating the effectiveness of administered therapies (Reuter et al., 2012). The structural changes of the cortex include, but are not limited to, cortical thinning and grey matter lesions.
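As an aside on the MSFC introduced in Section 1.4.1: the composite is conventionally computed as the mean of three z-scores relative to a reference population, with the two timed tests transformed so that higher always means better (Fischer et al., 1999). The sketch below illustrates only that scoring convention; the reference means and standard deviations are invented placeholders, not clinical values.

```python
# Hedged sketch of the conventional MSFC scoring convention: the mean of
# three z-scores, with the 9HPT entered as a reciprocal and the T25W
# negated so that higher is better on every component. All reference
# population values below are placeholders for illustration only.

REF = {  # placeholder (mean, SD) for each component
    "inv_9hpt": (0.05, 0.01),   # 1 / (average 9HPT time in seconds)
    "t25w": (6.0, 2.0),         # average timed 25-foot walk in seconds
    "pasat": (45.0, 10.0),      # PASAT correct responses
}

def z(value, key):
    mean, sd = REF[key]
    return (value - mean) / sd

def msfc(avg_9hpt_s, avg_t25w_s, pasat_correct):
    z_arm = z(1.0 / avg_9hpt_s, "inv_9hpt")  # faster pegs => higher score
    z_leg = -z(avg_t25w_s, "t25w")           # slower walk => lower score
    z_cog = z(pasat_correct, "pasat")
    return (z_arm + z_leg + z_cog) / 3.0

score = msfc(avg_9hpt_s=20.0, avg_t25w_s=6.0, pasat_correct=45.0)
```

A subject performing exactly at the placeholder reference means scores 0; worse-than-reference performance on all three tests yields a negative composite.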
Earlier, whole brain atrophy showed potential as a biomarker, but it can be significantly affected by inflammatory edema; since most neuronal cell bodies are in GM, GM atrophy may be a more specific marker of irreversible tissue destruction than whole brain atrophy. Additionally, the association of change in cortical thickness with cognitive clinical outcomes is not well studied and needs further investigation. In this thesis, we focus on measuring the rate of cortical thinning using 3D T1-weighted MR scans.

Most past efforts have been dedicated to measuring cortical thickness at a single time point rather than its longitudinal changes over time. Methods that work on longitudinal data tend to have better scan-rescan reproducibility (Han et al., 2006), making them better suited for understanding disease progression than methods that use normalized cross-sectional measurements. A fast and sensitive method for measuring longitudinal changes in cortical thickness would be an extremely valuable tool. Very few longitudinal methods have been proposed so far. The longitudinal method that has been used in many studies is FreeSurfer, which is known for its high scan-rescan reproducibility. However, recent evaluations by Klauschen et al. (2009) and Eggert et al. (2012) suggest that it underestimates grey matter volume and overestimates white matter volume at all image qualities. Also, because FreeSurfer is a surface-based method, the smoothness constraints applied during cortical surface building may blunt its sensitivity to change. Furthermore, it is known to be computationally very expensive. The high computational cost is inherent in surface-based methods and is also true of other surface-based methods such as the cortical longitudinal atrophy detection algorithm (CLADA). The lack of a sensitive, fast and reliable method motivated us to develop a voxel-based method, an approach known to be more computationally efficient.
However, voxel-based methods lack consistency in thickness measurements compared to surface-based methods, so special measures have been taken in the design of the proposed method to address this issue. A recent analysis by Tustison et al. (2014) showed that a voxel-based method can achieve scan-rescan reliability similar to that of FreeSurfer while having higher accuracy in predicting the age and gender of test subjects. However, their proposed pipeline is computationally very expensive and takes even longer than FreeSurfer to process a subject. Our aim is to develop a longitudinal method whose reproducibility is comparable to that of surface-based methods such as FreeSurfer and whose computational efficiency and sensitivity to change over time are comparable to those of voxel-based methods.

1.5.1 Challenges

There are several key challenges involved. The typical resolution of a brain MRI used to evaluate brain atrophy is 1 mm × 1 mm × 1 mm, which is coarse relative to the thickness of the cortex, which in an adult brain has a mean thickness of about 2.5 mm but varies greatly (1 to 4.5 mm) depending on location (von Economo, 1929). Due to the limited resolution of typical MR imaging, the intensity distinction between two tissue types is blurred, so a voxel at the border between two tissues typically represents more than one tissue type. This is called the partial volume effect. In GM analysis, the partial volume effect is large relative to the thickness of the cortex. Further, accurately distinguishing two tissue types is made difficult by the presence of intensity inhomogeneity, or bias field, in the MR scan, which causes the intensity distributions of the different tissue types to overlap. In addition, the annual rate of change induced by disease is typically only a small fraction (2.22% to 3.70% for AD; Clarkson et al., 2011) of the thickness, so this change would largely be observed as an intensity shift in the boundary voxels.
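To make the scale of this problem concrete, a back-of-the-envelope calculation using only the figures quoted above (2.5 mm mean thickness, a 2.22% to 3.70% annual change, 1 mm voxels) shows that the expected annual change is far smaller than a voxel:

```python
# Illustrative arithmetic only, using the figures quoted in the text.
mean_thickness_mm = 2.5            # typical adult mean cortical thickness
annual_change_pct = (2.22, 3.70)   # annual change range in AD (Clarkson et al., 2011)
voxel_mm = 1.0                     # typical isotropic voxel size

changes_mm = [mean_thickness_mm * p / 100.0 for p in annual_change_pct]
fractions_of_voxel = [c / voxel_mm for c in changes_mm]

# Roughly 0.06 to 0.09 mm per year, i.e. less than a tenth of a voxel,
# which is why the change appears mainly as partial-volume intensity
# shifts in boundary voxels rather than as whole-voxel changes.
```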
Also, due to the convoluted shape of the cortex, there is no standard technical definition of cortical thickness, and different methods define thickness differently. For example, some methods use the normal distance from one surface to the other, while others use the closest distance between the two surfaces. The different definitions result in different thickness measurements, and the absence of a gold standard and ground truth measurements makes validating a new method challenging. Finally, the run time of longitudinal methods can extend to several hours or even days, which can be a limiting factor when working with very large datasets. The currently most widely used method is computationally expensive, which makes it less preferable for use on a large dataset unless one has access to powerful computers with large memory. Only a few recent studies have reported results on datasets large enough to generalize to a larger population: Li and Shen (2010) reported on 55 subjects (40 with mild cognitive impairment and 15 normals), Das et al. (2009) reported results on phantom data and 20 real scans of elderly patients, and Nakamura et al. (2011) reported results on different regions from 5 subjects for the sensitivity test and 9 subjects for reproducibility. In our opinion, Reuter et al. (2012) (136 dementia subjects) and Tustison et al. (2014) (1205 images from four datasets) are the only studies to have used a large dataset.

1.6 Thesis Contributions

We propose a new voxel-based method, which we call Longitudinal Cortical Thickness (LCT), for measuring changes in cortical thickness between pairs of longitudinal scans. The proposed method focuses on regularizing the computation of anatomical correspondences across time.
Rather than using a general alignment approach such as deformable registration, the proposed method matches only points in the cortex, using three intuitive positional and shape features that are defined on a cortical skeleton computed on each scan to provide a local frame of reference. Most earlier methods perform deformable registration followed by segmentation, whereas we perform segmentation followed by cortex-specific alignment. This approach differs from existing methods in that, instead of deformably registering the whole brain volume, we register the cortex only, thereby increasing sensitivity to cortical changes, which tend to be more subtle than those in other regions. Further, to improve the stability of the matches, we introduce the concept of fuzzy correspondence, in which each point in one scan can partially match several points in the other scan. Fuzzy correspondence has been used in previous work to improve the robustness of deformable registration (Shen, 2009), but not in the context of cortical thickness measurement. In this approach, each skeletal point in one scan can have up to three partial matches in the second scan, which improves the stability of the matching process. The fuzzy matching between time points not only compensates for imperfect alignment but also acts as an alternative to the regularization step of deformable registration. After computing the matches between the two skeletons, the thickness measurements are performed by integrating GM probabilities along common directional vectors computed using both scans. Further, the proposed method calculates thickness on a probabilistic rather than a binary GM segmentation so as to capture subtle changes in the partial volume region of the GM, which helps to increase the sensitivity to change of our method. We use the normal distance as it is a visually more intuitive distance measure.
The method is validated by applying it to a scan-rescan reproducibility dataset and two larger clinical sets of 100 subjects each, and by comparing the results with those produced by two state-of-the-art methods. The first, FreeSurfer (Reuter et al., 2012), is a longitudinal surface-based method that is widely used in clinical studies; its general availability and robustness in measuring thickness make it a useful tool in clinical experiments. The other, Minimum Line Integral (Aganj et al., 2010), is a cross-sectional voxel-based method that also uses a skeletal line to compute thickness. The comparison with the second method is done to demonstrate the benefits of establishing fuzzy correspondence. Clinically, we show the utility of cortical thickness as a potential imaging biomarker for AD and SPMS. To the best of our knowledge, this is the first longitudinal study to have examined the association between percentage change in cortical thickness and percentage change in clinical scores in SPMS.

The rest of this thesis is organized as follows: Chapter 2 gives a description of previously published work on cortical thickness that is related to our work. Chapter 3 describes the standard pipeline used by our method to obtain longitudinal measurements. In Chapter 4, our proposed voxel-based method for calculating changes in cortical thickness is described. A detailed description of experiments and results on three different datasets is provided in Chapter 5. Finally, in Chapter 6, conclusions drawn from our experiments and future work are discussed.

Chapter 2

Related Work

2.1 Literature Review

Several methods have been proposed in the literature to measure cortical thickness. These methods can be categorized as cross-sectional or longitudinal depending on the number of time points they work on.
They can also be categorized as voxel-based or surface-based methods; the latter classification is based on how cortical thickness is defined and computed. These classifications are illustrated with examples in Table 2.1.

2.1.1 Cross-sectional Methods vs. Longitudinal Methods

Cross-sectional methods make measurements on individual time points independently, whereas longitudinal methods take multiple time points into consideration. When applied to longitudinal studies, cross-sectional methods tend to have larger errors relative to the magnitude of the changes that occur across time (Nakamura et al., 2011). Because each time point is analyzed independently, cross-sectional methods are largely dependent on the accuracy of the GM segmentation method used, and are more sensitive to variations in the segmentation caused by noise, partial volume and other artifacts particular to each scan. Conversely, longitudinal methods are designed to improve the consistency of thickness measurements at corresponding locations, with the aim of increasing sensitivity to subtle changes and hence being more accurate in measuring thickness changes.

Table 2.1: Classification of representative cortical-thickness measurement methods into four categories: cross-sectional, longitudinal, surface-based, and voxel-based.

                     Surface-based             Voxel-based
    Cross-sectional  Dale et al. (1999)        Hutton et al. (2008)
                     Lerch and Evans (2005)    Acosta et al. (2009)
                     Han et al. (2004)         Aganj et al. (2010)
                                               Scott et al. (2009)
    Longitudinal     Nakamura et al. (2011)    Das et al. (2009)
                     Wang et al. (2012)        Li and Shen (2010)
                     Reuter et al. (2012)

One of the main steps in most longitudinal processing methods is to align the brain anatomy and establish correspondence using deformable registration. However, using deformable registration for alignment has some disadvantages.
Most longitudinal methods that use deformable registration perform measurements on the aligned images assuming that registration has provided perfect alignment across time, but in reality, registration achieves a compromise between regularization and correspondence. Registration parameters that work well for one patient may not work well for another, and a major challenge is to find a set of parameter values that can be fixed for an entire study without biasing against certain patients, such as those experiencing larger tissue changes. This problem is exacerbated by the fact that many deformable registration methods are available, and the parameters that need to be set can vary between them. Some methods register follow-up images to baseline (Han et al., 2004; Li and Shen, 2010), which can introduce bias because different time points are treated differently, as highlighted in Reuter et al. (2012). Bias often goes unnoticed due to large measurement noise, imprecise methods, insufficient testing or small sample sizes, and failing to find the bias does not mean it is not present.

Das et al. (2009) proposed a registration-based longitudinal method for measuring cortical thickness called diffeomorphic registration based cortical thickness (DiReCT). A diffeomorphic mapping between the Inner Cortical Surface (ICS) and Outer Cortical Surface (OCS) is computed, which provides one-to-one correspondence between the GM-WM interface and the estimated GM-cerebrospinal fluid (CSF) interface. Thickness is then calculated as the distance between these two interfaces, and the one-to-one correspondence makes the thickness computation symmetric. However, as suggested by Nakamura et al. (2011), a potential bias exists in that the cortical model from the first image is used for subsequent images; this can be overcome by using an unbiased template. Also, DiReCT uses a binary segmentation to fit the WM-GM surface.
An estimate of the grey matter probability in the thickness computation would be preferable for more sensitive cortical thickness measurements because of the partial volume effect, but it is unclear how the ICS and the OCS would be defined with a probabilistic segmentation.

Nakamura et al. (2011) proposed a surface-based longitudinal method called CLADA. The method is designed to work on lower-resolution images and works on all time points simultaneously. It is faster than FreeSurfer (20-30 hours) but still takes approximately 15 hours, without manual editing, to process six time points of a subject. The algorithm combines images from all time points and creates a single deformable model for each subject, consisting of two explicit surfaces, which are then deformed to fit each individual time point. The method gains speed by using a multiresolution remesh function and a method for fast surface intersection detection.

Li and Shen (2010) proposed a 4D cortical thickness measuring method using a 4D segmentation method called consistent longitudinal alignment and segmentation for serial image computing (CLASSIC), which is able to capture subtle longitudinal changes and is more resistant to noise. However, it was found that thickness maps generated by CLASSIC are not always temporally consistent (Wang et al., 2012), and in some instances the thickness changes were not as smooth and consistent as expected. The authors later modified this method to use a surface-based approach based on coupled level sets (Wang et al., 2012).

2.1.2 Surface-based Methods vs. Voxel-based Methods

In addition to cross-sectional and longitudinal, cortical thickness measurement methods can also be categorized as surface-based or voxel-based.
Voxel-based methods (e.g., Das et al., 2009; Li and Shen, 2010; Aganj et al., 2010) perform the thickness measurements directly on the voxels of the GM probability map, whereas surface-based methods (e.g., FreeSurfer by Reuter et al., 2012, and CLADA by Nakamura et al., 2011) typically create two triangulated meshes for each scan, one for the ICS and one for the OCS. The meshing algorithm is typically regularized to enforce smoothness and to ensure correct topology, which reduces the impact of noise and minor segmentation errors. The general robustness of FreeSurfer (Dale et al., 1999), a freely available surface-based method, has made it a valuable research tool. However, surface reconstruction is computationally very expensive, especially in enforcing correct topology. The detection and prevention of self-intersections, which can easily develop in a deforming surface, are time-consuming and add further to the computational complexity. For example, CLADA takes 12-15 hours (Nakamura et al., 2011) and FreeSurfer takes 20-30 hours (Reuter et al., 2012) to process a subject. Further, these methods are inherently required to make a binary decision on what is inside or outside the cortex and do not preserve the probabilistic information that is available from many GM segmentation algorithms. In contrast, voxel-based methods can preserve the probabilistic information fully and are generally much faster. However, a recent comparison suggests that they are less accurate and reproducible than surface-based methods (Clarkson et al., 2011), due to the limited resolution of the voxel grid and mis-segmentation. Our aim is to develop a longitudinal voxel-based method that has the reproducibility of surface-based methods but takes considerably less computational time.

2.1.3 Definitions of Cortical Thickness

In the literature, different ways to measure cortical thickness have been proposed. Each method defines the correspondence between the ICS and OCS differently.
There is no universal technical definition of cortical thickness. While thickness is generally defined as the distance between corresponding points on the ICS and the OCS, the convoluted sheet-like shape of the cortex has relatively few points with distinct features that naturally define one-to-one matches between the ICS and OCS. As a result, different computational approaches to defining thickness are possible, and they can lead to considerably different measurements (Lerch and Evans, 2005). For example, Das et al. (2009) use deformable registration between the ICS and OCS, Aganj et al. (2010) and Li and Shen (2010) integrate GM probabilities through skeletal points to find the minimum at each point, and Reuter et al. (2012) use the average of the shortest distance from the ICS to the OCS and vice versa. We provide a brief summary of the different thickness definitions proposed in the past below.

Distance along the Normal

This method measures cortical thickness along the normal direction from one surface to the other. The first step involves extraction of the two surfaces: in surface-based methods the surfaces are represented as meshes, and in voxel-based methods they are represented by the boundary voxels. Normals are computed from one surface to the other, and thickness is measured as the distance along the normal between the surfaces. This way of measuring thickness is used in MacDonald et al. (2000).

Figure 2.1: Thickness measurement based on distance along the normal. Thickness is calculated at every point P on surface S along the normal direction. Thickness measurements are different when measured from surface S as compared to when measured from surface S'.

The advantage of this method is that it measures thickness in a straight line, which makes it easier to validate against clinically motivated priors. However, it requires a hard segmentation, as the surfaces need to be defined in order to obtain the normal direction.
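A minimal 2-D sketch of measuring along the normal (toy geometry, not from the thesis) also shows that the result depends on which surface the measurement starts from. Here S is the x-axis and S' is a slanted line:

```python
import math

# Toy surfaces: S is the x-axis; S' is the line y = c + m * x.
m, c = 0.5, 2.0

# From P = (0, 0) on S, the normal of S is (0, 1); the vertical ray meets
# S' at y = c + m * 0:
t_from_S = c + m * 0.0

# From P' = (0, c) on S', the unit normal of S' pointing toward S is
# (m, -1) / sqrt(1 + m^2); the ray P' + t * n reaches y = 0 when
# c - t / sqrt(1 + m^2) = 0, i.e. at:
t_from_S_prime = c * math.sqrt(1 + m * m)

# The ray from P' lands on S at x = m * c, not at the original P = (0, 0):
# the two measurements do not share a mutual correspondence.
```

With m = 0.5 the two values are 2.0 and about 2.24, so the "thickness" differs by roughly 12% purely because of the choice of starting surface.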
Furthermore, as shown in Figure 2.1, this method is not symmetric, i.e., the thickness measured from a point P on surface S is not the same as that measured from the corresponding point P' on the other surface S' in the reverse direction. This is because there is no mutual correspondence (points P and P'' are different).

Distance along the Shortest Path

Shortest distance is another commonly used method, which searches from every vertex (voxel) of one surface across the opposite surface and measures the distance to the closest vertex (voxel) (Fischl and Dale, 2000). The measurements are performed for all the vertices (voxels) on a surface, and an average of these measurements is used to compute the final thickness value for the whole brain. Similar to the normal distance, this approach has some advantages and some disadvantages. It measures thickness in a straight line, so it can also be validated using clinical priors. However, this approach again requires the two surfaces (S and S') to be well defined, and needs a binary segmentation to determine these surfaces. Also, using the shortest distance underestimates the true thickness.

Figure 2.2: Thickness measurement based on distance along the shortest path. Thickness is calculated at every point P on surface S along the shortest distance. Thickness measurements are different when measured from surface S as compared to when measured from surface S'.

Additionally, thickness values are slightly different when measured from a point P on surface S as compared to when measured from a point P' on surface S', due to the lack of mutual correspondence, as highlighted in Figure 2.2. To overcome this, Dale et al. (1999) proposed a slight variation of the method: thickness is measured from both surfaces, and the mean of these two measurements is used.

Laplacian Method

This method was proposed by Jones et al. (2000) in order to overcome some of the limitations of the earlier methods.
The main advantage of this method is that it is symmetric, i.e., there is one-to-one correspondence between the points on the ICS and the OCS. The method works by assuming that the two surfaces are at different electric potentials Ψ, and calculates the thickness along the direction of the electric field present between the surfaces due to the different potentials. The electric field is calculated by solving the Laplace equation (Equation 2.1).

Figure 2.3: Laplacian-based thickness measurement. Thickness is measured along the streamline after solving the Laplace equation. Thickness measurements are symmetric.

Figure 2.3 shows a two-dimensional example of how Laplace's equation determines thickness. The surface S is assumed to be at zero potential (Ψ = 0) and surface S' at 10,000 V (Ψ = 10,000). The choice of voltages has no effect on the profile of the electric field, and hence on the thickness measurements, as long as the two voltages are different. The advantage of this method is that the electric field lines are known to be smooth and non-intersecting, resulting in symmetric thickness measurements. Mathematically, Laplace's equation is a second-order partial differential equation for a scalar field Ψ enclosed between two interfaces:

    ΔΨ = ∂²Ψ/∂x² + ∂²Ψ/∂y² + ∂²Ψ/∂z² = 0        (2.1)

The one-to-one correspondence between the ICS and OCS makes the measurement symmetric. Although the method was able to overcome some limitations of the earlier methods and has been used in numerous studies, including Jones et al. (2000), Hutton et al. (2009) and Li et al. (2012a), it has some drawbacks as well. It does not measure thickness in a straight line but along a curved field line, which makes the computed measurements less intuitive and hard to verify by visual inspection. Further, this method still requires a binary segmentation and does not use the probabilistic segmentation for the thickness calculation.

Minimum Line Integral

This method was proposed by Aganj et al.
(2010) to overcome the limitation of not being able to use the probabilistic segmentation for thickness calculation. Thickness measurements are performed on a probabilistic map that contains the probability of each voxel belonging to the grey matter. A skeletal surface of the GM is computed, positioned in the middle of the GM, far from the inner and outer cortical surfaces. Straight lines are drawn in all directions from every point on the skeletal surface, and probabilities are added along these lines to give an estimate of thickness, as shown in Figure 2.4. Cortical thickness is then measured, for every point on the skeletal surface, by minimizing these line integrals over the probability map of the grey matter.

Figure 2.4: Schematic diagram of minimum line integral thickness measurement. Thickness is measured as the minimum of the line integrals drawn in different directions from a skeletal point. (a) Thickness measurements on a binary probability map. (b) Thickness measurements on a continuous probability map.

Li and Shen (2010) extended this concept to 4D and calculated the minimum of line integrals across time points. However, this method suffers from the limitation that using the shortest distance greatly underestimates the true thickness change.

Chapter 3

Pipeline

3.1 Cortical Thickness Measurement Pipeline

A fully automated pipeline was implemented using the Insight Segmentation and Registration Toolkit (Ibanez et al., 2003) to estimate cortical thickness changes. Figure 3.1 illustrates the basic steps for longitudinal cortical thickness evaluation in a flow diagram. The basic processing flow is the following: 3D T1 MR scans are corrected for intensity inhomogeneity, and the temporal scans are then symmetrically registered using 6 degrees of freedom to a mid-point. Non-cerebral tissues such as the skull, cerebellum and brain stem are removed from the region of interest. The brain is then classified into grey matter, white matter and cerebrospinal fluid.
Cortical thickness change is then estimated using the method described in Chapter 4. Automatic grey matter labeling techniques can be added to obtain regional thickness change statistics.

3.1.1 Preprocessing

The first step in the pipeline is to normalize the data. Since the data are collected from various sites and using different scanners, this preprocessing step reduces the variability in the data. The scans were rotated so that they are all oriented axially. Some of the scans in the dataset contain slices from the neck region, and these extra slices sometimes interfere with the extraction of the brain; in those scans, the neck slices were therefore manually removed. The scans were then resized and resampled to have the same voxel dimensions (i.e., 1 × 1 × 1 mm³).

Figure 3.1: Block diagram of the implemented pipeline for measuring changes in cortical thickness: resampling and bias field correction, rigid registration, brain extraction and cerebellum removal (using brain, skull and cerebellum masks), brain tissue classification, non-cortical grey matter removal, and thickness analysis.

Next, the non-parametric non-uniformity normalization (N3; Sled et al., 1998) algorithm is used for inhomogeneity correction in each image. Almost all scans obtained from MR machines have this problem. The inhomogeneity, due to induced currents and spatial inhomogeneity of the excitation field, depends on the geometry and electrical properties of the subject and cannot be corrected accurately internally by the scanner; it can only be estimated using post-processing methods such as N3.

Figure 3.2: An example of results after the preprocessing step. In the top row are the input slices, which may have different bias, size and resolution; in the bottom row are the resampled and intensity-inhomogeneity-corrected slices using N3 bias correction. Column (a) represents time point 1 and column (b) represents time point 2.
The performance of the brain segmentation methods relies largely on this step. N3 is a fully automatic iterative method that operates on 3-D volumetric images. It assumes the bias field to be multiplicative and spatially smooth in nature, and in each iteration it tries to maximize the high-frequency component present in the noisy image. Only two parameters need to be decided: the smoothness of the estimated field and the number of iterations. The number of iterations is the more sensitive of the two, as it determines the trade-off between accuracy and convergence rate. In our experiments, we used the default value of 0.15 for the smoothness field and 8 for the number of iterations. Each scan is corrected separately for intensity inhomogeneity. The results on one of the slices are shown in Figure 3.2.

3.1.2 Registration

The longitudinal scans were then registered using rigid registration with six degrees of freedom (DOF), as shown in Figure 3.3. As mentioned earlier, the changes in the brain over the period of study are minimal, and rigid registration is sufficient to match the intra-subject scans (Han et al., 2006). The similarity measure used in the rigid registration was mutual information. The generalized mutual information I(X;Y) is given by:

    I(X;Y) = H(X) + H(Y) − H(X,Y)

    H(X) = −Σ_{x∈X} P(x) log₂ P(x)

    H(X,Y) = −Σ_{x∈X} Σ_{y∈Y} P(x,y) log₂ P(x,y)

where H(X) is the entropy of X, H(X,Y) is the joint entropy of the two random variables X and Y, P(x) is the probability of the occurrence of x, and P(x,y) is the probability of x and y occurring together. We used the Powell-Brent method for optimization, and B-spline interpolation to resample the scans according to the transformation matrix. Traditionally, in image registration, one image is considered stationary and the other image is moved to match the first image.
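The generalized mutual information defined above can be estimated from a joint histogram of the two images' intensities. A minimal sketch (1-D random signals standing in for image intensities; the bin count and signals are arbitrary illustrative choices, not the thesis's implementation):

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from a joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1)   # marginal P(x)
    py = pxy.sum(axis=0)   # marginal P(y)
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))  # entropy in bits
    return h(px) + h(py) - h(pxy)

rng = np.random.default_rng(0)
a = rng.normal(size=5000)
b = a + 0.1 * rng.normal(size=5000)  # strongly related to a (well "aligned")
c = rng.normal(size=5000)            # unrelated to a (badly "aligned")
```

A registration optimizer would prefer the transformation that yields the higher mutual information, here the pairing (a, b) over (a, c).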
Since only one image is going to change, this may result in interpolation-related bias (Reuter and Fischl, 2011; Yushkevich et al., 2010). In order to avoid bias due to this asymmetry, both scans were warped to a mid-point using a symmetric transformation.

Figure 3.3: An example of rigidly registering the two time points. At the top are the preprocessed input images and at the bottom are the images after the registration step. Column (a) represents time point 1 and column (b) represents time point 2. Both scans are moved to the mid-point so that there is no interpolation bias.

3.1.3 Paired Inhomogeneity Correction

N3 was used to correct intensity inhomogeneity in each time point separately, so a differential bias may still exist between the two time points of a subject. A difference in intensity inhomogeneity between the longitudinal scans can affect the atrophy measurement significantly. A paired correction as proposed by Lewis and Fox (2004) was used to remove the differential bias field. Their method does not make any assumptions about the nature of the bias; instead, the bias is estimated from the difference image. In the absence of bias, the difference image should ideally contain only noise and atrophy. Both atrophy and noise are high-frequency components, whereas the differential bias is a slowly varying, low-frequency component in the difference image. The high-frequency components are removed using median filtering, which erases structures of size less than half the kernel, leaving the differential bias field. In our experiments, a kernel size of 3 was used.

3.1.4 Brain Extraction

Next, the brains are separated from the skull and other non-brain structures using two methods: the Brain Extraction Tool (BET; Smith, 2002) and the skull stripping program in FreeSurfer (mri_watershed; Ségonne et al., 2004).
The Brain Extraction Tool (BET) is part of the FMRIB Software Library (FSL) toolkit (Jenkinson et al., 2012). BET works by fitting a deformable mesh to the brain, while mri_watershed uses a hybrid watershed and deformable surface approach. The masks generated from this step are shown on one of the slices in Figure 3.4, overlaid on the input scan.

Figure 3.4: An example of the brain extraction mask overlaid on the input registered scan at time point 1 (a) and at time point 2 (b). Brain extraction is done using FSL's BET and FreeSurfer's mri_watershed. Note that the cerebellum is also included in the mask.

More specifically, BET uses a threshold to collect approximate statistics such as the centre of gravity and the radius of the brain. The brain scan is then initialized with a spherical mesh at the calculated centre of gravity, with an initial radius of half the brain size. Each vertex of the mesh is then iteratively deformed outwards to fit the brain surface, under the constraint that the mesh remains smooth. BET works well on most of the dataset with default parameters, but on some scans it leaves extraneous tissue, as shown in Figure 3.5(b). We therefore tune the '-f' parameter (fractional intensity threshold) of BET automatically to find the value that gives a good brain extraction. A golden-section search was performed on this parameter between bounds of 0.2 and 0.5, and the best result was chosen based on comparison with a skull-stripped atlas: the root mean square error between the segmented brain and the atlas was computed, and the parameter value with the minimum error was selected.

The mri_watershed program is based on a hybrid approach that combines a watershed algorithm, atlas-based priors and a deformable surface model. The watershed algorithm builds an initial estimate of the brain, which is refined by the atlas-based priors and the deformable surface model.
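Returning to the automatic tuning of BET's '-f' parameter described above, the golden-section search can be sketched as follows. A quadratic stands in for the real objective (running BET at a given threshold and computing the RMSE against the skull-stripped atlas); the bounds 0.2 and 0.5 are those used in the pipeline, while the optimum at 0.34 is an invented illustrative value:

```python
import math

def golden_section_min(f, lo, hi, tol=1e-3):
    """Golden-section search for the minimizer of a unimodal f on [lo, hi]."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    c = hi - inv_phi * (hi - lo)
    d = lo + inv_phi * (hi - lo)
    while hi - lo > tol:
        if f(c) < f(d):            # minimum lies in [lo, d]
            hi, d = d, c
            c = hi - inv_phi * (hi - lo)
        else:                      # minimum lies in [c, hi]
            lo, c = c, d
            d = lo + inv_phi * (hi - lo)
    return 0.5 * (lo + hi)

# Stand-in objective: pretend the RMSE against the atlas is minimized at f = 0.34.
mock_rmse = lambda f: (f - 0.34) ** 2

best_f = golden_section_min(mock_rmse, 0.2, 0.5)
```

Golden-section search needs only function evaluations and assumes the error is unimodal in the threshold, which keeps the number of (expensive) BET runs small.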
This method assumes that the brain surface is a smooth manifold with a similar global shape across subjects. This assumption is used to validate and potentially correct the segmentation using a statistical atlas created from a set of accurately segmented brains. The deformable model incorporates geometric information such as curvature, which removes extraneous regions, while the atlas-based shape constraints recover potentially missing parts.

We found that both brain extraction methods occasionally leave extraneous tissue, as shown in Figure 3.5, and that intersecting the two masks largely resolves this problem. A brain mask is produced by each program, and the two masks are combined using a logical AND operation.

Figure 3.5: An example of extraneous tissue left by the brain extraction methods in some slices by (a) FreeSurfer's mri_watershed and (b) FSL's BET. This can be automatically corrected in most cases by taking a logical AND of the masks produced by the two methods, as shown in (c).

3.1.5 Cerebellum Mask

Since we are only interested in measuring cortical thickness, other non-cortical structures in the brain, such as the cerebellum and brain stem, need to be removed. These structures are masked out using an atlas-based approach. The cerebellum and the brain stem were labelled manually on one representative of the population: an expert user was asked to trace the boundary of the cerebellum and the brain stem, the traced region was filled using an automatic tool, and a binary mask was generated. The mask was then dilated using a circular structuring element of radius one, to accommodate the variations in the dataset. The original representative scan was then affinely registered to the current subject scan, with the representative scan as the moving image and the current subject scan fixed.
The transformation thus obtained was applied to the binary cerebellum mask, and the registered mask is used to mask out the non-cerebrum structures, as shown in Figure 3.6. The affine registration was performed using FMRIB's Linear Image Registration Tool (FLIRT), which is part of the FSL toolkit; more details on the registration method can be found in Jenkinson et al. (2002). As suggested in the literature, the atlas rather than the MR image was resampled, to prevent interpolation artifacts. Only one mask is created per subject and applied to all the time points of that subject. The brain masks from the previous step for each of the longitudinal paired scans were combined into one mask using a logical OR operation. The cerebellum mask was then taken out of the combined brain mask using a logical NOT operation, and the final mask was applied to obtain the cerebrum from each time point. Masking of the brain structure was done only once.

Figure 3.6: A sample sagittal slice of the extracted brain, overlaid with the cerebellum mask obtained via atlas segmentation, at time point 1 (a) and time point 2 (b).

3.1.6 Brain Tissue Classification

Segmentation is one of the most important steps in the pipeline, as the thickness measurements depend largely on it. Most automatic segmentation methods provide probabilistic segmentations along with binary ones. This is desirable because, at the current MR scan resolution, the boundaries are not clearly defined. Each tissue type, corresponding to CSF, grey matter and white matter, creates a peak in the intensity histogram of the scan, as shown in Figure 3.7.

Figure 3.7: Intensity histogram of the MR scan showing three peaks corresponding to CSF, grey matter and white matter, respectively.

These peaks can be modelled by separate Gaussian functions, but the presence of the partial volume effect and bias field distortion prevents simple Gaussian mixture models from giving robust results.
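To illustrate, a plain EM fit of a three-component Gaussian mixture to synthetic intensities with peaks like those in Figure 3.7 can be sketched as follows. All numbers are invented, and this deliberately lacks the spatial (hidden Markov random field) prior that FAST adds on top of the mixture model:

```python
import numpy as np

def fit_gmm_1d(x, mu, sigma, w, iters=100):
    """Plain EM for a 1-D, 3-class Gaussian mixture (no spatial prior)."""
    for _ in range(iters):
        # E-step: responsibility of each class for each intensity.
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means and deviations.
        n = resp.sum(axis=0)
        w = n / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return mu, sigma, w, resp  # resp acts as a probabilistic segmentation

rng = np.random.default_rng(0)
intensities = np.concatenate([rng.normal(50, 8, 3000),    # CSF peak
                              rng.normal(110, 8, 4000),   # GM peak
                              rng.normal(160, 8, 3000)])  # WM peak
mu, sigma, w, resp = fit_gmm_1d(intensities,
                                mu=np.array([40.0, 100.0, 180.0]),
                                sigma=np.array([20.0, 20.0, 20.0]),
                                w=np.array([1 / 3, 1 / 3, 1 / 3]))
```

The per-voxel responsibilities are the intensity-only analogue of FAST's probabilistic output; on real scans, partial volume and bias field blur the peaks, which is exactly why FAST adds the spatial constraint discussed next.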
One widely used method, FMRIB's Automated Segmentation Tool (FAST), proposed by Zhang et al. (2001), combines spatial information with the histogram distribution. It is also based on a Gaussian mixture model but additionally uses a hidden Markov random field to improve the results: a 3-dimensional hidden Markov random field imposes local spatial constraints on the segmentation process, penalizing discrepancies in tissue classification for isolated voxels while allowing for contiguous areas of change. Further, FAST provides both binary and probabilistic segmentations of the tissues into the three classes. In our pipeline, the probabilistic results were used, as they provide more sensitive thickness measurements by estimating the partial volume effect. We assumed that the GM probabilities obtained from FAST are close to one in the middle of the cortex and decrease monotonically on either side due to the presence of partial volume; this was verified visually at random locations of the cortex on a variety of scans.

Figure 3.8: An example of results from probabilistic segmentation of the brain using FSL's FAST. The first row shows the results at time point 1 for (a) CSF, (b) grey matter and (c) white matter. The second row shows the results at time point 2 for (d) CSF, (e) grey matter and (f) white matter.

3.1.7 Non-cortical Grey Matter Segmentation

The grey matter obtained from the segmentation includes both cortical and non-cortical grey matter. Since we are measuring only cortical thickness, the non-cortical grey matter was masked out from the segmentation results using prior spatial and size information. When viewed in the axial plane, the non-cortical grey matter lies close to the centre of the slice, deep inside the cortex, as shown in Figure 3.8.
The non-cortical grey matter removal is based on the observation that non-cortical grey matter cannot be reached from outside the brain without first crossing the cortical grey matter. For each slice, points were selected around the skull, outside the cortex, spread so that they encapsulate the whole brain.

Figure 3.9: An example of the result after removing non-cortical grey matter from the grey matter segmentation at time point 1 (a) and time point 2 (b).

These points were then moved iteratively towards the center until they encountered a grey matter segment. All the grey matter segments on which the points landed were retained, and any small disconnected structures within were rejected. Since non-cortical grey matter is located inside the cortical surface, none of the points can reach it, so these segments are easily rejected. Each axial slice is processed separately. A mask containing the rejected grey matter was created for each time point separately, and the two masks were combined using a logical OR operation. The combined mask was then applied to each of the scans to obtain only the cortical grey matter, as shown for one of the slices in Figure 3.9.

3.1.8 Thickness Estimation

Finally, the proposed method for measuring changes in cortical thickness was applied to the probabilistic grey matter. Along with measuring changes in cortical thickness, our method also provides cross-sectional thickness values at each time point. The benefits of the proposed method over existing established methods are shown in Chapter 5. The proposed method is discussed in detail in the next chapter.

Chapter 4

Thickness Change Analysis

4.1 The Proposed Thickness Method

In this thesis, we have proposed a new longitudinal method called Longitudinal Cortical Thickness (LCT) to measure changes in cortical thickness.
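The slice-wise removal step described above can be sketched as follows. This is an illustrative reimplementation under our own assumptions (probes cast inward from a circle of directions around the slice centre, rather than from explicitly selected points around the skull), not the pipeline's actual code:

```python
import numpy as np
from scipy import ndimage

def remove_noncortical_gm(gm_slice, n_probes=360):
    """Keep only GM components reachable from outside the brain.

    For each of n_probes directions, a probe starts outside the slice and
    moves toward the centre until it lands on a GM voxel; the connected
    component it lands on is retained.  Components never reached (deep,
    non-cortical GM) are rejected, mirroring the observation that such
    GM cannot be reached without crossing the cortex first."""
    labels, _ = ndimage.label(gm_slice)
    h, w = gm_slice.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    keep = set()
    r0 = np.hypot(cy, cx)                    # start radius: outside the slice
    for theta in np.linspace(0.0, 2 * np.pi, n_probes, endpoint=False):
        dy, dx = np.sin(theta), np.cos(theta)
        r = r0
        while r > 0:                         # march the probe inward
            y = int(round(cy + r * dy))
            x = int(round(cx + r * dx))
            if 0 <= y < h and 0 <= x < w and labels[y, x] > 0:
                keep.add(labels[y, x])       # first GM hit: retain component
                break
            r -= 0.5
    return np.isin(labels, list(keep))
```

Applied to an annular "cortex" with a blob of deep GM inside, the probes all stop on the annulus, so the inner blob is rejected.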
The method is designed for reliable and accurate measurements without being biased towards either time point. The basic principle of measuring the change in cortical thickness with the proposed method is illustrated in Figure 4.1. The proposed method measures changes in thickness between a pair of scans using the following steps:

1. Compute a skeletal representation of the cortex in each scan using the probabilistic GM segmentation.
2. Compute fuzzy correspondences between the scans on the skeletal points.
3. For each pair of matched points, average the two skeleton normals to compute a common normal.
4. Integrate the GM probabilities along the common normal in both scans, weighted by the strength of the match, to compute the thickness in each scan.
5. Compute the difference in thickness from each skeletal line.

Figure 4.1: Measuring cortical thickness change at matched skeletal points (green). For a skeletal point s_p^(1) at time point 1, up to 3 matches are found at time point 2 (s_{q-1}^(2), s_q^(2), s_{q+1}^(2)). An average normal N_avg (blue line) is computed, along which the GM probabilities are integrated in both directions (toward the outer cortical surface, OCS, and the inner cortical surface, ICS). The computed thickness is weighted by the strength of the match W(s_p^(1), s_q^(2)). The final thickness is a weighted sum over all matches.

4.1.1 Correspondence Features

The skeletal line on each scan is obtained by 2D morphological binary thinning of the GM segmentation thresholded at 90%.
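To give a concrete picture of what such a skeletal line looks like, the sketch below extracts a medial-ridge approximation from one slice using a Euclidean distance transform. This is a crude stand-in in the spirit of the medial-axis alternatives mentioned in the text, not the binary thinning algorithm actually used in the pipeline:

```python
import numpy as np
from scipy import ndimage

def medial_ridge(gm_prob_slice, thresh=0.9):
    """Very rough skeletal-line stand-in for one 2D slice: the ridge of
    the Euclidean distance transform of the thresholded GM probabilities."""
    binary = gm_prob_slice >= thresh
    dist = ndimage.distance_transform_edt(binary)
    # a voxel lies on the ridge if it is a local maximum of the distance map
    return binary & (dist == ndimage.maximum_filter(dist, size=3))
```

For a thick band of GM, the ridge collapses to a thin line running along the band's centre, which is the property the correspondence features are defined on.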
Any of the other skeletonization algorithms, such as ridge following (Hoffman and Wong, 1998) or the medial axis transformation, could be used instead to obtain the skeletal line. The binary segmentation is used only to estimate the skeletal line; the rest of the processing is performed on the probabilistic segmentation. Next, for every point on the skeletal line in the first scan, a set of the closest matches is found on the skeletal line in the second scan based on positional and shape features. We used three features for matching: spatial coordinates, unit normal direction, and shape context (Belongie et al., 2002), which can efficiently capture the local structure of the skeletal line.

Shape context can be thought of intuitively as a spatial histogram that captures the distribution of neighbouring points relative to a reference point. It was introduced here to measure the shape similarity of the skeletal line. In this method, shapes are represented by a discrete set of points, which can be any points sampled from the edge profile of the shape, as shown in Figure 4.2; they need not be special landmark points or high-curvature points. The shape context is then calculated at each of these points as a coarse representation of the distribution of the remaining points with respect to that point. It is represented as a multidimensional histogram with bins defined in log-polar space, so that the histogram differs for points at different locations on the shape. For example, in Figure 4.2, two handwritten letters A are represented by sets of sampled points.
If we consider three points on these sampled letters, marked ◦, ⋄ and ◁ in Figure 4.2 (a-b), and calculate the shape context descriptor at each, we can see that points at similar locations have similar histograms and points at different locations have different histograms, as illustrated in Figure 4.2 (d-f).

The spatial coordinates and unit normal vectors are computed in 3D, whereas the shape context is calculated slice by slice at every point on the skeletal line. We chose to calculate the shape context in 2D because we are only interested in the percentage change in thickness, and a 3D shape context is not expected to produce significantly different average thickness values as opposed to 2D. Moreover, the 2D shape context is more meaningful here because the skeletal line on which it is defined is also computed in 2D.

Figure 4.2: Shape context computation and matching. (a) Sampled edge points of the letter A. (b) Sampled edge points of another, similar-looking letter A. (c) Log-polar bins used to compute the shape context. (d) Shape context descriptor of the point marked ◦. (e) Shape context descriptor of another point at a similar location, marked ⋄. (f) Shape context descriptor of a point at a different location, marked ◁. The shape contexts at ◦ and ⋄ are relatively similar compared to the shape context at ◁ (image courtesy of Belongie et al. (2002)).

4.1.2 Thickness Computation

Let s_p^{(t)} be the skeletal point at time point t and position p. The cost of the mismatch in positional coordinates between points s_p^{(1)} and s_q^{(2)} is calculated as:

c_1(s_p^{(1)}, s_q^{(2)}) = \frac{1}{K_{pc}} \sum_{j \in \{x,y,z\}} \left( P_j(s_p^{(1)}) - P_j(s_q^{(2)}) \right)^2    (4.1)

where P_j(s) is the value of the jth component of the feature vector at location s and K_{pc} is a normalization constant. Similarly, for the unit 3D normals, the cost of the mismatch is calculated as:

c_2(s_p^{(1)}, s_q^{(2)}) = 1 - N(s_p^{(1)}) \cdot N(s_q^{(2)})    (4.2)
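A log-polar shape context histogram of the kind described above can be sketched as follows. Only the 5 × 12 bin layout and the radius-4 window come from the text; the log-spaced bin edges are our illustrative choice:

```python
import numpy as np

def shape_context(points, ref, n_r=5, n_theta=12, r_max=4.0):
    """Log-polar histogram of neighbouring skeletal points relative to a
    reference point: n_r radial x n_theta angular bins within radius r_max."""
    d = np.asarray(points, dtype=float) - np.asarray(ref, dtype=float)
    r = np.hypot(d[:, 0], d[:, 1])
    theta = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    mask = (r > 0) & (r <= r_max)            # ignore self and far points
    # log-spaced radial edges between a small inner radius and r_max
    r_edges = np.logspace(np.log10(r_max / 32), np.log10(r_max), n_r + 1)
    r_bin = np.clip(np.searchsorted(r_edges, r[mask], side='right') - 1,
                    0, n_r - 1)
    t_bin = np.minimum((theta[mask] / (2 * np.pi) * n_theta).astype(int),
                       n_theta - 1)
    hist = np.zeros((n_r, n_theta))
    np.add.at(hist, (r_bin, t_bin), 1)       # accumulate point counts
    return hist
```

Two points sitting at analogous locations on two similar shapes produce similar histograms, which is exactly what makes the descriptor useful for matching skeletal points across time points.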
Table 4.1: List of parameters used in the proposed method.

Parameter                   Value     Description
w1                          0.25      Weight for shape context (page 38)
w2                          0.25      Weight for positional coordinates (page 38)
w3                          0.50      Weight for normal (page 38)
K_pc                        16        Normalization constant for positional coordinates (page 38)
K_sc                        18        Normalization constant for shape context (page 38)
Cost cut-off threshold      0.22      Matches with c(s_p^{(1)}, s_q^{(2)}) > thresh are discarded (page 39)
Normal window size          5         Size of the window in which the normal is computed (page 37)
Shape context window size   9         Size of the window in which the shape context is computed (page 38)
Matching window size        7         Size of the window in which the skeletal points are matched (page 39)
Δs                          0.25 mm   Step size for computing thickness (page 39)
n_f, n_b                    16        Maximum number of steps taken in the forward and backward directions from the skeletal line (page 39)

where N(s) is the unit 3D normal at point s on the skeletal line, computed in a cubic window of size 5, and (·) denotes the dot product of the vectors. We determined the window size visually to capture the variability at the desired scale. Finally, the cost of the mismatch in shape context, c_3(s_p^{(1)}, s_q^{(2)}), is calculated using a χ² statistic:

c_3(s_p^{(1)}, s_q^{(2)}) = \frac{1}{K_{sc}} \sum_{i=1}^{5} \sum_{j=1}^{12} \frac{\left( V_{ij}(s_p^{(1)}) - V_{ij}(s_q^{(2)}) \right)^2}{V_{ij}(s_p^{(1)}) + V_{ij}(s_q^{(2)})}, \quad \forall \left| V_{ij}(s_p^{(1)}) + V_{ij}(s_q^{(2)}) \right| \neq 0    (4.3)

where V_{ij}(s) is the value of the log-polar histogram at point s and K_{sc} is the normalization constant. To construct the histogram, a circular window of radius 4 is partitioned into 5 radial bins and 12 angular (θ) bins.

The normalization constants for the positional coordinate (K_{pc}) and shape context (K_{sc}) features are each set to the maximum individual cost computed for that feature, so that c_i(s_p^{(1)}, s_q^{(2)}) ∈ [0, 1].
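The three mismatch costs and their weighted combination can be sketched directly from the definitions above (a minimal illustration; the function names are ours, and the default weights follow Table 4.1):

```python
import numpy as np

def positional_cost(p1, p2, k_pc=16.0):
    """c1 (Eq. 4.1): squared coordinate distance, normalized by K_pc."""
    diff = np.asarray(p1, float) - np.asarray(p2, float)
    return float(np.sum(diff ** 2)) / k_pc

def normal_cost(n1, n2):
    """c2 (Eq. 4.2): one minus the dot product of the unit normals."""
    return 1.0 - float(np.dot(n1, n2))

def shape_context_cost(v1, v2, k_sc=18.0):
    """c3 (Eq. 4.3): chi-squared distance between log-polar histograms,
    skipping empty bin pairs, normalized by K_sc."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    den = v1 + v2
    nz = den != 0
    return float(np.sum((v1[nz] - v2[nz]) ** 2 / den[nz])) / k_sc

def total_cost(c_pos, c_normal, c_sc, w_sc=0.25, w_pos=0.25, w_normal=0.5):
    """Weighted combination of the three feature costs (Eq. 4.4)."""
    return w_sc * c_sc + w_pos * c_pos + w_normal * c_normal
```

Identical features give a total cost of zero, and antiparallel normals give the maximal normal cost of 2, matching the definitions.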
As a result, the normalization constants are set to K_{pc} = 16 and K_{sc} = 18 for the positional coordinates and shape context, respectively.

The individual costs from the features are weighted differently in the final cost:

c(s_p^{(1)}, s_q^{(2)}) = \sum_{i=1}^{3} w_i c_i(s_p^{(1)}, s_q^{(2)})    (4.4)

where w_i is the weight of feature i between points s_p^{(1)} and s_q^{(2)}. We determined the weights w_i empirically by simulating changes in cortical thickness using pseudo-random deformations and using the Jacobian determinant to compute the actual changes in thickness. We then altered the weights systematically to determine the values that provide good agreement while maintaining a reasonable balance between features. The pseudo-random deformations were generated via non-rigid registration between two scans from different patients. This procedure was performed 10 times and the results averaged, with the final weights constrained to differ from each other by no more than 0.25, for generalizability. We chose the most balanced set of weights that produced close to the maximum correlation between the Jacobian determinant and the change in cortical thickness; we do not assume that the maximum correlation yields exactly the best weights, because the simulation does not replicate true atrophy, and we use it only as a guide. After this experiment, the weight for shape context (w1), the weight for the positional coordinates (w2) and the weight for the normal (w3) were set to 0.25, 0.25 and 0.5, respectively.

Using the computed costs, for each point in the first scan, a maximum of the three closest matches are found in the second scan. To constrain the maximum distance of a match, points are compared within a 3D neighbourhood window of size 7, which we believe is sufficiently large to contain the true matches for most AD and MS studies up to a conservative estimate of 5 years, unless the follow-up scan is severely deformed.
A match is discarded if its cost c(s_p^{(1)}, s_q^{(2)}) is above a precomputed threshold, which we determined empirically by varying the threshold on an independent set of development scans and finding the value that maximized the number of matches while producing stable cortical change measurements. The thickness at each time point is then calculated as the sum of GM probabilities along the direction of the average normal (N_avg) computed from the unit normals of the two scans. The average normal serves as a form of longitudinal regularization that reduces the variability of the normal computations. From the point s_p^{(1)}, the GM probabilities are integrated in both the forward and backward directions along N_avg. The thickness T(s_p^{(1)}) is computed as:

T(s_p^{(1)}) = \sum_{i=0}^{n_f} p(s_p^{(1)} + i \Delta s N_{avg}) + \sum_{i=1}^{n_b} p(s_p^{(1)} - i \Delta s N_{avg})    (4.5)

where p(s) is the linearly interpolated probability of point s belonging to GM, Δs is the step size (0.25 mm), and n_f, n_b are the numbers of steps taken in the forward and backward directions until one of the following stopping criteria is met:

• there is a break in the otherwise expected monotonic decrease in GM probability;
• the distance travelled from the skeletal point in either direction exceeds 4 mm, a limit determined using a clinical prior.

These criteria prevent the inclusion of voxels that have most likely been misclassified. The total prior of 8 mm is larger than that used in some previous work (5 mm in Das et al. (2009), 6.5 mm in Wang et al. (2012)) because we work directly with the probabilistic segmentation, in which most voxels have a value less than 1; using a larger prior produces thickness measurements that agree well with those produced by smaller priors applied to binary segmentations.
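The integration of Eq. 4.5 with its two stopping criteria can be sketched as follows. This is an illustration under our own assumptions (trilinear interpolation via SciPy, and the probability sum scaled by the step size so the result is in mm), not the pipeline's actual implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def integrate_thickness(gm_prob, point, normal, step=0.25, max_dist=4.0):
    """Sum linearly interpolated GM probabilities from a skeletal point
    along +/- the (average) normal.  Integration stops in each direction
    on a break in the monotonic decrease of probability, or at the 4 mm
    clinical prior."""
    normal = np.asarray(normal, float) / np.linalg.norm(normal)
    point = np.asarray(point, float)
    n_steps = int(max_dist / step)
    total = 0.0
    for sign in (1.0, -1.0):
        prev = np.inf
        start = 0 if sign > 0 else 1          # count the skeletal voxel once
        for i in range(start, n_steps + 1):
            pos = point + sign * i * step * normal
            p = map_coordinates(gm_prob, pos.reshape(3, 1), order=1)[0]
            if p > prev:                      # monotonic decrease broken
                break
            total += p * step                 # probability-weighted length
            prev = p
    return total
```

On a synthetic slab of pure GM the probability-weighted length recovers the slab's physical thickness, which is the behaviour the probabilistic integration is meant to generalize to partial-volume voxels.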
At each point on the skeletal line of both time points, the difference of the thickness values between matched points, weighted by the strength of the match (Equation 4.6), is calculated. The first term in Equation 4.6 is the sum of the differences over all points on the skeletal line at time point 1; the second term is the corresponding sum over the skeletal line at time point 2. Both sums are averaged to obtain a symmetric cortical thickness change:

\Delta T = \frac{1}{P^{(1)} + P^{(2)}} \left[ \sum_{p=1}^{P^{(1)}} \sum_{q \in L_p^{(2)}} W(s_p^{(1)}, s_q^{(2)}) \left[ T(s_q^{(2)}) - T(s_p^{(1)}) \right] + \sum_{p=1}^{P^{(2)}} \sum_{q \in L_p^{(1)}} W(s_p^{(2)}, s_q^{(1)}) \left[ T(s_p^{(2)}) - T(s_q^{(1)}) \right] \right]    (4.6)

where P^{(1)} and P^{(2)} are the numbers of skeletal points in the first and second scans respectively, L_p^{(2)} is the set of points in the second scan matched with s_p^{(1)}, L_p^{(1)} is the set of points in the first scan matched with s_p^{(2)}, and W(s_p^{(1)}, s_q^{(2)}) is the strength of the match between s_p^{(1)} and s_q^{(2)}, defined as:

W(s_p^{(1)}, s_q^{(2)}) = \frac{S(s_p^{(1)}, s_q^{(2)})}{\sum_{q \in L_p^{(2)}} S(s_p^{(1)}, s_q^{(2)})}    (4.7)

where S(s_p^{(1)}, s_q^{(2)}) = (c(s_p^{(1)}, s_q^{(2)}) + ε)^{-1} and ε is a small number added for numerical stability. To make the measurements symmetric between the two time points, the thickness changes are also calculated in the reverse direction using the fuzzy correspondence and the subsequent calculations, and the resulting two measurements are then averaged.

4.1.3 Cross-sectional Thickness Measurements

The proposed method can also be modified to provide cross-sectional cortical thickness measurements: instead of measuring the change in cortical thickness over time, it measures the absolute thickness of the cortical ribbon at each time point. Our cross-sectional approach is closely related to the longitudinal one. In the cross-sectional measurements, the two time points are still rigidly registered for the purpose of normalization.
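The fuzzy match weights W used in both the longitudinal (Eq. 4.6) and the cross-sectional measurements follow directly from the match costs via Eq. 4.7; a minimal sketch (function name ours):

```python
import numpy as np

def match_weights(costs, eps=1e-6):
    """Fuzzy-correspondence weights (Eq. 4.7): S = 1/(c + eps),
    normalized over a point's (up to three) candidate matches."""
    s = 1.0 / (np.asarray(costs, dtype=float) + eps)
    return s / s.sum()
```

Lower-cost matches receive proportionally larger weights, and the weights of each point's matches sum to one, so every skeletal point contributes a single unit of evidence regardless of how many matches it has.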
The skeletal lines are matched between the two time points, and the cross-sectional thickness is then computed as the sum of the probabilities of the voxels belonging to GM along the average normal, as in Equation 4.5, weighted by the strength of the match:

\Delta T^{(1)} = \frac{1}{P^{(1)} + P^{(2)}} \left[ \sum_{p=1}^{P^{(1)}} \sum_{q \in L_p^{(2)}} W(s_p^{(1)}, s_q^{(2)}) \, T(s_p^{(1)}) \right]

\Delta T^{(2)} = \frac{1}{P^{(1)} + P^{(2)}} \left[ \sum_{p=1}^{P^{(2)}} \sum_{q \in L_p^{(1)}} W(s_p^{(2)}, s_q^{(1)}) \, T(s_p^{(2)}) \right]    (4.8)

where P^{(1)} and P^{(2)} are the numbers of skeletal points in the first and second scans respectively, W(s_p^{(1)}, s_q^{(2)}) is the strength of the match between s_p^{(1)} and s_q^{(2)}, and ΔT^{(1)}, ΔT^{(2)} are the cross-sectional thickness measurements at time point 1 and time point 2, respectively.

4.1.4 Regional Analysis and Buried Sulci

While the global cortical thickness change is important and provides an overall rate of cortical atrophy, the thinning patterns of the cortex are believed to be heterogeneous (Sailer et al., 2003; Calabrese et al., 2007), i.e., different regions of the cortex thin at different rates. Based on functionality, the cortex can be divided into four regions called lobes: frontal, parietal, temporal, and occipital. We extended our method to measure thickness changes in each lobe separately, with the lobes defined from the FreeSurfer labels and transformed into voxel space. When measuring cortical thickness at the lobe level, special care is needed to handle buried sulci, especially in the case of cross-sectional measurements. Due to partial volume and coarse resolution, the CSF in deep, thin sulci is sometimes mislabelled as GM, which may result in erroneously large thickness values.
In the global change-in-thickness measurements, buried sulci do not create significant errors, because the number of measurements in buried sulci is relatively small compared to the total, so the impact on the global mean is small. In regional analysis, however, these errors can be significant. In the proposed method, buried sulci are identified by analyzing each average normal to determine whether both the forward and backward directions terminate at the same type of boundary (GM-CSF or GM-WM). Where they do, we assume the sulcus coincides with the skeleton, and we take the thickness value at that skeletal point to be half of the measured value, similarly to how buried sulci are handled in other current voxel-based methods (Thompson et al., 2005; Luders et al., 2006; Bromiley et al., 2006; Scott et al., 2009). In the absence of further information, this is a reasonable assumption, as argued by Bromiley et al. (2006) and Scott et al. (2009).

4.1.5 Effect of Brain Extraction

While experimenting, we noticed that the choice of brain extraction affects the thickness measurements, in particular because the results of the segmentation method FAST rely on good brain extraction. Extraneous tissue left by the brain extraction step can interfere with the initial estimation of the mean intensity values of the different tissues. If the initial estimated means are too far from the actual means, the expectation-maximization step in FAST can get trapped in a local minimum and may produce an incorrect final segmentation (Zhang et al., 2001). In our experiments, we found that FSL's BET, in a few cases, does not provide a correct boundary even after adjusting the fractional intensity threshold parameter; in addition, the surface of the extracted brain is quite jittery.
In contrast, FreeSurfer’s brain extractionprovides a smooth brain surface but again leaves some extraneous tissue.Therefore, for reliable results and keeping the whole pipeline unsupervisedand fully automatic, masks from both methods are combined.43Chapter 5Experiments and Results5.1 Subjects and Imaging DataTo evaluate the proposed method, we applied it to three longitudinal datasetsof 3D T1-weighted brain MRIs. The first dataset contained MRIs from 15subjects with two scans each (total of 30 scans). The scans were acquiredan hour apart and were used to test for scan-rescan reproducibility of themethods.The second dataset contained images from 50 AD subjects and 50 age-matched normals that were randomly selected from the Alzheimer’s DiseaseNeuroimaging Initiative (ADNI) study1(Mueller et al., 2005). The selecteddataset contains pairs of scans over one-year interval (total of 200 scans) andwere used to test for the ability of the method to distinguish between ADand normal. The subjects were scanned at either 1.5T or 3T with a 3D T1-weighted acquisition. The resolution of the scans varied from 0.99 × 0.98 ×0.93 mm3 to 1.20× 1.24× 1.25 mm3. All the 3D scans were resampled to 1 ×1× 1 mm3 and oriented such that the head faces the same direction in all thescans. The mean age of the healthy controls was 76.5±6.1 years and the meanage of AD patients was 77.0±7.4 years. Of the 100 subjects, 52 were femalesand 48 were males. The extent of the cognitive impairment is measuredclinically by MMSE (Folstein et al., 1975). Deterioration in cognition isindicated by decreasing scores of repeated tests. The mean MMSE score forhealthy controls was 28.62± 1.22 and 22.96± 1.99 for AD. A manual qualitycheck was performed on all the scans. Each of the scans was visually assessed1http://adni.loni.ucla.edu445.2. 
and the scans that had severe motion artifacts were rejected.

Lastly, to verify the change-in-thickness measurements in MS, another longitudinal dataset, of 100 SPMS patients, was used. Two 3D T1-weighted MRIs of each SPMS patient, acquired 2 years apart, were used to test for sensitivity to change over time in a disease with generally slower cortical atrophy than AD. The mean age of the subjects in the selected dataset was 49.52 years; it contained 59 females and 41 males. The resolution of the scans varied from 1.00 × 0.58 × 0.58 mm³ to 1.50 × 1.17 × 1.17 mm³. These scans were resampled to 1 mm × 1 mm × 1 mm and padded to the same dimensions of 256 × 256 × 256. The scans were randomly selected from a larger set of 480 patients collected for a multi-center clinical trial; the treatment groups were pooled because of the negative clinical treatment result. Various clinical scores, such as EDSS and MSFC, were collected along with the MRI scans, and statistical tests were performed to determine whether the change in cortical thickness is associated with any of the clinical scores. The clinical scores MSFC and PASAT, collected by experts, measure the extent of the disability caused by the disease (Fischer et al., 1999). This dataset also went through the same quality control, and all scans containing severe motion artifacts were rejected. In addition, for some subjects one time point was acquired with a gadolinium contrast agent and the other without; these scans were also rejected after a thorough visual assessment.

5.2 Experiments

Various statistical tests were performed to assess the robustness of the method and its sensitivity for measuring a significant change over time. The experiments were run on an Intel Core2 Quad Q8200 2.33 GHz processor with 8 GB of RAM. The proposed pipeline took approximately 2.5 hours to process a subject. The thickness measurements were validated by comparing them with
Experimentsthe measurements from previously widely used methods and by examiningtheir relationship with clinical scores. The ground truth data for the corticalthickness is not available as producing manual thickness measurements wouldbe laborious, time consuming and the measurements would suffer from inter-operator differences. Furthermore, The manual measurements are prone toerrors, especially in the areas where the highly folded cortical surface is notperpendicular to any of the cardinal axes and the line along which thicknessis measured passes through several slices. Therefore, the change in thick-ness measurements were validated by comparing against measurements frompreviously widely used methods such as FreeSurfer and Minimum line inte-gral (MLI). Although the previously reported methods are not guaranteedto provide accurate measurements, it is assumed that they can capture ageneral trend over a dataset and measuring a similar trend using our methodwill help to validate our measurements. Additionally, validating against twomethods will prevent any bias in the measurements towards one method.For the validation, longitudinal pipeline of FreeSurfer (version 5.2.0), whichis an open source software suite and is freely available, was used to obtainthe change in thickness measurements. On the other hand, MLI was imple-mented in-house as described in the paper Aganj et al. (2010). The MLI,only being a cortical thickness-measuring block, shares the rest of the pipelinewith our method.5.2.1 Scan-rescan ReproducibilityScan-rescan reproducibility is used to measure the robustness of the methodsusing same day scans from 15 subjects. Since the scans were taken after onehour, no significant change in cortical thickness should have taken place. 
The expected average change in thickness on these scans should therefore be close to zero. However, due to systematic differences that may exist between the images acquired earlier and later, we randomly assign the labels 'baseline' and 'follow-up' to the two images of each pair, and then perform the longitudinal analysis on the relabelled data. The mean thickness change is computed on this dataset and a one-sample t-test is performed to check whether the mean is statistically different from zero. Further, 95% confidence intervals are calculated. We expect the confidence interval of a reproducible method to contain zero; tighter confidence intervals are also preferred.

5.2.2 Clinical Relevance

The clinical relevance of the results is evaluated by investigating the association of the clinical scores with the measured cortical thickness changes. The clinical scores, collected by experts, quantify the impact of the disease on the physical and cognitive abilities of the subject by rating the subject's performance on standardized tasks. These scores are helpful for documenting the decline in cognition and limb movement over time and for measuring the subject's response to a treatment. In this thesis, we particularly emphasize cognitive function, as cognitive impairment is known to be associated with GM atrophy. We wanted to evaluate whether there is any correlation between the percentage change in thickness and the measured decline in cognitive score, so as to show the effectiveness of the method in a clinical setting.
For AD subjects, MMSE is a commonly used clinical score for assessing the severity of the disease, whereas for MS, MSFC is used to track the progression of the disease and is composed of 3 subscores: the 25-Foot Walk (T25W), the Paced Auditory Serial Addition Test (PASAT) and the 9-Hole Peg Test (9-HPT).

5.2.3 Assessment of Thickness Measurements on the AD Dataset

For the AD data, the different methods were evaluated on their ability to distinguish between AD subjects and healthy normals. The p-values computed from Student's t-test determine whether the measured cortical thickness change is statistically significant; we assume statistical significance if the p-value < 0.05. Student's t-test has the drawback of being sensitive to outliers in the measurements: even a few outliers can significantly affect the p-value. So, in addition to the t-test, the Wilcoxon rank-sum test was also performed on the dataset. This test is an alternative to Student's t-test and considers the ranks of the measurements rather than their values; using ranks reduces the effect of outliers and is considered more robust. It is also a non-parametric test that makes no assumptions about the probability distributions of the variables.

Cohen's d was used to measure the effect size of the percentage change in cortical thickness between healthy controls and AD subjects, in addition to statistical significance. The Cohen's d-value measures the standardized difference between the two means. Clarkson et al. (2011) have shown that voxel-based methods can show a larger effect size than surface-based methods, suggesting that voxel-based methods can capture a larger magnitude of the differences between two groups.
This test computes a d-value using the equation:

d = \frac{|\mu_1 - \mu_2|}{\sigma}    (5.1)

where σ represents the common (pooled) standard deviation and μ₁, μ₂ represent the mean thickness changes of the healthy and diseased subjects, respectively. Typically, d-values of 0.2, 0.5 and 0.8 represent small, medium and large effect sizes, respectively. A larger d-value, which indicates a larger separation between the groups, is preferred. In addition to these tests, we also performed a power analysis using bootstrapping (with replacement) to determine how the ability of the measurement methods to produce separable means may be affected by sample size. The sample size was varied from 2 to 50, and 999 bootstrap iterations were performed for each sample size. Power is then computed for a sample size as the number of iterations in which the percentage change in thickness was significant under a t-test, divided by the total number of iterations.

Further, Spearman correlations were computed to measure the agreement of thickness measurements between the different methods, both for the longitudinal and for the cross-sectional measurements. In addition, the Pearson correlations between the percentage change in thickness and the percentage change in MMSE were computed for the different methods.

5.2.4 Assessment of Thickness Measurements on the MS Dataset

For the MS dataset, due to the unavailability of healthy-subject scans, the validation tests performed were slightly different from those for the AD dataset. On the MS dataset, the different methods were evaluated on their sensitivity to detect a significant change over time using a one-sample t-test. Further, Spearman correlations were computed between the percentage change in thickness and other known MS biomarkers, such as the percentage change in brain volume fraction (BVF) and T2 lesion volume (T2LV) load.
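The effect-size and bootstrap-power computations described in Section 5.2.3 can be sketched as follows (a minimal illustration with SciPy; we assume σ in Eq. 5.1 is the pooled standard deviation, and the function names are ours):

```python
import numpy as np
from scipy import stats

def cohens_d(x1, x2):
    """Cohen's d (Eq. 5.1): standardized difference of two group means."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    pooled_sd = np.sqrt(((n1 - 1) * x1.var(ddof=1) +
                         (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2))
    return abs(x1.mean() - x2.mean()) / pooled_sd

def bootstrap_power(x1, x2, n, iters=999, alpha=0.05, seed=0):
    """Fraction of bootstrap resamples (with replacement) of size n per
    group whose two-sample t-test is significant at level alpha."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(iters):
        s1 = rng.choice(x1, size=n, replace=True)
        s2 = rng.choice(x2, size=n, replace=True)
        if stats.ttest_ind(s1, s2).pvalue < alpha:
            hits += 1
    return hits / iters
```

Sweeping n over a range of sample sizes and plotting `bootstrap_power` against n reproduces the kind of power-versus-sample-size curve reported in the results chapter.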
These biomarkers are widely reported in earlier clinical studies and have been shown to be partially correlated with physical disability and cognitive decline (Benedict et al., 2004; Bermel et al., 2002). BVF is defined as the ratio of the brain parenchymal volume to the brain intradural volume and measures normalized whole-brain atrophy. T2LV captures the volume of the brain covered by lesions that are hyperintense compared to the surrounding white matter in a T2-weighted MR scan. The correlations were computed to evaluate the potential of change in thickness as an independent biomarker relative to the existing ones. A partial correlation is expected, as whole-brain atrophy comprises the atrophy of the cortex as well; however, a perfect or near-perfect correlation would suggest that change in cortical thickness is not useful as an independent biomarker.

Further, besides the global mean thickness change, regional analysis was also performed on this dataset. A one-sample t-test was performed in each lobe separately to identify the lobes with significant changes in thickness. We evaluated the longitudinal agreement between the methods by computing the correlations between the changes measured at both the global and the regional level. Also, correlations between the percentage regional changes in thickness and the percentage changes in the different available clinical scores were calculated.

5.3 Results

5.3.1 Scan-rescan Reproducibility

Table 5.1 shows the mean changes of cortical thickness on the scan-rescan dataset. When applied to the 15 pairs of scans acquired an hour apart, our proposed method (LCT) computed a mean change of −0.094% (SD = 0.525%), whereas FreeSurfer and the minimum line integral computed mean changes of 0.008% (SD = 0.452%) and −0.265% (SD = 0.794%), respectively.
None of the means is statistically different from 0 when one-sample two-tailed t-tests are applied. All three SDs are comparable, indicating that the methods produced similar scan-rescan variability on this dataset.

Table 5.1: Scan-rescan reproducibility. A one-sample t-test shows that none of the means is statistically different from 0. The SDs indicate that all three methods produced similar scan-rescan variability.

Method               Mean % change (SD)   95% confidence interval   t-test p-value
FreeSurfer           0.008 (0.452)        [-0.233, 0.249]           0.944
Min-Line Integral    -0.265 (0.794)       [-0.688, 0.158]           0.201
Our Method           -0.094 (0.525)       [-0.374, 0.185]           0.484

The 95% confidence intervals for the means from all three methods contain zero, which suggests that they are reproducible. To rule out systematic bias, several iterations were performed in which the scans were randomly grouped into the two time points; the p-value varies with the selection of samples, but over the 10 iterations performed its range was between 0.23 and 0.97.

5.3.2 Distinguishing between AD and Normals over a Year

Table 5.2 summarizes the results of applying the three methods to the AD dataset. All three methods measured a higher rate of atrophy for the AD group than for the normal group. The minimum line integral and our method both produced positive means for the normal group, of 0.206% and 0.317% respectively, while FreeSurfer produced a negative mean of −0.446%; none of the three is statistically different from 0. For the AD group, our method measured the largest change (−1.400%), followed by FreeSurfer (−1.033%) and MLI (−0.346%). These results are in agreement with previous studies such as Li et al. (2012b), who reported the average rate of atrophy over

Table 5.2: Mean cortical thickness change (%) over 1 year computed by 3 methods in 50 AD and 50 normal subjects. The results of the two-tailed t-test and the Wilcoxon rank-sum test of group separability are also given.
The p-values show that only our method was able to produce statistically differentmeans between AD and healthy controls.Method Normal Mean AD Mean t-test Wilcoxon% change (SD) % change (SD) p-value p-valueFreeSurfer -0.446 (1.961) -1.033 (2.311) 0.188 0.090(FS)Minimum Line 0.206 (1.989) -0.346 (1.352) 0.105 0.237Integral (MLI)Our Method 0.317 (3.496) -1.400 (3.329) 0.013 0.006(LCT)515.3. Results0.00.20.40.60 10 20 30 40 50Sample SizePowerOur Method FreeSurfer Minimum Line IntegralFigure 5.1: Power analysis with bootstrapping on the AD and normal data todetermine the effect of sample size on the ability of each method to separatethe groups. Our method shows consistently the highest power throughoutthe range of sample sizes.2-years in AD patients to be between 2 − 6%. We performed three tests ofgroup separability: two-tailed t-test, Wilcoxon rank-sum test and Cohen’s d.The p-values from the t-tests show that only our method was able to pro-duce statistically different means and the p-values from Wilcoxon rank-sumtest again show similar results, with FreeSurfer exhibiting a weak trend to-wards significance (p = 0.090). The d-values show that our method producedthe highest effect size (0.537) while FreeSurfer produced the least effect size(0.264) on this data. The power analysis on the same dataset show that theproposed method demonstrated higher power (0.80) than FreeSurfer (0.283)and MLI (0.35). Further, the results on the bootstrapped samples (Fig-ure 5.1) show that the proposed method has the highest power for virtuallythe whole range of sample sizes.The Spearman correlations between thickness measurements from ourmethod and other state-of-the-art methods show reasonable agreement. Thelongitudinal measurements of our method correlated strongly (r = 0.799, p <0.001) with MLI but observed only a moderate correlation with FreeSurfer525.3. 
(r = 0.265, p = 0.007). All the p-values from the correlations were significant, as shown in Table 5.3. However, our method obtained a lower p-value than MLI when measuring the longitudinal correlation with FreeSurfer. Also, the cross-sectional thickness measurements were strongly correlated between our method and MLI at Time point 1 (r = 0.803) and Time point 2 (r = 0.705), and moderately correlated between our method and FreeSurfer at Time point 1 (r = 0.604) and Time point 2 (r = 0.654). A recent cross-sectional study (Tustison et al., 2014), which uses DiReCT for thickness measurement, reported a Pearson correlation of 0.44 with FreeSurfer on a large and varied dataset.

Table 5.3: Spearman correlation coefficient (p-value) between change in cortical thickness measurements using different methods. The results show that the measurements from different methods were significantly correlated.

                    FreeSurfer       Min-Line Integral   Our Method
FreeSurfer          1                0.233 (0.020)       0.265 (0.007)
Min-Line Integral   0.233 (0.020)    1                   0.799 (< 0.001)
Our Method          0.265 (0.007)    0.799 (< 0.001)     1

5.3.3 Correlations with Clinical Scores on the AD Dataset

We compared the change in MMSE score with the change in thickness measured by the different methods. The p-value from our method shows a weak trend towards significance, whereas the other methods failed to show any significant trend. The correlation values and the corresponding p-values are shown in Table 5.4. The weak significance could be due to the fact that the MMSE is not a complete neuropsychological examination: it only assesses a narrow range of cognitive functions (Rapp et al., 2003). The MMSE does not assess long-term memory or executive functions such as the capacity to abstract or to judge a social situation. People with these difficulties would thus show a normal MMSE score, but would still have severe cognitive limitations.

Table 5.4: Spearman correlation of thickness change with the change in MMSE score.
None of the methods was able to find a significant correlation between percentage change in cortical thickness and percentage change in MMSE score.

Method              correlation   p-value
FreeSurfer          0.012         0.90
Min-Line Integral   0.000097      0.99
Our Method          0.133         0.18

5.3.4 Thickness Changes at the Global Level on the MS Dataset over Two Years

The following results are from the earlier version of LCT, which produces measurements at the global level only and does not handle buried sulci explicitly. Using this method, cortical thickness decreased by a mean of −0.392% (95% confidence interval: −0.665% to −0.120%) over 2 years. The p-values from both the Student's t-test and the Wilcoxon rank-sum test were significant (p < 0.05). FreeSurfer measured a slightly larger mean atrophy rate of −0.426% (95% confidence interval: −1.032% to 0.180%) over 2 years. However, the changes in thickness measured by FreeSurfer were not statistically significant under either the Student's t-test or the Wilcoxon rank-sum test. Further, the cortical thickness changes calculated using LCT correlated both with percentage change in PASAT (r = 0.340, p < 0.001) and with percentage change in MSFC (r = 0.258, p < 0.05). However, no significant correlation was seen with the other two clinical scores, T25W and 9-HPT.

The correlations with other biomarkers show that the change in cortical thickness correlated moderately with change in BVF but failed to show any correlation with change in T2LV. However, moderate to strong correlations were seen on the cross-sectional measurements. The percentage change in cortical thickness correlated with change in BVF (r = 0.490, p < 0.05), and the cross-sectional cortical thickness correlated with BVF at baseline (r = 0.692, p < 0.0001) and at 2 years (r = 0.725, p < 0.0001). Further, cortical thickness correlated negatively with T2LV at baseline (r = −0.437, p < 0.0001) and at 2 years (r = −0.432, p < 0.0001). When evaluating the correlations using FreeSurfer, similar cross-sectional results were seen.
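Correlation coefficients of the kind reported above can be computed with scipy.stats; the sketch below uses synthetic data with a built-in partial association (the variable names and effect sizes are hypothetical, not the study's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100
# Hypothetical per-subject changes: thickness change partially driven
# by whole-brain atrophy (BVF change), plus independent noise
bvf_change = rng.normal(size=n)
thick_change = 0.5 * bvf_change + rng.normal(scale=0.9, size=n)

r, p_r = stats.pearsonr(bvf_change, thick_change)       # linear association
rho, p_rho = stats.spearmanr(bvf_change, thick_change)  # rank-based association
```

A moderate coefficient here (around 0.5 by construction) illustrates a partial association: thickness change shares variance with whole-brain atrophy but is not fully explained by it, which is the pattern the text interprets as supporting an independent biomarker.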
FreeSurfer correlated cross-sectionally with BVF at baseline (r = 0.485, p < 0.0001) and at 2 years (r = 0.538, p < 0.0001). Further, moderate negative correlations were seen with T2LV at baseline (r = −0.344, p < 0.001) and at 2 years (r = −0.311, p < 0.05). However, no significant longitudinal correlations were seen using FreeSurfer with any of the above-mentioned imaging biomarkers.

5.3.5 Thickness Changes at the Global and Lobe Levels in the MS Dataset over Two Years

We compared thickness measurements at the global level as well as at the lobe level on the MS scans using the modified LCT (see Section 4.1.4), which can handle buried sulci in the MR images. Table 5.5 shows the global and regional mean thickness change results from this experiment. Our method measured a mean change of −0.395% (95% confidence interval: −0.607% to −0.183%) over the two-year interval, compared to FreeSurfer, which measured a slightly larger mean change of −0.426% (95% confidence interval: −1.032% to 0.180%). For these global changes, the p-value from the one-sample t-test suggests that the mean change in cortical thickness measured by our method was statistically significant (p < 0.05), whereas the mean measured by FreeSurfer was not (p = 0.166). Further, our change in cortical thickness measurements had a smaller standard deviation (SD = 1.077%) than FreeSurfer's (SD = 3.086%). In the regional analysis, our method was more sensitive than FreeSurfer to changes in cortical thickness in the frontal lobe in both hemispheres, producing changes in thickness that are statistically significant (p < 0.05), whereas the FreeSurfer measurements did not show any statistical significance. Further, a significant decrease in cortical thickness was also observed in the right parietal lobe, and a trend towards significance was seen in the left parietal lobe (p = 0.0755). In addition, our method showed a trend towards significance in the left temporal lobe (p = 0.0570). The measurements produced by FreeSurfer are not significant in any of the lobes; only a trend towards significance was seen in the left frontal lobe (p = 0.0619). Also, it is notable that our method provided better left-right symmetry, while FreeSurfer produced some very large SDs in the right hemisphere. The large SDs in FreeSurfer are due to the presence of an outlier in its measurements, which was not observed with our method. Our results are consistent with earlier MS studies that also reported significant changes in cortical thickness in the frontal lobe (Sailer et al., 2003; Calabrese et al., 2007; Narayana et al., 2013).

Table 5.5: Mean cortical thickness change (%) over 2 years computed by our proposed method and FreeSurfer on the MRIs of 100 SPMS subjects. Our method (LCT) shows greater sensitivity to change than FreeSurfer, as indicated by significant p-values. LCT also produced lower SD measurements for all regions than FreeSurfer. There is better left-right symmetry in the results of our method, while FreeSurfer (FS) produced some very large SDs in the right hemisphere. The ∗∗ indicates where the p-value was significant (p < 0.05) and ∗ indicates a trend towards significance (0.05 < p < 0.1).

                      Global     Left Hemisphere                                Right Hemisphere
                                 Frontal    Temporal   Parietal   Occipital     Frontal    Temporal   Parietal    Occipital
LCT
Mean % Change        −0.3950    −0.5999    −0.3194    −0.2298     0.0927       −0.5722    −0.2210    −0.3036     −0.0019
SD                    1.0777     1.4739     1.6793     1.2873     2.0828        1.5043     1.5323     1.2218      1.9038
t-test p-value        0.0003∗∗   0.0001∗∗   0.0570∗    0.0755∗    0.6575        0.0002∗∗   0.1524     0.0146∗∗    0.9921
FS
Mean % Change        −0.4260    −0.4667    −0.1322    −0.3484     0.0596       −0.2690    −0.5282    −1.4759     −0.6772
SD                    3.0865     2.4962     2.9256     2.4233     3.2811        2.6410     9.5555    13.0361      9.6947
t-test p-value        0.1664     0.0619∗    0.6491     0.1496     0.8549        0.3109     0.3084     0.2603      0.4865

Overall, our method and FreeSurfer agree reasonably well, considering that they are two very different approaches.
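For reference, the bootstrap power analysis of Section 5.3.2 (Figure 5.1) follows a standard recipe: repeatedly resample n subjects per group with replacement, apply the two-tailed t-test, and record the fraction of resamples reaching p < 0.05. The sketch below also includes the pooled-SD definition of Cohen's d; the group means and SDs are illustrative, not the study's:

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Effect size: mean difference over the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

def bootstrap_power(a, b, n, n_boot=1000, alpha=0.05, seed=1):
    """Fraction of size-n resamples in which the two groups separate."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_boot):
        sa = rng.choice(a, size=n, replace=True)
        sb = rng.choice(b, size=n, replace=True)
        if stats.ttest_ind(sa, sb).pvalue < alpha:
            hits += 1
    return hits / n_boot

rng = np.random.default_rng(7)
normal = rng.normal(0.3, 3.3, size=50)   # synthetic % thickness changes
ad = rng.normal(-1.4, 3.3, size=50)

d = cohens_d(normal, ad)
power_at_50 = bootstrap_power(normal, ad, n=50)
```

Sweeping `n` over a range and plotting `bootstrap_power` against it reproduces the kind of power-versus-sample-size curve shown in Figure 5.1.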
Cross-sectionally, the global mean thickness measurements produced by our method were 2.99 mm (SD = 0.20) and 2.98 mm (SD = 0.20) for Time Points 1 and 2, respectively, while FreeSurfer produced fairly close measurements of 3.15 mm (SD = 0.20) and 3.14 mm (SD = 0.20). We evaluated the longitudinal agreement between the methods by computing the Pearson correlations between the changes measured. The results are summarized in Table 5.6. The global cross-sectional measurements between the two methods show moderately strong correlations (∼0.7), while the correlations for the regional cross-sectional measurements range from about 0.3 to 0.8, with the strongest agreement in the frontal and parietal lobes. In the longitudinal results, mild but significant correlations are seen between the two methods, with the global changes having a correlation of 0.20 and the regional changes having correlations ranging from about 0.20 to 0.35; again, the frontal and parietal lobes exhibited the strongest agreement between the two methods.

Table 5.6: Pearson correlation coefficient (r) between thickness measurements of FreeSurfer and our method over 2 years computed on the MRIs of 100 SPMS subjects. All the p-values from the correlations are statistically significant (p < 0.05) except for the three marked with †. Overall, the proposed method and FreeSurfer agree reasonably well.

            Time Point 1              Time Point 2              % Thickness change
Global      0.671                     0.684                     0.200
            Left hemi.   Right hemi.  Left hemi.   Right hemi.  Left hemi.   Right hemi.
Frontal     0.763        0.784        0.692        0.712        0.332        0.313
Temporal    0.542        0.523        0.425        0.260        0.217       −0.015†
Parietal    0.664        0.694        0.682        0.562        0.271        0.305
Occipital   0.446        0.408        0.546        0.436       −0.045†       0.188†

To evaluate the clinical relevance of the results from the two methods, correlations were also computed between the percentage change in thickness measurements and the percentage change in clinical scores (Table 5.7).
The global percentage thickness changes computed by our method correlated moderately with percentage change in MSFC (r = 0.257, p < 0.05) and with change in PASAT (r = 0.379, p < 0.05). In addition, the regional analysis revealed significant correlations of 0.258 (p = 0.008) and 0.197 (p = 0.049) between percentage change in thickness and percentage change in MSFC in the left frontal lobe and the right frontal lobe, respectively. Further, the percentage change in thickness correlated significantly with percentage change in PASAT in all four lobes in both hemispheres except for the left temporal lobe. These significant correlation coefficients varied from 0.2 to 0.38 across the different lobes.

Table 5.7: Spearman correlation coefficient (ρ) between clinical scores over 2 years and % change in thickness measurements computed in 100 SPMS subjects using FreeSurfer (FS) and our method (LCT). Our method was able to measure thickness changes that correlate significantly with MSFC at the global level and in the frontal lobe, while FreeSurfer did not. Also, significant correlations of change in thickness with change in PASAT were measured at the global level and in all the lobes except the left temporal lobe. The ∗∗ indicates where the p-value was significant (p < 0.05) and ∗ indicates a trend towards significance (0.05 < p < 0.1).

          Global     Left Hemisphere                               Right Hemisphere
                     Frontal    Temporal   Parietal   Occipital    Frontal    Temporal   Parietal   Occipital
LCT
MSFC      0.257∗∗    0.258∗∗    0.011      0.192∗     0.093        0.197∗∗    0.123      0.133      0.063
PASAT     0.379∗∗    0.297∗∗    0.132      0.206∗∗    0.215∗∗      0.370∗∗    0.241∗∗    0.199∗∗    0.220∗∗
9-HPT    −0.095     −0.086     −0.190∗    −0.179∗    −0.192∗       0.006     −0.186∗    −0.059     −0.154
T25W      0.094      0.131      0.041     −0.054     −0.002        0.033      0.139     −0.022      0.172∗
FS
MSFC      0.069      0.099      0.061      0.080      0.094        0.091      0.052      0.076      0.088
PASAT     0.002      0.074      0.059     −0.031     −0.059        0.096      0.066     −0.022     −0.044
9-HPT    −0.094     −0.081     −0.068     −0.056     −0.041       −0.096     −0.078     −0.063     −0.049
T25W      0.047      0.102      0.093      0.165∗     0.163∗       0.109      0.108      0.173∗     0.167∗
For the percentage change in 9-HPT, only a weak trend towards significance (0.050 < p < 0.075) was seen in several lobes. While the proposed method was able to find significant correlations in both hemispheres, no significant correlation was seen in the FreeSurfer results.

5.3.6 Comparison of Speed

Processing a subject for change in cortical thickness can take a significant amount of processing time. Surface-based methods are known to be computationally very intensive, with most of the time consumed in detecting and preventing self-intersections when deforming surfaces. For example, FreeSurfer takes 20 to 30 hours per subject, which may be impractical for many neurological studies that have hundreds of patients. Although FreeSurfer produces more measures than just cortical thickness, in our opinion the time taken is still very large. Likewise, CLADA (Nakamura et al., 2011), another surface-based method, takes 12 to 15 hours to process each subject for thickness on an Intel 2 GHz processor. Nakamura et al. (2011) only reported the timing for processing six lower-resolution scans, and it is difficult to estimate the processing time for a single pair of high-resolution scans. Voxel-based methods avoid the meshing steps altogether and are generally much faster; e.g., Minimum Line Integral (Aganj et al., 2010) takes 2 to 3 hours for a pair of scans. In our experiments, FreeSurfer took approximately 26 hours to process each pair of scans, whereas our proposed method took approximately 2.5 hours on an Intel 2.33 GHz machine with 8 GB of RAM. The change in thickness measurement step takes approximately 3 minutes per subject. Most of the time in the proposed method is taken up by the brain extraction step performed using mri_watershed of FreeSurfer (45 minutes/scan).
Nonetheless, the proposed method, being voxel-based, runs much faster than surface-based methods while being more sensitive to change, which makes it desirable for studies with large datasets.

Chapter 6

Conclusions and Future Work

6.1 Summary and Conclusions

We have proposed a new voxel-based longitudinal method for measuring changes in cortical thickness between pairs of 3D T1-weighted MR scans. The proposed method is designed to address the various challenges in measuring changes in cortical thickness, such as the partial volume effect, the coarse MR scan resolution relative to the cortical thickness, the small annual rate of change (< 5%) of cortical thickness in diseased subjects, the alignment of the two time points for longitudinal study, and computational time. The proposed method is voxel-based for computational efficiency, as opposed to surface-based methods, which are computationally expensive. However, surface-based methods are known to be more reproducible on scan-rescan data than voxel-based methods. Therefore, special considerations are made for the proposed method to be reproducible as well as sensitive to subtle cortical changes. Unlike current longitudinal methods, the proposed method does not require deformable registration, but instead uses a robust feature-matching approach specifically targeting the cortex. Additionally, fuzzy correspondence is used to enhance the stability of matches and acts as an alternative to the regularization step used to align different time points with deformable registration in most earlier methods. Furthermore, the proposed method's sensitivity to detect significant change over time is improved by measuring thickness changes on probabilistic segmentations, which is made possible by the voxel-based approach, as opposed to surface-based approaches, which require binary edges to clearly define the surfaces. Finally, the proposed method is designed to be symmetric, i.e., both time points are passed through the same processing steps, and we obtain the same measurements with opposite signs if we interchange the time points.

The evaluation of our method on three different datasets shows that the proposed method has potential benefits over the current state-of-the-art method, FreeSurfer. Tests using a scan-rescan dataset show that the proposed method is similarly reproducible to FreeSurfer. Overall, the measurements from our method agree reasonably well with FreeSurfer on two clinical datasets of AD and SPMS, with 100 subjects each, with the proposed method being more sensitive to pathological changes while being 10 times faster in processing a pair of scans than FreeSurfer. The agreement between the two methods is stronger on the cross-sectional measurements than on the longitudinal measurements. Also, the proposed method showed significant correlations between change in thickness measurements and clinical scores, whereas FreeSurfer failed to show any such correlations. This study is among the few that have measured the longitudinal relationship between change in cortical thickness and change in clinical scores in SPMS at the global and lobe levels. The global change in cortical thickness correlated with change in MSFC and in PASAT, a cognitive clinical score, on the SPMS dataset. Furthermore, regional analysis showed significant correlations between change in PASAT and change in thickness in all four lobes in both hemispheres except for the left temporal lobe. Significant correlations were also seen between change in MSFC and change in thickness in the frontal lobe, whereas FreeSurfer failed to show any such correlation.

The cortical atrophy was compared with other established imaging biomarkers, such as BVF and T2LV, on the MS dataset. The cortical atrophy was measured to be only partially associated with whole-brain atrophy and T2 lesion increase, suggesting an independent contribution as a biomarker.
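The symmetry property claimed earlier in this section (interchanging the time points should flip only the sign of the measurement) can be illustrated with a percentage-change definition that uses a symmetric midpoint denominator; this is a conceptual sketch, not the thesis's exact formula, and the data are synthetic:

```python
import numpy as np

def symmetric_percent_change(t1, t2):
    """% change of mean thickness; the denominator is the two-time-point
    midpoint, so swapping the inputs flips only the sign."""
    m1, m2 = np.mean(t1), np.mean(t2)
    return 100.0 * (m2 - m1) / ((m1 + m2) / 2.0)

rng = np.random.default_rng(3)
thick_t1 = rng.uniform(2.0, 4.0, size=1000)  # synthetic thickness samples (mm)
thick_t2 = thick_t1 * 0.99                   # ~1% uniform thinning
change = symmetric_percent_change(thick_t1, thick_t2)
```

Note that a plain `(t2 - t1) / t1` denominator would not be exactly antisymmetric; the midpoint denominator guarantees equal magnitudes with opposite signs when the time points are interchanged.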
A measurable change of cortical thickness over the period of the study, and its association with cognitive outcomes and conventional MRI measures, make cortical thickness a promising and feasible imaging biomarker for AD and SPMS.

Furthermore, only a few studies have reported results on large datasets. In this study, we present results on two large clinical datasets containing 100 subjects each. A method's performance on large, multi-center data is desirable for evaluating its usefulness in a practical environment. Also, the power analysis on the AD dataset shows that our method is able to detect significant change even using a smaller dataset, which can be useful for evaluating administered therapies efficiently, as fewer subjects would need to be enrolled in a study to evaluate the effect of a drug.

The proposed method has consistently performed better on certain statistical tests than FreeSurfer on our datasets while exhibiting similar scan-rescan reproducibility. Its smaller run-time, better sensitivity to change over time and better correlations with clinical scores make it potentially more desirable for measuring subtle changes of the cortex in clinical studies, especially where a large amount of data needs to be processed.

6.2 Limitations and Future Work

With the continuous advancements in MR image acquisition and software techniques, we can improve the sensitivity and reproducibility of our method even further. Cortical thickness methods can potentially benefit from improved and temporally consistent segmentation results. The segmentation results can be improved either by improving the quality of the input scans or by improving the robustness of the segmentation algorithm. One of the ways in which the input scans to the segmentation step can be improved is by enhancing the quality of the brain extraction.
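As an illustration of the mask combination used in the current pipeline (an inclusive OR of the FSL BET and FreeSurfer mri_watershed brain masks), the numpy sketch below builds two hypothetical spherical masks and unions them; the morphological closing stands in for the kind of surface smoothing discussed here and is not necessarily part of the actual pipeline:

```python
import numpy as np
from scipy import ndimage

# Two hypothetical binary brain masks (stand-ins for the BET and
# mri_watershed outputs): slightly offset spheres on a 64^3 grid
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
bet_mask = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 30) ** 2 < 20 ** 2
ws_mask = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 34) ** 2 < 20 ** 2

# Inclusive OR: keep any voxel that either tool labelled as brain
union = bet_mask | ws_mask

# Morphological closing smooths small irregularities at the mask surface
smoothed = ndimage.binary_closing(union, structure=np.ones((3, 3, 3)))
```

The union errs on the side of keeping brain tissue: a voxel missed by one tool survives if the other tool found it, at the cost of admitting either tool's false positives.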
In our experiments, we observed that the brain masks obtained from BET are not always smooth at the surface. An improved brain extraction with smooth and robust masks would be helpful for obtaining better segmentation results. Currently, relatively smooth masks are obtained by applying an inclusive OR operation on the outputs obtained from FSL's BET and FreeSurfer's mri_watershed. Instead of combining masks from two methods to achieve reliable brain extraction, the brain mask could be improved by including prior spatial knowledge in the existing methods. Another potential way of improving the segmentation results is to use information from all the available time points, which would make the segmentation results consistent across these time points. We performed preliminary experiments with the different options available in FAST, providing prior information from the output of one time point to the other, but found on visual inspection that the results did not improve. Dwyer et al. (2013) proposed a 4D version of FAST. They mentioned that using 4D will introduce a small bias in the results toward no change and might also have limited benefit in the face of very large deformations. However, they hypothesized that these concerns could be outweighed by the potential reduction in noise-induced variance. We would like to experiment with these 4D segmentation methods to improve the consistency of the segmentation, and we hypothesize that this will further improve the robustness of the cortical thickness results.

In the future, we would like to extend the method to handle more than two time points, and to make the method more robust to large deformations to enable longer-term (> 5 years) studies. For longer-term studies, more correspondence features may be needed that are able to capture large deformations. Also, the method may benefit if the shape-context feature is calculated in 3D.
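For reference, a 2D shape-context descriptor in the spirit of Belongie et al. (2002) — a log-polar histogram of the positions of the other skeleton points relative to a reference point — can be sketched as follows (the bin counts and radius normalization are illustrative choices, not necessarily those used in LCT):

```python
import numpy as np

def shape_context_2d(points, index, n_r=5, n_theta=12,
                     r_min=0.125, r_max=2.0):
    """Log-polar histogram of the positions of all other points
    relative to points[index]."""
    p = points[index]
    rel = np.delete(points, index, axis=0) - p
    r = np.hypot(rel[:, 0], rel[:, 1])
    theta = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
    # Normalize radii by their mean for (approximate) scale invariance
    r = r / r.mean()
    # Log-spaced radial bins; out-of-range radii are clipped to edge bins
    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), n_r + 1)
    r_bin = np.clip(np.searchsorted(r_edges, r) - 1, 0, n_r - 1)
    t_bin = (theta / (2 * np.pi) * n_theta).astype(int) % n_theta
    hist = np.zeros((n_r, n_theta))
    np.add.at(hist, (r_bin, t_bin), 1)  # one count per neighbouring point
    return hist

rng = np.random.default_rng(0)
pts = rng.random((40, 2))          # hypothetical 2D skeleton points
sc = shape_context_2d(pts, index=0)
```

Each of the 40 − 1 = 39 neighbouring points contributes one count, so the histogram is a compact signature of how the rest of the skeletal line is distributed around the reference point; matching such histograms across time points is what makes the feature useful for correspondence.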
In our present implementation, shape context is computed in 2D, as the skeletal line on which it is defined is also computed in 2D. The skeletal line is calculated on a slice-by-slice basis, independently of other slices, because computing a skeletal representation of the cortex in 3D is not trivial and poses several challenges. The main challenges would be the parameterization of the medial cortical surface and regularization to avoid any dips in the surface. In our opinion, the added complexity of constructing a medial surface in 3D is not justified, as the differences in the positions of the skeletal lines computed by the current method between the two time points over the period of our study are not large. Moreover, we believe that small deviations in the position of the skeletal line would not affect the percentage change in thickness measurements significantly, as the thicknesses are measured along the common average normal, which ensures that the thickness calculations are performed along the same direction at both time points. However, we would still like to evaluate whether our proposed method benefits from the use of 3D shape context.

Furthermore, we would also like to evaluate how the algorithm performs on high-resolution 7T scans. Lüsebrink et al. (2013) have done a preliminary comparison of cortical thickness measurements using 3T and 7T scans. They reported that thickness measurements using FreeSurfer were marginally reduced with 7T scans of the same resolution (1 mm) as the 3T scans in the frontal and parietal lobes, and increased in the occipital lobes. They also reported that the ultra-high-resolution (isotropic 0.5 mm) 7T data resulted in significantly reduced cortical thickness estimates compared to the lower-resolution (isotropic 1 mm) 7T data across all the processing algorithms and software used.
They concluded that there exists a bias in the grey matter segmentation due to partial volume effects, and indicated that the true cortical thickness is over-estimated by most current MR studies using either voxel-based or surface-based methods and can be more accurately determined with high-resolution imaging at 7T.

Bibliography

Acosta, O., Bourgeat, P., Zuluaga, M. A., Fripp, J., Salvado, O., Ourselin, S., Initiative, A. D. N., 2009. Automated voxel-based 3D cortical thickness measurement in a combined Lagrangian-Eulerian PDE approach using partial volume maps. Medical Image Analysis 13 (5), 730–743.

Aganj, I., Sapiro, G., Parikshak, N., Madsen, S. K., Thompson, P. M., 2010. Measurement of cortical thickness from MRI by minimum line integrals on soft-classified tissue. Human Brain Mapping 30 (10), 3188–3199.

Belongie, S., Malik, J., Puzicha, J., 2002. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (24), 509–522.

Benedict, R. H. B., Weinstock-Guttman, B., Fishman, I., Sharma, J., Tjoa, C. W., Bakshi, R., 2004. Prediction of neuropsychological impairment in multiple sclerosis: Comparison of conventional magnetic resonance imaging measures of atrophy and lesion burden. Archives of Neurology 61 (2), 226–230.

Bermel, R. A., Bakshi, R., Tjoa, C., Puli, S. R., Jacobs, L., 2002. Bicaudate ratio as a magnetic resonance imaging marker of brain atrophy in multiple sclerosis. Archives of Neurology 59 (2), 275–280.

Bromiley, P. A., Scott, M. L. J., Thacker, N. A., 2006. A comparative evaluation of cortical thickness measurement techniques. Tech. rep., University of Manchester, Imaging Science and Biomedical Engineering Division.

Calabrese, M., Atzori, M., Bernardi, V., Morra, A., Romualdi, C., Rinaldi, L., McAuliffe, M., Barachino, L., Perini, P., Fischl, B., Battistin, L., Gallo, P., 2007. Cortical atrophy is relevant in multiple sclerosis at clinical onset. Journal of Neurology 254 (9), 1212–1220.

Chard, D. T., Griffin, C. M., Parker, G. J. M., Kapoor, R., Thompson, A. J., Miller, D. H., 2002. Brain atrophy in clinically early relapse-remitting multiple sclerosis. Brain 125 (Pt 2), 327–337.

Clarkson, M. J., Cardoso, M. J., Ridgway, G. R., Modat, M., Leung, K. K., Rohrer, J. D., Fox, N. C., Ourselin, S., 2011. A comparison of voxel and surface based cortical thickness estimation methods. NeuroImage 57 (3), 856–865.

Dale, A. M., Fischl, B., Sereno, M. I., 1999. Cortical surface-based analysis. I. Segmentation and surface reconstruction. NeuroImage 9 (2), 179–184.

Das, S. R., Avants, B. B., Grossman, M., Gee, J. C., 2009. Registration based cortical thickness measurement. NeuroImage 45 (3), 867–879.

Drachman, D. A., 2005. Do we have brain to spare? Neurology 64 (12), 2004–2005.

Dwyer, M. G., Bergsland, N., Zivadinov, R., 2013. Improved longitudinal gray and white matter atrophy assessment via application of a 4-dimensional hidden Markov random field model. NeuroImage 90, 207–217.

Eggert, L. D., Sommer, J., Jansen, A., Kircher, T., Konrad, C., 2012. Pathways on real and simulated structural magnetic resonance images of the human brain. PLoS ONE 7 (9), 3188–3199.

Fischer, J., Rudick, R., Cutter, G., Reingold, S., et al., 1999. The Multiple Sclerosis Functional Composite measure (MSFC): an integrated approach to MS clinical outcome assessment. Multiple Sclerosis 5 (4), 244–250.

Fischl, B., Dale, A. M., 2000. Measuring the thickness of the human cerebral cortex from magnetic resonance images. Proceedings of the National Academy of Sciences of the United States of America 97 (20), 11050–11055.

Fisher, E., Lee, J.-C., Nakamura, K., Rudick, R. A., 2008. Gray matter atrophy in multiple sclerosis: a longitudinal study. Annals of Neurology 64 (3), 255–265.

Folstein, M. F., Folstein, S. E., McHugh, P. R., 1975. Mini-mental state: A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research 12 (3), 189–198.

Gray, H., Lewis, W. H., 1918. Anatomy of the Human Body. Lea & Febiger, Philadelphia and New York.

Gronwall, D. M. A., 1977. Paced auditory serial addition task: A measure of recovery from concussion. Perceptual and Motor Skills 44 (2), 367–373.

Han, X., Jovicich, J., Salat, D., van der Kouwe, A., Quinn, B., Czanner, S., Busa, E., Pacheco, J., Albert, M., Killiany, R., Maguire, P., Rosas, D., Makris, N., Dale, A., Dickerson, B., Fischl, B., 2006. Reliability of MRI-derived measurements of human cerebral cortical thickness: The effects of field strength, scanner upgrade and manufacturer. NeuroImage 32 (1), 180–194.

Han, X., Pham, D. L., Tosun, D., Rettmann, M. E., Xu, C., Prince, J., 2004. CRUISE: Cortical reconstruction using implicit surface evolution. NeuroImage 23 (1), 997–1012.

Hoffman, M. E., Wong, E. K., 1998. A ridge-following algorithm for finding the skeleton of a fuzzy image. Information Sciences 105 (14), 227–238.

Hutton, C., De Vita, E., Ashburner, J., Deichmann, R., Turner, R., 2008. Voxel-based cortical thickness measurements in MRI. NeuroImage 40 (4), 1701–1710.

Hutton, C., Draganski, B., Ashburner, J., Weiskopf, N., 2009. A comparison between voxel-based cortical thickness and voxel-based morphometry in normal aging. NeuroImage 48 (2), 371–380.

Ibanez, L., Schroeder, W., Ng, L., Cates, J., 2003. The ITK Software Guide, 1st Edition. Kitware, Inc. ISBN 1-930934-10-6, http://www.itk.org/ItkSoftwareGuide.pdf.

Jenkinson, M., Bannister, P., Brady, M., Smith, S., 2002. Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage 17 (2), 825–841.

Jenkinson, M., Beckmann, C. F., Behrens, T. E., Woolrich, M. W., Smith, S. M., 2012. FSL. NeuroImage 62 (2), 782–790.

Jones, S. E., Buchbinder, B. R., Aharon, I., 2000. Three dimensional mapping of cortical thickness using Laplace's equation. Human Brain Mapping 11 (1), 12–32.

Klauschen, F., Goldman, A., Barra, V., Meyer-Lindenberg, A., Lundervold, A., 2009. Evaluation of automated brain MR image segmentation and volumetry methods. Human Brain Mapping 30 (4), 1310–1327.

Lerch, J. P., Evans, A. C., 2005. Cortical thickness analysis examined through power analysis and a population simulation. NeuroImage 24 (1), 163–173.

Lewis, E. B., Fox, N. C., 2004. Correction of differential intensity inhomogeneity in longitudinal MR images. NeuroImage 23 (1), 75–83.

Li, G., Nie, J., Wu, G., Wang, Y., Shen, D., 2012a. Consistent reconstruction of cortical surfaces from longitudinal brain MR images. NeuroImage 59 (4), 3805–3820.

Li, Y., Shen, D., 2010. Consistent 4D cortical thickness measurement for longitudinal neuroimaging study. MICCAI 13 (Pt 2), 133–142.

Li, Y., Wang, Y., Wu, G., Shi, F., Zhou, L., Lin, W., Shen, D., 2012b. Discriminant analysis of longitudinal cortical thickness changes in Alzheimer's disease using dynamic and network features. Neurobiology of Aging 33 (2), 427.e15–427.e30.

Luders, E., Narr, K., Thompson, P., Rex, D., Jancke, L., Toga, A., 2006. Hemispheric asymmetries in cortical thickness. Cerebral Cortex 16 (8), 1232–1238.

Lüsebrink, F., Wollrab, A., Speck, O., 2013. Cortical thickness determination of the human brain using high resolution 3T and 7T MRI data. NeuroImage 70, 122–131.

MacDonald, D., Kabani, N., Avis, D., Evans, A. C., 2000. Automated 3-D extraction of inner and outer surfaces of cerebral cortex from MRI. NeuroImage 12 (3), 340–356.

Mueller, S. G., Weiner, M. W., Thal, L. J., Petersen, R. C., Jack, C., Jagust, W., Trojanowski, J. Q., Toga, A. W., Beckett, L., 2005. The Alzheimer's disease neuroimaging initiative. Neuroimaging Clinics of North America 15 (4), 869–877.

Nakamura, K., Fox, R., Fisher, E., 2011. CLADA: Cortical longitudinal atrophy detection algorithm. NeuroImage 54 (1), 278–289.

Narayana, P. A., Govindarajan, K. A., Goel, P., Datta, S., Lincoln, J. A., Cofield, S. S., Cutter, G. R., Lublin, F. D., Wolinsky, J. S., 2013. Regional cortical thickness in relapsing remitting multiple sclerosis: A multi-center study. NeuroImage: Clinical 2, 120–131.

Querbes, O., Aubry, F., Pariente, J., Lotterie, J.-A., Demonet, J.-F., Duret, V., Puel, M., Berry, I., Fort, J.-C., Celsis, P., Initiative, T. A. D. N., 2009. Early diagnosis of Alzheimer's disease using cortical thickness: impact of cognitive reserve. Brain 132 (8), 2036–2047.

Rao, S. M., Leo, G. J., Haughton, V. M., Aubin-Faubert, P. S., Bernardin, L., 1989. Correlation of magnetic resonance imaging with neuropsychological testing in multiple sclerosis. Neurology 39 (2), 161–166.

Rapp, S. R., Espeland, M. A., Hogan, P., Jones, B. N., Dugan, E., 2003. Baseline experience with Modified Mini-Mental State Exam: The Women's Health Initiative Memory Study (WHIMS). Aging & Mental Health 7 (3), 217–223.

Reuter, M., Fischl, B., 2011. Avoiding asymmetry-induced bias in longitudinal image processing. NeuroImage 57 (1), 19–21.

Reuter, M., Schmansky, N. J., Rosas, H. D., Fischl, B., 2012. Within-subject template estimation for unbiased longitudinal image analysis. NeuroImage 61 (4), 1402–1418.

Rosas, H. D., Salat, D. H., Lee, S. Y., Zaleta, A. K., Pappu, V., Fischl, B., Greve, D., Hevelone, N., Hersch, S. M., 2008. Cerebral cortex and the clinical expression of Huntington's disease: complexity and heterogeneity. Brain 131 (Pt 4), 1057–1068.

Sailer, M., Fischl, B., Salat, D., Tempelmann, C., Schönfeld, M. A., Busa, E., Bodammer, N., Heinze, H., Dale, A., 2003. Focal thinning of the cerebral cortex in multiple sclerosis. Brain 126 (8), 1734–1744.

Schutzer, S. E., Angel, T. E., Liu, T., Schepmoes, A. A., Xie, F., Bergquist, J., Vecsei, L., Zadori, D., Camp, D. G. II, Holland, B. K., Smith, R. D., Coyle, P. K., 2013. Gray matter is targeted in first-attack multiple sclerosis. PLoS ONE 8 (9), e66117.

Scott, M., Bromiley, P., Thacker, N., Hutchinson, C., Jackson, A., 2009. A fast, model-independent method for cerebral cortical thickness estimation using MRI. Medical Image Analysis 13 (2), 269–285.

Ségonne, F., Dale, A., Busa, E., Glessner, M., Salat, D., et al., 2004. A hybrid approach to the skull stripping problem in MRI. NeuroImage 22 (3), 1060–1075.

Shen, D., 2009. Fast image registration by hierarchical soft correspondence detection. Pattern Recognition 42 (5), 954–961.

Sled, J. G., Zijdenbos, A. P., Evans, A. C., 1998. A nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Transactions on Medical Imaging 17 (1), 87–97.

Smith, S., 2002. Fast robust automated brain extraction. Human Brain Mapping 17 (3), 143–155.

Thompson, P. M., Lee, A. D., Dutton, R. A., Geaga, J. A., Hayashi, K. M., Eckert, M. A., Bellugi, U., Galaburda, A. M., Korenberg, J. R., Mills, D. L., Toga, A. W., Reiss, A. L., 2005. Abnormal cortical complexity and thickness profiles mapped in Williams syndrome. The Journal of Neuroscience 25 (16), 4146–4158.

Tustison, N. J., Cook, P. A., Klein, A., Song, G., Das, S. R., Duda, J. T., Kandel, B. M., van Strien, N., Stone, J. R., Gee, J. C., Avants, B. B., 2014. Large-scale evaluation of ANTs and FreeSurfer cortical thickness measurements. NeuroImage 99, 166–179.

von Economo, C., 1929. The Cytoarchitectonics of the Human Cerebral Cortex. Oxford University Press, H. Milford, Oxford.

Wang, L., Shi, F., Li, G., Shen, D., 2012. 4D segmentation of longitudinal brain MR images with consistent cortical thickness. Spatio-temporal Image Analysis for Longitudinal and Time-Series Image Data 7570, 63–75.

Yushkevich, P. A., Avants, B. B., Das, S. R., Pluta, J., Altinay, M., Craige, C., 2010. Bias in estimation of hippocampal atrophy using deformation-based morphometry arises from asymmetric global normalization: An illustration in ADNI 3T MRI data. NeuroImage 50 (2), 434–445.

Zhang, Y., Brady, M., Smith, S., 2001. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Transactions on Medical Imaging 20 (1), 45–57.