NOVEL STATISTICAL AND GEOMETRIC MODELS FOR AUTOMATED BRAIN TISSUE LABELING IN MAGNETIC RESONANCE IMAGES

by

Ko-Kai Albert Huang
B.A.Sc., The University of British Columbia, 2000
S.M., Harvard University, 2002

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

March, 2010
© Ko-Kai Albert Huang, 2010

ABSTRACT

Analysis of brain tissues such as white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), and pathological regions from magnetic resonance imaging (MRI) scans of normal adults and patients with neurodegenerative diseases such as multiple sclerosis (MS) allows for improved understanding of disease progression in vivo. As images are often confounded by acquisition noise and partial-volume effects, developing an automatic, robust, and efficient segmentation method is essential to the accurate quantification of disease severity. Existing methods often require subjective parameter tuning, anatomical atlases, and training, which are impractical and undesirable.

The contributions of this thesis are three-fold. First, a 3D deformable model was explored, integrating statistical and geometric information into a novel hybrid feature to provide robust regularization of the evolving contours. Second, to improve efficiency and noise resiliency, a 3D region-based hidden Markov model (rbHMM) was developed. The novelty of this model lies in subdividing an image into irregularly-shaped regions to reduce the problem dimensionality. A tree-structured estimation algorithm, based on Viterbi decoding, then enabled rotationally invariant estimation of the underlying discrete tissue labels given noisy observations. Third, estimation of partial volumes was incorporated into a 3D fuzzy rbHMM (frbHMM), which uses forward-backward estimation to analyze images suffering from acquisition-related resolution limitations. These methods were successfully applied to the segmentation of WM, GM, CSF, and white matter lesions.

Extensive qualitative and quantitative validations were performed on both synthetic 3D geometric shapes and simulated brain MRIs before application to clinical scans of normal adults and MS patients. These experiments demonstrated 40% and 10% improvements in segmentation efficiency and accuracy, respectively, over state-of-the-art approaches under noise. When modeling partial-volume effects, an additional 30% reduction in segmentation errors was observed. Furthermore, the rotational invariance property introduced is especially valuable, as segmentation should be invariant to subject positioning in the scanner to minimize analysis variability. Given such improvements in the quantification of tissue volumes, these methods could potentially be extended to studies of other neurodegenerative diseases such as Alzheimer's. Furthermore, the methods developed in this thesis are general and can potentially be adopted in other computer vision-related segmentation applications in the future.

TABLE OF CONTENTS

Abstract.............................................................................................................................. ii Table of Contents ..............................................................................................................iv List of Tables ...................................................................................................................
vii List of Figures....................................................................................................................ix Glossary .............................................................................................................................xi Acknowledgements ........................................................................................................ xiii Dedication ........................................................................................................................xiv Co-authorship Statement ................................................................................................xv 1 Introduction ................................................................................................................... 1 1.1 Neuroimaging ............................................................................................................ 2 1.1.1 Introduction to MRI........................................................................................... 2 1.1.2 Application of MRI to Neurodegenerative Disease Research ........................... 5 1.1.3 Analysis of Multiple Sclerosis Biomarkers as an MRI Application ................. 6 1.2 Challenges ............................................................................................................... 11 1.2.1 MRI Acquisition Artifacts ...............................................................................11 1.2.2 Scan Variability ............................................................................................... 13 1.3 Existing Work ......................................................................................................... 14 1.3.1 Atlas-based Segmentation Models .................................................................. 15 1.3.2 Geometric Segmentation Models ....................................................................16 1.3.3 Statistical Segmentation Models ..................................................................... 19 1.3.4 Segmentation Pre-processing .......................................................................... 22 1.3.5 Segmentation Validation ................................................................................. 24 1.4 Thesis Overview...................................................................................................... 25 1.5 Thesis Organization ................................................................................................28 1.6 Bibliography ............................................................................................................ 30 2 A Hybrid Geometric-Statistical Deformable Model for Automated 3D Segmentation in Brain MRI ....................................................................................... 39 2.1 Introduction ............................................................................................................. 39 2.2 Methods ...................................................................................................................44 2.2.1 Edge-Based Deformable Model ...................................................................... 44 2.2.2 Proposed Hybrid Geometric-Statistical Feature .............................................. 46 2.2.3 Segmentation of Brain MRI ............................................................................ 50 2.2.4 Extension to Multiple MR Sequence Data ...................................................... 
53 iv     2.2.5 Data Pre-processing ......................................................................................... 54 2.3 Results and Discussion ............................................................................................ 55 2.3.1 Segmentation Validation using Simulated Brain MRI .................................... 57 2.3.2 Segmentation Improvement using Multiple MR Sequences ........................... 60 2.3.3 Segmentation Comparison using Clinical MR Scans of Normal Adults ........62 2.3.4 Qualitative Performance using Clinical MR Scans of MS and AD Patients...66 2.4 Conclusions ............................................................................................................. 69 2.5 Bibliography ............................................................................................................ 70 3 A Novel Rotationally-Invariant Region-based Hidden Markov Model for Efficient 3D Image Segmentation ..............................................................................76 3.1 Introduction ............................................................................................................. 76 3.2 Methods ...................................................................................................................79 3.2.1 Computing Image Regions .............................................................................. 80 3.2.2 Region-based Hidden Markov Model ............................................................. 80 3.2.3 Model Parameter Estimation ........................................................................... 83 3.3 Results and Discussion ............................................................................................ 87 3.3.1 Robustness to Parameter Settings and Object Rotation .................................. 89 3.3.1.1 Parameter Sensitivity ............................................................................... 89 3.3.1.2 Object Rotation ........................................................................................ 91 3.3.2 Segmentation of Synthetic Geometric Shapes ................................................ 93 3.3.2.1 Accuracy .................................................................................................. 93 3.3.2.2 Efficiency .................................................................................................96 3.3.3 Segmentation of Simulated MRI Data ............................................................ 97 3.3.3.1 Accuracy .................................................................................................. 97 3.3.3.2 Object Rotation ...................................................................................... 100 3.3.4 Segmentation of Clinical MRI Data .............................................................. 103 3.4 Conclusions ........................................................................................................... 107 3.5 Bibliography .......................................................................................................... 109 4 Multi-Channel White Matter Lesion Segmentation in Multiple Sclerosis Using Region-based Hidden Markov Modeling ..................................................... 113 4.1 Introduction ........................................................................................................... 113 4.2 Methods ................................................................................................................. 
117 4.2.1 Subjects and MRI Protocol............................................................................ 117 4.2.2 Data Pre-Processing ...................................................................................... 118 4.2.3 Multi-Channel Lesion Segmentation ............................................................. 118 4.3 Results ...................................................................................................................124 4.4 Discussion ............................................................................................................. 129 4.5 Conclusions ........................................................................................................... 133 4.6 Bibliography .......................................................................................................... 134 5 A Fuzzy Region-based Hidden Markov Model for 3D Partial-Volume Classification .............................................................................................................. 137 5.1 Introduction ........................................................................................................... 137 5.2 Methods ................................................................................................................. 140 v     5.2.1 Summary of Original Region-based Hidden Markov Model (rbHMM) .......141 5.2.2 Proposed Fuzzy Region-based Hidden Markov Model (frbHMM) ..............143 5.3 Results and Discussion .......................................................................................... 148 5.3.1 Validation Using Simulated Brain MRIs ...................................................... 149 5.3.1.1 Simulated MRIs: Effects of Varying Noise Levels ............................... 150 5.3.1.2 Simulated MRIs: Effects of Varying Image and Slice Resolution ........ 152 5.3.1.3 Simulated MRIs: Effects of Region Refinement on Performance vs. Computational Cost ............................................................................... 155 5.3.1.4 Simulated MRIs: Comparisons of frbHMM vs. Other State-of-theArt Methods ........................................................................................... 157 5.3.2 Validation Using Real Clinical Brain MRIs..................................................160 5.3.2.1 Clinical MRIs: Increased Partial Volume .............................................. 162 5.3.3 Implementation Details ................................................................................. 164 5.4 Conclusions ........................................................................................................... 164 5.5 Bibliography .......................................................................................................... 166 6 Conclusions ..................................................................................................................169 6.1 A Hybrid Geometric-Statistical Deformable Model for Automated 3D Segmentation in Brain MRI .................................................................................. 170 6.1.1 Hypothesis ..................................................................................................... 170 6.1.2 Significance ................................................................................................... 170 6.1.3 Strengths and Weaknesses ............................................................................. 
171 6.1.4 Potential Applications and Future Work ....................................................... 171 6.2 A Novel Rotationally-Invariant Region-based Hidden Markov Model for Efficient 3D Image Segmentation ......................................................................... 172 6.2.1 Hypothesis ..................................................................................................... 172 6.2.2 Significance ................................................................................................... 173 6.2.3 Strengths and Weaknesses ............................................................................. 173 6.2.4 Potential Applications and Future Work ....................................................... 174 6.3 Multi-Channel White Matter Lesion Segmentation in Multiple Sclerosis Using Region-based Hidden Markov Modeling .............................................................. 175 6.3.1 Hypothesis ..................................................................................................... 175 6.3.2 Significance ................................................................................................... 175 6.3.3 Strengths and Weaknesses ............................................................................. 176 6.3.4 Potential Applications and Future Work ....................................................... 176 6.4 A Fuzzy Region-based Hidden Markov Model for 3D Partial-Volume Classification .........................................................................................................177 6.4.1 Hypothesis ..................................................................................................... 177 6.4.2 Significance ................................................................................................... 177 6.4.3 Strengths and Weaknesses ............................................................................. 178 6.4.4 Potential Applications and Future Work ....................................................... 178 6.5 Thesis Contributions ............................................................................................. 179 6.6 Related Work ........................................................................................................ 181 6.7 Bibliography .......................................................................................................... 182 Appendix A. List of Publications ..................................................................................185 vi     LIST OF TABLES Table 2.1 Table 2.2 Table 2.3 Table 2.4 Table 2.5 Table 3.1 Table 3.2 Table 3.3 Table 3.4 Table 3.5 Table 3.6 Table 3.7 Table 4.1 Table 4.2  Table 4.3 Table 5.1  Effects of regularizing contour propagation using geometric and statistical features ....................................................................................... 49 Parameter sensitivity of a simulated brain volume (9% noise, 40% inhomogeneity) .......................................................................................... 53 Quantitative evaluation of segmentation results obtained by using the geometric feature, the statistical feature, and the proposed hybrid approach on 18 simulated T1w BrainWeb scans ....................................... 59 Quantitative Evaluation of Segmentation Results Obtained by using the Proposed Method on 6 Simulated T1w/T2w/PDw BrainWeb Scans .......................................................................................................... 
62 Quantitative comparison of segmentation results on 18 real clinical T1w IBSR scans.........................................................................................64 Misclassification error rate of segmentation results obtained by using the proposed 3D rbHMM method on 3D torii with shown parameters (σ=0.5 and 0.6). ..........................................................................................90 Misclassification error rate of segmentation results obtained by using the proposed 3D rbHMM method on 3D ellipsoids with shown parameters (σ=0.5 and 0.6). ....................................................................... 90 Misclassification error rate of segmentation results obtained by using the proposed 3D rbHMM method on 3D hyperboloids with shown parameters (σ=0.5 and 0.6). ....................................................................... 91 Effect of rotation on rbHMM segmentation results of 3D ellipsoids. ........ 92 Comparison of segmentation Misclassification Error Rate: grid-based 3D HMM vs. proposed 3D rbHMM on 3D torii with shown parameters....................................................................................... 94 Comparison of segmentation Misclassification Error Rate: grid-based 3D HMM vs. proposed 3D rbHMM on 3D ellipsoids with shown parameters....................................................................................... 94 Comparison of segmentation Misclassification Error Rate: grid-based 3D HMM vs. proposed 3D rbHMM on 3D hyperboloids with shown parameters. .............................................................................94 Expert variability between the UNC rater and the CHB rater on the clinical dataset .......................................................................................... 125 Results obtained on the clinical dataset using rbHMM. Lesion segmentation results are compared to two expert segmentation results (UNC and CHB). Scores are calculated linearly given a score of 90 for average variability between the two experts ...................................... 128 Comparison of lesion segmentation performance vs. expert variability score ......................................................................................................... 132 Effects of varying noise levels (0-9%) on the MSE of classifcations based on discrete and fuzzy rbHMMs on the simulated BrainWeb MRIs of normal and MS subject anatomical models.. ............................. 152  vii     Table 5.2 Table 5.3 Table 5.4  Table 5.5 Table 5.6  Effects of reducing slice and overall image resolution on the MSE of classifcations based on discrete and fuzzy rbHMMs on the simulated BrainWeb MRIs of normal and MS subject anatomical models ............. 154 Effects of varying degree of region refinement on the MSE and runtime of classifcations using discrete and fuzzy rbHMMs on the simulated BrainWeb MRIs of a normal anatomical model ..................... 156 Comparisons of effects of increasing partial volume on the MSE of fuzzy classifcations using state-of-the-art methods (FAST, SPM, EMS) and the proposed frbHMM on the simulated BrainWeb MRIs of normal and MS subject anatomical models. ............................................ 159 Quantitative average WMVF measurement on high resolution clinical MRI data .................................................................................................. 
161 Quantitative normal appearing white matter segmentation comparisons on real clinical control (C) and patient (S) data with escalated partial volume effects ............................................................... 163  viii     LIST OF FIGURES Figure 1.1 Figure 1.2 Figure 1.3 Figure 1.4 Figure 1.5 Figure 1.6 Figure 1.7 Figure 1.8 Figure 1.9 Figure 1.10 Figure 2.1 Figure 2.2 Figure 2.3 Figure 2.4 Figure 2.5 Figure 3.1 Figure 3.2 Figure 3.3 Figure 3.4 Figure 3.5 Figure 3.6 Figure 3.7 Figure 3.8 Figure 3.9  A simplified diagram of brain MRI acquisition ...........................................5 Different 2D orthogonal views of a 3D MRI scan showing brain anatomy. ....................................................................................................... 6 Variations of MS patient disability progression over time. ......................... 8 2D axial slices of a MRI volume showing hyperintense white matter lesions. The MRI acquisition sequence is optimized to highlight chronic white matter lesions. ....................................................................... 9 Axial views of brain atrophy progression and ventricular enlargement. ... 10 Examples of various MRI modalities. ........................................................ 11 Examples of various MRI acquisition artifacts. Noise, PVE and INU corrupts the intensity profiles of the acquired MRI scans ......................... 12 A simplified diagram of artifacts in brain MRI acquisition. ...................... 12 Examples of MRI scan variability. .............................................................14 Intensity distributions of T1w/T2w/PDw MRI at varying noise levels .....21 Block diagram of the proposed MRI brain segmentation algorithm. ......... 50 Qualitative segmentation performance of two simulated T1w brain images ........................................................................................................ 58 Qualitative segmentation performance of a multiple simulated MR sequence (T1w, T2w, PDw) brain images. ................................................ 61 Qualitative segmentation performance of a real clinical T1w brain images (IBSR #08) ..................................................................................... 64 Qualitative segmentation performance of real clinical T1w brain images (UBC MS/MRI and LONI IDA) ................................................... 68 A 2D example of rbHMM. ......................................................................... 83 Effect of varying rotational angles (0°~90°) on the proposed rbHMM segmentation performance of 3D ellipsoids. ............................................. 93 rbHMM segmentation results on 3D torus images with parameters a=20 and c=40 ........................................................................................... 95 rbHMM segmentation results on 3D ellipsoid images with parameters a=40, b=20 and c=40 .................................................................................95 rbHMM segmentation results on 3D hyperboloid images with parameters a=10, b=30 and c=30 ............................................................... 96 Comparisons of segmentation accuracy of the proposed 3D rbHMM and the grid-based HMM on simulated 3D brain MRI images ................. 98 3D simulated BrainWeb T1-weighted brain MR images showing effect of varying noise levels on segmentation performance..................... 
99 Comparisons of segmentation accuracy of the proposed 3D rbHMM, the block-based HMM and the FAST HMRF on simulated 3D brain MRI images .............................................................................................. 100 Comparisons showing effect of varying rotational angles on segmentation accuracy of the proposed 3D rbHMM and the gridbased HMM on simulated 3D brain MRI images .................................... 101  ix     Figure 3.10 Figure 3.11 Figure 3.12 Figure 3.13 Figure 4.1 Figure 4.2  Figure 4.3 Figure 4.4 Figure 4.5 Figure 5.1 Figure 5.2 Figure 5.3 Figure 5.4 Figure 5.5 Figure 5.6  3D simulated BrainWeb T1-weighted brain MR images showing effect of (a) 0°, (b) 15°, (c) 30° and (d) 45° rotational angles on segmentation performance ....................................................................... 102 Intensity histograms of MR images (excluding background) showing the differences between (a) simulated BrainWeb (3% noise) and (b) clinical IBSR V2.0 (#12) scans................................................................104 Comparisons of segmentation accuracy of proposed 3D rbHMM and grid-based HMM on clinical 3D IBSR V2.0 brain MRI images ............. 105 3D clinical IBSR V2.0 T1-weighted brain MR images (normal adult scan #06) showing segmentation accuracy of the proposed 3D rbHMM method. ...................................................................................... 106 Simplified diagrams of 1D and different higher dimensional HMMs ..... 116 An example (a) T2w-FLAIR intensity distribution where lesions are not directly observable, (b) the initialization of class means and standard deviations as illustrated by green ellipses, (c) results of all classes after 10 iterations (the lesion class is shown by a blue ellipse), and (d) T2w-FLAIR intensity distribution of manually segmented lesions only with the lesion class shown by the same blue ellipse .......... 123 Segmentation results of two slices from subject #02 of the clinical dataset ...................................................................................................... 127 Segmentation results of two slices from subject #07 of the clinical dataset. ..................................................................................................... 127 TOADS segmentation results of one slice each from subject #02 and #07 of the clinical dataset ........................................................................ 132 Image (grayscale) and watershed subdivision (green) .............................148 Experimental results on normal and MS simulated BrainWeb MRI scans at varying noise levels ....................................................................151 Experimental results on normal simulated BrainWeb MRI scans at varying slice resolution ............................................................................ 155 Experimental results on normal simulated BrainWeb MRI scans at varying degree of region refinement ........................................................ 157 Experimental results produced by FSL FAST, SPM and EMS on simulated BrainWeb MRI scans smoothed by a Gaussian with standard deviation=2.0 ............................................................................. 160 Example results on real 3D T1-weighted clinical MRI scans .................. 
162  x     GLOSSARY 2D 3D AD ADNI BET BN CHB CNS CSF CT DFT EDSS EM EMS FAST FEG FLAIR FLASH FSE fMRI frbHMM GB Gd GHz GM GMM GRE GRF HMM HMRF ICDA IBSR INU IR KL k-NN LDA M3DLS MAP MNI MSD MSE MR MRF  Two dimensional Three dimensional Alzheimer’s disease Alzheimer’s Disease Neuroimaging Initiative Brain Extraction Tool Bayesian networks Children’s Hospital Boston Central nervous system Cerebrospinal fluid Computed tomography Discrete Fourier transform Expanded disability status scale Expectation maximization Expectation-Maximization Segmentation FMRIB’s Automated Segmentation Tool Frequency encode gradient Fluid attenuated inversion recovery Fast low angle shot Fast spin echo Functional magnetic resonance imaging Fuzzy region-based hidden Markov model Gigabyte Gadolinium Gigahertz Gray matter Gaussian mixture model Gradient echo Gibbs random field Hidden Markov model Hidden Markov random field Independent component discriminant analysis Internet Brain Segmentation Repository Intensity non-uniformity Inversion recovery Kullback-Leibler K-nearest neighbors Linear discriminant analysis Multiphase 3D Levelset Maximum a posteriori Montreal Neurological Institute Mean square difference Mean square error Magnetic resonance Markov random field xi      MRI MS N3 NM PCDA PD PDw PEG PPMS PRMS PV PVE rbHMM RF RRMS SE SNR SPGR SPM SPMS STIR SVM T1w T2w TE TI TOADS TR TSE UNC US VBM WM WML  Magnetic resonance imaging Multiple sclerosis Nonparametric nonuniformity normalization Nuclear medicine Principal component discriminant analysis Parkinson’s disease Proton density-weighted Phase encoding gradient Primary progressive multiple sclerosis Progressive multiple sclerosis Partial volumes Partial-volume effect Region-based hidden Markov model Radio frequency Relapse remitting multiple sclerosis Spin echo Signal to noise ratio Spoiled gradient echo Statistical Parametric Mapping Secondary progressive multiple sclerosis Short-T1 inversion recovery Support vector machine T1 (spin-lattice relaxation)-weighted T2 (spin-spin relaxation)-weighted Time to echo Time of inversion Topologically-Preserving Anatomically-Driven Segmentation Repetition time Turbo spin echo University of North Carolina Ultrasound Voxel-based morphometry White matter White matter lesion  xii     ACKNOWLDEGEMENTS I offer my enduring gratitude to all the people, who have helped and inspired me during my doctoral study. I want to thank my advisor, Dr. Rafeef Abugharbieh, for her invaluable guidance and supervision during my study at the University of British Columbia (UBC). I would also like to thank our collaborators at the UBC MS/MRI Research Group – Dr. Anthony Traboulsee, Dr. David Li, and especially Dr. Roger Tam for his knowledge and insights throughout all these years. Special thanks go to all the present and past members of the Biomedical Signal and Image Computing Laboratory (BiSICL) for your companies and friendship. Sincere thanks to all my examination committee members – Dr. Alex McKay, Dr. Robert Rohling, and Dr. Purang Abolmaesumi for their time and valuable feedbacks. I also want to thank my examination committee chair, Dr. Ravindra Shah, for his positive encouragement. I want to thank NSERC for their generous support and funding, and Dr. Martin Styner at UNC, Dr. Corree Laule at the UBC MRI Research Centre, the UBC MS/MRI Research Group, and LONI ADNI (Principal Investigator: Michael Weiner; NIH grant U01 AG024904) at UCLA for data collection and sharing. 
My deepest gratitude goes to my family for their unflagging love and support throughout my life - my father, Chin-Chin, who has always believed in me; my mother, Shueh-O, who has kept me humbled throughout these years, my sister, Valerie, for her endless love and joy for her spoiled brother, and especially Jean for keeping me focused on what matters the most in life.  xiii     DEDICATION  To my family  xiv     CO-AUTHORSHIP STATEMENT A HYBRID GEOMETRIC-STATISTICAL DEFORMABLE MODEL FOR AUTOMATED 3D SEGMENTATION IN BRAIN MRI (Chapter 2) Identification and design of research program This work is carried out under the guidance and feedback of the thesis supervisor and coauthors. I participated in meetings identifying the segmentation problem and feasible approaches, and planning the validation process. I proposed combining both geometric and statistical information in a novel hybrid feature in a deformable model for this brain tissue segmentation task. Performing the research I designed, developed and implemented all software tools using C++ and MATLAB. The magnetic resonance imaging (MRI) scans analyzed were acquired by technologists of respective sources. Data analyses I performed all data analyses, including accuracy validation and statistical tests. Manuscript preparation I prepared the manuscript and figures and did the original writing. The thesis supervisor and co-authors provided guidance and editing.  xv     A  NOVEL  ROTATIONALLY-INVARIANT  REGION-BASED  HIDDEN  MARKOV MODEL FOR EFFICIENT 3D IMAGE SEGMENTATION (Chapter 3) Identification and design of research program This work is carried out under the guidance and feedback of the thesis supervisor and coauthors. I identified the need to improve the efficiency of segmentation by employing a region-based model. I proposed a novel region-based hidden Markov model to efficiently represent 3D image data and devised a novel rotationally-invariant parameter estimation scheme. Performing the research I designed, developed and implemented all software tools using C++ and MATLAB. The magnetic resonance imaging (MRI) scans analyzed were acquired by technologists of respective sources. Data analyses I performed all data analyses, including accuracy validation and statistical tests. Manuscript preparation I prepared the manuscript and figures and did the original writing. The thesis supervisor and co-authors provided guidance and editing.  xvi     MULTI-CHANNEL MULTIPLE  WHITE  SCLEROSIS  MATTER  USING  LESION  REGION-BASED  SEGMENTATION HIDDEN  IN  MARKOV  MODELING (Chapter 4) Identification and design of research program This work is carried out under the guidance and feedback of the thesis supervisor and coauthors. I investigated the application of the region-based hidden Markov model in segmenting white matter lesions. I proposed using a data-driven initialization, combined with a rule-based lesion selection scheme similar to existing methods, to automate this segmentation task. Performing the research I designed, developed and implemented all software tools using C++ and MATLAB. The magnetic resonance imaging (MRI) scans analyzed were acquired by technologists of respective sources. Data analyses I performed all data analyses, including accuracy validation and statistical tests. Manuscript preparation I prepared the manuscript and figures and did the original writing. The thesis supervisor and co-authors provided guidance and editing.  
A FUZZY REGION-BASED HIDDEN MARKOV MODEL FOR 3D PARTIAL-VOLUME CLASSIFICATION (Chapter 5)

Identification and design of research program
This work is carried out under the guidance and feedback of the thesis supervisor and coauthors. I identified the need to accurately model partial-volume effects by employing a mixel representation such that each voxel was represented by a combination of different tissue types. I proposed applying an iterative forward-backward algorithm for the parameter estimation process in a region-based hidden Markov model.

Performing the research
I designed, developed and implemented all software tools using C++ and MATLAB. The magnetic resonance imaging (MRI) scans analyzed were acquired by technologists of the respective sources.

Data analyses
I performed all data analyses, including accuracy validation and statistical tests.

Manuscript preparation
I prepared the manuscript and figures and did the original writing. The thesis supervisor and co-authors provided guidance and editing.

1. INTRODUCTION

Digitized or computerized biomedical imaging technologies showing detailed structures and functions of the body, such as X-ray radiography, fluoroscopy, microscopy, computed tomography (CT), ultrasound (US), magnetic resonance imaging (MRI), functional MRI (fMRI), and nuclear medicine (NM), have recently become integral tools for the medical and healthcare communities in disease diagnosis, disease progression monitoring, pre-operative planning, intra-operative visualization, and treatment assessment. Due to the increasing dimensionality and resolution of these imaging techniques, the constant influx of large amounts of digital data has made effective manual analysis difficult, costly, and sometimes impossible, prompting the development of fast, automated, reliable, and reproducible techniques to perform the otherwise time-consuming and practically infeasible tasks of medical image analysis such as segmentation, registration, shape analysis, and data modeling.

This thesis describes the development and validation of an automatic three-dimensional (3D) segmentation algorithm for digital volumetric neuroimaging data, in particular multi-channel MRI scans. It aims to present an efficient, accurate, and robust process for segmenting various brain tissues, i.e. white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), and pathological regions, from MRI acquisitions of normal adults and of patients with neurodegenerative diseases such as multiple sclerosis (MS). This allows for improved understanding of the central nervous system (CNS), in particular the quantification of disease progression. The overall costs of MS can thus be reduced, and the quality of life of neurodegenerative disease patients improved, by facilitating and accelerating disease and drug treatment studies.

This introduction provides an overview of neuroimaging and magnetic resonance imaging, as well as how MRI is applied to the study of MS. It also highlights major challenges encountered in the analysis of such medical images and presents an overview of the proposed research.

1.1 NEUROIMAGING

1.1.1 Introduction to MRI

MRI provides a non-invasive mechanism for examining the human body. Unlike X-ray or CT, MRI does not use ionizing radiation and is most effective for examining soft tissues such as the brain, spinal cord, muscles, ligaments, and tendons. MRI functions by measuring the density and relaxation times of hydrogen atoms in tissues.
Water molecules, which make up approximately 78% of the human brain [1] (70% in white matter and 80% in gray matter [2]), contain two hydrogen nuclei, or protons, that intrinsically spin, precessing like a spinning top about a randomly oriented axis. In the absence of a magnetic field, the sum of all moments, or net magnetization, in the body is zero. When placed inside an external magnetic field, B0, their angular momentum and magnetic moments cause these spinning hydrogen protons to align either along (parallel) or opposite (anti-parallel) to B0, as described by Boltzmann statistics, with the protons precessing at the Larmor frequency:

f_0 = \frac{\gamma}{2\pi} B_0    (1.1)

where γ is the gyromagnetic ratio of the nucleus (γ/2π = 42.576 MHz/Tesla for hydrogen nuclei [3]). In addition, because the parallel alignment has a lower energy state, more protons align along rather than opposite to B0, thus generating a net equilibrium magnetic moment, M. Applying a transient radio frequency (RF) field pulse, B1, orthogonal to B0 at the Larmor frequency causes the nuclei, and the resulting M, to reorient with respect to the cumulative magnetic field at a flip angle towards the applied B1. When B1 is switched off, M precesses about B0 and gradually realigns with it. Depending on the tissues imaged, this relaxation is characterized by the T1 (spin-lattice) and T2 (spin-spin) time constants. T1 is the time required for 63% of M to return to the stable longitudinal state, whereas T2 is the time for the transverse magnetization to decay to 37% of its initial value as the spinning nuclei lose phase coherence in the absence of B1. Both time constants form the basis of image contrast in MRI.

To distinguish different locations of protons in order to produce an MRI image (Figure 1.1), a linear magnetic field gradient is generally applied across B0. Thus, to select a particular slice in the z-direction, one only needs to produce an RF pulse consistent with the range of Larmor frequencies resulting from the presence of the gradient field. Furthermore, a phase encoding gradient (PEG) is applied to cause phase shifts in the precessing nuclei along the y-direction, and a linear readout or frequency encode gradient (FEG) is used to encode the position of the signal along the x-direction. Two-dimensional images can thus be reconstructed using the 2D Discrete Fourier Transform (DFT). Similarly, a second PEG provides additional phase information that enables 3D reconstruction from signals encoded in a 3D Fourier space.
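To make this acquisition model concrete, the short sketch below (written in Python with NumPy for brevity, rather than the C++/MATLAB used for the thesis implementations) evaluates the Larmor relation of Eq. (1.1) and reconstructs a toy image from a fully sampled Cartesian k-space with the inverse 2D DFT. The field strengths and the square phantom are illustrative assumptions only and do not correspond to any scan used in this work.

```python
import numpy as np

GAMMA_BAR = 42.576e6  # gamma / (2*pi) for hydrogen, in Hz per Tesla (value quoted in the text)

def larmor_frequency_hz(b0_tesla: float) -> float:
    """Precession frequency f0 = (gamma / 2*pi) * B0, Eq. (1.1)."""
    return GAMMA_BAR * b0_tesla

# Illustrative field strengths (assumed, not specific to the scanners used in this thesis).
for b0 in (1.5, 3.0):
    print(f"B0 = {b0} T -> f0 = {larmor_frequency_hz(b0) / 1e6:.1f} MHz")

# Image reconstruction from k-space via the inverse 2D DFT.
# The "acquired" k-space is simulated here by forward-transforming a toy square phantom.
phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(phantom))            # simulated fully sampled acquisition
reconstruction = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
print("Exact recovery:", np.allclose(reconstruction, phantom))   # True for noise-free, fully sampled data
```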
Furthermore, the sequence of RF and gradient pulses makes up the pulse sequence. By adjusting the ordering of these pulses and pulse parameters such as the repetition time (TR), time to echo (TE), and time of inversion (TI), the acquired MRI images can be made sensitive to specific tissues. Some commonly used pulse sequences are:

• Spin echo (SE) / Fast spin echo (FSE) / Turbo spin echo (TSE): A 90º slice-selective pulse followed by a 180º re-focusing pulse is used to produce T1-weighted (T1w), T2-weighted (T2w), and proton density-weighted (PDw) images based on varying TR and TE. Generally, changing TR will influence the T1 weighting, whereas changing TE will influence the T2 weighting. Using a second 180º re-focusing pulse allows a second image to be simultaneously captured, e.g. T2w (long TE) and PDw (short TE) image pairs.

• Gradient recalled echo (GRE) / Fast low angle shot (FLASH) / Spoiled gradient (SPGR): Removing the re-focusing pulse and using a smaller flip angle (<90º) allows faster acquisition of images with shorter TR and TE. A larger flip angle provides more T1 weighting and a smaller angle provides less T1 weighting.

• Inversion recovery (IR) / Short-tau inversion recovery (STIR) / Fluid-attenuated inversion recovery (FLAIR): A 180º preparation pulse followed by a normal imaging pulse sequence after TI provides specific tissue suppression and strong tissue contrast in the MRI image, e.g. fat (STIR) and fluid (FLAIR).

Figure 1.1. A simplified diagram of brain MRI acquisition: the inherent tissue characteristics (T1, T2, density) of WM, GM, CSF, lesions, and other tissues, together with the imaging parameters (TE, TR, TI) of the magnet and coils, determine the perceived tissue characteristics in k-space, which are Fourier-reconstructed into the MRI image.

1.1.2 Application of MRI to Neurodegenerative Disease Research

By using MRI, neurologists are able to examine the brain structures of patients with MS [4, 5] or other neurodegenerative diseases such as Alzheimer's disease (AD) [6, 7] and Parkinson's disease (PD) [8, 9, 10]. These diseases are recognized as a major health concern at the present time. Caused by many different factors, including but not limited to genetic abnormalities, immune system dysfunction, and injuries to the brain and nervous system, these neurological disorders are diverse, chronic, challenging to treat, and often disabling.

The ability to non-invasively diagnose these diseases systematically and quantitatively in vivo with the images produced by MRI brings exciting developments not only to the study of pathology, but also to the assessment of drug treatments during clinical trials. Non-invasive structural neuroimaging is a fairly recent discipline combining radiology, physics, medicine, and neuroscience. Using magnetic fields and radio waves, without injecting radioactive tracers, high-quality 3D images of brain structures (Figure 1.2) can be created to show normal soft brain tissue and tissue damage or disease, such as infection, inflammation, or a tumour. Specific orientations can be generated, and data can be saved for future analysis.

Figure 1.2. Different 2D orthogonal views of a 3D MRI scan showing brain anatomy: (a) axial, (b) sagittal, and (c) coronal views, with WM, GM, CSF, and the eye indicated. Sources: Generated with data from the simulated BrainWeb database, MNI.

1.1.3 Analysis of Multiple Sclerosis Markers as an MRI Application

More specifically, MRI has been widely used in research studies of MS [11]. MS is an autoimmune disease involving damage to the CNS, which consists of the brain, spinal cord, and optic nerves. The CNS is responsible for transmitting communication signals both within the CNS and between the CNS and the nerves traveling to the rest of the body. The loss of myelin, a fatty tissue surrounding and protecting the nerve fibres of the CNS, is generally observed in MS and leaves scar tissue called sclerosis. Frequently, the nerve fibre itself is damaged, resulting in deteriorating CNS function in MS patients over time and causing debilitating symptoms with no known cure. The most common complications are sensory (40%), visual (35%), motor (21%), brainstem (16%), cerebellar (15%), and bladder (4%), leading to fatigue, depression, tremors, stiffness, paralysis, optic neuritis, and other symptoms [12].

Disabilities related to MS are often categorized using the 10-point Expanded Disability Status Scale (EDSS), with a score of 0 indicating no abnormalities and a score of 10 indicating death due to MS [13].
MS affects approximately 2.5 million people worldwide, mostly young adults between the ages of 15 and 40 [14]. Studies have shown the average annual cost of MS per patient, both direct and indirect, to be approximately US$13,500 to US$46,600 [15, 16, 17], depending on the disease state (mild to severe). This translates to more than US$940 million per year in MS-related costs in Canada alone. In order to minimize the overall costs of MS and improve the quality of life for MS patients, providing early treatment before disability accumulates is essential.

However, MS is an unpredictable neurological disease and each individual's disease progression is unique. MS has been categorized into four main phenotypes (Figure 1.3) [14]:

• Primary progressive (PPMS): 10% of people diagnosed with MS have PPMS, which affects men and women equally. Disability accumulates without relapses.

• Relapsing-remitting (RRMS): 85% of people diagnosed with MS have RRMS at the time of diagnosis. RRMS is characterized by episodes or attacks and recovery or remission between attacks.

• Progressive-relapsing (PRMS): 5% of people diagnosed with MS suffer from PRMS, in which both disease accumulation and attacks are clearly observed. PRMS is sometimes established in PPMS patients who experience an acute attack.

• Secondary progressive (SPMS): SPMS follows an initial course of RRMS with a steady worsening of disease over time. Approximately 50% of people with RRMS will develop SPMS within 10 years of diagnosis.

Figure 1.3. Variations of MS patient disability progression over time for the PPMS, RRMS, PRMS, and SPMS phenotypes.

In order to track disease progression, two main markers have been identified [18]. The first marker is lesions, or plaques, resulting from tissue damage in neurons of the white matter (Figure 1.4). These neurons are responsible for nerve signal transmission within the brain and from the CNS to the rest of the body; when they are damaged, they do not function efficiently and thus give rise to various MS symptoms. However, these lesions appear in a loosely distributed fashion: they change in size over time or merge with other lesions, and new lesions might appear at different sites; thus, lesions cannot be easily tracked. Lesions have been shown to be related to relapses in MS but are not clearly related to the progressive disability of patients [19].

Figure 1.4. 2D axial slices of an MRI volume showing hyperintense white matter lesions. The MRI acquisition sequence is optimized to highlight chronic white matter lesions. Sources: Generated with data from the MS Lesion Segmentation Challenge 2008 database, UNC/CHB.

The second marker is tissue atrophy (Figure 1.5) due to irreversible neural-axonal loss in the brain and spinal cord. Cerebral atrophy can be focal, affecting only a limited area of the brain, or can shrink the entire brain, resulting in progressive reduction of CNS function. A typical rate of brain atrophy observed for patients with MS has been shown to be 0.6% to 0.8% annually, which is two to three times the normal rate and is thus a good indicator of disease progression [20, 21, 22].

Figure 1.5. Axial views of brain atrophy progression and ventricular enlargement: (a) normal brain, (b) atrophied brain. Sources: Generated with data from the MS/MRI Research Group, UBC.
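As a small worked example of how the atrophy marker can be quantified, the sketch below annualizes a percentage brain-volume change from two volume measurements. The volumes and scan interval are invented numbers, chosen only so that the result falls near the 0.6% to 0.8% annual loss cited above.

```python
def annualized_atrophy_rate(volume_baseline_ml, volume_followup_ml, interval_years):
    """Percent brain volume change per year between two time points (negative = atrophy)."""
    pbvc = 100.0 * (volume_followup_ml - volume_baseline_ml) / volume_baseline_ml
    return pbvc / interval_years

# Hypothetical segmented brain volumes (mL) at baseline and at a 2-year follow-up.
rate = annualized_atrophy_rate(1200.0, 1183.2, 2.0)
print(f"{rate:.2f}% per year")   # -0.70% per year, within the 0.6-0.8% annual loss range cited for MS
```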
Various MRI modalities can be used for different imaging studies. For example, gray matter and white matter are typically best distinguished with T1w MRI (Figure 1.6a). T2w MRI reveals both MS lesions and CSF as hyperintense (Figure 1.6b), whereas PDw MRI suppresses CSF while keeping lesions hyperintense (Figure 1.6e). FLAIR MRI suppresses CSF even further (Figure 1.6c). Enhanced T1w MRI (Figure 1.6d) shows enhancing lesions as hyperintense but requires the use of a contrast-enhancing agent such as gadolinium (Gd). It is worth noting that recent developments in high-field MRI and 3D acquisition provide superior performance, with the signal-to-noise ratio (SNR) scaling approximately linearly with field strength, and enable higher resolution scans.

Figure 1.6. Examples of various MRI modalities: (a) T1-weighted, (b) T2-weighted, (c) FLAIR, (d) Gd-enhanced, and (e) PD-weighted. Sources: Generated with data from (a-c) the MS Lesion Segmentation Challenge 2008 database, UNC/CHB, and (d-e) the MS/MRI Research Group, UBC.

1.2 CHALLENGES

Robust and accurate segmentation of brain MRI images into WM, GM, CSF, and pathological regions allows for systematic and quantitative analysis of both lesions and brain atrophy. Manual segmentation of such 3D volumetric MRI data is, however, very difficult and time-consuming, and is prone to intra- and inter-rater errors. Automatic methods are needed to provide accurate, robust, and reproducible alternatives by incorporating and integrating all available information from different modalities.

1.2.1 MRI Acquisition Artifacts

Automating segmentation is not a straightforward process, due to the noise and interference that corrupt MRI scans. The signal-to-noise ratio of the acquired image depends on factors that are either under the user's control, such as the coil and sequence parameters used and patient motion, or beyond the operator's control or fixed, e.g. scanner noise, field inhomogeneity, and tissue characteristics. Three major artifacts are generally considered in MRI analysis: partial volume effects (PVE), bias field or intensity non-uniformity (INU), and noise (Figure 1.7). In addition to common image acquisition noise, which corrupts an image, PVE causes blurring of edges resulting from the mixing of different tissue types within an acquired voxel due to limited imaging resolution, whereas INU introduces a low-frequency intensity variation due to inhomogeneity in the magnetic field or non-uniformity in the coil (Figure 1.8). All of these artifacts are problematic for most automatic segmentation algorithms, which assume that similar tissue types share similar intensity profiles in an MRI scan.

Figure 1.7. Examples of various MRI acquisition artifacts: (a) artifact-free, (b) noise, (c) PVE, and (d) INU. Noise, PVE and INU corrupt the intensity profiles of the acquired MRI scans. Sources: Generated with data from the simulated BrainWeb database, MNI.

Figure 1.8. A simplified diagram of artifacts in brain MRI acquisition: neighbouring tissues (partial volume), the bias field, and acquisition noise corrupt the perceived tissue characteristics in k-space before Fourier reconstruction into the MRI image.
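The following sketch illustrates this artifact model on a synthetic 2D slice: partial-volume mixing is approximated by local blurring, INU by a smooth multiplicative bias field, and acquisition noise by an additive Gaussian term (a simplification of the true MRI noise statistics). The blur width, bias amplitude, and noise level are arbitrary illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic piecewise-constant "tissue" slice: three intensity levels standing in for CSF/GM/WM.
clean = np.full((128, 128), 0.3)
clean[32:96, 32:96] = 0.6
clean[48:80, 48:80] = 0.9

# Partial-volume effect: mixing of tissue types within a voxel, approximated by local averaging (blur).
pve = gaussian_filter(clean, sigma=1.5)

# Intensity non-uniformity: a smooth, low-frequency multiplicative bias field.
y, x = np.mgrid[0:128, 0:128]
bias = 1.0 + 0.2 * np.sin(np.pi * x / 128) * np.sin(np.pi * y / 128)
inu = pve * bias

# Acquisition noise: additive Gaussian noise.
noisy = inu + rng.normal(scale=0.05, size=inu.shape)

print("intensity range, clean   :", clean.min(), clean.max())
print("intensity range, degraded:", round(noisy.min(), 2), round(noisy.max(), 2))
```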
1.2.2 Scan Variability

In combination with the aforementioned artifacts in the MRI data acquisition process, variability arising from practical acquisition limitations also presents additional challenges to subsequent analysis. These include (Figure 1.9):

• Cross-sectional: Wide anatomic and pathological variability exists between the brains of the subjects imaged. Even though all brains share the same general structures, each acquired MRI scan is unique, as each individual is unique, with a different identity, age, gender, ethnicity, and disease state.

• Longitudinal: The unpredictable time-progression of neurodegenerative diseases such as MS often requires patients to undergo repeated scans for tracking of the disease or treatment effect. As the brain atrophies over time, and abnormalities such as lesions appear or disappear, the brain MRI scans may vary.

• Scan-rescan: Repeated acquisition of the same subject using the same scanner after repositioning can introduce subtle variability in the acquired MRI scans due to factors such as the flow of contrast agent or changes in brain volume due to hydration levels.

• Inter-scanner: In a large multi-centre study of MS, MRI scans are often acquired with different MRI scanners by different personnel under different environmental settings. The acquired images are affected by the levels of quality control and standardization of procedures across these sites.

All of these can potentially confound most automatic segmentation algorithms.

Figure 1.9. Examples of MRI scan variability: (a) elongated vs. round heads with black holes, (b) baseline vs. month 3, (c) varying head positions and Gd-enhancement, and (d) Siemens vs. GE scanners. Sources: Generated with data from (a-b) the MRI Research Centre, UBC, and (c-d) the MS/MRI Research Group, UBC.

1.3 EXISTING WORK

Segmentation in general is a well-researched area and has been widely applied in many fields, including computer vision, robotics, and satellite imagery. However, segmentation for medical image analysis is typically a much more daunting task. A segmentation algorithm must not only reproduce its results over different scans, in different studies, with different patients at different disease states, but must also achieve accuracies high enough to distinguish small changes over the course of disease progression or between populations.

1.3.1 Atlas-based Segmentation Models

One class of segmentation approaches transforms the segmentation problem into a registration problem. Registration is the process of overlaying two images using geometric alignment. In general, registration takes a fixed image, I_A, and aligns a moving image, I_B, to it via a geometric transformation, T. To obtain the optimal transformation T, a metric, L, is defined between the two images of interest, and T is obtained by optimizing the objective function E_REG, which is the metric L evaluated between image I_A and the transformed image of I_B (1.2). If a difference metric such as the mean square error is used, minimization is carried out; if a correlation metric such as normalized correlation or mutual information is used, maximization is carried out instead.

E_{REG}(T) = L\big(I_A, T(I_B)\big)    (1.2)

In the case of aligning intra-subject MRI scans, linear rigid registration methods are often used, since human skulls are rigid and only changes in head positioning are assumed [23, 24, 25, 26]. When handling intra-subject scans in a longitudinal study, anatomical differences might nevertheless be present due to inflammation, remyelination, and neurodegeneration such as brain atrophy due to MS.
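A minimal sketch of the registration objective in Eq. (1.2): it evaluates a difference metric (mean squared error, minimized) and a correlation metric (normalized cross-correlation, maximized) for a toy transformation T consisting of an integer translation. A real rigid registration would also optimize over rotations and sub-voxel shifts with interpolation, which is omitted here; the images and search range are invented for illustration.

```python
import numpy as np

def mse(fixed, moving):
    """Difference metric L: mean squared error (lower is better)."""
    return float(np.mean((fixed - moving) ** 2))

def ncc(fixed, moving):
    """Correlation metric L: normalized cross-correlation (higher is better)."""
    a = fixed - fixed.mean()
    b = moving - moving.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def translate(image, shift):
    """Toy stand-in for T(I_B): integer translation with wrap-around."""
    return np.roll(image, shift, axis=(0, 1))

rng = np.random.default_rng(1)
fixed = rng.random((64, 64))
moving = translate(fixed, (3, -2))          # I_B is I_A displaced by a known offset

# Exhaustive search over candidate translations: minimize E_REG(T) = L(I_A, T(I_B)), Eq. (1.2).
candidates = [(dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)]
best = min(candidates, key=lambda s: mse(fixed, translate(moving, s)))
print("aligning shift:", best, "ncc at optimum:", round(ncc(fixed, translate(moving, best)), 3))
```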
In addition, when matching inter-subject MRI scans, or when matching MRI scans to a generic tissue atlas derived from a known ground truth or to a template derived from a population mean for the purpose of tissue labelling, nonlinear registration is required for spatial normalization [27]. In nonlinear registration, the global nonlinear differences between the positions of brain structures can be described using a deformation model such as a viscous fluid model [28], which allows large-scale nonlinear deformations while maintaining topology, but at very high computational cost. To reduce complexity by reducing the number of parameters to be estimated, a B-spline free-form deformation [29] can be applied locally. Another approach employs a linear combination of smooth basis functions [30] such that the residual squared differences between the images are minimized while the smoothness of the deformation is maximized. Once co-registered, various statistical methods, such as mixture model clustering for tissue labelling [31] and voxel-based morphometry (VBM) for the parametric statistical analysis of GM [32], can be applied. Registration-based segmentation methods have been shown to be robust to acquisition artifacts, as template registration is capable of aligning the major underlying structural features. Integrating registration, segmentation, and artifact correction in iterative frameworks has also been shown to be robust [33, 34, 35, 36]. However, ensuing discussions [37, 38, 39, 40] raise concerns regarding the use of templates of one population when analyzing data from another population, and this topic remains controversial. For example, morphing patient scans with pathologically enlarged ventricles to match a normal template could potentially distort the surrounding tissues in an unpredictable manner. Such structural differences might lead to systematic biases and misregistration errors that are difficult to quantify. In addition, the wide variety of features, shapes, and deformations in actual brain scans with pathologies adds another level of complication to template generation and to the already non-trivial segmentation problem.

1.3.2 Geometric Segmentation Models

A second class of segmentation approaches relies on geometric image features, such as methods based on edge gradients and seeded region growing [41, 42, 43], and on fuzzy affinity and connectedness [44, 45] between pixels P_A and P_B (1.3):

\mu(P_A, P_B) = \begin{cases} h\big(f(P_A), f(P_B)\big), & \lVert P_A - P_B \rVert \le 1 \\ 0, & \text{otherwise} \end{cases}    (1.3)

where h is a user-defined scalar-valued function with values in [0, 1] and f is the pixel intensity function. Most of these methods, however, are parametric, such that the algorithm definitions are either application or modality specific. To accommodate the variability that exists in medical images, deformable model-based approaches such as snakes or active contours [46, 47] have been shown to be highly promising. A snake is an elastic curve, u, defined by a set of point coordinates, and the optimal curve delineating region boundaries is defined as the one that minimizes the energy functional:

E(u) = \int_0^1 \Big[ E_{INT}\big(u(q)\big) + E_{EXT}\big(u(q)\big) \Big]\, dq = \int_0^1 \Big[ \alpha \lvert u'(q) \rvert^2 + \beta \lvert u''(q) \rvert^2 + \lambda\, g\big(\lvert \nabla f(u(q)) \rvert\big) \Big]\, dq    (1.4)

where q is the parameterization of the curve u, E_INT is the internal regularizing energy, E_EXT is the external attracting energy, f is the intensity function, g is a general positive-valued monotonically-decreasing function of the image gradient acting as an edge detector, and α, β and λ are weighting terms.
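To make Eq. (1.4) concrete, the sketch below evaluates a discretized snake energy for a closed contour sampled at N points. Finite differences approximate |u'|² and |u''|², and the external term uses g(|∇f|) = 1/(1 + |∇f|²) as one common choice of edge function; the weights, test image, and circular contours are illustrative assumptions, and no curve evolution is performed.

```python
import numpy as np

def snake_energy(contour, image, alpha=0.1, beta=0.05, lam=1.0):
    """Discrete approximation of Eq. (1.4) for a closed contour of shape (N, 2) in (row, col)."""
    d1 = np.roll(contour, -1, axis=0) - contour                                   # u'(q), forward difference
    d2 = np.roll(contour, -1, axis=0) - 2 * contour + np.roll(contour, 1, axis=0)  # u''(q)
    e_int = alpha * np.sum(d1 ** 2, axis=1) + beta * np.sum(d2 ** 2, axis=1)

    gy, gx = np.gradient(image)                      # image gradient of f
    g = 1.0 / (1.0 + gx ** 2 + gy ** 2)              # edge detector g, decreasing in |grad f|
    rows = np.clip(np.round(contour[:, 0]).astype(int), 0, image.shape[0] - 1)
    cols = np.clip(np.round(contour[:, 1]).astype(int), 0, image.shape[1] - 1)
    e_ext = lam * g[rows, cols]

    return float(np.sum(e_int + e_ext))

# Toy image: a bright disk on a dark background.
yy, xx = np.mgrid[0:100, 0:100]
image = ((yy - 50) ** 2 + (xx - 50) ** 2 < 20 ** 2).astype(float)

# Two circular contours: one on the disk boundary, one well inside the flat region.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
on_edge = np.stack([50 + 20 * np.sin(theta), 50 + 20 * np.cos(theta)], axis=1)
inside = np.stack([50 + 5 * np.sin(theta), 50 + 5 * np.cos(theta)], axis=1)

# g is small where |grad f| is large, so the contour on the disk boundary accrues less external energy.
print("energy on edge:", round(snake_energy(on_edge, image), 2))
print("energy inside :", round(snake_energy(inside, image), 2))
```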
One major drawback is that snakes require the definition and tracking of parameterizations q, which identifies a set of points for the evolving curve. Thus, topological changes such as curve merging and splitting 17  have to be handled explicitly. Using a level set implementation [48], the geometric active contour [49, 50, 51] solves this deformable front propagation problem by employing an implicit representation of the evolving curve:      (1.5)  where L is the length of the curve u and ds is the Euclidean arc-length. Deformable models, however, are prone to leakage due to its dependency on the edge feature which is never truly zero on real images. Thus, incorporating prior models such as shape priors [52, 53], coupled region competition [54, 55], and fuzzy logics [56, 57], improves stability when handling medical images where noisy and weak edges are ubiquitous. Alternatively, a region-based deformable model [58, 59] is introduced based on maximizing the intra-region intensity similarities and minimizing the inter-region intensity differences for two regions inside and outside an evolving curve:        |  |  |  |  (1.6)  where μ represents the means of both piecewise constant regions. Implemented in the context of brain segmentation using multiple curves or multi-phases [60, 61], the region-based contour has demonstrated some success; however, problems such as high complexity and slow convergence remains [62, 63].  18  1.3.3 Statistical Segmentation Models A third class of segmentation approaches is to statistically separate different tissue types in a high dimensional feature space. Using instance-based learning such as K-nearest neighbours (k-NN) [64, 65], voxel labels are assigned based on the labels of its K nearest neighbours. Others employed a linear statistical classifier such as linear discriminant analysis (LDA), which separates tissue classes by estimating the eigenvectors and eigenvalues of density functions either based on Gaussian assumptions or estimated via non-parametric kernel estimations [66], and a naive Bayes classifier [67], which assigns voxel label to one of C classes given features f of size M to maximize the conditional probability (1.7).  | ,…,  |  (1.7)  Furthermore, non-linear classifiers are also employed based on principal component discriminant analysis (PCDA) and independent component discriminant analysis (ICDA) that transform the data features into respective component spaces before performing non-parametric discriminant analyses [66]. Similarly, others carry out segmentation in a higher dimensional non-linear space by using support vector machines (SVM) and neural networks such as Hopfield and Kohonen self-organizing networks [68, 69, 70]. However, these classifier-based methods often require training for estimating parameters given a specific dataset by minimizing errors or maximizing probabilities. However, training is impractical in most cases as segmenting training scans manually a priori is a 19  time-consuming and variable process. Alternative classifiers based on clustering [71, 72] and finite Gaussian mixture modeling [73] are automatic and data-driven but algorithm convergence potentially relies heavily on the model initialization.  
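As a small illustration of the classifier in (1.7), the sketch below assigns a voxel to the class maximizing the posterior under independent Gaussian feature models; all class means, standard deviations, and priors are invented for illustration.

```python
import numpy as np

def naive_bayes_label(features, priors, means, stds):
    """Assign the class maximizing P(c) * prod_m P(f_m | c), as in (1.7), with
    each feature modelled by an independent Gaussian per class (the constant
    -0.5*log(2*pi) per feature is dropped; it does not affect the argmax)."""
    features = np.asarray(features, dtype=float)
    log_post = np.log(priors) + np.sum(
        -0.5 * ((features - means) / stds) ** 2 - np.log(stds), axis=1)
    return int(np.argmax(log_post))

# Toy T1w/T2w intensity model for three tissue classes; all values are invented.
priors = np.array([0.4, 0.4, 0.2])                              # WM, GM, CSF
means = np.array([[160.0, 60.0], [110.0, 90.0], [40.0, 150.0]])
stds = np.array([[12.0, 10.0], [12.0, 10.0], [15.0, 20.0]])
print(naive_bayes_label([155.0, 65.0], priors, means, stds))    # 0, i.e. the WM-like class
```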
On the whole, statistical methods can suffer from performance degradation when statistical abnormalities are present such as when neighborhood features fail to capture pathologies during training or when the voxel intensity profile is altered by noise and artifacts resulting in increasing overlaps between intensity distributions (Figure 1.10). Thus, tissue segmentation and artifact correction can be integrated in an iterative expectation-maximization (EM) framework [74] given that one cannot correct for artifacts without knowing the underlying true tissue labels, and yet one cannot estimate the true underlying tissue labels without correcting for the artifacts. Adding atlas registration as initialization to the EM scheme further improves the robustness of EM initialization [75, 76, 77].  20  (a) 0% Noise  (b) 9% Noise Sources: Generated with data from the simulated BrainWeb database, MNI.  Figure 1.10. Intensity distributions of T1w/T2w/PDw MRI at varying noise levels. Note the increase in overlaps between intensity distributions at higher noise levels.  Furthermore, it has been shown to be advantageous to incorporate additional contextual information in various forms. Spatial regularization such as edge and topology based on template is employed in clustering [78, 79, 80]. Markov random field (MRF) [81, 82, 83] models an image X as an undirected graph such that contexture constraints are imposed in the prior probability based on a pre-defined neighbourhood j of voxel i:  ∑,  (1.8)  where Z is a normalizing constant, α is a weighting term, and Vij is a potential function of the neighbourhood. Similarly, mean field [84] estimates the true underlying image X given observed image Y using neighbours j of each voxel i in a pre-defined neighborhood 21  by examining the posterior probability P(X|Y). A solution Q can be obtained by optimizing the Kullback-Leibler (KL) divergence between Q and P, and Q factorizes over voxels such that each Qi can be estimated iteratively as a function of the neighbourhood sum:             |  | log  log  ∑,  ∑                          (1.9)  (1.10)  where Z is a normalization constant, and α and β are weighting terms. However, estimation of maximum a posteriori (MAP) labels in a MRF model is computation intensive; thus, MAP estimation can only be achieved through local optimization [84] or relaxing the optimization problem [86]. Alternatively, since MAP estimation is comparatively easier in a directed graphical model such as the hidden Markov model (HMM) by assuming causal relationship [87], efficient segmentation methods have been developed for higher dimensional images by reducing estimation dimensions using vectorization [88], larger block size [89], and iteration through each rows [90, 91]. 1.3.4 Segmentation Pre-Processing Considerable attention has been given to the pre-processing of brain MRI scans, including intensity non-uniformity correction and skull stripping. Numerous INU correction methods have been proposed, either prospectively by employing in-scanner calibration or retrospectively by using the acquired image data [92]. Since the bias field is 22  a smoothly varying spatial field, estimation methods based on low-pass filtering have been proposed [93, 94]. However, this is complicated by low frequency component overlaps between the actual signal and targeted artifacts. Other methods integrate anatomical segmentation and bias field correction in an EM framework based on the observation that these two steps are intertwined processes [32, 74, 95]. 
However, these are anatomically driven, typically brain-specific, thus requiring major modifications if they were to be applied to other anatomical structures. Most widely applied is a histogram-based approach that aims to maximize high frequency image content without a priori knowledge [96]. Known as the nonparametric non-uniformity normalization (N3) approach, the non-uniformity is modeled as a multiplicative spatial field that causes blurring of the image intensity distributions. Signals can, thus, be recovered by sharpening the intensity distributions by deconvolving with a Gaussian kernel and then constraining the smoothness of the field by approximating with a 3D cubic B-spline. N3 has been shown to be relatively stable and superior to various state-of-the-art techniques [97], and will be employed throughout this thesis.  Similarly, numerous methods have been developed to extract the cerebral cortex from the MRI head scans for the segmentation and analysis of intracranial volumes. Many utilize mathematical morphologies and connected component analysis together with gradient-based and intensity-based region analysis [98, 99, 100, 101, 102, 103]. One drawback of such methods is that results often strongly depend on the parameters specified. Alternatively, others attempt to either incorporate template information [32] or derive a consensus by using results obtained from multiple methods [104] but both approaches require specific a priori knowledge. On the other hand, deformable surface 23  methods [105], allow the evolving contour to fit the underlying image data automatically. Although the resulting masks are often overly smooth due to the contour constraints, this approach is robust on a wide range of acquisition sequences, and will be employed throughout this thesis. 1.3.5 Segmentation Validation Validation of brain segmentation algorithms is one of the very challenging tasks faced by the medical image analysis community. True validation can be done with post-mortem histology or histopathology studies; however, this information is not readily available for in vivo studies such as ones described in this thesis. It is also difficult to relate the level of anatomic detail between the actual microscopic brain slices and the MRI data.  An alternative validation standard is to perform comparisons against manual expert segmentation masks. An example database is the Internet Brain Segmentation Repository (IBSR) from the Center for Morphometric Analysis (CMA) at Harvard Medical School, which provides normal brain scans along with corresponding expert segmentation results [106], and the MS Lesion Segmentation Challenge database at University of North Carolina (UNC) and Children’s Hospital Boston (CHB), which provides multi-channel MRI scans of MS subjects with lesion segmented by multiple expert human raters [107]. However, manual expert segmentation results are often subjective and user-dependent but are the closest to ground truths as humanly possible.  Another available resource is the simulated brain MRI data and the digital synthesis phantom of the BrainWeb database from the Montreal Neurological Institute (MNI) at 24  McGill University [108, 109]. Validations on results based on the BrainWeb data can simply be done by comparing the segmentation results against the provided phantom for both normal and MS anatomical models. 
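Such comparisons against a phantom or an expert labelling are typically summarized with a spatial overlap score. A minimal sketch of the Dice similarity coefficient, the overlap measure reported later in Table 2.2, is given below; the toy masks are invented.

```python
import numpy as np

def dice(seg, truth):
    """Dice similarity coefficient between a binary segmentation and a
    reference labelling: 2|A intersect B| / (|A| + |B|), 1.0 for perfect overlap."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    intersection = np.logical_and(seg, truth).sum()
    denom = seg.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), bool); b[3:9, 3:9] = True
print(round(dice(a, b), 3))   # partial overlap of two 6x6 squares -> ~0.694
```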
The simulated data closely resembles actual brain anatomy, but are still, after all, synthetic in nature and only represent a small subset of what actual MRI scans resemble. 1.4  THESIS OVERVIEW  This thesis focuses on the development and validation of automated 3D brain tissue segmentation algorithms using in vivo MRI scans of normal adults. In addition, this thesis extends the developed algorithms to the study of MS pathology.  First, a novel 3D deformable model-based approach was developed and implemented to investigate discrete segmentation performance using both single- and multi-channel brain MRI scans. Objectives:   Employing an edge-based deformable model rather than a region-based deformable model will provide a comparatively more efficient implementation by reducing complexity.    Integrating both image edge geometry and voxel statistics into a hybrid geometric-statistical feature will help regularize contour convergence and allow extraction of complex anatomical features.  Specific aims:   Validate accuracy of WM, GM, and CSF segmentation using simulated T1w, T2w, and PDw brain MRI scans. 25    Compare WM, GM, and CSF segmentation performance to region-based deformable approach by using real T1w MRI scans of normal adults.    Study segmentation performance when clinical MRI scans with MS and AD pathologies are processed.  Second, a novel 3D region-based hidden Markov model-based approach was developed and implemented to investigate discrete segmentation performance using single-channel brain MRI scans as parameter estimations in hidden Markov models are known to be efficient and robust to noise. Objectives:   Employing a region-based representation of the image data rather than using traditional rectangular lattices or grids will improve the efficiency of the parameter estimation process.    Region-based parameter estimation will additionally provide a locally-optimum data labelling that is invariant to object rotation, which is a highly valuable property in the segmentation task, especially in medical imaging where the segmentation results need to be independent of patient positioning in the scanner in order to minimize methodological variability in data analysis.  Specific aims:   Validate the efficiency, accuracy, and rotational invariance properties of the region-based hidden Markov model by using synthetic shapes, and both simulated and real T1w MRI scans of normal adults at various noise levels.    Compare the improvement in efficiency, accuracy, and rotational invariance to the grid-based hidden Markov model by using synthetic shapes, and both simulated and 26  real T1w MRI scans of normal adults at various noise levels.  Third, we studied the segmentation of MS lesions by using multi-channel brain MRI scans with the region-based hidden Markov model approach. Objectives:   Employing a region-based hidden Markov model will provide a data-driven estimation of MS lesions without a priori atlas and training.  Specific aims:   Compare MS lesion segmentation performance to variability between expert human raters by using real T1w, T2w, and FLAIR brain MRI scans of MS patients.    Compare MS lesion segmentation performance to other state-of-the-art lesion segmentation methods by using real T1w, T2w, and FLAIR brain MRI scans of MS patients.  Finally, a novel fuzzy 3D region-based hidden Markov model was developed and implemented to investigate segmentation gains when assigning voxel memberships to a number of classes rather than one discrete label. 
Objectives:   Incorporating partial-volume estimation in the segmentation process will provide improved robustness to tissue volume estimation when analyzing images suffering from acquisition related resolution limitation.    Employing region boundary refinement will further improve the resolution of parameter estimation.  Specific aims: 27    Validate the performance gain when partial-volume effect is modeled in the segmentation process by using simulated T1w brain MRI scans of normal and MS anatomical models at various degrees of noise and partial volumes.    Examine performance tradeoffs between degree of region refinement, segmentation accuracy, and efficiency.    Compare WM, GM, and CSF segmentation performance to other state-of-the-art segmentation methods that model partial-volume effect by using simulated T1w brain MRI scans of normal and MS anatomical models.    Evaluate the performance gain when partial-volume effect is modeled in the segmentation process by using real clinical T1w brain MRI scans of controls and RRMS patient groups at reduced image resolutions.  The concluding chapter discusses the findings of each manuscript chapter, including the contributions and significance of this thesis research to the field of study, strengths, weaknesses, potential applications of the research findings, and future research directions in the field of study. 1.5  THESIS ORGANIZATION  This is a manuscript-based thesis which follows the specifications required by the University of British Columbia for this format. In addition to the introductory and concluding chapters, this thesis includes four chapters which originally appeared as conference/journal publications and have herein been slightly modified for readability. The chapters of this thesis are organized as follows:  Chapter 1 presents an overview and challenges in automating medical imaging 28  analysis, especially for segmenting brain tissues in magnetic resonance imaging scans for neuroimaging applications such as multiple sclerosis;  Chapter 2 describes a novel 3D deformable model-based segmentation approach that integrates both statistical and geometric information in a hybrid image feature;  Chapter 3 describes a novel 3D contextual statistical model-based segmentation approach, namely a region-based hidden Markov model, and an efficient and rotationally invariant tree-structure parameter estimation algorithm;  Chapter 4 extends the proposed rbHMM to segmenting white matter lesions using multi-channel MRI data in a data-driven approach;  Chapter 5 introduces modeling of partial-volume effect into rbHMM as a fuzzy rbHMM with a forward-backward parameter estimation algorithm;  Chapter 6 concludes with the hypothesis, significance, strengths, weaknesses, potential applications, future work, and contributions of the thesis.  29  1.6  BIBLIOGRAPHY  H. McIlwain, H. S. Bachelard, Biochemistry and the Central Nervous System (4th Edition), Churchill Livingstone, Edinburgh, UK, 1971. 2. K. P. Whittall, A. L. McKay, D. A Graed, R. A. Nugent, D. K. B. Li, D. W. Paty, “In vivo measurement of T2 distributions and water contents in normal human brain,” Magnetic Resonance in Medicine, 37(1):34-43, 1997. 3. J. Beutel, H. L. Kundel, R. L. Van Metter, Handbook of Medical Imaging – Volume 1. Physics and Psychophysics, SPIE Press, Washington, USA, 2000. 4. D. W. Paty, D. Li, and G. J. 
Zhao, MRI in Multiple Sclerosis – Implications for diagnosis and treatment (2nd Edition), UBC MS/MRI Research Lab, Vancouver, BC, Canada, Report for Ares-Serono SA, 1999. 5. D. Miller, “MRI and assessment of treatment in multiple sclerosis,” Brain, 124(5):1052-1053, May 2001. 6. C. R. Jack, R. C. Petersen, Y. Xu, P. C. O’Brien, G. E. Smith, R. J. Ivnik, E. G. Tangalos, E. Kokmen, “Rate of medial temporal lobe atrophy in typical aging and Alzheimer’s disease,” Neurology, 51:993–999, 1998. 7. J. C. Masdeu, J. L. Zubieta, J. Arbizu, “Neuroimaging as a marker of the onset and progression of Alzheimer’s disease,” Journal of Neurological Science, 236:55-64, 2005. 8. M. Hutchinson, U. Raff, “Parkinson’s disease: a novel MRI method for determining structural changes in the substantia nigra,” Journal of Neurology, Neurosurgery and Psychiatry, 67:815-818, 1999. 9. J. Kassubek, F. D. Juengling, B. Hellwig, J. Spreer, C. H. Lucking, “Thalamic gray matter changes in unilateral Parkinsonian resting tremor: a voxel-base morphometric analysis of 3-dimensional magnetic resonance imaging,” Neuroscience Letters, 323(1):29-32, April 2002. 10. E. J. Burton, I. G. McKeith, D. J. Burn, E. D. Williams, J. T. O’Brien, “Cerebral atrophy in Parkinson’s disease with and without dementia: a comparison with Alzheimer’s disease, dementia with Lewy bodies and controls,” Brain, 127(4):791-800, 2004. 11. A. Traboulsee, G. Zhao, and D. K. B. Li, “Neuroimaging in multiple sclerosis,” Neurologic Clinics, 23(1):131-148, February 2005. 12. J. de Graaf, J. H. van der Hoeven, M. C. Hoogstraten, J. M. Minderhoud, Multiple 1.  30  13. 14. 15.  16.  17.  18. 19.  20.  21.  22.  23. 24.  Sclerosis, Bohn, Scheltema & Holkema, Utrecht, The Netherlands, 1988. J. F. Kurtzke, “Rating neurological impairment in multiple sclerosis: an expanded disability status scale (EDSS),” Neurology, 33:1444-1452, 1983. National Multiple Sclerosis Society, (December, 2005) About MS, Available: http://www.nationalmssociety.org/. B. Taylor, E. McDonald, B. Fantino, L. Sedal, R. Macdonnell, F. Pittas, T. Groom, “The cost of multiple sclerosis in Australia,” Journal of Clinical Neuroscience, 14(6):532-539, June 2007. K. Whetten-Goldstein, F. A. Sloan, L. B. Goldstein, E. D. Kulas, “A comprehensive assessment of the cost of multiple sclerosis in the United States,” Multiple Sclerosis, 4(5):419-425, 1998. G. Kobelt, J. Berg, P. Lindgren, S. Fredrikson, B. Jönsson, “Costs and quality of life of patients with multiple sclerosis in Europe,” Journal of Neurology, Neurosurgery, and Psychiatry, 77: 918-926, May 2006. D. H. Miller, “Biomarkers and surrogate outcomes in neurodegenerative disease: lessons from multiple sclerosis,” Neurotherapeutics, 1(2):284-294, April 2004. L. Kappos, d. Moeri, E. W. Radue, A. Schoetzau, K. Schweikert, F. Barkhof, “Predictive value of gadolinium-enhanced magnetic resonance imaging for relapse rate and changes in disability or impairment in multiple sclerosis: a meta-analysis,” Lancet, 353:964-969, 1999. H.-P. Adams, J. A. Koziol, N. C. Fox, D. H. Miller, A. J. Thompson, “Progressive cerebral atrophy in MS: a serial study using registered, volumetric MRI,” Neurology, 55(8):1242-1243, October 2000. R. A. Rudick, E. Fisher. J. C. Lee, J. Simon, L. Jacobs, “Use of the brain parenchymal fraction to measure whole brain atrophy in relapsing-remitting MS. multiple sclerosis Collaborative Research Group,” Neurology, 53(8):1698–704, November 1999. M. Rovaris, G. Comi, M. A. Rocca, J. S. Wolinsky, M. 
Filippi, The European/Canadian Glatiramer Acetate Study Group, “Short-term brain volume change in relapsing remitting multiple sclerosis: effect of glatiramer acetate and implications,” Brain, 124(9):1803-1812, September 2001. M. Jenkinson, S. M. Smith, “A global optimization method for robust affine registration of brain images,” Medical Image Analysis, 5(2):143-156, June 2001. M. Jenkinson, P. R. Bannister, J. M. Brady, S. M. Smith, “Improved optimization for the robust and accurate linear registration and motion correction of brain images,” NeuroImage, 17(2):825-841, 2002. 31  25. J. Ashburner P. Neelin, D. L. Collins, A. C. Evans, K. J. Friston, “Incorporating prior knowledge into image registration,” NeuroImage, 6(4):344-352, November 1997. 26. D. L. Collins, P. Neelin, T. M. Peters, A. C. Evans, “Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space,” Journal of Computer Assisted Tomography, 18(2):192-205, March-April 1994. 27. A. Klein, J. Andersson, B. A. Ardekani, J. Ashburner, B. Avants, M.-C. Chiang, G. E. Christensen, D. L. Collins, J. Gee, P. Hellier, J. H. Song, M. Jenkinson, C. Lepage, D. Rueckert, P. Thompson, T. Vercauteren, R. P. Woods, J. J. Mann, R. V. Parsey, “Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration,” NeuroImage, 46:786-802, July 2009. 28. G. E. Christensen, R. D. Rabbitt, M. I. Miller, “Deformable templates using large deformation kinematics,” IEEE Transactions on Image Processing, 5(10):1435-1447, October 1996. 29. D. Rueckert, L. I. Sonoda, C. Hayes, D. L. G. Hill, M. O. Leach, D. J. Hawkes, “Nonrigid registration using free-form deformation: application to breast MR images,” IEEE Transactions on Medical Imaging, 18(8):712-721, August 1999. 30. J. Ashburner, K. J. Friston, “Nonlinear spatial normalization using basis functions,” Human Brain Mapping, 7:254-266, 1999. 31. J. Ashburner, K. Friston, “Multimodal image coregistration and partitioning – a unified framework,” NeuroImage, 6(3):209-217, October 1997. 32. J. Ashburner, K. J. Friston, "Voxel-based morphometry--the methods," Neuroimage, 11(6):805-821, June 2000. 33. A. Yezzi, L. Zollei, “A variational framework for integrating segmentation and registration through active contours,” Medical Image Analysis, 7:171-185, 2003. 34. P. P. Wyatt, J. A. Noble, “MAP MRF joint segmentation and registration of medical images,” Medical Image Analysis, 7(4):539-552, 2003. 35. X. Chen, M. Brady, D. Rueckert, “Simultaneous segmentation and registration for medical images,” in Proc. of Medical Image Computing and Computer Assisted Intervention, 2004, pp.663-670. 36. M. Murgasova, D. Rueckert, D. Edwards, J. Hajnal, “Robust segmentation of brain structures in MRI,” in Proc. of IEEE International Symposium in Biomedical Imaging, 2009. 37. F. L. Bookstein, “Voxel-based morphometry should not be used with imperfectly registered images,” NeuroImage, 14(6):1454-1462, December 2001. 38. J. Ashburner, K. J. Friston, “Why voxel-based morphometry should be used,” NeuroImage, 14(6):1238-1243, December 2001. 32  39. W. R. Crum, L. D. Griffin, D. L. G. Hill, D. J. Hawkes, “Zen and the art of medical image registration: correspondence, homology, and quality,” NeuroImage, 20(3):1425-1437, November 2003. 40. C. Davatzikos, “Why voxel-based morphometric analysis should be used with great caution when characterizing group differences,” NeuroImage, 23(1):17-20, September 2004. 41. M. Filippi, T. Yousry, C. Baratti, M. A. Horsfield, S. Mammi, C. Becker, R. 
Voltz, S. Spuler, A. Campi, M. F. Reiser, G. Comi, “Quantitative assessment of MRI lesion load in multiple sclerosis: a comparison of conventional spin-echo with fast fluid-attenuated inversion recovery,” Brain, 119(4):1349-1355, 1996. 42. R. C. Parodi, F. Sardanelli, P. Renzetti, E. Rosso, C. Losacco, A. Ferrari, F. Levrero, A. Pilot, M. Inglese, G. L. Mancardi, “Growing region segmentation software (GRES) for quantitative magnetic resonance imaging of multiple sclerosis: intra- and inter-observer agreement variability: a comparison with manual contouring method,” European Radiology, 12(4):866-871, April 2002. 43. V. Grau, A. U. Mewes, M. Alcaniz, R. Kikinis, S. K. Warfield, "Improved watershed transform for medical image segmentation using prior information," IEEE Transactions on Medical Imaging, 23(4):447-458, April 2004. 44. J. K. Udupa, L. Wei, S. Samarasekera, Y. Miki, M. A. van Buchem, R. I. Grossman, "Multiple sclerosis lesion quantification using fuzzy-connectedness principles," IEEE Transactions on Medical Imaging, 16(5):598- 609, 1997. 45. M. A. Horsfield, R. Bakshi, M. Rovaris, M. A. Rocca, V. S. R. Dandamudi, P. Valsasina, E. Judica, F. Lucchini, C. R. G. Guttmann, M. P. Sormani, M. Filippi, “Incorporating domain knowledge into the fuzzy connectedness framework: application to brain lesion volume estimation in multiple sclerosis,” IEEE Transactions on Medical Imaging, 26(12):1670-1680, December 2007. 46. M. Kass, A. Witkin, D. Terzopoulos, “Snakes: active contour models,”' International Journal of Computer Vision, 1(4):321-331, January 1988. 47. D. Terzopoulos, A. Witkin, M. Kass, "Constraints on deformable models: recovering 3D shape and nonrigid motion," Artificial Intelligence, 36(1):91-123, 1988. 48. S. Osher, J. A. Sethian, “Fronts propagating with curvature-dependent speed: algorithms based on Hamilton--Jacobi formulations,” Journal of Computational Physics, 79:12-49, 1988. 49. V. Caselles, R. Kimmel, G. Sapiro, “Geodesic active contours,” International Journal of Computer Vision, 22(1):61–79, February 1997. 50. R. Malladi, J. A. Sethian, B. C. Vemuri, “Shape modeling with front propagation: a 33  51.  52.  53.  54.  55.  56.  57. 58. 59.  60.  61.  62.  level set approach,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(2):158–175, February 1995. R. Malladi, R. Kimmel, D. Adalsteinsson, V. Caselles, G. Sapiro, J. A. Sethian, “A geometric approach to segmentation and analysis of 3D medical images,” in Proc. of IEEE Workshop on Mathematical Methods in Biomedical Image Analysis, June 1996, pp.244. M. E. Leventon, W. E. L. Grimson, O. Faugeras, “Statistical shape influence in geodesic active contour,” IEEE International Summer School on Biomedical Imaging, 2002, pp.8-15. Y. Chen, H. D. Tagare, S. Thiruvenkadam, F. Huang, D. Wilson, K. S. Gopinath, R. W. Briggs, E. A. Geiser, “Using prior shapes in geometric active contours in a variational framework,” International Journal of Computer Vision, 50(3):315–328, December 2002. S. Ho, E. Bullitt, G. Gerig, "Level-set evolution with region competition: automatic 3-D segmentation of brain tumors,” in Proc. of International Conference on Pattern Recognition, August 2002, pp.532-535. X. Zeng, L. H. Staib, R. T. Schultz, J. S. Duncan, "Segmentation and measurement of the cortex from 3-D MR images using coupled-surfaces propagation," IEEE Transactions on Medical Imaging, 18(10):927-937, October 1999. C. Ciofolo, C. Barillot, P. 
Hellier, “Combining fuzzy logic and level set methods for 3D MRI brain segmentation,” in Proc. of IEEE International Symposium in Biomedical Imaging, 2004, pp.161-164. C. Ciofolo, C. Barillot, “Brain segmentation with competitive level sets and fuzzy control,” in Proc. of Information Processing in Medical Imaging, 2005, pp.333-344. T. F. Chan, L. A. Vese, "Active contours without edges," IEEE Transactions on Image Processing, 10(2):266-277, February 2001. L. A. Vese, T. F. Chan, “A multiphase level set framework for image segmentation using the Mumford and Shah model,” International Journal of Computer Vision, 50(3):271-293, 2002. E. D. Angelini, T. Song, B. D. Mensh, A. Laine, "Multi-phase three-dimensional level set segmentation of brain MRI," in Proc. of Medical Image Computing and Computer Assisted Intervention, 2004, pp.318-326. E. Angelini, T. Song, B. D. Mensh, A. Laine, “Brain MRI segmentation with multiphase minimal partitioning: a comparative study,” International Journal of Biomedical Imaging, 2007:10526, April 2007. A. Tsai, A. Yezzi, A. S. Wilsky, “Curve evolution implementation of the 34  63.  64.  65.  66.  67.  68.  69.  70.  71. 72.  73.  Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification,” IEEE Transactions on Image Processing, 10(8):1169-1186, August 2001. M. Jeon, M. Alexander, N. Pizzi, W. Pedrycz, “Unsupervised hierarchical multi-scale image segmentation: level set, wavelet and additive splitting operator,” Pattern Recognition Letters, 26(10):1461-1469, July 2005. C. A. Cocosco, A. P. Zijdenbos, A. C. Evans, "A fully automatic and robust brain MRI tissue classification method," Medical Image Analysis, 7(4):513-527, December 2003. P. Anbeek, K. L. Vincken, M. A. Viergever, “Automated MS-lesion segmentation by K-nearest neighbor classification,” in Workshop of MS Lesion Segmentation Challenge in Medical Image Computing and Computer Assisted Intervention, 2008. U. Amato, M. Larobina, A. Antoniadis, B. Alfano, “Segmentation of magnetic resonance brain images through discriminant analysis,” Journal of Neuroscience Methods, 131(1-2):65-74, December 2003. M. Scully, V. Magnotta, C. Gasparovic, P. Pelligrino, D. Feis, H. J. Bockholt, “3D segmentation in the clinic: a grand challenge II at MICCAI 2008 – MS lesions segmentation,” in Workshop of MS Lesion Segmentation Challenge in Medical Image Computing and Computer Assisted Intervention, 2008. W. E. Reddick, O. J. Glass, E. N. Cook, T. D. Elkin, R. J. Deaton, “automated segmentation and classification of multispectral magnetic resonance images of brain using artificial neural networks,” IEEE Transactions on Medical Imaging, 16(6):911-918, December 1997. S. C. Amatur, D. Piraino, Y. Takefuji, “Optimization neural networks for the segmentation of magnetic resonance images,” IEEE Transactions on Medical Imaging, 11(2):215-220, June 1992. Y. Wang, T. Adali, S.-Y. Kung, Z. Szabo, “Quantification and segmentation of brain tissues from MR images: a probabilistic neural network approach,” IEEE Transactions on Image Processing, 7(8):1165-1181, August 1998. D. L. Pham, J. L. Prince, “Adaptive fuzzy segmentation of magnetic resonance images,” IEEE Transactions on Medical Imaging, 18(9):737-752, September 1999. D. L. Pham, J. L. Prince, A. P. Dagher, C. Xu, “An automated technique for statistical characterization of brain tissues in magnetic resonance imaging,” International Journal of Pattern Recognition and Artificial Intelligence, 11(8):1189-1211,1997. T. Tasdizen, D. M. Weinstein, J. N. 
Lee, “Automatic Tissue classification for the human head from multispectral MRI,” SCI Institute Technical Report, No. 35  74. 75.  76.  77.  78. 79.  80.  81.  82.  83.  84. 85. 86.  UUSCI-2004-001, University of Utah, March 2004. M. W. Wells, III, W. E. L. Grimson, F. A. Jolesz, “Adaptive segmentation of MRI data,” IEEE Transactions on Medical Imaging, 15(4):429-442, August 1996. K. Van Leemput, F. Maes, D. Vandermeulen, P. Suetens, “Automated model-based tissue classification of MR images of the brain,” IEEE Transactions on Medical Imaging, 18(10):897-908, October 1999. K. Van Leemput, F. Maes, D. Vandermeulen, A. Colchester, P. Suetens, “Automated segmentation of multiple sclerosis lesions by model outlier detection”, IEEE Transactions on Medical Imaging, 20(8):677-688, August 2001. K. M Pohl, W. M. Wells, A. Guimond, K. Kasai, M. E. Shenton, R. Kikinis, W. E. L. Grimson, S. K. Warfield, “Incorporating non-rigid registration into expectation maximization algorithm to segment MR images,” in Proc. of Medical Image Computing and Computer Assisted Intervention, Tokyo, Japan, September 25-28, 2002, pp. 564-572. D. L. Pham, “Spatial models for fuzzy clustering,” Computer Vision Image Understanding, 84:285-297, 2001. D. L. Pham, “Unsupervised tissue classification in medical images using edge-adaptive clustering,” in Proc. of IEEE Engineering in Medicine and Biology Society, 2003, pp.17-21. P.-L. Bazin, D. L. Pham, “Topology-preserving tissue classification of magnetic resonance brain images,” IEEE Transactions on Medical Imaging, 26(4):487-496, April 2007. Y. Wang, T. Adali, J. Xuan, Z. Szabo, "Magnetic resonance image analysis by information theoretic criteria and stochastic site models," IEEE Transactions on Information Technology in Biomedicine, 5(2):150- 158, June 2001. Y. Zhang, M. Brady, S. Smith, "Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm," IEEE Transactions on Medical Imaging, 20(1): 45- 57, January 2001. S. Geman, D. Geman, “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721-741, November 1984. M. Opper, O. Winther, “Gaussian processes for classification: mean field algorithms,” Neural Computation, 12(11):2177-2204, 2000. J. Besag, “On the statistical analysis of dirty pictures (with discussion),” Journal of Royal Statistics Society, 48(3-B):259–302, 1986. N. Komodakis, G. Tziritas, N. Paragios, “Fast, approximately optimal solutions for 36  87. 88.  89. 90. 91.  92.  93. 94.  95.  96.  97.  98.  single and dynamic MRFs,” in Proc. of IEEE Computer Vision and Pattern Recognition, 2007. J. Domke, A. Karapurkar, Y. Aloimonos, “Who killed the directed model?” in Proc. of IEEE Computer Vision and Pattern Recognition, 2008. S. Bricq, Ch. Collet, J. P. Armspach, “Unifying framework for multimodal brain MRI segmentation based on hidden Markov chains,” Medical Image Analysis, 12:639-652, 2008. M. Ibrahim, N. John, M. Kabuka, A. Younis, “Hidden Markov models-based 3D MRI brain segmentation,” Image and Vision Computing, 24:1065-1079, 2006. J. Li, A. Najmi, R. M. Gray, “Image classification by a two dimensional hidden Markov model,” IEEE Transaction on Signal Processing, 48(2):517-533, 2000. D. Joshi, J. Li, J. Z. Wang, “A computationally efficient approach to the estimation of two- and three-dimensional hidden Markov models,” IEEE Transactions on Image Processing, 2005, accepted. U. Vovk, F. Pernus, B. 
Likar, “A review of methods for correction of intensity inhomogeneity in MRI,” IEEE Transactions on Medical Imaging, 26(3):405-421, March 2007. L. Axel, J. Constantini, J. Listerud, “Intensity correction in surface coil MR imaging,” American Journal of Roentgenology, 148(2):418-420, February 1987. B. H. Brinkmann, A. Manduca, R. A. Robb “Optimized homomorphic unsharp masking for MR greyscale inhomogeneity correction,” IEEE Transactions on Medical Imaging,” 17(2):161-171, April 1998. K. Van Leemput, F. Maes, D. Vandermeulen, P. Suetens, “Automated model-based bias field correction of MR images of the brain,” IEEE Transactions on Medical Imaging,” 18(10):885-896, October 1999. J. G. Sled, A. P. Zijdenbos, A. C. Evans, “A nonparametric method for automatic correction of intensity nonuniformity in MRI data,” IEEE Transactions on Medical Imaging,” 17(1):87-97, February 1998. J. B. Arnold, J.-S. Liow, K. A. Schaper, J. J. Stern, J. G. Sled, D. W. Shattuck, A. J. Worth, M. S. Cohen, R. M. Leahy, J. C. Mazziotta, D. A. Rottenberg, “Qualitative and quantitative evaluation of six algorithms for correcting intensity nonuniformity effects,” NeuroImage, 13:931-943, 2001. D. W. Shattuck, S. R. Sandor-Leahy, K. A. Schaper, D. A. Rottenberg, R. M. Leahy, “Magnetic resonance image tissue classification using a partial volume model,” NeuroImage, 13(5):856-876, May 2001.  37  99. L. Lemieux, G. Hagermann, K. Krakow, F. G. Woermann, “Fast, accurate, and reproducible automatic segmentation of brain in T1-weighted volume MRI data,” Magnetic Resonance in Medicine, 42(1):127-135, July 1999. 100.M. S. Atkins, B. T. Mackiewich, “Fully automatic segmentation of the brain in MRI,” IEEE Transactions on Medical Imaging, 17(1):98-107, February 1998. 101.M. E. Brummer, R. M. Mersereau, R. L. Eisner, R. R. J. Lewine, “Automatic detection of brain contours in MRI data sets,” IEEE Transactions on Medical Imaging, 12(2):153-166 June 1993. 102.Z. Y. Shan, G. H. Yue, J. Z. Liu, “Automated histogram-based brain segmentation in T1-weighted three-dimensional magnetic resonance head images,” NeuroImage, 17(3):1587-1598, November 2002. 103.J.-F. Mangin, O. Coulon, V. Frouin, “Robust brain segmentation using histogram scale-space analysis and mathematical morphology,” in Proc. of Medical Image Computing and Computer Assisted Intervention, 1998, pp.1230-1241. 104.K. Rehm, K. Schaper, J. Anderson, R. Woods, S. Stoltzner, D. Rottenberg, “Putting our heads together: a consensus approach to brain/non-brain segmentation in T1-weighted MR volumes,” NeuroImage, 22(3):1262-1270, July 2004. 105.S. M. Smith, “Fast robust automated brain extraction,” Human Brain Mapping, 17(3):143-155, November 2002. 106.A. Worth, (January 2009) Internet Brain Segmentation Repository, Available: http://www.cma.mgh.harvard.edu/ibsr/. 107.M. Styner, S. Warfield, W. Niessen, T. van Walsum, C. Metz, M. Schaap, X. Deng, T. Heimann, B. van Ginneken, (June 2008) MS Lesion Segmentation 08, Available: http://www.ia.unc.edu/MSseg/. 108.D. L. Collins, A. P. Zijdenbos, V. Kollokian, J. G. Sled, N. J. Kabani, C. J. Holmes, A. C. Evans, “Design and construction of a realistic digital brain phantom,” IEEE Transactions on Medical Imaging, 17(3):463-468, June 1998. 109.C. A. Cocosco, V. Kollokian, R. K.-S. Kwan A. C. Evans, “BrainWeb: online interface to a 3D MRI simulated brain database,” NeuroImage, 5(4 part2/4):S425, 1997.  38  2. 
A HYBRID GEOMETRIC-STATISTICAL DEFORMABLE MODEL FOR AUTOMATED 3D SEGMENTATION IN BRAIN MRI*  2.1 INTRODUCTION Magnetic resonance (MR) has become the main modality for brain imaging that facilitates  safe,  non-invasive  assessment  and  monitoring  of  patients  with  neurodegenerative diseases such as Parkinson’s disease, Alzheimer’s disease (AD), epilepsy, schizophrenia, and multiple sclerosis (MS) [1-6]. The ability to diagnose and characterize these diseases in vivo using MR image data promises exciting developments both towards understanding the underlying pathologies, as well as conducting clinical trials of drug treatments. One important marker that is often used to assess patients with neurodegenerative disease is brain tissue volume. For example, the typical rate of global brain atrophy in MS patients has been shown to be 0.6%-0.8% annually which is two to three times the normal atrophy rate [7]. Evidence has shown that white matter (WM) and gray matter (GM) atrophy at different rates, and each correlates differently to disability [8, 9, 10]; thus, accurate measurement of the WM and GM brain tissues can provide valuable quantitative indicators of disease progression and, potentially, treatment outcomes [6, 11]. Thus, the main goal of this chapter is to introduce an automatic algorithm for accurate WM, GM, and cerebrospinal fluid (CSF) segmentation to facilitate *  A version of this chapter has been published. A. Huang, R. Abugharbieh, R. Tam, “A hybrid  geometric-statistical deformable model for automated 3-D segmentation in brain MRI,” IEEE Transactions on Biomedical Engineering, 54(7):1838-1848, July 2009. A preliminary version of this chapter has been published. A. Huang, R. Abugharbieh, R. Tam, A. Traboulsee, “Automatic MRI brain tissue segmentation using a hybrid statistical and geometric model,” in Proc. of IEEE International Symposium on Biomedical Imaging: Nano to Micro, 2006, pp.394-397. 39  accurate measurement of brain tissues.  Previously, to measure various tissue volumes in MRI head scans, manual WM and GM segmentation were often performed by skilled experts. Manual segmentation, however, is extremely time-consuming, mostly limited to 2D slice-based segmentation, and prone to significant intra- and inter-rater variability [12]. In particular, manual segmentation cannot be practically and efficiently performed in situations where precise measurements on a large number of scans are required, such as in clinical trials. Therefore, a fully automatic, highly accurate tissue segmentation technique that provides systematic quantitative analysis of tissue volumes in brain MRI is an invaluable tool in many studies of neurodegenerative diseases. A wide variety of methods have been proposed for automating the segmentation process over the past decade which provided either semi or fully automated frameworks for the segmentation of brain tissues. A review of some of these methods can be found in [13, 14].  One popular family of brain tissue segmentation methods is based on normalizing the brain scans by registering (or aligning) them to a pre-defined atlas of brain tissues. One example is the popular statistical parametric mapping (SPM) technique, which relies on voxel-based morphometry (VBM) [15]. A number of extensions to the original SPM technique have been proposed. For example, SPM is utilized to initialize an expectation-maximization (EM) segmentation framework [16], which has been extended to non-rigid registration [17]. 
Although such atlas-based methods are typically robust to artifacts such as acquisition noise and distortions, concerns [18, 19] and discussion [20] have ensued regarding the use of templates from one population when analyzing data 40  from another population. For example, morphing patient scans with pathologically enlarged ventricles to match a normal template could potentially distort the surrounding tissues in an unpredictable manner. Such structural differences might lead to systematic biases and misregistration errors that are difficult to quantify [19]. All of these concerns introduce yet another level of complications arising from image registration and atlas generation procedures which add to the already non-trivial segmentation problem, especially in the presence of anomalies such as tumours, lesions and tissue atrophy.  A second family of brain tissue segmentation methods assigns a label for each tissue based on image statistics either by clustering [21] or by modeling the brain tissue intensity distributions as a finite mixture of distributions such as EM [22], maximum a posteriori (MAP) [23], simulated annealing [24], and Gaussian mixture modeling (GMM) [25]. Other approaches incorporate additional regional information, which is lacking from these statistical methods, into their segmentation framework. Such methods extend clustering or EM by integrating with fuzzy connectedness [26], topological constraints [27], Gibbs random field (GRF) [28], and hidden Markov random field (HMRF) [29] in the segmentation task. A common difficulty with many of these methods, particularly the random field approaches, is the requirement for proper parameter settings in a supervised setting.  A third family of brain tissue segmentation methods is based on utilizing geometric information such as deformable models or active contours [30] which delineates region boundaries using a minimization of an energy functional [31, 32]. Deformable models employing level sets [33] provide an effective implicit representation rather than explicit 41  parameterization of the evolving contour. However, a common problem of directly applying the active contour approach in segmenting brain MR images is leakage through weak or noisy edges that are ubiquitous, especially for edge-based deformable models, e.g. geodesic active contour [34], which describe the evolution of propagating curve as a function of image gradient features. Some researchers incorporated image statistics into their deformable models in various segmentation applications by utilizing coupled surface principle [35, 36] and fuzzy logic [37, 38] to better achieve stability. Others employed a region-based model [39] by utilizing regional homogeneity in a curve evolution perspective and a hierarchical implementation on brain pathology images [40]. More recently, tissue segmentation was performed and quantitatively evaluated [41-43] by using the multiphase 3D level set segmentation (M3DLS) algorithm [41]. M3DLS utilizes a multi-phase extension [44] of the region-based deformable model [39] based on the Mumford-Shah functional [45] by iteratively deforming two closed curves separating four regions. This minimal partitioning approach assumes piecewise constant or piecewise smooth data and optimizes a sum of terms, including the lengths and areas for the two closed curves, and the sums of square intensity differences from the means for all four separated regions. This minimization is also performed in a level set framework [33] implicitly. 
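To make this two-curve, four-region encoding concrete, the following minimal sketch (illustrative only, not the M3DLS implementation) shows how the signs of two level set functions jointly label 2^2 = 4 regions.

```python
import numpy as np

def multiphase_labels(phi1, phi2):
    """Encode 2^2 = 4 regions from the signs of two level set functions,
    as in a two-curve multi-phase (four-phase) partition."""
    return 2 * (phi1 > 0).astype(int) + (phi2 > 0).astype(int)   # labels 0..3

# Two simple signed functions on a 2D grid, positive above a row / left of a column.
y, x = np.mgrid[0:64, 0:64]
phi1 = 32.0 - y                 # positive in the upper half
phi2 = 32.0 - x                 # positive in the left half
labels = multiphase_labels(phi1, phi2)
print(np.unique(labels))        # [0 1 2 3]: four quadrants from two "curves"
```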
Further extending this model to N phases allows separation of 2^N regions, but the number of classes that can be segmented is limited to two to the power of the number of closed curves defined. Moreover, complexity increases as more level sets are required, and the rate of convergence is typically slow [46].

In this chapter, we propose a MR brain tissue segmentation approach that integrates both geometric and statistical image features into an edge-based deformable model formulation to achieve accurate segmentation results. By utilizing this novel hybrid image feature, we present one solution to the challenging problem of stabilizing the active contour. Similar existing work used a topology preservation principle enforced at nonsimple points in a geometric deformable model [47] or a curve shortening prior for smoothness in a level set framework to minimize leakage [40, 48]. Here, we do not explicitly apply any smoothness or topological constraints (e.g. topology preservation at nonsimple points) to the geometric deformable models but rather rely on the proposed hybrid feature to regularize the level sets. Other approaches used prior knowledge such as a distance penalizing term in the level set function between two boundary classes [35], a fuzzy decision system on contour distance to an anatomical target or atlas [38], or a dissimilarity measure between the contour and a shape model in the energy term [49]. Here, we demonstrate our proposed approach in segmenting complex anatomical structures such as WM, GM, and CSF without prior knowledge. Hence, the proposed approach is truly automated and data-driven in both a statistical and geometric sense. Furthermore, we compare the segmentation performance of our proposed edge-based level set method to the region-based M3DLS approach [42] on real clinical MRI scans. We demonstrate the improved WM, GM, and CSF segmentation accuracy when using the proposed method.

The rest of the chapter is organized as follows: In §2.2, we introduce our novel hybrid geometric-statistical feature implemented on the edge-based geodesic active contour formulation. Our modified deformable model is then used to design a new automated 3D brain tissue segmentation algorithm for both single and multiple MR sequence data. In §2.3, we present quantitative and qualitative results and analysis obtained on simulated and real clinical MRI images, as well as comparisons to results reported by using M3DLS. We then conclude in §2.4.

2.2 METHODS

In this section, we present our 3D brain segmentation method which integrates both geometric and statistical features in an edge-based geodesic active contour framework. We describe the proposed model in its general form. We then present a segmentation framework for brain MRI for both single and multiple MR sequence data.

2.2.1 Edge-Based Deformable Model

In this work, we utilize the geodesic active contour model [34] rather than the region-based formulation [39] due to its computational soundness and extendibility. The geodesic model delineates region boundaries by describing the evolution of a curve or surface C from an initial position C0 as finding the minima of the Riemannian curve distance:

min ∫0^1 g(|∇I(C(q))|) |C′(q)| dq = min ∫0^L(C) g(|∇I(C(q))|) ds,   (2.1)

where g is a general feature function, |∇I| is the gradient norm of the voxel with intensity I, and q is the parameterization of the curve C.
The right side of the equation describes the parameterized curve C(q) such that the Euclidean length of C can be represented as L(C) = ∮|C′(q)| dq = ∮ds, where ds = |C′(q)| dq is the Euclidean metric. Note that this geodesic formulation of the active contour relies on g, the speed and halting feature for the evolving surface in 3D applications, derived from the geometric gradient feature of an image. Generally, g is chosen as a positive-valued function of the intensity gradient as in (2.2), where Î is a smoothed version of I and ρ = 1 or 2 [34]. Other similar monotonically decreasing functions, such as the sigmoid function (2.3) with parameters α (width of the intensity window) and β (center of the intensity window), are also often utilized [50].

g(I) = 1 / (1 + |∇Î|^ρ)   (2.2)

g(I) = 1 / (1 + e^((|∇Î| - β)/α))   (2.3)

The value of this feature function determines the propagation of the surface by searching for the minimal Riemannian distance. An ideal edge would ultimately have a feature value of zero at all the pixel points along this boundary. However, propagation relying solely on the edge feature is typically sensitive to the noisy and weak edges that are frequently observed in medical images. In particular, with the presence of complex anatomical structures, it is often impossible to automatically and accurately derive the desired geometric edge term to prevent contour leakage into the surrounding regions. Consequently, achieving accurate segmentation results with an edge-based geodesic active contour requires either user intervention or careful adjustment of parameters such that the feature value along the desired boundary is minimal. This process is subjective, and ideal parameters are often difficult to derive for a fully automated segmentation framework.

2.2.2 Proposed Hybrid Geometric-Statistical Feature

We propose to transform the feature function, g, in the traditional geodesic active contour formulation into a hybrid feature function by incorporating geometric image features with voxel statistics to help automate and regularize the evolving contours. The minimization of the active contour is thus represented by (2.4):

min ∫0^L(C) g(|∇I(C(q))|, P(k|I)) ds,   (2.4)

where, for gray-scale intensity MR images, P(k|I) represents the posterior probability (2.6) derived from a mixture model (2.5) from which voxel statistics are drawn, assuming that all voxels are identically and independently distributed and the image is to be described with K class labels.

P(I) = Σk P(k) P(I|k), k = 1, …, K   (2.5)

P(k|I) = P(k) P(I|k) / Σj P(j) P(I|j), j = 1, …, K   (2.6)

P(k) represents the prior probability of the class label k and P(I|k) is the likelihood function of the k-th class. We assume the data is normally distributed given the class label as in (2.7):

P(I|k) = 1/(σk √(2π)) · exp( -(I - μk)² / (2σk²) ),   (2.7)

where the parameters μk and σk are the mean and standard deviation of the k-th class. This parameter estimation problem for the GMM is then solved by applying the EM algorithm [51] to the image intensity histogram.

The design of g in (2.4) utilizes both a geometric term and a statistical term. Geometrically, the presence of strong image gradients indicates significant structural content. As a result, the contour propagation speed slows to a halt. On the other hand, a lack of edge features often indicates the presence of a homogeneous region.
Statistically, a high voxel posterior indicates a high probability of the voxel belonging to the class of interest, warranting a fast contour propagation. If the voxel posterior is reduced, the contour propagation is slowed down accordingly. The contribution of voxel posteriors to the contour propagation exhibits an inverse behaviour to that of the image gradients. Since both geometric and statistical features are essential to the contour stability, they can be combined into a single hybrid feature function by modeling the aforementioned behaviour as:

gk(I) = sigmoid(|∇Î|) · sigmoid⁻¹(P(k|I)),   (2.8)

where the first term is the traditional geometric feature as in (2.3) and the second term models the inverse behavior of the voxel posterior to image gradients using an inverse sigmoid function with magnitudes between -1 and 1. Complementarily, these two components in the new hybrid feature help regularize the evolving contour in both the geometric and statistical sense. The minimization of (2.4) is then achieved by computing the Euler-Lagrange equation [34]:

d/dt ∫0^1 gk(Ck(q,t)) |Ck′(q,t)| dq = - ∫0^L(Ck,0) ( gk(Ck) κ - ∇gk(Ck)·N ) ( N · ∂Ck/∂t ) ds,   (2.9)

where κ is the Euclidean curvature, N is the inward unit normal, and Ck,0 is the initial curve or surface for the k-th class, and performing a steepest gradient search (2.10) to deform the curve or surface Ck towards a minimum:

∂Ck/∂t = gk(I) (ψκ + c) N + ε (∇gk · N) N,  Ck(0) = Ck,0,   (2.10)

where {ψ, c, ε} are the free parameters introduced to govern curvature, propagation and advection strengths respectively. With the designed hybrid feature, the algorithm uses only the propagation term. Other terms are shown here for completeness. This curve evolution equation is then embedded in a level set function, uk, and solved for the steady state solution:

∂uk/∂t = gk(I) (ψκ + c) |∇uk| + ε ∇gk(I) · ∇uk,  uk(0) = uk,0   (2.11)

The numerical implementation is based on the curve evolution algorithm via level sets [33], which utilizes an upwind piecewise continuous approximation scheme to provide a numerically stable solution in the presence of singularities.

As summarized in Table 2.1, the rationale behind using this new hybrid feature is to enable handling of situations where the image gradient is high (small sigmoid(|∇I|) value) and the posterior probability of the voxel is low, in which case the voxel is considered to be a significant feature but lies outside of the desired region. We, therefore, aim to steer the contour slowly away from this voxel by assigning a small negative feature value. On the other hand, if the posterior probability is high, this indicates a significant feature within the desired region; therefore, a small positive feature value is assigned. In contrast, if the image gradient is low (large sigmoid(|∇I|) value) and the posterior probability is low, the voxel is considered to be a weak gradient feature that lies outside of the desired region, warranting a large negative feature value such that the contour can be quickly steered away from that region. If the posterior probability is high, a homogeneous area in the desired region is indicated, and is rewarded with a large positive feature value for fast contour expansion.
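A sketch of how such a hybrid feature could be computed voxel-wise is given below. The product structure follows (2.8), but the specific bounded mapping of the posterior (here a simple rescaling of P(k|I) to [-1, 1]) and the sigmoid parameters are illustrative assumptions rather than the exact functions used in this thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def hybrid_feature(image, posterior_k, alpha=5.0, beta=20.0, smooth_sigma=1.0):
    """Voxel-wise hybrid geometric-statistical feature in the spirit of (2.8).

    Geometric term: a decreasing sigmoid of the smoothed gradient magnitude,
    near 1 in homogeneous regions and near 0 on strong edges, as in (2.3).
    Statistical term: the posterior P(k|I) mapped to [-1, 1] (an assumed
    stand-in for the bounded inverse-sigmoid mapping), negative outside the
    class of interest and positive inside it."""
    grad = gaussian_gradient_magnitude(image.astype(float), smooth_sigma)
    geometric = 1.0 / (1.0 + np.exp((grad - beta) / alpha))
    statistical = 2.0 * posterior_k - 1.0
    return geometric * statistical          # signed speed: inflate inside, deflate outside

# Toy 2D image: a bright square (the "tissue") on a dark background.
img = np.zeros((64, 64)); img[20:44, 20:44] = 100.0
posterior = (img > 50).astype(float) * 0.9 + 0.05        # crude stand-in for P(k|I)
g = hybrid_feature(img, posterior)
print(round(g[32, 32], 2), round(g[5, 5], 2))            # fast inflation inside, fast deflation outside
```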
In summary, the proposed hybrid feature provides an adaptive active contour propagation based on local information reflecting both geometry and statistical homogeneity.

                        Statistical feature (posterior probability)
Geometric feature            Low                      High
(gradient magnitude)
  High                  ↓ Speed, Deflation        ↓ Speed, Inflation
  Low                   ↑ Speed, Deflation        ↑ Speed, Inflation

Table 2.1. Effects of regularizing contour propagation using geometric and statistical features.

2.2.3 Segmentation of Brain MRI

Based on the proposed active contour model, we develop a fully automated 3D brain tissue segmentation algorithm for MR images. The segmentation procedures (Figure 2.1) are a generalization and extension of earlier work [52].

Figure 2.1. Block diagram of the proposed MRI brain segmentation algorithm: pre-processing (bias field correction, noise filtering, skull stripping), gradient magnitude sigmoid feature and Gaussian mixture EM estimation combined into the hybrid geometric-statistical feature of §2.2.2, contour initialization by thinning/pruning, active contour evolution, and overlap refinement to produce the segmented tissues.

We first present the proposed algorithm for T1-weighted (T1w) MRI scans, which are most often used for brain tissue segmentation due to the generally high WM and GM contrast and the reduced effects of white matter lesions in patients with neurodegenerative diseases. We later extend the proposed method to simultaneously incorporate additional MR sequence data, such as T2-weighted (T2w) and PD-weighted (PDw) images, in addition to T1w.

To segment the brain tissues, we first estimate the GMM parameters such that each mixture distribution represents one single class. Based on these estimated distributions, the posterior probability of each voxel is calculated. We derive the hybrid geometric-statistical feature as described above by combining both the voxel statistics and the image gradient information. To initialize the active contour, we first introduce a voxel threshold, ts, on the posterior probability to further regularize this contour initialization by effectively removing voxels outside of the desired region boundaries due to noise or partial volume artifacts. Based on the thresholded masks, we form the skeleton representation using standard 2D morphological sequential thinning (3x3 kernel) and sequential pruning (3x3 kernel) [53] iteratively until no further changes occur. This is done slice-by-slice in 2D as morphological operations are performed on discrete numbers of voxels, and doing this in 3D on anisotropic scans would require resampling of the acquired data, which would potentially introduce additional artifacts. The hybrid feature and the contour initialization are determined individually for each tissue to be segmented, and each contour is then propagated independently. After the contours converge, minor overlaps and gaps that occur between the individually evolving contour boundaries are re-assigned to a single label by comparing the individual z-scores, which is the difference between the voxel intensity and the sample mean normalized by the sample standard deviation, of all tissue classes. A brief description of the overall algorithm is given below:

Step 1. Given raw MRI scan U.
Pre-process with intensity inhomogeneity correction, noise filtering and brain extraction to obtain V.
Step 3. Calculate the gradient norm, |∇I|, of pixel intensity I in V.
Step 4. For k=1 to K tissue class labels,
- Determine the GMM distribution parameter set by using EM: Φk={μk, σk}.
- For each I, calculate the posterior probability P(k|I).
Step 5. For k=1 to K tissue class labels,
- Derive the hybrid geometric-statistical feature: gk(I)=g(|∇I|, P(k|I)).
- Derive the initial contour Ck,0 by thresholding P(k|I) at ts=0.1 and skeletonizing.
- Propagate the curve Ck from Ck,0 on gk(I) until convergence to obtain the class label image Lk(I).
Step 6. For each I, if Lk(I)=1 for more than one k, or Lk(I)=null for all k, assign Lk(I)=1 for the k with the highest z-score.

In Step 5, the performance gain obtained by setting ts > 0 is demonstrated in Table 2.2. For 0.1 ≤ ts ≤ 0.6, the final segmentation accuracy on a simulated brain volume is not sensitive to the particular parameter value tested, but a nonzero threshold is necessary for removing extreme outliers before skeletonizing is performed. The parameter is set to ts = 0.1 (the lowest ts tested with greater than 80% accuracy for both WM and GM) for all subsequent experiments, both simulated and clinical.

Tissue   Dice Segmentation Accuracy Given Parameter Value
         ts=0.0   ts=0.1   ts=0.3   ts=0.6
WM       88.99    88.89    88.68    88.88
GM       55.51    81.34    82.74    82.76

Table 2.2. Parameter sensitivity of a simulated brain volume (9% noise, 40% inhomogeneity)

2.2.4 Extension to Multiple MR Sequence Data

To further demonstrate the flexibility of the proposed segmentation approach, we extend our method so that information from multiple MR sequences with different contrast properties can be incorporated when such data are available. Assuming registered images, we first replace the geometric feature component in the proposed hybrid active contour feature (2.8) with the multi-dimensional vector gradient norm derived from all available data sequences. To derive the statistical active contour feature term, the GMM parameters and the voxel statistics are individually estimated for each contrast modality m given M input modalities. Since our main goal is to improve the ability to differentiate between the various tissue labels, we derive a pair-wise intensity contrast term, C^m_{i,j}, as the ratio of the absolute difference between the EM-estimated means, μ, of labels i and j to the observed intensity range [I_Min … I_Max] (2.12), and then normalize across modalities (2.13):

C_{i,j}^{m} = \frac{\left| \mu_i^m - \mu_j^m \right|}{I_{Max}^m - I_{Min}^m}    (2.12)

\tilde{C}_{i,j}^{m} = \frac{C_{i,j}^{m}}{\sum_{m=1}^{M} C_{i,j}^{m}}    (2.13)
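As a concrete illustration, a minimal sketch of the pair-wise contrast weighting of (2.12)-(2.13) is given below; the array layout and the function name are illustrative assumptions rather than part of the method.

```python
import numpy as np

def contrast_weights(means, intensity_ranges):
    """Pair-wise intensity contrast terms as in (2.12)-(2.13).

    means            : array (M, K) of EM-estimated class means, one row per modality
    intensity_ranges : array (M, 2) of observed [I_Min, I_Max] per modality
    Returns an (M, K, K) array of contrast weights normalized over modalities.
    """
    M, K = means.shape
    C = np.zeros((M, K, K))
    for m in range(M):
        span = intensity_ranges[m, 1] - intensity_ranges[m, 0]
        # (2.12): absolute difference of class means over the observed intensity range
        C[m] = np.abs(means[m, :, None] - means[m, None, :]) / span
    # (2.13): normalize each pair-wise term over the M modalities
    total = C.sum(axis=0, keepdims=True)
    total[total == 0] = 1.0            # avoid division by zero on the diagonal (i == j)
    return C / total
```

In this sketch, a modality with stronger contrast between a given pair of labels simply receives a proportionally larger weight, which is the intended behaviour described next.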
In the other case where the differences in the sample means are equal, the differences in variance are taken into account inherently in the individual posterior probability terms. The resulting normalized measure (2.14) replaces the posterior probability derived from single contrast input in the segmentation procedure.   C Pk I  M   k ( I ,..., I )  1  M  K  m 1 i 1 M  m i ,k  K  m   C  m i ,k  ,  (2.14)  m 1 i 1  2.2.5 Data Pre-Processing We employed the non-parametric non-uniform intensity normalization (N3) algorithm [54] for intensity inhomogeneity correction using default parameter settings (width of deconvolution kernel=0.15, number of iterations=50, sampling factor=4, characteristic 54  distance=200 mm, stopping criteria=0.001). All scans were then noise filtered using an edge-preserving Perona-Malik anisotropic diffusion filter [55] (number of iterations=4, conductance=3.0) to further enhance the image signal-to-noise ratio. Brain masks were generated using the provided ground truths consistent with other published methods to facilitate direct comparisons, or alternatively, many methods are also available for this brain extraction task [56, 57, 58]. For the clinical datasets where the ground truths are not available, the brain masks were generated with the Brain Extraction Tool (BET) [56] using default parameter settings.  2.3 RESULTS AND DISCUSSION We applied our proposed segmentation on both simulated and real clinical MRI scans, and demonstrated in the following sections (1) the accuracy of the proposed segmentation method on simulated T1w brain MRIs, (2) the segmentation improvement on multiple MRI sequences, (3) the accuracy of the proposed method on real clinical MRI scans of normal adults, and (4) the qualitative performance of the proposed methods on clinical MRI scans of MS and AD patients.  We first validated our proposed method on eighteen simulated T1w BrainWeb [59] MRI images (with 0/1/3/5/7/9% noise, 0/20/40% inhomogeneity, 181×217×181 dimension, 1×1×1 mm3 spacing). We also performed multi-sequence segmentation based on six T1w/T2w/PDw MRI triplets (with 0/1/3/5/7/9% noise, 0% inhomogeneity). Secondly, eighteen real high-resolution clinical T1w MRI scans from the Internet Brain Segmentation Repository (IBSR) [60] (coronally acquired, 256×128×256 dimension, 55  0.837×0.837~1×1 mm2 in-plane spacing, 1.5 mm slice thickness) were also segmented. For both datasets, the “ground truths” are known for comparisons. For the BrainWeb dataset, the ground truth is the phantom atlas used to generate the simulated scans, whereas for the IBSR dataset, the ground truth is the provided expert-guided manual segmentation label for each of the clinical scans. Lastly, from the Multiple Sclerosis MRI Research Group (MS/MRI), real clinical 1.5T spoiled gradient (SPGR) MRI scans (axially acquired, 256×256×120~160 dimension, 0.937×0.937×1.50 mm3 spacing) were taken at multiple sites. Real clinical 1.5T magnetization prepared rapid gradient echo (MP-RAGE)  MRI  scans  (sagittally  acquired,  256×256×166  dimension,  0.937×0.937×1.20 mm3 spacing) were also obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) of the LONI Image Data Archive (IDA) [61] initiated by the National Institute on Aging (NIA), the National Institute of Biomedical Imaging and Bioengineering (NIBIB), the Food and Drug Administration (FDA), private pharmaceutical companies and non-profit organizations. These clinical scans were segmented and qualitatively evaluated. 
For all experiments, the parameters for the level set evolutions are set at {ψ=0.0, c=1.0, ε=0.0} to reinforce propagation and effectively remove boundary attraction and smoothness regularizations. The convergence criteria for the gradient descent optimization is defined as either less than 0.5% root mean square change in the level set function or a maximum of 100 iterations reached if the mean square change is not achieved.  The average surface distance between the ground truth and the computed segmentation was computed for each test scan by approximate nearest neighbour searching [62]. In addition, the Dice similarity index [63] was also chosen for the quantitative evaluation of 56  the 3D brain segmentation results to facilitate direct comparisons to other published results:  Dice similarity index:  2T  100% 2T   F  F  (2.15)  We denote the true positives, true negatives, false positives and false negatives as T+, T-, F+ and F- respectively, between the known ground truth and the resulting segmentation results. We compared our segmentation results with those of the M3DLS approach [42].  2.3.1 Segmentation Validation using Simulated Brain MRI  We first validated a 3-class (WM, GM, and CSF) segmentation using the proposed method on the simulated T1w brain MRI data. Segmentation was performed using the traditional geometric feature only, the statistical feature only, and the proposed hybrid feature on all eighteen datasets with varying noise and intensity inhomogeneity levels. For the edge-only level set evolution, the parameter set {ψ=2.0, c=1.0, ε=4.5} was used to enforce a stronger smoothness constraint; otherwise, contours leaking through weak edges were often observed. These parameter values were chosen empirically through experiments with a set of parameter values to ensure a competitive comparison. For the statistical feature term only, the parameter set {ψ=0.0, c=1.0, ε=0.0} was used, same as the hybrid approach. Qualitative results in Figure 2.2 demonstrated very good resemblance between the provided phantom label and the segmentation results based on the hybrid feature.  57  T1w Input  Tissue Labels (hybrid feature)  T1w Input  Tissue Labels (hybrid feature)  Slice 130  Slice 90  Slice 40  Phantom Labels  Figure 2.2. Qualitative segmentation performance of two simulated T1w brain images showing the provided phantom label, raw images and the segmentation results obtained by using the proposed hybrid feature. We show three slices for both the best case (0% noise, 0% inhomogeneity) and worst case (9% noise, 40% inhomogeneity) scenarios. White, light gray and dark gray colors represent, respectively, the WM, GM and CSF classes in the tissue and phantom labels. We note the results from the hybrid approach resemble the phantom for both the best and worst input scenarios.  Table 2.3 shows that, on average over all noise and inhomogeneity levels, the proposed method achieved consistently accurate segmentation results for both WM and GM with similarity indices of 93.71% (σ=2.11%, average distance=0.27 mm) and 92.59% (σ=2.40%, average distance=0.30 mm). Segmentation of structures such as CSF by using the proposed approach also achieved considerable (>70%) similarity of 77.75% (σ=6.15%, average distance=2.24 mm). The CSF results were not as stable as WM and 58  GM mainly due to the much smaller structural volume, leading to increased sensitivity to estimation errors in the active contour initialization and feature derivation. 
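As an aside, the two evaluation measures used throughout this section can be sketched as follows; the sketch assumes a symmetric average surface distance and substitutes SciPy's k-d tree for the approximate nearest-neighbour search of [62].

```python
import numpy as np
from scipy import ndimage as ndi
from scipy.spatial import cKDTree

def dice_index(seg, truth):
    """Dice similarity index (2.15): 2*T+ / (2*T+ + F+ + F-), in percent."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    tp = np.count_nonzero(seg & truth)
    fp = np.count_nonzero(seg & ~truth)
    fn = np.count_nonzero(~seg & truth)
    return 100.0 * 2 * tp / (2 * tp + fp + fn)

def average_surface_distance(seg, truth, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean distance between segmentation and ground-truth surfaces (mm)."""
    def surface_points(mask):
        surface = mask & ~ndi.binary_erosion(mask)
        return np.argwhere(surface) * np.asarray(spacing)
    a = surface_points(seg.astype(bool))
    b = surface_points(truth.astype(bool))
    d_ab, _ = cKDTree(b).query(a)   # nearest ground-truth surface point for every segmented point
    d_ba, _ = cKDTree(a).query(b)
    return 0.5 * (d_ab.mean() + d_ba.mean())
```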
Nonetheless, the overall segmentation results were on par if not better than other previously published results [16, 27, 63].  Tissues  Hybrid Feature  WM GM CSF  Statistical Feature Only  Geometric Feature Only  WM GM CSF  0% Inhomogeneity (6 images) TDice T+ Dist % % % mm (std) (std) (std) (std) 94.95 91.45 99.22 0.20 (0.44) (1.33) (0.83) (0.02) 93.70 97.10 88.43 0.29 (0.55) (0.97) (0.71) (0.03) 78.02 74.74 98.83 1.39 (4.14) (6.20) (0.05) (0.09) 91.40 95.58 91.08 4.70 (0.48) (0.73) (1.29) (0.17) 88.97 86.36 91.25 0.55 (0. 92) (2.02) (0.61) (0.01) 68.31 67.56 67.56 6.28 (1.58) (0.48) (0.31) (0.14)  0.0001 0.0011 0.0001 0.0001 0.0030 0.0368 94.78 91.12 (0.39) (1.58) GM 94.44 98.77 (0.46) (1.16) CSF 83.92 74.76 (4.81) (6.12)  WM GM CSF WM  0.0001 0.0001 0.0007 0.0001 0.0001 0.0001 99.24 5.06 (0.87) (0.07) 88.15 0.27 (0.62) (0.02) 99.78 5.40 (0.12) (0.26)  WM 0.5106 0.7119 0.9689 0.0001 GM 0.0527 0.0425 0.4993 0.2322 CSF 0.0718 0.9954 0.0001 0.0001  20% Inhomogeneity 40% Inhomogeneity (6 images) (6 images) TTDice T+ Dist Dice T+ Dist % % % % mm % % mm (std) (std) (std) (std) (std) (std) (std) (std) 92.90 94.13 94.20 0.32 93.27 94.00 94.87 0.30 (2.63) (2.73) (5.74) (0.14) (2.23) (2.68) (5.14) (0.13) 91.84 92.67 89.77 0.31 92.25 93.39 89.79 0.31 (3.10) (5.85) (0.94) (0.07) (2.71) (5.21) (0.96) (0.07) 77.31 68.07 99.54 2.84 77.92 69.03 99.52 2.49 (7.72) (13.65) (0.37) (1.75) (7.17) (12.89) (0.35) (1.27) 86.52 96.10 82.35 4.02 87.13 96.21 83.45 4.06 (5.41) (0.93) (10.14) (0.60) (4.42) (1.01) (8.26) (0.55) 83.08 77.24 91.11 0.59 84.02 78.45 91.24 0.57 (7.36) (11.24) (0.53) (0.08) (5.63) (9.14) (0.53) (0.06) 68.48 64.30 98.25 7.63 68.56 64.58 98.22 7.40 (1.28) (2.87) (0.17) (1.60) (1.04) (2.48) (0.16) (1.32) p Values 0.0484 0.1552 0.0551 0.0001 0.0288 0.1174 0.0348 0.0001 0.0435 0.0307 0.0287 0.0013 0.0233 0.0177 0.0230 0.0010 0.0396 0.5373 0.0006 0.0043 0.0250 0.4443 0.0004 0.0012 92.88 94.16 94.15 4.52 93.26 94.02 94.83 4.55 (2.65) (2.74) (5.80) (0.40) (2.30) (2.69) (5.19) (0.37) 92.16 93.28 89.81 0.30 92.59 94.03 89.83 0.30 (3.40) (6.48) (0.95) (0.08) (3.00) (5.81) (0.96) (0.07) 79.86 68.17 99.92 6.42 80.57 69.12 99.92 6.25 (9.79) (13.70) (0.03) (1.42) (9.15) (12.92) (0.03) (1.21) p Values 0.9901 0.9856 0.9886 0.0001 0.9939 0.9901 0.9901 0.0001 0.8717 0.8709 0.9446 0.8272 0.8449 0.8486 0.9454 0.8147 0.6376 0.9901 0.0540 0.0115 0.6009 0.9909 0.0385 0.0033  Overall (18 images) TT+ % % (std) (std) 93.19 96.10 (2.54) (4.78) 94.39 89.33 (4.72) (1.05) 70.61 99.30 (11.14) (0.44) 95.96 85.62 (0.89) (8.17) 80.68 91.20 (8.96) (0.53) 65.48 98.07 (2.57) (0.32)  Dist mm (std) 0.27 (0.12) 0.30 (0.06) 2.24 (1.33) 4.26 (0.55) 0.57 (0.06) 7.10 (1.28)  0.0002 0.0004 0.0002 0.0001 0.0001 0.0001 0.0001 0.0740 0.0001 93.64 93.10 96.07 (2.10) (2.67) (4.84) 93.06 95.36 89.26 (2.76) (5.38) (1.14) 81.45 70.68 99.87 (7.93) (11.14) (0.10)  0.0001 0.0001 0.0001 4.71 (0.39) 0.29 (0.06) 6.03 (1.12)  Dice % (std) 93.71 (2.11) 92.59 (2.40) 77.75 (6.15) 88.35 (4.41) 85.36 (5.71) 68.45 (1.24)  0.9215 0.9184 0.9851 0.0001 0.5928 0.5728 0.8500 0.6235 0.1362 0.9851 0.0001 0.0001  Table 2.3. 
Quantitative evaluation of segmentation results obtained by using the geometric feature, the statistical feature, and the proposed hybrid approach on 18 simulated T1w BrainWeb scans  To statistically evaluate the differences of segmentation results between the proposed hybrid approach, and the contours based on geometric-only and statistical-only features, we calculated the p values (p<0.05 indicates a statistically significant difference in the group means). Compared to results from using only the traditional geometric feature, the proposed hybrid approach achieved significantly higher similarity indices and reduced surface distance across all scans. On average, the proposed method achieved increased 59  similarity indices of 5.36% (p=0.0002) in WM, 7.23% (p<0.0001) in GM, and 9.30% (p<0.0001) in CSF segmentation results with reduced surface distance of 3.99 mm (p<0.0001) in WM, 0.27 mm (p<0.0001) in GM, and 4.86 mm (p<0.0001) in CSF. Compared to results from using only the statistical feature, the similarity indices achieved are comparable to those of the proposed hybrid approach. However, we observed that, on average, the proposed method was able to reduce the average surface distance by 4.44 mm (p<0.0001) in WM and 3.79 mm (p<0.0001) in CSF. These results showed that using the geometric term alone was highly sensitive to image artifacts and require contour regularization, and using the statistical term alone caused the contours to deviate from the true edges. Only when incorporating both features can we evolve the contour in the appropriate statistical space while maintaining high geometric relevance at the same time. Processing a single BrainWeb volume takes approximately 55 minutes (dual 3.20 GHz Xeon PC with 3.25 GB memory) to 75 minutes (3.60GHz Pentium4 PC with 2 GB memory), comparable to the processing time required by other conventional techniques.  2.3.2 Segmentation Improvement using Multiple MR Sequences  We next performed a 3-class (WM, GM, and CSF) segmentation using the proposed method on the simulated T1w/T2w/PDw brain MRI data. Segmentation was performed on six datasets with varying noise levels and 0% intensity inhomogeneity. Qualitative results in Figure 2.3 illustrated very good resemblance between the provided phantom label and the segmentation results.  60  T1w Input  T2w Input  PDw Input  Tissue Labels (hybrid feature)  Slice 130  Slice 90  Slice 40  Phantom Label  Figure 2.3. Qualitative segmentation performance of a multiple simulated MR sequence (T1w, T2w, PDw) brain images showing the provided phantom label, raw images (0% noise, 0% inhomogeneity) and the segmentation results obtained by using the proposed approach. White, light gray and dark gray colors represent, respectively, the WM, GM and CSF classes in the tissue and phantom labels. We show three slices for the test case and note improved segmentation results on multiple MR sequence.  Table 2.4 demonstrated the quantitative segmentation accuracies. On average over all noise levels, the proposed method achieved consistently accurate segmentation results for WM, GM and CSF with similarity indices of 96.24% (σ=1.13%, average distance=0.15 mm), 94.14% (σ=1.24%, average distance=0.24 mm) and 81.57% (σ=2.82%, average distance=1.56 mm) respectively. When compared to the experiment on single simulated T1w brain images, segmentation using multiple MR sequence data provided an average improvement in similarity indices of 1.29% (p=0.0479), 0.44% (p=0.4627), and 3.55% 61  (p=0.1403) for WM, GM and CSF. 
The segmentation improvements are not statistically significant which is not surprising given that the synthetic T1w brain images by themselves already have the strong image contrast required to distinguish between the majority of WM and GM tissues. Additional MR sequences such as T2w and PDw in this case, helped improve the overall consistency by achieving a much better balance between the WM, GM, and CSF estimation as observed by the T+ and T- performance. Table 2.4 showed that with only T1w scans as inputs, the differences between the T+ and T- were 7.77%, 8.67% and 24.09% respectively for WM, GM and CSF, whereas with T1w, T2w and PDw inputs, these differences were much reduced to 2.84%, 0.24% and 6.95%.  Single MR Sequence (6 images) Tissue Dice T+ Dist T% % mm % (std) (std) (std) (std) WM 94.95 91.45 99.22 0.20 (0.44) (1.33) (0.83) (0.02) GM 93.70 97.10 88.43 0.29 (0.55) (0.97) (0.71) (0.03) CSF 78.02 74.74 98.83 1.39 (4.14) (6.20) (0.05) (0.09)  Multiple MR Sequences p Value (6 images) Dice T+ Dist Dice T+ Dist TT% % mm % % mm % % (std) (std) (std) (std) 96.24 95.33 98.17 0.15 0.0479 0.0009 0.1562 0.0721 (1.13) (0.19) (1.40) (0.05) 94.14 93.72 93.96 0.24 0.4627 0.0075 0.0001 0.0580 (1.24) (1.65) (0.89) (0.04) 81.57 90.61 97.56 1.56 0.1403 0.0046 0.0001 0.1160 (2.82) (5.01) (0.23) (0.20)  Table 2.4. Quantitative evaluation of segmentation results obtained by using the proposed method on 6 simulated T1w/T2w/PDw BrainWeb scans  2.3.3 Segmentation Comparisons using Clinical MR Scans of Normal Adults We applied the proposed method to segment eighteen clinical IBSR brain images. The images were segmented using a 3-class (WM, GM, and CSF) segmentation. The qualitative results are shown in Figure 2.4. The tissue labels were then post-processed 62  due to a known limitation of the provided manual segmentation labels. It has been reported previously [42] that the expert-guided manual segmentation label contains much of the cortical CSF being mislabeled as GM. We have confirmed this with our own observations. As observed from Figure 2.4, we note that the original segmentation results matched closely with what can be visually observed from the raw images. However, this observation did not correspond well to the provided expert-guided manual segmentation label due to this aforementioned limitation. If quantitative evaluation was to be performed on this result, both GM and CSF would produce much lower accuracies than what were actually present.  To enable a valid comparison by retaining only the ventricular CSF for comparison, we employed the following post-processing scheme:  Step 1. Re-assign segmented CSF labels close to (≤ 5 mm) any background labels as GM. Step 2. Define a rectangular region of interest (30% of the mask width in the superior-inferior orientation and 50% for both left-right and anterior-posterior) around the brain mask centroid. Apply 2D morphological thinning (3x3 kernel) 64 on CSF pixels outside the region of interest in coronal slices, and singly links are removed. Step 3. Identify the largest 3D connected component as the desired ventricular CSF. Apply 3D morphological dilation (3x3x3 kernel) to this final mask, and re-assign all original CSF labels outside as GM.  63  Manual Labels  Tissue Labels  Tissue Labels  (no post-processing)  (with post-processing)  Slice 85  Slice 65  Slice 35  T1w Input  Figure 2.4. 
Qualitative segmentation performance of a real clinical T1w brain images (IBSR #08) showing the raw images, expert-guided manual segmentation label and the segmentation results obtained by using the traditional edge feature and the proposed hybrid approach. We show three slices for the 3-class segmentation case and the post-processed case. White, light gray and dark gray colors represent, respectively, the WM, GM and CSF classes in the segmentation labels. We note the good resemblance between the segmentation results and the raw image, and between the post-processed results and the expert-guided manual segmentation label.  This post-processing scheme allowed us to generate results comparable to the provided manual label. We showed in Table 2.5 that the results obtained by using the proposed method achieved similarity indices of 87.55% (σ=2.92%, average distance=0.54 mm), 93.18 (σ=0.92%, average distance=0.38 mm) and 77.39% (σ=11.15%, average  64  distance=3.12 mm) for WM, GM, and CSF averaged across the eighteen images tested.  Tissue WM GM CSF  Proposed Dice T+ Dist T% % mm % (std) (std) (std) (std) 87.55 87.12 93.84 0.54 (2.92) (6.17) (2.33) (0.22) 83.00* 74.00 90.00 93.18 93.36 87.16 0.38 (0.92) (2.25) (6.03) (0.13) 92.00 77.39 78.17 99.68 3.12 n/a n/a (11.15) (10.18) (0.28) (1.28) * Dice indices are estimated to the nearest integer for the M3DLS method based on the average true volumes given the average reported T+ and T-. Dice % (std) 79.00*  M3DLS T+ T% % (std) (std) 75.00 92.00  Table 2.5. Quantitative comparison of segmentation results on 18 real clinical T1w IBSR scans  To statistically compare the segmentation performance of our hybrid active contour approach against the region-based active contour M3DLS algorithm, a one-sample t-test was performed to calculate the p values (standard deviations were not reported by M3DLS [42]). Both methods are automatic and are based on 3D level set implementation with the key difference in the active contour formulation and the optimization functional. The proposed method achieved significantly higher accuracies with an average improvement of 8.55% (p<0.0001) in WM and 10.18% (p<0.0001) in GM segmentation similarities. CSF results were not reported with the M3DLS method; however, our proposed approach achieved considerable (>70%) similarity. Furthermore, the segmentation results achieved by M3DLS were conservative with low true positives. This can be mainly attributed to the fact that the optimization functional in M3DLS modeled each class as piecewise constant by taking only the intensity square differences from the  65  means. However, variations within each segmented class are not considered, such that if, intrinsically, WM contains significantly larger intensity variance than GM, voxels belonging to WM would potentially be mislabeled as GM if the voxel intensities are closer to the estimated GM mean. This lead to under-segmentation in WM (under-segmentation in GM is attributed to the known limitation in the manual labels), and M3DLS is required to post-process the segmentation results by incorporating an additional morphological dilation step in their subsequent work [43]. On the other hand, by integrating both geometric and statistical features into an edge-based deformable model, the proposed hybrid approach captures both the image edge geometry and the voxel statistical homogeneity, and achieved higher segmentation accuracies with less additional computation complexity. 
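For completeness, the ventricular-CSF post-processing scheme of the previous subsection (Steps 1-3) can be sketched with standard morphological tools; the 2D thinning step inside the region of interest is omitted here for brevity, and the label values and helper name are hypothetical.

```python
import numpy as np
from scipy import ndimage as ndi

def retain_ventricular_csf(labels, spacing, csf=3, gm=2, bg=0):
    """Simplified sketch of the ventricular-CSF post-processing (Steps 1-3)."""
    csf_mask = labels == csf
    # Step 1: CSF within 5 mm of any background voxel is re-assigned to GM
    dist_to_bg = ndi.distance_transform_edt(labels != bg, sampling=spacing)
    labels[csf_mask & (dist_to_bg <= 5.0)] = gm
    csf_mask = labels == csf
    # Step 3: keep the largest 3D connected CSF component (the ventricles)
    comps, n = ndi.label(csf_mask)
    if n > 0:
        sizes = ndi.sum(csf_mask, comps, index=np.arange(1, n + 1))
        largest = comps == (1 + np.argmax(sizes))
        keep = ndi.binary_dilation(largest, structure=np.ones((3, 3, 3)))
        labels[csf_mask & ~keep] = gm   # CSF outside the dilated ventricular mask becomes GM
    return labels
```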
2.3.4 Qualitative Performance using Clinical MR Scans of MS and AD Patients Lastly, we applied the proposed method to segment clinical MRI brain scans of MS and AD patients. The images were segmented using a 3-class (normal appearing WM, GM with diseased WM, and CSF) segmentation. The qualitative results are shown in Figure 2.5, demonstrating that the proposed approach appears stable on clinical scans. Figure 2.5a showed a typical clinical MRI scan of a MS patient, whereas Figure 2.5b illustrated an example scan of a MS patient with enlarged lateral ventricles. Both segmentation results appeared to have good overall accuracy. In addition, we demonstrated segmentation results from an MRI scan where the brain extraction step inaccurately included part of the eyes in the brain mask in Figure 2.5c. This error in the brain mask did not seem to cause visible problems for the algorithm in the adjacent brain tissue.  66  Furthermore, Figure 2.5d and Figure 2.5e showed segmentation results of scans from an AD patient one year apart. We again noticed visibly consistent segmentation in the tissue labels in the presence of tissue atrophy / ventricle enlargement over time. We note that image quality is an important limiting factor to the anatomical accuracy of these clinical results, as tissues in the MS and AD scans, with increased overlapping tissue intensity distributions, are generally more difficult to be delineated than those of the BrainWeb and IBSR scans.  In our T1w test scans, the intensity difference between GM and diseased WM is subtle, and separating these two class types is likely not possible without additional MRI sequences that are more sensitive to WM pathology, such as PDw or T2w, or relying on prior probability maps such as those derived from a training set. We have left these experiments for future work. In its current form, the proposed method can potentially be used for the assessment of disease severity by providing stable and consistent segmentations of CSF and normal appearing WM.  67  Tissue Labels  Slice 45  Slice 75  Slice 45  T1w Input Tissue Labels  Slice 40  T1w Input  Slice 70  Tissue Labels  Slice 35  T1w Input  (c) MS  (TE/TR/TI=3.176/7.655/450ms)  (TE/TR/TI=4.000/11.000/0ms)  (TE/TR/TI=3.917/8.110/0ms)  Slice 130  Slice 130  Slice 95  (b) MS  Slice 95  (a) MS  (d) AD  (e) AD  (TE/TR/TI=3.95/9.124/1000ms,  (TE/TR/TI=3.98/9.124/1000ms,  Flip Angle=8˚)  Flip Angle=8˚)  Figure 2.5. Qualitative segmentation performance of real clinical T1w brain images (UBC MS/MRI and LONI IDA) showing the raw images and the segmentation results obtained by using the proposed approach. We show two slices each for the 3-class segmentation. White, light gray and dark gray colors represent, respectively, the normal appearing WM, GM with diseased WM, and CSF classes in the segmentation labels. We note the approach is stable on (a) a typical MS patient scan, (b) an MS patient scan with enlarged ventricles, (c) an MS patient scan with an inaccurate brain mask, and AD patient scans at (d) year one and (e) year two.  68  2.4 CONCLUSIONS We proposed a 3D brain MR segmentation method based on deformable models and demonstrated accurate and stable brain tissue segmentation on single as well as multiple MR sequence scans. 
The main contribution of our work is that we employed a geodesic active contour formulation by integrating both image geometry and voxel statistics into a hybrid geometric-statistical feature, which acts as a stabilizing regularizing function for the extraction of complex anatomical features such as WM, GM, and CSF. We validated our technique first using both single and multiple simulated brain MRI sequence data. Improved segmentation accuracy was shown in results from the proposed hybrid approach against those using individual geometric or statistical features only. Furthermore, on real clinical MRI datasets, we also demonstrated improved accuracy over another state-of-the-art approach, the region-based M3DLS method. We also demonstrated consistent results when segmenting MRI scans of both MS and AD patients.  Issues identified for possible future work include further enhancing the statistical distribution estimation using more complex intensity distribution estimation methods such as nonparametric and partial volume models, and extending additional segmentation classes, hierarchy or feature cues for the segmentation of anomalies such as white matter lesions or tumours.  69  2.5 BIBLIOGRAPHY 1.  2.  3.  M. K. Beyer, C. C. Janvin, J. P. Larsen, and D. Aarsland, “An MRI study of patients with Parkinson’s disease with mild cognitive impairment and dementia using voxel based morphometry,” Journal of Neurol. Neurosurg. Psychiatry, 2006. M. Grossman, C. McMillan, P. Moore, L. Ding, G. Glosser, M. Work, and J. Gee, “What’s in a name: voxel-based morphometry analysis of MRI and naming difficulty in Alzheimer’s disease, frontotemporal dementia and corticobasal degeneration,” Brain, 127(3):628-649, 2004. P. E. Grant, “Structural MR Imaging,” Epilepsia, 45(s4):4-16, 2004.  4.  J. J. Wisco, G. Kuperberg, D. Manoach, B. T. Quinn, E. Busa, B. Fischl, S. Heckers, and A. G. Sorensen, “Abnormal cortical folding patterns within Broca’s area in schizophrenia: evidence from structural MRI,” Schizophrenia Research, 2007. 5. D. W. Paty and G. J. Zhao, MRI in multiple sclerosis – Implications for diagnosis and treatment, 2nd ed., 1999. 6. D. H. Miller, “Biomarkers and surrogate outcomes in neurodegenerative disease: lessons from multiple sclerosis,” NeuroRx, 1:284-294, 2004. 7. A. Traboulsee, G. Zhao, and D. K. B. Li, “Neuroimaging in multiple sclerosis,” Neurologic Clinics, 23:131-148, 2005. 8. D. T. Chard, C. M. Griffin, G. J. M. Parker, R. Kapoor, A. J. Thompson, and D. H. Miller, “Brain atrophy in clinically early relapsing-remitting multiple sclerosis,” Brain, 125(2):327-337, February, 2002. 9. N. De Stefano, P. M. Matthews, M. Filippi, F. Agosta, M. De Luca, M. L. Bartolozzi, L. Guidi, A. Ghezzi, E. Montanari, A. Cifelli, A. Federico, and S. M. Smith, “Evidence of early cortical atrophy in MS,” Neurology, 60:1157-1162, 2003. 10. J. Sastre-Garriga, G. T. Ingle, D. T. Chard, L. Ramio-Torrenta, D. H. Miller, and A. J. Thompson, “Grey and white matter atrophy in early clinical stages of primary progressive multiple sclerosis,” NeuroImage, 22(1):353-359, May 2004. 11. V. M. Anderson, N. C. Fox, and D. H. Miller, “Magnetic resonance imaging measures of brain atrophy in multiple sclerosis,” Journal of Magnetic Resonance Imaging, 23:605-618, 2006. 12. S. K. Warfield, K. H. Zou, and W. M. Wells, “Simultaneous Truth and Performance Level Estimation (STAPLE): An algorithm for the validation of image segmentation,” IEEE Transactions on Medical Imaging, 23(7), 2004.  70  13. D. L. Pham, C. Xu, and J. L. 
Prince, “Current methods in medical image segmentation,” Annual Review of Biomedical Engineering, 2:315-337, August 2000. 14. A. W. C. Liew and Y. Hong, “Current methods in automatic tissue segmentation of 3D magnetic resonance brain images,” Current Medical Imaging Reviews, 2(1):91-103, February 2006. 15. J. Ashburner and K. J. Friston, "Voxel-based morphometry--the methods," NeuroImage, 11(6-1):805-821, 2000. 16. K. Van Leemput, F. Maes, D. Vandermeulen, and P. Suetens, “Automated model-based tissue classification of MR images of the brain,” IEEE Transactions on Medical Imaging, 18(10):897-908, October 1999. 17. K. M. Pohl, W. M. Wells, A. Guimond, K. Kasai, M. E. Shenton, R. Kikinis, W. E. L. Grimson, and S. K. Warfield, "Incorporating non-rigid registration into expectation maximization algorithm to segment MR images," in Proc. of Medical Image Computing and Computer Assisted Intervention, 2002, pp.564-571. 18. F. L. Bookstein, “Voxel-based morphometry should not be used with imperfectly registered images,” NeuroImage, 14:1454-1462, 2001. 19. W. R. Crum, L. D. Griffin, D. L. G. Hill, and D. J. Hawkes, “Zen and the art of medical image registration: correspondence, homology, and quality,” NeuroImage, 20:1425-1437, 2003. 20. J. Ashburner and K. J. Friston, “Why voxel-based morphometry should be used,” NeuroImage, 14:1238-1243, 2001. 21. D. L. Pham and J. L. Prince, “Adaptive fuzzy segmentation of magnetic resonance images,” IEEE Transactions on Medical Imaging, 18(9): 737-752, September 1999. 22. M. W. Wells, W. E. L. Grimson, and F. A. Jolesz, “Adaptive segmentation of MRI data,” IEEE Transactions on Medical Imaging, 15(4):429-442, August 1996. 23. J. Rajapakse and F. Kruggel, “Segmentation of MR images with intensity inhomogeneities,” Image and Vision Computing, 16(3):165-180, 1998. 24. J. Burkhardt, “A Markov chain Monte Carlo algorithm for the segmentation of T1-MR-images of the head,” Diploma thesis, Technische Universitat Munchen, 2003. 25. A. Ruf, H. Greenspan, and J. Goldberger, “Tissue classification of noisy MR brain images using constrained GMM,” in Proc. of Medical Image Computing and Computer Assisted Intervention, 2005, pp.790-797. 26. Y. Zhou and J. Bai, “Atlas-based fuzzy connectedness segmentation and intensity nonuniformity correction applied to brain MRI,” IEEE Transactions on Biomedical Engineering, 54(1):122-129, 2007. 71  27. P. L. Bazin and D. L. Pham, "Topology preserving tissue classification of magnetic resonance brain images," IEEE Transactions on Medical Imaging, 26(4):487-496, April 2007. 28. L. Liao and T. Lin, “MR brain image segmentation based on kernelized fuzzy clustering using fuzzy Gibbs random field model, “ in Proc. of IEEE Complex Medical Engineering, 2007, pp.529-535. 29. Y. Zhang, M. Brady, and S. Smith, "Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm," IEEE Transactions on Medical Imaging, 20(1):45- 57, 2001. 30. D. Terzopoulos, A. Witkin, and M. Kass, "Constraints on deformable models: Recovering 3D shape and nonrigid motion," Artificial Intelligence, 36(1): 91-123, 1988. 31. J. S. Suri, “Leaking prevention in fast level sets using fuzzy models: an application in MR brain,” in Proc. of IEEE Engineering in Medicine and Biology Society, 2000, pp.220-225. 32. P. A. Yushkevich, J. Piven, H. C. Hazlett, R. G. Smith, S. Ho, J. C. Gee, and G. 
Gerig, “User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability,” NeuroImage, 31(3):1116-1128, July 2006. 33. S. Osher and J. Sethian, “Fronts propagating with curvature-dependent speed: algorithms based on the Hamilton-Jacobi formulation,” Journal of Computational Physics, 79:12-49, 1988. 34. V. Caselles, R Kimmel, and G Sapiro, “Geodesic active contours,” International Journal of Computer Vision, 22(1):61–79, 1997. 35. X. Zeng, L. H. Staib, R. T. Schultz, and J. S. Duncan, “Segmentation and measurement of the cortex from 3-D MR images using coupled-surfaces propagation,” IEEE Transactions on Medical Imaging, 18(10):927-937, 1999. 36. R. Goldenberg, R. Kimmel, E. Rivlin, and M. Rudsky, “Cortex segmentation – a fast variational geometric approaches,” in Proc. of IEEE Workshop on Variational and Level SetMethods in Computer Vision, Vancouver, BC, Canada, July 2001, pp.127-133. 37. C. Ciofolo, C. Barillot, and P. Hellier, “Combining fuzzy logic and level set methods for 3D MRI brain segmentation,” in Proc. of IEEE International Symposium on Biomedical Imaging, 2004, pp.161-164. 38. C. Ciofolo and C. Barillot, “Brain segmentation with competitive level sets and fuzzy control,” in Proc. of Information Processing in Medical Imaging, 2005, pp.333-344. 72  39. T. F. Chan and L. A. Vese, "Active contours without edges." IEEE Transactions on Image Processing, 10(2):266-277, 2001. 40. A. Tsai, A. Yezzi, Jr., and A. S. Willsky, “Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification,” IEEE Transactions on Image Processing, 10(8), August 2001. 41. E. Angelini, T. Song, B. D. Mensh, and A. Laine, “Segmentation and quantitative evaluation of brain MRI data with a multi-phase three-dimensional implicit deformable model,” in Proc. of SPIE Medical Imaging, 5370(1), 2004, pp.526-537. 42. E. Angelini, T. Song, and A. Laine, “Homogeneity measures for multiphase level set segmentation of brain MRI,” in Proc. of IEEE International Symposium on Biomedical Imaging, 2006, pp.746-749. 43. E. Angelini, T. Song, B. D. Mensh, and A. Laine, “Brain MRI segmentation with multiphase minimal partitioning: a comparative study,” International Journal of Biomedical Imaging, 2007:10526, 2007. 44. L. A. Vese and T. F. Chan, “A multiphase level set framework for image segmentation using the Mumford and Shah model,” International Journal of Computer Vision, 50(3):271-293, 2002. 45. D. Mumford and J. Shah, "Optimal approximation by piecewise smooth functions and associated variational problems," Communications on Pure and Applied Mathematics, 42:577-685, 1989. 46. M. Jeon, M. Alexander, N. Pizzi, and W. Pedrycz, “Unsupervised hierarchical multi-scale image segmentation: level set, wavelet and additive splitting operator,” Pattern Recognition Letters, 26:1461-1469, 2005. 47. X. Han, C. Xu, and J. L. Prince, “A topology preserving level set method for geometric deformable models,” IEEE Transactions on Pattern Analysis and Machine Learning, 25(6):755-768, June 2003. 48. K. M. Pohl, R. Kikinis, and W. M. Wells, “Active mean fields: solving the mean field approximation in the level set framework,” in Proc. of Information Processing in Medical Imaging, 2007, pp.26-37. 49. M. Rousson, N. Paragios, and R. Deriche, “Implicit active shape models for 3D segmentation in MR imaging,” in Proc. of Medical Image Computing and Computer Assisted Intervention, 2004, pp.209-216. 50. L. Ibanez, W. Schroeder, L. 
Ng, and J. Cates, The ITK Software Guide, 2nd ed., Kitware Inc., 2005.  73  51. A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” Journal of the Royal Statistical Society. Series B (Methodological), 39(1):1-38, 1977. 52. A. Huang, R. Abugharbieh, R. Tam, and A. Traboulsee, "Automatic MRI brain tissue segmentation using a hybrid statistical and geometric model," in Proc. of IEEE International Symposium on Biomedical Imaging, 2006, pp.394-397. 53. R. C. Gonzales and R. E. Woods, Digital Image Processing, 2nd ed., Prentice Hall, 2002, pp.549. 54. J. G. Sled, A. P. Zijdenbos, and A. C. Evans, “A non-parametric method for automatic correction of intensity non-uniformity in MRI data,” IEEE Transactions on Medical Imaging, 17:87-97, 1998. 55. P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Transactions on Pattern Analysis Machine Intelligence, 12:629–639, 1990. 56. S. M. Smith, “Fast robust automated brain extraction,” Human Brain Mapping, 17(3):143-155, November 2002. 57. Z. Y. Shan, G. H. Yue, and J. Z. Liu, "Automated histogram-based brain segmentation in T1-weighted three-dimensional magnetic resonance head images" NeuroImage, 17:1587-1598, 2002. 58. A. Huang, R. Abugharbieh, R. Tam, and A. Traboulsee, "MRI brain extraction with combined expectation maximization and geodesic active contours," in Proc. of IEEE International Symposium on Signal Processing for Information Technology, 2006, pp.107-111. 59. C. A. Cocosco, V. Kollokian, R. K.-S. Kwan, and A. C. Evans, “BrainWeb: Online interface to a 3D MRI simulated brain database,” NeuroImage, 5(4 part 2/4):S425, 1997. 60. A. Worth, (June 2008) Internet Brain Segmentation Repository, Available: http://www.cma.mgh.harvard.edu/ibsr/. 61. M. W. Weiner, (November 2008) Alzheimer’s Disease Neuroimaging Initiative, VA Medical Center and University of California – San Francisco, Available: http://www.loni.ucla.edu/ADNI/. 62. S. Arya, D. M. Mount, N. S. Netanyahu, R. Silverman, and A. Y. Wu, “An optimal algorithm for approximate nearest neighbor searching,” Journal of the ACM, 45:891-923, 1998. 63. L. R. Dice, “Measures of the amount of ecologic association between species,” Ecology, 26:297-302, 1945.  74  64. D. W. Shattuck, S. R. Sandor-Leahy, K. A. Schaper, D. A. Rottenberg, and R. M. Leahy, "Magnetic resonance image tissue classification using a partial volume model," NeuroImage, 13(5):856-876, 2001. 65. L. Lam, S.-W. Lee, and C. Y. Suen, "Thinning methodologies-a comprehensive survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(9):869-885, September 1992.  75  3. A  NOVEL  ROTATIONALLY-INVARIANT  REGION-BASED  HIDDEN MARKOV MODEL FOR EFFICIENT 3D IMAGE SEGMENTATION*  3.1 INTRODUCTION Hidden Markov models (HMM) have over the past years been successfully used in numerous applications such as speech recognition [1, 2], document classification [3, 4], recognition of characters [5], gestures [6] and facial features [7, 8], segmentation of video [9], color images [10] and semantic images [11, 12], regional climate forecast [13], target tracking [14], protein sequence comparisons [15] and many others [16, 17]. The theory of hidden Markov models was originally developed by Baum [18] with an underlying basic one dimensional (1D) Markov chain [19]. In simple terms, a HMM is a Markov source a conditionally independent process on a Markov chain or a Markov chain on a memoryless noisy channel [20, 21]. 
In a first order system, the Markovian assumption states that the current system state at any given time depends only on the immediate preceding state with fixed transitional probabilities. In an HMM, an observation is generated as a random function of the current state probability distribution; therefore, it is not possible to determine the current state by relying only on the current observation. To *  A version of this chapter has been accepted for publication (March 2010). A. Huang, R. Abugharbieh, R.  Tam, “A novel rotationally-invariant region-based hidden Markov model for efficient 3D image segmentation,” IEEE Transactions on Image Processing. A preliminary version of this chapter has been published. A. Huang, R. Abugharbieh, R. Tam, “Image segmentation using an efficient rotationally invariant 3D region-based hidden Markov model,” in Proc. IEEE Computer Vision and Pattern Recognition Workshop on Mathematical Methods for Biomedical Image Analysis, June 2008, pp.1-8. 76  estimate the model parameters, the Baum-Welch maximum likelihood algorithm [22] is usually used. An efficient approximation, the Viterbi training [23, 24], uniquely assigns the observation to the state with the maximum a posteriori (MAP) probability.  To extend HMM to two dimensional (2D) images, a pseudo 2D HMM [25] has been proposed. The model is pseudo in the sense that instead of a fully connected 2D HMM, a hierarchy of 1D HMM is used. A truly 2D HMM was developed by representing images as Markov meshes in [26]. Further extensions to three dimensional (3D) images include the pseudo 3D HMM [8], grid-based 3D HMM [27], and block-based 3D HMM [28]. Another popular model for 2D and 3D Markov processes is the Markov random field (MRF) [29, 30]. As an undirected graphical model based on a predefined local neighborhood system of sites, an MRF is a probabilistic stochastic relaxation approach that converges approximately to the MAP estimate. MRF-based approaches have been adopted for labeling of watershed segments [31, 32], region adjacency graph processing [33], rotational invariant texture feature classification [34], and hidden MRF expectation-maximization (HMRF-EM) based segmentation of medical images [35].  Two difficulties with the HMRF model have generally been identified; first, an HMRF requires the estimation of a normalization constant that is typically intractable to compute in order to achieve a balanced solution that fits both the observed data and the given prior distribution [30, 36]. Second, an HMRF is computationally intensive. In a typical N-ary segmentation problem, MAP estimation can only be achieved through other optimization strategies such as the iterated conditional modes algorithm [37]. HMM representation, in comparison, allows for exact MAP decoding without parameters and neighborhood 77  system that are trained for the specific problem, and requires less cost increases when feature dimensions are increased to incorporate larger neighborhood samples [28], additional image features, or multiple imaging modality data. That said, as image resolutions increase with continuous advances in imaging technologies, there is a need for greater efficiency in 3D HMM based representations and algorithms as some computational tasks would otherwise still be prohibitively expensive. Recently, Joshi et al. [38] derived a computationally efficient 3D HMM parameter estimation method that iteratively decodes each row of voxels using neighboring rows in a 3D rectangular lattice or grid. 
Though this approach is computationally efficient, the method is sensitive to object rotations as a striping artifact is observed in the horizontal image direction regardless of object orientation. Such pose dependency of segmentation results is undesirable for applications such as in medical imaging, where patient positioning in the scanners could potentially have a significant impact on subsequent analysis tasks.  In this chapter, we propose a novel 3D region-based HMM (henceforth referred to as rbHMM). Our approach employs a new tree-structured parameter estimation algorithm for efficient, accurate as well as rotationally-invariant 3D image segmentation. Our 3D rbHMM organizes volumetric images into homogeneous sub-regions akin to the concept of superpixels [40]. Our parameter estimation algorithm then iteratively estimates the model parameters and the MAP labeling of each region using a non-uniform representation of voxel data in a 3D space. The representation is non-uniform in the sense that we do not make any a priori positioning or directionality assumptions on the data. We validate our proposed segmentation approach on synthetic 3D geometric shapes as well as simulated and clinical T1-weighted brain magnetic resonance imaging (MRI) 78  volumes. We demonstrate using quantitative results the superior performance of rbHMM in both accuracy and efficiency measures over current grid-based approach. Some early results were presented in [39], and here, we present the proposed rbHMM approach with more in-depth experimental validation using real image data and comparison to the grid-based HMM.  The remainder of the chapter is organized as follows. In §3.2, we introduce the proposed 3D rbHMM model. We then present an unsupervised, computationally efficient algorithm for estimating the MAP states using 1D tree-structured Markov chains. Lastly, we present extensive experimental results in §3.3 and conclude in §3.4.  3.2 METHODS In this section, we present the motivation for grouping voxels into local regions, from which our 3D region-based hidden Markov model is derived. The term “regional” has previously been used in the context of HMMs [13, 14] but generally in a geographical sense rather than based on image properties. We describe the proposed model in its general form along with the statistical assumptions made. We then present an efficient extension of the Viterbi algorithm for estimating the 3D rbHMM parameters and the MAP states.  79  3.2.1 Computing Image Regions  The proposed algorithm first groups image voxels into contiguous local regions. The motivations behind this grouping is that voxels are typically a discrete representation of a continuous scene and as such are not independent; similar regions exhibit similar properties such as intensity or texture and therefore can be grouped so that the complexity and computational cost are reduced from the number of voxels to the number of regions observed in the image. In this work, we apply the watershed transform [40] for simplicity. Alternative methods such as normalized cuts [41] can be employed to further incorporate texture cues.  Determining the watershed depth based on image gradient magnitude is non-trivial due to the wide variability in image data and image quality. Here, we adopt a procedure similar to the threshold selection process in most edge detectors, namely, by examining the cumulative distribution function of gradient magnitudes. 
Instead of defining a specific gradient value, a percentage threshold, Tw, on the cumulative distribution is chosen to ensure over-segmentation and preservation of image structures. This yields a threshold that is stable with respect to the image intensity scale and insensitive to small changes in the threshold value, making it applicable to a wider range of images.

3.2.2 Region-Based Hidden Markov Model

We propose a 3D HMM governed by the immediate neighborhood of each delineated region in a non-uniform 3D space. The representation does not follow any a priori directionality assumption, and the ordering of the model nodes is data-driven. Unlike that of Li et al. [27], which follows a strict lexicographic order of voxels from left-to-right, top-to-bottom, and front-to-back, or that of Li et al. [42], which follows a center-to-boundary ordering, the proposed region-based model fundamentally allows for improved computational efficiency and invariance to object rotation. We describe the model as follows.

Given a 3D image U of dimensions W×H×D, we denote a voxel by its 3D coordinates (x, y, z) along the X, Y and Z axes. U can, therefore, be described as the collection of the intensity values, I(x,y,z), of all the voxels in the 3D grid: U = { I(x,y,z): 1≤x≤W, 1≤y≤H, 1≤z≤D }. By dividing an image into regions, each voxel in U is assigned a region label L(x,y,z). A region ri is then defined as ri = { (x, y, z): L(x, y, z)=i }, with Ni=║ri║ reflecting the size of the region. Consequently, we can describe U as a set of regions R = { ri: 1≤i≤NR }, where NR=║R║ is the total number of regions. In addition, we denote the set of regions adjacent to ri in 3D space as Ri' = { rj: j≠i, ∃(x,y,z) ∈ ri s.t. (x±1,y,z) or (x,y±1,z) or (x,y,z±1) ∈ rj }, and Ni'=║Ri'║ is the number of neighbors of ri.

We can represent each ri by a mean regional feature vector, fi. For grayscale images such as MRI, we can define fi simply as the mean of the observed voxel intensities:

f_i = \frac{1}{N_i} \sum_{(x,y,z) \in r_i} I(x,y,z) ,   1 ≤ i ≤ N_R    (3.1)

If the image is ideal and the image acquisition is perfect, i.e. noiseless and artifact free, all ri with the same true underlying state si will share the same observed feature fi. However, images are often corrupted by noise (often approximated as white Gaussian in MR images with signal-to-noise ratio > 2 [43]), such that representing fi by mean intensities provides noise resiliency, and, given fi, the true state si can be optimally estimated in the MAP sense by using the proposed 3D rbHMM. We inherit the following model assumptions from the classical HMM:

Assumption 1: The observed features follow a conditional Gaussian distribution given a hidden model state l ∈ { 1, 2, …, Ns } for Ns true underlying states, such that the distribution parameters (mean μl and variance σl2) can be readily estimated from the sample observations. The choice of Ns is generally determined a priori. For example, in MRI data, Ns is typically the number of distinct tissues present. If Ns is not apparent from the data, model selection methods such as information criterion approaches [44, 45] can be applied.

Assumption 2: For any region ri, the observed features fi given the hidden states si are conditionally independent.

Assumption 3: si is governed by an irregular Markov mesh with transitional probabilities αi,l = P(si=l | Sj''), where Sj'' is a set of states of some preceding neighbors Rj'' ⊆ Ri' for which the states are already known.
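Before turning to the estimation algorithm, a minimal sketch of the region computation of §3.2.1 and the regional features of (3.1) is given below; it assumes a scikit-image watershed on the thresholded gradient magnitude, and the quantile-based thresholding details are illustrative rather than those of the actual implementation.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def watershed_regions(volume, t_w=0.75):
    """Over-segment a 3D volume into contiguous regions (Section 3.2.1).

    The gradient-magnitude threshold is chosen at the t_w level of the
    cumulative distribution of gradient magnitudes; weaker gradients are
    suppressed so that only significant edges shape the watershed.
    """
    gx, gy, gz = np.gradient(volume.astype(float))
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    thresh = np.quantile(grad_mag, t_w)          # percentage threshold Tw on the CDF
    grad_mag[grad_mag < thresh] = 0.0            # keep only significant edges
    return watershed(grad_mag)                   # local minima act as catchment basins

def region_features(volume, labels):
    """Mean regional feature fi of equation (3.1) for every region label."""
    ids = np.unique(labels)
    return ids, ndi.mean(volume, labels=labels, index=ids)
```

With Tw around 0.75, this kind of over-segmentation typically yields far fewer regions than voxels, which is the source of the dimensionality reduction exploited below.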
Figure 3.1 shows a 2D representation of rbHMM based on the lexicographic order of regions from top-left to bottom-right. This model shows that each node transition is affected only by the immediate preceding neighbors, in this case Ri'' = { rj: j < i }. This representation is, however, not unique for any given region map, as one can traverse the image regions in many different ways, e.g. from bottom-left to top-right, depending on the node number assignment. For real images, the ordering of regions and the transitional dependencies are based on the image features and cannot be pre-determined. The next section explains the computation of this ordering in detail.

Figure 3.1. A 2D example of rbHMM. (a) Pre-delineated region map; each region is represented as a node in a hidden Markov model; (b) Representation of connected nodes based on the lexicographic order (indicated by the node number) of regions from top-left to bottom-right.

3.2.3 Model Parameter Estimation

Unlike a grid-based 3D HMM [27, 38], where a total of Ns^(W×H×D) possible state combinations exist, the proposed 3D rbHMM greatly reduces the state search space to Ns^NR. We adopt the Viterbi algorithm used in 1D HMM estimation to iteratively search for the optimal MAP states, { si*: 1≤i≤NR }, given the current model parameters, by locally optimizing the likelihood of the observation. The unknown states are estimated, and the model parameters are then updated assuming the MAP states, si*, are the true underlying states.

We initialize si* using K-means clustering [46] on all fi for 1≤i≤NR for simplicity. Other alternative unsupervised learning algorithms such as mixture modeling [47] can also be employed. To compute the model parameters [μl, σl2, α_{n_i,l}] for each state l ∈ { 1,2,…,Ns }, the mean, μl, and variance, σl2, are estimated as the sample mean and the sample variance of all observations, and the transition probabilities, α_{n_i,l}, are computed based on the empirical frequencies, subject to the constraint \sum_{l=1}^{N_s} \alpha_{n_i,l} = 1 for 1≤i≤NR. These model parameters are then iteratively updated for unsupervised estimation without atlas priors or training. By assuming the estimated states are the true underlying states, the parameter estimation procedure is identical to an expectation-maximization step that finds the maximum likelihood estimates of the parameters of a discrete Gaussian mixture HMM [22]. The key computational issue in applying the Viterbi algorithm to the 3D rbHMM is to solve for the MAP states, si*, given a set of parameters with no dependency or directionality assumptions.

Thus, we propose the following iterative algorithm to search for the MAP states, where, to reiterate, Ri' denotes the set of all neighbors of region ri, Ri'' denotes the set of preceding neighbors of ri, and si and fi denote the state and observed feature of ri.

Step 1. Initialize the iteration number t=1.
Step 2. Initialize si(t-1) using K-means for 1 ≤ i ≤ NR.
Step 3. Initialize μl, σl2, ni, α_{n_i,l} for 1 ≤ i ≤ NR, 1 ≤ l ≤ Ns.
Step 4. Initialize the tree branch depth d=0.
Step 5. Initialize the current and processed region sets:
  Cd=C0={ ri } where i=rand( { 1,2,…,NR } ),
  Pd=P0={ Ø }.
Step 6. Update the processed region set by including the current region set: P(d+1)={ Pd ∪ Cd }.
  Expand the current region set with neighbors of the current region set that have not yet been processed: C(d+1)={ ∪ Ri': ri ∈ Cd } \ P(d+1).
  For each ri ∈ C(d+1), determine each tree branch direction from P(d+1).
  If ║C(d+1)║ > 0, then d++ and go to Step 6.
Step 7. For each tree branch given ri from d=0 to the end:
  Si={ sjt: rj ∈ Ri'' } ∪ { sj(t-1): rj ∈ Ri' \ Ri'' },
  Fi={ fj: rj ∈ Ri' }.
  Search for the MAP states si* given Si and Fi.
Step 8. Update sit=si* for 1 ≤ i ≤ NR.
Step 9. Update μl, σl2, ni, α_{n_i,l} for 1 ≤ i ≤ NR, 1 ≤ l ≤ Ns.
Step 10. If t < max iterations, then t++ and go to Step 4.
Step 11. Return sit for 1 ≤ i ≤ NR.

As can be seen above, the algorithm starts at a random region in each iteration t. A nested process (Steps 6 to 7) iteratively constructs a Markov tree by extending one step outward from all nodes that have already been formed towards the immediate neighboring regions. The direction of the tree branching at each node follows the path of the most influential neighbor, which is determined based on the sum of two ratios, defined as the region adjacency, aR(j,i), and the edge adjacency, aE(j,i), between neighboring regions rj and ri:

a_R(j,i) = \frac{area(r_j)}{\sum_{r_k \in R_i'} area(r_k)} ,   1 ≤ i, j ≤ N_R ,  r_j \in R_i'    (3.2)

a_E(j,i) = \frac{edge(r_j, r_i)}{\sum_{r_k \in R_i'} edge(r_k, r_i)}    (3.3)

where area(rj) denotes the area of region rj, and edge(rj,ri) denotes the number of edge points between regions ri and rj. Both the region and edge adjacencies are defined as region and edge ratios over the neighborhood of the region ri. Under the same initial conditions, the direction of the tree branching remains the same under image rotations since the regional neighborhood relationships have not changed, thus allowing a rotationally invariant description of the image data.

For each tree branch, 1D Viterbi is used to search for the optimal MAP states, si*, as given in equation (3.4):

s_i^* = \arg\max_{s_i} P\!\left( s_i \mid f_i, \{ \bar{s}_j, f_j : r_j \in R_i' \} \right)
      = \arg\max_{s_i} P\!\left( s_i, f_i, \{ \bar{s}_j, f_j, \bar{s}_k, f_k : r_j \in R_i'', r_k \in R_i' \setminus R_i'' \} \right)
      = \arg\max_{s_i} P(f_i \mid s_i)\, P\!\left(s_i \mid \{ \bar{s}_j : r_j \in R_i'' \}\right) \prod_{r_k \in R_i' \setminus R_i''} P\!\left(\bar{s}_k \mid s_i, \{ \bar{s}_j : r_j \in R_k' \}\right)    (3.4)

The first term on the right hand side of the last equality in equation (3.4) is the likelihood of the observation belonging to the state, whereas the second and third terms indicate the transition probabilities. This derivation is based on [38], where s̄ is used to distinguish the conditioned states of the neighboring regions from the state, s, of the current region to be optimized. The difference is that instead of locally optimizing 1D rows on a 3D grid, si* is calculated along each 1D model branch. For nodes that lie on multiple branches, a majority voting rule is applied after all the branches have been optimized.

Since we work on an irregular representation rather than a grid mesh, α_{n_i,l} is generated and iteratively updated as a codebook of neighboring states given state l. Each region in the image often yields a unique codebook entry, with the probability determined based on the region area ratio. In addition, to improve lookup efficiency, we define an overall neighborhood adjacency overlap ratio, aO(i, j), in (3.5) such that any pair of entries, i and j, with significant overlap (>70%) will have their corresponding region and edge adjacencies merged and the corresponding transition probabilities updated by using a weighted sum based on the sizes of regions i and j normalized over the entire image or volume.
Since we work on an irregular representation rather than a grid mesh, α_{n_i,l} is generated and iteratively updated as a codebook of neighboring states given state l. Each region in the image often yields a unique codebook entry, with the probability determined based on the region area ratio. In addition, to improve lookup efficiency, we define an overall neighborhood adjacency overlap ratio, aO(i, j), in (3.5), such that any pair of entries, i and j, with significant overlap (>70%) will have their corresponding region and edge adjacencies merged and the corresponding transition probabilities updated by using a weighted sum based on the sizes of regions i and j normalized over the entire image or volume:

    a_O(i, j) = (1/2) Σ_{k=1}^{Ns} [ min( Σ_{m ∈ R_i′, s_m = k} a_R(m, i),  Σ_{n ∈ R_j′, s_n = k} a_R(n, j) )
                                   + min( Σ_{m ∈ R_i′, s_m = k} a_E(m, i),  Σ_{n ∈ R_j′, s_n = k} a_E(n, j) ) ]        (3.5)

3.3 RESULTS AND DISCUSSION

We applied our proposed rbHMM to 3D image segmentation, and demonstrated in the following sections (1) the robustness of the proposed 3D rbHMM to parameter settings and object rotation, (2) the accuracy and efficiency of image segmentation using synthetic geometric shapes, and the accuracy and robustness of the proposed method in segmenting brain tissues using (3) simulated MRI data and (4) clinical MRI scans.

We first tested on synthetic geometric shapes with ground truth available, similar to [38]. In these images, the objects were black (intensity value 0) on a white (intensity value 1) background. The size of each image was 100×100×100 voxels. Gaussian noise was added with varying standard deviations σ. The shapes were generated using equations (3.6)-(3.8), where (x, y, z) are the coordinates along the X, Y and Z axes, and a, b and c are parameters governing the shape of each structure.

    Torus:        ( √(x² + y²) − a )² + z² ≤ c²        (3.6)

    Ellipsoid:    x²/a² + y²/b² + z²/c² ≤ 1        (3.7)

    Hyperboloid:  x²/a² + y²/b² − z²/c² ≤ 1        (3.8)

We also validated our method by applying the 3D rbHMM to six simulated T1-weighted BrainWeb [48] MRI image volumes (with 0/1/3/5/7/9% noise, 181×217×181 resolution, 1×1×1 mm³ spacing), and eighteen real high-resolution clinical T1-weighted MRI scans from the Internet Brain Segmentation Repository (IBSR) [49] (coronally acquired, 256×128×256 resolution, 0.837×0.837~1×1 mm² in-plane spacing, 1.5 mm slice thickness). For both datasets, the “ground truth” is known for comparisons. For the BrainWeb dataset, the ground truth is the phantom atlas used to generate the simulated scans, whereas for the IBSR dataset, the ground truth is the provided expert-guided manual segmentation label for each of the clinical scans.

The quantitative evaluation metrics in (3.9) and (3.10) were chosen to facilitate direct comparisons to other published results:

    Misclassification rate:    ( F+ + F− ) / ( T+ + T− + F+ + F− ) × 100%        (3.9)

    Dice similarity index:     2T+ / ( 2T+ + F+ + F− ) × 100%        (3.10)

where T+, T−, F+ and F− are the true positives, true negatives, false positives and false negatives, respectively, between the known ground truth and the resulting segmentation. The misclassification rate was used to assess the binary segmentation performance on the 3D synthetic geometric shapes, whereas the Dice similarity index [50] was used for quantitative evaluation of the 3D brain MRI scans. We compared our results with the results presented by Joshi et al. [38] for classification of 3D geometric shapes, and implemented their grid-based HMM for segmentation of brain MRIs.

3.3.1 Robustness to Parameter Settings and Object Rotation

3.3.1.1 Parameter sensitivity

We first performed a 2-class (object and background) segmentation on the 3D geometric shapes. We examined the misclassification rates in the segmentation when varying the watershed threshold parameter, Tw, by ±13.3% around 0.75 (from 0.65 to 0.85). Tw denotes the percentage of significant edges present in the image, and as its value increases, the sizes of the regions decrease. A general rule of thumb for selecting the value of Tw (and the maximum number of iterations) is to examine the granularity and performance vs. efficiency tradeoffs for specific applications.
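The pre-delineation of regions is obtained with a watershed transform [26] controlled by Tw. A minimal sketch of one way such an over-segmentation step can be realised is given below (Python with scipy and scikit-image). The interpretation of Tw as a quantile threshold on the gradient magnitude is an assumption made for illustration and may differ from the exact formulation used in this work.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def region_map(volume, t_w=0.75):
    """Pre-delineate irregular homogeneous regions with a watershed transform.

    Only the strongest fraction t_w of gradient values is kept as 'significant'
    edges; a larger t_w keeps more edges and therefore yields smaller, more
    numerous regions, matching the behaviour of Tw described above.
    """
    grad = ndimage.generic_gradient_magnitude(volume, ndimage.sobel)
    thresh = np.quantile(grad, 1.0 - t_w)
    grad_sup = np.where(grad < thresh, 0.0, grad)   # suppress weak edges
    markers, _ = ndimage.label(grad_sup == 0)       # flood from flat basins
    labels = watershed(grad_sup, markers)           # region label per voxel
    return labels

# e.g. labels = region_map(noisy_volume, t_w=0.75); labels.max() ~ N_R regions
```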
By selecting Tw=0.75, we achieved 90% reduction in the number of image regions used, i.e. NR≤100,000, as opposed to processing every image voxel (W×H×D=1,000,000). This parameter sensitivity analysis was carried out first to ensure proper selection of the segmentation parameter for comparison to the best results presented by Joshi et al. [38] in the following section §3.3.2. Table 3.1 to Table 3.3 show the results after 3 iterations of the parameter estimation algorithm with σ=0.5 and σ=0.6.  Tw=[0.65, 0.70, 0.75, 0.80, 0.85] σ=0.5  a  c  20 20 20 30 30 40  20 30 40 30 40 40  σ=0.6 Mean Mean Std. Dev. Std. Dev. Misclass. Rate Misclass. Rate 0.60% 0.08% 0.66% 0.05% 0.78% 0.06% 0.93% 0.07% 0.77% 0.03% 0.86% 0.06% 0.74% 0.01% 0.90% 0.07% 0.81% 0.01% 0.92% 0.06% 0.84% 0.01% 0.93% 0.06% a denotes the radius of the torus and c denotes the radius of the tube. We note the stable performance with regard to varying parameter Tw.  Table 3.1. Misclassification error rate of segmentation results obtained by using the proposed 3D rbHMM method on 3D torii with shown parameters (σ=0.5 and 0.6).  Tw=[0.65, 0.70, 0.75, 0.80, 0.85] σ=0.5 a  b  c  20 40 40 40 30 30  30 30 20 10 30 30  40 40 40 40 40 30  Mean Misclass. Rate 0.49% 0.56% 0.53% 1.01% 0.50% 0.47%  σ=0.6 Mean Std. Dev. Std. Dev. Misclass. Rate 0.04% 0.64% 0.08% 0.00% 0.68% 0.01% 0.01% 0.64% 0.01% 0.25% 3.10% 1.02% 0.01% 0.59% 0.01% 0.02% 0.55% 0.04% a, b and c denote radii along each axis. We note the stable performancewith regard to varying parameter Tw.  Table 3.2. Misclassification error rate of segmentation results obtained by using the proposed 3D rbHMM method on 3D ellipsoids with shown parameters (σ=0.5 and 0.6).  90  Tw=[0.65, 0.70, 0.75, 0.80, 0.85] σ=0.5  a  b  c  10 10 10 10 10 20  10 20 30 20 30 30  10 30 30 40 40 20  σ=0.6 Mean Mean Std. Dev. Std. Dev. Misclass. Rate Misclass. Rate 0.58% 0.00% 0.74% 0.01% 0.57% 0.01% 0.70% 0.03% 0.70% 0.01% 0.83% 0.02% 0.63% 0.04% 0.82% 0.12% 0.69% 0.02% 0.80% 0.02% 0.65% 0.00% 0.84% 0.02% a and b govern the skirt radii and c determines the sharpness. We note the stable performance with regard to varying parameter Tw.  Table 3.3. Misclassification error rate of segmentation results obtained by using the proposed 3D rbHMM method on 3D hyperboloids with shown parameters (σ=0.5 and 0.6).  Results indicate our method is robust to changes in Tw. The worst case performance (ellipsoid, a=40, b=10, c=40, σ=0.6) was still significantly more stable and accurate than the results in [38], where misclassification rates of 5.76%~22.30% were reported.  3.3.1.2 Object rotation  To illustrate the robustness of rbHMM to object rotation, we demonstrated rotational invariance on a 3D ellipsoid (a=40, b=25, c=30) with noise (σ=0.5). The image was progressively rotated between 0° to 90° along each axis. Our proposed parameter estimation algorithm was then applied for 3 iterations at a fixed Tw=0.75.  Table 3.4 demonstrates our method’s robustness to rotation. The misclassification error ranged from 0.47% to 0.55%, as compared to a range of 0.40% to 6.70% with the grid-based approach [38]. With only three iterations, we noted a very small difference (0.01%) at 0° rotation between each run of the proposed method due to the random initialization nature of the parameter estimation algorithm and such difference will 91  diminish with increasing iterations. The proposed method is highly stable and repeatable. 
Additional repeatability studies on the same 3D ellipsoid showed an error standard deviation due to random initializations of 0.0026% for twenty trials of the same dataset. Also, for twenty different random noise patterns applied at σ=0.5, the repeatability studies showed an error standard deviation of 0.0089%.  Axis/Angle X Y Z Axis/Angle X Y Z  0° 0.49% 0.50% 0.50% 50° 0.54% 0.52% 0.53%  10° 20° 30° 40° 0.50% 0.53% 0.53% 0.54% 0.51% 0.51% 0.52% 0.52% 0.51% 0.52% 0.53% 0.53% 60° 70° 80° 90° 0.54% 0.52% 0.52% 0.47% 0.55% 0.51% 0.51% 0.49% 0.52% 0.50% 0.50% 0.47% We note the stable performance with varying angles of rotation with ≤ 0.07% changes in the misclassification rate.  Table 3.4. Effect of rotation on rbHMM segmentation results of 3D ellipsoids.  Qualitatively, compared to the segmentation results shown in Joshi et al. [38], Figure 3.2 shows cleaner segmentation results based on the proposed method without significant observable artifacts such as horizontal streaks along the search direction.  92  (a) 0°  (b) 30°  (c) 60°  (d) 90°  Figure 3.2: Effect of varying rotational angles (0°~90°) on the proposed rbHMM segmentation performance of 3D ellipsoids. This figure shows greater invariance of the proposed method than the grid-based HMM [38], where striping artifacts are observable and classification performance is affected by the angles of rotation. Top row: original noisy images for σ=0.5; Bottom row: segmentation results.  3.3.2 Segmentation on Synthetic Geometric Shapes  3.3.2.1 Accuracy  Next we compare the highest accuracies of the proposed 3D rbHMM model from §3.3.1.1 above to the highest accuracy results reported in Joshi et al. [38] for the same shape parameter values tested at σ=0.5 and σ=0.6. Table 3.5 to Table 3.7 show that rbHMM outperformed grid-based HMM under various noise conditions in the vast majority of synthetic geometric shapes tested (32 out of 36). One result was equal to and three were only slightly lower (≤ 0.38 percentage point) than those of grid-based HMM. Overall, we achieved 66.24% (σ=0.5) and 54.05% (σ=0.6) reductions in the misclassification rates compared to those reported in [38]. Figure 3.3 to Figure 3.5 93  further illustrate the segmentation performance qualitatively.  a  c  20 20 20 30 30 40  20 30 40 30 40 40  Grid HMM [38] Proposed 3D rbHMM σ=0.5 σ=0.6 σ=0.5 σ=0.6 0.64% 1.70% 0.56% 0.56% 1.04% 1.49% 0.67% 0.81% 1.15% 1.64% 0.75% 0.76% 3.14% 4.33% 0.74% 0.77% 2.33% 2.83% 0.80% 0.82% 0.91% 0.45% 0.83% 0.83% 1.54% 2.07% 0.73% 0.76% 0.98% 1.34% 0.10% 0.10% a denotes the radius of the torus and c denotes the radius of the tube. The best performance is highlighted in bold.  Mean Std. Dev.  Table 3.5. Comparison of segmentation Misclassification Error Rate: grid-based 3D HMM vs. proposed 3D rbHMM on 3D torii with shown parameters.  a  b  c  20 40 40 40 30 30  30 30 20 10 30 30 Mean Std. Dev.  40 40 40 40 40 30  Grid HMM [38] σ=0.5 σ=0.6 3.37% 4.61% 0.56% 0.89% 0.43% 0.64% 6.26% 5.76% 0.48% 0.86% 1.69% 4.41% 2.13% 2.86% 2.32% 2.31%  Proposed 3D rbHMM σ=0.5 σ=0.6 0.45% 0.56% 0.56% 0.67% 0.52% 0.63% 0.73% 1.86% 0.49% 0.58% 0.44% 0.51% 0.53% 0.80% 0.11% 0.52%  a, b and c denote radii along each axis. The best performance is highlighted in bold. Table 3.6. Comparison of segmentation Misclassification Error Rate: grid-based 3D HMM vs. proposed 3D rbHMM on 3D ellipsoids with shown parameters.  a  b  c  10 10 10 10 10 20  10 20 30 20 30 30 Mean Std. Dev.  
10 30 30 40 40 20  Grid HMM [38] σ=0.5 σ=0.6 0.80% 1.12% 0.90% 2.24% 0.74% 3.69% 1.43% 1.99% 0.97% 1.24% 0.74% 1.19% 0.93% 1.91% 0.26% 0.99%  3D rbHMM σ=0.5 0.58% 0.56% 0.70% 0.58% 0.67% 0.65% 0.62% 0.06%  σ=0.6 0.74% 0.67% 0.81% 0.68% 0.79% 0.82% 0.75% 0.07%  a and b govern the skirt radii and c determines the sharpness. The best performance is highlighted in bold. Table 3.7. Comparison of segmentation Misclassification Error Rate: grid-based 3D HMM vs. proposed 3D rbHMM on 3D hyperboloids with shown parameters.  94  (a)  (b)  (c)  (d)  Figure 3.3. rbHMM segmentation results on 3D torus images with parameters a=20 and c=40. Top row: original noisy images for (a)-(b) σ=0.5, (c)-(d) σ=0.6; Bottom row: segmentation outputs of corresponding slices.  (a)  (b)  (c)  (d)  Figure 3.4. rbHMM segmentation results on 3D ellipsoid images with parameters a=40, b=20 and c=40. Top row: original noisy images for (a)-(b) σ=0.5, (c)-(d) σ=0.6; Bottom row: segmentation outputs of corresponding slices.  95  (a)  (b)  (c)  (d)  Figure 3.5. rbHMM segmentation results on 3D hyperboloid images with parameters a=10, b=30 and c=30. Top row: original noisy images for (a)-(b) σ=0.5, (c)-(d) σ=0.6; Bottom row: segmentation outputs of corresponding slices.  3.3.2.2 Efficiency  Complexity-wise, applying 1D Viterbi to 3D grid-based HMM was shown to be O(WHDNs2) [27], where W, H and D are the dimensions of the 3D image. Our proposed 3D rbHMM on the other hand can be processed in O(NNNBNs2) time, where NB is the number of tree branches and NN is the average number of nodes per branch. If no branch overlaps exist, NN×NB=NR. The proposed algorithm is thus, generally, more computationally efficient, since, with proper watershed delineation, NN×NB << W×H×D. The average running time of our 3D rbHMM for the 18 tested 3D geometric shapes above (σ=0.5, 100×100×100 voxels) with 3 iterations at Tw=0.75 was 170s on a 3.0 GHz Xeon processor. This corresponds to roughly a 40% speed increase when compared to the 280s reported based on the grid-based 3D HMM [38], adjusted for CPU clock speed.  96  3.3.3 Segmentation of Simulated MRI Data  3.3.3.1 Accuracy  We performed 4-class segmentation: background, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF), on the simulated T1-weighted BrainWeb MR images [48] with varying noise levels (n = 0%, 1%, 3%, 5%, 7%, 9%). The proposed rbHMM was applied for 10 iterations at Tw=0.25, which corresponded to approximately 96.7%~98.7% reduction in the number of image regions used (NR = 89,725~234,501) as opposed to processing every image voxel (W×H×D = 7,109,137). The Dice similarity index [50] was used to quantitatively evaluate WM and GM segmentation accuracies by comparing with the provided phantom.  We compared our segmentation results against the grid-based HMM. The grid-based HMM implementation employed three states per class (K-means clustering was used to initialize each class, to which three states were then randomly assigned within each class) as described by Joshi et al. [38], full 3D probability transitions, and 20 iterations due to its slower convergence as demonstrated by the results from the experiments of 3D geometric shapes (6 iterations for [38] vs. 3 iterations for the proposed method). Our proposed rbHMM achieved, on average over various noise levels, WM and GM similarity indices of 92.99% (σ = 1.42%) and 92.27% (σ = 1.44%), respectively. The grid-based HMM achieved 92.65% (σ = 1.81%) and 85.22% (σ = 2.23%) in WM and GM similarity indices. 
The improvement in the segmentation accuracy achieved by the proposed method was comparable for WM results (p=0.7249) and significant for GM  97  results (p<0.0001). Figure 3.6 illustrated that both our WM and GM segmentation results were consistently stable and highly accurate in the presence of noise. For the grid-based HMM approach, we observed consistent and comparable results for WM segmentation, but performance largely degrades for GM.  GM Accuracy  WM Accuracy 95.00  95.00  90.00  90.00  Dice (%)  100.00  Dice (%)  100.00  85.00 Grid HMM (20 Iterations)  80.00  85.00 80.00  Grid HMM (20 Iterations)  3D rbHMM (10 Iterations)  3D rbHMM (10 Iterations)  75.00  75.00 0%  1%  3%  Noise  5%  7%  9%  0%  1%  (a)  3%  5%  7%  9%  Noise  (b)  Figure 3.6. Comparisons of segmentation accuracy of the proposed 3D rbHMM and the grid-based HMM on simulated 3D brain MRI images for (a) WM and (b) GM. Our method shows consistently high accuracies and stable results when the noise level is varied from 0% to 9%.  This degradation in GM segmentation performance was also observable in the segmentation outputs illustrated in Figure 3.7. Two main artifacts were observed in the grid-based HMM segmentation results. Firstly, the structural details remained grainy in regions of high noise. A second horizontal striping artifact was also observed with the higher noise scans, resulting in increased jaggedness around class boundaries. This striping artifact was similarly observed with the segmentation results of the high noise 3D synthetic geometric images [38]. We further examine this striping artifact in our subsequent experiment with object rotations.  98  (a) 0%  (b) 9%  (c)  (d)  Figure 3.7. 3D simulated BrainWeb T1-weighted brain MR images showing effect of varying noise levels on segmentation performance. Top row: original noisy images; Middle row: grid-based HMM segmentation outputs; Bottom row: rbHMM segmentation outputs for noise levels of (a) 0% and (b) 9%. We demonstrated superior GM segmentation using the proposed rbHMM, whereas the grid-based HMM approach showed (c) grainy and (d) striping in the close-up results of the highlighted regions. Problematic regions are identified by arrows.  Further comparing the proposed method to other existing models, i.e. a block-based HMM [28] and a HMRF model [35], our WM and GM segmentation results were again consistently stable and highly accurate in the presence of noise as shown in Figure 3.8  99  100.00 95.00 90.00 85.00 80.00 75.00 70.00 65.00  WM Accuracy Dice (%)  Dice (%)  below.  0%  1%  3%  5%  7&  100.00 95.00 90.00 85.00 80.00 75.00 70.00 65.00  GM Accuracy  0% 1%  9%  3% 5%  3D rbHMM  (a)  7&  9%  Noise  Noise 3D HMM  HMRF-EM  (b)  Figure 3.8. Comparisons of segmentation accuracy of the proposed 3D rbHMM, the block-based HMM, and the FAST HMRF on simulated 3D brain MRI images for (a) WM and (b) GM. Our method shows consistently high accuracies and stable results when the noise level is varied from 0% to 9%.  The average running time of the proposed method was 94 seconds per iteration on a 3.0 GHz Xeon processor. 3.3.3.2 Object rotation  We next compared the rotational invariance capabilities of our proposed rbHMM to the grid-based HMM approach on a 3D BrainWeb volume (9% noise) rotated for 15°, 30°, and 45° around the Z axis. Rotated phantoms were generated using nearest neighbor interpolation for the quantitative comparison in Figure 3.9 below. 
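A short sketch of how such rotated test volumes can be produced is shown below (Python with scipy.ndimage). Nearest-neighbour interpolation (order 0) preserves the discrete phantom labels; the rotation utility actually used in this work is not specified, so this is only an assumed equivalent.

```python
from scipy import ndimage

def rotate_volume(volume, angle_deg, axes=(0, 1), is_label=True):
    """Rotate a 3D volume in the plane spanned by the given axis pair.
    Nearest-neighbour interpolation keeps discrete label values intact;
    intensity images could use linear interpolation instead."""
    order = 0 if is_label else 1
    return ndimage.rotate(volume, angle_deg, axes=axes, reshape=False,
                          order=order, mode="nearest")

# e.g. rotate a simulated scan and its phantom labels by 45 degrees
# rotated_scan   = rotate_volume(scan,   45, is_label=False)
# rotated_labels = rotate_volume(labels, 45, is_label=True)
```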
We observed that both methods appeared equally stable with respect to various rotational angles, but our results are consistently more accurate than the grid-based HMM, especially for grey matter segmentation. The small variations (~1%) in WM and GM similarity indices apparently due to rotation were largely caused by variations introduced with resampling of the 100  discrete phantom ground truth.  WM Accuracy 100.00  Grid HMM (20 Iterations) 3D rbHMM (10 Iterations)  95.00 90.00 85.00 80.00  Grid HMM (20 Iterations) 3D rbHMM (10 Iterations)  95.00 Dice (%)  Dice (%)  GM Accuracy 100.00  90.00 85.00 80.00  75.00 0˚  15˚  30˚  Angle of Rotation (degrees)  45˚  75.00 0˚  15˚ 30˚ Angle of Rotation (degrees)  (a)  45˚  (b)  Figure 3.9. Comparisons showing effect of varying rotational angles on segmentation accuracy of the proposed 3D rbHMM and the grid-based HMM on simulated 3D brain MRI images for (a) WM and (b) GM. Both methods show consistently stable results when the angle is varied from 0˚ to 45˚.  Qualitatively, however, we were able to demonstrate that both methods, in fact, did not perform equally under object rotation. Figure 3.10 illustrates qualitative segmentation results for both approaches. We confirmed no observable artifacts resulted from object rotations for the proposed rbHMM method. However, for the grid-based HMM method, as the object was rotated, the striping artifact remained in the direction of the horizontal grid-based decoding process, irrespective of the rotational angles.  101  Input Simulated MR Images  Grid HMM (20 Iterations) Results  3D rbHMM (10 Iterations) Results (a) 0°  (b) 15°  (c) 30°  (d) 45°  Figure 3.10. 3D simulated BrainWeb T1-weighted brain MR images showing effect of (a) 0°, (b) 15°, (c) 30° and (d) 45° rotational angles on segmentation performance. Row 1-2: original noisy simulated MR images (9%) and the corresponding phantom labels; Row 3-4: grid-based HMM segmentation outputs and close-up results of the highlighted regions showing striping artifact in the horizontal decoding direction; Row 5-6: rbHMM segmentation outputs and close-up results of the highlighted regions. Problematic regions are identified by arrows.  This was especially observable in the regions of caudate nucleus and putamen. A side effect of this artifact could also be observed in the quantitative results, where at 45° 102  rotation, the GM accuracy is increased by 3.04% in similarity index than that of 0° rotation. Since, at 45°, the subcortical GM structures appeared approximately horizontal/vertical in the simulated MRI scan, a reduced amount of errors were introduced, resulting in higher GM segmentation accuracy. Thus, although the quantitative results for the grid-based HMM method were stable, qualitative results showed that the proposed rbHMM method demonstrated improved robustness to object rotations than the grid-based HMM approach.  3.3.4 Segmentation of Clinical MRI Data  We performed segmentation on 18 clinical T1-weighted IBSR V2.0 MR images [49]. 3D rbHMM was applied for 10 iterations at Tw=0.25, which corresponded to approximately 98.8%~99.2% reduction in the number of image regions used (NR = 64,204~ 99,008) as opposed to processing every image voxel (W×H×D = 8,388,608). Evaluation of segmentation accuracies was performed by comparing against the provided expert-guided manual segmentation labels. 
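As in the previous section, accuracy was quantified with the Dice similarity index of equation (3.10), which can be computed per tissue class with a few lines (Python/NumPy). The integer label encoding shown here is an assumption for illustration only.

```python
import numpy as np

def dice_per_class(seg_labels, ref_labels, classes=("WM", "GM", "CSF")):
    """Dice similarity index (Eq. 3.10) for each tissue class.

    seg_labels, ref_labels : integer label volumes sharing the same encoding
                             (assumed here: 0=background, 1=WM, 2=GM, 3=CSF)
    """
    scores = {}
    for value, name in enumerate(classes, start=1):
        seg = seg_labels == value
        ref = ref_labels == value
        scores[name] = (2.0 * np.logical_and(seg, ref).sum()
                        / (seg.sum() + ref.sum()) * 100.0)
    return scores
```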
A 5-class: background, GM (2 classes), WM, and CSF segmentation was performed rather than a 4-class data representation because the intensity histogram of these clinical scans exhibited different mixture distribution properties than the simulated scans as shown in Figure 3.11; thus, a different mixture model was necessary for the subsequent segmentation task.  103  5  2  4  1.5  3  1 2  0.5 1  0  0 0  500  1000  1500  2000  2500  (a) BrainWeb  0  10  20  30  40  50  60  70  (b) IBSR V2.0  Figure 3.11. Intensity histograms of MR images (excluding background) showing the differences between (a) simulated BrainWeb (3% noise) and (b) clinical IBSR V2.0 (#12) scans. The BrainWeb histogram demonstrated three distinct peaks, whereas the IBSR V2.0 histogram showed four mixtures.  Figure 3.12 shows that the proposed rbHMM achieved, on average over scans, similarity indices of 80.08% (σ = 3.95%) and 82.78% (σ = 4.81%) with the expert-guided manual segmentation labels for WM and GM respectively, whereas the grid-based HMM method achieved 75.48% (σ = 4.21%) and 75.07% (σ = 4.48%) for WM and GM overlaps. To statistically evaluate the differences of segmentation results between rbHMM and the grid-based HMM, we calculated the p values of WM and GM similarity indices (p<0.05 indicates significant difference). The proposed rbHMM method achieved significantly higher accuracies with increased similarity indices of 4.60% (p=0.0022) in WM and 7.71% (p<0.0001) in GM segmentation results.  104  100.00 95.00 90.00 85.00 80.00 75.00 70.00 65.00 60.00 55.00 50.00 45.00  Grid HMM (20 Iterations) 3D rbHMM (10 Iterations) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18  Dice (%)  WM Accuracy  Dataset  (a)  Grid HMM (20 Iterations) 3D rbHMM (10 Iterations) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18  Dice (%)  GM Accuracy 100.00 95.00 90.00 85.00 80.00 75.00 70.00 65.00 60.00 55.00 50.00 45.00  Dataset  (b)  Figure 3.12. Comparisons of segmentation accuracy of proposed 3D rbHMM and grid-based HMM on clinical 3D IBSR V2.0 brain MRI images for (a) WM and (b) GM. Our method shows consistently high accuracies and stable results.  Further qualitative examples demonstrating the superior performance of the proposed segmentation method are shown in Figure 3.13. We observed that our rbHMM segmentation results matched more closely to the manual segmentation labels than the results of using the grid-based HMM approach. However, both methods under-segmented subcortical GM structures (thalamus and putamen) when compared to the manual segmentation labels as shown in Figure 3.13b. 105  slice 45  slice 65  slice 85  (a) Input Clinical MR Images  slice 45  slice 65  slice 85  (c) Grid HMM (20 Iterations) Results  slice 45  slice 65  slice 85  (b) Expert-Guided Manual Segmentation Labels  slice 45  slice 65  slice 85  (d) 3D rbHMM (10 Iterations) Results  Figure 3.13. 3D clinical IBSR V2.0 T1-weighted brain MR images (normal adult scan #06) showing segmentation accuracy of the proposed 3D rbHMM method. (a) original image clinical MR images and close-up images of the highlighted regions; (b) expert-guided manual segmentation labels; (c) grid-based HMM segmentation outputs; (d) 3D rbHMM segmentation outputs of slice number 45, 65 and 85. Problematic regions are identified by arrows.  This was mainly due to the fact that the intensity features of these structures in this MR image were very similar to their surrounding WM. Without any other a priori cues such as expert knowledge, these subcortical structures were fairly indistinguishable. 
For the grid-based approach, the previously observed striping artifact was not as significant as 106  before because the high quality of these clinical scans where noise was not a major factor. However, we still observed jaggedness between the WM and GM boundaries. The results generated by using the rbHMM approach, on the other hand, were much more robust as identified by the arrows shown in Figure 3.13. The average running time of the proposed method was 62 seconds per iteration a 3.0 GHz Xeon processor.  3.4 CONCLUSIONS We proposed a novel 3D region-based hidden Markov model (rbHMM) that employs a new unsupervised, computationally efficient parameter estimation algorithm, and demonstrated their use for accurate and robust 3D image segmentation. The main contributions of our work are: first, a 3D HMM framework based on irregularly shaped homogeneous regions, which reduces the overall model complexity thus resulting in a more efficient optimization process compared to current approaches which are based on rectangular lattices or grids; second, a novel 3D tree-structured algorithm that provides rotationally invariant estimates of the locally optimal MAP states of image regions by employing the Viterbi algorithm. We validated our technique first using synthetic geometric shapes where we demonstrated rbHMM’s improved computational efficiency and increased invariance to 3D object rotation compared to grid-based 3D HMM frameworks using a similar optimization strategy. Furthermore, we successfully applied our proposed segmentation method to both simulated and real clinical brain MRI data, and demonstrated improved accuracy and robustness of the rbHMM over the grid-based HMM approach. Extensive tests with varying model, shape, noise, and rotation parameters demonstrated the capability of the proposed 3D rbHMM and parameter 107  estimation algorithm to robustly handle 3D image segmentation.  Future work plans include incorporating texture cues to extend the model to segmentation of color images or natural scenes, incorporating multiple medical imaging modalities for the identification of anomalies such as tumors or lesions, and integrating iterative correction of medical imaging artifacts such as MR intensity inhomogeneity, a slow-varying non-uniformities in the magnetic field, which is a common difficulty for the segmentation task.  108  3.5 BIBLIOGRAPHY 1. 2. 3.  4.  J. K. Baker, “The dragon system—An overview,” in Proc. of International Conference on Acoustics, Speech, and Signal Processing, February 1975, pp.24–29. L. Rabiner, B. H. Juang, Fundamentals of Speech Recognition, Englewood Cliffs, NJ: Prentice-Hall, 1993. M. Diligenti, P. Frasconi, and M. Gori, “Hidden tree Markov models for document image classification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(4):519--523, 2003. J. Li, A. Najmi, R.M. Gray, “Image classification by a two dimensional hidden  Markov model,” IEEE Transactions on Signal Processing, 48(2):517-533, February 2000. 5. E. Levin, R. Pieraccini, “Dynamic planar warping for optical character recognition,” in Proc. of International Conference on Acoustics, Speech, and Signal Process., San Francisco, CA, 3, Mar. 1992, pp.149–152. 6. H. K. Lee, J. H. Kim, "An HMM-based threshold model approach for gesture recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(10):961-973, Oct. 1999. 7. S. Eickeler, S. Muller, G. 
Rigoll, "Improved face recognition using pseudo 2-D hidden Markov models,” European Conference on Computer Vision Workshop on Advances in Facial Image Analysis and Recognition Technology, Freiburg, Germany, June 1998 8. F. Hulsken, F. Wallhoff, G. Rigoll, “Facial expression recognition with pseudo-3D hidden Markov models,” in Proc. of Pattern Recognition, 2001, pp.291-297. 9. J. Boreczky, L. Wilcox, “A hidden Markov model framework for video segmentation using audio and image features,” in Proc. IEEE Conf. on Acoustics, Speech, and Signal Processing, 6, 1998, pp.3741-3744. 10. T. Vlachos, A. G. Constantinides. “A graph-theoretic approach to colour image segmentation and contour classification,” in Proc. of International Conference on Image Processing and its Applications, Maastricht, Netherlands, 1992, pp.298-302. 11. K. Barnard, D. A. and Forsyth, “Learning the semantics of words and pictures,” in Proc. of IEEE International Conference of Computer Vision, 2001, pp.408-415. 12. J. Jiten, B. Mérialdo, “Semantic image segmentation with a multidimensional hidden Markov model,” Advances in Multimedia Modeling, 4351:616-624, 2006.  109  13. A. J. Frost, M. A. Thyer, R. Srikanthan, G. Kuczera. “A general Bayesian framework for calibrating and evaluating stochastic models of annual multi-site hydrological data,” Journal of Hydrology, 340(3-4):129-148, July 2007. 14. C.-C. Ke, J. G. Herrero, J. Llinas, “Comparative analysis of alternative ground target tracking techniques,” in Proc. of IEEE Information Fusion, 2000, pp.WeB5/3-10. 15. R. Durbin, S. Eddy, A. Krogh, G. Mitchison, Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids, Cambridge: Cambridge University Press, 1998. 16. O. Cappe, E. Moutines, T. Ryden, Inference in Hidden Markov Models, New York: Springer-Verlag, 2005. 17. O. Cappe. (2001 March), Ten years of HMMS, Available: http://www.tsi.enst.fr/~cappe/docs/hmmbib.html. 18. L. E. Baum, “An inequality and associated maximization technique in statistical estimation for probabilistic functions of finite state Markov chains,” Inequalities, 3:1-8, 1972. 19. A. A. Markov, “An example of statistical investigation in the text of ‘Eugene Onyegin’ illustrating coupling of ‘tests’ in chains,” in Proc. of the Academy of Sciences, vol.7, 1913, pp.153-162. 20. C. E. Shannon, “A mathematical theory of communication,” Bell System Tech. Journal, 27:379–423, July 1948. 21. R. G. Gallager, Information Theory and Reliable Communication, New York, NY: Wiley, 1968. 22. L. E. Baum, T. Petrie, G. Soules, N. Weiss, “A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains,” Annals of Mathematical Statistics, 41(1): 164–171, 1970. 23. A. J. Viterbi, J. K. Omura. “Trellis encoding of memoryless discrete-time sources with a fidelity criterion,” IEEE Transactions on Information Theory, IT-20:325–332, May 1974. 24. G. D. Forney, “The Viterbi algorithm,” in Proc. of the IEEE, 61(3), March 1973, pp.268–278. 25. S. S. Kuo, O. E. Agazzi, “Machine vision for keyword spotting using pseudo 2D hidden Markov models,” in Proc. of International Conference on Acoustics, Speech, and Signal Processing, vol.5, 1993, pp.81–84. 26. P. A. Devijver, “Probabilistic labeling in a hidden second order Markov mesh,” Pattern Recognition in Practice II, 113–123, 1985.  110  27. J. Li, D. Joshi, J. Z. Wang, "Stochastic Modeling of Volume Images with a 3-D Hidden Markov Model,” in Proc. of IEEE International Conference on Image Processing, 2004, pp.2359-2362. 28. 
M. Ibrahim, N. John, M. Kabuka, A. Younis, “Hidden Markov models-based 3D MRI brain segmentation,” Image and Vision Computing, 24:1065-1079: 2006. 29. S. Geman, D. Geman, “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741, Nov. 1984. 30. S. Z. Li, “Markov random field models in computer vision,” in Proc. of European Conference on Computer Vision, Stockholm, Sweden, May 1994, pp.361-370. 31. I. Patras. E. A. Hendriks, R. L. Lagendijk, "Video segmentation by MAP labeling of watershed segments,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(3):326-332, March 2001. 32. Y. Tsaig, A. Averbuch, “A region-based MRF model for unsupervised segmentation of moving objects in image sequences,” in Proc. of IEEE Computer Vision and Pattern Recognition, vol.1, 2001, pp.I-889-896 33. A. Sarkar, M. K. Biswas, K. M. S. Sharma, ”A simple unsupervised MRF model based image segmentation approach,” IEEE Transactions on Image Processing, 9(5):801-812, 2000. 34. H. Deng, D. A. Clausi, “Gaussian MRF rotation-invariant features for image classification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(7):951-955, July 2004. 35. Y. Zhang, M. Brady, S. Smith, “Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm,” IEEE Transactions on Medical Imaging, 20(1):45-47, 2001. 36. M. Maddah, K. H. Zou, W. M. Wells, R. Kikinis, S. K. Warfield, “Automatic optimization of segmentation algorithms through simultaneous truth and performance level estimation (STAPLE),” in Proc. of Medical Image Computing and Computer Assisted Intervention, 2004, pp.274-282. 37. J. Besag, “On the statistical analysis of dirty pictures (with discussion),” Journal of Royal Statistics Society, ser. B, 48(3):259–302, 1986. 38. D. Joshi, J. Li, J. Z. Wang, "A computationally efficient approach to the estimation of two- and three-dimensional hidden Markov models,” IEEE Transactions on Image Processing, 15(7):1871-1886, 2006. 39. A. Huang, R. Abugharbieh, R. Tam, “Image segmentation using an efficient rotationally invariant 3D region-based hidden Markov model,” in Proc. of IEEE 111  40.  41. 42.  43. 44. 45. 46. 47. 48.  49. 50.  Computer Vision and Pattern Recognition Workshop on Mathematical Methods in Biomedical Image Analysis, Alaska, USA, 2008, pp.1-8. S. Buecher, “Watersheds of functions and picture segmentation,” in Proc. of IEEE Int. Conf. Acoustics, Speech, and Signal Processing, Paris, France, May 1982, pp.1928–1931. J. Shi, J. Malik, “Normalized cuts and image segmentation,” in Proc. of IEEE Computer Vision and Pattern Recognition, 1997, pp.731-737. F. Li, Q. Dai, W. Xu, “Region-based hidden Markov models for image categorization and retrieval,” in Proc. of SPIE Electronic Imaging, vol.6508, 2007, pp.1V/1-8. H. Gudbjartsson and S. Patz, “The Rician distribution of noisy MRI data,” Magnetic Resonance in Medicine, 34(6):910-914, December 1995. H. Akaike, “A new look at the statistical model identification,” IEEE Transactions on Automatic Control, 19(6):716-723, 1974. G. E. Schwarz, “Estimating the dimension of a model,” Annals of Statistics, 6(2):461-464, 1978. S. P. Lloyd, “Least square quantization in PCM,” IEEE Transactions on Information Theory, 28(2):129-136, 1982. G. J. McLachlan, K. E. Basford, Mixture models: inference and applications to clustering, Marcel Dekker, New York, 1988. C. A. Cocosco, V. Kollokian, R. K.-S. Kwan, A. 
C. Evans, “BrainWeb: online interface to a 3D MRI simulated brain database,” NeuroImage, 5(4 Part 2/4):S425, 1997. A. Worth. (2007, November) The Center for Morphometric Analysis at MGH. Available: http://www.cma.mgh.harvard.edu/ibsr/. L. R. Dice, “Measures of the amount of ecologic association between species,” Ecology, 26:297-302, 1945.  112  4. MULTI-CHANNEL WHITE MATTER LESION SEGMENTATION IN MULTIPLE SCLERSIS USING REGION-BASED HIDDEN MARKOV MODELING*  4.1 INTRODUCTION Multiple sclerosis (MS) is an autoimmune disease that affects the central nervous system (CNS). The myelin sheath surrounding nerve cells is progressively damaged, causing slowdown or discontinuation of nerve impulses. To better understand MS and to track the disease progression, magnetic resonance imaging (MRI) has become widely used in clinical diagnosis and evaluation of treatment efficacy. The total volume of white matter lesions visible on T2-weighted (T2w) scans is one of the most commonly used MRI markers for MS [1].  Delineation of white matter lesions (WML) allows for quantitative measurement and analysis of lesion loads and tissue volumes in clinical studies. However, manual expert segmentation is quite challenging due to the complexity of the associated three-dimensional (3D) information and the necessity of integrating data from various MRI acquisitions. Such complications render the manual segmentation process extremely tedious, time-consuming, and confounded by high intra- and inter-rater variability. With various MR acquisitions highlighting different aspects of the imaged regions, simultaneous analysis of multiple MRI modalities is quite advantageous. For example, *  A version of this chapter has been submitted. A. Huang, R. Tam, R. Abugharbieh, “Multi-channel white  matter lesion segmentation in multiple sclerosis using region-based hidden Markov modeling,” (Submitted February 2010). 113  T2w images provide high contrast between lesions and white matter, and proton density weighted (PDw) and fluid attenuated inversion recovery (FLAIR) images provide superior contrast between lesions and cerebrospinal fluid (CSF). Several automatic approaches for lesion segmentation that are based on multi-modal MR brain data have been proposed [2]. Some of the techniques employed include statistical classification based on training [3, 4], outlier detection using expectation-maximization (EM) [5, 6], principal component analysis (PCA) based on training [7], and atlas-based region partitioning [8]. These methods mainly rely on examining the differences between the intensity profiles of lesions vs. normal tissues. However, since lesion intensities often overlap with those of other tissues in the brain, other segmentation approaches have incorporated additional contextual information such as the position, mean and standard deviation of intensity values, curvature and gradients in neighborhoods of various sizes within supervised statistical classification frameworks [7, 9]. Statistical atlases have also been utilized to ensure topological consistency in all segmented structures [10].  A key difficulty in automating MS lesion segmentation is associated with the significant variations in scan quality. With acquisition artifacts and signal-to-noise ratios varying from scan to scan, and sometimes even from region to region within the same scan, training-based segmentation methods [3, 4, 7, 9] may require elaborate re-training for different datasets. 
Furthermore, the manual delineations often used for training may contain errors, as demonstrated by the fact that segmentations produced by multiple experts often do not agree well, and this can impact the accuracy in supervised learning methods. For segmentation methods relying on registration to atlases [7, 8, 10, 11], the presence and various degrees of brain atrophy attributed to pathology may have an 114  adverse effect on robustness, particularly if the atlas being used is not representative of the population being studied.  An alternative approach has been proposed that incorporates neighborhood information by modeling 3D data with a one-dimensional (1D) hidden Markov model (HMM) via a Hilbert-Peano vectorization procedure using a priori information from a probabilistic atlas [11]. Originally designed for applications with 1D data sequences (Figure 4.1a) such as speech recognition [20] or error correction coding [21], HMMs, despite its causal data assumptions, have been extended to higher dimensional images due to its robustness to noise [22]. However, performing exact inference on a high dimensional model is computationally expensive. Pseudo-HMMs (Figure 4.1b) [23] and vectorization (Figure 4.1c) [24] have thus been employed to transform the higher dimensional problems into a 1D one. A truly high-dimensional HMM with an efficient parameter estimation algorithm has been proposed [25] that iteratively decodes each row of voxels given its neighboring rows (Figure 4.1d). However, such estimation relies on a rectangular grid lattice which typically results in segmentations suffering from horizontal striping artifacts. Previously, we have proposed a 3D region-based HMM [12, 13], henceforth referred to as rbHMM (Figure 4.1e). The rbHMM employs an efficient region-based representation of the image data by first dividing an image into contiguous local regions by using the watershed transform [26]. Since voxels are typically a discrete representation of a continuous scene and as such are not independent, by grouping similar regions that exhibit similar properties such as intensity or texture, computational cost is reduced and noise resiliency is improved.  115  Figure 4.1. Simplified diagrams of 1D and different higher dimensional HMMs. Circles represent model nodes to be estimated, starting with red circles. Arrows with solid lines denote direction of estimation, whereas arrows with dash lines indicate information used from previous iterations for the estimation of the current iteration. (a) A 1D HMM, (b) a pseudo-2D HMM based on hierarchical 1D HMM, (c) a 1D HMM based on a Hilbert-Peano scan of a 2D image, (d) a 2D HMM using neighboring rows, and (e) a 2D rbHMM using neighboring regions in a tree structure.  Parameter estimation on an rbHMM does not follow any a priori directionality assumption, which fundamentally allows for improved segmentation accuracy and invariance to object rotations that are ubiquitous in medical images. In this chapter, we extend rbHMM to automatic segmentation of white matter lesions by adapting the method to multi-channel MRI scans and incorporating a special initialization procedure designed to improve sensitivity to low lesion loads that may be problematic for approaches based on clustering and/or outlier detection. The key advantages of using 116  such a model are that estimation is data-driven without requiring a priori information, and noise resiliency is achieved without sacrificing accuracy in scans with a small number or volume of lesions.  
This chapter is organized as follows. First we present our extension of rbHMM to multi-channel segmentation of white matter lesions. We then demonstrate qualitative and quantitative results on real clinical MRI scans. Lastly, we present a discussion on these results and our conclusions.  4.2 METHODS  4.2.1 Subjects and MRI Protocol  For our experiments, we employed the publicly available MS Lesion Segmentation Challenge database [2] acquired at the University of North Carolina (UNC) on a 3T Siemens Allegra MRI scanner. In this database, a number of high resolution (512×512×256 dimensions, 0.5 mm×0.5 mm×1.0 mm spacing) structural MRI scans are available including T1w, T2w, and FLAIR images. The provided T1w data have already been rigidly registered to the MNI atlas, whereas the T2w and FLAIR data have been rigidly registered to the corresponding T1w. In addition, all images in the database have also been resliced to an isotropic voxel spacing of 0.5 mm and dimension of 512 in each direction. Two sets of manual lesion masks, extracted independently by expert raters at UNC and the Children’s Hospital of Boston (CHB), are also available as validation data  117  for ten of the subjects. For this reason, we use the scans and masks from these ten subjects for our experiments.  4.2.2 Data Pre-Processing  We removed the non-brain part of each scan using the surface model approach implemented in the Brain Extraction Tool (BET). T1w scans were used for this brain extraction step as BET has been demonstrated to be robust on T1w scans with less than 2% mean percentage error difference between fully-automatic and individually optimized results [15]. Default BET parameters were used and the resulting brain masks were applied to the T2w and FLAIR images.  Following our extraction of the brain areas, we performed a bias field correction in order to reduce the slowly varying variations in image intensities due to acquisition inhomogeneity in the field strength. We employed the popular non-parametric non-uniform intensity normalization (N3) method [16] optimized in a multi-scaled approach [17] to each T2w and FLAIR image individually. Intensity mean and variance were also standardized across both slices and scans. 4.2.3 Multi-Channel Lesion Segmentation We have recently demonstrated how our rbHMM can be used for segmentation of normal brain tissues in T1w scans with accurate and robust results even under noise and image rotations [12, 13]. In this work, rather than modeling lesions as intensity outliers as done in other methods (e.g. [18, 19]), we directly model the lesion class as one or more states  118  in the rbHMM. The features used to compute the state transition probabilities are n-dimensional intensity vectors, where n is the number of MRI channels used. This approach is designed to improve sensitivity in the case where the lesions do not occupy a distinct cluster or lobe in intensity space. To achieve this, the proposed method initializes the rbHMM with many more states than the actual number of tissues, which decreases the likelihood of the lesion class being overwhelmed by a larger class. During the convergence of the rbHMM, discrete maximum a posteriori (MAP) class labels are determined by incorporating both statistical likelihoods and spatial neighborhood information in an iterative unsupervised estimation process, which effectively reduces the number of mislabeled regions due to noise. 
Finally, lesion regions are selected based on connectivity and relative tissue intensities, as outlined in the rule set at the end of this section.

Given a 3D image U, we denote a voxel with its coordinates (x, y, z) along the X, Y, and Z axes, respectively. U can thus be described as a collection of intensity values on the 3D grid, U = { I(x, y, z) }. By grouping voxels into regions, each region ri is defined simply as the collection of coordinates that share the same region label L(x, y, z) = i, and U can be denoted as the set R = { ri : 1 ≤ i ≤ NR }, where NR = ║R║ is the total number of regions. We denote the adjacent regions of ri in 3D space as Ri = { rj : j ≠ i, ∃(x, y, z) ∈ ri s.t. (x±1, y, z) or (x, y±1, z) or (x, y, z±1) ∈ rj }. Each region ri is represented by its multivariate regional feature, a mean vector fi, which captures the mean observed voxel intensities in all MRI modalities available for each subject. Ideally, all regions with the same true underlying state would share the same observed feature. However, this is usually not the case due to imaging non-idealities such as corruption by noise (often modeled as white Gaussian noise). Nevertheless, we can optimally estimate the true state label si given the observation fi in the MAP sense by making assumptions similar to those associated with classical HMMs:

Assumption 1:  The observed feature, or feature vector, follows a multivariate conditional Gaussian distribution given a hidden state l ∈ {1, …, Ns}, for Ns true underlying states, with mean vector μl and covariance matrix Σl.

Assumption 2:  For any region ri, the observed feature vectors fi given si are conditionally independent.

Assumption 3:  The true underlying states si, i = 1, …, NR, are governed by an irregular Markov mesh with transition probabilities αi,l = P(si = l | Sj″), where Sj″ is the set of states of some preceding neighbors Rj″ ⊆ Ri for which the states are already known.

To estimate the model parameters, we employ the Viterbi algorithm [21]. We iteratively search for the optimal MAP states, { si* : 1 ≤ i ≤ NR }, given the current model parameters, by locally optimizing the likelihood of the observation. An iterative process starts at a random region as the root node, from which we construct a Markov tree by extending one step outward towards the immediate neighboring regions. We note that under image rotation, the same Markov tree would be constructed since the regional neighborhood relationships remain the same. This allows for a rotationally invariant description of the image data. 1D Viterbi is then used for each tree branch to find the optimal MAP states as given in Eq. (4.1):

    s_i* = arg max_{s_i} P( s_i, f_i, s̃_j, f_j, s̃_k, f_k : r_j ∈ R_i″, r_k ∈ R_i′ \ R_i″ )
         = arg max_{s_i} P( s_i | f_i, s̃_j, f_j : r_j ∈ R_i′ )
         = arg max_{s_i} P( f_i | s_i ) P( s_i | s̃_j : r_j ∈ R_i″ ) P( s̃_k | s_i, s̃_j : r_j ∈ R_k″, r_k ∈ R_i′ \ R_i″ )        (4.1)

The first term, P(fi | si), on the right-hand side of the last equality of Eq. (4.1) is the likelihood of the observation feature set belonging to the state. The second term, P(si | s̃j : rj ∈ Ri″), describes the transition probability of region ri being in state si given the states of the preceding neighbors in Ri″. The third term, P(s̃k | si, s̃j : rj ∈ Rk″, rk ∈ Ri′ \ Ri″), describes the probability of region rk being in state s̃k given the states of ri and the other preceding neighbors in Rk″.
This derivation is based on [25], where s is used to distinguish the conditioned states of the neighboring regions from the state, s, of the current region to be optimized. The difference in our method is that instead of locally optimizing 1D rows on a 3D grid, si* is calculated along each 1D model branch on a 3D tree. Once unknown states are estimated, the model parameters and transitional probabilities are updated empirically in an iterative unsupervised manner assuming the MAP states, si*, are the true underlying states. Further details on our Markov tree construction and parameter estimation can be found in Huang et al. [12, 13].  With the MS lesion class typically constituting only a very small portion of the brain (generally <1% of the voxels), the iterative nature of our parameter estimation algorithm 121  and other EM-like methods such as k-means clustering, will tend to capture the majority of the data points in order to maximize the model data likelihood, generally overlooking lesions. Thus, in order to automate segmentation of MS lesions, especially in real clinical MRI scans where the presence of acquisition artifacts results in much higher intensity distribution overlaps between lesions and other tissue types, it is essential that the lesion intensity distribution can be effectively captured by the estimation process. Hence, since the intensity of all scans was normalized, initialization of our rbHMM parameter estimation algorithm is performed using a fixed initialization approach rather than relying on k-means clustering as previously used for tissue segmentation [12, 13]. Fixed initialization assigns a large number of means (Ns= 20 in our experiments) at fixed positions covering the majority of data points in the T2w-FLAIR intensity space and the covariance was initialized by using an Euclidean distance classifier given such means. Figure 4.2 shows an example data intensity distribution in the T2w-FLAIR intensity space, the state initializations and results after convergence.  After the MAP states are estimated for all image regions, the regions representing WML are then selected based on connected component analysis using a rule-based system similar to [27, 28]. The rule set used in our experiment consists of the following:  1. The state with a mean vector closest to zero is labeled as the ‘background’ class. 2. From the three largest ‘non-background’ states, the one with the lowest mean vector is labeled as the ‘white matter (WM)’ class. 3. All states satisfying the following are labeled as ‘WML’:  Intensity means are higher than the ‘WM’ class in both T2w and FLAIR scans. 122   The majority (>70% as determined empirically through experiments) of regions are neighboring the ‘WM’ class and not neighboring the ‘background’ class. 4. Any individual ‘WML’ regions neighboring the ‘background’ class are discarded.  Note that for rules 1 to 3, decisions are made at the state/class level, whereas for rule 4, decisions are made at the region level.  a  b  c  d  Figure 4.2. An example (a) T2w-FLAIR intensity distribution where lesions are not directly observable, (b) the initialization of class means and standard deviations as illustrated by green ellipses, (c) results of all classes after 10 iterations (the lesion class is shown by a blue ellipse), and (d) T2w-FLAIR intensity distribution of manually segmented lesions only with the lesion class shown by the same blue ellipse (note the color map scale difference between a-c and d).  
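A minimal sketch of the fixed initialization described above is given below (Python/NumPy): a grid of Ns = 20 candidate means is placed over the normalised T2w–FLAIR intensity range and each region receives an initial label by a Euclidean (nearest-mean) assignment, from which an initial covariance per state is computed. The grid spacing and the covariance fallback shown here are illustrative assumptions, not the exact values used in this work.

```python
import numpy as np

def fixed_initialization(features, n_states=20, lo=0.0, hi=1.0):
    """Initialise state means on a fixed grid in the 2D T2w-FLAIR feature
    space and assign each region to its nearest mean (Euclidean distance).

    features : (N_R, 2) array of per-region mean [T2w, FLAIR] intensities,
               assumed already normalised to a common range [lo, hi]
    """
    # Fixed grid of candidate means covering the bulk of the intensity space
    side = int(np.ceil(np.sqrt(n_states)))
    grid = np.linspace(lo, hi, side)
    means = np.array([(t2, fl) for t2 in grid for fl in grid])[:n_states]

    # Euclidean-distance (nearest-mean) initial labelling of every region
    d2 = ((features[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)

    # Per-state covariance from the initial assignment (identity if too few points)
    covs = []
    for l in range(n_states):
        pts = features[labels == l]
        covs.append(np.cov(pts.T) if len(pts) > 2 else np.eye(2))
    return means, np.array(covs), labels
```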
4.3 RESULTS

Segmentation tests were performed on the T2w and FLAIR scans of the ten subjects from the MS Lesion Segmentation Challenge database described above. Our validation against the available expert rater segmentations was done using four metrics: volume difference (Eq (4.2)), average Euclidean surface distance (Eq (4.3)), true positive rate (Eq (4.4)), and false positive rate (Eq (4.5)):

    Volume Difference = | V_SEG − V_REF | / V_REF × 100%        (4.2)

    Average Distance = ( Σ_{i ∈ B_SEG} min(B_SEG(i), B_REF) + Σ_{i ∈ B_REF} min(B_REF(i), B_SEG) ) / ( ║B_SEG║ + ║B_REF║ )        (4.3)

    True Positive = ║ M_SEG ∩ M_REF ║ / ║ M_REF ║ × 100%        (4.4)

    False Positive = ║ M_SEG ∩ ~M_REF ║ / ║ M_SEG ║ × 100%        (4.5)

where VSEG and VREF denote the lesion volumes of the segmentation algorithm and the provided reference. BSEG and BREF denote the border voxels of the segmented lesions and the provided reference, min(B1(i), B2) finds the minimum Euclidean distance from the i-th point in B1 to the B2 mask, and ║B║ indicates the size of mask B. MSEG and MREF denote the lesion masks of the segmentation algorithm and the provided reference. Employing the same four metrics as used in [2] allowed for comparison against various state-of-the-art algorithms [3-11].

We first assessed expert variability by comparing the provided segmentation masks from the UNC and CHB raters. Table 4.1 shows the results when comparing the lesion masks delineated by the UNC rater to the masks delineated by the CHB rater, and vice versa.

UNC rater lesions vs. CHB rater:
Case          Lesions [mL]  Lesions [%]  Vol. Diff. [%]  Avg. Dist. [mm]  True Pos. [%]  False Pos. [%]
UNC Case01    0.99          0.08         46.5            10.5             50.0           81.8
UNC Case02    4.41          0.41         39.6            3.0              79.4           39.1
UNC Case03    1.14          0.11         197.5           6.8              68.7           80.4
UNC Case04    2.12          0.22         37.0            5.3              75.8           68.3
UNC Case05    0.11          0.01         437.9           15.7             60.0           82.4
UNC Case06    4.52          0.42         42.4            5.2              52.0           82.5
UNC Case07    0.80          0.08         50.3            5.6              71.4           28.6
UNC Case08    0.44          0.04         228.7           4.8              100.0          60.0
UNC Case09    0.22          0.02         725.9           13.2             75.0           85.7
UNC Case10    1.13          0.10         113.8           8.9              55.6           71.4
Average       1.59          0.15         192.0           7.9              68.8           68.0

CHB rater lesions vs. UNC rater:
Case          Lesions [mL]  Lesions [%]  Vol. Diff. [%]  Avg. Dist. [mm]  True Pos. [%]  False Pos. [%]
UNC Case01    0.53          0.04         86.8            10.5             18.2           50.0
UNC Case02    2.66          0.25         65.5            3.0              60.9           20.6
UNC Case03    3.38          0.32         66.4            6.8              15.6           27.3
UNC Case04    2.90          0.30         27.0            5.0              31.7           24.2
UNC Case05    0.56          0.05         81.4            15.7             17.7           40.0
UNC Case06    2.60          0.24         73.7            5.2              17.5           48.0
UNC Case07    0.40          0.04         101.1           5.6              71.4           28.6
UNC Case08    1.46          0.13         69.6            4.8              40.0           0.0
UNC Case09    1.83          0.18         87.9            13.2             14.3           25.0
UNC Case10    2.41          0.21         53.2            8.9              28.6           44.4
Average       1.87          0.18         71.3            7.9              31.6           30.8

Average (both raters): 1.73 mL, 0.16%, 131.6%, 7.9 mm, 50.2%, 49.4%

Table 4.1. Expert variability between the UNC rater and the CHB rater on the clinical dataset.

On average, the CHB rater labeled lesion volume (3.46mL) is almost double that of the UNC rater (1.87mL), demonstrating a large variability in expert opinions on lesion sizes on many cases in this clinical dataset. However, the lesion locations appear fairly consistent as the average surface distance between both lesion labels is only 7.9mm. Averaging between the two comparisons, we observed a volume difference of 131%, an average surface distance of 7.9mm, a true positive rate of 50.2% and a false positive rate of 49.4%. The average lesion volumes (1.73mL) and lesion loads (0.16%) in this dataset are extremely small, making segmentation difficult.
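For completeness, the four metrics in (4.2)-(4.5) can be computed from binary lesion masks with a short sketch like the following (Python with NumPy and scipy.ndimage). The surface-distance term uses a Euclidean distance transform, which is one reasonable way to realise Eq. (4.3) but may differ in detail from the evaluation code used for the challenge.

```python
import numpy as np
from scipy import ndimage

def lesion_metrics(seg, ref, spacing=(0.5, 0.5, 0.5)):
    """Compute Eqs. (4.2)-(4.5) for boolean lesion masks seg (algorithm) and
    ref (expert reference); spacing is the voxel size in mm."""
    seg, ref = seg.astype(bool), ref.astype(bool)

    vol_diff = abs(seg.sum() - ref.sum()) / ref.sum() * 100.0        # Eq. (4.2)
    true_pos = (seg & ref).sum() / ref.sum() * 100.0                 # Eq. (4.4)
    false_pos = (seg & ~ref).sum() / seg.sum() * 100.0               # Eq. (4.5)

    # Border voxels of each mask (voxels with at least one outside neighbour)
    b_seg = seg & ~ndimage.binary_erosion(seg)
    b_ref = ref & ~ndimage.binary_erosion(ref)

    # Eq. (4.3): symmetric average of the minimum Euclidean distances from the
    # border voxels of one mask to the other mask, via distance transforms.
    dt_to_ref = ndimage.distance_transform_edt(~ref, sampling=spacing)
    dt_to_seg = ndimage.distance_transform_edt(~seg, sampling=spacing)
    avg_dist = ((dt_to_ref[b_seg].sum() + dt_to_seg[b_ref].sum())
                / (b_seg.sum() + b_ref.sum()))

    return {"vol_diff_pct": vol_diff, "avg_dist_mm": avg_dist,
            "true_pos_pct": true_pos, "false_pos_pct": false_pos}
```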
We then evaluated the performance of the proposed lesion segmentation method by comparing to each available expert lesion mask. Rather than directly comparing the raw metric values, we computed a performance score, from 0 to 100, for each of the four metrics of interest on a linear scale anchored to the average expert variability obtained above. We adopted a predefined score of 90 to represent the average variability between the two experts, as was done previously by other researchers in validating their methods [2-11].

Qualitatively, we show two examples from this dataset in Figure 4.3 (case 02, average score=93) and Figure 4.4 (case 07, average score=77). Both figures show two slices of the skull-stripped T1w, T2w and FLAIR input scans (subfigures a-c), lesion masks segmented by both UNC and CHB raters (subfigures d-e), and lesion masks generated by using the proposed rbHMM (subfigure f). Both quantitatively and qualitatively, we observed that case 02 resembles the UNC rater in lesion size (score=94), and resembles the CHB rater in lesion location (score=95). As for case 07, the proposed method captured similar lesions as both raters, as shown in Figure 4.4f.

Figure 4.3. Segmentation results of two slices from subject #02 of the clinical dataset. We note the variability between the rater-delineated lesions, and the resemblance of the rbHMM results to the CHB rater in lesion location. (a-c) The input T1w, T2w and FLAIR MRI scans, (d-e) UNC and CHB expert rater segmented lesion masks, and (f) lesion mask obtained by using the proposed rbHMM segmentation after 10 iterations.

Figure 4.4. Segmentation results of two slices from subject #07 of the clinical dataset. We note the similarity (top) and variability (bottom) between the rater-delineated lesions and between the rbHMM results to each rater. (a-c) The input T1w, T2w and FLAIR MRI scans, (d-e) UNC and CHB expert rater segmented lesion masks, and (f) lesion mask obtained by using the proposed rbHMM segmentation after 10 iterations.

Quantitatively, Table 4.2 shows that, on average, we captured 61.3% (score=92) of the expert segmented lesion masks with an 87.7% (score=82) false positive rate. The segmented lesion volumes are generally about 206.3% more than those delineated by the experts (score=85) and the average surface distance between border voxels is 12.9mm (score=84).

Comparison to the UNC rater:

| Case | Vol. Diff. [%] | Score | Avg. Dist. [mm] | Score | True Pos. [%] | Score | False Pos. [%] | Score | Avg. Score |
|---|---|---|---|---|---|---|---|---|---|
| UNC Case01 | 67.7 | 95 | 22.2 | 72 | 16.7 | 83 | 95.5 | 81 | 83 |
| UNC Case02 | 12.0 | 99 | 3.8 | 95 | 76.5 | 95 | 69.6 | 86 | 94 |
| UNC Case03 | 270.0 | 79 | 9.0 | 89 | 81.8 | 96 | 93.4 | 81 | 86 |
| UNC Case04 | 68.0 | 95 | 12.5 | 84 | 45.5 | 89 | 94.6 | 81 | 87 |
| UNC Case05 | 745.8 | 43 | 30.0 | 62 | 60.0 | 92 | 96.5 | 80 | 69 |
| UNC Case06 | 25.0 | 98 | 5.0 | 94 | 72.0 | 94 | 81.3 | 84 | 92 |
| UNC Case07 | 298.2 | 77 | 20.3 | 74 | 71.4 | 94 | 97.4 | 80 | 82 |
| UNC Case08 | 465.0 | 65 | 17.0 | 78 | 63.6 | 93 | 96.9 | 80 | 79 |
| UNC Case09 | 841.5 | 36 | 19.9 | 75 | 75.0 | 95 | 98.2 | 80 | 71 |
| UNC Case10 | 113.5 | 91 | 15.8 | 80 | 66.7 | 93 | 91.8 | 81 | 87 |
| Average | 290.7 | 78 | 15.5 | 80 | 62.9 | 93 | 91.5 | 81 | 83 |

Comparison to the CHB rater:

| Case | Vol. Diff. [%] | Score | Avg. Dist. [mm] | Score | True Pos. [%] | Score | False Pos. [%] | Score | Avg. Score |
|---|---|---|---|---|---|---|---|---|---|
| UNC Case01 | 212.2 | 84 | 21.0 | 73 | 9.1 | 82 | 98.5 | 80 | 80 |
| UNC Case02 | 85.4 | 94 | 1.8 | 98 | 87.0 | 97 | 57.0 | 88 | 94 |
| UNC Case03 | 24.4 | 98 | 5.9 | 93 | 64.4 | 93 | 68.1 | 86 | 92 |
| UNC Case04 | 22.6 | 98 | 10.7 | 86 | 23.2 | 85 | 92.7 | 81 | 88 |
| UNC Case05 | 57.3 | 96 | 18.3 | 77 | 88.2 | 98 | 80.2 | 84 | 88 |
| UNC Case06 | 30.2 | 98 | 5.1 | 94 | 28.1 | 86 | 82.4 | 83 | 90 |
| UNC Case07 | 701.0 | 47 | 21.0 | 73 | 85.7 | 97 | 97.9 | 80 | 74 |
| UNC Case08 | 71.9 | 95 | 7.3 | 91 | 75.0 | 95 | 89.3 | 82 | 91 |
| UNC Case09 | 14.0 | 99 | 5.4 | 93 | 50.0 | 90 | 92.3 | 81 | 91 |
| UNC Case10 | 0.1 | 100 | 5.6 | 93 | 85.7 | 97 | 80.6 | 84 | 93 |
| Average | 121.9 | 91 | 10.2 | 87 | 59.6 | 92 | 83.9 | 83 | 88 |
| Average (both raters) | 206.3 | 85 | 12.9 | 84 | 61.3 | 92 | 87.7 | 82 | |

Table 4.2. Results obtained on the clinical dataset using rbHMM. Lesion segmentation results are compared to two expert segmentation results (UNC and CHB). Scores are calculated linearly given a score of 90 for average variability between the two experts.

Comparing to each rater individually, even though we observed a large discrepancy in lesion volumes (290.7% with score=78) between lesions segmented by the proposed method and the UNC rater, the difference with the CHB rater is only 121.9% (score=91). We note that for the cases with smaller (<1.00mL) lesion volumes (cases 01, 05, 07, 08, 09), the proposed method was generally able to capture as many lesions as a human rater could (true positive scores of ~90). Even though these lesions were less distinct in the intensity space, our proposed rbHMM approach was successful in identifying such regions.

An example result (case 07) is shown in Figure 4.2, where the segmented lesion class is identified as shown by a blue ellipse in Figure 4.2c. At first glance, the blue ellipse may not appear to be capturing meaningful image features as the intensities lay partly inside the gray matter (GM) distribution and partly outside as outliers. It is not until we zoom in to a much lower intensity distribution scale (Figure 4.2d) that we can see that the lesion class is indeed well represented. Furthermore, methods that model lesions as outliers only may miss up to half of the lesion class that overlaps with GM in the intensity distribution, whereas the proposed data-driven method did not. Overall, the lesion masks produced by the proposed method scored consistently high across all metrics (mean score=86, σ=6).

4.4 DISCUSSION

In this chapter, we presented a multi-channel MS lesion segmentation approach, which is an extension of the 3D region-based hidden Markov model previously proposed for brain tissue segmentation on MRI. The results demonstrated that lesions segmented based on rbHMM yielded very good correspondence to lesions delineated by expert raters. When a score of 90 was assigned to inter-rater variability, the proposed method achieved an overall score of 86. This promising result suggested that such an rbHMM approach that incorporated both region likelihood and neighborhood information was successful in segmenting MS lesions by using multi-channel MRI images. By initializing many more states than the actual number of target tissues, the proposed method allows greater sensitivity in estimating the data distribution such that the very small lesion class could be successfully captured (Figure 4.2). A rule-based system for lesion region selection also helped improve noise resiliency by removing many false lesion regions. All of these were especially important because many of the lesions to be estimated were relatively small in size, and the volumetric evaluation metrics used, i.e. volume difference, true positive rate and false positive rate, are very sensitive to noise on such small lesions. However, we have shown quantitatively in Table 4.2 and qualitatively in Figure 4.3 and Figure 4.4 that the proposed method was able to successfully delineate lesions using clinical scans.
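The scoring used in Table 4.2 is described above only as a linear mapping with 90 anchored at the average expert variability; the exact endpoints are not spelled out in the text, so the sketch below is an inferred reading (a perfect result maps to 100 and the inter-rater value to 90). It reproduces the tabulated scores but should be treated as an assumption rather than the official Challenge formula.

```python
def metric_score(value, expert_value, higher_is_better=False):
    """Map a raw metric value to a 0-100 score with 90 anchored at inter-rater variability."""
    if higher_is_better:   # e.g. true positive rate, where a perfect result is 100 %
        ratio = (100.0 - value) / (100.0 - expert_value)
    else:                  # e.g. volume difference, surface distance, false positives (perfect = 0)
        ratio = value / expert_value
    return max(0.0, 100.0 - 10.0 * ratio)

# Checks against Table 4.2: a 12.0 % volume difference vs. the 131.6 % inter-rater average
# scores ~99, and a 16.7 % true positive rate vs. the 50.2 % inter-rater value scores ~83.
print(round(metric_score(12.0, 131.6)))                        # -> 99
print(round(metric_score(16.7, 50.2, higher_is_better=True)))  # -> 83
```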
We demonstrated in Table 4.1 that even with expert raters, achieving high correspondence in lesion segmentation was extremely difficult. In many cases, the amount of overlap was small, suggesting a large difference in rater opinions on what should be considered lesions. Of interest, therefore, was a comparison of an automatic computerized lesion segmentation algorithm to this expert variability, as was done in [2]. Rather than examining the raw performance metric values, a relative evaluation score was used (as in Table 4.2) to allow for direct comparisons of segmentation performance vs. expert variability. A score of 90 indicated variability comparable to that between expert human raters. We showed that lesions segmented using rbHMM were in close agreement with such inter-rater variability, with an overall score of 86.

We observed qualitatively in Figure 4.3 and Figure 4.4 that hyperintense lesion boundaries varied between the raw input T2w and FLAIR scans, as reported by [29], and that the lesion masks obtained by using the proposed rbHMM method resembled the results of the CHB rater more closely than those of the UNC rater. This was also quantitatively observed in Table 4.2. This is because, comparing the raters, the UNC rater appeared to rely mainly on the intensity of the FLAIR scans in delineating lesions, whereas the CHB rater also considered T2w scan intensity, resulting in more conservative lesion masks. Therefore, since our proposed segmentation method utilized both T2w and FLAIR scans, it was expected that the segmentation results would resemble the CHB results more closely than the UNC results.

Lastly, we compared the segmentation results of the proposed rbHMM method and another state-of-the-art method, namely the topology-preserving anatomy-driven segmentation framework (TOADS) [10], to human expert raters. TOADS is publicly available and therefore allows for a direct comparison, and the method has been shown to perform well in previous tests [10]. First, lesion segmentation was performed using TOADS on the clinical T1w, T2w and FLAIR scans (20 iterations, data subsampled to 1.0 mm×1.0 mm×1.0 mm according to [10]). TOADS segmented the MRI images first into WM, GM and cerebrospinal fluid (CSF) classes before lesions were separated from the WM class (Figure 4.5). When comparing lesions segmented by TOADS to lesions delineated by expert raters and our proposed method (Figure 4.3 and Figure 4.4), we observed in these two examples that TOADS under-segmented significant lesions on the right side of both images. Quantitatively, we observed in Table 4.3 that both rbHMM (score=83, σ=8) and TOADS (score=83, σ=13) achieved comparable results when comparing to the UNC rater, whereas rbHMM results (score=88, σ=6) were better matches than TOADS (score=84, σ=14) to the CHB rater, particularly in the true positive scores (mean=78 for TOADS, 86 for rbHMM). The rbHMM segmentation scores were more consistent, as a much lower standard deviation was observed. This improvement in standard deviation (54%) is substantial, which further demonstrates the robustness of the proposed method.

All entries in Table 4.3 are scores relative to the expert variability reference.

| Method | UNC: Vol. Diff. | UNC: Avg. Dist. | UNC: True Pos. | UNC: False Pos. | UNC: Avg. | CHB: Vol. Diff. | CHB: Avg. Dist. | CHB: True Pos. | CHB: False Pos. | CHB: Avg. | Overall Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| rbHMM | 78 (σ=23) | 80 (σ=10) | 93 (σ=4) | 81 (σ=2) | 83 (σ=8) | 91 (σ=16) | 87 (σ=9) | 86 (σ=15) | 83 (σ=3) | 88 (σ=6) | 86 (σ=6) |
| TOADS [10] | 71 (σ=42) | 85 (σ=11) | 91 (σ=4) | 83 (σ=4) | 83 (σ=13) | 74 (σ=40) | 87 (σ=12) | 78 (σ=29) | 85 (σ=6) | 84 (σ=14) | 82 (σ=13) |

Table 4.3. Comparison of lesion segmentation performance vs. expert variability score.

Figure 4.5. TOADS segmentation results of one slice each from subject #02 and #07 of the clinical dataset. The (a, c) discrete WM, GM and CSF tissue segmentation labels, and (b, d) lesion masks obtained after 20 iterations.

4.5 CONCLUSIONS

Segmentation of white matter MS lesions from MRI images is a challenging task due not only to variability in image quality, which complicates automatic procedures, but also to variability in the ground truths established by human experts. Our main contribution is proposing a new multi-channel rbHMM approach that is data-driven and requires no a priori atlases or training for the white matter lesion segmentation task. By initializing the model with many more states than the actual number of target tissues, the proposed method allows greater sensitivity to small lesions. A rule-based system for lesion region selection also helps improve noise resiliency by removing false lesion regions. Using a publicly available clinical brain MRI dataset of MS patients, we demonstrated that the lesion segmentation results achieved are in close agreement with expert human raters. Using a reference score of 90 to represent the variability between two expert human raters, the proposed method was evaluated with four metrics - volume difference, average surface distance, true positive and false positive. The method performs well, achieving a mean score of 86, and shows greater consistency (σ=6) across various lesion loads than another state-of-the-art method. Our future work will focus on acquiring additional clinical scans of various modalities and ground truths for further validation of the technique.

4.6 BIBLIOGRAPHY

1. A. Traboulsee, G. Zhao, D. K. B. Li, "Neuroimaging in multiple sclerosis," Neurologic Clinics, 23(1):131-148, 2005.
2. M. Styner, J. Lee, B. Chin, M. S. Chin, O. Commonwick, H.-H. Tran, V. Jewells, S. Warfield, "3D segmentation in the clinic: a grand challenge II: MS lesion segmentation," in Proc. of Medical Image Computing and Computer Assisted Intervention Workshop on MS Lesion Segmentation Challenge, 2008, id.638.
3. P. Anbeek, K. L. Vincken, M. A. Viergever, "Automated MS-lesion segmentation by K-nearest neighbor classification," in Proc. of Medical Image Computing and Computer Assisted Intervention Workshop on MS Lesion Segmentation Challenge, 2008, id.610.
4. M. Scully, V. Magnotta, C. Gasparovic, P. Pelligrino, D. Feis, H. J. Bockholt, "3D segmentation in the clinic: a grand challenge II at MICCAI 2008 – MS lesion segmentation," in Proc. of Medical Image Computing and Computer Assisted Intervention Workshop on MS Lesion Segmentation Challenge, 2008, id.611.
5. D. Garcia-Lorenzo, S. Prima, S. P. Morrissey, C. Barillot, "A robust Expectation-Maximization algorithm for multiple sclerosis lesion segmentation," in Proc. of Medical Image Computing and Computer Assisted Intervention Workshop on MS Lesion Segmentation Challenge, 2008, id.606.
6. J. C. Souplet, C. Lebrun, N. Ayache, G. Malandain, "An automatic segmentation of T2-FLAIR multiple sclerosis lesions," in Proc. of Medical Image Computing and Computer Assisted Intervention Workshop on MS Lesion Segmentation Challenge, 2008, id.613.
7. D.-J. Kroon, E. van Oort, K. Slump, "Multiple sclerosis detection in multispectral magnetic resonance images with principal components analysis," in Proc. of Medical Image Computing and Computer Assisted Intervention Workshop on MS Lesion Segmentation Challenge, 2008, id.604.
8. M. Prastawa, G. Gerig, "Automatic MS lesion segmentation by outlier detection and information theoretic region partitioning," in Proc. of Medical Image Computing and Computer Assisted Intervention Workshop on MS Lesion Segmentation Challenge, 2008, id.616.
9. J. H. Morra, Z. Tu, A. W. Toga, P. M. Thompson, "Automatic segmentation of MS lesions using a contextual model for the MICCAI grand challenge," in Proc. of Medical Image Computing and Computer Assisted Intervention Workshop on MS Lesion Segmentation Challenge, 2008, id.609.
10. N. Shiee, P.-L. Bazin, D. L. Pham, "Multiple sclerosis lesion segmentation using statistical and topological atlases," in Proc. of Medical Image Computing and Computer Assisted Intervention Workshop on MS Lesion Segmentation Challenge, 2008, id.605.
11. S. Bricq, C. Collet, J.-P. Armspach, "MS lesion segmentation based on hidden Markov chains," in Proc. of Medical Image Computing and Computer Assisted Intervention Workshop on MS Lesion Segmentation Challenge, 2008, id.612.
12. A. Huang, R. Abugharbieh, R. Tam, "Image segmentation using an efficient rotationally invariant 3D region-based hidden Markov model," in Proc. of IEEE Computer Vision and Pattern Recognition Workshop on Mathematical Methods in Biomedical Image Analysis (MMBIA), 2008, pp.107-111.
13. A. Huang, R. Abugharbieh, R. Tam, "Image segmentation using an efficient rotationally invariant 3D region-based hidden Markov model," IEEE Transactions on Image Processing (Accepted, March 2010).
14. C. A. Cocosco, V. Kollokian, R. K.-S. Kwan, A. C. Evans, "BrainWeb: online interface to a 3D MRI simulated brain database," NeuroImage, 5(4 part 2/4):S425, 1997.
15. S. M. Smith, "Robust automated brain extraction," Human Brain Mapping, 17(3):143-155, 2002.
16. J. G. Sled, "A non-parametric method for automatic correction of intensity non-uniformity in MRI data," IEEE Transactions on Medical Imaging, 17:87-97, 1998.
17. C. K. Jones, E. B. Wong, "A multi-scale application of the N3 method for intensity correction of MR images," in Proc. of SPIE Medical Imaging, 2002, pp.1123-1129.
18. B. R. Sajja, S. Datta, R. He, M. Mehta, R. K. Gupta, J. S. Wolinsky, P. A. Narayana, "Unified approach for multiple sclerosis lesion segmentation on brain MRI," Annals of Biomedical Engineering, 34(1):142-151, 2006.
19. S. K. Warfield, M. Kaus, F. A. Jolesz, R. Kikinis, "Adaptive template moderated, spatially varying statistical classification," Medical Image Analysis, 4(1):43-55, 2000.
20. L. Rabiner, B. H. Juang, Fundamentals of Speech Recognition, Englewood Cliffs, NJ: Prentice-Hall, 1993.
21. A. J. Viterbi, J. K. Omura, "Trellis encoding of memoryless discrete-time sources with a fidelity criterion," IEEE Transactions on Information Theory, 20:325-332, 1974.
22. J. Domke, A. Karapurkar, Y. Aloimonos, "Who killed the directed model?" in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008, pp.1-8.
23. S. S. Kuo, O. E. Agazzi, "Machine vision for keyword spotting using pseudo 2D hidden Markov model," in Proc.
of International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol.5, 1993, pp.81-84. 24. S. Bricq, C. Collet, J. P. Armspach, “Unifying framework for multimodal brain MRI segmentation based on hidden Markov chains,” Medical Image Analysis, 12:639-652, 2008. 25. D. Joshi, J. Li, J. Z. Wang, “A computationally efficient approach to the estimation of two- and three-dimensional hidden Markov models,” IEEE Transactions on Image Processing, 15(7):1871-1886, 2006. 26. S. Buecher, “Watersheds of functional and picture segmentation,” in Proc. of International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 1997, pp.731-737. 27. K. van Leemput, F. Maes, D. Vandermeulen, A. Colchester, P. Suetens, “Automated segmentation of multiple sclerosis lesions by model outlier detection,” IEEE Transactions on Medical Imaging, 20(8):677-688, 2001. 28. O. Freifeld, H. Greenspan, J. Goldberger, “Lesion detection in noisy MR brain images using constrained GMM and active contour,” in Proc. of IEEE International Symposium on Biomedical Imaging, 2007, pp.596-599. 29. B. Moraal, S. D. Roosendaal, P. J. W. Pouwels, H. Vrenken, R. A. van Schijndel, D. S. Meier, C. R. G. Guttman, J. J. G. Geurts, F. Barkhof, “Multi-contrast, isotropic, single-slab 3D MR imaging in multiple sclerosis,” European Radiology, 18:2311-2320, 2008.  136  5. A FUZZY REGION-BASED HIDDEN MARKOV MODEL FOR 3D PARTIAL-VOLUME CLASSIFICATION IN BRAIN MRI*  5.1 INTRODUCTION Graphical models have long been successfully used in various signal processing and analysis applications such as speech recognition [1, 2], computer vision [3, 4], error correction coding [5, 6], and genome analysis [7]. Such models provide a graphical representation of probabilistic distributions by expressing complex computations for inference and learning as simpler graphical structures [8]. There exist two major classes of graphical models - directed and undirected. The directed graphical models, or Bayesian Networks (BN), specify a particular directionality on the links of the graph, which is useful for conveying causal relationships between the underlying random variables. An example of BN is the hidden Markov model (HMM), which is commonly used to represent stochastic processes. On the other hand, undirected graphical models such as Markov random fields (MRF) do not carry any directional implications but rather specify constraints between the random variables. Both directed and undirected graphical models have been previously applied to brain tissue segmentation in magnetic resonance imaging (MRI) data. In such approaches, the true tissue class labels of the image voxels are represented by underlying hidden variables that cannot be directly observed but can  *  A version of this chapter has been submitted. A. Huang, R. Abugharbieh, R. Tam, “A fuzzy region-based  hidden Markov model for 3D partial-volume classification in brain MRI,” (Submitted January 2010). A preliminary version of this chapter has been published. A. Huang, R. Abugharbieh, R. Tam, “A fuzzy region-based hidden Markov model for partial-volume classification in brain MRI,” in Proc. of Medical Image Computing and Computer Assisted Intervention, 2009, pp.474-481. 137  be indirectly estimated by inference from the available observations [19], which are the image intensities.  Performing exact inference on a fully connected high dimensional graphical model is a daunting task. 
Regardless of whether the model is directed or undirected, the amount of computation is expensive if not intractable. For undirected representations, performing a maximum a posteriori (MAP) estimation on a hidden MRF (HMRF) is a computationally difficult problem [15]. In such cases, the optimal solutions are usually computed using local optimization techniques or optimization on a relaxed problem [16, 17, 18]. For directed representations, 2D and 3D HMMs or causal MRFs have been proposed on a regular rectangular voxel lattice [9]. However, modeling an image in such a way imposes an unnatural ordering on voxels that do not typically bear causal orders, and often results in striping artifacts in the direction of grid traversal regardless of object orientation [11]. In addition, estimating the exact model states requires exponential time on the number of voxels. A block-based approach [10] and an iterative approximation method [11] have been proposed to improve efficiency, but these do not solve the problem of sensitivity to object rotations. Recently, a more data-driven model [12, 13] was proposed as an extension of the iterative approximation method. Rather than estimating the optimal states on a rectangular voxel lattice, an image is first organized into irregular homogeneous sub-regions akin to the concept of superpixels [14], and the model parameters are then estimated using a non-uniform representation of voxel data in 3D space. The representation is non-uniform in the sense that no a priori voxel grouping or directionality is assumed.  138  In image segmentation tasks, most estimation methods assign a single discrete label to each image voxel. However, the alternative approach that allows for each voxel to belong to multiple classes simultaneously, called the mixel model [23], facilitates more accurate modeling when partial volume effects (PVE) exist, which is often the case in real data due to physical limitations of imaging resolution. One can intuitively infer partial memberships from the class distributions of a discrete labeling process using techniques such as the classical forward-backward algorithm [19]. However, a more accurate model would simultaneously model all voxel likelihoods without first assuming a single true label class, and such a mixel-based approach has shown to improve stability and accuracy [23, 24]. A number of researchers have employed MRFs for estimating partial volumes in brain MRI by incorporating information from immediate voxel neighbors [21, 23, 24, 25, 26, 27]. However, parameter estimation in such undirected models is known to be difficult and very time consuming. In contrast, estimation in a directed model is comparatively easy with the results achieved generally being comparable to those of an undirected model [28]. For example, Bricq et al. [22] recently proposed using a directed graphical model to estimate partial volume by converting 3D image data into a 1D Markov chain. However, this simple vectorization approach considers a single fixed-order neighboring voxel in the estimation process and the scan order is pre-defined irrespective of the actual data analyzed. In this chapter, we build on our recently proposed region-based HMM (rbHMM) [12, 13], which consists of many 1D Markov chains grown in a data-driven manner to form a 3D tree structure, whose advantages include rotational invariance and increased efficiency. 
In the current work, we propose a novel probabilistic extension to our original model that enables modeling of partial-volume effects within the segmentation process (our preliminary investigation on this appeared in [29]). Our new 139  technique, henceforth referred to as fuzzy rbHMM (frbHMM) integrates the classical forward-backward scheme with our original tree-structured parameter estimation algorithm and also refines region boundaries to the voxel level resulting in a more accurate classification model for data suffering from partial-volume effects. Using both simulated and real MRI medical data, we demonstrate the advantages of modeling each image region as a mixel, as opposed to having a single label, and present in-depth quantitative experimental validation as well as comparisons to other state-of-the-art methods.  The remainder of the chapter is organized as follows. We first detail the proposed 3D frbHMM model and present our extension of the parameter estimation algorithm to compute partial volume using 1D tree-structured Markov chains. We then present extensive experimental brain tissue segmentation tests on both simulated as well as real clinical MR data of normal and multiple sclerosis (MS) patient subjects. Furthermore, we validate our method’s classification performance under varying noise levels and partial volume effects and demonstrate improved accuracy over other current methods such as FAST, SPM and EMS.  5.2 METHODS We first briefly review our original discrete rbHMM [12, 13] and then present the proposed extension for modeling partial volume within the new frbHMM fuzzy framework. We then describe how the forward-backward algorithm is employed for estimating the 3D frbHMM parameters and region class posteriors, and how the 140  classification resolution is improved by further refinement of region boundaries.  5.2.1 Summary of Original Region-Based Hidden Markov Model (rbHMM)  In Huang et al. [12, 13], we proposed a new method for image segmentation that initially subdivides an image U into a set R of contiguous homogeneous regions ri, i=1,…,NR where NR is the total number of regions in U. Each region ri is comprised of the set of 3D voxel coordinates assigned to region i. The basic idea was that grouping voxels with similar properties, e.g. intensity or texture, using methods such as a watershed transform [30] or a normalized cut [14], would significantly reduce complexity and computational cost by allowing each set of grouped voxels to be processed as a node with a single discrete label in a Markov model. Each resulting region ri would then have an observed feature fi, that represents a noisy observation of the true underlying discrete state si, which can be any one of Ns model states {1,…,Ns}. For MR images (with noise modeled as Gaussian) fi can simply be taken as the mean intensity for improved noise resiliency. The true underlying states si, i=1,…,NR of the image regions can be optimally estimated in a MAP sense based on three statistical model assumptions:  Assumption 1: The observed features, i.e. image intensities in our case, for any given underlying model state l  {1,…, Ns} follow a conditional Gaussian distribution with mean μl and variance σl2, which can be estimated by sample mean and sample variance. The choice of Ns is generally determined a priori. For example, in MRI data Ns is typically the number of distinct tissues present.  141  Assumption 2: For any region ri, the observed features fi given the hidden states si are conditionally independent. 
Assumption 3: The true underlying states s_i, i=1,…,N_R are governed by an irregular Markov mesh such that the neighborhood influence on the probability of a region r_i having a given state l is conditionally dependent on the state estimates of the immediate neighboring regions R_i. The state transition probabilities are defined as

\alpha_{m,l}(i) = P(s_i = l \mid s_{i'} = m, \tilde{S}_i),

where s_{i'} is the estimated state of the preceding neighbor r_{i'} \in R_i along the branch in the tree-structured estimation process and \tilde{S}_i is the set of the latest state estimates for the remaining regions in R_i, and

\beta_{m,l}(i) = \sum_{r_j \in R_i \setminus r_{i'}} P(s_j = m \mid s_i = l),

for all r_j \in R_i not yet estimated in the current iteration. The first term, α, defines the probability of r_i being in state s_i = l given the known states (s_{i'}, \tilde{S}_i) of neighboring regions in R_i for the current iteration. The second term, β, defines, given s_i = l, how likely the state estimates s_j = m of the not-yet-estimated neighbors are.

Parameter estimation in the above model is carried out by constructing a 3D Markov tree starting from a random region, and extending one step outward from all nodes that have already been formed towards their immediate neighboring regions until all regions are traversed. The direction of the tree branching, or region traversal, follows a path of the most influential neighbor, which is determined based on the sum of two ratios defined by region adjacency, a_R(j,i), and edge adjacency, a_E(j,i), for neighboring regions r_j \in R_i with respect to region r_i:

a_R(j,i) = \frac{v(r_j)}{\sum_{r_k \in R_i'} v(r_k)}   (5.1)

a_E(j,i) = \frac{e(r_j, r_i)}{\sum_{r_k \in R_i'} e(r_k, r_i)}   (5.2)

where v(r_j) denotes the volume of region r_j, and e(r_j, r_i) denotes the number of border voxels along the region boundary between r_i and r_j. The region and edge adjacencies are hence defined as the ratio of the volume and shared border length of a particular neighboring region normalized to that of the entire neighborhood of the region r_i. Note that under the same initial conditions, the direction of region traversal remains the same under image rotations since the regional neighborhood relationships do not change. This allows a rotationally invariant representation of the image data, where for each tree branch, the 1D Viterbi algorithm [5, 6] is used to efficiently search for the optimal MAP states for each region. Further details on the Markov tree construction and parameter estimation can be found in [12, 13].

5.2.2 Proposed Fuzzy Region-Based Hidden Markov Model (frbHMM)

The main contribution of this chapter is the extension of our original rbHMM [12, 13], as summarized in the section above, through the introduction of fuzzy states, which allows each image region r_i to belong to multiple classes simultaneously. Thus, rather than considering one single "true" underlying state s_i for region r_i, we now consider an underlying state vector T_i = (t_{i,1}, t_{i,2}, …, t_{i,N_s}) such that \sum_{l=1}^{N_s} t_{i,l} = 1, where t_{i,l} \in [0,1] represents the ratio of the l-th class in region r_i for l=1,…,N_s. The term t_{i,l} can be seen as the posterior probability of labeling region r_i with label l given the model and observations. Hence, for a discrete or crisp segmentation, t_{i,l} = 0 for all l in T_i except for the single element where s_i = l, for which t_{i,l} = 1.

To compute our model parameters, K-means clustering is applied to all of the voxel samples as a simple way to initialize the means and variances of the model states, which are then used to initialize the region posterior probabilities T_i = (t_{i,1}, t_{i,2}, …, t_{i,N_s}) for i=1,…,N_R based on a Gaussian mixture assumption.
The transition probabilities α_{m,l}(i) and β_{m,l}(i) are determined based on probability-weighted empirical frequencies given the constraint \sum_{l=1}^{N_s} \alpha_{m,l}(i) = 1 for i=1,…,N_R. Once these model parameters are initialized, the posteriors t_{i,l} are computed using the forward-backward algorithm, which has been shown to be efficient in 1D [19]. Thus, similar to rbHMM, we iteratively construct a 3D Markov tree from a randomly selected region and traverse outwards to all other regions.

For each region r_i, the subscript i represents a global index of the specific region in the image U. For parameter estimation on the constructed 3D Markov tree, each region can alternatively be uniquely indexed as r_{b(k)} to represent r_i as the k-th region on branch b. Correspondingly, t_{i,l} can now be represented as t_{b(k),l}. For each of the tree branches formed, the forward-backward algorithm [19] is applied to estimate the posterior probability (5.3) of class l \in \{1,…,N_s\} at each node by defining a forward term, A_{b(k)}(l), and a backward term, B_{b(k)}(l), as:

P(s_{b(k)} = l \mid \mathbf{f}, \bar{S}_{b(k)}) = \frac{P(\mathbf{f}, \bar{S}_{b(k)} \mid s_{b(k)} = l)\, P(s_{b(k)} = l)}{P(\mathbf{f}, \bar{S}_{b(k)})}
= \frac{P(\vec{f}_{b(k)}, \vec{S}_{b(k)} \mid s_{b(k)} = l)\, P(\overleftarrow{f}_{b(k)}, \overleftarrow{S}_{b(k)} \mid s_{b(k)} = l)\, P(s_{b(k)} = l)}{P(\mathbf{f}, \bar{S}_{b(k)})}
= \frac{P(s_{b(k)} = l, \vec{f}_{b(k)}, \vec{S}_{b(k)})\, P(\overleftarrow{f}_{b(k)}, \overleftarrow{S}_{b(k)} \mid s_{b(k)} = l)}{\sum_{m=1}^{N_s} P(s_{b(k)} = m, \vec{f}_{b(k)}, \vec{S}_{b(k)})\, P(\overleftarrow{f}_{b(k)}, \overleftarrow{S}_{b(k)} \mid s_{b(k)} = m)}
= \frac{A_{b(k)}(l)\, B_{b(k)}(l)}{\sum_{m=1}^{N_s} A_{b(k)}(m)\, B_{b(k)}(m)}   (5.3)

A_{b(k)}(l) = P(\vec{f}_{b(k)}, s_{b(k)} = l, \vec{S}_{b(k)})   (5.4)

B_{b(k)}(l) = P(\overleftarrow{f}_{b(k)}, \overleftarrow{S}_{b(k)} \mid s_{b(k)} = l)   (5.5)

where \bar{S}_{b(k)} is the set of known neighboring states to region r_{b(k)}, and \mathbf{f} is the set of all observed mean intensity features on a tree branch – consisting of \vec{f}_{b(k)} from the start of the tree branch to region r_{b(k)} and \overleftarrow{f}_{b(k)} from region r_{b(k+1)} to the end of the branch. Intuitively, the forward term represents the probability of observing the sequence of intensity features from r_{b(1)} to r_{b(k)} and ending in state l, and the backward term represents the probability of observing the sequence of intensity features from r_{b(k+1)} to the end of the chain, given that r_{b(k)} is in state l. We can solve both terms inductively by:

A_{b(k+1)}(l) = P(s_{b(k+1)} = l, \vec{f}_{b(k+1)}, \vec{S}_{b(k+1)})
= P(s_{b(k+1)} = l, \vec{f}_{b(k)}, \vec{S}_{b(k+1)})\, P(f_{b(k+1)} \mid s_{b(k+1)} = l)
= \left[ \sum_{m=1}^{N_s} P(s_{b(k)} = m, \vec{f}_{b(k)}, \vec{S}_{b(k+1)})\, P(s_{b(k+1)} = l \mid s_{b(k)} = m, \vec{S}_{b(k+1)}) \right] P(f_{b(k+1)} \mid s_{b(k+1)} = l)
= \left[ \sum_{m=1}^{N_s} A_{b(k)}(m)\, \alpha_{m,l}(b(k+1)) \right] P(f_{b(k+1)} \mid s_{b(k+1)} = l)   (5.6)

B_{b(k)}(l) = P(\overleftarrow{f}_{b(k)}, \overleftarrow{S}_{b(k)} \mid s_{b(k)} = l)
= \sum_{m=1}^{N_s} P(\overleftarrow{f}_{b(k+1)}, \overleftarrow{S}_{b(k+1)} \mid s_{b(k+1)} = m)\, P(f_{b(k+1)}, \bar{S}_{b(k)} \mid s_{b(k)} = l, s_{b(k+1)} = m)
= \sum_{m=1}^{N_s} B_{b(k+1)}(m)\, P(\bar{S}_{b(k)} \mid s_{b(k)} = l, s_{b(k+1)} = m)\, P(f_{b(k+1)} \mid s_{b(k+1)} = m)
= \sum_{m=1}^{N_s} \left[ B_{b(k+1)}(m)\, \beta_{m,l}(b(k))\, P(f_{b(k+1)} \mid s_{b(k+1)} = m) \right]   (5.7)

The forward term is initialized as A_{b(1)}(l) = P(f_{b(1)} \mid s_{b(1)} = l) / \sum_{m=1}^{N_s} P(f_{b(1)} \mid s_{b(1)} = m) and the backward term is initialized as B_{b(length)}(l) = 1, where length is the number of nodes in the branch.
The posterior t_{i,l} representing the probability of class l in region r_i can thus be expressed in terms of the forward and the backward terms as:

t_{i,l} = t_{b(k),l} = \frac{A_{b(k)}(l)\, B_{b(k)}(l)}{\sum_{m=1}^{N_s} A_{b(k)}(m)\, B_{b(k)}(m)}   (5.8)

By assuming a linear model, the estimated region posterior probabilities represent the percentages of the underlying partial volumes that constitute the observed voxel intensities. Once all of the region posteriors are found, we then re-evaluate the model parameters {μ_l, σ_l^2, α_{i,l}} for each state l \in \{1,…,N_s\} using the computed posteriors as weights on the observed features. By assuming the estimated states are the true underlying states, the parameter re-estimation procedure is identical to an expectation-maximization step by finding the maximum likelihood estimates of parameters in a Gaussian mixture HMM [31]. A new Markov tree is then constructed from a random region and the estimation process iterates. The convergence criterion is defined by reaching a minimum mean absolute change or a maximum number of iterations. Since we work on an irregular representation rather than a grid mesh, similar to rbHMM, α_{i,l} is generated and iteratively updated in a lookup table for greater efficiency. The difference is that since we now represent each voxel as a mixture of states, each neighboring region contributes partially to all states based on the estimated partial volume, rather than on a single discrete state.

While using watershed segmentation for initial subdivision of an image into contiguous homogeneous regions works very well for discrete classification (Figure 5.1a), the proposed fuzzy approach would benefit from a finer resolution around region boundaries, which would allow for superior capturing of gradient changes due to partial volume effects (Figure 5.1b). Therefore, once our watershed regions are established, a refinement step subdivides the region boundaries into voxel-sized regions to provide maximal boundary resolution for the likelihood estimation procedure (Figure 5.1c). The degree of region refinement can be controlled by examining the intensity deviation of each boundary voxel from the mean intensity of the region that the voxel is initially assigned to. For simplicity, we employ a standard z-score measure in our implementation, where boundary voxels above a z-score threshold are reassigned as individual regions. A lower z-score threshold provides finer region resolution and greater classification accuracy at the expense of increased computational cost, and vice versa, as we demonstrate in the results section.

Figure 5.1. Image (grayscale) and watershed subdivision (green) for (a) ideal and (b) partial-volume edges with two regions. By assigning boundary voxels as individual regions (c), we increase the resolution for subsequent region-based estimations.

5.3 RESULTS AND DISCUSSION

We validated our fuzzy partial-volume classification technique, frbHMM, on 3D MR brain images and compared the performance to that of the discrete approach, rbHMM [12]. As reported by Huang et al. [12], discrete rbHMM segmentation has been shown to outperform other state-of-the-art brain tissue classification methods including a block-based 3D HMM [10] and the HMRF-based FMRIB's Automated Segmentation Tool (FAST) [15]. Here we demonstrate the additional benefit of employing a partial-volume mixel model within the segmentation process in the context of rbHMM.
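Before turning to the experiments, the following minimal sketch illustrates the forward-backward recursions of Eqs. (5.4)-(5.8) on a single 1D branch with Gaussian emission likelihoods. It deliberately simplifies the model: a single state-independent transition matrix stands in for the neighborhood-dependent α and β terms, and all names (`branch_posteriors`, etc.) are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np

def branch_posteriors(features, means, variances, trans):
    """Class posteriors t_{b(k),l} along one branch via forward-backward (cf. Eqs. 5.6-5.8).

    features        : (K,) observed mean intensities of the regions on the branch.
    means, variances: (Ns,) Gaussian emission parameters of the model states.
    trans           : (Ns, Ns) transition matrix standing in for alpha_{m,l}; rows sum to 1.
    Returns a (K, Ns) array of per-region posteriors; each row sums to 1.
    """
    features = np.asarray(features, dtype=float)
    K, Ns = features.size, means.size

    # Gaussian emission likelihoods P(f_k | s_k = l).
    emis = np.exp(-(features[:, None] - means[None, :]) ** 2 / (2.0 * variances[None, :]))
    emis /= np.sqrt(2.0 * np.pi * variances[None, :])

    # Forward pass, normalized at every step for numerical stability.
    A = np.zeros((K, Ns))
    A[0] = emis[0] / emis[0].sum()
    for k in range(1, K):
        A[k] = (A[k - 1] @ trans) * emis[k]
        A[k] /= A[k].sum()

    # Backward pass, initialized to 1 at the last region of the branch.
    B = np.ones((K, Ns))
    for k in range(K - 2, -1, -1):
        B[k] = trans @ (B[k + 1] * emis[k + 1])
        B[k] /= B[k].sum()

    post = A * B                        # Eq. (5.8), up to per-region normalization
    return post / post.sum(axis=1, keepdims=True)

# Two overlapping intensity classes: the middle region receives a mixed membership.
t = branch_posteriors(features=[10.0, 10.5, 15.0, 19.5, 20.0],
                      means=np.array([10.0, 20.0]),
                      variances=np.array([4.0, 4.0]),
                      trans=np.array([[0.9, 0.1], [0.1, 0.9]]))
print(t.round(2))   # middle row is approximately [0.5, 0.5]
```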
For all experiments, we performed a four-class segmentation - background, white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF). The measure that we have chosen to evaluate the classification performance is the mean square error (MSE), as defined in (5.9), where t(x,y,z) and \hat{t}(x,y,z) are the estimated and true voxel class memberships of the particular tissue at voxel coordinates (x,y,z), and X, Y and Z are the dimensions of the image U. Unlike overlap metrics such as the Dice [33] and Jaccard [34] coefficients, which are often used for evaluating associations of binary representations, MSE provides a direct measure for quantifying overall error performance for both discrete and fuzzy classifications without having to quantize the fuzzy results. A similar measure, root MSE, has also been used previously [23, 24]. In addition to MSE, the percentage MSE gain (5.10) between the discrete and fuzzy models was also evaluated for each experiment.

\text{MSE} = \frac{\sum_{(x,y,z) \in U} \left[ t(x,y,z) - \hat{t}(x,y,z) \right]^2}{X \cdot Y \cdot Z}   (5.9)

\%\,\text{gain} = \frac{\text{MSE}_{discrete} - \text{MSE}_{fuzzy}}{\text{MSE}_{discrete}} \times 100\%   (5.10)

5.3.1 Validation Using Simulated Brain MRIs

We first calculated the classification accuracy on simulated T1-weighted BrainWeb [32] scans, with dimensions 181×217×181 and 1 mm×1 mm×1 mm spacing, of normal and MS anatomical models. We evaluated the impact of variations in: (1) image noise, (2) image resolution, and (3) degree of region boundary refinement on the algorithm's performance. As both discrete and partial volume ground truths were available, the discrete and fuzzy classifications were compared to their respective ground truths.

5.3.1.1 Simulated MRIs: effects of varying noise levels

We examined the performance of frbHMM over a range of noise levels (0%-9%) and compared that to the original rbHMM performance on the simulated BrainWeb MRI database. The rbHMM results, which have previously been shown to be highly robust in the presence of noise [12], were compared to the discrete phantom whereas the frbHMM results were compared to the fuzzy phantom. Figure 5.2 shows qualitative classification results achieved by frbHMM for the 0% and 9% noise cases. For the normal anatomical MRI data, both the WM and GM results closely resemble the true fuzzy phantoms. For the MS anatomical MRI data, the WM results closely resemble the provided fuzzy phantom. The GM results also match well with the GM in the phantom but, as expected, also include significant amounts of the lesion class due to the strong intensity overlap between the two classes in T1-weighted scans. However, the classification remains robust in that it does not show any unexpected stability problems when the four-class model assumptions are violated. Table 5.1 shows the quantitative results attained by both rbHMM and frbHMM in the form of MSE values for WM, GM and CSF at various noise levels. The errors gradually increase with increasing noise levels, as expected, but still remain extremely small, especially for frbHMM (maximum MSE < 0.020), for both the normal and MS anatomical models. Most notably, frbHMM maintains overall consistent MSE gains in the range of 19.87%-38.17% over rbHMM. The average MSE improvements of frbHMM over rbHMM are 0.0041 (p<0.0001) for WM, 0.0064 (p<0.0001) for GM, and 0.0030 (p<0.0001) for CSF.

Figure 5.2.
Experimental results on normal and MS simulated BrainWeb MRI scans at varying noise levels. (a) WM and (b) GM/lesion fuzzy phantoms, (c) input T1-weighted scans at 1 mm slice thickness, and (d-e) classification results of the proposed frbHMM.  151  0% Noise 1% Noise 3% Noise rbHMM frbHMM % gain rbHMM frbHMM % gain rbHMM frbHMM WM 0.0089 0.0069 22.47 0.0094 0.0070 25.55 0.0108 0.0073 Normal GM 0.0149 0.0119 20.13 0.0158 0.0117 25.81 0.0177 0.0121 CSF 0.0075 0.0056 25.71 0.0079 0.0060 24.34 0.0079 0.0058 WM 0.0090 0.0069 23.33 0.0101 0.0071 29.15 0.0109 0.0073 MS GM 0.0151 0.0121 19.87 0.0189 0.0138 26.81 0.0176 0.0123 CSF 0.0107 0.0081 24.63 0.0108 0.0083 23.18 0.0084 0.0057 5% Noise 7% Noise 9% Noise Data Tissue rbHMM frbHMM % gain rbHMM frbHMM % gain rbHMM frbHMM WM 0.0128 0.0081 36.26 0.0150 0.0097 35.53 0.0173 0.0120 Normal GM 0.0204 0.0133 34.65 0.0237 0.0152 35.80 0.0273 0.0184 CSF 0.0087 0.0061 29.46 0.0098 0.0067 32.21 0.0140 0.0097 WM 0.0130 0.0080 38.54 0.0150 0.0092 38.17 0.0171 0.0110 MS GM 0.0232 0.0156 32.82 0.0262 0.0172 34.19 0.0293 0.0197 CSF 0.0124 0.0096 22.61 0.0128 0.0096 24.77 0.0144 0.0107 Data  Tissue  % gain 32.87 31.19 27.34 32.44 30.21 31.55 % gain 30.46 32.67 31.04 35.82 32.87 25.59  Table 5.1. Effects of varying noise levels (0-9%) on the MSE of classifcations based on discrete and fuzzy rbHMMs on the simulated BrainWeb MRIs of normal and MS subject anatomical models. Results were compared to the discrete and fuzzy phantoms, respectively. The better performance for each comparison is highlighted in bold. Gain is defined as (rbHMM-frbHMM)/rbHMM. The results demonstrate that frbHMM consistently outperforms rbHMM on BrainWeb data at different noise levels.  5.3.1.2 Simulated MRIs: effects of varying image and slice resolution  We also tested the performance of the proposed model under varying image resolution conditions that simulate the wide range of voxel sizes and slice reconstruction parameters available in real MRI acquisitions. In order to simulate progressively reduced resolution, i.e., increased partial volume effects, we first increased slice thickness by factors of two, three and five by downsampling in the through-slice direction, with averaging smoothing to avoid violating the sampling theorem. We then compared the results to the correspondingly downsampled phantoms. As shown in Table 5.2, frbHMM yields a 40.10%-74.81% improvement in MSE over rbHMM. Next, we downsampled the images in all data dimensions by factors of two, three and five to simulate overall reductions in  152  image resolution. Again, frbHMM is consistently more accurate, achieving MSE gains of 26.00%-83.33% over rbHMM. Compared to frbHMM, the performance of rbHMM is largely degraded at the lower resolutions. This can be intuitively understood as the reduction in the number of input voxels results in a reduction in the number of voxels per class, making iterative parameter estimation unstable. In contrast, we note that the performance of our proposed frbHMM remains much more consistent for WM, GM and CSF (MSE < 0.030) as the partial class memberships in each region allow the labels to be adjusted fractionally as the resolution degrades.  To further investigate the performance of frbHMM and rbHMM, we also introduced increased partial volumes while maintaining the original image dimensions by applying Gaussion smoothing (kernel dimensions = 1×1×7 and 7×7×7) at varying standard deviations to incorporate intensities within ±3 mm in slice orientation only as well as in all three dimensions. 
By using smoothing instead of downsampling, we maintained the same amount of voxels available to each method, thus, removing the cause of performance degradation for rbHMM and providing a more competitive comparison. Quantitative results in Table 5.2 show that frbHMM maintained consistent MSE gains (25.34%-41.64%) over rbHMM with MSE below 0.0250 and 0.0350 with the resolution reduction in the slice orientation only and all three dimensions, respectively. The rbHMM performance was more stable in this experiment than the downsampling case because the number of voxels to be estimated remained the same throughout; thus, a more realistic comparison to frbHMM’s performance was possible. Figure 5.3 demonstrates qualitative sagittal results at varying slice resolutions.  153  Data  Tissue WM GM CSF WM GM CSF  Normal  MS  Data  Tissue WM GM CSF WM GM CSF  Normal  MS  Data  Tissue WM GM CSF WM GM CSF  Normal  MS  Data  Tissue  Normal  MS  WM GM CSF WM GM CSF  Reducing Slice Resolution by Downsampling Resolution = 1×1×2 mm3 Resolution = 1×1×3 mm3 rbHMM frbHMM % gain rbHMM frbHMM % gain 0.0250 0.0082 67.11 0.0281 0.0103 63.44 0.0421 0.0145 47.89 0.0421 0.0169 59.87 0.0183 0.0094 48.75 0.0261 0.0119 54.29 0.0262 0.0095 63.55 0.0335 0.0132 60.49 0.0327 0.0180 45.03 0.0594 0.0224 62.26 0.0230 0.0138 40.10 0.0402 0.0170 57.82 Reducing Overall Resolution by Downsampling Resolution = 2×2×2 mm3 Resolution = 3×3×3 mm3 rbHMM frbHMM % gain rbHMM frbHMM % gain 0.0348 0.0101 71.11 0.0489 0.0178 63.60 0.0825 0.0174 79.18 0.0626 0.0219 64.97 0.0404 0.0139 65.46 0.0509 0.0193 62.20 0.0363 0.0130 64.19 0.0553 0.0189 65.73 0.0559 0.0216 61.33 0.0737 0.0249 66.17 0.0433 0.0186 57.08 0.0334 0.0247 26.02 Reducing Slice Resolution by Smoothing Gaussian kernel Gaussian kernel std. dev. = 1.0 voxel std. dev. = 1.5 voxel rbHMM frbHMM % gain rbHMM frbHMM % gain 0.0139 0.0084 39.46 0.0178 0.0106 40.78 0.0262 0.0167 36.54 0.0323 0.0199 38.47 0.0132 0.0099 25.34 0.0156 0.0116 25.51 0.0140 0.0085 39.30 0.0175 0.0103 40.91 0.0260 0.0165 36.48 0.0321 0.0198 38.47 0.0137 0.0099 27.19 0.0161 0.0118 26.73 Reducing Overall Resolution by Smoothing Gaussian kernel Gaussian kernel std. dev. = 1.0 voxel std. dev. = 1.5 voxel rbHMM frbHMM % gain rbHMM frbHMM % gain 0.0217 0.0133 38.71 0.0292 0.0182 37.52 0.0364 0.0231 41.48 0.0501 0.0299 40.25 0.0181 0.0125 31.32 0.0225 0.0155 31.31 0.0222 0.0136 38.74 0.0292 0.0181 37.83 0.0364 0.0213 41.48 0.0506 0.0296 41.46 0.0186 0.0128 31.12 0.0230 0.0153 33.60  Resolution = 1×1×5 mm3 rbHMM frbHMM % gain 0.0449 0.0163 63.73 0.0628 0.0204 67.48 0.0473 0.0168 64.46 0.0570 0.0213 62.55 0.0951 0.0267 71.95 0.0818 0.0206 74.81 Resolution = 5×5×5 mm3 rbHMM frbHMM % gain 0.0747 0.0203 72.79 0.0854 0.0174 79.66 0.0386 0.0285 26.00 0.1157 0.0193 83.33 0.1250 0.0238 80.96 0.0362 0.0196 45.81 Gaussian kernel std. dev. = 2.0 voxel rbHMM frbHMM % gain 0.0202 0.0120 40.78 0.0365 0.0223 38.98 0.0162 0.0113 30.25 0.0199 0.0118 40.76 0.0364 0.0222 38.87 0.0180 0.0131 26.99 Gaussian kernel std. dev. = 2.0 voxel rbHMM frbHMM % gain 0.0347 0.0211 39.34 0.0581 0.0340 41.39 0.0251 0.0172 31.34 0.0344 0.0211 38.54 0.0582 0.0340 41.64 0.0256 0.0171 33.27  Table 5.2. Effects of reducing slice and overall image resolution on the MSE of classifcations based on discrete and fuzzy rbHMMs on the simulated BrainWeb MRIs of normal and MS subject anatomical models. Results were compared to the discrete and fuzzy phantoms, respectively. The better performance for each comparison is highlighted in bold. 
Gain is defined as (rbHMM-frbHMM)/rbHMM. The results demonstrate that frbHMM consistently outperforms rbHMM on BrainWeb data at different levels of resolution.  154  input image (1 mm) frbHMM WM results frbHMM GM results (a) fuzzy phantoms  (b) original resolution  (c) Gaussian kernel std. dev. = 1.0  (d) Gaussian kernel std. dev. = 2.0  Figure 5.3. Experimental results on normal simulated BrainWeb MRI scans at varying slice resolution. Larger smoothing standard deviation is reflected in the increased smearing in image edges and tissue intensity distributions. (a) WM and GM fuzzy phantoms, and (b-d) input T1-weighted scans and classification results of the proposed frbHMM.  5.3.1.3 Simulated MRIs: effects of region refinement on performance vs.  computational cost  We tested the effects of varying the degree of region boundary refinement by performing classification on simulated brain MRI data while varying the intensity z-score threshold described in the last paragraph of the Methods section. To examine the performance and computational cost tradeoffs, we introduced increased partial-volume effects in the high resolution data by Gaussian smoothing in the through-slice direction (kernel dimensions = 1×1×7, standard deviation = 2.0) to incorporate additional intensities within ±3 mm without changing the original image dimensions. The quantitative results in Table 5.3 155  show that both rbHMM and the proposed frbHMM demonstrate high efficiency when no region refinement is employed (i.e., z = ∞). Classification times are 672 seconds and 876 seconds for the discrete and fuzzy methods, respectively. Even with such a coarse region division, frbHMM demonstrates improved performance over rbHMM by achieving MSE gains of 36.60%, 41.55% and 24.97% in WM, GM and CSF classifications, respectively. The advantage of frbHMM over the rbHMM approach is observable for all other z-values tested (z = 0.10, 0.25, 0.50, 0.75, 1.00). As region refinement is increased by lowering the z-score cutoff, there is a gradual reduction in MSE values for both rbHMM and frbHMM at a cost of increased processing time, as expected.  Tissue WM GM CSF Tissue WM GM CSF Tissue WM GM CSF  z-score cutoff=0.10 (voxels/region=6.65) % rbHMM (time) frbHMM (time) gain 0.0150 0.0084 44.33 0.0289 (3676s) 0.0161 (7598s) 44.06 0.0148 0.0098 34.06 z-score cutoff=0.50 (voxels/region=17.64) % rbHMM (time) frbHMM (time) gain 0.0202 0.0120 40.78 0.0365 (1730s) 0.0223 (2906s) 38.98 0.0162 0.0113 30.25 z-score cutoff=1.00 (voxels/region=97.88) % rbHMM (time) frbHMM (time) gain 0.0213 0.0130 39.00 0.0376 (810s) 0.0211 (1139s) 41.25 0.0182 0.0129 29.26  z-score cutoff=0.25 (voxels/region=9.96) % rbHMM (time) frbHMM (time) gain 0.0179 0.0104 42.06 0.0309 (2989s) 0.0177 (5637s) 42.77 0.0140 0.0092 34.23 z-score cutoff=0.75 (voxels/region=28.61) % rbHMM (time) frbHMM (time) gain 0.0206 0.0121 41.20 0.0359 (978s) 0.0211 (1903s) 41.25 0.0179 0.0120 32.83 z-score cutoff=∞ (voxels/region=129.69) % rbHMM (time) frbHMM (time) gain 0.0213 0.0135 36.60 0.0377 (672s) 0.0220 (876s) 41.55 0.0183 0.0137 24.97  Table 5.3. Effects of varying degree of region refinement on the MSE and runtime of classifcations using discrete and fuzzy rbHMMs on the simulated BrainWeb MRIs of a normal anatomical model. Results were compared to the discrete and fuzzy phantoms, respectively. The better performance for each comparison is highlighted in bold. Gain is defined as (rbHMM-frbHMM)/rbHMM. 
Greater levels of refinement result in more accurate classifcation, at the cost of greater run time.  The qualitative results in Figure 5.4 show that the granularity of the classification results 156  can be greatly improved by using lower z-scores. The observation of such a tradeoff in accuracy and processing time is consistent with the nature of a region-based approach. While using larger regions allows greater efficiency by lowering the number of regions, the segmentation is correspondingly coarse. As the region sizes are reduced to approach those of the real image features, the accuracy naturally increases, but so does the number  frbHMM GM results  frbHMM WM results  of regions to be processed.  (a) ground truth  (b) Z=0.10  (c) Z=0.50  (d) Z=∞  Figure 5.4. Experimental results on normal simulated BrainWeb MRI scans at varying degree of region refinement. (a) Fuzzy WM and GM phantoms, and frbHMM classifications results at (b-f) different standard Z score cutoff values and with (g) no region refinement. Smaller Z produced smaller average region size, thus, finer classification resolution.  5.3.1.4 Simulated MRIs: comparisons of frbHMM vs. other state‐of‐the‐art methods  Next, we demonstrate the improved accuracy of our proposed method over other state-of-the-art fuzzy classification tools, namely FAST [15], Statistical Parametric Mapping (SPM) [35] and Expectation-Maximization Segmentation (EMS) [37]. These methods have been shown to be robust to imaging artifacts and are widely applied in 157  many clinical studies. FAST [15] estimates the voxel MAP labels using the HMRF approach with skull-stripped images, and estimates the posterior probabilities by first assuming a discrete model, then re-estimates the Gaussian distribution parameters by probability weighting (i.e., as a mixel model). SPM [35] spatially normalizes each raw MR image to an a priori template. By incorporating priors in a mixture model clustering algorithm, posterior probabilities can thus be computed [36]. EMS [37] utilizes SPM for the initialization of raw MR images and applies an MRF to incorporate contextual information. The estimation algorithm employed by EMS is similar to the iterative conditional modes algorithm [16] except that the method produces soft rather than discrete labels.  To examine the performance of these methods with increasing partial volume effects, we again smoothed the images with a Gaussian kernel (1×1×7 dimensions) at varying standard deviations (1.0, 1.5 and 2.0) to incorporate intensities within ±3 mm. The results in Table 5.4 show that FAST, SPM and EMS all produce higher MSE values compared to our proposed approach in fuzzy classification. However, we note that the MSE values produced by FAST are very consistent across the entire data set tested. Surprisingly, the classification results for EMS are observed to improve gradually with increases in partial volume. The qualitative results in Figure 5.5 demonstrate that both FAST and SPM over-estimated the GM in this data, leaving only very little of the WM. As a result, the MSE values achieved by both methods are about two to three-fold worse than those achieved by frbHMM. Furthermore, EMS failed to initialize properly and the results appear to contain a smoothly varying field. The MSE values produced by EMS are about two to four-fold worse compared to those achieved by frbHMM. 
Specifically, the average 158  MSE improvement for frbHMM over SPM is MSE=0.0073 (p=0.0077) for WM, MSE=0.0068 (p=0.0279) for GM, and MSE=0.0066 (p=0.0137) for CSF. The average MSE improvement for frbHMM over FAST is MSE=0.0276 (p<0.0001) for WM, MSE=0.0264 (p<0.0001) for GM, and MSE=0.0269 (p<0.0001) for CSF. The average MSE improvement for frbHMM over EMS is MSE=0.0202 (p<0.0001) for WM, MSE=0.0572 (p<0.0001) for GM, and MSE=0.0403 (p<0.0001) for CSF.  Method  Data Tissue  WM Normal GM CSF SPM WM MS GM CSF WM Normal GM CSF FAST WM MS GM CSF WM Normal GM CSF EMS WM MS GM CSF WM Normal GM CSF frbHMM WM MS GM CSF  Original image (0% Noise) MSE T+ F+ .0100 77.17 0.04 .0139 95.62 3.12 .0100 81.01 1.44 .0087 78.88 0.08 .0132 95.56 2.99 .0087 93.89 1.58 .0354 50.98 0.00 .0405 93.42 5.72 .0354 94.13 1.09 .0377 47.69 0.00 .0439 93.65 6.13 .0377 93.89 1.06 .0340 63.56 3.02 .0857 53.63 8.01 .0504 16.77 2.97 .0364 62.42 3.20 .0914 48.52 7.82 .0514 13.81 2.90 .0069 92.37 1.03 .0119 88.68 1.63 .0056 87.95 0.84 .0069 90.28 0.73 .0121 86.94 1.78 .0081 88.94 1.44  Gaussian kernel std. dev. = 1.0 voxel MSE T+ F+ .0186 66.46 0.02 .0249 90.88 4.03 .0186 83.85 2.31 .0148 70.97 0.10 .0209 90.98 3.55 .0148 84.33 2.33 .0384 47.75 0.00 .0443 93.52 6.24 .0384 91.37 2.50 .0382 47.03 0.01 .0452 93.54 6.36 .0382 91.30 2.50 .0302 62.86 2.24 .0745 58.74 7.01 .0498 24.53 3.24 .0306 63.40 2.44 .0767 58.00 7.26 .0507 22.63 3.27 .0084 89.18 0.95 .0167 82.91 1.84 .0099 88.31 1.77 .0085 90.35 1.14 .0165 81.76 1.71 .0099 88.45 1.83  Gaussian kernel std. dev. = 1.5 voxel MSE T+ F+ .0125 74.67 0.09 .0202 88.03 3.08 .0125 70.39 1.72 .0152 70.59 0.11 .0235 88.13 3.57 .0152 80.00 2.46 .0373 48.93 0.02 .0442 93.88 6.31 .0373 88.10 3.08 .0361 49.31 0.02 .0441 93.93 6.31 .0361 88.02 3.07 .0269 63.18 1.56 .0682 62.57 6.31 .0499 30.21 3.43 .0268 63.58 1.69 .0686 62.70 6.46 .0508 29.58 3.52 .0106 88.47 1.28 .0199 78.68 1.89 .0116 87.55 2.10 .0103 88.91 1.31 .0198 78.40 1.84 .0118 87.56 2.16  Gaussian kernel std. dev. = 2.0 voxel MSE T+ F+ .0254 59.71 0.02 .0386 82.30 4.54 .0254 70.59 2.84 .0285 56.19 0.06 .0405 84.16 4.98 .0285 70.04 2.57 .0376 48.63 0.03 .0458 92.89 6.38 .0376 87.27 3.21 .0355 50.14 0.04 .0446 92.89 6.25 .0355 87.23 3.21 .0251 63.38 1.13 .0655 65.09 6.05 .0498 32.40 3.51 .0267 63.35 1.59 .0681 63.20 6.35 .0510 30.55 3.56 .0120 86.86 1.36 .0223 77.17 2.09 .0113 86.52 2.25 .0118 87.73 1.46 .0222 76.23 1.97 .0131 87.13 2.34  Table 5.4. Comparisons of effects of increasing partial volume on the MSE of fuzzy classifcations using state-of-the-art methods (FAST, SPM, EMS) and the proposed frbHMM on the simulated BrainWeb MRIs of normal and MS subject anatomical models. The estimated posterior probabilities were compared to the corresponding fuzzy phantoms. The better performance for each comparison is highlighted in bold. T+ and F+ are the true and false positive rates respectively.  159  frbHMM WM results frbHMM GM results (a) fuzzy phantoms  (b) FSL-HMRF  (c) SPM  (d) EMS-MRF  Figure 5.5. Experimental results produced by FSL FAST, SPM and EMS on simulated BrainWeb MRI scans smoothed by a Gaussian with standard deviation=2.0 (a) Fuzzy WM and GM phantoms (b) FSL FAST (c) SPM (d) EMS. The FSL and SPM results show an over-estimation in the GM, and the EMS results seem to be overly over-smoothed.  
5.3.2 Validation Using Real Clinical Brain MRIs

We tested our method on real clinical 3D T1-weighted MR scans of 18 relapsing-remitting MS (RRMS) patients and 14 healthy controls to demonstrate a potential clinical application of frbHMM and to further show its robustness to increased partial volume. The scans have dimensions of 256×256×120 and a voxel size of 0.837 mm×0.837 mm×1.10 mm. The intensity non-uniformity in these scans was first corrected using the N3 algorithm [38] before applying frbHMM.

We compared the segmentation performance of rbHMM and frbHMM on real clinical control and MS patient groups. With no ground truth available, in order to identify and evaluate any advantages of the proposed frbHMM approach, we examined the normal-appearing WM volume fraction (WMVF) measurements (5.11) produced by rbHMM and frbHMM, and compared the results between subject groups and classification methods.

\text{WMVF} = \frac{V_W}{V_B} \times 100\%   (5.11)

where V_W and V_B are the WM and intradural volumes, respectively. We targeted the WM as it is the largest tissue by volume and because lesions in the patient scans are more distinct there than in GM on T1-weighted scans and thus can be more readily excluded. We first demonstrate that both rbHMM and frbHMM perform comparably in terms of the WMVF measure when using the scans at full resolution. Table 5.5 shows the average WMVF obtained using the original scans. Both methods agree that the mean WMVF in the controls is significantly (p<0.05) higher than that of the RRMS patients (+3.53% for discrete, +3.11% for fuzzy), likely due to the presence of WM lesions and enlarged CSF spaces in the MS patients. Comparing the two methods within each subject group, no significant difference in WMVF is observed, demonstrating that both methods have similar performance for this application on the full resolution data.

| WMVF [%] | rbHMM | frbHMM | Δ (methods) | p-Value |
|---|---|---|---|---|
| Controls (C) | 42.57 (σ=1.66) | 41.98 (σ=1.43) | 0.58 | 0.33 |
| Patients (S) | 39.03 (σ=2.82) | 38.88 (σ=2.59) | 0.15 | 0.87 |
| Δ (groups) | 3.53 | 3.11 | | |
| p-Value | <0.05 | <0.05 | | |

Table 5.5. Quantitative average WMVF measurement on high resolution clinical MRI data. Note that both the rbHMM and frbHMM segmentation methods show consistent and significant differences between the control and the MS patient groups, while no significant differences were observed between the two methods for either group, which is expected in high resolution data.

5.3.2.1 Clinical MRIs: increased partial volume

Finally, we show that the proposed frbHMM performs far more accurately and stably with increased partial volume than rbHMM. Similar to our experiments with the simulated scans, we progressively increased both the through-slice and overall image partial volume of the clinical data by Gaussian smoothing with kernel dimensions of 1×1×7 and 7×7×7 and standard deviations of 1.0 and 2.0. Since ground truths are unavailable for this real data set, we compared classification results of the smoothed scans against those obtained using the original high resolution scans. Figure 5.6 shows an example of smoothing a clinical scan and the effect on discrete and fuzzy classification. The proposed frbHMM again displays greater accuracy than the discrete model, particularly in the ambiguous intensity regions (zoomed-in areas of Figure 5.6).

Figure 5.6. Example results on real 3D T1-weighted clinical MRI scans. (a) Original input scan (top) and with increased partial volume (bottom, std. dev.
Quantitatively, Table 5.6 shows the average MSE between the classification results performed on the original and smoothed scans of all of the control and MS subjects. The proposed frbHMM is consistently better than rbHMM, yielding an improvement of 15.82% to 35.89% in mean MSE, which is in agreement with our previous results on simulated brain MRI data sets. Table 5.6 also shows the mean square difference (MSD) in the WMVF values of the control and patient groups. With low partial volume effects, the MSD is similar for both methods tested. However, with increasing partial volume, frbHMM produces significantly lower MSD (49.10%-79.86%) than rbHMM, confirming its ability to perform with higher consistency when the signal at each voxel becomes more ambiguous.

              Smoothing in the through-slice direction only                Smoothing in all three dimensions
              std. dev. = 1.0 voxel        std. dev. = 2.0 voxel           std. dev. = 1.0 voxel        std. dev. = 2.0 voxel
Metric        rbHMM     frbHMM    % gain   rbHMM     frbHMM    % gain      rbHMM     frbHMM    % gain    rbHMM     frbHMM    % gain
MSE (C)       0.0086    0.0072    15.82    0.0145    0.0114    21.34       0.0159    0.0102    35.89     0.0250    0.0179    28.40
              σ=0.0008  σ=0.0006  p<0.05   σ=0.0015  σ=0.0011  p<0.05      σ=0.0018  σ=0.0010  p<0.05    σ=0.0027  σ=0.0022  p<0.05
MSE (S)       0.0090    0.0074    17.03    0.0156    0.0116    25.43       0.0155    0.0100    35.48     0.0244    0.0173    29.10
              σ=0.0010  σ=0.0010  p<0.05   σ=0.0019  σ=0.0012  p<0.05      σ=0.0015  σ=0.0008  p<0.05    σ=0.0022  σ=0.0016  p<0.05
MSD (C)       0.18      0.11      38.54    2.27      1.25      44.87       11.37     2.29      79.86     35.48     18.06     49.10
              σ=0.30    σ=0.16    p=0.44   σ=2.42    σ=1.56    p=0.20      σ=6.33    σ=3.10    p<0.05    σ=11.05   σ=10.30   p<0.05
MSD (S)       0.33      0.44      -32.30   6.70      1.08      84.31       6.72      1.43      78.72     22.71     8.55      62.35
              σ=0.34    σ=0.57    p=0.51   σ=13.36   σ=1.43    p=0.08      σ=5.97    σ=1.75    p<0.05    σ=13.74   σ=8.24    p<0.05

Table 5.6. Quantitative normal-appearing white matter segmentation comparisons on real clinical control (C) and patient (S) data with escalated partial volume effects. Classifications were performed using the discrete and fuzzy rbHMMs, and the results were compared to the segmentations obtained on the original unsmoothed scans. Both the mean MSE in the fuzzy labels and the mean MSD in WMVF are reported. The superior performance is highlighted in bold. Gain is defined as (rbHMM - frbHMM)/rbHMM. The results demonstrate that frbHMM is robust to increased partial volume on real clinical data.

5.3.3 Implementation Details

The experiments were carried out on a computer with a 3.0 GHz Xeon processor. For any given scan, the frbHMM method required approximately twice the runtime of rbHMM. The convergence criteria for all experiments were a maximum of 10 iterations or a minimum of 1E-6 in the mean change of each class label. Except where specified, all experiments in the Results section employed region refinement using z = 0.50 to maintain a reasonable performance vs. computational cost tradeoff.
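For illustration, the stopping rule above can be expressed as the following minimal sketch; update_posteriors is a hypothetical stand-in for one frbHMM posterior update sweep, not the actual implementation used in this work.

    import numpy as np

    def run_until_converged(update_posteriors, posteriors, max_iter=10, tol=1e-6):
        """Iterate a posterior update until the criteria of Section 5.3.3 are met.

        update_posteriors : callable returning the next set of class posteriors
                            (stand-in for one forward-backward sweep).
        posteriors        : array of per-region class posteriors, shape (R, K).
        """
        for _ in range(max_iter):                        # at most 10 iterations
            new_posteriors = update_posteriors(posteriors)
            mean_change = np.abs(new_posteriors - posteriors).mean(axis=0)
            posteriors = new_posteriors
            if np.all(mean_change < tol):                # mean change per class label
                break
        return posteriors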
5.4 CONCLUSIONS

We introduced a novel fuzzy 3D region-based hidden Markov model for modeling and estimating partial volume in MRI data within a highly efficient unsupervised framework. Our chapter's main contribution is a new fuzzy 3D HMM that is based on irregularly-shaped homogeneous image regions combined with spatial refinement at the region boundaries. In contrast to discrete segmentation approaches, the contents of each region are assigned to multiple underlying classes simultaneously rather than being assumed to have a single true discrete label. To compute the region class posteriors, we employ a classical iterative forward-backward scheme. We evaluated the classification accuracy and robustness of our method under increased partial volume effects using both simulated and real clinical brain MRI data of healthy controls and MS patients. The results demonstrate that our proposed frbHMM is highly robust under adverse noise levels and reduced image resolution. The frbHMM is also shown to be consistently superior to the discrete rbHMM and a number of other current methods in labeling intensity-ambiguous regions. Our future work will focus on investigating the utility of frbHMM for quantifying the extent of white matter disease with indistinct boundaries such as diffusely-abnormal white matter (DAWM).

5.5 BIBLIOGRAPHY

1. J. K. Baker, "The dragon system—An overview," in Proc. of International Conference on Acoustics, Speech, and Signal Processing, Feb. 1975, pp. 24–29.
2. L. Rabiner, B. H. Juang, Fundamentals of Speech Recognition, Englewood Cliffs, NJ: Prentice-Hall, 1993.
3. H. K. Lee, J. H. Kim, "An HMM-based threshold model approach for gesture recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(10):961-973, Oct. 1999.
4. S. Eickeler, S. Muller, G. Rigoll, "Improved face recognition using pseudo 2-D hidden Markov models," European Conference on Computer Vision Workshop on Advances in Facial Image Analysis and Recognition Technology, Freiburg, Germany, June 1998.
5. A. J. Viterbi, J. K. Omura, "Trellis encoding of memoryless discrete-time sources with a fidelity criterion," IEEE Transactions on Information Theory, IT-20:325–332, May 1974.
6. G. D. Forney, "The Viterbi algorithm," in Proc. of the IEEE, 61(3), March 1973, pp.268–278.
7. R. Durbin, S. Eddy, A. Krogh, G. Mitchison, Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids, Cambridge: Cambridge University Press, 1998.
8. C. M. Bishop, Pattern Recognition and Machine Learning, New York: Springer, 2006, pp.359-422.
9. J. Li, A. Najmi, R. M. Gray, "Image classification by a two dimensional hidden Markov model," IEEE Transactions on Signal Processing, 48(3):517-533, 2000.
10. M. Ibrahim, N. John, M. Kabuka, A. Younis, "Hidden Markov models-based 3D MRI brain segmentation," Image and Vision Computing, 24:1065-1079, 2006.
11. D. Joshi, J. Li, J. Z. Wang, "A computationally efficient approach to the estimation of two- and three-dimensional hidden Markov models," IEEE Transactions on Image Processing, 15(7):1871-1886, 2006.
12. A. Huang, R. Abugharbieh, R. Tam, "Image segmentation using an efficient rotationally invariant 3D region-based hidden Markov model," in Proc. of IEEE Computer Vision and Pattern Recognition Workshop on Mathematical Methods in Biomedical Image Analysis, 2008, pp.107-111.
13. A. Huang, R. Abugharbieh, R. Tam, "Image segmentation using an efficient rotationally invariant 3D region-based hidden Markov model," IEEE Transactions on Image Processing (Accepted, March 2010).
14. J. Shi, J. Malik, "Normalized cuts and image segmentation," in Proc. of IEEE Computer Vision and Pattern Recognition, 1997, pp.731-737.
15. Y. Zhang, M. Brady, S. Smith, "Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm," IEEE Transactions on Medical Imaging, 20(1):45-57, 2001.
16. J. Besag, "On the statistical analysis of dirty pictures (with discussion)," Journal of the Royal Statistical Society, Ser. B, 48(3):259–302, 1986.
17. N. Komodakis, G. Tziritas, N. Paragios, "Fast, approximately optimal solutions for single and dynamic MRFs," in Proc. of IEEE Computer Vision and Pattern Recognition, 2007, pp.1-8.
18. V. Kolmogorov, "Convergent tree-reweighted message passing for energy minimization," IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(10):1568-1583, 2006.
19. L. R. Rabiner, "An introduction to hidden Markov models," IEEE ASSP Magazine, pp.4-16, 1986.
20. P. Santago, H. D. Gage, "Quantification of MR brain images by mixture density and partial volume modeling," IEEE Transactions on Medical Imaging, 12(3):566-574, 1993.
21. K. Van Leemput, "A unifying framework for partial volume segmentation of brain MR images," IEEE Transactions on Medical Imaging, 22(1):105-119, 2003.
22. S. Bricq, Ch. Collet, J. P. Armspach, "Unifying framework for multimodal brain MRI segmentation based on hidden Markov chains," Medical Image Analysis, 12:639-652, 2008.
23. H. S. Choi, D. R. Haynor, Y. Kim, "Partial volume tissue classification of multichannel magnetic resonance images – a mixel model," IEEE Transactions on Medical Imaging, 10(3):395-407, 1991.
24. S. Ruan, C. Jaggi, J. Xue, J. Fadili, D. Bloyet, "Brain tissue classification of magnetic resonance images using partial volume modeling," IEEE Transactions on Medical Imaging, 19(12):1179-1187, 2000.
25. J. Tohka, A. Zijdenbos, A. Evans, "Fast and robust parameter estimation for statistical partial volume models in brain MRI," NeuroImage, 23(1):84-97, 2004.
26. D. W. Shattuck, S. R. Sandor-Leahy, K. A. Schaper, D. A. Rottenberg, R. M. Leahy, "Magnetic resonance image tissue classification using a partial volume model," NeuroImage, 13(5):856-876, May 2001.
27. X. Li, L. Li, "Partial volume segmentation of brain magnetic resonance images based on maximum a posteriori probability," Medical Physics, 23(7):2337-2345, 2005.
28. J. Domke, A. Karapurkar, Y. Aloimonos, "Who killed the directed model?" in Proc. of IEEE Computer Vision and Pattern Recognition, 2008, pp.1-8.
29. A. Huang, R. Abugharbieh, R. Tam, "A fuzzy region-based hidden Markov model for partial-volume classification in brain MRI," in Proc. of Medical Image Computing and Computer Assisted Intervention, 2009, pp.474-481.
30. S. Beucher, "Watersheds of functions and picture segmentation," in Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing, May 1982, pp.1928–1931.
31. L. E. Baum, "A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains," Annals of Mathematical Statistics, 41(1):164-171, 1970.
32. C. A. Cocosco, V. Kollokian, R. K.-S. Kwan, A. C. Evans, "BrainWeb: online interface to a 3D MRI simulated brain database," NeuroImage, 5(4, part 2/4):S425, 1997.
33. L. R. Dice, "Measures of the amount of ecologic association between species," Ecology, 26:297-302, 1945.
34. P. Jaccard, "Étude comparative de la distribution florale dans une portion des Alpes et des Jura," Bulletin de la Société Vaudoise des Sciences Naturelles, 37:547-579, 1901.
35. J. Ashburner and K. J. Friston, "Unified segmentation," NeuroImage, 26(3):839-851, July 2005.
36. J. Ashburner and K. J. Friston, "Multimodal image coregistration and partitioning – a unifying framework," NeuroImage, 6(3):209-217, October 1997.
37. K. Van Leemput, F. Maes, D. Vandermeulen, P. Suetens, "Automated model-based tissue classification of MR images of the brain," IEEE Transactions on Medical Imaging, 18(10):897-908, October 1999.
38. J. G. Sled, A. P. Zijdenbos, A. C. Evans, "A non-parametric method for automatic correction of intensity non-uniformity in MRI data," IEEE Transactions on Medical Imaging, 17:87-97, February 1998.

6. CONCLUSIONS

In this thesis, improved segmentation of brain tissues from magnetic resonance imaging (MRI) scans has been realized with both a deformable model (chapter 2) and contextual models (chapters 3 to 5). While the first of these chapters (chapter 2) proposed a highly accurate method integrating both geometric and statistical information as a novel hybrid contour feature for delineating white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF), it was computationally expensive. The next chapter (chapter 3) took a different step towards improving computational efficiency by applying a contextual statistical model to pre-delineated irregularly-shaped regions obtained using a geometric watershed transform. Results in chapter 3 demonstrated accuracy similar to that of chapter 2, but with significantly less computation time and improved noise resiliency, using this novel region-based hidden Markov model. Improved invariance to object rotations was also observed. Chapter 4 applied this new method to scans of MS patients to automatically and accurately delineate white matter lesions in a data-driven approach. Finally, chapter 5 made an even larger step towards improving segmentation accuracy by modeling the acquisition-related partial-volume limitation and solving for class posterior probabilities as fuzzy classifications.

6.1 A HYBRID GEOMETRIC-STATISTICAL DEFORMABLE MODEL FOR AUTOMATED 3D SEGMENTATION IN BRAIN MRI

6.1.1 Hypothesis

While a deformable model based on a multi-phase region-based active contour [1] suggested success in segmenting brain tissues from MRI head scans (Dice similarity index of 79% and 83% for WM and GM respectively), complexity and stability remained an issue [2]. On the other hand, an edge-based active contour [3] required additional constraints in the form of statistical regularization to improve stability, but its lower computational cost made it potentially suitable for an efficient implementation of automatic segmentation of complex brain tissues.

6.1.2 Significance

By employing an edge-based 3D active contour with a hybrid geometric-statistical regularization feature [4], highly accurate WM (Dice similarity=93.71%), GM (Dice similarity=92.59%), and CSF (Dice similarity=77.75%) segmentation was successfully achieved using T1-weighted MRI scans. Integrating T2-weighted and PD-weighted scans with T1-weighted scans in the segmentation framework yielded stability and accuracy improvements, but not significantly higher ones, as T1-weighted scans already provided the best soft-tissue intensity contrast amongst the available MRI modalities. Specifically, when segmenting using only the geometric feature, i.e. edge gradients, the WM, GM, and CSF similarities to the ground truth were significantly (p<0.05) lower. When segmenting using only the statistical feature, i.e. voxel posterior probability, the average distances of both the WM and CSF contours to the ground truth were significantly (p<0.05) higher.
In addition, when comparing to a region-based deformable model, the proposed method achieved higher WM and GM segmentation accuracy. These results suggested the effectiveness of the proposed hybrid feature [5].

6.1.3 Strengths and Weaknesses

A strength of this study was that the added statistical feature enabled accurate and stable segmentation of complex brain anatomical structures, as highlighted above.

A weakness of this study was that noise filtering was required to enhance the noise resiliency of the proposed method. Without noise filtering, the contour needed to be individually fine-tuned and regularized by enforcing curvature strengths, which was undesirable and subjective. Without adding curvature regularization, the evolution of the contour depended solely on the accuracy of the statistical estimation and the presence of geometric features, which, in the presence of noise, were less precise. A second weakness of this study was that the computation time required for such a deformable model-based method was very long. Furthermore, no contour competition was modeled in this work, as each contour was evolved individually.

6.1.4 Potential Applications and Future Work

The proposed method is applicable to segmenting brain tissues from high resolution T1-weighted MRI. By doing so, automatically evaluating brain tissue/whole brain volumes may be possible, especially for scans with higher signal-to-noise ratios, where noise filtering will not be required.

Furthermore, it may be useful to enhance the statistical model and model estimation with nonparametric and partial volume models such that the posterior probabilities can be more accurately represented. Extending the proposed method to the segmentation of clinical anomalies such as lesions or tumours may also be investigated by incorporating additional imaging modalities that enhance the intensities of these pathological regions. Lastly, contour propagation may be sped up for improved practicality.

6.2 A NOVEL ROTATIONALLY-INVARIANT REGION-BASED HIDDEN MARKOV MODEL FOR EFFICIENT 3D IMAGE SEGMENTATION

6.2.1 Hypothesis

While employing statistical features in a geometric deformable model provided evidence of success for the MRI brain tissue segmentation task, less parameter dependency, higher noise resiliency, and better computational efficiency were desirable to further improve the robustness and automation of brain tissue segmentation across a wider variety of scans. A classical hidden Markov model (HMM) [6] was known to provide maximum a posteriori (MAP) estimates [7] of the underlying truth when the observation is corrupted by noise, as was shown by Markov random field (MRF) techniques [8]. Rather than employing an MRF model, owing to its parameterization and high complexity [9], a high dimensional HMM with an efficient parameter estimation scheme [10] would potentially allow for exact inference with improved efficiency. Rotational invariance could also be improved by utilizing an irregularly-shaped region-based representation rather than a traditional rectangular grid-based estimation.

6.2.2 Significance

Accurate segmentation of WM (Dice similarity=92.99%) and GM (Dice similarity=92.27%) in brain MRI images was successfully demonstrated using the proposed region-based hidden Markov model (rbHMM) [11]. The robust performance was clearly observed and validated across various noise levels in synthetic shapes, and in both simulated and real T1-weighted scans.
In addition, the segmentation results were invariant to object rotation, as demonstrated, and the proposed method was computationally efficient. All of these suggested that the overall robustness of the proposed region-based model and estimation algorithm in segmenting WM and GM tissues from brain MRI scans was significantly (p<0.05) improved over the traditional grid-based model [12].

6.2.3 Strengths and Weaknesses

Modeling brain MRI scans as a region-based HMM allowed for exact and efficient MAP estimation for each region, unlike an MRF model, where approximations were often used. In addition, the estimation process was fully automatic and data-driven, and no weighting parameters or training were required for the segmentation task. The tree-structured parameter estimation procedure also allowed for improved rotational invariance over the traditional fixed scan-line ordering scheme.

A weakness of this work was that all experiments were carried out using only a single T1-weighted MRI modality. Thus, segmenting anomalies such as MS lesions, which is usually carried out using a combination of T2-weighted and PD-weighted scans, would not be possible at this stage. A second weakness was that the segmentation approach assumed a global intensity distribution as modeled by a Gaussian mixture model; thus, segmentation results might not be optimal for individual sub-regions/lobes.

6.2.4 Potential Applications and Future Work

This unsupervised segmentation approach may now be applied to a wide variety of scans, including higher field scans in the future and lower field scans from past studies, to study volumetric changes in the brain tissues of normal adults and patients suffering from neurodegenerative diseases.

It may be useful to further extend the estimation procedure to include a partial-volume model to better account for partial contributions of tissues to image intensity features. Additional modalities may also be employed for segmenting other clinical regions of interest such as lesions. Other Markov neighborhood relationships may be investigated such that mid-range or long-range information can be incorporated in addition to first-order neighbours. Multiple states or classes may also be used to model a single tissue type, or multiple Gaussian mixture models may be used to model different brain regions for an alternative model representation of the observed data.

6.3 MULTI-CHANNEL WHITE MATTER LESION SEGMENTATION IN MULTIPLE SCLEROSIS USING REGION-BASED HIDDEN MARKOV MODELING

6.3.1 Hypothesis

Many of the state-of-the-art methods relied on examining the different intensity profiles of lesions and normal tissues, whereas others incorporated additional contextual information [13]. However, most methods required either training or registration to atlases. As rbHMM had been shown to be robust to noise and object rotations in the segmentation task, extending such a model to incorporate multi-channel MRI data could potentially allow us to automate this lesion segmentation task in a data-driven approach.

6.3.2 Significance

Using clinical MRI of T2-weighted and fluid-attenuated inversion recovery (FLAIR) scans, successful segmentation of white matter lesions was demonstrated [14]. Compared to lesions delineated by two expert human raters, evaluations were performed on four metrics: lesion volume difference, average surface distance, true positive rate, and false positive rate.
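For reference, these four metrics can be computed along the lines of the following minimal sketch, assuming binary lesion masks on a common, isotropic voxel grid; this is an illustrative simplification (voxel-wise rates, unweighted distances) rather than the exact scoring protocol of the challenge framework [13].

    import numpy as np
    from scipy.ndimage import binary_erosion, distance_transform_edt

    def border(mask):
        # Boundary voxels of a binary mask.
        return mask & ~binary_erosion(mask)

    def lesion_metrics(seg, ref, voxel_volume=1.0):
        """Compare a binary lesion segmentation (seg) to a reference rater mask (ref)."""
        vol_diff = abs(int(seg.sum()) - int(ref.sum())) * voxel_volume

        # Average symmetric surface distance between the two lesion boundaries.
        d_to_ref = distance_transform_edt(~border(ref))
        d_to_seg = distance_transform_edt(~border(seg))
        asd = 0.5 * (d_to_ref[border(seg)].mean() + d_to_seg[border(ref)].mean())

        # Simple voxel-wise true and false positive rates, in percent.
        tpr = 100.0 * (seg & ref).sum() / max(ref.sum(), 1)
        fpr = 100.0 * (seg & ~ref).sum() / max(seg.sum(), 1)
        return vol_diff, asd, tpr, fpr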
When using a reference score of 90 to represent the variability between two expert human raters, the proposed method achieved an average score of 86 (standard deviation = 6). This demonstrated that the segmented lesions were in close agreement with the experts. In addition, when comparing to a state-of-the-art method (score = 82, standard deviation = 13) [15], not only did the proposed method achieve higher accuracy, but its overall robustness was also improved, as a lower standard deviation was observed.

6.3.3 Strengths and Weaknesses

The proposed method was able to successfully capture small lesions (average volume = 1.73 mL, average lesion load = 0.16%) using a fully-automatic and data-driven estimation process.

A weakness of this work was that the experiment was done on only ten clinical scans due to the lack of expert rater segmentation masks for comparisons. Furthermore, because of the small and confounding lesion intensity distributions in clinical MRI scans, a global segmentation approach such as the one proposed here required a post-segmentation region selection procedure to further regularize the segmentation results.

6.3.4 Potential Applications and Future Work

This unsupervised multi-channel lesion segmentation method can provide an automatic and systematic approach to delineating small white matter lesions without a priori training or atlases. It may also be applicable to other neurological disorders such as migraine or dementia, where the study of lesions is of interest. In addition, various lesion subtypes, such as chronic black holes, may be identified by employing additional image modalities or hierarchical applications of the proposed method.

Future work may include recruiting more expert raters such that the rater variability can be better represented, and performing additional evaluation or alternative modeling/regularization of other clinical scans.

6.4 A FUZZY REGION-BASED HIDDEN MARKOV MODEL FOR 3D PARTIAL-VOLUME CLASSIFICATION

6.4.1 Hypothesis

As the previously proposed rbHMM [11, 12] had been shown to be robust in the discrete segmentation of brain tissues in MRI scans, accuracy could be further improved by incorporating partial-volume estimation to model this common physical imaging limitation. Thus, rather than assuming a single discrete class, a region was represented as a combination of numerous classes, similar to a mixel model [16]. The classical forward-backward parameter estimation scheme [17] was well suited for such an extension of the rbHMM framework, as all posterior probabilities could be simultaneously estimated without making any discrete class label assumption.

6.4.2 Significance

Improved segmentation accuracy of WM, GM, and CSF in brain MRI images was successfully demonstrated by comparing the proposed fuzzy region-based hidden Markov model (frbHMM) with the discrete rbHMM [18]. The robust performance was clearly observed and validated across various noise levels (30.73% reduction in MSE) and degrees of partial volume effects (39.51% reduction in MSE) in simulated T1-weighted scans. In addition, white matter volumes in clinical control and relapsing-remitting multiple sclerosis (RRMS) scans were much more consistently estimated under increased partial volume effects.
All of these experiments suggested that the overall performance of the proposed fuzzy region-based model and estimation algorithm was significantly (p<0.05) improved over the discrete region-based model [19] and other state-of-the-art methods (p<0.05) in segmenting tissues from brain MRI scans.

6.4.3 Strengths and Weaknesses

Modeling brain MRI scans as a fuzzy region-based HMM provided exact computation of class posterior probabilities for each region without any discrete class assumption. The estimation process was fully automatic and data-driven, and no weighting parameter or training was required for the segmentation task.

A weakness of this work was that the computation time was approximately twice that of the discrete segmentation method. This was due to the two-pass nature of the forward-backward algorithm employed in the parameter estimation step. However, this increase in computation time resulted in a consistent accuracy gain (approximately 40% in MSE) when comparing to the discrete results.

6.4.4 Potential Applications and Future Work

This unsupervised fuzzy segmentation approach may now be applied to a wide variety of scans for improved accuracy in estimating brain tissue volumes, using both higher resolution scans in the future and lower resolution scans from past studies. Together with the improved noise resiliency demonstrated previously, the overall segmentation approach enables efficient and accurate segmentation that is robust to variations in both resolution and signal-to-noise levels.

It may be useful to explore modeling multiple states per tissue class to increase the overall model-data likelihood and potentially improve the overall segmentation accuracy. The estimation procedure may also be extended to include additional modalities to specifically study pathological regions with diffuse intensity signals using conventional MRI. This may provide additional information to studies of pathological progression, as quantification of diffuse signal regions, which is currently often performed with quantitative MRI, is not possible with human raters using conventional MRI scans.

6.5 THESIS CONTRIBUTIONS

The thesis contributions are summarized as follows:

• We presented a novel 3D deformable model-based approach for accurate, robust, and automated tissue segmentation of brain MRI data of single as well as multiple MR sequences. The main contribution of this work was that we employed an edge-based geodesic active contour for the segmentation task by integrating both image edge geometry and voxel statistical homogeneity into a novel hybrid geometric-statistical feature to regularize contour convergence and extract complex anatomical structures.

• We presented a novel 3D region-based hidden Markov model, or rbHMM, for efficient and unsupervised 3D image segmentation. Our contribution was two-fold: first, rbHMM employed a more efficient representation of the image data than current state-of-the-art HMM-based approaches, which were based on either voxels or rectangular lattices/grids, thus resulting in a faster optimization process. Second, our proposed novel tree-structured parameter estimation algorithm for the rbHMM provided a locally optimal data labeling that is invariant to object rotation, a highly valuable property in segmentation tasks, especially in medical imaging, where the segmentation results need to be independent of patient positioning in scanners in order to minimize methodological variability in data analysis.
• We proposed an automated segmentation method for accurate extraction of white matter lesions from MRI data. The specific application presented in this work was MS lesions. Our proposed multi-channel lesion segmentation technique built on the rbHMM. A key advantage of our approach was that parameter estimation was data-driven and did not require a priori atlases or training. This was achieved by initializing the model with many more states than the actual number of target tissues, which allowed for greater sensitivity to small lesion loads than initialization based on clustering to a small number of states. After convergence of the model, a rule-based system that incorporated intensity and spatial context was used to determine which of the resulting region and state combinations were most likely to represent lesions.

• We presented a novel 3D fuzzy rbHMM, or frbHMM, for unsupervised 3D partial-volume classification of brain MRIs. Our primary contribution was an efficient graphical representation of 3D image data in which irregularly-shaped image regions had memberships to a number of classes rather than to a single discrete class. Our model grouped voxels into regions for efficient processing, but also refined region boundaries to the voxel level for increased accuracy. The proposed strategy was very effective in data where challenging intensity ambiguities exist due to partial-volume effects resulting from resolution-limited image acquisition. Our frbHMM employed a forward-backward scheme for parameter estimation through iterative computation of region class posteriors.

6.6 RELATED WORK

Other publications authored or co-authored that relate (directly or indirectly) to the work presented in this thesis are detailed below:

• [20-23] These studies comprised investigations of applying active contour models to the extraction of the brain cortex from T1-weighted MRI scans. By employing a combination of optimal threshold selection, connected component analysis, and morphological operations, accurate brain masks were derived from scans of both simulated and real normal adult brains.

6.7 BIBLIOGRAPHY

1. E. Angelini, T. Song, and A. Laine, "Homogeneity measures for multiphase level set segmentation of brain MRI," in Proc. of IEEE International Symposium on Biomedical Imaging, 2006, pp.746-749.
2. M. Jeon, M. Alexander, N. Pizzi, and W. Pedrycz, "Unsupervised hierarchical multi-scale image segmentation: level set, wavelet and additive splitting operator," Pattern Recognition Letters, 26:1461-1469, 2005.
3. V. Caselles, R. Kimmel, and G. Sapiro, "Geodesic active contours," International Journal of Computer Vision, 22(1):61–79, 1997.
4. A. Huang, R. Abugharbieh, R. Tam, A. Traboulsee, "Automatic MRI brain tissue segmentation using a hybrid statistical and geometric model," in Proc. of IEEE International Symposium on Biomedical Imaging: Nano to Micro, 2006, pp.394-397.
5. A. Huang, R. Abugharbieh, R. Tam, "A hybrid geometric-statistical deformable model for automated 3-D segmentation in brain MRI," IEEE Transactions on Biomedical Engineering, 54(7):1838-1848, July 2009.
6. L. E. Baum, "An inequality and associated maximization technique in statistical estimation for probabilistic functions of finite state Markov chains," Inequalities, 3:1-8, 1972.
7. A. J. Viterbi, J. K. Omura, "Trellis encoding of memoryless discrete-time sources with a fidelity criterion," IEEE Transactions on Information Theory, IT-20:325–332, May 1974.
8. S. Geman, D. Geman, "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images," IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741, Nov. 1984.
9. S. Z. Li, "Markov random field models in computer vision," in Proc. of European Conference on Computer Vision, Stockholm, Sweden, May 1994, pp.361-370.
10. D. Joshi, J. Li, J. Z. Wang, "A computationally efficient approach to the estimation of two- and three-dimensional hidden Markov models," IEEE Transactions on Image Processing, 15(7):1871-1886, 2006.
11. A. Huang, R. Abugharbieh, R. Tam, "Image segmentation using an efficient rotationally invariant 3D region-based hidden Markov model," in Proc. of IEEE Computer Vision and Pattern Recognition Workshop on Mathematical Methods in Biomedical Image Analysis, Alaska, USA, 2008, pp.1-8.
12. A. Huang, R. Abugharbieh, R. Tam, "A novel rotationally-invariant region-based hidden Markov model for efficient 3D image segmentation," IEEE Transactions on Image Processing (Accepted, March 2010).
13. M. Styner, J. Lee, B. Chin, M. S. Chin, O. Commonwick, H.-H. Tran, V. Jewells, S. Warfield, "3D segmentation in the clinic: a grand challenge II: MS lesion segmentation," MS Lesion Segmentation Challenge Workshop in Medical Image Computing and Computer Assisted Intervention (MICCAI), 2008, id.638.
14. A. Huang, R. Abugharbieh, R. Tam, "Multi-channel white matter lesion segmentation in multiple sclerosis using region-based hidden Markov model," (Submitted, February 2010).
15. N. Shiee, P.-L. Bazin, D. L. Pham, "Multiple sclerosis lesion segmentation using statistical and topological atlases," MS Lesion Segmentation Challenge Workshop in Medical Image Computing and Computer Assisted Intervention (MICCAI), 2008, id.605.
16. H. S. Choi, D. R. Haynor, Y. Kim, "Partial volume tissue classification of multichannel magnetic resonance images – a mixel model," IEEE Transactions on Medical Imaging, 10(3):395-407, 1991.
17. L. R. Rabiner, "An introduction to hidden Markov models," IEEE ASSP Magazine, pp.4-16, 1986.
18. A. Huang, R. Abugharbieh, R. Tam, "A fuzzy region-based hidden Markov model for partial-volume classification in brain MRI," in Proc. of Medical Image Computing and Computer Assisted Intervention, 2009, pp.474-481.
19. A. Huang, R. Abugharbieh, R. Tam, "A fuzzy region-based hidden Markov model for partial-volume classification in brain MRI," (Submitted, January 2010).
20. A. Huang, R. Abugharbieh, R. Tam, A. Traboulsee, "Brain extraction using geodesic active contours," in Proc. of SPIE Medical Imaging, 7(2), 2006, pp.61444J.1-61444J.8.
21. A. Huang, R. Abugharbieh, R. Tam, A. Traboulsee, "Automatic brain extraction of T1-weighted MRI," The American Roentgen Ray Society 106th Annual Meeting, 2006.
22. A. Huang, R. Abugharbieh, R. Tam, A. Traboulsee, "Automatic cortex extraction and gray/white matter segmentation from T1-weighted MRI with geodesic active contours," in Proc. of the International Society for Magnetic Resonance in Medicine 14th Scientific Meeting, 2006, pp.1605.
23. A. Huang, R. Abugharbieh, R. Tam, A. Traboulsee, "MRI brain extraction with combined expectation-maximization and geodesic active contour," in Proc. of IEEE International Symposium on Signal Processing, 2006, pp.107-111.

APPENDIX A. LIST OF PUBLICATIONS

Journal Manuscripts (Peer-Reviewed):

1. A. Huang, R. Abugharbieh, R. Tam, "Multi-channel white matter lesion segmentation in multiple sclerosis using region-based hidden Markov modeling," (Submitted, February 2010).
2. A. Huang, R. Abugharbieh, R. Tam, "A fuzzy region-based hidden Markov model for partial-volume classification in brain MRI," (Submitted, January 2010).
3. A. Huang, R. Abugharbieh, R. Tam, "A novel rotationally-invariant region-based hidden Markov model for efficient 3D image segmentation," IEEE Transactions on Image Processing, (Accepted, March 2010).
4. A. Huang, R. Abugharbieh, R. Tam, "A hybrid geometric-statistical deformable model for automated 3-D segmentation in brain MRI," IEEE Transactions on Biomedical Engineering, 54(7):1838-1848, July 2009.

Conference Papers (Peer-Reviewed):

1. A. Huang, R. Abugharbieh, R. Tam, "A fuzzy region-based hidden Markov model for partial-volume classification in brain MRI," in Proc. of Medical Image Computing and Computer Assisted Intervention, 2009, pp.474-481.
2. A. Huang, R. Abugharbieh, R. Tam, "Image segmentation using an efficient rotationally invariant 3D region-based hidden Markov model," in Proc. of IEEE Computer Vision and Pattern Recognition Workshop on Mathematical Methods in Biomedical Image Analysis, 2008, pp.1-8.
3. A. Huang, R. Abugharbieh, R. Tam, A. Traboulsee, "MRI brain extraction with combined expectation-maximization and geodesic active contour," in Proc. of IEEE International Symposium on Signal Processing, August 2006, pp.107-111.
4. A. Huang, R. Abugharbieh, R. Tam, A. Traboulsee, "Automatic MRI brain tissue segmentation using a hybrid statistical and geometric model," in Proc. of IEEE International Symposium on Biomedical Imaging: Nano to Micro, 2006, pp.394-397.
5. A. Huang, R. Abugharbieh, R. Tam, A. Traboulsee, "Brain extraction using geodesic active contours," in Proc. of SPIE Medical Imaging, 2006, pp.61444J.1-61444J.8.

Conference Abstracts (Peer-Reviewed):

1. A. Huang, R. Abugharbieh, R. Tam, A. Traboulsee, "Automatic brain extraction of T1-weighted MRI," The American Roentgen Ray Society 106th Annual Meeting, Vancouver, BC, Canada, 2006.
2. A. Huang, R. Abugharbieh, R. Tam, A. Traboulsee, "Automatic cortex extraction and gray/white matter segmentation from T1-weighted MRI with geodesic active contours," in Proc. of the International Society for Magnetic Resonance in Medicine 14th Scientific Meeting, vol.14, 2006, pp.1605.
