Statistical models of the spine for image analysis and image-guided interventions. Rasoulian, Abtin (2014).


Statistical Models of the Spine forImage Analysis and Image-guidedInterventionsbyAbtin RasoulianB.Sc., Sharif University of Technology, 2007M.Sc., Technische Universitaet Muenchen, 2009A THESIS SUBMITTED IN PARTIAL FULFILLMENT OFTHE REQUIREMENTS FOR THE DEGREE OFDOCTOR OF PHILOSOPHYinThe Faculty of Graduate and Postdoctoral Studies(Electrical and Computer Engineering)THE UNIVERSITY OF BRITISH COLUMBIA(Vancouver)January 2014c? Abtin Rasoulian 2014AbstractThe blind placement of an epidural needle is among the most difficult re-gional anesthetic techniques. The challenge is to insert the needle in themidline plane of the spine and to avoid overshooting the needle into thespinal cord. Prepuncture 2D ultrasound scanning has been introduced asa reliable tool to localize the target and facilitate epidural needle place-ment. Ideally, real-time ultrasound should be used during needle insertionto monitor the progress of needle towards the target epidural space. How-ever, several issues inhibit the use of standard 2D ultrasound, including theobstruction of the puncture site by the ultrasound probe, low visibility of thetarget in ultrasound images of the midline plane, and increased pain due toa longer needle trajectory. An alternative is to use 3D ultrasound imaging,where the needle and target could be visible within the same reslice of a 3Dvolume; however, novice ultrasound users (i.e., many anesthesiologists) havedifficulty interpreting ultrasound images of the spine and identifying the tar-get epidural space. In this thesis, I propose techniques that are utilized foraugmentation of 3D ultrasound images with a model of the vertebral column.Such models can be pre-operatively generated by extracting the vertebraefrom various imaging modalities such as Computed Tomography (CT) orMagnetic Resonance Imaging (MRI). However, these images may not beobtainable (such as in obstetrics), or involve ionizing radiation. Hence, theuse of Statistical Shape Models (SSM) of the vertebrae is a reasonable al-ternative to pre-operative images. My techniques include construction ofa statistical model of vertebrae and its registration to ultrasound images.The model is validated against CT images of 56 patients by evaluating theregistration accuracy. The feasibility of the model is also demonstrated viaregistration to 64 in vivo ultrasound volumes.iiPrefaceThis thesis is primarily based on four manuscripts, resulting from the collab-oration between multiple researchers. All publications have been modifiedto make the thesis coherent. Ethical approval for conducting this researchhas been provided by the Clinical Research Ethics Board, certificate num-bers: H11-02594, H09-01423, H07-01691, and SCOMP-003-07.A study described in Chapter 1 has been published in:? Abtin Rasoulian, Jens Lohser, Mohammad Najafi, Hedyeh Rafii-Tari,Denis Tran, Allaudin A. Kamani, Victoria A. Lessoway, Purang Abol-maesumi, and Robert N. Rohling. Utility of prepuncture ultrasoundfor localization of the thoracic epidural space. Canadian Journal ofAnesthesia, 58(9):815-823, 2011.The contribution of the author was in developing, implementing, and evalu-ating the method. Dr. Lohser helped with development of the study protocoland was the medical team leader for the patient study. Dr. Kamani was themedical coordinator of pre-study testing in laboratory. M. Najafi, D. Tran,and H. Rafii Tarii helped develop the patient study protocol. Profs. 
Rohlingand Abolmaesumi helped with their valuable suggestions in improving themethodology.All co-authors contributed to the editing of the manuscript.A version of Chapters 2 and 3 has been published in:? Abtin Rasoulian, Robert N. Rohling, and Purang Abolmaesumi. Group-wise registration of point sets for statistical shape models. IEEETransactions on Medical Imaging, 31(11):2025-2034, 2012.The contribution of the author was in developing, implementing, and eval-uating the method. Profs. Rohling and Abolmaesumi helped with theirvaluable suggestions in improving the methodology.All co-authors contributed to the editing of the manuscript.A version of Chapters 4 and 5 has been accepted for publication in:iiiPreface? Abtin Rasoulian, Robert N. Rohling, and Purang Abolmaesumi. Lum-bar spine segmentation using a statistical multi-vertebrae anatomicalshape+pose model. IEEE Transactions on Medical Imaging.The contribution of the author was in developing, implementing, and eval-uating the method. Profs. Rohling and Abolmaesumi helped with theirvaluable suggestions in improving the methodology.All co-authors contributed to the editing of the manuscript.A version of Chapter 7 has been publication in:? Abtin Rasoulian, Robert N. Rohling, and Purang Abolmaesumi. Aug-mentation of Paramedian 3D Ultrasound Images of the Spine. InProceedings of Information Processing in Computer-Assisted Inter-ventions (IPCAI), pages 51-60, 2013.The contribution of the author was in developing, implementing, and eval-uating the method. Profs. Rohling and Abolmaesumi helped with theirvaluable suggestions in improving the methodology.All co-authors contributed to the editing of the manuscript.The following publication was from a study to learn the aspects of the clini-cal problem and its results are used in the development of many of the ideasin this thesis:? Abtin Rasoulia, Purang Abolmaesumi, and Parvin Mousavi. Feature-based multibody rigid registration of CT and ultrasound images oflumbar spine. Medical Physics, 39:31-54, 2012.The contribution of the author was developing, implementing, and evalu-ating the method. Dr. Abolmaesumi and Dr. Mousavi helped with theirvaluable suggestions in improving the methodology.All co-authors contributed to the editing of the manuscript.ivTable of ContentsAbstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iiPreface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iiiTable of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . vList of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ixList of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xList of Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . xvAcknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.1 Clinical background . . . . . . . . . . . . . . . . . . . . . . . 11.1.1 Epidural anesthesia and blind needle placement usingloss-of-resistance . . . . . . . . . . . . . . . . . . . . . 11.1.2 Complications associated with this procedure and suc-cess rate . . . . . . . . . . . . . . . . . . . . . . . . . 51.1.3 Need for image guidance . . . . . . . . . . . . . . . . 61.1.4 Ultrasound in epidural anesthesia: methods, outcomes,and limitations . . . . . . . . . . . . . . . . . . . . . . 71.2 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101.3 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . 
121.4 Thesis outline . . . . . . . . . . . . . . . . . . . . . . . . . . 132 Construction of a single object statistical shape model . . 152.1 Introduction and background . . . . . . . . . . . . . . . . . . 152.2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172.2.1 The group-wise GMM-based registration algorithm . 172.2.2 Statistical shape model . . . . . . . . . . . . . . . . . 222.3 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . 22vTable of Contents2.3.1 Objective metrics for evaluating statistical shape mod-els . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232.3.2 Group-wise GMM-based registration algorithm param-eters . . . . . . . . . . . . . . . . . . . . . . . . . . . 242.3.3 Comparison to other methods . . . . . . . . . . . . . 242.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282.4.1 Group-wise GMM-based registration algorithm param-eters . . . . . . . . . . . . . . . . . . . . . . . . . . . 282.4.2 Comparison to other methods . . . . . . . . . . . . . 292.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313 Registration of a single object SSM to a target image . . 323.1 Introduction and background . . . . . . . . . . . . . . . . . . 323.2 Registering the model to a new subject . . . . . . . . . . . . 333.2.1 Registration to ultrasound volumes . . . . . . . . . . 353.3 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . 353.3.1 Registration to ultrasound volumes . . . . . . . . . . 363.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363.4.1 Registration to ultrasound volume . . . . . . . . . . . 373.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384 Generation of statistical multi-object shape+pose model . 414.1 Introduction and background . . . . . . . . . . . . . . . . . . 414.2 Lie groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434.2.1 Manifolds . . . . . . . . . . . . . . . . . . . . . . . . . 434.2.2 Principal geodesic analysis . . . . . . . . . . . . . . . 444.2.3 Lie groups and similarity transformations . . . . . . . 454.2.4 PGA for shape analysis . . . . . . . . . . . . . . . . . 474.3 Statistical multi-object shape+pose model . . . . . . . . . . 484.3.1 Model construction . . . . . . . . . . . . . . . . . . . 484.3.2 Model instantiation . . . . . . . . . . . . . . . . . . . 504.4 Experiments and results . . . . . . . . . . . . . . . . . . . . . 514.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535 Segmentation of CT images using statistical multi-objectmodel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555.1 Introduction and background . . . . . . . . . . . . . . . . . . 555.2 General description of the registration method . . . . . . . . 575.3 Experiment and results . . . . . . . . . . . . . . . . . . . . . 59viTable of Contents5.3.1 Validation criteria . . . . . . . . . . . . . . . . . . . . 595.3.2 Segmentation of CT images using the multi-vertebraemodel . . . . . . . . . . . . . . . . . . . . . . . . . . . 605.3.3 Selection of the trade-off variables of the registrationalgorithm . . . . . . . . . . . . . . . . . . . . . . . . . 605.3.4 Segmentation accuracy . . . . . . . . . . . . . . . . . 625.3.5 Capture range . . . . . . . . . . . . . . . . . . . . . . 645.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . 686 Automatic labeling of CT images . . . . . . . . . . . . . . . . 696.1 Introduction and background . . . . . . . . . . . . . . . . . . 696.2 Labeling technique . . . . . . . . . . . . . . . . . . . . . . . . 716.2.1 Construction of the generic statistical n-vertebra model716.2.2 Labeling algorithm . . . . . . . . . . . . . . . . . . . 736.2.3 Labeling and segmentation of the entire vertebral col-umn . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736.3 Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746.3.1 Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 746.3.2 Validation criteria . . . . . . . . . . . . . . . . . . . . 746.4 Experiments and results . . . . . . . . . . . . . . . . . . . . . 746.4.1 Generic 3-vertebrae model . . . . . . . . . . . . . . . 746.4.2 Labeling and segmentation of the entire vertebral col-umn . . . . . . . . . . . . . . . . . . . . . . . . . . . . 766.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787 Registration of a statistical multi-object model to ultra-sound images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 797.1 Introduction and background . . . . . . . . . . . . . . . . . . 797.2 Registration technique . . . . . . . . . . . . . . . . . . . . . . 817.2.1 Enhancement of the ultrasound images . . . . . . . . 817.2.2 Registration of the model to the ultrasound images . 817.3 Experiments and results . . . . . . . . . . . . . . . . . . . . . 827.3.1 Synthetic data . . . . . . . . . . . . . . . . . . . . . . 827.3.2 In vivo data . . . . . . . . . . . . . . . . . . . . . . . 837.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 867.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87viiTable of Contents8 Conclusion and future work . . . . . . . . . . . . . . . . . . . 888.1 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . 888.2 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . 90Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92viiiList of Tables4.1 Variables used in the text together with their description. . . 484.2 Pair-wise correlation between shapes of lumbar vertebrae. . . 535.1 Values selected for the trade-off variables of the registrationof the multi-vertebrae model to the CT images. . . . . . . . 625.2 Segmentation result demonstrated by the distances betweenthe fitted model and the manual segmentation. Mean, RMSand maximum (referred to as Hausdorff) of these distancesare presented. . . . . . . . . . . . . . . . . . . . . . . . . . . . 655.3 Mean error (?standard deviation) of the registration in dif-ferent regions for each vertebrae (SP, spinous process; TP,transverse processes; AP, articular processes; VB, vertebralbody). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656.1 Accuracy of the segmentation (in mm) for each vertebra. . . 777.1 The registration error of the model to the paramedian volume.Values are given as mean?std (mm). . . . . . . . . . . . . . . 857.2 Error between the two mid-sagittal planes, one estimatedfrom the registered model, and one extracted from the cen-tered 3D ultrasound volume. Distance is defined as the dis-tance between two planes at the epidural depth. . . . . . . . 86ixList of Figures1.1 a) Shape of the vertebrae varies between different levels. 
b)Transverse view of the vertebra and epidural needle injection.A midline insertion passes through skin, subcutaneous fat, su-perspinous ligament, interspinous ligament, and ligamentumflavum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.2 Localization of the interspaces by palpation. The superior as-pect of the iliac crest (highlighted with blue circle) can be usedto localize the L5 vertebrae (highlighted with yellow circle).The inferior angle of scapula (highlighted with red circle) canbe used to localize the T7 vertebrae (highlighted with greencircle). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31.3 Steps of a epidural needle insertion. a) Injection of local anes-thesia to numb the skin, b) insertion of the epidural needle, c)loss-of-resistance to confirm the location of the epidural space,and d) insertion of a catheter to deliver anesthesia. From:The New York School of Regional Anesthesia (NYSORA)(http://www.nysora.com). . . . . . . . . . . . . . . . . . . . . 41.4 Entrance of the Tuohy needle into the epidural space. Theneedle tip has a curve to prevent accidental dural punctureand to facilitate passage of the catheter up the spinal canal. . 51.5 Needle angle for epidural insertion in different levels. Theneedle is angled in lower thoracic insertion (b), whereas itis less angled in lumbar (a) and upper thoracic (c) inser-tions. From: The New York School of Regional Anesthesia(NYSORA) (http://www.nysora.com). . . . . . . . . . . . . 61.6 a) Transverse and b) paramedian ultrasound imaging of thespine. Laminae in paramedian view are visualized as wave-like structure. Ligamentum flavum produces a bright echo atthe base of the laminae. . . . . . . . . . . . . . . . . . . . . . 81.7 a) Ultrasound- and b) CT-measure skin to epidural distance. 10xList of Figures1.8 Three parallel ultrasound images. Planes are spaced 10 mmfrom each other. Ideally, the anesthesiologist should performthe needle injection in the third plane which does not includestrong features. The first plane shows the facet joints whereasthe second plane shows the laminae. The planes are also hardfor novice users to distinguish the anatomy in the ultrasound,making needle guidance difficult. . . . . . . . . . . . . . . . . 112.1 (a,c) Generalization and (b,d) compactness of the model builtusing different values of parameters, ? and ?. . . . . . . . . . 252.2 Graphical representation of the L2 vertebrae, and hippocampishapes described by the SSM after varying the weights corre-sponding to the first three and four principal modes of vari-ation by 3?. The amount of variation is color coded for eachmode. Higher variations are shown in red. . . . . . . . . . . 262.3 Generalization, specificity, and compactness ability of the L2vertebra and hippocampus shape models. Error bars showthe standard deviation divided by square root of the numberof trials. SSM generated by group-wise GMM has the bestgeneralization ability among other techniques. EM-ICP hasthe best compactness and specificity since it uses an affinetransformation. However, among other techniques, our tech-nique is the best in terms of these two measures. . . . . . . . 272.4 Distance error is overlaid on the surface of the mean shapebuilt using different approaches. Higher distance is shown inred. For better visualization, hippocampi are viewed frommedial, and distances higher than 1.5 mm in vertebra SSMand higher than 0.5 mm in hippocampus SSM are shown inthe same color. . . . . . . . . . . . . . . . . . . . . . . . . . . 
283.1 An example of SSM (shown in blue) to a target (shown inred) registration using GMM-ASM. . . . . . . . . . . . . . . 363.2 Registration scenario with 10% outliers and Gaussian additivenoise with 3 mm standard deviation. Registration results arepresented for: a) the original ASM implementation, b) WLS-ASM, c) LMedS-ASM, and d) our proposed algorithm, GMM-ASM. Blue points are the model whereas red points are thetarget. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373.3 Robustness of the algorithm against additive noise (a and b)and outliers (c and d). . . . . . . . . . . . . . . . . . . . . . . 39xiList of Figures3.4 Example of registration of a SSM to ultrasound images ofhuman T12 vertebra. The initial position of the model isdrawn in red, the registered model is white. . . . . . . . . . . 403.5 (a) TRE and (b) RMS distance error for probabilistic regis-tration of the model built by our method and the congealingtechnique to ultrasound images of in vivo data for both L1and T12 vertebrae. . . . . . . . . . . . . . . . . . . . . . . . . 404.1 The tangent space is defined over each element of a manifold.A vector on the tangent space can be transformed to themanifold by exponential map. . . . . . . . . . . . . . . . . . . 444.2 Generalized Procrustes analysis is performed to align all thetraining set. The similarity transformation from the meanshape to each object is then computed. . . . . . . . . . . . . . 494.3 Steps for instantiation of the model. . . . . . . . . . . . . . . 504.4 Graphical representation of the L1-L5 vertebrae variationsdescribed by changing the weights corresponding to the firsttwo principal modes of pose and shape variation by ?3??pkand ?3??sk, respectively. The first mode of pose variationsrepresents mainly the scaling and the lateral curvature, andthe second mode represents the anterior-posterior curvatureof the spine. The first and second modes of variation of shaperepresent variations in the transverse processes and verte-bral body, respectively. For sake of better visualization, theamount of the shape variations is color coded on the meanshape. Unit is in millimeter. . . . . . . . . . . . . . . . . . . . 524.5 Generalization (for both Mean Error and Haussdorf), speci-ficity, and compactness ability of the statistical multi-vertebraeshape+pose model. Error bars show the standard deviationdivided by square root of the number of trials. . . . . . . . . 545.1 Example of results obtained with the Canny edge detector onsample CT images of spine. . . . . . . . . . . . . . . . . . . . 575.2 The mean distance error of the registration for different valuesof the trade-off variables. Registration results are averagedacross all lumbar vertebrae. . . . . . . . . . . . . . . . . . . . 61xiiList of Figures5.3 Registration results and volume overlaps for four chosen vol-umes from different views. The registered model is high-lighted in white. Dashed lines mark where the axial slicesare located in the sagittal view. The red surface shows themanual segmentation. . . . . . . . . . . . . . . . . . . . . . . 635.4 Registered model with (white color) and without (gray color)the additional non-rigid transformation step. This exampleshows a location where the errors are especially large in orderto best demonstrate the additional deformation step. . . . . . 645.5 a) Mean error and b) success rate for registration with dif-ferent initial displacements. Registration results are averagedacross all lumbar vertebrae. . . . . . . . . . . . . . . . . . . . 
665.6 Time versus accuracy for different number of CT points usedin the registration. Time is calculated from an unoptimizedMatlab implementation. . . . . . . . . . . . . . . . . . . . . . 676.1 a) Three examples of T1 (first thoracic) vertebrae from threedifferent patient. b) An example of T1-T2-T3 vertebrae ofa patient. c) Three examples of L1 (first lumbar) vertebraefrom three different patients. d) An example of L1-L2-L3vertebrae of a patient. . . . . . . . . . . . . . . . . . . . . . . 716.2 An example of a training set for construction of a generic3-vertebrae model. . . . . . . . . . . . . . . . . . . . . . . . . 726.3 The probability matrix indicating the similarity of the seg-mented vertebrae to each level. The best configuration isfound by averaging the diagonal of the matrix and reportingthe maximum. . . . . . . . . . . . . . . . . . . . . . . . . . . 746.4 A schematic diagram, showing the steps of the labeling andsegmentation approach proposed. Initially, a point is auto-matically found on the vertebral column (red cross). Next, a3-vertebrae model is registered. The last vertebra of the reg-istered model (red arrow) is then used to initialize the nextmodel. The segmented vertebrae are labeled in each itera-tion. This process continues until it reaches the extents ofthe field-of-view or it labels the vertebrae as T1. . . . . . . . 756.5 Graphical representation of the generic 3-vertebrae model de-scribed by changing the weights corresponding to the first twoprincipal modes of pose and shape variation. . . . . . . . . . 766.6 Examples of segmentation result . . . . . . . . . . . . . . . . 77xiiiList of Figures7.1 (a) Midline sagittal insertion of the needle. The horizontal ar-row shows the epidural width and the vertical arrow shows theepidural depth. (b) 3D motorized ultrasound probe equippedwith a needle guide. (c) The relative positions of the 3D ul-trasound probe, needle and vertebra in superior-inferior view.The arrow shows the sweep direction. The probe should beplaced to align the needle path (dashed blue line) with themid-sagittal plane. The red line shows the visible part of thevertebrae in ultrasound images. No echoes appear on the grayarea since the spinous process surface is not orthogonal to thetransducer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 807.2 Distance error overlaid on the model in the a) anterior-posteriorand b) left-to-right view. The arrow points to the side usedin the registration. . . . . . . . . . . . . . . . . . . . . . . . . 837.3 Three volumes are acquired from each intervertebral level.Arrows show the plane visualized in each case. a) The modelis registered to this paramedian volume (volume A). The reddashed line shows the sagittal slice of this volume shown to-gether with the registered model. b) A transverse view ofthe centered volume (volume B) augmented with the regis-tered model. c) A sagittal view of the contra-lateral volume(volume C) augmented with the registered model. . . . . . . . 847.4 Examples of the registration. The multi-vertebrae model iscapable of capturing different sizes, shapes and poses of ver-tebrae. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
85xivList of AbbreviationsAAM Active Appearance ModelsASM Active Shape ModelBFGS Broyden-Fletcher-Goldfarb-ShannoCT Computer TomographyEM Expectation MaximizationGHT Generalized Hough TransformGMM Gaussian Mixture ModelICP Iterative Closest PointITK Insight ToolkitLF Ligamentum FlavumLOR Loss-of-ResistanceMRI Magnetic Resonance ImagingPCA Principal Coponent AnalyisisRBF Radial Basis FunctionRMS Root Mean SquareSSM Statistical Shape ModelSVM Support Vector MachineUS UltrasoundxvAcknowledgementsI am most grateful to my advisors Profs. Purang Abolmaesumi and RobertRohling for making the four years of my Ph.D. studies extremely enjoyableand productive. Their guidance and support have been essential not onlyin the development of this thesis, but also in my personal development asa scientist. Purang is a wonderful researcher. He has taught me how takeinitiative and how to think critically. He cares about his students a lot andconsiders their success in their academic lives as his own success. Rob isan excellent teacher and researcher. You can always rely on his helps andsupports while facing with any academic or real-life challenge.I am very grateful to the Jens Lohser at VGH and Ali Kamani at BC WomenHospital for helping me to develop many ideas, and for their advice and manyfruitful discussions. I am also thankful to Vickie Lessoway, our sonographer,who was far more than a medical collaboration partner.I would also like to thank my dissertation committee, Tim Salcudean, JaneWang, and Jim Little for careful criticism and insightful feedback that im-proved the quality of this work.Special thanks to Mehdi Ramezani for his insightful feedbacks and sharinghis knowledge in signal and image processing, and to Samira Sojoudi forhelping with software development and programmming.I am also thankful to all the former and current members of the RCL lab atUBC: Ramin, Hedyeh, Omid, Siavash, Saman, Amin, Ali, Reza, Hani, Sara,Caitlin, Mohammad Najafi, Mohammad Honarvar, Phillipe, Hussam, andJeff. I also wish to thank all other friends who shared their joy with me.I thankfully acknowledge the fellowships and grants that supported my re-search during the course of my studies: the University of British Columbia,the National Science and Engineering Council of Canada (NSERC), and theCanadian Institutes of Health Research (CIHR).I offer my gratitude to my beloved parents, Farzaneh and Vahid, and mybrother Arash, whose unconditional support helped me accomplish manymilestones in my life including this. Last but not least I would like to thankmy wife, Mahshid, who has been with me in every step with her support,encouragement, quiet patience and unwavering love.xviChapter 1Introduction1.1 Clinical background1.1.1 Epidural anesthesia and blind needle placement usingloss-of-resistanceEpidural anesthesia is a form of regional anesthesia involving placement of aneedle into the epidural space of the spine. Depending on the application, itis performed on cervical, thoracic, or lumbar vertebrae. Cervical epiduralsare used for treatment of radicular pain in the upper limb and also to provideanalgesia for surgery in the same area. 
Thoracic epidurals are performedon patients who undergo large abdominal or thoracic incisions for surgery.Lumbar epidurals are used mostly to ease the pain of labour or a cesareansection and are also utilized for other surgeries on the abdomen and lowerlimbs.An understanding of the anatomy of the epidural space is essential for fur-ther incorporation and improvement of computer assisted technologies insuch medical procedures. Although the morphology of the vertebrae differssignificantly, the basic components remain the same (see Figure 1.1). Eachvertebra is composed of a vertebral body and a bony arch. The arch consistsof two posterior laminae and two anterior pedicles. The spinous process isdirected posteriorly and inferiorly from the junction of the laminae, andtransverse processes are located at the junction of the pedicle and lamina.Spinous processes vary in their angulations. They are oriented almost trans-versely in most of the regions and become more angulated inferiorly in themid-thoracic region. This shape particularly impacts the epidural injectionsin these regions since midline insertions become more difficult.The vertebrae are joined together by soft tissues, namely, discs and liga-ments. Some of the important ligaments are the intertransverse ligament,supraspinous ligament, interspinous ligament, and ligamentum flavum (LF).The LF is located immediately posterior to the epidural space and connectsthe laminae of adjacent vertebrae.The epidural space lies between the LF and the dura mater. The contents11.1. Clinical background(a) (b)Figure 1.1: a) Shape of the vertebrae varies between different levels. b)Transverse view of the vertebra and epidural needle injection. A midlineinsertion passes through skin, subcutaneous fat, superspinous ligament, in-terspinous ligament, and ligamentum flavum.of this space include nerve roots, epidural fat, epidural veins, and radicu-lar arteries entering the space via the neural foramen. The volume of fat isgreater in obese individuals and less in the elderly. The distance between theLF and dura mater is greatest in the lumbar region (4?6 mm), followed bythe thoracic region (3?5 mm), followed by the cervical region (2?4 mm) [79].Patients are placed either in the sitting or in lateral decubitus position forepidural injections depending on the patient?s medical status, weight, etc.Identification of the midline is generally easier in an obese patient in thesitting position, but it requires a trained assistant to maintain the patient?scorrect posture.In epidural insertions, selection of the puncture site and the needle angle istraditionally achieved solely through palpation of the spine. For example, incase of lumbar insertions, the virtual line that connects the superior aspectof the iliac crests is used to locate the spinous process of L4 or the L4-5 in-terspace (see Figure 1.2). Since lumbar epidurals are commonly introduced21.1. Clinical backgroundFigure 1.2: Localization of the interspaces by palpation. The superior aspectof the iliac crest (highlighted with blue circle) can be used to localize theL5 vertebrae (highlighted with yellow circle). The inferior angle of scapula(highlighted with red circle) can be used to localize the T7 vertebrae (high-lighted with green circle).at the L2-3 interspinous space, two levels above the L5 can safely be chosenfor needle entry into the epidural space. 
In case of thoracic insertions, theinferior angle of the scapula can be used to locate the T7 spinous process(see Figure 1.2).Prior to the needle insertion, local anesthetics are injected to numb theskin (see Figure 1.3a). The epidural needle is then inserted slowly to locatethe epidural space (see Figure 1.3b), followed by the insertion of a catheterthrough the needle to deliver anesthesia (see Figure 1.3d). The needle trajec-tory encounters skin, muscle, inter-spinous ligament (for lumbar insertion),and the LF before entering the epidural space. The needle may be angleddepending on the level of the entry (see Figure 1.5). The Loss-of-Resistance(LOR) technique is normally used to confirm that the needle tip has reachedthe epidural space. This technique involves attaching a saline-filled syringeto the needle applying pressure during needle insertion and then feeling theloss-of-resistance of saline injection when the needle tip enters the epidu-ral space (see Figure 1.3c). The needle is inserted either in the midline orparamedian plane. The midline approach is commonly used for lumbar or31.1. Clinical background(a) (b)(c) (d)Figure 1.3: Steps of a epidural needle insertion. a) Injection of local anesthe-sia to numb the skin, b) insertion of the epidural needle, c) loss-of-resistanceto confirm the location of the epidural space, and d) insertion of a catheterto deliver anesthesia. From: The New York School of Regional Anesthesia(NYSORA) (http://www.nysora.com).41.1. Clinical backgroundFigure 1.4: Entrance of the Tuohy needle into the epidural space. Theneedle tip has a curve to prevent accidental dural puncture and to facilitatepassage of the catheter up the spinal canal.low thoracic epidural placements in the sitting position. For midthoracicinsertions, or patients with calcified interspinous ligaments, the paramedianapproach can be used. Tuohy needles with 16 to 18 guage, 8 cm in length,with centimetric markings are typically used in epidural insertions. Theneedle tip has a 15 to 30-degree curve to prevent accidental dural punctureand to facilitate passage of the catheter up the spinal canal. Figure 1.4visualizes the entrance of the Tuohy needle into the epidural space.Epidural placements may be performed under fluoroscopic guidance for non-obstetric indications, but the majority of them are done blindly, since mostare performed on parturient patients where exposure to ionizing radiationis contraindicated.1.1.2 Complications associated with this procedure andsuccess rateComplication rates associated with epidural insertion such as postduralpuncture headache, persistent paresthesias and blood aspiration are reportedat 1.32%, 0.13% and 3.3%, respectively, for 4767 consecutive spinal anesthet-ics [52]. Other complications include a rare but serious risk of neurologicinjury (? 0.01%), including paraplegia [94]. The interfaces among LF, epidu-ral space and dura mater are encountered several millimeters apart so needleinsertion must be controlled to a high level of accuracy. A needle overshoottypically results in accidental puncture of the dura mater surrounding the51.1. Clinical background(a) Lumbar insertion (b) Mid-thoracic insertion (c) Upper-thoracic inser-tionFigure 1.5: Needle angle for epidural insertion in different levels. The needleis angled in lower thoracic insertion (b), whereas it is less angled in lum-bar (a) and upper thoracic (c) insertions. 
From: The New York School ofRegional Anesthesia (NYSORA) (http://www.nysora.com).spinal cord and leakage of cerebral spinal fluid, which leads to side effectsfor patients, such as post dural puncture headache. The incidence of du-ral puncture with no treatment is below 0.8% [42, 52, 114], however, itmay be catastrophic [109]. For university training programs, the incidenceof dural perforation is even greater (3-5%), which demonstrates the steeplearning curve associated with these procedures [34, 89]. Complications mayalso include temporary or permanent neurologic deficit. This risk has twoconsequences: (1) a small number of patients will suffer permanent neuro-logic deficit, and (2) the risk inhibits more widespread use of percutaneousneuraxial needle injections. More accurate selections of the puncture site,needle trajectory, and depth of insertion are needed to decrease the risk andincrease the use of this procedure.1.1.3 Need for image guidanceThe previous review shows that there are still limitations with the currentblind protocol for epidural needle insertion. These limitations are primarilydue to insufficient information to the clinician for the choice of puncture61.1. Clinical backgroundsite, needle trajectory, and depth of insertion. Such issues become more im-portant in more challenging interventions, i.e. the ones performed on obesepatients, patients with spine diseases, or parturient patients with edema.Palpation may not be possible for obese patients since spinous processes(the anatomical landmark used in palpation) are located deep below a fatlayer. Abnormalities such as scoliosis, hyperlordosis, and kyphosis may alsoinhibit the blind technique. Additionally, during pregnancy, the optimumpuncture site is smaller, the soft-tissue channel between the spinal processesis narrower, and the skin-epidural space distance is greater [48].A number of solutions have been proposed for increasing the needle place-ment accuracy, while maintaining or improving patient safety and reducingthe complication rates. This includes the use of pressure infusion pumps [62]to replace the tactile sense of loss-of-resistance, or adding electrical sensorsto the needle to quantify the loss-of-resistance [103]. Although these tech-niques help with avoiding overshoot, they do not help with selection ofpuncture site or needle trajectory. Therefore, there is a need for a practicalimage-guidance system to view the spinal anatomy as part of the routineclinical workflow in anesthesia. The guidance system needs to be able toshow the location of the epidural space and neighboring anatomy so thatthe anesthesiologist can guide the needle appropriately to the target.1.1.4 Ultrasound in epidural anesthesia: methods,outcomes, and limitationsIn search of an accessible, portable, inexpensive, efficient, and non-ionizingimaging method for epidural guidance, ultrasound (US) guidance has en-joyed a recent resurgence in the past decades, with research looking intopre-puncture ultrasound to help decide the puncture site to reach the desiredinter-vertebral level [105], to measure the depth from skin to the epiduralspace [46], and for real-time guidance of needle insertion [104]. Figures 1.6aand 1.6b show ultrasound images of the spine captured by locating the ul-trasound transducer in transverse and paramedian planes.Thoracic intervertebral levels can be established based on the counting-upmethod from the last rib. 
For a target level of T6-7, the seventh rib isidentified by counting up from the 12th rib while the transducer is orientedin the sagittal direction and moved superiorly in a parasagittal plane 2 cmto the right of midline. After intervertebral identification, the transducer ismoved medially along the seventh rib echo to identify the LF at the T6-7intervertebral space. Since the plane is 2-3 mm from the midline sagittal71.1. Clinical background(a)(b)Figure 1.6: a) Transverse and b) paramedian ultrasound imaging of thespine. Laminae in paramedian view are visualized as wave-like structure.Ligamentum flavum produces a bright echo at the base of the laminae.81.1. Clinical backgroundplane, this plane is termed the offset-sagittal plane. The same approachis used for other target levels. Lumbar intervertebral levels can be foundsimilarly using the counting-down method from the last rib.Ultrasound has also been used for measurement of the depth of skin-to-epidural space which is commonly marked by the leading (posterior) edgeecho from the LF as the target (see Figure 1.7a) [47]. An acoustic windowcan be found by searching through the echoes from the intervertebral spaceby moving (2-3 mm) and/or angling (5-10 degrees) until the characteristi-cally bright echo from the LF is clearly visible in the image (Figure 1.6b).As the epidural space itself is not directly visible in ultrasound images, theadjacent echogenic LF is used as the identifiable surrogate on the ultrasoundimages. The echo-dense LF appears as a bright echo at the base of the lam-inae with an appearance and thickness that varies slightly from patient topatient.Evaluation of the reliability of such measurement was the subject of ourfirst clinical study. We limited the study to those undergoing the thoracicepidural injection (n = 20). The measured distance using ultrasound wascompared against the actual needle insertion depth and the CT-measureddepth (see Figure 1.7b). The US-based depth measurements displayed abias of 3.21 mm with 95% limits of agreement of -7.47 to 13.9 mm com-pared with the clinically determined needle depth, and displayed a bias of2.99 mm with 95% limits of agreement of -7.29 to 13.28 mm compared withthe CT-measured epidural depth. In summary, we showed that ultrasoundcan reliably image the LF in the thoracic spine using the paramedian plane.We also found an acceptable correlation between both ultrasound and CT-determined skin-to-epidural depth and the clinically determined epiduraldepth.Real-time guidance of needle insertion using ultrasound imaging is also chal-lenging since the operator should obtain a high quality image of the targetepidural space without obscuring the puncture site directly above the targetin the midline plane of the spine (i.e. the needle is typically inserted mid-line). The two possible planes for imaging are paramedian and transverse,where the transducer is placed in paramedian or transverse plane, in thesame plane as the needle trajectory and next to the puncture site. Severalvariations of these transducer and needle orientations have been proposedwith no clear winner to date [26]. The unresolved problem is that it is im-possible with standard ultrasound machines to see the target and needletrajectory together in real time with a standard midline needle insertion.Other advantages may result from developing and using appropriate image-guidance systems. Spine asymmetries can be well-depicted in ultrasound im-91.2. 
Objective(a) ultrasound-measured depth (b) CT-measured depthFigure 1.7: a) Ultrasound- and b) CT-measure skin to epidural distance.ages [63]. It may also aid to detect the LF midline gap, which is implicatedas a potential cause of failure to recognize the loss-of-resistance [63, 65].Despite this surge of interest in ultrasound imaging, this imaging modal-ity has not become the standard-of-care for obstetrics/general surgery evenfor pre-puncture imaging, because of ongoing issues related to poor qual-ity depiction of anatomical features, poor depiction of the needle, and thedifficulty in interpreting images by novice ultrasound operators, i.e. manyanesthesiologists. Figure 1.8 shows some examples of ultrasound images ofspine where images are hard to interpret. Because of these difficulties, US-guided techniques are not easy to learn. Preliminary studies show that atleast 40 or more cases are required to reach competency of 90% in epiduralneedle placement [26].Other difficulties are also reported. In the case of obese patients, the greaterdistance from the skin to the target leads to lower image quality at the tar-get location. Moreover, the visibility of the LF, dura mater and epiduralspace in ultrasound images decreases during pregnancy [48].1.2 ObjectiveThe global objective of this thesis is to facilitate the integration of real-timeultrasound imaging into epidural injections. To this end, we investigate anddevelop techniques that can enable accurate interpretation of ultrasoundimages within a clinically acceptable time-frame. In other image-guidanceapplications, ultrasound images have been augmented with virtual repre-101.2. Objective(a) (b)(c) (d)Figure 1.8: Three parallel ultrasound images. Planes are spaced 10 mmfrom each other. Ideally, the anesthesiologist should perform the needleinjection in the third plane which does not include strong features. The firstplane shows the facet joints whereas the second plane shows the laminae.The planes are also hard for novice users to distinguish the anatomy in theultrasound, making needle guidance difficult.sentations of the anatomical target [84]. Such models can be generated byextracting the target from various imaging modalities such as Computer To-mography (CT) or Magnetic Resonance Imaging (MRI). A feasibility studyperformed by Moore et al. demonstrated that this augmentation reducesthe procedure time and increases the accuracy [73].Although significant advantages can be achieved by enhancing the ultra-sound images with pre-operative CT or MRI images, such images may notbe obtainable (such as in obstetrics), or involve ionizing radiation with in-creased risk to the patient and physician. Hence, the use of Statistical ShapeModels (SSM) of the vertebrae is a reasonable alternative to pre-operative111.3. Contributionsimages. In this thesis we investigate the feasibility of construction and reg-istration of SSM of vertebrae to in vivo ultrasound images of spine.When the CT or MRI images are available, they could be used for patient-specific guidance of the epidural procedures. However, segmentation of ver-tebrae from those images is a time consuming task and requires significantmanual interaction. In this thesis, we also propose a semi-automatic and afully-automatic techniques to align the SSM of vertebrae to CT images tofacilitate the vertebral column segmentation.1.3 ContributionsThis thesis is an attempt to develop techniques that are essential for anultrasound-guided epidural needle insertion system. 
In the course of achiev-ing this objective, the following contributions were made:? Developing a novel technique for group-wise registration of a group ofpoint sets, referred to as group-wise Gaussian Mixture Model (GMM)-based registration. This is an essential prior step to generation of astatistical shape model of a single vertebra.? Developing a novel probabilistic approach to register the SSM to anew target, presented as either a point set or an ultrasound volume.Our algorithm performs a global search to find the parameters forinstantiating the SSM.? Developing a novel technique for the generation of a statistical multi-vertebrae shape+pose model by separating the shape and pose statis-tics of multiple vertebrae.? Introducing a novel iterative Expectation Maximization (EM) regis-tration technique to align the model to CT images of spine.? Extending the multi-vertebrae model to incorporate the statistics withinthe vertebrae and using such information for automatic and simulta-neous labeling and segmentation of vertebrae vertebrae in CT images.? Proposing a new technique for real-time guidance of the epidurals usingultrasound imaging and introducing a novel technique for alignmentof the multi-vertebrae model to the ultrasound volume.121.4. Thesis outline1.4 Thesis outlineThe rest of this thesis is subdivided into eight chapters as outlined below:CHAPTER 2: CONSTRUCTION OF A SINGLE OBJECT STATISTICALSHAPE MODELAn SSM defined over an anatomy usually includes a mean shape and its vari-ations; capable of building an unseen observation of that specific anatomy.SSMs are usually built from a set of training shapes by learning the vari-ations among them. In this chapter we introduce a novel technique forgroup-wise registration of a group of surfaces (presented with point sets),referred to as group-wise GMM-based registration. We use the deformableregistration to generate the mean shape and its variations. We decouple thegroup-wise registration into two steps: registration using a non-rigid trans-formation and updating the mean shape. Given the introduced techniquefor SSM construction, we investigate the quality of such a statistical modelfor vertebrae. We quantitatively evaluate the generalization and compact-ness of statistical model of vertebrae and compare it to the state-of-the-arttechniques.CHAPTER 3: REGISTRATION OF A SINGLE OBJECT SSM TO A TAR-GET IMAGEIn search for a clinically viable solution for real-time registration of an SSMto ultrasound images, we investigate a probabilistic technique to register theSSM to ultrasound. Our approach aims to be substantially faster than thestate-of-the art techniques.CHAPTER 4: GENERATION OF STATISTICAL MULTI-OBJECTSHAPE+POSE MODELThe epidural space is located between each two neighboring vertebrae. There-fore, registration of a single SSM is not sufficient for an US-guided needleinsertion. In this chapter, we introduce a novel method for construction of astatistical multi-vertebrae shape+pose models by adapting a technique pro-posed by Bossa and Olmos [16], given that they perform separate statisticalanalysis on the pose and shape information.CHAPTER 5: SEGMENTATION OF CT IMAGES USING STATISTICALMULTI-OBJECT MODEL131.4. Thesis outlineIn this chapter we investigate the possibility of using the multi-vertebraemodel for segmentation of the vertebral column from CT images. 
Despitethe high contrast of bony structures in CT images, the segmentation taskremains challenging due to the presence of unclear boundaries, the complexstructure of vertebrae, and substantial inter-subject variability in perform-ing and assessing segmentation [60].CHAPTER 6: AUTOMATIC LABELING OF CT IMAGESIn this chapter we extend the proposed multi-vertebrae model to extractinter-vertebrae statistics and register the model to CT images. We then useit for labeling a target vertebra. An accurate segmentation is a by-productof this registration. We then utilize this technique in an automatic approachfor labeling and segmentation of the entire vertebral column in CT imagewith an arbitrary field-of-view.CHAPTER 7: REGISTRATION OF A STATISTICAL MULTI-OBJECTMODEL TO ULTRASOUND IMAGESIn this chapter, we adapt our registration technique to align the statisticalmulti-vertebrae model to in vivo ultrasound volumes. Registration is chal-lenging due to the lack of echoes on most parts of the vertebrae and presenceof speckle and artifacts in the ultrasound images.CHAPTER 8: CONCLUSIONThis chapter includes a short summary followed by a discussion of the in-tegration of the registration algorithms into the clinical workflow. It alsoincludes suggestions for future work.14Chapter 2Construction of a singleobject statistical shapemodel2.1 Introduction and backgroundStatistical shape models (SSM) are known to be a powerful tool in segmen-tation and shape analysis [30]. The a priori knowledge incorporated in thesemodels greatly improves robustness of the segmentation to noise, artifacts,and missing data. A complete review on alternative approaches to SSMsare given by Heimann and Meinzer [51]. SSMs are usually built from a setof training shapes by learning the variations among them. This involvesconstruction of a mean shape and its principal variations. Training shapesmay be represented by point sets, curves, surfaces or volumes. Althougha rich body of literature exists on the generation of shape models usingvolumetric data [3, 4, 11, 32, 55, 93, 107, 113], quite less is available oversurfaces [28, 33, 36]. This is primarily due to the difficulty associated withestablishing the correspondences between surfaces of the training shapes,which directly affects the model quality. Errors in establishing such cor-respondences can lead to large errors in shape variability and meaninglessshape models.The first attempt at generating SSMs [31] required manual landmark iden-tification for establishing the point correspondence, generally a time con-suming and subjective task. In order to automatically find exact one-to-onecorrespondences, others [14, 67] developed techniques to minimize the dis-tances between corresponding points using variations of the Iterative ClosestPoint (ICP) algorithm [10]. In these techniques, a random shape is chosenas the reference and registered to all other shapes in the training set. ThisThis chapter is adapted from Rasoulian A., Rohling R. N., and Abolmaesumi P.,Group-wise registration of point sets for statistical shape models. IEEE Transactions onMedical Imaging, 13(11):2025-2034, 2012.152.1. Introduction and backgroundtechnique is biased to the selection of the reference and may not representall the possible variations [7]. Thus far, attempts to solve this problem bythe supervised selection of the reference [18, 121] remain template-basedand, hence, biased to the reference. 
To solve this problem, others developedgroup-wise registration [3, 11, 55], where both the mean shape and its trans-formation to the training shapes were considered unknown.The two general approaches for performing a group-wise registration ofshapes are the backward and the forward models. Durrlemann et al. [36]give a complete comparison between these two approaches. In the backwardmodel, the mean shape is assumed to be a noisy observation of trainingshapes. Therefore, all shapes are deformed back to a common reference.The mean shape is then the by-product of this procedure and is recon-structed by averaging the deformed instances [28]. In the forward model,the training shapes are assumed to be the noisy observations of the deformedmean shape [36]. This assumption is less computationally intensive, and isalways implemented by breaking the registration into two separate steps: a)Construction of the mean shape, and b) Registration of the mean shape totraining shapes. These steps are alternately performed in each iteration [55].Surfaces are traditionally represented by point sets. However, such a rep-resentation is challenging for group-wise registration. Discretization mayprohibit exact one-to-one correspondences between two point sets; thus, theperfect matching of the underlying geometries is not guaranteed. To addressthis problem, many used other representations of surfaces such as triangu-lated surfaces [36], parametric representation [33], medial axes [85], andprobabilistic distributions [24, 111]. Such representations, however, lead tolarge computational complexity in the registration step.To improve the run-time, Chui and Rangarajan developed an algorithmthat took point sets as input [28]. They extended soft correspondences, atechnique originally developed to register two point sets using a deformabletransformation [27]. Unlike ICP, this technique establishes probabilistic andnot exact one-to-one correspondences between point sets. Specifically, theyuse the backward model to produce an unbiased mean shape. The backwardmodel, however, is computationally intensive [36]. To address this problem,Hufnagel et al. [54] developed a forward modelling technique to build SSMs.Unlike Chui and Rangarajan, they utilized an affine transformation to reg-ister the mean shape to the training shapes. The affine transformation maylead to a large residual between the registered mean shape and the trainingshapes. Consequently, the model is not accurate and not capable of gener-ating all possible variations.We introduce a novel technique for group-wise registration of a group of point162.2. Methodssets, referred to as group-wise Gaussian Mixture Model (GMM)-based reg-istration. We use forward modeling and deformable registration to generatethe mean shape and its variations. We decouple the group-wise registrationinto two steps: registration using a non-rigid transformation and updat-ing the mean shape. The registration is performed using an Expectation-Maximization (EM) algorithm [75] and the mean shape is updated usingthe Quasi-Newton method. The non-rigid transformation is considered asa probability density estimation problem. The points in the mean shape(moving point set) are assumed to be centeroids of a GMM and the pointsin the training sets (fixed point set) are the observations. 
The points in the moving set are forced to move locally coherently to preserve the topological structure and, additionally, to maximize the likelihood of the GMM generating the observations.

We perform extensive experiments on two data sets containing instances of L2 vertebrae and hippocampi. The latter is a benchmark dataset that has been used in many studies and is included for the sake of comparison. We compare our algorithm to available state-of-the-art group-wise registration algorithms, including feature-based and intensity-based approaches. Specifically, we use the following approaches:

- An intensity-based method proposed by Balci et al. [4] that is publicly available.
- A feature-based method proposed by Styner et al. [101] that is publicly available.
- Our implementation of a feature-based method proposed by Hufnagel et al. [54].
- Our implementation of a feature-based method proposed by Chui and Rangarajan [28].

2.2 Methods

In this section we introduce the GMM-based group-wise registration algorithm and describe the reconstruction of the SSM from the result of the group-wise registration.

2.2.1 The group-wise GMM-based registration algorithm

The objective of the group-wise registration process is to extract statistical properties of a particular shape within a training population. In the context of statistical shape models, this procedure normally results in a mean shape and its variations. The variations are usually derived by registration of the mean shape to all training shapes.

The surface points can be considered as observations of a probability distribution. Under this assumption, the registration of two point sets is converted to a probability density estimation problem, where one point set represents a distribution and the other represents the data points. Good candidates for representing such distributions are mixture models. The GMM [71], which is widely used in pattern recognition and machine learning [12], has been utilized as the model in our work. The mixture is defined by a convex combination of Gaussian components, each represented by a mean point and a variance.

Following the probabilistic registration scheme, points in the mean shape are assumed to be the centroids of a GMM that generates the training shape. Throughout this section the following notation is used:

- K: number of training data sets
- D: dimension of the point sets, i.e., 2D or 3D
- N_k: size of the kth training point set
- T_k (N_k × D): the kth training point set
- t_kn: the nth point of the kth training point set
- M: number of points in the mean shape
- Z (M × D): the mean shape
- z_m: the mth point of the mean shape
- σ²: variance of the Gaussian components
- Φ_k: transformation from the mean shape to the kth training point set

For simplicity, we assume the same isotropic variance in all directions, σ²I, for all Gaussian components.

Based on the forward model, the training shapes, T_k, are noisy observations of the mean shape, Z, deformed by the transformation Φ_k:

$$T_k = \Phi_k(Z) + \epsilon_k. \qquad (2.1)$$

The unknowns are Z and Φ_k, while ε_k is an independent and identically distributed Gaussian noise component. The problem is defined as a Maximum a Posteriori (MAP) estimation for independent observations and can be solved by minimizing the negative log-likelihood function:

$$E(\Phi_k, Z) = -\sum_{k=1}^{K}\sum_{n=1}^{N_k} \log \sum_{m=1}^{M} P(z_m)\, P(t_{kn} \mid \Phi_k(z_m)). \qquad (2.2)$$

Here Φ_k is a displacement function and is defined discretely over the points in Z. The unknowns, Z and Φ_k, are found using an Expectation-Maximization (EM) algorithm [71].
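As a concrete reading of Equation (2.2), the following minimal NumPy sketch evaluates the negative log-likelihood for a given mean shape and set of transformations. It is an illustrative sketch rather than the implementation used in this work, and it assumes a uniform prior P(z_m) = 1/M and the isotropic Gaussian components defined above.

```python
import numpy as np

def negative_log_likelihood(deformed_means, training_sets, sigma2):
    """Objective E of Eq. (2.2) under a uniform prior P(z_m) = 1/M.

    deformed_means : list of (M, D) arrays, the deformed mean shape Phi_k(Z)
    training_sets  : list of (N_k, D) arrays, the training point sets T_k
    sigma2         : isotropic variance of the Gaussian components
    """
    energy = 0.0
    for moved, T_k in zip(deformed_means, training_sets):
        D = moved.shape[1]
        diff = T_k[:, None, :] - moved[None, :, :]            # (N_k, M, D)
        sq_dist = np.sum(diff ** 2, axis=-1)                  # (N_k, M)
        # Gaussian density of each observation under each mixture component.
        dens = np.exp(-0.5 * sq_dist / sigma2) / (2.0 * np.pi * sigma2) ** (D / 2.0)
        mixture = dens.mean(axis=1)                           # sum_m P(z_m) p(t_kn | z_m)
        energy -= np.sum(np.log(np.maximum(mixture, 1e-300)))
    return energy
```

Minimizing this quantity jointly over Z and the transformations Φ_k is what the EM iterations described next accomplish.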
The Expectation step computes the probability that a point in the mean shape generates a given point in a training point set:

P(z_m | t_kn) = exp(−||t_kn − φ_k(z_m)||² / (2σ²)) / Σ_{j=1}^{M} exp(−||t_kn − φ_k(z_j)||² / (2σ²)). (2.3)

Initially, σ², the variance of the Gaussian components, is chosen to be large; its value decreases in each iteration. The parameters are updated in the Maximization step, where the following objective function is minimized:

(Z*, φ*_k) = argmin_{Z, φ_k} Σ_{k=1}^{K} Q(Z, φ_k), (2.4)

where

Q(Z, φ_k) = 1/(2σ²) Σ_{m,n} P(z_m | t_kn) ||t_kn − φ_k(z_m)||² + (λ/2) ||Lφ_k||². (2.5)

The latter term, ||Lφ_k||², is a regularization over the transformations and is controlled by a 'trade-off' variable λ. The objective function can be minimized alternately with respect to the mean shape and the transformation parameters. For a fixed mean shape, the problem becomes a pair-wise registration problem and each term of Equation (2.5) can be minimized separately. The pair-wise registration for a given T_k and Z is stated and solved by Myronenko et al. [75]. They model the transformation as a displacement field defined over each point belonging to the mean shape, and define the regularization over the transformation function as

||Lφ_k||² = ∫_{R^D} |φ̃_k(s)|² / G̃(s) ds, (2.6)

where φ̃_k denotes the Fourier transform of φ_k and G̃ represents a symmetric, positive-definite low-pass filter; the integral thus measures the energy at high frequencies. Using a variational approach, they showed that the displacements that minimize the above objective function have the form of a radial basis function:

φ_k(s) = s + Σ_{m=1}^{M} w_m G(s − z_m). (2.7)

In the implementation, G is a Gaussian kernel with standard deviation β, set initially by the user. The Gaussian kernel yields a square, symmetric Gram matrix with elements g_ij = exp(−||z_i − z_j||² / (2β²)). The regularization term can now be written as (λ/2) tr(W_k^T G W_k), where the weighting matrix, W_k, is unknown and must be found. Hence, the objective function for a fixed mean shape can be written as

Q(W_k) = 1/(2σ²) Σ_{m,n} P(z_m | t_kn) ||t_kn − z_m − G(m,·) W_k||² + (λ/2) tr(W_k^T G W_k). (2.8)

Taking the derivative of the above equation with respect to W_k and setting it to zero results in the following linear system of equations:

(diag(P1) G + λσ² I) W_k = P T_k − diag(P1) Z, (2.9)

which is solved at each iteration to find the weighting matrix, W_k, and hence the displacement. The variable β controls the locality of the coherence between point transformations; a higher value of β results in more regularized transformations. For a detailed discussion on the difference between λ and β, we refer the reader to [117].

For fixed transformations, each point of the mean shape, z_m, can be found separately by minimizing

z*_m = argmin_{z_m} Σ_{k=1}^{K} Σ_{n=1}^{N_k} P(z_m | t_kn) ||t_kn − φ_k(z_m)||². (2.10)

The transformations, φ_k, are discrete functions defined over the initial positions of the mean shape points, z_m. To find φ_k(p) for an arbitrary point, p, interpolation must be performed. To reduce the amount of computation, the transformation is encoded at grid points and a thin-plate spline interpolation over the grid points is used. The new z_m is found by minimizing the cost function defined in Equation (2.10); the Broyden-Fletcher-Goldfarb-Shanno (BFGS) Quasi-Newton method, with a cubic line search, is used as the optimizer.
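For illustration, the pair-wise step inside this group-wise scheme can be written compactly. The following is a minimal NumPy sketch of one Expectation/Maximization cycle for a single training set, combining Equations (2.3) and (2.9); the annealing schedule for σ², the mean-shape update of Equation (2.10), and the grid-based interpolation are omitted, and all names are illustrative (the actual implementation in this work is in MATLAB).

```python
import numpy as np

def gaussian_gram(Z, beta):
    """Gram matrix g_ij = exp(-||z_i - z_j||^2 / (2 beta^2))."""
    d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * beta ** 2))

def em_iteration(Z, T_k, W_k, G, sigma2, lam):
    """One E/M cycle of the pair-wise step used inside the group-wise loop.

    Z      : (M, D) mean-shape points (GMM centroids)
    T_k    : (N_k, D) k-th training point set (observations)
    W_k    : (M, D) current kernel weights of the displacement field
    G      : (M, M) Gaussian Gram matrix built from the initial mean shape
    sigma2 : current isotropic variance of the mixture components
    lam    : regularization trade-off (lambda in Eq. 2.5)
    """
    moved = Z + G @ W_k                                   # phi_k evaluated at z_m
    d2 = np.sum((T_k[:, None, :] - moved[None, :, :]) ** 2, axis=-1)  # (N_k, M)
    P = np.exp(-d2 / (2.0 * sigma2))
    P /= P.sum(axis=1, keepdims=True) + 1e-12             # E-step, Eq. (2.3)
    P = P.T                                               # (M, N_k), rows indexed by z_m

    dP1 = np.diag(P.sum(axis=1))                          # diag(P 1)
    # M-step for a fixed mean shape: solve the linear system of Eq. (2.9)
    A = dP1 @ G + lam * sigma2 * np.eye(Z.shape[0])
    B = P @ T_k - dP1 @ Z
    W_k = np.linalg.solve(A, B)
    return W_k, P
```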
We initialize the group-wise GMM-based registration algorithm by removing the scaling factor of the training sets: each set is normalized to have zero mean and a standard deviation equal to one in all directions. This normalization factor can be applied, after the group-wise registration, to the transformations from the mean shape to the instances. The mean shape is set to one of the instances in the training set; we found that the result of the algorithm is not sensitive to the mean shape initialization. The transformations from the mean shape to the training shapes are set to identity. The atlas estimation is detailed in Algorithm 1.

Algorithm 1 Group-wise registration of point sets.
Require: group of point sets T_k
Initialize Z and φ_k
Bring all point sets into the same coordinate system by rigidly registering all of them to one set selected randomly from the training set
repeat
  for all T_k do
    Update φ_k by solving Equation (2.9)
  end for
  Update the mean shape Z by minimizing Equation (2.10)
until convergence

In the actual implementation, it is more efficient to update the mean shape only after several iterations of the registration. In particular, each iteration of the registration includes updating the probability function, P(z_m | t_kn), and computing the transformations, φ_k, that minimize the cost function in Equation (2.5). Additionally, performing the group-wise registration using affine transformations (as proposed by Hufnagel et al. [54]) and passing the result as an initialization to the non-rigid step improves the computation speed substantially. The convergence criterion of the algorithm is the value of σ falling below a threshold, ε. A suitable value for this threshold can be derived from the data, such as the average distance between points; it is set to 0.01 mm for all tests.

In summary, the behavior of the group-wise GMM-based registration algorithm is mainly determined by λ, the trade-off variable, and β, the regularization factor. Higher values of these variables result in smoother transformations.

2.2.2 Statistical shape model
The model can be built from the mean shape, Z, and its transformations, φ_k, to each subject. Any bias in the transformations, including translations and rotations, is removed using Procrustes analysis [35]. Each transformation can be described by a 3M-vector: V_k = [φ_k(z_1)^T ... φ_k(z_M)^T]^T. The average transformation is V̄ = (1/K) Σ_{k=1}^{K} V_k. The covariance matrix is given by

S = 1/(K−1) Σ_{k=1}^{K} (V_k − V̄)(V_k − V̄)^T. (2.11)

An eigen-decomposition of S, which is identical to Principal Component Analysis (PCA), delivers the principal modes of variation, Φ_k (eigenvectors), and their respective variances, λ_k (eigenvalues). The eigenvectors are ordered with respect to their corresponding eigenvalues. Using only c modes, the SSM can be instantiated as

Z̃ = Z + Σ_{k=1}^{c} Φ_k b_k, (2.12)

for an arbitrary selection of b_k.
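The construction and instantiation of the SSM described above amount to a standard PCA. The following is a minimal sketch, assuming the Procrustes-aligned transformation vectors V_k are stacked row-wise in a K × 3M array with the x, y, z coordinates of each point stored consecutively; the SVD route is an equivalent shortcut to the eigen-decomposition of Equation (2.11) when K is much smaller than 3M, and all names are illustrative.

```python
import numpy as np

def build_ssm(V):
    """PCA on the per-subject transformation vectors (Eqs. 2.11-2.12).

    V : (K, 3M) array; row k is the concatenated, Procrustes-aligned
        displaced mean-shape points phi_k(z_1), ..., phi_k(z_M).
    Returns the mean vector, the modes (columns) and the eigenvalues.
    """
    V_mean = V.mean(axis=0)
    X = V - V_mean
    # Equivalent to eigen-decomposing the covariance of Eq. (2.11)
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    eigvals = (s ** 2) / (V.shape[0] - 1)     # lambda_k
    modes = Vt.T                               # columns are Phi_k
    return V_mean, modes, eigvals

def instantiate(V_mean, modes, b):
    """Eq. (2.12): Z_tilde = Z_bar + sum_k Phi_k b_k, reshaped to (M, 3)."""
    c = len(b)
    return (V_mean + modes[:, :c] @ np.asarray(b)).reshape(-1, 3)
```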
2.3 Experiments
Quantitative experiments were carried out on two different data sets: the human L2 lumbar vertebra and the hippocampus.

Vertebrae were manually segmented from a set of CT images captured from 45 patients undergoing CT imaging at Kingston General Hospital, Kingston, Canada (n = 36) and at Vancouver General Hospital, Vancouver, Canada (n = 9). Written informed consent was obtained from all patients. The manual segmentation was performed using ITK-SNAP [118]. The CT imaging resolution ranged from 0.6 mm × 0.6 mm × 0.6 mm to 0.97 mm × 0.97 mm × 3.2 mm spacing. The 45 L2 vertebrae were used as the training set. The vertebrae had sizes between 47 mm × 64 mm × 47 mm and 87 mm × 95 mm × 61 mm.

The second dataset was the publicly available set of 42 segmented hippocampi from Styner et al. [101]. The hippocampi had sizes between 20 mm × 35 mm × 17 mm and 26 mm × 41 mm × 24 mm.

2.3.1 Objective metrics for evaluating statistical shape models
Evaluations are based on the distances between an instantiated SSM and the target surface. To compute the surface distances, both the instantiated SSM and the test data are triangulated using the Crust algorithm [2], and the distances from the vertices of the two surfaces are computed. The Root Mean Square (RMS) of these distances is the first measure we use, hereafter referred to as the RMS distance; the maximum of these distances is the second measure, the Hausdorff distance.

The proposed group-wise registration algorithm, i.e. the group-wise GMM, is evaluated by computing the well-known measures of generalization, specificity, and compactness [102]. All measures are provided as a function of c, the number of modes used in the SSM instantiation.

An SSM with high generalization capability can adapt to a new observation of the same anatomy. Generalization can be measured using the leave-one-out method: a model is built using all but one member of the training set and then fitted to the excluded member. The accuracy with which the model can describe the unseen member is measured using

G(c) = (1/K) Σ_{i=1}^{K} D(S̃_i, S_i), (2.13)

where D(S̃_i, S_i) can be the RMS or Hausdorff distance between two surfaces, S_i is the left-out member, and S̃_i is the instantiated and registered SSM.

Specificity measures the ability to represent only valid instances of the object. To calculate it, the SSM is randomly instantiated within the range of valid parameters, [−3σ, +3σ], and is rigidly registered to all members of the training set:

S(c) = (1/R) Σ_{n=1}^{R} D(S̃_n, S_n), (2.14)

where R is the number of trials and S_n is the member of the training set with the lowest RMS distance error to the instantiated shape model S̃_n.

Compactness is the ability to use a minimal set of parameters. It can be measured using the cumulative variance of the first c modes of the model:

C(c) = Σ_{i=1}^{c} λ_i / Σ_{i=1}^{K−1} λ_i. (2.15)
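As a concrete example of the distance measure D(·, ·) used in Equations (2.13) and (2.14), the following sketch computes symmetric RMS and Hausdorff distances between two surfaces, using vertex-to-nearest-vertex distances as a simplification of the distances computed on the Crust-triangulated meshes; SciPy's KD-tree is assumed to be available, and names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_and_hausdorff(A, B):
    """Symmetric RMS and Hausdorff distances between two surface point sets.

    A, B : (N, 3) and (M, 3) arrays of surface vertices.
    """
    d_ab, _ = cKDTree(B).query(A)   # distance of every vertex of A to B
    d_ba, _ = cKDTree(A).query(B)   # distance of every vertex of B to A
    d = np.concatenate([d_ab, d_ba])
    rms = np.sqrt(np.mean(d ** 2))
    hausdorff = d.max()
    return rms, hausdorff
```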
2.3.2 Group-wise GMM-based registration algorithm parameters
The behavior of the group-wise GMM-based registration algorithm is governed by two parameters, λ and β. The experiments were performed for various combinations of λ and β, where β ranges over {2^−3, 2^−2, ..., 2^6} and λ ranges over {2^−3, 2^−2, ..., 2^3}. This covers a wide region around the values proposed by Myronenko et al. [74] (β, λ = 1.0).

The generalization was reported for each combination of these two variables. The compactness was also computed as the number of modes that cover 95% of the variation:

minimum r such that Σ_{i=1}^{r} λ_i / Σ_{i=1}^{N} λ_i > 0.95. (2.16)

2.3.3 Comparison to other methods
For each data set, i.e. L2 vertebrae and hippocampi, the shape models were built using the group-wise GMM-based registration algorithm and compared to other available state-of-the-art methods. The first compared method is based on affine-registered point sets using EM-ICP, as proposed by Hufnagel et al. [54]; this is the algorithm most similar to the one proposed here, with the key difference being the type of transformation. We also compared our method to the one proposed by Chui and Rangarajan [28], which is also similar to our method except that it follows a backward approach to reconstruct the mean shape. The same experiments were also performed on SPHARM-PDM, a method proposed by Styner et al. [101]. In their approach, the corresponding points on the training shape surfaces are found by converting the surfaces into a spherical harmonic description.

The proposed algorithm was also compared to an intensity-based group-wise registration algorithm proposed by Balci et al. [4], referred to as 'congealing' hereafter. In this work, deformations are modeled by B-splines and the objective function is entropy-based and defined in a group-wise framework. An open-source implementation of the registration algorithm is available in the Insight Toolkit (ITK) [4].

We ran all the other algorithms with the parameters suggested by the original authors [54, 101].

Figure 2.1: (a,c) Generalization and (b,d) compactness of the model built using different values of the parameters λ and β, for the L2 vertebra and the hippocampus data sets.

Figure 2.2: Graphical representation of the L2 vertebra and hippocampus shapes described by the SSM after varying the weights corresponding to the first three and four principal modes of variation by ±3σ. The amount of variation is color coded for each mode; higher variations are shown in red.

Figure 2.3: Generalization, specificity, and compactness of the L2 vertebra and hippocampus shape models. Error bars show the standard deviation divided by the square root of the number of trials. The SSM generated by the group-wise GMM has the best generalization ability among the compared techniques. EM-ICP has the best compactness and specificity since it uses an affine transformation; among the remaining techniques, ours is the best in terms of these two measures.

Figure 2.4: Distance error overlaid on the surface of the mean shape built using different approaches (panels a-e: L2 vertebra; panels g-k: hippocampus; columns: SPHARM-PDM, EM-ICP, Chui, congealing, and group-wise GMM). Higher distances are shown in red.
For better visualization, the hippocampi are viewed from the medial side, and distances higher than 1.5 mm in the vertebra SSM and higher than 0.5 mm in the hippocampus SSM are shown in the same color.

2.4 Results
2.4.1 Group-wise GMM-based registration algorithm parameters
Figures 2.1a and 2.1c show the average RMS distance error for the SSM built with the group-wise GMM-based registration algorithm applied to both datasets using different values of λ and β. Higher values of these parameters constrain the deformation of the mean shape in the registration step and consequently produce a larger error; the model may then not cover all the variations, although the resulting model is more compact. Lower values of these variables remove all regularization over the deformation of the mean shape, and consequently the variation modes are noisier; the SSM is then not capable of properly reconstructing an unseen observation. The selection of these two variables is therefore a trade-off between the compactness of the model and its generalization ability. Figures 2.1b and 2.1d show the compactness of the resulting SSMs for different values of these parameters.

2.4.2 Comparison to other methods
L2 vertebrae
We generated the SSM of the L2 vertebrae using the proposed group-wise GMM-based registration and performing PCA as explained earlier. The training point sets were resampled to include 2000 points, and the SSM was reconstructed with 1500 points; we found this number of points sufficient to represent the main structures of the vertebra. The parameters of the registration were set to β = 16 and λ = 1.0; these values give the best compromise between generalization and compactness. Figure 2.2a illustrates the changes in the shape of the vertebra SSM reconstructed using the proposed method. The first mode of shape variation is mainly a scaling of the size of the vertebra, whereas the second and third modes involve an expansion of the vertebral body in the lateral direction and changes in the length and rotation of the transverse processes. The amount of variation is color coded for each mode.

Figures 2.3a-2.3d show the comparison of the different methods in terms of generalization, G(c), specificity, S(c), and compactness, C(c), as a function of the number of shape parameters, c, used in the reconstruction [102]. The best SSM should have the lowest generalization error, the lowest specificity error, and the highest compactness for the same number of modes. Since SPHARM-PDM is limited to closed surfaces, we filled the vertebral hole. The SSM built using SPHARM-PDM has the lowest Hausdorff distance, a statistically significant difference; however, the model gives the worst results in terms of specificity. Excluding the SPHARM-PDM result, EM-ICP has the worst generalization but the best specificity and compactness, meaning that it generally generates valid instances using few parameters but cannot generate all possible variations. Congealing and the group-wise GMM show almost the same behavior, except that the group-wise GMM is more compact. The SSM built using the method proposed by Chui and Rangarajan has better compactness and specificity than the group-wise GMM-based technique but worse generalization.

Figures 2.4a-2.4e show the distance error overlaid on the mean shape. Except for SPHARM-PDM, where errors are distributed irregularly, higher errors are mainly observed in the extremities, such as the transverse processes and the spinous process.
Hippocampi data
Figure 2.2b illustrates the changes in the shape of the hippocampus model that result from changing the weights corresponding to the first four modes of variation. The first mode represents mainly scaling, whereas the other three represent the stretching of different parts of this anatomy. As in the previous section, the SSM built using the proposed method is compared to the other methods in Figures 2.3e-2.3h.

EM-ICP behaves the same on the hippocampi as in the case of the L2 vertebra: it has the best specificity and compactness but the worst generalization, and it has the largest RMS distance error. SPHARM-PDM shows better results on this data set; it has the best generalization, together with the group-wise GMM-based technique, but the worst specificity, meaning that its instances are likely to be non-realistic. Congealing has the worst generalization in terms of RMS distance error when a small number of modes is used (i.e. c < 8), but it shows better results than EM-ICP when more modes are used. It also has substantially worse generalization than the group-wise GMM and SPHARM-PDM. Chui's method results in an SSM with better compactness and specificity than the group-wise GMM-based technique but worse generalization.

Figures 2.4g-2.4k show the distance error overlaid on the mean shape. Unlike the L2 vertebra, errors are distributed irregularly over the hippocampus data set.

2.5 Discussion
The common approach in all group-wise registration techniques is the registration of the mean shape to the instances. Establishing correct correspondences, and subsequently identifying the variations, strongly depends on the accuracy of this registration. However, a registration with perfect alignment but no regularization over the transformation may create incorrect correspondences, since exact anatomical correspondence may not exist. The best trade-off is usually achieved by changing the regularization factor used in the registration. As shown in Figures 2.1a and 2.1c, very large and very small values of the regularization parameters, λ and β, lead to low generalization ability.

We compared our technique to four state-of-the-art group-wise registration techniques using two datasets, the L2 lumbar vertebra (K = 45) and the hippocampus (K = 42). The method most similar to ours is EM-ICP [54], which utilizes an affine transformation to establish the mapping between the mean shape and the instances. Although the SSM built using this technique is the most compact model (see Figures 2.3d and 2.3h), the affine transformation may leave a large residual between the registered mean shape and the instances. Consequently, the model is not capable of generating all possible variations or of predicting an unseen observation (see Figures 2.3a and 2.3e). On the other hand, the limited number of variations prevents the model from generating invalid variations (see the better specificity of the model built using EM-ICP in Figures 2.3c and 2.3g).

Our technique is also similar to the one proposed by Chui and Rangarajan [28]. They use a backward model to align all shapes to a reference coordinate frame and update the mean shape iteratively. That approach is computationally intensive, since it requires re-parameterization of the shapes in each iteration. On the other hand, our technique uses a forward model, which is less computationally demanding. Our MATLAB implementation of the algorithm proposed by Chui and Rangarajan takes more than 18 hours, whereas the MATLAB implementation of the group-wise GMM takes less than one hour.
The results demonstrate that our technique generates an SSM with better generalization and specificity but worse compactness.

The model built using the SPHARM-PDM method shows the best generalization and acceptable compactness among the compared techniques on the hippocampus data, but the worst specificity on both the L2 vertebra and hippocampus datasets. SPHARM-PDM is limited to closed surfaces; to overcome this limitation, we filled the vertebral hole, but the resulting model still does not provide the best compromise between the different measures.

The distance errors overlaid on the vertebra mean shape reveal that higher errors are mainly observed at the facet joints, transverse processes and spinous process of the vertebra. In the hippocampus data, higher errors are distributed irregularly.

2.6 Summary
We presented a new method to register a group of point sets. We utilized forward modeling and sought a mean shape and its transformations to each given point set. We formulated the problem as an estimation of a probability density function and used the EM algorithm to solve it. We used the resulting mean shape and transformations to generate a statistical shape model. We performed experiments on two data sets and showed that the overall performance of the proposed algorithm is better, on balance, than the state-of-the-art techniques.

Chapter 3
Registration of a single object SSM to a target image

This chapter is adapted from Rasoulian A., Rohling R. N., and Abolmaesumi P., Group-wise registration of point sets for statistical shape models, IEEE Transactions on Medical Imaging, 13(11):2025-2034, 2012, and Rasoulian A., Rohling R. N., and Abolmaesumi P., Probabilistic registration of an unbiased statistical shape model to ultrasound images of the spine, SPIE Medical Imaging, volume 83161P-1, 2012.

3.1 Introduction and background
Available techniques for instantiation and registration of an SSM to a new instance, usually referred to as Active Shape Model (ASM) search algorithms, commonly rely on performing a search along the normal to the SSM surface to find edges and then instantiating the SSM to minimize its distance to these edges. However, this approach is known to be sensitive to noise and outliers [1]. Many modifications have been applied to the original ASM implementation: some address better selection of the landmarks in the image [72, 96], and others improve the shape parameter estimation [1, 64]. Rogers and Graham [91] investigate different approaches to robustly estimate the shape parameters; in particular, they compare M-estimators and random sampling approaches. M-estimators mainly rely on weighting strategies to reduce the effect of outliers and noisy observations; lower weights are given to landmarks with higher residuals. Random sampling strategies, on the other hand, try to find a small subset of landmarks that contains only good data. RANSAC [38] and LMedS [92] are used as random sampling algorithms, and both show superior performance over M-estimators in the presence of outliers. Other weighting strategies have also been proposed by Li and Chutatape [64].

Previous attempts to register SSMs to ultrasound data [6, 58] suggest that, without hardware acceleration, registering volumetric atlases to ultrasound images is computationally more expensive than registering atlases based on surface data. In search of a clinically viable solution for real-time registration of an unbiased SSM to ultrasound data, we propose a method to
create an unbiased SSM using a novel surface-based group-wise registration method. This technique extends the affine group-wise registration method proposed by Hufnagel et al. [54] to a deformable method by integrating the deformable registration technique proposed by Myronenko et al. [75]. We also develop a probabilistic technique to register the SSM to ultrasound images. Our approach aims to be substantially faster than the one proposed by Khallaghi et al. [59], which is a volumetric atlas-based registration.

3.2 Registering the model to a new subject
We propose a GMM-based approach (referred to as GMM-ASM hereafter) to register the model (Z with points z_m, m = 1 ... M) to a new instance (T with points t_n, n = 1 ... N). The new instance is an observation generated by a GMM whose centroids are the points of the SSM mean shape. The SSM mean shape (here the moving set) can be instantiated using Equation (2.12). We use the EM algorithm to find the parameters for instantiation of the SSM. The Expectation step of the algorithm is as follows:

P(z_m | t_n) = exp(−||t_n − (z_m + Φ_m b)||² / (2σ²)) / Σ_{j=1}^{M} exp(−||t_n − (z_j + Φ_j b)||² / (2σ²)). (3.1)

As before, σ², the variance of the Gaussian components, is initially chosen to be large and its value decreases in each iteration. In the Maximization step, the following objective function is minimized:

Q(b) = Σ_{m,n} P(z_m | t_n) ||t_n − (z_m + Φ_m b)||² + β b^T Γ b, (3.2)

and

b* = argmin_b Q. (3.3)

Here, the vector b (of length c) holds the shape parameters. The latter term in Equation (3.2) is a Tikhonov regularization over the shape parameters and is controlled by the trade-off variable β. The matrix Γ is diagonal with elements 1/λ_k. The matrix Φ_m (of size 3 × c) is formed by concatenating the parts of the eigenvectors Φ_k that correspond to the point z_m:

Φ_m = [Φ_1{3m ... 3m+2} ... Φ_c{3m ... 3m+2}]. (3.4)

In order to find the optimum weights, b, we expand Equation (3.2):

Q = Σ_{m,n} P(z_m | t_n) (t_n^T t_n + z_m^T z_m + b^T Φ_m^T Φ_m b − 2 t_n^T z_m − 2 t_n^T Φ_m b + 2 z_m^T Φ_m b) + β b^T Γ b. (3.5)

Differentiating with respect to b gives

∂Q/∂b = 2 Σ_{m,n} P(z_m | t_n) (Φ_m^T Φ_m b − Φ_m^T t_n + Φ_m^T z_m) + 2β Γ b. (3.6)

Defining

Ψ = Σ_{m,n} P(z_m | t_n) Φ_m^T Φ_m + β Γ,
ξ = Σ_{m,n} P(z_m | t_n) Φ_m^T (t_n − z_m), (3.7)

the optimum parametrization is obtained by setting the derivative to zero:

b* = Ψ^{−1} ξ. (3.8)

The steps are outlined in Algorithm 2.

Algorithm 2 Registration of an SSM to a target.
Require: Z, the modes Φ_k and eigenvalues λ_k, and T
Initialize Z̃ = Z and σ
Construct Φ_m
repeat
  Construct P(z_m | t_n) from Z̃ and T
  Find Ψ and ξ using Equation (3.7)
  b = Ψ^{−1} ξ
  Decrease σ and set Z̃ = Z + Φb
until convergence
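A compact sketch of one GMM-ASM iteration, i.e. Equations (3.1), (3.7) and (3.8) inside the loop of Algorithm 2, is given below. It assumes the shape modes are stored column-wise in a 3M × c matrix; the annealing of σ and the convergence test are left out, and all names are illustrative rather than those of the actual implementation.

```python
import numpy as np

def gmm_asm_step(Z, Phi, eigvals, T, b, sigma2, beta):
    """One EM update of the shape parameters b.

    Z       : (M, 3) mean-shape points
    Phi     : (3M, c) matrix whose columns are the shape eigenvectors
    eigvals : (c,) eigenvalues lambda_k
    T       : (N, 3) target point set
    b       : (c,) current shape parameters
    """
    M = Z.shape[0]
    Phi_m = Phi.reshape(M, 3, -1)                 # per-point 3 x c blocks (Eq. 3.4)

    # E-step (Eq. 3.1), evaluated at the current instance z_m + Phi_m b
    current = Z + (Phi @ b).reshape(M, 3)
    d2 = np.sum((T[:, None, :] - current[None, :, :]) ** 2, axis=-1)   # (N, M)
    P = np.exp(-d2 / (2.0 * sigma2))
    P /= P.sum(axis=1, keepdims=True) + 1e-12

    # M-step (Eqs. 3.7-3.8): closed-form update of the shape coefficients
    Gamma = np.diag(1.0 / eigvals)
    w = P.sum(axis=0)                             # sum_n P(z_m | t_n)
    Psi = sum(w[m] * Phi_m[m].T @ Phi_m[m] for m in range(M)) + beta * Gamma
    r = P.T @ T - w[:, None] * Z                  # sum_n P(z_m | t_n)(t_n - z_m)
    xi = sum(Phi_m[m].T @ r[m] for m in range(M))
    b_new = np.linalg.solve(Psi, xi)
    return b_new, P
```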
3.2.1 Registration to ultrasound volumes
Ultrasound images are processed to enhance bony anatomy prior to registration of the SSM. We closely follow the approach of Foroughi et al. [40] and compute a bone surface probability for each pixel in the ultrasound images by a weighted summation of the reflection amount (pixel intensity) and the shadowing effect. The shadow below a pixel is quantified by a weighted summation of the intensity values of all pixels beneath it. The result, P(X), represents the bone surface probability for each pixel in the ultrasound volume. Since the ultrasound probe is positioned posterior-anterior, only the posterior aspect of the vertebra is visible. Similar to the work of Winter et al. [115], the points from the anterior surface are removed, leaving those points on the tip of the spinous process, the laminae and the facets.

To register the SSM to the bone surface probability, the model is evolved using

Z̃ = T(Z + Σ_k Φ_k b_k), (3.9)

where T is a rigid transformation and Z + Σ_k Φ_k b_k is the deformation of the SSM using the modes learned in the training step. Assuming that z̃_i are the points belonging to Z̃, the following objective function is maximized:

(T*, b*) = argmax_{T,b} Σ_i P(z̃_i). (3.10)

All parameters are optimized simultaneously using the CMA-ES optimizer [115].

3.3 Experiments
We evaluated the robustness of the GMM-ASM algorithm proposed for registration of the SSM to a new subject and compared it to the original ASM search approach [31].

Two sets of experiments were performed. In the first experiment, an SSM of the L2 vertebra was reconstructed using the proposed group-wise GMM-based algorithm. A target was generated by randomly instantiating the SSM, giving random weights to the SSM modes and transforming it with translation and rotation parameters drawn randomly within ±5 mm and ±5 degrees from a uniform distribution. Outliers were then added to the target; the outliers were points randomly generated with a uniform distribution around the target. The SSM was registered to the target using four different algorithms: the original ASM, weighted least squares ASM (WLS-ASM), LMedS-ASM, and GMM-ASM. In the original ASM implementation, the parameters are estimated to minimize the sum of squares of the residuals between the shape and the data. In the WLS-ASM approach, the residual is minimized using a weighted least squares technique, where the weighting strategy proposed by Huber [53] is utilized. LMedS-ASM is implemented using the approach presented by Rogers and Graham [91]; LMedS is a random sampling approach that fits a model to noisy data by minimizing the median of the residual. The fitting procedure was repeated 100 times for different percentages of outliers [0%-40%].

In the second experiment, we followed the same steps taken in the first experiment, except that the instance points were perturbed by a zero-mean additive noise with known standard deviation.

The average distance between the corresponding points in the target and the registered SSM was reported as the Target Registration Error (TRE). Registrations with a TRE below 3 mm were considered successful.

Figure 3.1: An example of registration of an SSM (shown in blue) to a target (shown in red) using GMM-ASM, at initialization, iteration 10, and iteration 30.

3.3.1 Registration to ultrasound volumes
Ultrasound volumes were captured by a 3D transducer (4DC7-4/40, Ultrasonix, Richmond). Experiments were performed on lumbar and thoracic vertebrae. CT images (with 0.70 mm × 0.70 mm × 2.50 mm spacing) were collected at a local hospital with approval from the research ethics board, and with informed consent. Using ITK-SNAP, L1 (n = 37) and T12 (n = 18) vertebrae were semi-automatically segmented from these images for generation of the SSM.

3.4 Results
An example of registration of the SSM to a target using the proposed GMM-ASM algorithm is shown in Figure 3.1. Figure 3.2 shows the results of the registration in the presence of noise and outliers using the different approaches.

Figure 3.2: Registration scenario with 10% outliers and Gaussian additive noise with 3 mm standard deviation. Registration results are presented for: a) the original ASM implementation, b) WLS-ASM, c) LMedS-ASM, and d) our proposed algorithm, GMM-ASM. Blue points are the model whereas red points are the target.

Quantitative results are shown in Figure 3.3. The original ASM approach most likely converges to a local minimum.
It is highly sensitive to the amount of additive noise and outliers. The results are improved using both WLS-ASM and LMedS-ASM. The proposed method is, however, substantially less sensitive to both outliers and additive noise: the success rate remains above 90% with 35% outliers or with additive noise of 10 mm standard deviation.

3.4.1 Registration to ultrasound volumes
3D ultrasound volumes were captured from the L1 and T12 vertebrae of three human subjects. The transducer was positioned in the transverse plane such that the tip of the spinous process was centered in the image. An expert sonographer was asked to select points on the bone surface belonging to the target vertebra. For each patient, the probability map of the bone surfaces was extracted from the ultrasound volumes. The shape model was registered manually to the segmented points. The model was then misaligned by applying a rigid transformation with translation and rotation parameters drawn randomly within ±10 mm and ±10 degrees. The shape model was then registered to the probability map of the ultrasound volume using the proposed method. This experiment was repeated 100 times for each vertebra of each patient (overall, 600 registrations were performed). Examples of the registration results are shown in Figure 3.4. The RMS distance between the manually selected points and their nearest points on the registered model was computed as a measure of accuracy in each case. Figure 3.5 includes the results for each patient separately.

Given the overlay of the atlas on the ultrasound, the echoes from the ligamentum flavum/epidural space can be readily interpreted and localized. It is expected that the SSM will be used to augment ultrasound image interpretation, but not to replace the standard technique for epidural needle placement. The registered SSM built using the proposed method had a success rate (final average TRE below 4 mm) of 84%; the success rate was 74% for the alternative congealing method. In this pilot study, we have demonstrated that the range of errors for registering the SSM to ultrasound images of the human spine is 2.3 mm to 2.8 mm for T12 and 3.4 mm to 4.5 mm for L1. Our results show that the two group-wise registration methods produce acceptable accuracy for the targeted clinical application; however, the group-wise CPD shows a better compromise between generalization and compactness. The current unoptimized MATLAB code requires 183 seconds to register the SSM to 3D ultrasound images, compared to many hours in the work by Khallaghi et al. [58].

3.5 Summary
A new method for registration of an SSM to a target was presented. The method showed robustness against outliers and noise and outperformed state-of-the-art techniques. The method was extended for registration of the SSM to in vivo ultrasound volumes of the spine. The range of errors for registering the SSM to ultrasound images of the human spine was 2.3 mm to 2.8 mm for T12 and 3.4 mm to 4.5 mm for L1, which is close to the required clinical accuracy of 4 mm.
Figure 3.3: Robustness of the algorithm against additive noise (a and b) and outliers (c and d); the panels show TRE and success rate for the original ASM, WLS-ASM, LMedS-ASM, and GMM-ASM.

Figure 3.4: Example of registration of an SSM to ultrasound images of a human T12 vertebra. The initial position of the model is drawn in red; the registered model is white.

Figure 3.5: (a) TRE and (b) RMS distance error for probabilistic registration of the model built by our method and by the congealing technique to in vivo ultrasound images, for both L1 and T12 vertebrae.

Chapter 4
Generation of statistical multi-object shape+pose model

This chapter is adapted from Rasoulian A., Rohling R. N., and Abolmaesumi P., Lumbar Spine Segmentation Using a Statistical Multi-vertebrae Anatomical Shape+Pose Model, IEEE Transactions on Medical Imaging, in press.

4.1 Introduction and background
Statistical multi-object models have recently been introduced to integrate the variations of multiple anatomies (also referred to as a complex) and to allow their simultaneous registration to an image [16, 22, 37, 68, 106, 112]. The underlying assumption is the existence of a strong correlation between the shapes and poses of different anatomies in the same patient, making these models capable of generating all possible variations of the shapes and poses of a complex. Using these models, other advantages arise. In particular, by embedding the statistics of the relative position and orientation in the model, the pre-processing step of finding and detecting each object can be avoided. The orientation of an object is also constrained by its neighbors, which leads to more accurate segmentation with lower computational cost.

An early application of multi-object statistical shape models was presented by Duta and Sonka [37]. In their technique, all segmented surface points are concatenated to form a single object; next, Principal Component Analysis (PCA) is applied to find the main variations of the shapes. Wang et al. [112] use the same approach to model multiple organs in mouse micro-CT images, except that the shape correlations between different organs are described using a conditional Gaussian model. Tsai et al. [106] also follow the same concept, integrated with a level-set function. They initially compute the mean shape by averaging all distance functions generated for all objects in the training set; the differences between the samples and the mean shape are stacked in a single matrix, and PCA is applied to extract the main modes of variation. One of the problems with such concatenation is the high dimensionality of the problem, which may lead to inaccurate statistical analysis. To resolve this problem, Lu et al. [68] utilize M-reps, a medial representation of the objects, to capture shape features at various scale levels. In each scale, the modes of variation are described by a series of deformations, which are residues left from the coarser level.
Another approach uses a hierarchical formulation of multi-object structures based on the wavelet transform, decomposing the inter-object relationships into different levels of detail [22].

In all of the mentioned methods, the pose statistics are either neglected or jointly represented with the shape statistics. However, this representation may lead to two major issues. First, the pose statistics are not necessarily correlated with the shape statistics, since they may depend on external factors such as the position and orientation of the patient during data acquisition. Second, the shape deformations are assumed to lie in a Euclidean space, whereas the poses are represented by similarity transformations, i.e. rigid+scale transformations. These transformations form a Lie group, which is a Riemannian manifold on which analysis designed for Euclidean space is not applicable [39]. Bossa and Olmos [16] addressed this problem by separating the pose and shape information and presenting statistical tools to extract variations among a set of objects. They used this technique to describe the principal variations of brain structures such as the subcortical nuclei and lateral ventricles.

In this chapter, we start with an overview of Lie groups and their properties. We continue with the approach proposed by Bossa and Olmos [16] and apply two major modifications. First, the original technique performs two levels of statistics: initially, variations of shapes and poses are extracted separately, and they are then correlated to form a joint statistical model. This representation is not optimal in some situations, e.g. for the case of vertebrae, since the relative position of the vertebrae depends highly on the patient posture during data acquisition. Given that observation, we represent shape and pose variations separately in our model. Second, Bossa and Olmos's technique extracts shape variations for each anatomy separately, whereas we perform the analysis on all objects combined. The primary assumption of such an analysis is that the shapes of different objects in the same patient are strongly correlated.

4.2 Lie groups
4.2.1 Manifolds
A manifold is a topological space that locally looks like Cartesian n-space R^n; it is built up of pieces of R^n glued together by homeomorphisms. If these homeomorphisms are differentiable, we obtain a differentiable manifold. One can attach to every element of the manifold, x ∈ M, a tangent space, T_xM, which contains all possible vectors that pass tangentially through x. The structure of a manifold is specified by a Riemannian metric, which is a collection of dot products ⟨·|·⟩ defined over the tangent space.

Considering a curve γ(t) on the manifold, one can compute the speed of the curve at each point, γ̇(t), and its norm using the dot product, ⟨γ̇(t)|γ̇(t)⟩^{1/2}. This can be used to estimate the length of the curve:

L_x^y(γ) = ∫_x^y ⟨γ̇(t)|γ̇(t)⟩^{1/2} dt. (4.1)

The distance between two points on a manifold is defined as the length of the shortest path between the two points, usually referred to as the geodesic:

dist(x, y) = min_γ L(γ) with γ(0) = x and γ(1) = y. (4.2)

Theoretically, there exists one and only one geodesic, γ_(x,v), that starts from a point, γ(0) = x, with initial speed equal to the tangent vector v ∈ T_xM. The exponential map maps each tangent vector to the point of the manifold reached in unit time (see Figure 4.1):

y = exp_x(v) = γ_(x,v)(1). (4.3)
The logarithmic map is defined as the inverse of the exponential map and is denoted by log_x = exp_x^{−1}; in other words, ~xy = log_x(y), which allows the distance to be rewritten as

dist(x, y) = ||log_x(y)||. (4.4)

Given a population of elements, x_i ∈ M, the Fréchet expectation is defined as the point, μ, where the variance E[dist(μ, x_i)²] is minimized. In other words:

μ = E[x] = argmin_{μ ∈ M} (1/n) Σ_i dist(μ, x_i)². (4.5)

Such a mean may not exist and may not be unique, given the possible discontinuity of the manifold and its special structure. Pennec suggested an algorithm based on gradient descent to compute the mean iteratively [83]:

μ_k = μ_{k−1} exp( (1/n) Σ_i log(μ_{k−1}^{−1} x_i) ). (4.6)

Figure 4.1: The tangent space is defined over each element of a manifold. A vector in the tangent space can be transferred to the manifold by the exponential map.

4.2.2 Principal geodesic analysis
Analogous to principal component analysis in Euclidean space, Principal Geodesic Analysis (PGA) is defined. Recalling the definition of PCA, such an analysis on a set of points, x_i, results in orthogonal vectors, v_1, ..., v_m, referred to as principal components. The first principal component corresponds to a vector that passes through the mean, μ, and minimizes the sum of squared distances of the points from the vector:

v_1 = argmin_{||v_1||=1} Σ_i min_t ||x_i − (μ + t v_1)||², (4.7)

where t v_1 defines a one-parameter linear sub-space. The remaining principal components are defined in the same way, with the additional constraint of orthogonality. The extension of this approach to manifolds is straightforward, since the mean and the distance are well defined. The principal geodesics are defined by vectors in the tangent space at the Fréchet mean; the one-parameter sub-space of such a vector is μ exp(tv). Therefore, the first principal geodesic can be derived as follows:

v_1 = argmin_{||v_1||=1} Σ_i min_t dist(x_i, μ exp(t v_1))²
    = argmin_{||v_1||=1} Σ_i min_t ||log((μ exp(t v_1))^{−1} x_i)||²
    ≈ argmin_{||v_1||=1} Σ_i min_t ||log(μ^{−1} x_i) − t v_1||². (4.8)

The approximation was proposed by Fletcher et al. [39] and converts the problem into a standard principal component analysis in the tangent space defined at the mean. The steps are outlined in Algorithm 3.

Algorithm 3 Principal geodesic analysis.
Require: set of elements, x_i ∈ M
Compute the mean, μ.
Transfer the elements into the tangent space (T_μM) by log_μ(x_i).
Perform PCA on {log_μ(x_1), ..., log_μ(x_n)} to compute the eigenvectors, v_k.
Transfer the eigenvectors to the manifold by the exponential map, exp_μ(v_k).
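Under the approximation of Equation (4.8), Algorithm 3 reduces to an ordinary PCA of the tangent-space residuals at the Fréchet mean. A minimal sketch is given below, assuming the mean and a log map appropriate to the manifold at hand (e.g. the similarity transformations of Section 4.2.3 or the shape space of Section 4.2.4) are supplied by the caller; names are illustrative.

```python
import numpy as np

def principal_geodesic_analysis(samples, mean, log_map, n_modes):
    """Tangent-space PCA approximation of PGA (Algorithm 3 / Eq. 4.8).

    samples : list of manifold elements (e.g. 4x4 similarity transforms
              or vectorized shapes)
    mean    : Frechet mean of the samples, computed beforehand
    log_map : callable log_map(mean, x) returning a flat tangent vector
    n_modes : number of principal geodesics to keep
    """
    U = np.stack([log_map(mean, x) for x in samples])   # residuals at the mean
    U -= U.mean(axis=0)                                  # residuals are ~zero-mean; centering is a safeguard
    _, s, Vt = np.linalg.svd(U, full_matrices=False)
    eigvals = (s ** 2) / (len(samples) - 1)
    return Vt[:n_modes], eigvals[:n_modes]               # rows are tangent directions v_k
```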
Such transformations can transforma 3D point by T (x) = sRx+d. It can also be represented in a matrix form:T =[sR d0 1], (4.9)and be applied to homogeneous coordinates of a 3D point, X = (x, y, z, 1)T,i.e.: T (X) = TX. Exponential and logarithm maps are performed usingthe standard matrix exponential and matrix logarithm; e.g., the matrixexponential is defined by the series:exp(T) =??k=01k!Tk. (4.10)The logarithm map of a similarity transformation T results in:log T =????l ?rz ry xrz l ?rx y?ry rx l z0 0 0 0???? , (4.11)where l = log(s), (x, y, z) is the translation vector, and (rx, ry, rz) definesthe rotation axis with angle ? =?r2x + r2y + r2z . The transformation canbe expressed by a 7? 1 vector, ?7?1 = (rx, ry, rz, x, y, z, l)T, in the tangent464.2. Lie groupsspace with corresponding bases:B1 =????0 0 0 00 0 ?1 00 1 0 00 0 0 0???? , B2 =????0 0 1 00 0 0 0?1 0 0 00 0 0 0???? ,B3 =????0 ?1 0 01 0 0 00 0 0 00 0 0 0???? , B4 =????0 0 0 10 0 0 00 0 0 00 0 0 0???? ,B5 =????0 0 0 00 0 0 10 0 0 00 0 0 0???? , B6 =????0 0 0 00 0 0 00 0 0 10 0 0 0???? ,B7 =????1 0 0 00 1 0 00 0 1 00 0 0 0???? . (4.12)Given above, a similarity transformation can be written as T = exp(??kBk).4.2.4 PGA for shape analysisIn some situations one can assume that the distribution of point sets can bemodeled as a multivariate Gaussian, the parameters of which are obtainedusing linear Principal Component Analysis (PCA). However, in other cases,a non-linear model such as Kernel PCA should be used [108]. Recently, amethod was proposed by Bossa and Olmos which outperforms conventionalPCA in many cases. This approach assumes that surfaces behave locallyas a Riemannian manifold and Principal Geodesic Analysis is performedto extract the modes of variations [15]. To find the shape variations, theinstances are transformed into the tangent space at the mean shape, ?:u = log?(xi), (4.13)and then PCA is performed. Note that the logarithm map of a point set, x,to the tangent space of another point set, y, is defined as [16]logy(x) = vec?1(2 arcsin(1/2?x?? y??)x?? y?(y?Tx?)?x?? y?(y?Tx?)?). (4.14)474.3. Statistical multi-object shape+pose modelVariable DescriptionK number of training data setsL number of objectsNl Number of surface points in the lthobject in the training setSk,l (Nl?3 matrix), the boundary of thelth object of the kth training set?pl Mean pose for lth object?sl (Nl ? 3 matrix), the mean shape ofthe lth objectTk,l (4 ? 4 matrix), similarity transfor-mation from the lth object in themean shape to the correspondingobject in kth training setvpc,l (4 ? 4 matrix), cth pose principalgeodesic for lth objectvsc,l (Nl? 3 matrix), cth shape principalgeodesic for lth objectTable 4.1: Variables used in the text together with their description.The exponential mapping is defined asexpy(x) = vec?1( cos(?x???y??)y? + sin(?x???y??)?y???x??x?), (4.15)where x? = vec(x), y? = vec(y), and the operator, vec, represents a matrixvectorization.4.3 Statistical multi-object shape+pose model4.3.1 Model constructionTo construct the model, we adapt a method introduced by Bossa and Ol-mos [16]. We use superscript ?s? and ?p? to differentiate between shape andpose related variables, respectively. Table 4.1 provides an overview of thevariables used in this section.The method assumes the presence of dense correspondences among a train-ing set that contains K samples of a complex with L objects. In this work,484.3. Statistical multi-object shape+pose modelFigure 4.2: Generalized Procrustes analysis is performed to align all thetraining set. 
4.2.4 PGA for shape analysis
In some situations one can assume that the distribution of point sets can be modeled as a multivariate Gaussian, the parameters of which are obtained using linear Principal Component Analysis (PCA). In other cases, a non-linear model such as kernel PCA should be used [108]. Recently, a method was proposed by Bossa and Olmos that outperforms conventional PCA in many cases. This approach assumes that surfaces behave locally as a Riemannian manifold, and Principal Geodesic Analysis is performed to extract the modes of variation [15]. To find the shape variations, the instances are transferred into the tangent space at the mean shape, μ:

u = log_μ(x_i), (4.13)

and then PCA is performed. Note that the logarithm map of a point set, x, to the tangent space of another point set, y, is defined as [16]

log_y(x) = vec^{−1}( [2 arcsin(½ ||x̄ − ȳ||) / ||x̄ − ȳ(ȳ^T x̄)||] (x̄ − ȳ(ȳ^T x̄)) ), (4.14)

and the exponential mapping is defined as

exp_y(x) = vec^{−1}( cos(||x̄||/||ȳ||) ȳ + sin(||x̄||/||ȳ||) (||ȳ||/||x̄||) x̄ ), (4.15)

where x̄ = vec(x), ȳ = vec(y), and the operator vec represents matrix vectorization.

4.3 Statistical multi-object shape+pose model
4.3.1 Model construction
To construct the model, we adapt a method introduced by Bossa and Olmos [16]. We use the superscripts 's' and 'p' to differentiate between shape- and pose-related variables, respectively. Table 4.1 provides an overview of the variables used in this section.

Table 4.1: Variables used in the text together with their descriptions.
K: number of training data sets
L: number of objects
N_l: number of surface points in the lth object in the training set
S_{k,l} (N_l × 3 matrix): the boundary of the lth object of the kth training set
μ^p_l: mean pose of the lth object
μ^s_l (N_l × 3 matrix): the mean shape of the lth object
T_{k,l} (4 × 4 matrix): similarity transformation from the lth object of the mean shape to the corresponding object in the kth training set
v^p_{c,l} (4 × 4 matrix): cth pose principal geodesic of the lth object
v^s_{c,l} (N_l × 3 matrix): cth shape principal geodesic of the lth object

The method assumes the presence of dense correspondences among a training set that contains K samples of a complex with L objects. In this work, the correspondences are established across the boundaries of the objects, S_{k,l}, which are point sets, using the approach presented in Chapter 2. Generalized Procrustes analysis [35] is performed on the entire complex of all training sets to bring them into the same reference frame. Generalized Procrustes analysis is also performed on each object separately to find the mean shape and its transformation to each instance. The mean shape is forced to have a norm of one, since shapes are defined to be invariant to scale as well as to position and orientation; the scale is represented by the similarity transformation and is thus captured by the pose statistics. Let T_{k,l} be the similarity transformation from the lth object of the mean shape to the corresponding object of the kth instance (see Figure 4.2).

Figure 4.2: Generalized Procrustes analysis is performed to align all the training sets. The similarity transformation from the mean shape to each object is then computed.

For each object, the mean transformation, μ^p_l, and the residuals in the tangent space, U^p_{k,l} = log(μ^p_l^{−1} T_{k,l}), are found. For each U^p_{k,l}, a 7 × 1 vector, u^p_{k,l}, is defined according to the basis of Equation (4.12). The residual vector for each complex is then reconstructed by concatenating the residual transformations, u^p_k = (u^{pT}_{k,1}, ..., u^{pT}_{k,L})^T, and PCA is applied. The results are C principal components, v^p_c = (v^{pT}_{c,1}, ..., v^{pT}_{c,L})^T, and their corresponding eigenvalues, λ^p_c. The corresponding Principal Geodesics (PG) can be computed as μ^p_l exp(v^p_{c,l}). The formulation of the PCA is shown in Equation (4.16). Each u^p_{k,l} is a 7 × 1 vector representing the similarity transformation in the tangent space of the mean; concatenation of these vectors results in a matrix of size K × 7L, which contains all the similarity transformations of the training set. The PCA decomposition results in two matrices, the score matrix and the eigenvector matrix. The matrix W_{K×C} is the score matrix and is neglected afterwards. The eigenvectors are listed in the rows of a matrix of size C × 7L and can be separated for each object, v^p_{c,l}:

[ u^{pT}_{1,1} ... u^{pT}_{1,L} ]             [ v^{pT}_{1,1} ... v^{pT}_{1,L} ]
[ u^{pT}_{2,1} ... u^{pT}_{2,L} ]  =  W_{K×C} [ v^{pT}_{2,1} ... v^{pT}_{2,L} ]
[     ...             ...       ]             [     ...             ...      ]
[ u^{pT}_{K,1} ... u^{pT}_{K,L} ]             [ v^{pT}_{C,1} ... v^{pT}_{C,L} ]   (4.16)

Contrary to the original algorithm, where the shape statistical analysis is performed on each object separately, we concatenate all objects in the tangent space and apply PCA to them. Our assumption is that the anatomical shapes obtained from each patient are highly correlated with each other. This assumption is reasonable for some organs, such as the vertebrae of the same person, but may not be valid for other anatomical complexes. The results are principal components in the tangent space, v^s_c = (v^{sT}_{c,1}, ..., v^{sT}_{c,L})^T (with corresponding eigenvalues λ^s_c), which produce PGs through the exponential mapping, exp_{μ^s_l}(v^s_c).

4.3.2 Model instantiation
The shape+pose model inherits two separate kinds of statistics: poses and shapes. We will show later in this section that these two are not correlated for the case of vertebrae; therefore, we analyze the statistics of shape and pose separately. New complexes are generated by applying different weights to the PGs and combining them. Figure 4.3 shows the steps of the model instantiation: initially, the shape variations are applied to the mean shapes of the vertebrae, followed by the pose variations and a rigid registration step, respectively.

Figure 4.3: Steps for instantiation of the model.

Assuming that α^s_c is the weight applied to the cth shape PG and α^p_c is the weight applied to the cth pose PG, a new instance of the lth object is generated as follows:

Ψ_l(α^s, α^p) = Ψ^p_l(Ψ^s_l(α^s); α^p). (4.17)

Note that Ψ^p_l(·; α^p) denotes a similarity transformation built by combination of the pose PGs with their corresponding weights:

Ψ^p_l(·; α^p) = μ^p_l Π_{c=1}^{C'} exp(α^p_c v^p_{c,l}). (4.18)

The PGs are sorted according to their corresponding eigenvalues. To limit the space of unknown parameters for the registration and to speed up the registration process, we use only a subset of the PGs and ignore those with negligible variance, i.e. C' ≤ C. Additionally, the shape instantiation, Ψ^s_l, can be formulated as follows:

Ψ^s_l(α^s) = exp_{μ^s_l}( Σ_{c=1}^{C'} α^s_c v^s_{c,l} ). (4.19)
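To make the instantiation concrete, the following sketch builds one object of the complex from given pose and shape weights, following Equations (4.17)-(4.19) and the spherical exponential map of Equation (4.15) as reconstructed above; it assumes the mean poses, mean shapes and principal geodesics are stored in the forms listed in Table 4.1, and all names are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def instantiate_object(mu_pose, pose_pgs, alpha_p, mu_shape, shape_pgs, alpha_s):
    """Instantiate one object of the complex (Eqs. 4.17-4.19).

    mu_pose   : (4, 4) mean similarity transformation of the object
    pose_pgs  : list of (4, 4) tangent-space pose principal geodesics v^p_{c,l}
    alpha_p   : pose weights alpha^p_c
    mu_shape  : (N_l, 3) mean shape of the object (unit norm)
    shape_pgs : list of (N_l, 3) tangent-space shape principal geodesics v^s_{c,l}
    alpha_s   : shape weights alpha^s_c
    """
    # Shape part, Eq. (4.19): exponential map on the sphere at the mean shape
    v = sum(a * pg for a, pg in zip(alpha_s, shape_pgs)).ravel()
    y = mu_shape.ravel()
    nv, ny = np.linalg.norm(v), np.linalg.norm(y)
    if nv < 1e-12:
        shape = mu_shape.copy()
    else:
        shape = (np.cos(nv / ny) * y + np.sin(nv / ny) * (ny / nv) * v).reshape(-1, 3)

    # Pose part, Eq. (4.18): product of matrix exponentials applied to the mean pose
    T = mu_pose.copy()
    for a, pg in zip(alpha_p, pose_pgs):
        T = T @ expm(a * pg)

    # Eq. (4.17): apply the similarity transformation to the instantiated shape
    hom = np.hstack([shape, np.ones((shape.shape[0], 1))])
    return (hom @ T.T)[:, :3]
```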
4.4 Experiments and results
The statistical shape+pose model was constructed from the dataset containing the L1-L5 vertebrae of 32 patients, using the approach presented in Section 4.3.1. Each vertebra had 1500 vertices in the model. Figure 4.4 illustrates the changes in the shape of the multi-vertebrae model that result from changing the weights corresponding to the first two pose modes of variation by ±3√λ^p_k and the first two shape modes of variation by ±3√λ^s_k.

Figure 4.4: Graphical representation of the L1-L5 vertebrae variations described by changing the weights corresponding to the first two principal modes of pose and shape variation by ±3√λ^p_k and ±3√λ^s_k, respectively. The first mode of pose variation represents mainly the scaling and the lateral curvature, and the second mode represents the anterior-posterior curvature of the spine. The first and second modes of shape variation represent variations in the transverse processes and the vertebral body, respectively. For the sake of better visualization, the amount of shape variation is color coded on the mean shape. Units are in millimeters.

An experiment was performed to assess the correlation between the shapes of the vertebrae. Initially, correspondences between each pair of vertebrae were detected using a non-rigid registration [75]. Next, pair-wise correlations between the correspondences were computed and averaged. The average correlations between each pair of vertebrae are presented in Table 4.2. The correlation values suggest that the shapes of vertebrae closer to each other (e.g. L2 and L3) are more correlated than those farther from each other (e.g. L1 and L5).

Table 4.2: Pair-wise correlation between shapes of lumbar vertebrae.
        L2           L3           L4           L5
L1   0.98±0.02    0.98±0.04    0.96±0.08    0.94±0.10
L2                0.98±0.02    0.96±0.07    0.95±0.09
L3                             0.98±0.04    0.96±0.07
L4                                          0.97±0.04

To show the lack of correlation between the pose and the shape variations, we computed pair-wise correlations between the variation mode scores of the training set and calculated R² (the square of the Pearson correlation) accordingly. All R² values were below 0.4, and only six variations had R² > 0.3, which indicates an insignificant relationship between the pose and shape statistics. Given these results, we performed separate analyses of pose and shape for the rest of this work.

The model is evaluated by computing the well-known measures of generalization, specificity, and compactness [102]. All measures are provided as a function of c, the number of modes used in the model instantiation. Results are shown in Figure 4.5. Compared to shape, the pose statistics are more compact, i.e. the variations of the pose can be represented with fewer modes (compare Figures 4.5a and 4.5b).
This is to be expected, since the number of pose parameters is significantly smaller than the number of shape parameters. According to the generalization measure, shown in Figure 4.5c, the model is capable of reconstructing a new observation with a mean distance error of around 1.4 mm when using the first 10 modes of variation. The specificity is below 2.5 mm and, as we observed in our results, the error doubles when extreme values of the different modes are used, i.e. at the boundaries of the (hyper-rectangular) space of allowable shapes and allowable poses.

We should also note that the proposed technique relies on performing linear analysis on data with large dimensions and few observations. This will reduce the quality of the model in terms of compactness and generalization; a larger training set may resolve this problem.

Figure 4.5: Generalization (for both mean error and Hausdorff distance), specificity, and compactness of the statistical multi-vertebrae shape+pose model. Error bars show the standard deviation divided by the square root of the number of trials.

4.5 Summary
To the best of our knowledge, this is the first time that a statistical multi-vertebrae model has been created. Our approach enjoys a joint representation of all vertebrae by embedding statistics derived from the curvature and shape of the spinal column into a single model. Our approach makes two assumptions. First, the shapes of the lumbar vertebrae are strongly correlated and can be learned together; since the shapes of all vertebrae are learnt together, no overlap should be observed if the coefficients of the shape variations stay within a reasonably small range, i.e. 3√λ. Second, the shape and the relative position/orientation of the vertebrae are not correlated, since the latter depends mainly on the patient posture during data acquisition.

Chapter 5
Segmentation of CT images using statistical multi-object model

This chapter is adapted from Rasoulian A., Rohling R. N., and Abolmaesumi P., Lumbar Spine Segmentation Using a Statistical Multi-vertebrae Anatomical Shape+Pose Model, IEEE Transactions on Medical Imaging, in press.

5.1 Introduction and background
As mentioned in Chapter 1, spinal needle injections are widely used for anesthetic, analgesic and diagnostic purposes [17]. A facet joint injection for the relief of chronic lower back pain is an example of such procedures, performed in large numbers of hospitals and radiology clinics. Facet joint injection requires careful placement of the injection needle, both to ensure effective therapy delivery and to avoid damaging the nerves of the spinal cord. The current gold standard for the procedure is to use fluoroscopy (or CT) for guiding the injection. However, fluoroscopic needle guidance for anesthesia delivery has significant drawbacks, particularly the significant dose of ionizing radiation and the need for a specialized pain management clinic with access to fluoroscopy equipment. To alleviate these issues, several new image-guided techniques have recently been proposed [19, 23, 73, 116]. In these techniques, a segmented anatomical model from CT is used either separately or in conjunction with other image modalities, such as ultrasound, for planning and guiding the intervention.
However, the key step for smooth integration of such techniques within a clinical workflow is a robust segmentation of CT images to create the 3D anatomical model of the spine. Unfortunately, despite the high contrast of bony structures in CT images, the segmentation task remains challenging due to the presence of unclear boundaries, the complex structure of vertebrae, and substantial inter-subject variability in performing and assessing segmentation [60].

Several semi-automatic and automatic methods have been proposed for vertebral column segmentation from CT images [20, 60, 69, 70, 95, 97, 110]. Most methods consist of two steps: 1) identification of vertebrae, and 2) separate segmentation of each vertebra. The identification may be performed manually by using seeds placed within the vertebral body [70] or by drawing contours [20]. There are also reports on automatic detection of vertebrae [95, 97, 110]. In most automatic techniques, the spine curvature is initially extracted and then vertebrae are detected by finding the inter-vertebral disks [57, 81, 97], searching for some unique characteristics of vertebra shape [60, 95], or predicting the vertebra locations using image features extracted from the entire image [44]. Techniques have been developed that unify these two steps by employing a local articulated spine model [56, 120]. Notably, Kadoury et al. cluster the training subjects (vertebral columns extracted manually from the CT images) into sub-groups using manifold learning, and construct linear point distribution models for each sub-group. An unseen target is then segmented by pairwise selection of the closest data from the training set based on measures defined on the manifold. Although the results are promising, this approach requires a training data set large enough to cover the range of feasible postures. Several techniques have also been proposed for the segmentation of a single vertebra. A typical approach is to use a template (or a mean shape) for initialization and warp it until it fits the edge of the vertebra [20, 70, 97]. Others use Active Appearance Models (AAM) to account for the inhomogeneity of the vertebra intensity in CT images and also to incorporate the appearance statistics of the vertebrae in the segmentation task [56, 90]. Another approach addresses this problem by fitting a 3D superquadric model to the vertebral body [100].

Separate segmentation of the individual vertebrae has several disadvantages. A clear boundary may not exist in regions between two vertebrae, such as regions near the intervertebral disk and facet joints, which may lead to mis-segmentation of these areas or an overlap between the segmentations of neighbouring vertebrae. Moreover, some anatomically useful information, such as the common shape variations among vertebrae, is discarded. Although the former problem has been tackled by Klinder et al. [60] by simultaneous segmentation of all vertebrae and applying a penalty to overlapping areas, the latter has not yet been addressed.

To alleviate these issues, we propose to segment a section of the vertebral column, containing the target facet joint, using the statistical multi-object shape+pose model presented in Chapter 4.

(This chapter is adapted from Rasoulian A., Rohling R. N., and Abolmaesumi P., Lumbar Spine Segmentation Using a Statistical Multi-vertebrae Anatomical Shape+Pose Model. IEEE Transactions on Medical Imaging, in press.)
Figure 5.1: Examples of results obtained with the Canny edge detector on sample CT images of the spine.

5.2 General description of the registration method

A preprocessing step is performed to extract edges from the CT images of the lumbar spine. To this end, the CT images are smoothed using a Gaussian filter with a standard deviation of 1 mm. Next, the CT images are thresholded at a value of 100 Hounsfield units and a Canny edge detector [21] is applied to the 2D transverse planes. The result is a point set Z (consisting of points z_m) representing a rough segmentation of the spinal column. Examples of such segmentations are shown in Figure 5.1. The model is then initialized by a single point selected by the user within the middle vertebra. Since all of our CT images are captured from patients in a supine position, the orientation of the spinal column is known a priori (i.e. along the z-axis). We did not observe any segmentation error related to scale variation in our data set; however, this might become an issue when the method is applied to cases that are not represented in the training set, e.g. children.

To register the model to the extracted edges, Z, we build upon the Expectation Maximization (EM) framework presented in Chapter 3. The target is assumed to be an observation generated by a GMM, which is defined by the shape+pose model points as its centroids. The EM algorithm is used to maximize the likelihood of the observation by finding appropriate parameters for the modes of variation of the model and the extra non-rigid transformation.

Assuming t_{ln} to be the nth point of the model belonging to the lth object, the expectation step is performed by computing the probability of how likely a point in the model is to generate a point in the target:

P(t_{ln} \mid z_m) = \frac{\exp\!\big(-\tfrac{1}{2\sigma^2}\,\|z_m - (\Phi(t_{ln}; w^s, w^p) + \delta_{ln})\|^2\big)}{\sum_{l=1}^{L}\sum_{j=1}^{N_l}\exp\!\big(-\tfrac{1}{2\sigma^2}\,\|z_m - (\Phi(t_{lj}; w^s, w^p) + \delta_{lj})\|^2\big)}.    (5.1)

Here, \Phi(t_{ln}; w^s, w^p) is the transformation of point t_{ln} given the parameters of the shape and the pose variations. The displacement vector, \delta_{ln}, defined for each point on the model, is an additional non-rigid transformation added to the model to compensate for variations that are not within the statistics represented by the model. The standard deviation of all the Gaussian components, \sigma, is chosen initially to be large. Its value decreases in each iteration. In the maximization step, the following objective function is minimized:

Q = \sum_{l=1}^{L}\sum_{m,n=1}^{M,N_l} P(t_{ln} \mid z_m)\,\big\|z_m - \big(\Phi(t_{ln}; w^s, w^p) + \delta_{ln}\big)\big\|^2 + \gamma_s\, (w^s)^{T}\Lambda_s\, w^s + \gamma_p\, (w^p)^{T}\Lambda_p\, w^p + \gamma\,\|L\delta\|^2.    (5.2)

The three latter terms are Tikhonov regularizations over the shape, pose, and non-rigid transformation, and are controlled by the trade-off variables \gamma_s, \gamma_p, and \gamma, respectively. The matrices \Lambda_s and \Lambda_p are diagonal with elements 1/\lambda^s_c and 1/\lambda^p_c, the corresponding eigenvalues of the shape and pose PGs, respectively. The regularization over the non-rigid transformation is the same as that suggested by Myronenko et al. [75], which was proposed for constraining the non-rigid registration of two point sets. Briefly, the operator L intensifies the energy at high frequencies. In our implementation, it is represented by a Gaussian kernel with standard deviation \beta, set initially by the user. Although the value of \beta depends on the size of the shapes, we set it to 5 mm for all the vertebrae.

The cost function is minimized alternately with respect to the model coefficients and the non-rigid transformation. The model coefficients are found using the Quasi-Newton method.
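For concreteness, the expectation step of Equation (5.1) amounts to computing normalized Gaussian affinities between the (transformed and deformed) model points and the CT edge points. The following is a minimal sketch under that reading, not the thesis code; the point sets here are synthetic.

```python
# Minimal sketch of the E-step in Eq. (5.1): soft correspondences between
# model points and target edge points using an isotropic Gaussian kernel.
import numpy as np
from scipy.spatial.distance import cdist

def e_step(model_pts, target_pts, sigma):
    """model_pts: (N, 3) transformed (and deformed) model points;
    target_pts: (M, 3) edge points Z extracted from CT;
    returns P with P[n, m] = P(t_n | z_m), each column summing to one."""
    d2 = cdist(model_pts, target_pts, 'sqeuclidean')            # (N, M)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / (w.sum(axis=0, keepdims=True) + 1e-12)           # normalise per target point

# Toy usage
rng = np.random.default_rng(1)
model = rng.normal(size=(100, 3))
target = model[:40] + 0.1 * rng.normal(size=(40, 3))
P = e_step(model, target, sigma=1.0)
assert np.allclose(P.sum(axis=0), 1.0)
```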
Ignoring the non-rigid transformation, we expand Equation (5.2):

Q = \sum_{l=1}^{L}\sum_{m,n=1}^{M,N_l} P(t_{ln} \mid z_m)\,\big(z_m^{T} z_m - 2 z_m^{T}\Phi(t_{ln}; w^s, w^p) + \Phi(t_{ln}; w^s, w^p)^{T}\Phi(t_{ln}; w^s, w^p)\big) + \gamma_s\, (w^s)^{T}\Lambda_s\, w^s + \gamma_p\, (w^p)^{T}\Lambda_p\, w^p.    (5.3)

The derivative with respect to the shape parameter w^s_c is

\frac{\partial Q}{\partial w^s_c} = \sum_{l=1}^{L}\sum_{m,n=1}^{M,N_l} P(t_{ln} \mid z_m)\Big[\big(\Phi(t_{ln})^{T} - z_m^{T}\big)\frac{\partial \Phi(t_{ln})}{\partial w^s_c}\Big] + \gamma_s\,\Lambda_{s\,c,\cdot}\, w^s,    (5.4)

where \Lambda_{s\,c,\cdot} is the cth row of the matrix \Lambda_s. The same can be obtained for the pose parameters:

\frac{\partial Q}{\partial w^p_c} = \sum_{l=1}^{L}\sum_{m,n=1}^{M,N_l} P(t_{ln} \mid z_m)\Big[\big(\Phi(t_{ln})^{T} - z_m^{T}\big)\frac{\partial \Phi(t_{ln})}{\partial w^p_c}\Big] + \gamma_p\,\Lambda_{p\,c,\cdot}\, w^p.    (5.5)

The partial derivatives can be found separately for the shape and pose parameters. Following Equation (4.18), we have

\frac{\partial \Phi^p_l(\cdot\,; w^p)}{\partial w^p_c} = \bar{\Phi}^p_l\Big(\prod_{i=1}^{c-1}\exp\!\big(w^p_i v^p_{i,l}\big)\Big)\, v^p_{c,l}\,\Big(\prod_{i=c}^{C'}\exp\!\big(w^p_i v^p_{i,l}\big)\Big).    (5.6)

Following Equations (4.15) and (4.19), the partial derivative with respect to the shape parameters can be written as

\frac{\partial \Phi^s_l(w^s)}{\partial w^s_c} = \cos\!\Big(\frac{\|h\|}{\|\mu\|}\Big)\frac{(v^s_{c,l}\cdot h)}{\|h\|^{2}}\, h + \sin\!\Big(\frac{\|h\|}{\|\mu\|}\Big)\Big[-\frac{(v^s_{c,l}\cdot h)\,\|\mu\|}{\|h\|^{3}}\, h + \frac{\|\mu\|}{\|h\|}\, v^s_{c,l} - \frac{(v^s_{c,l}\cdot h)}{\|h\|\,\|\mu\|}\,\mu\Big],    (5.7)

where h = \sum_{c=1}^{C'} w^s_c v^s_{c,l} and \mu = \bar{\Phi}^s_l denotes the mean shape.

The non-rigid transformation, \delta_{ln}, is also updated in each iteration. For fixed values of the shape and pose coefficients, the optimization of the cost function becomes a non-rigid registration with respect to \delta_{ln}, which is solved using the closed-form solution proposed by Myronenko et al. [75].

5.3 Experiments and results

5.3.1 Validation criteria

Evaluations are based on the distances between the registered model and the triangulated surfaces. The shortest distances between vertices of the two surfaces are computed. The RMS, mean, median, 95th percentile, and the maximum (referred to as the Hausdorff distance) of these distances are reported.

Both accuracy and capture range are computed for the segmentation of the CT images using our proposed technique. The capture range is defined as the maximum possible distance between the center of mass of the middle vertebra (L3) in the model and in the manual segmentation before the registration, for which the registration error is less than 2 mm in at least 90% of all cases. Since the narrow spaces between the facets are between 2 and 4 mm [80], and a target accuracy of 5 mm has been reported to be sufficient for facet joint injections [49], we expect that an accuracy of 2 mm should be accurate enough for clinical procedures.

5.3.2 Segmentation of CT images using the multi-vertebrae model

CT images are randomly divided into two sets. One set, consisting of five CT images, is used to optimize the trade-off variables of the algorithm. The rest (27 CT images) are used to measure the segmentation accuracy and its capture range. Since the size of the training set is limited, in all experiments explained in the following sections the model is reconstructed using all manual segmentations excluding the manual segmentation of the target (i.e. using 31 CT images).

5.3.3 Selection of the trade-off variables of the registration algorithm

The appropriate choice of the trade-off variables (\gamma_p, \gamma_s, and \gamma) and the number of modes, c, was found by fitting the model to a subset of five CT images with different values of these variables. The mean distance error of the fitted model was computed. Figure 5.2 shows the distance error versus a range of values of the trade-off variables.

Constraints on the pose, shape, and non-rigid transformation. The distance error increases as the values of \gamma_s and \gamma decrease, since the model is then more likely to deviate from the mean shape. The same is observed for large values of these variables, since the model is then forced to remain closer to the mean shape. However, this was not the case for the pose trade-off variable, \gamma_p; interestingly, the registration error remained below 1.2 mm for \gamma_p < 1, meaning that this constraint does not necessarily improve the segmentation accuracy.
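For reference, the point-to-surface statistics of Section 5.3.1 (RMS, mean, median, 95th percentile and Hausdorff distance) can be computed along the following lines, approximating each surface by its vertex set and querying a k-d tree for the shortest distances. This is a generic sketch, not the thesis implementation.

```python
# Generic sketch of the surface-distance statistics used for validation.
import numpy as np
from scipy.spatial import cKDTree

def surface_distance_stats(registered_vertices, reference_vertices):
    """Both inputs are (N, 3) vertex arrays; distances are one-directional,
    from the registered model to the reference surface, as in Section 5.3.1."""
    d, _ = cKDTree(reference_vertices).query(registered_vertices)  # shortest distances
    return {
        'mean': d.mean(),
        'rms': np.sqrt((d ** 2).mean()),
        'median': np.median(d),
        'p95': np.percentile(d, 95),
        'hausdorff': d.max(),
    }
```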
Figure 5.2: The mean distance error of the registration for different values of the trade-off variables: (a) pose trade-off, (b) shape trade-off, (c) deformation trade-off, and (d) number of modes. Registration results are averaged across all lumbar vertebrae.

Number of modes. Confirming the generalization capability of the model, the more modes, c, are used in the registration, the more accurate the segmentation becomes. However, for c > 10 the error remains almost the same (approximately 1.1 mm).

Table 5.1 gives an overview of all variables with a short description and their chosen values for the segmentation. We selected the number of target points to be M = 20,000. A larger number gives higher accuracy because it provides a more detailed description of the target. For a faster but subsequently less accurate segmentation, one should decrease this number (see the Discussion section).

Table 5.1: Values selected for the trade-off variables of the registration of the multi-vertebrae model to the CT images.
Variable    Description                                                  Value
\gamma_p    Constrains the coefficients for the pose modes               0
\gamma_s    Constrains the coefficients for the shape modes              10^3
\gamma      Constrains the smoothness of the non-rigid transformation    1
c           Number of modes used in the registration                     10
M           Number of target points                                      20,000

5.3.4 Segmentation accuracy

Leave-one-out experiments were performed on 27 CT images to assess the accuracy of the segmentation. The center of mass of the L3 vertebra in the model and in the manually segmented CT image were initially aligned and then the registration was performed. The statistics of the distance error for each vertebra are detailed in Table 5.2. Additionally, the errors for several subregions, including the articular process (the part of the vertebra that surrounds the facet joint), were calculated and are presented in Table 5.3. The errors are almost equally distributed among all extremities such as the spinous process, transverse processes and articular processes (< 2 mm). Moreover, the vertebral body shows a much smaller error than the other regions. The error reaches a minimum for the middle vertebra (L3). Examples of the segmentation results are shown in Figure 5.3.

The additional non-rigid transformation slightly improves the segmentation accuracy (see Figure 5.4 for an example). The mean distance error for registration without the non-rigid transformation is 1.65±0.44 mm, compared to a distance error of 1.38±0.56 mm achieved with the non-rigid transformation. The improvement in registration error is statistically significant (p < 0.05; using a paired t-test). The small difference between these errors confirms that the additional deformable registration step results in sub-millimetre changes to the segmented boundary achieved with the statistical shape+pose model alone.

Figure 5.3: Registration results and volume overlaps for four chosen volumes from different views. The registered model is highlighted in white. Dashed lines mark where the axial slices are located in the sagittal view. The red surface shows the manual segmentation.
Figure 5.4: Registered model with (white) and without (gray) the additional non-rigid transformation step. This example shows a location where the errors are especially large, in order to best demonstrate the additional deformation step.

5.3.5 Capture range

Leave-one-out experiments were performed to measure the sensitivity of the algorithm to the initial point selection, i.e. a point around the center of L3, which is marked by the user. Initially, the center of mass of the L3 vertebra in the model and the L3 vertebra in the manually segmented CT image were aligned. Next, a displacement ranging from 0 to 20 mm, in a random direction, was added to the model, followed by registration. Initial displacements were divided into bins of 2 mm width. For each bin, 27 experiments (one experiment per subject) were performed. The mean distance error against the initial displacement is shown in Figure 5.5a. The percentage of registrations with a mean distance error below 1.5 mm and 2 mm is shown in Figure 5.5b. The capture range is ±8 mm, which covers a reasonable area (16 mm) within the vertebrae, given that the inter-spinous distance is around 33 mm for the lumbar spine [87]. For images with low resolution, the selection of this point might become challenging since the capture range will be only a few voxels.

The current unoptimized MATLAB code, running on an Intel Xeon X5650 at 2.67 GHz, requires 15 minutes. This includes the initial segmentation of the CT images with Canny edge detection and the entire registration pipeline.

Table 5.2: Segmentation result demonstrated by the distances (in mm) between the fitted model and the manual segmentation. The median, 95th percentile, maximum (referred to as Hausdorff), mean and RMS of these distances are presented.
       Median       95th percentile   Hausdorff      Mean         RMS
L1     1.01±0.31    3.67±1.28         9.07±2.27      1.33±0.31    1.79±0.58
L2     0.93±0.25    3.23±1.08         8.97±3.00      1.19±0.33    1.59±0.46
L3     0.94±0.32    3.26±1.25         7.87±1.64      1.21±0.41    1.59±0.46
L4     1.04±0.40    3.79±1.53         8.41±1.95      1.37±0.53    1.83±0.70
L5     1.34±0.65    5.04±2.09         10.22±2.53     1.78±0.80    2.41±1.00
All    1.05±0.43    3.80±1.61         8.91±2.42      1.38±0.56    1.84±0.74

Table 5.3: Mean error (±standard deviation) of the registration in different regions for each vertebra (SP, spinous process; TP, transverse processes; AP, articular processes; VB, vertebral body).
       SP (mm)      TP (mm)      AP (mm)      VB (mm)
L1     1.96±1.59    2.27±1.22    1.49±0.82    0.69±0.35
L2     1.72±1.36    1.73±0.66    1.28±0.67    0.75±0.36
L3     1.17±0.63    1.37±0.71    1.44±0.87    0.71±0.34
L4     1.85±1.36    2.03±0.91    1.90±1.00    0.73±0.49
L5     2.35±1.17    2.48±1.31    2.51±1.59    0.68±0.30
All    1.81±1.22    1.97±0.96    1.72±0.99    0.71±0.36

5.4 Discussion

Our method successfully segments the entire lumbar vertebrae with a mean point-to-surface error of 1.38±0.56 mm. The surface error around the target facet joint/articular process is 1.72 mm, which is below the clinically accepted error [43]. The error for the other critical subregions of the spine is also below 2 mm.

Our algorithm requires three regularization parameters to constrain the shape and pose variations and the smoothness of the non-rigid transformation. Although the segmentation accuracy shows a global optimum when each parameter is changed, it also exhibits a lack of sensitivity to the selection of these parameters. The mean surface error remains below 2.5 mm for a wide range of values (see Figure 5.2).
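As a rough sketch of the capture-range experiment of Section 5.3.5, one random displacement per bin is applied before registration and the resulting error is compared with the 2 mm threshold. The function register_and_score is a placeholder for the registration and mean-surface-error evaluation described above; the binning and threshold follow the text, everything else is illustrative.

```python
# Illustrative sketch of the capture-range experiment (Section 5.3.5).
import numpy as np

def capture_range_trial(register_and_score, max_disp=20.0, bin_width=2.0, seed=0):
    rng = np.random.default_rng(seed)
    edges = np.arange(0.0, max_disp + bin_width, bin_width)
    success = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        direction = rng.normal(size=3)
        direction /= np.linalg.norm(direction)          # random unit direction
        magnitude = rng.uniform(lo, hi)                  # displacement inside this bin
        err = register_and_score(magnitude * direction)  # mean surface error (mm)
        success.append(err < 2.0)                        # 2 mm clinical threshold
    return edges, success

# Toy usage with a dummy scoring function (stand-in for the real registration)
edges, success = capture_range_trial(lambda d: 0.2 * np.linalg.norm(d))
```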
The work most similar to what is presented here is that of Klinder et al. [60], with a few key differences. Our approach unifies the detection, identification, and segmentation into a single step by using the shape prior and the pose statistics embedded in our model. Additionally, our method requires an initialization that is a single point marked by the user close to the center of mass of the L3 vertebra. On the other hand, Klinder's method is fully automatic, but requires a few extra steps for detection, identification, and extraction of vertebrae. Moreover, they do not take advantage of the shape statistics, and instead only perform a non-rigid registration of a template of a single vertebra to its isolated sub-volume. It should be noted that our achieved segmentation error is slightly worse (1.38 mm compared to 0.76 mm); however, both algorithms produce segmentation errors that are within the clinically acceptable range for facet joint injections. Our technique is also similar to a segmentation algorithm recently proposed by Kadoury et al. [56], since they represent the relative positions of the neighboring vertebrae on a manifold. However, they do not apply any statistical analysis to the pose information, but instead find the training subject with the spine curvature most similar to the target. The segmentation error achieved with our method is also comparable to the results reported in the literature, which are in the range of 0.95-1.5 mm [41, 56, 69].

Figure 5.5: a) Mean error and b) success rate for registration with different initial displacements. Registration results are averaged across all lumbar vertebrae.

The most computationally expensive component of our algorithm is the calculation of the likelihood function (Equation (5.1)). Therefore, the runtime is proportional to the number of target points. On the other hand, the accuracy decreases when fewer target points are used. Figure 5.6 shows the relation between the accuracy, the computation time, and the number of target points. The runtime is not directly comparable to Klinder et al.'s approach, because of differences in coding, implementation, hardware and number of image slices. The detection and identification steps in their algorithm take around 37 minutes, and the final fine-tuning of the segmentation takes 179 seconds. Conversely, our method uses a single point to manually identify a vertebra, and our mean error is still clinically acceptable (i.e. 2 mm) for run-times as low as 1 minute.

Figure 5.6: Time versus accuracy for different numbers of CT points used in the registration. Time is calculated from an unoptimized MATLAB implementation.

Previous work suggested that more than 12 modes are needed to represent 95% of the variations of a single vertebra [58]. Given that, at least 60 modes should be used to represent the shape variations of five vertebrae. For the task of registration, one must also compute the position and orientation of each vertebra (6 parameters), which increases the number of parameters to 90 (60 + 6 × 5). On the contrary, by using a multi-vertebrae shape+pose model, 95% of the shape variations are captured by only 25 shape modes, and the first 15 pose modes represent the entire range of possible relative positions and orientations of the vertebrae.
In our registration, however, weonly used the first 10 modes for both shapes and poses since the additionalnon-rigid transformation captured the small variations. The additional non-675.5. Summaryrigid transformation step slightly improves the segmentation accuracy. Thisinvolves small variations that are not included in our statistical model. How-ever, segmentation of pathological cases such as fractured/broken bone can-not be recovered with this transformation. Additionally, we believe thatenlarging the training set with patients with abnormalities in their spinecurvatures (such as scoliosis or kyphosis) will improve the segmentation re-sults for such cases. We should also note that the training set is based ononly 32 CT images which may not reflect the entire variability of L1-L5vertebrae. A larger dataset is expected to reduce errors further.5.5 SummaryA statistical multi-vertebrae model was created and registered to CT im-ages of spine. This model reduces the complexity of the segmentation bycombining many steps such as detection and identification of the individ-ual vertebrae. Our registration technique was inspired by an EM approach,where soft correspondences established between two point sets, facilitated arobust alignment.68Chapter 6Automatic labeling of CTimages6.1 Introduction and backgroundAccurate labeling (sometimes referred to as identification) and segmentationof individual vertebrae from Computed Tomography (CT) images is a nec-essary pre-processing step in a range of image-guided therapy applicationssuch as insertion of pedicle screws or spinal implants. It is also beneficialfor many other applications that use vertebrae as anatomical landmarks. Apractical use is in Picture Archiving and Communication Systems (PACS)that are widely used in hospitals. Such labeling system can aid practition-ers, to reduce clinical errors such as interventional procedure or surgery atan incorrect level.The automatic localization of individual vertebrae is challenging due to therepetitive nature of these structures and the variability of images in res-olution and field-of-view. The latter becomes more important when thereference structures that are typically used for labeling (e.g., the first cervi-cal or the first/last thoracic vertebra) are missing from the image.Previous research has addressed the labeling problem in X-ray images [9, 98,119], MR images [78, 82], and CT images [44, 50, 60, 69, 76]. In most tech-niques, vertebrae are detected and separated by either searching for inter-vertebral disks [50, 76, 78], performing Generelized Hough Transform [119],or searching for characteristics shapes such as ellipsoids fitted to spinal canaland vertebral body [50]. Labeling is then performed by searching for specificvertebrae, such as T12 using the ribs [76] or lumbosacral junction [29] andlabel the other vertebrae accordingly. There are also reports on labeling ofthe entire spinal column in full scans [50, 95].All mentioned approaches use a priori knowledge about the scan and pres-ence of particular vertebrae. To the best of our knowledge, only two workshave handled labeling of the vertebral column in arbitrary field-of-viewscans [44, 60]. Klinder et al. perform affine registration between templates696.1. Introduction and backgroundreconstructed for each level (T1, T2, etc) and each detected vertebra in theCT image. 
They combine the information for all vertebrae and use the ruleof consecutive ordering and report the best configuration as the labels [60].Given the large number of registrations, the identification phase is reportedto take up to 45 minutes for 12 thoracic vertebrae. Glocker et al. addressthe identification problem by performing a random forest classification onthe image features extracted from the entire image and predict the locationof each vertebrae accordingly [44]. Although the results are promising andcomputationally tractable, this technique might not be applicable to CTimages with a small field-of-view or those that are reduced in-plane show-ing only a small region around the vertebrae with little structural context.Moreover, this technique only provides labeling, and not the segmentation.In lack of image feature or contextual information, and confirming the workperformed by Klinder et al., we believe that the identification task shouldbe performed by shape comparison. Looking at Figures 6.1a and 6.1c, shapedifferences exist within the population of each vertebrae (we referred to itas intra-level statistics hereafter). However, some characteristic differencescan also be detected between the shape of different vertebrae (we referredto it as inter-level statistics hereafter). For example, lumbar vertebrae havesmaller spinous processes and wider vertebral bodies in comparison to tho-racic vertebrae. Differences can also be detected in other aspects of thedata. Looking at Figures 6.1b and 6.1d, the relative position/orientationof neighbouring vertebrae can be used for distinguishing vertebrae from dif-ferent levels, e.g., the spine curvature is usually larger around the lumbarvertebrae. A study performed by Boisvert et al. confirm such variability be-tween the relative position of the neighbouring vertebrae in different regionof the spinal column [13].In the previous chapter, we proposed a technique to segment the lumbar ver-tebrae by registration of a multi-vertebrae shape+pose model to CT images.Here we make two main contributions. First we propose a novel extensionto the model construction to include the statistics of not only a specificmultiple neighbouring vertebrae, e.g. L1-5, but any neighbouring verte-brae. In other words, we construct a statistical generic n-vertebrae modelthat reflects the variations in shapes of all the vertebrae (both inter-leveland intra-level statistics). Such a model can generate any combination ofn neighbouring vertebrae. Second, we develop techniques to use the modelfor labeling a target vertebrae. We also use this technique for automaticlabeling and segmentation of the entire vertebral column in CT images witharbitrary field-of-views. The automatic labeling and segmentation techniqueis evaluated on 56 CT images including 13 thoracic, 36 lumbar scans, and 7706.2. Labeling technique(a) (b)(c) (d)Figure 6.1: a) Three examples of T1 (first thoracic) vertebrae from threedifferent patient. b) An example of T1-T2-T3 vertebrae of a patient. c)Three examples of L1 (first lumbar) vertebrae from three different patients.d) An example of L1-L2-L3 vertebrae of a patient.full scans.6.2 Labeling techniqueIn this section we describe our technique for construction of the genericstatistical n-vertebra model. 
We then introduce our technique for labeling of arbitrary vertebrae, and also for labeling and segmentation of the entire vertebral column.

6.2.1 Construction of the generic statistical n-vertebra model

In Chapters 4 and 5 we presented a technique for the construction of a statistical multi-vertebrae shape+pose model and its registration to CT images. For the construction of the model, the pose statistics are separated from the shape statistics, since they are not necessarily correlated and do not belong to the same space. The result of the statistical analysis is the modes of variation for shape and pose, which are represented by v^s and v^p, respectively. A new instance of the model is generated by a linear combination of the modes of variation as follows:

S = \Phi\Big(\sum_{k=1}^{K} w^s_k v^s_k,\ \sum_{k=1}^{K} w^p_k v^p_k\Big).    (6.1)

The registration of this model to CT images is formulated as searching for the proper weights, (w^s_k, w^p_k), that best fit the model to the bone edges found in the CT image.

Klinder et al. performed the labeling of a vertebra in a CT image by assessing its similarity to templates representing the mean shape of each vertebra. Instead of one-to-one similarity assessment, one can build an SSM of all the vertebrae with the assumption that the differences between vertebrae lie in some of the modes of variation. However, registration of an SSM of a single vertebra is non-trivial. The registration process is susceptible to local minima due to the unclear boundary between neighboring vertebrae. Moreover, the statistics may not be distinguishable between close vertebrae since their shapes are very similar. To alleviate these problems, we propose to construct and align a generic statistical n-vertebra model, which can produce any n neighbouring vertebrae. Using the same construction technique explained above, the key novelty is that we form the training set to contain all possible complexes of n neighbouring vertebrae. An example of a training set for n = 3 is shown in Figure 6.2. We found n = 3 to be sufficient, although one can extend it to larger complexes for potentially better labeling.

Figure 6.2: An example of a training set for the construction of a generic 3-vertebrae model.

6.2.2 Labeling algorithm

Registration of the introduced model results in weights associated with the shape and pose variations that best fit the model to the target, [w^s, w^p]. These weights are then used as features for the labeling.

A non-linear Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel is used to label the subjects. Unlike some statistical classification methods, the SVM does not provide posterior class probabilities. Platt trained an SVM and then used the parameters of an additional sigmoid function to map the SVM outputs to posterior probabilities [86]. We use these posterior probabilities later to increase the labeling accuracy, as we detail below.

6.2.3 Labeling and segmentation of the entire vertebral column

Labeling of the vertebral column is performed as follows. Initially, a single point is selected on the vertebral column regardless of its level. To do this automatically, we use the Generalized Hough Transform (GHT) [5] to detect vertebrae. The result of the algorithm is a set of points, marked as the centers of vertebrae, together with their probabilities. At this step we require only one vertebra, and we pick the one with the strongest probability in the GHT.
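A hedged sketch of the classifier described in Section 6.2.2 is given below: an RBF-kernel SVM trained on the concatenated shape and pose weights, with Platt scaling providing the posterior probabilities that feed the labeling scheme described next. The scikit-learn call shown is one common way to obtain Platt-scaled posteriors; the feature matrix and class labels here are synthetic stand-ins, not thesis data.

```python
# Sketch of the RBF-SVM labeling classifier with Platt-scaled posteriors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_train = rng.normal(size=(200, 20))            # [w_s, w_p] weights from registration
y_train = np.tile(np.arange(17), 12)[:200]      # vertebral levels T1..L5 as class ids

clf = SVC(kernel='rbf', probability=True)       # probability=True enables Platt scaling
clf.fit(X_train, y_train)

w_new = rng.normal(size=(1, 20))                # weights of a newly registered vertebra
posteriors = clf.predict_proba(w_new)[0]        # one posterior probability per level
```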
Next, the generic 3-vertebrae model is initialized on the selected point and is registered using the algorithm explained earlier. Then, we perform an iterative technique for labeling the entire vertebral column. At each iteration, we use the latest segmented vertebra to locate the superior vertebra. The model is then registered to the located vertebra, which is labeled using the algorithm introduced in the previous section. At each iteration, we update a matrix, P, whose elements, p_{n,l}, are the posterior probabilities computed using the SVM and represent the similarity of the nth found vertebra in the CT image to level l (see Figure 6.3). For each diagonal of this matrix, the average posterior probability is calculated, and the diagonal with the highest mean posterior probability is selected as the true configuration of labels for the vertebrae segmented so far. The iterations are stopped when the latest vertebra is labeled as T1 or the model is out of the field-of-view. We perform the same iterations for the inferior vertebrae. The algorithm steps are shown in Figure 6.4.

Figure 6.3: The probability matrix indicating the similarity of the segmented vertebrae to each level. The best configuration is found by averaging the diagonals of the matrix and reporting the maximum.

6.3 Materials

6.3.1 Data

Experiments were carried out on CT images of 56 patients (36 lumbar, 13 thoracic, and 7 full scans), which include 301 thoracic and 168 lumbar vertebrae in total. The spines appear normal, though we observe some cases with mild scoliosis. Manual CT segmentations were performed using ITK-SNAP. For each subject, three independent segmentations (performed by three different users) were averaged using majority voting to form the final segmentation, which was then triangulated using the marching cubes algorithm [66]. The CT imaging resolution varied between subjects and ranged from 0.6 mm × 0.6 mm × 0.6 mm to 0.9 mm × 0.9 mm × 3.2 mm spacing.

6.3.2 Validation criteria

Evaluation of the labeling is based on the difference between the correct level and the one predicted by the classifier; e.g., for an actual level of T1 and a predicted level of T3, the error is 2 levels.

Segmentation results are evaluated based on the distances between the registered model and the triangulated surfaces of the manual segmentation. The shortest distances between vertices of the two surfaces are computed. The mean of these distances is reported as the segmentation error.

6.4 Experiments and results

6.4.1 Generic 3-vertebrae model

Using the manual segmentations of the vertebrae, the generic 3-vertebrae model is reconstructed. The model contains 4500 points (1500 points for each vertebra).
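Returning to the labeling rule of Section 6.2.3 and Figure 6.3, the selection of the best consecutive configuration of labels reduces to averaging the diagonals of the posterior matrix and taking the maximum. A small illustrative sketch, with a synthetic posterior matrix, follows; it is not the thesis code.

```python
# Sketch of the diagonal-averaging rule for selecting the label configuration.
import numpy as np

LEVELS = [f'T{i}' for i in range(1, 13)] + [f'L{i}' for i in range(1, 6)]

def best_configuration(P):
    """P: (n_found, n_levels) posteriors, rows ordered superior to inferior."""
    n_found, n_levels = P.shape
    offsets = range(n_levels - n_found + 1)               # possible starting levels
    scores = [np.mean([P[i, o + i] for i in range(n_found)]) for o in offsets]
    start = int(np.argmax(scores))
    return [LEVELS[start + i] for i in range(n_found)], max(scores)

# Toy example: three detected vertebrae whose posteriors peak around L2-L3-L4
P = np.full((3, 17), 0.01)
P[0, 13], P[1, 14], P[2, 15] = 0.6, 0.5, 0.7              # columns 12..16 are L1..L5
labels, score = best_configuration(P)                      # -> ['L2', 'L3', 'L4']
```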
Figure 6.4: A schematic diagram showing the steps of the proposed labeling and segmentation approach. Initially, a point is automatically found on the vertebral column (red cross). Next, a 3-vertebrae model is registered. The last vertebra of the registered model (red arrow) is then used to initialize the next model. The segmented vertebrae are labeled in each iteration. This process continues until it reaches the extents of the field-of-view or a vertebra is labeled as T1.

Figure 6.5: Graphical representation of the generic 3-vertebrae model described by changing the weights corresponding to the first two principal modes of pose and shape variation.

Figure 6.5 illustrates the changes in the shape of the model that result from changing the weights corresponding to the first two pose and shape modes of variation. 90% of the variations are achieved by the first 3 pose modes and the first 55 shape modes. As expected, the model is not compact in the shape space, since it represents the shape of an arbitrary vertebra.

6.4.2 Labeling and segmentation of the entire vertebral column

Leave-one-out experiments were performed on the CT images to assess the accuracy of the automatic labeling and segmentation. Segmentation was performed on the entire vertebral column using the algorithm explained in the previous section. Examples of the segmentation are shown in Figure 6.6. Construction of the probability matrix between the detected vertebrae and finding the best labeling configuration results in 70% correct identification. The others were labeled one level off. To assess the segmentation, the distances between the vertices of the manual segmentation and the fitted model were computed. The mean of the distances for each vertebra separately is detailed in Table 6.1.

Figure 6.6: Examples of segmentation results.

Table 6.1: Accuracy of the segmentation (in mm) for each vertebra.
T1         T2         T3         T4         T5         T6
3.3±0.7    2.2±0.2    2.5±1.0    2.0±0.5    1.9±0.6    2.2±0.4
T7         T8         T9         T10        T11        T12
2.1±0.7    2.5±1.0    2.6±0.8    1.7±0.4    2.0±0.5    2.7±0.8
L1         L2         L3         L4         L5         Average
1.9±0.5    1.5±0.4    1.8±0.6    2.2±0.9    2.8±0.8    2.1±0.7

The current unoptimized MATLAB code, running on an Intel Xeon X5650 at 2.67 GHz, requires one minute for the segmentation and labeling of each vertebra. Given that, the segmentation of a CT volume with thoracic vertebrae (12 vertebrae) takes around 12 minutes. This is approximately four times faster than previously reported work [60]. The segmentation error is 2.1 mm, which makes it suitable for a wide range of image-guided spinal interventions.

6.5 Discussion

The works most similar to the presented technique are those of Klinder et al. [60], Kadoury et al. [56], and Glocker et al. [44].
Our approach uses reg-istration of a statistical model to a target for the task of labeling ratherthan registration of several templates as performed by Klinder et al andpicking the best. Although we showed that the differences between levelsare embedded in the model variation, our approach produces larger errorsin labeling, if we compare our results of 70% correct identification, to the776.6. Summaryaccuracy of 95% reported by Klinder et al.. Moreover, our technique re-quires automatic detection of only a singe vertebra in comparison to theapproach of Klinder et al. where all vertebrae should be detected initially.This results in less computationally intensive algorithm (compare 12 min-utes in our approach to 45 minutes achieved by Klinder et al.). It should benoted that our achieved segmentation error is still worse (2.1 mm comparedto 0.76 mm); however, both algorithms produce segmentation errors thatare acceptable for many image-guided interventions. Our approach is alsonaturally different from the one presented by Glocker et al. since we use theshapes and relative pose of vertebrae for the task of labeling and not the in-formation achieved from the image context. Therefore, our approach worksmore robustly in CT images with very limited field-of-view. The labelingresults are not comparable since they do not report the level differences forthe mis-identified vertebrae. Besides, they do not perform any segmenta-tion. Our technique is also similar to the segmentation algorithm recentlyproposed by Kadoury et al. [56] since they represent the relative position ofthe neighboring vertebrae on a manifold. However, they do not apply anystatistical analysis on pose information but find a training subject with themost similar spine curvature to the target. In summary, our segmentationresults are better in terms of accuracy (2.1 mm compared to 2.5 mm) butworse identification of the vertebrae since they report a misidentification of12%. We have also evaluated our algorithm on larger dataset (56 patientscompared to 20 patients).One should also note that vertebral labeling is subject to error accordingto lumbosacral transitional vertebrae, which is defined as either sacraliza-tion of the lowest lumbar segment or lumbarization of the most superiorsacral segment of the spine. This spine anomaly is common in the generalpopulation, with a reported prevalence of 4% - 30% [61].6.6 SummaryWe proposed a novel technique for the construction of a generic n-vertebraemodel to incorporte the statistics within the vertebrae. This model enjoysjoint representation of n arbitrary neighbouring vertebrae by embeddingstatistics derived from a training set which contain thoracic and lumbar ver-tebrae. We showed that shape and pose characteristics of three neighbouringvertebrae can be used robustly for vertebra identification. We also used thegeneric n-vertebrae model to segment the entire vertebral column.78Chapter 7Registration of a statisticalmulti-object model toultrasound images7.1 Introduction and backgroundAs mentioned in Chapter 1, ultrasound imaging has been proposed for epidu-rals since it poses no known risk to the patient, making it the only modalitythat is feasible for obstetric anesthesiology. 
2D ultrasound imaging hasbeen demonstrated as a pre-puncture tool for measuring the distance fromthe skin to the epidural space (referred to as the epidural depth) and to helpdecide the puncture site [45].Ideally, ultrasound would also be used during needle insertion to visuallyconfirm the needle progressing toward and then entering the epidural spacecorrectly. Unfortunately, real-time guidance of a midline needle insertion ishindered by the fact that a standard 2D ultrasound transducer obscures thepuncture site, and moving the transducer to the side makes it impossible toview both the needle tip and the target together. The usual image-guidancesolutions based on tracking of both the needle and the ultrasound trans-ducer do not work in this application. The tracking sensor would need to beeither mounted on the base of the needle, which reduces the accuracy due toneedle bending, or placed inside the needle close to its tip, which prohibitsthe standard procedure of loss-of-resistance since the sensor does not allowpassage of the saline.A solution to these problems has been developed using a 3D motorized ultra-sound transducer equipped with a needle guide (see Figure 7.1b and 7.1c) [88].3D ultrasound is widely used in obstetrics and has several advantages suchas easier diagnosis of cleft palate [99]. 3D ultrasound also has the potentialThis chapter is adapted from Rasoulian A., Rohling R. N., and Abolmaesumi P.,Augmentation of Paramedian 3D Ultrasound Images of the Spine. Information Processingin Computer-Assisted Interventions (IPCAI), pages 51?60. 2013.797.1. Introduction and background(a) (b) (c)Figure 7.1: (a) Midline sagittal insertion of the needle. The horizontal ar-row shows the epidural width and the vertical arrow shows the epiduraldepth. (b) 3D motorized ultrasound probe equipped with a needle guide.(c) The relative positions of the 3D ultrasound probe, needle and verte-bra in superior-inferior view. The arrow shows the sweep direction. Theprobe should be placed to align the needle path (dashed blue line) with themid-sagittal plane. The red line shows the visible part of the vertebrae inultrasound images. No echoes appear on the gray area since the spinousprocess surface is not orthogonal to the transducer.to improve epidural catheterization and operation orientation of the verte-bral column [8]. In our solution, a virtual anterior-posterior plane (hereafterreferred to as a reslice plane) containing the needle path is extracted fromvolumes captured in real time by the transducer. Placing the transducerin a paramedian plane and consequently placing the needle in midline, thereslice plane depicts both the needle and the epidural space. Initial experi-ments on animals have shown the feasibility of this approach [88], but thereare still limitations of this technique. First, the ultrasound images are hardto interpret and require extensive training. Second, detection of the mid-sagittal plane is not trivial due to a lack of significant image features in thisplane.In Chapter 3, augmentation of 3D ultrasound has been explored by register-ing a statistical shape model of a single vertebra. The ultrasound volumeswere acquired with the probe centered on the midline and bony featuresfrom both side of the vertebrae were visible. Here we modify that approachand make another contributions. We perform the registration of a statisticalmulti-vertebrae model to single-sided paramedian ultrasound volumes. Reg-istration is challenging due to the lack of echoes on the contra-lateral side,807.2. 
and thus a lack of information on the symmetric shape of the vertebrae. In summary, the model is registered to the ultrasound images based on echoes that are typically visible in such ultrasound images, i.e. the laminae, articular processes, and transverse processes of one side of the vertebrae. We use the registered model to interpret the echoes in the ultrasound images and to predict the mid-sagittal plane. We validate our registration technique on synthetic data and 64 in vivo ultrasound volumes.

7.2 Registration technique

7.2.1 Enhancement of the ultrasound images

A preprocessing step is performed on the ultrasound images to extract the bone surface probability using high-intensity and shadow information. To extract the bone surface probability, we use an adaptation of a technique proposed by Foroughi et al. [40]. Initially, the image is filtered with a Laplacian of Gaussian (LoG) kernel to extract the high-intensity pixels, which typically have a larger probability of lying on the bone surface. The result is added to the blurred image. Next, the blurred image is convolved with a profile highlighting the shadow beneath a pixel. The shadow image is combined with the blurred image to generate the bone probability map.

7.2.2 Registration of the model to the ultrasound images

The multi-object statistical model can be instantiated by applying different weights to the PGs and combining them. Assuming that w^s_k is the weight applied to the kth shape PG and w^p_k is applied to the kth pose PG, the lth object of the model can be instantiated as follows:

s_l = \Phi_l(w^s, w^p) = \Phi^p_l\big(\Phi^s_l(w^s);\, w^p\big),    (7.1)

where \Phi^p_l(\cdot\,; w^p) and \Phi^s_l(\cdot) denote a similarity transformation and a shape, respectively, which are built by a combination of the pose and shape PGs with corresponding weights:

\Phi^p_l(\cdot\,; w^p) = \bar{\Phi}^p_l \prod_{k=1}^{K} \exp\!\big(w^p_k v^p_{k,l}\big) \quad \text{and} \quad \Phi^s_l(w^s) = \exp_{\bar{\Phi}^s_l}\!\Big(\sum_{k=1}^{K} w^s_k v^s_{k,l}\Big).    (7.2)

The registration is performed using the GMM-based technique proposed earlier in Chapter 3. In this iterative technique, soft correspondences are established between the surface of the model and the target, which is represented by a point set. Assume that the correspondence function, P(x_{ln}, y_m), is defined for the nth point of the lth anatomy on the model, x_{ln}, and the mth point of the target, y_m, and has a value between 0 and 1. The point set Y constitutes a partial surface of the vertebrae and is extracted from the ultrasound images as explained in the previous section. Additionally, the bone surface probability extracted from the ultrasound images is integrated into the correspondence function.

The model is then instantiated and rigidly transformed to minimize the following objective function:

Q = \sum_{l=1}^{L}\sum_{m,n=1}^{M,N} P(x_{ln} \mid y_m)\,\big\|y_m - \big(R\,\Phi(x_{ln}; w^s, w^p) + t\big)\big\|^2 + \gamma_s\, (w^s)^{T}\Lambda_s\, w^s + \gamma_p\, (w^p)^{T}\Lambda_p\, w^p,    (7.3)

where the two latter terms are the regularization over the PG weights, and the matrices \Lambda_s and \Lambda_p are diagonal with elements 1/\lambda^s and 1/\lambda^p, the corresponding eigenvalues of the shape and pose PGs, respectively. The matrix R and vector t are the rotation and translation of the rigid transformation, respectively. The optimization is performed using the Quasi-Newton method.

Note that the objective function is minimized with respect to the points of the model that are visible in the ultrasound volumes, i.e. the laminae, articular processes and transverse processes of one side only (see Figure 7.1b). This is the key challenge. Once registered, the model is used to estimate the location of the mid-sagittal plane. This is performed by fitting a plane to the tips of the spinous processes and the most anterior points of the vertebral bodies.
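An approximate sketch of the bone-surface enhancement of Section 7.2.1 is shown below, in the spirit of the technique of Foroughi et al. [40]: bright, ridge-like pixels that cast an acoustic shadow beneath them receive a high bone probability. The particular filters, window length and weighting are illustrative assumptions, not the exact formulation used in the thesis.

```python
# Approximate sketch of a bone-surface probability map for a 2D ultrasound frame.
import numpy as np
from scipy import ndimage

def bone_probability(us_image, sigma=2.0, shadow_len=40):
    """us_image: 2D B-mode frame with rows ordered by increasing depth."""
    img = us_image.astype(float)
    blurred = ndimage.gaussian_filter(img, sigma)
    ridge = np.clip(-ndimage.gaussian_laplace(img, sigma), 0, None)   # bright ridges
    # Mean intensity in a window strictly *below* each pixel (correlation is used so
    # that positive kernel offsets look toward larger depths).
    kernel = np.zeros((2 * shadow_len + 1, 1))
    kernel[shadow_len + 1:, 0] = 1.0 / shadow_len
    below = ndimage.correlate(blurred, kernel, mode='nearest')
    shadow = 1.0 - below / (blurred.max() + 1e-6)                     # dark below -> high score
    prob = (blurred + ridge) * shadow
    return prob / (prob.max() + 1e-6)
```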
7.3 Experiments and results

7.3.1 Synthetic data

To assess the performance of the registration of one side of the model to the corresponding ultrasound images in an ideal scenario, we constructed a synthetic data set using 32 CT scans and performed leave-one-out experiments. For each CT scan, the model is constructed using all the other CT images and is registered to a surface extracted from one side of the L1-L4 vertebrae of the target, including the laminae, articular processes and transverse processes. The surface error is then computed for the entire vertebrae and results in an RMS distance error of 2.2±0.6 mm. Figure 7.2 shows the distance error overlaid on the model. As expected, the registration error is smallest near the anatomy involved in the registration and increases further away. The registration error is largest around the spinous process, since its shape is not correlated with the laminae and transverse processes. This error is, however, not critical, since the epidural space is not close to the spinous process and it does not affect the needle insertion. Interestingly, the error is equally distributed in the other regions, i.e. the laminae and articular processes on the opposite side and the vertebral body.

Figure 7.2: Distance error overlaid on the model in the a) anterior-posterior and b) left-to-right views. The arrow points to the side used in the registration.

The registered model is then used to estimate the mid-sagittal plane and is compared against the mid-sagittal plane extracted from the manual segmentation. The normals of the two planes differ by 4.4±2.6 degrees, and the location of the epidural space in the registered model differs from the mid-sagittal plane of the manual segmentation by 1.3±1.2 mm. As we will mention in the Discussion, these errors effectively convey the ability to register the model using only a few features on only one side of the vertebral anatomy.

7.3.2 In vivo data

3D ultrasound volumes were captured by an expert sonographer using a Sonix Touch ultrasound machine (Ultrasonix Medical Corp., Richmond, BC) with a curvilinear 3D transducer, operating at 3.3 MHz with a depth of 7.0 cm. 80 frames were captured for each volume to obtain a 60 degree field of view. The 3D probe was tracked using an EM tracker (pciBird, Ascension Technology Corp., Burlington, VT, USA) and was calibrated using a double N-wire phantom with an RMS error of 1.7 mm [25]. The purpose of the tracking is only for validation of the model registration to the volumes on the contra-lateral side and for measurement of the true mid-sagittal plane. Written consent was obtained from eight volunteers. Ultrasound volumes were acquired in the prone position. For each subject, the intervertebral levels were found using 2D ultrasound and were marked on the skin. A magnetic sensor (referred to as the reference) was attached to the skin above the T12 vertebra to track the patient's movement.
We assumed that the spine curvature does not change during the entire scan. Four intervertebral levels (L1-L2, L2-L3, L3-L4, and L4-L5) were scanned. Three volumes were acquired from each level (see Figure 7.3): one paramedian volume from each side and one centered on the mid-sagittal plane (referred to as the centered volume). Note that bony features are most visible in the paramedian volumes, whereas the mid-sagittal plane is best detected in the centered volume. The ultrasound volumes were then transformed to the reference coordinate system using the position tracker measurements and the calibration of the 3D ultrasound probe.

Figure 7.3: Three volumes are acquired from each intervertebral level. Arrows show the plane visualized in each case. a) The model is registered to this paramedian volume (volume A). The red dashed line shows the sagittal slice of this volume shown together with the registered model. b) A transverse view of the centered volume (volume B) augmented with the registered model. c) A sagittal view of the contra-lateral volume (volume C) augmented with the registered model.

Registration accuracy

The bone surfaces were manually extracted for each paramedian volume. The model was registered to one of the paramedian ultrasound volumes (e.g. volume A in Figure 7.3). Examples of the registrations are shown in Figure 7.4. The current unoptimized MATLAB code requires 53 seconds to register the multi-vertebrae shape+pose model to 3D ultrasound images. The RMS and maximum (referred to as Hausdorff) distances between the manual segmentation of the ultrasound volume and the registered model are calculated and reported. We also report the error between the same registered model and the paramedian ultrasound volumes acquired from the contra-lateral side (e.g. volume C in Figure 7.3). Figures 7.3a and 7.3c show an example of the registration. Results are given in Table 7.1. As expected, the error is larger on the contra-lateral side, but remains below 4.4 mm.

Figure 7.4: Examples of the registration. The multi-vertebrae model is capable of capturing different sizes, shapes and poses of vertebrae.

Table 7.1: The registration error of the model to the paramedian volume. Values are given as mean±std (mm).
           Registration side           Contra-lateral side
           RMS          Hausdorff      RMS          Hausdorff
L1-L2      2.0±0.3      3.7±1.0        4.4±2.0      7.1±3.2
L2-L3      3.0±1.7      6.8±4.3        4.0±2.0      7.4±3.0
L3-L4      2.5±0.9      4.6±2.7        3.9±1.4      6.7±2.7
L4-L5      3.0±1.1      5.1±2.8        3.5±1.1      6.1±3.0
All        2.6±1.2      5.0±3.1        3.9±1.7      6.8±3.0

Detection of the mid-sagittal plane

The mid-sagittal plane is detected by fitting a plane to the points acquired by marking the symmetric features of the vertebral anatomy (i.e. laminae and transverse processes) and taking their average. The manually extracted plane is compared against the mid-sagittal plane extracted from the registered model. Similar to the synthetic data, the angle between these two planes and their separation at the depth of the epidural space are reported. Results are given in Table 7.2. Figure 7.3b shows an example of the registered model together with the centered volume.

Table 7.2: Error between the two mid-sagittal planes, one estimated from the registered model, and one extracted from the centered 3D ultrasound volume. Distance is defined as the distance between the two planes at the epidural depth.
                   L1-L2      L2-L3       L3-L4       L4-L5       All
Normal (degrees)   6.4±2.4    13.0±7.5    10.0±8.0    14.2±8.6    10.8±7.5
Distance (mm)      4.0±2.9    5.6±4.6     4.5±3.1     6.1±4.9     5.0±3.9

7.4 Discussion

It is expected that the multi-vertebrae model will be used to augment ultrasound image interpretation and to predict the mid-sagittal plane of the spine, but not to replace the standard technique for epidural needle placement such as loss-of-resistance.
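For completeness, the plane comparison reported in Table 7.2 can be computed along the following lines: a least-squares plane is fitted to each set of 3D landmarks, the angle between the two plane normals is taken, and the separation is measured at a point at the epidural depth. This is a generic sketch of one plausible evaluation, not the thesis code; the landmark sets and the epidural-depth point are assumed inputs.

```python
# Generic sketch of comparing two mid-sagittal planes (angle and separation).
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns (unit normal, centroid)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return vt[-1], c                      # normal = direction of smallest variance

def plane_angle_deg(n1, n2):
    cos_a = abs(float(np.dot(n1, n2)))
    return float(np.degrees(np.arccos(np.clip(cos_a, 0.0, 1.0))))

def point_to_plane(p, normal, centroid):
    return abs(float(np.dot(p - centroid, normal)))

# Usage: (n_est, c_est) from model landmarks, (n_ref, c_ref) from manual marking,
# p_epi = a point on the reference plane at the epidural depth.
# angle = plane_angle_deg(n_est, n_ref); dist = point_to_plane(p_epi, n_est, c_est)
```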
In this pilot study, we have demonstrated thatthe errors for registering the model to single-sided ultrasound volumes of thehuman spine have an average of 2.6 mm on the registration side and 3.9 mmon average on the contra-lateral side.The width of the ligamentum flavum covering the epidural space is reportedto be 6.8?1.9 [77]. The epidural depth varies between patients, and increaseswith obesity. In our experiments, the maximum epidural depth was 47 mm.Given these numbers and referring to Figure 7.1a, the safe needle insertionzone (represented by the gray area) confine the proper needle insertion to anangle of less than 8 degrees and 6.8 mm distance to the mid-sagittal plane.This suggests that the proposed method for mid-sagittal plane estimationhas the potential for successful midline epidural injection. The results canbe further improved by better edge enhancement in the ultrasound imagesand also using larger training set used for the construction of the model.867.5. Summary7.5 SummaryA new method was presented for alignment of the statistical multi-vertebraemodel to single-sided paramedian ultrasound volumes. Registration waschallenging due to the lack of echoes on the contralateral side, and thus alack of information on the symmetric shape of the vertebrae. Nevertheless,registration was achieved within an acceptable level of accuracy.87Chapter 8Conclusion and future workUltrasound is emerging as an effective tool for improving the guidance ofneedle insertion in epidural anesthesia. However, its use is hindered by thefact that the ultrasound images are hard to interpret and require extensivetraining. A solution to this drawback is the augmentation of ultrasoundimages with virtual representation of the anatomical target, i.e. vertebrae.These models can generated by extracting the target from various imag-ing modalities such as CT or MRI. Although significant advantages can beachieved by enhancing the ultrasound images with pre-operative CT or MRIimages, such images may not be obtainable, or require ionizing radiation.In this thesis, novel algorithms and methods were proposed with the ulti-mate goal of segmentation of CT images and augmentation of 3D ultrasoundimages of spine with statistical representation of vertebrae. In Chapter 2 agroup-wise technique was presented to find dense correspondences betweena set of instances. We then used the subsequent registration to generatea Statistical Shape Model (SSM) of a single vertebra. In Chapter 3 weintroduced a probabilistic technique for registration of the SSM to a newtarget. We then extended our technique in Chapter 4 to build a statisti-cal multi-vertebrae shape+pose model where pose and shape statistics wereseparated. In Chapter 5 we used the constructed model for segmentation ofthe vertebral column from CT images. We then extended our registrationtechnique in Chapter 7 to augment 3D ultrasound images acquired from thespine. We also used the proposed technique in Chapter 6 in another clinicalapplication: fully automatic labeling of the vertebrae in CT images.8.1 ContributionsThe contributions of this thesis are summarized as follows:? A novel technique is developed for group-wise registration of a groupof point sets which uses deformable registration to generate the meanshape and its variations. The registration is decoupled into two steps:registration using a non-rigid transformation and updating the mean888.1. Contributionsshape. 
The registration is performed using an EM algorithm, and the mean shape is updated using a quasi-Newton method. The non-rigid transformation is treated as a probability density estimation problem: the points of the mean shape (the moving point set) are assumed to be centroids of a Gaussian Mixture Model (GMM), and the points of the training sets (the fixed point sets) are the observations. The points of the moving set are forced to move in a locally coherent manner to preserve the topological structure, while maximizing the likelihood of the GMM generating the observations (a small illustrative sketch of this formulation follows this list).

• A novel technique is developed for alignment of a statistical shape model to a target. The technique utilizes the concept of soft correspondences to align the model, represented by a point set constituting its surface, to a target. Techniques are also developed for registering the SSM to ultrasound images. We show that the SSM can be robustly registered to ultrasound images even though not all features of the spine appear in them. Our feature-based technique is significantly faster than volumetric techniques and has the capability of being used in real-time guidance systems.

• A novel technique is developed for construction of a statistical multi-vertebrae shape and pose model. The shape statistics are separated from the pose statistics since they do not belong to the same space. These two statistics do not necessarily correlate, especially in the case of vertebrae, since the pose depends on external factors such as the posture of the patient during data acquisition. Separating the statistics gives a potentially more realistic model of the spine. The technique is capable of extracting common statistics of multiple objects and transferring them into a compact representation.

• A novel technique is developed for registration of the statistical multi-vertebrae shape+pose model to CT images for the task of segmentation, and this technique is extended to augment ultrasound images acquired from the vertebral column. We also extend the registration technique to single-sided paramedian ultrasound volumes. Registration is challenging due to the lack of echoes on the contra-lateral side, and thus a lack of information on the entire shape of the vertebrae. Nevertheless, registration is achieved within an acceptable level of accuracy for guided epidurals.

• A novel extension is made to the model construction to include the statistics of not only a specific set of neighbouring vertebrae, e.g. L1-L5, but any neighbouring vertebrae. In other words, we construct a generic statistical n-vertebrae model that reflects the variations in shapes and relative poses of all the vertebrae. Such a model can generate any combination of n neighbouring vertebrae. We also develop techniques to use the model for labeling a target vertebra. We use this technique for fully automatic labeling and segmentation of the entire vertebral column.
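To make the first contribution above more concrete, the sketch below shows one way a GMM/EM point-set registration with a locally coherent deformation can be written, in the spirit of coherent point drift [74, 75]. It is a minimal illustration rather than the implementation used in this thesis: it is written in Python/NumPy instead of MATLAB, it uses the standard closed-form coherent update of the moving points rather than the quasi-Newton mean-shape update described above, and the parameters (kernel width beta, regularization weight lam, outlier weight w) are assumed values.

```python
import numpy as np

def gaussian_kernel(Y, beta):
    """Motion-coherence kernel: G[i, j] = exp(-||y_i - y_j||^2 / (2 beta^2))."""
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * beta ** 2))

def coherent_gmm_registration(X, Y, beta=2.0, lam=2.0, w=0.1, n_iter=50):
    """Register the mean shape Y (M x D, the GMM centroids) to an observed
    point set X (N x D) with an EM algorithm and a locally coherent non-rigid
    deformation.  Returns the deformed mean shape T = Y + G @ W."""
    M, D = Y.shape
    N = X.shape[0]
    G = gaussian_kernel(Y, beta)
    W = np.zeros((M, D))
    # Initial isotropic variance of the mixture components.
    sigma2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum() / (D * M * N)
    for _ in range(n_iter):
        T = Y + G @ W
        # E-step: soft correspondences P[m, n] = p(centroid m | observation n).
        d2 = ((T[:, None, :] - X[None, :, :]) ** 2).sum(-1)              # M x N
        num = np.exp(-d2 / (2.0 * sigma2))
        c = (2.0 * np.pi * sigma2) ** (D / 2.0) * w / (1.0 - w) * M / N  # outlier term
        P = num / (num.sum(axis=0, keepdims=True) + c)
        # M-step: coherent (kernel-regularized) update of the moving points.
        P1 = P.sum(axis=1)                     # expected matches per centroid
        Np = P1.sum()
        A = G * P1[:, None] + lam * sigma2 * np.eye(M)
        W = np.linalg.solve(A, P @ X - P1[:, None] * Y)
        T = Y + G @ W
        # Update the isotropic variance of the mixture components.
        Pt1 = P.sum(axis=0)
        sigma2 = max((np.sum(Pt1 * (X ** 2).sum(-1))
                      - 2.0 * np.sum((P @ X) * T)
                      + np.sum(P1 * (T ** 2).sum(-1))) / (Np * D), 1e-8)
    return Y + G @ W
```

In the group-wise setting described in the first contribution, a step of this form would be run against each training point set, followed by the separate mean-shape update; the details of that update (solved with a quasi-Newton method in this work) are not reproduced here.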
8.2 Future work

Novel methods have been presented in this thesis for construction and registration of statistical models to CT and ultrasound images of the spine. A number of interesting areas of research can be suggested as follows:

• The proposed statistical multi-vertebrae model is limited in terms of representing the full range of variation of the vertebrae, both in shape and in pose. Including a greater number of subjects in the training set may resolve this problem. In order to further improve accuracy, one may also integrate alternative frameworks for statistical analysis such as non-linear PCA or Independent Component Analysis (ICA). Although we did not observe any overlap of neighbouring vertebrae after instantiating the model, the model does not theoretically prevent it. Addressing this problem could further improve the accuracy and robustness of the segmentation.

• The proposed technique for multi-vertebrae model generation and registration can be used to exploit the inherent quantitative joint statistics of the vertebral model for detection of spine abnormalities that are exhibited through geometric deviations. To this end, one should extend the data set to include pathological cases and develop techniques for interpretation and classification of these cases in a new CT image.

• Extension of the model to segment not only static images of the spine but also dynamic images may also be beneficial. This setting is achievable within our framework, since the pose statistics are already incorporated into the model. Moreover, only CT images acquired while the patient is in the supine position are used as the training set in this study. To cover all other spinal postures, one should add images captured from patients in other postures. Segmentation of dynamic images opens the possibility of performing kinematic analysis of the spine and all related applications.

• The proposed technique for ultrasound augmentation has to be integrated into the clinical workflow. The current limitation is the run-time of the registration, as a real-time guidance system is preferred. The computational effort can be reduced by implementing the code on a graphics processing unit or by reducing the resolution of the model. Although the latter may reduce the quality of the registration, the accuracy of lower-resolution models may remain within a clinically acceptable range.

• A natural extension of our technique is to upgrade our point-based model to an appearance model. Such models include statistics of the shape as well as of the intensity in a particular imaging modality. This may improve the results given the large inhomogeneity of the spine intensity in CT images. It would also help segmentation of the vertebral column in MRI images, given the unclear boundary of the vertebrae in these images.

Bibliography

[1] J. Abi-Nahed, M. Jolly, and G. Z. Yang. Robust active shape models: A robust, generic and simple automatic segmentation tool. In Medical Image Computing and Computer-Assisted Intervention–MICCAI, volume 4191, pages 1–8, 2006.
[2] N. Amenta, M. Bern, and M. Kamvysselis. A new Voronoi-based surface reconstruction algorithm. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, pages 415–421, 1998.
[3] B. Avants and J. C. Gee. Geodesic estimation for large deformation anatomical shape averaging and interpolation. NeuroImage, 23:S139–S150, 2004.
[4] S.K. Balci, P. Golland, M. Shenton, and W.M. Wells. Free-form B-spline deformation model for groupwise registration. In Medical Image Computing and Computer-Assisted Intervention–MICCAI, volume 10, page 23, 2007.
[5] D.H. Ballard. Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognition, 13(2):111–122, 1981.
[6] D. C. Barrat, C. S. K. Chan, P. J. Edwards, G. P. Penney, M. Slomczykowski, T. J. Carter, and D. J. Hawkes. Instantiation and registration of statistical shape models of the femur and pelvis using 3D ultrasound imaging. Medical Image Analysis, 12(3):358–374, 2008.
[7] D. C. Barratt, C. S. K. Chan, P. J. Edwards, G. P. Penney, M. Slomczykowski, T. J. Carter, and D. J. Hawkes. Instantiation and registration of statistical shape models of the femur and pelvis using 3D ultrasound imaging. Medical Image Analysis, 12(3):358–374, 2008.
[8] D. Belavy, M. J. Ruitenberg, and R. B. Brijball. Feasibility study of real-time three-/four-dimensional ultrasound for epidural catheter insertion. British Journal of Anaesthesia, 107(3):438–445, 2011.
[9] M. Benjelloun, S. Mahmoudi, and F. Lecron. A framework of vertebra segmentation using the active shape model-based approach. Journal of Biomedical Imaging, 2011(9):110–124, 2011.
[10] P.J. Besl and N.D. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 239–256, 1992.
[11] K.K. Bhatia, J.V. Hajnal, B.K. Puri, A.D. Edwards, and D. Rueckert. Consistent groupwise non-rigid registration for atlas construction. In IEEE International Symposium on Biomedical Imaging, pages 908–911, 2004.
[12] C. M. Bishop. Pattern Recognition and Machine Learning. Springer New York, 2006.
[13] J. Boisvert, F. Cheriet, X. Pennec, H. Labelle, and N. Ayache. Geometric variability of the scoliotic spine using statistics on articulated shape models. IEEE Transactions on Medical Imaging, 27(4):557–568, 2008.
[14] F. L. Bookstein. Landmark methods for forms without landmarks: morphometrics of group differences in outline shape. Medical Image Analysis, 1(3):225–243, 1997.
[15] M. Bossa and S. Olmos. Statistical linear models in Procrustes shape space. In 1st MICCAI Workshop on Mathematical Foundations of Computational Anatomy: Geometrical, Statistical and Registration Methods for Modeling Biological Shape Variability, pages 102–111, 2006.
[16] M.N. Bossa and S. Olmos. Multi-object statistical pose+shape models. In IEEE International Symposium on Biomedical Imaging - ISBI, pages 1204–1207, 2007.
[17] M.V. Boswell, J.D. Colson, W.F. Spillane, et al. Therapeutic facet joint interventions in chronic spinal pain: A systematic review of effectiveness and complications. Pain Physician, 8(1):101–14, 2005.
[18] A.D. Brett and C.J. Taylor. A method of automated landmark generation for automated 3D PDM construction. Image and Vision Computing, 18(9):739–748, 1999.
[19] P. Bruners, T. Penzkofer, M. Nagel, R. Elfring, N. Gronloh, T. Schmitz-Rode, R.W. Günther, and A.H. Mahnken. Electromagnetic tracking for CT-guided spine interventions: phantom, ex-vivo and in-vivo results. European Radiology, 19(4):990–994, 2009.
[20] S. S. C. Burnett, G. Starkschall, C. W. Stevens, and Z. Liao. A deformable-model approach to semi-automatic segmentation of CT images demonstrated by application to the spinal canal. Medical Physics, 31(2):251–263, 2004.
[21] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6):679–698, 1986.
[22] J.J. Cerrolaza, A. Villanueva, and R. Cabeza. Hierarchical statistical shape models of multiobject anatomical structures: Application to brain MRI. IEEE Transactions on Medical Imaging, 31(3):713–724, 2012.
[23] E. C. S. Chen, P. Mousavi, S. Gill, G. Fichtinger, and P. Abolmaesumi. Ultrasound guided spine needle insertion. In SPIE Medical Imaging, page 762538, 2010.
[24] T. Chen, B.C. Vemuri, A. Rangarajan, and S.J. Eisenschenk. Group-wise point-set registration using a novel CDF-based Havrda-Charvát divergence. International Journal of Computer Vision, 86(1):111–124, 2010.
[25] T. K. Chen, A. D. Thurston, R. E. Ellis, and P. Abolmaesumi. A real-time freehand ultrasound calibration system with automatic accuracy feedback and control. Ultrasound in Medicine & Biology, 35(1):79–93, 2009.
[26] K. J. Chin, M. K. Karmakar, and P. Peng. Ultrasonography of the adult thoracic and lumbar spine for central neuraxial blockade. Anesthesiology, 114(6):1459–1485, 2011.
[27] H. Chui and A. Rangarajan. A new point matching algorithm for non-rigid registration. Computer Vision and Image Understanding, 89(2-3):114–141, 2003.
[28] H. Chui, A. Rangarajan, J. Zhang, and C.M. Leonard. Unsupervised learning of an atlas from unlabeled point-sets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(2):160–172, 2004.
[29] M. P. Chwialkowski, P. E. Shile, D. Pfeifer, R. W. Parkey, and R. M. Peshock. Automated localization and identification of lower spinal anatomy in magnetic resonance images. Computers and Biomedical Research, 24(2):99–117, 1991.
[30] T. F. Cootes, A. Hill, C. J. Taylor, and J. Haslam. Use of active shape models for locating structures in medical images. Image and Vision Computing, 12(6):355–365, 1994.
[31] T.F. Cootes, C.J. Taylor, D.H. Cooper, J. Graham, et al. Active shape models - their training and application. Computer Vision and Image Understanding, 61(1):38–59, 1995.
[32] T.F. Cootes, C.J. Twining, K.O. Babalola, and C.J. Taylor. Diffeomorphic statistical shape models. Image and Vision Computing, 26(3):326–332, 2008.
[33] R.H. Davies, C.J. Twining, T.F. Cootes, et al. A minimum description length approach to statistical shape modeling. IEEE Transactions on Medical Imaging, 21(5):525–537, 2002.
[34] G. R. de Oliveira Filho. The construction of learning curves for basic skills in anesthetic procedures: an application for the cumulative sum method. Anesthesia & Analgesia, 95(2):411–416, 2002.
[35] I.L. Dryden and K.V. Mardia. Statistical Shape Analysis, volume 4. John Wiley & Sons, New York, 1998.
[36] S. Durrleman, X. Pennec, A. Trouvé, and N. Ayache. Statistical models of sets of curves and surfaces based on currents. Medical Image Analysis, 13(5):793–808, 2009.
[37] N. Duta and M. Sonka. Segmentation and interpretation of MR brain images: an improved active shape model. IEEE Transactions on Medical Imaging, 17(6):1049–1062, 1998.
[38] M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
[39] P.T. Fletcher, C. Lu, and S. Joshi. Statistics of shape via principal geodesic analysis on Lie groups. In IEEE Computer Vision and Pattern Recognition, volume 1, pages 95–101, 2003.
[40] P. Foroughi, E. Boctor, and M.J. Swartz. 2-D ultrasound bone segmentation using dynamic programming. In IEEE Ultrasound Symposium, pages 2523–2526, 2007.
[41] S. Ghosh, R. Alomari, V. Chaudhary, and G. Dhillon. Automatic lumbar vertebra segmentation from clinical CT for wedge compression fracture diagnosis. In Proceedings of SPIE Medical Imaging, volume 7963, page 796303, 2011.
[42] R. M. Giebler, R. U. Scherer, and J. Peters. Incidence of neurologic complications related to thoracic epidural catheterization. Anesthesiology, 86(1):55–63, 1997.
[43] S. Gill, P. Abolmaesumi, G. Fichtinger, J. Boisvert, D. Pichora, D. Borshneck, and P. Mousavi. Biomechanically constrained group-wise ultrasound to CT registration of the lumbar spine. Medical Image Analysis, 16(3):662–674, 2012.
[44] B. Glocker, J. Feulner, A. Criminisi, D. Haynor, and E. Konukoglu. Automatic localization and identification of vertebrae in arbitrary field-of-view CT scans. In Medical Image Computing and Computer-Assisted Intervention–MICCAI, pages 590–598, 2012.
[45] T. Grau, E. Bartusseck, R. Conradi, E. Martin, and J. Motsch. Ultrasound imaging improves learning curves in obstetric epidural anesthesia: a preliminary study. Canadian Journal of Anesthesia, 50(10):1047–1050, 2003.
[46] T. Grau, R. W. Leipold, R. Conradi, E. Martin, and J. Motsch. Efficacy of ultrasound imaging in obstetric epidural anesthesia. Journal of Clinical Anesthesia, 14(3):169–175, 2002.
[47] T. Grau, R.W. Leipold, S. Delorme, E. Martin, and J. Motsch. Ultrasound imaging of the thoracic epidural space. Regional Anesthesia and Pain Medicine, 27(2):200–206, 2002.
[48] T. Grau, R.W. Leipold, J. Horter, R. Conradi, E. Martin, and J. Motsch. The lumbar epidural space in pregnancy: Visualization by ultrasonography. British Journal of Anaesthesia, pages 798–804, 2001.
[49] M. Greher, L. Kirchmair, B. Enna, P. Kovacs, B. Gustorff, S. Kapral, and B. Moriggl. Ultrasound-guided lumbar facet nerve block: accuracy of a new technique confirmed by computed tomography. Anesthesiology, 101(5):1195–1200, 2004.
[50] S. Hanaoka, K. Fritscher, B. Schuler, Y. Masutani, N. Hayashi, K. Ohtomo, and R. Schubert. Whole vertebral bone segmentation method with a statistical intensity-shape model based approach. In SPIE Medical Imaging, page 796242, 2011.
[51] T. Heimann and H.P. Meinzer. Statistical shape models for 3D medical image segmentation: A review. Medical Image Analysis, 13(4):543–563, 2009.
[52] T. T. Horlocker, D. G. McGregor, D. K. Matsushige, D. R. Schroeder, and J. A. Besse. A retrospective review of 4767 consecutive spinal anesthetics: central nervous system complications. Perioperative Outcomes Group. Anesthesia & Analgesia, 84(3):578–584, 1997.
[53] P. J. Huber. Robust Statistics. Wiley, 1981.
[54] H. Hufnagel, X. Pennec, J. Ehrhardt, et al. Generation of a statistical shape model with probabilistic point correspondences and the expectation maximization-iterative closest point algorithm. International Journal of Computer Assisted Radiology and Surgery, 2(5):265–273, 2008.
[55] S. Joshi, B. Davis, M. Jomier, and G. Gerig. Unbiased diffeomorphic atlas construction for computational anatomy. NeuroImage, 23(1):151–160, 2004.
[56] S. Kadoury, H. Labelle, and N. Paragios. Automatic inference of articulated spine models in CT images using high-order Markov random fields. Medical Image Analysis, 15(4):426–437, 2011.
[57] B. M. Kelm, S. K. Zhou, M. Suehling, Y. Zheng, M. Wels, and D. Comaniciu. Detection of 3D spinal geometry using iterated marginal space learning. In Medical Computer Vision: Recognition Techniques and Applications in Medical Imaging, pages 96–105. Springer, 2011.
[58] S. Khallaghi, P. Mousavi, R. Gong, S. Gill, J. Boisvert, G. Fichtinger, D. Pichora, D. Borschneck, and P. Abolmaesumi. Registration of a statistical shape model of the lumbar spine to 3D ultrasound images. In Medical Image Computing and Computer-Assisted Intervention–MICCAI, volume 6362, pages 68–75, 2010.
[59] S. Khallaghi, P. Mousavi, R. Gong, et al. Registration of a statistical shape model of the lumbar spine to 3D ultrasound images. In Medical Image Computing and Computer-Assisted Intervention–MICCAI, volume 6362, pages 68–75, 2010.
[60] T. Klinder, J. Ostermann, M. Ehm, A. Franz, R. Kneser, and C. Lorenz. Automated model-based vertebra detection, identification, and segmentation in CT images. Medical Image Analysis, 13(3):471–482, 2009.
[61] G.P. Konin and D.M. Walz. Lumbosacral transitional vertebrae: classification, imaging findings, and clinical relevance. American Journal of Neuroradiology, 31(10):1778–1786, 2010.
[62] T. J. Lechner, M. G. van Wijk, Ad J. Maas, F. R. van Dorsten, R. A. Drost, C. J. Langenberg, L. J. Teunissen, P. H. Cornelissen, and J. van Niekerk. Clinical results with the acoustic puncture assist device, a new acoustic device to identify the epidural space. Anesthesia & Analgesia, 96(4):1183–1187, 2003.
[63] Y. Lee, M. Tanaka, and J. C. A. Carvalho. Sonoanatomy of the lumbar spine in patients with previous unintentional dural punctures during labor epidurals. Regional Anesthesia and Pain Medicine, 33(3):266–270, 2008.
[64] H. Li and O. Chutatape. Automated feature extraction in color retinal images by a model based approach. IEEE Transactions on Biomedical Engineering, 51(2):246–254, 2004.
[65] P. Lirk, J. Colvin, B. Steger, H. P. Colvin, C. Keller, J. Rieder, C. Kolbitsch, and B. Moriggl. Incidence of lower thoracic ligamentum flavum midline gaps. British Journal of Anaesthesia, 94(6):852–855, 2005.
[66] W.E. Lorensen and H.E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. In ACM SIGGRAPH Computer Graphics, volume 21, pages 163–169. ACM, 1987.
[67] C. Lorenz and N. Krahnstoever. Generation of point-based 3D statistical shape models for anatomical objects. Computer Vision and Image Understanding, 77(2):175–191, 2000.
[68] C. Lu, S. M. Pizer, S. Joshi, and J. Jeong. Statistical multi-object shape models. International Journal of Computer Vision, 75(3):387–404, 2007.
[69] J. Ma, L. Lu, Y. Zhan, X. Zhou, M. Salganicoff, and A. Krishnan. Hierarchical segmentation and identification of thoracic vertebra using learning-based edge detection and coarse-to-fine deformable model. In Medical Image Computing and Computer-Assisted Intervention–MICCAI, pages 19–27, 2010.
[70] A. Mastmeyer, K. Engelke, C. Fuchs, and W. A. Kalender. A hierarchical 3D segmentation method and the definition of vertebral body coordinate systems for QCT of the lumbar spine. Medical Image Analysis, 10(4):560–577, 2006.
[71] G. J. McLachlan and D. Peel. Finite Mixture Models. Wiley-Interscience, 2000.
[72] S. Milborrow and F. Nicolls. Locating facial features with an extended active shape model. In Computer Vision - ECCV 2008, volume 5305, pages 504–513, 2008.
[73] J. Moore, C. Clarke, D. Bainbridge, C. Wedlake, A. Wiles, D. Pace, and T. Peters. Image guidance for spinal facet injections using tracked ultrasound. In Medical Image Computing and Computer-Assisted Intervention–MICCAI, volume 5761, pages 516–523, 2009.
[74] A. Myronenko and X. Song. Point set registration: Coherent point drift. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(12):2262–2275, 2010.
[75] A. Myronenko, X. Song, and M. Carreira-Perpinan. Non-rigid point set registration: Coherent point drift. In Advances in Neural Information Processing Systems (NIPS), pages 1009–1016, 2007.
[76] B. Naegel. Using mathematical morphology for the anatomical labeling of vertebrae from 3D CT-scan images. Computerized Medical Imaging and Graphics, 31(3):141–156, 2007.
[77] R. Nickalls and M. Kokri. The width of the posterior epidural space in obstetric patients. Anaesthesia, 41(4):432–433, 1986.
[78] A. Oktay and Y. Akgul. Simultaneous localization of lumbar vertebrae and intervertebral discs with SVM based MRF. IEEE Transactions on Biomedical Engineering, in press, 2013.
[79] B. Ozgur, E. Benzel, and S. Garfin. Minimally Invasive Spine Surgery. Springer, 2012.
[80] M. Pathria, D.J. Sartoris, and D. Resnick. Osteoarthritis of the facet joints: accuracy of oblique radiographic assessment. Radiology, 164(1):227–230, 1987.
[81] Z. Peng, J. Zhong, W. Wee, and J. Lee. Automated vertebra detection and segmentation from the whole spine MR images. In IEEE Engineering in Medicine & Biology Society–EMBS, pages 2527–2530, 2005.
[82] Z. Peng, J. Zhong, W. Wee, and J. Lee. Automated vertebra detection and segmentation from the whole spine MR images. In IEEE Engineering in Medicine & Biology Society–EMBS, pages 2527–2530, 2006.
[83] X. Pennec. Intrinsic statistics on Riemannian manifolds: Basic tools for geometric measurements. Journal of Mathematical Imaging and Vision, 25:127–154, 2006.
[84] G. P. Penney, J. M. Blackall, M. S. Hamady, T. Sabharwal, A. Adam, and D. J. Hawkes. Registration of freehand 3D ultrasound and magnetic resonance liver images. Medical Image Analysis, 8(1):81–91, 2004.
[85] S. M. Pizer, P. T. Fletcher, and S. Joshi. Deformable m-reps for 3D medical image segmentation. International Journal of Computer Vision, 55(2):85–106, 2003.
[86] J. Platt et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, 10(3):61–74, 1999.
[87] H. Rafii-Tari, P. Abolmaesumi, and R. Rohling. Panorama ultrasound for guiding epidural anesthesia: A feasibility study. In Information Processing in Computer-Assisted Interventions, volume 6689, pages 179–189, 2011.
[88] A. Rasoulian, P. Abolmaesumi, R. Rohling, A. Kamani, L. Charles, and V. Lessoway. Porcine thoracic epidural depth measurement using 3D ultrasound resliced images. In Canadian Anesthesiologists Society Annual Meeting, 2011.
[89] D. L. Renfrew, T. E. Moore, M. H. Kathol, G. Y. El-Khoury, J. H. Lemke, and C. W. Walker. Correct placement of epidural steroid injections: fluoroscopic guidance and contrast administration. American Journal of Neuroradiology, 12(5):1003–1007, 1991.
[90] M. Roberts, T. Cootes, E. Pacheco, T. Oh, and J. Adams. Segmentation of lumbar vertebrae using part-based graphs and active appearance models. In Medical Image Computing and Computer-Assisted Intervention–MICCAI, pages 1017–1024, 2009.
[91] M. Rogers and J. Graham. Robust active shape model search. In Computer Vision - ECCV, volume 2353, pages 289–312, 2006.
[92] P. J. Rousseeuw. Robust Regression and Outlier Detection. Wiley, New York, 1987.
[93] D. Rueckert, A. F. Frangi, and J. A. Schnabel. Automatic construction of 3D statistical deformation models of the brain using nonrigid registration. IEEE Transactions on Medical Imaging, 22(8):1014–1025, 2003.
[94] W. Ruppen, S. Derry, H. McQuay, and R. Moore. Incidence of epidural hematoma, infection, and neurologic injury in obstetric patients with epidural analgesia/anesthesia. Anesthesiology, 105(2):394–399, 2006.
[95] S. Schmidt, J. Kappes, M. Bergtholdt, V. Pekar, S. Dries, D. Bystrov, and C. Schnoerr. Spine detection and labeling using a parts-based graphical model. In Information Processing in Medical Imaging–IPMI, volume 4584, pages 122–133, 2007.
[96] K. Seshadri and M. Savvides. Robust modified active shape model for automatic facial landmark annotation of frontal faces. In IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems, pages 1–8, 2009.
[97] H. Shen, A. Litvin, and C. Alvino. Localized priors for the precise segmentation of individual vertebras from CT volume data. In Medical Image Computing and Computer-Assisted Intervention–MICCAI, volume 5241, pages 367–375, 2008.
[98] P.P. Smyth, C.J. Taylor, and J.E. Adams. Automatic measurement of vertebral shape using active shape models. Image and Vision Computing, 15(8):575–581, 1997.
[99] H. Steiner, A. Staudach, D. Spitzer, and H. Schaffer. Diagnostic techniques: Three-dimensional ultrasound in obstetrics and gynaecology: technique, possibilities and limitations. Human Reproduction, 9(9):1773–1778, 1994.
[100] D. Štern, B. Likar, F. Pernuš, and T. Vrtovec. Parametric modelling and segmentation of vertebral bodies in 3D CT and MR spine images. Physics in Medicine and Biology, 56(23):7505, 2011.
[101] M. Styner, I. Oguz, S. Xu, C. Brechbuhler, D. Pantazis, J. J. Levitt, M. E. Shenton, and G. Gerig. Framework for the statistical shape analysis of brain structures using SPHARM-PDM. In Insight Journal, pages 1–20, 2006.
[102] M. Styner, K. Rajamani, and L. Nolte. Evaluation of 3D correspondence methods for model building. In Information Processing in Medical Imaging–IPMI, volume 2732 of Lecture Notes in Computer Science, pages 63–75. Springer Berlin / Heidelberg, 2003.
[103] D. Tran, K. W. Hor, A. A. Kamani, V. A. Lessoway, and R. N. Rohling. Instrumentation of the loss-of-resistance technique for epidural needle insertion. IEEE Transactions on Biomedical Engineering, 56(3):820–827, 2009.
[104] D. Tran, A. Kamani, and E. Al-Attas. Single-operator real-time ultrasound-guidance to aim and insert a lumbar epidural needle. Canadian Journal of Anesthesia, 57(4):313–321, 2010.
[105] D. Tran, A.A. Kamani, V.A. Lessoway, and R. Rohling. Preinsertion paramedian ultrasound guidance for epidural anesthesia. Anesthesia & Analgesia, 109(2):661–667, 2009.
[106] A. Tsai, W. Wells, C. Tempany, E. Grimson, and A. Willsky. Coupled multi-shape model and mutual information for medical image segmentation. In Information Processing in Medical Imaging–IPMI, volume 2732, pages 185–197, 2003.
[107] C. J. Twining, T. Cootes, S. Marsland, V. Petrovic, R. Schestowitz, and C. J. Taylor. A unified information-theoretic approach to group-wise non-rigid registration and model building. In Information Processing in Medical Imaging–IPMI, volume 3565, pages 1–14, 2005.
[108] C. J. Twining and C. J. Taylor. Kernel principal component analysis and the construction of non-linear active shape models. In British Machine Vision Conference, volume 1, pages 23–32, 2001.
[109] W. A. Visser, J. B. Kolling, G. J. Groen, E. Tetteroo, R. van Dijl, P. M. J. Rosseel, and N. J. M. van der Meer. Persistent cortical blindness after a thoracic epidural test dose of bupivacaine. Anesthesiology, 112(2):493–495, 2010.
[110] T. Vrtovec, B. Likar, and F. Pernuš. Automated curved planar reformation of 3D spine images. Physics in Medicine and Biology, 50:4527–4540, 2005.
[111] F. Wang, B.C. Vemuri, and A. Rangarajan. Groupwise point pattern registration using a novel CDF-based Jensen-Shannon divergence. Volume 1, pages 1283–1288, 2006.
[112] H. Wang, D.B. Stout, and A.F. Chatziioannou. Estimation of mouse organ locations through registration of a statistical mouse atlas with micro-CT images. IEEE Transactions on Medical Imaging, 31(1):88–102, 2012.
[113] Q. Wang, L. Chen, P.T. Yap, et al. Groupwise registration based on hierarchical image clustering and atlas synthesis. Human Brain Mapping, 31(8):1128–1140, 2010.
[114] R. E. Windsor, S. Storm, and R. Sugar. Prevention and management of complications resulting from common spinal injections. Pain Physician, 6(4):473–484, 2003.
[115] S. Winter, B. Brendel, I. Pechlivanis, K. Schmieder, and C. Igel. Registration of CT and intraoperative 3D ultrasound images of the spine using evolutionary and gradient-based methods. IEEE Transactions on Evolutionary Computation, 12(3):284–296, 2008.
[116] C.T. Yeo, T. Ungi, A. Lasso, R.C. McGraw, and G. Fichtinger. The effect of augmented reality training on percutaneous needle placement in spinal facet joint injections. IEEE Transactions on Biomedical Engineering, 58(7):2031–2037, 2011.
[117] A.L. Yuille and N.M. Grzywacz. The motion coherence theory. In International Conference on Computer Vision, pages 344–353, 1988.
[118] P. A. Yushkevich, J. Piven, H. C. Hazlett, R. G. Smith, S. Ho, J. C. Gee, and G. Gerig. User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. NeuroImage, 31(3):1116–1128, 2006.
[119] G. Zamora, H. Sari-Sarraf, and L. R. Long. Hierarchical segmentation of vertebrae from x-ray images. In Medical Imaging, pages 631–642, 2003.
[120] Y. Zhan, D. Maneesh, M. Harder, and X. S. Zhou. Robust MR spine detection using hierarchical learning and local articulated model. In Medical Image Computing and Computer-Assisted Intervention–MICCAI, pages 141–148. Springer, 2012.
[121] Z. Zhao and E.K. Teoh. A novel framework for automated 3D PDM construction using deformable models. In Proceedings of SPIE Medical Imaging, volume 5747, pages 303–314, 2005.
