UBC Theses and Dissertations


Joint multimodal registration of medical images to a statistical model of the lumbar spine for spine anesthesia. Behnami, Delaram, 2016


Joint Multimodal Registration of Medical Images to a Statistical Model of the Lumbar Spine for Spine Anesthesia

by

Delaram Behnami
B.A.Sc., The University of British Columbia, 2014

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Biomedical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

October 2016

© Delaram Behnami 2016

Abstract

Facet joint injections and epidural needle insertions are widely used for spine anesthesia. Needle guidance is usually performed by fluoroscopy or palpation, resulting in radiation exposure and multiple needle re-insertions. Several ultrasound (US)-based guidance approaches have been proposed to eliminate such issues. However, they have not been widely accepted in clinics due to the difficulty of interpreting the complex spinal anatomy in US, which leads to clinicians' lack of confidence in relying only on information derived from US for needle guidance. In this thesis, a model-based multimodal joint registration framework is introduced, in which a statistical model of the lumbar spine is concurrently registered to intraprocedure US and easy-to-interpret preprocedure images. The goal is to take advantage of the complementary features visible in US and preprocedure images, namely Computed Tomography (CT) and Magnetic Resonance (MR) scans. Two versions of a lumbar spine statistical model are employed: a shape+pose model and a shape+pose+scale model. The underlying assumption is that the shape and size of the spine of a given subject are common amongst all imaging modalities; however, the pose of the spine changes from one modality to another, as the patient's position differs between image acquisitions. The proposed method has been successfully validated on two datasets: (i) 10 pairs of US and CT scans and (ii) nine US and MR images of the lumbar spine. 
Using the shape+pose+scale model on the US+CT dataset, a mean surface distance error of 2.42 mm for CT and a mean Target Registration Error (TRE) of 3.14 mm for US were achieved. For the US+MR dataset, TREs of 2.62 mm and 4.20 mm were achieved for the MR and US images, respectively. Both models were equally accurate on the US+CT dataset; for US+MR, the shape+pose+scale model outperformed the shape+pose model. The joint registration allows augmentation of important anatomical landmarks in both the intraprocedure US and preprocedure domains. Furthermore, observing the patient-specific model in the preprocedure domains allows clinicians to qualitatively assess the local registration accuracy. This can increase their confidence in using the US registration for deriving needle guidance decisions.

Preface

This thesis is based on a manuscript resulting from the collaboration between multiple researchers. The publication has been modified to make the thesis coherent. The research conducted in this study was undertaken under the approval of the UBC Research Ethics Board, certificate numbers H13-01968 and H16-00341, and the Queen's University Research Ethics Board, certificate number SCOMP-003-07.

A preliminary version of the framework presented in Chapter 2, as well as the study on the CT+US image pairs in Chapter 3, has been published in:

• Delaram Behnami, Alexander Seitel, Abtin Rasoulian, Emran Mohammad Abu Anas, Victoria Lessoway, Jill Osborn, Robert Rohling, and Purang Abolmaesumi. "Joint registration of ultrasound, CT and a shape+pose statistical model of the lumbar spine for guiding anesthesia," International Journal of Computer Assisted Radiology and Surgery; 11(6): 937-45, 2016.

The author's contribution was in developing and implementing the presented joint and weighted joint frameworks, constructing the shape+pose+scale model, and evaluating the proposed solution on two datasets of US+CT and US+MR clinical images. The original shape+pose model of the lumbar spine was previously developed by Dr. 
Abtin Rasoulian. Professors Purang Abolmaesumi and Robert Rohling helped with their valuable suggestions for extending the methodology.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures

1 Introduction and Background
  1.1 Clinical Background
    1.1.1 Lumbar Spine Procedures
    1.1.2 Ultrasound Guidance and Its Challenges
    1.1.3 Spinal US Augmentation
  1.2 Thesis Objective
  1.3 Thesis Outline

2 Model-based Joint Registration Framework
  2.1 Statistical Multi-vertebrae Models of the Lumbar Spine
    2.1.1 Model Construction
    2.1.2 Shape+pose Model
    2.1.3 Shape+pose+scale Model
  2.2 Point-set Registration of Statistical Models to Targets
    2.2.1 Registration of the Statistical Models to a Single Imaging Modality
    2.2.2 Joint Registration of the Statistical Models to Multiple Imaging Modalities
    2.2.3 Parameter Optimization for Model-to-Target Registration

3 Joint Registration of Model to US+CT Images of the Lumbar Spine
  3.1 US and CT Datasets
  3.2 Joint Registration for US+CT Images
    3.2.1 Target Point Cloud Generation
    3.2.2 Geometric Initialization of the Model Instances on Targets
    3.2.3 Optimization
    3.2.4 Model Transformation
  3.3 Results
    3.3.1 Registration Parameter Selection
    3.3.2 Validation
    3.3.3 US+CT Joint Registration Results and Analysis
  3.4 Discussion and Summary

4 Joint Registration of Model to US+MR Images of the Lumbar Spine
  4.1 US and MR Datasets
  4.2 Joint Registration for US+MR Images
    4.2.1 Target Point Cloud Generation
    4.2.2 Model Initialization on Targets
    4.2.3 Weighted Joint Registration
  4.3 Results
    4.3.1 Registration Parameter Selection
    4.3.2 Validation
  4.4 Discussion and Summary

5 Conclusion
  5.1 Contributions
  5.2 Future Work

Bibliography

List of Tables

3.1 Breakdown of the fields of view of images in the CT dataset based on the visible lumbar vertebrae.
3.2 Distribution of US TRE registration errors (mm) achieved using the shape+pose model for the US+CT dataset, per region. No significant improvement was observed using the joint registration.
3.3 US TRE (mm) achieved using the shape+pose model for the US+CT dataset. In each experiment (each column), the shape parameter of the model is optimized from the modalities indicated in the first row of the table. The errors are reported for different regions of the vertebrae (AP, LA, SP, TP) and the average of all four.
3.4 CT SDE registration errors (mm) achieved using the shape+pose+scale model for the US+CT dataset. In each experiment (each column), the shape and scale parameters of the model are optimized from the modalities indicated in the first row of the table. The errors are reported for different regions of the vertebrae (AP, LA, SP, VB) and the average of all four.
3.5 US TRE (mm) achieved using the shape+pose+scale model for the US+CT dataset. In each experiment (each column), the shape and scale parameters of the model are optimized from the modalities indicated in the first row of the table. The errors are reported for different regions of the vertebrae (AP, LA, SP, TP) and the average of all four.
4.1 Breakdown of the fields of view of US images in the US+MR dataset based on the visible lumbar vertebrae.
4.2 Weights assigned to the different regions of the vertebrae.
4.3 Numbers of MR gold standard points used to report TRE values in MR space for the US+MR dataset. The numbers are provided per region for each subject.
4.4 Numbers of US gold standard points used to report TRE values for the US+MR dataset. The numbers are provided per region for each subject. The landmarks were selected by an expert sonographer. Zeros indicate the region was not visible to the sonographer.
4.5 MR TRE (mm) achieved using the shape+pose model for the US+MR dataset. In each experiment (each column), the shape parameter of the model is optimized from the modalities indicated in the first row of the table. The errors are reported for different regions of the vertebrae (AP, LA, SP, VB) and the average of all four.
4.6 US TRE (mm) achieved using the shape+pose model for the US+MR dataset. In each experiment (each column), the shape parameter of the model is optimized from the modalities indicated in the first row of the table. The errors are reported for different regions of the vertebrae (AP, LA, SP, TP) and the average of all four.
4.7 MR TRE (mm) achieved using the shape+pose+scale model for the US+MR dataset. In each experiment (each column), the shape and scale parameters of the model are optimized using the methods indicated in the first row of the table. The errors are reported for different regions of the vertebrae (AP, LA, SP, VB) and the average of all four.
4.8 US TRE (mm) achieved using the shape+pose+scale model for the US+MR dataset. In each experiment (each column), the shape and scale parameters of the model are optimized using the methods indicated in the first row of the table. The errors are reported for different regions of the vertebrae (AP, LA, SP, TP) and the average of all four.

List of Figures

1.1 Anatomy of the human lumbar spine. The full vertebral column consists of 33 vertebrae, including the five lumbar vertebrae highlighted in yellow (a). Different parts of an individual lumbar vertebra are shown in a cross-sectional view (b). (From www.biodigital.com and www.compelvisuals.com.)
1.2 Examples of typical hard-to-interpret US images of the lumbar spine. Only partial bone responses are present and different parts of the anatomy are hard to detect.
1.3 An overview of the state-of-the-art methods for augmenting anatomical information with spinal US. Methods (a), (b) and (c) are proposed by [12, 21, 35, 36], [10, 18, 25, 27], and [5] as well as this thesis, respectively.
2.1 First two modes of variation of shape ((a) and (b)) and pose ((c) and (d)) for the shape+pose model. Images on the left of each pair indicate −3√λ, and those on the right correspond to +3√λ. Values of λ are the eigenvalues determined for each eigenvector (PC) at training.
2.2 First two modes of variation of pose ((a), (b)) and the first mode of variation of scale (c) for the shape+pose+scale model. This model decouples the scale from the pose. Images on the left of each pair indicate −3√λ, and those on the right correspond to +3√λ. The shape variations are similar to those of the shape+pose model in Fig. 2.1. Values of λ are the eigenvalues determined for each eigenvector (PC) at training.
3.1 An overview of the proposed framework for joint registration of US, preoperative images and a statistical model.
3.2 Example snapshots of the registration results using the shape+pose model on CT (a) and US (b) images. The joint method (blue) performs better than the model-to-US technique (red). The yellow annotations on US depict the gold standard segmentation obtained by the sonographer.
3.3 Colormaps showing the difference between shape SDE values per surface point computed from US-to-atlas and joint registrations, using the segmented reference CT as the gold standard for patients 1 (a) to 10 (j). Blue regions show improvements achieved by the joint registration, where the US-to-atlas error is larger than the joint registration error.
3.4 Box plots showing the breakdown of TRE values obtained using the shape+pose model for US images (US+CT dataset) at different regions of the vertebrae (a) and the five different vertebrae (b). No significant improvements are observed.
3.5 Example snapshots of the registration results using the shape+pose+scale model on CT (a) and US (b) images. The joint method (blue) performs better than the model-to-US technique (red). The yellow annotations on US depict the gold standard segmentation obtained by the sonographer.
3.6 Colormaps showing the difference between shape SDE values per surface point computed from US-to-atlas and joint registrations, using the segmented reference CT as the gold standard for patients 1 (a) to 10 (j). Blue regions show improvements achieved by the joint registration, where the US-to-atlas error is larger than the joint registration error.
3.7 Box plots showing the breakdown of TRE values obtained using the shape+pose+scale model for US images (US+CT dataset) at different regions of the vertebrae (a) and the five different vertebrae (b). No significant improvements are observed.
3.8 Colormaps showing the difference between shape SDE values per surface point computed from US-to-atlas and joint registrations, using the segmented reference CT as the gold standard for patients 1 (a) to 10 (j). Blue regions show improvements achieved by the joint registration, where the US-to-atlas error is larger than the joint registration error.
3.9 Snapshots of a CT scan belonging to a scoliotic patient (P8), which proved challenging to register. The vertebrae seem to diverge from the vertical yellow line (left). The yellow line on the coronal view also marks the sagittal slice on the right. White arrows indicate the failed pose registration of the model.
4.1 An example MR image from the present dataset. The yellow dashed line on the sagittal image (right) indicates the location of the axial slice (left). The TPs are not captured in the field of view of these MR images. The sagittal slices are 3.5-4.4 mm thick in this dataset.
4.2 An example US image from the present challenging dataset, shown in three different views. The yellow and green dashed lines on the sagittal image (middle) indicate the locations of the axial (left) and coronal (right) slices, respectively. The shadow of the rib attached to T12 is used to locate L1. The blue lines are drawn to help visualize the landmarks.
4.3 The lumbar spine in the sitting position. Figure (a) shows that in the sitting pose the lumbar spine is almost straight but the L5-sacrum curve is sharper (courtesy of ergonut.com [22]). Figures (b) and (c) demonstrate the straight pose used for registering the shape+pose and shape+pose+scale models to the US images, respectively.
4.4 Example snapshots of the registration results using the shape+pose+scale model on MR (a) and US (b) images. The weighted joint method (blue) performs better than the model-to-US technique (red). The yellow annotations on US depict the gold standard segmentation obtained by the sonographer.
4.5 Box plots showing the breakdown of TRE values obtained using the shape+pose model for US images (US+MR dataset) at different regions of the vertebrae (a) and the five different vertebrae (b). No significant improvements are observed.
4.6 Example snapshots of the registration results using the shape+pose+scale model on MR (a) and US (b) images. The weighted joint method (blue) performs better than the model-to-US technique (red). The yellow annotations on US depict the gold standard segmentation obtained by the sonographer.
48xList of Figures4.7 Box plots showing the break-down of TRE values obtainedusing the shape+pose+scale model for US images (US+MRdataset) at different regions of the vertebrae (a) and the fivedifferent vertebrae (b). Significant improvements were ob-served at AP, SP, TP regions, as well as at L1 and L3. . . . . 49xiChapter 1Introduction andBackground1.1 Clinical Background1.1.1 Lumbar Spine ProceduresA human spine consists of 33 vertebrae including the cervical, thoracic andlumbar vertebrae as well as the sacrum and coccyx. The lumbar spineis located halfway at the bottom of the spinal column and is consisted offive vertebrae L1-L5 (Fig. 1.1(a)). The bone of each vertebra consists ofthe spinous process, the laminae, articular processes, and transverse pro-cesses(Fig. 1.1(b)).Facet joint injections and epidural needle insertions are common proce-dures performed on the lumbar spine, which aim to deliver anesthesia andanalgesia [8, 11]. Facet joints are the paired joint connecting the articularprocesses of two adjacent vertebrae, which allow the vertebrae to extendand flex [8]. The cartilage on facet joints can wear out and cause chronicback pains in some patients, a problem experienced at least once by 80%of the adult population [28]. To treat this condition, facet joint injectionsare performed on patients suffering from it. In these procedures, a localanesthetic is injected in the affected joints to numb them and hence, relievethe pain. These injections can also include steroids which help reduce theinflammation of the joints [4]. Epidural needle insertions, or simply epidu-rals, involve inserting specialized needles in the epidural space of the spine,the area between the vertebral walls and dura mater [19]. Every year, ap-proximately 3.2 million people undergo these procedures in North Americaalone [11]. The purpose of these procedures is to help alleviate chronic painsby blocking the nerve impulses and hence the sensation of pain in the de-sired regions. 
Depending on the target of the procedure, epidurals can be performed on different parts of the spine. Lumbar spine epidural anesthesia is also very popular amongst women, as 50% of women going into labor choose to receive this treatment to ease the pain of labor.

Figure 1.1: Anatomy of the human lumbar spine. The full vertebral column consists of 33 vertebrae, including the five lumbar vertebrae highlighted in yellow (a). Different parts of an individual lumbar vertebra are shown in a cross-sectional view (b). (From www.biodigital.com and www.compelvisuals.com.)

Lumbar spine needle insertion procedures are particularly challenging due to the proximity to nerve tissue, the deep location of the target, the small and narrow size of the channel between the articular processes of the joint, and the oblique entry angle [8]. Hence, careful needle placement is very important to ensure that no damage is done to the nerves and nearby soft tissue, and that the therapy is effective [37]. In traditional epidural anesthesia, needle insertion is performed with the help of manual palpation and the loss-of-resistance technique [19]. In this method, clinicians rely on the loss of pressure they feel as they insert the needle to know that the needle has entered the epidural space. This blind and consequently inaccurate technique can lead to longer needle paths, multiple insertions, or painful insertions through the muscle [5]. In both facet joint and epidural injections, inaccurate needle placement can cause complications such as accidental dural puncture (2.5% of all cases [1], and 3-5% in procedures performed by inexperienced operators [14]) and post-puncture headache (86%) [31]. Performing these procedures becomes even more challenging in the cases of obese patients, patients with abnormally curved spines (scoliotic patients), and those with previous back surgery [37].

1.1.2 Ultrasound Guidance and Its Challenges

To prevent the above complications, image-based guidance methods may be beneficial. The current standard of care for guiding facet joint injections is fluoroscopy, which provides real-time contrast images of the anatomy during a procedure [32]. However, this imaging modality imposes health risks, as it exposes both the patient and the surgeon to ionizing radiation. Moreover, the specialized equipment for fluoroscopy is expensive and has limited portability [37].

Consequently, ultrasound (US) has been suggested as an alternative modality for guiding these procedures. US imaging has many advantages, including its non-ionizing technology, real-time 3D acquisition capability, accessibility and low cost [37]. Many studies demonstrate the feasibility of US guidance for spine interventions [13, 20, 23, 34, 37]. However, at present, these methods are not commonly used in clinics because of the challenges in interpreting spinal US. Due to their physical nature, US images in general are often corrupted by significant amounts of noise and artifacts. Interpreting images of bony structures is especially challenging, as bone responses can appear blurry and disconnected. Also, as US waves do not penetrate bone, only the bone surfaces closest to the skin can be visible in US images. Finally, the complex anatomy of the spine adds to the difficulty of making sense of US images and the imaged anatomy. Figure 1.2 shows two example slices from a typical sagittal spinal US scan. 
As can be seen in these images, only faint bone responses are visible and it is not easy to identify which part of the lumbar spine they represent.

1.1.3 Spinal US Augmentation

With the goal of improving the interpretation of sonographic images, many works have proposed US augmentation by fusing these images with other forms of data. These fusion techniques can be divided into two categories: (i) fusion with anatomical information from preoperative Computed Tomography (CT) or Magnetic Resonance Imaging (MRI), and (ii) fusion with statistical anatomical models.

Figure 1.2: Examples of typical hard-to-interpret US images of the lumbar spine. Only partial bone responses are present and different parts of the anatomy are hard to detect. (Labeled structures include the laminae, posterior dura, ligamentum flavum, epidural space, spinous process and anterior complex.)

US Augmentation by Multi-modal Fusion

Several methods have focused on registering easy-to-interpret preprocedure CT images to intraprocedure US [12, 21, 35, 36] (Fig. 1.3(a)). For patients receiving routine injections for back pain, preprocedure CT or MR images are often available, as they were acquired for initial diagnosis [30]. Hence, such multi-modal methods can be feasible for US augmentation. Prior works on fusion of preoperative CT with US mostly focus on individual bones [29]. For registration of multiple bones, such as multiple vertebrae, Brendal et al. considered the ensemble of bone structures as a single rigid object [9]. Yan et al. attempted to independently register each individual bone [36]. Gill et al. proposed a multi-object registration, which used a biomechanical model for regularization [15]. Bø et al. attempted US augmentation for spine surgery by fusion with MR images [6]. This was performed by rigidly registering the bone responses in US to the pre-segmented MR image; an automatic segmentation followed by manual correction was done for this purpose. 
CT-based and MR-based methods require an accurate segmentation of the vertebrae in the preprocedure data, which cannot effectively fit into the clinical workflow. This is because manual segmentation of preprocedure images is very time-consuming and labour-intensive, and hence not practical for anesthesiologists or interventional radiologists in busy clinics. On the other hand, automatic segmentation of preprocedure data that is accurate enough for intraoperative registration to US within the operation time has proven challenging. Moreover, preprocedure images often cover only a portion of the anatomy that is imaged with US during the procedure for targeting. The limited fields of view of these images are mainly intended to minimize radiation dose in CT or to reduce acquisition time in MR. Hence, even in the presence of an accurate segmentation of the spine from the preprocedure CT or MR data, these images may not cover the entire region of interest on the lumbar spine for guiding anesthesia. Finally, although CT images can provide an improved anatomical representation to US, CT image acquisition is harmful due to the ionizing nature of this modality.

Figure 1.3: An overview of the state-of-the-art methods for augmenting anatomical information with spinal US. Methods (a), (b) and (c) are proposed by [12, 21, 35, 36], [10, 18, 25, 27], and [5] as well as this thesis, respectively.

Model-based US Augmentation

An alternative approach for augmenting spinal US images is to fuse them with statistical anatomical models of the imaged anatomy (Fig. 1.3(b)). Several works have developed statistical models of the spine for augmentation purposes. Boisvert et al. 
[7] constructed a statistical pose model that focused on detecting the relative positions of individual vertebrae with respect to their neighbors. In that work, the shape of the full spine was further studied and the pose model was used for registration to radiographic images. Our group has previously proposed several methods for registering a statistical atlas of the lumbar spine to spinal US [10, 18, 25, 27]. Khallaghi et al. [18] focused on creating statistical shape models for each vertebra. In order to impose constraints on the relative pose of neighboring vertebrae, a biomechanical model was further used, and the constrained shape atlas was then registered to US images. The problem with this method is that having multiple shape models ignores the similarities between the shapes of different vertebrae and makes the computations unnecessarily expensive. To address these problems, Rasoulian et al. [25] proposed a more complex multi-body shape+pose statistical model of the lumbar spine. This statistical model was developed by viewing the spine as a single unified structure constructed of five bones, which are allowed to move within a certain range. This multi-vertebrae shape+pose model was then used to segment CT images of the lumbar spine. In a later study, the same model was used for guiding anesthesia in real time in tracked 2D US [27]. Brudfors et al. further investigated registering this model to trackerless 3D US [10]. While promising, the above model-to-US registration methods have not been clinically accepted. This is because it is difficult to evaluate the patient-specific models that have been registered to US, since the problem of interpreting spinal US still exists. As a result, it is challenging for clinicians to decide whether or not the registration results are reliable and can be used as a basis for their guidance decisions. 
Furthermore, registration errors obtained using these methods vary in magnitude and location, and tend to be sensitive to several factors such as the registration parameters, initialization and regularization, and image quality. This sensitivity is mainly due to the presence of noise in US images and their sparsity in terms of anatomical information. Hence, for such methods to gain clinical acceptance, their accuracy and reliability must be improved. Furthermore, these methods should provide clinicians with additional confidence regarding the accuracy of the US augmentation, so that clinicians can determine whether or not guidance decisions can be based on the registration results.

1.2 Thesis Objective

The overall objective of this thesis is to propose an effective augmentation framework for guiding facet joint and epidural injections using preprocedure data and intraprocedure US. To ensure reliable spine registration in US, we present a joint framework for registering statistical models of the lumbar spine to preoperative images (CT or MR) and intraoperative US (Fig. 1.3(c)). Essentially, this framework combines the two fusion techniques described in Section 1.1.3 and attempts to eliminate the drawbacks of each. Including a statistical model eliminates the need for segmentation of the vertebrae, a challenge faced in US-CT or US-MR fusion. A joint framework makes it possible to take advantage of all available information, i.e., two or more imaging modalities of the same patient as well as prior information about the anatomy coming from a statistical atlas. This can lead to a more accurate localization of the lumbar spine in US images. On the other hand, involving multiple imaging modalities in the registration allows for a simultaneous visualization of the model in both the US and preoperative image spaces. 
Finally, this can help the clinician assess the registration accuracy and decide whether to base guidance decisions on the registration results.

In this thesis, an improved and extended version of the existing model-based spine augmentation system is presented, which registers statistical atlases to multiple imaging modalities, i.e., preoperative CT or MR and intraoperative US images. The framework is built on the assumption that, unlike the pose, the shape and size of the spine of a given subject are the same across modalities. The statistical shape+pose model is improved by reformulating it to obtain a statistical shape+pose+scale model. The joint optimization of the shape and scale coefficients allows the flexibility to modify each parameter independently. The proposed method is validated on US+CT data to ensure feasibility, and is applied to US+MR data thereafter.

1.3 Thesis Outline

The rest of this thesis is divided into four chapters, outlined below. Chapter 2 introduces the general framework for joint registration of medical images using multi-body statistical atlases. A statistical shape+pose model and a newly developed shape+pose+scale model of the lumbar spine are presented. A Coherent Point Drift (CPD)-based registration framework is used for registering the statistical model to the target medical images of the lumbar spine. Expressions for optimizing the model parameters (shape, pose and scale) are derived thereafter. We then propose to modify the registration objective function to obtain a generalized version of the registration framework for two or more imaging modalities. This generalized framework allows for a joint registration of the statistical models to US together with CT and/or MR images. In Chapter 3, the proposed framework is validated on a dataset of 10 preoperative CT and corresponding intraoperative US images. The details of preprocessing steps such as bone surface extraction and registration initialization are discussed. 
The registration results are reported in both the US and CT spaces in terms of Target Registration Errors (TRE) and Surface Distance Errors (SDE). The results obtained by the joint framework are compared with existing methods. Chapter 4 includes a similar study of the method on a different dataset, which consists of nine intraoperative US and preoperative MR scans. In this chapter, the joint registration method is modified to a weighted joint registration framework, where weights are assigned to different regions of the vertebrae in each modality based on the visibility of those regions. Preprocessing steps specific to MR images are discussed. Several experiments are carried out, and the TRE results achieved in the US and MR spaces are reported. Finally, Chapter 5 summarizes and concludes the thesis. The major contributions and future work are also highlighted in this chapter.

Chapter 2
Model-based Joint Registration Framework

This chapter focuses on the general mathematical framework for registering a statistical model of the lumbar spine to multiple imaging modalities. In Section 2.1, a shape+pose statistical model and a newly developed shape+pose+scale model of the lumbar spine are described. Section 2.2 introduces the joint registration framework by revisiting the optimization problem of registering a statistical atlas to a target image and extending it to multiple targets, i.e., multiple imaging modalities. Optimization of the parameters of the statistical models (shape, pose and scale) is further discussed.

2.1 Statistical Multi-vertebrae Models of the Lumbar Spine

2.1.1 Model Construction

In this thesis, two versions of statistical multi-vertebrae models are used for the lumbar spine: (i) a shape+pose model (Section 2.1.2) and (ii) a newly developed shape+pose+scale model (Section 2.1.3). These models are constructed based on the work presented by Rasoulian et al. [24]. Precisely, model construction is done by performing Principal Component Analysis (PCA) on a training set.
The training set used in this thesis consists of 32 segmented CT images of the full lumbar spine. Each CT image includes lumbar vertebrae L1-L5. The dataset was acquired and manually segmented in a previous study [24]. The PCA yields the Principal Components (PCs) of the shape, pose and scale variations observed in the training set. Theoretically, these PCs can describe the shape and pose of unseen test images.

2.1.2 Shape+pose Model

Intuitively, the shape parameter of the model refers to the nonrigid transformation defining the shape of the bones. In the present method, the five lumbar vertebrae are assumed to be correlated in terms of shape. Hence, variations of the shape describe the shape of the overall multi-vertebrae structure, as opposed to those of individual bones. The model's pose parameter, on the other hand, refers to the rigid transformation affecting the relative position of the vertebrae with respect to each other.

An instance of the shape+pose model is created by applying the desired coefficients (or weights) to each of the shape and pose PCs determined in model training and linearly adding them. Assuming that \theta^s_k is the coefficient applied to the k-th shape PC and \theta^p_k is the coefficient applied to the k-th pose PC, the l-th object of the model can be instantiated as follows:

    \Phi(\theta^s, \theta^p) = \Phi^p_l(\Phi^s_l(\theta^s); \theta^p),    (2.1)

where \Phi^p_l(\,\cdot\,; \theta^p) represents a similarity transformation built as a weighted combination of the pose PCs, and \Phi^s_l(\,\cdot\,) describes the associated shape instantiation. The first and second modes of variation of the shape and pose for the shape+pose model are demonstrated in Fig. 2.1.

2.1.3 Shape+pose+scale Model

A shortcoming of the shape+pose model is that variations in the size of the anatomy are absorbed into the similarity transformation for pose [3]. Hence, the scale cannot be optimized independently of the pose at the time of registration.
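As an illustration, the linear instantiation of such a PCA model (the shape part of Eq. 2.1) can be sketched in a few lines. The array names, dimensions and coefficient values below are hypothetical and serve only to show the mechanics, not the actual model of [24]:

```python
import numpy as np

def instantiate_shape(mean_shape, shape_pcs, theta_s):
    """Generate a shape instance of a PCA point-distribution model:
    the mean shape plus a weighted sum of the shape principal components.

    mean_shape : (N, 3) mean point cloud of the multi-vertebrae model
    shape_pcs  : (K, N, 3) shape principal components (eigenvectors)
    theta_s    : (K,) shape coefficients applied to the PCs
    """
    return mean_shape + np.tensordot(theta_s, shape_pcs, axes=1)

# Toy example with 4 model points and 2 shape PCs (dimensions are illustrative).
mean = np.zeros((4, 3))
pcs = np.stack([np.ones((4, 3)), np.full((4, 3), 0.5)])
instance = instantiate_shape(mean, pcs, np.array([2.0, -1.0]))
```

In the shape+pose model, the pose PCs are combined analogously, but the resulting coefficients parameterize a similarity transform per vertebra rather than a point displacement.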
In order to better capture the pose and scale variations, we propose to decouple the scale from the pose in the model. Decoupling the scale from the pose makes it possible to estimate the scale jointly and the poses independently; in other words, it guarantees that the jointly registered patient-specific models in multiple modalities have equal sizes.

Formally, for the shape+pose model, the pose can be formulated as the matrix T_{n,l} which transforms the n-th point of the l-th object of the model onto the target. More precisely,

    T_{n,l} = \begin{bmatrix} k_{n,l} R_{n,l} & t_{n,l} \\ 0 & 1 \end{bmatrix},    (2.2)

where k_{n,l}, R_{n,l} and t_{n,l} express the scaling, rotation and translation components of the similarity transformation, respectively. In order to separate the scaling from the rest of the pose transformation, we can rewrite T_{n,l} as the product of two independent transforms, T_{n,l} = T^1_{n,l} \times T^2_{n,l}. Here, T^1_{n,l} is the scaling transform and T^2_{n,l} contains the rotation and translation. That is,

    T^1_{n,l} = \begin{bmatrix} k_{n,l} & 0 & 0 & 0 \\ 0 & k_{n,l} & 0 & 0 \\ 0 & 0 & k_{n,l} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},    (2.3)

    T^2_{n,l} = \begin{bmatrix} R_{n,l} & t_{n,l} \\ 0 & 1 \end{bmatrix}.    (2.4)

Figure 2.1: First two modes of variation of shape ((a) and (b)) and pose ((c) and (d)) for the shape+pose model. Images on the left of each pair indicate -3\sqrt{\lambda}, and those on the right correspond to +3\sqrt{\lambda}. Values of \lambda are the eigenvalues determined for each eigenvector (PC) at training.

An instance of the shape+pose+scale model can then be created as follows:

    \Phi(\theta^s, \theta^k, \theta^p) = \Phi^p_l(\Phi^k_l(\Phi^s_l(\theta^s); \theta^k); \theta^p),    (2.5)

where \Phi^k_l(\,\cdot\,; \theta^k) describes the scaling transformation with coefficients \theta^k.

Figure 2.2 demonstrates the scale variations (one mode) as well as the first two modes of variation of the pose in the shape+pose+scale model. The shape variations are not shown, as they are identical to those of the shape+pose model. Here, the size of the individual vertebrae does not change by varying the pose (Fig. 2.2(a)).
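The decoupling of Eqs. 2.2-2.4 can be illustrated numerically by splitting a 4x4 similarity transform into its scale and rigid parts. This is a sketch of our own: the isotropic-scale assumption and the composition order in the comment are ours, not a quotation of the thesis's implementation:

```python
import numpy as np

def decompose_similarity(T):
    """Split a 4x4 similarity transform T = [[k*R, t], [0, 1]] into a
    pure scaling T1 = diag(k, k, k, 1) and a rigid part T2 = [[R, t], [0, 1]],
    assuming isotropic scale k > 0. Note that with this convention T is
    recovered as T2 @ T1 (scaling applied first, then rotation/translation)."""
    A = T[:3, :3]
    k = np.cbrt(np.linalg.det(A))        # isotropic scale factor
    T1 = np.diag([k, k, k, 1.0])
    T2 = np.eye(4)
    T2[:3, :3] = A / k                   # rotation R
    T2[:3, 3] = T[:3, 3]                 # translation t
    return T1, T2

# 90-degree rotation about z, scale 2, translation (1, 2, 3).
T = np.array([[0., -2., 0., 1.],
              [2.,  0., 0., 2.],
              [0.,  0., 2., 3.],
              [0.,  0., 0., 1.]])
T1, T2 = decompose_similarity(T)
```

With the scale isolated in T1, varying the remaining pose parameters leaves the size of each vertebra unchanged, which is exactly the behaviour of the shape+pose+scale model.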
This is unlike what is observed with the shape+pose model shown previously (Fig. 2.1).

Figure 2.2: First two modes of variation of pose ((a), (b)) and the first mode of variation of the scale (c) for the shape+pose+scale model. This model decouples the scale from the pose. Images on the left of each pair indicate -3\sqrt{\lambda}, and those on the right correspond to +3\sqrt{\lambda}. The shape variations are similar to those of the shape+pose model in Fig. 2.1. Values of \lambda are the eigenvalues determined for each eigenvector (PC) at training.

2.2 Point-set Registration of Statistical Models to Targets

The registration of the model to the targets is based on the Coherent Point Drift (CPD) framework proposed by Myronenko et al. [22]. Here, the registration problem essentially consists of finding the optimal model parameters that minimize the net distance between the model point cloud and the target images. The bone surfaces are represented by point clouds for both the model and the targets. The different bone surface extraction methods used for US, CT and MR are discussed in Chapters 3 and 4.

2.2.1 Registration of the Statistical Models to a Single Imaging Modality

Rasoulian et al. previously showed that the statistical shape+pose model could be registered to a target point cloud representing bone surfaces in CT [24] or US [27] images. In this method, an Expectation-Maximization (EM) algorithm is used for the model registration. The expectation step consists of computing the probability of the n-th model point of the l-th object (t_{ln}) generating a target point z_m, i.e., the soft correspondences P(t_{ln} | z_m) between the n-th model point of the l-th bone and the m-th target point of the image. The maximization step involves optimizing the problem's objective function (Eq. 2.6) using the correspondences found in the expectation step.
The following objective function is optimized:

    Q = \sum_{l=1}^{L} \sum_{m,n=1}^{M, N_l} P(t_{ln} \mid z_m) \, \lVert z_m - \Phi(t_{ln}; \theta^s, \theta^p) \rVert^2 + R_s + R_p,    (2.6)

where \Phi(t_{ln}; \theta^s, \theta^p) denotes the transformation of model point t_{ln} with the shape and pose coefficients \theta^s and \theta^p. M, N_l and L represent the number of target points, the number of model points of the l-th bone, and the number of bones in the atlas, respectively. The regularizers R_s and R_p constrain the shape and pose variations, respectively, and are computed according to [24]. For imaging modalities such as CT and MR, the target points z_m are assumed to be edge points extracted from the bones, and they are all equally weighted. For US, however, each z_m is assigned a probability indicating how likely the m-th target point is to represent a bone pixel. The calculation of P(t_{ln} | z_m) is presented in [24]. Similarly, for the shape+pose+scale model, the cost function can be written as

    Q = \sum_{l=1}^{L} \sum_{m,n=1}^{M, N_l} P(t_{ln} \mid z_m) \, \lVert z_m - \Phi(t_{ln}; \theta^s, \theta^p, \theta^k) \rVert^2 + \text{Regularizers},    (2.7)

where \Phi(t_{ln}; \theta^s, \theta^p, \theta^k) represents the transformed model with shape, pose and scale coefficients \theta^s, \theta^p and \theta^k, respectively.

2.2.2 Joint Registration of the Statistical Models to Multiple Imaging Modalities

In order to incorporate information from more than one modality, we modify the cost function to include the sum of all distances between the points belonging to the model and those of the two (or more) target modalities of interest, such as US, CT and MR. For each imaging modality involved in the fusion, a model instance is created. These instances all have the same shape coefficients, whereas the pose coefficients are specific to each modality. This is because the shape of the vertebrae remains the same before and during the procedure for each individual patient, whereas the pose of the individual vertebrae changes from one imaging modality to another.
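The expectation step that produces these correspondences can be sketched as computing Gaussian mixture responsibilities between a model instance and a target point cloud. This is a simplified stand-in for the CPD formulation of [22, 24]: the uniform outlier component and the per-point US bone probabilities are deliberately omitted here:

```python
import numpy as np

def soft_correspondences(model_pts, target_pts, sigma2):
    """E-step sketch: P[m, n] is the responsibility that model point n
    generated target point m, under an isotropic Gaussian centred on
    each model point with variance sigma2 (no outlier component)."""
    d2 = ((target_pts[:, None, :] - model_pts[None, :, :]) ** 2).sum(axis=-1)
    g = np.exp(-d2 / (2.0 * sigma2))
    return g / g.sum(axis=1, keepdims=True)   # each row sums to 1

model = np.array([[0., 0., 0.], [10., 0., 0.]])
target = np.array([[0.1, 0., 0.]])
P = soft_correspondences(model, target, sigma2=1.0)
```

In the joint setting, this step would be run once per modality against the modality-specific model instance; only the pose differs between the instances, while the shape (and scale) coefficients are shared.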
For example, preprocedure CT and MR images are typically acquired in the supine position, whereas intraprocedure US requires access to the patient's back, with the patient in the prone position.

Let md \in M = \{US, CT, MR\} denote the target's imaging modality. Then z_{m,md} represents the m-th point of the point cloud (of size M_{md}) of the given modality, and \Phi(t_{ln}; \theta^s, \theta^{p_{md}}) represents the transformation of model point t_{ln} with the common shape coefficients \theta^s and the modality-specific pose coefficients \theta^{p_{md}}. To sum over the distances between the transformed models and the target points of all modalities, we extend the objective function as follows:

    Q = \sum_{md \in MD} \sum_{l=1}^{L} \sum_{m,n=1}^{M_{md}, N_l} P(t_{ln} \mid z_{m,md}) \, \lVert z_{m,md} - \Phi(t_{ln}; \theta^s, \theta^{p_{md}}) \rVert^2 + R_s + R_{p_{md}}.    (2.8)

The above equation is the generalized objective function and holds for any subset MD \subseteq \{US, CT, MR\}, MD \neq \emptyset. In addition to the shape, the size of a patient's spine remains the same in different imaging modalities. Therefore, for the shape+pose+scale model, we can jointly optimize the shape and scale coefficients from two or more imaging modalities, while independently optimizing the pose coefficients for each image:

    Q = \sum_{md \in MD} \sum_{l=1}^{L} \sum_{m,n=1}^{M_{md}, N_l} P(t_{ln} \mid z_{m,md}) \, \lVert z_{m,md} - \Phi(t_{ln}; \theta^s, \theta^k, \theta^{p_{md}}) \rVert^2 + \text{Regularizers}.    (2.9)

2.2.3 Parameter Optimization for Model-to-Target Registration

In order to optimize the model parameters, the soft correspondences are computed at every iteration in the expectation step. Practically, the maximization step is then carried out in two stages: first, the common shape (and scale) coefficients are jointly optimized; then, n(MD) independent sets of modality-specific pose coefficients are estimated. The optimal values of the coefficients can be computed using the derivatives of the cost function with respect to each model parameter. For the shape+pose model, differentiating Eq. (2.8) with respect to the common shape coefficients yields Eq.
2.10 below:

    \frac{\partial Q}{\partial \theta^s} = \sum_{md \in MD} \sum_{l=1}^{L} \sum_{m,n=1}^{M_{md}, N_l} P(t_{ln} \mid z_{m,md}) \left[ \Phi(t_{ln}) - z_{m,md} \right]^\top \frac{\partial \Phi(t_{ln})}{\partial \theta^s} + \frac{\partial R_s}{\partial \theta^s}.    (2.10)

Differentiation with respect to each set of pose coefficients \theta^{p_{md}} can be done in a similar fashion, and yields

    \frac{\partial Q}{\partial \theta^{p_{md}}} = \sum_{l=1}^{L} \sum_{m,n=1}^{M_{md}, N_l} P(t_{ln} \mid z_{m,md}) \left[ \Phi(t_{ln}) - z_{m,md} \right]^\top \frac{\partial \Phi(t_{ln})}{\partial \theta^{p_{md}}} + \frac{\partial R_{p_{md}}}{\partial \theta^{p_{md}}}.    (2.11)

The calculation of the partial derivatives \partial\Phi(t_{ln})/\partial\theta^s and \partial\Phi(t_{ln})/\partial\theta^p has been previously presented in [24].

We can similarly differentiate Eq. 2.9 with respect to the shape+pose+scale model parameters (Eqs. 2.12-2.14):

    \frac{\partial Q}{\partial \theta^s} = \sum_{md \in MD} \sum_{l=1}^{L} \sum_{m,n=1}^{M_{md}, N_l} P(t_{ln} \mid z_{m,md}) \left[ \Phi(t_{ln}) - z_{m,md} \right]^\top \frac{\partial \Phi(t_{ln})}{\partial \theta^s} + \frac{\partial R_s}{\partial \theta^s},    (2.12)

    \frac{\partial Q}{\partial \theta^{p_{md}}} = \sum_{l=1}^{L} \sum_{m,n=1}^{M_{md}, N_l} P(t_{ln} \mid z_{m,md}) \left[ \Phi(t_{ln}) - z_{m,md} \right]^\top \frac{\partial \Phi(t_{ln})}{\partial \theta^{p_{md}}} + \frac{\partial R_{p_{md}}}{\partial \theta^{p_{md}}},    (2.13)

    \frac{\partial Q}{\partial \theta^k} = \sum_{md \in MD} \sum_{l=1}^{L} \sum_{m,n=1}^{M_{md}, N_l} P(t_{ln} \mid z_{m,md}) \left[ \Phi(t_{ln}) - z_{m,md} \right]^\top \frac{\partial \Phi(t_{ln})}{\partial \theta^k} + \frac{\partial R_k}{\partial \theta^k}.    (2.14)

The partial derivatives \partial\Phi(t_{ln})/\partial\theta^s and \partial\Phi(t_{ln})/\partial\theta^p are as before. The scale partial derivative \partial\Phi(t_{ln})/\partial\theta^k can be computed similarly to that of the pose, as presented in [24].

Chapter 3
Joint Registration of Model to US+CT Images of the Lumbar Spine

This chapter presents a study that applies the joint framework proposed in Chapter 2 to a dataset of US+CT images. The joint registration algorithm is revisited, with a focus on the preprocessing steps required for US and CT images (Section 3.2). Several experiments are performed on these data. Comparisons are made between the results obtained using the previous model-to-US method and the proposed joint method, as well as between the shape+pose and shape+pose+scale results. The results are reported and discussed in Sections 3.3 and 3.4.

3.1 US and CT Datasets

The dataset used in this study consists of volumetric US data and corresponding CT scans of 10 patients. US volume acquisition was performed with an electromagnetically tracked 2D US probe, in a zigzag sweep, in a previous study [27]. Each US volume consists of 307 to 550 2D sagittal slices. 3D reconstruction of the US volumes was performed using the PLUS software.
Each US image includes all the lumbar vertebrae L1-L5, as well as the sacrum. The voxel size varies throughout the dataset between (0.5, 0.5, 0.5) and (1, 1, 1) mm.

3.2 Joint Registration for US+CT Images

Figure 3.1 shows the workflow for the joint registration of US and CT image pairs. The steps are explained in detail in Sections 3.2.1-3.2.4 below.

3.2.1 Target Point Cloud Generation

Prior to registration, the US and CT images undergo a preprocessing step where the visible bone surfaces are extracted from each imaging modality to form point clouds. Registration is later performed by minimizing the distances between the model and both of these target point clouds. The US volume is preprocessed by applying a phase-based bone enhancement technique [17] to the individual native 2D slices. This bone enhancement results in a stack of bone probability maps from which a reconstructed 3D map can be obtained. The intensity value of each voxel on this map indicates the probability that the voxel represents bone tissue. A threshold is then applied to the map; voxels above this threshold, along with their corresponding probability values, make up the probabilistic point cloud used in the optimization. The probability assigned to each point helps determine the soft correspondences in the expectation phase of the EM algorithm. For the CT volume, the bone point cloud is obtained from the visible edges of the spine. Edges are extracted using a simple Canny edge detector [24]. We do not assign any predefined probabilities to these points.

3.2.2 Geometric Initialization of the Model Instances on Targets

Another essential step before registration is the geometric initialization of the model instances on the two target point clouds. This ensures that the registration does not converge far from the optimal solution. In most cases, the assumption can be made that CT and US are acquired in the supine and prone positions, respectively.
This assumption allows the model to be aligned roughly at the correct anatomical position using a simple one-click initialization. For this purpose, the approximate center of gravity of the L3 vertebra is manually selected on the US and CT volumes, and the two instances of the model (one for each modality) are translated accordingly. The L3 vertebra is found by identifying the sacrum and counting up. A rigid registration is performed thereafter to correct for rotations between the model and the targets and to further improve the initial alignment.

    Field of View | Patient Numbers | Count (Number of Subjects)
    L2-L5         | P2, P8          | 2
    L3-L5         | P1, P10         | 2
    L4-L5         | P3-P7, P9       | 6

Table 3.1: Breakdown of the fields of view of the images in the CT dataset based on the visible lumbar vertebrae.

3.2.3 Optimization

Once the target point clouds are obtained and an instance of the model is placed on each, the iterative shape+pose or shape+pose+scale registration begins. At every iteration, the model parameters are estimated as described in Chapter 2. The pose coefficients are estimated and updated for each individual modality, whereas the shape and scale are computed jointly.

3.2.4 Model Transformation

At the end of each iteration, the model instances are updated according to the new model coefficients, and the models are projected back to the CT and US spaces. This process continues until convergence. In the case of CT, a nonrigid registration is performed as a last step for a more accurate localization of the vertebrae [26].
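At its core, the US preprocessing of Section 3.2.1 reduces to thresholding the reconstructed 3D probability map into a weighted point cloud. A minimal sketch follows; the threshold value and voxel spacing are illustrative, not the values used in the thesis:

```python
import numpy as np

def probability_map_to_point_cloud(prob_map, spacing, threshold):
    """Keep voxels whose bone probability exceeds `threshold`; the
    surviving probabilities later weight the soft correspondences."""
    mask = prob_map > threshold
    points = np.argwhere(mask) * np.asarray(spacing)  # voxel index -> mm
    weights = prob_map[mask]
    return points, weights

# Tiny 2x2x2 map with two candidate bone voxels (illustrative values).
prob = np.zeros((2, 2, 2))
prob[0, 0, 0] = 0.9
prob[1, 1, 1] = 0.6
points, weights = probability_map_to_point_cloud(prob, (0.5, 0.5, 0.5), 0.5)
```

For CT, the analogous step simply collects the Canny edge voxels, with uniform weights.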
Figure 3.1: An overview of the proposed framework for joint registration of US, preoperative images and a statistical model. The expectation step generates the US bone probability map and CT edges, initializes the model on each, and assigns soft correspondences; the maximization step computes the joint shape coefficients and the modality-specific pose coefficients, followed by model transformation and, for CT, a final non-rigid registration.

3.3 Results

3.3.1 Registration Parameter Selection

To choose the number of PCs for the CT pose and the joint shape, a separate set of 10 segmented, publicly available CT datasets [16] was used, and an exhaustive parameter search was performed with respect to the Surface Distance Error (SDE). The number of PCs for the US pose was chosen according to [27]. The US-related parameters were set according to prior work by Hacihaliloglu et al. [17].

3.3.2 Validation

In addition to the joint method, registration results are reported for a US-only method, where the shape and scale optimization is done using US only, without involving any information from CT. This helps quantify the improvement achieved by the joint technique. Similarly, CT-only experiments are carried out to further investigate the reliability of the joint method. All experiments are performed for each of the two models (shape+pose and shape+pose+scale). The joint registration accuracy is evaluated quantitatively in both the CT and US domains.

Experiments in the CT Domain

In this step, each patient-specific model is projected into the CT space, i.e., the pose of the patient-specific model is computed from the CT image. The gold standard is obtained by manually segmenting the fully-visible vertebrae of each CT scan, which serves as the reference.
For each visible vertebra in the CT volume, the shape SDE is computed between the registered model surface and the surface obtained from the gold-standard segmentation. The correspondences between the model and segmentation point clouds are determined based on nearest neighbors; hence, the SDE is calculated as the Euclidean distance from the points on one surface to their closest neighboring points on the other. In order to investigate only the shape errors, each vertebra of the model is rigidly registered to the corresponding vertebra of the gold standard. This step aligns the model with the gold standard and eliminates potential pose errors. The shape errors are reported for several regions for a more thorough and detailed analysis of the results. These regions include the spinous processes (SPs), transverse processes (TPs), anterior and posterior articular processes (APs) and vertebral bodies (VBs). The break-down into the different regions is done by manually selecting these regions on the gold standard, on each vertebra, for each patient.

Experiments in the US Domain

Similarly, the registered models are projected into the US space for this step. That is, the model's pose is computed from the US scans only, while the shape and scale are optimized jointly. Landmarks were selected by an expert sonographer on the SPs and laminae (LAs) on sagittal slices, as well as on the TPs and APs on sagittal and axial slices. The US registration accuracy is measured using the Target Registration Error (TRE) metric, calculated between pre-selected landmarks on the gold standard and corresponding target points on the registered model's surface. In the absence of a better alternative, the correspondences are assigned using the nearest-neighbors approach.

3.3.3 US+CT Joint Registration Results and Analysis

The joint method was successfully applied to the US+CT dataset; examples can be seen in Fig. 3.2. Registration results obtained in the different experiments are provided in Tables 3.2 and 3.3.
The columns of these tables indicate the registration method in terms of shape optimization. That is, CT-only refers to registration of the model by optimizing the shape using CT only; Joint refers to the proposed joint registration; and US-only represents the method where the shape parameters are computed from US only. The colormaps in Fig. 3.3 illustrate the improvements achieved with the joint method and the shape+pose model. In the CT domain, the joint registration technique resulted in the smallest errors at all regions, even outperforming the CT-only method. The largest errors are seen at the SPs of the vertebrae. In the US domain, the joint shape improves the TRE at all regions except the APs. Figure 3.4 shows the distribution of TRE for the US images using the three different shape optimization techniques at the different regions of the vertebrae (3.4(a)), as well as at the five vertebrae L1-L5 (3.4(b)). A statistical t-test analysis was carried out; no statistically significant improvements were observed using the joint method over the other two. The joint registration process had an average run-time of 7.5 minutes, approximately 6.2 minutes of which was used for target point extraction and 1.3 minutes for the actual shape+pose registration.

Tables 3.2 and 3.3 summarize the SDE and TRE differences of the two methods at different regions of the vertebrae, namely the SPs, TPs, APs and VBs. The joint approach leads to an overall improvement in the maximum and RMS errors. Figure 3.3 highlights the differences between the joint and the US-only registration as colormaps projected onto the individual vertebrae for each of the 10 patients.

Figure 3.2: Example snapshots of the registration results using the shape+pose model on CT (a) and US (b) images. The joint method (blue) performs better than the model-to-US technique (red). The yellow annotations on US depict the gold-standard segmentation obtained by the sonographer.

The joint method combined with the shape+pose+scale model improves the overall SDEs and TREs compared to the US-only technique, i.e., the method that only uses US for shape optimization (Figure 3.5). The CT-only method performs slightly better than the joint method. In US, the region with the smallest errors is the AP, and the largest errors are seen at the LAs. In the CT domain, the errors are within the same range. The SDE and TRE results obtained using the shape+pose+scale model are provided in Tables 3.4 and 3.5. The colormaps in Fig. 3.6 illustrate the improvements achieved with the joint method and the shape+pose+scale model. Box plots show the break-down of TRE errors for US at different regions and vertebrae (Fig. 3.7). The average run-time for the iterative registration was 1.9 minutes.

A comparison is also made between the results obtained using the joint framework with the two different models. The colormaps in Fig. 3.8 represent the difference in errors obtained using the two models.

    Region | CT-only     | Joint       | US-only
    AP     | 2.88 ± 0.48 | 2.81 ± 0.56 | 2.97 ± 0.61
    LA     | 2.71 ± 0.53 | 2.67 ± 0.65 | 2.86 ± 0.56
    SP     | 3.26 ± 0.88 | 3.17 ± 0.80 | 3.24 ± 0.89
    VB     | 2.92 ± 0.51 | 2.88 ± 0.54 | 3.07 ± 0.62
    All    | 2.81 ± 0.50 | 2.40 ± 0.56 | 2.93 ± 0.49

Table 3.2: Distribution of CT SDE registration errors (mm) achieved using the shape+pose model for the US+CT dataset, per region. No significant improvement was observed using the joint registration.

    Region | CT-only     | Joint       | US-only
    AP     | 1.65 ± 0.57 | 1.60 ± 0.56 | 1.42 ± 1.42
    LA     | 2.99 ± 1.90 | 2.97 ± 1.88 | 3.18 ± 3.18
    SP     | 1.81 ± 0.88 | 1.70 ± 0.78 | 2.03 ± 2.03
    TP     | 2.11 ± 1.23 | 1.99 ± 1.24 | 2.14 ± 2.14
    All    | 2.94 ± 2.40 | 2.93 ± 2.40 | 2.99 ± 2.78

Table 3.3: US TRE (mm) achieved using the shape+pose model for the US+CT dataset.
In each experiment (each column), the shape parameter of the model is optimized from the modalities indicated in the first row of the table. The errors are reported for different regions of the vertebrae (AP, LA, SP, TP) and the average of all four.

    Region | CT-only     | Joint       | US-only
    AP     | 2.48 ± 0.43 | 2.55 ± 0.50 | 3.17 ± 0.70
    LA     | 2.37 ± 0.51 | 2.41 ± 0.53 | 3.04 ± 0.79
    SP     | 2.63 ± 0.68 | 2.72 ± 0.70 | 3.31 ± 0.74
    VB     | 2.44 ± 0.43 | 2.53 ± 0.50 | 3.11 ± 0.67
    All    | 2.35 ± 0.37 | 2.42 ± 0.39 | 3.15 ± 0.53

Table 3.4: CT SDE registration errors (mm) achieved using the shape+pose+scale model for the US+CT dataset. In each experiment (each column), the shape and scale parameters of the model are optimized from the modalities indicated in the first row of the table. The errors are reported for different regions of the vertebrae (AP, LA, SP, VB) and the average of all four.

    Region | CT-only     | Joint       | US-only
    AP     | 1.53 ± 0.61 | 1.49 ± 0.73 | 1.30 ± 1.42
    LA     | 3.10 ± 2.08 | 3.08 ± 2.09 | 3.26 ± 2.21
    SP     | 2.22 ± 0.95 | 2.12 ± 0.90 | 2.37 ± 0.93
    TP     | 2.29 ± 1.18 | 1.96 ± 1.19 | 2.36 ± 1.17
    All    | 3.16 ± 2.47 | 3.14 ± 2.47 | 3.22 ± 2.50

Table 3.5: US TRE (mm) achieved using the shape+pose+scale model for the US+CT dataset. In each experiment (each column), the shape and scale parameters of the model are optimized from the modalities indicated in the first row of the table. The errors are reported for different regions of the vertebrae (AP, LA, SP, TP) and the average of all four.

Figure 3.3: Colormaps showing the difference between shape SDE values per surface point computed from the US-to-atlas and joint registrations, using the segmented reference CT as the gold standard, for patients 1 (a) to 10 (j). Blue regions show improvements achieved by the joint registration, where the US-to-atlas error is larger than the joint registration error.
Figure 3.4: Box plots showing the break-down of TRE values obtained using the shape+pose model for the US images (US+CT dataset) at the different regions of the vertebrae (a) and the five different vertebrae (b). No significant improvements are observed.

Figure 3.5: Example snapshots of the registration results using the shape+pose+scale model on CT (a) and US (b) images. The joint method (blue) performs better than the model-to-US technique (red). The yellow annotations on US depict the gold-standard segmentation obtained by the sonographer.

Figure 3.6: Colormaps showing the difference between shape SDE values per surface point computed from the US-to-atlas and joint registrations with the shape+pose+scale model, using the segmented reference CT as the gold standard, for patients 1 (a) to 10 (j). Blue regions show improvements achieved by the joint registration, where the US-to-atlas error is larger than the joint registration error.

Figure 3.7: Box plots showing the break-down of TRE values obtained using the shape+pose+scale model for the US images (US+CT dataset) at the different regions of the vertebrae (a) and the five different vertebrae (b). No significant improvements are observed.

Figure 3.8: Colormaps showing the difference in shape SDE values per surface point between the joint registrations obtained with the shape+pose and shape+pose+scale models, for patients 1 (a) to 10 (j).
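The SDE metric reported throughout this section reduces to nearest-neighbour distances between point sets. A brute-force sketch of our own (in practice a k-d tree would be used for speed):

```python
import numpy as np

def surface_distance_errors(model_pts, gold_pts):
    """Per-point SDE: Euclidean distance from each registered-model
    surface point to its nearest neighbour on the gold-standard
    segmentation surface (as described in Section 3.3.2)."""
    d = np.linalg.norm(model_pts[:, None, :] - gold_pts[None, :, :], axis=-1)
    return d.min(axis=1)

model = np.array([[0., 0., 0.], [1., 0., 0.]])
gold = np.array([[0., 0., 1.], [1., 0., 0.]])
errors = surface_distance_errors(model, gold)
```

TRE is computed analogously, but between pre-selected landmark pairs rather than whole surfaces; the mean and standard deviation of such distances give the values reported in Tables 3.2-3.5.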
3.4 Discussion and Summary

In this chapter, a joint registration framework was presented, where statistical models of the lumbar spine were registered to US and CT, with the aim of improving the interpretation of the anatomy in spinal US for facet joint injections and epidural needle insertions. The key contribution involved the concurrent optimization of the shape and scale coefficients, which accommodates multimodal fusion despite the substantial pose differences between the modalities. It was demonstrated that, by taking advantage of available, easy-to-interpret preprocedure CT data for computing the shape of the statistical model, improved performance of the US registration can be achieved. Looking at the results at individual regions of the vertebrae, as opposed to the overall average, gives better insight into the registration accuracy. The joint registration led to a more accurate overlay of the anatomical information of the lumbar spine in both the CT and US domains compared to the approach where only US information was used for registration. Moreover, as a by-product of the joint registration, the pose of the spine in CT is also estimated (Fig. 3.1). This allows the model to be transformed and overlaid onto the CT domain, helping the clinician to qualitatively evaluate the local registration accuracy of the patient-specific model constructed for the US image, based on the detailed anatomy visible in the CT scan. Hence, including the preprocedure CT makes it possible to simultaneously visualize the spine anatomy in both the US and CT domains without the need for preoperative segmentation of the CT. It can be seen from Tables 3.2-3.5 that the joint registration almost always performs better than the US-only method, both in terms of RMS errors and standard deviation. The differences between the two methods are greater with the shape+pose+scale model than with the shape+pose model.
This is most likely because, in the absence of CT data, false bone points lead to an incorrect estimate of the scale coefficient, which is optimized independently in this model. Although the results are often improved by the joint registration, the errors in isolated anatomical regions (shown in red) seem to have increased. We noticed that the registration accuracy of the joint model in the more complicated areas of the processes was not high enough in the CT domain, which propagated to an apparently higher error in the US domain. It is speculated that this issue stems from the small training set (32 subjects) used to generate the statistical spine model; enlarging the training set can potentially lead to better results. Furthermore, correct edge detection for point cloud extraction in CT scans strongly depends on the quality and resolution of these images. As the examined CT volumes originate from routine clinical acquisitions with inhomogeneous voxel resolution, the bone surface is not sharply visible in all imaging axes, making it difficult to obtain the dense bone point cloud ideal for model registration. Improving CT image quality during acquisition or enhancing bone surface detection might further increase the registration accuracy.

Figure 3.9: Snapshots of a CT scan belonging to a scoliotic patient (P8), which proved challenging to register. The vertebrae seem to diverge from the vertical yellow line (left). The yellow line on the coronal view also marks the sagittal slice on the right. White arrows indicate the failed pose registration of the model.

The box plots in Figures 3.4 and 3.7 demonstrate that the registration error values vary across the different vertebrae. For both US and CT, the model registers better to L1-L3 than to L4-L5. This variation is likely to originate from the fact that, in general, L1-L3 undergo smaller pose variations when going from supine to prone; that is, the significant change in curvature happens at L4-L5.
Moreover, the manual alignment of the centre of gravity of L3 can contribute to the registration results at L3. As mentioned above, an important limitation of the current models is their relatively small training set. Hence, the model is expected to fail in registering to the spines of certain subjects whose shapes or poses are unlike those of the subjects in the training set. For example, an observation easily made from the colormaps in Figs. 3.3-3.8 is that the errors obtained for patient P8 (Figs. 3.3(h), 3.6(h), and 3.8(h)) are high and inconsistent compared to the other nine cases. This is most likely because this particular subject appears to suffer from scoliosis, as the CT image shows a lateral curvature. This indicates that the current models and methods are not suited for scoliotic patients (Fig. 3.9).

This problem can be eased by increasing the number of subjects in the training set. Another possibility is to employ a two-step registration technique. In the first step, the multi-vertebrae model is registered to the data as shown. In the second step, the registered model is used as an initialization to register smaller models locally to one or more vertebrae of interest. Nevertheless, the advantage of the proposed framework is that in such scenarios, clinicians will still be able to look at the patient-specific model in the preoperative space and decide on the adequacy of the registration.

A limitation of the performed evaluation is that potentially inaccurate correspondence assignment may lead to inaccurate error evaluation. Currently, we rely on finding the closest neighbours for assigning correspondences between the gold standard segmentations and the model points, because manual correspondence assignment is not feasible. Correspondences are improved in the case of the SDE calculation for CT, as a rigid registration is carried out on each vertebra prior to computing the errors.
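The per-vertebra rigid alignment applied before computing the CT errors can be sketched as a least-squares fit between corresponding point sets (the Kabsch algorithm). The snippet below is a generic illustration under that assumption, not the thesis implementation:

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid alignment (rotation R, translation t) of two
    corresponding N x 3 point sets, via the Kabsch algorithm."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy check: recover a known rotation and translation of a point set.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = np.deg2rad(20.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
moved = pts @ R_true.T + t_true
R, t = rigid_align(pts, moved)
residual = np.linalg.norm(pts @ R.T + t - moved)     # ~0 after alignment
```

Once each vertebra is aligned this way, the remaining point-to-surface distances reflect shape error only, with the pose component removed.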
However, a rigid registration is not particularly useful for the US, because the US segmentations are partial and very sparse in this modality. The required clinical accuracies for facet joint injections and epidurals are estimated at around 5 mm and 3 mm, respectively. Hence, it was shown that the accuracy of the presented framework is sufficient for facet joint injections. However, further improvement of the maximum errors is needed for epidural procedures. The statistical spine models used in this study have been designed for the five lumbar vertebrae, with different pose and shape variations changing the entire structure as a whole. However, the CT scans used for testing the method included variable fields of view with two to four visible vertebrae. The increase in the errors for patients P3 and P6 (red spots on APs and SPs in Figures 3.6(c) and 3.6(f)) in the shape+pose+scale results could be due to the fact that only two vertebrae were visible in their CT scans. For patients P1, P2, and P9-P10, whose CT scans included three or four vertebrae, the improvements were more pronounced.

Finally, though tolerable for preoperative CT registration, the computational complexity of the current method is still too high to allow a real-time implementation for intraoperative procedures. However, the most time-consuming step of the workflow is extracting the bone surfaces, especially for the US. This is because in the current method, individual 2D slices are processed one at a time.

Chapter 4
Joint Registration of Model to US+MR Images of the Lumbar Spine

This chapter involves the study of applying the joint registration framework to intraprocedure US and preprocedure MR scans of the lumbar spine.
A modification of the method presented in the previous chapters has been employed, where predefined weights are assigned to different regions of the lumbar vertebrae based on their presumed degree of visibility in each modality. Results of the different joint registration experiments performed on this dataset are presented and discussed.

4.1 US and MR Datasets

The dataset used in this study consists of volumetric US data and corresponding multi-slice MR scans of nine patients. Ethics approval and written consent were obtained from the subjects. The dataset includes both male and female subjects ranging between 17 and 75 years of age.

The MR images used in this study are actual clinical images acquired at hospitals for diagnosis and treatment purposes. The data were downloaded from the hospitals' servers and anonymized. The MR images used for the registration are acquired sagittally, since bone detection can be performed more easily and accurately on sagittal slices. T1-weighted images were chosen for this study because the spinal cord appears dark in these images, making the bone outlines more easily detectable. Each 3D MR image includes the coccyx and sacrum, the five lumbar vertebrae, as well as one or more vertebrae from the bottom of the thoracic spine. The sagittal field of view is limited in these images, as the main focus has been on capturing the vertebral bodies, which are close to the median plane. Hence, the TPs, which lie more laterally from the median plane, are not captured. The slice thickness is between 3.5 and 4.4 mm. Figure 4.1 depicts an example of sagittal and axial slices of these MR images.

Figure 4.1: An example MR image from the present dataset. The yellow dashed line on the sagittal image (right) indicates the location of the axial slice (left). The TPs are not captured in the field of view of these MR images. The sagittal slices are 3.5-4.4 mm thick in this dataset.
Similar to the previous chapter, the US data were acquired using an electromagnetically tracked 2D US probe, and the 3D volumes were reconstructed using PLUS. However, the present dataset includes more challenging images compared to those used in Chapter 3. Firstly, the 2D image resolution is relatively lower, with voxel dimensions of (1.3, 1.3, 1.3) to (1.5, 1.5, 1.5) mm and a total of 103 to 203 slices. Also, unlike the US scans in the US+CT study, the images used in this study were obtained in the sitting position, which puts the vertebrae in a completely different pose. Furthermore, in this dataset, the sitting position limited the range of the scan in terms of how far down the spine the US transducer could go without making the patient uncomfortable. As a result, the present US scans have a limited axial field of view. Specifically, they start at the bottom of the thoracic vertebrae (T12) but do not always extend as low as the bottom of L5 (Table 4.1). An example US image from this dataset is shown in Fig. 4.2.

Field of View               Patient Numbers    Count (Number of Subjects)
Top of L1 - Bottom of L4    P2, P4, P8         3
Top of L1 - Middle of L5    P1, P3, P5, P6     4
Top of L1 - Bottom of L5    P7, P9             2

Table 4.1: Breakdown of the fields of view of the US images in the US+MR dataset, based on the visible lumbar vertebrae.

Figure 4.2: An example US image from the present challenging dataset, shown in three different views. The yellow and green dashed lines on the sagittal image (middle) indicate the locations of the axial (left) and coronal (right) slices, respectively. The shadow of the rib attached to T12 is used to locate L1. The blue lines are drawn to help visualize the landmarks.

4.2 Joint Registration for US+MR Images

4.2.1 Target Point Cloud Generation

The MR images are processed in the steps described below:

1.
Intensity Bias Correction: Initially, the intensity bias of the image is corrected using a modified fuzzy C-means algorithm [2]. The parameters for this step are set according to the work by Suzani et al. [33].

2. Anisotropic Diffusion: Anisotropic diffusion is applied to remove the speckle and noise present in the image. This helps reduce the number of points that would be falsely detected as edge points in the next step.

3. Edge Detection: Canny edge detection is performed on the processed MR volume. To further refine the extracted edges and remove false positive points, a morphological erosion is performed thereafter. The final edge points make up the logical target point cloud for the MR image.

Similar to Chapter 3, the US images are processed using phase-based bone enhancement to extract a probabilistic point cloud [17].

4.2.2 Model Initialization on Targets

The centres of gravity of L3 are manually selected on the US and MR images. Two instances of the model are roughly aligned on the targets accordingly. This step is followed by a rigid registration for further correction. Since the sacrum is not visible in the US images of this dataset, the shadows of the ribs attached to T12 are used to identify T12, and hence the lumbar vertebrae below it. As mentioned before, the US images in this dataset were acquired in the sitting position. To account for this, the model instance used in the US space is initialized with a straight spine pose, as opposed to the mean pose of the model. This is because in the sitting position, the five lumbar vertebrae align on a less curved line, as shown in Fig. 4.3.

4.2.3 Weighted Joint Registration

In US and MRI data, the visibility of each part of a lumbar vertebra differs due to various factors. In MR images, the outlines of the VBs can be seen very clearly in almost all slices, making the VB edges reliable points for registration.
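The diffusion and edge-detection steps above can be illustrated with a simplified, self-contained 2D sketch. Note that this is an assumption-laden stand-in rather than the actual pipeline: Perona-Malik diffusion plays the role of the anisotropic diffusion step, a plain gradient-magnitude threshold stands in for Canny, and the bias correction and erosion steps are omitted.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Edge-preserving (Perona-Malik) anisotropic diffusion on a 2D slice."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences to the four neighbours (periodic borders for brevity)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)   # conduction: ~0 across strong edges
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

def edge_point_cloud(img, thresh):
    """Gradient-magnitude threshold standing in for Canny edge detection."""
    gy, gx = np.gradient(img)
    ys, xs = np.nonzero(np.hypot(gx, gy) > thresh)
    return np.column_stack([xs, ys])              # (N, 2) target point cloud

# Toy slice: a bright square "bone" region on a dark, noisy background.
rng = np.random.default_rng(1)
slice_img = np.zeros((64, 64))
slice_img[20:44, 20:44] = 1.0
noisy = slice_img + rng.normal(scale=0.05, size=slice_img.shape)
smoothed = perona_malik(noisy)
cloud = edge_point_cloud(smoothed, thresh=0.25)   # points line the square's boundary
```

Because the conduction coefficient vanishes across strong intensity steps, the diffusion removes background noise while leaving the boundary sharp, so the thresholded gradient yields points concentrated on the edge.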
Similarly, the LAs have high visibility due to the different intensity levels inside and outside the boundaries of these regions in the sagittal images. However, the SPs are more challenging to detect. This is because the current MR slices are very thick, and hence the SP of each vertebra is typically visible on only one sagittal slice, or none. Also, high-frequency details of the tissues surrounding the SPs lead to falsely detected edge points in the target point clouds. AP edges are also relatively more difficult to accurately extract, as the neighbouring tissues cause false positives. Finally, the TPs are not visible at all due to the limited sagittal field of view. US images, on the other hand, provide strong bone responses at the LAs, TPs and SPs, relatively weaker responses at the APs, and almost no useful information from the VBs.

Figure 4.3: The lumbar spine in the sitting position. (a) shows that in the sitting pose the lumbar spine is almost straight, but the L5-sacrum curve is sharper (courtesy of ergonut.com [22]). (b) and (c) demonstrate the straight pose used for registering the shape+pose and shape+pose+scale models to the US images, respectively.

To account for this variable visibility across the different regions in each modality, we propose a weighted joint registration technique. In this method, predefined weights are assigned to each instance of the model depending on the visibility of the different regions in the given modality. The objective function (Eqs. 2.8 and 2.9) is modified by adding the weight matrix to the expectation step. The joint shape derivative becomes:

\frac{\partial Q}{\partial \theta_s} = \sum_{md \in M} \sum_{l=1}^{L} \sum_{m,n=1}^{M_{md},\, N_l} W^{l}_{n,md} \, P\left(t^{l}_{n} \mid z_{m,md}\right) \left[ \Phi(t^{l}_{n})^{\top} - z_{m,md}^{\top} \right] \frac{\partial \Phi(t^{l}_{n})}{\partial \theta_s} + R_s . \quad (4.1)

Here, W^l_{n,md} is a diagonal matrix whose n-th diagonal entry for the l-th object represents the weight assigned to point t^l_n of the model for modality md.
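One way the per-region weighting of Eq. 4.1 can enter the expectation step is by scaling each model point's Gaussian term by its region weight before normalization. The sketch below is a simplified, hypothetical illustration of such a CPD-style weighted E-step, not the thesis code; the weight values merely mirror the US column of Table 4.2.

```python
import numpy as np

# Hypothetical per-region weights, mirroring the US column of Table 4.2.
US_WEIGHTS = {"SP": 0.5, "LA": 1.0, "AP": 0.5, "TP": 1.0, "VB": 0.0}

def weighted_responsibilities(model_pts, target_pts, regions, weights,
                              sigma2, outlier=0.1):
    """E-step of a CPD-like Gaussian-mixture registration with per-region
    weights: each model point's Gaussian term is scaled by its region weight.

    Returns an (M, N) matrix of posteriors P(model point n | target point m).
    """
    d2 = ((target_pts[:, None, :] - model_pts[None, :, :]) ** 2).sum(-1)
    gauss = np.exp(-d2 / (2.0 * sigma2))
    w = np.array([weights[r] for r in regions])       # one weight per model point
    weighted = gauss * w                               # zero weight => no influence
    # small uniform outlier mass keeps rows well-defined far from the model
    return weighted / (weighted.sum(axis=1, keepdims=True) + outlier)

# Toy example: two visible "LA" points and one invisible "VB" point.
model = np.array([[0.0, 0, 0], [10.0, 0, 0], [5.0, 0, 0]])
regions = ["LA", "LA", "VB"]
target = np.array([[5.0, 0.1, 0.0]])
P = weighted_responsibilities(model, target, regions, US_WEIGHTS, sigma2=25.0)
# The zero-weighted VB point gets no responsibility despite being closest.
```

Down-weighted regions then contribute proportionally less to the weighted sums in the shape derivative, which is the intuition behind W^l_{n,md} favouring the visible regions of each modality.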
These weights are assigned per region: each point belonging to a given region (SP, LA, AP, TP and VB) receives the weight specific to that region. Intuitively, based on the presumption about the presence and correctness of information at these regions, W^l_{n,md} determines which regions the objective function favours for each modality.

Based on the rationale discussed above, the weights given in Table 4.2 were selected for the US and MR data. For each modality, regions with high visibility are assigned weights of 1. More challenging regions, where false positives are expected, receive weights of 0.5. Zero weights are given to regions that are not visible at all in a given image.

Region          SP    LA    AP    TP    VB
Weight in MR    0.5   1     0.5   0     1
Weight in US    0.5   1     0.5   1     0

Table 4.2: Weights assigned to the different regions of the vertebrae.

The iterative optimization and model transformation are as described in Chapters 2 and 3.

4.3 Results

4.3.1 Registration Parameter Selection

The registration parameters obtained in Chapter 3 were used for the MR+US data.

4.3.2 Validation

Registration results are assessed separately in both the MR and US spaces. Similar to Chapter 3, the results from the joint and weighted joint methods are compared with those of the US-only and MR-only techniques for both versions of the statistical model.

Experiments in MR Domain

In order to study the registration results in the MR domain, the pose parameter of the model is computed from the MR image. The shape is optimized jointly, or from only one of the modalities, in the different experiments. For each subject, the TRE is calculated between the model and the landmarks depicted in the ground truth. Correspondences are assigned by locating the nearest model points to each landmark point. For the MR images, the ground truth consists of landmarks selected on the SPs, APs, LAs and VBs.
More precisely, the following landmarks were segmented: the leading edge of the visible SPs on the medial slice; the APs and LAs on the most lateral slices that include visible APs and LAs, respectively; and the outline of the VBs on the medial slice, as well as on the slices where LAs are marked. The landmark segmentation was done manually on sagittal slices using MITK's segmentation tools. Table 4.3 shows a breakdown of the number of landmark points used for computing the TRE at each region in the MR images.

Patient    AP       LA       SP       VB
1          353      326      265      2,469
2          227      275      131      1,944
3          361      270      233      1,587
4          546      565      413      3,425
5          390      325      135      2,190
6          254      277      182      1,192
7          525      568      280      3,637
8          331      343      155      2,250
9          607      395      233      1,345
All        3,594    3,344    2,027    20,039

Table 4.3: Numbers of MR gold standard points used to report TRE values in the MR space for the US+MR dataset. The numbers are provided per region for each subject.

Experiments in US Domain

The analysis in the US domain is similar to that of Chapter 3. TRE values are computed for the patient-specific models with US poses in the different experiments. The landmarks were selected by a sonographer using the same protocols described in Section 3.3.2. Table 4.4 shows the number of landmark points used on the different regions of each US image.

Patient    AP    LA       SP     TP
1          4     220      0      44
2          1     125      0      49
3          15    173      33     74
4          4     242      30     0
5          8     164      0      89
6          1     220      61     25
7          4     197      46     60
8          0     174      0      0
9          10    270      16     85
All        47    1,785    186    426

Table 4.4: Numbers of US gold standard points used to report TRE values for the US+MR dataset. The numbers are provided per region for each subject. The landmarks were selected by an expert sonographer. Zeros indicate the region was not visible to the sonographer.

The shape+pose model was used for the joint registration of MR and US. Figure 4.4 shows examples of the results of the weighted joint and US-only methods on the two modalities. The TRE values obtained in the experiments performed are provided in Tables 4.5 and 4.6.
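The closest-point correspondence assignment used when computing these TRE values can be sketched as follows; this is an illustrative simplification, not the evaluation code used in this work.

```python
import numpy as np

def tre_nearest_neighbour(model_pts, landmarks):
    """TRE via closest-point correspondences: for each gold-standard landmark,
    take the distance to its nearest registered model point (in mm)."""
    d2 = ((landmarks[:, None, :] - model_pts[None, :, :]) ** 2).sum(-1)
    dists = np.sqrt(d2.min(axis=1))
    return dists.mean(), dists.std()

# Toy check: landmarks offset exactly 1 mm from their nearest model points.
model = np.array([[0.0, 0, 0], [10.0, 0, 0], [20.0, 0, 0]])
landmarks = model + np.array([1.0, 0.0, 0.0])
mean_tre, std_tre = tre_nearest_neighbour(model, landmarks)
# mean_tre == 1.0 mm, std_tre == 0.0 mm
```

As discussed in Section 3.4, such nearest-neighbour matching can assign incorrect correspondences, which is one caveat when interpreting the per-region TRE tables below.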
U-tests were performed on the error distributions (4.7). No statistically significant improvements were observed. The execution time was approximately 1.1 minutes. Similarly, the results obtained using the shape+pose+scale model are provided below (Tables 4.7-4.8 and Fig. 4.6). At the AP, SP and TP regions, p < 0.05 was achieved. That is, the shape+pose+scale model combined with the weighted joint method led to a significant improvement of the TRE values. Improvements at the L1 and L3 vertebrae were also obtained (p < 0.05). An average run-time of 1.9 minutes was achieved.

4.4 Discussion and Summary

In this chapter, the shape+pose and shape+pose+scale models were used in the joint and weighted joint registration methods to augment intraprocedure US and preprocedure MR. The US+MR dataset was shown to be challenging due to the low quality and slice thickness, as well as the limited fields of view, of both modalities. Nevertheless, a consistent and statistically significant improvement was seen using the joint methods with both models, especially in the US domain.

Figure 4.4: Example snapshots of the registration results using the shape+pose+scale model on MR (a) and US (b) images. The weighted joint method (blue) performs better than the model-to-US technique (red). The yellow annotations on US depict the gold standard segmentation obtained by the sonographer.

The joint and weighted joint methods resulted in significant improvements in the TRE compared to the US-only method in both modalities. U-tests revealed that significant improvements were made using the shape+pose+scale model combined with the weighted joint technique. Specifically, it was shown that the AP, SP and TP TREs dropped using the joint methods. This suggests the joint methods are preferable over US-only ones for facet joint injections, where the APs are the most critical regions. Errors at the most critical regions for epidurals, i.e.
the LAs, show much smaller changes across the different shape optimization methods and model versions. Nevertheless, using the shape+pose+scale model, the weighted joint method still achieves lower TREs compared to US-only. The joint and weighted joint methods were very similar in terms of accuracy for the shape+pose model. However, in the case of shape+pose+scale, the improvements made by the weighted joint method over the joint technique are more pronounced. This can be due to the fact that this model is less constrained in terms of pose deformation and is hence more prone to converging further from the optimal minimum in the presence of false positives. Overall, the shape+pose+scale model proved to be more successful for this dataset throughout the different regions of the vertebrae.

Region    MR-only        Weighted Joint    Joint          US-only
AP        2.80 ± 1.42    2.69 ± 1.44       2.70 ± 1.40    2.78 ± 1.54
LA        2.48 ± 1.37    2.49 ± 1.30       2.48 ± 1.31    2.52 ± 1.32
SP        4.21 ± 2.54    4.11 ± 2.55       2.24 ± 2.58    4.07 ± 2.44
VB        2.52 ± 1.38    2.50 ± 1.35       2.54 ± 1.40    2.54 ± 1.37
All       2.68 ± 1.68    2.75 ± 1.70       2.67 ± 1.64    2.60 ± 1.62

Table 4.5: MR TRE (mm) achieved using the shape+pose model for the US+MR dataset. In each experiment (each column), the shape parameter of the model is optimized from the modalities indicated in the first row of the table. The errors are reported for different regions of the vertebrae (AP, LA, SP, VB) and the average of all four.

In general, the weighted joint framework combined with the shape+pose+scale model is more accurate than the shape+pose model at all the regions. With the shape+pose+scale model, the weighted joint method even outperformed the MR-only registration in some cases, since it benefits from the complementary features of both modalities. The TRE values obtained for the MR+US chapter proved to be higher than the SDEs and TREs of the US+CT data in Chapter 3. This can be attributed to several factors.
Firstly, accurate and complete bone surface extraction was generally more challenging for the US+MR data. Compared to the US+CT data, this dataset's preprocedure images (MR) have thicker slices (3.5-4.4 mm). Hence, the best and most accurate obtainable error in the sagittal direction is 1.75-2.2 mm, which is already within the range of errors of the US+CT dataset. Furthermore, due to the visibility of different tissues in MR images, the facet joints and spinous processes are not as clearly visible as in CT scans. This leads to relatively poorer and less accurate bone edge detection in MR compared to CT. Also, the fields of view of both MR and US in this dataset are limited, with the clinical MR images not including the TPs and the US images not capturing the full lumbar spine. Another important source of error, especially in the US space, is the subjects' pose at image acquisition time. As mentioned before, the US images in this dataset were obtained in the sitting position, which is a more extreme pose compared to the prone position. To address this issue, a less curved pose was used to initialize the model. However, the observed errors at L4 and L5 (the vertebrae affected most in the sitting position) were higher.

Region    MR-only        Weighted Joint    Joint          US-only
AP        7.60 ± 4.05    7.92 ± 4.02       7.52 ± 3.63    7.98 ± 3.81
LA        3.37 ± 2.29    3.46 ± 2.48       3.44 ± 2.45    3.43 ± 2.43
SP        4.65 ± 1.74    4.39 ± 1.74       4.33 ± 1.60    4.11 ± 1.71
TP        3.92 ± 2.61    3.91 ± 2.59       3.93 ± 2.65    3.91 ± 2.63
All       4.25 ± 3.56    4.20 ± 3.64       4.24 ± 3.54    4.20 ± 3.58

Table 4.6: US TRE (mm) achieved using the shape+pose model for the US+MR dataset. In each experiment (each column), the shape parameter of the model is optimized from the modalities indicated in the first row of the table. The errors are reported for different regions of the vertebrae (AP, LA, SP, TP) and the average of all four.
This can be due to the fact that the current models are based on CT images only, which may not actually contain the kind of pose deformation involved in the sitting position. Also, the initial poses were merely chosen by manually selecting, for each of the shape+pose and shape+pose+scale models, a pose that appeared straight from the possible variations of pose in each model. In order to obtain a more accurate initial pose, a US training dataset is required, in which the optimal pose parameters are manually determined. This prior knowledge about the pose of the patient can be integrated into the system to initialize the pose registration for a more accurate and faster convergence. An important observation made from the data is that the variations of the pose of the sacrum are often quite noticeable in the different poses seen in the current datasets. Hence, we believe that including the sacrum in the statistical model can potentially help achieve a better pose convergence, especially for the sitting cases. Finally, there can potentially be errors in the process of calculating the TREs. In Chapter 3, each vertebra of the patient-specific model was rigidly registered to the corresponding vertebra in the gold standard to eliminate pose errors. This step cannot be done in the case of the MR+US dataset, because full segmentation of the vertebrae is not feasible. As a result, the reported TRE values include both shape and pose errors. Moreover, not performing a rigid registration on individual vertebrae increases the odds of assigning incorrect correspondences between the model points and the points on the gold standard segmentation.

Figure 4.5: Box plots showing the breakdown of TRE values obtained using the shape+pose model for US images (US+MR dataset) at different regions of the vertebrae (a) and the five different vertebrae (b).
No significant improvements are observed.

Region    MR-only        Weighted Joint    Joint          US-only
AP        2.73 ± 1.54    2.73 ± 1.50       2.80 ± 1.60    2.80 ± 1.63
LA        2.52 ± 1.40    2.58 ± 1.61       2.60 ± 1.74    2.65 ± 1.67
SP        4.10 ± 2.74    4.19 ± 2.85       4.44 ± 2.98    4.22 ± 3.04
VB        2.45 ± 1.36    2.46 ± 1.39       2.54 ± 1.44    2.51 ± 1.42
All       2.61 ± 1.89    2.62 ± 1.63       2.71 ± 1.72    2.69 ± 1.70

Table 4.7: MR TRE (mm) achieved using the shape+pose+scale model for the US+MR dataset. In each experiment (each column), the shape and scale parameters of the model are optimized using the methods indicated in the first row of the table. The errors are reported for different regions of the vertebrae (AP, LA, SP, VB) and the average of all four.

Figure 4.6: Example snapshots of the registration results using the shape+pose+scale model on MR (a) and US (b) images. The weighted joint method (blue) performs better than the model-to-US technique (red). The yellow annotations on US depict the gold standard segmentation obtained by the sonographer.

Region    MR-only        Weighted Joint    Joint          US-only
AP        6.10 ± 2.18    6.28 ± 2.31       7.13 ± 2.16    8.26 ± 2.53
LA        3.40 ± 2.08    3.49 ± 2.11       3.61 ± 2.15    3.66 ± 2.32
SP        7.95 ± 4.89    7.57 ± 5.13       8.18 ± 4.96    7.76 ± 4.98
TP        5.38 ± 5.20    5.41 ± 5.19       5.54 ± 4.76    5.45 ± 5.05
All       4.15 ± 3.56    4.20 ± 3.64       4.24 ± 3.54    4.20 ± 4.58

Table 4.8: US TRE (mm) achieved using the shape+pose+scale model for the US+MR dataset. In each experiment (each column), the shape and scale parameters of the model are optimized using the methods indicated in the first row of the table. The errors are reported for different regions of the vertebrae (AP, LA, SP, TP) and the average of all four.

Figure 4.7: Box plots showing the breakdown of TRE values obtained using the shape+pose+scale model for US images (US+MR dataset) at different regions of the vertebrae (a) and the five different vertebrae (b).
Significant improvements were observed at the AP, SP and TP regions, as well as at L1 and L3.

Chapter 5
Conclusion

In this thesis, a novel solution was presented for the augmentation of spinal US used in guiding anesthesia. A point-based shape+pose registration framework was modified and extended to jointly register multiple instances of a statistical model to multiple imaging modalities, under the assumption that the modalities vary in pose but not in shape and size. The proposed framework makes it possible to take advantage of the complementary features existing in different modalities and hence helps achieve a more accurate spine localization. The statistical model was improved by decoupling scale from pose in the atlas, yielding a shape+pose+scale model. The joint registration technique using the two versions of the model was successfully applied to a US+CT and a US+MR dataset. The registration accuracy at the articular processes, the most critical regions in facet joint injections, was significantly improved. As for the laminae, the most important visible bone regions in epidurals, no significant improvement was achieved. Nonetheless, it was demonstrated that the joint method allows for a simultaneous visualization of the patient-specific models on the intraoperative US and the preoperative images. Preoperative images are easier to interpret; hence, the simultaneous visualization helps clinicians assess the local registration accuracy of the model. Seeing the model on the preoperative images can help the clinician decide whether or not to base guidance decisions on the registration results. This framework has the potential to improve the performance of delivering anesthesia by adding atlas overlays, eliminating the need to download or process data preprocedurally.

5.1 Contributions

The contributions made in this thesis are summarized below:

1.
A novel joint framework is proposed for the simultaneous registration of multiple available imaging modalities, including intraoperative US and preoperative CT and MR.

2. A modified version of the registration cost function is provided, which accounts for one common shape (and scale) and multiple poses for multiple imaging modalities.

3. The derivatives for fast optimization of the shape and pose coefficients in the joint framework are derived and presented.

4. A weighted version of the joint framework was proposed, allowing weights to be assigned to each imaging modality in the objective function. The weights of the different regions in each modality are chosen according to their presumed visibility. (They can be adjusted by the user according to the application, the resolution of each image, the number of visible vertebrae, etc.)

5. A new shape+pose+scale statistical model of the lumbar spine has been developed. This was done by decoupling the scaling from the pose, allowing the pose and scale to be optimized separately and independently. The separate parameter optimization led to a more accurate fitting of the pose to the data.

6. The proposed method, combined with both the shape+pose and shape+pose+scale models, has been validated on a US+CT dataset. The effectiveness of the method was demonstrated by carrying out several experiments for each dataset and comparing their registration errors at individual regions of the vertebrae.

7. The method has also been successfully applied to a challenging dataset of low-quality US and clinical MR images with limited fields of view. To the best of our knowledge, no previous work had attempted US+MR fusion for the lumbar spine using a deformable registration approach with statistical models.

8. The presented joint registration method has been compared to a previous work that only registers a statistical model to US.

9.
The pose convergence has been improved in terms of speed and accuracy by initializing the registration according to prior knowledge of the pose in the intraoperative US.

5.2 Future Work

Provided below are several possible paths for further research following this thesis.

1. Optimal initial poses can be obtained for US images. In this thesis, for registering the model to the US+MR pair, we used an initial pose in which, according to the statistical model, the spine appeared straight. A statistical analysis on a US dataset can be performed to find a better initial pose. Using such initial poses for the US helps reduce the risk of converging to local minima and helps speed up the optimization.

2. The computation time can be improved by developing target point cloud extraction techniques that can be applied to 3D volumes, as opposed to individual 2D slices. Currently, these preprocessing steps are very computationally expensive, especially for US images. This is because the bone enhancement algorithm is applied to individual 2D slices, one at a time. The computation time can also be improved through parallel processing on graphics processing units.

3. Increasing the size of the test datasets will give more insight into the performance and robustness of the proposed method.

4. Increasing the size of the training dataset can also help increase the range of variability that the model is able to capture and hence improve the statistical model.

5. Constructing a statistical model that includes the sacrum as well as the lumbar vertebrae can potentially help improve the pose registration for the lumbar spine.

6. A two-step registration approach can be used to improve the results. Here, the registered five-vertebra model can be used as an initialization for the local registration of smaller models to individual vertebrae.

7.
The proposed joint framework could be used for joint registration to both CT and MR scans together with the statistical model and US. We believe this can improve the final registration accuracy, which, in turn, could further increase the confidence of the physician in using these patient-specific models for making guidance decisions in spine anesthesia.

8. A user study can be conducted, in which sonographers are asked to rate and comment on the effectiveness and viability of the presented framework for spinal ultrasound augmentation. This will be a helpful and essential step for the proposed method to gain clinical acceptance.

9. US augmentation for scoliotic patients is additionally challenging due to the abnormal curvatures of their spines. This method, which benefits from complementary features of multiple imaging modalities, can be applied to scoliotic patients.

10. Finally, the proposed framework can be thought of as a generic solution to the problem of registering a statistical atlas to rigid multi-body anatomies, which may vary only in pose across different imaging modalities. Hence, we envision that such multi-data fusion could further be used for applications such as the wrist, femur, tibia, cervix, etc.

Bibliography

[1] Salahadin Abdi, Sukdeb Datta, and Linda F Lucas. Role of epidural steroids in the management of chronic spinal pain: a systematic review of effectiveness and complications. Pain Physician, 8(1):127-143, 2005.

[2] Mohamed N Ahmed, Sameh M Yamany, Nevin Mohamed, Aly A Farag, and Thomas Moriarty. A modified fuzzy C-means algorithm for bias field estimation and segmentation of MRI data.
IEEE Transactions on Medical Imaging, 21(3):193-199, 2002.

[3] Emran Mohammad Abu Anas, Alexander Seitel, Abtin Rasoulian, Paul St John, Tamas Ungi, Andras Lasso, Kathryn Darras, David Wilson, Victoria A Lessoway, Gabor Fichtinger, Michelle Zec, David Pichora, Parvin Mousavi, Robert Rohling, and Purang Abolmaesumi. Registration of a statistical model to intraoperative ultrasound for scaphoid screw fixation. International Journal of Computer Assisted Radiology and Surgery, 10(6):959-969, 2015.

[4] Ray M. Baker. Cervical, thoracic and lumbar facet joint injections, 2013.

[5] Delaram Behnami, Alexander Seitel, Abtin Rasoulian, Emran Mohammad Abu Anas, Victoria A Lessoway, Jill Osborn, Robert Rohling, and Purang Abolmaesumi. Joint registration of ultrasound, CT and a shape+pose statistical model of the lumbar spine for guiding anesthesia. International Journal of Computer Assisted Radiology and Surgery, 30(9):1-10, 2016.

[6] Lars Eirik Bø, Rafael Palomar, Tormod Selbekk, and Ingerid Reinertsen. Registration of MR to percutaneous ultrasound of the spine for image-guided surgery. In Computational Methods and Clinical Applications for Spine Imaging, pages 209-218. Springer, 2014.

[7] Jonathan Boisvert, Farida Cheriet, Xavier Pennec, Hubert Labelle, and Nicholas Ayache. Articulated spine models for 3D reconstruction from partial radiographic data. IEEE Transactions on Biomedical Engineering, 55(11):2565-2574, 2008.

[8] Mark V Boswell, James D Colson, and William F Spillane. Therapeutic facet joint interventions in chronic spinal pain: a systematic review of effectiveness and complications. Pain Physician, 8(1):101-114, 2005.

[9] Bernhard Brendel, Susanne Winter, Andreas Rick, Martin Stockheim, and Helmut Ermert. Registration of 3D CT and ultrasound datasets of the spine using bone structures.
Computer Aided Surgery, 7(3):146–155,2002.[10] Mikael Brudfors, Alexander Seitel, Abtin Rasoulian, Andras Lasso, Vic-toria A Lessoway, Jill Osborn, Atsuto Maki, Robert N Rohling, andPurang Abolmaesumi. Towards real-time, tracker-less 3D ultrasoundguidance for spine anaesthesia. International journal of computer as-sisted radiology and surgery, pages 1–11, 2015.[11] P Center, L Covington, and A Parr. Lumbar interlaminar epiduralinjections in managing chronic low back and lower extremity pain: Asystematic review. Pain Physician, 12(1):163–188, 2009.[12] Elvis CS Chen, Parvin Mousavi, Sean Gill, Gabor Fichtinger, and Pu-rang Abolmaesumi. Ultrasound guided spine needle insertion. In SPIEMedical Imaging, page 762538. International Society for Optics andPhotonics, 2010.[13] PH Conroy, C Luyet, CJ McCartney, and PG McHardy. Real-timeultrasound-guided spinal anaesthesia: a prospective observational studyof a new approach. Anesthesiology research and practice, 2013, 2013.[14] Getu´lio Rodrigues de Oliveira Filho. The construction of learning curvesfor basic skills in anesthetic procedures: an application for the cumula-tive sum method. Anesthesia & Analgesia, 95(2):411–416, 2002.[15] Sean Gill, Purang Abolmaesumi, Gabor Fichtinger, Jonathan Boisvert,David Pichora, Dan Borshneck, and Parvin Mousavi. Biomechanicallyconstrained groupwise ultrasound to CT registration of the lumbarspine. Medical image analysis, 16(3):662–674, 2012.[16] Ben Glocker, Darko Zikic, Ender Konukoglu, David R. Haynor, andAntonio Criminisi. Vertebrae localization in pathological spine CT viadense classification from sparse annotations. In MICCAI 2013 - 16th55BibliographyInternational Conference on Medical Image Computing and ComputerAssisted Intervention. Springer, September 2013.[17] Ilker Hacihaliloglu, Abtin Rasoulian, Robert N Rohling, and PurangAbolmaesumi. Statistical shape model to 3D ultrasound registrationfor spine interventions using enhanced local phase features. 
In MedicalImage Computing and Computer-Assisted Intervention–MICCAI 2013,pages 361–368. Springer, 2013.[18] Siavash Khallaghi, Parvin Mousavi, Ren Hui Gong, Sean Gill, JonathanBoisvert, Gabor Fichtinger, David Pichora, Dan Borschneck, and Pu-rang Abolmaesumi. Registration of a statistical shape model of thelumbar spine to 3D ultrasound images. In Medical Image Comput-ing and Computer-Assisted Intervention–MICCAI 2010, pages 68–75.Springer, 2010.[19] Spencer S Liu, Wyndam M Strodtbeck, Jeffrey M Richman, andChristopher L Wu. A comparison of regional versus general anesthesiafor ambulatory anesthesia: a meta-analysis of randomized controlledtrials. Anesthesia & Analgesia, 101(6):1634–1642, 2005.[20] Alexander Loizides, Siegfried Peer, Michaela Plaikner, Verena Spiss,Klaus Galiano, Jochen Obernauer, and Hannes Gruber. Ultrasound-guided injections in the lumbar spine. Med Ultrason, 13(1):54–8, 2011.[21] John Moore, Colin Clarke, Daniel Bainbridge, Chris Wedlake, AndrewWiles, Danielle Pace, and Terry Peters. Image guidance for spinalfacet injections using tracked ultrasound. In Medical Image Comput-ing and Computer-Assisted Intervention–MICCAI 2009, pages 516–523.Springer, 2009.[22] Andriy Myronenko and Xubo Song. Point set registration: Coherentpoint drift. Pattern Analysis and Machine Intelligence, IEEE Transac-tions on, 32(12):2262–2275, 2010.[23] AU Niazi, KJ Chin, R Jin, and VW Chan. Real-time ultrasound-guidedspinal anesthesia using the SonixGPS ultrasound guidance system: afeasibility study. Acta Anaesthesiologica Scandinavica, 58(7):875–881,2014.[24] Abtin Rasoulian, Robert Rohling, and Purang Abolmaesumi. Lum-bar spine segmentation using a statistical multi-vertebrae anatom-56Bibliographyical shape+pose model. Medical Imaging, IEEE Transactions on,32(10):1890–1900, 2013.[25] Abtin Rasoulian, Robert N Rohling, and Purang Abolmaesumi. Aug-mentation of paramedian 3D ultrasound images of the spine. 
In Infor-mation Processing in Computer-Assisted Interventions, pages 51–60.Springer, 2013.[26] Abtin Rasoulian, Robert N Rohling, and Purang Abolmaesumi. Astatistical multi-vertebrae shape+ pose model for segmentation of CTimages. In SPIE Medical Imaging, pages 86710P–86710P. InternationalSociety for Optics and Photonics, 2013.[27] Abtin Rasoulian, Alexander Seitel, Jill Osborn, Samira Sojoudi, SamanNouranian, Victoria A Lessoway, Robert N Rohling, and Purang Abol-maesumi. Ultrasound-guided spinal injections: a feasibility study of aguidance system. International journal of computer assisted radiologyand surgery, 10(9):1417–1425, 2015.[28] Devon I Rubin. Epidemiology and risk factors for spine pain. Neurologicclinics, 25(2):353–371, 2007.[29] Steffen Schumann. State of the art of ultrasound-based registration incomputer assisted orthopedic interventions. In Computational Radiol-ogy for Orthopaedic Interventions, pages 271–297. Springer, 2016.[30] Richard Silbergleit, Bharat A Mehta, William P Sanders, and Sanjay JTalati. Imaging-guided injection techniques with fluoroscopy and CTfor spinal pain management. Radiographics, 21(4):927–939, 2001.[31] JS Sprigge and SJ Harper. Accidental dural puncture and post duralpuncture headache in obstetric anaesthesia: presentation and manage-ment: A 23-year survey in a district general hospital. Anaesthesia,63(1):36–43, 2008.[32] N Suhm, AL Jacob, L-P Nolte, P Regazzoni, and P Messmer. Surgi-cal navigation based on fluoroscopy clinical application for computer-assisted distal locking of intramedullary implants. Computer AidedSurgery, 5(6):391–400, 2000.[33] Amin Suzani, Abtin Rasoulian, Sidney Fels, Robert N Rohling, andPurang Abolmaesumi. Semi-automatic segmentation of vertebral bod-ies in volumetric MR images using a statistical shape+ pose model. In57BibliographySPIE Medical Imaging, pages 90360P–90360P. 
International Society forOptics and Photonics, 2014.[34] Denis Tran, Allaudin A Kamani, Elias Al-Attas, Victoria A Lessoway,Simon Massey, and Robert N Rohling. Single-operator real-timeultrasound-guidance to aim and insert a lumbar epidural needle. Cana-dian Journal of Anesthesia/Journal canadien d’anesthe´sie, 57(4):313–321, 2010.[35] Tamas Ungi, Purang Abolmaesumi, Rayhan Jalal, Mattea Welch, IreneAyukawa, Simrin Nagpal, Andras Lasso, Melanie Jaeger, Daniel PBorschneck, Gabor Fichtinger, and Parvin Mousavi. Spinal needle navi-gation by tracked ultrasound snapshots. Biomedical Engineering, IEEETransactions on, 59(10):2766–2772, 2012.[36] Charles XB Yan, Benoˆıt Goulet, Julie Pelletier, Sean Jy-Shyang Chen,Donatella Tampieri, and D Louis Collins. Towards accurate, robustand practical ultrasound-CT registration of vertebrae for image-guidedspine surgery. International journal of computer assisted radiology andsurgery, 6(4):523–537, 2011.[37] Steve H Yoon, Sarah Lee OBrien, and Mike Tran. Ultrasound guidedspine injections: advancement over fluoroscopic guidance? CurrentPhysical Medicine and Rehabilitation Reports, 1(2):104–113, 2013.58
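Item 10 above frames the method as registering a shared atlas to several rigid multi-body observations that differ only in pose. As a toy illustration of that idea only (not the thesis's algorithm, which registers a correspondence-free shape+pose statistical model via probabilistic point matching), the sketch below jointly estimates one rigid pose per modality and a shared mean shape by alternating closed-form Kabsch alignment; it assumes point sets with known one-to-one correspondences, and the names `kabsch` and `joint_register` are illustrative, not from the thesis.

```python
import numpy as np

def kabsch(P, Q):
    """Closed-form rigid transform (R, t) minimizing sum ||R p_i + t - q_i||^2.

    P, Q: (N, 3) arrays of corresponding points (rows)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

def joint_register(model, observations, n_iter=20):
    """Alternate between (a) a rigid pose per modality and (b) a shared shape.

    model: (N, 3) initial atlas; observations: list of (N, 3) arrays, one per
    modality, each a rigidly posed instance of the same underlying shape."""
    shape = model.copy()
    for _ in range(n_iter):
        # (a) fit one rigid pose per modality against the current shared shape
        poses = [kabsch(shape, obs) for obs in observations]
        # (b) back-project every observation into atlas space and average,
        # pooling the complementary evidence from all modalities
        backs = [(obs - t) @ R for (R, t), obs in zip(poses, observations)]
        shape = np.mean(backs, axis=0)
    return shape, poses
```

With noise-free, fully corresponding data the alternation converges in one sweep; the point of the sketch is only the structure of the estimation problem, namely that shape is shared across modalities while pose is modality-specific, mirroring the assumption stated in the thesis.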
