Prostate segmentation in ultrasound images using image warping and ellipsoid fitting Badiei, Sara 2007

Prostate Segmentation in Ultrasound Images Using Image Warping and Ellipsoid Fitting

by

Sara Badiei

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA

April 2007

© Sara Badiei, 2007

Abstract

This thesis outlines an algorithm for 2D and 3D semi-automatic segmentation of the prostate from B-mode trans-rectal ultrasound (TRUS) images. In semi-automatic segmentation, a computer algorithm outlines the boundary of the prostate given a few initialization points. The algorithm is designed for prostate brachytherapy and has the potential to: i) replace pre-operative manual segmentation, ii) enable intra-operative segmentation, and iii) be integrated into a visualization tool for training residents.

The segmentation algorithm uses image warping to make the 2D prostate boundary elliptical. A Star-Kalman-based edge detector is then guided along the elliptical shape to find the prostate boundary in the TRUS image, and a second ellipse is fit to the detected edge points. Once all 2D slices are segmented in this manner, an ellipsoid is fit to the 3D cloud of points. Finally, a reverse warping algorithm yields the segmented prostate volume.

In-depth 2D and 3D clinical studies show promising results. In 2D, distance-based metrics show a mean absolute difference of 0.67 ± 0.18 mm between manual and semi-automatic segmentation, and area-based metrics show average sensitivity and accuracy over 97% and 93% respectively. In 3D, i) the difference between manual and semi-automatic segmentation is on the order of inter-observer variability, ii) the repeatability of the segmentation algorithm is consistently better than the intra-observer variability, and iii) the sensitivity and accuracy are 97% and 85% respectively.
The 3D algorithm requires only 5 initialization points and can segment a prostate volume in less than 10 seconds (approximately 40 times faster than manual segmentation). The novelties of this algorithm, in comparison to other works, are in the warping and ellipse/ellipsoid fitting steps. These two combine to provide a simple solution that works well even with non-ideal images to produce accurate, real-time results.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
  1.1 Prostate Cancer
  1.2 Prostate Brachytherapy
    1.2.1 Why Prostate Brachytherapy
    1.2.2 Prostate Brachytherapy Procedure at the BCCA
  1.3 Thesis Motivation: Automatic Prostate Segmentation
    1.3.1 Why Segment the Prostate Automatically
    1.3.2 Difficulties in Automatic Prostate Segmentation
  1.4 Thesis Objective: Semi-Automatic 3D Prostate Segmentation
    1.4.1 Segmentation Criteria at BCCA
    1.4.2 Literature Review
      1.4.2.1 2D Automatic Prostate Segmentation from TRUS Images
      1.4.2.2 3D Automatic Prostate Segmentation from TRUS Images
    1.4.3 General Overview of Approach
  1.5 Thesis Overview
2 Algorithm: 2D Segmentation
  2.1 Introduction to the 6-step Algorithm
  2.2 Initialization
  2.3 Forward and Reverse Warp
    2.3.1 Forward Warp
    2.3.2 Warping Function
      2.3.2.1 Choosing the a (Stretch Factor) Automatically
    2.3.3 Reverse Warp
  2.4 Ellipse Fitting
    2.4.1 Equation of an Ellipse
    2.4.2 Fitting an Ellipse to Measurement Points
  2.5 IMMPDA Edge Detector
    2.5.1 Star Kalman - Finding Candidate Edge Points
    2.5.2 Kalman or Extended Kalman Filter - Finding the Prostate Boundary
    2.5.3 Probabilistic Data Association Filter - Finding the Measurement Value
    2.5.4 Interacting Multiple Modes
    2.5.5 System Models
      2.5.5.1 Linear Constant Velocity (Circle) Model
      2.5.5.2 Linear Ellipse Model
      2.5.5.3 Non-linear Ellipse Model - Derivative Method
      2.5.5.4 Non-linear Ellipse Model - Output Method
    2.5.6 Comparing the Four Models
      2.5.6.1 Comparing the Two Linear Models
      2.5.6.2 Comparing the Two Non-linear Models
      2.5.6.3 Comparing Linear vs Non-linear Models
3 Study Results and Discussion: 2D Segmentation
  3.1 Study Setup
  3.2 Validation Methods
    3.2.1 Distance Based Metrics
    3.2.2 Area Based Metrics
  3.3 Results
  3.4 Discussion
4 Algorithm: 3D Segmentation
  4.1 Introduction to the 8-step Algorithm
  4.2 Ellipsoid Fitting
    4.2.1 Equation of an Ellipsoid
    4.2.2 Fitting an Ellipsoid to Measurement Points
5 Study Results and Discussion: 3D Segmentation
  5.1 Study Outline
    5.1.1 Study Setup
    5.1.2 Removing Bias
    5.1.3 Segmentation Guidelines for Observers
  5.2 Validation Methods and Results
    5.2.1 Distance Based Metrics
      5.2.1.1 Comparing the Difference Between Semi-automatic Segmentation and the Gold Standard to Inter-observer Variability
      5.2.1.2 Comparing the Repeatability of Semi-automatic Segmentation to Intra-observer Variability
    5.2.2 Volume Based Metrics
    5.2.3 3D Segmentation Algorithm Speed
  5.3 Discussion
    5.3.1 Performance
      5.3.1.1 Potential for Integration at BCCA
      5.3.1.2 Algorithm Limitations
6 Conclusion and Future Work
  6.1 Future Work
Bibliography
A Ellipse/Ellipsoid Reference Frame Conventions
  A.1 Reference Frame Convention in 2D
  A.2 Reference Frame Convention in 3D
B Ellipse/Ellipsoid Parameter Conversions
  B.1 Fitting an Ellipse to Measurement Points in 2D
    B.1.1 Deriving P Parameters from q Parameters (q to P)
    B.1.2 Deriving q Parameters from P Parameters (P to q)
      B.1.2.1 Method 1
      B.1.2.2 Method 2
  B.2 Fitting an Ellipsoid to Measurement Points in 3D
    B.2.1 Deriving P Parameters from q Parameters (q to P)
    B.2.2 Deriving q Parameters from P Parameters (P to q)
      B.2.2.1 Method 1
      B.2.2.2 Method 2
      B.2.2.3 Comparing Method 1 and Method 2
C Ellipse/Ellipsoid Fitting - Mathematical Proofs
  C.1 LSM Cost Function
  C.2 Minimizing the Cost Function using GEP
D Lagrangian Multipliers
E Newton-Raphson Method
F Tracking Theory and (Extended) Kalman Filters
  F.1 Kalman Filter
  F.2 Extended Kalman Filter
  F.3 Interacting Multiple Modes
  F.4 Probabilistic Data Association Filter
G Semi-automatic Segmentation Results

List of Tables

1.1 Overview of the best 2D and 3D segmentation algorithms available
2.1 Linear State Space Models
2.2 Non-linear State Space Models
3.1 Average area and distance based metrics for validation of the segmentation algorithm
3.2 Average times for manual and semi-automatic segmentation
5.1 Pearson correlation coefficients (correlating computer to manual segmentation) for different regions of the prostate and for different observers
5.2 Comparing the average difference between semi-automatic segmentation and the gold standard (ASA-GS) to inter-observer variability (IOV) for different regions of the prostate
5.3 Comparing the repeatability of the computer algorithm to intra-observer variability in manual segmentation
5.4 Comparing intra-observer variability to computer repeatability in different regions of the prostate and for different observers
5.5 Sensitivity and accuracy of 3D semi-automatic segmentation in comparison to the gold standard
5.6 Average times for manual and semi-automatic segmentation of an entire prostate volume composed of approximately 10-13 slices

List of Figures

1.1 Prostate brachytherapy
1.2 Segmentation difficulties due to anatomy
1.3 Comparing TRUS image quality before and after operation
2.1 The proposed semi-automatic segmentation algorithm consists of 6 steps. Steps 1 and 2 show the unwarped (x-marks) and warped (circles) initialization points respectively. Steps 3 and 5 show the best ellipse fit to the warped initialization points and to the IMMPDA edge map respectively. Step 6 shows the final segmentation result
2.2 Place every point in the original image (left) into a new position to create the warped image (right) using a radial warping function to get r_new from r and θ
2.3 Comparing the original image and original initialization points (a) to the warped image and warped initialization points (b)
2.4 The θ component (top) and r component (bottom) of the warping function in 2D
2.5 The warping function r_new/r = 1 − sin(θ) exp(−r²/2a²) in 3D. The warped radius r_new is approximately equal to the unwarped radius r for angles θ close to 0° or 180° and for large r values. The warped radius r_new is less than the unwarped radius r for angles θ close to 90° and small r values, therefore causing a stretch. The limit between small and large r values is set by a
2.6 Compressed prostate boundary (solid line) and rough estimate of un-compressed prostate boundary (dashed line) with v-components of the initialization points on the v-axis
2.7 For every point on the final ellipse fit (dotted line), we find r_new and θ, and then use the Newton-Raphson method to find the r value that will unwarp that point
2.8 (a) Image and contour before reverse warp, and (b) image and contour after reverse warp
2.9 Tracking the prostate boundary using Star Kalman/PDA and a seed point
2.10 Pixel intensities along the ray shown in figure (a) are recorded and shown in the first subplot in figure (b). The gradients of the pixel intensities are shown in the second subplot and the 5 maximum gradient values are shown in the third subplot in figure (b)
2.11 z(k−1|k−1) on ray k−1 is mapped to z(k|k−1) on ray k using the state space model. Five maxima are chosen on ray k to find z(k|k)
2.12 Comparison of edge detection results using one-mode or two-mode IMMs (edge detection is guided along an ellipse (thin solid line) and results are shown for the Circle model (thick dotted line) and Ellipse model (thick solid line))
2.13 Comparison of edge detection results using the Circle model (thick dotted contour) and the Ellipse model (thick solid contour) for four different prostate contours
3.1 The radial differences (at angles θ_i) between the manual (dotted line) and semi-automatic (solid line) contours are used in validating the segmentation algorithm
3.2 Pixel by pixel comparison of manual vs semi-automatic contours
3.3 Comparison between manual (dotted line) and semi-automatic (solid line) segmentation
4.1 Eight steps of the 3D semi-automatic prostate segmentation algorithm
5.1 Each of the three observers manually segmented and initialized the initial 10 cases. Two weeks later, each observer manually segmented and initialized 5 of the original 10 cases
5.2 A guideline for choosing the location, order and number of initialization points. Points should be chosen somewhere within the grey boxes, which are located at the center of the TRUS axis, the center of the prostate, and the right-most, bottom-most and top-most ends of the prostate boundary
5.3 Finding points on the secondary curve (x-marks) that correspond to points on the reference curve (o-marks). The image to the right shows the final result, in which we have two different curves whose points align radially, making radial distance based analysis possible
5.4 A 3D comparison of semi-automatic segmentation (asterisk marks) to manual segmentation (circle marks)
5.5 Comparing the semi-automatic contour (solid line) to the manual contour (dotted line) for the entire prostate volume
5.6 Finding the average manual contour (solid line) from three manual contours by three different observers (square marks, x-marks and o-marks)
5.7 Comparing each computer generated contour to the gold standard on a point-by-point basis to evaluate the segmentation algorithm (all units are millimeters)
5.8 Using Pearson correlations to compare computer segmentation (initialization by observer 1) to the gold standard
5.9 Finding the inter-observer variability for case c, slice s, point p, where m_i is Manual_i(c, s, p). Manual segmentations by observers 1, 2 and 3 are represented by square marks, o-marks and x-marks respectively
5.10 Degree of inter-observer variability at the base slice of case 1. Manual segmentations by observer 1 (solid line), observer 2 (dashed line) and observer 3 (dotted line)
5.11 Notched box plots comparing the difference between comp_1 and GS, comp_2 and GS, comp_3 and GS to IOV for different regions of the prostate
5.12 Gold standard (dashed line) vs semi-automatic (solid line) segmentation results for observer 3, case 1 (Sensitivity=99.6%, Accuracy=75.6%)
5.13 Gold standard (dashed line) vs semi-automatic (solid line) segmentation results for observer 3, case 5. In this case Sensitivity=95.7% and Accuracy=90.28%
5.14 Gold standard (dashed line) vs semi-automatic (solid line) segmentation results for observer 2, case 4. In this case Sensitivity=86.95% and Accuracy=84.04%
A.1 Three reference frames for describing an ellipse
A.2 Convention for reference frames in 3D
A.3 Moving between reference frames in 3D
B.1 Two ellipses with different parameters (left) can give the same final ellipse shape (right)
B.2 Correcting the order of the basis vectors in 3D
D.1 The dashed lines represent the level curves of the function we are trying to optimize. The solid line represents the constraint. The arrows point in the direction of maximum gradient
E.1 Warping function as a function of r only, with θ=90° and a=40. Newton's method is used to find the r(i) value corresponding to r_new(i)
F.1 One iteration of the Kalman Filter [5]
F.2 One iteration of the Extended Kalman Filter [5]. The bold boxes show the difference between the EKF and KF
F.3 One cycle of the Interacting Multiple Mode Estimator
F.4 Tracking aircraft motion using a radar and PDA theory
G.1 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 1, initial 10 cases
G.2 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 2, initial 10 cases
G.3 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 3, initial 10 cases
G.4 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 4, initial 10 cases
G.5 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 5, initial 10 cases
G.6 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 6, initial 10 cases
G.7 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 7, initial 10 cases
G.8 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 8, initial 10 cases
G.9 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 9, initial 10 cases
G.10 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 10, initial 10 cases
G.11 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 1, initial 10 cases
G.12 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 2, initial 10 cases
G.13 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 3, initial 10 cases
G.14 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 4, initial 10 cases
G.15 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 5, initial 10 cases
G.16 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 6, initial 10 cases
G.17 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 7, initial 10 cases
G.18 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 8, initial 10 cases
G.19 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 9, initial 10 cases
G.20 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 10, initial 10 cases
G.21 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 1, initial 10 cases
G.22 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 2, initial 10 cases
G.23 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 3, initial 10 cases
G.24 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 4, initial 10 cases
G.25 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 5, initial 10 cases
G.26 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 6, initial 10 cases
G.27 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 7, initial 10 cases
G.28 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 8, initial 10 cases
G.29 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 9, initial 10 cases
G.30 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 10, initial 10 cases
G.31 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 2, repeat of 5 cases
G.32 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 3, repeat of 5 cases
G.33 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 6, repeat of 5 cases
G.34 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 8, repeat of 5 cases
G.35 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 10, repeat of 5 cases
G.36 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 2, repeat of 5 cases
G.37 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 3, repeat of 5 cases
G.38 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 6, repeat of 5 cases
G.39 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 8, repeat of 5 cases
G.40 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 10, repeat of 5 cases
G.41 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 2, repeat of 5 cases
G.42 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 3, repeat of 5 cases
G.43 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 6, repeat of 5 cases
G.44 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 8, repeat of 5 cases
G.45 Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 10, repeat of 5 cases

Acknowledgements

First and foremost I would like to thank my supervisor, Dr. Tim Salcudean, for his knowledge, guidance and support throughout my thesis. My research would not have been possible without collaboration with the BC Cancer Agency. I would like to thank Dr. William J Morris and Dr. Ingrid Spadinger for their gracious help and insight. I would also like to thank Alex Kruk and Nick Chng for manually segmenting prostate contours for my research.

I am grateful to Dr. James Varah for helping me so patiently with questions regarding ellipse/ellipsoid fitting.

I thank my labmates in the Control and Robotics group. They are my second family and some of my best friends.
I don't think anyone would disagree if I said we have the best lab in ECE. In particular, I'd like to thank Julian Guerrero for helping me understand Kalman filters and Danny French for teaching me the basics of prostate brachytherapy.

I am ever so thankful for my family. An encouraging phone call, a surprise visit, a few lunch boxes full of food made all the difference in the world.

Chapter 1

Introduction

1.1 Prostate Cancer

The prostate is a gland of the male reproductive system responsible for producing most of the volume of seminal fluid. It is located in front of the rectum, just below the bladder, and encloses the prostatic urethra (refer to Figure 1.1a). In older men the prostate is highly variable in size, ranging from 15-150 cm³.

[Figure 1.1: Prostate brachytherapy. (a) Male anatomy, (b) Setup for prostate brachytherapy]

Prostate cancer occurs when normal prostate gland cells mutate into cancer cells. If discovered early, prostate cancer is often curable; however, there are very few symptoms at an early stage and tumors can remain undetected until they have spread. In late stages, prostate cancer spreads to nearby organs such as the seminal vesicles, rectum, bladder and bones, or travels via the bloodstream and lymphatic system.

Prostate cancer is the most commonly diagnosed cancer in Canadian men, affecting one in seven in their lifetime. In 2006, an average of 398 Canadian men were diagnosed with the disease and 81 died from it each week [3].

In recent years, increased prostate cancer awareness and improved diagnostic tools have dramatically increased the number of reported prostate cancer cases. This increase has raised a significant amount of interest in research that aims to improve the detection, treatment and survival rate for prostate cancer patients.

Treatment options for prostate cancer depend on a variety of factors including clinical stage, Gleason scores, pre-treatment PSA levels, etc.
Some prostate cancers are indolent in their behaviour, and 'active surveillance', which refers to regular monitoring without any invasive treatment, is a good option. One common treatment option for early stage, clinically organ-confined prostate cancer is 'radical prostatectomy', in which the prostate is removed surgically. Another treatment option is 'radiation therapy', which uses x-rays to irradiate the cancer cells. There are two types of radiation therapy: i) 'External Beam Radiation Therapy (EBRT)', which directs high-energy x-ray beams at the prostate and surrounding tissues, and ii) 'Prostate Brachytherapy', in which radiation sources are placed in and around the prostate to irradiate the cancer cells from within. When cancer has spread beyond the prostate it is incurable, and the standard treatment is 'hormonal therapy', in which medication or surgical orchiectomy is used to block the pathways that produce dihydrotestosterone (DHT), a hormone necessary for the growth of prostate cancer cells. When the disease becomes refractory to hormone therapy, 'chemotherapy' may be used to postpone disease progression. Sometimes more than one treatment modality is required for optimal success. Treatment complications may include loss of urinary continence, erectile dysfunction and rectal injury.

1.2 Prostate Brachytherapy

1.2.1 Why Prostate Brachytherapy

To be specific, there are two types of prostate brachytherapy: i) High Dose Rate (HDR), and ii) Low Dose Rate (LDR). In HDR brachytherapy, small catheters are inserted into the prostate through the perineum, through which radioactive doses are delivered to the prostate. This technique is also called temporary brachytherapy because no radioactive material is left inside the prostate at the end of the procedure. In contrast, LDR brachytherapy is a permanent procedure in which small radioactive capsules are permanently implanted into the prostate.
The technique of interest for this thesis is the LDR option, and its steps are explained in Section 1.2.2. From this point on, a reference to prostate brachytherapy implies LDR brachytherapy.

Of all the treatment options available for locally confined prostate cancer, prostate brachytherapy is the most efficient in terms of recovery time and procedure outcome. Brachytherapy has several advantages over prostatectomy. It is a minimally invasive procedure (no incisions and very minimal bleeding) that typically takes less than 1 hour to perform, in contrast to prostatectomy, which can take 2-3 hours. Brachytherapy is done in an out-patient setting, meaning that the patient is released from the hospital hours after the procedure and, despite discomfort, is able to walk, sit and resume work immediately. Prostatectomy requires an overnight stay at the hospital followed by 8-12 weeks of recovery at home before full activities can be resumed.

Brachytherapy also has several advantages over EBRT. Brachytherapy allows for 50-100% greater local dose intensity than EBRT, and image guidance ensures more accurate placement and less damage to surrounding tissue than with external beam radiation [26]. As well, brachytherapy implants are permanent and the procedure is carried out only once, whereas external beam radiation requires daily treatments 5 days per week for 6-8 weeks. Overall, brachytherapy is an attractive option for patients who do not want to undergo major surgery or endure daily radiation treatment for seven weeks.

New developments in image guidance and visualization technology, treatment planning software, and improvements in radioisotopes for dose delivery have renewed interest, and therefore research, in prostate brachytherapy. For this reason, the number of brachytherapy procedures carried out in North America is expected to increase significantly over the next few years.
At the BC Cancer Agency (BCCA) there were 1800 brachytherapy implants between 1998 and 2006, and 300 implants in 2006 alone.

1.2.2 Prostate Brachytherapy Procedure at the BCCA

At the BCCA, a prostate brachytherapy patient undergoes a pre-operative scan approximately 4-8 weeks prior to the actual procedure. During this scan, a TRUS probe is inserted into the rectum, along the long axis of the prostate (coincident with the long axis of the body), and adjusted so that the transverse TRUS images of the prostate are symmetrical about the D column (center column) of the needle guide template (the reason for ensuring symmetry is explained in Section 1.4.1). Approximately 9-14 transverse images of the prostate are taken at 5mm increments. The images are referred to as BASE+0.0, BASE+0.5, BASE+1.0 and so on, where each 0.5 increment in the label corresponds to the 5mm increment in slice spacing. Although it seems counter-intuitive, the real base is on slice BASE+0.5, and BASE+0.0 is 5mm superior to the real base (this image is always taken). Since different prostates are of different sizes, the apex can be anywhere between BASE+3.5 and BASE+5.5. As with the base, one image is always taken 5mm inferior to the apex of the prostate.

The images are imported into a planning software package by Varian called Vari-Seed. In this software, the oncologists manually outline, or segment, approximately 70% of the transverse slices. Vari-Seed interpolates between points on each slice and interpolates contours between slices to create a 3D representation of the prostate size, shape, location and orientation. This 3D segmented volume is called the Clinical Target Volume (CTV). The oncologists then assign extra margins to the CTV to create a Planning Target Volume (PTV). The PTV is used to design a seed distribution plan to be applied to the patient's prostate on the day of the procedure.
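The slice-labelling convention above can be made concrete with a short sketch (illustrative only; the function names are mine and are not part of the BCCA protocol or Vari-Seed):

```python
def slice_labels(n_slices):
    """Labels for transverse TRUS slices taken at 5 mm increments,
    starting 5 mm superior to the real base (BASE+0.0).
    Each +0.5 in the label corresponds to one 5 mm step inferior."""
    return [f"BASE+{0.5 * i:.1f}" for i in range(n_slices)]

def label_to_offset_mm(label):
    """Distance (mm) of a slice inferior to the BASE+0.0 slice;
    e.g. 'BASE+1.5' lies 15 mm below BASE+0.0."""
    return float(label.split("+")[1]) * 10.0
```

For a typical 11-slice scan, `slice_labels(11)` runs from BASE+0.0 down to BASE+5.0, spanning 50 mm.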
Four to eight weeks later, during the actual brachytherapy procedure, approximately 60-150 radioactive seeds (Iodine-125 or Palladium-103) are guided trans-perineally, through brachytherapy needles, to specific xyz coordinates within the prostate according to the dosimetric plan. Refer to Figure 1.1b for the setup of the brachytherapy procedure and the orientation of the xyz coordinate system. The needle guide template helps the oncologist localize the needle in the xy plane, the TRUS images give the z coordinate as well as confirm the xy coordinates, and fluoroscopic images re-confirm the xz coordinates of the seeds.

On the day of the implant, the patient has a post-operative CT scan. The CT images are segmented manually to locate the seeds. Knowing the seed locations and the size, shape and orientation of the prostate, the radiation dose delivered to the tissue can be calculated to evaluate how closely the final dosimetry matches the pre-implant plan. In a successful implant, the 100% isodose (lines of equal dose distribution) should cover more than 85% of the CT prostate volume.

1.3 Thesis Motivation: Automatic Prostate Segmentation

1.3.1 Why Segment the Prostate Automatically

Currently, oncologists manually select 25-35 points around the prostate boundary and then adjust these points to ensure symmetry and smoothness. When all the slices have been segmented, a 3D volume is created. If the smoothness of the resulting 3D volume is unsatisfactory, the oncologists re-visit some slices and re-segment the boundary. Manual prostate segmentation has a very steep learning curve, and the time for segmenting a whole volume ranges from 20 minutes (non-expert) to 5 minutes (expert). Manual prostate segmentation from TRUS images is thus a time-consuming and laborious process, and it would be ideal if a computer algorithm could segment the prostate boundary automatically.
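The coverage criterion above reduces to a voxel count: the fraction of prostate voxels whose computed dose meets or exceeds the prescription. A minimal sketch using plain Python lists (the function name and flat data layout are my own illustration, not the BCCA's planning software):

```python
def isodose_coverage(dose, prostate_mask, prescription):
    """Fraction of the prostate volume receiving at least the
    prescription dose.  'dose' is a flat list of per-voxel doses;
    'prostate_mask' flags the voxels inside the CT prostate volume."""
    inside = [d for d, m in zip(dose, prostate_mask) if m]
    if not inside:
        return 0.0
    covered = sum(1 for d in inside if d >= prescription)
    return covered / len(inside)

# By the criterion above, an implant would be judged successful when
# isodose_coverage(dose, mask, prescription) > 0.85.
```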
Currently, oncologists only segment the prostate from TRUS images pre-operatively; however, intra-operative prostate segmentation could provide many benefits. For example, it is possible that the prostate has shifted and changed shape in the weeks between the pre-operative scan and the brachytherapy procedure, implying that the dosimetric plan (originally designed for the pre-operative scan) is no longer valid. It is also possible for the prostate to rotate and deform due to needle insertions, and to increase in volume due to swelling during the brachytherapy procedure, making the dosimetric plan even more inaccurate. Currently, intra-operative updates to the dosimetry to compensate for such deformation are not feasible because it is impossible to carry out intra-operative manual segmentation. Automated prostate segmentation could assist in updating the treatment plan intra-operatively, with minimal user intervention, so that the patient receives the most accurate dose distribution.

Intra-operative prostate segmentation could also provide a 3D, intra-operative visualization tool. At the BCCA, the more experienced oncologists compensate for the deformations discussed previously by inserting the seeds into modified grid locations so that, once the needle is removed, the intended seed drop-off location is reached. In order to know how to modify the dosimetric plan, the oncologists must have i) a very good 3D image of the prostate and its surrounding anatomy in their mind, and ii) an understanding of where the prostate is anchored and how it will rotate when punctured at a given location. For this reason, training residents in the 'art' of prostate brachytherapy is very difficult.
It would be desirable to create a software package with an easy-to-use Graphical User Interface (GUI) that integrates the automatic 3D segmentation algorithm along with an automatic dosimetric planning algorithm and a dynamic model of the prostate's motion. This package would enable real-time intra-operative prostate segmentation, which would result in: i) intra-operative updates to the dosimetry, and ii) a training tool for residents to help with 3D visualization and motion dynamics of the prostate.

1.3.2 Difficulties in Automatic Prostate Segmentation

The success of automatic prostate segmentation to date has been very limited [2] due to the nature of TRUS images of the prostate. In these images the prostate boundary is unclear, because the acoustic properties of the tissues surrounding the prostate are very similar, and the images suffer from speckle. Figure 1.2 shows transverse ultrasound images being taken of the prostate (gray lines). At the base of the prostate, the boundary becomes blurred with the bladder neck because i) their acoustic properties are similar, and ii) beam widening causes more information to be reflected off of the bladder, thereby producing poor images of the prostate base. The best images are obtained at midgland, where the prostate is free of surrounding organs. At its apex, the prostate merges with the pelvic floor and there, once again, the boundary of the prostate is unclear. In older men, calcium deposits surrounding the prostate set up another strong reflecting boundary for the ultrasound waves and cast shadows behind the deposits in the TRUS images. It is common for oncologists to rely on experience and knowledge of physiology to carry out segmentation when the prostate boundary is unclear (which occurs frequently).
At the end of the procedure the prostate is swollen, contains up to 150 seeds (which act as reflectors for the ultrasound waves), has numerous blood trails, and is subject to an onslaught of platelets and scar tissue attempting to stop the bleeding within the prostate. Therefore, the post-operative TRUS images of the prostate are much worse and the prostate boundary is nearly indistinguishable (refer to Figure 1.3). In this thesis we focus only on pre-operative TRUS images.

1.4 Thesis Objective: Semi-Automatic 3D Prostate Segmentation

1.4.1 Segmentation Criteria at BCCA

In this thesis we emphasize the following requirements for segmenting the prostate:

Figure 1.2: Segmentation difficulties due to anatomy (prostate base, midgland, and apex)

Figure 1.3: Comparing TRUS image quality before and after the operation - (a) pre-operative prostate TRUS image, (b) post-operative prostate TRUS image

1. Smooth, continuous contours with no sharp edges.

(a) The radiation pattern of the radioactive seeds is smooth, symmetrical and spherical. The superposition of the radiation dose from multiple seeds can only create smooth dose patterns, and it is difficult to create dosimetry plans for jagged or discontinuous edges. For this reason, and the fact that the prostate has an approximately ellipsoidal shape, we require the prostate contour to be smooth and continuous.

2. Convex shape with no concavities or hourglass shapes.

(a) A dosimetric plan for concave shapes is very hard to obtain. As well, the natural shape of the prostate is convex, and physiologically an hourglass-shaped prostate is not possible.

At some centers, symmetry is required about the D column on transverse images. The reason for the symmetry requirement is that there are an infinite number of asymmetric images that an operator can take.
If such asymmetric images were taken pre-operatively, the oncologists would have difficulty registering the randomly asymmetric pre-operative images to the intra-operative images during the brachytherapy procedure. In addition to reducing intra-operative image registration time, symmetric images would reduce the probability of error in loading and delivering needles. They would also make treatment planning faster, since only half of the prostate needs to be planned on. In some cases, our segmentation algorithm returns contours that extend further on one side of the D column than the other. If, for example, the contour on the right side of the D column extended further than the left side, oncologists at BCCA would extend the contour on the left side to match the right side. Although it is not currently done, we could easily adopt a similar technique in our algorithm to ensure that all segmented prostate contours are symmetric.

1.4.2 Literature Review

1.4.2.1 2D Automatic Prostate Segmentation from TRUS Images

The 2D segmentation of the prostate from TRUS images is a challenging problem and has received significant attention in the medical imaging field. Strictly edge-based algorithms such as [1,2,21,25] result in poor segmentation due to speckle noise and poor contrast in TRUS images. One such edge-detection algorithm, introduced by Abolmaesumi et al. [2], uses a spatial Kalman filter along with Interacting Multiple Modes Probabilistic Data Association (IMMPDA) filters to detect the closed contour of the prostate. Although the IMMPDA edge-detector is capable of creating continuous contours, it cannot satisfy the smoothness and symmetry requirements on its own. In order to overcome these issues, deformable models such as those outlined in [20,22] were proposed. These techniques worked moderately well; however, the segmented shapes were not constrained and hourglass contours, for example, were possible. In order to constrain the prostate contours, groups such as Shen et al.
[29] and Gong et al. [16] incorporated a priori knowledge of the possible prostate shapes into their algorithms. The automatic segmentation procedure of Shen et al. [29] combined a statistical shape model of the prostate, obtained from ten training samples, with rotation-invariant Gabor features to initialize the prostate shape. This shape was then deformed in an adaptive, hierarchical manner until the technique converged to the final model. Unfortunately, this technique is computationally very expensive, requiring over a minute to segment a single image. Gong et al. [16] modeled the prostate shape by fitting superellipses to 594 manually segmented prostate contours. These a priori prostate models, along with an edge map of the TRUS image, were then used in a Bayesian segmentation algorithm. The edge map was created by first applying a sticks filter, then an anisotropic diffusion filter, and finally a global Canny edge-detector. In this technique, the pre-processing steps are computationally expensive and the global Canny edge-detector finds fragmented, discontinuous edges across the entire image.

1.4.2.2 3D Automatic Prostate Segmentation from TRUS Images

The 3D segmentation problem is inherently much more difficult (yet more practically useful) than its 2D counterpart, particularly due to poor boundary contrast at the base and apex. For this reason, 3D prostate segmentation has not received as much attention as 2D segmentation, and most of the literature presently available for 3D segmentation consists of extensions of 2D methods. Richard et al. [27] presented a pixel-classifier segmentation technique that uses four texture features to assign an 'energy' to each pixel in the image. In the paper they acknowledge that the effect of using texture information is marginal, and their algorithm is very computationally expensive. Shao et al.
[28] begin with a small sphere (no manual initialization), which is deformed using the level set method. No times or quantitative comparisons are reported, and the results seem to suffer from 'boundary leakage' and are sensitive to a parameter chosen experimentally by the user. Ghanei et al. [15] use heavy initialization (5-8 points on 40-70% of the 2D slices) to create an initial ellipsoid mesh. The coordinates of each mesh point are updated iteratively according to i) external forces that use edge detection to propel the point towards the edges, and ii) internal forces that enforce smoothness. The algorithm was tested on 10 prostate volumes and took approximately 30 s to segment a single volume. The 2D segmentation algorithm of Ladak et al. [22] has been extended to 3D in [11,12,18,33]. In these papers, a slice in which the boundary is visible (i.e., midgland) is manually initialized with 4-8 points to create an initial contour, and this contour is refined and updated using the Discrete Dynamic Contour (DDC). The contours are propagated iteratively through the slices, each time using the previous contour as the initialization and applying the DDC. This technique is prone to propagated segmentation error (especially at the base and apex); to solve this problem, a continuity constraint using a zero-order AR model [12] was introduced. This algorithm takes less than 10 seconds. As in the 2D case, the contour obtained with the DDC algorithm does not incorporate a priori shape knowledge, which results in wandering of the prostate boundary. To correct for this, manual editing tools were introduced, but up to 20% of the (parallel) slices need manual editing, which adds significant time to the segmentation algorithm. The solution, as in the 2D case, is to introduce a priori shape information, and this is done in the 3D extension of the work by Shen et al. [35-37].
In this algorithm, 3D texture features obtained from two banks of orthogonal 2D Gabor filters are used to train a Kernel Support Vector Machine (KSVM), which differentiates the prostate from its surrounding tissue. Then a statistical shape model is deformed, as was done in 2D, to find the prostate volume using texture and shape matching. Although the results of this algorithm are quite accurate, it has no real-time capabilities due to its computationally expensive nature (it requires approximately 3 minutes to segment a single volume). Refer to Table 1.1 for a comparison of the best available 2D and 3D segmentation algorithms with the algorithm presented in this thesis.

1.4.3 General Overview of Approach

The goal of this thesis was to create an algorithm with low computational complexity that satisfies the smoothness and symmetry constraints and is driven by the clinical understanding of prostate brachytherapy. The proposed algorithm requires no computationally expensive pre-processing and is insensitive to the initialization points. The algorithm makes use of a novel image warping technique, ellipse and ellipsoid fitting, and locally applied IMMPDA edge-detection. The ellipse fitting procedure outlined in [13,32] and the ellipsoid fitting procedure outlined in [24] solve a convex problem, are computationally inexpensive, robust, and always return an ellipsoidal solution. The IMMPDA edge-detector is applied locally, meaning that it finds continuous edges only along the boundary of the prostate instead of over the entire image.
Table 1.1: Comparison of the best available segmentation algorithms with the algorithm presented in this thesis.

Fenster et al. [9,11,19,23,33] (latest revision 2005): no pre-processing; 4-8 manual initialization points; no a priori shape knowledge; contours are always smooth and continuous, but not guaranteed symmetrical or free of hourglass shapes; clinical study of 12 volumes with 1 observer on ideal-resolution images; excellent accuracy; reproducibility not reported; speed 10 s (C++). Pros and cons: the images used seem to be of better quality than typically encountered, and the DDC does not incorporate a priori shape knowledge, which is necessary when dealing with non-ideal images; approximately 20% of parallel slices require manual editing, which adds about 60 s to the segmentation time.

Zhan & Shen [35-37] (2006): extensive pre-processing; no manual initialization; a priori shape knowledge; contours are always smooth and continuous, but not guaranteed symmetrical or free of hourglass shapes; clinical study of 6 volumes with 1 observer on average-resolution images; excellent accuracy; reproducibility not reported; speed 180 s (C++). Pros and cons: very computationally expensive.

Ghanei et al. [15] (2000): no pre-processing; 40+ manual initialization points; no a priori shape knowledge; contours are always smooth and continuous, but not guaranteed symmetrical or free of hourglass shapes; clinical study of 10 volumes with 1 observer on average-resolution images; excellent accuracy; good reproducibility; speed 30 s (C++). Pros and cons: too many initialization points.

Gong et al. [16] (2004): extensive pre-processing; 2-10 manual initialization points; a priori shape knowledge; contours are always symmetrical, smooth, continuous and free of hourglass shapes; clinical study of 125 2D slices with 5 observers on poor (realistic) resolution images; excellent accuracy; reproducibility not reported; speed 5 s (C++). Pros and cons: the algorithm is for 2D images only, and if extended to 3D it would be slow (approximately 60 s); in-depth clinical studies showed excellent results, and clinical integration is feasible.

Badiei et al. [4] (2007): minimal pre-processing; 5 manual initialization points; a priori shape knowledge; contours are always symmetrical, smooth, continuous and free of hourglass shapes; clinical study of 10 volumes with 3 observers on poor (realistic) resolution images; excellent accuracy; excellent reproducibility; speed 8 s (Matlab). Pros and cons: excellent performance despite low-resolution images; can be easily integrated into a clinical setting; an extensive 3D clinical study was carried out with excellent results; upon optimization and transfer to C++ this will be the fastest available algorithm; performance at the base and the speed of the image warping could be improved.

1.5 Thesis Overview

The purpose of this thesis was to design and test an algorithm for 3D prostate segmentation from 2D TRUS images. The first step was to create a 2D semi-automatic segmentation algorithm and to test and validate it.
The second step was naturally to extend this 2D algorithm to 3D, and again test and validate it. The following five chapters are broken down as follows:

Chapter 2 - Algorithm: 2D Segmentation. This chapter presents the 6-step 2D segmentation algorithm: i) initialization, ii) forward warp, iii) initial ellipse fit, iv) Interacting Multiple Modes Probabilistic Data Association (IMMPDA) edge detection, v) final ellipse fit, and vi) reverse warp. The details of how the initialization points are chosen, how the image and initialization points are forward and reverse warped, how an ellipse is fit to a set of measurement points, and how the edge detector works are outlined in this chapter.

Chapter 3 - Study Results and Discussion: 2D Segmentation. A study carried out on 17 midgland transverse TRUS images of the prostate is presented to evaluate the 2D semi-automatic segmentation algorithm using manual segmentation as the gold standard. Results and simulation times are presented and discussed.

Chapter 4 - Algorithm: 3D Segmentation. This chapter outlines the eight steps of the 3D segmentation algorithm: i) initialization on the midgland slice only, ii) forward warp of all slices, iii) 2D segmentation on the midgland slice only, iv) 2D segmentation on the remaining slices with the initialization propagating from the previous slice's contour, v) initial ellipsoid fit, vi) contouring of the base and apex, vii) final ellipsoid fit, and viii) reverse warp. The details of this 8-step algorithm, and how to fit an ellipsoid to a set of measurement points, are presented in this chapter.

Chapter 5 - Study Results and Discussion: 3D Segmentation. A study was carried out using 10 prostate volumes (each with 10-14 slices) and 3 observers (2 non-expert and 1 expert). The difference between computer segmentation and the gold standard (average manual segmentation) was compared to the inter-observer variability. The repeatability of the computer algorithm was compared to the intra-observer variability.
Various distance- and volume-based metrics were used to evaluate the segmentation algorithm. Results and simulation times are presented and discussed.

Chapter 6 - Conclusion and Future Work. The contributions and future work are summarized in this chapter.

Chapter 2

Algorithm: 2D Segmentation

2.1 Introduction to the 6-step Algorithm

In 2D semi-automatic prostate segmentation, the goal is to create an algorithm that finds the prostate boundary in TRUS images given a few initialization points. The proposed computer algorithm has six steps. In the first step, the user selects 5 initialization points (initialization outlined in Section 2.2), and in the second step, the image along with the initialization points is warped (warping algorithm described in Section 2.3). The warped prostate should look like an ellipse so that, in the third step, we can make use of the extensive literature on ellipse fitting to fit an initial ellipse to the warped initialization points. The details of how to fit an ellipse to a set of points are outlined in Section 2.4. In step four, this initial ellipse fit is used to confine and locally guide the IMMPDA edge-detector (described in Section 2.5) along the boundary of the warped prostate to prevent the trajectory from wandering. In step five, a second ellipse is fit to the IMMPDA edge points, which provides a more accurate estimate of the warped prostate boundary. Finally, in the last step, inverse warping is applied to the second elliptical fit to obtain the final segmented prostate contour (reverse warping discussed in Section 2.3). The only pre-processing is a computationally cheap median filter (5x5 kernel) applied locally along the path of the edge-detector. The 6-step segmentation procedure outlined above is presented in Figure 2.1.
Figure 2.1: The proposed semi-automatic segmentation algorithm consists of 6 steps: 1) initialization, 2) forward warp, 3) initial ellipse fit, 4) IMMPDA edge map, 5) ellipse fit, and 6) reverse warp. Steps 1 and 2 show the unwarped (x-marks) and warped (circles) initialization points respectively. Steps 3 and 5 show the best ellipse fit to the warped initialization points and to the IMMPDA edge map respectively.
Step 6 shows the final segmentation result.

2.2 Initialization

In the first step of the 6-step segmentation algorithm, the user is presented with the TRUS image and asked to initialize it. The 5 initialization points must be selected at the following locations: i) the center of the TRUS probe, ii) the center of the prostate, and iii) 3 points along the boundary of the prostate at its right-most, bottom-most and top-most edges. Figure 2.3a shows an example of an initialized TRUS image. Using the 3 initialization points around the boundary of the prostate, 3 more points are interpolated: one at the left-most edge of the prostate, one midway between the left and top initialization points, and one midway between the right and top initialization points (shown with x-marks in Figure 2.1). The interpolation is done by making use of the symmetry of the prostate shape. In step 2 of the 6-step algorithm these initialization points are warped, and in step 3 an initial ellipse is fit to them. The warping algorithm is described in detail in the following section.

2.3 Forward and Reverse Warp

As mentioned previously, the prostate is a walnut-shaped organ with a roughly elliptical cross-section. During image acquisition, the TRUS probe is pressed anteriorly against the rectum wall, which causes the prostate to look deformed in the 2D transverse image. The goal of the forward warping algorithm is to undo the compression effect caused by the TRUS probe so that the 2D prostate boundary looks elliptical. The goal of the reverse warping algorithm is to bring the prostate contour and prostate image back into the original (compressed) form.

2.3.1 Forward Warp

We assign a polar coordinate system to the 2D TRUS images of the prostate with origin on the TRUS probe axis, angle θ = 0° along the horizontal, and r extending radially.
The assumption in our warping algorithm is that the TRUS probe causes radial compression; therefore, all of the warping will be done along the radial direction. Referring to Figure 2.2, our goal is to take every point (u, v) at angle θ and radius r in the original image, I_o, and place it into a new warped position (x, y) at angle θ and radius r_new, to create the warped image, I_fwd. The pseudo-code for doing this is as follows:

for θ = 0 : π
    for r = R_min : R_max
        u = u_0 + r cos(θ)
        v = v_0 - r sin(θ)
        r_new = f(r, θ)
        x = u_0 + r_new cos(θ)
        y = v_0 - r_new sin(θ)
        I_fwd(y, x) = I_o(v, u)
    end
end

Figure 2.2: Place every point in the original image (left) into a new position to create the warped image (right) using a radial warping function to get r_new from r and θ

As can be seen from the pseudo-code above, we need a function, r_new = f(r, θ), that describes the radial compression as a function of θ and r. This function is derived in the next subsection. Refer to Figure 2.3 to see the effects of warping on an image and its initialization points.

Figure 2.3: Comparing the original image and original initialization points (a) to the warped image and warped initialization points (b)

2.3.2 Warping Function

We can undo the effect of the anterior compression caused by the TRUS probe by: i) stretching the image maximally around θ = 90° and minimally around θ = 0° and θ = 180°, and ii) stretching the image maximally for small r values and minimally for large r values. These requirements suggest a sin(θ) dependence in the angular direction and a Gaussian dependence in the radial direction. The resulting warping function is:

    r_new = r (1 - sin(θ) exp(-r² / (2σ²)))    (2.1)

where r_new represents the new warped radius, r is the un-warped radius, θ ranges from 0° to 180°, and σ is a variable that represents the amount of stretch in the radial direction. Refer to Figures 2.4 and 2.5 for 2D and 3D graphs of the warping function.
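As a concrete illustration, the warping function and the scatter-style forward warp above can be sketched in code. This is a minimal Python sketch (the thesis implementation is in Matlab; the function names and the nearest-neighbour pixel placement are illustrative choices, not the thesis code):

```python
import numpy as np

def warp_radius(r, theta, sigma):
    """Warping function of Eq. 2.1: r_new = r (1 - sin(theta) exp(-r^2 / (2 sigma^2)))."""
    return r * (1.0 - np.sin(theta) * np.exp(-r**2 / (2.0 * sigma**2)))

def forward_warp(img, u0, v0, sigma, r_max=None):
    """Nearest-neighbour sketch of the forward-warp pseudo-code: every
    source pixel at polar coordinates (r, theta) about the probe axis
    (u0, v0) is moved to radius warp_radius(r, theta, sigma)."""
    h, w = img.shape
    warped = np.zeros_like(img)
    if r_max is None:
        r_max = int(np.hypot(h, w))
    for theta in np.arange(0.0, np.pi, np.pi / 360.0):
        for r in range(1, r_max):
            u = int(round(u0 + r * np.cos(theta)))
            v = int(round(v0 - r * np.sin(theta)))
            if not (0 <= u < w and 0 <= v < h):
                continue
            r_new = warp_radius(r, theta, sigma)
            x = int(round(u0 + r_new * np.cos(theta)))
            y = int(round(v0 - r_new * np.sin(theta)))
            if 0 <= x < w and 0 <= y < h:
                warped[y, x] = img[v, u]
    return warped
```

A production version would iterate over destination pixels and interpolate to avoid holes and collisions, but the scatter form above mirrors the pseudo-code directly.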
Small σ values indicate less radial stretch and can be used for prostate shapes that are already elliptical and/or have experienced little deformation. Larger σ values indicate greater radial stretch and can be used for prostate shapes that have experienced more deformation by the TRUS probe. Using the initialization points for each TRUS image, we can design a small algorithm that chooses the optimal σ automatically. This is discussed in detail in Section 2.3.2.1.

Figure 2.4: The θ component (top) and r component (bottom) of the warping function in 2D

Figure 2.5: The warping function r_new/r = 1 - sin(θ) exp(-r²/2σ²) in 3D. The warped radius, r_new, is approximately equal to the un-warped radius, r, for angles θ close to 0° or 180° and for large r values. The warped radius, r_new, is less than the un-warped radius, r, for angles θ close to 90° and small r values, therefore causing a stretch. The limit between small and large r values is set by σ.

2.3.2.1 Choosing σ (the Stretch Factor) Automatically

In order to find the stretch factor automatically, we first solve equation 2.1 for σ at θ = 90° (where sin θ = 1):

    σ = sqrt( -r² / (2 ln((r - r_new)/r)) )    (2.2)

To find σ we need an r value and an r_new value that are unique to the image we are trying to warp. These values can be found from the initialization points. Referring to Figure 2.6 and recalling from the previous section, the observers place their initialization points on the prostate image in the following order: i) (u_T, v_T) at the TRUS center, ii) (u_S, v_S) at the prostate center, iii) (u_1, v_1) on the right-most point of the prostate boundary, iv) (u_2, v_2) on the bottom-most point of the prostate boundary, and v) (u_5, v_5) on the top-most point of the prostate boundary.
The point (u_3, v_3) is obtained by reflecting (u_1, v_1) about the sagittal plane (which is also the D column on the needle guide template), and (u_4, v_4), which is the mirror image of (u_6, v_6), is placed on the prostate boundary halfway between (u_3, v_3) and (u_5, v_5). In this manner the initialization points are symmetric about the prostate's sagittal plane.

Figure 2.6: Compressed prostate boundary (solid line) and rough estimate of un-compressed prostate boundary (dashed line), with the v-components of the initialization points on the v-axis

In Figure 2.6, the dotted ellipse is the approximate shape of the prostate when the TRUS probe is not inserted into the rectum. During radial compression, the maximum stretch occurs along θ = 90°, where the point (u'_2, v'_2) gets pushed up to (u_2, v_2); therefore, we assume that if we find σ there then everything else will follow. When the TRUS probe is not in the rectum, we can assume the prostate is symmetrical, so that:

    v_S - v_5 = v'_2 - v_S    (2.3)

Solving this equation for v'_2 we get:

    v'_2 = 2 v_S - v_5    (2.4)

In the polar coordinate system of Figure 2.6, with origin at (u_T, v_T), the radial distance from the TRUS center to the compressed prostate bottom is:

    r = v_T - v_2    (2.5)

and the radial distance from the TRUS center to the uncompressed prostate bottom is:

    r_new = v_T - v'_2    (2.6)

Substituting equation 2.4 into the above equation we get:

    r_new = v_T - 2 v_S + v_5    (2.7)

Now we have both r and r_new in terms of the initialization points at θ = 90°, and we can use equation 2.2 to find σ:

    σ = sqrt( -(v_T - v_2)² / (2 ln((v'_2 - v_2)/(v_T - v_2))) )    (2.8)

Therefore, given any image and its initialization points, we can always find an optimal σ (stretch factor) for that image.

2.3.3 Reverse Warp

If we recall, the 6-step semi-automatic segmentation algorithm has the following steps: i) initialization, ii) forward warp, iii) initial ellipse fit, iv) IMMPDA edge detection, v) final ellipse fit, and vi) reverse warp.
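Both the stretch-factor computation of Eqs. 2.4-2.8 and the Newton-Raphson inversion that the reverse warp relies on can be sketched numerically. The Python below is illustrative only (the function names are hypothetical, not the thesis code, and the derivative used by Newton-Raphson is obtained by differentiating Eq. 2.1 with respect to r):

```python
import numpy as np

def stretch_factor(v_T, v_S, v_2, v_5):
    """Eq. 2.8: sigma from the v-coordinates of the initialization points
    (v_T: TRUS centre, v_S: prostate centre, v_2: bottom-most and
    v_5: top-most boundary points; v grows downward, toward the probe)."""
    v2_prime = 2.0 * v_S - v_5           # Eq. 2.4: uncompressed bottom point
    r = v_T - v_2                        # Eq. 2.5: compressed radius
    r_new = v_T - v2_prime               # Eq. 2.6: uncompressed radius
    return np.sqrt(-r**2 / (2.0 * np.log((r - r_new) / r)))

def unwarp_radius(r_new, theta, sigma, tol=1e-10, max_iter=50):
    """Invert Eq. 2.1 for r given r_new by Newton-Raphson (the thesis
    derivation is in its Appendix E; this is an independent sketch)."""
    r = r_new                            # warped radius as initial guess
    for _ in range(max_iter):
        e = np.exp(-r**2 / (2.0 * sigma**2))
        g = r * (1.0 - np.sin(theta) * e) - r_new    # residual of Eq. 2.1
        dg = 1.0 - np.sin(theta) * e * (1.0 - r**2 / sigma**2)
        step = g / dg
        r -= step
        if abs(step) < tol:
            break
    return r
```

Round-tripping a radius through Eq. 2.1 and then through unwarp_radius recovers the original value, which is the property the reverse warp of Section 2.3.3 depends on.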
Referring to Figure 2.7, our goal is to take every warped point on the final ellipse fit, (x, y) at angle θ and radius r_new, and place it into a new un-warped position, (u, v) at angle θ and radius r, to create the final semi-automatically segmented prostate contour. The pseudo-code for doing this is as follows:

for i = 1 : length(x)    % cycle through every point on the final ellipse fit
    r_new(i) = sqrt((x(i) - u_T)² + (y(i) - v_T)²)
    θ(i) = atan((v_T - y(i)) / (x(i) - u_T))
    r(i) = f⁻¹(r_new(i), θ(i), σ)
    u(i) = u_T + r(i) cos(θ(i))
    v(i) = v_T - r(i) sin(θ(i))
end

As can be seen from the pseudo-code above, for every point (x, y) at angle θ and radius r_new on the ellipse, we need to find r in order to un-warp that point. Since a closed-form solution for r cannot be obtained from equation 2.1, the Newton-Raphson method is used, as described in Appendix E. Refer to Figure 2.8 to see the effect of the reverse warping on a prostate contour in 2D.

Figure 2.7: For every point on the final ellipse fit (dotted line), we find r_new and θ, and then use the Newton-Raphson method to find the r value that will un-warp that point

Figure 2.8: (a) Image and contour before reverse warp, and (b) image and contour after reverse warp

2.4 Ellipse Fitting

2.4.1 Equation of an Ellipse

We would like to derive a general equation, in two dimensions, that represents an ellipse that has been both translated and rotated. Before we go on, we refer the reader to Appendix A, where the reference frame notation used throughout the rest of this section is outlined. Let us first consider an ellipse centered at the origin, with no rotation, as in Figure A.1a. There are three ways to represent this ellipse. The parametric equation of this ellipse is:

    [5x; 5y] = [a cos(θ_i); b sin(θ_i)]    (2.9)

where [5x 5y]^T are points in the final prostate frame. Here, a is the width of the ellipse along the 5x axis, b is the width of the ellipse along the 5y axis, and θ_i is a sweep variable. The canonical form of the same equation is:

    (5x / a)² + (5y / b)² = 1    (2.10)

and the quadratic form is:

    5x²/a² + 5y²/b² - 1 = 0    (2.11)

Next, we incorporate the 2D rotation and translation outlined in Appendix A into the above 3 equations. The parametric equation becomes:

    [1x; 1y] = [x_0; y_0] + [cos(θ)  -sin(θ); sin(θ)  cos(θ)] [a cos(θ_i); b sin(θ_i)]    (2.12)

and the canonical form becomes:

    ( ((1x - x_0) cos(θ) + (1y - y_0) sin(θ)) / a )² + ( (-(1x - x_0) sin(θ) + (1y - y_0) cos(θ)) / b )² = 1    (2.13)

where [x_0 y_0]^T represents the center of the ellipse with respect to frame 1. In order to get the quadratic form of a rotated and translated ellipse, we use the equation of a general conic in 2D:

    F(V, P) = V · P
            = P_1 1x² + P_2 1x 1y + P_3 1y² + P_4 1x + P_5 1y + P_6
            = [1x 1y] A [1x; 1y] + B [1x; 1y] + c    (2.14)

where V = [1x² 1x1y 1y² 1x 1y 1], P = [P_1 P_2 P_3 P_4 P_5 P_6], A = [P_1 P_2/2; P_2/2 P_3], B = [P_4 P_5] and c = P_6. It should be noted that the P vector corresponding to equation 2.11 denotes the P parameters in the unrotated and untranslated reference frame, and P'_i ≠ P_i. The general conic can represent hyperbolas, parabolas, ellipses and degenerate cases; however, in the case of ellipses the A matrix is positive definite. A positive definite matrix has a positive determinant, and therefore we can say that the subset of P vectors that represent an ellipse must satisfy the constraint 4 P_1 P_3 - P_2² > 0. Referring to equations 2.12, 2.13 and 2.14, we can see that there are two sets of parameters that uniquely define our ellipse. We define the q parameters from equations 2.12 and 2.13 as:

    q = [a b x_0 y_0 θ]    (2.15)

and the P parameters from equation 2.14 as:

    P = [P_1 P_2 P_3 P_4 P_5 P_6]    (2.16)

It should be noted that any scalar multiple of a given P vector produces the same ellipse due to a free scaling factor.
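As a quick numerical sanity check on the equations of this section, points generated with the parametric form (Eq. 2.12) satisfy the rotated-and-translated canonical form (Eq. 2.13). A small sketch with arbitrary, illustrative q parameters:

```python
import numpy as np

# arbitrary q = [a, b, x0, y0, theta] (illustrative values, not from the thesis)
a, b, x0, y0, theta = 3.0, 1.5, 2.0, 1.0, 0.4
t = np.linspace(0.0, 2.0 * np.pi, 100)          # sweep variable theta_i

# parametric form, Eq. 2.12: rotate the axis-aligned ellipse, then translate
x = x0 + np.cos(theta) * (a * np.cos(t)) - np.sin(theta) * (b * np.sin(t))
y = y0 + np.sin(theta) * (a * np.cos(t)) + np.cos(theta) * (b * np.sin(t))

# canonical form, Eq. 2.13: evaluates to 1 for every generated point
lhs = (((x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)) / a) ** 2 \
    + ((-(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)) / b) ** 2
assert np.allclose(lhs, 1.0)
```

The same loop is how the final contour is conveniently drawn from the q parameters once they have been recovered from P.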
In other words, in order to define an ellipse we only need 5 parameters, and we can fix one of the parameters, for example P_6, to be a constant value. As will be shown in Section 2.4.2, there is an elegant way to find the P vector of the ellipse that best fits a set of points. Once we have found this P vector, we would like to draw the ellipse contour on the prostate image. It is very difficult to draw an ellipse in Matlab given the P vector; however, it is very easy to do so using equation 2.12 and the q parameters. For this reason we need to derive a relation between the q and the P parameters. Appendix B.1.1 shows how to derive the P parameters in terms of the q parameters, and Appendix B.1.2 shows two methods of finding the q parameters in terms of the P parameters.

2.4.2 Fitting an Ellipse to Measurement Points

Our goal is to fit an ellipse to a set of measurement points (in this case, the points around the boundary of the prostate). The following equation gives a measure of how far a point [1x_j 1y_j] is from the ellipse defined by P:

    F(V_j, P) = V_j · P = P_1 1x_j² + P_2 1x_j 1y_j + P_3 1y_j² + P_4 1x_j + P_5 1y_j + P_6    (2.17)

The best ellipse fit is found by finding the P vector that minimizes the sum of squared distances (also known as Least Squares Minimization, LSM) to a set of points [1x_i 1y_i], (i = 1...n), as follows:

    Cost = Σ_{i=1}^{n} (F(V_i, P))² = Σ_{i=1}^{n} (V_i · P)² = ||SP||²    (2.18)

where S = [V_1^T V_2^T ... V_n^T]^T is called the Design Matrix [13]. Refer to Appendix C.1 for the proof of why and how Σ_{i=1}^{n} (V_i · P)² equals ||SP||². Since we are interested in elliptical fits only, we must place the constraint 4 P_1 P_3 - P_2² > 0 on the set of P vectors returned by this LSM. Instead of the inequality 4 P_1 P_3 - P_2² > 0, we will use the constraint 4 P_1 P_3 - P_2² = 1.
This is because we could use any positive number, γ, on the right hand side of this equality; however, dividing both sides by γ would result in a P vector that represents the same ellipse due to the free scaling factor. We can re-write the constraint 4P1P3 − P2² = 1 as P^T N P = 1, where:

N = [0  0  2  0  0  0;
     0 −1  0  0  0  0;
     2  0  0  0  0  0;
     0  0  0  0  0  0;
     0  0  0  0  0  0;
     0  0  0  0  0  0]                                      (2.19)

Note that N is a singular matrix and therefore its inverse does not exist. Now the ellipse fitting problem reduces to minimizing Cost = ||SP||² subject to the constraint P^T N P = 1. Using Lagrangian Multipliers, which are explained in more detail in Appendix D, we let f(P) = ||SP||² and h(P) = P^T N P − 1, and find their gradients as follows:

grad(f(P)) = λ grad(h(P))                                   (2.20)

d/dP (P^T (S^T S) P) = λ d/dP (P^T N P − 1)                 (2.21)

where ||SP||² = P^T (S^T S) P. If we use the following matrix differentiation rule (for symmetric A):

d/dx (x^T A x) = 2 A x                                      (2.22)

equation 2.21 becomes:

(N − μ S^T S) P = 0                                         (2.23)

where μ = 1/λ. The above equation can be solved as a Generalized Eigenvalue Problem (GEP) with μ representing the eigenvalues and P representing the eigenvectors. Recall that we are attempting to find the P vector representing the ellipse that best fits a set of measured points; however, the GEP gives a set of P vectors. How do we know which of the eigenvectors is the right P vector? In Appendix C.2 we prove that: i) there exists only one positive eigenvalue, and ii) the P vector that minimizes the cost function is the eigenvector corresponding to this unique positive eigenvalue.

2.5 IMMPDA Edge Detector

This section describes the edge detection algorithm used in the 4th step of the 6-step 2D segmentation algorithm. We model the distance from the center of the prostate to the prostate boundary using four models (two linear and two non-linear) which are derived and presented in section 2.5.5.
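Returning briefly to the ellipse fit of Section 2.4.2: the whole constrained minimization (build the design matrix S, solve the generalized eigenvalue problem of equation 2.23, keep the eigenvector of the unique positive eigenvalue) fits in a few lines. The sketch below is hedged Python/NumPy rather than the thesis's Matlab, and it normalizes P to unit length since the scale is free:

```python
import numpy as np

def fit_ellipse(x, y):
    """Direct LSM ellipse fit under the constraint 4*P1*P3 - P2^2 = 1.

    Solves (N - mu * S^T S) P = 0 (equation 2.23) by rewriting it as the
    ordinary eigenproblem (S^T S)^-1 N P = mu P and keeping the eigenvector
    of the unique positive eigenvalue mu.
    """
    # Design matrix S: one row V_i = [x^2, x*y, y^2, x, y, 1] per point
    S = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    StS = S.T @ S
    # Constraint matrix N of equation 2.19: P^T N P = 4*P1*P3 - P2^2
    N = np.zeros((6, 6))
    N[0, 2] = N[2, 0] = 2.0
    N[1, 1] = -1.0
    mu, vecs = np.linalg.eig(np.linalg.solve(StS, N))
    P = vecs[:, np.argmax(mu.real)].real   # unique positive eigenvalue
    return P / np.linalg.norm(P)           # free scaling factor
```

The returned P can be converted to the q parameters (Appendix B.1.2) for drawing with equation 2.12.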
Before presenting the derivation for each model we give a brief overview of the Star Kalman (section 2.5.1), the (Extended) Kalman Filter (section 2.5.2), Probabilistic Data Association (section 2.5.3), and Interacting Multiple Modes (section 2.5.4). In section 2.5.6 we compare the performance of the four models and choose the best one for segmenting the prostate boundary.

2.5.1 Star Kalman - Finding Candidate Edge Points

We begin the edge detection algorithm by choosing a point at the center of the prostate in the TRUS image and we call this the seed point (not to be confused with the implanted seeds). From the seed point we project rays outwards, at angular increments Δθ, for all 360 degrees (thus the name Star Kalman) to discretize the prostate boundary. Next, the gradient of the image intensity is found along each ray according to:

Gradient(r, θ) = {I(r + 2Δr, θ) + I(r + Δr, θ) + I(r, θ)} − {I(r − Δr, θ) + I(r − 2Δr, θ) + I(r − 3Δr, θ)}      (2.24)

where I(r, θ) is the image intensity at radius r and angle θ in polar coordinates (with respect to the seed point). It should be noted that the gradient is not found along the entire length of the ray but from a value Rmin to Rmax. Along any ray j, Rmin and Rmax are approximately ±20 pixels on either side of the initial ellipse fit (3rd step of the 6-step algorithm). This means that the initial ellipse fit confines the search area for the edge detector to avoid picking up erroneously high gradient values from structures such as the pubic bone, calcium deposits, etc. Along each ray we choose the highest E (here E = 5) gradients and call them candidate edge points. Referring to Figure 2.9, we have E candidate edge points located along ray k at radial distances r1(k), r2(k), ..., rE(k) from the seed point. Figure 2.10b shows the pixel intensities, gradient of pixel intensities and 5 gradient maxima along a single ray in Figure 2.10a.
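The per-ray search can be sketched as follows (a Python sketch under the assumption that the intensities along one ray, already clipped to [Rmin, Rmax], are supplied as a 1-D array; function and parameter names are illustrative):

```python
import numpy as np

def candidate_edge_points(intensity, E=5, dr=1):
    """Return the radial offsets of the E strongest gradients along one ray.

    The gradient mirrors equation 2.24: the sum of three samples at and
    ahead of r minus the sum of three samples behind r.
    """
    n = len(intensity)
    grad = np.zeros(n)
    for r in range(3 * dr, n - 2 * dr):
        ahead = intensity[r] + intensity[r + dr] + intensity[r + 2 * dr]
        behind = intensity[r - dr] + intensity[r - 2 * dr] + intensity[r - 3 * dr]
        grad[r] = ahead - behind
    idx = np.argsort(grad)[-E:][::-1]      # strongest first
    return idx, grad[idx]
```

On a noiseless step edge the strongest candidate sits exactly at the intensity jump; on real TRUS data the E candidates are then disambiguated by the PDA filter of section 2.5.3.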
Indeed, from Figure 2.9 and Figure F.4 we can see that there is a perfect analogy between the prostate boundary and an aircraft trajectory, since the aircraft's radial position with respect to the radar at time k could be r1(k), r2(k), ..., or rE(k). The accurate prediction of the prostate boundary is made harder by the speckle noise in the image, much like the random noise signals in the air.

2.5.2 Kalman or Extended Kalman Filter - Finding the Prostate Boundary

A Kalman Filter (KF) or Extended Kalman Filter (EKF) can be adopted to track the prostate boundary. The KF equations for estimating the states of a system (outlined in more detail in Appendix F.1) are:

x̂(k|k) = x̂(k|k−1) + W(k){z(k) − ẑ(k|k−1)}
x̂(k|k−1) = F(k−1) x̂(k−1|k−1) + G(k−1) u(k−1)
ẑ(k|k−1) = H(k) x̂(k|k−1)                                   (2.25)

Figure 2.9: Tracking the prostate boundary using Star Kalman/PDA and a seed point

Figure 2.10: Pixel intensities along the ray shown in figure (a) are recorded and shown in the first subplot in figure (b). The gradients of the pixel intensities are shown in the second subplot and the 5 maximum gradient values are shown in the third subplot in figure (b)

The EKF equations for estimating the system states (outlined in more detail in Appendix F.2) are:

x̂(k|k) = x̂(k|k−1) + W(k){z(k) − ẑ(k|k−1)}
x̂(k|k−1) = f(k−1, x̂(k−1|k−1), u(k))
ẑ(k|k−1) = h(k, x̂(k|k−1))                                  (2.26)

where in both sets of equations, at time k, x̂(k|k) is the estimate of the system states, W(k) is the filter gain, z(k) is the system measurement, ẑ(k|k−1) is the measurement prediction, u(k) is the input into the system, and F(k) (f in the EKF) and H(k) (h in the EKF) represent the model of the time-varying system. Here, the time-varying system is the distance between the seed point and the prostate boundary. This distance varies with each angle increment, Δθ, and we can treat this spatial discretization as a temporal discretization by letting the angle index play the role of the time step k.
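One cycle of the KF of equation 2.25 can be sketched as follows (Python/NumPy, with u(k) = 0 since none of our models has an input term; the covariance recursions used below are the standard textbook ones from Appendix F.1, not spelled out in equation 2.25):

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the Kalman Filter of equation 2.25 (u = 0).

    x, P: state estimate and covariance at k-1; z: measurement at k.
    Q, R: process and measurement noise covariances.
    """
    # predict: x(k|k-1), its covariance, and z(k|k-1)
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    z_pred = H @ x_pred
    # update with the filter gain W(k)
    S = H @ P_pred @ H.T + R
    W = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + W @ (z - z_pred)
    P_new = (np.eye(len(x)) - W @ H) @ P_pred
    return x_new, P_new
```

For the Constant Velocity model of section 2.5.5.1, F = [1 Δθ; 0 1] and H = [1 0], so the filter tracks the radius and its rate of change from ray to ray.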
Tables 2.1 and 2.2 summarize two linear and two non-linear models for this time-varying system and section 2.5.5 outlines the detailed derivation along with pros and cons of each model. It should be noted that none of the models have an input term, u(k). At any time k we need to obtain a measurement value, z(k), from the TRUS image. Recalling from the previous section, there are E candidate edge points at any time k and we use these candidate points in a Probabilistic Data Association (PDA) filter, outlined in more detail in Appendix F.4, to find one measurement value, z(k).

2.5.3 Probabilistic Data Association Filter - Finding the Measurement Value

In the PDA filter, we assign each candidate boundary point a Gaussian weight (equation 2.27) based on the magnitude of the gradient at that point (higher magnitudes receive higher weight) and on its distance from ẑ(k|k−1): points closer to ẑ(k|k−1) receive a higher weight since (r_j(k) − ẑ(k|k−1)) is smaller. Then z(k) is found as the weighted average of the candidate boundary points [5]. Refer to Figure 2.11 for a pictorial explanation.

2.5.4 Interacting Multiple Modes

The time-varying model for the KF is:

Table 2.1: Linear State Space Models (state variables, state space model, output equation and noise covariances for the Constant Velocity and linear Ellipse models; the entries are derived in section 2.5.5)
Table 2.2: Non-linear State Space Models (the corresponding entries for the derivative based and output based non-linear Ellipse models)

Figure 2.11: ẑ(k−1|k−1) on ray k−1 is mapped to ẑ(k|k−1) on ray k using the state space model. Five gradient maxima are chosen on ray k to find ẑ(k|k)

x(k+1) = F(k) x(k) + G(k) u(k) + Γ(k) V(k)
z(k+1) = H(k+1) x(k+1) + w(k+1)                             (2.28)

Similarly, the time-varying model for the EKF is:

x(k+1) = f(k, x(k), u(k)) + Γ(k) V(k)
z(k+1) = h(k+1, x(k+1)) + w(k+1)                            (2.29)

where V(k) and w(k) are sequences of white Gaussian noise with covariances Q(k) and R(k) representing the process and measurement noise respectively. Generally, large Q(k) and small R(k) imply confidence in the ultrasound image, resulting in high curvature boundary segmentation, whereas small Q(k) and large R(k) imply confidence in the model, resulting in smooth boundary segmentation. In our model of the time-varying system we would like to have both options, large Q(k) with small R(k) and small Q(k) with large R(k), at different times.

The Interacting Multiple Modes (IMM) approach allows us to have multiple models running at once with different process and measurement covariances. The states and outputs of the models are then mixed and combined according to a pre-defined set of probabilities. The IMM gives the user access to an overall model which allows for both high curvature and smooth boundary segmentation. For more detail on the IMM refer to Appendix F.3. In the following section, the specifications of the IMMs are given for each system model.
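The PDA fusion of section 2.5.3 can be sketched as follows (a hedged Python sketch: the exact weight function of equation 2.27 is not reproduced here, so the Gaussian spread sigma and the multiplicative use of the gradient magnitude are illustrative assumptions):

```python
import numpy as np

def pda_measurement(r, g, z_pred, sigma=5.0):
    """Fuse E candidate edge points into one measurement z(k).

    r: candidate radial positions r_1(k)..r_E(k); g: their gradient
    magnitudes. Each candidate gets a weight favoring strong gradients
    close to the prediction z_pred = z(k|k-1); z(k) is the weighted
    average (sigma is an assumed spread, for illustration only).
    """
    r = np.asarray(r, float)
    w = np.asarray(g, float) * np.exp(-0.5 * ((r - z_pred) / sigma) ** 2)
    w = w / w.sum()
    return float(np.sum(w * r))
```

With equal gradients and symmetric candidates the fused measurement coincides with the prediction; a stronger gradient pulls it toward that candidate.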
2.5.5 System Models

In the following sections we present the state variables, system models, measurement and process noise covariances and IMM specifications for the two linear models and two non-linear models that we tried. At the end we discuss the merits of each model and specify the model that was used for the final segmentation algorithm.

2.5.5.1 Linear Constant Velocity (Circle) Model

The Constant Velocity model is the most basic model. It was derived initially by Abolmaesumi et al. in [2] and was based on work by Bar-Shalom et al. in [6]. The assumption here is that the prostate boundary is approximately circular and the rate of change of the radius remains constant at each angular increment (hence the name Constant Velocity model).

State Variables:

In this model we define our state variables as:

x(k) = [r(k)  ṙ(k)]^T
x0 = [a0  0]^T                                              (2.30)

where r(k) is the radial distance between the seed point and the actual prostate boundary and ṙ(k) is the rate of change of this distance. The initial value of r is a0, found from the user initialization points, and the initial value of ṙ is zero.

State Space Model:

If r(k) is the radial distance between the seed point and the prostate boundary at angular increment (analogous to time) k, then it follows from the definition of a derivative that:

ṙ(k−1) = (r(k) − r(k−1)) / Δθ                               (2.31)
l in Appendix F.l) . The output (measurement) of the system will simply be: , r ( f c - l ) r(k) - [1 0] 1 V ' z(k) H(k-\) L r(k ~ ^ +w(k - 1) (2.33) x(fc-l) where w(k — 1) is the measurement noise with covariance R(k — 1). Noise Covariances: It is assumed that r(k — 1) is a zero mean, Gaussian random variable with variance Qa. This would imply that the process noise covariance is: 2.5 IMMPDA Edge Detector 35 r ( f c - l ) Q ( f c - l ) r ' ( f c - l ) = At9 2 2 Qo *f AO AO A64 A e 3 2 4 (2.34) Afl3 2 Af52 As well, it is assumed that w(k — 1) is a zero mean, Gaussian random variable with variance RQ. Interacting Multiple Modes: The prostate boundary was represented with two Constant Velocity models (modes). Mode 1 had a large R(k) and small Q(k) and enabled the edge detector to extract smooth boundary contours (trusting the model more than the measurements). Mode 2 had a small R(k) and large Q(k) enabling the edge detector to pick up on fine resolution (trusting the measurements more than the model). In Mode 1 R(k) = 20 and Q(k) = 10 and in Mode 2 R(k) = 20 and Q(k) = 105. The transition probability from a mode to itself was 75% and the transition probability from a mode to another mode was 25%. These numbers were chosen by trial and error and showed excellent results. 2.5.5.2 Linear Ellipse Model The next three models are based on on-going work by Julian Guerrero [17]. The assumption here is that the prostate is elliptical instead of circular. State Variables: In this model the state variables are the same as in the Circle Model. State Space Model: We begin with the basic equation of an ellipse in Cartesian coordinates: 2.5 IMMPDA Edge Detector 36 l (2.35) where a and b are the semi-major and semi-minor axes of the ellipse respectively. If the above equation is converted to polar coordinates and an angle offset 0 is introduced, the equation becomes: r 2 cos2 (di - 9) r 2 sin2 (0; - 9) _ a? 
+ t 2 (2.36) where x = r cos (0j) and y = r sin (0j) and 9i is the sweeping angle. Solving for r we obtain: ab (2.37) V ^ 2 cos2 (9i -9) +a2 shi2(9i - 0) which is the equation of a rotated ellipse in polar coordinates. Taking the first and second derivatives of the above equation with respect to 9 we get: r = r([b2-a2 sm(9i - 9) /2 b2 cos2 (9t -9) +a2 sin2 (0* - 0) (2.38) r = r([b2- a2] (b2 cos4(0i - 0) - [a2 - b2] sin2(2(0i - 0))/22 sin4(0i - 0))' (b2 cos2(0i - 0) + a2 sin2(0i - 9)f D Next, the above equations are discretized using the following equation: f(jfc) r(k) r(fc+l)-r(fc) AO r(k+\)-r(k) A9 r(k+l)-r(k) AO r(k)D(k) (2.39) (2.40) Re-arranging the above equation and introducing noise, we can write the state space equation of the linear ellipse model as: r(jfc) r(jfc) x(k) 1 A0 D(k-1)A9 1 r(k - 1) r(k - 1) +V(k-1) (2.41) F(fc-l) x(fc-l) The output equation for this model is the same as the Constant Velocity model. It should 2.5 IMMPDA Edge Detector 37 be noted that the state space equations of the ellipse model are very similar to the state space equations of the Constant Velocity model except that the derivative of the radius is related to the radius in this case. Unlike the next 3 models, the a, b, 6 values are constant here and obtained from the initial ellipse fit (step 3 of the 6 step segmentation algorithm). Noise Covariances: The constant velocity and linear model are close enough to assume that they have the same noise covariance. Interacting Multiple Modes: This model used the same two modes with the same transition probabilities as the Constant Velocity model. 2.5.5.3 Non-linear Ellipse Model - Derivative Method In this model we iterate an E K F many times until the states a, b, 6 converge to a constant value. Essentially we want to find the parameters of the ellipse that best describe the prostate shape that we are trying to segment. 
State Variables:

The state variables for this model are:

x(k) = [r(k)  a(k)  b(k)  θ(k)]^T
x0 = [r0  a0  b0  θ0]^T                                     (2.42)

and we track their evolution at every iteration of the EKF.

State Space Model:

To find the state-space model we begin with equation 2.38 and add noise to get:

ṙ = rC + η                                                  (2.43)

where C is the factor multiplying r in equation 2.38. Now we discretize the state variables as follows:

(r(k) − r(k−1)) / Δθ = r(k) C(k−1) + η_r(k−1)
(a(k) − a(k−1)) / Δθ = η_a(k−1)
(b(k) − b(k−1)) / Δθ = η_b(k−1)
(θ(k) − θ(k−1)) / Δθ = η_θ(k−1)                             (2.44)

and finally the state space representation becomes:

[r(k); a(k); b(k); θ(k)] = [r(k−1)/(1 − C(k−1)Δθ); a(k−1); b(k−1); θ(k−1)]
  + [Δθ/(1 − C(k−1)Δθ)  0  0  0; 0  Δθ  0  0; 0  0  Δθ  0; 0  0  0  Δθ] [η_r(k−1); η_a(k−1); η_b(k−1); η_θ(k−1)]      (2.45)

or x(k) = f(k−1, x(k−1)) + Γ(k−1) V(k−1), with measurement equation:

z(k) = [1  0  0  0] x(k) + w(k)                             (2.46)

In the state space equation above, the process equation is non-linear (hence the use of the EKF) but the measurement equation is linear. In this case, only the process equation has to be linearized, as outlined in Appendix F.2.

Noise Covariances:

It is assumed that the noise introduced into each variable in this system is independent of the noise in the other variables and is zero mean, Gaussian and white. The covariance of the noise of each state (the process noise covariance) is then:

Q(k−1) = [Q0r  0  0  0; 0  Q0a  0  0; 0  0  Q0b  0; 0  0  0  Q0θ]      (2.47)

As well, it is assumed that w(k−1) is a zero mean, Gaussian random variable with variance R0.

Interacting Multiple Modes:

The EKF iterates until it converges to constant a, b, θ values. When we introduce the IMM we mix and change the states constantly, making it more difficult and time consuming to reach convergence; therefore, there are no IMMs in the non-linear models.

2.5.5.4 Non-Linear Ellipse Model - Output Method

This model is very similar to the previous model, except here the non-linear portion of the model is in the measurement equation instead of the process equation.
State Variables:

In this model our state variables are:

x(k) = [a(k)  b(k)  θ(k)]^T                                 (2.48)
x0 = [a0  b0  θ0]^T                                         (2.49)

where the radius is omitted as a state variable because it is the output of the measurement equation in the state space model described below.

State Space Model:

As before, to find the state-space model we discretize the state and output variables and add noise. Discretizing equation 2.37 gives us the output of the system:

r(k) = a(k) b(k) / √(b(k)² cos²(θ_i − θ(k)) + a(k)² sin²(θ_i − θ(k)))      (2.50)

Therefore, the final state space model becomes:

[a(k); b(k); θ(k)] = [1  0  0; 0  1  0; 0  0  1] [a(k−1); b(k−1); θ(k−1)] + [Δθ  0  0; 0  Δθ  0; 0  0  Δθ] [η_a(k−1); η_b(k−1); η_θ(k−1)]      (2.51)

z(k) = r(k) + w(k)                                          (2.52)

where r(k) = h(k, x(k)) is given by equation 2.50. Note that the measurement equation above is non-linear and must be linearized.

Noise Covariances:

The process and measurement noise covariances for this model are the same as for the previous model. In this case Q(k−1) is a 3 × 3 diagonal matrix with the noise covariances of the state variables on the diagonal entries. The covariance of w(k−1) is R0 as before.

Interacting Multiple Modes:

There are no IMMs, for the reasons described in the previous model.

2.5.6 Comparing the Four Models

The 6-step segmentation algorithm consists of the following steps: 1) initialization, 2) forward warp, 3) initial ellipse fit, 4) edge detection along the initial ellipse fit, 5) final ellipse fit and 6) reverse warp. The two linear edge detection techniques are used in the 4th step of the 6-step segmentation algorithm; however, the two non-linear techniques attempt to replace steps 3 to 5. The goal of the EKF is to find the parameters of the ellipse that best fit the shape of the prostate being segmented.
Therefore, if the non-linear techniques are used, the 6-step segmentation algorithm becomes a 4-step segmentation algorithm: 1) initialization, 2) forward warp, 3) non-linear edge detection with EKF and 4) reverse warp. It should also be noted that the non-linear techniques cannot be guided with the initial ellipse fit, since the EKF relies on having the same search area at each iteration so that it can converge to a final set of state values. If the EKF is guided, its search area will change at each iteration, making it more difficult to converge. The following question naturally arises: does the EKF with Parameter ID in the 4-step algorithm converge and, if it does, are the resulting a, b, θ values close to those obtained from the 6-step algorithm? In order to make this comparison we found the best linear model and the best non-linear model and compared the performance of the two.

2.5.6.1 Comparing the Two Linear Models

We analyzed the two linear models to see which gave the best edge detection results. We also evaluated whether it was necessary to have an IMM with two modes of operation. In Figure 2.12 the thin white contours are the initial ellipse fit and the thick dotted and thick solid lines are the results of the edge detection from the Circle and Ellipse models respectively. As well, this figure illustrates the effect of having multiple modes by comparing the result of the edge detection when there are two modes (Q1 = 10, R1 = 20 and Q2 = 10⁵, R2 = 20) and only one mode (Q1 = 10, R1 = 20). From this figure, it is clear that the IMMs are necessary for finer, more precise edge detection. In Figure 2.13, the edge detection results of the Circle model are super-imposed on top of those of the Ellipse model (both with 2 modes) for four different cases. We can see that both models create very similar results.
In conclusion, either model would work well, so we choose the Circle model since its simulation time is slightly better than its counterpart's.

2.5.6.2 Comparing the Two Non-linear Models

In a previous study, Julian found that the Output method was slightly better than the Derivative method, since the Derivative method introduces noise into the state variables. Therefore, in the analysis in the following section we use the Output method.

Figure 2.12: Comparison of edge detection results using one mode or two mode IMMs (edge detection is guided along an ellipse (thin solid line) and results are shown for the Circle model (thick dotted line) and Ellipse model (thick solid line)). The four panels show: Circle with 2 modes (2.23 sec), Circle with 1 mode (2.21 sec), Ellipse with 2 modes (2.65 sec) and Ellipse with 1 mode (2.35 sec).

Figure 2.13: Comparison of edge detection results using the Circle model (thick dotted contour) and the Ellipse model (thick solid contour) for four different prostate contours

2.5.6.3 Comparing Linear vs Non-linear Models

We would like to compare the linear Circle model (with two modes: Q1 = 10, R1 = 20 and Q2 = 10⁵, R2 = 20) to the non-linear, Ellipse based Output model. Using a complex search algorithm, the Q and R factors for the non-linear model were found to be Q1 = diag[4.5238  0.0069  11.4152] and R1 = 38.5133 [17]. Then the 6-step segmentation algorithm using the linear model and the 4-step segmentation algorithm using the non-linear model were applied. In the non-linear case, since the edge detection could not be guided, and since the images are extremely noisy, the detected contour wandered heavily. As a result, the a, b, θ values did not converge even after many iterations. This suggests that due to the noisy nature of the images the edge detection algorithm must be very robust and therefore needs to be guided with the initial ellipse fit.
Since the non-linear models cannot be guided (the EKF would not converge since the search area changes at each iteration), we use the Circle model throughout the rest of this thesis with the same Q and R as stated above.

Chapter 3

Study Results and Discussion: 2D Segmentation

3.1 Study Setup

Using the standard equipment (Siemens Sonoline Prima with 5 MHz biplane TRUS probe) and the standard image acquisition procedure used at the BC Cancer Agency, we acquired 17 pre-operative TRUS images of the prostate from the midgland region. The image sizes were 480 × 640 with a pixel size of 0.18 × 0.18 mm. The images had low resolution and contained a superimposed needle grid. Although it is possible to obtain higher resolution images, our goal was to work with the images currently used at the Cancer Agency. A single expert radiation oncologist with vast experience in prostate volume studies manually segmented the images using the Vari-Seed planning software from Varian. We then applied our semi-automatic segmentation algorithm, with the same expert choosing the initialization points. The algorithm was written in Matlab 7 (Release 14) and executed on a Pentium 4 PC running at 1.7 GHz.

3.2 Validation Methods

The manual segmentation was used as the 'gold standard' while distance [8] and area [22] based metrics were used to validate the semi-automatic segmentation.

3.2.1 Distance Based Metrics

For distance-based metrics we found the radial distance between the manual and semi-automatic contours, with respect to a reference point at the center of the prostate, as a function of the angle around the prostate: r_diff(θ). Figure 3.1 shows an example of this radial distance at θ = θ1. A value called the Mean Absolute Difference (MAD) was found for each of the 17 cases as follows:

MAD = (1/N) Σ_{i=1}^{N} |r_diff(θ_i)|                       (3.1)

where on any given prostate, N radial differences were found at N different angles.
This value explains how well, on average, the shape of the semi-automatic contour matches the manual contour. A second value, called the Maximum Difference (MAXD), was found for each of the 17 cases as follows:

MAXD = max(|r_diff(θ_i)|)                                   (3.2)

where the max() operator simply finds the maximum value in the array r_diff(θ). This value tells us, over all angles θ, the greatest radial difference between the two contour shapes.

Figure 3.1: The radial differences (at angles θ_i) between the manual (dotted line) and semi-automatic (solid line) contours are used in validating the segmentation algorithm

3.2.2 Area Based Metrics

For area-based metrics we superimposed the manually segmented polygon (gold standard), Ω_m, on top of the semi-automatically segmented polygon, Ω_a (refer to Figure 3.2). The region common to both Ω_m and Ω_a is the True Positive (TP) area. The region in Ω_m but not in Ω_a is the False Negative (FN) area and the region in Ω_a but not in Ω_m is the False Positive (FP) area. We then found the following area-based metrics for each of the 17 cases [22]:

Sensitivity = TP/(TP + FN)                                  (3.3)

Accuracy = 1 − (FP + FN)/(TP + FN)                          (3.4)

In medical trials, the Sensitivity of a test determines how sensitive the test is at picking up a disease when the disease is actually present. In our case, Sensitivity tells us how sensitive our algorithm is at finding the pixels that are prostate from the set of pixels that the average manual segmentation assumes are prostate. Ideally the algorithm would be able to identify every pixel that was prostate as prostate (100% Sensitivity). In the Accuracy formula, the second term indicates how incapable the algorithm is of picking the true prostate pixels. Ideally this term would be zero because FP and FN would not exist. Therefore, one minus this term gives us another measure of how accurate the algorithm is when it states that a pixel is prostate.
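Both families of metrics are straightforward to compute. A Python sketch (the radial differences and boolean prostate masks are assumed to be available as arrays; names are illustrative):

```python
import numpy as np

def distance_metrics(r_diff):
    """MAD and MAXD (equations 3.1 and 3.2) from the N radial differences."""
    d = np.abs(np.asarray(r_diff, float))
    return float(d.mean()), float(d.max())

def area_metrics(mask_manual, mask_auto):
    """Sensitivity and Accuracy (equations 3.3 and 3.4) from boolean masks."""
    tp = np.sum(mask_manual & mask_auto)     # in both polygons
    fn = np.sum(mask_manual & ~mask_auto)    # missed prostate pixels
    fp = np.sum(~mask_manual & mask_auto)    # falsely labeled prostate
    return tp / (tp + fn), 1.0 - (fp + fn) / (tp + fn)
```

Note that Accuracy as defined here penalizes both over- and under-segmentation, while Sensitivity alone can be driven to 100% by over-segmenting.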
Figure 3.2: Pixel by pixel comparison of manual vs semi-automatic contours

Table 3.1: Average area and distance based metrics for validation of the segmentation algorithm

                Area Based Metrics                 Distance Based Metrics
                Sensitivity (%)   Accuracy (%)     MAD (mm)      MAXD (mm)
                Mean    STD       Mean    STD      Mean   STD    Mean   STD
                97.4    1.0       93.5    1.9      0.67   0.18   2.25   0.56

3.3 Results

The MAD, MAXD, sensitivity and accuracy were found for each image and the overall average values along with standard deviations (STD) are presented in Table 3.1. On average, the mean absolute difference between the manual and semi-automatic contours was 0.67 ± 0.18 mm, which is less than half the average variability between human experts (1.82 ± 1.44 mm) [16]. Therefore, the mean absolute distance is on the same order of error as the repeatability between two different expert observers performing manual segmentation. On average, the maximum distance between the manual and semi-automatic contours was 2.25 ± 0.56 mm. The average sensitivity and accuracy were over 97% and 93% respectively, with STD values less than 2%. Refer to Figure 3.3 for sample outputs.

The times for manual and semi-automatic segmentation were recorded and are presented in Table 3.2. On average, the time for manual segmentation, which includes the time for selecting 16-25 points and then re-adjusting them to obtain approximate symmetry, was 45.35 ± 4.85 seconds. The average time for semi-automatic segmentation, which includes the time to choose initialization points and then run the algorithm, was 12.38 ± 0.98 seconds. The average time to run the algorithm alone was 1.12 ± 0.14 seconds for a single slice; however, we expect this time to decrease by at least five fold when the algorithm is transferred to C++. It should be noted that in the MICCAI paper [4] we reported a time of 14.10 ± 0.2 seconds to run the algorithm alone. The improvements to this time are due to Matlab code optimization, especially in the image warping step.
Table 3.2: Average times for manual and semi-automatic segmentation

                Manual             Semi-automatic (Matlab)
                Total Time (sec)   Initialization (sec)   Algorithm (sec)   Total Time (sec)
                Mean    STD        Mean    STD            Mean    STD       Mean    STD
                45.35   4.85       11.26   1.81           1.12    0.14      12.38   0.98

Figure 3.3: Comparison between manual (dotted line) and semi-automatic (solid line) segmentation

3.4 Discussion

The 2D semi-automatic segmentation algorithm does not require training models or time consuming pre-filtering of TRUS images. The result is a simple, computationally inexpensive algorithm with potential for real-time applications and extension to 3D segmentation. Furthermore, the algorithm works well on the low resolution TRUS images currently used at the BC Cancer Agency and can be integrated easily into their setup without any changes to the image acquisition procedure.

In preliminary studies, an untrained observer chose the initialization points. The segmented contour resulted in sensitivity and accuracy values over 90% with MAD and MAXD values less than 1 and 3 mm respectively. This result shows that, unlike most other semi-automatic segmentation techniques, the success of the final segmentation does not rely on precise initialization of the prostate boundary. Consequently, during intra-operative procedures, a less experienced observer can choose the initialization points to update the treatment plan, allowing the oncologist to continue their work uninterrupted. A more in-depth study of initialization sensitivity may be a worthwhile topic for future students. Due to the nature of this technique, the algorithm works best for prostates that have an elliptical shape, and worst for those that have a diamond shape, after warping.

Chapter 4

Algorithm: 3D Segmentation

4.1 Introduction to 8-step Algorithm

A prostate volume study is composed of a stack of 10-14 transverse, 2D, TRUS images taken at 5 mm intervals.
In volume study terminology, the base is the most superior slice, the apex is the most inferior slice and the midgland is midway between the base and apex. The boundary of the prostate in and around the midgland slice is usually clearly visible; however, at the base and the apex it is nearly indistinguishable. In 3D prostate segmentation, the goal is to create a computer algorithm that segments the prostate from the stack of images making up the 3D volume study. In the previous two chapters we showed the success of the 2D segmentation algorithm on the midgland images; however, in 3D we need to segment the base and apex as well. For this reason, in 3D we start with the information we can easily obtain (the prostate boundary in and around the midgland) and work our way outwards, to the more difficult slices, using the previous slice as a priori data at each step. This is exactly the technique the oncologists use to infer the location of the prostate at the base and the apex. The proposed 3D computer algorithm has 8 steps (Figure 4.1) as follows:

1. Initialize only the midgland slice.

2. Warp every slice in the prostate volume so that the prostate shape becomes ellipsoidal.

3. Segment the midgland slice using the 2D segmentation algorithm (do not perform the reverse warp at this stage).

4. Use the segmented midgland contour as initialization for the slice before the midgland (midgland−1) and the slice after the midgland (midgland+1). Segment these slices using the 2D segmentation algorithm (again without the reverse warp). Use the segmentation of midgland+1 as initialization for midgland+2 and the segmentation of midgland−1 as initialization for midgland−2, and segment these slices as well. Continue this iterative segmentation until all slices, except the base and apex, have been segmented.

5. Fit an ellipsoid to the stack of 2D elliptical contours from the previous step.

6.
Assume circular contours for the base and apex centered at the ellipsoid's major axis.

7. Fit a final ellipsoid to all 2D contours including base and apex.

8. Slice the final ellipsoid transversally in 5mm increments to get 2D elliptical contours and apply the reverse warping algorithm to this set of 2D contours. The final, segmented 3D prostate volume is a set of 2D contours.

In the initialization step, the user selects 5 initialization points in the same manner as in 2D (refer to section 2.2). In the warping step, we warp each 2D slice according to the 2D warping algorithm described in section 2.3.2. Here a question arises: what value of a do we use for each slice? When the TRUS probe is inserted into the rectum, the assumption is that the force exerted by the probe is uniform along the entire length of the prostate and therefore every cross-sectional slice experiences the same radial compression. Therefore we use the same value of a for each slice, and section 2.3.2.1 outlines a technique to find the optimal a based on the initialization points.

In step 4, every slice, except the base and apex, is segmented using the 2D segmentation algorithm with the initialization being the previous slice's final ellipse fit. It should be noted that as we move away from the midgland, the prostate boundary becomes smaller, and so we need to bias the 2D segmentation algorithm towards finding prostate contours that are smaller than the previous contour. This is done by forcing the IMMPDA to find more candidate edge points inside the initial ellipse fit than outside (i.e., the search range would be 10 pixels outside the initial ellipse fit but 30 pixels inside).

In TRUS images, the prostate boundary cannot be seen at the base and apex (due to blending with the bladder neck and pelvic floor respectively). A good strategy for segmenting the prostate boundary in these slices is to extrapolate from the contours in the previous slices.
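The midgland-outward propagation of step 4 can be sketched as follows. This is an illustrative Python sketch (the thesis implementation is in Matlab); `segment_slice` is a hypothetical stand-in for the 2D Star-Kalman/ellipse-fit segmenter of Chapter 2, so only the slice ordering and the initialization hand-off are demonstrated.

```python
# Sketch of the outward slice-propagation in step 4 (illustrative only).

def propagation_order(n_slices, midgland):
    """Order in which slices are segmented: midgland first, then outwards,
    excluding the base (index 0) and the apex (index n_slices - 1)."""
    order = [midgland]
    lo, hi = midgland - 1, midgland + 1
    while lo >= 1 or hi <= n_slices - 2:
        if lo >= 1:
            order.append(lo)
            lo -= 1
        if hi <= n_slices - 2:
            order.append(hi)
            hi += 1
    return order

def segment_volume(images, midgland, init_ellipse, segment_slice):
    """`segment_slice(image, init)` is a stand-in for the 2D algorithm;
    each slice is initialized with its already-segmented neighbour."""
    contours = {}
    for s in propagation_order(len(images), midgland):
        if s == midgland:
            init = init_ellipse
        else:
            init = contours[s + 1] if s < midgland else contours[s - 1]
        contours[s] = segment_slice(images[s], init)
    return contours
```

With 8 slices and the midgland at index 4, the slices are visited in the order 4, 3, 5, 2, 6, 1; the base (0) and apex (7) are left for the ellipsoid extrapolation of steps 5-7.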
This is why we have chosen ellipsoid fitting: with a good approximation of the size and shape of the base and apex, the ellipsoid will extrapolate and give us the best hypothesis for the actual prostate boundary in these slices. After looking through a large database of manually segmented prostate volumes at the local cancer agency we found that the approximate shape and size of the prostate at the base and apex were circles with average radii of 18mm and 15mm respectively. Therefore, in semi-automatic segmentation, circles of radii 18mm and 15mm were drawn at the base and apex, centered about the center axis of the initial ellipsoid fit. The final ellipsoid fit would extrapolate to give us the actual prostate boundary at the base and apex.

Figure 4.1: Eight steps of the 3D semi-automatic prostate segmentation algorithm. (Initialize midgland only; forward warp all slices; perform the 2D 6-step segmentation algorithm, only up to its 5th step, on the midgland slice; use the result as initialization for the slices immediately before and after the midgland; each 'final ellipse fit' is used as the initialization for the next slice, from Base+1 through Apex-1.)

4.2 Ellipsoid Fitting

4.2.1 Equation of an Ellipsoid

As in the 2D case, we would like to derive a general equation for a rotated, translated ellipsoid. The reference frame notations used throughout the rest of this section are outlined in Appendix A. Let us first consider an ellipsoid centered at the origin, with no rotation. There are three ways to represent this ellipsoid. The parametric equation is:

\[ \begin{bmatrix} {}^5x \\ {}^5y \\ {}^5z \end{bmatrix} = \begin{bmatrix} a\cos(\theta_i)\sin(\phi_i) \\ b\sin(\theta_i)\sin(\phi_i) \\ c\cos(\phi_i) \end{bmatrix} \tag{4.1} \]

where \({}^5\mathbf{x} = [{}^5x\ {}^5y\ {}^5z]^T\) is a point in the unrotated, untranslated final prostate frame.
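The parametric form (4.1) and the canonical form that follows from it can be checked against each other numerically. A minimal Python sketch (the axis lengths are arbitrary example values, not study data):

```python
import numpy as np

# Sketch of equation (4.1): generate surface points of an axis-aligned,
# origin-centred ellipsoid from the sweep variables theta and phi, then
# verify they satisfy the canonical form (x/a)^2 + (y/b)^2 + (z/c)^2 = 1.

def ellipsoid_point(a, b, c, theta, phi):
    return np.array([a * np.cos(theta) * np.sin(phi),
                     b * np.sin(theta) * np.sin(phi),
                     c * np.cos(phi)])

a, b, c = 25.0, 20.0, 35.0  # semi-axis lengths in mm (arbitrary example)
theta = np.linspace(0, 2 * np.pi, 24)
phi = np.linspace(0, np.pi, 12)
pts = np.array([ellipsoid_point(a, b, c, t, p) for t in theta for p in phi])

# every generated point satisfies the canonical form
canon = (pts[:, 0] / a) ** 2 + (pts[:, 1] / b) ** 2 + (pts[:, 2] / c) ** 2
assert np.allclose(canon, 1.0)
```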
Here a, b and c are the lengths of the axes along the \({}^5x\), \({}^5y\) and \({}^5z\) directions respectively, and \(\theta_i\) and \(\phi_i\) are sweep variables. The canonical form of the same ellipsoid is:

\[ \left(\frac{{}^5x}{a}\right)^2 + \left(\frac{{}^5y}{b}\right)^2 + \left(\frac{{}^5z}{c}\right)^2 = 1 \tag{4.2} \]

and the quadratic form is:

\[ P_1\,{}^5x^2 + P_2\,{}^5y^2 + P_3\,{}^5z^2 + P_{10} = 0 \tag{4.3} \]

We will now use the rotation/translation convention of equation A.6 to get the above three equations in the \(\{{}^1O, {}^1C\}\) reference frame. For notational convenience we let \(\cos(\gamma)\) and \(\sin(\gamma)\) equal \(c\gamma\) and \(s\gamma\) respectively. The parametric equation of a rotated, translated ellipsoid will be:

\[ \begin{bmatrix} {}^1x \\ {}^1y \\ {}^1z \end{bmatrix} = \begin{bmatrix} x_0 \\ y_0 \\ z_0 \end{bmatrix} + \begin{bmatrix} c\beta & s\beta & 0 \\ -s\beta & c\beta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} c\phi & 0 & -s\phi \\ 0 & 1 & 0 \\ s\phi & 0 & c\phi \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & c\theta & s\theta \\ 0 & -s\theta & c\theta \end{bmatrix} \begin{bmatrix} a\,c\theta_i\,s\phi_i \\ b\,s\theta_i\,s\phi_i \\ c\,c\phi_i \end{bmatrix} \tag{4.4} \]

where we call the sequence of three 3 x 3 rotation matrices U. The canonical equation becomes:

\[ \left(\frac{c\beta c\phi\,({}^1x - x_0) - s\beta c\phi\,({}^1y - y_0) + s\phi\,({}^1z - z_0)}{a}\right)^2 + \left(\frac{(s\beta c\theta + c\beta s\phi s\theta)({}^1x - x_0) + (c\beta c\theta - s\beta s\phi s\theta)({}^1y - y_0) - c\phi s\theta\,({}^1z - z_0)}{b}\right)^2 + \left(\frac{(s\beta s\theta - c\beta s\phi c\theta)({}^1x - x_0) + (c\beta s\theta + s\beta s\phi c\theta)({}^1y - y_0) + c\phi c\theta\,({}^1z - z_0)}{c}\right)^2 = 1 \tag{4.5} \]

In order to get the quadratic form of a rotated and translated ellipsoid we use the equation of a general conic in 3D:

\[ F(V,P) = V \cdot P = P_1\,{}^1x^2 + P_2\,{}^1y^2 + P_3\,{}^1z^2 + 2P_4\,{}^1y\,{}^1z + 2P_5\,{}^1x\,{}^1z + 2P_6\,{}^1x\,{}^1y + 2P_7\,{}^1x + 2P_8\,{}^1y + 2P_9\,{}^1z + P_{10} = \begin{bmatrix} {}^1x & {}^1y & {}^1z \end{bmatrix} A \begin{bmatrix} {}^1x \\ {}^1y \\ {}^1z \end{bmatrix} + 2B \begin{bmatrix} {}^1x \\ {}^1y \\ {}^1z \end{bmatrix} + C \tag{4.6} \]

where \(A = \begin{bmatrix} P_1 & P_6 & P_5 \\ P_6 & P_2 & P_4 \\ P_5 & P_4 & P_3 \end{bmatrix}\), \(B = [P_7\ P_8\ P_9]\), \(C = P_{10}\), \(V = [{}^1x^2\ {}^1y^2\ {}^1z^2\ 2{}^1y{}^1z\ 2{}^1x{}^1z\ 2{}^1x{}^1y\ 2{}^1x\ 2{}^1y\ 2{}^1z\ 1]\), and \(P = [P_1\ P_2\ P_3\ P_4\ P_5\ P_6\ P_7\ P_8\ P_9\ P_{10}]\). As in 2D, if the A matrix is positive definite, the 3D conic represents an ellipsoid (as opposed to a hyperboloid, paraboloid or degenerate case). Two tests to check the positive definiteness of a symmetric 3 x 3 matrix A are:

\[ A \text{ is pos. def.} \iff \lambda_1, \lambda_2, \lambda_3 > 0 \iff \text{determinants of the principal minors of } A > 0 \tag{4.7} \]

where \(\lambda_i\) is the ith eigenvalue, the 1 x 1 principal minors are \(P_1\), \(P_2\) and \(P_3\), the 2 x 2 principal minors are \([P_1\ P_6; P_6\ P_2]\), \([P_1\ P_5; P_5\ P_3]\) and \([P_2\ P_4; P_4\ P_3]\) and the 3 x 3 principal minor is A itself.
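As a numerical check of equations (4.4) and (4.5), the following Python sketch builds U from the three rotation matrices, maps a canonical-frame point into frame 1, and verifies that undoing the rotation and translation returns a point satisfying the canonical form. The angle and translation values are arbitrary examples:

```python
import numpy as np

# Sketch of equations (4.4)-(4.5): U is the product of the three 3x3
# rotation matrices; a canonical-frame point mapped into frame 1 still
# satisfies the canonical equation after U^T (p1 - x0) is applied.

def rotation_U(beta, phi, theta):
    cb, sb = np.cos(beta), np.sin(beta)
    cp, sp = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    Rb = np.array([[cb, sb, 0], [-sb, cb, 0], [0, 0, 1]])
    Rp = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rt = np.array([[1, 0, 0], [0, ct, st], [0, -st, ct]])
    return Rb @ Rp @ Rt

a, b, c = 25.0, 20.0, 35.0
x0 = np.array([10.0, -5.0, 2.0])        # translation (example values)
U = rotation_U(0.3, -0.2, 0.5)          # beta, phi, theta (example values)
assert np.allclose(U @ U.T, np.eye(3))  # U is orthogonal (a rotation)

# a canonical-frame point from eq. (4.1), mapped into frame 1 by eq. (4.4)
ti, pi_ = 1.1, 0.7
p5 = np.array([a * np.cos(ti) * np.sin(pi_),
               b * np.sin(ti) * np.sin(pi_),
               c * np.cos(pi_)])
p1 = x0 + U @ p5

# eq. (4.5) is the canonical form applied to U^T (p1 - x0)
back = U.T @ (p1 - x0)
assert np.isclose((back[0]/a)**2 + (back[1]/b)**2 + (back[2]/c)**2, 1.0)
```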
In [24], it is shown that A is positive definite if and only if I, J, K > 0, where I, J and K are the sums of the 1 x 1, 2 x 2 and 3 x 3 principal minors of A respectively, as follows:

\[ I = P_1 + P_2 + P_3 = \lambda_1 + \lambda_2 + \lambda_3 \tag{4.8} \]

\[ J = \begin{vmatrix} P_1 & P_6 \\ P_6 & P_2 \end{vmatrix} + \begin{vmatrix} P_1 & P_5 \\ P_5 & P_3 \end{vmatrix} + \begin{vmatrix} P_2 & P_4 \\ P_4 & P_3 \end{vmatrix} = \lambda_1\lambda_2 + \lambda_1\lambda_3 + \lambda_2\lambda_3 \tag{4.9} \]

\[ K = |A| = \lambda_1\lambda_2\lambda_3 \tag{4.10} \]

From equation 4.7 we can see that if A is positive definite then I, J, K > 0. To show the inverse implication we find the characteristic equation of A as follows:

\[ \det(A - \lambda I) = -\lambda^3 + I\lambda^2 - J\lambda + K = 0 \tag{4.11} \]

In this case, if I, J, K > 0 then the characteristic polynomial has alternating signs so that the roots, \(\lambda_i\), are always positive; therefore, according to equation 4.7, this implies that A is positive definite. We have now proven that for a given P vector, A is positive definite if and only if I, J, K > 0, and therefore P describes an ellipsoid. In order to put this into the form of a constraint that is similar to 2D, we say \(\alpha J > I^2\) for some \(\alpha\); therefore the ellipsoid constraint is \(\alpha J - I^2 > 0\). The parameter \(\alpha\) will be discussed in more detail in the following section.

Referring to equations 4.4, 4.5, and 4.6, we can see that there are two sets of parameters that uniquely define our ellipsoid. We define the q parameters from equations 4.4 and 4.5 as:

\[ q = [a\ b\ c\ x_0\ y_0\ z_0\ \theta\ \phi\ \beta] \tag{4.12} \]

and the P parameters from 4.6 as:

\[ P = [P_1\ P_2\ P_3\ P_4\ P_5\ P_6\ P_7\ P_8\ P_9\ P_{10}] \tag{4.13} \]

Note that there is a free scaling factor in P so that we only need 9 parameters to describe an ellipsoid. As in the 2D case, there is an elegant way to find the P vector for an ellipsoid that best fits a set of measurement points and this method is outlined in the following section. Again we are faced with the problem of needing the q parameters to be able to draw the ellipsoid, so we must convert the P parameters to q parameters.
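The invariants I, J and K and the resulting positive-definiteness test can be sketched as follows. This is an illustrative Python sketch; the sphere and hyperboloid conic vectors are toy examples, not study data:

```python
import numpy as np

# Sketch of equations (4.8)-(4.10): invariants I, J, K of the symmetric
# matrix A built from the conic vector P, and the test I, J, K > 0.

def invariants(P):
    P1, P2, P3, P4, P5, P6 = P[:6]
    A = np.array([[P1, P6, P5], [P6, P2, P4], [P5, P4, P3]])
    I = np.trace(A)                       # sum of 1x1 principal minors
    J = ((P1 * P2 - P6 ** 2) +            # sum of 2x2 principal minors
         (P1 * P3 - P5 ** 2) +
         (P2 * P3 - P4 ** 2))
    K = np.linalg.det(A)                  # the 3x3 principal minor
    return I, J, K

# Unit sphere x^2 + y^2 + z^2 - 1 = 0: P = [1,1,1,0,0,0,0,0,0,-1]
I, J, K = invariants([1, 1, 1, 0, 0, 0, 0, 0, 0, -1])
assert (I, J, round(K)) == (3, 3, 1)      # all positive -> ellipsoid
assert 4 * J - I ** 2 > 0                 # 4J - I^2 = 3 > 0

# Hyperboloid x^2 + y^2 - z^2 - 1 = 0 fails both tests
I2, J2, K2 = invariants([1, 1, -1, 0, 0, 0, 0, 0, 0, -1])
assert J2 < 0 and K2 < 0 and 4 * J2 - I2 ** 2 < 0
```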
The problem of deriving P parameters in terms of q parameters is straightforward; however, deriving q parameters in terms of P parameters is quite challenging. Appendix B.2.1 presents a method for finding P from q and Appendix B.2.2 presents two methods of finding q from P.

4.2.2 Fitting an Ellipsoid to Measurement Points

Fitting an ellipsoid to measurement points follows the exact same steps as fitting an ellipse to measurement points and is discussed by Li and Griffiths in [24]. In 3D, our stack of 2D segmented contours creates a cloud of points, \([{}^1x_i\ {}^1y_i\ {}^1z_i]^T\), (i = 1...n), to which we attempt to fit an ellipsoid. We find the P vector of the best fit ellipsoid by using the least squares minimization technique (refer to equation 2.18 and Appendix C.1) as follows:

\[ \text{Cost} = \sum_{i=1}^{n} F(V_i, P)^2 = \|SP\|^2 \tag{4.14} \]

where, as in 2D, \(S = [V_1^T\ V_2^T\ ...\ V_n^T]^T\), \(P = [P_1\ P_2\ P_3\ P_4\ P_5\ P_6\ P_7\ P_8\ P_9\ P_{10}]\), and \(V_i = [{}^1x_i^2\ {}^1y_i^2\ {}^1z_i^2\ 2{}^1y_i{}^1z_i\ 2{}^1x_i{}^1z_i\ 2{}^1x_i{}^1y_i\ 2{}^1x_i\ 2{}^1y_i\ 2{}^1z_i\ 1]\). As explained in the previous section, the general conic represents an ellipsoid if the P vector satisfies the constraint [24]:

\[ \alpha J - I^2 > 0. \tag{4.15} \]

In our application we are dealing with prostate shapes whose small radii are greater than half their large radii. For this subset of ellipsoids Li and Griffiths [24] state: Given a P vector, the constraint \(4J - I^2 > 0\) is satisfied if and only if the corresponding quadratic surface is an ellipsoid with \(r_{small} \geq \frac{1}{2} r_{large}\). Therefore, we use \(4J - I^2 = 1\) as our constraint, which can be written in matrix form as \(P^T N P = 1\):

\[ N = \begin{bmatrix} N_1 & 0_{6\times4} \\ 0_{4\times6} & 0_{4\times4} \end{bmatrix}, \qquad N_1 = \begin{bmatrix} -1 & 1 & 1 & 0 & 0 & 0 \\ 1 & -1 & 1 & 0 & 0 & 0 \\ 1 & 1 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -4 & 0 & 0 \\ 0 & 0 & 0 & 0 & -4 & 0 \\ 0 & 0 & 0 & 0 & 0 & -4 \end{bmatrix} \tag{4.16} \]

We can now re-state our ellipsoid fitting problem as follows. Given a set of measurement points in 3D, we would like to find the P vector that minimizes \(\text{Cost} = \|SP\|^2\) subject to the constraint \(P^T N P = 1\). Using Lagrange multipliers (explained in Appendix D) we get the following set of equations that we must solve for \(\lambda\):
\[ S^T S P = \lambda N P, \qquad P^T N P = 1 \tag{4.17} \]

The above equation can be re-written as:

\[ (N - \mu S^T S) P = 0 \tag{4.18} \]

The P vector for the best fit ellipsoid is the eigenvector corresponding to the unique positive eigenvalue, \(\mu\), in the above generalized eigenvalue problem (GEP). The reasoning for this follows directly from the 2D case.

Chapter 5

Study Results and Discussion: 3D Segmentation

In this study, we evaluated the 3D semi-automatic prostate segmentation algorithm outlined in the previous chapter. In section 5.1 we give an overview of the study setup. Section 5.2 outlines the distance and volume based validation techniques and results along with speed performance. Section 5.3 gives a discussion of these results.

5.1 Study Outline

Ten TRUS prostate volumes (also referred to as cases), each containing between 10 and 14 transverse slices from base to apex, were chosen from a database at the BCCA. All image volumes were gathered using the standard image acquisition procedure at BCCA and standard equipment (Siemens Sonoline Prima with 5MHz biplane TRUS probe). As in 2D, the images were 480x640 with pixel size of 0.18 x 0.18mm. All images had low resolution and contained a superimposed needle grid; however, we attempted to choose volumes with clearly visible midglands so as to give the computer algorithm a fair chance and a good place to start. The prostate near the base and apex was not visible in any of the volumes - this is the essence of the difficulty in 3D segmentation. It is possible to obtain better images using more advanced TRUS machines (e.g., the 3D GE Voluson machine with a real-time 4D transducer); however, we wanted to see how well the 3D segmentation algorithm would perform, compared to manual segmentation, with BCCA's current image acquisition procedure. Although 3D imaging technology is not yet mainstream, our algorithm can easily be adjusted to work with 3D images and will have even better performance due to the finer inter-slice spacing.
5.1.1 Study Setup

This study comprised three observers: two non-expert observers (observers 1 and 2) who were trained for manual segmentation by radiation oncologists and radiologists, and one expert observer (observer 3) who is the head of prostate brachytherapy at BCCA. Before starting the study the non-expert observers practiced manual segmentation on many volumes (other than the 10 volumes used here) in order to gain efficiency and repeatability. The study was carried out as follows (refer to Figure 5.1). All three observers manually segmented the prostate contour in each slice of every case. For each case they also chose 5 initialization points to be used for the segmentation algorithm. Two weeks later, the same observers manually segmented and initialized 5 of the original 10 cases for the purpose of evaluating intra-observer variability. It should be noted that each case had n slices (where n was different for each case) and each slice was associated with m manually selected points (where m was different for each slice and for each observer). The semi-automatic segmentation algorithm produced 120 points on each slice of every case. The segmentation algorithm was written in Matlab 7 (Release 14) and executed on an Intel Pentium M PC running at 2.0 GHz with 512MB of RAM.

Figure 5.1: Each of the three observers manually segmented and initialized the initial 10 cases. Two weeks later, each observer manually segmented and initialized 5 of the original 10 cases.

5.1.2 Removing Bias

In this study, the following steps were taken to minimize subjective bias:

1. All identifying information was removed on all slices.

2.
Observers carried out segmentation and initialization independently from each other.

3. The order in which the cases were given to observer 1 was different than the order given to observer 2 and so on.

4. For a given observer, the volumes to be manually segmented and the volumes to be initialized were not presented in the same order.

5. For a given observer, the order of the initial 10 volumes was different from the order of the repeat 5 volumes.

5.1.3 Segmentation Guidelines for Observers

There were no restrictions on the number of manually selected points, the location of the points, the distance between points or the starting point. This was because we wanted the manual segmentation to resemble as closely as possible the manual contours created at BCCA on a daily basis. Since the prostate boundary is not distinguishable at the base and apex, BCCA oncologists are required to abide by the following segmentation rules:

Base: Draw a circle, centered at the center of the midgland slice, of minimum 1.5 cm radius.

Apex: Draw a circle, centered at the center of the midgland slice, of minimum 1.0 cm radius.

For semi-automatic segmentation, the observers were required to choose 5 points on a slice close to the midgland where the prostate contour was visible (for consistency all observers initialized slice Base+2.5 for every case). The observers chose one initialization point at the center of the TRUS axis, one at the center of the prostate and three around the boundary of the prostate in the manner shown in Figure 5.2. These initialization points were then fed into the segmentation algorithm to find the computer generated 3D contour.

Figure 5.2: A guideline for choosing the location, order and number of initialization points.
Points should be chosen somewhere within the grey boxes, which are located at the center of the TRUS axis, the center of the prostate and the right-most, bottom-most and top-most ends of the prostate boundary.

5.2 Validation Methods and Results

In the 2D study, we used distance and area based metrics to validate our results - here we extend these validation techniques to 3D. In Section 5.2.1, we use distance based metrics to see: i) if the computer generated contours differ from the gold standard as much as different observers' manual segmentations differ from each other, and ii) if the computer algorithm is more repeatable than the observers. In Section 5.2.2, we find the sensitivity and accuracy of the segmentation algorithm by comparing the computer segmented volume to the average manually segmented volume. Section 5.2.3 compares the speed of the 3D segmentation algorithm to manual segmentation.

5.2.1 Distance Based Metrics

In the following sections we compare many curves whose number of points, location of points and order of points are different. We need to find corresponding points on two different curves so that we can use distance based metrics to compare the two curves. In Figure 5.3, the less densely populated curve (o-marks) is the reference curve and the more densely populated curve (x-marks) is the secondary curve. For every point on the reference curve we find the corresponding point on the secondary curve by finding the intersection of two lines: i) the line passing from the center of the prostate through the reference point, and ii) the line passing through two consecutive points on the secondary curve that fall on either side of the reference point. The image to the right in Figure 5.3 shows the final result. For all the distance based analysis we will assume that all curves being compared have gone through this process and therefore have corresponding points. Note that we always choose the reference curve to be the less densely populated curve.

Figure 5.3: Finding points on the secondary curve (x-marks) that correspond to points on the reference curve (o-marks). The image to the right shows the final result in which we have two different curves whose points align radially, making radial distance based analysis possible.

Figures 5.4 and 5.5 show the result of the semi-automatic segmentation algorithm on a sample case, where the first subfigure is BASE+0.5, the subfigure to its right is BASE+1.0 and so on until the last subfigure, which is the apex (BASE+4.5). Appendix G shows the segmentation results for every volume in this 3D study. It would be desirable to assess the algorithm's performance on the base, midgland and apex separately to be able to localize and fix problems in the future. For this reason we have split up all the following analysis into 'base slices' (which are the initial 2 slices), 'middle slices' and 'apex slices' (which are the final two slices). To assess the overall performance we also show the results for 'all slices' analyzed together.

5.2.1.1 Comparing Difference Between Semi-automatic Segmentation and the Gold Standard to Inter-observer Variability

We would like to compare computer generated contours to a gold standard to assess the performance of the algorithm. Then we would like to put this comparison in perspective by looking at the variability in manual segmentation by different observers (inter-observer variability). Our goal is to show that the semi-automatic segmentation differs from a gold standard no more than manual
Note that we always choose the reference curve to be the less densely populated curve. Figures 5.4 and 5.5 show the result of the semi-automatic segmentation algorithm on a sample case where the first subfigure is BASE+0.5, the subfigure to its right is BASE+1.0 and so on until the last subfigure which is the apex (BASE+4.5). Appendix G shows the segmentation results for every volume in this 3D study. It would be desirable to assess the algorithm's performance on the base, midgland and apex separately to be able to localize and fix problems in the future. For this reason we have split up all the proceeding analysis into 'Base slices' (which are the initial 2 slices), 'middle slices' and 'Apex slices' (which are the final two slices). To assess the overall performance we also show the results for 'all slices' analyzed together. 5.2.1.1 Comparing Difference Between Semi-automatic Segmentation and the Gold Standard to Inter-observer Variability We would like to compare computer generated contours to a gold standard to assess the performance of the algorithm. Then we would like to put this comparison in perspective by looking at the variability in manual segmentation by different observers (inter-observer variability). Our goal is to show that the semi-automatic segmentation differs from a gold standard no more than manual o -ox " OX -<x ex ,. SC * o °x o * x; Figure 5.3: Finding points on the secondary curve (x-marks) that correspond to points on the reference curve (o-marks). The image to right shows the final result in which we have two different curves whose points align radially making radial distance based analysis possible. 5.2 Validation Methods and Results 62 . . ® * ( > » • • ' , ® »•••• * ' * • •> . . • • • ©••• " O . " * • * *: ^ . . . b 4.. Figure 5.4: A 3D comparison of semi-automatic segmentation (astrix marks) to manual segmenta-tion (circle marks). segmentation by one observer differs from manual segmentation by another observer. 
For every slice in every case we carry out the following three steps:

1. Find the average of the three manual contours in order to have one gold standard for comparison.

2. Compare the three computer generated contours, each initialized by a different observer, to the gold standard from step 1.

3. Compare the difference between semi-automatic segmentation and the gold standard to inter-observer variability.

The above steps are explained in more detail below.

Step 1: Finding the gold standard by averaging three manual contours

Figure 5.5: Comparing semi-automatic contour (solid line) to the manual contour (dotted line) for the entire prostate volume.

For any of the n slices in any of the 10 cases we would like to find one average manual contour (gold standard) from three manual contours by carrying out the following steps:

1. Find corresponding points on all three contours as explained previously and shown in Figure 5.3.

2. For each set of corresponding points find the average of the x and y coordinates (refer to Figure 5.6). The resulting curve is the average manual contour (gold standard).

Step 2: Comparing three computer generated contours to the gold standard

Figure 5.6: Finding the average manual contour (solid line) from three manual contours by three different observers (square marks, x-marks and o-marks).

For each slice in any of the 10 cases we have three computer generated contours which were initialized by three different observers. For notational convenience, let us call the gold standard GS and each of the computer generated contours Comp1, Comp2 and Comp3, relating to initialization by observers 1, 2 and 3 respectively. For a given slice, we would like to compare Comp1 to GS, Comp2 to GS and Comp3 to GS.
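Step 1 above reduces to a point-wise average once the three contours are in correspondence. A minimal Python sketch with toy coordinates (not study data):

```python
import numpy as np

# Sketch of Step 1: average three manual contours, point-wise, to form the
# gold standard. The contours are assumed to have already been put into
# correspondence (same number of points, aligned radially).

def gold_standard(contour1, contour2, contour3):
    stack = np.array([contour1, contour2, contour3], float)  # (3, m, 2)
    return stack.mean(axis=0)  # average x and y per corresponding point

c1 = [[0.0, 1.0], [1.0, 0.0]]
c2 = [[0.2, 1.2], [1.2, 0.2]]
c3 = [[0.1, 0.8], [0.9, 0.1]]
gs = gold_standard(c1, c2, c3)
assert np.allclose(gs, [[0.1, 1.0], [3.1 / 3, 0.1]])
```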
Since there are a large number of variables to keep track of, we have created a stacking system, shown in Figure 5.7, in which we organize the data for Comp1, Comp2, Comp3 and GS in 4 different stacks (3D matrices). Each stack contains 10 cards and each card corresponds to one of the 10 cases. Each card contains 13 rows and each row corresponds to a slice in the case. Each row contains 40 entries and each entry corresponds to the radial distance from the center of the prostate to a point on the slice. It should be noted that some cases have n < 13 slices and some slices have m < 40 points (where 13 and 40 were the upper bounds). In this case the extra entries are zero padded. As an example, card 1, row 2, column 5 contains the radial distance from the center of the prostate to the 5th point in the 2nd slice of case 1. Figure 5.7 shows how to compare Comp1 to GS, on a point-by-point basis, by comparing the radial distance for the first point in case 1, slice 2 of GS (which is 56.2) to the first point in case 1, slice 2 of Comp1 (which is 47.0). Every point in the GS stack and the Comp1 stack can be compared in this manner. It should be noted that due to the technique of finding corresponding points on Comp1 with respect to GS, row s, card c in the Comp1 stack will have the same number of entries as row s, card c in the GS stack; therefore point-by-point comparison is valid and possible. To compare Comp1 to GS we find the Pearson correlation, which indicates the strength of a linear relationship between two random variables, at different regions of the prostate. In the first subfigure of Figure 5.8 we've plotted the first two rows of the GS stack, for every case, on the horizontal axis and the first two rows of the Comp1 stack, for every case, on the vertical axis. The second and third subfigures contain the correlation for the midgland and the final two slices respectively.
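The point-by-point comparison then reduces to a Pearson correlation between paired radial distances. A minimal Python sketch with toy radii (not the study's measurements):

```python
import numpy as np

# Minimal sketch: Pearson correlation coefficient between gold-standard
# radii and computer-segmentation radii for the same corresponding points.

def pearson_r(x, y):
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    return float(np.sum(x * y) / np.sqrt(np.sum(x * x) * np.sum(y * y)))

gs_radii = [41.0, 45.2, 50.1, 48.3, 39.7]    # mm, illustrative values
comp_radii = [40.2, 46.0, 49.5, 47.8, 41.0]
r = pearson_r(gs_radii, comp_radii)
assert -1.0 <= r <= 1.0
assert r > 0.9  # strongly linearly related toy data
```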
Note the smaller radii in the initial two and final two slices and larger radii in the midgland. Also note that there are far fewer points in the initial two and final two slices than in the midgland slices. The final subfigure shows the overall correlation between Comp1 and GS. We make the following conclusions from this point-by-point comparison:

• Correlation is strongest in the middle slices (correlation coefficient is 0.8807); therefore the segmentation algorithm is closest to the GS in this region.

• Correlation is weakest in the initial two slices (correlation coefficient is 0.7456); however, the correlation coefficient is still large enough to indicate a linear relationship.

The Pearson correlation coefficients for different observers and for different regions of the prostate are presented in Table 5.1. We can see from this table that there is a strong correlation between semi-automatic segmentation and GS for all the observers; however, no one observer has consistently better results than any other observer. Since observers 1 and 2 are non-expert and observer 3 is expert, this could imply that the segmentation algorithm is not sensitive to the initialization points or to the experience of the observer making the initialization points.

                    Observer 1   Observer 2   Observer 3
Initial 2 Slices      0.7456       0.7093       0.7373
Middle Slices         0.8807       0.8856       0.8541
Final 2 Slices        0.8251       0.8276       0.7818
All Slices            0.9053       0.9057       0.8877

Table 5.1: Pearson correlation coefficients (correlating computer to manual segmentation) for different regions of the prostate and for different observers.

Figure 5.7: Comparing each computer generated contour to the gold standard on a point-by-point basis to evaluate the segmentation algorithm (all units are millimeters).

Step 3: Comparing difference between semi-automatic segmentation and GS to inter-observer variability
Figure 5.8: Using Pearson correlations to compare computer segmentation (initialization by observer 1) to the gold standard. (Subfigures: initial 2 slices, ρ = 0.7456; middle slices, ρ = 0.8807; final 2 slices, ρ = 0.8251; all slices, ρ = 0.9053.)

The difference between semi-automatic segmentation and GS (Δ_SA-GS) for case c (c = 1...10), slice s (s = 1...n), point p (p = 1...m) is found according to:

\[ \Delta_{SA\text{-}GS}(c,s,p) = \big( |comp_1(c,s,p) - GS(c,s,p)| + |comp_2(c,s,p) - GS(c,s,p)| + |comp_3(c,s,p) - GS(c,s,p)| \big)/3 \tag{5.1} \]

Next, we need to find the average proportion of disagreement, in manual segmentation, between the three observers (inter-observer variability). As in step 2, we organize the manual segmentation by observers 1, 2 and 3 into three stacks called Manual1, Manual2 and Manual3. Referring to Figure 5.9, we find the inter-observer variability (IOV) for case c, slice s, point p as follows:

\[ IOV(c,s,p) = \big( |Manual_1(c,s,p) - Manual_2(c,s,p)| + |Manual_2(c,s,p) - Manual_3(c,s,p)| + |Manual_3(c,s,p) - Manual_1(c,s,p)| \big)/3 \tag{5.2} \]

It should be noted that Manual1, Manual2, Manual3, comp1, comp2, comp3 and GS all have corresponding points so that for card c, row s, they all have the same number of entries; therefore, we can make valid comparisons. At this point we have 10 x n x m measures of inter-observer variability and 10 x n x m measures for the difference between semi-automatic segmentation and GS.
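Equations (5.1) and (5.2) can be sketched for a single slice as follows (Python; the radii below are toy values for corresponding points, not study data):

```python
import numpy as np

# Sketch of eq. (5.1): mean absolute disagreement of the three computer
# contours with the gold standard, per corresponding point, and of
# eq. (5.2): inter-observer variability of the three manual contours.

def delta_sa_gs(comp1, comp2, comp3, gs):
    comps = np.array([comp1, comp2, comp3], float)
    return np.mean(np.abs(comps - np.asarray(gs, float)), axis=0)

def iov(m1, m2, m3):
    m1, m2, m3 = (np.asarray(m, float) for m in (m1, m2, m3))
    return (np.abs(m1 - m2) + np.abs(m2 - m3) + np.abs(m3 - m1)) / 3.0

gs = [50.0, 52.0]  # gold-standard radii at two corresponding points (mm)
d = delta_sa_gs([49.0, 52.0], [51.0, 53.0], [50.0, 54.0], gs)
v = iov([49.0, 51.0], [50.0, 52.0], [51.0, 56.0])
assert np.allclose(d, [2 / 3, 1.0])
assert np.allclose(v, [4 / 3, 10 / 3])
```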
The average IOV and Δ_SA-GS for the initial two slices, middle slices, final two slices and all slices are presented in Table 5.2 along with standard deviations and maximum values. From this table we make the following observations:

• Computer algorithm and observers have more difficulty segmenting the base and apex than the midgland. The IOV at the base (2.27±1.34) is quite poor since observers are free to choose any radius greater than 15mm. Refer to Figure 5.10 for three different manual segmentations of the same base slice. The IOV results at the midgland (1.31±0.86) are better than at the base, which is due to the clear prostate boundary in these slices.

Figure 5.9: Finding the inter-observer variability for case c, slice s, point p, where rmi is Manuali(c, s, p). Manual segmentation by observers 1, 2 and 3 are represented by square marks, o-marks and x-marks respectively.

                     Δ_SA-GS      IOV         IOV − Δ_SA-GS   p-value   CI (95%)
Base    MAD (mm)     1.34±0.89   2.27±1.34    0.93±1.6        <0.001    (0.86,1.22)
Slices  MAXD (mm)    5.09        7.28
Middle  MAD (mm)     1.21±0.78   1.31±0.86    0.10±1.13       0.005     (0.03,0.18)
Slices  MAXD (mm)    4.27        5.30
Apex    MAD (mm)     1.54±0.83   1.64±1.21    0.11±1.43       0.174     (-0.05,0.28)
Slices  MAXD (mm)    4.29        6.44
All     MAD (mm)     1.28±0.82   1.56±1.11    0.29±1.34       <0.001    (0.22,0.36)
Slices  MAXD (mm)    5.09        7.28

Table 5.2: Comparing the average difference between semi-automatic segmentation and the gold standard (Δ_SA-GS) to inter-observer variability (IOV) for different regions of the prostate.

Figure 5.10: Degree of inter-observer variability at the base slice of case 1. Manual segmentations by observer 1 (solid line), observer 2 (dashed line) and observer 3 (dotted line).

• The difference between computer segmentation and the gold standard is in the range of inter-observer variability. At BCCA, treatment plans would be designed for observer 1, observer 2 and observer 3's manually segmented prostate volumes.
The error in semi-automatic segmentation is actually lower, for all regions of the prostate, than the IOV, meaning that if a treatment plan was designed for the semi-automatically segmented volume no one could tell whether the original segmentation was done by the computer or by an observer.

To check the statistical soundness of our analysis we used a two sample t-test on paired data (IOV and Δ_SA-GS) to find p-values and 95% Confidence Intervals (CI). Note that the difference between IOV and Δ_SA-GS is always positive, indicating that the error in semi-automatic segmentation is actually lower than the IOV. The data is statistically significant in the initial two slices, middle slices and when looking at all the slices together. In the final two slices there is insufficient data to conclude statistical significance. We had sufficient data so that all values for IOV − Δ_SA-GS were within the 95% CI.

Figure 5.11 further illustrates the performance of the segmentation algorithm, relative to IOV, using notch plots. In each sub-figure, the lower and upper lines in the notch plot represent the inter-quartile range (the data between the 25th and the 75th percentile range in a histogram). The line in the middle is the median of the data range, and an off-center median line would indicate a skew in the data. The whiskers show the entire range of the data with the exception of the outliers. By default, in Matlab, outliers are data points that are more than 1.5 times the inter-quartile range away from the top or bottom of the box and are represented by plus signs above or below the whiskers. In each notch plot we compare Δcomp1-GS (the difference between comp1 and GS), Δcomp2-GS and Δcomp3-GS to IOV. We can see that in all regions of the prostate the median difference between semi-automatic segmentation and GS is lower than or equal to IOV. The only exception is Δcomp3-GS in the final two slices, which explains why there is a high p-value (0.174) in that region. We should take note of the scale in these plots and put them in perspective. All medians are less than 2mm, which is a very small value compared to the finest accuracy available in planning (5mm spacing of the needle guide template).

5.2.1.2 Comparing Repeatability of Semi-automatic Segmentation to Intra-observer Variability

If we recall, in this study 10 cases were initially manually segmented and initialized by three observers and then 5 of the original 10 cases were re-segmented and initialized two weeks later. In this section we only deal with those cases that were re-segmented, and we would like to compare the intra-observer variability (average proportion of disagreement in one observer's multiple manual segmentations of the same case) to the computer algorithm's repeatability (ability to re-generate the same contour given two different sets of initialization points by the same observer). Basically, we would like to see whether this computer algorithm is better at reproducing results than humans. We will organize the data into stacks again for convenience; however, this time our stacks only have 5 cards because we are dealing with 5 cases. Referring to Table 5.3, for each observer we have the initial manual contour (IMC), repeated manual contour (RMC), initial computer contour
The only exception is ACORNP3-GS in the final two slices which explains why there is a high p-value (0.174) in that region. We should take note of the scale in these plots and put them in perspective. All medians are less than 2mm, which is a very small value compared to the finest accuracy available in planning (5mm spacing of the needle guide template). 5.2.1.2 Comparing Repeatability of Semi-automatic Segmentation to Intra-observer Variability If we recall, in this study 10 cases were initially manually segmented and initialized by three observers and then 5 of the original 10 cases were re-segmented and initialized two weeks later. In this section we only deal with those cases that were re-segmented and we would like to compare the intra-observer variability (average proportion of disagreement in one observer's multiple manual segmentations of the same case) to the computer algorithm's repeatability (ability to re-generate the same contour given two different sets of initialization points by the same observer). Basically, we would like to see whether this computer algorithm is better at reproducing results than humans. We will organize the data into stacks again for convenience; however, this time our stacks only have 5 cards because we are dealing with 5 cases. Referring to Table 5.3, for each observer we have the the initial manual contour (IMC), repeated manual contour (RMC), initial computer contour 5.2 Validation Methods and Results 71 Looking at Base Slices for all Cases; 7 E © <« 5 4 £ O J SI < 2 .CO :: * 1 0 ;:|: Compf-SS 'Ae6mp2-GS : ACorr.p3GS I O V Looking at Apex Slices for all Gases ACcmp;-GS ACcnp2-GS ACnrrp3-3S l 0 V Looking at Middle Slices for all Cases I 5 Q 3 0) — ~ — ".: ' + ,4., 1, ;| ...J;... r:r::::;i::: |: 1, ,-:::|;-:, i 1: h " . :i i . . 
(ICC) and repeated computer contour (RCC) for each of the 5 cases.

[Figure 5.11: Notched box plots comparing the difference between comp1 and GS, comp2 and GS, and comp3 and GS to IOV for different regions of the prostate (base, apex, middle, and all slices, for all cases).]

The intra-observer variability (IAOV) is calculated as follows:

IAOV(c, s, p) = IMC(c, s, p) - RMC(c, s, p)    (5.3)

The computer repeatability (Comp_repeat) is found as follows:

Comp_repeat(c, s, p) = ICC(c, s, p) - RCC(c, s, p)    (5.4)
If the observer or the computer algorithm is good at reproducing results, then these subtractions will result in very small values. At this point, for each observer, we have 5 × n × m measures for IAOV and 5 × n × m measures for Comp_repeat. The averages and standard deviations of IAOV and Comp_repeat are presented in Table 5.4 for different observers and for different regions of the prostate. We carry out a 2-sample, paired (IAOV and Comp_repeat) t-test with the null hypothesis that the difference between the two sample means is zero. As we can see from IAOV - Comp_repeat, the difference between the means is not zero, and the p-values (all < 0.001) suggest rejecting the null hypothesis. From this table we can make the following observations:

• Computer algorithm is always more repeatable. The difference between the means of IAOV and Comp_repeat is always positive, and the 95% CI shows that 95% of the time this difference will be positive for all regions of the prostate and for all observers. This implies that computer segmentation will always be more reproducible than manual segmentation. As well, note the consistently smaller variance in Comp_repeat as compared to IAOV.

• Computer algorithm is consistent in repeatability at different regions of the prostate. Looking at IAOV at different regions of the prostate, we can see that the observers can reproduce the same contours at the midgland more consistently than at the base and apex. In contrast, the computer algorithm is as repeatable at the base and apex as it is at the midgland.

• Computer algorithm is more repeatable than the most experienced observer. The IAOV for observers 1 and 2 (both non-expert) is higher than the IAOV for observer 3 (expert). This makes sense because we assume that with experience an oncologist can more consistently reproduce their results; however, Comp_repeat is less than IAOV even for this most experienced observer.

[Table 5.3: Comparing the repeatability of the computer algorithm to intra-observer variability in manual segmentation. For each of Obs1, Obs2 and Obs3: manual segmentation gives IMC and RMC, with IMC - RMC = IAOV; computer segmentation gives ICC and RCC, with ICC - RCC = Comp_repeat.]

Observer 1
  Slices   IAOV        CR          IAOV - CR   p-value   CI (95%)
  Base     1.87±1.40   0.48±0.36   1.39±1.38   <0.001    (1.16, 1.62)
  Middle   1.09±0.86   0.54±0.50   0.55±0.91   <0.001    (0.46, 0.63)
  Apex     1.32±1.24   0.61±0.66   0.70±1.30   <0.001    (0.49, 0.91)
  All      1.28±1.10   0.55±0.52   0.74±1.14   <0.001    (0.65, 0.82)
Observer 2
  Base     1.70±1.66   0.80±0.65   0.90±2.02   <0.001    (0.73, 1.21)
  Middle   1.30±1.15   1.02±1.07   0.28±1.86   <0.001    (0.17, 0.41)
  Apex     1.43±1.59   1.03±0.97   0.40±2.12   <0.001    (0.26, 0.68)
  All      1.40±1.36   1.01±1.00   0.39±1.97   <0.001    (0.22, 0.41)
Observer 3
  Base     0.95±1.07   0.43±0.31   0.52±1.44   <0.001    (0.23, 0.71)
  Middle   0.72±0.73   0.40±0.28   0.32±1.13   <0.001    (0.29, 0.50)
  Apex     1.27±1.06   0.44±0.35   0.83±1.52   <0.001    (0.36, 0.86)
  All      0.89±0.91   0.41±0.30   0.48±1.29   <0.001    (0.37, 0.56)

Table 5.4: Comparing intra-observer variability (IAOV) to computer repeatability (CR, i.e. Comp_repeat) in different regions of the prostate and for different observers.

5.2.2 Volume Based Metrics

We can find the volume of the prostate from a set of 2D contours using a method called Planimetry.
The equation for Planimetry is:

Volume = [ Σ_{i=1}^{n} A(i) ] · d    (5.5)

where there are n slices in a volume, each with area A(i), and the inter-slice distance d is uniform. This equation essentially says that we can find the volume of an ellipsoid by approximating it as a series of cylinders. In 3D, we superimpose the manually segmented prostate volume, T_m, on top of the computer generated prostate volume, T_a, and call the volume common to both the True Positive volume (TPvol). As in 2D, the volume contained in T_m but not in T_a is the False Negative volume (FNvol), and the volume contained in T_a but not in T_m is the False Positive volume (FPvol). Then the sensitivity of the segmentation algorithm for a given prostate volume becomes:

Sensitivity = TPvol / (manually segmented volume)
            = [ Σ_{i=1}^{n} TP(i) ] · d / ( [ Σ_{i=1}^{n} A_m(i) ] · d )
            = Σ_{i=1}^{n} TP(i) / Σ_{i=1}^{n} A_m(i)    (5.6)

where TP(i) is the True Positive area and A_m(i) is the area of the manual contour for each of the n slices in the volume. We can see from the second line in the above equation that the inter-slice distance cancels out and plays no role in the final answer. Following the same approach, the accuracy of the segmentation algorithm for a given volume is:

Accuracy = 1 - Σ_{i=1}^{n} [ FP(i) + FN(i) ] / Σ_{i=1}^{n} A_m(i)    (5.7)

Table 5.5 shows the sensitivity and accuracy results when comparing the computer generated volumes to the gold standard volume for each case and for each observer. Figures 5.12, 5.14 and 5.13 show the semi-automatically segmented contours (solid line) superimposed on top of the gold standard (dashed line) for every slice in some sample volumes. Figure 5.12 shows observer 3's segmentation of case 1, in which the sensitivity and accuracy of the algorithm were 99.6 and 75.6 percent respectively. In this figure, the semi-automatic contour encompasses almost every gold standard contour, resulting in the very high sensitivity value; however, there are a lot of false positive areas, which contribute to the lower accuracy value.
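The planimetry and overlap metrics above reduce to a few array operations. A minimal sketch with hypothetical per-slice areas; the accuracy definition coded below (one minus the FP and FN areas over the manual area) is the form assumed here, consistent with the surrounding discussion:

```python
import numpy as np

def planimetry_volume(areas_mm2, d_mm):
    """Eq. 5.5: approximate the volume as a stack of cylinders."""
    return np.sum(areas_mm2) * d_mm

def sensitivity(tp_areas, manual_areas):
    """Eq. 5.6: the inter-slice distance d cancels out."""
    return np.sum(tp_areas) / np.sum(manual_areas)

# Hypothetical per-slice areas for a 5-slice volume (mm^2).
manual = np.array([400.0, 900.0, 1200.0, 900.0, 400.0])
tp = np.array([390.0, 880.0, 1180.0, 870.0, 380.0])   # true-positive areas
fp = np.array([30.0, 60.0, 80.0, 60.0, 30.0])         # false-positive areas
fn = manual - tp                                       # false-negative areas

vol = planimetry_volume(manual, d_mm=5.0)
sens = sensitivity(tp, manual)
# Accuracy as assumed here: penalize both FP and FN relative to the manual volume.
accu = 1.0 - (np.sum(fp) + np.sum(fn)) / np.sum(manual)

print(f"volume = {vol/1000:.1f} cm^3, sensitivity = {sens:.3f}, accuracy = {accu:.3f}")
```

With these numbers the accuracy comes out lower than the sensitivity, mirroring the false-positive behaviour discussed in the text.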
In Figure 5.14, observer 2's segmentation of case 4 resulted in sensitivity and accuracy results of 86.9 and 84.4 percent respectively. In this figure, the semi-automatic contour is encompassed by almost every gold standard contour, resulting in the lower sensitivity value; there are a lot of false negative areas, which contribute to the lower accuracy value. Figure 5.13 shows the segmentation results of case 5 by observer 3, in which the sensitivity and accuracy results are 95.7 and 90.3 percent respectively. In this figure, the semi-automatic contours line up almost exactly with the gold standard, resulting in the high sensitivity value, and there are very small false positive and false negative areas, contributing to the high accuracy values. The sensitivity and accuracy of the 3D segmentation algorithm, averaged over all cases and all observers, are 96.5 and 84.7 percent respectively. These numbers indicate:

• The algorithm is highly sensitive when picking up pixels that the gold standard specifies belong to the prostate.
• The algorithm tends to over-compensate and cover too much, so that there are many false positives; therefore the accuracy is lower than the sensitivity.

5.2.3 3D Segmentation Algorithm Speed

The times for manual and semi-automatic segmentation of an entire prostate volume were recorded and are presented in Table 5.6. The time for manually segmenting an entire volume was approximately 15 minutes for the non-expert observers and approximately 5 minutes for the expert observer; therefore the average time for manual segmentation was approximately 11.67 minutes, or 700 seconds. It took no more than approximately 10 seconds to choose the 5 initialization points for the semi-automatic algorithm. The computer algorithm itself took, on average, 8.16 ± 0.43 seconds to run, with more than 80% of that time being tied up in the warping step. This means that on average
the semi-automatic segmentation algorithm, which takes approximately 18.16 ± 0.43 seconds from beginning to end, is 37 times faster than manual segmentation. In the future, the following two steps could be carried out to further improve the simulation time: i) convert the program code to C++ (making it at least 5 times faster), and ii) optimize the warping algorithm to perform faster.

            Observer 1       Observer 2       Observer 3
            Sens    Accu     Sens    Accu     Sens    Accu
  Case 1    97.07   73.69    98.67   80.05    99.61   75.59
  Case 2    97.46   84.56    98.36   84.09    99.04   79.57
  Case 3    97.32   89.27    94.81   87.42    95.59   78.13
  Case 4    97.43   84.37    86.95   84.04    94.85   78.89
  Case 5    94.97   90.51    96.14   90.58    95.70   90.28
  Case 6    95.82   88.29    97.41   89.21    97.63   88.01
  Case 7    96.37   89.77    97.95   93.48    98.51   88.53
  Case 8    95.22   79.43    96.57   83.85    98.18   74.38
  Case 9    93.25   87.71    93.28   87.86    93.78   88.21
  Case 10   99.56   81.68    98.06   87.26    99.19   83.53
  Average   96.45   84.93    95.82   86.78    97.21   82.51

Table 5.5: Sensitivity (Sens) and accuracy (Accu), in percent, of 3D semi-automatic segmentation in comparison to the gold standard.

                                      Mean     STD
  Manual:          Total Time (sec)   ~700     N/A
  Semi-automatic   Initialization     ~10      N/A
  (Matlab):        Algorithm (sec)    8.16     0.43
                   Total Time (sec)   18.16    0.43

Table 5.6: Average times for manual and semi-automatic segmentation of an entire prostate volume composed of approximately 10-13 slices.

[Figure 5.12: Gold Standard (dashed line) vs semi-automatic (solid line) segmentation results for observer 3, case 1 (Sensitivity = 99.6%, Accuracy = 75.6%).]

[Figure 5.14: Gold Standard (dashed line) vs semi-automatic (solid line) segmentation results for observer 2, case 4 (Sensitivity = 86.95%, Accuracy = 84.04%).]

5.3 Discussion

5.3.1 Performance

The 3D semi-automatic segmentation algorithm is accurate, repeatable, real-time, robust, and requires only 5 initialization points to segment the entire prostate volume.
The error in the accuracy of the segmentation algorithm is on the order of the error of manual segmentation by two different observers (inter-observer variability). As well, the segmentation algorithm is more repeatable when re-segmenting the same volume (and more consistently repeatable in different regions of the prostate) than the intra-observer variability in manual segmentation by the most experienced observer. Although the algorithm is currently written in Matlab, it is approximately 37 times faster than manual segmentation, and upon transfer to C++ we expect the algorithm to be almost 200 times faster than manual segmentation (performance on the order of 1 second).

5.3.1.1 Potential for Integration at BCCA

This algorithm works well with the low resolution images currently used at BCCA and can be integrated easily into their setup without any changes to their image acquisition procedure. Currently, most cancer centers use prostate volumes that are composed of 2D slices with 5mm inter-slice spacing. When 3D TRUS technology, with finer inter-slice spacing, becomes mainstream, our algorithm can be easily adjusted to work with such volumes for even better performance. The potential for intra-operative integration is high because the algorithm: 1) can be easily optimized to obtain real-time speed, 2) requires only 5 initialization points to segment the entire volume, and 3) is insensitive to initialization points, so that an inexperienced observer can initialize the volume while the oncologist continues uninterrupted with the implant procedure.

5.3.1.2 Algorithm Limitations

This algorithm will perform poorly on cases where the prostate boundary is not visible at the midgland; however, the inter- and intra-observer variability in manual segmentation is significantly larger in such cases as well. The shape and radius of the base and apex are very simplified, and more complex statistical analysis could be applied there at the cost of computational complexity.
Though it is not common, some prostates have a diamond or triangular shaped cross section, and in those cases this algorithm would simply return the smallest ellipse that would encompass the prostate. In this case the prostate would be over-dosed, which is always preferable to being under-dosed.

Chapter 6 Conclusion and Future Work

In this thesis we proposed a 2D semi-automatic segmentation algorithm for TRUS images of the prostate. Then we carried out a study to evaluate the success of the 2D segmentation algorithm on midgland, transverse TRUS images. After successful results we extended the 2D algorithm to 3D. In order to assess the 3D algorithm, a large clinical study was set up using full 3D prostate volumes and multiple observers. This thesis has resulted in the following contributions:

1. 2D semi-automatic segmentation algorithm
(a) Algorithm steps:
i. Choose 5 initialization points
ii. Warp the image and initialization points so that the prostate boundary becomes elliptical
iii. Fit an ellipse to the warped initialization points
iv. Perform edge detection (IMMPDA [2]) on the prostate boundary along the ellipse fit
v. Fit a second ellipse to the edge detected contour
vi. Apply a reverse warp to the second ellipse to obtain the final prostate contour
(b) Clinical Study:
i. Algorithm was applied to 17 pre-operative, midgland TRUS images of the prostate
ii. A single expert observer manually segmented all 17 images and the results were used as the gold standard
(c) Contributions and benefits:
i. Algorithm is insensitive to initialization points and can be initialized by an inexperienced observer
ii. Works well on TRUS images that have low resolution
iii. Distance based metrics between manual and semi-automatic contours showed a mean absolute difference of 0.67±0.18mm, which is lower than inter-observer variability [16]
iv. Area based metrics show sensitivity and accuracy results greater than 97% and 93% respectively
v.
The algorithm was two times faster than manual segmentation

2. 3D semi-automatic segmentation algorithm
(a) Algorithm steps:
i. Choose 5 initialization points on the midgland slice only
ii. Warp all 2D slices and initialization points so that the prostate shape becomes ellipsoidal
iii. Perform 2D segmentation on the midgland slice
iv. Propagate the midgland contour as initialization to adjacent slices and iteratively segment every 2D slice up to, but not including, the base and apex
v. Fit an initial ellipsoid to all 2D contours from the previous step
vi. Segment the base and apex using circles (of 18mm and 15mm radii respectively) centered about the major axis of the ellipsoid from the previous step
vii. Fit a second ellipsoid to all 2D contours, including base and apex
viii. Slice the final ellipsoid at 5mm increments and apply a reverse warp to all resulting 2D contours to get the segmented 3D contour
(b) Clinical Study:
i. Algorithm was applied to 10 pre-operative TRUS prostate volumes, each with 10-14 slices
ii. Three observers (two non-expert and one expert) were used in the study
iii. In the first week all observers manually segmented and initialized (for the purpose of semi-automatic segmentation) all 10 volumes
iv. In the second week all observers manually re-segmented and re-initialized 5 of the original 10 volumes (for the purpose of evaluating intra-observer variability)
(c) Contributions and benefits:
i. Requires only 5 initialization points to segment the entire prostate volume
ii. Can be initialized by an inexperienced observer, since the algorithm is insensitive to initialization points
iii. Robust algorithm works well on low resolution images
iv. Has potential for intra-operative use and integration into BCCA's current brachytherapy procedure
v. Pearson correlation between manual segmentation and semi-automatic segmentation shows a linear relationship (Pearson coefficient of at least 0.7)
vi. Segmentation results are on the order of inter-observer variability
vii.
Segmentation algorithm's repeatability is better than intra-observer variability
viii. Sensitivity and accuracy of the algorithm are approximately 97% and 85% with respect to manual segmentation
ix. Although the program is written in Matlab and not optimized, it is currently one of the fastest 3D prostate segmentation algorithms, and upon transfer to C++ the algorithm will perform on the order of milliseconds (almost 200 times faster than manual segmentation)

3. Novel warping algorithm. A technique was introduced to undo the compression effect of the TRUS probe on the prostate images, resulting in elliptical prostate cross sections. A polar coordinate system was used with origin at the center of the rectum in the TRUS image and an angle of 0° along the horizontal. In this technique, the pixels of an image were warped radially according to a Gaussian function, where pixels close to the rectum were stretched maximally and pixels far away from the rectum were left alone. In the angular direction, the pixels were warped according to a sin(θ) function, with maximum stretch along θ = 90° and minimum stretch along θ = 0°. The purpose of having elliptical prostate cross sections was to be able to fit ellipses to the prostate shape.

4. Ellipse/ellipsoid fitting applied to the prostate contour. We used a priori knowledge of the prostate shape and the extensive literature on fitting ellipses/ellipsoids to a set of scattered points to fit ellipses/ellipsoids to the prostate contour, resulting in symmetrical, smooth and convex shapes. The ellipse/ellipsoid fitting problem is convex (it always returns an elliptical/ellipsoidal solution), robust, computationally inexpensive, and rotationally and translationally invariant.
(a) P to q parameter conversions and vice versa: In both 2D and 3D we derived the P parameters from the q parameters and vice versa, where the P and q parameters were used in the quadratic and parametric representations of an ellipse/ellipsoid respectively.
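The radial/angular warp described in contribution 3 can be sketched as a polar coordinate remap. The Gaussian width and stretch magnitude below are illustrative guesses, not the thesis' tuned values, and nearest-neighbour sampling stands in for proper interpolation:

```python
import numpy as np

def warp_image(img, rectum_xy, stretch=20.0, sigma=40.0):
    """Radial stretch about the rectum center, weighted by |sin(theta)|.

    Sketch of the warp: each output pixel at polar coordinates (r, theta)
    about the rectum is sampled from a smaller source radius, so tissue
    close to the probe is stretched outward while distant pixels are left
    alone. 'stretch' (pixels) and 'sigma' (Gaussian decay) are illustrative
    values, not the thesis' tuned parameters.
    """
    h, w = img.shape
    cx, cy = rectum_xy
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = xs - cx, ys - cy
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)

    # Inverse map: the source radius is shorter where the warp stretches most
    # (maximal along theta = 90 degrees, zero along theta = 0).
    r_src = r - stretch * np.exp(-(r / sigma) ** 2) * np.abs(np.sin(theta))

    # Nearest-neighbour sampling; bilinear interpolation would be smoother.
    x_src = np.clip(np.round(cx + r_src * np.cos(theta)), 0, w - 1).astype(int)
    y_src = np.clip(np.round(cy + r_src * np.sin(theta)), 0, h - 1).astype(int)
    return img[y_src, x_src]

# Tiny synthetic example: a 64x64 gradient "image", rectum center near the bottom.
img = np.add.outer(np.arange(64), np.arange(64)).astype(float)
warped = warp_image(img, rectum_xy=(32, 60))
```

Pixels along the horizontal through the rectum center (theta = 0° or 180°) are untouched, while pixels directly above the center are pulled outward, matching the sin(θ) weighting described above.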
6.1 Future Work

Possible future work and improvements to some of the algorithm's drawbacks are suggested below:

1. The warping algorithm is computationally expensive:
(a) Transfer the algorithm to C++ and optimize the code.
(b) Explore whether there are built-in image warping functions in the Visualization ToolKit (VTK) or other programming languages which make extensive use of graphics.

2. The warping algorithm can be made more accurate:
(a) One can use motion tracking and elasticity (the basics of elastography [31]) to track the prostate motion due to different compressions by the probe. A statistical model of this deformation can be created and used to design the warping function more accurately.

3. Overall segmentation (especially at the base and apex) could be improved:
(a) Create a statistical shape atlas for the prostate using fiducial markers and MRI images of the prostate.
(b) In parallel transverse TRUS images, the boundary of the midgland is easily visible, yet the base is blended with the bladder neck and the apex is blended with the pelvic floor. We could obtain 3D TRUS volumes with finer inter-slice spacing and slice the volume at 5° intervals in a star pattern. In such images the contours of all slices are equally visible, resulting in equal segmentation performance in all regions of the prostate. Alternatively, we could incorporate sagittal images to be used along with transverse images in order to infer the location of the base and apex more easily.
(c) Choose more initialization points, with some points at the base and apex.

4. Other improvements:
(a) It would be informative to design a study with prostate volumes where even the midgland slice contains poorly visible boundaries, to further test the robustness of this algorithm.
(b) This algorithm, combined with seed localization [34], real-time dosimetry [14] and elastography [31] technology (all developed in our group), can be incorporated into a research workstation to be used for clinical procedures. This research station can also be used as a visualization tool for training residents, since the learning curve for prostate brachytherapy is very steep.
(c) We are currently running a clinical study at the BCCA to test the feasibility of replacing manual segmentation with semi-automatic segmentation. To do this, radiation oncologists are designing dosimetric plans for the semi-automatically segmented volumes presented in this paper. These treatment plans are then transferred to their corresponding manually segmented volumes to see whether the resulting dosimetric parameters (V100, D90, conformality index, etc. [30]) are satisfactory. If they are, then semi-automatic segmentation can indeed replace manual segmentation. The results have been promising so far and will be presented in a paper when the study is complete.

Bibliography

[1] R.G. Aarnink, R.J. Giesen, A.L. Huynen, J.J. de la Rosette, F.M. Debruyne, and H. Wijkstra. A practical clinical method for contour determination in ultrasonographic prostate images. Ultrasound Med. Biol., 20(8):705-717, 1994.
[2] P. Abolmaesumi and M. Sirouspour. An interacting multiple model probabilistic data association filter for cavity boundary extraction from ultrasound images. IEEE Trans. Med. Imaging, 23(6):772-784, 2004.
[3] American Cancer Society. Key statistics about prostate cancer, 2006. http://www.cancer.org.
[4] S. Badiei, S.E. Salcudean, J. Varah, and W.J. Morris. Prostate segmentation in 2D ultrasound images using image warping and ellipse fitting. In Medical Image Computing and Computer Assisted Intervention Conference, 2006.
[5] Y. Bar-Shalom and T.E. Fortmann. Tracking and Data Association. Academic Press Inc, 1988.
[6] Y. Bar-Shalom and L. Kirubarajan.
Estimation with Applications to Tracking and Navigation. Wiley Inter-Science, 2001.
[7] Blackman and Popoli. Design and Analysis of Modern Tracking Systems. Artech House, 1999.
[8] V. Chalana and Y. Kim. A methodology for evaluation of boundary detection algorithms on medical images. IEEE Trans. Med. Imaging, 16(5):642-652, 1997.
[9] B. Chiu, G.H. Freeman, M.M.A. Salama, and A. Fenster. Prostate segmentation algorithm using dyadic wavelet transform and discrete dynamic contour. Physics in Medicine and Biology, 49:4943-4960, 2004.
[10] C.K. Chui and G. Chen. Kalman Filtering. Springer-Verlag, 1991.
[11] M. Ding, L. Gardi, Z. Wei, and A. Fenster. 3D TRUS image segmentation in prostate brachytherapy. In Proceedings of IEEE Engineering in Medicine and Biology Conference, 2005.
[12] M. Ding, I. Gyacskov, X. Yuan, M. Drangova, D.B. Downey, and A. Fenster. Slice-based prostate segmentation in 3D US images using continuity constraint. In Proceedings of IEEE Engineering in Medicine and Biology Conference, 2005.
[13] A. Fitzgibbon, M. Pilu, and R. Fisher. Direct least square fitting of ellipses. IEEE Trans. Pattern Anal. Mach. Intellig., 21(5):476-480, 1999.
[14] D.G. French, J. Morris, M. Keyes, and S.E. Salcudean. A real-time dosimetry system for prostate brachytherapy using TRUS and fluoroscopy. In Proc. of 7th MICCAI Conference, Sept 2004.
[15] A. Ghanei, H. Soltanian-Zadeh, A. Ratkewicz, and F.F. Yin. A three-dimensional deformable model for segmentation of human prostate from ultrasound images. Medical Physics, 28(10):2147-2153, 2001.
[16] L. Gong, S.D. Pathak, D.R. Haynor, P.S. Cho, and Y. Kim. Parametric shape modeling using deformable superellipses for prostate segmentation. IEEE Trans. Med. Imaging, 23(3):340-349, 2004.
[17] J. Guerrero, S.E. Salcudean, J.A. McEwen, B.A. Masri, and S. Nicolaou. System for deep venous thrombosis detection using objective compression measures.
IEEE Transactions on Biomedical Engineering, 53(5):845-854, 2006.
[18] N. Hu, D. Downey, A. Fenster, and H. Ladak. Prostate surface segmentation from 3D ultrasound images. In IEEE International Symposium on Biomedical Imaging, pages 613-616, 2002.
[19] N. Hu, D. Downey, A. Fenster, and H. Ladak. Prostate boundary segmentation from 3D ultrasound images. Medical Physics, 30(7):1648-1659, 2003.
[20] C. Knoll, M. Alcaniz, V. Grau, C. Monserrat, and M.C. Juan. Outlining of the prostate using snakes with shape restrictions based on the wavelet transform. Pattern Recogn., 32:1767-1781, 1999.
[21] C.K. Kwoh, M.Y. Teo, W.S. Ng, S.N. Tan, and L.M. Jones. Outlining the prostate boundary using the harmonics method. Med. Biol. Eng. Computing, 36:768-771, 1998.
[22] H.M. Ladak, F. Mao, Y. Wang, D.B. Downey, D.A. Steinman, and A. Fenster. Prostate boundary segmentation from 2D ultrasound images. Med Phys, 27(8):1777-1788, 2000.
[23] H.M. Ladak, Y. Wang, D.B. Downey, and A. Fenster. Testing and optimization of a semiautomatic prostate boundary segmentation algorithm using virtual operators. Medical Physics, 30(7):1637-1647, 2003.
[24] Q. Li and J.G. Griffiths. Least squares ellipsoid specific fitting. In Proc. Geometric Modeling and Processing/IEEE Comp Soc, 2004.
[25] S.D. Pathak, V. Chalana, D.R. Haynor, and Y. Kim. Edge guided delineation of the prostate in transrectal ultrasound images. In Proc. First Joint BMES/EMBS IEEE Conf., volume 2, page 1056, 1999.
[26] H. Ragde, G.L. Grado, B. Nadir, and A.A. Elgamal. Modern prostate brachytherapy. CA Cancer J Clin, 50(6):380-393, 2000.
[27] W.D. Richard and C.G. Keen. Automated texture-based segmentation of ultrasound images of the prostate. Comput. Med. Imag. Graph., 20(3):131-140, 1996.
[28] F. Shao, K.V. Ling, and W.S. Ng. 3D prostate surface detection from ultrasound images based on level set method. In Medical Image Computing and Computer Assisted Intervention Conference, pages 389-396, 2002.
[29] D. Shen, Y. Zhan, and C. Davatzikos.
Segmentation of prostate boundaries from ultrasound images using statistical shape model. IEEE Trans. Med. Imaging, 22(4):539-551, 2003.
[30] I. Spadinger, M. Hilts, M. Keyes, C. Smith, S. Sidhu, and W.J. Morris. Prostate brachytherapy postimplant dosimetry: A comparison of suture-embedded and loose seed implants. Brachytherapy, 5(3):165-173, 2006.
[31] E. Turgay, S. Salcudean, and R. Rohling. Identifying mechanical properties of tissue by ultrasound. Ultrasound Med. Biol., 32(2):221-235, 2006.
[32] J. Varah. Least squares data fitting with implicit function. BIT, 36(4):842-854, 1996.
[33] Y. Wang, H.N. Cardinal, D.B. Downey, and A. Fenster. Semi-automatic three-dimensional segmentation of the prostate using two-dimensional ultrasound images. Med Phys, 30(5):887-897, 2003.
[34] X. Wen, S.E. Salcudean, and P.D. Lawrence. Detection of brachytherapy seeds using ultrasound radio frequency signals. In SPIE International Symposium on Medical Imaging, pages 168-175, 2006.
[35] Y. Zhan and D. Shen. Automated segmentation of 3D US prostate images using statistical texture-based matching method. In Medical Image Computing and Computer Assisted Intervention Conference, 2003.
[36] Y. Zhan and D. Shen. An efficient method for deformable segmentation of 3D US prostate images. In Medical Imaging Augmented Reality Conference, 2004.
[37] Y. Zhan and D. Shen. Deformable segmentation of 3D ultrasound prostate images using statistical texture matching method. IEEE Trans on Med Img, 25(3):256-271, 2006.

Appendix A Ellipse/Ellipsoid Reference Frame Conventions

In this Appendix, we define this thesis' reference frame conventions for representing ellipses and ellipsoids in two and three dimensional space.

A.1 Reference Frame Convention in 2D

The base frame will be {¹o, ¹C}, where ¹o represents the center and the columns of ¹C represent the coordinate axes of frame 1. With respect to this frame the prostate is rotated and translated, as can be seen in Figure A.1b.
The prostate center frame will be {²o, ²C}, and with respect to this frame the prostate is rotated but not translated. This frame is translated with respect to the base frame. The final prostate frame, {⁵o, ⁵C}, is rotated with respect to the center frame, and is rotated and translated with respect to the base frame. With respect to this frame the prostate is neither rotated nor translated. We define a point in reference frame i as ⁱX = [ⁱx ⁱy]ᵀ. If we wanted to rotate (but not translate) every point [⁵x ⁵y]ᵀ in frame 5, we would use the following convention:

[²x; ²y] = [cos θ, -sin θ; sin θ, cos θ] [⁵x; ⁵y]    (A.1)

In this convention, if we take all the points in the frame {⁵o, ⁵C} and rotate them counter-clockwise by an angle θ, the new untranslated, but rotated, coordinate system will be {²o, ²C}. Refer to Figure A.1a for an illustration. Next, we take the rotated ellipse in frame 2 and translate it with the following convention:

[¹x; ¹y] = X₀ + [²x; ²y]    (A.2)

where X₀ = [x₀ y₀]ᵀ is an offset that is added to every point in frame 2 to get every point in frame 1. Alternatively, we can say that X₀ is the center of the ellipse in the {¹o, ¹C} reference frame. Refer to Figure A.1b for an illustration. The final result is a rotated and translated ellipse with respect to reference frame 1 as follows:

[¹x; ¹y] = X₀ + [cos θ, -sin θ; sin θ, cos θ] [⁵x; ⁵y]    (A.3)

A.2 Reference Frame Convention in 3D

In 3D, we will use the same reference frames as we used in two dimensions. Here the {¹o, ¹C}, {²o, ²C} and {⁵o, ⁵C} frames represent the rotated/translated, rotated/untranslated and unrotated/untranslated reference frames respectively. Here again we define a point in reference frame i as ⁱX = [ⁱx ⁱy ⁱz]ᵀ. Referring to Figures A.2 and A.3, we define the convention to get from {⁵o, ⁵C} to {¹o, ¹C} as follows: 1. Rotate ⁵C about ⁵x by θ, giving frame {⁴o, ⁴C}. 2. Rotate ⁴C about ⁴y by φ, giving frame {³o, ³C}. 3.
Rotate ³C about ³z by β, giving frame {²o, ²C}. 4. Add X₀ to any point in frame 2, ²X, to get that same point in frame 1: ¹X = X₀ + ²X. Combining steps 1-3 gets us from frame 5 to frame 2 as follows:

²X = U ⁵X    (A.4)

where U is the combination of the three rotation matrices. Taking step 1 as an example, each rotation matrix can be written in matrix-exponential form:

exp(θ ⁵x̂)    (A.5)

where the hat denotes the skew-symmetric cross-product matrix of the rotation axis.

[Figure A.1: Three reference frames for describing an ellipse. (a) Rotated reference frame; (b) rotated and translated reference frame.]

Finally, if we combine all 4 steps together, we get:

¹X = X₀ + [exp(β ³ẑ) exp(φ ⁴ŷ) exp(θ ⁵x̂)] ⁵X    (A.6)

where X₀ = [x₀ y₀ z₀]ᵀ is the ellipsoid center with respect to frame 1. According to this equation, given any point in frame 5, we can rotate it by U and translate it by X₀ to get that same point in frame 1.

[Figure A.3: Moving between reference frames in 3D.]

Appendix B Ellipse/Ellipsoid Parameter Conversions

For visualization purposes, it is important to be able to express the q parameters of an ellipse/ellipsoid in terms of the P parameters and vice versa. Sections B.1 and B.2 show the parameter conversion techniques for ellipses and ellipsoids respectively.

B.1 Fitting an Ellipse to Measurement Points in 2D

B.1.1 Deriving P Parameters from q Parameters (q to P)

Given the q parameters, it is easy to find the P parameters. This can be done by expanding equation 2.13, collecting like terms, and letting the coefficients equal the P parameters in equation 2.14. Below are the equations that convert the given q parameters to P parameters:

Φ = -x₀ cos θ - y₀ sin θ
Ω = x₀ sin θ - y₀ cos θ    (B.1)

P₁ = cos²θ/a² + sin²θ/b²
P₂ = 2 sin θ cos θ (1/a² - 1/b²)
P₃ = sin²θ/a² + cos²θ/b²
P₄ = 2(Φ cos θ/a² - Ω sin θ/b²)
P₅ = 2(Φ sin θ/a² + Ω cos θ/b²)
P₆ = (Φ/a)² + (Ω/b)² - 1    (B.2)
B.1.2 Deriving q Parameters from P Parameters (P to q)

The Least Squares Minimization (LSM) method from Section 2.4.2 returns the P parameters of the ellipse that best fits a set of measurement points around the boundary of the prostate. In order to plot this ellipse in Matlab, we need to be able to convert the P parameters to q parameters. Converting from P to q parameters is not as straightforward as the q to P conversion in the previous section. Two methods for converting from P to q are presented here.

B.1.2.1 Method 1

Step 1: Finding the ellipse origin [x₀ y₀]ᵀ

We start with the general conic representation of an ellipse in equation 2.14 and translate all the points in frame 1, using equation A.2, to get the reference frame {²o, ²C} as follows:

[(²x + x₀)²  (²x + x₀)(²y + y₀)  (²y + y₀)²  (²x + x₀)  (²y + y₀)  1] · [P₁ P₂ P₃ P₄ P₅ P₆]ᵀ = 0    (B.3)

After expanding this equation and collecting like terms, we see that there are ²x and ²y terms:

P₁ ²x² + P₂ ²x ²y + P₃ ²y² + (2x₀P₁ + P₂y₀ + P₄) ²x + (2y₀P₃ + P₂x₀ + P₅) ²y + (P₁x₀² + P₂x₀y₀ + P₃y₀² + P₄x₀ + P₅y₀ + P₆) = 0    (B.4)

We know that in the {²o, ²C} frame the ellipse has no translation, so we can set the coefficients of these ²x and ²y terms to zero. We then solve these equations for x₀ and y₀ in terms of the P parameters to get:

[x₀; y₀] = 1/(4P₁P₃ - P₂²) [-2P₃P₄ + P₂P₅; P₂P₄ - 2P₁P₅]    (B.5)

Note that the above equation would be undefined if 4P₁P₃ - P₂² = 0, which is in agreement with our ellipse constraint 4P₁P₃ - P₂² > 0. The equation of our ellipse in the rotated, but untranslated, reference frame then becomes:

P₁ ²x² + P₂ ²x ²y + P₃ ²y² + (P₁x₀² + P₂x₀y₀ + P₃y₀² + P₄x₀ + P₅y₀ + P₆) = 0    (B.6)

Step 2: Finding the ellipse angle θ

We start with equation B.6 and rotate all the points in the {²o, ²C} frame, using equation A.1, to get the {⁵o, ⁵C} frame.
After expanding and collecting like terms we get:

    ${}^5x^2\,(P_1\cos^2(\theta) + P_2\cos(\theta)\sin(\theta) + P_3\sin^2(\theta))$
    $+\ {}^5x\,{}^5y\,(2\cos(\theta)\sin(\theta)(P_3 - P_1) + P_2(\cos^2(\theta) - \sin^2(\theta)))$
    $+\ {}^5y^2\,(P_1\sin^2(\theta) - P_2\cos(\theta)\sin(\theta) + P_3\cos^2(\theta))$
    $+\ (P_1x_0^2 + P_2x_0y_0 + P_3y_0^2 + P_4x_0 + P_5y_0 + P_6) = 0$    (B.7)

Using trigonometric identities, setting the coefficient of the ${}^5x\,{}^5y$ term to zero (since in frame 5 the ellipse has no rotation) and making use of equation A.1, it can be shown that:

    $\tan(\theta) = \frac{(P_3 - P_1) \pm \sqrt{(P_3 - P_1)^2 + P_2^2}}{P_2}$    (B.8)

The question is now which sign do we use in the above equation? If we let $\theta_1$ be the angle in which the positive sign is used and $\theta_2$ be the angle with the negative sign, then using trigonometric identities we can prove that $\tan(\theta_1 - \theta_2) = \infty$, which implies that the two angles are 90 degrees apart. Referring to Figure B.1, we can see that any ellipse can be represented with two different sets of $(\theta, a, b)$ values where the two angles are 90 degrees apart. In conclusion, it does not matter which sign is used because the $a$ and $b$ values in step 3 will be adjusted accordingly depending on whichever angle we choose. The equation of our ellipse in the unrotated and untranslated reference frame then becomes:

    ${}^5x^2\,(P_1\cos^2(\theta) + P_2\cos(\theta)\sin(\theta) + P_3\sin^2(\theta)) + {}^5y^2\,(P_1\sin^2(\theta) - P_2\cos(\theta)\sin(\theta) + P_3\cos^2(\theta)) + (P_1x_0^2 + P_2x_0y_0 + P_3y_0^2 + P_4x_0 + P_5y_0 + P_6) = 0$    (B.9)

Step 3: Finding the ellipse size $(a, b)$

At this point we have the $P$, $x_0$, $y_0$, and $\theta$ for this best fit ellipse and the only remaining variables are $a$ and $b$. If we substitute $x_0$, $y_0$, and $\theta$ into equation 2.13 we get one equation with four unknowns: $x$, $y$, $a$ and $b$. If we can find two points, $(x_1, y_1)$ and $(x_2, y_2)$, that are on this ellipse then we can substitute each point into equation 2.13 and get two equations with two unknowns, as shown below, and we can solve for $a$ and $b$.

Figure B.1: Two ellipses with different parameters, $(\theta, a, b) = (\theta_1, \alpha, \beta)$ and $(\theta, a, b) = (\theta_2, \beta, \alpha)$ (left), can give the same final ellipse shape (right).

First, however, we need to find the two points on the ellipse as follows. We know the approximate location of the ellipse in the TRUS images. We choose $x_1$ to be the middle of the TRUS image's horizontal axis and $x_2$ to be 10 pixels away from $x_1$. These $x$ values are chosen arbitrarily. Then we substitute each $x$ value into equation 2.14 and, knowing $P$, we use the quadratic equation to find the corresponding $y$ value, which guarantees that both points lie on the ellipse. Finally, the $a$ and $b$ values are found as follows:

    $\left(\frac{(x_1 - x_0)\cos(\theta) + (y_1 - y_0)\sin(\theta)}{a}\right)^2 + \left(\frac{-(x_1 - x_0)\sin(\theta) + (y_1 - y_0)\cos(\theta)}{b}\right)^2 = 1$
    $\left(\frac{(x_2 - x_0)\cos(\theta) + (y_2 - y_0)\sin(\theta)}{a}\right)^2 + \left(\frac{-(x_2 - x_0)\sin(\theta) + (y_2 - y_0)\cos(\theta)}{b}\right)^2 = 1$    (B.10)

Writing $A_i^2 = ((x_i - x_0)\cos(\theta) + (y_i - y_0)\sin(\theta))^2$ and $B_i^2 = (-(x_i - x_0)\sin(\theta) + (y_i - y_0)\cos(\theta))^2$, the solution is:

    $a = \sqrt{\frac{A_2^2B_1^2 - A_1^2B_2^2}{B_1^2 - B_2^2}}, \qquad b = \sqrt{\frac{A_2^2B_1^2 - A_1^2B_2^2}{A_2^2 - A_1^2}}$    (B.11)

B.1.2.2 Method 2

Step 1: Finding the ellipse origin $[x_0\ y_0]^T$

We begin by normalizing and re-writing equation 2.14 as follows:

    $\frac{1}{P_6}[{}^1x\ \ {}^1y]\begin{bmatrix} P_1 & \frac{P_2}{2} \\ \frac{P_2}{2} & P_3 \end{bmatrix}\begin{bmatrix} {}^1x \\ {}^1y \end{bmatrix} + \frac{1}{P_6}[P_4\ \ P_5]\begin{bmatrix} {}^1x \\ {}^1y \end{bmatrix} + 1 = 0$    (B.12)

    ${}^1X^T C\,{}^1X + B^T\,{}^1X + 1 = 0$    (B.13)

where ${}^1X^T = [{}^1x\ \ {}^1y]$, $C = \frac{1}{P_6}\begin{bmatrix} P_1 & \frac{P_2}{2} \\ \frac{P_2}{2} & P_3 \end{bmatrix}$ and $B^T = \frac{1}{P_6}[P_4\ \ P_5]$. Now we translate all the points in the $\{{}^1o, {}^1C\}$ frame, using equation A.2, to get the $\{{}^2o, {}^2C\}$ reference frame:

    $({}^2X + X_0)^T C ({}^2X + X_0) + B^T({}^2X + X_0) + 1 = 0$    (B.14)

If we expand the above equation we get:

    ${}^2X^T C\,{}^2X + (2X_0^T C + B^T)\,{}^2X + (X_0^T C X_0 + B^T X_0 + 1) = 0$    (B.15)

However, in the $\{{}^2o, {}^2C\}$ frame there is no translation and therefore the coefficient of the ${}^2X$ term should be zero. Solving $(2X_0^T C + B^T) = 0$ for $X_0$ we get:

    $X_0 = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} = -\frac{1}{2}C^{-1}B = \frac{1}{4P_1P_3 - P_2^2}\begin{bmatrix} -2P_3P_4 + P_2P_5 \\ P_2P_4 - 2P_1P_5 \end{bmatrix}$    (B.16)

Note that $[x_0\ y_0]^T$ was found using the same technique as in Method 1, except in matrix form, because we will need this form for steps 2 and 3.
Step 2: Finding the ellipse size $(a, b)$

From equation B.15 we see that the equation of the ellipse in the rotated, but untranslated, reference frame is now:

    ${}^2X^T C\,{}^2X + (X_0^T C X_0 + B^T X_0 + 1) = 0$    (B.17)

Since $C$ is a square, positive definite matrix, it can be broken down into the product of 3 matrices using an Eigenvalue Decomposition (EVD) as follows:

    $C = U^T D_C U$    (B.18)

where $U$ is a square matrix with orthogonal columns and $D_C$ is a diagonal matrix. Substituting the above equation into equation B.17 we get:

    ${}^2X^T U^T D_C U\,{}^2X + (X_0^T C X_0 + B^T X_0 + 1) = 0$    (B.19)

From equation A.1 we can see that ${}^5X = U\,{}^2X$ and with this the above equation becomes:

    ${}^5X^T D_C\,{}^5X + (X_0^T C X_0 + B^T X_0 + 1) = 0$    (B.20)

which can be re-arranged as:

    ${}^5X^T \left(\frac{-D_C}{X_0^T C X_0 + B^T X_0 + 1}\right){}^5X = 1$    (B.21)

Now if we re-write equation 2.10 in matrix form, the general equation of an unrotated, untranslated ellipse is:

    ${}^5X^T \begin{bmatrix} \frac{1}{a^2} & 0 \\ 0 & \frac{1}{b^2} \end{bmatrix} {}^5X = 1$    (B.22)

Using equations B.21 and B.22 we can find $a$ and $b$ as follows:

    $a = \sqrt{-\frac{X_0^T C X_0 + B^T X_0 + 1}{D_C(1,1)}}, \qquad b = \sqrt{-\frac{X_0^T C X_0 + B^T X_0 + 1}{D_C(2,2)}}$    (B.23)

Step 3: Finding the ellipse angle $\theta$

When we carry out the EVD of $C$ ($C = U^T D_C U$), what we are looking for are the diagonal entries in $D_C$ (which will give us the $a, b$ information) and the columns of the rotation matrix $U$ (which will give us the $\theta$ information). Ideally the EVD of $C$ would be:

    $C = [v_1\ v_2]\,\mathrm{diag}\{f(a)\ \ f(b)\}\,[v_1\ v_2]^T$    (B.24)

So how do we know if the first entry in $D_C$ is actually $f(a)$ or $f(b)$? We make the accurate assumption that the angle of rotation of the prostate is never greater than 45°. We then take the dot product of the basis vectors $v_1$ and $v_2$ with the coordinate axes of the unrotated, untranslated reference frame (frame 5). For example, if the dot product of $v_1$ with ${}^5\hat{x}$ is less than the dot product of $v_2$ with ${}^5\hat{x}$ then the first entry in $D_C$ is actually $f(b)$.
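The conversions above can be checked numerically. The sketch below (Python with numpy; an illustration, not the thesis' Matlab code) builds $P$ from known $q$ parameters via equation B.2, recovers the center with equation B.5, the angle by zeroing the cross term as in equation B.8 (written with atan2 to avoid the division by $P_2$), and the axis sizes from the frame-5 constant term; the 90° ambiguity of Figure B.1 is resolved by the convention $a \geq b$.

```python
import numpy as np

def ellipse_q_to_p(x0, y0, a, b, theta):
    """q -> P: expand eq. 2.13 into conic coefficients [P1..P6] (eq. B.2)."""
    c, s = np.cos(theta), np.sin(theta)
    phi = -x0 * c - y0 * s            # constant part of the rotated x coordinate
    psi = x0 * s - y0 * c             # constant part of the rotated y coordinate
    return np.array([
        (c / a) ** 2 + (s / b) ** 2,                   # P1
        2 * s * c * (1 / a ** 2 - 1 / b ** 2),         # P2
        (s / a) ** 2 + (c / b) ** 2,                   # P3
        2 * (phi * c / a ** 2 - psi * s / b ** 2),     # P4
        2 * (phi * s / a ** 2 + psi * c / b ** 2),     # P5
        (phi / a) ** 2 + (psi / b) ** 2 - 1,           # P6
    ])

def ellipse_p_to_q(P):
    """P -> q: recover (x0, y0, a, b, theta); valid for any scaling of P."""
    P1, P2, P3, P4, P5, P6 = P
    det = 4 * P1 * P3 - P2 ** 2                 # > 0 for an ellipse (eq. B.5)
    x0 = (-2 * P3 * P4 + P2 * P5) / det
    y0 = (P2 * P4 - 2 * P1 * P5) / det
    theta = 0.5 * np.arctan2(P2, P1 - P3)       # zeros the cross term (cf. eq. B.8)
    c, s = np.cos(theta), np.sin(theta)
    K = P1*x0**2 + P2*x0*y0 + P3*y0**2 + P4*x0 + P5*y0 + P6  # frame-5 constant
    a = np.sqrt(-K / (P1 * c ** 2 + P2 * c * s + P3 * s ** 2))
    b = np.sqrt(-K / (P1 * s ** 2 - P2 * c * s + P3 * c ** 2))
    if a < b:                                   # the two (theta, a, b) choices of
        a, b, theta = b, a, theta + np.pi / 2   # Figure B.1 differ by 90 degrees
    theta = (theta + np.pi / 2) % np.pi - np.pi / 2
    return x0, y0, a, b, theta
```

Round-tripping a known ellipse, including through a scaled $P$ vector (the conic is only defined up to scale), recovers the original $q$ parameters.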
Now that we have corrected the order of the columns in $D_C$ and $U$ we have:

    $C = U^T D_C U = \begin{bmatrix} v_{11} & v_{12} \\ v_{21} & v_{22} \end{bmatrix}\begin{bmatrix} f(a) & 0 \\ 0 & f(b) \end{bmatrix}\begin{bmatrix} v_{11} & v_{21} \\ v_{12} & v_{22} \end{bmatrix}$    (B.25)

Recalling from equation A.1, the relationship between $U$ and $\theta$ is:

    ${}^5X = \begin{bmatrix} {}^5x \\ {}^5y \end{bmatrix} = \begin{bmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta) \end{bmatrix}\begin{bmatrix} {}^2x \\ {}^2y \end{bmatrix} = U\,{}^2X$    (B.26)

Using the above two sets of equations we can find $\theta$ as follows:

    $\theta = \arccos(v_{11})$    (B.27)

B.2 Fitting an Ellipsoid to Measurement Points in 3D

B.2.1 Deriving P Parameters from q Parameters (q to P)

We can find the $P$ parameters by expanding equation 4.5 and letting the coefficients equal the $P$ parameters in 4.6. Using the shorthand $c\theta = \cos(\theta)$, $s\theta = \sin(\theta)$ (and similarly for $\phi$ and $\beta$), we first define:

    $\Theta_1 = c\beta c\theta - s\beta s\phi s\theta$
    $\Theta_2 = s\beta c\theta + c\beta s\phi s\theta$
    $\Theta_3 = -\Theta_2 x_0 - \Theta_1 y_0 + c\phi s\theta\, z_0$
    $\Theta_4 = c\beta s\theta + s\beta s\phi c\theta$
    $\Theta_5 = s\beta s\theta - c\beta s\phi c\theta$
    $\Theta_6 = -\Theta_5 x_0 - \Theta_4 y_0 - c\phi c\theta\, z_0$
    $\Theta_7 = -c\beta c\phi\, x_0 + s\beta c\phi\, y_0 - s\phi\, z_0$    (B.28)

and then:

    $P_1 = \frac{c\beta^2 c\phi^2}{a^2} + \frac{\Theta_2^2}{b^2} + \frac{\Theta_5^2}{c^2}$
    $P_2 = \frac{s\beta^2 c\phi^2}{a^2} + \frac{\Theta_1^2}{b^2} + \frac{\Theta_4^2}{c^2}$
    $P_3 = \frac{s\phi^2}{a^2} + \frac{c\phi^2 s\theta^2}{b^2} + \frac{c\phi^2 c\theta^2}{c^2}$
    $P_4 = -\frac{s\beta c\phi s\phi}{a^2} - \frac{c\phi s\theta\,\Theta_1}{b^2} + \frac{c\phi c\theta\,\Theta_4}{c^2}$
    $P_5 = \frac{c\beta c\phi s\phi}{a^2} - \frac{c\phi s\theta\,\Theta_2}{b^2} + \frac{c\phi c\theta\,\Theta_5}{c^2}$
    $P_6 = -\frac{s\beta c\beta c\phi^2}{a^2} + \frac{\Theta_1\Theta_2}{b^2} + \frac{\Theta_4\Theta_5}{c^2}$
    $P_7 = \frac{\Theta_7 c\beta c\phi}{a^2} + \frac{\Theta_2\Theta_3}{b^2} + \frac{\Theta_5\Theta_6}{c^2}$
    $P_8 = -\frac{\Theta_7 s\beta c\phi}{a^2} + \frac{\Theta_1\Theta_3}{b^2} + \frac{\Theta_4\Theta_6}{c^2}$
    $P_9 = \frac{\Theta_7 s\phi}{a^2} - \frac{\Theta_3 c\phi s\theta}{b^2} + \frac{\Theta_6 c\phi c\theta}{c^2}$
    $P_{10} = \frac{\Theta_7^2}{a^2} + \frac{\Theta_3^2}{b^2} + \frac{\Theta_6^2}{c^2} - 1$    (B.29)

B.2.2 Deriving q Parameters from P Parameters (P to q)

We would like to fit an ellipsoid to a cloud of points in 3D. Li and Griffiths [24] show an algorithm to find the $P$ parameters of this 'best fit' ellipsoid using a least squares approach and an ellipsoidal constraint on the $P$ parameters. The problem has since been to convert this $P$ vector to the $q$ parameter representation so that we can draw the ellipsoid. We will present two methods for finding the $(x_0, y_0, z_0)$ and $(a, b, c)$ values. At the end we will show one method of finding the $(\theta, \phi, \beta)$ values. Before we proceed, we divide both sides of equation 4.6 by $P_{10}$ to get:

    $F(V, P) = V \cdot P = PP_1 x^2 + PP_2 y^2 + PP_3 z^2 + 2PP_4\,yz + 2PP_5\,xz + 2PP_6\,xy + 2PP_7\,x + 2PP_8\,y + 2PP_9\,z + 1$    (B.30)

where $PP_i = P_i/P_{10}$. In doing this we have eliminated the free scaling factor, and this is the equation that we will use in the following sections.

B.2.2.1 Method 1

Step 1: Finding the ellipsoid origin $[x_0\ y_0\ z_0]^T$

In this method we start in frame 5 and work our way to frame 1. An unrotated, untranslated ellipsoid in frame 5 is described by:

    ${}^5X^T D_A\,{}^5X = 1$    (B.31)

where $D_A = \mathrm{diag}\{1/a^2\ \ 1/b^2\ \ 1/c^2\}$. Referring to Appendix A, we can describe frame 1 with respect to frame 5 as follows:

    ${}^1X = X_0 + U\,{}^5X \quad\Rightarrow\quad {}^5X = U^T({}^1X - X_0)$    (B.32)

Substituting equation B.32 into B.31 we get the ellipsoid equation in frame 1:

    $({}^1X - X_0)^T U D_A U^T ({}^1X - X_0) = 1$    (B.33)

Expanding the above equation and letting $A = U D_A U^T$ we get:

    ${}^1X^T A\,{}^1X - 2(X_0^T A)\,{}^1X + (X_0^T A X_0 - 1) = 0$    (B.34)

Now we divide both sides by $(X_0^T A X_0 - 1)$ to get:

    ${}^1X^T\left(\frac{A}{X_0^T A X_0 - 1}\right){}^1X + \left(\frac{-2X_0^T A}{X_0^T A X_0 - 1}\right){}^1X + 1 = 0$    (B.35)

and we define:

    $C = \frac{A}{X_0^T A X_0 - 1}$    (B.36)

    $B^T = \frac{-2(X_0^T A)}{X_0^T A X_0 - 1}$    (B.37)

If we re-write equation B.30 in matrix form we get:

    $[x\ y\ z]\begin{bmatrix} PP_1 & PP_6 & PP_5 \\ PP_6 & PP_2 & PP_4 \\ PP_5 & PP_4 & PP_3 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} + [2PP_7\ \ 2PP_8\ \ 2PP_9]\begin{bmatrix} x \\ y \\ z \end{bmatrix} + 1 = 0$    (B.38)

Combining equations B.35, B.36, B.37 and B.38 it can be shown that:

    $B^T = -2X_0^T C \quad\Rightarrow\quad X_0 = -\frac{1}{2}C^{-1}B$    (B.39)

Step 2: Finding the ellipsoid size $(a, b, c)$

At this point we have the $P$ vector and $X_0$ and we would like to find $(a, b, c)$. For the time being we make an assumption that we will prove shortly. We assume:

    $A = \frac{-C}{X_0^T C X_0 + B^T X_0 + 1}$    (B.40)

Now that we have $A$ in terms of the known variables, we can find $(a, b, c)$ by carrying out an EVD on $A$ as follows:

    $A = U D_A U^T = U\begin{bmatrix} 1/a^2 & 0 & 0 \\ 0 & 1/b^2 & 0 \\ 0 & 0 & 1/c^2 \end{bmatrix}U^T$    (B.41)

From this we can find the $(a, b, c)$ values easily:

    $a = \sqrt{1/D_A(1,1)}, \qquad b = \sqrt{1/D_A(2,2)}, \qquad c = \sqrt{1/D_A(3,3)}$    (B.42)

B.2.2.2 Method 2

Step 1: Finding the ellipsoid origin $[x_0\ y_0\ z_0]^T$

In this method we start in frame 1 and work our way to frame 5. A rotated, translated ellipsoid in frame 1 is described by:

    ${}^1X^T C\,{}^1X + B^T\,{}^1X + 1 = 0$    (B.43)

If we translate every point on this ellipsoid according to ${}^2X = {}^1X - X_0$ we get the equation of the ellipsoid in frame 2:

    $({}^2X + X_0)^T C({}^2X + X_0) + B^T({}^2X + X_0) + 1 = 0$    (B.44)

Expanding the above equation gives us:

    ${}^2X^T C\,{}^2X + (2X_0^T C + B^T)\,{}^2X + (X_0^T C X_0 + B^T X_0 + 1) = 0$    (B.45)

But in frame 2 there is no translation, so we set the coefficient of ${}^2X$ to zero to find $X_0$:

    $(2X_0^T C + B^T) = 0 \quad\therefore\quad X_0 = -\frac{1}{2}C^{-1}B$    (B.46)

This result agrees with our previous finding for $X_0$. The equation of the ellipsoid in frame 2 becomes:

    ${}^2X^T C\,{}^2X + (X_0^T C X_0 + B^T X_0 + 1) = 0$    (B.47)

Step 2: Finding the ellipsoid size $(a, b, c)$

Once again we use EVD to break down $C$ into a sequence of orthogonal, diagonal and orthogonal matrices ($C = V^T D_C V$). Equation B.47 then becomes:

    ${}^2X^T V^T D_C V\,{}^2X + (X_0^T C X_0 + B^T X_0 + 1) = 0$    (B.48)

where $V = U^T$. If we recall from equation B.32 that ${}^5X = U^T\,{}^2X$ then we can re-write the above equation as:

    ${}^5X^T D_C\,{}^5X + (X_0^T C X_0 + B^T X_0 + 1) = 0$

Comparing the above equation to equation B.31 we can find $(a, b, c)$:

    $a = \sqrt{-\frac{X_0^T C X_0 + B^T X_0 + 1}{D_C(1,1)}}, \quad b = \sqrt{-\frac{X_0^T C X_0 + B^T X_0 + 1}{D_C(2,2)}}, \quad c = \sqrt{-\frac{X_0^T C X_0 + B^T X_0 + 1}{D_C(3,3)}}$    (B.49)

B.2.2.3 Comparing Method 1 and Method 2

The following check proves equation B.40 and shows that the two methods are in agreement. From Method 2 we saw that:

    $D_A = \frac{-VCV^T}{X_0^T C X_0 + B^T X_0 + 1}$    (B.50)

but, from Method 1 we saw that:

    $A = U D_A U^T$    (B.51)

Substituting equation B.50 into B.51 we get:

    $A = U\left(\frac{-VCV^T}{X_0^T C X_0 + B^T X_0 + 1}\right)U^T$    (B.52)

But recall that $U = V^T$.
Therefore the above equation becomes:

    $A = \frac{-C}{X_0^T C X_0 + B^T X_0 + 1}$    (B.53)

which is the proof for equation B.40. Re-arranging the above equation, and using $B^T X_0 = -2X_0^T C X_0$ to show that $X_0^T A X_0 - 1 = -1/(X_0^T C X_0 + B^T X_0 + 1)$, we get:

    $C = \frac{A}{X_0^T A X_0 - 1}$    (B.54)

which agrees with equation B.36. Now we have proven that the above two methods are in agreement.

Step 3: Finding the ellipsoid angles $(\theta, \phi, \beta)$

Finding the ellipsoid angles is the same for both methods. Before we find the angles we must make sure the order of the eigenvalues and eigenvectors from the EVD is correct. When we carry out the eigenvalue decomposition of $A$ ($A = U D_A U^T$), what we are looking for are the diagonal entries in $D_A$, which contain the $(a, b, c)$ information, and the columns of $U$, which contain the basis vectors (and therefore the $(\theta, \phi, \beta)$ information) for ${}^5C$. So ideally we have:

    $A = [v_1\ v_2\ v_3]\,\mathrm{diag}\{f(a)\ \ f(b)\ \ f(c)\}\,[v_1\ v_2\ v_3]^T$    (B.55)

However, the eigenvalue decomposition does not necessarily give the $(a, b, c)$ and basis columns in the right order. This means that Matlab could return any of the following eigenvalue decompositions, all of which give $A$:

    $A = [v_2\ v_1\ v_3]\,\mathrm{diag}\{f(b)\ \ f(a)\ \ f(c)\}\,[v_2\ v_1\ v_3]^T$
    $\ \ = [v_3\ v_2\ v_1]\,\mathrm{diag}\{f(c)\ \ f(b)\ \ f(a)\}\,[v_3\ v_2\ v_1]^T$
    $\ \ = [v_1\ v_3\ v_2]\,\mathrm{diag}\{f(a)\ \ f(c)\ \ f(b)\}\,[v_1\ v_3\ v_2]^T$

So how do we know if the first entry in $D_A$ is actually $f(a)$, $f(b)$ or $f(c)$? We make the assumption that the angles of rotation of the prostate, $(\theta, \phi, \beta)$, are never greater than 45°. This is a practical assumption since the prostate would never rotate more than 45° about any of the 3 axes. Next, we take the dot product of each vector $v_i$ with $[1\ 0\ 0]^T$, $[0\ 1\ 0]^T$ and $[0\ 0\ 1]^T$ to see which axis it is aligned with (refer to Figure B.2 for an illustration). Also, sometimes the $v_i$ are the negative of what they should be. In that case, if the largest entry in $v_i$ is negative we multiply the whole vector by $-1$. Once we have corrected the order of the basis vectors in $U$, the correct EVD of $A$ becomes:

    $A = U D_A U^T = \begin{bmatrix} v_{11} & v_{21} & v_{31} \\ v_{12} & v_{22} & v_{32} \\ v_{13} & v_{23} & v_{33} \end{bmatrix}\begin{bmatrix} f(a) & 0 & 0 \\ 0 & f(b) & 0 \\ 0 & 0 & f(c) \end{bmatrix}\begin{bmatrix} v_{11} & v_{12} & v_{13} \\ v_{21} & v_{22} & v_{23} \\ v_{31} & v_{32} & v_{33} \end{bmatrix}$    (B.56)

Figure B.2: Correcting the order of the basis vectors in 3D.

Referring back to equation A.6 we recall that:

    $U = \exp(\beta\,{}^3\hat{z}\times)\,\exp(\phi\,{}^4\hat{y}\times)\,\exp(\theta\,{}^5\hat{x}\times)$    (B.57)

and when we expand $U$ we get:

    $U = \begin{bmatrix} c\beta c\phi & s\beta c\theta + c\beta s\phi s\theta & s\beta s\theta - c\beta s\phi c\theta \\ -s\beta c\phi & c\beta c\theta - s\beta s\phi s\theta & c\beta s\theta + s\beta s\phi c\theta \\ s\phi & -c\phi s\theta & c\phi c\theta \end{bmatrix}$    (B.58)

At this point we can use equations B.58 and B.56 to find the angles as follows:

    $\phi = \arcsin(v_{13}) = \arcsin(U(3,1))$    (B.59)

    $\theta = \arcsin\left(\frac{-v_{23}}{c\phi}\right) = \arcsin\left(\frac{-U(3,2)}{c\phi}\right)$    (B.60)

    $\beta = \arcsin\left(\frac{-v_{12}}{c\phi}\right) = \arcsin\left(\frac{-U(2,1)}{c\phi}\right)$    (B.61)

Appendix C

Ellipse/Ellipsoid Fitting - Mathematical Proofs

C.1 LSM Cost Function

Our purpose is to prove that the cost function for LSM, $\sum_{i=1}^{n}(F_i(V_i, P))^2$, is equal to $\|SP\|^2$. Given a set of $n$ points in 2D space, $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, where we have omitted the reference frame notation for convenience, the corresponding $V_i$ vectors are:

    $V_1 = [x_1^2\ \ x_1y_1\ \ y_1^2\ \ x_1\ \ y_1\ \ 1]$
    $V_2 = [x_2^2\ \ x_2y_2\ \ y_2^2\ \ x_2\ \ y_2\ \ 1]$
    $\vdots$
    $V_n = [x_n^2\ \ x_ny_n\ \ y_n^2\ \ x_n\ \ y_n\ \ 1]$    (C.1)

The distance from each point to an ellipse described by $P = [P_1\ P_2\ P_3\ P_4\ P_5\ P_6]^T$ is:

    $F_1(V_1, P) = P_1x_1^2 + P_2x_1y_1 + P_3y_1^2 + P_4x_1 + P_5y_1 + P_6 = V_1 \cdot P$
    $F_2(V_2, P) = P_1x_2^2 + P_2x_2y_2 + P_3y_2^2 + P_4x_2 + P_5y_2 + P_6 = V_2 \cdot P$
    $\vdots$
    $F_n(V_n, P) = P_1x_n^2 + P_2x_ny_n + P_3y_n^2 + P_4x_n + P_5y_n + P_6 = V_n \cdot P$    (C.2)

We would like to find the $P$ vector that minimizes the sum of the squares of these algebraic distances as follows:

    $\mathrm{Cost} = \sum_{i=1}^{n}(F_i(V_i, P))^2 = (V_1 \cdot P)^2 + (V_2 \cdot P)^2 + \ldots + (V_n \cdot P)^2$    (C.3)

We will now prove that the above equation equals $\|SP\|^2$, where $S = [V_1^T\ V_2^T\ \ldots\ V_n^T]^T$, by expanding:

    $\|SP\|^2 = \left\|\begin{bmatrix} x_1^2 & x_1y_1 & y_1^2 & x_1 & y_1 & 1 \\ x_2^2 & x_2y_2 & y_2^2 & x_2 & y_2 & 1 \\ \vdots & & & & & \vdots \\ x_n^2 & x_ny_n & y_n^2 & x_n & y_n & 1 \end{bmatrix}\begin{bmatrix} P_1 \\ P_2 \\ P_3 \\ P_4 \\ P_5 \\ P_6 \end{bmatrix}\right\|^2 = \left\|\begin{bmatrix} V_1P \\ V_2P \\ \vdots \\ V_nP \end{bmatrix}\right\|^2 = (V_1 \cdot P)^2 + (V_2 \cdot P)^2 + \ldots + (V_n \cdot P)^2$    (C.4)

Therefore from equations C.3 and C.4 we can see that indeed $\sum_{i=1}^{n}(F_i(V_i, P))^2 = \|SP\|^2$.

C.2 Minimizing Cost Function using GEP

The mathematical proofs in this section were obtained from Dr. Jim Varah. Our purpose is to show that the ellipse which best fits a set of measurement points is represented by the $P$ vector (eigenvector) corresponding to the unique positive eigenvalue in equation 2.23. This will be proven in the following two steps.

Step 1: Proof of the Existence of a Single Positive Eigenvalue

We know that $N$ is a singular matrix and has eigenvalues $[-2\ -1\ 0\ 0\ 0\ 2]$. The inertia of a matrix corresponds to the number of negative, zero and positive eigenvalues that the matrix has. For example, the inertia of $N$ is $\{2, 3, 1\}$. We also know that $S^TS$ is positive semi-definite because for all $P \in \mathbb{R}^n$, where $P \neq 0$, $P^TS^TSP = \|SP\|^2 > 0$ if $S$ is full rank and $P^TS^TSP = \|SP\|^2 \geq 0$ if $S$ is not full rank. We let $S^TS$ equal $M$ and use the Cholesky Decomposition to re-write $M$ as follows:

    $M = L^TL$    (C.5)

where $L$ is a lower triangular matrix. We now substitute equation 2.23 into C.5 and re-arrange:

    $(L^{-T}N - \mu L^T)P = 0$    (C.6)

If we let $P = L^{-1}q$ and we substitute this into the above equation we get:

    $(L^{-T}NL^{-1} - \mu I)q = 0$    (C.7)

If we let $B = L^{-T}NL^{-1}$ then we can say that the eigenvalues of $B$ are the $\mu$ from the above equation. Now we let $Q = L^{-1}$ to get $B = Q^TNQ$, which is a congruency transformation. By Sylvester's Law, a congruency transformation maintains the inertia of a matrix. This means that $B$ has the same number of negative, zero and positive eigenvalues as $N$. This implies that $\mu$ from equation 2.23 can only have one unique positive eigenvalue because $N$ has a single positive eigenvalue.
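Step 1 can also be verified numerically: with $M = S^TS$ built from noisy boundary samples and $N$ encoding the constraint $4P_1P_3 - P_2^2 = 1$, the pencil $NP = \mu MP$ has exactly one clearly positive eigenvalue, whose eigenvector is the best-fit conic. A small sketch (Python with numpy; the sample ellipse, noise level and tolerances are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of the ellipse ((x-1)/3)^2 + ((y+2)/2)^2 = 1
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
x = 1 + 3 * np.cos(t) + 0.01 * rng.standard_normal(t.size)
y = -2 + 2 * np.sin(t) + 0.01 * rng.standard_normal(t.size)

# Design matrix S: one row V_i = [x^2, xy, y^2, x, y, 1] per point (eq. C.1)
S = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
M = S.T @ S                                   # M = S^T S, positive definite

# Constraint matrix: P^T N P = 4 P1 P3 - P2^2, eigenvalues [-2, -1, 0, 0, 0, 2]
N = np.zeros((6, 6))
N[0, 2] = N[2, 0] = 2.0
N[1, 1] = -1.0

# Generalized eigenproblem N P = mu M P, solved as eig(M^{-1} N)
mu, vecs = np.linalg.eig(np.linalg.solve(M, N))
mu = mu.real
pos = np.flatnonzero(mu > 1e-6 * np.abs(mu).max())   # indices of positive eigenvalues
P = vecs[:, pos[0]].real                             # best-fit conic coefficients

# Center recovered from eq. B.5 (invariant to the scale and sign of P)
det = 4 * P[0] * P[2] - P[1]**2
x0 = (-2 * P[2] * P[3] + P[1] * P[4]) / det
y0 = (P[1] * P[3] - 2 * P[0] * P[4]) / det
```

Because $M$ is positive definite, $P^TNP = \mu\,P^TMP$ forces the fitted eigenvector to satisfy the ellipse constraint $4P_1P_3 - P_2^2 > 0$, regardless of the noise.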
Step 2: Proof That the Cost Function is Minimized by the P Vector Corresponding to the Unique Positive Eigenvalue

We use the 'Simultaneous Diagonalization of two Quadratic Forms' to simultaneously diagonalize $M$ and $N$, using the same eigenvector matrix $z$, as follows:

    $z^TMz = D_\alpha$
    $z^TNz = D_\beta$    (C.8)

where $D_\alpha$ and $D_\beta$ are diagonal matrices containing $\alpha_i$ (eigenvalues of $M$) and $\beta_i$ (eigenvalues of $N$) on the diagonal entries. Starting with equation 2.23 and plugging in the above equations it can be shown that $\mu_i = \alpha_i/\beta_i$. Since $M$ is positive definite all its eigenvalues are positive and therefore $\alpha_i > 0$ for $i = 1...6$. We recall from Step 1 that $N$ has only one positive eigenvalue; therefore, $D_\beta$ has only one positive entry and so we let $\beta_1 > 0$ and $\beta_i < 0$ for $i = 2...6$. We now revisit the problem statement:

    $\min_P \mathrm{Cost} = P^TMP \quad \text{subject to} \quad P^TNP = 1$    (C.9)

Using a simple transformation, $P = zy$, we change the problem statement to:

    $\min_y \mathrm{Cost} = y^T(z^TMz)y \quad \text{subject to} \quad y^T(z^TNz)y = 1$    (C.10)

Substituting equation C.8 into the above equation we get:

    $\min_y \mathrm{Cost} = y^TD_\alpha y \quad \text{subject to} \quad y^TD_\beta y = 1$    (C.11)

which essentially simplifies to finding the $y$ value that minimizes $\mathrm{Cost} = \sum_{i=1}^{6}\alpha_i y_i^2$ subject to $\sum_{i=1}^{6}\beta_i y_i^2 = 1$. Solving the constraint for $y_1^2$ we get:

    $y_1^2 = \frac{1 - \sum_{i=2}^{6}\beta_i y_i^2}{\beta_1}$    (C.12)

Then substituting this into the cost function and re-arranging gives us:

    $\mathrm{Cost} = \frac{\alpha_1}{\beta_1} + \sum_{i=2}^{6}\left(\alpha_i - \frac{\alpha_1\beta_i}{\beta_1}\right)y_i^2$    (C.13)

Since $\alpha_i$ is always positive and $\beta_i$ is always negative for $i \geq 2$, the contents of the bracket in the equation above will always be positive. This means that the minimum of this cost function is $\alpha_1/\beta_1$ and this occurs when $y_i$, for all $i \geq 2$, are zero. Therefore the cost function is minimized when:

    $y = \left[\tfrac{1}{\sqrt{\beta_1}}\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\right]^T$    (C.14)

We still need to find the $P$ vector that minimizes the cost function of equation C.9. If $z = [z_1\ z_2\ z_3\ z_4\ z_5\ z_6]$, where $z_i$ is the eigenvector corresponding to $\alpha_i$ and $\beta_i$, then we can use the transformation $P = zy$ to find $P$:

    $P = \frac{z_1}{\sqrt{\beta_1}}$    (C.15)

This shows that the $P$ vector we are looking for is the one that corresponds to the first eigenvector in $z$, which corresponds to the first eigenvalues $\alpha_1$ and $\beta_1$, both of which are positive. Note that since $\mu_i = \alpha_i/\beta_i$, $\mu_1$ is the only positive eigenvalue and the eigenvector that minimizes our cost function is the one corresponding to this eigenvalue. This ellipse fitting technique always returns a $P$ vector that represents an ellipse regardless of the initial $(x_i, y_i)$ points. Also, the fit has rotational and translational invariance (meaning if you translate/rotate the data, the resulting ellipse fit will translate/rotate by the same amount), is robust, easy to implement and computationally cheap.

Appendix D

Lagrangian Multipliers

Lagrangian multipliers are used in mathematical optimization problems that attempt to find the local extrema of a function of one or several variables subject to one or more constraints. Using this method we can reduce a problem with $n$ equations, $n$ variables and $k$ constraints to a problem with $n + k$ equations, $n + k$ variables and no constraints by introducing a new scalar multiplier, called the Lagrangian multiplier, for each constraint.

Let us consider the two dimensional case with one constraint, so that $n = 2$, $k = 1$. Let the function to be optimized be $f(x, y)$ and let the constraint be $h(x, y) = g(x, y) - c$. Figure D.1 shows that as we traverse the constraint $h(x, y)$ we are moving up and down along the level curves of $f(x, y)$ and at some point we will reach a maximum value of $f(x, y)$ along the $h(x, y)$ path. This maximum occurs when the $g(x, y) = c$ curve is tangential to a level curve, $f(x, y) = d_n$, and is no longer crossing the level curves. At this point the gradients of $f(x, y)$ and $h(x, y)$ are parallel:

Figure D.1: The dashed lines represent the level curves of the function we are trying to optimize. The solid line represents the constraint. The arrows point in the direction of maximum gradient.

    $\mathrm{grad}(f(x, y)) = \lambda\,\mathrm{grad}(g(x, y) - c)$    (D.1)

which gives us:

    $\frac{\partial f}{\partial x} = \lambda\frac{\partial h}{\partial x}, \qquad \frac{\partial f}{\partial y} = \lambda\frac{\partial h}{\partial y}$    (D.2)

Using the above two equations and $h(x, y) = 0$ we have 3 equations and 3 unknowns ($\lambda$, $x$ and $y$) that we can find easily. In the general case, assuming $f$ has $n$ variables and there are $k$ constraints, we get the following equation:

    $\mathrm{grad}(f) - \lambda_1\,\mathrm{grad}(h_1) - \lambda_2\,\mathrm{grad}(h_2) - \cdots - \lambda_k\,\mathrm{grad}(h_k) = 0$    (D.3)

This vector equation gives us $n$ equations with $n + k$ variables. Using the $k$ constraint equations we will have $n + k$ equations with $n + k$ variables and we will be able to solve the system of equations to find the $n$ variables that optimize $f$ subject to the $k$ constraints.

Appendix E

Newton-Raphson Method

Figure E.1 shows that the warping function, $r_{new}$, as a function of $r$ ($\theta$ and $\alpha$ are constant), is a monotonically increasing function. This means that given any warped radius, $r_{new}(i)$, we can find its single corresponding unwarped radius, $r(i)$, using the Newton-Raphson method. In this method we start with an arbitrary initial $r$ value, $r_0$, and we check to see if $r_{new}(r_0)$ is sufficiently close to $r_{new}(i)$ by checking whether the difference between $r_{new}(r_0)$ and $r_{new}(i)$ is less than some threshold ($\mathrm{diff} < \mathrm{thresh}$). If the difference is less than the pre-defined threshold then $r_0$ is the single $r$ value corresponding to $r_{new}(i)$ and we have found the unwarped radius value. If the difference is not less than the threshold then we find the intersection of the line tangent to the warping function at $(r_0, r_{new}(r_0))$ and the x-axis. This x-intercept becomes our new $r_0$ and we find its corresponding $r_{new}(r_0)$ and again check to see if it is close to $r_{new}(i)$. These steps are iterated until we find the $r_0 = r(i)$ that corresponds to $r_{new}(i)$. Then we move on to $r_{new}(i + 1)$ until all warped radii are unwarped.

Figure E.1: Warping function as a function of $r$ only, with $\theta = 90°$ and $\alpha = 40$.
Newton's method is used to find the $r(i)$ value corresponding to $r_{new}(i)$.

Appendix F

Tracking Theory and (Extended) Kalman Filters

F.1 Kalman Filter

The Kalman Filter (KF) is a recursive numerical technique that estimates and tracks the internal states of a time-varying system which has noise superimposed both on top of its state variables (process noise) and its output signal (measurement noise). Figure F.1 was obtained from [5] and is an excellent diagram for describing the steps of the KF. Equation F.1 shows the state space representation of a linear stochastic process (or plant):

    $x(k+1) = F(k)x(k) + G(k)u(k) + \Gamma(k)V(k)$
    $z(k+1) = H(k+1)x(k+1) + w(k+1)$    (F.1)

where $F(k)$, $G(k)$ and $H(k)$ describe our model of the plant and $u(k)$, $x(k)$ and $z(k)$ are the input, true state and output variables, respectively. The variables $V(k)$ and $w(k)$ are the zero mean white Gaussian process and measurement noise with variance $Q(k)$ and $R(k)$ respectively [10]. In general, the discrete state-space model of a system is obtained by first finding its continuous state space model and then discretizing it. If the continuous model does not perfectly model the real system, then a noise term is added to the continuous model, prior to discretization, which results in the existence of $\Gamma(k)$ after discretization. If the continuous model perfectly models the real system then the process noise, $V(k)$, exists solely due to the discretization and $\Gamma(k)$ is the identity matrix.

In Figure F.1, the first (left most) column shows the evolution of the true system state from $x(k)$ to $x(k+1)$ and the true system output (or measurement) $z(k+1)$. However, we do not have access to the system state, but we do have access to the system input and output and using these we would like to estimate the system state. Referring to the third column in Figure F.1 we can see that there are 2 steps in updating the state estimate from time $t_k$ to time $t_{k+1}$. Assuming we know the state estimate at $t_k$, first we predict the state and measurement at $t_{k+1}$ using the following equations:

    $\hat{x}(k+1|k) = F(k)\hat{x}(k|k) + G(k)u(k)$
    $\hat{z}(k+1|k) = H(k+1)\hat{x}(k+1|k)$    (F.2)

where $\hat{x}(k|k)$ is the state estimate at time $t_k$, $\hat{x}(k+1|k)$ is the state prediction given the state estimate at $t_k$ and $\hat{z}(k+1|k)$ is the measurement prediction given the state prediction. Note that the above equations are very similar to equation F.1 except there are no noise terms (the noise is taken into account in the prediction covariance). Next we find the updated state estimate at $t_{k+1}$ using the state prediction as follows:

    $\hat{x}(k+1|k+1) = \hat{x}(k+1|k) + W(k+1)\{z(k+1) - \hat{z}(k+1|k)\}$
    $W(k+1) = P(k+1|k)H^T(k+1)\left[H(k+1)P(k+1|k)H^T(k+1) + R(k+1)\right]^{-1}$    (F.3)

where the KF gain, $W(k+1)$, corrects the predicted state by amplifying the incremental difference between the true measurement ($z(k+1)$) and the measurement prediction ($\hat{z}(k+1|k)$). From the fourth column in Figure F.1, we see that the state prediction covariance, $P(k+1|k)$, is found from and updated by the following equations:

    $P(k+1|k) = F(k)P(k|k)F^T(k) + \Gamma(k)Q(k)\Gamma^T(k)$
    $P(k+1|k+1) = P(k+1|k) - W(k+1)\left[H(k+1)P(k+1|k)H^T(k+1) + R(k+1)\right]W^T(k+1)$    (F.4)

where $P(k|k)$ is the covariance of the state at $t_k$. At this point we have the updated estimates for the state, output (measurement) and state covariance at time $t_{k+1}$ and we can re-iterate the same steps to get these values for time $t_{k+2}$. The KF continues this iteration with the goal of making the difference between the true measurement and the measurement prediction zero, which would imply a perfect estimate of the system states.

In Kalman filtering the initial conditions, $\hat{x}(0|0)$ and $P(0|0)$, are important in determining the speed with which the state estimate tracks the true estimate.
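For a scalar plant with $F = H = \Gamma = 1$ and no input, equations F.2-F.4 collapse to a few lines. The sketch below (Python with numpy; the noise variances and true state are illustrative, not from the thesis) tracks a constant state from noisy measurements:

```python
import numpy as np

def kalman_1d(zs, q=1e-4, r=1.0, x0=0.0, p0=10.0):
    """Scalar KF: predict (eq. F.2/F.4) then correct with gain W (eq. F.3)."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        # Prediction: x(k+1|k) = x(k|k), P(k+1|k) = P(k|k) + Q
        p = p + q
        # Update: W = P / (P + R), then correct state and covariance
        w = p / (p + r)
        x = x + w * (z - x)
        p = p - w * (p + r) * w
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
true_state = 5.0
zs = true_state + rng.standard_normal(300)   # measurements with R = 1
xs = kalman_1d(zs)
```

With a small process noise $q$ the steady-state gain is small and the estimate effectively averages the measurement history, so its error is far below the raw measurement noise.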
If the initial conditions are far from the true values it will take a long time for the difference between $x(k)$ and $\hat{x}(k|k)$ (or $z(k+1)$ and $\hat{z}(k+1|k)$) to go to zero [6].

F.2 Extended Kalman Filter

The Extended Kalman Filter (EKF), shown in Figure F.2 [6], has the same purpose and function as the KF except for systems that are not linear. Non-linear systems, in this case, are ones where the state transition and measurement models are a (differentiable) function of the state variables as follows:

    $x(k+1) = f(k, x(k), u(k)) + \Gamma(k)V(k)$
    $z(k+1) = h(k+1, x(k+1)) + w(k+1)$    (F.5)

Therefore, the state prediction and measurement prediction become:

    $\hat{x}(k+1|k) = f(k, \hat{x}(k|k), u(k))$
    $\hat{z}(k+1|k) = h(k+1, \hat{x}(k+1|k))$    (F.6)

Since $f$ and $h$ are non-linear functions of the state variables they cannot be used, as they are, to find the state prediction covariance and innovation covariance. The solution is to linearize the system by finding $F$ and $H$, which are the Jacobians of $f$ and $h$ respectively, and then to use $F$ and $H$ in finding $P(k+1|k)$ and $S(k+1)$. The Jacobians are calculated as follows:

    $F(k) = \left.\frac{\partial f(k)}{\partial x}\right|_{x = \hat{x}(k|k)}, \qquad H(k+1) = \left.\frac{\partial h(k+1)}{\partial x}\right|_{x = \hat{x}(k+1|k)}$    (F.7)

The rest of the EKF is the same as the KF.

F.3 Interacting Multiple Modes

If a system is complex, it is possible to represent the system with multiple state-space models (modes). In the Interacting Multiple Modes (IMM) [6] approach, $n$ models run in parallel and the inputs and outputs to and from each model are mixed to give the overall system's state variables and covariances.
Figure F.1: One cycle of the Kalman Filter. The diagram's columns trace the evolution of the true system state $x$, the controller, the evolution of the estimated system state $\hat{x}$, and the evolution of the state covariance $P$ from time $t_k$ to $t_{k+1}$.

Figure F.2: One cycle of the Extended Kalman Filter. It follows the same columns as Figure F.1, with $f$, $h$ and their Jacobians replacing $F$ and $H$.

Let us consider a system that can be represented by $r$ modes. The Mode Probability, $\mu_i(k-1)$, is the probability that the system is behaving according to mode $i$ ($i = 1...r$) at time $k-1$. The Mode Transition Probability, $p_{ij}$, is the probability that the system will transition and behave according to mode $i$ given that it is currently behaving according to mode $j$ ($j = 1...r$). Let us assume that we know $p_{ij}$, $\mu_i(k-1)$ and the states and state error covariances for each mode at time $k-1$. In order to find the states, state error covariances and Mode Probability at time $k$ we use the following IMM estimator algorithm (refer to Figure F.3):

1. Calculation of Mixing Probabilities.
The Mixing Probabilities for mode $i$, given that the system is in mode $j$ at time $k-1$, are found by:

    $\mu_{i|j}(k-1|k-1) = \frac{p_{ij}\,\mu_i(k-1)}{\sum_{l=1}^{r}p_{lj}\,\mu_l(k-1)}$    (F.8)

2. Mixing. The states and state error covariances of each mode, at time $k-1$, are mixed together using the Mixing Probabilities found in the previous step. The result is the input state and input state error covariance into each mode:

    $\hat{x}^{0j}(k-1|k-1) = \sum_{i=1}^{r}\hat{x}^{i}(k-1|k-1)\,\mu_{i|j}(k-1|k-1)$
    $P^{0j}(k-1|k-1) = \sum_{i=1}^{r}\mu_{i|j}(k-1|k-1)\left\{P^{i}(k-1|k-1) + \left[\hat{x}^{i}(k-1|k-1) - \hat{x}^{0j}(k-1|k-1)\right]\left[\hat{x}^{i}(k-1|k-1) - \hat{x}^{0j}(k-1|k-1)\right]^T\right\}$    (F.9)

3. Mode-Matched Filtering. The states and state error covariance for each mode are processed through each mode's filter and updated to get $\hat{x}^{1}(k|k)$, $P^{1}(k|k)$ and $\hat{x}^{2}(k|k)$, $P^{2}(k|k)$. The likelihood function, shown below, is used to find the mode probability update, which is then used to find the updated states and state error covariances:

    $\Lambda_j(k) = \frac{1}{\sqrt{2\Pi S^j(k)}}\exp\left\{-\left(z(k) - \hat{z}^j(k|k-1)\right)^2/2S^j(k)\right\}$    (F.10)

4. Mode Probability Update. The Mode Probability is updated for time $k$ according to the following equation:

    $\mu_j(k) = \frac{\Lambda_j(k)\sum_{i=1}^{r}p_{ij}\,\mu_i(k-1)}{\sum_{l=1}^{r}\left(\Lambda_l(k)\sum_{i=1}^{r}p_{il}\,\mu_i(k-1)\right)}$    (F.11)

5. Estimate and Covariance Combination. Finally, the state and state error covariance for the overall system are found by mixing the state and state error covariance from each mode as follows:

    $\hat{x}(k|k) = \sum_{j=1}^{r}\hat{x}^{j}(k|k)\,\mu_j(k)$
    $P(k|k) = \sum_{j=1}^{r}\mu_j(k)\left\{P^{j}(k|k) + \left[\hat{x}^{j}(k|k) - \hat{x}(k|k)\right]\left[\hat{x}^{j}(k|k) - \hat{x}(k|k)\right]^T\right\}$    (F.12)

F.4 Probabilistic Data Association Filter

Let us consider the problem of using a radar to track the trajectory of an aircraft moving along in space (Figure F.4). If we have an estimate of the aircraft's radial position at time $k-1$, $\hat{x}(k-1|k-1)$, we can use the KF to find the position at time $k$, $\hat{x}(k|k)$, according to:

    $\hat{x}(k|k) = \hat{x}(k|k-1) + W(k)\{z(k) - \hat{z}(k|k-1)\}$
    $\hat{x}(k|k-1) = F(k-1)\hat{x}(k-1|k-1) + G(k-1)u(k-1)$
    $\hat{z}(k|k-1) = H(k)\hat{x}(k|k-1)$    (F.13)

In order to find $\hat{x}(k|k)$ we need a measurement value, $z(k)$, from the radar. Assume the plane is moving in an environment that is immersed in white Gaussian noise so that at any time $k$ the radar receives several signals - some of which are noise and some of which are echoes off of the plane with superimposed noise. This problem, where at one instant in time there are numerous measurement points and we are looking for one measurement point, $z(k)$, can be solved using the Probabilistic Data Association (PDA) filter [5,7]. At time $k$ there will be $E$ measurement values (alternatively called candidate edge points) and we will represent each point as $r_i(k)$ ($i = 1...E$). Each candidate edge point is then assigned a Gaussian weight based on the magnitude of the signal (a higher magnitude receives a higher weight) and its distance away from the predicted position, $\hat{x}(k|k-1)$ (all distances are radial distances from the radar):

    $p_i(k) = \frac{Mag_i(k)}{\sqrt{2\Pi S(k)}}\exp\left\{-\frac{(r_i(k) - \hat{z}(k|k-1))^2}{2S(k)}\right\}$    (F.14)

where $p_i(k)$ is the Gaussian weight, $Mag_i(k)$ is the magnitude of the $i$th signal and $S(k)$ is the innovation covariance. Note that in this equation, the candidate edge points closer to the state prediction, $\hat{x}(k|k-1)$, receive a higher weight. Finally, $z(k)$ is found by performing a weighted average of the candidate boundary points.
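The weighting and fusion of equation F.14 can be sketched as follows (Python with numpy; the candidate values, magnitudes and the normalization of the weights before averaging are illustrative choices, not from the thesis):

```python
import numpy as np

def pda_fuse(candidates, magnitudes, z_pred, s):
    """Weight each candidate by its signal magnitude and a Gaussian of its
    distance from the predicted measurement (cf. eq. F.14), then fuse the
    candidates with a normalized weighted average."""
    candidates = np.asarray(candidates, dtype=float)
    magnitudes = np.asarray(magnitudes, dtype=float)
    weights = (magnitudes / np.sqrt(2 * np.pi * s)
               * np.exp(-(candidates - z_pred) ** 2 / (2 * s)))
    weights = weights / weights.sum()
    return float(np.dot(weights, candidates)), weights

# Prediction at 10.0 with unit innovation covariance; 15.0 is a noise return
z, w = pda_fuse([9.8, 10.2, 15.0], [1.0, 1.0, 1.0], z_pred=10.0, s=1.0)
```

The outlier at 15.0 receives a negligible weight, so the fused measurement stays near the two returns that agree with the prediction.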
If we have an estimate of the aircraft's radial position at time k−1, x̂(k−1|k−1), we can use the KF to find the position at time k, x̂(k|k), according to:

x̂(k|k) = x̂(k|k−1) + W(k) [z(k) − ẑ(k|k−1)]
x̂(k|k−1) = F(k−1) x̂(k−1|k−1) + G(k−1) u(k−1)
ẑ(k|k−1) = H(k) x̂(k|k−1)   (F.13)

In order to find x̂(k|k) we need a measurement value, z(k), from the radar. Assume the plane is moving in an environment immersed in white Gaussian noise, so that at any time k the radar receives several signals, some of which are noise and some of which are echoes off of the plane with superimposed noise. This problem, where at one instant in time there are numerous measurement points and we are looking for a single measurement point, z(k), can be solved using the Probabilistic Data Association (PDA) filter [5,7]. At time k there will be E measurement values (alternatively called candidate edge points), and we represent each point as r_i(k) (i = 1...E). Each candidate edge point is assigned a Gaussian weight based on the magnitude of the signal (higher magnitude receives higher weight) and its distance from the prediction of the aircraft's position, x̂(k|k−1). (All distances are radial distances from the radar.)

P_i(k) = Mag_i(k) exp{ −(r_i(k) − x̂(k|k−1))² / (2 S(k)) } / √(2π S(k))   (F.14)

where P_i(k) is the Gaussian weight, Mag_i(k) is the magnitude of the ith signal and S(k) is the innovation covariance. Note that in this equation, the candidate edge points closer to the state prediction, x̂(k|k−1), receive a higher weight. Finally, z(k) is found by performing a weighted average of the candidate edge points.
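The weighting and averaging just described can be sketched as a short function. This is a hedged illustration, not the thesis code: the function name `pda_measurement` and its argument layout are our own, and the weight follows the magnitude-scaled Gaussian form of Eq. (F.14).

```python
import numpy as np

def pda_measurement(candidates, magnitudes, pred, S):
    """Fuse E candidate returns into a single PDA measurement z(k).

    Each candidate r_i receives a Gaussian weight that grows with its
    signal magnitude and shrinks with its distance from the predicted
    position `pred`; the fused measurement is the weighted average of
    the candidates under those weights.
    """
    r = np.asarray(candidates, dtype=float)
    mag = np.asarray(magnitudes, dtype=float)
    # Magnitude-scaled Gaussian weight centered on the prediction
    w = mag / np.sqrt(2.0 * np.pi * S) * np.exp(-(r - pred) ** 2 / (2.0 * S))
    return float(np.sum(w * r) / np.sum(w))
```

With equal magnitudes and candidates placed symmetrically about the prediction, the fused value reduces to the prediction itself; a candidate far from the prediction is almost entirely discounted, which is what makes the PDA approach robust to spurious returns.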
Figure F.3: One cycle of the Interacting Multiple Mode estimator.

Figure F.4: Tracking aircraft motion using a radar and PDA theory.

Appendix G

Semi-automatic Segmentation Results

In the 3D segmentation study presented in Chapter 5, three observers manually segmented and initialized 10 cases in the first week and manually re-segmented and re-initialized 5 of the original 10 cases two weeks later. In this appendix we present the semi-automatic segmentation results, superimposed on top of the gold standard, for every case in this study. Due to space limitations we have shown only nine slices from each case. If there were more slices in the volume we omitted slices 7 and/or 8. It should be noted that the first subfigure is always the base and the last subfigure is always the apex.

Figure G.2: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 2, initial 10 cases.

Figure G.4: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 4, initial 10 cases.
Figure G.6: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 6, initial 10 cases.

Figure G.8: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 8, initial 10 cases.

Figure G.10: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 10, initial 10 cases.

Figure G.12: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 2, initial 10 cases.

Figure G.14: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 4, initial 10 cases.

Figure G.16: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 6, initial 10 cases.

Figure G.18: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 8, initial 10 cases.

Figure G.20: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 10, initial 10 cases.

Figure G.22: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 2, initial 10 cases.

Figure G.24: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 4, initial 10 cases.

Figure G.26: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 6, initial 10 cases.

Figure G.28: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 8, initial 10 cases.

Figure G.30: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 10, initial 10 cases.

Figure G.32: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 3, repeat of 5 cases.

Figure G.34: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 1, Case 8, repeat of 5 cases.
Figure G.36: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 2, repeat of 5 cases.

Figure G.38: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 6, repeat of 5 cases.

Figure G.40: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 2, Case 10, repeat of 5 cases.

Figure G.42: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 3, repeat of 5 cases.

Figure G.44: Comparing semi-automatic (solid line) to manual segmentation (dotted line) for Observer 3, Case 8, repeat of 5 cases.
