New Methods for Calibration and Tool Tracking in Ultrasound-Guided Interventions

by

Mohammad Najafi

B.Sc., University of Tehran, 2007
M.Sc., K.N. Toosi University of Technology, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

Doctor of Philosophy

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Electrical and Computer Engineering)

The University of British Columbia
(Vancouver)

December 2014

© Mohammad Najafi, 2014

Abstract

Ultrasound is a safe, portable, inexpensive and real-time modality that can produce 2D and 3D images. It is a valuable intra-operative imaging modality to guide surgeons, aiming to achieve higher accuracy of the intervention and improve patient outcomes. In all the clinical applications that use tracked ultrasound, one main challenge is to precisely locate the ultrasound image pixels with respect to a tracking sensor on the transducer. This process is called spatial calibration, and the objective is to determine the spatial transformation between the ultrasound image coordinates and a coordinate system defined by the tracking sensor on the transducer housing. Another issue in ultrasound-guided interventions is that tracking surgical tools (for example, an epidural needle) usually requires expensive, large optical trackers or low-accuracy magnetic trackers, and there is a need for a low-cost, easy-to-use and accurate solution. In this thesis, for the first problem I have proposed two novel complementary methods for ultrasound calibration that provide ease of use and high accuracy. These methods are based on my differential technique, which enables high measurement accuracy. I developed a closed-form formulation that makes it possible to achieve high accuracy using a low number of images. For the second problem, I developed a method to track surgical tools (epidural needles in particular) using a single camera mounted on the ultrasound transducer to facilitate ultrasound-guided interventions.
The first proposed ultrasound calibration method achieved an accuracy of 0.09 ± 0.39 mm. The second method, despite using a much simpler phantom, achieved accuracy similar to that of the N-wire method. The proposed needle tracking method showed high accuracy of 0.94 ± 0.46 mm.

Preface

This thesis is primarily based on three manuscripts, resulting from collaboration between multiple researchers. All publications have been modified to make the thesis coherent.

Two papers from Chapter 2 have been published in:

• Mohammad Najafi, Narges Afsham, Purang Abolmaesumi, and Robert N. Rohling. Single wall closed-form differential ultrasound calibration. Proceedings of SPIE Medical Imaging Conference, Bellingham, WA, SPIE Press, 2012.

• Mohammad Najafi, Narges Afsham, Purang Abolmaesumi, and Robert N. Rohling. A closed-form differential formulation for ultrasound spatial calibration: single wall phantom. Ultrasound in Medicine & Biology [accepted for publication].

The contribution of the author was in developing, implementing, and evaluating the method. Narges Afsham helped with deriving the closed-form formulation for the generalized single-wall method, conducting the experiments and writing the manuscript. Profs. Rohling and Abolmaesumi helped with their valuable suggestions in improving the methodology. All co-authors contributed to the editing of the manuscript.

Two papers from Chapter 3 have been published in:

• Mohammad Najafi, Narges Afsham, Purang Abolmaesumi, and Robert N. Rohling. A closed-form differential formulation for ultrasound spatial calibration. In Proceedings of the Conference Information Processing in Computer-Assisted Interventions, Springer Berlin Heidelberg, 2012, Pages 44-53 [Awarded Best Paper Award].

• Mohammad Najafi, Narges Afsham, Purang Abolmaesumi, and Robert N. Rohling. A closed-form differential formulation for ultrasound spatial calibration: Multi-wedge phantom. Ultrasound in Medicine & Biology, Volume 40, Issue 9, September 2014, Pages 2231-2243.

The contribution of the author was in developing, implementing, and evaluating the method. Narges Afsham helped with conducting the experiments and writing the manuscript. Profs. Rohling and Abolmaesumi helped with their valuable suggestions in improving the methodology. All co-authors contributed to the editing of the manuscript.

Two papers of Chapter 4 were published in:

• Mohammad Najafi and Robert N. Rohling. Single camera closed-form real-time needle trajectory tracking for ultrasound. Proceedings of SPIE Medical Imaging Conference, Bellingham, WA, SPIE Press, 2011.

• Mohammad Najafi, Purang Abolmaesumi, and Robert N. Rohling. Single camera closed-form real-time needle tracking for ultrasound-guided needle insertion [In review].

The contribution of the author was in developing, implementing, and evaluating the method. Profs. Rohling and Abolmaesumi helped with their valuable suggestions in improving the methodology. All co-authors contributed to the editing of the manuscript.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgments
1 Introduction
  1.1 Background
    1.1.1 Ultrasound-Guided Interventions
    1.1.2 Ultrasound Calibration: Definition, Methods and Challenges
    1.1.3 Surgical Tool Pose Estimation and Tracking
  1.2 Objective
  1.3 Contributions
  1.4 Thesis Outline
2 A Closed-Form Differential Formulation for Ultrasound Spatial Calibration Using a Single Wall Phantom
  2.1 Introduction and Background
  2.2 Differential Measurement Technique
  2.3 Single Wall Phantom
  2.4 Experimental Setup
  2.5 Linear Transducer Calibration
    2.5.1 Predetermined Wall Method
    2.5.2 Generalized Wall Method
  2.6 Curvilinear Transducer Calibration
    2.6.1 Automatic Segmentation and Measurement
    2.6.2 Plane Localization with Stylus
    2.6.3 Calibration Accuracy Evaluation
  2.7 Experiments and Results
    2.7.1 Stylus Accuracy
    2.7.2 Slope Measurement Evaluation
    2.7.3 Calibration Results
    2.7.4 Calibration Accuracy
    2.7.5 Calibration Time
  2.8 Summary
3 A Closed-Form Differential Formulation for Ultrasound Spatial Calibration Using a Multi Wedge Phantom
  3.1 Introduction and Background
  3.2 Multi Wedge Phantom
  3.3 Experimental Setup
  3.4 Mathematical Formulation
    3.4.1 Rotation Parameters
    3.4.2 Translation Parameters
  3.5 Phantom Design
  3.6 Automatic Feature Extraction and Measurement
  3.7 Experiments and Results
    3.7.1 Sensitivity Analysis to Measurement Error
    3.7.2 Calibration Experimental Results
    3.7.3 Calibration Precision
    3.7.4 Calibration Accuracy
  3.8 Summary
4 Single Camera Closed-Form Real-time Needle Tracking for Ultrasound-Guided Needle Insertion
  4.1 Introduction and Background
  4.2 Tracking Technique
    4.2.1 Single Camera Pose Estimation
  4.3 Experimental Setup
  4.4 Mathematical Framework
  4.5 Automatic Feature Extraction
    4.5.1 Collecting Marking Edge Points
    4.5.2 Clustering Edge Points
    4.5.3 Finding the Center Line and Fine Tuning
  4.6 System Calibration
    4.6.1 Ultrasound Calibration
    4.6.2 Camera Calibration
  4.7 Experiments and Results
    4.7.1 Camera Calibration
    4.7.2 Repeatability of the Feature Extraction
    4.7.3 Precision of the Needle Pose Estimation
    4.7.4 Accuracy of the Needle Pose Estimation
    4.7.5 Overall System Accuracy
    4.7.6 Real-time Implementation
  4.8 Summary
5 Conclusion and Future Work
  5.1 Contributions
  5.2 Future Work
Bibliography

List of Tables

Table 2.1 Calibration PRA test results for the linear transducer.
Table 2.2 Calibration PRA test results for the curvilinear transducer.
Table 3.1 Specific details of the operations and the corresponding MATLAB functions in the image processing procedure.
Table 3.2 Frequency distribution of the number of detected line segments in 100 ultrasound images.
Table 3.3 Calibration PRA test results for three different calibration methods using a stylus (65 points).
Table 4.1 Parameters used in the automatic feature extraction algorithm.
Table 4.2 Camera intrinsic parameters obtained by camera calibration.
Table 4.3 The repeatability of the feature extraction and pose estimation algorithms are measured.

List of Figures

Figure 1.1 Typical positioning of the operator, ultrasound transducer and the patient during real-time ultrasound-guided spinal anaesthesia [22].
Figure 1.2 Single point calibration phantom ([36]).
Figure 1.3 Single plane calibration phantom ([36]).
Figure 1.4 N-wire calibration phantom ([36]).
Figure 1.5 Wire appearance determined by the ultrasound axial and lateral resolutions. The wire does not appear as a small point. (Reproduced from Fig. 5 in [19]).
Figure 1.6 Ultrasound-guided needle insertion. From: American Academy of Procedural Medicine (http://aaopm.com).
Figure 1.7 Needle insertion with ultrasound guidance and real-time tracking with a transducer-mounted tracker.
Figure 2.1 Differential measurement on two ultrasound scan lines. Line slope is measured with cross-correlation of RF echoes. m = tan(θ) = ∆y/∆x.
Figure 2.2 Calibration setup: depicting coordinate systems of the reference (R), transducer (T), ultrasound image (I), and the normal (n) of a flat plane.
Figure 2.3 Intersection of the ultrasound image and the wall plane. U is a unit vector in the direction passing through the center of array elements (lateral) and V is a unit vector in the direction of the ultrasound beam (axial). P0 is the origin of the imaging plane. The plane equation is defined with the normal vector, n, and a point, Q.
Figure 2.4 Differential measurement on two ultrasound scan lines. Line slope is measured with cross-correlation of RF echoes. m = tan(θ) = ∆y/∆x.
Figure 2.5 Elements of a curvilinear transducer are placed on an arc with radius r0. The axial direction vector, Wi, can be defined in terms of a pair of perpendicular vectors, U and V.
Figure 2.6 Each scan line was compared against a scan line at a specified lateral distance (15 scan lines) and the axial shift was calculated using normalized cross-correlation.
Figure 2.7 (a) Stylus being used for plane localization. (b) Phantom with 3×4 semi-spherical holes for stylus accuracy evaluation.
Figure 2.8 (a) PRA evaluation phantom comprising an array of five truncated cones. (b) Ultrasound image of the PRA phantom by the linear transducer.
Figure 2.9 The box plot of the average CC values for all ultrasound images (RF and B-mode) for (a) the linear transducer and (b) the curvilinear transducer.
Figure 2.10 Calibration results of the predetermined single wall method using 20 to 95 RF images. Mean (red) and standard deviation (blue) of calibration parameters are calculated: (a) translation parameters (tx, ty, tz) [mm]; (b) rotation parameters (α, β, γ) [deg].
Figure 2.11 Calibration results of the predetermined single wall method using 25 to 80 RF images of the curvilinear transducer. Mean (red) and standard deviation (blue) of calibration parameters are calculated: (a) translation parameters (tx, ty, tz) [mm]; (b) rotation parameters (α, β, γ) [deg].
Figure 2.12 Mean (red) and standard deviation (blue) of PRA using different numbers of RF images for (a) the linear transducer and (b) the curvilinear transducer.
Figure 3.1 The Multi Wedge phantom model comprising five different planes. The dashed line shows the ultrasound image intersection line segments. The transformation from image to phantom coordinate system is calculated with a closed-form solution.
Figure 3.2 Manufactured Multi Wedge phantom with optical markers and a double N-wire phantom incorporated in it.
Figure 3.3 The coordinate systems defined on markers' rigid bodies on the transducer, T, and the phantom are measured in the tracker's reference coordinate system, R. The coordinate system of the phantom's wedges, P, relative to the phantom markers is known from the CAD model, therefore the transformation from P to R can be calculated from tracker measurements. The transformation from T to R is also measured by the tracker.
Figure 3.4 Intersection of the ultrasound image and the phantom's ith plane. U is a unit vector in the direction passing through the center of array elements (lateral) and V is a unit vector in the direction of the ultrasound beam (axial). P0 is the origin of the imaging plane. A plane equation on each wedge (i) is defined with a normal vector, ni, and a point, Qi.
Figure 3.5 Multi wedge phantom with two pairs of parallel wedges.
Figure 3.6 (a) Slope measurement error (Error(∆y)/∆x) versus the inclination angles (ϕ). (b) Ratio of outcome (absolute sum of rotation angles) error to measurement error (°/µm) for different inclination angles (ϕ).
Figure 3.7 Automatic feature extraction from the ultrasound image to find the line segments.
Figure 3.8 Sensitivity analysis of RF Multi Wedge to axial measurement error: (a) norm of rotation error vector (mean and standard deviation); (b) norm of translation error vector (mean and standard deviation).
Figure 3.9 Calibration results of the RF Multi Wedge method using 2 to 87 images of the phantom. Mean (red) and standard deviation (blue) of calibration parameters are calculated: (a) translation parameters (tx, ty, tz) [mm]; (b) rotation parameters (α, β, γ) [deg].
Figure 3.10 CR measure of calibration parameters when using different numbers of images, ns, for the (a) top left corner point and (b) bottom right corner point.
Figure 4.1 Out-of-plane needle trajectory planning. The appropriate needle trajectory for the given puncture site on the skin and the selected target on the ultrasound image is shown with the dashed line.
Figure 4.2 The camera is mounted on the transducer (side view) in a way that the needle can be easily seen by the camera.
Figure 4.3 Epidural needle with 1 cm markings is projected into the camera plane using the standard camera model.
Figure 4.4 Three known collinear points on the object (P1, P2, and P3), the corresponding projections on the camera plane (Q1, Q2, and Q3) and the focal point (F) all reside on the same plane.
Figure 4.5 (a) Edge points should be extracted from the image of the needle. Center line, marking edges and edge points are shown. (b) An example of "off-the-needle" lines (in blue) and "on-the-needle" lines (in red).
Figure 4.6 The points associated with the peak values on each line are stored (red dots). These points are clustered to find marking edges.
Figure 4.7 (a) Pixel values along an "on-the-needle" line. (b) Derivative of the values using the smooth noise-robust differentiator.
Figure 4.8 Flow chart of the algorithm for collecting marking edge points.
Figure 4.9 The dendrogram of the clustered points and the cut-off threshold of c = 15 (red line).
Figure 4.10 Final feature extraction results showing the center line (green line) and the marking edge centers (red dots).
Figure 4.11 The coordinate systems of the transducer, the camera and the ultrasound image.
Figure 4.12 Measuring the precision of the method by rotating the needle around a pivot point (needle tip). Estimated needle lines and the estimated pivot point are shown.
Figure 4.13 Accuracy testing device comprised of a base with a grid of semi-spherical holes on its surface and a marker plane. The marker plane is a flat piece with a checker-board pattern attached to its top surface.
Figure 4.14 PRA measurement experiment setup showing the needle immersed partially in water, seen both by the camera and in the ultrasound image.
Figure 4.15 Segmented target points have been compared to the estimated intersection points.

Glossary

CT      Computed Tomography
RF      Radio Frequency
DOF     Degrees of Freedom
CAD     Computer Aided Design
RMS     Root Mean Squared
CR      Calibration Reproducibility
PRA     Point Reconstruction Accuracy
CC      Cross-correlation Coefficient
PLUS    Public software Library for UltraSound Imaging Research
OPENCV  Open Source Computer Vision

Acknowledgments

I would like to express my deep gratitude to my supervisors Dr. Robert Rohling and Dr. Purang Abolmaesumi for their support and guidance throughout my PhD, and for making it an enjoyable and productive journey for me. I appreciate their giving me the freedom, yet the guidance, to pursue this research.

I am thankful to Vickie Lessoway, our sonographer, who was very helpful and understanding. I am also thankful to all the former and current members of the RCL lab at UBC: Abtin, Hedyeh, Denis, Ali, Reza, Hani, Sara, Guy, Philip, Mohammad Honarvar, Saman, Samira, Jeff, Ramin, Omid, Amin, Siavash, Hussam, Caitlin, Angelica, Julie, Ana and Hamid.

I thankfully acknowledge the fellowships and grants that supported my research during the course of my studies: the University of British Columbia, the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Canadian Institutes of Health Research (CIHR).

In the end, I offer my gratitude to my parents for their support that helped me accomplish many milestones in my life, and also to my sisters, Mahdieh and Hoda, and my brother, Ali. Besides, I would like to thank my ex-wife, Narges, for her contributions and her support. I also thank Morad, Sakineh, Nastaran and Neda for their support.
I am also very grateful to all my dear friends who have supported me over the last few years.

Chapter 1

Introduction

1.1 Background

1.1.1 Ultrasound-Guided Interventions

Ultrasound is a safe, portable, inexpensive and real-time modality that can produce 2D and 3D images, and therefore it is a valuable intra-operative imaging modality to guide surgeons, aiming to achieve higher accuracy of the intervention and improve patient outcomes.

Specifically, there has been a surge of interest in integrating ultrasound imaging into a number of clinical procedures, such as laparoscopic procedures [62], minimally invasive cardiac surgeries and therapies [37], spinal fusion surgeries [78], orthopedic surgeries [64, 66], guidance for breast biopsy [23], tumor resection [42], brain neurosurgery [75] and radiation therapy [20].

In many such clinical procedures, there is a benefit to tracking the spatial location of the transducer while sweeping over the anatomy of interest. This "freehand" 3D ultrasound imaging approach can be used for visualization and for quantitative measurements such as 3D locations, sizes and volumes of anatomical structures. Also, by tracking an ultrasound transducer, multiple ultrasound datasets can be mapped into the same coordinate system to construct larger volumes with an extended field of view. Ultrasound with tracking information also facilitates registration to complementary image modalities such as magnetic resonance imaging [54].

Figure 1.1: Typical positioning of the operator, ultrasound transducer and the patient during real-time ultrasound-guided spinal anaesthesia [22].

Figure 1.1 shows an example of real-time ultrasound-guided spinal anaesthesia. In some applications, laparoscopic, biopsy and surgical tools are also tracked, and their position should be converted to the same common coordinate system as the ultrasound images.
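To make the freehand mapping concrete, the sketch below (a hypothetical illustration; the function name, scale factors and example matrices are assumptions, not notation from this thesis) chains the two rigid transformations that convert an image pixel into the tracker's reference coordinate system: the calibration matrix from image to transducer sensor, and the tracked pose of the sensor.

```python
import numpy as np

def pixel_to_reference(u, v, sx, sy, T_transducer_image, T_ref_transducer):
    """Map an ultrasound pixel (u, v) to the tracker's reference frame.

    sx, sy             : pixel size in mm (lateral/axial scale factors)
    T_transducer_image : 4x4 calibration matrix (image -> transducer sensor);
                         this is the unknown that spatial calibration estimates
    T_ref_transducer   : 4x4 pose of the transducer sensor reported by the
                         tracker for the current frame
    """
    # Pixel expressed in mm, in the image plane (z = 0), homogeneous form.
    p_image = np.array([sx * u, sy * v, 0.0, 1.0])
    return (T_ref_transducer @ T_transducer_image @ p_image)[:3]

# Toy example: identity calibration, transducer translated 10 mm along x.
T_cal = np.eye(4)
T_pose = np.eye(4)
T_pose[0, 3] = 10.0
p = pixel_to_reference(100, 50, 0.1, 0.1, T_cal, T_pose)  # -> [20., 5., 0.]
```

Every error in T_transducer_image propagates into every reconstructed voxel, which is why the calibration accuracy discussed below matters for the whole freehand pipeline.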
Augmented reality is yet another application that can benefit from tracked ultrasound transducers.

The accuracy of freehand tracked ultrasound is an important factor in the overall accuracy of the aforementioned procedures [65]. In many cases, high accuracy results in numerous clinical benefits. For example, intraoperative ultrasound imaging of the vertebrae combined with automated registration to preoperative Computed Tomography (CT) could improve spine surgery by improving the accuracy of tool placement, reducing operative time, and decreasing invasiveness. The resulting benefits include lower surgical risk, increased possibility of performing more complex instrumentation, decreased postoperative complications, more confidence in the surgical procedures, and better postoperative function [78].

To mention a few examples, neuronavigation based on intraoperative 3D ultrasound makes precise surgical planning and intervention possible, which results in the reduction of residual tumor volumes, reduced operation times and better patient outcomes [46].

Figure 1.2: Single point calibration phantom ([36]).

In ultrasound-guided liver tumor resection, the surgeon relies on ultrasound volumes for accurate orientation with respect to the tumor. High accuracy is needed to provide tumor-free resection margins and to preserve vessels close to the tumor [31]. For a breast tumor biopsy, the needle tip should be accurately located inside several positions of the tumor [23]. In real-time visualization of high-intensity focused ultrasound for prostate cancer treatment with 3D ultrasound, precise knowledge of the size and location of the tumor and of the treated areas can improve the outcome [69].

Note that even an error of about a degree can cause points to be located several millimeters from their true positions [28].
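As a back-of-envelope illustration of this sensitivity (the lever-arm length below is an assumed, representative value, not a figure from [28]): a rotational calibration error $\Delta\theta$ acting over a lever arm of length $L$ displaces a point laterally by approximately

\[
e \approx L \sin(\Delta\theta), \qquad
e \approx 300\,\mathrm{mm} \times \sin(1^\circ) \approx 5.2\,\mathrm{mm},
\]

which is consistent with a one-degree angular error producing several millimeters of positional error at typical tracking distances.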
One example is ultrasound-guided out-of-plane needle insertion [59], where a 0.5° error in the needle trajectory's angle could result in more than 5 mm of error in the intersection point. Even higher accuracy is required in an in-plane needle insertion procedure, where the in-plane resolution is 1–2 mm at the focus for high-frequency transducers. In an epidural needle insertion procedure, accurate navigation of the needle tip is critical for safe and effective use of regional anesthesia [24]. Therefore, considering that the thickness of the target epidural space ranges from 2 to 6 millimeters [58], highly accurate ultrasound calibration is essential for such applications.

Figure 1.3: Single plane calibration phantom ([36]).

1.1.2 Ultrasound Calibration: Definition, Methods and Challenges

In all the clinical applications that use freehand tracked ultrasound to reconstruct 3D ultrasound volumes, such as the examples cited above, the challenge is to precisely locate the ultrasound image pixels with respect to a tracking sensor on the transducer. In a process called spatial calibration, the spatial transformation between the ultrasound image coordinates and the transducer's coordinate system is determined.

The ultrasound calibration problem has been investigated over the last two decades. Most calibration techniques are based on imaging an artificial object with known geometrical parameters. Prior knowledge of the phantom, along with ultrasound images of the phantom, is used to calculate the calibration parameters.

The phantom can be as simple as a fixed point target, which can be constructed with a bead [4, 25], crossed wires [54, 74, 79] or the center of a sphere [16]. The phantom is scanned by the tracked transducer from various poses of the transducer (Figure 1.2). The 3D location of the point appearing in the ultrasound image can be calculated using the correct calibration matrix.
Since the phantom point is fixed, the 3D location of the point for each pose of the transducer should be the same.

Figure 1.4: N-wire calibration phantom ([36]).

However, for an incorrect calibration matrix, these points will not coincide, and therefore their spread can be used as a cost function. This cost function is minimized iteratively to find the correct calibration matrix. The accuracy of the calibration depends on how well the point can be segmented in the ultrasound image. Moreover, aligning the ultrasound images to capture the point phantom for each position of the transducer is a challenging and time-consuming task.

Alternatively, the phantom can be a fixed plane, as in the single wall method [60, 67, 79] or in its variant called the Cambridge phantom [67]. The plane appears as a straight line in the ultrasound image (Figure 1.3). Similar to point-based phantoms, the tracked transducer is placed in different positions, and at each pose the projection of the lines into 3D space using the correct calibration matrix should lie in the same plane. This method is attractive because it enables an automatic segmentation algorithm to be used and does not require construction of a special phantom, making the calibration process rapid to perform. Also, if a line is partially missing, it can still be easily identified, which is not the case for points. One challenge is that when imaging from an angle far from the normal, the line does not appear clearly.

Another class of calibration phantoms consists of thin wires, usually with N- or Z-shaped patterns [11, 19, 63, 65]. The position of the end-points of the N-wire phantom is known by construction, and the phantom is tracked with the same tracker (Figure 1.4). The N-wire appears as three collinear points in the ultrasound image. Since the location of the wires is known, it is possible to calculate the 3D location of the middle point using a closed-form solution based on the relative distances of the points in the image.
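This closed-form step can be sketched as follows (a simplified, hypothetical illustration with assumed function names and coordinates, not the exact formulation of [11, 19]): because the two outer wires of the N are parallel, the ratio of in-image distances between the three collinear points equals the corresponding ratio along the diagonal wire in 3D, which locates the middle intersection point in phantom coordinates by similar triangles.

```python
import numpy as np

def nwire_midpoint(p1, p2, p3, c1, c2):
    """Locate the 3D point where the ultrasound plane crosses the
    diagonal wire of an N-wire phantom.

    p1, p2, p3 : 2D image positions of the three collinear intersection
                 points, ordered along the N; p2 lies on the diagonal wire.
    c1, c2     : known 3D end-points of the diagonal wire in phantom
                 coordinates (known by construction of the phantom).
    """
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    # Similar triangles: the in-image distance ratio equals the ratio
    # along the diagonal between its two end-points.
    alpha = np.linalg.norm(p2 - p1) / np.linalg.norm(p3 - p1)
    return c1 + alpha * (c2 - c1)

# Example: diagonal running from (0, 0, 0) to (40, 0, 20) mm, with the
# middle point imaged exactly halfway between the two outer points.
mid = nwire_midpoint([10, 30], [25, 30], [40, 30],
                     [0, 0, 0], [40, 0, 20])  # -> [20., 0., 10.]
```

Mapping this recovered point back through the tracked poses of the phantom and the transducer then yields one constraint on the calibration matrix per N-wire per image.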
Usually two or three rows of N-wires are implemented in the phantom in parallel. The closed-form solution and the real-time implementation [19] of this method are its advantages. However, the construction of the phantom can be cumbersome. Moreover, wires appear as clouds of points in ultrasound images, and it is a challenge to segment their centers accurately.

There are other approaches, such as registration of 2D ultrasound images with a 3D model of the phantom [7, 10, 44], precise manual alignment of the ultrasound image with a thin planar phantom [28, 47], and even a few methods that do not require a phantom and instead use a calibrated stylus [35, 39, 57] or changes in speckle from transducer movements [13]. Detailed reviews, comparisons of different calibration methods, and a summary of various validation techniques can be found in survey papers [36, 55].

Calibration methods can also be categorized based on the type of solver and the mathematical framework. There are iterative optimization techniques (e.g. [25, 54, 67]) and closed-form solutions (e.g. [11, 19, 60]). Iterative methods are subject to sub-optimal local minima and are sensitive to initial estimates, and therefore they are in general less robust than closed-form solutions [26]. They also typically require more input data to achieve the same level of accuracy.

Despite all these efforts, it is not easy to find a method for ultrasound calibration that is fast and easy to perform while at the same time being highly accurate. This may adversely affect the result of ultrasound-guided interventions and possibly limit the scope of new applications. Accurate, absolute localization of phantom features in ultrasound images is the biggest challenge in the calibration process [44] and is the limiting factor in increasing the accuracy of calibration.

One reason is the blurry appearance of features due to the finite resolution of ultrasound images and the presence of noise.
There are also image formation errors due to speed of sound variations, refraction and a finite beam width, all of which contribute to distortions in the shape of the depicted features.

Figure 1.5: Wire appearance determined by the ultrasound axial and lateral resolutions. The wire does not appear as a small point. (Reproduced from Fig. 5 in [19]).

The finite resolution of ultrasound and imaging artifacts, for example, cause small point-shaped objects to appear as short lines in ultrasound images [19]. Therefore, the cross section of a wire in an ultrasound image does not appear as a circle but rather as an asymmetrical cloud with a width of several pixels (Figure 1.5). This makes it very challenging to accurately localize the actual intersection point. The ultrasound beam thickness can be minimized (particularly in the elevational direction) by using multiple focal zones. However, increasing the number of focal zones generally reduces the frame rate.

The finite beam width also affects the accuracy of feature localization of planar objects [67]. A plane appears in the ultrasound image as a line with a thickness related to the beam width and plane orientation. Unlike point features, the shape of the line remains unchanged along the line as long as the plane's inclination relative to the beam direction is not high.

Improper or poor ultrasound calibration has been reported as one of the major contributors to the final accuracy of ultrasound neuronavigation [46]. It is possible to reduce this error by using specialized calibration techniques. Calibration accuracy is typically 1 mm with the latest techniques.

Figure 1.6: Ultrasound-guided needle insertion. From: American Academy of Procedural Medicine (http://aaopm.com).

1.1.3 Surgical Tool Pose Estimation and Tracking

Many modern clinical practices require percutaneous procedures, where thin devices such as needles, catheters and tissue ablation probes are inserted deep into the tissue.
Applications of percutaneous needle insertion include biopsies, regional anesthesia, blood sampling, neurosurgery and brachytherapy.

Ultrasound can be used as a guidance tool during needle insertion operations. Ultrasound is easy to use, safe, and can be used in virtually any clinical setting. One study showed that ultrasonography proved to be an effective tool, with 100% accuracy in locating the sacral hiatus and in guiding the caudal epidural needle into the caudal epidural space [18]. Figure 1.6 shows an example of ultrasound-guided needle insertion.

It is a challenging task to visualize the needle trajectory inside the tissue just by looking at the ultrasound image and the needle. By tracking the needles and other tools and providing real-time information such as a live needle-path display, the interventionalist can define targets directly. For this reason, the pose of the needle or tool should be tracked in real time. Also, the transformation from the tracker to the ultrasound image should be determined.

In order to track the transducer and the tools, one of several types of trackers, such as mechanical, electromagnetic or optical, should be used.

Figure 1.7: Needle insertion with ultrasound guidance and real-time tracking with a transducer-mounted tracker.

Tracking is performed by localizing specific markers or sensors on the tool or the transducer in the 3D reference coordinate system of the tracker. It should be noted that there are drawbacks to each of the tracking techniques, such as limited freedom of movement with mechanical trackers, interference with metallic objects with electromagnetic trackers and line-of-sight issues with optical trackers.

With electromagnetic trackers, an average accuracy of 1.0 mm is achievable in good environments. However, an awareness of the lack of robustness against sources of distortion is necessary, because the accuracy can degrade significantly in the presence of nearby metal objects or electrical circuits.
While several clinical electromagnetic tracked systems have emerged, practical limitations remain [27].

In an ultrasound-guided procedure, the tracking device can either be placed in a fixed position near the object being tracked or it can be mounted directly on the transducer. We will call the former an external tracker and the latter a transducer-mounted tracker (Figure 1.7).

For each type of tracker, there is an error in tracking which depends on the tracking method, the quality of the sensors and markers, environmental noise and the operating distance. This error can be minimized by using a high-quality tracking system, but not eliminated. Accuracy typically ranges from 0.1 mm to 3 mm for systems which range in price from CAN$10,000 to CAN$100,000.

Lu et al. [50] proposed a sensor technology that can be embedded on needles used for ultrasound-guided interventions. The sensor receives acoustic energy and converts it to an electrical signal, and must be connected to the workstation through a cable, which is one of the drawbacks of this method in spite of its high accuracy. Another disadvantage is that pre-puncture planning of the needle trajectory is not possible, since the needle can only be tracked once it is inserted in the tissue and is within the ultrasound image plane. This is a major drawback in applications where it is not possible to bend or re-orient the needle after insertion.

Another source of error in such an ultrasound-guided system is the calibration error. Calibration is the rigid-body transformation from the ultrasound image to the markers on the ultrasound transducer. Incorrect calibration results in estimation errors in the entire navigation system.

1.2 Objective

The objective of this thesis is to facilitate ultrasound-guided interventions by providing real-time, accurate, practical and easy-to-use solutions for ultrasound calibration and tool tracking.
High accuracy and practical usability not only enhance current interventional procedures, they also enable new applications that otherwise would not be possible. Also, the effectiveness of a treatment in percutaneous procedures and the success or precision of a diagnosis are highly dependent on the accuracy of percutaneous insertion. Therefore, our goal is to improve the performance of needle insertion in ultrasound-guided procedures.

Regarding ultrasound calibration, our aim is to make calibration more reliable for day-to-day clinical use by reducing the calibration error while keeping the calibration procedure as fast and simple as possible and suitable for intra-operative use. By bringing down the mean calibration error, we ensure that the worst-case calibration error on any given day is still acceptable.

For this reason we propose two complementary calibration methods. In the first one, we aim for simplicity and reasonable accuracy. In this method, a single plane is used as the calibration phantom. Even the bottom of a simple flat water tank can serve this purpose. We have improved the traditional single-wall method in various ways to achieve more accurate and robust results.

In the second one, we focus on high accuracy. For this purpose, a closed-form differential formulation is developed based on a Multi Wedge phantom comprising five planes [61]. This method shows accurate results, is easy to use, and does not require highly skilled users to perform the calibration. However, a 3D printer is required to manufacture the Multi Wedge phantom precisely.

Regarding needle/tool tracking, it is desirable to have a tracking system that is accurate, low-cost and able to work in real time. Moreover, the system should be easy to use and suitable for practical clinical purposes. The goal is to design a tracking system that can determine the pose of the needle/tool with respect to the ultrasound image.
Such a system should tackle the issues mentioned above so that it can be used in everyday clinical applications. A new optical tracking system is proposed based on a standard optical camera imaging a small field of view.

1.3 Contributions

This thesis is an attempt to develop techniques that are essential for ultrasound-guided interventions. This includes accurate, practical and easy-to-use solutions for ultrasound calibration and also for tracking surgical tools, and needles in particular. In the course of achieving this objective, the following contributions were made:

• Developing a single-wall closed-form ultrasound calibration method based on a differential measurement technique.

• Improving the single-wall calibration method by explicitly measuring the position of the plane using a stylus.

• Significantly improving ultrasound calibration accuracy compared to state-of-the-art methods while keeping the calibration procedure as fast and simple as possible and suitable for intra-operative use.

• Developing a novel differential framework for accurate feature extraction and measurement in ultrasound images for the purpose of ultrasound calibration.

• Designing and developing an optimized Multi Wedge phantom for accurate ultrasound calibration based on the above differential measurement technique.

• Deriving a closed-form solution and mathematical framework for the Multi Wedge calibration phantom using only a single ultrasound image.

• Developing a real-time single-camera system for tracking tools (e.g.
needles) for ultrasound-guided interventions with significantly higher accuracy compared to similar state-of-the-art methods.

• Deriving a closed-form solution for estimating the pose of a linear object with a single camera.

• Developing a real-time, fully automatic image processing and feature extraction algorithm for needle tracking.

1.4 Thesis Outline

This thesis is subdivided into five chapters as outlined below:

• Chapter 1 INTRODUCTION: This chapter presents a brief summary of ultrasound-guided interventions and the challenges involved, especially regarding ultrasound calibration and tool tracking. It is followed by the objective of this thesis and the proposed contributions.

• Chapter 2 A CLOSED-FORM DIFFERENTIAL FORMULATION FOR ULTRASOUND SPATIAL CALIBRATION USING A SINGLE PLANE PHANTOM: In this chapter, an improved differential single-wall calibration method is developed for both linear and curvilinear transducers, and for both Radio Frequency (RF) and B-mode input data. The traditional single-wall method solves for the plane equation as well as the calibration parameters and is not based on a closed-form solution. Here, we propose to simplify the calibration problem by explicitly measuring the position of the plane using a stylus, thus reducing the number of unknowns, and formulate a closed-form solution. Moreover, differential measurements are used to achieve higher accuracy.

• Chapter 3 A CLOSED-FORM DIFFERENTIAL FORMULATION FOR ULTRASOUND SPATIAL CALIBRATION USING A MULTI WEDGE PHANTOM: In search of an ultrasound calibration method with the highest accuracy possible while maintaining ease of use and practical usability, in this chapter we propose using planar objects as phantoms and describe the design of a Multi Wedge phantom composed of five planes (wedges). This phantom is optimized based on the proposed closed-form mathematical framework and uses differential measurements to minimize error.
It is explained how the features are automatically extracted from the images, followed by the results of various experiments. These experiments evaluate the sensitivity, precision and accuracy of the method.

• Chapter 4 SINGLE CAMERA CLOSED-FORM REAL-TIME NEEDLE TRACKING FOR ULTRASOUND-GUIDED NEEDLE INSERTION: Ultimately we aim to improve ultrasound-guided interventions, especially for applications such as percutaneous procedures. For this reason, in this chapter we describe our proposed single-camera tracking system, which is mounted directly on the ultrasound transducer. This tracking system is based on a closed-form solution and is able to determine the pose of a needle (e.g. an epidural needle) with respect to the ultrasound image. Accurate camera and ultrasound calibration are critical to achieve high accuracy. The Multi Wedge calibration method is used for ultrasound calibration.

• Chapter 5 CONCLUSION: In this chapter a short summary of this thesis is presented, followed by a list of novel contributions. It also includes suggestions for future work and possible avenues to expand the research.

Chapter 2

A Closed-Form Differential Formulation for Ultrasound Spatial Calibration Using a Single Wall Phantom

2.1 Introduction and Background

As explained in Chapter 1, the accuracy of freehand tracked ultrasound is an important factor in the overall accuracy of ultrasound-guided procedures. Many methods have been proposed for ultrasound calibration over the last two decades. Despite all these efforts, the best reported accuracies for ultrasound calibration are on the order of 1 to 3 mm [36], which may adversely affect the result of ultrasound-guided interventions and possibly limit the scope of new applications.
Ease of use, simplicity, precision (repeatability) and accuracy are among the most important factors in ultrasound calibration.

Ultrasound calibration is the procedure to find the spatial transformation between the ultrasound image coordinates and the transducer's coordinate system. It is almost impossible to measure calibration accuracy directly, since the true calibration is unknown. If there were a technique able to give us the exact error, then this technique could be used to find the calibration parameters in the first place. However, there are tests, such as Point Reconstruction Accuracy (PRA), that serve as an objective measure of accuracy [36].

Based on our review of calibration and observations on the key aspects limiting calibration accuracy, we have tried to reduce feature localization (i.e. segmentation) error significantly by introducing a novel differential measurement technique. The proposed phantom is small, inexpensive to manufacture and can easily be used in clinical and surgical environments.

In order to improve accuracy, the need for absolute localization of the calibration phantom features should be reduced. Therefore in this work, we propose to use differential measurements of the relative distance between two image features, preferably with the RF signals from the transducer beamforming before processing into an image. This idea is inspired by dynamic motion tracking in ultrasound elastography. Advances in differential measurements for ultrasound motion tracking in recent years enable accurate measurements of the relative locations of phantom features. This accuracy can be as high as a few microns when RF ultrasound data are used [76].

The differential measurement can be especially accurate when the shapes of the features are very similar.
Since the appearance of a wire varies with depth in the image [19], it is more difficult to perform accurate measurements of the distance difference of point features unless only pairs of features with similar shapes are selected. Planar phantoms, on the other hand, are ideal because a horizontal plane appears as a line with near-uniform thickness with a standard linear transducer. In fact, echo RF pulses in all RF scan lines show a similar pattern, since they all experience the same physical situation except for a variation in the ultrasound focus (and therefore beam width) which varies slightly with depth. The relative axial distance of line features can be measured as the relative shift between echo RF pulses, and is therefore a differential measurement. This measurement can be performed very accurately, especially with RF data. Redundancy of measurements, because of the presence of many scan lines, is another advantage of planar objects over wires. Redundancy allows averaging of measurements, which can also reduce error from measurement noise.

Figure 2.1: Differential measurement on two ultrasound scan lines. The line slope is measured with cross-correlation of RF echoes. m = tan(θ) = Δy/Δx.

2.2 Differential Measurement Technique

An ultrasound image is composed of different columns of envelope-detected RF echo signals. When imaging a flat surface with ultrasound, the RF echo in each scan line is produced at a certain depth of the image. All of these reflection points reside on a straight line which is the intersection of the imaging plane and the surface (Figure 2.1). As long as the echo signatures of at least two scan lines are similar, the slope of this line can be measured very accurately by finding their axial shift with the normalized cross-correlation technique [76].

2.3 Single Wall Phantom

The planar single-wall method [67] is an obvious choice for applying the differential calibration method in order to improve ease of use.
The original single-wall method uses an iterative solver to find the calibration parameters as well as the plane equation. In fact, unlike most calibration methods, the wall phantom is not required to be tracked, and the plane's position and orientation parameters are also determined in the solution (assuming the wall phantom is placed on a flat surface). Therefore, although the simplicity of the wall phantom is appealing, it is quite a challenge to achieve high accuracy with a low number of images because of the large number of parameters to be solved.

Another challenge in the single-wall method arises when imaging at an angle far from the normal to the wall. In this case, most of the beam energy is reflected away from the transducer because of specular reflection, yielding a lower-intensity line. Also, because of the ultrasound beam thickness, the line in the image becomes blurred [55]. In fact, according to Prager et al. (1998), the single-wall phantom produced slightly ill-conditioned sets of calibration equations due to the limited range of scanning motions that resulted in clear images of the wall. There was significant uncertainty in the calibration parameters, which led to relatively poor point-reconstruction precision. Approximately 500 images were used in those experiments [67].

In order to overcome the limited range of motion, the Cambridge phantom [67] was created. Although this method was more accurate, it still did not provide a closed-form solution and needed to solve for the plane equation parameters as well as the calibration parameters. It also required a custom apparatus and accurate mounting on the transducer. The technique was also not fully automatic.

In summary, despite the wealth of calibration methods, there is still a need for improved calibration using the simplest and most popular single-wall method. This chapter proposes an improved variation of the single-wall calibration for both linear and curvilinear transducers, and for both RF and B-mode input data.
The key is to adapt the differential measurement approach to a single wall. We also propose to simplify the calibration problem by explicitly measuring the position of the plane using a stylus, thus reducing the number of unknowns, and to formulate a closed-form solution. The goal is to obtain good accuracy with a relatively small number of images.

In our preliminary work [60], we developed a closed-form differential formulation of the traditional untracked single-wall method for linear transducers. An updated and detailed description of that method is presented here. Similar to the traditional single-wall method, the limited range of possible rotations caused that method to be sensitive to measurement error. Therefore, a new "predetermined single wall" method is proposed, with fewer parameters to be solved by ultrasound calibration. This chapter describes the new method and compares it to the traditional single-wall and N-wire calibrations, because they are the most commonly used calibration methods according to our review of the literature to date.

2.4 Experimental Setup

The single wall phantom is simply any planar object with a smooth surface. In our experiments, a 15 × 15 × 0.5 cm aluminum plate was used. A SonixTOUCH ultrasound machine (Ultrasonix Medical Corporation, Richmond, BC, Canada) was used for ultrasound imaging. The experiments were performed using both a linear and a curvilinear transducer: a 38 mm L14-5 linear 2D transducer with 7.2 MHz center frequency, and a C5-2 curvilinear 2D transducer with 3.5 MHz center frequency, 60 mm radius and 56° field of view.

An Optotrak Certus optical tracker (Northern Digital Inc, Waterloo, Ontario, Canada) was used for tracking the transducer.
A set of active markers with a rigid arrangement (rigid body) was attached to the transducer to track its pose. The tracker measures the relative transform from an arbitrary coordinate system defined on the rigid body to the tracker's coordinate system.

Prior to the experiment, the plane's location was predetermined with the tip of a stylus in order to determine the plane equation. An NDI four-marker digitizing probe was used for this purpose.

The wall phantom was immersed at the bottom of a water tank. The wall surface was traced with the stylus to obtain enough points to predetermine the plane equation. Then, while keeping the wall phantom fixed, the wall was scanned with the ultrasound transducer from different angles and positions covering all 6 degrees of freedom. The transducer was fixed with a mechanical arm in each position, and the transducer's position was recorded with the tracker to avoid the influence of the tracker's lag.

Note that the speed of sound depends on the medium and its temperature. The speed of sound assumed by most ultrasound machines is 1540 m/s, which is the average speed of sound in human tissue. For instance, the speed of sound in water at room temperature is approximately 1485 m/s [55]. For a linear transducer in water, for example, this scales objects by 4% in the axial direction. In the calibration procedure, either the scale factor should be calculated or the water temperature should be set in the ultrasound machine.
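The axial scale correction described above is a simple ratio of sound speeds. The following is a minimal sketch of that arithmetic (the helper name `corrected_depth` and the exact speed values are our illustrative assumptions, not part of the experimental protocol):

```python
# Sketch: axial scale correction for a speed-of-sound mismatch.
# The scanner converts echo time to depth assuming 1540 m/s; in
# room-temperature water the true speed is about 1485 m/s, so apparent
# axial distances should be rescaled by c_true / c_assumed.

C_ASSUMED = 1540.0  # m/s, typical scanner assumption (soft tissue average)
C_WATER = 1485.0    # m/s, water at room temperature

def corrected_depth(apparent_depth_mm: float,
                    c_true: float = C_WATER,
                    c_assumed: float = C_ASSUMED) -> float:
    """Rescale an axial distance reported by the scanner."""
    return apparent_depth_mm * c_true / c_assumed

scale = C_WATER / C_ASSUMED
print(f"axial scale factor: {scale:.4f}")        # ~0.96, i.e. a ~4% error
print(f"50 mm apparent -> {corrected_depth(50.0):.2f} mm true")
```

Equivalently, one can leave the image untouched and fold this scale factor into the axial pixel-to-millimeter ratio used later in the calibration equations.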
Another solution is to match the speed of sound in tissue by increasing the water temperature to approximately 50°C, or by adding glycerol or ethanol to water at room temperature [55]. If the speed of sound in a medium is different from the assumed value, objects will appear farther or closer and their shapes might appear distorted.

Figure 2.2: Calibration setup: depicting the coordinate systems of the reference (R), transducer (T) and ultrasound image (I), and the normal (n) of a flat plane.

The image of the wall appears as a straight line in the ultrasound images. Using the line equation determined in each ultrasound image and the position of the transducer obtained with the tracker, it is possible to calculate the calibration transform.

The slope of the line can be accurately determined by differential measurements on ultrasound scan lines (Figure 2.6). Higher accuracy can be achieved with RF data than with B-mode, although both are feasible. For a linear transducer, the slope can simply be calculated by measuring the relative shift between the echo RF signatures of two scan lines and considering the lateral distance between the two scan lines. For a curvilinear transducer, again the relative shifts of the echo RF signatures are measured and the slope is calculated based on the geometry of the transducer (scan-line spacing and line angles). Most of the calibration parameters can be calculated using the line slopes. The linear and curvilinear transducer calibration equations are derived separately below.

Line intercepts are used to determine the translation parameters. It is not possible to measure the line intercept with differential measurements. Based on simulations of the problem, the errors in the translation parameters are of the same order as the absolute measurement error, which is on the order of the pulse width (Section 3.7.1).

The calibration goal is to find the six degree-of-freedom transformation from the image to the transducer coordinate system (^T T_I).
The transformation from the transducer to the reference coordinate system (^R T_T) is known from the readings of the tracker (Figure 2.2).

2.5 Linear Transducer Calibration

An ultrasound image is composed of multiple RF scan lines, each formed by an aperture of ultrasonic elements. In a linear transducer, all the elements are placed in a linear fashion in the lateral direction, U. Also, the elements are parallel to each other and hence the axial vector V, the ultrasound beam propagation direction, is the same for all the RF scan lines (Figure 2.3).

The origin of the imaging plane, P0, can be assumed, without loss of generality, to be the origin of the first scan line. The lateral pixel-to-millimeter ratio, sl, is related to the lateral spacing between the elements (assuming equal spacing) and the image line-density setting on the ultrasound machine, and can be assumed to be known. The axial pixel-to-millimeter ratio, sa, is related to the speed of sound in the medium and the sampling rate.

Based on this model, the position of each point of the image plane can be described mathematically in 3D space as:

P = P0 + sl·x·U + sa·y·V,   (2.1)

where x and y are the lateral and axial coordinates of the point in the image plane, in pixels.

Figure 2.3: Intersection of the ultrasound image and the wall plane. U is a unit vector in the direction passing through the centers of the array elements (lateral) and V is a unit vector in the direction of the ultrasound beam (axial). P0 is the origin of the imaging plane. The plane equation is defined by the normal vector, n, and a point, Q.

In order to solve for the unknown calibration parameters in the differential scheme, the problem is formulated based on the axial difference of the features in the ultrasound scan lines (Figure 2.4). The pulse-echo reflection of the plane occurs at a certain axial depth, y, for each scan line, x.
Considering any two different scan lines, x_i and x_j, in a single image and using (2.1), the following equations are obtained:

P_i = P0 + sl·x_i·U + sa·y_i·V,   (2.2)

P_j = P0 + sl·x_j·U + sa·y_j·V,   (2.3)

P_i − P_j = sl(x_i − x_j)·U + sa(y_i − y_j)·V.   (2.4)

Since the difference of the two points, P_i − P_j, is a vector parallel to the plane, it should be orthogonal to the plane normal vector, n. Hence:

[sl(x_i − x_j)·U + sa(y_i − y_j)·V] · n = 0.   (2.5)

Figure 2.4: Differential measurement on two ultrasound scan lines. The line slope is measured with cross-correlation of RF echoes. m = tan(θ) = Δy/Δx.

Dividing (2.5) by Δx = x_i − x_j and defining Δy = y_i − y_j, m = Δy/Δx and k = sa/sl, we have:

U·n + k·m·(V·n) = 0.   (2.6)

In the above equation, the slope of the line, m, is obtained from the differential measurements by dividing the axial difference by the lateral difference. The lateral difference is, in fact, the distance between scan lines i and j (Figure 2.4).

2.5.1 Predetermined Wall Method

In this approach, using a stylus, the plane equation of the wall is determined prior to the experiment. Therefore, the plane parameters, n_r and Q_r, are known in the reference coordinate system. All the previous derivations, including the parameters in (2.6), can be defined in any coordinate system. However, we choose to define them in the transducer's coordinate system, T, so that the calibration parameters (U, V and k) remain constant for each pose of the transducer.

Since the plane parameters are measured in the reference coordinate system, we need to transform them into the transducer's coordinate system for each pose, i:

n_i = (^T R_R)_i · n_r,   (2.7)

where (^T R_R)_i is the rotation part of the reference-to-transducer transformation for pose i, (^T T_R)_i, measured by the tracker. After calculating n_i for all p poses and substituting into (2.6), we can put them in matrix form:

[ n_1^T   m_1·n_1^T ]   [  U  ]
[   ⋮         ⋮     ] × [     ]  =  0,   (2.8)
[ n_p^T   m_p·n_p^T ]   [ k·V ]

where the stacked matrix is p × 6. The null space of the above matrix gives the solution for X^T = [U^T, k·V^T].
To obtain the unique solution, X is divided by the norm of its first three elements, since U is a unit vector:

X_n = X / ||X(1:3)||.   (2.9)

Finally, U, V and k can easily be found:

U = X_n(1:3),   (2.10)

V = X_n(4:6) / ||X_n(4:6)||,   (2.11)

k = ||X_n(4:6)||.   (2.12)

Now the rotation part of ^T T_I can be determined as follows:

R = [ U  Y  Z ],   (2.13)

Z = (U × V) / ||U × V||,   (2.14)

Y = (Z × U) / ||Z × U||.   (2.15)

In the case of ultrasound beam steering, U and V are not orthogonal, and the deviation of their crossing angle from 90° can be expressed as the skew angle. Note that here we have calculated k, which is the ratio of the axial to the lateral pixel-to-millimeter ratio. Therefore, assuming either sl or sa is known, the other one can be calculated. We use a known sl based on the number of scan lines across the lateral extent of the transducer array.

The translation part of ^T T_I is defined by P0 and can be calculated using (2.1) by imposing the plane equation:

(P0 + sl·x_i·U + sa·y_i·V − Q_i) · n_i = 0,   (2.16)

where (x_i, y_i) is the location of one point residing on the intersection line in image i. The plane intercept, d, is a known constant in any defined coordinate system:

d = Q_r · n_r = Q_i · n_i,  ∀i.   (2.17)

Now (2.16) can be written in matrix form:

[ n_1^T ]          [ d − sl·x_1·(U·n_1) − sa·y_1·(V·n_1) ]
[   ⋮   ] · P0  =  [                  ⋮                   ],   (2.18)
[ n_p^T ]          [ d − sl·x_p·(U·n_p) − sa·y_p·(V·n_p) ]

which is a linear system of equations and can be solved when at least three non-collinear points are available (p ≥ 3).

If we assume that the plane equation is unknown and should be solved for, the problem can still be solved with a closed-form solution. The next section contains the complete derivation of the solution for this problem. It should be noted that more unknowns make the problem less robust and more sensitive to measurement errors.

After the slope of the line is measured using the differential measurement technique, the line equation only needs one more variable (i.e., the line intercept) to be defined.
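The rotational solve above, (2.8)–(2.15), can be illustrated with a short numerical sketch. This is a minimal NumPy illustration on noise-free simulated normals and slopes, not the thesis implementation; `solve_rotation` and all variable names are our own, and the recovered directions are only defined up to a common sign:

```python
import numpy as np

def solve_rotation(normals, slopes):
    """Closed-form rotational solve for the predetermined-wall method.

    normals: (p, 3) plane normals expressed in the transducer frame.
    slopes:  (p,)  line slopes m_i measured differentially per image.
    Returns (U, V, k, R) from the null space of [n_i^T, m_i*n_i^T].
    """
    normals = np.asarray(normals, float)
    slopes = np.asarray(slopes, float)
    A = np.hstack([normals, slopes[:, None] * normals])   # p x 6 system
    # Null space = right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    X = X / np.linalg.norm(X[:3])          # enforce ||U|| = 1, as in (2.9)
    U = X[:3]
    k = np.linalg.norm(X[3:])              # axial/lateral scale ratio (2.12)
    V = X[3:] / k
    # Assemble the rotation from U and V as in (2.13)-(2.15).
    Z = np.cross(U, V); Z /= np.linalg.norm(Z)
    Y = np.cross(Z, U); Y /= np.linalg.norm(Y)
    R = np.column_stack([U, Y, Z])
    return U, V, k, R

# Synthetic check: recover known directions from simulated slopes,
# using m_i = -(U.n_i) / (k * V.n_i), which satisfies (2.6).
rng = np.random.default_rng(0)
U_true = np.array([1.0, 0.0, 0.0])
V_true = np.array([0.0, 1.0, 0.0])
k_true = 2.5
normals = rng.normal(size=(40, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
normals = normals[np.abs(normals @ V_true) > 0.2]   # keep well-posed poses
slopes = -(normals @ U_true) / (k_true * (normals @ V_true))
U, V, k, R = solve_rotation(normals, slopes)
print(abs(U @ U_true), abs(V @ V_true), k)   # ~1, ~1, ~2.5
```

On noise-free data the smallest singular value is essentially zero and the recovery is exact; with measurement noise, the same SVD step gives the least-squares null-space estimate.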
This means that for each image, only one independent equation as in (2.16) can be written, and other points on the line yield dependent equations. In fact, in (2.18) the number of independent equations is equal to the number of poses.

2.5.2 Generalized Wall Method

In this approach, it is assumed that no prior information about the plane is available, and the plane equation is considered unknown and should also be solved for. Therefore, the parameters are all defined in the reference coordinate system (instead of the transducer coordinate system):

U_ri = (^R R_T)_i · U,   (2.19)

V_ri = (^R R_T)_i · V.   (2.20)

The calibration vectors for one transducer pose, (U_r0, V_r0), are considered as the unknowns, and the corresponding vectors for the other poses are defined relative to them:

U_ri = (^R R_T)_i (^R R_T)_0^{-1} · U_r0,   (2.21)

V_ri = (^R R_T)_i (^R R_T)_0^{-1} · V_r0,   (2.22)

and we can define:

R_di = (^R R_T)_i (^R R_T)_0^{-1},   (2.23)

which is a known matrix obtained from the tracker. Now, by substituting (2.21), (2.22) and (2.23) into (2.6):

(R_di·U_r0) · n + k·m_i·(R_di·V_r0) · n = 0   (2.24)

is obtained, where U_r0, V_r0, n and k are unknowns and R_di and m_i are the known measurements.

In order to solve (2.24), we first rearrange it so that the dot product is replaced with vector multiplication and then vectorize the two terms:

vec(n^T R_di U_r0) + k·m_i·vec(n^T R_di V_r0) = 0,   (2.25)

where vec(X) denotes the vectorization of the matrix X formed by stacking the columns of X into a single column vector. Now, using the following property of the Kronecker product:

vec(A X B) = (B^T ⊗ A) vec(X),   (2.26)

we can rewrite (2.25) as:

(U_r0^T ⊗ n^T) vec(R_di) + k·m_i·(V_r0^T ⊗ n^T) vec(R_di) = 0,   (2.27)

which can be rearranged and put in matrix form as below:

[ vec(R_d1)^T   m_1·vec(R_d1)^T ]   [   U_r0 ⊗ n   ]
[      ⋮                ⋮       ] × [              ]  =  0.   (2.28)
[ vec(R_dp)^T   m_p·vec(R_dp)^T ]   [ k·(V_r0 ⊗ n) ]

The null space of the above equation gives the solution, X_g. Similar to (2.8), the obtained solution should first be normalized to satisfy ||U_r0 ⊗ n|| = 1, and k can then be solved as:

k = ||X_g(10:18)||.
(2.29)

Now U_r0, V_r0 and n should be calculated from X_g:

U_r0 ⊗ n = X_g(1:9),   (2.30)

V_r0 ⊗ n = X_g(10:18) / k.   (2.31)

The solution for U_r0, V_r0 and n can be found using an approximation method with Kronecker products [29, 48]. In this method, the problem is to find B (m1 × n1) and C (m2 × n2) that minimize ||A − B ⊗ C||_F, where F denotes the Frobenius norm and A is an m × n matrix with m = m1·m2 and n = n1·n2.

An appealing solution results from the use of the Frobenius norm, while other norms would lead to a computationally difficult optimization problem [29]. The solutions are the singular vectors associated with the largest singular value of Ã, a permuted version of A, computed by Singular Value Decomposition (SVD). If Ã = P S Q^T, then vec(B) = α1·p_1 and vec(C) = α2·q_1, with α1·α2 = σ1.

In order to apply this method to (2.30) and (2.31), the right-hand side should first be permuted into a 3 × 3 matrix and then the singular value decomposition computed:

H_1 = [ X_g1  X_g2  X_g3 ;  X_g4  X_g5  X_g6 ;  X_g7  X_g8  X_g9 ] = P S Q^T,   (2.32)

and

H_2 = (1/k)·[ X_g10  X_g11  X_g12 ;  X_g13  X_g14  X_g15 ;  X_g16  X_g17  X_g18 ] = P′ S′ Q′^T.   (2.33)

And eventually:

U_r0 = p_1,   (2.34)

V_r0 = p′_1,   (2.35)

n = q_1 = q′_1.   (2.36)

Note that, due to the unit norm of all the vectors, α1 = α2 = σ1 = σ′1 = 1. Then, using (2.19) and (2.20), U and V can be calculated and, similar to the previous section, the calibration rotation matrix can be computed using (2.13)-(2.15).

To solve for the translation part, P0, we again use (2.16) and (2.17).
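The nearest-Kronecker-product step above can be sketched numerically. This is a minimal NumPy illustration under our own naming (`factor_kron` is a hypothetical helper, not the thesis code); it uses the rank-1 SVD factorization described in the text, with the sign of each factor left ambiguous as in any SVD:

```python
import numpy as np

def factor_kron(x):
    """Recover unit 3-vectors u, n from a 9-vector x ≈ u ⊗ n.

    Rearranging x into the rank-1 matrix u·n^T and taking the dominant
    singular vectors gives the nearest Kronecker product in the
    Frobenius norm (the permute-and-SVD method of the text).
    """
    M = np.asarray(x, float).reshape(3, 3)   # row i holds u_i * n
    P, s, Qt = np.linalg.svd(M)
    return P[:, 0], Qt[0], s[0]              # u, n, sigma_1

# Check on an exact Kronecker product of two unit vectors.
u = np.array([0.6, 0.0, 0.8])
n = np.array([0.0, 1.0, 0.0])
u_hat, n_hat, sigma = factor_kron(np.kron(u, n))
print(abs(u_hat @ u), abs(n_hat @ n), sigma)   # ~1, ~1, ~1
```

With noisy input the same dominant-singular-vector step returns the closest rank-1 (i.e. Kronecker) approximation, which is exactly why the Frobenius norm is the convenient choice here.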
For the generalized method, however, d is considered an unknown, and therefore the matrix form should be reformulated as:

[ n_1^T  −1 ]   [ P0 ]   [ −sl·x_1·(U·n_1) − sa·y_1·(V·n_1) ]
[  ⋮      ⋮ ] × [    ] = [                 ⋮                 ],   (2.37)
[ n_p^T  −1 ]   [ d  ]   [ −sl·x_p·(U·n_p) − sa·y_p·(V·n_p) ]

which is also a linear system of equations and requires at least 4 points (p ≥ 4) to solve for [P0] (3 × 1) and d.

2.6 Curvilinear Transducer Calibration

Calibration of a curvilinear transducer is similar to that of a linear transducer except that, if using RF data, a different approach is needed for slope measurement based on the specific geometry of a curvilinear transducer. However, if using scan-converted B-mode data, the plane appears as a straight line and hence the same method as for a linear transducer can be used. The following describes the differential slope measurement for a curvilinear transducer using RF data.

Figure 2.5: Elements of a curvilinear transducer are placed on an arc with radius r0. The axial direction vector, W_i, can be defined in terms of a pair of perpendicular vectors, U and V.

In a curvilinear transducer, the elements are placed with equal spacing on an arc with a certain radius, r0. The elements are oriented to follow the arc curvature, resulting in a radial propagation direction, W. Here we define V as the axial direction corresponding to one of the scan lines (e.g. the middle one) and U as a vector perpendicular to V in the imaging plane (Figure 2.5).

The direction of each scan line, W_i, can be described as a linear combination of U and V, in terms of θ_i, the angle between U and W_i, as follows:

W_i = sin(θ_i)·U + cos(θ_i)·V,   (2.38)

where θ_i is known from the transducer specifications, and therefore the following coefficients are known:

α_i = sin(θ_i),  β_i = cos(θ_i).   (2.39)

Defining the center of the arc as P0, the position of any point at depth h_i along the i-th scan line can be described in 3D as:

P = P0 + (r0 + sa·h_i)·W_i,   (2.40)

where sa is the axial pixel-to-millimeter ratio.
By combining (2.38), (2.39) and (2.40), while imposing the plane equation for $P$, we have:

$$P_0\cdot\vec{n} + (r_0 + s_a h_i)(\alpha_i\vec{U} + \beta_i\vec{V})\cdot\vec{n} = d, \qquad (2.41)$$

where $d = Q\cdot\vec{n}$, and $h_i$ can be calculated as:

$$h_i = \frac{d - P_0\cdot\vec{n}}{s_a\,\vec{V}\cdot\vec{n}} \cdot \frac{1}{\alpha_i m + \beta_i} - \frac{r_0}{s_a}, \qquad (2.42)$$

where $m = \frac{\vec{U}\cdot\vec{n}}{\vec{V}\cdot\vec{n}}$ is in fact the slope of the line, similar to the slope definition for a linear transducer. By writing the same equation for the $j$th scan line and subtracting it from (2.42) we have:

$$\Delta h = h_i - h_j = e\left(\frac{1}{\alpha_i m + \beta_i} - \frac{1}{\alpha_j m + \beta_j}\right), \qquad (2.43)$$

where $e$ is defined as:

$$e = \frac{d - P_0\cdot\vec{n}}{s_a\,\vec{V}\cdot\vec{n}}. \qquad (2.44)$$

In (2.43), $\Delta h$ comes from the differential measurement, and $m$ and $e$ are the only unknowns. This equation can be solved with a non-linear solver by considering at least two pairs of scan lines.

The number of scan lines used to solve (2.43) varies for each image depending on the number of acceptable scan lines in the segmentation procedure described in the following section. Using (2.43) for each pair of scan lines, we can solve for the two unknowns ($m$ and $e$) by minimizing $f$ in the equation below using the Levenberg-Marquardt algorithm [56]:

$$f = \sum_{i=1}^{n_{sl}} \left[\Delta h_{ij} - e\left(\frac{1}{\alpha_i m + \beta_i} - \frac{1}{\alpha_j m + \beta_j}\right)\right]^2, \qquad (2.45)$$

where $n_{sl}$ is the number of valid pairs of scan lines and $j$, the partner of scan line $i$, was chosen as explained in the next section.

Therefore, for each image, the slope of the line was calculated from the RF data. For B-mode data, a procedure similar to that for the linear transducer was used to calculate the line's slope, $m$. With the slope values, $m_i$, known, the calibration matrix can be calculated using (2.8). In fact, after the slope values were measured, the data for different poses were combined using a method similar to that for the linear transducer. Next, the translation parameters were solved using line intercepts, again as for a linear transducer.

2.6.1 Automatic Segmentation and Measurement

When imaging a flat surface with ultrasound, the RF echo in each scan line is produced at a certain depth of the image.
All of these reflection points reside on a straight line which is the intersection of the imaging plane and the surface (Figure 2.4). As long as the echo signatures of at least two scan lines are similar, the slope of this line can be measured very accurately by finding their axial shift with the normalized cross-correlation technique [76].

Segmenting the line feature in the ultrasound image is performed as follows. First, in each scan line, the location of the maximum was found and stored in an array as an approximation of the line. In order to perform differential measurement for more accurate slope estimation, the image was first processed to remove noise, especially reverberation below the actual surface. To this end, in each scan line, the points around the maximum's location were kept and the rest of the signal was cleared. Marginal scan lines on both the left and the right sides of the image were omitted because they have smaller beamforming apertures and typically greater signal fall-off.

Figure 2.6: Each scan line was compared against a scan line at a specified lateral distance (15 scan lines) and the axial shift was calculated using normalized cross-correlation.

At this stage, starting from the left, each scan line was compared against a scan line at a specified lateral distance. A larger lateral distance between the two scan lines is desirable to minimize the sensitivity of the slope estimation to axial measurement error. However, when the two scan lines are too far apart, the pulse shape may change slightly. In our experiments, the distance was chosen as 15 scan lines within an RF image of 128 scan lines.

Each scan line was up-sampled by a factor of 10 and the axial shift was calculated using the normalized cross-correlation technique. The slope was calculated by dividing this value by the lateral distance times 10.
Then, the outlier pairs (CC values less than 0.9) were removed and the average was taken (Figure 2.6).

For the curvilinear transducer, the part of the image containing the scan lines with strong echoes from the wall was identified first. To this end, the scan line with the highest peak value was identified as the center of the line segment. The line segment was found by starting from this center scan line and adding each scan line to the left or right only if that column's peak value was more than 50% of the highest value. Then, similar to the linear transducer, starting from the left of the image, each scan line was compared against a scan line at a specified lateral distance. Here, the distance was chosen as 10 scan lines.

Figure 2.7: (a) Stylus being used for plane localization. (b) Phantom with 3×4 semi-spherical holes for stylus accuracy evaluation.

2.6.2 Plane Localization with Stylus

In the predetermined wall method, the plane equation for the wall surface should be determined prior to the experiment. To do so, a minimum of three points should be acquired by touching the plane with the stylus. The NDI 4-marker probe was used as the stylus and the plane was traced in a rectangular motion approximately 10×10 cm in size. At a frame rate of 100 fps, 3000 points were collected in 30 seconds. Although three points are enough for plane localization, acquiring more points over a wide range is suggested to improve accuracy. The plane fitting error was 0.14 mm in our experiment.

In order to evaluate the accuracy of the stylus, a planar sheet was created with a 3D printer (Objet Inc., Billerica, MA, USA) comprising a grid of 3×4 semi-spherical holes on its surface with 30 mm spacing in both directions (Figure 2.7). Using the stylus, the distances between the holes were measured.

Figure 2.8: (a) PRA evaluation phantom comprising an array of five truncated cones. (b) Ultrasound image of the PRA phantom by the linear transducer.
The size of the holes was the same as the stylus bead to make sure it remained fixed at each point. The stylus was positioned over a range of angles (up to a 30° tilt angle) during this experiment.

2.6.3 Calibration Accuracy Evaluation

The Point Reconstruction Accuracy (PRA) test is an objective measure of accuracy in which a point is scanned and its location reconstructed in 3D space using the calculated calibration matrix [35]. The true 3D location of the point phantom is determined by a tracked stylus. It should be noted that PRA includes all the errors in the tracking and imaging process, such as alignment, segmentation, calibration and tracking errors.

In order to facilitate the validation procedure and acquire multiple points per image, a phantom comprising an array of five truncated cones (top radius = 0.13 mm, bottom radius = 4 mm, height = 15 mm) was manufactured with a 3D printer. The phantom was fixed at the bottom of the water bath and its pose was determined by scanning all four semi-spherical holes incorporated in it with a stylus (alternatively, a rigid body marker could be attached to this phantom). Therefore, the locations of the tops of the cones were always known in the reference coordinate system. A supporting bar (height = 20 mm) resides on each side of the phantom to avoid the transducer hitting the cone tips and damaging them (Figure 2.8a).

The phantom was then scanned by the tracked transducer. The transducer was moved gradually (using a linear stage) in the axial direction, 0.5 mm at a time, acquiring 370 points with the linear transducer and 400 points with the curvilinear transducer over a depth range of approximately 40 mm.

Figure 2.9: The box plot of the average CC values for all ultrasound images (RF and B-mode) for (a) the linear transducer and (b) the curvilinear transducer.
The tips of the cones in the ultrasound image (Figure 2.8b) were kept in the imaging plane by observing the bright echoes, and then segmented manually.

2.7 Experiments and Results

The calibration procedure was performed on both linear and curvilinear transducers. 100 B-mode and RF images of a plane in a water bath were captured for different poses of the transducer. In each pose, the transducer was fixed with a mechanical arm, and then the ultrasound image was acquired and the tracking information recorded. For all the images, the imaging depth for the linear and curvilinear transducers was set to 4 cm and 5 cm, respectively.

The transducer was positioned in different poses following the guidelines in [67], which describe the minimal sequence of motions for single wall calibration. The range of translations and rotations was made as large as possible while ensuring clear visibility of the line in the ultrasound images. Maximizing the range of translations and rotations ensures that the matrix in (2.8) is well-conditioned and reduces the number of images required to achieve the same level of accuracy.

Figure 2.10: Calibration results of the predetermined single wall method using 20 to 95 RF images. Mean (red) and standard deviation (blue) of the calibration parameters: (a) translation parameters (tx, ty, tz) [mm]; (b) rotation parameters (α, β, γ) [deg].

For comparison, both the linear and curvilinear transducers were also calibrated with a triple N-wire phantom [19] using the f-cal application in the Public software Library for UltraSound Imaging Research (PLUS) [45].

Figure 2.11: Calibration results of the predetermined single wall method using 25 to 80 RF images of the curvilinear transducer. Mean (red) and standard deviation (blue) of the calibration parameters: (a) translation parameters (tx, ty, tz) [mm]; (b) rotation parameters (α, β, γ) [deg].

The PRA was calculated for the proposed method and the N-wire method.
Also, the accuracy of the stylus and of the slope measurement procedure are evaluated in this section.

Table 2.1: Calibration PRA test results for the linear transducer (mm).

                          Top         Center      Bottom      Total
                        (4-15 mm)   (15-26 mm)  (26-37 mm)  (4-37 mm)
  Number of points         130         127         113         370
  Predetermined RF      0.66±0.22   0.68±0.20   0.76±0.17   0.73±0.23
  Predetermined B-Mode  1.75±0.25   1.41±0.22   1.15±0.20   1.49±0.40
  N-wire B-Mode         0.65±0.21   0.65±0.20   0.64±0.16   0.67±0.20

2.7.1 Stylus Accuracy

The stylus was pre-calibrated by the manufacturer but was also calibrated by the pivot procedure in the NDI software. The reported accuracy was 0.15 mm 3D RMS error and 0.1 mm mean error. To confirm these results, the distances between pairs of points in a 3×4 grid with 30 mm spacing were measured. In a total of 17 measurements between adjacent points in both directions, the error was 0.07 ± 0.13 mm.

2.7.2 Slope Measurement Evaluation

As mentioned before, in order to measure the slopes in the differential paradigm, each scan line was compared against a scan line at a specified lateral distance and the axial shift was measured with the normalized cross-correlation technique. In each image, the average of the Cross-correlation Coefficient (CC) values was calculated, and the box plots of the average CC values for both RF and B-mode datasets for the linear and curvilinear transducers are shown in Figure 2.9. The large CC values suggest that the echo pulse shape is relatively unchanged among scan lines. These results also show that the CC values for the curvilinear transducer are lower than those for the linear transducer.

2.7.3 Calibration Results

In particular, we investigate the estimation of the calibration parameters using different numbers of images. To do so, the calibration parameters were calculated using different numbers of images, ns. For each ns, varying from 20 to 95, ns images were randomly selected from the data set. The calibration parameters were calculated and this was repeated 250 times.
The average and the standard deviation of these calibration parameters were calculated and are shown in Figure 2.10.

Table 2.2: Calibration PRA test results for the curvilinear transducer (mm).

                          Top         Center      Bottom      Total
                       (12-23 mm)  (23-35 mm)  (35-46 mm)  (12-46 mm)
  Number of points         117         130         153         400
  Predetermined RF      0.61±0.20   0.82±0.23   0.99±0.21   0.86±0.28
  Predetermined B-Mode  1.41±0.20   1.18±0.23   1.34±0.22   1.33±0.27
  N-wire B-Mode         0.53±0.33   0.76±0.38   0.80±0.33   0.80±0.46

The predetermined wall method was used on the RF images of the linear transducer. Note that, given the closed-form nature of the solution, the order of the images is unimportant. The rotation matrix was decomposed into yaw, pitch and roll angles $(\alpha, \beta, \gamma)$ using the method described in [71] as follows:

$$R = R_z(\alpha)\,R_y(\beta)\,R_x(\gamma). \qquad (2.46)$$

Figure 2.10 shows that the standard deviation of the parameters decreases rapidly as the number of input images increases, with diminishing returns after 100 images. It also shows that the out-of-plane rotation ($\gamma$) is estimated with less precision than the other rotation parameters; the reason is that the line's slope is least sensitive to the roll angle. This is an inevitable effect of a 2D imaging modality which affects other 2D calibration methods as well (e.g. Fig. 11 in [19]).

In a similar fashion, the calibration results of the curvilinear transducer were calculated and are shown in Figure 2.11.

2.7.4 Calibration Accuracy

The accuracy of the proposed calibration methods was evaluated and compared to that of the N-wire method for both the linear and curvilinear transducers. As described earlier, 370 and 400 points were acquired by the linear and curvilinear transducers, respectively, over a depth range of approximately 40 mm.
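The decomposition of (2.46) can be inverted in closed form. Below is a minimal sketch of this angle extraction; the helper names and test angles are hypothetical, and the formulas are valid away from the gimbal-lock poses $\beta = \pm 90°$.

```python
import numpy as np

def rot_x(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def zyx_angles(R):
    """Decompose R = Rz(alpha) Ry(beta) Rx(gamma) as in Eq. (2.46),
    assuming beta is away from the +-pi/2 singularity."""
    beta = np.arcsin(-R[2, 0])
    alpha = np.arctan2(R[1, 0], R[0, 0])
    gamma = np.arctan2(R[2, 1], R[2, 2])
    return alpha, beta, gamma

# round trip with arbitrary test angles (radians)
angles = (0.4, -0.3, 1.1)
R = rot_z(angles[0]) @ rot_y(angles[1]) @ rot_x(angles[2])
recovered = zyx_angles(R)
```

The round trip recovers the generating angles exactly, which is a convenient self-check when reporting the calibration parameters as in Figures 2.10 and 2.11.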
The locations of the segmented points in the images were compared to the estimated positions of the cones' tips on each image plane, and the Euclidean distance error was measured. Table 2.1 and Table 2.2 summarize the results for the linear and curvilinear transducers, respectively.

Figure 2.12: Mean (red) and standard deviation (blue) of PRA using different numbers of RF images for (a) the linear transducer and (b) the curvilinear transducer.

Moreover, the mean and standard deviation of PRA as a function of the number of images were calculated using the same frame-selection method as in the above section. The result is shown in Figure 2.12.

2.7.5 Calibration Time

Localization of the plane takes 30 seconds, as mentioned earlier. The image acquisition time is proportional to the number of images; in our experiments, acquiring 100 different images took about 5 minutes. Using our closed-form solution, computing the calibration results takes only a few seconds on a standard computer workstation with a Core 2 Duo CPU @ 2.93 GHz and 4 GB of RAM.

2.8 Summary

Although there are many calibration methods, there is still a need for a simple and accurate calibration method. The single wall method [67] is an obvious candidate for applying the differential calibration technique in order to improve ease-of-use. The original single wall method uses an iterative solver to find the calibration parameters as well as the plane equation. According to [67], the single wall phantom produced slightly ill-conditioned sets of calibration equations due to the limited range of scanning motions that resulted in clear images of the wall.

This chapter proposes an improved single-wall calibration for both linear and curvilinear transducers, and for both RF and B-mode input data. The key is to adapt the differential measurement approach to a single wall.
We also propose to simplify the calibration problem by explicitly measuring the position of the plane using a stylus, which reduces the number of unknowns and allows a closed-form solution. The goal was to obtain good accuracy with a relatively small number of images. The PRA evaluation compared 370 and 400 points, for the linear and curvilinear transducers respectively, throughout the image to their estimated projections using the proposed method and the triple N-wire method. The predetermined wall method, for the linear transducer, achieves a PRA of 0.73±0.23 mm using 100 RF images, while the triple N-wire PRA was 0.67±0.20 mm using 100 B-mode images. For the curvilinear transducer, the PRA of the predetermined method was 0.86±0.28 mm using 100 RF images, whereas the PRA of the triple N-wire method using the same number of B-mode images was 0.80±0.46 mm. All the methods are statistically significantly different, with p values less than 0.05.

Similar to the inherent limitations of the single-wall method, in our generalized closed-form solution we observed that the method is sensitive to measurement and tracking errors. The PRA for the generalized solution (without predetermination of the wall position) is 3.81±0.87 mm for RF and 5.16±0.49 mm for B-mode for the linear transducer, and 3.88±0.51 mm for RF and 4.08±1.21 mm for B-mode for the curvilinear transducer. These are clearly inferior to the predetermined method. Although the errors in the estimation of the rotation parameters and the plane normal in the first stage are reasonable, these errors largely affect the estimation of the plane intercept and the translation parameters.

The traditional single wall method was implemented in MATLAB using the same algorithm as in [67] and applied to the same B-mode data from our experiments. PRA values of 3.12±1.01 mm and 4.03±1.51 mm were achieved for the linear and curvilinear transducers, respectively.
Note that [67] reports that the traditional single wall method produced a mean and maximum point reconstruction precision of 3.43 mm and 11.67 mm, respectively. In the implementation of the traditional single wall method by [36], a PRA of 2.28 mm was reported using B-mode data of a linear transducer with an imaging depth of 3 cm.

In terms of calibration time, our automatic image processing algorithm runs at the ultrasound frame rate and the final results are calculated in a few seconds thanks to our proposed closed-form solution. We used a relatively small number of images (i.e. 100) to reduce calibration time, compared to 480 and 530 images for the traditional single wall and the Cambridge phantom, respectively [67]. The N-wire method also uses 100 images for calibration [19]. Calibration time is important if, for example, the calibration must be done repetitively in an operating room [41].

The predetermined wall method requires the plane to be localized (using a stylus) prior to calibration, but this increases the accuracy and robustness of the solution. The localization can either be done with a stylus before each calibration, or once, with the plane thereafter known by affixing a tracker permanently to the wall.

The proposed method is less accurate than the Multi Wedge phantom [61] (described in Chapter 3), which achieves a PRA of 0.18±0.40 mm and 0.09±0.39 mm for B-mode and RF data, respectively. This is to be expected since that phantom provides five optimized plane locations, at the cost of requiring a specialized phantom. We therefore see the proposed differential predetermined single wall calibration approach as complementary to the multi-wedge approach.

Note that direct comparison of errors between calibration methods in different papers should be done with caution since different apparatus and parameters were used.
It should also be noted that PRA tests, like other measures of calibration, include errors in distinguishing the point target location within the image, which affects the ability of a point to act as a "gold standard". Such concerns about the gold standard are analogous to the concerns about choosing appropriate image features for calibration; i.e. at the sub-millimeter level, it is difficult to separate calibration errors from errors in the gold standard.

It is worth mentioning that in a PRA test, the key is to place the point target in the ultrasound image plane with the highest accuracy possible. The position of the stylus (or the transducer), on the other hand, does not affect the accuracy of the PRA test, assuming uniform tracking accuracy of the tracker. In fact, the PRA test is most meaningful when many target points are placed throughout the image with the highest accuracy possible in aligning the points with the ultrasound image plane. Obviously, the goal is to minimize the alignment error, but zero alignment error might not be possible. The proposed PRA phantom reduces the alignment error with its cone-shaped target points. It also increases the acquisition speed by a factor of five, facilitating the acquisition of more points across the image in a timely manner.

Note that, for a curvilinear transducer, not all the scan lines produce a strong echo reflection, due to the curvature of the transducer. The upside of this curvature, however, is that a larger range of in-plane rotations is possible compared to a linear transducer.

The differential measurement improves the accuracy of the line slope measurement; however, there are issues that limit its effect on the accuracy of the generalized method. One issue is the sensitivity of the proposed closed-form solution (i.e. the generalized method) to slope measurement error. The reason is that in the closed-form solution, the calibration parameters are calculated in two stages (i.e. null space and Kronecker decomposition) and are not calculated directly. Another issue is that the rotation parameters and the plane normal are calculated first, and then the translation and plane intercept are calculated. Due to the specific mathematical formulation of the problem and its sensitivity, as mentioned earlier, the rotational parameters cannot be calculated with very high accuracy. This, in turn, results in inaccuracies in the estimation of the translation parameters and the plane intercept. In the traditional single wall method, on the other hand, all the parameters are solved at the same time and the translational parameters, in a way, act as a regularization term in the iterative solution.

In fact, since the two methods have different mathematical bases, it is not possible to isolate the effect of the differential measurement technique. However, the mathematical formulation of the predetermined method allows the use of differential measurements and takes advantage of the higher measurement accuracy while providing a direct and robust solution for the calibration parameters.

We are currently integrating the software into the PLUS library [45] so that it is freely available to other researchers.

Although a transformation such as beam steering will not produce a different pose, it can be used to increase the range of motion: applying steering in the direction opposite to the transducer rotation improves the image quality and thus allows a wider range of rotations. Future work includes investigating this technique to improve the performance of the method.

The method can be extended to the calibration of 3D ultrasound transducers by considering that the axial difference of two scan lines at different elevations (i.e. in different frames) yields a new equation similar to (3.2). This is equivalent to measuring the slope of a line in a resliced plane of the acquired volume. This extra equation reduces the number of required images and improves the accuracy.
This is under the assumption of equal spacing between the frames. Otherwise, the calibration can be performed on every single frame of the volume, similar to a 2D transducer. That is also part of our future research.

Chapter 3

A Closed-Form Differential Formulation for Ultrasound Spatial Calibration Using a Multi Wedge Phantom

3.1 Introduction and Background

In Chapter 2, we described an improved single wall calibration method based on a closed-form differential formulation which offers simplicity and reasonable accuracy. In search of an ultrasound calibration method with the highest accuracy possible while maintaining ease-of-use and practical usability, in this chapter we propose using planar objects as phantoms and describe the design of a Multi Wedge phantom composed of five planes (wedges).

3.2 Multi Wedge Phantom

Planar objects are ideal for accurate differential measurements since echoes from different portions of a plane with a similar beam angle will have similar RF echo signatures. In order to find the full pose of the ultrasound image plane, a single measurement is insufficient. At least five planes are required to solve for all degrees of freedom of the calibration problem, including the speed of sound variation scale (see Section 3.4). Therefore a phantom comprised of five planes has been proposed. This "Multi Wedge" phantom is seen in the ultrasound image as five line segments (Figure 3.1). Based on differential measurements of these features in the image, it is possible to calculate most of the pose parameters of the ultrasound image relative to the phantom coordinate system.

Figure 3.1: The Multi Wedge phantom model comprising five different planes. The dashed lines show the ultrasound image intersection line segments. The transformation from the image to the phantom coordinate system is calculated with a closed-form solution.
An absolute measurement within the same image is still needed for the translation vector.

Closed-form calibration based on measurements of the relative pose of two different images using the hand-eye calibration technique has been proposed before [7, 11]. Although relative measurements have an inherent advantage, accurate estimation of the relative 3D poses of 2D ultrasound images is still a challenge. This work is based on in-plane differential measurements, which can also directly take advantage of the fine sampling rate of RF ultrasound data to achieve high measurement accuracy. Moreover, wedges have been used for alignment and calibration before [3, 12]. The proposed Multi Wedge phantom has been specifically designed and optimized for a closed-form solution using differential measurements.

For an independent accuracy evaluation of the calibration results, a set of two non-overlapping N-wires with known geometry and location relative to the wedges was incorporated in the phantom. This double N-wire phantom [19] was used for comparison purposes. The N-wire phantom works on the principle that a cross-sectional image of an N-wire produces three points in the image, and that the relative distance between these points determines the exact location of the imaging plane. Multiple stacked N-wires can be used to determine the full 6 DOF position and orientation of the imaging plane, which makes the method suitable for validation; many researchers use the N-wire calibration method.

The phantom (Figure 3.2) was precisely manufactured with the Objet30 desktop 3D printer (Objet Inc., Billerica, MA, USA).
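The N-wire localization principle just described can be sketched numerically. This is an illustrative reconstruction, not the thesis code: the function name, the axis-aligned example geometry, and the millimeter image coordinates are all assumptions.

```python
import numpy as np

def nwire_intersection(A, B, C, Pb, Pc):
    """Locate the 3D point where the image plane crosses the diagonal
    wire of one 'N'. A, B, C are the three collinear image points of
    the N (outer parallel wires and middle diagonal wire, in mm), and
    Pb, Pc are the known 3D endpoints of the diagonal in the phantom
    frame. The plane cuts the diagonal in the ratio |AB| / |AC|."""
    r = np.linalg.norm(B - A) / np.linalg.norm(C - A)
    return Pb + r * (Pc - Pb)

# axis-aligned toy geometry: parallel wires 40 mm apart, a diagonal
# rising 10 mm, and an image plane crossing at 30% of the diagonal
A = np.array([0.0, 0.0])
B = np.array([12.0, 0.0])
C = np.array([40.0, 0.0])
Pb = np.array([0.0, 0.0, 0.0])
Pc = np.array([40.0, 0.0, 10.0])
pt = nwire_intersection(A, B, C, Pb, Pc)
```

Stacking two such N-wires, as in the double N-wire phantom above, yields enough reconstructed 3D points to fix the full 6 DOF pose of the imaging plane.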
The 3D printer has 28 µm precision and the construction cost is relatively low (less than $200).

In the design of the Multi Wedge phantom, a suitable angle for each wedge is chosen so that the line segments are clear in the image and can be segmented accurately, while at the same time the slopes are still large enough to achieve low sensitivity of the calibration results to measurement error.

The signature of the RF echoes in all the columns will be the same if the ultrasound beam axis is perpendicular to the surface, since they all experience the same physical conditions. However, as the angle between the beam axis and the normal of the surface increases, the signature of the returned echo slightly changes. This phenomenon occurs because of the anisotropic and spatially variant point spread function of the ultrasound beam.

Although smaller inclination angles result in lower measurement error, the sensitivity of the calibration estimation to measurement error is very large for such low angles, which makes parameter identification harder. The extreme case is an inclination angle of zero, which means all the wedges are parallel and the problem is obviously impossible to solve.

Figure 3.2: Manufactured Multi Wedge phantom with optical markers and a double N-wire phantom incorporated in it.

3.3 Experimental Setup

In our calibration experimental setup, a SonixTOUCH ultrasound machine (Ultrasonix Medical Corporation, Richmond, BC, Canada) was used for ultrasound imaging. An L14-5 transducer with a 7.2 MHz center frequency, operated at 10 MHz in our experiments, was used for imaging. A single transmit focus was set to a depth of 35 mm to ensure that it was near to, or slightly lower than, all the line segments.
The transmit and receive F-numbers, assuming aperture growth and using the Ultrasonix-provided focus curves, are F8.8 on transmit and, starting at F5.8, on receive (dynamic receive focusing).

An Optotrak Certus optical tracker (Northern Digital Inc, Waterloo, Ontario, Canada) was used for tracking the transducer and the phantom. Note that the Optotrak was used here because of its high accuracy, but other trackers could also be used with the proposed calibration method. Random errors from less accurate stereo trackers may be accommodated by the use of redundant measurement algorithms. To track an object with this tracker, a set of active markers with a rigidly arranged geometry (a rigid body) was attached to the object. The tracker measures the relative transform from an arbitrary coordinate system defined on the rigid body to the tracker's reference coordinate system.

In our experimental setup, NDI's standard 4-marker sensor (rectangular rigid body configuration) was used to track both the phantom and the probe. A 4-marker sensor was also precisely incorporated into the design of the phantom in order to track the coordinate system of the wedges. Based on the Computer Aided Design (CAD) model of the phantom and the 4-marker sensor, the location of the wedges relative to the 4-marker sensor's coordinate system is known. This has been further validated with a calibrated stylus to make sure the 4-marker sensor has been precisely attached to the phantom's body.

3.4 Mathematical Formulation

In the calibration procedure, the coordinate systems defined on the markers' rigid bodies on both the phantom and the transducer are simultaneously acquired from the tracker. Therefore, the coordinate system of the phantom's wedges, P, with respect to the tracker's reference coordinate system, R, is always known.
The coordinate system of the transducer's markers, T, relative to the reference is also known at each frame (Figure 3.3).

Figure 3.3: The coordinate systems defined on the markers' rigid bodies on the transducer, T, and the phantom are measured in the tracker's reference coordinate system, R. The coordinate system of the phantom's wedges, P, relative to the phantom markers is known from the CAD model; therefore $T_{RP}$ can be calculated from the tracker measurements. $T_{RT}$ is also measured by the tracker.

Therefore, if $T_{PI}$, the transformation from the image coordinate system, I, to the phantom coordinate system, P, can be found, then $T_{TI}$, the transformation from the image to the transducer coordinate system, known as the calibration matrix, can be calculated:

$$T_{TI} = T_{TR}\, T_{RP}\, T_{PI}. \qquad (3.1)$$

Solving for the calibration transform requires measurement of the transformation from the image coordinate system to the phantom coordinate system, $T_{PI}$. We have proposed a new formulation for this problem in which we define the image coordinate system by two free vectors, $\vec{U}$ and $\vec{V}$, as the lateral and axial axes, and its origin, $P_0$, with respect to the phantom's coordinate system (Figure 3.4).

To find the pose of the ultrasound imaging plane, line segment features in the ultrasound image are extracted. The slope and intercept of each of the five lines depend on the pose of the ultrasound imaging plane relative to the phantom. These properties of the line segments can be measured very accurately by the differential measurement technique, as mentioned earlier.

The Multi Wedge calibration phantom can be defined by five different planes. Each plane equation is known from the CAD model. Therefore the normal vector, $\vec{n}_i$, and a point, $Q_i$, of each plane are known in the phantom coordinate system (Figure 3.4).
The wedge phantom appears as five line segments in the ultrasound image (dashed lines in Figure 3.1).

As mentioned before, $T_{PI}$ can be defined from two free vectors $\vec{U}$ and $\vec{V}$ and a point $P_0$, where $\vec{U}$ is a unit vector in the direction passing through the centers of the array elements (lateral) and $\vec{V}$ is a unit vector in the direction of the ultrasound beam (axial). $P_0$ is the origin of the imaging plane in the phantom coordinates.

The solution for $T_{PI}$ is divided into two stages. First the rotation matrix (derived from the vectors $\vec{U}$, $\vec{V}$) and the speed of sound scale, $S_y$, are solved. Then the translation vector is determined by solving for $P_0$.

Figure 3.4: Intersection of the ultrasound image and the phantom's ith plane. $\vec{U}$ is a unit vector in the direction passing through the centers of the array elements (lateral) and $\vec{V}$ is a unit vector in the direction of the ultrasound beam (axial). $P_0$ is the origin of the imaging plane. A plane equation on each wedge ($i$) is defined by a normal vector, $\vec{n}_i$, and a point, $Q_i$.

3.4.1 Rotation Parameters

The 3D representation of each pixel $(x_1, y_1)$ of the image in the phantom coordinate system can be calculated as:

$$P = P_0 + S_x x_1 \vec{U} + S_y y_1 \vec{V}. \qquad (3.2)$$

$S_x$ and $S_y$ are the pixel-to-millimeter ratios in the lateral and axial directions, respectively. Each line segment is, in fact, the intersection of two planes: the imaging plane and the wedge plane. Therefore, each point on the $i$th line segment also resides on the $i$th wedge plane and must satisfy its plane equation (Figure 3.4):

$$[P_0 + S_x x_1 \vec{U} + S_y y_1 \vec{V} - Q_i]\cdot\vec{n}_i = 0, \qquad (3.3)$$

$$P_0\cdot\vec{n}_i + S_x x_1 \vec{U}\cdot\vec{n}_i + S_y y_1 \vec{V}\cdot\vec{n}_i = \underbrace{Q_i^t\cdot\vec{n}_i}_{d_i}. \qquad (3.4)$$

Considering another point $(x_2, y_2)$ on the same line segment, we have:

$$P_0\cdot\vec{n}_i + S_x x_2 \vec{U}\cdot\vec{n}_i + S_y y_2 \vec{V}\cdot\vec{n}_i = Q_i^t\cdot\vec{n}_i. \qquad (3.5)$$

Now, by subtracting (3.4) from (3.5):

$$S_x(x_2 - x_1)\vec{U}\cdot\vec{n}_i + S_y(y_2 - y_1)\vec{V}\cdot\vec{n}_i = 0, \qquad (3.6)$$

and then dividing by $S_x$, we have:

$$\Delta x\, \vec{U}\cdot\vec{n}_i + k \Delta y\, \vec{V}\cdot\vec{n}_i = 0, \qquad (3.7)$$

where $k = \frac{S_y}{S_x}$, $\Delta x = x_2 - x_1$ and $\Delta y = y_2 - y_1$. Then, dividing (3.7) by $\Delta x$ and setting $m = \frac{\Delta y}{\Delta x}$:

$$\vec{U}\cdot\vec{n}_i + k m_i\, \vec{V}\cdot\vec{n}_i = 0. \qquad (3.8)$$
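Equation (3.8) can be verified numerically: for any synthetic pose and wedge (all values below are made up purely for illustration), the slope of the intersection line measured in pixel coordinates satisfies $\vec{U}\cdot\vec{n} + k\,m\,\vec{V}\cdot\vec{n} = 0$.

```python
import numpy as np

# hypothetical pose and wedge, chosen only to exercise Eq. (3.8)
Sx, Sy = 0.05, 0.04                          # mm per pixel (assumed)
U = np.array([1.0, 0.0, 0.0])                # lateral axis (unit)
V = np.array([0.0, 0.2, -1.0]); V /= np.linalg.norm(V)  # axial axis
P0 = np.array([5.0, 2.0, 40.0])              # image origin in phantom
nrm = np.array([0.3, 0.1, 1.0]); nrm /= np.linalg.norm(nrm)
d = 12.0                                     # wedge plane: nrm . P = d

def y_on_plane(x):
    """Axial pixel where image column x meets the wedge, from (3.4)."""
    return (d - nrm @ P0 - Sx * x * (U @ nrm)) / (Sy * (V @ nrm))

m = (y_on_plane(90.0) - y_on_plane(10.0)) / (90.0 - 10.0)  # line slope
k = Sy / Sx
residual = U @ nrm + k * m * (V @ nrm)       # Eq. (3.8), approximately 0
```

Because the intersection is exactly linear in $x$, the residual vanishes to floating-point precision, regardless of which two columns are used.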
(3.8)In the above equation mi is the slope of the ith line segment (measured by thedifferential technique) and ~ni is known from the plane equation for the correspond-ing wedge. ~U , ~V and k must be solved under the constraint that ~U and ~V are unitvectors. Using at least five linear equations such as (3.8) (for five independentplanes) is sufficient to solve for these unknowns.One way of deriving ~U and~V while getting a non-zero value on the right side isby taking a differential measurement from points on two different parallel wedgesinstead of two points on the same wedge (Figure 3.5). This means that the secondpoint (x2,y2) is on a plane with the same normal vector, ni, but different right-sideconstant:P0 ·~ni +Sxx2~U ·~ni +Syy2~V ·~ni = d′i . (3.9)Now by subtracting (3.4) from (3.9) (instead of (3.5)) we have:Sx(x2− x1)~U ·~ni +Sy(y2− y1)~V ·~ni = d′i −di, (3.10)∆x~U ·~ni + k∆y~V ·~ni =d′i −diSx. (3.11)52Figure 3.5: Multi wedge phantom with two pairs of parallel wedges.Combining (4.12) with (2.6) into matrix format:∆x ∆y1 mi~U ·~nik~V ·~ni=d′i−diSx0 , (3.12)which can be easily solved to obtain ~U ·~ni and k~V ·~ni which are projections of ~Uand k~V on vector ni. To solve for ~U , ~V and k we need one more pair of parallelwedges with a different normal vector along with imposing the unity constraint.The following describes how to solve for ~U and ~V and k from the projectionsof ~U and k~V on ~ni vectors (~U ·~ni and k~U ·~ni). We need these projected values forat least two different values of ~ni. First we find ~U using two projection values of itover ~n1 and ~n2:~U · ~n1 = u1,~U · ~n2 = u2. (3.13)One more projection of ~U over another vector (e.g. ~g) non-parallel to ~n1 and~n2 is enough to find ~U . This means that we have:53~n1t~n2t~gt3×3~U =u1u2u3 , (3.14)~U =~n1t~n2t~gt−13×3u1u2u3 . 
(3.15)This looks like another pair of parallel wedges with normal vector~g is needed,but in fact it is possible to solve the problem using only ~n1 and ~n2 by defining ~g asa combination of them and also using the unity constraint for ~U :~g = ~n1× ~n2. (3.16)In order to find the projection of ~U over ~g, u3, we define another vector, ~f ,which is perpendicular to both~g and ~n1:~f =~g× ~n1 = (~n1× ~n2)× ~n1 = ~n2− (~n1 · ~n2)~n1. (3.17)The projection of ~U over ~f can be calculated from:~U ·~f = ~U · ~n2− (~n1 · ~n2)~U · ~n1 = u2− (~n1 · ~n2)u1. (3.18)Since ~n1,~g and ~f are all perpendicular to each other (~n1 ⊥~g⊥ ~f ), they form anorthogonal basis and therefore ~U can be written as a linear combination of them:~U = u1~n1 +u3~g||g||2+(~U ·~f )~f|| f ||2. (3.19)Now by applying the unity constraint for ~U we have:||~U ||2 = ~U ·~U = u21 +u23||g||2+(~U ·~f )2|| f ||2= 1, (3.20)And finally u3 can be calculated from:54u3 =±||g||√1−u21−(~U ·~f )2|| f ||2. (3.21)The sign ambiguity should be resolved based on the approximate estimation ofthe calibration matrix given the geometry of the setup. Now that u3 is known, it ispossible to find ~U using (3.15).Now that ~U is solved, we can solve for ~V and k using (4.12) for each pair ofparallel wedges and using (2.6) for the fifth wedge:(∆x)1~U · ~n1 + k(∆y)1~V · ~n1 =d′1−d1Sx, (3.22)(∆x)2~U · ~n2 + k(∆y)2~V · ~n2 =d′2−d2Sx, (3.23)~U · ~n3 + km3~V · ~n3 = 0. (3.24)These three equations can be represented by the following matrix equation:k~V · ~n1~V · ~n2~V · ~n3=(d′1−d1Sx− (∆x)1~U · ~n1)/(∆y)1(d′2−d2Sx− (∆x)2~U · ~n2)/(∆y)2−(~U · ~n3)/m3=~b, (3.25)Which can be solved to find ~V ′ = k~V :~V ′ = k~V =~n1t~n2t~n3t−1×~b. (3.26)This formulation gives both ~V and k from ~V ′ simply as its norm and its normalizedvector:k = ||~V ′|| , ~V =~V ′k. (3.27)And eventually, the rotation part of TP I can be determined as follows:55R = [X3×1 Y3×1 Z3×1] , (3.28)X = ~U , (3.29)Z =X×~V∥∥∥X×~V∥∥∥, (3.30)Y =Z×~U∥∥∥Z×~U∥∥∥. 
(3.31)In this approach, two pairs of parallel wedges are used to solve for rotationparameters but since for solving translation parameters three wedges with differentnormal vectors are required, one other wedge with a normal vector non-parallel toeach of wedge pairs is also needed (a total of five wedges).Here, we have calculated k, which is the ratio of axial to lateral pixel to mil-limeter ratio. Lateral pixel to milliliter ratio is related to the spacing between imagecolumns which is the spacing of the transducer’s elements. This spacing is gener-ally provided by the manufacturer and, for example, for the transducer used in thisstudy it is 0.3 mm. On the other hand, axial pixel to milliliter ratio depends onthe chosen image formation spacing and the speed of sound. Using the calculatedvalue for k and assuming either of Sx or Sy is known, the other one can be calcu-lated. Also, the deviation of the crossing angle between ~U and ~V from 90◦ can beexpressed as the skew angle.Up to this point, the rotational matrix and speed of sound variation scale are de-termined from ~U ,~V and k. The derivation of the translation parameters is describednext.3.4.2 Translation ParametersTo find the translation part of TP I , P0 should be calculated. Now looking at (3.4),di and Sx are known and Sy, ~U and ~V have been determined from the previous step(Section B), so αi and βi are also known:56P0 ·~ni +Sxx~U ·~ni︸︷︷︸αi+Syy~V ·~ni︸︷︷︸βi= Qti ·~ni︸ ︷︷ ︸di. (3.32)In order to solve P0, we need to have at least three independent equations asabove. Therefore, x and y location of three points on three wedges with differentnormal vectors are required. The matrix format for these equations is:~n1t~n2t~n3tP0 =d1−Sxx1α1−Syy1β1d2−Sxx2α2−Syy2β2d3−Sxx3α3−Syy3β3= D, (3.33)NP0 = D. (3.34)(3.34) is a linear system of equations and can be easily solved:P0 = N−1D. (3.35)Of course it can be solved with more than three points in a least-squares senseto improve accuracy. 
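The closed-form recovery of U, V, k and P0 (eqs. 3.13-3.35) can be sketched compactly in numpy. This is an illustrative re-implementation under our own naming, not the authors' code; the `sign` argument stands in for the ambiguity resolution of eq. (3.21) from an approximate calibration.

```python
import numpy as np

def solve_U(n1, n2, u1, u2, sign=1.0):
    """Recover the unit vector U from its projections u1 = U.n1 and u2 = U.n2
    (eqs. 3.13-3.21). `sign` resolves the +/- ambiguity of u3."""
    g = np.cross(n1, n2)                         # eq. (3.16)
    f = np.cross(g, n1)                          # eq. (3.17)
    Uf = u2 - np.dot(n1, n2) * u1                # eq. (3.18)
    u3 = sign * np.linalg.norm(g) * np.sqrt(
        1.0 - u1**2 - Uf**2 / np.dot(f, f))      # eq. (3.21)
    return np.linalg.solve(np.vstack([n1, n2, g]),
                           np.array([u1, u2, u3]))  # eq. (3.15)

def solve_V_and_k(n1, n2, n3, b):
    """Recover V and k = Sy/Sx from the projections b_i = k V.n_i
    (eqs. 3.25-3.27)."""
    Vp = np.linalg.solve(np.vstack([n1, n2, n3]), b)  # V' = kV, eq. (3.26)
    k = np.linalg.norm(Vp)
    return Vp / k, k

def rotation_from_UV(U, V):
    """Rotation part of T_PI (eqs. 3.28-3.31): X = U, Z = X x V, Y = Z x U."""
    X = U
    Z = np.cross(X, V); Z /= np.linalg.norm(Z)
    Y = np.cross(Z, X); Y /= np.linalg.norm(Y)
    return np.column_stack([X, Y, Z])

def solve_P0(normals, D):
    """Translation P0 from three (or more, least-squares) plane equations
    (eqs. 3.33-3.35)."""
    N = np.vstack(normals)
    return np.linalg.lstsq(N, np.asarray(D), rcond=None)[0]
```

In a synthetic round-trip test (build projections from a known U, V, k and P0, then solve), the functions above recover the ground truth to machine precision, which is a useful sanity check before applying the formulation to measured slopes.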
Now both the rotation and translation parts of T_PI are known, and the calibration transform, T_TI, can be found from (3.1).

3.5 Phantom Design

In the design of the Multi Wedge phantom, a suitable angle for each wedge is chosen so that the line segments are clear in the image and can be segmented accurately, while at the same time the slopes are still large enough to keep the sensitivity of the calibration results to measurement error low.

The signature of the RF echoes in all the columns will be the same if the ultrasound beam axis is perpendicular to the surface, since they all experience the same physical conditions. However, as the angle between the beam axis and the surface normal increases, the signature of the returned echo slightly changes. This phenomenon occurs because of the anisotropic and spatially variant point spread function of the ultrasound beam.

Although smaller inclination angles result in lower measurement error, the sensitivity of the calibration estimation to measurement error is very large for such low angles, which makes parameter identification harder. The extreme case is an inclination angle of zero, which means all the wedges are parallel and the problem is clearly unsolvable.

Figure 3.6: (a) Slope measurement error (Error(∆y)/∆x) versus the inclination angle (ϕ). (b) Ratio of outcome error (absolute sum of rotation angles) to measurement error (°/µm) for different inclination angles (ϕ).

To investigate the effect of wedge inclination on slope measurement error, the RF image of the phantom was simulated using the Field II ultrasound simulation package [38]. The plane was modeled with about 10,000 discrete scatterers at 0.1 mm spacing. The angle between the beam axis and the plane normal was varied while observing the error in the slope measurement.
A single focus point at the plane depth (20 mm) was used, with imaging parameters similar to those of the real experiment.

In the Field II simulation, the angle between the beam axis and the plane normal, ϕ, was varied over the interval 0 to 45 degrees. The error of the cross-correlation-based slope measurement was then calculated (Figure 3.6a) by dividing the axial measurement error, Error(∆y), by the lateral distance (∆x ≈ 5 mm for each wedge). The results show that as the plane tilts towards higher angles with respect to the ultrasound beam, larger errors in slope measurement are observed. This is because of the change in the echo signature across columns.

Our observations from a slope measurement experiment, imaging a flat metallic surface immersed in water in different poses, also agree: as the plane's inclination angle increases, the line appears thicker in the image and the standard deviation of the slope measurement error increases.

The sensitivity of the system to measurement error was evaluated through a simulation experiment. In the closed-form mathematical model of the system, the wedge inclination angle, ϕ, was varied over a range of 1 to 45 degrees. The axial measurement error was modeled as a random variable with a normal distribution of zero mean and constant standard deviation (10 µm). Over 500 iterations, the estimated rotation parameters were compared to the true values to calculate the estimation error of the system. Since the level of this output error depends on the input (axial) measurement error, the ratio of output to input error was calculated. Figure 3.6b shows the ratio of the standard deviation of the error in the rotation parameters to the standard deviation of the axial measurement error.

As expected, the results show that the system is more sensitive to input error at lower angles, and the sensitivity decreases as ϕ increases.
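The ratio-of-errors analysis can be illustrated with a toy Monte-Carlo sketch. The full simulation perturbs the complete closed-form model; the minimal version below, a simplification of our own, propagates axial noise through a single wedge's slope to an angle estimate and reproduces the qualitative trend of Figure 3.6b (sensitivity falls as ϕ grows).

```python
import numpy as np

rng = np.random.default_rng(0)

def angle_error_ratio(phi_deg, sigma_axial_um=10.0, dx_mm=5.0, n_iter=500):
    """Monte-Carlo ratio of output (angle) error std to input (axial) error std,
    in deg/um. Toy model: one wedge of inclination phi observed through the
    slope m = dy/dx, with Gaussian noise added to the axial difference dy."""
    dx_um = dx_mm * 1e3
    dy_um = dx_um * np.tan(np.radians(phi_deg))   # true axial rise over dx
    noise = rng.normal(0.0, sigma_axial_um, n_iter)
    phi_est = np.degrees(np.arctan((dy_um + noise) / dx_um))
    return np.std(phi_est - phi_deg) / sigma_axial_um
```

For this toy model the ratio scales as cos²ϕ, so a steep wedge is less sensitive to axial noise than a shallow one, in agreement with the simulated trend.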
We noticed that this ratio is not strongly dependent on the level of measurement error, which means that similar results are obtained for different values of input error (instead of 10 µm).

According to these results, wedges with a 10° inclination angle are a reasonable compromise between maintaining a constant echo signature (low measurement error) and parameter identification (low sensitivity), and hence this value was used in the design of the phantom in this work.

3.6 Automatic Feature Extraction and Measurement

As described before, five line-segment features must be detected in the ultrasound image in order to perform calibration. In this section, the proposed automatic image processing algorithm is described (Figure 3.7). First, a morphological open operation with line structuring elements was applied to the image. To be able to pick up line segments with different slopes, line structuring elements with different slope values (in this case in the range of −10° to 10° with 5° steps) were used. The results of the opening operation with the different structuring elements (after applying a threshold) were combined with a logical OR into a single binary image. Then a morphological dilate operation with a disk structuring element was applied to this binary image.

Figure 3.7: Automatic feature extraction from the ultrasound image to find the line segments: morphological open and threshold, dilation, keeping the first segment in each column and discarding the rest, detecting connected regions, fitting a rectangle on each region, and finding the slopes with cross correlation.

The next step was to remove outlier line segments below the actual surface of the phantom by removing the underlying (noisy) part. For this purpose, in each column of the binary image, starting from the top, the first segment was kept and the rest of the column was discarded. Thereafter, all connected regions, using an 8-connected neighborhood with no holes, were selected.
Specific details of the operations and the corresponding MATLAB functions are summarized in Table 3.1.

Finally, each region was considered a candidate line segment. A quality-check procedure was then performed to verify the correct number of line segments. Then, using the previously described cross-correlation technique, the slope of each line was calculated from pairs of image columns (B-Mode or RF data). After fitting a line to each segment, the middle point, xi, was taken and its depth, yi, was automatically measured from the peak of the RF echo. The line depth (related to the line intercept) is an absolute measurement with an uncertainty in the exact plane-image intersection in the RF signal. Although this value has greater inherent error than the differential measurements, the result (translation parameters) is not very sensitive to this measurement (investigated in Section 3.7.1).

Table 3.1: Specific details of the operations and the corresponding MATLAB functions in the image processing procedure.

Operation/Attribute   Value                         MATLAB command
Image size            340 × 319                     -
Morphological open    LEN = 30, DEG = −10:5:10      SE = strel('line', LEN, DEG); IM2 = imopen(IM, SE);
Binary threshold      Threshold = MAX/6             -
Morphological dilate  R = 3, N = 4                  SE = strel('disk', R, N); IM2 = imdilate(IM, SE);
Segmentation          conn = 8, opt = 'noholes'     bwboundaries(I, conn, opt);

For calibration, 100 different images of the Multi Wedge phantom were acquired and the automatic feature extraction algorithm was applied to them. The number of detected line segments in an image is a measure of the algorithm's performance. In 88 images, five line segments were correctly detected. In the others, 4, 6 and, in one case, 3 line segments were detected.
The frequency distribution of the number of detected line segments is shown in Table 3.2.

Once an estimate of the calibration is obtained (using the first pose), it is possible to fit a geometric model of the phantom to the ultrasound image and use it as an initial guess for segmentation, improving the accuracy of the segmentation procedure. This is left for future work.

3.7 Experiments and Results

In a set of experiments using both simulated and experimental data, several evaluation measures were obtained. The sensitivity of the method to measurement error was evaluated using simulated data. Experimental results using different numbers of images of the Multi Wedge phantom come next. Calibration precision and calibration accuracy were measured for the proposed method using either RF or B-Mode data and compared to the Double N-wire method. The method with RF data is referred to as "RF Multi Wedge" and the method with B-Mode data as "B-Mode Multi Wedge". When using the B-Mode images, the same cross-correlation technique was used to measure the differential distances on the columns of the B-Mode images. The peak of the normalized cross correlation was used as a measure of the similarity of two signals, and the location of the peak gives their relative shift. The peak of the normalized cross correlation was always greater than 0.9. By applying this method to different pairs of scan lines, the slope of each line segment was measured. The outliers were then removed and the average of the remaining values was taken.

Table 3.2: Frequency distribution of the number of detected line segments in 100 ultrasound images.

Number of line segments    3    4    5    6
Count of occurrences       1   10   88    1

For the Double N-wire method, 100 different images of the double N-wire (integrated into the phantom) were captured and used for calibration.
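The normalized cross-correlation shift measurement described above can be sketched in numpy (an illustrative version; the function name and the simple global normalization are our own choices):

```python
import numpy as np

def subcolumn_shift(col_a, col_b):
    """Axial shift (in samples) of scanline col_b relative to col_a via
    normalized cross correlation; the peak location gives the relative shift
    and the peak value measures the similarity of the two signals."""
    a = (col_a - col_a.mean()) / (col_a.std() * len(col_a))
    b = (col_b - col_b.mean()) / col_b.std()
    xc = np.correlate(a, b, mode='full')
    # lag index -> shift: positive when col_b's echo arrives deeper (later)
    shift = (len(col_a) - 1) - int(np.argmax(xc))
    return shift, float(xc.max())
```

The slope of a line segment then follows as m = shift/∆x for two columns ∆x samples apart; averaging over many column pairs, after outlier rejection, yields the slope values used in (3.8).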
An automatic segmentation algorithm, as explained in [19], was used.

3.7.1 Sensitivity Analysis to Measurement Error

To estimate the sensitivity of the method to measurement error, the calibration process was simulated in MATLAB using the mathematical model of the problem. The simulation helps in understanding how measurement error propagates into the calibration matrix. In this simulation experiment, the axial measurement error (ε) was modeled as a random variable with a normal distribution of zero mean and standard deviation varying from 1 to 50 µm. Note that at a 40 MHz sampling rate, the RF sample interval is about 20 µm. The mean and standard deviation of the calibration parameters were calculated (1000 iterations). The Euclidean norms of the mean and the standard deviation for the translation and rotation vectors were calculated and are shown in Figure 3.8.

Figure 3.8: Sensitivity analysis of RF Multi Wedge to axial measurement error. (a) Norm of the rotation error vector (mean and standard deviation). (b) Norm of the translation error vector (mean and standard deviation).

3.7.2 Calibration Experimental Results

Calibration parameters were calculated using different numbers of images. In each pose, the transducer was fixed with a mechanical arm while acquiring the ultrasound image and capturing the tracking information. Care was taken that all five wedges were clearly visible in the ultrasound image and that the phantom was scanned from all possible positions and angles. To minimize tracking error, the tracking information was averaged for each pose.
For all the images, the imaging depth and focus were set to 4 cm and 2 cm, respectively.

Each time, ns (= 2 to 87) images were randomly chosen from the 88 segmented images and the calibration parameters were calculated. The rotation matrix was decomposed into yaw, pitch and roll angles (α, β, γ) using the method described in [71]:

R = Rz(α) Ry(β) Rx(γ). (3.36)

For each set of ns images, the calibration solution was derived independently and the average of the calibration parameters was taken. The number of combinations of selecting ns images out of 88 is fairly large except for ns = 1. Therefore, in 250 iterations, ns images were randomly selected from the data set. The standard deviation and the average of the results were calculated and are shown in Figure 3.9. These results show that the standard deviation of the error decreases rapidly as the number of input images increases. A relatively small number of images is sufficient to achieve accurate results. Note that the order of the images is unimportant.

Figure 3.9: Calibration results of the RF Multi Wedge method using 2 to 87 images of the phantom. Mean (red) and standard deviation (blue) of the calibration parameters. (a) Translation parameters (tx, ty, tz) [mm]. (b) Rotation parameters (α, β, γ) [deg].

Figure 3.9 shows that the in-plane rotation (α) can be determined with more precision than the out-of-plane rotations (β and γ). The reason is that, compared to the pitch and roll angles, a variation in the yaw angle causes a larger change in the image features (slopes). This rotation error propagates to the translation parameters and results in a relatively larger error in tz. This is an inevitable effect of a 2D imaging modality, which affects other 2D calibration methods as well (e.g., Fig. 11 in [19]).

Figure 3.10: CR measure of the calibration parameters when using different numbers of images, ns, for (a) the top left corner point and (b) the bottom right corner point.

3.7.3 Calibration Precision

One way to measure calibration precision is by determining the variability of the reconstructed positions of points in the image using different calibration solutions. In this approach, a certain point in the image (e.g., the bottom right corner) is reconstructed using different calibration solutions and the variation is measured. There is no need to image a phantom, so alignment and segmentation errors are not involved. Moreover, the reconstruction can be performed in the marker's coordinate system rather than the reference coordinate system, and hence tracking error can also be eliminated [35]. This approach is known as Calibration Reproducibility (CR) and is defined as:

µ_CR = (1/N) Σᵢ₌₁ᴺ | T_TI,i p_I − p̄_T |, (3.37)

where T_TI,i is any of the N calibration solutions, p_I is an arbitrary point in image coordinates and p̄_T is the average of the transformed point in the transducer coordinate system.

Here, the CR test was performed on both the Multi Wedge and the Double N-wire methods for different numbers ns of images. For each ns (= 2 to 86), in 200 iterations (N = 200), the calibration matrix was computed for ns randomly chosen images from the data collection.
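Equation (3.37) is straightforward to compute; the sketch below (our own illustrative names, homogeneous 4x4 calibration matrices assumed) makes the definition concrete:

```python
import numpy as np

def calibration_reproducibility(T_list, p_I):
    """mu_CR (eq. 3.37): mean distance of a transformed image point from its
    average position over N calibration solutions T_list."""
    pts = np.array([(T @ np.append(p_I, 1.0))[:3] for T in T_list])
    p_bar = pts.mean(axis=0)                 # average transformed point
    return float(np.mean(np.linalg.norm(pts - p_bar, axis=1)))
```

Because only the calibration solutions vary, µ_CR isolates calibration repeatability from tracking, alignment and segmentation errors.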
The variability of the top left (p_I = [0, 0, 0] mm) and the bottom right (p_I = [38, 40, 0] mm) points was computed and averaged over all iterations. The µ_CR values for the top-left and bottom-right corner points are depicted in Figure 3.10.

Table 3.3: Calibration PRA test results for three different calibration methods using a stylus (65 points). Distance error (mm), mean ± standard deviation.

Depth region         Double N-wire    B-Mode Multi Wedge    RF Multi Wedge
Top (10-25 mm)       0.48 ± 0.50      0.26 ± 0.45           0.16 ± 0.46
Center (25-40 mm)    0.18 ± 0.39      0.16 ± 0.38           0.08 ± 0.37
Bottom (40-55 mm)    0.42 ± 0.44      0.32 ± 0.39           0.24 ± 0.38
Average              0.25 ± 0.43      0.18 ± 0.40           0.09 ± 0.39

3.7.4 Calibration Accuracy

PRA is probably the most objective measure of accuracy: a point is scanned and its location is reconstructed in 3D space using the calculated calibration matrix [35]. The true 3D location of the point phantom is determined by a tracked stylus. This test was performed using a stylus (NDI 4-marker probe). The stylus was pre-calibrated by the manufacturer but was also calibrated by the pivot procedure in the NDI software. The reported accuracy was 0.15 mm 3D root mean squared (RMS) error and 0.1 mm mean error. To confirm these figures, a planar phantom was created with a 3D printer, comprising a 3 × 4 grid of semi-spherical holes on the surface with 30 mm spacing. Using the stylus, the distances between the holes were measured. The size of the holes was the same as the stylus bead, to make sure the bead remained fixed at each point. The stylus was positioned at different angles during this experiment. In a total of 17 measurements between adjacent points in both directions, the error was 0.07 ± 0.13 mm.

The stylus was affixed in a water bath and the transducer was placed in 65 different positions. In each position the transducer was aligned carefully so that the stylus bead was clearly visible in the ultrasound image. This test was performed over an extended imaging depth of 5.5 cm (compared to the 4 cm imaging depth used for calibration).
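The per-image point reconstruction step of the PRA test can be sketched as follows (hypothetical function and variable names; homogeneous 4x4 transforms assumed): the segmented bead is mapped from image coordinates into the reference frame through the calibration and the tracked transducer pose, then compared with the stylus-tip position.

```python
import numpy as np

def point_reconstruction_error(T_TI, T_RT, p_I, p_R_tip):
    """PRA error for one image: the segmented bead p_I (mm, image coords) is
    mapped to reference coords via the calibration T_TI and the tracked
    transducer pose T_RT, and compared with the stylus tip p_R_tip."""
    p_R = T_RT @ (T_TI @ np.append(p_I, 1.0))
    return float(np.linalg.norm(p_R[:3] - p_R_tip))
```

Averaging this error over all 65 poses gives the PRA figures reported in Table 3.3.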
The stylus was positioned at different angles and different depths of the image. The stylus bead was manually segmented in the ultrasound images by fitting a circle with a known radius to the bead image. To minimize tracking error, the tracking information was averaged for both the transducer and the stylus at each pose while both were fixed. The Euclidean distance error was calculated over the 65 images for all the methods and is summarized in Table 3.3. The average point reconstruction error for the "RF Multi Wedge" method was 0.09 ± 0.39 mm. The range of skew angles was small, ±0.5°.

3.8 Summary

Despite all these efforts, the best reported accuracies for ultrasound calibration are on the order of 1 to 3 mm [36], which may adversely affect the result of ultrasound-guided interventions and possibly limit the scope of new applications. Ease of use, simplicity, precision (repeatability) and accuracy are among the most important factors in ultrasound calibration.

In this chapter, we proposed a novel closed-form method for freehand 3D ultrasound calibration that significantly improves calibration accuracy compared to state-of-the-art methods while keeping the calibration procedure fast, simple and suitable for intra-operative use. The method was developed based on specific physical properties of the ultrasound imaging system. The relative shift of two RF-echo pulses hitting the same plane surface is a very accurate measure on which to base a calibration technique. With this in mind, and based on the simulations and experiments, a Multi Wedge phantom was proposed. There is a trade-off in the design of the phantom.
For a given measurement error, the calibration error decreases as the surfaces in the phantom become steeper and more distant from each other; on the other hand, the measurement error for those surfaces then gets larger.

An accuracy of 0.09 ± 0.39 mm was achieved, which is significantly better than most other calibration methods, including the well-known Double N-wire method [19]. A PRA of 2.28 mm is reported in [36] for the traditional single-wall method using B-Mode data of a linear transducer with an imaging depth of 3 cm; PRA accuracies of 3.63 mm for the Stylus method and 1.67 mm for the Cambridge phantom are also reported [36]. The method is also very precise: the CR test shows that using only a single image of the phantom is almost as precise as using 50 images of a Double N-wire phantom.

Note that direct comparison of errors between calibration methods in different papers should be done with caution, since different apparatus and parameters, such as image size, image resolution, beam profile and ultrasound frequency, were used. It should also be noted that PRA tests, like other measures of calibration, include errors in locating the point target within the image, which affects the ability of a point to act as a "gold standard". Such concerns about the gold standard are analogous to the concerns about choosing appropriate image features for calibration; i.e., at the sub-millimeter level, it is difficult to separate calibration errors from errors in the gold standard.

The calibration procedure can be performed in real time using the automatic image processing method, and therefore the procedure takes less than one minute.

Our proposed Multi Wedge phantom was also used for the calibration of a 3D ultrasound transducer [1].

Future work will focus on intra-operative calibration. The phantom can be enclosed in a sterile, fluid-filled, rubber-topped box for intra-operative use.
Since very few images, or even a single image, are required for calibration, the transducer can be fixed into a mechanical slide to capture the images in a fast sweep. This is also helpful in acquiring high-quality images of the phantom with a clear depiction of all the wedges. In fact, when the transducer is tilted more than 20 degrees, the error increases rapidly (Figure 3.6a). The limited range of tilt angles, due to the deleterious effect of ultrasound beam thickness at large angles as explained earlier, can reduce the accuracy of the out-of-plane parameters of the calibration.

Chapter 4

Single Camera Closed-Form Real-time Needle Tracking for Ultrasound-Guided Needle Insertion

4.1 Introduction and Background

The effectiveness of a treatment and the correctness of a diagnosis are often dependent on the accuracy of the related percutaneous insertion. In procedures such as biopsy (e.g., prostate, kidney, breast and liver), brachytherapy and anesthesia, millimeter-level placement accuracy is required. In some brain, fetus, eye and ear procedures, sub-millimeter placement accuracy is desirable [2].

For instance, accurate needle-tip placement is a challenge in epidural needle insertion [9], a central nerve block technique achieved by injection of a local anesthetic close to the spinal nerves [5]. Studies suggest that inappropriate needle position may occur in as many as 30% of lumbar epidural injection procedures performed with either the sacral hiatus or the lumbar puncture approach, and affects the outcome [6]. Even small changes in needle angle at the skin may result in fairly large changes in the position of the needle tip when it reaches the spinal meninges at a depth of 4 to 6 cm [8].

In many applications, the intervention can be aided by ultrasound guidance. However, conventional free-hand needle placement involves simultaneous manipulation of the ultrasound probe and the needle while the physician mentally relates images on a video screen to locations inside the patient [14].
In fact, it is a challenge for the surgeon to visualize the needle trajectory inside the tissue just by looking at the ultrasound image and the needle. It is especially difficult to align the cross-sectional ultrasound plane with the needle tip at the desired spot in the tissue [40]. The main challenge is to select the appropriate puncture site and needle orientation so that, upon insertion, the needle tip follows the desired trajectory and hits the target in the image plane. Figure 4.1 depicts an example of an out-of-plane needle trajectory planning process.

In other words, the visualization and hand-eye coordination skills of the surgeon play a major role in the success of this process. If the insertion fails due to either a wrong puncture site or a wrong needle trajectory, the needle misses the target and a reinsertion is required [17].

To improve the performance of ultrasound-guided needle insertion, it is helpful to track the pose of the needle with respect to the coordinate system of the ultrasound image. The trajectory of the needle can then be displayed on the image in real time so that the proper trajectory can be chosen before the skin puncture (Figure 4.1). In our proposed system we assume that the needle is not bent. This assumption can be ensured in the trajectory planning step and before deep insertion. However, the method will be less accurate if there are attempts to redirect the needle trajectory by bending the shaft during the insertion. In some applications, such as epidural anesthesia, the standard clinical procedure [8] requires care when gripping the needle to ensure that it does not bow, and when redirecting a needle it is important to withdraw the tip into the subcutaneous tissue.
This protocol is followed because, if the tip remains embedded in one of the vertebral ligaments, attempts at redirecting the needle will simply bend the shaft and will not reliably change the needle direction [8].

Figure 4.1: Out-of-plane needle trajectory planning. The appropriate needle trajectory for the given puncture site on the skin and the selected target on the ultrasound image is shown with the dashed line.

4.2 Tracking Technique

There are different types of commercial systems for tracking needles, such as magnetic, mechanical and optical trackers. Most systems are designed to operate over a workspace of several cubic meters, so they have an inherent trade-off of accuracy versus workspace. Magnetic trackers are commonly used but are affected by nearby metallic objects and require a fixed field generator, while mechanical guides and trackers limit the freedom of movement. Fixed camera-based optical trackers do not have these drawbacks, but require a line of sight between the tool and the cameras (usually mounted on a stand beside the subject) [55], which is especially difficult for wall-mounted cameras.

Previous work has shown that small stereo cameras can be mounted directly on the ultrasound transducer to allow better trade-offs between accuracy, workspace, cost and line-of-sight issues [17, 40]. The key insight is that the workspace of the needle relative to the ultrasound is small, so trading off a small workspace can yield high tracking accuracy.

Chan et al. [17] worked on a method with two cameras mounted on a 2D probe. They estimated the needle trajectory by observing the needle near the puncture site on the skin. They showed the feasibility of the technique, but found that the error of calibration of the two cameras (the intrinsic and extrinsic parameters), together with the error of calibration between the ultrasound imaging plane and the probe, was a limiting factor.
In fact, the calibration errors of each step accumulate and lead to a significant final error in the needle trajectory estimation. They achieved an accuracy of 3.1 ± 1.8 mm.

Khosravi et al. [40] also used stereo cameras for needle pose estimation but tried to eliminate the need for several calibration steps by employing a direct mapping from un-calibrated stereo images to needle position and orientation. They collected a set of needle poses and the corresponding 2D positions in the images. A training algorithm was then used to find the mapping between the images of the needle and its 3D pose. They achieved an average error of 2.4 mm in position and 2.61° in orientation.

Wang et al. [77] used a Kinect sensor [80] as a tracking tool for needle pose estimation and achieved accuracies ranging from 2.6 ± 1.7 mm to 6.9 ± 5.1 mm. In more recent work [73], a stereo camera system was used for needle pose estimation, and this tracking system was commercialized [72], confirming the interest in such camera-based trackers mounted on the ultrasound transducer. The accuracy of this system was 3.3 ± 2.3 mm.

A single camera would provide further improvements in cost, complexity and size. Therefore, a single camera system is proposed here. The main expected challenge of tracking with a single camera is that the pose of the needle is less accurately measured in the range direction, but this limitation can be mitigated by exploiting the known markings on the needle. The enabling innovation is the recognition that many medical needles (e.g., epidural needles) already have markings for depth measurement (usually 1 cm markings), so these known markings can be used in the tracking algorithm.

4.2.1 Single Camera Pose Estimation

Single camera pose estimation methods can be categorized into two approaches. First are feature-based training methods, which employ registered labels in a database to find the position of the camera based on a comparison between the captured features and the database.
Second are mathematical-geometrical methods, which employ geometrical relations between the captured images and the camera position to determine the position and orientation of the camera [68].

In feature-based methods, a feature detector is used to extract features from each image and compare them to the feature database. For instance, Collet et al. [21] used SIFT [49] features to extract local descriptors from images in their system for object pose estimation. However, images of a needle normally do not have enough texture to produce enough SIFT features for accurate pose estimation. Moreover, issues such as variable light reflectance on the metallic surface of a needle can make correct feature extraction and matching more difficult. In addition, although feature-based methods are fast, their pose estimation accuracy is limited by the size of the training database [68].

On the other hand, mathematical-geometrical methods are not constrained by a training dataset. Their accuracy is typically determined by the feature extraction step and the camera's intrinsic calibration accuracy. Mathematical-geometrical methods fall into two categories: analytic closed-form methods and iterative optimization methods.

Analytic closed-form methods provide an analytical solution for the pose of the needle (or other object) based on the geometrical properties of the model and the corresponding features extracted from the image. The main advantage of these methods is their low computational cost, which makes them suitable candidates for real-time applications. Examples include three-point perspective pose estimation [32], pose estimation using 2D-to-3D corner correspondence [70] and pose estimation from metric rectification of coplanar points [51].
A novel mathematical-geometrical formulation is devised here for needle trajectory calculation using the centimeter-spaced black markings found on many needles, such as standard epidural needles.

In iterative optimization methods, the pose of the needle (or other object) is estimated by optimizing a geometric constraint, either in 3D world coordinates or in image coordinates. These methods can be less sensitive to feature extraction error, since they usually use a large number of features, but are more computationally expensive. Moreover, non-linear optimization methods rely on a good initial guess to converge to the correct solution.

One advantage of a closed-form mathematical-geometrical solution is that it is possible to optimize the system design by calculating the sensitivity to each of the parameters in the pose estimation procedure using a known error propagation model. For instance, in one study [52] on pose estimation of a planar object relative to a fixed reference, the effects of several parameters on the six Degrees of Freedom (DOF) pose parameters (3 translations and 3 rotations) were investigated using a Monte-Carlo simulation. It was concluded that the distance between the camera and the object, and hence the average image scale, plays the most important role in the accuracy along the viewing direction. The size of the object, i.e. the spatial distribution of the reference points, is the second most important design parameter. Moreover, among all 6 DOF parameters, translation in the Z (optical axis) direction and rotations around X and Y (pitch and yaw) are the most sensitive. When the focal length is varied (while adjusting the viewing distance to keep the image scale constant), the precision in Z translation and in pitch and yaw rotations decreases linearly with increasing focal length, which is to be expected from the known advantage of wide-angle lenses for orientation purposes. Object tilt (around X or Y) up to 50° has almost no effect on the parameters. These findings were used in our proposed design discussed in this chapter.

For the needle tracking problem, the markings are collinear and the needle trajectory is a line, which can be described by 5 DOF (excluding the rotation around the needle's axis). Although there are closed-form solutions for 6-DOF pose estimation of planar (2D) objects [21, 68], to the best of our knowledge, we are the first to propose a closed-form method for estimating the pose of a linear object from a single camera image.

In our previous work [59] we showed the feasibility of such a system. Here, we substantially improve on that work (which achieved an accuracy of 3.0±2.6 mm) in terms of accuracy, robustness and flexibility. A more generalized version of the closed-form solution, which can account for a variable number of feature points, is presented, and the feature extraction algorithm has been improved. Moreover, rotation or dismounting of the camera no longer requires ultrasound re-calibration. This was made possible by using a fixed marker on the transducer and using a mirror to calibrate the camera with respect to that marker at the time of construction/assembly. Our multi-wedge calibration method [61], a more accurate ultrasound calibration method described in Chapter 3, was also employed here. The precision and accuracy of the needle pose estimation were measured relative to the camera, as was the accuracy of the overall ultrasound guidance system. The emphasis is on out-of-plane needle insertions, but the method could be adapted to in-plane needle insertion by remounting the camera on the side of the transducer.

Figure 4.2: The camera is mounted on the transducer (side view) in a way that the needle can be easily seen by the camera.

4.3 Experimental Setup

A SonixTOUCH ultrasound machine (Ultrasonix Medical Corporation, Richmond, BC, Canada) was used for ultrasound imaging.
An L14-5 transducer with 7.2 MHz center frequency was used for ultrasound imaging.

The camera was an FL2G-13S2M-C (Point Grey Research Inc., Richmond, BC, Canada) with a DF6HA-1B lens (Fujifilm Group, Fujinon Corporation, Saitama, Japan). The camera had a mono 1.3 MP Sony ICX204 CCD (1/3″, 4.65 µm pixels). The frame rate was 30 FPS and the image size was 1288×964 pixels. The lens had a 6 mm focal length, an iris range of f/1.2 to f/16, and horizontal and vertical field angles of 56°09′ and 43°36′, respectively. Figure 4.2 shows the camera mounted on the transducer.

4.4 Mathematical Framework

A standard camera model was adopted here. The principal point (x_p, y_p), the effective focal lengths (f_x, f_y) and the lens distortion parameters were determined from a separate standard calibration process explained below. The effective focal lengths and principal point are measured in pixels. The skew coefficient defines the angle between the x and y pixel axes. The image distortion coefficients (radial and tangential distortions) are used to undistort the image.

As described earlier, the marking edge points of the needle, Q = (Q_x, Q_y), are the features used to extract the needle's pose (Figure 4.3).

Using the camera model [51], it is possible to find the projection of any 3D point P = (P_x, P_y, P_z) onto the camera plane:

\[
\begin{bmatrix} Q_x \\ Q_y \end{bmatrix} =
\begin{bmatrix} f_x \dfrac{P_x}{P_z} + x_p \\[4pt] f_y \dfrac{P_y}{P_z} + y_p \end{bmatrix},
\qquad (4.1)
\]

where f_x and f_y are the effective focal lengths of the camera along the two axes.

The markings on the needle reside on a straight line and, based on the model described above, their projection points on the camera plane are also collinear.
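As a concrete illustration, the projection model of Eq. (4.1) can be written in a few lines of code (a sketch in Python; the function name is ours, and the image is assumed to be already undistorted):

```python
import numpy as np

def project(P, fx, fy, xp, yp):
    """Pinhole projection of Eq. (4.1): map a 3D point P = (Px, Py, Pz),
    given in camera coordinates, to pixel coordinates (Qx, Qy).
    Lens distortion is assumed to have been removed beforehand."""
    Px, Py, Pz = P
    return np.array([fx * Px / Pz + xp,
                     fy * Py / Pz + yp])

# Points on a 3D line project to collinear image points, which is the
# property the needle-marking method relies on.
```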
Moreover, all the marking points, their projections and the focal point lie in the same plane (Figure 4.4).

Figure 4.3: Epidural needle with 1 cm markings, projected into the camera plane using the standard camera model.

The feature extraction procedure, described in Section 4.5, finds the projected points Q_i in the image, and F = (x_p, y_p, f) is determined by the camera calibration procedure. The actual marker distances, d_i = ‖P_{i+1} − P_i‖, are assumed to be known. For an epidural needle in particular, the marking distances are all equal (d_i = 1 cm). However, the proposed method is not limited to the equal-spacing constraint and can be generalized to needles with unequal marking spacings.

Having the projection points Q_i, we define unit vectors \vec{V}_i:

\[ \vec{V}_i = \frac{Q_i - F}{\lVert Q_i - F \rVert}. \qquad (4.2) \]

Any of the points P_i can then be written as

\[ P_i = s_i \vec{V}_i + F. \qquad (4.3) \]

We also define the vectors \vec{G}_i:

\[ \vec{G}_i = P_{i+1} - P_i, \qquad (4.4) \]

and, as mentioned earlier, the lengths ‖\vec{G}_i‖ = d_i are known. Therefore,

\[ \frac{s_{i+1}\vec{V}_{i+1} - s_i\vec{V}_i}{d_i} = \frac{\vec{G}_i}{\lVert \vec{G}_i \rVert}, \qquad (4.5) \]

where the s_i are the unknowns to be determined, and the right-hand side is always the same unit vector, defined as

\[ \vec{U} = \frac{\vec{G}_i}{\lVert \vec{G}_i \rVert}. \qquad (4.6) \]

Figure 4.4: Three known collinear points on the object (P_1, P_2 and P_3), the corresponding projections on the camera plane (Q_1, Q_2 and Q_3) and the focal point (F) all reside on the same plane.

All the \vec{V}_i vectors lie in one plane, with normal vector \vec{n}:

\[ \vec{n} = \frac{\vec{V}_i \times \vec{V}_j}{\lVert \vec{V}_i \times \vec{V}_j \rVert}, \quad \forall\, i \neq j. \qquad (4.7) \]

Therefore, we can describe each \vec{V}_i as a linear combination of two perpendicular vectors. Without loss of generality, we define \vec{A} = \vec{V}_1 and

\[ \vec{B} = \vec{n} \times \vec{A}, \qquad (4.8) \]

and hence

\[ \vec{V}_i = \alpha_i \vec{A} + \beta_i \vec{B}, \qquad (4.9) \]

where the coefficients α_i and β_i are calculated from

\[ \alpha_i = \vec{V}_i \cdot \vec{A}, \qquad \beta_i = \vec{V}_i \cdot \vec{B}. \qquad (4.10) \]

Substituting (4.9) into (4.5) gives

\[ \frac{s_{i+1}(\alpha_{i+1}\vec{A} + \beta_{i+1}\vec{B}) - s_i(\alpha_i\vec{A} + \beta_i\vec{B})}{d_i} = \vec{U}. \qquad (4.11) \]

Writing (4.11) for j ≠ i and subtracting yields

\[ \frac{s_{i+1}(\alpha_{i+1}\vec{A} + \beta_{i+1}\vec{B}) - s_i(\alpha_i\vec{A} + \beta_i\vec{B})}{d_i} - \frac{s_{j+1}(\alpha_{j+1}\vec{A} + \beta_{j+1}\vec{B}) - s_j(\alpha_j\vec{A} + \beta_j\vec{B})}{d_j} = 0. \qquad (4.12) \]

Rearranging the above equation gives

\[ \vec{A}\left[d_j(s_{i+1}\alpha_{i+1} - s_i\alpha_i) - d_i(s_{j+1}\alpha_{j+1} - s_j\alpha_j)\right] + \vec{B}\left[d_j(s_{i+1}\beta_{i+1} - s_i\beta_i) - d_i(s_{j+1}\beta_{j+1} - s_j\beta_j)\right] = 0. \qquad (4.13) \]

Since \vec{A} is perpendicular to \vec{B}, their linear combination can only be zero when both coefficients are zero. Therefore, after rearrangement,

\[ s_{j+1}(d_i\alpha_{j+1}) + s_j(-d_i\alpha_j) + s_{i+1}(-d_j\alpha_{i+1}) + s_i(d_j\alpha_i) = 0, \qquad (4.14) \]

and

\[ s_{j+1}(d_i\beta_{j+1}) + s_j(-d_i\beta_j) + s_{i+1}(-d_j\beta_{i+1}) + s_i(d_j\beta_i) = 0. \qquad (4.15) \]

So far, we have derived the equations (4.14), (4.15) needed to solve for the unknowns s_i. These equations are linear in the unknowns and can be written for any pair i ≠ j. If we have p points, we can write independent equations for i = 1 to p−1 and j = i+1 to p−1. We define the matrix form of the problem as

\[ C_{q \times p}\, S_{p \times 1} = [0]_{q \times 1}, \qquad (4.16) \]

where S = [s_1 ... s_p]^t is the vector of unknowns and q = (p−1)(p−2) is the number of equations. The matrix C can be formed, for example, by using (4.14) for the even rows and (4.15) for the odd rows. Depending on the values of i and j, the r-th row of C can be written as follows: if j = i+1 and r is even, the non-zero elements of the row are

\[ [c_{ri},\, c_{rj},\, c_{r(j+1)}] = [d_j\alpha_i,\; -d_i\alpha_j - d_j\alpha_{i+1},\; d_i\alpha_{j+1}]; \qquad (4.17) \]

else, if i = j+1 and r is even,

\[ [c_{r(i+1)},\, c_{rj},\, c_{r(j+1)}] = [-d_j\alpha_{i+1},\; -d_i\alpha_j,\; d_j\alpha_i + d_i\alpha_{j+1}]; \qquad (4.18) \]

else

\[ [c_{ri},\, c_{r(i+1)},\, c_{rj},\, c_{r(j+1)}] = [d_j\alpha_i,\; -d_j\alpha_{i+1},\; -d_i\alpha_j,\; d_i\alpha_{j+1}]. \qquad (4.19) \]

Note that the remaining elements of each row are zero. Equation (4.15) is used for the odd values of r, with expressions analogous to (4.17), (4.18) and (4.19) in which α is replaced by β.

Finally, note that (4.16) is a linear system of equations with a zero right-hand side; therefore the null space of C gives a solution for S.
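To make the solution procedure concrete, the construction of C, the null-space solve, and the final distance-constraint scaling can be sketched as follows (an illustrative Python implementation of Eqs. (4.14)–(4.15); the generic row pattern is applied to every index pair, which covers the special cases automatically, and all names are ours, not the thesis implementation):

```python
import numpy as np

def needle_pose(Q, d, fx, fy, xp, yp):
    """Closed-form pose of collinear markings from a single image.
    Q : (p, 2) undistorted pixel coordinates of the marking edge points.
    d : (p-1,) known spacings between consecutive markings (e.g. 10 mm).
    Returns the (p, 3) reconstructed 3D marking points in camera coordinates."""
    Q = np.asarray(Q, float)
    p = len(Q)
    # Unit viewing rays V_i through each image point (camera centre F at origin).
    V = np.column_stack([(Q[:, 0] - xp) / fx, (Q[:, 1] - yp) / fy, np.ones(p)])
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    # Orthonormal basis (A, B) of the plane containing all rays.
    n = np.cross(V[0], V[1]); n /= np.linalg.norm(n)
    A = V[0]; B = np.cross(n, A)
    alpha, beta = V @ A, V @ B
    # Assemble the homogeneous system C s = 0; accumulating with += merges
    # coefficients automatically when indices coincide (j = i+1 cases).
    rows = []
    for i in range(p - 1):
        for j in range(i + 1, p - 1):
            for g in (alpha, beta):
                r = np.zeros(p)
                r[i] += d[j] * g[i];  r[i + 1] -= d[j] * g[i + 1]
                r[j] -= d[i] * g[j];  r[j + 1] += d[i] * g[j + 1]
                rows.append(r)
    C = np.array(rows)
    # Null space of C: right singular vector of the smallest singular value.
    s = np.linalg.svd(C)[2][-1]
    if s.sum() < 0:                     # markings lie in front of the camera
        s = -s
    # Fix the overall scale with the known spacing (distance constraint).
    s = s * (d[0] / np.linalg.norm(s[1] * V[1] - s[0] * V[0]))
    return s[:, None] * V               # P_i = s_i V_i
```

As the text notes, at least three marking points are required; with exactly three, the system above has a single pair of equations and still determines the pose up to the applied scale.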
The unique solution is obtained when the distance constraint is applied:

\[ \lVert s_i \vec{V}_i - s_{i+1} \vec{V}_{i+1} \rVert = d_i, \qquad (4.20) \]

which means that the calculated S should be scaled as

\[ \hat{S} = S \, \frac{d_i}{\lVert s_i \vec{V}_i - s_{i+1} \vec{V}_{i+1} \rVert}. \qquad (4.21) \]

Figure 4.5: (a) Edge points to be extracted from the image of the needle; the center line, marking edges and edge points are shown. (b) An example of "off-the-needle" lines (in blue) and "on-the-needle" lines (in red).

Now, using Ŝ, it is possible to calculate P_i from (4.3) and G_i from (4.4), and hence the pose of the needle is determined. Note that at least three marking points are required to solve for all the unknowns.

4.5 Automatic Feature Extraction

In order to estimate the pose of the needle, the marking edge points must be extracted from each image of the needle. These edge points are defined as the intersections of the center line of the needle with the marking edge lines (Figure 4.5a). The accuracy of the pose estimation is directly related to the accuracy of this segmentation procedure.

One classic approach is to identify the needle in the image using the Hough transform [30]. Although several practical issues limit the accuracy of this approach, it can be used to find the approximate location of the needle. One of the challenges is that, in a practical interventional procedure, there may be many objects in the background with strong edges similar to a needle that can interfere with proper recognition of the needle by the Hough transform. Secondly, the variable specular light reflections on the metallic surface of the needle can produce misleading strong edges. Moreover, since the needle can be fairly close to the camera, its two border lines may not appear as parallel lines in the image, due to perspective. The marking edges may also appear as curved lines, due to the cylindrical shape of the needle.

Figure 4.6: The points associated with the peak values on each line are stored (red dots). These points are clustered to find marking edges.
Therefore, a simple Hough transform cannot always accurately recognize the needle and its center line. It should be emphasized that correct detection of the needle is needed together with an accurate estimate of the center line of the needle for accurate detection of the marking edge points.

We therefore adopted a novel approach based on the fact that the most distinctive feature of such a marking system is the consecutive black-and-white pattern on the needle. For any arbitrary line in the image, if the line lies on the needle, then significant steps occur in the pixel values along the line at the marking edges. The pixel values rise to a higher level when the transition is from a black to a white region, and drop suddenly when going from a white to a black region. This gives one criterion to identify whether a chosen line lies on the needle or not. Figure 4.5b shows an example of "off-the-needle" lines (in blue) and "on-the-needle" lines (in red).

In our proposed method, instead of detecting the needle's center line first, we collect various points that reside on the marking edges. These points are then grouped into classes using a clustering technique. Each class represents a collection of the points residing at one marking edge of the needle. The center of each cluster is calculated and a line is fitted to all the center points. This line is a more accurate estimate of the needle's center line. Finally, to fine-tune the segmentation results, the marking edges are re-calculated by running an edge detection algorithm along this estimated center line.

Table 4.1: Parameters used in the automatic feature extraction algorithm.

Parameter        Value  Unit
minpeakdistance  200    pixel
minpeakheight    5      -
threshold        0.1    -
cut-off          15     -

4.5.1 Collecting Marking Edge Points

We start with an overview of the procedure. The algorithm starts with an initial estimate for the needle center line.
The estimate can come from the previous frame, if the current frame is not the first frame. For the first frame, the estimate is provided by the Hough transform (using the Canny edge detector). Then, random lines are generated around this estimate by adding random variables to the slope and intercept. The pixel values along each line are then examined to determine whether it is an "on-the-needle" line. If so, the intersection points of the line with the marking edges are stored.

To detect an "on-the-needle" line, the edges of the pixel values along the line were detected to determine whether a pattern of consecutive rising and falling edges can be observed. For edge detection, the derivative of the pixel values was first calculated using a smooth noise-robust differentiator [34] (N = 9), as shown in Figure 4.7. Next, peaks that exceeded a certain threshold (minpeakheight), had indices separated by more than minpeakdistance, and exceeded their neighbors by more than threshold were identified. This can be performed by finding the zero crossings of the first derivative and keeping only those that meet the stated conditions; here, we used the findpeaks() function in MATLAB. Proper values for these parameters should be chosen based on the experimental setup. Specifically, minpeakdistance is related to the length of the markings in pixels, and therefore depends on the image resolution, the lens field of view and the distance of the needle from the camera. Table 4.1 shows the parameters used in our experiments.

Figure 4.7: (a) Pixel values along an "on-the-needle" line. (b) Derivative of the values using the smooth noise-robust differentiator.

Then the sign (rising or falling edge) of each peak was determined and the number of consecutive positive and negative peaks was counted. If this number was greater than 75% of the total number of detected peaks on that line, the line was considered an "on-the-needle" line.
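The decision rule just described can be sketched as follows (an illustrative Python helper; a simple box-filter derivative and naive peak picking stand in for the smooth noise-robust differentiator and MATLAB's findpeaks(), and the alternating-sign count is a close proxy for the 75% criterion; thresholds echo Table 4.1 but the helper itself is an assumption, not the thesis code):

```python
import numpy as np

def is_on_needle(values, min_height=5.0, min_distance=200, frac=0.75):
    """Decide whether a candidate image line lies on the needle.
    values : 1-D pixel intensities sampled along the line.
    A smoothed derivative is taken, peaks above min_height and at least
    min_distance samples apart are kept, and the line is accepted when the
    kept peaks mostly alternate in sign (black/white marking transitions)."""
    dv = np.gradient(np.convolve(np.asarray(values, float),
                                 np.ones(9) / 9.0, mode="same"))
    mag = np.abs(dv)
    # Local maxima of |derivative| above the height threshold.
    cand = [i for i in range(1, len(mag) - 1)
            if mag[i] >= min_height and mag[i] >= mag[i - 1] and mag[i] > mag[i + 1]]
    peaks = []
    for i in cand:                      # enforce the minimum peak spacing
        if not peaks or i - peaks[-1] >= min_distance:
            peaks.append(i)
    if len(peaks) < 2:
        return False, peaks
    signs = np.sign(dv[peaks])
    alternating = np.sum(signs[1:] != signs[:-1])
    return alternating >= frac * (len(peaks) - 1), peaks
```

On a synthetic black/white stripe profile this accepts the line and returns roughly one peak per marking edge, while a featureless profile is rejected.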
If that was the case, then the pointsassociated with the peak values on the line were stored. Higher threshold values(> 75%) will reject more candidate lines while a lower value will allow widerrange of candidate lines. The method, however, is not sensitive to this thresholdvalue since, as described in the next section, the edge points will be clustered andoutlier points would be removed.Figure 4.8 shows the flow chart of this algorithm.Sometimes a cluttered background can be misleading and could result in wrongestimation of the needle center line when using the Hough Transform. This can beverified if the percentage of “on-the-needle” lines is very small (less than 10%).In that case, the edge detection parameters prior to applying the Hough Transformshould be altered or the algorithm can run without an initial guess with a widesearch range (e.g. random search) as shown in the algorithm flowchart Figure 4.8.84Figure 4.8: Flow chart of the algorithm for collecting marking edge points.85Figure 4.9: The dendrogram of the clustered points and the cut-off threshold ofc = 15 (red line).Note that detection of the approximate location of the needle is more challeng-ing only for the first frame; for the following frames, a smaller search range isrequired and therefore, the estimate from the previous frame is sufficient.4.5.2 Clustering Edge PointsThe aforementioned criteria do not guarantee that any “on-the-needle” line is al-ways on the needle and they do not guarantee that all the detected peak points areon the marking edges (Figure 4.6). However, statistically, the majority of thesepoints will be on the marking edges and therefore, it is possible to find the actualmarking edges by clustering them. The clustering procedure should find clustersof points with low inter-class distance and high intra-class distance.Hierarchical clustering [33] was used here to cluster the points. 
In hierarchical clustering, data are grouped over a variety of scales by creating a cluster tree, or dendrogram. The tree is not a single set of clusters, but a multilevel hierarchy, where clusters at one level are joined into clusters at the next level. This allows the level or scale of clustering that is most appropriate for a given application to be chosen.

The cluster() function in MATLAB was used with a cut-off c and the distance criterion, which uses the distance between the two sub-nodes merged at a node to measure node height. All leaves at or below a node with height less than c were grouped into one cluster. Figure 4.9 shows the dendrogram of the clustered points and the cut-off line (c = 15) used here. This value was chosen based on the average distance between markings in our experimental setup.

The number of points in each cluster was then examined, and clusters containing fewer than 25 percent of the number of points in the biggest cluster were eliminated. For example, in Figure 4.9, the fourth cluster from the left (with only one point) was removed and the other five clusters remained.

4.5.3 Finding the Center Line and Fine Tuning

In this stage, the center of each cluster was found by fitting a second-order polynomial to the points in each cluster and removing the outliers. After re-fitting another polynomial to the remaining points, the center of the polynomial was chosen as the cluster's center point. The center line of the needle was then calculated by fitting a line to these cluster center points.

Figure 4.10: Final feature extraction results showing the center line (green line) and the marking edge centers (red dots).

A final step was taken for accurate estimation of the edge points Q_i. The derivative of the pixel values along lines parallel to the calculated center line was taken and the peaks were detected. At each edge, the peak points were then combined in a weighted average to find the edge points.
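The clustering and line-fitting steps above can be sketched as follows (a Python illustration; the thesis uses MATLAB's hierarchical cluster(), while this sketch uses the equivalent connected-components form of single-linkage clustering at the same fixed cut-off; all names are illustrative):

```python
import numpy as np

def cluster_edge_points(pts, cutoff=15.0, min_frac=0.25):
    """Group candidate marking-edge points and fit the needle centre line.
    Single-linkage clustering at a fixed cut-off is computed as the connected
    components of the graph joining points closer than `cutoff` pixels;
    clusters with fewer than `min_frac` of the largest cluster's points are
    discarded as outliers. Returns the cluster centres and the fitted
    centre line as (slope, intercept)."""
    pts = np.asarray(pts, float)
    n = len(pts)
    parent = list(range(n))
    def find(a):                        # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pts[i] - pts[j]) < cutoff:
                parent[find(i)] = find(j)
    labels = np.array([find(i) for i in range(n)])
    clusters = [pts[labels == l] for l in np.unique(labels)]
    biggest = max(len(c) for c in clusters)
    centres = np.array([c.mean(axis=0) for c in clusters
                        if len(c) >= min_frac * biggest])
    slope, intercept = np.polyfit(centres[:, 0], centres[:, 1], 1)
    return centres, (slope, intercept)
```

Here the cluster centre is a simple mean rather than the polynomial-fit centre of Section 4.5.3, which is a refinement on top of the same idea.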
Figure 4.10 shows the final results.

Figure 4.11: The coordinate systems of the transducer, the camera and the ultrasound image.

4.6 System Calibration

In order to estimate the pose of the needle relative to the ultrasound image, we need to find the transformation from the camera's coordinate system to the ultrasound image coordinate system. This procedure is called ultrasound calibration. In addition, the intrinsic parameters of the camera must be calibrated for the needle pose estimation procedure, as explained above.

One practical challenge is that the camera-to-ultrasound calibration would have to be performed every time the camera is mounted on the transducer. This also limits the usable range of rotation of the camera. For example, the camera should be rotated 90 degrees to perform an in-plane insertion as opposed to an out-of-plane insertion procedure.

Table 4.2: Camera intrinsic parameters obtained by camera calibration.

Effective Focal Length  [fx, fy] = [1672.90, 1654.61] ± [5.62, 5.67]
Principal Point         [xp, yp] = [679.87, 505.27] ± [4.98, 4.53]
Skew Coefficient        αc = [0.00] ± [0.00]
Distortion              kc = [−0.231, 0.107, −0.001, 0.000, 0.000] ± [0.011, 0.070, 0.000, 0.000, 0.000]

To tackle this problem, we propose to attach a fixed checker-board pattern to the transducer. The ultrasound calibration is then performed only once, to find the transformation from the ultrasound image to the checker-board pattern on the transducer, T_TI. Each time the camera is re-mounted or rotated, only the transformation from the camera to the transducer's marker, T_CT, needs to be recalculated, using a mirror. Figure 4.11 shows the coordinate systems of the transducer, the camera and the ultrasound image.

By placing a mirror in different poses and obtaining one image of the calibration pattern in each pose, while keeping the camera and checker-board pattern fixed, it is possible to estimate the pose of the checker-board pattern as well as all of the mirror's poses [43].
An alternative approach is to attach another checker-board pattern to a visible part of the mirror, so that the pose of the mirror can be estimated directly from one image. The pose of the mirrored checker-board pattern (on the transducer) can also be estimated. The estimated pose is then mirrored back through the estimated mirror plane to find the actual pose of the checker-board pattern on the transducer. The thickness of the mirror should be taken into account in this procedure. We found the former method less robust than the latter, and hence the latter was used here, with a 5×5 mm 6×8 checker-board pattern on the mirror and a 10×10 mm 5×5 checker-board pattern on the transducer. A standard off-the-shelf mirror was used in our experiments.

4.6.1 Ultrasound Calibration

The transformation from the ultrasound image to the checker-board pattern on the transducer, T_TI, was estimated by ultrasound calibration. In our earlier work, we proposed a closed-form differential formulation based on a Multi Wedge phantom comprising five planes ([61], Chapter 3). That method was used here for ultrasound calibration.

Table 4.3: Repeatability of the feature extraction and pose estimation algorithms.

Feature extraction error     RMS(ex, ey) = [0.11, 0.7] pixel
3D point estimation error    std(Pi) = [0.008, 0.031, 0.031] mm
Direction estimation error   RMS(arccos(Gi · Ḡ/‖Ḡ‖)) = 0.09 degree

4.6.2 Camera Calibration

Camera calibration was performed using a method based on [15]. This method uses several images of a checker-board pattern to estimate the camera's intrinsic parameters, such as the focal length, principal point and lens distortions. It can also estimate the pose of a checker-board pattern with respect to the camera's coordinate system.

4.7 Experiments and Results

4.7.1 Camera Calibration

Camera calibration was performed using 20 images of a checker-board pattern by running the camera calibration toolbox [15].
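The "mirrored back" step of Section 4.6 amounts to reflecting the estimated pose through the estimated mirror plane. A minimal sketch of the underlying point reflection (the handedness flip of a full mirrored pose and the exact treatment of the mirror's thickness are simplified away here, and the function is ours, not the thesis implementation):

```python
import numpy as np

def reflect_points(X, n, d):
    """Reflect 3D points across the plane {x : n . x = d} with unit normal n.
    Applying this to the pose of the checker-board seen in the mirror recovers
    the pose of the real checker-board on the transducer. A back-silvered
    mirror's glass thickness can be approximated by shifting d before
    reflecting (an assumption for illustration)."""
    X = np.asarray(X, float)
    n = np.asarray(n, float) / np.linalg.norm(n)
    return X - 2.0 * ((X @ n) - d)[:, None] * n
```

Reflecting twice across the same plane returns the original points, which is a convenient sanity check on the estimated mirror plane.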
Table 4.2 summarizes the results, showing the intrinsic parameters of the camera.

4.7.2 Repeatability of the Feature Extraction

In this section we investigate the effect of parameter selection on the feature extraction algorithm. For this purpose, the algorithm was applied to the same image of a needle 100 times, and the variability of the extracted edge points was measured as the root mean square of the difference between the extracted edge points and their average.

Figure 4.12: Measuring the precision of the method by rotating the needle around a pivot point (needle tip). Estimated needle lines and the estimated pivot point are shown.

The repeatability of the pose estimation algorithm was then evaluated using the extracted features and the camera's intrinsic parameters from the camera calibration. In the pose estimation results, the 3D line of the needle is described by one point in space, P_i, and a unit direction vector, G_i. Here, the standard deviation of P_i was measured, as well as the root mean square of the angle between each G_i and their average, Ḡ = (1/100) Σ_{i=1}^{100} G_i. Table 4.3 summarizes the results.

4.7.3 Precision of the Needle Pose Estimation

In order to measure the precision of the pose estimation method, the tip of the needle was fixed at a pivot point and the needle was rotated in a circular motion, continuously through a full 360° circle. 50 frames were sub-sampled and the pose of the needle was estimated in each. Since a small tilt angle would be expected to give only a small tip error, the needle was tilted as much as

Figure 4.13: Accuracy testing device comprised of a base with a grid of semi-spherical holes on its surface and a marker plane. The marker plane is a flat piece with a checker-board pattern attached to its top surface.
possible while trying to keep it in the field of view of the camera. The tilt angle was measured as the root mean squared angle between the estimated lines and their average, which was 14.8°.

The pivot point was estimated with a least-squares method, by minimizing the distance from this point to all of the estimated lines. Figure 4.12 shows the estimated lines and the estimated pivot point. The root mean squared error for the pivot point was 0.41 mm.

This test measures the precision of the needle pose estimation in the camera's coordinate system. A precision test with only linear translations of the needle would not be as effective, because under linear translation the shape of the needle in the image does not change significantly compared to when the needle is placed at different orientations (especially in the depth direction). In this test we measure the tip position, which is more challenging due to the lever-arm effect. We therefore conclude that the proposed test is a suitable indication of the precision of our pose estimation algorithm. However, this test does not measure the accuracy of the pose estimation.

Figure 4.14: PRA measurement experiment setup showing the needle immersed partially in water, seen both by the camera and in the ultrasound image.

4.7.4 Accuracy of the Needle Pose Estimation

The accuracy of the needle pose estimation method (in the camera's coordinate system) was measured by placing the tip of the needle on a grid of known positions and comparing these with the estimated tip positions. For this purpose, a testing device was manufactured with an Objet30 desktop 3D printer (Objet Inc., Billerica, MA, USA; printing tolerance 28 µm). As shown in Figure 4.13, this testing device comprises a base with a grid of semi-spherical holes on its surface and a marker plane. The spacing of the grid points was 10 mm in both directions and the radius of the semi-spheres was 0.25 mm.
The marker plane is a flat piece witha checker-board pattern attached to its top surface.For the accuracy test, first the marker plane was placed inside the base and thecamera captured one image of it. The pose of the marker plane was estimated fromthe checker-board pattern and therefore the location of the grid points was calcu-lated (in the camera’s coordinate system). Then the marker plane was removed(without moving the base) and the needle’s tip was placed in each of the holes(4× 5 points). For each hole, the pose of the needle was estimated using the pro-posed method and the estimated tip point was compared with its actual position.The root mean squared error was 0.49 mm.934.7.5 Overall System AccuracyThe primary goal of the proposed system is to estimate the intersection of the nee-dle with the ultrasound image so that the proper needle trajectory to hit a specifictarget on the ultrasound image can be chosen prior to skin puncture.To measure the accuracy of the overall system, an out-of-plane needle insertionexperiment was performed in a water-bath. The needle was partially immersed inwater, as shown in Figure 4.14, so that the intersection of the ultrasound plane withthe needle could be seen as a point in the ultrasound image. Part of the needle thatwas out of the water was visible by the camera and was used for estimating thepose of the needle. A 20 cm long needle is used for this experiment.Using a linear stage, the needle was moved in 2 mm steps laterally at approxi-mately 10 mm and 20 mm axial depths (26 points in total). Ultrasound images weresegmented manually by clicking on the center of the needle’s intersection whichwould appear as a cloud of pixels. These segmented target points were comparedto the estimated intersection points (Figure 4.15). The mean and standard deviationof the distance error was 0.94 mm and 0.46 mm, respectively. 
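The pivot point used in the precision test of Section 4.7.3 was found by minimizing the summed squared distances from one point to all estimated needle lines; this has a small closed-form least-squares solution. A minimal sketch (assuming each line is given as a point plus a unit direction; names are illustrative):

```python
import numpy as np

def pivot_point(points, dirs):
    """Least-squares pivot of a set of 3D lines.
    Each line k is given by a point p_k and a direction u_k; the pivot x
    minimises sum_k ||(I - u_k u_k^T)(x - p_k)||^2, i.e. it solves the
    normal equations  sum_k (I - u_k u_k^T) x = sum_k (I - u_k u_k^T) p_k."""
    M = np.zeros((3, 3))
    b = np.zeros(3)
    for p, u in zip(points, dirs):
        u = np.asarray(u, float) / np.linalg.norm(u)
        A = np.eye(3) - np.outer(u, u)   # projector orthogonal to the line
        M += A
        b += A @ np.asarray(p, float)
    return np.linalg.solve(M, b)
```

With at least two non-parallel lines the matrix M is invertible and the solution is unique.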
Several sources of er-ror are involved in the overall error including the needle pose estimation error, theultrasound calibration error, the camera calibration error and the ultrasound seg-mentation error. Due to relatively poor resolution of the ultrasound, a fairly largeportion of the overall error can be associated to the ultrasound segmentation error.4.7.6 Real-time ImplementationThe algorithm was implemented in C++ and some of the functions in OpenCVlibrary were used. The overall method runs in real-time (50 msec) in a standardcomputer workstation with a Core 2 Duo CPU @ 2.93 GHz and 4 GB of RAM.4.8 SummaryMany needle intervention procedures are performed by using ultrasound guidancebecause it is a flexible, cost-effective, and available intraoperative imaging modal-ity. In a needle insertion procedure with ultrasound guidance, real-time calculationand visualization of the needle trajectory can help to guide the choice of puncturesite and needle angle to reach the target depicted in the ultrasound image. In this94Figure 4.15: Segmented target points have been compared to the estimated inter-section points.chapter we showed that not only it is feasible to calculate the needle trajectory witha single camera mounted directly on the ultrasound transducer by using the needlemarkings but also higher accuracy can be achieved compared to the state of the arttransducer-mounted trackers.Our proposed method provides a practical and efficient tracking solution thatavoids the drawbacks of magnetic and external optical trackers. Moreover, ourproposed tracking system is inexpensive, real-time and easy-to-use. 
The proposed automatic feature extraction algorithm is accurate, robust and fast, and the closed-form method for pose estimation allows rapid and accurate needle tracking.

More importantly, our proposed method significantly improves accuracy, to 0.94±0.46 mm. Although a different set of camera and ultrasound equipment was used, so a direct comparison is not possible, it is worth noting the accuracy of our previous work (3.0±2.6 mm) [59]. Another group using a stereo system achieved 3.1±1.8 mm [17]. Another research group [77] using a Kinect sensor achieved a needle pose estimation accuracy of 2.6±1.7 mm to 6.9±5.1 mm, and the most recent work [73] achieved an accuracy of 3.3±2.3 mm using stereo cameras. In a stereo tracking system there are additional sources of error, such as errors in the extrinsic camera parameters (stereo calibration) and errors in the image segmentation and stereo matching procedures [17]. Also, in a transducer-mounted stereo tracking system, practical constraints limit the distance between the stereo cameras, which leads to lower depth resolution. This may explain the higher errors of the above stereo tracking systems compared to our marker-based single-camera closed-form solution.

Chapter 5

Conclusion and Future Work

The main goal of this thesis was to facilitate ultrasound-guided interventions. High accuracy and practical usability of ultrasound calibration and tool tracking not only enhance current interventional procedures, they also enable new applications that would otherwise not be possible. Specifically, the long-term goal is to improve the effectiveness of treatment and the success or precision of diagnosis in percutaneous procedures by developing an accurate, low-cost and real-time tracking system for the guidance of needles (e.g. epidural) with ultrasound.

Many methods have been proposed for ultrasound calibration over the last two decades.
Despite all these efforts, the best reported accuracies for ultrasound calibration are on the order of 1 to 3 mm [36], which may adversely affect the result of ultrasound-guided interventions and possibly limit the scope of new applications. We have significantly improved calibration accuracy with our proposed method. It should be noted that, other than calibration, there are other sources of error involved in an image-guided intervention, such as the localization error.

Calibration should be performed any time the markers are attached to the transducer and generally prior to each interventional procedure. Our aim was to make ultrasound calibration more reliable for day-to-day clinical use by reducing the calibration error while keeping the calibration procedure as fast and simple as possible and suitable for intra-operative use. Therefore, by bringing down the calibration mean error, we ensure that the worst-case calibration error on any given day is still acceptable.

In this thesis we provided two complementary methods for ultrasound calibration. In Chapter 2 we provided a simple ultrasound calibration method with reasonable accuracy. We have improved the traditional single-wall method in various ways to achieve more accurate and robust results, first by developing a closed-form solution and also by using the differential measurement technique. This was achieved by explicitly measuring the position of the plane using a stylus and thus reducing the number of unknowns.

Experiments were performed on both linear and curvilinear transducers, and for both RF and B-mode input data. The PRA accuracy evaluation was performed by comparing 370 and 400 points, for the linear and curvilinear transducers respectively, throughout the image to their estimated projections using our proposed method. Results were compared to the triple N-wire method.
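Concretely, the PRA evaluation amounts to mapping each segmented image point through the estimated calibration transform and measuring its distance to a gold-standard position. A toy sketch with a made-up 2-D homogeneous calibration matrix (not the thesis's actual transform chain, which also includes the tracker pose):

```python
import numpy as np

def pra_errors(cal, image_pts, gold_pts):
    """Map homogeneous image points (rows of [x, y, 1], in mm) through a
    calibration transform and return distances to gold-standard points."""
    mapped = (cal @ image_pts.T).T[:, :2]
    return np.linalg.norm(mapped - gold_pts, axis=1)

# Hypothetical 2-D rigid calibration: 90-degree rotation plus translation.
cal = np.array([[0.0, -1.0, 5.0],
                [1.0,  0.0, 2.0],
                [0.0,  0.0, 1.0]])
image_pts = np.array([[1.0, 0.0, 1.0],
                      [0.0, 2.0, 1.0]])
gold_pts = np.array([[5.0, 3.5],     # made-up gold-standard targets
                     [3.0, 2.0]])
errs = pra_errors(cal, image_pts, gold_pts)
```

The reported PRA figures are then simply the mean and standard deviation of `errs` over all evaluated points.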
The predetermined wall method, for the linear transducer, achieves a PRA accuracy of 0.73 ± 0.23 mm using 100 RF images, while the triple N-wire PRA accuracy was 0.67 ± 0.20 mm using 100 images. For the curvilinear transducer, the PRA accuracy of the predetermined method was 0.86 ± 0.28 mm using 100 RF images, whereas the PRA of the triple N-wire method using the same number of images was 0.80 ± 0.46 mm.

One challenge with this method is the limited range of motion for the transducer due to the deleterious effects of the ultrasound beam thickness at large angles, which can reduce the accuracy of the out-of-plane parameters of the calibration. The Cambridge phantom [67] solves this problem and, in fact, can be used with our method for acquisition of images with a wider range of motions.

In Chapter 3 we proposed a method based on a Multi Wedge phantom. We designed an optimized Multi Wedge phantom for accurate ultrasound calibration based on our proposed differential measurement technique. The use of differential measurements is a novel technique for accurate feature extraction and measurement in ultrasound images for the purpose of ultrasound calibration. A closed-form solution was developed for calculation of the calibration matrix using only a single ultrasound image of the phantom. The CR test shows that using only a single image of the phantom is almost as precise as using 50 images of a Double N-wire phantom. Results of the calibration accuracy evaluation using the PRA test confirm that the proposed method is very accurate and, specifically, is more accurate than the well-known Double N-wire method [19]. In fact, the average point reconstruction error for the “RF Multi Wedge” method was 0.09 ± 0.39 mm compared to 0.25 ± 0.43 mm for the Double N-wire method.

In Chapter 4 we provided a tracking system for tracking tools (e.g. needles) for ultrasound-guided interventions to determine the pose of the needle/tool with respect to the ultrasound image.
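The single-camera pose recovery that this system builds on can be illustrated in a simplified form: with an ideal pinhole camera of known focal length and three equally spaced collinear markings, the marking depths follow from a null-space computation plus the known spacing. This is only a sketch under those assumptions, not the thesis's exact closed-form solver (which works from segmented marking edges and a fully calibrated camera model):

```python
import numpy as np

def needle_points_from_markings(pix, f, spacing):
    """Recover three equally spaced, collinear 3-D marking positions from
    their pixel coordinates in a single pinhole camera with focal length f
    (pixels, principal point at the origin). Returns a 3x3 array of
    camera-frame points, one per row."""
    # Back-project each pixel to a viewing-ray direction.
    rays = np.array([[u / f, v / f, 1.0] for u, v in pix])
    # Collinearity + equal spacing: z1*r1 - 2*z2*r2 + z3*r3 = 0.
    M = np.column_stack([rays[0], -2.0 * rays[1], rays[2]])
    _, _, vt = np.linalg.svd(M)
    z = vt[-1]                    # null vector: depths up to scale
    z = z * np.sign(z[0])         # choose the physical (positive) sign
    pts = z[:, None] * rays       # unscaled camera-frame points
    return pts * (spacing / np.linalg.norm(pts[1] - pts[0]))

# Synthetic check: markings 10 mm apart along a tilted needle.
f = 800.0
p0 = np.array([20.0, 5.0, 100.0])
direction = np.array([1.0, 0.0, 2.0]) / np.sqrt(5.0)
true = np.vstack([p0 + i * 10.0 * direction for i in range(3)])
pix = [(f * x / z, f * y / z) for x, y, z in true]
est = needle_points_from_markings(pix, f, 10.0)
```

Once two or more marking positions are known in the camera frame, the needle axis follows immediately as the line through them.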
The proposed system is based on using a single camera on the transducer to track the needle using the markings on the surface of the needle. A closed-form solution was derived for estimating the pose of a linear object with markings using a single camera.

An automatic algorithm was provided for accurate, robust and fast feature extraction. The C++ implementation of the overall method runs in real-time (50 msec) on a standard computer workstation (Core 2 Duo CPU @ 2.93 GHz and 4 GB of RAM). Various experiments were performed to evaluate the system. Significantly high accuracy of 0.94 ± 0.46 mm, compared to other state-of-the-art methods, was achieved.

One drawback of the proposed needle tracking system is that it requires known markings on the needle. Moreover, unless the needle is tracked continuously, it is not possible to locate the position of the tip once it is inserted into the tissue, since only the line equation is determined.

5.1 Contributions

The contributions of this thesis are summarized below:

• A novel closed-form solution for single-wall ultrasound calibration is developed based on the differential measurement technique. The traditional single-wall method lacks a closed-form solution and hence is prone to sub-optimal local minima and initialization issues. Based on the differential measurements, the problem has been formulated into a closed-form solution with the use of the Kronecker product and the Frobenius norm.

• A novel extension of the single-wall calibration method is provided by explicitly measuring the position of the plane using a stylus. Determining the plane equation reduces the number of unknowns and makes the solution more robust and more accurate. Another closed-form formulation is developed using the known plane position and based on the differential measurements technique.
The method is applicable to both linear and curvilinear transducers, using both RF and B-mode data.

• Ultrasound calibration accuracy has been improved significantly compared to state-of-the-art methods while keeping the calibration procedure as fast and simple as possible and suitable for intra-operative use.

• A novel technique and framework is developed for accurate feature extraction and measurement in ultrasound images for the purpose of ultrasound calibration. Instead of localizing the absolute position of point or line features, the relative axial difference between two features is measured. When imaging a flat surface with ultrasound, the RF echo in each scan line is produced at a certain depth of the image, and all of these reflection points reside on a straight line, which is the intersection of the imaging plane and the surface. The differential technique utilizes the concept that echo signatures in these scan lines are similar and therefore their axial shift can be measured with high accuracy using the normalized cross correlation technique.

• A novel Multi Wedge phantom is designed for accurate ultrasound calibration based on the differential measurement technique. The Multi Wedge phantom includes five wedges and enables accurate single-image ultrasound calibration. The optimum angle and size of the wedges are calculated based on simulated and experimental data. The phantom appears as a segmented line in the ultrasound image. The slopes of the line segments are measured and used in the closed-form solution.

• A novel closed-form solution and mathematical framework is developed for the Multi Wedge phantom. The solution uses the slope values of the segmented lines and calculates the pose of the ultrasound image plane with respect to the phantom’s coordinate system.
Unlike iterative solutions, the closed-form solution is more robust: it is not subject to sub-optimal local minima and is not sensitive to initial estimates.

• A novel closed-form solution is developed for estimating the pose of a linear object using a single camera. Known markings on the needle, along with the calibrated model of the camera, are used by the closed-form solver to determine the position of the needle with respect to the camera. The proposed method achieves significantly high accuracy compared to similar state-of-the-art methods.

• A novel fully automatic image processing and feature extraction algorithm is provided for needle tracking. The proposed algorithm is fast, robust and runs in real-time, with the C++ implementation using the Open Source Computer Vision (OpenCV) library on a standard personal computer.

• A novel needle tracking system is developed for tracking tools with specific markings, such as an epidural needle, in ultrasound-guided interventions. A single camera mounted on the transducer is used for tracking. With a calibration procedure, the transformation from the ultrasound image coordinates to the camera’s coordinate system is determined.

• A novel extension to the single-camera tracking system is made to facilitate movement of the camera on the transducer for different applications. A marker is attached to the transducer and is calibrated once to the ultrasound image. Each time, it is only required to hold a mirror in front of the camera in a way that the marker can be seen by the camera, so that the new position of the camera can be easily re-calibrated.

5.2 Future Work

In this thesis we have presented novel methods to facilitate ultrasound-guided interventions. We proposed two complementary ultrasound calibration methods and a single-camera needle tracking system.
These research areas can be continued in the following ways:

• In the single-wall calibration method, it can be investigated how the transducer should be positioned so that the minimum number of poses yields a certain level of accuracy. The closed-form solution in this thesis can be useful in addressing this problem.

• The real-time implementation of the single-wall calibration method and the integration of the software in PLUS can make it easier for researchers to use. The software can display the reconstruction error and so can guide the user to acquire more effective images until the desired level of accuracy is reached.

• Temporal calibration, the synchronization of the tracker and the sequence of ultrasound images, is especially critical in the single-wall method. One future goal is that software such as PLUS could implement hardware synchronization between the tracker and the ultrasound machine so that very accurate temporal calibration can be achieved.

• Future work will extend the Multi Wedge method to curvilinear and 3D transducers. In the case of 3D transducers, the mathematical framework can be extended to include the elevational direction, and for curvilinear transducers the framework can be adapted to differential measurements similar to the technique in Chapter 2. For curvilinear transducers, the Multi Wedge phantom may require a different design optimized for the shape of such transducers.

• Given that the Multi Wedge phantom can be easily and inexpensively manufactured using a 3D printer, this calibration method can be disseminated to a wide range of researchers by sharing CAD files and program code.
Moreover, the software can be integrated in PLUS for public use.

• In the needle tracking system, it is desirable that the camera can be freely rotated to accommodate both in-plane and out-of-plane needle insertions. Ultrasound calibration requires a special setup and can be time consuming, so in this thesis we proposed to use a fixed marker on the transducer and to hold the transducer in front of a mirror for a quick re-calibration. Even this re-calibration procedure can be eliminated if a mechanical encoder is attached to the camera to record its rotation in real-time.

• The proposed needle tracking algorithm estimates the needle trajectory as a line, which can be described by 5 DOF (excluding the rotation around the needle axis). Although this is enough to predict the intersection of the needle with the ultrasound image, it is desirable to also track the position of the tip of the needle. We suggest two solutions for this problem. The first is to determine the initial pose of the tip by pivoting the needle and then to keep track of the movement of the needle continuously in all consecutive frames. Whenever a marker edge disappears in the next frame, the marker index should be updated. Another way to keep track of the tip is to use unique marking patterns on the needle instead of similar black and white markings. Some needles are already encoded using marking strips that are grouped: I, II, III, IIII, etc., such as a Pajunk needle [53].

Bibliography

[1] J. M. Abeysekera, M. Najafi, R. Rohling, and S. E. Salcudean. Calibration for position tracking of swept motor 3-D ultrasound. Ultrasound in Medicine & Biology, 40(6):1356–1371, 2014. → pages 68
[2] N. Abolhassani, R. Patel, and M. Moallem. Needle insertion into soft tissue: A survey. Medical Engineering & Physics, 29(4):413–431, 2007. → pages 69
[3] N. Afsham, K. Chan, L. Pan, S. Tang, and R. N. Rohling.
Alignment and calibration of high frequency ultrasound (HFUS) and optical coherence tomography (OCT) 1D transducers using a dual wedge-tri step phantom. In Proceedings of SPIE, volume 7964, 2011. → pages 46
[4] D. V. Amin, T. Kanade, B. Jaramaz, A. M. DiGioia III, C. Nikou, R. S. LaBarca, and J. E. Moody Jr. Calibration method for determining the physical location of the ultrasound image plane. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2001, pages 940–947. Springer, 2001. → pages 4
[5] M. Anim-Somuah, R. Smyth, C. Howell, et al. Epidural versus non-epidural or no analgesia in labour. Cochrane Database Syst Rev, 4, 2005. → pages 69
[6] W. S. Bartynski, S. Z. Grahovac, and W. E. Rothfus. Incorrect needle position during lumbar epidural steroid administration: inaccuracy of loss of air pressure resistance and requirement of fluoroscopy and epidurography during needle insertion. American Journal of Neuroradiology, 26(3):502–505, 2005. → pages 69
[7] C. Bergmeir, M. Seitel, C. Frank, R. Simone, H. Meinzer, and I. Wolf. Comparing calibration approaches for 3D ultrasound probes. International Journal of Computer Assisted Radiology and Surgery, 4(2):203–213, Mar. 2009. doi:10.1007/s11548-008-0258-x. → pages 6, 46
[8] C. M. Bernards. Epidural and spinal anesthesia. Clinical Anesthesia. 4th ed. Philadelphia, Pa: Lippincott Williams & Wilkins, pages 689–713, 2001. → pages 70
[9] C. M. Bernards, S. B. McDonald, and F. V. Salinas. A practical approach to regional anesthesia. Lippincott Williams & Wilkins, 2009. → pages 69
[10] J. Blackall, D. Rueckert, C. R. Maurer Jr., G. Penney, D. Hill, and D. Hawkes. An image registration approach to automated calibration for freehand 3D ultrasound. In S. Delp, A. DiGoia, and B. Jaramaz, editors, Medical Image Computing and Computer-Assisted Intervention MICCAI 2000, volume 1935 of Lecture Notes in Computer Science, pages 462–471. Springer Berlin Heidelberg, 2000. ISBN 978-3-540-41189-5. doi:10.1007/978-3-540-40899-4_47. → pages 6
[11] E.
Boctor, A. Viswanathan, M. Choti, R. H. Taylor, G. Fichtinger, and G. Hager. A novel closed form solution for ultrasound calibration. In Biomedical Imaging: Nano to Macro, 2004. IEEE International Symposium on, pages 527–530. IEEE, 2004. → pages 6, 46
[12] E. M. Boctor, I. Iordachita, M. A. Choti, G. Hager, and G. Fichtinger. Bootstrapped ultrasound calibration. Studies in Health Technology and Informatics, 119:61–66, 2006. ISSN 0926-9630. → pages 46
[13] E. M. Boctor, I. Iordachita, G. Fichtinger, and G. D. Hager. Ultrasound self-calibration. Proceedings of SPIE, 6141(1):61412N–61412N–12, Mar. 2006. ISSN 0277786X. doi:10.1117/12.659594. → pages 6
[14] E. M. Boctor, M. A. Choti, E. C. Burdette, and R. J. Webster III. Three-dimensional ultrasound-guided robotic needle placement: an experimental evaluation. The International Journal of Medical Robotics + Computer Assisted Surgery: MRCAS, 4(2):180–191, June 2008. ISSN 1478-596X. doi:10.1002/rcs.184. → pages 70
[15] J.-Y. Bouguet. Visual methods for three-dimensional modeling. PhD thesis, 1999. → pages 90
[16] B. Brendel, S. Winter, and H. Ermert. A simple and accurate calibration method for 3D freehand ultrasound. Biomedizinische Technik, 49(Suppl 2):872–873, 2004. → pages 4
[17] C. Chan, F. Lam, and R. Rohling. A needle tracking device for ultrasound guided percutaneous procedures. Ultrasound in Medicine & Biology, 31(11):1469–1483, Nov. 2005. ISSN 03015629. doi:10.1016/j.ultrasmedbio.2005.07.014. → pages 70, 71, 72, 95, 96
[18] C. P. Chen, S. F. Tang, T.-C. Hsu, W.-C. Tsai, H.-P. Liu, M. J. Chen, E. Date, and H. L. Lew. Ultrasound guidance in caudal epidural needle placement. Anesthesiology, 101(1):181–184, 2004. → pages 8
[19] T. K. Chen, A. D. Thurston, R. E. Ellis, and P. Abolmaesumi. A real-time freehand ultrasound calibration system with automatic accuracy feedback and control. Ultrasound in Medicine & Biology, 35(1):79–93, Jan. 2009. ISSN 0301-5629. doi:10.1016/j.ultrasmedbio.2008.07.004.
→ pages ix, 6, 7, 16, 36, 39, 42, 47, 62, 64, 67, 98
[20] P. Chinnaiyan, W. Tomee, R. Patel, R. Chappell, and M. Ritter. 3D-ultrasound guided radiation therapy in the post-prostatectomy setting. Technology in Cancer Research & Treatment, 2(5):455, 2003. → pages 1
[21] A. Collet, D. Berenson, S. S. Srinivasa, and D. Ferguson. Object recognition and full pose registration from a single image for robotic manipulation. Pages 3534–3541, Kobe, Japan, 2009. IEEE Press. ISBN 978-1-4244-2788-8. → pages 73, 74
[22] P. Conroy, C. Luyet, C. McCartney, and P. McHardy. Real-time ultrasound-guided spinal anaesthesia: a prospective observational study of a new approach. Anesthesiology Research and Practice, 2013, 2013. → pages ix, 2
[23] F. Cosio, E. Lira Berra, N. Montiel, C. Segundo, E. Garduño, M. Gonzalez, R. Quispe Siccha, B. Ramirez, and E. Hazan Lasri. Computer assisted biopsy of breast tumors. In Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE, pages 5995–5998. IEEE, 2010. → pages 1, 3
[24] A. E. Desjardins, B. H. Hendriks, M. van der Voort, R. Nachabé, W. Bierhoff, G. Braun, D. Babic, J. P. Rathmell, S. Holmin, M. Söderman, et al. Epidural needle with embedded optical fibers for spectroscopic differentiation of tissue: ex vivo feasibility study. Biomedical Optics Express, 2(6):1452–1461, 2011. → pages 3
[25] P. R. Detmer, G. Bashein, T. Hodges, K. W. Beach, E. P. Filer, D. H. Burns, and D. E. Strandness. 3D ultrasonic image feature localization based on magnetic scanhead tracking: in vitro calibration and validation. Ultrasound in Medicine & Biology, 20(9):923–936, 1994. ISSN 0301-5629. → pages 4, 6
[26] D. Eggert, A. Lorusso, and R. Fisher. Estimating 3D rigid body transformations: a comparison of four major algorithms. Machine Vision and Applications, 9:272–290, 1997. → pages 6
[27] A. M. Franz, T. Haidegger, W. Birkfellner, K. Cleary, T. M. Peters, and L. Maier-Hein.
Electromagnetic tracking in medicine: a review of technology, validation and applications. 2014. → pages 9
[28] A. H. Gee, N. E. Houghton, G. M. Treece, and R. W. Prager. A mechanical instrument for 3D ultrasound probe calibration. Ultrasound in Medicine & Biology, 31(4):505–518, 2005. → pages 3, 6
[29] M. G. Genton. Separable approximations of space-time covariance matrices. Environmetrics, 18(7):681–695, 2007. → pages 27, 28
[30] R. C. Gonzalez, R. E. Woods, and S. L. Eddins. Digital image processing using MATLAB. Upper Saddle River, N. J: Pearson Prentice Hall, 2004. → pages 81
[31] S. Gulati, E. M. Berntsen, O. Solheim, K. A. Kvistad, A. Håberg, T. Selbekk, S. H. Torp, and G. Unsgaard. Surgical resection of high-grade gliomas in eloquent regions guided by blood oxygenation level dependent functional magnetic resonance imaging, diffusion tensor tractography, and intraoperative navigated 3D ultrasound. Minimally Invasive Neurosurgery: MIN, 52(1):17–24, Feb. 2009. ISSN 0946-7211. doi:10.1055/s-0028-1104566. → pages 3
[32] R. Haralick, D. Lee, K. Ottenburg, and M. Nolle. Analysis and solutions of the three point perspective pose estimation problem. Pages 592–598, 1991. → pages 73
[33] T. Hastie, R. Tibshirani, and J. Friedman. The elements of statistical learning, volume 2. Springer, 2009. → pages 86
[34] P. Holoborodko. Smooth noise robust differentiators. http://www.holoborodko.com/pavel/numerical-methods/numerical-derivative/smooth-low-noise-differentiators/. → pages 83
[35] P. Hsu, G. Treece, R. Prager, N. Houghton, and A. Gee. Comparison of freehand 3D ultrasound calibration techniques using a stylus. Ultrasound in Medicine & Biology, 34(10):1610–1621, 2008. → pages 6, 34, 65, 66
[36] P. Hsu, R. Prager, A. Gee, G. Treece, C. Sensen, and B. Hallgrímsson. Freehand 3D ultrasound calibration: a review. Advanced Imaging in Biology and Medicine, pages 47–84, 2009. → pages ix, 3, 4, 5, 6, 15, 16, 42, 67, 97
[37] X. Huang, L. Gutiérrez, D. Stanton, P.
Kim, and A. Jain. Image registration based 3D TEE-EM calibration. In Biomedical Imaging: From Nano to Macro, 2010 IEEE International Symposium on, pages 1209–1212. IEEE, 2010. → pages 1
[38] J. A. Jensen. Field: A program for simulating ultrasound systems. In 10th Nordic-Baltic Conference on Biomedical Imaging, Vol. 4, Supplement 1, Part 1: 351–353. Citeseer, 1996. → pages 58
[39] A. Khamene and F. Sauer. A novel phantom-less spatial and temporal ultrasound calibration method. In J. Duncan and G. Gerig, editors, Medical Image Computing and Computer-Assisted Intervention MICCAI 2005, volume 3750 of Lecture Notes in Computer Science, pages 65–72. Springer Berlin Heidelberg, 2005. ISBN 978-3-540-29326-2. doi:10.1007/11566489_9. → pages 6
[40] S. Khosravi, R. Rohling, and P. Lawrence. One-step needle pose estimation for ultrasound guided biopsies. Conference Proceedings: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2007:3343–3346, 2007. ISSN 1557-170X. doi:10.1109/IEMBS.2007.4353046. → pages 70, 71, 72
[41] J. Kowal, C. A. Amstutz, M. Caversaccio, and L. P. Nolte. On the development and comparative evaluation of an ultrasound B-mode probe calibration method. Computer Aided Surgery, 8(3):107–119, 2003. → pages 42
[42] N. Krekel, B. Zonderhuis, H. Schreurs, A. Cardozo, H. Rijna, H. van der Veen, S. Muller, P. Poortman, L. de Widt, W. de Roos, et al. Ultrasound-guided breast-sparing surgery to improve cosmetic outcomes and quality of life. A prospective multicentre randomised controlled clinical trial comparing ultrasound-guided surgery to traditional palpation-guided surgery (COBALT trial). BMC Surgery, 11(1):8, 2011. → pages 1
[43] R. K. Kumar, A. Ilie, J.-M. Frahm, and M. Pollefeys. Simple calibration of non-overlapping cameras with a mirror. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–7. IEEE, 2008. → pages 89
[44] T. Lange, S. Kraft, S.
Eulenstein, H. Lamecker, and P. Schlag. Automatic calibration of 3D ultrasound probes. Bildverarbeitung für die Medizin 2011, pages 169–173, 2011. → pages 6
[45] A. Lasso, T. Heffter, C. Pinter, T. Ungi, and G. Fichtinger. Implementation of the PLUS open-source toolkit for translational research of ultrasound-guided intervention systems. In Medical Image Computing and Computer-Assisted Intervention (MICCAI 2012) - Systems and Architectures for Computer Assisted Interventions, pages 1–12, Nice, France, 10/2012. The MIDAS Journal. → pages 37, 43
[46] F. Lindseth, T. Langø, J. Bang, and T. A. Nagelhus Hernes. Accuracy evaluation of a 3D ultrasound-based neuronavigation system. Computer Aided Surgery: Official Journal of the International Society for Computer Aided Surgery, 7(4):197–222, 2002. ISSN 1092-9088. doi:10.1002/igs.10046. → pages 3, 7
[47] F. Lindseth, G. A. Tangen, T. Langø, and J. Bang. Probe calibration for freehand 3-D ultrasound. Ultrasound in Medicine & Biology, 29(11):1607–1623, 2003. → pages 6
[48] C. V. Loan and N. Pitsianis. Approximation with Kronecker products. Technical report, Cornell University, 1992. → pages 27
[49] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60:91–110, Nov. 2004. ISSN 0920-5691. doi:10.1023/B:VISI.0000029664.99615.94. → pages 73
[50] H. Lu, J. Li, Q. Lu, S. Bharat, R. Erkamp, B. Chen, J. Drysdale, F. Vignon, and A. Jain. A new sensor technology for 2D ultrasound-guided needle tracking. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2014, pages 389–396. Springer, 2014. → pages 10
[51] L. Lucchese. Closed-form pose estimation from metric rectification of coplanar points. IEE Proceedings - Vision, Image, and Signal Processing, 153(3):364–378, June 2006. doi:10.1049/ip-vis:20045229. → pages 73, 76
[52] T. Luhmann. Precision potential of photogrammetric 6DOF pose estimation with a single camera.
ISPRS Journal of Photogrammetry and Remote Sensing, 64(3):275–284, May 2009. ISSN 0924-2716. doi:10.1016/j.isprsjprs.2009.01.002. → pages 74
[53] T. Maecken, M. Zenz, and T. Grau. Ultrasound characteristics of needles for regional anesthesia. Regional Anesthesia and Pain Medicine, 32(5):440–447, 2007. → pages 103
[54] E. Melvær, K. Mørken, and E. Samset. A motion constrained cross-wire phantom for tracked 2D ultrasound calibration. International Journal of Computer Assisted Radiology and Surgery, pages 1–10, 2011. → pages 1, 4, 6
[55] L. Mercier, T. Langø, F. Lindseth, and D. L. Collins. A review of calibration techniques for freehand 3-D ultrasound systems. Ultrasound in Medicine & Biology, 31(4):449–471, Apr. 2005. ISSN 0301-5629. doi:10.1016/j.ultrasmedbio.2004.11.015. → pages 6, 18, 19, 20, 71
[56] J. J. Moré. The Levenberg-Marquardt algorithm: implementation and theory. In Numerical Analysis, pages 105–116. Springer, 1978. → pages 31
[57] D. M. Muratore and R. L. Galloway. Beam calibration without a phantom for creating a 3D freehand ultrasound system. Ultrasound in Medicine & Biology, 27(11):1557–1566, Nov. 2001. ISSN 0301-5629. → pages 6
[58] M. Murray, S. Rose, D. Wedel, C. Wass, J. Mueller, and B. Harrison. Faust’s Anesthesiology Review. Elsevier - Health Sciences Division, 2014. ISBN 9781437713695. → pages 3
[59] M. Najafi and R. Rohling. Single camera closed-form real-time needle trajectory tracking for ultrasound. In Proceedings of SPIE, volume 7964, page 79641F, 2011. doi:10.1117/12.877798. → pages 3, 74, 95
[60] M. Najafi, N. Afsham, P. Abolmaesumi, and R. Rohling. Single wall closed-form differential ultrasound calibration. In Proceedings of SPIE, volume 8316, page 83162A, 2012. → pages 5, 6, 18
[61] M. Najafi, N. Afsham, P. Abolmaesumi, and R. Rohling. A closed-form differential formulation for ultrasound spatial calibration: Multi-wedge phantom. Ultrasound in Medicine & Biology, 2014. → pages 11, 42, 75, 90
[62] M. Nakamoto, K. Nakada, Y. Sato, K. Konishi, M.
Hashizume, and S. Tamura. Intraoperative magnetic tracker calibration using a magneto-optic hybrid tracker for 3D ultrasound-based navigation in laparoscopic surgery. IEEE Transactions on Medical Imaging, 27(2):255–270, Feb. 2008. ISSN 0278-0062. doi:10.1109/TMI.2007.911003. → pages 1
[63] N. Pagoulatos, D. Haynor, and Y. Kim. A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer. Ultrasound in Medicine & Biology, 27(9):1219–1229, 2001. → pages 6
[64] K. Paulius, P. Maguina, and A. Mejia. Ultrasound-guided management of hand fractures. Orthopedics, 31(12):1204–1207, 2008. → pages 1
[65] M. Peterhans, S. Anderegg, P. Gaillard, T. Oliveira-Santos, and S. Weber. A fully automatic calibration framework for navigated ultrasound imaging. In Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE, pages 1242–1245. IEEE, 2010. → pages 2, 6
[66] A. Peters, R. Baker, and M. Sangeux. Validation of 3D freehand ultrasound for the determination of the hip joint centre. Gait & Posture, 31(4):530–532, 2010. → pages 1
[67] R. W. Prager, R. N. Rohling, A. H. Gee, and L. Berman. Rapid calibration for 3D freehand ultrasound. Ultrasound in Medicine & Biology, 24(6):855–869, July 1998. ISSN 0301-5629. doi:10.1016/S0301-5629(98)00044-1. → pages 5, 6, 7, 17, 18, 35, 40, 41, 42, 98
[68] K. Rahbar and H. R. Pourreza. Inside looking out camera pose estimation for virtual studio. Graphical Models, 70(4):57–75, 2008. → pages 73, 74
[69] O. Rouvière, R. Souchon, R. Salomir, A. Gelet, J. Chapelon, and D. Lyonnet. Transrectal high-intensity focused ultrasound ablation of prostate cancer: effective treatment requiring accurate imaging. European Journal of Radiology, 63(3):317–327, Sept. 2007. ISSN 0720-048X. doi:10.1016/j.ejrad.2007.06.026. → pages 3
[70] F. Shi, X. Zhang, and Y. Liu. A new method of camera pose estimation using 2D-3D corner correspondence. Pattern Recognition Letters, 25(10):1155–1163, 2004. → pages 73
[71] G. G. Slabaugh.
Computing Euler angles from a rotation matrix, 1999. Retrieved on August 6, 2000. → pages 39, 63
[72] P. Stolka, P. Foroughi, M. Rendina, C. Weiss, G. Hager, and E. Boctor. Needle guidance using handheld stereo vision and projection for ultrasound-based interventions. In P. Golland, N. Hata, C. Barillot, J. Hornegger, and R. Howe, editors, Medical Image Computing and Computer-Assisted Intervention MICCAI 2014, volume 8674 of Lecture Notes in Computer Science, pages 684–691. Springer International Publishing, 2014. ISBN 978-3-319-10469-0. → pages 72
[73] P. J. Stolka, P. Foroughi, M. Rendina, C. R. Weiss, G. D. Hager, and E. M. Boctor. Needle guidance using handheld stereo vision and projection for ultrasound-based interventions. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2014, pages 684–691. Springer, 2014. → pages 72, 95
[74] J. Trobaugh, W. Richard, K. Smith, and R. Bucholz. Frameless stereotactic ultrasonography: method and applications. Computerized Medical Imaging and Graphics, 18(4):235–246, 1994. → pages 4
[75] G. Unsgård. Ultrasound-guided neurosurgery. Practical Handbook of Neurosurgery, pages 907–926, 2009. → pages 1
[76] W. F. Walker and G. E. Trahey. A fundamental limit on delay estimation using partially correlated speckle signals. IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, 42(2):301–308, 1995. → pages 16, 17, 31
[77] X. L. Wang, P. J. Stolka, E. Boctor, G. Hager, and M. Choti. The Kinect as an interventional tracking system. In SPIE Medical Imaging, pages 83160U–83160U. International Society for Optics and Photonics, 2012. → pages 72, 95
[78] C. Yan, B. Goulet, J. Pelletier, S. Chen, D. Tampieri, and D. Collins. Towards accurate, robust and practical ultrasound-CT registration of vertebrae for image-guided spine surgery. International Journal of Computer Assisted Radiology and Surgery, 6(4):523–537, 2011. → pages 1, 2
[79] Z. Yaniv, P. Foroughi, H. Kang, and E. Boctor.
Ultrasound calibration framework for the image-guided surgery toolkit (IGSTK). In Proceedings of SPIE, volume 7964, page 79641N, 2011. → pages 4, 5
[80] Z. Zhang. Microsoft Kinect sensor and its effect. IEEE MultiMedia, 19(2):4–10, 2012. → pages 72
DOI: 10.14288/1.0167645
URI: http://hdl.handle.net/2429/51776