@prefix vivo: <http://vivoweb.org/ontology/core#> . @prefix edm: <http://www.europeana.eu/schemas/edm/> . @prefix ns0: . @prefix dcterms: <http://purl.org/dc/terms/> . @prefix dc: <http://purl.org/dc/elements/1.1/> . @prefix skos: <http://www.w3.org/2004/02/skos/core#> . vivo:departmentOrSchool "Applied Science, Faculty of"@en, "Electrical and Computer Engineering, Department of"@en ; edm:dataProvider "DSpace"@en ; ns0:degreeCampus "UBCV"@en ; dcterms:creator "Flaccavento, Giselle"@en ; dcterms:issued "2009-11-21T01:14:53Z"@en, "2004"@en ; vivo:relatedDegree "Master of Applied Science - MASc"@en ; ns0:degreeGrantor "University of British Columbia"@en ; dcterms:description """The measurement of the relative location of images acquired using freehand ultrasound is often required for panoramic ultrasound, ultrasound assisted surgery, and 3D ultrasound. It can be necessary to keep the patient still for up to 8 min or to hold their breath for up to 45 sec. This can be difficult or impossible for a sick patient. Our system intends to minimize the need for patient breath holds and physical restraints during image acquisition. In this thesis, we present a system that uses an inexpensive trinocular camera to measure the probe location with respect to the patient's body by calculating the location of both the probe and the patient. The goal is to find the location of the ultrasound images relative to the patient's skin. Using an Optotrak as a reference, the accuracy of the camera is tested. Based on the results obtained, we can estimate that at a distance of approximately 1000 mm from the camera, the location of a patch on a curved surface (such as the patient), with a size of approximately 20 x 20 mm, can be calculated to within ±2 mm. The probe location can be calculated to an accuracy between −2.3 mm and 1.8 mm when the object attached to the probe has an area of approximately 90 x 40 mm. A consistency test is created using the camera and a calibrated probe. The results of this test show that the mean distances between the points calculated using only the camera and the points calculated using the calibrated probe with the camera are −6.7 mm, 1.2 mm, and 1.6 mm in the x-, y-, and z-directions. Since tracking of the area being examined during ultrasound has not been performed using other tracking systems, our system offers an improvement for freehand tracking techniques. Other systems used for tracking patient motion during an ultrasound scan have not been able to track the area being scanned, as the markers used for tracking would interfere with the examination. In our system, the features overlaid on the patient's skin do not interfere with the ultrasound probe."""@en ; edm:aggregatedCHO "https://circle.library.ubc.ca/rest/handle/2429/15419?expand=metadata"@en ; dcterms:extent "26548526 bytes"@en ; dc:format "application/pdf"@en ; skos:note "Patient and Probe Tracking During Freehand Ultrasound by Giselle Flaccavento B.A.Sc., University of Waterloo, Canada 2001 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Department of Electrical and Computer Engineering) We accept this thesis as conforming to the required standard THE UNIVERSITY OF BRITISH COLUMBIA May 2004 © Giselle Flaccavento, 2004 Library Authorization In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study.
I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. Giselle Flaccavento 14/05/2004 Name of Author (please print) Date (dd/mm/yyyy) Title of Thesis: Patient and Probe Tracking During Freehand Ultrasound Degree: Master of Applied Science Year: 2004 Department of Electrical and Computer Engineering The University of British Columbia Vancouver, BC, Canada Abstract The measurement of the relative location of images acquired using freehand ultrasound is often required for panoramic ultrasound, ultrasound assisted surgery, and 3D ultrasound. It can be necessary to keep the patient still for up to 8 min or to hold their breath for up to 45 sec. This can be difficult or impossible for a sick patient. Our system intends to minimize the need for patient breath holds and physical restraints during image acquisition. In this thesis, we present a system that uses an inexpensive trinocular camera to measure the probe location with respect to the patient's body by calculating the location of both the probe and the patient. The goal is to find the location of the ultrasound images relative to the patient's skin. Using an Optotrak as a reference, the accuracy of the camera is tested. Based on the results obtained, we can estimate that at a distance of approximately 1000 mm from the camera, the location of a patch on a curved surface (such as the patient), with a size of approximately 20 x 20 mm, can be calculated to within ±2 mm. The probe location can be calculated to an accuracy between −2.3 mm and 1.8 mm when the object attached to the probe has an area of approximately 90 x 40 mm. A consistency test is created using the camera and a calibrated probe. The results of this test show that the mean distances between the points calculated using only the camera and the points calculated using the calibrated probe with the camera are −6.7 mm, 1.2 mm, and 1.6 mm in the x-, y-, and z-directions. Since tracking of the area being examined during ultrasound has not been performed using other tracking systems, our system offers an improvement for freehand tracking techniques. Other systems used for tracking patient motion during an ultrasound scan have not been able to track the area being scanned, as the markers used for tracking would interfere with the examination. In our system, the features overlaid on the patient's skin do not interfere with the ultrasound probe.
Table of Contents Abstract ii Table of Contents vi List of Tables viii List of Figures xii Notation xiii Acknowledgments xiv 1 Introduction 1 1.1 Patient Motion Tracking System 2 1.1.1 Digital Camera Tracking Component 3 1.1.2 Ultrasound Image-Based Tracking Component 4 1.2 Applications of the Tracking System 5 1.3 Thesis Overview 7 2 Background 9 2.1 Ultrasound Imaging 9 2.1.1 3D Ultrasound 10 2.1.2 Panoramic Ultrasound 12 2.2 Probe and Patient Movement 13 2.2.1 Respiration 15 2.2.2 Probe Force 16 2.2.3 Voluntary and Involuntary Patient Movement 16 2.3 Types of Tracking Devices 17 2.4 Other Applications for Tracking Devices 18 2.4.1 Augmented Reality 19 2.4.2 Respiratory Modeling, Compensation, or Elimination 20 2.4.3 Tracking of Medical Tools 21 2.5 Discussion 21 3 Digital Camera Tracking Component 24 3.1 Digital Camera System 25 3.1.1 Hardware and Software 25 3.1.2 Stereo Vision Geometry 27 3.2 Optotrak Positioning System 28 3.2.1 IRED Viewing Angle 29 3.2.2 IRED Z Offset 31 3.2.3 IRED X and Y Offsets 33 3.3 Relationship between the Camera System and Optotrak System 37 3.3.1 Plate to Digiclops Transformation (T_P^D) 38 3.3.2 IRED to Local Optotrak Transformation (T_I^L) 45 3.3.3 Plate to IRED Transformation (T_P^I) 46 3.3.4 Digiclops to Local Optotrak Transformation (T_D^L) 48 3.3.5 Analysis of the Transformation Matrix T_D^L 49 3.4 Camera System Validation Tests 50 3.4.1 Method Used to Determine the Accuracy of the Digiclops 51 3.4.2 Accuracy Results of the Digiclops 54 3.4.3 Effect of Patch Size on the Accuracy of the Digiclops 56 3.5 Discussion 62 4 Ultrasound Image-Based Consistency Test 66 4.1 Ultrasound Materials 67 4.1.1 Properties of Ultrasonic Materials 68 4.1.2 Material Requirements 69 4.1.3 Material Tests 74 4.2 Components Used to Test the Tracking System 76 4.2.1 Phantom Construction 78 4.2.2
Creating the Artificial Skin and Fiducials 80 4.3 Ultrasound Probe Calibration 83 4.3.1 Flat Plate to Digiclops Transformation (T_P^D) 85 4.3.2 Calibration Box to Digiclops Transformation (T_B^D) 85 4.3.3 Calibration Box and Ultrasound Data Points (P_B, P_U) 90 4.3.4 Ultrasound Image to Flat Plate Transformation (T_U^P) 96 4.4 Method used to Test the Complete System 99 4.4.1 Ultrasound and Imaged Fiducial Data Points (P_U, P_M) 101 4.4.2 Imaged Fiducial to Reference Fiducial Transformation (T_M^R) 106 4.4.3 Reference Fiducial to Digiclops Transformation (T_R^D) 108 4.4.4 Flat Plate to Digiclops Transformation (T_P^D) 108 4.5 Accuracy of the Ultrasound Consistency Tests 110 4.5.1 Consistency Test Results 110 4.6 Consistency Test Error Analysis 111 4.6.1 Information Obtained from the Ultrasound Images 111 4.6.2 Information Obtained with the Digiclops 114 4.6.3 Accuracy of the Probe Calibration 115 4.7 Correction of the Ultrasound Image to Imaged Fiducial Transformation 117 4.7.1 Geometrical Calculation of T_U^M 119 4.7.2 On-The-Fly Calculation of T_U^M_new Derived Directly from the Ultrasound Image 122 4.7.3 Calculation of T_U^M Based on a Constant Bias 123 4.8 Discussion 125 5 Conclusions and Future Directions 130 5.1 Tracking System Summary 130 5.2 Digital Camera Evaluation 131 5.3 Ultrasound Image-Based Consistency Test 132 5.4 Future Directions 134 5.4.1 Further Feasibility Testing and Algorithm Implementation 134 5.4.2 Variations for the Artificial Skin and Fiducials 136 Bibliography 148 Appendices 149 A Setup used to Create the Plate to IRED Transformation 149 B Source Code 151 C Details about the Data Used to Calculate the Best-Fit Sphere 153 D Ultrasound Tests on Different Types of Materials 155 D.1 Rubber 156 D.2 Metals 158 D.3 Strings and Fibers 162 D.4 Tapes 162 E Specifications for Manufacturing the Metal Fiducial 167 List of Tables 3.1 Residual Error from Finding the Pivot Point of the Digitizing Pointer 33 3.2 X and Y Offset Distances for the IRED Markers 36 3.3 Digiclops Camera System Configuration Information 43 3.4 Patch Size Information for the Flat Plate and Sphere 60 3.5 Mean Error Between the Patch Normal and the Plate Normal 60 4.1 Ultrasonic Properties of a Selection of Materials 70 4.2 Properties of the Boundaries between the Materials Used for the Ultrasound Tracking Experiment 71 4.3 Ultrasonic Properties for Mammalian Tissues 81 4.4 Distance Between the Points P_D,I and P_D,II 111 4.5 Spread of the Points P_U Chosen as the Centre of the Bright Spots 112 4.6 Probability that Either a Positive or Negative Angle of the Ultrasound Image Relative to the N-Shaped Fiducial is Chosen based on the Width of Each Bright Spot 112 4.7 Variability of Manually Choosing Corners of the Reference Fiducial from the Digiclops Images 115 4.8 Variability of Manually Choosing the Printed Crosses from the Digiclops Images 115 4.9 Distance Between the Points P_D,IV and the Point P_D,III 117 4.10 Spread of the Points P_D,IV 117 4.11 Distance Between the Points P_M and the Points Calculated using T_U^M P_U 123 4.12 Distance Between the Points P_M and the Points Calculated using T_U^M_new P_U 124 4.13 Standard Deviation of the Differences used to Calculate the Biases 125 4.14 Distance Between the Points P_M and the Points Calculated using T_U^M_bias P_U 125 C.1 Errors Between each Marker Location and the Best-Fit Sphere 154 C.2 Spread of the Errors Between each Marker Location and the Best-Fit Sphere
154 List of Figures 1.1 Tracking System Composed of the Digital Camera and Ultrasound Image-Based Components 2 2.1 A 2D Abdominal Ultrasound Image of a Human Fetus [86] 10 2.2 Acquisition of Images using Freehand Scanning for 3D Ultrasound 12 2.3 Acquisition of Images for a Panoramic Ultrasound 13 3.1 Components of the Digiclops Camera and the Digiclops Coordinate System 26 3.2 Projection of a Single Light Ray onto an Image Plane 28 3.3 Geometry of Light Rays onto Two Image Planes 29 3.4 Geometry of Two Parallel Cameras 30 3.5 Range of Viewing Angles for an IRED 31 3.6 Definition of the Z IRED Offset 32 3.7 Setup Used to Calculate the Z IRED Offset 34 3.8 Notation Used in Determining IRED Offsets in the X and Y Directions 35 3.9 Setup Used to Determine the IRED Offsets in the X and Y Directions 36 3.10 Setup of Equipment used to Find the Transformation from the Digiclops to the Local Optotrak 38 3.11 Transformations used to Find the Transformation from the Digiclops to the Local Optotrak 39 3.12 Example of a Subpixel Shift Using the Cross-Correlation Coefficients 42 3.13 Digiclops and Plate Coordinate Systems 45 3.14 Local Optotrak and IRED Coordinate Systems 46 3.15 IRED and Plate Coordinate Systems 47 3.16 Digiclops and Local Optotrak Coordinate Systems 50 3.17 Images recorded by the Digiclops using the Flat Plate and Spherical Target Surfaces 55 3.18 Distribution of Errors for Six Runs of the Digiclops Accuracy Test using the Flat Plate as a Target 57 3.19 Distribution of Errors for Six Runs of the Digiclops Accuracy Test using the Sphere as a Target 58 3.20 Mean Error of all the Points on each Surface for Various Digiclops to Target Distances 59 3.21 Accuracy of Surface Tracking using Various Patch Sizes 61 4.1 Components of the Ultrasound Tracking Experiment 67 4.2 Geometry of the Transmitted and Reflected Components of an Incident Ultrasound Wave 69 4.3 Layers of Materials Used for the Complete Experiment and the Material Tests 71 4.4 Relationship Between the Fiducial Size and the Occluded Area in the Ultrasound Image 75 4.5 Ultrasound Image Results for Various Materials of Various Sizes 77 4.6 Setup of Apparatus Used for the Ultrasound Tracking Experiment 78 4.7 Tissue-Mimicking Phantom Torsos 80 4.8 Close-up View of an N-Shaped Fiducial Embedded in a Latex Skin 82 4.9 Artificial Latex Skin with Embedded N-Shaped Fiducials 83 4.10 Transformations used to Find the Transformation from the Ultrasound Image to the Flat Plate 86 4.11 Digiclops and Plate Coordinate Systems 87 4.12 Digiclops and Calibration Box Coordinate Systems 88 4.13 Ultrasound Probe and Calibration Box Coordinate Systems 91 4.14 Top View of the Calibration Box 93 4.15 Sample Ultrasound Image Showing Three Wires from the Calibration Box 94 4.16 Geometry for Finding the Location of the Point P_H,K 95 4.17 Flat Plate and Ultrasound Probe Coordinate Systems 96 4.18 Transformations used to Verify the Consistency of the Tracking System 100 4.19 Ultrasound Probe and Imaged Fiducial Coordinate Systems 102 4.20 Sample Ultrasound Image Showing Three Components of the Imaged Fiducial 103 4.21 Two Possible Solutions for Defining the Angle of the Ultrasound Image Relative to the N-Shaped Fiducial 104 4.22 Imaged Fiducial and Reference Fiducial Coordinate Systems 106 4.23 Digiclops Images of the Phantom Recorded During the Consistency Experiment
107 4.24 Digiclops and Reference Fiducial Coordinate Systems 109 4.25 Digiclops Images of the Phantom, Probe, and Flat Plate Recorded During the Consistency Experiment 109 4.26 Variability of Multiple Selection of Bright Spot Centres 113 4.27 Probability that Either a Positive or Negative Angle of the Ultrasound Image Relative to the N-Shaped Fiducial is Chosen based on the Width of Each Bright Spot 114 4.28 Relationship Between the Coordinate Systems C_U and C_M 120 4.29 Rotation Around the Intersection Between the Ultrasound Image and the N-Shaped Fiducial 121 5.1 Example Ultrasound Images Obtained using Single and Double Fiducials 138 A.1 Setup Used to Create the Plate to IRED Transformation 150 D.1 Ultrasound Image Results with Strips of Por-A-Mold Polyurethane Rubber of Various Sizes 157 D.2 Ultrasound Image Results with Strips of Por-A-Mold Polyurethane Rubber of Various Sizes (continued) 158 D.3 Ultrasound Image Results with Strips of Silastic and Dap Silicone of Various Sizes 159 D.4 Ultrasound Image Results with Strips of HS Silicone of Various Sizes 160 D.5 Ultrasound Image Results with Various Sized Strips of Por-A-Mold Polyurethane Rubber Embedded in a Latex Matrix 161 D.6 Ultrasound Image Results of Latex Rubber and Various Sized Strips of Silicone Rubber Embedded in a Latex Matrix 162 D.7 Ultrasound Image Results of Sheets of Aluminum Cut into Various Sized Strips 163 D.8 Ultrasound Image Results of Needles and Sheets of Copper and Steel Cut into Various Sized Strips 164 D.9 Ultrasound Image Results of Various Wires and Fibers 165 D.10 Ultrasound Image Results of Various Tapes and Tape Widths 166 E.1 Specifications for Manufacturing the Metal Fiducial 168 Notation The following is a summary of the format of the notation used throughout this thesis. The notation in this list shows the format used in this thesis without referring to specific variables from the text. • C_A = Coordinate System A • x_A = X-Component of Coordinate System A • y_A = Y-Component of Coordinate System A • z_A = Z-Component of Coordinate System A • P_A = Any Point in Coordinate System A • P_A_x = X-Component of a Point in Coordinate System A • P_A_y = Y-Component of a Point in Coordinate System A • P_A_z = Z-Component of a Point in Coordinate System A • P_A,B = Point B in Coordinate System A • P_A,B_x = X-Component of Point B in Coordinate System A • P_A,B_y = Y-Component of Point B in Coordinate System A • P_A,B_z = Z-Component of Point B in Coordinate System A • T_A^D = Homogeneous Transformation from Coordinate System A to Coordinate System D Acknowledgments Were it not for the strong support of my thesis supervisors, Dr. Robert Rohling and Dr. Peter Lawrence, this work would not have been possible. Dr. Rohling's guidance, dedication, and constructive feedback have all been essential throughout the research presented in this thesis. He has been an invaluable resource in the area of ultrasound technologies, providing insight into every aspect of this work. Dr. Lawrence's expertise in the area of digital cameras was a key part of the successful completion of this work. His ideas inspired and contributed to much innovation throughout this research. In addition, I would like to thank Dr. Tim Salcudean for agreeing to be part of the thesis defence committee and for providing valuable feedback about this thesis. Many thanks are due to the group of colleagues who make up the Robotics and Control Laboratory. From them, new ideas and creative problem solving methods have been developed.
The assistance with construction and test sample preparation from the technicians in the electrical engineering machine shop is also greatly appreciated. I am also grateful to my family, who have provided me with positive reinforcement and support during my research. I thank my wonderful group of friends, both local and far away, to whom I am enormously indebted for providing me with encouragement throughout this work. Very special thanks to Steve for his understanding and patience during this time; his support was invaluable to me. Finally, I would like to acknowledge the financial support provided by Dr. Rohling and Dr. Lawrence, as well as NSERC and the IRIS/Precarn Networks of Centres of Excellence. The funding received as part of the TULIP (Three-Dimensional Ultrasound for Image-Guided Procedures) and IT-MED (Intelligent Tools for Medical Diagnosis and Interventions) projects has made this research possible. Chapter 1 Introduction The measurement of the relative location of ultrasound images is often required for panoramic ultrasound, ultrasound assisted surgery, and freehand 3D ultrasound [71]. During an ultrasound examination, the probe is placed directly on the surface of the patient's skin. Cross sectional images of the patient's anatomy are collected as the probe is moved to various locations on the patient, imaging different parts of the anatomy. In order to successfully use the information contained in these cross sectional slices for panoramic ultrasound, ultrasound assisted surgery, or freehand 3D ultrasound, the relative ultrasound image location with respect to the anatomy must be known. The ultrasound probe is usually tracked with respect to a fixed coordinate system such as a bed or floor while the patient remains still. In an effort to remain still, the patient is often asked to hold their breath or is physically restrained by a device. For some procedures, it is necessary to keep the patient still for up to 8 minutes [13] or hold their breath for up to 45 seconds [60]. This can be difficult and uncomfortable for a healthy adult, and it is often impossible for a pregnant woman, a child with urinary tract disease, or an individual with arthritis to remain still for these long periods of time. One solution to this patient movement problem is to remove the necessity for patient stillness. This thesis details a technique that intends to eliminate the need for patient breath holds and physical restraints during ultrasound image acquisition. In this thesis, we present a system, shown in Figure 1.1, which measures the probe location with respect to the patient's body by tracking both the probe and the patient. The goal is to produce a system with an accuracy better than the motion incurred during the ultrasound examination. This chapter begins with an overview of the technique that we have developed to account for patient motion. Next, we describe the possible uses for our system. The chapter closes with an outline of the original contributions contained in this work. 1.1 Patient Motion Tracking System Our tracking system is used to track the motion of a patient's body during an ultrasound examination. Respiration, skin deformation caused by ultrasound probe force, voluntary movement, and involuntary movement all cause errors in creating 3D and panoramic ultrasound images.
Typically, tracking systems used in these types of ultrasound applications have focused on tracking the ultrasound probe as it moves through an examination without knowledge of the patient's location [56, 74]. Knowing the probe location with respect to the patient's skin is essential when aligning and registering the 2D ultrasound images. Unfortunately, errors are included in the data if only the probe location is tracked and the patient has moved during the scan. As an example, consider the case where the patient exhales while an ultrasound probe is near the navel. Both the probe and the patient's abdomen will move in the anterior-posterior direction [36]. If only the probe is tracked, misleading data would show that the probe location relative to the patient has changed. In reality, the probe and the patient have both moved, producing no overall relative motion between the two. Similarly, it is possible that the probe remains still while the patient moves. In this case, a traditional system which only tracks the probe movement would record that there has not been any motion. In reality, the location of the probe relative to the patient has changed, and the 3D ultrasound or panoramic ultrasound algorithms must take this information into account. Our system is composed of two main components. The first is a digital camera system that tracks the location of the ultrasound probe as well as the location of the patient's skin. The second component uses artificial landmarks on the skin to relate the ultrasound images directly to the patient's skin. Each of these two systems provides some information about the relative location of the ultrasound image and the patient. We have the option of tracking the movement using only the digital camera or of also including these artificial landmarks, which are visible in the ultrasound image, in the system. By combining the information from both of these components, we have the opportunity to increase the system's accuracy as well as check the consistency of our results. 1.1.1 Digital Camera Tracking Component The digital camera system is able to record the 3D location of feature points in space. These feature points include natural and artificial landmarks, which are visible in the camera images. The system is able to make accurate correlations between the two images using the features in the scene. This correlation allows the measurement of many 3D point locations on an object attached to the probe and also on the surface of the patient's skin. Rather than describe the surface location using the individual location of each feature point, the mean of patches of points is used. Generally, a clear line of sight must exist between the sensors of the optical tracking system and the target. In the case of the digital camera system used in this thesis, occlusions of some reference points could be compensated for by knowing the location of surrounding data points. For this reason, when the sonographer's arm or the cord of the ultrasound probe is occluding information about the location of the patient's skin, nearby patches of feature points can still be recorded. The ultrasound probe is small and most of the surface of the probe is covered by the sonographer's hand and is therefore often not visible to the camera system. For this reason, an object with a grayscale textured surface is rigidly attached to the probe. Throughout the examination, this object is visible to the camera system.
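The patch-averaging idea described earlier in this subsection can be sketched in code. The following is a minimal illustration only, not the thesis's implementation; the function name, array layout, and the minimum-feature threshold are our assumptions. It averages stereo-triangulated 3D feature points into per-patch centroids and skips patches left with too few visible features, which is how partial occlusion by the sonographer's arm or the probe cord could be tolerated.

```python
import numpy as np

def patch_centroids(points_3d, patch_ids, min_features=10):
    """Summarize a tracked surface as per-patch centroids.

    points_3d : (N, 3) array of stereo-triangulated feature locations (mm).
    patch_ids : (N,) array assigning each feature point to a skin patch.
    Patches with fewer than min_features visible points (e.g. occluded by
    the sonographer's arm) are skipped rather than poorly estimated.
    """
    centroids = {}
    for pid in np.unique(patch_ids):
        pts = points_3d[patch_ids == pid]
        if len(pts) >= min_features:
            centroids[int(pid)] = pts.mean(axis=0)
    return centroids
```

Averaging over a patch also damps the per-feature triangulation noise, which is consistent with describing the surface by patches rather than by individual feature points.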
In order to track the location of the ultrasound probe, the camera system calculates the 3D location of points on the object and then calculates the probe location from these points. By choosing a 3D object which has surfaces visible from every direction, such as the cube shown in Figure 1.1, the probe is tracked regardless of its orientation. The surface of the patient's skin is also tracked using the digital camera system. The camera system requires that there are features on the skin surface in order to calculate the patient's location. Since the number of features on a person's skin is limited, our system increases the number of features artificially. The surface of the patient's skin is overlaid with a painted grayscale texture rich with features. The grayscale texture is composed of unstructured features which cover the entire surface of the area being examined. Structured landmarks, known as fiducials, are also included in the texture. These fiducials provide specific reference points on the patient's skin surface. 1.1.2 Ultrasound Image-Based Tracking Component Unlike the painted features on the patient's skin, the fiducials used by the ultrasound image-based component of the tracking system are visible directly in the ultrasound images and can be used to partly indicate the location of the ultrasound image with respect to the fiducials. The digital camera supplies the system with information about the global location of the probe with respect to the patient. The ultrasound image-based component of the tracking system provides local information about the ultrasound image location relative to the patient. The ultrasound image-based component of the tracking system relies on information provided by the structured fiducials to calculate this relative location. These fiducials are created in the shape of the letter \"N\". The N-shaped fiducials serve a dual purpose within the tracking system. Not only are the N-shaped fiducials visible within the images recorded by the digital camera system, but they also appear in the ultrasound images. Bright spots are produced within the ultrasound image when the N-shaped fiducials are seen in ultrasound. By calculating the distances between the bright spots from the N-shaped fiducial that are visible in the ultrasound image, and knowing the size of the fiducial, the relationship between the ultrasound image and the fiducial is calculated.
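The distance-ratio calculation just described is the classic N-wire construction: because the three bright spots lie on one line in the image plane, the ratio of their spacings equals the fractional position at which the plane crosses the fiducial's diagonal. A minimal sketch under assumed variable names (the thesis's own notation and code are not reproduced here):

```python
import numpy as np

def n_fiducial_crossing(p_leg1, p_diag, p_leg2, diag_start, diag_end):
    """Locate where the ultrasound plane crosses an N-fiducial's diagonal.

    p_leg1, p_diag, p_leg2 : in-plane coordinates (mm) of the three bright
        spots, ordered leg / diagonal / leg along their common line.
    diag_start, diag_end   : 3D endpoints of the diagonal wire in the
        fiducial coordinate system, known from the fiducial's construction.
    """
    ratio = np.linalg.norm(p_diag - p_leg1) / np.linalg.norm(p_leg2 - p_leg1)
    # The crossing point lies the same fractional distance along the diagonal.
    return diag_start + ratio * (diag_end - diag_start)
```

Pairing the returned fiducial-frame point with the corresponding bright-spot location in the image yields one image-to-fiducial correspondence; several such correspondences together constrain the image-to-fiducial transformation.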
Either the features or the pixels within each 2D image are used to create this model. A similar method is used in the creation of panoramic ultrasound images. The 2D overlapping ultrasound images are acquired as the probe is moved along the surface of the skin. For panoramic ultrasound, the probe is moved in a direction along the ultrasound imaging plane. The locations of both the probe and the surface of the patient's skin are recorded for each ultrasound image that is acquired. Using the information about the location of the probe and the skin surface during the acquisition of each ultrasound image, the 2D ultrasound images are aligned. Next, the ultrasound images are stitched together using a reconstruction technique. The final panoramic ultrasound image is a 2D image with a large field of view (FOV). Our system is applicable in cases where freehand scanning is employed. Freehand scanning allows the probe to move without physical constraints. The location of the probe is therefore necessary to determine the relationship between each of the ultrasound images. By using freehand ultrasound techniques, the FOV can be increased from the original limited range of the ultrasound probe [84]. If the ultrasound image to patient relationship is accurately determined, this large FOV can make it possible to see the entire length of a large organ or to create a 3D volume of a full term fetus. The ability to accurately create 3D volumes of organs would make it possible to detect changes in organ shape or size and measure the daily change in a fetus' weight. The detection of patient movement during ultrasound image acquisition is also useful when multiple probe sweeps are used to create one volume. Multiple sweeps may be required for large organs as one sweep may not cover the entire width of the organ. These multiple sweeps increase the time required to complete the image acquisition, thus increasing the amount of time a patient is required to remain still when their movement is not tracked. In addition to 3D ultrasound and panoramic reconstruction, the position of the ultrasound image with respect to the patient is required for ultrasound augmented medical procedures. As an example, the alignment of ultrasound images with the patient's body can be used to guide a needle during a biopsy procedure. In [80], the shape of the patient's body was acquired and then used for augmented ultrasound guided needle breast biopsies. It was assumed that the patient does not move from the time the equipment is calibrated until the biopsy is complete. The ability to track the patient's movement and skin surface deformation would remove the need for this assumption of patient stillness. Using our tracking system, it is also possible to record the position on the patient where each ultrasound image was acquired. This information could be reviewed by the physician after the ultrasound examination is completed. The features on the patient's skin could be used as landmarks during surgical planning since their relationship to the ultrasound images is known. Registration of ultrasound images with other imaging modalities, such as computed tomography (CT) or magnetic resonance imaging (MRI), could also be improved by tracking the patient's movement. During image guided surgical procedures, knowing the current ultrasound image to patient relationship could assist in providing useful information to the surgeon throughout the procedure.
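In the homogeneous-transformation notation used throughout the thesis, all of these applications reduce to the same chain: an ultrasound pixel is mapped through a fixed image-to-probe calibration, the tracked probe pose, and the tracked patient pose. The sketch below is ours, with hypothetical names and assumed 4x4 matrices; it shows why tracking both probe and patient cancels any motion common to the two.

```python
import numpy as np

def pixel_to_patient(col, row, scale_mm, T_probe_image, T_cam_probe, T_cam_patient):
    """Map an ultrasound pixel into patient (skin) coordinates.

    scale_mm      : pixel size in mm, set by the ultrasound depth setting.
    T_probe_image : 4x4 probe <- image transform (fixed, from calibration).
    T_cam_probe   : 4x4 camera <- probe pose at acquisition time.
    T_cam_patient : 4x4 camera <- patient pose at acquisition time.
    """
    p_image = np.array([col * scale_mm, row * scale_mm, 0.0, 1.0])
    p_camera = T_cam_probe @ T_probe_image @ p_image
    # Re-expressing the point in patient coordinates removes any motion
    # shared by probe and patient (e.g. the abdomen and probe rising
    # together during exhalation).
    return (np.linalg.inv(T_cam_patient) @ p_camera)[:3]
```

If only T_cam_probe were tracked, a shared probe-and-patient displacement would appear as apparent probe motion; with the final patient-frame step it cancels.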
As an example of acceptable error in these systems, registration between preoperative MRI images and a 3D ultrasound volume of the liver requires that the registration errors for small lesions be less than 5 mm [64]. For large lesions, errors up to 10 mm may be acceptable [64]. Whether used for creating a 3D volume or panoramic image, improving the accuracy of augmented reality surgical procedures, or used during surgical planning, the location of the ultrasound images must be known relative to the patient in order to account for probe and patient movement. The degree of improvement over existing techniques for creating panoramic ultrasound images and 3D ultrasound volumes depends on the amount of patient movement initially present. In addition to reducing errors, measuring the patient's movement allows the patient to relax, increasing the comfort level throughout the ultrasound procedure. In some cases, detecting patient movement will allow an otherwise impossible ultrasound procedure to be successfully completed. 1.3 Thesis Overview Patient movement causes errors in calculating the relationship between the acquired ultrasound images and the location of the patient. These errors make it necessary to repeat procedures and sometimes make it impossible to use the scanned information. In this chapter, we have briefly described a system that could be used to track patient and probe movement during the acquisition of ultrasound scans. This thesis is focused on proving the feasibility of the tracking system described. In order to analyze the system, a series of tests are described and evaluated. Chapter 2 begins by discussing ultrasound imaging in general, and more specifically, 3D and panoramic imaging. Next, we investigate the causes of patient motion that can occur during ultrasound image acquisition. After looking at patient motion, the discussion focuses on various other tracking methods. Each method is discussed and the advantages and disadvantages of tracking systems are weighed. Finally, we give examples of various applications of other tracking systems in medical imaging. Next, Chapter 3 deals specifically with the digital camera component of our tracking system. This chapter begins by describing the type of camera used as well as the geometry of stereo vision used to find the 3D feature locations. Two experiments are created in order to test the feasibility of using this camera in our tracking system. The first test measures the accuracy and precision of finding the location of a flat plate that is rich with texture. This test simulates the process of finding the 3D location of the object attached to the probe. The location of the plate is recorded using the digital camera system, and this location is compared to the location recorded using an optical tracker, with infra-red light emitting diode (IRED) markers, as a reference standard. The second experiment uses a sphere to test the accuracy and precision of measuring the surface location. This experiment is conducted in order to simulate tracking the abdominal surface of a pregnant patient. Both of the experiments investigate the number of pixels that are necessary to form a patch size with a suitable accuracy for our tracking system. Chapter 4 investigates the ultrasound image-based component of our system. The chapter begins with a discussion of the types of materials that could be used to add features to the patient's skin and to create the N-shaped fiducials.
Next, the tracking system is tested using both the digital camera and ultrasound images. A plate with a grayscale texture is attached to the probe. The plate is then calibrated to the ultrasound probe using the camera system and a calibration box. Two tissue-mimicking phantoms are constructed in the shape of a pregnant female and a male torso. Artificial skins with fiducials embedded inside are created to fit the form of each phantom. Finally, an experiment is described and performed to measure the consistency of the results obtained with the tracking system. Concluding remarks and future work are presented in Chapter 5. The chapter begins with a summary of the complete tracking system. The summary highlights the benefits of the system as well as the key contributions of this thesis. Next, conclusions about both the digital tracking component and the ultrasound image-based component are presented. A discussion about other possible methods which can be implemented in order to improve the consistency of our results follows. The chapter finishes with a look at future directions for our research. This final chapter looks at variations in the tracking system that could be implemented in order to make it suitable for use in a clinical environment. Practical implementation issues are also discussed. Chapter 2 Background This chapter discusses the background for our research. It begins with a general explanation of ultrasound imaging, followed by specifics for 3D and panoramic ultrasound. Next, patient movement during ultrasound image acquisition is described in detail. An overview of the techniques that are currently used in patient motion tracking is presented. Then, a literature review of research used to track either the ultrasound probe, the patient, or both, during acquisition of ultrasound images is discussed. The chapter closes with a discussion about the need for the tracking system that is presented in this thesis. 2.1 Ultrasound Imaging An ultrasound image of a patient's anatomy is constructed using the information derived from sound. The ultrasound probe sends pulses of sound into the patient and then makes use of the received echoes to create the image. Ultrasound waves are usually between 2 and 10 MHz when used for diagnostics in medicine [38]. The time it takes for the echoes to return to the probe and the intensity of the echoes provide the data required to create 2D ultrasound images [73]. A sample of a 2D abdominal ultrasound image of a fetus is shown in Figure 2.1. There are many aspects of ultrasound that make it a desirable choice in medical imaging. Ultrasound is simple to use [71], safe for the patient [60, 88], has real-time capabilities [60, 72], and is mobile and compact. The cost of ultrasound imaging is also low [60, 71, 72, 88], especially compared to magnetic resonance imaging (MRI) and computed tomography (CT) techniques [27, 29]. As an example, intraoperative ultrasound costs are less than 10% of an MRI system [16]. 2.1.1 3D Ultrasound The use of 2D ultrasound images gathered during an ultrasound scan requires that the sonographer transform the images that they see into a mental 3D ultrasound volume [88]. During a 2D ultrasound examination, repeated scans are often required in order to get a good mental image of the patient's anatomy. This repetition can be both difficult and time consuming [60].
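As an aside on the pulse-echo principle above: the imaged depth of a reflector follows directly from the echo's round-trip time, using the conventional soft-tissue average speed of sound of about 1540 m/s (a textbook figure, not a value taken from this thesis).

```python
SPEED_OF_SOUND_TISSUE_M_PER_S = 1540.0  # conventional soft-tissue average

def echo_depth_mm(round_trip_time_s):
    """Reflector depth from the round-trip echo time (divide by 2:
    the pulse travels to the reflector and back)."""
    return 1000.0 * SPEED_OF_SOUND_TISSUE_M_PER_S * round_trip_time_s / 2.0

# Example: a 65 microsecond round trip corresponds to about 50 mm of depth.
print(echo_depth_mm(65e-6))  # ~50.05
```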
3D ultrasound offers a method to eliminate these repeated scans by giving the sonographer and physician a volume of the anatomy to review. New views, such as slices parallel to the skin, can be retrieved from the volume after the scan has been completed [65]. The acquisition of 3D ultrasound volumes can use one of several methods. One approach that can be used to create a volume involves first acquiring a set of 2D ultrasound images using a mechanical positioning device. The mechanical mechanism can be integrated inside the ultrasound probe, such as in the Voluson 730 (GE Medical Systems Kretztechnik GmbH & Co OHG, Austria), or added onto a 2D probe using an external fixture. In either case, the ultrasound images are acquired using predefined angles and positions, and therefore the relative location between images is known. Depending on the application, the mechanical motion can be linear, tilting, or rotational, producing parallel, fanlike, or propeller-like sets of 2D images. Although the relative locations of the ultrasound images are accurately known using this technology, the probe movement is predefined and therefore the sonographer's interaction is limited. The probes that contain integrated mechanical mechanisms have a limited FOV that in turn limits the size of the volume that can be acquired. Another method that can be used to create a 3D ultrasound volume requires the use of a 2D transducer array. The 2D array collects an ultrasound volume using a pyramid shaped beam. Since the volume is acquired at one time, the errors introduced when fixing the relative locations of separately acquired images are eliminated. Unfortunately, the additional number of crystals required by this technology means that the probe is large and the acquired volume limited in size [59]. As mentioned in Section 1.2, another method that can be used to create a 3D ultrasound volume is called freehand scanning. This method allows the sonographer to use a standard 2D probe without physical constraints on the probe's movement. Tracking devices or image-based methods are used to calculate the relative location of each of the 2D scans. The tracking devices used for freehand scanning include magnetic sensors, articulated arms, and optical trackers. Information about the relative location of the ultrasound images can also be inferred directly from the ultrasound images using image-based techniques. Each of these methods is discussed in Section 2.3. Using a tracking device, the probe's position is calculated throughout the ultrasound image acquisition. The location of each 2D ultrasound image is known relative to the probe based on a calibration method. This calibration therefore allows the position of the ultrasound images to be calculated. The set of 2D ultrasound images is used in a reconstruction algorithm, and a 3D ultrasound volume is created. Figure 2.2 shows a diagram of 3D ultrasound acquisition from freehand scanning. The accuracy of the reconstructed volume depends in part on the errors introduced by the tracking system as well as on the calibration errors. Freehand ultrasound has the advantage of using multiple images from various directions that can supplement the acquired information [72]. It is also a low cost technique compared to other 3D ultrasound methods because it uses a conventional 2D ultrasound machine with added position sensors [66, 72]. The ultrasound image that is acquired with a 2D probe has a fixed width and therefore a limited FOV.
Since the anatomy being imaged can be larger than the width of the ultrasound image, only a portion of the anatomy can be viewed with each scan. The same problem of a fixed FOV exists when a probe with an integrated mechanical mechanism or a 2D linear array probe is used to acquire a 3D volume, because the FOV in each of these cases is also limited. The FOV is expanded with freehand ultrasound techniques [60], allowing the user the flexibility to choose the image volume size [27, 72]. This expanded FOV makes it possible to view a fetus older than mid-term [81] or the entire volume of a large organ [64]. Because the probe movement is not constrained during acquisition of freehand ultrasound images, the images that are collected are not necessarily parallel, nor do they tend to have regular spacing. Accurate knowledge of the relative orientations and positions of ultrasound images is therefore a necessity in reconstructing accurate 3D volumes. This reconstruction is possible if the location of each ultrasound image relative to the patient is known during image acquisition. 2.1.2 Panoramic Ultrasound As discussed in Subsection 2.1.1, ultrasound probes have a limited FOV. This limited FOV inhibits the user's ability to measure the patient's anatomy [21] or acquire images of large organs [81] when the anatomy does not fit within the ultrasound image. Panoramic ultrasound aims to improve this limited FOV by recording a set of overlapping 2D ultrasound images as the probe moves along the surface of the skin. The 2D ultrasound images are acquired by moving the probe in the plane of the ultrasound image [66]. These images are then combined into one large image using reconstruction. From this set of images, a new image is formed with a much larger FOV [66]. Figure 2.3 shows a diagram of the acquisition of images for panoramic ultrasound. Panoramic ultrasound images make use of tracking algorithms to stitch together the overlapping portions of these consecutive ultrasound images. As opposed to when reconstruction is used in 3D ultrasound and a volume is created using stacks of images, panoramic ultrasound produces a 2D ultrasound image based on a set of images that share a common plane. Some uses for panoramic ultrasound images in the abdomen include the analysis of masses or inflammation of the spleen, liver, or intestines, kidney degeneration, or imaging of large masses in the pelvic region [21]. Large tumors or imaging of fluid in the lungs, as well as imaging the content of large masses in the thorax, are also possible with panoramic imaging [21]. Acquiring panoramic images without patient motion tracking typically requires that the probe be moved smoothly and with a constant speed along the surface of the patient while the patient remains still [21]. In contrast, with probe and patient tracking, both the probe and patient positions are known throughout the examination and the constraints on the probe requirements and patient stillness can be relaxed. 2.2 Probe and Patient Movement Patient motion is a problem when creating 3D ultrasound volumes [72] and panoramic images [21]. If the patient moves during freehand acquisition of ultrasound images, significant errors will be
The 3D volume or panoramic image will contain motion artifacts and will be inaccurate due to these errors [26]. In many systems, the ultrasound probe is tracked throughout the freehand ultrasound procedure. By tracking only the location of the probe when ultrasound images are acquired does not compensate for motion of the subject [29]. Panoramic image reconstruction algorithms often assume the patient does not move [21]. If the patient is not tracked, the location of the probe relative to the patient can change without detection. Although there are various sources of error present in a freehand tracking system, anatomy movement often causes much larger errors than those produced by the tracking equipment [83]. Although the patient is usually assumed to be still during an examination, this assumption is seldomly accurate [72]. During the ultrasound examination, the probe is moved along the surface of the patient's skin. During acquisition of images for an abdominal examination, the probe is estimated to move a maximum of 400mm in the anterior-posterior, superior-inferior, and medial-lateral directions. The movement of the trackable object that is attached to the probe must be considered as the probe is moved throughout this area. Including the rotation of the probe, this object is estimated to move up to 600mm in each direction during the ultrasound examination. Our tracking system improves the results obtained with freehand ultrasound by tracking the patient movement that occurs during the scan as well as the movement of the ultrasound probe. If no method to detect and compensate for patient motion is included in the ultrasound system, then repeated ultrasound acquisitions are often required [68]. Systems that use electrocardiogram gating, breath holds, or devices that constrain the patient's movement are reduced or eliminated using our tracking system. Although we recognize that there is internal organ movement as well as external patient movement during the acquisition of ultrasound images, the initial studies presented in this thesis aim to track only the external patient movement and probe movement. Patient movement can be due to respiration, force induced by the probe, accidental movement (such as involuntary muscle contractions), or voluntary movement (such as shifting locations on the bed). Recording patient movement is especially crucial during an abdominal scan because of large changes in probe force and respiration motion in this area [64]. Patients are often expected to remain stationary for extended periods of time. As an example, during a 3D echocardiogram, it may be necessary for a patient to remain still for up to 8 min [13]. Acquiring a set of data for a 3D 2.2 Probe and Patient Movement 15 ultrasound volume of a liver may require a breath hold of 5 — 15sec and for a cardiac examination, 30 -45sec [60]. 2.2.1 Respirat ion Respiration during ultrasound scanning causes both the probe and the patient to move, producing a zero net movement between the two. If only the probe is being tracked, then the system will report that the probe has moved relative to the patient. This will cause inaccuracies in the collected data causing reconstruction errors [30]. Typical patient breath holds are expected to be up to 20sec [23, 53]. It is recommended in [84] that 3D ultrasound scans be performed in one single breath hold. This solution to the problem of respiration is only feasible if the patient is able to hold their breath throughout the procedure. 
A study measuring the length of time of maximum breath holds in adult outpatients who had undergone abdominal ultrasound for various reasons is described in [35]. This study concluded that 30 patients, aged 31–85 years, were able to hold their breath without aid for a mean expiration time of 24 ± 9 sec and a mean inspiration time of 41 ± 20 sec. To put these times in perspective, a 3D ultrasound cardiac examination can require 30–45 sec to complete [60]. In an effort to find the volume of large organs, multiple sweeps are made with an ultrasound probe and the patient is asked to hold their breath. In [81], a liver examination was completed within a 20 sec breath hold. Although breath holds may be suitable for some patients, young children and sick patients often have additional difficulties holding their breath. Measurement of the 3D displacement of various points on the torso during quiet breathing revealed the amount of motion that can be expected during respiration. The results of a respiration measurement study show that the maximum motion occurred in the anterior-posterior direction and was 4.03 mm at the navel [36]. During ultrasound integrated breast cancer surgical procedures, respiration was found to be the largest cause of breast movement [23]. Even if breath holding is used, it has also been found that there are problems associated with the consistency of each breath hold. Although respiratory gating is used for some applications to account for the movement due to respiration [63, 64], each breath is not necessarily consistent with previous breath holds [54]. Respiratory drift may affect the beginning of each breath hold [54], causing inconsistent data to be collected during a scan. In [54], subjects are asked to hold their breath for approximately 12 sec. Towards the end of the scan, the breath holds were often found to be of poorer quality than at the beginning of the breath hold.
As an example, a patient's muscles may contract or the examination bed may move causing the patient to move. In a more obvious case, the sonographer may request that the patient moves a body part in order to increase the visibility of the organ being imaged. As stated in [19], during a stereotactic mammography procedure, patients may have difficulties remaining still due to neuromuscular disorders. Severe arthritis in the neck, back, or shoulders may also make it difficult to remain still during a biopsy procedure [19]. The same problems can arise when a patient is instructed to remain still during the acquisition of ultrasound images for the purpose of panoramic or 3D volume ultrasound acquisition. During the diagnosis of pediatric 2.3 Types of Tracking Devices 17 urinary track disease, it was necessary in [69] to repeat 3D ultrasound scans a number of times due to patient movement, which included breathing and crying. In a study of 80 pediatric patients, 10 did not cooperate and therefore the 3D ultrasound scans could not be performed. It was also found in [89] that some elderly patients had trouble keeping their leg still during a cross-sectional scan that lasted approximately 10 to 12 seconds. This difficulty was due to a constant state of involuntary shaking of their limb. 2.3 Types of Tracking Devices A variety of methods can be used to track the 3D location of an ultrasound probe during a scan. Most often, external tracking of the probe for use in freehand ultrasound acquisition is performed using magnetic tracking devices [3, 6, 8, 28, 29, 30, 47, 55, 67, 69]. Mechanical [80, 90] and optical tracking devices [88, 91] have also been used. Magnetic Trackers Magnetic trackers are composed of a transmitter which produces a magnetic field, and a sensor placed on the object being tracked that measures the magnetic field. Using knowledge of the magnetic field, the location of the sensor with respect to the transmitter is calculated. Magnetic trackers are flexible [25] and inexpensive [7] since they do not impair the user's movement or require special positioning between the transmitter and the receiver. They are generally less accurate than mechanical or optical systems and require an environment free of highly conductive metals and electromagnetic disturbances [26, 37, 45, 47, 80]. For ultrasound needle guided biopsies, a magnetic tracker did not provide sufficiently accurate location data to align the patient with the ultrasound probe in [80]. Articulated Arm An articulated arm typically has 6 degrees of freedom (DOF) and is attached to the probe. The angle between each joint enables the calculation of the location of the probe, which is placed at the end of the arm. Although usually more accurate than magnetic trackers, articulate arms tend to limit the range of motion of the ultrasound probe, the size of volume that can be imaged, and 2.4 Other Applications for Tracking Devices 18 the flexibility that is associated with freehand scanning [59]. They are often large and cumbersome taking up space in the examination or operating room. As well, these devices have the potential to interfere with the sonographer's hand or with the ultrasound probe itself. Optical Trackers Optical tracking systems generally contain two or more cameras. As described in Subsection 3.1.2, the cameras use stereo triangulation to locate specific targets within their view. The targets being tracked can be made up of passive (reflective balls, natural features, etc.) or active (LED or IRED) markers. 
A clear line of sight between the sensors and the markers must be available in order to perform optical tracking [9, 58, 64]. In general, optical tracking systems tend to be more accurate than magnetic tracking systems, although also more expensive.

Image-Based Methods

Instead of using external tracking devices, there are also methods that use purely image-based techniques to register sets of images. Image-based techniques have the advantage of not requiring additional external tracking devices on the probe. These methods are, however, prone to an accumulation of errors as the number of acquired 2D ultrasound images increases, since each image is located relative to the previous one. Additionally, the need for extremely accurate calibration of ultrasound images, and changes in probe sweep direction, may introduce errors into the results [59]. Image-based methods using speckle tracking have been used to predict the spacing between scans [75, 78] as well as to track the probe location [77, 79]. Combining the information from multiple ultrasound sweeps using image-based techniques is performed in [27] and [89].

2.4 Other Applications for Tracking Devices

In this section, we discuss systems that take into account patient motion, probe motion, or both. A summary of systems is presented from the field of ultrasound as well as from other imaging modalities.

2.4.1 Augmented Reality

Preoperative images of a patient are useful both before and during surgery. There are systems that register preoperative volumes with the patient's anatomy during the surgery. In order to perform that registration, the location of the patient is often tracked using an external tracking device. A method was developed by [48] using an optical tracker and LEDs to track the patient during neurosurgery. This method allowed MRI or CT scans to be registered with the patient, based on the location of these lights attached directly to the patient's head or mounted on a Mayfield head clamp. Similarly, registration between MRI or CT images and the patient, as well as tracking of the medical tool location, was achieved using active markers and an optical tracking device in [1, 50]. Another example of head tracking using active IREDs was discussed in [33]. In this system, a laser range scanner or a trackable pointer was added to the setup in order to improve the registration of MRI or CT images to the patient.

Using only patterned light to register the preoperative volumes with the patient during neurosurgery, the surface of the patient's skin was recorded using a pair of stereo cameras in [24] and using one video camera in [34]. Markers affixed to the patient's skull were used to track movement during neurosurgery in [15] and [92]. Patterned light was again projected onto the surface of the patient's skin in [15] so that stereo video pairs could calculate the location of the surface. This information about the surface location was initially used to register the MRI or CT image with the patient's head. Live video images of the patient during neurosurgery were registered with 3D MRI models in [52] using the natural features on a patient's head. An operator manually matched the features from the MRI volume and the live video in order to achieve this registration. Ultrasound images were registered with the patient as well as with the MRI or CT images in [9]. In this case, a Mayfield head clamp, fitted with IRED markers, was tracked.
Markers were also fitted to the ultrasound probe so that both the patient and probe were tracked during the surgical procedure. An Optotrak positioning system (Northern Digital Inc., Waterloo, ON) that uses 3 cameras to track the position of IRED markers was used by [23] to track an ultrasound probe and video camera during breast cancer surgery. This position and orientation information was used to create an augmented scene showing the interior ultrasound view of the patient. The use of retroreflective markers to reflect infrared light allowed an ultrasound probe to be tracked in [74]. The camera used in this setup was fitted with an infrared pass filter and tracked the light that was projected onto and reflected off of the markers. The scene was then augmented with the ultrasound data so that the operator could view the interior of the patient. Lastly, in [80], augmented reality during an ultrasound scan was achieved using mechanical, magnetic, and optical trackers. Passive markers were fixed to a rigid marker near the patient's skin and tracked based on previous knowledge of their relative locations.

2.4.2 Respiratory Modeling, Compensation, or Elimination

Although many systems ignore movement due to respiration, the systems described in this section take it into account. The systems described here either model, compensate for, or eliminate respiration during image acquisition. Systems involving MRI, CT, and ultrasound scans are discussed.

Using MRI imaging, a model of respiration as it affects movement of the heart was created in [54]. Another model was created with the use of MRI imaging in [53], this time used to improve the quality of coronary angiography. Using the information contained in two CT volumes (one acquired at inhalation, and the other at exhalation), a method was developed in [87] that measures patient movement in the thorax region caused by respiration. Instead of using breath holds during ultrasound image acquisition, [64] recorded the respiratory cycle and chose ultrasound images solely from the maximum exhalation location. The ultrasound volume that was created was then registered with preoperative MRI images.

In an effort to correct for the motion of the heart due to respiration, both the patient's respiration and the ultrasound probe were tracked in [4]. The system made use of an optical tracking system with active IRED markers attached to the ultrasound probe and a passive marker attached to the navel of the patient. Based on the one point that defined the movement of the navel, the heart motion was inferred. A similar system was designed by [13] that tracked and compensated for the motion of a patient's chest during a 3D echocardiogram. The system used a magnetic tracker fixed to the probe and to the patient's sternum. A set of experiments was performed in which the patient did not move and the echocardiogram was performed. The second scenario required that the patient breathe freely while the motion was measured and compensated for. After compensation, similar results were achieved for both scenarios. The system was only able to track motion that was parallel to the examination bed. Movement that involved rolls and tilts, which produced heart rotations around the sternum, was anticipated to cause errors in the system. The magnetic tracker attached to the patient's skin was assumed to remain completely fixed throughout the procedure.
Finally, the system assumed that ferromagnetic disturbances were negligible.

Many imaging systems require that the patient hold their breath during ultrasound image acquisition [69, 83]. Movement due to respiration is eliminated in these cases if the patient is able to comply. In [23], chest motion due to respiration during breast cancer surgery caused errors in the results. To correct these errors, the respiration of the anesthetized patient was suspended while the ultrasound images were acquired.

2.4.3 Tracking of Medical Tools

Tool tracking during ultrasound or other medical procedures is another application for tracking devices in medicine. Various tracking methods and systems are described in this subsection. By tracking the location of the calibrated 2D or 3D ultrasound probe in [11] using IRED markers, the location of a tumor inside deformable tissue was known. The tumor's location was calculated using image-based methods and then used during radiation therapy. Another method that used active markers and an optical tracking system to track an ultrasound probe was described in [82] and [83]. In these two papers, artifacts caused by probe force were corrected by using a combination of probe tracking and ultrasound based image registration. Although the probe was tracked, the patient was assumed to be still, and patient motion due to respiration was not corrected. An ultrasound probe was again tracked in [56]. A preliminary study was conducted in order to track the motion of an ultrasound probe during the collection of slices for use in 3D ultrasound reconstruction. Two off-the-shelf video cameras were used to track a fixture of 4 LEDs attached to the probe. In [46], a fiducially marked plate was tracked using a video camera. Prior knowledge of the fiducial placements was used to track four circular marks on a plane. Under the constraint that the marks lay on a rigid body, the tracking algorithm was able to track the marks as the plate was moved.

2.5 Discussion

As shown in Section 2.4, there is a large variety in the applications of tracking systems. For augmented reality applications, tracking the patient's movement is essential. When used for surgery, augmented reality must align the patient's position with the pre-operative or operative images in order to enhance the information available to the surgeon. Many of the systems discussed in Subsection 2.4.1 use active markers to track the patient's movement. Although active markers are often very accurate in producing location information, their application for ultrasound is limited. The active markers impede the movement of the probe if they are attached to the area being examined under ultrasound. If the markers are secured around the perimeter of the area being imaged, the markers do not accurately represent the movement of the patient in the area being imaged.

Patterned light was used as a method for tracking the surface of the patient in [15, 24, 34]. Patterned light projected onto the surface of the patient during an ultrasound examination could be used to calculate the surface of the skin. There are, however, limitations to this idea. Firstly, the area occluded by the probe during the examination cannot be tracked, since the patterned light is not projected onto this area. Secondly, although the surface can be measured as the patient moves, without recognizable landmarks attached to the skin, the movement of the specific area being examined cannot be calculated.
Applications of tracking systems specifically for patient respiration are described in Subsection 2.4.2. These systems can be divided into three categories: those that model the patient's motion based on previously acquired experimental data, those that calculate the motion during the imaging procedure, and those that eliminate respiration. Using MRI imaging, models relating patient motion and respiration were created in [53, 54]. Similarly, CT and ultrasound were used in [87] and [64], respectively, to create respiration models. These models can be used to predict the movement of the anatomy based on the respiration cycle. Unfortunately, each breath may be different, resulting in errors being introduced into the data [54]. Respiratory motion is calculated in [4] using an active marker and in [13] using a magnetic tracker. Both of these systems were used to track the motion of the patient and the probe during an ultrasound examination. These systems each used one marker located at the navel or sternum of the patient. Since only one marker was used, the entire patient movement was inferred from the movement of this single point. Using only one marker limits the data that can be acquired for the entire surface of the patient. Because an active marker and a magnetic marker were used in these two systems, the point being tracked could not be placed in the area that was being examined with ultrasound, as the marker would have interfered with the scan. Additionally, the wires attached to these markers and the size of the markers make them difficult to attach to the patient and to keep attached throughout the procedure. Lastly, respiration was eliminated through breath holds in [69, 83], and suspended during surgery in [23]. Breath holds can cause errors in the data since respiratory drift, as well as inconsistencies between breaths, can be present [54]. Furthermore, young children or sick patients may not be able to hold their breath for the required length of time.

The tracking of tools during medical image acquisition was performed using active markers in [11, 56, 82, 83]. Although active markers can be suitable for tracking the motion of the ultrasound probe, their unsuitability for tracking patient motion means that a second tracking system must be used to track the patient movement. Because the cost of these active marker tracking systems is relatively high, and the system size is relatively large, the use of an optical tracking system with active markers in addition to another type of tracking system during an ultrasound examination is not desirable. In [46], a video camera was used to calculate the location of known fiducials attached to a plate. The algorithm was able to locate the fiducial marks within the video image under the constraint that the fiducials were attached to a rigid body. Since the patient does not move as a rigid body, this system is not directly applicable to patient movement tracking.

The tracking system described in this thesis is required to track both the ultrasound probe and the patient movement during an ultrasound scan. The following are the objectives for the tracking system presented in this work:

• The error of the system should be less than the probe and patient movement.

• The tracking system should not interfere with the acquisition of ultrasound images.

• The tracking system should be suitable for all patients regardless of their health or age.
• The tracking system should be inexpensive and portable (so that it may be moved between examination rooms along with the ultrasound machine).

Chapter 3

Digital Camera Tracking Component

This chapter describes the use of a trinocular camera system to calculate the location of a surface. The system is composed of three cameras contained within one case. Using stereo vision techniques, the images taken with this camera system are used to find the 3D locations in space of features on a surface. In the final design of the surface tracking system, the trinocular camera system is used to track both the location of the patient's skin during an ultrasound examination and that of the ultrasound probe.

In this chapter, the accuracy of the trinocular camera system is tested using the Optotrak positioning system as a reference standard. The Optotrak system is used solely for the purposes of testing our system. Once the accuracy of the camera system is determined, the Optotrak system is not included in the surface tracking system. This chapter begins by describing the specifications and appropriateness of the chosen camera for our application. Next, stereo vision geometry is explained. A discussion about the Optotrak system's role in the experiment follows. Then, a procedure detailing the steps required to find the transformation between the coordinate system of the trinocular camera and a coordinate system defined using the Optotrak system is described. The details of an experiment to test the camera's accuracy using a flat plate and a sphere as targets are given. Finally, the accuracy and results are presented and discussed.

3.1 Digital Camera System

A trinocular camera system composed of three cameras constrained within one case, called a Digiclops, and 3D positioning software, called Triclops, are used for tracking (Point Grey Research Inc., Vancouver, BC) [42]. Throughout this thesis, the three Digiclops cameras are referred to as left, right, and top. Two cameras are sufficient to determine the 3D location of features using stereo geometry. The third camera is included in the system in order to increase the accuracy of the measurement made using the first two cameras. As shown in Figure 3.1, the right camera forms a stereo pair with both the top and left cameras, and is therefore referred to as the reference camera. The physical placement of the optical centre of the reference camera is used as the world coordinate system for all calculations performed by the Digiclops system. The Digiclops coordinate system, C_D, has coordinate directions x_D, y_D, and z_D, which are shown in Figure 3.1. The method used to find the location of this optical centre is discussed in detail in Section 3.3.

3.1.1 Hardware and Software

The camera system chosen has a FOV of 44° with a 6 mm focal length (f). The FOV is calculated using

$$\mathrm{FOV} = 2\arctan\left(\frac{W}{2Z}\right) \qquad (3.1)$$

where W and H are the width and height of the object being imaged and Z is the distance between the camera lens and the object. An area 800 mm wide (W) and 600 mm high (H) is assumed to be the area being imaged at a distance of approximately 1000 mm (Z). This area is chosen to be large enough to accommodate both the abdomen of a patient and the ultrasound probe in roughly the centre of the image. The camera system is equipped with three progressive-scan Sony HAD CCD sensors with square grayscale pixels and image sizes of 1024 x 768 pixels.
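As a worked check of Equation (3.1) against the assumed working volume (an illustration added here, using the 800 mm width and 1000 mm distance stated above):

```latex
\mathrm{FOV} = 2\arctan\!\left(\frac{W}{2Z}\right)
            = 2\arctan\!\left(\frac{800~\mathrm{mm}}{2 \times 1000~\mathrm{mm}}\right)
            \approx 43.6^{\circ} \approx 44^{\circ}
```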
For a 1/3" CCD sensor, the acquired scene has a vertical height of 3.6 mm on the sensor, which, for the 600 mm object height H at the working distance Z of approximately 1000 mm, is consistent with the 6 mm focal length (f = 3.6 Z / H). The camera is controlled via the IEEE 1394 FireWire interface (Texas Instruments, Inc.) using a 2.4 GHz Pentium processor with 256 MB of RAM. The Digiclops is calibrated by the manufacturer using Tsai's approach [85]. The rectified images produced with the calibrated system mimic an ideal stereo camera model to within 0.06 pixels [42].

Figure 3.1: Components of the Digiclops Camera and the Digiclops Coordinate System

Once the images have been rectified, the Digiclops system establishes correspondences between features detected in the different images. These features are matched using the sum of absolute differences correlation method [42] implemented with the Triclops software. This algorithm first chooses a neighborhood around each pixel in the reference image. Next, the neighborhood is compared to neighborhoods in the top image along the same vertical line or in the left image along the same horizontal line. The best match between features is found by computing the minimization

$$\min_{d_{min} \leq d \leq d_{max}} \; \sum_{i=-\frac{m}{2}}^{\frac{m}{2}} \sum_{j=-\frac{m}{2}}^{\frac{m}{2}} \left| I_{right}(x+i,\, y+j) - I_{left}(x+i+d,\, y+j) \right| \qquad (3.2)$$

where d is the disparity that ranges from d_min to d_max, m is the neighborhood size, x and y are the x- and y-coordinates of the pixel, and I_left and I_right are the left and right images. The best match is selected based on Equation (3.2). Next, the disparity is calculated for the sets of images using the matched features. Finally, triangulation is used to calculate the 3D location of each of the features relative to C_D. Since the cameras are calibrated to be aligned into two perpendicular pairs, triangulation is simplified to the case of parallel cameras. Subpixel interpolation is used to find the 3D location of features with better accuracy.

3.1.2 Stereo Vision Geometry

This subsection gives a brief synopsis of the geometry of stereo vision and triangulation. Triangulation is the method used by the Digiclops system to find the 3D location of feature points from the images provided by the cameras. Instead of using one camera pair, the Digiclops system performs this stereo algorithm for each pair of cameras in order to increase the accuracy of the calculated 3D points.

Each of the pixels in an image acts as a projection of a 3D point onto an image plane. Each point that is present on the image plane is therefore a representation of the ray of light that is coming from the 3D object. The geometry of a single light ray projected onto an image plane can be seen in Figure 3.2. The image coordinate system is denoted as C_I and the camera coordinate system as C_C. Extending this idea to the case where two cameras are used, a more complex geometry is observed, as shown in Figure 3.3. The coordinate systems of the first and second cameras are denoted as C_C and C_C'. Joining the two camera centres creates the baseline, b, of the geometry. A plane is formed that contains the two camera centres and the 3D point that is being imaged. This plane is called the epipolar plane, and it intersects each of the two image planes as seen in Figure 3.3. When the two cameras are parallel, the calculations simplify and the 3D location of a point in space can be calculated. Figure 3.4 shows the geometry of two parallel cameras.
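To make the pipeline concrete before the parallel-camera case is formalized in Equation (3.3) below, the following is a minimal C++ sketch of the correspondence search of Equation (3.2) followed by triangulation. It is an illustration written for this discussion, with simplified types and no bounds handling, not the Triclops implementation:

```cpp
#include <cstdlib>
#include <limits>
#include <vector>

// Grayscale image stored row-major; a simplified stand-in for the
// rectified Digiclops images, not the actual Triclops data structure.
struct Image {
    int width, height;
    std::vector<unsigned char> pixels;
    int at(int x, int y) const { return pixels[y * width + x]; }
};

struct Point3 { double x, y, z; };

// Sum-of-absolute-differences block matching, Equation (3.2): for a
// pixel (x, y) in the right (reference) image, find the disparity d in
// [dMin, dMax] that minimizes the SAD over an m x m neighborhood in the
// left image along the same horizontal line (the images are rectified).
int bestDisparity(const Image& right, const Image& left,
                  int x, int y, int m, int dMin, int dMax)
{
    int bestD = dMin;
    long bestSad = std::numeric_limits<long>::max();
    for (int d = dMin; d <= dMax; ++d) {
        long sad = 0;
        for (int j = -m / 2; j <= m / 2; ++j)
            for (int i = -m / 2; i <= m / 2; ++i)
                sad += std::abs(right.at(x + i, y + j) -
                                left.at(x + i + d, y + j));
        if (sad < bestSad) { bestSad = sad; bestD = d; }
    }
    return bestD;
}

// Parallel-camera triangulation: with focal length f [pixels],
// baseline b [mm], and disparity d [pixels], the depth is Z = f*b/d,
// and X and Y follow from the reference-image pixel coordinates.
Point3 triangulate(double xRight, double yRight, double d,
                   double f, double b)
{
    double Z = f * b / d;
    return { xRight * Z / f, yRight * Z / f, Z };
}
```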
The 3D location of a point in space, (X_i, Y_i, Z_i), can be calculated using

$$Z_i = \frac{f\, b}{x_{i,left} - x_{i,right}}, \qquad X_i = \frac{x_{i,right}\, Z_i}{f}, \qquad Y_i = \frac{y_{i,right}\, Z_i}{f} \qquad (3.3)$$

where x_{i,left} and x_{i,right} are the horizontal pixel locations of the feature in the left and right images, and y_{i,right} is its vertical pixel location in the right image. These calculations are repeated for each pair of matched points in the stereo cameras' images.

Figure 3.2: Projection of a Single Light Ray onto an Image Plane

3.2 Optotrak Positioning System

The Optotrak 3020 positioning system is used as a reference standard to create the transformations discussed in Section 3.3 as well as to measure the accuracy of the Digiclops in Section 3.4. Using IREDs, the Optotrak system is able to track a 3D location from a distance of 2.25 m with an RMS error of 0.1 mm in the x-direction and y-direction and an RMS error of 0.15 mm in the z-direction. The 3D resolution of the system is 0.01 mm [41]. The Optotrak system uses 3 linearly mounted CCD cameras as sensors to track the IRED markers. Markers with a case diameter of 8 mm are used throughout this thesis.

As an Optotrak system default, all of the IREDs are recorded with respect to a Global Optotrak coordinate system, C_O. The coordinate system C_O is located in the centre of the middle Optotrak sensor. The z-direction of C_O is directed into the centre lens, the y-direction vertically upwards from the lens, and the x-direction horizontally from the lens. Since C_O is predefined by the manufacturer, its orientation and position cannot be changed. In this thesis, a new coordinate system, the Local Optotrak coordinate system, C_L, is defined. As described in Section 3.3, this coordinate system is defined using IREDs, which are visible throughout our experiments.

Figure 3.3: Geometry of Light Rays onto Two Image Planes

3.2.1 IRED Viewing Angle

Since the IRED markers are not manufactured to emit light as a point source, the marker location recorded with the Optotrak system changes depending on the angle at which the marker is viewed. During acquisition of marker locations, the Optotrak system continues to record data as long as the marker is visible to the Optotrak system within ±85°. This angle describes an imaginary cone constructed with the IRED marker at the apex. As long as this cone envelops the three Optotrak cameras, the location of the marker is calculated. If the marker is viewed by the Optotrak system at an angle greater than ±60°, then errors are introduced into the calculation [18]. In order to ensure that these errors are not introduced by oblique viewing angles, rigid body files are created using the Rigmaker software (Northern Digital Inc., Waterloo, ON). The angle of each marker contained in each rigid body file is calculated and taken into account while data is collected with the Optotrak system.

Figure 3.4: Geometry of Two Parallel Cameras

Figure 3.5: Range of Viewing Angles for an IRED

3.2.2 IRED Z Offset

Throughout this chapter, IRED markers are placed onto surfaces in order to measure the location of the surface using the Optotrak system. The 3D location of each IRED that is calculated using the Optotrak system has an offset value compared to the location of the surface where the IRED is attached. In order to calculate the z-component of this IRED offset, which is normal to the surface of the IRED, as seen in Figure 3.6, an experiment is conducted.
A flat metal plate with 3 IRED markers secured to its surface using double sided tape is used. A coordinate system is defined using the 3 IRED markers. Two of the markers are used to define the x-axis of the coordinate system. The third marker is used to specify the xy plane for the coordinate system. The coordinate system's z-direction is the normal to the defined plane. The apparatus used to find the IRED z-offset is shown in Figure 3.7.

Figure 3.6: Definition of the Z IRED Offset

A digitizing pointer is used to measure the offset in the z-direction of these IRED markers. As shown in Figure 3.7, the digitizing pointer contains 5 IRED markers, positioned at the four ends of a cross with one marker at the centre. A rigid body file is created for the digitizing pointer using 900 sets of location data for the stationary pointer. Next, the pointer is pivoted about ten different locations on the flat plate. The tip of the pointer is fixed to each of the pivot points during data collection. The rest of the pointer is moved both from side to side and back and forth. The 1800 points collected at each pivot point are used by the Rigmaker software to solve for the location of the pivot point [39].

The residual errors between the solution for the pivot points and the sets of IRED locations collected by pivoting the pointer at ten different pivot points are shown in Table 3.1. The total RMS error ranges from 0.13 mm to 0.21 mm with a mean of 0.16 mm.¹ The position of the tip of the pointer is therefore known to within an accuracy that is better than 0.21 mm, the total accuracy in all three directions as reported by the Optotrak system manufacturer for a single IRED [41].

¹ The number of significant figures reported in this thesis, when the Optotrak system was used as a measurement tool, is based on the repeatability of recording the location of one IRED over a series of runs.

Table 3.1: Residual Error from Finding the Pivot Point of the Digitizing Pointer

                      RMS Error [mm]
Run                X     Y     Z     Total
1                  0.04  0.09  0.11  0.15
2                  0.04  0.08  0.10  0.13
3                  0.05  0.10  0.12  0.17
4                  0.05  0.10  0.18  0.21
5                  0.06  0.07  0.12  0.15
6                  0.06  0.08  0.13  0.16
7                  0.06  0.07  0.12  0.16
8                  0.07  0.07  0.13  0.17
9                  0.07  0.09  0.15  0.19
10                 0.03  0.09  0.10  0.14
Mean               0.05  0.08  0.13  0.16
Single IRED [41]   0.1   0.1   0.15  0.21

All measurements are recorded using the coordinate system that is defined with the 3 IRED markers that are attached to the plate. The difference between the z-location of the pointer's tip and the IRED location is therefore the z-offset of the IRED markers. The average IRED z-offset value is found to be 2.4 mm. The IRED z-offset calculated in this subsection is used throughout this thesis.

3.2.3 IRED X and Y Offsets

As in Subsection 3.2.2, there may be an offset between the geometric centre of each IRED and the location that the Optotrak system calculates in the x-direction and y-direction. This subsection investigates the amount of this IRED offset through calculations using the Optotrak system. The active geometric centre of the IRED, based on its outside casing, is required in order to find the IRED offsets in the x-direction and y-direction. Each marker has a circular case with a wire exiting from the side. Using this wire as a landmark, the IREDs are divided into four sections.
These sections represent the distances from the centre to the edge of the case in the x-direction and y-direction. The sections are named a, b, c, and d and are depicted in Figure 3.8.

Figure 3.7: Setup Used to Calculate the Z IRED Offset

Figure 3.8: Notation Used in Determining IRED Offsets in the X and Y Directions

Four IRED markers are aligned with respect to their lead wires in various locations. The alignment of the markers is shown in Figure 3.9. The value δ is a predetermined constant that is included in each of the tests. The placement of each marker is specified on the plate using an accurate CAD drawing that gives the location for each marker based on the distance between markers, δ, and the diameter of the IRED casing. Each marker is fixed to a flat plate in order to ensure that a mutual plane is used for the calculations. Next, a coordinate system is defined using the locations of three of the IREDs as recorded by the Optotrak system. The locations of these three markers are then determined relative to the coordinate system that is defined. The recorded distances between the centres of the markers are determined using the data from the Optotrak system. A new coordinate system is then created using a new combination of three IREDs, and the locations are again determined for these IREDs. These steps are repeated using different IRED locations. The various combinations of IRED locations are shown in Figure 3.9.

A total of 60 equations are created using the IRED locations. Once the data for all the combinations of IRED locations is recorded, least squares minimization is used to find the values of a, b, c, and d. The IRED offsets that are found for each of the variables are shown in Table 3.2. The accuracy of determining the values of a, b, c, and d is dependent on the accuracy of manually attaching the IREDs to the plate as well as the accuracy with which the Optotrak system can measure the IRED positions. From Table 3.2, the offset from the centre of the IRED for each of the distances is smaller than 0.15 mm, the accuracy of the Optotrak system. Since the IRED offset is smaller than the Optotrak system accuracy, the IRED offsets in the x-direction and y-direction are assumed to be negligible throughout this thesis.

Figure 3.9: Setup Used to Determine the IRED Offsets in the X and Y Directions (marker spacings of the form b + a + δ, d + a + δ, c + d + δ, a + a + δ, c + c + δ, d + d + δ, b + b + δ, c + a + δ, and b + c + δ)

Table 3.2: X and Y Offset Distances for the IRED Markers

Variable   Distance from Active Centre to Casing Edge [mm]
a          -0.08
b          +0.07
c          +0.09
d          +0.04

3.3 Relationship between the Camera System and Optotrak System

The accuracy of the depth measurements is determined based on a comparison between the Digiclops and Optotrak system location measurements. The origin of the Digiclops camera system is required in order to find this accuracy. Once the origin of the Digiclops is determined, it is possible to use this information to determine the absolute accuracy of the Digiclops. The Optotrak system is used as the reference standard to determine the accuracy of the Digiclops camera system.
The Digiclops accuracy is determined by transforming the 3D locations, P_L, recorded relative to a Local Optotrak coordinate system, C_L, into the Digiclops coordinate system, C_D. A comparison is then made between the transformed points from C_L and those recorded using the Digiclops. In order to find the relationship between these two coordinate systems, a set of homogeneous transformations is created. The points recorded by the Digiclops are multiplied with the set of matrices and the results are compared to the points found with the Optotrak system.

Figure 3.10 shows the components that are used to create this set of homogeneous transformations. The transformations that are required to transform the Local Optotrak points into the coordinate system of the Digiclops camera are shown in Figure 3.11. This figure shows the basic relationship between each of the transformations,

$$\mathbf{T}_L^D = \mathbf{T}_L^G \mathbf{T}_G^R \mathbf{T}_R^P (\mathbf{T}_D^P)^{-1} = \mathbf{T}_L^R \mathbf{T}_R^P (\mathbf{T}_D^P)^{-1} \qquad (3.4)$$

where the transformation between the Global Optotrak and the Local Optotrak coordinate systems, T_L^G, and the transformation from the IRED to the Global Optotrak coordinate system, T_G^R, can be combined to create a transformation from the IRED to the Local Optotrak coordinate system, T_L^R. The Local Optotrak coordinate system, C_L, is defined using IREDs attached to the back of the Digiclops case. By rigidly attaching the IREDs to the Digiclops case, the relationship between the Digiclops and C_L does not vary throughout the experiments.

The Optotrak system is used to record the location of the Digiclops and the flat plate. Both of these objects have IREDs secured to their surfaces. The surface of the plate is also viewed using the Digiclops camera system. In the following subsections, each transformation is described and the method used to calculate it is discussed. The transformation between the plate and the Digiclops is first computed in Subsection 3.3.1 using a set of feature points on the plate and the recorded images from the Digiclops. Next, the transformation between the IREDs attached to the plate and those that define the Local Optotrak coordinate system is calculated in Subsection 3.3.2. Then, Subsection 3.3.3 describes the transformation from the coordinate system defined on the surface of the plate to one defined using IREDs attached to the surface of the plate. Finally, the transformation between the Digiclops and the Local Optotrak coordinate systems is calculated in Subsection 3.3.4 based on the transformations calculated in each of the previous subsections. It is this transformation, T_L^D, that is necessary when calculating the accuracy of the Digiclops results.

Figure 3.10: Setup of Equipment used to Find the Transformation from the Digiclops to the Local Optotrak. The equipment in this figure is not drawn to scale.

Figure 3.11: Transformations used to Find the Transformation from the Digiclops to the Local Optotrak (T_G^R = IREDs w.r.t. Global Optotrak; T_R^P = plate w.r.t. IREDs; T_D^P = plate w.r.t. Digiclops; T_L^D = Digiclops w.r.t. Local Optotrak; T_L^G = Global Optotrak w.r.t. Local Optotrak; T_L^R = IRED w.r.t. Local Optotrak). The equipment in this figure is not drawn to scale.
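Equation (3.4) is a product of 4 x 4 homogeneous matrices. The composition can be sketched in a few lines; the following illustration assumes the Eigen linear algebra library (an assumption of this sketch, not a tool used in the thesis), and the function and parameter names are illustrative only:

```cpp
#include <Eigen/Dense>

// Compose the chain of Equation (3.4): the Digiclops pose in the Local
// Optotrak frame from the three measured transformations.
// T_iredToLocal is T_L^R, T_plateToIred is T_R^P, and
// T_plateToDigiclops is T_D^P.
Eigen::Matrix4d digiclopsToLocal(const Eigen::Matrix4d& T_iredToLocal,
                                 const Eigen::Matrix4d& T_plateToIred,
                                 const Eigen::Matrix4d& T_plateToDigiclops)
{
    // T_L^D = T_L^R * T_R^P * (T_D^P)^{-1}
    return T_iredToLocal * T_plateToIred * T_plateToDigiclops.inverse();
}
```

For a rigid transformation with rotation R and translation t, the inverse could equivalently be formed in closed form as [R^T, -R^T t], which avoids a general matrix inversion.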
3.3.1 Plate to Digiclops Transformation (T_D^P)

A transformation describing the plate with respect to the camera system, T_D^P, is described in this subsection. This transformation represents the relationship between a coordinate system that we define on the plate, C_P, and the coordinate system inside the reference camera in the Digiclops, C_D. This transformation is defined as

$$\mathbf{T}_D^P = \begin{bmatrix} u_x & v_x & w_x & X_0 \\ u_y & v_y & w_y & Y_0 \\ u_z & v_z & w_z & Z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3.5)$$

where u = [u_x u_y u_z], v = [v_x v_y v_z], and w = [w_x w_y w_z] are the direction vectors of the axes of C_P expressed in C_D, and the translation between the two origins is defined by [X_0 Y_0 Z_0]^T. In order to calculate this transformation, we calculate each of these vectors based on the features shown in the Digiclops images.

The homogeneous transformation, T_D^P, is found using 14 feature points on the plate. The location and number of feature points are chosen so that the surface of the plate is evenly covered with features. A sheet of paper with a grayscale image and 14 printed crosses is secured to the plate. The grayscale image is used to provide additional information in the areas surrounding the 14 feature points. The feature points are shown as printed crosses and are shown in Appendix A. These feature points are visible to the Digiclops camera system and therefore appear in all of the images taken with the camera system.

In determining this transformation, the left and right images from the Digiclops camera system are used. Although it is possible to use all three cameras from the Digiclops, using two of the cameras provides sufficient information to find the location of the feature points in order to determine T_D^P. The third camera is used when all the features on the surface of the plate are determined, as in Section 3.4, since there is then a larger possibility of matching errors.

The 14 printed crosses are manually picked out of the left Digiclops image of the first run. The first three points that are picked are used to define the origin and the directions of the coordinate system of the plate, C_P. The first three points from the first run are also used to create templates for subsequent runs. The templates are composed of the point that is picked and a 20 x 20 pixel area around each point. For runs after the first, the operator is asked to pick the first three feature points. Next, the algorithm creates a 40 x 40 pixel search window. The template and window sizes are chosen to be large enough to accommodate each of the features. The template that corresponds with the point that is picked is used to search the area around the picked point. Normalized cross-correlation is used in order to find the best match of points within the area [49].

Next, subpixel interpolation is performed in order to find a more precise match for the picked point. The method described in [22], using a quadratic estimator, is used to find this subpixel location. The values Δx and Δy are added to the matched pixel position that is found using the cross-correlation algorithm, R_pos. These values are calculated using

$$\Delta x = \frac{R_+^x - R_-^x}{2\,(2R_0^x - R_+^x - R_-^x)} \qquad (3.6)$$

$$\Delta y = \frac{R_+^y - R_-^y}{2\,(2R_0^y - R_+^y - R_-^y)} \qquad (3.7)$$

Equations (3.6) and (3.7) determine the difference between the correlated point and the subpixel location in the x and y directions, Δx and Δy, respectively. For each pixel position, a correlation coefficient was calculated using the cross-correlation algorithm.
These coefficients range from 0 to 1, where 1 means that a perfect match between two windows of pixels is found. Since the cross-correlation method finds the correlation coefficients for each pixel, adding a subpixel interpolation scheme allows the windows to be matched to a fraction of a pixel. In these equations, R_0^x and R_0^y represent the maximum x and y correlation coefficients from the cross-correlation algorithm. The variables R_-^x and R_-^y represent the correlation coefficients for the pixel to the left and bottom of the x and y maximum, respectively. Similarly, R_+^x and R_+^y represent the correlation coefficients for the pixel to the right and top of the x and y maximum, respectively. The subpixel positions in the x and y directions are denoted by R_peak^x and R_peak^y, and are calculated using

$$R_{peak}^x = R_{pos}^x + \Delta x \qquad (3.8)$$

$$R_{peak}^y = R_{pos}^y + \Delta y \qquad (3.9)$$

where R_pos^x and R_pos^y are the x and y pixel locations of R_0.

Once the subpixel location for each feature has been determined in the left image, the matches for those features are found in the right image. The operator is asked to pick the corresponding features in the right image. A 20 x 20 pixel template is created with the feature from the left image. The template is cross-correlated with the 40 x 40 pixel area around the picked feature in the right image. Using Equations (3.6) to (3.9), a subpixel location is found for each feature in the right image that best matches each feature in the left image.

Figure 3.12: Example of a Subpixel Shift Using the Cross-Correlation Coefficients. (a) Example correlation coefficients from the cross-correlation algorithm (R_+^y = 0.6236, R_-^x = 0.7594, R_0 = 0.9290, R_+^x = 0.8488, R_-^y = 0.7157), where each square represents one pixel; the larger the coefficient, the more accurately the template and search window were matched at that pixel. (b) Using Equations (3.6)-(3.9), the shift from the original matched pixel position is calculated; in this example, the original matched pixel was located at pixel position (10, 10) and, with subpixel interpolation, this position becomes (10.1789, 9.9112).

The disparity, d_i, is then calculated for each feature using the left and right image features,

$$d_i = R_{peak,left}^x - R_{peak,right}^x \qquad (3.10)$$

where R_{peak,left}^x and R_{peak,right}^x are the subpixel locations of the feature in the left and right images. The Digiclops camera system is calibrated by the manufacturer and, as such, the left and right images are aligned so that corresponding features lie along the same horizontal line. Only the disparity in the horizontal direction is required for finding the 3D location of a feature, since the vertical disparity is zero.
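As an illustration of Equations (3.6)-(3.10), the following small sketch (written for this discussion, with illustrative names) reproduces the numbers of Figure 3.12:

```cpp
// Quadratic (parabolic) subpixel peak interpolation, Equations
// (3.6)-(3.9): rMinus, rZero, and rPlus are the correlation
// coefficients one pixel below (or left of), at, and one pixel above
// (or right of) the integer peak, in the direction of interest.
double subpixelShift(double rMinus, double rZero, double rPlus)
{
    return (rPlus - rMinus) / (2.0 * (2.0 * rZero - rPlus - rMinus));
}

// With the Figure 3.12 coefficients, the x shift is
// (0.8488 - 0.7594) / (2 * (2*0.9290 - 0.8488 - 0.7594)) = 0.1789,
// so the peak at pixel x = 10 moves to 10.1789. The disparity of
// Equation (3.10) is then the difference of the left- and right-image
// subpixel peak positions.
double disparity(double xPeakLeft, double xPeakRight)
{
    return xPeakLeft - xPeakRight;
}
```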
The image centre for each Digiclops varies and has also been included in Table 3.3 for the camera system used. In calculating the 3D points, the image centre is taken into account. The source code for the algorithm used to calculate these 3D points is found in Appendix B. The 3D location of the 14 features, (Xi,Yi, Zi), based on the Digiclops coordinate system are calculated using Uj — R peak,Tight ^centre vi — Rpeak,right Vcentre Zi = ll di UiZi for i = 1,2,... ,14 ViZj (3.11a) (3.11b) / / Once the location of all of the 3D points is found, a plane is fitted to the 3D points. The best-fit plane is found using least squares minimization. The plane and normal to the plane, i i , are defined using Ax + By + Cz + D = 0 where h= [A B C] . (3.12) The 14 points that are calculated using the above method are then solved using Yi Zi 1 A 0 x2 Y2 z2 1 B 0 C Yi Zi 1 D 0 for i — 14 . (3.13) Next, the 3D points are projected onto the plane. This step ensures that the points lie within one plane. The difference between the location of the points and the ideal plane is minimal, yet is still taken into account in order to ensure that Cp, the coordinate system that is created on the 3.3 Relationship between the Camera System and Optotrak System 44 plate, lies on the same plane as the points. The projection of these 3D points onto the best-fit plane is calculated using x = x0 + At, y = y0 + Bt, z = z0 + Ct (3-14) where the line (xo, yo, o^) passes through and perpendicular to the plane Ax + By + Cz + D = 0. Substituting Equation (3.14) into the equation of a plane, A(At + x0+ B(Bt + yO) + C{Ct + z0) + D = 0 A2t + Ax0 + B2t + By0 + C2t + Cz0 + D = 0 {A + B + C2)t + Ax0 + By0 + Cz0 + D = 0 (3.15a) (3.15b) (3.15c) Combining equations (3.14) and (3.15c) gives us the 3D location of the points after they are projected onto the best-fit plane, -Ax - By - Cz - D x = XQ + At, y = yo + Bt, z = ZQ + Ct for t (3.16) A2 + B2 + C2 Once the projected 3D points are calculated, Cp is calculated. Two 3D points from those on the plate, one for the origin and one in the y-direction, were chosen. Each of the coordinate system direction vectors are found using the source code provided in Appendix B. The y-direction, v, is calculated using |(Xi - X0, Yi - Y0, Z\\ - Z§)\\ where {Xo, Yo, Zo) is the origin of Cp relative to Co and (X\\,Y\\,Z\\) is the location of the feature defining the y-direction relative to CQ. Secondly, the z-direction, w, of Cp is calculated, [nx ny nz] w = (nx,ny,nz)\\ Thirdly, the x-direction, i i , is calculated, v x w u = |v X w| The transformation matrix between the plate and the Digiclops is defined, ux vx wx X0 (3.18) (3.19) Uy Vy uz vz w„ Yo wz Z0 0 0 0 1 (3.20) 3.3 Relationship between the Camera System and Optotrak System 45 Figure 3.13: Digiclops and Plate Coordinate Systems Data is collected using various plate to Digiclops distances and angles. The procedure described in this section is repeated for each set of data, each time producing different results for T^. Subsection 3.3.4 discusses how all of the different runs are used to find the final transformation from the Digiclops to the Local Optotrak coordinate systems. 3.3.2 IRED to Local Optotrak Transformation (Tf) The IRED to Local Optotrak transformation, , describes the transformation between the IRED sensors on the plate, Cp, and the Local Optotrak coordinate system, CL- The 3D location of all of these IREDs are recorded by the Optotrak system relative to CL- Six IREDs are secured onto the flat plate using double sided tape. 
3.3.2 IRED to Local Optotrak Transformation (T_L^R)

The IRED to Local Optotrak transformation, T_L^R, describes the transformation between the coordinate system defined by the IRED sensors on the plate, C_R, and the Local Optotrak coordinate system, C_L. The 3D locations of all of these IREDs are recorded by the Optotrak system relative to C_L. Six IREDs are secured onto the flat plate using double sided tape. Three additional IREDs are secured to the back of the Digiclops, as seen in Figure 3.14. These IREDs are attached using double sided tape. The IREDs on the Digiclops are used to define C_L. The resulting C_L has an xy plane parallel to the back surface of the Digiclops, and its z-axis points approximately in the direction of the camera lenses.

Figure 3.14: Local Optotrak and IRED Coordinate Systems

C_L is found by first finding a best-fit plane for the three IREDs attached to the Digiclops. Next, the normal to this plane is found. The direction vectors describing C_L are then calculated using three of the IRED locations, and the final transformation matrix is created. The details of the method used to calculate C_L are the same as those described in Subsection 3.3.1. For each run of data that is collected, T_L^R is calculated.

3.3.3 Plate to IRED Transformation (T_R^P)

The plate to IRED transformation, T_R^P, describes the transformation between C_P and C_R. Since the IREDs lie on the flat plate, the recorded IRED locations are on a plane that is parallel to the plate. T_R^P is determined using the IRED offset calculated in Subsection 3.2.2. The transformation is composed of three translations. The z-translation is caused by the IRED offset between the plate and the location where the IRED locations are recorded. The x-translation and y-translation are known directly from the features on the flat plate. C_P is chosen so that there are no rotations between C_P and C_R.

Figure 3.15: IRED and Plate Coordinate Systems

The grayscale paper that is attached to the flat plate includes a grayscale image overlaid with printed crosses, which are used as distinguishable features for the Digiclops, and circles, which dictate the placement of each IRED marker. The crosses and circles are drawn using CAD software in order to ensure that the relationship between each feature is known precisely. The distances between the origins of C_R and C_P in the x-direction and y-direction are 50 mm and 20 mm. Appendix A shows the CAD drawing used to place the IREDs onto the plate in their appropriate locations. The matrix containing each of the translations has the following form,

$$\mathbf{T}_R^P = \begin{bmatrix} 1 & 0 & 0 & 50 \\ 0 & 1 & 0 & 20 \\ 0 & 0 & 1 & z_{offset} \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3.21)$$

where z_offset is the IRED offset in the z-direction calculated in Subsection 3.2.2.

3.3.4 Digiclops to Local Optotrak Transformation (T_L^D)

In order to solve for T_L^D, Digiclops and Optotrak data are collected for 16 runs and T_D^P, T_L^R, and T_R^P are calculated. For each run, the Digiclops and plate are moved into a new location. The angle at which the flat plate is viewed by the Digiclops varies between tests. The Optotrak system also views the Digiclops and the plate at different angles for each run. Digiclops images that do not clearly show the feature points on the plate are not included in the data set of 16 collected runs; the features were not visible in these images due to difficulties in illumination or oblique viewing angles between the Digiclops and the plate.
Knowing T_D^P and T_L^R for each run, and T_R^P, which is valid for all runs, T_L^D is calculated for each run,

$$\mathbf{T}_L^D = \mathbf{T}_L^R \mathbf{T}_R^P \mathbf{T}_P^D = \mathbf{T}_L^R \mathbf{T}_R^P (\mathbf{T}_D^P)^{-1} \qquad (3.22)$$

where

$$\mathbf{T}_L^D = \begin{bmatrix} u_x & v_x & w_x & X_0 \\ u_y & v_y & w_y & Y_0 \\ u_z & v_z & w_z & Z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3.23)$$

The final T_L^D is determined using least squares minimization,

$$\mathbf{T}_L^D = \begin{bmatrix} T_{L_{11}}^D & T_{L_{12}}^D & T_{L_{13}}^D & T_{L_{14}}^D \\ T_{L_{21}}^D & T_{L_{22}}^D & T_{L_{23}}^D & T_{L_{24}}^D \\ T_{L_{31}}^D & T_{L_{32}}^D & T_{L_{33}}^D & T_{L_{34}}^D \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3.24)$$

The 12 unknown values, T_L11^D to T_L34^D, are made into a column vector,

$$\mathbf{T}_{L,unknown}^D = \begin{bmatrix} T_{L_{11}}^D & T_{L_{12}}^D & T_{L_{13}}^D & T_{L_{14}}^D & T_{L_{21}}^D & T_{L_{22}}^D & T_{L_{23}}^D & T_{L_{24}}^D & T_{L_{31}}^D & T_{L_{32}}^D & T_{L_{33}}^D & T_{L_{34}}^D \end{bmatrix}^T \qquad (3.25)$$

The known parameters for the transformations T_L^D from each run are placed into a column vector,

$$\mathbf{B} = \begin{bmatrix} \phi_1 & \phi_2 & \cdots & \phi_{16} \end{bmatrix}^T \qquad \text{for } n = 1, 2, \ldots, 16 \qquad (3.26)$$

where each φ_n contains the 12 transformation entries T_L11^D through T_L34^D calculated for run n. A minimization matrix is created,

$$\mathbf{A} = \begin{bmatrix} I_1 \\ I_2 \\ \vdots \\ I_{16} \end{bmatrix} \qquad (3.27)$$

where I_i is a 12 x 12 identity matrix for runs i = 1, 2, ..., 16. Least squares minimization is used to solve

$$\mathbf{A} \cdot \mathbf{T}_{L,unknown}^D = \mathbf{B} \qquad (3.28)$$

The result of this minimization, found with an RMS residual error of 1.64 mm, for our system is

$$\mathbf{T}_L^D = \begin{bmatrix} 0.99 & 0.06 & -0.01 & -17.93 \\ -0.06 & 0.99 & -0.01 & 124.04 \\ 0.01 & 0.01 & 0.99 & 45.72 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3.29)$$

This transformation matrix is used to measure the accuracy of the Digiclops through a series of tests described in Section 3.4.

3.3.5 Analysis of the Transformation Matrix T_L^D

Comparing Equation (3.29) with the standard form of a homogeneous transformation shown in Equation (3.23), the direction vectors of C_D, u, v, and w, and the origin of C_D with respect to C_L, P_L, are determined,

$$\mathbf{u} = [0.99\ \ -0.06\ \ 0.01]^T \qquad (3.30a)$$

$$\mathbf{v} = [0.06\ \ 0.99\ \ 0.01]^T \qquad (3.30b)$$

$$\mathbf{w} = [-0.01\ \ -0.01\ \ 0.99]^T \qquad (3.30c)$$

$$\mathbf{P}_L = [-17.93\ \ 124.04\ \ 45.72]^T \qquad (3.30d)$$

The physical origin of C_D cannot be measured by hand since its location is not visible. However, if we approximate the camera centre, the displacement results shown in Equations (3.30a) to (3.30d) seem reasonable. The origin of C_D is approximately -18 mm in the x-direction, 124 mm in the y-direction, and 46 mm in the z-direction with respect to C_L.

Figure 3.16: Digiclops and Local Optotrak Coordinate Systems

The direction vectors in Equations (3.30a) to (3.30c) show that there is almost no rotation between C_D and C_L. Since the markers defining the Local Optotrak coordinate system are placed along the square back of the Digiclops case, it is expected that C_L is approximately aligned with C_D and that the rotations between the two coordinate systems vary minimally. If there had been absolutely no rotation between the coordinate systems, the vectors would be u = [1 0 0]^T, v = [0 1 0]^T, and w = [0 0 1]^T.
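Because the blocks of A in Equation (3.27) are identity matrices, the normal equations of Equation (3.28) reduce to the element-wise mean of the 16 per-run entry vectors. A minimal sketch of this reduction (Eigen assumed as before, names illustrative):

```cpp
#include <Eigen/Dense>
#include <vector>

// Solve A * t = B of Equation (3.28) where A stacks n identity blocks:
// A^T A = n*I and A^T B is the sum of the phi_i of Equation (3.26), so
// the least-squares solution is the mean of the per-run 12-vectors.
Eigen::VectorXd solveStacked(const std::vector<Eigen::VectorXd>& phi)
{
    Eigen::VectorXd t = Eigen::VectorXd::Zero(12);
    for (const auto& p : phi) t += p;
    return t / static_cast<double>(phi.size());
}
```

Note that entry-wise averaging does not, in general, yield an exactly orthonormal rotation block; the near-identity rotation entries in Equation (3.29) suggest the effect is small here.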
3.4 Camera System Validation Tests

After determining the location of C_D with respect to C_L, the accuracy of the camera system is next determined using a test plate and a sphere as target surfaces. The method used to find the Digiclops accuracy, and the results obtained by measuring the entire surface as well as small patches on the surface of the targets, are described in this section.

In Section 3.3, T_L^D was found using the Optotrak system and the Digiclops. The method used to find this transformation involved measurements obtained with the Digiclops. These measurements were obtained by manually picking points from the camera images and then finding the corresponding 3D location in space using triangulation. As mentioned in Subsection 3.1.1, the camera system is calibrated by the manufacturer to be an ideal trinocular camera to within 0.06 pixels [42]. If two corresponding points are correctly chosen within the left and right images, the 3D location is therefore known to within 0.06 pixels. In this section, we use T_L^D to determine the accuracy of the Digiclops when automatic feature detection, feature matching between the trinocular images, and triangulation are used to find the 3D location of feature points. Instead of using the location of each feature point individually, we look at finding the accuracy of the mean of many points along a surface.

3.4.1 Method Used to Determine the Accuracy of the Digiclops

Flat Plate as a Target

A flat metal plate is used as a target for locating the surface with the Digiclops. Attached to the plate is a sheet of newspaper, providing a good mix of grayscale and black and white images that are rich with features. Pen and chalk marks are added to sections of the newspaper that are composed of a solid colour. These additions ensure that the camera has many feature points to locate. As when the transformation matrices were determined in Section 3.3, six IREDs are attached to the surface of the plate on top of the newspaper. The Digiclops system calculates the 3D location of features that are included in a region of the reference image containing the flat plate. Any protrusions, such as the stand holding the flat plate or the IREDs on the flat plate, are not included in the boxed region of the image. The box size is 360 x 160 pixels, placed in the centre portion of the complete image. The 3D locations of the features within the boxed region are collected at the same time that the IRED locations are recorded. Next, the IRED locations are converted into C_D using the T_L^D that was determined in Section 3.3,

$$P_D = \mathbf{T}_D^L P_L = (\mathbf{T}_L^D)^{-1} P_L \qquad (3.31)$$

The transformed IRED locations are then fitted to a plane. The best-fit plane is found using least squares minimization, as in Section 3.3. Once the coefficients of the equation of the plane are known, an offset plane is calculated. This offset is necessary since the location value recorded by the Optotrak system is different from the location of the back of each IRED marker. The offset plane is found using the equation that describes the distance from a point on the new plane, (x_0, y_0, z_0), to the old plane, Ax + By + Cz + D = 0. The values of x_0 and y_0 are chosen arbitrarily and the value of z_0 is solved for. The arbitrary selection of those two variables assumes that the new plane is not parallel to the x-axis or the y-axis,

$$z_0 = \frac{z_{offset}\sqrt{A^2 + B^2 + C^2} - A x_0 - B y_0 - D}{C} \qquad (3.32)$$

$$D_{new} = -(A x_0 + B y_0 + C z_0) \qquad (3.33)$$

The new plane has the form Ax + By + Cz + D_new = 0. This plane represents the true location and is used to find the error of the plate position recorded with the Digiclops. Each point that is calculated using the Digiclops is compared to the true plane. The equation describing the distance from a point P_D(P_Dx, P_Dy, P_Dz), relative to C_D, to the true plane defined with the Optotrak system, Ax + By + Cz + D = 0, is used to calculate the error for each point,

$$error_{plate} = \pm\, \frac{A P_{Dx} + B P_{Dy} + C P_{Dz} + D}{\sqrt{A^2 + B^2 + C^2}} \qquad (3.34)$$

This error is calculated for each 3D point, P_D, that is calculated with the Digiclops.

Sphere as a Target

The accuracy of the Digiclops is also tested using a sphere as a target. The sphere is used as it resembles the shape of a pregnant patient's abdomen. The spherical shape is also chosen since it is easily defined mathematically. A bowling ball is used for this experiment. The diameter of the ball is 225 mm and is measured using the Optotrak system. This diameter includes a thin coat of acrylic paint that is used to add features to the surface of the sphere. As when a flat plate is used as a target, IRED markers are attached to the surface of the sphere. In order to verify the spherical qualities of the bowling ball, a total of 12 IREDs are attached to the surface. Using the NDI Toolbox software (Northern Digital Inc., Waterloo, ON), a rigid body is created for these 12 markers from a dynamic position file.
The sphere is used as it resembles the shape of a pregnant patient's abdomen. The spherical shape is also chosen since it is easily defined mathematically. A bowling ball is used for this experiment. The diameter of the ball is 225 mm and is measured using the Optotrak system. This diameter includes a thin coat of acrylic paint that is used to add features to the surface of the sphere. Similarly to when a flat plate is used as a target, IRED markers are attached to the surface of the sphere. In order to verify the spherical qualities of the bowling ball, a total of 12 IREDs are attached to the surface. Using the NDI Toolbox software (Northern Digital Inc., Waterloo, ON), a rigid body is created for these 12 markers from a dynamic position file. The dynamic position file is 3.4 Camera System Validation Tests 53 created by rotating the sphere for 2 min, collecting 3600 frames of point locations of the IRED points calculated with the Optotrak system. Data is collected for a total of six different tests. Once the 3D locations of the IREDs are known, these points are fitted to a sphere using least squares minimization. After computing the best-fit sphere, the error between each 3D marker and the sphere are calculated. As shown in Appendix C, the difference between the IRED locations and the best-fit sphere location have a total RMS error of 0.07 mm. Since the Optotrak system has an accuracy of 0.15 mm RMS error, the spherical nature of our bowling ball could not be tested any further in our lab and is assumed to be a true sphere. After verifying the shape of the sphere, it is included in a test to determine the accuracy of the Digiclops. The Digiclops is required to view the surface of the spherical ball without occlusions from the IRED markers while still being visible to the Optotrak system. For these sets of experiments, one side of the sphere is painted with a random mix of colours and brush strokes. The Digiclops is positioned in such a manner that it has an unobstructed view of the surface. Next, IRED markers are attached to both the sides of the sphere and the Digiclops using double sided tape. In addition to the 3 IREDs attached to the back of the Digiclops that define CL, 4 IREDs are attached to the side of the Digiclops. These 7 IREDs define the location of the Digiclops as well as CL- There are 12 IREDs describing the sphere location and attached to the surface of the sphere. Both the Digiclops and the sphere are visible to the Optotrak system. A dynamic file is created by moving the Digiclops in front of the Optotrak system for 2 min, collecting 3600 frames of points. This file is then used by the Rigmaker software to create a rigid model. A rigid model is also created for the sphere. Using a rigid model to calculate the location of each object ensures that the angle with which the Optotrak system views each IRED does not introduce any errors into the location calculation. If the angle for an individual marker exceeds ±60°, then the information from that marker will not be included in the location calculation. In addition, a rigid body also allows the NDI Toolbench software to calculate the location of all of the markers within the rigid body file. These locations are calculated even if the markers are occluded or not within the ±60° range. Lastly, the software ensures that the rigid body is accurate by ensuring that a minimum of 4 IRED markers are visible and accepted by the Optotrak system at all times during the collection of data. 
In addition to the Optotrak system location measurements, the Digiclops collects data points from the features on the surface of the sphere. The IRED locations for the sphere are then used to create a best-fit sphere. This best-fit sphere is used in order to calculate the error between the points recorded using the Digiclops and the true sphere location defined by the Optotrak system.

The transformation from the rigid body to the locations of the markers being acquired by the Optotrak system for each run is calculated using the NDI Toolbench software. Using this transformation, all of the points for each rigid body are found regardless of whether or not they are visible to the Optotrak system during data collection. The data marker locations describing the sphere are used in order to find the best-fit sphere. The best-fit sphere provides the radius and centre of the sphere. Next, the sphere centre is transformed from $C_L$ into $C_D$ using $T_L^D$ that was found in Section 3.3,

$P_{D,cen} = T_L^D P_{L,cen} = (T_D^L)^{-1} P_{L,cen}$  (3.35)

where $P_{L,cen}$ is the centre point of the sphere recorded relative to the Local Optotrak coordinate system and $P_{D,cen}$ is this point in $C_D$. $P_{D,cen}$ is considered to be the true position of the sphere as it is transformed from data that is measured with the Optotrak system. The radius of the sphere is modified by subtracting the $z_{offset}$ for the IRED markers from its original value. The error between the Digiclops points and the true sphere, now defined with respect to $C_D$, is calculated,

$error_{sphere} = \sqrt{(P_{D_x} - P_{D,cen_x})^2 + (P_{D_y} - P_{D,cen_y})^2 + (P_{D_z} - P_{D,cen_z})^2} - r_{true}$  (3.36)

where $P_D(P_{D_x}, P_{D_y}, P_{D_z})$ is each point recorded by the Digiclops, $P_{D,cen}(P_{D,cen_x}, P_{D,cen_y}, P_{D,cen_z})$ is the true centre of the sphere, calculated with the Optotrak system and shown relative to $C_D$, and $r_{true}$ is the true radius of the sphere measured by the Optotrak system.

3.4.2 Accuracy Results of the Digiclops

The accuracy of the Digiclops system is next tested. Six IREDs are attached to the surface, near the perimeter, of a new flat plate that is covered with a grayscale texture and used for the test described in this subsection. The painted surface of the test sphere is again used along with 12 IREDs that are attached to the area that is not imaged by the Digiclops. The plate and sphere are placed at various positions ranging from 860 mm to 1050 mm from the Digiclops camera. The angle between the target and the camera also varies for each test run.

Figure 3.17: Images recorded by the Digiclops using the Flat Plate and Spherical Target Surfaces

As shown in Figure 3.17, a Digiclops window of 360 x 160 pixels for the plate images and 240 x 230 pixels for the sphere images records the 3D location of all detected features, $P_D$. The error between the true target locations recorded by the Optotrak system and the measured locations recorded by the Digiclops is calculated using Equations (3.34) and (3.36). Histograms showing the errors for six runs for each of the targets are shown in Figures 3.18 and 3.19. The histograms represent the distribution of errors for each point detected by the Digiclops compared to the true target surfaces. Since our system uses the mean of many locations recorded by the Digiclops as opposed to the location of individual points, the mean error for each set of data is also displayed in the figures.
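A minimal sketch of Equations (3.35) and (3.36) (Python with NumPy; illustrative only), mapping the Optotrak-measured centre into $C_D$ with a 4 x 4 homogeneous transform and computing the signed radial error of each Digiclops point:

    import numpy as np

    def transform_point(T_LD, p_L):
        # Eq (3.35): map a 3D point from C_L into C_D.
        p = T_LD @ np.append(p_L, 1.0)
        return p[:3]

    def sphere_errors(points_D, centre_D, r_true):
        # Eq (3.36): signed radial error against the true sphere.
        return np.linalg.norm(points_D - centre_D, axis=1) - r_true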
This mean error is denoted in the figures by a thin vertical line. For all of the test runs, including those that are not shown in Figures 3.18 and 3.19, the mean error ranges from -0.7 mm to 1.0 mm for the plate at distances between 947.4 mm and 1052 mm from the camera and -0.7 mm to 0.7 mm for the sphere at distances between 866.0 mm and 959.4 mm from the camera². The standard deviation for each of the six runs for both targets is also displayed in the figures. For all of the test runs, the standard deviation ranges from 0.9 mm to 1.6 mm for the plate and 0.6 mm to 1.1 mm for the sphere.

² The number of significant figures reported in this thesis, when the Digiclops was used as a measurement tool, is based on the repeatability of recording the location of points on an object over a series of runs.

The mean errors of all of the sets of data are plotted in Figure 3.20. The set of mean errors for all the test runs is distributed around zero error. Since $T_L^D$ is used to calculate these errors, a large error in this transformation would have resulted in values that are not distributed around zero error. Since this is not the case, we are reassured that the $T_L^D$ calculated in Section 3.3 is accurate.

Our tracking system uses the mean of 3D locations of points recorded by the Digiclops to create an accurate position of the surface. In order to measure the locations along a surface that does not have a known shape, small patches of points are used along the surface. The next subsection explores the effect of the size of these patches on the accuracy of the system.

3.4.3 Effect of Patch Size on the Accuracy of the Digiclops

In order to measure the position of a surface with an unknown shape using the points found with the Digiclops, it is necessary to divide the surface into small patches. The measured location of each of these patches is calculated using the mean of all recorded points in the patch. The previous subsections dealt with a patch size of 360 x 160 pixels for the plate and 240 x 230 pixels for the sphere. This subsection discusses the accuracy that can be expected as the patch size decreases.

For each of the runs discussed in Subsection 3.4.2, the boxed area of the Digiclops image that is used to measure the target location is divided into smaller regions. The region sizes and number of patches that are made from each boxed area for both the flat plate and sphere targets are shown in Table 3.4. The number of points in each patch that is stated in the table represents the maximum number of points that may appear in the patch. The maximum number of points is recorded only when the 3D location of every feature within the patch is found. There are pixels that do not contain enough information to be matched between stereo images; therefore, no 3D position is calculated for these pixels. In this test, a surface rich with features was chosen for both of our targets in order to ensure that a large number of features were matched between images. As in Subsection 3.4.2, the error for each point is described by the distance from the point calculated by the Digiclops to the true surface. The means of the errors for all the points within each patch for both the plate and the sphere are calculated and displayed in Figure 3.21. These plots represent the distribution of the mean error for each of the patches for all of the test runs. For each patch size, there are numerous patches and test runs.
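The patch test of Subsection 3.4.3 can be sketched as follows (Python with NumPy; our own illustration, with NaN marking pixels that produced no stereo match):

    import numpy as np

    def patch_mean_errors(error_grid, n_down, n_across):
        # Split a 2D grid of per-pixel surface errors into
        # n_down x n_across patches and return each patch's mean error.
        means = []
        for band in np.array_split(error_grid, n_down, axis=0):
            for patch in np.array_split(band, n_across, axis=1):
                if np.isfinite(patch).any():   # skip patches with no matches
                    means.append(np.nanmean(patch))
        return np.array(means)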
For example, contained within the box and whisker which represents the 6,900 pixel² patch size for the sphere, there are 10 runs x 8 patches = 80 mean patch errors. Notice that the box and whisker on the far right of each plot represents the set of mean errors for all the runs plotted in Figure 3.20.

Figure 3.18: Distribution of Errors for Six Runs of the Digiclops Accuracy Test using the Flat Plate as a Target. The distance between the camera and the plate is shown under each plot: (a) 947.4 mm, (b) 981.7 mm, (c) 973.7 mm, (d) 1049.0 mm, (e) 1052.0 mm, (f) 990.1 mm.

Figure 3.19: Distribution of Errors for Six Runs of the Digiclops Accuracy Test using the Sphere as a Target. The distance between the camera and the sphere is shown under each plot: (a) 908.9 mm, (b) 873.7 mm, (c) 866.0 mm, (d) 888.1 mm, (e) 923.9 mm, (f) 959.4 mm.

Figure 3.20: Mean Error of all the Points on each Surface for Various Digiclops to Target Distances

In addition to the error calculated between each patch location and the true plate location, the error of the orientation of each patch, $\eta$, is calculated using the data acquired with the flat plate target. The points from each patch are fitted to a plane and the normal of this plane, $\hat{n}_{patch}$, is calculated. The angular error between $\hat{n}_{patch}$ and the normal of the true plane, $\hat{n}_{true}$, calculated using the Optotrak, is found using

$\eta = 2\arctan\left(\frac{\sin\eta}{1 + \cos\eta}\right)$  (3.37a)

where $\cos\eta = \frac{\hat{n}_{true} \cdot \hat{n}_{patch}}{|\hat{n}_{true}||\hat{n}_{patch}|}$ and $\sin\eta = \frac{|\hat{n}_{true} \times \hat{n}_{patch}|}{|\hat{n}_{true}||\hat{n}_{patch}|}$ .  (3.37b)

For each patch in each test run, the error between the patch normal and the plate normal, $\eta$, is calculated. The results in Table 3.5 show the mean from all of the test runs for each patch size.

Table 3.4: Patch Size Information for the Flat Plate and Sphere

  Number of Patches   Width of Each     Height of Each    Area
  on Each Surface     Patch [pixels]    Patch [pixels]    [pixels²]
  Plate   Sphere      Plate   Sphere    Plate   Sphere    Plate    Sphere
    1       1          360     240       160     230      57,600   55,200
    2       2          180     120       160     230      28,800   27,600
    4       4          180     120        80     115      14,400   13,800
    8       8           90      60        80     115       7,200    6,900
   16      20           90      60        40      46       3,600    2,760
   32      40           45      30        40      46       1,800    1,380
   64      80           45      30        20      23         900      690
  144     160           20      15        20      23         400      345

Table 3.5: Mean Error Between the Patch Normal Measured Using the Digiclops and the Plate Normal Measured Using the Optotrak

  Number of Patches   Area of Each      Angle Between Normal
  on Flat Plate       Patch [pixels²]   Vectors, η [deg]
    1                  57,600            3.1
    2                  28,800            4.6
    4                  14,400            6.9
    8                   7,200            7.1
   16                   3,600           11.5
   32                   1,800            7.7
   64                     900           13.5
  144                     400           10.5

Figure 3.21: Accuracy of Surface Tracking using Various Patch Sizes. (a) Plate. (b) Sphere. Each box in these figures is calculated based on the mean of all the patches, of a specific number of pixels, created during all the test runs. The boxes cover the interquartile range (from the lower quartile to the upper quartile) with a horizontal line at the median.
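Equation (3.37) can be evaluated directly from the two normals; a sketch (Python with NumPy; illustrative only) that uses both the cross and dot products, which keeps the computation well conditioned for small angles:

    import numpy as np

    def normal_angle_deg(n_true, n_patch):
        # Eq (3.37): angle between the true plate normal and a patch normal.
        n1 = n_true / np.linalg.norm(n_true)
        n2 = n_patch / np.linalg.norm(n_patch)
        sin_eta = np.linalg.norm(np.cross(n1, n2))       # Eq (3.37b)
        cos_eta = n1.dot(n2)                             # Eq (3.37b)
        eta = 2.0 * np.arctan2(sin_eta, 1.0 + cos_eta)   # Eq (3.37a)
        return np.degrees(eta)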
The extension lines for each bar show the extent of the data that fall within 1.5 times the interquartile range for each patch size. Outliers that do not fall within this range are denoted by plus (+) signs.

3.5 Discussion

From Figure 3.20, we see that the Digiclops system is able to track a flat plate and a spherical rigid object with mean errors from -0.7 mm to 1.0 mm and -0.7 mm to 0.7 mm, respectively. The standard deviation of the ranges is between 0.9 mm and 1.6 mm for the plate and between 0.6 mm and 1.1 mm for the sphere. These accuracies are recorded using a patch size of 57,600 pixels² for the plate and 55,200 pixels² for the sphere. These patch sizes correspond to an area of approximately 400 x 400 mm on the surface of the targets with a distance of approximately 1000 mm between the Digiclops and the target. Calculating one single surface location for this size of surface is not ideal in our system since the patient's skin does not move as one rigid object. Also, the tracked object attached to the probe is smaller than the test plate used in this chapter. Smaller patches on the surface of the patient's skin and on the trackable object attached to the probe are therefore necessary in order to accurately measure the location of each surface.

From Figure 3.21, we see that as the number of pixels contained within each patch decreases, the mean error increases. The minimum patch sizes of 400 pixels² for the plate and 345 pixels² for the sphere have mean errors from approximately -3 mm to 2.5 mm for the plate and -2 mm to 2 mm for the sphere. The lower and upper quartiles of the patch error means lie within -1 mm and 1 mm. These error values are found using a combination of all of the runs recorded during the tests. The minimum patch size tested is equivalent to approximately 20 x 20 mm on the surface of the targets with a distance of approximately 1000 mm between the Digiclops and the target. This small patch size is most likely a suitable size when estimating the surface of the patient's skin, as the small patch size could allow the deformation of the surface to be measured. Since the trackable object attached to the probe is rigid, the patches used to measure its location could be larger than those used to measure the skin surface. A patch size of 3,600 pixels², approximately equal to an area of 90 x 40 mm, has a mean error from approximately -2.2 mm to 1.8 mm.

The angle between the normal of the test plate and the normal of each patch has a mean value that increases as the number of patches on the plate increases. If the normal of the entire surface is used, then the mean error between this normal and the true normal is 3.1°. When the surface is divided into 8 patches, each with 7,200 pixels², the mean error is 7.1°. At 144 patches per surface, with 400 pixels² in each patch, the mean error is 10.54°. These errors represent the rotational error in the two directions that lie in the plane of the plate. The results shown in Subsection 3.4.3 can therefore be interpreted as the combination of errors around both of these directions.

Although not directly comparable because different methods are used to calculate the accuracies, other tracking systems used for medical applications have the following accuracies.
The Fastrak A/C magnetic tracker (Polhemus Inc., Colchester, VT) has an RMS accuracy of 0.762 mm in translation and 0.15° in rotation, the Isotrak II A/C magnetic tracker (Polhemus Inc., Colchester, VT) has an RMS accuracy of 0.25 mm in translation and 0.75° in rotation, the Flock of Birds D/C magnetic tracker (Ascension Technology Corp., Burlington, VT) has an RMS accuracy of 1.8 mm in translation and 0.5° in rotation, and the Optotrak system has an RMS point accuracy of 0.15 mm. All of these systems track a minimal number of points on the target surface at a time. The Fastrak system is able to track up to 4 sensor locations, the Isotrak II and the Bird, up to 4 locations, and the Optotrak system, up to 256 locations. Each of the markers being tracked is able to measure the location where it is attached to the surface, but the majority of the surface does not have markers attached and is therefore not being tracked. For example, if the navel of a patient is tracked with one marker, the patient's lower abdominal movement during an ultrasound scan could only be estimated based on the motion of the navel. In contrast, our system is able to find the location of a large number of features distributed over the entire surface.

The systems mentioned above measure surface location based on attached markers. These markers are attached to the patient's skin surface using an adhesive. With these methods, there is a risk of the markers detaching or moving during the scan if the marker wires are accidentally pulled. This problem does not exist with the tracking system described in this thesis, as the grayscale textured surface that is being tracked is overlaid directly on the patient's skin using paint or a thin artificial skin. Details of the method used to overlay these features are discussed in Chapter 4.

Using the alternative tracking methods mentioned above, the positioning of the markers on the surface of the skin must be chosen very deliberately before the ultrasound examination. The markers must be placed in such a manner that they do not interfere with the ultrasound being performed. Since the markers protrude from the surface, it is important that they do not cover a portion of the skin where the probe will need to pass. For this reason, placement of the markers on the skin limits the area that can be imaged during the examination. In our tracking system, the textured surface is designed so that the probe can pass over the area being tracked. This means that the features are able to cover the entire skin surface, eliminating the need to pre-plan the placement of the tracked features.

Using the Optotrak system, a line of sight between the small number of markers and the sensors must be ensured in order to calculate marker locations. By using a large number of features instead, our system is able to calculate the location of areas of the surface that are not occluded from the cameras. For example, if the sonographer's arm or the ultrasound probe is occluding portions of the skin surface, the system is still able to track the movement of the rest of the skin surface.

The purchasing cost of the Digiclops is low compared to the Optotrak system. In addition, the Digiclops is very portable. The portability of a system used to track an ultrasound procedure should be at least as good as the portability of the ultrasound machine. In this way, if the ultrasound machine is used in multiple examination rooms, the tracking system can also be transported.
For example, the Digiclops can be placed directly on the ultrasound machine console and transported whenever the ultrasound machine is moved. The Digiclops is also significantly smaller than the Optotrak system, requiring less space in the examination room. As a comparison, the Digiclops has a volume of approximately 600 cm³ and a mass of 0.5 kg [43], whereas the Optotrak system has a volume of approximately 76,681 cm³ and a mass of 36.4 kg [40].

The accuracy of magnetic tracking devices is often reduced by ferromagnetic disturbances present in the examination room during use [26, 37, 45, 47, 80]. A study was conducted by [10] to test the accuracy of magnetic tracking devices under the influence of ferromagnetic disturbances present in an operating room. An Isotrak II system was found to have a translational error of 3.2 ± 2.4 mm and a rotational error of 2.9 ± 1.9°. The Bird D/C magnetic tracker was found to have a translational error of 6.4 ± 2.5 mm and a rotational error of 4.9 ± 2.0°. These errors are significantly worse in operation than those reported by the manufacturer.

The distance between the Digiclops and the target surface being measured may change as the application for our tracking system varies. At a distance of 1000 mm, the Digiclops that we have used is able to image an area of 600 x 800 mm. In Section 2.2, we estimated that the probe could move a maximum of 400 mm and the object attached to the probe, a maximum of 600 mm, in each of the anterior-posterior, superior-inferior, and medial-lateral directions during an abdominal ultrasound scan. The Digiclops is therefore suitable for tracking a probe and the surface of the patient's skin during an entire abdominal ultrasound scan. The distance between the target and the camera can be varied, changing the area that is visible to the tracking system. A Digiclops with a different focal length could potentially also be used in order to increase or decrease the FOV of the tracking system.

Chapter 4

Ultrasound Image-Based Consistency Test

The results discussed in Section 3.5 show that using the Digiclops in our tracking system is feasible, as it has sufficient accuracy for tracking large probe and patient motion. An experiment is conducted in this chapter to assess the consistency of the tracking system when both probe and patient movement are combined. A flat plate with a textured surface is attached to the probe, creating a surface with features that can be tracked by the Digiclops. The transformation from the plate to the ultrasound imaging plane is found by calibration.
Markers of a known shape, called fiducials, are placed on the surface of the patient's skin and are visible to both the Digiclops and the ultrasound machine. This experiment measures the consistency of calculating the location of the ultrasound image using only the camera system compared to using the camera system and the calibrated ultrasound probe. The components of this experiment create a mock scenario of the tracking system and are shown in Figure 4.1.

This chapter begins by discussing the physical material properties of the fiducials in very broad terms. Next, the requirements for the fiducials in our system are discussed. These requirements, combined with the material properties, lead to a selection of possible materials. These materials are then placed on a tissue-mimicking phantom and their appearance in the ultrasound images is observed. Next, prototype fiducials are created with the chosen material. The ultrasound probe is then calibrated so that it can be used in conjunction with the camera system.

Figure 4.1: Components of the Ultrasound Tracking Experiment

The consistency of the system is tested in order to assess the feasibility of relating the skin surface location to the ultrasound images. This consistency check is performed using the Digiclops camera system, an ultrasound machine, and two suitable tissue-mimicking phantoms. The results of both the ultrasound probe calibration and the ultrasound tracking system are presented. Next, an analysis of the errors present in our system and their impact on the results we obtain is discussed. The chapter closes with a discussion about different methods that can be used to improve the accuracy of the tracking system.

4.1 Ultrasound Materials

Each material that is chosen to be used in this experiment has different ultrasonic properties. These materials include the tissue-mimicking phantom (Subsection 4.2.1), the artificial skin (Subsection 4.2.2), the fiducials (Subsection 4.2.2) and finally, the coupling gel (Subsection 4.1.3). The choice of material for these components is guided and constrained by the physical properties of each material.

4.1.1 Properties of Ultrasonic Materials

In this subsection, ultrasonic properties of materials are discussed in general terms. The equations that describe the interaction between a material and an ultrasound beam are stated and described. The effect that two different materials have on an ultrasound image is also examined. Finally, relevant properties for a variety of materials are given.

In the experiment, there are often multiple materials through which the ultrasound beam must penetrate. For this reason, it is important to examine the effects these layers have on the ultrasound beam at each of the interfaces and throughout each medium. Each time an ultrasound image is acquired, the probe produces and receives ultrasound waves using the piezoelectric crystals inside the probe. The crystals change shape when an electric current is applied, causing ultrasound waves to travel outwards. These waves travel through the material which is being imaged until a boundary between materials is reached. At this boundary between two different mediums, reflection and transmission occur. When waves are reflected, they return to the probe and excite the crystals, causing them to emit electrical signals.
In addition to reflection, transmission also occurs, allowing the ultrasound wave to continue traveling through the second medium after it has passed the boundary, where it may produce subsequent echoes.

The acoustic impedance ($Z$) of a material can be used to calculate reflection intensity. Equation (4.1) states acoustic impedance in terms of density ($\rho$) and velocity ($v$). An equivalent form of acoustic impedance is also shown in terms of the density and adiabatic bulk compressibility ($\kappa$) of the material,

$Z = \rho v = \sqrt{\frac{\rho}{\kappa}}$ .  (4.1)

As soon as an ultrasound beam attempts to permeate the boundary between two materials with different acoustic impedances, some amount of reflection will occur. The reflection coefficient, $R$, the ratio of reflected to incident pressure, is calculated [12] using the acoustic impedances of the first ($Z_1$) and second ($Z_2$) materials,

$R = \frac{\frac{Z_2}{\cos\theta_t} - \frac{Z_1}{\cos\theta_i}}{\frac{Z_2}{\cos\theta_t} + \frac{Z_1}{\cos\theta_i}}$ .  (4.2)

The incident angle, $\theta_i$, and the transmitted angle, $\theta_t$, are depicted in Figure 4.2. The ratio of reflected power density to incident power density describes how intensely a boundary will appear in an ultrasound image and is calculated as $R^2$. As $R^2$ approaches unity, the boundary between the two materials becomes very clear. At unity, the beam is completely reflected. In our experiment, a variety of material interfaces are required. Table 4.1 shows the acoustic impedance of a selection of materials.

Figure 4.2: Geometry of the Transmitted and Reflected Components of an Incident Ultrasound Wave

Table 4.1: Ultrasonic Properties of a Selection of Materials

  Material                         Velocity     Density    Acoustic Impedance   Reference
                                   [m/s]        [kg/m³]    [x10⁶ kg/(m²·s)]
  Air [STP]                        330          1.2        0.0004               [5]
  Silicon Rubber                   974-1027     1050-1380  1.04-1.34            [17]
  Dow Silastic Rubber              1020-1040    1140-1250  1.16-1.3             [17]
  Water [20°C]                     1527         993        1.516                [5]
  Average Soft Human Tissue        1480-1570    940-1070   1.39-1.68            [12]
  Polyurethane                     1490-2090    1040-1300  1.38-2.36            [17]
  Paraffin Wax                     1940         910        1.76                 [17]
  Low Density Polyethylene         1950         920        1.79                 [17]
  (e.g. Shopping Bag)
  Butyl Rubber                     1800         1110       2.0                  [17]
  (e.g. Bicycle Inner Tube)
  Scotch Tape (0.0025 m thick)     1900         1160       2.08                 [17]
  Oak Wood                         4000         720        2.9                  [17]
  Rigid Vinyl                      2230         1330       2.96                 [17]
  Acrylic Plexiglass               2610-2750    1180-1190  3.08-3.26            [17]
  Aluminum                         6420         2700       17.3                 [5]
  Tin                              3300         7300       24.2                 [17]
  Rolled Copper                    5010         8930       44.6                 [17]
  Stainless Steel                  5790         7890       45.7                 [17]
  Steel                            5800         7900       45.8                 [5]
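As an illustration of Equations (4.1) and (4.2), the sketch below (Python with NumPy; our own example with values taken from Table 4.1, not part of the thesis) computes the pressure reflection coefficient and the reflected power fraction $R^2$ at a material boundary:

    import numpy as np

    def reflection_coefficient(z1, z2, theta_i_deg=0.0, v1=None, v2=None):
        # Eq (4.2): pressure reflection coefficient. For oblique incidence
        # the transmitted angle follows from Snell's law using the two
        # material velocities v1 and v2.
        theta_i = np.radians(theta_i_deg)
        if v1 is None or v2 is None:
            theta_t = theta_i                 # normal incidence
        else:
            theta_t = np.arcsin(np.clip(v2 / v1 * np.sin(theta_i), -1.0, 1.0))
        a, b = z2 / np.cos(theta_t), z1 / np.cos(theta_i)
        return (a - b) / (a + b)

    # A soft-tissue-like material (~1.5 MRayl) against steel (~45.8 MRayl)
    # at normal incidence reflects most of the incident power:
    r = reflection_coefficient(1.5e6, 45.8e6)
    print(r ** 2)    # reflected power fraction, roughly 0.88

This is why a steel fiducial against latex produces a bright, clearly detectable boundary in the ultrasound image, while the gel, skin, and phantom interfaces, with nearly matched impedances, remain faint.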
4.1.2 Material Requirements

For our experiment, properties for four types of materials are considered. In Figure 4.3, two different setups are shown. The first, Figure 4.3(a), shows the setup for the experiment that is described in Section 4.4. The second, Figure 4.3(b), shows the setup that is used to pick the materials for both the fiducials and the artificial skin. Table 4.2 shows the ultrasonic property requirements for the boundaries between the materials used in this experiment.

Figure 4.3: Layers of Materials Used for the Complete Experiment and the Material Tests. (a) Experiment: coupling gel, artificial skin, fiducial, and phantom over the probe. (b) Material test: coupling gel, fiducial, and phantom over the probe.

Table 4.2: Properties of the Boundaries between the Materials Used for the Ultrasound Tracking Experiment

  Boundary Description                            Reflection     Transmission
  Ultrasound Probe / Coupling Gel                 none           complete
  Coupling Gel / Artificial Skin Material         none           complete
  Artificial Skin Material / Fiducial Material    medium-high    medium-low
  Fiducial Material / Artificial Skin Material    medium-high    medium-low
  Artificial Skin Material / Coupling Gel         none           complete
  Coupling Gel / Phantom                          none           complete

From Table 4.2, conclusions are drawn about the relative ultrasonic properties of the different materials. The ultrasound probe, coupling gel, artificial skin material and phantom are chosen such that they have very similar acoustic impedances. The more similar these acoustic impedance values are, the higher the transmission and the lower the reflection will be. The boundaries between materials are faint in the ultrasound image when there is high transmission and low reflection as the ultrasound passes from one material to the other. In contrast, it is imperative that the boundary between the artificial skin material and the fiducials is visible in the ultrasound image. It is through the detection of this boundary that the fiducials within the ultrasound images are located. In order to see this boundary, the acoustic impedance of the fiducial material must therefore be either larger or smaller than that of the artificial skin material.

It is also important that the fiducials do not block important anatomical information in the ultrasound images. A fiducial that creates a bright spot and a minimal amount of shadowing beneath the bright spot in the ultrasound image supplies sufficient information to be located. Materials that are chosen properly produce the required information without destroying the anatomical information that is recorded in the ultrasound image. In order to determine the required acoustic impedance for these materials, a number of materials are tested and the results shown in Subsection 4.1.3.

The material surrounding the ultrasound probe as well as the coupling gel have fixed properties chosen by the manufacturers. The probe and gel are both designed to approximate the acoustic impedance of human skin. The probe used during this experiment has an outer material with an acoustic impedance between 1.233 x 10⁶ kg/(m²·s) and 1.309 x 10⁶ kg/(m²·s) and a velocity between 979 and 1039 m/s [20]. The coupling gel has an acoustic impedance of 1.54 x 10⁶ kg/(m²·s) and a velocity of 1510 m/s [44]. The artificial skin material must also have an acoustic impedance close to that of the coupling gel since it must closely match it in ultrasonic properties. Lastly, the phantom must also have an acoustic impedance similar to that of the coupling gel so that there is not a large amount of reflection at its surface. In addition to the properties shown in Table 4.2, the phantom, described in Subsection 4.2.1, must also closely mimic the ultrasonic properties of human tissue.
In addition to the choice of acoustic impedance, there are other factors that must also be considered during the selection of materials for the artificial skin and fiducials:

• Colour: A large colour contrast between the artificial skin and the fiducials aids in visibility for the Digiclops camera system. When used on a real patient, the artificial skin must have a grayscale textured surface so that it can be tracked using the Digiclops. If the artificial skin is translucent, the variability in patient skin tones must be considered when the fiducial colours are chosen.

• Contour: The fiducials must be small enough that they fit the contour of the skin surface. If the fiducials are attached to each other, these attachments must not interfere with the flexibility of the artificial skin.

• Cost: The price of the materials becomes crucial if a new artificial skin is required for every ultrasound examination.

• Manufacturing Process: The fiducials and artificial skin should be manufactured reasonably easily and quickly.

• Precision: Each fiducial must have the same dimensions. The artificial skin must conform to the surface of the skin so that no air is trapped between the artificial skin and the skin surface.

• Reusable: The materials should be reusable while still remaining sanitary for medical applications when used on a real patient.

• Recyclable: If the materials cannot be reused, they should be recycled. Ideally, the fiducials and artificial skin are also made of recycled materials.

• Rigidity: Each fiducial must not distort under the force of the ultrasound probe. This rigidity ensures that the shape of the fiducial is always known, making the calculations from the information taken from the ultrasound images possible.

• Safety: The artificial skin material must not cause any type of harm to the patient or sonographer when used on a real patient. Common allergies should be considered when a material is chosen.

• Slippage: During the ultrasound examination, the artificial skin must not slip along the surface of the patient.

• Standardization: Standard artificial skin shapes and sizes, rather than a specific fit for each real patient, would save time and money and reduce the amount of waste produced from each examination.

• Visibility: The fiducials must be large enough to be visible to the Digiclops camera system and in the ultrasound images. At the same time, the fiducials must have thin enough lines that they do not occlude necessary information in the ultrasound images.

Figure 4.4 shows a schematic of the relationship between the width of each fiducial component and the occluded area in the ultrasound image. As the fiducial width increases, the amount of information below the fiducial that is recorded in the ultrasound image diminishes. Since the ultrasound probe is composed of a linear array of piezoelectric crystals and the crystals are fired in groups, the fiducial width must be small compared to the width of the group of crystals. As the fiducial width increases compared to the group, the shadow in the ultrasound image also increases, decreasing the amount of anatomical information collected. In Figure 4.4, the shadow under the fiducial is depicted as a triangle since spatial compounding (averaging of multiple images from beams fired at various angles) is assumed to be used in acquiring the ultrasound images. If spatial compounding had not been used, each shadow would be considerably longer under each fiducial.
The goal is to choose materials for the fiducials that offer a compromise between all of the factors mentioned above. A series of tests was conducted, as shown in Subsection 4.1.3, to examine the ultrasound images created with a variety of materials. Using this information and the information discussed in this subsection, suitable materials are chosen.

4.1.3 Material Tests

This subsection describes the results obtained when different types of materials are viewed in ultrasound images. The amount of shadowing as well as the initial bright spot created by each sample are observed and the results compared. The information collected from these tests is used in order to decide which materials should be used for the fiducials and the artificial skin.

For this set of tests, a phantom is created in order to mimic human tissue. The method used to create the phantom and the properties of the phantom are described in Subsection 4.2.1.

Figure 4.4: Relationship Between the Fiducial Size and the Occluded Area in the Ultrasound Image. (a) Narrow fiducial component. (b) Wide fiducial component. The size of the shadow in the ultrasound image is depicted depending on the size of the fiducial used.

For our experiment, we use a multi-purpose ultrasound system, the Ultrasonix 500 Research Package (Ultrasonix Medical Corp., Burnaby, BC). Throughout the experiment, an L7 linear array broadband probe is used with the system. This probe has a view width of 38 mm and a frequency range from 4.5 MHz to 9.0 MHz. Generally, this probe is used for abdominal imaging of depths up to 50 mm.

A selection of fiducial and artificial skin materials is chosen based on their estimated ultrasonic and physical properties. Each sample of material is placed on the phantom. The test samples are created with varying thicknesses and widths. Depending on the material being tested, appropriate sample dimensions are chosen. Clear Image ultrasound scanning gel (Sonotech, Inc., Bellingham, WA) is used between the phantom, sample, and probe interfaces. The gel is required in order to provide an acoustic pathway between surfaces. The ultrasound probe is set to a depth of 35 mm and each ultrasound image contains 440 x 440 pixels. Figure 4.5 shows some of the ultrasound images recorded of the tested materials. Each tested material was chosen because of its compliance with the required material attributes described in Subsection 4.1.2. For a more complete set of ultrasound images, refer to Appendix D.

Latex rubber is chosen as the material for creating the artificial skin that is used in the experiment described in Section 4.4. As seen in Figure 4.5(a), the difference between the ultrasound image of the phantom and the ultrasound image of the latex sheet on top of the phantom is very small. A latex sheet therefore does not cause a noticeable loss in ultrasound wave intensity as the wave propagates through the latex. Since the purpose of the latex during the experiment is to hold the fiducials in place while allowing the ultrasound waves to pass, latex rubber is used. In addition to allowing the ultrasound waves to pass, the artificial skin must also provide a rich textured surface for the Digiclops to view. Both before and after the latex is cured, colour can be added to the rubber to create random patterns and tones along the surface.
The versatility of creating an artificial skin out of latex is another deciding factor for choosing this rubber. Latex cures quickly in the air without any need for a catalyst. The rubber sheet was created, as described in Subsection 4.2.2, to match the shape of the phantom. Lastly, latex rubber is currently commonly used in hospitals and in contact with patients.

The steel sewing needle, shown in Figure 4.5(c), is chosen as the material for creating the fiducials. The needles are chosen because of their rigidity. In order to make use of the location of the fiducial in the ultrasound image, it is necessary that the fiducial does not deform under the force of the probe. Since each needle has a consistent diameter, the diameter of each component of the fiducial is ensured to be the same. The needles were also chosen for our experiment because of their obvious visibility in the ultrasound images and their ability to be seen at a distance, such as with the Digiclops.

4.2 Components Used to Test the Tracking System

The overall setup for the experiment is shown in Figure 4.6. The Digiclops system is used in conjunction with a tissue-mimicking phantom. The phantom is shaped like a human torso and is overlaid with a latex sheet that contains fiducials made of steel. The following subsections describe each of the test components that have not been previously described in this thesis. For a description of the Digiclops camera system, refer to Section 3.1. The details about the ultrasound probe and machine that are used throughout this experiment are presented in Subsection 4.1.3.

Figure 4.5: Ultrasound Image Results for Various Materials of Various Sizes. (a) The left side of the image is only the phantom and the right side is a sheet of latex over the phantom; the latex does not substantially alter the ultrasound wave. (b) A 2 mm wide strip of gage 14 aluminum sheet placed over the phantom. (c) A steel sewing needle with a diameter of 1.24 mm placed over the phantom. (d) A 2 mm wide strip of gage 21 steel sheet placed over the phantom. (e) A 5 mm wide strip of a temporary tattoo placed over the phantom. (f) A 10 mm wide strip of adhesive paper tape placed over the phantom.

Figure 4.6: Setup of Apparatus Used for the Ultrasound Tracking Experiment

4.2.1 Phantom Construction

Two phantoms of human torsos are created for this experiment. The phantoms are used in place of human test subjects in order to evaluate our system. In addition to having acoustic properties that mimic human tissue, these phantoms also have a human shape. The realistic form of the phantoms increases the relevance of the data that is collected during the experiment. The camera system and ultrasound machine are both able to acquire data from the realistic surface of the phantoms. The experiment described in this chapter is performed once for each of the torso phantoms. Examples involving one of the two phantoms are used throughout Section 4.4 to describe the procedure used for the experiment. The two phantoms are each used separately to acquire accuracy data.

The first phantom has the shape of the abdomen of a 39 week pregnant woman. This phantom represents patients who are being scanned in order to create a 3D ultrasound image of the fetus. The 3D ultrasound may be used in order to judge the location, size, or attributes of the fetus.
The second phantom has the shape of the torso of a 25 year old male. This phantom represents patients being scanned in order to view a large area of their internal organs. An example of this occurs when both kidneys are viewed in one panoramic ultrasound image or when the entire stomach is imaged at once with 3D ultrasound.

The first step in creating each phantom required that a mould be created. Each mould is created using gauze that is impregnated with plaster. The gauze is cut into appropriately sized strips, wet in water, and applied onto the human model's abdomen. Petroleum jelly is used as a release agent between the model's skin and the plaster. The plaster gauze is allowed to solidify and the mould is removed. The moulds are dried completely in the air. Next, a rubber lining is applied to the inner surface of the moulds. This rubber lining makes the inner surface of the moulds impermeable to the phantom material. Three coats of liquid latex rubber (Coast Fiber-Tek Products Ltd., Burnaby, BC) are applied with a brush to the inner surfaces of the moulds. Each coat is allowed to dry for approximately 20 minutes before the next coat is applied. The result is a thin layer of latex on the inside of the moulds.

The material for the phantom is next created based on the procedure described in [70]. A solution of distilled water and 8% [by volume] 99.5+%, A.C.S. reagent glycerol (Sigma-Aldrich, St. Louis, MO) is mixed with 3% [by mass] 50 µm Type 50 Sigmacell cellulose particles (Sigma-Aldrich, St. Louis, MO) and 3% [by mass] high gel strength agar (Sigma-Aldrich, St. Louis, MO). The mixture is stirred and heated over a burner until it reaches a temperature of 85°C. Next, it is removed from the heat and allowed to cool to 60°C. It is then poured into the rubber lined torso moulds and allowed to cool completely. The two phantoms are shown in Figure 4.7.

A phantom is created for both of the torsos in order to approximate the properties of average soft human tissue. Attenuation is a term used to account for the ultrasound beam's reduction in intensity as it moves through a medium. Factors that contribute to attenuation within a material include divergence of the ultrasound wave, reflection at material interfaces, scattering, and ultrasound wave absorption [12]. Table 4.3 shows a selection of ultrasound properties for various mammalian tissues [31, 32]. The average soft tissue acoustic velocity is assumed to be 1540 m/s. Based on [70], at a frequency of 4 MHz, our phantoms have an acoustic velocity of 1545 m/s and an attenuation of 2 dB/cm. As a comparison, a commercially manufactured fetal ultrasound training phantom (CIRS: Computerized Imaging Reference Systems Inc., Norfolk, VA) that has a fetal model contained within a block of polyurethane has an attenuation of 0.05 dB/cm and an acoustic velocity of 1430 m/s [14].

Figure 4.7: Tissue-Mimicking Phantom Torsos. (a) Female torso. (b) Male torso.

4.2.2 Creating the Artificial Skin and Fiducials

An artificial skin with embedded fiducials is created for each of the two phantoms. The purpose of these fiducials is to provide a reference location that is visible in both the Digiclops images and the ultrasound images. The fiducials are shaped like the letter "N" inside of a square. These N-shaped fiducials are created using steel sewing needles with a diameter of 1.24 mm. The cylindrical portion of 4 needles is cut to a length of 26 mm. The edges are filed at a 45° angle.
A fifth needle is cut to a length of 33.26 mm. The edges of this last section are then filed down to form a 45° "V" shape. A drawing showing the details of the dimensions and the construction of this N-shaped fiducial is included in Appendix E.

The two N-shaped fiducials are next embedded in a matrix of latex rubber. As shown in Subsection 4.1.3, latex rubber does not substantially alter the data in the ultrasound image. Latex rubber is also used since it is possible to create a sheet of latex rubber that exactly matches the form of the phantom. This sheet that includes the N-shaped fiducials is then placed on the surface of the phantom before it is imaged.

Table 4.3: Ultrasonic Properties for Mammalian (Human Unless Otherwise Noted) Tissues [31, 32]

  Tissue Type                           Ultrasound        Acoustic         Attenuation
                                        Frequency [MHz]   Velocity [m/s]   [dB/cm]
  Abdominal Wall (Fat/Muscle)           5                 N/A              13.50-14.70
  Abdominal Wall (Mainly Fat)           5                 N/A              5-13.5
  Amniotic Fluid                        5.04              1510 ± 3         7.06x10⁻² - 8.6x10⁻²
  Blood                                 5                 1560-1601        N/A
  Fat                                   1                 1479             0.6 ± 0.2
  Fat                                   5                 N/A              0.27 ± 0.8
  Kidney                                4                 N/A              10
  Kidney (Beef)                         1.8               1572             N/A
  Liver                                 1.5               1540             1.76
  Muscle (Striated)                     1                 1566             1.4 ± 0.6
  Skin                                  1                 1498             3.5 ± 2.3
  Skin                                  5                 N/A              9.2 ± 5.5
  Spleen (Beef)                         5                 N/A              5.1-8.1
  Uterus (abdomen and uterine wall)     2.25              N/A              0.5-1

The sheet of latex rubber with the embedded N-shaped fiducials is created using a plaster replica of each of the models' torsos. The plaster cast is created using the phantom mould described in Subsection 4.2.1. A mixture of 2 parts plaster of paris and 1 part water [by volume] is combined and poured into the moulds. The plaster casts are allowed to set and then removed from the moulds.

The portions of the latex sheet that contain the N-shaped fiducials are created next. Since air bubbles interfere with the quality of the acquired ultrasound image, the number of air bubbles in the latex sheet is minimized. For this experiment, it is necessary that the air bubbles in the latex immediately surrounding the N-shaped fiducials are removed; the rest of the latex sheet is not used to acquire ultrasound images. The air bubbles are removed using a desiccator and a vacuum chamber during the curing process of the latex. As the air is removed from the desiccator, the bubbles rise to the surface of the latex sheet. Each coat of latex is placed in the vacuum after it is applied to the previous coat of the latex skin. An image of one of the N-shaped fiducials embedded in the latex sheet is shown in Figure 4.8. The circular portion around the N-shaped fiducial encompasses the portion of the latex skin that is created in the vacuum chamber.

Figure 4.8: Close-up View of an N-Shaped Fiducial Embedded in a Latex Skin. The labelled regions show the latex created in a vacuum, the N-shaped fiducial, and the latex created outside of a vacuum.

After creating the latex skin immediately surrounding the N-shaped fiducials for each of the torsos, the rest of the latex sheets are created. A fine layer of talc is dusted onto the surface of the plaster mould before the latex is applied. The talc acts as a release agent so that the latex rubber does not stick to the plaster casts. The latex sheets from each of the moulds are shown in Figure 4.9.

Figure 4.9: Artificial Latex Skin with Embedded N-Shaped Fiducials. (a) Female torso. (b) Male torso.
4.3 Ultrasound Probe Calibration

In order to find the relationship between the ultrasound image and the ultrasound probe, a calibration procedure is used. In general, calibration is performed by taking ultrasound images of an object with a known geometry. The images are recorded from various positions and orientations. The location of the probe is recorded, using an external tracking system, for each ultrasound image that is acquired. Next, the location of the object within the ultrasound image and the corresponding location with respect to the external tracker are calculated. Finally, the transformation between the ultrasound coordinate system and the probe coordinate system is solved using an optimization technique to reconstruct the known geometry. Calibration procedures can use a single pinhead, sphere or bead [6, 51, 58, 61, 88], the intersection between two wires [37, 55, 67], the intersection between three wires [67], a planar structure [67], or a set of N-shaped wires [11, 16, 51, 62, 91] to calibrate an ultrasound probe.

Our system uses a calibration box containing N-shaped wires, an idea originally proposed by [16]. The calibration box was designed according to the method described in [62]. The calibration setup consists of a plexiglass box with holes drilled in rows along two of the facing plates. Nylon wires are threaded through these holes in order to create a pattern of lines in the shape of "N"s. These N-shaped wires provide a known geometry, as the location of each N-shaped wire is known within the box. The location of the box is calculated with respect to the tracking system using specific reference points on the rim of the box. These reference points are measured using the tracking system. A different method is used to measure these points with each type of tracking system. In our system, the Digiclops is used to calculate the location of these reference points since they can be seen in the Digiclops images. The location of the probe is also measured using the tracking system. Using this information, the location of the N-shaped wires can be calculated relative to the probe. The calibration uses the knowledge of the location of the N-shaped wires in the ultrasound images, as well as their location relative to the probe, to solve for the calibration matrix. The procedure used to perform the calibration is based on the technique described in [91].

Using the N-shaped wire method, various tracking systems have been used to track the location of the probe during this calibration procedure. Magnetic trackers have been used in [62]. Similar techniques have also been performed using an optical tracking device with passive [51] and active markers [11, 91] to measure the probe and calibration box. Our calibration procedure is unique as it uses the Digiclops to find the location of the probe and the calibration box. During the calibration procedure, we use stereo vision techniques to track the location of the flat plate.

The Digiclops is used to find the location of the calibration box with respect to $C_D$. This box is made of plexiglass and nylon wires that form N shapes. The ultrasound probe is positioned at the top of the box, which has been filled with water and submerged, with the exception of the top rim of the box. The nylon wires are visible in the ultrasound image as bright spots. These spots are manually identified in the image and the corresponding location on the box is calculated. Attached to the ultrasound probe is a flat plate.
This plate is used as a target for the Digiclops in order to track the probe location. As the probe is moved, the ultrasound image changes and the location of the flat plate changes accordingly. A nonlinear least squares solution of the transformation between the ultrasound image and the flat plate is found using all of the collected information. Figure 4.10 shows a diagram of all of the components used during this calibration procedure. The following subsections discuss each of these steps in more detail.

Figure 4.10: Transformations used to Find the Transformation from the Ultrasound Image to the Flat Plate

4.3.1 Flat Plate to Digiclops Transformation ($T_P^D$)

The transformation between the target plate attached to the ultrasound probe and the Digiclops, $T_P^D$, is determined for each probe location during the calibration. A diagram of the direction of the coordinate systems is shown in Figure 4.11. In order to calculate the location of the flat plate, 10 crosses printed on paper and overlaid onto a grayscale image are placed on the plate. Using the same procedure described in Subsection 3.3.1, the centre point of each cross is matched between the left and right Digiclops images. Again, using the method described in Subsection 3.3.1, cross-correlation and subpixel interpolation are used to find the best match between the chosen points in the left and right images. A template size of 20 x 20 pixels and a search window size of 40 x 40 pixels around the points chosen by the user are used during the cross-correlation. The 3D location of each feature point is found and the u, v, and w directions are calculated. Equation (4.3) defines the transformation matrix between $C_P$ and $C_D$,

$T_P^D = \begin{bmatrix} u_x & v_x & w_x & x_0 \\ u_y & v_y & w_y & y_0 \\ u_z & v_z & w_z & z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$ .  (4.3)

For each acquisition of points during the calibration procedure, a different $T_P^D$ is calculated.

Figure 4.11: Digiclops and Plate Coordinate Systems

4.3.2 Calibration Box to Digiclops Transformation ($T_H^D$)

This subsection describes the transformation between the calibration box and the Digiclops camera system. The calibration box coordinate system, $C_H$, and $C_D$ are shown in Figure 4.12. $C_H$ is defined for the calibration box with an origin at one of the bottom corners of the box. During the calibration procedure, the calibration box is filled with water and submerged so that ultrasound images of the wires may be collected. The very top of the box is left out of the water so that the attached reference points are visible. Four reference locations are chosen along the top rim of the box so that the location of the box can be determined using the Digiclops. Attached to each reference point is a small piece of paper with a grayscale texture and a cross printed on it. When images are recorded with the Digiclops, these features provide the necessary data to determine the location of the calibration box. A reference coordinate system along the top rim of the box, $C_E$, is created using these reference points.

Figure 4.12: Digiclops and Calibration Box Coordinate Systems
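Both $T_P^D$ in Equation (4.3) and $T_E^D$ in Equation (4.5) below assemble a homogeneous transform from fitted axis directions and an origin, and Equation (4.4) composes two such transforms. A minimal sketch (Python with NumPy; our own illustration, not the thesis implementation):

    import numpy as np

    def build_transform(u, v, w, origin):
        # Eqs (4.3)/(4.5): the rotation columns are the orthonormal axis
        # directions; the last column is the frame origin.
        T = np.eye(4)
        T[:3, 0], T[:3, 1], T[:3, 2] = u, v, w
        T[:3, 3] = origin
        return T

    def box_to_digiclops(T_ED, T_EH):
        # Eq (4.4): T_H^D = T_E^D (T_E^H)^-1.
        return T_ED @ np.linalg.inv(T_EH)

With such a matrix, a point expressed in the plate frame maps into the Digiclops frame by appending a homogeneous 1 and multiplying.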
$T_H^D$ is calculated in two steps; each step is described in this section. The first step requires finding the transformation from $C_E$ to $C_D$, $T_E^D$, which is found using images provided by the Digiclops and the method described below. Next, the transformation from $C_E$ to $C_H$, $T_E^H$, is calculated based on the measurements of the box. Multiplying these two transformation matrices, $T_H^D$ is determined,

$T_H^D = T_E^D T_H^E = T_E^D (T_E^H)^{-1}$ .  (4.4)

A selection of Digiclops left and right image pairs from 20 runs is used to calculate the 3D locations of the four reference points. There is no Digiclops or calibration box motion between runs. The procedure described in Subsection 3.3.1 is used to find the best match for the feature points in the left and right images. The 3D locations of the feature points with respect to $C_D$ are calculated using Equations (3.11a) and (3.11b). Since the calibration box does not move between these 20 runs, the mean of the 3D feature locations is used to calculate $C_E$ relative to $C_D$. All 3D locations of feature points are fitted to a plane using least squares minimization. Next, three of the four points are projected onto this plane to create $C_E$. The method described in Subsection 3.3.1 is used to find the u, v, and w directions for $T_E^D$. The transformation matrix from the calibration box reference points to the Digiclops camera system, $T_E^D$, is described,

$T_E^D = \begin{bmatrix} u_x & v_x & w_x & x_0 \\ u_y & v_y & w_y & y_0 \\ u_z & v_z & w_z & z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$ .  (4.5)

$T_E^H$ is determined based on the measurements of the calibration box and the locations of the reference points with respect to $C_H$. The locations are originally known based on the CNC (computer numerical control) milling machine data used to create the calibration box and are verified using a vernier caliper. The locations of the reference features are measured within $C_H$ prior to filling and submerging the box in the water bath. Once the locations of the reference points are known, the method described in Subsection 3.3.1 is used to calculate $T_E^H$. The transformation $T_E^H$ is a necessary component of our calibration system because the locations of the N-shaped wires are known relative to the corner of the calibration box (where $C_H$ is situated) instead of relative to $C_E$. The location of the N-shaped wires is not easily determined relative to $C_E$ because that coordinate system is not square with the calibration box. After finding $T_E^H$, Equation (4.4) is used and $T_H^D$ is calculated.

4.3.3 Calibration Box and Ultrasound Data Points ($P_H$, $P_U$)

This subsection describes the method used to collect corresponding points from the calibration box, $P_H$, and the ultrasound image, $P_U$. When the ultrasound probe is placed over an N-shaped wire, points along the line where the ultrasound image and the N-shaped wire intersect are recorded. The N-shaped wire appears in the ultrasound image as three bright spots. These three bright spots are images of three points on the N-shaped wire. These points therefore have a location relative to the calibration box, $P_H$, as well as one relative to the ultrasound image, $P_U$. The method used in this subsection is different from those previously discussed in this section since a transformation matrix is not calculated directly from the 3D locations of the feature points. The ultrasound probe is moved to various locations while imaging the N-shaped wires inside the calibration box. Once the points $P_H$ and $P_U$ are found for each probe location, the set of data is used to solve the set of transformations in Subsection 4.3.4. The coordinate systems for the ultrasound image, $C_U$, and the calibration box, $C_H$, are shown in Figure 4.13. As mentioned, the calibration box is filled with water and submerged, with the exception of the top rim of the box. The tip of the ultrasound probe is placed in the water bath and the N-shaped wires are imaged.
Knowing the actual distance between the parallel sides of the N-shaped wires, the locations of the points P_H along the N-shaped wire are calculated geometrically. This calculation is possible because the distance between each of the parallel wires and the diagonal wire changes as the probe moves from the top to the bottom of the 'N'. The wires appear in the ultrasound image as bright spots, and from this data the probe location is determined.

Figure 4.13: Ultrasound Probe and Calibration Box Coordinate Systems

Seen from the top of the calibration box, there are three N shapes with parallel side wires; Figure 4.14 shows the calibration box from the top. Each of the N-shaped wires has a different width and a different x-location and z-location with respect to C_H. This variability allows the system to collect points, and estimate their locations, from a large variety of positions within the calibration box. Depending on the field of view of the probe, different N-wires within the box can be used to collect calibration data; the depth setting of the probe determines which layer of N-shaped wires is used. Figure 4.14 also shows a drawing of the ultrasound plane as it intersects an N-shaped wire. The line P_{H,E}P_{H,Z} is the line that the ultrasound image creates as it intersects the N-shaped wire. The vertices of the N-shaped wires are named P_{H,B1}, P_{H,C1}; P_{H,B2}, P_{H,C2}; and P_{H,B3}, P_{H,C3} for the first, second, and third N-shaped fiducials, respectively. The locations of these points are known from the construction of the calibration box. A sample image recorded by the ultrasound probe is shown in Figure 4.15; it shows the bright spots created where the three wires intersect the ultrasound plane. These bright spots, P_{U,Z}, P_{U,K}, and P_{U,E}, correspond to the points P_{H,Z}, P_{H,K}, and P_{H,E}. The user is asked to pick the centre of each of the three bright spots from the image; these pixel locations are chosen with respect to C_U. The points chosen by the user are then fitted to a straight line, since all three points lie along the line where the ultrasound plane intersects the N-shaped wire. As shown in Figure 4.15, the lengths ||P_{U,K}P_{U,E}|| and ||P_{U,E}P_{U,Z}|| are measured directly from the ultrasound image using the three sections of the N-shaped wire. These lengths are then converted from pixels to millimetres using the image scale factors. The x- and y-scale factors, S_x and S_y, with units of mm/pixel, are calculated during the minimization described in Subsection 4.3.4. Although the scale factors could have been read directly from the manufacturer's settings on the ultrasound machine, those values are not sufficiently accurate, since they assume an acoustic velocity of 1540 m/s in all materials. Once the lengths ||P_{U,K}P_{U,E}|| and ||P_{U,E}P_{U,Z}|| are known in millimetres, the location of the point P_{H,K} with respect to C_H is calculated using the properties of similar triangles.
The lengths ||P_{H,K}P_{H,E}|| and ||P_{H,E}P_{H,Z}|| are equal to ||P_{U,K}P_{U,E}|| and ||P_{U,E}P_{U,Z}||, since the points P_H and P_U are the same physical points represented in two different coordinate systems.

Figure 4.14: Top View of the Calibration Box. The reference coordinate system, C_E, is rotated around the y-direction and is therefore not aligned with the sides of the box. The reference points are placed on the top of the calibration box so that each reference point sits on a different wall of the box; because C_E is calculated from these reference points, it is not aligned with the box.

Figure 4.15: Sample Ultrasound Image Showing Three Wires from the Calibration Box. The strong diagonal line visible in the ultrasound image is simply a reflection from the sides or bottom of the calibration box and is not used during the calibration process.

From the geometry shown in Figure 4.16(a), \triangle P_{H,K}P_{H,E}P_{H,Q} \sim \triangle P_{H,E}P_{H,Z}P_{H,R}. The ratio between these two similar triangles is

    a = \frac{\|P_{H,K}P_{H,E}\|}{\|P_{H,E}P_{H,Z}\|} ,    (4.6)

where the lengths ||P_{H,K}P_{H,E}|| and ||P_{H,E}P_{H,Z}|| are known from the ultrasound image. The points P_{H,K}(P_{H,K_x}, P_{H,K_y}), P_{H,C}(P_{H,C_x}, P_{H,C_y}), and P_{H,B}(P_{H,B_x}, P_{H,B_y}), where P_{H,K_x} and P_{H,K_y} are the x- and y-components of the point P_{H,K} (and similarly for P_{H,C} and P_{H,B}), are shown in Figure 4.16. From the geometry, the length ||P_{H,Z}P_{H,R}|| is equal to P_{H,C_x} - P_{H,B_x} and corresponds to ||P_{H,Q}P_{H,E}||, which is equal to P_{H,K_x} - P_{H,B_x}. Using the ratio between the triangles, the x-component of P_{H,K} is found relative to C_H as

    P_{H,K_x} = P_{H,B_x} + a (P_{H,C_x} - P_{H,B_x}) .    (4.7)

Figure 4.16: Geometry for Finding the Location of the Point P_{H,K}(x, y), where x = P_{H,K_x} and y = P_{H,K_y}. (a) X-Component of P_{H,K}. (b) Y-Component of P_{H,K}.

Next, the value of P_{H,K_y} is calculated. From the geometry shown in Figure 4.16(b), \triangle P_{H,K}P_{H,S}P_{H,B} \sim \triangle P_{H,C}P_{H,T}P_{H,B}. From these similar triangles,

    a = \frac{\|P_{H,B}P_{H,S}\|}{\|P_{H,B}P_{H,T}\|} .    (4.8)

From the geometry, the length ||P_{H,B}P_{H,S}|| is equal to P_{H,K_y} - P_{H,B_y} and corresponds to ||P_{H,B}P_{H,T}||, which is equal to P_{H,C_y} - P_{H,B_y}. Using the ratio between the triangles, the y-component of P_{H,K} is found relative to C_H as

    P_{H,K_y} = P_{H,B_y} + a (P_{H,C_y} - P_{H,B_y}) .    (4.9)

Based on the geometry of the N-shaped wires, the points P_{U,E}, P_{U,K}, and P_{U,Z} in the ultrasound image therefore provide sufficient information for determining the location of the point P_{H,K} relative to the N-shaped wire.
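This geometric step reduces to one line of arithmetic once the two image segment lengths have been converted to millimetres. A minimal sketch, with illustrative argument names:

    import numpy as np

    def n_wire_point(len_KE_mm, len_EZ_mm, P_B, P_C):
        """Equations (4.6)-(4.9): locate the diagonal-wire crossing
        P_HK in box coordinates. len_KE_mm and len_EZ_mm are the
        segment lengths measured in the ultrasound image (in mm);
        P_B and P_C are the known (x, y) vertices of the N-shaped
        wire in C_H."""
        a = len_KE_mm / len_EZ_mm            # ratio of similar triangles
        P_B, P_C = np.asarray(P_B, float), np.asarray(P_C, float)
        return P_B + a * (P_C - P_B)         # applies (4.7) and (4.9)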
4.3.4 Ultrasound Image to Flat Plate Transformation (T_U^P)

The purpose of the calibration is to determine the transformation from the ultrasound image to the flat plate, T_U^P. The necessary matrices, data points, and equations have now all been calculated, and this information is combined in order to solve for the desired calibration matrix. Figure 4.17 shows the relationship between C_U and C_P.

Figure 4.17: Flat Plate and Ultrasound Probe Coordinate Systems

Three rotations and three translations transform points in the ultrasound image into the flat plate coordinate system: first, a rotation of \alpha about x_P; then a rotation of \phi about y_P; then a rotation of \xi about z_P; and finally, translations in the x_P-, y_P-, and z_P-directions by d, e, and f. Since the rotations and translations are performed relative to the current reference frame, the matrices are multiplied as follows, producing the transformation matrix to be solved:

    T_U^P = Rot_{x,\alpha} \cdot Rot_{y,\phi} \cdot Rot_{z,\xi} \cdot Trans_{x,d} \cdot Trans_{y,e} \cdot Trans_{z,f} = \begin{bmatrix} T^U_{P,11} & T^U_{P,12} & T^U_{P,13} & T^U_{P,14} \\ T^U_{P,21} & T^U_{P,22} & T^U_{P,23} & T^U_{P,24} \\ T^U_{P,31} & T^U_{P,32} & T^U_{P,33} & T^U_{P,34} \\ 0 & 0 & 0 & 1 \end{bmatrix} ,    (4.10)

where

    T^U_{P,11} = \cos\phi\cos\xi
    T^U_{P,12} = -\cos\phi\sin\xi
    T^U_{P,13} = \sin\phi
    T^U_{P,14} = d\cos\phi\cos\xi - e\sin\xi\cos\phi + f\sin\phi
    T^U_{P,21} = \sin\alpha\sin\phi\cos\xi + \cos\alpha\sin\xi
    T^U_{P,22} = \cos\alpha\cos\xi - \sin\xi\sin\alpha\sin\phi
    T^U_{P,23} = -\sin\alpha\cos\phi
    T^U_{P,24} = d(\sin\alpha\sin\phi\cos\xi + \cos\alpha\sin\xi) + e(\cos\alpha\cos\xi - \sin\xi\sin\alpha\sin\phi) - f\sin\alpha\cos\phi
    T^U_{P,31} = \sin\alpha\sin\xi - \cos\alpha\sin\phi\cos\xi
    T^U_{P,32} = \sin\xi\cos\alpha\sin\phi + \sin\alpha\cos\xi
    T^U_{P,33} = \cos\alpha\cos\phi
    T^U_{P,34} = d(\sin\alpha\sin\xi - \cos\alpha\sin\phi\cos\xi) + e(\sin\xi\cos\alpha\sin\phi + \sin\alpha\cos\xi) + f\cos\alpha\cos\phi .

In total, the ultrasound probe is moved to 120 different locations within the calibration box. While acquiring data from these 120 locations, the Digiclops camera is moved to four different locations around the calibration box, providing a variety of viewpoints of the scene. For each of the four camera locations, a transformation T_H^D is calculated; for each of the 120 probe locations, the data points P_U and the transformation T_P^D are calculated. The relationship between the points P_U and P_H is

    P_U = T_P^U T_D^P T_H^D P_H = (T_U^P)^{-1} (T_P^D)^{-1} T_H^D P_H .    (4.11)

Rearranged, Equation (4.11) gives the three component equations

    P_{H_x} = T^U_{P,11} P_{U_x} + T^U_{P,12} P_{U_y} + T^U_{P,13} P_{U_z} + T^U_{P,14}    (4.12a)
    P_{H_y} = T^U_{P,21} P_{U_x} + T^U_{P,22} P_{U_y} + T^U_{P,23} P_{U_z} + T^U_{P,24}    (4.12b)
    P_{H_z} = T^U_{P,31} P_{U_x} + T^U_{P,32} P_{U_y} + T^U_{P,33} P_{U_z} + T^U_{P,34} .    (4.12c)

Substituting all of the collected data points into Equations (4.12a)-(4.12c) overconstrains the three equations, and the unknown parameters in T_U^P, together with the scale factors S_x and S_y, are solved by minimizing Equations (4.13a)-(4.13c) with a nonlinear least squares solver:

    0 = T^U_{P,11} P_{U_x} + T^U_{P,12} P_{U_y} + T^U_{P,13} P_{U_z} + T^U_{P,14} - P_{H_x}    (4.13a)
    0 = T^U_{P,21} P_{U_x} + T^U_{P,22} P_{U_y} + T^U_{P,23} P_{U_z} + T^U_{P,24} - P_{H_y}    (4.13b)
    0 = T^U_{P,31} P_{U_x} + T^U_{P,32} P_{U_y} + T^U_{P,33} P_{U_z} + T^U_{P,34} - P_{H_z} .    (4.13c)

The solution to the minimization is

    \alpha = 72.4°, \phi = 164.1°, \xi = 11.1°, d = -152.8 mm, e = 349.5 mm, f = 112.1 mm, S_x = 0.0717 mm/pixel, S_y = 0.0704 mm/pixel .    (4.14)

When substituted into Equation (4.10), the final calibration matrix is

    T_U^P = \begin{bmatrix} -0.9438 & 0.1844 & 0.2744 & 239.4260 \\ 0.3147 & 0.2469 & 0.9165 & 140.9657 \\ 0.1013 & 0.9513 & -0.2910 & 284.3761 \\ 0 & 0 & 0 & 1 \end{bmatrix} .    (4.15)

The transformation T_U^P links the information in the ultrasound image to the probe. Since the probe location can be tracked using the Digiclops, the location of the ultrasound images with respect to the Digiclops can also be calculated, and we have a freehand ultrasound tracking system. Of course, a difficulty with such a system arises when the patient moves: because the image is known only relative to the fixed coordinate system of the tracker, patient movement is not recorded.
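The minimization of Equations (4.13a)-(4.13c) can be sketched with a generic nonlinear least squares solver. In this sketch the wire points are assumed to have been premapped into plate coordinates through (T_P^D)^{-1} T_H^D, and the pixel picks are scaled inside the residual so that S_x and S_y are optimized jointly with the pose; the solver shown and all names are illustrative, not the implementation used in this work.

    import numpy as np
    from scipy.optimize import least_squares

    def pose_matrix(alpha, phi, xi, d, e, f):
        """Compose Equation (4.10): Rot_x Rot_y Rot_z followed by a
        translation (d, e, f) expressed in the rotated frame."""
        ca, sa = np.cos(alpha), np.sin(alpha)
        cp, sp = np.cos(phi), np.sin(phi)
        cx, sx = np.cos(xi), np.sin(xi)
        Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cx, -sx, 0], [sx, cx, 0], [0, 0, 1]])
        R = Rx @ Ry @ Rz
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = R @ np.array([d, e, f])   # translation in rotated frame
        return T

    def residuals(params, pix_uv, P_H_plate):
        """Equations (4.13a)-(4.13c) stacked over all probe poses:
        pix_uv is (N, 2) user-picked pixel coordinates, P_H_plate is
        (N, 3) wire points in plate coordinates."""
        alpha, phi, xi, d, e, f, Sx, Sy = params
        T = pose_matrix(alpha, phi, xi, d, e, f)
        n = len(pix_uv)
        # Scale pixels to mm; the image plane is z = 0 in C_U.
        P_U = np.column_stack([pix_uv[:, 0] * Sx, pix_uv[:, 1] * Sy,
                               np.zeros(n), np.ones(n)])
        pred = (T @ P_U.T).T[:, :3]
        return (pred - P_H_plate).ravel()

    # x0 = np.array([1.3, 2.9, 0.2, -150.0, 350.0, 110.0, 0.07, 0.07])
    # fit = least_squares(residuals, x0, args=(pix_uv, P_H_plate))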
The purpose of our tracking system is to improve on this fixed-reference freehand tracking by measuring the patient movement together with the probe movement, both relative to the Digiclops coordinate system. Using this movement information, it is possible to calculate the location of the ultrasound image relative to the patient instead of relative to an arbitrary fixed coordinate system.

4.4 Method used to Test the Complete System

An experiment is conducted in order to assess the consistency of the data acquired using the tracking system. This section describes the method used to test this consistency based on the information from the ultrasound images; the test set-up is shown in Figure 4.18. As described in Subsection 4.2.2, two N-shaped fiducials are embedded in an artificial skin that is placed on the surface of the phantoms. Following the probe calibration method of Section 4.3, each of the metal fiducials embedded in the artificial skin is created in the shape of an 'N'. As shown in Figure 4.19, these N-shaped fiducials are used to create an imaged fiducial coordinate system, C_M, and a reference fiducial coordinate system, C_F. For the consistency test described in this section, the locations of the N-shaped fiducials are fixed relative to each other. The calibrated ultrasound probe is included in the system and can be tracked using the attached flat plate. The probe is positioned over the imaged fiducial during the acquisition of ultrasound images. Throughout this test, only a light force is exerted on the N-shaped fiducials, ensuring that there is no deformation of the phantom. The Digiclops camera system is also included in the set-up, as it is used to view the locations of the N-shaped fiducials and the flat plate. When the ultrasound probe is placed over the imaged fiducial, points along the line where the ultrasound image and the imaged fiducial intersect are recorded. The imaged fiducial appears in the ultrasound image as three bright spots, which are images of the three points on the imaged fiducial. These points therefore have a location relative to the imaged fiducial, P_M, as well as one relative to the ultrasound image, P_U.

Figure 4.18: Transformations used to Verify the Consistency of the Tracking System. T_P^D: flat plate with respect to the Digiclops; T_U^P: ultrasound image with respect to the flat plate; T_M^F: imaged fiducial with respect to the reference fiducial; T_F^D: reference fiducial with respect to the Digiclops; T_M^U: imaged fiducial with respect to the ultrasound image.

The transformation T_M^U from the imaged fiducial coordinate system, C_M, to the ultrasound coordinate system, C_U, is not calculated in this section; it is shown in Figure 4.18 only to illustrate the relationship between the points P_M and P_U. The goal of this section is to calculate the locations of these three points from both coordinate systems relative to a common coordinate system, using our tracking system as well as the calibrated ultrasound probe. If all measurements and transformations were perfect, the transformed points would coincide when expressed in the same coordinate system. In order to calculate the points P_M and P_U relative to a common coordinate system, two sets of transformations are created. From Figure 4.18, these two sets of transformations can be visualized as two paths within the test set-up.
In order to investigate the consistency of the results, the locations of the points along the intersection of the ultrasound image and the imaged fiducial are calculated with respect to the Digiclops coordinate system, C_D. The two sets of transformations are used to transform these points from the imaged fiducial coordinate system, C_M, and from the ultrasound image coordinate system, C_U,

    P_{D,I} = T_F^D T_M^F P_M ;  P_{D,II} = T_P^D T_U^P P_U ,    (4.16)

and the difference between the points P_{D,I} and P_{D,II} is calculated. The methods used to calculate P_U and P_M are described in Subsection 4.4.1; these corresponding points provide the starting points for the two paths. In the first path, the transformation T_M^F from C_M to C_F is found, as described in Subsection 4.4.2, using the Digiclops and the fixed relative position of the two N-shaped fiducials during this experiment. The Digiclops is also used to find the transformation T_F^D between the reference fiducial coordinate system, C_F, and C_D, as described in Subsection 4.4.3, and the points are transformed into the Digiclops coordinate system. In the second path, the calibrated probe provides the transformation between the ultrasound image and the plate, T_U^P; the transformation from the plate attached to the probe to the Digiclops, T_P^D, described in Subsection 4.4.4, then transforms the points into the Digiclops coordinate system. These two sets of transformed points can then be compared, as both are expressed relative to the Digiclops. The following subsections describe the transformations used to calculate P_{D,I} and P_{D,II}; the results of applying the algorithms discussed in this section are described in Section 4.5.

4.4.1 Ultrasound and Imaged Fiducial Data Points (P_U, P_M)

The points in the ultrasound image, P_U, and those on the imaged fiducial, P_M, are both discussed in this subsection, as they are closely linked. Figure 4.19 shows the x-, y-, and z-directions of the coordinate systems C_U and C_M.

Figure 4.19: Ultrasound Probe and Imaged Fiducial Coordinate Systems

We begin by examining the images created using the ultrasound probe. Figure 4.20 shows an example image recorded during the experiment. Three bright spots appear in this image, each created by the imaged fiducial. A similar concept for determining the ultrasound image location from these marks was used in Section 4.3, where nylon wires were used to calibrate the probe. As during the probe calibration, the ultrasound image containing the imaged fiducial is displayed for the user, who is asked to pick the centre of each of the three bright spots as well as points describing the width of each bright spot. A best-fit line is then fitted to these six points, the points are projected onto this line, and the width of each portion of the imaged fiducial, as seen in the ultrasound image, is calculated. The centres of the bright spots, chosen in C_U, are recorded and saved as P_U; they are named P_{U,Z}, P_{U,K}, and P_{U,E}.

Figure 4.20: Sample Ultrasound Image Showing Three Components of the Imaged Fiducial

The distances ||P_{U,E}P_{U,K}|| and ||P_{U,E}P_{U,Z}|| are depicted in Figure 4.20 for one of the test samples.
After converting these distances from pixels to millimetres, the distances ||P_{U,E}P_{U,K}|| and ||P_{U,E}P_{U,Z}|| are equal to ||P_{M,E}P_{M,K}|| and ||P_{M,E}P_{M,Z}||, respectively. Based on the distances between the bright spots, the location of P_{M,K} is calculated relative to C_M. The equations used to calculate the location of the middle point, P_{M,K},

    P_{M,K_x} = P_{M,B_x} + a (P_{M,C_x} - P_{M,B_x})    (4.17b)
    P_{M,K_y} = P_{M,B_y} + a (P_{M,C_y} - P_{M,B_y}) ,    (4.17c)

are the same as those described in Subsection 4.3.3, with the ratio a defined as in Equation (4.6). These equations provide the location of the intersection between the probe and the diagonal portion of the imaged fiducial with respect to C_M. This point is named P_{M,K}(P_{M,K_x}, P_{M,K_y}, P_{M,K_z}).

Figure 4.21: Two Possible Solutions for Defining the Angle of the Ultrasound Image Relative to the N-Shaped Fiducial. (a) Case I: Positive Angle. (b) Case II: Negative Angle. The origin of the coordinate system C_M is in the top left corner of the N-shaped fiducial shown in this figure.

The next step requires finding the locations of the two outer sides of the imaged fiducial with respect to C_M. Using only the location of the centre position of the N-shaped fiducial and the distances between the bright spots in C_U, there are two possible solutions for the locations of the points P_{M,E} and P_{M,Z}, shown in Figure 4.21. The ultrasound information taken from the imaged fiducial, however, provides more information about the location of P_M than was available when nylon wires were used: in addition to the locations of the three spots in the image, the width of each spot is also useful, and with it one of the two solutions in Figure 4.21 can be selected. The width of each outer edge of the imaged fiducial, measured horizontally, is h_left = h_right = 1.24 mm. Since the middle portion is soldered at a 45° angle, the width of the middle component measured horizontally is h_cen = \sqrt{2 h_left^2} = \sqrt{2 h_right^2}. A ratio between the horizontal centre width and the outer width is calculated,

    h_ratio = h_cen / h_left = \sqrt{2 h_left^2} / h_left = \sqrt{2} .    (4.18)

For each probe location during the test, the width of each of the three bright spots is calculated based on the points chosen by the user, and a ratio between the centre width and the average of the left and right widths is calculated for each test run,

    w_ratio = \frac{w_cen}{(w_left + w_right)/2} ,    (4.19)

where w_cen is the width of the centre bright spot and w_left and w_right are the widths of the left and right bright spots. If the ratio for the tested ultrasound location, w_ratio, is smaller than the ratio in the horizontal orientation, h_ratio, then the angle of the probe is positive; otherwise it is negative. Figure 4.21 shows the positive and negative possibilities for the probe location. Once the angle of the ultrasound image relative to the N-shaped fiducial and the distances between the fiducial centres are known, the points relative to C_M are calculated. The angle \zeta between the intersection line and the x_M-axis follows from the known fiducial width, and the locations of the two side points in C_M are calculated for both the positive and negative angles:

    \zeta = \arccos\left( \frac{P_{M,C_x} - P_{M,B_x}}{\|P_{M,E}P_{M,Z}\|} \right)    (4.20a)
    P_{M,Z_x} = P_{M,C_x} - P_{M,B_x}    (4.20b)
    P_{M,E_x} = 0    (4.20c)
    Case I:  P_{M,Z_y} = P_{M,K_y} - \|P_{M,K}P_{M,Z}\| \sin\zeta    (4.20d)
             P_{M,E_y} = P_{M,K_y} + \|P_{M,K}P_{M,E}\| \sin\zeta    (4.20e)
    Case II: P_{M,Z_y} = P_{M,K_y} + \|P_{M,K}P_{M,Z}\| \sin\zeta    (4.20f)
             P_{M,E_y} = P_{M,K_y} - \|P_{M,K}P_{M,E}\| \sin\zeta .    (4.20g)

The angle \zeta is shown in Figure 4.21 for each of the two cases, and the point P_{M,K}(P_{M,K_x}, P_{M,K_y}, P_{M,K_z}) was calculated previously in Equation (4.17). The points P_U, comprising P_{U,E}, P_{U,K}, and P_{U,Z}, and P_M, comprising P_{M,E}, P_{M,K}, and P_{M,Z}, are calculated for each of the probe poses in this experiment.
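The sign test of Equations (4.18)-(4.20) is compact enough to sketch directly. A minimal illustration, assuming segment lengths already scaled to millimetres; the argument names are illustrative:

    import numpy as np

    H_RATIO = np.sqrt(2.0)   # Equation (4.18) for the 45-degree centre wire

    def side_points(P_B, P_C, P_MK, len_KZ, len_KE, w_left, w_cen, w_right):
        """Resolve the Case I / Case II ambiguity of Figure 4.21 and
        return P_MZ and P_ME in C_M (Equations 4.19-4.20). P_MK lies
        between P_ME and P_MZ, so ||P_ME P_MZ|| = len_KE + len_KZ."""
        width = P_C[0] - P_B[0]                  # fiducial width along x_M
        zeta = np.arccos(np.clip(width / (len_KE + len_KZ), -1.0, 1.0))
        w_ratio = w_cen / (0.5 * (w_left + w_right))   # Equation (4.19)
        sign = -1.0 if w_ratio < H_RATIO else 1.0      # Case I vs Case II
        P_MZ = np.array([width, P_MK[1] + sign * len_KZ * np.sin(zeta)])
        P_ME = np.array([0.0,   P_MK[1] - sign * len_KE * np.sin(zeta)])
        return P_MZ, P_ME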
4.4.2 Imaged Fiducial to Reference Fiducial Transformation (T_M^F)

After finding the locations of the points P_{M,E}, P_{M,K}, and P_{M,Z}, it is necessary to transform this data into the reference fiducial coordinate system, C_F. The reference fiducial is placed on a section of the phantom that remains visible to the Digiclops throughout the test. This reference fiducial is necessary because, during the acquisition of the ultrasound images, the imaged fiducial under the probe is hidden from the Digiclops camera. Before the ultrasound probe is introduced into the test, the Digiclops camera is used to record a pair of stereo images containing both the imaged fiducial and the reference fiducial. During the experiment described in this section, there is no displacement between these two N-shaped fiducials, which makes it possible to calculate a single transformation, T_M^F, describing the relationship between the two coordinate systems, and hence to track the location of the imaged fiducial using the Digiclops. (The motion between the two N-shaped fiducials is constrained during this experiment because it serves as a consistency test of our tracking system; this constraint would not be enforced if the system were used to track a real patient's motion.) The x-, y-, and z-components of the coordinate systems for both N-shaped fiducials are shown in Figure 4.22.

Figure 4.22: Imaged Fiducial and Reference Fiducial Coordinate Systems

Figure 4.23: Digiclops Images of the Phantom Recorded During the Consistency Experiment. (a) Left Image. (b) Right Image.

The necessary relationship between C_M and C_F is calculated from a set of Digiclops images taken before the ultrasound probe is introduced into the scene; the left and right images are displayed in Figure 4.23. The user is asked to choose the four corners of the imaged and reference fiducials. As described in Subsection 3.3.1, normalized cross-correlation is used to match the pixels from the left and right images to subpixel precision. The process of picking the corners and finding the subpixel matches is repeated 30 times in order to account for variation in the measurements, and a final result is derived by finding the least squares minimization of all of the results. Next, the 3D locations of the points in both C_M and C_F are found. The four corner points of each N-shaped fiducial are fitted to a best-fit plane. The y-direction of each coordinate system is the unit vector from the top left corner to the bottom left corner of the N-shaped fiducial; the normal of the fitted plane is used as the z-direction; and the x-direction is the cross product of the y- and z-directions. Using these direction vectors, T_F^D and T_M^D are calculated. For more details about each of these steps, refer to Subsection 3.3.1, where the same technique is used to find the transformation from the plate to the Digiclops. Finally, the transformation T_M^F is calculated,

    T_M^F = T_D^F T_M^D = (T_F^D)^{-1} T_M^D .    (4.21)
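Given the two frames measured by the Digiclops, Equation (4.21) is a single composition. A minimal sketch (a general matrix inverse is used for brevity, although the rigid-specific inverse shown earlier would also serve):

    import numpy as np

    def imaged_to_reference(T_F_D, T_M_D):
        """Equation (4.21): express the imaged fiducial frame in
        reference fiducial coordinates, T_M^F = (T_F^D)^(-1) T_M^D.
        Both inputs are 4x4 frames built from the fiducial corners."""
        return np.linalg.inv(T_F_D) @ T_M_D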
4.4.3 Reference Fiducial to Digiclops Transformation (T_F^D)

This subsection gives a brief overview of the technique used to calculate the transformation between C_F and C_D, T_F^D. A visual description of the coordinate systems is shown in Figure 4.24. As in Subsection 4.4.2, the four corners of the reference fiducial are chosen by the user from the left and right Digiclops images, a plane is fitted to the points, and a coordinate system is calculated from the 3D points. The technique differs from Subsection 4.4.2 in that the images used here are acquired during the test instead of before it. An example of one of the images used for these calculations is shown in Figure 4.25; notice that the imaged fiducial is occluded from the view of the Digiclops while the reference fiducial is still visible. Since Digiclops data is collected each time the ultrasound probe acquires an image, rigid phantom movement between runs is possible and does not affect our results. Because of this possible movement, the least squares results from all of the runs cannot simply be averaged together; instead, the user is shown the same pair of images multiple times and the calculated results are minimized using a least squares approach. The end result is a unique transformation matrix from C_F to C_D for each test run. Since T_F^D is calculated individually for each run, the camera and the phantom are able to move between test runs.

Figure 4.24: Digiclops and Reference Fiducial Coordinate Systems

4.4.4 Flat Plate to Digiclops Transformation (T_P^D)

The flat plate to Digiclops transformation, T_P^D, a component of the second path of the experimental set-up, is discussed in this subsection. After T_U^P was found in Subsection 4.3.4, the plate and the ultrasound probe remained rigidly attached. Because there is no relative movement between these two components, the second path is completed by finding T_P^D and knowing T_U^P. As described in Subsection 4.3.1, the user first chooses the 10 printed crosses on the plate's surface in both the left and right images. The points are cross-correlated and the best-fit plane is found; two points and the normal to the plane are then used to define C_P, and the direction vectors are used to form T_P^D. T_P^D is calculated for each run of the experiment, and the plate is visible to the Digiclops throughout.

Figure 4.25: Digiclops Images of the Phantom, Probe, and Flat Plate Recorded During the Consistency Experiment. (a) Left Image. (b) Right Image.

4.5 Accuracy of the Ultrasound Consistency Tests

In this section, the results of the consistency test performed with the method described in Section 4.4 are presented. The section compares the locations of the imaged fiducial obtained from the ultrasound images with those obtained from the Digiclops images.

4.5.1 Consistency Test Results

Using the male torso phantom and the female pregnant torso phantom, data was collected for different probe to imaged fiducial set-ups.
For each phantom, 20 runs of data were collected using the set-up shown in Figure 4.6. The T_P^D and T_F^D calculated for each test run, together with the T_M^F calculated for each phantom, the corresponding points P_U (comprising P_{U,E}, P_{U,K}, and P_{U,Z}) and P_M (comprising P_{M,E}, P_{M,K}, and P_{M,Z}) calculated for each test run, and the T_U^P calculated once for the entire experiment, are used to find

    P_{D,I} = T_F^D T_M^F P_M ,  P_{D,II} = T_P^D T_U^P P_U .    (4.22)

The distance between P_{D,I} and P_{D,II} is shown in Table 4.4. The results in this table are calculated using the test data from both the male and female phantom torso experiments. They show that the mean of the distances between the two sets of points is larger than the standard deviation of the point cloud; there is therefore a bias between the results obtained using the two sets of transformations.

Table 4.4: Distance Between the Points P_{D,I}, Calculated Using the Digiclops, and P_{D,II}, Calculated Using the Calibrated Probe

                               x_D      y_D      z_D
    Mean [mm]                 -6.7      1.2      1.6
    Maximum [mm]              -2.2     14.7      7.8
    Minimum [mm]             -12.5    -10.2     -6.9
    Standard Deviation [mm]    2.0      4.5      2.5
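For each run, Equation (4.22) and the Table 4.4 summary reduce to a few array operations. A minimal sketch, assuming the transformations and point sets have been computed as described above (all names illustrative):

    import numpy as np

    def to_h(p):
        """Homogeneous coordinates so 4x4 frames act on (N, 3) points."""
        p = np.atleast_2d(p)
        return np.hstack([p, np.ones((len(p), 1))])

    def run_difference(T_F_D, T_M_F, P_M, T_P_D, T_U_P, P_U):
        """Equation (4.22): transform both point sets into C_D and
        summarize their difference in the style of Table 4.4."""
        P_D_I = (T_F_D @ T_M_F @ to_h(P_M).T).T[:, :3]
        P_D_II = (T_P_D @ T_U_P @ to_h(P_U).T).T[:, :3]
        d = P_D_I - P_D_II
        return {"mean": d.mean(axis=0), "max": d.max(axis=0),
                "min": d.min(axis=0), "std": d.std(axis=0, ddof=1)}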
4.6 Consistency Test Error Analysis

The results presented in Subsection 4.5.1 contain errors from a variety of sources: variation in the manual selection of the bright spots in the ultrasound image, feature selection from the stereo images, and probe calibration errors. In this section, these errors are quantified. Each method used to calculate a point, or a transformation of points, during the probe calibration of Section 4.3 and the experiment of Section 4.4 is analyzed.

4.6.1 Information Obtained from the Ultrasound Images

The two sets of transformations used in Section 4.4 to find the location of the sides of the imaged fiducial with respect to the Digiclops begin with the points describing the edges of the imaged fiducial within the ultrasound image, P_U, and the corresponding points within the imaged fiducial coordinate system, P_M. Both sets of points require that the user first choose the correct centre of each bright spot displayed in the ultrasound image. An experiment was conducted in order to quantify the variability of these selections. Four ultrasound images, each containing three bright spots, were selected from the data acquired in Subsection 4.4.1, and the centre of each bright spot was chosen 10 times by the user, creating a cloud of points for each bright spot centre. The four images are shown in Figure 4.26, together with magnified views of the areas containing the clouds of points. Each cloud of points is represented by an ellipse whose semi-major and semi-minor axes have a magnitude of one standard deviation, in pixels, in the x- and y-directions. The spread of these points, in millimetres, is presented in Table 4.5.

Table 4.5: Spread of the Points P_U Chosen as the Centre of the Bright Spots

                               x_U        y_U        Total
    Standard Deviation [mm]    0.0566     0.0723     0.0918
    Variance [mm^2]            0.00320    0.00523    0.00613

In addition to the centre of each bright spot, the width of each bright spot is required in order to calculate the points P_M. As described in Subsection 4.4.1, without the widths there is an ambiguity as to whether the probe is angled in the positive or the negative direction on the imaged fiducial. To quantify the reliability of this decision, 20 ultrasound images with bright spots were displayed for the user from the data acquired in Subsection 4.4.1, and the width of each spot was selected manually 10 times. The percentage of times that one angle (either positive or negative) was selected is recorded for each of the 20 images. The results are displayed in Figure 4.27 and summarized in Table 4.6. The chances of consistently calculating one angle are lowest when the ratios h_ratio and w_ratio from Subsection 4.4.1 are close, and increase as the difference between the two ratios increases.

Table 4.6: Probability that Either a Positive or Negative Angle of the Ultrasound Image Relative to the N-Shaped Fiducial is Chosen, Based on the Width of Each Bright Spot

                              Number of Times One Angle is Chosen [%]
    Maximum                   100
    Minimum                   60
    Mean                      78
    Standard Deviation        11.96

Figure 4.26: Variability of Multiple Selections of Bright Spot Centres. Magnified views of the areas containing the clouds of points are included; each cloud is represented by an ellipse whose semi-major and semi-minor axes have a magnitude of one standard deviation in pixels in the x- and y-directions.

Figure 4.27: Probability that Either a Positive or Negative Angle of the Ultrasound Image Relative to the N-Shaped Fiducial is Chosen, Based on the Width of Each Bright Spot

4.6.2 Information Obtained with the Digiclops

Errors may also be introduced into the experiment of Section 4.4 through the use of the Digiclops. The 3D locations of the corners of each N-shaped fiducial rely on information input manually by the user. As described in Subsection 4.4.3, the user is shown the left and right Digiclops images of the N-shaped fiducial and is asked to pick the corners of the fiducial. Each component of the N-shaped fiducials has a thickness that appears a few pixels wide in the image, so the user must estimate the corners from the image presented. To measure the variability of this selection, a pair of Digiclops images was presented to the user, the four corners of the reference fiducial were manually chosen, and the 3D locations of these corners were calculated using the method of Subsection 4.3.1. The spread of these results is presented in Table 4.7.

Table 4.7: Variability of Manually Choosing Corners of the Reference Fiducial from the Digiclops Images

                               x_D    y_D    z_D    Total
    Standard Deviation [mm]    0.4    0.3    2.4    2.4
    Variance [mm^2]            0.2    0.1    5.7    5.7
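The spreads reported in Tables 4.5, 4.7, and 4.8 are per-axis statistics of repeated manual selections, with the 'Total' column taken as the square root of the summed per-axis variances. A minimal sketch (array name illustrative):

    import numpy as np

    def selection_spread(picks):
        """Per-axis and total spread of repeated manual selections,
        as in Tables 4.5, 4.7, and 4.8. picks is an (n_picks, dims)
        array of the same feature chosen repeatedly, in mm."""
        var = picks.var(axis=0, ddof=1)
        total_var = var.sum()
        return np.sqrt(var), var, np.sqrt(total_var), total_var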
Error is also introduced into the system each time the user chooses the centre of a printed cross on the flat plate. An experiment was conducted to measure this variability. Three pairs of left and right Digiclops images, with 10 printed crosses in each pair, were chosen from those used in Subsection 4.4.4. For each of these 30 printed crosses, the user chose the centre 10 times; a cross-correlated match between the two images was made using subpixel interpolation, and the 3D location of each cross point was recorded. The spread of these results is presented in Table 4.8.

Table 4.8: Variability of Manually Choosing the Printed Crosses from the Digiclops Images

                               x_D    y_D    z_D    Total
    Standard Deviation [mm]    0.2    0.1    1.1    1.2
    Variance [mm^2]            0.0    0.0    1.3    1.3

4.6.3 Accuracy of the Probe Calibration

The calibration of the probe to the flat plate, which produces T_U^P, introduced the largest errors into our experiment. This subsection describes a test that evaluates the magnitude of these errors. A modified version of the calibration box from Section 4.3 is used: two wires are threaded through holes at the corners of the box and intersect at one point in the middle of the box. Knowing the locations of the holes in the box, the location of the cross point relative to the box, P_H, and the transformation T_E^H are calculated using the method described in Section 4.3. The calibration box, with the exception of its top rim, is filled with water and submerged, and data is collected. The calibrated probe with the flat plate attached is used to acquire images of the intersection of the two wires from various angles. When the probe is near the intersection without being exactly over the required point, the ultrasound image contains two bright spots; as the probe moves closer to the intersection, the two bright spots merge, until the point where the two wires intersect is seen as a single bright spot. The ultrasound image is recorded when this single spot is at its maximum brightness. As when the three wires were chosen from the ultrasound image in Subsection 4.3.3, the single bright spot is manually selected and the location of the point P_U is recorded for each probe location. While the ultrasound image is acquired, the Digiclops records stereo images of the scene, in which the flat plate with its features, as well as the four reference points on the top rim of the calibration box, can be seen. Using the methods described in Subsections 4.3.1 and 4.3.2, T_P^D and T_H^D are found for each probe location. The points from the ultrasound image, P_U, and the true intersection point relative to the calibration box, P_H, are transformed into the Digiclops coordinate system, C_D,

    P_{D,III} = T_E^D (T_E^H)^{-1} P_H = T_H^D P_H ,  P_{D,IV} = T_P^D T_U^P P_U .    (4.23)

The points P_{D,III} are transformations of the known wire intersection using the measurements of the calibration box and the Digiclops images; the points P_{D,IV} are transformations of the points in the ultrasound images using the calibrated probe and the Digiclops images. The difference between the points P_{D,III} and P_{D,IV} is found for 40 runs of data, and the results are shown in Table 4.9.
This table describes the accuracy of the calibration matrix T_U^P. The location of the point P_H is known with only a small amount of error: there may be small errors in the measurement of the calibration box holes with respect to C_H, and there may be a small amount of slack in the crossing wires, adding error to the calculated intersection. Both of these factors are small contributors to the error reported in this test. So, although Table 4.9 provides an estimate of the error included in T_U^P, the true error is not known, since the method described here introduces new errors into the measurement of the data. Even allowing for the errors introduced by this accuracy test itself, the error in the calibration transformation T_U^P is still large; the results in Table 4.9 suggest that the calibration procedure is the biggest source of error in our system. The spread of the calculated points P_{D,IV} is shown in Table 4.10; the high variability across the 40 sets of data gives an idea of the calibration errors.

Table 4.9: Distance Between the Points P_{D,IV} and the Point P_{D,III}

                   x_D    y_D    z_D
    Mean [mm]      2.1    0.2    6.8

Table 4.10: Spread of the Points P_{D,IV}

                                x_D      y_D      z_D     Total
    Maximum [mm]               -20.7    148.4    921.2
    Minimum [mm]               -30.6    139.3    914.0
    Mean [mm]                  -26.1    143.3    919.0
    Standard Deviation [mm]      2.1      2.2      1.6    3.5
    Variance [mm^2]              4.4      4.9      2.7    7.1

Some of the spread in Table 4.10 may also have been introduced during the calibration test itself. Throughout the test, the ultrasound images are assumed to be 2D. In reality, each ultrasound image represents the averaged acoustic information over a finite thickness [37, 66, 76]. An experiment in [37] measured the thickness of the ultrasound plane and found that it varies with distance from the probe; at a distance of 37 mm from the surface of the probe, the plane was 4 mm thick. Although the images were acquired when the spot in the ultrasound image was brightest, and therefore most likely at the centre of the beam thickness, variability due to this thickness may have been introduced. Other sources of variability in the calibration test results include the selection of the features in the Digiclops images and the selection of the bright spot in the ultrasound images.

4.7 Correction of the Ultrasound Image to Imaged Fiducial Transformation

Instead of using the N-shaped fiducials only for a consistency check, this section investigates the possibility of using them during a real ultrasound examination. The methods and preliminary results presented here are a proposal for an extension to our research: a preliminary investigation into the feasibility of correcting the transformation between the ultrasound image and the imaged fiducial using the N-shaped fiducials. No new data is acquired for these correction methods; the data acquired previously in this chapter is used. The information that each N-shaped fiducial provides in the ultrasound images adds extra constraints that can be used to correct the transformation from the ultrasound image to the imaged fiducial.
Since many errors are present when finding this relationship, the key is to determine the location of the ultrasound image with respect to the patient's skin as consistently as possible. This section explains how this could be done. Chapter 3 describes the digital tracking component that measures the probe's location and the patient's location; combining this information, it is possible to calculate the probe location relative to the patient. Section 4.3 describes a method to find the transformation between the probe and the ultrasound image. Combining all of these transformations, it is possible to calculate the location of the ultrasound image with respect to the imaged fiducial on the patient's skin, T_U^M. The set of transformations used to find T_U^M is

    T_U^M = T_F^M T_D^F T_P^D T_U^P = (T_M^F)^{-1} (T_F^D)^{-1} T_P^D T_U^P ,    (4.24)

where T_F^M describes the transformation from the reference fiducial to the imaged fiducial on the skin, T_D^F describes the transformation from the Digiclops to the reference fiducial, T_P^D describes the transformation from the plate attached to the probe to the Digiclops, and T_U^P is the calibration matrix that transforms the ultrasound image into the plate's coordinate system. Using the set-up described in Section 4.4, the results of applying the transformations in Equation (4.24) are found to contain errors, introduced by the Digiclops measurements and by the calibration matrix. The consistency tests described earlier inspired ways to reduce this error: we propose that the N-shaped fiducials be used to correct the location measurements. The main goal is to find the location of the ultrasound image with respect to the skin, and incorporating information from the N-shaped fiducials into our tracking system provides a correction of the ultrasound image location with respect to the imaged fiducial on the skin. This section begins with a geometrical analysis of T_U^M in Subsection 4.7.1. Next, Subsection 4.7.2 shows the results of using a new matrix, derived directly from our ultrasound data, to find the transformation from C_U to C_M. Finally, Subsection 4.7.3 discusses the bias present in T_U^M when it is calculated using Equation (4.24); the trend in this bias over all the observations is used to correct the values of the rotations and translations in T_U^M.
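Equation (4.24) is, computationally, one chain of matrix products. A minimal sketch (a general matrix inverse is used for brevity; all names are illustrative):

    import numpy as np

    def image_to_skin(T_M_F, T_F_D, T_P_D, T_U_P):
        """Equation (4.24): T_U^M = (T_M^F)^(-1) (T_F^D)^(-1) T_P^D T_U^P.
        Maps ultrasound image coordinates onto the imaged fiducial
        attached to the skin."""
        return (np.linalg.inv(T_M_F) @ np.linalg.inv(T_F_D)
                @ T_P_D @ T_U_P)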
4.7.1 Geometrical Calculation of T_U^M

This subsection describes a method that can be used to calculate the transformation T_U^M, with the geometrical information extracted directly from the ultrasound image. T_U^M can be considered a product of rotations and translations that transform a point from the ultrasound coordinate system, C_U, to the imaged fiducial coordinate system, C_M. The geometry used to calculate these rotations and translations is described here; they are then used in Subsections 4.7.2 and 4.7.3. First, consider the relationship between C_U and C_M. As shown in Figure 4.28, these two coordinate systems have intersecting planes and, as described in Subsection 4.4.1, the planes intersect along a line that contains three points. The locations of these points can be calculated relative to both C_U and C_M: this produces the points P_{U,E}, P_{U,K}, and P_{U,Z} relative to C_U and the same points, P_{M,E}, P_{M,K}, and P_{M,Z}, relative to C_M.

Figure 4.28: Relationship Between the Coordinate Systems C_U and C_M. (b) N-Shaped Fiducial. (c) Ultrasound Image.

Using the relative locations of these three points, all of the translations, and all but one of the rotations, whose product makes up T_U^M can be determined. The rotation about the line from P_{M,E} to P_{M,Z} cannot be determined from these points; Figure 4.29 shows a diagram with this rotation, \psi.

Figure 4.29: Rotation Around the Intersection Between the Ultrasound Image and the N-Shaped Fiducial

A coordinate system C_A is first chosen, with its origin at the point P_{M,E}, its x-direction towards the point P_{M,Z}, and its z-direction parallel and opposite in direction to the z-direction of C_U. C_A is shown in Figure 4.29. Using this coordinate system, the transformation T_A^M from C_A to C_M and the transformation T_A^U from C_A to C_U are calculated and multiplied,

    T_{M_E}^U = T_A^M (T_A^U)^{-1} = Trans_{x,q_E} \cdot Trans_{y,r_E} \cdot Rot_{z,\zeta} \cdot Rot_{x,\psi} \cdot Rot_{y,\pi} \cdot Rot_{z,-\gamma} \cdot Trans_{x,-s_E} \cdot Trans_{y,-t_E} ,    (4.25)

where q_E = P_{M,E_x}, r_E = P_{M,E_y}, s_E = P_{U,E_x}, and t_E = P_{U,E_y}. Next, a coordinate system C_B is chosen, with its origin at the point P_{M,Z} and the same x-, y-, and z-directions as C_A; it is also shown in Figure 4.29. This second coordinate system is used to further constrain the results: when calculating the unknown angle \psi, the minimization is required to find a solution that produces a rotation around both x_A and x_B. Using C_B, the transformation T_B^M from C_B to C_M and the transformation T_B^U from C_B to C_U are calculated,

    T_{M_Z}^U = T_B^M (T_B^U)^{-1} = Trans_{x,q_Z} \cdot Trans_{y,r_Z} \cdot Rot_{z,\zeta} \cdot Rot_{x,\psi} \cdot Rot_{y,\pi} \cdot Rot_{z,-\gamma} \cdot Trans_{x,-s_Z} \cdot Trans_{y,-t_Z} ,    (4.26)

where q_Z = P_{M,Z_x}, r_Z = P_{M,Z_y}, s_Z = P_{U,Z_x}, and t_Z = P_{U,Z_y}. The rotations \gamma and \zeta are calculated using

    \gamma = \arctan\left( \frac{P_{U,Z_y} - P_{U,E_y}}{P_{U,Z_x} - P_{U,E_x}} \right)    (4.27)

    \zeta = \arctan\left( \frac{P_{M,Z_y} - P_{M,E_y}}{P_{M,Z_x} - P_{M,E_x}} \right) ,    (4.28)

where P_{U,E}(P_{U,E_x}, P_{U,E_y}) and P_{U,Z}(P_{U,Z_x}, P_{U,Z_y}) are known relative to the ultrasound image and P_{M,E}(P_{M,E_x}, P_{M,E_y}) and P_{M,Z}(P_{M,Z_x}, P_{M,Z_y}) are known relative to the imaged fiducial. As the final rotation, \psi, cannot be determined from the three points that lie along the intersection of the ultrasound image and the imaged fiducial, it is calculated from the T_U^M given by Equation (4.24): using least squares minimization, the value of \psi is found when (T_{M_E}^U - T_U^M) and (T_{M_Z}^U - T_U^M) are minimized. (In the experiment described in this chapter, we use a single layer of N-shaped fiducials on top of the phantom. If two or more layers of N-shaped fiducials were used, it would be possible to determine the angle \psi from the geometry contained in the ultrasound images; such a double layer could be investigated in future work.)
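Equations (4.27) and (4.28) are the slope angles of the intersection line in each plane. In a sketch, atan2 is the safer choice because it preserves the quadrant when the line is steep or reversed (names illustrative):

    import numpy as np

    def line_angles(P_UE, P_UZ, P_ME, P_MZ):
        """Equations (4.27)-(4.28): angle gamma of the intersection
        line in the ultrasound image and angle zeta of the same line
        on the imaged fiducial."""
        gamma = np.arctan2(P_UZ[1] - P_UE[1], P_UZ[0] - P_UE[0])
        zeta = np.arctan2(P_MZ[1] - P_ME[1], P_MZ[0] - P_ME[0])
        return gamma, zeta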
4.7.2 On-The-Fly Calculation of T_U^M_new Derived Directly from the Ultrasound Image

In this subsection, we discuss the possibility of finding the relationship between the ultrasound image and the imaged fiducial for each acquired ultrasound image, using an on-the-fly procedure. The transformation T_U^M_new is calculated from the information derived in Subsection 4.7.1: once the values of \gamma and \zeta have been calculated from the ultrasound information, and \psi has been calculated through a minimization using the T_U^M of Equation (4.24), the new transformation is assembled from the rotations and translations of T_{M_E}^U in Equation (4.25) for each ultrasound image that was acquired and presented in Section 4.5. The data collected during the consistency test of Section 4.4 is used to evaluate this on-the-fly method: the user-chosen bright spots from the ultrasound images are multiplied by both T_U^M and T_U^M_new, and the results are displayed in Tables 4.11 and 4.12.

Table 4.11: Distance Between the Points P_M, Calculated in Subsection 4.4.1, and the Points Calculated Using T_U^M P_U. In this table, T_U^M is calculated using the set of transformations in Equation (4.24); the results are the same as those of Table 4.4, but expressed with respect to C_M instead of C_D.

                               x_M      y_M      z_M
    Mean [mm]                 -8.5      2.2      2.6
    Maximum [mm]              -2.4     21.5      7.5
    Minimum [mm]             -15.8     -7.9     -1.3
    Standard Deviation [mm]    2.8      5.0      1.7

Table 4.12: Distance Between the Points P_M, Calculated in Subsection 4.4.1, and the Points Calculated Using T_U^M_new P_U. In this table, T_U^M_new is calculated using the geometrical data extracted from the ultrasound image and Equation (4.25). These results should be compared to those presented in Table 4.11.

                               x_M      y_M      z_M
    Mean [mm]                  0        0.1      0.6
    Maximum [mm]               0        0.9      3.6
    Minimum [mm]               0       -0.2     -2.0
    Standard Deviation [mm]    0        0.2      1.2

In Table 4.12, the difference in the x-direction between the values calculated with T_U^M_new and the original values is zero. This error is zero because the angle \psi is a rotation around x_A and x_B, and since these two coordinate systems were chosen in determining \psi, the error in this direction was minimized. Since our data from Section 4.4 is used for the calculation of the angles and translations, and therefore for the calculation of T_U^M_new, the only error present in Table 4.12 is due to the calculation of \psi; were it not for the error in \psi, the results would be zero in all directions.

4.7.3 Calculation of T_U^M Based on a Constant Bias

This subsection describes another approach for calculating the transformation between the ultrasound image and the imaged fiducial; again, the method is a preliminary investigation into a correction for the bias present in our results. As the probe calibration method is the largest source of error in our experiment, this bias is likely caused by the calibration matrix. In this approach, we calculate a constant error for each rotation and translation, and these errors are then used to offset the transformation T_U^M calculated previously. The angles and distances calculated from geometry are compared to those extracted directly from T_U^M; the values of the rotations and translations from Subsection 4.7.1, calculated from the geometry of the bright spots in the ultrasound image, are again used, as sketched below.
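A minimal sketch of this constant-bias correction, assuming the per-run parameter sets have been collected into arrays (names illustrative):

    import numpy as np

    def constant_bias_correction(geom_params, extracted_params):
        """Per-parameter bias between the geometry-derived rotations
        and translations (Subsection 4.7.1) and those extracted from
        the T_U^M of Equation (4.24). Both inputs are
        (n_runs, n_params) arrays; the mean difference is the constant
        bias, and its standard deviation is what Table 4.13 reports."""
        diffs = geom_params - extracted_params
        bias = diffs.mean(axis=0)
        spread = diffs.std(axis=0, ddof=1)
        corrected = extracted_params + bias   # applied to every test run
        return corrected, bias, spread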
The values calculated from the geometry of the bright spots are compared to the values of the same rotations and translations extracted directly from the T_U^M calculated using Equation (4.24): using least squares minimization, the values of \gamma, \zeta, \psi, q_E, r_E, s_E, and t_E are found when (T_{M_E}^U - T_U^M) and (T_{M_Z}^U - T_U^M) are minimized. The differences between the rotation and translation values calculated in Subsection 4.7.1 and those described above are then calculated for each set of data. For each rotation and translation, the bias is taken as the mean of these differences, and these biases are assumed to be constant across all of the data from every test run. The standard deviation of the differences used to calculate each bias is presented in Table 4.13. The bias for each rotation and translation is then added to the values calculated from T_U^M for each test run, and with these new values, T_U^M_bias is assembled using the decomposition of T_{M_E}^U from Equation (4.25) for each acquired ultrasound image. The biases were included in order to see whether a constant offset for each rotation and translation could improve the overall consistency test results; this assumes that a bias, such as the one from the calibration matrix, was introduced into the original consistency test data and can therefore be reduced by an opposing bias. The user-chosen bright spots from the ultrasound images are multiplied by T_U^M_bias, and the results are displayed in Table 4.14.

Table 4.13: Standard Deviation of the Differences used to Calculate the Biases

    Translation   Standard Deviation [mm]   Rotation   Standard Deviation [deg]
    q_E           1.7                       \gamma     5.9
    r_E           4.2                       \zeta      9.4
    s_E           2.4                       \psi       28.9
    t_E           1.8

Table 4.14: Distance Between the Points P_M, Calculated in Subsection 4.4.1, and the Points Calculated Using T_U^M_bias P_U. In this table, T_U^M_bias is calculated using the rotations and translations extracted from T_U^M, corrected by the constant biases. These results should be compared to those presented in Table 4.11.

                               x_M      y_M      z_M
    Mean [mm]                 -0.1     -0.3     -1.1
    Maximum [mm]               6.7      8.3      2.4
    Minimum [mm]              -5.4    -15.3     -5.1
    Standard Deviation [mm]    2.9      3.2      1.7

4.8 Discussion

Regardless of the type of external tracker a system uses, a calibration between the tracked probe and the ultrasound image is necessary. When determined inaccurately, this calibration matrix introduces errors into the system and, in turn, into the overall accuracy of the collected data. The results presented in Section 4.5 include this calibration error. Unlike the tests described in Section 3.4, the ultrasound consistency tests cannot use the Optotrak system as a reference standard, and therefore a validation of system accuracy is not established. From Table 4.4, the distances between the points P_{D,I}, measured using the Digiclops, and the points P_{D,II}, calculated using the calibration transformation, have mean values of -6.7 mm, 1.2 mm, and 1.6 mm in the x-, y-, and z-directions, using all the data from both the female and male phantom torsos. Since the true locations of the points P_{D,I} and P_{D,II} are not known, the standard deviation of the distance between the points is also a useful measure of the consistency of the data; it is 2.0 mm, 4.5 mm, and 2.5 mm in the x-, y-, and z-directions. From these results, the spread of each point cloud is reasonably small compared to the distance between the two clouds of points.
As described in Subsections 4.6.1 to 4.6.3, the transformations used to calculate P_{D,I} and P_{D,II} have errors, as they were determined from points chosen from Digiclops and ultrasound images. Variability in selecting these points, as well as errors in the calibration matrix, both contribute to the errors in the consistency test results. The points P_{D,I} are found using

    P_{D,I} = T_F^D T_M^F P_M .    (4.29)

The centre and width of each bright spot in the ultrasound images are required to calculate P_M. From Table 4.5, the total variance in all directions of the manually chosen bright spot centres is \sigma^2_{centre} = 0.006 mm^2. The probability of choosing the widths of the three bright spots such that a consistent angle between the ultrasound image and the N-shaped fiducial is obtained has a mean of 78%, as seen in Table 4.6. The calculation of T_M^D also depends on the ability of the user to choose data consistently: in order to find this transformation, the corners of the N-shaped fiducials were manually selected from the Digiclops images. From Table 4.7, the total variance in all directions of the manually chosen fiducial corners is \sigma^2_{fiducial} = 5.7 mm^2. As described in Subsection 4.4.2, the calculation of T_M^F requires that the user choose the corners of the two different fiducials. Although this calculation also involves selecting points, it incorporates a least squares minimization over 30 sets of data. T_M^F is set to a fixed matrix, and therefore its inclusion in the calculation of the points P_{D,I} may introduce an additional offset; the cloud of results is enlarged when these offsets are multiplied by T_F^D. The variance introduced by choosing the N-shaped fiducial corners, together with that from choosing the centres of the ultrasound image bright spots, produces a minimum point cloud spread of 5.7 mm^2, and there may be other sources that contribute to the size of the point cloud. The variance introduced by choosing the N-shaped fiducial corners in the Digiclops images (Table 4.7) is a likely cause of the error present in the results of Table 4.4; the error introduced by the selection of bright spot centres does not contribute significantly. A cloud of points is also created in the calculation of the points P_{D,II}, which involves the following set of transformations:

    P_{D,II} = T_P^D T_U^P P_U .    (4.30)

The centre of each bright spot in the ultrasound images is manually selected in order to find the points P_U. From Table 4.5, the total variance in all directions of the manually chosen centre spots is \sigma^2_{centre} = 0.00613 mm^2. The calibration transformation T_U^P is calculated using a set of matrices, each containing some amount of error, and these errors affect the final calibration transformation. In order to calculate T_U^P, the least squares minimization of 120 sets of data is used. The result of T_U^P may introduce an offset into the calculation of the points P_{D,II}; multiplying T_U^P by T_P^D enlarges the cloud of points due to this offset. The calculation of T_P^D depends on the manual selection of data: the user is asked to choose the centre of each printed cross on the paper mounted on the flat plate. From Table 4.8, the total variance in all directions of the manually chosen printed crosses is \sigma^2_{crosses} = 0.5 mm^2.
Once again, the total point cloud size, this time for P_D,II, can be estimated from the contributing sources. The sum of the variance created by choosing the printed crosses in the Digiclops images and the variance from choosing the bright spots in the ultrasound images is 0.5 mm². The variance introduced by choosing the printed crosses in the Digiclops images is a likely cause of some of the error present in the results of Table 4.4. Again, the error introduced by the selection of bright spot centres is minimal and therefore does not contribute significantly. From Table 4.9, the calibration error of 2.1 mm in the x-direction, 0.2 mm in the y-direction, and 6.8 mm in the z-direction is the largest of all contributing factors to the point cloud size of P_D,II.

In an effort to increase the consistency of our results from Table 4.4, two correction methods were presented in Section 4.7. A transformation matrix, T_M, between the ultrasound coordinate system and the imaged fiducial coordinate system is first calculated based on the matrices calculated in Section 4.4. The original points from the ultrasound image are multiplied by this matrix to obtain the points in the imaged fiducial coordinate system. The original points, from Table 4.11, have a mean difference of -8.5 mm, 2.2 mm, and 2.6 mm and a standard deviation of 2.8 mm, 5.0 mm, and 1.7 mm in the x-, y-, and z-directions. Using the correction methods described in Section 4.7, these values are improved.

The first correction method uses an on-the-fly approach to calculate the rotations and translations required to create the transformation from the ultrasound coordinate system to the imaged fiducial coordinate system, T_M,new. These calculations are based on the positions of the bright spots contained within the ultrasound images. These values are used to transform the original points P_U into P_M. The differences between these points and the original points, from Table 4.12, have a mean of 0 mm, 0.1 mm, and 0.6 mm and a standard deviation of 0 mm, 0.2 mm, and 1.2 mm in the x-, y-, and z-directions. This on-the-fly correction method produces results with a very small error. This apparent improvement is due to the fact that only the rotation ψ, around the line of intersection between the ultrasound image and the imaged fiducial, is calculated during the minimization. The other rotations and translations included in T_M,new are derived directly from the ultrasound images. Since the points P_M are also calculated from these images, the results only represent the error present in the calculation of ψ. In this thesis, this method was presented as a proposal; real tests would need to be performed in order to verify its utility.

Instead of using the new rotation and translation values to calculate the transformation between the ultrasound coordinate system and the imaged fiducial coordinate system, the second correction method calculates a single bias for each variable that is consistent throughout all the test runs. This bias is calculated using the mean of the difference between the rotations and translations derived from T_M,new and T_M. The bias for each rotation and translation is added to T_M for each of the test runs.
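The second correction method reduces to estimating one constant offset per motion parameter and adding it back before the transformation is rebuilt. A hedged Matlab sketch of this step follows; the parameter ordering and the function name are assumptions made for illustration only.

function params_corr = fun_apply_bias(params_M, params_Mnew)
% params_M, params_Mnew: R x 7 matrices of rotations and translations,
% one row per test run, assumed ordered [gamma zeta psi q_E r_E s_E t_E]
bias = mean(params_Mnew - params_M, 1);    % constant bias per parameter
params_corr = params_M + repmat(bias, size(params_M, 1), 1);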
The corrected transformations are calculated and used to transform the ultrasound points into the imaged fiducial coordinate system. The differences between these points and the original points, from Table 4.14, have a mean of -0.1 mm, -0.3 mm, and -1.1 mm and a standard deviation of 2.9 mm, 3.2 mm, and 1.7 mm in the x-, y-, and z-directions. Using this correction method, our results become more consistent. This reduction in errors is most likely attributable to a reduction in the error from calibration. Even after the correction is applied, the results contain additional inconsistencies, most likely due to the selection of data from the Digiclops images or to the calculation of the points in the imaged fiducial coordinate system.

Chapter 5

Conclusions and Future Directions

This chapter begins with a summary of our system. The summary highlights the benefits of the system as well as the key contributions of this thesis. Next, conclusions about both the digital tracking component from Chapter 3 and the ultrasound image based component from Chapter 4 are stated. Section 5.3 discusses a proposed alternative method that uses fiducials during a real examination. Finally, the chapter finishes with a look at future directions for our research in Section 5.4. This final section also examines variations of the tracking system that could make it suitable for use on a patient, and discusses practical issues in applying the system.

5.1 Tracking System Summary

To the best of our knowledge, this is the first use of a stereo vision system for calculating the probe motion as well as the patient movement, of the area being examined, during acquisition of ultrasound images. There are three possible options using the tracking system described in this thesis. All of these scenarios make use of a trackable object attached to the ultrasound probe, and in each case grayscale surfaces are tracked by the Digiclops. The grayscale surface attached to the patient's skin provides a rich set of features over the entire examination area. These features allow large numbers of point locations on the surface of the skin to be calculated by the trinocular camera so that the entire surface can be tracked. When occlusions between the surface and the camera are created by the probe or the sonographer over portions of the patient's surface, the other areas on the patient can still be tracked.

In the first of the three scenarios, the patient has a textured pattern attached directly to their skin. This textured surface can be created with paint or dye. In this scenario, both the object attached to the probe and the surface of the patient's skin are tracked using the Digiclops while the ultrasound examination is being performed. The second scenario makes use of an artificial skin placed on the surface of the patient's skin instead of the paint or dye. Again, the texture of the artificial skin is visible to the Digiclops, and both the surface of the patient and the object attached to the probe are tracked using the Digiclops. Both of these scenarios rely solely on the Digiclops information to track the position of the object attached to the probe and the patient; the ultrasound information is not used for tracking in these first two scenarios.
In the third scenario for a tracking system, the patient's skin surface is covered with both fiducials and a grayscale textured surface. These fiducials are visible in both the ultrasound and Digiclops images, providing additional information about the relationship between the ultrasound image and the fiducial being imaged.

5.2 Digital Camera Evaluation

From the tests performed in this thesis, the accuracy of measuring the location of the patient and the probe is suitable for tracking large patient motion as well as some smaller motion. Based on the results obtained in Section 3.5, we can estimate that at a distance of approximately 1000 mm from the camera, the location of the entire sphere, using a window of 240 mm x 230 mm (55,200 pixels²), can be calculated to within ±2 mm. The location of a patch on a curved surface (such as the patient), with a size of approximately 15 mm x 23 mm (345 pixels²), can be calculated to within ±2 mm. Based on the experiments performed with the flat plate, the probe position can be calculated to an accuracy of -0.7 mm to 1.0 mm when the object attached to the probe has an area of approximately 360 mm x 160 mm (57,600 pixels²). When the object's area is decreased to a patch size of approximately 90 mm x 40 mm (2,760 pixels²), the accuracy of measuring the location of this patch is from -2.2 mm to 1.8 mm.

In order to calculate the accuracy in orientation of each patch with respect to the true plane orientation defined with the Optotrak, the angle between the normal of the test plate and the normal of each patch was determined. If the normal of the entire surface is used, the mean error between this normal and the true normal is 3.1°. When the surface is divided into 8 patches, each with an area of 90 mm x 80 mm (7,200 pixels²), the mean error is 7.1°. At 144 patches per surface, with an area of 20 mm x 20 mm (400 pixels²) in each patch, the mean error is 10.5°. These errors represent the rotational error in the two directions that lie in the plane of the plate. The results shown in Subsection 3.4.3 can therefore be interpreted as the combination of errors around both of these directions.

In addition to the accuracy values presented, the feasibility of the proposed method in a clinical setting should be considered. Since tracking of the area being examined during ultrasound has not been performed using other tracking systems, our system offers an improvement for freehand tracking techniques. In general, 3D freehand ultrasound systems that are currently in use assume that the patient does not move throughout the scan [84]. Other systems used for tracking patient motion during an ultrasound scan have not been able to track the area being scanned, as the markers used for tracking would interfere with the examination [4, 13]. Since the Digiclops makes use of images that it collects of the scene, no active markers are required. The elimination of active markers from the patient's skin surface eliminates the interference with the ultrasound probe. In addition, the grayscale textured features that are placed on the patient's skin do not have any effect on the ultrasound images that are acquired.
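The orientation errors quoted above follow from comparing a fitted patch normal against the reference normal defined with the Optotrak. A hedged Matlab sketch of this measure is given below; the least squares plane fit via the singular value decomposition is one standard choice and is not necessarily the exact method used in Chapter 3.

function ang_deg = fun_normal_angle(patch_pts, n_ref)
% patch_pts: N x 3 matrix of 3D points on the patch [mm]
% n_ref: 1 x 3 reference (true) plane normal
centred = patch_pts - repmat(mean(patch_pts, 1), size(patch_pts, 1), 1);
[U, S, V] = svd(centred, 0);        % least squares plane fit
n_fit = V(:, 3).';                  % normal = direction of least variance
c = abs(n_fit*n_ref.')/(norm(n_fit)*norm(n_ref));
ang_deg = acos(min(c, 1))*180/pi;   % angle between the two normals [deg]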
5.3 Ultrasound Image-Based Consistency Test

Two different sets of transformations, one using the information available to the Digiclops and the other using the calibrated probe, are used to calculate the points relative to the Digiclops. The differences between the transformed points have mean values of -6.7 mm, 1.2 mm, and 1.6 mm in the x-, y-, and z-directions using all the data from both the female and male phantom torsos. The standard deviation between these points is 2.0 mm, 4.5 mm, and 2.5 mm in the x-, y-, and z-directions.

Probe calibration is a common step in externally tracked freehand ultrasound techniques. Regardless of which tracking system is used, these systems need to find the transformation between the ultrasound images and the tracked probe. Unfortunately, this additional step introduces additional errors into the system. In the consistency test described in this thesis, the calibration step is included in the set of transformations, and the transformation that describes the probe calibration likely introduces a large portion of the errors into this test. Because the consistency test includes the calibration transformation, the reported results are inclusive of this error.

In an effort to better understand our consistency test results, additional analysis was performed. A transformation matrix, T_M, between the ultrasound coordinate system and the imaged fiducial coordinate system is first calculated based on the matrices calculated in Section 4.4. The original points from the ultrasound images are multiplied by this matrix to obtain the points in the imaged fiducial coordinate system. The original points have a mean difference of -8.5 mm, 2.2 mm, and 2.6 mm and a standard deviation of 2.8 mm, 5.0 mm, and 1.7 mm in the x-, y-, and z-directions. The total standard deviation in all three directions is 6.0 mm.

If the tracking system includes a set of fiducials and an artificial skin, then the correction methods can be applied to the data. The first proposed correction method geometrically calculates the rotations and translations required to create the transformation from the ultrasound coordinate system to the imaged fiducial coordinate system, T_M,new. These calculations are based on the positions of the bright spots contained within the ultrasound images. These values are used to transform the original points P_U into P_M. The differences between these points and the original points have a mean of 0 mm, 0.1 mm, and 0.6 mm and a standard deviation of 0 mm, 0.2 mm, and 1.2 mm in the x-, y-, and z-directions. This proposed on-the-fly correction method produced a better consistency in our results since all of the rotations and translations, except for one, were calculated directly from the data contained in the ultrasound images. The results obtained using this method show the error present in calculating the angle ψ. Although all but one of the degrees of freedom of T_M,new are calculated directly from each ultrasound image, the camera is still required in order to determine the angle ψ. For this reason, this method is considered to be a correction to our existing procedure.
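Both correction methods compare rotations and translations extracted from homogeneous transformations such as T_M and T_M,new. A hedged Matlab sketch of one such decomposition is shown below; it assumes a ZYX (R_z R_y R_x) Euler convention, which the thesis does not state explicitly.

function [rot_deg, trans_mm] = fun_decompose(T)
% T: 4 x 4 homogeneous transformation
R = T(1:3, 1:3);
ry = asin(-R(3, 1))*180/pi;             % rotation about y
rx = atan2(R(3, 2), R(3, 3))*180/pi;    % rotation about x
rz = atan2(R(2, 1), R(1, 1))*180/pi;    % rotation about z
rot_deg = [rx ry rz];
trans_mm = T(1:3, 4).';                 % translation components [mm]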
The second proposal was to investigate whether a constant bias, likely originating from errors in the probe calibration matrix, could be corrected using all of the data collected. Instead of using the new rotation and translation values to calculate the transformation between the ultrasound coordinate system and the imaged fiducial coordinate system, this second correction method calculates a bias for each variable that is consistent throughout all the test runs. This bias is calculated using the mean of the difference between the rotations and translations derived from T_M,new and T_M. The bias for each rotation and translation is added to T_M for each of the test runs. The corrected transformations are calculated and used to transform the ultrasound points into the imaged fiducial coordinate system. The differences between these points and the original points have a mean of -0.1 mm, -0.3 mm, and -1.1 mm and a standard deviation of 3.0 mm, 3.2 mm, and 1.7 mm in the x-, y-, and z-directions. The total standard deviation in all three directions is 4.7 mm. Compared to the standard deviation of 6.0 mm obtained when no correction was included in our data, this correction method was therefore able to improve our results.

5.4 Future Directions

The accuracy of the tracking system is sufficiently good to continue testing on phantoms and eventually in clinical applications with ultrasound. For both the camera component and the ultrasound image based component of our system, there are additional tests which must be considered. These investigations include research into the feasibility of the tracking system, the development of the artificial skin and fiducials, and the implementation of the proposed correction methods.

5.4.1 Further Feasibility Testing and Algorithm Implementation

The acoustic material properties have been researched and tested with respect to their suitability for ultrasound. Additional testing must now be conducted to see if the chosen materials for the artificial skin and fiducials produce appropriate results in the digital camera. Factors such as lighting, colour, and size all play an important role in detecting the fiducial marks and feature points in the Digiclops images. The detection of these points can also be further developed as various template sizes, search windows, and subpixel interpolation schemes are tested. The method used to automatically locate the fiducials within the images must also be developed so that user interaction with the system can be minimized or eliminated. The speed of the camera system must also be investigated. Since we have chosen to use a high resolution model of the Digiclops camera in our tracking system, the camera is able to capture images at 15 Hz. A regular resolution Digiclops is able to capture images at 24 Hz.

The use of a curved array ultrasound probe instead of the linear array probe can also be tested. For example, a curved probe has the capability to reach tissues up to 100 mm deep. Other probes and ranges can also be tried to see how well the fiducials are detected when the probe is focused on a range that is much deeper than the placement of the fiducials.

Internal organ movement is also an issue that must be addressed. As the probe moves along the surface of the patient's skin, a force is exerted on the patient's anatomy, causing the organs to move. This thesis has focused solely on tracking the external patient movement.
Eventually, the internal movement must also be accounted for. The movement of the patient's skin may also cause difficulties for our tracking system. If a patient has loose skin, the skin may move relative to the internal organs. In this situation, patient movement may not be consistent with the patient's skin movement and the errors will not be corrected [13].

As described in Section 4.6.1, the method in which information is extracted from the ultrasound images can introduce errors into the consistency test. As the user chooses the centre and width of each bright spot in the ultrasound image, the resulting selections may vary. In order to improve the robustness of these selections, the user could choose each point numerous times, and a least squares minimization could then be used to find the location of each point. Alternatively, an algorithm for the automatic detection of the centre of each bright spot [2], as well as the automatic measurement of its width, could be implemented.

Variation due to the Digiclops during the consistency test calculations, as discussed in Section 4.6.2, can be due to errors in template matching, lighting and shadows in the images, and the angle at which the camera views the features and fiducials. In order to minimize these causes of errors, it is possible to introduce the third camera of the Digiclops into the calculations. Since the top camera is aligned with the right camera, triangulation with these two cameras can be performed and the 3D point positions calculated using the left and right cameras can be verified. Although the consistency test does not presently make use of this third camera, Chapter 3 makes use of it when calculating the surface position using the Digiclops.

The selection of the four corners of the fiducials in the Digiclops images is subject to errors introduced by the user. Given the fiducial thickness and the distance from the Digiclops camera, each portion of the fiducial is only a few pixels thick in the Digiclops images. The four corners could be determined consistently in all of the images by applying a skeletonization algorithm to find the thin line that passes through the centre of each length of the fiducial; the positions where these centre lines intersect can be used as the four corners. Alternatively, a small circular shape can be added to each of the corners of the fiducials. When viewed with the Digiclops, these circles can be located using an algorithm that detects the centroid of these shapes, as in the sketch below. Since the circles are positioned at the four corners, these positions can therefore be calculated automatically.
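A hedged Matlab sketch of this centroid-based corner detection follows. It assumes the Image Processing Toolbox, bright circular marks on a darker background, and illustrative threshold and size limits; none of these details come from the thesis.

function corners = fun_circle_centroids(img)
% img: grayscale Digiclops image containing small bright circular marks
bw = im2bw(img, graythresh(img));        % segment bright regions
bw = bwareaopen(bw, 10);                 % discard specks below 10 pixels
L = bwlabel(bw);                         % label the connected components
stats = regionprops(L, 'Centroid', 'Area');
keep = [stats.Area] < 200;               % keep only small, circle-sized blobs
corners = cat(1, stats(keep).Centroid);  % [x y] pixel positions of centroids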
5.4.2 Variations for the Artificial Skin and Fiducials

This subsection discusses the various options that are available for the creation of the artificial skin and the fiducials. The fiducials that were created for the experiments in this thesis were chosen so that they would be visible in both the ultrasound images and the Digiclops images. The resulting images of these experiments show that the fiducials are visible in the ultrasound images. The N-shaped fiducial, however, also occludes some anatomy directly below each portion of the fiducial. In order to reduce this shadow, the type of material and the thickness of the fiducial can be varied. From the ultrasound images of the tested materials shown in Appendix D, there is a selection of possible materials that can be used for the fiducials. It is important that all of the requirements for the fiducials be met by the chosen material.

If used as part of the tracking system, the artificial skin that is created as a matrix for the fiducials, and that also contains the grayscale texture, must also be considered. Although latex has the material properties that are required for the artificial skin, the process of creating the skin may not be suitable. In this thesis, we first made a mould of the patient and then applied the liquid latex to this mould. This process is time and labour intensive and therefore not realistic in a clinical environment. Also, the liquid latex cannot be applied directly to the patient's skin since it is not safe for contact with skin until it has cured completely. Alternatively, a set of latex skins could be created using various standard shapes, and the shape that is most appropriate for the patient could be used during the examination. Another option is to attach the grayscale texture directly onto the surface of the skin and then apply the fiducials using adhesives that are safe for skin. Further investigation into these ideas is necessary.

In the system described in this thesis, the width of each bright spot in the ultrasound images is used to calculate the slope direction where the ultrasound image intersects the fiducial. It is possible to create two attached N-shaped fiducials which are small enough that they can both be seen in one ultrasound image. When these two fiducials are seen at the same time, the centre positions of all the bright spots are sufficient to determine the slope direction. The ambiguity of using one fiducial is eliminated since a unique pattern of bright spots exists when two fiducials are used. An example of this ambiguity is shown in Figure 5.1. The first two diagrams in this figure show the corresponding ultrasound images when one fiducial is used. From these two diagrams, it can be seen that the two images are identical even though the ultrasound plane intersects the fiducial differently. The second two diagrams show how two fiducials placed side by side are able to resolve this ambiguity: the two ultrasound images produced with double fiducials are different from each other. In order to fit both fiducials into the width of the ultrasound plane, it is necessary that the fiducials be smaller than half the width of the ultrasound image. This small size could cause difficulties in locating the points in the ultrasound image as well as in the Digiclops images. Experimental testing is necessary to decide if this fiducial configuration is feasible.

Figure 5.1: Example Ultrasound Images Obtained using Single and Double Fiducials. On the left are four example scenarios of an ultrasound plane intersecting a single or double fiducial. On the right are simulated results that show the locations of the bright spots that could occur in the ultrasound images.

Bibliography

[1] Ludwig Adams, Achim Knepper, Dietrich Meyer-Ebrecht, Rainer Ruger, and Willem van der Brug. An optical navigator for brain surgery. Computer, 29(1):48-54, 1996.

[2] Israel Amir. Algorithm for finding the center of circular fiducials. Computer Vision, Graphics, and Image Processing, 49:398-406, 1990.

[3] João M. Sanches and Jorge S. Marques. A multiscale algorithm for three-dimensional freehand ultrasound. Ultrasound in Medicine and Biology, 28(8):1029-1040, August 2002.

[4] D. Atkinson, M. Burcher, J. Declerck, and J.A. Noble.
Respiratory motion compensation for 3-d freehand echocardiography. Ultrasound in Medicine and Biology, 27(12):1615-1620, December 2001.

[5] J.C. Bamber. Physical Principles of Medical Ultrasonics. Ellis Horwood Limited, England, 1986.

[6] Dean C. Barratt, Alun H. Davies, Alun D. Hughes, Simon A. Thom, and Keith N. Humphries. Accuracy of an electromagnetic three-dimensional ultrasound system for carotid artery imaging. Ultrasound in Medicine and Biology, 27(10):1421-1425, October 2001.

[7] Dean C. Barratt, Alun H. Davies, Alun D. Hughes, Simon A. Thom, and Keith N. Humphries. Optimisation and evaluation of an electromagnetic tracking device for high-accuracy three-dimensional ultrasound imaging of the carotid arteries. Ultrasound in Medicine and Biology, 27(7):957-968, July 2001.

[8] C. D. Barry, C. P. Allott, N. W. John, P. M. Mellor, P. A. Arundel, D. S. Thomson, and J. C. Waterton. Three-dimensional freehand ultrasound: Image reconstruction and volume analysis. Ultrasound in Medicine and Biology, 23(8):1209-1224, 1997.

[9] R.A. Beasley, J.D. Stefansic, A.J. Herline, L. Guttierez, and R.L. Galloway Jr. Registration of ultrasound images. In Proceedings of SPIE: The International Society for Optical Engineering, volume 3658, pages 125-132, 1999.

[10] W. Birkfellner, F. Watzinger, F. Wanschitz, G. Enislidis, C. Kollmann, D. Rafolt, R. Nowotny, R. Ewers, and H. Bergmann. Systematic distortions in magnetic position digitizers. Medical Physics, 25(11):2242-2248, November 1998.

[11] Lionel G. Bouchet, Sanford L. Meeks, Gordon Goodchild, Francis J. Bova, John M. Buatti, and William A. Friedman. Calibration of three-dimensional ultrasound images for image-guided radiation therapy. Physics in Medicine and Biology, 46(2):559-577, February 2001.

[12] Douglas A. Christensen. Ultrasonic Bioinstrumentation. John Wiley and Sons, New York, 1988.

[13] Michael L. Chuang, Mark G. Hibberd, Raymond A. Beaudin, Matthew G. Mooney, Marilyn F. Riley, James T. Fearnside, and Pamela S. Douglas. Patient motion compensation during transthoracic 3-d echocardiography. Ultrasound in Medicine and Biology, 27(2):203-209, February 2001.

[14] CIRS. Specifications for the fetal ultrasound training phantom. Technical report, Norfolk, VA, 2001.

[15] Alan C.F. Colchester, Jason Zhao, Kerrie S. Holton-Tainter, Christopher J. Henri, Neil Mainland, Patricia T.E. Roberts, Christopher G. Harris, and Richard J. Evans. Development and preliminary evaluation of VISLAN, a surgical planning and guidance system using intra-operative video imaging. Medical Image Analysis, 1(1):73-90, 1996.

[16] Roch M. Comeau, Abbas F. Sadikot, Aaron Fenster, and Terry M. Peters. Intraoperative ultrasound for guidance and tissue shift correction in image-guided neurosurgery. Medical Physics, 27(4):787-800, April 2000.

[17] ONDA Corporation. Acoustic properties tables. Technical report, ONDA Corporation, Sunnyvale, CA, 2003. www.ondacorp.com.

[18] D.G. Crouch. Design and manufacturing tools incorporating IRED markers. Technical bulletin NDI-TB-0021 Rev.02, Northern Digital Inc., 103 Randall Dr., Waterloo, Ontario, September 1995.

[19] D. David Dershaw. Imaging guided biopsy: An alternative to surgical biopsy. The Breast Journal, 6(5):294-298, October 2000.

[20] Kris Dickie. Ultrasound transducer information from Ultrasonix. Private communications, February 2004.

[21] C.F. Dietrich, A. Ignee, M. Gebel, B. Braden, and G. Schuessler. Imaging of the abdomen. Zeitschrift für Gastroenterologie, 40(12):965-970, December 2002.
[22] V.N. Dvornychenko. Bounds on (deterministic) correlation functions with application to registration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5(2):206-213, 1983.

[23] Y. Sato et al. Image guidance of breast cancer surgery using 3D ultrasound images and augmented reality visualization. IEEE Transactions on Medical Imaging, 17(5):681-693, 1998.

[24] R. Evans. VISLAN: Computer-aided surgery. IEEE Review, 41(2):51-54, March 1995.

[25] Aaron Fenster and Donal B. Downey. 3-d ultrasound imaging: A review. IEEE Engineering in Medicine and Biology, 15(6):41-51, November/December 1996.

[26] Andrew Gee, Richard Prager, Graham Treece, and Laurence Berman. Engineering a freehand 3D ultrasound system. Pattern Recognition Letters, 24(4-5):757-777, February 2003.

[27] Andrew H. Gee, Graham M. Treece, Richard W. Prager, Charlotte J.C. Cash, and Laurence Berman. Rapid registration for wide field of view freehand three-dimensional ultrasound. IEEE Transactions on Medical Imaging, 22(11):1344-1357, November 2003.

[28] R.N. Rohling, A.H. Gee, and L. Berman. Automatic registration of 3-d ultrasound images. Ultrasound in Medicine and Biology, 24(6):841-854, November 1998.

[29] Robert Rohling, Andrew Gee, and Laurence Berman. Three-dimensional spatial compounding of ultrasound images. Medical Image Analysis, 1(3):177-193, April 1997.

[30] Robert Rohling, Andrew Gee, and Laurence Berman. A comparison of freehand three-dimensional ultrasound reconstruction techniques. Medical Image Analysis, 3(4):339-359, December 1999.

[31] S. A. Goss, R. L. Johnston, and F. Dunn. A comprehensive compilation of empirical ultrasonic properties of mammalian tissues. Journal of the Acoustical Society of America, 64(2):423-457, August 1978.

[32] S. A. Goss, R. L. Johnston, and F. Dunn. Compilation of empirical ultrasonic properties of mammalian tissues. II. Journal of the Acoustical Society of America, 68(1):93-108, July 1980.

[33] E. Grimson, M. Leventon, G. Ettinger, A. Chabrerie, F. Ozlen, S. Nakajima, H. Atsumi, R. Kikinis, and P. Black. Clinical experience with a high precision image-guided neurosurgery system. In Proceedings of the First International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI'98), 2001.

[34] W.E.L. Grimson, G.J. Ettinger, S.J. White, T. Lozano-Perez, W.M. Wells III, and R. Kikinis. An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced visualization. IEEE Transactions on Medical Imaging, 15(2), April 1996.

[35] Reinhard Groell, Gottfried J. Schaffler, and Stephen Schloffer. Breath-hold times in patients undergoing radiological examinations: Comparison of expiration and inspiration with and without hyperventilation. Radiology and Oncology, 35(3):161-165, 2001.

[36] A. De Groote, M. Wantier, G. Cheron, M. Estenne, and M. Paiva. Chest wall motion during tidal breathing. Journal of Applied Physiology, 83(5):1531-1537, 1997.

[37] A. Hartov, S.D. Eisner, M.S. David, W. Roberts, K.D. Paulsen, L.A. Platenik, and M.I. Miga. Error analysis for a free-hand three-dimensional ultrasound system for neuronavigation. Neurosurg Focus, 6(3), 1999.

[38] Victor F. Humphrey and Francis A. Duck. Ultrasound in Medicine, chapter 1. Medical Science Series. Institute of Physics Publishing, Philadelphia, December 1989.

[39] Northern Digital Inc. Optotrak Rigmaker Guide. 103 Randall Drive, Waterloo, ON, 2000.

[40] Northern Digital Inc. Optotrak System Guide. 103 Randall Drive, Waterloo, ON, 2000.
[41] Northern Digital Inc. Optotrak® technical specifications 3020 position sensor. Technical report, Northern Digital Inc., 2004. www.ndigital.com.

[42] Point Grey Research Inc. Triclops Stereo Vision System Manual Version 2.5. 305-1847 West Broadway, Vancouver, BC, 2001.

[43] Point Grey Research Inc. Digiclops™ technical specifications. Technical report, Point Grey Research Inc., 2004. www.ptgrey.com.

[44] Sonotech® Inc. Clear image technical specifications. Technical report, Sonotech® Inc., 2002. www.sonotech-inc.com.

[45] D. Koller, G. Klinker, E. Rose, D. Breen, R. Whitaker, and M. Tuceryan. Real-time vision-based camera tracking for augmented reality applications. In Proceedings of the Symposium on Virtual Reality Software and Technology (VRST-97), pages 87-94, September 15-17, 1997.

[46] M. Lee, N. Cardinal, and A. Fenster. Single-camera system for optically tracking freehand motion in 3D: Experimental implementation and evaluation. In Proceedings SPIE Visualization, Display and Image-guided Procedures, pages 109-120, 2001.

[47] Daniel F. Leotta, Paul R. Detmer, and Roy W. Martin. Performance of a miniature magnetic position sensor for three-dimensional ultrasound imaging. Ultrasound in Medicine and Biology, 23(4):597-609, 1997.

[48] M. Leventon. A registration, tracking, and visualization system for image guided surgery. Master's thesis, MIT, May 1997.

[49] J. Lewis. Fast normalized cross-correlation. In Vision Interface, 1995.

[50] M. Lievin and E. Keeve. Stereoscopic augmented reality system for computer assisted surgery. In Proceedings of Computer Assisted Radiology and Surgery CARS'01, June 27-30, 2001.

[51] Frank Lindseth, Geir Arne Tangen, Thomas Langø, and Jon Bang. Probe calibration for freehand 3-d ultrasound. Ultrasound in Medicine and Biology, 29(11):1607-1623, November 2003.

[52] William Lorensen, Harvey Cline, Christopher Nafis, Ron Kikinis, David Altobelli, and Langham Gleason. Enhancing reality in the operating room. In Proceedings of the IEEE Conference on Visualization (Visualization '93), pages 410-415, October 25-29, 1993.

[53] Dirk Manke, Peter Rösch, Kay Nehrke, Peter Börnert, and Olaf Dössel. Model evaluation and calibration for prospective respiratory motion correction in coronary MR angiography based on 3-d image registration. IEEE Transactions on Medical Imaging, 21(9):1132-1141, September 2002.

[54] Kate McLeish, Derek L.G. Hill, David Atkinson, Jane M. Blackall, and Reza Razavi. A study of the motion and deformation of the heart due to respiration. IEEE Transactions on Medical Imaging, 21(9):1142-1150, September 2002.

[55] Stephen Meairs, Jens Beyer, and Michael Hennerici. Reconstruction and visualization of irregularly sampled three- and four-dimensional ultrasound data for cerebrovascular applications. Ultrasound in Medicine and Biology, 26(2):263-272, February 2000.

[56] P.H. Mills and H. Fuchs. 3D ultrasound display using optical tracking. In Proceedings of the First Conference on Visualization in Biomedical Computing, pages 490-497, 1990.

[57] A. Moskalik, P. L. Carson, C. R. Meyer, J. B. Fowlkes, J. M. Rubin, and M. A. Roubidoux. Registration of three-dimensional compound ultrasound scans of the breast for refraction and motion correction. Ultrasound in Medicine and Biology, 21(6):769-778, 1995.

[58] Diane M. Muratore and Robert L. Galloway Jr. Beam calibration without a phantom for creating a 3-d freehand ultrasound system. Ultrasound in Medicine and Biology, 27(11):1557-1566, October 2001.
[59] Thomas R. Nelson, Donal B. Downey, Dolores H. Pretorius, and Aaron Fenster. Three-Dimensional Ultrasound. Lippincott Williams and Wilkins, USA, 1999.

[60] Thomas R. Nelson and Dolores H. Pretorius. Three-dimensional ultrasound imaging. Ultrasound in Medicine and Biology, 24(9):1243-1270, 1998.

[61] Niko Pagoulatos, Warren S. Edwards, David R. Haynor, and Yongmin Kim. Interactive 3D registration of ultrasound and magnetic resonance images based on a magnetic position sensor. IEEE Transactions on Information Technology in Biomedicine, 3(4):278-288, December 1999.

[62] Niko Pagoulatos, David R. Haynor, and Yongmin Kim. A fast calibration method for 3-d tracking of ultrasound images using a spatial localizer. Ultrasound in Medicine and Biology, 27(9):1219-1229, September 2001.

[63] Carlo Palombo, Michaela Kozakova, Carmela Morizzo, Fabio Andreuccetti, Alessandro Tondini, Paolo Palchetti, Gianluca Mirra, Giuliano Parenti, and Natesa Pandian. Ultrafast three-dimensional ultrasound: Application to carotid artery imaging. Stroke, 29(8):1631-1637, August 1998.

[64] G.P. Penney, J.M. Blackall, M.S. Hamady, T. Sabharwal, A. Adam, and D.J. Hawkes. Registration of freehand 3D ultrasound and magnetic resonance liver images. Medical Image Analysis, 8(1):81-91, March 2004.

[65] Richard Prager, Andrew Gee, Graham Treece, and Laurence Berman. Freehand 3D ultrasound without voxels: Volume measurement and visualisation using the Stradx system. Ultrasonics, 40:109-115, 2002.

[66] Richard W. Prager, Andrew Gee, and Laurence Berman. Stradx: Real-time acquisition and visualization of freehand three-dimensional ultrasound. Medical Image Analysis, 3(2):129-140, 1998.

[67] R.W. Prager, R.N. Rohling, A.H. Gee, and L. Berman. Rapid calibration for 3-d freehand ultrasound. Ultrasound in Medicine and Biology, 24(6):855-869, 1998.

[68] D.H. Pretorius and T. R. Nelson. Opinion: Three-dimensional ultrasound. Ultrasound in Obstetrics and Gynecology, 5(4):219-221, 1995.

[69] M. Riccabona, G. Fritz, and E. Ring. Potential applications of three-dimensional ultrasound in the pediatric urinary tract: Pictorial demonstration based on preliminary results. European Radiology, 13(12):2680-2687, December 2003.

[70] D.W. Rickey, P.A. Picot, D.A. Christopher, and A. Fenster. A wall-less vessel phantom for Doppler ultrasound studies. Ultrasound in Medicine and Biology, 21(9):1163-1176, 1995.

[71] Alexis Roche, Xavier Pennec, Gregoire Malandain, and Nicholas Ayache. Rigid registration of 3-d ultrasound with MR images: A new approach combining intensity and gradient information. IEEE Transactions on Medical Imaging, 20(10):1038-1049, October 2001.

[72] R.N. Rohling and A.H. Gee. Issues in 3-d free-hand medical ultrasound imaging. Technical report, Cambridge University Department of Engineering, 1996.

[73] Georgios Sakas, Lars-Arne Schreyer, and Marcus Grimm. Preprocessing and volume rendering of 3D ultrasonic data. IEEE Computer Graphics and Applications, 15(4):47-54, July 1995.

[74] F. Sauer, A. Khamene, B. Bascle, L. Schimmang, F. Wenzel, and S. Vogt. Augmented reality visualization of ultrasound images: System description, calibration, and features. In Proceedings of the IEEE and ACM International Symposium on Augmented Reality, pages 30-39, 2001.

[75] Claudio Simon, Philip VanBaren, and Emad Ebbini. Motion compensation algorithm for non-invasive two-dimensional temperature estimation using diagnostic pulse-echo ultrasound. In SPIE - The International Society for Optical Engineering, volume 3249, pages 182-192, 1998.
[76] M.L. Skolnick. Estimation of ultrasound beam width in the elevation (section thickness) plane. Radiology, 180:286-288, 1991.

[77] Wendy L. Smith. Three-Dimensional Ultrasound Guidance for Core-Needle Breast Biopsy: System Development, Optimization and Evaluation. PhD thesis, University of Western Ontario, 2002.

[78] Wendy L. Smith and Aaron Fenster. Optimum scan spacing for three-dimensional ultrasound by speckle statistics. Ultrasound in Medicine and Biology, 26(4):551-562, May 2000.

[79] Wendy L. Smith and Aaron Fenster. Analysis of an image-based transducer tracking system for 3D ultrasound. In Proceedings of SPIE, Medical Imaging 2003: Ultrasonic Imaging and Signal Processing, volume 5035, pages 154-165, May 2003.

[80] A. State, M. Livingston, W.F. Garrett, G. Hirota, M.C. Whitton, E.D. Pisano, and H. Fuchs. Technologies for augmented reality systems: Realizing ultrasound guided needle biopsies. In Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH 96, pages 439-446, New Orleans, LA, August 1996.

[81] G. M. Treece, R. W. Prager, A. H. Gee, and L. Berman. 3D ultrasound measurement of large organ volume. Medical Image Analysis, 5(1):41-54, March 2001.

[82] G. M. Treece, R. W. Prager, A. H. Gee, and L. Berman. Correction of probe pressure artifacts in freehand 3D ultrasound. Medical Image Analysis, 6(3):199-214, October 2002.

[83] Graham Treece, Richard Prager, Andrew Gee, and Laurence Berman. Correction of probe pressure artifacts in freehand 3D ultrasound. In MICCAI, pages 283-290, October 2001.

[84] Graham M. Treece, Andrew H. Gee, Richard W. Prager, Charlotte J.C. Cash, and Laurence H. Berman. High-definition freehand 3-d ultrasound. Ultrasound in Medicine and Biology, 29(4):529-546, April 2003.

[85] R. Tsai. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, 3(4):323-344, 1987.

[86] K. Vant. Ultrasound image courtesy of K. Vant, 2003.

[87] Luis Weruaga, Juan Morales, Luis Nuñez, and Rafael Verdú. Estimating volumetric motion in the human thorax with parametric matching constraints. IEEE Transactions on Medical Imaging, 22(6):766-698, June 2003.

[88] Guofang Xiao, J. Michael Brady, J. Alison Noble, Michael Burcher, and Ruth English. Nonrigid registration of 3-d free-hand ultrasound images of the breast. IEEE Transactions on Medical Imaging, 21(4):405-412, April 2002.

[89] Kefu Xue, Ping He, and Yiwei Wang. A motion compensated ultrasound spatial compounding algorithm. In 19th International Conference IEEE/EMBS, pages 818-821, Chicago, IL, USA, November 1997.

[90] Xujiong Ye, J. Alison Noble, and David Atkinson. 3-d freehand echocardiography for automatic left ventricle reconstruction and analysis based on multiple acoustic windows. IEEE Transactions on Medical Imaging, 21(9):1051-1058, September 2002.

[91] Youwei Zhang, Robert Rohling, and Dinesh K. Pai. Direct surface extraction from 3D freehand ultrasound images. In Proceedings of IEEE Visualization 2002, pages 45-52, Boston, MA, October 2002.

[92] J. Zhao, A.C.F. Colchester, C.J. Henri, and K. Holton-Tainter. Preliminary in-theatre experience with VISLAN, a video based surgical guidance system. In Engineering in Medicine and Biology Society, 1995, IEEE 17th Annual Conference, volume 1, pages 363-364, 1997.
Appendix A

Setup used to Create the Plate to IRED Transformation

Figure A.1 shows a diagram of the printed crosses and the IRED locations used to create the transformation between the plate and the IREDs. In addition to the printed crosses and the positioning circles for the IRED markers, a greyscale image was also included in the background of Figure A.1.

Appendix B

Source Code

This appendix provides the Matlab source code used for finding the 3D location of points as well as for calculating a coordinate system using 3D points.

Calculating the Location of a 3D Point

The function listed below requires an input of matched pixel locations in both the left and right images of the Digiclops. These pixels are chosen from the rectified Digiclops images (calibrated camera images). Using triangulation, the 3D location of the feature is found. The point is found relative to C_D, the Digiclops coordinate system.

function [X,Y,Z] = fun_3d_pts(left_pix, right_pix)
baseline = 0.100018*1000;     % mm
focal_length = 1336.811523;   % pixels for 1024x768 images
centreRow = 423.019745;       % pixel position
centreCol = 592.776917;       % pixel position
% find the horizontal disparity
d_hor = left_pix(:,1) - right_pix(:,1);
% calculate the 3D points
Z = focal_length*baseline./d_hor;   % Z is the distance along the camera Z axis
u = right_pix(:,1) - centreCol;
v = right_pix(:,2) - centreRow;
X = u.*Z./focal_length;
Y = v.*Z./focal_length;

Calculating the Coordinate Directions of a Coordinate System

The function listed below requires an input of the origin point of the coordinate system (orig), the 3D locations of two points (one_tip and two_tip), the normal of the plane that is fitted to all the points (n), and whether the XZ or YX plane is required (combo='XZ' or combo='YX'). The result is a transformation containing the x, y, and z direction vectors as well as the origin of the coordinate system. The transformation is calculated relative to the device used to calculate the 3D points. For example, if the Digiclops is used to calculate the 3D points and the normal is fitted to these points, then the transformation is created relative to C_D, the Digiclops coordinate system.

function [T] = fun_three_xyz(orig, one_tip, two_tip, n, combo)
if combo=='YX',
    vxyz_num = [one_tip-orig 0];
    vxyz = vxyz_num/sqrt(sum(vxyz_num.^2));
    wxyz_num = [n(1) n(2) n(3) 0];
    wxyz = wxyz_num/sqrt(sum(wxyz_num.^2));
    uxyz_num = [cross(vxyz(1:3),wxyz(1:3)) 0];
    uxyz = uxyz_num/sqrt(sum(uxyz_num.^2));
elseif combo=='XZ',
    uxyz_num = [one_tip-orig 0];
    uxyz = uxyz_num/sqrt(sum(uxyz_num.^2));
    vxyz_num = [-n(1) -n(2) -n(3) 0];
    vxyz = vxyz_num/sqrt(sum(vxyz_num.^2));
    wxyz_num = [cross(uxyz(1:3),vxyz(1:3)) 0];
    wxyz = wxyz_num/sqrt(sum(wxyz_num.^2));
end;
dxyz = [orig 1];
T = [uxyz.' vxyz.' wxyz.' dxyz.'];
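A short usage example, not part of the original thesis code, illustrates how the two functions above might be combined; the pixel coordinates and the choice of plane normal below are made-up values for illustration only.

% triangulate three matched pixel pairs from the rectified images
left_pix = [512 400; 540 410; 500 430];
right_pix = [498 400; 526 410; 486 430];
[X, Y, Z] = fun_3d_pts(left_pix, right_pix);
pts = [X Y Z];                          % one 3D point per row [mm]
% fit a normal to the plane through the three points
n = cross(pts(2,:)-pts(1,:), pts(3,:)-pts(1,:));
n = n/norm(n);
% build a coordinate system with pts(1,:) as the origin
T = fun_three_xyz(pts(1,:), pts(2,:), pts(3,:), n, 'XZ');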
Appendix C

Details about the Data Used to Calculate the Best-Fit Sphere

This appendix provides detailed data about the errors that were calculated between the IRED positions and the best-fit sphere. Data was collected 6 times for 12 markers before the best-fit sphere was calculated. The error between the location of each marker and the best-fit sphere is shown in Table C.1. A summary of these errors is presented in Table C.2.

Table C.1: Errors [mm] Between each Marker Location and the Best-Fit Sphere. The total RMS error for the marker position is 0.0737 mm.

  Marker   Run 1   Run 2   Run 3   Run 4   Run 5   Run 6
  1        -0.02   -0.12   -0.06   -0.09   -0.01   -0.03
  2         0.06    0.02   -0.03   -0.04   -0.02    0.02
  3         0.02    0.05    0.03   -0.00    0.02    0.05
  4        -0.09    0.01    0.04    0.04    0.04    0.03
  5         0.19    0.01   -0.04   -0.02   -0.04   -0.11
  6        -0.16    0.01    0.01    0.02   -0.00    0.02
  7        -0.07   -0.14    0.10    0.05    0.04    0.09
  8        -0.11    0.19   -0.02   -0.02   -0.02   -0.01
  9         0.17    0.15   -0.09   -0.06    0.00   -0.04
  10       -0.00   -0.11   -0.07   -0.07   -0.02   -0.06
  11        0.02    0.01    0.12    0.15   -0.06   -0.05
  12       -0.02   -0.08    0.02    0.03    0.07    0.08

Table C.2: Spread of the Errors Between each Marker Location and the Best-Fit Sphere

  Maximum [mm]              0.19
  Minimum [mm]             -0.16
  Standard Deviation [mm]   0.07

Appendix D

Ultrasound Tests on Different Types of Materials

This appendix shows the ultrasound images that were obtained for each type of material that was tested. Coupling gel is applied between the probe, the material, and the phantom. The figures show the bright spot and the amount of shadow created for each tested material. In addition to the materials shown in this appendix, the following materials were tested but did not create any bright spot in the ultrasound image:

• Latex rubber
• Sewing thread
• Fiber optic core
• 8/6 Fishing line
• 12/6 Fishing line
• 34 Gauge copper wire
• Acrylic paint
• Henna skin dye
• White glue

D.1 Rubber

The images shown in Figures D.1 and D.2 are ultrasound images of various strips of polyurethane rubber on the phantom. Each strip has a thickness of approximately 1 mm and a width of 1 mm to 7 mm. The Por-A-Mold 2020 is clear and soft, the 2030 is light amber and soft, the 2040 is clear and firm, the 2060 is light amber and firm, and the 2070 is dark amber and firm.

The images shown in Figure D.3 are ultrasound images of various strips of Silastic silicone rubber on the phantom. Each strip has a thickness of approximately 1 mm and a width of 1 mm to 7 mm. Silastic J RTV is a high durometer mold making rubber which is often used with foam polyurethane. Silastic E RTV is white and has very good tear resistance and long working times. Silastic M RTV is a high durometer mold making rubber, used with rigid foam polyurethane for prototypes, food, and reproductions. DAP is a silicone rubber used for household caulking.

The images shown in Figure D.4 are ultrasound images of various strips of silicone rubber on the phantom. Each strip has a thickness of approximately 1 mm and a width of 1 mm to 7 mm. Dow Corning HS II is a high strength silicone mold making rubber with a high durometer for figurine reproduction. Dow Corning HS III is a high strength, low durometer mold making rubber for figurine reproduction. Dow Corning HS IV is a high strength silicone mold making rubber with a high durometer for figurine reproduction.

In this subsection, some of the rubbers that were previously discussed are also embedded in a latex sheet before the ultrasound images are acquired. Figure D.5 shows the results when the polyurethane Por-A-Mold rubbers are placed inside the latex. Figure D.6 shows the results when the silicone caulking is placed inside the latex. These strips of rubber have a thickness of approximately 1 mm and a width of 1 mm to 3 mm. The latex sheet is placed on the phantom and the images are acquired. Figure D.6(d) shows the difference between the sheet of latex, without any additional rubber, and the bare phantom; the left side of this image contains the phantom and the right side contains the phantom and the latex sheet.
Figure D.1: Ultrasound Image Results with Strips of Por-A-Mold Polyurethane Rubber of Various Sizes. Panels (i)-(l) show the 2040 rubber at widths of 1, 2, 3, and 7 mm.

Figure D.2: Ultrasound Image Results with Strips of Por-A-Mold Polyurethane Rubber of Various Sizes (continued). Panels show the 2060 and 2070 rubbers at widths of 1, 2, 3, and 7 mm.

Figure D.3: Ultrasound Image Results with Strips of Silastic and DAP Silicone of Various Sizes. Panels show J RTV, E RTV, and M RTV at 1, 2, 3, and 7 mm, and DAP at 1, 2, 3, and 4 mm.

Figure D.4: Ultrasound Image Results with Strips of HS Silicone of Various Sizes. Panels show HS II, HS III, and HS IV at 1, 2, 3, and 7 mm.

Figure D.5: Ultrasound Image Results with Various Sized Strips of Por-A-Mold Polyurethane Rubber Embedded in a Latex Matrix. Panels show the 2020, 2030, 2040, and 2070 rubbers at 1, 2, and 3 mm.

Figure D.6: Ultrasound Image Results of Latex Rubber and Various Sized Strips of Silicone Rubber Embedded in a Latex Matrix. Panels show DAP at 1, 2, and 3 mm, and a half latex comparison image.

D.2 Metals

This section shows the results for the various tested metals. Figure D.7 shows the effect of imaging strips of aluminum placed on the phantom. Gauge 14 to 24 sheet metal was cut into strips 1 mm to 5 mm wide. Figure D.8 shows the images obtained using wide and thin steel sewing and knitting needles, strips of copper sheet metal, and strips of steel sheet metal. The copper is gauge 19 and cut into strips 1 mm to 5 mm wide. The steel is 15 and 21 gauge and is also cut into strips 1 mm to 5 mm wide.

D.3 Strings and Fibers

The results of various strings and fibers on the phantom are shown in this section. Figure D.9 shows the image that is obtained when white glue is mixed with 17% [mass] and 23% [mass] cellulose and fine graphite particles. Angler 30/6 fishing line is also imaged. Finally, the results of a small and a large diameter acrylic cable jacket for fiber optics are shown.

D.4 Tapes

This section shows the results obtained when tapes and other adhesive materials are placed on the phantom and then viewed with ultrasound. Figure D.10 shows the results obtained from grey duct tape, white duct tape, masking tape, and a white sticker. Temporary tattoos are also placed on the phantom in strips of 1 mm to 5 mm. Finally, brown paper tape was tested in thicknesses of 1, 2, and 4 layers.
Figure D.7: Ultrasound Image Results of Sheets of Aluminum Cut into Various Sized Strips. Panels show gauge 14, 22, and 24 aluminum at widths of 1, 2, 3, and 5 mm.

Figure D.8: Ultrasound Image Results of Needles and Sheets of Copper and Steel Cut into Various Sized Strips. Panels show wide and thin sewing and knitting needles, gauge 19 copper at 1, 2, 3, and 5 mm, and gauge 15 and 21 steel at 1, 2, 3, and 5 mm.

Figure D.9: Ultrasound Image Results of Various Wires and Fibers. Panels include the fishing line and the small and large cable jackets.

Figure D.10: Ultrasound Image Results of Various Tapes and Tape Widths. Panels include the paper tape at 1, 2, and 4 layers.

Appendix E

Specifications for Manufacturing the Metal Fiducial

Figure E.1 shows the specifications used to manufacture the metal fiducials.