Camera-Based Estimation of Needle Pose for Ultrasound Percutaneous Procedures

by Sara Khosravi
B.Sc., Shahid Bahonar University, Iran, 2003

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)
April 2008
© Sara Khosravi, 2008

Abstract

A pose estimation method is proposed for measuring the position and orientation of a biopsy needle. The technique is to be used as a touchless needle guide system for guidance of percutaneous procedures with 4D ultrasound. A pair of uncalibrated, light-weight USB cameras are used as inputs. A database is prepared offline, using both the needle line estimated from camera-captured images and the true needle line recorded from an independent tracking device. A nonparametric learning algorithm determines the best fit model from the database. This model can then be used in real-time to estimate the true position of the needle with inputs from only the camera images. Simulation results confirm the feasibility of the method and show how a small, accurately made database can provide satisfactory results. In a series of tests with cameras, we achieved an average error of 2.4 mm in position and 2.61° in orientation. The system is also extended to real ultrasound imaging, as the two miniature cameras capture images of the needle in air and the ultrasound system captures a volume as the needle moves through the workspace. A new database is created with the estimated 3D position of the needle from the ultrasound volume and the 2D position and orientation of the needle calculated from the camera images. This study achieved an average error of 0.94 mm in position and 3.93° in orientation.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgments
Dedication
Chapter 1: Introduction
1.1 Ultrasound Imaging
1.2 3D/4D Ultrasound
1.3 Image Guided Biopsy
1.4 Guided Needle Insertion
1.4.1 Freehand Image Guidance
1.4.2 Tracked Freehand
1.4.3 Mechanical Guides
1.5 Requirements for a Needle Tracking Device
1.6 Summary of the Previous Work
1.7 Thesis Goals
1.8 Thesis Overview
Chapter 2: System Outline
2.1 Previous System Overview
2.2 Approximation Methods
2.2.1 Nonparametric Model Construction
2.2.1.1 Dimensionality Reducing
2.2.1.2 Space Partitioning Algorithms
2.3 Space Partitioning, Self-Organizing, and Dimensionality Reducing (SPORE)
Chapter 3: Component Selection and Configuration
3.1 Camera Specifications
3.2 Camera Options
3.2.1 Web-Cameras
3.2.2 Miniature Cameras
3.2.3 ARTRAY-130MI-IR-OP Camera
Chapter 4: Modeling
4.1 Collecting the Data
4.2 Defining the Camera Model
4.3 Model Fitting
4.4 Measures of Accuracy
Chapter 5: Simulations
5.1 Simulation Workspace
5.2 Simulation Process
5.2.1 A Simulation to Verify the Feasibility of a SPORE Created Model
5.2.2 A Practical Needle Workspace with Respect to the Ultrasound Probe
5.3.3 A Random Placement Simulation
Chapter 6: Experiments
6.1 Experiment 1: Validation of Concept Using Optical Tracking as Ground Truth
6.1.1 Apparatus and System Design
6.1.2 Camera Image Processing
6.1.3 OPTOTRAK Experiment Method
6.1.4 Results
6.2 Experiment 2: Validation of Concept Using Ultrasound
6.2.1 Apparatus
6.2.2 Ultrasound Experiment Method
6.2.3 Results
Chapter 7: Conclusions & Discussions
7.1 Discussion
7.2 Contributions
7.3 Limitations and Difficulties
7.4 Future Directions
References
Appendix A
Appendix B

List of Tables

Table 2.1: Overall tracking error. The perpendicular distance error d is calculated as the shortest 3D distance between the actual intersection point of the needle in the ultrasound image and the predicted trajectory [23]. Two cameras were used: a very low cost USB webcam and a more advanced, but larger, Digiclops camera from PGR.
Table 2.2: Repeatability of intrinsic calibration of USB camera 1, tested over 12 trials. The propagated position and orientation errors are calculated for the worst case [23].
Table 2.3: Errors of extrinsic calibration of the two USB cameras, averaged over 10 trials [23].
Table 3.1: Camera lenses categorized by field of view.
Table 3.2: Commercial sensor format specifications.
Table 5.1: Comparison between actual and predicted line in the first simulation.
Table 5.2: Comparison between actual and predicted line in simulation.
Table 6.1: Comparison between actual and predicted line in experiments with OPTOTRAK.
Table 6.2: Comparison between actual and predicted line in experiments with ultrasound.

List of Figures

Figure 1.1: A 2D ultrasound image of a needle in a water bath.
Figure 1.2: A 3D image of a fetus (courtesy of Ultrasonix Medical Corp.).
Figure 1.3: UltraGuide 1000. The base unit consists of a computer (a), a magnetic field generator (b), and a display screen (c) [15].
Figure 1.4: UltraGuide 1000. Procedural setup: two small position sensors are used, one clamped to the needle shaft near the hub (arrowhead) and the other affixed to the side of the probe (arrow) [15].
Figure 1.5: An IRLED marker is attached to an ultrasound probe to determine the real-time position of the probe [19].
Figure 1.6: (a) The physician has inserted a needle into a lesion within a breast phantom and holds the ultrasound probe in her right hand; (b) video-see-through augmented reality head-mounted display [21].
Figure 1.7: Abdominal needle guide system (Ascendia MedTech AB, Kista, Sweden).
Figure 2.1: Coordinate systems for world, cameras, camera images, ultrasound probe and ultrasound images [23].
Figure 2.2: (a) A single unit in a neural network; (b) a neural network structure [31].
Figure 2.3: A simple SPORE diagram.
Figure 3.1: The probe dimensions: (a) front view, (b) side view.
Figure 3.2: The definition of the camera FOV.
Figure 3.3: The ARTRAY-130MI-IR-OP camera.
Figure 4.1: (a) Side view of the workspace of the needle with respect to the ultrasound probe; (b) top view.
Figure 4.2: The grid with training points and test points.
Figure 4.3: Thin lens diagram, showing the object distance s, the image distance si and the focal length f. The thin lens estimate ignores the optical effects due to lens thickness.
Figure 4.4: Perspective projections.
Figure 4.5: The diagram shows the single blue ray mapping two different points to a single point in the camera image.
Figure 4.6: Two skew lines in space (actual and predicted needle lines).
Figure 4.7: The minimum distance between two skew lines.
Figure 5.1: The EGT camera schematic.
Figure 5.2: The hypothetical needle in the MATLAB environment.
Figure 5.3: A hypothetical needle is moved to the selected training points while position and orientation of the needle are recorded in both the images and in space.
Figure 5.4: (a) Two pin-hole cameras positioned in the 3D world frame looking down at the hypothetical needle; (b) the 3D needle projected onto the camera image planes.
Figure 5.5: A restriction was placed on the random needle placement to lower the number of training points. The needle plane was kept perpendicular to the x-axis.
Figure 5.6: Random needle placements.
Figure 5.7: The diagram shows the comparison between actual test point and predicted needle position from the database created with random needle placement.
Figure 6.1: The OPTOTRAK 3020 system components (Northern Digital Corporation).
Figure 6.2: (a) The actual aluminum base; (b) schematic representation of the aluminum base.
Figure 6.3: The needle with markers.
Figure 6.4: The FOV overlap of the two cameras. The red line is the minimum distance to achieve 40 mm coverage with both cameras.
Figure 6.5: A device to hold the needle stationary.
Figure 6.6: A schematic design of the needle holder device with clamp arm.
Figure 6.7: A screen shot of the two camera images and the estimated needle lines shown as black dashed lines.
Figure 6.8: The workspace with 16 training points and test points.
Figure 6.9: A flow chart for the OPTOTRAK experiment.
Figure 6.10: (a) Ultrasonix Sonix RP ultrasound system; (b) 3D/4D motorized probe (courtesy of Ultrasonix Medical Corporation, Richmond, Canada).
Figure 6.11: Ultrasound experiment setup with cameras, needle and the probe inside the water bath.
Figure 6.12: A 3D needle image in the water bath. The Ultrasonix 3D/4D imaging software allows for parameter adjustment on the touch screen to optimize the image.
Figure 6.13: A flow chart for the ultrasound experiment.
Figure 6.14: (a) The 3D needle from the ultrasound volume after initial thresholding;
(b) the predicted needle line and the inliers after the RANSAC algorithm.
Figure 7.1: A simple schematic of the two miniature cameras mounted directly on a 3D/4D ultrasound probe.

Acknowledgments

This thesis owes its existence to the help, support, and inspiration of many people. First, I would like to express my deepest sense of appreciation to my supervisor, Prof. Peter Lawrence, for his patient guidance, encouragement and excellent advice throughout this study. I would also like to express my sincere appreciation and gratitude to my co-supervisor, Prof. Robert Rohling. I am very grateful for his dedication, motivation and immense knowledge in the area of our study. I would also like to express my sincere thanks to Dr. Farrokh Sassani for his invaluable advice at the beginning of my studies. To all members of the Robotics and Control group, I am very grateful for their friendship, cooperative spirit and the excellent working atmosphere. I also extend my appreciation to all staff members of the Department of Electrical and Computer Engineering for their assistance and support. I thank my wonderful group of friends, to whom I am enormously grateful for providing me with encouragement throughout this work. I am deeply and forever indebted to my parents for their continuous love and encouragement and for always believing in me throughout my entire life. Many thanks to Hassan and Rana; you have been sources of joy and happiness throughout my life, and this was for you too. Finally, I wish to express my special thanks to my husband Reza, whose understanding and support enabled me to complete this work; without your love, it would not have been worthwhile.

Dedication

Dedicated to my loving parents
Every journey begins with a step
This journey was for you
You taught me my first step

Chapter 1: Introduction

1.1 Ultrasound Imaging

The accurate visualization of the interior of the human body is the goal of all medical imaging techniques, such as ultrasound, magnetic resonance and computed tomography imaging. Ultrasound is particularly well suited for guiding interventions because it offers real-time cross-sectional (2D) or volumetric (3D) imaging capability [1]. Ultrasound imaging is comparatively inexpensive, compact and fast, and does not involve ionizing radiation [2]. Ultrasound is defined as sound having a frequency above the threshold of human hearing, typically in the megahertz range. Ultrasound imaging devices use the echoes from short ultrasound pulses to form an image. Most modern ultrasound devices are based on the piezoelectric effect, whereby electrical impulses are converted to sound pulses, the sound pulses interact with tissues, and the echo energy is converted back to electrical signals. An ultrasound image is created in three main steps:

Producing a sound wave
• Ultrasound waves are created by a probe. Several crystals are arranged together to form a probe, and the sound waves begin with the mechanical oscillations of a crystal that has been excited by electrical pulses. It is from the probe that sound waves propagate through tissue to be reflected and returned as echoes back to the probe. Short impulses from the ultrasound machine, coupled to a thin piezoelectric crystal, make the probe create sound waves at the desired megahertz frequency. The sound is focused by placing a curved acoustic lens in front of the crystals and by apodization, where the firing and receiving of a small group of crystals are combined with small time delays.
On the outward face of the probe, a rubber material enables the sound to be transmitted efficiently into the body by matching the acoustic impedance of tissue.

Receiving the echoes
• The image is formed by the reverse of the process used to create the sound waves. The echoes returning to the probe are converted by the crystals into electrical signals and are then amplified and filtered.

Forming the image
• A typical ultrasound transducer has approximately 128 crystals, so approximately 128 lines of echoes are used to form an image.
• A 2D image is formed line by line, with the geometry of the array of crystals determining the relative spacing and orientation between the lines.
• For each line, the strength of the echo and the timing are used to calculate the magnitude and location of the intensity to display in the image.
• The set of lines is then converted to a regularly spaced set of pixels, in a process called scan conversion, for display on the monitor. Since the crystals are arranged in a line, this image represents a cross-section of the anatomy.
• This kind of image is called a B-scan or B-mode image because the strength of the echoes is converted to brightness on the display. See Figure 1.1 for an example.

Figure 1.1: A 2D ultrasound image of a needle in a water bath.

1.2 3D/4D Ultrasound

Conventional 2D ultrasound is a very flexible imaging modality, allowing the user to manipulate the probe and view the desired anatomical cross-section, but it also suffers from disadvantages that 3D ultrasound imaging attempts to address. Ultrasound imaging in 2D provides only image planes of the examination, thus requiring physicians to reconstruct the 3D volume of interest mentally [1]. There are several ways to create a 3D ultrasound volume. A 2D probe can be moved under hand control so that an irregularly spaced set of 2D images can be reconstructed into a regularly spaced 3D volume for viewing with conventional image display software. The positions of the individual 2D images can be measured with an additional position sensor. A computer is used to record the sensor output describing the trajectory of the probe during the scan, together with the sequence of B-scan images produced by the ultrasound machine. This method, called 3D freehand ultrasound, adds expense and requires calibration between the position sensor and the ultrasound plane. To avoid the complication of adding a position sensor, other methods have been introduced to replace the sensor readings with estimations of the probe trajectory based only on the B-scan image content [3]. This is a difficult, unsolved approach still undergoing research by several groups. Creating 3D from a set of 2D images is made easier by 3D probes that provide an automatic sweeping motion within the probe housing so that the user can simply hold the probe steady and acquire a volume. 4D ultrasound is a term used to describe real-time 3D ultrasound (the fourth dimension is time). In other words, 4D ultrasound has the ability to acquire, render and display 3D images in real time, so it is possible to examine changes to the anatomy and tool position during interventions. 3D and 4D ultrasound have recently been introduced on many commercial machines (e.g. GE Healthcare, General Electric, Fairfield, CT, United States; Ultrasonix Medical Corp., Richmond, BC, Canada; Philips Medical Systems, Philips Electronics North America Corp., New York, NY, United States).
Figure 1.2 gives an example of a rendering of the skin surface from a 3D ultrasound volume of a fetus.

Figure 1.2: A 3D image of a fetus (courtesy of Ultrasonix Medical Corp.).

1.3 Image Guided Biopsy

There are many clinical reasons to perform percutaneous needle insertions, such as biopsy, aspiration and drug delivery. The term "biopsy" will be described and used throughout the thesis, but the methods and system suggested can apply to a variety of needle insertion procedures. Biopsy, from the Greek root meaning "view of the living", is a medical diagnostic test used to establish the composition of tissue or cells. Examination of the tissue sample retrieved by a needle is performed using a microscope. A biopsy is a way to test for the existence of a malignant tumor or to confirm that an abnormality is benign. If cancer is present, a pathologist can study the biopsy specimen to help determine what type of cancer exists and to grade the tumor. Many biopsy procedures are performed with the help of image guidance to localize the target and guide the needle to the target. For example, it may not be possible to differentiate cancer from benign conditions based on imaging alone, so a biopsy procedure may be used to diagnose disease. Ultrasound is often used to locate a suspicious lesion so that the physician has a clear picture of the location needing to be biopsied [4]. This type of intervention can be done with little or no discomfort under local anesthesia. Ultrasound is used in a wide range of biopsies, including biopsy of the liver [5], breast [6], lymph nodes [7], thyroid [8], and various musculoskeletal locations [9]. The challenge with these procedures is to minimize the number of needle insertions by accurately inserting the needle into the target. A successful needle biopsy could mean avoiding open surgical biopsy, which may require general anesthesia, hospitalization and a longer recovery period. It is helpful to monitor the needle in real time during insertion [10]. It is also helpful to guide the operator in choosing the puncture site and predicted needle trajectory before the needle enters the body.

1.4 Guided Needle Insertion

Selecting the correct puncture site and needle orientation, so that the inserted needle follows the desired trajectory toward the target, can be challenging. Many biopsy procedures require extensive expertise and skill to perform the task on the first attempt. Missing the target requires repeating the insertion, as it is not always possible to modify the trajectory, especially after a significant portion of the needle is inserted. There are several methods for needle tracking; these include the freehand technique, tracked freehand techniques and the use of mechanical guides.

1.4.1 Freehand Image Guidance

Freehand imaging techniques are normally performed with one hand on a 2D probe and one hand on the needle. This technique allows for total freedom over needle orientation and requires no setup time or any additional cost. The main disadvantage and difficulty of this technique is the hand-eye coordination required to align the target in the ultrasound image with the needle [11] [12]. In this technique, the operator must guess the puncture site and then try to correct the trajectory as the needle is inserted. This is especially difficult for deep insertions.
1.4.2 Tracked Freehand

In the majority of tracked freehand techniques, 3D ultrasound systems are combined with either an electromagnetic or an optical position sensor for measuring the needle location relative to the ultrasound probe. Magnetic sensors have been used in many freehand systems to measure the real-time position and orientation of the ultrasound probe and the needle. Today there are magnetic sensors available [13] which are small enough to be placed inside the needle (in this case, once the tip of the needle has reached the target, the sensor is removed so the needle can be used for drug delivery or tissue sampling). Magnetic trackers come with transmitters (to produce a magnetic field) and receivers (to measure the magnetic field) that are placed on the object being tracked. The MagTrax MN001 (Traxtal, Toronto, Canada) needle guide is one example. The MagTrax needle probe enables 5 degree-of-freedom point tracking at the tip of a probe just 1 mm in diameter. Another tracked freehand system, UltraGuide 1000 (UltraGuide, Tirat Hacarmel, Israel), is a freestanding unit with three main components: a base unit consisting of a computer, a magnetic field generator, and a display screen. The base unit generates a low-strength magnetic field over the working surface. Small, lightweight position sensors are attached to the ultrasound probe and to the shaft of the biopsy needle (not the tip). When the sensors are both within the magnetic field, their location and orientation in space are detected by the base unit; see Figures 1.3 and 1.4. With the MagTrax needle probe, the magnetic sensor is located in the tip of the needle probe, so that needle bending is not a problem, unlike the situation in UltraGuide, where the proximal end of the needle is tracked [14] [15] [16]. Drawbacks with such tracked freehand techniques employing sensors are the equipment costs, errors from needle bending (for systems where the base of the needle is tracked) and increased setup time. Magnetic trackers also require an environment mostly free from conductive metals and electromagnetic disturbances [17] [18].

Figure has been removed due to copyright restrictions.
Figure 1.3: UltraGuide 1000. The base unit consists of a computer (a), a magnetic field generator (b), and a display screen (c) [15].

Figure has been removed due to copyright restrictions.
Figure 1.4: UltraGuide 1000. Procedural setup: two small position sensors are used, one clamped to the needle shaft near the hub (arrowhead) and the other affixed to the side of the probe (arrow) [15].

Optical tracking is a means of determining the position of an object by tracking the positions of either active or passive markers, such as infrared light-emitting diodes (IRLEDs), attached to the object; see Figure 1.5. The position of the marker is determined using a camera system [19]. The markers can be placed on both the ultrasound probe and the needle and allow for accurate, real-time tracking. Specific challenges faced with optical trackers include sterilization concerns with markers mounted on the needle and the relatively long distance between the cameras and the markers, resulting in issues with line of sight.

Figure has been removed due to copyright restrictions.
Figure 1.5: An IRLED marker is attached to an ultrasound probe to determine the real-time position of the probe [19].

Augmented reality systems are the most sophisticated extensions of the tracked freehand idea.
A 3D augmented reality guidance system allows physicians to see directly into a patient, aided by real-time computer graphics and augmented reality technology. Augmented reality combines computer graphics with images of the real world [20], as shown in Figure 1.6. This kind of approach requires extra equipment that takes away from the flexibility, portability and cost effectiveness that ultrasound offers for image-guided interventions [21].

Figure has been removed due to copyright restrictions.
Figure 1.6: (a) The physician has inserted a needle into a lesion within a breast phantom and holds the ultrasound probe in her right hand; (b) video-see-through augmented reality head-mounted display [21].

1.4.3 Mechanical Guides

There have been many attempts to develop needle guides attached rigidly to an ultrasound probe, which allow a needle to follow a preset trajectory; see Figure 1.7. This provides accurate prediction and requires little hand-eye coordination. These types of needle guides are often low-cost, disposable clips with a single slot or multiple slot directions. These tools are often unsuccessful because they restrict the allowable range of needle motion and increase cost [22].

Figure has been removed due to copyright restrictions.
Figure 1.7: Abdominal needle guide system (Ascendia MedTech AB, Kista, Sweden).

1.5 Requirements for a Needle Tracking Device

With all the available tools and systems intended for needle tracking, there is still a need for a needle tracking device that satisfies the requirements of operators and physicians performing ultrasound percutaneous procedures. The requirements cover factors such as:
• cost
• accuracy
• ease of use
• restriction on needle motion under operator control
• adaptability for a wide range of needle gauges and lengths
• compactness
• sterility
• safety.

1.6 Brief Summary of Previous Work by our Group

Previous students under the supervision of Dr. Rohling developed a needle tracking device to address many of these requirements. A study was carried out with two stereo cameras mounted directly on the ultrasound probe for needle pose estimation [23]. The cameras observed the needle before skin puncture and used standard geometric calculations employing calibration results (camera intrinsic and extrinsic calibration, probe calibration) to show the predicted needle trajectory as an overlay on the 2D ultrasound image. This approach has the advantage that relatively low-cost cameras can be used to track the needle within a relatively small workspace near the probe, and line-of-sight should be easier to maintain over the short distances involved. The concept was feasible, but problems arose from establishing a coordinate system accurately for each camera, the ultrasound probe, and the ultrasound volume. Overall accuracy was 3.1 mm to 6.5 mm (depending on the type of cameras used) using the standard calibration methods. A new approach to the calibration problem is needed to improve accuracy with this type of needle tracking system.

1.7 Thesis Goals

Considering the strengths and weaknesses of the concept of this stereo vision approach, the following goals are set for this thesis:
1. Extend the work to 3D ultrasound.
2. Develop a better model for the conversion of image features to the location of the needle in the 3D ultrasound volume.
3. Select/design new hardware for the system.
4. Test on simulations and experiments.
5. Improve the accuracy of previous needle tracking systems.
1.8 Thesis Overview

Chapter 2 will explain the limitations of the previous work in more detail and give a conceptual overview of the calibration/modeling mathematics. It will also give possible solutions and make a recommendation for a modeling method. Chapter 3 will discuss the redesign of the previous system hardware. Chapter 4 will describe the new model in detail. Chapter 5 will show simulations and Chapter 6 will show experiments. Chapter 7 presents conclusions and recommendations.

Chapter 2: System Outline

2.1 Previous System Overview

As stated in Chapter 1, many techniques employing a variety of different tools have been introduced to aid physicians during percutaneous procedures. Recently there was an attempt to use video tracking from the ultrasound probe itself [23]. This approach was introduced and tested with 2D ultrasound. Two cameras were placed on the probe facing the puncture site. The cameras were calibrated both intrinsically and extrinsically: intrinsic calibration to calculate the camera focal length, principal center and camera distortion, and extrinsic calibration to determine the relative transformation between the two cameras' coordinate systems. For a point $A$ in space, $^{C1}A = {}^{C1}(x_A, y_A, z_A)$ and $^{C2}A = {}^{C2}(x_A, y_A, z_A)$ were defined as the positions of point $A$ measured with respect to camera one and camera two, respectively. The coordinates $^{I1}(x_A, y_A)$ and $^{I2}(x_A, y_A)$ were defined as the locations of $A$ in the first and second image coordinates. To transform point $A$ from camera 1's coordinate system to camera 2's coordinate system, a transformation matrix was required:

$$^{C1}A = {}^{C1}T_{C2}\,{}^{C2}A \qquad (2.1)$$

where $T$ is a 4 x 4 homogeneous transform determined from extrinsic calibration, consisting of a 3 x 3 rotation matrix $R$ and a 3 x 1 translation vector $D$:

$$T = \begin{bmatrix} R & D \\ 0 & 1 \end{bmatrix} \qquad (2.2)$$

Figure has been removed due to copyright restrictions.
Figure 2.1: Coordinate systems for world, cameras, camera images, ultrasound probe and ultrasound images [23].

To calculate the position of the point $A$ in space from the two camera images, a function $h$ was introduced:

$$^{C1}A = h\!\left({}^{I1}(x_A, y_A),\ {}^{I2}(x_A, y_A),\ f_1,\ f_2,\ cc,\ K,\ {}^{C1}T_{C2}\right) \qquad (2.3)$$

where $f_1$ and $f_2$ are the scalar focal lengths for camera one and two respectively, $cc$ is the camera principal center and $K$ is the camera distortion. To define the trajectory of the needle, two points along the needle are needed, labeled $N_1$ and $N_2$. To find the needle trajectory in the ultrasound image plane, several rigid-body transformations between coordinate systems were established:

$$^{U}N = {}^{U}T_{P}\,{}^{P}T_{C1}\,{}^{C1}N \qquad (2.4)$$

where $^{P}T_{C1}$ is the homogeneous transformation from camera 1 (C1) to the ultrasound probe (P) and $^{U}T_{P}$ is the homogeneous transformation from the ultrasound probe (P) to the ultrasound image (U). $^{C1}N$ is the position of the tracked point in camera 1 coordinates calculated from Equation 2.3. The method necessitated multiple calibrations through the mathematics of Cartesian coordinate systems and transformations between the systems:
■ Camera intrinsic calibration: $f$, $cc$, $K$
■ Camera extrinsic calibration: $^{C1}T_{C2}$
■ Camera to probe calibration: $^{P}T_{C1}$
■ Ultrasound image calibration: $^{U}T_{P}$

A system prototype showed the feasibility of using such a mathematical framework for stereo tracking of a needle in 3D. Although the feasibility was established, the accumulation of errors and the difficulty of the calibration steps limited the overall accuracy of the approach [23].
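As a small illustration of the homogeneous-transform chain in Equations 2.1-2.4, the following MATLAB sketch composes a camera-to-probe and a probe-to-image transform to map a point from camera 1 coordinates into ultrasound image coordinates. The rotation and translation values are placeholders for exposition only, not the calibration results of the previous system.

```matlab
% Composing homogeneous transforms as in Equation 2.4 (placeholder values).
Rz = @(a)[cos(a) -sin(a) 0; sin(a) cos(a) 0; 0 0 1];   % simple rotation about z

P_T_C1 = [Rz(pi/6), [0.05; 0.00; 0.02]; 0 0 0 1];      % camera 1 -> probe (assumed)
U_T_P  = [Rz(-pi/2), [0.00; 0.03; 0.00]; 0 0 0 1];     % probe -> ultrasound image (assumed)

C1_N = [0.01; 0.02; 0.10; 1];                          % tracked needle point, camera 1 frame

U_N = U_T_P * P_T_C1 * C1_N;                           % Equation 2.4
fprintf('Needle point in ultrasound image frame: [%.4f %.4f %.4f]\n', U_N(1:3));
```

Each calibration step in the previous system produced one such 4 x 4 matrix, and errors in any of them propagate directly through this product, which is the motivation for replacing the chain with a single learned mapping.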
The authors concluded that significant errors could arise at all the calibration steps. The overall accuracy of the system with the two sets of cameras is presented in Table 2.1. To study the sources of error, more than ten independent intrinsic calibration trials were performed; these showed that small variations in each of the intrinsic parameters arose and that these variations significantly affect the overall needle tracking accuracy, see Table 2.2. For example, in the worst-case scenario an 11.98 pixel error in cc results in a -15.75 mm position error (1.3127 × 11.98) and a 1.86° orientation error (0.1558 × 11.98). Similarly, Table 2.3 shows the level of errors expected from repeated extrinsic calibration. Moreover, any inaccuracies from the camera calibrations are propagated through to the camera-to-probe calibration and probe-to-ultrasound image calibration steps. For those tests, two different cameras were tested in the system: the Digiclops stereo vision camera (Point Grey Research, Vancouver, BC, Canada) and the Orite web-cameras (O'Rite Technology Co. Ltd, Jhonghe, Taiwan) [23]. During experiments, the two web-cameras were mounted 100 mm from the probe with a probe mount and a connector. It was observed that a higher quality camera should produce a system with improved overall accuracy, and a smaller, more compact final system would allow for easier, more flexible handling of the tracking system. The larger but more accurate Digiclops is a pre-calibrated stereo vision system that, due to its physical attributes (size and weight), required mounting at a location approximately 600 mm away from the probe. The web-cameras did not provide automatic synchronization between the two cameras and suffered from lower resolution and greater image distortion, giving poorer results overall.

We propose in this thesis, instead of the separate classical calibration steps, a single-step transformation of uncalibrated camera images of the needle to the needle pose in 3D space. To determine the 3D position of the needle without calibrating the cameras, we intend to create a database of needle positions in the 2D camera images and their resultant 3D position and orientation, and subsequently find a model that best fits the data. An approximation function is required to find the best relationship between the inputs (the location of the needle in the 2D camera images) and the outputs (the location of the needle in the 3D ultrasound volume). The challenge is to decide which features of the inputs are important and what kind of computational model is required.

Table 2.1: Overall tracking error. The perpendicular distance error d is calculated as the shortest 3D distance between the actual intersection point of the needle in the ultrasound image and the predicted trajectory [23]. Two cameras were used: a very low cost USB webcam and a more advanced, but larger, Digiclops camera from PGR.

                      USB cameras    PGR cameras
  No. of images           50             30
  Error (mm)
    Ave. d                6.5            3.1
    St. dev. d            5.7            1.8

Table 2.2: Repeatability of intrinsic calibration of USB camera 1, tested over 12 trials. The propagated position and orientation errors are calculated for the worst case [23].
  Intrinsic    Ave value    St. dev.    Propagated position    Propagated orientation
  parameter    (pixel)      (pixel)     error (mm/pixel)       error (degree/pixel)
  fc(x)        931.55       5.56        -0.0181                0.2103
  fc(y)        933.11       4.44         0.1406                0.1612
  cc(x)        347.55       11.98       -1.3127                0.1558
  cc(y)        241.24       14.57        0.087                 0.0939
  K1           -0.2615      0.0723      -0.85                  1.4564
  K2            0.1094      0.1114      -0.80                  1.4022
  K3            0.0042      0.0031      112.34                 63.91
  K4            0.0015      0.0018       25.54                 162.44

Table 2.3: Errors of extrinsic calibration of the two USB cameras, averaged over 10 trials [23].

  Extrinsic parameter    mm
  St. dev. of X          3.92
  St. dev. of Y          5.10
  St. dev. of Z          1.49

2.2 Approximation Methods

To summarize, the problem addressed in this study is to relate the 3D position of a line-object in space to its 2D position and orientation in stereo images of the line, without calibrating the system using a classical stereo camera model. More generally, considering a data set of input/output training points, the goal is to find a mapping function between the inputs and the outputs. The mapping function is a rule of correspondence between each member of an input set A and a unique member of the output set B. If we call the function $f$, then $f$ is a mapping from set A to B: $f: A \rightarrow B$. There are two main approaches to creating the mapping function $f$: parametric and nonparametric algorithms. In parametric algorithms, prior information about the form of the problem is used to define the structure of $f$, which contains a set of parameters that need to be assigned values using some numerical algorithm, usually as an optimization of an error criterion. Parametric models are fast to compute and can be effective. They are the best models to use when the form of the problem is well understood [24]. Alternatively, nonparametric methods are best when the form of the problem is not well defined. In most cases, the basic structure is defined but not the number of parameters of the final mapping function. The size of nonparametric models increases compared to parametric methods in order to effectively model the given data. In theory, it has been proven that nonparametric models are able to represent a greater function space than parametric models, but they are more difficult and more time consuming to construct [25] [26]. The methods for constructing models are known in the literature as function approximation, regression, neural networks and machine learning. The particular problem being addressed in this study is model construction when little or no prior information about the form of the problem is available. In this study, we decided to focus on nonparametric learning algorithms in order to take the opposite approach to the previous work and not assume a prior camera model.

2.2.1 Nonparametric Model Construction

Classical mathematical statistics consists mainly of parametric rules, where the statistician has some a priori information about the actual problem and can assume that the unknown probability law belongs to a parametric class of distributions [27]. In parametric learning techniques, understanding of the nature of the problem is used to define the structure and size of the approximation function. Nonparametric learning techniques are those with the ability to define their own model structure, work well with little a priori information about the structure, and have the ability to select the most important input set to effectively predict the output. The curse of dimensionality is one of the difficulties associated with nonparametric approximation functions.
The curse of dimensionality describes the problem that complexity grows exponentially with the dimension and can exceed the computational and memory storage capabilities of computers [28]. Nonparametric learning algorithms can generally be categorized into two groups: dimensionality reducing algorithms and space partitioning algorithms [25].

2.2.1.1 Dimensionality Reducing

The goal of a dimensionality reducing algorithm is to break a large approximation function into smaller segments to consequently lower the number of needed learning samples and also achieve stable parameter values. Many algorithms have been introduced to address this issue. Multidimensional Adaptive Regression Splines (MARS) approximation functions are constructed using a sum of products of spline functions [29]. Projection pursuit was created with the idea of creating the projection or projections from high- to low-dimensional space that reveal the most details about the structure of the data set [30]. Both algorithms work effectively in low dimensional cases, but their computational complexity increases rapidly with the number of dimensions. Many nonparametric neural network regression functions have also been introduced. Artificial neural networks consist of a set of interconnected nodes in which each node accepts a weighted set of inputs and produces an output. For $X = (x_1, x_2, \ldots, x_n)$, a set of inputs to the unit $U$, each input can be associated with a weight from the set $W = (w_1, w_2, \ldots, w_n)$ that represents the strength of that particular connection. These weighted inputs produce an output sum at $U$ given by

$$S = \sum_{i} w_i x_i \qquad (2.9)$$

A neural network is composed of units and the weighted connections between them, as shown in Figure 2.2 [31] [32]. The Cascade-Correlation algorithm is designed to automatically determine the network structure based on the training data [33]. This algorithm has demonstrated efficiency on high dimensional data but has also exhibited high computational complexity, and likely numerical complexity, when processing in high dimensional spaces [25].

Figure 2.2: (a) A single unit in a neural network; (b) a neural network structure [31].

2.2.1.2 Space Partitioning Algorithms

In space partitioning algorithms, the domain of the approximating function is divided into a number of non-overlapping sub-domains $D_i$. A function is then constructed for each sub-domain:

$$\forall x \in D_i: \quad y = f_i(x) \qquad (2.10)$$

In the k-nearest neighbor (kNN) algorithm, an object is classified by a majority vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors. The algorithm is used in regression by assigning the value for the object to be the average value of its k neighbors. The nearer neighbors can be made to contribute more by weighting the contribution of each neighbor. kNN is robust to noisy training data, especially when an inverse of the squared weighted distance is used instead of the distance, and is effective when the training data set is large. The disadvantages include the need to determine the value of the parameter k, the number of nearest neighbors, and the high computational cost of computing the distance from each query instance to all training samples. An indexing strategy (e.g. a K-D tree) may reduce this computational cost [34]. Nearest neighbor algorithms have shown good performance on very high dimensional and continuous problems [25].
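As an illustration of the distance-weighted kNN regression described above, the following MATLAB sketch predicts an output vector for a query from a table of training inputs and outputs. It is a minimal example for exposition only, not part of the thesis software; the placeholder data, variable names and the inverse-squared-distance weighting are assumptions.

```matlab
% Minimal distance-weighted kNN regression sketch (illustrative only).
% Xtrain: M x d training inputs (e.g. image line parameters)
% Ytrain: M x p training outputs (e.g. 3D needle coordinates)
Xtrain = rand(50, 4);            % placeholder training inputs
Ytrain = rand(50, 6);            % placeholder training outputs
xq = rand(1, 4);                 % placeholder query input
k = 5;                           % number of neighbours

d2 = sum((Xtrain - xq).^2, 2);   % squared Euclidean distance to every sample
[d2s, idx] = sort(d2);           % nearest first
nn = idx(1:k);                   % indices of the k nearest neighbours

w = 1 ./ (d2s(1:k) + eps);       % inverse-squared-distance weights
w = w / sum(w);                  % normalize the weights

yq = w' * Ytrain(nn, :);         % weighted average of the neighbour outputs
disp(yq);
```

For the needle-tracking problem, the training rows would hold the detected 2D line parameters and the corresponding 3D needle points, so the prediction is a weighted blend of the nearest recorded poses.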
2.3 Space Partitioning, Self-Organizing, and Dimensionality Reducing (SPORE)

In the previous section, we mentioned a few of the many different nonparametric learning algorithms available. Even though we have not seen an algorithm constructed for the exact problem of our 2D to 3D mapping in the previous literature, we believe many of the available algorithms could be customized for this purpose. After some consideration, the SPORE (Space Partitioning, self-Organizing, and dimensionality REducing) nonparametric learning algorithm was selected for this application. The SPORE algorithm has valuable characteristics, such as spending little computational effort on variable selection and keeping computational complexity practical even in high dimensional cases [35]. SPORE demonstrated its ability to build a high dimensional mapping (using raw pixel values as inputs) in a case of vision-based mobile robot localization [36]. SPORE is closely related to the class of algorithms called Cascade-Correlation algorithms [33] [37]. In these algorithms, structures are built using the output of the previously constructed unit as an input to the new unit. These functions aim to construct optimal structures by searching for the best set of inputs/outputs. The SPORE algorithm spends minimal computational effort on this and concentrates more on building a structure that produces a robust approximation without the costly search strategies [25]. The SPORE regression model was originally designed to address the high dimensional mappings needed for intelligent agents in the computational intelligence field. With SPORE, very little computational effort is spent on variable selection, and the model selection strategy does not explode in computational complexity as the dimension of the problem increases.

One of the important qualities of SPORE is the simplicity of the algorithm structure, as it only uses one type of structural unit. The structure is a cascade of 2D functions where the output of the cascade at each level is fed to one of the inputs of the next level. Figure 2.3 shows a diagram of a simple model. The n-dimensional regression function, $R_L$, is constructed using an L-level cascade of two-dimensional functions $g_l(\cdot)$:

$$R_L(x) = \sum_{l=1}^{L} a_l\, g_l(\cdot) \qquad (2.11)$$

where the $a_l$ are real-valued scaling factors, and the subscripts $k_0, \ldots, k_L \in \{1, \ldots, N\}$ are used to identify the input variables $(x_{k_0}, \ldots, x_{k_L})$. The equation $y = R_L(x)$ calculates the variable $y$ from the independent variables $x = (x_1, \ldots, x_n)$. A previous paper [25] proves in a theorem that if $f(x_1, \ldots, x_n)$ is a continuous function defined on a compact subset $A \subset \mathbb{R}^N$ and $g_l(\cdot)$ is a two-dimensional polynomial of finite order greater than 1, then for each $\varepsilon > 0$ there exists a finite number $L$ of functions $g_l(\cdot)$ and real numbers $a_l$ such that:

$$\left| f(x_1, \ldots, x_n) - R_L(x_1, \ldots, x_n) \right| < \varepsilon \qquad (2.12)$$

A single cascade of $g_l(\cdot)$ functions is constructed in a series of sections. A section of a cascade between level $i$ and $j$, symbolized by $^{i}R_{j}$, is defined as:

$$^{i}R_{j} = \sum_{l=i}^{j} a_l\, g_l(\cdot) \qquad (2.13)$$

When the current section of the cascade can no longer reduce the mean squared error, the learning outputs $y$ are replaced by the residual errors due to $^{i}R_{j}$ and a new section is begun, starting from the last error-reducing function $g_l(\cdot)$. During structure construction, the required learning outputs are periodically replaced by the residual errors resulting from subtracting the approximation due to the structure already constructed. Further additions to the structure are added to reduce these residual errors.
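To make the cascade idea of Equations 2.11-2.13 concrete, the following MATLAB sketch fits a toy cascade in which each level is a quadratic polynomial of two inputs (the previous level's output and one raw input), fitted to the current residual by least squares. This is a simplified illustration under assumed design choices (quadratic units, greedy level-by-level fitting, a fixed input rotation); it is not the actual SPORE implementation.

```matlab
% Toy cascade regression in the spirit of Equations 2.11-2.13 (illustrative only).
% Each level l fits a quadratic 2D unit g_l(prev, x_kl) to the current residual.
rng(0);
n = 200; d = 4;
X = rand(n, d);                                           % training inputs
y = sin(2*pi*X(:,1)) + 0.5*X(:,2).^2 + 0.1*randn(n,1);    % training outputs

L = 6;                                                    % number of cascade levels
pred = zeros(n, 1);                                       % cumulative prediction R_l(x)
prev = X(:, 1);                                           % first cascade input x_k0
coeffs = cell(L, 1);

for l = 1:L
    v = X(:, mod(l-1, d) + 1);                            % raw input x_kl for this level
    Phi = [ones(n,1) prev v prev.^2 prev.*v v.^2];        % quadratic 2D unit
    r = y - pred;                                         % residual to be reduced
    c = Phi \ r;                                          % least-squares fit of g_l
    g = Phi * c;                                          % output of this level
    pred = pred + g;                                      % accumulate a_l*g_l (a_l folded into c)
    prev = g;                                             % feed the output into the next level
    coeffs{l} = c;
end

fprintf('RMS training error after %d levels: %.4f\n', L, sqrt(mean((y - pred).^2)));
```

The key structural point the sketch shares with SPORE is that every unit takes only two inputs, one of which is the output of the previously built structure, so the per-level fitting cost does not grow with the problem dimension.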
SPORE continues this construction until the approximation error on a validation set has been reduced to an acceptable level. When all the information has been collected, SPORE is run to find the best fit of the model to the data. Validation is then performed on the test data.

Figure 2.3: A simple SPORE diagram.

Chapter 3: Component Selection and Configuration

The main physical parts of our system are the two digital cameras, and their imaging quality can affect the accuracy of the overall system. Selecting a camera with proper specifications is therefore important. The available ultrasound imaging system for this study was the Ultrasonix Sonix RP machine (Ultrasonix Medical Corp., Richmond, Canada) with a 3D/4D Ultrasonix motorized probe (4DC 7-3/40, 3-7 MHz). To have a clear line of sight and as little clutter in the images as possible, the cameras are to be placed directly on the probe. A minimal housing for the cameras could make the needle guide system an extension of a 3D/4D probe.

3.1 Camera Specifications

In selecting an appropriate camera for this study, many factors were considered:

■ Mass
Sonographers, who work all day with ultrasound probes, already complain about the weight of the ultrasound probe. To keep the tracking system light, the system weight should be no more than 10% of the weight of the probe, so that it has a minimal effect on the repetitive strain injury experienced by many ultrasound operators. Repetitive strain injury is caused by repeated overuse of and injury to the muscles of the hands, wrists, arms or shoulders.

■ Size
The cameras should be placed on the probe in such a way as to not block the physician's view or complicate the grasp on the probe. This indicates that the cameras should be placed above where the probe is held. Since our system is intended for 3D/4D imaging, it should not matter which side the cameras are placed on. A minimal housing would be required to attach the cameras to the probe. The camera dimensions should not add more than 10% to the external dimensions of the probe.

Figure 3.1: The probe dimensions: (a) front view, (b) side view.

■ Field of View
A USB camera is preferred to a NTSC camera, as it allows easy installation, without further hardware components (frame memory) and configuration. The availability of a USB interface on the Ultrasonix machine allows for easy integration for testing. ■ Infrared/Near Infrared Imaging capability Infrared (IR) radiation is electromagnetic radiation of a wavelength longer than that of visible light, and shorter than that of microwaves. The wavelength range is from about 1 millimeter down to 750 nm. The near infrared (NIR) spectrum is defined as the region of light having wavelengths between 700 nm and 2500 nm, which is longer than the human 31 visible light wavelength (380 nm to 750 nm) [38]. A useful feature for the camera is the ability to image in the IR or NIR range. The effect of the ambient lighting and shadowing on the camera images can be reduced or overcome by using an IR source near the cameras on the probe to illuminate the needle, and then filtering out the visible light. This could be important since subsequent image processing techniques can be affected by lighting. The scene imaged by a camera can be interpreted differently in different lighting and the amount of clutter visible in the image can also change. The metallic nature of the needle reflects light specularly and the reflected light can make it hard to distinguish the true needle edges. The near infrared eliminates the created highlight allowing the true edge to be seen. ■ Cost Another important factor in our design is the affordability of the final product. 3.2 Camera Options Different cameras were considered for the purpose of this study. These considered cameras can be categorized in two main groups; digital web-cameras and industrial miniature cameras. 3.2.1 Web-Cameras Considering the stated camera specifications, the most obvious choice of camera is a USB web-camera. A range of different USB web-cameras were considered for this purpose and their properties were compared. The main observed difference between the different models was the outside packaging followed by a slight difference in image resolution. A series of experiments with ORITE I-cam MC-1300 camera demonstrated 32 problems such as poor image resolution, absence of a software development kit (SDK) and inability to synchronize two cameras at the same time. Table A.1 in Appendix A shows a summary of the considered web-cameras with their features and specifications 3.2.2 Miniature Cameras The specified requirement on the size of the desired camera also guided our search towards miniature cameras. The most available and affordable type of miniature cameras in market today are the security or "spy" cameras. Security cameras come with limited resolution and limited software capabilities but are generally small, light and come with embedded lenses. A more sophisticated category of miniature cameras, comprises high performance cameras designed especially for the industrial applications. The output of these cameras is usually analyzed digitally in industrial applications so a high image resolution is a key feature of these cameras. Because of the need to perform subsequent image processing steps, these types of cameras often come with an SDK, which allows for manual control of the camera features and access to raw data through custom programming. A summary of investigated miniature cameras is presented in Table A.2 in Appendix A. 3.2.3 ARTRAY-130MMR-OP Camera Two ARTRAY-130MI-IR-OP board cameras were selected and purchased for this study. 
The ARTRAY 130MI, shown in Figure 3.3, is a 1/2" CMOS camera developed for embedded applications. The camera has dimensions of 43.5 mm (W) x 43.4 mm (H) x 30.0 mm (D) and weighs approximately 24 g. It has 1.3M pixels (1280 x 1024) and a 32 FPS frame rate. The embedded camera lens has an 8 mm focal length, which allows a maximum 60° viewing angle. The camera needs a 5 V power supply, which is supplied directly from the USB connection.

Figure 3.3: The ARTRAY-130MI-IR-OP camera.

The ARTRAY 130MI includes a complete software development kit (SDK) which allows for a capture/snapshot mode and also customized image processing on raw image files. With a raw format file, processing steps such as sharpening, color settings and contrast adjustment are all absent, and take place during the conversion to a known image file format like TIFF (high-resolution bitmapped graphics). Working with the raw format has several advantages:
• Conversion to TIFF is performed on the faster personal computer instead of the camera.
• The full range of data from the sensor is accessible.
• No data is missing due to sharpening or compression [38].
The SDK also allows custom changes to the camera, thereby allowing synchronization of the two cameras.

Chapter 4: Modeling

One of the main innovations of this work is to replace the calibration procedures and the coordinate system transformations employed by the previous work [23] with a single-step mapping of 2D position data to 3D. The proposed approach is to employ an approximation method on a database to associate needle position in 3D space with detected needle lines in the true 2D camera images. After a model is constructed, new camera images can be used as inputs to the system and the model is used to predict the 3D position in space. The offline model that is produced should cover the required workspace of needle positions and orientations.

4.1 Collecting the Data

The first step was to define the workspace needed for the application. In the current work, the workspace was considered to be a 30 mm x 30 mm flat square. This workspace allowed for needle movement from 0 to 30 mm in the lateral direction away from the probe and 15 mm side to side from the center of the probe. Figure 4.1 shows the workspace of the needle with respect to the ultrasound probe. Within this square, the needle is expected to have an angle of between 30° and 60°. A database is created of a set of needle poses (training points) covering this workspace.

Figure 4.1: (a) Side view of the workspace of the needle with respect to the ultrasound probe (±15 mm elevation, 0-30 mm lateral); (b) top view.
The approximation method should use the positions and orientations of the needle detected in the two camera images as inputs and the 3D position and orientation of the needle in space as outputs to create the model Figure 4.2: The grid with training points and test points The 2D images captured by both cameras should be processed to extract the straight-line depiction of the needle. The equation of the line for the needle can be calculated by using two points from each detected needle. This information will be the 2D data that approximation program will receive as input. 4.2 Defining the Camera Model In order to perform simulations, a camera model is needed. The ARTRAY-130MI comes with an 8mm lens which can be considered to be a thin lens. In a thin lens, the 37 4.1 1^1^1— =— +- f s s l S^ S7 Object plan Axis Focal point Camera plane Lens distance along the optical axis between the two surfaces of the lens is much smaller than the focal length of the lens so the thin lens estimate ignores optical effects due to the thickness of lenses. For a thin lens, the object (s) and image (si) distances are related by the Equation 4.1, where f is the focal length of the lens. Figure 4.3 shows the thin lens diagram. Figure 4.3: Thin lens diagram, showing the object distance s, the image distance si and the focal length f The thin lens estimate ignores the optical effects due to lens thickness. A thin lens can be approximated by a pinhole as the aperture shrinks to zero. The pinhole camera is the simplest, and the ideal model of camera function. To simplify the model, the image plane is defined as being between the focal point of the camera and the object, so that the image is not inverted. 38 Image plane coordinates (0,0) Ow World frame coordinates P(X, Y, Z) (R, D) I ( x,y) Optical Axis Oc Camera centered coordinates Figure 4.4: Perspective Projections For a point of interest, P, let / (xp ,y p ) be the coordinates of the point in the 2D image I, and (X p ,Y p ,Z p ) the coordinates of the point in 3D space. The relationship between image coordinates and space coordinates is given by f 'xp^I YP-= 4.2 Zp X p^YP I XP — fku 0 u c, - 0 X P A I YP 0 — fk, v c, 0 YP 4.3 1 0 0 1 0 Zp 1 where A # 0 is a scale factor. 39 The values (u o ,v0 ) are the coordinates of the optical center and ku ,k, (expressed in units of pixels/m) are the horizontal and vertical scales. Calculating 3D information of the needle from its 2D projections requires the inversion of a many-to-one mapping, as each pixel in the scene is imaged through perspective projection and corresponds to a ray of points. All the points in the scene which are on a particular ray are mapped to a single point in the camera image as a simple example shows in Figure 4.5. Figure 4.5: The diagram shows the single blue ray mapping two different points to a single point in the camera image. 4.3 Model Fitting The next step, after acquiring the database, is to identify the model that best fits the data. The training database contains input data, which is the position and orientation of the needle in both camera images, and output data, which is the position and orientation of the needle in the 3D space. Important factors in selecting the best technique for model creation are number of inputs/output variables and number of training data. The inputs for this system are the 2D position and orientation of the needle, considered a straight line, extracted from the camera images. To define a straight line, two points are required at a minimum. 
4.3 Model Fitting

The next step, after acquiring the database, is to identify the model that best fits the data. The training database contains input data, which is the position and orientation of the needle in both camera images, and output data, which is the position and orientation of the needle in 3D space. Important factors in selecting the best technique for model creation are the number of input/output variables and the number of training data. The inputs for this system are the 2D position and orientation of the needle, considered as a straight line, extracted from the camera images. To define a straight line, a minimum of two points is required. The most common form of straight-line equation is the "slope-intercept" form:

y = m x + b    (4.4)

where m is the slope and b is the y-intercept. In this work, we propose using a straight-line feature detector and then using Equation 4.4 to represent the detected needle in the images.

As an aside, we believe it is possible to use the actual image pixels as inputs, without line detection, as a way to create a more accurate model. The possible advantage is the elimination of needle detection errors. The main disadvantage is that actual images will likely also contain unrelated features, such as clutter in the background. To have an accurate model and account for unrelated features, we would need to increase the number of training points significantly. Furthermore, using all the image pixels directly would increase the dimensionality and the consequent training time. For this study, we decided to use the detected needle position and orientation information, to keep the number of training points feasible.

A learning file was prepared for SPORE including all the 2D and 3D information of the needle during the training period:

[m_cam1  b_cam1  m_cam2  b_cam2  x_1  y_1  z_1  x_2  y_2  z_2]

where m_cam1 and m_cam2 are the slopes of the needle in cameras one and two, b_cam1 and b_cam2 are the y-intercepts in cameras one and two, respectively, and (x_i, y_i, z_i) are the 3D coordinates of point i on the needle. In the next step, SPORE was initialized and the numbers of input and output variables were defined. No prior information on the relationship between the inputs and outputs was provided to SPORE.

4.4 Measures of Accuracy

The created model takes the position and orientation of the needle in each camera image and predicts the location of the needle in space. For the test points, the actual location of the needle is known, so it can be compared to the predicted line from the model. A measure of accuracy is needed for comparison of the predicted and actual lines in space. The generic configuration of two lines in space is a pair of skew lines: lines in space that are not parallel and do not meet, as shown in Figure 4.6.

Figure 4.6: Two skew lines in space (actual and predicted needle lines).

The error calculation was performed in two steps: difference in distance and difference in orientation. The position error was calculated as the shortest distance between the two lines and the orientation error was measured from the dot product of the two direction vectors [40]. For the position error, the minimum distance between the two lines should be calculated; the segment realizing this distance is perpendicular to both lines.

Figure 4.7: The minimum distance between two skew lines.

The distance and the orientation between two skew lines are calculated from two points x_1 and x_2 on line one (the actual line) and x_3 and x_4 on line two (the estimated line). If the lines are represented as

L_{actual} = x_1 + (x_2 - x_1)\,s    (4.5)

L_{predicted} = x_3 + (x_4 - x_3)\,t    (4.6)

then they are considered skew if

(x_1 - x_3) \cdot \left[ (x_2 - x_1) \times (x_4 - x_3) \right] \neq 0    (4.7)

The distance error is

\Delta_{Dist} = \frac{\left| (x_3 - x_1) \cdot \left[ (x_2 - x_1) \times (x_4 - x_3) \right] \right|}{\left| (x_2 - x_1) \times (x_4 - x_3) \right|}    (4.8)

The orientation error is calculated with:

\Delta\theta = \arccos\!\left( \frac{(x_2 - x_1) \cdot (x_4 - x_3)}{\| x_2 - x_1 \| \, \| x_4 - x_3 \|} \right)    (4.9)
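The two error measures follow directly from Equations 4.8 and 4.9, as the following MATLAB sketch shows for one pair of lines. The point coordinates are illustrative values, not measured data.

```matlab
% Distance (Eq. 4.8) and orientation (Eq. 4.9) error between two 3D lines,
% each given by two points. Values below are illustrative only.
x1 = [0; 0; 0];    x2 = [0; 20; 20];     % two points on the actual needle line (mm)
x3 = [0.5; 0; 0];  x4 = [0.4; 20; 21];   % two points on the predicted line (mm)

u = x2 - x1;  v = x4 - x3;  w = cross(u, v);
distErr = abs(dot(x3 - x1, w)) / norm(w);           % shortest distance, mm (Eq. 4.8)
angErr  = acosd(dot(u, v) / (norm(u)*norm(v)));     % angle between lines, deg (Eq. 4.9)
```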
Chapter 5: Simulations

The first attempt towards realization of the single-step needle tracking was to implement the algorithm in MATLAB and simulate the workspace and camera setup as a reasonably realistic model. Simulations allow rapid and convenient validation of the technique. Simulations also eliminate other real-world factors, so the fundamental aspects of the technique can be explored. In the case of needle tracking, simulation removes the effect of surrounding illumination and shadows.

5.1 Simulation Workspace

As explained in Chapter 4, we can use a pinhole model and perspective projection to represent the camera setup. This was implemented with the aid of the Epipolar Geometry Toolbox (EGT) [41]. EGT is a toolbox that, combined with MATLAB, provides a framework for the creation and visualization of multi-camera situations. The toolbox offers a large variety of easy-to-use, customizable functions to design and develop a software environment for a multi-camera set-up. It defines the perspective model with homogeneous coordinates and its projection onto the image plane. EGT also provides a simple graphical illustration of a camera which takes the defined calibration matrices and performs the perspective projection [41][42].

Figure 5.1: The EGT camera schematic.

5.2 Simulation Process

In our setup, two cameras monitor a needle as the needle moves through a defined workspace. The cameras are stationary while the needle is moving. The program needs to record the actual position of the needle in 3D space and the position of the projected needle images in the cameras. The MATLAB program is divided into five sections (a sketch of the resulting database-generation loop is given at the end of this section):

1. Initializing the variables. Variables are defined to hold the 3D points and 2D points. The size of the database is also declared at the beginning.

2. Placing the hypothetical needle at a specific point by selecting two points to characterize the line going through them. To position the hypothetical needle, a 3D line in this case, at a specific place in the MATLAB environment, two 3D points are required. One selected point is the needle tip point and the other is calculated based on the required angle.

Figure 5.2: The hypothetical needle in the MATLAB environment.

3. Calculating the perspective projection of the two points in the cameras. By feeding the two 3D points and the camera-specific values of R, D and k_u, k_v to the perspective projection function, the resultant 2D points are created.

4. Storing the two 3D points and their resultant 2D points in a table. MATLAB creates a table at the beginning and values are added to the table at each iteration.

5. If the database is not complete, return to step 2; otherwise finish.

Considering the previously defined workspace of 30mm x 30mm, a comparable space was defined in MATLAB. The purpose of running the simulation at this point was to check the feasibility of the model fitting by the database as the means for predicting the 3D position of the needle. The SPORE non-parametric learning algorithm fits a function to the selected training points. By placing the training points equally spaced from each other, the function can cover the workspace with a relatively small number of training points.
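The sketch below outlines this loop for the grid-based database. It is only a stand-in for the original EGT-based program: the camera intrinsics and placements are assumed values chosen so that the example runs, and each row of db follows the learning-file layout of Section 4.3.

```matlab
% Database-generation loop (illustrative stand-in for the EGT-based code).
f = 8; ku = 150; kv = 150; u0 = 640; v0 = 512;    % assumed intrinsics (mm, px/mm, px)
K  = [-f*ku 0 u0; 0 -f*kv v0; 0 0 1];
Rd = [1 0 0; 0 -1 0; 0 0 -1];                     % both cameras looking straight down
c1 = [-5; 0; 120];  c2 = [35; 0; 120];            % assumed camera centres (mm)
P  = {K*[Rd, -Rd*c1], K*[Rd, -Rd*c2]};            % 3x4 projection matrices

[gx, gy] = meshgrid(0:10:30, -15:10:15);          % 4 x 4 training grid (mm)
angles = 30:10:60;                                % training angles (deg)
L = 20;                                           % needle segment length (mm)
db = [];                                          % rows: [m1 b1 m2 b2 x1 y1 z1 x2 y2 z2]
for a = angles
  for k = 1:numel(gx)
    p1 = [gx(k); gy(k); 0];                       % needle tip on the workspace plane
    p2 = p1 + L*[0; cosd(a); sind(a)];            % second point fixed by the angle
    row = [];
    for c = 1:2                                   % project both points into each camera
      q1 = P{c}*[p1; 1];  q1 = q1(1:2)/q1(3);
      q2 = P{c}*[p2; 1];  q2 = q2(1:2)/q2(3);
      m  = (q2(2)-q1(2)) / (q2(1)-q1(1));         % image-line slope (Eq. 4.4)
      b  = q1(2) - m*q1(1);                       % image-line intercept
      row = [row, m, b];                          %#ok<AGROW>
    end
    db(end+1, :) = [row, p1', p2'];               %#ok<AGROW>
  end
end
```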
5.2.1 A Simulation to Verify the Feasibility of a SPORE-Created Model

A hypothetical needle was defined as a straight line in the MATLAB environment by identifying two points. Two theoretical cameras were introduced with the aid of EGT and were situated above the needle, similar to the final desired placement of the real cameras on the probe. The distance between the two cameras and their angle of rotation were determined experimentally to provide the largest shared field of view between the two cameras.

In the first experiment, the needle was moved on a straight line away from the cameras. The 40 mm distance was covered with 8 training points, each 5mm apart. The needle was kept at a constant angle during the test and the database was recorded.

Figure 5.3: A hypothetical needle is moved to the selected training points while the position and orientation of the needle are recorded both in the images and in space.

In MATLAB simulations, the workspace can be covered uniformly, as the needle can be placed on the training points without any error. To make the simulation comparable with real situations in terms of needle placement on the training points, a series of tests was conducted with additional random noise. The uniformly distributed noise ranged from 0.1mm to 5mm and was added to the position of the needle tip on the grid. Table 5.1 shows the results of the simulations.

Table 5.1: Comparison between actual and predicted line in the first simulation

  Test                                            Error in Distance (mm)   Error in Orientation (degrees)
  Perfect simulation                              0                        0
  Simulation with uniform noise (0.1mm to 5mm)    0.02                     2.98

5.2.2 A Practical Needle Workspace with Respect to the Ultrasound Probe

A database was made by positioning and moving the hypothetical needle to 16 positions on a 30mm by 30mm workspace at four different angles. The 3D positions of the two points on the needle and their perspective projections into the image planes of the cameras were stored in the database. Eight test points were chosen between the positions of the training points for fixed training angles, and six test points between training positions and between angles. The database consisted of 64 sets of inputs, covering 16 locations in space and 4 different angles at each location. The learning algorithm was run on the database and the model was examined against the test points. The chosen angles for these experiments were 30°, 40°, 50° and 60°.

Figure 5.4: (a) Two pin-hole cameras positioned in the 3D world frame looking down at the hypothetical needle, (b) the 3D needle projected onto the camera image planes.

Table 5.2: Comparison between actual and predicted line in simulation

  Test no.   Angle   Error in Distance (mm)   Error in Orientation (degrees)
  1          30      1.12                     3.53
  2          30      0.96                     1.85
  3          40      0.77                     2.04
  4          40      0.63                     0.72
  5          50      1.49                     3.10
  6          50      0.64                     0.39
  7          60      0.09                     1.68
  8          60      0.90                     1.72
  9          35      1.98                     3.13
  10         35      0.20                     2.74
  11         45      0.01                     0.25
  12         45      0.84                     4.14
  13         55      0.33                     2.67
  14         55      1.54                     3.94
  Average            0.82                     2.28

Table 5.2 shows a sample set of results from the simulations. The mean absolute error (MAE) in distance for predicting the 3D line position at fixed angles was 0.82mm and the MAE in orientation was 2.28°. The small number of training points for the simulations was chosen to test the feasibility of the system and to allow comparison with a small database in real situations. By increasing the number of training points, we can increase the accuracy significantly. For example, testing a point at 47.5° (not a member of either the original or the new training set) with the same model as in Table 5.2 produces a distance error of 1.27mm and an orientation error of 5.22°.
Using the same point with a database made from the same size of workspace, but decreasing the 10 mm distance between training points to 5 mm and decreasing the 10° angle step to 5°, the error is reduced to 0.12 mm in distance and 0.96° in orientation.

5.2.3 A Random Placement Simulation

A flexible needle tracking system would be one which can accurately track the needle with no restriction on needle position and orientation (within a reasonable workspace) and provide a trajectory in the ultrasound environment. Instead of planning the placement of the training points, the needle can be moved through the workspace at random while the database is being created. A random placement may require a larger number of training points to satisfy the level of accuracy needed. Allowing the needle to move randomly and freely through the space can result in needle positions that are either outside the workspace or nearly overlapping with other positions. By placing some reasonable restrictions on the movement of the needle we can achieve the desired result with a smaller database.

A simulation experiment was designed to check the random placement. The first restriction was to ensure the needle tip touches the simulated skin-plane of the workspace (the point of puncture). In the general case this can be relaxed to keeping the z value of one of the two points defining the line smaller than that of the other point. The second restriction, for the purpose of this feasibility study, was to keep the plane which contains the needle perpendicular to the x-axis, to avoid out-of-plane movements, as illustrated in Figure 5.5. The third constraint was to keep the needle angle between 30° and 60°. Considering the assigned limitations, a database was created with the hypothetical needle covering 5000 random points on a 30mm x 30mm square workspace with angles ranging from 30° to 60°. Figure 5.6 shows 10 sample training points and Figure 5.7 shows a comparison between a predicted and an actual line.

Figure 5.5: A restriction was placed on the random needle placement to keep the number of training points low. The needle plane was kept perpendicular to the x-axis.

Figure 5.6: Random needle placements.

Figure 5.7: The diagram shows the comparison between an actual test point and the predicted needle position from the database created with random needle placement.

The MAE in distance for a set of 15 test points was 0.01mm and in orientation was 0.2°. Removing the second restriction, to account for the possible ±10° needle movement to the side reported by previous work [23], required 9000 random points. The MAE for a new set of 15 test points was 0.04mm in distance and 0.4° in orientation.

Chapter 6: Experiments

After the simulation trials, the next step was to test the practicality of the idea on real camera images. Two experiments were conducted at this stage: the first was designed to check the feasibility of the overall model-fitting algorithm for 3D needle tracking, without ultrasound involvement, and the second was designed to check the feasibility of 3D needle tracking in an ultrasound system. This chapter explains the details and the results of these experiments.
6.1 Experiment 1: Validation of concept using optical tracking as ground truth

The purpose of the initial experiment was to track a hypothetical needle in space and locate its position and orientation using information extracted from 2D camera images. It was required that the 3D position of the needle be known as it moves through the workspace and covers the training and test points while a database is created, as well as for the final accuracy comparison. In simulations, this was not a problem, as the MATLAB database-creation program placed the needle on the training and test points exactly and the 3D information was always known. Here it must be measured accurately.

6.1.1 Apparatus and System Design

Optical tracking of active markers can provide an accurate method for needle tracking. The OPTOTRAK 3020 (NDI, Waterloo, Ontario, Canada) optical tracker was chosen and two active markers were attached to the needle to provide the gold standard for the position of the needle in 3D space (Figure 6.1). OPTOTRAK uses infrared emitting diodes (IREDs) to calculate the 3D position of the object carrying the markers. The system has an RMS error of 0.1 mm in the x-direction and y-direction and 0.15mm in the z-direction. The OPTOTRAK system consists of three linearly mounted cameras that track the IRED markers.

Figure has been removed due to copyright restrictions.
Figure 6.1: The OPTOTRAK 3020 system components (Northern Digital Corporation).

The IRED markers are tracked with respect to the OPTOTRAK coordinate system, which is located at the center of the middle OPTOTRAK sensor. The z-direction of the OPTOTRAK coordinate system is aligned approximately along the center lens axis, the y-direction points vertically upwards from the lens axis, and the x-direction points horizontally from the lens.

It should be noted that the OPTOTRAK system is used in this experiment solely for testing the proposed needle tracking concept, not as a component of the actual clinical system. The OPTOTRAK system can record the position of an IRED marker as long as it is viewed at an angle within ±60° (within ±85° the markers are still visible, but additional error is introduced beyond 60°) [43]. The 3D location of each IRED that is calculated with the OPTOTRAK system has an offset value compared to the location of the surface where the IRED is attached. The average IRED offset value was found to be 2.4mm, but as long as this offset is constant during the experiment and the IRED is maintained approximately perpendicular to the OPTOTRAK cameras, it has a negligible influence on the accuracy of these experiments [44].

The test apparatus consisted of an aluminum base, the two ARTRAY cameras, an OPTOTRAK and two markers. The aluminum base, representing the probe mounting, was constructed to hold the cameras in a position similar to the desired placement of the cameras on an ultrasound probe. To allow for flexibility and modifications of the workspace, the aluminum base was designed with moveable camera mounts. This allowed for changes in camera height from the base, camera angle relative to the base, and the distance between the two cameras. The downward orientation of the cameras also limited the amount of clutter in the images and increased the robustness and accuracy of needle detection. Figure 6.2 shows the constructed aluminum base together with a schematic representation.

Figure 6.2: (a) The actual aluminum base, (b) schematic representation of the aluminum base.
To model a medical needle in this preliminary experiment, a 3 mm diameter steel rod was used so that needle flexion would be minimal. To account for the needle thickness and to achieve a true needle trajectory, the equation of the needle center-line was needed. To calculate the 3D position of the needle, the 3D positions of at least two points on the needle center-line are required. In this experiment, we considered the line passing through the centers of the two IRED markers as the true needle center-line. The position of the markers was fixed for the entire experiment to achieve consistent results. Figure 6.3 shows a simple diagram of a needle and the two attached markers.

Figure 6.3: The needle with markers.

To design the aluminum base, the ARTRAY camera FOV and the height of the Ultrasonix probe were considered. The cameras need to be situated such that the FOV overlap of the two cameras covers the entire workspace and some part of the needle. The ARTRAY camera with an 8 mm lens has a horizontal FOV of 33° and a vertical FOV of 40°. To increase the possible overlap, the cameras had to be placed 30 mm apart with their optical axes at a 20° angle to each other. Figure 6.4 shows a diagram of the cameras and their FOV. A device was also designed to hold the needle and facilitate changing the needle positions and orientations (Figure 6.5). The device used a hinge and was held stationary by a clamp arm (Figure 6.6). The apparatus was placed such that both markers were visible to the position sensors of the OPTOTRAK but not in the cameras' FOV.

Figure 6.4: The FOV overlap of the two cameras. The red line is the minimum distance to achieve 40 mm coverage with both cameras.

Figure 6.5: A device to hold the needle stationary.

Figure 6.6: A schematic design of the needle holder device with clamp arm.

6.1.2 Camera Image Processing

The camera SDK captures the images at 15 fps. The needle-tracking program for camera images was implemented in C++ for real-time operation and compatibility with the original ARTRAY camera SDK. To detect the needle in the camera image, the following steps were carried out at each frame (a rough MATLAB equivalent is sketched at the end of this subsection):
■ Take a snapshot of the image in raw format
■ Convert the raw image into a gray-scale image
■ Perform Gaussian smoothing on the gray-scale image
■ Run the Canny edge detector
■ Apply the Hough transform to the detected edges
■ Perform a thresholding algorithm to eliminate weak edges
■ Apply a least-squares fit to the remaining edges
■ Draw the estimated needle line over the original image
■ Calculate the equation of the line using two points from the detected line
■ Store the 2D information of the needle in a table

Figure 6.7 illustrates a screen shot of the needle in the camera images overlaid with the estimated line.

Figure 6.7: A screen shot of the two camera images and the estimated needle lines shown as black dashed lines; the detected needle is superimposed on the original image.
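The thesis detector is implemented in C++ against the ARTRAY SDK; as a rough illustration of the same per-frame steps, a MATLAB (Image Processing Toolbox) equivalent might look as follows. The file name is hypothetical and the final least-squares refinement of the edge pixels is omitted.

```matlab
% Rough MATLAB equivalent of the per-frame needle detection steps (illustrative).
I = imread('frame_cam1.tif');                  % snapshot converted from raw (hypothetical file)
if size(I,3) == 3, I = rgb2gray(I); end        % grey-scale conversion
I = imfilter(I, fspecial('gaussian', 5, 1));   % Gaussian smoothing
E = edge(I, 'canny');                          % Canny edge detection
[H, theta, rho] = hough(E);                    % Hough transform of the edge map
pk = houghpeaks(H, 1);                         % keep only the strongest line (weak edges rejected)
ln = houghlines(E, theta, rho, pk);            % end points of the dominant line
p1 = ln(1).point1;  p2 = ln(1).point2;         % two points on the detected needle
m  = (p2(2) - p1(2)) / (p2(1) - p1(1));        % slope of Equation 4.4
b  = p1(2) - m * p1(1);                        % y-intercept of Equation 4.4
% (the thesis additionally applies a least-squares fit to the remaining edge
%  pixels before extracting the line; that refinement is not shown here)
```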
6.1.3 OPTOTRAK Experiment Method

After setting up the apparatus as explained in the previous section, the next step was to create the database. A 30mm x 30mm square with 10mm separation between the grid lines, drawn on a transparent sheet, provided 16 points which we used as training points. The sheet was placed and secured on the aluminum base, within the overlapping view of both cameras. Test points could be selected anywhere on the grid other than at the training points.

Similar to the procedure in the simulations, four different angles (30°, 40°, 50° and 60°) were selected as training angles. The needle holder was fixed at the first angle, 30°, and the needle was placed and secured on the top plate of the needle holder such that the needle tip would just touch the surface. Two IRED markers were attached to the needle and centered visually on the axis of the needle. The height and position of the OPTOTRAK were adjusted such that both markers would be visible to the OPTOTRAK sensor anywhere the needle moved throughout the workspace and at all the selected angles.

The needle holder was placed on a training point such that the line going through the two IRED markers lay over the grid line at that specific training point. Next, the camera needle-detection program was run to capture one frame, and the captured camera image was processed as described in Section 6.1.2 to calculate the position and orientation of the needle with respect to the image plane. At the same time, the 3D position of each marker with respect to the OPTOTRAK coordinate system was recorded from the OPTOTRAK. This procedure was repeated for each grid point and for the four needle angles. The training database consists of 64 sets of inputs, covering 16 locations in space and 4 different angles (30°, 40°, 50° and 60°) at each location. The ±10° pitch angle was not considered in the experiments, as the random-placement simulation showed that the resulting errors were still less than the previously achieved position errors of the system (see Section 1.6).

Two different sets of test points were created for this experiment:
■ Using training angles but at new points on the grid (not part of the training set). These test points were chosen to check the accuracy of the model at different positions.
■ Using new angles and new points. This second set of test points was planned to test the model with new needle positions and new needle orientations.

Figure 6.8 shows the workspace diagram with test points and training points. The SPORE program is executed over the database to build the model. For analyzing the model, only the input information of the test points (the position and orientation of the needle in each camera image) is used, and the output information (the position and orientation of the needle in 3D space) is used as a comparison with the predicted values from the model. Figure 6.9 shows a flowchart of the steps in the OPTOTRAK experiment.

Figure 6.8: The workspace with 16 training points and test points.

Figure 6.9: A flow chart for the OPTOTRAK experiment (position the needle at a training point on the grid; capture an image in both cameras; detect the needle in the camera images; calculate the position and orientation of the needle in each camera; read the 3D position of each marker from the OPTOTRAK; enter the 3D and 2D positions in a database; run the learning algorithm to construct the model; check the model against test points; perform error checking).
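Each iteration of this procedure contributes one row to the learning file of Section 4.3, pairing the detected image lines with the two OPTOTRAK marker positions. The fragment below illustrates how such a row could be assembled; all numeric values and variable names are illustrative, not measured data.

```matlab
% Assembling one training row (illustrative values only).
trainingDB = [];                          % rows: [m1 b1 m2 b2  x1 y1 z1  x2 y2 z2]
m1 = -0.42; b1 = 512.7;                   % detected line in camera 1 (slope, intercept)
m2 =  0.35; b2 = 488.1;                   % detected line in camera 2
marker1 = [12.3; -4.1; 857.2];            % 3D IRED position, OPTOTRAK frame (mm)
marker2 = [18.9; -4.0; 839.6];            % second IRED; their join defines the centre-line
trainingDB(end+1, :) = [m1 b1 m2 b2 marker1' marker2'];
```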
The majority of the errors were found to arise from compensating for tip position changes introduced by the needle holder as the needle angles were changed in the last 6 points. Another source of errors arose from estimating the predicted needle line in the camera images as a result of a finite needle thickness. Table 6.1: Comparison between actual and predicted line in experiments with OPTOTRAK Test no. Angle Error in Distance (mm) Error in Orientating (de . rees) 1 30 0.140 0.31 2 30 0.967 4.41. 3 40 1.321 1.77 4 40 1.826 3.66 5 50 0.388 1.53 6 50 0.893 0.61 7 60 0.772 0.93 8 60 0.621 0.49 9 35 7.193 1.74 10 35 4.642 6.56 11 45 0.516 3.19 12 45 3.763 6.85 13 55 5.921 3.61 14 55 4.734 2.75 Average 2.4 2.61 6.2 Experiment 2: Validation of the concept using ultrasound This experiment was designed to test the concept in a more realistic ultrasound- guided needle biopsy application. The promising results from the previous experiment were hoped to be achieved again in the new environment. For the ultrasound experiments there are two options: 66 1. Construct the database with an OPTOTRAK and optical markers during the training phase and use 4D ultrasound for evaluation and testing. 2. Construct the database and test solely with ultrasound. Compared to ultrasound, it is easier to make the database with OPTOTRAK measurements in air because the optical markers directly measure the exact location of two points on the needle. With ultrasound, the needle has to be identified and measured in the ultrasound volume. Conversely, the problem with OPTOTRAK measurements arises from the additional transformation required between the OPTOTRAK and the ultrasound coordinate systems. The goal was to produce a one-step mapping and this study was the first attempt to extend the application of one-step un-calibrated 3D video tracking of a needle to the ultrasound system without using an OPTOTRAK or a similar device. We decided to take the second option and use the ultrasound to make the database by tracking a needle partially submerged in a water bath with 3D/4D ultrasound and two miniature cameras. To make the database, the 3D position of the needle with regards to the ultrasound volume and 2D position and orientation of the needle in each camera image are needed. 6.2.1 Apparatus The test apparatus consisted of the same rigid aluminum base, the two cameras, and the SonixRP ultrasound system (Ultrasonix Medical Corporation, Richmond, BC, Canada) with the Ultrasonix 4D motor-driven curved array probe and a water bath. Figure 6.10 shows the SonixRP ultrasound system and probe and Figure 6.11 illustrates a simple diagram of the ultrasound experiment setup in a water bath. 67 Camera Probe Figure has been removed due to copyright restrictions. (a)^(b) Figure 6.10: (a) Ultrasonix SonixRP ultrasound System. (b) 3D/4D motorized probe (courtesy of Ultrasonix Medical Corporation, Richmond, Canada) Needle Water bath Figure 6.11: Ultrasound experiment setup with cameras, needle and the probe inside the water bath 68 BR +anent rtlo. Shen 'ant Mar^514 NPR gad Background - drag •danonto • YI RA.NrInq  * Enable IAFR Sad Cogroat 4M PIa1126 VP The aluminum base was adjusted such that the probe will fit in a position to have the cameras placed as if they were on the probe. The workspace was located inside the water bath and within the field of view of the probe. To model a medical needle, in these experiments the same 3mm diameter steel rod was used so that needle flexion is minimal. 
Figure 6.12 shows an ultrasound image of the needle in the water bath.

Figure 6.12: A 3D needle image in the water bath. The Ultrasonix 3D/4D imaging software allows for parameter adjustment on the touch screen to optimize the image.

6.2.2 Ultrasound Experiment Method

In the present experiment, the database was collected as the needle was moved in a straight line away from the probe, and both the training and test data were recorded. The same real-time C++ needle detection program as described in Section 6.1.2 was used on the images captured by the cameras, to detect the needle and to calculate the equation of the line in each camera image.

There was no access to the image processing pipeline of the ultrasound system to allow for display of the 3D needle pose on the ultrasound volume in real-time. Such a display is needed for clinical use, but it is not needed here to validate the accuracy of the new tracking concept. To monitor the needle in 4D ultrasound, a region of interest was first defined on the normal 2D ultrasound image, which allowed easier manipulation of the ultrasound volume. The selected ultrasound volume was then downloaded for further processing. The ultrasound volume collected for each 3D position of the needle in the water bath was read into the MATLAB environment. The volume of interest was pre-processed with thresholding and then analyzed with the Random Sample Consensus (RANSAC) algorithm to detect the needle. The RANSAC algorithm was used because of its ability to reject outliers, which is needed to avoid the influence of artifacts. Figure 6.13 shows a flow chart of the steps used in this experiment with ultrasound.

Figure 6.13: A flow chart for the ultrasound experiment (position the needle at a training point on the grid; download the 3D ultrasound volume; read the ultrasound volume into MATLAB; detect the 3D needle in the ultrasound volume and calculate its position and orientation; capture an image in both cameras; detect the needle in the camera images and calculate its position and orientation in each camera; enter the 3D and 2D positions in the database; run the learning algorithm and construct the model; check the model against test points; perform error checking).
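The thesis does not give the RANSAC parameters, so the following MATLAB sketch should be read only as an illustration of this kind of 3D line fit: synthetic points stand in for the thresholded ultrasound voxels, and the iteration count and inlier tolerance are assumed values.

```matlab
% Illustrative RANSAC fit of a 3D line to thresholded voxel coordinates.
rng(0);
t   = rand(200,1);
pts = [10 5 0] + t*[0 20 30] + 0.3*randn(200,3);   % noisy points along a line (mm)
pts = [pts; 40*rand(50,3)];                        % outliers from artefacts

nIter = 500;  tol = 1.0;                           % assumed RANSAC parameters (mm)
best = struct('count', 0, 'p', [], 'd', []);
for it = 1:nIter
    s = pts(randperm(size(pts,1), 2), :);          % two random sample points
    d = (s(2,:) - s(1,:));  d = d / norm(d);       % candidate line direction
    v = pts - s(1,:);                              % vectors from the line point
    distLine = vecnorm(v - (v*d').*d, 2, 2);       % point-to-line distances
    inl = distLine < tol;
    if nnz(inl) > best.count
        best = struct('count', nnz(inl), 'p', s(1,:), 'd', d);
    end
end
% refine with the inliers: principal direction of the inlier cloud
v   = pts - best.p;
inl = vecnorm(v - (v*best.d').*best.d, 2, 2) < tol;
c   = mean(pts(inl,:), 1);
[~, ~, V] = svd(pts(inl,:) - c, 'econ');
needle_dir = V(:,1)';                              % refined needle direction
```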
6.2.3 Results

After model creation, the model was tested against an independent set of test points for validation, and the predicted and actual trajectories were compared. Figure 6.14 shows the needle pre-processed with thresholding and after the RANSAC algorithm. The outcomes were analyzed and Table 6.2 shows the best and average errors. Although care must be taken when comparing these results to the previous method using classic camera calibration (for example, different cameras were used in [23]), this accuracy is better than the accuracy produced using classic camera calibration.

Figure 6.14: (a) The 3D needle from the ultrasound volume after initial thresholding. (b) The predicted needle line and the inliers after the RANSAC algorithm.

Table 6.2: Comparison between actual and predicted line in experiments with ultrasound

  Test                             Error in Distance (mm)   Error in Orientation (degrees)
  Best result achieved             0.23                     0.25
  Average result over ten trials   0.94                     3.93

Chapter 7: Discussion and Conclusions

7.1 Discussion

From the MATLAB simulations in Section 5.2.1, we showed how a well-defined database can provide both accurate position and orientation estimation and robustness to random measurement noise. The second simulation experiment, in Section 5.2.2, on a 30mm by 30mm workspace, was designed to simulate the real workspace of the needle and the probe. With a small database of 64 training points, we achieved an MAE of 0.82mm in distance and 2.2° in orientation. This error was further reduced to 0.12 mm in distance and 0.96° in orientation by increasing the number of training points to 343. Section 5.2.3 also showed the feasibility of creating a database through random movements of the needle in the workspace, although a larger number of training points is needed. An MAE of 0.01mm in distance and 0.2° in orientation was achieved with a database of 5000 samples; a 9000-sample database created through random movement achieved an MAE of 0.04mm in distance and 0.4° in orientation. The approach taken by [23] for off-line 2D needle tracking with calibrated cameras reported an overall position error of 3.1 ± 1.8 mm with two commercial cameras and an error of 6.5 ± 5.7 mm with two inexpensive consumer cameras.

In Section 6.1.4, experimental results using the OPTOTRAK position sensing device proved the practicality of our algorithm in a real environment. An MAE of 2.4mm in distance and 2.61° in orientation was promising. The results in Section 6.2.3 were from the first attempt to extend the application to an ultrasound system. The average error of 0.94mm in distance and 3.93° in orientation is sufficiently small for many percutaneous procedures.

7.2 Contributions

There is a clinical need for an accurate, simple and inexpensive needle guide for ultrasound-guided needle insertions. Previous work developed a novel approach using stereo cameras placed directly on the ultrasound probe to track the needle pose with respect to the ultrasound image. That approach was done with 2D ultrasound and classic camera calibration methods, and the accuracy was limited by all of the calibration steps. The main objective of this work was to examine the feasibility of tracking the needle with respect to a 3D ultrasound probe by direct mapping of the needle location in the stereo images to the ultrasound volume. In other words, classical calibration methods were replaced with a nonlinear functional approximation.

In short, we proposed a system with two miniature cameras mounted on the probe and a one-time, off-line creation of a system model. The model is subsequently used in real-time on newly captured stereo images to predict the location of the needle in space. A database was constructed of needle positions and orientations in 3D space and in the 2D camera images. The SPORE learning algorithm was used on the database to create the best-fit model. The model was then evaluated against independent test points to show that poses that were not part of the training could be mapped sufficiently accurately by the model. The process of database creation only needs to be done once, as long as the location and orientation of the cameras with respect to the probe do not change. The off-line creation of the database and model means that the system can be used without the need for additional time for system set-up or calibration.

In our work, by constructing the database directly in the ultrasound environment, we eliminate the need for intermediate coordinate transformations. This is facilitated by the use of 3D ultrasound, whereby the 3D pose of the needle can be observed directly in the volume at the training stage.
With 2D ultrasound, the needle must be aligned carefully with the plane of the ultrasound image. 3D ultrasound technology continues to develop and improve, making 3D ultrasound more widespread. This means the proposed method has wide applicability. The proposed system can be manufactured economically, since the only costly parts of the system are the two cameras, which we purchased for less than $1000. A simple housing is also required to reproducibly secure the cameras on the probe. Using USB cameras in the system eliminated the need for frame grabbers and any kind of hardware dependency or installation, so it can be installed on any ultrasound machine with a USB interface. The system does not need an extra power cable because the cameras draw their power through the USB cable. This keeps the cabling to a minimum.

7.3 Limitations and Difficulties

An important aspect of this study was specifying and choosing a suitable camera. A small camera with high resolution, a reasonable price, a wide FOV and a USB interface was chosen. The board lens implemented on the chosen ARTRAY cameras showed small lens distortions, but these distortions did not need to be modeled explicitly for our tracking approach. In practice, the curvature of a long thin needle could introduce additional error into the system, since a straight needle is assumed. The system was designed such that the cameras captured images close to the puncture site and the needle pose was calculated from the portion of the needle near the skin. Therefore, as long as this part of the needle stays straight, the error should be negligible. Another source of error arose as a result of the finite thickness of the needle in the images. Care must be taken to extract the midline of the needle.

7.4 Future Directions

A simple apparatus needs to be designed so that the database can be created automatically or semi-automatically. Currently this is a labour-intensive step, although it only needs to be performed once. The final tracking system should be implemented as an attachment to a 3D/4D probe so that it can be mounted on the probe when required and removed when not in use. Figure 7.1 shows the proposed location of the attachment. Our work was on needle tracking in ultrasound, but it was not designed for a particular clinical procedure. Different kinds of procedures may require a different workspace, and the system model and apparatus may need to be adjusted accordingly. Finally, implementation of a graphical user interface is needed on the ultrasound monitor for clinical evaluation.

Figure 7.1: A simple schematic of the two miniature cameras mounted directly on a 3D/4D ultrasound probe.

References

[1] Fenster A, Surry K, Smith W, Gill J, Downey D. 3D ultrasound imaging: application in image guided therapy and biopsy. Computers and Graphics, 2002, Vol. 26(4), pp. 557-568.
[2] Nelson TR, Pretorius DH. Three-dimensional ultrasound imaging. Ultrasound in Medicine and Biology, 1998, 24(9): 1243-1270.
[3] Prager RW, Gee AH, Treece GM, Cash CJC, Berman LH. Sensorless freehand 3-D ultrasound using regression of the echo intensity. Ultrasound Med Biol, 2003, 29, pp. 437-446.
[4] Fenster A, Surry KJM, Mills GR, Downey DB. Ultrasound guided breast biopsy system. Ultrasonics, 2004, Vol. 42(1-9), pp. 769-774.
[5] Shah S, Mayberry J, Wicks A. Liver biopsy under ultrasound control: implications for training. Gut, 2000, 46(4): pp. 582.
[6] Liu J, Fornage B, Edeiken-Monro B. The reliability of ultrasound-guided core needle biopsy (USCNB) in the evaluation of non-palpable solid breast lesions. Laboratory Investigation, 1999, 79(1): 26A.
[7] Gupta S, Rajak C, Sood B. Sonographically guided fine needle biopsy of abdominal lymph nodes: experience in 102 patients. J Ultrasound Med, 1999, 18(2), pp. 135-139.
[8] Boland G, Lee M, Mueller P. Efficacy of sonographically guided biopsy of thyroid masses and cervical lymph nodes. AJR Am J Roentgenol, 1993, 161(5), pp. 1053-1056.
[9] Cardinal E, Chhem R, Beauregard C. Ultrasound-guided interventional procedures in the musculoskeletal system. Radiol Clin North Am, 1998, 36(3), pp. 597-604.
[10] Kochavi E, Goldsher D, Azhari H. Method for rapid MRI needle tracking. Magnetic Resonance in Medicine, 2004, Vol. 51, pp. 1083-1087.
[11] Prager RW, Gee AH, Treece GM, Berman LH. Freehand 3D ultrasound without voxels: volume measurement and visualization using the Stradx system. Ultrasonics, 2002, 40(1-8), pp. 109-115.
[12] Bradley MJ. An in-vitro study to understand successful free-hand ultrasound guided intervention. Clin Radiol, 2001, Vol. 56, pp. 487-490.
[13] Abolhassani N, Patel R, Ayazi F. Effects of different insertion methods on reducing needle deflection. 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2007, pp. 491-494.
[14] Banovac F, Glossop N, Lindisch D, Tanaka D, Levy E, Cleary K. Liver tumor biopsy in a respiring phantom with the assistance of a novel electromagnetic navigation device. In Proceedings of the 5th Int. Conference on Medical Image Computing and Computer-Assisted Intervention, London, UK, Springer-Verlag, 2002, pp. 200-207.
[15] Howard MH, Nelson RC, Paulson EK, Kliewer MA, Sheafor DH. An electronic device for needle placement during sonographically guided percutaneous intervention. Radiology, 2001, Vol. 218, pp. 905-911.
[16] Krombach GA, et al. US-guided nephrostomy with the aid of a magnetic field-based navigation device in the porcine pelvicaliceal system. J Vasc Interv Radiol, 2001, Vol. 12, pp. 623-628.
[17] Leotta DF, Detmer PR, Martin RW. Performance of a miniature magnetic position sensor for three-dimensional ultrasound imaging. Ultrasound in Medicine and Biology, 1997, Vol. 23(4), pp. 597-609.
[18] Cochlin D, Dubbins P, Goldberg B, Halpern E. Urogenital Ultrasound: A Text Atlas, 2nd edition. Informa Healthcare, 2006, pp. 402-404.
[19] Meeks SL, Buatti JM, et al. Ultrasound-guided extracranial radiosurgery: technique and application. International Journal of Radiation Oncology, Biology, Physics, 2003, Vol. 55(4), pp. 1092.
[20] Kettenbach J, Kronreif G, Figl M. Robot-assisted biopsy using ultrasound guidance: initial results from in vitro tests. Eur Radiol, 2004, Vol. 13(1), p. 576.
[21] Rosenthal M, State A, Lee J, Hirota G, Ackerman J, Keller K, Pisano ED, Jiroutek M, Muller K, Fuchs H. Augmented reality guidance for needle biopsies: an initial randomized, controlled trial in phantoms. Medical Image Analysis, 2002, Vol. 6(3), pp. 313-320.
[22] Sabel MS, Staren ED. Innovations in breast imaging: how ultrasound can enhance the early detection of breast cancer. Medscape Womens Health, 1997, Vol. 2(7): 1.
[23] Chan C, Lam F, Rohling R. A needle tracking device for ultrasound guided percutaneous procedures. Ultrasound in Medicine & Biology, 2005, Vol. 31(11), pp. 1469-1483.
[24] Arthur G, Seber F. Nonlinear Regression. Wiley-IEEE, 1989, pp. 768.
[25] Grudic G. Nonparametric Learning From Examples in Very High Dimensional Spaces. PhD thesis, Department of Electrical and Computer Engineering, University of British Columbia, August 1997.
[26] Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag, 2001, pp. 40-52.
[27] Gyorfi L. Principles of Nonparametric Learning. Springer, Vienna, 2002, pp. 45-50.
[28] The Curse of Dimensionality. 5th Online World Conference on Soft Computing in Industrial Applications (WSC5), held on the internet, September 4-18, 2000; Friedman JH. Multivariate adaptive regression splines. Ann. Stat., 1991, Vol. 19(1), pp. 1-141.
[29] Friedman JH, Stuetzle W. Projection pursuit regression. J. Amer. Statist. Assoc., 1981, Vol. 76, pp. 817-823.
[30] Russell I. Neural networks in the undergraduate curriculum. The Journal of Computing in Small Colleges, April 12-13, 1991, Austin, Texas.
[31] Rumelhart D, Hinton G, Williams R. Learning internal representations by error propagation. In Neurocomputing, Cambridge, MA: MIT Press, 1988, pp. 675-695.
[32] Fahlman S, Lebiere C. The cascade-correlation learning architecture. In Advances in Neural Information Processing Systems, Touretzky (ed.), Morgan-Kaufmann, 1990.
[33] Teknomo, Kardi. K-Nearest Neighbors Tutorial. http://people.revoledu.com/kardi/tutorial/KNN1
[34] Grudic G, Lawrence PD. Is nonparametric learning practical in very high dimensional spaces? Fifteenth International Joint Conference on Artificial Intelligence (IJCAI-97), Nagoya, 1997, pp. 804-809.
[35] Grudic G, Lawrence PD. A nonparametric learning approach to vision based mobile robot localization. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS98), 1998, Vol. 2, pp. 724-729.
[36] Ivakhnenko AG. Polynomial theory of complex systems. IEEE Transactions on Systems, Man and Cybernetics, 1971, SMC-1(4), pp. 364-378.
[37] Starr C. Biology: Concepts and Applications. Thomson Brooks/Cole, 2005, pp. 94.
[38] Cooper K. The RAW digital image format, 2006; http://www.northlight-images.co.uk/article_pages/why_use_raw.html
[39] Gellert W, Gottwald S, Hellwich M, Kastner H, Kunstner H. VNR Concise Encyclopedia of Mathematics, 2nd Ed. New York, 1989, pp. 538.
[40] Mariottini GL, Alunno E, Prattichizzo D. The Epipolar Geometry Toolbox (EGT) for Matlab. Technical Report 07-21-3-DH, University of Siena, July 2004, Siena, Italy.
[41] Mariottini GL, Alunno E, Prattichizzo D. EGT: a toolbox for multiple view geometry and visual servoing. IEEE Robotics & Automation Magazine, No. 12, 2005.
[42] Crouch DG. "NDI-TB-0021: Design and manufacturing tools incorporating IRED markers, Rev. 002," tech. rep., Northern Digital Inc., 1995.
[43] Flaccavento G, Lawrence PD, Rohling RN. Patient and probe tracking during freehand ultrasound. Lecture Notes in Computer Science, Proc. MICCAI, 2004, pp. 585-593.

Appendix A

Table A.1: Webcam selections and properties (camera images have been removed due to copyright restrictions)

  Name of Product           Resolution               Optics (focus range)   Frame Rate
  Orite iCam                640 x 480                50 mm to infinity      15 fps
  LOGITECH                  640 x 480                40 mm to infinity      30 fps
  Digipix 500K USB WebCam   352 x 288                50 mm to infinity      24 fps
  ZoomCam                   640 x 480 / 352 x 288    50 mm to infinity      15 fps / 30 fps
Table A.2: Miniature camera selections and properties (camera images have been removed due to copyright restrictions)

  Name          Resolution                        Size & Weight                             Image Sensor   Frame Rate                    Lens
  XS617R        NTSC: 510 x 492, PAL: 500 x 582   48 x 45 x 30 mm                           1/3" CCD       N/A                           built-in 3.6mm lens or optional; built-in 6-unit IR-LED
  Videology     1280 x 960                        32 x 32 x 16 mm, <100g                    1/2" CMOS      15 fps comp., 8 fps uncomp.   CS mount
  ARTCAM 34CM   640 x 480                         12 x 36 mm, 20 g without cable/lens/CCU   1/7" CCD       24 fps                        pinhole lens
  Elmo UN43H    470 TVL                           18 g without cable/lens/CCU               1/3" CCD       N/A                           built-in 12mm lens or optional
  ARTCAM 130    1280 x 1024                       43 x 43 x 30 mm, 24 g                     1/2" CMOS      32 fps (SVGA)                 8mm board lens

Appendix B

The following publications have arisen from this work:

S. Khosravi, R. Rohling, P.D. Lawrence. Needle guidance using ultrasound and camera images. IEEE International Ultrasonics Symposium, pp. 2251-2254, New York, NY. October 2007.

S. Khosravi, R. Rohling, P.D. Lawrence. One-step needle pose estimation in ultrasound guided biopsies. 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 3343-3346, Lyon, France. August 2007.
