Single camera closed-form real-time needle trajectory tracking for ultrasound

Mohammad Najafi and Robert Rohling
Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada

ABSTRACT

In ultrasound-guided needle insertion procedures, tracking the needle relative to the ultrasound image is beneficial for needle trajectory planning and guidance. A single-camera closed-form method is proposed for automatic real-time trajectory tracking with a low-cost camera mounted directly on the ultrasound transducer. The camera is calibrated to the ultrasound image coordinates. Mounting the camera on the transducer reduces visual obstruction and increases tracking accuracy compared to camera-tracking systems with cameras mounted at a fixed location. Compared to previous work with stereo cameras, a single camera further reduces cost, complexity and size, but requires a needle with known markings. The proposed solution uses the depth markings etched on many common needles (e.g. epidural needles). A fully automatic image processing method has been developed for real-time identification of the needle trajectory using a novel closed-form solution based on three identified markings and the camera's intrinsic calibration parameters. The trajectory of the needle relative to the ultrasound image is calculated and displayed. Validation compares the calculated intersection of the needle trajectory with the ultrasound image against the depiction of the actual needle intersection in the image. The overall error is 3.0 ± 2.6 mm for a low-cost 640×480 pixel USB camera.

Keywords: Needle Tracking, Single Camera, Ultrasound Guidance, Closed-Form

1. INTRODUCTION

Tool tracking is beneficial for many percutaneous procedures, especially needle insertions such as epidural anesthesia, biopsy, RF ablation and drug delivery to locations deep in tissue. When using ultrasound guidance, the pose of the tool should be calculated with respect to the coordinate system of the ultrasound image so that the trajectory of the tool can be displayed to the operator on the image. In this way, the tool trajectory can be adjusted before skin puncture to intersect the target (Figure 1). The tracking system should ideally be low-cost, accurate and unobtrusive. In this paper, we focus on needle tracking, although the results can be extended to other tools.

Figure 1. Trajectory planning in an ultrasound-guided needle insertion procedure. The needle should intersect the ultrasound plane at a specific target point.

There are different types of tracking methods used in medical applications, such as magnetic, mechanical and optical systems. However, magnetic trackers are affected by nearby metallic objects and require a fixed field generator, while mechanical guides and trackers limit the freedom of movement. Camera-based optical trackers do not have these drawbacks, but require relatively expensive cameras to achieve high accuracy over a large workspace, and require a line-of-sight between the tool and the cameras (usually mounted on a stand beside the subject) [1].
Previous work by our group has shown that small stereo cameras can be mounted directly on the ultrasound transducer to allow better tradeoffs between accuracy, workspace, cost and line-of-sight issues [2][3]. Although good results were achieved with stereo cameras, a single camera would provide further improvements in cost, complexity and size. A single-camera system is presented here. The main drawback of tracking with a single camera is that the pose of the needle is poorly determined in the range direction, but this can be mitigated by using known markings on the object. Fortunately, many medical needles (e.g. epidural needles) already have markings for depth measurement (usually 1 cm markings). For unmarked needles, it is possible to attach a short marked extension to the hub of the needle.

Single camera pose estimation methods fall into two categories: feature-based methods and mathematical-geometrical methods. Feature-based methods are less accurate and require highly textured objects. Mathematical-geometrical methods are more accurate but require specific markings on the object [4]. In feature-based methods, a feature detector is used to extract features from each image and compare them against a feature database. For instance, Collet et al. [5] used SIFT [6] features to extract local descriptors from images in their system for object pose estimation. However, images of a needle normally do not have enough texture to produce enough SIFT features for accurate pose estimation. Moreover, low-resolution images (640×480 pixels in this case) and issues such as specular reflections from the metallic surface of a needle can make correct feature extraction and matching more difficult. On the other hand, mathematical-geometrical methods are usually robust and accurate, although their outcome is affected by the accuracy of the feature extraction process and the camera calibration. A new mathematical-geometrical formulation is devised here for needle trajectory calculation. Some previous mathematical-geometrical methods are iterative, which is time-consuming, unsuitable for real-time tracking, and does not always converge to the true solution. There are a few closed-form solutions, but they mostly use planar (2D) markings on the object for full pose estimation [7][8]. For the needle tracking problem, the markings are almost collinear and the needle trajectory is a line that can be described by 5 DOF (rotation about the needle axis is eliminated). Here we propose a novel closed-form method for 5 DOF pose estimation of a linear object from a single image.

2. METHODS

2.1 Single camera closed-form pose estimation

A simple pinhole model is assumed for the camera, and its parameters (focal length, principal point) are obtained from a standard camera calibration process [9]. In this model, each point in space is projected onto the image plane through the camera's center of projection. Referring to Figure 2, given the images (Q1, Q2 and Q3) of three collinear points of the object and the center of projection of the camera (F), the goal is to find the corresponding points (P1, P2 and P3) of the object in 3D space. The pose of the object (needle) is then known from these three points. The points correspond to marking edges on the object, and their mutual distances are known from the marking pattern. Vectors G1 and G2 are two parallel vectors on the object with lengths d1 and d2, respectively, known from the markings on the needle.
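For clarity, this setup can be written out explicitly in the paper's notation (the scalars a, b and c introduced here are the unknown distances of the object points from F along their viewing rays; this correspondence is implied by Eqs. (1)-(6) below rather than stated outright):

$V_i = (Q_i - F) / \|Q_i - F\|$,  $P_1 = aV_1$, $P_2 = bV_2$, $P_3 = cV_3$,  $G_1 = P_2 - P_3$, $G_2 = P_1 - P_2$

Solving for a, b and c therefore recovers the three marking points and hence the needle trajectory.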
Since all of these vectors reside in the same plane, G1 and G2 can be written as linear combinations of the vectors V1, V2 and V3 (which are known from Q1, Q2, Q3 and F):

$G_1 = bV_2 - cV_3$,  $G_2 = aV_1 - bV_2$    (1)

where V1, V2 and V3 are unit vectors along the rays from the center of projection F through the projected points Q1, Q2 and Q3. The known marking lengths impose

$\frac{G_1}{d_1} = \frac{G_2}{d_2}$,  $\|G_1\| = d_1$,  $\|G_2\| = d_2$    (2)

Figure 2. Three collinear points on the object (P1, P2 and P3), the corresponding projections on the camera plane (Q1, Q2 and Q3) and the center of projection (F) all reside in one plane.

Three unknowns (a, b and c) must be found such that they satisfy Eq. (1) under the constraints of Eq. (2). Since V1, V2 and V3 lie in the same plane, they are not independent, and V3 can be written as a linear combination of V1 and V2:

$V_3 = \alpha V_1 + \beta V_2$    (3)

Substituting V3 into Eq. (1) and then using Eq. (2) gives

$G_1 = bV_2 - c(\alpha V_1 + \beta V_2) = \frac{d_1}{d_2}(aV_1 - bV_2)$    (4)

from which

$a = -\frac{d_2 \alpha}{d_1} c$,  $b = \frac{d_2 \beta}{d_1 + d_2} c$    (5)

Substituting these into Eq. (1), the vector G2 can be expressed in terms of c alone. Since the magnitude of G2 is known (d2), c can be found:

$c = d_2 \Big/ \left\| \frac{d_2 \alpha}{d_1} V_1 + \frac{d_2 \beta}{d_1 + d_2} V_2 \right\|$    (6)

Substituting c back into Eq. (5) gives a and b. The points $P_1 = aV_1$, $P_2 = bV_2$ and $P_3 = cV_3$ are then calculated, which determine the trajectory of the needle.
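To make the computation concrete, the following C sketch implements the closed-form solution above. It is a minimal illustration rather than the authors' implementation: the function names and vec3 helpers are invented for this example, pixel coordinates are assumed to be already corrected for lens distortion, and, since the paper does not specify how α and β are obtained from Eq. (3), a small least-squares solve is used here.

```c
/* Minimal sketch of the closed-form 5-DOF solution of Section 2.1.
 * Not the paper's implementation: names and structure are illustrative.
 * Assumes undistorted pixel coordinates and pinhole intrinsics
 * (fx, fy, cx, cy) from the camera calibration step. */
#include <math.h>

typedef struct { double x, y, z; } vec3;

static vec3 v_scale(vec3 v, double s) { vec3 r = { v.x * s, v.y * s, v.z * s }; return r; }
static vec3 v_add(vec3 a, vec3 b)     { vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z }; return r; }
static double v_dot(vec3 a, vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double v_norm(vec3 v)          { return sqrt(v_dot(v, v)); }

/* Back-project an undistorted pixel (u, v) to a unit ray V through the
 * camera's center of projection (pinhole model). */
vec3 pixel_to_ray(double u, double v,
                  double fx, double fy, double cx, double cy)
{
    vec3 r = { (u - cx) / fx, (v - cy) / fy, 1.0 };
    return v_scale(r, 1.0 / v_norm(r));
}

/* Recover the three collinear marking points P1, P2, P3 (camera frame)
 * from their viewing rays V1, V2, V3, given the known marking spacings
 * d2 = |P1P2| and d1 = |P2P3| (Eqs. 1-6). */
void needle_points_from_rays(vec3 V1, vec3 V2, vec3 V3,
                             double d1, double d2,
                             vec3 *P1, vec3 *P2, vec3 *P3)
{
    /* Eq. 3: V3 = alpha*V1 + beta*V2.  The three rays are coplanar, so a
     * 2x2 least-squares solve (normal equations) recovers alpha and beta
     * robustly in the presence of pixel noise. */
    double a11 = v_dot(V1, V1), a12 = v_dot(V1, V2), a22 = v_dot(V2, V2);
    double b1 = v_dot(V1, V3), b2 = v_dot(V2, V3);
    double det = a11 * a22 - a12 * a12;
    double alpha = (a22 * b1 - a12 * b2) / det;
    double beta  = (a11 * b2 - a12 * b1) / det;

    /* Eq. 6: range of P3 along its ray. */
    double k1 = d2 * alpha / d1;
    double k2 = d2 * beta / (d1 + d2);
    double c = d2 / v_norm(v_add(v_scale(V1, k1), v_scale(V2, k2)));

    /* Eq. 5: ranges of P1 and P2 along their rays. */
    double a = -k1 * c;
    double b =  k2 * c;

    *P1 = v_scale(V1, a);
    *P2 = v_scale(V2, b);
    *P3 = v_scale(V3, c);
}
```

Any two of the recovered points then define the needle trajectory line in camera coordinates, which the ultrasound-to-camera calibration of Section 2.3 maps into the ultrasound image frame.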
2.2 Needle markings detection

As an input to the pose estimation algorithm, the marking edges of the needle must be located in the image; these provide the projected points Qi, while the corresponding physical spacings di are known from the marking pattern. Here it is assumed that the needle has alternating 1 cm black and white markings (Figure 3). This is typical for many needles in anesthesiology; for different markings, the calculations can be modified accordingly. In practice, the needle is not a line but a cylinder with a specific width, so it appears in the image as two lines (not parallel, due to perspective viewing) and the marking edges appear as curved lines. Taking the average of the two lines as the needle center line, the marking edge locations are defined as the consecutive crossings of the marking edges (curved lines) with this center line.

One idea is to first find the two needle lines and then find the marking edges. In practice, however, the needle lines cannot be found accurately, because specular light reflections on the metallic surface of the needle produce misleading strong edges, and because a dark or light background can blur the edges of black or white markings and eliminate some useful edges. In fact, the most distinctive feature of such a marking system is the sequence of consecutive black and white regions on the needle. Hence, considering the pixel values along a line lying on the needle in the image, there are significant steps at the marking edges (falling at white-to-black and rising at black-to-white edges). Therefore, the existence of several consecutive positive and negative peaks in the first derivative of the pixel values identifies the line as an "on-the-needle line", and the points on the line at the peak locations are stored. These peaks are assumed to lie on the marking edges. By running this test on many different random lines in the image, a sufficient number of points can be gathered for each edge.

Figure 3. Projection of the needle 3D model on the camera plane (left). Clusters of points at the marking edges and the fitted center line (right).

The edges are found by grouping the points into clusters with low intra-class and high inter-class distances. To speed up the algorithm, the approximate center line of the needle is first found by applying the Hough transform to the binary edge image (obtained with the Canny edge detector). Searching for the "on-the-needle lines" around this approximate center line reduces the number of trials required for robust marking detection. The center line of the needle is then found by fitting a least-squares line to the cluster centers. The exact positions of the marking edges crossing the center line are then found from the peaks of the derivative of the pixel values along this line and nearby parallel lines. The pixel values are up-sampled prior to peak detection to achieve sub-pixel accuracy.

2.3 Ultrasound to camera calibration

The single-wall calibration method [10] is used for ultrasound calibration, with the modification of placing a checkerboard pattern on the portion of the wall surface outside the water bath so that it can be seen by the camera. The pose of the camera relative to the wall surface is found from the image of the checkerboard pattern. The wall is seen in the ultrasound image as a line. Lines are extracted from multiple (n = 30) images taken from different poses of the transducer, and the calibration parameters (6 DOF) are found.

2.4 Camera calibration

The Camera Calibration Toolbox for Matlab [11] is used for camera calibration. This toolbox uses several images of a checkerboard pattern to estimate the camera's intrinsic parameters, such as focal length, principal point and lens distortion. It can also provide the pose of a checkerboard pattern with respect to the camera coordinate system, as used in the ultrasound-to-camera calibration above. All of the images have been corrected for lens distortion using this toolbox. The obtained camera parameters are given in Table 1.

Table 1. Camera intrinsic parameters obtained from camera calibration.

  Parameter                          Value (pixels)    Std. of error (pixels)
  Effective focal length, x axis     684.1             4.7
  Effective focal length, y axis     683.6             4.8
  Principal point                    (363.0, 219.5)    (5.4, 5.0)

3. EXPERIMENTS AND RESULTS

The accuracy of the trajectory calculation is evaluated by comparing the calculated versus the actual intersection point of the needle with the image plane for a range of needle poses (n = 23) in a water bath (Figure 4). This intersection point is seen in the ultrasound image as a bright dot that is easy to identify. The Euclidean distance between the identified point in the ultrasound image and the calculated intersection point is taken as the error. The experiment is performed with a Sonix MDP (Ultrasonix Medical Corporation, Burnaby, BC, Canada) and an L9-4 10 MHz linear 2D ultrasound probe. The camera is a USB ARTCAM 34-MC with 640×480 pixels and a 1/7" CCD (Artray, Tokyo, Japan).

Figure 4. Experimental setup for measuring the accuracy of the proposed needle tracking method.

The overall error is 3.0 ± 2.6 mm. Calculation of the needle trajectory takes about 60 msec in a C implementation running on a standard computer workstation. Figure 5 shows the target points, which have been identified in the ultrasound image, and the corresponding estimated points. The estimated points are the intersections of the needle trajectory line with the ultrasound image plane.
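The following sketch illustrates how such an estimated intersection point can be computed. It is not the paper's code: it assumes the calibration of Section 2.3 is available as a 4×4 homogeneous transform mapping camera coordinates to ultrasound image coordinates, with the image plane taken as z = 0 in that frame; the function and parameter names are illustrative only.

```c
/* Sketch: intersect the recovered needle trajectory with the ultrasound
 * image plane.  Assumes (not stated in this form in the paper) that the
 * ultrasound-to-camera calibration is a 4x4 homogeneous transform T
 * (row-major, rigid) mapping camera coordinates to ultrasound image
 * coordinates, with the image plane at z = 0. */
#include <math.h>

typedef struct { double x, y, z; } vec3;

/* Apply a rigid 4x4 homogeneous transform (row-major) to a 3D point. */
static vec3 transform_point(const double T[4][4], vec3 p)
{
    vec3 r = {
        T[0][0] * p.x + T[0][1] * p.y + T[0][2] * p.z + T[0][3],
        T[1][0] * p.x + T[1][1] * p.y + T[1][2] * p.z + T[1][3],
        T[2][0] * p.x + T[2][1] * p.y + T[2][2] * p.z + T[2][3]
    };
    return r;
}

/* Given two points of the needle trajectory in camera coordinates
 * (e.g. P1 and P3 from the closed-form solution), return the in-plane
 * (x, y) intersection with the ultrasound image plane z = 0.
 * Returns 0 if the trajectory is (nearly) parallel to the plane. */
int trajectory_image_intersection(const double T_us_from_cam[4][4],
                                  vec3 Pa, vec3 Pb,
                                  double *x, double *y)
{
    vec3 a = transform_point(T_us_from_cam, Pa);
    vec3 b = transform_point(T_us_from_cam, Pb);
    double dz = b.z - a.z;
    if (fabs(dz) < 1e-9)
        return 0;                 /* needle parallel to the image plane */
    double t = -a.z / dz;         /* line parameter where z(t) = 0 */
    *x = a.x + t * (b.x - a.x);
    *y = a.y + t * (b.y - a.y);
    return 1;
}
```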
Figure 5. Identified intersection points in the image (target) and the estimated points.

Larger errors are seen for intersection points at deeper locations in the image, as expected from the lever-arm effect of measuring the needle trajectory at the surface.

4. CONCLUSION

In a needle insertion procedure with ultrasound guidance, real-time calculation and visualization of the needle trajectory can be very useful for an easy and successful procedure. In this paper we showed that it is feasible to calculate the needle trajectory with a single camera mounted directly on the ultrasound probe by using the needle markings in an automatic image processing system. The proposed closed-form method for pose estimation allows rapid needle tracking. The feature detection in the image processing stage is also fast, so the C implementation of the overall method runs in real time (60 msec) on a standard computer workstation. The accuracy of 3.0 ± 2.6 mm is similar to previous work with a more complex stereo vision system (3.1 ± 1.8 mm) [2]. Greater accuracy can likely be achieved with a higher-quality camera, at a trade-off in cost.

Future work will focus on using the needle markings to calculate both the needle trajectory and the location of the needle tip. The tip of the needle is not calculated here because only a portion of the needle is seen by the camera. This can be addressed in two ways. If the needle is placed initially in such a way that the tip can be seen in the images, then, as long as the real-time tracking algorithm successfully identifies the markings, it is possible to keep track of the tip location. The other solution is to use a specific marker pattern on the needle, in contrast to the current repetitive black-white markings, so that unique estimation of the needle tip is possible.

ACKNOWLEDGMENTS

This work is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canadian Institutes of Health Research (CIHR).

REFERENCES

[1] L. Mercier, T. Langø, F. Lindseth, and D. L. Collins, "A review of calibration techniques for freehand 3-D ultrasound systems," Ultrasound in Medicine & Biology 31(4), 449-471 (2005).
[2] C. Chan, F. Lam, and R. Rohling, "A needle tracking device for ultrasound guided percutaneous procedures," Ultrasound in Medicine & Biology 31(11), 1469-1483 (2005).
[3] S. Khosravi, R. Rohling, and P. Lawrence, "One-step needle pose estimation for ultrasound guided biopsies," Proc. Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 3343-3346 (2007).
[4] K. Rahbar and H. R. Pourreza, "Inside looking out camera pose estimation for virtual studio," Graphical Models 70(4), 57-75 (2008).
[5] A. Collet, D. Berenson, S. S. Srinivasa, and D. Ferguson, "Object recognition and full pose registration from a single image for robotic manipulation," Proc. IEEE International Conference on Robotics and Automation, 3534-3541 (2009).
[6] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision 60(2), 91-110 (2004).
[7] F. Shi, X. Zhang, and Y. Liu, "A new method of camera pose estimation using 2D-3D corner correspondence," Pattern Recognition Letters 25(10), 1155-1163 (2004).
[8] L. Lucchese, "Closed-form pose estimation from metric rectification of coplanar points," Proc. IEE Vision, Image, and Signal Processing 153(3), 364-378 (2006).
[9] J. Heikkila and O. Silven, "A four-step camera calibration procedure with implicit image correction," Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1106-1112 (1997).
[10] R. W. Prager, R. N. Rohling, A. H. Gee, and L. Berman, "Rapid calibration for 3-D freehand ultrasound," Ultrasound in Medicine & Biology 24(6), 855-869 (1998).
[11] J. Bouguet, "Camera Calibration Toolbox for Matlab," (2008).