Dual-Transducer Ultrasound for Elastography

by

Jeffrey Michael Abeysekera

B.A.Sc., The University of British Columbia, 2008

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in The Faculty of Graduate Studies (Biomedical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2010

© Jeffrey Michael Abeysekera 2010

Abstract

Medical imaging techniques provide valuable information about the internal anatomy of the body. Commonly used techniques can render many properties of the anatomy and its function, but they are limited in their ability to measure tissue mechanical properties such as elasticity. Over the past two decades there has been growing interest in developing methods of non-invasively characterizing mechanical properties of tissues; a field commonly referred to as elastography. Tissues are known to exhibit changes in mechanical properties in response to pathology. As a result, elastography has the particular potential to help physicians diagnose and locate cancerous tumors and other malignancies.

The principle of operation of elastography systems is to apply an excitation to the tissue, such as a compression, and to measure the resulting tissue motion as it deforms. The tissue elasticity can then be inferred from the motion estimates by solving the inverse problem. Tissue motion is typically measured with ultrasound because it is fast, safe, and relatively inexpensive. The point spread function of an ultrasound beam is anisotropic, resulting in poorer quality motion estimates in two of the three spatial directions. This thesis investigates a new method of estimating tissue motion by employing two ultrasound transducers with different view angles. The goal of using these two transducers is to create a plane of high quality 2D motion estimates. Simulations and experimental results on tissue mimicking phantoms show that the method outperforms other commonly used 2D motion estimation methods. For example, in a tissue deformation simulation, the dual transducer method produced lower root mean square measurement error by a factor of 10 compared to a single transducer technique, and a factor of 3 compared to a single transducer with angular compounding.

A simple wire-based method of aligning the transducers into a coincident scan plane is initially developed. Later, a novel wedge-based phantom is designed for aligning the two transducers. Calibration results demonstrate improved alignment with the wedge phantom. Manual alignment is found to be repeatable with mean alignment errors under 1 degree in rotation and 1 mm in translation for all degrees of freedom after six independent trials.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgements
Statement of Co-Authorship

1 Introduction
  1.1 Motivation
  1.2 Background
    1.2.1 Single Transducer Motion Estimation Techniques
    1.2.2 Multiple Transducer Motion Estimation Techniques
  1.3 Thesis Objectives
  1.4 Thesis Outline
  References

2 Two Dimensional Motion Tracking with Dual Ultrasound Transducers
  2.1 Introduction
  2.2 Methods
    2.2.1 Simulated Motion
    2.2.2 Simulation of RF Data
    2.2.3 Experimental Setup
    2.2.4 2D Tracking
    2.2.5 Angular Compounding
    2.2.6 Tracking with Dual Transducers
    2.2.7 Non-Orthogonal Dual Transducers
    2.2.8 Motion Estimation Error
    2.2.9 Misalignment Simulation
    2.2.10 Strain Estimation
  2.3 Results
  2.4 Discussion
  2.5 Conclusion
  References

3 Alignment and Calibration of Dual Ultrasound Transducers Using a Wedge Phantom
  3.1 Introduction
  3.2 Methods
    3.2.1 Acquisition System
    3.2.2 Phantom
    3.2.3 Calibration
    3.2.4 Validation
  3.3 Results
  3.4 Discussion
  3.5 Conclusion
  References

4 Conclusions and Future Research
  4.1 Contributions
  4.2 Future Work
  References

Appendix A Fixed Angles Homogeneous Transformation Matrix

List of Tables

2.1 Mean bias and standard deviations of motion estimation error.
2.2 Root mean square error of motion estimation.
2.3 CNRe, SNRe, and RMSe.
2.4 Mean bias and standard deviations of error.
3.1 Summary of the alignment errors in the orientation and position Degrees Of Freedom (DOF) for six independent trials.
3.2 Standard deviation of the three orientation DOF (degrees) and three position DOF (mm) after 15 trials on a single image for the wedge and N-fiducial phantoms.
3.3 Statistics of the errors in accuracy of mapping N-fiducial points from one transducer's pose to the other, using the DOF solved by the wedge phantom.

List of Figures

1.1 Physicians rely on (a) hand palpation to probe the stiffness of tissues. (b) Ultrasound elastography relies on the same principle.
1.2 (a) A simulated elasticity distribution, (b) B-mode image, (c) displacement field, and (d) strain.
1.3 Two RF scan lines.
1.4 Schematic for Two Dimensional (2D) motion tracking with dual transducers.
2.1 The simulation setup for the compression motion model.
2.2 The cross-wire phantom.
2.3 The displacement estimate, qθ, is the dot product between the applied motion vector and the unit vector along the beam direction of the transducer.
2.4 Bias vectors and standard deviation ellipses on an 11×11 motion grid.
2.5 Typical axial (a) and lateral (b) displacement images.
2.6 The elevational motion profile.
2.7 Root mean square error of the lateral displacement as a function of angle.
2.8 Boundary of the tracking region.
2.9 Lateral displacement root mean square error as a function of yaw misalignment.
2.10 Dual transducer method using convex array transducers.
2.11 A custom made dual transducer.
3.1 The transducer is mounted on a four DOF stage.
3.2 (a) The two sides that make up the calibration phantom and (b) cantilever springs.
3.3 A Three Dimensional (3D) model of the phantom pattern in an isometric view.
3.4 The calibration apparatus.
3.5 DOF adjusted during alignment provide visual cues to the user to indicate misalignment.
3.6 B-mode image of the phantom with the transducer misaligned.
3.7 Random Sample Consensus (RANSAC) fitting can reduce the influence of significant errors.
3.8 B-mode image of the N-fiducial phantom (a) before processing, (b) after opening, and (c) after thresholding.
3.9 Results from sensitivity analysis with 1000 simulated trials.

List of Abbreviations

1D One Dimensional
2D Two Dimensional
3D Three Dimensional
DOF Degrees Of Freedom
FEM Finite Element Method
GUI Graphical User Interface
MRI Magnetic Resonance Imaging
PVC Polyvinyl Chloride
RANSAC Random Sample Consensus
RF Radio Frequency
RMSE Root Mean Square Error
ROI Region Of Interest
SOS Speed Of Sound
TDPE a time domain cross correlation algorithm using prior estimates
US Ultrasound

Acknowledgements

I would like to thank my thesis supervisor, Dr. Robert Rohling, for his guidance and support during the course of my research. I would also like to thank Dr. Tim Salcudean for providing valuable feedback.
Many thanks go to my colleagues at the Robotics and Control Lab of the University of British Columbia. In particular I'd like to thank Dr. Reza Zahiri Azar for generously providing his 2D motion tracking algorithm, Dr. Orcun Goksel for his help in programming in ANSYS and Field II, and Dr. Ali Baghani and Dr. Hani Eskandari for their insights into the problem of solving for tissue elasticity. Thanks are due to Leo Pan and Narges Afsham for discussions on aligning and calibrating ultrasound. Markus Fengler and Roland Genschorek from the Mechanical Engineering department machine shop should be thanked for their patience and expertise in helping manufacture my complicated designs. I would also like to thank Troy Adebar, John Bartlett, Raoul Kingma, Hedyeh Rafii-Tari, Caitlin Schneider, Jing Xiang, and Michael Yip; you all made the lab a fun and enjoyable place to work. Finally, I am grateful for the love and support of my family, without which this work would not have been possible.

Statement of Co-Authorship

This thesis is written in a manuscript-based style, resulting from collaboration between multiple researchers.

A version of Chapter 2 has been submitted for publication. The paper was co-authored with Dr. Reza Zahiri-Azar, Dr. Orcun Goksel, Dr. Robert Rohling, and Dr. Tim Salcudean. The author's contribution in that paper was developing the idea, performing numerical simulation, phantom fabrication, experimental evaluation, data analysis, and manuscript preparation. Dr. Zahiri-Azar developed the TDPE algorithm, the angular compounding technique, and helped edit the manuscript. Dr. Goksel helped with the data simulation using Field II and ANSYS as well as editing the manuscript. Dr. Rohling and Dr. Salcudean assisted with their suggestions and editing the manuscript.

A version of Chapter 3 has been submitted for publication. The paper was co-authored with Dr. Robert Rohling. The author's contribution in that paper was developing the calibration algorithm, designing the alignment phantom and transducer apparatus, phantom fabrication, experimental evaluation, and writing the manuscript. Dr. Rohling provided numerous suggestions for improving the phantom design and calibration procedure, and helped edit the manuscript.

Chapter 1

Introduction

1.1 Motivation

Breast cancer is the most common form of cancer among women and is the second most common cancer in the world [1]. In 2009, an estimated 5,400 people in Canada and 40,170 people in the United States died of breast cancer [2, 3]. The disease is best combated through early detection and treatment [4]. Typically in the diagnosis of breast tumors, the physician examines the breast by palpation to feel for stiff lumps. This technique requires experience for the physician to effectively identify suspicious lesions [5]. Palpation has a specificity of about 94% but a sensitivity of only 55%, causing some debate over the technique's effectiveness [6, 7]. In addition, palpation is restricted to detecting relatively superficial, large, stiff masses [8].

Diagnosis has been aided by the adoption of medical imaging modalities. Mammography is an X-ray based technique where the breast is compressed to achieve highly detailed images. It is widely accepted as a standard technique and has sensitivity and specificity of about 69% and 94%, respectively [9]. Since it is an X-ray based technique, the radiation dose may result in carcinogenesis, limiting its use to older patients (e.g. 50-74 years of age) [10, 11].
Magnetic Resonance Imaging (MRI) detects the relaxation rate of the spinning magnetic dipoles inside of tissue. Intravenous injection of contrast agents such as paramagnetic gadolinium allows visualization of tumors due to increased vascularity around the cancer. MRI can be more sensitive than mammography for detecting lesions in dense breasts [12]. It is generally safer than mammography; however, it may pose hazards to patients with implants such as pacemakers and is prohibitively expensive for cancer screening. Ultrasound (US) imaging systems consist of a transducer for sending short high frequency (3-20 MHz) pulses into a region to be examined and receiving the echoes returned by the medium, and a computation system to convert the echo signals into an image. US serves as a valuable and inexpensive adjunct to mammography in distinguishing solid and fluid-filled masses, where the former is more likely malignant and the latter more likely benign; however, it is not recommended for routine primary screening due to its limited ability to distinguish cancer directly [13]. Since US operates in real-time and is one of the safest imaging modalities, it can easily be used for guidance during needle biopsy to confirm the presence of cancer [14].

Recently there has been motivation to develop a new imaging modality to depict the mechanical properties of tissues. This technique is generally referred to as elastography in the literature. Since cancers of the breast are known to appear as stiff nodules, elastography can be used to screen for cancer [15–17]. Similar to a clinical breast examination, the tissue is deformed to locate inhomogeneities (Fig. 1.1). Based on the internal deformation pattern of the tissue, techniques have been developed to estimate the tissue's elastic properties and, more recently, viscoelastic and anisotropic properties [18–20].

1.2 Background

Elastography methods are based on measuring tissue motion in deformed and undeformed states. Different imaging modalities may be used to measure the motion, with US and MRI being the most popular ones. With MRI-based techniques, it is generally easier to obtain a volume of 3D displacement estimates with a higher signal-to-noise ratio (SNR) and overall resolution compared to US [21]. Elastography with US offers the advantages of low cost, high frame-rate, portability, and easy setup, which has made it more widely studied [22].

Several techniques have been developed to estimate tissue parameters from displacement estimates. One of the simplest methods is to simply display the amount of tissue strain under a given stress [15, 23]. Usually a motorized compression plate [16], the face of a hand held transducer [24], or an actuator attached to a transducer [25] is used to apply the stress to a Region Of Interest (ROI). The gradient of the displacements is used to estimate strain. Lesions that may not be distinguishable in a regular sonographic B-mode image are often clearly present in strain images, as seen in Fig. 1.2.

Strain is related to Young's modulus through Hooke's law of elasticity. To compute Young's modulus from strain, knowledge of the 3D stress field is required. At present it is not possible to measure the 3D stress field in vivo, and therefore to create an elastogram the stress is typically assumed constant. However, if the stress field is not uniform, the strain image will contain artifacts [26].
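The strain computation described above is a simple numerical operation once a displacement field is available. The following is a minimal sketch (Python) assuming a regular grid of axial displacement estimates; the array sizes, variable names, and spacing value are illustrative assumptions, not values from the thesis.

    import numpy as np

    # u_axial[i, j]: axial displacement estimates (metres) on a regular grid,
    # e.g. from tracking RF windows between pre- and post-compression frames.
    u_axial = np.zeros((200, 128))   # placeholder field; illustrative size
    dz = 0.2e-3                      # axial spacing of the estimate grid (m), illustrative

    # Axial normal strain is the spatial derivative of the axial displacement.
    # Central differences (np.gradient) amplify any noise in u_axial, which is
    # why elastography depends on high quality displacement estimates.
    strain_zz = np.gradient(u_axial, dz, axis=0)

    # Under the usual uniform-stress assumption (Hooke's law, E = sigma / epsilon),
    # low strain is displayed as stiff and high strain as soft; only relative
    # stiffness can be shown because the stress field itself is not measured.
    relative_stiffness = 1.0 / (np.abs(strain_zz) + 1e-12)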
Figure 1.1: Physicians rely on (a) hand palpation to probe the stiffness of tissues to screen for breast cancer. Healthy background tissue (beige) is more compliant than cancerous lesions (brown). (b) Ultrasound elastography relies on the same principle; however, instead of relying on the sense of touch at the surface, elastography quantifies the elasticity distribution through measurement of the internal displacements.

Figure 1.2: (a) A simulated elasticity distribution with a soft background (black) and stiff lesion (white). (b) Example of a simulated B-mode image with no sonographic contrast for the lesion. By acquiring images before and after static compression, the (c) displacement can be estimated and its spatial derivative can be used to calculate the (d) internal tissue strain. The strain image clearly depicts the lesion.

An alternative approach to strain imaging is to compute the local elasticity values from the displacement measurements and perhaps the boundary conditions. This is known as an inverse problem. A number of methods for solving the inverse elasticity problem have been proposed. One such approach uses the mechanical wave equation to relate the dynamic motion field to the tissue Lamé constants [27, 28]. Another approach is based on the Finite Element Method (FEM), where the elasticity distribution is optimized to minimize the difference between the observed and calculated displacements [18, 29, 30]. There are many trade-offs between the different elasticity estimation techniques, including accuracy and computation time. However, common to all techniques is the requirement of high quality displacement data for good results. Strain-based methods require the gradient of the displacement. Wave equation and FEM approaches require second order displacement gradients. The gradient operation on poor quality or noisy data exaggerates any errors. Clearly, motion estimation accuracy and precision are of critical importance to elastography.

1.2.1 Single Transducer Motion Estimation Techniques

Since US provides high resolution measurements (sub-millimeter) along the direction of sound propagation, it has received the most interest in the literature. US is based on receiving an echo signal along the axial direction of propagation of a short pulse. Since the frequency is in the megahertz range, this One Dimensional (1D) echo signal is called a Radio Frequency (RF) signal or RF line. Multiple RF lines over a plane or volume are used to produce 2D and 3D images, respectively. Tissue motion tracking is most easily performed along each RF line because the movement of a tissue reflector along that line produces a change in location of the echo in the RF signal. These 1D axial motion estimators are generally referred to as time-delay estimators because a change in location corresponds to a change in time for the echo response of the scattering particle. The most straightforward and widely used method consists of dividing a reference echo signal into equally spaced, overlapping windows to create a pattern and using a matching algorithm to find the best match in the delayed echo signal windows. The best match consists of the identification of the maximum (or minimum) of a pattern matching function. Many pattern matching functions can be used, such as cross-correlation [31] and sum of absolute differences [32]. In addition to window shifts, tissue deformation can result in a change of shape. For this reason, accuracy can be improved through global or adaptive stretching of the echo signal [33, 34].
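As an illustration of the window-based matching just described, the following sketch estimates per-window delays between two RF lines by maximizing normalized cross-correlation over a small search range. It is a simplified 1D example with illustrative function and parameter names; it is not the specific algorithm used later in this thesis.

    import numpy as np

    def track_1d(rf_ref, rf_mov, win_len=80, overlap=0.5, search=20):
        """Per-window time delays (in samples) between a reference and a
        post-motion RF line, found by maximizing normalized cross-correlation."""
        step = max(1, int(win_len * (1 - overlap)))
        delays = []
        for start in range(search, len(rf_ref) - win_len - search, step):
            ref = rf_ref[start:start + win_len].astype(float)
            ref -= ref.mean()
            best_rho, best_lag = -np.inf, 0
            for lag in range(-search, search + 1):
                cand = rf_mov[start + lag:start + lag + win_len].astype(float)
                cand -= cand.mean()
                denom = np.linalg.norm(ref) * np.linalg.norm(cand)
                rho = ref.dot(cand) / denom if denom > 0 else 0.0
                if rho > best_rho:
                    best_rho, best_lag = rho, lag
            delays.append(best_lag)
        return np.array(delays)

A delay in samples converts to displacement through the speed of sound and sampling frequency; for example, one sample corresponds to roughly 20 µm at 40 MHz and 1540 m/s once the round trip is accounted for.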
The performance of delay estimators is fundamentally limited by the sampling interval. Several techniques have been proposed to reduce the impact of the finite sampling interval. Up-sampling the echo signals can reduce the error by the up-sampling factor [35]. Interpolation of the reference echo signal with splines creates a continuous time representation of the reference signal which is then compared to the discrete delayed signal with the pattern matching function [36, 37]. A similar approach fits splines to both the reference and echo signals [38]. Determining the spline coefficients for a continuous representation of the echo signals can be computationally demanding. More commonly the pattern matching function is interpolated using polynomials [39], cosines [40], or grid slopes [41]. While these techniques introduce bias into the estimate, they are often used because they provide the speed required by most modern elastography systems.

An alternative motion estimation approach uses features such as the zero-crossings or peaks of the echo signals for tracking. The zero-crossing methods find the zero-crossings using linear or spline interpolation [37, 42]. The peak search algorithm uses the wavelet transform to identify the peaks of the reference and delayed echo signals [43]. These techniques provide as many measurement points as there are features. As a result they have the potential to provide higher resolution than window based techniques; however, they are generally more sensitive to noise [37].

Another method for tissue motion tracking uses phase-shift estimation. The method calculates the average phase-shift with respect to the central frequency of the transmitted pulse. The RF echo signals are converted into complex I/Q data (in-phase and quadrature components) and an autocorrelation technique is used to estimate velocity [44–47]. The benefit of autocorrelation techniques is that they are typically less computationally intensive; however, they are subject to aliasing and can be sensitive to noise [48, 49].

Measuring motion only in 1D introduces several limitations. For example, in strain based elastography, 1D motion estimation means that only the axial component of strain can be calculated, ignoring the other normal and shear components of the strain tensor [50]. For elastography inversion based on the wave equation, the equations must be simplified for a less accurate partial inversion rather than a full inversion [51]. In addition to these limitations, the motion estimation itself is poorer because the tracked medium deforms in 3D and the movement of the scattering particles causes decorrelation [52]. For these reasons there has been interest in extending pattern matching techniques into 2D. Again, the performance of the estimation is limited by the sampling interval.

Figure 1.3: Two RF scan lines acquired from an L12/5 ultrasound transducer on an Ultrasonix SonixRP ultrasound machine. The axial sample-to-sample spacing is approximately 20 µm and the lateral sample-to-sample spacing is approximately 300 µm. Pictured above the lines is a schematic of the transducer along with the axial, lateral, and elevational axes.

Techniques such as linear interpolation between echo signals [35] or fitting 2D splines to the echo data [53] have been proposed. Similar to the 1D case, the computational overhead of these methods limits their applicability, and instead the pattern matching function is usually interpolated. The interpolation can be performed independently in each direction as a direct extension of 1D methods, iteratively by finding the best match in the axial direction and then performing the search in the lateral direction from there [54], or jointly using 2D interpolation methods [55]. Unfortunately, since the sampling interval in the lateral direction is typically an order of magnitude larger than in the axial direction (Fig. 1.3), the accuracy and precision of the estimates in the lateral direction are poorer.
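The polynomial interpolation of the pattern matching function mentioned above is often implemented as a three-point parabolic fit around the integer-lag peak. A minimal sketch follows; it is one common variant offered for illustration, not necessarily the exact estimator of the cited works.

    import numpy as np

    def subsample_peak(rho, k):
        """Refine an integer-lag correlation peak at index k to sub-sample
        precision by fitting a parabola through rho[k-1], rho[k], rho[k+1]."""
        y0, y1, y2 = rho[k - 1], rho[k], rho[k + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom == 0.0:
            return float(k)          # flat neighbourhood; keep the integer peak
        # Vertex of the parabola through (-1, y0), (0, y1), (1, y2), as an offset from k
        return k + 0.5 * (y0 - y2) / denom

    # Example: a correlation-like curve whose true peak lies between samples 5 and 6.
    lags = np.arange(10)
    rho = -np.abs(lags - 5.3)        # piecewise-linear stand-in for a correlation peak
    k = int(np.argmax(rho))
    print(subsample_peak(rho, k))    # about 5.2; the small residual bias is the
                                     # trade-off noted in the text above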
3D volumes of RF data can be acquired by translating, rotating, or fanning a US transducer so that a number of 2D images sweep through a volume of interest. Similar to the 2D case, motion estimation can be extended to 3D using pattern matching techniques [56]. However, the low frame-rate and resolution in the elevation direction have limited the use of 3D US elastography. Newer and more sophisticated 3D US transducers can replace the need for sweeping a 2D imaging plane through a volume by using a matrix array of transducer elements, but such transducers still produce the best resolution in the axial direction.

To help overcome the limitations of the sample spacing in the lateral direction, a technique referred to as angular compounding has been developed. In this technique, the ROI is scanned from multiple view angles. The view angles can be obtained by mechanically translating a phased array or rotating a linear array transducer [57]. Alternatively, the angular views can be obtained from a transducer using a single transmit and multiple receive apertures [58], or a transducer with multiple electronically steered transmit and receive angles [59–61]. The motion is estimated along the beam direction using 1D techniques for each angle independently and then compounded to construct 2D motion vectors in the overlapping region.

1.2.2 Multiple Transducer Motion Estimation Techniques

Similar to the concept of angular compounding, multiple transducers have been used for obtaining motion estimates from different view angles. Multiple transducers have the advantage that they are not limited by the grating artifacts introduced at electronic steering angles. Originally, multiple transducer systems were used for Doppler blood flow measurement. Multiple single element transducers were arranged at different angles and focused at a single point where velocity in 2D and 3D could be estimated [62–64]. Later, the same principle was applied to estimate tissue motion with array transducers. In one study, two linear arrays were arranged such that their planes were perpendicular and they intersected along a line in space. Once the data was acquired, previously introduced 2D estimators were used to estimate the motion for each transducer. The 2D vectors from each transducer were then combined to construct 3D motion vectors at the line of intersection [65]. In a similar arrangement, the two transducers were mounted onto stepper motors to enable scanning a volume of breast tissue [66]. Good results were reported, but no analysis was performed to compare against single-transducer systems, nor were techniques developed to align and calibrate the transducers.

1.3 Thesis Objectives

In this thesis a method of tracking internal tissue motion with US is developed.
Two transducers are oriented orthogonally and aligned into the same imaging plane (Fig. 1.4). The hypothesis is that by using motion estimates along the US beam axis of each transducer, the effective sampling rate and effective sonographic resolution in the lateral direction are increased, and thus higher quality 2D motion estimates can be obtained and elasticity can be recovered more accurately from these estimates than from a single transducer.

Figure 1.4: Schematic for 2D motion tracking with dual transducers. Two orthogonal ultrasound transducers are aligned into the same imaging plane. The true motion vector, D0, is estimated along the beam axis of the primary transducer to obtain the axial component, D1, and along the beam axis of the secondary transducer to obtain the lateral component, D2. "Axial" and "Lateral" are defined relative to the primary transducer.

To explore this hypothesis, the following objectives are defined in this thesis:

1. Studying the performance of this motion estimation technique in terms of accuracy and precision using both simulation and experimental data and comparing it with state-of-the-art techniques.

2. Designing a new specialized apparatus for aligning two transducers into a single scan plane and evaluating the residual error in the alignment.

1.4 Thesis Outline

This thesis is presented in a manuscript-based style, as permitted by the Faculty of Graduate Studies at the University of British Columbia. Each chapter represents an individual work that has been published or submitted to a peer reviewed publication. The chapters can be read independently as each contains its own introduction to the work presented in that chapter, a description of the methodology, a presentation of the results, and a discussion. The references are summarized at the end of each chapter.

In the course of achieving the primary objectives of this thesis, the following contributions were made:

1. In Chapter 2, the motion tracking improvements provided by using dual ultrasound transducers for tracking 2D displacements are investigated. Tissue displacement is measured using a correlation-based approach on ultrasonic RF data. Measurements are compared to 2D tracking with a single transducer both with, and without, angular compounding. Studies include numerical simulation as well as experiments. The dual transducer method demonstrates smaller errors in both translational and compressional motion tests. Lateral strain images demonstrate improved contrast, signal-to-noise ratio, and root mean square error. The dual transducer method continues to provide lower error as the angle between transducers is decreased from 90 degrees down to 40 degrees. Errors due to transducer misalignment are studied and shown to be significant as rotation misalignment exceeds 2 degrees. Translational motion estimation results with experimental data agree with the simulations. A preliminary alignment method is developed for the experimental work.

2. In Chapter 3, a novel method of aligning two orthogonal ultrasound transducers into a coincident scan plane is presented. A wedge phantom design provides visual feedback to the user to facilitate alignment. Calibration provides the transformation from one transducer to the other as well as a measure of the residual error in alignment. Mean alignment error is improved over the alignment method presented in Chapter 2.
The repeatability of wedge-based calibration is similar to that of N-fiducial [67] based calibration. The accuracy of the calibration for mapping points from one transducer to the other is found to have an acceptable level of error.

3. In Chapter 4, the results of the collected works are related to one another and a unified goal of the thesis is discussed. The strengths and weaknesses of the research are then presented, along with future directions for research.

References

[1] F. Kamangar, G. M. Dores, and W. F. Anderson, "Patterns of cancer incidence, mortality, and prevalence across five continents: Defining priorities to reduce cancer disparities in different geographic regions of the world," Journal of Clinical Oncology, vol. 24, no. 14, pp. 2137–2150, 2006.

[2] Canadian Cancer Society's Steering Committee, "Canadian cancer statistics 2009," Canadian Cancer Society, Tech. Rep., 2009.

[3] A. Jemal, R. Siegel, E. Ward, Y. Hao, J. Xu, and M. J. Thun, "Cancer statistics, 2009," CA: A Cancer Journal for Clinicians, vol. 59, no. 4, pp. 225–249, 2009.

[4] L. L. Humphrey, M. Helfand, B. K. Chan, and S. H. Woolf, "Breast Cancer Screening: A Summary of the Evidence for the U.S. Preventive Services Task Force," Annals of Internal Medicine, vol. 137, no. 5 Part 1, pp. 347–360, 2002.

[5] H. S. Campbell, S. W. Fletcher, C. A. Pilgrim, T. M. Morgan, and S. Lin, "Improving physicians' and nurses' clinical breast examination: a randomized controlled trial," American Journal of Preventive Medicine, vol. 7, no. 1, pp. 1–8, 1991.

[6] M. B. Barton, R. Harris, and S. W. Fletcher, "The rational clinical examination. Does this patient have breast cancer? The screening clinical breast examination: should it be done? How?" Journal of the American Medical Association, vol. 282, no. 13, pp. 1270–1280, 1999.

[7] J. P. Kösters and P. C. Gøtzsche, "Regular self-examination or clinical examination for early detection of breast cancer," Cochrane Database of Systematic Reviews, vol. 4, no. 1, 2008.

[8] P. A. Newcomb, N. S. Weiss, B. E. Storer, D. Scholes, B. E. Young, and L. F. Voigt, "Breast Self-Examination in Relation to the Occurrence of Advanced Breast Cancer," Journal of the National Cancer Institute, vol. 83, no. 4, pp. 260–265, 1991.

[9] C. J. Baines, A. B. Miller, C. Wall, D. V. McFarlane, I. S. Simor, R. Jong, B. J. Shapiro, L. Audet, M. Petitclerc, and D. Ouimet-Oliva, "Sensitivity and specificity of first screen mammography in the Canadian National Breast Screening Study: a preliminary report from five centers," Radiology, vol. 160, no. 2, pp. 295–298, 1986.

[10] F. A. Mettler, A. C. Upton, C. A. Kelsey, R. N. Ashby, R. D. Rosenberg, and M. N. Linver, "Benefits versus risks from mammography: A critical reassessment," Cancer, vol. 77, no. 5, pp. 903–909, 1996.

[11] K. Kerlikowske, D. Grady, S. M. Rubin, C. Sandrock, and V. L. Ernster, "Efficacy of screening mammography: A meta-analysis," Journal of the American Medical Association, vol. 273, no. 2, pp. 149–154, 1995.

[12] F. Sardanelli, G. M. Giuseppetti, P. Panizza, M. Bazzocchi, A. Fausto, G. Simonetti, V. Lattanzio, and A. Del Maschio, "Sensitivity of MRI Versus Mammography for Detecting Foci of Multifocal, Multicentric Breast Cancer in Fatty and Dense Breasts Using the Whole-Breast Pathologic Examination as a Gold Standard," American Journal of Roentgenology, vol. 183, no. 4, pp. 1149–1157, 2004.

[13] W. Teh and A. R. M. Wilson, "The role of ultrasound in breast cancer screening.
a consensus statement by the european group for breast can- cer screening,” European Journal of Cancer, vol. 34, no. 4, pp. 449–450, 1998. [14] N. Figueredo, C. Raker, K. Glass, J. LaChance, D. Dizon, and J. Gass, “Accuracy of office-based ultrasound guided needle biopsies performed by dedicated breast surgeons,” Cancer Research, vol. 69, no. 24, p. 6022, 2009. [15] J. Ophir, I. Cespedes, Y. Ponnekanti, Y. Yazdi, and X. Li, “Elastog- raphy: A quantitative method for imaging the elasticity of biological tissues,” Ultrasonic Imaging, vol. 13, no. 2, pp. 111–134, 1991. [16] B. S. Garra, E. I. Cespedes, J. Ophir, S. R. Spratt, R. A. Zuurbier, C. M. Magnant, and M. F. Pennanen, “Elastography of breast lesions: initial clinical results,” Radiology, vol. 202, no. 1, pp. 79–86, 1997. 13 Chapter 1. References [17] C. Pellot-Barakat, M. Sridhar, K. K. Lindfors, and M. F. Insana, “Ul- trasonic elasticity imaging as a tool for breast cancer diagnosis and research,” Current Medical Imaging Reviews, vol. 2, no. 1, pp. 157–164, 2006. [18] F. Kallel and M. Bertrand, “Tissue elasticity reconstruction using linear perturbation method,” IEEE Transactions on Medical Imaging, vol. 15, no. 3, pp. 299–313, 1996. [19] H. Eskandari, S. Salcudean, and R. Rohling, “Viscoelastic parameter estimation based on spectral analysis,” IEEE Transactions on Ultrason- ics, Ferroelectrics, and Frequency Control, vol. 55, no. 7, pp. 1611–1625, 2008. [20] R. Sinkus, M. Tanter, S. Catheline, J. Lorenzen, C. Kuhl, E. Son- dermann, and M. Fink, “Imaging anisotropic and viscous properties of breast tissue by magnetic resonance-elastography,” Magnetic Reso- nance in Medicine, vol. 53, pp. 372–387, 2005. [21] D. B. Plewes, J. Bishop, A. Samani, and J. Sciarretta, “Visualization and quantification of breast cancer biomechanical properties with mag- netic resonance elastography,” Physics in Medicine and Biology, vol. 45, no. 6, pp. 1591–1610, 2000. [22] J. Ophir, I. Cespedes, B. Garra, H. Ponnekanti, Y. Huang, and N. Mak- lad, “Elastography: Ultrasonic imaging of tissue strain and elastic mod- ulus in vivo,” European Journal of Ultrasound, vol. 3, no. 1, pp. 49–70, 1996. [23] C. Sumi, A. Suzuki, and K. Nakayama, “Estimation of shear modulus distribution in soft tissue from strain distribution,” IEEE Transactions on Biomedical Engineering, vol. 42, no. 2, pp. 193–202, 1995. [24] K. M. Hiltawsky, M. Kruger, C. Starke, L. Heuser, H. Ermert, and A. Jensen, “Freehand ultrasound elastography of breast lesions: clinical results,” Ultrasound in Medicine and Biology, vol. 27, no. 11, pp. 1461– 1469, 2001. [25] H. Rivaz and R. Rohling, “A hand-held probe for vibro-elastography,” in Medical Image Computing and Computer Assisted Intervention, vol. 3749, 2005, pp. 613–620. 14 Chapter 1. References [26] H. Ponnekanti, J. Ophir, Y. Huang, and I. Cspedes, “Fundamental me- chanical limitations on the visualization of elasticity contrast in elastog- raphy,” Ultrasound in Medicine and Biology, vol. 21, no. 4, pp. 533–543, 1995. [27] T. E. Oliphant, A. Manduca, R. L. Ehman, and J. F. Greenleaf, “Complex-valued stiffness reconstruction for magnetic resonance elas- tography by algebraic inversion of the differential equation,” Magnetic Resonance in Medicine, vol. 45, no. 2, pp. 299–310, 2001. [28] S. Catheline, J.-L. Gennisson, G. Delon, M. Fink, R. Sinkus, S. Abouelkaram, and J. 
Culioli, “Measurement of viscoelastic proper- ties of homogeneous soft solid using transient elastography: An inverse problem approach,” The Journal of the Acoustical Society of America, vol. 116, no. 6, pp. 3734–3741, 2004. [29] M. Doyley, P. Meaney, and J. Bamber, “Evaluation of an itera- tive reconstruction method for quantitative elastography,” Physics in Medicine and Biology, vol. 45, no. 6, pp. 1521–1540, 2000. [30] H. Eskandari, S. Salcudean, R. Rohling, and J. Ohayon, “Viscoelastic characterization of soft tissue from dynamic finite element models,” Physics in Medicine and Biology, vol. 53, no. 22, pp. 6569–6590, 2008. [31] O. Bonnefous and P. Pesqué, “Time domain formulation of pulse- doppler ultrasound and blood velocity estimation by cross correlation,” Ultrasonic Imaging, vol. 8, no. 2, pp. 73–85, 1986. [32] L. Bohs and G. Trahey, “A novel method for angle independent ultra- sonic imaging of blood flow and tissue motion,” IEEE Transactions on Biomedical Engineering, vol. 38, no. 3, pp. 280–286, 1991. [33] T. Varghese and J. Ophir, “Enhancement of echo-signal correlation in elastography using temporal stretching,” IEEE Transactions on Ultra- sonics, vol. 44, no. 1, pp. 173–180, 1997. [34] S. K. Alam, J. Ophir, and E. E. Konofagou, “An adaptive strain estima- tor for elastography,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 45, no. 2, pp. 461–472, 1998. [35] E. E. Konofagou and J. Ophir, “A new elastographic method for esti- mation and imaging of lateral displacements, lateral strains, corrected axial strains and poisson’s ratios in tissues,” Ultrasound in Medicine and Biology, vol. 24, no. 8, pp. 1183–1199, 1998. 15 Chapter 1. References [36] F. Viola and W. Walker, “A spline-based algorithm for continuous time- delay estimation using sampled data,” IEEE Transactions on Ultrason- ics, Ferroelectics, and Frequency Control, vol. 52, no. 1, pp. 80–93, 2005. [37] R. Zahiri-Azar and S. Salcudean, “Time-delay estimation in ultrasound echo signals using individual sample tracking,” IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, no. 12, pp. 2640–2650, 2008. [38] G. Pinton and G. Trahey, “Continuous delay estimation with polyno- mial splines,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 53, no. 11, pp. 2026–2035, 2006. [39] S. Foster, P. Embree, and J. O’Brien, W.D., “Flow velocity profile via time-domain correlation: error analysis and computer simulation,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Con- trol, vol. 37, no. 3, pp. 164–175, 1990. [40] P. de Jong, T. Arts, A. Hoeks, and R. Reneman, “Determination of tissue motion velocity by correlation interpolation of pulsed ultrasonic echo signals,” Ultrasonic Imaging, vol. 12, no. 2, pp. 84–98, 1990. [41] L. Bohs, B. Geiman, M. Anderson, S. Breit, and G. Trahey, “Ensemble tracking for 2d vector velocity measurement: Experimental and initial clinical results,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 45, no. 4, pp. 912–924, 1998. [42] S. Srinivasan and J. Ophir, “A zero-crossing strain estimator for elastog- raphy,” Ultrasound in Medicine and Biology, vol. 29, no. 2, pp. 227–238, 2003. [43] H. Eskandari, S. Salcudean, and R. Rohling, “Tissue strain imaging using a wavelet transform-based peak search algorithm,” IEEE Trans- actions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 54, no. 6, pp. 1118–1130, 2007. [44] C. Kasai, K. Namekawa, A. Koyano, and R. 
Omoto, “Real-time two- dimensional blood flow imaging using an autocorrelation technique,” IEEE Transactions on Sonics and Ultrasonics, vol. 32, no. 3, pp. 458– 464, 1985. [45] T. Loupas, J. Powers, and R. Gill, “An axial velocity estimator for ul- trasound blood flow imaging, based on a full evaluation of the doppler 16 Chapter 1. References equation by means of a two-dimensional autocorrelation approach,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Con- trol, vol. 42, no. 4, pp. 672–688, 1995. [46] M. O’Donnell, A. Skovoroda, B. Shapo, and S. Emelianov, “Internal displacement and strain imaging using ultrasonic speckle tracking,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Con- trol, vol. 41, no. 3, pp. 314–325, 1994. [47] A. Pesavento, C. Perrey, M. Krueger, and H. Ermert, “A time-efficient and accurate strain estimation concept for ultrasonic elastography using iterative phase zero estimation,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 46, no. 5, pp. 1057–1067, 1999. [48] I. Hein and J. O’Brien, W.D., “Current time-domain methods for as- sessing tissue motion by analysis from reflected ultrasound echoes-a review,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Fre- quency Control, vol. 40, no. 2, pp. 84–102, 1993. [49] G. Pinton, J. Dahl, and G. Trahey, “Rapid tracking of small displace- ments with ultrasound,” IEEE Transactions on Ultrasonics, Ferro- electrics, and Frequency Control, vol. 53, no. 6, pp. 1103–1117, 2006. [50] E. E. Konofagou and J. Ophir, “Precision estimation and imaging of normal and shear components of the 3D strain tensor in elastography,” Physics in Medicine and Biology, vol. 45, no. 6, pp. 1553–1563, 2000. [51] A. Baghani, S. Salcudean, and R. Rohling, “Theoretical limitations of the elastic wave equation inversion for tissue elastography,” The Journal of the Acoustical Society of America, vol. 126, no. 3, pp. 1541–1551, 2009. [52] F. Kallel and J. Ophir, “Three-dimensional tissue motion and its effect on image noise in elastography,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 44, no. 6, pp. 1286–1296, 1997. [53] F. Viola, R. Coe, K. Owen, D. Guenther, and W. Walker, “Multi- dimensional spline-based estimator (muse) for motion estimation: Al- gorithm development and initial results,” Annals of Biomedical Engi- neering, vol. 36, no. 12, pp. 1942–1960, 2008. 17 Chapter 1. References [54] R. Zahiri-Azar and S. E. Salcudean, “Real-time estimation of lateral displacement using time domain cross correlation with prior estimates,” in 2006 IEEE Ultrasonics Symposium, 2006, pp. 1209–1212. [55] G. Giunta, “Fine estimators of two-dimensional parameters and appli- cation to spatial shift estimation,” IEEE Transactions on Signal Pro- cessing, vol. 47, no. 12, pp. 3201–3207, 1999. [56] R. Zahiri Azar, “Methods for the estimation of the tissue motion using digitized ultrasound echo signals,” Ph.D. dissertation, The University of British Columbia, 2009. [57] U. Techavipoo, Q. Chen, T. Varghese, and J. Zagzebski, “Estimation of displacement vectors and strain tensors in elastography using angular isonifications,” IEEE Transactions on Medical Imaging, vol. 23, no. 12, pp. 1479–1489, 2004. [58] M. Tanter, J. Bercoff, L. Sandrin, and M. Fink, “Ultrafast compound imaging for 2D motion vector estimation: application to transient elas- tography,” IEEE Transactions on Ultrasonics, Ferroelectrics and Fre- quency Control, vol. 49, no. 10, pp. 1363–1374, 2002. [59] O. D. 
Kripfgans, J. M. Rubin, A. L. Hall, and J. B. Fowlkes, "Vector doppler imaging of a spinning disc ultrasound doppler phantom," Ultrasound in Medicine and Biology, vol. 32, no. 7, pp. 1037–1046, 2006.

[60] M. Rao, Q. Chen, H. Shi, T. Varghese, E. Madsen, J. Zagzebski, and T. Wilson, "Normal and shear strain estimation using beam steering on linear-array transducers," Ultrasound in Medicine and Biology, vol. 33, no. 1, pp. 57–66, 2007.

[61] H. H. G. Hansen, R. G. P. Lopata, T. Idzenga, and C. L. de Korte, "High quality non-invasive full 2D strain tensor imaging using a beam steered linear array ultrasound transducer," in IEEE Ultrasonics Symposium, 2009, pp. 159–162.

[62] M. Fox, "Multiple crossed-beam ultrasound doppler velocimetry," IEEE Transactions on Sonics and Ultrasonics, vol. 25, no. 5, pp. 281–286, 1978.

[63] M. Nikoonahad and K. Chin, "Ultrasound transverse flow measurement with high lateral resolution," in IEEE Ultrasonics Symposium, vol. 2, 1988, pp. 995–998.

[64] K.-Y. Jhang and T. Sato, "Flow velocity field tomography using multiple ultrasonic beam detectors and high-order correlation analysis," The Journal of the Acoustical Society of America, vol. 86, no. 3, pp. 1047–1052, 1989.

[65] H. Yoshikawa, T. Azuma, K. Kawabata, Y. Taniguchi, and S. Umemura, "Three-dimensional tracking method of tissue motion with biplane images," Japanese Journal of Applied Physics, vol. 44, no. 6B, pp. 4561–4566, 2005.

[66] J. Bercoff, R. Sinkus, M. Tanter, and M. Fink, "3D ultrasound-based dynamic and transient elastography: first in vitro results," in IEEE Ultrasonics Symposium, vol. 1, 2004, pp. 28–31.

[67] R. M. Comeau, A. Fenster, and T. M. Peters, "Integrated MR and ultrasound imaging for improved image guidance in neurosurgery," in SPIE Conference on Image Processing, vol. 3338, 1998, pp. 747–754.

Chapter 2

Two Dimensional Motion Tracking with Dual Ultrasound Transducers

A version of this chapter has been submitted for publication: Jeffrey M. Abeysekera, Reza Zahiri-Azar, Orcun Goksel, Robert Rohling, and Septimiu E. Salcudean, "Two Dimensional Motion Tracking with Dual Ultrasound Transducers".

2.1 Introduction

Elastography is an imaging technique for depicting the mechanical properties of tissues [1–3]. The technique has a wide range of applications such as identification of breast lesions [4], detection of vulnerable atherosclerotic plaques [5], visualization of the prostate [6], myocardial imaging [7], and glaucoma diagnosis [8] among many others. Images of elasticity and other mechanical parameters are created by applying an excitation to the tissue, measuring the corresponding displacement field, and using the displacement field to infer the mechanical parameters of interest. For example, if tissue strain is used to estimate relative tissue elasticity, the gradient of the displacements is required and any noise or errors in the displacements will be exaggerated. This becomes more problematic in local inversion of the wave equation to estimate the absolute value of elasticity, where second-order gradients are required to estimate the Laplacian of the displacement data [9]. Clearly elastography results rely heavily on the quality of the displacement measurements.

Originally, motion tracking algorithms for elastography estimated motion only in the axial direction (along the axis of sound propagation). It is typical for the displacement to be measured by estimating a time-shift between sequences of backscattered RF echo signals. The techniques commonly consist of identifying the maximum of the correlation function between windows of RF data [10].
Alternatively, features of the RF signal can be tracked, such as the zero-crossings [11], the peaks [12], or the individual samples themselves [13]. These methods have demonstrated high accuracy and precision, with bias and variance on the order of microns. However, since real tissues deform in 3D, it is desirable to measure motion in more than one axis.

Many papers in the literature have looked at extending motion estimators to the lateral direction (within the scan plane and perpendicular to the axial direction). This can be achieved by incorporating a 2D search over a specified window of RF data [14]. The lateral estimation is more error prone since the sampling interval in the lateral direction (transducer element spacing) is greater than the sampling interval in the axial direction (digital sampling frequency); however, estimates can be improved by interpolating between the RF A-lines [15], fitting a 2D function to the RF data [16], or interpolating the correlation function between consecutive RF frames [17]. Unfortunately, since lateral sample spacing is typically an order of magnitude larger than axial sample spacing, none of the methods in the literature have shown lateral estimates on the same level of precision as axial estimates [18].

Another method for estimating lateral motion uses angular compounding. In this method, the region of interest is scanned from multiple view angles. One approach to obtain the view angles is to mechanically rotate a linear array or translate a phased array transducer [19]. The mechanical apparatus for the careful maneuvering of the transducer adds substantial complexity. Alternatively, the angular views can be obtained from a transducer using a single transmit and multiple receive apertures [20], or a transducer with multiple electronically steered transmit and receive angles [21, 22]. The displacement component along the beam direction is estimated using RF tracking techniques mentioned previously, and the final motion vector is reconstructed by compounding the displacement estimates from each steering angle. Results have shown an improvement in lateral tracking accuracy compared to 2D tracking methods; however, since one set of echo signals is required for each steering angle, the data acquisition time is proportional to the number of steering angles used. As recently suggested, at a cost of slightly poorer tracking performance, a high acquisition rate can be achieved by employing a single pair of steered angles [17, 23]. While the errors in the lateral estimation with a pair of steered frames are an improvement over 2D tracking methods, the lateral estimate is still worse than the axial estimate.

It is clear that the best angles for obtaining axial and lateral estimates would be 0° and 90°, respectively. Indeed, some researchers have suggested using large steering angles to improve lateral estimates [24]. Unfortunately, it is physically impossible to electronically steer a transducer to 90°, but using a second transducer oriented orthogonally to the first transducer achieves this goal. This is the basis of the proposed dual transducer method.
Although this provides 2D (not 3D) measurements, tissue mechanical parameters can be reconstructed from this 2D motion data alone [25]. One challenge of the proposed technique may be obtaining two clear viewpoints of the anatomy. Some applications, such as breast imaging, provide a range of viewpoints, so the dual source concept may prove beneficial in those applications.

There have been previous studies in the literature using multiple transducers for motion tracking [26–29]. In one study [29], two linear arrays were positioned in a perpendicular arrangement, or a biplane configuration, such that the beams intersected along a line. By obtaining 2D tracking estimates from each of the transducers, a 3D vector could be computed along the line of intersection of the two planes. In another study [28], two linear array transducers were oriented in a biplane configuration and mounted onto stepper motors which translated the transducers to scan a volume of tissue. An angular compounding technique was used to obtain 2D estimates for each transducer at each motor location to obtain a 3D volume of 3D motion estimates. These studies show some of the advantages that can be gained by using two transducers, but to the best of the authors' knowledge, work in the literature has not explored tracking with two transducers aligned into the same scan plane.

A preliminary investigation showed that the dual transducer method produces lower Root Mean Square Error (RMSE) in its motion estimates compared to a 2D tracking technique for translational motion applied at a 45° angle to each transducer [30]. It was also shown that alignment of the two transducers into a single scan plane remains a significant issue. The goal of this chapter is to further explore the accuracy and precision of the dual transducer method, and to compare it to other accurate 2D motion algorithms. Through simulation, this study will also investigate the measurement error as the angle between the transducers is varied. Plane misalignment effects and registration errors from Speed Of Sound (SOS) variations will also be investigated. Tracking performance will be tested experimentally using a simple calibration method for plane alignment.

2.2 Methods

In the following section, the techniques for modeling motion in a virtual phantom, simulating RF signals from the distributed scatterers, and applying the dual transducer method experimentally are explored. For this study, three motion estimation techniques are applied to estimate 2D motion: 2D tracking, 2D tracking with angular compounding, and the dual transducer method. The methods for estimating tissue motion from the RF signals for each of the three techniques are presented in the following sections. The methods for evaluating and comparing these tracking techniques for translational and compressional motions are also presented. A simulation of dual transducer misalignment is studied to test its sensitivity to calibration, and the improvements in strain images produced by the dual transducer method are evaluated.

2.2.1 Simulated Motion

For this study, translational and compressional motions were investigated separately. First, to evaluate the pure motion tracking accuracy of each method, we applied displacements with step sizes of 30 µm in the axial and lateral directions, creating an 11×11 grid with 121 distinct displacement poses. Note that from this point on, axial and lateral will refer to the axes defined in Fig. 2.1.
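For concreteness, the applied displacement grid can be written out directly. The short sketch below uses the 30 µm step size from the text; the grid origin and indexing are assumptions for illustration, since the text does not state them.

    import numpy as np

    step = 30e-6                            # 30 µm step size (m)
    offsets = np.arange(11) * step          # eleven offsets along one axis (one possible indexing)
    # All 121 axial/lateral displacement poses applied to the virtual phantom
    poses = [(axial, lateral) for axial in offsets for lateral in offsets]
    assert len(poses) == 121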
A 40 × 40 × 20 mm³ virtual phantom was simulated by randomly placing 40 scatterers per mm³ with random amplitudes assigned with a uniform distribution. The scatterer concentration is similar to what others have used in the literature for generating fully developed speckle [31]. The size of the elevational direction was chosen to envelop the beam width in that direction. The displacements were added to the nominal phantom positions to create images of the phantom at each of the 121 poses.

The second method of motion modeling studied the effects of mechanical compression and possible decorrelation noise. This was accomplished by solving a 3D model with the FEM software ANSYS (Ansys Inc., Canonsburg, PA, USA). A 10 × 10 × 20 mm³ cube inclusion was modeled in the center of the phantom (Fig. 2.1). Similar phantom and inclusion sizes have been used in other 3D elastography studies [32]. The inclusion had the same acoustic properties as the background but had a Young's modulus of 10 kPa compared to 2 kPa for the surrounding tissue. The Young's modulus values were chosen to correspond to results for adipose breast tissue and ductal carcinoma found by some researchers [33–35]. The FEM model assumed a Poisson's ratio of 0.499 to simulate the incompressible nature of most soft tissues. The FEM model used an element size of 1 mm³ and two compressions were applied: 0.1% and 1% in the axial direction. These compression amounts are typical of the amount applied in static elastography [36]. The compression was modeled by fixing the top side, touching the transducer surface, and moving the bottom side upwards. Considering that the interface between the transducer and a patient would be lubricated, the displacements were not constrained in the lateral or elevational directions.

Figure 2.1: The simulation setup for the compression motion model with a stiff cube inclusion (dark square). The phantom scatterers (red dots) and FEM mesh are down sampled by factors of 50 and 5 respectively for clarity in this figure. The primary transducer is directly in contact with the top of the phantom. Motion at the top of the phantom is constrained in the axial direction, but slip is allowed in the other two axes. The only lateral and elevational constraints are along the phantom's axis of symmetry. This was to create a stable condition for the FEM solver under the compression test. The model assumes there is a small gap between the secondary transducer and the phantom such that the sides of the phantom are unconstrained. This situation could occur if the phantom is submerged in a water bath or with sufficient coupling gel. The rotation axis used for the misalignment analysis is shown on the secondary transducer (yaw).

The post-deformation virtual phantom was created by interpolating the scatterer positions from the pre-deformation phantom to the displacements computed by the FEM. In generating the displacement field for the dual transducer method, a small fluid filled gap was placed between the second transducer and the phantom, allowing the edge to move freely as for the other tracking methods. Sound from the second transducer could still propagate into the tissue if a layer of ultrasonic coupling gel is placed between the transducer and the tissue surface, or if the tissue is submerged into a water bath similar to the clinical breast scanner used by some elastography researchers [28]. This also simplifies simulating multiple transducer angles.
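A scatterer phantom like the one described above can be generated in a few lines. The sketch below uses NumPy only and simply produces the scatterer positions and amplitudes that a point-scatterer field simulator such as Field II would consume; it does not reproduce the thesis's actual Field II/ANSYS pipeline, and the axis ordering is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(0)

    # Phantom dimensions and scatterer density from the text:
    # 40 x 40 x 20 mm^3 with 40 scatterers per mm^3 and uniform random amplitudes.
    size_mm = np.array([40.0, 40.0, 20.0])      # lateral, axial, elevational (assumed order)
    n_scat = int(40 * np.prod(size_mm))         # 40 scatterers per mm^3

    # Uniformly distributed positions, centred on the origin, in metres
    positions = (rng.uniform(size=(n_scat, 3)) - 0.5) * (size_mm * 1e-3)

    # Random amplitudes drawn from a uniform distribution, as in the text
    amplitudes = rng.uniform(size=n_scat)

    # 'positions' and 'amplitudes' are the inputs a point-scatterer simulator
    # (e.g. Field II's calc_scat-style routines in MATLAB) takes as its phantom;
    # applied displacements are simulated by shifting 'positions' before re-simulating.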
This is for the ease of simulating multiple transducer angles. 2.2.2 Simulation of RF Data Many methods of simulating RF signals for US have been proposed in the literature. For instance, convolution of a pre-determined point spread func- tion with the scatterers of a virtual phantom is a popular method [32, 37]. Here we use Field II, an acoustic pressure field simulation package, due to its accurate modeling of transducer properties, beam focusing, and wave propagation and interaction [38, 39]. The simulation approximates the hardware available on a SonixRP US machine (Ultrasonix Medical Corporation, Richmond, BC, Canada) and the linear array transducer (L14-5/38) used in the initial study [30]. The simu- lation models a linear array transducer with a 5 MHz center frequency and a center-to-center element spacing of 300 µm. A fixed elevational focus at a depth of 15 mm was included in the transducer geometry. The hardware was simulated with a sampling frequency of 40 MHz (≈ 20 µm sample spacing at a simulated speed of sound of 1540 m/s). The transmit focus was applied to the center of the phantom and dynamic receive focusing was used. A total of 128 RF lines were captured per frame to a depth capturing the entire virtual phantom. At each phantom configuration, two RF frames corresponding to steered angles of ±10◦, as well as one with zero steered angle were captured. The phantom was also rotated 90◦ about its center and an RF frame with zero steering angle was captured to simulate the perpendicular transducer for the dual transducer method. 25 2.2. Methods Figure 2.2: The cross-wire phantom used for alignment and calibration. The intersection of the Z-wires provides a visual point of reference for aligning the two transducers. 26 2.2. Methods 2.2.3 Experimental Setup A physical experiment requires the transducers to be aligned into a single coincident scanning plane. Since the relationship between the beam and the transducer housing is unknown, an image-based alignment protocol is preferred to a purely mechanical alignment based on the transducer dimen- sions. A custom-made cross-wire phantom was constructed containing two orthogonal Z-wire shapes crossing along their diagonals at the center of the phantom (Fig. 2.2). The phantom was designed to provide visual feedback to the user so that the transducers could be adjusted into their appropriate positions. Two US transducers were clamped orthogonally and translated in their elevation direction on manual linear stages. The whole setup was submerged in a water bath. The transducers were translated until a unique symmetric pattern in the B-mode images appeared, indicating the transduc- ers were scanning the center plane of the phantom. The B-mode images for each transducer were saved and processed off- line for calibration. Points determining wire locations were extracted by calculating an intensity weighted centroid in a user-defined region of interest around the echo created by the wire. A geometric model of the cross-wire phantom was used to find the pose of each transducer with respect to the phantom. Non-linear least squares optimization in Matlab (The Mathworks, Inc., Natick, MA, USA) was used to minimize the difference between the B- mode points and the points calculated in the geometric model until the best fit for the degrees of freedom of the US plane were found. Once the poses between transducer and phantom were found the relative transformation between the two image planes could be calculated. 
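As a rough illustration of this calibration step, the sketch below shows an intensity-weighted centroid for a user-selected region of interest and the general shape of the least-squares pose fit. The function predict_wire_points, the 6-DOF pose parameterization, and the SciPy-based optimization are assumptions made for illustration; they are not the thesis implementation.

import numpy as np
from scipy.optimize import least_squares

def roi_centroid(image, r0, r1, c0, c1):
    """Intensity-weighted centroid (row, col) of a rectangular ROI
    drawn around one wire echo in a B-mode image."""
    roi = image[r0:r1, c0:c1].astype(float)
    rows, cols = np.mgrid[r0:r1, c0:c1]
    w = roi.sum()
    return (rows * roi).sum() / w, (cols * roi).sum() / w

def pose_residuals(pose, image_points, predict_wire_points):
    """Difference between wire points measured in the image and the
    points predicted by the geometric phantom model for a candidate
    pose (three rotations plus three translations)."""
    return (predict_wire_points(pose) - image_points).ravel()

# Hypothetical usage: pose0 is an initial guess, image_points is an
# (M, 2) array of wire centroids converted to millimetres.
# fit = least_squares(pose_residuals, pose0,
#                     args=(image_points, predict_wire_points))
# transducer_to_phantom_pose = fit.x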
Motion tracking experiments were performed on a cylindrical phantom with a diameter of 45 mm and length of 30 mm. The phantom was prepared using 100% Polyvinyl Chloride (PVC) plasticizer (M-F Manufacturing Co., Inc., Fort Worth, TX, USA) mixed with 3% by mass cellulose (Sigma-Aldrich Inc., St. Louis, MO, USA) as scattering particles. The bottom of the phantom was gently pinched to keep it fixed to the mounting plate. The phantom was placed into a water bath with approximately 1 mm gaps from each of the transducer faces. Images were acquired with two linear array transducers (L12-5/38) on a SonixMDP US machine. The phantom was imaged to a depth of 40 mm with 128-element transducers with 300 µm lateral line spacing, operated at a frequency of 10 MHz. RF data were digitized at 20 MHz and upsampled by a factor of two to match the simulation sampling. On one transducer, frames were captured at a pair of steering angles (±10◦) in addition to acquiring 27 2.2. Methods normal RF frames at 0◦. RF frames were collected and processed off-line using Matlab. The water bath was mounted onto a motorized linear slide (T-LSR 150B) with 150 mm travel and 0.5 µm resolution (Zaber Technologies Inc., Van- couver, BC, Canada). The slide was controlled with a LabView (National Instruments Inc., Austin, TX, USA) Graphical User Interface (GUI). Step sizes of 30 µm translations were applied with the slide, up to a total of 150 µm from the starting position. Since the transducers were each mounted at a 45◦ angle relative to the slide, the applied motion was in 2D and equal in both axes. 2.2.4 2D Tracking The simulated RF frames at 0◦ steering angle are used for the 2D tracking method. The 2D tracking method is based on a time domain cross correla- tion algorithm using prior estimates (TDPE) of neighboring windows in the signal to bracket the search region [40]. The normalized cross correlation between the reference and motion RF frames is computed and 2D quadratic spline fitting is applied to the discrete correlation data. The peak of the function provides the motion estimate for the axial and lateral components of the motion vector. The window size is set to approximately 2 mm × 2 mm (208 samples axially and 13 samples laterally) with 50% overlap in the axial direction. 2.2.5 Angular Compounding For 2D tracking with angular compounding, the RF data for two equal but opposite steering angles (±θ), are input to the 2D tracking algorithm. It has been shown that for the hardware simulated here, the optimum pair of steered angles is ±10◦ [17]. The peak of the 2D quadratic spline function provides the estimate along and perpendicular to each of the steering direc- tions. After the displacements along the beam axis from each view angle are estimated (Dθa and D −θ a ), the two displacement images are scan-converted and spatially aligned using known transducer geometry and steering angle by interpolating on a Cartesian grid using cubic-splines. The axial and lat- eral components of the motion at each spatial location are calculated as [22]: Da = Dθa +D −θ a 2cosθ , (2.1) 28 2.2. Methods Dl = Dθa −D−θa 2sinθ . (2.2) 2.2.6 Tracking with Dual Transducers For dual transducer tracking, the motion component along the beam direc- tion for each transducer is estimated with the 2D tracking algorithm. Even though we are only interested in the displacement along the beam direction for each transducer, the correlation peak search is carried out in 2D be- cause it has shown better accuracy compared to 1D methods [18]. 
The axial displacement is the estimate from the primary transducer, and the lateral displacement is the estimate from the secondary transducer. The tracking parameters are set to be the same as for the 2D tracking method mentioned previously. 2.2.7 Non-Orthogonal Dual Transducers Up to this point we have only considered an orthogonal arrangement for the dual transducers since it is expected to provide the highest quality 2D motion tracking results. The orthogonal transducer arrangement may have clinical applications on anatomy, such as the breast, where the transducers can be placed around the region of interest. To apply the dual transducer method to other structures such as transabdominal imaging of the liver or kidney, the angle between the transducers could be decreased while still retaining a percentage of the quality of the lateral estimate of the orthogonal case. This provided the motivation to study the displacement error as a function of transducer angle. Transducer angles are incremented by 10◦ over a range of 40◦ to 90◦. The motion tracking procedure is the same as for the perpendicular case mentioned previously, but now the lateral estimate is not simply the motion measured by the secondary transducer. To calculate the lateral estimate, consider the true displacement vector ~d at point O in space, observed by a linear array US transducer rotated θ degrees from the vertical as shown in Fig. 2.3. Let uθ be a unit vector along an A-line intersecting point O. Ignoring noise, the observed displacement along the insonification direction of the transducer, qθ, is the projection of ~d onto ~uθ. The projection can be calculated as the dot product: 29 2.2. Methods O d q u θ θ θ Figure 2.3: The displacement estimate, qθ, of the applied motion vector, ~d, along the insonification direction, ~uθ, observed at point O for a transducer rotated at θ degrees from the vertical. The displacement estimate is the dot product between the applied motion vector and the unit vector along the beam direction of the transducer. 30 2.2. Methods qθ = ~d · ~uθ = daxcosθ + dlatsinθ, (2.3) where dax and dlat are the axial and lateral components of the displacement vector ~d, respectively. In this work, without loss of generality, we consider one of the transducers to be tracking the axial motion (θ = 0), thus observing the axial motion dax = q0◦ . Then, the lateral displacement component, dlat, for a given rotation angle between the two transducers, θ, is found as follows: dlat = qθ − q0◦cosθ sinθ . (2.4) 2.2.8 Motion Estimation Error A common method for evaluating the accuracy and precision of motion estimation or pure translational motion in the ultrasonic signal is calculating the bias and standard deviation of the estimates [37, 41]. In the translational motion test, the bias b, and the standard deviation σ, for each of the three tracking methods investigated here are calculated as: b ( D̂ ) = 1 N N∑ i=1 ( D̂ −D ) , (2.5) σ ( D̂ ) = √√√√√ 1 N N∑ i=1 ( D̂ − 1 N N∑ i=1 D̂ )2 , (2.6) where D̂ is the displacement estimate, D is the known applied displacement to the scatterers, and N is the number of displacement estimates in a region of interest located around the focal center of the phantom. Approximately 1000 estimate windows were used for these calculations in all the translation tests in this chapter. For the compression test, a different metric is required to evaluate the accuracy of the estimators. 
Bias is inappropriate because both positive and negative displacements are present in the deformed phantom, and summing the errors together could cancel out their effect. Standard deviation is also inappropriate because there is variation in the true displacement, and there- fore the estimate is different for every pixel. For these reasons, the RMSE was calculated using: 31 2.2. Methods RMSE ( D̂ ) = √√√√ 1 N N∑ i=1 ( D̂ −D )2 , (2.7) where the true motion, D, was interpolated to the window centers of the estimate from the displacements solved by the FEM. Approximately 2300 estimates were used for the calculations for compression tests. The region of interest spanned the entire depth of the phantom and the width was chosen to fit inside the region bounded by the steered beams. 2.2.9 Misalignment Simulation One of the key questions facing the practicality of the dual transducer method is its sensitivity to alignment and calibration. Here we refer to alignment as the physical adjustment of the transducers into the desired or- thogonal pose within a single plane, and calibration as the calculation of the pose of one transducer with respect to the other. There are several different errors that need to be considered. Three of them are discussed below. First, there is a limit to how accurately the two transducers can be manually ad- justed into a single plane. This can be done with motion stages using visual feedback from US images of a calibration phantom. While the error can be expected to be small, it is reasonable to assume that there will be some residual error. Residual error does not affect the underlying measurements along the axial and lateral axes since the alignment is only for positioning the transducers into a single plane, but the next two errors affect all of the measurement directions. The second type of error considered deals with calibration. Image fea- tures from B-mode images of a calibration phantom of known geometry are extracted to compute the poses of the transducers with respect to the phan- tom. This error models the ability of the calibration algorithm to accurately compute the true pose of the transducer. Since the calibration data is used to align the images from one transducer to another, inaccurate calibration will lead to misalignment of features. Dual transducers also suffer from alignment errors due to changes in SOS. Differences in SOS compared to the value assumed by the US machine will result in inaccurate depth placement. This will cause structures in the axial and lateral images to be misplaced relative to one another. This is problematic because when the two images are compared, the components of the 2D vector will not match up. It is worth noting that angular com- pounding methods will also suffer from these alignment errors, which will 32 2.2. Methods vary depending on the chosen steering angle. For this study, the misalignment is applied as a rotation about the yaw axis (Fig. 2.1), from 0◦ to 5◦. The yaw axis was chosen because, for the par- ticular geometry of the simulation model used in this study, it will produce the greatest lateral displacement estimation errors. The root mean square error between the lateral estimate of the misaligned transducer and the true displacement that was computed by the FEM will be used to evaluate the effect of misalignment. 2.2.10 Strain Estimation To obtain strain, a 9-point least squares strain estimator is used on the dis- placement data [42]. 
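A least-squares strain estimator of this kind fits a line to a small kernel of neighbouring axial displacement estimates and takes the slope as the local strain. The sketch below is a simplified one-dimensional illustration, assuming evenly spaced estimation windows and NumPy; it is not the exact implementation used in the thesis.

import numpy as np

def lsq_strain(displacement, window_spacing, kernel=9):
    """Least-squares strain estimate along one displacement column.
    For each sample, a line is fitted to `kernel` neighbouring
    displacement estimates and its slope is taken as the local strain.
    Samples too close to the edges are left as NaN."""
    half = kernel // 2
    z = np.arange(kernel) * window_spacing   # depths of the kernel samples
    strain = np.full(len(displacement), np.nan)
    for i in range(half, len(displacement) - half):
        d = displacement[i - half:i + half + 1]
        slope, _ = np.polyfit(z, d, 1)
        strain[i] = slope
    return strain

For a 2D displacement image the same fit would be applied column by column for axial strain (and row by row for lateral strain).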
Three metrics are used for quantitative comparison between the axial and lateral strain images produced by the three displacement tracking techniques.

For quantifying lesion detection, the elastographic contrast-to-noise ratio, CNRe, is used [43]:

CNR_e = \frac{2 (s_1 - s_2)^2}{\sigma_1^2 + \sigma_2^2},    (2.8)

where s_1 and s_2 are the average strains in the inclusion and background, respectively, and \sigma_1 and \sigma_2 are the standard deviations of the strain in those regions.

The precision of the strain estimation on a homogeneous phantom under compression can be quantified using the elastographic signal-to-noise ratio, SNRe [44]:

SNR_e = \frac{s}{\sigma},    (2.9)

where s is the average strain and \sigma is the standard deviation of the strain calculated over the entire image.

SNRe is a common measure of elastographic performance; however, it assumes that the strain estimates are unbiased. To overcome this, an elastographic root mean square error, RMSe, has also been proposed to complement SNRe calculations [12]. The error is defined as:

RMS_e = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (\hat{s}_i - S)^2},    (2.10)

where \hat{s}_i is the strain estimate at pixel i, S is the true strain, and N is the number of pixels in the region of interest. With this type of error, whether many strain estimates deviate from the real value of compression or only a few estimates have a substantial inaccuracy, the error will increase.

2.3 Results

The simulation results for the translational motion test are shown in Fig. 2.4. The biases of the motion estimates are displayed as vectors with an origin at their corresponding point on the 11×11 grid, and x and y components made up of the biases for the direction corresponding to each axis. Ellipses are used to visualize the standard deviation of the motion estimates. The radii of the ellipses correspond to the standard deviations of the motion estimates in the corresponding axes, and the ellipses are centered at the ends of their corresponding bias vectors.

Figure 2.4: Bias vectors and standard deviation ellipses for the (a) 2D tracking, (b) 2D tracking with angular compounding at ±10◦, and (c) dual transducer methods on an 11×11 motion grid. To improve visualization, the axial bias and standard deviation values are scaled by factors of 20 and 40, respectively, in all figures. Similarly, the lateral standard deviation is also scaled by 2.

The performance of all the techniques in terms of their mean absolute bias and mean standard deviations is summarized quantitatively in Table 2.1. The results show that 2D tracking with angular compounding has lower total bias than 2D tracking alone. This is consistent with previous results [17]. Fig. 2.4 also shows that dual transducer tracking produces both lower bias and lower standard deviation in the lateral direction over the entire grid compared to both 2D tracking and 2D tracking with angular compounding.

Table 2.1: Mean bias and standard deviations of motion estimation error (µm) in the axial and lateral directions using 2D tracking (2D), 2D tracking with angular compounding (AC), and dual transducer (DT) motion estimation techniques over an 11x11 motion grid.

Tracking   |b_ax|   |b_lat|   σ_ax    σ_lat
2D         0.15     3.26      0.060   4.44
AC         0.14     1.23      0.070   0.54
DT         0.15     0.15      0.060   0.07

The tracking performance of the three techniques with compressional motion is presented next. The displacement outcomes for the FEM (true motion) and US tracking methods are depicted in Fig. 2.5. In the lateral direction, the estimate for the dual transducer method almost exactly matches the true displacement. The angular compounding technique produces the next best estimate, resembling the proper displacement pattern but containing more variation, or jitter, throughout the image. The 2D tracking technique roughly captures the motion trend (although it should improve the axial estimate).

Figure 2.5: Typical axial (a) and lateral (b) displacement images from the 1% compression simulation for the (left to right) FEM model, 2D tracking method, angular compounding method, and dual transducer method. The dashed box on the FEM displacement profile indicates the region of interest used for displacement error calculations.

The RMSE for each of the methods at the two compression levels, with both the homogeneous and inclusion phantoms, is summarized in Table 2.2. The magnitude of the error is smaller at the 0.1% compression level, but relative to the applied motion, the errors scale well between the two levels. The smaller error at lower compressions can be predicted by referring back to Fig. 2.4, where, in the center of the grid corresponding to small amounts of motion, the bias values are smaller. The error increases for the stiff inclusion phantom compared to the homogeneous phantom. This is likely due to motion in the third dimension (the elevational axis), which is not accounted for by the algorithms used here. For the homogeneous phantom, the motions along the elevational axis are small and uniform. The inclusion causes the elevational motion to be non-uniform, and areas around the inclusion have higher displacement, which increases decorrelation of the RF signal produced by those scatterers that are translated out of the beam area. The difference in elevational motion over a 10 mm slice approximating the beam thickness is shown in Fig. 2.6.

Table 2.2: Root mean square error (µm) of motion estimation in the axial and lateral directions using 2D tracking (2D), 2D tracking with angular compounding (AC), and dual transducer (DT) motion estimation techniques for the compression test.

Phantom       Tracking   0.1% Compression          1% Compression
                         RMSE_ax    RMSE_lat       RMSE_ax    RMSE_lat
Homogeneous   2D         0.36       3.0            2.9        18
              AC         0.62       0.72           5.6        6.2
              DT         0.36       0.26           2.9        1.6
Inclusion     2D         0.37       4.4            3.1        22
              AC         0.57       0.98           5.0        9.0
              DT         0.37       0.28           3.1        2.2

Figure 2.6: The elevational motion profile for the (a) homogeneous and (b) inclusion phantoms over a 10 mm elevational slice to approximate the thickness of the US beam.

Results summarizing the strain image quality for the three methods are presented in Table 2.3. The dual transducer method has the highest CNRe and SNRe, as well as the lowest RMSe, demonstrating the advantages it can provide for elastography results.

Table 2.3: CNRe (dB), SNRe (dB), and RMSe (% strain × 10^-3) for 2D tracking (2D), 2D tracking with angular compounding (AC), and dual transducer (DT) motion estimation techniques at 1% compression.

Tracking   Axial                          Lateral
           CNRe     SNRe     RMSe         CNRe     SNRe     RMSe
2D         106.79   20.51    0.18         47.48    9.32     205
AC         103.63   20.05    0.20         60.10    16.55    121
DT         106.79   20.51    0.18         96.05    21.64    0.39
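For reference, the three image-quality metrics of Section 2.2.10 can be computed from strain images in a few lines. The sketch below assumes NumPy arrays of strain values; the conversion to decibels used for Table 2.3 (taken here as 10·log10 of the ratio) is an assumption about the reporting convention rather than something stated explicitly in the text.

import numpy as np

def cnr_e(strain_inclusion, strain_background):
    """Elastographic contrast-to-noise ratio, eqn (2.8)."""
    s1, s2 = strain_inclusion.mean(), strain_background.mean()
    return 2.0 * (s1 - s2) ** 2 / (strain_inclusion.var() + strain_background.var())

def snr_e(strain):
    """Elastographic signal-to-noise ratio, eqn (2.9)."""
    return strain.mean() / strain.std()

def rms_e(strain, true_strain):
    """Elastographic root mean square error, eqn (2.10)."""
    return np.sqrt(np.mean((np.asarray(strain) - true_strain) ** 2))

# Assumed dB conversion for the tabulated values:
# value_db = 10 * np.log10(value)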
The results presented next are from tests where the secondary transducer is placed not only at a 90◦ angle to the primary transducer, as in all the tests above, but also at angles decreased in steps down to 40◦. The RMSE of the lateral displacement estimate as a function of transducer angle is displayed in Fig. 2.7. As a reference, the error level of the angular compounding method from Table 2.2, for the 1% compression level on the phantom with a stiff inclusion, is also included in the figure. It can be seen that the dual transducer method provides smaller error from the perpendicular 90◦ case down to at least 40◦ when compared to angular compounding at ±10◦. The region of interest used for the calculation at each configuration is the overlapping area where the 40◦ arrangement intersects with the phantom. Although this results in a small region, it is required to make a consistent comparison between the different angles. Fig. 2.8 shows that the overlapping region between the images decreases with decreasing angle.

Figure 2.7: Root mean square error of the lateral displacement, as measured by the second transducer, plotted as a function of relative angle between the first and second transducer. The dashed line indicates the error level of the angular compounding method at ±10◦.

Figure 2.8: Boundary of the tracking region (dashed lines) for 80◦ down to 40◦ overlaid upon the simulated phantom. Arrow shows direction of decreasing transducer angle. Transducer is pictured at 80◦. The dark square represents the phantom inclusion.

The effect of transducer misalignment for the dual transducer orthogonal configuration is studied next using simulations. One transducer is rotated from its initially perfectly aligned position, and the resulting change in RMSE of the displacement estimates is shown in Fig. 2.9. The error rises rapidly with increasing misalignment as the relatively large axial component of the motion vector starts to be projected onto the misaligned transducer's estimation axis. Note that these calculations are for this particular phantom and its deformation, and are meant only to illustrate the size and nature of the misalignment problem. Different errors are expected for different test conditions and may be minimized by careful design, although some residual error will inevitably exist for both the angular compounding and dual transducer methods.

Figure 2.9: Lateral displacement root mean square error as a function of yaw misalignment.

Table 2.4: The overall performance of 2D tracking (2D), 2D tracking with angular compounding (AC), and dual transducer (DT) motion tracking methods, summarized by mean bias and standard deviations of error (µm) in the axial and lateral directions for experimentally measured rigid motion.

Tracking   |b_ax|   |b_lat|   σ_ax    σ_lat
2D         1.62     13.75     3.19    8.27
AC         4.81     8.26      5.10    2.62
DT         1.62     1.59      3.19    2.58

We can estimate the axial and lateral alignment error caused by SOS variation in our simulation model. Based on breast biopsy data, the SOS is assumed to be 1480 m/s for the normal background breast tissue and 1580 m/s for the inclusion [45]. A SOS of 1540 m/s is assumed by the US machine during scan conversion and image formation.
The depth seen in the image is the product of the true depth and the ratio of the US machine’s SOS and the tissue’s SOS. Using this relation, the maximum alignment error on our simulated phantom is under a millimeter in each axis. The results from the experimental translational motion test are presented next. The axial and lateral bias and standard deviation for the three tracking methods are summarized in Table 2.4. As with the simulations, the bias and standard deviation were calculated using equations (2.5) and (2.6). The bias and standard deviations are averaged over the six displacement frames from 0 to 150 µm. We see agreement between the simulation and experimental 41 2.4. Discussion results with the dual transducer exhibiting higher accuracy and precision. The calibration results show there is room for improvement in this step. The elevational offset between the transducers was 3.1634 mm, and the ro- tations about the axial, lateral, and elevational axes were 0.9447◦, 3.7634◦, and 1.0812◦ respectively. While at first glance it may look like these errors could be corrected by going back to the setup and moving one of the trans- ducers, it is not a trivial exercise as the degrees of freedom of the US plane cannot be manipulated directly, but are coupled. 2.4 Discussion The results presented in this chapter show that the proposed dual trans- ducer method outperforms other commonly used motion tracking methods. Although there are many other techniques for motion estimation, we have chosen methods to make the results directly comparable. For example, our angular compounding results did not include motion estimation at a beam angle of 0◦. A similar angular compounding method to that used in this study adds a third frame at 0◦ to the acquisition cycle to improve the axial estimate [23]. The ultimate accuracy for both axial and lateral motion esti- mation from a single transducer would be achieved with a large number of frames covering a range of angles. Similarly, multiple beam steering angles could also be applied to improve accuracy from each of dual transducers, or more transducers could be used. The trade offs include acquisition time and complexity of the apparatus. The fundamental question underlying all of these comparisons is: How does an angular compounding motion esti- mate compare to a non-compounded estimate? The results show that the non-compounded estimate is slightly better along the beam axis compared to the angular compounding estimate along the beam axis. It is therefore dependent on the application of the motion estimation what trade offs to make with accuracy, speed and complexity when designing transducer and beam geometry. A disadvantage of the dual transducer method when small angles are used is the loss of information at shallow depths due to the images only overlapping deeper in the tissue (Fig. 2.8). However, since one of the pri- mary benefits of supplementing breast cancer screening methods with elas- tography is enabling the detection of small deep lesions, the lack of visibility at shallower depths may be an acceptable trade-off. In applications other than breast scanning, such as transabdominal scanning where smaller dual transducer angles may be required, imaging depths can reach 10 to 15 cm. 42 2.4. Discussion Figure 2.10: Dual transducer method using convex array transducers. The relative angle between the transducers is 20◦ but there is still a large region of overlap (shaded gray). 
Arrows show a projection of the motion vector onto an RF line for each transducer. At these depths there will be greater image overlap in the areas of interest. This is in contrast to angular compounding techniques where there is less overlap for increasing depth. The dual transducer method could also be applied to convex transduc- ers. They could potentially provide advantages over linear array transducers since the overlapping region can be large even at small transducer angles, as seen in Fig. 2.10. In addition, the angles between RF lines in some areas of the overlap zone are closer to 90◦ than in the linear array case. Convex transducers would only require an extra step of scan conversion to convert from polar to Cartesian coordinates. The proposed calibration method is susceptible to image noise because it is a point-based method with only a few points defining the pose. The calibration can also be affected by SOS errors, however the speed in the water bath can be estimated by measuring the temperature and applying a polynomial model [46]. One more source of error is physical interference between the wires where they cross over causing differences between the real phantom and the geometric model used in the optimization procedure. For 43 2.4. Discussion Figure 2.11: A custom made dual transducer with a flat surface and an angled surface with 64 elements on each. 44 2.5. Conclusion these reasons it is possible that the poses calculated by the optimization algorithm may not be the true poses of the transducers. A more advanced calibration method may be required to better align the transducers. The current method required switching between the transducers manu- ally. A program to automatically switch between the transducers could be developed, however the mechanical relays used to switch transducers on the Ultrasonix hardware could be damaged if operated at high frequency. For future real-time operation of dual transducers, solid state relays could be implemented. Alternatively, a custom made transducer could be manufactured. Instead of using two separate transducers, there could be two surfaces on a single transducer; a flat surface and an angled surface (e.g. with 64 elements on each) such as in Fig. 2.11. This would eliminate the need for switching and would only require a custom beam forming sequence. There would be no need for calibration either as long as manufacturing errors are carefully controlled. 2.5 Conclusion In this study the performance of the proposed dual transducer technique was studied to determine its potential benefits over other 2D tracking tech- niques. When the transducers are orthogonal to each other, the dual trans- ducer method provides better lateral displacement estimation compared to 2D tracking and angular compounding. When the angle between the trans- ducers is reduced, the dual transducer method provides superior lateral dis- placement estimation down to at least 40◦ compared to angular compound- ing. In terms of transducer alignment, it has been shown that the lateral tracking error does not increase substantially within the range of expected registration error. It has also been found that the registration errors are most sensitive in the yaw direction, so special care must be taken to mini- mize this effect when designing the calibration system. Real-time synchronous operation of the two transducers will be addressed in future work so that the method can be used efficiently in a clinical set- ting. 
If the dual transducer is to be hand held, careful design will have to compensate for the increased size and weight of the transducer to make it ergonomic. In the next chapter, a new method of alignment and calibration is de- veloped attempting to improve on the misalignment found with the method used in this chapter. 45 References [1] J. Ophir, I. Cespedes, Y. Ponnekanti, Y. Yazdi, and X. Li, “Elastog- raphy: A quantitative method for imaging the elasticity of biological tissues,” Ultrasonic Imaging, vol. 13, no. 2, pp. 111–134, 1991. [2] L. Gao, K. Parker, and S. Alam, “Sonoelasticity imaging: Theory and experimental verification,” Journal of the Acoustical Society of Amer- ica, vol. 97, no. 6, pp. 3875–3886, 1995. [3] S. Catheline, F. Wu, and M. Fink, “A solution to diffraction biases in sonoelasticity: The acoustic impulse technique,” Journal of the Acous- tical Society of America, vol. 105, no. 5, pp. 2941–2950, 1999. [4] D. Melodelima, J. Bamber, F. Duck, J. Shipley, and L. Xu, “Elas- tography for breast cancer diagnosis using radiation force: System de- velopment and performance evaluation,” Ultrasound in Medicine and Biology, vol. 32, no. 3, pp. 387–396, 2006. [5] C. L. de Korte, A. F. W. van der Steen, E. I. Cespedes, G. Pasterkamp, S. G. Carlier, F. Mastik, A. H. Schoneveld, P. W. Serruys, and N. Bom, “Characterization of plaque components and vulnerability with in- travascular ultrasound elastography,” Physics in Medicine and Biology, vol. 45, no. 6, pp. 1465–1475, 2000. [6] S. Salcudean, D. French, S. Bachmann, R. Zahiri-Azar, X. Wen, and W. Morris, “Viscoelasticity modeling of the prostate region using vibro- elastography,” in Medical Image Computing and Computer-Assisted In- tervention - MICCAI 2006, 2006, pp. 389–396. [7] W.-N. Lee, C. M. Ingrassia, S. D. Fung-Kee-Fung, K. D. Costa, J. W. Holmes, and E. E. Konofagou, “Theoretical quality assessment of my- ocardial elastography with in vivo validation,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 54, no. 11, pp. 2233–2245, 2007. 46 Chapter 2. References [8] M. Tanter, D. Touboul, J.-L. Gennisson, J. Bercoff, and M. Fink, “High- resolution quantitative imaging of cornea elasticity using supersonic shear imaging,” IEEE Transactions on Medical Imaging, vol. 28, no. 12, pp. 1881–1893, 2009. [9] T. E. Oliphant, A. Manduca, R. L. Ehman, and J. F. Greenleaf, “Complex-valued stiffness reconstruction for magnetic resonance elas- tography by algebraic inversion of the differential equation,” Magnetic Resonance in Medicine, vol. 45, no. 2, pp. 299–310, 2001. [10] I. Cespedes, J. Ophir, H. Ponnekanti, and N. Maklad, “Elastography: Elasticity imaging using ultrasound with application to muscle and breast in vivo,” Ultrasonic Imaging, vol. 15, no. 2, pp. 73 – 88, 1993. [11] S. Srinivasan and J. Ophir, “A zero-crossing strain estimator for elastog- raphy,” Ultrasound in Medicine and Biology, vol. 29, no. 2, pp. 227–238, 2003. [12] H. Eskandari, S. Salcudean, and R. Rohling, “Tissue strain imaging using a wavelet transform-based peak search algorithm,” IEEE Trans- actions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 54, no. 6, pp. 1118–1130, 2007. [13] R. Zahiri-Azar and S. Salcudean, “Time-delay estimation in ultrasound echo signals using individual sample tracking,” IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, no. 12, pp. 2640–2650, 2008. [14] W. Walker, B. Friemel, L. Bohs, and G. 
Trahey, “Real-time imaging of tissue vibration using a two-dimensional speckle tracking system,” in 1993 IEEE Ultrasonics Symposium, vol. 2, 1993, pp. 873–877. [15] E. E. Konofagou and J. Ophir, “A new elastographic method for esti- mation and imaging of lateral displacements, lateral strains, corrected axial strains and poisson’s ratios in tissues,” Ultrasound in Medicine and Biology, vol. 24, no. 8, pp. 1183–1199, 1998. [16] F. Viola, R. Coe, K. Owen, D. Guenther, and W. Walker, “Multi- dimensional spline-based estimator (muse) for motion estimation: Al- gorithm development and initial results,” Annals of Biomedical Engi- neering, vol. 36, no. 12, pp. 1942–1960, 2008. 47 Chapter 2. References [17] R. Zahiri Azar, “Methods for the estimation of the tissue motion using digitized ultrasound echo signals,” Ph.D. dissertation, The University of British Columbia, 2009. [18] R. G. Lopata, M. M. Nillesen, H. H. Hansen, I. H. Gerrits, J. M. Thi- jssen, and C. L. de Korte, “Performance evaluation of methods for two-dimensional displacement and strain estimation using ultrasound radio frequency data,” Ultrasound in Medicine and Biology, vol. 35, no. 5, pp. 796–812, 2009. [19] U. Techavipoo, Q. Chen, T. Varghese, and J. Zagzebski, “Estimation of displacement vectors and strain tensors in elastography using angular isonifications,” IEEE Transactions on Medical Imaging, vol. 23, no. 12, pp. 1479–1489, 2004. [20] M. Tanter, J. Bercoff, L. Sandrin, and M. Fink, “Ultrafast compound imaging for 2D motion vector estimation: application to transient elas- tography,” IEEE Transactions on Ultrasonics, Ferroelectrics and Fre- quency Control, vol. 49, no. 10, pp. 1363–1374, 2002. [21] M. Rao, Q. Chen, H. Shi, T. Varghese, E. Madsen, J. Zagzebski, and T. Wilson, “Normal and shear strain estimation using beam steering on linear-array transducers,” Ultrasound in Medicine and Biology, vol. 33, no. 1, pp. 57–66, 2007. [22] O. D. Kripfgans, J. M. Rubin, A. L. Hall, and J. B. Fowlkes, “Vector doppler imaging of a spinning disc ultrasound doppler phantom,” Ul- trasound in Medicine and Biology, vol. 32, no. 7, pp. 1037–1046, 2006. [23] H. H. G. Hansen, R. G. P. Lopata, T. Idzenga, and C. L. de Korte, “High quality non-invasive full 2D strain tensor imaging using a beam steered linear array ultrasound transducer,” in IEEE Ultrasonics Sym- posium, 2009, pp. 159–162. [24] H. H. G. Hansen, R. G. P. Lopata, and C. L. de Korte, “Noninva- sive carotid strain imaging using angular compounding at large beam steered angles: Validation in vessel phantoms,” IEEE Transactions on Medical Imaging, vol. 28, no. 6, pp. 872–880, 2009. [25] H. Eskandari, S. Salcudean, R. Rohling, and J. Ohayon, “Viscoelastic characterization of soft tissue from dynamic finite element models,” Physics in Medicine and Biology, vol. 53, pp. 6569–6590, 2008. 48 Chapter 2. References [26] M. Fox, “Multiple crossed-beam ultrasound doppler velocimetry,” IEEE Transactions on Sonics and Ultrasonics, vol. 25, no. 5, pp. 281–286, 1978. [27] O. Bonnefous, “Measurement of the complete (3D) velocity vector of blood flows,” in IEEE Ultrasonics Symposium, vol. 2, 1988, pp. 795– 799. [28] J. Bercoff, R. Sinkus, M. Tanter, and M. Fink, “3D ultrasound-based dynamic and transient elastography: first in vitro results,” in IEEE Ultrasonics Symposium, vol. 1, 2004, pp. 28–31. [29] H. Yoshikawa, T. Azuma, K. Kawabata, Y. Taniguchi, and S. Umemura, “Thee-dimensional tracking method of tissue motion with biplane im- ages,” Japanese Journal of Applied Physics, vol. 44, no. 6B, pp. 
4561– 4566, 2005. [30] J. M. Abeysekera and R. Rohling, “Improved 2D motion tracking for elastography using dual transducers,” in Eighth International Confer- ence on the Ultrasonic Measurement and Imaging of Tissue Elasticity, 2009, p. 77. [31] M. Rao and T. Varghese, “Spatial angular compounding for elastog- raphy without the incompressibility assumption,” Ultrasonic Imaging, vol. 27, no. 4, pp. 256–270, 2005. [32] E. E. Konofagou and J. Ophir, “Precision estimation and imaging of normal and shear components of the 3D strain tensor in elastography,” Physics in Medicine and Biology, vol. 45, no. 6, pp. 1553–1563, 2000. [33] A. Sarvazyan, D. Goukassian, E. Maevsky, G. Oranskaja, A. Skovoroda, S. Emelianov, A. Klishko, G. Mironova, V. Sholokhov, and V. Ermilova, “Elasticity imaging as a new modality of medical imaging for cancer de- tection,” in International Workshop on Interaction of Ultrasound with Biological Media, 1994, pp. 69–81. [34] A. Samani, J. Bishop, C. Luginbuhl, and D. B. Plewes, “Measuring the elastic modulus of ex vivo small tissue samples,” Physics in Medicine and Biology, vol. 48, no. 14, pp. 2183–2198, 2003. [35] R. Sinkus, M. Tanter, T. Xydeas, S. Catheline, J. Bercoff, and M. Fink, “Viscoelastic shear properties of in vivo breast lesions measured by mr elastography,” Magnetic Resonance Imaging, vol. 23, pp. 159–165, 2005. 49 Chapter 2. References [36] J. Ophir, I. Cespedes, B. Garra, H. Ponnekanti, Y. Huang, and N. Mak- lad, “Elastography: Ultrasonic imaging of tissue strain and elastic mod- ulus in vivo,” European Journal of Ultrasound, vol. 3, no. 1, pp. 49–70, 1996. [37] F. Viola and W. Walker, “A spline-based algorithm for continuous time- delay estimation using sampled data,” IEEE Transactions on Ultrason- ics, Ferroelectics, and Frequency Control, vol. 52, no. 1, pp. 80–93, 2005. [38] J. A. Jensen and N. B. Svendsen, “Calculation of pressure fields from ar- bitrarily shaped, apodized, and excited ultrasound transducers,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 39, no. 2, pp. 262–267, 1992. [39] J. A. Jensen, “Field: A program for simulating ultrasound systems,” in 10th Nordic-Baltic Conference on Biomedical Imaging Published in Medical and Biological Engineering and Computing, vol. 34, 1996, pp. 351–353. [40] R. Zahiri-Azar and S. E. Salcudean, “Motion estimation in ultrasound images using time domain cross correlation with prior estimates,” IEEE Transactions on Biomedical Engineering, vol. 53, no. 10, pp. 1990–2000, 2006. [41] G. F. Pinton, J. J. Dahl, and G. E. Trahey, “Rapid tracking of small dis- placements with ultrasound,” IEEE Transactions on Ultrasonics, Fer- roelectrics, and Frequency Control, vol. 53, no. 6, pp. 1103–1117, 2006. [42] F. Kallel and J. Ophir, “A least-squares strain estimator for elastogra- phy,” Ultrasonic Imaging, vol. 19, no. 3, pp. 195–208, 1997. [43] T. Varghese and J. Ophir, “An analysis of elastographic contrast-to- noise ratio,” Ultrasound in Medicine and Biology, vol. 24, no. 6, pp. 915–924, 1998. [44] ——, “A theoretical framework for performance characterization of elastography,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 44, no. 1, pp. 164–172, 1997. [45] P. Edmonds, C. Mortensen, J. Hill, S. Holland, J. Jensen, P. Schattner, and A. Valdes, “Ultrasound tissue characterization of breast biopsy specimens,” Ultrasonic Imaging, vol. 13, no. 2, pp. 162–185, 1991. 50 Chapter 2. References [46] N. Bilaniuk and G. S. K. 
Wong, “Speed of sound in pure water as a function of temperature,” The Journal of the Acoustical Society of America, vol. 93, no. 4, pp. 2306–2306, 1993. 51 Chapter 3 Alignment and Calibration of Dual Ultrasound Transducers Using a Wedge Phantom2 3.1 Introduction Ultrasound image features are strongly dependent on the direction of the US beam relative to the reflectors. The point spread function of an US transducer is anisotropic. Image speckle arises from the coherent US source and varies with the location of the US source. For these reasons, there have been many techniques developed to combine two or more US images of the same region of interest but taken from different viewpoints. Previous publications are divided roughly into two categories: spatial compounding and elastography. Spatial compounding simply averages B-mode images from multiple viewpoints to reduce speckle and improve depiction of internal anatomy [1–3]. Modern systems often electronically steer the ultrasonic beam to ob- tain multiple viewpoints so compounding can be done quickly on a single transducer. However, for large steering angles (e.g. > ±15◦), the effective aperture size decreases and artifacts from grating lobes may reduce image quality [4]. This can be overcome by mechanically steering a transducer [5], mounting the transducer on an articulated arm [6], or using multiple transducers [7]. When using multiple transducers the key issue is physical alignment of the different imaging planes. Tracking of tissue motion, such as in the field of elastography, is most accurate along the axial beam direction. Estimation of the tissue properties from a set of motion estimates is improved with the use of lateral and ele- vational motion components [8, 9]. Electronic beam steering can be used to 2A version of this chapter has been submitted for publication. Jeffrey M Abeysekera and Robert Rohling, “Alignment and Calibration of Dual Ultrasound Transducers Using a Wedge Phantom”. 52 3.1. Introduction measure motion in different directions and the tracking can be compounded to improve estimation accuracy [10, 11]. However, optimal steering angles are again typically limited to approximately 10◦ to 15◦ [12]. A filtering technique has been developed to reduce grating artifacts to allow for steer- ing angles up to ±45◦, but the decreased aperture only allows for shallow imaging such as for carotid arteries [13]. Using two orthogonal transducers (dual transducers) has been proposed for approximately an order of magni- tude improvement in the accuracy of tracking internal tissue motion [14, 15]. The accuracy of motion tracking is a key issue for reconstructing elasticity values [16, 17]. Using elastography for identifying cancerous lesions in breast has shown promise but inconsistent results have limited its clinical adoption [18, 19]. A dual transducer setup is well suited for scanning anatomy such as the breast, and the advantages it provides may lead to an effective clinical diagnostic tool. Again, alignment of the image planes is the critical issue for dual transducers. There are two aspects to solving the problem of creating a dual trans- ducer system. The first is alignment, where the two transducers must be physically oriented in such a way that their image planes coincide. An im- age based alignment is preferred to a purely mechanical alignment of the transducer casings since the orientation of the piezoelectric crystals, and therefore the image planes, is unknown. 
Alignment requires adjustment of four DOF since two translations within the plane can be adjusted freely while still maintaining alignment. The second step in creating a dual transducer system is determining the transformation from one transducer’s coordinate frame to the other, which is generally referred to as calibration. Calibration requires determination of all six DOF. This is the first paper addressing 2D to 2D US alignment and calibration. Calibration is a well researched problem [20, 21]. Calibration refers to the localization of the image coordinates with respect to an external coordi- nate system, usually a position tracker attached to the transducer housing. The need for physical alignment of dual transducers poses a unique problem that cannot be solved by traditional calibration approaches. For example, most traditional calibrations are solved iteratively, over six DOF (three po- sitions and three orientations), but physical alignment cannot be performed in this manner because of the lack of decoupled control of all DOF using cas- caded motion stages, lack of visual feedback to guide alignment and need for relatively few iterations. For these reasons, a new technique is required that is suitable for both alignment and calibration of two transducers. Alignment will bring the two imaging planes into approximately the same plane, and calibration will determine the relative location of the image origins and axes. 53 3.1. Introduction Calibration will also provide a measure of the residual error in alignment. Although the alignment and calibration problem is unique, the solution can borrow aspects from previous work on calibration. Calibration typically involves scanning a phantom of known geometry, extracting features of the phantom from the US images, and using an optimization algorithm to con- verge to a solution of the transformation matrix. For example, single-point based phantoms can be constructed with a bead or pin [22], or two intersect- ing wires [23]. The point feature is imaged from several different orientations to create an over-determined system for the calibration parameters. Single- point methods are easy to construct, but can be time consuming since tens of images are needed for proper determination of the calibration matrix, and locating and centering the transducer over the target point requires ex- perience and skill. Wire-based phantoms, such as the N-fiducial phantom [24, 25], do not require as many images and can be easier to scan. One possible limitation with such phantoms is that the features do not appear as clear dots in US images but as blurred ellipses, making locating the points more difficult to localize accurately, and increasing the number of points required to reduce errors. Alternatively, planar structures can be scanned to produce lines in im- ages. Examples include the floor of a water bath, or the specialized Cam- bridge phantom [26]. Thin plastic plates can also be used to produce line features [27]. Line features produce a large number of data points which help to average out random signal fluctuations inherent in US. There are many other calibration methods, and aspects of performance not listed here (see for example Lindseth et al. [20], Mercier et al. [21]). The main point is that some aspects can be adopted to the problem of physical alignment. Some visual guide to assist to user in alignment is needed such as angled wires leading to a plane of beads [28]. 
Opposing wedges provide the advantage of easily centering the beam in the elevation direction (orthogonal to the scan plane) and providing line features instead of points [29, 30]. This chapter investigates the problem of physical alignment of two imag- ing planes from dual transducers and calibration to determine the position and orientation of one image coordinate system with respect to the other. We present a new method of alignment for dual US transducers using a novel multi-wedge phantom. The phantom provides visual cues to the operator to guide the alignment process so that alignment can be achieved in a few steps. The repeatability of the alignment and calibration steps are tested and calibration results are compared to a N-fiducial phantom. 54 3.2. Methods D A B C Figure 3.1: The US transducer (A) is mounted onto a linear position stage (D), a tilt stage (B), and a rotary stage (C). The stages allow for four DOF (red arrows) adjustment during the alignment process. 3.2 Methods 3.2.1 Acquisition System Two linear array transducers (L12-5/38) connected to a SonixRP US ma- chine, are mounted onto a combination of motion stages; a rotary position stage (10000 Series, Parker Automation, Cleveland, OH, USA), a kinematic tilt stage (NT58-869, Edmund Optics, Barrington, NJ, USA), and a linear position stage (NT53-383, Edmund Optics). In total the three stages pro- vide three rotational degrees of freedom and one translational. The motion stages allow the transducers to be adjusted until they are aligned, based on feedback from images of the custom designed phantom. The connection between the stages and the degrees of freedom provided by the stages are shown in detail in Fig. 3.1. 3.2.2 Phantom The phantom design is inspired by the wedges used by Gee et al. [29]. The phantom is immersed in a water-filled Plexiglas container (23 × 15 × 17 cm3) and is held fixed in the tank using cantilevered springs (Fig. 3.2(b)). Two opposing wedge pairs, one with 30◦slopes and the other with 60◦slopes, form the basis of the special phantom pattern as shown in Fig. 3.3. A copy 55 3.2. Methods Side 1 Side 2 Dowels Holes (a) (b) Figure 3.2: (a) The two sides that make up the calibration phantom. The dowels fixed in side two slide into the holes of side one to line up the crossing point of all the wedges. The dowels of side one are inserted into (b) cantilever springs. The hole and the slot are slightly undersized so the cantilevers are deflected when the phantom is inserted, securing the phantom in place. of the phantom pattern is oriented orthogonally to the original phantom pattern and precisely located with respect to the first pattern using dowels (Fig. 3.2(a)). This allows each transducer to image one of the patterns. The phantom is designed to orient both patterns at a 45◦ angle from the horizontal to allow both transducers to image from the top of a water-filled container (Fig. 3.4). The cross-over point of the all of the wedges from each pattern intersect at the same plane. Thus, the goal is to adjust the transducers until they image the cross-over point of the phantom pattern (Fig. 3.3). The phantom is designed to fully determine all six DOF of the pose of each transducer. Visual cues in the image provide an indication of misalign- ment in each of the four alignment DOF (z, α, β, and γ) as seen in Fig. 3.5. The in-plane position (x and y) is not important for alignment, but the two parameters are important for calibration and registration of the images. 
The x parameter is constrained by the small outward facing wedges located on the outside edges of the two wedge pairs and the y parameter is constrained by both the outside wedges and the two wedge pairs. The phantom was machined from a block of aluminum on a CNC mill. 56 3.2. Methods x y Z Y X z α β γ Figure 3.3: A 3D model of the phantom pattern in an isometric view. The US plane is shown in light blue. 57 3.2. Methods Figure 3.4: The calibration apparatus. Each transducer is mounted onto a four DOF stage which is suspended above a Plexiglas container using steel supports. The phantom is placed in the container and water provides the coupling medium between the transducers and phantom. 58 3.2. Methods Aligned z β γα Figure 3.5: Each of the DOF adjusted during alignment provide visual cues to the user to indicate misalignment. The wedge pairs will no longer line up for misalignment in z. Misalignment in α causes the floor to appear at an angle. Rotation in β causes the wedges to appear angled. When the image has misalignment in γ, one of the wedge pairs may be aligned by adjusting z, but the other pair will intersect at different depths. 59 3.2. Methods To improve image quality, the surface was sandblasted to increase roughness and enhance reflectivity. 3.2.3 Calibration The goal is to compute the transformation matrix between the coordinate systems of the two transducer images, B1 and B2. We know the transfor- mation between the two phantom wedge patterns, W1TW2 , from the manu- factured geometry. Here we adopt the notation used by Prager et al. [26], where WTB is the transformation from coordinate system B to coordinate system W . We can estimate the transformations B1TW1 and W2TB2 between the images and the wedge patterns using the B-mode images, the details of which will be described later. The transformation between the two im- ages can be expressed as a multiplication of homogeneous transformation matrices: B1TB2 = B1TW1 W1TW2 W2TB2 . (3.1) Each transformation contains six DOF. The rotation is effected by first rotating by γ around the x-axis, then through β around the y-axis, and finally α around the z-axis (the order of rotation is important). The transformations between images and wedge patterns are solved by extracting image features produced by the phantom and applying an opti- mization algorithm. The features available in the images will be the result of the image plane intersecting the planes of the wedge faces. Unlike two ideal planes intersecting at a line, the feature will appear as a blurred rectangle due to the thickness of the US beam in the elevational direction as seen in Figs. 3.5 and 3.6. We will use the vertical intensity profile to estimate the ‘center’ of the intersection. A GUI was created with Matlab to facilitate the feature extraction. B- mode images were processed and saved from the SonixRP for off-line use. The 8-bit images were loaded into the GUI and, using a computer mouse, the user selected a rectangular ROI around the echo pattern produced by each wedge. The images are processed by smoothing the intensities I(y) for every pixel depth y, in every line of the image with a 1D Gaussian filter with a standard deviation of σ: s(y) = gσ(y) ∗ I(y), (3.2) where gσ(y) = 1 σ √ 2pi exp ( − y 2 2σ2 ) . (3.3) 60 3.2. Methods Figure 3.6: B-mode image of the phantom with the transducer misaligned. 
The intensity weighted centroid for each line is calculated using the smoothed intensities, s(y):

c_y = \frac{\sum_{y=y_{\min}}^{y_{\max}} s(y)\, y}{\sum_{y=y_{\min}}^{y_{\max}} s(y)},    (3.4)

where y_min and y_max are the minimum and maximum pixel depths in the ROI, respectively. A line can be fit to the centroid points using linear regression; however, considering the signal-to-noise ratio of the wedge images, it is necessary to use an algorithm with outlier rejection. To fit the line to the centroid data, RANSAC was chosen [31]. The effect of using RANSAC compared to a direct least-squares fit is shown in Fig. 3.7.

Figure 3.7: RANSAC fitting can reduce the influence of significant errors. Here, near the edges of the selected ROI (red), the vertical centroids (green) fall rapidly, producing an inaccurate representation of the wedge. The cyan line is a least squares fit to all of the vertical centroids in the ROI. Blue stars indicate centroids that are considered inliers by the RANSAC algorithm, and the magenta line is the least squares fit to the inliers. The inliers do not include the stray centroids at the edges of the ROI, thus providing a more realistic estimate of the intersection between the US plane and the wedge.

The pixel scaling factors to convert between pixels and millimeters are obtained directly from the SonixRP interface. For calculating the depth of received echoes, the SonixRP assumes a SOS of 1540 m/s, which corresponds to a mean SOS for soft tissues. The alignment is performed with distilled water at room temperature, whose SOS can be estimated using a polynomial model [32]. To correct for the discrepancy between the SOS assumed by the US machine and the SOS of the water, the echoes are shifted towards the transducer by a factor of:

k = \frac{SOS_{H_2O}(T)}{1540},    (3.5)

where SOS_{H_2O}(T) is the SOS calculated by the polynomial model at the measured water temperature, T.

To determine the transformation between the image and the phantom, non-linear least squares optimization is performed with Matlab. The algorithm takes the centroid inliers found by RANSAC as input and compares them to a mathematical description of the phantom geometry while varying all six DOF. For a given set of DOF, the theoretical lines produced by the intersection of a plane representing the image with the wedge faces are computed. The objective function minimizes the perpendicular distance between each inlier centroid point from the B-mode image and that point's corresponding theoretical intersection line. The function returns a vector containing the optimized DOF.

As mentioned earlier, to obtain the transformation between the two transducers' image planes we apply matrix multiplication as in eqn. 3.1. This results in a 4×4 homogeneous matrix of the form:

{}^{B_1}T_{B_2} = \begin{bmatrix} P_{11} & P_{12} & P_{13} & P_{14} \\ P_{21} & P_{22} & P_{23} & P_{24} \\ P_{31} & P_{32} & P_{33} & P_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix}.    (3.6)

It is desirable to describe the transformation between the image planes using the DOF rather than just the numbers provided by the homogeneous matrix. The position DOF (x, y, and z) are the elements in the last column and first three rows of eqn. 3.6. Using the algebraic relation for the rotation scheme we have used (see Appendix A), the orientation DOF are determined using least squares.

3.2.4 Validation

A sensitivity analysis was performed to determine whether the method is accurate and free from large changes in output when reasonable errors are added to the input. For the sensitivity analysis we wished to test the mathematical model and the optimization algorithm.
3.2.4 Validation

A sensitivity analysis was performed to determine whether the method was accurate and free from large changes in output when reasonable errors were added to the input. For the sensitivity analysis we wished to test the mathematical model and the optimization algorithm. A computer simulation ran the procedure for 1000 trials. Feature extraction errors were simulated by applying a uniformly distributed random error to the centroid points for each trial. To determine an appropriate range for the error distribution, the centroids for a test image of the wedges were calculated and the maximum distance from the mean was used as the limit. The solution to an optimization problem depends on a good initial guess to avoid converging to a false solution in a local minimum. To account for error in the initial guess, uniformly distributed random errors over a moderate range were applied to each of the six DOF; the angular parameters were varied by ±18◦, and the translational parameters were varied by ±10% from the known solution.

Repeated manual alignment trials tested the feasibility and repeatability of aligning the two transducers into a single plane. The difference between the solved DOF and the ideal DOF (α = 90◦, β = 0◦, γ = 0◦, z = 0 mm) is the alignment error, as determined by the wedge-based calibration. The alignment error for the four DOF should be close to zero and have a small range.

For further validation we constructed a commonly used N-fiducial calibration phantom [20, 25]. The phantom was built by drilling holes into opposite sides of a Plexiglas container and threading 0.52 mm diameter nylon wire through the holes to create two N patterns.

The N-fiducial phantom cannot be used for evaluation of manual alignment because it is not suitable for visually determining alignment. However, it can be used to compare the quality of calibration with the wedge-based phantom. To perform validation, the wedge phantom is removed and replaced with the N-fiducial phantom after an alignment. An image of the N-fiducial phantom from each transducer is acquired and saved for off-line processing. Three focal depths are used to improve resolution. To reduce speckle noise and artifacts, an opening operation with a 1×18 pixel morphological element is performed on the images [33], followed by user-defined thresholding. The effect of applying these operations can be seen in Fig. 3.8. After noise reduction, the wire points are localized in the image frame by calculating an intensity-weighted centroid in a 2×2 mm² rectangular area around each ellipse where the user clicks. Similar to the wedge phantom, a mathematical description of the wire geometry was created and linear least squares optimization was used to minimize the distance between the centroid points in the image and the calculated intersection points for a given set of DOF. While a closed-form solution could be used, since we are not interested in real-time performance, the optimization method is satisfactory. The standard deviation of the six DOF over multiple calibrations gives a measure of calibration precision.

The N-fiducial phantom also provides a means of evaluating the accuracy of the wedge phantom calibration. The DOF solved by the wedge calibration are used to map the N-fiducials from one transducer's image to the other transducer's N-fiducial image. The N-fiducial points should map to the same spot, and the distance between the mapped and true points provides an accuracy error.

Figure 3.8: B-mode image of the N-fiducial phantom (a) before processing, (b) after opening, and (c) after thresholding.
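As a rough illustration of the noise-reduction step shown in Fig. 3.8, the sketch below applies a grey-level opening with a 1×18 pixel element followed by thresholding, assuming the B-mode image is an 8-bit NumPy array. The orientation of the structuring element and the threshold value are assumptions for illustration, not values from the thesis.

```python
import numpy as np
from scipy.ndimage import grey_opening

def preprocess_nfiducial(img, threshold=60):
    """Suppress speckle and thin artifacts, then keep only bright wire echoes."""
    # Morphological opening with a 1x18 pixel structuring element; size is given
    # here as (rows, columns), and the orientation is an assumption.
    opened = grey_opening(img, size=(1, 18))
    # User-defined thresholding; 60 is a placeholder value.
    return np.where(opened >= threshold, opened, 0)
```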
3.3 Results

The results from the sensitivity analysis for each of the six DOF after simulating 1000 trials are plotted in Fig. 3.9. The method is shown to be unbiased. Confidence intervals are shown as bars to indicate the range of uncertainty; the confidence intervals assume a t-distribution for the solutions provided by the algorithm. The results indicate that the method is not very sensitive to errors in feature extraction and initial guess.

Figure 3.9: The results from 1000 simulated trials with errors added to the centroid locations and initial starting point for the optimization. The bias for each DOF is plotted in black dots with 95% confidence intervals shown with bars.

Experimental results were obtained with the wedge phantom for six independent alignment trials. The residual error in the alignment of the two transducers is summarized in Table 3.1. The mean of the absolute value of the error is under 1◦ in all three orientation DOF and under 1 mm in the elevation direction.

Table 3.1: Summary of the alignment errors in the orientation and position DOF for six independent trials

          α (deg)   β (deg)   γ (deg)   z (mm)
  Mean     0.40      0.71      0.55      0.56
  Max      0.79      1.0       1.0       1.2
  Min     -1.7      -0.31     -0.85     -0.65

The following results compare the precision of the calibration solution for the wedge and N-fiducial methods. A single B-mode image of the wedge phantom was loaded and segmented 15 times. The same procedure was repeated for the N-fiducial phantom. The standard deviations of the solved DOF are reported in Table 3.2.

Table 3.2: Standard deviation of the three orientation DOF (degrees) and three position DOF (mm) after 15 trials on a single image for the wedge and N-fiducial phantoms.

            Wedges   N-Fiducials
  α (deg)    0.20       0.28
  β (deg)    0.14       0.11
  γ (deg)    0.26       0.15
  x (mm)     0.48       0.16
  y (mm)     0.22       0.20
  z (mm)     0.42       0.49

The results in Table 3.3 describe the wedge calibration accuracy. Accuracy is tested by using the solved DOF to map the N-fiducial points as seen by one transducer onto the N-fiducial points as seen by the other transducer. The statistics are for six independent alignment trials.

Table 3.3: Statistics of the errors in accuracy of mapping N-fiducial points from one transducer's pose to the other, using the DOF solved by the wedge phantom.

          Error (mm)
  Mean       1.2
  SD         0.57
  Max        2.5
  Min        0.47

3.4 Discussion

The precision of the wedge-based solution is very similar to the N-fiducial solution, as seen in Table 3.2. This demonstrates that even though the wedges appear as blurred rectangles in the image, reliable data can be repeatably extracted from them. Of course, the calibration parameters may be incorrect, but this test shows that both methods provide stable solutions. The largest difference in precision occurs for the x parameter. For the wedge solution, the x parameter is constrained only by the outward facing wedges. The outward facing wedges were located at the very bottom of the image, close to the maximum depth setting of the transducer and far from the focal region. The resolution of the image in this region was likely poorer. Additionally, the interface between the outward facing wedges and the wedge pairs produced a strong echo which dominated the local region of the image.

The wedge phantom calibration provides an acceptable level of accuracy as reported in Table 3.3.
The mean error of 1.2 mm is close to the spatial resolution of the transducers. The points extracted from the N-fiducial images are based on centroid calculations and may not correspond to the true wire locations. For these reasons the level of error is expected. Since there is no true gold standard for evaluating the calibration accuracy, further study is required to evaluate the effect on the performance of dual transducer systems for tissue motion tracking.

There were many design considerations that went into manufacturing the phantom. For example, smaller angles were initially considered because they are easier to manufacture; however, larger angles provide more sensitivity to misalignment. The angles chosen for the wedges created standard 30-60-90 triangles. This allowed the use of readily available precision ground angle blocks. The simulation results in Fig. 3.9 show that the angles chosen provided reasonable accuracy and precision. Theoretically, any angle of wedge could be machined using a 4-axis CNC mill and the angles could be optimized for calibration performance. The gaps included between the wedges were added for easier segmentation of each of the wedges and allowed visualization of the floor. Unfortunately, the gaps increased manufacturing difficulty because a deep cut was required for a thin slot, causing tool deflection.

The wedge patterns were manufactured out of a single piece of aluminum because it helped ensure high tolerances in their relative positions. Unfortunately, this also increased the complexity of the job. Alternatively, each wedge could be machined individually and bonded to the base of the phantom. This would require a carefully designed jig to hold the wedges in the correct place until the bond sets. The jig would likely also be difficult to manufacture, but would be worthwhile for producing large volumes of the wedge phantom. Note that the tight tolerances on the wedges are not only required for accurate modeling of the wedge geometry in the optimization procedure (which could be accounted for by correcting the model with measurements from a coordinate measuring machine), but also for providing the appropriate visual feedback. It is of highest importance that the wedges cross over to create a plane.

The wedge phantom presented here has a significantly higher cost and manufacturing time compared to an N-fiducial phantom. Is the complicated design worth it? The main advantage of the wedge phantom is that it provides visual feedback to the user. The human vision system is known to be good at recognizing features such as symmetry [34]. When close to alignment, the user can iteratively make small changes to each motion stage to improve the symmetry of the wedge features. In an N-fiducial phantom, symmetry is not affected as much by small angular changes. A system could be envisioned that automatically segments feature points from an N-fiducial phantom and calculates the image pose [33]. One could then argue that, instead of using visual feedback, the image pose calculation could be used as feedback and the alignment could be accomplished that way. However, knowledge of the image pose is not enough information on its own to know the necessary adjustments for the motion stages. The axes of the motion stages do not coincide with the axes of the image plane, which are unknown relative to the transducer housing. Therefore, the DOF of the image plane are not directly adjustable.
Since an adjustment to a motion stage causes an unknown change in the image DOF, adjustments would have to be made through trial and error. It is unlikely that an operator could converge to an aligned state. Further increasing the difficulty is that the DOF of the motion stages are actually coupled, so any adjustment to one axis will require compensation in another.

Leotta [28] designed a calibration phantom with a set of beads placed on a plane of wires. Additional crossing wires helped guide the alignment to the correct plane. Rotational misalignment was detected by asymmetry in the image. Once the rotational misalignments were corrected, the transducer was translated in the elevation direction until the strings and beads appeared equally bright. This phantom provided visual feedback to guide the alignment; however, since the beam in the elevation direction has some thickness, the beads and strings will be visible over a range of elevational motion. For example, one study reported beam thickness values of 10 mm at 2 cm depth, down to 3.5 mm at the 8 cm fixed focal depth, and back up to 5.5 mm at 12 cm depth for a 3.5 MHz phased array transducer [35].

The wedge alignment presented in this work differs from the wedge calibration previously published in the literature. Gee et al. [29] used three wedge pairs to align an image plane to parallel wires forming a plane. It is assumed that the alignment procedure correctly aligns the planes, and only the in-plane DOF (two position and one rotation) are extracted from the image. The DOF are solved independently; the angle of the wires provides the rotational component and the location of the wedges provides the position components. Boctor et al. [30] used sets of three wedge pairs to estimate the transformation of an US transducer in different poses. Similarly, the DOF are estimated independently. In this case, the out-of-plane translation is estimated from the rectangle separation, and the out-of-plane rotation is measured using a ratio of the rectangle sizes. In contrast, the techniques presented in this chapter use all of the information from the image together to solve for all six DOF simultaneously.

The alignment apparatus used in this chapter (see Fig. 3.4) would be difficult to use in a clinical setting. To make the system more practical for physicians, the transducers could be bonded or mechanically fixed after alignment on the apparatus. The transducers could then be removed and used as a hand-held system.

3.5 Conclusion

We have presented a system for alignment of two US transducers into a single scan plane for use as dual transducers. After repeated manual alignments, the system has demonstrated a mean misalignment error of under 1◦ in the rotation axes and under 1 mm in the elevation direction. This is an improvement over the simple alignment system presented in Chapter 2. The accuracy of the proposed alignment method is not sensitive to changes in the imaging parameters of the US system, which is encouraging for use on a wide range of systems and applications. Future work will focus on further improving the clarity of image features with the wedge phantom, and on using an aligned and calibrated dual transducer system for research on spatial compounding and elastography.

References

[1] S. K. Jespersen, J. E. Wilhjelm, and H. Sillesen, "In vitro spatial compound scanning for improved visualization of atherosclerosis," Ultrasound in Medicine and Biology, vol. 26, no. 8, pp. 1357–1362, 2000.

[2] S.
Huber, M. Wagner, M. Medl, and H. Czembirek, “Real-time spatial compound imaging in breast ultrasound,” Ultrasound in Medicine and Biology, vol. 28, no. 2, pp. 155–163, 2002. [3] A. R. Groves and R. N. Rohling, “Two-dimensional spatial compound- ing with warping,” Ultrasound in Medicine and Biology, vol. 30, no. 7, pp. 929–942, 2004. [4] J.-Y. Lu, H. Zou, and J. F. Greenleaf, “Biomedical ultrasound beam forming,” Ultrasound in Medicine and Biology, vol. 20, no. 5, pp. 403– 428, 1994. [5] M. H. Choi, M.-H. Bae, and R. Y. Yoon, “Spatial compounding for rotating linear probe in the presence of parameter error using image registration,” in Medical Imaging 2008: Ultrasonic Imaging and Signal Processing, vol. 6920. SPIE, 2008, pp. 69 200L1–69 200L9. [6] A. Hernandez, O. Basset, P. Chirossel, and G. Gimenez, “Spatial com- pounding in ultrasonic imaging using an articulated scan arm,” Ultra- sound in Medicine and Biology, vol. 22, no. 2, pp. 229–238, 1996. [7] D. E. Dick, R. D. Elliott, R. L. Metz, and D. S. Rojohn, “A new auto- mated, high resolution ultrasound breast scanner,” Ultrasonic Imaging, vol. 1, no. 4, pp. 368–377, 1979. [8] E. E. Konofagou and J. Ophir, “Precision estimation and imaging of normal and shear components of the 3D strain tensor in elastography,” Physics in Medicine and Biology, vol. 45, no. 6, pp. 1553–1563, 2000. [9] H. Eskandari, S. Salcudean, R. Rohling, and J. Ohayon, “Viscoelastic characterization of soft tissue from dynamic finite element models,” Physics in Medicine and Biology, vol. 53, pp. 6569–6590, 2008. 72 Chapter 3. References [10] M. Rao, Q. Chen, H. Shi, T. Varghese, E. Madsen, J. Zagzebski, and T. Wilson, “Normal and shear strain estimation using beam steering on linear-array transducers,” Ultrasound in Medicine and Biology, vol. 33, no. 1, pp. 57–66, 2007. [11] R. Zahiri Azar, “Methods for the estimation of the tissue motion using digitized ultrasound echo signals,” Ph.D. dissertation, The University of British Columbia, 2009. [12] M. Rao and T. Varghese, “Estimation of the optimal maximum beam angle and angular increment for normal and shear strain estimation,” IEEE Transactions on Biomedical Engineering, vol. 56, no. 3, pp. 760– 769, 2009. [13] H. H. G. Hansen, R. G. P. Lopata, T. Idzenga, and C. L. de Korte, “Full 2D displacement vector and strain tensor estimation for superficial tissue using beam-steered ultrasound imaging,” Physics in Medicine and Biology, vol. 55, no. 4, pp. 3201–3218, 2010. [14] R. J. Dickinson and C. R. Hill, “Measurement of soft tissue motion us- ing correlation between a-scans,” Ultrasound in Medicine and Biology, vol. 8, no. 3, pp. 263–271, 1982. [15] J. M. Abeysekera and R. Rohling, “Improved 2D motion tracking for elastography using dual transducers,” in Eighth International Confer- ence on the Ultrasonic Measurement and Imaging of Tissue Elasticity, 2009, p. 77. [16] M. M. Doyley, P. M. Meaney, and J. C. Bamber, “Evaluation of an iterative reconstruction method for quantitative elastography,” Physics in Medicine and Biology, vol. 45, no. 6, p. 1521, 2000. [17] P. E. Barbone and A. A. Oberai, “Elastic modulus imaging: some exact solutions of the compressible elastography inverse problem,” Physics in Medicine and Biology, vol. 52, no. 6, p. 1577, 2007. [18] C. Pellot-Barakat, M. Sridhar, K. K. Lindfors, and M. F. Insana, “Ul- trasonic elasticity imaging as a tool for breast cancer diagnosis and research,” Current Medical Imaging Reviews, vol. 2, pp. 157–164(8), 2006. [19] E. S. Burnside, T. J. Hall, A. M. Sommer, G. K. 
Hesley, G. A. Sisney, W. E. Svensson, J. P. Fine, J. Jiang, and N. J. Hangiandreou, “Differ- 73 Chapter 3. References entiating Benign from Malignant Solid Breast Masses with US Strain Imaging,” Radiology, vol. 245, no. 2, pp. 401–410, 2007. [20] F. Lindseth, G. A. Tangen, T. Langø, and J. Bang, “Probe calibra- tion for freehand 3-D ultrasound,” Ultrasound in Medicine and Biology, vol. 29, no. 11, pp. 1607–1623, 2003. [21] L. Mercier, T. Langø, F. Lindseth, and L. D. Collins, “A review of cal- ibration techniques for freehand 3-D ultrasound systems,” Ultrasound in Medicine and Biology, vol. 31, no. 2, pp. 143–165, 2005. [22] A. State, D. Chen, C. Tector, A. Brandt, H. Chen, R. Ohbuchi, M. Ba- jura, and H. Fuchs, “Case study: Observing a volume rendered fetus within a pregnant patient,” in IEEE Conference on Visualization, 1994, pp. 364–368. [23] P. R. Detmer, G. Bashein, T. Hodges, K. W. Beach, E. P. Filer, D. H. Burns, and D. E. S. Jr., “3D ultrasonic image feature localization based on magnetic scanhead tracking: In vitro calibration and validation,” Ultrasound in Medicine and Biology, vol. 20, no. 9, pp. 923–936, 1994. [24] R. M. Comeau, A. Fenster, and T. M. Peters, “Integrated MR and ultrasound imaging for improved image guidance in neurosurgery,” in SPIE Conference on Image Processing, vol. 3338, 1998, pp. 747–754. [25] N. Pagoulatos, D. R. Haynor, and Y. Kim, “A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer,” Ultra- sound in Medicine and Biology, vol. 27, no. 9, pp. 1219–1229, 2001. [26] R. W. Prager, R. N. Rohling, A. H. Gee, and L. Berman, “Rapid cali- bration for 3-D freehand ultrasound,” Ultrasound in Medicine and Bi- ology, vol. 24, no. 6, pp. 855–869, 1998. [27] A. Viswanathan, E. M. Boctor, R. H. Taylor, G. Hager, and G. Fichtinger, “Immediate ultrasound calibration with three poses and minimal image processing,” in Medical Image Computing and Computer-Assisted Intervention - MICCAI 2004, 2004, pp. 446–454. [28] D. F. Leotta, “An efficient calibration method for freehand 3-D ultra- sound imaging systems,” Ultrasound in Medicine and Biology, vol. 30, no. 7, pp. 999–1008, 2004. 74 Chapter 3. References [29] A. H. Gee, N. E. Houghton, G. M. Treece, and R. W. Prager, “A me- chanical instrument for 3D ultrasound probe calibration,” Ultrasound in Medicine and Biology, vol. 31, no. 4, pp. 505–518, 2005. [30] E. M. Boctor, I. Iordachita, M. A. Choti, G. D. Hager, and G. Fichtinger, “Bootstrapped ultrasound calibration,” Studies in Health, Technology, and Informatics, vol. 119, pp. 61–66, 2006. [31] M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and au- tomated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981. [32] N. Bilaniuk and G. S. K. Wong, “Speed of sound in pure water as a function of temperature,” The Journal of the Acoustical Society of America, vol. 93, no. 4, pp. 2306–2306, 1993. [33] T. K. Chen, A. D. Thurston, R. E. Ellis, and P. Abolmaesumi, “A real- time freehand ultrasound calibration system with automatic accuracy feedback and control,” Ultrasound in Medicine and Biology, vol. 35, no. 1, pp. 79–93, 2009. [34] J. Wagemans, “Characteristics and models of human symmetry detec- tion,” Trends in Cognitive Sciences, vol. 1, no. 9, pp. 346–352, 1997. [35] M. L. Skolnick, “Estimation of ultrasound beam width in the elevation (section thickness) plane,” Radiology, vol. 180, pp. 286–288, 1991. 
Chapter 4

Conclusions and Future Research

In this chapter, the results of the work presented in the previous chapters are related to one another and put into context with the current state of the art. The strengths and weaknesses of the approaches taken in this thesis are discussed and the contributions of the thesis are delineated. Future work is discussed at the end.

At this point, it can be stated that the primary objectives of the thesis have been achieved. In Chapter 2, motion tracking performance was evaluated with numerical simulation and experimental data on a tissue mimicking phantom. Compared to a single transducer system, the dual transducer system produced more accurate 2D displacement measurements. In Chapter 3, an accurate, precise, and repeatable method for aligning two transducers into a single scan plane was developed. Therefore, the hypothesis that a system can be fabricated to produce high quality 2D motion estimates with dual transducers is confirmed.

4.1 Contributions

In Chapter 2 a method of tracking tissue motion in 2D with dual transducers was presented. The dual transducer method was compared to a state-of-the-art 2D tracking technique using a single transducer, with and without angular compounding [1]. The performance of the techniques was studied using both simulation and experimental data in terms of accuracy and precision. The dual transducer method outperformed the other two tracking techniques, even when the angle between the transducers was reduced to 40◦. These results are encouraging for using dual transducer systems in a wide range of clinical applications such as diagnosis of liver fibrosis, characterization of plaques in the thyroid, and identification and segmentation of kidney volumes, among others, in addition to breast cancer screening.

Alignment is a difficult process and it is reasonable to assume that there will be some residual error. A misalignment study in Chapter 2 attempted to quantify the errors expected when the image planes are not in perfect alignment. The misalignments applied in the simulations are an example of the expected levels of misalignment. Studies on real patients are needed to confirm performance in a clinical setting. Ideally, the best approach would be to conduct a clinical trial in the future and measure performance by the sensitivity and specificity of detecting lesions such as malignant tumors of the breast. The translational motion estimation results from experimental data on a tissue mimicking phantom agreed with those found through simulation. For the experimental measurements, alignment and calibration were achieved with a simple cross-wire phantom. The misalignment found with the cross-wire phantom helped motivate the more sophisticated design in Chapter 3.

In Chapter 3 a novel wedge-based phantom design was presented to address the problem of aligning and calibrating the two transducers into a single plane. The wedge phantom design provides visual feedback to the user to facilitate alignment. Mean alignment error is shown to be under 1 degree in the rotation axes and 1 mm in translation after repeated manual alignments. Calibration provides the transformation from one transducer to the other as well as a measure of the residual error in alignment. A common phantom used for the calibration of freehand transducers is the N-fiducial phantom [2–4]. The repeatability of the wedge-based calibration is similar to that of N-fiducial-based calibration.
Using the N-fiducial point-based features, the accuracy of the calibration for mapping points from one transducer to the other was found to have a mean error of 1.2 mm. This result offers an indication of the calibration error, but since no gold standard exists, further study is required to investigate the effects of calibration error on tissue motion measurement.

4.2 Future Work

In all of the motion tracking studies presented in this thesis, the TDPE algorithm was used for motion estimation [5]. This algorithm has proven to be versatile and has been applied in several clinical settings [1]. However, depending on the clinical application, a different algorithm may provide certain advantages. For instance, with acoustic radiation force imaging, typically very small displacements need to be tracked at high speed [6]. In this case an autocorrelation algorithm may be more suitable. For applications with large quasi-static displacements, performance may be improved with an algorithm incorporating RF stretching [7]. If computation time is not an issue, better tracking performance could be realized with a continuous spline representation of the echo signal [8]. Any number of algorithms can be used with the dual transducer system; only high quality estimates in 1D are required.

The studies and validations in this thesis were based on simulated and experimental data acquired from tissue mimicking phantoms. Further investigation is required to validate the performance of the technique with data acquired from real tissue.

The elastograms obtained in this thesis were based on display of tissue strain. Ideally the elastograms would contain absolute values of elasticity to provide more information for physicians to make a diagnosis. Solving for additional parameters such as viscosity could also improve the ability to differentiate between benign and malignant lesions [9]. To achieve this, application and measurement of dynamic displacements would be required. In contrast to the static compressions measured in this thesis, dynamic excitation would require synchronization between the excitation source and the switching of the transducers. Alternatively, if the time between switching transducers was known, phase compensation could be applied to compensate for the shift of the tissue during switching [10]. For use in a clinical setting, ideally the switching would occur quickly. Development of a new transducer multiplexing circuit board with solid state relays (as opposed to the mechanical relays currently used on Ultrasonix US machines) could accomplish this, as could the construction of a custom transducer with two 64 element faces as mentioned in Chapter 2 and pictured in Fig. 2.11.

In this work we only studied our proposed dual transducer system with linear array transducers. Further study is required to investigate improvements with curvilinear array or phased array transducers. This is expected to extend the applicability of dual transducers to other regions of the body in addition to breast scanning.

This thesis only studied motion estimation in 2D over a plane. Depending on the boundary conditions, even with high quality displacement measurements, it can be difficult to reconstruct the mechanical properties of a medium with just planar data [11]. Measurement over a 3D volume could reduce some of the assumptions required in the formulation of the inverse problem and therefore improve the reconstruction.
Volumes of RF data can be acquired by translating, rotating, or fanning a standard US transducer to sweep a 2D image through a 3D volume. A common transducer design for obtaining such volumetric scans is the motorized transducer, in which a motor rocks a standard US array back and forth. Future work will investigate applying the dual transducer concept with these types of transducers. High quality 2D estimates can be measured as shown in this thesis, and the estimates in the third axis can be averaged between the two transducers to obtain 3D motion estimates over the volume.

References

[1] Reza Zahiri Azar. Methods for the Estimation of the Tissue Motion Using Digitized Ultrasound Echo Signals. PhD thesis, The University of British Columbia, 2009.

[2] Niko Pagoulatos, David R. Haynor, and Yongmin Kim. A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer. Ultrasound in Medicine and Biology, 27(9):1219–1229, 2001.

[3] Frank Lindseth, Geir Arne Tangen, Thomas Langø, and Jon Bang. Probe calibration for freehand 3-D ultrasound. Ultrasound in Medicine and Biology, 29(11):1607–1623, 2003.

[4] Po-Wei Hsu, Richard W. Prager, Andrew H. Gee, and Graham M. Treece. Real-time freehand 3D ultrasound calibration. Ultrasound in Medicine and Biology, 34(2):239–251, 2008.

[5] Reza Zahiri-Azar and Septimiu E. Salcudean. Motion estimation in ultrasound images using time domain cross correlation with prior estimates. IEEE Transactions on Biomedical Engineering, 53(10):1990–2000, 2006.

[6] G. F. Pinton, J. J. Dahl, and G. E. Trahey. Rapid tracking of small displacements with ultrasound. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 53(6):1103–1117, 2006.

[7] S. K. Alam, J. Ophir, and E. E. Konofagou. An adaptive strain estimator for elastography. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 45(2):461–472, 1998.

[8] F. Viola, R. Coe, K. Owen, D. Guenther, and W. Walker. Multi-dimensional spline-based estimator (MUSE) for motion estimation: Algorithm development and initial results. Annals of Biomedical Engineering, 36(12):1942–1960, 2008.

[9] Ralph Sinkus, Katja Siegmann, Tanja Xydeas, Mickael Tanter, Claus Claussen, and M. Fink. MR elastography of breast lesions: Understanding the solid/liquid duality can improve the specificity of contrast-enhanced MR mammography. Magnetic Resonance in Medicine, 58(6):1135–1144, 2007.

[10] Ali Baghani. A wave equation approach to ultrasound elastography. PhD thesis, The University of British Columbia, 2010.

[11] H. Eskandari, S. E. Salcudean, R. Rohling, and J. Ohayon. Viscoelastic characterization of soft tissue from dynamic finite element models. Physics in Medicine and Biology, 53(22):6569–6590, 2008.
Appendix A

Fixed Angles Homogeneous Transformation Matrix

The homogeneous matrix describing the six DOF transformation from coordinate frame A to coordinate frame B, with translation (x, y, z) and fixed angles rotation first through γ about the x-axis, followed by β around the y-axis, and finally through α around the z-axis, takes the following form:

$$
{}^{B}T_{A} =
\begin{bmatrix}
\cos\alpha & -\sin\alpha & 0 & x \\
\sin\alpha & \cos\alpha & 0 & y \\
0 & 0 & 1 & z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
\cos\beta & 0 & \sin\beta & 0 \\
0 & 1 & 0 & 0 \\
-\sin\beta & 0 & \cos\beta & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & \cos\gamma & -\sin\gamma & 0 \\
0 & \sin\gamma & \cos\gamma & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
$$

$$
=
\begin{bmatrix}
\cos\alpha\cos\beta & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma & x \\
\sin\alpha\cos\beta & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma & y \\
-\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma & z \\
0 & 0 & 0 & 1
\end{bmatrix}. \tag{A.1}
$$
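As a numerical companion to eq. A.1 (not part of the original appendix), the matrix can be built directly from the six DOF. The function name is illustrative and the angle arguments are in radians.

```python
import numpy as np

def fixed_angle_matrix(x, y, z, alpha, beta, gamma):
    """4x4 homogeneous matrix of eq. A.1: rotate by gamma about x, then beta
    about y, then alpha about z, with translation (x, y, z)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cb, 0.0, sb], [0.0, 1.0, 0.0], [-sb, 0.0, cb]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cg, -sg], [0.0, sg, cg]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # fixed-angle rotation, matching eq. A.1
    T[:3, 3] = [x, y, z]
    return T
```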
