"Applied Science, Faculty of"@en .
"Electrical and Computer Engineering, Department of"@en .
"DSpace"@en .
"UBCV"@en .
"Zahiri Azar, Reza"@en .
"2010-01-05T14:52:18Z"@en .
"2009"@en .
"Doctor of Philosophy - PhD"@en .
"University of British Columbia"@en .
"Tissue motion estimation in ultrasound images plays a central role in many modern signal processing applications, including tissue characterization, strain and velocity imaging, and tissue viscoelasticity imaging. Therefore, the performance of tissue motion estimation is of significant importance. Also, its computational cost determines if it can be implemented in real-time so that it can be used clinically. This thesis presents several efficient methods for accurate estimation of tissue motion using digitized ultrasound echo signals.\n\nFirst, sample tracking algorithms are presented as a new class of motion estimators. These algorithms are based on the tracking of individual samples using a continuous representation of the reference echo signal. Simulations and experimental results on tissue mimicking phantoms show that sample tracking algorithms significantly outperform common algorithms in terms of accuracy, precision, sensitivity, and resolution. However, their performance degrades in the presence of noise.\n\nTo improve the performance of motion estimation in multi-dimensions, pattern matching interpolation techniques are studied and new interpolation techniques are presented. Simulation and experimental results show that, with small computational overhead, the proposed interpolation techniques significantly improve the accuracy and the precision of motion estimation in both 2D and 3D. Employing these techniques, real-time 2D motion tracking software is developed. Furthermore, the performance of the proposed 2D estimators is compared with that of 2D tracking using angular compounding. The results show that the proposed interpolation methods bring the performance of pattern matching techniques close to that of 2D compound tracking. \n\nFinally, angular compounding is combined with custom pulse sequencing and delay cancellation techniques to develop a system that estimates the motion vectors at very high frame rates (> 500 Hz) in real-time. 
The application of the system in the study of the propagation of mechanical waves for tissue characterization is also presented."@en .
"https://circle.library.ubc.ca/rest/handle/2429/17471?expand=metadata"@en .
"Methods for the Estimation of the Tissue Motion Using Digitized Ultrasound Echo Signals by Reza Zahiri Azar B.A.Sc., Sharif University of Technology, 2003 M.A.Sc., University of British Columbia, 2005 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy in The Faculty of Graduate Studies (Electrical and Computer Engineering) The University of British Columbia (Vancouver) December, 2009 c Reza Zahiri Azar 2009 \u00E2\u0083\u009D \u000CAbstract Tissue motion estimation in ultrasound images plays a central role in many modern signal processing applications, including tissue characterization, strain and velocity imaging, and tissue viscoelasticity imaging. Therefore, the performance of tissue motion estimation is of signi\u00EF\u00AC\u0081cant importance. Also, its computational cost determines if it can be implemented in real-time so that it can be used clinically. This thesis presents several e\u00EF\u00AC\u0083cient methods for accurate estimation of tissue motion using digitized ultrasound echo signals. First, sample tracking algorithms are presented as a new class of motion estimators. These algorithms are based on the tracking of individual samples using a continuous representation of the reference echo signal. Simulations and experimental results on tissue mimicking phantoms show that sample tracking algorithms signi\u00EF\u00AC\u0081cantly outperform common algorithms in terms of accuracy, precision, sensitivity, and resolution. However, their performance degrades in the presence of noise. To improve the performance of motion estimation in multi-dimensions, pattern matching interpolation techniques are studied and new interpolation techniques are presented. Simulation and experimental results show that, with small computational overhead, the proposed interpolation techniques signi\u00EF\u00AC\u0081cantly improve the accuracy and the precision of motion estimation in both 2D and 3D. 
Employing these techniques, real-time 2D motion tracking software is developed. Furthermore, the performance of the proposed 2D estimators is compared with that of 2D tracking using angular compounding. The results show that the proposed interpolation methods bring the performance of pattern matching techniques close to that of 2D compound tracking. Finally, angular compounding is combined with custom pulse sequencing and delay cancellation techniques to develop a system that estimates the motion vectors at very high frame rates (> 500 Hz) in real-time. The application of the system in the study of the propagation of mechanical waves for tissue characterization is also presented. ii \u000CTable of Contents Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii List of Tables vii . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii List of Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii Acknowledgments Dedication Statement of Co-Authorship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii 1 Introduction . . . . . . . . . . . . . . . . . . . . 1.1 Background . . . . . . . . . . . . . . . . . . 1.1.1 Ultrasound Imaging . . . . . . . . . . 1.1.2 1D Motion Estimation Techniques . . 1.1.3 2D/3D Motion Estimation Techniques 1.1.4 Compound Tracking Techniques . . . 1.1.5 High Frame Rate Tracking Techniques 1.1.6 Current in-vivo Applications . . . . . 1.2 Thesis Objectives . . . . . . . . . . . . . . . 1.3 Thesis Outline . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1 1 2 3 3 4 5 5 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2 Time-Delay Estimation in Ultrasound Echo Signals Using Individual Sample Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Proposed Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2.1 Sample Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2.2 Spline-based Zero-Crossing Tracking . . . . . . . . . . . . . . . . . . . . 2.3 Simulation Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 Simulation Results and Discussions . . . . . . . . . . . . . . . . . . . . . . . . 2.5 Experimental Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 17 19 19 21 23 24 33 35 iii \u000CTable of Contents References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 2D Estimation of Sub-Sample Motion from nals . . . . . . . . . . . . . . . . . . . . . . . . . 3.1 Introduction . . . . . . . . . . . . . . . . . 3.2 Methods . . . . . . . . . . . . . . . . . . . 3.2.1 Independent 1D Method . . . . . . 3.2.2 Grid Slope . . . . . . . . . . . . . . 3.2.3 Iterative 1D Method . . . . . . . . 3.2.4 2D Method . . . . . . . . . . . . . . 3.3 Simulations . . . . . . . . . . . . . . . . . . 3.3.1 Simulation Setup . . . . . . . . . . 3.3.2 Rigid Motion . . . . . . . . . . . . . 
3.3.3 Deformation . . . . . . . . . . . . . 3.3.4 Simulation Results . . . . . . . . . . 3.4 Experiments . . . . . . . . . . . . . . . . . 3.4.1 Experimental Setup . . . . . . . . . 3.4.2 Experimental Results . . . . . . . . 3.5 Real-Time Implementation . . . . . . . . . 3.6 Discussions . . . . . . . . . . . . . . . . . . 3.7 Conclusion and Future Work . . . . . . . . References Sig. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 3D Estimation of Sub-Sample Motion from nals . . . . . . . . . . . . . . . . . . . . . . . . . 4.1 Introduction . . . . . . . . . . . . . . . . . 4.2 Methods . . . . . . . . . . . . . . . . . . . 4.2.1 Independent 1D Methods . . . . . . 4.2.2 3D Methods . . . . . . . . . . . . . 4.3 Simulations . . . . . . . . . . . . . . . . . . 4.3.1 Simulation Setup . . . . . . . . . . 4.3.2 Motion Estimation . . . . . . . . . 4.3.3 Simulation Results . . . . . . . . . . 4.4 Experiments . . . . . . . . . . . . . . . . . 4.4.1 Experimental Setup . . . . . . . . . 4.4.2 Motion Estimation . . . . . . . . . 4.4.3 Conversion to Units of Distance . . 4.4.4 Coordinate Transformation . . . . . 4.4.5 Experimental Results . . . . . . . . 4.5 Discussions and Conclusion . . . . . . . . . References Digitized Ultrasound Echo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. Digitized Ultrasound Echo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sig. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 39 39 41 43 43 43 44 46 46 47 48 49 57 57 58 61 62 64 65 71 71 72 73 73 76 76 77 77 82 82 87 87 88 90 90 94 iv \u000CTable of Contents 5 Comparison Between Pattern Matching Techniques Employing 2D sample Estimation and 2D Tracking Using Angular Compounding . 5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2.1 Pattern Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2.2 Angular Compounding . . . . . . . . . . . . . . . . . . . . . . . . 5.3 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3.1 Simulation Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3.2 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.4 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.4.1 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . 5.4.2 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . 5.5 Discussions and Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
115 6 High Frame Rate Ultrasound for 2D Motion Estimation 6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 6.2 Data Acquisition . . . . . . . . . . . . . . . . . . . . . . . . 6.3 Signal Processing . . . . . . . . . . . . . . . . . . . . . . . 6.3.1 Motion Estimation and Phasor Computation . . . . 6.3.2 Phase Correction . . . . . . . . . . . . . . . . . . . 6.3.3 2D Reconstruction . . . . . . . . . . . . . . . . . . . 6.4 Experimental Results . . . . . . . . . . . . . . . . . . . . . 6.4.1 2D Flow Measurements . . . . . . . . . . . . . . . . 6.4.2 2D Wave Propagation Measurements . . . . . . . . 6.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 118 120 121 121 121 123 123 124 125 128 130 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 7 Conclusions and Future Research . . . . . . . . . . 7.1 Multi-Dimensional Motion Estimation Techniques 7.2 Real-Time Motion Estimation . . . . . . . . . . . 7.3 High Frame Rate Tracking . . . . . . . . . . . . . 7.4 Summary of Contributions . . . . . . . . . . . . . 7.5 Future Work . . . . . . . . . . . . . . . . . . . . . References Sub. . . 99 . . . 99 . . . 100 . . . 101 . . . 101 . . . 102 . . . 102 . . . 103 . . . 109 . . . 109 . . . 110 . . . 111 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 135 136 137 137 138 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 Appendices A Local Polynomial Fitting . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . 143 B Strain Signal to Noise Ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 v \u000CTable of Contents C 2D Normalized Cross Correlation . . . . . . . . . . . . . . . . . . . . . . . . . 145 D 1D Sub-Sample Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 E 2D Sub-Sample Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148 E.0.1 2D Paraboloid Fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148 E.0.2 2D Polynomial Fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . 148 F 3D Normalized Cross Correlation . . . . . . . . . . . . . . . . . . . . . . . . . 150 G 3D Polynomial Fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152 vi \u000CList of Tables 3.1 3.2 3.3 Maximum values of biases and standard deviations obtained from the 2D noisefree simulations for window of 2 \u00C3\u0097 2 \u00F0\u009D\u0091\u009A\u00F0\u009D\u0091\u009A2 . . . . . . . . . . . . . . . . . . . . . . The overall performance of 2D motion estimation techniques estimated from compression test for window of 2 \u00C3\u0097 2 \u00F0\u009D\u0091\u009A\u00F0\u009D\u0091\u009A2 . . . . . . . . . . . . . . . . . . . . . . Maximum values of bias and standard deviation obtained from the 2D experimental data for window of 2 \u00C3\u0097 2 \u00F0\u009D\u0091\u009A\u00F0\u009D\u0091\u009A2 . . . . . . . . . . . . . . . . . . . . . . . . 53 55 61 4.1 Maximum values of biases and standard deviations of di\u00EF\u00AC\u0080erent 3D motion estimation techniques evaluated from simulated data in units of sample. . . . . . . 5.1 The overall performance of speckle pattern tracking using iterative 1D cosine \u00EF\u00AC\u0081tting and compound imaging using 1D cosine \u00EF\u00AC\u0081tting in simulated data. . . . 
5.2 The overall performance of speckle pattern tracking using 2D polynomial fitting and compound imaging using 1D parabola fitting in simulated data

List of Figures

2.1 Schematic of sample tracking
2.2 Schematic representation of the zero-crossings tracking algorithm
2.3 Bias and standard deviation of all the delay estimators
2.4 Bias and standard deviation of all the techniques for different window sizes
2.5 The bias of all the techniques and the standard deviation of window-based delay estimators
2.6 Standard deviation of the delay estimators as a function of signal to noise ratio
2.7 Time-delay estimation images at different compression levels
2.8 Signal-to-noise ratio of estimated strain as a function of applied compression
2.9 Experimental displacement estimates using the sample tracking algorithm
2.10 Displacement estimates using normalized cross correlation plus cosine fit and the sample tracking algorithm of a phantom with a cylindrical hard inclusion
3.1 Common techniques to reduce the error of discrete pattern matching functions
3.2 Different schemes for sub-sample displacement estimation in 2D
3.3 Scatterer distributions and a Field II simulated sonogram
3.4 FEM mesh, deformation constraints, scatterer distributions, and an example of the displacements of scatterers
3.5 Biases and standard deviations of different pattern matching interpolation techniques as a function of sub-sample shift on an 11 × 11 grid
3.6 Grey level representations of the biases and standard deviations of different techniques
3.7 Grey level representations of the biases and standard deviations of different techniques using individual colorbars
3.8 Maximum values of biases and standard deviations obtained from the 2D simulations
3.9 Axial and lateral displacement images computed by the FEM and estimated by different techniques
3.10 Displacement vector images computed by the FEM and estimated by different techniques
3.11 Biases and standard deviations of different pattern matching interpolation techniques on envelope signals
3.12 Displacement estimates at actual scale
3.13 Experimental setup
3.14 Biases and standard deviations of different pattern matching interpolation techniques on a 9 × 9 grid
3.15 Screen shots of the real-time 2D motion tracking software
4.1 Different schemes for sub-sample displacement estimation in 3D
4.2 Scatterer distributions and a Field II simulated sonogram
4.3 Biases and standard deviations on a 3D grid
4.4 Grey level representations of the biases and standard deviations on a 3D grid
4.5 Grey level representations of the biases and standard deviations on a 3D grid
4.6 The overall performance of different 3D motion estimation techniques
4.7 The biases and the standard deviations on a 3D grid at the actual scale
4.8 The experimental setup
4.9 Second experimental setup
4.10 Variation of the spacing between lines as a function of depth
4.11 The transducer's coordinate frame and Cartesian coordinates
4.12 The estimated axial, lateral, and elevational components of the motion resulting from translations of the phantom in the water tank
4.13 Simulated (top) axial, lateral, and elevational components of the displacement, resulting from axial compression of a phantom with a hard inclusion, and the experimental results
5.1 Schematics for 2D motion tracking using pattern matching and beam steering
5.2 Scatterer distributions and Field II simulated sonograms when different steering angles are employed to acquire the data
5.3 Point target phantom imaged for different steering angles
5.4 Biases and standard deviations of the pattern matching method and beam steering employing different angles
5.5 Biases and standard deviations of the pattern matching method and beam steering employing different angles
5.6 The overall performance of all beam steering methods averaged over the simulated motion grid
5.7 The overall performance of all methods averaged over the simulated motion grid
5.8 The experimental setup
5.9 Experimental displacement estimates of 2D pattern matching techniques and beam steering techniques
6.1 Two different techniques to acquire high frame rate data from two steering angles
6.2 A diagram showing the basic signal processing blocks used in 2D high frame rate imaging
6.3 Schematics for the reconstruction of a 2D measurement using 1D measurements estimated from two steering angles
6.4 Schematics of the experimental setup used for testing the system using a flow phantom
6.5 Spatially aligned 1D motions estimated for different steering angles, and axial and lateral components of the motion reconstructed from the same set of data
6.6 Schematics of the experimental setup used for testing the system using a mechanical vibrator
6.7 Snapshot of wave images after 1D motion estimation and phase correction for both steering angles
List of Abbreviations

1D/2D/3D: One/Two/Three Dimensional
ARFI: Acoustic Radiation Force Impulse
BS: Beam Steering
c: Speed of Sound
CNR: Contrast to Noise Ratio
CTDE: Continuous Time Delay Estimator
decorrelated: less similar
downsampling: discarding samples to reduce the sampling rate
ECG: Electrocardiogram
FEM: Finite Element Method
FOV: Field Of View
inhomogeneities: locations where physical properties change
insonification: transmission of ultrasound (into a tissue)
PM: Pattern Matching
PVC: Polyvinyl Chloride
RF: Radio Frequency echo signals
ROI: Region of Interest
scatterers: small inhomogeneities that reflect ultrasound
SNR: Signal to Noise Ratio
sonogram: a conventional ultrasound image; a 2D image showing the envelope of ultrasound signals
ST: Sample Tracking
STD: Standard Deviation
TDE: Time Delay Estimator
subsample: between samples; a fraction of the sample spacing
transducer: any device converting signals between different physical bases (e.g. from ultrasonic pressure waves to electrical signals)
upsampling: increasing the number of samples to increase the sampling rate
ZCT: Zero Crossing Tracking

Acknowledgments

I would primarily like to thank my supervisor Professor Tim Salcudean for his valuable insight into my research and his support during the course of this thesis. Without his guidance and inspiration throughout the process, none of this would have been possible. My thanks also go to Professor Jonathan Ophir, Professor Peter Lawrence, and Professor Robert Rohling for their continually helpful advice. I would also like to thank Kris Dickie and Laurent Pelissier from Ultrasonix Medical Corporation for their help and technical support throughout this research. I am grateful to all my friends and colleagues in the Robotics and Control lab, Julian, Ehsan, Hani, Orcun, and Sara, for their help and advice.
I owe my loving thanks to my wife Sara, who has been with me at every step. Without her support, encouragement, and understanding it would have been impossible for me to finish this work. Finally, I am sincerely grateful to my parents, who have always been the biggest motivation throughout my studies and whose years of unconditional love and support created the opportunity for me to chase my dreams.

Dedication

To my parents

Statement of Co-Authorship

This thesis is based on several manuscripts resulting from collaboration between multiple researchers.

A version of Chapter 2 has been published. This paper was co-authored with Dr. Tim Salcudean. The author's contribution to that paper was developing the idea, numerical simulation, phantom fabrication, experimental evaluation of the algorithms, and writing the manuscript.

A version of Chapter 3 has been submitted for publication. The paper was co-authored with Orcun Goksel, Ehsan Dehghan, David Yao, Dr. Joseph Yan, and Dr. Tim Salcudean. The author's contribution to that paper was developing the algorithms, numerical simulation, phantom fabrication, experimental evaluation, and implementation of the algorithms for real-time performance on the ultrasound machine. Orcun Goksel helped with data simulation using Field II and the FEM, as well as editing the manuscript. Ehsan Dehghan helped with the implementation of the algorithms. David Yao and Dr. Joseph Yan helped with the design and calibration of the 2D motion stage for the experimental validation.

A version of Chapter 4 has been submitted for publication. The paper was co-authored with Orcun Goksel and Dr. Tim Salcudean. The author's contribution to that paper was developing the algorithms, numerical simulation, phantom fabrication, and experimental evaluation. Orcun Goksel helped with data simulation using Field II as well as editing the manuscript.
A version of Chapter 5 will be submitted for publication. This paper was co-authored with Dr. Tim Salcudean. The author's contribution to that paper was developing the algorithms, numerical simulation, phantom fabrication, and experimental evaluation.

A version of Chapter 6 has been submitted for publication. This paper was co-authored with Ali Baghani and Dr. Tim Salcudean. The author's contribution to that paper was expanding the previously introduced high frame rate system to two dimensions using beam steering techniques, phantom fabrication, and experimental evaluation, as well as writing the manuscript. Ali Baghani helped with the delay compensation techniques used in the system.

In addition to general research directions, Dr. Salcudean, as my supervisor, helped with his numerous suggestions in the course of enhancing the data acquisition, data processing, and visualization of the results, as well as editing all the manuscripts.

Chapter 1
Introduction

1.1 Background

The basis of medical imaging is the measurement of a property of tissue that varies with spatial location. Medical images are formed by displaying these properties measured at multiple locations in the body. From such images, a depiction of anatomy or pathology is gained. Each imaging modality in common use, such as X-ray, computed tomography, ultrasound, and magnetic resonance imaging, measures a different property of tissue. However, none of the properties measured by these modalities directly depicts the mechanical properties of soft tissue, even though such images have proven useful in numerous clinical applications. Changes in the mechanical properties of biological tissue often represent a warning sign for disease, and imaging these properties provides a way of differentiating normal from abnormal tissues [1-3].
Elastography, a relatively new imaging technique, aims to produce images that depict these mechanical properties for clinical applications. In the past two decades, a great number of techniques have been suggested in the literature to provide these types of images. All these techniques use multiple measurements of tissue displacements to infer local tissue mechanical properties and are typically categorized based on: (i) the type of excitation used, such as quasi-static compressions [4-8], harmonic waves [9-12], wide-band excitations [12, 13], or transient/pulsed waves [14-18]; (ii) the way these excitations are generated, for instance using internal tissue motions due to heartbeat or breathing [19-23], acoustic radiation force [12, 24-27], or external actuation applied through a mechanical exciter or transducer motion [7, 14, 28-30]; (iii) the imaging modality used to estimate the resulting tissue motion, namely ultrasound [4, 14, 25], hydrophone measurement [27], magnetic resonance imaging [30-32], or optical imaging [33]; and (iv) the type of image they produce, such as strain [4, 5, 34], compliance [13, 35], displacement amplitude [9, 10], compressibility [36], relative visco-elasticity [24, 28, 29], or absolute visco-elasticity [11, 12, 14, 30, 37-39].

Regardless of which technique is employed, tissue motion estimation lies at the heart of all the above methods. Because of its central importance, its accuracy, precision, and computational cost are critical. As mentioned above, in principle several imaging modalities can be used to estimate tissue motion for elastography, but ultrasound has received the most attention due to its safety, low cost, real-time performance, quick setup procedure, and easy access to digital data.
Thus, in this thesis we focus on the estimation of tissue motion in sequences of ultrasound echo signals.

1.1.1 Ultrasound Imaging

An ultrasound imaging system acquires data through the generation of an ultrasound wave directed toward the area to be examined, followed by measurement of the echoes generated by the interaction of the ultrasound wave with the tissue. The ultrasound system consists of a transducer for generating the ultrasound pulses and measuring the echoes, and a computation system to convert the echo signals into an image. The ultrasound transducer sends out a short burst of ultrasound and listens for the returning echoes. The time between the sent pulse and the received echo is used to calculate the depth of the interfaces, assuming the speed of sound is constant throughout the medium. The transmitted wave is an amplitude modulated signal with a fixed carrier frequency determined by the probe, and the returning echoes are sampled during the listening interval. These unprocessed digitized echo signals are known as radio frequency (RF) data. The RF data go through envelope detection, logarithmic amplitude compression, and conversion to regular spatial coordinates (called scan conversion). An image formed this way is generally called a B-image (i.e., brightness image) or a sonogram [40-42].

1.1.2 1D Motion Estimation Techniques

Since ultrasound imaging provides higher resolution in the direction of beam propagation, the estimation of the axial component of the motion has received the most attention in the literature. Due to the nature of ultrasound, where the echo signals are acquired as a function of time, 1D motion estimators are generally referred to as delay estimators. Delay estimators measure the displacement of the backscattered signals with respect to the transducer. This displacement appears as a time-shift or phase-shift between sequences of echo signals [43].
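As a concrete illustration of the pulse-echo geometry described above, the following minimal sketch converts a round-trip echo time to depth and an estimated echo time-shift to axial displacement. It assumes the commonly used constant soft-tissue speed of sound of 1540 m/s; the numeric example values are hypothetical, not taken from the thesis.

```python
# Minimal sketch of the pulse-echo relations described above.
# 1540 m/s is the usual assumed soft-tissue average speed of sound;
# the example times below are hypothetical, chosen only for illustration.
SPEED_OF_SOUND = 1540.0  # m/s, assumed constant throughout the medium

def echo_depth(round_trip_time_s: float, c: float = SPEED_OF_SOUND) -> float:
    """Depth of a reflecting interface from the pulse-echo round-trip time.

    The pulse travels to the interface and back, so the one-way depth is
    half the distance covered: d = c * t / 2.
    """
    return c * round_trip_time_s / 2.0

def displacement_from_time_shift(tau_s: float, c: float = SPEED_OF_SOUND) -> float:
    """Axial displacement implied by an estimated echo time-shift tau.

    A displacement d changes the round-trip time by 2 * d / c, so the delay
    estimated between two echo sequences maps back to d = c * tau / 2.
    """
    return c * tau_s / 2.0

depth_m = echo_depth(52e-6)                    # an echo at 52 us -> ~4 cm deep
disp_m = displacement_from_time_shift(26e-9)   # a 26 ns shift -> ~20 um of motion
```

The factor of two in both relations reflects the round trip of the pulse; it is the same assumption ultrasound scanners make when forming B-images.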
Delay estimators are typically classified based on the type of signal (RF, envelope, or in-phase and quadrature I/Q) and the domain in which they operate (i.e., time, phase, or frequency). These estimators have been studied and compared extensively in the literature [44–47]. Phase-shift estimators were initially used for blood flow measurement. Later, the same techniques were used in other fields to estimate tissue motion. Phase-shift estimators find the average phase-shift over a number of samples within a window with respect to the nominal or estimated central frequency of the transmitted pulse. Complex cross correlation of the RF echo signals [48, 49] or complex-valued Doppler signals [43, 50] are typically used in these techniques. Time-shift estimators typically consist of the identification of the maximum/minimum of a pattern matching function. The shape of the signal within a specific window in the reference echo signal is set to be the pattern, and a matching algorithm is used to find the best match in the delayed echo signal. Several pattern matching techniques are currently employed, each offering trade-offs between complexity and accuracy [44, 51, 52]. The estimation error of the pattern matching techniques can be as large as half the sample spacing. Several techniques have been suggested in the literature to reduce the error introduced by finite sampling intervals. These techniques are categorized as: (i) up-sampling of the echo signals [47, 53, 54], (ii) interpolation of the echo signals [47, 53, 55, 56], and (iii) interpolation of the pattern matching function [57–60]. Up-sampling of the echo signal as in (i) reduces the error by the up-sampling factor [47, 54].
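The phase-shift approach described above can be sketched as follows, assuming a known (nominal) center frequency f0 and sampling rate fs; this is a simplified, single-window illustration in the spirit of autocorrelation-based estimators, not a specific published algorithm. Note that phase-shift estimation is unambiguous only for delays smaller than half the carrier period.

```python
import numpy as np

def analytic(x):
    """Analytic (complex) signal of a real vector via the FFT, i.e. a
    NumPy-only Hilbert transform."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def phase_shift_delay(ref, delayed, f0, fs):
    """Average delay (in samples) of `delayed` with respect to `ref`, from
    the phase of their complex correlation at lag zero, scaled by the
    nominal center frequency f0. Valid only for |delay| < fs / (2 * f0)."""
    r = np.sum(analytic(delayed) * np.conj(analytic(ref)))
    return -np.angle(r) * fs / (2.0 * np.pi * f0)
```

Because the estimate comes from a phase average rather than a lag search, it resolves sub-sample delays directly, at the price of the half-wavelength ambiguity noted above.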
Curve or polynomial fitting to the echo signals as in (ii) results in a continuous pattern matching function, whose extremum determines the location of the best match [47, 53, 55, 56]. These techniques can be computationally demanding [56, 61], whereas curve or polynomial fitting to the pattern matching function as in (iii) often has significantly smaller computational overhead. Thus, even though they may introduce some bias in the estimation process, the latter techniques are widely used for motion estimation. Many 1D pattern matching interpolation methods have been proposed for 1D axial motion estimation with sub-sample accuracy. These include parabolic fitting [60], spline fitting [59], grid slope [62], and cosine fitting [58], and they have been thoroughly investigated in the literature [44]. Alternative approaches that make use of feature extraction have also been attempted in the literature. Zero-crossing tracking (ZCT) [63] identifies the zero-crossings of the echo signals using linear interpolation. Peak tracking and level-crossing tracking, wherein tracking is performed on a predefined amplitude level, have also been suggested in [63] as extensions of ZCT. The peak search algorithm (PSA) [64] identifies the peaks of the echo signals using a wavelet transform. The distances between the zero-crossings (or level-crossings) or peaks represent the time-shifts. ZCT and PSA provide as many measurement points as there are zero/level crossings or peaks within the signals. As a result, they have the potential to provide much higher resolution than window-based techniques.

1.1.3 2D/3D Motion Estimation Techniques

Tracking the motion in one direction introduces limitations for different applications.
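As an illustration of category (iii), the parabolic fit has a simple closed form: a parabola through the discrete peak of the matching function and its two neighbors gives the sub-sample offset directly. This is a generic sketch of the technique, not the exact implementation evaluated in this thesis.

```python
def parabolic_subsample(c_m1, c_0, c_p1):
    """Sub-sample offset of the extremum of a pattern matching function,
    from a parabola through the discrete peak value c_0 and its two
    neighbors c_m1 (at -1) and c_p1 (at +1). The returned offset, in
    (-0.5, 0.5), is added to the integer peak location."""
    denom = c_m1 - 2.0 * c_0 + c_p1
    if denom == 0.0:
        return 0.0  # degenerate (flat) neighborhood
    return 0.5 * (c_m1 - c_p1) / denom
```

When the matching function is locally quadratic the recovery is exact; for other peak shapes the mismatch between the parabola and the true function is the source of the bias mentioned above.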
In blood flow and tissue velocity estimation using Doppler techniques, tracking along the beam propagation results in poor estimation of the flow and tissue velocity due to the unknown Doppler angle between the velocity vector and the beam direction. Poor estimates can result even if the angle is manually adjusted [65, 66]. In quasi-static elastography, tracking the motion in the axial direction results in estimation of only one component of the strain tensor (i.e., axial), with all the other components remaining unknown [67]. Finally, in dynamic elastography based on the wave equations, the estimation of a single component of motion limits modulus estimation algorithms to a less accurate partial inversion rather than a full inversion [68]. Techniques based on pattern matching functions are the most straightforward approaches used to estimate the axial motion from digitized ultrasound echo signals [4, 62, 69, 70]. Extensions of these techniques to 2D and 3D motion estimation have been proposed in the literature [71, 72]. As mentioned above, the estimation error of the pattern matching techniques can be as large as half the sample spacing, which is important especially when the motion is small and the sample spacing is large. This error becomes more significant in the lateral and elevational directions, where the sample spacing is very large. As in 1D tracking, techniques such as echo signal up-sampling [54] and interpolation of the echo signals [55, 56] in multiple dimensions have been suggested in the literature. The same 1D sub-sample estimation techniques have also been applied independently in each direction in 2D or 3D motion estimation to estimate the sub-sample lateral and elevational motions [55, 73].

1.1.4 Compound Tracking Techniques

Angular compounding has also been attempted in the literature to estimate the motion vectors [74–78].
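The independent per-direction application of 1D sub-sample fits mentioned above can be sketched for a 2D matching surface as follows; this is a generic illustration of the separable approach, not the specific estimators compared later in this thesis.

```python
import numpy as np

def subsample_peak_2d(C):
    """Sub-sample peak location of a 2D pattern matching surface C:
    find the discrete maximum, then fit 1D parabolas independently along
    each axis (axial and lateral), as in separable sub-sample estimation."""
    i, j = np.unravel_index(np.argmax(C), C.shape)

    def fit(cm, c0, cp):
        d = cm - 2.0 * c0 + cp
        return 0.0 if d == 0.0 else 0.5 * (cm - cp) / d

    di = fit(C[i - 1, j], C[i, j], C[i + 1, j]) if 0 < i < C.shape[0] - 1 else 0.0
    dj = fit(C[i, j - 1], C[i, j], C[i, j + 1]) if 0 < j < C.shape[1] - 1 else 0.0
    return i + di, j + dj
```

Because the two fits ignore the cross-terms of the surface, this separable scheme is cheap but can be biased when the true peak is not axis-aligned, which is one motivation for the 2D interpolation methods proposed in this thesis.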
With this technique, the data from the region of interest are acquired from multiple look angles. The multiple look angles can originate from a single transducer that is moved mechanically, or from multiple transducers [74]. They can also originate from a single transducer using a single transmit and multiple receive angles [77–79], or multiple electronically steered transmit and receive angles [75, 78, 80]. Once the data from multiple angles are acquired, the previously introduced 1D motion estimators are employed to estimate the motion along the direction of beam propagation for each angle independently. The estimates are then compounded to construct the 2D motion vectors inside the overlapping region.

1.1.5 High Frame Rate Tracking Techniques

Conventional ultrasound systems are based on line-by-line acquisition of the echo signals to acquire the entire 2D image. As a result, the acquisition time of each frame in these systems is proportional to the number of scan lines and the acquisition time of each scan line in that frame. However, high frame rate motion estimation is critical for a wide range of clinically used ultrasound imaging modes. Several techniques have been attempted in the literature to increase the imaging frame rate. In one simple approach, a high frame rate is achieved by reducing the number of scan lines. This increases the frame rate but reduces the field of view (FOV) and/or the spatial resolution, depending on the spacing between scan lines. This technique has been used in [23, 81] to study myocardial motion. In another approach, high temporal resolution is achieved by beam interleaving techniques [82].
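The compounding step of Section 1.1.4 can be viewed as a small least-squares problem: each steered beam measures the projection of the displacement vector onto its propagation direction. Below is a minimal sketch under an assumed sign convention in which a beam steered by angle theta measures u_lat*sin(theta) + u_ax*cos(theta); the actual geometry and conventions depend on the system.

```python
import numpy as np

def compound_vector(angles_rad, axial_disp):
    """Recover the 2D displacement (lateral, axial) from per-angle axial
    estimates by least squares. Each row of A is the beam direction
    (sin(theta), cos(theta)) onto which the displacement is projected."""
    A = np.column_stack([np.sin(angles_rad), np.cos(angles_rad)])
    u, *_ = np.linalg.lstsq(A, np.asarray(axial_disp), rcond=None)
    return u  # (u_lat, u_ax)
```

Two distinct angles already determine the vector; using more angles overdetermines the system and averages down the per-angle estimation noise.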
This technique divides the region of interest (ROI) into small sectors and acquires each sector at a high temporal resolution (200 Hz to 10 kHz, depending on the number of scan lines per sector and the imaging depth) for a short period of time before moving on to the next sector, and so on, until the observations for the entire ROI are acquired. Assuming that the time between the acquisitions of neighboring scan lines is small, the acquisition in each sector can be considered a snapshot of the speckle movements. This technique provides both high spatial and high temporal resolution. However, large delays are introduced between the data acquired from different sectors. This technique is commonly used in conventional color flow imaging, power Doppler imaging, and B-flow imaging [82, 83]. The same technique is also used in [84] to evaluate regional myocardial deformation and in [11] to study the propagation of crawling waves in tissue using tissue Doppler imaging. Using the same acquisition scheme, compounding Doppler imaging has also been attempted in the literature to estimate the motion vectors using both beam steering and multi-synthetic aperture beamforming [77, 78]. With the help of parallel receive beamformers, techniques like multi-line acquisition (MLA) have also been used to increase the frame rate of conventional ultrasound machines, where multiple echo signals (typically 2–8) are acquired from a single transmit [85], thus multiplying the effective frame rate by the same factor at little cost to the resolution [22]. In [86], MLA was used to reduce transducer heating and acoustic exposure, and to facilitate data acquisition for real-time ARFI imaging. The idea of MLA has also been extended to the acquisition of the entire image as opposed to multiple lines, thus drastically increasing the effective frame rate (> 5 kHz).
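As a back-of-the-envelope illustration of these acquisition-time limits, the frame rate of line-by-line imaging is set by the round-trip time per transmit and the number of transmits per frame, and an MLA factor divides the latter. The function below is our own sketch, with an illustrative speed of sound of 1540 m/s.

```python
def frame_rate(depth_m, n_lines, c=1540.0, mla_factor=1):
    """Acquisition-limited frame rate (Hz) for line-by-line imaging: each
    transmit must wait one round trip (2 * depth / c), and parallel receive
    beamforming (MLA) forms `mla_factor` scan lines per transmit."""
    t_line = 2.0 * depth_m / c          # round-trip time per transmit
    transmits = n_lines / mla_factor    # transmit events per frame
    return 1.0 / (transmits * t_line)
```

For example, at a 5 cm depth with 128 scan lines this gives about 120 Hz, and an MLA factor of 4 raises it to about 481 Hz.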
This method is generally referred to as ultrafast imaging, where a single unfocused plane wave is used for transmit and parallel receive beamformers (typically 64–128) are used to generate the scan lines. In [17, 87], ultrafast imaging was used to capture the propagation of the transient shear wave in soft tissue and to estimate the tissue elasticity. In [79], ultrafast imaging was combined with angular compounding using multi-synthetic aperture beamforming to follow both the axial and the lateral components of the motion during shear wave propagation at a frame rate of 6 kHz. Although very effective, MLA and ultrafast imaging are not generally available on conventional ultrasound systems; additional hardware is required to implement each of these techniques on such systems. Techniques like coded excitation have also been introduced in the literature to increase the frame rate of the ultrasound acquisition [88–90]. However, these techniques increase the beam density and, similar to MLA and ultrafast imaging, require specialized hardware. In another approach to achieving high frame rates, synchronization techniques have been employed in the literature. The data acquisition in these techniques is similar to that of conventional color flow and power Doppler imaging to achieve high temporal resolution. However, to eliminate long delays between sectors, the start of data acquisition for each sector is synchronized with the exciter, which varies from an external actuator (e.g., a mechanical vibrator) to a signal generated in the body (e.g., the electrocardiogram, ECG). In [91], by synchronizing the data acquisition with an external exciter, the shear-wave propagation in the scan plane was imaged at a frame rate of 6 kHz using a single-element transducer.
A similar approach was used in [92] to study both transient and harmonic shear-wave scattering in two and three dimensions using linear array transducers at a frame rate of 4 kHz. By synchronizing the image acquisition with the ECG signals, the propagation of several transient mechanical waves was imaged in different regions of the myocardium in mice at a frame rate of 8 kHz in [93] and in humans at a frame rate of 481 Hz in [20]. These techniques require additional hardware to synchronize the excitation and the data acquisition. Also, at the end of imaging of each sector, the system needs to wait long enough to make sure the tissue returns to its initial position prior to the next excitation. Otherwise, artifacts will appear in the final image when different sectors are stitched together. This waiting time generally increases the total data acquisition time in these techniques.

1.1.6 Current in-vivo Applications

The mechanical properties of tissue are often associated with tissue state, and thus can be used for diagnosis. Medical applications of both quasi-static and dynamic elastography methods, with their underlying motion tracking algorithms, span a wide range of modern clinical applications.
These applications include, but are not limited to, tumor detection and classification in the breast [5, 8, 14, 34, 94–97], prostate [11, 98, 99], and skin [100]; detection of diffuse diseases such as liver fibrosis [12]; myocardial elasticity imaging [19, 81, 84, 93]; distinguishing between normal and lymphedematous tissues [36]; characterization of vascular plaques [21]; the impact of aging and gender on brain viscoelasticity [101]; monitoring the aging of deep venous thrombosis [102, 103]; imaging of thermal lesions in the liver [104, 105]; monitoring renal transplants for early rejection [106]; quantifying hepatic elasticity [107]; imaging the abdomen [108]; and the study of skeletal muscle contraction [109].

1.2 Thesis Objectives

The following objectives are defined in this thesis:

1. Developing new algorithms for the estimation of motion in sequences of ultrasound echo signals in 1D (axial component only), 2D (both axial and lateral components), and 3D (axial, lateral, and elevational components), with high accuracy and precision and small computational overhead, to make them suitable for real-time applications.

2. Studying the performance of these techniques using both simulation and experimental data in terms of accuracy, precision, sensitivity, and resolution, and comparing them with state-of-the-art techniques.

3. Implementing a system based on the proposed methods to estimate the motion in multiple dimensions at commonly used ultrasound frame rates (up to 50 Hz).

4. Overcoming the inherent low frame rate of conventional ultrasound and implementing a system to facilitate motion tracking in several dimensions at high frame rates (> 500 Hz).

1.3 Thesis Outline

This thesis is written in the manuscript-based style, as permitted by the Faculty of Graduate Studies at the University of British Columbia.
In a manuscript-based thesis, each chapter represents an individual work that has been published, submitted, or prepared for submission to a peer-reviewed publication. Each chapter is self-contained in the sense that it includes an introduction to the work presented in that chapter, the methodology, simulations and experiments, and results and discussion. The references are summarized in the bibliography found at the end of each chapter. The appendices pertaining to each chapter are presented at the end of the thesis. In the course of achieving the primary objectives of this thesis, the following contributions were made:

1. In Chapter 2, a new class of time-delay estimators called Sample Tracking (ST), based on the tracking of individual echo samples, is presented to improve the accuracy, precision, resolution, and sensitivity of one-dimensional motion estimation. The use of the same interpolation approach to improve the performance of the previously developed ZCT delay estimator [63] is also presented. Simulation results show that sample tracking algorithms significantly outperform commonly used window-based algorithms in terms of bias and standard deviation. ST algorithms also have higher sensitivity and resolution than traditional delay estimators, including recently introduced spline-based continuous time-delay estimators, as they provide the displacement of individual samples. However, their performance degrades as the SNR of the echo signals becomes low. Experimental results demonstrating the viability of ST, in addition to its extension to several dimensions, are also presented.

2. In Chapter 3, we consider the problem of estimating 2D motion in ultrasound echo data with sub-sample accuracy. We propose an approach based on iterative 1D interpolation, as well as approaches based on 2D interpolation. We study these techniques using both simulated and experimental data and compare them to other methods in the literature.
The results show that the proposed methods significantly outperform other techniques in terms of both accuracy and precision. Employing the proposed methods, a real-time implementation of a 2D motion tracking algorithm is also presented.

3. Extending our previous 2D work, in Chapter 4 we introduce 3D sub-sample estimation techniques. We study this method using a synthetic phantom and the Field II ultrasound simulation software. A comparison with other reported methods shows that the proposed 3D interpolation-based method outperforms other common techniques in terms of accuracy and precision. Experimental results demonstrating the viability of the proposed method are also presented.

4. In Chapter 5, we study and compare the performance of 2D pattern matching techniques employing 2D sub-sample estimation with that of 2D tracking using angular compounding. Simulations using the Field II ultrasound simulation software on a synthetic phantom and real ultrasound data acquired from tissue mimicking phantoms were used for this study. The results show that our proposed 2D interpolation techniques bring the performance of 2D pattern matching close to that of motion vector imaging using angular compounding.

5. In Chapter 6, to overcome the inherent low frame rate of ultrasound for motion estimation in several dimensions, the delay cancellation techniques previously introduced in our laboratory are combined with angular compounding to develop a system that reconstructs the motion vectors at high frame rates. The system achieves both high spatial resolution (line density of up to 128) and high temporal resolution (> 500 Hz) at an imaging depth of 5 cm and a 100% field of view. Applications of the system in studying wave propagation in two dimensions and in flow vector imaging are presented with experimental results from phantoms.

6.
In Chapter 7, the results of the collected works are related to one another and the unified goal of the thesis is discussed. The strengths and weaknesses of the research are then presented, along with future directions for research.

References

[1] J. Ophir, S. Alam, B. Garra, F. Kallel, E. Konofagou, T. Krouskop, C. Merritt, R. Righetti, R. Souchon, S. Srinivasan, and T. Varghese, "Elastography: Imaging the Elastic Properties of Soft Tissues with Ultrasound," Journal of Medical Ultrasonics, vol. 29, pp. 155–171, 2002.
[2] J. Greenleaf, M. Fatemi, and M. Insana, "Selected Methods for Imaging Elastic Properties of Biological Tissues," Annual Review of Biomedical Engineering, vol. 5, pp. 57–78, 2003.
[3] L. Gao, K. Parker, R. Lerner, and S. Levinson, "Imaging of the elastic properties of tissue: a review," Ultrasound in Medicine and Biology, vol. 22, pp. 959–977, 1996.
[4] J. Ophir, I. Cespedes, H. Ponnekanti, Y. Yazdi, and X. Li, "Elastography: a quantitative method for imaging the elasticity of biological tissues," Ultrasonic Imaging, vol. 13, pp. 111–134, April 1991.
[5] I. Cespedes, J. Ophir, H. Ponnekanti, and N. F. Maklad, "Elastography: elasticity imaging using ultrasound with application to muscle and breast in vivo," Ultrasonic Imaging, vol. 15, pp. 73–88, 1993.
[6] J. Ophir, S. Alam, B. Garra, F. Kallel, E. Konofagou, T. Krouskop, and T. Varghese, "Elastography: ultrasonic estimation and imaging of the elastic properties of tissues," vol. 213, pp. 203–233, 1999.
[7] T. Hall, Y. Zhu, and C. Spalding, "In vivo real-time freehand palpation imaging," Ultrasound in Medicine and Biology, vol. 29, pp. 427–435, 2003.
[8] M. Doyley, J. Bamber, F. Fuechsel, and N. Bush, "A freehand elastographic imaging approach for clinical breast imaging: System development and performance evaluation," Ultrasound in Medicine and Biology, vol. 27, pp. 1347–1357, 2001.
[9] R. Lerner, S. Huang, and K. Parker, "'Sonoelasticity' images derived from ultrasound signals in mechanically vibrated tissues," Ultrasound in Medicine and Biology, vol. 16, no. 3, pp. 231–239, 1990.
[10] K. Parker, S. Huang, R. Musulin, and R. Lerner, "Tissue response to mechanical vibrations for 'sonoelasticity imaging'," Ultrasound in Medicine and Biology, vol. 16, no. 3, pp. 241–246, 1990.
[11] K. Hoyt, K. Parker, and J. Rubens, "Real-Time Shear Velocity Imaging Using Sonoelastographic Techniques," Ultrasound in Medicine and Biology, vol. 33, pp. 1086–1097, 2007.
[12] S. Chen, M. Urban, C. Pislaru, R. Kinnick, Y. Zheng, A. Yao, and J. Greenleaf, "Shearwave dispersion ultrasound vibrometry (SDUV) for measuring tissue elasticity and viscosity," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 56, pp. 55–62, Jan 2009.
[13] E. Turgay, S. Salcudean, and R. Rohling, "Identifying Mechanical Properties of Tissue by Ultrasound," Ultrasound in Medicine and Biology, vol. 32, pp. 221–235, 2006.
[14] J. Bercoff, S. Chaffai, M. Tanter, L. Sandrin, S. Catheline, M. Fink, J.-L. Gennisson, and M. Meunier, "In vivo breast tumor detection using transient elastography," Ultrasound in Medicine and Biology, vol. 29, pp. 1387–1396, Oct 2003.
[15] J. Bercoff, M. Tanter, and M. Fink, "Supersonic shear imaging: a new technique for soft tissue elasticity mapping," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 51, pp. 396–409, April 2004.
[16] S. Catheline, J.-L. Thomas, F. Wu, and M. Fink, "Diffraction field of a low frequency vibrator in soft tissues using transient elastography," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 46, pp. 1013–1019, July 1999.
[17] S. Catheline, J.-L. Gennisson, G. Delon, and M. Fink, "Viscoelastic properties of soft solids using transient elastography," in Second International Conference on the Ultrasonic Measurement and Imaging of Tissue Elasticity, Corpus Christi, U.S., October 2003, p. 25.
[18] L. Sandrin, M. Tanter, D. Cassereau, S. Catheline, and M. Fink, "Low-frequency shear wave beam forming in time-resolved 2D pulsed elastography," Proceedings of the IEEE Ultrasonics Symposium, vol. 2, pp. 1803–1808, October 2000.
[19] W. Lee, C. M. Ingrassia, S. D. Fung-Kee-Fung, K. D. Costa, J. W. Holmes, and E. Konofagou, "Theoretical Quality Assessment of Myocardial Elastography with In Vivo Validation," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 54, pp. 2233–2245, 2007.
[20] S. Wang, W. Lee, J. Provost, J. Luo, and E. Konofagou, "A composite high-frame-rate system for clinical cardiovascular imaging," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 55, pp. 2221–2233, Oct 2008.
[21] C. de Korte, A. van der Steen, E. Cespedes, G. Pasterkamp, S. Carlier, F. Mastik, A. Schoneveld, P. Serruys, and N. Bom, "Characterisation of plaque components and vulnerability with intravascular ultrasound elastography," Physics in Medicine and Biology, vol. 45, pp. 1465–1475, 2000.
[22] K. Kaluzynski, X. Chen, S. Emelianov, A. Skovoroda, and M. O'Donnell, "Strain rate imaging using two-dimensional speckle tracking," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 48, pp. 1111–1123, July 2001.
[23] H. Kanai, Y. Koiwa, and J. Zhang, "Real-time measurements of local myocardium motion and arterial wall thickening," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 46, pp. 1229–1241, 1999.
[24] A. Sarvazyan, O. Rudenko, S. Swanson, J. Fowlkes, and S. Emelianov, "Shear wave elasticity imaging: a new ultrasonic technology of medical diagnostics," Ultrasound in Medicine and Biology, vol. 24, pp. 1419–1435, December 1998.
[25] K. Nightingale, M. Palmeri, R. Nightingale, and G. Trahey, "On the feasibility of remote palpation using acoustic radiation force," Journal of the Acoustical Society of America, vol. 110, pp. 625–634, July 2001.
[26] W. Walker, F. Fernandez, and L. Negron, "A method of imaging viscoelastic parameters with acoustic radiation force," Physics in Medicine and Biology, vol. 45, pp. 1437–1447, 2000.
[27] M. Fatemi and J. Greenleaf, "Probing the dynamics of tissue at low frequencies with the radiation force of ultrasound," Physics in Medicine and Biology, vol. 45, pp. 1449–1464, June 2000.
[28] H. Eskandari, S. Salcudean, and R. Rohling, "Viscoelastic Parameter Estimation Based on Spectral Analysis," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 55, pp. 1611–1625, July 2008.
[29] H. Eskandari, S. Salcudean, R. Rohling, and J. Ohayon, "Viscoelastic Characterization of Soft Tissue from Dynamic Finite Element Models," Physics in Medicine and Biology, vol. 53, pp. 6569–6590, Nov 2008.
[30] S. Papazoglou, U. Hamhaber, J. Braun, and I. Sack, "Algebraic Helmholtz inversion in planar magnetic resonance elastography," Physics in Medicine and Biology, vol. 53, pp. 3147–3158, 2008.
[31] R. Muthupillai, D. Lomas, P. Rossman, J. Greenleaf, A. Manduca, and R. Ehman, "Magnetic resonance elastography by direct visualization of propagating acoustic strain waves," Science, vol. 269, pp. 1854–1857, September 1995.
[32] T. Oliphant, "Direct Methods for Dynamic Elastography Reconstruction: Optimal Inversion of the Interior Helmholtz Problem," Ph.D. dissertation, Mayo Graduate School, 2001.
[33] J. Jurvelin, M. Buschmann, and E. Hunziker, "Optical and mechanical determination of Poisson's ratio of adult bovine humeral articular cartilage," Journal of Biomechanics, vol. 30, pp. 235–241, March 1997.
[34] A. Thitaikumar, L. Mobbs, C. Kraemer-Chant, B. Garra, and J. Ophir, "Breast Tumor Classification using axial shear strain elastography: a feasibility study," Physics in Medicine and Biology, vol. 53, pp. 4809–4823, 2008.
[35] S. Papazoglou, C. Xu, U. Hamhaber, E. Siebert, G. Bohner, R. Klingebiel, J. Braun, and I. Sack, "Scatter-based magnetic resonance elastography," Physics in Medicine and Biology, vol. 54, pp. 2229–2241, 2009.
[36] R. Righetti, J. Ophir, S. Srinivasan, and T. Krouskop, "The Feasibility of Using Elastography for Imaging the Poisson's Ratio in Porous Media," Ultrasound in Medicine and Biology, vol. 30, pp. 215–228, 2004.
[37] A. Skovoroda, S. Emelianov, and M. O'Donnell, "Tissue Elasticity Reconstruction Based on Ultrasonic Displacement and Strain Images," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 42, pp. 747–765, July 1995.
[38] P. Barbone, "A variational formulation leading to direct elastic modulus reconstruction," in Second International Conference on the Ultrasonic Measurement and Imaging of Tissue Elasticity, Corpus Christi, U.S., October 2003, p. 67.
[39] S. Aglyamov, A. Skovoroda, J. Rubin, M. O'Donnell, and S. Emelianov, "Young's modulus reconstruction in DVT elasticity imaging," in 27th International Symposium on Ultrasonic Imaging and Tissue Characterization, Arlington, VA, 2002, p. 174.
[40] F. Kremkau, Ed., Diagnostic Ultrasound: Principles and Instruments, 4th ed. W.B. Saunders Company, 1993, ch. 1.
[41] W. McDicken, Ed., Diagnostic Ultrasonics: Principles and Use of Instruments, 3rd ed. Churchill Livingstone, 1991, ch. 1.
[42] A. Christensen, Ed., Ultrasonic Bioinstrumentation. John Wiley and Sons, 1988, ch. 6.
[43] T. Loupas, R. Peterson, and R. Gill, "Experimental evaluation of velocity and power estimation for ultrasound blood flow imaging, by means of a two-dimensional autocorrelation approach," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 42, pp. 689–699, Jul 1995.
[44] F. Viola and W. Walker, "A comparison of the performance of time-delay estimators in medical ultrasound," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 50, pp. 392–401, April 2003.
[45] G. Pinton, J. Dahl, and G. Trahey, "Rapid tracking of small displacements with ultrasound," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 53, pp. 1103–1117, June 2006.
[46] I. Hein and W. O'Brien, "Current time-domain methods for assessing tissue motion by analysis from reflected ultrasound echoes: a review," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 40, pp. 84–102, March 1993.
[47] G. Pinton and G. Trahey, "Continuous Delay Estimation with Polynomial Splines," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 53, pp. 2026–2035, 2006.
[48] M. O'Donnell, A. Skovoroda, B. Shapo, and S. Emelianov, "Internal Displacement and Strain Imaging Using Ultrasonic Speckle Tracking," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 41, pp. 314–325, May 1994.
[49] A. Pesavento, C. Perrey, M. Krueger, and H. Ermert, "A time efficient and accurate strain estimation concept for ultrasonic elastography using iterative phase zero estimation," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 46, pp. 1057–1067, 1999.
[50] C. Kasai, K. Namekawa, A. Koyano, and R. Omoto, "Real-time two-dimensional blood flow imaging using an autocorrelation technique," IEEE Transactions on Sonics and Ultrasonics, vol. 32, pp. 458–464, 1985.
[51] G. Jacovitti and G. Scarano, "Discrete time techniques for time delay estimation," IEEE Transactions on Signal Processing, vol. 41, pp. 525–533, Feb 1993.
[52] S. Langeland, J. D'hooge, H. Torp, B. Bijnens, and P. Suetens, "A simulation study on the performance of different estimators for two-dimensional velocity estimation," in Proceedings of the IEEE Ultrasonics Symposium, vol. 2, Oct 2002, pp. 1859–1862.
[53] F. Viola and W. Walker, "A Spline-Based Algorithm for Continuous Time-Delay Estimation Using Sampled Data," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 52, pp. 80–93, January 2005.
[54] E. Konofagou and J. Ophir, "A new elastographic method for estimation and imaging of lateral displacements, lateral strains, corrected axial strains and Poisson's ratios in tissues," Ultrasound in Medicine and Biology, vol. 24, pp. 1183–1199, October 1998.
[55] F. Viola, R. Coe, K. Owen, D. Guenther, and W. Walker, "Multi-Dimensional Spline-Based Estimator (MUSE) for Motion Estimation: Algorithm Development and Initial Results," Annals of Biomedical Engineering, vol. 36, pp. 1942–1960, September 2008.
[56] R. Zahiri-Azar and S. Salcudean, "Time-Delay Estimation in Ultrasound Echo Signals Using Individual Sample Tracking," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 55, pp. 2640–2650, 2008.
[57] I. Cespedes, Y. Huang, J. Ophir, and S. Spratt, "Methods for the estimation of subsample time-delays of digitized echo signals," Ultrasonic Imaging, vol. 17, pp. 142–171, 1995.
[58] P. de Jong, T. Arts, A. Hoeks, and R. Reneman, "Determination of Tissue Motion Velocity by Correlation Interpolation of Pulsed Ultrasonic Echo Signals," Ultrasonic Imaging, vol. 12, pp. 84–98, 1990.
[59] B. Geiman, L. Bohs, M. Anderson, S. Breit, and G. Trahey, "A comparison of algorithms for tracking sub-pixel speckle motion," in Proceedings of the IEEE Ultrasonics Symposium, vol. 2, Oct 1997, pp. 1239–1242.
[60] S. Foster, P. Embree, and W. O'Brien, "Flow velocity profile via time-domain correlation: error analysis and computer simulation," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 37, pp. 164–175, May 1990.
[61] F. Viola and W. Walker, "Computationally Efficient Spline-Based Time Delay Estimation," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 55, pp. 2084–2091, September 2008.
[62] B. Geiman, L. Bohs, M. Anderson, S. Breit, and G. Trahey, "A novel interpolation strategy for estimating subsample speckle motion," Physics in Medicine and Biology, vol. 45, pp. 1541–1552, 2000.
[63] S. Srinivasan and J. Ophir, "A zero-crossing strain estimator in elastography," Ultrasound in Medicine and Biology, vol. 29, pp. 227–238, 2003.
[64] H. Eskandari, S. Salcudean, and R. Rohling, "Tissue strain imaging using a wavelet transform-based peak search algorithm," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 54, pp. 1118–1130, June 2007.
[65] P. Wells, "Ultrasonic colour flow imaging," Physics in Medicine and Biology, vol. 39, pp. 2113–2145, 1994.
[66] R. Gill, "Measurement of blood flow by ultrasound: accuracy and sources of error," Ultrasound in Medicine and Biology, vol. 11, pp. 625–641, Aug 1985.
[67] E. Konofagou and J. Ophir, "Precision Estimation and Imaging of Normal and Shear Components of the 3D Strain Tensor in Elastography," Physics in Medicine and Biology, vol. 45, pp. 1553–1563, 2000.
[68] J. Greenleaf, M. Fatemi, and M. Insana, "Selected methods for imaging elastic properties of biological tissues," Annual Review of Biomedical Engineering, vol. 5, pp. 57–78, 2003.
[69] M. Lubinski, S. Emelianov, and M. O'Donnell, "Speckle tracking methods for ultrasonic elasticity imaging using short-time correlation," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 46, pp. 82–96, January 1999.
[70] H. Shi and T.
Varghese, \u00E2\u0080\u009CTwo-dimensional multi-level strain estimation for discontinuous tissue.\u00E2\u0080\u009D Physics in Medicine and Biology, vol. 52, pp. 389\u00E2\u0080\u0093401, Nov 2007. [71] L. Bohs and G. Trahey, \u00E2\u0080\u009CA novel method for angle independent ultrasonic imaging of blood \u00EF\u00AC\u0082ow and tissue motion,\u00E2\u0080\u009D IEEE Transactions on Biomedical Engineering, vol. 38, pp. 280\u00E2\u0080\u0093286, March 1991. [72] G. Trahey, J. Allison, and O. Von Ramm, \u00E2\u0080\u009CAngle independent ultrasonic detection of blood \u00EF\u00AC\u0082ow,\u00E2\u0080\u009D IEEE Transactions on Biomedical Engineering, vol. 34, pp. 965\u00E2\u0080\u00937, December 1987. [73] R. Lopata, M. Nillesena, H. Hansena, I. Gerritsa, T. J., and C. de Korte, \u00E2\u0080\u009CPerformance Evaluation of Methods for Two-Dimensional Displacement and Strain Estimation Using Ultrasound Radio Frequency Data.\u00E2\u0080\u009D Ultrasound in medicine and biology, vol. 35, pp. 796\u00E2\u0080\u0093812, 2009. [74] U. Techavipoo, Q. Chen, T. Varghese, and J. Zagzebski, \u00E2\u0080\u009CEstimation of displacement vectors and strain tensors in elastography using angular insoni\u00EF\u00AC\u0081cations.\u00E2\u0080\u009D IEEE Transactions on Medical Imaging, vol. 23, pp. 1479\u00E2\u0080\u00931489, 2004. [75] M. Rao, Q. Chen, H. Shi, T. Varghese, E. Madsen, J. Zagzebski, and T. Wilson, \u00E2\u0080\u009CNormal and shear strain estimation using beam steering on linear-array transducers.\u00E2\u0080\u009D Ultrasound in Medicine and Biology, vol. 33, pp. 57\u00E2\u0080\u009366, Jan 2007. [76] H. Chen and T. Varghese, \u00E2\u0080\u009CNoise analysis and improvement of displacement vector estimation from angular displacements.\u00E2\u0080\u009D Medical physics, vol. 35, pp. 2007\u00E2\u0080\u00932017, 2008. 13 \u000CChapter 1. Introduction [77] L. Capineri, M. Scabia, and L. 
Masotti, \u00E2\u0080\u009CVector Doppler: spatial sampling analysis and presentation techniques for real time systems.\u00E2\u0080\u009D Journal of Electronic Imaging., vol. 12, p. 489498, July 2003. [78] O. Kripfgans, J. Rubin, A. Hall, and J. Fowlkes, \u00E2\u0080\u009CVector Doppler imaging of a spinning disc ultrasound Doppler phantom.\u00E2\u0080\u009D Ultrasound in Medicine and Biology., vol. 32, pp. 1037\u00E2\u0080\u00931046, 2006. [79] L. Sandrin, M. Tanter, D. Cassereau, S. Catheline, and M. Fink, \u00E2\u0080\u009Cultrafast compound imaging for 2D motion vector estimation: Application to Transient elastography,\u00E2\u0080\u009D IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 49, pp. 1363\u00E2\u0080\u0093 1374, October 2002. [80] M. Rao and T. Varghese, \u00E2\u0080\u009CSpatial angular compounding for elastography without the incompressibility assumption.\u00E2\u0080\u009D Ultrasonic imaging., vol. 27, pp. 256\u00E2\u0080\u0093270, 2005. [81] E. Konofagou, J. D\u00E2\u0080\u0099hooge, and J. Ophir, \u00E2\u0080\u009CMyocardial elastography - A feasibility study in vivo,\u00E2\u0080\u009D Ultrasound in Medicine and Biology, vol. 28, pp. 475\u00E2\u0080\u0093482, October 2002. [82] L. Lvstakken, S. Bjaerum, D. Martens, and H. Torp, \u00E2\u0080\u009CBlood \u00EF\u00AC\u0082ow imaging\u00E2\u0080\u0093A new realtime, 2-D \u00EF\u00AC\u0082ow imaging technique.\u00E2\u0080\u009D IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 53, pp. 289\u00E2\u0080\u009399, Feb 2006. [83] J. Jensen and I. Lacasa, \u00E2\u0080\u009CEstimation of blood velocity vectors using transverse ultrasound beam focusing and cross-correlation,\u00E2\u0080\u009D in Proceedings of the IEEE Ultrasonics Symposium. IEEE, Oct 1999, pp. 1493\u00E2\u0080\u00931497 vol.2. [84] A. Heimdal, A. Stoylen, H. Torp, and T. 
Skjaerpe, \u00E2\u0080\u009CReal-time strain rate imaging of the left ventricle by ultrasound.\u00E2\u0080\u009D Journal of American Soc Echocardiography, vol. 11, pp. 1013\u00E2\u0080\u00931019, Nov 1998. [85] M. Fabian, K. Ballu, J. Hossack, T. Blalock, and W. Walker, \u00E2\u0080\u009CDevelopment of a parallel acquisition system for ultrasound research.\u00E2\u0080\u009D in Proc. SPIE vol. 4325, Feb 2001, pp. 54\u00E2\u0080\u009362. [86] J. Dahl, G. Pinton, M. Palmeri, V. Agrawal, K. Nightingale, and G. Trahey, \u00E2\u0080\u009CA Parallel Tracking Method for Acoustic Radiation Force Impulse Imaging.\u00E2\u0080\u009D IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 54, pp. 301\u00E2\u0080\u0093312, 2007. [87] J. Berco\u00EF\u00AC\u0080, M. Tanter, M. Muller, and M. Fink, \u00E2\u0080\u009CStudy of viscous and elastic properties of soft tissues using supersonic shear imaging,\u00E2\u0080\u009D in Proceedings of the IEEE Ultrasonic Symposium, 2003. [88] T. Misaridis and J. Jensen, \u00E2\u0080\u009CUse of modulated excitation signals in medical ultrasound. Part I: basic concepts and expected bene\u00EF\u00AC\u0081ts.\u00E2\u0080\u009D IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 52, pp. 177\u00E2\u0080\u0093191, Feb 2005. [89] \u00E2\u0080\u0094\u00E2\u0080\u0094, \u00E2\u0080\u009CUse of modulated excitation signals in medical ultrasound. Part II: Design and performance for medical imaging applications.\u00E2\u0080\u009D IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 52, pp. 192\u00E2\u0080\u0093207, 2005. 14 \u000CChapter 1. Introduction [90] \u00E2\u0080\u0094\u00E2\u0080\u0094, \u00E2\u0080\u009CUse of modulated excitation signals in medical ultrasound. Part III: High frame rate imaging.\u00E2\u0080\u009D IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 52, pp. 208\u00E2\u0080\u0093219, 2005. [91] V. Dutt, R. Kinnick, R. Muthupillai, T. 
Oliphant, R. Ehman, and J. Greenleaf, \u00E2\u0080\u009CAcoustic shear-wave imaging using echo ultrasound compared to magnetic resonance elastography,\u00E2\u0080\u009D Ultrasound in Medicine and Biology, vol. 26, pp. 397\u00E2\u0080\u0093403, 2000. [92] A. Henni, C. Schmitt, and G. Cloutier, \u00E2\u0080\u009CThree-dimensional transient and harmonic shearwave scattering by a soft cylinder for dynamic vascular elastography.\u00E2\u0080\u009D Journal of the Acoustical Society of America, vol. 124, pp. 2394\u00E2\u0080\u00932405, Oct 2008. [93] M. Pernot, K. Fujikura, S. Fung-Kee-Fung, and K. E., \u00E2\u0080\u009CECG-gated, mechanical and electromechanical wave imaging of cardiovascular tissues in vivo.\u00E2\u0080\u009D Ultrasound in Medicine and Biology, vol. 33, p. 10751085, 2007. [94] M. Insana, C. Pellot-Barakat, M. Sridhar, and K. Lindfors, \u00E2\u0080\u009CViscoelastic imaging of breast tumor microenvironment with ultrasound.\u00E2\u0080\u009D Journal of Mammary Gland Biology and Neoplasia, vol. 9, pp. 393\u00E2\u0080\u009304, 2004. [95] E. Fleury, J. Rinaldi, S. Piato, J. Fleury, and D. Roveda Junior, \u00E2\u0080\u009CAppearance of breast masses on sonoelastography with special focus on the diagnosis of \u00EF\u00AC\u0081broadenomas.\u00E2\u0080\u009D European Radiology, vol. 19, pp. 1337\u00E2\u0080\u009346, Jan 2009. [96] B. Garra, E. Cespedes, J. Ophir, and et al., \u00E2\u0080\u009CElastography of breast lesions: Initial clinical results,\u00E2\u0080\u009D Radiology, vol. 202, pp. 79\u00E2\u0080\u009386, January 1997. [97] R. Sinkus, J. Lorenzen, D. Schrader, M. Lorenzen, M. Dargatz, and D. Holz, \u00E2\u0080\u009CHighresolution tensor MR elastography for breast tumor detection,\u00E2\u0080\u009D Physics in Medicine and Biology, vol. 45, pp. 1649\u00E2\u0080\u009364, June 2000. [98] S. Salcudean, D. French, S. Bachmann, X. Zahiri-Azar, R.and Wen, and J. 
Morris, \u00E2\u0080\u009CViscoelasticity modelling of the prostate region using vibro-elastography.\u00E2\u0080\u009D in 9th MICCAI Conference. Denmark: MICCAI, Oct 2006, pp. 389\u00E2\u0080\u0093396. [99] A. Lorenz, H. Sommerfeld, M. Schurmann, and et al, \u00E2\u0080\u009CA new system for the acquisition of ultrasonic multicompression strain images of the human prostate in vivo,\u00E2\u0080\u009D IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 46, pp. 969\u00E2\u0080\u00931147\u00E2\u0080\u00931154, 1999. [100] R. Gaspari, D. Blehar, M. Mendoza, M. Montoya, C. Moon, and D. Polan, \u00E2\u0080\u009CUse of Ultrasound Elastography for Skin and Subcutaneous Abscesses.\u00E2\u0080\u009D Journal of Ultrasound in Medicine, vol. 28, pp. 855\u00E2\u0080\u0093860, 2009. [101] I. Sack, B. Beierbach, J. Wuerfel, D. Klatt, U. Hamhaber, S. Papazoglou, P. Martus, and J. Braun, \u00E2\u0080\u009CThe impact of aging and gender on brain viscoelasticity.\u00E2\u0080\u009D Neuroimage, vol. 46, pp. 652\u00E2\u0080\u0093657, 2009. [102] S. Emelianov, X. Chen, M. O\u00E2\u0080\u0099Donnell, B. Knipp, D. Myers, T. Wake\u00EF\u00AC\u0081eld, and J. Rubin, \u00E2\u0080\u009CTriplex ultrasound: Elasticity imaging to age deep venous thrombosis,\u00E2\u0080\u009D Ultrasound in Medicine and Biology, vol. 28, pp. 757\u00E2\u0080\u0093767, 2002. 15 \u000CChapter 1. Introduction [103] J. Rubin, S. Aglyamov, T. Wake\u00EF\u00AC\u0081eld, M. O\u00E2\u0080\u0099Donnell, and S. Emelianov, \u00E2\u0080\u009CClinical Application of Sonographic Elasticity Imaging for Aging of Deep Venous Thrombosis: Preliminary Findings,\u00E2\u0080\u009D Ultrasound Medicine, vol. 22, pp. 443\u00E2\u0080\u0093448, 2003. [104] T. Varghese, J. Zagzebski, and J. Jee, \u00E2\u0080\u009CElastography imaging of thermal lesion in the liver in vivo following radio frequency ablation: Preliminary results,\u00E2\u0080\u009D Ultrasound in Medicine and Biology, vol. 28, pp. 
Chapter 2

Time-Delay Estimation in Ultrasound Echo Signals Using Individual Sample Tracking¹

2.1 Introduction

Motion estimation, posed as time-delay estimation in sequences of ultrasound echo signals, is essential to a wide range of modern ultrasound-based signal processing applications. Time-delay estimation lies at the heart of blood flow estimation, tissue velocity estimation [1–3], tissue elasticity estimation [4–8], radiation force imaging [9–11], and many other applications. Time-delay estimators measure the displacement of the backscattered signals with respect to the transducer.
This displacement appears as a time-shift or phase-shift between sequences of echo signals [2]. Time-delay estimators are typically classified by the type of signal they operate on (radio-frequency (RF) signal, envelope signal, or in-phase and quadrature (I/Q) signals) and by the domain in which they operate (time, phase, or frequency). These estimators have been studied and compared extensively in the literature [12–15].

Phase-shift estimators were initially used for blood flow measurement and were later adopted in other fields to estimate tissue motion. They find the average phase-shift over a number of samples within a window, with respect to the nominal or estimated center frequency of the transmitted pulse. These techniques typically use the complex cross-correlation of the RF echo signals [16,17] or of complex-valued Doppler signals [1,2].

Time-shift estimators are widely used for tissue motion estimation. They typically identify the maximum/minimum of a pattern matching function: the shape of the signal within a specific window of the reference echo signal is taken as the pattern, and a matching algorithm finds the best match in the delayed echo signal. Many pattern matching techniques are in use, each offering trade-offs between complexity and accuracy [12, 18, 19]. To reduce the bias and variance introduced by finite sampling intervals, interpolation of the pattern matching function, such as curve fitting and polynomial interpolation, has been introduced [20]. These techniques include parabolic fitting [21], spline fitting [22], cosine fitting [23], and grid slope [24].
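As a concrete illustration of the phase-shift idea, the sketch below estimates the average delay of a window of baseband (I/Q) samples from the angle of their complex correlation, in the spirit of the autocorrelation-type estimators cited above. The function name, the single-window form, and the sign convention are illustrative assumptions, not the exact estimators of [1,2,16,17].

```python
import numpy as np

def phase_shift_delay(iq1, iq2, f0):
    """Average delay over a window of baseband (I/Q) samples from the
    angle of their complex correlation (a sketch).

    f0 is the assumed center frequency: a delay tau rotates the baseband
    signal by exp(-j*2*pi*f0*tau), so the correlation angle encodes tau.
    The estimate is unambiguous only while that phase stays in (-pi, pi].
    """
    r = np.vdot(iq1, iq2)                 # sum of conj(iq1) * iq2
    return -np.angle(r) / (2 * np.pi * f0)
```

Because the estimate comes from a single complex angle, it is cheap, but it wraps for delays whose phase exceeds ±π, which motivates the unwrapping strategies discussed later in this chapter.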
While the computational cost of these methods is small, they suffer from relatively high bias and variance. To reduce the bias and variance of the delay estimators, at the expense of computational cost, spline-based continuous time-shift estimators have been introduced [15,25]. In these techniques, the sampled echo signal is interpolated by spline polynomials, and the polynomial coefficients are used to generate a continuous pattern matching function. Time-shifts are estimated by finding the minimum/maximum of this continuous pattern matching function analytically. By keeping the displaced signal discrete and representing only the reference echo signal by polynomials, a continuous time-shift estimator was derived by Viola and Walker [25]. At the expense of higher computational cost, Pinton and Trahey derived another algorithm in which continuous representations of both the reference and the delayed echo signals are used to calculate a continuous pattern matching function [15]. These authors have shown that continuous time-shift estimators significantly outperform other algorithms in terms of standard deviation and bias over a broad range of conditions.

All the above-mentioned techniques fall in the category of window-based delay estimators: they all measure the average time-shift/phase-shift of a number of samples within a certain window. The size of the window and the overlap between windows play an important role in the performance of these time-delay estimators [26].

¹ A version of this chapter has been published: R. Zahiri Azar and S. E. Salcudean, "Time-Delay Estimation in Ultrasound Echo Signals Using Individual Sample Tracking," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, no. 12, pp. 2640–2650, 2008.
Trade-offs exist between the signal-to-noise ratio and the resolution of time-shift estimators when different window sizes are used [13, 27, 28]. In an effort to alleviate problems related to windowing, time-delay estimation using multiple window sizes [29, 30] has been introduced in the literature.

Some alternative approaches to window-based time-delay estimation make use of feature extraction. Zero-crossing tracking (ZCT) [4] identifies the zero-crossings of the echo signals using linear interpolation. Peak tracking and level-crossing tracking, in which tracking is performed at a predefined amplitude level, have also been suggested in [4] as extensions of ZCT. The peak searching algorithm [31] identifies the peaks of the echo signals using a wavelet transform. The distances between the zero-crossings (or level-crossings) or peaks represent the time-shifts. ZCT and peak searching algorithms provide as many measurement points as there are zero/level crossings or peaks within the signals. As a result, they have the potential to provide much higher resolution than window-based techniques.

In this paper we propose a new delay estimation algorithm called the Sample Tracking (ST) algorithm. With this algorithm, the time-shift of each sample in a delayed echo signal is measured with respect to a continuous, interpolated representation of the reference echo signal. ST does not require windowing and can be implemented with a number of interpolation schemes that provide continuous approximations to the original signal. These interpolation schemes trade off delay estimation accuracy against computational requirements. ST provides time-delay estimates with much higher density than conventional window-based methods.
We also introduce a spline-based ZCT in which a spline-based interpolation method is used to find the zero-crossings of the echo signals, instead of the previously used linear interpolation technique.

The paper is structured as follows: Section 2.2 presents the ST and the spline-based zero-crossing algorithms. Sections 2.3 and 2.4 present simulation methods and simulation results that compare the performance of the algorithms introduced in Section 2.2 to some of the best and most widely used delay estimators. A discussion of the results is provided. Section 2.5 summarizes experimental results with a tissue phantom to demonstrate the feasibility of the proposed approach. Conclusions are presented in Section 2.6, along with avenues for future research.

Figure 2.1: Schematic of ST. The markers represent the discrete samples in the reference s1[i] and delayed s2[i] signals, and the continuous line shows a polynomial interpolation f(t) of the reference signal. Black arrows show the actual displacements of each sample and gray arrows show other displacement candidates.

2.2 Proposed Algorithms

2.2.1 Sample Tracking

Let s1[i], i = 0, 1, ..., n − 1, be a sampled reference echo signal and s2[i], i = 0, 1, ..., n − 1, a sampled, delayed echo signal, where n is the number of discrete samples in the echo signals.
Let f(t), t ∈ [0, ∞), be a continuous interpolation of s1[i], i.e., f is such that f(iT) = s1[i] for all i, where T is the sampling period. The displacement of each individual discrete sample of the delayed echo signal s2[i] with respect to the reference signal s1[i] can be estimated by finding t such that

f(t) = s2[i].    (2.1)

This is shown in Fig. 2.1. Any technique previously used to interpolate the pattern matching function can be used here to interpolate the reference echo signal. The complexity and accuracy of the ST algorithm depend strongly on the form of the polynomial representation of the echo signal. Cubic splines are commonly used in signal processing due to their excellent balance between ease of computation and accuracy [15,25,32]. To implement ST using a cubic spline polynomial, without involving any windowing, a continuous representation of the entire reference signal is generated according to the following equation:
f(iT + t) = f_i(t) = a_i t^3 + b_i t^2 + c_i t + d_i,  t ∈ [0, T],    (2.2)

where a_i, b_i, c_i, d_i are the coefficients of the fitted spline polynomial at the i-th sample. Alternatively, local fitting of polynomials using neighboring samples can be used to generate the corresponding coefficients at each sample location. The details of local polynomial fitting are provided in Appendix A.
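The windowless spline-based ST described above can be sketched as follows: fit one cubic spline to the whole reference line and, for each delayed sample, solve f_i(t) = s2[i] on [0, T]. This is a minimal illustration under stated assumptions: the function names are hypothetical, the nearest-root tie-break is one plausible choice, and negative shifts and false-root filtering are handled later in the text.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sample_tracking(s1, s2, T=1.0):
    """Per-sample sub-sample delays of s2 relative to s1 (a sketch).

    Fits a cubic spline f to the entire reference line and, for each
    delayed sample s2[i], solves f(i*T + t) = s2[i] for t in [0, T].
    Samples with no root in their segment are left as NaN.
    """
    n = len(s1)
    f = CubicSpline(np.arange(n) * T, s1)
    delays = np.full(n, np.nan)
    for i in range(n - 1):
        roots = f.solve(s2[i], extrapolate=False)  # all t with f(t) = s2[i]
        local = roots[(roots >= i * T) & (roots <= (i + 1) * T)]
        if local.size:
            # assumed tie-break: keep the root nearest the sample location
            delays[i] = local[np.argmin(np.abs(local - i * T))] - i * T
    return delays
```

Note the output has one delay estimate per sample, which is the density advantage over window-based estimators that the text emphasizes.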
The individual delays Δ̂[i] of the discrete samples s2[i], i = 0, 1, ..., n − 1, of the delayed echo signal are estimated by finding the root of f_i(t) − s2[i] within the interval [0, T] that is nearest to the sample location:

Δ̂[i] = {t ∈ [0, T] | f_i(t) = s2[i]}.    (2.3)

These delay estimates are not subject to sampling quantization. For negative time-shifts, where the time-shift estimates fall in the interval [−T, 0], the process described above can be repeated by shifting the reference sampled signal by one sample and using the coefficients a_{i−1}, b_{i−1}, c_{i−1}, d_{i−1} to find the root of f_{i−1}(t) − s2[i] within the interval [0, T]. This results in a time-shift estimate in the interval −T + [0, T] = [−T, 0].
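The segment-shifting step for negative time-shifts can be illustrated by widening the root search to one sampling period on either side of the sample location, which covers both segment i (shifts in [0, T]) and segment i − 1 (shifts in [−T, 0]). The function name and the nearest-root selection are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def st_delay_signed(f, s2_i, i, T=1.0):
    """Delay of a single delayed sample s2[i] (a sketch).

    `f` is a CubicSpline fitted to the reference line.  Roots are sought
    in [(i-1)*T, (i+1)*T], i.e. in segments i-1 and i, giving signed
    estimates in [-T, T].
    """
    roots = f.solve(s2_i, extrapolate=False)
    # keep candidates within one sampling period of the sample location
    local = roots[(roots >= (i - 1) * T) & (roots <= (i + 1) * T)]
    if local.size == 0:
        return np.nan        # no root: mark the sample as unknown
    return local[np.argmin(np.abs(local - i * T))] - i * T
```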
This process is analogous to the shifting process used in conventional window-based time-delay estimation using pattern matching functions.

In addition to polynomial fitting, fitting a cosine function to neighboring samples can also be employed to generate a continuous representation of the reference echo signal, i.e., f_i(t) = A_i cos(α_i t + β_i). The individual delays Δ̂[i] of the discrete samples of the delayed echo signal are then estimated by solving for t and selecting the root, t = (± arccos(s2[i]/A_i) − β_i)/α_i, that is nearest to the sample location within the interval [−T, T].

ST algorithms provide a time-delay estimate for each sample of the echo signal, as opposed to window-based methods, which provide the same time-delay estimate for all the samples within a window. As a result, the delay estimates provided by ST are much denser than those provided by conventional window-based methods. Due to signal variations, ST may not always find a root. Samples for which equation (2.1) does not have a root can be marked and removed, or replaced with the average time-shift of neighboring samples. Furthermore, without any prior knowledge about the displacement of each sample, ST may find more than one candidate for the delay estimate of each sample (i.e.
(2.1) has more than one root). Several possible scenarios are shown in Fig. 2.1. At a peak/valley of the echo signal, ST might select the root to the right or the one to the left (black or gray arrow). Selecting a false root results in a spike in the sample delay estimates. The problem of false roots in ST is similar to the selection of false peaks in pattern matching functions. As done in methods based on pattern matching [28], non-linear filtering can be used to remove false roots in ST. In the ST algorithms presented in this paper, false roots and unknown delays are removed by applying 1D median filtering with a kernel size of five data points.

The ST algorithm is different from the ZCT/level-crossing and peak searching algorithms. In ZCT, a single level-crossing is defined. In ST, each sample of the delayed signal defines a level-crossing to be found in the reference signal (Fig. 2.1). Thus, in ST, a new level is defined for every sample, as opposed to tracking a fixed and predefined amplitude level as suggested in [4].

ST can also be derived from the spline-based continuous time-delay estimator [25]. In [25], the error between the reference and the delayed signal is defined for a given window as the following quadratic function:

ϵ(t) = Σ_{i=1}^{W} (f_i(t) − s2[i])²,    (2.4)

where W is the number of samples in the window.
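The post-processing described above, replacing rootless samples and suppressing spike-like false roots with a five-point median filter, might look like the following sketch. Filling missing samples from their neighbors is one of the two options the text mentions; the function name is illustrative.

```python
import numpy as np
from scipy.signal import medfilt

def clean_delays(delays, kernel=5):
    """Fill rootless (NaN) samples from neighboring estimates, then
    remove isolated spikes with a 1D median filter (a sketch of the
    five-point non-linear filtering described in the text)."""
    d = np.asarray(delays, dtype=float).copy()
    missing = np.isnan(d)
    if missing.any():
        idx = np.arange(d.size)
        # linear interpolation across marked (rootless) samples
        d[missing] = np.interp(idx[missing], idx[~missing], d[~missing])
    return medfilt(d, kernel_size=kernel)
```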
The time delay t that minimizes ϵ(t) is found analytically by taking the derivative with respect to t, setting the result equal to zero, and solving for t. By setting W = 1 in (2.4), the error function and its derivative can be written as ϵ(t) = (f_i(t) − s2[i])² and ∂ϵ(t)/∂t = 2 f_i′(t) (f_i(t) − s2[i]), respectively. Setting the derivative equal to zero (away from the stationary points of f_i), we obtain

∂ϵ(t)/∂t = 0 → f_i(t) = s2[i],    (2.5)

which is the same equation that we used for ST. It should be noted that the error function defined in (2.4) does not estimate the average of the displacements of the individual samples inside the window. Therefore, similar results are not expected, and are not obtained, when time-delay estimates of several samples are averaged and compared with the method of [25].

Similarly to phase-shift estimators, which fail to estimate delays outside [−λ/2, λ/2], where λ is the wavelength, ST is envisaged to be used primarily for tiny delays (i.e.
within [−λ/4, λ/4]), for which sub-sample delay estimation becomes very important and common window-based algorithms suffer from relatively high bias and standard deviation. With this property, ST algorithms are readily applicable to estimating delays in high-frame-rate tissue imaging, where delays between successive acquisitions are small, and in ARFI imaging, where maximum delays are small. For larger time-shifts, previously estimated time-shifts can be used to find the coarse location of the current sample. This approach has been used in [33] to guide the search for the pattern matching function. The same approach has been used in [17] to unwrap phase-shifts outside [−λ/2, λ/2]. Alternatively, other methods, such as pattern matching, can be used first to find a coarse estimate of the time-shifts. ST can then be applied to find the fine sub-sample time-shift of each sample. Thus, the time-shift of each sample equals the coarse time-shift, estimated with the pattern matching function, plus the fine sub-sample time-shift, estimated with ST. This approach was implemented in this work to accurately estimate large delays using ST.

2.2.2 Spline-based Zero-Crossing Tracking

The ZCT estimator identifies the zero-crossings of s_1[i] and s_2[i]. The displacements are then estimated from the distance between the zero-crossings. The initial implementation of ZCT employed linear interpolation between consecutive samples of the two RF signals that have different signs [4].
With a trade-off of higher computational cost, the performance of ZCT is expected to improve when more accurate representations of the echo signals are employed to find the zero-crossings. Let f(t), g(t) be continuous representations of the echo signals s_1[i], s_2[i], respectively, generated using spline polynomials as in (2.1) above. The locations of the zero-crossings t_1[j], t_2[j], where j is the zero-crossing index, are found as the roots of the fitted functions, i.e. t_1[j] = t|_{f(t)=0} and t_2[j] = t|_{g(t)=0}. Finally, the displacements

Figure 2.2: Schematic representation of the ZCT algorithm. The circle markers show discrete samples and the continuous lines show the polynomial f(t) fitted to the reference signal s_1[i] and the polynomial g(t) fitted to the delayed signal s_2[i]. The arrows show the displacements of the zero-crossings.
are estimated from the distance between the locations of the corresponding zero-crossings (Fig 2.2):

Δ̂[j] = t_1[j] − t_2[j].   (2.6)

Equation (2.6) was suggested in [4] and assumes that corresponding zero-crossings in the reference and delayed signals share the same index. However, this assumption does not always hold. If some zero-crossings disappear from the reference echo signal or new zero-crossings appear in the delayed signal, corresponding zero-crossings will not share the same index. Therefore, if (2.6) is used directly, large biases may be introduced in the time-shift estimates. To avoid this problem, prior to using (2.6), a check is added to correct the indexes. In this work, large positive jumps in Δ̂[j] (i.e. t_1[j] − t_2[j] > λ/4) are assumed to be the result of the appearance of a zero-crossing, and large negative jumps (i.e. t_2[j] − t_1[j] > λ/4) the result of the disappearance of a zero-crossing. Therefore, in the ZCT method presented in this paper, equation (2.6) is re-formulated as

Δ̂[j] = t_1[j + n] − t_2[j + m],   (2.7)

where m and n are correction factors accounting for additional or missing zero-crossings.
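A minimal sketch of this spline-based ZCT follows; the toy signals and names are illustrative, SciPy's `CubicSpline.roots` is assumed for locating the crossings, and the index correction implements the λ/4 jump rule above.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def zero_crossings(s):
    """Zero-crossing locations of a sampled signal, found as the real
    roots of a cubic-spline fit (spline-based ZCT front end)."""
    return CubicSpline(np.arange(len(s)), s).roots(extrapolate=False)

def zct_displacements(z1, z2, lam4):
    """Pair crossings with the index correction of Eq. (2.7): jumps
    larger than lambda/4 are taken as an appeared/disappeared crossing
    and the corresponding index is advanced."""
    disp, j1, j2 = [], 0, 0
    while j1 < len(z1) and j2 < len(z2):
        d = z1[j1] - z2[j2]
        if d > lam4:            # a crossing appeared in the delayed signal
            j2 += 1
        elif -d > lam4:         # a crossing disappeared from the reference
            j1 += 1
        else:
            disp.append(d)
            j1 += 1
            j2 += 1
    return np.array(disp)

# Toy signals: a sine and a replica delayed by 0.2 samples.
t = np.arange(40)
s1 = np.sin(2 * np.pi * 0.1 * t + 0.3)
s2 = np.sin(2 * np.pi * 0.1 * (t - 0.2) + 0.3)
disp = zct_displacements(zero_crossings(s1), zero_crossings(s2), lam4=2.5)
```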
The appearance or disappearance of zero-crossings is a function of the quality of the continuous fit to the sampled signals. The better the fit, the less likely it is that zero-crossings will appear or disappear.

2.3 Simulation Methods

A series of computer simulations was performed to study the performance of the proposed algorithms. The proposed algorithms were compared with the following conventional time-shift and phase-shift estimators:

1. Viola's continuous time-delay estimator (CTDE) [25],
2. Parabola fitting to normalized cross-correlation [21],
3. Cosine fitting to normalized cross-correlation [23],
4. Spline fitting to normalized cross-correlation [22],
5. Kasai's phase-shift estimator (1D autocorrelator) [1],
6. Loupas' phase-shift estimator (2D autocorrelator) [2].

Detailed descriptions of normalized correlation and the associated interpolation schemes as time-shift estimators are provided in [25]. Detailed descriptions of the 1D and 2D autocorrelators as phase-shift estimators are provided in [13]. The performance of each estimator is considered in terms of its bias and standard deviation as a function of sub-sample shift [25, 28], window size [13, 26], and resolution [13, 27]. The RF data was constructed to simulate line scatterers moving axially toward the transducer.
Similarly to previously reported work, the base signal was created by convolving Gaussian distributed white noise (zero mean and unit standard deviation) with a sinc-enveloped sinusoid point spread function (PSF) [25, 28] given by

PSF(t) = [ sin(πBf_0 t) / (πBf_0 t) ] · sin(2πf_0 t),   (2.8)

where f_0 = 5 MHz is the center frequency and B = 0.5 is the fractional bandwidth. A sinc-enveloped sinusoid PSF was chosen over a Gaussian-enveloped sinusoid [13, 15] to facilitate the comparison of the results with [25, 28]. The PSF(t) was sampled from −1.5/(Bf_0) to 1.5/(Bf_0) at a frequency equal to the sampling rate. Oversampling the base signal by a factor of 100 (f_s = 4 GHz) was used to obtain the necessary sub-sample displacement resolution to study the bias and standard deviation of the time-delay estimators. Reference and delayed signals, s_1[i] and s_2[i], were formed by decimating the base signal by a factor of 100 (f_s = 40 MHz). The base signal was decimated starting at different samples to produce reference and delayed signals with a known sub-sample delay. For example, to produce a 0.1 sub-sample shift, the base RF signal was down-sampled by a factor of 100 starting at the 1st and 11th samples to produce the reference and delayed signals, respectively.
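This signal generation can be sketched as follows. It is a scaled-down illustration, not the thesis code: the oversampling factor is 20 instead of 100, far fewer samples are generated, and all variable names are invented.

```python
import numpy as np

f0, B = 5e6, 0.5            # centre frequency and fractional bandwidth
over = 20                   # oversampling factor (100 in the text)
fs = 40e6 * over            # oversampled rate

# Sinc-enveloped sinusoid PSF of Eq. (2.8), sampled on [-1.5/(B f0), 1.5/(B f0)].
t = np.arange(-1.5 / (B * f0), 1.5 / (B * f0), 1 / fs)
psf = np.sinc(B * f0 * t) * np.sin(2 * np.pi * f0 * t)  # np.sinc(x) = sin(pi x)/(pi x)

# Base signal: white Gaussian noise convolved with the PSF.
rng = np.random.default_rng(1)
base = np.convolve(rng.standard_normal(40_000), psf, mode="same")

# Decimate at two offsets to obtain a known 0.25-sample shift at 40 MHz.
shift = 0.25
off = int(round(shift * over))     # 5 oversampled samples
s1 = base[0::over]                 # reference
s2 = base[off::over]               # delayed by 0.25 samples at 40 MHz
```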
Sub-sample delays were varied from 0 to 0.95 samples in steps of 0.05 samples, so that a total of 20 different delays were evaluated. A total of 1,200,000 RF samples were generated in the base signal, which resulted in 12,000 samples for each step after decimation. Window-based displacement calculations are based on data with an axial extent (or window length) of 24 samples (1.5λ or ≈ 450 μm), which resulted in M = 500 independent realizations for each delay. Gaussian white noise was also added to the reference and delayed signals to generate echo signals with different signal-to-noise ratios (SNR).

The following equations were used to estimate the bias b and standard deviation σ:

b(Δ̂) = (1/M) Σ_{k=1}^{M} ( Δ̂[k] − Δ[k] ),   (2.9)

σ(Δ̂) = √[ (1/M) Σ_{k=1}^{M} ( Δ̂[k] − (1/M) Σ_{k=1}^{M} Δ̂[k] )² ],   (2.10)

where M is the number of measurements, Δ[k] are the true time delays, and Δ̂[k] are the estimated time delays. In order to study the resolution of the proposed estimators, their displacement step responses have been studied as in [13, 27].
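In code, (2.9) and (2.10) are a two-liner; the sketch below uses the biased, 1/M form of the standard deviation, which matches NumPy's default `np.std` (ddof = 0). The example values are invented.

```python
import numpy as np

def bias_and_std(est, true):
    """Bias and standard deviation of delay estimates, Eqs. (2.9)-(2.10)."""
    est = np.asarray(est, dtype=float)
    true = np.asarray(true, dtype=float)
    b = np.mean(est - true)                            # Eq. (2.9)
    sigma = np.sqrt(np.mean((est - est.mean()) ** 2))  # Eq. (2.10)
    return b, sigma

# Five estimates of a true 0.25-sample delay:
est = np.array([0.26, 0.24, 0.25, 0.27, 0.23])
b, sigma = bias_and_std(est, np.full(5, 0.25))
```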
A step discontinuity was mimicked by inducing a square wave displacement profile in the echo signals. Twenty independent realizations of the reference and the delayed echo signals were generated for the same square wave displacement profile. Bias, standard deviation, and resolution are the most commonly used metrics for studying the performance of time-delay estimators. In order to study and compare the sensitivity of ST in the presence of decorrelation as a strain estimator, its strain filter has also been studied. A reference 2D RF frame was generated with a 1D simulation for each RF line, as explained above. A total of 100 independent lines were simulated to generate the reference RF frame. To mimic the deformed signals at different compression ratios, the motion of scatterers for each RF line was modeled using a serial connection of springs. A stiffness map was assigned to each RF line. The stiffness was set to be the same across the entire image, which leads to uniform compression of the point scatterers. The displacements of point scatterers were calculated by compressing the model with different compression ratios [33]. Compressions were generated in the range of 0.0001% to 10% (equivalent to 4.0 × 10⁻⁵ mm ≈ 2.0 × 10⁻³ of a sample to 4.0 mm ≈ 2.0 × 10² samples maximum displacement) on a logarithmic scale. The deformed RF signals were then calculated by convolving the compressed point scatterers with the original PSF (2.8). The strain signal-to-noise ratios SNR_e, as defined in Appendix B, were computed from these data and used to generate the strain filter of the ST algorithm in the presence of decorrelation noise.
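The uniform-compression part of this setup can be sketched as follows, under two stated simplifications: with equal stiffness everywhere, the serial-spring model reduces to scaling every scatterer depth by (1 − strain); and the PSF of (2.8), expressed in samples, is evaluated directly at each (sub-sample) scatterer depth, which is equivalent to convolving the point-scatterer train with the PSF. All names and parameter values are illustrative.

```python
import numpy as np

def deform_scatterers(depths, strain):
    """Uniform axial compression: with equal stiffness everywhere the
    serial-spring model reduces to scaling each scatterer depth."""
    return depths * (1.0 - strain)

def render_rf(depths, amps, n, f0_per_fs=0.125, bw=0.5):
    """Render one RF line by summing the sinc-enveloped sinusoid PSF of
    Eq. (2.8) (argument in samples; f0/fs = 5 MHz / 40 MHz) at each
    scatterer depth."""
    line = np.zeros(n)
    k = np.arange(n)
    for d, a in zip(depths, amps):
        tau = k - d
        line += a * np.sinc(bw * f0_per_fs * tau) * np.sin(2 * np.pi * f0_per_fs * tau)
    return line

rng = np.random.default_rng(2)
depths = np.sort(rng.uniform(0, 256, 40))   # scatterer depths in samples
amps = rng.standard_normal(40)
pre = render_rf(depths, amps, 256)                            # reference line
post = render_rf(deform_scatterers(depths, 0.01), amps, 256)  # 1% compression
```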
2.4 Simulation Results and Discussions

Simulations were performed by applying the different time-delay estimators to the simulated ultrasonic RF data. All calculations were performed in MATLAB (MathWorks Inc., Natick, MA). Since the ZCT and ST methods provide a higher number of estimate points than window-based tracking (6 times higher for ZCT and 24 times higher for ST for a window length of 1.5λ), the delay estimates from the ZCT and ST algorithms were lumped following the estimation. A delay estimate for each window was obtained by taking the average of the delay estimates that fall within the window. These average delay estimates were then used to study and compare the performance of the ZCT and ST algorithms with conventional window-based techniques. Simulation results are shown in Figs 2.3-2.8. Fig 2.3 shows the bias and standard deviation of all the methods considered as a function of the sub-sample shift, as discussed in Section 2.3.

Figure 2.3: Bias and standard deviation of all the delay estimators (CTDE with cubic spline; NCC with parabolic, cosine, and spline fits; 1D and 2D autocorrelators; ST with cosine, 4th-order local, and cubic spline fits; ZCT with linear and cubic spline fits). The reference and delayed signals were identical, except for a sub-sample shift.
For each delay, 500 independent realizations were used (f_s = 40 MHz, f_0 = 5 MHz, B = 0.5, and W = 1.5λ).

Figure 2.4: Bias and standard deviation of all the techniques for different window sizes. The reference and delayed signals were identical, except for a sub-sample shift of 0.25 samples (f_s = 40 MHz, f_0 = 5 MHz, Depth = 50 mm, no window overlap).

Figure 2.5: The bias of all the techniques (left) and the standard deviation of the window-based delay estimators (right). The results are the average displacement estimates over 20 realizations (f_s = 40 MHz, f_0 = 5 MHz, B = 0.5, W = 24 samples for window-based techniques).
A signal and its exact shifted replica were used. The vertical axis for the standard deviation is shown on a logarithmic scale in order to provide a clear differentiation. Fig 2.3 shows the bias and standard deviation of the ST algorithm when three-point cosine fitting, fourth-order local polynomial fitting, and cubic spline fitting are used to generate the continuous representation of the reference echo signal. As shown in Fig 2.3, the accuracy of tracking individual samples increased with the quality of the polynomial interpolation. ST algorithms using cubic spline fitting and CTDE significantly outperform all the window-based techniques and provide exact results, in terms of bias, in estimating the average delays. These results are consistent with the results provided in Section 2.2, where it was shown that ST algorithms can be derived from CTDE with W = 1. For the ZCT algorithms, Fig 2.3 shows that both the bias and the standard deviation drop significantly when spline functions are used to locate the zero-crossings instead of the previously used linear interpolation. These expected results are the outcome of the increased accuracy of tracking the zero-crossings as the interpolating polynomial becomes a better representation of the actual discrete data. The results also show that, when linear interpolation is used, the ZCT performance closely resembles that of the correlation-based algorithms. These results are consistent with those reported by Srinivasan in [4], where ZCT was reported to generate strain images with lower signal-to-noise ratios in comparison to normalized correlation. Fig 2.3 shows that the spline-based ZCT algorithm achieves the smallest bias among all the techniques and performs better than the spline-based ST and CTDE algorithms.
This is due to the fact that ZCT fits a spline polynomial to both the reference and the delayed echo signals, while ST keeps the delayed signal discrete. Similar results have been reported in the literature for window-based continuous time-delay estimation. In [15] it has been shown that, with the trade-off of higher computational cost, fitting spline functions to both the reference and displaced signals, as opposed to the reference signal only [25], improves the performance of time-delay estimators. However, ST algorithms have several advantages compared to ZCT methods. First, ST provides many more measurement points than ZCT, since the number of samples is always greater than the number of zero/level crossings. Second, the spacing between the time-shift estimates is fixed in ST and equal to the spacing between echo samples; the ZCT algorithms provide time-shift estimates only at the locations of the predefined level (zero or other) crossings, which are not always equally spaced. Third, unlike ZCT, ST can be applied to envelope signals equally effectively. Fourth, ST does not have the problem of losing track of the motion, as mentioned in Section 2.2 for ZCT, because the time-shift estimation of each sample is independent from that of the other samples. Fig 2.3 also shows that CTDE produces the lowest standard deviation among all techniques, followed by ST and ZCT. Fig 2.3 further shows that normalized correlation with cosine fit and spline fit achieves lower bias than with parabolic fit. The 2D autocorrelator as a phase-shift estimator, which estimates the center frequency, outperforms the 1D autocorrelator, which assumes a fixed center frequency equal to that of the transmitted pulse. These results are all consistent with previously published results [12, 13, 20, 22, 25, 34].
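Because ST and ZCT deliver a dense set of per-sample estimates, the window averaging used for these comparisons can be repeated for any window length without re-running the estimator itself. A cumulative-sum sketch (names and data are illustrative):

```python
import numpy as np

def rewindow(delays, win_len):
    """Average dense per-sample delay estimates over non-overlapping
    windows of length win_len.  With a cumulative sum, changing the
    window size (i.e. the resolution) costs O(n); the per-sample
    estimates themselves are never recomputed."""
    c = np.concatenate(([0.0], np.cumsum(delays)))
    starts = np.arange(0, len(delays) - win_len + 1, win_len)
    return (c[starts + win_len] - c[starts]) / win_len

dense = np.linspace(0.0, 0.5, 96)   # per-sample delays, e.g. from ST
coarse = rewindow(dense, 48)        # 2 windows
fine = rewindow(dense, 16)          # 6 windows
```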
Fig 2.4 shows the bias and standard deviation of all the methods as a function of window size. An echo signal with a sub-sample shift of 0.25 samples was used for this study. The ZCT algorithm has the smallest bias, followed by ST and CTDE. Similarly to the results in Fig 2.3, the bias of ST is the same as the bias of CTDE for all window sizes. For the spline-based methods (i.e. ZCT, ST, and CTDE), the size of the window does not affect the bias of the estimator. Fig 2.4 shows that for large window sizes, the bias of the 2D autocorrelator phase-shift estimator becomes comparable to that of the spline-based techniques. For all the methods considered, the standard deviation increases as the size of the window is reduced. This is consistent with the result published in [28], where it was shown that the lower bound of the standard deviation of the delay estimators is inversely proportional to the square root of the size of the window. Therefore, the standard deviation is expected to increase as the size of the window is reduced. Fig 2.4 also shows that the standard deviations of all the methods remain larger than that of the CTDE. It should be noted that the results depicted in Figs 2.3 and 2.4 are in favor of window-based methods, since all the samples experienced the exact same shift inside the window. To study a more realistic case in which the samples inside a window experience different delays, we used a step discontinuity in the delays. The step responses (square wave responses) of the techniques under study indicate their resolution and are shown in Fig 2.5. The step responses of the window-based techniques are compared with ST when no averaging is applied to the estimated delays. Fig 2.5 shows that ST outperforms all the methods. For ST, the transition from one step to the next happens instantly, while for all the window-based methods, including CTDE, the transition is smooth.
This was expected, as ST has the capability of tracking individual samples as opposed to tracking a group of samples within a window; thus its performance is not limited by the size of the window. Fig 2.6 depicts the relationship between the standard deviation and the SNR. The SNR was varied from 10 dB to 60 dB. The results show that the performance of both ST and ZCT degrades rapidly as the SNR becomes smaller. This is also expected as, unlike window-based methods, neither ST nor ZCT takes advantage of averaging to improve the time-delay estimates. Measurements based on single samples or zero-crossings are more susceptible to errors when compared to tracking a group of samples within a window. To study the ST performance as a strain estimator and compare it with commonly used techniques, the algorithm was employed to estimate the delay when the echo signal changed due to sample compression. The performance of the cubic spline-based ST was compared with normalized correlation with cosine fit and cubic spline-based CTDE. The normalized correlation was used to find the coarse location of the time-shift for both techniques. The windows were set to a size of 3λ with 50% overlap, as typically used in strain estimation [26]. The time-delay estimates for varying compression levels are shown in Fig 2.7 for all three methods. Fig 2.7 shows that window tracking with cosine fit fails to estimate small strains and shows poor sensitivity, while the spline-based methods estimate even 0.0001% compression (4.0 × 10⁻⁵ mm ≈ 2.0 × 10⁻³ of a sample maximum displacement) without any ambiguity.
These results are consistent with the results shown in Figs 2.3 and 2.4, where ST and CTDE were shown to have much smaller bias and standard deviation when compared to the normalized correlation with cosine fit. For large compression levels (i.e. 7% to 10%), all the algorithms failed to estimate the delays correctly. This is due to the fact that they all depend on the normalized cross-correlation to provide them with delay estimates that are within the sampling accuracy. As a result, when the normalized correlation fails at large compression levels, all the algorithms fail as well. In order to study the sensitivity quantitatively, the strain filters of all the above methods were estimated; they are shown in Fig 2.8.

Figure 2.6: Standard deviation of the delay estimators as a function of SNR. The bottom panel depicts an expanded view of each condition tested. The reference and delayed signals were identical, except for a sub-sample shift. 500 independent realizations were used to generate the results (f_s = 40 MHz, f_0 = 5 MHz, B = 0.5, and W = 1.5λ).

Figure 2.7: Time-delay estimation images at different compression levels (0.0001%∼0.1% and 8%∼9% strain on a logarithmic scale) using normalized cross correlation with cosine fit (left), CTDE with cubic spline fit (center), and ST with cubic spline fit (right). The window size was set to 3λ with 50% window overlap. Colorbars are in samples.

Figure 2.8: Signal-to-noise ratio of the estimated strain as a function of the applied compression for window tracking and ST. ST shows much higher sensitivity compared to window tracking, since it is capable of estimating very small strains.

Fig 2.8 shows that, compared to CTDE, ST provides strain images with much higher sensitivity. The performances of all the methods are similar for large deformations. To compare the computational cost of ZCT and ST with that of CTDE, note that both ST and ZCT preserve the order of the fitted polynomials in the root estimation process, whereas the square term in (2.4) doubles the order of the polynomial for the continuous time-delay estimation. Therefore, the root estimation routine is simpler for both ST and ZCT. However, both ST and ZCT require calling the root estimation process more often than the window-based continuous time-delay estimators. As a result, the ST and ZCT methods have higher computational cost than the corresponding window-based continuous delay estimators. However, estimating the delay for a different window size with a window-based method requires re-estimating the delays for each window, while in ST and ZCT only the windowing must be repeated, since the sample or zero-crossing shift does not change. Thus, the ST and ZCT methods offer the flexibility of a variable resolution, depending on the window size used to average the displacement estimates, without significantly increasing the computation time. In contrast, window-based techniques need to repeat the entire estimation once the window size is changed. Although we have only considered 1D tracking, the generalization of ST to higher dimensions is possible. In the 2D case, for example, a 2D function needs to be fitted to the reference echo signal. The displacement that satisfies (2.1) in both directions needs to be calculated. This would result in a 2D estimate of the displacement for each sample. The use of 2D spline
polynomials to describe the echo signal to estimate 2D motion has been reported in [35].

Figure 2.9: Experimental displacement estimates using the ST algorithm. The straight line shows the displacement applied to the probe and the dotted line shows the average of the measurements over the region of interest (f_s = 40 MHz, f_0 = 5 MHz, Depth = 40 mm, W = 3λ, no window overlap).

2.5 Experimental Performance

In order to study the performance of the ST algorithm with real data, an experiment was conducted. The transducer was mounted on top of a leadscrew stage that provides controlled motion. An experiment was performed on a 40 × 40 × 40 mm³ uniformly elastic phantom. The phantom was prepared using a 100% polyvinyl chloride (PVC) plasticizer (M-F Manufacturing Co., Inc., Fort Worth, TX, USA) with two percent cellulose (Sigma-Aldrich Inc., St Louis, MO, USA) as scatterers [36]. The phantom was placed in a tank full of degassed water at room temperature, 2 mm away from the transducer. In this way the transducer was able to move without deforming the phantom, producing a rigid motion. The probe was moved axially in sinusoidal trajectories with 0.5 mm amplitude and variable frequency. The phantom was imaged to a depth of 40 mm (2 mm water gap plus 38 mm phantom) with the 128-element linear array (0.3 mm lateral spacing, 5 MHz center frequency, digitized at 40 MHz) of a SonixRP ultrasound machine (Ultrasonix Medical Corporation, Richmond, BC).
The RF frames and transducer position were recorded at 40 Hz and dumped into a file for off-line processing (a total of 800 frames). The transducer position was measured through the encoder (as the applied motion) and was synchronized with the RF ultrasound images of the phantom.

Figure 2.10: Displacement estimates using normalized cross correlation plus cosine fit (top) and the ST algorithm (bottom) of a phantom with a hard cylindrical inclusion undergoing small uniaxial compression (f_s = 40 MHz, f_0 = 5 MHz, Depth = 50 mm, W = 3λ, no window overlap).

For time-delay estimation, the RF signals were divided into non-overlapping windows. The coarse displacements were measured using the normalized cross correlation applied to the non-overlapping windows. The displacements of the individual samples within the windows were then estimated using cubic spline-based ST. The estimated time-delays inside a 25 mm × 25 mm (84 lines and 1300 samples) region of interest at each time were averaged and compared with the applied displacement. The experimental results are shown in Fig 2.9. The results show that ST tracks the displacement very accurately in the presence of noise. The average error was measured to be 8 μm over the entire motion. In order to study the ST performance qualitatively, another experiment was performed on a phantom with a hard cylindrical inclusion.
The phantom was prepared using 100% PVC plasticizer for the inclusion, and 66.7% PVC plasticizer with 33.3% plastic softener (M-F Manufacturing Co., Inc., Fort Worth, TX, USA) for the background. Two percent cellulose was used as scatterers in both the inclusion and the background. The phantom was placed in front of the probe. Using the same setup, the phantom was imaged once to capture the reference RF frame (i.e., the pre-compression RF frame). The phantom was then compressed 150 μm uniaxially. The second RF frame (i.e., the post-compression RF frame) was captured while the phantom was under compression. The resulting displacements between the echo signals were measured with both cubic spline-based ST and normalized cross correlation with cosine fit. The results are shown in Fig. 2.10. Compared to the standard technique, which provides as many measurement points per line as there are windows (i.e., 50), ST provides one measurement per sample (i.e., 2500 points per line) and yields a smoother image.

2.6 Conclusions

A new class of time-delay estimators based on the tracking of individual measured echo samples has been presented in this paper. ST algorithms generate time-shift estimates with much higher density than commonly used window-based methods. Simulation results show that these algorithms outperform conventional window-based time-delay estimators in terms of bias and standard deviation when applied to high-SNR echo signals. Simulation results also show that ST algorithms have higher resolution and sensitivity when used as strain estimators than commonly used strain estimation algorithms, including the recently introduced spline-based continuous time-delay estimator. In addition to the ST algorithms, a new spline-based zero-crossings tracking algorithm, based on the previously introduced zero-crossings tracking algorithm, has also been presented.
Simulation results show that the new zero-crossings tracking algorithm significantly outperforms the original linear-interpolation-based ZCT, and that the proposed ZCT has performance similar to that of the presented ST algorithm. The proposed algorithms have potential applications in medical ultrasound, including fine tissue motion estimation, strain estimation, elasticity estimation, and acoustic radiation force impulse imaging.

References

[1] C. Kasai, K. Namekawa, A. Koyano, and R. Omoto, "Real-time two-dimensional blood flow imaging using an autocorrelation technique," IEEE Transactions on Sonics and Ultrasonics, vol. 32, pp. 458–464, 1985.

[2] T. Loupas, R. Peterson, and R. Gill, "Experimental evaluation of velocity and power estimation for ultrasound blood flow imaging, by means of a two-dimensional autocorrelation approach," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 42, pp. 689–699, Jul 1995.

[3] H. Torp, K. Kristoffersen, and B. Angelsen, "Autocorrelation techniques in color flow imaging: signal model and statistical properties of the autocorrelation estimates," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 41, pp. 604–612, 1994.

[4] S. Srinivasan and J. Ophir, "A zero-crossing strain estimator in elastography," Ultrasound in Medicine and Biology, vol. 29, pp. 227–238, 2003.

[5] J. Ophir, I. Cespedes, H. Ponnekanti, Y. Yazdi, and X. Li, "Elastography: a quantitative method for imaging the elasticity of biological tissues," Ultrasonic Imaging, vol. 13, pp. 111–134, April 1991.

[6] A. Heimdal, A. Stoylen, H. Torp, and T.
Skjaerpe, "Real-time strain rate imaging of the left ventricle by ultrasound," Journal of the American Society of Echocardiography, vol. 11, pp. 1013–1019, Nov 1998.

[7] L. Bohs, B. Friemel, and G. Trahey, "Experimental velocity profiles and volumetric flow via two-dimensional speckle tracking," Ultrasound in Medicine and Biology, vol. 21, pp. 885–898, 1995.

[8] M. Lubinski, S. Emelianov, and M. O'Donnell, "Speckle tracking methods for ultrasonic elasticity imaging using short-time correlation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 46, pp. 82–96, January 1999.

[9] K. Nightingale, M. Palmeri, R. Nightingale, and G. Trahey, "On the feasibility of remote palpation using acoustic radiation force," Journal of the Acoustical Society of America, vol. 110, pp. 625–634, July 2001.

[10] W. Walker, F. Fernandez, and L. Negron, "A method of imaging viscoelastic parameters with acoustic radiation force," Physics in Medicine and Biology, vol. 45, pp. 1437–1447, 2000.

[11] M. Fatemi and J. Greenleaf, "Probing the dynamics of tissue at low frequencies with the radiation force of ultrasound," Physics in Medicine and Biology, vol. 45, pp. 1449–1464, June 2000.

[12] F. Viola and W. Walker, "A comparison of the performance of time-delay estimators in medical ultrasound," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 50, pp. 392–401, April 2003.

[13] G. Pinton, J. Dahl, and G.
Trahey, "Rapid tracking of small displacements with ultrasound," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 53, pp. 1103–1117, June 2006.

[14] I. Hein and W. O'Brien, "Current time-domain methods for assessing tissue motion by analysis from reflected ultrasound echoes: a review," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 40, pp. 84–102, March 1993.

[15] G. Pinton and G. Trahey, "Continuous delay estimation with polynomial splines," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 53, pp. 2026–2035, 2006.

[16] M. O'Donnell, A. Skovoroda, B. Shapo, and S. Emelianov, "Internal displacement and strain imaging using ultrasonic speckle tracking," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 41, pp. 314–325, May 1994.

[17] A. Pesavento, C. Perrey, M. Krueger, and H. Ermert, "A time-efficient and accurate strain estimation concept for ultrasonic elastography using iterative phase zero estimation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 46, pp. 1057–1067, 1999.

[18] G. Jacovitti and G. Scarano, "Discrete time techniques for time delay estimation," IEEE Transactions on Signal Processing, vol. 41, pp. 525–533, Feb 1993.

[19] S. Langeland, J. D'hooge, H. Torp, B. Bijnens, and P. Suetens, "A simulation study on the performance of different estimators for two-dimensional velocity estimation," in Proceedings of the IEEE Ultrasonics Symposium, vol. 2, 8–11 Oct 2002, pp.
1859–1862.

[20] I. Cespedes, Y. Huang, J. Ophir, and S. Spratt, "Methods for the estimation of subsample time-delays of digitized echo signals," Ultrasonic Imaging, vol. 17, pp. 142–171, 1995.

[21] S. Foster, P. Embree, and W. O'Brien, "Flow velocity profile via time-domain correlation: error analysis and computer simulation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 37, pp. 164–175, May 1990.

[22] B. Geiman, L. Bohs, M. Anderson, S. Breit, and G. Trahey, "A comparison of algorithms for tracking sub-pixel speckle motion," in Proceedings of the IEEE Ultrasonics Symposium, vol. 2, 5–8 Oct 1997, pp. 1239–1242.

[23] P. de Jong, T. Arts, A. Hoeks, and R. Reneman, "Determination of tissue motion velocity by correlation interpolation of pulsed ultrasonic echo signals," Ultrasonic Imaging, vol. 12, pp. 84–98, 1990.

[24] B. Geiman, L. Bohs, M. Anderson, S. Breit, and G. Trahey, "A novel interpolation strategy for estimating subsample speckle motion," Physics in Medicine and Biology, vol. 45, pp. 1541–1552, 2000.

[25] F. Viola and W. Walker, "A spline-based algorithm for continuous time-delay estimation using sampled data," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 52, pp. 80–93, January 2005.

[26] T. Varghese, J. Ophir, E. Konofagou, F. Kallel, and R. Righetti, "Tradeoffs in elastographic imaging," Ultrasonic Imaging, vol. 23, pp. 216–248, October 2001.

[27] S. Srinivasan, R. Righetti, and J.
Ophir, "Trade-offs between the axial resolution and the signal-to-noise ratio in elastography," Ultrasound in Medicine and Biology, vol. 29, pp. 847–866, 2003.

[28] W. Walker and G. Trahey, "A fundamental limit on delay estimation using partially correlated speckle signals," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 42, pp. 301–308, 1995.

[29] H. Shi and T. Varghese, "Two-dimensional multi-level strain estimation for discontinuous tissue," Physics in Medicine and Biology, vol. 52, pp. 389–401, Nov 2007.

[30] C. Sumi, "Fine elasticity imaging utilizing the iterative RF-echo phase matching method," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 46, pp. 158–166, Jan 1999.

[31] H. Eskandari, S. Salcudean, and R. Rohling, "Tissue strain imaging using a wavelet transform-based peak search algorithm," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 54, pp. 1118–1130, June 2007.

[32] M. Unser, "Splines: a perfect fit for signal and image processing," IEEE Signal Processing Magazine, vol. 16, pp. 22–38, Nov 1999.

[33] R. Zahiri-Azar and S. Salcudean, "Motion estimation in ultrasound images using time domain cross correlation with prior estimates," IEEE Transactions on Biomedical Engineering, vol. 53, pp. 1990–2000, 2006.

[34] F. Viola and W. Walker, "A comparison between spline-based and phase-domain time-delay estimators," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 53, pp. 515–517, March 2006.

[35] W. Walker, D. Guenther, R. Coe, and F.
Viola, "A novel spline-based algorithm for multidimensional displacement and strain estimation," in Proceedings of the IEEE Ultrasonics Symposium, Oct 2006, pp. 1221–1225.

[36] S. DiMaio and S. Salcudean, "Needle insertion modelling and simulation," IEEE Transactions on Robotics and Automation: Special Issue on Medical Robotics, vol. 19, pp. 864–875, 2003.

Chapter 3

2D Estimation of Sub-Sample Motion from Digitized Ultrasound Echo Signals²

3.1 Introduction

Motion estimation in sequences of ultrasound echo signals is essential for a wide range of modern ultrasound-based signal processing applications. These applications include blood flow estimation, tissue velocity estimation [1–3], strain and strain rate imaging [4–6], tissue elasticity imaging [7], vibro-elastography [8, 9], poro-elastography [10], myocardial imaging [11], tumor classification [12], and acoustic radiation force impulse imaging [13–15]. Because of this central role, the accuracy, precision, and computational cost of motion estimation are of critical importance. Motion estimators measure the displacement of the backscattered signals with respect to the transducer. They are typically classified by the type of signal on which they operate (RF, envelope, or in-phase and quadrature I/Q) and by the domain in which they operate (time, phase, or frequency). These estimators have been studied and compared extensively in the literature [16–20]. Phase-shift estimators were initially used for blood flow measurement [1, 2]. Later, the same techniques were used to estimate tissue motion. Phase-shift estimators find the average phase shift over a number of samples within a window with respect to the nominal or estimated central frequency of the transmitted pulse.
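The phase-shift principle just described can be sketched with a minimal autocorrelation-style estimator in the spirit of [1]. The narrowband signal model, envelope shape, and parameter values below are assumptions made for illustration only; note that this estimator is unambiguous only for delays within half a carrier period.

```python
import numpy as np

def phase_shift_delay(z1, z2, f0):
    """Average phase shift between two baseband I/Q windows converted to a
    time delay (Kasai-type autocorrelation estimator). Valid only while the
    phase stays within +/- pi, i.e. delays under half a carrier period."""
    phi = np.angle(np.sum(z2 * np.conj(z1)))
    return -phi / (2 * np.pi * f0)

# synthetic narrowband example (assumed parameters, not from the thesis)
f0, fs = 5e6, 40e6
t = np.arange(256) / fs
env = np.exp(-((t - t.mean()) ** 2) / (2 * (2e-6) ** 2))  # Gaussian envelope
tau = 20e-9                                               # true delay: 20 ns
z1 = env * np.exp(2j * np.pi * f0 * t)
z2 = z1 * np.exp(-2j * np.pi * f0 * tau)                  # narrowband delay model
print(phase_shift_delay(z1, z2, f0))                      # recovers ~2e-08 s
```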
Complex cross correlation of the RF echo signals [21, 22] or of complex-valued Doppler signals [1, 2] is typically used in these techniques. The extension of phase-shift estimation to 2D has also been studied in the literature [23–27]. In another approach, frequency-shift estimators have been introduced [20, 28, 29]; however, their extension to 2D has yet to be studied. Pattern matching techniques were initially used in video and image processing. Later, the same techniques were borrowed in the field of ultrasound to estimate motion/time-shift from sampled ultrasound radio-frequency (RF) echo signals [4, 26, 30, 31]. Estimation of motion from envelope signals and from combinations of RF and envelope signals has also been reported [32, 33]. These techniques typically consist of identifying the maximum/minimum of a pattern matching function. The shape of the signal within a specific window in the reference echo signal is set to be the pattern, and a matching algorithm is used to find the best match in the delayed echo signal.

² A version of this chapter has been peer reviewed and published in the proceedings of the International Conference of the IEEE EMBS. A version of this chapter has also been submitted for publication: Reza Zahiri-Azar, Orcun Goksel, and Septimiu E. Salcudean, "A Comparative Study of 2D Pattern Matching Function Interpolation Methods using Ultrasound Echo Signals with Application to Real-Time Elastography".

Figure 3.1: Common techniques to reduce the error of discrete pattern matching functions. Dashed boxes show optional steps. (a) Basic pattern matching function. (b) Pattern matching function with echo signal up-sampling, where α and β are the up-sampling factors in the axial and the lateral directions.
(c) Continuous pattern matching function generated by curve or polynomial fitting to one or both of the echo signals. (d) Pattern matching function interpolation, which can be implemented without (d.1) or with (d.2) signal up-sampling.

Several pattern matching techniques have been used in the field of ultrasound, such as normalized cross correlation, sum of squared differences, and sum of absolute differences, each offering trade-offs between complexity and accuracy [16, 34, 35]. Extensions of these techniques to 2D (or even 3D) have also been suggested [26, 31, 33, 36–40]. The estimation error of pattern matching techniques can be as large as half the sample spacing, which results in significant errors when accurate and precise tracking of the motion is the goal. This error becomes even more significant in the lateral (or the elevational) direction, where the sample spacing is an order of magnitude larger than in the axial direction. Several techniques have been suggested in the literature to reduce the error introduced by finite sampling intervals. These techniques can be categorized as: (i) echo signal up-sampling [19, 37, 41, 42], (ii) interpolation of the echo signals [19, 38, 41, 43], and (iii) interpolation of the pattern matching function [44–47]. These approaches are represented schematically in Fig. 3.1. Up-sampling the echo signal (i) reduces the error by the up-sampling factor (Fig. 3.1(b)), but can increase the computational cost significantly [11, 19, 37, 42]. This is because, in addition to the up-sampling operations, the pattern matching function must be computed at the higher sampling rate.
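The up-sampling step itself is inexpensive; as noted above, the dominant cost comes from evaluating the matching function on the denser grid. As one concrete (assumed) realization of step (i), band-limited up-sampling can be done by zero-padding the spectrum:

```python
import numpy as np

def upsample(x, k):
    """Band-limited k-fold up-sampling via FFT zero-padding.
    Assumes x is real and effectively band-limited (periodic model);
    the factor k rescales the inverse transform to preserve amplitude."""
    X = np.fft.rfft(x)
    return np.fft.irfft(X, k * len(x)) * k

n = np.arange(64)
x = np.cos(2 * np.pi * 3 * n / 64)   # band-limited test tone
y = upsample(x, 4)
print(np.allclose(y[::4], x))         # original samples preserved: True
```

For a band-limited periodic signal this interpolation is exact at the original sample positions, which is why the residual error of the subsequent peak search shrinks by the up-sampling factor.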
Curve or polynomial fitting to the echo signals (ii) results in a continuous representation of the echo signal and subsequently in a continuous pattern matching function [19, 38, 41, 43]. The location of the best match can then be identified from the continuous pattern matching function without the sample spacing limitation (Fig. 3.1(c)). It has been shown that these techniques outperform other algorithms, but, similarly to up-sampling methods, they can be computationally demanding [43, 48]. Pattern matching interpolation techniques (iii) are computationally more efficient than up-sampling or continuous representations (Fig. 3.1(d)). Therefore, even though they introduce some bias into the estimation process, they are widely used in motion estimation. These techniques are the topic of this work. A number of 1D pattern matching interpolation methods, such as parabolic fitting [47], spline fitting [46], grid slope [31, 36], cosine fitting [45], zero padding, and reconstructive methods [44], have been introduced and thoroughly investigated in the literature [16, 41, 43]. Applying the same 1D interpolation techniques independently in each direction (2-1D) has also been widely used to estimate sub-sample motion in two (or even three) dimensions [31, 36, 38, 49–51]. Iterative 1D interpolation [52, 53] and 2D interpolation techniques [53–55] have also been attempted. However, there is no comprehensive study in the literature that quantifies and compares the performance of these various techniques. In this work, the performance of these interpolation methods is studied and compared on ultrasound radio-frequency and envelope data. Both simulation and experimental results are used to produce a comparative performance assessment.
Furthermore, due to their small computational cost, pattern matching interpolation methods facilitate the estimation of sub-sample motion in real-time. By implementing these interpolation methods, we extend our previously introduced 1D motion tracking algorithm [56] to 2D, and report an implementation of motion tracking software that estimates both axial and lateral motions with sub-sample accuracy in real-time. The paper is structured as follows: Section 3.2 presents the interpolation techniques. Sections 3.3 and 3.4 present the performance comparison between the different methods using both simulation and experimental results. Section 3.5 presents an implementation of the interpolation algorithms for 2D motion tracking in real-time, followed by a discussion in Section 3.6. Section 3.7 presents conclusions along with avenues for future research. Throughout this work, it is assumed that the echo signals are 2D radio-frequency (RF) signals. In Section 3.3, the performance of the sub-sample estimation methods is also studied for 2D envelope data. Without loss of generality, it is assumed that the pattern matching function optimization involves maximization of the normalized cross correlation. A detailed description of a pattern matching function is provided in Appendix C. The pattern matching function values will be referred to as the matching coefficients.

3.2 Methods

Let R[u, v] be the discrete 2D pattern matching function between the windowed reference and the displaced echo signal over a predefined search region.
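For concreteness, a direct (unoptimized) implementation of such an R[u, v] based on normalized cross correlation might look like the sketch below; the window size, search radii, and synthetic data are illustrative assumptions, not the settings used later in the chapter.

```python
import numpy as np

def ncc_matrix(ref_win, post, top, left, search_a, search_l):
    """Discrete 2D pattern matching function R[u, v]: NCC between a
    reference window and candidate windows of the displaced frame, for
    axial lags u and lateral lags v around the window origin (top, left)."""
    wa, wl = ref_win.shape
    r0 = ref_win - ref_win.mean()
    n0 = np.linalg.norm(r0)
    R = np.empty((2 * search_a + 1, 2 * search_l + 1))
    for u in range(-search_a, search_a + 1):
        for v in range(-search_l, search_l + 1):
            cand = post[top + u:top + u + wa, left + v:left + v + wl]
            c0 = cand - cand.mean()
            R[u + search_a, v + search_l] = (r0 * c0).sum() / (n0 * np.linalg.norm(c0) + 1e-12)
    return R

# a rigid (2 axial, 1 lateral) sample shift is recovered by the peak location
rng = np.random.default_rng(1)
frame = rng.standard_normal((128, 32))
post = np.roll(frame, (2, 1), axis=(0, 1))
R = ncc_matrix(frame[40:56, 10:14], post, top=40, left=10, search_a=4, search_l=3)
u, v = np.unravel_index(np.argmax(R), R.shape)
print(u - 4, v - 3)   # coarse integer lags: 2 1
```

The integer location of the peak of this matrix is exactly the coarse estimate defined next; the matching coefficients around the peak are what the sub-sample interpolation methods operate on.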
Given R[u, v], the coarse axial (d_a) and lateral (d_l) estimates of the motion in the axial (x) and the lateral (y) directions are obtained by locating the maximum of the discrete 2D pattern matching function R[u, v]. The estimates d_a and d_l are given by

[d_a, d_l] = arg max_{u,v} R[u, v].    (3.1)

Figure 3.2: Different schemes for sub-sample displacement estimation in 2D, using coefficients of the cross correlation function in the neighborhood of its maximum: (a) independent 1D estimation; (b) iterative 1D approach; (c) 2D joint estimation. For the first two techniques ((a) and (b)), only 1D interpolation is required, while for the last method (c), 2D interpolation is necessary. Solid circles show the actual matching coefficients and squares show the interpolated matching coefficients. Ellipses show equal-value contours of the underlying correlation function.

Following the coarse estimation of the motion to within the sampling accuracy, the following methods are used to estimate the sub-sample displacements (δ_a, δ_l) in the axial and the lateral directions, respectively, using the matching coefficients at neighboring lags (Fig.
3.1(d)).

3.2.1 Independent 1D Method

Referring to Fig. 3.2(a), let f_a(x) be an axial 1-D interpolation function passing through the 2D pattern matching function at [d_a, d_l] and its axial lags (i.e., R[d_a + i, d_l], where i ∈ {0, ±1, ±2, ..., ±M_a} and M_a is the fitting radius in the axial direction), and let f_l(y) be a lateral 1-D interpolation function passing through the 2D pattern matching function at [d_a, d_l] and its lateral lags (i.e., R[d_a, d_l + j], where j ∈ {0, ±1, ±2, ..., ±M_l} and M_l is the fitting radius in the lateral direction).
The sub-sample motion estimates δ_a, δ_l at (d_a, d_l) are computed from their corresponding axial and lateral interpolation functions as follows:

δ_a = arg max_x f_a(x),    δ_l = arg max_y f_l(y).    (3.2)

These methods are the most commonly used techniques for estimating sub-sample motion in 2D [38, 49–51]. For the purpose of this work, (i) three-point 1D parabola fitting [47], where the axial and the lateral sub-sample shifts are estimated from f_a(x) = a_a + b_a x + c_a x^2 and f_l(y) = a_l + b_l y + c_l y^2, and (ii) three-point cosine fitting [45], where the axial and the lateral sub-sample shifts are estimated from f_a(x) = A_a cos(α_a x + β_a) and f_l(y) = A_l cos(α_l y + β_l), have been implemented due to their relative computational simplicity [41, 44]. Detailed descriptions of these common techniques are provided in Appendix D. The independent 1D methods using three-point function fitting (M_a = M_l = 1) require matching coefficients to be available at five lags (i.e., the maximum in the center and the two immediate neighboring lags in each direction).

3.2.2 Grid Slope

This method was proposed in [31, 36]. It has also been studied in [41] for estimating motion in the axial direction only. To apply this technique, it is necessary to compute the pattern matching function between the reference and displaced signals, R, and the pattern matching function between the reference signal and itself, R_0. Similarly to the above-mentioned techniques, the grid slope technique estimates the axial and lateral sub-sample motion independently. Thus, it can be classified as an independent 1D interpolation method.

3.2.3 Iterative 1D Method

In an iterative approach, the sub-sample accuracy achieved in one direction can be used to estimate the sub-sample displacement in the other direction [52, 53]. Referring to Fig. 3.2(b), let f_a(x, j), x ∈ ℝ and j ∈ {0, ±1, ±2, ..., ±M_l}, be a set of functions such that for every
value of j, f_a(x, j) is an axial 1-D interpolation function passing through R[d_a + i, d_l + j], i ∈ {0, ±1, ±2, ..., ±M_a}; and let f_l(y, i), y ∈ ℝ and i ∈ {0, ±1, ±2, ..., ±M_a}, be a set of functions such that for every value of i, f_l(y, i) is a lateral 1-D interpolation function passing through R[d_a + i, d_l + j], j ∈ {0, ±1, ±2, ..., ±M_l}. The iterative 1D method can be formulated as follows:

1. k = 0, δ_l^0 = 0,
2. g_a^k(x) := 1-D interpolating function passing through f_l(δ_l^k, i), i ∈ {0, ±1, ±2, ..., ±M_a},
3. δ_a^k = arg max_x g_a^k(x),
4. g_l^k(y) := 1-D interpolating function passing through f_a(δ_a^k, j), j ∈ {0, ±1, ±2, ..., ±M_l},
5. δ_l^k = arg max_y g_l^k(y),
6. if the stopping criterion is not met, k = k + 1, return to (2),

where k is the index of iteration. In the algorithm presented above, f_a and f_l are the interpolation functions passing through actual matching coefficients, and g_a^k, g_l^k are the interpolation functions passing through interpolated matching coefficients.
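A minimal numpy sketch of both the three-point fits and the iterative scheme above follows. The closed-form peak offsets used here are the standard three-point formulas for the parabola and cosine fits (the chapter's own derivations are in Appendix D, not shown); parabolic interpolation and a fixed iteration count (standing in for the stopping criterion) are assumptions of this sketch.

```python
import numpy as np

def parabola_peak(rm, r0, rp):
    """Peak offset of the parabola through (-1, rm), (0, r0), (+1, rp):
    delta = (rm - rp) / (2 (rm - 2 r0 + rp))."""
    return (rm - rp) / (2.0 * (rm - 2.0 * r0 + rp))

def cosine_peak(rm, r0, rp):
    """Three-point cosine fit A cos(alpha t + beta):
    alpha = arccos((rm + rp) / (2 r0)), delta = -beta / alpha."""
    alpha = np.arccos((rm + rp) / (2.0 * r0))
    beta = np.arctan2(rm - rp, 2.0 * r0 * np.sin(alpha))
    return -beta / alpha

def parabola_eval(rm, r0, rp, t):
    """Value at offset t of the parabola through the three lags."""
    return r0 + 0.5 * (rp - rm) * t + 0.5 * (rm - 2.0 * r0 + rp) * t * t

def iterative_1d(R, n_iter=3):
    """Steps 1-6 on a 3x3 neighborhood R (M_a = M_l = 1, axial = rows),
    with a fixed iteration count as the stopping criterion."""
    dl = 0.0                                                    # step 1
    for _ in range(n_iter):
        g_a = [parabola_eval(*R[i, :], dl) for i in range(3)]   # step 2
        da = parabola_peak(*g_a)                                # step 3
        g_l = [parabola_eval(*R[:, j], da) for j in range(3)]   # step 4
        dl = parabola_peak(*g_l)                                # step 5
    return da, dl

# quadratic correlation surface with a cross term, peak at (0.2, 0.3)
ii, jj = np.meshgrid([-1, 0, 1], [-1, 0, 1], indexing="ij")
R = 1 - (ii - 0.2) ** 2 - (jj - 0.3) ** 2 - 0.5 * (ii - 0.2) * (jj - 0.3)
print(iterative_1d(R))    # converges to ~(0.200, 0.300)
```

With a separable surface the first pass already matches the independent 1D method; the cross term is what makes the later iterations change the answer.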
It should be noted that in the first iteration $g_a^0(x) = f_a(x, 0)$, since $\delta_l^0 = 0$ and the interpolated values are the same as the discrete values. In contrast to independent 1D methods, iterative 1D methods use the estimated sub-sample motion in one direction to calculate the sub-sample motion in the other direction. Because the iterative 1D methods use the interpolated matching coefficients at the estimated sub-sample locations, they require more matching coefficients than the independent 1D methods. As with the independent 1D methods, the same three-point parabola fitting, $f_a(x) = a_a + b_a x + c_a x^2$ and $f_l(y) = a_l + b_l y + c_l y^2$, and cosine fitting, $f_a(x) = A_a \cos(\alpha_a x + \beta_a)$ and $f_l(y) = A_l \cos(\alpha_l y + \beta_l)$, have been implemented.
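As a concrete illustration, the iterative 1D scheme with three-point parabola fitting ($M_a = M_l = 1$) can be sketched as follows. This is a minimal Python/NumPy sketch rather than the MATLAB implementation used in this work; the function names and the 3 × 3 input `R` (matching coefficients centered on the discrete peak) are illustrative assumptions:

```python
import numpy as np

def parabola_vertex(v):
    # Vertex of the parabola through (-1, v[0]), (0, v[1]), (1, v[2]).
    denom = v[0] - 2.0 * v[1] + v[2]
    return 0.0 if denom == 0 else 0.5 * (v[0] - v[2]) / denom

def parabola_eval(v, t):
    # Evaluate the same three-point parabola at sub-sample offset t.
    b = 0.5 * (v[2] - v[0])
    c = 0.5 * (v[0] - 2.0 * v[1] + v[2])
    return v[1] + b * t + c * t * t

def iterative_1d(R, tol=1e-5, max_iter=50):
    """R: 3x3 matching coefficients centered on the discrete peak.
    Rows are axial lags (-1, 0, 1); columns are lateral lags."""
    dl = 0.0                                   # step 1: delta_l^0 = 0
    prev = np.zeros(2)
    for _ in range(max_iter):
        # steps 2-3: lateral parabolas f_l(y, i) evaluated at dl,
        # then an axial parabola through the interpolated values
        da = parabola_vertex([parabola_eval(R[i, :], dl) for i in range(3)])
        # steps 4-5: axial parabolas f_a(x, j) evaluated at da,
        # then a lateral parabola through the interpolated values
        dl = parabola_vertex([parabola_eval(R[:, j], da) for j in range(3)])
        cur = np.array([da, dl])
        # step 6: relative-change stopping criterion
        if np.linalg.norm(cur - prev) < tol * max(np.linalg.norm(cur), 1e-12):
            break
        prev = cur
    return da, dl
```

On a separable matching surface the loop converges in a single pass; it is the cross term (an $xy$ dependence) that makes the iterations necessary, and the coordinate-wise updates then converge geometrically.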
The iterative 1D method using three-point function fitting ($M_a = M_l = 1$) requires matching coefficients to be available at nine lags (i.e., the maximum in the center and its 8 neighboring lags).

3.2.4 2D Method

In a more general approach, a 2D function can be fitted to the discrete matching coefficients in both the axial and the lateral directions [53, 54]. Joint estimation with sub-sample accuracy can then be achieved in both directions by finding the peak of the fitted function analytically. Referring to Fig. 3.2(c), let $f(x, y)$ be a 2-D interpolation function passing through the 2D pattern matching function at $[d_a, d_l]$ and its neighbors (i.e., $R[d_a + i, d_l + j]$, where $i \in \{0, \pm 1, \pm 2, \ldots, \pm M_a\}$ and $j \in \{0, \pm 1, \pm 2, \ldots, \pm M_l\}$). The sub-sample motion estimates $\delta_a, \delta_l$ at $(d_a, d_l)$ are computed jointly from the corresponding 2D interpolation function as follows:

$$[\delta_a, \delta_l] = \arg\max_{x, y} f(x, y). \qquad (3.3)$$

The following 2D polynomial fits are implemented in this work:

$$f_5(x, y) = a_1 + a_2 x + a_3 y + a_4 x^2 + a_5 y^2, \qquad (3.4)$$

$$f_6(x, y) = a_1 + a_2 x + a_3 y + a_4 xy + a_5 x^2 + a_6 y^2, \qquad (3.5)$$

$$f_9(x, y) = a_1 + a_2 x + a_3 y + a_4 xy + a_5 x^2 + a_6 y^2 + a_7 x y^2 + a_8 x^2 y + a_9 x^2 y^2, \qquad (3.6)$$

$$f_{16}(x, y) = a_1 + a_2 x + a_3 y + a_4 xy + a_5 x^2 + a_6 y^2 + a_7 x y^2 + a_8 x^2 y + a_9 x^2 y^2 + a_{10} x^3 + a_{11} x^3 y + a_{12} x^3 y^2 + a_{13} y^3 + a_{14} x y^3 + a_{15} x^2 y^3 + a_{16} x^3 y^3, \qquad (3.7)$$

$$f_{25}(x, y) = a_1 + a_2 x + a_3 y + a_4 xy + a_5 x^2 + a_6 y^2 + a_7 x y^2 + a_8 x^2 y + a_9 x^2 y^2 + a_{10} x^3 + a_{11} x^3 y + a_{12} x^3 y^2 + a_{13} y^3 + a_{14} x y^3 + a_{15} x^2 y^3 + a_{16} x^3 y^3 + a_{17} x^4 + a_{18} x^4 y + a_{19} x^4 y^2 + a_{20} x^4 y^3 + a_{21} y^4 + a_{22} x y^4 + a_{23} x^2 y^4 + a_{24} x^3 y^4 + a_{25} x^4 y^4. \qquad (3.8)$$

$f_5(x, y)$: This polynomial is used in [57] for tracking single fluorescent particles. The polynomial is fitted to five points of the discrete pattern matching function: the maximum and its four immediate neighbors. Since there is no cross term in this polynomial (i.e., no dependence on $xy$), setting $\partial f_5 / \partial x = a_2 + 2 a_4 x = 0$ and $\partial f_5 / \partial y = a_3 + 2 a_5 y = 0$ gives $x^\ast = -a_2/(2 a_4)$ and $y^\ast = -a_3/(2 a_5)$, each depending on one coordinate only; this 2D joint estimator is therefore equivalent to the two independent 1D parabola interpolations discussed in Section 3.2.1. For this reason, this polynomial is not studied separately.

$f_6(x, y)$: This 2D polynomial with six coefficients is proposed in [55] for spatial shift estimation. The same polynomial has also been used in [58] to reduce the computational cost of modern video codecs. The cross term makes it a non-separable 2D polynomial. Two different implementations of this polynomial are studied in this work. In one approach, the polynomial is fitted to six points of the discrete pattern matching function: the maximum, its four immediate neighbors, and one diagonal neighbor, as proposed in [55].
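As an aside on implementation, the peak of such a fitted polynomial is found analytically from $\nabla f = 0$. The sketch below does this for the paraboloid $f_6$ of (3.5), fitted by least squares to the nine samples around the discrete peak, as in the nine-point variant described below; it is written in Python/NumPy rather than the MATLAB used in this work, and the function name is an illustrative assumption:

```python
import numpy as np

def paraboloid_peak(R):
    """Least-squares fit of f(x,y) = a1 + a2*x + a3*y + a4*x*y
    + a5*x^2 + a6*y^2 to the 3x3 matching coefficients R (centered
    on the discrete peak), then solve grad f = 0 for the peak."""
    lags = (-1.0, 0.0, 1.0)
    A = [[1.0, x, y, x * y, x * x, y * y] for x in lags for y in lags]
    b = [R[i][j] for i in range(3) for j in range(3)]
    a = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]
    # grad f = 0:  [2*a5  a4 ] [dx]   [-a2]
    #              [a4   2*a6] [dy] = [-a3]
    H = np.array([[2.0 * a[4], a[3]], [a[3], 2.0 * a[5]]])
    da, dl = np.linalg.solve(H, [-a[1], -a[2]])
    return da, dl
```

Because the fit is concave at a correlation peak, the 2 × 2 system has a unique solution, which is the joint sub-sample estimate of (3.3).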
The six-point approach is expected to bias the results toward the selection of the sixth matching coefficient. To resolve this issue, in another approach the 2D polynomial is fitted to nine points of the discrete pattern matching function, the maximum and its eight immediate neighbors ($M_a = M_l = 1$), using a least squares fit. This method will be referred to as $\bar{f}_6(x, y)$. Detailed descriptions of this 2D paraboloid fitting and the corresponding maximization are provided in Appendix E.

$f_9(x, y)$: This non-separable 2D polynomial with nine coefficients, which results from multiplying the $[1, x, x^2]$ and $[1, y, y^2]$ terms (quadratic spline), is used in [53] to estimate the sub-sample motion in 2D. The polynomial is fitted to the maximum of the discrete pattern matching function and all its eight neighboring lags. Detailed descriptions of this 2D polynomial fitting and the corresponding maximization are provided in Appendix E.

$f_{16}(x, y)$: This 2D polynomial with sixteen coefficients results from multiplying the $[1, x, x^2, x^3]$ and $[1, y, y^2, y^3]$ terms (cubic spline).
The polynomial is fitted to the maximum of the discrete pattern matching function and all its twenty-four neighboring lags ($M_a = M_l = 2$) using the Spline Toolbox (MathWorks Inc., Natick, MA). A modified version of this polynomial has been used in [38] to generate a continuous representation of the echo signal itself in order to generate a continuous pattern matching function in 2D.

$f_{25}(x, y)$: This 2D polynomial with twenty-five coefficients results from multiplying the $[1, x, x^2, x^3, x^4]$ and $[1, y, y^2, y^3, y^4]$ terms (quartic spline) [59]. The polynomial is fitted to the maximum of the discrete pattern matching function and all its twenty-four neighboring lags ($M_a = M_l = 2$). A 1D version of this polynomial has also been used in [43] to generate a continuous representation of the echo signal itself in order to estimate the motion of individual samples in one direction.

3.3 Simulations

3.3.1 Simulation Setup

A series of computer simulations were performed to study the performance of all the pattern matching function interpolation methods. All calculations were performed in MATLAB (MathWorks Inc., Natick, MA). The simulation assumed a 5 MHz center frequency, a 40 MHz sampling frequency, a line spacing of 300 μm, and a sample spacing of ≈ 20 μm.
Unless mentioned otherwise, the window size for the pattern matching function is set to be approximately 2 × 2 mm² (i.e., 104 samples axially and 7 samples laterally). The size of the search area for the pattern matching function is set to be approximately 3 × 3 mm² (i.e., 156 samples axially and 11 samples laterally). In order to have an accurate estimation of the cross correlation at the edges of the search region, the actual data from the echo signals were used instead of zero-padding. The stopping criterion for the iterative 1D method was set to be

$$\frac{\left\| [\delta_a^k, \delta_l^k] - [\delta_a^{k-1}, \delta_l^{k-1}] \right\|}{\left\| [\delta_a^k, \delta_l^k] \right\|} < 10^{-5},$$

where $\| \cdot \|$ is the Euclidean norm. In all the simulations this criterion was met in less than three iterations. A 50 × 60 × 10 mm³ virtual phantom was simulated by randomly allocating scatterers with random scattering amplitudes. Field II [60, 61] was then used to simulate the ultrasound radio frequency echo signals (RF frames) and envelope signals from these scatterers. A linear probe was modeled with a 5 MHz center frequency and a 40 MHz sampling rate. A linear scan of the phantom was done with a 128-element transducer, using 64 active elements.
A single transmit focus was placed at 30 mm, and dynamic receive focusing was employed to generate the RF lines. 128 RF lines were simulated along a width of 38 mm. The number of scatterers per smallest sampling volume was set to 10 to ensure that the speckle of the ultrasound images is fully developed. A generated sonogram is shown in Fig. 3.3.

Figure 3.3: Scatterers distributions (a) (only a small fraction of all scatterers are plotted for better visualization) and a Field II simulated sonogram (b). The region of interest (ROI) that is used for motion tracking validations is also shown on the sonogram. The ROI was centered around the transmit focus, and data from both the near-field and the far-field were removed from the study.

Both sub-sample rigid and non-rigid motions were simulated. Simulations were performed by applying the estimation algorithms to simulated RF frames. For all the data, the normalized cross correlation was used as a pattern matching function to find the coarse motion within sampling accuracy. The sub-sample motion estimators from Section 3.2 were then applied to find the 2D sub-sample motion.

3.3.2 Rigid Motion

The transducer was displaced on a 2D grid with sub-sample distances (i.e., smaller than the axial and the lateral RF signal sample spacing) in both the axial and the lateral directions to simulate rigid motions on a 2D grid using Field II. This way the motion grid was simulated without using any interpolation. The RF frames corresponding to each of the displaced scatterer configurations were then simulated.
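For all of these experiments, the coarse, integer-sample motion comes from the peak of the normalized cross correlation, to which the sub-sample estimators are then applied. A minimal Python/NumPy sketch of that coarse search follows (the thesis implementation is in MATLAB, and the array sizes here are illustrative, not the 104 × 7 window and 156 × 11 search region of the simulations):

```python
import numpy as np

def ncc_map(ref_win, search):
    """Normalized cross correlation of a reference window against every
    integer (axial, lateral) lag of a larger search region."""
    wa, wl = ref_win.shape
    r = ref_win - ref_win.mean()
    rn = np.sqrt((r * r).sum())
    out_a = search.shape[0] - wa + 1
    out_l = search.shape[1] - wl + 1
    out = np.zeros((out_a, out_l))
    for i in range(out_a):
        for j in range(out_l):
            c = search[i:i + wa, j:j + wl]
            c = c - c.mean()
            cn = np.sqrt((c * c).sum())
            if rn > 0 and cn > 0:
                out[i, j] = (r * c).sum() / (rn * cn)
    return out

def coarse_motion(ref_win, search):
    # Integer-sample displacement = location of the correlation peak.
    m = ncc_map(ref_win, search)
    return np.unravel_index(np.argmax(m), m.shape)
```

The discrete peak found this way, together with its neighboring matching coefficients, is exactly the input assumed by the interpolation methods of Section 3.2.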
For both the axial and the lateral displacements, a step size of 1/10 of the sample spacing along the corresponding axis was chosen (i.e., 2 μm axially and 30 μm laterally), forming a grid of 11 × 11 = 121 distinct displacement configurations. These simulated RF frames were then used in conjunction with the RF frame in the center, as a reference frame, to estimate the motion. This resulted in a grid spanning ±0.5 of a sample in both the axial and the lateral directions. Similarly to [38, 41, 43], these estimated motions were used to study and compare the performance of all the estimators in terms of their bias and standard deviation as a function of sub-sample shift in both the axial and the lateral directions, i.e., their accuracy and precision. The simulation time for 121 frames, where each frame contains 128 RF lines, is more than 5000 hours on a single-core computer using Field II. Four multi-core computers were therefore used in parallel to simulate these RF lines.

Figure 3.4: (a) FEM mesh, deformation constraints, and scatterer distributions; (b) an example of the displacements of scatterers during a sample compression (only a small fraction of all scatterers are plotted for better visualization).

3.3.3 Deformation

To study and compare the performance of all the estimators in the presence of deformations, the virtual phantom mentioned above was meshed using the Finite Element Method (FEM). The compression is modeled by fixing the top side, which touches the probe surface, and moving the bottom side upwards. This is shown in Fig. 3.4. The displacements of both the top and the bottom sides are constrained vertically, but not horizontally. This is done considering that the lubricated tissue surface in reality is free to slide on the probe.
Note that throughout this work, the vertical and horizontal axes are referred to as the axial and lateral directions, respectively, considering the frame of reference of the probe. A symmetrical mesh is used, with four times finer resolution than shown in Fig. 3.4. The mesh contained a square inclusion of 30 mm with a Young's modulus twice that of the background. The Poisson's ratio is set to 0.49 for all the elements. A detailed description of the FEM used in this study is provided in [62]. A compression of 1% of the depth was applied to the phantom, resulting in a 0.6 mm maximum displacement. The FEM-computed motion field was then applied to the nominal scatterer positions in order to find their post-deformation positions. The pre- and post-deformation RF echo signals were then generated from the pre- and post-deformation scatterer positions using the same Field II imaging parameters mentioned above.

3.3.4 Simulation Results

The bias and standard deviation of all the methods are shown in Fig. 3.5. To estimate the bias and standard deviations from independent speckle patterns, no window overlap was employed in estimating the rigid motions. This resulted in more than 1000 estimations for each sub-sample motion on the grid. For better visualization of the accuracy, the axial and the lateral bias for each sub-sample shift on the grid are shown as a vector: error vectors connecting the true displacements to the mean estimated displacements illustrate the directional bias for each of the 121 simulations. In order to show the precision in both directions, an ellipse representation is also used to show the standard deviations for each of the simulations.
The ellipses are centered on the mean displacement estimates, and the radius of each ellipse in a given direction is set to the standard deviation of the motion estimates in that direction. Fig. 3.5 shows that the independent 1D interpolation methods, including the grid slope method, perform well only if the displacement has a purely axial or purely lateral component (i.e., $\delta_a = 0$ or $\delta_l = 0$). As expected, these methods exhibit large biases (i.e., large vectors) and standard deviations (i.e., stretched ellipses) when the displacements have both axial and lateral components. Both the iterative 1D and the 2D interpolation methods significantly outperform the commonly used independent 1D methods in terms of both bias and standard deviation (i.e., smaller vectors and ellipses). As expected, the $f_6(x, y)$ fit proposed in [55] does not generate symmetric results, and its performance depends on the selection of the sixth matching coefficient. This problem is alleviated when the nine-point least-squares fit $\bar{f}_6(x, y)$ is employed instead. Fig. 3.5 shows that both the iterative 1D and the 2D methods (i.e., $\bar{f}_6(x, y)$, $f_9(x, y)$, $f_{16}(x, y)$, and $f_{25}(x, y)$) are able to recover the underlying motion from the RF frames. In order to study the results quantitatively, the biases and standard deviations of all the estimators are shown in Fig. 3.6, using an approach similar to the one presented in [38].
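Each bias vector and ellipse in Fig. 3.5 summarizes repeated estimates of the same true shift. A minimal Python/NumPy sketch of that summary (the thesis computations were done in MATLAB; the function and array names are illustrative assumptions):

```python
import numpy as np

def bias_and_std(estimates, true_shift):
    """estimates: (N, 2) array of (axial, lateral) sub-sample estimates
    for one true shift. Returns the bias vector (mean estimate minus
    true shift) and the per-axis standard deviations, i.e. the ellipse
    radii used in the visualization."""
    est = np.asarray(estimates, dtype=float)
    bias = est.mean(axis=0) - np.asarray(true_shift, dtype=float)
    std = est.std(axis=0, ddof=1)
    return bias, std
```

Applied to every point of the 11 × 11 grid, this yields one bias vector and one standard-deviation ellipse per simulated sub-sample shift.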
The first two columns depict the axial bias and standard deviation, while the last two columns depict the lateral bias and standard deviation of the displacement estimates. To simplify the comparison, the bias and standard deviation errors are displayed using the same color bar scale for the axial and lateral performances, for all the methods. It should be noted that the bias is signed and is smaller when it is closer to the center of the color bar (i.e., gray), while the standard deviation is positive and is smaller when it is closer to the bottom of the color bar (i.e., black). Fig. 3.6 shows that the maximum axial biases and standard deviations of the independent 1D interpolation methods are larger than 0.05 of a sample (i.e., 1 μm). The same figure shows that the maximum lateral biases and standard deviations of the common independent 1D interpolation methods are larger than 0.1 of a sample (i.e., 30 μm). These results are consistent with the results reported in [38]. Fig. 3.6 also shows that the maximum lateral bias and standard deviation of the iterative 1D and 2D interpolation methods are at least one order of magnitude smaller than those of the common independent 1D interpolation methods. For better visualization of the performance of the iterative 1D and 2D interpolation methods, the axial and lateral biases and standard deviations of all these methods are shown in Fig. 3.7 using separate color bars. The performance of all the techniques in terms of their maximum absolute bias and standard deviation in both directions is summarized in Table 3.1. For easier comparison, the same results are also depicted in Fig. 3.8. The results show that 2D interpolation using high-order polynomials (cubic and quartic splines) performs better than iterative 1D interpolation, followed by 2D interpolation using low-order polynomials and independent 1D interpolation techniques.
The results also show that the 2D cubic spline fit performs

[Figure 3.5: panels (a) Ind. 1D Cosine Fit, (b) Ind. 1D Parabola Fit, (c) Grid Slope Interpolation, (d) Iter. 1D Cosine Fit, (e) Iter. 1D Parabola Fit, (f) $f_6(x, y)$, (g) $\bar{f}_6(x, y)$, (h) $f_9(x, y)$, (i) $f_{16}(x, y)$, (j) $f_{25}(x, y)$; each panel plots the axial versus the lateral sub-sample shift.]

Figure 3.5: Biases and standard deviations of different pattern matching interpolation techniques as a function of sub-sample shift on an 11 × 11 grid. Field II was used to simulate the echo signals. A total of 1000 windows in the pattern matching function were used to generate each bias vector and standard deviation ellipse (window size is ≈ 2 × 2 mm², which is equivalent to 104 samples axially and 7 samples laterally).
[Figure 3.6: per-estimator maps of Axial Bias, Axial Standard Deviation, Lateral Bias, and Lateral Standard Deviation, each plotted against the axial and lateral sub-sample shift.]
shift \u00E2\u0088\u00920.4 \u00E2\u0088\u00920.2 0 0.2 0.4 Lateral sub\u00E2\u0088\u0092sample shift Axial Bias Axial sub\u00E2\u0088\u0092sample shift \u00E2\u0088\u00920.4 \u00E2\u0088\u00920.2 0 0.2 0.4 Lateral sub\u00E2\u0088\u0092sample shift \u00E2\u0088\u00920.4 \u00E2\u0088\u00920.4 \u00E2\u0088\u00920.2 0 0.2 0.4 Lateral sub\u00E2\u0088\u0092sample shift Axial sub\u00E2\u0088\u0092sample shift \u00E2\u0088\u00920.4 \u00E2\u0088\u00920.2 0 0.2 0.4 Lateral sub\u00E2\u0088\u0092sample shift Lateral Bias Axial sub\u00E2\u0088\u0092sample shift \u00E2\u0088\u00920.4 \u00E2\u0088\u00920.2 0 0.2 0.4 Lateral sub\u00E2\u0088\u0092sample shift Axial Bias Axial sub\u00E2\u0088\u0092sample shift \u00E2\u0088\u00920.4 \u00E2\u0088\u00920.2 0 0.2 0.4 Lateral sub\u00E2\u0088\u0092sample shift \u00E2\u0088\u00920.4 Lateral Standard Deviation Axial sub\u00E2\u0088\u0092sample shift \u00E2\u0088\u00920.2 Axial Standard Deviation Axial sub\u00E2\u0088\u0092sample shift Axial sub\u00E2\u0088\u0092sample shift Lateral Bias Axial sub\u00E2\u0088\u0092sample shift Axial sub\u00E2\u0088\u0092sample shift Axial sub\u00E2\u0088\u0092sample shift Axial sub\u00E2\u0088\u0092sample shift Axial sub\u00E2\u0088\u0092sample shift Axial sub\u00E2\u0088\u0092sample shift Axial sub\u00E2\u0088\u0092sample shift Axial sub\u00E2\u0088\u0092sample shift Axial sub\u00E2\u0088\u0092sample shift Axial sub\u00E2\u0088\u0092sample shift Axial Bias \u00E2\u0088\u00920.4 \u00E2\u0088\u00920.4 \u00E2\u0088\u00920.2 0 0.2 0.4 \u00E2\u0088\u00920.4 \u00E2\u0088\u00920.2 0 0.2 0.4 Lateral sub\u00E2\u0088\u0092sample shift 0.15 0.1 0.03 0 0 0.02 \u00E2\u0088\u00920.05 0.01 \u00E2\u0088\u00920.05 (a) Axial Bias 0 (b) Axial STD 0.05 \u00E2\u0088\u00920.1 0 (c) Lateral Bias (d) Lateral STD Figure 3.6: Grey level representations of the biases and standard deviations of Independent 1D cosine (1\u00F0\u009D\u0091\u00A0\u00F0\u009D\u0091\u00A1 row), Independent 1D parabola 
(2\u00F0\u009D\u0091\u009B\u00F0\u009D\u0091\u0091 row), Grid slope (3\u00F0\u009D\u0091\u009F\u00F0\u009D\u0091\u0091 row), iterative 1D cosine (4\u00F0\u009D\u0091\u00A1\u00E2\u0084\u008E row), iterative 1D parabola (5\u00F0\u009D\u0091\u00A1\u00E2\u0084\u008E row), \u00F0\u009D\u0091\u00936 (\u00F0\u009D\u0091\u00A5, \u00F0\u009D\u0091\u00A6) (6\u00F0\u009D\u0091\u00A1\u00E2\u0084\u008E row), \u00F0\u009D\u0091\u00936 (\u00F0\u009D\u0091\u00A5, \u00F0\u009D\u0091\u00A6) (7\u00F0\u009D\u0091\u00A0\u00F0\u009D\u0091\u00A1 row), \u00F0\u009D\u0091\u00939 (\u00F0\u009D\u0091\u00A5, \u00F0\u009D\u0091\u00A6) (8\u00F0\u009D\u0091\u00A1\u00E2\u0084\u008E row), and \u00F0\u009D\u0091\u009325 (\u00F0\u009D\u0091\u00A5, \u00F0\u009D\u0091\u00A6) (9\u00F0\u009D\u0091\u00A1\u00E2\u0084\u008E row) as a function of sub-sample shift on a 11 \u00C3\u0097 11 grid. 51 \u000CChapter 3. Sub-sample Motion Estimation in 2D \u00E2\u0088\u00920.01 \u00E2\u0088\u00920.02 0.4 \u00E2\u0088\u00920.03 0 Axial sub\u00E2\u0088\u0092sample shift 0 0.2 \u00E2\u0088\u00925 0.4 \u00E2\u0088\u009210 0.2 \u00E2\u0088\u00920.01 Axial sub\u00E2\u0088\u0092sample shift Axial sub\u00E2\u0088\u0092sample shift 0 0.4 0.014 0.012 0.4 0.005 0 0.2 \u00E2\u0088\u00920.005 0.4 \u00E2\u0088\u00920.01 Axial sub\u00E2\u0088\u0092sample shift Axial sub\u00E2\u0088\u0092sample shift \u00E2\u0088\u00920.2 6 0.4 4 10 0 8 0.2 6 0.4 4 Axial Standard Deviation \u00E2\u0088\u00923 x 10 1 \u00E2\u0088\u00920.2 0 0 \u00E2\u0088\u00921 \u00E2\u0088\u00920.2 0.005 \u00E2\u0088\u00922 6 0.2 4 0.4 \u00E2\u0088\u00920.4 \u00E2\u0088\u00920.2 0 0.2 0.4 Lateral sub\u00E2\u0088\u0092sample shift \u00E2\u0088\u00920.4 \u00E2\u0088\u00920.2 0 0.2 0.4 Lateral sub\u00E2\u0088\u0092sample shift (a) Axial Bias (b) Axial STD 0 0.2 \u00E2\u0088\u00920.005 0.4 \u00E2\u0088\u00920.01 Lateral Bias 0 0 \u00E2\u0088\u00922 0.4 0.02 \u00E2\u0088\u00920.4 0.025 \u00E2\u0088\u00920.2 0.02 0 0.015 0.2 0.01 0.4 0.02 \u00E2\u0088\u00920.4 
[Figure 3.7: grey-level maps, one method per row, with columns (a) Axial Bias, (b) Axial STD, (c) Lateral Bias, (d) Lateral STD; numeric axis data removed.]

Figure 3.7: Grey level representations of the biases and standard deviations of iterative 1D cosine fit (1st row), iterative 1D parabola fit (2nd row), f_6(x, y) fit (3rd row), f_6(x, y) fit (4th row), f_9(x, y) fit (5th row), f_16(x, y) fit (6th row), and f_25(x, y) fit (7th row), as a function of sub-sample shift on an 11 × 11 grid.

Table 3.1: Maximum values of biases and standard deviations obtained from the 2D noise-free simulations for a window of 2 × 2 mm².

                       Max Error (samples)                 Max Error (microns)
                       Axial           Lateral             Axial             Lateral
Method                 |b_ax|  σ_ax    |b_lat|  σ_lat      |b_ax|  σ_ax      |b_lat|   σ_lat
Ind. 1D Cos            0.0389  0.0428  0.1234   0.1844     0.7483  0.8239    38.5543   57.6356
Ind. 1D Par            0.0475  0.0461  0.1251   0.1855     0.9145  0.8871    39.0811   57.9665
Grid Slope             0.0335  0.0556  0.1421   0.1686     0.6443  1.0704    44.3963   52.6929
Iter. 1D Cos           0.0389  0.0428  0.0086   0.0217     0.7483  0.8239     2.6956    6.7682
Iter. 1D Par           0.0475  0.0461  0.0209   0.0396     0.9145  0.8871     6.5191   12.3699
f_6(x, y) (I)          0.0375  0.0460  0.1322   0.1024     0.7228  0.8846    41.3209   32.0002
f_6(x, y) (II)         0.0097  0.0151  0.0435   0.0747     0.1860  0.2914    13.5821   23.3300
f_9(x, y)              0.0182  0.0154  0.0301   0.0295     0.3513  0.2965     9.4055    9.2043
f_16(x, y)             0.0133  0.0154  0.0126   0.0210     0.2564  0.2961     3.9309    6.5654
f_25(x, y)             0.0020  0.0101  0.0042   0.0142     0.0383  0.1939     1.3125    4.4314
[Figure 3.8: bar plot, y-axis "Maximum Error (Sample)", methods 1D(Par), 1D(Cos), GS, Itr(Cos), Itr(Par), f_6(I), f_6(II), f_9, f_16, f_25; numeric axis data removed.]

Figure 3.8: Maximum values of biases and standard deviations in both the axial and the lateral directions obtained from the 2D noise-free simulations for a window of 2 × 2 mm².

better than the iterative 1D cosine fit, followed by the iterative 1D parabola fit, in agreement with previously reported work on 1D sub-sample estimation [16, 41, 43]. The maximum axial and lateral biases of the f_25(x, y) fit are found to be 0.0020 and 0.0042 of a sample, which correspond to 38 nm and 1.31 μm at the simulated ≈20 μm sample spacing and 300 μm line spacing. The maximum standard deviations of the f_25(x, y) fit are found to be 0.0101 and 0.0142 of a sample, which correspond to 193 nm and 4.43 μm. For the same set of simulated data, the maximum axial and lateral biases and standard deviations of the independent 1D methods were found to be an order of magnitude larger. The performance of different estimators in the presence of deformation is shown in Figs. 3.9 and 3.10. To estimate the motion in the presence of deformation, 50% window overlap was employed, as commonly used in elastography applications [63]. It should be noted that in the 1% compression presented here, the displacement in the axial direction reaches the maximum
of 32 samples.

[Figure 3.9: axial and lateral displacement image pairs; panels (a) FEM, (b) Ind. 1D Cosine, (c) Ind. 1D Parabola, (d) Grid Slope, (e) Iterative 1D Cosine, (f) Iterative 1D Parabola, (g) f_6(x, y), (h) f_6(x, y), (i) f_9(x, y), (j) f_25(x, y); numeric axis data removed.]

Figure 3.9: Axial (left) and lateral (right) displacement images computed by the FEM (a) and estimated by different techniques (b-j).

Table 3.2: The overall performance of 2D motion estimation techniques estimated from the compression test for a window of 2 × 2 mm².

                       Absolute Error (microns)            Absolute Error (%)
                       Axial           Lateral             Axial             Lateral
Method                 Mean    STD     Mean     STD        Mean    STD       Mean      STD
Ind. 1D Cos            0.4008  0.4958  11.6922  15.1145    1.4883  0.1510    25.4610   32.9133
Ind. 1D Par            0.4029  0.4964  11.6794  15.1491    1.4886  0.1512    25.4332   32.9888
Grid Slope             0.3976  0.4923   9.3275  12.4825    1.4892  0.1499    20.3115   27.1820
Iter. 1D Cos           0.4008  0.4958   6.0302   7.6899    1.4883  0.1510    13.1313   16.7455
Iter. 1D Par           0.4029  0.4964   6.2540   8.0595    1.4886  0.1512    13.6187   17.5503
f_6(x, y) (I)          0.4203  0.5020   7.3661   8.4090    1.4571  0.1529    16.0403   18.3114
f_6(x, y) (II)         0.4287  0.5213   6.0537   7.5425    1.4803  0.1588    13.1826   16.4245
f_9(x, y)              0.3654  0.4368   6.1065   7.6603    1.4785  0.1330    13.2976   16.6810
f_16(x, y)             0.3644  0.4369   5.3262   6.7801    1.4763  0.1331    10.3311   13.4458
f_25(x, y)             0.3637  0.4375   2.8548   3.4952    1.4712  0.1331     6.2166    7.6112
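For reference, the 1D cosine and parabola fits compared in these tables each locate the peak from the pattern-matching value at the integer-sample maximum and its two neighbors. A minimal sketch of both closed-form fits follows (a common formulation; the exact sign and variable conventions used in this chapter may differ):

```python
import numpy as np

def parabolic_peak(y_m1, y_0, y_p1):
    """Sub-sample peak offset from a parabola fit through three samples
    at positions -1, 0, +1, where y_0 is the integer-sample maximum."""
    denom = y_m1 - 2.0 * y_0 + y_p1           # curvature term (negative at a peak)
    return 0.5 * (y_m1 - y_p1) / denom

def cosine_peak(y_m1, y_0, y_p1):
    """Sub-sample peak offset from a cosine fit through the same three samples."""
    w = np.arccos((y_m1 + y_p1) / (2.0 * y_0))           # fitted angular frequency
    theta = np.arctan((y_m1 - y_p1) / (2.0 * y_0 * np.sin(w)))  # fitted phase
    return -theta / w
```

For data that truly follow the fitted model, both forms are exact: sampling a cosine with its peak at a sub-sample offset of 0.3 and passing the three samples to `cosine_peak` returns 0.3.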
As a result, the accuracy and precision of sub-sample estimation in the axial direction become less significant. However, for the same data set, the maximum lateral displacement is smaller than the line spacing; thus, the accuracy and precision of sub-sample estimation in the lateral direction are critical. To quantitatively study and compare the performance of the different estimators in the presence of deformation, the estimated displacement at each window was compared with the displacement calculated using the FEM at the same window location. The estimation errors are summarized in Table 3.2. Figs. 3.9 and 3.10 show that the iterative 1D and 2D interpolation methods perform well in estimating the lateral motion, while the independent 1D methods produce larger errors. This is clear in both the estimated lateral motions and the displacement vectors. Figs. 3.9 and 3.10 also show that the performance of the 2D interpolation methods improves as the order of the 2D polynomial increases. These results are in agreement with the results reported for estimation of rigid motions. So far, we have only considered estimation of the motion in sequences of RF signals. Fig. 3.11 shows the performance of different estimators on the 2D envelope signal rather than on the raw 2D RF signal. Since there is no carrier frequency in the envelope signals, the independent 1D parabola fit, the f_9(x, y) fit, and the f_25(x, y) fit are used for this study. As expected, the accuracy and precision of tracking in the axial direction become poor when the envelope signals are used. This figure also shows that the performance of the independent 1D estimators in estimating the lateral motion is improved.
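The f_6(x, y) through f_25(x, y) fits discussed above fit a 2D polynomial to a neighborhood of pattern-matching values and take the analytic peak of the fitted surface. A minimal sketch of the idea, taking f_6 to be the six-term bivariate quadratic fit on a 3 × 3 patch (an assumption consistent with the naming; the higher-order fits use more terms and larger grids), assuming the integer-sample peak has already been located:

```python
import numpy as np

def quad_surface_peak(C):
    """Least-squares 6-term quadratic fit to a 3x3 patch of pattern-matching
    values centered on the integer-sample peak.

    C: 3x3 array; rows index the axial offset (-1, 0, +1), columns the lateral.
    Returns (axial, lateral) sub-sample offsets in samples.
    """
    yy, xx = np.mgrid[-1:2, -1:2]                       # axial rows, lateral cols
    x, y = xx.ravel(), yy.ravel()
    A = np.stack([np.ones(9), x, y, x**2, x * y, y**2], axis=1)
    c0, cx, cy, cxx, cxy, cyy = np.linalg.lstsq(A, C.ravel(), rcond=None)[0]
    # Set the gradient of the fitted surface to zero and solve for the peak.
    H = np.array([[2 * cxx, cxy], [cxy, 2 * cyy]])
    sx, sy = np.linalg.solve(H, [-cx, -cy])             # lateral, axial
    return sy, sx
```

When the patch is an exact quadratic, the fit recovers the peak exactly; in practice the denominator matrix should also be checked for negative definiteness so that a saddle point is not mistaken for a maximum.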
This is consistent with the results published in [33], where it was suggested that RF signals be used to estimate the axial motion and envelope signals be used to estimate the lateral motion. The results also show that 2D interpolation methods can be applied to envelope data to increase both the accuracy and the precision of motion estimation in 2D. Fig. 3.5 visualizes the biases and standard deviations as a fraction of a sample, assuming equal sample spacing in both directions. As mentioned before, the sample spacing in the lateral direction is much larger than the sample spacing in the axial direction (approximately fifteen times in this work). In order to show the true length of the bias vectors and the real shape of the standard deviation ellipses, Fig. 3.12 compares the performance of the independent 1D cosine fit, the iterative 1D cosine fit, and the f_25(x, y) fit (Fig. 3.5(a), (d), and (j)) in units of distance, as opposed to fraction of a sample. Fig. 3.12 depicts how errors of a fraction of a sample translate to large actual errors, especially in the lateral direction. This figure shows the importance of the accuracy and precision of sub-sample estimation, especially in the lateral direction.

[Figure 3.10: displacement vector images; panels (a) FEM, (b) Ind. 1D Cosine, (c) Ind. 1D Parabola, (d) Grid Slope, (e) Iter. 1D Cosine, (f) Iter. 1D Parabola, (g) f_6(x, y), (h) f_6(x, y), (i) f_9(x, y), (j) f_25(x, y) Fit; numeric axis data removed.]

Figure 3.10: Displacement vector images computed by the FEM (a) and estimated by different techniques (b-j).

[Figure 3.11: bias vectors and standard-deviation ellipses versus axial and lateral sub-sample shift; panels (a) Ind. 1D Par Fit, (b) f_9(x, y) Fit, (c) f_25(x, y) Fit; numeric axis data removed.]

Figure 3.11: Biases and standard deviations of different pattern matching interpolation techniques as a function of sub-sample shift on envelope signals. Field II was used to simulate the echo signals.
A total of 1000 windows in the pattern matching function were used to generate each bias vector and standard deviation ellipse (window size ≈ 2 × 2 mm²).

3.4 Experiments

3.4.1 Experimental Setup

In order to further study the performance of all the methods using ultrasound data, the following experiment was conducted. A tissue-mimicking phantom was constructed from 100% polyvinyl chloride (PVC) plasticizer (M-F Manufacturing Co., Inc., Fort Worth, TX, USA). Two percent cellulose (Sigma-Aldrich Inc., St. Louis, MO, USA) was added as scattering particles [43, 64]. The phantom was imaged using a SonixRP ultrasound machine (Ultrasonix Medical Corporation, Richmond, BC, Canada) with an L9-4/38 linear array transducer with a 5 MHz center frequency and 300 μm line spacing. The RF signal, digitized at 40 MHz, was collected to a depth of 50 mm. The experimental setup is shown in Fig. 3.13. To generate motion without applying any deformation, the phantom was firmly secured within a large cubic container in which one face was replaced by an ultrasound-transparent plastic wrap. The phantom was oriented parallel to, and 5 mm away from, the plastic wrap. The container was then filled with degassed water and mounted on a piezoelectric motor stage system. The stage system consists of an HR1-1800E and an HR2-1800E motor stage (ALIO Industries, Wheat Ridge, CO, USA) with a reported resolution of 50 nm, providing 2D motion. A PMAC2A-PC/104 motor controller (Delta Tau Data Systems Inc., Chatsworth, CA, USA) was used to control each motor through AB1A-3U amplifiers. The motor positions were measured through RGB25H00R08 encoder interfaces (Renishaw Plc., Hillesley, Gloucestershire, UK).
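These acquisition settings fix the physical size of one "sample" in each direction, which is what turns sub-sample errors into the micron figures reported in this chapter. A rough conversion sketch, assuming a sound speed of 1540 m/s (an assumption; it yields the ≈20 μm axial sample spacing quoted earlier):

```python
# Convert sub-sample errors to physical distances for the settings in the text.
c = 1540.0                               # speed of sound (m/s), assumed
fs = 40e6                                # RF sampling rate (Hz), from the text
axial_spacing = c / (2.0 * fs) * 1e6     # microns per axial sample (round trip)
line_spacing = 300.0                     # microns per lateral sample (line pitch)

def error_um(axial_samples, lateral_samples):
    """Map (axial, lateral) errors in samples to microns."""
    return axial_samples * axial_spacing, lateral_samples * line_spacing
```

With these values `axial_spacing` is 19.25 μm, so a bias of 0.0020 samples is roughly 38 nm axially, while the same fraction of a sample laterally is two orders of magnitude larger; the exact micron values in Tables 3.1 and 3.3 depend on the true element pitch.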
Due to a limitation in the number of points that could be programmed for a 2D motion grid, a larger step size was used in the experimental setup compared to the simulation setup. For both the axial and the lateral displacements, a step size was set to be 1/8 of the sample spacing in the corresponding axis (i.e. 2.5 \u00F0\u009D\u009C\u0087m axially and 37.5 \u00F0\u009D\u009C\u0087m laterally), forming a grid of 9 \u00C3\u0097 9 = 81 distinct displacement con\u00EF\u00AC\u0081gurations. The RF frames were captured at each step. These RF frames in conjunction with the reference frame in the center, were ran through the same estimators that were used for the simulated data. This resulted in a grid spanning \u00C2\u00B10.5 of a sample in both directions. 57 \u000CAxial Shift (microns) Chapter 3. Sub-sample Motion Estimation in 2D 10 0 \u00E2\u0088\u009210 \u00E2\u0088\u0092150 \u00E2\u0088\u0092100 \u00E2\u0088\u009250 0 Lateral Shift (microns) 50 100 150 50 100 150 50 100 150 Axial Shift (microns) (a) Independent 1D Cosine Fit 10 0 \u00E2\u0088\u009210 \u00E2\u0088\u0092150 \u00E2\u0088\u0092100 \u00E2\u0088\u009250 0 Lateral Shift (microns) Axial Shift (microns) (b) Iterative 1D Cosine Fit 10 0 \u00E2\u0088\u009210 \u00E2\u0088\u0092150 \u00E2\u0088\u0092100 \u00E2\u0088\u009250 0 Lateral Shift (microns) (c) \u00F0\u009D\u0091\u009325 (\u00F0\u009D\u0091\u00A5, \u00F0\u009D\u0091\u00A6) Fit Figure 3.12: Displacement estimates of the independent 1D cosine \u00EF\u00AC\u0081t (a) and the iterative 1D cosine \u00EF\u00AC\u0081t (b) at actual scale. 2D biases are visualized with error vectors from the true motion to the mean displacement at each location on a simulated 11 \u00C3\u0097 11 2D grid. Radii of the ellipses which are centered at the location of the mean displacement correspond to the standard deviation of the measurements at the same location. 
Data was acquired at a 40 MHz temporal sampling rate and a 300 μm line spacing (window size ≈ 2 × 2 mm²).

3.4.2 Experimental Results

Experimental results are shown in Fig. 3.14 and are in good agreement with the simulation results. Due to experimental errors and electromechanical noise in the system, larger biases and standard deviations were observed in the experimental data. As with the simulation results generated by Field II, the experiments show that the independent 1D methods exhibit large biases and standard deviations when the displacements have both axial and lateral components. Both the iterative 1D and the 2D interpolation methods significantly outperform the commonly used independent 1D methods in terms of bias and standard deviation and are able to recover the underlying motion from the RF frames. 2D interpolation with high-order polynomials outperforms all the other techniques. The overall performance of all the techniques, in terms of their maximum biases and standard deviations for the experimental data, is presented in Table 3.3. The maximum axial and lateral biases of the f25(x, y) fit are found to be 551 nm and 5.92 μm, respectively. The maximum axial and lateral standard deviations of the f25(x, y) fit are found to be 2.04 μm and 5.90 μm, respectively. For the same set of data, the maximum axial and lateral biases and standard deviations of the independent 1D methods are found to be at least five times larger.
Figure 3.13: Experimental setup (a) shows the positioning of the transducer with respect to the phantom and plastic wrap. A sample sonogram acquired from the SonixRP ultrasound machine (b) is also shown. The dark layer at the top of the sonogram is the water gap between the probe and the phantom, and the markers on the left represent the focal points.

Figure 3.14: Biases and standard deviations of different pattern matching interpolation techniques on a 9 × 9 grid: (a) independent 1D cosine fit, (b) independent 1D parabola fit, (c) grid slope interpolation, (d) iterative 1D cosine fit, (e) iterative 1D parabola fit, (f)-(g) f6(x, y), (h) f9(x, y), (i) f16(x, y), (j) f25(x, y). Echo signals were captured using an ultrasound machine. A total of 1000 windows in the pattern matching function were used to generate each error ellipse (window size ≈ 2 × 2 mm², equivalent to 104 samples axially and 7 samples laterally). [Panels plot axial against lateral sub-sample shift.]
Table 3.3: Maximum values of bias and standard deviation obtained from the 2D experimental data for a window of 2 × 2 mm².

                    Max Error (samples)                 Max Error (microns)
Method          |b_ax|   σ_ax   |b_lat|  σ_lat     |b_ax|   σ_ax    |b_lat|   σ_lat
Ind. 1D Cos     0.1490  0.1764  0.0831  0.0960     0.7373  3.3959  25.9563  29.9898
Ind. 1D Par     0.1561  0.1827  0.0899  0.0991     0.7421  3.5173  28.0848  30.9681
Grid Slope      0.1396  0.1626  0.0881  0.1045     0.7176  3.1293  27.5185  32.6549
Iter. 1D Cos    0.1490  0.1764  0.0313  0.0409     0.7373  3.3959   9.7814  12.7773
Iter. 1D Par    0.1561  0.1827  0.0399  0.0440     0.7421  3.5173  12.4588  13.7417
f6(x, y)        0.1540  0.1817  0.1642  0.1434     0.7845  3.4984  51.3202  44.8161
f6(x, y)        0.1079  0.1189  0.0252  0.0523     0.5803  2.2893   7.8800  16.3330
f9(x, y)        0.1013  0.1190  0.0346  0.0244     0.5584  2.2906  10.8107   7.6387
f16(x, y)       0.1013  0.1145  0.0317  0.0236     0.5526  2.2036  10.0311   7.3321
f25(x, y)       0.1013  0.1062  0.0189  0.0189     0.5515  2.0437   5.9210   5.9079

3.5 Real-Time Implementation

Real-time estimation of the motion for elastography has been reported by several groups [22, 65-67] and has been implemented on different commercial machines. Since ultrasound imaging provides higher resolution in the axial direction, all these methods consider only the estimation of the axial component of the motion. Tracking in the lateral (or even elevational) direction has been used mainly to correct the axial component of the motion [30, 68]. Due to the small computational cost of pattern matching interpolation methods, they can be used to estimate sub-sample motions in real-time. By implementing these 2D interpolation techniques, we extend our previously introduced 1D motion tracking algorithm [56] to 2D, and report an implementation of motion tracking software that estimates both axial and lateral motions with sub-sample accuracy in real-time. As mentioned in Section 3.2, 2D motion estimation with pattern matching algorithms consists of two steps: (i) locating the maximum of the discrete pattern matching function in 2D, and (ii) estimating the peak offset in 2D using interpolation techniques. The key to achieving real-time performance is to reduce the computational cost of the first step (i.e., locating the maximum of the discrete pattern matching function), since the sub-sample estimation is computationally inexpensive. Most motion estimation algorithms ignore the strong spatial correlations of the motion field: the size of the search region for locating the maximum of the pattern matching function stays the same throughout the tracking, so the estimation becomes computationally expensive and unsuitable for real-time applications.
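For illustration, the two steps can be sketched in a few lines. This is a minimal sketch, not the implementation used in this work: it assumes a normalized cross correlation (NCC) matcher, hypothetical window and search sizes (`win`, `search`), no boundary checking, and the common independent 1D parabola fit for the sub-sample step.

```python
# Sketch of the two-step estimator: (i) locate the integer-lag peak of a
# discrete pattern matching function (NCC here), (ii) refine it with
# independent 1D parabola fits. Window/search sizes are illustrative.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-size 2D windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_function(ref, cur, y0, x0, win=(8, 4), search=(3, 3)):
    """Evaluate NCC at every integer lag in the search region around (y0, x0)."""
    wy, wx = win
    sy, sx = search
    w_ref = ref[y0:y0 + wy, x0:x0 + wx]
    c = np.full((2 * sy + 1, 2 * sx + 1), -1.0)
    for dy in range(-sy, sy + 1):
        for dx in range(-sx, sx + 1):
            c[dy + sy, dx + sx] = ncc(
                w_ref, cur[y0 + dy:y0 + dy + wy, x0 + dx:x0 + dx + wx])
    return c

def parabola_offset(cm, c0, cp):
    """Sub-sample peak offset from three samples via a 1D parabola fit."""
    denom = cm - 2.0 * c0 + cp
    return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

def estimate_2d(ref, cur, y0, x0, **kw):
    """Step (i): discrete 2D peak; step (ii): independent 1D sub-sample fits."""
    c = match_function(ref, cur, y0, x0, **kw)
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    iy = int(np.clip(iy, 1, c.shape[0] - 2))  # keep a 3-point neighborhood
    ix = int(np.clip(ix, 1, c.shape[1] - 2))
    dy = parabola_offset(c[iy - 1, ix], c[iy, ix], c[iy + 1, ix])
    dx = parabola_offset(c[iy, ix - 1], c[iy, ix], c[iy, ix + 1])
    sy, sx = (c.shape[0] - 1) // 2, (c.shape[1] - 1) // 2
    return (iy - sy) + dy, (ix - sx) + dx
```

For a purely integer shift the discrete peak dominates and the sub-sample terms stay near zero; the chapter's comparison concerns how this second step behaves between samples.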
We have previously introduced TDPE as a predictive and corrective motion estimation algorithm [56] that uses prior estimates to reduce the size of the search region. This is accomplished by centering the search region on the previously estimated motion of neighboring windows. To avoid losing track of the motion, TDPE also checks the correlation coefficient to validate the estimation results; a recovery search is triggered when the correlation coefficient is small. The previous implementation of TDPE uses 1D windows and applies 1D normalized cross correlation to find the coarse location of the motion (i.e., 1D window, 1D search). 1D interpolation is then used to estimate the sub-sample component of the motion in the axial direction. In this work, employing 2D windows, the previous implementation is extended to 2D by locating the maximum of the discrete pattern matching function in two dimensions (i.e., 2D window, 2D search). Both iterative 1D and 2D interpolation methods are implemented to estimate the sub-sample components of the motion in both the axial and the lateral directions. The f16(x, y) and f25(x, y) fits are excluded since, in addition to the computational overhead of the polynomial fitting and root estimation process, they require the pattern matching function to be evaluated at no fewer than twenty-five lags for each window. The 2D motion tracking algorithm was implemented as client software on the SonixRP PC-based ultrasound machine. The software connects to the ultrasound machine to capture RF frames in real-time. The displacements are then estimated by comparing the sequences of these RF frames.
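The predictive search and validation logic can be sketched as follows. This is a minimal illustration under simplifying assumptions (an NCC matcher, hypothetical window sizes, search radii, and correlation threshold), not the TDPE implementation itself.

```python
# Sketch of a TDPE-style predictive search: center a small search region on
# the prior estimate from a neighboring window; if the best correlation
# coefficient fails validation, fall back to a wider recovery search.
import numpy as np

def best_match(ref_win, cur, y, x, cy, cx, s):
    """Best integer lag (rho, dy, dx) in a (2s+1)^2 region around (cy, cx)."""
    wy, wx = ref_win.shape
    a = ref_win - ref_win.mean()
    na = np.sqrt((a * a).sum())
    best = (-2.0, 0, 0)
    for dy in range(cy - s, cy + s + 1):
        for dx in range(cx - s, cx + s + 1):
            b = cur[y + dy:y + dy + wy, x + dx:x + dx + wx]
            b = b - b.mean()
            nb = np.sqrt((b * b).sum())
            rho = (a * b).sum() / (na * nb) if na * nb > 0 else 0.0
            if rho > best[0]:
                best = (rho, dy, dx)
    return best

def tdpe_step(ref, cur, y, x, prior, win=(8, 4), small=1, big=4, thresh=0.75):
    """One guided estimate; a recovery search is triggered on low correlation."""
    wy, wx = win
    ref_win = ref[y:y + wy, x:x + wx]
    rho, dy, dx = best_match(ref_win, cur, y, x, prior[0], prior[1], small)
    if rho < thresh:  # validation failed: recover with a wide, centered search
        rho, dy, dx = best_match(ref_win, cur, y, x, 0, 0, big)
    return (dy, dx), rho
```

With a good prior, only (2·small + 1)² lags are evaluated per window instead of the full search region, which is the source of the real-time speed-up; sub-sample refinement would follow on the returned integer lag.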
The estimation of 2D motion runs at a refresh rate of more than 25 frames per second for 2D displacement images with 6,000 overlapping estimation windows of 2 × 2 mm² each. Typical images displayed by the software interface are shown in Fig. 3.15.

3.6 Discussion

In the simulations, the line spacing was intentionally set to 300 μm in order to be consistent with our experimental setup. However, this spacing is larger than would be expected for most ultrasound imaging systems. The bias and the standard deviation of all the estimators are expected to improve with improved lateral resolution. Moreover, to save simulation time and to be consistent with our experimental setup, a single transmit focus was used to generate the RF signals. The accuracy and the precision of all the estimators are also expected to improve by increasing the number of transmit focal points. Although not shown here, the performance of the different estimators was also studied with different window sizes in the pattern matching function. As expected, the accuracy and precision of all the methods in estimating rigid motion improve when the size of the window is increased. Although we have only considered 2D sub-sample estimation, the generalization of the same methods to higher dimensions is possible. For example, in three dimensions, a 3D polynomial can be fitted to the discrete matching coefficients in the axial, lateral, and elevational directions (i.e., f(x, y, z)). Joint sub-sample motion estimation can then be achieved in all directions by finding the maximum of the fitted polynomial (∇f = 0).
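To make the joint fit-and-maximize step concrete, the 2D case can be sketched with a six-term quadratic fitted to the 3 × 3 neighborhood of the discrete peak and a closed-form ∇f = 0 solve. This is an illustrative sketch of the idea, not the exact polynomial bases used in this work, and the helper name is hypothetical.

```python
# Sketch of joint 2D sub-sample refinement: fit
#   f(x, y) = a0 + a1*x + a2*y + a3*x*y + a4*x^2 + a5*y^2
# to the 3x3 discrete matching coefficients around the peak by least squares,
# then solve grad f = 0 for the sub-sample offset in both directions at once.
import numpy as np

def fit_peak_2d(c3x3):
    """Sub-sample peak offset (dx, dy) of a 3x3 correlation neighborhood."""
    xs, ys = np.meshgrid([-1, 0, 1], [-1, 0, 1])
    x, y = xs.ravel(), ys.ravel()
    A = np.column_stack([np.ones(9), x, y, x * y, x * x, y * y])
    a = np.linalg.lstsq(A, c3x3.ravel(), rcond=None)[0]
    # grad f = 0:  [2*a4   a3 ] [dx]   [-a1]
    #              [ a3   2*a5] [dy] = [-a2]
    H = np.array([[2 * a[4], a[3]], [a[3], 2 * a[5]]])
    dx, dy = np.linalg.solve(H, [-a[1], -a[2]])
    return dx, dy
```

The cross term a3 is what couples the two directions; setting it to zero reduces the solve to two decoupled 1D parabola fits, which is exactly the independent method the results above argue against. The same construction extends to f(x, y, z) with a 3 × 3 matrix in the ∇f = 0 system.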
The reader should note that the iterative 1D interpolation method studied in this work is fundamentally different from the iterative 1D cross correlation and recorrelation technique suggested in [11, 37]. In this work, the discrete 2D pattern matching function is evaluated once, and iterations are used only to find the exact location of the match in that same function; in [11, 37], by contrast, a new pattern matching function is evaluated at each iteration. The above-mentioned methods use the common independent 1D interpolation method (i.e., 1D cosine fitting) to estimate the sub-sample motion. The iterative 1D and 2D interpolation methods presented in this paper could also be used in [11, 37] to improve the performance of sub-sample estimation.

Figure 3.15: Screen shots of the real-time 2D motion tracking software. Color-coded estimated axial displacement (a), lateral displacement (b), displacement vector (c), and axial strain image (d) estimated inside a region of interest are shown. The images are superimposed on the sonogram in the background.

The assumption of small motions and deformations considered in this work is valid for fast imaging for dynamic [7] and real-time elastography as presented in this work, where inter-frame displacements and deformations are small and the echo signals are highly correlated. This is also the case in acoustic radiation force imaging [7, 13-15], where induced displacements and deformations are very small. Thus, the 2D interpolation methods can be readily used in these applications. However, the assumption of small motions and deformations does not hold when large deformations exist and echo signals are decorrelated.
This is generally the case in quasi-static elastography [63, 69], where large external compressions are applied to the tissue and cause it to experience large deformations. It is also the case in myocardial elastography [11], where the tissue experiences large internal motions and deformations. The performance of all the pattern matching function interpolation techniques, including those presented in this work, relies on the matching coefficients estimated by the pattern matching algorithms and is expected to degrade when the echo signals are decorrelated and cannot be matched correctly. In order to adapt these methods to the estimation of displacements resulting from large deformations, previously introduced compounding methods should be applied to the raw echo signals [68, 70, 71] prior to the motion estimation process. Once the effect of signal decorrelation is suppressed, the pattern matching algorithms followed by the 2D interpolation methods can be applied to estimate the 2D motion with sub-sample accuracy. Alternatively, techniques such as iterative 1D cross correlation with recorrelation [11, 37], which are more robust in the presence of decorrelation, can be employed to estimate the motion.

3.7 Conclusion and Future Work

The results in this paper show that the common method of applying a separate 1D sub-sample estimation to the quantized 2D pattern matching function in each direction independently is not valid when estimating motion in sequences of ultrasound RF echo signals. Results from both simulations and experiments show that both the iterative 1D and the 2D interpolation methods outperform the independent methods in terms of bias and standard deviation. The framework presented here is well suited to studying the performance of other 2D motion estimators in the future. Both the iterative 1D and the 2D sub-sample motion estimation methods are shown to provide a good balance between accuracy, precision, and computational cost.
Although they add computational overhead compared to the independent 1D methods, they still perform at real-time rates, as presented. The presented real-time motion tracking implementation has several potential applications throughout the field of signal processing. Specific applications in medical ultrasound include fine 2D tissue motion tracking, velocity vector imaging, shear strain imaging, strain tensor imaging, poro-elastography, and tissue viscoelastography. In this work we only studied pattern matching interpolation techniques for estimating the sub-sample motion in 2D. However, as mentioned before, at the cost of higher computation, accurate tracking of the motion in 2D has also been reported in the literature: by up-sampling the echo signal in the lateral direction with a large up-sampling factor and employing iterative 1D estimation and recorrelation [11, 37, 51], by using continuous pattern matching functions [38], and by using the phase of the RF signal in the axial direction [23, 25, 26] and in the lateral direction using a synthetic lateral phase [27]. Further investigation is required to compare the performance of these estimators with the pattern matching function interpolation methods. Furthermore, in this work we only used cross correlation as the pattern matching technique. The performance of the interpolation methods with other pattern matching techniques, such as the sum of squared differences and the sum of absolute differences, needs to be studied. Moreover, the effects of line spacing, focusing, transmit frequency, and sampling frequency on the echo signals also need further study. These will be the topics of our future work.

References

[1] C. Kasai, K. Namekawa, A. Koyano, and R. Omoto, "Real-time two-dimensional blood flow imaging using an autocorrelation technique," IEEE Transactions on Sonics and Ultrasonics, vol. 32, pp. 458-464, 1985.

[2] T. Loupas, R. Peterson, and R. Gill, "Experimental evaluation of velocity and power estimation for ultrasound blood flow imaging, by means of a two-dimensional autocorrelation approach," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 42, pp. 689-699, Jul. 1995.

[3] H. Torp, K. Kristoffersen, and B. Angelsen, "Autocorrelation techniques in color flow imaging: signal model and statistical properties of the autocorrelation estimates," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 41, pp. 604-612, 1994.

[4] J. Ophir, I. Cespedes, H. Ponnekanti, Y. Yazdi, and X. Li, "Elastography: a quantitative method for imaging the elasticity of biological tissues," Ultrasonic Imaging, vol. 13, pp. 111-134, Apr. 1991.

[5] A. Heimdal, A. Stoylen, H. Torp, and T. Skjaerpe, "Real-time strain rate imaging of the left ventricle by ultrasound," Journal of the American Society of Echocardiography, vol. 11, pp. 1013-1019, Nov. 1998.

[6] L. Bohs, B. Friemel, and G. Trahey, "Experimental velocity profiles and volumetric flow via two-dimensional speckle tracking," Ultrasound in Medicine and Biology, vol. 21, pp. 885-898, 1995.

[7] J. Bercoff, M. Tanter, and M. Fink, "Supersonic shear imaging: a new technique for soft tissue elasticity mapping," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 51, pp. 396-409, Apr. 2004.

[8] E. Turgay, S. Salcudean, and R. Rohling, "Identifying mechanical properties of tissue by ultrasound," Ultrasound in Medicine and Biology, vol. 32, pp. 221-235, 2006.

[9] H. Eskandari, S. Salcudean, and R. Rohling, "Viscoelastic parameter estimation based on spectral analysis," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, pp. 1611-1625, Jul. 2008.

[10] R. Righetti, J. Ophir, S. Srinivasan, and T. Krouskop, "The feasibility of using elastography for imaging the Poisson's ratio in porous media," Ultrasound in Medicine and Biology, vol. 30, pp. 215-228, 2004.

[11] W. Lee, C. M. Ingrassia, S. D. Fung-Kee-Fung, K. D. Costa, J. W. Holmes, and E. Konofagou, "Theoretical quality assessment of myocardial elastography with in vivo validation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 54, pp. 2233-2245, 2007.

[12] A. Thitaikumar, L. Mobbs, C. Kraemer-Chant, B. Garra, and J. Ophir, "Breast tumor classification using axial shear strain elastography: a feasibility study," Physics in Medicine and Biology, vol. 53, pp. 4809-4823, 2008.

[13] K. Nightingale, M. Palmeri, R. Nightingale, and G. Trahey, "On the feasibility of remote palpation using acoustic radiation force," Journal of the Acoustical Society of America, vol. 110, pp. 625-634, Jul. 2001.

[14] W. Walker, F. Fernandez, and L. Negron, "A method of imaging viscoelastic parameters with acoustic radiation force," Physics in Medicine and Biology, vol. 45, pp. 1437-1447, 2000.

[15] M. Fatemi and J. Greenleaf, "Probing the dynamics of tissue at low frequencies with the radiation force of ultrasound," Physics in Medicine and Biology, vol. 45, pp. 1449-1464, Jun. 2000.

[16] F. Viola and W. Walker, "A comparison of the performance of time-delay estimators in medical ultrasound," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 50, pp. 392-401, Apr. 2003.

[17] G. Pinton, J. Dahl, and G. Trahey, "Rapid tracking of small displacements with ultrasound," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 53, pp. 1103-1117, Jun. 2006.

[18] I. Hein and W. O'Brien, "Current time-domain methods for assessing tissue motion by analysis from reflected ultrasound echoes: a review," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 40, pp. 84-102, Mar. 1993.

[19] G. Pinton and G. Trahey, "Continuous delay estimation with polynomial splines," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 53, pp. 2026-2035, 2006.

[20] K. Hoyt, F. Forsberg, and J. Ophir, "Comparison of shift estimation strategies in spectral elastography," Ultrasonics, vol. 44, pp. 99-108, 2006.

[21] M. O'Donnell, A. Skovoroda, B. Shapo, and S. Emelianov, "Internal displacement and strain imaging using ultrasonic speckle tracking," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 41, pp. 314-325, May 1994.

[22] A. Pesavento, C. Perrey, M. Krueger, and H. Ermert, "A time-efficient and accurate strain estimation concept for ultrasonic elastography using iterative phase zero estimation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 46, pp. 1057-1067, 1999.

[23] A. Basarab, H. Liebgott, and P. Delachartre, "Analytic estimation of subsample spatial shift using the phases of multidimensional analytic signals," IEEE Transactions on Image Processing, vol. 18, pp. 440-447, 2009.

[24] C. Sumi, "Fine elasticity imaging utilizing the iterative RF-echo phase matching method," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 46, pp. 158-166, Jan. 1999.

[25] C. Sumi, "Displacement vector measurement using instantaneous ultrasound signal phase: multidimensional autocorrelation and Doppler methods," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, pp. 24-43, Jan. 2008.

[26] M. Lubinski, S. Emelianov, and M. O'Donnell, "Speckle tracking methods for ultrasonic elasticity imaging using short-time correlation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 46, pp. 82-96, Jan. 1999.

[27] X. Chen, M. Zohdy, S. Y. Emelianov, and M. O'Donnell, "Lateral speckle tracking using synthetic lateral phase," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 51, pp. 540-550, May 2004.

[28] E. Konofagou, T. Varghese, J. Ophir, and S. Alam, "Power spectral strain estimators in elastography," Ultrasound in Medicine and Biology, vol. 25, pp. 1115-1129, Sep. 1999.

[29] T. Varghese, E. Konofagou, J. Ophir, S. Alam, and M. Bilgen, "Direct strain estimation in elastography using spectral cross-correlation," Ultrasound in Medicine and Biology, vol. 26, pp. 1525-1537, Nov. 2000.

[30] H. Shi and T. Varghese, "Two-dimensional multi-level strain estimation for discontinuous tissue," Physics in Medicine and Biology, vol. 52, pp. 389-401, 2007.

[31] B. J. Geiman, L. Bohs, M. Anderson, S. Breit, and G. E. Trahey, "A novel interpolation strategy for estimating subsample speckle motion," Physics in Medicine and Biology, vol. 45, pp. 1541-1552, 2000.

[32] T. Varghese and J. Ophir, "Characterization of elastographic noise using the envelope of echo signals," Ultrasound in Medicine and Biology, vol. 24, pp. 543-555, 1998.

[33] L. Bohs and G. Trahey, "A novel method for angle independent ultrasonic imaging of blood flow and tissue motion," IEEE Transactions on Biomedical Engineering, vol. 38, pp. 280-286, Mar. 1991.

[34] G. Jacovitti and G. Scarano, "Discrete time techniques for time delay estimation," IEEE Transactions on Signal Processing, vol. 41, pp. 525-533, Feb. 1993.

[35] S. Langeland, J. D'hooge, H. Torp, B. Bijnens, and P. Suetens, "A simulation study on the performance of different estimators for two-dimensional velocity estimation," in Proceedings of the IEEE Ultrasonics Symposium, vol. 2, Oct. 2002, pp. 1859-1862.

[36] L. Bohs, B. Geiman, M. Anderson, S. Breit, and G. Trahey, "Ensemble tracking for 2D vector velocity measurement: experimental and initial clinical results," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 45, pp. 912-924, Jul. 1998.

[37] E. Konofagou and J. Ophir, "A new elastographic method for estimation and imaging of lateral displacements, lateral strains, corrected axial strains and Poisson's ratios in tissues," Ultrasound in Medicine and Biology, vol. 24, pp. 1183-1199, Oct. 1998.

[38] F. Viola, R. Coe, K. Owen, D. Guenther, and W. Walker, "Multi-dimensional spline-based estimator (MUSE) for motion estimation: algorithm development and initial results," Annals of Biomedical Engineering, vol. 36, pp. 1942-1960, Sep. 2008.

[39] E. Konofagou, F. Kallel, and J. Ophir, "Three-dimensional motion estimation in elastography," in Proceedings of the IEEE Ultrasonics Symposium, vol. 2, Oct. 1998, pp. 1745-1748.

[40] M. Bilgen, "Dynamics of errors in 3D motion estimation and implications for strain-tensor imaging in acoustic elastography," Physics in Medicine and Biology, vol. 45, pp. 1565-1578, 2000.

[41] F. Viola and W. Walker, "A spline-based algorithm for continuous time-delay estimation using sampled data," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 52, pp. 80-93, Jan. 2005.

[42] A. Basarab, H. Liebgott, F. Morestin, A. Lyshchik, T. Higashi, R. Asato, and P. Delachartre, "A method for vector displacement estimation with ultrasound images and its application for thyroid nodular disease," Medical Image Analysis, vol. 12, pp. 259-274, 2008.

[43] R. Zahiri-Azar and S. Salcudean, "Time-delay estimation in ultrasound echo signals using individual sample tracking," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, pp. 2640-2650, 2008.

[44] I. Cespedes, Y. Huang, J. Ophir, and S. Spratt, "Methods for the estimation of subsample time-delays of digitized echo signals," Ultrasonic Imaging, vol. 17, pp. 142-171, 1995.

[45] P. de Jong, T. Arts, A. Hoeks, and R. Reneman, "Determination of tissue motion velocity by correlation interpolation of pulsed ultrasonic echo signals," Ultrasonic Imaging, vol. 12, pp. 84-98, 1990.

[46] B. Geiman, L. Bohs, M. Anderson, S. Breit, and G. Trahey, "A comparison of algorithms for tracking sub-pixel speckle motion," in Proceedings of the IEEE Ultrasonics Symposium, vol. 2, Oct. 1997, pp. 1239-1242.

[47] S. Foster, P. Embree, and W. O'Brien, "Flow velocity profile via time-domain correlation: error analysis and computer simulation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 37, pp. 164-175, May 1990.

[48] F. Viola and W. Walker, "Computationally efficient spline-based time delay estimation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, pp. 2084-2091, Sep. 2008.

[49] Y. Mofid, S. Gahagnon, F. Patat, F. Ossant, and G. Josse, "In vivo high frequency elastography for mechanical behavior of human skin under suction stress: elastograms and kinetics of shear, axial and lateral strain fields," in Proceedings of the IEEE Ultrasonics Symposium, Oct. 2006, pp. 1041-1044.

[50] R. Lopata, M. Nillesen, I. Gerrits, J. Thijssen, L. Kapusta, F. van de Vosse, and C. de Korte, "In vivo 3D cardiac and skeletal muscle strain estimation," in Proceedings of the IEEE Ultrasonics Symposium, Oct. 2006, pp. 744-747.

[51] R. Lopata, M. Nillesen, H. Hansen, I. Gerrits, J. Thijssen, and C. de Korte, "Performance evaluation of methods for two-dimensional displacement and strain estimation using ultrasound radio frequency data," Ultrasound in Medicine and Biology, vol. 35, pp. 796-812, 2009.

[52] R. Zahiri-Azar and S. Salcudean, "Real-time estimation of lateral displacement using time domain cross correlation with prior estimates," in Proceedings of the IEEE Ultrasonics Symposium, Oct. 2006, pp. 1209-1212.

[53] R. Zahiri-Azar, O. Goksel, T. Yao, E. Dehghan, J. Yan, and S. Salcudean, "Methods for the estimation of sub-sample motion of digitized ultrasound echo signals in two dimensions," in Proceedings of the IEEE Engineering in Medicine and Biology Conference, Aug. 2008, pp. 5581-5584.

[54] R. Lopata, M. Nillesen, I. Gerrits, J. Thijssen, L. Kapusta, and C. de Korte, "4D cardiac strain imaging: methods and initial results," in Proceedings of the IEEE Ultrasonics Symposium, Oct. 2007, pp. 872-875.

[55] G. Giunta, "Fine estimators of two-dimensional parameters and application to spatial shift estimation," IEEE Transactions on Signal Processing, vol. 47, pp. 3201-3207, 1999.

[56] R. Zahiri-Azar and S. Salcudean, "Motion estimation in ultrasound images using time domain cross correlation with prior estimates," IEEE Transactions on Biomedical Engineering, vol. 53, pp. 1990-2000, 2006.

[57] M. Cheezum, W. Walker, and W. Guilford, "Quantitative comparison of algorithms for tracking single fluorescent particles," Biophysical Journal, vol. 81, pp. 2378-2388, Oct. 2001.

[58] P. Hill, T. Chiew, D. Bull, and C. Canagarajah, "Interpolation free subpixel accuracy motion estimation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, pp. 1519-1526, 2006.

[59] Y. Zhu, "Quartic-spline collocation methods for fourth-order two-point boundary value problems," Master's thesis, Department of Computer Science, University of Toronto, 2001.

[60] J. A. Jensen, "A model for the propagation and scattering of ultrasound in tissue," Journal of the Acoustical Society of America, vol. 89, pp. 182-191, 1991.

[61] J. A. Jensen, "Ultrasound imaging and its modeling," in Imaging of Complex Media with Acoustic and Seismic Waves, Topics in Applied Physics. Springer Verlag, 2002.

[62] O. Goksel, R. Zahiri-Azar, and S. Salcudean, "Simulation of ultrasound radio-frequency signals in deformed tissue for validation of 2D motion estimation with sub-sample accuracy," in Proceedings of the IEEE Engineering in Medicine and Biology Conference, Aug. 2007, pp.
2159\u00E2\u0080\u00932162. [63] T. Varghese, J. Ophir, E. Konofagou, F. Kallel, and R. Righetti, \u00E2\u0080\u009CTradeo\u00EF\u00AC\u0080s In Elastographic Imaging, Ultrasonic Imaging,\u00E2\u0080\u009D Ultrasonic Imaging, vol. 23, pp. 216\u00E2\u0080\u0093248, October 2001. [64] S. DiMaio and S. Salcudean, \u00E2\u0080\u009CNeedle insertion modelling and simulation,\u00E2\u0080\u009D IEEE Transactions on Robotics and Automation: Special Issue on Medical Robotics, vol. 19, pp. 864\u00E2\u0080\u0093875, 2003. [65] T. Hall, Z. Yanning, C. Spalding, and L. Cook, \u00E2\u0080\u009CIn vivo results of real-time freehand elasticity imaging.\u00E2\u0080\u009D in Proceedings of the IEEE Ultrasonics Symposium. Volume 2: IEEE, 2001, pp. 1653\u00E2\u0080\u00931657. [66] T. Shiina, N. Nitta, E. Ueno, and J. Bamber, \u00E2\u0080\u009CReal Time Tissue Elasticity Imaging using Combined Autocorrelation Method,\u00E2\u0080\u009D Medical Ultrasonics, vol. 26, pp. 57\u00E2\u0080\u009366, 1999. [67] G. Treece, J. Lindop, A. Gee, and R. Prager, \u00E2\u0080\u009CFreehand ultrasound elastography with a 3D probe,\u00E2\u0080\u009D Ultrasound in Medicine and Biology, vol. 34, pp. 463\u00E2\u0080\u0093474, March 2008. [68] P. Chaturvedi, M. Insana, and T. Hall, \u00E2\u0080\u009C2D companding for noise reduction in strain imaging,\u00E2\u0080\u009D IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 45, pp. 179\u00E2\u0080\u0093191, 1998. [69] E. Brusseau, J. Kybic, J. Deprez, and O. Basset, \u00E2\u0080\u009C2-D Locally Regularized Tissue Strain Estimation From Radio-Frequency Ultrasound Images: Theoretical Developments and Results on Experimental Data.\u00E2\u0080\u009D IEEE Transactions on Medical Imaging, vol. 27, pp. 145 \u00E2\u0080\u0093 160, Feb 2008. [70] S. Alam, J. Ophir, and E. Konofagou, \u00E2\u0080\u009CAn adaptive strain estimator for elastography,\u00E2\u0080\u009D IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 
Chapter 4

3D Estimation of Sub-Sample Motion from Digitized Ultrasound Echo Signals³

4.1 Introduction

Motion estimation in sequences of ultrasound echo signals has many applications, such as blood flow estimation, tissue velocity estimation [1–3], strain and strain rate imaging [4–6], tissue elasticity imaging [7], vibro-elastography [8,9], poro-elastography [10], myocardial imaging [11], tumor classification [12], and acoustic radiation force impulse imaging [13–15]. Since ultrasound imaging provides higher resolution in the axial direction, the estimation of the axial component of the motion has received the most attention in the literature. However, tracking the motion in one direction introduces several limitations. In blood flow and tissue velocity estimation using Doppler techniques, tracking along the beam propagation results in poor estimates of the flow and tissue velocity due to the unknown Doppler angle between the velocity vector and the beam direction; poor estimates can result even if the angle is adjusted manually [16,17]. In quasi-static elastography, tracking the motion in the axial direction yields only the axial component of the strain tensor, with all the other components remaining unknown [18]. Finally, in dynamic elastography based on the wave equations, the estimation of a single component of motion limits modulus estimation algorithms to a less accurate partial inversion rather than a full inversion [19].
A great number of motion estimation algorithms have been introduced in the literature and were studied in our previous work. Techniques based on pattern matching functions are the most straightforward approaches used to estimate the axial motion from digitized ultrasound radio frequency (RF) echo signals [4, 20–22]. Extensions of these techniques to 2D and 3D motion estimation have also been proposed in the literature [23, 24]. With these techniques, the reference ultrasound echo signal is divided into a number of windows, which may overlap with each other. The reference echo signal within each window is then set to be the pattern to be matched with the displaced echo signal over a predefined search region. Finally, the motion of the window is found by locating the best match. Several pattern matching functions have been suggested in the literature, such as normalized cross correlation, sum of squared differences, and sum of absolute differences [25, 26]. Estimation of the motion from envelope signals and from a combination of RF and envelope signals has also been reported [23, 27]. The estimation error of these techniques can be as large as half the sample spacing, which is important especially when the motion is small and the sample spacing is large. This estimation error becomes more significant in the lateral and elevational directions, where the sample spacing is very large.

³ A version of this chapter has been peer reviewed and published in the proceedings of the IEEE International Ultrasonics Symposium. A version of this chapter has also been submitted for publication. Reza Zahiri-Azar, Orcun Goksel, and Septimiu E. Salcudean, "3D Estimation of Sub-Sample Motion from Digitized Ultrasound Echo Signals".
Several techniques have been suggested in the literature to reduce the error introduced by finite sampling intervals. These techniques are categorized as: (i) up-sampling of the echo signals [28–30], (ii) interpolation of the echo signals [28, 29, 31, 32], and (iii) interpolation of the pattern matching function [33–36]. Up-sampling of the echo signal, as in (i), reduces the error by the up-sampling factor [29, 30]. Curve or polynomial fitting to the echo signals, as in (ii), results in a continuous pattern matching function whose extremum determines the location of the best match [28, 29, 31, 32]. It has been shown that these techniques outperform other algorithms but, similarly to up-sampling methods, they can be computationally demanding [32, 37]. In contrast, curve or polynomial fitting to the pattern matching function, as in (iii), often has a significantly smaller computational overhead. Thus, even though such techniques may introduce some bias in the estimation process, they are widely used for motion estimation. These techniques are the topic of this work. A number of 1D pattern matching interpolation methods, such as parabolic fitting [36], spline fitting [35], grid slope [22, 38], cosine fitting [34], zero padding, and reconstructive methods [33], have been introduced and thoroughly investigated in the literature [25, 28, 32]. Applying the same 1D interpolation techniques independently in each direction (2-1D) has also been used in the literature to estimate sub-sample motion in 2D and 3D [22, 31, 38–41]. Iterative 1D interpolation [42, 43] and 2D interpolation techniques [43–45] have also been attempted in the literature.
In our previous work, we studied all of these techniques in 2D and showed that independent 1D sub-sample motion estimation in each direction results in estimates with poor accuracy and precision. We also showed that 2D interpolation significantly outperforms the other interpolation techniques in estimating both the axial and the lateral sub-sample motions. In this work, we extend our previous work to 3D, and study and compare the performance of (i) independent 1D and (ii) 3D pattern matching interpolation techniques in estimating axial, lateral, and elevational sub-sample motions from ultrasound radio frequency data. The paper is structured as follows: Section 4.2 presents the interpolation algorithms. Section 4.3 presents the simulation method and a comparison between the algorithms. Section 4.4 presents the experimental results. Finally, Section 4.5 presents a discussion and conclusions along with avenues for future research. Throughout this work, it is assumed that the echo signals are 2D radio frequency (RF) signals. Without loss of generality, it is assumed that the pattern matching function optimization involves maximization of the normalized cross correlation. The pattern matching function values will be referred to as the matching coefficients.

4.2 Methods

Let $R[u, v, w]$ be the 3D discrete pattern matching function between a reference window and the displaced echo signals over a predefined search region. The pattern matching function can be the normalized correlation, as discussed in Appendix F.
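As a concrete illustration, the 3D normalized cross correlation between a reference window and every position in a search region could be computed as in the following sketch (a brute-force illustration, not the chapter's optimized implementation; the function name and array shapes are ours):

```python
import numpy as np

def ncc_3d(ref, search):
    """Normalized cross correlation R[u, v, w] between a 3D reference
    window `ref` and every same-sized sub-volume of `search`.
    Brute force for clarity; a real implementation would use
    running sums or FFTs for speed."""
    ra, rl, re = ref.shape
    ref0 = ref - ref.mean()
    ref_norm = np.sqrt((ref0 ** 2).sum())
    out_shape = tuple(s - r + 1 for s, r in zip(search.shape, ref.shape))
    R = np.zeros(out_shape)
    for u in range(out_shape[0]):
        for v in range(out_shape[1]):
            for w in range(out_shape[2]):
                win = search[u:u + ra, v:v + rl, w:w + re]
                win0 = win - win.mean()
                denom = ref_norm * np.sqrt((win0 ** 2).sum())
                R[u, v, w] = (ref0 * win0).sum() / denom if denom > 0 else 0.0
    return R
```

The matching coefficients lie in [−1, 1], with a value of 1 only where the window matches the reference exactly up to gain and offset.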
Given the discrete 3D pattern matching function $R[u, v, w]$, the coarse axial ($d_a$), lateral ($d_l$), and elevational ($d_e$) estimates of the motion in the axial ($x$), lateral ($y$), and elevational ($z$) directions, respectively, are achieved by locating the maximum of this 3D discrete function:

$[d_a, d_l, d_e] = \arg\max_{u,v,w} R[u, v, w].$   (4.1)

Following the coarse estimation of the motion within the sampling accuracy, the following methods are used to estimate the sub-sample displacements $\delta_a$, $\delta_l$, and $\delta_e$ in the axial, the lateral, and the elevational directions, respectively, using the matching coefficients at adjacent lags.

4.2.1 Independent 1D Methods

The most common approach for estimating sub-sample motion is to apply conventional 1D interpolation techniques, such as parabola, cosine, or spline fitting, to the axial, lateral, and elevational directions in an independent manner. Referring to Fig.
4.1(a), let $f_a(x)$ be an axial 1D interpolation function passing through the 3D pattern matching function at $[d_a, d_l, d_e]$ and its axial lags (i.e., $R[d_a + i, d_l, d_e]$, where $i$ is in an axial fitting interval $\{-M_a, \ldots, M_a\}$); let $f_l(y)$ be a lateral 1D interpolation function passing through the 3D pattern matching function at $[d_a, d_l, d_e]$ and its lateral lags (i.e., $R[d_a, d_l + j, d_e]$, where $j$ is in a lateral fitting interval $\{-M_l, \ldots, M_l\}$); and let $f_e(z)$ be an elevational 1D interpolation function passing through the 3D pattern matching function at $[d_a, d_l, d_e]$ and its elevational lags (i.e., $R[d_a, d_l, d_e + k]$, where $k$ is in an elevational fitting interval $\{-M_e, \ldots, M_e\}$). The sub-sample motion estimates $\delta_a$, $\delta_l$, and $\delta_e$ at $(d_a, d_l, d_e)$ are computed from their corresponding axial, lateral, and elevational interpolation functions as follows:

$\delta_a = \arg\max_x f_a(x), \quad \delta_l = \arg\max_y f_l(y), \quad \delta_e = \arg\max_z f_e(z).$   (4.2)

For the purpose of this work, the following two interpolation functions have been implemented: (i) three-point 1D parabola fitting [36], where $f_a(x) = a_a + b_a x + c_a x^2$, $f_l(y) = a_l + b_l y + c_l y^2$, and $f_e(z) = a_e + b_e z + c_e z^2$; and (ii) three-point cosine fitting [34], where $f_a(x) = A_a \cos(\alpha_a x + \beta_a)$, $f_l(y) = A_l \cos(\alpha_l y + \beta_l)$, and $f_e(z) = A_e \cos(\alpha_e z + \beta_e)$.
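For the three-point fits, the sub-sample offset along one axis has a closed form in the matching coefficients at lags −1, 0, and +1 around the coarse maximum. The following sketch shows the standard parabolic and cosine expressions (function names are ours; the cosine form assumes the center coefficient dominates its neighbors so that the arccos argument is in range):

```python
import numpy as np

def parabolic_offset(r_m, r_0, r_p):
    """Sub-sample peak offset from a parabola through the matching
    coefficients at lags -1 (r_m), 0 (r_0), and +1 (r_p)."""
    denom = r_m - 2.0 * r_0 + r_p
    return 0.5 * (r_m - r_p) / denom if denom != 0 else 0.0

def cosine_offset(r_m, r_0, r_p):
    """Sub-sample peak offset from A*cos(alpha*t + beta) fitted to the
    matching coefficients at lags -1, 0, and +1."""
    alpha = np.arccos((r_m + r_p) / (2.0 * r_0))
    beta = np.arctan((r_m - r_p) / (2.0 * r_0 * np.sin(alpha)))
    return -beta / alpha
```

When the underlying coefficients are truly sinusoidal the cosine fit is exact, while the parabola introduces a small bias that grows with the offset magnitude.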
The independent 1D methods using three-point function fitting ($M_a = M_l = M_e = 1$) require the matching coefficients to be available at seven lags (i.e., the maximum in the center and the two immediate neighboring lags in each direction). Detailed descriptions of these techniques are provided in our previous work and are not repeated here.

4.2.2 3D Methods

In a more general approach, a 3D function can be fitted to the discrete matching coefficients in the axial, lateral, and elevational directions. Estimation with sub-sample accuracy can then be achieved in all directions by finding the peak of the fitted function analytically. Referring to Fig. 4.1(b), let $f(x, y, z)$ be a 3D interpolation function passing through the 3D pattern matching function at $[d_a, d_l, d_e]$ and its neighbors (i.e.,
$R[d_a + i, d_l + j, d_e + k]$, where $i \in \{-M_a, \ldots, M_a\}$, $j \in \{-M_l, \ldots, M_l\}$, and $k \in \{-M_e, \ldots, M_e\}$).

Figure 4.1: Different schemes for sub-sample displacement estimation in 3D using the coefficients of the cross correlation function in the neighborhood of its maximum: (a) independent 1D estimation; (b) 3D estimation. For the first technique only 1D interpolation is required, while 3D interpolation is needed for the second method.

The sub-sample motion estimates $\delta_a$, $\delta_l$, and $\delta_e$ at $(d_a, d_l, d_e)$ are computed simultaneously from the corresponding 3D interpolation function as follows:

$[\delta_a, \delta_l, \delta_e] = \arg\max_{x,y,z} f(x, y, z).$   (4.3)
The following non-separable 3D polynomials,

$f_{10}(x, y, z) = a_1 + a_2 x + a_3 y + a_4 z + a_5 xy + a_6 xz + a_7 yz + a_8 x^2 + a_9 y^2 + a_{10} z^2,$   (4.4)

$f_{27}(x, y, z) = a_1 + a_2 x + a_3 y + a_4 z + a_5 xy + a_6 xz + a_7 yz + a_8 x^2 + a_9 y^2 + a_{10} z^2$
$\qquad + a_{11} x^2 y + a_{12} x^2 z + a_{13} y^2 x + a_{14} y^2 z + a_{15} z^2 x + a_{16} z^2 y + a_{17} xyz$
$\qquad + a_{18} x^2 y^2 + a_{19} x^2 z^2 + a_{20} y^2 z^2 + a_{21} x^2 yz + a_{22} y^2 xz + a_{23} z^2 xy$
$\qquad + a_{24} x^2 y^2 z + a_{25} x^2 z^2 y + a_{26} y^2 z^2 x + a_{27} x^2 y^2 z^2,$   (4.5)

with 10 and 27 coefficients, respectively, are implemented in this paper, where $f_{10}(x, y, z)$ is a multinomial of degree 2 and $f_{27}(x, y, z)$ results from multiplying the $[1, x, x^2]$, $[1, y, y^2]$, and $[1, z, z^2]$ terms (quadratic spline). Higher-order polynomials such as cubic and quartic splines are not considered in this work (see the Discussion section). The two 3D polynomials are fitted to 27 points of the discrete pattern matching function, the maximum and its 26 immediate neighbors, using a least squares fit.
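The least-squares fit over the 27-point neighborhood could be set up as in the following sketch for $f_{10}$, assuming the matching coefficients are given as a 3×3×3 array centered on the coarse maximum (the function name and argument are ours):

```python
import numpy as np

def fit_f10(R27):
    """Least-squares fit of the 10-term quadratic f10(x, y, z) (Eq. 4.4)
    to a 3x3x3 block of matching coefficients centered on the coarse
    peak. Returns the coefficient vector [a1, ..., a10]."""
    lags = (-1, 0, 1)
    A, b = [], []
    for i, x in enumerate(lags):
        for j, y in enumerate(lags):
            for k, z in enumerate(lags):
                # One row per neighbor: the f10 basis evaluated at (x, y, z).
                A.append([1.0, x, y, z, x * y, x * z, y * z,
                          x * x, y * y, z * z])
                b.append(R27[i, j, k])
    coeffs, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return coeffs
```

The same construction extends to $f_{27}$ by enlarging each row to the 27-term basis of (4.5), in which case the 27×27 system becomes square and the fit interpolates the data exactly.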
A detailed description of the 3D polynomial fitting is provided in Appendix G. The location of the unconstrained maximum of the fitted 3D polynomial (i.e., where $\nabla f(x, y, z) = (\partial f/\partial x, \partial f/\partial y, \partial f/\partial z) = 0$) is then found using the following closed-form solution:

$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = - \begin{bmatrix} 2a_8 & a_5 & a_6 \\ a_5 & 2a_9 & a_7 \\ a_6 & a_7 & 2a_{10} \end{bmatrix}^{-1} \begin{bmatrix} a_2 \\ a_3 \\ a_4 \end{bmatrix},$   (4.6)

for $f_{10}(x, y, z)$, and by using Newton's method (E.4) for $f_{27}(x, y, z)$, where $\kappa = 0, 1, \ldots, n$ is the index of the iteration and $n$ is the number of iterations. In this work, in order to locate the maximum of $f_{27}$ in Newton's method, $x_0$, $y_0$, $z_0$ are initialized to zero. Note that the fit for $f_{10}$, followed by (4.6), can be used to generate the starting point for estimating the maximum of $f_{27}$.
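The closed-form peak of $f_{10}$ in (4.6) amounts to a single 3×3 linear solve, as in this sketch (coefficient ordering as in (4.4); the function name is ours):

```python
import numpy as np

def f10_peak(a):
    """Closed-form stationary point of f10 (Eq. 4.6).
    `a` is the coefficient vector [a1, ..., a10]."""
    a1, a2, a3, a4, a5, a6, a7, a8, a9, a10 = a
    H = np.array([[2 * a8, a5, a6],      # Hessian of f10 (constant)
                  [a5, 2 * a9, a7],
                  [a6, a7, 2 * a10]])
    g = np.array([a2, a3, a4])           # gradient of f10 at the origin
    return -np.linalg.solve(H, g)        # solve grad f10 = 0
```

For a valid peak the Hessian should be negative definite; in practice this holds when the coarse estimate is a true local maximum of the matching coefficients.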
This approach provides a better starting point for Newton's method and therefore reduces the number of iterations required for convergence, but it also requires the data to be fitted to both $f_{10}$ and $f_{27}$:

$\begin{bmatrix} x \\ y \\ z \end{bmatrix}_{\kappa+1} = \begin{bmatrix} x \\ y \\ z \end{bmatrix}_{\kappa} - \begin{bmatrix} \partial^2 f/\partial x^2 & \partial^2 f/\partial x \partial y & \partial^2 f/\partial x \partial z \\ \partial^2 f/\partial y \partial x & \partial^2 f/\partial y^2 & \partial^2 f/\partial y \partial z \\ \partial^2 f/\partial z \partial x & \partial^2 f/\partial z \partial y & \partial^2 f/\partial z^2 \end{bmatrix}^{-1} \begin{bmatrix} \partial f/\partial x \\ \partial f/\partial y \\ \partial f/\partial z \end{bmatrix} \Bigg|_{x = x_\kappa,\ y = y_\kappa,\ z = z_\kappa}$   (4.7)

Figure 4.2: Scatterer distributions (left; only a small fraction of all scatterers are plotted for better visualization) and a Field II simulated sonogram (right).

4.3 Simulations

4.3.1 Simulation Setup

A series of computer simulations was performed to study and compare the performance of the sub-sample motion estimators. All calculations were performed in MATLAB (The MathWorks Inc., Natick, MA). A 50 × 60 × 10 mm³ virtual phantom was simulated (Fig. 4.2(a)) by randomly placing scatterers with random scattering amplitudes. Field II [46] was then used to simulate the ultrasound radio frequency echo volumes (RF volumes) from these scatterers. A linear probe was modeled. The default simulation assumes a 5 MHz center frequency and a 40 MHz sampling frequency (≈ 19.3 μm sample spacing). A linear scan of the phantom was carried out with a 128-element transducer, using 64 active elements. A single transmit focus was placed at 30 mm, and dynamic receive focusing was employed to generate the RF lines. For each frame, 128 RF lines were simulated along a width of 38 mm, resulting in a line spacing of 300 μm. The frames in the volume were simulated by translating the transducer in the elevational direction.
An elevational frame spacing of 150 μm, modeling our experimental ultrasound setup, was employed. Seven frames were simulated for each volume (i.e. 7 × 128 = 896 RF lines per volume). The number of scatterers per smallest sampling volume was set to 10 to ensure that the speckle of the ultrasound images is fully developed. A sample sonogram generated from one of these simulated RF data sets is shown in Fig. 4.2(b).

In order to simulate rigid motions in all three directions, the scatterers were displaced on a grid in the axial, lateral, and elevational directions with sub-sample distances (i.e. a fraction of the sample spacing along each corresponding axis). For all the displacements, a step size of 1/4 of the sample spacing in the corresponding axis was chosen (i.e. 5 μm axially, 75 μm laterally, and 37.5 μm elevationally), forming a grid of 5 × 5 × 5 = 125 distinct displacement configurations up to a full sample. The RF volumes corresponding to each of these displaced scatterer configurations were then simulated (i.e. 896 × 125 = 112,000 RF lines). These simulated RF volumes were used to estimate the motion in conjunction with the RF volume in the center of the 3D grid as a reference. This resulted in a grid spanning ±0.5 of a sample in all directions. Simulation of each volume would take more than 500 hours using Field II on a single-core computer; four multi-core computers were used in parallel to simulate the RF volumes for this work.

4.3.2 Motion Estimation

Simulations were performed by applying the estimation algorithms to the simulated RF volumes. For all the data, normalized cross correlation was used as the pattern matching function to find the coarse motion within the sampling accuracy.
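As an illustration of this coarse matching step, the following is a minimal 1D sketch of normalized cross correlation in pure Python. The helper names `ncc` and `coarse_shift` are ours, not from the thesis, and the 3D window search is reduced to a single axis; the lag that maximizes the correlation gives the integer-sample motion estimate.

```python
import math

def ncc(pattern, candidate):
    """Normalized cross correlation of two equal-length sample lists.

    Returns a value in [-1, 1]; identical (up to affine scaling)
    windows give 1.0.
    """
    n = len(pattern)
    mp = sum(pattern) / n
    mc = sum(candidate) / n
    num = sum((p - mp) * (c - mc) for p, c in zip(pattern, candidate))
    den = math.sqrt(sum((p - mp) ** 2 for p in pattern)
                    * sum((c - mc) ** 2 for c in candidate))
    return num / den if den else 0.0

def coarse_shift(pattern, signal):
    """Slide the pattern over the search signal and return the best
    integer lag together with its correlation value."""
    best_lag, best_val = 0, -2.0
    for lag in range(len(signal) - len(pattern) + 1):
        v = ncc(pattern, signal[lag:lag + len(pattern)])
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag, best_val
```

In the thesis this search is carried out over a 3D search volume rather than a 1D lag range, but the peak-picking logic is the same.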
Unless mentioned otherwise, the window size for the pattern matching function was set to approximately 2 × 2 × 1 mm³ (i.e. 104 samples axially, 7 samples laterally, and 5 samples elevationally). The size of the search area for the pattern matching function was set to approximately 3 × 3 × 2 mm³ (i.e. 156 samples axially, 9 samples laterally, and 7 samples elevationally), so that it does not introduce any limitation in the estimation process. The sub-sample motion estimators from Section 4.2 were then applied to find the sub-sample motions. In order to have accurate estimation of the cross correlation at the edges of the search region, the actual data from the echo signals were used instead of zero-padding. The starting point for Newton's method for estimating the maximum of the 3D polynomial was set to x₀ = y₀ = z₀ = 0, and the iteration was stopped when the variations dropped below 0.00001, which is equivalent to 0.001% of the sample spacing. In all the simulations, this criterion was met in fewer than five iterations (i.e. δ_a = x₅, δ_l = y₅, δ_e = z₅ in (E.4)). To obtain unbiased measurements, the region of interest (ROI) was centered around the transmit focus, and data from both the near field and the far field were excluded from the study. The size of the ROI was set to 30 mm by 30 mm. This resulted in more than 1000 pattern matching windows used for displacement estimation for each sub-sample motion on the grid.
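The Newton iteration of (4.7), together with the stopping rule above, can be sketched as follows (pure Python; `grad` and `hess` stand for the gradient and Hessian of the fitted polynomial, and the function names are ours). The 3 × 3 system is solved by Cramer's rule for self-containment.

```python
def det3(A):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def solve3(A, b):
    """Solve the 3x3 linear system A u = b by Cramer's rule."""
    d = det3(A)
    u = []
    for j in range(3):
        Aj = [row[:] for row in A]       # copy A, substitute column j with b
        for i in range(3):
            Aj[i][j] = b[i]
        u.append(det3(Aj) / d)
    return u

def newton_max_3d(grad, hess, x0=(0.0, 0.0, 0.0), tol=1e-5, max_iter=50):
    """Iterate (4.7): x_{k+1} = x_k - H(x_k)^{-1} grad(x_k), starting at
    the origin and stopping once the largest coordinate update falls
    below tol (0.00001 of a sample, as in the text)."""
    x = list(x0)
    for _ in range(max_iter):
        step = solve3(hess(x), grad(x))
        x = [xi - si for xi, si in zip(x, step)]
        if max(abs(s) for s in step) < tol:
            break
    return x
```

For a quadratic surface such as the hypothetical test function f(x, y, z) = −(x − 0.1)² − 2(y + 0.2)² − (z − 0.3)², a single Newton step lands on the maximum (0.1, −0.2, 0.3); the thesis's fitted correlation polynomials are only approximately quadratic, hence the handful of iterations reported.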
The performance of each estimator was studied in terms of its bias and standard deviation as a function of the sub-sample shift in the axial, lateral, and elevational directions, to characterize its accuracy and precision [28, 31, 32].

4.3.3 Simulation Results

Figure 4.3 shows the bias and standard deviation of all the sub-sample estimation techniques as a function of sub-sample shift on a 5 × 5 × 5 grid. For better visualization of the accuracy, the axial, lateral, and elevational biases are shown as vectors: vectors connecting the true displacements to the mean estimated displacements illustrate the directional bias for each of the 125 simulations. To show the precision, i.e. the standard deviations in all three directions, an ellipsoid error representation is used for each simulation. The radius of each ellipsoid along each direction corresponds to the standard deviation of the motion estimation in that direction. For better visualization, the ellipsoids are centered on the true motions instead of the estimated motions. Also, every other plane in the elevational direction is eliminated prior to display in this particular format.

Figure 4.3: Bias vectors (1st column) and standard deviation ellipsoids (2nd column) of the Ind. 1D parabola fit (1st row), Ind. 1D cosine fit (2nd row), P10 fit (3rd row), and P27 fit (4th row) as a function of sub-sample shift on a 5 × 5 × 5 simulated grid. A total of 1000 windows/patterns were used to generate each bias vector and standard deviation ellipsoid.

Figure 4.3 shows that the common independent 1D methods perform well only if the displacement has a purely axial, lateral, or elevational component (i.e. only one of δ_a, δ_l, δ_e is non-zero). The 3D methods are able to recover the underlying motion from the RF volumes and show small biases and standard deviations (i.e. smaller vectors and ellipsoids). The results show that increasing the number of coefficients for the 3D polynomial fitting improves the performance of the sub-sample motion estimator: the 3D polynomial fit with 27 coefficients has smaller biases and standard deviations. This is consistent with our previously reported results for 2D sub-sample estimation.

In order to study the results quantitatively, we follow the approach suggested in [31]. The biases and standard deviations of all the above techniques for each individual axis are presented in Fig. 4.4. To simplify the comparison between the results, the biases and standard deviations are displayed using the same color range for all the methods. It should be noted that bias is signed and is smaller when it is closer to the center of the color bar (i.e. gray), while standard deviation is positive and is smaller when it is closer to the bottom of the color bar (i.e. black). Fig. 4.4 shows that the maximum biases and standard deviations of the 3D methods are significantly smaller than those of the common independent 1D methods. Figure 4.4 shows that the maximum lateral and elevational biases of the independent 1D interpolation methods are as large as 0.15 and 0.25 of a sample, respectively. This figure also shows that the maximum lateral and elevational standard deviations of the independent 1D interpolation methods are as large as 0.20 and 0.50 of a sample, respectively. These results are consistent with our previously reported results in 2D and the results reported in [31].
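For reference, the independent 1D methods discussed above reduce, on each axis, to a three-point fit around the integer-sample correlation peak. A minimal sketch of the standard parabola version (pure Python; the function name is ours):

```python
def parabola_subsample(c_m1, c_0, c_p1):
    """Three-point parabola fit: sub-sample offset of the correlation
    peak relative to the integer-sample maximum c_0, given its two
    neighbors c_m1 (at -1 sample) and c_p1 (at +1 sample).

    Returns a value in (-0.5, 0.5) when c_0 is a strict maximum.
    """
    denom = c_m1 - 2.0 * c_0 + c_p1
    return 0.5 * (c_m1 - c_p1) / denom if denom else 0.0
```

Applied independently per axis, this ignores the coupling between axes, which is exactly the failure mode visible in Fig. 4.3 for diagonal displacements; the 3D polynomial fits model that coupling explicitly.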
Figure 4.4 shows that the maximum lateral and elevational biases and standard deviations of all the 3D methods are an order of magnitude smaller than those of the conventional independent 1D interpolation methods. For better visualization of the range of errors of the 3D methods, their biases and standard deviations are shown in Fig. 4.5 using separate color bars. The performance of all the techniques in terms of their maximum biases and standard deviations in all three directions is summarized in Table 4.1. For easier comparison, the same results are shown in Fig. 4.6 using error plots. The results show that the accuracy and the precision of sub-sample estimation improve in all three directions when the 3D methods are employed instead of the common independent 1D method. The maximum axial, lateral, and elevational biases of the 3D polynomial fitting with 27 coefficients, P27, are found to be 0.0121, 0.0180, and 0.0121 of a sample, which correspond to 232 nm, 5.40 μm, and 1.81 μm, respectively, at the simulated 19.3 μm, 300 μm, and 150 μm axial, lateral, and elevational spacing. The maximum axial, lateral, and elevational standard deviations of P27 are found to be 0.0160, 0.0264, and 0.0574 of a sample, which correspond to 307 nm, 7.92 μm, and 8.61 μm, respectively.

Table 4.1: Maximum values of biases and standard deviations of different 3D motion estimation techniques evaluated from simulated data, in units of samples.
Method          Max b_ax   Max b_lat   Max b_elv   Max σ_ax   Max σ_lat   Max σ_elv
Ind. 1D Par     0.0201     0.1657      0.2767      0.0517     0.1612      0.4053
Ind. 1D Cos     0.0121     0.1641      0.2769      0.0482     0.1623      0.4041
P10(x, y, z)    0.0119     0.0891      0.1411      0.0299     0.0454      0.1348
P27(x, y, z)    0.0121     0.0180      0.0121      0.0160     0.0264      0.0574

Figure 4.4: Biases and standard deviations of the sub-sample estimation techniques as a function of sub-sample shift on a 5 × 5 × 5 grid: (a) 1D parabola fit, (b) 1D cosine fit, (c) P10(x, y, z) fit, (d) P27(x, y, z) fit (window size is 104 samples axially, 7 samples laterally, and 5 samples elevationally, ≈ 2 × 2 × 1 mm³).
Figure 4.5: Biases and standard deviations of the sub-sample estimation techniques as a function of sub-sample shift on a 5 × 5 × 5 grid: (a) bias of P10, (b) STD of P10, (c) bias of P27, (d) STD of P27 (window size is 104 samples axially, 7 samples laterally, and 5 samples elevationally, ≈ 2 × 2 × 1 mm³).

Fig. 4.3 visualizes the biases and standard deviations as a fraction of a sample, assuming equal sample spacing in all directions. However, as mentioned before, the sample spacing in the lateral and elevational directions is much larger than the sample spacing in the axial direction. In order to show the true length of the bias vectors and the real shape of the standard deviation ellipsoids, Fig. 4.7 compares the performance of the independent 1D parabola fitting and the 3D polynomial fitting (Fig. 4.3(a),(d)) in units of distance, as opposed to fraction of a sample. Fig. 4.7 depicts how small errors, expressed as a fraction of a sample, translate to large actual errors, especially in the lateral and elevational directions, which suffer from large sample spacing. This figure shows the importance of accuracy and precision of sub-sample estimation, especially in the lateral and elevational directions.

4.4 Experiments

4.4.1 Experimental Setup

In order to study the performance of the 3D method using actual ultrasound data, the following experiment was conducted. A 3-axis AIMS ultrasound scanning system (Onda Corp., Sunnyvale, CA) with 10 μm resolution in each axis, mounted on a water tank equipped with a water conditioning system (Onda Corp., Sunnyvale, CA), was used. This experimental setup is shown in Fig. 4.8. Experiments were performed on a 60 × 40 × 40 mm³ uniform phantom. The phantom was prepared using 100% polyvinyl chloride (PVC) plasticizer (M-F Manufacturing Co., Inc., Fort Worth, TX, USA) with two percent cellulose (Sigma-Aldrich Inc., St Louis, MO, USA) as scatterers [47]. The phantom was placed in a tank of degassed water, 2 mm away from the transducer, enabling the transducer to move without deforming the phantom and thus producing a rigid motion. The probe was moved in steps of 20 μm, once axially, once laterally, and once elevationally, up to 100 μm. The phantom was imaged using the 3D transducer (4DC6-3/40) of a SonixRP ultrasound machine (Ultrasonix Medical Corporation, Richmond, BC). Similarly to our simulation setup, a single transmit focus was placed at 30 mm.
The phantom was imaged to a depth of 60 mm (2 mm water gap plus 58 mm phantom) using a 128-element motor-driven curved-linear transducer (lateral radius r_l = 39.8 mm and elevational radius r_e = 27.5 mm) with a 5 MHz center frequency digitized at 40 MHz, with 430 μm line spacing at the transducer's surface and 0.36° frame spacing (i.e. 175 μm frame spacing at the surface of the transducer). 33 frames were acquired per volume (i.e. a total of 11.5° field of view elevationally). In addition to the reference RF volume, 5 RF volumes, one for each 20 μm increment, were recorded for each experiment on each axis for off-line processing (a total of 16 RF volumes). These acquired RF volumes, in conjunction with the first RF volume as a reference, were used to estimate the rigid motions.

In order to further study the performance of the 3D method qualitatively, in the presence of deformation, another experiment was conducted on a commercial breast phantom (CIRS, Tissue Simulation and Phantom Technology, Norfolk, Virginia, USA). This experimental setup is shown in Fig. 4.9. The transducer was fixed on top of a mechanical stage that provides controlled motion, and the stage was used to compress the phantom in the axial direction. Using the same transducer and imaging setup mentioned above, pre- and post-compression RF volumes were acquired and used to estimate the motions.
Figure 4.6: The maximum error (axial, lateral, and elevational) of different 3D motion estimation techniques over the 5 × 5 × 5 simulated motion grid (window size is 104 samples axially, 7 samples laterally, and 5 samples elevationally, ≈ 2 × 2 × 1 mm³). For better comparison, the results are shown both in separate figures (a)-(c) and together (d).

Figure 4.7: The biases and standard deviations of the 1D parabola fit (a),(b) and the 3D polynomial fit (c),(d) on a 5 × 5 × 5 grid, shown at the actual scale. For better visualization of the standard deviation of the independent 1D method, ellipsoids in every other plane in the lateral and elevational directions are eliminated prior to display. For the 3D method, all the ellipsoids are shown since they are small.

Figure 4.8: The experimental setup (a), showing the positioning of the transducer with respect to the phantom on the 3-axis motion stage inside the water tank. A sample sonogram (b) acquired from the ultrasound machine is also shown.

Figure 4.9: The experimental setup (a), showing the positioning of the transducer with respect to the phantom, and a sample sonogram (b) acquired using the ultrasound machine.

4.4.2 Motion Estimation

In this work, instead of a 1D/2D linear array transducer, a motor-driven curved linear array transducer is used to acquire the echo volumes.
In one approach, the acquired echo volumes can be interpolated into Cartesian coordinates prior to the motion estimation process, and the motion can then be estimated from these interpolated echo volumes. Although this approach simplifies the motion estimation, it requires all the echo volumes to be interpolated into Cartesian coordinates, which is computationally intensive. In another approach, the motion can be estimated in the transducer's coordinates using the acquired raw RF volumes. Once motions are estimated, the estimated motion vectors in the transducer's coordinates can be transformed into Cartesian coordinates. To avoid interpolation of the echo volumes, the second approach is used in this work.

The displacements between the sequences of RF volumes were estimated using normalized cross correlation as the pattern matching function. Time Domain cross correlation with Prior Estimates (TDPE) [48] was used to find the motion within sampling accuracy in the RF volume. The window size was set to 2 × 2 × 1 mm³ (i.e. 104 samples axially, 7 samples laterally, and 5 samples elevationally) with approximately 50% window overlap in the axial direction. The size of the search area for the pattern matching function was set to approximately 3 × 3 × 2 mm³ (i.e. 156 samples axially, 9 samples laterally, and 7 samples elevationally). Once the coarse motion was estimated within the sampling accuracy, the 3D polynomial fitting with 27 coefficients, which outperformed all the other methods, was used to find the sub-sample components of the motion in all three directions for each window.

4.4.3 Conversion to Units of Distance

1D/2D linear array transducers have fixed sample spacing in all directions.
As a result, the displacements estimated in units of samples in each direction are converted to units of distance by multiplying the results by a constant scaling factor (i.e. the sample spacing in the same direction). However, in curved linear array transducers, the sample spacing in the direction of curvature increases with depth, as shown in Fig. 4.10. Thus, the conversion factors to units of distance need to increase with depth. The 3D motor-driven curved linear transducer used in this study is curved in both the lateral and elevational directions with different curvatures (i.e. r_e ≠ r_l). Therefore, the conversion factors in these two directions need to increase with depth independently. The estimated displacements in units of samples are converted to units of distance using the following equation:

\[
\begin{bmatrix} D_a' \\ D_l' \\ D_e' \end{bmatrix}
=
\begin{bmatrix}
s_a(d) & 0 & 0 \\
0 & s_l(d) & 0 \\
0 & 0 & s_e(d)
\end{bmatrix}
\begin{bmatrix} D_a \\ D_l \\ D_e \end{bmatrix}
\tag{4.8}
\]

where D_a = d_a + δ_a, D_l = d_l + δ_l, and D_e = d_e + δ_e are the estimated axial, lateral, and elevational components of the motion in units of samples at depth d; s_a(d) = s_a(0), s_l(d) ≈ s_l(0)(r_l + d)/r_l, and s_e(d) ≈ s_e(0)(r_e + d)/r_e are the transducer's axial, lateral, and elevational sample spacings at depth d, respectively; s_a(0) = 19.3 μm, s_l(0) = 430 μm, and s_e(0) = 175 μm are the sample spacings at the transducer's surface; r_l = 39.8 mm and r_e = 27.5 mm are the radii of the transducer in the lateral and elevational directions; and D_a', D_l', and D_e' are the estimated components of the motion in units of distance at depth d in the transducer's coordinates.
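Equation (4.8) can be sketched as follows (pure Python; the function name and the SI-unit convention are ours). The axial spacing stays constant with depth, while the lateral and elevational factors grow by (r + d)/r:

```python
def to_distance(D_a, D_l, D_e, d,
                s_a0=19.3e-6, s_l0=430e-6, s_e0=175e-6,
                r_l=39.8e-3, r_e=27.5e-3):
    """Convert displacements in units of samples to metres at depth d
    (metres), per (4.8). Defaults match the transducer parameters in
    the text: s_a(0) = 19.3 um, s_l(0) = 430 um, s_e(0) = 175 um,
    r_l = 39.8 mm, r_e = 27.5 mm."""
    s_a = s_a0                       # axial spacing is depth-independent
    s_l = s_l0 * (r_l + d) / r_l     # lateral spacing grows with depth
    s_e = s_e0 * (r_e + d) / r_e     # elevational spacing grows with depth
    return D_a * s_a, D_l * s_l, D_e * s_e
```

At depth d = r_l, for instance, the lateral conversion factor doubles relative to the transducer surface.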
Figure 4.10: Variation of the spacing between lines as a function of depth when a curved array transducer is used to acquire the echo lines. This occurs in the lateral and elevational directions due to the curvature of the array (s(0) is the spacing between lines on the transducer surface and s(d) is the spacing between lines at depth d).

4.4.4 Coordinate Transformation

Once the motion vectors are estimated in the transducer's coordinates and converted to units of distance, it is necessary to transform them to Cartesian coordinates. For the 3D motor-driven transducer with different curvatures in the lateral and elevational directions used in this study, this coordinate transformation consists of two rotations, as shown in Fig. 4.11. In order to compensate for these, the same rotations need to be applied to the estimated motion vectors in the opposite directions. One rotation is required about the transducer's elevational direction to correct for the curvature within each frame; this rotation corrects for the curvature of the transducer and transforms the estimate in the lateral direction to the true lateral component in Cartesian coordinates. The second rotation is required about the actual lateral direction to correct for the axial and elevational components, transforming them to Cartesian coordinates; this rotation corrects for the sweeping angle.
This transformation can be formulated as follows:

    [D''_a]   [ cos(φ)  0  sin(φ)] [cos(θ)  -sin(θ)  0] [D'_a]
    [D''_l] = [   0     1    0   ] [sin(θ)   cos(θ)  0] [D'_l] ,   (4.9)
    [D''_e]   [-sin(φ)  0  cos(φ)] [  0        0     1] [D'_e]

(a) Transducer Coordinate and Cartesian Coordinates. (b) Front View (left) and Side View (right).

Figure 4.11: The transducer's coordinate frame and Cartesian coordinates are shown (a).
The coordinate transformation consists of two rotations arising from the curvature of the transducer in the lateral and elevational directions (b).

where D'_a, D'_l, and D'_e are the components of the motion in units of distance in transducer coordinates, θ and φ are the rotation angles about the elevational and lateral axes, respectively, and D''_a, D''_l, and D''_e are the components of the motion in units of distance in Cartesian coordinates. The rotation angles θ and φ are calculated from the location, within the 3D volume, of the reference window that is used as the pattern in the matching algorithm.

4.4.5 Experimental Results

The estimated axial, lateral, and elevational components of the displacement in Cartesian coordinates, resulting from the translation of the phantom in the water tank, are shown in Fig. 4.12. The results show that the 3D method accurately measures all three components of the motion. The maximum tracking error of the 3D method was measured to be 10 μm in all three axes, which matches the resolution of our experimental setup (approximately 10 μm). The estimated axial, lateral, and elevational components of the displacements resulting from the axial compression of a breast phantom were scan converted and are shown in Fig. 4.13. A multi-planar view was used to visualize the results in 3D.
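A minimal sketch of the transformation in (4.9), assuming the angle conventions stated above (θ about the elevational axis, φ about the lateral axis); the function name is hypothetical and this is an illustration, not the thesis implementation.

```python
import numpy as np

def transducer_to_cartesian(D_a, D_l, D_e, theta, phi):
    """Apply the two rotations of (4.9): first about the elevational axis
    (angle theta, correcting the lateral curvature), then about the resulting
    lateral axis (angle phi, correcting the sweep angle)."""
    R_theta = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                        [np.sin(theta),  np.cos(theta), 0.0],
                        [0.0,            0.0,           1.0]])
    R_phi = np.array([[ np.cos(phi), 0.0, np.sin(phi)],
                      [ 0.0,         1.0, 0.0        ],
                      [-np.sin(phi), 0.0, np.cos(phi)]])
    # D'' = R_phi @ R_theta @ D', as in (4.9)
    return R_phi @ R_theta @ np.array([D_a, D_l, D_e])
```

With θ = φ = 0 the transformation is the identity, as expected for a window at the center of the array with no sweep.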
The bright echo in the sonogram (Fig. 4.9(b)) is clearly visible as a hard inclusion in the displacement estimates. For comparison purposes, the results from the axial compression of a phantom with a hard inclusion, computed using the finite element method (FEM), are also presented. Fig. 4.13 shows good agreement between the simulated and experimental results.

4.5 Discussions and Conclusion

In this work, we studied several pattern-matching function interpolation schemes suited to precise and accurate estimation of motion in 3D. The performance of all the interpolation methods was characterized through both simulations and experiments. The results show that the 3D interpolation methods significantly outperform common independent 1D interpolation algorithms in terms of bias and standard deviation. In the simulation, the line spacing was intentionally set to be large in order to match our experimental setup. However, this spacing is much larger than the typical line spacing in state-of-the-art ultrasound imaging systems. We expect both the bias and the standard deviation of all the estimators to improve with better lateral resolution. As shown in Section 4.2, the interpolation used to estimate the sub-sample motion assumes a fixed sample spacing in each direction. In our experiments, however, the discrete pattern matching function is evaluated in the transducer's coordinates, where the sample spacing varies as a function of depth. Even though we accounted for this fact during the conversion to units of distance, we ignored it in the interpolation step and assumed equal sample spacing. We believe this is a good assumption, since the axial sample spacing is three orders of magnitude smaller than the distance of the pattern matching window from the center of rotation (20 μm vs. 40 mm).
This results in a maximum spacing variation of 0.0005 of a sample in both the lateral and elevational directions, which is much smaller than the bias and standard deviation of the 3D interpolation techniques. Alternatively, instead of assuming fixed sample spacing, the 3D polynomial can be fitted to the matching coefficients with the corrected sample spacing at each depth.

(a) Axial Translation. (b) Lateral Translation. (c) Elevational Translation.

Figure 4.12: The estimated axial (1st column), lateral (2nd column), and elevational (3rd column) components of the motion resulting from axial (1st row), lateral (2nd row), and elevational (3rd row) translations of the phantom in the water tank.

Once the polynomial
is fitted, the same root estimation techniques can be applied to find the sub-sample motions.

Figure 4.13: Simulated (top) axial, lateral, and elevational components of the displacement resulting from axial compression of a phantom with a hard inclusion, and the experimental results (bottom) on the breast phantom. A multi-planar view is used to visualize the results in 3D. Equal-cost contours on the surfaces are also used to show the direction of the displacement.

The performance of all the pattern matching function interpolation techniques, including those presented in this work, relies on the matching coefficients and is expected to degrade when the echo signals are decorrelated and cannot be matched correctly. Highly correlated signals are generally available in ultrafast imaging for dynamic elastography [7] and in real-time elastography [48], where displacements and deformations between frames are small. However, this is not the case in quasi-static elastography [49, 50] and myocardial elastography [11], where the tissue experiences large internal motions and deformations. To adapt all these methods to the estimation of displacements resulting from large deformations, previously introduced compounding methods should be applied to the raw echo signals [51–53] prior to the motion estimation process. Once the effect of signal decorrelation is suppressed, the pattern matching algorithms followed by the 3D interpolation methods can be applied to estimate the 3D motion with sub-sample accuracy. In our previous work we studied the performance of quadratic, cubic, and quartic 2D spline polynomials for estimating the sub-sample motion in two dimensions.
We showed that, with the trade-off of higher computational cost, increasing the order of the polynomial improves both the accuracy and the precision of the sub-sample estimation. In this work we extended this study to 3D and compared the performance of 3D quadratic spline polynomials f_27(x, y, z) for interpolating the pattern matching function. As in our previous work, in addition to the 3D methods studied here, 3D cubic and quartic splines with 64 (f_64(x, y, z)) and 125 (f_125(x, y, z)) coefficients, respectively, can also be used to estimate the sub-sample motions in 3D with higher accuracy and precision. However, these polynomials are excluded from this study due to their computational overhead: in addition to the polynomial fitting and root estimation process, they require the pattern matching function to be evaluated for at least 125 lags for each window. This significant additional computational overhead diminishes the simplicity advantage of the pattern matching interpolation techniques. Moreover, we showed in our previous work that grid slope interpolation techniques [22, 38] perform as well as independent 1D interpolation techniques; thus, they are not included in this study as a separate interpolation method. The 3D sub-sample motion estimation methods studied in this work provide a good balance between accuracy, precision, and computational cost. These methods have several potential applications throughout the field of signal processing.
Specific applications in medical ultrasound include fine 3D tissue motion tracking, motion vector estimation, velocity vector imaging, strain tensor estimation, and tissue elasticity estimation.

References

[1] C. Kasai, K. Namekawa, A. Koyano, and R. Omoto, "Real-time two-dimensional blood flow imaging using an autocorrelation technique," IEEE Transactions on Sonics and Ultrasonics, vol. 32, pp. 458–464, 1985.

[2] T. Loupas, R. Peterson, and R. Gill, "Experimental evaluation of velocity and power estimation for ultrasound blood flow imaging, by means of a two-dimensional autocorrelation approach," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 42, pp. 689–699, Jul 1995.

[3] H. Torp, K. Kristoffersen, and B. Angelsen, "Autocorrelation techniques in color flow imaging: signal model and statistical properties of the autocorrelation estimates," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 41, pp. 604–612, 1994.

[4] J. Ophir, I. Cespedes, H. Ponnekanti, Y. Yazdi, and X. Li, "Elastography: a quantitative method for imaging the elasticity of biological tissues," Ultrasonic Imaging, vol. 13, pp. 111–134, April 1991.

[5] A. Heimdal, A. Stoylen, H. Torp, and T. Skjaerpe, "Real-time strain rate imaging of the left ventricle by ultrasound," Journal of the American Society of Echocardiography, vol. 11, pp. 1013–1019, Nov 1998.

[6] L. Bohs, B. Friemel, and G. Trahey, "Experimental velocity profiles and volumetric flow via two-dimensional speckle tracking," Ultrasound in Medicine and Biology, vol. 21, pp.
885–898, 1995.

[7] J. Bercoff, M. Tanter, and M. Fink, "Supersonic shear imaging: a new technique for soft tissue elasticity mapping," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 51, pp. 396–409, April 2004.

[8] E. Turgay, S. Salcudean, and R. Rohling, "Identifying mechanical properties of tissue by ultrasound," Ultrasound in Medicine and Biology, vol. 32, pp. 221–235, 2006.

[9] H. Eskandari, S. Salcudean, and R. Rohling, "Viscoelastic parameter estimation based on spectral analysis," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, pp. 1611–1625, July 2008.

[10] R. Righetti, J. Ophir, S. Srinivasan, and T. Krouskop, "The feasibility of using elastography for imaging the Poisson's ratio in porous media," Ultrasound in Medicine and Biology, vol. 30, pp. 215–228, 2004.

[11] W. Lee, C. M. Ingrassia, S. D. Fung-Kee-Fung, K. D. Costa, J. W. Holmes, and E. Konofagou, "Theoretical quality assessment of myocardial elastography with in vivo validation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 54, pp. 2233–2245, 2007.

[12] A. Thitaikumar, L. Mobbs, C. Kraemer-Chant, B. Garra, and J. Ophir, "Breast tumor classification using axial shear strain elastography: a feasibility study," Physics in Medicine and Biology, vol. 53, pp. 4809–4823, 2008.

[13] K. Nightingale, M. Palmeri, R. Nightingale, and G. Trahey, "On the feasibility of remote palpation using acoustic radiation force," Journal of the Acoustical Society of America, vol. 110, pp.
625–634, July 2001.

[14] W. Walker, F. Fernandez, and L. Negron, "A method of imaging viscoelastic parameters with acoustic radiation force," Physics in Medicine and Biology, vol. 45, pp. 1437–1447, 2000.

[15] M. Fatemi and J. Greenleaf, "Probing the dynamics of tissue at low frequencies with the radiation force of ultrasound," Physics in Medicine and Biology, vol. 45, pp. 1449–1464, June 2000.

[16] P. Wells, "Ultrasonic colour flow imaging," Physics in Medicine and Biology, vol. 39, pp. 2113–2145, 1994.

[17] R. Gill, "Measurement of blood flow by ultrasound: accuracy and sources of error," Ultrasound in Medicine and Biology, vol. 11, pp. 625–641, Aug 1985.

[18] E. Konofagou and J. Ophir, "Precision estimation and imaging of normal and shear components of the 3D strain tensor in elastography," Physics in Medicine and Biology, vol. 45, pp. 1553–1563, 2000.

[19] J. Greenleaf, M. Fatemi, and M. Insana, "Selected methods for imaging elastic properties of biological tissues," Annual Review of Biomedical Engineering, vol. 5, pp. 57–78, 2003.

[20] M. Lubinski, S. Emelianov, and M. O'Donnell, "Speckle tracking methods for ultrasonic elasticity imaging using short-time correlation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 46, pp. 82–96, January 1999.

[21] H. Shi and T. Varghese, "Two-dimensional multi-level strain estimation for discontinuous tissue," Physics in Medicine and Biology, vol. 52, pp. 389–401, Nov 2007.

[22] B. Geiman, L. Bohs, M. Anderson, S. Breit, and
G. Trahey, "A novel interpolation strategy for estimating subsample speckle motion," Physics in Medicine and Biology, vol. 45, pp. 1541–1552, 2000.

[23] L. Bohs and G. Trahey, "A novel method for angle independent ultrasonic imaging of blood flow and tissue motion," IEEE Transactions on Biomedical Engineering, vol. 38, pp. 280–286, March 1991.

[24] G. Trahey, J. Allison, and O. Von Ramm, "Angle independent ultrasonic detection of blood flow," IEEE Transactions on Biomedical Engineering, vol. 34, pp. 965–967, December 1987.

[25] F. Viola and W. Walker, "A comparison of the performance of time-delay estimators in medical ultrasound," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 50, pp. 392–401, April 2003.

[26] G. Jacovitti and G. Scarano, "Discrete time techniques for time delay estimation," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 41, pp. 525–533, Feb 1993.

[27] T. Varghese and J. Ophir, "Characterization of elastographic noise using the envelope of echo signals," Ultrasound in Medicine and Biology, vol. 24, pp. 543–555, 1998.

[28] F. Viola and W. Walker, "A spline-based algorithm for continuous time-delay estimation using sampled data," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 52, pp. 80–93, January 2005.

[29] G. Pinton and G. Trahey, "Continuous delay estimation with polynomial splines," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 53, pp. 2026–2035, 2006.
[30] E. Konofagou and J. Ophir, "A new elastographic method for estimation and imaging of lateral displacements, lateral strains, corrected axial strains and Poisson's ratios in tissues," Ultrasound in Medicine and Biology, vol. 24, pp. 1183–1199, October 1998.

[31] F. Viola, R. Coe, K. Owen, D. Guenther, and W. Walker, "Multi-dimensional spline-based estimator (MUSE) for motion estimation: algorithm development and initial results," Annals of Biomedical Engineering, vol. 36, pp. 1942–1960, September 2008.

[32] R. Zahiri-Azar and S. Salcudean, "Time-delay estimation in ultrasound echo signals using individual sample tracking," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, pp. 2640–2650, 2008.

[33] I. Cespedes, Y. Huang, J. Ophir, and S. Spratt, "Methods for the estimation of subsample time-delays of digitized echo signals," Ultrasonic Imaging, vol. 17, pp. 142–171, 1995.

[34] P. de Jong, T. Arts, A. Hoeks, and R. Reneman, "Determination of tissue motion velocity by correlation interpolation of pulsed ultrasonic echo signals," Ultrasonic Imaging, vol. 12, pp. 84–98, 1990.

[35] B. Geiman, L. Bohs, M. Anderson, S. Breit, and G. Trahey, "A comparison of algorithms for tracking sub-pixel speckle motion," in Proceedings of the IEEE Ultrasonics Symposium, vol. 2, Oct 1997, pp. 1239–1242.

[36] S. Foster, P. Embree, and W. O'Brien, "Flow velocity profile via time-domain correlation: error analysis and computer simulation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 37, pp. 164–175, May 1990.
[37] F. Viola and W. Walker, "Computationally efficient spline-based time delay estimation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, pp. 2084–2091, September 2008.

[38] L. Bohs, B. Geiman, M. Anderson, S. Breit, and G. Trahey, "Ensemble tracking for 2D vector velocity measurement: experimental and initial clinical results," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 45, pp. 912–924, July 1998.

[39] Y. Mofid, S. Gahagnon, F. Patat, F. Ossant, and G. Josse, "In vivo high frequency elastography for mechanical behavior of human skin under suction stress: elastograms and kinetics of shear, axial and lateral strain fields," in Proceedings of the IEEE Ultrasonics Symposium, Vancouver, Oct 2006, pp. 1041–1044.

[40] R. Lopata, M. Nillesen, I. Gerrits, J. Thijssen, L. Kapusta, F. van de Vosse, and C. de Korte, "In vivo 3D cardiac and skeletal muscle strain estimation," in Proceedings of the IEEE Ultrasonics Symposium, Vancouver, Oct 2006, pp. 744–747.

[41] R. Lopata, M. Nillesen, H. Hansen, I. Gerrits, J. Thijssen, and C. de Korte, "Performance evaluation of methods for two-dimensional displacement and strain estimation using ultrasound radio frequency data," Ultrasound in Medicine and Biology, vol. 35, pp. 796–812, 2009.

[42] R. Zahiri-Azar and S. Salcudean, "Real-time estimation of lateral displacement using time domain cross correlation with prior estimates," in Proceedings of the IEEE Ultrasonics Symposium, Oct 2006, pp. 1209–1212.

[43] R. Zahiri-Azar, O. Goksel, T. Yao, E.
Dehghan, J. Yan, and S. Salcudean, "Methods for the estimation of sub-sample motion of digitized ultrasound echo signals in two dimensions," in Proceedings of the IEEE Engineering in Medicine and Biology, Aug 2008, pp. 5581–5584.

[44] R. Lopata, M. Nillesen, I. Gerrits, J. Thijssen, L. Kapusta, and C. de Korte, "4D cardiac strain imaging: methods and initial results," in Proceedings of the IEEE Ultrasonics Symposium, Oct 2007, pp. 872–875.

[45] G. Giunta, "Fine estimators of two-dimensional parameters and application to spatial shift estimation," IEEE Transactions on Signal Processing, vol. 47, pp. 3201–3207, 1999.

[46] J. A. Jensen, "A model for the propagation and scattering of ultrasound in tissue," Journal of the Acoustical Society of America, vol. 89, pp. 182–191, 1991.

[47] S. DiMaio and S. Salcudean, "Needle insertion modelling and simulation," IEEE Transactions on Robotics and Automation: Special Issue on Medical Robotics, vol. 19, pp. 864–875, 2003.

[48] R. Zahiri-Azar and S. Salcudean, "Motion estimation in ultrasound images using time domain cross correlation with prior estimates," IEEE Transactions on Biomedical Engineering, vol. 53, pp. 1990–2000, 2006.

[49] T. Varghese, J. Ophir, E. Konofagou, F. Kallel, and R. Righetti, "Tradeoffs in elastographic imaging," Ultrasonic Imaging, vol. 23, pp. 216–248, October 2001.

[50] J. Deprez, E. Brusseau, C. Schmitt, G. Cloutier, and O.
Basset, "3D estimation of soft biological tissue deformation from radio-frequency ultrasound volume acquisitions," Medical Image Analysis, vol. 13, pp. 116–127, 2009.

[51] S. Alam, J. Ophir, and E. Konofagou, "An adaptive strain estimator for elastography," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 45, pp. 461–472, March 1998.

[52] P. Chaturvedi, M. Insana, and T. Hall, "2D companding for noise reduction in strain imaging," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 45, pp. 179–191, 1998.

[53] T. Varghese and J. Ophir, "Enhancement of echo-signal correlation in elastography using temporal stretching," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 44, pp. 173–180, January 1997.

Chapter 5

Comparison Between Pattern Matching Techniques Employing 2D Sub-sample Estimation and 2D Tracking Using Angular Compounding

5.1 Introduction

Motion estimation in sequences of ultrasound echo signals is essential for blood flow estimation [1, 2], tissue velocity estimation [3, 4], elasticity imaging [5–7], and acoustic radiation force imaging [8–10]. Since ultrasound imaging provides higher resolution in the direction of beam propagation, the estimation of this component of the motion has received the most attention in the literature. However, tracking the motion in only one direction introduces limitations for each of the above-mentioned applications.
For example, in blood flow and tissue velocity estimation, tracking along the beam propagation results in poor estimation of the flow and tissue velocity because of the unknown angle between the velocity vector and the beam direction, even when the angle is manually adjusted [11, 12]. In quasi-static elastography, tracking the motion in one direction yields only one component of the strain tensor, with all the other components remaining unknown [13]. In dynamic elastography using wave equations, the estimation of a single component of motion limits modulus estimation algorithms to partial inversion, rather than the more accurate full inversion algorithms [14]. Several authors have proposed methods for measuring both the axial and lateral motion components using ultrasound radio frequency (RF) echo signals. Techniques based on 2D pattern matching functions are the most straightforward approaches to estimating the motion from sampled ultrasound echo signals [7, 15–18]. With these techniques, the reference ultrasound echo signal is divided into a number of windows, which may overlap with each other. The reference echo signal within each window is set to be the pattern to be matched with the displaced echo signal over a predefined search region. The motion of the window is then found by locating the best match. To reduce the estimation error introduced by finite sampling intervals, curve or polynomial fitting to the pattern matching function [19–22] has been suggested in the literature to estimate the sub-sample motion. In another approach, angular compounding has also been attempted in the literature to

4 A version of this chapter will be submitted for publication. Reza Zahiri-Azar and Septimiu E.
Salcudean, "Comparison Between Pattern Matching Techniques Employing 2D Sub-sample Estimation and 2D Tracking Using Angular Compounding".

estimate the motion vectors [23–27]. With these techniques, the data from the region of interest are acquired from multiple look angles. The multiple look angles can originate from a single transducer that is moved mechanically, or from multiple transducers [23]. They can also originate from a single transducer using a single transmit and multiple receive angles [26–28], or multiple electronically steered transmit and receive angles [24, 27, 29]. Once data from multiple angles are acquired, the motions estimated along the multiple directions are compounded to reconstruct the motion vectors inside the overlapping region. We have previously presented 2D interpolation techniques that significantly improve the accuracy and the precision of 2D motion estimation using pattern matching techniques. In these techniques, iterative 1D polynomial fitting or 2D polynomial fitting is used to estimate the sub-sample component of the motion in both the axial and the lateral directions, as opposed to conventional 1D interpolation methods. In this paper, we study and compare the performance of 2D tracking methods using 2D sub-sample estimation techniques with that of conventional angular compounding when beam steering is used to acquire the data from multiple look directions. The paper is structured as follows: Section 5.2 presents the motion tracking algorithms. Section 5.3 presents the simulation method and a comparison between the methods using simulated data. Section 5.4 presents the experimental results. Finally, Section 5.5 presents a discussion and conclusions, along with avenues for future research.
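The compounding step mentioned above, in which along-beam estimates from several steering angles are combined into a motion vector, can be sketched as a least-squares problem. This is an illustrative formulation under the assumption that each along-beam estimate is the projection of the true 2D motion onto the steered beam direction; the function name is our own, not an algorithm from this work.

```python
import numpy as np

def compound_2d(axial_estimates, steering_angles_rad):
    """Least-squares reconstruction of a 2D motion vector from along-beam
    (axial) displacement estimates acquired at several steering angles.
    Each estimate m_i is modeled as the projection of the true motion
    (d_axial, d_lateral) onto the beam direction (cos(theta_i), sin(theta_i))."""
    m = np.asarray(axial_estimates, dtype=float)
    th = np.asarray(steering_angles_rad, dtype=float)
    A = np.column_stack([np.cos(th), np.sin(th)])  # projection directions
    v, *_ = np.linalg.lstsq(A, m, rcond=None)
    return v  # (axial, lateral) components in the non-steered frame
```

Two distinct angles suffice in principle; more angles overdetermine the system and average out estimation noise.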
Throughout this paper, it is assumed that 2D echo radio frequency (RF) signals are available. Without loss of generality, it is assumed that the pattern matching function optimization involves maximization of the normalized cross correlation. The pattern matching function values will be referred to as the matching coefficients. Also, the components of the motion along the beam propagation and transverse to the beam will be referred to as axial and lateral, respectively, in both steered and non-steered images.

5.2 Methods

Both of the above-mentioned techniques rely on correlation to track the displacements. The difference is that angular compounding uses only the component of the motion along the beam direction from multiple angles to reconstruct the motion vectors, whereas pattern matching techniques measure the motion vector by tracking the motion both along and transverse to the beam from a single set of data. This is shown in Fig. 5.1. In a general form, let s_θ1[i, j] and s_θ2[i, j], i = 0, ..., n − 1, j = 0, ..., m − 1, be the sampled reference and displaced/delayed echo signals, respectively, where n is the number of discrete samples, m is the number of acquisition lines in the echo signals, and θ is the steering angle. When θ = 0, we are referring to non-steered images.
Let R^θ[u, v] be the discrete 2D pattern matching function between a windowed segment of the reference and the displaced echo signals acquired using the same angle (θ) over a predefined search region. Given R^θ[u, v], the coarse axial (d_a^θ) and lateral (d_l^θ) estimates of the motion in the axial (x^θ) and the lateral (y^θ) directions are obtained by locating the maximum of R^θ[u, v] as follows:

[d_a^θ, d_l^θ] = arg max_{u,v} R^θ[u, v].   (5.1)

Figure 5.1: Schematics for 2D motion tracking using pattern matching (a) and beam steering (b). In 2D pattern matching, data are acquired from the region of interest with no steering and the 2D displacements are estimated from the estimated axial and lateral components of the motion. In beam steering, data from a region of interest are acquired with different steering angles. The 2D displacements in the overlapping region are reconstructed using 1D measurements from the employed steering angles.
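As a concrete illustration of the coarse search in (5.1), the following sketch computes the normalized cross-correlation matching coefficients over a search region and locates their integer-lag maximum. This is a minimal Python/NumPy rendering for exposition only (the thesis implementation is in MATLAB; all function and variable names here are our own):

```python
import numpy as np

def ncc_surface(ref_win, search_win):
    """Normalized cross-correlation R[u, v] between a reference window and
    every same-sized patch of a larger search region.
    ref_win: (wa, wl) window from the reference frame.
    search_win: larger region from the displaced frame."""
    wa, wl = ref_win.shape
    sa, sl = search_win.shape
    r0 = ref_win - ref_win.mean()
    norm_r = np.sqrt((r0 ** 2).sum())
    R = np.zeros((sa - wa + 1, sl - wl + 1))
    for u in range(R.shape[0]):
        for v in range(R.shape[1]):
            patch = search_win[u:u + wa, v:v + wl]
            p0 = patch - patch.mean()
            denom = norm_r * np.sqrt((p0 ** 2).sum())
            R[u, v] = (r0 * p0).sum() / denom if denom > 0 else 0.0
    return R

def coarse_peak(R):
    """Integer (axial, lateral) lag of the matching-coefficient maximum, eq. (5.1)."""
    idx = np.unravel_index(np.argmax(R), R.shape)
    return int(idx[0]), int(idx[1])
```

In practice the search-region extent corresponds to the expected maximum displacement, and the sub-sample estimators of Sections 5.2.1 and 5.2.2 then refine the integer lag returned by `coarse_peak`.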
5.2.1 Pattern Matching

Following the coarse estimation of the axial and the lateral components of the motion within the sampling accuracy according to (5.1) in non-steered images (d_a^0, d_l^0), 2D interpolation techniques are used to find the sub-sample motions employing the matching coefficients in both directions (δ_a^0, δ_l^0). Subsequently, the total 2D displacement is estimated to be D_a^0 = d_a^0 + δ_a^0 and D_l^0 = d_l^0 + δ_l^0. For the purpose of this work, 2D polynomial fitting with nine coefficients (i.e. f_9(x, y)) and iterative 1D cosine fitting are used to estimate the sub-sample motion in both the axial and the lateral directions. Detailed descriptions of the 2D polynomial fitting and the iterative 1D cosine fitting are provided in our previous work. These techniques will be referred to as PMP (i.e. pattern matching with 2D polynomial interpolation) and PMC (i.e. pattern matching with iterative 1D cosine interpolation).
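The three-point sub-sample fits underlying these estimators have closed forms. The sketch below (Python/NumPy, our own naming; the full iterative estimators are described in our previous work) implements the 1D parabola and cosine fits and a simplified, non-iterative application of the 1D fits along the two axes of the matching-coefficient surface:

```python
import numpy as np

def subsample_parabola(ym, y0, yp):
    """Three-point parabola fit through the coefficients at lags -1, 0, +1;
    returns the sub-sample offset of the maximum."""
    return (ym - yp) / (2.0 * (ym - 2.0 * y0 + yp))

def subsample_cosine(ym, y0, yp):
    """Three-point cosine fit y(x) = A*cos(w*x + phi); the peak sits at
    x = -phi/w, which is exact when the samples are truly cosinusoidal."""
    w = np.arccos((ym + yp) / (2.0 * y0))
    phi = np.arctan((ym - yp) / (2.0 * y0 * np.sin(w)))
    return -phi / w

def subsample_2d(R, u, v, fit=subsample_cosine):
    """Simplified 2D sub-sample step: apply the chosen 1D fit along the axial
    and lateral profiles through the integer peak (u, v). The iterative
    estimators in the text refine this alternation further."""
    da = fit(R[u - 1, v], R[u, v], R[u + 1, v])
    dl = fit(R[u, v - 1], R[u, v], R[u, v + 1])
    return da, dl
```

On a separable cosine-shaped correlation peak this recovers the true offsets exactly; on real matching surfaces the iteration and 2D polynomial fitting reduce the residual bias further.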
5.2.2 Angular Compounding

Since we are only interested in estimating the component of the motion along the beam direction (x^θ), for each steering angle, following the coarse estimation of the axial component of the motion within the sampling accuracy (d_a^θ) using (5.1), previously introduced 1D interpolation techniques are used to find the sub-sample motion employing the matching coefficients in that same direction (δ_a^θ). Subsequently, the total displacement along the beam is estimated to be D_a^θ = d_a^θ + δ_a^θ. The process mentioned above is repeated for two steering angles (i.e. ±θ in this work). Once the displacements in the directions of beam propagation are estimated (i.e. D_a^θ, D_a^{−θ}), the true axial and lateral components of the motion are computed at each spatial location according to the following equations [27]:

D_a^0 = (D_a^θ + D_a^{−θ}) / (2 cos θ),   (5.2)

D_l^0 = (D_a^θ − D_a^{−θ}) / (2 sin θ).   (5.3)

In order to have a fair comparison, the common 1D cosine fit [20] and 1D parabola fit [22] are used to estimate the sub-sample motion along the beam (i.e. two 1D interpolations vs one 2D interpolation). These techniques will be referred to as BSC (i.e. beam steering with 1D cosine interpolation) and BSP (i.e. beam steering with 1D parabola interpolation).

5.3 Simulations

5.3.1 Simulation Setup

Computer simulations were performed to study the performance of all methods in terms of their accuracy and precision. All calculations were performed in MATLAB (MathWorks Inc., Natick, MA). Similarly to our previous work, a 50 × 60 × 10 mm³ virtual phantom was simulated by randomly allocating scatterers with random scattering amplitudes. Field II [30, 31] was used to simulate the ultrasound radio frequency echo signals (RF frames) and envelope signals from these scatterers. To be consistent with our experimental setup, a linear probe was modeled with a 5 MHz center frequency and a 40 MHz sampling rate. A linear scan of the phantom was carried out with a 128-element transducer, using 64 active elements. A single transmit focus was placed at 30 mm, and dynamic receive focusing was employed to generate the RF lines. 128 RF lines were simulated along a width of 38 mm, resulting in a line spacing of 300 μm. In order to simulate translations in both directions, the scatterers were displaced on a grid with sub-sample distances in the axial and lateral directions (i.e. smaller than the actual RF sample spacing in the corresponding axis). For all the displacements, a step size of 1/10 of the sample spacing in the corresponding non-steered axis was chosen (i.e. 2 μm axially and 30 μm laterally), forming a grid of 11 × 11 = 121 distinct displacement configurations.
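The angular-compounding reconstruction of (5.2)-(5.3) is a direct two-line computation. A hedged Python sketch (naming is ours; the sign of the lateral term depends on the steering-angle convention assumed here):

```python
import numpy as np

def compound_2d(d_plus, d_minus, theta_deg):
    """Recover the axial and lateral displacement components, eqs. (5.2)-(5.3),
    from the along-beam displacements measured at steering angles +theta and
    -theta. Assumes a nonzero steering angle."""
    t = np.deg2rad(theta_deg)
    d_ax = (d_plus + d_minus) / (2.0 * np.cos(t))   # eq. (5.2)
    d_lat = (d_plus - d_minus) / (2.0 * np.sin(t))  # eq. (5.3)
    return d_ax, d_lat
```

The forward projection D_a^{±θ} = D_a^0 cos θ ± D_l^0 sin θ makes it easy to verify that the reconstruction round-trips exactly for noise-free measurements.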
The number of scatterers per smallest sampling volume was set to 10 to ensure that the speckle of the ultrasound images is fully developed. For each displacement configuration, RF frames corresponding to the non-steered direction and 14 different steering angles comprising 7 pairs of angles (i.e. θ = ±5, ±7.5, ±10, ..., ±20) were simulated. Sample sonograms generated from one of these simulated RF data sets employing different steering angles are shown in Fig 5.2. For comparison purposes, the point spread functions corresponding to all the steering angles are also depicted in Fig. 5.3. The simulated non-steered RF frames (i.e. 121 frames) were used to estimate the motion using the 2D pattern matching methods (i.e. PMC and PMP) and the pairs of simulated steered RF frames (i.e. 14 × 121 = 1694 frames) were used to estimate the motion using the beam steering methods (i.e. BSP and BSC). For all the estimations, the RF frame corresponding to the center of the 2D motion grid was set to be the reference. This resulted in a grid spanning ±0.5 of a sample in both non-steered axes. The region of interest for motion estimation was centered around the transmit focus, and data from both the near field and the far field were removed from the study. Furthermore, since in beam steering techniques only the data in the overlapping beams can be reconstructed, only these regions were used for the study. The default window size for the pattern matching function was set to approximately 2 × 2 mm² (i.e. 104 samples axially and 7 samples laterally). This resulted in more than 1000 displacement estimations from different speckle patterns for each displacement configuration on the grid.
It should be noted that large steps in the lateral direction introduce large motions along the beam direction when steering is employed. To estimate these motions correctly, the size of the search area for the pattern matching function was set to approximately 3 × 3 mm² (i.e. 156 samples axially and 11 samples laterally). In order to have an accurate estimation of the cross correlation at the edges of the search region, the actual data from the echo signals were used instead of zero-padding. The performances of all the estimators are studied in terms of their bias and standard deviation as a function of the sub-sample shift in the axial and lateral directions [32, 33] in order to assess both their accuracy and precision. Simulation of each RF frame would take more than 100 hours using Field II on a single-core computer; four multi-core computers were therefore used in parallel to simulate the RF frames for this study.

5.3.2 Simulation Results

Figure 5.4 shows the bias and standard deviation of both the 2D pattern matching method employing the iterative 1D cosine fit (PMC) and 2D compounding employing the 1D cosine fit (BSC) using different angles, as a function of the sub-sample shift on the 11 × 11 grid. Figure 5.5 shows the same results for both the 2D pattern matching method employing the 2D polynomial fit (PMP) and 2D compounding employing the 1D parabola fit (BSP). Similarly to our previous work, for better visualization of the accuracy, the axial and the lateral bias for each sub-sample shift on the grid are shown with a vector. Error vectors connecting the true displacements to the mean estimated displacements illustrate the directional bias for each of the 121 simulations. In order to show the precision in both directions, an ellipse representation is also used to show the standard deviations for each of the simulations.
The ellipses are centered on the mean displacement estimates, and the radius of each ellipse in each direction corresponds to the standard deviation of the motion estimates in that direction (confidence interval of 0.95). The results show that the performance of 2D tracking using the beam steering method is a function of the employed steering angles. Figs 5.4 and 5.5 show that small steering angles do not provide enough information to accurately track the motion in the lateral direction. Also, the accuracy of the tracking degrades when large steering angles are employed to acquire the echo signals. Figs 5.4 and 5.5 show that 2D tracking using beam steering introduces less bias compared to the 2D pattern matching techniques. The results also show that 2D tracking using beam steering exhibits larger standard deviations when the lateral component of the motion is large. This is consistent with our previous results reported for non-steered images, where we showed that the performance of sub-sample estimation using 1D interpolation degrades in the presence of 2D motion.

Figure 5.2: Scatterer distribution (a) (only a small fraction of all scatterers is plotted for better visualization) and Field II simulated sonograms when different steering angles are employed to acquire the data (b)-(d): (b) θ = 0, (c) θ = −10, (d) θ = 10.

Figure 5.3: Point target phantom imaged for the different steering angles (0, 5, 7.5, 10, 12.5, 15, 17.5, and 20 degrees; axial and lateral axes in mm).
The point spread function images are generated by placing multiple points at multiple depths in front of the transducer and then sweeping the beam over the points.

To simplify the comparison between the results, the overall performance of the beam steering techniques (i.e. BSC and BSP) is shown in Fig 5.6 for the different steering angles employed to acquire the signals. The overall performance is calculated by averaging the absolute biases and standard deviations over the entire 2D motion grid. It should be noted that this is different from our previous work, where we looked at maximum biases and standard deviations. Fig 5.6 shows that employing either very small or very large steering angles degrades the performance of the 2D compound tracking methods. Among the different steering angles studied in this work, 10-12.5 degree steering angles provide the most accurate results. Fig 5.6 also shows that for all the steering angles employed in 2D compound tracking, the 1D cosine fit outperforms the 1D parabola fit. This is consistent with previously published results for non-steered images, where it was shown that the cosine fit has smaller biases and standard deviations compared to the parabola fit in estimating the motion along the direction of beam propagation [19, 32]. In order to study the results quantitatively, the overall performance of all the methods is summarized in Tables 5.1 and 5.2 (mean absolute biases and standard deviations over the 2D motion grid). The tables show that the mean absolute axial and lateral biases of the 2D pattern matching techniques are 0.0076 and 0.0126 of a sample for 2D polynomial interpolation (PMP), which corresponds to 146 nm and 3.95 μm at the simulated 19.3 μm axial sample spacing and 300 μm lateral line spacing.
The mean axial and lateral standard deviations of the same technique are found to be 0.0123 and 0.0273 of a sample, corresponding to 236 nm and 8.6 μm, respectively. For the same data, the mean absolute axial and lateral biases of the 2D pattern matching techniques are found to be 0.0023 and 0.0082 of a sample for iterative 1D cosine interpolation (PMC), corresponding to 43 nm and 2.58 μm, respectively. The mean axial and lateral standard deviations of the same technique are found to be 0.0117 and 0.0253 of a sample, corresponding to 225 nm and 7.97 μm, respectively.

Figure 5.4: Biases and standard deviations of the pattern matching method (a) and beam steering employing different angles (b)-(f) as a function of the sub-sample shift on the 11 × 11 grid, when the iterative 1D cosine fit (a) and the conventional 1D cosine fit (b)-(f) are employed to estimate the sub-sample motion. Field II was used to simulate the echo signals. A total of 1000 realizations were used to generate each bias vector and standard deviation ellipse (window size is ≈ 2 mm × 2 mm). Panels: (a) PMC, (b) BSC (±5), (c) BSC (±7.5), (d) BSC (±10), (e) BSC (±12.5), (f) BSC (±20).

Figure 5.5: Biases and standard deviations of the pattern matching method (a) and beam steering employing different angles (b)-(f) as a function of the sub-sample shift on the 11 × 11 grid, when 2D polynomial fitting (a) and conventional 1D parabola fitting (b)-(f) are employed to estimate the sub-sample motion. Field II was used to simulate the echo signals. A total of 1000 realizations were used to generate each bias vector and standard deviation ellipse (window size is ≈ 2 mm × 2 mm). Panels: (a) PMP, (b) BSP (±5), (c) BSP (±7.5), (d) BSP (±10), (e) BSP (±12.5), (f) BSP (±20).

Figure 5.6: The overall performance of all beam steering methods averaged over the simulated motion grid. The error bars show both the mean absolute bias and the corresponding mean standard deviation in the axial (left) and lateral (right) directions.

Table 5.2 shows that for the same 2D motion grid, the axial and lateral biases of 2D tracking using compound techniques are smaller than those of the 2D pattern matching techniques. The mean absolute axial and lateral biases of 2D tracking using beam steering are found to be 0.0036 and 0.0025 of a sample for 1D parabola interpolation (BSP) at a ±10 degree steering angle, which corresponds to 69 nm and 793 nm, respectively. The mean absolute axial and lateral biases of 2D tracking using beam steering are found to be 0.0016 and 0.0021 of a sample for 1D cosine interpolation (BSC) at a ±10 degree steering angle, which corresponds to 30 nm and 650 nm, respectively. For the same data set, the mean axial and lateral standard deviations of 2D tracking using beam steering are found to be 0.0210 and 0.0123 of a sample for both 1D parabola and 1D cosine interpolation (BSP, BSC), corresponding to 404 nm and 3.86 μm, respectively. As a final comparison step, the overall performances of the 2D pattern matching techniques and 2D compound tracking when ±10 degree steering angles are employed to acquire the data are shown in Fig.
5.7 using error bars. Fig. 5.7 shows that for both interpolation techniques, 2D tracking using beam steering (i.e. BSC and BSP) outperforms the 2D pattern matching techniques (i.e. PMC and PMP) in estimating lateral motion, even with only two steering angles. However, the estimation of motion in the axial direction does not improve significantly. These results suggest that to obtain good measurements in both the axial and lateral directions, one needs to employ more steering angles in the estimation process. Alternatively, non-steered images can be used to estimate the axial motions and steered images can be used to estimate the lateral motions.

Table 5.1: The overall performance of speckle pattern tracking using iterative 1D cosine fitting and compound imaging using 1D cosine fitting in simulated data. Biases are mean absolute biases; σ denotes the mean standard deviation.

Method (±θ)  | bias_ax (sample) | bias_ax (μm) | bias_lat (sample) | bias_lat (μm) | σ_ax (sample) | σ_ax (μm) | σ_lat (sample) | σ_lat (μm)
PMC          | 0.0023 | 0.0435 | 0.0082 | 2.5821 | 0.0117 | 0.2255 | 0.0253 | 7.9712
BSC (±5)     | 0.0026 | 0.0500 | 0.0022 | 0.3628 | 0.0268 | 0.5159 | 0.0182 | 5.7201
BSC (±7.5)   | 0.0025 | 0.0478 | 0.0023 | 0.7197 | 0.0247 | 0.4746 | 0.0123 | 3.8596
BSC (±10)    | 0.0016 | 0.0302 | 0.0021 | 0.6510 | 0.0210 | 0.4043 | 0.0123 | 3.8662
BSC (±12.5)  | 0.0043 | 0.0833 | 0.0019 | 0.5940 | 0.0240 | 0.4626 | 0.0125 | 3.9248
BSC (±15)    | 0.0053 | 0.1024 | 0.0015 | 0.4816 | 0.0283 | 0.5449 | 0.0126 | 3.9719
BSC (±17.5)  | 0.0038 | 0.0732 | 0.0031 | 0.9794 | 0.0353 | 0.6797 | 0.0147 | 4.6440
BSC (±20)    | 0.0073 | 0.1402 | 0.0058 | 1.8405 | 0.0483 | 0.9289 | 0.0180 | 5.6754

5.4 Experiments

5.4.1 Experimental Setup

The experimental setup is shown in Fig. 5.8. A 3-axis AIMS ultrasound scanning system (Onda Corp., Sunnyvale, CA) with 10 μm resolution in each axis, mounted on a water tank equipped with a water conditioning system (Onda Corp., Sunnyvale, CA), was used. Experiments were performed on a 60 × 40 × 40 mm³ uniform phantom. The phantom was prepared using 100% polyvinyl chloride (PVC) plasticizer (M-F Manufacturing Co., Inc., Fort Worth, TX, USA) with two percent cellulose (Sigma-Aldrich Inc., St Louis, MO, USA) as scatterers [34]. The phantom was inserted in a tank of degassed water and placed 2 mm away from the transducer, thus enabling the transducer to move without deforming the phantom, producing rigid motions. The phantom was moved on a 6 × 6 2D grid in steps of 60 μm in both axes. This resulted in a square grid spanning approximately fifteen samples axially and one fifth of a sample laterally. The phantom was imaged using a linear transducer (L9-5/38) of a SonixRP ultrasound machine (Ultrasonix Medical Corporation, Richmond, BC).

Table 5.2: The overall performance of speckle pattern tracking using 2D polynomial fitting and compound imaging using 1D parabola fitting in simulated data. Biases are mean absolute biases; σ denotes the mean standard deviation.

Method (±θ)  | bias_ax (sample) | bias_ax (μm) | bias_lat (sample) | bias_lat (μm) | σ_ax (sample) | σ_ax (μm) | σ_lat (sample) | σ_lat (μm)
PMP          | 0.0076 | 0.1464 | 0.0126 | 3.9575 | 0.0123 | 0.2367 | 0.0273 | 8.5925
BSP (±5)     | 0.0056 | 0.1073 | 0.0031 | 0.9907 | 0.0268 | 0.5161 | 0.0182 | 5.7217
BSP (±7.5)   | 0.0041 | 0.0789 | 0.0029 | 0.9026 | 0.0247 | 0.4751 | 0.0123 | 3.8634
BSP (±10)    | 0.0036 | 0.0697 | 0.0025 | 0.7938 | 0.0210 | 0.4048 | 0.0123 | 3.8681
BSP (±12.5)  | 0.0055 | 0.1063 | 0.0023 | 0.7318 | 0.0241 | 0.4640 | 0.0125 | 3.9289
BSP (±15)    | 0.0069 | 0.1322 | 0.0017 | 0.5408 | 0.0283 | 0.5454 | 0.0126 | 3.9727
BSP (±17.5)  | 0.0056 | 0.1080 | 0.0031 | 0.9792 | 0.0354 | 0.6810 | 0.0147 | 4.6354
BSP (±20)    | 0.0094 | 0.1813 | 0.0057 | 1.7796 | 0.0481 | 0.9250 | 0.0179 | 5.6406
The phantom was imaged to a depth of 50 mm (a 2 mm water gap plus 48 mm of phantom) using a 128-element transducer with a 5 MHz center frequency, digitized at 40 MHz, with 300 μm lateral line spacing. For each displacement configuration, RF frames corresponding to the non-steered direction and 6 different steering angles comprising 3 pairs of angles (i.e. 0, ±5, ±10, ±15) were acquired. Including the reference position, a total of 7 × (36 + 1) = 259 RF frames were recorded for off-line processing. The acquired non-steered RF frames (i.e. 37 frames) were used to estimate the motion using the 2D pattern matching methods (PMP, PMC) and the acquired steered RF frames (i.e. 6 × 37 = 222 frames) were used to estimate the motion using the beam steering methods (BSP, BSC).

5.4.2 Experimental Results

The experimental results are shown in Fig. 5.9. It should be noted that in the simulations (i.e. Figs 5.4 and 5.5), the distance between grid points in both directions was 1/10 of a sample, whereas the distance between grid points in the experiments (i.e. Fig. 5.9) is 30 times larger axially (i.e. 3 samples vs 1/10 of a sample) and 2 times larger laterally (i.e. 1/5 of a sample vs 1/10 of a sample). For better visualization, the standard deviation ellipses are scaled by a factor of four in both directions. The experimental results show very good agreement with the simulation results. Fig. 5.9 shows that the performance of the 2D compound tracking techniques is a function of the steering angle employed for data acquisition. The smallest biases and standard deviations are achieved at steering angles of ±10 for the 2D compound tracking techniques. Fig. 5.9 also shows that 2D tracking using beam steering has smaller biases compared to 2D pattern matching.
However, 2D tracking using beam steering exhibits larger standard deviations when transverse motion exists in the motion field, consistent with the simulation results.

Figure 5.7: The overall performance of all methods (PMP, PMC, BSP, BSC) averaged over the simulated motion grid, in units of sub-samples, for the axial and lateral directions. The bars correspond to the mean absolute biases and the lines correspond to their mean standard deviations.

5.5 Discussions and Conclusion

In this work, in order to study the performance of the 2D compound tracking technique as a function of the steering angles, we only considered the reconstruction of motion vectors from two acquisitions with the same steering angle but opposite directions. However, as suggested by a number of authors, at the cost of longer data acquisition and processing times, the motion vector can be reconstructed from multiple steering angles to improve the performance of 2D tracking using angular compounding techniques. As the results of this work show, however, different weightings need to be applied to the estimates from different steering angles in the reconstruction process. This is consistent with the results reported in [29]. It should be noted that pattern matching techniques only require one set of echo signals to estimate the motion vectors, whereas in beam steering techniques, one set of echo signals is required for each steering angle. Thus, the data acquisition time in beam steering techniques is proportional to the number of steering angles employed. This may introduce some limitations if a high acquisition rate is of interest.
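The multi-angle generalization with per-angle weighting can be posed as a weighted least-squares problem: each steered acquisition measures the projection D_k = D_a cos θ_k + D_l sin θ_k of the motion vector onto its beam direction. The sketch below is our own formulation for illustration, not a method from the cited works; the weights (e.g. derived from peak correlation values) are hypothetical. With two angles ±θ and equal weights it reduces to (5.2)-(5.3):

```python
import numpy as np

def compound_multi_angle(d_along, thetas_deg, weights=None):
    """Weighted least-squares reconstruction of the 2D motion vector from
    along-beam displacements d_along measured at several steering angles.
    Per-angle model: d_k = d_ax*cos(theta_k) + d_lat*sin(theta_k)."""
    t = np.deg2rad(np.asarray(thetas_deg, dtype=float))
    A = np.column_stack([np.cos(t), np.sin(t)])  # projection matrix
    b = np.asarray(d_along, dtype=float)
    if weights is not None:
        w = np.sqrt(np.asarray(weights, dtype=float))
        A, b = A * w[:, None], b * w            # whiten by the weights
    (d_ax, d_lat), *_ = np.linalg.lstsq(A, b, rcond=None)
    return d_ax, d_lat
```

At least two distinct angles are needed for the system to be solvable; additional angles over-determine it, and down-weighting the less reliable (e.g. largest-angle) measurements plays the role of the weighting discussed above.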
Figure 5.8: The experimental setup (a) showing the positioning of the transducer with respect to the phantom on the 3-axis motion stage inside the water tank. A sample sonogram (b) acquired from the SonixRP ultrasound machine is also shown.

A single run of the motion estimation is enough to generate the vector image in pattern matching techniques, whereas in beam steering techniques, aside from the reconstruction step, each set of steered echo signals must undergo the motion estimation process prior to the reconstruction of the final motion vector. As a result, the processing time is also proportional to the number of steering angles employed in the data acquisition. Moreover, beam steering techniques are only able to reconstruct the motion vectors in the overlapping region. Thus, they introduce some limitations when large steering angles are employed or when imaging deep structures is of interest, where it can be difficult to get the necessary acoustic window for beam overlap. This problem does not exist in motion vector
Figure 5.9: Experimental displacement estimates of the 2D pattern matching techniques (1st column) and the beam steering techniques employing different angles (2nd, 3rd, and 4th columns) on the 6 × 6 motion grid, in units of samples. Panels: (a) PMC, (b) BSC ±5, (c) BSC ±10, (d) BSC ±15, (e) PMP, (f) BSP ±5, (g) BSP ±10, (h) BSP ±15.
For better visualization of standard deviations, the radius of each ellipse in each direction corresponds to four times the standard deviation of motion estimation in that given direction. Data was acquired at a 40 MHz axial sampling rate and 300 \u00F0\u009D\u009C\u0087m line spacing. A total of 1000 realizations from unique speckle patterns inside a region of interest were used to generate each circle and ellipse (window size is \u00E2\u0089\u0088 2 \u00C3\u0097 2 \u00F0\u009D\u0091\u009A\u00F0\u009D\u0091\u009A2 , which is equivalent to 104 samples axially and 7 samples laterally). 113 \u000CChapter 5. Beam Steering estimation using pattern matching techniques. In the simulation, the line spacing was intentionally set to 300 \u00F0\u009D\u009C\u0087m in order to be consistent with our experimental setup. However, this spacing is much larger than the typical line spacing in current ultrasound imaging systems. The biases and the standard deviations of all the methods are expected to improve with improved lateral resolution. The performance of the 2D tracking using beam steering techniques is shown to be dependent on the steering angle being employed. However, the fact that the smallest biases and standard deviations are achieved for steering angles of \u00C2\u00B110 only holds for the transducer used in this work and should not be generalized to all transducers. The result presented in this work should not be compared with the results reported in [23] where mechanical rotation of the transducer was employed to acquire the RF data from multiple views. When employing mechanical rotation or multiple transducers to acquire the data from multiple angles, the imaging system remains unchanged. However, as shown in this work, when employing beam steering techniques, image properties change as we change the steering angle (Fig. 5.3). 
Amongst the different techniques to acquire data from multiple look angles, beam steering is studied in this work since it does not require any mechanical overhead, which is clinically desirable. The results presented in this work show that the performance of pattern matching techniques with 2D sub-sample estimation using 2D interpolation is close to that of 2D tracking employing beam steering techniques. However, with proper selection of steering angles, 2D tracking using beam steering techniques still outperforms 2D pattern matching techniques, especially in estimating lateral motion. This can be explained by the fact that in 2D tracking using beam steering techniques, the motion vectors (both axial and lateral components) are reconstructed from multiple measurements along the directions of beam propagation, each of which has high accuracy and precision. In 2D tracking using pattern matching techniques, however, lateral motion is simply estimated by tracking in the direction transverse to the beam, which results in less accurate and less precise estimates. In this work, we only compared the performance of 2D pattern matching equipped with 2D interpolation techniques with that of 2D compound tracking employing conventional 1D interpolation techniques. The results show that beam steering techniques exhibit large standard deviations in the presence of transverse motion, especially in the axial direction. This can be explained by the fact that the performance of 2D tracking using beam steering techniques is still limited by the 1D interpolation techniques used for sub-sample motion estimation. Employing 2D interpolation methods is expected to improve the performance of 2D tracking using beam steering techniques in terms of both accuracy and precision. The adaptation of 2D interpolation methods to steered coordinates, in order to improve their accuracy and precision, will be the topic of our future work.

References

[1] C. Kasai, K.
Namekawa, A. Koyano, and R. Omoto, "Real-time two-dimensional blood flow imaging using an autocorrelation technique," IEEE Transactions on Sonics and Ultrasonics, vol. 32, pp. 458–464, 1985.
[2] T. Loupas, R. Peterson, and R. Gill, "Experimental evaluation of velocity and power estimation for ultrasound blood flow imaging, by means of a two-dimensional autocorrelation approach," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 42, pp. 689–699, Jul 1995.
[3] H. Torp, K. Kristoffersen, and B. Angelsen, "Autocorrelation techniques in color flow imaging: signal model and statistical properties of the autocorrelation estimates," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 41, pp. 604–612, 1994.
[4] A. Heimdal, A. Stoylen, H. Torp, and T. Skjaerpe, "Real-time strain rate imaging of the left ventricle by ultrasound," Journal of the American Society of Echocardiography, vol. 11, pp. 1013–1019, Nov 1998.
[5] J. Ophir, I. Cespedes, H. Ponnekanti, Y. Yazdi, and X. Li, "Elastography: a quantitative method for imaging the elasticity of biological tissues," Ultrasonic Imaging, vol. 13, pp. 111–134, April 1991.
[6] L. Bohs, B. Friemel, and G. Trahey, "Experimental velocity profiles and volumetric flow via two-dimensional speckle tracking," Ultrasound in Medicine and Biology, vol. 21, pp. 885–898, 1995.
[7] M. Lubinski, S. Emelianov, and M.
O'Donnell, "Speckle tracking methods for ultrasonic elasticity imaging using short-time correlation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 46, pp. 82–96, January 1999.
[8] K. Nightingale, M. Palmeri, R. Nightingale, and G. Trahey, "On the feasibility of remote palpation using acoustic radiation force," Journal of the Acoustical Society of America, vol. 110, pp. 625–634, July 2001.
[9] W. Walker, F. Fernandez, and L. Negron, "A method of imaging viscoelastic parameters with acoustic radiation force," Physics in Medicine and Biology, vol. 45, pp. 1437–1447, 2000.
[10] M. Fatemi and J. Greenleaf, "Probing the dynamics of tissue at low frequencies with the radiation force of ultrasound," Physics in Medicine and Biology, vol. 45, pp. 1449–1464, June 2000.
[11] P. Wells, "Ultrasonic colour flow imaging," Physics in Medicine and Biology, vol. 39, pp. 2113–2145, 1994.
[12] R. Gill, "Measurement of blood flow by ultrasound: accuracy and sources of error," Ultrasound in Medicine and Biology, vol. 11, pp. 625–641, Aug 1985.
[13] E. Konofagou and J. Ophir, "Precision estimation and imaging of normal and shear components of the 3D strain tensor in elastography," Physics in Medicine and Biology, vol. 45, pp. 1553–1563, 2000.
[14] J. Greenleaf, M. Fatemi, and M. Insana, "Selected methods for imaging elastic properties of biological tissues," Annual Review of Biomedical Engineering, vol. 5, pp. 57–78, 2003.
[15] H. Shi and T.
Varghese, "Two-dimensional multi-level strain estimation for discontinuous tissue," Physics in Medicine and Biology, vol. 52, pp. 389–401, Nov 2007.
[16] B. Geiman, L. Bohs, M. Anderson, S. Breit, and G. Trahey, "A novel interpolation strategy for estimating subsample speckle motion," Physics in Medicine and Biology, vol. 45, pp. 1541–1552, 2000.
[17] L. Bohs and G. Trahey, "A novel method for angle independent ultrasonic imaging of blood flow and tissue motion," IEEE Transactions on Biomedical Engineering, vol. 38, pp. 280–286, March 1991.
[18] G. Trahey, J. Allison, and O. Von Ramm, "Angle independent ultrasonic detection of blood flow," IEEE Transactions on Biomedical Engineering, vol. 34, pp. 965–967, December 1987.
[19] I. Cespedes, Y. Huang, J. Ophir, and S. Spratt, "Methods for the estimation of subsample time-delays of digitized echo signals," Ultrasonic Imaging, vol. 17, pp. 142–171, 1995.
[20] P. de Jong, T. Arts, A. Hoeks, and R. Reneman, "Determination of tissue motion velocity by correlation interpolation of pulsed ultrasonic echo signals," Ultrasonic Imaging, vol. 12, pp. 84–98, 1990.
[21] B. Geiman, L. Bohs, M. Anderson, S. Breit, and G. Trahey, "A comparison of algorithms for tracking sub-pixel speckle motion," in Proceedings of the IEEE Ultrasonics Symposium, vol. 2, 5–8 Oct 1997, pp. 1239–1242.
[22] S. Foster, P. Embree, and W.
O'Brien, "Flow velocity profile via time-domain correlation: error analysis and computer simulation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 37, pp. 164–175, May 1990.
[23] U. Techavipoo, Q. Chen, T. Varghese, and J. Zagzebski, "Estimation of displacement vectors and strain tensors in elastography using angular insonifications," IEEE Transactions on Medical Imaging, vol. 23, pp. 1479–1489, 2004.
[24] M. Rao, Q. Chen, H. Shi, T. Varghese, E. Madsen, J. Zagzebski, and T. Wilson, "Normal and shear strain estimation using beam steering on linear-array transducers," Ultrasound in Medicine and Biology, vol. 33, pp. 57–66, Jan 2007.
[25] H. Chen and T. Varghese, "Noise analysis and improvement of displacement vector estimation from angular displacements," Medical Physics, vol. 35, pp. 2007–2017, 2008.
[26] L. Capineri, M. Scabia, and L. Masotti, "Vector Doppler: spatial sampling analysis and presentation techniques for real time systems," Journal of Electronic Imaging, vol. 12, pp. 489–498, July 2003.
[27] O. Kripfgans, J. Rubin, A. Hall, and J. Fowlkes, "Vector Doppler imaging of a spinning disc ultrasound Doppler phantom," Ultrasound in Medicine and Biology, vol. 32, pp. 1037–1046, 2006.
[28] L. Sandrin, M. Tanter, D. Cassereau, S. Catheline, and M. Fink, "Ultrafast compound imaging for 2D motion vector estimation: application to transient elastography," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 49, pp. 1363–1374, October 2002.
[29] M. Rao and T.
Varghese, "Spatial angular compounding for elastography without the incompressibility assumption," Ultrasonic Imaging, vol. 27, pp. 256–270, 2005.
[30] J. A. Jensen, "A model for the propagation and scattering of ultrasound in tissue," Journal of the Acoustical Society of America, vol. 89, pp. 182–191, 1991.
[31] J. Jensen, Ed., Ultrasound Imaging and its Modeling. Springer Verlag, 2002, chapter in Imaging of Complex Media with Acoustic and Seismic Waves, Topics in Applied Physics.
[32] F. Viola, R. Coe, O. K., D. Guenther, and W. Walker, "Multi-dimensional spline-based estimator (MUSE) for motion estimation: algorithm development and initial results," Annals of Biomedical Engineering, vol. 36, pp. 1942–1960, September 2008.
[33] R. Zahiri-Azar and S. Salcudean, "Time-delay estimation in ultrasound echo signals using individual sample tracking," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, pp. 2640–2650, 2008.
[34] S. DiMaio and S. Salcudean, "Needle insertion modelling and simulation," IEEE Transactions on Robotics and Automation: Special Issue on Medical Robotics, vol. 19, pp. 864–875, 2003.

Chapter 6

High Frame Rate Ultrasound for 2D Motion Estimation

6.1 Introduction

High frame rate motion estimation has proved to be critical for a range of clinically used ultrasound imaging modes. These include blood flow estimation [1, 2], tissue velocity and strain rate imaging [3, 4], elasticity imaging [5–7], acoustic radiation force imaging (ARFI) [8–10], studying the propagation of mechanical waves in tissue [11–13], and cardiovascular imaging [14, 15].
Conventional ultrasound systems are based on line-by-line acquisition of the echo signals to build up the entire 2D image. As a result, the acquisition time of each frame in these systems is proportional to the number of scan lines and the acquisition time of each scan line in that frame. There is thus a fundamental trade-off between temporal resolution and image size. Several techniques have been proposed in the literature to increase the imaging frame rate. In one simple approach, the number of scan lines is reduced. This increases the frame rate but also reduces the field of view (FOV) and/or the spatial resolution, depending on the spacing between scan lines. To study myocardial motion, a frame rate of 200 Hz was achieved in [16] by reducing the number of scan lines while keeping the spacing between scan lines the same, thus significantly reducing the FOV. In [17], a frame rate of 450 Hz was achieved by reducing the number of scan lines and increasing the spacing between them, thus keeping the full FOV but significantly reducing the spatial resolution. In another approach, high temporal resolution is achieved by beam interleaving. This technique divides the region of interest (ROI) into small sectors and acquires each sector at a high temporal resolution (200 Hz to 10 kHz, depending on the number of scan lines per sector and the imaging depth) for a short period of time before moving on to the next sector, and so on, until observations for the entire ROI have been acquired. Assuming that the time between the acquisition of neighboring scan lines is small, the acquisition in each sector can be considered a snapshot of the speckle movements. This technique provides both high spatial and high temporal resolution. However, large delays are introduced between the data acquired from different sectors.
This technique is commonly used in conventional color flow imaging, power Doppler imaging, and B-flow imaging [18, 19]. The same technique is also used in [4] to evaluate regional myocardial deformation and in [12] to study the propagation of crawling waves in tissue using tissue Doppler imaging. Using the same acquisition scheme, compound Doppler imaging has also been attempted in the literature to estimate the motion vectors using both beam steering and multi-synthetic-aperture beamforming [20, 21]. In these techniques the data from the ROI are acquired from multiple look angles. Once the data from multiple angles are acquired, the motions estimated along multiple directions are compounded to construct the motion vectors inside the overlapping region. With the help of parallel receive beamformers, techniques such as multi-line acquisition (MLA) have also been used to increase the frame rate of conventional ultrasound machines, where multiple echo signals (typically 2–8) are acquired from a single transmit [22], thus multiplying the effective frame rate at little cost to the resolution [14]. In [23], MLA was used to reduce transducer heating and acoustic exposure, and to facilitate data acquisition for real-time ARFI imaging. The idea of MLA has also been extended to the acquisition of the entire image as opposed to multiple lines, drastically increasing the effective frame rate (>5 kHz). This method is generally referred to as ultrafast imaging, where a single unfocused plane wave is used for transmit and parallel receive beamformers (typically 64–128) are used to generate the scan lines.

Footnote 5: A version of this chapter has been submitted for publication. Reza Zahiri-Azar, Ali Baghani, and Septimiu E. Salcudean, "High Frame Rate Ultrasound for 2D Motion Estimation".
In [11, 24], ultrafast imaging was used to capture the propagation of transient shear waves in soft tissue and to estimate tissue elasticity. In [25], ultrafast imaging was combined with angular compounding using multi-synthetic-aperture beamforming to follow both the axial and the lateral components of the motion during shear wave propagation at a frame rate of 6 kHz. Even though very effective, MLA and ultrafast imaging are not generally available on conventional ultrasound systems; additional hardware is required to implement either of these techniques on conventional systems. Techniques such as coded excitation have also been introduced in the literature to increase the frame rate of ultrasound acquisition [26–28]. However, these techniques increase the beam density and, similar to MLA and ultrafast imaging, require specialized hardware. In another approach to achieving high frame rates, synchronization techniques have been suggested in the literature. The data acquisition in these techniques is similar to that of conventional color flow and power Doppler imaging to achieve high temporal resolution. However, to eliminate the long delays between sectors, the start of data acquisition for each sector is synchronized with the exciter, which varies from an external actuator (e.g., a mechanical vibrator) to a signal generated in the body (e.g., the electrocardiogram, ECG). In [29], by synchronizing the data acquisition with an external exciter, the shear wave propagation in the scan plane was imaged at a frame rate of 6 kHz using a single element transducer. A similar approach was used in [30] to study both transient and harmonic shear wave scattering in two and three dimensions using linear array transducers at a frame rate of 4 kHz.
By synchronizing the image acquisition with the ECG signals, the propagation of several transient mechanical waves was imaged in different regions of the myocardium in mice at a frame rate of 8 kHz in [31] and in humans at a frame rate of 481 Hz in [15]. These techniques require additional hardware to synchronize the excitation and the data acquisition. The number of observations in these techniques is generally higher than in Doppler techniques (typically 20–100), since the temporal variation of the echo signal is of interest, as opposed to the average time-shift/phase-shift in flow and Doppler measurement techniques. Also, at the end of imaging each sector, the system needs to wait long enough to ensure that the tissue returns to its initial position before the next excitation; otherwise, artifacts appear in the final image when different sectors are stitched together. In the case of ECG synchronization, the waiting time for each sector is determined by the heart rate. This waiting time generally increases the total data acquisition time of these techniques. Recently, we have developed a high frame rate ultrasound imaging system that uses a custom sequencer, similar to the ones used in color flow imaging and power Doppler imaging, followed by a delay compensation technique. Acquisition delays are compensated to reconstruct in-phase tissue motions at a virtual high frame rate using conventional ultrasound. In this work we combine this technique with angular compounding using beam steering to develop an ultrasound system that reconstructs 2D motion vectors from multiple 1D motion measurements estimated along different angles at a virtual high frame rate. This chapter is structured as follows: Section 6.2 presents the data acquisition scheme.
Section 6.3 presents the signal processing routines applied to the acquired echo signals. Section 6.4 presents the experimental results. Finally, Sections 6.5 and 6.6 present discussions and conclusions along with avenues for future research.

6.2 Data Acquisition

The data acquisition in this technique is similar to compound Doppler imaging using beam steering techniques [21]. The data from the ROI are acquired from multiple steering angles, which forms the basis for motion estimation. For each scan line, a series of N pulses is acquired. This acquisition scheme is referred to as packet acquisition, and the number of pulses N as the packet size. Using this technique, the frame rate requirement is met by the beam interleaving techniques described above, in which the frame rate for speckle pattern imaging is the pulse repetition frequency (PRF) used during acquisition. This technique can be described as follows. The ultrasonic pulse needs to propagate a distance equal to twice the imaging depth d_img before a new pulse can be transmitted. The maximum possible PRF is thus given by:

PRF_max = 1/T_l = c/(2 d_img),    (6.1)

where T_l is the time required to acquire one scan line and c is the ultrasound speed. By decreasing the PRF by a factor K, there is time to acquire K − 1 other scan lines.
These K scan lines form a sector, and the number K itself is called the sector size, which can be expressed as:

K = ⌊PRF_max / PRF⌋,    (6.2)

where the floor operator ⌊·⌋ rounds down to the nearest integer. The number of sectors S in the ROI is given by:

S = ⌈L / K⌉,    (6.3)

where L is the number of scan lines (line density), determined by the image width and line spacing, and the ceiling operator ⌈·⌉ rounds up to the nearest integer. It should be noted that for a fixed imaging depth, PRF_max can be decreased by adding a waiting time at the end of each scan line acquisition (i.e., PRF_max becomes 1/(T_l + T_w) instead of 1/T_l, where T_w is the additional wait time). This technique can be used to make PRF_max an integer multiple of PRF.
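Relations (6.1)–(6.3) can be wrapped in small helpers; the following is a minimal sketch (the function names are ours, not from the thesis), assuming a nominal sound speed of 1540 m/s:

```python
import math

C = 1540.0  # assumed speed of sound in tissue (m/s)

def prf_max(depth_m, t_wait=0.0):
    """Maximum PRF for a given imaging depth, eq. (6.1).

    The pulse must travel to depth d_img and back before the next
    transmit; an optional wait time T_w lowers the effective maximum,
    PRF_max = 1/(T_l + T_w).
    """
    t_line = 2.0 * depth_m / C  # T_l: time to acquire one scan line
    return 1.0 / (t_line + t_wait)

def sector_size(prf_maximum, prf):
    """Sector size K = floor(PRF_max / PRF), eq. (6.2)."""
    return math.floor(prf_maximum / prf)

def num_sectors(n_lines, k):
    """Number of sectors S = ceil(L / K), eq. (6.3)."""
    return math.ceil(n_lines / k)
```

For example, a 50 mm imaging depth gives PRF_max = 1540/(2 × 0.05) = 15.4 kHz, matching the value used in Section 6.4.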
This way, (6.2) reduces to K = PRF_max/PRF. Also, when using (6.3), L might not be a multiple of K (i.e., SK > L); as a result, extra wait times need to be programmed in the sequencer to keep the PRF fixed for all the scan lines. This virtually increases the number of scan lines to SK. In order to add angular compounding to this technique, several strategies can be employed. In one straightforward approach, the process described above is repeated for each steering angle independently. This results in large delays between the acquisitions of the same tissue location with different steering angles. In another approach, different steering angles are acquired inside the same sector. This minimizes the delay between the acquisitions of the same scan line with different steering angles, but maximizes the delay between the acquisition of the first and the last sector. These two techniques are shown in Fig. 6.1 for the case of two steering angles. The numbers indicate the timing of each scan line acquisition. Note that the number of sectors is multiplied by the number of steering angles employed in data acquisition.
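The two sequencing strategies can be illustrated by generating the scan line orderings; this is our reading of Fig. 6.1 as a sketch (the function names and the exact loop nesting are our assumptions, not the thesis sequencer):

```python
def method_one(n_lines, k, packet, angles):
    """Method I: acquire each steering angle independently,
    all sectors of one angle before moving to the next angle."""
    order = []
    for a in angles:                          # one full pass per angle
        for s0 in range(0, n_lines, k):       # sectors of k lines
            for _ in range(packet):           # N repetitions per sector
                for l in range(s0, min(s0 + k, n_lines)):
                    order.append((a, l))      # (steering angle, line)
    return order

def method_two(n_lines, k, packet, angles):
    """Method II: revisit the same sector for every angle before
    moving on, minimizing the inter-angle delay per line."""
    order = []
    for s0 in range(0, n_lines, k):
        for a in angles:
            for _ in range(packet):
                for l in range(s0, min(s0 + k, n_lines)):
                    order.append((a, l))
    return order
```

With L = 8, K = 4, N = 4, and two angles, both orderings contain the 64 pulses of Fig. 6.1; they differ only in when each (angle, line) pair is visited.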
Regardless of the pulsing strategy used to acquire the data, the total acquisition time is always equal to the number of lines SK, times the packet size N, times the acquisition time of each line T_l. This can be formulated as follows:

T_total = S × K × N × T_l.    (6.4)

The observation time for each scan line is the number of observations N times the pulse repetition period (PRP), where PRP = 1/PRF. Following data acquisition, the input data available for processing are radio frequency (RF) echo signals arranged in independent packets. Each packet of data corresponds to time samples from one sample volume in the image, sampled at the PRF of the system. The signal processing performed on the input data is described in Section 6.3.

6.3 Signal Processing

A block diagram showing the basic signal processing blocks used in the 2D high frame rate system is given in Fig. 6.2. Using this block diagram as a reference, a detailed description of each block is presented in the following subsections.

6.3.1 Motion Estimation and Phasor Computation

Following the data acquisition, commonly used 1D delay estimation algorithms, including time-shift and phase-shift estimators [32], can be applied to find the 1D motions along each steering angle at each spatial location. This process is applied to each scan line independently and thus is not affected by the data acquisition scheme.
We denote the estimated displacement by u(n, l, d, θ), where n is the packet index, l is the scan line index, d is the depth of the reference window used in motion estimation, and θ is the steering angle employed to acquire the data. Once the motions at each spatial location are estimated for all the packets, the time series of displacements is transformed into the frequency domain. The phasors can then be extracted by selecting the frequency of interest from the list of permissible frequencies at every spatial location. We denote the estimated displacement phasor by U(f_e, l, d, θ), where f_e is the frequency of interest.

6.3.2 Phase Correction

Once the displacement phasors are estimated, their phases need to be compensated to correct for all the delays introduced during the data acquisition. This phase correction step can be formulated as follows:

U_c(f_e, l, d, θ) = U(f_e, l, d, θ) exp(−j2πf_e t_d),    (6.5)
where U_c is the corrected phasor and t_d is the acquisition delay of the corresponding phasor with respect to the start of the sequencing, calculated based on d, PRF_max, and the sequencing order used to acquire the data. The detailed description of this step is provided in [33] and is not repeated here. It should be noted that adding steering angles to the data acquisition does not introduce difficulties for the phase correction process, as long as the acquisition order is correctly considered in the phase correction routine.

Figure 6.1: Two different techniques for acquiring high frame rate data from two steering angles with eight scan lines (L = 8) and a sector size of K = 4, which results in four sectors (S = 2 × 2 = 4), PRF = PRF_max/4, and a packet size of N = 4. The numbers indicate the sequence of the 64 pulses.

Figure 6.2: A diagram showing the basic signal processing blocks used in 2D high frame rate imaging.
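The phasor computation of Section 6.3.1 and the phase correction of (6.5) can be sketched in NumPy as follows; this is a minimal illustration under our assumption that each displacement packet is uniformly sampled at the PRF (the function names are ours):

```python
import numpy as np

def displacement_phasor(u, prf, f_e):
    """Extract the complex phasor of a displacement time series at
    the frequency of interest f_e (one packet, sampled at the PRF)."""
    n = len(u)
    freqs = np.fft.rfftfreq(n, d=1.0 / prf)  # permissible frequencies
    spectrum = np.fft.rfft(u)
    k = np.argmin(np.abs(freqs - f_e))       # nearest frequency bin
    return 2.0 * spectrum[k] / n             # amplitude-scaled phasor

def phase_correct(U, f_e, t_d):
    """Compensate the acquisition delay t_d of this sample, eq. (6.5)."""
    return U * np.exp(-1j * 2.0 * np.pi * f_e * t_d)
```

For a sinusoidal displacement with an integer number of periods in the packet, the extracted phasor carries the amplitude and phase of the motion at f_e, and the correction rotates its phase by −2πf_e t_d.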
6.3.3 2D Reconstruction

Once the 1D phasors are corrected for all the delays, their coordinates need to be transformed to Cartesian coordinates to correct for the steering angle employed in the data acquisition:

U_c(f_e, l, d, θ) → U_c(f_e, x, y, θ),   (6.6)

where U_c is the corrected phasor in Cartesian coordinates and x and y are the axial and lateral axes, respectively. For the purpose of this work, linear interpolation was employed for the spatial alignment. Once the 1D phasors are spatially aligned, the 2D phasors are reconstructed from the 1D phasors at each spatial location according to the following equations [21]:

U_ax(f_e, x, y) = [U_c(f_e, x, y, +θ) + U_c(f_e, x, y, −θ)] / (2 cos θ),   (6.7)

U_la(f_e, x, y) = [U_c(f_e, x, y, +θ) − U_c(f_e, x, y, −θ)] / (2 sin θ),   (6.8)

where U_ax(f_e, x, y) and U_la(f_e, x, y) are the axial and lateral phasors, respectively. This scheme is shown in Fig. 6.3.

6.4 Experimental Results

In order to study the performance of the proposed system, the following experiments were conducted. In one experiment the system was used to measure both the axial and lateral components of the flow. This experiment was used to validate the data acquisition and 2D motion vector reconstruction steps of the system in Fig. 6.2, since the phasor computation and phase correction steps are not required to measure the average flow velocity over a short period of time. In another experiment the system was used to measure both the axial and lateral components of the motion resulting from the propagation of mechanical waves in the tissue. This experiment was used to validate the entire system, including the phasor computation and phase correction steps.

Figure 6.3: Schematics for the reconstruction of a 2D measurement using 1D measurements estimated from two steering angles.

6.4.1 2D Flow Measurements

The experimental setup is shown in Fig. 6.4. The experiment was performed on a Doppler flow phantom (1425A LE Doppler Flow System, Gammex, Middleton, USA).
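Equations (6.7) and (6.8) amount to a two-line computation per spatial location; a minimal sketch:

```python
import numpy as np

def reconstruct_2d(u_plus, u_minus, theta):
    """Axial and lateral phasors from two steered 1D phasors,
    following Eqs. (6.7) and (6.8).

    u_plus  -- U_c(f_e, x, y, +theta), measured along the +theta beam
    u_minus -- U_c(f_e, x, y, -theta), measured along the -theta beam
    theta   -- steering angle in radians
    """
    u_ax = (u_plus + u_minus) / (2.0 * np.cos(theta))   # Eq. (6.7)
    u_la = (u_plus - u_minus) / (2.0 * np.sin(theta))   # Eq. (6.8)
    return u_ax, u_la

# A purely axial unit motion projects as cos(theta) onto both beams:
theta = np.deg2rad(10.0)
u_ax, u_la = reconstruct_2d(np.cos(theta), np.cos(theta), theta)
print(round(float(u_ax), 6), round(float(u_la), 6))   # 1.0 0.0
```

The sum of the two beam projections cancels the lateral component and the difference cancels the axial one, which is why only the overlap region of the two steered beams can be reconstructed.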
The phantom was imaged using a SonixRP ultrasound machine (Ultrasonix Medical Corporation, Richmond, BC, Canada) with an L9-4/38 linear array transducer with a 5 MHz center frequency and 300 μm line spacing. The RF signal, digitized at 40 MHz, was collected to a depth of 50 mm. Two steering angles (±10 degrees) were employed to acquire the data. Referring to Fig. 6.1, the second sequencing strategy was employed in order to minimize the delay between the scan lines acquired from the same spatial location using different steering angles. Referring to (6.1), an imaging depth of 50 mm results in a PRF_max of 15.4 kHz. Wait times were added to the end of each scan line acquisition to decrease the PRF_max to 10 kHz, which resulted in a T_l of 0.1 ms. In the sequencer, the line density was set to L = 120, the sector size to six (K = 6), and the packet size to eight (N = 8). This resulted in a total of 40 sectors for both steering angles (2 × 120/6), a PRF of 1667 Hz (10,000/6), and a PRP of 0.6 ms. The observation time for each scan line was N × PRP = 4.8 ms. Referring to (6.4), the acquisition time was 40 × 6 × 8 × 0.1 ms = 192 ms, which resulted in a total frame rate of five frames per second (fps).

Figure 6.4: Schematics of the experimental setup used for testing the system using a flow phantom.
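The timing bookkeeping above (sectors, PRF, PRP, observation and acquisition times) can be collected in a small helper. The relations below are inferred from the worked numbers in the text, since Eqs. (6.1)–(6.4) themselves are not reproduced in this excerpt:

```python
def sequence_timing(L, K, N, prf_max, n_angles=2):
    """Timing bookkeeping for the interleaved pulse sequence (relations
    inferred from the worked numbers in the text, not from Eqs. (6.1)-(6.4)).

    L: scan lines per angle, K: sector size (lines per sector),
    N: packet size (observations per line), prf_max: line rate in Hz.
    """
    T_l = 1.0 / prf_max            # time per scan line acquisition, s
    S = n_angles * L // K          # total number of sectors, both angles
    prf = prf_max / K              # effective per-line PRF, Hz
    prp = K * T_l                  # pulse repetition period, s
    return {'S': S, 'prf': prf, 'prp': prp,
            't_obs': N * prp,               # observation time per line, s
            't_acq': S * K * N * T_l}       # total acquisition time, s

# Flow experiment: L = 120, K = 6, N = 8, PRF_max = 10 kHz
t = sequence_timing(120, 6, 8, 10_000)
print(t['S'], round(t['prp'], 4), round(t['t_acq'], 3))   # 40 0.0006 0.192
```

With the same helper, the wave propagation experiment below (L = 60, K = 12, N = 40) gives 10 sectors and a 480 ms acquisition, matching the 2 fps quoted there.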
RF lines were recorded for off-line processing. It should be noted that real-time implementation is also possible; off-line processing was used only to avoid the overhead of the programming effort and is not an inherent limitation of the system. The RF processing for each line was based on an autocorrelation technique, which converts the RF echo signals to complex I/Q data (basebanded in-phase and quadrature components) and processes the complex I/Q data to measure the velocity. This method was first proposed by Kasai et al. [1] and is commonly used on commercial ultrasound machines to estimate the flow. Once the 1D flow measurements were calculated, the 2D flow vectors were calculated using the same processing scheme explained in Section 6.3.3. The results calculated from one packet of data are shown in Fig. 6.5. No temporal filtering was used, and a 2D spatial median filter with a kernel size of 3 × 3 was applied to remove the outliers. Fig. 6.5 shows that the system can reliably reconstruct the motion vectors inside the region of overlapping beams acquired at high frame rates.

6.4.2 2D Wave Propagation Measurements

The experimental setup for this experiment is shown in Fig. 6.6. The experiment was performed on a tissue-mimicking phantom. The phantom was constructed from 100% polyvinyl chloride (PVC) plasticizer (M-F Manufacturing Co., Inc., Fort Worth, TX, USA). Two percent cellulose
(Sigma-Aldrich Inc., St. Louis, MO, USA) was added as scattering particles [32, 34]. The phantom was excited at 100 Hz continuously using an external shaker mounted on top of it. The phantom was imaged from the bottom using the same system mentioned above. Similarly to the previous experiment, the PRF_max was set to 10 kHz. In the sequencer, the line density was set to L = 60, the sector size to twelve (K = 12), and the packet size to forty (N = 40). This resulted in a total of 10 sectors for both steering angles (2 × 60/12), a PRF of 833 Hz (10,000/12), and a PRP of 1.2 ms. The observation time for each scan line was N × PRP = 48 ms. The total acquisition time was 10 × 12 × 40 × 0.1 ms = 480 ms, which resulted in a total frame rate of 2 fps.

Figure 6.5: Spatially aligned 1D motions estimated for different steering angles (a) and the axial and lateral components of the motion reconstructed from the same set of data (b). The estimated motion vectors are also shown (b).

Figure 6.6: Schematics of the experimental setup used for testing the system using a mechanical vibrator.
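The 1D flow estimates in Section 6.4.1 are produced by the Kasai autocorrelator [1]; a minimal sketch, in which the I/Q packet layout and the sign convention are assumptions:

```python
import numpy as np

def kasai_velocity(iq, prf, f0, c=1540.0):
    """Axial velocity of one scan line from a packet of I/Q data via the
    lag-one autocorrelation phase (Kasai et al. [1]).

    iq: complex array of shape (N, n_samples), one row per pulse.
    The sign convention and I/Q layout here are assumptions.
    """
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))   # mean lag-one autocorrelation
    return c * prf * np.angle(r1) / (4.0 * np.pi * f0)

# Synthetic packet: motion adds 1/20 of a cycle of phase per pulse.
prf, f0 = 1667.0, 5e6
v_true = 1540.0 * prf * (2 * np.pi / 20) / (4 * np.pi * f0)
n = np.arange(8).reshape(-1, 1)
scat = np.exp(2j * np.pi * np.random.rand(1, 64))   # random scatterer phases
iq = scat * np.exp(2j * np.pi * n / 20)
print(np.isclose(kasai_velocity(iq, prf, f0), v_true))   # True
```

Because only the phase of the lag-one autocorrelation is needed, the estimator is cheap enough for the real-time use mentioned above.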
The RF lines were recorded for off-line processing. The RF processing for each line was based on normalized cross-correlation, which is commonly used to estimate tissue motion [5, 35]. Sub-sample accuracy was achieved using a cosine fit [36]. For all the scan lines, the motions were estimated with respect to the first observation of the same scan line. Once the 1D motions were calculated, the 2D motion vectors were calculated using the same processing scheme explained in Section 6.3. The propagation of mechanical waves in the scan plane, imaged with the system, is shown in Fig. 6.7. For better visualization, snapshots of the wave images reconstructed at each step are displayed. No temporal filtering was employed, and a 2D spatial mean filter with a kernel size of 3 × 3 was applied to the images to improve the signal to noise ratio. Fig. 6.7 shows that the system reliably corrects for the phase shifts and reconstructs the 2D motion field at high frame rate.

6.5 Discussion

The idea for the data acquisition in this work is taken from [19], and the notation for phasor calculation and phase compensation is taken from [33]. We used ±10 degrees as the two steering angles to reconstruct 2D motion vectors from 1D estimates. However, as suggested by a number of authors, multiple steering angles can be used to improve the performance of 2D tracking using angular compounding techniques [37, 38]. Unfortunately, in practice, there is a limit on the length of the sequence that can be programmed in ultrasound imaging systems. Thus, in addition to longer acquisition and processing times, employing several steering angles would introduce implementation issues, since the total number of scan lines to be acquired is multiplied by the number of steering angles employed (Eq. (6.4)).
As suggested in [33], to resolve this issue, the sequencer can be programmed multiple times to acquire the entire data set. However, this approach increases the acquisition time even further. In this work, we only considered the reconstruction of 2D phasors for a single frequency of interest f_e. If multiple frequencies are of interest, the phasor computation, phase correction, and 2D reconstruction steps should be repeated for each frequency independently on the same set of data. Techniques like multi-line acquisition, mentioned in the introduction, can also be employed in this system to speed up the data acquisition process. However, care needs to be taken during the phase correction step, since the scan line acquisition will no longer be sequential. As shown in this work, different pulsing strategies can be used to adjust the acquisition delays between acquisitions of the same tissue location when multiple steering angles are employed. However, it is important to note that when phase correction techniques are used, the acquisition strategy does not play a role in the final estimation results. This is due to the fact that, following the acquisition delay cancellation, all the data are virtually acquired at the same time. Referring to Fig. 6.1, the maximum achievable PRF in the system is PRF_max and PRF_max/2 for the first and the second sequencing methods, respectively. For example, in the 2D flow measurement experiment, by setting K = 2 (i.e., in each sector, acquire one line from each angle), the PRF can go up to 5 kHz.
In the 2D wave propagation measurement, by setting K = 1 (i.e., in each sector, acquire only one line), the PRF can go up to 10 kHz. However, as mentioned in the introduction, for a fixed number of observations, setting the PRF too high will significantly reduce the observation time. Thus, depending on the application, it is important to find the proper balance between these two parameters. The proposed system can be implemented on conventional ultrasound machines without extra hardware overhead to facilitate the estimation of 2D motion at high frame rates. However, the proposed system has its own limitations as well. First, the tissue and the transducer should be stationary during the data acquisition to avoid motion artifacts. Second, even though there is no need for the excitation to be of a single frequency, the excitation needs to be periodic

Figure 6.7: Snapshots of wave images after 1D motion estimation (a) and phase correction (b) for both steering angles
are shown. The reconstructed images of the axial and lateral components of the wave are also shown (c).

and band-limited: the highest frequency content should be smaller than half of the PRF, and the period of the lowest frequency needs to be short enough that it can be observed during the imaging of each scan line. Third, without synchronization, the system cannot be used to study the propagation of transient waves, since all the scan lines should be acquired at the same time and long observation times are required to study the propagation of the waves. Techniques like ultrafast imaging are generally preferred for these applications. Fourth, beam steering techniques are only able to reconstruct the motion vectors in the overlapping region. Thus, they introduce some limitations when large steering angles are employed or when imaging deep structures is of interest, where it can be difficult to get the necessary acoustic window for beam overlap. Techniques like split aperture [21] can be employed to alleviate this problem; however, the problem still remains.

6.6 Conclusion

A high frame rate ultrasound system is introduced in this article which can readily be implemented on conventional ultrasound systems in real-time without additional hardware overhead. The system uses a data acquisition scheme similar to that of compound Doppler imaging to achieve high frame rates. A previously introduced phase correction scheme is used in the system to compensate for acquisition delays. Finally, reconstruction techniques are employed to estimate the 2D motion vectors from individual 1D measurements estimated at multiple steering angles. As shown in this work, the proposed system has potential applications in medical ultrasound. These applications include vascular imaging, studying the propagation of mechanical waves in tissue, strain and strain rate imaging, and elasticity and viscosity imaging.
References

[1] C. Kasai, K. Namekawa, A. Koyano, and R. Omoto, "Real-time two-dimensional blood flow imaging using an autocorrelation technique," IEEE Transactions on Sonics and Ultrasonics, vol. 32, pp. 458–464, 1985.

[2] T. Loupas, R. Peterson, and R. Gill, "Experimental evaluation of velocity and power estimation for ultrasound blood flow imaging, by means of a two-dimensional autocorrelation approach," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 42, pp. 689–699, Jul 1995.

[3] H. Torp, K. Kristoffersen, and B. Angelsen, "Autocorrelation techniques in color flow imaging: signal model and statistical properties of the autocorrelation estimates," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 41, pp. 604–612, 1994.

[4] A. Heimdal, A. Stoylen, H. Torp, and T. Skjaerpe, "Real-time strain rate imaging of the left ventricle by ultrasound," Journal of the American Society of Echocardiography, vol. 11, pp. 1013–1019, Nov 1998.

[5] J. Ophir, I. Cespedes, H. Ponnekanti, Y. Yazdi, and X. Li, "Elastography: a quantitative method for imaging the elasticity of biological tissues," Ultrasonic Imaging, vol. 13, pp. 111–134, April 1991.

[6] L. Bohs, B. Friemel, and G. Trahey, "Experimental velocity profiles and volumetric flow via two-dimensional speckle tracking," Ultrasound in Medicine and Biology, vol. 21, pp. 885–898, 1995.

[7] M. Lubinski, S. Emelianov, and M.
O'Donnell, "Speckle tracking methods for ultrasonic elasticity imaging using short-time correlation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 46, pp. 82–96, January 1999.

[8] K. Nightingale, M. Palmeri, R. Nightingale, and G. Trahey, "On the feasibility of remote palpation using acoustic radiation force," Journal of the Acoustical Society of America, vol. 110, pp. 625–634, July 2001.

[9] W. Walker, F. Fernandez, and L. Negron, "A method of imaging viscoelastic parameters with acoustic radiation force," Physics in Medicine and Biology, vol. 45, pp. 1437–1447, 2000.

[10] M. Fatemi and J. Greenleaf, "Probing the dynamics of tissue at low frequencies with the radiation force of ultrasound," Physics in Medicine and Biology, vol. 45, pp. 1449–1464, June 2000.

[11] J. Bercoff, M. Tanter, M. Muller, and M. Fink, "Study of viscous and elastic properties of soft tissues using supersonic shear imaging," in Proceedings of the IEEE Ultrasonics Symposium, 2003.

[12] K. Hoyt, K. Parker, and J. Rubens, "Real-time shear velocity imaging using sonoelastographic techniques," Ultrasound in Medicine and Biology, vol. 33, pp. 1086–1097, 2007.

[13] E. Turgay, S. Salcudean, and R. Rohling, "Identifying mechanical properties of tissue by ultrasound," Ultrasound in Medicine and Biology, vol. 32, pp. 221–235, 2006.

[14] K. Kaluzynski, C. Xunchang, S. Emelianov, A. Skovoroda, and M.
O'Donnell, "Strain rate imaging using two-dimensional speckle tracking," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 48, pp. 1111–1123, July 2001.

[15] S. Wang, W. Lee, J. Provost, L. Jianwen, and E. Konofagou, "A composite high-frame-rate system for clinical cardiovascular imaging," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, pp. 2221–2233, Oct 2008.

[16] E. Konofagou, J. D'hooge, and J. Ophir, "Myocardial elastography - a feasibility study in vivo," Ultrasound in Medicine and Biology, vol. 28, pp. 475–482, October 2002.

[17] H. Kanai, Y. Koiwa, and J. Zhang, "Real-time measurements of local myocardium motion and arterial wall thickening," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 46, pp. 1229–1241, 1999.

[18] J. Jensen and I. Lacasa, "Estimation of blood velocity vectors using transverse ultrasound beam focusing and cross-correlation," in Proceedings of the IEEE Ultrasonics Symposium, Oct 1999, pp. 1493–1497.

[19] L. Løvstakken, S. Bjaerum, D. Martens, and H. Torp, "Blood flow imaging - a new real-time, 2-D flow imaging technique," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 53, pp. 289–299, Feb 2006.

[20] L. Capineri, M. Scabia, and L. Masotti, "Vector Doppler: spatial sampling analysis and presentation techniques for real time systems," Journal of Electronic Imaging, vol. 12, pp. 489–498, July 2003.

[21] O. Kripfgans, J. Rubin, A. Hall, and J.
Fowlkes, "Vector Doppler imaging of a spinning disc ultrasound Doppler phantom," Ultrasound in Medicine and Biology, vol. 32, pp. 1037–1046, 2006.

[22] M. Fabian, K. Ballu, J. Hossack, T. Blalock, and W. Walker, "Development of a parallel acquisition system for ultrasound research," in Proc. SPIE, vol. 4325, Feb 2001, pp. 54–62.

[23] J. Dahl, G. Pinton, M. Palmeri, V. Agrawal, K. Nightingale, and G. Trahey, "A parallel tracking method for acoustic radiation force impulse imaging," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 54, pp. 301–312, 2007.

[24] S. Catheline, J.-L. Gennisson, G. Delon, and M. Fink, "Viscoelastic properties of soft solids using transient elastography," in Second International Conference on the Ultrasonic Measurement and Imaging of Tissue Elasticity, Corpus Christi, U.S., October 2003, p. 25.

[25] L. Sandrin, M. Tanter, D. Cassereau, S. Catheline, and M. Fink, "Ultrafast compound imaging for 2D motion vector estimation: application to transient elastography," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 49, pp. 1363–1374, October 2002.

[26] T. Misaridis and J. Jensen, "Use of modulated excitation signals in medical ultrasound. Part I: Basic concepts and expected benefits," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 52, pp. 177–191, Feb 2005.

[27] ——, "Use of modulated excitation signals in medical ultrasound.
Part II: Design and performance for medical imaging applications," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 52, pp. 192–207, 2005.

[28] ——, "Use of modulated excitation signals in medical ultrasound. Part III: High frame rate imaging," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 52, pp. 208–219, 2005.

[29] V. Dutt, R. Kinnick, R. Muthupillai, T. Oliphant, R. Ehman, and J. Greenleaf, "Acoustic shear-wave imaging using echo ultrasound compared to magnetic resonance elastography," Ultrasound in Medicine and Biology, vol. 26, pp. 397–403, 2000.

[30] A. Henni, C. Schmitt, and G. Cloutier, "Three-dimensional transient and harmonic shear-wave scattering by a soft cylinder for dynamic vascular elastography," Journal of the Acoustical Society of America, vol. 124, pp. 2394–2405, Oct 2008.

[31] M. Pernot, K. Fujikura, S. Fung-Kee-Fung, and E. Konofagou, "ECG-gated, mechanical and electromechanical wave imaging of cardiovascular tissues in vivo," Ultrasound in Medicine and Biology, vol. 33, pp. 1075–1085, 2007.

[32] R. Zahiri-Azar and S. Salcudean, "Time-delay estimation in ultrasound echo signals using individual sample tracking," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, pp. 2640–2650, 2008.

[33] A. Baghani, "A Wave Equation Approach to Ultrasound Elastography," Ph.D. dissertation, University of British Columbia, 2009.

[34] S. DiMaio and S. Salcudean, "Needle insertion modelling and simulation," IEEE Transactions on Robotics and Automation: Special Issue on Medical Robotics, vol. 19, pp.
864–875, 2003.

[35] R. Zahiri-Azar and S. Salcudean, "Motion estimation in ultrasound images using time domain cross correlation with prior estimates," IEEE Transactions on Biomedical Engineering, vol. 53, pp. 1990–2000, 2006.

[36] I. Cespedes, Y. Huang, J. Ophir, and S. Spratt, "Methods for the estimation of subsample time-delays of digitized echo signals," Ultrasonic Imaging, vol. 17, pp. 142–171, 1995.

[37] M. Rao and T. Varghese, "Spatial angular compounding for elastography without the incompressibility assumption," Ultrasonic Imaging, vol. 27, pp. 256–270, 2005.

[38] U. Techavipoo, Q. Chen, T. Varghese, and J. Zagzebski, "Estimation of displacement vectors and strain tensors in elastography using angular insonifications," IEEE Transactions on Medical Imaging, vol. 23, pp. 1479–1489, 2004.

Chapter 7

Conclusions and Future Research

In this chapter the results of the collected works are related to one another and the unified goal of the thesis is discussed. The strengths and weaknesses of the research are then presented, along with future directions for research.

7.1 Multi-Dimensional Motion Estimation Techniques

In Chapters 2, 3 and 4, new algorithms are developed for the estimation of motion in sequences of ultrasound echo signals in 1D (axial component only), 2D (both axial and lateral components), and 3D (axial, lateral, and elevational components). The performance of all the presented techniques is studied using both simulation and experimental data in terms of accuracy, precision, sensitivity, and resolution. A comparison is carried out with state-of-the-art techniques. In this way, the first two objectives of the thesis are met.
In Chapter 2, a new class of delay estimators, called sample tracking (ST), based on the tracking of individual echo samples, is presented. The use of the same interpolation approach to improve the performance of the ZCT delay estimator is also presented [1]. Simulation results show that these algorithms outperform conventional window-based time-delay estimators in terms of bias and standard deviation when applied to high-SNR echo signals. Simulation results also show that, when used as strain estimators, ST algorithms have higher resolution and sensitivity than commonly used strain estimation algorithms, including the recently introduced spline-based continuous time-delay estimators [2], as they provide the displacement of individual samples. However, their performance degrades rapidly as the SNR of the echo signals becomes low. Experimental results demonstrating the viability of ST are also presented.

In Chapter 3, we show that the standard approach of applying a separate 1D sub-sample estimation [3, 4] in multi-dimensional motion tracking is not valid when estimating motion in sequences of ultrasound echo signals. Several pattern-matching function interpolation schemes that are suited for 2D motion estimation are presented. The techniques presented in Chapter 3 are extended to 3D in Chapter 4, where 3D sub-sample motion estimation schemes are presented. The performance of the proposed methods has been characterized through both simulations and experiments. The results show that the proposed methods significantly outperform other commonly used independent 1D algorithms in the literature in terms of bias and standard deviation.

Both techniques presented in Chapter 3 and Chapter 4 assume small motions and deformations, which is valid for fast imaging [5], where inter-frame displacements and deformations are small and the echo signals are highly correlated.
This is also the case in acoustic radiation force imaging [5–8], where the induced displacements and deformations are very small. However, the assumption of small motions and deformations does not hold when large deformations exist and the echo signals are decorrelated. This is generally the case in quasi-static elastography [9, 10], where large external compressions are applied to the tissue and cause it to experience large deformations. This is also the case in myocardial elastography [11], where the tissue experiences large internal motions and deformations. The performance of all pattern matching function interpolation techniques, including those presented in this work, is expected to degrade when the echo signals are decorrelated and cannot be matched correctly. In order to adapt all these methods to the estimation of displacements resulting from large deformations, previously introduced compounding methods should be applied to the raw echo signals [12–14] prior to the motion estimation process. Once the effect of signal decorrelation is suppressed, the pattern matching algorithms followed by the proposed interpolation methods can be applied to estimate the motion. Alternatively, techniques such as iterative 1D cross correlation with recorrelation [11, 15], which are more robust in the presence of decorrelation, can be employed to estimate the motion.

In Chapter 3 and Chapter 4 we have only employed standard polynomial fitting to generate the continuous representation of the pattern matching function in multiple dimensions. However, several other polynomials may be used for this purpose, including multi-dimensional spline polynomials as in [3, 16]. The use of higher order splines is expected to improve the accuracy of the sub-sample estimation, but at a higher computational cost.
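The 1D sub-sample estimation that these multi-dimensional interpolation schemes generalize can be illustrated with the cosine fit of Cespedes et al., used for 1D sub-sample accuracy in Chapter 6; a minimal sketch:

```python
import numpy as np

def cosine_fit_delta(y_m1, y_0, y_p1):
    """Sub-sample peak offset from three samples of a pattern matching
    function around its discrete maximum, using the cosine fit of
    Cespedes et al.: model y(k) = A*cos(w*k + phi), whose peak lies
    at delta = -phi/w samples from the center sample."""
    w = np.arccos((y_m1 + y_p1) / (2.0 * y_0))
    phi = np.arctan((y_m1 - y_p1) / (2.0 * y_0 * np.sin(w)))
    return -phi / w

# A cosine-shaped correlation peak offset by 0.3 samples is recovered exactly:
w_true, delta = 1.0, 0.3
y = [np.cos(w_true * (k - delta)) for k in (-1, 0, 1)]
print(round(float(cosine_fit_delta(*y)), 6))   # 0.3
```

The multi-dimensional schemes of Chapters 3 and 4 replace this separable 1D fit with a joint fit of the pattern matching function, which is why they avoid the bias of applying 1D estimators independently along each axis.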
The proposed methods have several potential applications throughout the field of signal processing. Specific applications in medical ultrasound include fine velocity vector imaging, strain tensor estimation, elastography, and acoustic radiation force impulse imaging.

7.2 Real-Time Motion Estimation

The proposed estimators in Chapter 2, Chapter 3, and Chapter 4 are shown to provide a good balance between accuracy, precision, and computational cost. All the proposed techniques add small computational overhead and are suitable for real-time applications. Based on the methods proposed in Chapter 3, real-time 2D motion tracking software has been implemented that estimates the motion between sequences of ultrasound echo signals in several dimensions at the native frame rate of the ultrasound machines currently used (up to 50 Hz). In this manner, the third objective of the thesis is met. The system is currently being used in several clinical applications, namely strain imaging of the prostate [17], monitoring kidney transplants, studying tissue deformation following needle insertion [18], characterization of soft tissue from finite element models [19], 2D strain tensor estimation, tissue elasticity and viscosity imaging [20], and tissue motion vector imaging, at the University of British Columbia (UBC) and in collaboration with Dr. Christopher Nguan at the Vancouver General Hospital (VGH) and Dr. Morris at the British Columbia Cancer Agency (BCCA). The system is also being used for real-time poro-elastography [21], for distinguishing between normal and lymphedematous tissues in vivo, and for shear strain imaging for breast tumor classification [22], in collaboration with Professor Jonathan Ophir and Dr. Arun Thitaikumar at the University of Texas Medical School at Houston and Dr. Brian Garra at Fletcher Allen Health Care at the University of Vermont.
A version of the motion tracking software has also been implemented in strain imaging software and licensed for commercial use on ultrasound machines to Ultrasonix Medical Corporation (Richmond, BC, Canada). So far, the software has been used at different sites on more than 1000 clinical cases, varying from breast tumor imaging [23] to skin abnormalities [24]. Research is currently under way to implement the methods proposed in Chapter 4 and develop a real-time 3D motion tracking system. This would provide the opportunity to extend all the above-mentioned applications to three dimensions.

7.3 High Frame Rate Tracking

In Chapter 5 we show that the performance of our proposed 2D pattern matching techniques is close to that of 2D tracking employing beam steering techniques. However, 2D tracking using beam steering still outperforms 2D pattern matching. This can be explained by the fact that in 2D tracking using beam steering, the motion vectors (both axial and lateral components) are reconstructed from multiple measurements along the directions of beam propagation, each of which has high accuracy and precision, whereas in 2D tracking using pattern matching, the lateral motion is simply estimated by tracking in the direction transverse to the beam, which results in less accurate and less precise estimates. Despite the better performance, the data acquisition time of beam steering techniques is longer and is proportional to the number of steering angles employed. This introduces a limitation when a high acquisition rate is of interest. In order to take advantage of beam steering techniques without sacrificing the frame rate, a high frame rate ultrasound system is introduced in Chapter 6.
The system uses beam interleaving techniques [25] in conjunction with a delay cancellation method, introduced by my colleague Ali Baghani, to compensate for acquisition delays [26]. In this manner, the last objective of the thesis is met. The system was implemented on conventional ultrasound machines without any additional hardware overhead and achieves both high spatial resolution (line density of up to 128) and high temporal resolution (> 500 Hz) at an imaging depth of 5 cm and a 100% field of view. Applications of the system to studying wave propagation in two dimensions and to flow vector imaging are presented with experimental results from phantoms. The proposed system has several other potential applications, such as high frame rate strain and strain rate imaging, and quantitative elasticity and viscosity imaging using dynamic elastography and wave inversion techniques [25].

7.4 Summary of Contributions

The major contribution of this thesis is the development of new algorithms for the estimation of tissue motion in sequences of ultrasound images in multiple dimensions, with high accuracy and precision and a small computational cost suitable for real-time applications. The specific contributions of this thesis can be outlined as follows:

• A new class of motion estimation techniques called sample tracking was introduced. Results from both simulations and experiments showed that the proposed techniques outperform the common and state-of-the-art motion estimators in terms of accuracy, precision, resolution, and sensitivity. The extension of the proposed algorithms to multi-dimensional motion estimation is also presented.

• To maximize the accuracy and precision of motion estimation in multiple dimensions while adding a small computational overhead, sub-sample estimators were studied.
Several sub-sample motion estimators were introduced, each offering a different trade-off between performance and computational cost. Results from both simulations and experiments show that, at a small computational cost, the proposed sub-sample estimation techniques significantly improve the performance of motion estimation in multiple dimensions.

• Based on the proposed sub-sample estimation methods, real-time 2D/3D motion tracking software has been implemented to estimate the motion vectors between sequences of ultrasound echo signals at commonly used ultrasound frame rates (up to 50 Hz). The software was licensed for commercial use and is currently being used in several clinical applications, namely normal/shear strain imaging [22], dynamic elastography [19], vibro-elastography [17], tumor detection and classification [23, 24], and poro-elastography [21].

• The proposed 2D tracking method was compared with 2D tracking employing beam steering techniques. Results from both simulations and experiments showed that 2D tracking using beam steering outperforms 2D motion estimation without compounding in terms of both accuracy and precision, especially in estimating lateral motion.

• A custom pulse sequencer followed by a phase correction scheme was developed to estimate 2D motion vectors at high frame rates (> 500 Hz). Potential applications include studying the propagation of mechanical waves in tissue, tissue viscoelasticity imaging, high frame rate strain/strain rate imaging, and vascular imaging.

7.5 Future Work

Simulation results show that sample tracking (ST) algorithms have higher resolution and sensitivity when used as strain estimators compared to commonly used techniques. However, their performance degrades as the SNR of the echo signals becomes low.
The use of smoothing splines is expected to improve the performance of these techniques in the presence of noise.

In this work, we only formulated and studied the proposed interpolation techniques in Cartesian coordinates to estimate the sub-sample motion. Further investigation is required to study the adaptability of the proposed interpolation methods to the steered and curvilinear coordinates that are typically introduced when data are acquired using steered beams or curved linear transducers. This is expected to improve the performance of sub-sample estimation in these applications.

The requirements for motion tracking algorithms change from one medical application to the next. For example, in myocardial imaging, large motions and deformations generally exist in the imaging plane, so the tracking algorithm needs to be robust in the presence of large decorrelation. In breast imaging, however, the tissue is generally stationary and does not experience large deformations, and the sensitivity of the estimator is more important. Further studies and validations are required to optimize the tracking process for different clinical applications.

The studies and validations in this thesis were generally based on simulated data and on experimental data acquired from tissue mimicking phantoms. The performance of the proposed techniques has also been evaluated in several clinical applications [22–24]. However, further studies are required to quantify the performance of the proposed techniques using ultrasound data acquired from real tissue in vivo and in vitro.

In this work we studied and compared the performance of different techniques in estimating translation and compression. However, due to different boundary conditions, the tissue may also experience slippage, rotation, and shearing. This is generally the case in tracking the kidney and the prostate.
In the future, these behaviors also need to be studied to further optimize the motion tracking algorithms for these applications. Throughout this work, tissue motion was estimated without assuming any underlying model. However, including different models in the estimation process is expected to improve the performance of tissue motion estimation. These will be the topics of our future work.

References

[1] S. Srinivasan and J. Ophir, "A zero-crossing strain estimator in elastography," Ultrasound in Medicine and Biology, vol. 29, pp. 227–238, 2003.

[2] F. Viola and W. Walker, "A spline-based algorithm for continuous time-delay estimation using sampled data," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 52, pp. 80–93, January 2005.

[3] F. Viola, R. Coe, K. Owen, D. Guenther, and W. Walker, "Multi-dimensional spline-based estimator (MUSE) for motion estimation: Algorithm development and initial results," Annals of Biomedical Engineering, vol. 36, pp. 1942–1960, September 2008.

[4] R. Lopata, M. Nillesen, H. Hansen, I. Gerrits, J. Thijssen, and C. de Korte, "Performance evaluation of methods for two-dimensional displacement and strain estimation using ultrasound radio frequency data," Ultrasound in Medicine and Biology, vol. 35, pp. 796–812, 2009.

[5] J. Bercoff, M. Tanter, and M. Fink, "Supersonic shear imaging: a new technique for soft tissue elasticity mapping," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 51, pp. 396–409, April 2004.

[6] K. Nightingale, M. Palmeri, R. Nightingale, and G. Trahey, "On the feasibility of remote palpation using acoustic radiation force," Journal of the Acoustical Society of America, vol. 110, pp. 625–634, July 2001.

[7] W. Walker, F. Fernandez, and L. Negron, "A method of imaging viscoelastic parameters with acoustic radiation force," Physics in Medicine and Biology, vol. 45, pp. 1437–1447, 2000.

[8] M. Fatemi and J. Greenleaf, "Probing the dynamics of tissue at low frequencies with the radiation force of ultrasound," Physics in Medicine and Biology, vol. 45, pp. 1449–1464, June 2000.

[9] T. Varghese, J. Ophir, E. Konofagou, F. Kallel, and R. Righetti, "Tradeoffs in elastographic imaging," Ultrasonic Imaging, vol. 23, pp. 216–248, October 2001.

[10] E. Brusseau, J. Kybic, J. Deprez, and O. Basset, "2-D locally regularized tissue strain estimation from radio-frequency ultrasound images: Theoretical developments and results on experimental data," IEEE Transactions on Medical Imaging, vol. 27, pp. 145–160, February 2008.

[11] W. Lee, C. M. Ingrassia, S. D. Fung-Kee-Fung, K. D. Costa, J. W. Holmes, and E. Konofagou, "Theoretical quality assessment of myocardial elastography with in vivo validation," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 54, pp. 2233–2245, 2007.

[12] S. Alam, J. Ophir, and E. Konofagou, "An adaptive strain estimator for elastography," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 45, pp. 461–472, March 1998.

[13] P. Chaturvedi, M. Insana, and T. Hall, "2D companding for noise reduction in strain imaging," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 45, pp. 179–191, 1998.

[14] T. Varghese and J. Ophir, "Enhancement of echo-signal correlation in elastography using temporal stretching," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 44, pp. 173–180, January 1997.

[15] E. Konofagou and J. Ophir, "A new elastographic method for estimation and imaging of lateral displacements, lateral strains, corrected axial strains and Poisson's ratios in tissues," Ultrasound in Medicine and Biology, vol. 24, pp. 1183–1199, October 1998.

[16] R. Zahiri-Azar and S. Salcudean, "Time-delay estimation in ultrasound echo signals using individual sample tracking," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, pp. 2640–2650, 2008.

[17] S. Salcudean, D. French, S. Bachmann, R. Zahiri-Azar, X. Wen, and J. Morris, "Viscoelasticity modelling of the prostate region using vibro-elastography," in 9th MICCAI Conference, Denmark, October 2006, pp. 389–396.

[18] E. Dehghan, X. Wen, R. Zahiri-Azar, M. Marchal, and S. Salcudean, "Needle-tissue interaction modeling using ultrasound-based motion estimation: Phantom study," Journal of Image Guided Surgery, vol. 13, pp. 265–280, 2008.

[19] H. Eskandari, S. Salcudean, R. Rohling, and J. Ohayon, "Viscoelastic characterization of soft tissue from dynamic finite element models," Physics in Medicine and Biology, vol. 53, pp. 6569–6590, November 2008.

[20] H. Eskandari, S. Salcudean, and R. Rohling, "Viscoelastic parameter estimation based on spectral analysis," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, pp. 1611–1625, July 2008.

[21] R. Righetti, J. Ophir, S. Srinivasan, and T. Krouskop, "The feasibility of using elastography for imaging the Poisson's ratio in porous media," Ultrasound in Medicine and Biology, vol. 30, pp. 215–228, 2004.

[22] A. Thitaikumar, L. Mobbs, C. Kraemer-Chant, B. Garra, and J. Ophir, "Breast tumor classification using axial shear strain elastography: a feasibility study," Physics in Medicine and Biology, vol. 53, pp. 4809–4823, 2008.

[23] E. Fleury, J. Rinaldi, S. Piato, J. Fleury, and D. Roveda Junior, "Appearance of breast masses on sonoelastography with special focus on the diagnosis of fibroadenomas," European Radiology, vol. 19, pp. 1337–1346, January 2009.

[24] R. Gaspari, D. Blehar, M. Mendoza, M. Montoya, C. Moon, and D. Polan, "Use of ultrasound elastography for skin and subcutaneous abscesses," Journal of Ultrasound in Medicine, vol. 28, pp. 855–860, 2009.

[25] L. Løvstakken, S. Bjaerum, D. Martens, and H. Torp, "Blood flow imaging: a new real-time, 2-D flow imaging technique," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 53, pp. 289–299, February 2006.

[26] A. Baghani, "A Wave Equation Approach to Ultrasound Elastography," Ph.D. dissertation, University of British Columbia, 2009.
Appendix A

Local Polynomial Fitting

A fourth-order polynomial $f(t) = at^4 + bt^3 + ct^2 + dt + e$ can be fitted to the reference echo signal around the $i$th sample using the following equation:

$$\begin{bmatrix} a \\ b \\ c \\ d \\ e \end{bmatrix} = \begin{bmatrix} 16 & -8 & 4 & -2 & 1 \\ 1 & -1 & 1 & -1 & 1 \\ 0 & 0 & 0 & 0 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 16 & 8 & 4 & 2 & 1 \end{bmatrix}^{-1} \begin{bmatrix} s_1[i-2] \\ s_1[i-1] \\ s_1[i] \\ s_1[i+1] \\ s_1[i+2] \end{bmatrix}. \quad \text{(A.1)}$$

Polynomials of different orders can also be fitted using the same approach.
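As an illustration, the fit in (A.1) can be sketched in Python with NumPy; the function name and the toy signal are illustrative assumptions, not part of the thesis:

```python
import numpy as np

# Vandermonde matrix for the sample offsets t = -2, -1, 0, 1, 2 in (A.1);
# its inverse is computed once and reused for every sample.
T = np.array([[t**4, t**3, t**2, t, 1] for t in (-2, -1, 0, 1, 2)], dtype=float)
T_INV = np.linalg.inv(T)

def fit_local_quartic(s, i):
    """Coefficients (a, b, c, d, e) of f(t) = a t^4 + b t^3 + c t^2 + d t + e
    fitted to the five samples of s centred on index i, as in (A.1)."""
    return T_INV @ s[i - 2:i + 3]

# Sanity check on a signal that is exactly quadratic, s[n] = n^2: around
# i = 5 the local fit is f(t) = (t + 5)^2, i.e. coefficients (0, 0, 1, 10, 25).
s = np.arange(10, dtype=float) ** 2
print(np.round(fit_local_quartic(s, 5), 6))
```

Precomputing the inverse once mirrors the way such fits can be made cheap enough for real-time use: per sample, the fit reduces to a single 5-by-5 matrix-vector product.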
Appendix B

Strain Signal-to-Noise Ratio

The signal-to-noise ratio in strain estimation characterizes the noise level at which a value of strain is estimated and is defined by

$$SNR_s = \frac{m_s}{\sigma_s}, \quad \text{(B.1)}$$

where $m_s$ denotes the statistical mean of the strain estimate and $\sigma_s$ denotes the standard deviation of the strain noise estimated from the strain image.

Appendix C

2D Normalized Cross Correlation

Given a pair of sampled signals $s_1[i,j]$ and $s_2[i,j]$, where $i$ is the sample index and $j$ is the line index, their normalized correlation is defined by (C.1) and (C.2), where $u \in \{-K_a, \ldots, -1, 0, 1, \ldots, K_a\}$, $v \in \{-K_l, \ldots, -1, 0, 1, \ldots, K_l\}$, $K_a$ and $K_l$ are the search radii, and $W_a$ and $W_l$ represent the window lengths in the axial and the lateral directions.
$$R[u,v] = \frac{A}{B \cdot C}, \quad \text{(C.1)}$$

$$A = \sum_{j=-W_l/2}^{W_l/2} \; \sum_{i=-W_a/2}^{W_a/2} s_1[i,j] \cdot s_2[i+u, j+v], \qquad B = \sqrt{\sum_{j=-W_l/2}^{W_l/2} \; \sum_{i=-W_a/2}^{W_a/2} s_1[i,j]^2}, \qquad C = \sqrt{\sum_{j=-W_l/2}^{W_l/2} \; \sum_{i=-W_a/2}^{W_a/2} s_2[i+u, j+v]^2}. \quad \text{(C.2)}$$

Appendix D

1D Sub-Sample Estimation

Given the discrete 1D pattern matching function $R[u]$, the coarse location
of the best match can be found in the discrete function (i.e. $d = \arg\max_u R[u]$) within the sampling accuracy. Sub-sample accuracy $\delta$ can then be achieved by fitting a local function or polynomial to the discrete coefficient values around the best match. Given the largest sample of the discrete pattern matching function $b = R[d]$ and its two neighbours $a = R[d-1]$ and $c = R[d+1]$, the estimated sub-sample shift $\hat{\delta}$ (correlation peak offset) and its corresponding correlation coefficient at the location $d + \hat{\delta}$ are given by

$$\hat{\delta} = -\beta/\alpha, \quad \text{(D.1)}$$
$$R(d + \hat{\delta}) = b / \cos(\beta), \quad \text{(D.2)}$$

for the 1D cosine interpolation method ($f(x) = A\cos(\alpha x + \beta)$), where

$$\alpha = \arccos\left(\frac{a+c}{2b}\right), \quad \text{(D.3)}$$
$$\beta = \arctan\left(\frac{a-c}{2b\sin(\alpha)}\right), \quad \text{(D.4)}$$

and by

$$\hat{\delta} = \frac{a-c}{2(a - 2b + c)}, \quad \text{(D.5)}$$
$$R(d + \hat{\delta}) = a\,\hat{\delta}(\hat{\delta}-1)/2 \;-\; b(1+\hat{\delta})(\hat{\delta}-1) \;+\; c\,(1+\hat{\delta})\hat{\delta}/2, \quad \text{(D.6)}$$

for the 1D parabolic interpolation method ($f(x) = ax^2 + bx + c$).

Appendix E

2D Sub-Sample Estimation

By fitting a local 2D function to the discrete coefficient values around the best match (i.e. $[d_a, d_l] = \arg\max_{u,v} R[u,v]$), sub-sample accuracy in both directions (i.e. $\delta_a$, $\delta_l$) can be achieved by locating the exact value of the best match in the fitted 2D function.

E.0.1 2D Paraboloid Fitting

The following non-separable 2D polynomial $f(x,y) = a + bx + cy + dxy + ex^2 + fy^2$ can be fitted to the discrete pattern matching function at its maximum and the eight points around it (i.e.
(\u00F0\u009D\u0091\u0091\u00F0\u009D\u0091\u008E , \u00F0\u009D\u0091\u0091\u00F0\u009D\u0091\u0099 )) using least squares \u00EF\u00AC\u0081t according to (E.1). Each row in the 9 \u00C3\u0097 6 \u00F0\u009D\u0090\u00B4 matrix in (E.1) comes from setting \u00F0\u009D\u0091\u00A5, \u00F0\u009D\u0091\u00A6 equal to their relative position with respect to the center (i.e. \u00F0\u009D\u0091\u00A5, \u00F0\u009D\u0091\u00A6 \u00E2\u0088\u0088 {\u00E2\u0088\u00921, 0, 1}) and each element in the 9 \u00C3\u0097 1 is the matching coe\u00EF\u00AC\u0083cient corresponding to the same location. The term (\u00F0\u009D\u0090\u00B4\u00F0\u009D\u0091\u0087 \u00F0\u009D\u0090\u00B4)\u00E2\u0088\u00921 \u00F0\u009D\u0090\u00B4\u00F0\u009D\u0091\u0087 in (E.1) can be computed in advance and stored in memory for subsequent use. The location of the maximum of this \u00EF\u00AC\u0081tted 2D paraboloid is found by setting \u00E2\u0088\u0087\u00F0\u009D\u0091\u0093 (\u00F0\u009D\u0091\u00A5, \u00F0\u009D\u0091\u00A6) = 0, leading to: \u00F0\u009D\u009B\u00BF\u00F0\u009D\u0091\u008E = \u00F0\u009D\u009B\u00BF\u00F0\u009D\u0091\u0099 = E.0.2 2\u00F0\u009D\u0091\u008F\u00F0\u009D\u0091\u0093 \u00E2\u0088\u0092 \u00F0\u009D\u0091\u0091\u00F0\u009D\u0091\u0090 , \u00F0\u009D\u0091\u00912 \u00E2\u0088\u0092 4\u00F0\u009D\u0091\u0092\u00F0\u009D\u0091\u0093 2\u00F0\u009D\u0091\u0092\u00F0\u009D\u0091\u0090 \u00E2\u0088\u0092 \u00F0\u009D\u0091\u008F\u00F0\u009D\u0091\u0091 . 
\u00F0\u009D\u0091\u00912 \u00E2\u0088\u0092 4\u00F0\u009D\u0091\u0092\u00F0\u009D\u0091\u0093 (E.3) 2D Polynomial Fitting Similarly to the above method, the following non-separable 2D polynomial \u00F0\u009D\u0091\u0093 (\u00F0\u009D\u0091\u00A5, \u00F0\u009D\u0091\u00A6) = \u00F0\u009D\u0091\u008E + \u00F0\u009D\u0091\u008F\u00F0\u009D\u0091\u00A5 + \u00F0\u009D\u0091\u0090\u00F0\u009D\u0091\u00A6 + \u00F0\u009D\u0091\u0091\u00F0\u009D\u0091\u00A5\u00F0\u009D\u0091\u00A6 + \u00F0\u009D\u0091\u0092\u00F0\u009D\u0091\u00A52 + \u00F0\u009D\u0091\u0093 \u00F0\u009D\u0091\u00A6 2 + \u00F0\u009D\u0091\u0094\u00F0\u009D\u0091\u00A5\u00F0\u009D\u0091\u00A6 2 + \u00E2\u0084\u008E\u00F0\u009D\u0091\u00A52 \u00F0\u009D\u0091\u00A6 + \u00F0\u009D\u0091\u0096\u00F0\u009D\u0091\u00A52 \u00F0\u009D\u0091\u00A6 2 , resulted from multiplying [1, \u00F0\u009D\u0091\u00A5, \u00F0\u009D\u0091\u00A52 ] and [1, \u00F0\u009D\u0091\u00A6, \u00F0\u009D\u0091\u00A6 2 ] terms, can be \u00EF\u00AC\u0081tted to the discrete pattern matching function using the eight points around its maximum (i.e. (\u00F0\u009D\u0091\u0091\u00F0\u009D\u0091\u008E , \u00F0\u009D\u0091\u0091\u00F0\u009D\u0091\u0099 )) according to the (E.2). Each row in the 9 \u00C3\u0097 9 \u00F0\u009D\u0090\u00B4 matrix in (E.2) comes from setting \u00F0\u009D\u0091\u00A5, \u00F0\u009D\u0091\u00A6 equal to their relative position with respect to the center (i.e. \u00F0\u009D\u0091\u00A5, \u00F0\u009D\u0091\u00A6 \u00E2\u0088\u0088 {\u00E2\u0088\u00921, 0, 1}) and each element in the 9 \u00C3\u0097 1 is the matching coe\u00EF\u00AC\u0083cient corresponding to the same location. Similarly to the above method, the inverse of the 9 \u00C3\u0097 9 matrix in (E.2) can be computed once and stored in memory for subsequent use. The location of the maximum of this \u00EF\u00AC\u0081tted 2D polynomial can be found using a variety of iterative techniques. 
We use Newton\u00E2\u0080\u0099s method: [ \u00F0\u009D\u0091\u00A5 \u00F0\u009D\u0091\u00A6 ]\u00F0\u009D\u0091\u0098+1 [ = \u00F0\u009D\u0091\u00A5 \u00F0\u009D\u0091\u00A6 ]\u00F0\u009D\u0091\u0098 [ \u00E2\u0088\u0092 \u00E2\u0088\u00822\u00F0\u009D\u0091\u0093 \u00E2\u0088\u0082\u00F0\u009D\u0091\u00A5\u00E2\u0088\u0082\u00F0\u009D\u0091\u00A5 \u00E2\u0088\u00822\u00F0\u009D\u0091\u0093 \u00E2\u0088\u0082\u00F0\u009D\u0091\u00A6\u00E2\u0088\u0082\u00F0\u009D\u0091\u00A5 \u00E2\u0088\u00822\u00F0\u009D\u0091\u0093 \u00E2\u0088\u0082\u00F0\u009D\u0091\u00A5\u00E2\u0088\u0082\u00F0\u009D\u0091\u00A6 \u00E2\u0088\u00822\u00F0\u009D\u0091\u0093 \u00E2\u0088\u0082\u00F0\u009D\u0091\u00A6\u00E2\u0088\u0082\u00F0\u009D\u0091\u00A6 ]\u00E2\u0088\u00921 [ \u00E2\u0088\u0082\u00F0\u009D\u0091\u0093 \u00E2\u0088\u0082\u00F0\u009D\u0091\u00A5 \u00E2\u0088\u0082\u00F0\u009D\u0091\u0093 \u00E2\u0088\u0082\u00F0\u009D\u0091\u00A6 ]\u0011 \u0011 \u0011 \u0011 \u00F0\u009D\u0091\u0098 \u0011\u00F0\u009D\u0091\u00A5=\u00F0\u009D\u0091\u00A5\u00F0\u009D\u0091\u0098 (E.4) \u00F0\u009D\u0091\u00A6=\u00F0\u009D\u0091\u00A6 where \u00F0\u009D\u0091\u0098 = 0, 1, ..., \u00F0\u009D\u0091\u009B is the index of the iteration, \u00F0\u009D\u0091\u009B is the maximum number of iterations, and \u00F0\u009D\u009B\u00BF\u00F0\u009D\u0091\u008E = \u00F0\u009D\u0091\u00A5\u00F0\u009D\u0091\u009B , \u00F0\u009D\u009B\u00BF\u00F0\u009D\u0091\u0099 = \u00F0\u009D\u0091\u00A6 \u00F0\u009D\u0091\u009B . The stopping criterion for Newton\u00E2\u0080\u0099s method for estimating the maximum of 148 \u000CAppendix E. 
2D Sub-Sample Estimation

The 2D polynomial was set to be

$$f(x, y) = a + bx + cy + dxy + ex^2 + fy^2,$$

and its six coefficients follow from a least-squares fit to the nine samples of the pattern-matching function $R$ around its maximum $[d_a, d_l]$:

$$\begin{bmatrix} a & b & c & d & e & f \end{bmatrix}^T = (A^T A)^{-1} A^T \mathbf{r}, \tag{E.1}$$

where

$$\mathbf{r} = \begin{bmatrix} R[d_a-1, d_l-1] & R[d_a-1, d_l] & R[d_a-1, d_l+1] & R[d_a, d_l-1] & R[d_a, d_l] & R[d_a, d_l+1] & R[d_a+1, d_l-1] & R[d_a+1, d_l] & R[d_a+1, d_l+1] \end{bmatrix}^T$$

and each row of the $9 \times 6$ matrix $A$ is $[1,\; x,\; y,\; xy,\; x^2,\; y^2]$ evaluated at the relative positions $x, y \in \{-1, 0, 1\}$ with respect to the center, in the same order as the samples in $\mathbf{r}$:

$$A = \begin{bmatrix}
1 & -1 & -1 &  1 & 1 & 1 \\
1 & -1 &  0 &  0 & 1 & 0 \\
1 & -1 &  1 & -1 & 1 & 1 \\
1 &  0 & -1 &  0 & 0 & 1 \\
1 &  0 &  0 &  0 & 0 & 0 \\
1 &  0 &  1 &  0 & 0 & 1 \\
1 &  1 & -1 & -1 & 1 & 1 \\
1 &  1 &  0 &  0 & 1 & 0 \\
1 &  1 &  1 &  1 & 1 & 1
\end{bmatrix}.$$

Alternatively, the nine samples can be interpolated exactly by the nine-coefficient polynomial $f_9(x, y) = a + bx + cy + dxy + ex^2 + fy^2 + gx^2y + hxy^2 + ix^2y^2$, in which case the monomial matrix is square and

$$\begin{bmatrix} a & b & c & d & e & f & g & h & i \end{bmatrix}^T = A^{-1} \mathbf{r}. \tag{E.2}$$

The iterative refinement is stopped when

$$\left\| \begin{bmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \end{bmatrix}_{x = x_k,\, y = y_k} \right\| < 10^{-5}, \tag{E.5}$$

where $\| \cdot \|$ is the Euclidean norm. In all the simulations this criterion was met in fewer than five iterations (i.e. $\delta_a = x_5$, $\delta_l = y_5$ in (E.4)).
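The fit in (E.1) and the subsequent peak localization can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis implementation; the function name, array layout, and the use of the fitted stationary point as the sub-sample estimate are assumptions:

```python
import numpy as np

# 3x3 neighbourhood offsets, axial (x) slowest, lateral (y) fastest,
# matching the ordering of the samples in (E.1).
OFFSETS_2D = [(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)]
# Rows of A are [1, x, y, xy, x^2, y^2]; (A^T A)^-1 A^T is precomputed once.
A2 = np.array([[1, x, y, x * y, x * x, y * y] for x, y in OFFSETS_2D], float)
PINV_2D = np.linalg.pinv(A2)

def subsample_peak_2d(R, da, dl):
    """Fit f(x, y) = a + bx + cy + dxy + ex^2 + fy^2 to the nine samples
    of R around its integer maximum [da, dl] and return the refined peak."""
    r = np.array([R[da + x, dl + y] for x, y in OFFSETS_2D], float)
    a, b, c, d, e, f = PINV_2D @ r
    # Stationary point of the fitted surface: grad f = 0 gives a 2x2 system
    #   [2e  d][dx]   [-b]
    #   [ d 2f][dy] = [-c]
    dx, dy = np.linalg.solve([[2 * e, d], [d, 2 * f]], [-b, -c])
    return da + dx, dl + dy
```

Because the basis contains $x^2$ and $y^2$, a purely quadratic matching function is recovered exactly; for real correlation surfaces the result is the least-squares parabolic approximation of the peak.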
Appendix F

3D Normalized Cross Correlation

Given a pair of sampled signals $s_1[i, j, k]$ and $s_2[i, j, k]$, their normalized correlation is defined by (F.1), where $u$, $v$, and $w$ are integers, $u \in \{-K_a, \ldots, K_a\}$, $v \in \{-K_l, \ldots, K_l\}$, $w \in \{-K_e, \ldots, K_e\}$; $K_a$, $K_l$, and $K_e \in \mathbb{N}^+$ are the search radii; and $W_a$, $W_l$, and $W_e$ represent the window lengths in the axial, lateral, and elevational directions:

$$R[u, v, w] = \frac{A}{B \cdot C}, \tag{F.1}$$

where

$$A = \sum_{k=-W_e/2}^{W_e/2} \; \sum_{j=-W_l/2}^{W_l/2} \; \sum_{i=-W_a/2}^{W_a/2} s_1[i, j, k] \cdot s_2[i+u, j+v, k+w],$$

$$B = \sqrt{\sum_{k=-W_e/2}^{W_e/2} \; \sum_{j=-W_l/2}^{W_l/2} \; \sum_{i=-W_a/2}^{W_a/2} s_1[i, j, k]^2}, \qquad C = \sqrt{\sum_{k=-W_e/2}^{W_e/2} \; \sum_{j=-W_l/2}^{W_l/2} \; \sum_{i=-W_a/2}^{W_a/2} s_2[i+u, j+v, k+w]^2}.$$

Appendix G

3D Polynomial Fitting

The following 3D polynomial with 10 coefficients,

$$f_{10}(x, y, z) = a_1 + a_2 x + a_3 y + a_4 z + a_5 xy + a_6 xz + a_7 yz + a_8 x^2 + a_9 y^2 + a_{10} z^2, \tag{G.1}$$

can be fitted to the 27 points of the discrete pattern-matching function around its maximum (i.e. $[d_a, d_l, d_e]$) using a least-squares fit, (G.2) and (G.3). Each row of the $27 \times 10$ matrix $A$ in (G.3) is derived by setting $x$, $y$, and $z$ equal to their relative positions with respect to the center (i.e. $x, y, z \in \{-1, 0, 1\}$) in the monomials of the 3D polynomial (i.e. $1, x, y, \ldots, z^2$), and the entries of the $27 \times 1$ vector in (G.2) are the matching coefficients at the corresponding locations. The term $(A^T A)^{-1} A^T$ in (G.3) can be computed in advance and stored in memory for successive use. Other 3D polynomials can also be fitted to the data using the same approach.

$$\begin{bmatrix} a_1 & a_2 & \cdots & a_{10} \end{bmatrix}^T = (A^T A)^{-1} A^T \mathbf{r}, \tag{G.2}$$

where $\mathbf{r} = \begin{bmatrix} R[d_a-1, d_l-1, d_e-1] & R[d_a-1, d_l, d_e-1] & \cdots & R[d_a+1, d_l+1, d_e+1] \end{bmatrix}^T$ stacks the 27 samples with the elevational offset varying slowest, the axial offset next, and the lateral offset fastest, and the $m$-th row of $A$ is

$$A_m = \begin{bmatrix} 1 & x_m & y_m & z_m & x_m y_m & x_m z_m & y_m z_m & x_m^2 & y_m^2 & z_m^2 \end{bmatrix}, \qquad (x_m, y_m, z_m) \in \{-1, 0, 1\}^3, \tag{G.3}$$

with the rows ordered to match the samples in $\mathbf{r}$.
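The correlation of (F.1) and the fit of (G.1)–(G.3) can be sketched together in NumPy. This is an illustrative sketch under stated assumptions, not the thesis's real-time implementation: the function names and window-extraction convention are invented here, and solving $\nabla f_{10} = 0$ for the sub-sample peak is an assumed use of the fitted polynomial:

```python
import numpy as np

def ncc3d(w1, w2):
    """Normalized correlation of two equal-size 3D windows, as in (F.1):
    w1 is the reference window of s1 and w2 the window of s2 shifted by
    the candidate lag (u, v, w)."""
    return np.sum(w1 * w2) / (np.sqrt(np.sum(w1 ** 2)) * np.sqrt(np.sum(w2 ** 2)))

# 27 offsets around the integer peak, elevational (z) slowest and
# lateral (y) fastest, matching the sample ordering in (G.2).
OFFSETS_3D = [(x, y, z) for z in (-1, 0, 1) for x in (-1, 0, 1) for y in (-1, 0, 1)]
# Rows of A are the monomials of (G.1); (A^T A)^-1 A^T is precomputed.
A3 = np.array([[1, x, y, z, x*y, x*z, y*z, x*x, y*y, z*z]
               for x, y, z in OFFSETS_3D], float)
PINV_3D = np.linalg.pinv(A3)

def subsample_peak_3d(R, da, dl, de):
    """Fit f10 of (G.1) to the 27 samples of R around its integer maximum
    [da, dl, de] and return the refined (sub-sample) peak location."""
    r = np.array([R[da + x, dl + y, de + z] for x, y, z in OFFSETS_3D], float)
    a = PINV_3D @ r          # a[0]..a[9] correspond to a1..a10 in (G.1)
    # Stationary point of f10: grad f10 = 0 gives a 3x3 linear system.
    H = np.array([[2*a[7], a[4],   a[5]],
                  [a[4],   2*a[8], a[6]],
                  [a[5],   a[6],   2*a[9]]])
    dx, dy, dz = np.linalg.solve(H, [-a[1], -a[2], -a[3]])
    return da + dx, dl + dy, de + dz
```

In practice $R$ would first be evaluated with `ncc3d` over the lag range $\{-K_a..K_a\} \times \{-K_l..K_l\} \times \{-K_e..K_e\}$, the integer maximum located, and then refined with `subsample_peak_3d`.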