Speckle Tracking for 3D Freehand Ultrasound Reconstruction

by

Narges Afsham

B.Sc. Electrical Engineering, University of Tehran, 2007
M.Sc. Electrical Engineering-Bio Electric, University of Tehran, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
Doctor of Philosophy
in
THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Electrical and Computer Engineering)

The University of British Columbia (Vancouver)

October 2014

© Narges Afsham, 2014

Abstract

The idea of full six degree-of-freedom tracking of ultrasound images solely based on speckle information has been a long-term research goal. It would eliminate the need for any additional tracking hardware and reduce the cost and complexity of the ultrasound imaging system, while providing the benefits of three-dimensional imaging.

Despite its significant promise, speckle tracking has proven challenging for several reasons, including the dependency on a rare kind of speckle pattern in real tissue, underestimation in the presence of coherency or specular reflection, spatial variations of the ultrasound beam profile, the need for RF (Radio Frequency) data, and artifacts produced by out-of-plane rotation. There is therefore a need to improve the utility of freehand ultrasound in clinics by developing techniques that tackle these challenges and by evaluating the applicability of the proposed methods for clinical use.

We introduce a model-fitting method of speckle tracking based on the Rician Inverse Gaussian (RiIG) distribution. We derive a closed-form solution of the correlation coefficient of such a model, necessary for speckle tracking. In this manner, it is possible to separate the effect of the coherent and the non-coherent part of each patch. We show that this increases the accuracy of the out-of-plane motion estimation. We also propose a regression-based model to compensate for the spatial changes of the beam profile.

Although RiIG model fitting increases the accuracy, it is only applicable to sampled ultrasound RF data and is computationally expensive. We propose a new framework to extract speckle/noise directly from B-mode images and perform speckle tracking on the extracted noise. To this end, we investigate and develop a Non-Local Means (NLM) denoising algorithm based on a prior noise formation model.

Finally, in order to increase the accuracy of the 6-DoF transform estimation, we propose a new iterative NLM denoising filter for the previously introduced RiIG model, based on a new NLM similarity measure definition. The local estimates of the displacements are aggregated using Stein's Unbiased Risk Estimate (SURE) over the entire image. The proposed filter-based speckle tracking algorithm has been evaluated in a set of ex vivo and in vivo experiments.

Preface

This thesis is primarily based on six manuscripts. All publications have been modified to make the thesis more coherent. Ethics approval for conducting this research has been provided by the Clinical Research Ethics Board, certificate numbers H11-01125 and H11-02594. Two initial works resulted from the efforts of the author to understand the field of ultrasound imaging and its clinical applications. The first work equipped the author with the fundamental knowledge of ultrasound spatial calibration, which is necessary for the purpose of the thesis. This publication is addressed in Chapter 1:

• Narges Afsham, Kenny Chan, Leo Pan, Shuo Tang, and Robert N. Rohling. Alignment and calibration of high frequency ultrasound (HFUS) and optical coherence tomography (OCT) 1D transducers using a dual wedge-tri step phantom.
In Proceedings of SPIE Medical Imaging, pages 7964-7981, 2011.

The contribution of the author was developing a new framework for spatial calibration of ultrasound using a wedge-based phantom. The method was the basis for much of the future work on calibration in our group [2, 3, 86, 87]. The author developed, implemented, and evaluated the method. K. Chan helped with acquiring OCT images. L. Pan helped with building the phantom. Prof. Tang provided the opportunity to use the OCT system developed in her lab. She helped with her valuable advice about the clinical applications of OCT. Prof. Rohling helped with his valuable suggestions in improving the methodology. All co-authors contributed to the editing of the manuscript.

By exploring different clinical applications of ultrasound imaging, the author recognized the need for improvement of ultrasound imaging in the field of ocular examinations. As an essential part of ocular 3D reconstruction, a gaze-tracking ultrasound imaging technique was proposed in the following publication. The overview of the proposed system and the clinical applications of ocular ultrasound imaging are explained in Chapter 1, Section 1.3.3, as a potential application of speckle tracking. This work has been published in:

• Narges Afsham, Mohammad Najafi, Purang Abolmaesumi, and Robert N. Rohling. 3D ocular ultrasound using gaze tracking on the contralateral eye: a feasibility study. In Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI), pages 65-72, 2011.

The contribution of the author was in developing a novel patient-specific method of gaze tracking for ultrasound imaging. The author developed, implemented, and evaluated the proposed system. M. Najafi helped with developing the system and performing the experiments. Profs. Rohling and Abolmaesumi helped with their valuable suggestions in improving the methodology. All co-authors contributed to the editing of the manuscript.

The idea of model-fitting speckle tracking in Chapter 2 was initially presented by the author in:

• Narges Afsham, Mohammad Najafi, Purang Abolmaesumi, and Robert N. Rohling. Out-of-plane motion estimation based on a Rician-Inverse Gaussian model of RF ultrasound signals: speckle tracking without fully developed speckle. In Proceedings of SPIE Medical Imaging, pages 832017:1-8, 2011.

The paper won the conference Cum Laude Award. The contribution of the author was in developing, implementing, and evaluating the method. The paper was then improved by introducing a regression-based model to compensate for the spatial changes of the ultrasound beam profile. The strength of the out-of-plane displacement estimation of the proposed model-fitting approach was evaluated in a series of intra- and inter-tissue experiments. The effects of patch size, power transformation, and out-of-plane rotation on the overall estimation accuracy were also examined. A modified version of the following published paper has been provided in Chapter 2:

• Narges Afsham, Mohammad Najafi, Purang Abolmaesumi, and Robert N. Rohling. A generalized correlation-based model for out-of-plane motion estimation in freehand ultrasound. IEEE Transactions on Medical Imaging, 33(1):186-199, 2013.

The contribution of the author was in developing, implementing, and evaluating the method. M. Najafi helped with designing and performing the experiments. Profs. Rohling and Abolmaesumi helped with their valuable suggestions in improving the methodology.
All co-authors contributed to the editing of the manuscript.

A filter-based approach for speckle tracking was introduced for the first time in the following work. The 6-DoF speckle tracking was developed to enhance the measurements of a magnetic tracker, and it showed improved reconstructed volumes of prostate biopsy data. A version of Chapter 3 has been published in:

• Narges Afsham, Siavash Khallaghi, Mohammad Najafi, Lindsay Machan, Silvia D. Chang, Larry Goldenberg, Peter Black, Robert N. Rohling, and Purang Abolmaesumi. Filter-based speckle tracking for freehand prostate biopsy: theory, ex vivo and in vivo results. In Proceedings of Information Processing in Computer-Assisted Interventions (IPCAI), pages 256-265, 2014.

The contribution of the author was in developing, implementing, and evaluating the method. S. Khallaghi helped with performing the clinical experiments. L. Machan, S. Chang, L. Goldenberg, and P. Black helped with the development of the study protocol. Profs. Rohling and Abolmaesumi helped with their valuable suggestions in improving the methodology. All co-authors contributed to the editing of the manuscript.

To benefit from the strength of the RiIG model and the favorable properties of the filter-based approach, an improved version of the Non-Local Means (NLM) filter was developed based on the RiIG model, and the feasibility of in vivo application was shown for spine images. A version of Chapter 4 has been submitted for publication:

• Narges Afsham, Abtin Rasoulian, Mohammad Najafi, Purang Abolmaesumi, and Robert N. Rohling. Non-local means filter-based speckle tracking. Submitted.

The contribution of the author was in developing, implementing, and evaluating the method. A. Rasoulian helped with performing the clinical study. M. Najafi helped with suggestions on implementing the method. Profs. Rohling and Abolmaesumi helped with their valuable suggestions in improving the methodology. All co-authors contributed to the editing of the manuscript.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgments
1 Introduction
  1.1 Motivation
  1.2 Speckle tracking background and challenges
    1.2.1 In-plane displacement estimation
    1.2.2 Out-of-plane displacement estimation
    1.2.3 Ultrasound spatial calibration
  1.3 Clinical applications
    1.3.1 Prostate imaging
    1.3.2 Spine imaging
    1.3.3 Ocular imaging
  1.4 Objective and contributions
  1.5 Thesis outline
2 Model Fitting Speckle Tracking
  2.1 Speckle statistics
  2.2 Speckle tracking background
  2.3 Proposed correlation-based speckle tracking
    2.3.1 Correlation formulation
    2.3.2 Correlation deviation
    2.3.3 RiIG statistical modeling
    2.3.4 Power transformation
    2.3.5 Overview of the proposed method
  2.4 Experiments and results
    2.4.1 RiIG model fitting
    2.4.2 Regression estimation for σy model
    2.4.3 Intra-tissue estimation
    2.4.4 Inter-tissue estimation
    2.4.5 Out-of-plane rotation effects and in vivo experiment
  2.5 Summary
3 Filter-based Speckle Tracking
  3.1 Introduction and clinical background
  3.2 Filter-based speckle tracking
    3.2.1 Beam profile modeling for curvilinear transducer
    3.2.2 Speckle extraction
    3.2.3 Transform estimation
  3.3 Experiments and Results
  3.4 Summary
4 RiIG NLM Speckle Tracking
  4.1 Introduction
  4.2 NLM filters for speckle tracking
    4.2.1 NLM filters: background
    4.2.2 NLM filters: proposed model
    4.2.3 Different noise models
    4.2.4 SURE-based 6-DoF transform estimation
    4.2.5 Improved σy regression model
  4.3 Experiments and results
    4.3.1 Ex vivo experiment
    4.3.2 In vivo experiment
  4.4 Summary
5 Conclusions and Future Work
  5.1 Conclusions
  5.2 Future work
Bibliography

List of Tables

Table 2.1: Estimated σy regression parameters based on Equation (2.29) over tissue types.

Table 3.1: Accumulated drift and RMS error in the out-of-plane parameters compared to the tracker over 100 frames.

Table 3.2: Sigma variations at different depths.

Table 4.1: In vivo 6-DoF transform estimation for five different subjects. The results are compared against the Electromagnetic (EM) tracker measurements as the gold standard. The transformation parameter values are obtained by concatenating the transformations between the neighboring marked frames in the reconstruction sequence of every patient. MSD is the mean squared distance between the first and the last frame of every sequence.
MSDRE is defined by dividing the MSD between the estimated transform and the EM transform for the last frame by the overall MSD. Drift shows the distance between the estimated last-frame mid-point and the corresponding EM measurement.

List of Figures

Figure 1.1: Overview of freehand ultrasound imaging. The physician can freely move the transducer over the anatomy. The images are stored with their corresponding tracking measurements. Optical trackers and magnetic position sensors are among the most commonly used tracker systems in freehand ultrasound imaging. They measure the position of position sensors (attached to the transducer) relative to the tracker reference. A volume is reconstructed by combining all the acquired ultrasound images using the tracking measurements in a reference coordinate system.

Figure 1.2: In-plane displacement estimation. (a) Generally, a few samples around the peak are interpolated with a pre-assumed function. If the function does not match the underlying cross correlation curve, bias occurs. (b) In the block-matching algorithm, the current frame is compared to the reference frame and motion vectors are estimated.

Figure 1.3: Out-of-plane displacement estimation. (a) Due to the finite width of the ultrasound beam profile, there is an overlap between the effective resolution cells of two closely positioned frames. The ultrasound resolution cells are always approximated by ellipsoids. (b) As the out-of-plane displacement increases between two resolution cells, the correlation coefficient decreases. This curve shows the mean and the standard deviation of the correlation coefficient as a function of elevation displacement all over a phantom sample image. The curve is learned for a phantom in a calibration process. The out-of-plane displacement is estimated for the measured correlation coefficient from the obtained curve.

Figure 1.4: Calibration phantoms. (a) N-wire phantom. The locations of the end-points are defined by use of a stylus. (b) 3D model of the dual wedge-tri step phantom. This phantom is used in the calibration and alignment of high frequency ultrasound (HFUS) and optical coherence tomography (OCT) 1D transducers. 2D images are constructed by means of translation of the transducers using a linear motor stage. The imaging sample is positioned in the dedicated place in front of the phantom. (c) The new multi-wedge phantom needs at least five different planes in order to have a unique closed-form solution.

Figure 1.5: (a) Prostate biopsy image acquisition schematic. The blue volume represents the prostate gland and the red object is the biopsy needle. (b) Slice view of the prostate. The needle is highlighted with a dashed red line in the ultrasound image.

Figure 1.6: (a) Anatomical structure of the lumbar spine in the paramedian plane, which is used for regional anesthesia needle insertion. (b) Paramedian reslice of an ultrasound spine volume created from transverse images based on the speckle tracking algorithm proposed in Chapter 4. Laminae in the paramedian view are visualized as a wave-like structure.

Figure 1.7: System overview. (a) Each US image should be transformed into the coordinate system of Eye2. The pose of Eye1 is tracked in the coordinate system of the camera using glint tracking from four LEDs. The US to camera transform is fixed during the experiment. The transform from Eye1 to Eye2 is determined by a subject-specific model. (b) A subject fixates at several target points to create a wide range of eye movements during an examination.

Figure 2.1: The effect of different resolution cell widths on the out-of-plane motion estimation. (a) Two overlapping resolution cells in the presence of out-of-plane motion. (b) The overlapping volume (V2) of a resolution cell with a rotated one is equivalent to its overlapping volume (V1) with a smaller resolution cell. (c) The blue curve shows the correct autocorrelation function with a 30% difference between the resolution cell widths of two overlapping cells (2.9), and the red curve shows the autocorrelation function considering both of the resolution widths as the smaller one. Clearly, using the red curve instead of the blue one causes ~20% underestimation of the out-of-plane motion. (d) Overlapping volume percentage versus elevation distance for rotated and unequal resolution cells, excluding offset.

Figure 2.2: The effect of the Gaussian approximation. The solid blue line is the theoretical sinc function for an ideal FDS (2.12) and the dashed line shows its Gaussian approximation. The red solid line shows the correlation curve in the presence of coherency (2.27) and the red dashed line is its Gaussian approximation. The Gaussian approximation fails in modeling the correlation function if coherency is present.

Figure 2.3: The effect of power transformation on the RF amplitude distribution. The green line shows the PDF of the RF amplitude of a 2×2 mm2 patch of a bovine muscle ex vivo sample. The blue curve is the simulated PDF based on the estimated statistical properties of the patch and the red curve is the estimated PDF of the coherent part (Z). (a) No power transformation of the data. (b), (c), and (d) are transformed to the power of 0.5, 0.25, and 0.1, respectively. The power transform of 0.25 is chosen in this work. This power transformation enhances the separation of the components as well as the performance of the RiIG model fitting.

Figure 2.4: Overview of the proposed framework.

Figure 2.5: (a) Experimental setup. (b) Top view of the phantom. (c) An ultrasound image of chicken breast tissue. The highly echogenic border at the lower depth of the images is due to the reflection from two separate layers of tissue on top of each other.

Figure 2.6: Ratio of acceptable patches versus elevation distance for (a) beef, (b) chicken, (c) turkey, and (d) the FDS phantom, averaged over all the subsets for each tissue type.

Figure 2.7: (a) Q-Q plot of a sample data set against the generated data samples from the RiIG estimated parameters. (b) Comparison of the results of out-of-plane estimation for two different patch sizes of 2×2 mm2 (735 samples) and 3×6 mm2 (3297 samples) on the beef samples.

Figure 2.8: Intra-tissue out-of-plane motion estimation. For each tissue type, consisting of five different subsets, the σy model was estimated in a cross-validation leave-one-subset-out experiment. The method has been tested on (a) beef, (b) chicken, (c) turkey, and (d) the FDS phantom.

Figure 2.9: Inter-tissue out-of-plane motion estimation. (a) The solid lines show the correlation curve of the first subset of the beef, chicken, and turkey sequences, averaged for a given elevation distance over all of the accepted patches. The dashed line shows the calculated correlation based on the proposed method with the σy model obtained from all of the twenty subsets and the RiIG and position parameters for the center patch. The σy model was estimated excluding one subset of each tissue type and the method was evaluated on the excluded subset. The figures show the result for one subset of beef (blue), chicken (green), turkey (red), and the FDS phantom (cyan).

Figure 2.10: The ratio of the measured displacement error to the true elevation separation for (a) 1° yaw rotation on the beef tissue, (b) 1° tilt rotation on the beef tissue, and (c) freehand in vivo images of a human forearm.

Figure 2.11: The ratio of the measured displacement error to the true elevational separation for the patches close to the focus.

Figure 2.12: Images of the tissue samples: (a) beef, (b) chicken, (c) turkey, (d) phantom, (e) forearm.

Figure 3.1: Image acquisition and speckle tracking framework. (a) Speckle extraction from the US image. (b) Elevation displacement estimation. The correlation curve is shown by which the elevation distance is estimated given a measured noise. The different fitted ρ models on the measured correlation coefficients of US noise for the prostate data are shown. (c) Transform estimation. The scattered points show the 3D position of patch centers in the reference frame (red) and the adjacent frame.

Figure 3.2: (a) CV plot of the sigma parameter. Inter-tissue measurements for (b) the elevation distance estimation and (c) the ratio of displacement error.

Figure 3.3: The ratio of the measured displacement error to the true elevation separation for (a) 1° yaw and (b) 1° tilt rotation on beef tissue, and (c) freehand in vivo human gastrocnemius muscle.

Figure 3.4: Black triangles show the center point of the image in the reference coordinates for every other frame for a total of 100 frames. The magenta dots are the center points estimated by the proposed method. Orientations of the image planes are shown by lateral axis: red, axial axis: green, and elevation axis: blue.

Figure 3.5: Reconstructed volumes over 100 frames from smoothed magnetic tracker data (top row) and the proposed approach (bottom row) for three patients.

Figure 3.6: Coronal reslice of the reconstructed volumes from smoothed magnetic tracker data (top row) and the proposed approach (bottom row).

Figure 4.1: Overview of the proposed framework of out-of-plane motion estimation. First, a prior model of noise is considered. Then, a similarity measure is defined based on the noise model. After denoising based on the developed NLM model, the measured correlation is compared to the theoretical parametric correlation coefficient. In a calibration procedure, in which the relative displacements of adjacent frames are known, a model of the beam profile (sigma map) is derived. Finally, the out-of-plane motion is estimated at the center points of the grid patches all over the image.

Figure 4.2: SNR evolution for a sample ultrasound spine image.

Figure 4.3: An example of (a) the transverse ultrasound image of the spine, (b) the denoised image based on the proposed iterative RiIG NLM filter, (c) the extracted noisy part, and (d) the SURE-based risk map (the discontinuity comes from using the coarse grid for SURE calculation). The tiled appearance of the SURE map is because the δ parameter is estimated at the center of grid windows and the same value is used for all of the other pixels. It can be observed that in the areas with high specular reflection, the noise estimation is riskier.

Figure 4.4: Ex vivo out-of-plane motion estimation. For each tissue type, consisting of five different subsets, the σy model was estimated in a cross-validation leave-one-subset-out experiment. The method has been tested on (a) beef, (b) chicken, (c) turkey.

Figure 4.5: CV plots for speckle tracking algorithms developed for different denoising methods for the ex vivo experiment.

Figure 4.6: CV plots for the in vivo experiment at different axial depths and elevation distances for five different subjects.

Figure 4.7: 6-DoF transform estimation for a subset of the in vivo experiment. (a) The accumulated mid-point error, (b) the estimated translations (dotted lines) in comparison with the readings from the EM tracker (solid lines), (c) the errors in the translation estimations in comparison with the EM tracker, (d) the estimated rotations, and (e) the errors in the rotation estimations in comparison with the EM tracker.

Figure 4.8: The effect of (a) lateral motion and (b) tilt rotation on the overall tracking results of the in vivo experiment.

List of Abbreviations

RF      Radio Frequency
FDS     Fully Developed Speckle
RiIG    Rician Inverse Gaussian
NLM     Non-Local Means
SURE    Stein's Unbiased Risk Estimate
HFUS    High Frequency Ultrasound
OCT     Optical Coherence Tomography
6-DoF   Six Degrees-of-Freedom
MLE     Maximum Likelihood Estimation
WMLE    Weighted Maximum Likelihood Estimation
KLD     Kullback-Leibler Distance
EM      Expectation Maximization
TRUS    Trans-Rectal Ultrasound
PSF     Point Spread Function
STD     Standard Deviation
PPBN    Probability Patch-Based NLM

Acknowledgments

I would like to express my sincere gratitude to my advisor, Prof. Robert Rohling, for providing me with an excellent atmosphere for doing research, and for his expertise, critical thinking, enthusiasm, and immense knowledge. His guidance helped me throughout the research and the writing of this thesis. My sincere thanks also go to Prof. Purang Abolmaesumi, who truly made a difference in the course of my research. His dedication and thoroughness have been inspiring and I have truly benefited from his advice and expertise. Without Rob's and Purang's motivation and encouragement, I would not have survived some of the hardships I experienced during my PhD journey.

I would like to thank Prof. Shuo Tang, who provided the opportunity to work with the OCT system developed in her lab, and for her constructive suggestions.

I also would like to thank my fellows in the Robotics and Control Lab for their constructive suggestions and support during the past years, especially Mohammad Najafi, Dr. Abtin Rasoulian, Siavash Khallaghi, Jeff Abeysekera, Dr. Ali Baghani, and Dr. Reza Zahiri-Azar.
I thankfully acknowledge the fellowships and grants that supported my research during the course of my studies: the University of British Columbia, the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Canadian Institutes of Health Research (CIHR).

Last but not least, I would like to thank my parents, Sakineh Yarigarravesh and Morad Afsham, and my sisters, Dr. Neda and Dr. Nasi, for their unconditional love and support throughout my life.

Chapter 1
Introduction

1.1 Motivation

Ultrasound imaging is considered one of the most popular imaging modalities [43]. Ultrasound's favorable features, such as its non-invasive nature, the possibility of real-time visualization, and its low cost, have contributed to a rising number of diagnostic, therapeutic, and image-guided interventional applications [76, 78, 91, 112, 124].

Three-dimensional (3D) ultrasound provides clinicians with 3D visualization of complex structures of anatomy, and it is often used to improve diagnosis and guidance of interventional and surgical procedures [24, 42, 43, 58, 61]. There are two possible ways of acquiring 3D ultrasound images. The first is by means of holding a 3D ultrasound transducer over the area of interest. The alternative is to move a 2D ultrasound transducer freely over the anatomy. This approach is known as "freehand" ultrasound imaging. It provides a wider range of imaging volume, specifically when it is not possible to capture the structure of the anatomy in a single sweep of a 3D transducer [61].

In freehand ultrasound imaging, 2D visualization is extended to 3D by stacking multiple 2D images based on their relative positions and orientations, as shown in Fig. 1.1. To this end, a tracking device is attached to the transducer and measures the six Degrees-of-Freedom (6-DoF) transform with respect to a reference coordinate system. However, the attachment of such a tracker imposes additional cost and challenges, and it can confine the range of movement in freehand imaging [61].

Figure 1.1: Overview of freehand ultrasound imaging. The physician can freely move the transducer over the anatomy. The images are stored with their corresponding tracking measurements. Optical trackers and magnetic position sensors are among the most commonly used tracker systems in freehand ultrasound imaging. They measure the position of position sensors (attached to the transducer) relative to the tracker reference. A volume is reconstructed by combining all the acquired ultrasound images using the tracking measurements in a reference coordinate system.

Mechanical arms, optical trackers, and magnetic trackers have been previously used as external tracking devices in freehand ultrasound imaging [42]. Compared to the other types of tracking systems, magnetic trackers are relatively cheaper and smaller and do not require a line of sight as optical trackers do. As a result, many ultrasound manufacturers provide embedded magnetic sensors in their machines [43]. However, magnetic trackers are notorious for their small-scale jittery noise. Their tracking accuracy is also susceptible to electromagnetic interference from metallic objects in the proximity of the operation site, such as the patient bed, guidance tools, or other imaging devices. State-of-the-art magnetic trackers are reported to give an accuracy of approximately 1.5 mm and 0.5°; however, the positioning of the electromagnetic sensors in the magnetic field and the interference of the clinical environment can substantially degrade the tracking accuracy, with errors of up to 14 mm and 2° [125]. Any errors in the relative transform of the 2D images will cause geometric distortion in the reconstructed 3D image, and geometric errors may occur in the measured lengths, areas, and volumes of anatomical features.
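For concreteness, the following minimal MATLAB sketch shows how one pixel of a 2D frame is placed into the reference coordinate system by chaining the tracker reading with the fixed image-to-sensor calibration transform (Section 1.2.3). The variable names, example values, and 4x4 homogeneous convention are illustrative, not the thesis notation.

```matlab
% Minimal sketch (illustrative names/values): map pixel (u, v) of one 2D frame
% into the reference coordinate system used for volume reconstruction.
T_ref_sensor   = eye(4);            % 4x4 tracker measurement for this frame
T_sensor_image = eye(4);            % 4x4 fixed calibration transform (Section 1.2.3)
su = 0.2; sv = 0.2;                 % pixel size in mm (lateral, axial)
u = 120; v = 340;                   % pixel indices in the image
p_image = [u * su; v * sv; 0; 1];                   % point in image coordinates (mm)
p_ref   = T_ref_sensor * T_sensor_image * p_image;  % same point in the reference frame
```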
Alternatively, the image data itself can be used for estimating some or all of the 6-DoF. This kind of ultrasound image-based tracking is generally based on the ultrasound speckle patterns. The term speckle tracking first referred to the in-plane tracking of speckle patterns in ultrasound images [85, 90]. However, it became a general term for 6-DoF image-based ultrasound tracking. The main goal of 6-DoF ultrasound tracking is to determine displacement by means of tracking the speckle patterns in ultrasound images. Primarily, one-dimensional speckle tracking was used in time-domain Doppler flow measurement to determine the velocity and the direction of motion [56]. It was then extended to simultaneous 2D displacement vector estimates, which improved the tracking results [130].

1.2 Speckle tracking background and challenges

The deviation of a coherent field phase front from its original form, after confronting a random medium, results in a granular noise-like pattern in ultrasound images referred to as speckle [1]. Speckle is an informative signal rather than random noise, and it can be used to reveal information about the imaging system or the random medium [122]. On the other hand, the speckle formation process can be considered as the summation of small phase changes of the incident coherent signal produced by randomly distributed scatterers [84]. Such a process intrinsically suggests that it can be modeled as a stochastic process. By modeling the speckle formation process, it is possible to obtain information about the distribution of the ultrasound echo signal. Speckle tracking deals with changes in the amplitude and statistical properties of the received ultrasound signal to infer the underlying tissue motion.

Speckle tracking can be considered as two essentially different displacement estimations: in-plane and out-of-plane. The in-plane displacement in the ultrasound image can be recovered more accurately and more reliably than the out-of-plane displacement [48]. The bigger challenge of speckle tracking is in the out-of-plane displacement estimation [75]. In the following sections, we review the fundamentals of speckle tracking for both in-plane and out-of-plane displacement estimation.

1.2.1 In-plane displacement estimation

Primarily, in-plane one-dimensional ultrasound displacement estimation was used in Doppler flow estimation techniques. It then became an essential part of strain imaging. In strain imaging, an accurate displacement estimation is necessary to estimate the strain of a deformed tissue. There are many well-established methods of in-plane displacement estimation in the literature of ultrasound strain imaging and elastography [91, 129].
Here, we cite only a few examples to illustrate the breadth of research on this topic compared to out-of-plane displacement estimation.

A straightforward approach to estimate the in-plane displacement is to upsample the received ultrasound echo signal or image through interpolation and then perform a cross correlation with an identical acquisition of ultrasound echoes, but with the tissue in a slightly displaced position. The location of the peak in the correlation estimate indicates the amount of displacement [70, 120]. Numerous interpolation techniques, such as parabolic fit [62], cosine fit [19], cubic spline [120], and ellipsoid fit in 2D [50], have been developed to decrease the computational cost of cross correlation peak estimation.

Theoretically, in a range close to the displacement of the sampled ultrasound RF data, the phase of the cross correlation has a slope equivalent to the centroid frequency and a zero crossing at the shift [19]. Many phase-based displacement estimation algorithms are built upon this theoretical assumption [82]. The main advantage of the phase-based displacement estimation methods is their computational efficiency. However, the main issue with interpolation and phase-based methods is their biased estimations. Some of the subsample interpolation estimation techniques that look for the maximum of the correlation curve introduce significant bias error. The phase-based methods' results are also biased due to aliasing [93]. The bias issue in the interpolation methods is illustrated in Fig. 1.2a.

Figure 1.2: In-plane displacement estimation. (a) Generally, a few samples around the peak are interpolated with a pre-assumed function. If the function does not match the underlying cross correlation curve, bias occurs. (b) In the block-matching algorithm, the current frame is compared to the reference frame and motion vectors are estimated.

2D displacement vector estimation has also been proposed to increase the accuracy of the estimate [49, 70, 129]. Iterative 2D phase-tracking techniques are used to increase the accuracy of the estimation [37, 114]. Recently, a new unbiased 2D iterative sinc interpolation method was proposed to find the correlation peak [82].

Multiple sources of noise affect the accuracy of in-plane estimation, including the random phase associated with the moving speckle, quantization noise in the receiver, and specular reflections from highly acoustic targets such as bones around the desired tissue [82]. Another challenge in the in-plane displacement estimation is the unbalanced estimation between the lateral and axial directions. The lateral sampling rate depends on the spacing between adjacent beam lines and is generally an order of magnitude lower than the axial sampling rate [49]. Subsample interpolation in the lateral direction has been proposed to improve the overall estimation [49, 57]. The Fourier transform has been shown to work well for in-plane displacement estimation [19, 90]. Registration-based techniques such as block matching have also been used to compensate for the less accurate lateral displacement estimation [58]. Small blocks of two adjacent frames are registered together, as shown in Fig. 1.2b.

As a requirement of speckle tracking for clinical applications, the in-plane displacement estimation should be fast and accurate. Hence, in this thesis, we have combined the block matching algorithm with a Fourier-based registration technique to determine the in-plane displacement.
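As a concrete illustration of block-wise in-plane estimation (not the thesis implementation, which uses the Fourier-based subpixel method of [54] described next), the following MATLAB sketch locates the normalized cross-correlation peak for one block and refines it with a parabolic fit; normxcorr2 is the Image Processing Toolbox routine, and the function name is illustrative.

```matlab
% Minimal sketch: integer NCC peak plus parabolic subsample refinement for one
% block. For illustration only; degenerate (flat) peaks are not handled.
function [dy, dx] = block_shift(refBlock, curBlock)
    c = normxcorr2(refBlock, curBlock);     % normalized cross-correlation map
    [~, idx] = max(c(:));
    [iy, ix] = ind2sub(size(c), idx);       % integer peak location
    dy = iy; dx = ix;
    if iy > 1 && iy < size(c, 1)            % parabolic fit along rows
        dy = iy + 0.5 * (c(iy-1, ix) - c(iy+1, ix)) / ...
             (c(iy-1, ix) - 2 * c(iy, ix) + c(iy+1, ix));
    end
    if ix > 1 && ix < size(c, 2)            % parabolic fit along columns
        dx = ix + 0.5 * (c(iy, ix-1) - c(iy, ix+1)) / ...
             (c(iy, ix-1) - 2 * c(iy, ix) + c(iy, ix+1));
    end
    dy = dy - size(refBlock, 1);            % convert peak location to block shift
    dx = dx - size(refBlock, 2);
end
```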
The in-plane translation of each block is determined with an efficient subpixel registration algorithm proposed in [54]. The accuracy of the method is equivalent to that of the conventional fast Fourier transform upsampling approach, at much lower computational cost and memory requirement. More details on this subject are provided in Chapter 3.

To complete the discussion, it is worth mentioning that there is another source of error in the in-plane displacement estimation, known as the speckle motion artifact [65]. When motion occurs, the speckle motion is no longer the same as the tissue motion. This phenomenon happens due to the spatial variations of the transducer point spread function out of the focal zone. Therefore, a lateral tissue translation is seen as a tissue rotation, i.e., the beam angulation changes with lateral translation in the image coordinate system. A detailed analytical explanation of this effect is provided in [65]. However, for small displacements, this artifact is negligible.

1.2.2 Out-of-plane displacement estimation

It has been more than two decades since speckle correlation-based methods were proposed to measure the out-of-plane displacement. These methods, also known as decorrelation methods, are based on the statistical analysis of the received ultrasound echo signal in adjacent ultrasound frames [21, 34, 118, 123]. Due to numerous sources of error and uncertainties, highly accurate out-of-plane displacement estimation has remained a challenging and ongoing research problem [26, 48, 73–75].

The ultrasound beam profile is not ideal even at the focal zone, and it has a spread along different directions. The received ultrasound echo signal is a summation of all scattering signals within an effective cell, referred to as the resolution cell. Due to the specific shape of the beam profile, resolution cells are approximated by ellipsoids (Fig. 1.3a) [48]. The resolution cells are considerably spread along the elevation direction as a result of the poor focus in this direction. This causes a measurable correlation between two closely positioned ultrasound backscattered signals or B-mode patches. This correlation is affected by two major sources: the medium and the ultrasound beam position. If the effect of the ultrasound beam can be separated from the effect of the medium, by knowing the characteristics of the ultrasound beam, it is possible to relate the correlation changes to the out-of-plane displacement. In other words, the correlation is dependent on the amount of overlap between two sets of backscattered signals. The correlation coefficient curve is usually obtained in a calibration process by imaging a tissue phantom [48]. By measuring the correlation coefficient of corresponding patches in two closely positioned frames, we can look up the displacement from the obtained correlation curve.

Figure 1.3: Out-of-plane displacement estimation. (a) Due to the finite width of the ultrasound beam profile, there is an overlap between the effective resolution cells of two closely positioned frames. The ultrasound resolution cells are always approximated by ellipsoids. (b) As the out-of-plane displacement increases between two resolution cells, the correlation coefficient decreases. This curve shows the mean and the standard deviation of the correlation coefficient as a function of elevation displacement all over a phantom sample image. The curve is learned for a phantom in a calibration process. The out-of-plane displacement is estimated for the measured correlation coefficient from the obtained curve.
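As a simple numerical illustration of this lookup (the thesis develops more accurate correlation models in Chapter 2), suppose the calibrated curve of Fig. 1.3b is approximated by a Gaussian, ρ(d) = exp(-d²/(2σe²)), with elevational beam width σe learned from the phantom. The measured coefficient can then be inverted in closed form; the Gaussian form and the function name are illustrative assumptions, not the thesis model.

```matlab
% Minimal sketch: invert an assumed Gaussian decorrelation curve
% rho(d) = exp(-d^2 / (2*sigma_e^2)) to estimate the elevational distance d
% of a patch from its measured correlation coefficient rho.
function d = elevation_from_rho(rho, sigma_e)
    rho = min(max(rho, 1e-6), 1);         % keep the measurement in a valid range
    d   = sigma_e * sqrt(-2 * log(rho));  % closed-form inverse of the Gaussian model
end
```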
By having the out-of-plane displacement of three non-collinear pairs of patches, it is possible to estimate the out-of-plane parameters of the relative transform between the two images. Another approach has also been proposed, based on the regression of the echo intensity, to estimate the out-of-plane displacement [99]. We will show (Section 2.3.3) that this method is closely related to the correlation of speckle patches and can be considered a correlation-based method.

There are several sources of error which diminish the accuracy of the out-of-plane estimation. One of the main challenges is coherent scattering. Coherent scattering happens when medium structure, such as repetitive scatterers, prevents the phase of the pressure field from becoming random, and it causes underestimation in the elevation displacement measurement in the conventional correlation algorithms [48, 55, 75, 103]. Thus, displacement estimation is limited to a specific pattern of speckle known as Fully Developed Speckle (FDS) [123]. The statistical theory behind this limitation is discussed in the following chapters. One solution is to find the FDS patterns over an entire ultrasound image. The second-order statistics and low-order moments of ultrasound echo signals have been used to classify speckle patterns and detect this type of speckle [55, 81, 102]. Many of the proposed correlation-based methods focus on the improvement of FDS detection by introducing different FDS detection methods, such as novel meshing [103] and the use of the Kolmogorov-Smirnov (K-S) test as a non-parametric goodness of fit [55].

Other efforts aim to increase the displacement estimation accuracy by adapting the correlation curve and compensating for the loss of coherency by using additional information such as the correlation in the axial and lateral directions [55], beam steering [104], developing a heuristic method to compensate for the coherent part based on in-plane correlation [48], using Maximum Likelihood Estimators (MLE) for displacement estimation [105], or defining measurement uncertainty and incorporating the information of several noisy measurements in a probabilistic framework [74]. A novel learning-based method of out-of-plane displacement estimation on imagery of real tissue has been introduced in [75]. The authors adapt the scale factor of the nominal correlation curve for different tissue types based on training data. Recently, this work has been improved by a learning-based error correction algorithm [26].

Spatial variation of the beam profile is another cause of inaccuracy in out-of-plane displacement estimation. A model has been previously proposed for the local shape of the resolution cell based on the decorrelation curves in the axial direction [55]. Deviation of the beam profile from its theoretical model also contributes to the error. We have proposed a generalized regression model to capture the variations of the beam profile as well as the deviation from its theoretical model in Section 2.3.2.
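As noted at the start of this subsection, once elevational distances are available at three or more non-collinear patch locations, the out-of-plane part of the inter-frame transform follows from a least-squares plane fit. The sketch below assumes small angles and monotonic motion (no direction ambiguity); the variable names and the tilt/yaw labeling are illustrative, not the thesis formulation.

```matlab
% Minimal sketch: recover the out-of-plane translation and small tilt/yaw
% angles from elevational distances d estimated at non-collinear patch
% centers (x lateral, y axial, in mm). Fits d = a*x + b*y + c in a
% least-squares sense; the example values are synthetic.
x = [-15; 0; 15; -15; 0; 15];            % lateral patch positions (mm)
y = [ 20; 20; 20; 40; 40; 40];           % axial patch positions (mm)
d = [0.9; 1.0; 1.1; 0.8; 1.0; 1.2];      % estimated elevational distances (mm)
coef = [x, y, ones(size(x))] \ d;        % least-squares plane fit
tz   = coef(3);                          % elevational translation at the origin (mm)
yaw  = atand(coef(1));                   % small rotation about the axial axis (deg)
tilt = atand(coef(2));                   % small rotation about the lateral axis (deg)
```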
Another issue that makes speckle tracking a very challenging subject is the effect of out-of-plane rotation on the correlation values [59]. The reduction in the correlation value and the change in the offset of the correlation curve occur for several reasons, including the change in the amount of overlapping volume of two adjacent resolution cells, which changes the integral boundaries of the convolution; the change of the beam profile due to the spatial variance of the beam; and the change of the location of the scatterers with respect to the beam profile. A few solutions have been proposed to tackle this problem [59]. In Chapter 2, we provide a closed-form solution for the correlation of two resolution cells with different widths. In-plane displacement or out-of-plane rotation may cause a considerable change in the cell widths. By having a closed-form solution for the correlation of two resolution cells with different widths, we show that it is possible to compensate for this effect.

All of these sources of error and inaccuracy cause a bias in out-of-plane displacement estimation. However, they do not affect all patches of an ultrasound image in the same way. If we have a measure to determine the goodness of the out-of-plane displacement estimation, it is possible to aggregate the estimations all over the image based on their validity to reduce the estimation bias. This idea is expanded in Chapter 4 by introducing Stein's Unbiased Risk Estimate (SURE) as a measure of goodness.

One other general challenge in the problem of speckle tracking is resolving the ambiguity of motion direction in the out-of-plane displacement. However, the main focus of the current thesis is to introduce more accurate methods of out-of-plane displacement estimation. Hence, throughout this entire work, we assume that the information of motion direction is available, either from the tracker measurements or by the monotonic design of the experiments, and that there is no direction ambiguity. To solve the direction ambiguity, the approaches proposed in [74] and [58] can be followed.

Figure 1.4: Calibration phantoms. (a) N-wire phantom. The locations of the end-points are defined by use of a stylus. (b) 3D model of the dual wedge-tri step phantom. This phantom is used in the calibration and alignment of high frequency ultrasound (HFUS) and optical coherence tomography (OCT) 1D transducers. 2D images are constructed by means of translation of the transducers using a linear motor stage. The imaging sample is positioned in the dedicated place in front of the phantom. (c) The new multi-wedge phantom needs at least five different planes in order to have a unique closed-form solution.

1.2.3 Ultrasound spatial calibration

All the tracking devices that are attached to the ultrasound transducer in freehand ultrasound imaging measure the position and orientation of the sensor on the transducer. They do not measure the position and orientation of the ultrasound image. Therefore, the transformation between the origin of the sensor attached to the transducer and the image plane is needed to localize features in the tracker coordinate system. The process of finding this transformation is called calibration [83]. Any inaccuracy in the calibration can lead to image distortion, inaccurate navigation in image-guided procedures, or mislocation of treatment in the case of therapy. Speckle tracking can improve the accuracy of tracking even when spatial tracking sensors are used. Moreover, such sensors can also be used for the purpose of comparison, to evaluate the accuracy of the speckle tracking algorithms.
Consequently, it is essential to study the fundamentals of ultrasound spatial calibration procedures.

A variety of approaches for calibration of 3D freehand ultrasound have been investigated, and they have been compared in terms of reconstruction accuracy, reproducibility, and acquisition time [61, 83, 95]. Theoretically, by knowing the position of three non-collinear points residing in the same ultrasound image and their corresponding locations with respect to a position sensor or global frame, the transformation parameters can be determined. However, many calibration methods need several images to converge to the solution [83].

One typical method of calibration is to image an artificial object, known as a phantom. The simplest phantom is a point target. Point-based methods are performed by repetitive scanning of a point target such as a spherical fiducial, the intersection of two wires, or the tip of a stylus [83]. Since aligning the ultrasound beam with a single point is not easy, scanning along strings or wires can be used instead of a single location. The three-wire phantom [61], the Hopkins phantom [121], and Z- or N-shaped phantoms [25] are examples of these methods. The well-known N-wire phantom gives the transformation between the image coordinates and the phantom as a closed-form solution with one image [14, 23, 25]. It is possible to use a 3D spatial localizer and define the location of the end-points, or to track the phantom and probe at the same time and form the transformation from the image frame to the transducer case (Fig. 1.4a). The final accuracy of point-based or wire-based methods is highly dependent on the accuracy of manual or semi-automatic selection of the point or line centroid in ultrasound images [61]. Given the presence of noise and artifacts, it is argued that a line is segmented more easily and more accurately than a point [61]. The single-wall method and its enhanced versions, the Cambridge phantom [97] and the closed-form differential solution to the wall [87], take advantage of this idea and facilitate the automatic segmentation of the line.

Wedge phantoms are also improved versions of wall phantoms that provide visual feedback during the calibration process. The idea of wedge phantoms was first proposed in [47], where a precision-manufactured mechanical instrument facilitates the alignment of the beam with a plane of wires. We have extended this idea to a dual wedge-tri step phantom for both alignment and calibration of High Frequency Ultrasound (HFUS) and Optical Coherence Tomography (OCT) 1D transducers [4]. This phantom includes two symmetrical wedges and three steps that provide the user with visual feedback on how well the scan plane is aligned with the midplane of the phantom (Fig. 1.4b). The phantom image consists of five line segments, each of which corresponds to one of the wedges or steps. The slopes and positions of the lines are extracted from the image and compared with the phantom model. The scan plane parameters are found so that the difference between the model and the extracted features is minimized. The main advantage of this phantom is that only one frame is required to determine the translations, orientations, and skew parameters of the scan plane with respect to the phantom.
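To make the three-point statement above concrete, the following sketch recovers a rigid transform from corresponding point sets by the standard SVD-based least-squares solution. This is a generic illustration (the function name is hypothetical), not the calibration algorithm of [4] or [86].

```matlab
% Minimal sketch: recover the rigid transform mapping three or more
% non-collinear points P (3-by-N, image frame, mm) onto their counterparts
% Q (3-by-N, sensor/global frame) in a least-squares sense via SVD.
% Uses implicit expansion (MATLAB R2016b or later).
function [R, t] = rigid_from_points(P, Q)
    cp = mean(P, 2); cq = mean(Q, 2);          % centroids of both point sets
    H  = (P - cp) * (Q - cq)';                 % cross-covariance matrix
    [U, ~, V] = svd(H);
    D = diag([1, 1, sign(det(V * U'))]);       % guard against a reflection
    R = V * D * U';                            % least-squares rotation
    t = cq - R * cp;                           % translation
end
```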
Instead of defining the calibration parameters as the 6-DoF transform parameters, we define the problem in a new framework. In this framework, the parameters to be solved include P, the initial position of the transducer; U, the trajectory vector along the movement; and V, the beam vector. The method has been enhanced by proposing a closed-form solution and differential measurements for a new multi-wedge phantom. The multi-wedge phantom is simply described as five different planes (Fig. 1.4c). The closed-form solution for the new multi-wedge phantom eliminates the need for an iterative optimization method and provides substantially higher calibration accuracy [86].

1.3 Clinical applications

The focus of this thesis is on the technical innovations and development of speckle tracking algorithms. However, to validate the possibility of clinical use of the proposed speckle tracking algorithms, among many possible applications, we have explored three clinical applications that are within the scope of our research group.

1.3.1 Prostate imaging

Prostate Cancer (PCa) is the second most prevalent cancer and the third cause of cancer mortality in North American men [10, 18]. Ultrasound is the most commonly used modality for imaging the prostate, and effective localized treatment of prostate cancer requires accurate visualization of the prostate gland and its surrounding region [43, 79].

The current clinical standard for PCa diagnosis is histological analysis of Transrectal Ultrasound (TRUS) guided biopsy samples (Fig. 1.5a). The 2D nature of the conventional biopsy limits targeting accuracy and does not allow a 3D record of core locations. To alleviate this issue, several groups have proposed 3D targeted biopsy, where the biopsy targets and extracted cores are recorded in the space of a TRUS volume acquired prior to the start of the procedure [12, 13, 127]. TRUS volume reconstruction is generally accurate using a 3D TRUS transducer [12]; however, such transducers are not part of a typical standard-of-care. When a 3D TRUS volume is reconstructed from a swept 2D TRUS, the best results have been reported with a sophisticated mechanical stabilizer [13]. The simplest, least expensive solution that is closest to the current standard-of-care is to use a magnetically tracked, freehand 2D TRUS transducer [27, 127]. However, due to a combination of tracking inaccuracy from proximal metal objects, calibration inaccuracy, and shifts in organ position, there is a significant spatial reconstruction error [68].

Figure 1.5: (a) Prostate biopsy image acquisition schematic. The blue volume represents the prostate gland and the red object is the biopsy needle. (b) Slice view of the prostate. The needle is highlighted with a dashed red line in the ultrasound image.

In Chapter 3, we propose a novel method of speckle tracking to enhance the magnetic tracking measurements of freehand 2D prostate biopsy and provide better visualization results.

1.3.2 Spine imaging

Ultrasound imaging guidance for regional anesthesia is a rapidly growing field. 2D ultrasound imaging has been demonstrated to be an effective tool to decide on the puncture site [53]. It has also been shown to increase the success rate and decrease the clinical complications [101]. Many studies show that ultrasound can be used to identify the vertebral levels [46, 80, 101]. However, interpreting spinal ultrasound images remains a challenge for novice ultrasound operators (i.e., many anesthesiologists).

Figure 1.6: (a) Anatomical structure of the lumbar spine in the paramedian plane, which is used for regional anesthesia needle insertion. (b) Paramedian reslice of an ultrasound spine volume created from transverse images based on the speckle tracking algorithm proposed in Chapter 4. Laminae in the paramedian view are visualized as a wave-like structure.
A 3D panorama from 2D or 3D ultrasound images helps to reslice the volume in a clinically desired plane (e.g., reslicing in the parasagittal plane for depicting the ligamentum flavum (LF)) [96, 100, 106]. Panorama reconstruction requires tracking of the individual ultrasound images and registration of them into a single volume.

Complex bone structures with strong spatially-varying shadows result in a rarity of FDS patches. Consequently, it is not feasible to apply conventional speckle tracking for spine 3D reconstruction because of the overall lack of accuracy [100]. As a result, sensor-based and registration-based methods have been developed to reconstruct spinal panoramic images [11]. The main advantage of sensor-based reconstruction is its high accuracy [96]. However, such systems impose spatial restrictions and increase cost and complexity. Registration-based methods such as 3D SIFT [89], [45] are computationally expensive, and they perform poorly in the presence of out-of-plane displacement because of the significant anatomical changes. In Chapter 4, we evaluate the feasibility of the proposed speckle tracking for spinal 3D reconstruction.

1.3.3 Ocular imaging

Ultrasound has become an indispensable diagnostic tool for many ocular and orbital diseases [33, 44, 64]. Many studies show the ability of ultrasound to perform a pathological evaluation of the eyeball in its posterior segment [108, 110]. Recent studies have shown the benefit of ocular ultrasound in emergency medicine [69, 128]. They suggest the use of ultrasound especially for the exclusion of retinal detachment. There are some pathological characteristics that differentiate retinal detachment from other abnormalities such as vitreous hemorrhages [67]. The clinical manifestation of retinal and vitreous detachments varies depending on where the adhesion is strongest [110], so ocular ultrasound of the entire posterior segment would benefit diagnosis.

Conventional ophthalmologic 2D ultrasound transducers have a small footprint and a wide angular field of view because they are designed to image a large cross section of the posterior segment of the eye. Since ultrasound has poor penetration of the cornea and lens, images are taken when the gaze is deviated toward different positions [33]. However, the evaluation of pathological features using such 2D ultrasound makes the examination more complex and may contribute to false diagnoses [67]. Moreover, the spherical geometry of the eyeball makes it difficult to interpret the pose of each 2D ultrasound image relative to the eye coordinates. The complexity of such an examination comes from the fact that a 3D orientation of the eye should be considered for each image acquisition [64]. In practice, the examiner has to visualize the intersection of the ultrasound plane with a mental 3D model of the eye and its pose to find out where the absolute position of the image is located in the coordinate system of the eye [64]. There is a need for a new solution to this problem, especially since the eyelid is closed during the examination and the eye pose is therefore difficult to discern visually.
So the eye pose is difficult to discern visually.

It has been suggested that a dedicated 3D ultrasound transducer facilitates the assessment of certain posterior ocular abnormalities [128]. However, even in the case of a specialized 3D ultrasound transducer, only a portion of the eye would be imaged from each pose of the eye, and it does not eliminate the problems associated with a gaze-deviated examination [36]. Moreover, this challenge cannot be solved solely by tracking the ultrasound transducer, because the eye changes its pose during the examination.

In order to make a 3D reconstruction of the eyeball, it is necessary to determine the position of the imaging plane with respect to the eyeball as well as the relative pose of adjacent ultrasound frames. We have proposed a novel subject-specific eye pose estimation method using a combination of a camera and an ultrasound transducer to tackle the problem of eyeball tracking with respect to the ultrasound image [5].

In our proposed method of eye positioning, the pose of the examined eye is estimated from the pose of the contralateral eye during the imaging process. A single-camera model-based gaze tracking method has been developed and modified for this application (Fig. 1.7a). Eye parameters for each subject and under different lighting conditions are modeled together with pose estimation in an attempt to achieve high localization accuracy. As shown in Fig. 1.7b, the ultrasound images are finally transformed to the examined eye frame to create a 3D volume. We have shown that the angular view of the image can be increased in the reconstructed 3D ultrasound image.

The proposed method enables reconstruction of a 3D ultrasound image of the posterior segment of the eye from 2D images acquired at different positions of the eye. However, small motion of the eye, even when it is fixating, is inevitable. Speckle tracking methods can compensate for microsaccadic movement of the eye and improve the quality of the 3D reconstruction. This is the subject of our future research.

1.4 Objective and contributions

The overall objective of the thesis is to enable full 6-DoF estimation of poses between pairs of standard ultrasound images solely based on ultrasound image data, with an accuracy comparable to, or better than, spatial sensor tracking. To this end, we investigate the challenges of the current speckle tracking methods and develop algorithms to increase the accuracy of speckle tracking. We evaluate the proposed algorithms and assess the feasibility of their clinical use. In the course of achieving this objective, the following contributions were made:

• Developing a model-based speckle tracking method for ultrasound sampled RF data, which is capable of separating the coherent effect in the correlation.
• Deriving closed-form second-order statistics of the RiIG model.
• Deriving a closed-form solution for the correlation function of two resolution cells with different widths.
• Modeling the effect of the beam profile by defining a linear regression model as a function of axial, lateral and elevation distance.
• Introducing and developing a novel filter-based speckle tracking framework capable of 6-DoF transform estimation for B-mode images.
• Introducing a novel iterative Weighted Maximum Likelihood (WML) NLM filter based on the RiIG statistical model for the noise extraction of the proposed filter-based speckle tracking method.
• Deriving a similarity measure with favorable probabilistic features for three different ultrasound noise models, i.e.,
the additive Gaussian, the multiplicative Gamma, and the RiIG additive-multiplicative mixture model.
• Deriving a test statistic based on the symmetrical Kullback-Leibler distance (KLD) and incorporating it into the proposed similarity measure as a regularization term.
• Using Stein's Unbiased Risk Estimate (SURE) to aggregate the information over the entire image in a weighted-average fashion.

MATLAB® was the main environment in which the algorithms were implemented. Visual C++© was used for volume reconstruction and 3D Slicer was used for visualization of the results.

1.5 Thesis outline

This chapter introduced the research topic, the motivation for performing this research, and the thesis objectives and contributions. Related background, accompanied by an overview of the possible clinical applications of the speckle tracking methods, has also been presented. The rest of this thesis is outlined below.

In Chapter 2, we address the main challenges of ultrasound speckle tracking in out-of-plane displacement estimation, including the rarity of FDS patterns and the effect of spatial variations of the beam profile. To be able to perform speckle tracking not only for FDS patterns but also in the presence of coherency, a novel model-fitting approach to out-of-plane displacement estimation is proposed. In this framework, the ultrasound sampled RF data is modeled based on the RiIG stochastic process of speckle formation. The proposed closed-form second-order statistics of the RiIG model facilitate the application of model-fitting speckle tracking. A closed-form formula is also derived for the correlation coefficient of closely positioned resolution cells with different widths, which increases the accuracy of estimation in comparison with approximated solutions. To account for the effect of spatial changes of the beam profile, a linear regression model is proposed. We show that the accuracy of the proposed out-of-plane displacement estimation in real tissue is considerably enhanced. The flexibility of the proposed method allows a model to be fitted on almost any patch throughout the entire image, its statistical properties to be estimated, and these to be used for the purpose of displacement estimation. This increases the reliability of the out-of-plane displacement estimation. However, such a model-fitting approach requires ultrasound sampled RF data, which is not accessible in many clinical applications. Moreover, statistical parameter estimation is a time-consuming task which hinders real-time implementation of the method.

To tackle the aforementioned drawbacks of the model-fitting approach, we propose a new filter-based framework of speckle tracking based on noise extraction from B-mode images in Chapter 3. This approach provides the possibility of using fast implementations of ultrasound denoising algorithms. A denoising algorithm that is based on a noise formation model and is spatially local is of interest. The Non-Local Means (NLM) denoising filter is an appropriate candidate. A gamma multiplicative noise model is considered and a probability patch-based non-local means (PPBN) filter is used for the task of speckle extraction. Validation tests are first performed on tissue samples obtained ex vivo using a linear motor stage and an optical tracker as gold standards. Further validation is performed on electromagnetically tracked prostate biopsy scans obtained in vivo.
We have shown that the proposed approach produces visually continuous anatomical boundaries in reconstructed 3D US volumes of the prostate.

To benefit from the flexibility and richness of the RiIG noise model in the filter-based approach, we develop an NLM filter-based speckle tracking method based on weighted maximum likelihood estimation of a local RiIG noise model in Chapter 4. In order to increase the accuracy of the 6-DoF transform estimation, we aggregate the local estimations of the displacements over the entire image. Stein's Unbiased Risk Estimate (SURE) is used as a quality measure of the noise estimation. We derive an explicit analytical form of SURE for the RiIG model and use it as a weight factor in aggregating the estimations. This way, more weight is allocated to the estimations with smaller risks. The proposed filter-based speckle tracking framework is formulated and evaluated for three different noise models. The results are compared in terms of the accuracy of the out-of-plane displacement estimation in a set of ex vivo experiments for different tissue types. We show that the proposed speckle tracking method is less tissue-dependent. The proposed method is also evaluated in vivo on the spines of five different subjects. The 6-DoF transform parameters are estimated and compared with the electromagnetic tracker measurements. The results show that the proposed method performs comparably to the magnetic tracker for typical small lateral displacements and tilt rotations between adjacent frames.

Chapter 5 includes a brief summary followed by a discussion on the integration of the speckle tracking algorithms into the clinical workflow. It also includes suggestions for future work.

Chapter 2: Model Fitting Speckle Tracking

The content in this chapter has been adopted from the following papers:
N. Afsham, M. Najafi, P. Abolmaesumi, and R. N. Rohling. Out-of-plane motion estimation based on a Rician-Inverse Gaussian model of RF ultrasound signals: speckle tracking without fully developed speckle. In SPIE Medical Imaging, volume 8320, pages 17/1-17/8, 2012.
N. Afsham, M. Najafi, P. Abolmaesumi, and R. N. Rohling. A generalized correlation-based model for out-of-plane motion estimation in freehand ultrasound. IEEE Transactions on Medical Imaging, 33(1):186-199, 2013.

2.1 Speckle statistics

The basic speckle statistical model used for speckle tracking is the Rayleigh distribution. If the number of scatterers in a resolution cell of the ultrasound beam is sufficiently large, the resulting speckle follows a Rayleigh distribution. This is known as Fully Developed Speckle (FDS) or non-coherent speckle. It is possible to estimate the first and second-order statistics of FDS patches from the ultrasound echo intensity and determine the out-of-plane motion based on the statistical changes of speckle at two different poses [21]. Almost all of the previously proposed speckle tracking methods are based on FDS patterns. One of the main concerns of out-of-plane motion estimation is the rarity of FDS patches in real tissue [48], [103]. This results in finding very few FDS patches through the whole image and discarding the rest of the information available in the image. Also, if there is any coherency in the speckle, or in other words, if the statistics of the speckle deviate from FDS, the out-of-plane estimation shows significant error [48].

To tackle the deviation of the speckle distribution from FDS, one solution is to find patches that are as close as possible to FDS.
Existing methods include various types of FDS identifiers, such as low-order moments of speckle [81, 102], the Kolmogorov-Smirnov (K-S) test as a non-parametric goodness-of-FDS-fit measure [55], and adaptive meshing [103].

An alternative approach is to jointly consider the effect of FDS and non-FDS distributions in the statistical distribution of the speckle pattern. Therefore, our solution to the rarity of FDS patches, and our main contribution in this chapter, is the creation of a framework capable of separating coherent and non-coherent effects in every patch based on a Rician Inverse Gaussian (RiIG) statistical model. Although the statistics of RiIG have been previously proposed in the literature, there is no previous work on the second-order statistics or correlation function of RiIG, which is critical in the context of speckle tracking. Moreover, for the first time, we derive a closed-form formulation for the correlation coefficient of the speckle pattern based on the parameters of the RiIG model, which is our second contribution in this work.

The statistics of speckle have been of interest to researchers for many decades [17, 51, 52]. Speckle is the result of many physical phenomena where a coherent source interacts with an inhomogeneous medium [51]. In the case of ultrasound speckle, various statistical models have been proposed to fit the first-order statistics of the ultrasound echo envelope. From a general point of view, summarized in [32], these statistical models can be reviewed as a compound representation of the ultrasound echo intensity, or "composite models" [9]. The compound process may be composed of a modulated distribution, a modulating distribution, and a modulated parameter. The conclusion in [32] is in favor of the homodyned K-distribution, mainly because it has a meaningful physical description for its parameters. However, there is no explicit analytical expression available for the Probability Density Function (PDF) of the homodyned K-distribution. In the context of speckle tracking, the statistical changes of speckle patterns are required to estimate the out-of-plane motion. This means that the correlation function of the speckle distribution is the key subject of study and, therefore, the second-order statistics of the speckle are of interest. Having a closed-form formulation for the correlation function would facilitate the task of out-of-plane motion estimation. Since there is no explicit PDF of the homodyned K-distribution, it is almost impossible to obtain a closed-form formulation for its second-order statistics. As a result, another statistical model with an explicit analytical expression is desirable.

The RiIG distribution, first introduced in [39], is very close to the homodyned K-distribution in the sense that both are based on the Rice modulated distribution. Instead of modulating with a gamma distribution, the modulating distribution for RiIG is an Inverse Gaussian (IG), which is a good approximation of the gamma distribution [32]. Due to the nature of the IG, heavy-tailed distributions that result from multiple scattering in tissue are well approximated [9].

Another advantage of RiIG is that its posterior distribution has an explicit closed-form formulation [39], which makes it possible to separate the coherent and non-coherent parts of the distribution using a Maximum A-Posteriori (MAP) filter [6].
The effect of the non-coherent part in real tissue is considered to be the main reason for the reported drifts in the previously proposed methods [75], [48]. Moreover, it is also possible to estimate the RiIG parameters from very limited sample data. As opposed to fractional-order statistics, which require large samples to be reliably used for the task of out-of-plane estimation [98], RiIG parameters can be reliably estimated from smaller patches [41], which increases the resolution of the speckle discrimination and hence increases the accuracy of the out-of-plane transform estimation. Considering all of the aforementioned advantages, RiIG is a suitable candidate for the task of out-of-plane motion estimation, and our proposed method is based on the RiIG distribution.

As a result of separating the effects of the coherent and non-coherent parts, it is theoretically possible to use the entire image for the purpose of out-of-plane motion estimation and improve the overall accuracy. We will show that this results in more accurate estimation in the presence of coherency. By knowing the correlation of two patches, the out-of-plane displacement is determined from a correlation curve. This correlation curve is measured in a calibration process using an FDS phantom at known elevation displacements.

2.2 Speckle tracking background

In conventional correlation-based speckle tracking methods, a Gaussian function is fitted to the correlation curve. Techniques such as imposing constraints (e.g., feature continuity in the image [105]), using additional information from beam-steered images [104], and including additional information from a simple tracker [60, 72] have been proposed to generalize the correlation curve so that it predicts better.

One successful method is the Cambridge heuristic method, where the deviation of the correlation curve from its calibrated form is compensated [48]. In that method, the calibrated correlation curve is locally adjusted for every single patch. The adjustment is based on the deviation of in-plane correlation curves from the corresponding nominal curves obtained from an FDS phantom. By knowing the amount of in-plane drift from an ideal FDS curve, it is possible to correct for the out-of-plane drift. It is implicitly assumed that coherency in the axial direction is the same as in the elevation direction. This holds true only for an isotropic medium, which is not valid for most tissue. The idea has been formulated by considering a correlated (coherent) part between patches. The in-plane correlation is measured to find the coherency percentage between two patches and hence to anticipate the correlation of the in-plane non-coherent part from a proper calibration. Based on the fact that the resolution cell width is four times larger in the elevation direction, the coherency effect, and therefore the elevation displacement, is heuristically considered to be four times the in-plane component. In addition to the coherency adjustment, an ad hoc linear correction is applied to modify the determined coherency at different axial depths in order to minimize the estimation error of the image tilt.

An alternative way of adapting the correlation curve is through learning [75]. A parametric regression-based model is used for the purpose of resolution cell width estimation. The parameters of the model are determined in a learning process based on a set of synthetic ultrasound images.
As a result, they obtain a model capable of estimating the Gaussian correlation curve width for any new patch in terms of mean and standard deviation. The estimation is based on a set of given features. Two different sets of features are used: those related to the medium and those related to the pulse profile characteristics. The second-order statistics of the speckle patch, i.e., the squared SNR and the skew, normalized to the corresponding fixed values of the Rayleigh distribution, are the first set of features. Although the second-order statistics of speckle are related to the microstructure of the scatterers, their ability to represent the statistical properties of the medium is limited. The second set of features is the in-plane beam widths normalized with respect to the nominal values of the FDS phantom.

All of the previously proposed methods in some sense use the correlation curve as the estimator of out-of-plane motion. The regression-based method [99] might seem different from correlation-based ones but, in fact, we will show that it is closely related to the correlation of the patches.

If we suppose that the intensity of the second image is linearly dependent on the first one, we may write:

\[ I_2 = b I_1 + a. \tag{2.1} \]

Considering the orthogonality principle [109], the best linear affine estimation of $I_2$ given $I_1$, based on estimation theory, is:

\[ \hat{I}_2 = \langle I_2 | I_1 \rangle = \langle I_2 \rangle + \frac{\mathrm{cov}(I_2, I_1)}{\sigma_{I_1}^2} \left( I_1 - \langle I_1 \rangle \right). \tag{2.2} \]

$\hat{I}_2$ is the linear estimation of $I_2$, $\langle \cdot \rangle$ is an averaging operator, and $\sigma_{I_1}^2$ is the variance of $I_1$. Suppose $I_1$ and $I_2$ are both exponentially distributed (when the amplitude is Rayleigh distributed, the intensity is exponentially distributed [123]) with parameter $\sigma_I^2$ [99]:

\[ \langle I_1 \rangle = \langle I_2 \rangle = 2\sigma_I^2, \tag{2.3a} \]
\[ \langle I_1^2 \rangle = \langle I_2^2 \rangle = 8\sigma_I^4, \tag{2.3b} \]
\[ \mathrm{cov}(I_2, I_1) = 4\rho\sigma_I^4, \tag{2.3c} \]

where cov is the covariance function and ρ is the correlation coefficient. By using (2.3) and substituting in (2.2):

\[ \hat{I}_2 = \langle I_2 | I_1 \rangle = \underbrace{\rho}_{b} \, I_1 + \underbrace{2\sigma_I^2 - 2\rho\sigma_I^2}_{a}. \tag{2.4} \]

Obviously, the regression coefficient of $I_2$ as a function of $I_1$ is the same as the correlation coefficient of $I_1$ and $I_2$, ρ.

Instead of correcting or learning the correlation curve, we provide a generalized closed-form formulation for the correlation of the non-coherent part of every patch. We also provide a plausible explanation of how the experimental correlation curve deviates from its theoretical form and, as our third contribution, we propose a linear regression model for the resolution cell width of the ultrasound beam to account for the effect of beam profile changes. A closed-form solution is provided for the correlation of two beam profiles with different resolution cell widths, which can be used to compensate for the effect of out-of-plane rotation.

In summary, the main advances of the model-fitting speckle tracking framework are:

1. It is capable of separating the coherent and non-coherent parts of the echo signal statistically whereas, in the previous work, the coherency effect is estimated in a heuristic way by inferring it from the in-plane correlation, which also requires the assumption of an isotropic medium. In this work, for the first time, the second-order statistics of the RiIG model are used for this purpose.

2. The main advantage of our method over the learning-based approach is that it gives a closed-form formulation for the correlation coefficient based on the statistical properties of each individual patch, so there is no need for an extensive learning procedure based on a large synthetic or real data set.
3. The effect of the beam profile is modeled by defining a linear regression model for each resolution cell. In a calibration procedure, the resolution cell's effective width is determined as a function of axial, lateral and elevation distance. For the sake of simplicity, we have used the same σy (the ultrasound resolution cell width in the elevation direction) for two corresponding patches.

4. Our formulation provides the possibility of using different resolution cell widths for the two considered patches. This can be helpful in the presence of in-plane motion or out-of-plane rotation, which are considered major sources of error in out-of-plane motion estimation [59]. Our formulation is also closed-form.

Our proposed method is evaluated in the same type of experiments as the two other aforementioned methods and the same criteria are used to assess the results. We compare our method to an implementation of the Cambridge heuristic method in [48], and to Laporte and Arbel's recently developed method [75].

2.3 Proposed correlation-based speckle tracking

In this section, we start by explaining the out-of-plane motion estimation framework in general, which builds upon earlier work as noted.

As we explained in Section 1.2.2, a resolution cell is the volume of space occupied by the ultrasound pulse. In practice, it is not possible to differentiate among the scatterers residing in the same resolution cell. However, if two resolution cells are overlapping, it is possible to estimate their distance based on the correlation between their received signals, due to the effect of the same scatterers in the overlapping space. To be able to formulate correlation as a function of the overlap of the resolution cells, it is necessary to better understand other sources that may change the correlation of the two received signals, such as statistical changes of the medium and deviations of the beam profile from its ideal shape.

2.3.1 Correlation formulation

In this section, the correlation function of the received ultrasound signal based on a Rician Inverse Gaussian (RiIG) statistical model is derived using the theoretical ultrasound beam shape of a rectangular array.

The ultrasound echo signal r(~X), often known as RF data, can be viewed as the output of a linear system, where the input is the spatial variation in the physical properties of the medium [88]:

\[ r(\vec{X}_0, t) = g(-\vec{X}, t) * f_m(\vec{X}) \big|_{\vec{X} = \vec{X}_0}. \tag{2.5} \]

$g(\vec{X})$ is the Point Spread Function (PSF) and $f_m(\vec{X})$ is the spatial variation of the medium. $\vec{X}$ indicates a position vector in 3D space and $*$ represents spatial convolution. $\vec{X}_0$ is the position at which the RF signal is received.

Based on linear system theory, and considering linear shift invariance [131], the resulting autocorrelation function of the echo signal going through a system $g(\vec{X})$ can be written as [122]:

\[ R(\Delta \vec{X}) = R_m(\Delta \vec{X}) * g(-\Delta \vec{X}) * g^*(\Delta \vec{X}), \tag{2.6} \]

where $R_m$ is the autocorrelation function of the physical variation in the medium and $g^*$ is the complex conjugate of $g$.

The PSF of a rectangular linear array transducer can be divided into in-plane and out-of-plane parts [21]:

\[ g(\vec{X}) = g(x,z)_{\text{in-plane}} \; g(y)_{\text{out-of-plane}} = K \, g(x,z)_{\text{in-plane}} \, \mathrm{sinc}^2\!\left(\frac{y}{\sigma_y}\right), \tag{2.7} \]

where x, y, and z indicate the lateral, elevational and axial directions, respectively, $\mathrm{sinc}(y) = \sin(\pi y)/(\pi y)$, and σy is the ultrasound resolution cell width in the elevation direction, equal to $\lambda_0 z / D_y$. $D_y$ refers to the crystal height in the elevation direction, z is the axial distance, λ0 is the ultrasound wavelength at the central frequency, and K represents a constant factor.
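To give a sense of the scale of the idealized elevational profile in (2.7), the following minimal MATLAB sketch evaluates σy = λ0 z/Dy over depth and the corresponding sinc² profile. The speed of sound, centre frequency and crystal height Dy used here are illustrative assumptions rather than the specifications of the transducer used later in this chapter.

% Minimal sketch: idealized elevational beam profile of (2.7).
% The numbers below are illustrative assumptions (not exact transducer specs).
c  = 1540;            % speed of sound in soft tissue (m/s)
f0 = 10e6;            % assumed centre frequency (Hz)
Dy = 4e-3;            % assumed crystal height in elevation (m)
lambda0 = c / f0;     % wavelength at the centre frequency (m)

z = (10:10:40) * 1e-3;          % axial depths (m)
sigma_y = lambda0 * z / Dy;     % elevational resolution cell width (m)
fprintf('depth %4.0f mm  ->  sigma_y = %.2f mm\n', [z*1e3; sigma_y*1e3]);

% Idealized elevational profile g(y) ~ sinc^2(y/sigma_y) at a chosen depth.
sincf = @(x) sin(pi*x)./(pi*x + (x==0)) + (x==0);   % sinc with sinc(0) = 1
y  = linspace(-2e-3, 2e-3, 401);                    % elevation coordinate (m)
gy = sincf(y / sigma_y(2)).^2;                      % profile at z = 20 mm
fprintf('first zero of the profile at z = 20 mm: %.2f mm\n', sigma_y(2)*1e3);
fprintf('fraction of profile energy within |y| <= sigma_y: %.2f\n', ...
        sum(gy(abs(y) <= sigma_y(2))) / sum(gy));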
For the purpose of out-of-plane motion estimation, the autocorrelation function of the output of the system in the elevation direction is needed. By combining (2.6) and (2.7), one may write:

\[ R(\Delta y) = K^2 R_m(\Delta y) * \mathrm{sinc}^2\!\left(\frac{\Delta y}{\sigma_{y_1}}\right) * \mathrm{sinc}^2\!\left(\frac{\Delta y}{\sigma_{y_2}}\right), \tag{2.8} \]

where Δy is the elevation distance between two resolution cells, and σy1 and σy2 refer to the resolution cell widths at each pose, respectively. Equation (2.8) can be simplified by applying a term-wise Fourier transform, assuming min(σy1, σy2) = σy1:

\[ R(\Delta y) = R_m(\Delta y) * \frac{K^2 \sigma_{y_1}^2 \sigma_{y_2}}{(\pi \Delta y)^2} \left( \frac{\sigma_{y_1}+\sigma_{y_2}}{2\sigma_{y_1}} - \mathrm{sinc}\!\left(\frac{2\Delta y}{\sigma_{y_2}}\right) + \frac{\sigma_{y_1}-\sigma_{y_2}}{2\sigma_{y_1}} \cos\!\left(\frac{2\pi \Delta y}{\sigma_{y_2}}\right) \right). \tag{2.9} \]

For the sake of simplicity, we assume that σy1 and σy2 are the same for the two positions. However, it is worth mentioning that in-plane displacement or out-of-plane rotation may cause a considerable change in the cell widths. As shown in Fig. 2.1c, a 30% difference in the resolution cell width will cause ∼20% underestimation of an out-of-plane displacement of 0.5 mm.

Figure 2.1: The effect of different resolution cell widths on the out-of-plane motion estimation. (a) Two overlapping resolution cells in the presence of out-of-plane motion. (b) The overlapping volume (V2) of a resolution cell with a rotated cell is equivalent to its overlapping volume (V1) with a smaller resolution cell. (c) The blue curve shows the correct autocorrelation function with a 30% difference between the resolution cell widths of the two overlapping cells (2.9), and the red curve shows the autocorrelation function obtained when both resolution widths are taken as the smaller one. Clearly, using the red curve instead of the blue one causes ∼20% underestimation of the out-of-plane motion. (d) Overlapping volume percentage versus elevation distance for rotated and unequal resolution cells, excluding offset.

If σy1 = σy2 = σy, then Equation (2.9) becomes:

\[ R(\Delta y) = R_m(\Delta y) * \frac{K^2 \sigma_y^3}{(\pi \Delta y)^2} \left( 1 - \mathrm{sinc}\!\left(\frac{2\Delta y}{\sigma_y}\right) \right). \tag{2.10} \]

As mentioned earlier, if a large number of sub-wavelength scatterers are randomly distributed in a resolution cell, the microstructure of the imaging sample is uncorrelated at each pose. Under such conditions, due to the averaging over uniformly distributed phases, the following is inferred:

\[ R_m^{\mathrm{Rayleigh}}(\Delta y) = 2\sigma_d^2 \, \delta(\Delta y). \tag{2.11} \]

This condition is the so-called Rayleigh condition, and σd² is the variance of the diffuse part in the medium. It can also be considered as the squared average of the echo signal over the region of interest [21], [122]. In this case, the autocorrelation of the output process only reveals information about the PSF (g(y)) rather than information about the medium, and it can be used to estimate the out-of-plane motion (Δy). Under this condition, Equation (2.10) turns into the correlation function [122], conventionally approximated by a Gaussian curve [21]:

\[ R(\Delta y) = \frac{2\sigma_d^2 \sigma_y^3 K^2}{(\pi \Delta y)^2} \left( 1 - \mathrm{sinc}\!\left(\frac{2\Delta y}{\sigma_y}\right) \right). \tag{2.12} \]

2.3.2 Correlation deviation

The width of the ultrasound resolution cell, σy, is often approximated from the standard deviation of the Gaussian curve fitted to the obtained correlation curve. It has been reported that the normalized intensity correlation function of FDS patches can be well approximated by a Gaussian function [21]. However, some reports imply that there is a discrepancy between the Gaussian model fitted on the calibration data and the normalized correlation curves.
It has been reported in [20] that the Gaussian approximation deviates from the measured curve, especially for elevation distances of more than 0.55 mm, and the difference can be as much as 0.25 mm over a 1 mm elevation range. It has also been suggested to use a four-parameter offset-scaled Gaussian model to get a better-fitted Gaussian function over a wider range of elevation distances [59]. Part of this difference is due to the approximation of the convolution of two squared sinc functions, which itself includes a sinc function (2.10). The error introduced by this approximation increases significantly where there is coherency (Fig. 2.2).

Figure 2.2: The effect of the Gaussian approximation. The solid blue line is the theoretical sinc-based function for an ideal FDS (2.12) and the dashed blue line shows its Gaussian approximation. The solid red line shows the correlation curve in the presence of coherency (2.27) and the dashed red line is its Gaussian approximation. It is obvious that the Gaussian approximation fails in modeling the correlation function if coherency is present.

Another possible cause of this difference is the direct effect of beam profile changes. For a linear rectangular array, the idealized beam shape in the Fraunhofer region is given by a sinc function along the elevation direction [123]. If the beam shape deviates from its theoretical sinc pattern, it directly affects the correlation curve. A model has been previously proposed for the local shape of the resolution cell based on the decorrelation curves in the axial direction [55]. In our proposed formulation, the beam shape effect is taken into consideration by defining the resolution cell width as a linear function of the axial, lateral and elevational directions. The change of the elevation beam profile has been considered previously in other research areas, such as ultrasound spatial calibration [22] and brachytherapy needle insertion [92], to improve performance. To the best of our knowledge, this is the first time that the dependency of σ on elevation displacement has been considered for the purpose of speckle tracking. We believe that by considering σy as a function of the elevation displacement, the general shape of the correlation curve will change in a way that is closer to the real measurements compared to its conventional Gaussian form.

Another issue that makes speckle tracking a very challenging subject is the effect of out-of-plane rotation on the correlation values [59]. The reduction in the correlation value and the change in the offset of the correlation curve occur for several reasons, including: 1) the change in the amount of overlapping volume of two adjacent resolution cells, which changes the integral boundaries of the convolution, 2) the change of the beam profile due to the spatial variance of the beam, and 3) the change of the location of the scatterers with respect to the beam profile. Since there is no analytical solution for the correlation of the rotated beam profile, we propose to model the effect of changes in the correlation value of overlapping rotated cells by considering an effective σy for the rotated plane instead of an equal σy. For a better understanding of the effect of considering two different resolution cells, we have simulated the case of two resolution cells with different sizes.
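Such comparisons can be reproduced directly from the closed forms above. The following minimal MATLAB sketch evaluates the normalized elevational correlation under the Rayleigh condition (2.11), once with equal cell widths (2.10) and once with a 30% width mismatch (2.9), and shows how inverting the mismatched measurement with the equal-width curve underestimates the displacement. The width values are illustrative assumptions and no offset term is included.

% Minimal sketch: effect of unequal resolution cell widths on the elevational
% correlation, using the closed forms (2.9) and (2.10) under the Rayleigh
% condition (2.11).  Width values are illustrative assumptions.
sincf = @(x) sin(pi*x)./(pi*x + (x==0)) + (x==0);   % sinc with sinc(0) = 1

sy1 = 0.50;          % smaller elevational cell width (mm), assumed
sy2 = 1.30 * sy1;    % 30% larger width at the second pose (mm), assumed
dy  = linspace(1e-3, 1.2, 600);                     % elevation distances (mm)

% Equal widths, eq (2.10)/(2.12): normalized so that rho(0) = 1
% (the limit of (1 - sinc(2*dy/s))/(pi*dy)^2 as dy -> 0 is 2/(3*s^2)).
rho_eq = @(d, s) (3*s.^2/2) .* (1 - sincf(2*d./s)) ./ (pi*d).^2;

% Unequal widths, eq (2.9): normalize numerically by the value near dy = 0.
bracket = @(d) (sy1+sy2)/(2*sy1) - sincf(2*d./sy2) ...
             + (sy1-sy2)/(2*sy1) .* cos(2*pi*d./sy2);
r_un = bracket(dy) ./ (pi*dy).^2;
r_un = r_un / (bracket(1e-6) / (pi*1e-6)^2);        % rho_unequal(0) = 1

% If the correlation actually follows the unequal-width curve but is inverted
% with the equal-width curve (smaller sigma), the displacement is underestimated.
dy_true = 0.5;                                      % mm
rho_obs = interp1(dy, r_un, dy_true);               % "measured" correlation
dy_est  = fminbnd(@(d) (rho_eq(d, sy1) - rho_obs)^2, 1e-3, 1.2);
fprintf('true dy = %.2f mm, estimated dy = %.2f mm (%.0f%% underestimation)\n', ...
        dy_true, dy_est, 100*(dy_true - dy_est)/dy_true);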
Fig. 2.1c compares the correlation curve for equal σy with the case of a 30% difference in the resolution cell width and a 0.2 mm offset, which causes ∼20% underestimation of an out-of-plane displacement of 0.5 mm. The reduction in the correlation value due to different resolution cell sizes is similar to the case of rotated resolution cells. Fig. 2.1b shows that, given the overlapping volume of rotated resolution cells (V2), it is possible to find an equivalent resolution cell size for two non-rotated cells with the same overlapping volume (V1). Fig. 2.1d shows the overlapping volume percentage versus elevation distance for rotated and unequal resolution cells.

In the presence of out-of-plane rotation, the 6-DoF transformation between two adjacent frames can first be calculated based on the immediate result of out-of-plane displacement estimation for accepted patches, which is the current approach followed in this work. Then, the estimated out-of-plane rotation parameters can be used to correct the displacement estimation using the unequal resolution cell width formulation. Based on the calibrated offset at every depth for different angles, we can estimate the smaller resolution cell widths using the Ellipsoidal Toolbox for MATLAB [71] and update the displacement estimations. The transformation is then recalculated, which results in updated rotation angles. This procedure can be repeated until the change in the rotation angles is less than a desired threshold. Such a full 6-DoF calculation problem will be the subject of future work.

2.3.3 RiIG statistical modeling

Almost all of the previously proposed speckle tracking methods, except for [48], are based on FDS detection, so their performance decreases in the presence of coherency. Coherency can be a result of the occurrence of a periodic pattern in the sample tissue or of highly reflective areas. If there is a way to model the process of RF echo formation, then it is possible to use it for the purpose of out-of-plane motion estimation. The basic form of this kind of formulation was first proposed by adding a fixed coherent component to the Rayleigh model [123], but no complete formulation has been provided for the purpose of out-of-plane motion estimation. In this section, we derive the closed-form formulation of the RiIG correlation coefficient in detail, which was previously introduced in [6].

The echo envelope signal is the superposition of all the multiple attenuated, phase-shifted or frequency-shifted responses of the transmitted signal that carry the information about the medium. This process can be described as the summation of complex phasors, or so-called random walks, assuming single-scattering interaction [52]:

\[ A e^{j\theta} = \sum_{i=1}^{N} |a_i| e^{j\phi_i} = X + jY, \tag{2.13} \]

where A and θ are the amplitude and the phase of the complex RF signal, respectively, and N is the number of scatterers in a resolution cell. The number of scatterers (N) in a resolution cell for a randomly distributed medium follows a Poisson process [27]. However, the number of scatterers is not always sufficient to construct a Rayleigh-distributed echo signal. When the number of random walks is itself random, a random process is obtained. In the Rayleigh situation, the I (in-phase) and Q (quadrature) components of such a process, X and Y, follow zero-mean Gaussian distributions. If the variances of the I and Q components of the RF signal are Γ-distributed, the RF signal is K-distributed [41].
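The Rayleigh limit of the random-walk model (2.13) is straightforward to reproduce numerically. The following minimal MATLAB sketch sums phasors with uniformly distributed phases for a fixed, large number of scatterers per cell (a simplifying assumption; the thesis treats N as Poisson-distributed) and checks the amplitude SNR against the Rayleigh value.

% Minimal sketch: random-walk phasor sum (2.13) approaching the Rayleigh limit.
% Assumptions: a fixed (large) number of scatterers per cell and illustrative
% scatterer strengths; the thesis models N as Poisson-distributed.
rng(0);
nCells = 2e4;       % number of independent resolution cells simulated
N      = 100;       % scatterers per cell (fixed here for simplicity)

amp   = abs(randn(N, nCells));            % scatterer strengths |a_i|
phase = 2*pi*rand(N, nCells);             % uniformly distributed phases
E     = sum(amp .* exp(1j*phase), 1);     % complex sum X + jY for each cell
A     = abs(E);                           % echo envelope amplitude

% For fully developed speckle the amplitude is Rayleigh distributed and its
% SNR (mean/std) equals sqrt(pi/(4-pi)) ~ 1.91, independent of the medium.
snr_sim      = mean(A) / std(A);
snr_rayleigh = sqrt(pi / (4 - pi));
fprintf('simulated amplitude SNR = %.3f, Rayleigh prediction = %.3f\n', ...
        snr_sim, snr_rayleigh);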
In a more general form, X and Y are defined as compound normal variables with a random variance (Z):

\[ X = \mu_x + \beta_x Z + \sqrt{Z} N_x, \tag{2.14a} \]
\[ Y = \mu_y + \beta_y Z + \sqrt{Z} N_y, \tag{2.14b} \]

where Nx and Ny are normally distributed with an identity covariance matrix. If Z is Γ-distributed, with µx = µy = 0, R has a Rician distribution; if βx = βy = 0, R becomes homodyned K-distributed [40]. There is no explicit analytical expression for the homodyned K-distribution PDF and its second-order statistics; hence, it is more difficult to use it for the purpose of out-of-plane motion estimation. Considering the Inverse Gaussian (IG) distribution for Z and zero µx and µy, the resulting distribution is the RiIG [39]. The physical interpretation of RiIG is that the complex RF signal is a combination of two independent Brownian motions with drifts βx and βy and the Inverse Gaussian (IG) first passage time [41]. The drift part models the presence of coherency in the echo signal. In this statistical process, Z models the speckle-free part of the signal and Nx, Ny correspond to the FDS part. A previous study on RiIG shows that it outperforms the K and Nakagami distributions in modeling the ultrasound echo signal [41]. If Z1 = Z2 = 1, it is possible to derive the formulation for the K-distribution in the same manner. In the more general form of the RiIG process, the autocorrelation function can be written as:

\[ R_m(\Delta y) = \left\langle \left(\beta_x Z_1 + \sqrt{Z_1} N_{x_1} + j(\beta_y Z_1 + \sqrt{Z_1} N_{y_1})\right) \left(\beta_x Z_2 + \sqrt{Z_2} N_{x_2} + j(\beta_y Z_2 + \sqrt{Z_2} N_{y_2})\right)^{*} \right\rangle = \left\langle \sqrt{Z_1 Z_2} \right\rangle \left( \langle N_{x_1} N_{x_2} \rangle + \langle N_{y_1} N_{y_2} \rangle \right) + \beta^2 \langle Z_1 Z_2 \rangle, \tag{2.15} \]

where ⟨·⟩ stands for the expected value of a random variable. We assume that βx and βy are the same for the closely positioned frames, and β = √(βx² + βy²). Subscripts 1 and 2 represent the process at positions y1 and y2, respectively. Considering Z1 and Z2 fully correlated for two adjacent frames, without loss of generality it can be assumed that Z2 = kZ1 or Z1 = kZ2 such that 0 ≤ k ≤ 1. Considering the fact that Z models the variance of the first passage time of the process, it is plausible to suppose that Z is independent of the normally distributed part. Under these circumstances:

\[ R_m(\Delta y) = \sqrt{k} \langle Z \rangle \left( \langle N_{x_1} N_{x_2} \rangle + \langle N_{y_1} N_{y_2} \rangle \right) + k \beta^2 \langle Z^2 \rangle. \tag{2.16} \]

Since the Rayleigh condition applies to the normal part of the RiIG distribution and the variances of these normal distributions are unity, we can replace ⟨Nx1Nx2⟩ and ⟨Ny1Ny2⟩ by δ(Δy) (see [122] for more details on the derivation of the Rayleigh and K correlation functions):

\[ R_m(\Delta y) = 2\sqrt{k} \langle Z \rangle \, \delta(\Delta y) + k \beta^2 \langle Z^2 \rangle. \tag{2.17} \]

In this new correlation function, the variance of the diffuse part is replaced by 2√k⟨Z⟩ (compare with (2.11)) and kβ²⟨Z²⟩ represents the effect of the coherent part. The k-th order moments of Z with the IG distribution are [41]:

\[ \mu_Z^k = \langle Z^k \rangle = \left(\frac{\delta}{\gamma}\right)^{k} \frac{K_{k-\frac{1}{2}}(\delta\gamma)}{K_{-\frac{1}{2}}(\delta\gamma)}, \tag{2.18} \]

where δ and γ are the parameters of the IG distribution and K_k(·) is the modified Bessel function of the second kind and order k. So the first and second-order moments of Z are as follows:

\[ \langle Z \rangle = \frac{\delta}{\gamma}, \tag{2.19a} \]
\[ \langle Z^2 \rangle = \frac{\delta^2}{\gamma^2}\left(1 + \frac{1}{\delta\gamma}\right). \tag{2.19b} \]

After using (2.19) in Equation (2.16), we have:

\[ R(\Delta y) = K^2 k \beta^2 \sigma_y^2 \frac{\delta^2}{\gamma^2}\left(1 + \frac{1}{\delta\gamma}\right) + K^2 \sigma_y^3 \frac{\delta}{\gamma} \frac{2\sqrt{k}}{(\pi\Delta y)^2}\left(1 - \mathrm{sinc}\!\left(\frac{2\Delta y}{\sigma_y}\right)\right) = R_c + R_r(\Delta y). \tag{2.20} \]

R_c is the coherent part of the correlation and R_r is the Rayleigh part, from which the out-of-plane displacement can be estimated. To eliminate the scaling factor K, it is better to use the correlation coefficient instead:

\[ \rho = \frac{C_{out}(\Delta y)}{C_{out}(0)} = \frac{R(\Delta y) - \left(\mu_m(y_1) * g(y)\right)\left(\mu_m(y_2) * g(y)\right)}{\sigma^2}, \tag{2.21} \]
where C(·) is the covariance function, µm(y) is the mean of the physical process at position y, and σ² is the variance of the output process.

We define the process A as:

\[ A = \beta_x Z + \sqrt{Z} N_x + j\left(\beta_y Z + \sqrt{Z} N_y\right), \tag{2.22} \]

and assume it is a stationary process up to its second order. By using the explicit terms for the RiIG moments [41], we can get the first and second-order moments of the process A as:

\[ \langle A \rangle = (\beta_x + j\beta_y)\langle Z \rangle = (\beta_x + j\beta_y)\frac{\delta}{\gamma}, \tag{2.23} \]
\[ \langle A^2 \rangle = \beta^2 \langle Z^2 \rangle + 2\langle Z \rangle = \beta^2 \frac{\delta^2}{\gamma^2}\left(1 + \frac{1}{\delta\gamma}\right) + 2\frac{\delta}{\gamma}, \tag{2.24} \]

where β = √(βx² + βy²). Since

\[ \rho = \frac{C_{out}(\Delta y)}{C_{out}(0)}, \tag{2.25} \]

to compute the correlation coefficient, we first need to compute the covariance function:

\[ C(\Delta y) = R(\Delta y) - \left(\mu_m(y_1) * g(y)\right)\left(\mu_m(y_2) * g(y)\right) = K^2 k \beta^2 \sigma_y^2 \frac{\delta^2}{\gamma^2}\left(1 + \frac{1}{\delta\gamma}\right) + K^2 \sigma_y^3 \frac{\delta}{\gamma} \frac{2\sqrt{k}}{(\pi\Delta y)^2}\left(1 - \mathrm{sinc}\!\left(\frac{2\Delta y}{\sigma_y}\right)\right) - K^2 k \beta^2 \sigma_y^2 \frac{\delta^2}{\gamma^2} = K^2 k \beta^2 \sigma_y^2 \frac{\delta}{\gamma^3} + K^2 \sigma_y^3 \frac{\delta}{\gamma} \frac{2\sqrt{k}}{(\pi\Delta y)^2}\left(1 - \mathrm{sinc}\!\left(\frac{2\Delta y}{\sigma_y}\right)\right). \tag{2.26} \]

We can combine (2.26) with (2.25) and derive the closed form of the RiIG correlation coefficient:

\[ \rho = \frac{k\beta^2 + 2\gamma^2\sqrt{k}\,\sigma_y \dfrac{1 - \mathrm{sinc}\!\left(\frac{2\Delta y}{\sigma_y}\right)}{(\pi\Delta y)^2}}{\beta^2 + \dfrac{4\gamma^2}{3\sigma_y}}. \tag{2.27} \]

Note that for Δy = 0, the two signals are the same and Z1 = Z2, so k = 1. The maximum value of the function (1 − sinc(2Δy/σy))/(πΔy)² equals 2/(3σy²), so for k = 1, ρ is one, and for any other value of 0 ≤ k ≤ 1, ρ is less than one for all values of Δy, as expected. ρ is directly calculated from the ultrasound sampled RF data. All the parameters on the right side of (2.27), including k, β, and γ, can be estimated from the data based on Expectation Maximization (EM), as explained in [39, 41]. σy describes the elevational width of the imaging system PSF at different depths, which can be calibrated from a speckle phantom. The calibration approach proposed by [48] has been followed in this work.

The mathematical expression for the MAP estimation of the coherent part (Z) of the sampled RF data can be written as:

\[ \hat{z} = \arg\max_z \, p_{Z|R}(z|r). \tag{2.28} \]

Since the posterior distribution of Z, p_{Z|R}(z|r), is given in closed form [39], ẑ is explicitly determined [6]. For details and discussion on the parameter estimation of the RiIG model, refer to [41].

2.3.4 Power transformation

We found that by power transforming the RF data, the distribution of the transformed data is less skewed, and this improves the validity of the Pearson correlation coefficient. It also results in better RiIG model fitting and a greater number of accepted patches. It should be considered that by power transformation of the RF data, the correlation curve will also change. However, power transformation also changes the statistical parameters of the RiIG model, and as long as the model fits the distribution of the transformed signal, it can be used for the purpose of out-of-plane motion estimation. Fig. 2.3 shows the effect of power transformation on the speckle distribution. The effect of power transformation on the discrimination properties of ultrasound statistics has been explored previously [98]. The simulation results based on the homodyned K-distribution in [98] show that the meaningful variation of the patch SNR, divided by the noise in measuring the SNR, reaches a maximum around two different powers, i.e., 0.25 and 1.8. Since RiIG is a generalization of the homodyned K-distribution, we used the same power transform and selected 0.25 over 1.8, as it is less skewed.
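To make the use of (2.27) concrete, the following minimal MATLAB sketch computes the Pearson correlation coefficient of a power-transformed patch pair and then numerically inverts (2.27) for Δy. The RiIG parameters k, β, γ and the calibrated σy are taken as given, illustrative values here, standing in for the EM fitting and calibration described above, and the patches themselves are synthetic placeholders.

% Minimal sketch: estimating the out-of-plane displacement of one patch pair
% from the closed-form RiIG correlation coefficient (2.27).
% Assumptions: the RiIG parameters (k, beta, gamma) and the calibrated sigma_y
% are given; here they are illustrative values, not fitted by EM.
sincf = @(x) sin(pi*x)./(pi*x + (x==0)) + (x==0);

% Closed-form rho of (2.27) as a function of the elevation distance dy (mm).
rho27 = @(dy, k, beta, gam, sy) ...
    (k*beta^2 + 2*gam^2*sqrt(k)*sy .* (1 - sincf(2*dy./sy)) ./ (pi*dy).^2) ...
    ./ (beta^2 + 4*gam^2/(3*sy));

% Two corresponding RF patches (columns of samples); synthetic stand-ins here.
rng(1);
patch1 = abs(randn(735,1)) + 0.2;                  % placeholder for a 2x2 mm patch
patch2 = 0.6*patch1 + 0.4*(abs(randn(735,1)) + 0.2);

% Power transform (Section 2.3.4) and Pearson correlation coefficient (2.30).
p1 = patch1.^0.25;  p2 = patch2.^0.25;
C  = corrcoef(p1, p2);
rho_meas = C(1,2);

% Invert (2.27) numerically for dy, with assumed parameters.
k = 0.95; beta = 0.8; gam = 1.5; sy = 0.6;         % illustrative values
cost   = @(dy) (rho27(dy, k, beta, gam, sy) - rho_meas)^2;
dy_hat = fminbnd(cost, 1e-3, 2);                   % search over 0-2 mm
fprintf('measured rho = %.3f  ->  estimated out-of-plane distance = %.2f mm\n', ...
        rho_meas, dy_hat);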
Figure 2.3: The effect of power transformation on the RF amplitude distribution. The green line shows the PDF of the RF amplitude of a 2×2 mm² patch of an ex vivo bovine muscle sample. The blue curve is the simulated PDF based on the estimated statistical properties of the patch and the red curve is the estimated PDF of the coherent part (Z). (a) No power transformation of the data. (b), (c), and (d) are transformed to the power of 0.5, 0.25, and 0.1, respectively. The power transform of 0.25 is chosen in this work. This power transformation enhances the separation of the components as well as the performance of the RiIG model fitting.

Figure 2.4: Overview of the proposed framework.

2.3.5 Overview of the proposed method

Fig. 2.4 shows an overview of the proposed method. First of all, the acquired data are power transformed and normalized. The next step is to obtain the spatial variation of the resolution cell width, which is referred to henceforth as the σy model (σy(x, Δy, z)). For this purpose, a linear regression model is considered to model the spatial change of the beam profile. The σy model is as follows:

\[ \sigma_y(x, \Delta y, z) = \sigma_0 + a x + b \Delta y + c z, \tag{2.29} \]

where x and z are the lateral and axial positions, respectively.

The proposed method of out-of-plane motion estimation is capable of dealing with non-coherent speckle; therefore, it allows the parameters of the σy model to be estimated from different kinds of real tissue. In a set of experiments called calibration, where the elevation distances between ultrasound frames are known, the RiIG model is fitted on a pair of patches and the statistical parameters are estimated. The KS-test is used to check how close these two distributions of Z are within the 95% confidence intervals. The null hypothesis is that the sampled RF data distribution and the simulated distribution are from the same continuous distribution. The alternative hypothesis is that they are from different continuous distributions. The test rejects the null hypothesis at the 5% significance level of the acquired p-value. If both models fit well (i.e., the KS-test fails to reject the null hypothesis), the patches are considered accepted, and the Pearson correlation coefficient is calculated as follows:

\[ \rho = \frac{\sum_i \left(A_s(i) - \bar{A}_s\right)\left(A_t(i) - \bar{A}_t\right)}{\sqrt{\sum_i \left(A_s(i) - \bar{A}_s\right)^2 \sum_i \left(A_t(i) - \bar{A}_t\right)^2}}, \tag{2.30} \]

where A is the amplitude of the RF signal and Ā is its average over the patch; s and t indicate the locations of the two corresponding patches.

By having ρ, Δy and the statistical parameters of the patches acquired from RiIG model fitting, i.e., β, γ, and k, it is possible to estimate σy at the center of each patch from (2.27). Once σy is determined for all axial and lateral positions at different known elevation displacements, the σy model parameters are determined from (2.29). For any new data sample, the out-of-plane motion is estimated from (2.27), considering the acquired σy model and the measured ρ from (2.30).
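The calibration step that turns per-patch σy estimates into the σy model (2.29) is an ordinary least-squares fit. The following minimal MATLAB sketch illustrates it with synthetic per-patch values standing in for the σy estimates obtained from (2.27) at known elevation displacements.

% Minimal sketch: fitting the sigma_y regression model (2.29) by least squares.
% The per-patch sigma_y values below are synthetic stand-ins for the estimates
% obtained from (2.27) during calibration at known elevation displacements.
rng(2);
nPatches = 500;
x  = 38 * rand(nPatches,1);          % lateral patch positions (mm), assumed range
z  = 5 + 35 * rand(nPatches,1);      % axial patch positions (mm), assumed range
dy = 0.1 + 1.4 * rand(nPatches,1);   % known elevation displacements (mm)

% Synthetic "measured" sigma_y with some noise (coefficients taken from the
% mean values of Table 2.1, for illustration only).
sigma_meas = 0.15 + 4.5e-4*x + 0.7*dy + 9.3e-3*z + 0.02*randn(nPatches,1);

% Linear regression sigma_y = sigma_0 + a*x + b*dy + c*z  (eq. 2.29).
X     = [ones(nPatches,1), x, dy, z];
theta = X \ sigma_meas;              % [sigma_0; a; b; c]
fprintf('sigma_0 = %.3f mm, a = %.2e, b = %.3f, c = %.2e\n', theta);

% The fitted model is then evaluated at any patch centre during estimation:
sigma_at = @(x0, dy0, z0) theta(1) + theta(2)*x0 + theta(3)*dy0 + theta(4)*z0;
fprintf('sigma_y at x = 19 mm, dy = 0.5 mm, z = 20 mm: %.2f mm\n', sigma_at(19, 0.5, 20));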
2.4 Experiments and results

In a set of experiments, different tissue types, including beef tenderloin, chicken breast, turkey breast, and an FDS phantom, were translated in the elevation direction with known steps. For each tissue type, five datasets over a displacement of 4 mm (40 frames) were acquired (twenty subsets in total). For a particular elevation distance in each dataset, all the possible subsets of frames with a spacing equal to that given distance were considered for both calibration and evaluation. All the experiments were performed on a SonixTouch ultrasound machine (Ultrasonix Inc., Richmond, BC, Canada) using a 10 MHz linear array transducer (L14-5/38) with a 5-14 MHz bandwidth. The depth of imaging was 4 cm with a single focus at 2 cm. The sampling rate of the RF echo signal was 40 MHz and the recorded signal had 2080 samples for each of 126 lateral vectors. The elevation displacement was produced by means of a linear motor stage (T-LSR150B, Zaber Technologies Inc., Vancouver, BC, Canada) with equal elevation distances of 0.099 mm. The ultrasound transducer was affixed to the optical table by means of a mechanical arm. The linear motor stage was also attached to the table and the imaging sample was positioned on top of it. To ensure there was minimal in-plane motion, all of the tissue samples were secured in an agar medium, as shown in Fig. 2.5a. To reduce the reverberation artefact, the bottom of the phantom container was floored with an acoustic absorber. Every ultrasound frame was divided into 8×5 rectangular patches of size 2×2 mm², excluding 15% margins on both sides of the axial and lateral axes of the ultrasound frame.

For each tissue type, 40 frames were acquired. For any particular elevation distance, all the possible subsets of frames with a spacing equal to that given distance were considered for both calibration and evaluation. In the experiment for evaluation of the method under out-of-plane rotation, first a linear sequence of frames was acquired and then an elevationally spaced sequence at the exact same position in the tissue, but with a rotation of the transducer relative to the previous pose, was acquired (similar to the experimental setup of [59]). This experiment was performed for both tilt (1°) and yaw (1°) rotations. In the freehand in vivo experiment, 75 tracked ultrasound frames were acquired from the forearm of a subject. For this experiment, an Optotrak Certus optical tracker (Northern Digital Inc., Waterloo, Ontario, Canada) was used for tracking the transducer, instead of the linear motor stage.

Figure 2.5: (a) Experimental setup. (b) Top view of the phantom. (c) An ultrasound image of chicken breast tissue. The highly echogenic border at the lower depth of the images is due to the reflection from two separate layers of tissue placed on top of each other.

The code was written in MATLAB running on a PC with a Core 2 Duo CPU @2.93 GHz and 4 GB of RAM. The processing time to estimate the out-of-plane displacement for all the patches of a pair of ultrasound frames (with the described patch size and number of patches) is about 1 minute. The code can be easily parallelized (per patch) and implemented on a GPU for faster performance.

2.4.1 RiIG model fitting

To assess the performance of RiIG model fitting on different tissue types, we measured the number of accepted patches at known elevation displacements for all five subsets of each tissue type and divided it by the total number of patches. Fig. 2.6 shows the average and standard deviation of this ratio as a function of elevation displacement for the different tissue types. The relatively large standard deviations in real tissues reflect the structural changes at different positions, as expected.
Since the scatterers are uniformly distributed in the FDS phantom, the smaller standard deviations confirm fewer structural changes. It appears that this ratio does not change drastically up to the 2 mm range, which is related to the fairly small structural variation of the tissue in that range. As the elevation distance increases, the number of accepted patches decreases, because it is more likely for two patches further apart to have different statistical distributions due to the structural changes in the scatterer field. The number of accepted patches in the FDS phantom is noticeably smaller in comparison with the other tissue types. The reason is that FDS patches follow the Rayleigh distribution, which results in a β parameter very close to zero in the RiIG model. In this case, the EM estimation of the RiIG parameters becomes sensitive to the initial value and requires special considerations [41]. To improve the computational efficiency of the EM parameter estimation, we used a fixed initial value for β; however, this performs more weakly for smaller β values. Despite the small number of accepted patches for the FDS phantom, and although β being close to zero lowers the accuracy of the γ estimation (refer to (2.27)), the overall performance of the proposed method on the FDS phantom is similar to that on the other tissues, as will be shown later.

Figure 2.6: Ratio of acceptable patches versus elevation distance for (a) beef, (b) chicken, (c) turkey, and (d) the FDS phantom, averaged over all the subsets for each tissue type.

2.4.2 Regression estimation for the σy model

In a pre-procedure calibration, the σy regression parameters were estimated for all twenty subsets, which include beef, chicken, turkey, and the FDS phantom. To assess the tissue-independency of the σy model parameters, the mean and the standard deviation of the estimated parameters over all of the subsets were determined. Table 2.1 shows the results. The relatively low standard deviations of the parameters confirm the independency of these parameters from the tissue type.

Figure 2.7: (a) Q-Q plot of a sample dataset against the data samples generated from the estimated RiIG parameters (735 samples). (b) Comparison of the results of out-of-plane estimation for two different patch sizes, 2×2 mm² (735 samples) and 3×6 mm² (3297 samples), on the beef samples.

Table 2.1: Estimated σy regression parameters based on Equation (2.29) over tissue types.

Parameters | σ0 (mm) | a        | b    | c
Mean       | 0.15    | 4.5×10⁻⁴ | 0.7  | 9.3×10⁻³
STD        | 0.04    | 8.6×10⁻⁴ | 0.03 | 1.6×10⁻³

As expected, the regression parameter related to the lateral direction was very small.
The reason is that there is not a considerable change in the lateral direction of the beam profile, especially when only the central 70% of the image in the lateral direction is considered.

2.4.3 Intra-tissue estimation

To validate the performance of the proposed method on each tissue type, we first calculated the σy model from the five subsets of each tissue type, leaving one out, and validated the method on the left-out subset. Fig. 2.8 shows the distribution of the estimation error over a 1.5 mm range of elevation distances.

Figure 2.8: Intra-tissue out-of-plane motion estimation. For each tissue type, consisting of five different subsets, the σy model was estimated in a leave-one-subset-out cross-validation experiment. The method has been tested on (a) beef, (b) chicken, (c) turkey, and (d) the FDS phantom.

The first column shows the accumulative estimated elevation distance versus the true value, to observe any systematic drift. The subsequent columns show the accuracy of out-of-plane motion estimation in terms of: (column 2) the ratio of the measured displacement error to the true elevational separation; (column 3) the mean target registration error (mTRE) between the true and the estimated transformation at the center of the patches, divided by the true elevational separation, in a manner similar to that explained in [75]; and (columns 4 and 5) the estimated tilt and yaw angles, as explained in [48]. For the measurement of the yaw and tilt angles, patches at the same axial and lateral positions were used, respectively. If the out-of-plane displacement is correctly estimated, the estimated yaw and tilt angles should be equal to zero.

As shown, in most cases there is no evidence of systematic drift in the out-of-plane motion estimation. The accuracy of out-of-plane motion estimation decreases as the elevation separation increases. The reason is that, by increasing the elevation displacement, the number of accepted patches decreases, as shown in Fig. 2.6. Out-of-plane motion estimation based on a limited number of patches is more susceptible to the noise of the elevation displacement estimation for every patch. Moreover, as the elevation separation increases, the slope of the correlation curve decreases, as shown in Fig. 2.9a, and the estimation method becomes less sensitive to changes in the distance.
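For reference, one plausible way to recover the out-of-plane translation, yaw and tilt from the per-patch displacement estimates is a least-squares plane fit over the patch centres; the minimal MATLAB sketch below illustrates this idea and is not necessarily the exact procedure used in the thesis, which computes the angles from patches sharing axial or lateral positions.

% Minimal sketch: recovering out-of-plane translation, yaw and tilt from
% patch-wise elevational displacement estimates by fitting a plane
% dy(x, z) = d + p*x + q*z over the patch centres.  This is an illustrative
% reconstruction, not necessarily the exact procedure used in the thesis.
rng(3);
[xg, zg] = meshgrid(linspace(3, 35, 8), linspace(6, 34, 5));   % patch centres (mm)
x = xg(:);  z = zg(:);

d_true   = 0.50;                     % true elevational translation (mm)
yaw_true = 1.0;  tilt_true = 0.5;    % true rotations (degrees), assumed
dy_true  = d_true + tand(yaw_true)*x + tand(tilt_true)*z;
dy_est   = dy_true + 0.05*randn(size(dy_true));    % noisy per-patch estimates

% Least-squares plane fit and conversion of the slopes back to angles.
theta = [ones(size(x)), x, z] \ dy_est;
fprintf('translation = %.2f mm, yaw = %.2f deg, tilt = %.2f deg\n', ...
        theta(1), atand(theta(2)), atand(theta(3)));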
In the selection of the patch size, there is a trade-off between accurate statistical parameter estimation and the spatial resolution of the transform estimation. Larger patches give more robust parameter estimation but also impose a higher computational cost. A previous study [111] has shown that the uncertainty of out-of-plane estimation increases as the patch size decreases. However, the best patch size depends on the speckle size, which is related to the tissue type and the ultrasound frequency. Since we are using a higher frequency than previous work (10 MHz center frequency, 5-14 MHz bandwidth vs. 5 MHz [74] and 5-10 MHz bandwidth [48]), smaller patches are anticipated. Another reason not to use larger patches, especially in the axial direction, is to avoid violating the assumption of stationarity of the beam within the patch (assuming a constant underlying statistical distribution). Moreover, the power of the RiIG model is that its parameters can be estimated from a fairly small number of samples. We selected a patch size of 2×2 mm² (735 samples) to be close to 1000 samples, which gives good fitting results for the RiIG model [39]. Even for a small sample size of 441 B-mode samples, RiIG has shown favorable log-likelihood values in a ten-fold cross-validation procedure [39].

To better understand the effect of patch size, we compared the results of two different patch sizes, i.e., 2×2 mm² (735 samples) and 3×6 mm² (3297 samples), on the beef samples (Fig. 2.7b). There was a slight improvement in the variance of the error, but a negligible change in the average error, while the computational cost was doubled. To better visualize the performance of the RiIG fitting for our chosen number of samples, we compared the quantiles of an example dataset against sample data generated from the estimated RiIG parameters. Note that, if the model perfectly represents the data, the Q-Q plot should be a straight line with intercept 0 and slope 1. Fig. 2.7a shows that, except for the outliers, the estimated model and the real data follow the same distribution. It is worth mentioning that adapting the patch size based on the speckle size and the validity of the RiIG model parameter estimation may improve the accuracy of the out-of-plane motion estimation, which is the subject of our future research.

2.4.4 Inter-tissue estimation

To visualize the overall shape of the estimated correlation curve in comparison with the real correlation curve of the accepted patches in real tissues, we used the average of the σy model parameters reported in Table 2.1, and the RiIG and position (lateral and axial) parameters for the center patch of the first beef subset, and calculated the correlation curve from (2.27). Fig. 2.9a shows that the proposed correlation curve follows the real measurements well. In practice, the RiIG parameters are individually determined for every patch and σy is calculated from the σy model at that specific position, which improves the match even further. As the elevation displacement increases, the number of accepted patches decreases, which justifies the smaller standard deviations for the measured correlation coefficient in Fig. 2.9a at larger elevation displacements.
2.9a at larger elevation displacements.To evaluate the overall performance of the proposed method, the σy map wasestimated based on all twenty subsets excluding one of each tissue type. Themethod was performed on the left-out subset. To quantify the accuracy, the samemetrics explained in Section 2.4.3 were used. Fig. 2.9 shows the results. In order to470 0.5 1 1.500.20.40.60.81distance (mm)correlation coefficient  Beef Actual CorrelationChicken Actual CorrelationTurkey Actual CorrelationProposed Correlation(a)0 0.5 1 1.50.511.5distance (mm)measured distance (mm)  BeefChickenTurkeyPhantomIdeal(b)0 0.5 1 1.5−0.5−0.3−0.1500.150.30.5distance (mm)displacement error/distance(c)0 0.5 1 1.50.20.4distance (mm)mTRE/distance(d)0 0.5 1 1.5−3−2−10123distance (mm)yaw error (degree)(e)0 0.5 1 1.5−3−2−10123distance (mm)tilt error (degree)(f)Figure 2.9: Inter-tissue out-of-plane motion estimation. (a) The solid linesshow the correlation curve of the first subset of beef, chicken, and turkeysequences, averaged for a given elevation distance for all of the acceptedpatches. The dashed line shows the calculated correlation based on theproposed method with the σy model obtained from all of the twentysubsets and the RiIG and position parameters for the center patch. Theσy model was estimated excluding one subset of each tissue type andthe method was evaluated on the excluded subset. Figures above showthe result for one subset of beef (blue), chicken (green), turkey (red),and FDS phantom (cyan).evaluate the performance of the method near the focus, in a similar experiment onlythe patches in the depth range of 1.5 to 2.5 cm were considered. Less variability isobserved in the results shown in Fig. 2.11.The performance of the method on FDS phantom is comparable to its perfor-mance on ex vivo tissue as expected since, in our work, we did not optimize theRiIG model fitting for FDS patterns. An FDS phantom is suitable for calibrationof other methods based on FDS patches. In this work, it is recommended to use a480 0.5 1 1.5−0.5−0.3−0.1500.150.30.5distance (mm)displacement error/distance(a)0 0.5 1 1.5−0.5−0.3−0.1500.150.30.5distance (mm)displacement error/distance(b)0 0.5 1 1.5−0.5−0.3−0.1500.150.30.5distance (mm)displacement error/distance(c)Figure 2.10: The ratio of the measured displacement error to the true eleva-tion separation for (a) 1◦ yaw rotation on the beef tissue (b) 1◦ tiltrotation on the beef tissue (c) free hand in vivo images of a humanforearm.non-FDS phantom or ex vivo tissue for calibration of our method. In fact, in realtissue, FDS patches are rare and therefore the focus of our proposed method is onmodeling non-FDS patterns.The performance of the proposed method on chicken breast tissue is slightlyless accurate compared to other tissues. The smaller number of accepted patchesis mostly responsible for the less accurate result. We found that some part of thechicken breast tissue had statistical properties similar to the FDS phantom, whichresulted in the same issue for the RiIG parameter estimation in FDS, as explainedin Section 2.4.1. Another reason for being less accurate in the case of chickenbreast is due to the fact that, in our experiment, the depth of a single chicken breastsample was smaller than the imaging depth (4 cm), so we had to put two separatelayers on top of each other which created a highly echogenic border line betweenthe separate layers that was persistent in all of the ultrasound images (Fig. 
2.5c).However, this is not a limiting issue in our work since our proposed method iscapable of detecting more patches compared to many other methods, as it is notlimited to FDS patches only.2.4.5 Out-of-plane rotation effects and in vivo experimentIt has been shown that out-of-plane rotation affects the correlation curves [59]. Weperformed two experiments to assess the effect of out-of-plane rotations, i.e. tiltand yaw, in the out-of-plane motion estimation (Fig. 2.10 a and b) on the beef490 0.5 1 1.5−0.5−0.3−0.1500.150.30.5distance (mm)displacement error/distance  BeefChickenTurkeyPhantomIdealFigure 2.11: The ratio of the measured displacement error to the true eleva-tional separation for the patches close to the focus.tissue. To avoid issues with possible sign ambiguity, only out-of-plane distancesof more than 0.3mm have been considered. Comparing these results for the yawrotation with the previous results, standard deviation of the error increases but theaverage is still negligible especially for the range of 0.5 to 1mm. For the tilt ro-tation, although the standard deviation is fairly small, there is a negative offset inestimation of the out-of-plane distance. This means that the out-of-plane distanceis underestimated which agrees with the results in the previous research [59].Furthermore, to examine the performance of the proposed method in vivo, afeasibility experiment of freehand imaging of a human forearm was performed.The results were compared to the gold standard transformations acquired by anoptical tracker (Fig. 2.10 c). Our results show that compared to ex vivo tissues, thestandard deviation of the out-of-plane estimation error increases (compare Fig. 2.10c with Fig. 2.9 b), although it should be noted that the gold-standard optical track-ing also contains some experimental error.2.5 SummaryIn this work, a new correlation-based speckle tracking framework was proposed.The first novel contribution was the creation of a framework capable of separatingthe coherent and non-coherent effect in every patch based on second-order statisticsof a RiIG statistical model for the task of out-of-plane motion estimation. Oursecond contribution is that we derived a closed-form formulation for the correlationcoefficient of the amplitude of the ultrasound sampled RF data as a function of the50(a) (b) (c) (d) (e)Figure 2.12: Images of the tissue samples (a) beef (b) chicken (c) turkey (d)phantom (e) forearm.elevation displacement based on the parameters of RiIG model. The coherent partof the RF data was separated using the MAP estimation and the parameters of theRiIG model were estimated based on an EM algorithm.The third novelty of this work was a new closed-form formulation for the cor-relation function of a linear transducer as a function of elevation displacement withtwo different resolution cell width. Instead of conventionally performing Gaussianapproximation of the correlation curve, we used its theoretical closed-form for-mulation. This enabled us to use a linear regression model for the resolution cellwidth and approximate the deviations of the beam profile more accurately, and toenhance the overall performance of the out-of-plane motion estimation. Our lastcontribution is providing a closed-form solution for the correlation of two beamprofiles with different resolution cell widths that can be used to compensate for theeffect of out-of-plane rotation.To assess the proposed method, we performed various simulations and exper-iments. 
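Before turning to those experiments, a brief sketch may help make the central inversion step concrete: given a measured correlation coefficient for a patch, the closed-form decorrelation curve is inverted numerically for the elevation displacement. The full Chapter 2 formulation (2.27) depends on the RiIG parameters of each patch and is not reproduced here, so this sketch uses the fully developed speckle limit of the closed form (restated later as (3.2)) as a stand-in; sinc denotes the normalized sinc, sinc(x) = sin(pi x)/(pi x).

```python
import numpy as np
from scipy.optimize import brentq

def rho_fds(dy, sigma):
    """FDS-limit curve rho(dy) = (3 sigma^2 / 2) (1 - sinc(2 dy / sigma)) / (pi dy)^2,
    with rho(0) = 1 by construction."""
    if dy < 1e-6 * sigma:
        return 1.0
    return 1.5 * sigma**2 * (1.0 - np.sinc(2.0 * dy / sigma)) / (np.pi * dy) ** 2

def elevation_from_rho(rho_meas, sigma, dy_max_factor=2.5):
    """Numerically invert the decreasing part of the curve for the elevation step."""
    dy_max = dy_max_factor * sigma
    rho_meas = np.clip(rho_meas, rho_fds(dy_max, sigma) + 1e-6, 0.999)
    return brentq(lambda dy: rho_fds(dy, sigma) - rho_meas, 1e-9, dy_max)
```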
First, in a set of intra- and inter-tissue experiments on phantom and on beef, chicken, and turkey tissue samples, the performance of the out-of-plane motion estimation was evaluated. In a pre-procedure calibration, different tissue types were scanned with known out-of-plane displacements. The σy model was determined and used for the out-of-plane motion estimation. The ability to use a greater number of patches and a fairly small drift are the key features of this method. The method outperforms the two other state-of-the-art methods in most cases.

Fig. 2.12 shows images of the different tissue samples used in our experiments. For real tissues, it is obvious that FDS is rare, which confirms the advantage of our proposed method in real applications. A feasibility experiment was also performed on a human forearm. Although we have addressed some of the issues contributing to the low accuracy of conventional out-of-plane displacement estimation, there are still some aspects to be improved. One of these aspects is the need for sampled RF data in the model-fitting approach. RF data is not always available, which would limit the range of clinical applications. Moreover, the RiIG model fitting and the parameter estimation are computationally intensive tasks. On the other hand, the reliability of the overall estimation can be increased by increasing the number of patches. However, as the number of patches or the number of RF samples increases, the computational cost of the algorithm increases. This is a major obstacle for a real-time implementation of the proposed model-fitting algorithm. In the next chapter, we introduce a filter-based speckle tracking algorithm applicable to B-mode ultrasound images. This eliminates the need for RF data and provides the opportunity to develop a faster implementation of speckle tracking. Finally, instead of focusing on the out-of-plane displacement estimation, which was the main contribution of this chapter, we provide a full 6-DoF transform estimation in the next chapter.

Chapter 3
Filter-based Speckle Tracking

(The content in this chapter has been adopted from the following paper: N. Afsham, S. Khallaghi, M. Najafi, L. Machan, S. D. Chang, L. Goldenberg, P. Black, R. N. Rohling, and P. Abolmaesumi. Filter-based speckle tracking for freehand prostate biopsy: Theory, ex vivo and in vivo results. In Proceedings of Information Processing in Computer-Assisted Interventions (IPCAI), pages 256-265, 2014.)

3.1 Introduction and clinical background

In this chapter, we introduce a novel filter-based speckle tracking method to supplement the magnetic tracking measurement of freehand 2D prostate biopsy. As we discussed earlier, the magnetic tracker measurements are affected by metallic objects near the magnetic field. The goal of this work is to take advantage of the speckle pattern within the anatomy and create a more geometrically correct volume of the prostate.

Conventional speckle tracking has previously been used to increase the reliability and accuracy of electromagnetic tracking [72] and to improve the result of multi-modal 3D US to CT registration of the spine [73]. However, the rarity of Fully Developed Speckle (FDS) in real tissue is among the major causes of inaccuracy in conventional speckle tracking methods. The fundamental basis of most correlation-based speckle tracking holds true only for non-coherent speckle, known as FDS. The rarity of such patterns reduces the accuracy of the elevation displacement estimation. Previous methods addressed this issue by using a heuristic approach for the correction of coherency [48] and by learning the pattern of decorrelation for real tissues [75]. To overcome the limitations of heuristic and learning-based approaches, in the previous chapter we proposed a generalized closed-form formulation for the correlation of the non-coherent part of every patch. In spite of promising results, the computational cost of our previous approach hinders its clinical application.

Figure 3.1: Image acquisition and speckle tracking framework. (a) Speckle extraction from the US image. (b) Elevation displacement estimation. This panel demonstrates the correlation curve by which the elevation distance is estimated given a measured noise; the different fitted ρ models on the measured correlation coefficients of US noise for the prostate data are shown (measured correlations, Gaussian approximation, elevation-independent model, and the proposed model). (c) Transform estimation. The scattered points show the 3D position of patch centers in the reference frame (red) and the adjacent frame.

The first advantage of the filter-based approach is the rapid processing of the image using a pre-defined speckle model. Using the same processing hardware as [7], the newly proposed method performs about three times faster than the model-based approach. Another advantage is the use of fully and partially developed speckle information extracted throughout the entire image, which reduces drift in the pose transform calculations. A third advantage is that our filter-based approach is designed to work with both RF data and B-mode ultrasound images, which expands the range of systems and clinical applications. We incorporate this speckle tracking method into 3D TRUS reconstruction by using the magnetic tracking information of freehand 2D TRUS images as the initial guess. We demonstrate that such integration of the two tracking technologies can reduce some of the visible misalignment errors in the reconstructed prostate volume.

3.2 Filter-based speckle tracking

If we consider speckle/noise extraction as the estimation of the space-varying parameters of noise, given the noise model, the separation of the coherent and non-coherent parts of US is equivalent to denoising. Several US denoising filters have been introduced in the literature [15]. Here, we require a method of denoising that is spatially local and incorporates the statistical noise model. Hence, we use a probabilistic patch-based generalization of the non-local means filter to estimate the space-varying parameters of the speckle/noise [30] (Fig. 3.1a). We also use a position-dependent beam profile in the calculation of the correlation coefficient to improve the accuracy of speckle tracking [7] (Fig. 3.1b).

3.2.1 Beam profile modeling for curvilinear transducer

As we have shown before, the autocorrelation of the echo signal as a function of displacement can be written as

$R(\Delta\vec{X}) = R_m(\Delta\vec{X}) \ast g(-\Delta\vec{X}) \ast g^{*}(\Delta\vec{X})$,   (3.1)

where $R_m$ is the autocorrelation function of the physical variation in the scatterer field of the medium and $\Delta\vec{X}$ is the 3D translation vector. $g$ is the Point Spread Function (PSF) of the transducer and $g^{*}$ is the complex conjugate of $g$. Theoretically, the scatterer field of FDS has a delta correlation function. As a result, the correlation value of two FDS patches is only related to the convolution of the two PSFs.
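To make (3.1) concrete, the following minimal simulation convolves a delta-correlated complex scatterer field with a separable Gaussian stand-in for the PSF and measures how the envelope correlation of two elevation slices falls off with their separation. The grid size and the PSF widths are arbitrary illustration values, not the imaging parameters used in this work.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Delta-correlated complex scatterer field (axial, lateral, elevation).
field = rng.standard_normal((256, 128, 64)) + 1j * rng.standard_normal((256, 128, 64))

# Separable Gaussian stand-in for the PSF g in (3.1); widths are in voxels.
psf_sigma = (2.0, 3.0, 4.0)   # axial, lateral, elevation resolution cell
rf = gaussian_filter(field.real, psf_sigma) + 1j * gaussian_filter(field.imag, psf_sigma)
env = np.abs(rf)              # envelope (amplitude) data

def corr(a, b):
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Envelope correlation versus elevation lag: it decays with the lag at a rate
# governed by the elevational PSF width (cf. the closed-form curve in (3.2)).
for lag in range(0, 9, 2):
    print(lag, corr(env[..., 20], env[..., 20 + lag]))
```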
Itis conventional to use the Gaussian approximation for the correlation coefficient;however, in Chapter 2, we showed that the closed-form Pearson’s correlation coef-ficient equals [7] (Fig. 3.1b):ρ = 3σ22(1− sinc(2∆yσ )(pi∆y)2 , (3.2)where ∆y is the elevation displacement and σ is the resolution cell width of the USbeam. To compensate for the spatial changes of beam profile, we modeled σ as alinear regression of the position parameters and the elevation displacement:σ = σ0 +ar+bθ + c∆y, (3.3)55where r is the radius and θ is the angle in the polar coordinate of a curvilinearprobe that we used in vivo. The resolution cell width increases along the axialdirection, r, which is responsible for the lower resolution at the lower parts ofthe US image. The US profile is almost constant in the lateral direction, θ . ∆ydependency compensates the deviation of correlation curve from the theoreticalfunction [7]. We used Levenberg-Marquart algorithm to solve ∆y.3.2.2 Speckle extractionIn the Goodman’s speckle noise model, the US signal is considered as the multi-plication of the square root of the desired signal (Z) and the noise (N):A =√N√Z. (3.4)The noise part, can be modeled as an L-look FDS noise following the gammadistribution with the scale parameter of 1L and the shape parameter of L:pN(n) =LLΓ(L)nL−1e−nL, (3.5)where Γ() is the Gamma function.Under the gamma distribution assumption,√N follows Nakagami distributionwith shape parameter L and unit spread parameter. Using the closed-form formulaof the nth order moment of the multiplication of two independent Nakagami ran-dom variables, we may derive the mean (µm) and the autocorrelation function (Rm)of two closely positioned frames as follows:µm = µ√Z (Γ(L+ 12)Γ(L+1))(1L)0.5︸ ︷︷ ︸a, (3.6)Rm(∆y) =< Z ><√N1N2 >={µz : ∆y = 0a2µz : ∆y 6= 0, (3.7)where <> represents the expected value of the random variable and µ√Z is theaverage of the desired signal square root.56By rewriting Rm(∆y) as µZ[(1−a2)δ (∆y)+a2], using (3.6) and (3.7), andfollowing the same calculation presented in [7], after some arithmetic:ρ =µZ(1−a2)σ(1−sinc( 2∆yσ )(pi∆y)2 )+a2σ2√zµZ(1−a2) 23σ +a2σ2√z, (3.8)where µZ and σ√z are the mean and standard deviation of the desired signal Z.The desired signal Z, is the intended output of any denoising filter. For thepurpose of speckle tracking, a method of denoising is of interest that considers thedistribution of the uncorrelated noise in its model and it is capable of space-varyingnoise estimation. A probability-based NLM filter, which outperforms other state-of-the-art denoising algorithms [30], serves the purpose best.In the NLM denoising approach, the desired signal Z is estimated based on theweighted average of the neighboring pixels [15]:Zˆs =∑t w(s, t)A2t∑t w(s, t). (3.9)where Zˆ is the estimated signal, and s, t indicate the location of the two neighboringpatches.The definition of the weights is the key in the success of NLM filters. Since theposterior distribution of the US amplitude, A, in (4.6) has a closed-form (3.10), aBayesian approach can be followed to derive the Weighted Maximum LikelihoodEstimation (WMLE) of Z.p(A|Z) =2LLΓ(L)ZLA2Le−LA2Z . 
(3.10)In the case of L-look FDS noise the WMLE of the weights can be found iterativelyas follows [30]:w(s, t)i = exp(∑k(1h˜log(As,kAt,k−At,kAs,k)+LT|Zˆi−1s,k − Zˆi−1t,k |2Zˆi−1s,k Zˆi−1t,k)), (3.11)where hˆ = h2L−1 and h, T can be considered as dual parameters to balance thetrade-off between the noise extraction and the fidelity of the estimate.573.2.3 Transform estimationTheoretically, by having the out-of-plane distances of at least three correspondingpoints between two frames, it is possible to solve for the three degrees of freedomof the out-of-plane motion, i.e. elevation displacement, tilt, and yaw angles. Thesign of the elevation displacement is determined by the electromagnetic trackerin this work; however, approaches proposed in [74] and [58] can be followed tosolve for direction ambiguity. We first determine the in-plane motion between pairsof patches by performing a sub-pixel cross correlation-based registration. Thento solve for the corresponding elevation displacement of each pair, the Pearson’scorrelation coefficient of the extracted noise, ρ , is measured using the followingformulation (Note that given ρ , the out-of-plane displacement, ∆y, is determinedusing (3.2)):ρ = ∑i (Is(i)− I¯s)∑i (Ik(i)− I¯k)√∑i (Is(i)− I¯s)2∑i (Ik(i)− I¯k)2, (3.12)where I is the intensity of the pixels and I¯ is the average intensity over the patch.In the experiments, 364 overlapping patches of size 6× 8 mm with ≈ 3000 pix-els in each patch were used. We used unit quaternions to estimate the 3D rigidbody transformation (Fig. 3.1c). The transformation is first estimated using all thepatches. The patches with the center point residual error of more than 0.5 mm wereconsidered as outliers and discarded, and the transformation is calculated again. Itis possible to find the transformation between any given frame and the first frame (T1 n) by multiplying the transformations from the consecutive pairs of frames as in(4.57). Therefore, the whole volume can be constructed relative to the first frame:T1 n = T12 T23 . . . Tn−1n. (3.13)3.3 Experiments and ResultsIn a one-time calibration step, the sigma function parameters were estimated fromturkey, chicken and beef tissue samples in vitro. The tissues were placed on a linearmotion stage and were moved in 0.1 mm steps relative to the transducer. Three sub-sets of 40 frames were captured for each sample type. Sigma function parameterswere estimated using the known elevation distance between the parallel frames and58they were averaged (intra-tissue) for each sample type. To access the tissue inde-pendency, the coefficient of variation is calculated over different axial and elevationdistances. The overall variation is less than 12 % over tissue types and the variationis smaller around axial focus (30 mm), as expected (Fig. 3.2a). Table. 3.2 showsthe total sigma variations for different tissue types at three different depths of 1, 3,and 6 cm for an elevation displacement of 0.5 mm. Small variations confirm thetissue independency of the method. In the subsequent experiments, the inter-tissueaverage of the sigma parameters are used for the in vivo speckle tracking.The accumulative elevation distance and the measured elevation error over thetrue value of displacement are shown in Fig. 3.2b, 3.2c. Since it is possible to usethe information all over the image in the proposed method, the drift is minimal.The changes of the beam profile that are not captured in the model cause relativelylarger Standard Deviations (STD). 
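As a brief aside, the one-time sigma calibration just described can be sketched as a least-squares fit of the regression parameters in (3.3) against the correlations measured at known elevation steps. The forward model below uses the fully developed speckle limit of the correlation curve as a simplification (the coherent term of (3.8) is omitted), and the solver is a generic SciPy least-squares routine rather than the exact Levenberg-Marquardt implementation used here.

```python
import numpy as np
from scipy.optimize import least_squares

def rho_model(dy, sigma):
    """FDS-limit correlation curve; assumes sigma stays positive over the data."""
    dy = np.maximum(dy, 1e-6 * sigma)
    return 1.5 * sigma**2 * (1.0 - np.sinc(2.0 * dy / sigma)) / (np.pi * dy) ** 2

def residuals(params, r, theta, dy, rho_meas):
    sigma0, a, b, c = params
    sigma = sigma0 + a * r + b * theta + c * dy   # eq. (3.3), curvilinear probe
    return rho_model(dy, sigma) - rho_meas

def calibrate_sigma(r, theta, dy, rho_meas, x0=(0.5, 0.0, 0.0, 0.0)):
    """r, theta: patch positions (polar); dy: known elevation steps;
    rho_meas: measured noise correlations from the calibration sweep."""
    fit = least_squares(residuals, x0, args=(r, theta, dy, rho_meas))
    return fit.x
```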
One way to decrease the error STD is to limitthe range of estimation to the focal zone [7].A practical factor that affects the elevation displacement estimation is the pres-ence of out-of-plane rotation, i.e. tilt and yaw. In our in vivo experiments, therelative out-of-plane rotations were less than 0.5◦ and hence, the influence of thaton the decorrelation is likely small. However, to evaluate the performance of theproposed method, we performed a set of experiments on beef tissue samples simi-lar to [59] with relative tilt and yaw angles of 1◦. To avoid sign ambiguity, elevationdistances more than 0.3 mm are considered. Fig. 3.3a, 3.3b show the results. ForTable 3.1: Accumulated drift and RMS error in theout-of-plane parameters compared to tracker over100 frames.Patient1 Patient2 Patient3Midpoint (mm)Drift -3.2 4.8 0.8RMS 1.93 2.96 1.1Tilt (degree)Drift 2 0.1 6.6RMS 1.28 1.11 4.7Yaw (degree)Drift 2.95 1.1 1.2RMS 0.73 0.39 0.75Table 3.2: Sigma vari-ations at differentdepths.Mean STDTop 0.58 0.11Middle 0.71 0.07Bottom 0.90 0.06590.2 0.4 0.6 0.8 10.050.0750.10.125distance (mm)coefficient of variation (cv)  20 mm30 mm40 mm50 mm60 mm(a)0 0.2 0.4 0.6 0.8 10.20.40.60.81distance (mm)measured distance (mm)  IdealTurkeyChickenBeef(b)0 0.2 0.4 0.6 0.8 1−0.5−0.3−0.1500.150.30.5distance (mm)displacement error/distance(c)Figure 3.2: (a) CV plot of the sigma parameter. Inter-tissue measurementsfor (b) the elevation distance estimation and (c) the ratio of displacementerror.0 0.5 1 1.5−0.5−0.3−0.1500.150.30.5distance (mm)displacement error/distance(a)0 0.5 1 1.5−0.5−0.3−0.1500.150.30.5distance (mm)displacement error/distance(b)0 0.2 0.4 0.6 0.8 1 1.2−0.5−0.3−0.1500.150.30.5distance (mm)displacement error/distance(c)Figure 3.3: The ratio of the measured displacement error to the true eleva-tion separation for (a) 1◦ yaw and (b) 1◦ tilt rotation on beef tissue (c)freehand in vivo human gastrocnemius muscle.the yaw rotation the drift is negligible for smaller distances, but tilt rotation causesunderestimation in the elevation measurements. The results agree with previous re-search [59]. A freehand feasibility experiment was first performed on human gas-trocnemius muscle, where the tracking information was obtained from an opticaltracker as gold standard. The mean and variation of the estimations are smaller forsmaller displacements. However, the aggregation of the displacements all over theimage will reduce the transform estimation error. Weighted aggregation based onthe displacement reliability estimation is the subject of future research (Fig. 3.3c).Prostate B-mode images were acquired during freehand TRUS-guided prostatebiopsy sessions. An EC9-5 endocavity transducer (Ultrasonix Corp., Richmond,Canada) with a built-in electromagnetic sensor was used. A reference electro-6085 90 95−45 −40 −35148150152[mm][mm][mm](a)−15−10−5 0 −140−138−136−134−132202204206208210212[mm][mm][mm](b)85 9095−95−90−85122124126[mm][mm][mm](c)Figure 3.4: Black triangles show the center point of the image in the referencecoordinate for every other frame for a total number of 100 frames. Themagenta dots are the center points estimated by the proposed method.Orientations of the image planes are shown by lateral-axis: red, axial-axis: green, and elevation-axis: blue.61Figure 3.5: Reconstructed volumes over 100 frames from smoothed magnetictracker data (top row) and the proposed approach (bottom row) for threepatients.magnetic sensor was placed close to the pelvic bone, which provided the patientcoordinate system. 
Three patients have been processed to assess the ability of the proposed method. For each patient, we analyzed 100 consecutive B-mode images obtained while the transducer was moved freehand in a transverse sweep from apex to base. The approximate size of the prostate region covered by this sweep is half of the prostate length.

We used the initial information of the magnetic tracker for the first frame in the sweep, and calculated the relative transformations of the rest of the frames based on the proposed speckle tracking approach. The errors in the estimation of the out-of-plane parameters (tilt, yaw, and midpoint elevation displacement) were measured and compared to the electromagnetic tracker over the entire sweep (Table 3.1). The error drift shows that our proposed approach generally follows the magnetic tracking transformation even after 100 frames.

To reduce the impact of drift on the overall 3D reconstruction accuracy, while taking advantage of the speckle tracking information, we updated the transformation obtained using speckle tracking with the magnetic tracker information every 25 frames, and smoothed the transformation parameters in quaternion space with an averaging window of size 5. A volume was then reconstructed from the acquired transforms. Fig. 3.5 and 3.6 show the reconstructed volumes using magnetic tracking, where the transformations were smoothed with a low-pass filter, and the volumes obtained with our proposed combination of speckle tracking and magnetic tracking. Comparison of the reconstructed volumes shows smoother boundaries and less visual discontinuity with our proposed method.

Figure 3.6: Coronal reslice of the reconstructed volumes from smoothed magnetic tracker data (top row) and the proposed approach (bottom row).

3.4 Summary

In this work, a novel method of filter-based speckle tracking based on denoising the B-mode image is proposed and its theoretical basis is provided. The probabilistic patch-based NLM filter is used for a Gamma-distributed noise model. In the filter-based approach, the ultrasound image is filtered once and is then used for the purpose of displacement and orientation estimation. In this manner, it is possible to increase the number of patches with minimal additional computational cost and to decrease the overall drift. A closed-form regression model has also been used for the correlation function of the curvilinear probe to approximate the deviations of the beam profile more accurately. We have validated the proposed filter-based approach using ex vivo data, and in vivo data from three patients who underwent prostate biopsy. 3D TRUS volumes were reconstructed from the prostates of those patients. When compared to the magnetic tracker, the results show visual improvement in the reconstructed boundaries of the prostate.

As we will discuss in the next chapter, the similarity measure used for this NLM filter lacks some statistical requirements, such as having an upper bound for the multiplicative noise in ultrasound. In the next chapter, we will develop a new NLM filter based on a suitable similarity measure and will take advantage of the RiIG noise model in the denoising filter. We will compare the results of the PPBN and RiIG NLM filters and will demonstrate how the filtering enhancement improves the out-of-plane displacement estimation. In this chapter, we considered all of the out-of-plane estimations in the same manner, regardless of their reliability, in the 6-DoF transform estimation.
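As an implementation aside to the drift-mitigation step of Section 3.3 (re-anchoring the speckle-tracked poses to the magnetic tracker every 25 frames and smoothing the rotation parameters in quaternion space with a window of five), the following is a minimal sketch. The sign-aligned, normalized quaternion mean is a standard approximation for nearby rotations and is an assumption here, not necessarily the exact smoothing scheme used in this work.

```python
import numpy as np

def smooth_quaternions(quats, window=5):
    """Moving-average smoothing of unit quaternions (N, 4).

    Each window is sign-aligned to its first quaternion, averaged, and
    renormalized; valid as an approximation for small frame-to-frame rotations.
    """
    quats = np.asarray(quats, dtype=float)
    half = window // 2
    out = np.empty_like(quats)
    for i in range(len(quats)):
        lo, hi = max(0, i - half), min(len(quats), i + half + 1)
        block = quats[lo:hi].copy()
        signs = np.sign(block @ block[0])
        signs[signs == 0] = 1.0
        q = (block * signs[:, None]).mean(axis=0)
        out[i] = q / np.linalg.norm(q)
    return out

def reanchor(poses_speckle, poses_tracker, every=25):
    """Replace every `every`-th speckle-tracked pose with the tracker pose."""
    poses = [p.copy() for p in poses_speckle]
    for i in range(0, len(poses), every):
        poses[i] = poses_tracker[i].copy()
    return poses
```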
However, we can improve the overall accuracy of 6-DoFtracking by assigning a risk factor to every individual estimation based on the un-certainty of the noise estimation. We will propose a weighted average aggregationof out-of-plane displacement estimation in the next chapter.64Chapter 4RiIG NLM Speckle Tracking34.1 IntroductionThe core of all speckle tracking methods for the out-of-plane displacement estima-tion is the correlation of the specific noise pattern of the image, known as FullyDeveloped Speckle (FDS), which acts as a delta function input for the correlationfunction [123]. If we assume minimal changes of the beam profile from one loca-tion to another, the correlation of the two corresponding speckle patterns is directlyassociated with their distances. However, FDS may only be present in a small per-centage of the image patches. We propose to extract the speckle/noise from theultrasound image, so instead of small percentage of the available FDS patches inreal tissue, we can aggregate the information over the entire image to increase thevalidity of the transform estimation. To achieve that, Stein’s Unbiased Risk Esti-mate (SURE) is used as a weight factor. This way, less risky estimations contributemore to the transform estimation.In Chapter 2, we introduced a model fitting approach based on a variance mix-ture model, known as Rician-Inverse Gaussian (RiIG) [40], to increase the numberof accepted patches for the task of speckle tracking [7]. Expectation-Maximization(EM) estimation was used to determine the speckle model parameters [7]. EM3The content in this chapter has been adopted from the following papers:N. Afsham, A. Rasoulian, M. Najafi, P. Abolmaesumi, and R. N. Rohling. Non-local means filter-based speckle tracking. Submitted.65estimation is known to be prone to local minima [30]. Furthermore, computa-tional cost of the parameter estimation was a limiting factor. In the current method,the parameter estimation is first performed on a coarse grid as the initializationof the denoising algorithm. The final output of the denoising filter is based on theweighted average of the neighboring pixels, which is the main concept of non-localmeans (NLM) filters [15].A wide variety of denoising algorithms have been introduced in the literaturefor the multiplicative Gaussian noise model, which is the closest noise model inultrasound images [15]. In Chapter 3, we used a Gamma based NLM filter fordenoising. As we will discuss, there are some deficiency in the definition of itssimilarity measure. In this Chapter, we expand the NLM method of denoisingfor multiplicative noise to our previously introduced RiIG model in Chapter 2.A prior model of the noise is considered in the new denoising algorithm. Themethod has two main advantages. The first advantage is the low computationalcost, thanks to the fast implementation of non-local means filters [31]. The secondone is the ability to incorporate the previously introduced statistical noise modelin the denoising algorithm, which has shown its ability in ultrasound distributionmodeling.In summary, we propose a framework of speckle tracking based on noise ex-traction from ultrasound image. To the best of our knowledge, this is the first timethat such an approach is proposed. The novelties of this chapter include:1. developing a weighted maximum likelihood (WML) NLM filter based onRiIG statistical model,2. deriving a similarity measure with favorable probabilistic features for threedifferent ultrasound noise models, i.e. 
the additive Gaussian, the multiplica-tive Gamma, and the RiIG additive-multiplicative mixture model,3. deriving a “test statistic” based on the symmetrical Kullback-Leibler dis-tance (KLD) and incorporating it in the proposed similarity measure as aregularization term, and4. using the Stein’s Unbiased Risk Estimate (SURE) to aggregate the informa-tion all over the image in a weighted average fashion.66It is worth noting that the correlation of two adjacent ultrasound signals orimage patches is considerably affected by other factors, rather than single out-of-plane displacement, such as in-plane displacement, transducer face pressure, andin-plane and out-of-plane rotations [65]. Solutions have been proposed to compen-sate for some of these factors [59, 117]. One ultimate solution to compensate forthe effect of these factors is to solve the inverse problem of finding the scattererspositions and amplitudes, the so called deconvolution problem [115]. Then, it ispossible to calculate the correlation based on the deconvolution result as the for-ward model. By minimizing the distance between the measured correlation andthe calculated correlation all over the image, the best fit relative transformation ofthe two ultrasound images is determined. However, this approach is computation-ally intensive and still an open problem, which is out of the scope of the currentwork and a subject for future research. Here, we only try to increase the accu-racy of out-of-plane motion estimation by separating the effect of the coherent partof the signal from noise and improve the transform estimation by means of dataaggregation based on the noise estimation risk over all the image.This chapter is organized as follows. In Section 4.2, we first give a generalbackground on the NLM filters, necessary to understand the proposed filter. Next,the details of the proposed method and the mathematical derivations are provided.The in vivo and ex vivo experiments are explained in Section 4.3, followed by theresults for different denoising models and methods.4.2 NLM filters for speckle tracking4.2.1 NLM filters: backgroundSince the first time that NLM filters were introduced [15], they have been widelyused for image denoising in different applications and have shown comparable orbetter results in comparison with other state-of-the-art denoising algorithms [16].Different generalizations and adaptations of NLM filters have been studied in theliterature in recent years [29, 66, 126]. The main idea of NLM denoising is to usethe similarity of the neighbouring pixels of a central pixel with nearby so calledpatches as a weight factor. The noise-free signal is recovered from a weighted av-67Pi, Pj  Model Selection & Model Fitting Average Filter Parameters Beam Profile Priori σ = f (x, y, z) Motion Estimation Measured Correlation ρ Out-of-Plane Motion ρ NLM Filter EM Estimation ZˆSigma Map Estimation Similarity Measure Known Distance Measurements  Theoretical Correlation Formula  Additive Gaussian Noise Model  Multiplicative Gamma Noise Model  RiIG Mixed Noise Model    Conventional NLM   Probabilistic Patch-based   MAP similarity measure   New similarity measure for RiIG Pi, Pj  S (Pi, Pj) Figure 4.1: Overview of the proposed framework of out-of-plane motion es-timation. First, a prior model of noise is considered. Then, a similaritymeasure is defined based on the noise model. After denoising basedon the developed NLM model, the measured correlation is compared tothe theoretical parametric correlation coefficient. 
In a calibration proce-dure, in which the relative displacements of adjacent frames are known,a model of beam profile (sigma map) is derived. Finally, the out-of-plane motion are estimated at the center points of the grid patches allover the image.erage of all central pixels. In the simplest model, the original noise-free signal (zi)is assumed to be corrupted by the independent identically distributed (iid) Gaussiannoise (ni), comprising the received signal (ai):ai = zi+ni. (4.1)The basic NLM can be formulated as a weighted average of noisy pixels on aconventionally rectangular search window (P):ẑi =1Ci∑j∈PwNL(i, j)a j, (4.2)68where wNLM(i, j) is the weight factor, which is based on a similarity measure defini-tion of the neighboring pixels and Ci is the normalization factor; Ci = ∑j∈PwNL(i, j).The conventional definition of weight factors is:wNLM(i, j) = exp(−1h ∑k∈Vgσ ,k|ai,k−a j,k|2), (4.3)whereV is the region in which the difference between the pixel values of the neigh-boring patches are considered. gσ ,k is the sample of a Gaussian kernel with a meanof zero and a standard deviation of σ at position k ∈V , and h is the filter parameteradjusting the shape of the kernel. Other compactly supported smooth functions canalso be used instead of the Gaussian one to improve the results [31].It is shown that the above definition of weights (4.3) can be rewritten in termsof a similarity measure as [116]:wNLM(i, j) =∏k∈VS(ai,k,a j,k)g σ ,kh . (4.4)If ai and a j are similar, S(ai,a j) is supposed to be close to one, and close to zerootherwise. In the conventional definition of NLM, for images corrupted by additiveGaussian noise, it can be verified that S(ai,a j) is as an exponential function of theEuclidian distance between the pixel values:SNLM(ai,a j) = exp(−|ai−a j|2). (4.5)The first attempt to adapt the NLM filter for the task of ultrasound denoising wasproposed in by Coupe et al. [28]. They used a mixture model of Gaussian noise asfollows:ai = zi+ni√zi. (4.6)The weights of the NLM filter were redefined in an optimal Bayesian NLM frame-work. It was shown that by substituting the L2 norm with the Pearson distance inthe conventional similarity measure of NLM, they can obtain the optimal Bayesianweights. The only issue with the optimal Bayesian approach is that there is no priorassumption about the distribution of zi, and therefore ai. However, such informa-69tion is needed in the full definition of a well-defined Bayesian similarity measureas follows [116]:SNew(ai,a j) = P(zi−z j|ai,a j)(0|ai,a j),=∫PZi(z)PZ j(z)PAi|Zi(ai|z)PA j|Z j(a j|z)dzPAi(ai)PA j(a j)(4.7)A similar Probabilistic Patch-based Non-local means (PPBN) approach was fol-lowed in [30] and a similarity measure was defined based on the conditional prob-ability density function (PDF) of the noise. However, since there was no priorknowledge of PZi(z), and hence PAi(ai) in the model, it was suggested to omit thecorresponding terms from the above equation. In this case, the similarity measurewas defined as:SPPBN(ai,a j) =∫PAi|Zi(ai|z)PA j|Z j(a j|z)dz. (4.8)The PPBN similarity measure performs well for the additive Gaussian noise, butas noticed in [116], the measure is not bounded from above in the case of multi-plicative noise, which may result in very large weight coefficients that dominateothers.4.2.2 NLM filters: proposed modelRiIG is a normal variance mixture model that has shown better results comparedto well-known K-distribution for ultrasound PDF model fitting [39]. 
Another fa-vorable feature of RiIG is that, even in the case of limited data samples, it stillperforms acceptedly [41]. The RiIG model is defined as:ai = βizi+ni√zi, (4.9)where ni has a normal distribution N(0,1) and zi follows Inverse Gaussian (IG)distribution, IG(δi,γi), where δiγi is the mean and δ2i is the shape parameter. RiIG issimilar to the noise model definition proposed in [28], however the noise varianceis data-dependent and spatially different all over the image. The conditional PDF ofRiIG is also available in an explicit form [41], which allows us to derive a closed-70form probabilistic similarity measure with the favorable characteristics. We startfrom the new similarity measure definition:SRiIG(ai,a j) = P(Zi−Z j|ai,a j)(0|ai,a j),=∫PZi(z)PZ j(z)PAi|Zi(ai|z)PA j|Z j(a j|z)dzPAi(ai)PA j(a j),=1PAi(ai)PA j(a j)∫ δiδ jaia j2piz5 ×exp((δi+δ j)γ−δ 2i +δ 2j2z−1 + γ2z)×exp(−12z(a2i +a2j +2β 2z2))×I0 (βai) I0 (βa j)dz,(4.10)where [41]:PA (a) =√2pi α32 δ exp(δγ) a(δ 2 +a2)34×K 32(α√δ 2 +a2)I0(βa),(4.11)and α =√γ2 +β 2. In(.) and Kυ(.) represent the modified Bessel function of thefirst and the second kind of the order n, and υ respectively.PZ (z) = IG(δ ,γ) ,= IG(µ = δγ ,λ = δ2),=δ√2piz3exp(δγ− δ2z−1 + γ2z2),(4.12)and:PZ|A (z) = GIG(−32,√δ 2 +a2,α), (4.13)and:PA|Z (a) =azexp(−12z(a2 +β 2z2))I0 (βa) . (4.14)71If we plug in (4.11) to (4.14) into (4.10), we get:SRiIG(ai,a j) =δiδ jaia jI0 (βai) I0 (βa j)exp((δ i+δ j)γ)2piPAi (ai)PA j (a j)×∫ +∞01z5exp(−Ttz−1− (γ2 +β 2)z)dz,(4.15)where Tt =δ 2i +δ 2j2 +a2i +a2j2 .Using the following equality:∫ +∞0zυ−1 exp(−Θ2z−1−Θ3z)dz =2(Θ2Θ3) υ2Kυ(2√Θ2Θ3),(4.16)and after some arithmetic, we have:SRiIG(ai,a j) =(δ 2 +a2i )34 (δ 2 +a2j)342K 32(α√δ 2i +a2i)K 32(α√δ 2j +a2j) . (4.17)We can simplify SRiIG(ai,a j) more by using the following estimation of Kυ(x),for x υ , which is true for the values of δ in our experiments:Kυ (x)∼=√pi2exp(−x)√x, x υ , (4.18)and get the final similarity measure as:SRiIG (ai,a j)C∼=TiTjT74texp(−α(2√Tt −√Ti−√Tj),Ti = δ 2i +a2i , Tj = δ 2j +a22, Tt =δ 2i +δ 2j2+a2i +a2j2,(4.19)where C is a normalization factor. It can be observed that the similarity measure isaffected both by the additive (2√Tt −√Ti−√Tj) and multiplicative (TiTjT74t) differ-ence of the receiving signals. We have also shown that the RiIG similarity measure72is bounded from both sides, which is a desirable feature for a similarity measure.To improve the denoising result, we incorporated the symmetrical Kullback-Leibler distance (KLD) in our new similarity function as a regularization term in aniterative manner to use the prior knowledge of the previous estimation as suggestedin [30]:w(i, j) =∏k∈V(SRiIG(ai,k,a j,k,δ )︸ ︷︷ ︸data f idelityexp(−1TKLDRiIG(zi,k,z j,k,β ))︸ ︷︷ ︸a priori)g σ ,kh . (4.20)δ corresponds to the mean of the noise-free signal (Z) and β shows the amount ofdrift from the Gaussian noise model. The KLD term serves as a priori from theprevious estimation. Symmetric Kullback-Leibler divergence (Ksym) or Kullback-Leibler distance (KLD) for two density functions is defined as:KLD =∫ +∞−∞(PA|Z (t)−PA|Z (t)) ln(PA|Z (t)PA|Z (t))dt. (4.21)After plugging in (4.14) into (4.21) and simple arithmetic:KLDRiIG (zi,z j) =ln(z jzi)∫ +∞0(e−β22zizitI0 (β t)e−12zit2−e− β22z jz jtI0 (β t)e− 12z jt2)((12z j−12zi)t2−β 2(z j− zi2))dt.(4.22)Considering the following equality [63]:∫ +∞0xµ−12 e−αxI2ν(2β√x)dx =Γ(µ+ν+ 12)Γ(2ν+1) β−1eβ22α α−µM−µ,ν(β 2α ),(4.23)where ℜ{µ+ν+ 12}> 0, I2ν(.) is the modified Bessel function of the first kind and73order 2ν , Γ(.) 
is the gamma function, and M−µ,ν(.) is The Whittaker M function.Ma,b (z) = e− z2 zb+12 F11(12+b−a,1+2b,z), (4.24)where F11 (a,b, .) is the confluent hypergeometric function of the first kind. Know-ing that:F11 (a,a,z) = exp(z), (4.25)after some arithmetic, we can get:KLDRiIG (zi,z j) = ln(z jzi)×(e−β22zi (ziz j−1)F11(2,1,β 22zi)− e− β22z j (z jzi−1)F11(2,1,β 22z j)+β 2e−β22zi(z j− zi2)eβ22 zi−β 2e−β22z j(z j− zi2)eβ22 z j).(4.26)We use the asymptotic approximation of the F11 (a,b, .) function as [77]:F11 (a,b,z)∼=exp(z)za−bΓ(a), (4.27)and we finally:KLDRiIG (zi,z j)∼= β 2 ln(z2z1)(z1− z2)×(Φ−β22 (z1)(z1z2−12)+Φ−β22 (z2)(z2z1−12)),(4.28)where Φ(z) = exp(1z − z). It can easily be verified that KLDRiIG is symmetric.The weighted average estimation of the noise free pixel is developed based onthe WML estimation as follows:ẑi = argminz>0∑j∈Pw(i, j) lnP(A j|Z j)(a j|z). (4.29)From the first order optimality condition, we derive the equation for the WML74estimation of RiIG distribution.∂∂ zi ∑j∈Pw(i, j) ln(a jziexp(−12zi(a2j +β 2z2i))I0 (βa j)) = 0. (4.30)So:∑j∈Pw(i, j)a2i = (zi+β 22z2i )∑j∈Pw(i, j)︸ ︷︷ ︸Ci, (4.31)and easily we can get the final WLM estimation:ẑi =−1+√1+ 2β2Ci ∑j∈Pw(i, j)a2jβ 2 (4.32)Selection of Parameters: To be able to benefit from the Fast Fourier implemen-tation of the NLM filters [31], it is needed to swap between the two summations(4.2) and (4.3), known as “for loops”. Therefore, the same window size should beused for both the averaging window (P= [−l× l]) and the similarity measurementarea (V = [−l× l]). We select l = 5, and set the filter parameter T , which adjuststhe effect of data fidelity, to T = 0.2 (2l+1)2 as suggested in [30]. The so calledbandwidth h parameter, which performs as a smoothing parameter, is chosen ac-cording to the quantiles of a chi-squared distribution based on the selection of σas explained in [31]. It is worth mentioning that there are some adaptive methodsto locally select the NML filter parameters [35, 107] and improve the denoinsingalgorithm performance.Number of Iterations: Including the KLD test statistic reduces the sensitivityof the denoising algorithm to the structural changes and it leads to better handlingof discontinuities [94]. There is a trade off between the computational cost andthe bias of the estimation. Fig. 4.2 shows the change of the SNR with the samedefinition in [30] over six iterations. It can be seen that, after three iterations, theSNR converges, which is in agreement with the results presented in [30]. Due tocomputational restrictions, we choose the number of iterations as three, noting thatthe higher number of iterations may lead to better denoising results [30].Fast Implementation: Our approach extends the method of swapping the two751 2 3 4 5 614.514.751515.2515.5Number of iterationsSNRFigure 4.2: SNR evolution for a sample ultrasound spine image.“for loops”, as suggested before in [31], to our definition of RiIG weights and usesthe definition of convolution to make the algorithm faster. 
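Before describing the FFT-based acceleration, the two pixelwise building blocks derived above can be stated directly in code: the approximate RiIG similarity of (4.19), up to its normalization constant, and the closed-form WML update of (4.32). In practice the similarity is accumulated over a patch and combined with the KLD term as in (4.20); that aggregation is omitted here for brevity.

```python
import numpy as np

def riig_similarity(a_i, a_j, delta_i, delta_j, alpha):
    """Approximate RiIG similarity of (4.19), up to the constant C."""
    Ti = delta_i**2 + a_i**2
    Tj = delta_j**2 + a_j**2
    Tt = 0.5 * (delta_i**2 + delta_j**2) + 0.5 * (a_i**2 + a_j**2)
    return (Ti * Tj / Tt**1.75) * np.exp(
        -alpha * (2.0 * np.sqrt(Tt) - np.sqrt(Ti) - np.sqrt(Tj)))

def wml_update(weights, a_neighbors, beta):
    """Weighted maximum-likelihood estimate of the noise-free value, eq. (4.32)."""
    C = np.sum(weights)
    s = np.sum(weights * a_neighbors**2)
    return (-1.0 + np.sqrt(1.0 + 2.0 * beta**2 * s / C)) / beta**2
```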
We first need to take thelogarithm of the weights defined in (4.20) and rearrange the indices to be able totake advantage of the Fast Fourier Transform (FFT):lnw(i, i+ τ) =∑kg σ ,kh︸︷︷︸φ(k)[lnS(ai+k,a j+k,δ )−1TKLD(zi+k,zi+τ+k,β )]︸ ︷︷ ︸∆τ (i+k), (4.33)w(i, i+ τ) = exp(φ ∗∆τ) = expF−1(F (φ)F (∆τ)), (4.34)where ∗ represents the convolution operation, F is the 2D discrete Fourier trans-form (2D-FFT), and F−1 is its inverse transform.4.2.3 Different noise modelsInstead of considering the complex process of ultrasound formation [122], we con-sider the amplitude of ultrasound RF signal (A), which has a similar autocovariancefunction as the intensity and may be used interchangeably for most practical pur-poses [123]. In all of the following noise models, an uncorrelated noise has beenconsidered. In the case of Gaussian noise, circularly normal Ni and N j are consid-76(a) (b)(c) (d)Figure 4.3: An example of (a) the transverse ultrasound image of the spine,(b) the denoised image based on the proposed iterative RiIG NLM fil-ter, (c) the extracted noisy part, and (d) the SURE-based risk map (Thediscontinuity comes from using the coarse grid for SURE calculation).The tiled appearance of the SURE map is because the δ parameter is es-timated at the center of grid windows and the same value is used for allof the other pixels. It can be observed that in the area with high specularreflection, the noise estimation is riskier.77ered with the following autocovariance function:< nin j >= 2σ2d δ (∆y). (4.35)We have assumed that the change of the coherent parts of two consecutive framesare negligible and thus:< ziz j >=< z2 > . (4.36)In this section, we derive the correlation function of ultrasound amplitude for dif-ferent noise models.Gaussian additive noise modelA = Z+N, (4.37)where N follows the normal distribution of N(0,σ2d ) . In this case, the correlationfunction in terms of elevation displacement is:Rm (∆y) =< (Z1 +N1)(Z2 +N2)>,=< Z2 >+< N1N2 > .(4.38)The model suggests that:< A >=< Z >, (4.39)and:< Z2 >= σ2z +µ2z = σ2A−σ2d +µ2A. (4.40)So:R(∆y) = Rm (∆y)∗K2σ3y(pi∆y)2(1− sinc(2∆yσy))=(σ2A−σ2d +µ2A)K2σ2y +2σ2dK2σ3y(pi∆y)2(1− sinc(2∆yσy)),(4.41)78and the correlation coefficient is derived as follows:ρ =(σ2A−σ2d +µ2A)σ2y +2σ2dσ3y(pi∆y)2(1− sinc(2∆yσy))−µ2Aσ2y(σ2A−σ2d)σ2y + 43σ2dσ y,=(σ2A−σ2d)σy+σ2dσ2y(pi∆y)2(1− sinc(2∆yσy))((σ2A−σ2d ))σy+ 43σ2d,=σ2ẑ +2σ2dσy(1−sinc(2∆yσy)(pi∆y)2)σ2ẑ +43σyσ2d.(4.42)where σẑ is the standard deviation of the estimated noise-free signal and σy is theresolution width of the ultrasound beam. ∆y represents the elevation displacement.We follow simple weighted average with a Gaussian kernel denoising method,where the estimated noise-free signal is calculated as:ẑi = ∑j∈Pw ja j, (4.43)where w j is the spatial sample of a Gaussian kernel centered at ai.Gamma multiplicative noise modelA =√Z√N, (4.44)where noise follows a Gamma distribution with number of looks equal to L (N ∼Gamma(L, 1L)) and as a result√N ∼Nakagami(L,1). In Chapter 3, we have shownthat the correlation coefficient in this case equals to [8]:ρ =aLσ2√Ẑ+µẑ (1−aL)σy(1−sinc(2∆yσy)(pi∆y)2)aLσ2√Ẑ+µẑ (1−aL) 23σy, (4.45)where aL =(Γ(L+ 12)Γ(L))2 (1L)and µẑ is the mean of estimated noise-free signal andσ√Ẑis the standard deviation of the squared root of Z. 
This is the same noisemodel that iterative PPBN denoising method has been developed for [30].79RiIG mixed noise modelA = βZ+√ZN, (4.46)where noise is normally distributed (N ∼ N(0,1)) and Z follows an Inverse Gaus-sian (IG) distribution with mean parameter of δγ and shape parameter of δ2. In thiscase, as we have shown in Chapter 2 the correlation coefficient is as follows [7]:ρ =β 2σ2ẑ +2µẑσy(1−sinc(2∆yσy)(pi∆y)2)β 2σ2ẑ +43σy µẑ. (4.47)The summary of the proposed iterative RiIG NLM algorithm is provided in Algo-rithm 1. In order to make the algorithm faster, we have used two different gridsof patches. A coarse grid is used to estimate the initial parameters of the RiIGmodel at the center of every patche. The parameters of other pixels in the imageare approximated from a closest neighboring patch center. We take advantage ofthe block processing method implemented in MATLAB and calculate a delta-mapbased on the estimated delta values at the center of the coarse grid. A finer gridis then used for the task of correlation and transform estimation to increase theamount of information extracted all over the image. As suggested in [41], we sub-stitute the average value of α and β parameters all over the image and used thesefixed values for the next iterations. In this case, the fitting algorithm is performedonly once on the coarse grid, which substantially reduces the processing time.4.2.4 SURE-based 6-DoF transform estimationWe need to estimate the rigid transformation between two sets of 3D points. Forthe least square formulation of this problem, there are several closed-form solu-tions in the literature [38]. In this work, we use SVD estimation, which can beeasily transformed to a weighted average one, noting that there is no considerabledifference in the computational cost or the robustness of the result for differentavailable algorithms [38].The in-plane motion can be recovered from 2D image registration techniques [58].We used an efficient subpixel image registration algorithm which obtains an initialestimate of the cross correlation peak by an FFT and then refines the shift estima-80Algorithm 1 Iterative RiIG NLM algorithm1: Initialize αi, βi, and δi.2: Calculate αi, βi, and δ 0i based on the EM method over a coarse grid.3: Use block processing to build a δ map all over the image.4: ẑi0 =−5+√25+4α¯2(δ 0i2+a2i )2α¯2 (MAP Estimation).5: for i = 1 to number of iterations do6: wit(i, j) =∏k∈V(S(ai,k,a j,k,δ it−1)exp(− 1T K(zit−1i,k ,zit−1j,k , β¯ )))g σ ,kh .7: ẑiti =−1+√1+ 2β¯2Ci∑j∈Pw(i, j)a2jβ¯ 2 (WAML Estimation).8: Update δ iti based on the µẑit and µ 1ẑit(Moment Estimation).9: end fortion by up-sampling the DFT only in a small neighborhood of that estimate. Usingthis approach all the image points are used to compute the up-sampled cross corre-lation [54]. We applied the method on every patch to determine the in-plane shiftof the patch center.SURE has been previously used as a metric to be minimized for the parameterselection of the NLM filters [119]. The main advantage of SURE is that it givesan estimate for Mean-Squared Error (MSE) directly from the noisy data withouthaving the knowledge of the true signal [113]. 
For the proposed RiIG NLM filter,we derive an explicit analytical expression of the SURE unbiased risk estimator.We know that:w(i, j) =∏k∈V(Ti,kTj,kT74texp(−α(2√Tt −√Ti,k−√Tj,k))ga,kh.(4.48)∂ ln(w(i, j))∂ni=∂w(i, j)∂niw(i, j), (4.49)81∂w(i, j)∂ni=∑k∈Vga,kh(2ai,kσi,kTi,k−74ai,kσi,kTt+α(ai,kσi,k√Ti,k−ai,kσi,k√Tt))+aiσi2,(4.50)∂w(i, j)∂ni=w(i, j)h{Di+aiσi2}(4.51)(β 2zi+1)2= 1+2β 2ci∑j∈Pw(i, j)a2j , (4.52)∂ zi∂ni=12σiaiT− 34i(β 2zi+1)=σiai2(β 2zi+1)(δ 2i +a2i) 34.(4.53)∂ ẑi∂ni=σ̂iai2(β 2i zi+1)(δ 2i +a2i) 34. (4.54)SURE = E{∣∣∣∣zi−aiβi∣∣∣∣2+σ3i ai(β 2i zi+1)T34}−σ2iβ 2i. (4.55)To determine the weight factors in SVD estimation, we first calculate the nor-malized SURE and define the weights at the center of every patch as:W (i) = 1−SURE(i)maxjSURE( j). (4.56)It is possible to find the transformation between any given frame and the firstframe ( T1 n) by multiplying the transformations from the consecutive pairs of frames.820.2 0.4 0.6 0.8 1 1.2−0.5−0.3−0.1500.150.30.5distance (mm)displacement error/distance  AveragePPBNiRiIG0.2 0.4 0.6 0.8 1 1.20.10.20.30.5distance (mm)mTRE/distance0.2 0.4 0.6 0.8 1 1.2−0.5−0.3−0.1500.150.30.512distance (mm)yaw error (degree)0.2 0.4 0.6 0.8 1 1.2−1−0.5−0.3−0.1500.150.30.51distance (mm)tilt error (degree)(a) Beef sequences0.2 0.4 0.6 0.8 1 1.2−0.5−0.3−0.1500.150.30.5distance (mm)displacement error/distance  AveragePPBNiRiIG0.2 0.4 0.6 0.8 1 1.20.10.20.30.5distance (mm)mTRE/distance0.2 0.4 0.6 0.8 1 1.2−0.5−0.3−0.1500.150.30.512distance (mm)yaw error (degree)0.2 0.4 0.6 0.8 1 1.2−1−0.5−0.3−0.1500.150.30.51distance (mm)tilt error (degree)(b) Chicken sequences0.2 0.4 0.6 0.8 1 1.2−0.5−0.3−0.1500.150.30.5distance (mm)displacement error/distance  AveragePPBNiRiIG0.2 0.4 0.6 0.8 1 1.20.10.20.30.5distance (mm)mTRE/distance0.2 0.4 0.6 0.8 1 1.2−0.5−0.3−0.1500.150.30.512distance (mm)yaw error (degree)0.2 0.4 0.6 0.8 1 1.2−1−0.5−0.3−0.1500.150.30.51distance (mm)tilt error (degree)(c) Turkey sequencesFigure 4.4: Ex-vivo out-of-plane motion estimation. For each kind of tissuetype consisting of five different subsets, σy model was estimated in across-validation leave-one-subset-out experiment. The method has beentested on (a) beef, (b) chicken, (c) turkey.Therefore, the whole volume is constructed relative to the first frame:T1 n = T12 T23 . . . Tn−1n. (4.57)4.2.5 Improved σy regression modelWe previously proposed a linear regression model to account for the spatial changesof the beam profile [7]. We improve our previous model to a quadratic one, which83is closer to the shape of ultrasound focused beam for a linear transducer as:σy(x,∆y,z) = σ0 +ax+b∆y+ c(z− z0)2, (4.58)where x and z are lateral and axial positions, respectively. In the case of curvilineartransducer, we used the following model:σy(r,θ ,∆y) = σ0 +ar+bθ + c∆y, (4.59)where r is the radius and θ is the angle in the polar coordinate of a curvilineartransducer. The resolution cell width increases along the axial direction, r, whichis responsible for the lower resolution at the lower parts of the ultrasound image.The ultrasound profile is almost constant in the lateral direction, θ .4.3 Experiments and resultsTo evaluate the proposed method, we have performed ex vivo and in vivo experi-ments and compared the tracking results with a gold standard. The ex vivo experi-ments use three different tissue types and the in vivo experiments use spine scans offive subjects. 
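The SURE-weighted point-set alignment implied by (4.56) can be sketched with a standard weighted SVD (Kabsch-type) closed-form solution, followed by the frame chaining of (4.57). This is one of the interchangeable closed-form solvers referred to in the text, not necessarily the exact implementation used; each patch center contributes with weight W(i) = 1 minus its normalized SURE.

```python
import numpy as np

def weighted_rigid_transform(P, Q, w):
    """Weighted least-squares rigid fit mapping points P (N,3) onto Q (N,3)."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    p_bar = w @ P
    q_bar = w @ Q
    H = (P - p_bar).T @ ((Q - q_bar) * w[:, None])   # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # proper rotation, det = +1
    t = q_bar - R @ p_bar
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain(transforms):
    """Accumulate frame-to-frame transforms, T_1n = T_12 T_23 ... T_(n-1)n (4.57)."""
    T = np.eye(4)
    for Tk in transforms:
        T = T @ Tk
    return T
```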
The details of the experiments are provided in the following sections.We have compared the tracking results of three different methods of denoising, i.e.simple weighted average (average), PPBN, and the proposed iterative RiIG (iRiIG)denoising algorithm. The results are compared in terms of the ratio of the measureddisplacement error to the true elevational separation; the mean target registration0 0.2 0.5 0.8 1.100.050.10.150.2distance (mm)coefficient of variation (cv)  5 mm10 mm15 mm20 mm25 mm30 mm35 mm(a) Average0 0.2 0.5 0.8 1.100.050.10.150.2distance (mm)coefficient of variation (cv)(b) PPBN0 0.2 0.5 0.8 1.100.010.020.030.040.050.060.07distance (mm)coefficient of variation (cv)(c) iRiIGFigure 4.5: CV-plots for different speckle tracking algorithms developed fordifferent denoising methods for the ex vivo experiment.84error (mTRE) between the true and the estimated transformation at the center ofthe patches divided by the true elevational separation; and the estimated tilt andyaw angles for the estimated transform.4.3.1 Ex vivo experimentDifferent tissue types including beef tenderloin, chicken breast, and turkey breastare translated relative to the transducer in the elevation direction with known steps.The translation is created by means of a linear motor stage (T-LSR150B, ZaberTechnologies Inc., Vancouver, BC, Canada) with equal elevation distances of 0.099 mm.Each tissue type over the length of 20 mm has been considered as five subsets of4 mm. Each subset is composed of 40 frames. For any particular elevation distance,all the possible subsets of frames with the spacing equal to that given distance areconsidered for evaluation.All the experiments on tissue samples are performed on a SonixTouch ultra-sound machine (Ultrasonix Inc., Richmond, BC, Canada) using a 10 MHz lineararray transducer (L14-5/38) with a 5-14 MHz bandwidth. The depth of imagingis 4 cm with a single focus on 2 cm. The sampling rate of the RF echo signal is40 MHz and the recorded signal had 2080 samples for each of 126 lateral vectors.The ultrasound transducer is affixed to the optical table by means of a mechanicalarm. The linear motor stage is also attached to the table and the imaging samplewas positioned on top of it. To ensure there is no in-plane motion, all of the tissuesamples are secured in an agar medium as explained in [7].For each kind of tissue type, the regressors of the σy model (4.58) is estimatedin a cross-validation leave-one-subset-out experiment. Fig. 4.4 shows the results.In an ideal situation, there should be no difference in the σy values among differenttissues. However, in practice, variations are observed for different tissues. Toassess the variations of the sigma values at different axial and elevation distances,the Coefficient of variations (CVs) are presented in CV-plots. As expected, at thedepths closer to the focal zone, the variation is less. The CV-plot of the iRiIGproposed method, shows less variation around 0.5 mm elevation distance, whichindicates the effect of the elevation focus.All the methods are implemented in MATLAB running on a PC with a Core 2850 0.5 1 1.5 20.060.080.10.120.140.160.180.20.22distance (mm)coefficient of variation (cv)  5 mm20 mm35 mm50 mm65 mm80 mmFigure 4.6: CV-plots for the in vivo experiment at different axial depths andelevation distances for five different subjects.Duo CPU @ 2.93 GHz and 4 GB of RAM. 
The processing time of the proposed denoising algorithm on a 4×8 mm² patch is 3.6 seconds, which is approximately one-third of that of the model-fitting approach [7].

4.3.2 In vivo experiment

To assess the clinical feasibility of the proposed method, we process ultrasound data taken from five different subjects. Each subject's spine is scanned with a curvilinear transducer (C5-2) on a SonixGPS ultrasound machine (Ultrasonix Inc., Richmond, BC, Canada) and the electromagnetic (EM) tracker readings are recorded. The spine is first scanned vertically in the cranial direction and then in the caudal direction. The cranial scan is used for calibration and the caudal scan for evaluation in all data sets.

The results for the 6-DoF transform parameters estimated with the proposed method for the caudal spine scans of the five subjects are presented in Table 4.1. The mean squared distance (MSD) of all the patches' center points between the first and the last frame of every sequence is measured. The MSD error ratio (MSDER) is defined by dividing the MSD between the estimated transform and the EM transform for the last frame by the overall MSD. The MSDER is a more insightful way to assess the success of the algorithm than the transform parameters. At frame i, T_{in} is estimated based on the proposed method and the distance of the center point of the image is compared to the one from the EM tracker. The mid-point drift is the accumulated error of the center point of the image. We calculate the average distance between the center points of two consecutive frames from the magnetic tracker information. If it is greater than a minimum threshold (0.3 mm), we mark those frames to be considered in the estimation algorithm. We used 80 marked frames in each data set to evaluate the method. However, the numbers of consecutively scanned frames are 112, 144, 101, 100, and 144, respectively.

Subject  Method      Tilt    Yaw     Roll     Lateral  Axial   Elevation  MSD    MSDER  Drift
S1       EM tracker  -1.48*   1.15     0.27    7.23     5.04   -27.69     30.31  0       0
S1       Estimated   -0.15    0.96    -6.42    1.55    -0.18   -19.35     26.17  0.29   -0.15
S2       EM tracker  -1.79   -2.13    -7.16    9.37     1.44   -31.24     36.51  0       0
S2       Estimated    1.13   -2.40   -22.36    2.37    -8.04   -38.94     56.04  0.65   -0.59
S3       EM tracker  -0.11    0.64    -1.12    2.75     1.08   -26.42     28.34  0       0
S3       Estimated   -0.03    2.27    -4.52   -0.08    -0.57   -16.45     20.31  0.20    0.006
S4       EM tracker  -0.09    0.37    -1.17   -1.90     1.49   -27.08     28.64  0       0
S4       Estimated   -0.27    3.04   -15.88   -0.92    -0.89    -6.63     24.67  0.25    0.54
S5       EM tracker   1.59    0.41    -1.31   -2.00     4.08   -33.10     36.08  0       0
S5       Estimated    0.67   -1.51   -12.02   -1.30    -1.62   -25.25     33.88  0.13    0.01
* The tilt, yaw, and roll angles are in degrees; the lateral, axial, elevation, MSD, and drift values are in millimetres (mm).

Table 4.1: In vivo 6-DoF transform estimation for five different subjects. The results are compared against the electromagnetic (EM) tracker measurements as the gold standard. The transformation parameter values are obtained by concatenating the transformations between the neighbouring marked frames in the reconstruction sequence of every patient. MSD is the mean squared distance between the first and the last frame of every sequence. MSDER is defined by dividing the MSD between the estimated transform and the EM transform for the last frame by the overall MSD. Drift shows the distance between the estimated last-frame mid-point and the corresponding EM measurement.
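The frame-marking rule described above (only keep frames whose EM-tracked centre points are at least 0.3 mm apart, so that pairs do not sit on the flat part of the decorrelation curve) can be sketched as below. This is one plausible reading of the rule, with hypothetical names; the thesis text compares the centre points of consecutive frames.

```python
import numpy as np

def mark_frames(centers, min_step=0.3):
    """Select frames whose tracked centre points are separated by at least
    `min_step` millimetres from the previously marked frame.

    centers: (N, 3) array of image mid-points in mm from the EM tracker."""
    centers = np.asarray(centers, dtype=float)
    marked = [0]
    for i in range(1, len(centers)):
        if np.linalg.norm(centers[i] - centers[marked[-1]]) >= min_step:
            marked.append(i)
    return marked
```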
In practice, when there is no tracker information, a threshold could be applied to the average correlation between two frames, since there is a direct relation between the correlation value and the displacement.

We have used the tracker information to estimate the regressors of the σy model in (4.59). Fig. 4.6 shows the CV-plots over the different subjects. The CV can be used as a measure of the tissue independency of the σy model. The overall variation is less than 20% over the different subjects and drops to 8% around the focus, at the axial depth range of 30 mm to 50 mm and 1 mm elevation distance.

Several conclusions can be drawn from Table 4.1. The first one is that higher in-plane motion leads to significantly larger error in the overall tracking result, as observed in the case of Subject 1 and Subject 2. The effect of in-plane speckle motion on displacement estimation has been previously reported as the speckle motion artifact [65]. This phenomenon happens due to the spatial variations of the transducer point spread function outside the focal zone. Therefore, a lateral tissue translation is seen as a tissue rotation, i.e. the beam angulation changes with lateral translation in the image coordinate system. For a detailed analytical explanation of this effect, refer to [65]. In addition to the speckle motion artifact, the change in the correlation peak due to lateral motion affects the elevation displacement estimation. The estimation of the peak change involves some function fitting and imposes extra computational cost. It was shown that, by doing so, it is possible to improve the elevation estimation [57].

Figure 4.7: 6-DoF transform estimation for a subset of the in vivo experiment. (a) The accumulated mid-point error, (b) the estimated lateral, axial, and elevational translations (dotted lines) in comparison with the readings from the EM tracker (solid lines), (c) the errors in the translation estimates relative to the EM tracker, (d) the estimated tilt, yaw, and roll rotations, and (e) the errors in the rotation estimates relative to the EM tracker, all plotted against the number of frames.

Figure 4.8: The effect of (a) lateral motion and (b) tilt rotation on the overall tracking results of the in vivo experiment (MSD error rate against lateral displacement in mm and tilt in degrees, respectively).

It can also be observed that the presence of tilt rotation significantly degrades the tracking outcome. The same issue was reported in [59]. Two approaches have previously been proposed to compensate for the effect of out-of-plane rotation. In [59], the correlation curves are adapted by interpolating from calibrated curves for different tilt angles. A closed-form solution that considers different resolution cell widths is proposed in [7]. These approaches could be used to improve the results at the cost of additional computation.
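For reference, the basic inversion that these artifacts perturb — turning a measured patch correlation into an elevational distance through the calibrated beam width — can be sketched as follows. The Gaussian decorrelation form ρ(d) = exp(−d²/(2σy²)) used here is one common parameterization assumed only for illustration; it is not the exact closed-form RiIG-based expression derived in this thesis.

```python
import numpy as np

def elevation_from_correlation(rho, sigma_y):
    """Invert a Gaussian decorrelation model rho(d) = exp(-d**2 / (2*sigma_y**2)).

    rho: measured correlation coefficient(s) of a patch between two frames.
    sigma_y: calibrated elevational beam width at the patch position, e.g. from
    the regression model (4.58)/(4.59).  rho is clipped away from 0 and 1 to keep
    the logarithm finite (the flat part of the curve is uninformative anyway)."""
    rho = np.clip(np.asarray(rho, dtype=float), 1e-6, 1.0 - 1e-6)
    return sigma_y * np.sqrt(-2.0 * np.log(rho))
```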
Although it appears to be challenging to decouple the effect of tilt rotation or lateral translation from the other transform parameters in the ex vivo experiments, to study the effect of each individual parameter more closely we choose the subjects with one dominant transform parameter. Subject 2 has the highest lateral displacement, so we have chosen this subject to assess the effect of lateral translation on the overall accuracy. In addition, Subject 5 has the highest tilt rotation parameter; as a result, this subject has been chosen to assess the effect of tilt rotation. We start from the first frame and estimate the transformation until we reach an MSD of 15 mm. In order to obtain a reasonable number of subsequences, we repeat the same procedure starting at every fifth frame. Considering the above, 10 different subsequences are evaluated for each subject. In Fig. 4.8 the MSD error rates for the different subsequences are shown versus the tilt angle and the lateral displacement. It can be observed that a lateral displacement greater than 4 mm drastically degrades the estimation. It is also shown that, as the tilt rotation increases, the error rate increases as well.

4.4 Summary

In this work, we have proposed a filter-based method of speckle tracking. An iterative NLM filter has been developed based on the RiIG statistical model of the ultrasound image. Furthermore, a new similarity measure suitable for the multiplicative ultrasound noise model has been derived. The speckle tracking method has been developed based on the theoretical correlation expressions of three different noise models, i.e. the additive Gaussian, multiplicative Gamma, and mixed RiIG models. A regression-based beam profile is proposed to capture the spatial variability of the resolution cell. The ex vivo experiments have been performed on three different tissue types. The tissues are translated in the elevation direction by means of a linear motor stage, and its measurements are used as the gold standard. The results show that the proposed method performs better in terms of out-of-plane displacement estimation compared to the other denoising methods. The evaluation of the sigma map variability at different depths and elevation displacements suggests a lower tissue dependency of the proposed denoising algorithm compared to the two other denoising methods. An in vivo experiment has been performed to assess the potential of the proposed framework for clinical application. The spines of five different subjects are scanned and the speckle tracking results have been compared to the measurements of the EM tracker. We can conclude that, when the motion is mostly composed of out-of-plane displacement with minimal lateral translation or tilt rotation, the proposed speckle tracking performs more accurately.

Although it is challenging to decouple the effects of all sources of error in a freehand ultrasound scan, in routine clinical applications the general pattern of transducer movement over a specific organ is known a priori. Hence, predefined motion patterns can be used to enhance the outcome of the speckle tracking algorithm.

In our in vivo experiment, the vertebral specular reflection and the shadow beneath it cover a considerable part of the spine image; therefore, there are not enough speckles to be used for tracking purposes. For other organs, such as the prostate or liver, where these effects are absent or minimal, we expect to obtain better results.

Another important issue that needs further improvement is the method of marking frames.
As noticed in [58], concatenating all of the subsequent frames in a freehand scan will not necessarily result in an optimal reconstruction. In this work we used a threshold to avoid the flat part of the correlation curve; however, a more effective selection of interleaved frames would likely improve the result. Furthermore, some structural features of the ultrasound image, such as the continuity of the boundaries, specifically in the plane perpendicular to the motion field, and the in-plane statistical characteristics, can be considered to decrease the drift of speckle tracking and improve the overall 3D reconstruction accuracy.

Chapter 5

Conclusions and Future Work

In recent years, the number of tracked ultrasound applications in image-guided interventional and surgical procedures has increased. This is primarily due to the unique characteristics of ultrasound imaging, such as being non-invasive, real-time, and relatively inexpensive. Improvements in low-cost electromagnetic sensors have also encouraged ultrasound manufacturers to provide built-in trackers, which facilitate tracked ultrasound. However, the interference of metallic tools and equipment in the operating room significantly diminishes the accuracy of the pose measurements. Inaccurate tracking measurements result in misalignment of the ultrasound frames, and the reconstructed 3D volume suffers from spatial reconstruction error. Any spatial reconstruction error may change the appearance of the anatomical structures, which in turn makes the interpretation or further analysis of the image more challenging.

One possible way to improve the accuracy of the 3D reconstructed volume is to use the image data itself and estimate the relative pose of consecutive images. If the accuracy of the image-based tracking is comparable to sensor tracking, it may also eliminate the need for an external tracker and widen the range of freehand ultrasound applications. In some therapeutic or surgical applications, however, it is necessary to locate a feature in a fixed reference coordinate system. In these applications, image-based tracking can supplement the measurements of the tracker.

Ultrasound image-based tracking and speckle tracking are terms that both refer to the same idea of using image information to track the ultrasound frames. Speckle tracking, specifically the in-plane displacement estimation, has been widely investigated by researchers during the last two decades. On the other hand, there has been less success in the out-of-plane displacement estimation. The accuracy of the previously published results in out-of-plane displacement estimation was still lower than the accuracy of 6-DoF position and orientation estimation from sensor tracking. This is mainly due to the challenging nature of out-of-plane motion estimation. The main idea of this thesis was to develop algorithms and methods to improve the quality of out-of-plane motion estimation and bring it to the point that the 6-DoF image-based position and orientation estimation is comparable to or better than sensor tracking systems.

5.1 Conclusions

There are many factors that contribute to the challenging nature of 6-DoF image-based position and orientation estimation. The first one is the presence of coherency or specular reflections in real tissue. Traditionally, correlation-based out-of-plane displacement estimations are based on a specific pattern of speckle known as FDS.
However, in real tissue, this type of speckle pattern is rare, and any coherency, repetitive texture, or specular reflection may cause deviation from the ideal distribution of FDS. For the first time, we proposed a model-fitting approach with rich parameterizations to enable the consideration of patches with coherency for the out-of-plane displacement estimation. The RiIG ultrasound formation model was previously shown to be powerful in modeling the distribution of sampled RF data. However, its second-order statistics, which are a requirement for correlation-based out-of-plane displacement estimation, were not studied previously. For the first time, we introduced a correlation-based out-of-plane motion estimation based on the RF data. Based on the estimated RiIG model parameters, we derived a closed-form formulation for the correlation coefficient as a function of the out-of-plane displacement. We showed that this model makes it possible to consider the effect of the coherent part in the second-order statistics of the RiIG statistical model.

Despite the acceptable performance of the model-fitting approach, there are two drawbacks that impede its clinical application. The first one is the dependency on sampled RF data. Since only a few ultrasound machines allow access to the raw sampled RF data, it is desirable to develop a method that is directly applicable to B-mode images. The other drawback is the processing time. Theoretically, the proposed model fitting is applicable to any patch of the image. By increasing the number of patches that are considered for transform estimation, the accuracy of the 6-DoF estimation will increase. However, it will increase the computational cost. Increasing the number of RF samples in a patch will also result in a better model fit; however, it greatly increases the computational cost as well.

To overcome the shortcomings of the model-fitting approach, we introduced a novel framework of speckle tracking based on noise extraction. Considering speckle as a specific kind of noise, denoising based on a prior assumption on the noise distribution can be regarded as a local model-fitting approach. The NLM filter is a suitable candidate for a spatially local denoising filter. The interesting feature of NLM filters is that they are developed based on a probabilistic noise model, which is necessary for the correlation-based out-of-plane displacement estimation.

In recent years, NLM filters have shown their strength in denoising different types of images, including ultrasound images. In this work, we investigated probability-based NLM filters in order to extract speckle information from the ultrasound images and developed a filter-based speckle tracking framework. The NLM filter is a specific kind of weighted average filter of the pixel values. The weights are determined based on the similarity between a reference window and the neighboring windows. This similarity measure comes from the assumption on the noise formation model. The more similar the compared windows are, the more weight is given to the center pixel of that window. When the denoised image is obtained, the noise image is immediately determined based on the prior model.

In the filter-based framework, the speckle tracking is performed directly on B-mode images. This brings the method closer to its clinical application. It also provides the opportunity to take advantage of a fast implementation of the denoising filter.
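The weighted-average mechanism just described can be made concrete with the classical (Gaussian-noise) NLM estimator sketched below. The filters developed in this thesis replace the Euclidean patch similarity with RiIG-based and multiplicative-noise measures and iterate the scheme, so this is only an illustration of the principle, with hypothetical names and parameters.

```python
import numpy as np

def nlm_pixel(image, i, j, patch=3, search=11, h=10.0):
    """Classical NLM estimate of pixel (i, j): a weighted average over a search
    window, with weights decaying with the squared distance between the patch
    around (i, j) and the patch around each candidate pixel.  Indices are
    assumed to be far enough from the image border for the slices to be valid."""
    p, s = patch // 2, search // 2
    ref = image[i - p:i + p + 1, j - p:j + p + 1].astype(float)
    weights, values = [], []
    for m in range(i - s, i + s + 1):
        for n in range(j - s, j + s + 1):
            cand = image[m - p:m + p + 1, n - p:n + p + 1].astype(float)
            d2 = np.mean((ref - cand) ** 2)        # patch dissimilarity
            weights.append(np.exp(-d2 / (h * h)))  # similarity -> weight
            values.append(float(image[m, n]))
    w = np.asarray(weights)
    denoised = float(np.dot(w, values) / w.sum())
    return denoised  # the extracted "noise" at (i, j) is image[i, j] - denoised
```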
Initially, we used a patch-based approach for the NLM filter implementation; in the next step, however, we selected the parameters of the NLM in a way that a fast Fourier-based implementation of the filter is achievable.

To further improve the denoising algorithm and increase the accuracy of the speckle tracking algorithm, while benefiting from a richer distribution model, we combined the model-fitting and filter-based approaches. We introduced an iterative NLM filter based on the RiIG model. With such a combination, we can benefit from the strengths of both methods.

We developed a WML approach based on the RiIG model for denoising. A recent study has shown that the traditional definition of similarity in NLM filters lacks some favorable statistical features. A new similarity measure suitable for the multiplicative ultrasound noise model was developed in this work. The KLD was added to the NLM similarity measure as a regularization term to keep the fidelity of the denoised image to the original one. For the sake of comparison, we derived the theoretical correlation expressions of three different noise models, i.e. the additive Gaussian, multiplicative Gamma, and mixed RiIG models. The evaluation of the beam profile variability at different depths and elevation displacements suggested a lower tissue dependency of the proposed denoising algorithm compared to the other two denoising models.

In the first attempt, we fit a transform to all of the measured out-of-plane displacements over the entire image. However, we can assign uncertainty measures to each estimation to enhance the transform estimation accuracy. As a result, we developed a closed-form SURE metric that measures the risk of the denoised signal based on the noisy samples. We combined the local estimations based on their SURE risk factors in a weighted average manner and assigned less weight to riskier estimations. Measuring the uncertainty of the estimations imposes extra computational cost and brings up the trade-off between accuracy and computation time. Other uncertainty measures with lower computational cost can be explored in future research.

Another major challenge of the out-of-plane displacement estimation is the spatial variation of the beam profile. The spatial variations of the beam profile cause errors in the correlation values and subsequently in the out-of-plane displacement estimation. We found that these variations have two different effects. The first one is on the resolution cell's width over the entire image. Around the focal zone we have smaller resolution cells and, as we move away from the focus, the resolution cell width increases. It also changes in the elevation direction. We tried to capture these variations by introducing a regression-based model for the beam profile. In a calibration process, where the displacements are known, the
This is in agreement withthe fact that the beam profile is closer to its ideal form around the focus.The second effect is due to the Gaussian approximation of the correlation func-tion, which, in fact, is the convolution of two sinc2 functions. We provided a closedform solution for the correlation function and showed that it is possible to considertwo resolution cells with different widths in the new formulation. Through simu-lation, we showed that it is possible to compensate for the effect of out-of-planerotation by considering different resolution cell widths. However the implementa-tion of the proposed strategy on real tissue is one of the subjects for future research.All of the proposed algorithms were evaluated by performing several sets of exvivo experiments. To assess the potential of the proposed framework for clinicalapplication, the proposed methods were also applied to different in vivo ultrasoundimages including prostate and spine images. During the course of this researchwe learned that, when the motion is mostly composed of out-of-plane displace-ment with minimal lateral translation or tilt rotation, the best tracking results wereobtained.5.2 Future workIn spite of improving many aspects of 6-DoF speckle tracking, there are still someremaining issues that need to be studied further and be considered in future.One of the challenges in the sensoreless tracking algorithms is the change ofcorrelation due to other sources rather than the out-of-plane displacement. For ex-ample, when the in-plane motion is present, it can change the directional interac-tions of the ultrasound beam and the scatterers, mainly because of the beam profilenot being spatially invariant. One possible solution is to learn the effect on in-planetranslation or rotation for few values. Then interpolate or estimate the amount of97decorrelation due to the specific given in-plane motion. A similar approach hasbeen followed in [59]. Probe pressure also can change the density of the scatterersand therefore the correlation value. Previously an inter-frame registration approachhas been proposed to reduce the effect of probe pressure in 3D reconstruction. Re-cently, the inter-frame registration method has been improved by incorporating theorientation of the frames a regularization term to identify the pressure field [132].Once the force field is determined, it is possible to modify the statistical propertiesof the fitted model based on the amount of pressure and increase the quality of thespeckle tracking. Patient motion, organ deformation, and twisted trajectories areother sources of error that needs to be studied in future. A general solution to allof the errors from other sources of decorelation is first to detect them and if possi-ble correct for the effect, otherwise discard the corresponding patch in the 6-DoFtransform estimation.Another parameter that affects the overall accuracy of the 6-DoF transformestimation is the patch size. In the selection of the patch sizes, there is a tradeoff between the quality of the model fitting or denoising algorithm and the spatialresolution of the transform estimation. Larger patches give more robust parameterestimation but also impose higher computational cost. The uncertainty of out-of-plane estimation will increase by decreasing the patch size. However, the bestpatch size is dependent on the speckle size, which is related to the tissue type andultrasound frequency. 
One reason not to use very large patches, especially in theaxial direction, is to avoid violating the assumption of the stationarity of the beamwithin the patch. One approach is to change the patch size based on the spatiallocation and the scatterers distribution. Another approach is a multi-resolution one.Starting from a larger windows and decrease the patch size based on the uncertaintyof the displacement estimation. The challenges for such approach is defining asuitable measure for the reliability of the estimations and its computation cost.One other challenge that needs further improvement is the method of markingframes. Concatenating all of the subsequent frames in a freehand scan will notnecessarily result in an optimal reconstruction. In this work we used a thresholdto avoid the flat part of the correlation curve, however more effective selection ofinterleaving frames would possibly improve the result.There are some interesting areas of research, which are the expected extension98of the proposed methods in this thesis. For instance, more advanced statisticalmodels of the ultrasound amplitude that satisfy the requirements of the speckletracking such as closed-form second order statistics can be use instead of RiIG inthe proposed model-fitting approach. In the filter-based approach, we developedthe denoising algorithm based on NLM filters. However, other spatially local de-noising filters that consider the probabilistic model of the ultrasound noise can bestudied as well.Furthermore, structural features of the ultrasound images such as, the continu-ity of the boundaries, specifically in the plane perpendicular to the motion field, andthe in-plane statistical characteristics of the regions can be considered to decreasethe drift of speckle tracking and improve the overall 3D reconstruction accuracy.99Bibliography[1] J. G. Abbott and F. Thurstone. Acoustic speckle: Theory and experimentalanalysis. Ultrasonic Imaging, 1(4):303–324, 1979. → pages[2] J. M. Abeysekera and R. Rohling. Alignment and calibration of dualultrasound transducers using a wedge phantom. Ultrasound in Medicineand Biology, 37(2):271–279, 2011. → pages[3] J. M. Abeysekera, M. Najafi, R. Rohling, and S. E. Salcudean. Calibrationfor position tracking of swept motor 3-d ultrasound. Ultrasound inMedicine and Biology, 40(6):1356–1371, 2014. → pages[4] N. Afsham, K. Chan, L. Pan, S. Tang, and R. N. Rohling. Alignment andcalibration of high frequency ultrasound (HFUS) and optical coherencetomography (OCT) 1D transducers using a dual wedge-tri step phantom. InSPIE Medical Imaging, volume 7964, pages 28/1–28/8, 2011. → pages[5] N. Afsham, M. Najafi, P. Abolmaesumi, and R. Rohling. 3D ocularultrasound using gaze tracking on the contralateral eye: a feasibility study.In Medical Image Computing and Computer-Assisted Intervention, pages65–72, 2011. → pages[6] N. Afsham, M. Najafi, P. Abolmaesumi, and R. Rohling. Out-of-planemotion estimation based on a Rician-Inverse Gaussian model of RFultrasound signals: speckle tracking without fully developed speckle. InSPIE Medical Imaging, volume 8320, pages 17/1–17/8, 2012. → pages[7] N. Afsham, M. Najafi, P. Abolmaesumi, and R. Rohling. A generalizedcorrelation-based model for out-of-plane motion estimation in freehandultrasound. IEEE Transactions on Medical Imaging, 33(1):186–199, 2013.→ pages[8] N. Afsham, S. Khallaghi, M. Najafi, L. Machan, S. Chang, L. Goldenberg,P. Black, R. Rohling, and P. Abolmaesumi. 
Filter-based speckle tracking100for freehand prostate biopsy: Theory, ex vivo and in vivo results. InInformation Processing in Computer-Assisted Interventions, pages256–265, 2014. → pages[9] R. Agrawal and Karmeshu. Ultrasonic backscattering in tissue:characterization through nakagami-generalized inverse gaussiandistribution. Computers in Biology and Medicine, 37(2):166–172, 2007. →pages[10] American Cancer Society. Cancer facts and figures 2013. Atlanta:American Cancer Society. → pages[11] H. A.-D. Ashab. Ultrasound guidance for epidural anesthesia. Master’sthesis, University of British Columbia, 2013. → pages[12] M. Baumann, P. Mozer, V. Daanen, and J. Troccaz. Prostate biopsytracking with deformation estimation. Medical Image Analysis, 16(3):562– 57, 2012. → pages[13] J. Bax, D. Cool, L. Gardi, K. Knight, D. Smith, J. Montreuil, S. Sherebrin,C. Romagnoli, and A. Fenster. Mechanically assisted 3D ultrasound guidedprostate biopsy system. Medical Physics, 35(12):5397–5410, 2008. →pages[14] L. G. Bouchet, S. L. Meeks, G. Goodchild, F. J. Bova, J. M. Buatti, andW. A. Friedman. Calibration of three-dimensional ultrasound images forimage-guided radiation therapy. Physics in Medicine and Biology, 46(2):559, 2001. → pages[15] A. Buades, B. Coll, and J.-M. Morel. A review of image denoisingalgorithms, with a new one. Multiscale Modeling & Simulation, 4(2):490–530, 2005. → pages[16] A. Buades, B. Coll, and J.-M. Morel. Image denoising methods. a newnonlocal principle. Siam Review, 52(1):113–147, 2010. → pages[17] C. B. Burckhardt. Laser speckle pattern–A narrowband noise model. TheBell System Technical Journal, 49:309–316, 1970. → pages[18] Canadian Cancer Society Advisory Committee on Cancer Statistics.Canadian cancer statistics 2013. Toronto, ON: Canadian Cancer Society.→ pages101[19] I. Ce´spedes, Y. Huang, J. Ophir, and S. Spratt. Methods for estimation ofsubsample time delays of digitized echo signals. Ultrasonic Imaging, 17(2):142–171, 1995. → pages[20] R. Chang, W. Wu, D. Chen, W. Chen, W. Shu, J. Lee, and L. Jeng. 3D USframe positioning using speckle decorrelation and image registration.Ultrasound in Medicine & biology, 29(6):801–812, 2003. → pages[21] J.-F. Chen, J. B. Fowlkes, P. L. Carson, and J. M. Rubin. Determination ofscan-plane motion using speckle decorrelation: Theoretical considerationsand initial test. International Journal of Imaging Systems and Technology,8(1):38–44, 1997. → pages[22] T. Chen, R. Ellis, and P. Abolmaesumi. Improvement of freehandultrasound calibration accuracy using the elevation beamwidth profile.Ultrasound in Medicine and Biology, 37(8):1314–1326, 2011. → pages[23] T. K. Chen, P. Abolmaesumi, D. R. Pichora, and R. E. Ellis. A system forultrasound-guided computer-assisted orthopaedic surgery. Computer AidedSurgery, 10(5):281, 2005. → pages[24] T. K. Chen, P. Abolmaesumi, A. D. Thurston, and R. E. Ellis. Automated3D freehand ultrasound calibration with real-time accuracy control. InMedical Image Computing and Computer-Assisted Intervention, pages899–906, 2006. → pages[25] T. K. Chen, A. D. Thurston, R. E. Ellis, and P. Abolmaesumi. A real-timefreehand ultrasound calibration system with automatic accuracy feedbackand control. Ultrasound in Medicine and Biology, 35(1):79–93, 2009. →pages[26] J. Conrath and C. Laporte. Towards improving the accuracy of sensorlessfreehand 3d ultrasound by learning. In Machine Learning in MedicalImaging, pages 78–85, 2012. → pages[27] D. Cool, S. Sherebrin, J. Izawa, J. Chin, and A. Fenster. 
Design andevaluation of a 3D TRUS prostate biopsy system. Medical Physics, 35(10):4695–4707, 2008. → pages[28] P. Coupe´, P. Hellier, C. Kervrann, and C. Barillot. Bayesian non localmeans-based speckle filtering. In IEEE International Symposium onBiomedical Imaging: From Nano to Macro, pages 1291–1294, 2008. →pages102[29] P. Coupe´, P. Hellier, C. Kervrann, and C. Barillot. Nonlocal means-basedspeckle filtering for ultrasound images. IEEE Transactions on ImageProcessing, 18(10):2221–2229, 2009. → pages[30] C.-A. Deledalle, L. Denis, and F. Tupin. Iterative weighted maximumlikelihood denoising with probabilistic patch-based weights. IEEETransactions on Image Processing, 18(12):2661–2672, 2009. → pages[31] C.-A. Deledalle, V. Duval, and J. Salmon. Non-local methods withshape-adaptive patches (nlm-sap). Journal of Mathematical Imaging andVision, 43(2):103–120, 2012. → pages[32] F. Destrempes and G. Cloutier. A critical review and uniformizedrepresentation of statistical distributions modeling the ultrasound echoenvelope. Ultrasound in Medicine and Biology, 36(7):1037–1051, 2010.→ pages[33] C. W. DiBernardo, A. P. Schachat, and M. Fekrat, Sharon. OphthalmicUltrasound: A Diagnostic Atlas. Thieme Medical Publishers, 1st edition,1998. → pages[34] R. Dickinson and C. Hill. Measurement of soft tissue motion usingcorrelation between A-scans. Ultrasound in Medicine and Biology, 8(3):263–271, 1982. → pages[35] V. Dore´ and M. Cheriet. Robust nl-means filter with optimal pixel-wisesmoothing parameter for statistical image denoising. IEEE Transactions onSignal Processing, 57(5):1703–1716, 2009. → pages[36] D. B. Downey, D. A. Nicolle, M. F. Levin, and A. Fenster.Three-dimensional ultrasound imaging of the eye. Eye, 10(1):75–81, 1996.→ pages[37] E. S. Ebbini. Phase-coupled two-dimensional speckle tracking algorithm.IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control,53(5):972–990, 2006. → pages[38] D. W. Eggert, A. Lorusso, and R. B. Fisher. Estimating 3-d rigid bodytransformations: a comparison of four major algorithms. Machine Visionand Applications, 9(5-6):272–290, 1997. → pages[39] T. Eltoft. The Rician inverse Gaussian distribution: a new model fornon-Rayleigh signal amplitude statistics. IEEE Transactions on ImageProcessing, 14(11):1722–1735, 2005. → pages103[40] T. Eltoft. A new approach to modeling signal amplitude statistics by theK-distributions. In IEEE Signal Processing Symposium, pages 62–65,2006. → pages[41] T. Eltoft. Modeling the amplitude statistics of ultrasonic images. IEEETransactions on Medical Imaging, 25(2):229–240, 2006. → pages[42] A. Fenster, D. B. Downey, and H. N. Cardinal. Three-dimensionalultrasound imaging. Physics in Medicine and Biology, 46(5):R67–R99,2001. → pages[43] A. Fenster, J. Bax, H. Neshat, N. Kakani, and C. Romagnoli. 3Dultrasound imaging in image-guided intervention. In Advancements andBreakthroughs in Ultrasound Imaging, pages 1–25. InTech, 2013. → pages[44] H. C. Fledelius. Ultrasound in ophthalmology. Ultrasound in Medicine andBiology, 23(3):365–375. → pages[45] R. Franc¸ois, R. Fablet, and C. Barillot. Robust statistical registration of 3Dultrasound images using texture information. In IEEE InternationalConference on Image Processing, volume 1, pages I581–I584, 2003. →pages[46] G. Furness, M. Reilly, and S. Kuchi. An evaluation of ultrasound imagingfor identificationof lumbar intervertebral level. Anaesthesia, 57(3):277–280, 2002. → pages[47] A. H. Gee, N. E. Houghton, G. M. Treece, and R. W. Prager. 
A mechanicalinstrument for 3D ultrasound probe calibration. Ultrasound in Medicineand Biology, 31(4):505–518, 2005. → pages[48] A. H. Gee, R. James Housden, P. Hassenpflug, G. M. Treece, and R. W.Prager. Sensorless freehand 3D ultrasound in real tissue: Speckledecorrelation without fully developed speckle. Medical Image Analysis, 10(2):137–149, 2006. → pages[49] B. J. Geiman, L. N. Bohs, M. E. Anderson, S. M. Breit, and G. E. Trahey.A novel interpolation strategy for estimating subsample speckle motion.Physics in Medicine and Biology, 45(6):1541, 2000. → pages[50] G. Giunta. Fine estimators of two-dimensional parameters and applicationto spatial shift estimation. IEEE Transactions on Signal Processing, 47(12):3201–3207, 1999. → pages104[51] J. W. Goodman. Statistical properties of laser speckle patterns. In LaserSpeckle and Related Phenomena, pages 9–75. Springer Berlin, 1975. →pages[52] J. W. Goodman. Some fundamental properties of speckle. Journal of theOptical Society of America, 66:1145–1150, 1976. → pages[53] T. Grau, E. Bartusseck, R. Conradi, E. Martin, and J. Motsch. Ultrasoundimaging improves learning curves in obstetric epidural anesthesia: apreliminary study. Canadian Journal of Anesthesia, 50(10):1047–1050,2003. → pages[54] M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup. Efficient subpixelimage registration algorithms. Optics Letters, 33(2):156–158, 2008. →pages[55] P. Hassenpflug, R. W. Prager, G. M. Treece, and A. H. Gee. Speckleclassification for sensorless freehand 3D ultrasound. Ultrasound inMedicine and Biology, 31(11):1499–1508, 2005. → pages[56] I. Hein and W. D. O’Brien. Current time-domain methods for assessingtissue motion by analysis from reflected ultrasound echoes: A review.IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control,40(2):84–102, 1993. → pages[57] R. Housden, A. Gee, G. Treece, and R. Prager. Subsample interpolationstrategies for sensorless freehand 3D ultrasound. Ultrasound in Medicineand Biology, 32(12):1897–1904, 2006. → pages[58] R. J. Housden, A. H. Gee, G. M. Treece, and R. W. Prager. Sensorlessreconstruction of unconstrained freehand 3D ultrasound data. Ultrasoundin Medicine and Biology, 33(3):408–419, 2007. → pages[59] R. J. Housden, A. H. Gee, R. W. Prager, and G. M. Treece. Rotationalmotion in sensorless freehand three-dimensional ultrasound. Ultrasonics,48(5):412–422, 2008. → pages[60] R. J. Housden, G. M. Treece, A. H. Gee, and R. W. Prager. Calibration ofan orientation sensor for freehand 3D ultrasound and its use in a hybridacquisition system. Biomedical Engineering Online, 7(5):1–13, 2008. →pages105[61] P.-W. Hsu, R. W. Prager, A. H. Gee, and G. M. Treece. Freehand 3Dultrasound calibration: A review. In Advanced Imaging in Biology andMedicine, pages 47–84. 2009. → pages[62] G. Jacovitti and G. Scarano. Discrete time techniques for time delayestimation. IEEE Transactions on Signal Processing, 41(2):525–533, 1993.→ pages[63] A. Jeffrey and D. Zwillinger. Table of integrals, series, and products.Academic Press, 2007. → pages[64] J. P. S. G. Jr, P. T. Finger, and R. B. Rosen. Dynamic OphthalmicUltrasonography: A Video Atlas for Ophthalmologists and ImagingTechnicians. Lippincott Williams and Wilkins, 1st edition, 2009. → pages[65] F. Kallel, M. Bertrand, and J. Meunier. Speckle motion artifact under tissuerotation. IEEE Transactions on Ultrasonics, Ferroelectrics, and FrequencyControl, 41(1):105–122, 1994. → pages[66] C. Kervrann, J. Boulanger, and P. Coupe´. 
Bayesian non-local means filter,image redundancy and adaptive dictionaries for noise removal. In ScaleSpace and Variational Methods in Computer Vision, pages 520–532, 2007.→ pages[67] M. A. Q. Khalida Laghari. Role of B-scan ultrasonography in pre-operativecataract patients. International Journal of Health Sciences, 4(1):34–42,2010. → pages[68] S. Khallaghi, S. Nouranian, S. Sojoudi, H. A. Ashab, L. Machan, S. Chang,P. Black, M. Gleave, L. Goldenberg, and P. Abolmaesumi. Motion anddeformation compensation for freehand prostate biopsies. In SPIE MedicalImaging, volume 9036, pages 20/1–20/8, 2014. → pages[69] B. A. Kilker, J. M. Holst, and B. Hoffmann. Bedside ocular ultrasound inthe emergency department. European Journal of Emergency Medicine,2013. → pages[70] E. Konofagou and J. Ophir. A new elastographic method for estimation andimaging of lateral displacements, lateral strains, corrected axial strains andpoissons ratios in tissues. Ultrasound in Medicine and Biology, 24(8):1183–1199, 1998. → pages[71] A. A. Kurzhanskiy and P. Varaiya. Ellipsoidal toolbox. Technical report,University of California, Berkeley, 2006. → pages106[72] A. Lang, P. Mousavi, G. Fichtinger, and P. Abolmaesumi. Fusion ofelectromagnetic tracking with speckle-tracked 3D freehand ultrasoundusing an unscented kalman filter. In SPIE Medical Imaging, volume 72651,pages A/1–A/8, 2009. → pages[73] A. Lang, P. Mousavi, S. Gill, G. Fichtinger, and P. Abolmaesumi.Multi-modal registration of speckle-tracked freehand 3D ultrasound to CTin the lumbar spine. Medical Image Analysis, 16(3):675–686, 2012. →pages[74] C. Laporte and T. Arbel. Combinatorial and probabilistic fusion of noisycorrelation measurements for untracked freehand 3D ultrasound. IEEETransactions on Medical Imaging, 27(7):984–994, 2008. → pages[75] C. Laporte and T. Arbel. Learning to estimate out-of-plane motion inultrasound imagery of real tissue. Medical Image Analysis, 15(2):202–213,2011. → pages[76] F. Lindseth, T. Langø, T. Selbekk, R. Hansen, I. Reinertsen, C. Askeland,O. Solheim, G. Unsga˚rd, R. Ma˚rvik, and T. A. N. Hernes. Ultrasound-basedguidance and therapy. In Advancements and Breakthroughs in UltrasoundImaging, pages 27–82. InTech, 2013. → pages[77] A. D. MacDonald. Properties of the confluent hypergeometric function.1948. → pages[78] E. Mace´, G. Montaldo, I. Cohen, M. Baulac, M. Fink, and M. Tanter.Functional ultrasound imaging of the brain. Nature Methods, 8(8):662–664, 2011. → pages[79] S. S. Mahdavi. Prostate segmentation for medical interventions. PhDthesis, University of British Columbia, 2012. → pages[80] P. Marhofer and V. W. Chan. Ultrasound-guided regional anesthesia:current concepts and future trends. Anesthesia and Analgesia, 104(5):1265–1269, 2007. → pages[81] R. Martı´, J. Martı´, J. Freixenet, R. Zwiggelaar, J. Vilanova, and J. Barcelo´.Optimally discriminant moments for speckle detection in real B-scanimages. Ultrasonics, 48(3):169–181, 2008. → pages[82] M. M. McCormick and T. Varghese. An approach to unbiased subsampleinterpolation for motion tracking. Ultrasonic Imaging, 35(2):76–89, 2013.→ pages107[83] L. Mercier, T. Lango, F. Lindseth, and L. Collins. A review of calibrationtechniques for freehand 3-d ultrasound systems. Ultrasound in Medicineand Biology, 31(2):143–165, 2005. → pages[84] D. Middleton. An Introduction to Statistical Communication Theory.Wiley-IEEE Press, 1996. → pages[85] R. Mongeon. Apparatus and method for speckle tracking, 1978. US Patent4,123,651. → pages[86] M. Najafi, N. Afsham, P. Abolmaesumi, and R. 
Rohling. A closed-formdifferential formulation for ultrasound spatial calibration. In InformationProcessing in Computer-Assisted Interventions, pages 44–53, 2012. →pages[87] M. Najafi, N. Afsham, P. Abolmaesumi, and R. Rohling. Single wallclosed-form differential ultrasound calibration. In SPIE Medical Imaging,volume 83162, pages A/1–A/8, 2012. → pages[88] J. Ng, R. Prager, N. Kingsbury, G. Treece, and A. Gee. Modelingultrasound imaging as a linear, shift-variant system. IEEE Transactions onUltrasonics, Ferroelectrics and Frequency Control, 53(3):549 –563, 2006.→ pages[89] D. Ni, Y. Qu, X. Yang, Y. P. Chui, T.-T. Wong, S. S. Ho, and P. A. Heng.Volumetric ultrasound panorama based on 3D SIFT. In Medical ImageComputing and Computer-Assisted Intervention, pages 52–60, 2008. →pages[90] M. O’Donnell, A. Skovoroda, B. Shapo, and S. Emelianov. Internaldisplacement and strain imaging using ultrasonic speckle tracking. IEEETransactions on Ultrasonics, Ferroelectrics, and Frequency Control, 41(3):314–325, 1994. → pages[91] J. Ophir, I. Cespedes, B. Garra, H. Ponnekanti, Y. Huang, and N. Maklad.Elastography: ultrasonic imaging of tissue strain and elastic modulus invivo. European Journal of Ultrasound, 3(1):49–70, 1996. → pages[92] M. Peikari, T. Chen, A. Lasso, T. Heffter, G. Fichtinger, and E. Burdette.Characterization of ultrasound elevation beamwidth artifacts for prostatebrachytherapy needle insertion. Medical Physics, 39(1):246, 2012. →pages108[93] A. Pesavento, C. Perrey, M. Krueger, and H. Ermert. A time-efficient andaccurate strain estimation concept for ultrasonic elastography usingiterative phase zero estimation. IEEE Transactions on Ultrasonics,Ferroelectrics, and Frequency Control, 46(5):1057–1067, 1999. → pages[94] J. Polzehl and V. Spokoiny. Propagation-separation approach for locallikelihood estimation. Probability Theory and Related Fields, 135(3):335–362, 2006. → pages[95] T. C. Poon and R. N. Rohling. Comparison of calibration methods forspatial tracking of a 3D ultrasound probe. Ultrasound in Medicine andBiology, 31(8):1095–1108, 2005. → pages[96] T. C. Poon and R. N. Rohling. Three-dimensional extended field-of-viewultrasound. Ultrasound in Medicine and Biology, 32(3):357–369, 2006. →pages[97] R. W. Prager, R. N. Rohling, A. H. Gee, and L. Berman. Rapid calibrationfor 3-D freehand ultrasound. Ultrasound in Medicine and Biology, 24(6):855–869, 1998. → pages[98] R. W. Prager, A. H. Gee, G. M. Treece, and L. H. Berman. Analysis ofspeckle in ultrasound images using fractional order statistics and thehomodyned K-distribution. Ultrasonics, 40(1):133–137, 2002. → pages[99] R. W. Prager, A. H. Gee, G. M. Treece, C. J. C. Cash, and L. H. Berman.Sensorless freehand 3D ultrasound using regression of the echo intensity.Ultrasound in Medicine and Biology, 29(3):437–446, 2003. → pages[100] H. Rafii-Tari. Panorama ultrasound for navigation and guidance of epiduralanesthesia. Master’s thesis, University of British Columbia, 2011. → pages[101] A. Rasoulian, J. Lohser, M. Najafi, H. Rafii-Tari, D. Tran, A. A. Kamani,V. A. Lessoway, P. Abolmaesumi, and R. N. Rohling. Utility ofprepuncture ultrasound for localization of the thoracic epidural space.Canadian Journal of Anesthesia, 58(9):815–823, 2011. → pages[102] H. Rivaz, E. M. Boctor, and G. Fichtinger. Ultrasound speckle detectionusing low order moments. In IEEE Ultrasonics Symposium, pages2092–2095, 2006. → pages[103] H. Rivaz, E. Boctor, and G. Fichtinger. A robust meshing and calibrationapproach for sensorless freehand 3D ultrasound. 
In SPIE Medical Imaging2007, volume 6513, pages 18/1–18/8, 2007. → pages109[104] H. Rivaz, R. Zellars, G. Hager, G. Fichtinger, and E. Boctor. Beam steeringapproach for speckle characterization and out-of-plane motion estimationin real tissue. In IEEE Ultrasonics Symposium, pages 781–784, 2007. →pages[105] H. Rivaz, H. J. Kang, P. J. Stolka, R. Zellars, F. Wacker, G. Hager, andE. Boctor. Novel reconstruction and feature exploitation techniques forsensorless freehand 3D ultrasound. In SPIE Medical Imaging, volume7629, pages 1D/1–1D/8, 2010. → pages[106] R. Rohling, A. Gee, and L. Berman. Three-dimensional spatialcompounding of ultrasound images. Medical Image Analysis, 1(3):177–193, 1997. → pages[107] J. Salmon. On two parameters for denoising with non-local means. IEEESignal Processing Letters, 17(3):269–272, 2010. → pages[108] M. L. Schott, J. E. Pierog, and S. R. Williams. Pitfalls in the use of ocularultrasound for evaluation of acute vision loss. The Journal of EmergencyMedicine, 44(6):1136–1139, 2013. → pages[109] S. K. Sengijpta. Fundamentals of statistical signal processing: Estimationtheory. Technometrics, 37(4):465–466, 1995. → pages[110] Z. Shinar, L. Chan, and M. Orlinsky. Use of ocular ultrasound for theevaluation of retinal detachment. The Journal of Emergency Medicine, 40(1):53–57, 2011. → pages[111] W. L. Smith and A. Fenster. Analysis of an image-based transducertracking system for 3D ultrasound. In SPIE Medical Imaging, volume5035, pages 154–165, 2003. → pages[112] O. Solberg, T. Langø, G. Tangen, R. Ma˚rvik, B. Ystgaard, A. Rethy, andT. Hernes. Navigated ultrasound in laparoscopic surgery. MinimallyInvasive Therapy & Allied Technologies, 18(1):36–53, 2009. → pages[113] C. M. Stein. Estimation of the mean of a multivariate normal distribution.The Annals of Statistics, pages 1135–1151, 1981. → pages[114] C. Sumi. Fine elasticity imaging utilizing the iterative RF-echo phasematching method. IEEE Transactions on Ultrasonics, Ferroelectrics, andFrequency Control, 46(1):158–166, 1999. → pages110[115] T. Taxt. Three-dimensional blind deconvolution of ultrasound images.IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control,48(4):867–871, 2001. → pages[116] T. Teuber and A. Lang. A new similarity measure for nonlocal filtering inthe presence of multiplicative noise. Computational Statistics & DataAnalysis, 56(12):3821–3842, 2012. → pages[117] G. M. Treece, R. W. Prager, A. H. Gee, and L. Berman. Correction ofprobe pressure artifacts in freehand 3D ultrasound. Medical ImageAnalysis, 6(3):199–214, 2002. → pages[118] T. Tuthill, J. Kru¨cker, J. Fowlkes, and P. Carson. Automatedthree-dimensional US frame positioning computed from elevational speckledecorrelation. Journal of Radiology, 209(2):575–582, 1998. → pages[119] D. Van De Ville and M. Kocher. SURE-based non-local means. IEEESignal Processing Letters, 16(11):973–976, 2009. → pages[120] F. Viola and W. F. Walker. A spline-based algorithm for continuoustime-delay estimation using sampled data. IEEE Transactions onUltrasonics, Ferroelectrics, and Frequency Control, 52(1):80–93, 2005. →pages[121] A. Viswanathan. Immediate ultrasound calibration from three poses andminimal image processing. In Medical Image Computing andComputer-Assisted Intervention, volume 3217, pages 446–454, 2004. →pages[122] R. F. Wagner, S. W. Smith, J. M. Sandrik, and H. Lopez. Statistics ofspeckle in ultrasound B-scans. IEEE Transactions on Sonics andUltrasonics, 30(3):156–163, 1983. → pages[123] R. F. Wagner, M. F. Insana, and D. G. Brown. 
Statistical properties ofradio-frequency and envelope-detected signals with applications to medicalultrasound. Journal of the Optical Society of America A, 4(5):910–922,1987. → pages[124] F. O. Walker, M. S. Cartwright, E. R. Wiesler, and J. Caress. Ultrasound ofnerve and muscle. Clinical Neurophysiology, 115(3):495–507, 2004. →pages111[125] I. Wegner, D. Teber, B. Hadaschik, S. Pahernik, M. Hohenfellner, H.-P.Meinzer, and J. Huber. Pitfalls of electromagnetic tracking in clinicalroutine using multiple or adjacent sensors. The International Journal ofMedical Robotics and Computer Assisted Surgery, 9(3):268–273, 2013. →pages[126] N. Wiest-Daessle´, S. Prima, P. Coupe´, S. P. Morrissey, and C. Barillot.Rician noise removal by non-local means filtering for low signal-to-noiseratio mri: applications to DT-MRI. In Medical Image Computing andComputer-Assisted Intervention, pages 171–179, 2008. → pages[127] S. Xu, J. Kruecker, B. Turkbey, N. Glossop, A. K. Singh, P. Choyke,P. Pinto, and B. J. Wood. Real-time MRI-TRUS fusion for guidance oftargeted prostate biopsies. Computer Aided Surgery, 13(5):255–264, 2008.→ pages[128] R. Yoonessi, A. Hussain, and T. B. Jang. Bedside ocular ultrasound for thedetection of retinal detachment in the emergency department. AcademicEmergency Medicine, 17(9):913–917, 2010. → pages[129] R. Zahiri-Azar and S. E. Salcudean. Motion estimation in ultrasoundimages using time domain cross correlation with prior estimates. IEEETransactions on Biomedical Engineering, 53(10):1990–2000. → pages[130] R. Zahiri-Azar, O. Goksel, T. S. Yao, E. Dehghan, J. Yan, and S. E.Salcudean. Methods for the estimation of sub-sample motion of digitizedultrasound echo signals in two dimensions. In IEEE Engineering inMedicine and Biology Society, pages 5581–5584, 2008. → pages[131] R. J. Zemp, C. K. Abbey, and M. F. Insana. Linear system models forultrasonic imaging: Application to signal statistics. Ultrasonics,Ferroelectrics and Frequency Control, IEEE Transactions on, 50(6):642–654, 2003. → pages[132] C. S. zu Berge, A. Kapoor, and N. Navab. Orientation-driven ultrasoundcompounding using uncertainty information. In Information Processing inComputer-Assisted Interventions, pages 236–245, 2014. → pages112
