UBC Theses and Dissertations


A comparative evaluation of two Synthetic Transmit Aperture with Virtual Source beamforming methods in… Ma, Manyou 2015

A Comparative Evaluation of Two Synthetic Transmit Aperture with Virtual Source Beamforming Methods in Biomedical Ultrasound

by

Manyou Ma

B.Eng. (Hons), McGill University, 2014

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

December 2015

© Manyou Ma 2015

Abstract

This thesis studies the Synthetic Transmit Aperture with Virtual Source (STA-VS) beamforming method, an emerging technique in biomedical ultrasound. It promises better imaging quality than conventional beamforming, at the same imaging speed. Several specific realizations of the STA-VS method have been proposed in the literature, and the topic is an active research area.

The first part of the thesis examines two realizations of the STA-VS method, namely the Synthetic Aperture Sequential Beamforming (SASB) method and the bi-directional pixel-based focusing (BiPBF) method. Studies are performed with both ultrasound simulation software and a commercial ultrasound scanner's research interface. The studies show that the STA-VS methods can improve the spatial and contrast resolution of ultrasound imaging. Of the two STA-VS methods, the two-stage implementation of SASB has the lower complexity. However, compared to other beamformers, SASB is more susceptible to speed-of-sound (SOS) errors in the beamforming calculations. The second part of the thesis proposes an SOS estimation and correction algorithm. The SOS estimation part of the algorithm is based on second-order polynomial fitting to point scatterers in pre-beamformed data, and is specifically applicable to the two-stage realization of the SASB method. The SOS correction part of the algorithm is incorporated into the second-stage beamforming of the SASB method and is shown to improve the spatial resolution of the beamformed image. This algorithm is also adapted to, and tested on, vertical two-layer structures with two distinct SOS's, through simulations and measurements on an in-house phantom. The premise is that two layers can simulate a fat/muscle or fat/organ anatomy. Spatial resolution is shown to improve with the SOS correction. Future work will investigate whether this two-layer SOS estimation and correction algorithm similarly improves imaging quality in vivo, such as in abdominal ultrasound examinations of overweight patients.

Preface

I hereby declare that I am the author of this thesis. This thesis is an original, unpublished work under the supervision of Professor Robert Rohling and Professor Lutz Lampe.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Acronyms
Acknowledgments
1 Introduction and Background
  1.1 Biomedical ultrasound imaging system
    1.1.1 Overview of the ultrasound hardware
    1.1.2 Overview of the ultrasound imaging process
  1.2 Conventional beamforming
    1.2.1 Ultrasound transducer
    1.2.2 Focused transmission
    1.2.3 Dynamic receive focusing
    1.2.4 Parameters that affect beamforming performance
  1.3 Evaluation metrics
    1.3.1 Spatial resolution
    1.3.2 Contrast resolution
  1.4 Motivation of the thesis
  1.5 Structure of the thesis
2 A Comparison of Two Synthetic Transmit Aperture with Virtual Source Methods
  2.1 Introduction and previous research
  2.2 Beamforming equations
    2.2.1 Transducer coordinates
    2.2.2 Conventional DAS beamforming methods
    2.2.3 STA-VS methods
  2.3 System complexity analysis
    2.3.1 System specifications
    2.3.2 Implementation using a single processor
    2.3.3 Implementation using FPGA and System on Chip (SoC) solutions
  2.4 Performance comparison
    2.4.1 PSF simulation analysis
    2.4.2 SNR simulation analysis
    2.4.3 Phantom measurements study
    2.4.4 Conclusions
3 Speed-of-sound Estimation and Correction Using the Single Receive-focused Delay-and-Summed Image
  3.1 Introduction
    3.1.1 Motivation
    3.1.2 Previous research
    3.1.3 Relationship to the SASB method
    3.1.4 SOS estimation and correction in a two-layer model
  3.2 Theory and methods
    3.2.1 A review of the SOS estimation from per-channel RF data
    3.2.2 SOS estimation using the srDAS image
    3.2.3 Adaptation of the method for two-layer SOS estimation
    3.2.4 Parabola detection in noisy images
    3.2.5 Two-layer phantom generation in Field II
    3.2.6 Beamforming correction in a two-layer phantom using the SOS estimates
    3.2.7 Simulation setup
    3.2.8 Experimental setup
  3.3 Results
    3.3.1 Simulation results
    3.3.2 Experimental results
  3.4 Conclusions
4 Conclusions and Future Work
  4.1 Thesis contributions
  4.2 Future work
Bibliography
Appendices
A Beamforming Parameters
B Point Spread Function Phantom Simulation Results
C Contrast Phantom Simulation Results
D Phantom Measurement Results
E Derivation of the Two-layer Compensation Model
F Estimation of Layer Separation Based on the SOS Profile
G Beamformed Image from the In-house Phantom Measurements

List of Tables

2.1 System parameters using the Field II and SonixTouch setup
2.2 Number of FLOPs required using LUT for delay calculation
2.3 Size of LUTs required for delay calculations using the 3 beamforming methods with 10 selectable focal depths
2.4 Number of FLOPs for delay calculation without using LUTs
2.5 FPGA resources for each component in a beamformer (adapted from [1])
2.6 FPGA resources required for different beamformers
2.7 FPGA resources for the different methods
3.1 Chemical composition of the two layers of the agar-glycerol phantom
3.2 One-layer simulation result of the SOS estimation and compensation when the amplitude of the point scatterers is 20 times the standard deviation of the noise amplitude
3.3 One-layer simulation result of the SOS estimation and compensation when the amplitude of the point scatterers is 10 times the standard deviation of the noise amplitude
3.4 Two-layer simulation result of SOS estimation accuracy with the layer separation at 40 mm
3.5 Two-layer simulation result of SOS estimation accuracy with the layer separation at 50 mm
3.6 Two-layer simulation result of SOS estimation accuracy with the layer separation at 60 mm
3.7 Two-layer simulation result of SOS estimation accuracy with the layer separation at 70 mm
3.8 Two-layer simulation result of SOS estimation accuracy with the layer separation at 80 mm
3.9 Improvement in the lateral resolution after using the SOS correction in the two-layer simulated phantom with the layer separation at 40 mm
3.10 Improvement in the lateral resolution after using the SOS correction in the two-layer simulated phantom with the layer separation at 50 mm
3.11 Improvement in the lateral resolution after using the SOS correction in the two-layer simulated phantom with the layer separation at 60 mm
3.12 Improvement in the lateral resolution after using the SOS correction in the two-layer simulated phantom with the layer separation at 70 mm
3.13 Improvement in the lateral resolution after using the SOS correction in the two-layer simulated phantom with the layer separation at 80 mm
3.14 Results of SOS estimation in the one-layer CIRS phantom
3.15 Accuracy of two-layer SOS estimation in the in-house phantom, with a 33 mm thick first layer
3.16 Accuracy of two-layer SOS estimation in the in-house phantom, with a 53 mm thick first layer
3.17 Comparison between the lateral resolution of the one-layer DRF method, the one-layer SASB method, and the two-layer SASB method in the in-house phantom, with a 33 mm thick first layer
3.18 Comparison between the lateral resolution of the one-layer DRF method, the one-layer SASB method, and the two-layer SASB method in the in-house phantom, with a 53 mm thick first layer
A.1 Beamforming parameters for the DRF, SASB, and BiPBF methods
B.1 Point spread function phantom simulation results of the three methods with the transmit focal depth 20 mm
B.2 Point spread function phantom simulation results of the three methods with the transmit focal depth 50 mm
B.3 Point spread function phantom simulation results of the three methods with the transmit focal depth 70 mm
B.4 Point spread function phantom simulation results of the three methods with the receive beamforming f-number 1
B.5 Point spread function phantom simulation results of the three methods with the receive beamforming f-number 1.5
B.6 Point spread function phantom simulation results of the three methods with the transmit beamforming f-number 0.5
B.7 Point spread function phantom simulation results of the three methods with the transmit beamforming f-number 1
B.8 Point spread function phantom simulation results of the three methods with the speed of sound assumption 1450 m/s
B.9 Point spread function phantom simulation results of the three methods with the speed of sound assumption 1630 m/s
C.1 Contrast phantom simulation results of the three methods with the transmit focal depth 20 mm
C.2 Contrast phantom simulation results of the three methods with the transmit focal depth 40 mm
C.3 Contrast phantom simulation results of the three methods with the transmit focal depth 70 mm
C.4 Contrast phantom simulation results of the three methods with the receive beamforming f-number 1
C.5 Contrast phantom simulation results of the three methods with the receive beamforming f-number 1.5
C.6 Contrast phantom simulation results of the three methods with the transmit beamforming f-number 0.5
C.7 Contrast phantom simulation results of the three methods with the transmit beamforming f-number 1
D.1 Phantom measurement results of the three methods with the transmit focal depth 20 mm
D.2 Phantom measurement results of the three methods with the transmit focal depth 50 mm
D.3 Phantom measurement results of the three methods with the transmit focal depth 70 mm
D.4 Phantom measurement results of the three methods with the transmit beamforming f-number 0.5
D.5 Phantom measurement results of the three methods with the transmit beamforming f-number 1
D.6 Phantom measurement results of the three methods with the receive beamforming f-number 1
D.7 Phantom measurement results of the three methods with the receive beamforming f-number 1.5
D.8 Phantom measurement results of the three methods with the speed of sound assumption 1490 m/s
D.9 Phantom measurement results of the three methods with the speed of sound assumption 1540 m/s
D.10 Phantom measurement results of the three methods with the speed of sound assumption 1620 m/s
D.11 Phantom measurement results of the three methods with the speed of sound assumption 1670 m/s
F.1 Accuracy of layer separation and SOS estimation using the automatic layer separation detection

List of Figures

1.1 Overall breakdown of the hardware of an ultrasound machine
1.2 Demonstration of a simplified ultrasound transducer
1.3 Beam intensity of a focused transmission
1.4 Beam profile of a focused transmission
1.5 Illustration of transmit beamforming in a focused transmission
1.6 Illustration of receive beamforming after a focused transmission
2.1 Transmission and reception diagram of STA-VS methods
2.2 TOF calculation of srDAS and DRF
2.3 TOF calculation of the BiPBF method
2.4 TOF calculation of the SASB method
2.5 Data path for processing two channel data when implemented using the Xilinx FPGA architecture (adapted from [1])
2.6 Lateral resolutions using PSF simulation
2.7 Lateral resolutions of the PSF phantom using the SASB, BiPBF, and DRF methods
2.8 Lateral resolution of the PSF phantom using different receive beamforming f-numbers
2.9 Lateral resolution of the PSF phantom using different SOSs in beamforming
2.10 SNR at different depths using the three beamforming methods
2.11 SNR at different depths using different receive beamforming f-numbers
2.12 Lateral resolution at different depths using the CIRS phantom
2.13 The performance of the three methods with the transmit focal depth 20 mm
2.14 Lateral resolution using different receive beamforming f-numbers with the CIRS phantom
2.15 Lateral resolution using different focal depths with the CIRS phantom
3.1 TOF calculation using the per-channel RF image
3.2 An example of per-channel RF data, gathered after transmission focused at element 65
3.3 TOF calculation using the srDAS image
3.4 An example of an srDAS image
3.5 Evolution of the errors in SOS estimation with different underlying SOS's
3.6 Difference between the arrival time profiles using the one-layer and two-layer models
3.7 SOS estimation in a two-layer phantom with and without the two-layer compensation method
3.8 Flow diagram of the modified SOS detection algorithm
3.9 Propagation of sound in a two-layer phantom
3.10 Demonstration of the two-layer in-house phantom, viewed from the front
3.11 Demonstration of the two-layer in-house phantom, viewed from the side
3.12 Experimental setup of the measurement of the two-layer in-house phantom
3.13 Two-layer simulation result of the SOS profile estimation with the layer separation at 40 mm
3.14 Two-layer simulation result of the SOS profile estimation with the layer separation at 50 mm
3.15 Two-layer simulation result of the SOS profile estimation with the layer separation at 60 mm
3.16 Two-layer simulation result of the SOS profile estimation with the layer separation at 70 mm
3.17 Two-layer simulation result of the SOS profile estimation with the layer separation at 80 mm
3.18 srDAS image of the one-layer CIRS phantom
3.19 srDAS image of the in-house phantom with a 33 mm thick first layer
3.20 srDAS image of the in-house phantom with a 53 mm thick first layer
3.21 SOS estimation in the two-layer in-house phantom, layer separation at 33 mm
3.22 SOS estimation in the two-layer in-house phantom, layer separation at 53 mm
C.1 Contrast phantom simulation results of the three methods with the transmit focal depth 20 mm
C.2 Contrast phantom simulation results of the three methods with the transmit focal depth 40 mm
C.3 Contrast phantom simulation results of the three methods with the transmit focal depth 70 mm
C.4 Contrast phantom simulation results of the three methods with the receive beamforming f-number 1
C.5 Contrast phantom simulation results of the three methods with the receive beamforming f-number 1.5
C.6 Contrast phantom simulation results of the three methods with the transmit beamforming f-number 0.5
C.7 Contrast phantom simulation results of the three methods with the transmit beamforming f-number 1
D.1 Phantom measurement results of the three methods with the transmit focal depth 20 mm
D.2 Phantom measurement results of the three methods with the transmit focal depth 50 mm
D.3 Phantom measurement results of the three methods with the transmit focal depth 70 mm
D.4 Phantom measurement results of the three methods with the transmit beamforming f-number 0.5
D.5 Phantom measurement results of the three methods with the transmit beamforming f-number 1
D.6 Phantom measurement results of the three methods with the receive beamforming f-number 1
D.7 Phantom measurement results of the three methods with the receive beamforming f-number 1.5
D.8 Phantom measurement results of the three methods with the speed of sound assumption 1490 m/s
D.9 Phantom measurement results of the three methods with the speed of sound assumption 1540 m/s
D.10 Phantom measurement results of the three methods with the speed of sound assumption 1620 m/s
D.11 Phantom measurement results of the three methods with the speed of sound assumption 1670 m/s
E.1 TOF diagram of the simplified two-layer model
G.1 Beamformed images from the two-layer in-house phantom using the one-layer DRF method, the one-layer SASB method, and the two-layer SASB method, with a 33 mm thick first layer
G.2 Beamformed images from the two-layer in-house phantom using the one-layer DRF method, the one-layer SASB method, and the two-layer SASB method, with a 53 mm thick first layer

List of Acronyms

ADC Analog-to-digital Converter
AFE Analog Front-end
BiPBF Bi-directional Pixel-based Focusing
BRAM Block Random-access Memory
CLB Configurable Logic Block
CT Computed Tomography
CIRS A tissue-mimicking and phantom manufacturing company
DAS Delay-and-sum
DRF Dynamic Receive Focusing
DSP Digital Signal Processing
FLOP Floating-point Operation
FPGA Field-programmable Gate Array
FWHM Full Width at Half Maximum
LUT Lookup Table
MATLAB A numerical computing software
MRI Magnetic Resonance Imaging
PSF Point Spread Function
RAM Random-access Memory
RF Radio Frequency
ROM Read-only Memory
SASB Synthetic Aperture Sequential Beamforming
SNR Signal-to-noise Ratio
SoC System on Chip
SOS Speed-of-sound
srDAS Single Receive-focused Delay-and-sum
STA-VS Synthetic Transmit Aperture with Virtual Source
TGC Time Gain Compensation
TOF Time of Flight

Acknowledgments

First and foremost, I would like to thank Dr. Robert Rohling and Dr. Lutz Lampe for taking me into the program and offering continuous support and invaluable expertise in completing this thesis. I learned immensely from them about ultrasound imaging and academic research during our discussions in meetings, and through their help in improving my work. I would also like to express my gratitude to everyone in the RCL and Data Communication labs for welcoming me into two large and diverse research groups, and for being friendly and helpful to me during my Master's degree.
I would especially like to thank Simon Dahonick and Julio Lobo for helping me set up the SonixDAQ and Texo SDK, Mohammad Honarvar and Jeff Abeysekera for offering me advice on making the phantoms, and Mehran Pesteie and Emran Anas for being inspiring officemates for the past two years. I also thank all the friends I made in Vancouver and at UBC, who kept me company, helped me with every aspect of my life, and made my past two years' experience in this new city fun and exciting. Finally, I want to express my deepest love and gratitude towards my parents, for their love and support over the past two years and throughout my life.

Chapter 1

Introduction and Background

Ultrasonography, or ultrasound imaging, is a subcutaneous imaging method with wide clinical applications. Unlike other imaging modalities such as magnetic resonance imaging (MRI) or computed tomography (CT), ultrasonography is inexpensive and free from ionizing radiation. However, compared to CT and MRI scanners, a conventional ultrasound scanner exhibits poor spatial resolution and contrast, which makes it difficult for the operator to observe small anatomical structures. The imaging quality can be improved by using more advanced ultrasound beamforming methods. Some of the state-of-the-art beamforming methods are plane wave imaging [2], synthetic aperture imaging [3], minimum variance beamforming [4], and image deconvolution [5]. The aim of this thesis is to survey a particular class of advanced beamforming methods, called Synthetic Transmit Aperture with Virtual Source (STA-VS).

This chapter gives an introduction to the conventional ultrasound imaging system and the beamforming process, which forms the basis for understanding the more advanced beamforming algorithms. It also discusses several important ultrasound beamforming parameters and the choice of evaluation metrics used in the thesis.
The motivation and general structure of the thesis are also introduced.

1.1 Biomedical ultrasound imaging system

1.1.1 Overview of the ultrasound hardware

The overall breakdown of the hardware architecture of a conventional ultrasound machine is shown in Figure 1.1. A conventional biomedical ultrasound system consists of a transducer probe (plus circuitry), an analogue front-end (AFE), a digital beamformer, and a backend image processing unit.

Figure 1.1: Overall breakdown of the hardware of an ultrasound machine

1.1.2 Overview of the ultrasound imaging process

After being gathered at the ultrasound probe, the analog signals are first amplified (subject to a depth-dependent scale to compensate for the attenuation effect in wave propagation) and digitized in the AFE. The digitized per-channel radio frequency (RF) data are then beamformed in the beamformer, where the detailed implementation depends on the specific beamforming algorithm chosen. The outputs of the beamformer (often called the RF signal) are further match-filtered, envelope detected, decimated, compressed in dynamic range, and displayed by the backend image processing unit.

1.2 Conventional beamforming

In a conventional biomedical ultrasound system, focused transmission and Dynamic Receive Focusing (DRF) are employed.

1.2.1 Ultrasound transducer

An ultrasound transducer is a housing of multiple transducer elements made of piezoelectric materials. When a voltage is applied to a piezoelectric element, the element vibrates and generates an acoustic wave; conversely, when an acoustic wave impinges on a piezoelectric element, a voltage difference is generated between the two ends of the element.
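This transmit/receive duality is the basis of pulse-echo imaging: the round-trip time of an echo, together with an assumed speed of sound, determines the depth at which the scanner displays a scatterer. A minimal sketch of that mapping (a toy illustration, not code from this thesis; the 1540 m/s figure is the conventional soft-tissue SOS assumption, and the 65 µs echo time is a made-up example):

```python
# Pulse-echo depth calculation: an echo received at time t has travelled
# to the scatterer and back, so depth = c * t / 2.
# The assumed speed of sound directly scales the reconstructed depth, so
# an SOS assumption error misplaces scatterers in the image.

C_ASSUMED = 1540.0  # m/s, conventional soft-tissue value


def echo_depth(arrival_time_s, c=C_ASSUMED):
    """Map a round-trip echo arrival time to an imaging depth in metres."""
    return c * arrival_time_s / 2.0


# An echo arriving 65 microseconds after transmission:
d = echo_depth(65e-6)                 # ~= 0.05 m, i.e. about 50 mm deep
# If the true SOS were 1450 m/s, the same echo actually came from:
d_true = echo_depth(65e-6, c=1450.0)  # ~= 47 mm: a ~3 mm placement error
```

This scaling is exactly why the SOS estimation and correction work in Chapter 3 matters for image quality.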
Therefore, each transducer element may act either as a transmitter or a receiver of acoustic waves, and is sometimes referred to as a 'channel'. (To avoid confusion, we will always use the term 'per-channel RF data' to describe the digitized version of the data gathered at each element of the probe. Note also that, in the literature, the acronyms DRF (Dynamic Receive Focusing) and DAS (Delay-And-Sum) have been used interchangeably, because most commercial ultrasound machines are digital and the beamforming used inside is DRF. There is, however, another possible realization of the DAS algorithm, mostly found in analog machines, where a static delay is chosen for an entire channel. Since both of these realizations are discussed in this thesis, we will use srDAS (single receive-focused delay-and-sum) to denote the second realization of DAS.) A typical commercial ultrasound transducer has multiple (around 100 to 200) transducer elements. In Figure 1.2, an ultrasound transducer with 10 transducer elements is shown. The larger rectangle at the border represents the transducer housing, while each smaller square with a number inside represents a transducer element. This schematic diagram is a simplified conceptual representation of an ultrasound transducer and will be used in the following sections of the thesis for demonstration purposes only. In both transmission and reception, only a subset of the available transducer elements is turned on (meaning that they generate an acoustic impulse during transmission or conduct electric current during reception), to acquire a desired beam pattern. If we treat the acoustic wave generated by each transducer element (or each infinitesimal segment of a transducer element) as a spherical wave, then most of the beamforming phenomena known from RF antenna design and optics hold.
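Under this spherical-wave view, the basic geometric quantity behind every beamforming calculation is the time of flight from an element to a field point. A minimal sketch (the 128-element array, 0.3 mm pitch, and 1540 m/s SOS are illustrative assumptions, not the thesis setup):

```python
import numpy as np

# Treating each element as a point source of spherical waves, the
# propagation delay from element i to a field point p is the Euclidean
# distance divided by the speed of sound.

C = 1540.0          # m/s, assumed speed of sound
N_ELEMENTS = 128
PITCH = 0.3e-3      # m, element spacing

# Element x-positions, centred on the array; elements lie at depth z = 0.
elem_x = (np.arange(N_ELEMENTS) - (N_ELEMENTS - 1) / 2.0) * PITCH


def tof(point_x, point_z, c=C):
    """One-way time of flight from every element to the point (x, z)."""
    return np.hypot(elem_x - point_x, point_z) / c


# Delays to a point 40 mm straight ahead of the array centre:
t = tof(0.0, 40e-3)
# The centre elements are closest, so their delay is the smallest.
```

Both the transmit delays of Section 1.2.2 and the receive delays of Section 1.2.3 are differences of such one-way times of flight.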
Following the naming conventions of those fields, the portion of the transducer where the elements are turned on is called the transmit or receive aperture.

Figure 1.2: Demonstration of a simplified ultrasound transducer

1.2.2 Focused transmission

In a focused transmission, the elements at the edge of the transmit aperture are fired earlier than those in the centre of the aperture. This is illustrated in Figure 1.5 with a 5-element aperture. In this transmission, focused at transducer element 3, each of the five elements is delayed by one of five transmit time delays t_tmt,1 through t_tmt,5, where t_tmt,3 is set larger than t_tmt,1 and t_tmt,5. As a result, the ultrasound impulses sent by the different transducer elements arrive at the chosen focal point simultaneously; this point is also referred to as the virtual source in the STA-VS methods. The spatial intensity of the transmitted field is strongest, and the beam profile of the transmitted beam is sharpest, near the focal point. As an example, the beam intensity recording of a focused emission, with a 65-element aperture and a focal length of 70 mm, is shown in Figure 1.3. The beam profile of the same transmission is shown in Figure 1.4, where the beam profile is obtained by highlighting the region of the field where the acoustic energy intensity is within 3 dB of the maximum intensity in the imaging field.

Figure 1.3: Beam intensity of a focused transmission

Figure 1.4: Beam profile of a focused transmission

1.2.3 Dynamic receive focusing

In DRF, after one focused transmission, only one axial imaging line, located along the transmit aperture centre, is recovered. If a scatterer exists on this axial line, it will backscatter a series of spherical impulses toward the transducer elements. These impulses arrive at the different transducer elements with slight time differences. This is demonstrated in Figure 1.6, with the same 5-element aperture. If the RF signals received by the different elements at different depths are shifted by their corresponding time delays (such as t_rcv,1 through t_rcv,5 in Figure 1.6), we can coherently add up the per-channel RF signals from the different receive elements at any given depth.

Figure 1.5: Illustration of transmit beamforming in a focused transmission

Figure 1.6: Illustration of receive beamforming after a focused transmission

1.2.4 Parameters that affect beamforming performance

Transmission focal depth

By adjusting the time delays at the different transducer elements, different transmission focal depths can be achieved. A shallower focal depth corresponds to a better lateral resolution in the near field, while a deeper focal depth corresponds to better penetration [6].

F-number

The f-number measures the ratio between the depth of the imaging point and the width of the aperture. In the receive beamforming stage, it is the ratio of the depth of the imaging point to the length of the active aperture used for receive beamforming. In the transmit beamforming stage, it is the ratio of the depth difference between the virtual source and the imaging point to the length of the active aperture used for transmit beamforming. A smaller f-number corresponds to a better spatial resolution, while a larger f-number corresponds to better penetration [6]. A constant f-number across different depths can result in a uniform resolution.
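The two ideas above, per-depth receive delays and a depth-proportional active aperture, can be combined into a minimal dynamic-receive-focusing sketch for one on-axis image line. Everything here (array geometry, sampling rate, f-number, flat apodization, nearest-sample interpolation) is an illustrative assumption rather than the thesis implementation:

```python
import numpy as np

# Hypothetical sketch of dynamic receive focusing (DRF) for one image
# line, with a constant receive f-number: the active aperture grows with
# depth so that depth / aperture_width stays fixed.

C = 1540.0       # m/s, assumed speed of sound
FS = 40e6        # Hz, sampling rate of per-channel RF data
PITCH = 0.3e-3   # m, element spacing
F_NUMBER = 2.0   # receive f-number


def drf_line(rf, depths_m):
    """rf: (n_samples, n_elements) per-channel data for one transmission.

    Returns one beamformed sample per requested on-axis depth."""
    n_samples, n_elements = rf.shape
    elem_x = (np.arange(n_elements) - (n_elements - 1) / 2.0) * PITCH
    out = np.zeros(len(depths_m))
    for k, z in enumerate(depths_m):
        # Constant f-number: aperture half-width = z / (2 * f#).
        half_width = z / (2.0 * F_NUMBER)
        active = np.abs(elem_x) <= half_width
        if not active.any():
            active[np.argmin(np.abs(elem_x))] = True
        # Round-trip delay: straight down to depth z, then back to element i.
        t_rt = z / C + np.hypot(elem_x[active], z) / C
        idx = np.clip(np.round(t_rt * FS).astype(int), 0, n_samples - 1)
        out[k] = rf[idx, np.nonzero(active)[0]].sum()
    return out
```

Only echoes whose per-channel arrival times match the geometric delays for a given depth add up coherently; echoes from other depths add incoherently and are suppressed.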
In this thesis, the transmit and receive f-numbers are kept constant by expanding the aperture size as the imaging depth increases; however, this cannot be maintained at deeper depths because of the finite size of the transducer.

Beamforming apodization

Similar to the window functions used in digital signal processing systems, apodization is a technique originally used in optics to suppress grating lobes (spurious peaks next to the main lobe that appear when the distance between neighbouring transducer elements is larger than half of the transmission wavelength). Note that this is generally achieved at the price of broadening the main lobe, which leads to a slightly worsened lateral resolution. In a biomedical ultrasound scanner, apodization is implemented in two steps: transmission apodization and beamforming apodization. In transmission apodization, elements at the edge of the transmit aperture are driven with a smaller amplitude than the elements at the centre of the transmit aperture. In beamforming apodization, recordings at the edge of the receive aperture are given a smaller weight than the recordings at the centre of the aperture. Both of these result in lower grating lobes in the beamformed image, at the price of a wider main lobe. In this thesis, an apodization window with the same shape as a Hamming window is chosen for all transmission and beamforming apodizations.

(3) The letter 'f' stands for 'focal'; it is based on optics terminology describing the focal ratio of lenses [7].

Other factors

There are several other factors that affect the beamforming quality, such as the Time Gain Compensation (TGC), the dynamic range, the number of active elements, the centre frequency of the transducer, and the shape and length of the transmitted signal. For example, the imaging quality can be improved when a longer transmitted signal is sent, or by fine-tuning the TGC curve.
However, since this thesis mainly focuses on the beamforming aspect of ultrasound imaging, we will only study the effects of the three aforementioned factors in detail.

1.3 Evaluation metrics

It was observed in Section 1.2.4 that beamforming parameters tend to either optimize the penetration performance at the price of lowering the resolution, or vice versa. Because of this, we use two different sets of metrics to evaluate the performance of all the beamforming algorithms in this thesis: the full width at half maximum (FWHM) is used to evaluate the lateral and axial resolutions, and the signal-to-noise ratio (SNR) is used to evaluate the penetration and contrast. We describe these two metrics in the next two sections.

1.3.1 Spatial resolution

The spatial resolution is defined by the FWHM along the axial and lateral directions. For example, in the lateral direction, this length is found by first locating a point of maximum pixel value in the beamformed image and then moving away from this point laterally in both directions, until the pixel values at the two new positions fall below half of the maximum value. The distance between these two points is defined as the lateral resolution. A typical method to evaluate the FWHM performance of a beamforming algorithm uses the point spread function (PSF) phantom, in which a few point sources are placed in an otherwise empty imaging field. With a beamforming algorithm of good spatial resolution, the image of a point on the screen is a small dot, which corresponds to a small FWHM value; with a beamforming algorithm of poor spatial resolution, the image of the point is a larger, smudged version of the dot, which corresponds to a larger FWHM value.

1.3.2 Contrast resolution

The SNR can be measured when a phantom with a highly reflective region (occlusion) or a weakly reflective region (inclusion) is scanned.
The SNR is defined as

    SNR(dB) = 10 log10( |µ_in − µ_out| / (σ_in · σ_out) ),   (1.1)

where µ_in and σ_in are the mean and standard deviation of the pixels inside the inclusion or occlusion, and µ_out and σ_out are the mean and standard deviation of the pixels in the neighbourhood of the inclusion or occlusion. Since the inclusion and occlusion regions in this thesis are designed to be circles, the neighbourhood is defined as a concentric square whose sides are twice the diameter of the inclusion or occlusion.

1.4 Motivation of the thesis

In a conventional DRF image, since we only investigate the image line along the aperture centre, only the transmission energy along this axial line is utilized. That is, the resulting receive-focused line would be the same if the transmission beam profile were a thin axial line through the focal position. However, from Figure 1.4, we know that the actual beam profile of a focused transmission has an hourglass shape, where the energy is spread out in the near field and the far field. This means that the energy in the near field and the far field of the focused transmission is not utilized in the receive beamforming step. Furthermore, a focal point must be chosen, and it should be set manually by the operator near the anatomical region of interest, which in practice is generally not done optimally. To compensate for this, modern ultrasound machines stitch together images from multiple transmits with different focal depths. This improves the spatial resolution of the image, but reduces the frame rate of imaging by the number of transmits. These problems motivate the STA-VS beamforming methods.

Currently, STA-VS methods are starting to be implemented in high-end commercial ultrasound machines, such as the EPIQ system [8] by Philips and the ACUSON system [9] by Siemens. However, most of the commercial systems are proprietary, and their detailed implementations are unknown.
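As a concrete companion to the two metrics of Section 1.3, the procedure can be sketched in a few lines of Python. This is an illustrative sketch only, not code from the thesis: NumPy is assumed, the standard deviations are population standard deviations, and linear interpolation at the half-maximum crossings is an added refinement of the walk-out procedure described above.

```python
import numpy as np

def fwhm(profile, dx):
    """FWHM of a 1-D amplitude profile sampled with spacing dx.

    Walks outward from the peak until the amplitude drops below half
    the maximum, then linearly interpolates the two crossing points.
    """
    p = np.asarray(profile, dtype=float)
    peak = int(np.argmax(p))
    half = p[peak] / 2.0
    i = peak
    while i > 0 and p[i - 1] >= half:
        i -= 1
    left = 0.0 if i == 0 else (i - 1) + (half - p[i - 1]) / (p[i] - p[i - 1])
    j = peak
    while j < len(p) - 1 and p[j + 1] >= half:
        j += 1
    right = float(j) if j == len(p) - 1 else j + (p[j] - half) / (p[j] - p[j + 1])
    return (right - left) * dx

def contrast_snr(inside, outside):
    """SNR per Equation (1.1): 10*log10(|mu_in - mu_out| / (sigma_in * sigma_out))."""
    mu_in, mu_out = np.mean(inside), np.mean(outside)
    return 10.0 * np.log10(abs(mu_in - mu_out) / (np.std(inside) * np.std(outside)))
```

For a triangular profile [0, 1, 2, 3, 4, 3, 2, 1, 0] with dx = 1, `fwhm` returns 4.0, the distance between the two half-maximum crossings.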
This thesis aims to survey the existing implementations of the STA-VS methods in the literature and to explore the advantages and drawbacks of each method, with the goal of selecting one implementation that leads to the best performance.

1.5 Structure of the thesis

The outline of the rest of the thesis is as follows: Chapter 2 introduces two STA-VS methods and performs a full comparison between them, including their beamforming performance and system complexity requirements. Chapter 3 introduces a method that improves one of the STA-VS methods' performance in tissues with non-ideal speed-of-sound (SOS) properties, via an SOS estimation and correction algorithm. Finally, Chapter 4 concludes the thesis and provides some directions for future work.

Chapter 2

A Comparison of Two Synthetic Transmit Aperture with Virtual Source Methods

2.1 Introduction and previous research

The goal of the Synthetic Transmit Aperture with Virtual Source (STA-VS) methods is to utilize the laterally spread energy of the transmit beam in the near and far field. To achieve this, per-channel radio frequency (RF) signals from multiple transmissions with different lateral foci are summed synthetically to form a single lateral line in the final image. As a demonstration, five transmissions with five lateral foci are shown in Figure 2.1. The solid arrows represent the lateral positions of the transmit foci, while the dashed arrows represent where per-channel RF lines are gathered and stored. For example, after Transmission 1, which is focused at transducer element 3, the per-channel RF signals received by all of transducer elements 1-10 are stored. When a final imaging line is being formed at transducer element 3, per-channel RF data gathered from all of the 5 transmissions are summed up coherently. In order to sum the received lines from multiple transmissions coherently, different specific realizations of STA-VS have been developed in previous research.

BiPBF

The first STA-VS method was published by Bae et al.
in 2000 [10], where the authors proposed a beamforming algorithm called bi-directional pixel-based focusing (BiPBF). This method employs synthetic transmit focusing combined with Dynamic Receive Focusing (DRF). After each transmission, the per-channel RF data collected by all the transducer elements are stored. A single-stage transmit beamforming is performed after all the RF data necessary to form an image are gathered. The authors claimed that this method can substantially increase the lateral resolution of an image, but also acknowledged that it is computationally and memory demanding.

[Figure 2.1: Transmission and reception diagram of the STA-VS methods (transducer elements 1 to 10; Transmissions 1 through 5)]

Multiline

In the patent [11] by Philips in 2008, the inventors presented a simplified version of the BiPBF method, which performs a two-stage beamforming using partial per-channel RF signals. In the first stage, after each transmission, multiple DRF receive-focused lines are acquired at the transducer elements in the neighbourhood of the transmit aperture centre. The authors claimed that the larger the number of scanlines recorded, the better the resolution of the beamformed image. The number of focused lines gathered depends on the degree of tissue motion present in the subject: when imaging tissue that is relatively stable, such as the abdomen, more scanlines are acquired, while when imaging tissue that is rapidly moving, such as the heart, fewer scanlines are acquired. In the second stage, the multiple scanlines gathered at the same spatial position after different transmissions are summed synthetically, using the same technique as the second stage of the BiPBF method.
The inventors claimed that the system achieves a resolution comparable to conventional multi-zone focus imaging systems (which stitch together images from multiple transmissions with different focal depths, as introduced in Chapter 1), with a higher frame rate.

SASB

Synthetic Aperture Sequential Beamforming (SASB) is a method introduced by Kortbek et al. in 2008 [12][13]. SASB is also a simplified version of the BiPBF method, where synthetic transmit beamforming is achieved through a two-stage approach. In the first stage, one single receive-focused delay-and-summed (srDAS) line is formed at the transmit aperture centre. The authors pointed out that DRF should not be used as the first-stage beamforming algorithm, since it would lead to wrong beamforming delay calculations. In the second stage, the received lines from the different transmission foci are synthetically summed to form an image line on the display. The authors claimed that the SASB method has a better spatial resolution than the conventional DRF method. A subsequent paper [14] by the same research group applied this method in a clinical study on liver tumours and claimed that the method has a performance comparable to DRF. Further research has built on this method: in [15], the authors applied the method to tissue harmonic imaging; in [16], the authors implemented the first stage of the beamforming method on a mobile handheld device and performed the second stage remotely via a wireless network.

In the remaining sections of this chapter, we give brief mathematical descriptions of two of the STA-VS methods (SASB and BiPBF)(4), analyze their requirements on system complexity, and compare their performance in both simulations and phantom measurements.
The conventional DRF method will serve as a baseline for the comparison.

(4) The multiline method was not chosen here because its detailed implementation was not available from the patent that introduced it.

2.2 Beamforming equations

In this section, we introduce the mathematical representations and implementation details of the three beamforming methods: DRF, BiPBF, and SASB.

2.2.1 Transducer coordinates

In deriving the beamforming equations, the following notation is used. It differs from the notation of the original works introducing the STA-VS methods, but it enables us to represent all three methods within the same framework. The goal of beamforming is to calculate the pixel value of a point in the final displayed image, S_displayed, at the location Q(z_j, x_i), where z_j stands for the depth of the imaging point and x_i stands for its lateral position (which is also the lateral position of transducer element i, since focused lines are only beamformed at the lateral positions of the transducer elements). In total, the transducer has I elements and the maximum image depth is J. The relationship between x_i and i depends on the pitch of the ultrasound transducer, and the relationship between z_j and j depends on the sampling frequency and the speed-of-sound (SOS). We denote the per-channel RF line collected at transducer position r after the transmission focused at transducer element f as S_rf. The value of this per-channel RF line at depth z_j can be written as S_rf(z_j), which is also the j-th entry of the digitized per-channel RF signal. Let the lateral position of the transmission focus be x_f, the lateral position of the receive element be x_r, the transmit focal depth be z_f, and the f-number in both transmit and receive focusing be F#.
Let us also define a Hamming function Hamming(w, x), where w stands for the width of the window and x stands for the distance from the centre position within the Hamming window.

2.2.2 Conventional DAS beamforming methods

srDAS method

Although the srDAS method performs worse than DRF and is not commonly used in commercial machines, it is included here since it is identical to the first stage of the SASB method.

The red dashed line in Figure 2.2 shows the time-of-flight (TOF) path assumed by srDAS beamforming. In an oversimplified understanding (one that treats each transmission from the transducer as a spherical wave), after a focused transmission centred at element 4, the wavefront first converges at the focal point B, then propagates toward the scatterer Q, returns to the focal point B(5), and finally arrives at the different receiving elements. The per-channel RF signal received at transducer element 7 hence follows the TOF path A → B → Q → B → C. Similarly, the per-channel RF signal received at transducer element 4 follows the TOF path A → B → Q → B → A. The path difference between these two per-channel RF signals is z_D = |BC| − |AB|.

(5) This step seems counter-intuitive and inaccurate; however, it is the TOF path an srDAS beamformer assumes. In other words, this model is only accurate if the wavefront in the imaging field follows this TOF, which is obviously not the case. The model is, however, accurate when the depth of interest is close to the focal depth, and its imaging quality can be improved by stitching together transmissions with multiple focal points [17].

The beamforming equation of the srDAS method can be written as

    S_displayed(z_j, x_i) = Σ_{x_r} [ A_R(z_j, x_r, x_i) · S_ri(z_j − z_d(x_i, z_f, x_r)) ],   (2.1)

where A_R(z_j, x_r, x_i) is the apodization factor and z_d is the shift applied to the received per-channel RF signal S_ri. The apodization matrix can be written as

    A_R(z_j, x_r, x_i) = Hamming(|z_j| / F#, |x_r − x_i|).
(2.2)

The shift equation can be written as

    z_d(x_i, z_f, x_r) = z_D(x_i, z_f, x_r) = sqrt(z_f² + (x_i − x_r)²) − z_f.   (2.3)

This path difference expression is independent of depth (z_j). Hence, all entries in a per-channel RF line S_rf are subject to the same amount of shift, and this can be implemented using a register shift operation.

Historically, the srDAS beamformer was used in most analog commercial ultrasound machines, where the delay operations were implemented using tapped delay lines. Only from the 1970s, as digital beamformers were introduced, did DRF become realistic to implement [6]. Moreover, even when implemented using digital circuits, an srDAS beamformer still requires less digital circuitry than a DRF implementation [18].

DRF method

The blue solid line in Figure 2.2 represents the TOF calculation of the conventional DRF beamformer. After a focused transmission centred at element 4, the wavefront first converges at the focal position, then propagates toward the scatterer, and then returns directly to the receive element. The per-channel RF signal received at transducer element 7 hence follows the TOF path A → Q → C. Similarly, the per-channel RF signal received at transducer element 4 follows the TOF path A → Q → A. The path difference between these two per-channel RF signals is z_R = |QC| − |QA|.

[Figure 2.2: TOF calculation of srDAS and DRF (elements 1 to 10; points A, B, C and scatterer Q in the x-z plane)]

Therefore, the beamforming equation of the dynamic receive focusing method can be written as

    S_displayed(z_j, x_i) = Σ_{x_r} [ A_R(z_j, x_r, x_i) · S_ri(z_j − z_d(z_j, x_i, x_r)) ],   (2.4)

where A_R(z_j, x_r, x_i) is the same as in the conventional DAS case, and z_d can be written as

    z_d(z_j, x_i, x_r) = z_R(z_j, x_i, x_r) = sqrt(z_j² + (x_r − x_i)²) − z_j.
(2.5)

The path difference z_R is depth-dependent, and therefore needs to be either calculated for each depth in real time, or pre-calculated and stored in memory.

In modern ultrasound beamforming systems, the DRF beamformer is implemented using either a digital signal processing (DSP) chip or a field-programmable gate array (FPGA) board. The shift values are either calculated on the fly or stored in a lookup table (LUT) [6]. There is a notable body of recent research on the efficient implementation of the DRF method [19][20].

2.2.3 STA-VS methods

BiPBF method

In the BiPBF method, to form a displayed image, multiple beams are transmitted with lateral foci at each of the transducer elements. The per-channel RF lines are gathered at each of the transducer elements and summed up according to the TOF path differences between them. The TOF calculation of the BiPBF method is shown in Figure 2.3: the red dashed line represents the TOF path corresponding to the per-channel RF line S_{10,4}, and the blue solid line represents the TOF path corresponding to the per-channel RF line S_{10,6}. After the focused transmission, the wavefront first travels to the focal point, propagates toward the scatterer, and is then received at the receiver position. Per-channel RF signal S_{10,4} hence follows the TOF path A → B → Q → E. Similarly, per-channel RF signal S_{10,6} follows the TOF path C → D → Q → E.

[Figure 2.3: TOF calculation of the BiPBF method (elements 1 to 10; points A, B, C, D, E and scatterer Q in the x-z plane)]
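Before moving to the transmit-side geometry, note that the two path differences derived so far (z_D in Equation 2.3 and z_R in Equation 2.5) each amount to one line of code. The following is an illustrative sketch, not the thesis implementation; units are whatever z and x are expressed in:

```python
import math

def srdas_shift(x_i, z_f, x_r):
    """z_D of Eq. (2.3): depth-independent srDAS path difference,
    sqrt(z_f^2 + (x_i - x_r)^2) - z_f."""
    return math.hypot(z_f, x_i - x_r) - z_f

def drf_shift(z_j, x_i, x_r):
    """z_R of Eq. (2.5): depth-dependent DRF path difference,
    sqrt(z_j^2 + (x_r - x_i)^2) - z_j."""
    return math.hypot(z_j, x_r - x_i) - z_j
```

Because `srdas_shift` does not depend on z_j, one value per transmit/receive pair suffices and the whole RF line can be register-shifted, whereas `drf_shift` must be evaluated (or tabulated) for every depth sample — the difference that drives the LUT sizes discussed in Section 2.3.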
The path difference (in the transmit portion) between these two per-channel RF signals is z_T = |QD| − |QB|.

Therefore, the beamforming equation of the BiPBF method can be written as

    S_displayed(z_j, x_i) = Σ_{x_r} Σ_{x_f} [ A(z_j, z_f, x_i, x_r, x_f) · S_rf(z_j − z_d(z_j, x_f, z_f, x_r, x_i)) ].   (2.6)

The apodization function is the product of one Hamming window based on the distance between the imaging position x_i and the lateral position of the receive element x_r, and another Hamming window based on the distance between the imaging position and the lateral position of the transmit element x_f. That is,

    A(z_j, z_f, x_i, x_r, x_f) = A_R(z_j, x_r, x_i) · A_T(z_j, z_f, x_f, x_i),   (2.7)

where A_R(z_j, x_r, x_i) is the receive apodization and A_T(z_j, z_f, x_f, x_i) can be written as

    A_T(z_j, z_f, x_f, x_i) = Hamming(|z_j − z_f| / F#, |x_f − x_i|).   (2.8)

The path difference can be broken up into two portions: the path difference in the transmit focusing portion and the path difference in the receive focusing portion. The total path difference can be written as

    z_B(z_j, x_r, x_i, z_f, x_f) = z_R(z_j, x_r, x_i) + z_T(z_j, x_i, z_f, x_f),   (2.9)

where z_R(z_j, x_r, x_i) is the DRF receive focusing path difference and z_T(z_j, x_i, z_f, x_f) can be written as

    z_T(z_j, x_i, z_f, x_f) = sgn(z_j − z_f) · ( sqrt((z_j − z_f)² + (x_f − x_i)²) − |z_j − z_f| ).   (2.10)

The complete beamforming equation can also be written as

    S_displayed(z_j, x_i) = Σ_{x_r} Σ_{x_f} A_R(z_j, x_r, x_i) · A_T(z_j, z_f, x_f, x_i)   (2.11)
                            · S_rf(z_j − z_T(z_j, x_i, z_f, x_f) − z_R(z_j, x_r, x_i)).   (2.12)

[Figure 2.4: TOF calculation of the SASB method (elements 1 to 10; points A, B, C, D and scatterer Q in the x-z plane)]

SASB method

In the SASB beamforming method, in the first stage, one srDAS focused line is formed at the transmit aperture centre of each transmission. In the second stage, the different focused lines are synthetically beamformed at each imaging point.

In the first stage, an srDAS line B_xf(z_j) is generated after each transmission focused at transducer element f.
This first-stage output is the same as the srDAS beamforming:

    B_xf(z_j) = Σ_{x_r} A_R(z_j, x_r, x_f) · S_rf(z_j − z_D(z_f, x_r, x_f)),   (2.13)

where z_D(z_f, x_r, x_f) and A_R(z_j, x_r, x_f) are as defined before.

Figure 2.4 shows the TOF calculation of the SASB method. The red dashed line shows the TOF path of the focused signal B_x4, and the blue solid line represents the TOF path of the focused signal B_x6. In the second stage, transmit beamforming is performed on the B_xf(z_j)'s obtained in the first stage. In the SASB method, the wavefront is assumed to first converge at the focal point, then propagate to the scatterer, then return to the focal point, and then return to the receiving elements. Focused signal B_x4 hence follows the TOF path A → B → Q → B → A. Similarly, focused signal B_x6 follows the TOF path C → D → Q → D → C. The TOF path difference (in the transmit and receive portions) between these two signals is z_T' = 2(|QD| − |QB|).

The second-stage beamforming equation of the SASB method can therefore be written as

    S_displayed(z_j, x_i) = Σ_{x_f} A_T(z_j, z_f, x_f, x_i) · B_xf(z_j − z_T'(z_j, x_i, z_f, x_f)),   (2.14)

where

    z_T'(z_j, x_i, z_f, x_f) = 2 · sgn(z_j − z_f) · ( sqrt((z_j − z_f)² + (x_f − x_i)²) − |z_j − z_f| ).   (2.15)

A note on transmit beamforming

In the above calculations, a Hamming transmit apodization window was chosen for both of the STA-VS methods. This weighting is chosen because, among the different transmissions with different lateral positions, the energies received at the imaging position differ: if the imaging point is closer to the aperture centre and focal depth of one transmission, then the received energy from that transmission is higher, and hence a larger weight should be assigned to the signal gathered after that transmission. By keeping the transmit beamforming f-number fixed and selecting a Hamming apodization window, we assume that the transmitted beam profile has an hourglass shape.
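The second stage of SASB (Equations 2.14 and 2.15) can be sketched as a direct triple loop over image lines, depths, and contributing emissions. This is an illustrative Python sketch only, not the thesis implementation: nearest-neighbour resampling of B_xf, a cosine-on-pedestal Hamming weight, and a transmit aperture of width |z_j − z_f| / F# are my assumptions.

```python
import numpy as np

def sasb_second_stage(B, z, x, z_f, f_number):
    """Second-stage SASB sum: out[i, j] = sum_f A_T * B[f, j'],
    with j' shifted by z_T' = 2*sgn(z_j - z_f)*(sqrt(...) - |z_j - z_f|).

    B[f, :] is the first-stage srDAS line for the emission focused at
    lateral position x[f]; z holds the (uniformly spaced) depth samples.
    """
    n_lines, n_depths = B.shape
    dz = z[1] - z[0]
    out = np.zeros_like(B)
    for i in range(n_lines):              # image line at x[i]
        for j in range(n_depths):         # depth sample z[j]
            half_w = abs(z[j] - z_f) / (2.0 * f_number)
            for f in range(n_lines):      # contributing emissions
                lat = abs(x[f] - x[i])
                if half_w == 0.0 or lat > half_w:
                    continue              # outside the transmit aperture
                w = 0.54 + 0.46 * np.cos(np.pi * lat / half_w)  # Hamming weight
                dzj = z[j] - z_f
                zt2 = 2.0 * np.sign(dzj) * (np.hypot(dzj, x[f] - x[i]) - abs(dzj))
                k = int(round((z[j] - zt2 - z[0]) / dz))        # nearest sample
                if 0 <= k < n_depths:
                    out[i, j] += w * B[f, k]
    return out
```

With a single emission and the image line on its axis, the shift z_T' vanishes and the routine returns the first-stage line unchanged (the centre weight is 1), which is a convenient sanity check.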
This is not the only possible choice for the transmitted beam profile, though. In [11], the authors proposed to use an experimental measurement of the field intensity as the transmitted beam profile. In [21], the authors compared the effect of beamforming with different types of window functions. And in [22], the authors proposed to use a semi-analytic model to describe the transmitted beam profile for this purpose.

2.3 System complexity analysis

In Section 2.1, we mentioned that the BiPBF method is more computationally and memory demanding than the SASB and DRF methods. The computational demand of an algorithm can be represented by the number of floating-point operations (FLOPs) required to form an entire image, and the memory demand can be represented by the size of the random-access memory (RAM) and read-only memory (ROM) required to store the per-channel RF signals and the LUTs for the time delay calculations. As introduced in Section 1.4, there are specially designed commercial systems that employ STA-VS methods; however, the specific implementations of their beamforming algorithms and hardware realizations are proprietary and unknown. This section gives an estimate of the FLOPs and the size of the LUTs required for the BiPBF and SASB methods using two different implementations. These implementations are not necessarily how the algorithms are implemented in commercial machines; our results only serve as a complexity estimate and may differ from commercial implementations.

2.3.1 System specifications

We consider two different implementations: the system setup in Field II [23] and in the SonixTouch (Ultrasonix Medical Corp., Burnaby, Canada) machine. The relevant parameters of the two systems are listed in Table 2.1.

2.3.2 Implementation using a single processor

Since most of the STA-VS methods are only available for real-time imaging in specific-purpose high-end machines, which are not common in research
labs yet, most STA-VS beamforming is performed off-line using a personal computer, after all the per-channel RF data necessary to form an image are gathered. In this case, the beamforming operation is completed sequentially using a single processor. In this section, we estimate the number of FLOPs and the size of the RAM required when beamforming is performed on a single processor. The number of FLOPs assumed for each basic operation (square root, multiplication, and addition) is based on the MATLAB (MathWorks Inc., Natick, MA) toolbox Lightspeed [24].

Table 2.1: System parameters using the Field II and SonixTouch setups

                          Field II simulation   SonixTouch
  Sampling frequency      120 MHz               40 MHz
  SOS                     1540 m/s              1540 m/s
  Depth of field          10 cm                 10 cm
  Samples per RF line     15584                 5194
  Size per data point     8 bytes               2 bytes
  Size per line           121.7 KB              10.14 KB

FLOPs for sub-operations in the beamforming calculation

Before calculating the number of FLOPs required for an entire image using each method, we first look at the FLOPs required for each sub-operation in the beamforming equations. For the calculation of the path difference for receive beamforming (Equation 2.3), the total number of FLOPs required is 13 (8 for the square root, 2 for subtraction, 1 for addition, and 2 for the two squares). For the calculation of the path difference for transmit beamforming (Equation 2.10), the total number of FLOPs required is 17 (8 for the square root, 3 for subtraction, 1 each for the addition, absolute value, multiplication, and sgn(), and 2 for the two squares). For the matrix element-wise multiplication required to combine the apodization function and the path difference function, with matrices of size m · n, the number of FLOPs required equals m · n.
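These per-operation costs can be turned into per-image totals. The sketch below is mine, not the thesis code; it simply encodes the counting rules as I read them (13 FLOPs per receive delay, 17 per transmit delay, 3 per weighted-sum term), and the grouping of the SASB no-LUT total into first-stage (13 + 3) and second-stage (17 + 3) terms is my own reconstruction:

```python
# Per-operation costs from the text: 13 FLOPs for a receive delay
# (Eq. 2.3 / 2.5), 17 for a transmit delay (Eq. 2.10), and 3 for each
# multiply-accumulate term of the beamforming sums.

def flops_with_lut(I, J):
    """FLOPs per image when all delays are read from LUTs."""
    return {"DRF": 3 * I * I * J,          # one sum over receivers
            "BiPBF": 3 * I * I * I * J,    # sums over receivers and emissions
            "SASB": 6 * I * I * J}         # two sequential stages

def flops_without_lut(I, J):
    """FLOPs per image when delays are computed on the fly."""
    return {"DRF": (13 + 3) * I * I * J,
            "BiPBF": (13 + 17 + 3) * I * I * I * J,
            "SASB": (13 + 3 + 17 + 3) * I * I * J}

lut = flops_with_lut(128, 15588)
```

For I = 128 and J = 15588 (the Field II geometry), the LUT case gives roughly 8E8 (DRF), 1E11 (BiPBF), and 2E9 (SASB) FLOPs.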
FLOP calculations if LUTs are used for time delays and weighting

In most commercial ultrasound machines, the time delay values for the different calculations are pre-calculated and stored in an LUT, in order to avoid the overhead of calculating time delays in real time. Suppose the image depth is J samples (15588 for Field II, 5194 for SonixTouch) and the number of transducer elements is I (128). For the DRF method, according to Equation 2.4, the total number of FLOPs required is I · J · I · 3. For the BiPBF method, according to Equation 2.6, the number of FLOPs required is I · J · I · I · 3. For the SASB method, the number of FLOPs is I · J · I · 3 + I · J · I · 3. The corresponding numerical values for the three methods are shown in Table 2.2.

Table 2.2: Number of FLOPs required using LUTs for delay calculations

  Name    Total FLOPs   Field II simulation   SonixTouch
  DRF     3I²J          8E8                   3E8
  BiPBF   3I³J          1E11                  3E10
  SASB    6I²J          2E9                   5E8

The sizes of the RAM required to store the LUTs for the delay calculations are shown in Table 2.3. For the Field II simulation case, in the BiPBF calculation, for each transmission focal lateral position (128), there needs to be one time delay entry for each sample in each of the RF lines (128 · 128). Therefore, with 8-byte entries, the LUT size for a fixed focal axial depth is

    LUT_size = 128 · 128 · 128 · 15584 · 8 / 1024³ GB = 243 GB.   (2.16)

For the SASB calculation, the receive beamforming LUT is significantly smaller than the transmit beamforming LUT, and each receive position requires only 128 entries. For each transmit position, there needs to be an entry in the LUT for each lateral position of the RF line. The LUT size for a fixed focal axial depth is

    LUT_size = 128 · 128 · 8 / 1024³ GB   (2.17)
               + 128 · 15582 · 128 · 8 / 1024³ GB   (2.18)
               = 1.9 GB.   (2.19)

If we want to have 10 different transmission focal depths that can be chosen by the operator before each exam, then additional LUTs need to be
System complexity analysisstored in the system with ROMS, which further increases storage require-ments on the system. This is demonstrated in the last column in Table2.3.Table 2.3: Size of LUTs required for delay calculations using the 3 beam-forming methods with 10 selectable focal depthsMethodField II SonixTouchRAMs ROMs RAMs ROMsDRF 1.9 GB 19 GB 0.039 GB 0.39 GBBiPBF 243 GB 2.37 TB 5.00 GB 50 GBSASB 1.9 GB 19 GB 0.039 GB 0.39 GBFLOPs calculations not using LUTsIf no LUTs for delay calculations were stored, and all the delay calculationsare calculated in real time, the number of FLOPs required for differentmethods are summarized in Table 2.4.Table 2.4: Number of FLOPs for delay calculation without using LUTsName Total FLOPs Field II SonixTouchDRF (13 + 3)I2J 4E9 2E9BiPBF (13 + 17 + 3)I2J 1E12 3E11SASB (17 + 3)I3J 1E10 3E92.3.3 Implementation using FPGA and System on Chip(SoC) solutionsIn recent years, ultrasound beamforming algorithms have started to be im-plemented using FPGA and SoC boards. These designs give the beamform-ers parallel processing power for faster imaging and flexible architecturefor redesigning the algorithms. To achieve complete parallel beamforming,dedicated FPGA cells are allocated for each channel. A typical ultrasoundbeamformer requires analog-to-digital converter (ADC) blocks to digitize theultrasound signals, DSP slices to perform various filtering functions, and on-chip RAMs to store RF signals and to perform delay operations. An SoC222.3. System complexity analysisSerial to parallel converterInter-polation filterVariable delaysVariable delaysx ∑ Window functionChannel data 1Channel data 2Sum of the two channel dataFigure 2.5: Data path for processing two channel data when implementedusing the Xilinx FPGA architecture (Adapted from [1])chip has all these resources on the board, and can be used for beamforming.Following the Xilinx (San Jose, CA) framework, the rough cost of the afore-mentioned beamforming methods can be estimated. 
The calculations, data, and figures in this section are based on a set of tutorial slides published by Xilinx [1].

Figure 2.5 shows the architecture and operations required to process RF signals gathered from two receiving channels. The essential resources required to carry out these operations are the Configurable Logic Blocks (CLBs), the DSP slices, and the Block Random Access Memories (BRAMs). The quantity of SoC resources needed for each component in Figure 2.5 is shown in Table 2.5. As discussed in Section 2.2, the first stage of the SASB method is an srDAS beamformer, while the first stage of the BiPBF method needs to be a full DRF beamformer. Using Table 2.5, we can compute the SoC resources required for each stage of the STA-VS methods. To implement an srDAS beamformer, only the first 5 components in Table 2.5 are required. A DRF beamformer requires two extra BRAM blocks to store the LUTs for the variable delay values and the apodization (components 6 and 7). Since there are 128 channels in the ultrasound machine and each structure in Figure 2.5 can handle two channels, 64 such structures are required for each beamformer. The resources required for the srDAS beamformer and the DRF beamformer are shown in Table 2.6 (A & B). To fully implement the BiPBF and SASB methods, additional BRAMs are required to buffer RF signals before a new image can be formed (C & D in Table 2.6).

BiPBF: For the BiPBF method, in the first stage, we need 128 DRF beamformers, forming one line at each receiving element. After each transmission, the buffer fills one column, corresponding to one transmission location. Compared to the DRF method, we need additional RAMs to
store the RF signals before sending them to the second stage. After the entire buffer is full, transmit beamforming is performed for each imaging position. To achieve this in a parallel manner, we need 128 beamformers in the first stage and 1 beamformer in the second stage. In total, the minimum resource requirement is 128·A + A + D (see Table 2.6).

Table 2.5: FPGA resources for each component in a beamformer (adapted from [1])

     Structure                      CLB   DSP Slice   BRAM
  1  Serial-to-parallel converter   24    0           0
  2  Interpolation filter           250   5           0
  3  Variable delays                100   0           2
  4  Window function                120   1           1
  5  Summation                      24    0           0
  6  LUT for time delay             0     0           1
  7  LUT for apodization            0     0           1
  8  Additional storage             0     0           1

Table 2.6: FPGA resources required for different beamformers

     Component                                  CLB      DSP Slice   BRAM
  A  DRF beamformer                             34,432   384         320
  B  srDAS beamformer                           28,032   384         320
  C  Storage for one channel (per frame)        0        0           128
  D  Storage for one channel (per 128 frames)   0        0           16,384

SASB: For the SASB method, in the first stage, we only need one 128-channel srDAS beamformer, along with a buffer to store the 128 lines from all the transmissions. In the second stage, we need one DRF beamformer to carry out the synthetic transmit beamforming. In total, the minimum resource requirement is B + A + C (see Table 2.6).

The total FPGA resources required for the different methods are shown in Table 2.7. One typical Xilinx board, the XC6VLX240T, has the following resources: 37,680 CLBs, 768 DSP slices, and 832 BRAMs of size 18 KB (enough for the data from one channel of the SonixTouch machine, as listed in Table 2.1) [25]. The last column of Table 2.7 shows the minimum number of XC6VLX240T boards required for each beamforming method. The price of one such board is around $2,000, so the last column provides a rough estimate of the complexity and cost of the beamforming methods.
The SASB method only doubles the complexity of the DRF method and is therefore practical to realize, whereas the BiPBF method multiplies the cost by a factor of 120.

Table 2.7: FPGA resources for the different methods

  Method   CLB        DSP Slice   BRAM     Number of XC6VLX240T
  DRF      34,432     384         320        1
  BiPBF    4,441,728  49,536      57,664   120
  SASB     62,464     768         768        2

2.4 Performance comparison

In this section, we present the results from the implementations of the DRF, SASB, and BiPBF methods using both the Field II simulation software and the SonixTouch machine. In the simulation study, we investigate the full width at half maximum (FWHM) performance using the point spread function (PSF) phantom, and the signal-to-noise ratio (SNR) performance using the contrast phantom (the inclusion-occlusion phantom introduced in Section 1.3). In the phantom measurement study, we investigate the FWHM performance using a standard quality assurance phantom. In each study, the effects of the transmission focal depth, the transmit and receive beamforming f-numbers, and the SOS assumption on beamforming performance are considered. The beamforming parameters for these experiments, chosen to match those in [13], can be found in Appendix A.

2.4.1 PSF simulation analysis

In the first set of simulations, we created a PSF phantom consisting of 20 scatterers arranged in a single file on the lateral central line of the transducer. The first scatterer was located at a depth of 5 mm, and each subsequent scatterer was 10 mm below the previous one. After beamforming, the FWHM resolutions at different depths were recorded. The detailed lateral and axial resolutions at each depth can be found in Appendix B. Only the lateral resolutions are discussed in this section, since the axial resolution is finer than the lateral resolution and is almost identical across different depths.
Comparison between the three beamformers

Figure 2.6: Lateral resolutions using the PSF simulation (FWHM resolution in mm versus depth in mm; DRF @ 70 mm, SASB @ 20 mm, BiPBF @ 20 mm)

The lateral resolutions measured on the PSF phantom using the SASB, BiPBF, and DRF methods are shown in Figure 2.6.

Observations: For all three methods, the FWHM resolution worsens as the scatterer gets deeper in the imaging field. In the near field (in front of the focal point), the BiPBF and DRF methods have a better resolution than the SASB method, while in the far field the SASB method performs better. At depths beyond 100 mm, the STA-VS methods have almost half the FWHM of the conventional DRF method.

Discussions: (1) Choice of focal depth: The focal depth of the DRF method is set to 70 mm, while the focal depth of the BiPBF and SASB methods is set to 20 mm. These focal depths were chosen for a fair comparison between the three methods, since the DRF method performs best when the focal depth is approximately at the centre of the image. (2) Performance in the near field: In contrast to the far field, where the two STA-VS methods performed significantly better than the DRF method, in the near field the STA-VS methods performed about the same as, or even worse than, the DRF method. We suspect that this inferior performance is due to an incorrect assumption about the transmitted beam profile in the near field. As introduced in Section 2.2.3, the beamforming equations programmed here assume that the transmitted beam profile has an hourglass shape. This rough approximation is especially inaccurate in the near field, where the far-field approximation of the beam profile may not be valid. This leads to inaccurate transmit beamforming in the near field.
(3) Choice of other parameters: This plot only shows that, using the parameters chosen in [13], the SASB method achieves a superior resolution compared to the BiPBF method. The key message is that SASB and BiPBF both outperform DRF; the small performance difference between SASB and BiPBF should be explored as a study of the best choice of parameters. (4) Choice of evaluation metric: The chosen evaluation metric, the FWHM resolution, is a relatively simple measure of the PSF and does not capture the full characteristics of the three beamforming algorithms.

Effect of different focal depths

Figure 2.7: Lateral resolutions of the PSF phantom using the SASB, BiPBF, and DRF methods with focal depths of 20 mm, 50 mm, and 70 mm

In Figure 2.7, the lateral resolutions of the PSFs using the three methods with different focal depths (20 mm, 50 mm, and 70 mm) are shown.

Observations: For all three methods, the spatial resolution worsens as the focal depth is set to a deeper position. The SASB method is more sensitive to the change in focal depth than the BiPBF method.

Discussions: (1) Effect of the focal depth: It was our previous understanding that, by using synthetic transmit beamforming, the beamforming performance would be less dependent on the focal depth. However, from this experiment, we found that the resolution of the STA-VS methods is more dependent on the focal depth than that of the DRF method. This dependency is not related to the anatomical depth of interest, since a shallower focal point leads to better performance across all depths instead of only near the focal depth.
(2) Fixed f-number versus finite-size aperture: From the SASB plot, we observe that the three curves settle into a rising linear slope after a region of constant resolution. This is due to the finite size of the ultrasound imaging array: beyond a certain depth, the recordings from all elements are used in beamforming, and a constant f-number can no longer be sustained.

Effect of different transmit and receive f-numbers

Figure 2.8: Lateral resolution of the PSF phantom using different receive beamforming f-numbers (SASB, solid lines; BiPBF, dashed lines; f-numbers 0.5, 1, and 1.5)

A set of simulations was carried out using the SASB and BiPBF methods with different transmit and receive f-numbers. Compared to the transmit f-number, the receive f-number has a more significant impact on the lateral resolution of the PSF. The full comparison of lateral resolutions using different transmit and receive beamforming f-numbers can be found in Appendix B of the thesis. Figure 2.8 shows the effect of different receive f-numbers on the lateral resolution using the SASB (solid lines) and BiPBF (dashed lines) methods.

Observations: We observe that a smaller receive f-number leads to a better spatial resolution in the image, especially in the near field. In changing the f-number from 1.5 to 0.5, the lateral resolution in the near field almost doubled using the SASB method, while it only improved by 30% using the BiPBF method. Beyond 60 mm, beamforming with the three f-numbers renders almost the same resolution.

Discussions: The reason that the three different f-numbers render the same resolution in the far field is, again, the finite size of the aperture.
In the far field, since all of the received RF recordings are used in the receive beamforming step, the beamforming resolution is independent of the f-number used.

Effect of SOS errors

Figure 2.9: Lateral resolution of the PSF phantom using different SOSs in beamforming (c′ = 1450, 1540, and 1630 m/s for each of the DRF, SASB, and BiPBF methods)

Most of the imaging media of interest, such as the interface between muscle and the subcutaneous fat layer, are not homogeneous: the SOS varies between different layers of the tissue. However, in the reception and beamforming stage, this SOS difference is generally ignored, and a constant SOS (usually c = 1540 m/s) is assumed in the TOF calculations. Because of this discrepancy between the intrinsic SOS of the tissue and the assumed SOS used in the beamforming process, the imaging quality deteriorates, and the degree of deterioration varies between algorithms. A robust ultrasound beamformer needs to withstand this SOS error, and this robustness can be examined by varying the assumed SOS used in the reception stage.

Figure 2.9 shows the effect of SOS error in the beamforming process on the FWHM of the PSF. In each simulation, the per-channel RF signal is generated using an SOS of 1540 m/s, while in the beamforming process another SOS, c′, is used for the TOF calculation.

Observations: We observe that in the far field (beyond the focal point), the increase in FWHM resolution is greater for SASB beamforming than for the BiPBF and DRF methods.
The increase is especially pronounced over the depth interval of 20 mm to 150 mm.

Discussions: This drawback can be corrected via an SOS detection and correction method applied after the first-stage beamforming. This is explained in detail in the next chapter.

Figure 2.10: SNR at different depths using the three beamforming methods (occlusions and inclusions; DRF @ 70 mm, SASB @ 20 mm, BiPBF @ 20 mm)

2.4.2 SNR simulation analysis

To investigate the penetration and contrast properties of the beamformers, the SNR at different depths was measured using a contrast phantom. The contrast phantom consists of 5 inclusions and 5 occlusions with a diameter of 2 mm, arranged in a single file at the lateral centre of the image and spread from 15 mm to 115 mm in depth. The detailed images and SNR values at each depth can be found in Appendix C.

Comparison between the three beamformers

Figure 2.10 shows the SNR at different depths in the image using the three aforementioned methods.

Observations: We observe that the three methods share similar SNR performance. In contrast to Section 2.4.1, where SASB had an apparent advantage, here the SASB method performed only slightly better than the other two methods.

Discussions: An inclusion-occlusion phantom was constructed to evaluate the contrast resolution of the three methods across different depths. As we observe in Figure 2.10, the SNR behaviour across depth differs between the occlusions and the inclusions: the SNR of the occlusions increases with depth, while the SNR of the inclusions decreases with depth.
The SNR values of the occlusions are misleading, an artifact of the evaluation metric we chose: in the far field, because the contrast is too low, most of the speckles are not detectable at all, which makes the background value (the denominator in Equation 1.1) small and inflates the SNR. This high SNR value does not correspond to better contrast resolution. For this reason, only the SNR of the inclusions will be used to evaluate the SNR performance of the algorithms.

Figure 2.11: SNR at different depths using different receive beamforming f-numbers (DRF, SASB, and BiPBF with f-numbers 0.5 and 1.5)

Effect of different receive beamforming f-numbers

Figure 2.11 shows the SNR of the inclusions at different depths using the three methods with two different receive beamforming f-numbers. The dashed lines show the SNR using an f-number of 0.5, and the solid lines show the SNR using an f-number of 1.5.

Observations: For all three methods, the SNR improves as the f-number is increased. Among the three methods, the improvement in SNR as the f-number increases is greatest for the SASB method.

2.4.3 Phantom measurement study

Finally, the three algorithms were implemented using the SonixTouch machine. A SonixDAQ (Ultrasonix Medical Corp., Burnaby, Canada) data acquisition module was used to acquire per-channel RF data. The per-channel data were first gathered and stored using the Ultrasonix research interface. Beamforming of the RF data was performed in MATLAB. The sampling frequency was set to 40 MHz; however, the RF signals were interpolated (using cubic splines) to 120 MHz before beamforming to achieve a better imaging quality. The phantom under investigation is the CIRS General Purpose Multi-Tissue Ultrasound Phantom, Model 40 (CIRS, Norfolk, VA).
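The cubic-spline upsampling step described above (40 MHz acquisition resampled to 120 MHz before beamforming) can be sketched as follows. The thesis pipeline used MATLAB; this Python version, with an assumed `upsample_rf` helper and a made-up 5 MHz test pulse, is only illustrative:

```python
# Illustrative sketch: upsample one line of per-channel RF data from the
# 40 MHz acquisition rate to 120 MHz with cubic splines, as described above.
# The helper name and the test pulse are assumptions, not thesis code.
import numpy as np
from scipy.interpolate import CubicSpline

def upsample_rf(rf, fs_in=40e6, fs_out=120e6):
    """Resample an RF line onto a 3x finer time grid via cubic splines."""
    t_in = np.arange(len(rf)) / fs_in
    t_out = np.arange(int(len(rf) * fs_out / fs_in)) / fs_out
    # The last couple of output samples fall a fraction of an input sample
    # past t_in[-1]; CubicSpline extrapolates over that sliver by default.
    return CubicSpline(t_in, rf)(t_out)

# Example: a 5 MHz Gaussian-windowed pulse sampled at 40 MHz
fs = 40e6
t = np.arange(256) / fs
pulse = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 3.2e-6) ** 2) / (2 * (0.5e-6) ** 2))
fine = upsample_rf(pulse)
print(len(pulse), len(fine))  # 256 768
```

The finer grid reduces the quantization of the beamforming delays, which is why the interpolation is applied before, not after, the delay-and-sum step.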
The SOS inside the phantom was measured to be 1580 m/s. The detailed images and lateral resolutions at each depth can be found in Appendix D.

Figure 2.12: Lateral resolution at different depths using the CIRS phantom (DRF @ 70 mm, SASB @ 20 mm, BiPBF @ 20 mm)

Comparison between the three methods

Using the same parameters as in Section 2.4.1, the lateral resolutions at different depths using the CIRS phantom are shown in Figure 2.12, and the beamformed images using the three methods are shown in Figure 2.13.

Observations: (1) From Figure 2.12, we observe that in the near field the BiPBF and DRF methods performed better than the SASB method, while in the far field the SASB method performed better than the other two. This is consistent with the PSF phantom simulation; the only difference is that the BiPBF method performed worse than the DRF method in the far field. (2) From Figure 2.13, we observe that the scatterers in the near field using the SASB method are more blurred than those using the BiPBF method, while the scatterers in the far field using the BiPBF method appear brighter than those beamformed using the SASB and DRF methods.

Discussions: (1) Evaluation metric: Since the FWHM resolution is used as the evaluation metric for the phantom, we are only comparing the sharpness of the image without considering the penetration. To evaluate the penetration or contrast performance of the three algorithms, different standard phantoms designed for this purpose need to be used. (2) Appearance of the beamformed image: From the beamformed images, the BiPBF method appears to perform better than the other two methods, while the FWHM plot shows the BiPBF method performing worst.
This comparison is not conclusive, partly because Time Gain Compensation (TGC) was not incorporated in the methods we implemented. The appearance of the images might change if TGC is used, or if another dynamic range is used to display the image.

Figure 2.13: The performance of the three methods with a transmit focal depth of 20 mm: (a) DRF, (b) SASB, (c) BiPBF

Effect of different receive beamforming f-numbers

The lateral resolutions using the BiPBF and SASB methods with different receive beamforming f-numbers are shown in Figure 2.14.

Figure 2.14: Lateral resolution using different receive beamforming f-numbers with the CIRS phantom (SASB and BiPBF, f-numbers 0.5, 1, and 1.5)

Observations: For both the BiPBF and SASB methods, the FWHM resolution worsens as the receive beamforming f-number is increased from 0.5 to 1.5.

Discussions: This result is consistent with the PSF simulation, where a smaller f-number results in a sharper lateral resolution.

Effect of different transmission focal depths

The lateral resolutions using the three methods with different transmission focal depths are shown in Figure 2.15.

Figure 2.15: Lateral resolution using different focal depths with the CIRS phantom (DRF, SASB, and BiPBF at 20 mm, 50 mm, and 70 mm)

Observations: From Figure 2.15, we observe that the penetration of the DRF method increased from 50 mm to 60 mm as the transmission focal depth was increased from 20 mm to 50 mm. The performance of the SASB method worsens as the focal depth is set deeper. The performance of the BiPBF method did not fluctuate as much when a deeper focal depth was chosen.
We also observed that when a focal depth of 70 mm is used as the transmission focal depth, the SASB method performs the worst among the three methods, especially in the near field.

2.4.4 Conclusions

In this section, we compared the performance of the DRF, BiPBF, and SASB methods using three different approaches.

The PSF phantom simulation showed that the SASB method has a better lateral resolution than the BiPBF method in the far field. However, the performance of the SASB method depends heavily on the beamforming parameters chosen within the model, such as the receive beamforming f-number and the focal depth: a shallow focal depth and a small f-number correspond to a better lateral resolution in the beamformed image.

The inclusion-occlusion phantom simulation showed that the SASB and BiPBF methods have similar contrast performance. It also showed that the SNR depends on the receive beamforming f-number, with a larger f-number corresponding to a higher SNR in the beamformed image.

The phantom measurement study showed that the SASB method achieves a better lateral resolution than the BiPBF method in experimental phantoms with noise. However, the SASB method did not perform as well in the near field, especially when the focal depth was set deeper.

Chapter 3

Speed-of-sound Estimation and Correction Using the Single Receive-focused Delay-and-Summed Image

3.1 Introduction

3.1.1 Motivation

Pulse-echo ultrasound imaging quality relies on an assumed knowledge of the speed-of-sound (SOS) inside the tissue being imaged. Typically, commercial medical ultrasound imaging systems assume that the SOS is 1540 m/s, and this value is used when the beamforming delays are calculated. However, the actual SOS in human tissues ranges from 1430 m/s, as recorded in the breast, to 1647 m/s, as recorded in the lenses of the eyes [26]. Beamforming with an incorrectly assumed SOS can significantly reduce the contrast and spatial resolution of an ultrasound image.
This was already demonstrated through simulations in Section 2.4, and a detailed analysis of the impact of SOS errors on beamforming can be found in [27]. In addition, there has also been interest in tissue characterization based on the SOS and its variation [28], [29]. For these reasons, there is a sizable body of literature on how to estimate the SOS of tissue and how to correct SOS beamforming errors.

A correct SOS assumption in the ultrasound system is paramount to the beamforming performance of the Synthetic Aperture Sequential Beamforming (SASB) method. As demonstrated in Section 2.4.1, the full width at half maximum (FWHM) resolution increased from 0.5 mm to 1.5 mm when a 6% SOS error was introduced into the system. As shown in the last chapter, an advantage of the SASB method is that it requires simple hardware; in particular, the first-stage single receive-focused delay-and-sum (srDAS) beamformer is inexpensive to implement. Recent research has considered implementing the srDAS beamforming step on a low-cost device [30], [16], and then performing the second-stage synthetic beamforming remotely on a more powerful computer. This is feasible only because the data that need to be transmitted between the two stages consist solely of the srDAS image, the output of the first-stage beamformer, which is small in size. The goal of this chapter is to devise an SOS estimation method that can be integrated into this low-cost realization of the SASB method, so that its performance can be improved in tissues with non-ideal SOS properties. The SOS estimation and correction steps should be implemented in the second stage on the more powerful computer, since the first-stage beamformer is intended to be kept simple. Since the only information transmitted to the second-stage beamformer is the srDAS image, most of the current SOS estimation and correction methods cannot be directly employed, because they require the per-channel radio frequency (RF) data.
An SOS estimation and correction algorithm based solely on the srDAS image is therefore needed to achieve this goal. To this end, we developed an SOS estimation and correction method that requires only the srDAS image, and we integrated it into the second stage of the SASB beamforming.

3.1.2 Previous research

A literature review on this topic was performed in 2009 [31], in which the authors divided SOS research into two categories: the estimation of the SOS from tissue properties, and the correction of SOS beamforming errors in order to improve image perception.

The earliest research on SOS estimation used a separate transmitter and receiver [32]: the time it takes an acoustic pulse to travel from the transmitter to the receiver is used to calculate the SOS in the tissue. The disadvantages of this method are twofold. First, the exact geometry of the phantom and the transducers needs to be known. Second, the signal-to-noise ratio (SNR) of the image is low, since only one element is active during transmission. In another classic method, called the dynamic receive method, dynamic receive beamforming is first performed using several assumed SOSs, and the measured SOS is the one whose beamformed image has the best imaging quality, such as the sharpest spatial resolution [33], [34] or the minimum variance [35]. Although these methods are intuitive and accurate, multiple transmissions need to be performed to measure the SOS (assuming that the per-channel RF data are not available), which reduces the imaging speed. One of the most influential SOS estimation methods was introduced by Anderson et al. in 1988 [36]. In this method, the authors employed a focused transmission and recorded per-channel RF data at each transducer element. The SOS estimate was obtained by fitting a polynomial to the delay profile of the backscattered signals and finding its curvature. Further research was based on this idea.
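The delay-profile fit at the heart of this family of methods can be illustrated in a few lines. The sketch below is not from the thesis or from [36]: it uses noise-free synthetic arrival times, and the true SOS, scatterer position, and array geometry are assumptions chosen for the example (the quantities fitted here are formalized as Equations 3.1-3.7 later in this chapter):

```python
# Illustrative sketch of the delay-profile fit: a scatterer at (z0, x0) in a
# medium with true SOS c produces an arrival-time profile t(xk); squaring the
# shifted profile gives a quadratic in xk whose curvature encodes the SOS.
import numpy as np

c_true, z0, x0 = 1500.0, 40e-3, 2e-3      # m/s, m, m (assumed values)
xk = np.linspace(-19e-3, 19e-3, 128)      # element positions of a 128-element array, m

t = (z0 + np.sqrt(z0**2 + (x0 - xk)**2)) / c_true  # arrival times after a focused transmit
p = t - t.min() / 2                       # t(xk) - t(xk')/2; the minimum approximates t(xk')
p1, p2, p3 = np.polyfit(xk, p**2, 2)      # fit p(xk)^2 = p1 xk^2 + p2 xk + p3

c_hat = 1 / np.sqrt(p1)                   # SOS from the curvature
x0_hat = -c_hat**2 * p2 / 2               # lateral position of the scatterer
z0_hat = np.sqrt(c_hat**2 * p3 - x0_hat**2)   # depth of the scatterer
print(round(c_hat, 1), round(x0_hat * 1e3, 2), round(z0_hat * 1e3, 2))
# 1500.0 2.0 40.0
```

With noise-free delays the fit recovers the SOS and the scatterer position almost exactly; the practical difficulty, addressed later in this chapter, is extracting the delay profile reliably from noisy data.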
In [37], the authors combined the polynomial fitting method with a virtual source and achieved better SOS estimation accuracy. In [38], the authors derived a closed-form solution for the SOS (instead of treating the polynomial fitting as a black box). Although this class of methods is fast, accurate, and has a high SNR, it requires the per-channel data from the ultrasound transducer, which is not always available from conventional ultrasound imaging systems. There are more advanced SOS estimation methods based on image registration [39], the spatial spectrum [26], and speckle analysis [40]. However, all of these methods need multiple frames of data to compute a single SOS estimate.

There are two main approaches to the correction of SOS errors in beamforming. In the first approach, the beamforming error is corrected via the SOS mapping obtained from a previous estimation step. If the tissue being scanned is homogeneous, the SOS correction is straightforward: the correct SOS is simply used in beamforming instead of the assumed one. However, there has been little research on beamforming correction based on SOS mapping [31] in more complex tissue models. One example is [41], where the authors proposed a brute-force method to correct the SOS error originating from a two-layer planar tissue model. In contrast, most research has focused on another approach: performing phase aberration correction without specific knowledge of the SOS mapping in the imaging field. As summarized in [42], this is done by correcting the distorted wavefront in the per-channel RF data. With this approach, the resulting beamformed images are sharper, since backscattered echoes from the same scatterers in the imaging field are added coherently.
However, this approach also requires the per-channel RF data in the machine.

3.1.3 Relationship to the SASB method

We introduce an SOS estimation and correction method with the goals of having an imaging speed, robustness in low-SNR environments, and accuracy similar to those of [36], while operating without the need for the pre-beamformed per-channel RF data.⁶ To be more specific, the proposed SOS estimation method takes the output of an srDAS beamformer and estimates the SOS around the bright scatterers inside the tissue. Since the output of the srDAS beamformer, also called the srDAS image, is also the output of the first-stage beamformer in the SASB method, the SOS estimation and compensation method can be used to improve the beamforming accuracy of the SASB method. In this way, the improved version of the SASB method should not only exhibit good spatial beamforming properties, but should also be resilient to SOS errors in the beamforming process. In [16], the group that proposed the SASB method suggested that the second stage of the SASB beamforming can be performed over a wireless network: an srDAS beamformer is physically attached to the ultrasound probe, the srDAS images formed by the first-stage beamformer are transmitted wirelessly to a remote server, and the second-stage beamforming is computed more efficiently on a more powerful machine. Since the proposed SOS estimation method only requires an srDAS image, it can be incorporated into the server side of this two-stage implementation and improve its beamforming performance.

⁶ As explained earlier, per-channel RF data are generally not available to most end users of ultrasound imaging systems. The SonixTouch ultrasound system used in our lab has a research interface that can be used to record a beamformed line at the aperture centre with custom delay profiles and apodization shapes; however, to gather per-channel RF data, a SonixDAQ data acquisition module is required. Since per-channel RF data acquisition modules, such as the SonixDAQ, are not available on most ultrasound machines, SOS estimation methods that require per-channel RF data are more difficult to implement widely.

3.1.4 SOS estimation and correction in a two-layer model

A major goal of the SOS estimation problem is to alleviate the image quality degradation encountered when scanning organs through the significant subcutaneous fat (adipose tissue) layer present in most patients. In [43], the authors examined 135 healthy young adults and found the thickness of their subcutaneous adipose layers to range from 1 mm to 72 mm. As discussed in a white paper by Philips Medical Systems (Eindhoven, The Netherlands) [44], when imaging overweight patients in liver exams, the SOSs differ between the subcutaneous adipose layer and the liver parenchyma, which results in poor spatial resolution and depth penetration. The white paper further proposed modelling the abdomen of an overweight patient as a two-layer model, in which the SOSs are assumed to be 1450 m/s in the fat layer and 1540 m/s in the liver layer. In this chapter, we adopt this two-layer model. However, to better accommodate the distinct characteristics of the type of tissue under examination and the variations in the SOS of the fat layer between patients, we will estimate the SOSs of both layers instead of assuming them to be 1450 m/s and 1540 m/s. Another aspect of the algorithm is that the proposed SOS estimation is only applicable when bright scatterers are present in the tissue.
Let the lateral position of the transducer elements be xk’s.Assume a short pulse is sent into the imaging field at time 0, then the time403.2. Theory and methods1 2 3 4 5 6 7 8 9 10QxzFigure 3.1: TOF calculation using the per-channel RF imageazimuth (samples)depth (samples)20 40 60 80 100 1201000200030004000500060007000800090001000011000Figure 3.2: An example of per-channel RF data, gathered after transmissionfocused at element 65413.2. Theory and methodsit takes for the transducer located at xk to receive this excitation ist(xk) =X1 +X2 +X3c=z0 +√z02 + (x0 − xk)2c. (3.1)Suppose there is an element k′ located at the lateral position x0 (which isthe same lateral position of the scatterer), then the time it takes for theexcitation to arrive at this element ist(xk′) =z0 +√z02 + (x0 − xk′)2c=2z0c, (3.2)which is also the shortest arrival time among the arrival times at all of thetransducer elements. That is, by locating the excitation impulses in theper-channel RF data gathered after a focused transmission, the arrival timeof the excitation at different transducer elements t(xk)’s can be found; andthe shortest arrival time among the t(xk)’s is t(xk′). After both t(xk)’s andt(xk′) are obtained from the per-channel RF signal, the new variable p(xk)p(xk) = t(xk)− t(xk′)2=√z02 + (x0 − xk)2c, (3.3)can also be obtained. The motivation for getting p(xk) is that it is thesquare-root of a polynomial. By taking the square of both sides, we findp(xk)2 =1c2(xk2−2x0xk+x02 +z02) = 1c2x2k−2c2x0xk+1c2(x20 +z02), (3.4)which is a standard quadratic function with respect to the lateral positionsof the transducers. Through curve fitting, we can recover the coefficients ofthis quadratic function, p1, p2, and p3, where p(xk) = p1xk2 + p2xk + p3.The underlying SOS can be estimated through the curvature (p1) of thequadratic function, wherecˆ =√1p1. (3.5)The lateral and axial position of the scatterer can further be estimated,using p2 and p3, wherexˆ0 = − cˆ2p22, (3.6)andzˆ0 =√cˆ2p3 − xˆ20. 
3.2.2 SOS estimation using the srDAS image

Modification of the delay profile estimation

Assuming that the wave propagation model with a virtual source from Section 2.2 is accurate, the above method can be modified to accommodate srDAS images. According to the srDAS assumption, after a focused transmission the wavefront first converges at the focal point, travels to the scatterer, backscatters toward the same focal point, and then returns to the aperture centre. Two of the sample paths are depicted in Figure 3.3, and an example of the received srDAS image is shown in Figure 3.4.

Figure 3.3: TOF calculation using the srDAS image

Figure 3.4: An example of srDAS image

Let the depth of the focal position be z_f and assume a short excitation is sent into the imaging field at time 0. Then the time it takes for the transducer located at x_k to receive this excitation is

t(x_k) = \frac{X_1 + X_3 + X_3 + X_1}{c} = \frac{2 z_f + 2\sqrt{(z_0 - z_f)^2 + (x_0 - x_k)^2}}{c}.    (3.8)

Similar to the case of SOS estimation using per-channel RF signals, let us define

q(x_k) = \frac{1}{2} t(x_k) - \frac{z_f}{c} = \frac{\sqrt{(z_0 - z_f)^2 + (x_0 - x_k)^2}}{c}.    (3.9)

Then, by taking the square of both sides, we get

q(x_k)^2 = \frac{1}{c^2} x_k^2 - \frac{2 x_0}{c^2} x_k + \frac{1}{c^2}\left(x_0^2 + (z_0 - z_f)^2\right).    (3.10)

By fitting a quadratic function q(x_k)^2 = q_1 x_k^2 + q_2 x_k + q_3, the SOS and the spatial coordinates of the scatterer can be estimated, where

\hat{c} = \sqrt{1/q_1},    (3.11)

\hat{x}_0 = -\frac{\hat{c}^2 q_2}{2},    (3.12)

and

\hat{z}_0 = \sqrt{\hat{c}^2 q_3 - \hat{x}_0^2} + z_f.    (3.13)

SOS estimation from an initial guess

In deriving q(x_k) in Equation 3.9, we assumed that the actual SOS c inside the z_f/c term is known. In practice it is not, and we need to use an initial guess c_0 in place of c in this step, so that q(x_k) = \frac{1}{2} t(x_k) - \frac{z_f}{c_0}. This estimate, however, is not accurate, since a guess c_0 of the speed of sound was incorporated in q(x_k). There are three cases of c_0 that need to be considered:

• Case 1: c_0 = c.
In this case

q(x_k)^2 = \left[\frac{1}{2} t(x_k) - \frac{z_f}{c}\right]^2 = \frac{1}{\hat{c}^2}\left[x_k^2 - 2 x_0 x_k + \left(x_0^2 + (z_0 - z_f)^2\right)\right].    (3.14)

Therefore \hat{c} = c, since \frac{1}{c^2}\left[x_k^2 - 2 x_0 x_k + (x_0^2 + (z_0 - z_f)^2)\right] = \left[\frac{1}{2} t(x_k) - \frac{z_f}{c}\right]^2.

• Case 2: c_0 < c. In this case

q(x_k)^2 = \left[\frac{1}{2} t(x_k) - \frac{z_f}{c_0}\right]^2 = \frac{1}{\hat{c}^2}\left[x_k^2 - 2 x_0 x_k + \left(x_0^2 + (z_0 - z_f)^2\right)\right].    (3.15)

Since \left[\frac{1}{2} t(x_k) - \frac{z_f}{c_0}\right]^2 < \left[\frac{1}{2} t(x_k) - \frac{z_f}{c}\right]^2, it follows that \frac{1}{\hat{c}^2}\left[x_k^2 - 2 x_0 x_k + (x_0^2 + (z_0 - z_f)^2)\right] < \frac{1}{c^2}\left[x_k^2 - 2 x_0 x_k + (x_0^2 + (z_0 - z_f)^2)\right]. Therefore \hat{c} > c > c_0.

• Case 3: c_0 > c. Similar to Case 2, we find \hat{c} < c < c_0.

From these three cases, we obtain three properties: (1) a correct initial guess of the SOS leads to a correct SOS estimate (identical to the initial guess); (2) an incorrect initial guess of the SOS leads to an incorrect SOS estimate, and the incorrect estimate is also different from the initial guess; (3) every time an initial guess of the SOS is applied, we learn whether it is an upper or a lower bound on the true SOS. Using these three properties, a more accurate SOS can be estimated by carefully selecting a sequence of initial guesses.

Iterative steps to find the correct SOS

Since every new initial guess of the SOS tells us whether it is an upper or a lower bound on the true SOS, an iterative method can be designed to find the true SOS. We propose to do this via a binary search over the space of all possible SOS values. We assume that the SOS inside the imaging field lies within 1540 ± 300 m/s. This range was chosen because it covers the range of all possible SOS's in the human body (1430 m/s to 1647 m/s, as introduced in Section 3.1) and leaves margins to accommodate abnormal tissues with unusual acoustic SOS's. We start with a lower bound of 1240 m/s and an upper bound of 1840 m/s. At
After each iteration, the gap between the upper boundand lower bound is at least half of its original value, since the initial guesswill become an upper or lower bound for the next iteration. The process isrepeated until the gap between the upper and lower bound is smaller thana 2 m/s, and the mid-point between the upper and lower bound will bereported as the estimated SOS in the imaging field.In Figure 3.5, we showed several sample path of SOS estimation usingthis iterative process. The underlying SOS in the image are 1300 m/s, 1400m/s, 1500 m/s, 1600 m/s, and 1700 m/s. For each of the SOS estimationproblems, the initial SOS guess is set to 1500m/s. The errors betweenthe current estimation of SOS and the underlying SOS at each iterationare plotted. From the graph, we observe that the errors between the SOSestimate and the true SOS drop to less than 1 m/s within 10 iterations.1 2 3 4 5 6 7 8 9 10−200−150−100−50050100150200IterationError in SOS Estimation (m/s)  1300 m/s1400 m/s1500 m/s1600 m/s1700 m/sFigure 3.5: Evolution of the errors in SOS estimation with different under-lying SOS’s463.2. Theory and methods3.2.3 Adaptation of the method for two-layer SOSestimationFormulation of a two-layer model SOS estimation problemAs introduced in Section 3.1, the major objective of our SOS estimation andcorrection algorithm is to correct the SOS beamforming error in the adiposelayer in abdomen scannings, arised from the SOS differernce between theadipose layer and the tissue layer. Therefore, the SOS estimation needs tobe accurate not only in the first (adipose) layer of the abdomen, but alsoin the second (tissue) layer. However, the SOS estimation algorithm pro-posed here, together with many other SOS esimation algorihtms, can notaccurately obtain good SOS estimations in the second layer of the abdoman.Modifications are required to make the SOS estimation in the second layermore accurate. In a subsequent publication by Anderson et al. 
[45], the authors acknowledged that their estimates in the second layer are not as accurate as those in the first layer. In [46], the authors modified their one-layer SOS estimation method to accommodate a two-layer model. In this section, we introduce the source of error in the two-layer estimation model and propose a solution to address it.

To demonstrate the inaccuracy of the two-layer SOS estimate, consider the following model. Suppose one scatterer is located at the depth z_0 = 90 mm in a single-layer phantom with the SOS c = 1540 m/s. If we assume the virtual source propagation model is true, the arrival-time profile at the different elements, using Equation 3.8, can be written as

t(x_k) = \frac{2 z_f + 2\sqrt{(z_0 - z_f)^2 + (x_0 - x_k)^2}}{c}.    (3.16)

If the same scatterer is instead located in a two-layer phantom, where the first layer has the SOS c_1 = 1450 m/s, the second layer has the SOS c_2 = 1540 m/s, and the separation is located at the depth z_d = 60 mm, the arrival-time profile has a different form; the detailed TOF calculation follows from Snell's law. In Figure 3.6, the arrival-time profiles at the different elements using the one-layer and two-layer models are plotted: the solid blue line represents the one-layer model and the dashed green line the two-layer model. Despite the resemblance in curvature of the two t(x_k) curves, only the solid curve leads to the correct SOS estimate. The dashed curve leads to an SOS estimate of 1480 m/s, which underestimates the SOS in this layer. To further examine the two-layer SOS estimation problem, we constructed the time profiles of the arrival echoes from 17 point scatterers spread
The SOS estimate using the arrival time profile for curves atdifferent depths using the one-layer assumption is plotted as the dashed redcurve. It can be observed that instead of having an abrupt change at theseparation of the two layers, the SOS estimate only changes gradually. Thisresults in the inaccuracy of SOS estimation in the second layer.−15 −10 −5 0 5 10 15121.5122122.5123123.5124124.5125125.5126lateral (mm)time (micro second)  Two−layer modelOne−layer modelFigure 3.6: Difference between the arrival time profile using the one-layerand two-layer modelSOS estimation in a two-layer phantom with a known location oflayer separationIn this section, we propose a simple compensation formula for a two-layermodel. The compensation scheme ignores the refraction effect, and assumesthat an acoustic wave travels in a straight line. We will assume the depthof layer separation is already known to be zs and the SOS in the first layerbe c1. In real implementation, the depth of layer separation can either bedetermined heuristically by the operator, or through software applications,such as the one described in [47]. We also devised an autonomous layer483.2. Theory and methodsseparation detection solely based on the one-layer SOS profile, however theerror in the system is often too large to provide reliable location of layerseparation (Although this method was not employed in the thesis, it is in-cluded in Appendix F). Let the apparent speed capp =1√q1, which is theSOS estimation from the srDAS image if a one-layer model is assumed, andthe estimated depth of the scatterer be zˆ0. Using this model, the true SOSin the second layer can be found ascˆ2 =(zˆ0 − zs)(zˆ0−zf )capp− (zs−zf )c1. (3.17)The derivation of this equation can be found in the Appendix E. 
This two-layer compensation method leads to a sharper SOS estimate in the two-layer model, as shown by the dotted blue curve in Figure 3.7.

Figure 3.7: SOS estimation in a two-layer phantom with and without the two-layer compensation method

In practice, to determine the two SOS's in a two-layer model, we first locate the depth of the layer separation and perform the SOS estimation under the one-layer assumption. The SOS estimate in the first layer, \hat{c}_1, is found by averaging the one-layer SOS estimates obtained in the first layer. Then the two-layer compensation model is applied, and the SOS estimate in the second layer, \hat{c}_2, is found by averaging the two-layer SOS's obtained in the second layer.

Figure 3.8: Flow diagram of the modified SOS detection algorithm

3.2.4 Parabola detection in noisy images

In the SOS estimation problem, it is not practical to extract the time profile of the parabola based solely on the peaks in the received data across all the receiving channels. Two common situations prevent us from obtaining a complete parabola from the per-channel peaks: (1) if the SNR of the system is low, some peaks in the receiving channels will be due to noise instead of the scatterer in the field; (2) if multiple scatterers are located in the field, the parabolas originating from them interfere with each other. To resolve this, we designed an SOS detection method based on 2D cross-correlation.
This algorithm can withstand randomly distributed noise generated by speckle in the imaging field, and it can be used when multiple scatterers are present in the field. (When echoes due to multiple scatterers are present, the current algorithm selects only the strongest echo among them and generates one SOS estimate; how to generate multiple SOS estimates when multiple echoes are present will be explored in future work.)

The procedure of the algorithm is shown in Figure 3.8. (a) Starting from the srDAS image, we first perform a Hilbert transform to detect the envelope of the received signal. Here, the Hilbert transform is equivalent to a low-pass filter, and it is widely used in the ultrasound image-forming process. (b) In the next step, all the local peaks in each receive channel are extracted. We chose to preserve all the local peaks instead of only the global maximum because, in an image with a low SNR, a weak received signal due to the scatterer might be 'buried' by a large reflection due to noise if only one global maximum is recorded. After the peak detection, all the points in the image that do not belong to a local peak are set to zero. (c) Up to this point, the y-axis of the image corresponds to the arrival time (as in Equation 3.8). In the next step, a non-linear scaling is performed on the y-axis, so that in the new image the y-axis corresponds to the units of the q(x_k)'s, as in Equation 3.9. Note that an initial guess of the SOS is used inside this step and may lead to inaccuracy in the SOS estimate. (d) Next, we extract the SOS information in the image by cross-correlating it with parabolas of different curvatures, which correspond to different underlying speeds in the image. The brute-force approach requires us to cross-correlate this image with parabolas corresponding to all the possible SOS's, with vertices located at every possible position in the image.
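A heavily simplified sketch of steps (b)-(d) is given below. Instead of a full 2D cross-correlation, it scores each candidate parabola by accumulating peak-map energy along it (a Hough-transform-style stand-in that keeps the example short); the axes, grid sizes, and the single noise-free echo are all made-up example values.

```python
import numpy as np

def score_parabola(peak_map, x_axis, y_axis, c, x0, y0):
    """Sum peak-map energy along the parabola y = y0 + (x - x0)**2 / c**2,
    i.e. the q^2-versus-x curve of a candidate speed c (step (d))."""
    y = y0 + (x_axis - x0) ** 2 / c ** 2
    rows = np.clip(np.searchsorted(y_axis, y), 0, len(y_axis) - 1)
    return peak_map[rows, np.arange(len(x_axis))].sum()

# Synthetic peak map: one ideal echo at x0 = 0, (z0 - zf) = 50 mm,
# c = 1500 m/s, drawn in (x, q^2) coordinates after the scaling of step (c)
x_axis = np.linspace(-0.02, 0.02, 101)
y_axis = np.linspace(1.0e-9, 1.4e-9, 400)
c_true, dz = 1500.0, 0.05
y_true = (dz ** 2 + x_axis ** 2) / c_true ** 2
peak_map = np.zeros((len(y_axis), len(x_axis)))
peak_map[np.clip(np.searchsorted(y_axis, y_true), 0, 399),
         np.arange(len(x_axis))] = 1.0

# Brute-force search over candidate speeds and vertex offsets
candidates = np.arange(1400.0, 1601.0, 10.0)
vertex_grid = np.linspace(1.05e-9, 1.2e-9, 601)
scores = [max(score_parabola(peak_map, x_axis, y_axis, c, 0.0, y0)
              for y0 in vertex_grid) for c in candidates]
best_c = candidates[int(np.argmax(scores))]
```

The double loop over candidate speeds and vertex positions mirrors the brute-force search described above; the wrong-speed parabolas only match the peak map near the vertex, so the true speed accumulates the highest score.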
However, better search strategies can be designed to reduce the number of computations. The SOS estimate is the SOS whose parabola yields the highest cross-correlation among all the parabolas. (Steps (a)-(d) together perform the role of curve extraction. Many packages for curve extraction are available in image processing and computer vision; we chose this method because it is computationally efficient.) (e) As explained above, this SOS estimate is only accurate if it is the same as the initial guess of the SOS used in the non-linear scaling step. Therefore, the new SOS estimate serves as the new initial guess of the SOS, and the above procedure is repeated until the SOS estimate is within the required accuracy (set to 1 m/s in this thesis).

3.2.5 Two-layer phantom generation in Field II

Because Field II is a linear time impulse response simulator, it cannot directly model phantoms with multiple SOS layers. However, since Field II is completely linear, the superposition principle applies. Two linear properties that can be exploited in order to simulate a two-layer model are the superposition of the scatterers and the superposition of the transducer elements: (1) the per-channel RF signal obtained by simulating a PSF with two scatterers is the same as the sum of the per-channel RF signals obtained from two individual simulations in which the PSF contains only one of the scatterers; (2) the per-channel RF signal obtained from a simulation with two active transducer elements is the same as the sum of two simulations in each of which only one of the two elements is active.

In Figure 3.9, we model a transmission using one single transducer element (element 17), with only one scatterer present in the phantom (point Q). Let us assume the SOS in the top layer is c_1, and the SOS
Since Field II models the radiation from thetransducer element as a spherical wave, and there is only one scatterer inthe phantom, only one spatial impulse is received at the transducer, whichis due to the backscattering of the only scatterer in the field. We start bysetting up a Field II simulation with a one-layer phantom, where the SOS(in both layers) equals to c1. Field II assumes the TOF path of the onelayer model follows A → Q → D. However, if the layer separation exists,the actual TOF in the two layer model follows A→ B → Q→ C → D. Themain idea of this method is to convert the per-channel RF data from thissimple one-layer simulation to a two-layer model by manually adding delaysto the received per-channel RF signal at each channel. This procedure wasinspired by the two-layer in-house simulator developed in [46], and is only asimplification of the real pulse-echo imaging scenario. Q1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18ABCDFigure 3.9: Propagation of sound in a two layer phantomFor each single element excitation using the one-layer phantom in Field523.2. Theory and methodsII, 128 per-channel RF data are gathered at all the transducer elements. Toconvert these per channel RF data to the two-layer model, we need to shiftthe per-channel data with a time shift ts. If the we know the location ofthe transmitting element (xa, ya), the location of the scatterer (xq, yq), andthe location of the receiving element (xd, yd), the time shift can be found byequationts = t1layer − t2layer (3.18)=|AQ|+ |QD|c1− ( |AB|+ |CD|c1+|BQ|+ |QC|c2) (3.19)The length of |AB|, |CD|, |BQ|, and |QC| can be found by Snell’s law.To simulate the data acquisition using a focused transmission, multipletransmissions, where the number equal to the size of the transmit aperture(Figure 3.9 shows a transmit aperture with 5 elements), need to be simulatedand combined. 
To simulate a PSF phantom with 10 point scatterers, 10 simulations need to be run separately and then combined.

3.2.6 Beamforming correction in a two-layer phantom using the SOS estimates

Here, we adopt a method for SOS correction in a two-layer phantom based on the depth of the layer separation and the SOS estimates on both sides of it. Suppose that after the first-stage beamforming we obtain an srDAS image S ∈ R^{J×I} with J entries in depth. Also suppose that, from the two-layer SOS estimation step, the layer separation was found to be at z_s and the SOS's in the two layers were found to be c_1 and c_2, and that in the first-stage beamforming an SOS of c_0 = 1540 m/s was assumed. This method consists of three steps:

Step 1: First, we locate the layer separation and split the srDAS image into two parts. The first part of the srDAS data, S_1 ∈ R^{L×I}, corresponds to the layer with the SOS c_1, and the second part, S_2 ∈ R^{(J−L)×I}, corresponds to the layer with the SOS c_2. L can be found from

L = \frac{z_s \cdot 2 \cdot f_s}{c_1}.    (3.20)

Thereafter, the first L rows of the matrix S make up the new matrix S_1 and the last (J − L) rows make up the new matrix S_2.

Step 2: In S_1, the spatial length between two neighbouring samples corresponds to d_{s1} = \frac{c_1}{2 f_s}, while in S_2 it corresponds to d_{s2} = \frac{c_2}{2 f_s}. However, for the second-stage beamforming using c_0 to be accurate, we want d_{s1} = d_{s2} = \frac{c_0}{2 f_s}. This requires re-sampling S_1 and S_2 in the depth direction. After re-sampling, the number of entries in depth in S'_1 is L·c_1/c_0, while the number of entries in depth in S'_2 is (J − L)·c_2/c_0.

Step 3: After the re-sampling, we stitch the two layers S'_1 and S'_2 back together, and the normal second-stage beamforming with the SOS assumption c_0 can subsequently be performed.

3.2.7 Simulation setup

The simulation part of the SOS estimation procedure is performed using the Field II toolbox.
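Before moving to the simulation setup, the three-step correction of Section 3.2.6 can be sketched in NumPy. The function name, the linear-interpolation resampler, and all parameter values are illustrative assumptions, not the scanner's actual implementation.

```python
import numpy as np

def correct_two_layer(S, zs, c1, c2, c0, fs):
    """Steps 1-3 of Section 3.2.6: split the srDAS data at the layer
    separation, re-sample each layer so one sample again spans c0/(2*fs)
    in depth, and stitch the layers back together."""
    L = int(round(zs * 2 * fs / c1))          # Eq. 3.20: rows in layer 1
    def resample(block, c):
        n_new = int(round(block.shape[0] * c / c0))
        idx = np.linspace(0, block.shape[0] - 1, n_new)
        base = np.arange(block.shape[0])
        return np.vstack([np.interp(idx, base, block[:, j])
                          for j in range(block.shape[1])]).T
    return np.vstack([resample(S[:L], c1), resample(S[L:], c2)])

# Shape check with made-up parameters: 1000 depth samples, 8 lines,
# interface at 30 mm, fs = 20 MHz
S = np.zeros((1000, 8))
S_corr = correct_two_layer(S, 0.03, 1450.0, 1540.0, 1540.0, 20e6)
# With c1 = c2 = c0 the correction reduces to an identity
S_id = correct_two_layer(np.eye(500, 4), 0.01, 1540.0, 1540.0, 1540.0, 20e6)
```

Note that the slower first layer is compressed (fewer rows after re-sampling), which is exactly the axial re-scaling the second-stage beamformer needs before applying its single-speed delay equations.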
First, we simulated a one-layer phantom with added noise, to test the accuracy of the one-layer SOS estimation algorithm and its resilience to noise. Then, we simulated several two-layer phantoms whose first layers have variable thickness, to test the two-layer SOS estimation algorithm.

One-layer SOS estimation setup

The performance of the SOS estimation algorithm was tested at three different depths: 45 mm, 65 mm, and 85 mm. At each depth, a phantom consisting of 10,000 randomly distributed scatterers was built in order to achieve fully developed speckle. At the depth of interest, a point scatterer with amplitude a = m·σ was added, where σ² is the variance of the amplitude of the randomly distributed scatterers and m is a number that represents the level of noise in the image. Two simulations were set up with two different m's: the simulation with m = 20 corresponds to a lower level of noise, while the simulation with m = 10 corresponds to a higher level of noise. Three sets of srDAS data were generated, with three different speeds: 1450 m/s, 1540 m/s, and 1620 m/s. The assumed SOS in the system is 1540 m/s, and the first-stage beamforming of all the images was conducted with this assumed SOS. The estimated SOS is compared with the underlying SOS inside the phantom. Two sets of second-stage beamforming were implemented, one with the assumed SOS of 1540 m/s and the other with the SOS estimate, and their lateral resolution performance was compared.

Two-layer SOS estimation setup

To evaluate the accuracy of the proposed algorithm in a two-layer phantom, five sets of experiments on two-layer phantoms with different first-layer thicknesses were performed using Field II. All phantoms contain 12 scatterers arranged in a single vertical line. The phantoms consist of two layers, where the first layer has the SOS c_1 = 1450 m/s, while the second layer has the SOS c_2 = 1540 m/s.
The SOS around each scatterer was first computed using a one-layer model, then converted to the two-layer model based on the true location of the layer separation. Five simulations were performed, with the layer separation located at 40 mm, 50 mm, 60 mm, 70 mm, and 80 mm, using the two-layer phantom generation method introduced in Section 3.2.5. No noise was added to the phantoms because the two-layer phantom generation is computationally intensive, which makes generating 10,000 scatterers with the two-layer model computationally infeasible.

3.2.8 Experimental setup

One-layer measurement using the CIRS phantom

To validate the one-layer SOS estimation method on real ultrasound scans, we applied the SOS estimation and compensation method to the CIRS phantom (the same phantom as used in Chapter 2). As stated before, the phantom has undergone some ageing degradation, so the SOS inside the phantom is no longer exactly the calibrated 1540 m/s. The SOS inside the CIRS phantom was measured to be 1580 m/s, by beamforming the image with a set of different SOS assumptions and selecting the SOS assumption that led to the sharpest spatial resolution. The SOS was then estimated from the srDAS data. First, the srDAS images were beamformed with an assumed SOS of 1540 m/s. From the srDAS image, we manually located and highlighted, using coloured rectangles, the echoes reflected by strong reflectors. These echoes were then used to calculate the SOS in the phantom. The starting point of the recursive SOS guess was set to c_1 = 1540 m/s.

Two-layer measurement using an in-house phantom

A two-layer in-house phantom was made to match the two-layer simulation setup of the previous section. The phantom, shown in Figures 3.10 and 3.11, resembles an N-fiducial ultrasound calibration phantom. It has a Plexiglas container as its housing, with dimensions 19 × 13.5 × 14 cm³. Seven horizontal nylon fishing lines are fixed tightly across the front
and back walls of the container through drilled holes, where the vertical distance between neighbouring holes is 10 ± 1 mm. The phantom is then filled with two layers of agar-glycerol mixture, with two different underlying SOS's. To emulate the SOS difference between the subcutaneous fat layer and the tissue layer, an SOS difference of 90 m/s between the two layers of the phantom is desired. Since an SOS of 1450 m/s is hard to achieve with agar mixtures, we chose to build two layers of agar phantom with SOS's of 1500 m/s and 1590 m/s. The chemical composition of the two layers, together with the theoretical SOS's, is shown in Table 3.1.

Table 3.1: Chemical composition of the two layers of the agar-glycerol phantom

Layer    Glycerol content (%)   Agar content (%)   Ideal SOS (m/s)          Measured SOS (m/s)
Top      0                      3                  1500 [48]                1500
Bottom   20                     3                  greater than 1570 [49]   1590

The measured SOS inside the phantom was obtained by beamforming the per-channel RF data with 20 different SOS's (1450 m/s to 1650 m/s, with a 10 m/s increment) and selecting the SOS that led to the best lateral resolution.

The lateral axis of the ultrasound transducer was placed perpendicular to the front wall, so that each fishing line appears as a single dot in the 2D ultrasound image. A photo of the experimental setup is shown in Figure 3.12. To model the variable thickness of the fat layer, one 2 cm thick agar block with the same chemical composition as the top layer was placed on top of the agar-glycerol phantom.

In the experiment, we first estimated the SOS in the first layer, and then used the one-layer SASB method to locate the separation depth between the two layers. Then, the srDAS method was used to find the one-layer SOS at all scatterers. Finally, the depth of the layer separation and the one-layer SOS's at different depths were used to calculate the two-layer SOS's at different depths. The estimated SOS's were then used to perform a more accurate version of the SASB method, the two-layer SASB method.
To compare the performance of the improved SASB, we compared the beamforming results with the DRF and SASB methods using an assumed SOS. Since in this phantom setup the bottom layer corresponds to the tissue layer and has an SOS of 1590 m/s, the DRF and SASB methods assumed an SOS of 1590 m/s, for a fair comparison.

Figure 3.10: Demonstration of the two-layer in-house phantom, viewed from the front

Figure 3.11: Demonstration of the two-layer in-house phantom, viewed from the side

Figure 3.12: Experimental setup of the measurement of the two-layer in-house phantom

3.3 Results

3.3.1 Simulation results

One-layer simulation

The results of the one-layer simulation study with added noise are shown in Tables 3.2 and 3.3. In Table 3.2, the magnitude of the point scatterers is 20 times the standard deviation of the noise amplitude, while in Table 3.3 it is 10 times.

Discussions

The SOS estimates in the one-layer simulation using the proposed method are accurate to within about 2%. However, in several rows of Tables 3.2 and 3.3 the improvement in the lateral resolution is almost zero. This is because in those cases the underlying SOS in the phantom equals the assumed SOS of the system; since the beamforming quality cannot be further improved there, these rows do not reflect the performance of the SOS estimation algorithm.

The axial resolution depends on the SOS used in the conversion from time into distance. For example, consider a beamformed image in which the FWHM of a point target is 10 samples in the axial direction. In evaluating this FWHM in length (mm), the underlying SOS in the system is needed.
Using an SOS of 1540 m/s and a sampling rate of 120 MHz, this resolution is interpreted as 0.6416 mm, while using an SOS of 1450 m/s it is interpreted as 0.6041 mm. Therefore, the axial resolution is not a good evaluation metric for the beamforming performance in this scenario, and only the improvement in the lateral direction will be compared.

Section 3.1 stated that the range of possible SOS's in human tissue is 1430 m/s to 1647 m/s, a span of 217 m/s. The largest estimation error here was 22 m/s, i.e., about 10% of the range of possible SOS's. This shows that the 2% figure is somewhat deceptive in understanding the performance of the algorithm: 2% seems small, but it is large considering that the range of possible SOS's in the human body is already limited. Future work is required on analysing the SOS estimation accuracy.

Table 3.2: One-layer simulation result of the SOS estimation and compensation when the amplitude of the point scatterers is 20 times the standard deviation of the noise amplitude

Depth  Actual SOS  Est. SOS  SOS error  Lat. res.    Lat. res.   Lat. res.    Ax. res.     Ax. res.    Ax. res.
(mm)   (m/s)       (m/s)     (%)        before (mm)  after (mm)  improv. (%)  before (mm)  after (mm)  improv. (%)
45     1450        1466       1.103     1.012        0.51634      48.9796     0.154        0.1466       4.805
45     1540        1525      -0.974     0.51634      0.55765      -8          0.20533      0.20333      0.974
45     1620        1598      -1.358     0.95007      0.5783       39.1304     0.154        0.1598      -3.766
65     1450        1466       1.103     1.2805       0.55765      56.4516     0.154        0.1466       4.805
65     1540        1541       0.065     0.55765      0.55765      0           0.154        0.1541      -0.064
65     1620        1622       0.123     1.3218       0.59896      54.6875     0.20533      0.21627     -5.324
85     1450        1443      -0.482     1.1566       0.68157      41.0714     0.20533      0.1924       6.298
85     1540        1541       0.065     0.70222      0.70222      0           0.154        0.1541      -0.065
85     1620        1625       0.308     1.2805       0.74353      41.9355     0.154        0.1625      -5.519

Table 3.3: One-layer simulation result of the SOS estimation and compensation when the amplitude of the point scatterers is 10
times the standard deviation of the noise amplitude

Depth  Actual SOS  Est. SOS  SOS error  Lat. res.    Lat. res.   Lat. res.    Ax. res.     Ax. res.    Ax. res.
(mm)   (m/s)       (m/s)     (%)        before (mm)  after (mm)  improv. (%)  before (mm)  after (mm)  improv. (%)
45     1450        1461       0.758     1.012        0.51634      48.9796     0.154        0.1461       5.129
45     1540        1554       0.909     0.51634      0.51634      0           0.20533      0.2072      -0.909
45     1620        1610      -0.617     0.92941      0.55765      40          0.154        0.161       -4.545
65     1450        1447      -0.206     1.4458       0.53699      62.8571     0.154        0.1447       6.039
65     1540        1528      -0.779     0.5783       0.5783       0           0.154        0.1528       0.779
65     1620        1611      -0.555     1.3218       0.61961      53.125      0.20533      0.2148      -4.610
85     1450        1472       1.517     1.0327       0.68157      34          0.20533      0.19627      4.415
85     1540        1535      -0.324     0.68157      0.68157      0           0.154        0.1535       0.324
85     1620        1632       0.740     1.2392       0.74353      40          0.154        0.1632      -5.974

Two-layer simulation results

This section demonstrates the performance of the two-layer SOS estimation using Field II simulations. The estimated SOS's at different depths in the image and the estimated SOS's of the two-layer model, together with the estimation errors, are shown in Tables 3.4 to 3.8. To help visualize the SOS estimation, the SOS profiles of the one-layer estimation, the two-layer estimation, and the two selected SOS's of the two-layer model are plotted in Figures 3.13 to 3.17. In each of the five cases, the dashed red curve represents the one-layer measurements, the dotted blue curve the two-layer measurements, and the solid green curve the SOS parameters of the two-layer model. Finally, after the two selected speeds are applied for SOS correction, the lateral resolutions of the corrected SASB method are compared to those of DRF and SASB using the assumed SOS in Tables 3.9 to 3.13.
Figure 3.13: Two-layer simulation result of SOS profile estimation with the layer separation at 40 mm

Table 3.4: Two-layer simulation result of SOS estimation accuracy with the layer separation at 40 mm

Depth (mm)        SOS estimate (m/s)   True SOS (m/s)   Error in SOS estimation (%)
34.73             1438                 1450             -0.83
44.79             1514                 1540             -1.69
54.57             1496                 1540             -2.86
64.79             1512                 1540             -1.82
75.49             1537                 1540             -0.19
84.89             1520                 1540             -1.30
95.29             1531                 1540             -0.58
105.97            1544                 1540              0.26
113.76            1506                 1540             -2.21
First layer c_1   1438                 1450             -0.83
Second layer c_2  1520                 1540             -1.30

Figure 3.14: Two-layer simulation result of the SOS profile estimation with the layer separation at 50 mm

Table 3.5: Two-layer simulation result of the SOS estimation accuracy with the layer separation at 50 mm

Depth (mm)        SOS estimate (m/s)   True SOS (m/s)   Error in SOS estimation (%)
34.73             1438                 1450             -0.83
44.68             1439                 1450             -0.76
54.80             1542                 1540              0.13
64.47             1500                 1540             -2.60
75.13             1535                 1540             -0.32
85.84             1556                 1540              1.04
94.87             1525                 1540             -0.97
105.31            1536                 1540             -0.26
115.51            1536                 1540             -0.26
First layer c_1   1439                 1450             -0.79
Second layer c_2  1532                 1540             -0.46
[Figure 3.15 (depth (mm) vs. SOS (m/s); same curves as Figure 3.13): Two-layer simulation result of the SOS profile estimation with the layer separation at 60 mm]

Table 3.6: Two-layer simulation result of the SOS estimation accuracy with the layer separation at 60 mm

  Depth (mm)       SOS estimation (m/s)  True SOS (m/s)  Error in SOS estimation (%)
  34.73            1438                  1450            -0.83
  44.68            1439                  1450            -0.76
  54.71            1442                  1450            -0.55
  64.71            1544                  1540             0.26
  74.96            1541                  1540             0.06
  85.07            1538                  1540            -0.13
  95.80            1560                  1540             1.30
  104.50           1518                  1540            -1.43
  114.64           1524                  1540            -1.04
  First layer c1   1440                  1450            -0.71
  Second layer c2  1538                  1540            -0.16

[Figure 3.16 (depth (mm) vs. SOS (m/s); same curves as Figure 3.13): Two-layer simulation result of the SOS profile estimation with the layer separation at 70 mm]

Table 3.7: Two-layer simulation result of the SOS estimation accuracy with the layer separation at 70 mm

  Depth (mm)       SOS estimation (m/s)  True SOS (m/s)  Error in SOS estimation (%)
  34.73            1438                  1450            -0.83
  44.68            1439                  1450            -0.76
  54.71            1442                  1450            -0.55
  64.48            1438                  1450            -0.83
  75.04            1632                  1540             5.97
  84.94            1553                  1540             0.84
  95.11            1548                  1540             0.52
  105.89           1570                  1540             1.95
  113.90           1504                  1540            -2.34
  First layer c1   1439                  1450            -0.74
  Second layer c2  1541                  1540             0.07
[Figure 3.17 (depth (mm) vs. SOS (m/s); same curves as Figure 3.13): Two-layer simulation result of the SOS profile estimation with the layer separation at 80 mm]

Table 3.8: Two-layer simulation result of the SOS estimation accuracy with the layer separation at 80 mm

  Depth (mm)       SOS estimation (m/s)  True SOS (m/s)  Error in SOS estimation (%)
  34.73            1438                  1450            -0.83
  44.68            1439                  1450            -0.76
  54.71            1442                  1450            -0.55
  64.48            1438                  1450            -0.83
  75.02            1450                  1450             0.00
  85.14            1668                  1540             8.31
  95.05            1562                  1540             1.43
  104.48           1519                  1540            -1.36
  115.56           1559                  1540             1.23
  First layer c1   1441                  1450            -0.59
  Second layer c2  1577                  1540             2.40

Table 3.9: Improvement in the lateral resolution after using the SOS correction in the two-layer simulated phantom with the layer separation at 40 mm

  Depth (mm)  One-layer DRF (mm)  One-layer SASB (mm)  Two-layer SASB (mm)  Improvement from SASB (%)  Improvement from DRF (%)
  5           0.59900             0.64030              0.64030               0.00                      -6.89
  15          1.01200             0.64030              0.55760              12.92                      44.90
  25          1.01200             0.76420              0.51630              32.44                      48.98
  35          1.01200             0.97070              0.51630              46.81                      48.98
  45          1.09460             1.09460              0.55760              49.06                      49.06
  55          1.30120             1.17730              0.59900              49.12                      53.97
  65          1.42510             1.13590              0.59900              47.27                      57.97
  75          1.71430             0.88810              0.64030              27.90                      62.65
  85          1.83820             0.97070              0.68160              29.78                      62.92
  95          2.12730             0.88810              0.76420              13.95                      64.08
  105         2.37520             0.92940              0.80550              13.33                      66.09
  115         2.49910             0.97070              0.88810               8.51                      64.46

Table 3.10: Improvement in the lateral resolution after using the SOS correction in the two-layer simulated phantom with the layer separation at 50 mm

  Depth (mm)  One-layer DRF (mm)  One-layer SASB (mm)  Two-layer SASB (mm)  Improvement from SASB (%)  Improvement from DRF (%)
  5           0.59900             0.64030              0.64030               0.00                      -6.89
  15          1.01200             0.64030              0.55760              12.92                      44.90
  25          1.01200             0.76420              0.51630              32.44                      48.98
  35          1.01200             0.97070              0.51630              46.81                      48.98
  45          1.13590             1.17730              0.51630              56.15                      54.55
  55          1.30120             1.30120              0.64030              50.79                      50.79
  65          1.42510             1.38380              0.72290              47.76                      49.27
  75          1.67290             1.21860              0.68160              44.07                      59.26
  85          1.87950             0.92940              0.68160              26.66                      63.74
  95          2.08600             1.05330              0.80550              23.53                      61.39
  105         2.37520             0.92940              0.80550              13.33                      66.09
  115         2.58170             1.01200              0.88810              12.24                      65.60

Table 3.11: Improvement in the lateral resolution after using the SOS correction in the two-layer simulated phantom with the layer separation at 60 mm

  Depth (mm)  One-layer DRF (mm)  One-layer SASB (mm)  Two-layer SASB (mm)  Improvement from SASB (%)  Improvement from DRF (%)
  5           0.59900             0.64030              0.64030               0.00                      -6.89
  15          1.01200             0.64030              0.55760              12.92                      44.90
  25          1.01200             0.76420              0.51630              32.44                      48.98
  35          1.01200             0.97070              0.51630              46.81                      48.98
  45          1.13590             1.17730              0.51630              56.15                      54.55
  55          1.34250             1.38380              0.51630              62.69                      61.54
  65          1.42510             1.50770              0.72290              52.05                      49.27
  75          1.71430             1.46640              0.76420              47.89                      55.42
  85          1.83820             1.30120              0.72290              44.44                      60.67
  95          2.16860             1.01200              0.76420              24.49                      64.76
  105         2.29260             1.13590              0.88810              21.82                      61.26
  115         2.49910             1.05330              0.88810              15.68                      64.46

Table 3.12: Improvement in the lateral resolution after using the SOS correction in the two-layer simulated phantom with the layer separation at 70 mm

  Depth (mm)  One-layer DRF (mm)  One-layer SASB (mm)  Two-layer SASB (mm)  Improvement from SASB (%)  Improvement from DRF (%)
  5           0.59900             0.64030              0.64030               0.00                      -6.89
  15          1.01200             0.64030              0.55760              12.92                      44.90
  25          1.01200             0.76420              0.51630              32.44                      48.98
  35          1.01200             0.97070              0.51630              46.81                      48.98
  45          1.13590             1.17730              0.51630              56.15                      54.55
  55          1.34250             1.38380              0.51630              62.69                      61.54
  65          1.46640             1.59030              0.55760              64.94                      61.97
  75          1.67290             1.63160              0.88810              45.57                      46.91
  85          1.83820             1.50770              0.92940              38.36                      49.44
  95          2.08600             1.42510              0.92940              34.78                      55.45
  105         2.37520             1.05330              0.88810              15.68                      62.61
  115         2.45780             1.34250              1.09460              18.47                      55.46
Table 3.13: Improvement in the lateral resolution after using the SOS correction in the two-layer simulated phantom with the layer separation at 80 mm

  Depth (mm)  One-layer DRF (mm)  One-layer SASB (mm)  Two-layer SASB (mm)  Improvement from SASB (%)  Improvement from DRF (%)
  5           0.59900             0.64030              0.64030               0.00                      -6.89
  15          1.01200             0.64030              0.55760              12.92                      44.90
  25          1.01200             0.76420              0.51630              32.44                      48.98
  35          1.01200             0.97070              0.51630              46.81                      48.98
  45          1.13590             1.17730              0.51630              56.15                      54.55
  55          1.34250             1.38380              0.51630              62.69                      61.54
  65          1.46640             1.59030              0.55760              64.94                      61.97
  75          1.67290             1.67290              0.59900              64.19                      64.19
  85          1.83820             1.63160              0.97070              40.51                      47.19
  95          2.08600             1.59030              1.05330              33.77                      49.51
  105         2.29260             1.42510              1.09460              23.19                      52.26
  115         2.62300             1.13590              1.01200              10.91                      61.42

Discussions

From Table 3.4 to Table 3.8, we observe that 39 of the 45 (86%) estimates achieved an accuracy better than 2%. The average absolute error of the estimates is 1.22%.

As can be observed in the SOS profile figures (Figure 3.13 to Figure 3.17), both the one-layer and two-layer SOS estimates (the red dashed and blue dotted curves) have a larger variance in the second layer than in the first layer. This is because the transformed parabolas q(x_k), derived from the arrival-time profiles in the second layer, are not perfect parabolas. Therefore, the SOS estimates in the second layer are not as accurate as those in the first layer.

Another point to notice is that in Figure 3.16 and Figure 3.17, the two-layer estimation profile overshoots at the scatterer right behind the layer separation, which makes the SOS estimate at this point less accurate. The reason behind this phenomenon has yet to be explored.
One solution to this problem is to ignore the scatterer immediately behind the layer separation when calculating the SOS in the second layer.

From Table 3.9 to Table 3.13, after using the estimated SOS in both layers to correct the received srDAS image, the beamformed resolution was improved at every depth, except for the one scatterer located in the near field; this is because the performance of SASB is poor in the near field, as was demonstrated in the previous chapter. The improvement was more apparent in the near field than in the far field, mainly for two reasons: (1) the difference between the assumed SOS and the underlying SOS was larger in the near field than in the far field; (2) the resolution in the far field is worse even when the correct SOS is used, and therefore could not be improved as much by a corrected SOS estimate.

3.3.2 Experimental results

SOS estimation using the one-layer CIRS phantom

In Figure 3.18, the srDAS image gathered using the CIRS phantom is shown. The estimates obtained from echoes in this image are given in Table 3.14.

Discussions

We observe that the mean of the five estimates is μ_c = 1591.2 m/s and the standard deviation is σ_c = 8.87 m/s. The deviation from the true SOS inside the phantom is Δc/c = 0.71%.

[Figure 3.18 (azimuth (samples) vs. depth (samples)): srDAS image of the one-layer CIRS phantom]

Table 3.14: Results of SOS estimation in the one-layer CIRS phantom

  Color of the square  Depth (mm)  SOS estimation from srDAS (m/s)
  Green                26.12       1583
  Magenta              26.7        1598
  Yellow               26.29       1588
  Blue                 35.29       1584
  Red                  44.87       1603
  Average              -           1591.20
  Standard deviation   -           8.87

SOS estimation using the two-layer in-house phantom

The srDAS images gathered from the in-house phantom are shown in Figure 3.19 and Figure 3.20. The two-layer SOS estimates associated with each of the highlighted curves, together with their accuracy, are shown in Table 3.15 and Table 3.16.
To help visualize the accuracy of the SOS estimation, the SOS profiles of the one-layer estimation, the two-layer estimation, and the two-layer model are shown in Figure 3.21 and Figure 3.22. Finally, the performance of the SOS correction algorithm using the two-layer model is shown in Table 3.17 and Table 3.18. The beamformed images using the three models are shown in Appendix G.

Discussions

Compared to the simulated results from Section 3.3.1, the SOS estimation in the two-layer model resulted in a larger error. This is partially because some sources of error in the experimental phantom are not present in the Field II simulations. For example, the in-house phantom we made is not completely uniform, so its SOS may vary with temperature and density. The fishing lines are not ideal reflectors, so the echo curves they reflect are not as smooth as those in the CIRS phantom or from Field II point sources. Another general limitation is that there is no perfect gold standard for the SOS measurement inside the phantom.

From Figure 3.21 and Figure 3.22, we find that the SOS is over-estimated at the scatterers immediately behind the layer separation. This is consistent with the simulation results. The reason behind this phenomenon still needs to be explored in future work.

Table 3.17 and Table 3.18 show that the SOS correction led to a better lateral resolution in the two-layer SASB algorithm, compared to the beamforming algorithms using an assumed SOS. The improvement in lateral resolution is more apparent in the phantom with the thicker fat layer. This is also consistent with the simulation study.
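The estimation principle behind these results can be illustrated with a simplified monostatic point-scatterer model (an illustration only, not the thesis's exact srDAS formulation; `estimate_sos` is a hypothetical name). For a scatterer at depth z, the round-trip arrival time across lateral positions x is t(x) = (2/c)·sqrt(z² + (x − x0)²) ≈ t0 + a(x − x0)², with t0 = 2z/c and a = 2/(c²·t0), so a fitted parabola yields c = sqrt(2/(a·t0)):

```python
import numpy as np

def estimate_sos(x, t):
    """Estimate SOS from a point-scatterer arrival-time profile.

    Fits t(x) = a*(x - x0)^2 + t0 and inverts a = 2 / (c^2 * t0).
    """
    p = np.polyfit(x, t, 2)            # quadratic least-squares fit
    a = p[0]
    x0 = -p[1] / (2.0 * a)             # apex lateral position
    t0 = np.polyval(p, x0)             # apex (round-trip) time
    return np.sqrt(2.0 / (a * t0))

# synthetic check: scatterer at 50 mm depth, true SOS 1540 m/s
x = np.linspace(-0.01, 0.01, 64)                 # lateral positions (m)
t = 2.0 / 1540.0 * np.sqrt(0.05**2 + x**2)       # exact hyperbolic arrival times
c_est = estimate_sos(x, t)                       # close to 1540, up to a small Taylor bias
```

Because the true arrival-time curve is a hyperbola, the parabolic fit carries a small aperture-dependent bias, which is one reason real estimates (Tables 3.15 and 3.16) deviate from the true SOS.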
[Figure 3.19 (channels vs. samples): srDAS image of the in-house phantom with a 33 mm thick first layer]

Table 3.15: Accuracy of the two-layer SOS estimation in the in-house phantom, with a 33 mm thick first layer

  Color of the rectangle  Depth (mm)  SOS estimation (m/s)  True SOS (m/s)  Error in SOS estimation (%)
  Yellow                  26.92       1482                  1500            -1.20
  Magenta                 37.35       1678                  1590             5.53
  Cyan                    48.48       1610                  1590             1.26
  Red                     58.74       1608                  1590             1.13
  Green                   68.49       1609                  1590             1.19
  First layer c1          -           1482                  1500            -1.20
  Second layer c2         -           1626.25               1590             2.28

[Figure 3.20 (channels vs. samples): srDAS image of the in-house phantom with a 53 mm thick first layer]

Table 3.16: Accuracy of the two-layer SOS estimation in the in-house phantom, with a 53 mm thick first layer

  Color of the rectangle  Depth (mm)  SOS estimation (m/s)  True SOS (m/s)  Error in SOS estimation (%)
  Yellow                  29.38       1497                  1500            -0.20
  Magenta                 38.88       1492                  1500            -0.53
  Cyan                    48.31       1495                  1500            -0.33
  Red                     58.33       1644                  1590             3.40
  Green                   69.42       1594                  1590             0.25
  Blue                    79.21       1598                  1590             0.50
  First layer c1          -           1495                  1500            -0.36
  Second layer c2         -           1612                  1590             1.38

[Figure 3.21 (depth (mm) vs. SOS (m/s); curves: one-layer measurements, two-layer measurements, two-layer model): SOS estimation in the two-layer in-house phantom with the layer separation at 33 mm]

[Figure 3.22 (depth (mm) vs. SOS (m/s); same curves as Figure 3.21): SOS estimation in the two-layer in-house phantom with the layer separation at 53 mm]
Table 3.17: Comparison of the lateral resolution of the one-layer DRF method, the one-layer SASB method, and the two-layer SASB method in the in-house phantom, with a 33 mm thick first layer

  Depth (mm)  One-layer DRF (mm)  One-layer SASB (mm)  Two-layer SASB (mm)  Improvement from SASB (%)  Improvement from DRF (%)
  8.26        0.54478             0.54478              0.54478               0.00                       0.00
  18.40       0.63558             0.69611              0.48425              43.75                      31.25
  26.92       1.48301             0.72637              0.69611               4.35                     113.04
  37.35       1.39221             0.57504              0.60531              -5.00                     130.00
  48.48       0.93823             0.69611              0.66584               4.55                      40.91
  58.74       0.66584             0.54478              0.48425              12.50                      37.50
  68.82       0.81717             0.60531              0.57504               5.26                      42.11

Table 3.18: Comparison of the lateral resolution of the one-layer DRF method, the one-layer SASB method, and the two-layer SASB method in the in-house phantom, with a 53 mm thick first layer

  Depth (mm)  One-layer DRF (mm)  One-layer SASB (mm)  Two-layer SASB (mm)  Improvement from SASB (%)  Improvement from DRF (%)
  29.38       1.54354             0.78690              0.51451              34.62                      66.67
  38.88       2.81469             1.24089              0.42372              65.85                      84.95
  48.31       3.20815             1.63434              0.39345              75.93                      87.74
  58.33       2.23965             0.90797              0.66584              26.67                      70.27
  69.42       1.69487             1.08956              0.57504              47.22                      66.07
  79.21       1.24089             1.08956              0.57504              47.22                      53.66
  90.33       1.15009             1.18036              0.66584              43.59                      42.11

3.4 Conclusions

In this chapter, we introduced a method that measures the SOS using only the beamformed srDAS image. First, we showed, through TOF equations, that the SOS can be estimated from the first-stage beamformed srDAS image through a modification of an existing SOS estimation method. We also developed a method to correctly extract the parabola from a noisy image, a method to estimate the SOS in two-layer phantoms, and an SOS error correction method. The SOS estimation method was tested in one-layer and two-layer phantoms using Field II simulations. The one-layer SOS estimates were also measured in the tissue-mimicking CIRS phantom. The error of the SOS estimation was less than 2% in the one-layer model.
In both the Field II and in-house phantom measurements of the two-layer phantom, the SOS estimation and correction method improved the lateral resolution of the SASB method. The method has the potential to be applied in clinical scanning of the liver through subcutaneous fat, in order to alleviate the SOS difference between the two layers.

However, there are several inherent limitations to this SOS estimation method. First, in order to obtain an SOS estimate, an ideal reflector is needed in the phantom, which might not be available in all clinical situations. Moreover, to accurately estimate the SOS in both layers, more than one good measurement of the SOS is required in each layer, which means that multiple ideal reflectors are required in the phantom. This restricts the scenarios in which the method can be applied. Second, even though the beamformed resolution is improved after the SOS correction of the srDAS image, the reconstruction is not perfect: in the first stage, when the srDAS beamforming is performed using an assumed SOS, some error is already incorporated into the system and cannot be entirely removed through compensation in the second stage.

Chapter 4

Conclusions and Future Work

Synthetic Transmit Aperture with Virtual Source (STA-VS) beamforming methods can significantly increase imaging resolution and contrast compared to conventional delay-and-sum (DAS) beamforming methods, and they have been gradually adopted by commercial ultrasound scanners. Using ultrasound simulation software and a commercial ultrasound machine's research interface, two specific realizations of the STA-VS method, the bi-directional pixel-based focusing (BiPBF) and the Synthetic Aperture Sequential Beamforming (SASB), were implemented and compared. The two methods were shown to have better signal-to-noise ratio (SNR) and spatial resolution properties than the conventional dynamic receive focusing (DRF) beamforming method.
Among the two, SASB was shown not only to have better performance, but also to be less computationally and memory demanding. Implementing the SASB method in a two-stage approach can improve ultrasound imaging quality without extensive modification of the hardware.

However, the SASB performance is more susceptible to errors in the speed-of-sound (SOS) assumption of the system, making images of tissue with non-ideal SOS properties less accurate. A method for SOS estimation using the single receive-focused DAS (srDAS) image has been proposed to detect the correct SOS in the imaging field. This method is particularly useful when the per-channel radio frequency (RF) data is not available to the SOS detector, which is the case when SASB is implemented in a two-stage, low-cost approach. Another SOS correction method has been developed that can be incorporated into the second stage of the SASB beamforming. The SOS estimation and correction algorithm is also capable of accommodating imaging phantoms with two-layer structures with two distinct SOS's, such as when imaging the liver through the subcutaneous fat layer in abdominal scanning. The two methods combined give rise to an improved version of the SASB method, the two-layer SASB, which can improve the lateral resolution of the scan. This was demonstrated in both simulation studies and measurements.

4.1 Thesis contributions

The contributions of this thesis are summarized as follows.

• A version of the SASB and DRF methods that reproduces the beamforming resolutions reported in the publication was implemented using solely the publicly available simulation software Field II [23]. The paper that introduced SASB, together with papers introducing other advanced beamforming algorithms, was implemented using an in-house beamformer called the Beamforming Toolbox (BFT [50], developed by the Centre for Fast Ultrasound Imaging at the Technical University of Denmark), which is not available to the public.
The implementations of these two methods, which are validated against published results, can provide future research with a reliable baseline for comparison.

• A preliminary implementation of the STA-VS methods using Field-Programmable Gate Array (FPGA) technology was proposed. Currently, STA-VS methods are only implemented in vendor-specific high-end ultrasound machines, where the detailed software and hardware implementation of the system is unknown. The analysis presented in the thesis provided a possible hardware and software architecture, and the physical resources required to implement DRF, SASB, and BiPBF using a commercial FPGA framework. The costs related to implementing the three methods were also compared. This is a first step towards achieving a real-time implementation of these STA-VS algorithms in an open, generic computer architecture.

• An SOS estimation algorithm based on the srDAS image was developed. Compared to existing algorithms, this method does not require knowledge of the per-channel RF data, which is not available in many commercial ultrasound machines. The method can either be implemented as part of the SASB method to improve its SOS error performance, or be employed as a standalone method, for example in tissue characterization applications.

• A two-layer SOS error correction method based on re-sampling and shifting the srDAS image was applied between the first-stage and second-stage beamforming. This correction method, combined with the SOS estimation method, gave rise to an improved version of the SASB method and improved the resolution of the beamformed images compared to the original SASB method without SOS estimation and correction.

4.2 Future work

For the comparison between the DRF, SASB, and BiPBF beamforming methods presented in the thesis, the underlying cause of the performance difference between the BiPBF and SASB methods was not fully understood.
Given the simplicity and low computational cost of the SASB method, it is surprising that it outperforms the BiPBF method in both the point spread function (PSF) phantom and SNR studies. This could be further explored in two ways: (1) through theoretical derivation of the SNR and full width at half maximum (FWHM) values using theory from array signal processing research; (2) through a comparison study over a larger parameter space, with different excitation pulse lengths and shapes, apodization shapes, aperture sizes, and evaluation phantoms and metrics.

The SOS estimation and correction method developed in the thesis is not autonomous. A human operator is currently required to select a local rectangle that contains the echo reflected by the strong reflectors. The primary reason is that we have not developed a good algorithm that can automatically locate all echoes in the field that correspond to strong point reflectors. More advanced computer vision or image processing algorithms could be employed to perform automatic curve detection and extraction, which would replace the last manual step in the solution. Alternatively, we will also look into whether other automatic SOS estimation methods, especially the dynamic receive methods [33] [34] [35] introduced in Section 3.1.2, can be adapted to handle the srDAS image. Similar to the dynamic receive methods, the second-stage beamforming can be performed using a grid of different SOS's, and the SOS estimate is selected based on the resulting imaging quality. In this way, no scatterer detection and selection algorithms are needed, and the system is completely autonomous.

One of the limitations of the SOS estimation algorithm is that it requires the presence of strong scatterers. However, we have yet to define what types of scatterers, such as needles, blood cells, capillaries, and micro-bubbles, are sufficient to render a reasonable estimation.
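The grid-of-SOS idea described above can be sketched as follows. The sharpness metric here (normalized intensity variance) is a simple stand-in for the image-quality criteria used in the cited dynamic receive methods, and `beamform` is a hypothetical callable supplied by the user:

```python
import numpy as np

def autofocus_sos(beamform, candidates):
    """Return the candidate SOS whose image maximizes a focus metric.

    `beamform(c)` is assumed to return the (envelope of the) image
    beamformed with speed-of-sound c.
    """
    def sharpness(img):
        env = np.abs(img)
        return env.var() / (env.mean() ** 2 + 1e-12)  # normalized variance
    scores = [sharpness(beamform(c)) for c in candidates]
    return candidates[int(np.argmax(scores))]
```

Because only the final images are inspected, no scatterer detection or manual curve selection would be required.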
Further experiments need to be conducted in order to study the echoes generated by these scatterers. On the other hand, it would also be useful to explore how to use less ideal reflectors, such as speckle-generating targets, to estimate the SOS in srDAS images. In this way, the SOS estimation and correction method could be applied to a wider range of tissue types and clinical applications.

We focused on comparisons in this thesis and did not implement the SOS estimation and correction method in real-time. All of the beamforming in this thesis was based on pre-acquired per-channel RF data, and the processing was conducted off-line. Given the low complexity of the SASB method, and that the SOS estimation step is based solely on parabola detection and fitting, the algorithm should be implementable in real-time using the research interface of a commercial ultrasound machine. The end goal of the research would be to develop a software package that performs acquisition, SOS detection and correction, beamforming, and image processing in real-time.

Bibliography

[1] H.-S. Lim, "High performance DSP solutions for ultrasound," http://www.eet-china.com/STATIC/PDF/200805/EETC-CMET.pdf?SOURCES=DOWNLOAD, 2008, accessed: 2015-07-30.

[2] L. Sandrin, S. Catheline, M. Tanter, X. Hennequin, and M. Fink, "Time-resolved pulsed elastography with ultrafast ultrasonic imaging," Ultrasonic Imaging, vol. 21, no. 4, pp. 259–272, 1999.

[3] J. A. Jensen, S. I. Nikolov, K. L. Gammelmark, and M. H. Pedersen, "Synthetic aperture ultrasound imaging," Ultrasonics, vol. 44, pp. e5–e15, 2006.

[4] I. K. Holfort, F. Gran, and J. A. Jensen, "Broadband minimum variance beamforming for ultrasound imaging," Ultrasonics, Ferroelectrics, and Frequency Control, IEEE Transactions on, vol. 56, no. 2, pp. 314–325, 2009.

[5] J. Ng, R. Prager, N. Kingsbury, G. Treece, and A.
Gee, "Wavelet restoration of medical pulse-echo ultrasound images in an EM framework," Ultrasonics, Ferroelectrics, and Frequency Control, IEEE Transactions on, vol. 54, no. 3, pp. 550–568, 2007.

[6] R. S. Cobbold, Foundations of Biomedical Ultrasound. Oxford University Press on Demand, 2007.

[7] J. W. Goodman, Introduction to Fourier Optics. Roberts and Company Publishers, 2005.

[8] K. Thiele and J. Jago, "Exploring nSIGHT imaging: a totally new architecture for premium ultrasound," Koninklijke Philips Electronics, Eindhoven, The Netherlands, Tech. Rep. 4522 962 95791, June 2013.

[9] C. Bradley, "Retrospective transmit beamformation," Siemens Healthcare Sector, Mountain View, California, Tech. Rep. WS 0808 I, August 2008.

[10] M.-H. Bae and M.-K. Jeong, "A study of synthetic-aperture imaging with virtual source elements in B-mode ultrasound imaging systems," Ultrasonics, Ferroelectrics and Frequency Control, IEEE Transactions on, vol. 47, no. 6, pp. 1510–1519, 2000.

[11] C. Cooley, K. Thiele, J. Robert, M. Burcher, and B. Robinson, "Ultrasonic synthetic transmit focusing with a multiline beamformer," Mar. 20, 2012, US Patent 8,137,272.

[12] J. Kortbek, J. A. Jensen, and K. L. Gammelmark, "Synthetic aperture sequential beamforming," in Ultrasonics Symposium, 2008. IUS 2008. IEEE. IEEE, 2008, pp. 966–969.

[13] ——, "Sequential beamforming for synthetic aperture imaging," Ultrasonics, vol. 53, no. 1, pp. 1–16, 2013.

[14] P. M. Hansen, M. Hemmsen, A. Brandt, J. Rasmussen, T. Lange, P. S. Krohn, L. Lönn, J. A. Jensen, and M. B. Nielsen, "Clinical evaluation of synthetic aperture sequential beamforming ultrasound in patients with liver tumors," Ultrasound in Medicine & Biology, vol. 40, no. 12, pp. 2805–2810, 2014.

[15] M. C. Hemmsen, J. H. Rasmussen, and J. A. Jensen, "Tissue harmonic synthetic aperture ultrasound imaging," The Journal of the Acoustical Society of America, vol. 136, no. 4, pp. 2050–2056, 2014.

[16] M. C. Hemmsen, T. Kjeldsen, L. Lassen, C. Kjær, B. Tomov, J.
Mosegaard, J. Jensen et al., "Implementation of synthetic aperture imaging on a hand-held device," in Ultrasonics Symposium (IUS), 2014 IEEE International. IEEE, 2014, pp. 2177–2180.

[17] K. E. Thomenius, "Evolution of ultrasound beamformers," in Ultrasonics Symposium, 1996. Proceedings., 1996 IEEE, vol. 2. IEEE, 1996, pp. 1615–1622.

[18] R. Krutsch, "Taking a multicore DSP approach to medical ultrasound beamforming," http://www.embedded.com/design/configurable-systems/4217545/Taking-a-multicore-DSP-approach-to-medical-ultrasound-beamforming, 2011, accessed: 2015-07-30.

[19] H. Feldkämper, R. Schwann, V. Gierenz, and T. Noll, "Low power delay calculation for digital beamforming in handheld ultrasound systems," in Ultrasonics Symposium, 2000 IEEE, vol. 2. IEEE, 2000, pp. 1763–1766.

[20] M. Almekkawy, J. Xu, and M. Chirala, "An optimized ultrasound digital beamformer with dynamic focusing implemented on FPGA," in Engineering in Medicine and Biology Society (EMBC), 2014 36th Annual International Conference of the IEEE. IEEE, 2014, pp. 3296–3299.

[21] L. Tong, H. Gao, and J. D'hooge, "Multi-transmit beam forming for fast cardiac imaging: a simulation study," Ultrasonics, Ferroelectrics, and Frequency Control, IEEE Transactions on, vol. 60, no. 8, pp. 1719–1731, 2013.

[22] J. M. Hansen and S. I. Nikolov, "Synthetic aperture imaging using a semi-analytic model for the transmit beams," in SPIE Medical Imaging. International Society for Optics and Photonics, 2015, pp. 94190K–94190K.

[23] J. A. Jensen, "Field: A program for simulating ultrasound systems," in 10th Nordic-Baltic Conference on Biomedical Imaging, vol. 4, supplement 1, part 1, 1996, pp. 351–353.

[24] T. Minka, "The Lightspeed Matlab toolbox," http://research.microsoft.com/en-us/um/people/minka/software/lightspeed/, 2003, accessed: 2015-08-30.

[25] Xilinx, "Product specification: Virtex-6 family overview," http://www.xilinx.com/support/documentation/data_sheets/ds150.pdf, 2012, accessed: 2015-07-30.

[26] M.
Gyöngy and S. Kollár, "Variation of ultrasound image lateral spectrum with assumed speed of sound and true scatterer density," Ultrasonics, vol. 56, pp. 370–380, 2015.

[27] M. Anderson, M. McKeag, and G. Trahey, "The impact of sound speed errors on medical ultrasound imaging," The Journal of the Acoustical Society of America, vol. 107, no. 6, pp. 3540–3548, 2000.

[28] A. Tsalach, I. Steinberg, and I. Gannot, "Tumor localization using magnetic nanoparticle-induced acoustic signals," Biomedical Engineering, IEEE Transactions on, vol. 61, no. 8, pp. 2313–2323, 2014.

[29] H. Nakano, K. Naito, S. Suzuki, K. Naito, K. Kobayashi, and S. Yamamoto, "Scanning acoustic microscopy imaging of tongue squamous cell carcinomas discriminates speed-of-sound between lesions and healthy regions in the mucous epithelium," Journal of Oral and Maxillofacial Surgery, Medicine, and Pathology, vol. 27, no. 1, pp. 16–19, 2015.

[30] M. C. Hemmsen, L. Lassen, T. Kjeldsen, J. Mosegaard, and J. A. Jensen, "Implementation of real-time duplex synthetic aperture ultrasonography," in Ultrasonics Symposium (IUS), 2015 IEEE International. IEEE, 2015, pp. 1–4.

[31] H.-C. Shin, R. Prager, H. Gomersall, N. Kingsbury, G. Treece, and A. Gee, Estimation of Speed of Sound Using Medical Ultrasound Image Deconvolution, 2009.

[32] D. Robinson, J. Ophir, L. Wilson, and C. Chen, "Pulse-echo ultrasound speed measurements: progress and prospects," Ultrasound in Medicine & Biology, vol. 17, no. 6, pp. 633–646, 1991.

[33] N. Hayashi, N. Tamaki, M. Senda, K. Yamamoto, Y. Yonekura, K. Torizuka, T. Ogawa, K. Katakura, C. Umemura, and M. Kodama, "A new method of measuring in vivo sound speed in the reflection mode," Journal of Clinical Ultrasound, vol. 16, no. 2, pp. 87–93, 1988.

[34] B. E. Treeby, T. K. Varslot, E. Z. Zhang, J. G. Laufer, and P. C. Beard, "Automatic sound speed selection in photoacoustic image reconstruction using an autofocus approach," Journal of Biomedical Optics, vol. 16, no. 9, pp.
090501–090501, 2011.

[35] C. Yoon, Y. Lee, J. H. Chang, T.-k. Song, and Y. Yoo, "In vitro estimation of mean sound speed based on minimum average phase variance in medical ultrasound imaging," Ultrasonics, vol. 51, no. 7, pp. 795–802, 2011.

[36] M. E. Anderson and G. E. Trahey, "The direct estimation of sound speed using pulse-echo ultrasound," The Journal of the Acoustical Society of America, vol. 104, no. 5, pp. 3099–3106, 1998.

[37] B. C. Byram, G. E. Trahey, and J. A. Jensen, "A method for direct localized sound speed estimates using registered virtual detectors," Ultrasonic Imaging, vol. 34, no. 3, pp. 159–180, 2012.

[38] P. Annibale and R. Rabenstein, "Closed-form estimation of the speed of propagating waves from time measurements," Multidimensional Systems and Signal Processing, vol. 25, no. 2, pp. 361–378, 2014.

[39] J. Krucker, J. Fowlkes, and P. Carson, "Sound speed estimation using ultrasound image registration," in Biomedical Imaging, 2002. Proceedings. 2002 IEEE International Symposium on, 2002, pp. 437–440.

[40] X. Qu, T. Azuma, J. T. Liang, and Y. Nakajima, "Average sound speed estimation using speckle analysis of medical ultrasound data," International Journal of Computer Assisted Radiology and Surgery, vol. 7, no. 6, pp. 891–899, 2012.

[41] B. D. Lindsey and S. W. Smith, "Refraction correction in 3D transcranial ultrasound imaging," Ultrasonic Imaging, vol. 36, no. 1, pp. 35–54, 2014.

[42] G. C. Ng, S. S. Worrell, P. D. Freiburger, and G. E. Trahey, "A comparative evaluation of several algorithms for phase aberration correction," Ultrasonics, Ferroelectrics, and Frequency Control, IEEE Transactions on, vol. 41, no. 5, pp. 631–643, 1994.

[43] S. Leahy, C. Toomey, K. McCreesh, C. O'Neill, and P. Jakeman, "Ultrasound measurement of subcutaneous adipose tissue thickness accurately predicts total and segmental body fat of young adults," Ultrasound in Medicine & Biology, vol. 38, no. 1, pp. 28–34, 2012.

[44] T. P. Gauthier and D. R.
Maxwell, "Tissue aberration correction: Innovative solution for improving image quality on the obese patient," Koninklijke Philips Electronics, Eindhoven, The Netherlands, Tech. Rep. 4522 962 27391/795, August 2007.

[45] M. Anderson, M. McKeag, R. Gauss, M. Soo, and G. Trahey, "Application of sound speed estimation and mapping to multi-layer media and in vivo data," in Ultrasonics Symposium, 1998. Proceedings., 1998 IEEE, vol. 2. IEEE, 1998, pp. 1393–1396.

[46] H.-C. Shin, R. Prager, H. Gomersall, N. Kingsbury, G. Treece, and A. Gee, "Estimation of speed of sound in dual-layered media using medical ultrasound image deconvolution," Ultrasonics, vol. 50, no. 7, pp. 716–725, 2010.

[47] J. Ng, R. Rohling, and P. D. Lawrence, "Automatic measurement of human subcutaneous fat with ultrasound," Ultrasonics, Ferroelectrics, and Frequency Control, IEEE Transactions on, vol. 56, no. 8, pp. 1642–1653, 2009.

[48] E. L. Madsen, J. A. Zagzebski, R. A. Banjavic, and M. M. Burlew, "Phantom material and method," Jul. 7, 1981, US Patent 4277367 A.

[49] C. Oates, "Towards an ideal blood analogue for Doppler ultrasound phantoms," Physics in Medicine and Biology, vol. 36, no. 11, p. 1433, 1991.

[50] J. Kortbek, S. Nikolov, and J. A. Jensen, "Effective and versatile software beamformation toolbox," in Medical Imaging. International Society for Optics and Photonics, 2007, pp. 651319–651319.

Appendix A

Beamforming Parameters

This appendix contains the beamforming parameters used in the simulation and phantom studies in this thesis. The parameters are summarized in Table A.1. Unless specified otherwise, all of the beamforming realizations in this thesis follow these parameters. Most of the parameters were chosen to match those in the paper that introduced the SASB method [13].
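As an illustration of how parameters of the kind listed in Table A.1 enter the beamforming calculation, the per-element receive delay for a delay-and-sum focal point can be computed from textbook single-layer, straight-ray geometry. This is a generic sketch, not the thesis's implementation; the 1540 m/s speed of sound is an assumed value (it is not listed in the table), and `das_delays` is a hypothetical name:

```python
import numpy as np

FS = 120e6        # sampling frequency from Table A.1 (Hz)
PITCH = 0.208e-3  # element pitch from Table A.1 (m)
N_ELEM = 128      # number of elements from Table A.1
C = 1540.0        # assumed speed of sound (m/s)

def das_delays(focus_x, focus_z, c=C):
    """Round-trip DAS delays in samples for a focal point (focus_x, focus_z).

    The transmit path is taken as the on-axis distance focus_z; the receive
    path is the distance from the focal point back to each element.
    """
    elem_x = (np.arange(N_ELEM) - (N_ELEM - 1) / 2.0) * PITCH  # element centers (m)
    rx = np.sqrt((elem_x - focus_x) ** 2 + focus_z ** 2)       # return-path lengths
    return (focus_z + rx) / c * FS
```

A wrong value of c shifts and defocuses these delays, which is exactly the sensitivity that the SOS estimation and correction in Chapter 3 addresses.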
Table A.1: Beamforming parameters for the DRF, SASB, and BiPBF methods

Parameter                                        DRF        SASB             BiPBF
Sampling frequency                               120 MHz (all methods)
Transducer
  Pitch                                          0.208 mm
  Centre frequency                               7 MHz
  Elevation focus                                25 mm
  Number of elements                             128
  Excitation                                     1-cycle sinusoid
1st stage processing
  Focusing                                       Dynamic    Fixed at 20 mm   Fixed at 20 mm
  Transmit sub-aperture                          63
  Transmit apodization                           Hamming
  Focal depth                                    70 mm      20 mm            20 mm
  Receive apodization                            Hamming
  Receive sub-aperture                           63
  Number of imaging lines                        128
  Number of lines stored                         1          1                63
2nd stage processing
  Focusing                                       n/a        Synthetic aperture
  Number of channels                             n/a        128
  Transmit focusing apodization                  n/a        Hamming
  Number of imaging lines                        n/a        128
Post processing
  Dynamic range                                  50 dB
  Interpolation factor in the lateral direction  10

Appendix B

Point Spread Function Phantom Simulation Results

This appendix includes the details of the lateral and axial resolutions of the point spread function (PSF) study discussed in Chapter 2.9 Both the lateral and axial resolutions are included here.

9. The images are not included here because there is no noise in the images, and the most important information is the lateral and axial resolutions.
Table B.1: Point spread function phantom simulation results of the three methods with the transmit focal depth 20 mm

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
5           0.39242           0.20533         0.68157            0.16683          0.39242             0.16042
15          0.39242           0.20533         0.51634            0.17325          0.39242             0.154
25          0.55765           0.20533         0.51634            0.17325          0.51634             0.14758
35          0.76418           0.20533         0.51634            0.17325          0.59896             0.154
45          0.97072           0.19892         0.51634            0.17325          0.64026             0.14758
55          1.2186            0.20533         0.51634            0.17325          0.72288             0.154
65          1.4664            0.19892         0.55765            0.17325          0.80549             0.14758
75          1.6729            0.19892         0.64026            0.17325          0.8468              0.14758
85          1.7969            0.20533         0.68157            0.16683          0.97072             0.154
95          2.1686            0.19892         0.76418            0.17325          1.0533              0.14758
105         2.3752            0.20533         0.8468             0.16683          1.1359              0.154
115         2.4578            0.19892         0.88811            0.17325          1.2599              0.14758
125         2.6643            0.20533         0.97072            0.17325          1.3838              0.154
135         2.8709            0.20533         1.0533             0.17325          1.4664              0.14758
145         3.2013            0.20533         1.1359             0.17325          1.5903              0.154
155         3.4492            0.20533         1.2186             0.17325          1.6729              0.154
165         3.5731            0.19892         1.3012             0.17325          1.7969              0.14758
175         3.7383            0.20533         1.3838             0.16683          1.9208              0.154
185         4.1514            0.19892         1.4664             0.17325          2.0447              0.14758
195         4.1927            0.20533         1.549              0.16683          2.1273              0.154

Table B.2: The performance of the three methods with the transmit focal depth 50 mm

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
5           0.39242           0.20533         1.4664             0.16683          0.39242             0.154
15          0.39242           0.19892         1.012              0.16683          0.43373             0.154
25          0.55765           0.20533         0.92941            0.17325          0.59896             0.154
35          0.76418           0.20533         0.92941            0.17325          0.76418             0.154
45          0.8468            0.20533         1.012              0.17325          0.88811             0.14758
55          1.012             0.20533         1.012              0.17325          1.012               0.14758
65          1.2599            0.20533         0.92941            0.17967          1.0946              0.154
75          1.5077            0.20533         0.92941            0.17325          1.1773              0.14758
85          1.7556            0.19892         0.92941            0.17325          1.2186              0.154
95          2.0034            0.20533         0.92941            0.17325          1.2599              0.14758
105         2.2099            0.19892         0.92941            0.17325          1.3012              0.154
115         2.4165            0.20533         0.97072            0.17325          1.3838              0.14758
125         2.623             0.20533         1.012              0.17325          1.4664              0.14758
135         2.8295            0.20533         1.0533             0.17325          1.549               0.154
145         3.1187            0.20533         1.0946             0.17325          1.6316              0.14758
155         3.2839            0.20533         1.1773             0.17325          1.7556              0.154
165         3.4905            0.20533         1.2599             0.17325          1.8382              0.14758
175         3.697             0.19892         1.3425             0.17325          1.9621              0.154
185         3.9448            0.20533         1.4251             0.17325          2.086               0.14758
195         4.1514            0.20533         1.5077             0.17325          2.1686              0.154

Table B.3: Point spread function phantom simulation results of the three methods with the transmit focal depth 70 mm

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
5           0.39242           0.20533         2.0447             0.17325          0.39242             0.154
15          0.39242           0.19892         1.3838             0.16683          0.43373             0.154
25          0.55765           0.20533         1.2599             0.17325          0.59896             0.14758
35          0.76418           0.20533         1.2599             0.17325          0.80549             0.154
45          0.92941           0.20533         1.3012             0.17325          1.012               0.14758
55          1.0533            0.20533         1.3012             0.17967          1.1359              0.154
65          1.1773            0.20533         1.3838             0.17325          1.2599              0.14758
75          1.3425            0.19892         1.3425             0.17325          1.3425              0.154
85          1.549             0.19892         1.3012             0.17325          1.4664              0.154
95          1.7969            0.20533         1.3425             0.17325          1.549               0.14758
105         2.0447            0.20533         1.3012             0.17967          1.5903              0.154
115         2.2926            0.20533         1.3012             0.17325          1.6316              0.14758
125         2.4991            0.20533         1.3012             0.17325          1.7143              0.154
135         2.7056            0.19892         1.2599             0.17325          1.7556              0.154
145         2.9948            0.20533         1.3012             0.17325          1.7969              0.14758
155         3.2013            0.20533         1.3012             0.17325          1.8795              0.154
165         3.4492            0.20533         1.3425             0.17325          1.9208              0.14758
175         3.5731            0.19892         1.3838             0.17325          2.0447              0.154
185         3.8622            0.20533         1.4251             0.17325          2.1686              0.14758
195         4.0688            0.19892         1.5077             0.17325          2.2099              0.154

Table B.4: Point spread function phantom simulation results of the three methods with the receive beamforming f-number 1

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
5           0.51634           0.20533         1.0533             0.17325          0.47503             0.154
15          0.51634           0.19892         0.59896            0.16683          0.47503             0.14758
25          0.64026           0.20533         0.55765            0.17325          0.51634             0.14758
35          0.80549           0.20533         0.51634            0.17325          0.59896             0.154
45          0.92941           0.20533         0.51634            0.17325          0.64026             0.14758
55          1.0533            0.20533         0.51634            0.17325          0.72288             0.154
65          1.1773            0.20533         0.55765            0.17325          0.80549             0.14758
75          1.3425            0.19892         0.64026            0.17325          0.8468              0.14758
85          1.549             0.19892         0.68157            0.16683          0.97072             0.154
95          1.7969            0.20533         0.76418            0.17325          1.0533              0.14758
105         2.0447            0.20533         0.8468             0.16683          1.1359              0.154
115         2.2926            0.20533         0.88811            0.17325          1.2599              0.14758
125         2.4991            0.20533         0.97072            0.17325          1.3838              0.154
135         2.7056            0.19892         1.0533             0.17325          1.4664              0.14758
145         2.9948            0.20533         1.1359             0.17325          1.5903              0.154
155         3.2013            0.20533         1.2186             0.17325          1.6729              0.154
165         3.4492            0.20533         1.3012             0.17325          1.7969              0.14758
175         3.5731            0.19892         1.3838             0.16683          1.9208              0.154
185         3.8622            0.20533         1.4664             0.17325          2.0447              0.14758
195         4.0688            0.19892         1.549              0.16683          2.1273              0.154

Table B.5: Point spread function phantom simulation results of the three methods with the receive beamforming f-number 1.5

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
5           0.76418           0.20533         1.5077             0.16683          0.59896             0.154
15          0.72288           0.19892         0.68157            0.17325          0.59896             0.154
25          0.76418           0.20533         0.59896            0.17325          0.59896             0.14758
35          0.8468            0.20533         0.55765            0.17325          0.64026             0.154
45          0.97072           0.20533         0.51634            0.17325          0.68157             0.14758
55          1.0533            0.20533         0.55765            0.17325          0.72288             0.154
65          1.2186            0.20533         0.55765            0.17325          0.80549             0.14758
75          1.3838            0.19892         0.64026            0.17325          0.88811             0.14758
85          1.549             0.19892         0.68157            0.16683          0.97072             0.154
95          1.7969            0.20533         0.76418            0.17325          1.0533              0.14758
105         2.0447            0.20533         0.8468             0.16683          1.1359              0.154
115         2.2926            0.20533         0.88811            0.17325          1.2599              0.14758
125         2.4991            0.20533         0.97072            0.17325          1.3838              0.154
135         2.7056            0.19892         1.0533             0.17325          1.4664              0.14758
145         2.9948            0.20533         1.1359             0.17325          1.5903              0.154
155         3.2013            0.20533         1.2186             0.17325          1.6729              0.154
165         3.4492            0.20533         1.3012             0.17325          1.7969              0.14758
175         3.5731            0.19892         1.3838             0.16683          1.9208              0.154
185         3.8622            0.20533         1.4664             0.17325          2.0447              0.14758
195         4.0688            0.19892         1.549              0.16683          2.1273              0.154

Table B.6: Point spread function phantom simulation results of the three methods with the transmit beamforming f-number 0.5

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
5           0.39242           0.20533         0.59896            0.16683          0.39242             0.16042
15          0.39242           0.19892         0.43373            0.17325          0.39242             0.154
25          0.55765           0.20533         0.43373            0.17967          0.47503             0.154
35          0.76418           0.20533         0.43373            0.17325          0.51634             0.154
45          0.92941           0.20533         0.43373            0.17325          0.55765             0.14758
55          1.0533            0.20533         0.47503            0.16683          0.64026             0.154
65          1.1773            0.20533         0.51634            0.17325          0.72288             0.14758
75          1.3425            0.19892         0.59896            0.17325          0.8468              0.14758
85          1.549             0.19892         0.68157            0.16683          0.92941             0.154
95          1.7969            0.20533         0.76418            0.17325          1.0533              0.14758
105         2.0447            0.20533         0.8468             0.16683          1.1359              0.154
115         2.2926            0.20533         0.88811            0.17325          1.2599              0.14758
125         2.4991            0.20533         0.97072            0.17325          1.3838              0.154
135         2.7056            0.19892         1.0533             0.17325          1.4664              0.14758
145         2.9948            0.20533         1.1359             0.17325          1.5903              0.154
155         3.2013            0.20533         1.2186             0.17325          1.6729              0.154
165         3.4492            0.20533         1.3012             0.17325          1.7969              0.14758
175         3.5731            0.19892         1.3838             0.16683          1.9208              0.154
185         3.8622            0.20533         1.4664             0.17325          2.0447              0.14758
195         4.0688            0.19892         1.549              0.16683          2.1273              0.154

Table B.7: Point spread function phantom simulation results of the three methods with the transmit beamforming f-number 1

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
5           0.39242           0.20533         0.64026            0.16683          0.39242             0.16042
15          0.39242           0.19892         0.47503            0.17967          0.39242             0.154
25          0.55765           0.20533         0.47503            0.17325          0.47503             0.14758
35          0.76418           0.20533         0.47503            0.17325          0.55765             0.154
45          0.92941           0.20533         0.47503            0.17325          0.59896             0.14758
55          1.0533            0.20533         0.47503            0.16683          0.68157             0.154
65          1.1773            0.20533         0.51634            0.17325          0.76418             0.14758
75          1.3425            0.19892         0.59896            0.17325          0.8468              0.14758
85          1.549             0.19892         0.68157            0.16683          0.92941             0.154
95          1.7969            0.20533         0.76418            0.17325          1.0533              0.14758
105         2.0447            0.20533         0.8468             0.16683          1.1359              0.154
115         2.2926            0.20533         0.88811            0.17325          1.2599              0.14758
125         2.4991            0.20533         0.97072            0.17325          1.3838              0.154
135         2.7056            0.19892         1.0533             0.17325          1.4664              0.14758
145         2.9948            0.20533         1.1359             0.17325          1.5903              0.154
155         3.2013            0.20533         1.2186             0.17325          1.6729              0.154
165         3.4492            0.20533         1.3012             0.17325          1.7969              0.14758
175         3.5731            0.19892         1.3838             0.16683          1.9208              0.154
185         3.8622            0.20533         1.4664             0.17325          2.0447              0.14758
195         4.0688            0.19892         1.549              0.16683          2.1273              0.154

Table B.8: Point spread function phantom simulation results of the three methods with the speed of sound assumption 1450 m/s

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
5           0.51634           0.20533         0.68157            0.16683          0.43373             0.154
15          1.0946            0.19892         0.55765            0.17325          0.68157             0.154
25          1.0946            0.20533         0.76418            0.16683          0.68157             0.154
35          0.97072           0.20533         0.92941            0.17325          0.72288             0.16042
45          1.0533            0.20533         1.1359             0.17325          0.88811             0.154
55          1.0946            0.20533         1.3425             0.17325          0.97072             0.154
65          1.2186            0.20533         1.4664             0.16683          1.0946              0.14758
75          1.3838            0.19892         1.4664             0.16683          1.1359              0.154
85          1.5903            0.20533         1.5903             0.17325          1.2186              0.154
95          1.8382            0.20533         1.7556             0.17325          1.3425              0.154
105         2.0447            0.20533         1.5903             0.17325          1.3838              0.14758
115         2.2926            0.20533         1.6316             0.17325          1.4251              0.154
125         2.4991            0.19892         1.6729             0.17325          1.549               0.14758
135         2.7056            0.20533         1.5903             0.17325          1.6729              0.154
145         2.9948            0.20533         1.6729             0.17325          1.7556              0.14758
155         3.2426            0.20533         1.6729             0.17967          1.8382              0.154
165         3.4078            0.19892         1.6729             0.17967          1.9208              0.14758
175         3.5731            0.20533         1.7143             0.17325          2.0034              0.154
185         3.8622            0.20533         1.7969             0.17967          2.1273              0.14758
195         4.1101            0.20533         1.7969             0.17325          2.2512              0.154

Table B.9: Point spread function phantom simulation results of the three methods with the speed of sound assumption 1630 m/s

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
5           0.64026           0.19892         0.68157            0.16683          0.55765             0.14758
15          1.1773            0.19892         0.64026            0.16683          0.76418             0.14758
25          1.0946            0.21175         0.72288            0.17325          0.64026             0.16042
35          0.92941           0.20533         0.92941            0.17325          0.76418             0.154
45          1.012             0.20533         1.1359             0.16683          0.88811             0.154
55          1.0946            0.20533         1.3012             0.16683          0.97072             0.14758
65          1.2186            0.20533         1.5077             0.17325          1.0533              0.154
75          1.3838            0.20533         1.549              0.17325          1.1359              0.154
85          1.5903            0.20533         1.6316             0.17967          1.1773              0.14758
95          1.8382            0.20533         1.549              0.17325          1.3012              0.154
105         2.086             0.20533         1.5903             0.17325          1.3425              0.154
115         2.3339            0.20533         1.6316             0.17967          1.4664              0.14758
125         2.5404            0.20533         1.549              0.17325          1.5903              0.154
135         2.7469            0.20533         1.6316             0.17325          1.6316              0.154
145         2.9948            0.20533         1.6316             0.17967          1.7556              0.154
155         3.2426            0.20533         1.6729             0.17325          1.8382              0.154
165         3.4905            0.19892         1.7556             0.17325          1.9621              0.14758
175         3.6144            0.20533         1.7969             0.17325          2.086               0.154
185         3.9035            0.19892         1.8795             0.17325          2.1686              0.14758
195         4.1514            0.20533         1.9208             0.17325          2.2926              0.154

Appendix C

Contrast Phantom Simulation Results

This appendix includes the detailed SNR results of the contrast phantom study discussed in Chapter 2. The SNR performance results of the three methods using different transmission focal depths, transmit focusing f-numbers, receive focusing f-numbers, and speed-of-sound assumptions are listed here.
Table C.1: Contrast phantom simulation results of the three methods with the transmit focal depth 20 mm

Depth (mm)  DRF        SASB      BiPBF
15          -0.66817   -0.51068  -1.0091
25          2.8611     2.6562    1.9317
35          -0.36808   0.53149   0.12171
45          1.8761     2.811     2.3033
55          0.89189    1.3198    1.1435
65          1.2475     2.0869    2.0278
75          0.5058     1.4719    1.5135
85          0.95595    1.5846    1.5521
95          0.78376    1.4055    1.1499
105         -1.6839    1.6084    1.2091

Figure C.1: Contrast phantom simulation results of the three methods with the transmit focal depth 20 mm (panels: (a) DRF, (b) SASB, (c) BiPBF).

Table C.2: Contrast phantom simulation results of the three methods with the transmit focal depth 40 mm

Depth (mm)  DRF        SASB      BiPBF
15          0.56559    0.40066   0.49632
25          2.5981     2.5543    -0.1543
35          0.03374    0.09408   0.29313
45          2.6352     2.8776    1.1459
55          1.4469     1.8272    1.8058
65          1.6924     2.5742    0.78709
75          1.1393     0.99236   0.98475
85          1.5077     3.0919    0.83838
95          1.4733     2.2473    1.931
105         1.2751     2.4347    0.37366

Figure C.2: Contrast phantom simulation results of the three methods with the transmit focal depth 40 mm (panels: (a) DRF, (b) SASB, (c) BiPBF).

Table C.3: Contrast phantom simulation results of the three methods with the transmit focal depth 70 mm

Depth (mm)  DRF        SASB      BiPBF
15          -0.099185  0.1209    -0.26505
25          1.6817     0.9753    -0.58216
35          1.1743     1.6753    1.1069
45          2.3886     1.7924    0.69677
55          0.69752    0.9844    0.51783
65          1.3974     1.2838    2.1185
75          0.95817    0.94154   0.75235
85          1.4588     1.4866    5.2319
95          1.939      2.0268    0.57436
105         0.3509     0.75432   -5.0623

Figure C.3: Contrast phantom simulation results of the three methods with the transmit focal depth 70 mm (panels: (a) DRF, (b) SASB, (c) BiPBF).

Table C.4: Contrast phantom simulation results of the three methods with the receive beamforming f-number 1

Depth (mm)  DRF        SASB      BiPBF
15          -0.75262   -0.37685  -0.81897
25          2.5338     2.8608    2.2189
35          -0.34288   0.48109   0.077885
45          1.7998     3.2973    2.7536
55          0.89762    1.1949    1.0024
65          1.2295     2.6348    2.5393
75          0.5095     1.3165    1.39
85          0.94419    2.3712    2.1293
95          0.78328    1.1965    1.0211
105         -1.757     1.9947    1.6046

Figure C.4: Contrast phantom simulation results of the three methods with the receive beamforming f-number 1 (panels: (a) DRF, (b) SASB, (c) BiPBF).

Table C.5: Contrast phantom simulation results of the three methods with the receive beamforming f-number 1.5

Depth (mm)  DRF        SASB      BiPBF
15          -0.13351   -0.22333  -0.54351
25          2.1721     2.9993    2.3841
35          -0.30103   0.43728   0.029222
45          1.3839     3.6992    2.9955
55          0.93268    1.1383    0.91173
65          1.0564     3.2338    2.8925
75          0.53003    1.2231    1.3173
85          0.86852    3.0809    2.5347
95          0.77968    1.0715    0.94001
105         -2.2391    2.5935    2.0267

Figure C.5: Contrast phantom simulation results of the three methods with the receive beamforming f-number 1.5 (panels: (a) DRF, (b) SASB, (c) BiPBF).

Table C.6: Contrast phantom simulation results of the three methods with the transmit beamforming f-number 0.5

Depth (mm)  DRF        SASB      BiPBF
15          -0.66817   -1.3418   -1.259
25          2.8611     2.0202    0.46443
35          -0.36808   0.053655  0.17775
45          1.8761     3.2944    1.821
55          0.89189    1.3489    1.389
65          1.2475     2.0573    1.0092
75          0.5058     1.5053    1.776
85          0.95595    1.3208    0.47511
95          0.78376    1.4888    1.3324
105         -1.6839    1.5355    0.44205

Figure C.6: Contrast phantom simulation results of the three methods with the transmit beamforming f-number 0.5 (panels: (a) DRF, (b) SASB, (c) BiPBF).
Table C.7: Contrast phantom simulation results of the three methods with the transmit beamforming f-number 1

Depth (mm)  DRF        SASB      BiPBF
15          -0.66817   -0.8371   -1.0323
25          2.8611     2.4237    1.2607
35          -0.36808   0.43931   0.16023
45          1.8761     2.9302    2.1154
55          0.89189    1.3417    1.2547
65          1.2475     2.0995    1.5591
75          0.5058     1.4794    1.6703
85          0.95595    1.4709    1.0767
95          0.78376    1.4528    1.2606
105         -1.6839    1.5534    0.84627

Figure C.7: Contrast phantom simulation results of the three methods with the transmit beamforming f-number 1 (panels: (a) DRF, (b) SASB, (c) BiPBF).

Appendix D

Phantom Measurement Results

This appendix contains the results of the phantom measurements using the DRF, SASB, and BiPBF methods, as discussed in Chapter 2. The performance results of the three algorithms using different transmit beamforming f-numbers, receive beamforming f-numbers, transmission focal depths, and speed-of-sound assumptions are listed here. The speed of sound inside the phantom was measured to be 1580 m/s.

Table D.1: Phantom measurement results of the three methods with the transmit focal depth 20 mm

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
9           0.45398           0.27592         0.60531            0.28233          0.42372             0.26308
17          0.45398           0.28875         0.51451            0.37217          0.42372             0.3465
26          0.48425           0.35292         0.63558            0.36575          0.60531             0.35933
34          0.66584           0.37858         0.57504            0.37858          0.72637             0.37217
43          0.72637           0.39142         0.63558            0.44275          0.81717             0.385
52          0.45398           0.22458         0.66584            0.41067          0.93823             0.36575
61          NaN               0.20533         0.75664            0.26308          1.1198              0.37217

Figure D.1: Phantom measurement results of the three methods with the transmit focal depth 20 mm (panels: (a) DRF, (b) SASB, (c) BiPBF).
Table D.2: Phantom measurement results of the three methods with the transmit focal depth 50 mm

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
9           0.45398           0.26308         1.029              0.2695           0.45398             0.25667
17          0.45398           0.2695          0.72637            0.36575          0.45398             0.32083
26          0.48425           0.37217         0.69611            0.37217          0.60531             0.3465
34          0.69611           0.3465          0.66584            0.39142          0.72637             0.37858
43          0.66584           0.39783         0.7869             0.39142          0.84743             0.37858
52          0.69611           0.35292         0.8777             0.385            1.029               0.35933
61          0.63558           0.31442         1.1198             0.41708          1.1501              0.39783

Figure D.2: Phantom measurement results of the three methods with the transmit focal depth 50 mm (panels: (a) DRF, (b) SASB, (c) BiPBF).

Table D.3: Phantom measurement results of the three methods with the transmit focal depth 70 mm

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
9           0.45398           0.26308         1.4225             0.27592          0.45398             0.25667
17          0.45398           0.28233         0.93823            0.37217          0.45398             0.33367
26          0.48425           0.3465          0.8777             0.37217          0.63558             0.3465
34          0.72637           0.35933         0.8777             0.385            0.75664             0.37217
43          0.75664           0.39783         0.8777             0.385            0.93823             0.39142
52          0.81717           0.462           0.90797            0.40425          1.0896              0.4235
61          0.69611           0.44917         0.99876            0.42992          1.1501              0.37858

Figure D.3: Phantom measurement results of the three methods with the transmit focal depth 70 mm (panels: (a) DRF, (b) SASB, (c) BiPBF).

Table D.4: Phantom measurement results of the three methods with the transmit beamforming f-number 0.5

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
9           0.45398           0.27592         0.45398            0.32083          0.39345             0.27592
17          0.45398           0.28875         0.45398            0.37217          0.45398             0.3465
26          0.48425           0.35292         0.45398            0.37217          0.51451             0.35933
34          0.66584           0.37858         0.48425            0.37858          0.54478             0.37217
43          0.72637           0.39142         0.48425            0.41708          0.63558             0.385
52          0.45398           0.22458         0.45398            0.308            0.72637             0.40425
61          NaN               0.20533         0.45398            0.32725          0.84743             0.43633

Figure D.4: Phantom measurement results of the three methods with the transmit beamforming f-number 0.5 (panels: (a) DRF, (b) SASB, (c) BiPBF).

Table D.5: Phantom measurement results of the three methods with the transmit beamforming f-number 1

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
9           0.45398           0.27592         0.51451            0.29517          0.39345             0.26308
17          0.45398           0.28875         0.51451            0.35933          0.42372             0.32725
26          0.48425           0.35292         0.51451            0.37217          0.57504             0.35933
34          0.66584           0.37858         0.54478            0.385            0.63558             0.37858
43          0.72637           0.39142         0.54478            0.42992          0.72637             0.385
52          0.45398           0.22458         0.57504            0.41708          0.81717             0.39142
61          NaN               0.20533         0.57504            0.32725          0.9685              0.39142

Figure D.5: Phantom measurement results of the three methods with the transmit beamforming f-number 1 (panels: (a) DRF, (b) SASB, (c) BiPBF).

Table D.6: Phantom measurement results of the three methods with the receive beamforming f-number 1

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
9           0.66584           0.25667         0.7869             0.27592          0.54478             0.25025
17          0.63558           0.28875         0.60531            0.37858          0.54478             0.35292
26          0.72637           0.35292         0.66584            0.36575          0.69611             0.35933
34          0.84743           0.37858         0.60531            0.37858          0.75664             0.37217
43          0.9685            0.39142         0.63558            0.42992          0.84743             0.385
52          0.48425           0.17967         0.63558            0.41067          0.9685              0.36575
61          0.51451           0.18608         0.72637            0.30158          1.0896              0.37217

Figure D.6: Phantom measurement results of the three methods with the receive beamforming f-number 1 (panels: (a) DRF, (b) SASB, (c) BiPBF).

Table D.7: Phantom measurement results of the three methods with the receive beamforming f-number 1.5

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
9           0.9685            0.24383         0.99876            0.26308          0.66584             0.25025
17          0.7869            0.29517         0.63558            0.37858          0.63558             0.35292
26          0.9685            0.35933         0.72637            0.37217          0.84743             0.35933
34          1.029             0.385           0.63558            0.37858          0.81717             0.37217
43          1.2106            0.41708         0.63558            0.4235           0.90797             0.385
52          0.48425           0.16683         0.63558            0.41708          0.9685              0.36575
61          0.48425           0.18608         0.75664            0.30158          1.1198              0.37217

Figure D.7: Phantom measurement results of the three methods with the receive beamforming f-number 1.5 (panels: (a) DRF, (b) SASB, (c) BiPBF).

Table D.8: Phantom measurement results of the three methods with the speed of sound assumption 1490 m/s

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
9           0.81717           0.25025         0.60531            0.28875          0.63558             0.26308
17          0.9685            0.32083         0.66584            0.36575          0.69611             0.34008
26          1.4527            0.35292         0.7869             0.37217          1.029               0.35933
34          1.8765            0.4235          0.9685             0.39783          0.84743             0.39783
43          2.1791            0.385           0.90797            0.35933          0.93823             0.39142
52          0.42372           0.20533         1.0896             0.44275          0.93823             0.39142
61          NaN               0               0.45398            0.12192          1.1501              0.41067

Figure D.8: Phantom measurement results of the three methods with the speed of sound assumption 1490 m/s (panels: (a) DRF, (b) SASB, (c) BiPBF).

Table D.9: Phantom measurement results of the three methods with the speed of sound assumption 1540 m/s

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
9           0.57504           0.25667         0.60531            0.28233          0.51451             0.26308
17          0.57504           0.28875         0.57504            0.36575          0.51451             0.32725
26          0.7869            0.36575         0.69611            0.35933          0.66584             0.35933
34          0.69611           0.37858         0.63558            0.385            0.72637             0.37858
43          0.69611           0.40425         0.63558            0.4235           0.84743             0.37858
52          NaN               NaN             0.63558            0.40425          0.93823             0.385
61          NaN               NaN             0.51451            0.16683          1.029               0.35292

Figure D.9: Phantom measurement results of the three methods with the speed of sound assumption 1540 m/s (panels: (a) DRF, (b) SASB, (c) BiPBF).
Table D.10: Phantom measurement results of the three methods with the speed of sound assumption 1620 m/s

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
9           0.45398           0.28233         0.60531            0.27592          0.45398             0.26308
17          0.60531           0.30158         0.63558            0.35933          0.51451             0.32725
26          0.93823           0.35292         0.66584            0.37217          0.69611             0.3465
34          1.3317            0.39783         0.72637            0.37858          0.75664             0.37858
43          1.8765            0.36575         0.7869             0.39142          0.8777              0.37858
52          NaN               NaN             0.66584            0.41067          0.99876             0.36575
61          NaN               NaN             1.1198             0.19892          1.1198              0.3465

Figure D.10: Phantom measurement results of the three methods with the speed of sound assumption 1620 m/s (panels: (a) DRF, (b) SASB, (c) BiPBF).

Table D.11: Phantom measurement results of the three methods with the speed of sound assumption 1670 m/s

Depth (mm)  DRF Lateral (mm)  DRF Axial (mm)  SASB Lateral (mm)  SASB Axial (mm)  BiPBF Lateral (mm)  BiPBF Axial (mm)
9           0.66584           0.30158         0.57504            0.29517          0.54478             0.2695
17          0.8777            0.29517         0.7869             0.34008          0.7869              0.32725
26          1.6041            0.35292         0.93823            0.36575          0.8777              0.3465
34          2.5726            0.41067         0.9685             0.36575          0.93823             0.37858
43          1.2106            0.33367         1.8765             0.39783          1.0593              0.39142
52          NaN               NaN             0.8777             0.23742          1.1198              0.37217
61          NaN               NaN             1.2712             0.12833          1.1804              0.33367

Figure D.11: Phantom measurement results of the three methods with the speed of sound assumption 1670 m/s (panels: (a) DRF, (b) SASB, (c) BiPBF).

Appendix E

Derivation of the Two-layer Compensation Model

This appendix derives Equation 3.17 of the thesis. Suppose a scatterer Q is located at the position (z_0, x_0), the focal point of the transmission is at z_f, and the SOS in the entire phantom is c_1. The arrival time profile can then be written as

t(x_k) = \frac{2\left(z_f + \sqrt{(z_0 - z_f)^2 + (x_0 - x_k)^2}\right)}{c_1}.   (E.1)

Figure E.1: TOF diagram of the simplified two-layer model (transducer elements 1-10; scatterer Q at (z_0, x_0); layer boundary at depth z_d; sound speeds c_1 and c_2).

Now consider a two-layer phantom with two different transmission velocities, with the layer boundary located at z_d and the SOS in the second layer being c_2. Supposing that the sound wave travels in a straight line, as shown in Figure E.1, the arrival time profile can be written as

t(x_k) = 2\left(\frac{z_f}{c_1} + \frac{L_1}{c_1} + \frac{L_2}{c_2}\right) = \frac{2 z_f}{c_1} + \frac{2 L_1}{c_1} + \frac{2 L_2}{c_2},   (E.2)

where L_1 and L_2 are the path lengths travelled in the first and second layer, respectively. With L_1 = \frac{l_1}{l_1 + l_2} L(x_k) and L_2 = \frac{l_2}{l_1 + l_2} L(x_k), this becomes

t(x_k) = \frac{2 z_f}{c_1} + \frac{2}{l_1 + l_2}\left[\frac{l_1}{c_1} + \frac{l_2}{c_2}\right] L(x_k),   (E.4)

where

l_1 = z_d - z_f,   (E.5)
l_2 = z_0 - z_d,   (E.6)

and

L(x_k) = \sqrt{(z_0 - z_f)^2 + (x_k - x_0)^2}.   (E.7)

This can be further rewritten as

t(x_k) = \frac{2 z_f}{c_1} + \frac{2}{z_0 - z_f}\left[\frac{z_d - z_f}{c_1} + \frac{z_0 - z_d}{c_2}\right]\sqrt{(z_0 - z_f)^2 + (x_k - x_0)^2}.   (E.8)

If we perform the same non-linear scaling as in Section 3.2.2, the function q(x_k) takes the form

q(x_k)^2 = \left(\frac{1}{2} t(x_k) - \frac{z_f}{c_1}\right)^2   (E.10)
         = \left[\frac{z_d - z_f}{c_1 (z_0 - z_f)} + \frac{z_0 - z_d}{c_2 (z_0 - z_f)}\right]^2 \left((z_0 - z_f)^2 + (x_k - x_0)^2\right)   (E.11)
         = \left[\frac{z_d - z_f}{c_1 (z_0 - z_f)} + \frac{z_0 - z_d}{c_2 (z_0 - z_f)}\right]^2 \left(x_k^2 - 2 x_k x_0 + x_0^2 + (z_0 - z_f)^2\right)   (E.13)
         = p_1 x_k^2 + p_2 x_k + p_3.   (E.15)

Therefore, x_0 and z_0 can be recovered from

\hat{x}_0 = -\frac{p_2}{2 p_1},   (E.17)

and

\hat{z}_0 = \sqrt{\frac{p_3}{p_1} - \hat{x}_0^2} + z_f.   (E.18)

Using the same method as in the one-layer model to obtain the SOS, we find

\sqrt{p_1} = \frac{c_2 (z_d - z_f) + c_1 (z_0 - z_d)}{c_1 c_2 (z_0 - z_f)}   (E.19)
           = \frac{(z_d - z_f) + \frac{c_1}{c_2} (z_0 - z_d)}{c_1 (z_0 - z_f)},   (E.20)

\frac{c_1}{c_2} = \frac{\sqrt{p_1}\, c_1 (z_0 - z_f) - (z_d - z_f)}{z_0 - z_d}.   (E.21)

Therefore,

\hat{c}_2 = \frac{(\hat{z}_0 - z_d)\, c_1}{\sqrt{p_1}\, c_1 (\hat{z}_0 - z_f) - (z_d - z_f)}.   (E.23)

If we define the apparent speed c_{app} = 1/\sqrt{p_1}, which is the speed the arrival time profile would correspond to had it been generated from a one-layer model, the above result can be interpreted as

\hat{c}_2 = \frac{\hat{z}_0 - z_d}{\frac{\hat{z}_0 - z_f}{c_{app}} - \frac{z_d - z_f}{c_1}}.
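The recovery of (x̂_0, ẑ_0, ĉ_2) described above can be sanity-checked numerically. The sketch below is a minimal, noise-free illustration in pure Python and is not the thesis implementation: the function names are chosen here for clarity, and the exact three-sample quadratic fit stands in for the polynomial fitting applied to real pre-beamformed data. It synthesizes a two-layer arrival-time profile from Equations (E.2)-(E.4), applies the non-linear scaling, fits q(x_k)^2 = p_1 x_k^2 + p_2 x_k + p_3, and recovers the scatterer position and second-layer SOS via Equations (E.17), (E.18), and (E.25).

```python
import math

def arrival_time(xk, z0, x0, zf, zd, c1, c2):
    """Round-trip arrival time t(x_k) of Eqs. (E.2)-(E.4) for a scatterer
    at (z0, x0) behind a layer boundary at depth zd (straight-ray model)."""
    L = math.sqrt((z0 - zf) ** 2 + (x0 - xk) ** 2)
    l1, l2 = zd - zf, z0 - zd  # axial extents in layers 1 and 2
    return 2 * zf / c1 + (2 / (l1 + l2)) * (l1 / c1 + l2 / c2) * L

def estimate_scatterer_and_sos(xs, ts, zf, zd, c1):
    """Fit q(x)^2 = p1*x^2 + p2*x + p3 through three (x, t) samples and
    apply Eqs. (E.17), (E.18), and (E.25) to recover (x0, z0, c2)."""
    q2 = [(t / 2 - zf / c1) ** 2 for t in ts]  # non-linear scaling
    (xa, ya), (xb, yb), (xc, yc) = zip(xs, q2)
    # exact interpolating quadratic (noise-free sketch; a practical
    # implementation would least-squares fit over many lines)
    p1 = ((yc - ya) / (xc - xa) - (yb - ya) / (xb - xa)) / (xc - xb)
    p2 = (yb - ya) / (xb - xa) - p1 * (xa + xb)
    p3 = ya - p1 * xa ** 2 - p2 * xa
    x0 = -p2 / (2 * p1)                     # Eq. (E.17)
    z0 = math.sqrt(p3 / p1 - x0 ** 2) + zf  # Eq. (E.18)
    c_app = 1 / math.sqrt(p1)               # apparent one-layer SOS
    c2 = (z0 - zd) / ((z0 - zf) / c_app - (zd - zf) / c1)  # Eq. (E.25)
    return x0, z0, c2
```

For example, with c_1 = 1540 m/s, c_2 = 1600 m/s, z_f = 20 mm, z_d = 40 mm, and a scatterer at (80 mm, 5 mm), the estimator recovers the true position and second-layer SOS to within floating-point error.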
(E.25)

Appendix F

Estimation of Layer Separation Based on the SOS Profile

In Section 3.2.3, in deriving the two-layer speed-of-sound (SOS) estimation profiles, we assumed that the depth of the separation between the two layers could be found by inspecting the beamformed image or through packaged fat-layer thickness detection algorithms. In this appendix, we present a method that automatically detects the separation between the two layers based solely on one-layer SOS estimates at different depths. The performance and limitations of this method are also addressed.

In a noise-free case, the location of the layer separation can be determined by locating the discontinuity in the SOS estimates at different depths (blue curve in Figure 3.7). In an experimental, noisy setup, however, this discontinuity might not be apparent, and the exact depth of the layer separation needs to be found from SOS estimates at different depths. Here, we propose a method to find an accurate layer-separation depth based on a series of SOS estimates (obtained through the one-layer approach) at different depths.

Description of the algorithm

Suppose we have identified D different scatterers at D different depths, z_d(d), d = 1, 2, ..., D, in the imaging field. (1) At each depth, we obtain an SOS estimate based on the one-layer model, c_{1-layer}(d), d = 1, 2, ..., D. (2) We define a grid of N possible depths of layer separation, z_s(n), n = 1, 2, ..., N. (3) For each possible z_s(n), we apply Equation 3.17 to the c_{1-layer}(d)'s at all D depths and find the corresponding SOS estimates for the two-layer model, c_{2-layer}(d | z_s(n)), d = 1, 2, ..., D and n = 1, 2, ..., N. (4) After the c_{2-layer}(d | z_s(n))'s at different depths are gathered, we calculate the average SOS in the second layer, ĉ_2(z_s(n)), where

\hat{c}_2(z_s(n)) = \operatorname{avg}\bigl(c_{2\text{-layer}}(d \mid z_s(n))\bigr), \quad \forall z_d(d) > z_s(n).
(F.1)

We can then define a two-layer SOS profile, ĉ_{2-layer}(d | z_s(n)), d = 1, 2, ..., D, at different depths, where

\hat{c}_{2\text{-layer}}(d \mid z_s(n)) = \begin{cases} c_1, & \text{if } z_d(d) < z_s(n) \\ \hat{c}_2(z_s(n)), & \text{otherwise.} \end{cases}   (F.2)

(5) If we formulate the selection of the true layer-separation position as an optimization problem, we can define the cost function as the sum of the absolute differences between the two SOS profiles, the c_{2-layer}(d | z_s(n))'s and the ĉ_{2-layer}(d | z_s(n))'s. The estimated location of the layer separation can then be expressed as

\hat{z}_s = \arg\min_{z_s(n)} \sum_{d=1}^{D} \left| c_{2\text{-layer}}(d \mid z_s(n)) - \hat{c}_{2\text{-layer}}(d \mid z_s(n)) \right|, \quad n = 1, 2, \ldots, N.   (F.3)

Results

This layer-separation detection algorithm was tested using the Field II simulation software. The two-layer simulation setup is the same as the one described in Section 3.2.7. There are 12 vertically distributed point scatterers, spaced 10 mm apart, spanning depths from 5 mm to 115 mm in the imaging field. The performance of the layer-separation algorithm is summarized in Table F.1. The grid of candidate layer-separation depths (the z_s(n)'s) has a spacing of 2 mm. The errors in the SOS estimate of the first layer (ĉ_1), the SOS estimate of the second layer (ĉ_2), and the estimated position of the layer separation (ẑ_s) are listed.

Table F.1: Accuracy of layer separation and SOS estimation using the automatic layer separation detection

z_s (mm)  ĉ_1 (m/s)  Error in ĉ_1 (%)  ĉ_2 (m/s)  Error in ĉ_2 (%)  ẑ_s (mm)  Error in ẑ_s (mm)
40        1445       -0.35             1540       0                 48        8
50        1439       -0.8              1533       -0.46             50        0
60        1440       -0.72             1526       -0.92             58        -2
70        1439       -0.75             1531       -0.59             66        -4
80        1439       -0.75             1506       -2.26             66        -14

Discussion

From the layer-separation estimation results, we observe that the algorithm works better when the layer separation is near the centre of the phantom.
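The grid search described in steps (1)-(5) can be sketched as follows. This is a simplified stand-in rather than the thesis code: the per-depth conversion uses the apparent-speed form of Equation (E.25) in place of Equation 3.17, and, as a stated simplification, the cost of Equation (F.3) is accumulated only over depths below the candidate boundary.

```python
def two_layer_sos(c_app, z, zf, zs, c1):
    """Second-layer SOS implied by a one-layer (apparent) SOS estimate
    c_app for a scatterer at depth z, assuming the boundary sits at zs."""
    return (z - zs) / ((z - zf) / c_app - (zs - zf) / c1)

def estimate_boundary(depths, c_one_layer, zf, candidates, c1):
    """Pick the candidate boundary depth whose implied second-layer SOS
    profile is closest to constant (L1 cost, cf. Eq. (F.3))."""
    best_cost, best_zs = float("inf"), None
    for zs in candidates:
        # step (3): convert one-layer estimates below the candidate boundary
        c2 = [two_layer_sos(c, d, zf, zs, c1)
              for d, c in zip(depths, c_one_layer) if d > zs]
        if not c2:
            continue  # no scatterers below this candidate boundary
        c2_avg = sum(c2) / len(c2)               # step (4), Eq. (F.1)
        cost = sum(abs(v - c2_avg) for v in c2)  # step (5), simplified (F.3)
        if cost < best_cost:
            best_cost, best_zs = cost, zs
    return best_zs
```

With a noise-free synthetic one-layer profile (c_1 = 1540 m/s, c_2 = 1600 m/s, true boundary at 40 mm), the search returns the 40 mm candidate, since only there do all converted second-layer estimates collapse onto a single value.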
The algorithm performs better there because a reasonable number of estimates then lies on either side of the layer separation, giving a more accurate result.

Currently, the layer-separation estimation algorithm is not reliable enough for deeper layer-separation positions. The results could, however, be improved in one of the following ways: (1) exploring different cost functions in Equation F.3, such as other types of norms, could make the layer-separation estimate more accurate; (2) the layer-separation performance could also improve if the two-layer compensation model is improved, especially once the 'overshoot' problem presented in Section 3.3.1 is solved.

Appendix G

Beamformed Images from the In-house Phantom Measurements

This appendix contains the beamformed images of the in-house two-layer agar-glycerol phantoms. These images correspond to the measurement results listed in Table 3.17 and Table 3.18.

Figure G.1: Beamformed images from the two-layer in-house phantom using the one-layer DRF method, the one-layer SASB method, and the two-layer SASB method, with a 33 mm thick first layer (panels: (a) DRF, (b) One-layer SASB, (c) Two-layer SASB).

Figure G.2: Beamformed images from the two-layer in-house phantom using the one-layer DRF method, the one-layer SASB method, and the two-layer SASB method, with a 53 mm thick first layer (panels: (a) DRF, (b) One-layer SASB, (c) Two-layer SASB).
