Advances in Synthetic Aperture, Compounded Plane Wave, and Spatially Encoded Excitation Techniques for Fast Ultrasonography

by

Kyle Kotowick

B.Sc. (Hons) Computer Science, University of British Columbia Okanagan, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in THE FACULTY OF GRADUATE STUDIES (Electrical and Computer Engineering)

The University of British Columbia (Vancouver)

August 2013

© Kyle Kotowick, 2013

Abstract

Ultrasonography offers subcutaneous imaging at a fraction of the cost of magnetic resonance imaging (MRI) and without the ionizing radiation of X-ray or computed tomography (CT) imaging. In addition, ultrasound imaging machines are compact and portable, and do not require any sort of specialized environment to function. Ultrasonography is, however, limited by the relatively slow speed of sound, and standard beamforming can only achieve low imaging frame rates (20-80 frames per second). This restricts its use in a number of applications that would otherwise benefit greatly from it; for example, transient elastography, 3-dimensional volumetric imaging, and Doppler sonography would all benefit from higher frame rates.

This thesis presents two new variations of fast imaging methods. The first combines two existing fast-imaging techniques, plane wave (PW) and synthetic aperture (SA), using an adaptive weighting algorithm to compound images generated by each technique individually. This method improves image resolution and signal-to-noise ratio (SNR) without losing the higher frame rate of each, and is successfully demonstrated through experiments on a physical commercial ultrasound system. The second method for increasing frame rate involves two extensions to a spatial encoding technique proposed by Fredrik Gran and Jørgen Arendt Jensen in 2008: implementing a compressed sensing algorithm to reduce the code length requirement presented in their paper, and removing the non-imageable "deadzone" region that their method produces. These extensions are applicable to scenarios requiring high definition for a small set of high-reflectivity points in an otherwise dark region, such as intra-spinal needle guidance, and are demonstrated using the Field II ultrasound simulation software.

Preface

All work presented in this thesis was completed at the Robotics and Control Laboratory at the University of British Columbia, Point Grey Campus. The work described in this thesis was completed independently by the author, K. Kotowick, with guidance from advisors Professors L. Lampe and R. Rohling.

The hardware and software discussed in Chapter 3 are commercially available (hardware) or open-source (software) products that the author did not contribute to.

A version of Chapter 4 has been published in the proceedings of the International Symposium on Biomedical Imaging 2013 as "Adaptive Compounding of Synthetic Aperture and Compounded Plane-Wave Imaging for Fast Ultrasonography" [20].

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgments
1 Introduction
  1.1 Thesis Objectives
  1.2 Thesis Structure
2 Background
  2.1 Conventional Ultrasound Basics
  2.2 Fast Imaging Methods
3 Equipment and Software
  3.1 SonixRP
  3.2 SonixDAQ
  3.3 Field II
4 Adaptive Compounding of Synthetic Aperture and Compounded Plane-Wave Imaging for Fast Ultrasonography
  4.1 Background
  4.2 Previous Research
  4.3 Experimental Design
  4.4 Results and Discussion
  4.5 Summary
5 Compressed Sensing and Partial Decoding for Receive-Side Separation of Multiple Simultaneous Transmit Events
  5.1 Background
  5.2 Previous Research
  5.3 Spatial Encoding with Code Division
  5.4 Compressed Sensing
  5.5 Experimental Design
  5.6 Results
  5.7 Discussion
6 Conclusions
  6.1 Thesis Contributions
  6.2 Future Work
Bibliography
A Compressed Sensing Point Phantom Images
B Compressed Sensing Tissue Phantom Images

List of Tables

Table 4.1  Lateral Resolution (mm)
Table 4.2  Axial Resolution (mm)
Table 4.3  Occlusion Imaging SNR (dB)
Table 4.4  Significance p-values: comparison of resolution and SNR of adaptively compounded (AC) method vs. others
Table 5.1  Lateral and Axial Width-Half-Max (mm) of Point Spread Response for Compression Ratio of 0.9
Table 5.2  Lateral and Axial Width-Half-Max (mm) of Point Spread Response for Compression Ratio of 1
Table 5.3  Lateral and Axial Width-Half-Max (mm) of Point Spread Response for Compression Ratio of 3
Table 5.4  Lateral and Axial Width-Half-Max (mm) of Point Spread Response for Compression Ratio of 5
Table 5.5  SNR (dB) of High and Low Reflectivity Occlusions for Compression Ratio of 0.9
Table 5.6  SNR (dB) of High and Low Reflectivity Occlusions for Compression Ratio of 1
Table 5.7  SNR (dB) of High and Low Reflectivity Occlusions for Compression Ratio of 3
Table 5.8  SNR (dB) of High and Low Reflectivity Occlusions for Compression Ratio of 5

List of Figures

Figure 2.1  (a) An Ultrasonix C7-3 convex transducer. (b) An Ultrasonix L14-5 linear transducer. (c) Image generated by a convex transducer. (d) Image generated by a linear transducer. Images courtesy of Ultrasonix Medical Corp.
Figure 2.2  Time delays for an SA transmit from element at position xt to a point (x,z) and received by element at position xr.
Figure 2.3  Time delays for a plane wave to a point (x,z) and received by element at position xr.
Figure 3.1  The Ultrasonix Research Platform (SONIXRP). Image courtesy of Ultrasonix Medical Corp.
Figure 3.2  (a) An Ultrasonix Data Acquisition Card (SONIXDAQ) module. (b) A high-level block diagram showing the design of the SONIXDAQ. (c) The internal circuitry of the SONIXDAQ. (d) The SONIXDAQ installed on a SONIXRP ultrasound machine. Images courtesy of Ultrasonix Medical Corp.
Figure 4.1  (a) Time delays for a plane wave of angle θ to a point (x,z) and received by element at position xr. (b) Time delays for an SA transmit from element at position xt to a scatterer at (x,z) and received by element at position xr.
Figure 4.2  Structure of the CIRS General Purpose Multi-Tissue Ultrasound Phantom, Model 40.
Figure 4.3  (a) B-mode conventional delay-and-sum (DAS) transmit beamforming with 2 focal depths, 128 transmits each (256 total). (b) 128-transmit compounded plane wave (CPW) with 128-element aperture. (c) 128-transmit SA with 10-element aperture. (d) Adaptively compounded 64-transmit CPW and 64-transmit SA.
Figure 5.1  (a) Possible echo sources with transmission from xt1 received by xr. (b) Possible echo sources with transmission from xt2 received by xr. (c) Possible echo sources with simultaneous transmission from xt1 and xt2 received by xr.
Figure 5.2  Reusable acoustic standoff pads. Image courtesy of CIVCO Medical Solutions.
Figures A.1-A.20  Point phantom simulations for compression ratios of 0.9, 1, 3, and 5 and standoff pad thicknesses of 2, 5, 15, 30, and 50 mm; each shows (a) the reference simulation, (b) the Gran decoded image, (c) the compressed sensing decoded image, and (d) the compressed sensing partial decoded image.
Figures B.1-B.20  Tissue phantom simulations for compression ratios of 0.9, 1, 3, and 5 and standoff pad thicknesses of 2, 5, 15, 30, and 50 mm; each shows (a) the reference simulation, (b) the Gran decoded image, (c) the compressed sensing decoded image, and (d) the compressed sensing partial decoded image.

Glossary

AC        adaptively compounded
ADC       analog-to-digital conversion
CPW       compounded plane wave
CT        computed tomography
DAS       delay-and-sum
FIR       finite impulse response
FPS       frames per second
GPU       graphics processing unit
MRI       magnetic resonance imaging
PW        plane wave
RF        radio-frequency
SA        synthetic aperture
SNR       signal-to-noise ratio
SONIXDAQ  Ultrasonix Data Acquisition Card
SONIXRP   Ultrasonix Research Platform
TGC       time-gain compensation

Acknowledgments

First and foremost, I would like to thank Rob and Lutz, who took a risk by accepting me into a program I had little background in, but had faith in me and provided excellent guidance. Your commitment to this project was admirable and crucial to its completion.

A special thanks to my parents, Dwain and Shelley, for raising me to be the person I am today and for convincing me to pursue academia. Their support and love have always been unconditional and without bounds.

Thank you to my labmates for the welcoming and entertaining environment, in particular to Caitlin Schneider for taking me under her wing and patiently answering my continual questions, and to the support staff at Ultrasonix for graciously handling my hectic requests for technical assistance.

Finally, the completion of this research would not have been possible without funding from the Natural Sciences and Engineering Research Council of Canada (NSERC) and from the University of British Columbia.
Chapter 1

Introduction

A relatively mature technology, ultrasound imaging has seen a strong resurgence in new research over the past decade. In the face of rising medical costs, ultrasonography offers subcutaneous imaging at a fraction of the cost of magnetic resonance imaging (MRI) and without the ionizing radiation of X-ray or computed tomography (CT) imaging. It provides portable, real-time imaging capability without the need for a specialized environment. As opposed to X-ray or CT, which require radiation shielding, and MRI, which requires multi-million dollar equipment and a metal-free environment, ultrasonography can be performed at extremely low cost in nearly any situation.

Unfortunately, due to the relatively slow speed of sound, ultrasonography suffers from low image frame rates. Images are acquired in vertical sections, which are then combined to form a full frame. To avoid interference, only one of these sections can be acquired at a time, necessitating a long data acquisition phase. Depending on the specific technique used and the imaging depth, the frame rate generally falls in the 20-80 frames per second (FPS) range. This restriction hampers a number of potential applications that require higher frame rates to be fully functional, including 3-dimensional imaging, elastography, and Doppler sonography.

3-dimensional volumetric imaging, usually generated by sweeping a 1-D transducer array over a 2-D plane, builds an image from a number of 2-D "slices" which are combined into the final 3-D volume. As each 2-D slice must be generated individually, this drastically reduces the frame rate: a 3-D volume generated from 20-80 slices is produced at around 1 FPS, which is insufficient for imaging nearly any form of internal motion. For this technique of 3-D imaging to be fully effective, significantly higher frame rates than conventional systems offer are required.

Elastography is a technique for detecting hard tissue within soft tissue (usually for tumor detection) [1]. The tissue is vibrated at approximately 60-150 Hz and imaged with ultrasound simultaneously. Hard tissue will remain rigid, while the soft tissue surrounding it will compress with the vibrations. The strain on sections of tissue is measured to detect the rigid regions, which signify a possible tumor [24]. As specified by the Nyquist sampling theorem, in order to reconstruct the tissue vibration the sampling (imaging) must be conducted at twice the vibration rate (120-300 FPS). Improvements in ultrasound frame rate are often required to achieve this goal.
Doppler sonography is a technique for assessing the motion of structures (often blood) through tissue, specifically their velocity relative to the transducer. Modern ultrasound machines use pulsed Doppler, where the frequency shift is determined from the phase changes of a given sample between pulses (frequency being the change of phase over time). The frequency shift can then be used to determine the speed and direction of the movement of that sample, which is then visualized by a colour overlay on the ultrasound image. Pulsed Doppler can suffer from aliasing when the frequency of the motion being observed is greater than the frame rate of the ultrasound imaging, which can result in misleading or missing information. A higher frame rate ultrasonography method would therefore be of great benefit for Doppler imaging.

Frame rate also determines the time required to generate a single image frame. This has drastic implications even for techniques that do not require a high frame rate. For imaging of fast-moving organs, such as the heart, if an image frame takes a long time to create then the organ will have moved during data collection. For techniques that acquire frames in sections, such as conventional transmit beamforming (Section 2.1.5), this means that part of the frame will have been generated while the organ is in a specific position and orientation, while other parts will have been generated after the organ has moved. This effect makes generating high-resolution images without blurring significantly more difficult. Higher frame rate ultrasound methods mean that organs move less during the imaging period, therefore providing a higher quality image with fewer restrictions on stationarity.

The frame rate of conventional ultrasound imaging depends on a number of factors and can be defined, in frames per second, as

    Frame Rate = \frac{c}{2md},    (1.1)

where d is the depth of the region to be imaged, m is the number of independent transmit events per frame, and c is the speed of sound in the imaged tissue (which averages 1540 m/s in human tissue). For a given region of tissue to image, c and d are constant, so methods to improve frame rate focus on decreasing the value of m. m can be reduced in two ways: (1) reduce the number of transmits required to form a full frame, or (2) have multiple transmits occur simultaneously to reduce the total time required.
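As a minimal sketch of Equation 1.1, the snippet below evaluates the frame rate for a 10 cm imaging depth; the function name and the example transmit counts are illustrative choices, not values prescribed by the thesis.

```python
# Minimal sketch of Equation 1.1: frame rate as a function of imaging depth
# and the number of transmit events per frame.

def frame_rate(depth_m, transmits_per_frame, c=1540.0):
    """Frames per second; c is the assumed average speed of sound in tissue (m/s)."""
    return c / (2.0 * transmits_per_frame * depth_m)

d = 0.10                      # 10 cm imaging depth
print(frame_rate(d, 128))     # one focused transmit per element column: ~60 FPS
print(frame_rate(d, 1))       # a single plane-wave transmit: ~7700 FPS
```

The two calls reproduce the orders of magnitude quoted in the text: conventional column-by-column acquisition lands in the 20-80 FPS range, while a single-transmit frame reaches several thousand FPS.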
Previously proposed methods for reducing the number of required transmits per frame (1) include plane wave (PW) imaging (Section 2.2.2) and synthetic aperture (SA) imaging (Section 2.2.1). The PW technique activates all elements of a transducer simultaneously to produce a planar acoustic wave instead of a conventional focused beam, which illuminates the entire region in a single transmit (up to a 128x increase in frame rate). The SA technique activates only a single element at a time, which allows for dynamic transmit beamforming as well as receive beamforming, thereby giving substantially greater accuracy in image reconstruction. The higher accuracy and unfocused nature of the single-element activation allow for image generation with far fewer transmits than conventional imaging. Both of these, however, have significant disadvantages that prevent them from being a complete replacement for conventional imaging (discussed in detail in their respective sections).

One previously proposed method for using multiple transmits simultaneously (2) is spatial encoding of ultrasound pulses (Section 2.2.3). Spatial encoding, also known as coded excitation, allows for simultaneous transmit events that are activated with unique coded pulses. The received signals can then be separated using a decoding matrix to isolate the echoes that originated from each individual transmit.

1.1 Thesis Objectives

The goal of the work in this thesis is to improve the frame rate of ultrasound imaging without significantly decreasing image quality. This is achieved with two separate methods:

1. Combining the existing PW and SA fast-imaging techniques using an adaptive weighting algorithm to compound images generated from the techniques individually, giving an improvement in image resolution and signal-to-noise ratio (SNR) without losing the higher frame rate.

2. Improving on Gran and Jensen's spatial encoding technique [9] to eliminate "deadzones" (non-imageable regions due to the half-duplex constraint of ultrasound transducers) by implementing a compressed sensing with partial decoding method. Compressed sensing allows for a drastic reduction in encoded pulse length, and partial decoding allows for estimation of reflectors in the deadzone.

1.2 Thesis Structure

The outline of this thesis is as follows: Chapter 2 provides a detailed and comprehensive background on ultrasound imaging, as well as a number of techniques that have been attempted in order to improve frame rate. Chapter 3 details the equipment and software used in the course of this research, including the Ultrasonix Research Platform (SONIXRP), the Ultrasonix Data Acquisition Card (SONIXDAQ), and the Field II ultrasound simulation software. Chapter 4 presents an original method for improving the image quality of existing high frame rate methods (PW and SA) by adaptively combining them. Chapter 5 presents two original extensions of the work done by Gran and Jensen on spatial encoding for fast imaging [9]. Their method is enhanced by implementing compressed sensing algorithms with partial decoding to remove the "deadzone" caused by the ultrasound transducer element half-duplex constraint. Finally, Chapter 6 presents the conclusions of this thesis and describes the direction of future work on the subject.

Chapter 2

Background

The following chapter provides an overview of current conventional ultrasound imaging systems as well as an introduction to proposed fast-imaging techniques, as a basis for the original work described in Chapter 4 and Chapter 5.

2.1 Conventional Ultrasound Basics

2.1.1 Acoustic Wave Generation

Sound waves are generated from piezoelectric elements (devices that vibrate when a voltage is applied) arranged in a 1-D array, encased in a housing (the "transducer"). These elements are excited at the desired frequency by an impulse generator in the ultrasound machine. Imaging frequencies are generally selected on the basis of the type and depth of tissue to be imaged, with higher frequencies giving better resolution but poorer tissue penetration than lower frequencies. A given transducer consists of elements that have a natural central frequency, usually in the range of 2-20 MHz (with exceptions for highly specialized applications). The frequency of the impulse generator is generally selected to correspond with the transducer's central frequency, with some leeway to force a specific frequency slightly higher or lower if desired. The impulses provided by the impulse generator can be positive or negative, with variable power levels available depending on the equipment (in the sense that the power level of the entire impulse sequence can be adjusted, but not individual impulses within a transmit).
Pre-programmed series of positive and negative impulses can be used to generate a specific pulse shape in the waveform, with applications for encoded transmits (coded excitation). The impulses received by the elements cause them to vibrate, thereby transmitting an acoustic wave into the tissue they are placed against. A specialized gel is used between the transducer and the tissue surface to minimize reflections from the transition.

2.1.2 Transducer Designs

Ultrasound transducers are available in a wide variety of designs for specific applications. Most commonly used for external imaging (where the transducer is against an exterior surface of the body) are convex (Figure 2.1a) and linear (Figure 2.1b) transducers. Convex transducers form an image with a wide field of view that increases with depth, but decreases in resolution (Figure 2.1c). Linear transducers form a rectangular image with more consistent resolution (Figure 2.1d), and are used exclusively for the experiments (both physical and simulated) described in this thesis.

Figure 2.1: (a) An Ultrasonix C7-3 convex transducer. (b) An Ultrasonix L14-5 linear transducer. (c) Image generated by a convex transducer. (d) Image generated by a linear transducer. Images courtesy of Ultrasonix Medical Corp.

2.1.3 Transmit Beamforming

Modern ultrasound machines use phased array transducers, which allow each element to be fired independently with varying time delays [34, 46]. By varying the delays on each element, the ultrasound beam can be shaped or steered to focus on a given point. This is known as fixed transmit focusing. Doing so drastically increases the ratio of energy focused on that point versus all other points in the tissue, which simplifies the image formation calculations and provides higher accuracy.

For each focused transmit, a transmit aperture is defined (i.e., the number of elements transmitting). The transmit aperture is generally user-adjustable, depending on the application. A narrow aperture will provide a narrower beam above and below the focal point, while a wider aperture will have a wider beam but will provide more acoustic power at the focal point.

Conventional ultrasonography, commonly known as delay-and-sum (DAS) ultrasound, uses at least one transmit focused vertically on each element column. For each of these transmits, a small vertical section of the final image is generated and combined/stitched horizontally with the image sections from each other transmit.

Depending on the depth of the region of interest and the level of quality desired, multiple focused transmits may be performed for each element column, with various focal depths. This results in image sections being combined/stitched both horizontally and vertically.

2.1.4 Echo Reception

Reflected sound waves are received in the reverse way that they are transmitted. Echoes transfer from the tissue into the transducer elements, causing them to vibrate and produce an electrical signal. The signal passes through analog-to-digital conversion (ADC) hardware, and the ultrasound machine captures the resultant digital signal and records it for use in generating the final image. The received signal is known as the radio-frequency (RF) signal.

As acoustic waves travel through tissue, their power is attenuated. Therefore, received echoes that originated from reflectors at a greater depth will have a lower power than echoes that originated from reflectors of the same strength at a shallower depth. This effect is minimized by using time-gain compensation (TGC), which artificially increases the power of reflections based on the time they take to arrive at the receiving element [39], providing a more uniform and accurate brightness in the image.
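The thesis does not specify a particular TGC curve, so the sketch below is only an illustration of the idea: gain grows with arrival time so that echoes from deeper reflectors are boosted toward a uniform brightness. The exponential attenuation model, the 0.5 dB/cm/MHz coefficient, and the array names are assumptions made for this example.

```python
import numpy as np

# Illustrative time-gain compensation (TGC) curve under an assumed two-way
# attenuation of 0.5 dB/cm/MHz (a commonly quoted soft-tissue figure, not a
# value taken from the thesis).

def tgc_gain(t, f0_mhz=9.5, alpha_db_cm_mhz=0.5, c=1540.0):
    """Linear amplitude gain to apply to a sample received at time t (seconds)."""
    depth_cm = 100.0 * c * t / 2.0                          # round-trip time -> depth (cm)
    atten_db = alpha_db_cm_mhz * f0_mhz * 2.0 * depth_cm    # two-way attenuation (dB)
    return 10.0 ** (atten_db / 20.0)

fs = 40e6                               # 40 MHz sampling, as on the SonixRP
t = np.arange(4096) / fs
rf = np.random.randn(4096)              # placeholder RF trace
rf_tgc = rf * tgc_gain(t)               # depth-compensated RF
```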
2.1.5 Receive Beamforming and Aperture

When a transmitted acoustic wave is reflected by a point in the tissue, it scatters, meaning that the echo will be received by multiple elements in the transducer. Receive beamforming is the re-focusing of the received echoes for each point in the image being generated. For each point (x,z) in the image, the temporal coordinate of the corresponding signal in the received RF of a column-beamformed transmit (i.e., the transmit beam is perpendicular to the transducer face, and not steered) is given by

    \tau(x_r, x, z) = \frac{z + \sqrt{z^2 + (x - x_r)^2}}{c},    (2.1)

where x is also the lateral position of the transmit focus and x_r is the lateral position of the receiving element. For each given depth z (generally given in increments of the sampling interval) in each element column x, the corresponding temporal coordinate of the RF from each receiving element is calculated using Equation 2.1, and the samples are then summed together to find the total value for that point in the image.

The piezoelectric elements used in ultrasound transducers are most receptive to vibrations traveling directly perpendicular to the surface of the element (axially), and the receptiveness decreases as the angle of the incoming reflected wave increases. Therefore, only elements within a certain distance from the element column the transmit is focused on are useful in receiving. To compensate for this, the receive aperture (i.e., the elements of the transducer that are used to contribute to a given point (x,z) in the image) is determined by the depth z of that point. The width of the receive aperture is given by

    D = \frac{z}{N},    (2.2)

where D is the width of the aperture and N is the F-number. The F-number is simply the ratio of the focal length to the aperture width, and is usually (and for all experiments in this thesis) set to 0.5, with user-adjustability on ultrasound machines.

2.1.6 Image Formation

After receive beamforming is completed and the values for each point in the image have been summed, the RF is envelope-detected using a Hilbert transform to provide a displayable greyscale image. The image is often also logarithmically compressed to display a greater dynamic range and increase the visibility of smaller differences in reflection amplitude.
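The following is a minimal sketch of delay-and-sum receive beamforming for one image column, combining the delay of Equation 2.1 with the F-number aperture rule of Equation 2.2. The pre-beamformed RF array, element positions, pitch, and sampling rate are hypothetical placeholders, and the interpolation is a simple nearest-sample lookup rather than anything the thesis prescribes.

```python
import numpy as np

def beamform_column(rf, elem_x, x, depths, fs, c=1540.0, f_number=0.5):
    """Delay-and-sum one image column at lateral position x.

    rf      : pre-beamformed RF data, shape (elements, samples) -- placeholder
    elem_x  : lateral element positions (m)
    depths  : image depths z to evaluate (m)
    """
    out = np.zeros(len(depths))
    for i, z in enumerate(depths):
        aperture = z / f_number                               # Equation 2.2: D = z / N
        active = np.abs(elem_x - x) <= aperture / 2.0         # depth-dependent receive aperture
        tau = (z + np.sqrt(z**2 + (x - elem_x[active])**2)) / c   # Equation 2.1
        idx = np.round(tau * fs).astype(int)                  # nearest RF sample per element
        valid = idx < rf.shape[1]
        out[i] = np.sum(rf[np.flatnonzero(active)[valid], idx[valid]])
    return out

# Hypothetical usage: 128-element linear array, 0.3 mm pitch, 40 MHz sampling.
fs, pitch = 40e6, 0.3e-3
elem_x = (np.arange(128) - 63.5) * pitch
rf = np.zeros((128, 4096))                                    # placeholder pre-beamformed RF
depths = np.arange(1e-3, 60e-3, 0.25e-3)
column = beamform_column(rf, elem_x, elem_x[64], depths, fs)
```

Envelope detection of the beamformed columns (e.g., via a Hilbert transform) and log compression would then follow, as described in Section 2.1.6.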
2.2 Fast Imaging Methods

A number of alternative imaging methods have been proposed and studied with the intention of achieving a higher frame rate than conventional imaging.

2.2.1 Synthetic Aperture

SA imaging [18], based on the technique for radar systems [35], synthesizes an aperture by using a single element, moving it over the region of interest, and combining the resultant datasets. This technique offers the benefit of extremely high resolution: as only one element is transmitting at once, it is known exactly where a given reflection originated (as opposed to conventional transmit beamforming, where a reflected signal could have come from any element in the transmit aperture), which enables dynamic transmit focusing. Because of this high precision, the same resolution as conventional imaging can be achieved with fewer transmits, e.g., with transmits on every second element instead of a focus for each element as in conventional imaging. Fewer transmits translate to a higher frame rate, as shown in Equation 1.1. The use of SA imaging for medical applications was first considered in the 1970s and 80s [2, 5, 27], and was developed with more advanced techniques over the following decades [14, 22, 23, 28].

Figure 2.2: Time delays for an SA transmit from element at position xt to a point (x,z) and received by element at position xr.

The drawback to this method is that due to single-element transmits, and therefore the lower amount of acoustic energy entering the tissue per transmit, the received RF SNR is significantly lower than that of conventional imaging (i.e., details of the tissue being imaged are less visible among the background noise). Several methods have been proposed for overcoming this issue, which are described in Section 4.2.1.

2.2.2 Plane Wave

One method for obtaining high frame rates is that of PW imaging (Figure 2.3) [32, 33], where all transducer elements are fired together to create a quasi-planar wave. Due to the large number of elements firing per transmit, and therefore the large amount of energy illuminating the tissue, this method offers a relatively high SNR. On receive dynamic beamforming, reflections are assumed to have originated from the leading edge of the plane wave. Firing all elements simultaneously, as opposed to sequential focused columns, allows an entire image to be generated from a single transmit event, leading to a frame rate over 100 times higher than conventional imaging (7500+ FPS for a 10 cm deep image), making it particularly well suited for techniques requiring dynamic motion tracking such as transient elastography for cancer detection [1].

Figure 2.3: Time delays for a plane wave to a point (x,z) and received by element at position xr.

However, the lack of a transmit focus results in drastically lower resolution than conventional transmit beamforming techniques offer. Work has been done in the field to improve this drawback, as detailed in Section 4.2.2.

2.2.3 Coded Excitation

A more recent method for improving ultrasound frame rate, and one of the main foci of this thesis, is the use of coded excitation. In conventional ultrasound, only one group of elements is activated at a time, where a "group" is defined as a set of elements that are programmed to have a common focal point. If more than one group is activated, then the origin of received echoes becomes ambiguous, causing a drastic drop in image quality. The theory behind coded excitation is based on the idea that it should be possible to have multiple simultaneous transmits that each emit an individual code, which can be decoded and isolated in the received RF. After isolation, they are treated in processing as if they were entirely separate.

Coded excitation presents the benefit of requiring 1/K as many transmits per image as conventional imaging, where K is the number of groups transmitting simultaneously with encoded pulse sequences. This directly translates to K times the frame rate of the equivalent conventional imaging, but with the drawbacks of decreased image quality due to the non-linear nature of tissue, as well as what is referred to in this thesis as a "deadzone", where tissue cannot be imaged due to the half-duplex constraint of the transducer and the required length of encoded pulse sequences. These drawbacks and the proposed methods for minimizing or removing them (one of the objectives of this thesis) are described in Chapter 5.
Chapter 3

Equipment and Software

This chapter provides details and specifications for the equipment (SONIXRP and SONIXDAQ) used to perform the physical experiments in Chapter 4, and an overview of the software (Field II) used to perform the simulations in Chapter 5.

3.1 SonixRP

3.1.1 Specifications

The SONIXRP is a diagnostic ultrasound system from Ultrasonix Medical Corporation. The operating system consists of a customized version of Microsoft Windows XP, allowing for use of the machine in a PC-type manner. The SONIXRP supports 128-element transducers, with 256 transmit channels and 32 receive channels. The ultrasound system runs on a 40 MHz internal clock and utilizes 10-bit ADC hardware.

Figure 3.1: The Ultrasonix Research Platform (SONIXRP). Image courtesy of Ultrasonix Medical Corp.

3.1.2 Research Interface

The SONIXRP comes packaged with a research interface. The research interface and other special research tools on the RP allow for the following additional functionality (as listed in Ultrasonix's documentation [45]):

1. The use of operational modes not available on a purely clinical system
2. The retrieval and modification of low-level parameters used to generate ultrasound images
3. The acquisition and storage of raw data in a variety of formats
4. Transducer prototyping
5. Connecting to the system through a network for parameter setting and data capture
6. Low-level ultrasound beam sequencing and control
7. Development of commercial ultrasound applications running on the Sonix platform

For the research in this thesis, the most important of these were 2 and 6. Running experiments using SA and PW imaging requires control over the individual elements in the transducer and their parameters, including apertures, time delays, power, excitation pulse shapes, and TGC. Access to the entire operating system in research mode is also required for data collection using the SONIXDAQ, as described in Section 3.2, and for post-processing of said data using external applications such as Matlab or custom C++ applications.

3.2 SonixDAQ

3.2.1 Specifications

The Ultrasonix Data Acquisition Card (SONIXDAQ) is a device from Ultrasonix Medical Corporation that is designed to integrate with their ultrasound research platforms (Figure 3.2a). Where the research interface on the SONIXRP allows the capture of receive-beamformed RF data that is a summation of the individual channels, the SONIXDAQ captures the raw (pre-beamformed) RF data received by each individual element. It is a receive-only device that does not affect the imaging sequence in any way, but captures the data as it is gathered and stores it in internal memory. Raw RF data is processed through ADC hardware, high-pass filtered, packed into a usable format, and stored in memory. After the imaging sequence is complete, the raw RF data can be downloaded from the SONIXDAQ's internal memory to the SONIXRP's internal hard disk via a USB cable (Figure 3.2b). The SONIXDAQ supports 128 parallel receive channels (each with an independent ADC) and contains 16 GB of internal memory to store the RF data. It uses a 40 MHz internal clock, with support for both 40 MHz sampling (with 12-bit ADC) and 80 MHz sampling (with 10-bit ADC). It connects to one of the transducer ports on the ultrasound machine for transfer of the RF data (Figure 3.2d), and to an external trigger port on the ultrasound machine for synchronized data acquisition (the SONIXDAQ begins recording a new frame when the ultrasound machine signals that a new transmission has begun).

Figure 3.2: (a) A SONIXDAQ module. (b) A high-level block diagram showing the design of the SONIXDAQ. (c) The internal circuitry of the SONIXDAQ. (d) The SONIXDAQ installed on a SONIXRP ultrasound machine. Images courtesy of Ultrasonix Medical Corp.
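To give a feel for the data volumes involved, the back-of-envelope estimate below uses only the specifications quoted above (128 channels, 40 MHz sampling, 12-bit ADC, 16 GB of memory) together with an assumed 6 cm imaging depth and an assumed packing of 2 bytes per sample; the packing assumption is mine, not Ultrasonix's.

```python
# Rough estimate of how much pre-beamformed RF the SonixDAQ's 16 GB buffer holds.
channels = 128
fs = 40e6                    # samples per second per channel
bytes_per_sample = 2         # assumed: 12-bit samples stored in 16-bit words
depth = 0.06                 # assumed 6 cm imaging depth
c = 1540.0

samples_per_event = fs * (2 * depth / c)                 # ~3100 samples per transmit event
bytes_per_event = channels * samples_per_event * bytes_per_sample
events_in_16gb = 16e9 / bytes_per_event
print(f"{bytes_per_event / 1e6:.2f} MB per transmit event, "
      f"~{events_in_16gb:.0f} events fit in 16 GB")       # ~0.8 MB, ~20,000 events
```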
3.2.2 Reasons for Requirement

Because the experiments in this thesis use element parameters and firing sequences that are entirely different from the conventional (default) ones on the SONIXRP, the raw RF data must be receive-beamformed and processed in a different manner. This means that the beamformed RF data that is captured and stored by the SONIXRP is insufficient, and the SONIXDAQ module is therefore required for all of the experiments described in Chapter 4.

3.3 Field II

Field II is an ultrasound simulation program for use in Matlab, written by Jørgen Arendt Jensen, the first version of which was published in 1996 [17]. Field II runs simulations on a matrix representation of a phantom (an object specifically designed for use in medical imaging tests), which consists of a series of (x,y,z) coordinates of point reflectors, each with an accompanying reflection amplitude. Transducer specifications (including element width, kerf, depth, height, central frequency, impulse response, and number of elements) are given to model a physical transducer and its elements. Details of the simulation process can be found in the program's documentation [15], but a summary is given here. The software, built on a basis of linear systems theory, calculates the spatial impulse response for each given point in space as a function of time using the given transducer parameters and phantom matrix (this is based on research presented in [43], [37], and [36]). This spatial impulse response is initially calculated assuming a Dirac delta function is used as the excitation for the transducer elements. For other excitations, such as the coded excitations used in Chapter 5, the spatial impulse response from the Dirac excitation is simply convolved with the given excitation to find the new spatial impulse response. To calculate the final received RF, this new spatial impulse response is convolved with the impulse response of the receiving element and adjusted to compensate for the electro-mechanical transfer function of the transducer. Jensen cites proofs of this theory in [38] and [31]. Apodization of elements (edges vibrating differently than the centre) is emulated by dividing the elements into squares, for each of which the received RF is calculated independently. The sum of the responses for these sub-elements gives the response of the entire element. Further background on the theory behind Field II can be found in [16].

Chapter 4

Adaptive Compounding of Synthetic Aperture and Compounded Plane-Wave Imaging for Fast Ultrasonography

In this chapter, a novel technique is presented that combines two previously proposed fast-imaging techniques to achieve high frame rate ultrasonography with finer resolution and higher SNR than conventional imaging. The technique is tested on a physical ultrasound system to demonstrate its viability.
4.1 Background

Several imaging techniques have been proposed that provide a higher physical frame rate limit by requiring fewer transmits per image frame. As described in Section 2.2.1, synthetic aperture (SA) imaging [18] synthesizes an aperture by using a single element, moving it over the region of interest, and combining the resultant datasets. Although the ability to dynamically focus in transmit provides a large boost in resolution, SA suffers from an inherently low SNR due to the low number of element firings per frame, and therefore the lower amount of acoustic energy entering the tissue, compared to conventional focused transmit beamforming.

Another method for obtaining high frame rates is that of plane wave (PW) imaging [32, 33], where all transducer elements are fired together to create a quasi-planar wave. As described in Section 2.2.2, due to the large number of elements firing per transmit this method offers a high SNR, but the lack of a transmit focus results in significantly lower resolution than conventional transmit beamforming techniques offer. Nevertheless, the exceptionally high frame rates (7500+ FPS for a 10 cm deep image) make it especially useful for dynamic motion tracking [1].

There is a need for combining the benefits of synthetic aperture and plane-wave imaging without compromising frame rate. It is anticipated that adaptively compounding the results of the two methods can maximize the resolution contributed by SA and the SNR contributed by PW without incorporating the negative side-effects of each.

4.2 Previous Research

4.2.1 Synthetic Aperture Imaging

Several methods have been proposed for overcoming the issue of low SNR with SA imaging. In [19], several elements of a transducer phased array (as opposed to the standard single element) are pulsed with "defocusing" time delays to simulate a virtual point source originating behind the transducer surface while increasing the signal power. In [40], a technique is described that improves SNR by pulsing each element at a different frequency. Another method under extensive investigation is transmit coding [3, 4, 6, 7, 10, 25, 29], where excitation pulses are encoded or modulated to allow for longer pulses (greater signal energy) without loss in resolution.

4.2.2 Plane Wave Imaging

One of the most successful attempts to improve the resolution of PW imaging was by Montaldo et al. [26], who described a system, called compounded plane wave (CPW) imaging, for coherently compounding received signals from a number of plane waves transmitted at various angles. In doing so, they achieved a resolution comparable to that of conventional transmit beamforming but still below that of SA imaging, while maintaining an exceptionally high SNR.

4.3 Experimental Design

Our proposed method is based on combining the strengths of SA and CPW imaging while minimizing the drawbacks. In doing so, better resolution and contrast are achieved without limiting the system to a lower frame rate than either method would provide individually. Frame rate is defined by

    Frame Rate = \frac{c}{2md},    (4.1)

where d is the depth of the region to be imaged, m is the number of transmits per frame, and c is the speed of sound in the imaged tissue. In this thesis, the proposed 128-transmit adaptively compounded (AC) method is compared against 128-transmit SA-only and CPW-only methods, as well as against a 256-transmit (two focal depths per element) conventional DAS beamformed transmit method. The final compounded image would be expected to improve lateral and axial resolution compared to CPW and increase SNR compared to SA imaging. Both resolution and SNR would be expected to be better than the conventional DAS technique at two focal depths, but with twice the frame rate.
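As a worked instance of Equation 4.1 for the transmit budgets compared here, assuming c = 1540 m/s and the 6 cm imaging depth used in Section 4.3.3:

    \text{Frame Rate} = \frac{c}{2md}: \quad \frac{1540}{2 \cdot 256 \cdot 0.06} \approx 50~\text{FPS (DAS, } m = 256\text{)}, \qquad \frac{1540}{2 \cdot 128 \cdot 0.06} \approx 100~\text{FPS (AC, SA, CPW, } m = 128\text{)}.

Halving the number of transmits per frame therefore doubles the achievable frame rate, independent of the imaging depth.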
4.3.1 Image Generation

To generate a compounded image, sparse images are first obtained from the CPW and SA methods individually. The CPW image is formed by compounding the data of 64 PW transmits at varying steering angles, evenly distributed over a 33° range. The received signals are delay-and-sum beamformed, with the transmit-scatterer(x,z)-receiver delay defined as

    \tau(\theta, x_r, x, z) = \frac{z\cos\theta + x\sin\theta}{c} + \frac{\sqrt{z^2 + (x - x_r)^2}}{c},    (4.2)

where θ is the plane wave steering angle and x_r is the lateral position of the receiving element. This is the sum of the wave-to-scatterer travel time and the scatterer-to-receiver travel time, as can be seen in Figure 4.1a.

Figure 4.1: (a) Time delays for a plane wave of angle θ to a point (x,z) and received by element at position xr. (b) Time delays for an SA transmit from element at position xt to a scatterer at (x,z) and received by element at position xr.

The SA image is then formed by compounding the data of 64 virtual-point-source transmits from evenly spaced transmit points across the transducer. This step uses the multi-element aperture method suggested in [19], where a defocused time delay curve simulates a point source behind the transducer. The time delay before firing of an element n in the aperture is defined as

    \tau_n = \frac{x_n^2}{2 z_d c},    (4.3)

where x_n is the distance of the nth element from the aperture centre, z_d is the distance of the defocal point behind the aperture centre, and c is the speed of sound in the imaged tissue. As also suggested in [19], z_d is chosen to be d K_t / 2, where d is the inter-element spacing and K_t is the aperture width (selected as 10 for this comparison). The received signals are then DAS receive-beamformed, with the transmit-scatterer(x,z)-receiver delay defined as

    \tau(x_t, x_r, z_d, x, z) = \frac{\sqrt{(z + z_d)^2 + (x - x_t)^2}}{c} + \frac{\sqrt{z^2 + (x - x_r)^2}}{c},    (4.4)

where x_t is the lateral position of the virtual point source. This is the sum of the transmitter-to-scatterer travel time and the scatterer-to-receiver travel time, as can be seen in Figure 4.1b.

The data from each of these images then undergoes standard post-processing, including a bandpass filter centered around the transmit frequency to remove background noise, and a Hilbert transform for envelope detection.

4.3.2 Adaptive Image Data Compounding

Once the sparse CPW and SA images have been obtained, they are compounded into a single high-quality frame. The objective is to retain the high SNR of the CPW image, while also retaining the high resolution of the SA image. This is done using an adaptive weighted averaging technique, where the CPW image is given preference in low-reflectivity regions (where SNR and contrast are most crucial) and the SA image is given preference in high-reflectivity regions (where sharply defined, highly reflective edges give strong echoes that are easily captured by SA). Each point v(x,z) in the final image is generated according to the equation

    v(x,z) = \frac{CPW(x,z)}{CPW_{max}} SA(x,z) + \left(1 - \frac{CPW(x,z)}{CPW_{max}}\right) CPW(x,z),    (4.5)

where CPW_{max} is the highest intensity point in the CPW image.
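A minimal sketch of the compounding rule of Equation 4.5 is shown below. It assumes the CPW and SA inputs are co-registered, envelope-detected images of identical shape; the placeholder arrays and function name are illustrative only.

```python
import numpy as np

def adaptive_compound(cpw, sa):
    """Equation 4.5: weight SA more heavily where the CPW image is bright
    (strong reflectors) and CPW more heavily in dark, low-reflectivity regions."""
    w = cpw / cpw.max()                    # per-pixel weight in [0, 1]
    return w * sa + (1.0 - w) * cpw

cpw = np.abs(np.random.randn(512, 128))    # placeholder CPW envelope image
sa = np.abs(np.random.randn(512, 128))     # placeholder SA envelope image
compounded = adaptive_compound(cpw, sa)
```

Because the weight is taken from the CPW image, a single bright outlier in CPW can pull the weighting toward SA for that pixel, which is the sensitivity to outliers noted in Section 4.4.3.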
4.3.3 Physical Implementation

This system was implemented on a SONIXRP with a SONIXDAQ for capturing full-frame raw RF signals. The element transmit sequences were programmed using the Ultrasonix "Texo" software development kit for a 6 cm image at a frequency of 9.5 MHz, and the sequence was used on a CIRS General Purpose Multi-Tissue Ultrasound Phantom, Model 40 (CIRS, Norfolk, Virginia), as seen in Figure 4.2. Once RF data were captured for all transmit sequences, they were processed offline on a dual-core desktop workstation in a C++ application.

Figure 4.2: Structure of the CIRS General Purpose Multi-Tissue Ultrasound Phantom, Model 40.

4.4 Results and Discussion

The resulting images of the CPW, SA, AC, and conventional DAS (with focus depths at 2 cm and 4 cm) techniques, as seen in Figure 4.3, were compared according to their resolution in the near, mid, and far field, and their SNR in the occlusion regions of the phantom.

Figure 4.3: (a) B-mode conventional DAS transmit beamforming with 2 focal depths, 128 transmits each (256 total). (b) 128-transmit CPW with 128-element aperture. (c) 128-transmit SA with 10-element aperture. (d) Adaptively compounded 64-transmit CPW and 64-transmit SA.

4.4.1 Resolution

The lateral and axial resolution of the techniques was measured as the full width at half maximum of the point spread response from the wire targets. Since the phantom background exhibited significant and variable speckle, the point spread function was defined as the width at half maximum of the values in the region of interest minus the average value in the surrounding area. The function was applied in the lateral and axial directions for each of 16 distinguishable point scatterers fully contained within the frame. These points were then divided into near, mid, and far fields, as seen in Figure 4.2. Tables 4.1 and 4.2 show the mean results for each field in the lateral and axial directions, respectively. As expected, the resolution of the compounded technique fell between those of the CPW and SA techniques, with an overall 14.5% gain over CPW, 12.8% gain over conventional, and 17% loss from SA in lateral resolution, and an overall 7.5% gain over CPW, 2.7% gain over conventional, and 14.7% loss from SA in axial resolution.

Table 4.1: Lateral Resolution (mm)

              DAS    CPW    SA     AC
  Near Field  1.32   1.32   1.01   1.16
  Mid Field   1.73   1.70   1.40   1.43
  Far Field   2.02   2.16   1.37   1.83
  Mean        1.69   1.72   1.26   1.47

Table 4.2: Axial Resolution (mm)

              DAS    CPW    SA     AC
  Near Field  0.61   0.64   0.53   0.59
  Mid Field   0.69   0.70   0.628  0.69
  Far Field   0.72   0.78   0.56   0.69
  Mean        0.67   0.71   0.57   0.66

4.4.2 Signal-to-Noise Ratio

The SNR of each technique was measured by comparing the values inside an occlusion to the values outside the occlusion, according to the equation

    SNR(dB) = 10 \log_{10} \frac{|\mu_{in} - \mu_{out}|}{\sqrt{\sigma_{in} \sigma_{out}}},    (4.6)

where \mu_{in} and \sigma_{in} are the mean value and standard deviation inside the occlusion, and \mu_{out} and \sigma_{out} are the mean value and standard deviation in the area surrounding the occlusion. This function was applied to each of the five occlusions in the imaged region (three high reflectivity, two low reflectivity). Table 4.3 shows the results. As expected, the occlusion SNR of the compounded technique fell between those of the CPW and SA techniques, with an overall 14.54 dB gain over SA, 4.85 dB gain over conventional, and 2.01 dB loss from CPW.

Table 4.3: Occlusion Imaging SNR (dB)

                DAS     CPW     SA      AC
  Occlusion 1    3.45    4.79  -33.71    3.40
  Occlusion 2    2.76    1.62    1.31   -0.04
  Occlusion 3   -8.35    5.25  -21.98    4.44
  Occlusion 4   -0.78    2.28   -0.44    1.16
  Occlusion 5  -17.20    0.29  -13.71   -4.80
  Mean          -4.02    2.84  -13.71    0.83
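The occlusion SNR measure of Equation 4.6 reduces to a few lines of array arithmetic; in the sketch below the region masks are hypothetical stand-ins for the manually outlined occlusion and surrounding background regions.

```python
import numpy as np

def occlusion_snr_db(image, inside_mask, outside_mask):
    """Equation 4.6: contrast of an occlusion against its surroundings, in dB."""
    mu_in, sd_in = image[inside_mask].mean(), image[inside_mask].std()
    mu_out, sd_out = image[outside_mask].mean(), image[outside_mask].std()
    return 10.0 * np.log10(np.abs(mu_in - mu_out) / np.sqrt(sd_in * sd_out))
```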
4.4.3 Statistical Significance

The resolution and SNR results were compared using a one-tailed Mann-Whitney U test to determine their significance. Table 4.4 shows the p-values for the compounded method's improvements over CPW in resolution, over SA in SNR, and over conventional DAS in both (p-values of decreases in quality are shown in parentheses). As the values show, the improvement in lateral resolution over CPW and DAS was statistically significant (p < 0.05), as was the improvement in SNR over SA. The losses in lateral resolution and SNR compared to SA and CPW, respectively, were not statistically significant. This demonstrates that the improvements of the compounded method significantly outweigh the losses. Results could possibly be improved further by implementing a more advanced weighting algorithm (for Equation 4.5) that is more resilient to the effects of outliers in the CPW image.

Table 4.4: Significance p-values: comparison of resolution and SNR of the adaptively compounded (AC) method vs. others
              DAS       CPW        SA
AC Lat. Res.  0.0202    0.0269     (0.1198)
AC Ax. Res.   0.3385    0.1216     (0.0051)
AC SNR        0.2103    (0.1548)   0.0476

4.5 Summary

The proposed technique has been implemented and validated using an Ultrasonix Medical Corporation SONIXRP and SONIXDAQ for testing on a tissue-mimicking gel phantom, showing a 14.5% gain in lateral resolution over CPW, a 7.5% gain in axial resolution over CPW, and a 14.5 dB gain in SNR over SA. Each of DAS, CPW, and SA performed poorly in one or more tests. CPW performed worst in lateral and axial resolution, and SA performed worst in occlusion SNR. Conventional DAS performed second worst in all three tests. Only the adaptively compounded method performed well in all three tests, while still maintaining a frame rate at least double that of the tested conventional imaging technique. This method improves the general applicability of high frame rate imaging, as it is the only one of the three tested fast-imaging techniques that outperforms conventional DAS imaging in all aspects.

Chapter 5
Compressed Sensing and Partial Decoding for Receive-Side Separation of Multiple Simultaneous Transmit Events

This chapter describes a method for high frame rate ultrasonography through spatially coded excitation. The work is presented as two novel extensions of work completed by Fredrik Gran and Jørgen Arendt Jensen.

5.1 Background

As detailed in Section 2.2.1, SA images are formed by conducting a series of single-element transmits, usually one for each element in the transducer. These pulses are run sequentially in order to avoid cross-talk and interference between separate transmit events. If pulses from separate elements were transmitted simultaneously, ambiguity would be introduced in the receive beamforming step (Section 2.1.5) as to where a given RF echo originated in the tissue. In an SA transmit pulse from an element at xt, the signal received by an element at xr at a given time τ could have originated from a set of points in the tissue, i.e., any coordinate (x,z) where the following holds true:

\sqrt{z^2 + (x - x_t)^2} + \sqrt{z^2 + (x - x_r)^2} = c\tau.    (5.1)

The effect of this can be seen for two different transmit elements at xt1 and xt2 in Figure 5.1a and Figure 5.1b; the RF value at time τ could have originated from any point along an arc. This is compensated for in receive beamforming, where the received signals from multiple elements are adjusted and combined to reduce the arc to a single point. When two elements (at xt1 and xt2, respectively) transmit simultaneously, however, the ambiguity is greatly amplified. The RF value received by an element at xr at a given time τ could have originated from twice as many possible tissue points, i.e., from any point (x,z) where the following holds true:

\left(\sqrt{z^2 + (x - x_{t1})^2} + \sqrt{z^2 + (x - x_r)^2} = c\tau\right) \vee \left(\sqrt{z^2 + (x - x_{t2})^2} + \sqrt{z^2 + (x - x_r)^2} = c\tau\right).    (5.2)

This will cause two arcs of possible reflection points (Figure 5.1c), which can no longer be compensated for with receive beamforming. Therefore, conventional ultrasound imaging has continued to use only a single element transmit at a time.
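The two-arc ambiguity of Equations 5.1 and 5.2 can be visualised with a few lines of code: for a single received sample time, every grid point whose round-trip path length matches cτ is a candidate source, and with two simultaneous transmitters the candidate set is the union of two such arcs. The parameter values below are purely illustrative assumptions.

```matlab
% Illustrative sketch of the ambiguity in Equations 5.1 and 5.2.
% For one receive time tau, mark every (x, z) whose total
% transmit-to-scatterer-to-receiver path length equals c*tau.
c   = 1540;               % speed of sound (m/s), assumed
tau = 40e-6;              % example receive time (s)
xt1 = -5e-3; xt2 = 5e-3;  % two transmit element positions (m)
xr  = 0;                  % receive element position (m)
tol = 0.1e-3;             % arc half-width in path length (m)

[x, z] = meshgrid(linspace(-20e-3, 20e-3, 400), linspace(0, 40e-3, 400));
path1 = sqrt(z.^2 + (x - xt1).^2) + sqrt(z.^2 + (x - xr).^2);
path2 = sqrt(z.^2 + (x - xt2).^2) + sqrt(z.^2 + (x - xr).^2);

arc1 = abs(path1 - c*tau) < tol;     % candidate sources, Equation 5.1
arc2 = abs(path2 - c*tau) < tol;
both = double(arc1 | arc2);          % two arcs when both fire, Equation 5.2

imagesc(x(1,:)*1e3, z(:,1)*1e3, both); axis image;
xlabel('lateral position (mm)'); ylabel('depth (mm)');
```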
If a method could be devised, however, that could reliably transmit with K elements simultaneously and isolate their signals in the received RF, the frame rate of that technique could be increased K-fold over the same technique without multiple simultaneous transmits. One technique under research for solving this problem is known as "spatial encoding", where the element excitation pulse sequences consist of a code, with a unique code for each element transmitting simultaneously. This chapter details the previous research in this area and the author's proposed methods and results.

Figure 5.1: (a) Possible echo sources with transmission from xt1 received by xr. (b) Possible echo sources with transmission from xt2 received by xr. (c) Possible echo sources with simultaneous transmission from xt1 and xt2 received by xr.

5.2 Previous Research

Spatial encoding was originally proposed to combat the inherently low SNR of SA imaging by transmitting with multiple elements simultaneously to increase the total acoustic energy entering the tissue. In [4] and [25] the authors suggest a Hadamard encoding scheme, where the excitation pulse waveform on each simultaneously transmitting element is multiplied by a column of a Hadamard encoding matrix. However, this design has the requirement that as many transmit events must be fired as there are simultaneous transmitters before the received RF can be decoded; e.g., if 4 elements are set to fire simultaneously, they must do so 4 times before decoding. While this accomplishes the task of increasing the total acoustic energy entering the tissue in the same amount of time without losing resolution, it does not offer any advantage in higher frame rates. In [3], Chiao and Thomas suggest using orthogonal Golay codes in place of Hadamard codes, but this method faces the same drawback of requiring as many transmissions as active simultaneously transmitting elements.

An alternate approach to encoding was proposed by Gran and Jensen in [7], who expanded on the topic in [10]. In a typical ultrasound transducer, elements have an available frequency range in which they can transmit (e.g., the Ultrasonix L14-5 linear probe has a frequency range of 5 - 14 MHz [44]). Gran and Jensen separated this available bandwidth into disjoint subbands, each of which was assigned to one element of a simultaneously transmitting set. Received RF signals were isolated using simple frequency filters, determining which element each signal originated from. To gather information from the full possible frequency bandwidth of a given element, however, multiple transmits at different subbands had to be conducted to synthesize the full spectrum. Therefore, this approach also did not result in a net frame rate increase.

In 2008, Gran and Jensen proposed a spatial encoding technique with the main purpose of reducing the total number of transmission events required to form a full ultrasound image, achieved by devising an encoding technique that allows for decoding from a single transmission [9]. This paper was based on their previous research ([8, 11]), where pseudo-random encoding sequences are proposed.
Section 5.3 presents a summary of the technique, as it is the essential basis for the original work presented later in this chapter. Greater detail can be found in [9].

5.3 Spatial Encoding with Code Division

This section is an overview of the key points of Gran and Jensen's paper [9], where full details can be found.

A single ultrasound transmit event with K simultaneous transmitting elements and Q receiving elements can be modeled, assuming that the system is fully linear, as

y_{kq}(t) = \left\{ \sum_{p=1}^{P} s_p(\vec{r}_p)\, h_e(\vec{r}_k, \vec{r}_p, t) * h_r(\vec{r}_p, \vec{r}_q, t) \right\} * x_k(t),    (5.3)

where P is the number of scatterers in the medium (possibly infinite in tissue) and s_p(\vec{r}_p) is the strength of the pth scatterer. h_e(\vec{r}_k, \vec{r}_p, t) is the spatial impulse response from the kth transmitting element to the pth scatterer, * denotes convolution in the time domain, and h_r(\vec{r}_p, \vec{r}_q, t) is the spatial impulse response from the pth scatterer to the qth receiving element. \vec{r}_p is the position of the pth scatterer, \vec{r}_k is the position of the kth transmitting element, and \vec{r}_q is the position of the qth receiving element. x_k(t) is the code sequence transmitted by the kth transmitting element. Because the spatial impulse response represents the entire transformation of the acoustic wave, including attenuation and the electromechanical transfer function of the transducer elements, a scattering function for an acoustic wave from the kth transmitting element to the qth receiving element can be written as

h_{kq}(t) = \sum_{p=1}^{P} s_p(\vec{r}_p)\, h_e(\vec{r}_k, \vec{r}_p, t) * h_r(\vec{r}_p, \vec{r}_q, t).    (5.4)

For the total received signal at the qth receiving element, Equation 5.3 can be combined into

y_q(t) = \sum_{k=1}^{K} h_{kq}(t) * x_k(t).    (5.5)

After passing through the ADC, and considering v_q(n) to be the digitized noise process at the qth receiving element (assumed to be Gaussian with zero mean), Equation 5.5 becomes

y_q(n) = \sum_{k=1}^{K} h_{kq}(n) * x_k(n) + v_q(n),    (5.6)

where h_{kq}(n) and x_k(n) (with N samples) are the digitized versions of the scattering function h_{kq}(t) and code sequence x_k(t), respectively. Because the acoustic wave will decay as it travels through the medium, it can be modeled as a finite impulse response (FIR) process, where the transfer function of the digitized version of the scattering function in Equation 5.4 can be written as

H_{kq}(z^{-1}) = \sum_{m=0}^{M-1} h_{kq}(m)\, z^{-m},    (5.7)

where M is the length of the impulse responses and z^{-1} is the unit backward-shift operator. Therefore, the output at the qth receiving element becomes

y_q(n) = \sum_{k=1}^{K} \sum_{m=0}^{M-1} h_{kq}(m)\, x_k(n-m) + v_q(n).    (5.8)

y_q(n) and h_{kq}(n) are now written as column vectors

y_q = \begin{pmatrix} y_q(0) & y_q(1) & \cdots & y_q(N+M-2) \end{pmatrix}^T    (5.9)

h_{kq} = \begin{pmatrix} h_{kq}(0) & h_{kq}(1) & \cdots & h_{kq}(M-1) \end{pmatrix}^T    (5.10)

so the convolution between the waveform transmitted by the kth element and the scattering function can be written as

y_q = \sum_{k=1}^{K} X_k h_{kq} + v_q,    (5.11)

where

X_k = \begin{pmatrix}
x_k(0)   & 0        & \cdots & 0      \\
x_k(1)   & x_k(0)   & \ddots & \vdots \\
\vdots   & \ddots   & \ddots & \vdots \\
x_k(N-1) & \ddots   & \ddots & x_k(0) \\
0        & x_k(N-1) & \ddots & x_k(1) \\
\vdots   & \ddots   & \ddots & \vdots \\
0        & 0        & \cdots & x_k(N-1)
\end{pmatrix}    (5.12)

with dimensions (M+N) × M, and v_q is a zero-mean noise process with a Gaussian probability distribution and autocovariance matrix

E[v_q v_q^T] = Q_v.    (5.13)

The signal matrices X_k of the K transmitting elements can be grouped to write a more compact version of Equation 5.11:

y_q = \underbrace{\begin{pmatrix} X_1 & X_2 & \cdots & X_K \end{pmatrix}}_{X} \underbrace{\begin{pmatrix} h_{1q} \\ h_{2q} \\ \vdots \\ h_{Kq} \end{pmatrix}}_{h_q} + v_q.    (5.14)

To find the most likely estimate of the scattering function vector h_q, which can be used to generate the final image, the probability distribution of receiving output y_q given a set of scattering functions must first be calculated. This distribution is given by

p_{y_q|h_q}(y_q \mid h_q) = \frac{1}{\sqrt{(2\pi)^{N+M-1} \det(Q_v)}} \exp\left( -\frac{1}{2} (y_q - X h_q)^T Q_v^{-1} (y_q - X h_q) \right),    (5.15)
as the only stochastic part of the scattering function is the noise, which is assumed to be Gaussian distributed. The scattering functions h_q that maximize Equation 5.15 must now be found, such that

\hat{h}_q = \arg\max_{h_q} p_{y_q|h_q}(y_q \mid h_q).    (5.16)

The solution to this optimization problem is taken from [21]:

(X^T Q_v^{-1} X)\, \hat{h}_q = X^T Q_v^{-1} y_q,    (5.17)

which, when considering that the noise process is white with variance σ_v^2 (i.e., Q_v = σ_v^2 I), becomes

(X^T X)\, \hat{h}_q = X^T y_q.    (5.18)

Assuming that X is full-rank, i.e., we select a code x_k(n) of length N such that

N \geq (K-1)M + 1,    (5.19)

then the maximum likelihood estimate of h_q can be found:

\hat{h}_q = (X^T X)^{-1} X^T y_q.    (5.20)

Because

h_q = \begin{pmatrix} h_{1q} & h_{2q} & \cdots & h_{Kq} \end{pmatrix}^T,    (5.21)

as seen in Equation 5.14, \hat{h}_{kq} can be isolated for every value of k. Since \hat{h}_{kq} represents an estimate of the signal that would have been received by the qth receiving element had the kth element transmitted alone and without a code, it can now be used for receive beamforming and image generation. Beamforming is conducted with the same SA method described in Section 2.1.5, but with a different value for τ (defined in Equation 5.28).
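A compact way to read Equations 5.12 through 5.20 is as a block least-squares problem. The sketch below builds the convolution matrices X_k for two codes and solves Equation 5.18 under the white-noise assumption; the helper and variable names are illustrative and are not taken from Gran and Jensen's implementation.

```matlab
% Minimal sketch of the code-division decode of Section 5.3 for K = 2
% simultaneous transmitters, assuming white noise so Equation 5.18
% applies. x1 and x2 are the length-N codes, y_q is the RF recorded
% by one receiving element (a column vector of length N + M - 1),
% and M is the impulse-response length. Names are illustrative only.
function [h1, h2] = gran_decode(x1, x2, y_q, M)
    X  = [conv_matrix(x1, M), conv_matrix(x2, M)];  % stacked matrix X of Eq. 5.14
    hq = (X' * X) \ (X' * y_q);                     % Eq. 5.20; requires X full rank
    h1 = hq(1:M);                                   % estimate for transmitter 1
    h2 = hq(M+1:end);                               % estimate for transmitter 2
end

function Xk = conv_matrix(xk, M)
    % Convolution (Toeplitz) matrix of Eq. 5.12, so that Xk*h == conv(xk, h).
    xk = xk(:);
    Xk = toeplitz([xk; zeros(M-1, 1)], [xk(1), zeros(1, M-1)]);
end
```

When Equation 5.19 is violated, X'*X becomes singular and this direct solve fails, which is the motivation for the compressed sensing formulation introduced in Section 5.4.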
Gran and Jensen's work in this field, while achieving acceptable results in resolution and SNR, carries a significant drawback. For the received RF to be decodable, i.e., for the matrix X to be full-rank, Equation 5.19 must be observed. This means that for a region of interest M samples deep, a code of length N = (K-1)M + 1 samples must be transmitted into the tissue. Commercially available ultrasound imaging systems are limited by the half-duplex constraint, meaning that no elements can receive RF signals while any other element is transmitting. For a transmit sequence of N samples, then, the first N samples that would have been received by the receiving elements are lost. Therefore, the first N samples of the medium cannot be imaged. Furthermore, that region must also be completely void of scatterers so as not to introduce artifacts into the received RF vector y_q. For example, a medium to be imaged 5 cm deep with two simultaneously active transmitting elements must have a 5 cm empty area between it and the transducer. Although this fact is not specifically stated in Gran and Jensen's paper [9], the effect can be seen in Figure 4 of that paper, where an imaging region of 2 cm begins at 2.5 cm, so there is 2.5 cm of void space above the imaging region. In this chapter, this void space region is referred to as the "deadzone". To utilize Gran and Jensen's technique on a medium that does not have a natural deadzone like their test medium does, a device known as a standoff pad could be used (Figure 5.2). A standoff pad is an acoustically transparent device that sits between the transducer and the medium surface, thereby creating an empty area above the image. As ultrasound image quality decreases with depth, however, the requirement for a standoff pad that adds depth leads to images of poorer overall quality than those acquired without standoffs.

Figure 5.2: Reusable acoustic standoff pads. Image courtesy of CIVCO Medical Solutions.

5.4 Compressed Sensing

The original work presented in this chapter was completed as an extension of the spatial encoding method described in Section 5.3. To reduce and possibly remove the restriction of a deadzone, an alternate decoding scheme is proposed.

Many diagnostic and surgical procedures that require ultrasound imaging, such as intra-spinal needle guidance, generate RF data with a small set of very bright points in an otherwise dark region. In the intra-spinal needle guidance example, the needle and spine outline would generate extremely powerful reflections in comparison to the surrounding soft tissue (the acoustic wave does not effectively penetrate bone, so only the spine outline would show). In such a scenario it can be assumed that the received RF is relatively sparse, where the soft tissue reflections are near-zero in comparison to the bone and needle reflections.

Because of the sparsity of the received RF vector, an alternative decoding method known as "compressed sensing" can be used. Compressed sensing, as a general signal processing term, refers to the reconstruction of a signal from an underdetermined linear system by taking advantage of sparsity or compressibility in the system. This is done by formulating the decoding as a convex optimization problem, which can then be solved using a number of open-source or commercial programming toolboxes. The capability to solve an underdetermined linear system means that the code matrix X in Equation 5.20 no longer needs to be full-rank to estimate a solution, and therefore Equation 5.19 no longer needs to hold true. Relaxing that restriction on the code length N reduces the required depth of the deadzone, thereby removing the requirement for a standoff pad and bringing the medium closer to the transducer surface for an improvement in quality. In the following sections, experiments and results are described where the "compression ratio", i.e., the ratio of M imaged samples to N code samples for a 2-element simultaneous transmit, is increased from the value below 1 required by Equation 5.19 to as high as 5.

As a further extension, a technique titled "partial decoding" is presented in Section 5.5.2, where the half-duplex constraint is entirely removed by expanding the dimensions of the code matrix X to estimate the medium's spatial impulse response h_kq from the tail end of the code that is received immediately after transmission ceases.

5.5 Experimental Design

An initial experiment was designed to replicate the results of the paper published by Gran and Jensen [9], described in Section 5.3. A Field II (Section 3.3) simulation was created using the same parameters for the transducer and ultrasound system: a 7 MHz linear array with 0.208 mm pitch, 0.035 mm kerf, 4.5 mm element height, 128 elements divided into groups of two for transmit with single elements on receive, and a 120 MHz ADC sampling rate. The element excitation pulse, however, was set as a single Dirac impulse instead of a sinusoid; this was changed to more accurately emulate the capabilities of a commercial ultrasound machine, which cannot transmit with variable power within a given excitation code.
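For reference, a minimal Field II setup along these lines is sketched below. The transducer parameters come from the description above; the element width (pitch minus kerf), the two-cycle impulse response, the sub-element counts, and the choice of test scatterer are illustrative assumptions rather than the exact settings used in the thesis simulations.

```matlab
% Minimal Field II sketch of the simulated system in Section 5.5:
% 7 MHz linear array, 0.208 mm pitch, 0.035 mm kerf, 4.5 mm element
% height, 128 elements, 120 MHz sampling. Details not stated in the
% text (element width, impulse response, sub-element counts, the test
% scatterer) are assumptions for illustration.
field_init(0);
fs = 120e6;  f0 = 7e6;  c = 1540;
set_field('fs', fs);
set_field('c', c);

pitch = 0.208e-3;  kerf = 0.035e-3;  height = 4.5e-3;
width = pitch - kerf;                 % assumed element width
n_elem = 128;
focus  = [0 0 1000];                  % effectively unfocused

tx = xdc_linear_array(n_elem, width, height, kerf, 1, 5, focus);
rx = xdc_linear_array(n_elem, width, height, kerf, 1, 5, focus);

two_cycles = sin(2*pi*f0*(0:1/fs:2/f0));
xdc_impulse(tx, two_cycles);          % assumed element impulse response
xdc_impulse(rx, two_cycles);
xdc_excitation(tx, 1);                % single Dirac-like excitation sample

% Transmit with one two-element group (elements 1 and 2), as in the
% grouped-transmit scheme described above.
apod = zeros(1, n_elem);  apod(1:2) = 1;
xdc_apodization(tx, 0, apod);

% RF received by all 128 elements from one point scatterer at 30 mm depth.
[rf, t_start] = calc_scat_multi(tx, rx, [0 0 30e-3], 1);
```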
40 sets of simulations were run with varying parameters. 20 sets were run on each of two phantoms: the first with six point scatterers vertically spaced 5 mm apart (the "point phantom", similar to that used for simulations in [9] but with 2 extra point scatterers) and the second with six occlusions (three high-reflectivity and three low-reflectivity) set in a homogeneous tissue below skin and fat layers (the "tissue phantom"), represented with 200,000 individual point scatterers. On each phantom, simulation sets were run for every combination of compression ratios of 0.9, 1, 3, and 5 and standoff pads of thickness 2 mm, 5 mm, 15 mm, 30 mm, and 50 mm. Each set consisted of the following:

1. "reference" simulation, where all transmit groups fired individually and without a coded excitation, to simulate what conventional non-coded SA imaging would produce.
2. "Gran decoded" simulation, where two transmit groups fired simultaneously and the received RF was decoded using the method in Section 5.3.
3. "compressed decoded" simulation, where two transmit groups fired simultaneously and the received RF was decoded using a compressed sensing algorithm.
4. "compressed partial decoded" simulation, with the same transmit sequence and decoding as "compressed decoded" but with the code matrix X extended to estimate impulse responses from scatterers in the deadzone by using only the tail of the received echo.

For the simulations that used coding (items 2, 3, and 4), the same 18-bit codes as in [9] were used:

x_1 = (1 -1 -1 -1 -1 -1 -1 -1 1 1 -1 1 1 1 -1 -1 -1 1)    (5.22)

x_2 = (1 1 -1 -1 -1 -1 1 -1 1 1 1 1 1 1 -1 1 1 1),    (5.23)

which were then oversampled to create a sparse code of length N. In [9] this oversampled code was then convolved with a sinusoidal excitation wave, but as the excitation in this experiment was set as a Dirac impulse to emulate the capabilities of a physical ultrasound machine, the convolution resulted in no change.

Simulations were run on a computer with dual Intel X5650 processors at 2.67 GHz and 24 GB of random access memory, running Windows 7 and using Matlab R2012b. Each simulation returned a set of received RF vectors h_q for every element q for each of the transmit groups in the reference simulation (item 1, independent transmits, no coding), and a set of received RF vectors y_q for every receiving element q for each of the transmit group pairs in the coded simulations (items 2, 3, and 4, two groups transmitting simultaneously). To find the vectors \hat{h}_q for item 2 (Gran decoded), the vectors y_q were decoded using the process described in Section 5.3.

5.5.1 Compressed Sensing

To find the vectors \hat{h}_q for item 3 (compressed decoded), y_q was decoded by formulating the system as a convex optimization problem and solving it using the SDPT3 solver [41] in the CVX modeling system for Matlab [12, 13]. In theory, the objective of this problem would be to find the value of h_q such that

\hat{h}_q = \arg\min_{h_q} \|h_q\|_0 \quad \text{s.t.} \quad y_q = X h_q.    (5.24)

In the presence of noise, this is revised to be

\hat{h}_q = \arg\min_{h_q} \|h_q\|_0 \quad \text{s.t.} \quad \|y_q - X h_q\|_2 < \epsilon,    (5.25)

where ε is a user-definable error value dependent on the expected noise level of the system [42]. Using the 0-norm, however, this problem is NP-hard. To reduce it to a quadratic problem, we relax it using the 1-norm as an approximation. The problem then becomes

\hat{h}_q = \arg\min_{h_q} \|h_q\|_1 \quad \text{s.t.} \quad \|y_q - X h_q\|_2 < \epsilon,    (5.26)

which is then solved using the CVX library. In these experiments, the value of ε was determined by imaging the tissue phantom with conventional SA imaging (all transmit events independent) and taking the average response value within one of the occlusions that, in theory, should contain entirely zero values.
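A minimal CVX formulation of Equation 5.26 is sketched below. The function and variable names are assumptions for illustration; X is the stacked code matrix of Equation 5.14, y_q the RF recorded by one element, and epsilon the noise tolerance chosen as described above. The CVX toolbox [12, 13] with the SDPT3 solver [41] is required.

```matlab
% Minimal sketch of the 1-norm relaxed decode in Equation 5.26 using CVX.
% X is the stacked code matrix (Eq. 5.14), y_q the received RF from
% one element, epsilon the noise tolerance. Names are illustrative.
function h_hat = cs_decode(X, y_q, epsilon)
    n = size(X, 2);
    cvx_begin quiet
        cvx_solver sdpt3
        variable h_hat(n)
        minimize( norm(h_hat, 1) )                 % sparsity-promoting objective
        subject to
            norm(y_q - X * h_hat, 2) <= epsilon    % data-fidelity constraint
    cvx_end
end
```

Because the 1-norm objective does not require X to have full column rank, the same call works whether the code matrix is the full-rank construction of Equation 5.12 or the underdetermined, extended matrix of Equation 5.27.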
5.5.2 Compressed Sensing with Partial Decoding

To find the vectors \hat{h}_q for item 4 (compressed partial decoded), the same process was used as in Section 5.5.1, but with an extended code matrix X. Instead of that presented in Equation 5.12, the code matrix was defined as

X_k = \begin{pmatrix}
x_k(N-1) & x_k(N-2) & \cdots & x_k(1)   & x_k(0)   & 0      & \cdots & 0        \\
0        & x_k(N-1) & \ddots & x_k(2)   & x_k(1)   & x_k(0) & \ddots & \vdots   \\
\vdots   & \ddots   & \ddots & \vdots   & \vdots   & \ddots & \ddots & 0        \\
\vdots   &          & \ddots & x_k(N-1) & \vdots   &        & \ddots & x_k(0)   \\
\vdots   &          &        & 0        & x_k(N-1) & \ddots &        & x_k(1)   \\
\vdots   &          &        & \vdots   & \ddots   & \ddots & \ddots & \vdots   \\
0        & 0        & \cdots & 0        & 0        & 0      & \cdots & x_k(N-1)
\end{pmatrix}    (5.27)

with dimensions (M+N) × (M+N-1). When decoding with this matrix, the resulting h_kq is of length M+N-1, as opposed to length M in the Section 5.3 method. The extra N-1 samples represent the spatial impulse responses from scatterers within the deadzone. Although the estimated values for these scatterers are less accurate than those outside of the deadzone (which can be solved using the full-length code instead of a partial one), they nevertheless entirely remove the requirement for a standoff pad.

5.5.3 Receive Beamforming

After the vectors \hat{h}_q have been determined, they are dynamically receive beamformed and combined to generate full frames. Beamforming is performed in the same way as described in Section 2.1.5, but with τ defined as

\tau(x_k, x_q, x, z) = \frac{\sqrt{z^2 + (x - x_k)^2}}{c} + \frac{\sqrt{z^2 + (x - x_q)^2}}{c}    (5.28)

for a scatterer at (x,z), where x_k is the lateral position of the transmitting element and x_q is the lateral position of the receiving element.

After the vectors \hat{h}_q have been beamformed independently for each separate transmit group, the resulting beamformed sets are summed to create the final frame.

5.5.4 Image Generation

After full frames were compiled through receive beamforming, the RF data was converted into a human-viewable format. This was done through the following steps:

1. Envelope detection: the envelope of the receive-beamformed RF data was found by taking the absolute value of the Hilbert transform [30] of each column in the frame.
2. Logarithmic compression: to display greater detail in regions of low-power reflections, the image is logarithmically compressed with a 50 dB dynamic range.
3. Interpolation: as the frame has much higher resolution vertically (thousands of samples) than horizontally (128 elements), the image is interpolated in the horizontal direction by a factor of 10.
4. Scaling: the final data is scaled to provide an image that is as wide as the transducer and as deep as the region that was imaged.

5.6 Results

The resulting images from each of the 20 simulation sets on the point phantom, shown in Appendix A, were compared according to their resolution. The images from each of the 20 simulation sets on the tissue phantom, shown in Appendix B, were compared according to their SNR at the occlusions.

5.6.1 Resolution

The lateral and axial resolution of each of the four techniques in Section 5.5, in each of the 20 simulation sets, was measured at the full width at half maximum of the point spread response from the point scatterers. This is defined as the distance between the points on either side of the scatterer that are half the value of the highest value for that scatterer. These resolutions are presented in Table 5.1 through Table 5.4. For point scatterers that were not visible, not detectable in the noise, or not fully defined, the resolution has been denoted as "N/A".

5.6.2 Signal-to-Noise Ratio

The SNR of each technique was measured by comparing the values inside an occlusion to the values outside the occlusion, according to the equation

SNR(dB) = 10\log_{10}\frac{|\mu_{in} - \mu_{out}|}{\sqrt{\sigma_{in}\sigma_{out}}},    (5.29)

where μin and σin are the mean value and standard deviation inside the occlusion, and μout and σout are the mean value and standard deviation in the area surrounding the occlusion. The "inner" region was defined as a 4 mm diameter circle centred on the occlusion's centre, while the "outer" region was defined as the area within an 8 mm wide square centred on the occlusion's centre (excluding the inner region). This function was applied to each of the six occlusions in the imaged region (three high reflectivity, three low reflectivity).
The results are presented in Table 5.5through Table 5.8.43Table 5.1: Lateral and Axial Width-Half-Max (mm) of Point Spread Response for Compression Ratio of 0.9StandoffPad(mm)ScattererDepth(mm)Reference Gran DecodedCompressedDecodedCompressedPartialDecodedLateral Axial Lateral Axial Lateral Axial Lateral Axial2(Figure A.1)2 1.226 0.430 N/A N/A N/A N/A N/A N/A7 0.997 0.423 N/A N/A N/A N/A N/A N/A12 0.893 0.423 N/A N/A N/A N/A 0.914 0.42317 0.893 0.417 N/A N/A N/A N/A 0.956 0.41722 0.893 0.423 2.223 1.617 N/A N/A 0.914 0.42327 0.872 0.423 2.389 0.648 N/A N/A 0.872 0.4235(Figure A.2)5 0.893 0.417 N/A N/A N/A N/A N/A N/A10 0.872 0.417 N/A N/A N/A N/A 0.872 0.41715 0.893 0.423 N/A N/A N/A N/A N/A N/A20 0.914 0.423 N/A N/A N/A N/A 0.914 0.42325 0.872 0.423 1.641 0.732 N/A N/A 0.872 0.42330 0.893 0.423 1.226 0.892 N/A N/A 0.893 0.42315(Figure A.3)15 0.893 0.423 N/A N/A N/A N/A 0.893 0.42320 0.914 0.423 N/A N/A N/A N/A 0.914 0.42325 0.872 0.423 0.914 0.423 N/A N/A 0.872 0.42330 0.893 0.423 0.935 0.449 N/A N/A 0.893 0.42335 0.893 0.430 2.555 0.475 N/A N/A 0.893 0.43040 0.893 0.423 0.893 0.622 N/A N/A 0.893 0.42330(Figure A.4)30 0.893 0.423 N/A N/A N/A N/A 0.893 0.42335 0.893 0.430 N/A N/A N/A N/A 0.893 0.43040 0.893 0.423 0.914 0.423 N/A N/A 0.893 0.42345 0.852 0.423 0.872 0.423 N/A N/A 0.852 0.42350 0.872 0.423 0.893 0.545 N/A N/A 0.872 0.42355 0.914 0.423 0.997 0.475 N/A N/A 0.914 0.42350(Figure A.5)50 0.872 0.423 0.893 0.423 0.893 0.423 0.872 0.42355 0.914 0.423 0.914 0.423 0.914 0.423 0.914 0.42360 1.039 0.417 1.039 0.417 1.039 0.417 1.039 0.41765 1.080 0.417 1.122 0.417 1.122 0.417 1.080 0.41770 1.163 0.411 1.163 0.411 1.163 0.411 1.163 0.41175 1.205 0.411 1.205 0.411 1.205 0.411 1.205 0.41144Table 5.2: Lateral and Axial Width-Half-Max (mm) of Point Spread Response for Compression Ratio of 1StandoffPad(mm)ScattererDepth(mm)Reference Gran DecodedCompressedDecodedCompressedPartialDecodedLateral Axial Lateral Axial Lateral Axial Lateral Axial2(Figure A.6)2 1.226 0.430 N/A N/A N/A N/A N/A N/A7 0.997 0.423 N/A N/A N/A N/A N/A N/A12 0.893 0.423 N/A N/A N/A N/A N/A N/A17 0.893 0.417 N/A N/A N/A N/A 0.935 0.42322 0.893 0.423 N/A N/A N/A N/A 0.914 0.42327 0.872 0.423 N/A N/A N/A N/A 0.893 0.4235(Figure A.7)5 0.893 0.417 N/A N/A N/A N/A N/A N/A10 0.872 0.417 N/A N/A N/A N/A 0.872 0.41715 0.893 0.423 N/A N/A N/A N/A 0.893 0.42320 0.914 0.423 N/A N/A N/A N/A 0.914 0.42325 0.872 0.423 N/A N/A N/A N/A 0.872 0.42330 0.893 0.423 N/A N/A N/A N/A 0.893 0.42315(Figure A.8)15 0.893 0.423 N/A N/A N/A N/A 0.893 0.42320 0.914 0.423 N/A N/A N/A N/A 0.914 0.42325 0.872 0.423 N/A N/A N/A N/A 0.872 0.42330 0.893 0.423 1.496 0.507 N/A N/A 0.893 0.42335 0.893 0.430 N/A N/A N/A N/A 0.893 0.43040 0.893 0.423 1.143 0.622 N/A N/A 0.893 0.42330(Figure A.9)30 0.893 0.423 N/A N/A N/A N/A 0.893 0.42335 0.893 0.430 1.641 0.520 N/A N/A 0.893 0.43040 0.893 0.423 N/A N/A N/A N/A 0.893 0.42345 0.852 0.423 0.872 0.648 N/A N/A 0.852 0.42350 0.872 0.423 0.872 0.430 N/A N/A 0.872 0.42355 0.914 0.423 0.914 0.423 N/A N/A 0.914 0.42350(Figure A.10)50 0.872 0.423 0.893 0.423 0.893 0.423 0.872 0.42355 0.914 0.423 0.914 0.423 0.914 0.423 0.914 0.42360 1.039 0.417 1.080 0.417 1.039 0.417 1.039 0.41765 1.080 0.417 1.143 0.423 1.122 0.417 1.080 0.41770 1.163 0.411 1.163 0.411 1.163 0.411 1.163 0.41175 1.205 0.411 1.205 0.411 1.205 0.411 1.205 0.41145Table 5.3: Lateral and Axial Width-Half-Max (mm) of Point Spread Response for Compression Ratio of 3StandoffPad(mm)ScattererDepth(mm)Reference Gran 
DecodedCompressedDecodedCompressedPartialDecodedLateral Axial Lateral Axial Lateral Axial Lateral Axial2(Figure A.11)2 1.226 0.430 N/A N/A N/A N/A N/A N/A7 0.997 0.423 N/A N/A N/A N/A N/A N/A12 0.893 0.423 N/A N/A N/A N/A 0.893 0.42317 0.893 0.417 N/A N/A N/A N/A 0.893 0.41722 0.893 0.423 N/A N/A N/A N/A 0.893 0.42327 0.872 0.423 N/A N/A N/A N/A 0.872 0.4235(Figure A.12)5 0.893 0.417 N/A N/A N/A N/A 0.893 0.41710 0.872 0.417 N/A N/A N/A N/A 0.872 0.41715 0.893 0.423 N/A N/A N/A N/A 0.893 0.42320 0.914 0.423 N/A N/A N/A N/A 0.914 0.42325 0.872 0.423 N/A N/A N/A N/A 0.872 0.42330 0.893 0.423 N/A N/A N/A N/A 0.893 0.42315(Figure A.13)15 0.893 0.423 N/A N/A 0.893 0.423 0.893 0.42320 0.914 0.423 N/A N/A 0.914 0.423 0.914 0.42325 0.872 0.423 N/A N/A 0.893 0.423 0.872 0.42330 0.893 0.423 N/A N/A 0.893 0.423 0.893 0.42335 0.893 0.430 N/A N/A 0.893 0.430 0.893 0.43040 0.893 0.423 N/A N/A 0.893 0.423 0.893 0.42330(Figure A.14)30 0.893 0.423 N/A N/A 0.893 0.423 0.893 0.42335 0.893 0.430 N/A N/A 0.893 0.430 0.893 0.43040 0.893 0.423 N/A N/A 0.893 0.423 0.893 0.42345 0.852 0.423 N/A N/A 0.872 0.423 0.852 0.42350 0.872 0.423 N/A N/A 0.872 0.423 0.872 0.42355 0.914 0.423 N/A N/A 0.914 0.423 0.914 0.42350(Figure A.15)50 0.872 0.423 N/A N/A 0.893 0.423 0.872 0.42355 0.914 0.423 N/A N/A 0.914 0.423 0.914 0.42360 1.039 0.417 N/A N/A 1.039 0.417 1.039 0.41765 1.080 0.417 N/A N/A 1.122 0.417 1.080 0.41770 1.163 0.411 N/A N/A 1.163 0.411 1.163 0.41175 1.205 0.411 N/A N/A 1.205 0.411 1.205 0.41146Table 5.4: Lateral and Axial Width-Half-Max (mm) of Point Spread Response for Compression Ratio of 5StandoffPad(mm)ScattererDepth(mm)Reference Gran DecodedCompressedDecodedCompressedPartialDecodedLateral Axial Lateral Axial Lateral Axial Lateral Axial2(Figure A.16)2 1.226 0.430 N/A N/A N/A N/A 1.226 0.4307 0.997 0.423 N/A N/A N/A N/A 0.997 0.42312 0.893 0.423 N/A N/A N/A N/A 0.893 0.42317 0.893 0.417 N/A N/A N/A N/A 0.893 0.41722 0.893 0.423 N/A N/A N/A N/A 0.893 0.42327 0.872 0.423 N/A N/A N/A N/A 0.872 0.4235(Figure A.17)5 0.893 0.417 N/A N/A N/A N/A 0.893 0.41710 0.872 0.417 N/A N/A N/A N/A 0.872 0.41715 0.893 0.423 N/A N/A N/A N/A 0.893 0.42320 0.914 0.423 N/A N/A N/A N/A 0.914 0.42325 0.872 0.423 N/A N/A N/A N/A 0.872 0.42330 0.893 0.423 N/A N/A N/A N/A 0.893 0.42315(Figure A.18)15 0.893 0.423 N/A N/A 0.893 0.423 0.893 0.42320 0.914 0.423 N/A N/A 0.914 0.423 0.914 0.42325 0.872 0.423 N/A N/A 0.893 0.423 0.872 0.42330 0.893 0.423 N/A N/A 0.893 0.423 0.893 0.42335 0.893 0.430 N/A N/A 0.893 0.430 0.893 0.43040 0.893 0.423 N/A N/A 0.893 0.423 0.893 0.42330(Figure A.19)30 0.893 0.423 N/A N/A 0.893 0.423 0.893 0.42335 0.893 0.430 N/A N/A 0.893 0.430 0.893 0.43040 0.893 0.423 N/A N/A 0.893 0.423 0.893 0.42345 0.852 0.423 N/A N/A 0.872 0.423 0.852 0.42350 0.872 0.423 N/A N/A 0.872 0.423 0.872 0.42355 0.914 0.423 N/A N/A 0.914 0.423 0.914 0.42350(Figure A.20)50 0.872 0.423 N/A N/A 0.893 0.423 0.872 0.42355 0.914 0.423 N/A N/A 0.914 0.423 0.914 0.42360 1.039 0.417 N/A N/A 1.039 0.417 1.039 0.41765 1.080 0.417 N/A N/A 1.122 0.417 1.080 0.41770 1.163 0.411 N/A N/A 1.163 0.411 1.163 0.41175 1.205 0.411 N/A N/A 1.205 0.411 1.205 0.41147Table 5.5: SNR (dB) of High and Low Reflectivity Occlusions for Compression Ratio of 0.9StandoffPad(mm)OcclusionDepth(mm)Reference Gran DecodedCompressedDecodedCompressedPartial DecodedHigh Low High Low High Low High Low2(Figure B.1)2 4.029 2.847 N/A N/A N/A N/A 2.736 -5.3127 4.342 3.599 N/A N/A N/A N/A 3.551 -5.43012 4.967 4.037 2.339 0.093 -0.501 0.548 4.629 1.2275(Figure B.2)5 4.109 3.506 N/A 
N/A N/A N/A 3.092 -7.70510 4.419 3.825 N/A N/A N/A N/A 3.790 -0.36915 5.035 4.239 0.755 -4.716 -17.078 -4.903 4.894 1.88915(Figure B.3)15 4.751 3.982 N/A N/A N/A N/A 4.801 0.99020 4.710 3.808 2.408 -0.813 -1.509 -13.742 4.724 1.98525 5.138 4.391 1.137 -7.177 -12.450 -5.716 5.239 2.70230(Figure B.4)30 4.811 4.044 1.991 -7.300 -5.081 -11.752 4.982 3.42235 4.839 3.747 1.422 -7.227 -7.892 -6.431 5.100 2.68540 5.291 4.328 1.165 -2.021 -4.068 -4.006 5.390 3.60850(Figure B.5)50 4.887 4.094 4.889 4.103 4.889 4.103 5.123 3.66455 4.888 3.834 4.886 3.805 4.886 3.805 5.094 3.49660 5.390 4.094 5.390 4.073 5.390 4.073 5.493 3.82248Table 5.6: SNR (dB) of High and Low Reflectivity Occlusions for Compression Ratio of 1StandoffPad(mm)OcclusionDepth(mm)Reference Gran DecodedCompressedDecodedCompressedPartial DecodedHigh Low High Low High Low High Low2(Figure B.6)2 4.029 2.847 N/A N/A N/A N/A 2.766 -5.2417 4.342 3.599 N/A N/A N/A N/A 3.331 -6.30312 4.967 4.037 0.408 -2.533 -0.480 -2.543 4.717 1.6595(Figure B.7)5 4.109 3.506 N/A N/A N/A N/A 3.112 -14.00510 4.419 3.825 4.365 6.720 5.368 5.263 4.002 -1.64015 5.035 4.239 -1.641 -8.300 -9.005 -2.866 4.951 1.88615(Figure B.8)15 4.751 3.982 N/A N/A N/A N/A 4.678 0.98420 4.710 3.808 0.494 -1.897 -0.095 -0.655 4.610 2.10825 5.138 4.391 -2.123 -8.759 -14.070 -13.136 5.142 2.97730(Figure B.9)30 4.811 4.044 -0.976 -4.864 -1.025 -5.013 5.019 3.11235 4.839 3.747 -1.765 -7.869 -1.783 -7.943 4.961 2.69840 5.291 4.328 -1.877 -1.225 -1.868 -1.238 5.423 3.63050(Figure B.10)50 4.887 4.094 4.767 3.730 4.889 4.103 5.107 3.67355 4.888 3.834 4.686 3.735 4.887 3.803 5.031 3.29160 5.390 4.094 5.180 3.852 5.391 4.072 5.523 3.93249Table 5.7: SNR (dB) of High and Low Reflectivity Occlusions for Compression Ratio of 3StandoffPad(mm)OcclusionDepth(mm)Reference Gran DecodedCompressedDecodedCompressedPartial DecodedHigh Low High Low High Low High Low2(Figure B.11)2 4.029 2.847 -3.795 -20.825 -5.230 -6.264 3.400 -1.7427 4.342 3.599 -4.887 -13.745 -4.188 -10.720 3.909 0.70112 4.967 4.037 -4.561 -0.662 0.764 -11.192 4.682 1.2715(Figure B.12)5 4.109 3.506 -5.756 -8.189 -2.553 -11.283 3.621 -0.94010 4.419 3.825 -4.431 -10.371 -2.721 -4.697 4.262 1.62715 5.035 4.239 -9.123 -17.546 2.294 -8.622 4.861 1.42515(Figure B.13)15 4.751 3.982 -1.433 -8.257 4.915 2.539 4.628 1.56420 4.710 3.808 -8.311 -1.009 4.681 2.258 4.634 2.16325 5.138 4.391 -11.112 -5.151 5.146 2.757 5.119 2.54730(Figure B.14)30 4.811 4.044 -4.740 -7.861 4.798 2.226 4.656 1.61535 4.839 3.747 -4.897 -19.505 5.079 1.775 5.074 1.86340 5.291 4.328 -0.722 -6.025 5.364 3.075 5.345 3.02650(Figure B.15)50 4.887 4.094 -4.350 -11.268 5.040 2.465 4.968 1.85155 4.888 3.834 -1.455 -12.222 4.958 1.330 4.895 1.31260 5.390 4.094 -4.114 -8.792 5.333 2.990 5.313 2.97750Table 5.8: SNR (dB) of High and Low Reflectivity Occlusions for Compression Ratio of 5StandoffPad(mm)OcclusionDepth(mm)Reference Gran DecodedCompressedDecodedCompressedPartial DecodedHigh Low High Low High Low High Low2(Figure B.16)2 4.029 2.847 -6.809 -5.978 -4.780 -7.780 3.617 0.1087 4.342 3.599 -3.854 -7.240 -0.282 -10.124 3.886 1.41012 4.967 4.037 -9.262 -8.948 4.876 1.356 4.814 0.5845(Figure B.17)5 4.109 3.506 -6.188 -7.338 -7.761 -5.791 3.635 0.80010 4.419 3.825 -12.921 -9.221 1.568 -0.595 4.025 0.97715 5.035 4.239 -6.007 -6.489 4.732 1.330 4.592 1.08015(Figure B.18)15 4.751 3.982 -7.357 -4.022 4.375 0.763 4.243 0.73520 4.710 3.808 -16.305 -7.266 4.186 0.988 4.163 0.94625 5.138 4.391 -5.518 -6.040 4.804 1.270 4.794 1.20930(Figure B.19)30 4.811 4.044 -4.268 -6.408 4.485 0.784 4.426 
0.61935 4.839 3.747 -2.185 -16.081 4.504 0.805 4.487 0.82240 5.291 4.328 -6.225 -11.858 5.115 1.836 5.111 1.84050(Figure B.20)50 4.887 4.094 1.325 -2.893 4.873 0.721 4.854 0.58855 4.888 3.834 -0.234 -8.125 4.941 1.174 4.935 1.15660 5.390 4.094 0.134 -4.737 5.356 2.409 5.359 2.418

5.7 Discussion

5.7.1 Gran Decoding

As stated in Equation 5.19, for a two-element simultaneous transmit a code length of N ≥ M + 1 is required for the code matrix X to be full-rank. In the simulations, this value is modified as

N = \frac{M+1}{C},    (5.30)

where C is the compression ratio. Due to the half-duplex constraint of elements being unable to receive while transmitting, this results in a deadzone of depth N samples. The effects of this deadzone can be seen in the simulation images and the resolution/SNR results. For a compression ratio of 1 the deadzone will be roughly half the depth of the entire image. As seen by the "N/A" results in Table 5.2, this will result in any scatterer within that region being undetectable, with the additional effect of scatterers below it being significantly distorted.

As the Gran decoded method requires a full-rank code matrix to function, the results rapidly deteriorate as C is increased and the code matrix becomes underdetermined, even for points that were entirely outside the deadzone. This effect can be seen in Table 5.3 and Table 5.4, where the Gran decoding method was unable to distinguish a point scatterer at any depth.
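To put Equation 5.30 and the deadzone in physical terms, the short calculation below converts code length into lost depth for the 120 MHz sampling rate used in the simulations, assuming a speed of sound of 1540 m/s; the chosen region of interest and the printed values are illustrative only.

```matlab
% Worked example of Equation 5.30 and the resulting deadzone depth
% for the Section 5.5 sampling rate (120 MHz) and an assumed speed of
% sound of 1540 m/s. The 30 mm region of interest is illustrative.
fs = 120e6;                  % ADC sampling rate (Hz)
c  = 1540;                   % speed of sound (m/s), assumption
depth_of_interest = 30e-3;   % imaged depth below the deadzone (m)

M = round(2 * depth_of_interest / c * fs);  % samples in the region of interest
for C = [0.9 1 3 5]                         % compression ratios tested
    N = ceil((M + 1) / C);                  % code length, Equation 5.30
    deadzone_mm = N / fs * c / 2 * 1e3;     % depth lost to the half-duplex constraint
    fprintf('C = %.1f: N = %d samples, deadzone ~ %.1f mm\n', C, N, deadzone_mm);
end
```

For a 30 mm region of interest this gives a deadzone of roughly 30 mm at a compression ratio of 1 (about half of the total imaged depth, as noted above) and roughly 6 mm at a ratio of 5.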
5.7.2 Compressed Sensing Decoding

In simulations where scatterers fall within the deadzone, the compressed sensing decoding method is shown to perform worse than the Gran technique, with significantly more distortion (Table 5.2 fourth row, Figure A.9). At higher compression ratios, however, the deadzone decreases in size and allows a smaller standoff pad to be used (less void space required). For simulations at a compression ratio greater than 1 and with no scatterers within the deadzone, the compressed sensing method is shown to outperform the Gran decoding method in every case, with equal resolutions for the scatterers detected by both methods and far more scatterers detected in general.

The resolution of this method is shown to be comparable to that of conventional non-coded SA imaging for all cases with no scatterers in the deadzone (e.g., Table 5.4, rows 3-5). For high-reflectivity regions, the SNR is also comparable. It does, however, suffer a significant drop in SNR for low-reflectivity occlusions in non-sparse media (the tissue phantom) at high compression ratios (e.g., Table 5.8, rows 3-5). This is a result of the required assumption for compressed sensing that the only points of interest are those with high reflectivity.

5.7.3 Compressed Sensing with Partial Decoding

This method shows a distinct advantage in scenarios where scatterers fall within the deadzone. In Table 5.4 (first row), it can be seen that the partial decoding method allows for detection of scatterers that neither the Gran decoding nor the basic compressed sensing decoding could detect, and at an equivalent resolution to that of the reference simulation. In Table 5.8 (rows 1-2) the partial decoding method also shows a drastic improvement in SNR over that of the Gran decoded and compressed decoded methods for both high- and low-reflectivity occlusions (although the SNR for low-reflectivity occlusions is still significantly lower than in the reference simulation).

5.7.4 Summary

The compressed sensing with partial decoding method is shown to be superior to both Gran decoding and basic compressed sensing in all regards. It allows for detection of scatterers within the deadzone, as well as the best resolution and SNR of the three coded excitation methods. With 32 transmit events required for each of the simulations conducted, it offers an 8-fold increase in FPS over the default 256-transmit settings of a conventional ultrasound machine.

Although an improvement over other coded excitation methods, compressed sensing with partial decoding suffers from the drawback of poor SNR for low-reflectivity regions in non-sparse media. This is in line with the expectations described in Section 5.4, and it shows promise for applications that require reliable detection of only the high-reflectivity scatterers.

Chapter 6
Conclusions

High frame rate ultrasound imaging has the potential to allow for advanced imaging techniques at an exceptionally low cost and with no known patient risk. 3-D volumetric imaging, transient elastography, and fast Doppler sonography are several of the tools that would benefit most from a fully capable fast-imaging ultrasound technique. In addition, fast ultrasonography reduces motion blur when imaging quick-moving organs such as the heart.

One method for increasing ultrasound frame rate without loss of resolution or SNR has been proposed, in which the previously designed techniques of synthetic aperture (SA) and compounded plane wave (CPW) imaging are adaptively combined to achieve an imaging method that integrates the benefits of each without the drawbacks. A second method has also been proposed, in which the work of Gran and Jensen [9] in spatial encoding is extended to vastly improve the applicability of the method and remove its limitations.

6.1 Thesis Contributions

• Synthetic aperture (SA) and compounded plane wave (CPW) techniques previously only performed in simulations or on dedicated research equipment were implemented on commercially available ultrasound equipment. The methods were performed using the SONIXRP and SONIXDAQ hardware from Ultrasonix Medical Corporation and the Texo software development kit for sequence control. This low-cost physical implementation allows for further research in the area with minimal difficulty and expense.

• An adaptive compounding technique was developed that combines the resolution of SA imaging with the SNR of CPW imaging. The technique allows for a higher frame rate than conventional fixed transmit beamforming imaging (double the frame rate in the experiments conducted) while maintaining an improvement in both axial and lateral resolution as well as SNR.

• An extension of the spatial encoding method developed in [9] was proposed and tested, which removes the full-rank requirement for the code matrix by implementing a compressed sensing technique. Reducing the rank of the code matrix allows for a reduction in the size of the deadzone produced by the half-duplex constraint on the transducer elements (which cannot receive while transmitting). Experiments were conducted with a compression ratio (the ratio of imaged samples to code length) ranging from 0.9 to 5, and showed a marked improvement over the Gran decoding technique [9] in all cases with a compression ratio greater than 1 where there were no scatterers located in the deadzone. This method is targeted at applications requiring detection of high-reflectivity scatterers only, as the SNR for low-reflectivity scatterers is poorer than that of conventional non-coded SA imaging.
• A further extension of the compressed sensing technique was proposed and tested, which extends the code matrix to allow for decoding of the tails of reflections originating in the deadzone (partial codes). This technique allows for imaging of scatterers within the deadzone, and eliminates the negative effect they have on scatterers below the deadzone. It shows an improvement over both the Gran decoding method and the basic compressed sensing method in lateral resolution, axial resolution, and SNR for both high- and low-reflectivity occlusions. The SNR for low-reflectivity occlusions at high compression ratios is poorer than that of non-coded SA imaging, and as such the method is targeted at applications requiring detection of high-reflectivity scatterers only.

6.2 Future Work

For the adaptive compounding method described in Chapter 4, future work will focus on testing the technique in vivo (on living tissue). Although the physical implementation on the Ultrasonix equipment is a large step towards clinical applications, the method may need to be modified to handle motion within the tissue during imaging. Future work may also focus on improving the adaptive weighting algorithm (Equation 4.5) to be more resilient to outliers in the strength of the RF data of the CPW image.

For the compressed sensing spatial encoding method described in Chapter 5, future work will focus primarily on implementing the technique on a physical ultrasound platform with quality assurance phantoms to determine its effectiveness outside of simulations. Eventually, that would be extended to tests on in vivo media. Currently, physical implementation is limited by the inability of the Ultrasonix platforms to excite multiple elements simultaneously with individual excitation pulses, but that feature is expected to become available in the future. Additional work may focus on developing an application-specific convex optimization algorithm to improve the accuracy of the decoded result and the speed at which it is found.

Both techniques require a variety of user-definable parameters, including the SA transmit aperture width, CPW angles, compression ratio, and excitation pulse shape and power. Although the selected values of these parameters are heavily dependent upon the medium being imaged, it may be possible to develop self-adjusting algorithms for determining their optimal values.

Due to hardware limitations and processing requirements, the proposed techniques in both Chapter 4 and Chapter 5 are currently only capable of being run in an offline mode, i.e., they cannot be run as real-time imaging. Future improvements in computer processing ability, coupled with implementation of the data processing on a graphics processing unit (GPU) for massive multi-threading, will allow for real-time processing.

Bibliography

[1] J. Bercoff, S. Chaffai, M. Tanter, L. Sandrin, S. Catheline, M. Fink, J. Gennisson, and M. Meunier. In vivo breast tumor detection using transient elastography. Ultrasound in Medicine & Biology, 29(10):1387-1396, 2003. ISSN 0301-5629. doi:10.1016/S0301-5629(03)00978-5. URL http://www.sciencedirect.com/science/article/pii/S0301562903009785.

[2] C. Burckhardt, P.-A. Grandchamp, and H. Hoffmann. An experimental 2 MHz synthetic aperture sonar system intended for medical use. Sonics and Ultrasonics, IEEE Transactions on, 21(1):1-6, Jan 1974. ISSN 0018-9537. doi:10.1109/T-SU.1974.29783.

[3] R. Chiao and L. Thomas. Synthetic transmit aperture imaging using orthogonal Golay coded excitation.
In Ultrasonics Symposium, 2000 IEEE,volume 2, pages 1677?1680 vol.2, 2000.doi:10.1109/ULTSYM.2000.921644. 20, 32[4] R. Chiao, L. Thomas, and S. Silverstein. Sparse array imaging withspatially-encoded transmits. In Ultrasonics Symposium, 1997. Proceedings.,1997 IEEE, volume 2, pages 1679 ?1682 vol.2, Oct 1997.doi:10.1109/ULTSYM.1997.663318. 20, 31[5] J. Flaherty. Synthetic aperture ultrasonic imaging systems, 12 1970. 10[6] K. Gammelmark and J. Jensen. Multielement synthetic transmit apertureimaging using temporal encoding. Medical Imaging, IEEE Transactions on,22(4):552 ?563, Apr 2003. ISSN 0278-0062.doi:10.1109/TMI.2003.809088. 20[7] F. Gran and J. Jensen. Multi element synthetic aperture transmission using afrequency division approach. In Ultrasonics, 2003 IEEE Symposium on,volume 2, pages 1942?1946 Vol.2, 2003.doi:10.1109/ULTSYM.2003.1293297. 20, 3257[8] F. Gran and J. Jensen. Identification of pulse echo impulse responses formulti source transmission. In Signals, Systems, and Computers,Thirty-Eighth Annular Asilomar Conference on, pages 168?172, 2004. 32[9] F. Gran and J. Jensen. Spatial encoding using a code division technique forfast ultrasound imaging. Ultrasonics, Ferroelectrics and Frequency Control,IEEE Transactions on, 55(1):12?23, 2008. ISSN 0885-3010.doi:10.1109/TUFFC.2008.613. 4, 32, 36, 38, 39, 54, 55[10] F. Gran and J. A. Jensen. Spatio-temporal encoding using narrow-bandlinear frequency modulated signals in synthetic aperture ultrasound imaging.pages 405?416, 2005. doi:10.1117/12.592352. URL+http://dx.doi.org/10.1117/12.592352. 20, 32[11] F. Gran, J. A. Jensen, and A. Jakobsson. A code division technique formultiple element synthetic aperture transmission. pages 300?306, 2004.doi:10.1117/12.535222. URL +http://dx.doi.org/10.1117/12.535222. 32[12] M. Grant and S. Boyd. Graph implementations for nonsmooth convexprograms. In V. Blondel, S. Boyd, and H. Kimura, editors, Recent Advancesin Learning and Control, Lecture Notes in Control and InformationSciences, pages 95?110. Springer-Verlag Limited, 2008.http://stanford.edu/?boyd/graph dcp.html. 40[13] M. Grant and S. Boyd. CVX: Matlab software for disciplined convexprogramming, version 2.0 beta. http://cvxr.com/cvx, Sept. 2012. 40[14] I. Holfort, A. Austeng, J.-F. Synnevg, S. Holm, F. Gran, and J. Jensen.Adaptive receive and transmit apodization for synthetic aperture ultrasoundimaging. In Ultrasonics Symposium (IUS), 2009 IEEE International, pages1 ?4, sept. 2009. doi:10.1109/ULTSYM.2009.5442035. 10[15] J. Jensen. Field ii simulation program. URL http://field-ii.dk. Accessed:2013-07-29. 17[16] J. Jensen. Linear description of ultrasound imaging systems. 1999. 18[17] J. A. Jensen. Field: A program for simulating ultrasound systems. In 10THNORDICBALTIC CONFERENCE ON BIOMEDICAL IMAGING, VOL. 4,SUPPLEMENT 1, PART 1:351?353, pages 351?353, 1996. 17[18] J. A. Jensen, S. I. Nikolov, K. L. Gammelmark, and M. H. Pedersen.Synthetic aperture ultrasound imaging. Ultrasonics, 44, Supplement(0):e5 ?58e15, 2006. ISSN 0041-624X. doi:10.1016/j.ultras.2006.07.017. URLhttp://www.sciencedirect.com/science/article/pii/S0041624X06003374.Proceedings of Ultrasonics International (UI05) and World Congress onUltrasonics (WCU). 9, 19[19] M. Karaman, P.-C. Li, and M. O?Donnell. Synthetic aperture imaging forsmall scale systems. Ultrasonics, Ferroelectrics and Frequency Control,IEEE Transactions on, 42(3):429 ?442, May 1995. ISSN 0885-3010.doi:10.1109/58.384453. 20, 22[20] K. Kotowick, R. Rohling, and L. Lampe. 
Adaptive compounding ofsynthetic aperture and compounded plane-wave imaging for fastultrasonography. In Biomedical Imaging (ISBI), 2013 IEEE 10thInternational Symposium on, pages 784?787, 2013.doi:10.1109/ISBI.2013.6556592. iii[21] L. Ljung. System Identification: Theory for the User. Prentice-Hall,Englewood Cliffs, NJ, 1987. 35[22] G. Lockwood and F. Foster. Design of sparse array imaging systems. InUltrasonics Symposium, 1995. Proceedings., 1995 IEEE, volume 2, pages1237 ?1243 vol.2, Nov 1995. doi:10.1109/ULTSYM.1995.495782. 10[23] G. Lockwood, J. Talman, and S. Brunke. Real-time 3-d ultrasound imagingusing sparse synthetic aperture beamforming. Ultrasonics, Ferroelectricsand Frequency Control, IEEE Transactions on, 45(4):980 ?988, Jul 1998.ISSN 0885-3010. doi:10.1109/58.710573. 10[24] E. Mendelson, J.-F. Chen, and P. Karstaedt. Assessing tissue stiffness mayboost breast imaging specificity. Diagn. Imaging, 31:15?17, 2009. 2[25] T. X. Misaridis and J. A. Jensen. Spacetime encoding for high frame rateultrasound imaging. Ultrasonics, 40(18):593 ? 597, 2002. ISSN 0041-624X.doi:http://dx.doi.org/10.1016/S0041-624X(02)00179-8. URLhttp://www.sciencedirect.com/science/article/pii/S0041624X02001798. 20,31[26] G. Montaldo, M. Tanter, J. Bercoff, N. Benech, and M. Fink. Coherentplane-wave compounding for very high frame rate ultrasonography andtransient elastography. Ultrasonics, Ferroelectrics and Frequency Control,IEEE Transactions on, 56(3):489 ?506, Mar 2009. ISSN 0885-3010.doi:10.1109/TUFFC.2009.1067. 2059[27] K. Nagai. A new synthetic-aperture focusing method for ultrasonic b-scanimaging by the fourier transform. Sonics and Ultrasonics, IEEETransactions on, 32(4):531 ?536, Jul 1985. ISSN 0018-9537.doi:10.1109/T-SU.1985.31627. 10[28] M. O?Donnell and L. Thomas. Efficient synthetic aperture imaging from acircular aperture with possible application to catheter-based imaging.Ultrasonics, Ferroelectrics and Frequency Control, IEEE Transactions on,39(3):366 ?380, May 1992. ISSN 0885-3010. doi:10.1109/58.143171. 10[29] M. O?Donnell and Y. Wang. Coded excitation for synthetic apertureultrasound imaging. Ultrasonics, Ferroelectrics and Frequency Control,IEEE Transactions on, 52(2):171 ?176, Feb 2005. ISSN 0885-3010.doi:10.1109/TUFFC.2005.1406544. 20[30] A. V. Oppenheim and R. W. Schafer. Discrete-Time Signal Processing.Prentice-Hall, NJ, 1989. 42[31] J. rgen Arendt Jensen. A model for the propagation and scattering ofultrasound in tissue. The Journal of the Acoustical Society of America, 89(1):182?190, 1991. doi:10.1121/1.400497. URLhttp://link.aip.org/link/?JAS/89/182/1. 17[32] L. Sandrin, S. Catheline, M. Tanter, X. Hennequin, and M. Fink.Time-resolved pulsed elastography with ultrafast ultrasonic imaging.Ultrasonic Imaging, 21(4):259 ?272, Dec 1999. 10, 20[33] L. Sandrin, S. Catheline, M. Tanter, and M. Fink. 2d transient elastography.In M. Halliwell and P. N. T. Wells, editors, Acoustical Imaging, pages485?492. Springer US, 2002. ISBN 978-0-306-47107-0. URLhttp://dx.doi.org/10.1007/0-306-47107-8 68. 10.1007/0-306-47107-8 68.10, 20[34] D. P. Shattuck and O. T. von Ramm. Compound scanning with a phasedarray. Ultrasonic Imaging, 4(2):93 ? 107, 1982. ISSN 0161-7346.doi:http://dx.doi.org/10.1016/0161-7346(82)90094-3. URLhttp://www.sciencedirect.com/science/article/pii/0161734682900943. 6[35] M. Soumekh. Synthetic Aperture Radar Signal Processing with MATLABAlgorithms. Wiley-Interscience publication. Wiley, 1999. ISBN9780471297062. URL http://books.google.ca/books?id=gVWqQgAACAAJ.960[36] P. R. 
Stepanishen. Transient radiation from pistons in an infinite planarbaffle. The Journal of the Acoustical Society of America, 49(5B):1629?1638, 1971. doi:10.1121/1.1912541. URLhttp://link.aip.org/link/?JAS/49/1629/1. 17[37] P. R. Stepanishen. The time-dependent force and radiation impedance on apiston in a rigid infinite planar baffle. The Journal of the Acoustical Societyof America, 49(3B):841?849, 1971. doi:10.1121/1.1912424. URLhttp://link.aip.org/link/?JAS/49/841/1. 17[38] P. R. Stepanishen. Pulsed transmit/receive response of ultrasonicpiezoelectric transducers. The Journal of the Acoustical Society of America,69(6):1815?1827, 1981. doi:10.1121/1.385919. URLhttp://link.aip.org/link/?JAS/69/1815/1. 17[39] M. Tang, F. Luo, and D. Liu. Automatic time gain compensation inultrasound imaging system. In Bioinformatics and Biomedical Engineering ,2009. ICBBE 2009. 3rd International Conference on, pages 1?4, 2009.doi:10.1109/ICBBE.2009.5162432. 8[40] J. Taylor, J. Chan, and G. Thomas. Frequency selection for compoundingsynthetic aperture ultrasound images. In Imaging Systems and Techniques(IST), 2012 IEEE International Conference on, pages 74 ?77, Jul 2012.doi:10.1109/IST.2012.6295514. 20[41] K. C. Toh, M. Todd, and R. H. Ttnc. Sdpt3 ? a matlab software package forsemidefinite programming. OPTIMIZATION METHODS AND SOFTWARE,11:545?581, 1999. 40[42] J. Tropp and S. Wright. Computational methods for sparse solution of linearinverse problems. Proceedings of the IEEE, 98(6):948?958, 2010. ISSN0018-9219. doi:10.1109/JPROC.2010.2044010. 40[43] G. E. Tupholme. Generation of acoustic pulses by baffled plane pistons.Mathematika, 16, 1969. doi:10.1112/S0025579300008184. 17[44] Ultrasonix Medical Corp. Transducer guide, . URLhttp://www.ultrasonix.com/webfm send/879. Accessed: 2013-07-31. 32[45] Ultrasonix Medical Corp. Sonix RP, . URLhttp://www.ultrasonix.com/wikisonix/index.php/Sonix RP. Accessed:2013-07-17. 1361[46] L. D. C. W. T. D. W. J. H. W. J. L. W. A. Anderson, J. T. Arnold and L. T.Zitelli. A new real-tim phased-array sector scanner for imaging the entireadult human heart. Ultrasound in Medicine, 3B:1547?1558, 1977. 662Appendix ACompressed Sensing PointPhantom ImagesThis appendix contains the resulting images from the 20 simulations conducted onthe point phantom (Section 5.5) for each combination of compression ratios 0.9, 1,3, 5 and standoff pad thicknesses of 2mm, 5mm, 15mm, 30mm, 50mm.63(a) (b) (c) (d)Figure A.1: Point phantom simulation with compression ratio of 0.9 andstandoff pad thickness of 2 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.(a) (b) (c) (d)Figure A.2: Point phantom simulation with compression ratio of 0.9 andstandoff pad thickness of 5 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.64(a) (b) (c) (d)Figure A.3: Point phantom simulation with compression ratio of 0.9 andstandoff pad thickness of 15 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.(a) (b) (c) (d)Figure A.4: Point phantom simulation with compression ratio of 0.9 andstandoff pad thickness of 30 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.65(a) (b) (c) (d)Figure A.5: Point phantom simulation with compression ratio of 0.9 andstandoff pad thickness of 50 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. 
(d) Compressed SensingPartial Decoded.(a) (b) (c) (d)Figure A.6: Point phantom simulation with compression ratio of 1 and stand-off pad thickness of 2 mm. (a) Reference simulation. (b) Gran De-coded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.66(a) (b) (c) (d)Figure A.7: Point phantom simulation with compression ratio of 1 and stand-off pad thickness of 5 mm. (a) Reference simulation. (b) Gran De-coded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.(a) (b) (c) (d)Figure A.8: Point phantom simulation with compression ratio of 1 and stand-off pad thickness of 15 mm. (a) Reference simulation. (b) Gran De-coded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.67(a) (b) (c) (d)Figure A.9: Point phantom simulation with compression ratio of 1 and stand-off pad thickness of 30 mm. (a) Reference simulation. (b) Gran De-coded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.68(a) (b) (c) (d)Figure A.10: Point phantom simulation with compression ratio of 1 andstandoff pad thickness of 50 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.(a) (b) (c) (d)Figure A.11: Point phantom simulation with compression ratio of 3 andstandoff pad thickness of 2 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.69(a) (b) (c) (d)Figure A.12: Point phantom simulation with compression ratio of 3 andstandoff pad thickness of 5 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.(a) (b) (c) (d)Figure A.13: Point phantom simulation with compression ratio of 3 andstandoff pad thickness of 15 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.70(a) (b) (c) (d)Figure A.14: Point phantom simulation with compression ratio of 3 andstandoff pad thickness of 30 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.71(a) (b) (c) (d)Figure A.15: Point phantom simulation with compression ratio of 3 andstandoff pad thickness of 50 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.(a) (b) (c) (d)Figure A.16: Point phantom simulation with compression ratio of 5 andstandoff pad thickness of 2 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.72(a) (b) (c) (d)Figure A.17: Point phantom simulation with compression ratio of 5 andstandoff pad thickness of 5 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.(a) (b) (c) (d)Figure A.18: Point phantom simulation with compression ratio of 5 andstandoff pad thickness of 15 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.73(a) (b) (c) (d)Figure A.19: Point phantom simulation with compression ratio of 5 andstandoff pad thickness of 30 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. (d) Compressed SensingPartial Decoded.74(a) (b) (c) (d)Figure A.20: Point phantom simulation with compression ratio of 5 andstandoff pad thickness of 50 mm. (a) Reference simulation. (b) GranDecoded. (c) Compressed Sensing Decoded. 
Appendix B

Compressed Sensing Tissue Phantom Images

This appendix contains the resulting images from the 20 simulations conducted on the tissue phantom (Section 5.5), one for each combination of the compression ratios 0.9, 1, 3, and 5 with the standoff pad thicknesses of 2 mm, 5 mm, 15 mm, 30 mm, and 50 mm.

Figure B.1: Tissue phantom simulation with compression ratio of 0.9 and standoff pad thickness of 2 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.2: Tissue phantom simulation with compression ratio of 0.9 and standoff pad thickness of 5 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.3: Tissue phantom simulation with compression ratio of 0.9 and standoff pad thickness of 15 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.4: Tissue phantom simulation with compression ratio of 0.9 and standoff pad thickness of 30 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.5: Tissue phantom simulation with compression ratio of 0.9 and standoff pad thickness of 50 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.6: Tissue phantom simulation with compression ratio of 1 and standoff pad thickness of 2 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.7: Tissue phantom simulation with compression ratio of 1 and standoff pad thickness of 5 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.8: Tissue phantom simulation with compression ratio of 1 and standoff pad thickness of 15 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.9: Tissue phantom simulation with compression ratio of 1 and standoff pad thickness of 30 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.10: Tissue phantom simulation with compression ratio of 1 and standoff pad thickness of 50 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.11: Tissue phantom simulation with compression ratio of 3 and standoff pad thickness of 2 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.12: Tissue phantom simulation with compression ratio of 3 and standoff pad thickness of 5 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.13: Tissue phantom simulation with compression ratio of 3 and standoff pad thickness of 15 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.14: Tissue phantom simulation with compression ratio of 3 and standoff pad thickness of 30 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.
Figure B.15: Tissue phantom simulation with compression ratio of 3 and standoff pad thickness of 50 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.16: Tissue phantom simulation with compression ratio of 5 and standoff pad thickness of 2 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.17: Tissue phantom simulation with compression ratio of 5 and standoff pad thickness of 5 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.18: Tissue phantom simulation with compression ratio of 5 and standoff pad thickness of 15 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.19: Tissue phantom simulation with compression ratio of 5 and standoff pad thickness of 30 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.

Figure B.20: Tissue phantom simulation with compression ratio of 5 and standoff pad thickness of 50 mm. (a) Reference simulation. (b) Gran Decoded. (c) Compressed Sensing Decoded. (d) Compressed Sensing Partial Decoded.
