UBC Theses and Dissertations

Investigating limited view problem in photoacoustic tomography Shu, Weihang 2016

INVESTIGATING LIMITED VIEW PROBLEM IN PHOTOACOUSTIC TOMOGRAPHY

by

Weihang Shu

B.Eng., Zhejiang University, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

July 2016

© Weihang Shu, 2016

Abstract

Photoacoustic (PA) imaging is a new biomedical imaging modality based on the PA effect. In the PA effect, nanosecond pulsed laser light illuminates tissue and is absorbed by optical absorbers, which undergo a temporary temperature rise. The illuminated region experiences thermoelastic expansion and produces an abrupt, localized pressure variation. This transient variation launches a PA wave that propagates from the absorber outward through the tissue to the surface, where it is detected by an ultrasound transducer. The detected PA signal can then be used to estimate the initial pressure distribution. PA imaging is an absorption-based modality capable of mapping the optical properties of the illuminated tissue while achieving deeper imaging penetration than conventional optical imaging. At the same time, PA images provide a spatial resolution similar to that of ultrasound imaging.

Photoacoustic tomography (PAT) is the most widely used PA imaging mode due to its simplicity and versatility. One major drawback of PAT is that the reconstruction algorithm used to estimate the initial pressure demands a large detection view angle for exact reconstruction, which is typically impractical in clinical applications. The goal of this thesis is to develop a novel methodology to increase the detection view angle by using two linear array transducers at different orientations. The relative position between the two transducers must be calibrated in order to combine the signals received by them.
A new calibration approach is developed using the ultrasound modality. The efficacy of the calibration method is demonstrated in both simulation and experiment. With the increased detection view angle, the reconstructed image shows more complete tissue structures than the one acquired by a single transducer. By combining the PAT images from the two transducers, the complementary structural information improves the image quality. Our approach does not require a calibration phantom, which greatly simplifies the calibration process and shortens the acquisition time. The use of linear array transducers, and the flexibility to position them to fit the tissue geometry, make this approach promising for clinical applications.

Preface

Part of the work in Chapter 3 has been published: Shu, Weihang, et al. "Image registration for limited-view photoacoustic imaging using two linear array transducers." SPIE BiOS. International Society for Optics and Photonics, 2015. I conducted all the experiments, analyzed the results, and wrote the manuscript. Min Ai helped conduct the experiments. Robert Rohling, Tim Salcudean, Purang Abolmaesumi, and Shuo Tang were involved throughout the project in concept formation and manuscript composition.

Part of the work in Chapter 3 has been published: Shu, Weihang, et al. "Broadening the detection view of 2D photoacoustic tomography using two linear array transducers." Optics Express 24.12 (2016): 12755-12768. I conducted all the experiments, analyzed the results, and wrote the manuscript. Min Ai helped conduct the experiments. Robert Rohling, Tim Salcudean, Purang Abolmaesumi, and Shuo Tang were involved throughout the project in concept formation and manuscript composition.
Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgements
Dedication
Chapter 1: Introduction
1.1 A brief history of photoacoustics
1.2 Photoacoustic modes: tomography and microscopy
1.3 Limited view problem in photoacoustic imaging
1.4 Problem statement and motivation
1.5 Organization of the thesis
Chapter 2: Photoacoustic Imaging Principle
2.1 Photoacoustic wave generation
2.2 Photoacoustic wave propagation
2.3 Photoacoustic image reconstruction
2.4 Photoacoustic absorption contrast
Chapter 3: Broadening view angle in 2D PAT using two linear transducers
3.1 Introduction
3.2 Photoacoustic tomography imaging system
3.3 2D calibration principle
3.4 The first calibration method – using a calibration phantom
3.4.1 2D experiment results of calibrating two transducers using a calibration phantom
3.4.2 2D experiment results of PAT by two transducers using a calibration phantom
3.5 The second calibration method – using ultrasound imaging
3.5.1 2D experiment results of calibrating two transducers using ultrasound
3.5.2 2D experiment results of PAT by two transducers using the ultrasound modality
3.6 Further consideration about the coplanar imaging plane
Chapter 4: Simulation of 3D calibration for photoacoustic imaging with two linear array transducers
4.1 Photoacoustic tomography simulation platform
4.2 Calibrating transducer position in 3D using the ultrasound modality
4.3 Simulation results of calibrating two transducers in 3D using the ultrasound modality
4.4 3D simulation results of PAT by two transducers
Chapter 5: Conclusion
5.1 Significance of work
5.2 Future work and improvement
Bibliography

List of Tables

Table 3.1 Summary of the overall system characterization
Table 3.2 Transformation parameters obtained under different calibration approaches
Table 4.1 Pixel coordinates of all SPs in the simulation setting
Table 4.2 Pixel coordinates of all SPs in the reconstructed 3D volume
Table 4.3 Linear regression analysis results on all SP points of Transducer B

List of Figures

Figure 1.1: Schematic illustration of photoacoustic imaging by Bme591wikiproject, retrieved and modified from http://en.wikipedia.org/wiki/Image:PASchematics_v2.png. Used under Creative Commons Attribution-ShareAlike 3.0 Unported license. (https://creativecommons.org/licenses/by-sa/3.0/deed.en)
Figure 1.2: A PAT setup for noninvasive transdermal and transcranial imaging of the rat brain in vivo with the skin and skull intact. © 2003 Nature Publishing Group [31]
Figure 1.3: An AR-PAM system setup. © 2009 American Institute of Physics [38]
Figure 1.4: Images reconstructed from simulated data corresponding to a numerical phantom using (a) 30 detectors over 90 degrees, (b) 60 detectors over 135 degrees, (c) 120 detectors over 180 degrees, (d) 240 detectors over 360 degrees. © 2013 Optical Society of America [40]
Figure 2.1: Diagram of the photoacoustic signal detected at position r0 and time t. r0 is the detector position and r is the PA source position. O is the origin. © American Institute of Physics [4]
Figure 3.1: The PAT imaging setup used in this thesis. ND:YAG is a Surelite II neodymium-doped yttrium aluminum garnet laser; OPO is an optical parametric oscillator.
Figure 3.2: (a) Top view of the calibration setup with ~90-degree relative angle between the two linear array transducers using a calibration phantom. (b) Reconstructed PAT image acquired by RA'. (c) Reconstructed image acquired by RA.
Figure 3.3: Demonstration of PAT imaging with the two-transducer method using a calibration phantom on a piece of paper sample. (a) Photograph of the printed paper sample with a square. (b) Reconstructed image of the printed paper acquired by transducer RA' after transformation. (c) Reconstructed image of the printed paper acquired by transducer RA. (d) Reconstructed image using data acquired by both transducers.
Figure 3.4: PAT imaging with the two-transducer method using a calibration phantom on two tubes filled with rabbit blood. (a) Photograph of the sample. (b) Reconstructed image of the phantom acquired by transducer RA' after transformation. (c) Reconstructed image of the phantom acquired by transducer RA. (d) Reconstructed image using data acquired by both transducers.
Figure 3.5: Top view of the imaging configuration using two identical linear array transducers. (a) shows the calibration process using ultrasound imaging, in which several channels of Transducer B (highlighted in red) are transmitting and Transducer A is receiving acoustic waves. (b) shows the PAT imaging where both transducers are receiving PA waves.
Figure 3.6: (a) Top view of the imaging platform with ~100-degree relative angle between the two linear array transducers. (b) Virtual ultrasound image and sketch diagram of Transducer A. SP1, SP2, SP3, SP4 represent the enabled channels No.64, No.96, No.110, and No.128, respectively. (c) Reconstructed image and sketch diagram of Transducer B. SP1', SP2', SP3', SP4' represent the reconstructed channels No.64, No.96, No.110, and No.128, respectively. (d) The reconstructed image of Transducer B after applying the transformation.
Figure 3.7: PAT imaging of a phantom containing a piece of paper printed with 5 points. (a) Photograph of the printed paper phantom and sketch diagram of the transducers. (b) Reconstructed image of the phantom acquired by Transducer B. (c) Reconstructed image of the phantom acquired by Transducer A after transformation. (d) Overlapped image combining acquired data from Transducer A and Transducer B.
Figure 3.8: PAT images of a sheet of paper printed with a regular octagon and 3 points. (a) Diagram of the printed paper phantom and sketch diagram of the transducers. (b) Image of the printed paper reconstructed by Transducer B. (c) Image of the printed paper reconstructed by Transducer A after transformation. (d) Reconstructed image using data acquired by both transducers.
Figure 3.9: Combined PAT images of a sheet of paper printed with a regular octagon and 3 points using two linear array transducers at (a) 180 degrees, (b) 120 degrees, (c) 90 degrees. (d)-(f) The corresponding combined PAT images as in the top row, except that the images obtained from the two transducers are shown in different colors, grayscale and red, respectively.
Figure 3.10: PAT images of a leaf skeleton phantom. (a) Photograph of the leaf skeleton phantom and sketch diagram of the transducers. (b) Image of the leaf skeleton phantom reconstructed by Transducer B. (c) Image of the leaf skeleton phantom reconstructed by Transducer A after transformation. (d) Reconstructed image using data acquired by both transducers.
Figure 3.11: PAT imaging of a plastic tube filled with rabbit blood. (a) Photograph of the rabbit blood phantom and sketch diagram of the transducers. (b) Image of the phantom acquired by Transducer B. (c) Image of the phantom acquired by Transducer A after transformation. (d) Reconstructed image using data acquired by both transducers.
Figure 3.12: Simulation results showing the variation of the maximum pixel intensity of the reconstructed SPs by Transducer B when it rotates or translates out of the image plane of Transducer A. (a)-(b) Transducer B receives ultrasound signal at different rotation angles around the x-axis. (c)-(d) Transducer B receives signal at different translational locations along the z-axis.
Figure 4.1: Schematic diagram of two linear array transducers in an arbitrary position.
Figure 4.2: 3D imaging of Transducer B by scanning Transducer A along the Z axis. The red dots indicate the enabled channels.
Figure 4.3: Schematic diagram of two linear array transducers performing 3D scanning in PAT using the calibration results.
Figure 4.4: Schematic diagram of (a) the two transducers after adjusting the lateral axis of Transducer B to the imaging plane of Transducer A, (b) the arrangement of the two transducers where Transducer B receives signal at different rotation angles around the y-axis, (c) the arrangement of the two transducers after the 3D calibration.
Figure 4.5: Illustration of the relative position of the two linear array transducers in the simulation. The global coordinate system is xyz. The coordinate systems xAyAzA and xByBzB are the local coordinate systems of Transducer A and B, respectively.
Figure 4.6: Illustration of all the scanning positions of Transducer A along the Z axis and the enabled channels of Transducer B. The enabled channel numbers are No.10, No.32, No.64, and No.96.
Figure 4.7: Binary volume of the reconstruction of the enabled channels in Transducer B using the signal detected by Transducer A.
Figure 4.8: Linear fitting results of the reconstructed SPs of Transducer B.
Figure 4.9: (a) Simulation sample of a cube with eight edges. (b) Reconstructed 3D volume by Transducer A. (c) Reconstructed 3D volume by Transducer B. (d) Reconstructed 3D volume using data acquired by both Transducer A and Transducer B.
Figure 4.10: (a) Simulation sample of a tree branch structure. (b) Reconstructed 3D volume by Transducer A. (c) Reconstructed 3D volume by Transducer B. (d) Reconstructed 3D volume using data acquired by both Transducer A and Transducer B.

List of Abbreviations

CT  Computed Tomography
MRI  Magnetic Resonance Imaging
CLSM  Confocal Laser Scanning Microscopy
OCT  Optical Coherence Tomography
PA  Photoacoustic
OA  Optoacoustic
PAT  Photoacoustic Tomography
PAM  Photoacoustic Microscopy
AR-PAM  Acoustic Resolution Photoacoustic Microscopy
OR-PAM  Optical Resolution Photoacoustic Microscopy
PAE  Photoacoustic Endoscopy
HbO2  Oxy-haemoglobin
HHb  Deoxy-haemoglobin
OPO  Optical Parametric Oscillator
ND:YAG  Neodymium-doped Yttrium Aluminum Garnet
FOV  Field of View
SP  Source Point

Acknowledgements

I would like to express my appreciation to my supervisor Dr. Shuo Tang for her continuous guidance and support.

I would also like to thank every single member of the Biophotonics group. Thanks to Leo Pan, Min Ai, and Jiayi Chen for their cooperation and brilliant ideas in the photoacoustic project. Thanks to Mengzhe Shen and Lin Huang for their selfless assistance and support throughout the years. Thanks to Tom Lai, Myeong Jin Ju, and Xin Zhou for their help in the weekly group meeting.
I would also like to thank Professor Robert Rohling, Professor Tim Salcudean, Professor Purang Abolmaesumi, and every single member of the Robotic Control laboratories for their help and support.

Special thanks are owed to my parents, who have supported me throughout my years of education, both morally and financially.

Weihang Shu
University of British Columbia
July 2016

Dedication

This thesis is dedicated to my family for their unconditional love every step of the way.

Chapter 1: Introduction

Technology developments in biomedical imaging have provided physicians and technicians with various noninvasive diagnostic tools, such as X-ray radiography, X-ray computed tomography (CT), magnetic resonance imaging (MRI), ultrasonography, and optical modalities. Among these imaging techniques, ultrasonography is relatively mobile and low-cost, and provides good image contrast between soft tissue and bone. However, ultrasonography depends on acoustic impedance to generate image contrast, and therefore has difficulty differentiating soft tissues because of their comparatively similar acoustic impedance values. On the other hand, optical imaging modalities, such as confocal laser scanning microscopy (CLSM), optical coherence tomography (OCT), and multi-photon microscopy (MPM), are also widely used in biomedical applications. Optical imaging can offer the highest resolution among all the above-mentioned modalities by the use of light at various wavelengths. However, the significant scattering of light in tissue limits the imaging depth of ballistic photons (photons that propagate straight through) to about 1 mm. Although optical imaging with diffuse photons (photons that zigzag through) can probe centimeters into tissue, its image resolution is very low. Therefore, it is a challenge for pure optical imaging to maintain high resolution while achieving large imaging depth.
In recent years, photoacoustic (PA) imaging [1–4] has emerged as a highly promising hybrid imaging modality between ultrasound and optical imaging. In PA imaging, the incident light is converted into acoustic waves, which are scattered much less. This conversion is referred to as the PA effect. The basic process of PA imaging is illustrated in Figure 1.1. A short pulsed laser beam shines on the tissue. The tissue absorbs the optical energy and undergoes an abrupt temperature increase, which leads to thermoelastic expansion. This expansion generates an acoustic wave, which propagates through the tissue to the surface and is recorded by ultrasonic detectors. The detected acoustic signal is then reconstructed to form an image of the initial optical absorption distribution. Compared with ultrasonic echography, PA imaging provides optical absorption contrast for biomedical imaging, while maintaining high-resolution image quality beyond the optical diffusion limit by combining optical excitation with acoustic acquisition. Therefore, PA imaging is usually considered a hybrid imaging modality that measures an optical property carried by acoustic waves.

Figure 1.1: Schematic illustration of photoacoustic imaging by Bme591wikiproject, retrieved and modified from http://en.wikipedia.org/wiki/Image:PASchematics_v2.png. Used under Creative Commons Attribution-ShareAlike 3.0 Unported license. (https://creativecommons.org/licenses/by-sa/3.0/deed.en)

Acoustic scattering in tissue is about three orders of magnitude weaker than optical scattering, which helps achieve high imaging penetration. In addition, as PA contrast arises from optical absorption, PA imaging can excite specific chemical compositions at different excitation wavelengths according to their absorption maxima. Hence, the inherent hybrid nature of PA imaging maintains deep tissue penetration while achieving optical absorption contrast.
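The conversion chain described above (absorption, heating, thermoelastic expansion, pressure) is commonly summarized in the PA literature by the relation p0 = Γ · μa · F, where Γ is the dimensionless Grüneisen parameter, μa the optical absorption coefficient, and F the local laser fluence. This relation is not stated explicitly in this chapter, and all numerical values in the following sketch are assumed, tissue-like numbers chosen only to illustrate the order of magnitude of the initial pressure:

```python
# Illustrative estimate of the initial photoacoustic pressure, p0 = Gamma * mu_a * F.
# All values below are assumptions (typical soft-tissue magnitudes), not thesis data.

gamma = 0.2        # Grueneisen parameter (dimensionless, assumed)
mu_a = 100.0       # optical absorption coefficient [1/m] (= 1 cm^-1, assumed)
fluence = 100.0    # local laser fluence [J/m^2] (= 10 mJ/cm^2, assumed)

p0 = gamma * mu_a * fluence  # initial pressure rise [Pa]
print(f"initial pressure p0 = {p0:.0f} Pa ({p0 / 1e3:.1f} kPa)")
# -> initial pressure p0 = 2000 Pa (2.0 kPa)
```

A kilopascal-scale initial pressure of this kind is what the ultrasound transducer ultimately detects after propagation and attenuation.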
PA imaging has been widely explored in various application areas such as skin, eye, breast, and brain imaging [5–12].

1.1 A brief history of photoacoustics

Research on the underlying mechanism of the PA or optoacoustic (OA) effect has a long history dating back to the 1880s, when Alexander Graham Bell discovered the generation of sound waves from a solid sample exposed to a beam of modulated sunlight [13]. It was found that the resulting acoustic signal depended on the type of material and its optical absorption of the incident light. Later, the PA effect was also observed in liquids and gases [14,15].

Despite the novelty and potential of this technique for non-visible spectrum measurement at the time, scientific research and technology development on the PA effect remained relatively inactive until the invention of the laser in the 1960s. The spectrally pure, directional, and intense laser beam provided researchers and engineers with a reliable light source for tissue excitation and drew significant interest back to the PA effect. Early PA studies in the 1970s and 1980s mainly focused on PA detection of gases and on photochemical studies [16,17]. Although potential detection applications in biological tissue were explored [18], it was not until the 1990s that various publications on biomedical applications began to appear [19–24]. Thereafter, PA imaging has grown rapidly.

Nowadays, PA imaging is widely investigated in biomedical research for clinical diagnosis. With the corresponding optical spectra, PA imaging is capable of offering anatomical, functional, metabolic, molecular, and genetic contrast by detecting certain optical absorbers, such as hemoglobin, melanin, water, and lipid. In the blood circulation system, hemoglobin is imperative as the oxygen carrier.
Using the predominant optical absorption of hemoglobin excited by visible light, PA imaging can provide anatomical and functional imaging of tissue metabolism [25]. The concentrations of water and lipid can indicate various diseases, and their distribution in tissue can be investigated by utilizing their strong optical absorption in the near-infrared range [26,27]. Melanin, the major pigment of melanoma, has a comparatively broad optical absorption spectrum from the ultraviolet to the near-infrared range. Therefore, it has been used as the target absorber for skin cancer diagnosis in PA imaging [28].

1.2 Photoacoustic modes: tomography and microscopy

PA imaging can be generally separated into two modes: photoacoustic tomography (PAT) and photoacoustic microscopy (PAM) [19,24,29–31]. In PAT, the ultrasound signal is excited over a large area by an unfocused laser beam and is received at multiple angles by an array transducer or by a scanning transducer at different orientations. An image of the initial pressure distribution is reconstructed from the detected ultrasound signal based on the acoustic speed and the geometry of the transducers. The reconstruction algorithm can be chosen based on several factors such as the PAT detection geometry, computational complexity, and accuracy. Figure 1.2 shows a typical PAT system for animal imaging.

Figure 1.2: A PAT setup for noninvasive transdermal and transcranial imaging of the rat brain in vivo with the skin and skull intact. © 2003 Nature Publishing Group [31]

In PAM, an A-line (1D) image is formed by either a focused laser beam or a focused ultrasound detector, based on the time-of-flight of the received ultrasound wave. A B-mode (2D) PA image is generated from multiple A-lines obtained by mechanically scanning the tissue sample, or the focused laser beam/ultrasound detector.
Depending on whether a focused ultrasound detector or a focused laser beam is used, PAM is further referred to as acoustic resolution photoacoustic microscopy (AR-PAM) or optical resolution photoacoustic microscopy (OR-PAM). OR-PAM [32–34] is more analogous to optical microscopy because its lateral resolution is determined by the focused laser beam. Therefore, its maximum penetration depth is severely limited by optical scattering to approximately 1 mm. On the other hand, the lateral resolution of AR-PAM depends on the focused ultrasound detector [35,36], which can be adjusted through the transducer bandwidth and focal length. AR-PAM has been used to image organs in a mouse several centimeters deep with a spatial resolution of a few hundred micrometers [37,38]. A typical AR-PAM system is shown in Figure 1.3.

Figure 1.3: An AR-PAM system setup. © 2009 American Institute of Physics [38]

1.3 Limited view problem in photoacoustic imaging

PAT is a promising PA mode for clinical applications due to its deep penetration, compatibility with medical ultrasound, and relatively simple configuration. However, exact reconstruction of a PAT image usually requires a large detection angle between the target sample and the transducer. An insufficient detection angle results in missing structures or blurring of detailed structures in the reconstructed image. This limitation is referred to as the limited-view problem [39]. Figure 1.4 illustrates how the reconstructed image quality is significantly affected by the detection angle. In Figure 1.4(a), the artifacts are severe, especially along the diagonal direction, indicating that a detection view angle of 90 degrees is insufficient for PAT reconstruction. As the detection view angle increases, the image artifacts are correspondingly suppressed. In Figure 1.4(b), the basic sample outlines can be observed at a 135-degree detection angle with a certain amount of artifacts.
In Figure 1.4(c), the sample boundaries are well reconstructed at 180 degrees, but reconstruction artifacts can still be observed. With full-circle detection at 360 degrees, as shown in Figure 1.4(d), the reconstruction presents a clean sample image with almost no reconstruction artifacts.

Figure 1.4: Images reconstructed from simulated data corresponding to a numerical phantom using (a) 30 detectors over 90 degrees, (b) 60 detectors over 135 degrees, (c) 120 detectors over 180 degrees, (d) 240 detectors over 360 degrees. © 2013 Optical Society of America [40]

In order to address the limited-view problem, various approaches have been developed. One approach is to design custom transducer arrays with a large detection view angle. Among the reconstructed image details, tissue interfaces are of particular interest for identifying organ boundaries. Because each boundary consists of small flat segments, and each segment transmits acoustic waves along the two opposite directions perpendicular to it, a boundary can be well reconstructed provided that every local normal of the boundary passes through the transducer. Therefore, to exactly reconstruct an arbitrary boundary in 2D, it is intuitive to design a ring-shaped transducer array that encloses the object. In practice, specialized ring-shaped transducer arrays [8,41–43] have been designed to achieve exact reconstruction with full-view detection. The sample is placed at the center of the array, and the PA signal generated from the sample can be detected at different receiving angles over a complete 360-degree circle for 2D imaging. However, these transducers require special customized designs and are not commonly available. For some organs, it is not possible to receive the PA signal over a 360-degree view angle in clinical applications.
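In contrast to the full 360 degrees of a ring array, the view angle that a finite linear aperture subtends at a point in the sample can be estimated from simple geometry as 2·arctan(w / 2d), where w is the array width and d the imaging depth. This formula is a standard geometric approximation, not an expression quoted from the thesis; a minimal sketch:

```python
import math

def max_receiving_angle_deg(width_m: float, depth_m: float) -> float:
    """Angle (in degrees) subtended at a point depth_m in front of the
    center of a linear array with total aperture width_m."""
    return math.degrees(2.0 * math.atan(width_m / (2.0 * depth_m)))

# A 38 mm wide linear array imaging a target 30 mm away:
angle = max_receiving_angle_deg(0.038, 0.030)
print(f"maximum receiving angle: {angle:.1f} degrees")  # roughly 65 degrees
```

This matches the 65-degree figure commonly cited for a 38 mm array at 30 mm depth, and quantifies why one-sided linear-array detection misses boundary segments whose normals fall outside that wedge.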
On the other hand, linear array transducers are widely used in clinical ultrasound applications due to their versatile imaging of the human body, free-hand guidance, and real-time capability. However, the detection angle of a linear array transducer is comparatively small. For example, a transducer of 38 mm width imaging a sample 30 mm away has a maximum receiving angle of 65 degrees. Therefore, the limited-view problem is more severe for linear array transducers than for customized transducers, and the detectable tissue structure is highly restricted by the tissue orientation relative to the receiving transducer [44,45].

In order to increase the detection view for PAT using linear array transducers, various methods have been developed. A straightforward solution is to scan the linear array transducer circularly, or to rotate the sample, to achieve full-view PAT [46–48]. When rotating the transducer, the positions of the transducer are predetermined and restricted by the rotation center and radius of a rotational stage. This makes the approach difficult to implement in clinical applications, where the body sites or organs usually have irregular shapes and geometries.

Another approach is to use acoustic reflectors to redirect otherwise undetected PAT signals to the linear array transducer. Cox et al. [49] utilized two acoustic reflectors perpendicular to the linear array transducer to create an infinitely wide virtual transducer array. However, this method was only demonstrated in numerical simulations. Huang et al. [50] and Li et al. [51] proposed using a single 45-degree acoustic reflector, or two acoustic reflectors set at 120 degrees relative to each other, to double or triple the detection view. The relative position of the linear array and the acoustic reflectors is predetermined before data acquisition. The acoustic reflector creates a virtual transducer that acquires the acoustic signal from different directions and hence increases the detection view.
Although this approach can increase the view angle and maintain high imaging speed (without scanning), the acoustic reflector usually has to be placed close to the imaging area, which is not practical in clinical applications.

In addition to hardware improvements, various image processing algorithms have also been proposed to address the limited-view problem. Wu et al. [52] made use of acoustic speckle noise, which contains scattered and reflected PA signals from other directions, to expand the detection view. Because ultrasound imaging can provide information on the acoustically inhomogeneous properties of tissue, it was employed to estimate the Green's function of the tissue. With the approximated Green's function and the recorded PA signal, the PAT image from speckle noise can be extracted to achieve a larger detection view. Ma et al. [53] proposed to combine the filtered mean back-projection method with an iteration algorithm to reconstruct the distribution of optical absorption. The filtered mean back-projection was used to produce an initial distribution to shorten the iteration time and enhance the reconstruction speed. Huang et al. [54] developed a full-wave approach to iterative image reconstruction in PAT. Based on the acoustic wave equation, they established a discrete model and implemented an associated discrete forward and back-projection operator pair to minimize a cost function. These image processing algorithms show promising improvements in image quality and do not require extra hardware for data acquisition. However, they are usually computationally intensive and time-consuming [50,51].

1.4 Problem statement and motivation
As mentioned above, although PAT using a linear array transducer is commonly used in PAT systems, it usually suffers from the limited-view problem due to an insufficient detection angle.
The current approaches, such as using an acoustic reflector, rotating the transducer, or applying advanced algorithms, still have many limitations. The existing approaches are either time-consuming in data acquisition or computationally intensive in image reconstruction.

We present a relatively simple method to ameliorate the problem by using two linear array transducers positioned at different orientations. A new method is developed to calibrate the transducer positions so that the received signals from the two transducers can be integrated to improve the detection view angle and thus the image reconstruction. This approach can significantly expedite the data acquisition process without scanning while maintaining simplicity in image reconstruction.

My work mainly contains two parts. The first part is on improving limited-view PAT imaging in 2D using two linear array transducers. The methodology for calibrating the two transducers with either a calibration phantom or the ultrasound modality is investigated and presented. The aim is to devise a simple and convenient scheme that improves the detection view angle in tissue.

The second part is on improving the calibration method using linear array transducers in 3D. The previous 2D calibration method faces the practical challenge of aligning the two transducers on the same imaging plane. By performing 3D scanning of the transducers, an arbitrary relative position between the two linear array transducers in 3D space can be calibrated, and a more general solution to integrate the two transducers can be achieved.

1.5 Organization of the thesis
Chapter 1: A brief overview of the background information of PA imaging is given. The limited-view problem for the PAT mode is introduced and our motivation for this thesis is presented.

Chapter 2: The principles of PA imaging are presented, including PA signal generation, acoustic wave propagation, PAT image reconstruction, and contrast mechanisms.
Chapter 3: PAT imaging using two linear array transducers in 2D is presented. Specifically, methods of calibrating the two linear array transducers are reported and the results are verified by simulations and experiments of PAT imaging.

Chapter 4: A 3D calibration method aiming to align the two transducers in more general positions is presented. The calibration approach and the simulation results are presented and discussed.

Chapter 5: Summary of all the work conducted in the thesis and discussion of possible future directions for the project.

Chapter 2: Photoacoustic Imaging Principle
In this chapter, an overview of the principle of PA imaging is presented. First, the PA wave generation process is discussed, including the PA effect mechanism and its two requirements. Then, the acoustic wave propagation in tissue is described. Next, a universal back-projection reconstruction algorithm is given. Finally, the fundamental absorption contrast property of PA imaging is described.

2.1 Photoacoustic wave generation
In the PA effect, the energy of incident short-pulsed light is converted into acoustic energy through thermoelastic expansion. In particular, the irradiated tissue sample absorbs photons (e.g., near-infrared light for biomedical tissue imaging) through specific tissue chromophores, such as hemoglobin, and converts the absorbed energy to heat. The subsequent thermo-elastic expansion produces a localized pressure change and generates acoustic pressure waves. The successful generation of the PA effect usually requires short laser pulses on the order of nanoseconds. This requirement is due to the fact that two physical conditions, known as the thermal and stress confinements, need to be fulfilled in order to efficiently convert optical energy into acoustic signals.

Firstly, the absorption of optical energy produces a temporary temperature increase. This temperature increase should be abrupt enough to generate the subsequent thermo-elastic expansion.
Meanwhile, heat dissipation in tissue will decrease the local temperature rise and inhibit the PA effect. In order to build up a sufficiently rapid temperature rise, the temporal duration of the laser pulse $\tau_p$ should be shorter than the time scale for heat dissipation $\tau_{th}$, so that heat diffusion during optical excitation can be neglected. This condition is referred to as thermal confinement. Equation (2.1) gives the approximate time scale for the dissipation of absorbed laser energy by thermal conduction [4,55]:

$$\tau_{th} \approx \frac{L_p^2}{4 D_T} \qquad (2.1)$$

where $L_p$ is the characteristic linear dimension of the excited structure, and $D_T$ is the thermal diffusivity of the sample. For soft tissue, the typical value of $D_T$ is around $1.4 \times 10^{-3}\ \mathrm{cm^2/s}$ [56]. Considering the resolution of most typical PA imaging systems, $L_p = 50\ \mu\mathrm{m}$ is a reasonable lower-limit approximation of the tissue structure that the system can differentiate. Hence, the thermal dissipation time $\tau_{th}$ is estimated to be several milliseconds. Compared with the typical laser pulse width on the order of nanoseconds, the thermal confinement condition is typically met.

On the other hand, the stress created by thermoelasticity will decrease due to stress relaxation in a tissue sample. The stress relaxation time $\tau_s$ in tissue can be described as [4]:

$$\tau_s = \frac{L_p}{c} \qquad (2.2)$$

where $c$ is the acoustic speed of the medium. Using the typical acoustic speed in soft tissue of $1540\ \mathrm{m/s}$, as well as the $L_p$ value mentioned above, the estimated stress relaxation time for soft tissue is in the tens of nanoseconds. In order to build up an abrupt localized stress, the laser pulse width $\tau_p$ should be shorter than the tissue stress relaxation time $\tau_s$, so that the thermo-elastic expansion can be well developed before stress relaxation takes place. Therefore, the pulse width of the incident light should be on the time scale of a few nanoseconds. This condition is referred to as stress confinement.
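The two confinement estimates above can be checked numerically with the typical soft-tissue values quoted in the text; this is a minimal sketch, and the variable names are illustrative rather than taken from the thesis:

```python
# Numerical check of the thermal and stress confinement estimates
# (Equations 2.1 and 2.2) for typical soft-tissue values.

L_p = 50e-4        # characteristic dimension: 50 um expressed in cm
D_T = 1.4e-3       # thermal diffusivity of soft tissue, cm^2/s
c = 1540e2         # speed of sound in soft tissue: 1540 m/s in cm/s

tau_th = L_p**2 / (4 * D_T)   # thermal relaxation time, Eq. (2.1)
tau_s = L_p / c               # stress relaxation time, Eq. (2.2)

print(f"thermal confinement time: {tau_th * 1e3:.2f} ms")   # ~4.46 ms
print(f"stress confinement time:  {tau_s * 1e9:.1f} ns")    # ~32.5 ns
```

The result confirms the text: a nanosecond pulse is about six orders of magnitude shorter than the thermal time scale and about one order shorter than the stress time scale.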
When the thermal and stress confinement conditions are satisfied, the thermo-elastic expansion produces an abrupt localized pressure increase which can be described as [4,55]:

$$p_0 = \left( \frac{\beta c^2}{C_p} \right) \mu_a F = \Gamma \mu_a F = \Gamma A \qquad (2.3)$$

where $\beta$ is the isobaric volume expansion coefficient in $\mathrm{K^{-1}}$, $c$ is the acoustic speed in cm/s, $C_p$ is the specific heat capacity at constant pressure in J/(kg·K), $\mu_a$ is the tissue absorption coefficient in $\mathrm{cm^{-1}}$, and $F$ is the local optical fluence in J/cm². Here $A = \mu_a F$ is the density of local energy deposition in J/cm³. $\Gamma$ is called the Grüneisen coefficient, a dimensionless parameter expressed as $\Gamma = \beta c^2 / C_p$. The Grüneisen coefficient relates the local energy deposition density $A$ to the pressure change $p_0$. For example, $\Gamma$ is ~0.80 for porcine subcutaneous fat tissue and ~0.20 for blood at 22 °C.

2.2 Photoacoustic wave propagation
After the initial pressure $p_0(r)$ is generated by the PA effect, the pressure propagates through the tissue to the surface, where it is detected by an ultrasound transducer. The PA wave propagation $p(r, t)$ at position $r$ and time $t$ is governed by the generalized wave equation [16,55,57–60]:

$$\left( \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \right) p(r, t) = -\frac{\beta}{C_p} \frac{\partial H(r, t)}{\partial t} \qquad (2.4)$$

where $H(r, t)$ is the heating function of the absorbed energy from the laser pulse. The solution of Equation (2.4) is obtained by applying a free-space Green's function, and the detailed derivation can be found in [57,59,61–63]. A laser pulse width on the order of nanoseconds is sufficiently short to be treated as a delta function for simplicity.
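Equation (2.3) can be illustrated with a rough numeric example. The Grüneisen value for blood is from the text; the absorption coefficient and fluence below are assumed, order-of-magnitude values, not measurements from the thesis:

```python
# Sketch: evaluating the initial PA pressure of Eq. (2.3), p0 = Gamma * mu_a * F.
# mu_a and F are assumed illustrative values.

Gamma = 0.20      # Grüneisen coefficient of blood at 22 C (from the text)
mu_a = 200.0      # assumed blood absorption coefficient near 532 nm, 1/cm
F = 20e-3         # assumed local fluence, J/cm^2 (20 mJ/cm^2)

A = mu_a * F      # deposited energy density, J/cm^3
p0_MPa = Gamma * A  # 1 J/cm^3 corresponds to 1 MPa, so p0 is in MPa

print(f"deposited energy density A = {A:.2f} J/cm^3")  # 4.00 J/cm^3
print(f"initial pressure p0 = {p0_MPa:.2f} MPa")       # 0.80 MPa
```

A sub-MPa initial pressure of this order is what the transducer ultimately detects after attenuation along the propagation path.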
The heating function can then be treated as [59]:

$$H(r, t) = A(r)\,\delta(t) = \frac{p_0(r)}{\Gamma}\,\delta(t) \qquad (2.5)$$

During the measurement, where the detector at position $r_0$ receives the PA signal from the source $p_0(r)$ at position $r$, the PA signal $p_d(r_0, t)$ detected at position $r_0$ and time $t$ can be described as:

$$p_d(r_0, t) = \frac{\partial}{\partial t} \left[ \frac{t}{4\pi} \iint_{|r - r_0| = ct} p_0(r)\, d\Omega \right] \qquad (2.6)$$

where $d\Omega$ is the solid angle element of the vector $r$ with respect to the point at $r_0$, as shown in Figure 2.1.

Figure 2.1: Diagram of the photoacoustic signal detected at position $r_0$ and time $t$. $r_0$ is the detector position and $r$ is the PA source position. $O$ is the origin. © American Institute of Physics [4]

In PAT, after the PA signal acquisition at the surface of the tissue sample, an inverse algorithm is applied to reconstruct the initial pressure $p_0(r)$ from the measured data $p_d(r_0, t)$. An optical-absorption-based image is then obtained.

2.3 Photoacoustic image reconstruction
In PAT, a proper image reconstruction algorithm is indispensable to rebuild the initial pressure distribution $p_0$, generated by optical absorption, from the acoustic signal detected by the transducer at the surface of the imaged tissue. The image reconstruction is an inverse process that recovers the initial pressure from the measured data. Several algorithms have been developed to reconstruct the original pressure, such as the inverse Radon transform [19], the Fourier transform [65], and back-projection [60]. For all the experiments and simulations in this thesis, the back-projection algorithm is chosen to reconstruct the PAT image for its simplicity and versatility.
The universal back-projection reconstruction for three-dimensional PAT can be described as [60]:

$$p_0(r) = \int_{\Omega_0} b\!\left(r_0,\, \bar{t} = |r - r_0|\right) \frac{d\Omega_0}{\Omega_0} \qquad (2.7)$$

with the back-projection term related to the measurement at position $r_0$ given by

$$b(r_0, \bar{t}) = 2 p_d(r_0, \bar{t}) - 2 \bar{t}\, \frac{\partial p_d(r_0, \bar{t})}{\partial \bar{t}} \qquad (2.8)$$

where $\bar{t} = ct$ is the distance the acoustic wave travels in time $t$. The back-projection assumes that the wavelength of the PA signal is sufficiently small compared with the distance from the PA source absorbers to the detector. Equation (2.8) indicates that the initial pressure is also related to the first derivative of the acquired acoustic pressure, rather than being simply proportional to the acoustic pressure itself.

2.4 Photoacoustic absorption contrast
In PAT, the local pressure $p_0$, which represents tissue properties, is reconstructed. Based on Equation (2.3), the PA imaging contrast is dominated by several tissue properties, such as the Grüneisen coefficient and the absorption coefficient. The Grüneisen coefficient is related to the material thermo-mechanics, including the mechanical ($\beta$, $c$) and the thermodynamic ($C_p$) properties of the tissue. This thermal-mechanical coefficient has very low variation in soft tissues. The optical absorption coefficient ($\mu_a$) describes the strength of light absorption by tissue. It can vary by several orders of magnitude among various biomedical tissues and depends on the wavelength of the light. As optical absorption is the dominant contrast in PA imaging, PA imaging is considered an absorption-based imaging modality. The optical absorption of common tissue chromophores can be described as follows: oxy-haemoglobin (HbO2) and deoxy-haemoglobin (HHb) are the main absorbers in tissue. Their absorption coefficients at physiological concentrations are orders of magnitude higher than those of other chromophores in the visible to NIR range. This preferential absorption leads to high image contrast of vasculature in PA imaging [65].
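The back-projection idea of Equations (2.7) and (2.8) can be sketched in 2D as a delay-and-sum loop: each pixel accumulates the back-projection term $b$ sampled at the time of flight from the pixel to each element. This is a simplified illustration, not the thesis implementation; the uniform weighting (instead of the solid-angle factor $d\Omega_0/\Omega_0$) and all names are assumptions:

```python
# Simplified 2D back-projection sketch based on Eqs. (2.7)-(2.8).

import numpy as np

def back_project(rf, elem_xy, grid_x, grid_y, c=1540.0, fs=40e6):
    """rf: (n_elem, n_samples) received PA channel data.
    elem_xy: (n_elem, 2) element positions in metres.
    grid_x, grid_y: 1-D pixel coordinates in metres."""
    n_elem, n_samp = rf.shape
    # b(t) = 2 p(t) - 2 t dp/dt, the universal back-projection term (Eq. 2.8)
    t = np.arange(n_samp) / fs
    dp = np.gradient(rf, 1.0 / fs, axis=1)
    b = 2.0 * rf - 2.0 * t * dp
    img = np.zeros((grid_y.size, grid_x.size))
    X, Y = np.meshgrid(grid_x, grid_y)
    for e in range(n_elem):
        # time-of-flight from every pixel to this element, as a sample index
        dist = np.hypot(X - elem_xy[e, 0], Y - elem_xy[e, 1])
        idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_samp - 1)
        img += b[e, idx]        # uniform weights instead of dOmega/Omega
    return img / n_elem
```

With a simulated point source, the summed term peaks at samples consistent with the source's time of flight for every element, which is the coherent-summation principle the chapter relies on.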
Moreover, the difference in the absorption coefficients of HbO2 and HHb at certain wavelengths can be utilized to quantify blood oxygenation by applying multi-wavelength spectroscopic analysis [66]. Melanin is another major absorber, with high optical absorption over a wide light spectrum. The contrast from melanin absorption has been utilized in PA imaging to detect melanoma, a severe type of skin cancer. At wavelengths longer than 1000 nm, water and lipid absorption dominate, which has been utilized to characterize breast tissue with PA imaging.

Chapter 3: Broadening view angle in 2D PAT using two linear transducers
As mentioned in Chapter 1, the quality of the reconstructed image in PAT is severely degraded by an insufficient detection view. We developed a new calibration method that uses two linear array transducers at different orientations to increase the view angle. In this chapter, the PAT system used for the experiments is introduced. The methods of calibrating the relative position between the two linear array transducers are explained in detail. The improvement in PAT imaging quality with two linear array transducers is demonstrated using both simulation and experimental results.

3.1 Introduction
As discussed in Chapter 1, PAT requires a reconstruction algorithm to rebuild the initial pressure distribution of the tissue. However, exact PAT reconstruction usually needs a large detection view between the tissue and the transducer, which is often impractical in clinical applications. An insufficient detection view may cause severe reconstruction artifacts, such as blurring of sharp edges or even missing structures. This limited-view problem is a common issue for linear array transducers. In order to address it, we present a simple approach for PAT imaging using linear array transducers: two linear array transducers placed at different orientations to broaden the detection view.
The positions of the two transducers are not predetermined and can be adjusted with flexibility to fit the specific geometry of an imaging site. Therefore, the relative position between the two linear array transducers needs to be calibrated. We propose a new method of calibration using ultrasound imaging. Compared with existing methods, this approach has several advantages. First, the imaging system does not need any mechanical rotation, which can significantly reduce the imaging time. Second, this approach provides flexibility in the positioning of the two transducers, which is convenient for clinical applications. We believe our approach can provide a practical and simple solution to the limited-view problem of PAT imaging with linear array transducers in clinical applications.

3.2 Photoacoustic tomography imaging system
Figure 3.1 shows the schematic of the experimental setup for our PAT imaging system. This PAT system was developed by previous students and was expanded here by using two linear array transducers to broaden the detection view. The laser source contains a Q-switched Nd:YAG laser (Surelite II, Continuum, Inc., Santa Clara, CA, USA) at 532 nm wavelength. The 532 nm light pumps an optical parametric oscillator (OPO) from the same manufacturer to generate tunable wavelengths. The OPO output is tunable from 680 nm to 2500 nm wavelength, with a pulse width of 5 ns and a repetition rate of 10 Hz. The imaging system uses two identical conventional ultrasound linear array transducers controlled by a SonixMDP ultrasound imaging system (Analogic Corporation, Richmond, BC, Canada). The ultrasound imaging system is capable of enabling selected transducer channels for acoustic wave transmission. The two linear array transducers (L14-5/38 Linear) are positioned in the same plane. Each linear array transducer has 128 channels with a 7.2 MHz center frequency, a minimum 70% fractional bandwidth (at -6 dB), and a 0.3 mm element pitch.
The acoustic signal received by the transducer array is sent to a specialized research module, the SonixDAQ (Analogic Corporation, Richmond, BC, Canada), for data acquisition. This module acquires and digitizes the pre-beamformed radio-frequency signal from all 128 channels individually at a sampling rate of 40 MHz and 12-bit resolution. Both PA and ultrasound images are reconstructed using a back-projection algorithm [60]. The lateral and axial resolution of the system are 0.52 mm and 0.44 mm [67], respectively. In the ultrasound imaging mode, the laser source is switched off; one transducer is connected to the SonixMDP to transmit acoustic waves and the other is connected to the SonixDAQ to detect the acoustic signal. In the PA modality, the laser source is switched on, and the two transducers are connected to the SonixDAQ to receive the PA signals sequentially for image reconstruction.

Figure 3.1: The PAT imaging setup used in this thesis. Nd:YAG is a Surelite II neodymium-doped yttrium aluminum garnet laser; OPO is an optical parametric oscillator.

Several phantom experiments have been performed to test the resolution and field of view of our PAT system. The details of these tests can be found in [67]. In order to determine the spatial resolution, the cross-section of a single strand of human hair embedded in gelatin was imaged. The field of view (FOV) of the PA system is mainly limited by the beam size of the incident light. A summary of the system characterization is listed in Table 3.1.

Table 3.1: Summary of the overall system characterization
  Frame rate: 10 Hz
  Field of view: ~3 cm²
  Peak laser energy output: 120 mJ
  Tunable laser wavelengths: 680 ~ 2500 nm
  Axial resolution (3 cm away from transducer): 0.44 mm
  Lateral resolution (3 cm away from transducer): 0.52 mm

3.3 2D Calibration principle
Using the PAT imaging system described above, the proposed method images the tissue sample using two linear array transducers.
Each transducer has its individual FOV of the target sample from a different orientation. The image reconstructed by a single transducer can capture only part of the tissue structure due to the limited-view problem. However, combining the detected information from the two linear array transducers can broaden the detection view and yield a more complete reconstructed structure. To achieve this, the main challenge is how to calibrate the relative position of the two transducers.

The calibration can be performed by spatially aligning two images of the same scene so that the corresponding points match each other after a transformation. The calibration process can be described in two steps. First, the two linear array transducers capture a common structure from their respective FOVs, using either a calibration phantom or ultrasound imaging. Some feature points from their respective FOVs can be extracted as corresponding points for further processing. How to obtain the two images of the same scene and find the corresponding points is presented in Sections 3.4 and 3.5. The second step of the calibration process is to solve the transformation parameters between the two FOVs by matching the corresponding points. In our experimental setup, where two identical linear array transducers are positioned in the same plane, only image rotation and translation need to be considered. Therefore, a global rigid transformation, or Euclidean transformation [68], is used as the transformation function to translate and rotate one image with respect to the other, keeping the distance between points and the angle between lines unchanged after transformation. The transformation function can be written as [68]

$$\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & h \\ 0 & 1 & k \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \qquad (3.1)$$

where $(X, Y)$ and $(x, y)$ are the coordinates of the corresponding points in the two given images, respectively.
Here $(h, k)$ denotes how the reconstructed image is translated with respect to the other in the two perpendicular coordinate directions. The rotation parameter $\theta$ denotes the orientation difference in the counter-clockwise direction. In order to determine the transformation parameters $(h, k, \theta)$, we need at least two pairs of corresponding points. In practice, we use more than two pairs to improve accuracy.

After the transformation parameters are solved in the calibration process, the relative positions of the two linear array transducers are known. The resolved geometric information can then be applied in the subsequent PAT imaging to combine the detected signals from the two transducers for a more complete PAT reconstruction.

3.4 The first calibration method – using a calibration phantom
In order to calibrate their relative position, the two linear array transducers need to capture a common structure from their respective FOVs. To achieve this, a traditional approach is to use a calibration phantom. For PAT, the calibration phantom contains point structures of high optical absorption. Practically, a point source is the ideal structure, as it can be detected at any orientation, and its high optical absorption provides high image contrast for feature point detection. PAT images of the calibration phantom are acquired by the two transducers. The two reconstructed PA images of the calibration phantom contain the same physical position information of the phantom sample. Hence, the feature correspondence between the two images is obtained by pairing up the corresponding points from each image. The coordinates of the corresponding points are applied to Equation (3.1) to solve the transformation parameters. Then we remove the calibration phantom and replace it with experimental tissues.
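The least-squares step of recovering $(h, k, \theta)$ from two or more corresponding point pairs can be sketched with a standard 2D Procrustes/SVD fit. The thesis does not specify its solver, so the implementation below is an assumption, and the function name is illustrative:

```python
# Sketch: solving the rigid-transform parameters (h, k, theta) of Eq. (3.1)
# from matched point pairs via a 2D Procrustes (Kabsch) fit.

import numpy as np

def solve_rigid_2d(src, dst):
    """src, dst: (n, 2) arrays of matched points with dst ~= R(theta) @ src + t.
    Returns (h, k, theta) as used in Eq. (3.1)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    # cross-covariance of the centred point sets
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    theta = np.arctan2(R[1, 0], R[0, 0])
    return t[0], t[1], theta
```

Because the fit is over-determined, using more than two point pairs averages out localization error in the detected feature points, which is why the calibration uses extra pairs in practice.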
The two reconstructed images acquired by the two transducers can then be registered during image reconstruction using the transformation parameters obtained in the calibration process.

3.4.1 2D experiment results of calibrating two transducers using a calibration phantom
In order to test the calibration-phantom method, several experiments on various samples have been conducted. In this section, an example is given to demonstrate the process of calibrating the two transducers using a calibration phantom in 2D. The two linear array transducers are positioned at approximately 90 degrees; the same approach can be applied to other angles as well. The excitation laser beam is set to 700 nm wavelength, and the incident local light fluence rate is kept at about 75 mJ/cm² on the surface of the phantom. A relatively high power is used on the phantom to obtain high signal intensity. The calibration phantom consists of a piece of paper printed with 6 points embedded in gelatin. Figure 3.2(a) shows the top view of the calibration setup. The two linear array transducers are both used as receiving arrays (RA). In the reconstructed image acquired by RA' (Figure 3.2(b)), all the points are well captured. Meanwhile, the corresponding points can also be depicted in the reconstructed image acquired by RA (Figure 3.2(c)). By comparing the distribution patterns of the 6 points in the two reconstructed images, we can find the matching pairs, such as CP1 to CP1' and CP2 to CP2'. The transformation parameters are resolved by substituting the coordinates of the matching pairs into Equation (3.1). The transformation parameters $(h, k, \theta)$ are then obtained for further PA image reconstruction.

Figure 3.2: (a) Top view of the calibration setup with a ~90-degree relative angle between the two linear array transducers using a calibration phantom. (b) Reconstructed PAT image acquired by RA'. (c) Reconstructed image acquired by RA.
3.4.2 2D experiment results of PAT by two transducers using a calibration phantom

Figure 3.3: Demonstration of PAT imaging with the two-transducer method using a calibration phantom on a printed-paper sample. (a) Photograph of the printed paper sample with a square. (b) Reconstructed image of the printed paper acquired by transducer RA' after transformation. (c) Reconstructed image of the printed paper acquired by transducer RA. (d) Reconstructed image using data acquired by both transducers.

After the transformation parameters are resolved, we replace the calibration phantom with samples. Figure 3.3 demonstrates the principle by imaging a piece of paper printed with a square embedded in gelatin. A photograph of the printed square paper phantom is shown in Figure 3.3(a). Figure 3.3(b) shows the reconstructed PAT image using the detected data from RA' alone; the horizontal borders of the square are missing. This is because the wavefront generated from those borders propagates along the vertical direction and is outside the receiving angle range of RA'. Nevertheless, the missing horizontal borders can be well detected by RA, as shown in Figure 3.3(c), due to the approximately perpendicular transducer setup. Similarly, the vertical borders can be detected by RA' alone while almost completely missed by RA. After incorporating the PA signals from both linear array transducers, utilizing the transformation parameters from the calibration process, Figure 3.3(d) shows the complete image, which successfully visualizes all borders.

Figure 3.4: PAT imaging with the two-transducer method using a calibration phantom on two tubes filled with rabbit blood. (a) Photograph of the sample. (b) Reconstructed image of the phantom acquired by transducer RA' after transformation. (c) Reconstructed image of the phantom acquired by transducer RA. (d) Reconstructed image using data acquired by both transducers.
The efficacy of imaging optically absorbing vasculature is demonstrated by imaging two tubes filled with rabbit blood under laser excitation at a wavelength of 532 nm. The relative position of the two tubes is shown in the photograph in Figure 3.4(a). The incident laser beam on the phantom surface is controlled to be under the ANSI safety limit [69]. Due to the strong optical absorption of rabbit blood at 532 nm, the acquired tube structure is imaged with high contrast between the blood and the background. In the image reconstructed by a single linear array transducer, one blood tube splits into two, as if each of the two walls of the tube were producing a distinct pulse. A possible explanation for this phenomenon is that the thermo-elastic expansion of the blood is more constrained by the tube walls at the boundary between the walls and the blood. A stronger photoacoustic signal is therefore generated there, and two parallel linear structures representing one tube can be observed in the reconstructed PAT image.

Similar to the paper phantom discussed before, in Figure 3.4(b) and Figure 3.4(c), each tube can only be visualized by the transducer with an approximately parallel orientation to it, and is almost missed by the transducer with the perpendicular orientation. Figure 3.4(d) shows the complete image obtained with the two-transducer approach, where the doubled detection view captures both tubes. This demonstrates the benefit of reconstructing the image using data acquired from two linear array transducers.

3.5 The second calibration method – using ultrasound imaging
The approach in Section 3.4 is to fix the relative position of the transducers through a mechanical fixture, which can then be calibrated by imaging a calibration phantom. The calibration needs to be repeated if the mechanical positions are changed.
However, this approach lacks the flexibility to adjust the transducer positions during imaging to better fit the shape of the sample, which is needed in clinical studies on patients. The use of a calibration phantom is also cumbersome and time-consuming. In this section, a new approach using the ultrasound modality for relative position calibration is presented. The calibration is performed by ultrasound imaging, where one transducer transmits and the other receives ultrasound signals. The calibrated relative position is then applied to the subsequent PAT imaging for correlating and combining the received acoustic signals from the two transducers to broaden the view angle.

Figure 3.5 illustrates the top view of the imaging configuration. Two identical linear array transducers (Transducers A and B) are positioned in the same imaging plane. The relative angle α between the two transducers can be arbitrary, ranging from 90 degrees (perpendicular position) to 180 degrees (face-to-face position), as long as the ultrasound signal transmitted from one transducer can be received by the other.

Figure 3.5: Top view of the imaging configuration using two identical linear array transducers. (a) shows the calibration process using ultrasound imaging, in which several channels of Transducer B (highlighted in red) are transmitting and Transducer A is receiving acoustic waves. (b) shows the PAT imaging where both transducers are receiving PA waves.

At first, the system operates in the ultrasound imaging mode for calibration, as shown in Figure 3.5(a). Several discrete channels on Transducer B are enabled to transmit ultrasound signals as acoustic source points (SPs). The distances between adjacent SPs are chosen to be different so that they can be differentiated. A virtual image which represents the locations of those SPs in the FOV of Transducer B can be obtained.
The location of Transducer B defines the top middle of the virtual image, so the SPs are located on the very first row of the virtual image. Since the sequence of the enabled channels is already known, the column locations of the SPs can also be determined on the virtual image. Therefore, based on the geometry and the sequence of the enabled channels, the virtual image containing the physical positions of the SPs can be obtained. Meanwhile, Transducer A works in receiving mode to detect the ultrasound signal and reconstructs an image of the same SPs in the FOV of Transducer A. Since the virtual image and the reconstructed image capture the same set of SPs, the FOVs of Transducers A and B can be registered by matching the relative positions of the corresponding SPs in the two images. Currently, manual matching of the SPs is applied to the two images. Since the distances between adjacent SPs have been chosen to be different, the order of the SPs can be clearly identified in both the virtual and reconstructed images. Thus, an SP in the virtual image can be matched with the corresponding SP in the reconstructed image. Alternatively, pattern matching algorithms could be used to automatically match the corresponding SPs between the two images without manually establishing the SPs' correspondence. Similarly, Transducer A can also be used as the transmitter while Transducer B serves as the receiver for the calibration.

The process of solving the calibration parameters is described in Section 3.3: the transformation parameters between the two FOVs are solved by matching the corresponding points (SPs). After the transformation parameters are solved, the relative position between the two linear array transducers is calibrated and their FOVs can be overlapped. The reconstructed-image FOV can be transformed with respect to the virtual-image FOV by the transformation parameters.
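Once $(h, k, \theta)$ are known, mapping coordinates from one FOV into the other is a direct application of Equation (3.1). The sketch below builds the homogeneous matrix and applies it to points; treating coordinates in millimetres and the helper names are illustrative assumptions:

```python
# Sketch: applying the calibrated rigid transform of Eq. (3.1)
# (translation matrix times rotation matrix) to 2D points.

import numpy as np

def make_transform(h, k, theta):
    """Homogeneous 3x3 matrix of Eq. (3.1)."""
    T = np.array([[1.0, 0.0, h],
                  [0.0, 1.0, k],
                  [0.0, 0.0, 1.0]])
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    return T @ R

def apply_transform(M, pts):
    """pts: (n, 2) array of (x, y); returns the transformed (X, Y) points."""
    homog = np.column_stack([pts, np.ones(len(pts))])
    return (homog @ M.T)[:, :2]

# Illustration with the parameters reported for the ~100-degree setup
M = make_transform(h=51.9, k=101.2, theta=np.deg2rad(100.8))
print(apply_transform(M, np.array([[0.0, 0.0], [10.0, 20.0]])))
```

In practice the same matrix (or its inverse) is applied to every pixel coordinate of the reconstructed image, e.g. through a standard image-warping routine, to overlay it on the virtual-image FOV.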
Next, the imaging system is switched from ultrasound to PAT imaging. Both linear array transducers work in receiving mode to acquire PAT signals, as shown in Figure 3.5(b). A PAT image can be reconstructed from the signal received by each transducer. Because the PAT image shares the same FOV as the ultrasound image for each transducer, the two PAT images reconstructed by these transducers can be aligned using the same transformation parameters solved in the ultrasound modality.

3.5.1 2D experiment results of calibrating two transducers using ultrasound

Figure 3.6 demonstrates the experimental result of the calibration process utilizing ultrasound. The relative angle between the two transducers is ~100 degrees, and four channels in Transducer A are enabled to transmit ultrasound signals, as shown in Figure 3.6(a). The enabled channels are No.64, No.96, No.110, and No.128. Figure 3.6(b) shows the sketch of Transducer A and the virtual image, which indicates the FOV of Transducer A. The four corresponding SPs representing the enabled channels, SP1, SP2, SP3, and SP4, appear at the top of the image. The exact positions of the SPs in the virtual image are determined by the sequence of the enabled channels on Transducer A. Figure 3.6(c) shows the sketch of Transducer B and the ultrasound image reconstructed from the signal received by Transducer B. When the enabled channels of Transducer A are within the receiving angle (~45 degrees), the corresponding SPs can be observed in the reconstructed image as SP1', SP2', SP3', and SP4'. Because the distances between adjacent SPs are different, the matching correspondence can be found by comparing the two images: SP1 to SP1', SP2 to SP2', SP3 to SP3', and SP4 to SP4'. Hence, the correspondences between the SP pairs from the two images are established, and the transformation parameters can be solved by applying the coordinates of the corresponding SP pairs to Equation (3.1).
For the example shown in Figure 3.6, the transformation parameters are obtained as h = 51.9 mm, k = 101.2 mm, and θ = 100.8 degrees. The reconstructed image after applying the transformation is shown in Figure 3.6(d), which matches Figure 3.6(b). These transformation parameters can be applied to the subsequent PAT imaging for image reconstruction as long as the transducers do not move.

Figure 3.6: (a) Top view of the imaging platform with ~100-degree relative angle between the two linear array transducers. (b) Virtual ultrasound image and the sketch diagram of Transducer A. SP1, SP2, SP3, and SP4 represent the enabled channels No.64, No.96, No.110, and No.128, respectively. (c) Reconstructed image and the sketch diagram of Transducer B. SP1', SP2', SP3', and SP4' represent the reconstructed channels No.64, No.96, No.110, and No.128, respectively. (d) The reconstructed image of Transducer B after applying the transformation.

After the transformation parameters are obtained from the ultrasound, the PAT modality can be switched on, where a laser illuminates the sample and both Transducers A and B receive PAT signals. Figure 3.7(a) shows the PAT imaging of a phantom containing a piece of paper with 5 printed points. The illumination wavelength is 700 nm and the incident local light fluence on the surface of the phantom is ~75 mJ/cm2. Here a relatively high local fluence is utilized to achieve a high signal-to-noise ratio. Figure 3.7(b) shows the PAT image acquired by Transducer B. Figure 3.7(c) shows the PAT image reconstructed by Transducer A and transformed using the calibration parameters obtained from the ultrasound modality. Figure 3.7(d) shows the combined image obtained by summing the previous two images pixel by pixel. In Figure 3.7(d), the points overlap well and the intensity of the points is increased.
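Combining the two registered PAT images amounts to mapping each pixel of one image through the calibrated rigid transform (h, k, θ) and summing intensities pixel by pixel. A dependency-free sketch with nearest-neighbour forward mapping follows; the function name, argument layout, and pixel-pitch handling are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def warp_and_sum(img_a, img_b, h, k, theta_deg, pixel_mm):
    """Map img_a into img_b's frame with a rigid transform, then sum pixel-wise.

    (h, k): translation in mm; theta_deg: rotation; pixel_mm: pixel pitch.
    Nearest-neighbour forward mapping keeps the sketch simple; a real
    implementation would use inverse mapping with interpolation.
    """
    th = np.radians(theta_deg)
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    out = img_b.astype(float).copy()
    rows, cols = np.indices(img_a.shape)
    xy = np.stack([cols.ravel(), rows.ravel()], axis=1) * pixel_mm  # (x, y) in mm
    uv = (xy @ R.T + np.array([h, k])) / pixel_mm                   # target pixels
    c, r = np.rint(uv[:, 0]).astype(int), np.rint(uv[:, 1]).astype(int)
    keep = (r >= 0) & (r < out.shape[0]) & (c >= 0) & (c < out.shape[1])
    np.add.at(out, (r[keep], c[keep]), img_a.ravel()[keep])
    return out
```

With h = k = 0 and θ = 0 this reduces to a plain pixel-wise sum, which is the combination used once the two FOVs coincide.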
This shows that the calibration parameters obtained from the ultrasound imaging can be applied to the PAT imaging, and the images from the two transducers can be combined to increase the view angle and signal-to-noise ratio.

Figure 3.7: PAT imaging of a phantom containing a piece of paper printed with 5 points. (a) Photograph of the printed paper phantom and sketch diagram of the transducers. (b) Reconstructed image of the phantom acquired by Transducer B. (c) Reconstructed image of the phantom acquired by Transducer A after transformation. (d) Overlapped image which combines acquired data from Transducer A and Transducer B.

In the above ultrasound-based calibration approach, Transducer A transmits while B receives ultrasound (Calibration 1). The calibration can also be performed with Transducer B transmitting and A receiving ultrasound (Calibration 2). A more traditional calibration approach, discussed in Section 3.4, uses a phantom: the two transducers image the same phantom and the two images are registered to find the calibration parameters (Calibration 3). Here the printed points can be used as the SPs, and the two PAT images of the phantom can be registered to obtain the calibration parameters. Table 3.2 shows the transformation parameters obtained using the three calibration approaches. As we can see, the transformation parameters match closely among the three approaches, indicating that the ultrasound-based calibration has similar accuracy to the phantom-based calibration. Using ultrasound to perform the calibration without the need for a phantom is advantageous in clinical applications, where positioning a phantom can be cumbersome and sometimes impractical. The accuracy of the proposed ultrasound calibration method is affected by the accuracy of the assumed acoustic speed, which is a common issue in PAT imaging.
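The residual calibration error is quantified below by extracting the intensity-weighted centre of gravity of each printed-point blob and averaging the distances between matched points. A minimal sketch of that computation (the helper names are ours; the thesis does not specify its implementation):

```python
import numpy as np

def blob_centroid(blob):
    """Intensity-weighted center of gravity of a 2D blob image, as (row, col)."""
    blob = np.asarray(blob, float)
    rows, cols = np.indices(blob.shape)
    total = blob.sum()
    return np.array([(rows * blob).sum() / total, (cols * blob).sum() / total])

def mean_mismatch(points_a, points_b):
    """Average Euclidean distance between matched point pairs (same ordering)."""
    a, b = np.asarray(points_a, float), np.asarray(points_b, float)
    return np.linalg.norm(a - b, axis=1).mean()
```

Scaling the pixel-space result by the pixel pitch gives distances in millimetres, comparable to the system resolution.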
After the transformation, the printed points in one reconstructed image should match their counterparts in the other reconstructed image. The mismatch of the printed points can be used to assess the accuracy of the calibration process. As each printed point is represented by a blob in the reconstructed image, the center of gravity of each blob is calculated as the coordinate of the point. The average distance between the matching printed points indicates their mismatch. For Calibrations 1 and 2, all the printed points are used to calculate the average distance. In Calibration 3, two points are utilized for calibration and the remaining 3 points are used for the average distance calculation. The average distance between the matching points is listed in Table 3.2. Compared with the PAT system resolution of 0.44 mm and 0.52 mm in the axial and lateral directions, respectively, the average distance between the matching points is within the resolution of the system.

Table 3.2 Transformation parameters obtained under different calibration approaches

  Transformation parameter                        Calibration 1   Calibration 2   Calibration 3
  X translation h (mm)                            52.2            51.9            51.5
  Y translation k (mm)                            100.6           101.2           101.8
  Rotation angle θ (degrees)                      100.7           100.8           100.2
  Average distance between matching points (mm)   0.2             0.2             0.1

Calibration 1: Transducer A transmitting and Transducer B receiving; Calibration 2: Transducer B transmitting and Transducer A receiving; Calibration 3: calibration phantom, with two point structures used as the feature points for calibration. Details are discussed in Section 3.4.

3.5.2 2D experiment results of PAT by two transducers using ultrasound modality

In order to experimentally demonstrate the validity of the calibration method using ultrasound, we carried out a phantom study. In Figure 3.8, a piece of paper printed with a regular octagon and 3 points is imaged.
Figure 3.8(a) shows the phantom and the transducer positions. The excitation wavelength is 700 nm. The incident local light fluence on the phantom surface is ~75 mJ/cm2. The relative angle between the two transducers is ~100 degrees. Figure 3.8(b) shows the PAT image reconstructed by Transducer B, and Figure 3.8(c) shows that by Transducer A after transformation. From Figure 3.8(c), we can see that Transducer A is unable to detect the horizontal borders of the regular octagon, because the PAT wave-fronts from those edges propagate along the vertical direction, which is out of the receiving angle of Transducer A. However, these PAT signals can be well acquired by Transducer B. A similar observation applies to the vertical borders, which can be clearly detected by Transducer A but are out of the receiving angle of Transducer B. In contrast, the combined image, shown in Figure 3.8(d), overcomes this problem by incorporating the PAT signals received from both linear array transducers. Both vertical and horizontal borders are visualized in the combined image.

Figure 3.8: PAT images of a sheet of paper printed with a regular octagon and 3 points. (a) Diagram of the printed paper phantom and sketch diagram of the transducers. (b) Image of the printed paper reconstructed by Transducer B. (c) Image of the printed paper reconstructed by Transducer A after transformation. (d) Reconstructed image using data acquired by both transducers.

In order to demonstrate that the transducers can be positioned with flexibility to fit the geometry of an imaging site, the two transducers are tested at three different orientations. The phantom is a piece of paper printed with a regular octagon and 3 points. Figure 3.9(a)-(c) show the combined PAT images in grayscale when the relative angle between the two transducers is 180, 120, and 90 degrees, respectively.
Figure 3.9(d)-(f) show the same combined PAT images, but with the image acquired by one transducer shown in grayscale and the other in red so that the signals from the two transducers can be differentiated. Figure 3.9(a) and (d) show the result when the relative angle between the two transducers is 180 degrees. At this angle, the two linear array transducers are oriented in a face-to-face, or parallel, position. The two transducers detect almost the same tissue structure, and combining the images can increase the signal level. However, structures perpendicular to the transducer surface cannot be captured by either transducer, as shown by the missing edges of the octagon. Figure 3.9(b) and (e) show the result when the relative angle between the two transducers is 120 degrees. Since the two transducers are not parallel to each other, they can capture some different structures, and the combined image shows a more complete octagon. Figure 3.9(c) and (f) show the result when the relative angle between the two transducers is 90 degrees. At this angle, the two linear array transducers are oriented in a perpendicular position. The structures captured by the two transducers are more complementary, and the combined image shows most structures of the octagon.

This result shows that the calibration method works for different orientation angles between the two transducers, as long as the two transducers fall within the FOV of the ultrasound imaging of each other, which is not difficult to achieve considering the relatively large FOV of ultrasound imaging. How much improvement in the PAT imaging can be achieved will depend on the orientation of the transducers and the properties of the tissue. For structures with sharp edges, the 90-degree orientation of the two transducers should be optimal. For structures with no sharp edges, a wide range of orientation angles should all improve the PAT imaging.
Figure 3.9: Combined PAT images of a sheet of paper printed with a regular octagon and 3 points using two linear array transducers at (a) 180 degrees, (b) 120 degrees, and (c) 90 degrees. (d)-(f) The corresponding combined PAT images as in the top row, except that the images obtained from the two transducers are shown in different colors, grayscale and red, respectively.

A leaf skeleton embedded in bovine gelatin, mimicking blood vessel branches in tissue, is also imaged. A photograph of the sample and the transducer positions is shown in Figure 3.10(a). The relative angle between the two linear array transducers is ~90 degrees. The excitation wavelength is 532 nm (output from the Nd:YAG laser) and the local light fluence on the sample surface is kept below 20 mJ/cm2, under the ANSI safety limit [69]. Figure 3.10(b) shows the PAT image reconstructed by Transducer B, and Figure 3.10(c) shows the image reconstructed by Transducer A after transformation. In both images, each linear transducer can only capture the part of the leaf skeleton structure that is nearly parallel to the lateral axis of the transducer. Figure 3.10(c) displays the central leaf stem captured by Transducer A, and in Figure 3.10(b), Transducer B detects the acoustic signal from the leaf branches. By incorporating the acoustic signals acquired by both linear transducers, a more complete leaf skeleton structure can be obtained, as indicated in Figure 3.10(d).

Figure 3.10: PAT images of a leaf skeleton phantom. (a) Photograph of the leaf skeleton phantom and sketch diagram of the transducers. (b) Image of the leaf skeleton phantom reconstructed by Transducer B. (c) Image of the leaf skeleton phantom reconstructed by Transducer A after transformation. (d) Reconstructed image using data acquired by both transducers.

Another sample phantom of soft tubing filled with fresh blood is also imaged to demonstrate the PAT imaging of blood. The photograph of the sample is shown in Figure 3.11(a).
The inverted U-curved transparent soft tubing is filled with fresh rabbit blood and embedded in bovine gelatin. The two linear transducers are positioned at a ~90-degree relative angle. The excitation wavelength is 532 nm and the local light fluence on the sample surface is kept below 20 mJ/cm2. In the reconstructed PAT image, the blood tube appears to split into two, as in Figure 3.4. Transducer A and Transducer B are each only able to capture part of the vessel structure, shown in Figure 3.11(c) and Figure 3.11(b), respectively. The combined image in Figure 3.11(d) shows a more complete structure of the vessel. Thus, combining two linear array transducers can increase the detection view angle and provide more structural information about tissue.

Figure 3.11: PAT imaging of a plastic tube filled with rabbit blood. (a) Photograph of the rabbit blood phantom and sketch diagram of the transducers. (b) Image of the phantom acquired by Transducer B. (c) Image of the phantom acquired by Transducer A after transformation. (d) Reconstructed image using data acquired by both transducers.

In the experiments shown above, the alignment of the two transducers to the same imaging plane is performed under visual guidance by eye. From the combined images, a good overlap of common structures is observed, which shows that aligning the two transducers to the same plane can be achieved with good accuracy under simple visual guidance. In some clinical applications, such as prostate imaging using a transrectal ultrasound transducer and an abdominal ultrasound transducer, the two transducers may not be visible by eye simultaneously. Aligning the transducers by monitoring the ultrasound intensity in the calibration process will be necessary in that case, which is discussed in the next section.
3.6 Further consideration about the coplanar imaging plane

As discussed above, when using two linear array transducers to increase the view angle in PAT, the transducers are assumed to be located in the same imaging plane. Practically, this requirement can be met by transmitting and receiving acoustic waves between the two transducers while adjusting their positions during the calibration process. The two transducers are considered to be aligned in the same plane when the image formed by the receiving transducer shows the highest intensity of the reconstructed SPs.

To validate the efficacy of aligning the two transducers using this intensity monitoring approach, simulations are carried out to study the signal intensity when one transducer is rotated or translated out of the imaging plane, as shown in Figure 3.12. Several channels on Transducer A are enabled to transmit ultrasound signals, and Transducer B receives the ultrasound signals for reconstruction at different off-plane rotations and translations. The calculation of the forward ultrasound wave propagation is performed with a k-space model [70–72] in Matlab, on a 160 × 160 × 130 voxel grid corresponding to a 48 mm × 48 mm × 39 mm space. The spatial step is 0.3 mm and the time step is 58.44 ns. The transducers are discretized into 128 pixels, corresponding to 39 mm in width. From the reconstructed image, the maximum pixel intensity is obtained to indicate the quality of the reconstructed image. In Figure 3.12(a), Transducer B is initially set within the same imaging plane as Transducer A at a 90-degree relative angle and is then rotated around the x-axis to receive ultrasound signals at different orientations. The change of the maximum pixel intensity at the different rotation angles is shown in Figure 3.12(b). As we can see, the maximum pixel intensity drops quickly as the rotation angle around the x-axis increases.
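As a consistency check on the simulation steps quoted above, the 58.44 ns time step matches the relation dt = CFL · dx / c, assuming k-Wave's default CFL number of 0.3 and a soft-tissue sound speed of 1540 m/s; note that the CFL number and sound speed are our inference, not values stated in the text:

```python
def time_step_ns(dx_m, sound_speed_m_s, cfl=0.3):
    """Time step in nanoseconds implied by the stability relation dt = CFL * dx / c."""
    return cfl * dx_m / sound_speed_m_s * 1e9

# 0.3 mm spatial step with c = 1540 m/s reproduces the 58.44 ns step above.
print(round(time_step_ns(0.3e-3, 1540.0), 2))
```

The same relation with c = 1500 m/s gives exactly 60 ns, the time step quoted for the Chapter 4 simulations, which suggests the two setups differ only in the assumed sound speed.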
In Figure 3.12(c), Transducer B is first positioned in the same plane as Transducer A at a 90-degree relative angle and is then translated along the z-axis. Figure 3.12(d) shows the maximum pixel intensity as a function of the translation distance. The maximum intensity decreases rapidly as the translation along the z-axis increases. The simulation results show that the maximum pixel intensity of the reconstructed SPs can be used to guide the alignment of the transducers to the same image plane with high accuracy.

Figure 3.12: Simulation results showing the variation of the maximum pixel intensity of the reconstructed SPs by Transducer B when it rotates or translates out of the image plane of Transducer A. (a)-(b) Transducer B receives ultrasound signals at different rotation angles around the x-axis. (c)-(d) Transducer B receives signals at different translational locations along the z-axis.

In 2D PAT, using two linear array transducers at different orientations can capture more sample structure while suppressing reconstruction artifacts. The challenge of this approach is how to tune the two transducers into the same image plane. Off-plane imaging can introduce calibration error and affect the final reconstruction. Although the ultrasound intensity can be used as guidance to tune the two linear array transducers into the same imaging plane for 2D PAT, there can still be alignment error, and the tuning process takes extra time. Therefore, we also propose to calibrate the relative position in 3D space, which will be discussed in the next chapter.

Chapter 4: Simulation of 3D calibration for photoacoustic imaging with two linear array transducers

As mentioned in the previous chapter, 2D PAT using two linear array transducers suffers from the coplanar imaging requirement. Although the signal intensity can be utilized to tune the transducers into the same imaging plane, the process can generate errors and is time-consuming.
In order to ameliorate this problem, we also propose to calibrate the transducer positions in 3D and perform PAT imaging in 3D. In this chapter, the calibration method using ultrasound is extended into 3D. The accuracy of the 3D calibration approach is validated by k-Wave simulation. First, a brief summary of the k-Wave simulation platform is given. Then, the 3D calibration principle using 3D scanning is proposed. Next, the 3D simulation results of calibrating two linear array transducers are presented. Finally, the 3D simulation results of PAT by two linear array transducers are outlined.

4.1 Photoacoustic tomography simulation platform

In this chapter, all the simulations are done on the k-Wave platform. k-Wave is a freely available third-party MATLAB toolbox for acoustic wave field simulation [72]. Mimicking the realistic PA imaging process, the simulation consists of two basic parts: forward acoustic wave propagation and inverse backward reconstruction. In the forward wave propagation simulation, k-Wave couples several first-order acoustic wave equations in 3D [72,73], which are derived from Equation (2.3) and Equation (2.4). The simulation is based on the k-space pseudo-spectral time domain solution [71,74–77] for homogeneous media. For the inverse reconstruction, the back-projection algorithm described in Section 2.3 is applied, considering the trade-off between detector generality and execution time for a large grid space.

The 3D simulation space is set on a 130 × 130 × 100 voxel grid. This effective working space is enclosed by an external perfectly matched layer of 20 voxels along each axis, which attenuates acoustic waves propagating within it or encountering it perpendicularly. This perfectly matched layer [72] prevents an acoustic wave leaving one side of the space from reappearing at the opposite side.
The whole voxel grid corresponds to a 39 mm × 39 mm × 30 mm geometric space, with a time step of 60 ns and a spatial step of 0.3 mm. In this simulation, the transducers are simplified as omnidirectional and no directivity pattern is considered.

4.2 Calibrating transducer position in 3D using ultrasound modality

As shown in Figure 4.1, under the more general condition in which the two linear array transducers are given arbitrary positions in 3D space, the image planes of the two transducers are usually not coplanar. In Chapter 3, the two transducers have to be aligned in the same plane, which can be time-consuming and inaccurate. In this chapter, we propose a method that can calibrate the transducer positions in 3D and perform PAT imaging in 3D.

Figure 4.1: Schematic diagram of two linear array transducers in an arbitrary position.

As shown in Figure 4.2, Transducer B is held stationary and several of its channels are enabled to transmit ultrasound signals. The locations of those activated channels are marked as SPs. Transducer A is translated along the Z axis to receive the acoustic signal from Transducer B at different positions. From the acoustic data acquired by scanning Transducer A, a 3D volume can be reconstructed, from which the positions of the SPs can be extracted. Since the SPs correspond to the enabled channels on the surface of Transducer B, all the SPs should fall on a straight line in 3D space. With the knowledge of the sequence of the enabled channels in Transducer B, a linear regression can be performed on the reconstructed SPs to determine the lateral axis and the centroid of the surface of Transducer B with respect to Transducer A.

Figure 4.2: 3D imaging of Transducer B by scanning Transducer A along the Z axis. The red dots indicate the enabled channels.
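The linear-regression step can be sketched as a per-axis fit of the reconstructed SP coordinates against channel index: the slope gives the lateral-axis direction, and evaluating the fit at the array's central channel gives the surface centroid. A minimal version follows; the function name and roll-angle convention are our own sketch, with the central channel taken as No.64 as in the text:

```python
import numpy as np

def transducer_pose(channels, sps, center_channel=64):
    """Recover the lateral axis and surface centre of a transducer from SPs.

    channels: enabled channel indices; sps: (N, 3) reconstructed SP
    coordinates. Returns (center, axis, roll_deg): the position of the
    central channel, the unit lateral-axis direction, and the axis's
    inclination to the XY plane (the roll angle).
    """
    c = np.asarray(channels, float)
    p = np.asarray(sps, float)
    slope, intercept = np.polyfit(c, p, 1)   # per-axis linear fit, p is (N, 3)
    center = slope * center_channel + intercept
    axis = slope / np.linalg.norm(slope)
    roll_deg = np.degrees(np.arcsin(abs(axis[2])))
    return center, axis, roll_deg
```

Feeding in the reconstructed SP coordinates of Table 4.2 with channels (10, 32, 64, 96) would yield the centroid and roll-angle estimates of the kind reported in Table 4.3.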
Once the relative position of Transducers A and B is calibrated using the above method, the two transducers can acquire PAT data together, and the image reconstruction can be achieved using the integrated data from both transducers. This is shown in Figure 4.3, where the imaging system is switched to the PAT modality. Both linear array transducers operate in receiving mode to acquire PAT signals from a tissue sample. Each transducer is translated along the Z direction to acquire acoustic signals in a 3D volume. With the knowledge of the translation spacing and the calibration results from the previous step, each element of either Transducer A or B is well determined in the universal coordinate system defined by Transducer A. Therefore, a PAT reconstruction algorithm can be applied to all the detected data for 3D volume reconstruction.

Figure 4.3: Schematic diagram of two linear array transducers performing 3D scanning in PAT using the calibration results.

On the other hand, the 3D scanning calibration results can also be used to guide the two linear array transducers to the same imaging plane for 2D imaging. Then, similarly to Sections 3.4 and 3.5, 2D PAT can be performed using the two linear array transducers. As demonstrated in Figure 4.4(a), after obtaining the lateral axis and the centroid of the surface of Transducer B, this information can be applied to shift Transducer B so that its lateral axis falls into the image plane of Transducer A. Because the 3D scanning result cannot provide information on the elevational axis of Transducer B, the image planes of the two transducers may still not be coplanar. Then, as shown in Figure 4.4(b), Transducer B is rotated around the y-axis at different rotation angles for ultrasound signal acquisition. The intensity of the received acoustic signal can be used as guidance to adjust the two transducers onto the same image plane.
As shown in Figure 4.4(c), the maximum intensity indicates the 3D position at which the two linear array transducers are aligned on the same imaging plane. Afterward, the 2D calibration approach discussed in Chapter 3 can be performed to calibrate the relative position of the two transducers in 2D space.

Figure 4.4: Schematic diagram of (a) the two transducers after adjusting the lateral axis of Transducer B to the imaging plane of Transducer A, (b) the arrangement of the two transducers where Transducer B receives signals at different rotation angles around the y-axis, and (c) the arrangement of the two transducers after the 3D calibration.

4.3 Simulation results of calibrating two transducers in 3D using ultrasound modality

In this section, a simulation is conducted to validate the proposed 3D calibration method discussed in Section 4.2. The simulation space is defined in Section 4.1 and the two linear array transducers are placed inside it. To simplify the simulation, each linear array transducer is represented by a line segment, discretized into 128 pixels. As shown in Figure 4.5, the center of Transducer A in the coordinate system xyz is at (65, 0, 50) pixels. The three axes of its local coordinate system x_A y_A z_A would become parallel to the corresponding axes of the global coordinate system xyz if it yawed -90 degrees. The center of Transducer B in the coordinate system xyz is at (34, 66, 61) pixels. The three axes of its local coordinate system x_B y_B z_B would become parallel to the corresponding axes of the global coordinate system xyz if it rolled -10 degrees.

Figure 4.5: Illustration of the relative position of the two linear array transducers in the simulation. The global coordinate system is xyz. The coordinate systems x_A y_A z_A and x_B y_B z_B are the local coordinate systems of Transducers A and B, respectively.
As displayed in Figure 4.6, four channels of Transducer B are enabled as SPs to transmit acoustic waves: No.10, No.32, No.64, and No.96. Transducer A operates in receiving mode and translates along the Z axis from 0 to 100 to detect the acoustic signal at different positions. The coordinates of all the SPs are listed in Table 4.1.

Table 4.1 Pixel coordinates of all SPs in the simulation setting

  No.   x (pixel)   y (pixel)   z (pixel)
  10    34          13          52
  32    34          34          55
  64    34          66          61
  96    34          98          67

One pixel corresponds to 0.3 mm.

Figure 4.6: Illustration of all the scanning positions of Transducer A along the Z axis and the enabled channels of Transducer B. The enabled channel numbers are No.10, No.32, No.64, and No.96.

The forward propagation of the acoustic waves from the enabled SPs is simulated using the k-Wave toolbox. The local pressures of the SPs generated by Transducer B propagate to Transducer A for signal detection. The acquired ultrasound signals are then back-projected to reconstruct the initial pressure distribution using the back-projection algorithm. Figure 4.7 demonstrates the reconstructed SPs of Transducer B as a binary volume using the data received by Transducer A. The SPs appear as spherical structures in the 3D space, and their signals can still be detected by Transducer A in spite of the limited-view problem.

Figure 4.7: Binary volume of the reconstruction of the enabled channels in Transducer B using the signal detected by Transducer A.

After the reconstruction of all the SPs, the center of gravity of each SP is extracted from the reconstructed volume to represent it. The pixel coordinates of the centers of gravity of all the SPs in the reconstructed 3D volume are listed in Table 4.2. The data in Table 4.2 should match those in Table 4.1, apart from small detection errors.

Table 4.2 Pixel coordinates of all the SPs in the reconstructed 3D volume

  No.   x (pixel)   y (pixel)   z (pixel)
  10    33.0        13.3        52.0
  32    32.9        34.1        54.9
  64    33.1        66.3        61.1
  96    32.5        98.0        67.0

One pixel corresponds to 0.3 mm.

From the four SPs, a linear regression is performed to determine the lateral axis of Transducer B. In addition, with the knowledge of the enabled channel indices, the centroid coordinates of the surface of Transducer B can also be obtained through the linear regression analysis. The linear regression results are outlined in Table 4.3 and Figure 4.8. As mentioned before, the exact centroid of Transducer B is at channel No.64, corresponding to (34, 66, 61) pixels, and the roll angle is 10 degrees. Compared with the exact simulation setting, the maximum difference in the centroid coordinates along any axis is ~1 pixel, and the difference in the roll angle is within 1 degree. Therefore, the transducer positions can be calibrated in 3D with good accuracy. The calibration results can be used either to guide the two linear array transducers into the same imaging plane or to determine all the detection element positions of both Transducers A and B in 3D.

Table 4.3 Linear regression analysis results on all the SP points of Transducer B

  Centroid x (pixel)   Centroid y (pixel)   Centroid z (pixel)   Included angle with XY plane (degrees)
  32.8                 66.3                 61.2                 10.2

One pixel corresponds to 0.3 mm.

Figure 4.8: Linear fitting results of the reconstructed SPs of Transducer B.

4.4 3D simulation results of PAT by two transducers

With the two transducer positions calibrated in 3D space in the previous section, PAT imaging using data received by the two registered transducers can be carried out. In order to validate the efficacy of the 3D calibration, several 3D PAT sample tests have been conducted in simulation. In Figure 4.9, a hollow cube with 12 edges is used as the PAT sample for the simulation study. Figure 4.9(a) shows the sample placed inside the simulation space for imaging.
Figure 4.9(b) and Figure 4.9(c) show the PA 3D volumes reconstructed by Transducers A and B, respectively. We can see that the PAT volume reconstructed by a single transducer is not sufficient to capture all the sample structures. The edges parallel to the surface of the transducer can be well detected, whereas the perpendicular edges have very low or even no intensity due to the limited-view problem. However, using the data from both transducers, the reconstructed volume shown in Figure 4.9(d) provides more sample details: almost all the edges can be observed.

Figure 4.9: (a) Simulation sample of a hollow cube with 12 edges. (b) Reconstructed 3D volume by Transducer A. (c) Reconstructed 3D volume by Transducer B. (d) Reconstructed 3D volume using data acquired by both Transducer A and Transducer B.

An artificial structure mimicking a tree branch (Figure 4.10(a)) is also simulated. The 3D volumes reconstructed by Transducers A and B are shown in Figure 4.10(b) and (c), respectively. A single transducer from one perspective is incapable of detecting all the sample details, as the white arrows indicate. Some tissue parts close to perpendicular to the transducer surface have very low signal intensity. By incorporating all the acoustic signals acquired by both transducers, a more complete tree branch structure can be obtained, as shown in Figure 4.10(d).

Figure 4.10: (a) Simulation sample of a tree branch structure. (b) Reconstructed 3D volume by Transducer A. (c) Reconstructed 3D volume by Transducer B. (d) Reconstructed 3D volume using data acquired by both Transducer A and Transducer B.

Using two linear array transducers at different orientations in 3D PAT can effectively alleviate the limited-view problem.
By calibrating the relative position between the two linear array transducers through 3D imaging, the transducers can be placed at arbitrary positions in 3D space, without the need to pre-align them within the same image plane as in 2D PAT. Although the 3D calibration process is more time-consuming and requires accurate mechanical motors for scanning, 3D PAT using two linear array transducers can provide more complete tissue structure and functional information in 3D for biomedical analysis.

Chapter 5: Conclusion

PAT has emerged as a promising hybrid imaging technology in the biomedical field. This imaging modality relies on optical absorption to generate PA signals; the excited acoustic signal then propagates to the surface of the tissue for detection. By its nature, PAT can achieve centimeter-level penetration depth while maintaining spatial resolution in the ultrasound range. The absorption-based contrast mechanism of PAT makes it suitable for functional analysis in pathological diagnoses such as blood oxygenation measurement and cancer diagnosis. Because of those advantages, PAT has been extensively researched and developed in recent years. In the image reconstruction of PAT, the limited-view problem remains a major challenge for imaging the complete structure of the target tissue. This thesis addresses the limited-view problem. The contribution can be divided into two parts. The first part broadens the detection view for PAT in 2D imaging by using two linear array transducers at different orientations. The second part extends the method into 3D space, using 3D scanning for better relative position calibration. This chapter explains the significance of the work done and suggests possible future directions.

5.1 Significance of work

Many groups have developed experimental methodologies or reconstruction algorithms for limited-view PAT imaging in order to improve image quality and accuracy.
We present a relatively simple and practical way to broaden the limited detection view of PAT using two linear array transducers. In order to calibrate the relative position of the two transducers in 2D imaging, two methods have been implemented and tested. The first method uses a calibration phantom, imaged by both transducers, as the reference. The results show a significant improvement in the completeness of the target sample in the image compared with the image reconstructed by a single transducer. However, the use of a calibration phantom complicates the imaging process and slows it down. The second method, which uses ultrasound imaging for calibration, expedites the imaging process noticeably while achieving similar image quality and accuracy to the first method. This method demonstrates the potential for imaging improvement using multiple linear array transducers.

Compared with approaches proposed by other research groups, our approach has several advantages. First, the imaging system does not need any mechanical rotation, which can significantly reduce the imaging time. Second, the calibration process uses ultrasound imaging from the same transducers, which can be easily applied in common PAT systems; the calibration therefore has the potential to be performed in seconds and fully automatically. Moreover, the signal-to-noise ratio of the ultrasound images is high, which benefits the calibration. Third, this approach provides flexibility in the positioning of the two transducers, which is convenient for clinical applications.

The 2D calibration method using two linear array transducers is fast and can acquire 2D PAT images. Nevertheless, it is difficult to align the two transducers within the same imaging plane. Therefore, the 2D calibration method is extended into 3D, relaxing the coplanarity requirement and enabling 3D imaging. The transmitting transducer is detected in 3D by the receiving transducer.
Thus, the relative position of the transducers can be calibrated in 3D. Afterward, 3D PAT data can be acquired by the two transducers, and the two 3D volumes can be combined to increase the view angle. Our simulation results show that the 3D calibration method can potentially handle a more general relative position between the two transducers while still doubling the detection view.

In PAT imaging, the use of linear array transducers is significant for translating this technology to clinical applications, because linear array transducers are widely available and compatible with standard medical ultrasound systems. Our method of using the ultrasound modality for calibration enables the simultaneous use of two linear transducers to increase the detection view angle. It provides a relatively simple and practical approach for improving PAT imaging, with potential application in the clinic.

5.2 Future work and improvement

We have demonstrated the use of two transducers to double the detection view. The view angle can be further extended by using three or more transducers simultaneously. Using multiple transducers to cover a larger detection view is beneficial for more complicated biomedical tissue structures. The main challenge is how to prevent the calibration error from accumulating as more transducers are involved. In addition, only linear array transducers are discussed in this thesis; however, the calibration method is theoretically applicable to other transducer geometries, such as spherical, arc-shaped, or other customized transducers, as long as the ultrasound signal from one transducer can be well detected by the other. The 3D PAT calibration and imaging method discussed in Chapter 4 has only been validated in simulation. It can be applied and tested in experiments on real tissue in a future study. The scanning of the transducer in 3D requires a highly accurate mechanical motor for reconstructing the 3D PAT volumes used for calibration and imaging.
Therefore, the accuracy of the motor will be the main challenge in applying our method to 3D PAT. Moreover, the 3D calibration and PAT imaging are both time-consuming; how to expedite or simplify the whole calibration and imaging process can also be investigated in the future.
