UBC Theses and Dissertations

Towards photoacoustic tomography for robot-assisted prostate imaging Moradi Kashkouli, Hamid 2018

Full Text

TOWARDS PHOTOACOUSTIC TOMOGRAPHY FOR ROBOT-ASSISTED PROSTATE IMAGING

by

Hamid Moradi Kashkouli

B.Sc., Sharif University of Technology, Tehran, Iran, 2009
M.Sc., Sharif University of Technology, Tehran, Iran, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

December 2018

© Hamid Moradi Kashkouli, 2018

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the dissertation entitled:

Towards photoacoustic tomography for robot-assisted prostate imaging

submitted by Hamid Moradi Kashkouli in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering.

Examining Committee:

Septimiu Salcudean, Electrical and Computer Engineering (Supervisor)
Shuo Tang, Electrical and Computer Engineering (Supervisory Committee Member)
Robert Rohling, Electrical and Computer Engineering (Supervisory Committee Member)
Stefan Reinsberg, Department of Physics and Astronomy (University Examiner)
Purang Abolmaesumi, Electrical and Computer Engineering (University Examiner)

Additional Supervisory Committee Members:

Edmond Cretu, Electrical and Computer Engineering (Supervisory Committee Member)

Abstract

During prostate surgery, critical structures surrounding the prostate must be preserved. An additional intra-operative prostate cancer (PCa) imaging method is therefore needed to help the surgeon localize the cancer. Photoacoustic (PA) imaging, an emerging imaging modality, shows great potential for detecting cancerous tissue. In this thesis, we focus on intra-operative PA imaging of the prostate using the da Vinci robotic system.
Towards this objective, we developed a PA reconstruction technique that addresses the challenges posed by linear transducers: the directivity effect of the transducer and the limited view, both of which make the reconstruction system rank deficient. We therefore proposed a sparse representation of the PA absorber distribution using the Discrete Cosine Transform. This sparse representation improves the numerical conditioning of the system of equations and reduces the computation time of the approach. In addition, we evaluated feasible scanning configurations for intra-operative PA tomography (PAT) of the prostate. Two ultrasound transducers can be used in prostate PAT: a transrectal ultrasound (TRUS) transducer located posterior to the prostate, and a pick-up ultrasound transducer carried by the da Vinci robotic system and located anterior to the prostate. We proposed a PAT acquisition system that includes a da Vinci system controlled by the da Vinci Research Kit, and investigated the configurations that use the pick-up and TRUS transducers to perform intra-operative prostate PAT. Finally, we developed intra-operative prostate PA imaging using the da Vinci robotic system and a pick-up ultrasound transducer. We proposed a new approach in which the da Vinci robot is programmed to acquire trajectories in a shared-control configuration with virtual fixtures: the pick-up transducer is manually controlled, but virtual fixtures keep it parallel to a single tomography axis and keep its translation fixed to a single plane normal to this axis. The surgeon controls the transducer motion on the tissue along this virtual fixture. This thesis confirms that intra-operative da Vinci robot-assisted PA imaging with a pick-up transducer is feasible.

Lay Summary

Robotic surgery using the da Vinci surgical system is a common treatment option to remove prostate cancer (PCa) with an appropriate safety margin.
However, cancerous tissue cannot be distinguished from normal tissue in the console view of the da Vinci system. The overall objective of the research in this thesis is to consider photoacoustic (PA) imaging of the prostate and to study the feasibility of robot-assisted PA tomography (PAT). First, a new PA reconstruction method was developed that addresses the challenges of linear transducers. Then, we compared scanning geometries for prostate PAT and found that the best approach for intra-operative PA imaging employs a pick-up transducer. Finally, we developed a new robot-assisted PAT system that moves the pick-up transducer in a shared-control environment. The effectiveness of the proposed methods for intra-operative prostate PAT has been demonstrated through various simulation and experimental studies.

Preface

This thesis is based on several manuscripts resulting from collaboration between multiple researchers. The material from the publications has been modified to make the thesis coherent.

A version of Chapter 2 has been published as:

o H. Moradi, S. Tang, and S. E. Salcudean, “Deconvolution based photoacoustic reconstruction for directional transducer with sparsity regularization,” in SPIE BiOS, 2016, p. 97082D.
o H. Moradi, S. Tang, and S. E. Salcudean, “Deconvolution based photoacoustic reconstruction with sparsity regularization,” Opt. Express, vol. 25, no. 3, p. 2771, Feb. 2017.

The author’s contribution in those papers was developing the method, formulating and implementing the algorithm, conducting numerical simulations, designing mechanical scanning systems for 2-D and 3-D PAT, integrating the actuator control code into the PA scanning system, collecting the data, evaluating the method, and writing the manuscripts. Dr. Salcudean and Dr. Tang provided guidance and support, and assisted in editing the manuscripts.

o H. Moradi, M. Honarvar, S. Tang, and S. E.
Salcudean, “Iterative photoacoustic image reconstruction for three-dimensional imaging by conventional linear-array detection with sparsity regularization,” in SPIE BiOS, 2017, p. 100643K.

The author’s contribution in that article was implementing the method, collecting the data, evaluating the results, and writing the manuscript. Dr. Mohammad Honarvar contributed to the article through discussions and by developing the iterative version of the method on a parallel processing unit (GPU). Dr. Salcudean and Dr. Tang provided guidance and support, and assisted in editing the manuscript.

A version of Chapter 3 has been published as:

o H. Moradi, S. Tang, and S. E. Salcudean, “Towards Intra-operative Prostate Photoacoustic Imaging: Configuration Evaluation and Implementation Using the da Vinci Research Kit,” IEEE Trans. Med. Imaging, in press, 2018.

The author’s contribution in that paper was integrating the da Vinci robotic system and the transrectal ultrasound actuator into the PA scanning system, in terms of both software and hardware components, conducting the experiments, conducting numerical simulations, evaluating the results, and writing the manuscript. Dr. Salcudean suggested the idea of integrating the PA scanning system with the da Vinci robotic system and the use of the intra-operative pick-up ultrasound transducer. Dr. Salcudean and Dr. Tang provided guidance and support, and assisted in editing the manuscript.

A version of Chapter 4 has been submitted as:

o H. Moradi, S. Tang, and S. E. Salcudean, “Towards Robot-assisted Photoacoustic Imaging: Implementation Using the da Vinci Research Kit and Virtual Fixtures,” 2018.

The author’s contribution in that paper was developing and implementing the method, integrating the da Vinci Research Kit into the PA scanning system, conducting experiments, investigating the da Vinci robot error, evaluating the results, and writing the manuscript. Dr.
Salcudean suggested the use of a shared-control platform for PAT using the da Vinci robotic system. Dr. Salcudean and Dr. Tang provided guidance and support, and assisted in editing the manuscript.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgements
Dedication
Chapter 1 - Introduction
1.1 OVERVIEW AND MOTIVATION
1.2 THE PROSTATE
    Intra-operative prostate cancer imaging
1.3 PHOTOACOUSTIC IMAGING
    Photoacoustic Tomography
    Photoacoustic image reconstruction
    Tissue Mimicking Material
1.4 MINIMALLY INVASIVE SURGERY, THE DA VINCI SURGICAL SYSTEM, AND ROBOT-ASSISTED PA IMAGING
    Minimally invasive surgery
    Robot-assisted surgery with the da Vinci Surgical System
    Robot-assisted PA imaging
1.5 PROSTATE PHOTOACOUSTIC IMAGING
1.6 THESIS OBJECTIVES
1.7 THESIS OUTLINE
Chapter 2 - Photoacoustic image reconstruction
2.1 INTRODUCTION
2.2 MATERIALS AND METHODS
    Theory
    Sparsity regularization
    Transducer calibration
    Simulation methods
    Experimental set-up
2.3 RESULTS
    Simulation results
    Experimental results
2.4 LARGE SCALE SYSTEM
    Results
2.5 CONCLUSION
Chapter 3 - Prostate photoacoustic imaging configuration
3.1 INTRODUCTION
3.2 MATERIALS AND METHODS
    Scanning geometries
    Simulation study
    Experimental setup
    Phantoms
3.3 RESULTS
    Simulation results
    Experimental results
3.4 DISCUSSION
3.5 CONCLUSION
Chapter 4 - Feasibility of da Vinci assisted prostate photoacoustic imaging
4.1 INTRODUCTION
4.2 METHODS
    System overview
    Robot control
    Phantom study
    Measuring the da Vinci robot errors
    Shared-control teleoperation efficacy
4.3 RESULTS
4.4 DISCUSSION
4.5 CONCLUSION
Chapter 5 - Conclusions and future work
5.1 OVERVIEW
5.2 SUMMARY OF FINDINGS
5.3 CONTRIBUTIONS
5.4 IMPLICATIONS
5.5 LIMITATIONS
5.6 FUTURE WORK
5.7 CONCLUSION
Bibliography

List of Tables

Table 2-1. CNR and RMS errors of the results of DAS, DPARS and FDPARS methods for simulated 2-D PAT
Table 2-2.
CNR and RMS errors of DAS, DPARS and FDPARS methods for experimental 2-D and 3-D PAT
Table 2-3. CNR and RMS error of the reconstructed images by DAS and iDPARS methods
Table 3-1. CNR and RMS error values of the results of the fan-shaped tomography and cylindrical tomography for the pencil leads simulation
Table 3-2. CNR and RMS error values of the results of the fan-shaped tomography and cylindrical tomography for the sphere simulation
Table 3-3. RMS error values of the results of the fan-shaped tomography and cylindrical tomography for the prostate simulation
Table 3-4. CNR and RMS error values of the results of the fan-shaped and cylindrical tomography for the pencil leads phantom
Table 3-5. CNR and RMS error values of the results of the fan-shaped and cylindrical tomography for the prostate phantom
Table 4-1. Position errors of the da Vinci robot during tomography with the VF in different trials
Table 4-2. Orientation errors of the da Vinci robot during tomography with the VF in different trials

List of Figures

Fig. 1-1. The placement of the prostate within the abdomen.
Fig. 1-2. Schematics of PA imaging.
Fig. 1-3. Different geometries formed by detectors for PAT: (a) cylindrical, (b) spherical, and (c) planar.
Fig. 1-4. Schematic of a linear array of transducers.
Fig. 1-5. Image formation using the DAS method.
Fig. 1-6. The da Vinci Surgical System. Image courtesy: Intuitive Surgical Inc.
Fig. 1-7. Light delivery for the prostate using (a) an end-firing fiber and (b) a side-firing fiber.
Fig. 1-8. The TRUS and pick-up probes can be used for prostate PAT. To perform PAT, the TRUS probe is rotated around a fixed axis while the pick-up transducer can be moved freely along the surface.
Fig. 2-1. Equation (2-2) enables us to compute the pressure at the transducer location.
Fig. 2-2. Element i receives the signal from a considerable region of the medium, but with a different directivity effect. Sampled points are shown with red cross signs; they are equally spaced. One sampled point with the surrounding nodes is also shown.
Fig. 2-3. One sampled point in 3-D space with the surrounding nodes.
Fig. 2-4. Weighting function used for FDPARS in the 1-D case.
Fig. 2-5.
(a) Experimental setup for PAT; a close-up of the probe and the rotary stage is shown at the upper left corner. (b) The structure of the DPARS algorithm.
Fig. 2-6. (a) The experimental setup for transducer calibration. A point source was located at different positions in the elevational (y) and axial (x) directions. The maximum intensity detected by the main transducer element with respect to the location of the point source along the elevational and lateral directions is shown in (b), and the maximum intensity detected by different elements is used to measure the lateral dependency of the transducer, which is shown in (d); (c) and (e) show the calibration results, in which the intensities were normalized by the maximum intensities recorded at the same depth.
Fig. 2-7. The US transducer receives a signal when the object is located outside of its imaging (center) plane. As mentioned in Fig. 2-6(a), y represents the deviation of the point source from the main plane of the US transducer. The DAS method was used for reconstructing the recorded data when the point source was at (a) a depth of 33 mm and 14 mm out of plane, (b) a depth of 37 mm and 6 mm out of plane, and (c) a depth of 78 mm and 14 mm out of plane.
Fig. 2-8. Simulation results for 2-D PAT. (a) shows the absorber and the scanning geometry. The red arrows show the main directions of the sensors, as shown clearly in the close-up photo at the upper right corner. PA data were collected at three positions in 60° increments. (b) Measured transducer element sensitivity with respect to the angle of the incident acoustic wave. (c) Actual absorber distribution, and reconstructions with (d) DPARS, (e) FDPARS, and (f) DAS.
Fig. 2-9. Simulation for circular 2-D PAT. (a) Absorber and scanning geometry.
The red arrows show the main directions of the sensors, which are perpendicular to the detection surface, as shown in the close-up photo at the lower right corner. The PA data were collected through 60° in steps of 1°. (b) Sensitivity of the sensor with respect to the angle of the PA point source. (c) Initial distribution of the absorber and the actual image that should be reconstructed. Results for PAT reconstruction: (d) DPARS, in case the sensors have an omni-directional response, (e) DPARS, (f) FDPARS, (g) Tikhonov regularization, and (h) DAS.
Fig. 2-10. White Gaussian noise was added to the data. (a) DPARS, (b) FDPARS, (c) Tikhonov regularization, and (d) DAS reconstruction results with noisy data.
Fig. 2-11. RMS error and CNR versus regularization parameters for the DPARS method for noisy and noiseless data. (a) and (b) show the sparsity regularization results, and (c) and (d) show those of Tikhonov regularization.
Fig. 2-12. Results for 2-D PAT. (a) PA data were collected at 3 positions around the sample; (b) the sample, which was printed on paper and kept in the US plane of the transducer. Reconstructed images with: (c) DPARS, (d) FDPARS, (e) without any regularization, and (f) DAS.
Fig. 2-13. Experimental set-up and results for 3-D PAT. (a) shows the absorber. The PA data were collected through 140° with a step size of 2°. (b) shows the sample with respect to the transducer. DPARS (c) and DAS (d) were used for the image reconstructions. The Z direction is parallel to the rotation axis and the lateral direction of the transducer.
Fig. 2-14. Cross-sections of 3-D DPARS, FDPARS, and DAS images at different values of Z.
The Z axis is parallel to the rotational axis.
Fig. 2-15. Block representation of the coefficient matrix. Each block corresponds to one transducer/transducer location.
Fig. 2-16. (a) The inclusion and the tomography scanning geometry. (b) iDPARS and (c) DAS images of the simulation study.
Fig. 2-17. (a) The sample and the tomography scanning geometry. (b) iDPARS and (c) DAS images of the experimental study.
Fig. 3-1. Arrangement of the TRUS and “pick-up” probes with respect to the prostate. The TRUS probe is located posteriorly while the “pick-up” probe is located anteriorly to the prostate.
Fig. 3-2. Different geometries of US transducers with respect to the ROI for the simulation of PAT are shown. PAT with the pick-up and TRUS probes is simulated. The first, second, and third rows simulate PA imaging of two pencil leads, a spherical inclusion, and the prostate, respectively. 90° angles of view are shown in a cross section normal to the lateral direction for each PAT. The transducers are directional, and the red arrows show the direction of maximum sensitivity, which is perpendicular to the detection surface. The left column shows the fan-shaped tomography with the TRUS probe and the right column shows the cylindrical tomography with the pick-up probe. The transducers are rotated about an axis parallel to the lateral direction (Y direction) of the probes.
Fig. 3-3.
Sensitivity of the linear transducer with respect to PA waves coming (a) from inside of the main plane (in the lateral transducer direction) and (b) from outside of the US plane (elevational transducer direction).
Fig. 3-4. Mid cross section of the prostate after illumination. Transurethral illumination was simulated; therefore, the region farther away from the urethra receives less illumination.
Fig. 3-5. Experimental setup for 3-D PAT. The goal is to compare fan-shaped tomography with the TRUS probe and cylindrical tomography with the pick-up probe. The pick-up probe is held by a da Vinci tool and can be manipulated for intra-operative tissue scanning. The dVRK system is used to control the da Vinci arm. In coordination with the PA system, the da Vinci and TRUS robots move the probes to different positions around the sample. A series of PA data is recorded at those positions.
Fig. 3-6. A da Vinci instrument is used to maneuver the pick-up probe. The tool tip of the instrument follows a circular trajectory, while the control strategy keeps the pick-up probe oriented towards the tomography axis to perform a cylindrical tomography.
Fig. 3-7. The TRUS probe can be rotated around its axis by a custom-designed robot. In coordination with the PA system, the TRUS robot enables fan-shaped tomography with the TRUS probe.
Fig. 3-8. (a) 3-D structure made of two pencil leads. The pencil leads are held on fishing lines. (b) PVC-based phantom. The phantom shape and size are compatible with prostate imaging conditions. We embedded a sphere-shaped inclusion of size 1 cm into the phantom. (c) Prostate gelatin-based phantom.
The phantom contains a semi-dark prostate and two dark inclusions named A and B.
Fig. 3-9. Reconstructed images of the pencil leads simulation; (a) results of the fan-shaped tomography with the TRUS probe and (b) results of the cylindrical tomography with the pick-up probe. Two methods of PA reconstruction are used: non-model-based DAS and model-based iDPARS. For each case, two cross sectional images that contain the pencil leads and a cross sectional image that is normal to the pencil leads are shown.
Fig. 3-10. Reconstructed images of the sphere simulation with model- and non-model-based reconstruction approaches; the cross sectional images normal to the main directions are shown. (a) Results of the fan-shaped tomography with the TRUS probe and (b) results of the cylindrical tomography with the pick-up probe. The red dashed circles in the images represent the actual size of the reference inclusion.
Fig. 3-11. Reconstructed images of the prostate simulation with model- and non-model-based reconstruction methods; the mid cross sectional image of the prostate is shown. (a) Results of the fan-shaped tomography with the TRUS probe and (b) results of the cylindrical tomography with the pick-up probe.
Fig. 3-12. Position errors were added to the probe positioning in the sphere simulation. The cross sectional images normal to the main directions are shown. Results of (a) the fan-shaped tomography with the TRUS probe and (b) the cylindrical tomography with the pick-up probe. The red dashed circles in the images represent the actual location of the inclusion. Due to the positioning errors, the inclusion may not appear at its exact location.
Fig. 3-13. Reconstructed images of the pencil leads phantom shown in Fig. 3-8(a) with the iDPARS and DAS methods. (a) Results of the fan-shaped PAT with the TRUS probe and (b) results of the cylindrical PAT with the pick-up probe and the da Vinci robotic system. PA signals for both experiments were collected through 90° with a step size of 1°.
Fig. 3-14. Reconstructed images of the prostate phantom shown in Fig. 3-8(b) with the iDPARS and DAS methods. (a) Results of the fan-shaped PAT with the TRUS probe maneuvered by the TRUS robot and (b) results of the cylindrical PAT with the pick-up probe maneuvered by the da Vinci robotic system. PA signals for both experiments were collected through 90° with a step size of 1°. The red dashed circles in the images represent the actual size of the reference inclusion; the location is an approximation based on the data.
Fig. 3-15. Reconstructed images of the gelatin-based prostate phantom shown in Fig. 3-8(c) with model- and non-model-based reconstruction methods. Similar to the photo of the phantom, the A- and B-planes are shown in the images. (a) Results of the fan-shaped PAT with the TRUS probe and (b) results of the cylindrical PAT with the pick-up probe and the da Vinci robotic system. The two inclusions are visible in the image of the pick-up data.
Fig. 4-1. The TRUS and pick-up probes can be used for prostate PAT. To perform PAT, the TRUS probe is rotated around a fixed axis while the pick-up transducer can be moved freely along the surface.
Fig. 4-2. Experimental set-up for PAT. The da Vinci robotic system, through dVRK controllers, is used to maneuver the pick-up probe.
.................................................................................. 87 Fig. 4-3. (a) The shared control strategy locks the pick-up probe to the tomography axis while the probe is placed on the tissue surface. (b) In addition, to capture tomographic PA signals from the ROI, we need to avoid axial movement. A virtual plane is defined that is normal to the tomography axis and crosses the center of the pick-up probe. (c) The surgeon can freely move the transducer in the virtual plane using an MTM and a PSM follows its motion in the virtual plane according to the proposed shared control configuration. ............................... 91 Fig. 4-4. The control diagrams present how the forward (a) and backward (b) scans are performed and how the trajectory generated in the shared control platform is followed by the robot through the ROS interface and dVRK system. During the backward scan (b), the US machine is activated and PA imaging is performed. In addition, the trigger diagram shown in (b) represents the sequence of activation of different components of the integrated system during the backward scan. ................................................................................ 92 Fig. 4-5. Prostate gelatin-based phantoms. (a) The phantom contains a 3-D structure made of three pencil leads. The pencil leads are shown using the red arrows in the CT images of the planes A and B. (b) The phantom contains chicken breast tissue. Two pencil leads shown in the CT images of the planes C and D are embedded inside the breast tissue. ................................ 94 Fig. 4-6. Optotrak markers attached to a board create a rigid body which is grasped by the da Vinci tool in the same way as the pick-up transducer. ........................................................ 95 Fig. 4-7. (a) A phantom with a mark to show the start of the trajectory.
Different trajectories are defined using the da Vinci robot run in conventional and in the proposed shared-control teleoperation. (b) The detection surfaces are compared using a PAT simulation study. The ROI includes a sphere (blue) and two beam shaped inclusions (red). Plane E is a transversal cross-section of the ROI which is shown with respect to the phantom. .............................. 97 Fig. 4-8. US and PAT images of the planes shown in Fig. 4-5. (a) US images (gray scale) of the planes A and B, see Fig. 4-5(a), are overlaid with their PAT images (colored). (b) The results of the ex vivo experiment. PAT images (colored) of the planes C and D, see Fig. 4-5(b), are shown on top of their US images (gray scale). ................................................................... 99 Fig. 4-9. (a) Position and (b) orientation errors of the da Vinci robot during tomography with the VF in the shared control environment. .............................................................................. 100 Fig. 4-10. PAT images of the plane E shown in Fig. 4-7(b). The detection surfaces for PAT were obtained using the da Vinci robot run in the teleoperation mode or the proposed shared-control teleoperation mode. ...............................................................................................
100  Glossary 1-D/2-D/3-D  One/two/three Dimensional BPH  Benign Prostatic Hyperplasia CGM  Conjugate Gradient Method CNR  Contrast to Noise Ratio CS  Compressed Sensing CT  Computed Tomography DAS  Delay and Sum DCT  Discrete Cosine Transform DPARS  Deconvolution-based PA Reconstruction with Sparsity regularization dVRK  da Vinci Research Kit EIR  acousto-Electric Impulse Response EM  Electromagnetic FDPARS  Filtered Deconvolution-based PA Reconstruction with Sparsity regularization FWHM  Full Width at Half Maximum GPU  Graphics Processing Unit iDPARS  iterative Deconvolution-based PA Reconstruction with Sparsity regularization MTM  Master Tool Manipulator MRI  Magnetic Resonance Imaging NP  Nano Particle NVB  Neurovascular Bundle OPO  Optical Parametric Oscillator PA  Photoacoustic PAM  Photoacoustic Microscopy PAT  Photoacoustic Tomography PCa  Prostate Cancer PSM  Patient Side Manipulator PVC  Poly Vinyl Chloride RALRP  Robot Assisted Laparoscopic Radical Prostatectomy RF  Radio-Frequency RMS  Root-Mean Square RP  Radical Prostatectomy ROI  Region of Interest SIR  Spatial Impulse Response SLSC  Short-Lag Spatial Coherence SNR  Signal to Noise Ratio SOS  Speed of Sound STD  Standard Deviation TR  Time Reversal TRUS  Transrectal Ultrasound UBP  Universal Back Projection US  Ultrasound VF  Virtual Fixture Acknowledgements I would like to acknowledge all the help that I have received throughout the course of my PhD. First of all, Dr. Salcudean, my supervisor, who gave me the opportunity to work on these meaningful research projects in such a professional environment. His support, guidance, advice, and ideas were invaluable. I would also like to thank Dr. Tang and her students, Alvin and Leo, who were very helpful with the laser system and laser-fiber coupling.
I would like to give my sincere thanks to all the members of the Robotics and Control Laboratory at UBC for their support and friendship, including, but not limited to, Julio, Irene, Corey, Samira, Angelica, Omid, Caitlin, Amirhossein, and in particular, Mohammad, who was always available to help. I would like to acknowledge the UBC FYF Graduate scholarship and grants provided by Dr. Salcudean and Dr. Tang that supported my research. Last but not least, I would like to thank my parents, my beloved wife, and my son whose unconditional support and love helped me accomplish this and many other milestones throughout my life. Dedication To Mohadeseh and Hossein, the gifts of my life. Chapter 1 – Introduction 1.1 Overview and motivation This thesis provides a comprehensive look at intra-operative prostate photoacoustic (PA) imaging. The long term clinical goal is to apply ultrasound and PA imaging as an image guidance tool to localize prostate cancer (PCa) and vasculature in vivo during Robot Assisted Laparoscopic Radical Prostatectomy (RALRP) procedures. To treat PCa, the entire prostate gland must be removed. Information about the location of PCa enables the surgeon to preserve the critical structures surrounding the prostate gland. PA imaging is a new imaging modality, with potential applications in cancer detection [1]. Cancer detection [2] is performed by detecting tissue vascularization [3] and blood oxygenation [4]. In spectroscopic PA imaging, multiple PA images may be acquired to detect both oxy-hemoglobin and deoxy-hemoglobin concentrations [5] and can be combined to form hypoxia (lack of oxygen) images of tumors [4]. PA imaging has been used to detect breast tumors [2], [6] and, therefore, has the potential to detect PCa. 1.2 The prostate The prostate is a walnut-sized gland that is part of the male reproductive system. As shown in Fig. 1-1, the prostate gland sits anterior to the rectum and below the bladder.
The prostate surrounds the prostatic urethra and can be felt during a rectal exam. The main function of the prostate is to secrete an alkaline, milky or white fluid into the passing seminal fluid that is critical to prolong the lifespan of the sperm and neutralize the acidity of the vaginal tract. The prostate fluid makes up about one-third of the total volume of semen. The prostate is formed of both muscular and glandular tissues. The smooth muscles help drive out semen during ejaculation [7]. Fig. 1-1. The placement of the prostate within the abdomen. The prostate doubles in size during puberty. After the age of 25, the prostate starts to enlarge again, which may become a problem in older men. Benign prostatic hyperplasia (BPH), or abnormal growth of the prostate, can cause problems. The prostate may squeeze the prostatic urethra and obstruct the flow of urine. The urinary symptoms include a frequent and intense urge to urinate, reduced urine flow, and a sensation that the bladder has not emptied completely. Some degree of BPH is seen in 40 percent of men by age 60, and in as many as 80 percent by age 80. Generally, the goal of BPH treatment, whether with drugs or surgery, is to help affected men urinate more normally. The other common disorder related to the prostate is PCa. PCa is the most common cancer and the second leading cancer-related cause of death among North American men. PCa is heterogeneous and generates highly variable tissue changes; some types of PCa grow slowly and may not cause serious harm. Other types are aggressive and can spread to other organs such as the lungs and brain. Although symptoms are similar, there is no link between BPH and PCa. A medical history, a blood test, and a rectal exam can help to detect PCa [7]. Surgery is a common treatment for PCa and involves the removal of the entire prostate gland and the seminal vesicles. The procedure is called radical prostatectomy (RP) and is commonly performed with robot assistance.
In the United States, RALRP using the da Vinci robotic system is used for as many as 80% of RPs [8]. Intra-operative prostate cancer imaging As mentioned in the previous section, RP is a common treatment for PCa. The goal of RP is oncological success (removal of all cancer), while minimizing functional damage (incontinence and impotence). During surgery, whether open, laparoscopic, with or without robot assistance, surgeons cannot distinguish between cancerous and normal tissue using conventional scopes or cameras, and therefore have difficulty achieving the best tradeoff between oncological and functional outcomes [9]. Ultrasound (US) has previously been integrated into prostate surgery [10] and has allowed for fusion of pre-operative multi-parametric magnetic resonance imaging (MRI) [11] with US, from which the surgeon can infer the location of cancerous tissue in the more accurate prostate MRI. Since MRI is not available intra-operatively, and fused MRI-US methods are subject to registration errors, there is currently a demand for techniques that can perform intraoperative imaging of PCa [12]–[14]. US, even when augmented with advanced processing of echo data, is not sufficiently accurate for the detection of PCa [12], [14], [15], especially early stage PCa [13]. The addition of PA imaging to US has the potential to provide intraoperative prostate surgery guidance, enabling the surgeon to better decide where to excise with large margins to ensure cancer removal and where to spare tissue for better functional outcomes. As well, to date there has not been an assessment of the performance of PA imaging in the detection of PCa in vivo. Therefore, the intra-operative imaging of PCa has the potential to have a significant clinical impact [16]. 1.3 Photoacoustic imaging PA imaging is a new and rapidly growing modality, and has diverse applications in cancer treatment [1], [17]–[19], [5], [20], [21] and tissue characterization [22].
PA imaging is a nonionizing modality, which combines US and optical imaging. It provides the high resolution of US together with the high contrast of optical imaging. Fig. 1-2. Schematics of PA imaging. The principle of PA imaging is based on the generation of US waves by laser illumination of tissue. Short and controlled laser pulses illuminate the tissue. The absorbed energy results in a rapid temperature rise leading to pressure waves generated through thermoelastic expansion. As shown in Fig. 1-2, the pressure waves travel through tissue and are detected at specific locations by a US transducer array. The PA signals can be detected using the same transducer used in US imaging, leading, potentially, to co-registered acquisition of PA and conventional US images [23]. The detected signals are analyzed to derive images of the distribution of laser energy absorption contrast in tissue. The contrast obtained using PA imaging depends on the absorption characteristics of the imaged materials and can be controlled by controlling the wavelength of the laser source. When multiple wavelengths are used, the imaging is called spectroscopic. Typically, PA imaging is used to detect oxy-hemoglobin, deoxy-hemoglobin or nano particles (NP), alone, or in combination [5]. Oxy-hemoglobin images showing the combination of hemoglobin and oxygen, and deoxy-hemoglobin images showing the hemoglobin, can be combined to form hypoxia (lack of oxygen) images of a tumor [4], [24]. There are two main categories in PA imaging: PA tomography (PAT) and PA microscopy (PAM) [25]. In PAM, either a focused optical source or a focused US receiver is used. Therefore, a small spot of the region of interest (ROI) is imaged at a time, and in order to have a 2-D or 3-D image of the ROI, the source or the receiver is swept to cover the ROI. PAM is time-consuming but produces accurate images.
In PAT [26]–[28], the entire ROI is excited and the generated PA signals are collected at locations around the ROI. The PA signals are combined and processed to reconstruct an image. PAT results in a faster imaging process, but the image reconstruction, surveyed in [29], is challenging. In [30], the development of PAT systems for clinical applications is reviewed. Photoacoustic Tomography In this section, a brief literature review of PAT is presented. In PAT, as the entire ROI is illuminated, the generated PA signals are collected using an array of detectors or a mechanically scanned detector. Three main detector array geometries have been introduced: cylindrical, spherical, and planar, shown in Fig. 1-3. The cylindrical and spherical cases need full access around the target. Fig. 1-3. Different geometries formed by detectors for PAT: (a) cylindrical, (b) spherical and (c) planar. A circular detection surface can be formed for 2-D PAT. Kruger et al. [31] studied 2-D PAT of small animals, in vivo. They assumed the US transducer receives signal from 2-D slices with a thickness of 1.5 mm. For tomography, they rotated the object illuminated by laser beams from two sources placed in opposite directions. Their reconstruction method was a filtered back projection algorithm. Wang et al. [32] established a reconstruction method based on the response of the ultrasonic transducer to a point source. They assumed that the object consists of a number of point sources at different positions. They used a hydrophone at a fixed position to detect signals from a rotating phantom. Linear transducer arrays have been used in different PA studies. Yin et al. [33] used a linear array of 320 elements to generate 2-D PA images of phantoms. The phantoms consisted of either graphite rods or black ink and were placed in the focal plane of the US transducer.
In their reconstruction method, they used a subgroup of elements to generate a 1-D image in which the PA signals were first deconvolved from the point-response of the transducer, then phase-controlled and weighted based on both the element direction and the transit distance. Finally, the 1-D images were combined to obtain a 2-D tomographic image. PA imaging can also be used to image contrast agents. Kim et al. [5] studied US and PA imaging for monitoring NP delivery. They used a linear array and 1-D motorized stage to obtain multiple 2-D US and PA images. Agarwal et al. [1] examined the feasibility of PA imaging to detect early stage PCa. They used gold NP as a contrast agent, and a linear transducer as a detection device. To image the agent distribution, laser pulses at a wavelength at which the agent has the highest absorption coefficient were used. They employed a motorized stage to move the transducer, and took multiple 2-D images to find the concentration of the agent in an in vitro experiment. In another study, Kang et al. [34] generated 3-D PA images of breast microcalcifications using a linear transducer array. They used a 1-D motorized stage to move the probe in the elevational direction to cover a volume of interest, and collected both US and PA images at each position. Then 2-D US and PA images were combined using interpolation to make 3-D images. Their results suffered from poor elevational resolution, which was due to the slice thickness of the 2-D images. The conventional linear US transducer array is usually focused in the elevational direction; however, the transducer still receives signals from outside of the focal plane. PA imaging has been used for in vivo cancer detection. Lin et al. [6] developed a PA system to visualize angiographic structures of in vivo breast tissue. The authors used a ring US transducer consisting of 512 elements, which was mechanically moved along the elevational direction to perform PAT of breast tissue.
The scanner recorded PA signals from the entire breast within 15 s. They used a 3-D back-projection algorithm for image reconstruction. The resolution of the reconstructed image was 255 µm and 5.6 mm in the imaging plane of the transducer and the elevational direction, respectively. The authors concluded that PAT and PA elastography clearly visualize the breast tumors. The same research group [35] developed a single-impulse panoramic PAT system which has an in-plane resolution of 125 µm. The imaging system uses a single laser impulse per image and can be used to image small animals. They employed a ring detector consisting of 512 elements, with an imaging area of about 16 mm in diameter. The system was used to take 600 cross-sectional images from an in vivo mouse in 12 s. Limited-view PAT was introduced in [36]. For a stable reconstruction, the angle view (or solid angle view for 3-D) of the ROI should be at least π (or 2π) [36], [37]. For smaller angular views, limited-view PAT causes data-incompleteness artifacts in the reconstructed images. To remove such artifacts, compressed sensing (CS) and data compression based reconstruction algorithms have been suggested [38]–[40]. Aperture orientation of the linear array transducer was studied in [41]. The authors suggested the use of a synthetic aperture, which reduces the artifacts of limited-view PA imaging with a single aperture. They ran a simulation study and compared different orientations of the transducer to find the optimal detection view. They concluded the optimal aperture orientation is 60° if three linear transducers are used. Ku et al. [42] developed an experimental set-up for PAT of rat brains for both in vivo and ex vivo cases. The authors illuminated the objects with green laser pulses, for which blood has been reported to have the highest absorption coefficient, to image vascular structures.
They used different US transducers with different working bandwidths. The transducers were rotated around the sample to acquire data from different positions. The data from the different probes were reconstructed separately and the results were compared. The images obtained using higher frequency transducers were better in terms of resolution. The working frequency range of the US transducers used affects the quality of the PA images. Note that a laser pulse with a very short width (say 5 ns) is used to excite the initial acoustic pressure, so ultrasound waves with very high frequency components are generated. Ultrasound transducers working with an ultrawide frequency bandwidth lead to minimal distortions to the detected PA signals and therefore provide higher image resolution [2]. In [43], various types of ultrasound pulses in PAT were studied. The authors showed that the laser pulse generates an N-shaped temporal pressure pulse with a duration that depends on the absorber size. The rise time of the pressure edge is of the order of the laser pulse width. Therefore, a low frequency transducer only detects the decreasing part of the N-shaped PA pulse and a high frequency transducer is required to detect the sharp edges of the PA pulse. For PAM, which provides images with resolution at the subcellular scale, transducers with a center frequency of at least 300 MHz should be used [26]. In addition, considering the fact that high frequency US waves have more attenuation, high frequency transducers have a limited working distance [44]. For example, a transducer with a central frequency of 50 MHz has a working distance of 6 mm [45]. For deeper acoustic penetration such as human breast imaging, low frequency transducers with a center frequency of 1-2 MHz can be used [35]. Photoacoustic image reconstruction An electromagnetic (EM) pulse of intensity $I(t)$ excites an initial acoustic pressure $p_0(\mathbf{r})$.
The initial pressure depends on the physical properties of the absorbers in the object: $p_0(\mathbf{r}) = \Gamma A(\mathbf{r})$, where $\Gamma = c^2 \beta / C_p$ is the Grüneisen parameter ($c$: speed of sound, $\beta$: the isobaric volume expansion coefficient, and $C_p$: the specific heat) and $A(\mathbf{r})$ is a spatial EM absorber function. The initial pressure $p_0(\mathbf{r})$ generates a wave through the medium described by the following non-homogeneous wave equation [46]:

$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) p(\mathbf{r}, t) = -p_0(\mathbf{r})\,\frac{dI(t)}{dt} \qquad (1\text{-}1)$$

where $p(\mathbf{r}, t)$ is the induced pressure at location $\mathbf{r}$ and time $t$. Image reconstruction techniques are used to create 2-D and 3-D images from sets of the PA signals detected at specific locations. The objective in PA image reconstruction is to find the initial pressure source distribution $p_0(\mathbf{r})$. In this section, two main categories of image reconstruction are discussed: model-based and beam-former reconstruction methods. 1.3.2.1 Model-based approach Xu and Wang [46], [47] developed an algorithm for 3-D image reconstruction for PAT which is applicable to all the geometries including planar, spherical, and cylindrical surfaces, when a constant speed of sound (SOS) is assumed throughout the medium. Their method was based on an exact solution to Eq. (1-1) and called the universal back projection (UBP), in which the back projection term $p_0^{(b)}(\mathbf{r})$ at position $\mathbf{r}$ is given by:

$$p_0^{(b)}(\mathbf{r}) = \int_{\Omega_0} b(\mathbf{r}_0, \bar{t})\,\frac{d\Omega_0}{\Omega_0}, \qquad (1\text{-}2)$$

where $\bar{t} = |\mathbf{r} - \mathbf{r}_0|/c$, $d\Omega_0/\Omega_0$ is a factor indicating the contribution to the reconstruction of the detection element located at $\mathbf{r}_0$, and $b(\mathbf{r}_0, \bar{t}) = 2p(\mathbf{r}_0, \bar{t}) - 2\bar{t}\,\partial p(\mathbf{r}_0, \bar{t})/\partial \bar{t}$. $\Omega_0$ is the solid angle of the whole detection surface and $d\Omega_0$ corresponds to a detection element. Thus, the algorithm took into account the size of the detectors. Many studies attempted to correlate the spatial frequency components of the initial pressure distribution to the temporal frequency components of the measured pressure data.
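Before moving on, Eq. (1-2) can be made concrete with a short discretized sketch for a 2-D circular detection geometry, with the solid-angle weights approximated as equal (1/N per detector). All function and variable names here are illustrative, not from this thesis, and the snippet assumes ideal point detectors and a constant SOS:

```python
import numpy as np

def ubp_reconstruct(p, det_pos, pixels, c, fs):
    """Discretized universal back projection (after Eq. 1-2),
    with equal solid-angle weights for all detectors.

    p       : (n_det, n_samples) pressure traces, t = 0 at the laser pulse
    det_pos : (n_det, 2) detector coordinates [m]
    pixels  : (n_pix, 2) reconstruction points [m]
    c, fs   : speed of sound [m/s] and sampling rate [Hz]
    """
    n_det, n_samp = p.shape
    t = np.arange(n_samp) / fs
    # back-projection term b(r0, t) = 2 p - 2 t dp/dt for each detector
    b = 2.0 * p - 2.0 * t * np.gradient(p, 1.0 / fs, axis=1)
    img = np.zeros(len(pixels))
    for k in range(n_det):
        # time of flight from every pixel to detector k
        tbar = np.linalg.norm(pixels - det_pos[k], axis=1) / c
        idx = np.clip(np.round(tbar * fs).astype(int), 0, n_samp - 1)
        img += b[k, idx] / n_det  # dOmega_k / Omega_0 approximated as 1/n_det
    return img
```

Note that the derivative term $2\bar{t}\,\partial p/\partial \bar{t}$ amplifies high-frequency noise, so measured traces are typically band-pass filtered before back projection.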
Fourier-domain algorithms of this type have been reported to be simple to implement numerically and more computationally efficient than filtered back projection algorithms [48]. Most of the Fourier-based reconstruction algorithms rely on the PA signals recorded from a detection surface that encloses the object. However, it is not always possible to fully enclose the object with a detection surface. In cases of limited detection, part of the data will be missed and some artifacts appear in the images. This has been discussed for 2-D PA reconstruction in [49]. For a more general case, Paltauf et al. [37], [50] extended the PA reconstruction algorithm UBP in a way to work for limited view detection. They presented a weight factor function to reduce the artifacts caused by missing data. One of the leading time-domain methods for the PA image reconstruction problem is time reversal (TR). The method is applicable for the case of closed detection surfaces and takes into account acoustic inhomogeneity. With the TR technique, the transient wave is measured from time 0 to $T$. Then the signals are time-reversed from the position of the detector, and the wave equation is solved to find the initial source [51]–[53]. However, TR does not work very well in the presence of sensor directivity effects [54]. In [55], with the assumption that the illumination function $I(t) = I_0$ for a small time $\tau$, the authors obtained the solution to Eq. (1-1) at the location $\mathbf{r}$ of a detector in the time domain as

$$p(\mathbf{r}, t) \approx \frac{\beta I_0 c}{4\pi C_p \tau}\,\frac{d}{dt} \oiint_{|\mathbf{r}-\mathbf{r}'|=ct} \frac{A(\mathbf{r}')}{ct}\,dr' \qquad (1\text{-}3)$$

For reconstruction the authors used the following solution to the inverse problem:

$$\oiint_{|\mathbf{r}-\mathbf{r}'|=ct} A(\mathbf{r}')\,dr' = \frac{4\pi C_p \tau t}{\beta I_0} \int_0^t p(\mathbf{r}, t')\,dt' \qquad (1\text{-}4)$$

1.3.2.2 Beam-forming approach Most research groups who use conventional linear transducer arrays (see Fig. 1-4) use beam-forming methods for PA image reconstruction [1], [22], [56], [57]. Fig. 1-4. Schematic of a linear array of transducers.
Conventional Delay-and-Sum (DAS) beamforming is the most popular. In this method, as shown in Fig. 1-5, the time delay for the PA signals coming from the point source at position $n$ to the detection element $i$ is computed as follows:

$$\tau_{i,n} = \frac{d_{i,n} - d_n}{c} \qquad (1\text{-}5)$$

where $c$ is the SOS, $d_n$ is the distance between the point source and the detection surface, and $d_{i,n}$ is the distance between the point source and the detection element. The time delay $\tau_{i,n}$ indicates which part of the signal $s_i(t)$ recorded by the $i$-th element is from the point source. By repeating this process for all detectors, and summing up the corresponding values, the image is formed. Fig. 1-5. Image formation using the DAS method. Park et al. [56] discussed beamforming for PA image reconstruction for linear transducers. The assumption that the SOS is constant affects the quality of the DAS PA images. Yoon et al. [58] presented a method to find an optimal SOS for a tissue, which results in higher resolution images. DAS can be used for real-time imaging due to its simplicity, but the images suffer from poor resolution and high sidelobes. Mozaffarzadeh et al. [59] modified the DAS algorithm to improve the quality of the reconstructed images. The authors used the spatial coherence of PA signals, as a nonlinear beamforming method, which calculates the correlation of signals to enhance the resolution, and used minimum variance adaptive beamforming, which adaptively weights the signals to reduce the sidelobes that appeared in the DAS images. Zhang et al. [60] introduced a PA inversion method, named SPARE, which uses US post-beamformed radio frequency (RF) data, which is available internally on many US machines. Considering the fact that the time of flight of acoustic waves in PA imaging is half of that in pulse-echo ultrasound, the beamformed PA-induced signals are defocused. The defocused signals appear to be transmitted from the focal point.
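The DAS procedure described around Eq. (1-5) can be written compactly. The following is a minimal sketch, assuming a linear array at depth z = 0, time zero at the laser firing, and one-way PA time of flight; the function and variable names are illustrative, not from this thesis:

```python
import numpy as np

def das_beamform(signals, elem_x, pixels, c, fs):
    """Delay-and-sum PA beamformer for a linear array at z = 0.

    signals : (n_elem, n_samples) RF traces, t = 0 at the laser firing
    elem_x  : (n_elem,) lateral element positions [m]
    pixels  : (n_pix, 2) pixel coordinates (x, z) [m]
    c, fs   : speed of sound [m/s] and sampling rate [Hz]
    """
    n_elem, n_samp = signals.shape
    image = np.zeros(len(pixels))
    for i in range(n_elem):
        # one-way PA time of flight from every pixel to element i
        d = np.hypot(pixels[:, 0] - elem_x[i], pixels[:, 1])
        idx = np.round(d / c * fs).astype(int)
        valid = idx < n_samp
        # pick the delayed sample from each trace and sum across elements
        image[valid] += signals[i, idx[valid]]
    return image
```

Unlike pulse-echo US beamforming, only the one-way receive delay appears here, since the "transmit" event is the laser pulse itself.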
In SPARE, the authors assumed that the post-beamformed data recorded by the US machine was detected at the focal point by a virtual element. Then, the DAS algorithm is applied to the beamformed RF data for PA image reconstruction. The quality of the reconstructed images using this method was in good agreement with the DAS images. Bell et al. [61] presented a method named short-lag spatial coherence (SLSC) beamforming for PA image reconstruction. Their method was based on the normalized correlation between PA signals detected by different detection elements, without taking into account the signal amplitude. They showed that, compared to DAS, SLSC enhances the contrast of the reconstructed images, but reduces the resolution. The research group mostly uses the method to localize brachytherapy seeds [21], [62]–[64]. Tissue Mimicking Material Tissue mimicking material is used to study the feasibility of a new method or device before in vivo imaging. In this section, some of the commonly used phantoms in PA imaging are discussed. A phantom including porcine skin gelatin is the most popular [61] and has been proposed for prostate imaging. The authors fixed a bovine liver implanted with black-painted titanium seeds in gelatin. Compared to the prostate, the bovine liver has higher light scattering and also higher blood content. They also studied ex vivo and in vivo canine prostate [62]. Poly vinyl chloride (PVC) is another material that can be used as a phantom base for PA imaging. Bohndiek et al. [44] introduced a protocol to develop PVC-based phantoms for PA imaging. Using the proposed protocol, a wide range of phantoms with controllable optical properties can be constructed. The absorption and reduced scattering coefficients of the phantoms are tuned by using different amounts of black plastic color and titanium oxide, respectively. Yoon et al. [58] fixed an ex vivo specimen in a PVC phantom. They submerged the phantom in a saline solution for proper acoustic coupling.
Gel wax-based phantoms were introduced in [65] and can be used for multispectral PA imaging. The authors suggested the addition of titanium oxide and oil-based inks to adjust the reduced scattering and optical absorption coefficients, respectively. The selected oil-based inks have different absorption coefficients depending on the wavelength of the laser source. Park et al. [56] imaged a thin black film with a uniform PA response in order to find the impulse response of the transducer array. A comprehensive study of gelatin-based phantoms for PA and US imaging was performed in [66]. The authors suggested that PA phantoms should include scatterers for both imaging modalities: forty-micron, spherical silica particles for acoustic scattering, which also improve optical scattering, and Intralipid 20% IV fat emulsion for optical scattering. They also proposed that the optical absorption of tissue at a specific wavelength be imitated by varying the concentrations of three components: India Ink, Evans Blue, and Direct Red 81. 1.4 Minimally invasive surgery, the da Vinci Surgical System, and robot-assisted PA imaging Minimally invasive surgery Minimally invasive surgery is replacing traditional open surgery for many procedures. During a minimally invasive operation, a few instruments are inserted into the body through the body's natural openings or through small incisions of approximately 2 cm, instead of the 10-15 cm incisions of a traditional open surgery. The surgeon is still in charge of the procedure and works using specialized instruments, including an endoscope that provides a view of the surgical area. There are a number of benefits for the patient, including quicker recovery, a shorter hospital stay, less infection and bleeding, and less pain and discomfort [67]. However, minimally invasive surgery is demanding on surgeons. This is because surgeons work with instruments that do not provide the degrees of freedom available in the human hand.
The surgeon's view of the surgical area is sometimes more restricted compared to open surgery, where a view of the entire surgical area is available. In addition, many minimally invasive surgeries are carried out without 3-D visualization and, as a result, depth perception is much poorer than in open surgery [68]. There are three types of minimally invasive surgery. First, endoscopy, in which the surgeon uses the endoscope itself to perform the procedure. Usually, the endoscope is inserted into the body through one of the body's natural openings, as in bronchial access to the lung for biopsies or for colonoscopies. Second, laparoscopy, in which the endoscope and other instruments are inserted into the body through small ports; the surgeon manipulates the hand-held tools to perform the procedure while standing by the patient. Typically, the tools do not have a "wrist"; there are only four degrees of freedom and there is no orientation control at the distal end of the instruments. Also, the surgeon must look away from the tool to the monitor to see the image of the surgical area. Finally, robot-assisted surgery, which is similar to laparoscopy. The endoscope and robotic instruments are inserted into the body and the surgeon controls the robot while sitting at a console. Robot-assisted surgery with the da Vinci Surgical System Robotic systems such as the da Vinci Surgical System have been designed to improve upon traditional laparoscopy. The da Vinci Surgical System has three main components. First, a surgeon's console, which is typically in the same room as the patient and includes two master tool manipulators (MTMs) and a viewing console. Second, a patient-side cart, which includes four robotic arms/manipulators that can be controlled from the surgeon's console.
One of the arms is used to position an endoscope and the remaining manipulators, named patient side manipulators (PSMs), are for maneuvering instruments such as graspers, scissors and cauteries. The viewing console provides a 3-D view of the surgical scene as seen by the 3-D endoscope held by the camera arm (see Fig. 1-6). The da Vinci robot is a tele-operated robot. The surgeon performs the operation while seated comfortably at the console, with his/her eyes and hands positioned so that he/she can move the instruments and camera located at the patient-side cart. The patient lies on the operating room bed beside the patient-side cart. The PSMs can be inserted into the body through small incisions. Once activated, the surgeon's motions are copied by the instruments on the patient-side cart; the surgeon's motions at the console may also be filtered and scaled to remove tremor and improve dexterity. Compared to laparoscopy, da Vinci robot-assisted surgeries offer easier operation, greater precision, fewer errors, less pain and bleeding, shorter hospital stay, and quicker recovery for the patient [69], [70]. There have been reports of robot malfunctions, but patient safety and surgical outcomes have not been affected by these malfunctions [71]. PSM motion is not restricted with the da Vinci robot: there are 7 degrees of freedom controlled by each of the surgeon's hands at the master, namely 3 translations and 3 orientations of the instrument inside the patient, plus grasping or cutting. However, because of instabilities that may occur if the surgeon's hands were to be affected by the instrument touching the tissue, current robotic systems do not provide haptic (force or touch) feedback to the surgeon's console.

Fig. 1-6. The da Vinci Surgical System. Image courtesy: Intuitive Surgical Inc.

Robot-assisted PA imaging

There are a few studies that investigated robot-assisted PA imaging.
Generally, the robot is employed to position the illuminator, the sample, or the US transducer. A Selective Compliance Articulated Robot arm holding an optical fiber was used to scan the entire ROI to provide signals for PA and fluorescence imaging [72]. A similar idea was used to study the effect of the skull bone on PA imaging [73]: the optical fiber held by a robot was placed at different locations within a hemispherical transducer array, and the system was tested in the presence and absence of a skull phantom. The authors showed that PA signals are more attenuated and temporally shifted with the skull phantom. In another study [74], the same PA system was used for dynamic contrast-enhanced imaging on a blood flow phantom. PA images were collected before and during the injection of piglet and rabbit blood. The authors showed that piglet and rabbit blood have different optical absorptions and that PA imaging has the potential to visualize vasculature. In [75], PA image guidance was used to measure vessel separation in order to find safety zones during surgeries. The authors proposed robotic and non-robotic measurement of the distance between vessels. In the robotic approach, a da Vinci Surgical System is used to sweep the optical fiber across the targets and a linear US transducer is used to record the generated PA signals. Either the reconstructed PA images or the fiber positions reported by the da Vinci robot kinematics can then be used to measure the vessel separation distance. The authors concluded that the PA image-based measurement has higher accuracy than the measurement obtained from the robot kinematics. The same research group [76] designed a multi-fiber light delivery system that surrounds surgical tool tips. The light delivery system helps visualize the tool tip and the vessels at the same time; the design was specifically optimized for a neurosurgical drill. PA imaging can also help detect critical structures during robot-assisted endo-nasal surgery [77].
There are arteries behind the bone being drilled during this surgery that should be preserved. The authors proposed a navigation system in which the drill and the US probe, each held by a robot, are tracked. The PA imaging system, including a light fiber attached to the drill, helps with the placement of the tracked US probe so that images of the target can be obtained. In [78], a robotically tracked system was developed to perform 2-D PAT with a conventional linear probe. The probe was rotated in the plane and collected PA signals at one or three positions; PAT reconstruction using three poses resulted in images with fewer artifacts.

1.5 Prostate photoacoustic imaging

The feasibility of PA imaging to detect early stage PCa was examined in [1], [15]. The authors used gold and polyacrylamide nanoparticles (NPs) as contrast agents, and a linear transducer as a detection device. They showed that NPs increase the contrast of PAT images in an in-vitro experiment. The placement of laser fiber bundles and US receivers for prostate imaging was optimized in [13], showing that transurethral laser light delivery increases the imaging depth in the prostate. Ex vivo human prostate studies showed that PCa can be detected using multispectral PA imaging [12]. PCa progression was investigated in [79]: 3-D co-registered PA and pulse-echo imaging were used to monitor PCa injected in a mouse window chamber. The authors showed that PA imaging and PA spectroscopy are capable of visualizing vascular networks and blood oxygen saturation over time. Deep learning and multispectral PA imaging have also been used to detect ex vivo PCa [80]. Different slices of human prostate tissue were imaged, and three classes, malignant, benign, and normal, were defined and distinguished using their method. In [12], multispectral PA imaging of the ex vivo human prostate was employed to distinguish between cancerous and non-cancerous prostate tissue.
PAT can also be used for guidance to find prostate biopsy targets [81]; the approach was tested with an ex vivo human prostate. PA imaging can also be used for prostate brachytherapy seed localization [63], [82]. In [63], the authors imaged multiple brachytherapy seeds implanted into an ex vivo dog prostate phantom using a traditional US linear array. To better visualize the seeds located far from the light source, a short-lag spatial coherence beamformer was suggested [61]; this reconstruction method is independent of the amplitude of the PA signals received by the linear array of transducers. In [83], it was shown that seeds oriented normal to the acoustic beam direction generate stronger PA signals. The potential of PAT for PCa detection was shown in [14] through a study on an in vivo canine model. To model PCa, different amounts of fresh canine blood were injected into the prostate. This work showed that US is able to image the boundary and the urethra of the prostate; moreover, PAT produced good quality images of deep lesions. In vivo PCa in mice was studied in [84]. The authors used spectroscopic PA imaging to detect the prostate-specific membrane antigen, which is often overexpressed in PCa. Tumors were implanted under the mouse skin and imaged when they had diameters of 2.5 to 5 mm. The authors concluded that PA imaging is a suitable imaging modality to detect early stage PCa and to monitor its progression. In [16], a PA imaging system with a hand-held probe was designed to visualize the NVBs. It was shown that PA imaging is able to image vessels as small as 300 µm in diameter, and that combined PA and US imaging localizes NVBs clearly better than US alone. Ishihara et al. [16] developed a transrectal PA system to image prostate cancer during prostate biopsy procedures, and showed that combined PA and US imaging performs better at identifying prostate cancer than US alone.
Later, the same group [85] studied the feasibility of the transrectal PA system for in vivo clinical trials and showed that PA imaging can be used to find targets for the biopsy procedure. However, intraoperative localization of PCa in vivo is yet to be demonstrated, partly because the prostate is difficult to access for both laser illumination and US transducers. In terms of light delivery, the main concern is to provide the entire ROI, including the prostate, with sufficient laser light intensity, which must also be kept below the maximum allowable for human tissue [86]. In [85], an illumination system with a TRUS transducer was designed to investigate transrectal PA imaging that can be used to guide prostate biopsy for PCa detection. In another study [87], a transrectal light delivery system was developed for brachytherapy treatment; the authors showed the delivery system is able to visualize brachytherapy seeds implanted at depths of up to 5 cm in phantoms. In [88], the design of a laser-fiber coupling system with high output energy is presented. The coupling system is suitable for prostate imaging through transrectal or transurethral delivery. The authors presented the transurethral illumination system in [89], reaching 3 cm optical penetration in chicken breast tissue with a fluence of 7 mJ/cm². Bell et al. [18] illuminated seeds that had been implanted in a canine prostate. For this purpose, they took advantage of transurethral light delivery using a 1-mm diameter optical fiber and catheter. The authors discussed two types of illumination: end-firing, where light exits axially through the end of the fiber, and side-firing, where the light is redirected radially away from the fiber using mirrors held by a quartz tube. The authors of another study [61] introduced transperineal light delivery.
They studied an ex vivo liver and used a 1 mm diameter optical fiber kept inside a brachytherapy needle, which acts as a diffusing sheath that spreads the light to within 2 cm around the fiber tip. Trans-urethral illumination seems superior [18] because the light source is placed closer to the prostate tissue. Based on the allowable output energy for the use of lasers [86], the entire ROI or the imaging region of the detector could be illuminated. As Fig. 1-7(a) shows, an end-firing fiber can be used to illuminate the prostate. Using this method, it is difficult to ensure that the laser energy is kept below the maximum allowable for human tissue [86] while the entire prostate receives sufficient illumination for PAT. The other option is to use a side-firing fiber to illuminate a radial slice of the prostate, as shown in Fig. 1-7(b). This requires a more complicated imaging system, which must coordinate the fiber and the US transducer so that the fiber always illuminates the imaging region of the transducer.

Fig. 1-7. Light delivery for the prostate using (a) an end-firing fiber and (b) a side-firing fiber.

The challenge of PAT imaging of the prostate is that it is difficult to develop an instrument that conforms to the usual PAT configuration, with US transducers that surround the organ to be imaged (see Fig. 1-8). In the absence of surgery, a patient's prostate can be examined with US using three approaches: endo-rectal, trans-perineal and trans-abdominal. The trans-abdominal view is not viable for PA imaging because, for many patients, the prostate is too far from the transducer. The endo-rectal view, used commonly for biopsy and brachytherapy, and the trans-perineal view, proposed for prostate biopsy and radiation therapy guidance, have a limited angle of view.
However, if we consider intra-operative imaging, a "pick-up" US transducer, such as the one developed by our group for intra-operative scanning using a da Vinci grasper [90], can be used to provide a fourth view, in the anterior-to-posterior direction, as shown in Fig. 1-8. This configuration, combined with trans-rectal or trans-urethral laser illumination, should be studied.

Fig. 1-8. The TRUS and pick-up probes can be used for prostate PAT. To perform PAT, the TRUS probe is rotated around a fixed axis while the pick-up transducer can be moved freely along the surface.

1.6 Thesis Objectives

The objective of this work is to develop new methods and techniques for PA imaging of the prostate. The goal is to identify and meet the constraints and conditions related to an intra-operative prostate imaging study. Based on the insight presented in Section 1.5, five related topics are investigated:
1. The development of a PA image reconstruction method that is compatible with the current US transducers used for prostate imaging and that allows for a limited set of acoustic data collected around the object.
2. The extension of the above inversion technique to large scale systems.
3. The feasibility of performing PAT using a conventional linear US transducer.
4. The study of PAT configurations for prostate imaging which are formed by the pick-up transducer or the TRUS transducer.
5. The development of a da Vinci robot-assisted PA imaging system.

1.7 Thesis outline

Chapter 2 describes a new sparsity regularization technique for solving the inverse problem of PA imaging. The method is tested on simulation and experimental data and the results are compared with the most common regularization technique, Tikhonov regularization. The method is also extended to solve the inverse problem for large scale systems. In Chapter 3, the configurations for PAT of the prostate using the pick-up US transducer and the TRUS transducer are investigated.
There are two ways of performing prostate PAT: first, fan-shaped tomography, in which the TRUS transducer sweeps the volume by rotating about its axis; second, cylindrical tomography, in which the pick-up transducer can be rotated around any arbitrary axis. Experiments and simulations are used to test and analyze the configurations. Chapter 4 details a fully integrated da Vinci robot-assisted PAT system. A shared control environment is described in which virtual fixtures are used to help the surgeon keep the pick-up transducer in touch with the tissue to be imaged. In our approach, the axial motion of the transducer is restricted to a virtual plane in order to define a cylindrical PAT reconstruction configuration (Fig. 1-3(a)), with the transducer always facing towards the tomography axis. The user freely moves the transducer and ensures its contact with the tissue for acoustic coupling. The approach was tested in experiments using the da Vinci Research Kit. In addition, the accuracy of localization of the transducer by the robot is measured and analyzed using an optical tracking system. The final chapter of this thesis provides an overview of the work presented. The long term goals and limitations of the work are also discussed. Finally, the future of the work and possible improvements are considered.

Chapter 2 ˗ Photoacoustic image reconstruction

2.1 Introduction

As summarized in the previous chapter, photoacoustic (PA) imaging is a new and rapidly growing imaging modality that has numerous potential applications in the imaging of biological tissue [1], [17]–[19]. The goal of this chapter is to generate PAT images with equipment that can be used for prostate imaging, in the presence of constraints such as the specific directivity of the sensors and the limited view. Many exact PA tomography (PAT) reconstruction algorithms require full-view PA imaging systems that enclose the sample with unfocused sensors [91].
The time reversal (TR) algorithm requires an enclosed detection surface around the initial pressure distribution [29], [51], and it does not account for sensor directivity effects when the detectors are not omni-directional. The effect of sensor directionality on PAT images reconstructed by TR was studied in [54]: even when a complete and continuous detection surface was employed, significant artifacts appeared in the reconstructed images. If the focused, conventional linear arrays that are typically available on clinical US machines could be used for PAT, a broad range of applications could be enabled by relatively simple modifications of conventional US machines. For example, PAT breast scanning could be accomplished by rotating an array of transducers around the breast, as in the automated breast US system presented in [92]. In prostate imaging, it is possible to capture the acoustic waves generated in the prostate through trans-urethral illumination [18] from limited angles. Thus, a model-based PAT reconstruction method that works with PA wave fields that are physically focused, and that relies only on limited-view acquisition, would be very useful to have. The conventional linear transducer array has been used in different PA studies [60], [93]. In one approach, a linear transducer is moved in a plane for a planar detection geometry: multiple two-dimensional (2-D) images are captured and combined to form a three-dimensional (3-D) tomographic image. However, the reconstructed multi-slice images may suffer from poor elevational resolution [34] because linear transducers receive signals from a limited volume and not just a plane [31], [94]. For the same reason, the conventional Delay-and-Sum (DAS) method, a back projection-based algorithm inspired by US imaging, cannot provide accurate images; the best performance of the DAS method is achieved when the sensors provide an omni-directional response.
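For reference, the conventional DAS beamformer can be sketched in a few lines. The following is a minimal 2-D sketch, not the implementation used in this work; the array geometry, sampling rate, and variable names are illustrative assumptions, and the directivity and elevational focusing effects discussed above are exactly what this naive sum ignores.

```python
import numpy as np

def delay_and_sum(rf, elem_x, fs, c, grid_x, grid_z):
    """Naive 2-D delay-and-sum PA beamformer.

    rf      : (n_elem, n_samples) RF data, with t = 0 at the laser pulse
    elem_x  : (n_elem,) lateral element positions [m]
    fs      : sampling frequency [Hz]
    c       : speed of sound [m/s]
    grid_x, grid_z : 1-D arrays defining the image grid [m]
    """
    n_elem, n_samp = rf.shape
    img = np.zeros((grid_z.size, grid_x.size))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # One-way time of flight from the pixel to each element
            # (PA signals travel one way, unlike pulse-echo US).
            d = np.sqrt((elem_x - x) ** 2 + z ** 2)
            idx = np.round(d / c * fs).astype(int)
            valid = idx < n_samp
            img[iz, ix] = rf[np.arange(n_elem)[valid], idx[valid]].sum()
    return img
```

With synthetic data from a single point absorber, the maximum of the resulting image falls at the absorber position; with a directional (lens-focused) array, the same sum accumulates misplaced energy, which motivates the model-based approach developed below.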
In most cases, when a linear array of transducers is used, back projection algorithms have been used for PA image reconstruction [1], [22], [31], [34], [49], [94]. The back projection algorithm is fast and reliable [95], but it is not model-based and may cause artifacts in the reconstructed images [96]. To reduce the artifacts, the method can be modified with deconvolution algorithms. Deconvolution algorithms have the potential to include different constraints on the transducers, and many studies have shown the effectiveness of deconvolution-based PA image reconstruction. For example, a deconvolution algorithm has been used to remove the artifacts caused by the pulse width and by the transducer working bandwidth [97]; the algorithm was also used for deblurring purposes [98]. Generally, the spatial impulse response (SIR) or the acousto-electric impulse response (EIR) is employed in the deconvolution algorithms. Wang et al. [99] showed that PAT spatial resolution can be enhanced with impulse responses. In contrast to the SIR, finding the EIR is challenging [100]. A few model-based image reconstruction methods have been reported to compensate for the sensor directionality caused by the size and shape of the elements. Sensor directionality was investigated in [54]; due to the directionality of the transducers, spatial resolution can be decreased. To reduce the directivity effect caused by the transducer size, a rectangular-shaped transducer of an ideal size was investigated in [101], and a 2-D Fourier transform based method was developed to remove the PA signals coming from outside the imaging area of the transducer. In general, the transducer surface is divided into a number of patches and superposition of the PA signals is used in the reconstruction [102]. Further improvements for 3-D models are described in [103].
However, these methods are not applicable to transducers whose directional response is caused by an external acoustic lens, as is the case with conventional US transducers. CS algorithms are capable of image reconstruction from limited-view and noisy measurements [104], [105], and in the presence of non-uniform and random laser illumination [106], [107]. Data compression based PA algorithms find a sparse representation of the measured signals [108]–[110], and CS-based PA algorithms find a sparse representation of the PA images [105], [111], [112] with sparsifying transforms. For example, the Discrete Cosine Transform (DCT) was used in [113] to sparsify the measured data, which drastically reduced the computing time in their 2-D PA imaging study. While a sparse representation of the measured signals leads to a more efficient solution, it does not reduce the number of unknowns. The limited view and the directionality of the transducers are constraints that make the system of equations for PAT reconstruction rank-deficient. To the best of our knowledge, the rank-deficient PA system arising from limited-view PAT with a focused US transducer has not been addressed. While the reconstructed image may not be sparse in its pixel representation, it has a sparse representation in the DCT domain [114]. The DCT, which is widely used in CS studies and has been employed in JPEG and MPEG systems for still and motion image compression [115], has not been used to address the rank deficiency in limited-view, focused-transducer PAT reconstruction. In this chapter, a novel deconvolution-based PA reconstruction with sparsity regularization (DPARS) is presented. We show that the method works with transducers that are physically focused in the elevational direction by an external acoustic lens, as in traditional linear transducers designed for conventional US (not PA) imaging.
The transducer beam shape affects the reconstruction and should be taken into account during the reconstruction process. The distribution of the absorption coefficients is computed by solving a large linear system of equations derived from the projections of the absorption coefficients; these projections account for the transducer directivity. In addition, we consider a limited-view PA imaging system that covers less than 180 degrees (or 360 degrees for 3-D) around the sample. To reduce the number of equations, as well as to improve numerical conditioning and computing time, we use a sparse DCT representation of the distribution of absorption coefficients. In this chapter, for the first time, a model-based PAT reconstruction method that accounts for both sensor directionality and limited angle of view is presented. We compare and report the performance of the proposed method against model-based and non-model-based algorithms.

2.2 Materials and methods

In this chapter, scalar parameters, vectors, and matrices are shown in normal, italic bold, and roman bold fonts, respectively.

Theory

In PAT, we illuminate the entire ROI with an electromagnetic (EM) pulse of intensity $I(t)$. The laser pulse excites an initial acoustic pressure $p_0(\mathbf{r})$. The initial pressure is linearly related to a spatial EM absorber function $A(\mathbf{r})$ by $p_0(\mathbf{r}) = \Gamma A(\mathbf{r})$, where $\Gamma$ is the Gruneisen parameter related to the physical properties of the absorbers in the object; $\Gamma = c^2 \beta / C_p$, where $c$ is the speed of sound, $\beta$ is the isobaric volume expansion coefficient, and $C_p$ is the specific heat. The initial pressure propagates through the medium. The PA pressure $p(\mathbf{r}, t)$ at vector position $\mathbf{r}$ and time $t$ is described by the following non-homogeneous wave equation [46]:

$$\left( \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \right) p(\mathbf{r}, t) = -p_0(\mathbf{r}) \frac{dI(t)}{dt} \qquad (2\text{-}1)$$

Image reconstruction techniques are used to form 2-D and 3-D images from the PA signals detected at specific locations around the ROI, in order to recover the initial pressure source distribution $p_0(\mathbf{r})$.
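As a rough numeric illustration of the relation between the initial pressure and the absorber function, the Gruneisen parameter can be evaluated directly. The material constants below are approximate values for water near body temperature and are assumptions for illustration only.

```python
# Illustrative evaluation of the Gruneisen parameter, Gamma = c^2 * beta / Cp.
# The constants are approximate values for water near body temperature
# (assumed here for illustration, not taken from this work's experiments).
c = 1520.0       # speed of sound [m/s]
beta = 4.0e-4    # isobaric volume expansion coefficient [1/K]
Cp = 4184.0      # specific heat [J/(kg K)]

Gamma = c ** 2 * beta / Cp   # dimensionless
print(f"Gamma ~ {Gamma:.2f}")  # prints: Gamma ~ 0.22
```

A dimensionless Gamma on the order of 0.2 means roughly one fifth of the locally absorbed optical energy density is converted into initial acoustic pressure.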
A solution for $p(\mathbf{r}, t)$ in Eq. (2-1) can be obtained in the time domain as [55]:

$$p(\mathbf{r}_i, t) = p_i(t) = \frac{\beta}{4\pi C_p} \iiint \frac{d\mathbf{r}}{|\mathbf{r} - \mathbf{r}_i|} \, A(\mathbf{r}) \, \frac{dI(t')}{dt'} \qquad (2\text{-}2)$$

where $\mathbf{r}_i$ is the location of transducer $i$, and $t' = t - |\mathbf{r} - \mathbf{r}_i|/c$. The following formulation, from Eqs. (2-3) through (2-5), follows the deconvolution approach of [32], but uses a global coordinate system. According to the deconvolution algorithm, the PA pressure is the superposition of point source responses. Using Eq. (2-2), the PA pressure from a point source at time $t$ can be obtained as $p_p(t)$; the point source is located at position $\mathbf{r}_p$. Thus, the PA pressure can be calculated as follows:

$$p_i(t) = \frac{\beta k}{4\pi C_p} \left( \frac{1}{t} \oint_{|\mathbf{r} - \mathbf{r}_i| = ct} A(\mathbf{r}) \, dS \right) * p_p^0(t) \qquad (2\text{-}3)$$

where $*$ denotes the convolution operator and $p_p^0(t)$ is the time-delayed $p_p(t)$. The time delay $|\mathbf{r}_p - \mathbf{r}_i|/c$ moves the point source to the location of the transducer, and $k$ is a parameter corresponding to the physical properties of the point source being imaged, including $\Gamma$, $c$, and the initial location of the point source. As shown in Fig. 2-1, the integral of the absorber function $A(\mathbf{r})$ is computed over the sphere of radius $|\mathbf{r} - \mathbf{r}_i|$ centered at position $\mathbf{r}_i$. The signal detected by US transducer $i$, $p_d^i(t)$, is the convolution of the pressure $p_i(t)$ and the impulse response of the transducer, $h(t)$. The generalization of Eq. (2-3) to the detected signals is:

$$p_d^i(t) = \frac{\beta k}{4\pi C_p} \left( \frac{1}{t} \oint_{|\mathbf{r} - \mathbf{r}_i| = ct} A(\mathbf{r}) \, dS \right) * p_d^0(t) \qquad (2\text{-}4)$$

Fig. 2-1. Equation (2-2) enables us to compute the pressure at the transducer location.

where $p_d^0(t) = p_p^0(t) * h(t)$ indicates the PA signal from the point source as detected by the transducer. By applying the Fourier transform to both sides of Eq.
(2-4), we can rewrite the constraint on the absorber function as follows:

$$\oint_{|\mathbf{r} - \mathbf{r}_i| = ct} A(\mathbf{r}) \, dS = \frac{4\pi C_p t}{\beta k} \, \mathrm{IFT}\!\left( \frac{\tilde{p}_d^i(\omega) \, w(\omega)}{\tilde{p}_d^0(\omega)} \right) \qquad (2\text{-}5)$$

where $\tilde{p}_d^i(\omega)$ and $\tilde{p}_d^0(\omega)$ are the Fourier transforms of $p_d^i(t)$ and $p_d^0(t)$, respectively, and $w(\omega)$ is a filter that only passes the frequency components in the working range of the transducer. Equation (2-5) links the PA signals detected by a US transducer to the absorption distribution: the right side of Eq. (2-5) can be calculated from the detected signals. Let $R_i(t)$ denote the left side of Eq. (2-5), defined as the preprocessed signal of an individual transducer $i$:

$$R_i(t) = \oint_{|\mathbf{r} - \mathbf{r}_i| = ct} A(\mathbf{r}) \, dS \qquad (2\text{-}6)$$

Transducer $i$ is directional and weights the PA pressures according to the direction from which they arrive at the transducer. Let the function $D_i(\mathbf{r})$ denote the directivity effect of transducer $i$ for any location $\mathbf{r}$ in the medium with respect to the transducer location $\mathbf{r}_i$. Then we can rewrite Eq. (2-6) as follows:

$$R_i(t) = \oint_{|\mathbf{r} - \mathbf{r}_i| = ct} A(\mathbf{r}) \, D_i(\mathbf{r}) \, dS \qquad (2\text{-}7)$$

The spatial distribution of absorbers is discretized by a mesh of equally sized cells as shown in Fig. 2-2, where $\hat{A}$ is a vector of size $N_p$ representing the nodal values of the absorber distribution, i.e., the image in 2-D or 3-D space. We assume that the ROI fully encloses the region of the sample that receives significant illumination, i.e., that the incoming signals to the transducer from outside its imaging region are negligible. The preprocessed signal computed from the deconvolution algorithm for transducer $i$, $R_i(t)$, is sampled as $R_i(t_m)$, $m = 1, 2, \ldots, M$. The integral in Eq. (2-7) can be estimated by summing the integrand values at a finite set of points: $N_m$ points are picked on the surface $S_i(ct_m)$, shown with red cross signs in Fig. 2-2. The values at the sampled points are assembled into a vector $\breve{A}$; these points are equally spaced on a surface equidistant to acoustic element $i$. We can write Eq.
(2-7) in discretized form as follows:

$$R_i(t_m) = \sum_{\gamma \in S_i(ct_m)} \breve{A}_\gamma \, \breve{D}_\gamma^i \, \Delta S, \qquad m = 1, 2, \ldots, M, \quad i = 1, 2, \ldots, N \qquad (2\text{-}8)$$

where $N$ is the number of transducers, $\breve{A}_\gamma$ and $\breve{D}_\gamma^i$ are the values of the absorber function $A(\mathbf{r})$ and the directivity function $D_i(\mathbf{r})$ at the integration points shown by red cross signs in Fig. 2-2, and $\Delta S = 2\pi r_m / N_m$ in the 2-D case and $\Delta S = 4\pi r_m^2 / N_m$ in the 3-D case, with $r_m = c\, t_m$. $\breve{A}_\gamma$ can be written in terms of the nodal values $\hat{A}_{\gamma j}$ using Eq. (2-9). Each integration point $\breve{A}_\gamma$ is surrounded by the nodes in the set $N_s(\gamma)$ of the image (the set $N_s(\gamma)$ has 4 members in the 2-D case and 8 in the 3-D case). The value $\breve{A}_\gamma$ is expressed as a linear combination of the surrounding nodes $\hat{A}_{\gamma j}$ as follows:

$$\breve{A}_\gamma = \sum_{j \in N_s(\gamma)} a_j \, \hat{A}_{\gamma j} \qquad (2\text{-}9)$$

where the coefficient $a_j$ gives the contribution of node $\hat{A}_{\gamma j}$ to the sampled point value $\breve{A}_\gamma$.

Fig. 2-2. Element $i$ receives the signal from a considerable region of the medium, but with different directivity effects. Sampled points are shown with red cross signs; they are equally spaced. One sampled point with its surrounding nodes is also shown.

For example, Fig. 2-2 shows one $\breve{A}_\gamma$ enclosed by four $\hat{A}_{\gamma j}$'s for a 2-D image. The coefficients can be calculated from the position of the sampled point relative to the $\hat{A}_{\gamma j}$'s as follows:

$$a_1 = 1 - \delta l_1 - \delta l_2 + \delta l_1 \delta l_2 \qquad (2\text{-}10)$$
$$a_2 = \delta l_2 - \delta l_1 \delta l_2 \qquad (2\text{-}11)$$
$$a_3 = \delta l_1 - \delta l_1 \delta l_2 \qquad (2\text{-}12)$$
$$a_4 = \delta l_1 \delta l_2 \qquad (2\text{-}13)$$

where $(\delta l_1, \delta l_2)$ is the normalized Cartesian position of the sampled point $\breve{A}_\gamma$ relative to the node $\hat{A}_{\gamma 1}$. In addition, Fig. 2-3 shows one $\breve{A}_\gamma$ surrounded by eight $\hat{A}_{\gamma j}$'s for a 3-D image. Similar to the 2-D case, the coefficient $a_j$ can be calculated using the position of the sampled point relative to $\hat{A}_{\gamma j}$ as follows:

$$a_j = (1 - \delta l_{1j})(1 - \delta l_{2j})(1 - \delta l_{3j}) \qquad (2\text{-}14)$$

where $(\delta l_{1j}, \delta l_{2j}, \delta l_{3j})$ is the normalized Cartesian position of the sampled point $\breve{A}_\gamma$ relative to the node $\hat{A}_{\gamma j}$. Equation (2-14) applies to all the $\hat{A}_{\gamma j}$'s that enclose the sampled point $\breve{A}_\gamma$.

Fig. 2-3.
One sampled point in 3-D space with the surrounding nodes.

By substituting Eq. (2-9) into Eq. (2-8), we obtain the following:

$$R_i(t_m) = \sum_{\gamma \in S_i(ct_m)} \sum_{j \in N_s(\gamma)} \Delta S \, \breve{D}_\gamma^i \, a_j \, \hat{A}_{\gamma j} \qquad (2\text{-}15)$$

where $S_i(ct_m)$ is the set of $N_m$ discretized points at radius $ct_m$ from sensor $i$, and $N_s(\gamma)$ is the set of neighbors of $\breve{A}_\gamma$. For example, in Fig. 2-2, the points in $S_i(ct_m)$ are shown with red cross signs, and $N_s(\gamma) = \{15, 16, 21, 22\}$ for $\gamma = 4$. For each $i$ and $m$, we obtain one linear equation with at most 4 or 8 times $N_m$ of the unknowns. This linear equation fills row $m + (i-1)M$ of the coefficient matrix $\mathbf{F}$, and the value $R_i(t_m)$ fills the same row of the vector $\mathbf{R}$. Considering the number of transducers/transducer locations $N$ and the length of the sampled preprocessed signal of each transducer $M$, we obtain a system of $N \times M$ linear equations:

$$\{\mathbf{R}\}_{NM \times 1} = [\mathbf{F}]_{NM \times N_p} \, \{\hat{A}\}_{N_p \times 1} \qquad (2\text{-}16)$$

Sparsity regularization

In a regularization process, we add constraints to make the system determined and well-conditioned. When regularization is used, the number of measurements required for good image reconstruction may be far less than prescribed by Shannon's sampling theory [116]. As noted for Eq. (2-16), there are more unknowns than there are equations. Assumptions such as sparsity of the distribution of absorbers $A(\mathbf{r})$ can drastically reduce the number of unknowns. The main advantage of the DCT is its energy compaction property: in the DCT domain, the coefficients corresponding to the highest frequencies can be neglected to achieve dimensionality reduction [117]. As a result, in the process of solving the inverse problem, the number of unknowns and the memory usage are drastically reduced, and therefore the processing time and numerical conditioning are improved. The DCT has been used in different inverse problems, such as reconstructing subsurface geologic features [118] and reconstructing tissue elasticity [119]. A similar approach can be used in the PA inverse problem.
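The energy compaction that this approach relies on can be illustrated with a short sketch. The smooth test profile and the 10% retention fraction below are arbitrary assumptions chosen for illustration; the DCT matrix is built directly from the definition used in this chapter.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix: rows are the cosine basis functions,
    # with the 1/sqrt(n) and sqrt(2/n) scale factors applied.
    u = np.arange(n)[:, None]
    nd = np.arange(n)[None, :]
    C = np.cos(np.pi / n * (nd + 0.5) * u)
    C[0, :] /= np.sqrt(n)
    C[1:, :] *= np.sqrt(2.0 / n)
    return C

n = 256
x = np.linspace(0.0, 1.0, n)
# A smooth 1-D "absorber profile" standing in for one row of the image.
profile = np.exp(-((x - 0.4) / 0.1) ** 2) + 0.5 * np.exp(-((x - 0.7) / 0.15) ** 2)

C = dct_matrix(n)
coeffs = C @ profile               # forward DCT
energy = coeffs ** 2

k = n // 10                        # keep only the lowest 10% of coefficients
frac = energy[:k].sum() / energy.sum()

truncated = coeffs.copy()
truncated[k:] = 0.0
recon = C.T @ truncated            # inverse DCT (C is orthogonal)
rel_err = np.linalg.norm(recon - profile) / np.linalg.norm(profile)
print(f"energy kept: {frac:.4f}, relative error: {rel_err:.4f}")
```

For a smooth profile such as this one, well over 99% of the energy survives the truncation, which is precisely why discarding high-frequency DCT coefficients is an acceptable way to shrink the unknown vector.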
The general formulation of the DCT for a 1-D signal $f(n_d)$ of length $N_d$ is defined as:

$$f(u) = \sum_{n_d = 0}^{N_d - 1} f(n_d) \cos\left[\frac{\pi}{N_d}\left(n_d + \frac{1}{2}\right) u\right], \qquad u = 0, \ldots, N_d - 1 \qquad (2\text{-}17)$$

We can make the transform orthonormal by multiplying the $f(0)$ term by $1/\sqrt{N_d}$ and the other terms by a scale factor of $\sqrt{2/N_d}$. For signals in 2-D or 3-D, DCT coefficients can be calculated by composing the coefficients along each dimension. The DCT, as a linear transformation, can be represented by matrices:

$$\{\tilde{A}\}_{N_p \times 1} = [\mathbf{T}^{-1}]_{N_p \times N_p} \, \{\hat{A}\}_{N_p \times 1} \qquad (2\text{-}18)$$

where the square matrix $\mathbf{T}^{-1}$ is the DCT matrix, which can be applied to the vector $\hat{A}$ containing the original variables to compute the transform-domain vector $\tilde{A}$. We can rebuild $\hat{A}$ from $\tilde{A}$ using the matrix $\mathbf{T}$:

$$\{\hat{A}\}_{N_p \times 1} = [\mathbf{T}]_{N_p \times N_p} \, \{\tilde{A}\}_{N_p \times 1} \qquad (2\text{-}19)$$

In this method, we assume that most of the energy in the DCT domain is concentrated in the lower spatial frequency portion of the domain. For example, a circular region in the low-frequency part of the domain can be selected. Therefore, we can take just a fraction of the low-frequency DCT coefficients to recover the original distribution of absorbers. This fraction is named the regularization parameter or cutoff ratio $r_c$. Each column of the transform matrix represents a node of the spatial frequency domain. We select a subset of $N_p'$ columns of $\mathbf{T}$ corresponding to the selected dominant coefficients and call the new truncated transform matrix $\mathbf{T_{tr}}$. If $\tilde{A}_{tr}$ is the truncated version of $\tilde{A}$ in the DCT domain, then:

$$\{\hat{A}\}_{N_p \times 1} \approx [\mathbf{T_{tr}}]_{N_p \times N_p'} \, \{\tilde{A}_{tr}\}_{N_p' \times 1} \qquad (2\text{-}20)$$

Now we can rewrite Eq. (2-16) as:

$$\{\mathbf{R}\}_{NM \times 1} = [\mathbf{F}]_{NM \times N_p} \, [\mathbf{T_{tr}}]_{N_p \times N_p'} \, \{\tilde{A}_{tr}\}_{N_p' \times 1} \qquad (2\text{-}21)$$

Thus, DCT sparsity regularization is applied to the PA inverse problem by replacing the absorber distribution $\hat{A}$ with $\mathbf{T_{tr}} \tilde{A}_{tr}$. To solve the inverse problem stated in Eq. (2-21), we take $\mathbf{F}\,\mathbf{T_{tr}}$ as the coefficient matrix and solve for $\tilde{A}_{tr}$. We then reconstruct the image using Eq. (2-20). A schematic of the inversion approach is shown in the diagram of Fig. 2-5(b).
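A toy sketch of this solve pattern follows. The random matrix F here is a stand-in for the physical projection model, the dimensions are arbitrary, and a plain least-squares solver replaces whatever solver one would use in practice; only the structure of the reduction matches the equations above.

```python
import numpy as np

rng = np.random.default_rng(0)

def inv_dct_matrix(n):
    # Columns are orthonormal DCT-II basis vectors, ordered low to high frequency,
    # so that pixels = T @ coefficients (the inverse transform).
    u = np.arange(n)[:, None]
    nd = np.arange(n)[None, :]
    C = np.cos(np.pi / n * (nd + 0.5) * u)
    C[0, :] /= np.sqrt(n)
    C[1:, :] *= np.sqrt(2.0 / n)
    return C.T

Np, Np_tr, M = 64, 12, 40      # pixels, kept DCT coefficients, measurements
T = inv_dct_matrix(Np)
T_tr = T[:, :Np_tr]            # T_tr: keep the lowest-frequency columns only

a_true = T_tr @ rng.normal(size=Np_tr)   # ground truth, sparse in the DCT domain
F = rng.normal(size=(M, Np))             # stand-in for the projection matrix
R = F @ a_true                           # synthetic "measurements"

# Solve the reduced system (F T_tr) a_tr = R, then map back to pixel space.
a_tr, *_ = np.linalg.lstsq(F @ T_tr, R, rcond=None)
a_rec = T_tr @ a_tr
```

Although the full system (40 equations, 64 pixel unknowns) is underdetermined, the reduced system has only 12 unknowns and recovers the image exactly when the image truly lies in the retained low-frequency subspace; this shrinkage in unknowns is the conditioning and memory benefit exploited by DPARS.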
It is well known in signal processing that a sharp cutoff in the Fourier domain introduces ringing in the spatial domain. Therefore, we propose that instead of a step change in the DCT domain (rectangular window), as described in Eq. (2-20), a smooth windowing function be applied as follows:

$$\{\hat{\boldsymbol{A}}\}_{N_p \times 1} \approx [\mathbf{T_{tr}}]_{N_p \times N_p'} \, [\mathbf{W}]_{N_p' \times N_p'} \, \{\tilde{\boldsymbol{A}}_{tr}\}_{N_p' \times 1} \qquad (2\text{-}22)$$

By substituting Eq. (2-22) into Eq. (2-16) we have

$$\{\boldsymbol{R}\}_{NM \times 1} = [\mathbf{F}]_{NM \times N_p} \, [\mathbf{T_{tr}}]_{N_p \times N_p'} \, [\mathbf{W}]_{N_p' \times N_p'} \, \{\tilde{\boldsymbol{A}}_{tr}\}_{N_p' \times 1} \qquad (2\text{-}23)$$

where $\mathbf{W}$ is a diagonal weighting matrix that smoothly attenuates the DCT coefficients near the cutoff edge, within the region from radius $r_c - r_{dec}$ to $r_c$. A Hamming window, shown in Fig. 2-4 for the 1-D case, is applied over a decaying radius $r_{dec}$ to smooth out the discontinuity of the ideal low-pass filter $\mathbf{T_{tr}}$. Equation (2-23) is the final equation of the modified method, named filtered DPARS (FDPARS).

Fig. 2-4. Weighting function used for FDPARS in the 1-D case

For the simulation study, we also compare our simulation results with those achieved by the commonly used Tikhonov regularization method [120]. To improve the conditioning of the problem, we penalize the norm of the absorber distribution, resulting in the following equation:

$$[\mathbf{F}^T]_{N_p \times NM} \{\boldsymbol{R}\}_{NM \times 1} = \left([\mathbf{F}^T]_{N_p \times NM} [\mathbf{F}]_{NM \times N_p} + \lambda [\mathbf{I}]_{N_p \times N_p}\right) \{\hat{\boldsymbol{A}}\}_{N_p \times 1} \qquad (2\text{-}24)$$

where $\mathbf{I}$ is the identity matrix and $\lambda$ is the Tikhonov regularization parameter. We evaluate our approach using simulation and experimental data, as described in the next sections.

Fig. 2-5. (a) Experimental set-up for PAT; a close-up of the probe and the rotary stage is shown at the upper left corner. (b) The structure of the DPARS algorithm

Transducer calibration

Sensor directivity is important in US transducer design [121], [122]. The directivity function of transducer $i$, $D_i(\boldsymbol{r})$, is defined as the PA signal detected by the transducer when a point source is at an arbitrary position $\boldsymbol{r}$, normalized by the PA signal detected along the transducer element axis.
The directivity information is required for the DPARS inversion algorithm, and can be obtained from a calibration procedure. For the calibration experiment, we imaged a point source at different locations with respect to the transducer, as shown in Fig. 2-6(a). The PA signal recorded by the linear transducer is defined as the point source response. The point source was kept fixed at its location and, instead, the transducer was moved with respect to the point source. With this approach, the light intensity received by the point source is constant. The point source should ideally be of infinitesimal size; a black metal bead with a diameter of 1 mm, illuminated by the laser, was used in the calibration experiment. The locations of the point source were specified in the axial/elevational/lateral (x, y, z) coordinate system attached to the transducer (see Fig. 2-6(a)). The point source and the transducer were kept in a water bath for acoustic coupling. The transducer was attached to a translational stage and was moved along the axial and elevational axes. Instead of moving the transducer or the point source along the lateral axis, we assumed that the elements of the linear transducer are identical, so the intensity that would have been measured by the element at z = 0 with the source at (x, y, z0) is the same as that measured by the element at lateral location z = -z0 with the source at (x, y, z = 0). We imaged the point target at approximately the lateral center of the transducer (z = 0), at the axial range of 26 mm to 86 mm in increments of 5 mm, and for each axial value we covered the elevational range of 0 to 14 mm in increments of 1 mm. For each location of the point source, the RF data were collected from the 128 elements of the transducer using the data acquisition card, and the data were band-pass filtered to keep only the frequency components in the working range of the transducer. The main element of the transducer (assumed to be at z = 0 in Fig.
2-6(a)) is defined to be the closest element to the point source, which also receives the highest PA signal over all point source locations. The ratio of the maximum absolute intensity measured at a given point source location to the overall maximum absolute intensity measured over all point source responses gives the relative directivity response of the main (center) transducer element. We then used linear interpolation to obtain the calibration result at arbitrary axial and elevational values. Fig. 2-6(b) shows the maximum measured intensity received by the main element when the point source was moved in the elevational-axial plane, and Fig. 2-6(d) shows the maximum measured intensity received by different elements when the point source was moved only in the axial direction. Fig. 2-6(c) and Fig. 2-6(e) show the transducer calibration results after compensating for attenuation. In contrast to the directivity, the attenuation depends on the medium. If the point source is moved along the axial direction, the intensity drop is due only to the attenuation. Therefore, to compensate for the attenuation and measure only the directivity effect, the maximum intensity recorded for each location (x, y, z) of the point source is normalized by the maximum intensity recorded at (x, 0, 0). Fig. 2-7 shows DAS images corresponding to three calibration experiments. For example, in Fig. 2-7(a) and Fig. 2-7(c) the point source was 14 mm out of the imaging plane; however, the point is still visible in the reconstructed images.

Fig. 2-6. (a) The experimental setup for transducer calibration. A point source was located at different positions in the elevational (y) and axial (x) directions.
The maximum intensity detected by the main transducer element with respect to the location of the point source along the elevational and axial directions is shown in (b), and the maximum intensity detected by different elements, used to measure the lateral dependency of the transducer, is shown in (d); (c) and (e) show the calibration results in which the intensities were normalized by the maximum intensities recorded at the same depth

Fig. 2-7. The US transducer receives a signal when the object is located outside of its imaging (center) plane. As in Fig. 2-6(a), y represents the deviation of the point source from the main plane of the US transducer. The DAS method was used to reconstruct the recorded data when the point source was at (a) depth 33 mm and 14 mm out of plane, (b) depth 37 mm and 6 mm out of plane, and (c) depth 78 mm and 14 mm out of plane.

Simulation methods

We employed the open-source MATLAB toolbox k-Wave to calculate, in the time domain, the PA waves generated from a point source at a specific distance from the detector through the medium [123]. The PA wave response to a distribution of sources was computed by superposition. The intensities of the acoustic waves decrease as they propagate through the tissue and travel to the transducer locations. To gain insight into 2-D and 3-D PAT, we ran two simulation studies. In a 2-D tomography study, a planar simulated ROI of 105 mm × 55 mm was selected, with a speed of sound of 1540 m/s, and with a distribution of absorbers that comprises a circle and three points (Fig. 2-8). As shown in Fig. 2-8(a), the simulated receiving transducer array for 2-D PAT is rotated in the plane of the ROI. The simulated transducer array includes 128 sensing elements over a 38 mm width, and models a standard linear transducer L14-5 (Ultrasonix Medical Corporation, Richmond, BC), which has a width of 38 mm with 128 elements and a center frequency of 7.5 MHz.
The sensing elements do not have an omni-directional response; rather, their directivity is modeled using measurements of the actual L14-5 transducer response (Fig. 2-8(b)). The main directions of the transducers are indicated with red arrows in Fig. 2-8(a). The PAT reconstruction has both a limited angle of view (120°) and a sparse set of views (three views at 60° from each other). The pixel size used in the reconstruction is 0.1 mm × 0.1 mm. We also simulated the reconstruction of a single transversal plane of a 3-D PAT system that uses a rotating linear transducer array; we ran a circular tomography in which we have 60 views over 60 degrees, as shown in Fig. 2-9(a). The simulated sensing elements are positioned on a circle while pointing towards the center. An ROI of 70 mm × 54 mm was selected in the transversal plane, with a distribution of absorbers that simulates a vessel structure and an inclusion, as shown in Fig. 2-9(a). We used the same pixel size and speed of sound as in the 2-D PAT simulation. It is clear from the geometry of the transducer that the elevational lens will drastically affect the reconstruction. In our circular tomography simulation, we used the sensitivity of the transducer element shown in Fig. 2-9(b), which was obtained by moving a point source in the elevational direction of the L14-5 transducer at a distance of 30 mm from the transducer face. The directivity effect of the transducer element is more pronounced in the elevational direction (Fig. 2-9(b)) than in the lateral direction (Fig. 2-8(b)).

Experimental set-up

The schematic of the PAT setup is shown in Fig. 2-5(a). A Q-switched Nd:YAG laser at 532 nm wavelength (Continuum, Inc., Santa Clara, CA, USA) with a pulse width of 5 ns and a repetition rate of 10 Hz was used to illuminate the sample. An optical parametric oscillator (OPO) from the same manufacturer was used to generate the 700 nm wavelength. The density of the laser light was kept below 40 mJ/cm².
The laser burn paper test showed that the laser beam is approximately circular, with a diameter of 2 cm at a distance of 20 cm from the laser output. PA signals were detected by an L14-5 linear transducer array. The Ultrasonix SonixDAQ, a parallel data acquisition system, was used to record pre-beamformed radio-frequency (RF) digital data. The sampling rate was 40 MHz with 12-bit resolution, and the acquisition was performed for all 128 elements at the same time. To perform tomography with a linear transducer, many research groups illuminate a rotating object; in a more realistic scenario, we instead assumed that the laser beam was pointed at a fixed sample, and we moved the transducer rather than the sample. The sample and the transducer were kept inside a water bath for better acoustic wave coupling.

2.2.5.1 Experimental 2-D and 3-D PAT

We fabricated a 2-D phantom by printing a circle and three points on a piece of paper (Fig. 2-12(a)), as in our simulation study. We embedded the paper between two layers of gelatin. We mounted the transducer on a rotary stage with its imaging plane aligned with the mid-plane of the 2-D phantom. As shown in Fig. 2-12(a), we recorded three PA data sets at 60° angle increments. Prior to the experiment, we need the geometrical information of the 2-D tomography set-up. Thus, we imaged a calibration phantom containing a point source. The phantom was fixed, and we took images of the point source at different positions of the probe. Tracking the location of the point source in the different images produces a circle, whose center is the center of rotation of the 2-D tomography set-up. For 3-D PAT, the L14-5 transducer can be maneuvered with any tracked scanning system [78], [124]. In our experiment, the probe was mounted such that its lateral axis coincides with the rotation axis of a motorized rotary stage (Fig. 2-13(a) and Fig.
2-13(b)), and was rotated on a cylindrical surface surrounding the 3-D ROI, with the US plane passing through the cylinder axis. A piece of pencil lead, shown in Fig. 2-13(a), was used as an absorber and mounted on a transparent fishing line. We recorded the PA signals through 140° in increments of 2°. To obtain the geometrical information of the 3-D tomography set-up, the calibration phantom contains a piece of black thread. The thread can be aligned normal to the ground using a weight. The phantom was fixed while we took images of the thread at different positions of the probe. We can then find two opposite positions of the probe that have the thread in-plane; the centerline of the two images gives the rotational axis of the 3-D tomography set-up.

2.3 Results

Simulation results

For a fair evaluation of our DPARS reconstruction approach, we back-projected the preprocessed signals in the DAS reconstruction process. For 2-D PAT, we have three views, each acquired with 128 elements, as described in Section 2.2.4 and illustrated in Fig. 2-8(a) and Fig. 2-8(b). The reconstructed images with the DPARS, FDPARS and DAS methods are shown in Fig. 2-8(d)-(f), respectively. In the circular tomography, due to the sharp directivity effect of the transducer (Fig. 2-9(b)), the best result of the DAS method was achieved when we back-projected the transducer element signals only along the main direction of the transducer elements. This is equivalent to the assumption that the linear probe only receives signals from within the US plane. We then used linear interpolation to fill the gaps and form the DAS image. Model-based and non-model-based algorithms were employed for the PA reconstruction. The reconstructed images are shown in Fig. 2-9(e)-(h) using DPARS, FDPARS, Tikhonov regularization, and DAS, respectively. Fig. 2-9(d) shows the result of DPARS when the directivity effect is not considered. Fig. 2-10 shows the reconstructed images in the presence of noise.

Fig. 2-8.
Simulation results for 2-D PAT. (a) shows the absorber and the scanning geometry. The red arrows show the main directions of the sensors, as shown clearly in the close-up at the upper right corner. PA data were collected at three positions in 60° increments. (b) Measured transducer element sensitivity with respect to the angle of the incident acoustic wave. (c) Actual absorber distribution, and reconstructions with (d) DPARS, (e) FDPARS, and (f) DAS.

Fig. 2-9. Simulation for circular 2-D PAT. (a) Absorber and scanning geometry. The red arrows show the main directions of the sensors, which are perpendicular to the detection surface, as shown in the close-up at the lower right corner. The PA data were collected through 60° in steps of 1°. (b) Sensitivity of the sensor with respect to the angle of the PA point source. (c) Initial distribution of the absorber, i.e., the actual image that should be reconstructed. Results for PAT reconstruction: (d) DPARS, in case the sensors have an omni-directional response, (e) DPARS, (f) FDPARS, (g) Tikhonov regularization, and (h) DAS.

To compare the reconstruction methods, we calculated the contrast-to-noise ratio (CNR) and the RMS error relative to the gold standard of Fig. 2-9(c) or Fig. 2-8(c), corresponding to each method. Prior to these calculations, we need a comparable dynamic range for the reconstructed images. For this, we multiplied each image's pixel values by a factor $\alpha$ in order to minimize the error $e_r$ between the actual and reconstructed images:

$$e_r = \|\alpha \boldsymbol{A} - \boldsymbol{A_0}\|^2 \qquad (2\text{-}25)$$

$$\frac{de_r}{d\alpha} = 0 \qquad (2\text{-}26)$$

$$\alpha = \frac{\boldsymbol{A}^T \boldsymbol{A_0}}{\boldsymbol{A}^T \boldsymbol{A}} \qquad (2\text{-}27)$$

where $\boldsymbol{A}$ and $\boldsymbol{A_0}$ are the vectorized reconstructed and actual images, respectively, and the superscript $T$ denotes the transpose. This modification of the dynamic range is applied equally to all approaches.
The RMS error was calculated using the following equation:

$$RMS = \left(\frac{1}{N_p}\sum_{i=1}^{N_p} \left(A_i - A_i^0\right)^2\right)^{0.5} \qquad (2\text{-}28)$$

where $A_i$ and $A_i^0$ are the reconstructed and actual pixels $i$, respectively, and $N_p$ is the number of image pixels. To compute the CNR, we use Eq. (2-29):

$$CNR = \left(\frac{2\left(m_i - m_b\right)^2}{\sigma_i^2 + \sigma_b^2}\right)^{0.5} \qquad (2\text{-}29)$$

where $m_i$ and $m_b$ are the means of the intensities, and $\sigma_i$ and $\sigma_b$ the standard deviations, at the inclusion location and the background, respectively. Table 2-1 compares the inversion approaches in terms of CNR and RMS error for the 2-D PAT simulation. For circular tomography, the DAS results have an RMS error and CNR of 0.18 and 0.92, respectively. These values can be compared with the values reported for the model-based regularization methods in Fig. 2-11. The RMS error and CNR of the Tikhonov regularization result, Fig. 2-9(g), are 0.16 and 1.72, respectively, while those of DPARS are 0.16 and 2.16. Using FDPARS, the RMS error and CNR can be improved to 0.10 and 2.8, respectively. We added white Gaussian noise to the data to obtain signal-to-noise ratios (SNR) of 20, 30, and 40 dB. Wiener deconvolution was used as the deconvolution operator, with the noise-to-signal power ratio set to 0.1. The inverse problem was solved with different values of the regularization parameter. The results are shown in Fig. 2-11. Reconstructed images with DPARS, FDPARS, Tikhonov regularization and DAS for an SNR of 20 dB are shown in Fig. 2-10.

Fig. 2-10. White Gaussian noise was added to the data. (a) DPARS, (b) FDPARS, (c) Tikhonov regularization, and (d) DAS reconstruction results with noisy data

Fig. 2-11. RMS error and CNR versus regularization parameters for the DPARS method for noisy and noiseless data. (a) and (b) show the sparsity regularization results, and (c) and (d) show those of Tikhonov regularization.

Table 2-1.
CNR and RMS errors of the results of the DAS, DPARS and FDPARS methods for simulated 2-D PAT

  Method:      DAS    DPARS   FDPARS
  CNR:         1.91   3.47    4.64
  RMS error:   0.16   0.13    0.12

Our work has considered realistic PAT scanning geometries in which directional PA sensors do not fully enclose the object. The sparse representation of the absorber distribution provided by the DCT makes our DPARS inversion approach an effective reconstruction method, with good RMS and CNR figures and significantly lower artifacts than in DAS reconstructions. If we do not consider the directivity effect, DPARS is unable to recover the absorbers (Fig. 2-9(d)), and if we do not use sparsity regularization, the linear system of equations is rank-deficient due to the lack of information caused by the directivity effect and the limited angle of view. The noise analysis also shows the robustness of the regularization methods. Fig. 2-11 shows the optimized range of the cutoff ratio that determines the number of DCT coefficients used in the PAT reconstruction. With an SNR of 20 dB, the DAS reconstruction method results in a CNR of 0.88 and an RMS error of 0.23, which can be enhanced to 2.2 and 0.15, respectively, with FDPARS. Comparing these values with those in Fig. 2-11, DPARS outperforms DAS and Tikhonov regularization for a considerable range of cutoff ratios in the presence of noise; we set $r_c$ to 0.3 and $r_{dec}$ to 0.15 for all the images reconstructed with DPARS and FDPARS. Wiener deconvolution reduced the impact of noise on the inversion approaches. The optimal cutoff ratio $r_c$ and decaying radius $r_{dec}$ depend on the pattern of the image and the noise level, which we believe do not change significantly within a specific application. Therefore, once a range of acceptable values of the cutoff ratio is found for a specific application using simulations, it does not have to be changed for different data sets of the same application.
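The evaluation pipeline used throughout these comparisons — the least-squares intensity scale $\alpha$ of Eq. (2-27), the RMS error of Eq. (2-28), and the CNR of Eq. (2-29) — can be sketched as follows (function names are illustrative):

```python
import numpy as np

def scale_to_reference(A, A0):
    """Least-squares dynamic-range factor of Eq. (2-27): alpha = A^T A0 / A^T A."""
    return float(A @ A0) / float(A @ A)

def rms_error(A, A0):
    """Eq. (2-28): RMS difference between reconstructed and actual pixel vectors."""
    return float(np.sqrt(np.mean((A - A0) ** 2)))

def cnr(img, inclusion_mask, background_mask):
    """Eq. (2-29): contrast-to-noise ratio between inclusion and background."""
    mi, mb = img[inclusion_mask].mean(), img[background_mask].mean()
    si, sb = img[inclusion_mask].std(), img[background_mask].std()
    return float(np.sqrt(2.0 * (mi - mb) ** 2 / (si ** 2 + sb ** 2)))

# a reconstruction that is exactly twice the ground truth is fixed by alpha = 0.5
A0 = np.ones(10)
A = 2.0 * A0
alpha = scale_to_reference(A, A0)
```

Applying the same $\alpha$-scaling to every method before computing RMS and CNR is what makes the figures in the tables directly comparable.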
For higher quality images at longer processing times, the number of DCT coefficients can always be increased. In the DPARS reconstruction of the inclusion and vessel structure of Fig. 2-9(e), we had a linear system of equations with 10,904 unknowns and 8,168 equations. For such an under-determined problem, some form of regularization is needed. Sparsity regularization helps improve the conditioning of the problem by reducing the number of unknowns to 741. The condition number of the system after sparsity regularization is 40, while it was 123.7 for the Tikhonov-regularized system. Sparsity regularization also improves the processing time. We performed our processing in Matlab® on a standard PC (Intel® Core™ i7-4770 CPU @ 3.4 GHz, 32 GB RAM). The computing time required to assemble the linear system, Eq. (2-16), was 17 s; this time depends on how efficient the code is and can be reduced. The computing time required to solve the resulting system of equations was 19 s with DCT sparsity regularization, and 228 s with Tikhonov regularization. Thus, sparsity regularization outperforms Tikhonov regularization in every aspect. However, sparsity regularization has a low-pass filter nature, which could affect sharp edges in images. Other sparsifying transforms, such as the wavelet transform, or regularization algorithms, such as l1 regularization, might preserve the sharp edges.

Experimental results

In addition to the numerical simulation, we evaluate the inversion approach experimentally. Fig. 2-12(c), (d), and (f) show the reconstructed images using the DPARS, FDPARS, and DAS methods. The lateral-plane transducer calibration data were fed to the DPARS method. The circle is not fully visible in the DAS image, but the DPARS method provides a reasonable image. In addition, Table 2-2 reports the CNR and the RMS errors for each image reconstruction method. We also consider the linear system in the absence of regularization. Fig.
2-12(e) shows the result. The image has a CNR of 1.27 and an RMS error of 0.2. Comparing these values with the values reported in Table 2-2 shows that sparsity regularization improves the image reconstruction in terms of both factors. The resolution was calculated based on the full width at half maximum (FWHM) of the point source. The resolution of the DPARS image was 0.41 mm, and that of the FDPARS image was 0.45 mm. In 3-D PAT, the linear probe rotates around an axis parallel to the lateral direction of the probe. Fig. 2-13(c) and (d) show the reconstructed images with the DPARS and DAS methods. Both DPARS and DAS methods were applied in 3-D space. Fig. 2-14 shows the transversal sections of the 3-D image at different lateral transducer coordinates. Table 2-2 also shows the CNR and RMS errors corresponding to the reconstruction approaches. The resolutions of the DPARS and FDPARS images were calculated as 0.35 mm and 0.4 mm, respectively.

Fig. 2-12. Results for 2-D PAT. (a) PA data were collected at 3 positions around the sample; (b) the sample, which was printed on paper and kept in the US plane of the transducer. Reconstructed images with: (c) DPARS, (d) FDPARS, (e) no regularization, and (f) DAS.

Fig. 2-13. Experimental set-up and results for 3-D PAT. (a) shows the absorber. The PA data were collected through 140° with a step size of 2°. (b) shows the sample with respect to the transducer. DPARS (c) and DAS (d) were used for the image reconstructions. The Z direction is parallel to the rotation axis and the lateral direction of the transducer.

Table 2-2. CNR and RMS errors of the DAS, DPARS and FDPARS methods for experimental 2-D and 3-D PAT

               2-D PAT                 3-D PAT
  Method:      DAS    DPARS   FDPARS   DAS    DPARS   FDPARS
  CNR:         1.71   2.24    2.53     0.9    1.60    2.67
  RMS error:   0.21   0.17    0.17     0.56   0.13    0.13

Fig. 2-14. Cross-sections of 3-D DPARS, FDPARS, and DAS images at different values of Z.
The Z axis is parallel to the rotational axis.

In our reconstruction, we have assumed that the reconstruction ROI that receives significant illumination is fully enclosed within the transducer scan. Otherwise, the probe detection surface will receive signals from outside the region considered for the reconstruction, and the reconstruction will have artifacts. To deal with this problem, the transducer directivity can be used to define the ROI, with the assumption that absorbers outside this ROI do not produce significant PA waves. For 2-D PAT, due to the softer directivity effect inside the US plane of the linear probe, we expected a better performance from the DAS reconstruction. As Fig. 2-8(f) and Fig. 2-12(f) show, DAS artifacts appear as lines parallel to the face of the transducer; three lines corresponding to the three positions of the transducer are visible. The DPARS algorithm provides an image with fewer artifacts and considerably higher CNR. In our experimental 2-D tomography, we obtained CNR and RMS values close to those of the simulation study. The artifact in the DPARS image was similar to that in the simulation study: the artifacts appear as a low-frequency wave. However, DAS could not recover the circle completely. Noise, non-uniformity of the laser beam, and errors in the assumed geometry of the detection surface may all affect the reconstruction accuracy. We used the same detection surface during the reconstruction process of DAS and DPARS; therefore, the geometrical errors are identical for both methods. These will be studied in greater detail in the future, but our first results indicate that sparsity regularization enables us to compensate for such errors. Generally, an ideal sharp cutoff window is applied to the DCT coefficients, and as a result, ringing artifacts appear near sharp transitions in the DPARS images.
As shown in all the simulation and experimental results of FDPARS, the ringing artifacts are reduced by windowing in the DCT domain. The images are improved mostly in terms of CNR, because ringing effects are most severe near step discontinuities. We started the 3-D PAT with a small sample to assess the feasibility of the reconstruction approach; in the next section, large-scale systems are investigated. DPARS provided an image with better CNR and RMS error. For such 3-D experiments, the high number of data acquisitions and the uncertainty in the transducer location could cause artifacts in the reconstructed images. However, DPARS results in images with considerably higher resolution than DAS, as shown in Fig. 2-13.

2.4 Large scale system

In Section 2.2, the DPARS method was introduced. The direct method used in DPARS for solving the resulting system of equations has a very high computational cost, both in terms of memory and processing. To overcome the memory problem, an iterative solver such as the Conjugate Gradient Method (CGM) can be used, in which the coefficient matrix does not need to be stored completely in memory. In addition, parallel processing can be used to reduce the computation time. In this section, a parallel computing approach to PAT that combines DPARS with CGM on Graphics Processing Units (GPUs) is presented. This approach, called iterative DPARS (iDPARS), enables us to reconstruct large 3-D volumes. According to the DPARS algorithm, as discussed in Section 2.2.2, solving the system of linear equations of Eq. (2-21) yields the absorber distribution. Equivalently, Eq. (2-21) can also be written as

$$\{\boldsymbol{R}\}_{NM \times 1} = [\mathbf{F}]_{NM \times N_p} \, [\mathbf{T}]_{N_p \times N_p} \, \{\tilde{\boldsymbol{A}}\}_{N_p \times 1} \qquad (2\text{-}30)$$

where $\mathbf{T}$ is the inverse DCT matrix in which the elements corresponding to the high frequencies were set to zero. For large-scale systems, the size of the matrix $\mathbf{F}$ is beyond normal computer data storage capacity.
In addition, the computational cost of the multiplication $\mathbf{F}\,\mathbf{T}$ is high. For such systems, iterative methods that generate a sequence of improving approximate solutions are suggested, in which storing the matrix $\mathbf{F}$ is not necessary. For computational efficiency, instead of forming $\mathbf{F}\,\mathbf{T}$, the product $\mathbf{T}\tilde{\boldsymbol{A}}$ is calculated at each iteration. The inverse of a 3-D DCT is a separable product of the inverses of the 1-D DCTs, with the 1-D inverse DCT applied along one dimension at a time. Thus, instead of calculating $\mathbf{T}\tilde{\boldsymbol{A}}$ directly for a large-scale system, these three steps implement the inverse DCT of the image $\tilde{\boldsymbol{A}}$:

$$[\hat{\boldsymbol{A}}_1]_{N_1 \times N_2 N_3} = [\mathbf{T_1}]_{N_1 \times N_1} \, [\tilde{\boldsymbol{A}}]_{N_1 \times N_2 N_3} \qquad (2\text{-}31)$$

$$[\hat{\boldsymbol{A}}_2]_{N_2 \times N_3 N_1} = [\mathbf{T_2}]_{N_2 \times N_2} \, [\hat{\boldsymbol{A}}_1]_{N_2 \times N_3 N_1} \qquad (2\text{-}32)$$

$$[\hat{\boldsymbol{A}}]_{N_3 \times N_1 N_2} = [\mathbf{T_3}]_{N_3 \times N_3} \, [\hat{\boldsymbol{A}}_2]_{N_3 \times N_1 N_2} \qquad (2\text{-}33)$$

where $\hat{\boldsymbol{A}}$ is the space-domain representation of the image, and $N_i$ and $\mathbf{T_i}$ are the image size and inverse DCT matrix corresponding to dimension $i$, respectively. The GPU was used to handle the matrix multiplications during the CGM implementation. Because of the difficulty of storing the entire coefficient matrix $\mathbf{F}$, a block representation of the coefficient matrix was employed (see Fig. 2-15). Each block represents the equations corresponding to one transducer/transducer location. These blocks can be calculated and multiplied by vectors in parallel at each iteration of the CGM.

Fig. 2-15. Block representation of the coefficient matrix. Each block corresponds to one transducer/transducer location

Results

2.4.1.1 Simulation analysis

An ROI of size 200 × 200 × 280 voxels with increments of 0.1 mm was selected. The ROI contains a vessel structure embedded in an ellipsoid-shaped background, as shown in Fig. 2-16(a). A simulated linear transducer array was used as the detector. The array contains 128 elements spaced 0.3 mm apart. The PA signals were collected over 120 degrees in increments of 2 degrees. The rotational axis was parallel to the lateral direction of the transducer, or the Z direction in the image coordinate system.
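As an aside on the implementation of Eqs. (2-31)–(2-33), the dimension-at-a-time inverse DCT can be sketched in NumPy (standing in here for the thesis' GPU implementation; the orthonormal variant of the transform is assumed):

```python
import numpy as np

def idct_matrix(n):
    """Orthonormal 1-D inverse-DCT matrix T_i (transpose of the DCT-II matrix)."""
    u = np.arange(n)[:, None]
    k = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi / n * (k + 0.5) * u)  # forward DCT-II
    C[0, :] /= np.sqrt(2.0)
    return C.T  # orthonormal, so the inverse is the transpose

def idct3(A_tilde):
    """Separable 3-D inverse DCT (Eqs. 2-31 to 2-33): apply the 1-D inverse
    along one dimension at a time instead of forming the huge N_p x N_p
    matrix T, so only the small per-dimension matrices T_i are stored."""
    A = A_tilde
    for axis in range(3):
        A = np.moveaxis(A, axis, 0)          # bring this dimension to the front
        n = A.shape[0]
        shape = A.shape
        A = (idct_matrix(n) @ A.reshape(n, -1)).reshape(shape)
        A = np.moveaxis(A, 0, axis)          # restore the axis order
    return A
```

Inside a matrix-free CGM iteration, this routine plays the role of the product $\mathbf{T}\tilde{\boldsymbol{A}}$ while never materializing $\mathbf{T}$.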
The measured directivity of the transducer, both in plane (Fig. 2-8(b)) and out of plane (Fig. 2-9(b)), was taken into account. Fig. 2-16(b) and (c) show the results of iDPARS and DAS, respectively. In the DAS method, the PA signals are back-projected from the linear transducer locations onto the main planes of the linear transducer; this is equivalent to the assumption that the linear transducer does not receive signal from outside its main plane. The sparse and limited data collection, along with the sensor directivity, causes the rank deficiency of the system. Nevertheless, iDPARS managed to recover the absorbers. DAS recovered some of the boundary of the ellipsoidal background, as well as the part of the vessel that was almost parallel to the lateral direction of the transducer; this happens in every back-projection algorithm. However, DAS was unable to reconstruct the tilted branch of the vessel. Table 2-3 shows that iDPARS is superior to DAS in terms of CNR and RMS error.

Fig. 2-16. (a) The inclusion and the tomography scanning geometry. (b) iDPARS and (c) DAS images of the simulation study.

Table 2-3. CNR and RMS error of the images reconstructed by the DAS and iDPARS methods

  Method:      DAS    iDPARS
  CNR:         0.16   5.21
  RMS error:   0.80   0.07

2.4.1.2 Experiment analysis

The inversion approach was also evaluated experimentally. Laser beams with an energy density below 40 mJ/cm² were directed at the top surface of the sample. The sample was a leaf, slightly tilted as shown in Fig. 2-17(a). The PA signals were collected by an L14-5 linear transducer array. The probe was held by a motorized stage custom-designed in our lab. For limited-view PAT, the probe was rotated around the sample. The rotational axis was parallel to the lateral direction of the probe, or the Z direction in the image coordinate system. The PA signals were collected over 120 degrees with a step size of 2 degrees. Fig.
2-17(a) shows the scanning geometry of the leaf imaging. A Sonix MDP, a PC-based US machine, coordinated the automated imaging: it sends the appropriate triggers to the laser system, the data acquisition card, and the motorized stage. Fig. 2-17(b) and (d) show the front view of the leaf, and Fig. 2-17(c) and (e) show the side view of the leaf, in the iDPARS and DAS images, respectively. The images show that iDPARS outperforms DAS. The directivity pattern of the transducer was used during the iDPARS reconstruction. Laser light non-uniformity within one laser beam, as well as non-uniformity among different laser firings, can cause errors in the image reconstructions. Temperature fluctuation of the laser room may also affect the laser energy output. In future work, we could monitor the laser beam pattern during experiments to reduce the effect of laser light non-uniformity on the results. Errors in finding the transducer locations can also affect the reconstructed images. The inversion algorithm was implemented on an NVidia GPU (GeForce GTX 1080, 2,560 CUDA cores). The computing time required to reconstruct an iDPARS image of size 100 × 100 × 140 was 11 minutes.

Fig. 2-17. (a) The sample and the tomography scanning geometry. (b) iDPARS and (c) DAS images of the experimental study.

2.5 Conclusion

We presented DPARS, a new model-based, deconvolution-based PA reconstruction method that employs sparsity regularization. We have demonstrated that the method works with limited-angle views and in the presence of sensor directivity. The method creates images by solving large linear systems of equations, reducing the number of unknowns through a sparse representation of the distribution of absorbers. Sparsity regularization enhances the numerical conditioning and shortens the computing time required to generate an image. The DPARS algorithm is categorized as a semi-analytical method.
Due to memory and computational constraints, direct methods cannot handle such large-scale systems. Iterative methods can be used instead, in which the large-scale system is divided into smaller tasks that are executed together through parallel computation. We therefore also presented iDPARS, an iterative, model-based PA reconstruction method that works well with conventional linear ultrasound transducers. The inversion approach was tested with both simulation and experimental data.

Chapter 3 ˗ Prostate photoacoustic imaging configuration
3.1 Introduction
In chapter 2 it was shown that regularization is essential to reach a solution for limited-angle PAT. Sparsity regularization improves the conditioning of the inverse problem and reduces the computing time. The presented reconstruction approach also accounts for the directivity of the transducers. These conditions are relevant to prostate PAT, which is discussed in this and the next chapter. In this chapter, different configurations for prostate imaging are introduced and analyzed. In practice, two US transducers are used in prostate studies (see Fig. 3-1): a transrectal ultrasound (TRUS) transducer, which can be used to examine the prostate in the posterior-to-anterior direction, and a robotically positioned pick-up ultrasound transducer [90], which can be used to examine the prostate in the anterior-to-posterior direction. In this chapter, we evaluate the feasible scanning geometries for intra-operative PAT, including the use of the TRUS or pick-up transducer. Simulation and experimental studies demonstrate that PA imaging with a pick-up transducer that rotates about the prostate is significantly better than imaging that uses only a trans-rectal US transducer. To implement PAT, we propose and demonstrate a new experimental set-up with the da Vinci robotic system and our pick-up US transducer.

Fig. 3-1. Arrangement of TRUS and “pick up” probes with respect to the prostate.
The TRUS probe is located posteriorly while the “pick up” probe is located anteriorly to the prostate.

3.2 Materials and methods
PAT consists of the recovery of an initial spatial pressure distribution p₀(𝒓) generated in tissue by an electromagnetic (EM) pulse of intensity I(t) in an imaging ROI. The initial pressure p₀(𝒓) depends on the physical properties of the absorbers in the object, with p₀(𝒓) = Γ A(𝒓), where Γ = c²β/C_p is the Gruneisen parameter (c: speed of sound, β: the isobaric volume expansion coefficient, and C_p: the specific heat), which is assumed to be constant throughout the medium, and A(𝒓) is a spatial EM absorber function. The initial pressure p₀(𝒓) generates a wave through the medium described by the following non-homogeneous wave equation [46]:

(∇² − (1/c²) ∂²/∂t²) p(𝒓, t) = −p₀(𝒓) dI(t)/dt (3-1)

where p(𝒓, t) is the induced pressure at location 𝒓 and time t. Image reconstruction techniques are used to create 2-D and 3-D images from subsets of the PA signals p(𝒓, t) detected at specific locations, in order to find the initial pressure source distribution p₀(𝒓). In this chapter, we used Delay-and-Sum (DAS) and iterative deconvolution-based PA reconstruction with sparsity regularization (iDPARS) as the reconstruction methods. DAS and other back-projection-based algorithms are popular for PA image reconstruction in cases in which we do not have a full detection view of the US waves around the ROI, or when a linear array of transducers is used [1], [22], [31], [33], [34]. iDPARS is a model-based PA reconstruction algorithm that was introduced in [125], [126] and was further developed to solve large-scale systems [127]. iDPARS enables us to recover images from limited-view PAT data acquired with transducers of limited directivity, as encountered in prostate imaging with conventional US transducers. In this inversion approach, a linear system is assembled according to the detection surface and directivity information of the transducers.
Then, the system is solved in the sparse discrete cosine transform domain.
Scanning geometries
To perform prostate PAT, one approach is to use the TRUS transducer, which can sweep the prostate volume by rotating about its longitudinal axis. This approach captures the acoustic energy converging towards the TRUS axis. Another approach is to use the pick-up US transducer, carried by the da Vinci robotic system, to capture the acoustic energy diverging from the prostate. This transducer is located anteriorly to the prostate and can be moved on a cylindrical surface around a fixed axis below the prostate (Fig. 3-1). Note that this transducer does not have to move along a circular trajectory; rather, it can conform to the tissue while rotating tangentially around the tomography axis and moving radially. The details of the robot implementation that allows such an approach have been developed and will be presented in the next chapter. In this chapter, we aim to compare the detection surfaces associated with the TRUS transducer (convex detection surface) and with the pick-up transducer (concave detection surface). These are illustrated in Fig. 3-2. To compare the methods in a meaningful way, we assume that we have a 90° angle of view from each point of access, and that the same number of laser shots (90) is available to illuminate the ROI and collect the PA wave energy. The pick-up and TRUS transducers follow quarter cylinders of radius 5 cm and 1 cm, respectively. Fig. 3-2 shows the arrangement of the samples and the different scanning geometries.
Simulation study
We used an open-source MATLAB toolbox called k-Wave [123] to simulate the US waves generated from a point source located at an arbitrary location.
According to [32], the ROI comprises a collection of point sources of US waves, with the initial point-source intensity at location 𝒓 determined by a given (assumed) spatial absorber function A(𝒓), a known Gruneisen parameter Γ, and a uniform pulse intensity I(t) that is independent of spatial location throughout the ROI (Eq. (3-1) and the paragraph preceding it). Most published PAT reconstruction techniques make similar assumptions [46], [96], [128]. Therefore, we can calculate the PA-induced US waves from the point sources using superposition. We also employed simulated transducers with directivity effects; that is, the transducer has a different sensitivity to PA waves arriving from different directions, as illustrated in Fig. 3-3, which shows the results of calibration experiments on an actual conventional linear transducer. Fig. 3-3(a) and (b) show the transducer response to acoustic waves coming from within and from outside of the transducer plane, respectively. The simulated transducers have the same directivity properties as the actual transducers. To compare images acquired through convex and concave detection surfaces similar to those in prostate imaging, we selected 3-D ROIs of 300 × 300 × 250 voxels with a pitch of 0.1 mm. The speed of sound was taken to be 1540 m/s. As shown in Fig. 3-2, we include a 3-D structure consisting of two pencil leads with a diameter of 0.5 mm in one ROI (Fig. 3-2(a), (b)), and one spherical inclusion with a diameter of 10 mm in another ROI (Fig. 3-2(c), (d)). We also simulate a non-sparse reference image, in which the absorption map matches a 3-D T2 MRI image of a human prostate, with simulated neurovascular bundles (NVBs) added to the image (Fig. 3-2(e) and (f)). The prostate is assumed to be illuminated through the urethra using a side-firing fiber.
We assume that the light intensity decreases exponentially with distance from the fiber location, with the decay constant given by the effective attenuation coefficient [129]. An effective attenuation coefficient of 1 cm⁻¹ was used for this simulation. Fig. 3-4 shows the mid cross-section of the prostate after illumination. We assume that the samples are illuminated 90 times by the laser light, once per degree of transducer rotation as it covers the imaging surface. Fig. 3-2 shows the arrangement of the absorber distribution and the transducer positions of the simulation study.

Fig. 3-2. Different geometries of the US transducers with respect to the ROI for the simulation of PAT. PAT with the pick-up and TRUS probes is simulated. The first, second, and third rows simulate PA imaging of two pencil leads, a spherical inclusion, and a prostate, respectively. The 90° angles of view are shown in a cross-section normal to the lateral direction for each PAT. The transducers are directional, and the red arrows show the direction of maximum sensitivity, which is perpendicular to the detection surface. The left column shows the fan-shaped tomography with the TRUS probe and the right column shows the cylindrical tomography with the pick-up probe. The transducers are rotated about an axis parallel to the lateral direction (Y direction) of the probes.

Fig. 3-3. Sensitivity of the linear transducer to PA waves coming (a) from inside the main plane (lateral transducer direction) and (b) from outside the US plane (elevational transducer direction).

Fig. 3-4. Mid cross-section of the prostate after illumination. Transurethral illumination was simulated; therefore, regions farther from the urethra receive less illumination.

For each PAT image, Fig. 3-2 also shows a cross-section in the lateral direction (Y direction) of the transducers.
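The exponential fluence model described above can be sketched numerically as follows (a toy sketch: the function name and point-source geometry are illustrative, with μ_eff = 1 cm⁻¹ as in the simulation).

```python
import numpy as np

MU_EFF_CM = 1.0  # effective attenuation coefficient used in the simulation (cm^-1)

def fluence(points_cm, fiber_cm, mu_eff=MU_EFF_CM):
    """Relative light fluence at each point, assuming simple exponential
    decay with distance from a point-like fiber source (toy model)."""
    d = np.linalg.norm(points_cm - fiber_cm, axis=-1)
    return np.exp(-mu_eff * d)

# Two points at 0.5 cm and 2 cm from the fiber (placed at the origin here).
pts = np.array([[0.5, 0.0, 0.0], [2.0, 0.0, 0.0]])
fiber = np.zeros(3)
phi = fluence(pts, fiber)
ratio = phi[1] / phi[0]  # equals exp(-1.5): the far point gets ~22% of the near fluence
```

In a full simulation, the initial pressure at each voxel would then be weighted by this fluence in addition to the local absorption, producing the fall-off seen in the mid cross-section of Fig. 3-4.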
Each quarter-circle in the image corresponds to 90 distinct positions, in 1° increments, of the directional linear US transducer arrays. The maximum sensitivity of the US transducers is in the radial direction, pointing towards the ROI. At each distinct position, we have a linear array (aligned with the tomography axis Y) of 128 directional transducer elements. The probes’ widths, numbers of transducer elements, and pitches are equal to those of the real TRUS and pick-up probes reported in the Experimental setup section 3.2.3. We also perform an imaging sensitivity analysis with respect to transducer position errors. The localization error of the da Vinci tool tip has been reported to be approximately 1-2 mm [130]. In our simulation of tomography with the pick-up probe, we add normally distributed random position errors with a standard deviation of 2 mm to the location of the tool tip in the X-Y plane. In addition, random angle errors with a standard deviation of 3° are added to the orientation of the da Vinci tool tip around the X axis. We assume that the errors vary smoothly with the robot position. Therefore, we filter the random errors with low-pass filters with cutoff frequencies of 1 mm⁻¹ and 5 degree⁻¹, respectively; as a result, the differences between the errors of adjacent positions remain below 0.5 mm and 1.5°. We then record the PA waves with the simulated pick-up probe. In addition, in our simulation of tomography with the TRUS probe, we consider errors in the position and orientation of the rotational axis. These depend on the accuracy of the assembly of the probe on the supporting mechanism of the TRUS robot. We consider a worst-case location error of 3 mm for the rotational axis of the probe in the X-Z plane and an orientation error of 3° around the X axis. We then record the PA waves with the simulated TRUS probe.
Experimental setup
The schematic of the PAT setup is shown in Fig. 3-5.
A Q-switched Nd:YAG laser at a 532 nm wavelength (Continuum, Inc., Santa Clara, CA, USA) with a pulse width of 5 ns and a repetition rate of 10 Hz was used as the light source. An optical parametric oscillator (OPO) from the same manufacturer was used to generate light at a 700 nm wavelength. The laser light is delivered to the sample with an optical fibre. The PA system sends two triggers to the laser system: the flash-lamp (firing) trigger and the Q-switch trigger. The laser energy can be controlled by the delay between these triggers. The energy of the laser light was kept at 30 mJ/pulse. We generate the triggers using a trigger generator custom-designed in our lab [131].

Fig. 3-5. Experimental set-up for 3-D PAT. The goal is to compare fan-shaped tomography with the TRUS probe and cylindrical tomography with the pick-up probe. The pick-up probe is held by a da Vinci tool and can be manipulated for intra-operative tissue scanning. The dVRK system is used to control the da Vinci arm. In coordination with the PA system, the da Vinci and TRUS robots move the probes to different positions around the sample, and a series of PA data is recorded at those positions.

In our current setup, the fibre-delivered laser beam was pointed at a fixed sample. We used a custom-designed laser-fibre coupling [132]. A 1500 µm fibre was used in end-firing mode and placed 8 cm away from the sample to illuminate the entire ROI with every single shot. PA signals were detected by the pick-up [90] or TRUS probes shown in Fig. 3-5. The pick-up transducer has a width of 28 mm, with an array of 128 elements and a center frequency of 10 MHz. We also used the linear array of the Ultrasonix biplane TRUS brachytherapy probe, which has a width of 55 mm, an array of 128 elements, and a bandwidth of 5-9 MHz.
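From these specifications, approximate element positions can be derived for the reconstruction geometry (a sketch; the pitch is approximated here as array width divided by element count, and the helper name is hypothetical).

```python
import numpy as np

def element_positions_mm(width_mm, n_elements):
    """Lateral element-center positions of a linear array, centered at 0.
    Pitch is approximated as array width divided by element count."""
    pitch = width_mm / n_elements
    centers = (np.arange(n_elements) - (n_elements - 1) / 2.0) * pitch
    return pitch, centers

pickup_pitch, pickup_x = element_positions_mm(28.0, 128)  # ~0.219 mm pitch
trus_pitch, trus_x = element_positions_mm(55.0, 128)      # ~0.430 mm pitch
```

These per-element positions are what a reconstruction such as DAS needs in order to compute time-of-flight delays from each image voxel to each channel.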
The Ultrasonix SonixDAQ, a parallel data acquisition card, was used to record pre-beamformed radio-frequency (RF) digital data at 40 MHz with 12-bit resolution. The sample and the transducers were kept inside a water bath for better acoustic coupling.
3.2.3.1 Data collection with the da Vinci robot
A da Vinci patient-side manipulator and the pick-up transducer are shown in Fig. 3-5. The probe is small enough to pass through a standard laparoscopic incision; it is placed inside the abdominal cavity before the surgery and remains there throughout the surgery. The da Vinci patient-side manipulator with a commonly used Pro-Grasp instrument is used to grasp and manipulate the transducer in a repeatable manner [90]. In coordination with the laser and the US machine, the da Vinci robot moves the pick-up probe to different locations around the ROI. A series of PA data is collected at those locations. The PA data is processed into a 3-D image of the ROI. Our goal is for the PA data acquisitions to occur along a cylindrical detection surface. Therefore, the pick-up transducer moved by the da Vinci manipulator needs to follow a circular trajectory with the transducer pointed at the cylinder axis. Fig. 3-6 illustrates a typical pick-up transducer grasper trajectory, which keeps the US plane of the pick-up transducer crossing the tomography axis. To accomplish such a trajectory, we use the da Vinci Research Kit (dVRK). The dVRK is an open-source platform consisting of hardware, firmware, and software components [133]. The hardware comprises electronics to read the robot encoders and to send out motor currents, and FPGAs to control both the master and the slave robots. The PID control is processed in the software components and implemented through the firmware, which also performs safety functions.
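The joint-level control just mentioned follows the standard discrete-time PID form; a generic sketch is given below (the class, gains, and time step are illustrative only, not dVRK values).

```python
class PID:
    """Generic discrete-time PID update, of the kind joint controllers
    implement (illustrative sketch; gains are hypothetical)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        # Proportional on current error, integral accumulated over time,
        # derivative from the backward difference of the error.
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a real controller, the output of such an update would be clamped to safe actuator limits at every control tick before being sent to the motors.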
Cartesian-space teleoperation and autonomous control are carried out on a host PC, using software components developed at Johns Hopkins University as described in [133]. The dVRK controllers have been installed on the first-generation da Vinci surgical system in our lab [134]. In our current setup, the dVRK controllers autonomously perform tomography under the aforementioned conditions. The dVRK controllers rotate the pick-up probe on a quarter-circle of radius 5 cm, similar to the task described in the scanning geometries section and Fig. 3-2(b) and (d). We embed the phantoms in a water bath to ensure acoustic coupling. PA signals are collected over 90° in increments of 1°, as in our simulation. At every desired location of the pick-up probe, the dVRK system sends a trigger over a TCP/IP socket to the PA system. Upon triggering, the PA system enables the laser to start firing and the data acquisition card to start collecting raw echo data through accurate triggers. We also record the location of the transducer at each data acquisition step. The dVRK system is thus used to record the locations of the tool tips, and consequently the pick-up transducer locations, which are used for image reconstruction.

Fig. 3-6. A da Vinci instrument is used to maneuver the pick-up probe. The tool tip of the instrument follows a circular trajectory, while the control strategy keeps the pick-up probe pointed towards the tomography axis to perform a cylindrical tomography.

3.2.3.2 Data collection with the TRUS robot
Our research group previously designed a robotic TRUS manipulator to automatically rotate the TRUS probe [135], as shown in Fig. 3-7. We collect PA signals over 90° in increments of 1° using the TRUS robot. The probe positioning with respect to the samples is similar to Fig. 3-2(a) and (c). The robot receives a command from our PA system and rotates the TRUS probe. Therefore, the probe is automatically rotated about its axis and performs fan-shaped tomography.
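Both scans therefore amount to sampling transducer poses along a circular arc about the tomography (Y) axis; a simplified geometric sketch (hypothetical function name and frame conventions) is:

```python
import numpy as np

def scan_poses(radius_cm, n_views=90, step_deg=1.0):
    """Transducer positions on a quarter-circle in the X-Z plane about the
    tomography (Y) axis, with the sensitive direction pointing radially
    at the axis. A simplified sketch of the scan geometry."""
    angles = np.deg2rad(np.arange(n_views) * step_deg)
    pos = np.stack([radius_cm * np.cos(angles),
                    np.zeros_like(angles),          # constant along Y
                    radius_cm * np.sin(angles)], axis=1)
    look = -pos / radius_cm                          # unit vectors toward the axis
    return pos, look

pickup_pos, pickup_dir = scan_poses(5.0)  # cylindrical scan, 5 cm radius
trus_pos, trus_dir = scan_poses(1.0)      # fan-shaped scan, 1 cm radius
```

The recorded pose at each of the 90 stops (position plus the radial look direction) is exactly what the reconstruction uses to place each linear-array acquisition in the volume.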
At each desired location, the PA system is enabled through triggers. In addition, we record the location of each data acquisition step; the location corresponding to each dataset is used for our volumetric reconstruction.

Fig. 3-7. The TRUS probe can be rotated around its axis by a custom-designed robot. In coordination with the PA system, the TRUS robot enables fan-shaped tomography with the TRUS probe.

Phantoms
To evaluate the feasibility of intra-operative prostate PAT with our system, a 3-D structure made of two pencil leads, a custom-made polyvinyl chloride (PVC) prostate phantom, and a gelatin prostate phantom, shown in Fig. 3-8, were used. The pencil leads have a diameter of 0.5 mm and lengths of 7 mm and 13 mm. We made a 3-D structure using the pencil leads and fishing line, which is transparent in PA images. The 3-D arrangement can be seen in Fig. 3-8(a), in which two screws are used to hold the fishing lines. The screws are vertically attached to an aluminum plate. The plate rests on the floor for cylindrical tomography with the pick-up probe. For fan-shaped tomography, the plate should not block the acoustic waves from reaching the probe; therefore, the structure was tilted to give a detection view to the probe, which was located above the plate, as shown in Fig. 3-5, bottom left corner. The PVC prostate phantom was of the proper shape and size to model the imaging conditions during robot-assisted radical prostatectomy. The phantom has a hole for the TRUS probe below the designated prostate location. A sphere-shaped inclusion of diameter 1 cm was embedded in the background material. The inclusion was a controlled mixture of PVC and black plastic color, as described in [44], and has an absorption coefficient of 0.7 cm⁻¹. We implemented the scenario described in the scanning geometries section, similar to the simulation study. The gelatin prostate phantom shown in Fig. 3-8(c) contains a semi-dark prostate.
The gelatin concentration is 10 percent throughout the phantom. The prostate contains additional graphite powder, which produces an effective attenuation of 0.8 cm⁻¹. For this attenuation, 0.16 grams of graphite powder were mixed with 25 mL of gelatin. The attenuation was measured using the exponential decay model of light propagation [129], in which the laser light intensity is measured after passing through layers of different thicknesses of the same combination of gelatin and graphite powder. We also embedded two 8 mm-long pencil leads into the prostate as inclusions. The two inclusions are labeled A and B in the photo of the phantom. They are also visible in the computed tomography (CT) image of the phantom; see Fig. 3-8(c). After PAT was performed, the phantom was cut at two locations parallel to the axes of the probes: one cut at the location of inclusion A, forming the A-plane, and the other at the location of inclusion B, forming the B-plane, as shown in Fig. 3-8(c).

Fig. 3-8. (a) 3-D structure made of two pencil leads. The pencil leads are held on fishing lines. (b) PVC-based phantom. The phantom shape and size are compatible with prostate imaging conditions. A sphere-shaped inclusion of size 1 cm is embedded in the phantom. (c) Gelatin-based prostate phantom. The phantom contains a semi-dark prostate and two dark inclusions named A and B.

3.3 Results
Simulation results
Fig. 3-9 - Fig. 3-11 show the reconstructed images for the scanning geometries that we tested. For each case, reconstructions with the model-based method (iDPARS) and the non-model-based method (DAS) were performed. In Fig. 3-9, two cross-sectional images that contain the simulated pencil leads and one that is normal to the pencil leads are shown. The iDPARS images have fewer artifacts. In addition, the DAS images of cylindrical tomography with the simulated pick-up probe look better than those obtained with the simulated TRUS probe. Fig.
3-10 shows cross-sectional images of the sphere inclusion along each direction. The red dashed circle in the images of Fig. 3-10 represents the actual size and location of the sphere inclusion in the reference image. Notably, the iDPARS algorithm provides images with fewer artifacts than DAS. Fig. 3-11 shows the results for a non-sparse sample; the mid cross-sections of the prostate image acquired with the DAS and iDPARS algorithms are shown. Quantitatively, the cylindrical tomography provides better reconstructions for prostate imaging. However, some details, including the NVBs, are more visible in the TRUS images, which illustrates the potential for more detailed images using combined two-transducer imaging.

Fig. 3-9. Reconstructed images of the pencil leads simulation; (a) results of the fan-shaped tomography with the TRUS probe and (b) results of the cylindrical tomography with the pick-up probe. Two methods of PA reconstruction are used: the non-model-based DAS and the model-based iDPARS. For each case, two cross-sectional images that contain the pencil leads and a cross-sectional image normal to the pencil leads are shown.

Fig. 3-10. Reconstructed images of the sphere simulation with model-based and non-model-based reconstruction approaches; the cross-sectional images normal to the main directions are shown. (a) Results of the fan-shaped tomography with the TRUS probe and (b) results of the cylindrical tomography with the pick-up probe. The red dashed circles in the images represent the actual size of the reference inclusion.

Fig. 3-11. Reconstructed images of the prostate simulation with model-based and non-model-based reconstruction methods; the mid cross-sectional image of the prostate is shown. (a) Results of the fan-shaped tomography with the TRUS probe and (b) results of the cylindrical tomography with the pick-up probe.
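For reference, the non-model-based DAS baseline amounts to back-projecting each RF channel sample to the pixels at the corresponding time-of-flight distance; a minimal 2-D sketch (no apodization, directivity weighting, or interpolation; names are illustrative) is:

```python
import numpy as np

def das_2d(rf, elem_xy, grid_xy, fs_hz=40e6, c_mps=1540.0):
    """Minimal 2-D delay-and-sum: back-project RF channel data onto a set of
    pixel positions (toy sketch of the DAS principle, not the thesis code).

    rf      : (n_elements, n_samples) channel data
    elem_xy : (n_elements, 2) element positions in meters
    grid_xy : (n_pixels, 2) pixel positions in meters
    """
    image = np.zeros(len(grid_xy))
    n_samples = rf.shape[1]
    for e, xy in enumerate(elem_xy):
        dist_m = np.linalg.norm(grid_xy - xy, axis=1)
        # One-way travel time from pixel to element, mapped to a sample index.
        idx = np.round(dist_m / c_mps * fs_hz).astype(int)
        valid = idx < n_samples
        image[valid] += rf[e, idx[valid]]
    return image
```

With the element positions, sampling rate, and speed of sound matched to the acquisition, each pixel accumulates the samples whose one-way travel time corresponds to its distance from every element; a true source position therefore sums coherently.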
To compare the two scanning geometries, we compute the Contrast-to-Noise Ratio (CNR) and Root-Mean-Square (RMS) error corresponding to each reconstructed image. The RMS error was calculated using the following equation:

RMS = √( (1/n) ∑ᵢ₌₁ⁿ (Aᵢ − Aᵢ⁰)² ) (3-2)

where Aᵢ and Aᵢ⁰ are the actual and reconstructed pixels i, respectively, and n is the number of image pixels. To compute the CNR, we use equation (3-3):

CNR = |mᵢ − m_b| / σ_b (3-3)

where mᵢ and m_b are the mean intensities of the inclusion and background, respectively, and σ_b is the standard deviation of the background. Table 3-1 - Table 3-3 compare the two scanning geometries and reconstruction approaches. As can be seen, the cylindrical tomography with the pick-up probe outperforms the fan-shaped tomography with the TRUS probe in every aspect. Fig. 3-12 shows the reconstructed images in the presence of position errors of the simulated transducers. The red dashed circle shows the location of the inclusion in the reference image. As shown, the inclusion is recovered, but slightly displaced. The DAS and iDPARS images reconstructed using TRUS data have RMS errors of 0.43 and 0.22, respectively; the corresponding values for pick-up probe PAT are 0.42 and 0.16.

Fig. 3-12. Position errors were added to the probe positioning in the sphere simulation. The cross-sectional images normal to the main directions are shown. Results of (a) the fan-shaped tomography with the TRUS probe and (b) the cylindrical tomography with the pick-up probe. The red dashed circles in the images represent the actual location of the inclusion. Due to the positioning errors, the inclusion may not appear at its exact location.

Table 3-1. CNR and RMS error values of the results of the fan-shaped tomography and cylindrical tomography for the pencil leads simulation

                    TRUS image        pick-up image
Factors             DAS     iDPARS    DAS     iDPARS
RMS error (a.u.)    0.24    0.04      0.16    0.03
CNR                 2.98    12.83     4.33    19.57

Table 3-2.
CNR and RMS error values of the results of the fan-shaped tomography and cylindrical tomography for the sphere simulation

                    TRUS image        pick-up image
Factors             DAS     iDPARS    DAS     iDPARS
RMS error (a.u.)    0.42    0.14      0.42    0.14
CNR                 1.53    8.14      1.57    12.30

Table 3-3. RMS error values of the results of the fan-shaped tomography and cylindrical tomography for the prostate simulation

                    TRUS image        pick-up image
Factors             DAS     iDPARS    DAS     iDPARS
RMS error (a.u.)    0.25    0.18      0.22    0.16

Experimental results
In our experiments, the pencil leads and prostate phantoms were imaged with both the TRUS and pick-up probes. The DAS and iDPARS reconstruction methods were used to process the recorded PA data. Fig. 3-13 shows the cross-sectional images of the fan-shaped and cylindrical tomography of the pencil leads. Two slices that encompass the pencil leads and one slice normal to the direction of the leads are shown in Fig. 3-13. As expected, iDPARS provides images with fewer artifacts in both cases, and the pick-up images look better. The DAS images of the pick-up probe are also better than those of the TRUS probe in terms of CNR and RMS error. This is due to the larger detection surface formed by the pick-up probe. Fig. 3-14 shows the cross-sectional images of the PVC prostate phantom. Three planes normal to the main directions are shown. Similar to our approach in the simulation study, we reconstruct the same ROI containing the inclusion. The red dashed circles in Fig. 3-14 represent the actual size of the inclusion; however, the location is an approximation based on the reconstructed image. As can be seen in Fig. 3-14, both reconstruction techniques confirm that the cylindrical tomography results in better images than the fan-shaped tomography. Table 3-4 and Table 3-5 report the calculated CNR and RMS error corresponding to the two scanning geometries. We calculated the CNR and RMS errors using the 3-D reconstructed images and an approximate reference image.
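The metrics of Eqs. (3-2) and (3-3) can be computed directly from an image and inclusion/background masks; a short sketch (function names are illustrative):

```python
import numpy as np

def rms_error(recon, reference):
    """Eq. (3-2): root-mean-square error between two images."""
    return np.sqrt(np.mean((np.asarray(recon) - np.asarray(reference)) ** 2))

def cnr(image, inclusion_mask, background_mask):
    """Eq. (3-3): |mean(inclusion) - mean(background)| / std(background)."""
    m_i = image[inclusion_mask].mean()
    m_b = image[background_mask].mean()
    return abs(m_i - m_b) / image[background_mask].std()
```

The masks select the pixels counted as inclusion and background, so the reported values depend on where the reference inclusion is placed, as discussed next.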
To form the reference image, we had accurate information regarding the size and shape of the inclusions, but their locations in the image were based on the reconstructed images. Fig. 3-15 shows the reconstructed images of the gelatin-based prostate phantom, shown in Fig. 3-8(c). The cross-sectional images at the locations of the A- and B-planes are shown. The two inclusions are visible in the images of the pick-up data, while iDPARS provides images with fewer artifacts. Although the transducer is directionally sensitive and we had a limited view of the sample, the absorbers were recovered better from the concave detection surface with the pick-up transducer.

Fig. 3-13. Reconstructed images of the pencil leads phantom shown in Fig. 3-8(a) with the iDPARS and DAS methods. (a) Results of the fan-shaped PAT with the TRUS probe and (b) results of the cylindrical PAT with the pick-up probe and the da Vinci robotic system. PA signals for both experiments were collected over 90° with a step size of 1°.

Fig. 3-14. Reconstructed images of the prostate phantom shown in Fig. 3-8(b) with the iDPARS and DAS methods. (a) Results of the fan-shaped PAT with the TRUS probe maneuvered by the TRUS robot and (b) results of the cylindrical PAT with the pick-up probe maneuvered by the da Vinci robotic system. PA signals for both experiments were collected over 90° with a step size of 1°. The red dashed circles in the images represent the actual size of the reference inclusion; the location is an approximation based on the data.

Fig. 3-15. Reconstructed images of the gelatin-based prostate phantom shown in Fig. 3-8(c) with model-based and non-model-based reconstruction methods. As in the photo of the phantom, the A- and B-planes are shown in the images. (a) Results of the fan-shaped PAT with the TRUS probe and (b) results of the cylindrical PAT with the pick-up probe and the da Vinci robotic system.
The two inclusions are visible in the images of the pick-up data.

Table 3-4. CNR and RMS error values of the results of the fan-shaped and cylindrical tomography for the pencil leads phantom

                    TRUS image        pick-up image
Factors             DAS     iDPARS    DAS     iDPARS
RMS error (a.u.)    0.36    0.07      0.21    0.05
CNR                 0.83    5.35      4.18    9.66

Table 3-5. CNR and RMS error values of the results of the fan-shaped and cylindrical tomography for the prostate phantom

                    TRUS image        pick-up image
Factors             DAS     iDPARS    DAS     iDPARS
RMS error (a.u.)    0.39    0.31      0.38    0.16
CNR                 1.76    3.08      1.10    6.54

3.4 Discussion
There are four approaches to examining the prostate with ultrasound or PA imaging: endo-rectal, trans-perineal, trans-abdominal, and the anterior-to-posterior view provided by the pick-up transducer [90]. The trans-abdominal window is far from the prostate, and the trans-perineal window provides only a very limited view of the prostate. This chapter focused on the potential scanning geometries for intra-operative prostate PA imaging, which include the use of TRUS and pick-up transducers. As can be seen in Fig. 3-9 - Fig. 3-15, in the cases in which the transducers performed fan-shaped tomography, both in experiment and simulation, the reconstruction approaches resulted in significant artifacts in the reconstructed images. The transducers are directional and receive PA signals only from a limited region in space. Thus, when the transducers scan the volume inwards through a concave surface (so that the sensing directions converge), the detection regions overlap and produce better images. Table 3-1 - Table 3-5 show that the concave surface formed by the pick-up probe provides better images than the convex surface formed by the TRUS probe in terms of CNR and RMS error. The results of our simulation study were in good agreement with the experimental results in terms of CNR and RMS error. In addition, the artifacts present in the simulation images are similar to those in the experimental study.
To compute the CNR and RMS errors of the experimental images, we used approximate reference images. Accurate information regarding the size and shape of the inclusions was available, but the actual physical locations were not recorded, as that requires an additional, complex calibration step [136]. We used the PA images to find the locations of the inclusions: we aligned inclusions of the true size and shape with the highest-intensity pixels of the PA images to form the reference images. A similar approach was used in [61], [83], [124]. This approximation may affect the reported CNR and RMS errors of the experimental study. Registered US-PA imaging could be used to localize the inclusions in the future. For this study, the laser beam was fixed and illuminated the entire ROI with each shot. The beam was also non-uniform, which could affect the results. Another limitation was the limited laser output energy, which, due to insufficient light penetration, prevented us from testing the system with a phantom that exactly represents prostate properties. The spherical inclusion is not recovered perfectly in the experimental TRUS image; see Fig. 3-14(a). The limited view, the directivity of the transducer, and the limited detection surface spanned by the TRUS transducer all affect the reconstructed images. In addition, as the TRUS axis was parallel to the Y axis, the sharp directivity of the transducer with respect to out-of-plane waves causes more artifacts in the XZ plane. For the TRUS probe, which must rotate about its axis, there may also be some error in the transducer location at each position. The reconstruction algorithms used in this chapter were the non-model-based DAS and the model-based iDPARS. Significant artifacts are visible in the DAS images due to incomplete measurements [49], with better results when the pick-up transducer is used (e.g., see Fig. 3-13).
iDPARS provides better images in all cases in terms of CNR and RMS error, as it includes the directivity information of the transducers for limited-view PA imaging. As can be seen from our analysis (Fig. 3-12), transducer localization errors do not have a significant effect on the reconstructed PAT images. The reported localization error of the pick-up transducer when positioned by the da Vinci manipulator over its detection surface is 2 mm in translation and 3° in orientation [130]. With errors in this range, the reconstructed PAT images are still reasonable. For TRUS images, a translational error of the rotational axis causes an error in the inclusion location, and an error in the probe axis orientation causes a smooth deformation of the inclusion. However, as the simulation results show, smooth errors in the robot and TRUS trajectories do not lead to a breakdown of the reconstruction. Non-sparse samples were considered in both the experimental and simulation studies. Both results (Fig. 3-11 and Fig. 3-15) showed that cylindrical tomography with the pick-up probe is superior to fan-shaped tomography with the TRUS probe. However, they also showed that near the TRUS probe the PAT reconstruction is good. This is due to the lower acoustic attenuation of the PA pressure and the larger detecting area of the elements of the TRUS probe.

In the next chapter, for intra-operative PAT imaging of the human prostate, a shared control between the robot controller and the surgeon's active scan will be presented. Such a control would implement a virtual fixture that locks the pick-up transducer to a given tomography axis and a given, essentially transversal, scanning plane.

3.5 Conclusion

In this chapter, a comparison of different PAT configurations that can be used for intra-operative prostate imaging was presented. PA data was generated using different simulation and experimental studies.
The experimental studies were based on a prostate phantom that imitates the geometry of prostate PAT using the pick-up and TRUS probes. The phantom was intended to introduce and compare different scanning geometries for PA prostate imaging. We have shown that a reasonable approach to PAT of the prostate seems to be an intra-operative approach that uses a pick-up transducer to scan the prostate in an anterior-to-posterior direction.

Chapter 4 ˗ Feasibility of da Vinci assisted prostate photoacoustic imaging

4.1 Introduction

In Chapter 3, different configurations for prostate PA tomography (PAT) were investigated, and it was noted that two transducers could be used for prostate imaging, as shown in Fig. 4-1: first, a transrectal US (TRUS) probe, which can be rotated about a fixed axis, and second, an intra-operative pick-up probe, as presented in [90], which can be placed at an arbitrary location (position and orientation) on the tissue during minimally invasive surgery. The intra-operative pick-up US transducer is held and maneuvered by the da Vinci robotic system to collect the PA signals diverging from the prostate. The use of the pick-up transducer is motivated by [137], which demonstrated its superiority for PAT reconstruction. Other ultrasound imaging approaches, e.g. trans-perineal or trans-abdominal, are not viable for prostate PAT because the transducers are too far from the prostate and can receive PA signals only from a very limited angle of view.

Robot-assisted PA imaging has been discussed in a few studies. Typically, robots are used to position the optical fiber, the sample, or the US transducer. A robot holding an optical fiber was used to scan the entire ROI to provide signals for PA and fluorescence imaging [72]. The same robot-assisted PA system was used to study the effect of the skull bone on PA imaging [73] and contrast-enhanced imaging on a blood flow phantom [74].
In addition, the optical fiber can be attached to surgical tool tips to facilitate PA imaging during robot-assisted surgery. For example, a multi-fiber light delivery system was designed for a neurosurgical drill [76]. The PA system helps protect arteries behind the bone being drilled during robot-assisted endonasal surgery [77]. In [75], PA image guidance is used to measure vessel separation to find safety zones during an operation. In [78], a robotically tracked system was developed to perform 2-D PAT with a conventional linear probe. The probe was rotated in the plane and collected PA signals at one or three positions. The 3-D robot-assisted PAT was introduced in the previous chapter [137].

In this chapter, we present a novel robot-assisted PAT system for intra-operative prostate imaging. The integrated system employs the da Vinci robot and its da Vinci Research Kit (dVRK) [133] to jointly maneuver the pick-up US transducer in a shared control environment, employing virtual fixtures (VFs) to allow the operator to easily control the pick-up transducer in a manner consistent with PAT. To the best of our knowledge, this is the first study to present VFs in a shared-control environment for robot-assisted 3-D PAT of the prostate. Typically, VFs are used in robot-user interaction to make it easier for operators to perform difficult tasks [138] and to protect critical structures [139]. VFs have been implemented on the da Vinci system for path planning [140]. In our study, VFs are used to mimic a PA tomographic scan while allowing the operator to remain in control of moving the transducer along a feasible trajectory on the tissue surface.

Fig. 4-1. The TRUS and pick-up probes can be used for prostate PAT. To perform PAT, the TRUS probe is rotated around a fixed axis while the pick-up transducer can be moved freely along the surface.
4.2 Methods

System overview

The laser system we employ for photoacoustic imaging is a Q-switched Nd:YAG laser with an optical parametric oscillator (Continuum, Inc., Santa Clara, CA, USA), generating 5 ns-wide pulses at a wavelength of 700 nm (used for blood vessel illumination [79]), with a repetition rate of 10 Hz. The experimental set-up is shown in Fig. 4-2. The laser is coupled to a 1500 µm optical fiber with a numerical aperture of 0.39 [132]. An end-firing fiber illuminates the sample. The fiber was placed 3 cm away from the sample for a wider illumination region (the illumination system for an in vivo system will in fact illuminate laterally through the urethra; see [18], [89]). The laser energy is tuned through the delay between the firing and Q-switch triggers, and it was kept at 47 mJ/pulse using a custom-designed trigger box [131].

The linear array of the pick-up probe described in [90] is used to record PA signals from a fixed sample. The pick-up US array is 28 mm long and has 128 elements at a center frequency of 10 MHz. The Ultrasonix SonixDAQ, a parallel data receiving system, was used to record radio-frequency (RF) digital data at a 40 MHz sampling rate with 12-bit resolution. The Ultrasonix SonixTOUCH, an US system, was used to collect B-mode images during tomography with the pick-up transducer in order to obtain co-registered US and PAT images. In this study, the first generation of the da Vinci surgical system [134], controlled by the da Vinci Research Kit (dVRK) [141], was used. The dVRK is an open source platform including hardware, firmware and software components [133], which is used to control the da Vinci system. The da Vinci robotic system is also interfaced with the Robot Operating System (ROS), which provides convenient communication between different robot software components.
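As a quick illustration of these acquisition parameters, the depth corresponding to an RF sample index follows from the one-way travel time of the PA wave (source to transducer). The sketch below is ours; the 1540 m/s speed of sound is a common soft-tissue assumption, not a value stated in the text.

```python
# Depth corresponding to an RF sample index for one-way PA propagation.
FS = 40e6          # SonixDAQ sampling rate (Hz), as stated above
C = 1540.0         # assumed speed of sound in soft tissue (m/s)

def sample_to_depth_mm(sample_index):
    """PA signals travel one way only, so depth = c * t (no factor of 2)."""
    t = sample_index / FS          # arrival time of this sample (s)
    return C * t * 1e3             # depth in millimetres

depth = sample_to_depth_mm(1000)   # 1000 samples = 25 microseconds -> 38.5 mm
```

Note the absence of the factor of 1/2 used in pulse-echo ultrasound: in PA imaging the acoustic wave is generated at the absorber and travels only one way to the transducer.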
The robot system is run in teleoperation mode, meaning that the motion of the master tool manipulator (MTM), manipulated by the operator, is mimicked by a patient side manipulator (PSM). The "pick-up" transducer presented in [90] can be grasped securely and repeatably using a da Vinci ProGrasp instrument. Fig. 4-2 shows a da Vinci PSM holding the pick-up transducer. The transducer is small enough to pass through a standard laparoscopic incision and can be kept in the abdominal cavity during the surgery. Therefore, it can be used to receive PA signals through the anterior surface of the prostate during RALRP.

The location of the pick-up transducer is controllable by the user's hand through the MTM, and it is reported back by the dVRK, so the transducer can be used for PAT imaging, as illustrated in Fig. 4-2. For PA image reconstruction, a model-based algorithm, iterative deconvolution-based PA reconstruction with sparsity regularization (iDPARS) [126], [127], is used. iDPARS, which was developed in our research group, is capable of reconstructing images from limited-view PAT data using the same conventional linear transducers that are used for prostate imaging.

Robot control

To accomplish robot-assisted PAT, the pick-up probe can be moved by one of the da Vinci PSMs along and in contact with the tissue, see Fig. 4-3. To facilitate PAT reconstruction, the transducer should also be parallel to and facing the tomography axis in a plane transverse to this axis, see Fig. 4-3(a) and (b). For safety reasons, the depth of the scan in tissue should be controlled by the operator and not autonomously by the robot. However, the dVRK controller autonomously ensures that the imaging plane of the pick-up transducer crosses the tomography axis. In addition, a virtual imaging plane is defined normal to the tomography axis, as shown in Fig. 4-3(a); the dVRK controller also ensures that the pick-up transducer is maintained in this plane.
Thus, in our shared control environment, the surgeon is free to move the pick-up transducer in a virtual plane (usually close to a transversal body plane), while the dVRK controller automatically rotates the transducer so that it crosses the tomography axis. In this section, scalar parameters, vectors, and matrices are shown in normal, italic bold, and roman bold fonts, respectively. The indices s and m correspond to the PSM and MTM coordinate systems, respectively.

Fig. 4-2. Experimental set-up for PAT. The da Vinci robotic system, through the dVRK controllers, is used to maneuver the pick-up probe.

The surgeon provides all the information required for tomography. To define the tomography geometry, the surgeon holds the pick-up transducer at a set-up position and activates the semi-autonomous teleoperation mode, so the MTM and PSM positions are recorded as $\boldsymbol{p}_i^m$ and $\boldsymbol{p}_i^s$, respectively (Fig. 4-3(a)). In addition, the tomography axis can be defined to be parallel to the probe axis, in the direction $\hat{\boldsymbol{w}}^s$, at the distance $r$ from the transducer face, as shown in Fig. 4-3(a). We define the virtual tomography plane to be normal to the transducer axis, crossing $\boldsymbol{p}_i^m$ and $\boldsymbol{p}_i^s$ in the MTM and PSM coordinate systems, respectively. With reference to Fig. 4-3(a), the tomography axis crosses the virtual plane at position $\boldsymbol{p}_c^s$, which can be found from

$\boldsymbol{p}_c^s = \boldsymbol{p}_i^s + \boldsymbol{r}$  (4-1)

where $\boldsymbol{r} = r\,\hat{\boldsymbol{z}}_i^s$, and $\hat{\boldsymbol{z}}_i^s$ is the unit vector of the PSM orientation along the z-direction at the set-up position.

Therefore, by determining the set-up position, all the required information is collected and the semi-autonomous teleoperation can be enabled. Assuming that the operator moves the MTM from a current position $\boldsymbol{p}^m$ to any other position $\boldsymbol{p}_n^m$, as shown in Fig. 4-3(c), the distance from $\boldsymbol{p}_n^m$ to the virtual plane is determined as follows:

$d = \hat{\boldsymbol{w}}^m \cdot (\boldsymbol{p}_i^m - \boldsymbol{p}_n^m)$ ,  (4-2)

where $\hat{\boldsymbol{w}}^m$ is the tomography axis vector in the MTM coordinate system and can be calculated as

$\hat{\boldsymbol{w}}^m = \mathbf{T}_s^m\,\hat{\boldsymbol{w}}^s$ ,  (4-3)

where $\mathbf{T}_s^m$ is the transformation matrix between the MTM and the PSM coordinate systems. The projection of $\boldsymbol{p}_n^m$ onto the virtual plane can be calculated as

$\boldsymbol{p}_{n,p}^m = \boldsymbol{p}_n^m + d\,\hat{\boldsymbol{w}}^m$ .  (4-4)

Therefore, the MTM motion vector projected onto the virtual plane is

$\boldsymbol{\delta}_p^m = \boldsymbol{p}_{n,p}^m - \boldsymbol{p}^m$  (4-5)

and its transformation in the PSM coordinate system is

$\boldsymbol{\delta}^s = \mathbf{T}_m^s\,\boldsymbol{\delta}_p^m$ .  (4-6)

Finally, to keep the transducer within the virtual plane, the commanded position of the PSM is

$\boldsymbol{p}_d^s = \boldsymbol{p}^s + \boldsymbol{\delta}^s$  (4-7)

and to keep the transducer facing the tomography axis, the commanded orientation of the PSM is

$\hat{\boldsymbol{z}}_d^s = (\boldsymbol{p}_c^s - \boldsymbol{p}_d^s)/|\boldsymbol{p}_c^s - \boldsymbol{p}_d^s|$  (4-8)

$\hat{\boldsymbol{x}}_d^s = \hat{\boldsymbol{w}}^s$  (4-9)

$\hat{\boldsymbol{y}}_d^s = \hat{\boldsymbol{z}}_d^s \times \hat{\boldsymbol{x}}_d^s$  (4-10)

where $\times$ and $|\cdot|$ denote the cross product and vector magnitude. The above steps can be followed in Fig. 4-3(c). Therefore, the commanded frame of the PSM is $\{\boldsymbol{p}_d^s, [\hat{\boldsymbol{x}}_d^s, \hat{\boldsymbol{y}}_d^s, \hat{\boldsymbol{z}}_d^s]\}$, which is implemented through the dVRK system and ROS, which provide update rates of up to 200 Hz, as shown in Fig. 4-4(a). The ROS interface facilitates the communication between the dVRK system and the shared-control platform. All the computations of the shared-control teleoperation are performed on the dVRK machine under the ROS Python module. The generated frame is followed through the dVRK hardware and software [141]. The user can perform scanning of the region of interest (ROI) with the pick-up probe with no laser firing in a "dry run" mode. This is defined as the forward scanning [142]. The semi-automated forward scanning allows the surgeon to define an optimal and safe robot trajectory without needing to illuminate the tissue. The dVRK system is used to record the robot and tool tip/transducer locations during the forward scanning.
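The projection and commanded-frame computation of Eqs. (4-2)-(4-10) can be sketched in a few lines of NumPy. This is an illustrative sketch rather than the dVRK implementation: the variable names are ours, and the MTM-to-PSM transformation is assumed to be a pure 3×3 rotation.

```python
import numpy as np

def commanded_psm_frame(p_n_m, p_m, p_i_m, p_s, p_c_s, w_m, w_s, T_m_s):
    """Sketch of Eqs. (4-2)-(4-10): project an MTM motion onto the virtual
    plane and build the commanded PSM position and orientation frame."""
    d = w_m @ (p_i_m - p_n_m)              # distance to the virtual plane (4-2)
    p_np_m = p_n_m + d * w_m               # projection onto the plane (4-4)
    delta_s = T_m_s @ (p_np_m - p_m)       # projected motion in PSM coords (4-5), (4-6)
    p_d_s = p_s + delta_s                  # commanded PSM position (4-7)
    z_d = p_c_s - p_d_s                    # face the tomography axis (4-8)
    z_d = z_d / np.linalg.norm(z_d)
    x_d = w_s                              # probe axis parallel to tomography axis (4-9)
    y_d = np.cross(z_d, x_d)               # complete the right-handed frame (4-10)
    return p_d_s, np.column_stack([x_d, y_d, z_d])

# Toy check: tomography axis along z through the origin, MTM/PSM frames aligned.
w = np.array([0.0, 0.0, 1.0])
p_d, R_d = commanded_psm_frame(
    p_n_m=np.array([2.0, 0.0, 0.5]),   # operator command with an axial component
    p_m=np.array([1.0, 0.0, 0.0]),
    p_i_m=np.array([0.0, 0.0, 0.0]),
    p_s=np.array([1.0, 0.0, 0.0]),
    p_c_s=np.array([0.0, 0.0, 0.0]),
    w_m=w, w_s=w, T_m_s=np.eye(3))
# The axial component of the motion is removed, and the commanded z-axis
# still points from the probe towards the tomography axis.
```

The toy check illustrates the virtual fixture: the operator's out-of-plane motion component (0.5 along the axis) is discarded, while the in-plane component is passed through and the probe is re-oriented towards the axis.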
The start and end of the desired trajectory to be recorded for the forward scanning are determined by pressing the pedal twice, as shown in the control diagram, Fig. 4-4(a). On the first pedal press, the surgeon initiates the trajectory. When the surgeon presses the pedal for the second time, the semi-autonomous teleoperation is disabled and the forward scanning is finished. If the surgeon is satisfied with the detection surface, the robot system can perform PAT during the backward scanning. For backward scanning, a subset of the recorded trajectory locations is used as a set of consecutive set-points. The backward scanning is done autonomously, in a start-stop manner, from one set-point to another, with the robot and the US machine in hand-shaking mode. After settling at each set-point, the robot triggers the laser and the DAQ card and waits for the data acquisition to be completed by the US machine (Fig. 4-4(b)) before moving to the next set-point in the backward trajectory. The instrument tip/pick-up transducer location at each data acquisition position is computed from the dVRK and used in the PAT image reconstruction.

Fig. 4-3. (a) The shared control strategy locks the pick-up probe to the tomography axis while the probe is placed on the tissue surface. (b) In addition, to capture tomographic PA signals from the ROI, we need to avoid axial movement. A virtual plane is defined that is normal to the tomography axis and crosses the center of the pick-up probe. (c) The surgeon can freely move the transducer in the virtual plane using an MTM, and a PSM follows its motion in the virtual plane according to the proposed shared control configuration.

Fig. 4-4. The control diagrams present how the forward (a) and backward (b) scans are performed and how the trajectory generated in the shared control platform is followed by the robot through the ROS interface and dVRK system.
During the backward scanning (b), the US machine is activated and PA imaging is performed. In addition, the trigger diagram shown in (b) represents the sequence of activation of the different components of the integrated system during the backward scanning.

Phantom study

To evaluate the feasibility of intra-operative prostate PAT with our system, the two gelatin prostate phantoms shown in Fig. 4-5 were fabricated and used. The gelatin concentration is 10 percent throughout the phantoms. In terms of size and shape, the phantoms model the imaging conditions during RALRP. One of the phantoms has a 3-D structure made of three pencil leads, shown in Fig. 4-5(a). The pencil leads have a diameter of 0.5 mm and a length of 10 mm. The pencil leads are also visible in the computed tomography (CT) image of the phantom in the planes A and B (see the features pointed at by red arrows, Fig. 4-5(a)). The distance between the planes is 4 mm. An additional ex vivo phantom was used for scanning, consisting of a 20 × 40 × 40 mm³ piece of chicken breast embedded in gelatin (Fig. 4-5(b)). Chicken breast has an approximate effective light attenuation of 0.9 cm⁻¹ at a wavelength of 700 nm [143]. Two pencil leads with a diameter of 0.5 mm and a length of 10 mm are embedded in the chicken breast. Fig. 4-5(b) shows a photograph of the phantom and CT images of the planes C and D that contain the pencil leads. The planes are at a distance of 10 mm from each other. The proposed forward tomography was performed on both phantoms while the robot/transducer positions and orientations were recorded. A dataset of 100 equally spaced set-points was selected for tomography for each experiment. During the backward motion for tomography, co-registered B-mode and PAT images were collected.

Fig. 4-5. Prostate gelatin-based phantoms. (a) The phantom contains a 3-D structure made of three pencil leads. The pencil leads are shown using the red arrows in the CT images of the planes A and B.
(b) The phantom contains chicken breast tissue. Two pencil leads, shown in the CT images of the planes C and D, are embedded inside the breast tissue.

Measuring the da Vinci robot errors

The accuracy of the dVRK was tested using the NDI Certus Optotrak (Northern Digital Inc., Waterloo, Canada) as the ground truth. The Certus tracking system has an accuracy of 0.1 mm and a resolution of 0.01 mm over its workspace [144]. As shown in Fig. 4-6, four Optotrak markers are attached to a small board with the same grasping element as the pick-up transducer. The robot picks up the board in the same way it picks up the pick-up transducer, so we can assume that the board is rigidly attached to the pick-up transducer. Hand-eye calibration is carried out using the method from [145]. The required transformations are found using 100 random poses of the da Vinci robot.

The positions and rotations of the rigid body were recorded with the Optotrak and dVRK systems while the robot was run in the proposed semi-autonomous teleoperation mode. The data can be represented in one of the coordinate systems using the calculated transformation. The rotations of the rigid body can also be represented in axis-angle form, and the rotation angle differences are used to compute the orientation errors.

Fig. 4-6. Optotrak markers attached to a board create a rigid body which is grasped by the da Vinci tool in the same way as the pick-up transducer.

Shared-control teleoperation efficacy

We ran a mixed experimental-simulation study to establish the efficacy of the proposed shared-control teleoperation. In this study, the da Vinci robotic system was used to scan a gelatin phantom (Fig. 4-7(a)) containing a ROI with virtual content (Fig. 4-7(b)). The ROI includes a simulated sphere and two beam-shaped inclusions. As shown in Fig. 4-7(a), one phantom edge was marked to indicate the start of the trajectory.
An ideal trajectory to scan the virtual ROI would start from this point and follow the surface of the phantom. Two users (graduate students in the Robotics and Control Laboratory, UBC) were asked to use the da Vinci robot in the conventional and in the proposed shared-control teleoperation modes. Ten trajectories (five by each user) of the robot/detection surfaces for imaging were measured in each teleoperation mode. We carried out PAT reconstructions with the pick-up transducer using the method described in [137], following the measured pick-up transducer trajectories in each teleoperation mode. The results are reported at the end of Section 4.3.

Fig. 4-7. (a) A phantom with a mark to show the start of the trajectory. Different trajectories are defined using the da Vinci robot run in the conventional and the proposed shared-control teleoperation modes. (b) The detection surfaces are compared using a PAT simulation study. The ROI includes a sphere (blue) and two beam-shaped inclusions (red). Plane E is a transversal cross-section of the ROI, shown with respect to the phantom.

4.3 Results

To have a better understanding of the reconstructed images, we computed the contrast-to-noise ratio (CNR) and the RMS error corresponding to each reconstructed image using the following equations:

$RMS = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(A_i - A_i^0)^2}$ ,  (4-11)

where $A_i$ and $A_i^0$ are the reference and reconstructed values of pixel i, respectively, and n is the number of image pixels, and

$CNR = \frac{|m_i - m_b|}{\sigma_b}$ ,  (4-12)

where $m_i$ and $m_b$ are the mean intensities at the inclusion and background, respectively, and $\sigma_b$ is the standard deviation of the background.

The US and PAT images of the phantoms, see Fig. 4-5, are shown in Fig. 4-8. Fig. 4-8(a) shows the US images (gray scale) of the planes A and B, shown in Fig. 4-5(a), overlaid with their PAT images (colored). The three pencil leads are visible in the images.
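The image-quality metrics of Eqs. (4-11) and (4-12) translate directly into a few lines of NumPy. The mask-based interface below is our own choice for this sketch, not the thesis code.

```python
import numpy as np

def rms_error(reference, reconstructed):
    """RMS error between a reference and a reconstructed image (Eq. 4-11)."""
    diff = np.asarray(reference, float) - np.asarray(reconstructed, float)
    return np.sqrt(np.mean(diff ** 2))

def cnr(image, inclusion_mask, background_mask):
    """Contrast-to-noise ratio (Eq. 4-12): absolute difference of the mean
    inclusion and background intensities over the background standard deviation."""
    image = np.asarray(image, float)
    m_i = image[inclusion_mask].mean()
    m_b = image[background_mask].mean()
    return abs(m_i - m_b) / image[background_mask].std()
```

For the experimental images, `reference` would be the approximate reference image described in Section 3.4, and the masks would select the inclusion and background pixels.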
The lower pencil lead in the plane A is not perfectly recovered; this can be due to insufficient laser illumination at its location. The CNR and RMS error of the PAT image are 2.27 and 0.06, respectively.

Fig. 4-8(b) shows the results of the ex vivo experiment. The PAT images (colored) of the planes C and D, see Fig. 4-5(b), are shown on top of their US images (gray scale). The two pencil leads are fully recovered in the PAT image. The chicken breast tissue, which does not generate acoustic waves under the laser illumination, is visible in the US image and in the background. This experiment was repeated four times to check the repeatability, and the results were in good agreement, with CNR and RMS errors of the PAT images of 3.04 ± 0.48 and 0.06 ± 0.01, respectively.

An error analysis was performed for the da Vinci robot motion. Fig. 4-9 shows the position and orientation errors of the robot. The robot followed a trajectory based on the proposed algorithm in the shared environment (manual/automated control of the da Vinci robot). To have a better understanding of the errors, different users performed the tomography four times. The mean, standard deviation (STD), maximum, and minimum errors of the position and orientation of the robot through each trajectory are reported in Tables 4-1 and 4-2, respectively.

Fig. 4-10 shows the images of plane E of the virtual enclosures in the gelatin phantom, reconstructed with detection geometries obtained from the conventional and shared-control teleoperation. The results show that the proposed shared-control teleoperation significantly increases the quality of the reconstructed images. The RMS errors relative to the simulated phantom are 0.20 ± 0.03 a.u. and 0.13 ± 0.005 a.u. over the ten trials for the detection surfaces obtained by the conventional teleoperation and the proposed shared-control teleoperation, respectively.

Fig. 4-8. US and PAT images of the planes shown in Fig. 4-5.
(a) US images (gray scale) of the planes A and B, see Fig. 4-5(a), are overlaid with their PAT images (colored). (b) The results of the ex vivo experiment. PAT images (colored) of the planes C and D, see Fig. 4-5(b), are shown on top of their US images (gray scale).

Fig. 4-9. (a) Position and (b) orientation errors of the da Vinci robot during tomography with the VF in the shared control environment.

Fig. 4-10. PAT images of the plane E shown in Fig. 4-7(b). The detection surfaces for PAT were obtained using the da Vinci robot run in the conventional teleoperation mode or the proposed shared-control teleoperation mode.

Table 4-1. Position errors of the da Vinci robot during tomography with the VF in different trials.

                   Position error (mm)
  Experiments      Mean    STD     Max      Min
  Experiment #1    5.82    3.86    12.50    1.80
  Experiment #2    5.78    4.01    12.57    1.75
  Experiment #3    5.84    3.48    11.04    1.14
  Experiment #4    5.61    3.57    11.17    1.20

Table 4-2. Orientation errors of the da Vinci robot during tomography with the VF in different trials.

                   Orientation error (deg)
  Experiments      Mean    STD     Max     Min
  Experiment #1    2.60    1.40    4.79    0.06
  Experiment #2    3.05    1.48    5.21    0.02
  Experiment #3    2.81    1.55    5.21    0.01
  Experiment #4    2.82    1.95    6.44    0.16

4.4 Discussion

For intra-operative prostate PAT imaging, we proposed a shared control between the robot controller and the surgeon's active scan. Such a control implements a VF that locks the pick-up transducer to a given tomography axis and a given, essentially transversal, scanning plane. Using this proposed architecture, we have demonstrated the feasibility of PAT imaging through an integrated system that combines an US machine with a research interface, an externally triggered ultrafast laser system, and a da Vinci Standard with the dVRK. The da Vinci robot PAT trajectory errors were measured and are 5.77 ± 3.72 mm and 2.82 ± 1 degrees for the position and orientation, respectively. There are some factors that could affect these results.
First, our results are conservative because the da Vinci ProGrasp tool used in this experiment was not new, and its wires were slightly stretched, resulting in some backlash error. Second, our results could be optimistic because we carried out our experiments in free space, so the instruments were not bent by tissue loading. Finally, the errors reported in Tables 4-1 and 4-2 can be compensated for, as the mean values of the errors are not dispersed. There might be errors in finding the transformation between the da Vinci and the tracking system coordinate systems. Further research on calibration errors is needed.

The da Vinci robot errors are smooth, as shown in Fig. 4-9. The smooth errors in the positions and orientations of the da Vinci robot were investigated in [137], and it was shown that these types of smooth errors do not cause failure of the reconstruction, but they may introduce localization errors that are of the same order as the pick-up transducer localization error. From a clinical perspective, if the distance of a PAT-detected cancer to critical structures is found to be of the same order as the pick-up transducer localization error, then a conservative approach would be appropriate, resulting in wider resections to ensure negative surgical margins. In future work, more precise localization of PCa can be achieved by direct tracking, in the camera coordinate system, of the pick-up transducer.

There are other limitations that could affect the quality of the reconstructed images. In this study, a limited-view 3-D PAT using a conventional ultrasound transducer was investigated. The transducer held by the da Vinci robot has a directional response to the incoming ultrasound waves. In addition, the reconstruction technique that was used [126] relies on a sparsity regularization algorithm to overcome the rank deficiency of the inverse problem.
The sparsity regularization has a low-pass filtering nature, which might affect sharp edges in the PA images; for example, see the shape of the inclusions in the shared-control teleoperation image, Fig. 4-10. In addition, the efficacy of the proposed shared-control teleoperation was investigated in a mixed experiment-simulation study. The detection surfaces generated with and without the proposed shared control strategy were compared, and it was shown that the shared-control teleoperation results in significantly better PA images with fewer artifacts (see Fig. 4-10). The PAT images of the ex vivo experiment have fewer artifacts and higher CNR than those of the bare pencil leads. The reason may be the better light diffusion throughout the inclusions when the chicken breast tissue was used. The bare pencil leads of the other phantom were almost parallel to the laser beam, which may affect the light absorption throughout the inclusions; the light might not penetrate through the pencil leads.

4.5 Conclusion

This study focused on the feasibility of robot-assisted prostate PAT. A shared control strategy has been implemented, with the dVRK controllers locking the pick-up transducer orientation to a tomography axis, and allowing the user to move the transducer freely in a virtual plane in order to acquire PA signals from a ROI as needed for prostate imaging. The da Vinci robot errors were measured for this task. The robot has smooth errors during tomography. Even though the da Vinci robot-assisted PAT was performed in a laboratory environment, the set-up was similar to what is expected in the operating room. As a result of this study, we determine that a reasonable approach to performing PAT of the prostate can be the use of a da Vinci robotic system with a pick-up transducer.

Chapter 5 ˗ Conclusions and future work

5.1 Overview

PA imaging of tissues has received much attention in the medical field.
PA imaging has shown great potential to distinguish between cancerous and non-cancerous tissues and can be used to monitor cancer progression. The inverse problem of PAT and robot-assisted PAT were investigated in this research. The goal was to investigate the constraints and conditions related to intra-operative PA imaging of the prostate.

5.2 Summary of findings

In Chapter 2, sparsity regularization was introduced and used for solving the PA inverse problem. The assumptions were that full access to the region of interest is not available and that the transducers used to record the acoustic waves have a directional response to the incoming waves. The resulting linear system of equations is rank deficient. Sparsity regularization helped reduce the computational cost and improve the numerical conditioning of the system of equations. Simulation studies were used to compare this method with the Tikhonov regularization technique. The presented method outperforms the conventional DAS reconstruction method in the simulation and experimental studies. Also, an iterative version of the inversion approach using parallel processing was presented in this chapter to show the potential of the method to solve 3-D large-scale inverse problems.

In Chapter 3, different configurations of PAT of the prostate were introduced and compared. There are two US transducers available for prostate PAT: TRUS and pick-up. The TRUS transducer can be rotated about its axis to perform fan-shaped PAT, whereas the pick-up transducer can be placed at an arbitrary orientation with respect to a tomographic axis and at any position in contact with tissue. Therefore, it can be used for cylindrical PAT. Simulation and experimental studies showed that PAT with a pick-up transducer outperforms PAT with just a TRUS transducer. An experimental set-up with the da Vinci robotic system and the pick-up US transducer was introduced and tested in this chapter.
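The low-frequency DCT parametrization summarized above for Chapter 2 can be illustrated with a toy 2-D example. The sketch below is our own (it builds an orthonormal DCT-II matrix directly rather than using the thesis code): a smooth absorption map is represented accurately by a small low-frequency block of DCT coefficients, which is the property that makes the truncated parametrization effective.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; rows are cosine basis vectors."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    d = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    d[0, :] /= np.sqrt(2.0)                  # DC row scaling for orthonormality
    return d

n, kept = 32, 8
D = dct_matrix(n)

# A smooth synthetic "absorption map" (Gaussian blob on a 32 x 32 grid).
x = np.linspace(0, 1, n)
img = np.exp(-((x[:, None] - 0.5) ** 2 + (x[None, :] - 0.5) ** 2) / 0.05)

coef = D @ img @ D.T                         # 2-D DCT of the map
trunc = np.zeros_like(coef)
trunc[:kept, :kept] = coef[:kept, :kept]     # keep only the low-frequency block
approx = D.T @ trunc @ D                     # reconstruct from 64 of 1024 coefficients

rel_err = np.linalg.norm(approx - img) / np.linalg.norm(img)
```

Keeping an 8 × 8 block of a 32 × 32 coefficient array shrinks the number of unknowns by a factor of 16 while reproducing the smooth map with small relative error, mirroring the reduction in equations and the improved conditioning described above.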
In Chapter 4, a shared-control platform for da Vinci robot-assisted PAT was designed and introduced. The da Vinci robot, operated in semi-autonomous teleoperation mode, maneuvered the pick-up transducer about a sample for US and PAT imaging. During tomography, the transducer can be kept in contact with the sample surface to preserve acoustic coupling, and the transducer can be kept facing a known tomography axis. To collect PA signals from an optimal volume, a plane normal to the tomography axis was defined as a VF. Therefore, the surgeon can freely move the transducer within the virtual plane while the robot autonomously orients the transducer towards the tomography axis. An experimental set-up was designed to test the approach, using the da Vinci Research Kit and the ROS interface to perform co-registered PA and US imaging on an ex vivo phantom. In addition, an error analysis was performed to quantify the localization errors of the da Vinci robot using an optical tracking system. These errors were found to be small and not to affect the ability to carry out PAT.

5.3 Contributions

The contributions of this thesis are summarized as follows:

- A novel deconvolution-based PA reconstruction with sparsity regularization (DPARS) was presented. The method works with transducers that have a directional response. The transducer beam shape, which affects the reconstruction, was considered during the reconstruction process. Moreover, the reconstruction method was employed for limited-view PA imaging. To compute the distribution of the absorption coefficients, a large linear system of equations resulting from the projections of the absorption coefficients is solved. This is the first model-based PAT reconstruction method that accounts for both sensor directionality and limited angle of view. To address the rank deficiency of the inverse problem, a sparse DCT representation of the distribution of absorption coefficients is used.
In the DCT domain, the significant coefficients are mostly located at lower frequencies. Therefore, a limited range of DCT coefficients corresponding to low frequencies is selected to parametrize the spatial distribution of absorption coefficients. This approximation reduces the number of equations, improves numerical conditioning, and reduces computing time. The DCT had not previously been used to address the rank deficiency in limited-view, focused-transducer PAT reconstruction. A robotic scanner was designed and developed for validating the reconstruction method prior to the use of the da Vinci robotic system.

- For the first time, the feasible scanning geometries for intra-operative PAT of the prostate were investigated. These configurations include the use of the TRUS or pick-up transducers. The TRUS transducer can be rotated about its axis using a robotic TRUS manipulator to perform fan-shaped tomography. The pick-up transducer can be rotated about any axis using the da Vinci robotic system to perform cylindrical tomography. Simulation and experimental studies showed that cylindrical tomography with the pick-up transducer is significantly better than tomography that uses just the TRUS transducer.

- We designed and implemented a fully integrated experimental set-up to perform PAT with the da Vinci robotic system. This thesis presents the first reported PAT system that employs a robot-positioned US transducer to acquire the acoustic wave field for 3-D PAT reconstruction, with a focus on prostate imaging.

- A novel robot-assisted PAT system for intra-operative prostate imaging was introduced in this thesis. The integrated system employs the da Vinci robot through its dVRK system to jointly maneuver the pick-up US transducer in a shared-control platform. The da Vinci robot is run in a semi-teleoperation mode which employs virtual fixtures to allow the operator to easily control the pick-up transducer in a manner consistent with PAT.
This is the first study to present virtual fixtures in a shared-control environment for robot-assisted 3-D PAT of the prostate.

5.4 Implications

The research described in this thesis has implications for intra-operative PAT of the prostate. The results of Chapter 2 demonstrate the feasibility of PA imaging in the presence of the challenges of prostate imaging. The results of Chapters 3 and 4 may influence surgical removal of prostate cancer using the da Vinci robotic system and provide the first steps towards intra-operative prostate PAT.

The reconstruction method presented in Chapter 2 addresses the constraints associated with the transducers used for prostate imaging. These constraints collectively have not been discussed in previous work in the area. This thesis showed that PA image reconstruction in the presence of these prostate-specific limitations is feasible. The intra-operative ultrasound transducer used in Chapters 3 and 4 gives the surgeon a subsurface view during robotic surgery. A better understanding of the PCa location helps find the best trade-off between oncological (removing all cancer) and functional (continence, potency) outcomes. Previous work has shown that PA imaging has great potential for localization of cancers and can be used for PCa imaging. This thesis took the first steps towards the addition of PA imaging to the da Vinci robotic system and showed that da Vinci robot-assisted PAT of the prostate is feasible. The tracking accuracy of the da Vinci robotic system and the reconstruction accuracy of the system have been quantified.

5.5 Limitations

As mentioned in the reconstruction chapter, we assumed in this thesis that PA-induced pressure propagates through the medium according to a simple wave equation and that the speed of sound is constant throughout the medium.
Although real tissue has nonlinear properties and the speed of sound is not necessarily constant, the wave-equation and constant-speed-of-sound assumptions are frequently made in PA studies. The sparsity regularization method presented in this thesis has a low-pass-filter nature. Although this sparse representation of the image helps improve the numerical conditioning of the inverse problem and decrease the computation time, the sparsity regularization smooths the sharp edges that appear in images.

PAT images have been obtained with both phantoms and ex-vivo tissue; however, the new robot-assisted PAT has not yet been tested in an in-vivo setting. An in-vivo study will require a specific design and characterization of the illumination system suited to the in-vivo environment. When such an illumination system is deployed, the findings of this thesis can be applied both to acquire the PA data and to perform PAT. The robot-assisted PAT presented in this thesis requires the integration of automated PAT into the da Vinci robotic system, which is limited by the use of the dVRK system. The dVRK system has not been approved for clinical use. The current work is a proof of concept showing that PAT can be a useful modality to localize PCa during robotic surgery and provide the surgeon with additional guidance. In addition, the laser system used in this thesis has a maximum energy of 60 mJ/pulse at the fiber output. If the laser beam is expanded to cover a circular surface with a radius of 2 cm, an approximation for prostate imaging, the energy density will be 4.8 mJ/cm2, which is much below the maximum allowable for human tissue. Therefore, a high penetration depth cannot be expected with the current set-up unless a smaller laser beam that mechanically scans the volume is used.
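The energy-density figure quoted above follows from dividing the pulse energy by the illuminated disk area. A quick back-of-envelope check, using the thesis's own numbers:

```python
import math

# Fluence check: 60 mJ per pulse spread uniformly over a disk of radius 2 cm.
pulse_energy_mJ = 60.0
radius_cm = 2.0
fluence = pulse_energy_mJ / (math.pi * radius_cm ** 2)  # mJ/cm^2
print(f"surface fluence: {fluence:.2f} mJ/cm^2")  # rounds to the quoted 4.8
```

Shrinking the beam radius raises the fluence quadratically, which is why a smaller, mechanically scanned beam is suggested as a way to increase penetration depth.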
5.6 Future work

Novel approaches have been presented in this thesis for solving the inverse problem of PAT under the conditions of prostate imaging and for performing PAT using the da Vinci robotic system. A number of directions in which the proposed work can be further improved or extended are as follows:

PA image reconstruction using sparsity regularization:

- In the proposed sparsity regularization method, the DCT was used as a sparsifying transform to find a sparse representation of the absorber coefficients. Other regularization methods can be used to address the rank deficiency of the inverse problem. Moreover, other sparsifying transforms, including wavelets, can be tested.

- For sparsity regularization, the absorber coefficients are transformed into the DCT domain, and the significant coefficients located on a disk at lower frequencies are selected as the sparsity pattern. Instead of using the disk in the DCT domain, prior knowledge of the absorber distribution, provided, for example, by the DAS image, can be used to select a customized sparsity pattern.

- The reconstruction approach was used for 2-D and 3-D PAT using a conventional linear transducer array located at multiple positions around the sample. Another topic of investigation for the inversion approach is the use of DPARS with data from a linear transducer at a single position.

- The inversion method presented in this thesis is based on a wave-equation model of PA pressure propagation. Non-linear, more complex, and more realistic models [123], [146], [147] are available that can be used to formulate PA pressure propagation through the medium.

Robot-assisted PAT using the da Vinci robotic system:

- The potential of two-transducer PA imaging of the prostate was shown in this thesis. In future work, the TRUS and pick-up transducers can be registered as discussed in [136] to collect complementary information from the prostate and combine the collected data from the two transducers.
Therefore, more detailed images will be possible through two-transducer imaging.

- The da Vinci robot errors were measured using a rigid body attached to a robot instrument while it was moved in free space. However, in practice, the instrument may be loaded against tissue, and more accurate estimates of the localization error of the pick-up transducer can be obtained by including instrument deflection models [148], [149].

- PAT images from the TRUS transducer can be improved by considering the localization accuracy of the transducer as positioned by the mechanical TRUS actuator. An error analysis can be performed using the optical tracking system to find the position errors of the TRUS transducer.

- The experimental set-up for robot-assisted tomography was evaluated using phantoms and ex-vivo chicken breast tissue. A more thorough characterization of the method using more realistic prostate phantoms [44] and ex-vivo human prostate can be developed.

5.7 Conclusion

Assuming that PCa can be better visualized using PA imaging, this thesis presents progress towards robot-assisted PAT of the prostate. The PA image reconstruction method introduced takes into account the limitations of PA prostate imaging. Sparsity regularization was used to improve the conditioning and reduce the computational cost of the inverse problem. We have also designed a mechanical scanner to show the feasibility of PAT using a linear transducer array.

Robot-assisted PAT was expanded to the use of transrectal and pick-up ultrasound transducers. Different scanning configurations for PAT of the prostate were investigated. The da Vinci robotic system was integrated into the PA system, and a shared-control platform was designed to properly maneuver the pick-up probe for optimum PAT. We have shown that creating accurate 3-D co-registered PA and US images with a pick-up transducer is feasible.
The addition of PA to a prostate surgery system has the potential to better distinguish between cancerous and non-cancerous tissue, which will lead to more accurate surgical margins.

The current research can be continued and expanded. Should the results of an in-vivo study using this approach be positive and demonstrate a potential benefit of PAT for cancer imaging, a specialized imaging system could be designed in the future. Such an optimized system would be characterized using more realistic prostate phantoms and canine models before patient data are acquired.

Bibliography

[1] A. Agarwal, S. W. Huang, M. O’Donnell, K. C. Day, M. Day, N. Kotov, and S. Ashkenazi, “Targeted gold nanorod contrast agent for prostate cancer detection by photoacoustic imaging,” J. Appl. Phys., vol. 102, no. 6, p. 64701, 2007.
[2] S. A. Ermilov, T. Khamapirad, A. Conjusteau, M. H. Leonard, R. Lacewell, K. Mehta, T. Miller, and A. A. Oraevsky, “Laser optoacoustic imaging system for detection of breast cancer,” J. Biomed. Opt., vol. 14, no. 2, p. 24007, 2009.
[3] R. I. Siphanto, K. K. Thumma, R. G. M. Kolkman, T. G. van Leeuwen, F. F. M. de Mul, J. W. van Neck, L. N. A. van Adrichem, and W. Steenbergen, “Serial noninvasive photoacoustic imaging of neovascularization in tumor angiogenesis,” Opt. Express, vol. 13, no. 1, p. 89, Jan. 2005.
[4] M.-L. Li, J.-T. Oh, X. Xie, G. Ku, W. Wang, C. Li, G. Lungu, G. Stoica, and L. V. Wang, “Simultaneous molecular and hypoxia imaging of brain tumors in vivo using spectroscopic photoacoustic tomography,” Proc. IEEE, vol. 96, no. 3, pp. 481–489, 2008.
[5] S. Kim, Y.-S. Chen, G. P. Luke, M. Mehrmohammadi, J. R. Cook, and S. Y. Emelianov, “Ultrasound and photoacoustic image-guided photothermal therapy using silica-coated gold nanorods: in-vivo study,” in 2010 IEEE Ultrasonics Symposium (IUS), 2010, pp. 233–236.
[6] L. Lin, P. Hu, J. Shi, C. M. Appleton, K. Maslov, L. Li, R. Zhang, and L. V.
Wang, “Single-breath-hold photoacoustic computed tomography of the breast,” Nat. Commun., vol. 9, no. 1, p. 2352, Dec. 2018.
[7] A. OpenStax College and P. O. CNX, Anatomy and Physiology. 2016.
[8] C. Phillips, “Tracking the rise of robotic surgery for prostate cancer,” in Natl Cancer Inst Cancer Bull, 2011.
[9] V. Ficarra, G. Novara, W. Artibani, A. Cestari, A. Galfano, M. Graefen, G. Guazzoni, B. Guillonneau, M. Menon, F. Montorsi, V. Patel, J. Rassweiler, and H. Van Poppel, “Retropubic, laparoscopic, and robot-assisted radical prostatectomy: a systematic review and cumulative analysis of comparative studies,” Eur. Urol., vol. 55, no. 5, pp. 1037–63, May 2009.
[10] O. Mohareri, J. Ischia, P. C. Black, C. Schneider, J. Lobo, L. Goldenberg, and S. E. Salcudean, “Intraoperative registered transrectal ultrasound guidance for robot-assisted laparoscopic radical prostatectomy,” J. Urol., vol. 193, no. 1, pp. 302–312, 2015.
[11] O. Mohareri, G. Nir, J. Lobo, R. Savdie, P. Black, and S. Salcudean, “A System for MR-Ultrasound Guidance during Robot-Assisted Laparoscopic Radical Prostatectomy,” Springer, Cham, 2015, pp. 497–504.
[12] V. S. Dogra, B. K. Chinni, K. S. Valluru, J. V. Joseph, A. Ghazi, J. L. Yao, K. Evans, E. M. Messing, and N. A. Rao, “Multispectral Photoacoustic Imaging of Prostate Cancer: Preliminary Ex-vivo Results,” J. Clin. Imaging Sci., vol. 3, no. 1, p. 41, 2013.
[13] S. Tang, J. Chen, P. Samant, K. Stratton, and L. Xiang, “Transurethral Photoacoustic Endoscopy for Prostate Cancer: A Simulation Study,” IEEE Trans. Med. Imaging, vol. 35, no. 7, pp. 1780–1787, Jul. 2016.
[14] X. Wang, W. W. Roberts, P. L. Carson, D. P. Wood, and J. B. Fowlkes, “Photoacoustic tomography: a potential new tool for prostate cancer,” Biomed. Opt. Express, vol. 1, no. 4, p. 1117, Nov. 2010.
[15] G. Xu, M. Qin, A. Mukundan, J. Siddiqui, M. Takada, P. Vilar-Saavedra, S. A. Tomlins, R. Kopelman, and X.
Wang, “Prostate cancer characterization by optical contrast enhanced photoacoustics,” 2016, p. 97080I.
[16] A. Horiguchi, K. Tsujita, K. Irisawa, T. Kasamatsu, K. Hirota, M. Kawaguchi, M. Shinchi, K. Ito, T. Asano, H. Shinmoto, H. Tsuda, and M. Ishihara, “A pilot study of photoacoustic imaging system for improved real-time visualization of neurovascular bundle during radical prostatectomy,” Prostate, vol. 76, no. 3, pp. 307–315, Feb. 2016.
[17] C. Sim, H. Kim, H. Moon, H. Lee, J. H. Chang, and H. Kim, “Photoacoustic-based nanomedicine for cancer diagnosis and therapy,” J. Control. Release, vol. 203, pp. 118–125, 2015.
[18] M. A. L. Bell, X. Guo, D. Y. Song, and E. M. Boctor, “Transurethral light delivery for prostate photoacoustic imaging,” J. Biomed. Opt., vol. 20, no. 3, p. 36002, 2015.
[19] Y.-S. Chen, W. Frey, S. Kim, K. Homan, P. Kruizinga, K. Sokolov, and S. Emelianov, “Enhanced thermal stability of silica-coated gold nanorods for photoacoustic imaging and image-guided therapy,” Opt. Express, vol. 18, no. 9, pp. 8867–8878, 2010.
[20] P. P. Joshi, Y.-S. Chen, S. Kim, J. Shah, K. Sokolov, and S. Emelianov, “Molecular therapeutic agents for noninvasive photoacoustic image-guided photothermal therapy,” in Engineering in Medicine and Biology Society, 2009. EMBC 2009. Annual International Conference of the IEEE, 2009, pp. 4106–4109.
[21] M. A. L. Bell, X. Guo, D. Y. Song, and E. M. Boctor, “Photoacoustic imaging of prostate brachytherapy seeds with transurethral light delivery,” in SPIE BiOS, 2014, p. 89430N–89430N–6.
[22] L. Pan, R. Rohling, P. Abolmaesumi, S. Salcudean, and S. Tang, “Differentiating fatty and non-fatty tissue using photoacoustic imaging,” in SPIE BiOS, 2014, pp. 894354–894356.
[23] B.-Y. Hsieh, S.-L. Chen, T. Ling, L. J. Guo, and P.-C. Li, “All-optical scanhead for ultrasound and photoacoustic dual-modality imaging,” Opt. Express, vol. 20, no. 2, pp. 1588–1596, 2012.
[24] M. Li, Y. Tang, and J.
Yao, “Photoacoustic tomography of blood oxygenation: A mini review,” Photoacoustics, vol. 10, pp. 65–73, Jun. 2018.
[25] P. Beard, “Biomedical photoacoustic imaging,” Interface Focus, vol. 1, no. 4, pp. 602–631, 2011.
[26] L. V. Wang and S. Hu, “Photoacoustic tomography: in vivo imaging from organelles to organs,” Science, vol. 335, no. 6075, pp. 1458–1462, Mar. 2012.
[27] L. V. Wang, “Prospects of photoacoustic tomography,” Med. Phys., vol. 35, no. 12, pp. 5758–5767, Nov. 2008.
[28] C. Kim, C. Favazza, and L. V. Wang, “In Vivo Photoacoustic Tomography of Chemicals: High-Resolution Functional and Molecular Optical Imaging at New Depths,” Chem. Rev., vol. 110, no. 5, pp. 2756–2782, May 2010.
[29] J. Xia and L. V. Wang, “Small-Animal Whole-Body Photoacoustic Tomography: A Review,” IEEE Trans. Biomed. Eng., vol. 61, no. 5, pp. 1380–1389, May 2014.
[30] P. K. Upputuri and M. Pramanik, “Recent advances toward preclinical and clinical translation of photoacoustic tomography: a review,” J. Biomed. Opt., vol. 22, no. 4, p. 041006, Nov. 2016.
[31] R. A. Kruger, W. L. Kiser Jr, D. R. Reinecke, and G. A. Kruger, “Thermoacoustic computed tomography using a conventional linear transducer array,” Med. Phys., vol. 30, no. 5, pp. 856–860, 2003.
[32] Y. Wang, D. Xing, Y. Zeng, and Q. Chen, “Photoacoustic imaging with deconvolution algorithm,” Phys. Med. Biol., vol. 49, no. 14, p. 3117, 2004.
[33] B. Yin, D. Xing, Y. Wang, Y. Zeng, Y. Tan, and Q. Chen, “Fast photoacoustic imaging system based on 320-element linear transducer array,” Phys. Med. Biol., vol. 49, no. 7, p. 1339, 2004.
[34] J. Kang, E. K. Kim, G. R. Kim, C. Yoon, T. K. Song, and J. H. Chang, “Photoacoustic imaging of breast microcalcifications: A validation study with 3-dimensional ex vivo data and spectrophotometric measurement,” J. Biophotonics, 2013.
[35] L. Li, L. Zhu, C. Ma, L. Lin, J. Yao, L. Wang, K. Maslov, R. Zhang, W. Chen, J. Shi, and L. V.
Wang, “Single-impulse panoramic photoacoustic computed tomography of small-animal whole-body dynamics at high spatiotemporal resolution,” Nat. Biomed. Eng., vol. 1, no. 5, p. 0071, May 2017.
[36] Y. Xu, L. V. Wang, G. Ambartsoumian, and P. Kuchment, “Reconstructions in limited-view thermoacoustic tomography,” Med. Phys., vol. 31, no. 4, p. 724, 2004.
[37] G. Paltauf, R. Nuster, and P. Burgholzer, “Weight factors for limited angle photoacoustic tomography,” Phys. Med. Biol., vol. 54, no. 11, p. 3303, 2009.
[38] J. Meng, L. V. Wang, L. Ying, D. Liang, and L. Song, “Compressed-sensing photoacoustic computed tomography in vivo with partially known support,” Opt. Express, vol. 20, no. 15, p. 16510, Jul. 2012.
[39] K. J. Francis, P. Rajalakshmi, and S. S. Channappayya, “Distributed compressed sensing for photo-acoustic imaging,” in 2015 IEEE International Conference on Image Processing (ICIP), 2015, pp. 1513–1517.
[40] H. Gao, J. Feng, and L. Song, “Limited-view multi-source quantitative photoacoustic tomography,” Inverse Probl., 2015.
[41] X. Lin, J. Yu, N. Feng, and M. Sun, “Synthetic aperture-based linear-array photoacoustic tomography considering the aperture orientation effect,” J. Innov. Opt. Health Sci., vol. 11, no. 04, p. 1850015, Jul. 2018.
[42] G. Ku, X. Wang, X. Xie, G. Stoica, and L. V. Wang, “Imaging of tumor angiogenesis in rat brains in vivo by photoacoustic tomography,” Appl. Opt., vol. 44, no. 5, pp. 770–775, 2005.
[43] V. G. Andreev, A. A. Karabutov, and A. A. Oraevsky, “Detection of ultrawide-band ultrasound pulses in optoacoustic tomography,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 50, no. 10, pp. 1383–1390, Oct. 2003.
[44] S. E. Bohndiek, S. Bodapati, D. Van De Sompel, S.-R. Kothapalli, and S. S. Gambhir, “Development and application of stable phantoms for the evaluation of photoacoustic imaging instruments,” PLoS One, vol. 8, no. 9, 2013.
[45] T. T. W. Wong, R. Zhang, P. Hai, C. Zhang, M. A. Pleitez, R. L. Aft, D. V.
Novack, and L. V. Wang, “Fast label-free multilayered histology-like imaging of human breast cancer by photoacoustic microscopy,” Sci. Adv., vol. 3, no. 5, p. e1602168, May 2017.
[46] M. Xu and L. V. Wang, “Universal back-projection algorithm for photoacoustic computed tomography,” Phys. Rev. E, vol. 71, no. 1, p. 16706, 2005.
[47] M. Xu and L. V. Wang, “Erratum: Universal back-projection algorithm for photoacoustic computed tomography [Phys. Rev. E 71, 016706 (2005)],” Phys. Rev. E, vol. 75, no. 5, p. 59903, 2007.
[48] K. Wang and M. A. Anastasio, “A simple Fourier transform-based reconstruction formula for photoacoustic computed tomography with a circular or spherical measurement geometry,” Phys. Med. Biol., vol. 57, no. 23, p. N493, 2012.
[49] S. Ma, S. Yang, and H. Guo, “Limited-view photoacoustic imaging based on linear-array detection and filtered mean-backprojection-iterative reconstruction,” J. Appl. Phys., vol. 106, no. 12, p. 123104, 2009.
[50] G. Wurzinger, R. Nuster, N. Schmitner, S. Gratt, D. Meyer, and G. Paltauf, “Simultaneous three-dimensional photoacoustic and laser-ultrasound tomography,” Biomed. Opt. Express, vol. 4, no. 8, pp. 1380–1389, 2013.
[51] Y. Xu and L. V. Wang, “Time reversal and its application to tomography with diffracting sources,” Phys. Rev. Lett., vol. 92, no. 3, p. 33902, 2004.
[52] C. Huang, L. Nie, R. W. Schoonover, L. V. Wang, and M. A. Anastasio, “Photoacoustic computed tomography correcting for heterogeneity and attenuation,” J. Biomed. Opt., vol. 17, no. 6, pp. 612111–612115, 2012.
[53] J. Xia and L. Wang, “Small-animal whole-body photoacoustic tomography: a review,” 2013.
[54] B. T. Cox and B. E. Treeby, “Effect of sensor directionality on photoacoustic imaging: a study using the k-wave toolbox,” in BiOS, 2010, p. 75640I–75640I–6.
[55] R. A. Kruger, P. Liu, and C. R. Appledorn, “Photoacoustic ultrasound (PAUS)—reconstruction tomography,” Med. Phys., vol. 22, no. 10, pp. 1605–1609, 1995.
[56] S. Park, S. R.
Aglyamov, and S. Y. Emelianov, “10A-5 Beamforming for Photoacoustic Imaging Using Linear Array Transducer,” in Ultrasonics Symposium, 2007. IEEE, 2007, pp. 856–859.
[57] J. Kang, E.-K. Kim, T. Song, and J. H. Chang, “Photoacoustic imaging of breast microcalcifications: A validation study with 3-dimensional ex vivo data,” in Ultrasonics Symposium (IUS), 2012 IEEE International, 2012, pp. 28–31.
[58] C. Yoon, J. Kang, S. Han, Y. Yoo, T.-K. Song, and J. H. Chang, “Enhancement of photoacoustic image quality by sound speed correction: ex vivo evaluation,” Opt. Express, vol. 20, no. 3, pp. 3082–3090, 2012.
[59] M. Mozaffarzadeh, A. Mahloojifar, M. Orooji, K. Kratkiewicz, S. Adabi, and M. Nasiriavanaki, “Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm,” J. Biomed. Opt., vol. 23, no. 02, p. 1, Feb. 2018.
[60] H. K. Zhang, M. A. L. Bell, X. Guo, H. J. Kang, and E. M. Boctor, “Synthetic-aperture based photoacoustic re-beamforming (SPARE) approach using beamformed ultrasound data,” Biomed. Opt. Express, vol. 7, no. 8, p. 3056, Aug. 2016.
[61] M. A. Lediju Bell, N. Kuo, D. Y. Song, and E. M. Boctor, “Short-lag spatial coherence beamforming of photoacoustic images for enhanced visualization of prostate brachytherapy seeds,” Biomed. Opt. Express, vol. 4, no. 10, pp. 1964–1977, 2013.
[62] M. A. L. Bell, D. Y. Song, and E. M. Boctor, “Coherence-based photoacoustic imaging of brachytherapy seeds implanted in a canine prostate,” in SPIE Medical Imaging, 2014, p. 90400Q–90400Q–6.
[63] N. Kuo, H. J. Kang, D. Y. Song, J. U. Kang, and E. M. Boctor, “Real-time photoacoustic imaging of prostate brachytherapy seeds using a clinical ultrasound system,” J. Biomed. Opt., vol. 17, no. 6, pp. 660051–660057, 2012.
[64] N. Kuo, H. J. Kang, T. DeJournett, J. Spicer, and E. Boctor, “Photoacoustic imaging of prostate brachytherapy seeds in ex vivo prostate,” in SPIE Medical Imaging, 2011, pp. 796407–796409.
[65] E. Maneas, W.
Xia, O. Ogunlade, M. Fonseca, D. I. Nikitichev, A. L. David, S. J. West, S. Ourselin, J. C. Hebden, T. Vercauteren, and A. E. Desjardins, “Gel wax-based tissue-mimicking phantoms for multispectral photoacoustic imaging,” Biomed. Opt. Express, vol. 9, no. 3, p. 1151, Mar. 2018.
[66] J. R. Cook, R. R. Bouchard, and S. Y. Emelianov, “Tissue-mimicking phantoms for photoacoustic and ultrasonic imaging,” Biomed. Opt. Express, vol. 2, no. 11, pp. 3193–3206, 2011.
[67] T. N. Robinson and G. V. Stiegmann, “Minimally Invasive Surgery,” Endoscopy, vol. 36, no. 1, pp. 48–51, Jan. 2004.
[68] G. G. Hamad and M. Curet, “Minimally invasive surgery,” Am. J. Surg., vol. 199, no. 2, pp. 263–265, Feb. 2010.
[69] S. Maeso, M. Reza, J. A. Mayol, J. A. Blasco, M. Guerra, E. Andradas, and M. N. Plana, “Efficacy of the Da Vinci Surgical System in Abdominal Surgery Compared With That of Laparoscopy,” Ann. Surg., vol. 252, no. 2, pp. 254–262, Aug. 2010.
[70] G. Hubens, H. Coveliers, L. Balliu, M. Ruppert, and W. Vaneerdeweg, “A performance study comparing manual and robotically assisted laparoscopic surgery using the da Vinci system,” Surg. Endosc., vol. 17, no. 10, pp. 1595–1599, Oct. 2003.
[71] E. Rajih, C. Tholomier, B. Cormier, V. Samouëlian, T. Warkus, M. Liberman, H. Widmer, J.-B. Lattouf, A. M. Alenizi, M. Meskawi, R. Valdivieso, P.-A. Hueber, P. I. Karakewicz, A. El-Hakim, and K. C. Zorn, “Error reporting from the da Vinci surgical system in robotic surgery: A Canadian multispecialty experience at a single academic centre,” Can. Urol. Assoc. J., vol. 11, no. 5, pp. E197–E202, May 2017.
[72] I. Kosik and J. J. L. Carson, “Combined 3D photoacoustic and 2D fluorescence imaging of indocyanine green contrast agent flow,” in SPIE BiOS, 2013, p. 858143.
[73] P. Tavakolian, I. Kosik, A. Chamson-Reig, K. St. Lawrence, and J. J. L. Carson, “Potential for photoacoustic imaging of the neonatal brain,” in SPIE BiOS, 2013, p. 858147.
[74] P. Wong, I. Kosik, and J. J. L.
Carson, “Dynamic contrast-enhanced 3D photoacoustic imaging,” in SPIE BiOS, 2013, pp. 858152–858156.
[75] N. Gandhi, M. Allard, S. Kim, P. Kazanzides, and M. A. Lediju Bell, “Photoacoustic-based approach to surgical guidance performed with and without a da Vinci robot,” J. Biomed. Opt., vol. 22, no. 12, p. 1, Aug. 2017.
[76] B. Eddins and M. A. L. Bell, “Design of a multifiber light delivery system for photoacoustic-guided surgery,” J. Biomed. Opt., vol. 22, no. 4, p. 041011, Jan. 2017.
[77] S. Kim, H. J. Kang, A. Cheng, M. A. Lediju Bell, E. Boctor, and P. Kazanzides, “Photoacoustic image guidance for robot-assisted skull base surgery,” in 2015 IEEE International Conference on Robotics and Automation (ICRA), 2015, pp. 592–597.
[78] H. Zhang and F. Aalamifar, “Feasibility study of robotically tracked photoacoustic computed tomography,” SPIE Med., 2015.
[79] D. R. Bauer, R. Olafsson, L. G. Montilla, and R. S. Witte, “3-D photoacoustic and pulse echo imaging of prostate tumor progression in the mouse window chamber,” J. Biomed. Opt., vol. 16, no. 2, p. 026012, 2011.
[80] A. R. Rajanna, R. Ptucha, S. Sinha, B. Chinni, V. Dogra, and N. A. Rao, “Prostate cancer detection using photoacoustic imaging and deep learning,” Electron. Imaging, vol. 2016, no. 15, pp. 1–6, Feb. 2016.
[81] B. L. Bungart, L. Lan, P. Wang, R. Li, M. O. Koch, L. Cheng, T. A. Masterson, M. Dundar, and J.-X. Cheng, “Photoacoustic tomography of intact human prostates and vascular texture analysis identify prostate cancer biopsy targets,” Photoacoustics, vol. 11, pp. 46–55, Sep. 2018.
[82] T. Harrison and R. J. Zemp, “Coregistered photoacoustic-ultrasound imaging applied to brachytherapy,” J. Biomed. Opt., vol. 16, no. 8, p. 080502, 2011.
[83] J. L. Su, R. R. Bouchard, A. B. Karpiouk, J. D. Hazle, and S. Y. Emelianov, “Photoacoustic imaging of prostate brachytherapy seeds,” Biomed. Opt. Express, vol. 2, no. 8, p. 2243, Aug. 2011.
[84] H. K. Zhang, Y. Chen, J. Kang, A. Lisok, I. Minn, M. G.
Pomper, and E. M. Boctor, “Prostate-specific membrane antigen-targeted photoacoustic imaging of prostate cancer in vivo,” J. Biophotonics, p. e201800021, Jun. 2018.
[85] M. Ishihara, M. Shinchi, A. Horiguchi, H. Shinmoto, H. Tsuda, K. Irisawa, T. Wada, and T. Asano, “Possibility of transrectal photoacoustic imaging-guided biopsy for detection of prostate cancer,” 2017, vol. 10064, p. 100642U.
[86] A. Standard, “Z136. 1. American national standard for the safe use of lasers. American National Standards Institute,” Inc., New York, 1993.
[87] K. L. Bell, T. Harrison, N. Usmani, and R. J. Zemp, “Integrated transrectal probe for translational ultrasound-photoacoustic imaging,” 2016, vol. 9708, p. 97080A.
[88] M. Ai, W. Shu, T. Salcudean, R. Rohling, P. Abolmaesumi, and S. Tang, “Design of high energy laser pulse delivery in a multimode fiber for photoacoustic tomography,” Opt. Express, vol. 25, no. 15, p. 17713, Jul. 2017.
[89] M. Ai, S. Tang, T. Salcudean, R. Rohling, and P. Abolmaesumi, “Transurethral illumination probe design for deep photoacoustic imaging of prostate,” in Photons Plus Ultrasound: Imaging and Sensing 2018, 2018, vol. 10494, p. 11.
[90] C. Schneider, J. Guerrero, C. Nguan, R. Rohling, and S. Salcudean, “Intra-operative ‘Pick-up’ ultrasound for robot assisted surgery with vessel extraction and registration: A feasibility study,” in Information Processing in Computer-Assisted Interventions, Springer, 2011, pp. 122–132.
[91] C. Li and L. Wang, “Photoacoustic tomography and sensing in biomedicine,” Phys. Med. Biol., 2009.
[92] R. Z. Azar, C. Leung, T. K. Chen, K. Dickie, J. Dixon, K.-K. Chan, and L. Pelissier, “An automated breast ultrasound system for elastography,” in 2012 IEEE International Ultrasonics Symposium, 2012, pp. 1–4.
[93] S. Ma, S. Yang, and H. Guo, “Limited-view photoacoustic imaging based on linear-array detection and filtered mean-backprojection-iterative reconstruction,” J. Appl. Phys., vol. 106, no. 12, p. 123104, 2009.
[94] D. W.
Yang, D. Xing, S. H. Yang, and L. Z. Xiang, “Fast full-view photoacoustic imaging by combined scanning with a linear transducer array,” Opt. Express, vol. 15, no. 23, pp. 15566–15575, 2007.
[95] D. Yang, D. Xing, H. Gu, Y. Tan, and L. Zeng, “Fast multielement phase-controlled photoacoustic imaging based on limited-field-filtered back-projection algorithm,” Appl. Phys. Lett., vol. 87, no. 19, p. 194101, 2005.
[96] A. Rosenthal, D. Razansky, and V. Ntziachristos, “Fast semi-analytical model-based acoustic inversion for quantitative optoacoustic tomography,” IEEE Trans. Med. Imaging, vol. 29, no. 6, pp. 1275–1285, 2010.
[97] N. A. Rejesh, H. Pullagurla, and M. Pramanik, “Deconvolution-based deblurring of reconstructed images in photoacoustic/thermoacoustic tomography,” JOSA A, vol. 30, no. 10, pp. 1994–2001, 2013.
[98] J. Prakash, A. S. Raju, C. B. Shaw, M. Pramanik, and P. K. Yalavarthy, “Basis pursuit deconvolution for improving model-based reconstructed images in photoacoustic tomography,” Biomed. Opt. Express, vol. 5, no. 5, p. 1363, May 2014.
[99] K. Wang, S. A. Ermilov, R. Su, H.-P. Brecht, A. A. Oraevsky, and M. A. Anastasio, “An Imaging Model Incorporating Ultrasonic Transducer Properties for Three-Dimensional Optoacoustic Tomography,” IEEE Trans. Med. Imaging, vol. 30, no. 2, pp. 203–214, Feb. 2011.
[100] Q. Sheng, K. Wang, T. P. Matthews, J. Xia, L. Zhu, L. V. Wang, and M. A. Anastasio, “A Constrained Variable Projection Reconstruction Method for Photoacoustic Computed Tomography Without Accurate Knowledge of Transducer Responses,” IEEE Trans. Med. Imaging, vol. 34, no. 12, pp. 2443–2458, Dec. 2015.
[101] K. P. Köstli and P. C. Beard, “Two-dimensional photoacoustic imaging by use of Fourier-transform image reconstruction and a detector with an anisotropic response,” Appl. Opt., vol. 42, no. 10, pp. 1899–1908, 2003.
[102] K. Mitsuhashi, K. Wang, and M.
Anastasio, “Investigation of the far-field approximation for modeling a transducer’s spatial impulse response in photoacoustic computed tomography,” Photoacoustics, 2014.
[103] D. Queirós, X. L. Déan-Ben, A. Buehler, D. Razansky, A. Rosenthal, and V. Ntziachristos, “Modeling the shape of cylindrically focused transducers in three-dimensional optoacoustic tomography,” J. Biomed. Opt., vol. 18, no. 7, p. 076014, Jul. 2013.
[104] M. Cao, J. Yuan, S. Du, G. Xu, X. Wang, P. L. Carson, and X. Liu, “Full-view photoacoustic tomography using asymmetric distributed sensors optimized with compressed sensing method,” Biomed. Signal Process. Control, vol. 21, pp. 19–25, 2015.
[105] X. Liu, D. Peng, W. Guo, X. Ma, X. Yang, and J. Tian, “Compressed Sensing Photoacoustic Imaging Based on Fast Alternating Direction Algorithm,” Int. J. Biomed. Imaging, vol. 2012, pp. 1–7, 2012.
[106] D. Liang, H. F. Zhang, and L. Ying, “Compressed-sensing Photoacoustic Imaging based on random optical illumination,” Int. J. Funct. Inform. Personal. Med., vol. 2, no. 4, p. 394, 2009.
[107] S. Bu, Z. Liu, T. Shiina, K. Kondo, M. Yamakawa, K. Fukutani, Y. Someda, and Y. Asao, “Model-Based Reconstruction Integrated With Fluence Compensation for Photoacoustic Tomography,” IEEE Trans. Biomed. Eng., vol. 59, no. 5, pp. 1354–1363, May 2012.
[108] N. Huynh, E. Zhang, M. Betcke, S. Arridge, P. Beard, and B. Cox, “Patterned interrogation scheme for compressed sensing photoacoustic imaging using a Fabry Perot planar sensor,” in SPIE BiOS, 2014, p. 894327.
[109] P. Burgholzer, M. Sandbichler, F. Krahmer, T. Berer, and M. Haltmeier, “Sparsifying transformations of photoacoustic signals enabling compressed sensing algorithms,” in SPIE BiOS, 2016, p. 970828.
[110] S. Tzoumas, A. Rosenthal, C. Lutzweiler, D. Razansky, and V. Ntziachristos, “Spatiospectral denoising framework for multispectral optoacoustic imaging based on sparse signal representation,” Med. Phys., vol. 41, no. 11, p. 113301, Nov. 2014.
[111] Z. Guo, C. Li, L. Song, and L. V. Wang, “Compressed sensing in photoacoustic tomography in vivo,” J. Biomed. Opt., vol. 15, no. 2, p. 021311, 2010. [112] J. Provost and F. Lesage, “The application of compressed sensing for photo-acoustic tomography,” IEEE Trans. Med. Imaging, 2009. [113] Y. Zhang, Y. Wang, and C. Zhang, “Efficient discrete cosine transform model–based algorithm for photoacoustic image reconstruction,” J. Biomed. Opt., vol. 18, no. 6, p. 066008, Jun. 2013. [114] M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: The application of compressed sensing for rapid MR imaging,” Magn. Reson. Med., vol. 58, no. 6, pp. 1182–1195, Dec. 2007. [115] Yongyi Yang, N. P. Galatsanos, and A. K. Katsaggelos, “Regularized reconstruction to reduce blocking artifacts of block discrete cosine transform compressed images,” IEEE Trans. Circuits Syst. Video Technol., vol. 3, no. 6, pp. 421–432, 1993. [116] E. Candès and J. Romberg, “Sparsity and incoherence in compressive sampling,” Inverse Probl., vol. 23, no. 3, pp. 969–985, Jun. 2007. [117] E. Bingham and H. Mannila, “Random projection in dimensionality reduction: Applications to image and text data,” in Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining - KDD ’01, 2001, pp. 245–250. [118] B. Jafarpour, V. K. Goyal, D. B. McLaughlin, and W. T. Freeman, “Transform-domain sparsity regularization for inverse problems in geosciences,” GEOPHYSICS, vol. 74, no. 5, pp. R69–R83, Sep. 2009. [119] M. Honarvar, R. S. Sahebjavaher, S. E. Salcudean, and R. Rohling, “Sparsity regularization in dynamic elastography,” Phys. Med. Biol., vol. 57, no. 19, p. 5909, 2012. [120] A. Tikhonov and V. Arsenin, “Solutions of ill-posed problems,” 1977. [121] S.-C. Wooh and Y. Shi, “Optimum beam steering of linear phased arrays,” Wave Motion, 123  vol. 29, no. 3, pp. 245–265, Apr. 1999. [122] E. J. Gottlieb, J. M. Cannata, Chang-Hong Hu, and K. K. 
Shung, “Development of a high-frequency (> 50 MHz) copolymer annular-array, ultrasound transducer,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control, vol. 53, no. 5, pp. 1037–1045, May 2006. [123] B. E. Treeby and B. T. Cox, “k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields,” J. Biomed. Opt., vol. 15, no. 2, pp. 21312–21314, 2010. [124] H. J. Kang, M. A. L. Bell, X. Guo, and E. M. Boctor, “Spatial Angular Compounding of Photoacoustic Images,” IEEE Trans. Med. Imaging, vol. 35, no. 8, pp. 1845–1855, Aug. 2016. [125] H. Moradi, S. Tang, and S. E. Salcudean, “Deconvolution based photoacoustic reconstruction for directional transducer with sparsity regularization,” in SPIE BiOS, 2016, p. 97082D. [126] H. Moradi, S. Tang, and S. E. Salcudean, “Deconvolution based photoacoustic reconstruction with sparsity regularization,” Opt. Express, vol. 25, no. 3, p. 2771, Feb. 2017. [127] H. Moradi, M. Honarvar, S. Tang, and S. E. Salcudean, “Iterative photoacoustic image reconstruction for three-dimensional imaging by conventional linear-array detection with sparsity regularization,” in SPIE BiOS, 2017, p. 100643K. [128] G. Paltauf, R. Nuster, M. Haltmeier, and P. Burgholzer, “Experimental evaluation of reconstruction algorithms for limited view photoacoustic tomography with line detectors,” Inverse Probl., vol. 23, no. 6, p. S81, 2007. [129] F. A. Duck, Physical Properties of Tissues : a comprehensive reference book. Elsevier Science, 1990. [130] O. Mohareri, M. Ramezani, T. K. Adebar, P. Abolmaesumi, and S. E. Salcudean, “Automatic localization of the da Vinci surgical instrument tips in 3-D transrectal ultrasound,” IEEE Trans. Biomed. Eng., vol. 60, no. 9, pp. 2663–2672, 2013. [131] C. J. Kelly, H. Moradi, and S. E. Salcudean, “An automated breast ultrasound scanner with integrated photoacoustic tomography,” in SPIE BiOS, 2016, p. 970809. 124  [132] M. Ai, W. Shu, T. Salcudean, R. Rohling, P. Abolmaesumi, and S. 
Tang, “High energy laser pulse coupling in a multimode fiber for photoacoustic tomography,” in SPIE BiOS, 2016, p. 97084H–97084H. [133] Z. Chen, A. Deguet, R. Taylor, S. DiMaio, G. Fischer, and P. Kazanzides, “An open-source hardware and software platform for telesurgical robotics research,” in Proceedings of the MICCAI Workshop on Systems and Architecture for Computer Assisted Interventions, Nagoya, Japan, 2013, pp. 22–26. [134] O. Mohareri, S. E. Salcudean, and C. Nguan, “Asymmetric force feedback control framework for teleoperated robot-assisted surgery,” in Robotics and Automation (ICRA), 2013 IEEE International Conference on, 2013, pp. 5800–5806. [135] T. Adebar, S. Salcudean, S. Mahdavi, M. Moradi, C. Nguan, and L. Goldenberg, “A robotic system for intra-operative trans-rectal ultrasound and ultrasound elastography in radical prostatectomy,” in Information Processing in Computer-Assisted Interventions, Springer, 2011, pp. 79–89. [136] T. K. Adebar, M. C. Yip, S. E. Salcudean, R. N. Rohling, C. Y. Nguan, and S. L. Goldenberg, “Registration of 3D ultrasound through an air–tissue boundary,” Med. Imaging, IEEE Trans., vol. 31, no. 11, pp. 2133–2142, 2012. [137] H. Moradi, S. Tang, and S. E. Salcudean, “Towards Intra-operative Prostate Photoacoustic Imaging: Configuration Evaluation and Implementation Using the da Vinci Research Kit,” IEEE Trans. Med. Imaging, pp. 1–1, 2018. [138] L. B. Rosenberg, “Virtual fixtures: Perceptual tools for telerobotic manipulation,” in Proceedings of IEEE Virtual Reality Annual International Symposium, 1993, pp. 76–82. [139] J. Abbott, P. Marayong, and A. Okamura, “Haptic virtual fixtures for robot-assisted manipulation,” Robot. Res., 2007. [140] L. Wang, Z. Chen, P. Chalasani, R. M. Yasin, P. Kazanzides, R. H. Taylor, and N. Simaan, “Force-Controlled Exploration for Updating Virtual Fixture Geometry in Model-Mediated Telemanipulation,” J. Mech. Robot., vol. 9, no. 2, p. 021010, Mar. 2017. [141] P. Kazanzides, Z. Chen, A. Deguet, G. 
S. Fischer, R. H. Taylor, and S. DiMaio, “An open-source research kit for the da Vinci R surgical robot,” in Proc. IEEE International Conf. 125  on Robotics and Automation (ICRA’14), 2014. [142] C. M. Schneider, “Ultrasound elastography for intra-operative use and renal tissue imaging,” Jan. 2017. [143] G. Marquez, L. V. Wang, S.-P. Lin, J. A. Schwartz, and S. L. Thomsen, “Anisotropy in the absorption and scattering spectra of chicken breast tissue,” Appl. Opt., vol. 37, no. 4, p. 798, Feb. 1998. [144] R. Rohling, P. Munger, and J. Hollerbach, “Comparison of relative accuracy between a mechanical and an optical position tracker for image-guided neurosurgery,” J. image Guid., 1995. [145] M. Shah, “Solving the robot-world/hand-eye calibration problem using the Kronecker product,” J. Mech. Robot., 2013. [146] Y. Zhou, J. Yao, and L. V. Wang, “Tutorial on photoacoustic tomography,” J. Biomed. Opt., vol. 21, no. 6, p. 061007, Apr. 2016. [147] C. Huang, K. Wang, L. Nie, L. V. Wang, and M. A. Anastasio, “Full-Wave Iterative Image Reconstruction in Photoacoustic Tomography With Acoustically Inhomogeneous Media,” IEEE Trans. Med. Imaging, vol. 32, no. 6, pp. 1097–1110, Jun. 2013. [148] R. Beasley and R. Howe, “Increasing accuracy in image-guided robotic surgery through tip tracking and model-based flexion correction,” IEEE Trans. Robot., 2009. [149] M. Tavakoli and R. D. Howe, “Haptic Effects of Surgical Teleoperator Flexibility,” Int. J. Rob. Res., vol. 28, no. 10, pp. 1289–1302, Oct. 2009.  
