DEVELOPING SURGICAL NAVIGATION TOOLS FOR MINIMALLY INVASIVE SURGERY USING ULTRASOUND, STRUCTURED LIGHT, TISSUE TRACKING AND AUGMENTED REALITY

by

Philip Edgcumbe

B.A.Sc., The University of British Columbia, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Biomedical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

October 2017

© Philip Edgcumbe, 2017

Abstract

Surgeons and their patients would benefit if, during an operation, a surgeon could inexpensively, safely and non-invasively peer beneath the surface of the organ s/he was operating on. Peering below the surface would allow the surgeon to see blood vessels, tumours and other important structures. Furthermore, it would allow surgeons to better plan the operation and avoid damaging important structures with their tools. Giving surgeons the ability to peer beneath the surface and better formulate their surgical plan is the goal of image-guided surgery research and the focus of this thesis.

In this thesis, accurate 3D models of cancer tumour phantoms are generated and displayed to the surgeon. This is achieved via the development of: an ultrasound calibration technique (Chapter 2); the Augmented Reality Ultrasound Navigation System (ARUNS) (Chapter 3); a miniature projector for surgery called the Pico Lantern (Chapter 4); and the Projector-based Augmented Reality Intracorporeal System (PARIS) (Chapter 5). The ultimate goal is to improve surgical navigation and help surgeons be more accurate and reduce the amount of healthy tissue they excise during operations.

The ultrasound calibration technique improved ultrasound-based pinhead point reconstruction accuracy from 3.1 mm to 1.3 mm. The Pico Lantern and the PARIS were developed to improve surface reconstruction and the realism of augmented reality in surgery. The Pico Lantern is a miniature projector for surface reconstruction, augmented reality and guidance in laparoscopic surgery. The PARIS was tested by two surgeons in a user study of 32 simulated kidney cancer surgeries. Compared to using a laparoscopic ultrasound transducer alone, surgeons using the PARIS found the surgical navigation more intuitive and had a better spatial understanding of the underlying anatomy. Furthermore, positive margin rates decreased and there was a statistically significant reduction in the amount of healthy tissue excised. Key conclusions are that wide baseline ultrasound calibration is effective, that simple guidance cues are important in augmented reality in surgery, and that projected light in surgery is a viable strategy for surface reconstruction and augmented reality.

Lay Summary

The goal of this thesis is to improve cancer and surgical outcomes for the 50,000 Canadians who are diagnosed with liver, stomach, pancreatic, kidney, bladder or prostate cancer each year. This is achieved by developing surgical navigation tools and studying how these tools change the outcome of surgeries. The research in this thesis represents a bridge from the lab bench to the patient bedside because it brings important engineering and technological advances to surgeons in the operating room. Specifically, the tools developed in this thesis allow surgeons to look beneath the surface, see accurate 3D models of underlying cancer tumours, and better formulate a surgical plan.
These tools were tested in over 30 simulated kidney cancer surgeries and resulted in statistically significant improvements in important surgical metrics. The navigation tools are built using ultrasound imaging, computer vision, augmented reality with direct graphic overlay, and augmented reality via projection of light directly onto the patient.

Preface

This thesis is primarily based on four published manuscripts and one manuscript that has been submitted and is under review. Most of the content from those manuscripts has been included in this thesis. The manuscripts have been modified to make the thesis cohesive and to follow the thesis format. The research done by the author has often been in collaboration with other researchers.

Clinical research ethics approvals for the clinical studies for this thesis were obtained from the UBC Clinical Research Ethics Board (CREB) (application numbers A14-0171 and A15-0215).

A modified version of Chapter 2 has been published in the following manuscript:

• Philip Edgcumbe, Christopher Nguan, and Robert Rohling. "Calibration and stereo tracking of a laparoscopic ultrasound transducer for augmented reality in surgery." In Augmented Reality Environments for Medical Imaging and Computer-Assisted Interventions, pp. 258-267. Springer Berlin Heidelberg, 2013. [The author received the Dr. John Ankenman Clinical Research Prize in 2013 for the presentation of this work at the Annual Research Day of the Department of Urological Sciences at UBC]

The contribution of the author was in the development and implementation of the methods and experiments, analysis of results and writing the manuscript. Profs. Nguan and Rohling provided clinical and technical advice respectively and contributed to the editing of the manuscript.

A modified version of Chapter 3 has been published in the following manuscript:

• Philip Edgcumbe*, Rohit Singla*, Philip Pratt, Caitlin Schneider, Christopher Nguan, and Robert Rohling. "Augmented Reality Imaging for Robot-Assisted Partial Nephrectomy Surgery." In International Conference on Medical Imaging and Virtual Reality, pp. 139-150. Springer International Publishing, 2016.

The author and Rohit Singla are joint first authors on this paper. The author was primarily responsible for writing the manuscript and received significant writing and editing assistance from Rohit Singla. The author's experimental contributions included developing and testing the Dynamic Augmented Reality Tracker (DART) for use in tumour-centric intra-operative ultrasound imaging and surgical guidance. The author conducted the experiments for ultrasound calibration and the da Vinci kinematic-to-camera calibration, and made the phantom tumour model. Caitlin Schneider showed the author the tumour-making technique and, after several iterations, the author determined the ratio of phantom ingredients. The author was responsible for tumour model generation via ultrasound tracking, reconstruction and segmentation. The author's contribution to the user interface included the proposal of the orthogonal view display of the instrument tips, tumour and kidney surface via the TilePro® inputs. Rohit Singla improved on the strategy for instrument display in the orthogonal view by rendering a representative cone instead of a point. Rohit Singla developed and implemented the virtual traffic light idea to indicate instrument-to-tumour proximity.
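Conceptually, the virtual traffic light reduces to a threshold test on the distance between the tracked instrument tip and the tumour model, both expressed in a common coordinate frame. The following is a minimal illustrative sketch only; the function name and thresholds are hypothetical and are not the values used in the ARUNS study.

```python
import numpy as np

def proximity_colour(tip_position, tumour_centroid, warn_mm=10.0, stop_mm=5.0):
    """Map instrument-tip-to-tumour distance to a traffic-light colour.

    tip_position and tumour_centroid are 3-vectors in the same frame
    (e.g., both expressed relative to the DART), in millimetres.
    warn_mm and stop_mm are illustrative thresholds only.
    """
    distance = np.linalg.norm(np.asarray(tip_position, dtype=float)
                              - np.asarray(tumour_centroid, dtype=float))
    if distance < stop_mm:
        return "red"      # instrument very close to the tumour
    elif distance < warn_mm:
        return "yellow"   # approaching the resection margin
    return "green"        # safe distance

# Example: a tip 7 mm from the tumour centroid maps to "yellow".
print(proximity_colour([0, 0, 27], [0, 0, 20]))
```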
The author contributed to the mathematical framework of the project by developing the equations for tumour-centric tracking of the ultrasound probe and of the surgical instruments, as well as the equations for initial placement of the virtual cameras. Rohit Singla's contributions to the mathematical framework included developing the equations for using da Vinci kinematics to track the surgical instruments and for continuously rendering the tumour and surgical instrument meshes in the virtual camera field of view. Rohit Singla was primarily responsible for the design and implementation of the underlying computer architecture and computer code. The author was present during many of the computer code design meetings and helped debug the code. Rohit Singla worked with Philip Pratt to set up a framework such that plugins could be added to Philip Pratt's software. Rohit Singla then wrote the plugins which, among other things, load the ultrasound images, track the optical markers and display augmented reality overlays. Rohit Singla was also responsible for implementing the 3D ultrasound image reconstruction and segmentation pipeline. Philip Pratt provided active support and answered questions about the interface of his software. Dr. Robert Rohling and Dr. Christopher Nguan provided suggestions and contributed toward editing the manuscript.

A modified version of Chapter 4 has been published in the following manuscripts and submitted patent:

• Philip Edgcumbe, Philip Pratt, Guang-Zhong Yang, Chris Nguan, and Rob Rohling. "Pico lantern: A pick-up projector for augmented reality in laparoscopic surgery." In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 432-439. Springer International Publishing, 2014. [The author received the Outstanding Young Scientist Award at MICCAI 2014 for this manuscript]

• Philip Edgcumbe, Philip Pratt, Guang-Zhong Yang, Christopher Nguan, and Robert Rohling. "Pico Lantern: Surface reconstruction and augmented reality in laparoscopic surgery using a pick-up laser projector." Medical Image Analysis 25, no. 1 (2015): 95-102.

• Rohling, Robert, Philip Edgcumbe, and Christopher Nguan. "Imagery System." U.S. Patent Application 15/183,458, filed June 15, 2016.

For the patent application, the author was one of three inventors who contributed equally to the inventive aspects of the patent. Rohit Singla and the author worked together to do a patent search and a search for prior art. The main novel and inventive aspect of the patent application was a technique for doing accurate surface reconstruction with a moveable source of structured light. Specifically, it is an "imagery system comprising: a camera having a field of view; a light source comprising a source of structured light and at least one fiducial marker visible externally from the light source, the light source movable relative to the camera; and at least one processor programmed to determine an estimated position of the source of structured light, and the surface map of the surface exposed to the structured light." [1] The patent application covers many aspects of the Pico Lantern, a miniature projector, which is central to the research presented in Chapter 4 and in the two manuscripts associated with Chapter 4. With regards to the manuscripts, the contribution of the author was in the development and implementation of the methods and experiments, the analysis of results and the writing of the manuscript.
Rohit Singla developed the computer code that generates the colour-coded tissue displacement map. Profs. Nguan and Rohling provided clinical and technical advice respectively and contributed to the editing of the manuscript. Dr. Philip Pratt provided feedback about the experimental plan and edited the manuscript. Prof. Yang edited the manuscript.

A modified version of Chapter 5 has been submitted and is under review. The authors on that manuscript are Philip Edgcumbe, Rohit Singla, Philip Pratt, Christopher Nguan, and Robert Rohling. The strength of the chapter is the presentation and evaluation of the Projector-based Augmented Reality Intracorporeal System (PARIS). The PARIS is a novel, fully-integrated system for intraoperative guidance in soft tissue surgery. The Pico Lantern concept is significantly extended, and it is shown how structured light can be used during laparoscopic surgery to display intraoperative ultrasound via projection onto the patient as augmented reality. The author's contributions included doing the experiments to evaluate the surface reconstruction accuracy and density, as well as taking CT scans and registering the DART to the CT scan coordinate system. The author modified his previous DART design to make CT-to-ultrasound registration possible and modified the phantom design to have contrast and a more realistic colour. The phantoms and associated tumour model in this chapter are the same as the ones presented in Chapter 3. The PARIS has two visualization modes. The author proposed the projector point of view (P-POV) visualization mode and Rohit Singla proposed the laparoscope point of view (L-POV) visualization mode. The author designed the PARIS verification and validation experiment in which a phantom was cut in half to directly project a tumour outline onto the exposed surface. The author and Rohit Singla jointly ran the user study with two surgeons who did 32 simulated laparoscopic partial nephrectomies. The author analyzed the quantitative user study results and Rohit Singla analyzed the qualitative user study results via development and analysis of the user study questionnaire. The author and Rohit Singla jointly developed the mathematical framework for this system. Rohit Singla and Caitlin Schneider did the CT and ultrasound segmentation, and Rohit Singla proposed displaying the tumour in a dot pattern for a semi-transparent appearance. Rohit Singla was primarily responsible for the design and implementation of the underlying computer architecture and computer code, and for characterizing and improving its performance. Characterization included measuring latency. Improvements were made in the speed of calculating surface-ray intersection. Rohit was responsible for creating a system to automatically project a target onto a fiducial marker and for analyzing the reprojection error. As part of the analysis of reprojection error, Rohit identified a design modification for the Pico Lantern which involved reconfiguring the location of the fiducial marker for more accurate Pico Lantern tracking. Rohit explored a variety of depth cue strategies and ran experiments for colour coding of depth. The author was present during many of the computer code design meetings and helped debug the code. Profs. Rohling and Nguan provided clinical advice, technical advice and supervision, and contributed to the editing of the manuscript. Dr. Philip Pratt provided regular input on the design of the software plug-ins and experimental design, as well as editing the manuscript.
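Both the Pico Lantern and the PARIS rest on the geometric idea in the patent claim quoted above: once the pose of the structured light source is estimated from its external fiducial marker, each projected feature can be triangulated between a ray from the camera and a ray from the projector. The following is a minimal sketch of that triangulation step only, assuming pinhole models, a known projector pose, and non-parallel rays; all names and numbers are illustrative, not the thesis implementation.

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Closest point between two 3D rays (origins o, directions d).

    Solves min ||(o1 + t1*d1) - (o2 + t2*d2)|| and returns the midpoint
    of the shortest connecting segment, a standard two-view
    triangulation step. Rays are assumed non-parallel.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = d1 @ d2
    rhs = o2 - o1
    denom = 1.0 - b * b                      # near zero if rays parallel
    t1 = (d1 @ rhs - b * (d2 @ rhs)) / denom
    t2 = (b * (d1 @ rhs) - d2 @ rhs) / denom
    p1 = o1 + t1 * d1
    p2 = o2 + t2 * d2
    return (p1 + p2) / 2.0

# Camera at the origin looking down +z; projector pose assumed known
# from fiducial tracking (here a made-up pose 50 mm to the right).
cam_origin = np.zeros(3)
cam_ray = np.array([0.0, 0.0, 1.0])          # ray through a detected corner pixel
proj_origin = np.array([50.0, 0.0, 0.0])     # estimated projector position
proj_ray = np.array([-0.5, 0.0, 1.0])        # ray through the projected corner
print(triangulate_rays(cam_origin, cam_ray, proj_origin, proj_ray))  # -> [0, 0, 100]
```

Repeating this for every corner of a projected checkerboard yields the sparse surface map; moving the tracked projector between poses widens the effective baseline without moving the camera.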
Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
List of Supplemental Videos
List of Abbreviations
Acknowledgements
Chapter 1 - Introduction
  1.1 Background
    1.1.1 Kidney Cancer and Kidney Cancer Surgery
      1.1.1.1 Kidney Cancer
      1.1.1.2 Kidney Anatomy
      1.1.1.3 Kidney Cancer Surgery
    1.1.2 Minimally Invasive Surgery, Robot-Assisted Surgery and Computer-Assisted Surgery
      1.1.2.1 Minimally Invasive Surgery
      1.1.2.2 Robot-Assisted Surgery
      1.1.2.3 Computer-Assisted Surgery
    1.1.3 Image-Guided Surgery
      1.1.3.1 Introduction to Augmented Reality
      1.1.3.2 Image-Guided Neurosurgery
      1.1.3.3 Image-Guided Abdominal Surgery
      1.1.3.4 Challenges in Image-Guided Laparoscopic Surgery
    1.1.4 A Review of Research in Augmented Reality and Image-Guided Laparoscopic Surgery
      1.1.4.1 Image-Guided Surgery and Laparoscopic Ultrasound
      1.1.4.2 Augmented Reality in Image-Guided Surgery
      1.1.4.3 Static Video Display Augmented Reality
      1.1.4.4 3D Surface Reconstruction, Structured Light and Projection onto Patient Augmented Reality
      1.1.4.5 Adapting to Tissue Deformation in Laparoscopic Surgery
  1.2 Putting the Research of this Thesis in Context
    1.2.1 Augmented Reality and Image-Guided Laparoscopic Surgery
      1.2.1.1 Image-Guided Surgery and Laparoscopic Surgery
      1.2.1.2 Augmented Reality in Image-Guided Surgery
      1.2.1.3 Static Video Display Augmented Reality
      1.2.1.4 3D Surface Reconstruction, Structured Light and Projection onto Patient Augmented Reality
      1.2.1.5 Adapting to Tissue Deformation in Laparoscopic Surgery
    1.2.2 Setting a Goal of 5 mm for Accuracy in Augmented Reality
    1.2.3 Overview of Error Metrics in Thesis
  1.3 Objectives
  1.4 Contributions
  1.5 Thesis Outline
Chapter 2 - Calibration and Stereo Tracking of a Laparoscopic Ultrasound Transducer for Augmented Reality in Surgery
  2.1 Introduction
  2.2 Methods
    2.2.1 Apparatus, Calibration and Tracking
  2.3 Experiments
    2.3.1 Point Reconstruction Accuracy and Precision
    2.3.2 Point Reconstruction Accuracy as a Function of Focus
  2.4 Results
    2.4.1 Point Reconstruction Accuracy and Precision
    2.4.2 Point Reconstruction Accuracy and Precision as a Function of Focus
  2.5 Discussion and Conclusion
Chapter 3 - Augmented Reality Imaging for Robot-Assisted Partial Nephrectomy Surgery
  3.1 Introduction
    3.1.1 Related Work
  3.2 Materials and Methods
    3.2.1 Calibration and Accuracy Tests
    3.2.2 FEM Simulation for DART
    3.2.3 Theory
    3.2.4 Principle of Operation
    3.2.5 Surgeon User Study
  3.3 Results
    3.3.1 Calibration and Accuracy Tests
    3.3.2 FEM Simulation for DART
    3.3.3 Surgeon User Study
  3.4 Discussion
Chapter 4 - Pico Lantern: Surface Reconstruction and Augmented Reality in Laparoscopic Surgery Using a Pick-Up Laser Projector
  4.1 Introduction
  4.2 Materials
  4.3 Methods
    4.3.1 Checkerboard Corner Selection and Checkerboard Tracking
    4.3.2 Validation of 3D Surface Reconstruction
    4.3.3 Measurement and Augmented Reality Display of Tissue Movement
    4.3.4 Virtual Viewpoints of Surgical Scene
    4.3.5 Proof-of-concept In Vivo Porcine Experiment
  4.4 Theory/Calculation
    4.4.1 3D Surface Reconstruction
      4.4.1.1 Method 1 - Stereo Laparoscope and Untracked Pico Lantern
      4.4.1.2 Method 2 - Mono Laparoscope and Tracked Pico Lantern
  4.5 Results
    4.5.1 Virtual Viewpoints of Surgical Scene
    4.5.2 Proof-of-concept In Vivo Porcine Experiment
  4.6 Conclusions
Chapter 5 - Follow the Light: Projector-based Augmented Reality for Intraoperative Surgical Planning in Minimally Invasive Surgery
  5.1 Introduction
  5.2 Methods and Materials
    5.2.1 Materials
    5.2.2 Augmented Reality Visualizations
    5.2.3 Verification and Validation
  5.3 Results
  5.4 Discussion
Chapter 6 - Conclusion
  6.1 Summary of Findings
  6.2 Limitations
  6.3 Future Work
  6.4 Conclusion
Bibliography
Appendix A
  A.1 Calculating the Transformation from the DART Coordinate System to the Laparoscopic Surgical Instrument Coordinate System
Appendix B
  B.1 Further Design Considerations for the Pico Lantern
    B.1.1 Pico Lantern Electrical Connectors and Size Constraints
    B.1.2 The Pico Lantern's Sensitivity to Tracking Error

List of Tables

Table 1: Point reconstruction accuracy (mm) ± standard deviation for the combination of narrow baseline calibration and tracking and the combination of wide baseline calibration and narrow baseline tracking. 30 LUS transducer poses were captured for calibration and randomly assigned to ten groups of 10, ten groups of 15 and one group of all 30 poses.

Table 2: Point reconstruction precision (mm) ± standard deviation for the combination of narrow baseline calibration and tracking and the combination of wide baseline calibration and narrow baseline tracking. 30 LUS transducer poses were captured for calibration and randomly assigned to ten groups of 10, ten groups of 15 and one group of all 30 poses.

Table 3: Point reconstruction results (average ± std) for the LUS transducer at a distance of 160 mm from the narrow baseline camera. 30 LUS transducer poses were captured for calibration and randomly assigned to ten groups of 15.

Table 4: Quantitative comparison for user study with simulated partial nephrectomies

List of Figures

Figure 1: Diagram of kidney anatomy [4].

Figure 2: Conventional MIS surgical instrument - note the limited dexterity. © Mitch Webb

Figure 3: da Vinci Si® (left), da Vinci surgical instruments in the surgeon's field of view (top right) and the control mechanism for the surgical instruments (bottom right). © 2017 Intuitive Surgical, Inc.

Figure 4: These photographs show a laparoscopic linear transducer (top) and a laparoscopic flexible transducer (bottom). © Springer Science + Business Media New York 2014

Figure 5: Pick-up laparoscopic ultrasound (LUS) transducer for the da Vinci surgical robot. It was designed and built by Schneider et al. [29]. The picture shows the fixed transform that exists between the da Vinci ProGrasp™ and the LUS.

Figure 6: Image from Wang et al.'s paper [39] showing rendering views of augmented reality visualization for blood vessels using (from left to right) transparent overlay, virtual window, random-dot mask, transparent mask and the ghosting method. © Displays, Elsevier

Figure 7: Screenshot of the surgical console from image-guided surgery using the surgical navigation tool developed by Buchs et al. [32]. The liver tumour is shown in yellow and the surgical instrument is shown in red. In theory, the red surgical instrument should be directly overlaid onto the real surgical instrument. However, because the endoscope and surgical instrument are tracked with an external tracker, there is a lever arm effect. The lever arm runs the length of the tool and results in an offset between the real tool and the augmented reality display of the same tool. © Journal of Surgical Research, Elsevier

Figure 8: Surgeon's view using the cone-beam CT augmented reality system described in the work by Simpfendörfer et al. [43]. This is a picture of the surgeon's view, which includes the augmented reality video (upper left), the augmented reality fluoroscopy image (bottom left) and the conventional laparoscopic image [43]. © Journal of Surgical Research, Elsevier
Figure 9: Diagram to illustrate point reconstruction accuracy and precision. The ultrasound transducer is on top of the water bath (dark blue line) and is imaging the pinhead in the water bath. The double-ended red arrow shows the ultrasound calibration estimate of the physical relationship between the ultrasound linear array and the optical fiducial. The black rectangle is the AR ultrasound overlay. The double-ended blue arrow points at the estimated pinhead location (white dot in the ultrasound image) and the actual pinhead location (blue pinhead).

Figure 10: Pictures to illustrate the meaning of the da Vinci kinematics instrument tracking (dVKIT) error. Images 1 and 2 (left and center) show a point that is localized via computer vision tracking and da Vinci kinematics. Image 2 shows a da Vinci kinematic instrument that is touching the point of interest. Image 3 (right) shows how the dVKIT error manifests itself as an offset between the instrument and the graphical rendering of the instrument, which is drawn as a yellow cone.

Figure 11: Diagram to illustrate the process used to calculate total system error in Chapter 3.

Figure 12: Pictographic outline of the thesis. For each chapter several pictures are shown that represent the key concepts in those chapters.

Figure 13: Left: the da Vinci ProGrasp™ tool holding the "pick-up" LUS transducer, which has checkerboard markers on it. Right: the same picture with the addition of a 3D coordinate system overlay showing the axes of the LUS transducer marker coordinate system (T). The z axis and the normal of the ultrasound imaging plane are almost parallel.

Figure 14: Two pictures of the experimental setup. Left: the wide baseline and narrow baseline (stereo laparoscope) cameras are in the foreground and the pick-up LUS transducer and triple N-wire phantom are in the background. Right: the LUS transducer, held by the da Vinci ProGrasp™ tool, is directly above the N-wires. The phantom optical fiducials are in the background. The four experimental coordinate systems (U, P, C and Ph) and the transformations between them (PTU, CTP, CTPh) are shown.

Figure 15: Pictures from the point reconstruction experiment. The red arrows point to the pinhead. Images 1-3 show the LUS transducer in 3 poses (top row). In each pose the pinhead is in the ultrasound image generated by the LUS transducer. Image 4 shows the pinhead after the LUS transducer has been removed and the water drained from the container; this pinhead represents the gold standard pinhead location in the camera coordinate system. Images 5 and 6 are two ultrasound images from the LUS transducer, taken during this experiment. The white dot in the ultrasound image is the pinhead.

Figure 16: The DART with repeatable grasp (left); the DART with KeyDot® marker as it is inserted into an ex vivo porcine kidney (centre); and display of the modified DART for total system error analysis (right). The red circle is the centre of the pinhead as determined by ultrasound calibration and KeyDot® tracking. The vertex of the yellow cone is the location of the pinhead as determined by da Vinci surgical instrument kinematics.

Figure 17: Diagram illustrating the two-step process for calculating total system error.

Figure 18: System configuration with labeled coordinate frames and components for both phases.

Figure 19: The surgeon's view during the phases of the surgery. VC1 and VC2 are the orthogonal virtual camera viewpoints for top and side views. Refer to the Figure 18 legend for labels of the components in this image.

Figure 20: The direct augmented reality (left) and virtual camera viewpoints (middle and right) that are shown to the surgeon using the ARUNS in addition to his/her normal view. The middle pane is the top-down view and the right pane is the side view of the surgical scene.

Figure 21: Example of a cross-sectional view of the FEM simulation. The colour-coded cross-sectional view shows the amount of displacement at each vertex in the FEM mesh. The colour corresponds to the colour-coded legend on the left of the image, which is in units of mm. The tumour is the sphere in the center of the image and the DART is the small rectangle on top of the simulated cube of kidney tissue. The legs of the DART are not visible because the cross-section does not go through them. The area of largest deformation, in red on the top left of the image, is the place where the ultrasound force was applied over a rectangle the size of the ultrasound linear array.

Figure 22: The graphs show the results of some of the FEM simulations. For the simulations shown in this figure, the elasticity of the material was held fixed at 15.4 kPa (left graph) and 10.8 kPa (right graph). The x and y axes in both graphs represent input parameters for the simulation and the z axis is the magnitude of the distance (mm) between the theoretical tumour centroid, which is always 20 mm immediately below the DART, and the actual tumour centroid. The numbers beside the data points in the graphs (*) are the z values of the data points. The coloured surface between the data points (*) is generated by connecting the data points along the edges of a graph created by Delaunay triangulation between the data points.

Figure 23: The graphs show the results of some of the FEM simulations. For the simulations shown in this figure, the DART leg length was held constant at 10 mm. The x and y axes in both graphs represent input parameters for the simulation and the z axis is the magnitude of the distance (mm) between the theoretical tumour centroid, which is always 20 mm immediately below the DART, and the actual tumour centroid. In this case the x axis is the tissue/kidney elasticity in units of kPa and the y axis is the force exerted by the ultrasound transducer in units of Newtons. The numbers beside the data points in the graphs (*) are the z values of the data points. The coloured surface between the data points (*) is generated by connecting the data points along the edges of a graph created by Delaunay triangulation between the data points.

Figure 24: Pictures of the commercially available ShowWX+ projector (left), the internals of the ShowWX+ (center) and a conceptual diagram of the Pico Lantern in use during laparoscopic surgery, scanning the surface of a kidney (right). Notice that part of the ShowWX+ is within the white Pico Lantern.

Figure 25: Pictures of Pico Lantern prototypes 1 and 2 projecting a checkerboard pattern onto the surface of ex vivo porcine kidneys (left and middle). Picture of the proposed configuration of the internal components of Pico Lantern prototype 3 (right).

Figure 26: Picture of the Integrated Photonics Module (IPM) from the ShowWX+ projector (left). The IPM is placed inside the Pico Lantern housing and connected to the rest of the ShowWX+ projector via custom-designed PCBs and flat flexible cables.

Figure 27: Picture of custom-made PCB boards for connecting the Pico Lantern Integrated Photonics Module (IPM) to the Electronics Platform Module (EPM). This cable meant that the battery and other components of the projector could be left outside of the patient. The black board-to-board connectors in the bottom left of the picture were identical to the ones used in the ShowWX+ projector, and the model had to be discovered by reverse engineering.

Figure 28: Picture of the PCB design for the Pico Lantern.

Figure 29: Diagram showing the approximate geometry of the experimental setup for the plane, cylinder and kidney 3D surface reconstruction experiments.

Figure 30: Two views of the surface reconstruction data for the cylinder. The Certus optical tracker stylus gold standard surface points are black and the Pico Lantern surface points are coloured. Each colour corresponds to a different Pico Lantern pose and each coloured point represents a corner of the projected checkerboard. The density of gold standard surface data points is approximately 1/mm² and the density of the Pico Lantern points is approximately 0.2/mm².

Figure 31: Overview of the two methods used for surface reconstruction. The red lines show the narrow triangle geometry of method 1 (left) and the blue lines show the wider geometry of method 2 (right).

Figure 32: Laparoscope view during measurement of motion of the human neck in vivo, with graphs showing 10 seconds of displacement of the checkerboard corners indicated by the tails of the arrows (left). Depiction of the motion of the carotid artery using an interpolated colour map: red corresponds to large motion (right).

Figure 33: Laparoscope view of two kidneys placed side by side (left). 3D surface reconstruction in the laparoscope coordinate system, as determined by the Pico Lantern (right). Each colour in the graph on the right corresponds to a different Pico Lantern pose and each point corresponds to a corner of the projected checkerboard. The V-shape created by the two organs (kidneys) touching each other can clearly be visualized in the left and right images.

Figure 34: Da Vinci Si® laparoscope view of the ex vivo kidney used for surface reconstruction validation and virtual viewpoint images. Each set of coloured points on the kidney surface indicates the corners of the checkerboards that were projected onto the kidney surface for each Pico Lantern pose (left). Three virtual viewpoints of the part of the kidney surface that was imaged by the Pico Lantern (right).

Figure 35: Da Vinci Si® laparoscope view during the in vivo porcine experiment. The Pico Lantern is projecting a checkerboard pattern onto the surface of the kidney for the purpose of surface reconstruction.

Figure 36: Overview of the Projector-based Augmented Reality Intracorporeal System (PARIS) in projector point of view (P-POV) mode. There is a red perspective and a yellow/brown orthogonal projection of the tumour in the projector point of view (P-POV). Dashed blue lines are orthographic lines from the tumour in the direction of the projector. The dynamic marker is the DART and is shown as a grey/white object in the conceptual and surgeon's view respectively.

Figure 37: (a) The pick-up ultrasound transducer with KeyDot®, (b) the plastic 3D-printed DART, (c) the metal 3D-printed DART, (d) the original Pico Lantern projector, and (e) the Celluon PicoPro used in experiments.

Figure 38: Picture from data collection during the measurement of reprojection error experiment. The black arrow shows the origin of the asymmetric dot pattern and the pink arrow shows the reprojected laser dot that should be centered on the origin of the dot pattern.

Figure 39: Phantom cut in half for the purpose of qualitative validation of reprojection accuracy. (Left) Un-augmented cross-section of the phantom. The phantom was cut in half to expose the black-coloured tumour, which is indicated by a blue arrow. The ultrasound probe is placed so that its imaging plane is just behind and parallel to the surface of the phantom where it was cut. (Center) Computer graphics overlay of the tumour model. (Right) L-POV perspective projection of the tumour model. The LUS was placed on the phantom's edge and the reconstructed volume was segmented.

Figure 40: Cross sections of excised specimens from the first four phantoms from each of the Projector POV (top row) and LUS (bottom row) branches of the study with the expert surgeon. The black inclusion is the simulated kidney cancer lesion. Centroids and contours of tumours and full tissue cross-sections are shown. Note that the 2nd and 4th specimens in the bottom row had positive margins and were excluded from the quantitative analysis and data shown in Table 4.

Figure 41: Quantitative comparison of excised tissue volume during the user study. LUS refers to the LUS-only condition. When comparing the results for LUS and PARIS for both the novice and expert surgeon, the Wilcoxon signed-rank test p-value was < 0.01.

Figure 42: Picture of a laparoscopic instrument holding the DART. The coordinate systems that are labelled and shown are the DART (D), Solidworks (SW) and Patient Side Manipulator Tip (PSMTip).

Figure 43: Picture of the Integrated Photonics Module (IPM) of the ShowWX+ projector. The blue circles show the interconnects which were used to connect the IPM to the rest of the projector.

Figure 44: Picture of the ShowWX+ Electronics Control Module (ECM - left) and Integrated Photonics Module (IPM - right). The coloured circles show how the ECM and IPM connect to each other.

Figure 45: Picture of the IPM inside the Pico Lantern housing.

List of Supplemental Videos

There are two supplemental videos associated with this thesis. The reader will find them in the metadata associated with this thesis on the University of British Columbia cIRcle website (www.circle.ubc.ca) and data repository.

• Supplementary video 1: The video title is Augmented Reality Imaging For Robot Assisted Partial Nephrectomy Surgery. This 90 second video is a demonstration of the Augmented Reality Ultrasound Navigation System (ARUNS) described in Chapter 3.

• Supplementary video 2: The video title is Follow the Light: Intracorporeal Projector-Based Augmented Reality for Laparoscopic Surgery. This 90 second video is a demonstration of the Projector-based Augmented Reality Intracorporeal System (PARIS) described in Chapter 5.

It is highly recommended that the reader watch these videos when s/he reads Chapter 3 and Chapter 5 respectively. Both videos are narrated demonstrations of the ARUNS and PARIS, and most of the footage shows the ARUNS or PARIS in use from the perspective of the surgeon console. Watching these videos will allow the reader to visualize the surgery and the information that the surgical navigation systems give to the surgeon.
List of Abbreviations

2D - Two Dimensional
3D - Three Dimensional
API - Application Programming Interface
ARUNS - Augmented Reality Ultrasound Navigation System
CT - Computed Tomography
DART - Dynamic Augmented Reality Tracker
DOF - Degree of Freedom
EPM - Electronics Platform Module
FEM - Finite Element Method
IPM - Integrated Photonics Module
LED - Light Emitting Diode
LUS - Laparoscopic Ultrasound
MIS - Minimally Invasive Surgery
MRI - Magnetic Resonance Imaging
OR - Operating Room
PARIS - Projector-based Augmented Reality Intracorporeal System
PCB - Printed Circuit Board
PVC - Polyvinyl Chloride
RALPN - Robot-Assisted Laparoscopic Partial Nephrectomies
RALRP - Robot-Assisted Laparoscopic Radical Prostatectomy
RCC - Renal Cell Carcinoma
RGB - Red Green Blue
RMS - Root Mean Square
RMSE - Root Mean Square Error

Acknowledgements

My PhD has had many twists, turns, ups and downs, but I've always been able to count on my co-supervisors, Dr. Robert Rohling and Dr. Christopher Nguan, to be there to support me. They have always believed in me and challenged and encouraged me to pursue excellence in research. Thank you to both Rob and Chris. I'd also like to thank my thesis committee for providing timely and relevant guidance.

My research would not have been possible without funding and support from a variety of sources. I've been fortunate to work on a variety of research projects that were funded by the CIHR and NSERC. I'm grateful to Prof. Tim Salcudean for providing advice and access to the da Vinci surgical system and a variety of research equipment. I'd like to thank Andrew Wiles, the Manager of Advanced Research at Northern Digital Inc (NDI), for believing in me and my research and licensing the Pico Lantern patent from UBC. I was humbled to be invited to visit Andrew and his team and present my work at NDI. I have really valued the feedback I have received about my research from the NDI Advanced Research team. Furthermore, I am grateful to have received direct research support from the following organizations and programs: the CIHR Vanier Canada Graduate Scholarship program, the Vancouver Coastal Health-UBC-CIHR MD/PhD Studentship and the Prostate Cancer Canada Amy and Donald McInnes Graduate Studentship Award.

I've been blessed to work with supportive and collegial lab mates and collaborators. I'm grateful for the regular lab lunches, weekly Spartacus workouts, impassioned research discussions, support and plentiful laughs that I've shared with the students and staff of the Robotics and Control Lab (RCL). Two people that I have particularly enjoyed working with are Dr. Philip Pratt and Rohit Singla. Philip is based at Imperial College London in London, UK. We have shared a fruitful research collaboration that started in August 2013 and is still going strong almost four years later. Philip has been a wonderful mentor, friend, host and collaborator. I really look up to him and admire his genuine curiosity and love for research. Rohit Singla, now a Master's student in the Robotics and Control Lab, and I have worked together on a variety of projects for close to three years. Thank you, Rohit, for being a caring friend and someone who has challenged me and taught me so much. I have been really impressed at your thorough understanding of the field of research in which we are working and your maturity as a scientist.
Given that we worked on many projects together, I'm also thankful for all that you have taught me and the patience you have shown as we've learned about each other as scientists and people. Keep up the good work in the research lab and I hope to work with you on the medical wards one day. Julio and Caitlin, the stalwarts of the RCL, thanks for all the good memories.

I'd like to specially thank and remember Jeff Abeysekera. Jeff passed away from cancer in August 2016. Jeff brought positive energy to everything he did and was a great friend. He helped me start my first research project and continued to be someone that I turned to throughout my PhD.

Thank you Muriel Clauson, my best friend, who has helped me grow as a scientist and as a person. Thank you to my family. Betty Soule provided wonderful editing support for my manuscripts and thesis. My siblings, Claire and Henry, have kept me grounded and shared lots of hugs over the years. Finally, thank you to my parents, Gwyneth and Paul, who have loved and supported me since the day I was born and all the way through my PhD thesis.

Chapter 1 - Introduction

The goal of this thesis is to show how surgical navigation is improved by the creation of new tools, techniques and strategies for image-guided surgery. Partial nephrectomy for kidney cancer was chosen as the exemplar surgery. The expectation is that improvements in surgical navigation in kidney cancer surgery can be applied more generally to other abdominal surgeries, such as those for liver, stomach, pancreatic, kidney, bladder and prostate cancer. More than fifty thousand Canadians were diagnosed with these cancers last year [2]. The techniques, devices and tools that were developed include the following:

1. A technique which improves the accuracy of ultrasound calibration.
2. A device, called the Dynamic Augmented Reality Tracker (DART), which simplifies surface and organ tracking and minimizes the effect that tissue deformation has on the accuracy of surgical navigation information.
3. Another device, called the Pico Lantern, which performs surface reconstruction and allows the surgeon to see information concerning the underlying anatomy via direct projection onto the patient.
4. Surgical navigation tools called the Augmented Reality Ultrasound Navigation System (ARUNS) and the Projector-based Augmented Reality Intracorporeal System (PARIS). Both the DART and the Pico Lantern are integral parts of the ARUNS and PARIS.

The effectiveness of the ARUNS and the PARIS as surgical navigation tools was evaluated via simulated surgeries on kidney phantoms and in porcine in vivo feasibility studies.

Section 1.1 covers the background of kidney cancer surgery, advances in surgery and a review of research at the intersection of augmented reality and image-guided surgery. Section 1.2 explains how the research in this thesis relates to the background research presented in Section 1.1. In particular, section 1.2.2 outlines why 5 mm is the goal for the accuracy of the augmented reality systems presented in this thesis. Sections 1.3 and 1.4 cover the objectives and contributions of the thesis, respectively. Section 1.5 is a thesis outline.

1.1 Background

The background section provides the clinical and technical context within which the research for this thesis was done. Since the focus of this thesis has been on improving kidney cancer surgery, section 1.1.1 is an introduction to the kidney, kidney cancer and kidney cancer surgery.
Section 1.1.2 is an introduction to several surgical techniques known as minimally invasive surgery, robot-assisted surgery and computer-assisted surgery. The surgical tools developed in this thesis are built for those kinds of surgical techniques. Section 1.1.3 introduces the reader to a variety of applications for augmented reality, the evolution of image-guided surgery and augmented reality in surgery, and the specific challenge of doing image-guided laparoscopic surgery. Section 1.1.4 is a review of the ongoing research in image-guided surgery and augmented reality. For the review of augmented reality, both computer graphic and projection-based augmented reality research are included.

1.1.1 Kidney Cancer and Kidney Cancer Surgery

In the following section the topics of kidney cancer, kidney anatomy and kidney cancer surgery will be presented in turn.

1.1.1.1 Kidney Cancer

5,900 Canadians were diagnosed with kidney cancer in 2015 and 1,850 died from it [2]. The Canadian 5-year net survival rate for kidney cancer is 67%. In other words, a person diagnosed with kidney cancer has a 67% likelihood of living for 5 more years [2]. Primary renal neoplasm is the scientific name for kidney cancer that originates in the kidney. Renal cell carcinoma (RCC), which originates within the renal cortex, constitutes 80 to 85 percent of primary renal neoplasms. The most common presenting symptoms for patients with kidney cancer are blood in the urine, abdominal mass, pain and weight loss. However, there is an increasing rate of incidental diagnosis from radiologic procedures that were ordered for other medical conditions [3]. For the majority of patients with localized RCC, surgery is associated with a high rate of cancer-free survival and is the preferred method of treatment.

1.1.1.2 Kidney Anatomy

The role of the kidneys is primarily to filter blood and secondarily to release certain hormones into the body. The kidneys are paired organs that are approximately 13 cm long, 6 cm wide and 3 cm thick and are located inferior to the diaphragm and posterior to the abdominal cavity. The kidney has three main components: the kidney cortex, the medulla and the collecting system. The kidney cortex is the outer layer of the kidney. The medulla concentrates the ultrafiltrate. The collecting system is in the interior part of the kidney. Urine collects in the collecting system before it leaves the kidney for the bladder.

Figure 1: Diagram of kidney anatomy [4].

1.1.1.3 Kidney Cancer Surgery

Patients with RCC that is localized to the kidney generally receive surgical treatment. Surgical treatment options include radical nephrectomy or partial nephrectomy. Radical nephrectomy is the full removal of the kidney. Partial nephrectomy, a nephron-sparing surgery, is the removal of just the kidney tumour. Partial nephrectomy is the standard treatment for patients with only one kidney, patients at risk of future loss of significant renal function, or patients with tumours < 4 cm in diameter [5]. Partial nephrectomy results in comparable oncologic outcomes and a significantly lower risk of chronic renal dysfunction [6], [7]. Compared to patients who have their entire kidney removed, patients who receive partial nephrectomy surgery have better post-surgery kidney function because they are left with more kidney and more nephrons for filtering blood. Thus, one of the goals in partial nephrectomy surgery is to save as much healthy kidney tissue as possible while still removing the entire kidney tumour and minimizing warm ischemia time [8], [9]. Warm ischemia time is the time when the renal artery is clamped and there is no blood supply to the kidney. Warm ischemia has been associated with multifocal interstitial nephritis and it is generally understood that the longer the warm ischemia time, the worse the post-operative renal function will be [10]. In most partial nephrectomy surgeries the surgeon cuts into the healthy renal parenchyma surrounding the kidney tumour. Enucleation is a new surgical technique which involves the removal of the tumour without dissection into the parenchyma surrounding the tumour [11]. However, the enucleation technique has limited long-term clinical data to support its effectiveness for ensuring long-term cancer-free survival.

There are several approaches to partial nephrectomy surgery: trans-peritoneal, retroperitoneal, hand-assisted and robot-assisted techniques. The steps in a partial nephrectomy surgery include kidney dissection from perineal fat to expose the lesion, dissection and clamping of the renal artery or hilum with a vascular clamp, tumour resection with sharp dissection, reconstruction of the kidney and unclamping of the hilum [12]. Compared to open partial nephrectomy, laparoscopic partial nephrectomy is associated with shorter operative time, less blood loss and shorter hospital stays. However, compared to open partial nephrectomy, laparoscopic partial nephrectomy generally has longer ischemia time and more urological postoperative complications such as hemorrhage and urine leakage [13].

In this thesis the planning and execution stages of the surgery are specifically defined and referred to. The planning stage is the part of the surgery between the exposure of the kidney and the surgeon's first cut into the kidney. The execution stage is the part of the surgery where the surgeon is cutting through the kidney and resecting the tumour.

1.1.2 Minimally Invasive Surgery, Robot-Assisted Surgery and Computer-Assisted Surgery

1846 was the year of the first successful surgical procedure performed with the patient under anesthesia. This milestone helped to alleviate humankind's great fear of pain during surgery. It forever changed surgery and the approach to surgical innovation. In 1895, Wilhelm Roentgen discovered the X-ray. The X-ray was another major medical milestone which changed the practice of medicine forever. In the 20th century, the importance of imaging in medicine continued to increase with the invention of ultrasound, computed tomography (CT) scans and magnetic resonance imaging (MRI). Computer-assisted surgery and image-guided surgery were natural applications of these medical imaging technologies. The following sections will provide an overview of key technological developments in surgery and will explain how computer-assisted surgery allows surgeons to incorporate medical imaging into the surgical planning process.

1.1.2.1 Minimally Invasive Surgery

In parallel with innovation in medical imaging technology there was significant innovation in surgical tool technology. In the 20th century surgeons and engineers developed tools for minimally invasive surgery (MIS). The small keyhole incisions used in MIS resulted in shorter recovery times and less pain after surgery. Over the last few decades the thrust of surgical innovation has been to improve surgical outcomes and increase the number of conditions that can be treated via MIS. This has often been a synergistic process, because one way to improve surgical outcomes is to do the surgery as MIS: MIS minimizes the amount of blood loss and reduces patient recovery time and post-operative pain [14]. MIS, also known as laparoscopic surgery in the context of abdominal and urological surgery, replaces large incisions through muscles and the abdominal wall with small keyhole incisions, approximately 10 mm in diameter, through which the laparoscopic surgical instruments and laparoscope are inserted. As shown in Figure 2, conventional laparoscopic surgical instruments are simple long rods that are held by the surgeon on one end and have a working element on the other end. The grasping end of the instrument enters the patient via a cannula, which also acts as a fulcrum for instrument movement. This means that the motion of conventional surgical instruments is reversed and counter-intuitive. Furthermore, the surgeon has much less dexterity in conventional MIS than in open surgery. Thus, it takes many years to become an expert at conventional laparoscopy.

Figure 2: Conventional MIS surgical instrument - note the limited dexterity. © Mitch Webb

A laparoscope is an instrument for MIS that is inserted through a small surgical incision or cannula and used for looking at the inside of the abdomen and pelvis. The majority of MIS is done using monocular laparoscopes, making it difficult to perceive the three-dimensional (3D) spatial relationships of objects in the surgical scene. Several companies have improved visual perception for MIS by developing stereo laparoscopes. Examples are the 3DHD Vision System (ConMed, New York, USA), the Endoeye Flex 3D (Olympus, Shinjuku, Tokyo, Japan) and the stereo laparoscope of the da Vinci Surgical System (Intuitive Surgical, Sunnyvale, California, USA). No matter the type of laparoscope, all have a limited view of the surgical field. The limited view makes it more difficult for surgeons to identify important anatomical and pathological features [15]. Despite these challenges, the advantages of MIS appear to outweigh the disadvantages, because MIS continues to grow in popularity in Canada [14] and around the world. Furthermore, MIS will play an important role in meeting the medical needs of Canada's ageing population.

1.1.2.2 Robot-Assisted Surgery

To address some of the shortcomings of conventional laparoscopic surgery, Intuitive Surgical Inc.™ developed the da Vinci™ surgical system, often referred to as the da Vinci surgical robot. Surgeons use it for robot-assisted MIS. The da Vinci™ is a sophisticated surgical tool that is teleoperated by the surgeon at the surgeon's console. The tools have a similar diameter (8 mm) to conventional laparoscopic tools and the end effector has a full 6 degrees of freedom (DOF). The da Vinci surgical robot gives the surgeon stereo vision and the dexterous tele-manipulation and articulation that were lost with the initial development of laparoscopic instruments. The da Vinci also filters the surgeon's hand tremors and scales the surgeon's hand movements. In the USA, 72% of laparoscopic partial nephrectomies are done as Robot-Assisted Laparoscopic Partial Nephrectomies (RALPN) using the da Vinci surgical robot [16].
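Conceptually, motion scaling and tremor filtering can be thought of as attenuating the master (hand) displacements by a fixed factor and low-pass filtering them before they drive the slave instrument. The toy sketch below illustrates the idea with a first-order exponential filter; the actual da Vinci filter design is proprietary and not described here, and the scale and smoothing values are made up for illustration.

```python
import numpy as np

def scale_and_filter(hand_positions, scale=0.2, alpha=0.3):
    """Toy master-slave mapping: scale hand motion and low-pass filter it.

    scale < 1 maps large hand movements to small instrument movements;
    the exponential moving average (coefficient alpha) suppresses
    high-frequency tremor. Both values are illustrative only.
    """
    hand_positions = np.asarray(hand_positions, dtype=float)
    deltas = np.diff(hand_positions, axis=0) * scale   # scaled increments
    tip = [np.zeros(hand_positions.shape[1])]          # instrument tip path
    filtered_delta = np.zeros(hand_positions.shape[1])
    for d in deltas:
        filtered_delta = alpha * d + (1 - alpha) * filtered_delta
        tip.append(tip[-1] + filtered_delta)
    return np.array(tip)

# A ~10 mm hand movement with superimposed tremor maps to a smooth ~2 mm tip path.
t = np.linspace(0, 1, 50)
hand = np.column_stack([10 * t + np.sin(60 * t), np.zeros(50), np.zeros(50)])
print(scale_and_filter(hand)[-1])
```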
Thus, one of the goals in partial nephrectomy surgery is to save as much healthy kidney tissue as possible while still removing the entire kidney tumour and minimizing warm ischemia time [8], [9]. Warm ischemia time is the time during which the renal artery is clamped and there is no blood supply to the kidney. Warm ischemia has been associated with multifocal interstitial nephritis and it is generally understood that the longer the warm ischemia time, the worse the post-operative renal function will be [10]. In most partial nephrectomy surgeries the surgeon cuts into the healthy renal parenchyma surrounding the kidney tumour. Enucleation is a new surgical technique which involves the removal of the tumour without dissection into the parenchyma surrounding the tumour [11]. However, the enucleation technique has limited long-term clinical data to support its effectiveness for ensuring long-term cancer-free survival.

There are several approaches for performing partial nephrectomy surgery: trans-peritoneal, retroperitoneal, hand-assisted and robot-assisted techniques. The steps in a partial nephrectomy surgery include dissection of the kidney from the perinephric fat to expose the lesion, dissection and clamping of the renal artery or hilum with a vascular clamp, tumour resection with sharp dissection, reconstruction of the kidney and unclamping of the hilum [12]. Compared to open partial nephrectomy, laparoscopic partial nephrectomy is associated with shorter operative time, less blood loss, and shorter hospital stays. However, compared to open partial nephrectomy, laparoscopic partial nephrectomy generally has a longer ischemia time and more urological postoperative complications such as hemorrhage and urine leakage [13].

In this thesis the planning and execution stages of the surgery are specifically defined and referred to. The planning stage is the part of the surgery between the exposure of the kidney and the surgeon’s first cut into the kidney. The execution stage is the part of the surgery where the surgeon is cutting through the kidney and resecting the tumour.

1.1.2 Minimally Invasive Surgery, Robot-Assisted Surgery and Computer-Assisted Surgery

1846 was the year of the first successful surgical procedure performed with the patient under anesthesia. This milestone helped to alleviate humankind’s great fear of pain during surgery. It forever changed surgery and the approach to surgical innovation. In 1895, Wilhelm Roentgen discovered the X-ray. This was another major medical milestone which changed the practice of medicine forever. In the 20th century, the importance of imaging in medicine continued to increase with the invention of ultrasound, computed tomography (CT) scans and magnetic resonance imaging (MRI). Computer-assisted surgery and image-guided surgery were natural applications of these medical imaging technologies. The following sections will provide an overview of key technological developments in surgery and will explain how computer-assisted surgery allows surgeons to incorporate medical imaging into the surgical planning process.

1.1.2.1 Minimally Invasive Surgery

In parallel with innovation in medical imaging technology there was significant innovation in surgical tool technology. In the 20th century surgeons and engineers developed tools for minimally invasive surgery (MIS). The small keyhole incisions used in MIS resulted in shorter recovery times and less pain after surgery.
Over the last few decades the thrust of surgical innovation has been to improve surgical outcomes and increase the number of conditions that can be treated via MIS. This has often been a synergistic process: one way to improve surgical outcomes is to perform the surgery as MIS, since MIS minimizes the amount of blood loss and reduces patient recovery time and post-operative pain [14]. MIS, also known as laparoscopic surgery in the context of abdominal and urological surgery, replaces large incisions through muscles and the abdominal wall with small keyhole incisions with a diameter of 10 mm through which the laparoscopic surgical instruments and laparoscope are inserted. As shown in Figure 2, the conventional laparoscopic surgical instruments are simple long rods that are held by the surgeon at one end and have a working element on the other end. The grasping end of the instrument enters the patient via a cannula which also acts as a fulcrum for instrument movement. This means that the movement of conventional surgical instruments is reversed and counter-intuitive. Furthermore, the surgeon has much less dexterity in conventional MIS than open surgery. Thus, it takes many years to become an expert at conventional laparoscopy.

Figure 2: Conventional MIS surgical instrument - note the limited dexterity. © Mitch Webb

A laparoscope is an instrument for MIS that is inserted through a small surgical incision or cannula and used for looking at the inside of the abdomen and pelvis. The majority of MIS is done using monocular laparoscopes, making it difficult to perceive the three dimensional (3D) spatial relationships of objects in the surgical scenes. Several companies have improved visual perception for MIS by developing stereo laparoscopes. Examples are the 3DHD Vision System (ConMed, New York, USA), the Endoeye Flex 3D (Olympus, Shinjuku, Tokyo, Japan) and the stereo laparoscope of the da Vinci Surgical System (Intuitive Surgical, Sunnyvale, California, USA). No matter the type of laparoscope, all have a limited view of the surgical field. The limited view makes it more difficult for surgeons to identify important anatomical and pathological features [15]. Despite these challenges, it seems that the advantages of MIS outweigh the disadvantages because MIS continues to grow in popularity in Canada [14] and the world. Furthermore, MIS will play an important role in meeting the medical needs of Canada’s ageing population.

1.1.2.2 Robot-Assisted Surgery

To address some of the shortcomings of conventional laparoscopic surgery, Intuitive Surgical Inc™ developed the da Vinci™ surgical system, often referred to as the da Vinci surgical robot. Surgeons use it for Robot-Assisted MIS. The da Vinci™ is a sophisticated surgical tool that is teleoperated by the surgeon at the surgeon’s console. The tools have a similar diameter (8 mm) to conventional laparoscopic tools and the end effector has a full 6 degrees of freedom (DOF). The da Vinci surgical robot gives the surgeon stereo vision and the dexterous tele-manipulation and articulation that were lost with the initial development of laparoscopic instruments. The da Vinci also filters the surgeon’s hand tremors and scales the surgeon’s hand movements. In the USA, 72% of laparoscopic partial nephrectomies are done as Robot-Assisted Laparoscopic Partial Nephrectomies (RALPN) using the da Vinci surgical robot [16].
Also in the USA, 85% of MIS radical prostatectomies are done as Robot-Assisted Laparoscopic Radical Prostatectomy (RALRP) with the da Vinci surgical robot [17]. A radical prostatectomy is the removal of the prostate and is performed to treat prostate cancer.

Figure 3: da Vinci Si® (Left), da Vinci surgical instruments in surgeon’s field of view (top right) and control mechanism for the surgical instruments (bottom right). ©2017 Intuitive Surgical, Inc.

1.1.2.3 Computer-assisted Surgery

Medical imaging technologies such as ultrasound, CT and MRI have revolutionized the practice of medicine and changed how doctors diagnose and treat patients. Computer-assisted surgery is the application of medical imaging technology to assist in navigation in surgery. At a simple level, navigation in surgery involves asking the questions: “Where is my (anatomical) target?” and “Where am I (anatomically)?” [18]. One of the goals of the biomedical engineering field, and a focus of this thesis, is to make medical imaging technology more available to surgeons during the conduct of an operation.

Computer-assisted surgery is an important area of study because it aims to make the vast improvements in medical imaging technology over the last 50 years available in real time during operations. Medical imaging has positively transformed clinical medicine and the hope is that it will do the same for surgery. For example, in clinical medicine, it has been shown that a CT scan changes the working diagnosis for patients with abdominal pain 53% of the time [19]. Done properly, it is entirely possible that effective use of medical imaging in surgery could improve the surgical plan in many surgeries. The specific advantage of computer assistance is that it allows the surgeon to visualize subsurface targets and critical structures, either prior to or during surgery. In turn, the surgeon’s improved knowledge of the underlying anatomy could lead to fewer complications, improved safety, and better quality operations.

1.1.3 Image-Guided Surgery

Image-guided surgery is a kind of computer-assisted surgery. It is a general term for any surgery where the surgeon uses tracked surgical instruments which are registered to preoperative or intraoperative images to help guide the procedure. In other words, image-guided surgery integrates preoperative or intraoperative images with the real-time operative field. Image-guided surgery is analogous to navigating a car with Google Maps. Tracking a surgical instrument and showing its location on a medical image is analogous to tracking a car with GPS and showing its location on a digital street map. Surgeons and drivers alike have to make decisions about where they go based on what they see immediately in front of them and information provided to them by navigational tools.

1.1.3.1 Introduction to Augmented Reality

Augmented reality is a view of the physical world with certain elements that are augmented by computer-generated sensory input. On the reality-virtuality continuum, augmented reality is part of mixed reality and is closer to the real environment than to the virtual environment. It has many applications beyond surgery and is a burgeoning field. It is used by the military, in professional sports broadcasts and movies, on cell phones and in medicine [20]. In medicine there are many applications for augmented reality. One exemplary application is the commercially available VeinViewer (Christie Medical Holdings Inc, Memphis, Tennessee, USA).
It projects onto the patient a map of the blood vessels under the skin in order to make IV needle insertion easier. Currently, most surgeons look at medical images on a screen. They then use their mind’s eye to translate the information from the screen to the operating field. Image-guided surgery can show the surgeon the location of a tracked instrument in the medical images on the screen. However, to bring the medical imaging information directly to the actual operating field, augmented reality is a necessity. It has the potential to reduce the cognitive load on the surgeon and allow the surgeon to fully utilize the digitized medical imaging information. Building on the car and surgery analogy from the previous section, an augmented reality display on the windshield of a car is similar to augmented reality on the surgical field. Several car companies are developing augmented reality for cars with features that alert the driver to potentially dangerous objects in their environment and label nearby roads and landmarks. This can help drivers avoid collisions or alert them to errors they are making such as drifting into another lane. In this thesis, similar concepts are applied to augmented reality in surgery and surgical navigation.

1.1.3.2 Image-Guided Neurosurgery

Neurosurgeons are the pioneers of image-guided surgical navigation. This is not surprising because the brain is full of delicate structures which have significant functional roles and the brain is largely fixed relative to the skull. First came the surgical map, generated before the surgery via a CT or MRI scan [18]. Second, the surgical map was registered to the skull of the patient via a stereotaxic frame. Third, frameless navigation was developed, which ultimately allowed tracking of a surgical instrument in “real-time” and visualizing its position on the surgical map, i.e., the preoperative CT or MRI images [18], [21]. The frameless navigation technique involves placing tracking markers on the skull of the patient and the surgical instruments so only the relative movement between the tracked instrument and the tracked skull of the patient is relevant. This approach has been shown to offer a significant benefit to the patients [22]. Finally, augmented reality was introduced in which medical imaging information is overlaid directly onto the surgeon’s field of view [23]. Brainlab AG (Munich, Germany), a leading medical technology company, has developed commercially available products for image-guided neurosurgery.

1.1.3.3 Image-Guided Abdominal Surgery

There are many parallels and similarities between computer-assisted, image-guided neurosurgery and computer-assisted, image-guided abdominal surgery, the focus of this thesis. In both neurosurgery and abdominal surgery the surgeon needs a clear understanding of the underlying anatomy in order to minimize the amount of healthy tissue removed. However, in the case of abdominal surgery there are unique challenges. Namely, the organs of the abdomen are not constrained as the brain is within the skull, and the organs are prone to movement and deformation before and during surgery [24]. This makes it more difficult to use image-guided surgery because the surgeon cannot be sure that the images and associated information follow the moving organs and are accurately displayed.
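The marker-based registration step that underlies both framed and frameless navigation can be made concrete with a short sketch. Given the positions of a set of fiducials in preoperative image coordinates and the positions of the same fiducials reported by an intraoperative tracker, the least-squares rigid transform between the two coordinate systems can be computed with a standard SVD-based method. The following is a minimal Python/NumPy illustration of that computation under the assumptions of point fiducials and rigid anatomy; it is not the algorithm of any particular commercial navigation system, and the function names are illustrative.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points to dst points.

    src, dst: (N, 3) arrays of corresponding fiducial positions, e.g.
    markers segmented in a preoperative CT (src) and the same markers
    localized by an optical tracker in the operating room (dst).
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # guard against reflection
    t = dst_c - R @ src_c
    return R, t

def fiducial_registration_error(src, dst, R, t):
    """RMS residual of the fit, a standard accuracy measure for navigation."""
    residuals = dst - (src @ R.T + t)
    return np.sqrt((residuals ** 2).sum(axis=1).mean())
```

For the abdomen, the rigidity assumption built into this fit is exactly what breaks down, which motivates the deformation-aware strategies discussed next.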
1.1.3.4 Challenges in Image-Guided Laparoscopic Surgery

For image-guided laparoscopic surgery, important points to consider are: the differences between imaging modalities, tissue deformation, intraoperative dynamics, robustness and relevance. Each of these points is explored in more detail below:

- Medical images are generally captured via 3D CT or MRI scans or 2D ultrasound scans. Further, the ultrasound transducer can be tracked to generate a 3D ultrasound image. Thus, the medical images are generally stored as voxels in a 3D space while the laparoscopic images are two-dimensional (2D) arrays of pixels storing three color values (RGB). This makes establishing correspondence between medical image data and laparoscopic images difficult. A related challenge in displaying information on the laparoscopic images is accounting for lens distortion [25].
- There is additional tissue translation and deformation in laparoscopic surgery because pneumoperitoneum creates pressure in the abdominal cavity, causing a cumulative organ shift of 28 mm for the liver [26]. For the kidney, the shift can be as much as 46.5 mm due to a combination of pressure from pneumoperitoneum and the change in patient position from supine to flank between the preoperative imaging and actual surgery [24].
- Intraoperative dynamics, such as breathing and cardiovascular pulses, cause a periodic movement of the organs. Song et al. measured a maximum displacement of 22.5 mm of the liver in 10 healthy humans [27].
- Robustness and relevance are ongoing challenges. Many laparoscopic augmented reality techniques have been demonstrated and characterized, both in terms of accuracy and improved operation outcomes, in highly controlled lab settings. However, seamless integration into standard OR workflow has remained a challenge.

Providing image guidance in laparoscopic abdominal surgery presents many of the same challenges as in open abdominal surgery. Tissue deformation and tissue tracking are challenges in both cases. However, one key difference is that the surgeon sees the surgical field of view through the laparoscopic camera screen. The laparoscopic screen provides a natural interface for augmented reality. Augmented reality strategies, such as using a half-mirror, augmented reality goggles or large projectors, which were developed for open surgery or needle biopsies, do not work in laparoscopic abdominal surgery due to space and equipment limitations.

1.1.4 A Review of Research in Augmented Reality and Image-Guided Laparoscopic Surgery

The focus of this thesis has been on developing image-guided augmented reality navigation aids for laparoscopic surgery. As such, a review of related research is included in the following sections.

1.1.4.1 Image-Guided Surgery and Laparoscopic Ultrasound

In particular, the focus of this thesis is on ultrasound image-guided surgery. Ultrasound was chosen because the rapid miniaturization and cost reduction of ultrasound technology, coupled with improvements in image quality and analysis, mean that ultrasound will likely become ubiquitous, much like the stethoscope. Ultrasound imaging is a good candidate for the operating room because it is real-time, non-ionizing and relatively inexpensive. Laparoscopic ultrasound (LUS) transducers are designed to fit through the incisions that are made during MIS. LUS improves surgical safety by allowing surgeons to visualize important anatomy beneath the organ surface during operations.
As of 2010, a survey of surgeons practicing endoscopy showed that 82% expected an increase in the use of LUS in the next 5 years [28]. Most of the major medical imaging companies sell laparoscopic ultrasound transducers like the one shown in Figure 4.

Figure 4: These photographs show a laparoscopic linear transducer (top) and a laparoscopic flexible transducer (bottom). © Springer Science + Business Media New York 2014

There are also LUS transducers that have been developed specifically for use with the da Vinci surgical robot. These LUS transducers can be picked up by the da Vinci robot and controlled by the surgeon at the da Vinci console. BK Medical (Herlev, Denmark) sells a LUS for the da Vinci surgical robot called the ProART™. Another LUS for the da Vinci is the “pick-up” LUS designed and developed by Schneider et al. [29]. One of the features of this pick-up LUS is that it can be repeatedly grasped by the robot so there is a repeatable transform from the robot tool to the ultrasound linear array. The repeatable grasping element allows the LUS to be tracked via robotic kinematics in addition to direct optical tracking and electromagnetic tracking. It also has a built-in electromagnetic sensor for real-time electromagnetic tracking. The width of its linear array is 2.56 cm and it has a maximum imaging depth of 6 cm. These specifications make the pick-up LUS a good candidate for ultrasound imaging during partial nephrectomies. As mentioned previously, partial nephrectomies are only offered to patients with tumours with a diameter of less than 4 cm, and the typical kidney dimensions are 13 cm long, 6 cm wide and 3 cm thick.

Figure 5: Pick-up laparoscopic ultrasound (LUS) transducer for the da Vinci surgical robot. It was designed and built by Schneider et al. [29]. The picture shows the fixed transform that exists between the da Vinci ProGrasp™ and the LUS.

1.1.4.2 Augmented Reality in Image-Guided Surgery

For a comprehensive introduction to augmented reality and image-guided surgery, please refer to the following reviews:

- Navab et al. wrote a review about the research the Navab group has done to personalize intra-operative imaging and provide augmented reality in computer-assisted interventions [30].
- Kersten-Oertel et al. wrote a review about visualization in mixed reality image-guided surgery and specifically identified and discussed 87 articles which included the terms reality or virtuality in their titles [31].
- Marescaux et al. review the concept of hybrid image-guided minimally invasive therapies, which combine surgery, advanced endoscopy, and interventional radiology [14].
- Bernhardt et al.’s review of the status of augmented reality in laparoscopic surgery is especially relevant here because it covers many of the topics discussed in this thesis [15].

As the reviews listed above suggest, developing strategies for displaying imaging data to the surgeon is an area of active research [15]. There is a continuum of augmented reality strategies which includes: static video display, video see-through, optical see-through and projection onto patient. Static video display augmented reality involves adding computer graphics to a fixed monitor which shows a video of the surgical scene [32]. Most augmented reality for MIS uses this strategy to overlay virtual information onto the laparoscopic video displayed on the operating room monitor.
Video see-through involves a tracked PC tablet which is mobile, has an attached camera and simulates a physical transparency [33]. Optical see-through involves a half-silvered mirror placed in front of the scene or augmented reality glasses onto which the augmented data is projected [34]–[36]. On the opposite end of the augmented reality spectrum is the projection onto patient approach [37]. This is a good description because the reality is augmented by making the patient a screen. Projection onto patient is the form of augmented reality that is closest to the definition of augmented reality. Given that the focus of the thesis was to improve surgical navigation via augmented reality in laparoscopic surgery, research was done to improve both static video display and projection onto patient augmented reality. The former was of interest because it is the most commonly used augmented reality for laparoscopic surgery. The latter was of interest because projection onto patient from within the patient is entirely novel in the context of laparoscopic surgery and has the potential to address some of the shortcomings of the more commonly used static video display augmented reality. Ongoing research in static video display augmented reality (Section 1.1.4.3) and projection onto patient augmented reality (Section 1.1.4.4) is reviewed below. To give an overview of the large body of literature and to provide context for the proposed work, a selection of illustrative papers is described in the subsequent sections.

Although many claims have been made about augmented reality, the benefits and tradeoffs need to be acknowledged and explored in detail. More information via augmented reality does not always translate into better surgical outcomes. Inattentional blindness is a significant concern. For example, Dixon et al. showed that when surgeons performed an endoscopic navigation exercise on a cadaveric specimen the augmented reality view increased the accuracy of the surgeons but significantly reduced the rate at which the surgeons noticed a foreign body that was near the target [38]. A second important consideration when evaluating augmented reality systems is how effectively they provide a sense of depth perception to the user. A common problem with augmented reality is that, while it is commonly designed to show the user information about subsurface structures, the user often interprets the augmented part of the image to be floating on top of the surface in the field of view. To find the best augmented reality strategy for giving the user depth perception, Wang et al. recently evaluated five augmented reality visualization modes via a user study in which they used augmented reality to show the users the underlying tumour and blood vessels in kidney phantoms and in vivo porcine kidneys. The five modes they tested were called transparent overlay, virtual window, random-dot mask, transparent mask and the ghosting method. They found that the visualization mode with the best spatial perception measure and the one that was most preferred by their users was the transparent mask mode. In the transparent mask mode, the user selects a center point for the mask and a radius. The mask becomes fully transparent in the center and the transparency falls off linearly as a function of distance from the center [39]. At and beyond the mask radius, there is no transparency.
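The transparent mask falloff is simple enough to state exactly. The sketch below is a minimal Python/NumPy illustration of such a radial alpha blend; the function and variable names are assumptions made for illustration and are not taken from Wang et al.’s implementation.

```python
import numpy as np

def transparent_mask(surface, overlay, center, radius):
    """Blend a rendered subsurface overlay into the live surgical view.

    surface, overlay: (H, W, 3) float images (live video and rendered
    virtual anatomy). center: (row, col) chosen by the user.
    radius: distance in pixels at which the surface is fully opaque again.
    """
    h, w = surface.shape[:2]
    rows, cols = np.mgrid[0:h, 0:w]
    dist = np.hypot(rows - center[0], cols - center[1])
    # Surface opacity is 0 at the center (surface fully transparent, so
    # the overlay is visible) and rises linearly to 1 at the radius and
    # beyond (no transparency), matching the behaviour described above.
    alpha = np.clip(dist / radius, 0.0, 1.0)[..., None]
    return alpha * surface + (1.0 - alpha) * overlay
```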
Bichlmeier et al.’s strategy for improving depth perception was to make the transparency a function of the skin curvature and the observer’s line of sight [40].

Figure 6: Image from Wang et al.’s paper [39] showing rendering views of augmented reality visualization for blood vessels using (from left to right) transparent overlay, virtual window, random-dot mask, transparent mask and the ghosting method. © Displays, Elsevier

1.1.4.3 Static Video Display Augmented Reality

Static video display augmented reality involves adding computer graphics to a fixed monitor which shows a video of the surgical scene. This approach is particularly well-suited to laparoscopic surgery because the surgeon already looks at the surgical scene through a fixed monitor display. For liver surgery, Buchs et al. developed a surgical navigation tool which augments the da Vinci surgical robot surgeon display with a virtual model of the liver lesion and the surgical instrument. It also displays the relative distance between the tooltip and the tumour (Figure 7) [32]. The registration from preoperative medical image to intra-operative surgical scene was done by touching four hepatic landmarks, visible on the preoperative CT image, with the surgical instruments. While promising, this system also has some drawbacks. It does not account for the angle of the wrist of the da Vinci surgical instrument, which introduces errors of up to 10 mm when the wrist is at an angle relative to the main instrument. It relies on external tracking of both the laparoscope and the surgical instrument, which has a significant lever-arm effect, and it does not track the motion of the liver after the initial registration. This results in an offset between the real and augmented world that the user sees.

Figure 7: Screenshot of surgical console from image-guided surgery using the surgical navigation tool developed by Buchs et al. [32]. The liver tumour is shown in yellow and the surgical instrument shown in red. In theory, the red surgical instrument should be directly overlaid onto the real surgical instrument. However, due to tracking the endoscope and surgical instrument with an external tracker, there is a lever-arm effect. The lever arm runs the length of the tool and results in an offset between the real tool and the augmented reality display of the same tool. © Journal of Surgical Research, Elsevier

One approach to compensate for tumour movement during surgery is to update the tumour position by registering ultrasound images acquired during the operation to a 3D tumour model generated by scanning the tumour with ultrasound at the beginning of the surgery [41]. In another paper, Puerto-Souza et al. used anchor points and a correspondence-search method to track the movement of the kidney surface during a laparoscopic partial nephrectomy. They reported a reprojection accuracy of the tracked anchor points of less than 1 mm [42]. However, that does not account for the initial registration error between the endoscopic-video frame and the 3-D CT model. For that registration, they rely on the surgeon to manually align the two datasets and they assume no deformation between the endoscopic-video frames and the 3-D CT model. This assumption is optimistic given that it has been shown that kidneys can move as much as 46.5 mm and rotate 25 degrees between preoperative imaging and the operation itself [24].
This 46.5 mm shift is due to a combination of pressure from pneumoperitoneum and the change in patient position from supine to flank between the preoperative imaging and actual surgery. Simpfendörfer et al. addressed the issue of kidney movement before and during surgery by developing a system for laparoscopic partial nephrectomy (LPN) which includes intraoperative cone-beam computed tomography imaging of the kidney and radio-opaque markers which are visible in both the cone-beam CT and the laparoscopic camera field of view [43]. This allowed for automatic fusion of the segmented intraoperative CT image with the real-time fluoroscopy during the execution stage of the surgery.

Figure 8: Surgeon’s view using the cone-beam CT augmented reality system described in the work by Simpfendörfer et al. [43]. This is a picture of the surgeon’s view which includes the augmented reality video (upper left), the augmented reality fluoroscopy image (bottom left) and the conventional laparoscopic image [43]. © Journal of Surgical Research, Elsevier

It is noteworthy that Simpfendörfer et al. use a LUS to validate the resection plan that is made with their augmented reality system. Further, they argue that it is only possible to use intraoperative ultrasound navigation during the planning stage of the surgery.

1.1.4.4 3D Surface Reconstruction, Structured Light and Projection onto Patient Augmented Reality

Within the research community there has been sustained interest in developing guidance tools for minimally invasive surgery. An important criterion for many of these guidance tools is that they perform 3D surface reconstruction of the tissue quickly and accurately for registering preoperative images to the live surgical view [44], [45]. The challenge is to perform such registration in real-time in the presence of displaced and possibly deformed soft tissue surfaces. A detailed review of the five main approaches for 3D surface reconstruction in laparoscopic surgery has recently been published [46]. The five approaches are stereo endoscopy (requiring a stereo endoscope), monocular shape-from-X, Simultaneous Localization and Mapping (SLAM) from a moving camera, time-of-flight from a specialized illumination unit, and structured light. Each approach offers benefits and trade-offs.

The structured light approach for 3D surface reconstruction replaces a camera in the conventional stereovision system with an active device which projects a known coded pattern onto the scene. The known pattern is then identified in the captured image. Several research groups have made important contributions to the field of structured light in laparoscopic surgery. They have all demonstrated the capacity for mapping smooth and featureless organ surfaces quickly and accurately. Hayashibe et al. developed a laser-scan endoscope for real-time 3D shape intraoperative measurement and visualization. They solved the correspondence problem between the laser-scan endoscope and the endoscope by using an optical galvano scanner and a high-speed camera to create and detect a laser-beam strip that scanned the tissue surface [47]. Maurice et al. built a structured light vision system in a 10 mm diameter two-channel laparoscope and achieved satisfactory 3D reconstruction results at 25 images/s in an in vivo pig experiment. They designed a monochromatic subperfect map-based pattern and sped up the pattern decoding process by utilizing the known epipolar geometry of the laparoscope [48]. Reiter et al.
presented the Surgical Structured Light system which projects a pattern that is invisible to the surgeon. This is possible because the Surgical Structured Light system includes two off-the-shelf 10 mm laparoscopes, a narrow-band blue light emitting diode (LED) projector and a dichroic beam splitter [49]. Two other groups have used flexible probes for delivery of the structured light. One flexible probe used Single Shot Structured Light that was delivered via a sensor head with a diameter of 3.6 mm which contained a catadioptric camera and a pattern projection unit [50]. The second flexible probe was a 1.7 mm multi-spectral fiber-based structured light probe [51] that can fit in the biopsy channel of an endoscope and project a constant pattern of 127 identifiable coloured spots.

In addition to 3D surface reconstruction, projectors can be used for projection onto patient augmented reality. The projection onto patient strategy aims to help overcome the challenge of natural depiction in augmented reality [52]. A large projector for interventional radiology [53], a handheld projector for open abdominal surgery [54] and a handheld projector for showing suggested incision points in neurosurgery [55] already exist.

1.1.4.5 Adapting to Tissue Deformation in Laparoscopic Surgery

Any augmented reality system has to be able to account for significant organ shift and deformation during surgery [56]. One way to address that is through nonrigid registration and intraoperative organ tracking systems [57], [58]. While it is difficult to track the low-texture kidney surface, researchers have made significant progress in this area. For example, Yip et al. showed that by using a combination of the STAR feature detector and binary robust independent elementary features they could track natural features on an in vitro kidney surface to an accuracy of 2 mm in an eight-second video [58]. Collins et al. used a different approach of densely matching tissue texture at the pixel level and were able to track in vivo kidney texture to within 2 pixels over a period of approximately 60 seconds [59]. However, these researchers did not attempt the more challenging task of tracking, let alone modelling the deformation of, the kidney during the execution stage of the surgery. During the execution stage, when the surgeon is cutting out the tumour, new tissue is exposed as the surgery progresses and there is significant deformation. Modelling the deformation of a kidney during an incision is difficult. Altamar et al. placed optical markers on a perfused ex vivo kidney and made an incision into the kidney with a tracked scalpel. The actual deformation after a single incision was 3.2 mm compared to 6.7 mm predicted by the anisotropic biomechanical model [60]. That error would likely be magnified for the second and third incision, and so forth. Thus, adding natural feature tracking and biomechanical modelling error on top of tumour image or tumour registration acquisition errors would push the total system error to more than 5 mm, the stated goal of this thesis. Beyond this threshold it is not practical to use augmented reality guidance for the execution and dissection stage of the operation.

Several researchers have recognized that an effective shortcut for robust intra-operative tracking is fiducial-based tracking.
Examples of fiducial-based tracking include fluorescent markers that are placed on the surface of the organ to guide a 2D/3D intra-operative registration algorithm [61] and radiopaque needle-shaped fiducials for preoperative to intra-operative image registration [43]. In both cases, fiducial-based tracking is used because fiducials are much easier to robustly and accurately track than the natural features on the kidney. The spherical fluorescent and radio-opaque markers are tracked as single points. Thus, for effective modeling of the surface, at least four of them must be spread widely across the kidney. They can only provide guidance during the planning stage of the surgery since there is no easy way of knowing which of the markers are staying behind on the surface of the kidney and which are on the specimen that is being excised and removed.

1.2 Putting the Research of this Thesis in Context

The following sections show how the research in this thesis relates to existing research (section 1.2.1), explain why 5 mm is the goal set for surgical navigation system accuracy (section 1.2.2) and introduce the key error metrics used in this thesis (section 1.2.3).

1.2.1 Augmented Reality and Image-Guided Laparoscopic Surgery

In the previous sections (sections 1.1.4.1, 1.1.4.2, 1.1.4.3, 1.1.4.4 and 1.1.4.5) a review of existing research in augmented reality and image-guided surgery is presented. The sections below explain how the research in this thesis relates to and builds on the existing research that was just described.

1.2.1.1 Image-Guided Surgery and Laparoscopic Surgery

The pick-up LUS probe described in 1.1.4.1 and shown in Figure 5 was used extensively for the research outlined in this thesis. It was part of the experiments described in Chapter 2, Chapter 3 and Chapter 5. It is timely to be doing research in LUS because in a survey of European urologists who perform RALPN, the majority answered that they use LUS and 86% expect that augmented reality during RALPN will be useful in the future [62].

1.2.1.2 Augmented Reality in Image-Guided Surgery

In section 1.1.4.2 the strategies for providing depth cues through augmented reality were discussed. In Chapter 3 of this thesis a novel strategy for showing depth cues is presented in which the surgeon is shown rendered orthogonal perspectives of their surgical instruments and the underlying tumour. In Chapter 5 it is proposed that the surgeon can be given a depth cue by simultaneously projecting the orthogonal and projective perspectives of the same tumour onto the surface. The relative size of those two projections can be used as a depth cue.

1.2.1.3 Static Video Display Augmented Reality

Several existing static video display augmented reality systems for MIS are described in section 1.1.4.3. In Chapter 3 of this thesis a novel system for static video display augmented reality for MIS is described. One of the key aspects of the system is that a surgical navigation aid called the DART is introduced which makes it possible to accurately display intraoperative ultrasound navigation information for the entire surgery.

1.2.1.4 3D Surface Reconstruction, Structured Light and Projection onto Patient Augmented Reality

There are many surface reconstruction and projection onto patient augmented reality systems for open surgery and biopsy guidance (section 1.1.4.4). However, as of 2014, the limitation of projection onto patient augmented reality was that it was not available for MIS.
To our knowledge, no one had proposed or explored projection-onto-patient augmented reality for MIS. Thus, Chapter 4 describes the design and construction of the Pico Lantern, a small projector for MIS. It was apparent that to fully leverage the advantages of projection onto patient in MIS it was necessary to build the Pico Lantern. Secondly, since projection onto patient AR often requires the 3D surface reconstruction map to pre-distort projection images, it was necessary to invent a new technique for surface reconstruction, given that the projector would not be fixed relative to the camera like most projector-camera pairs. The Pico Lantern is a core aspect of the research in this thesis. Describing the Pico Lantern, characterizing its technical specifications and doing a high-level exploration of its potential applications is the focus of Chapter 4. Chapter 5 focuses on the incorporation of the Pico Lantern into the PARIS, and the performance of the PARIS as a surgical navigation aid in laparoscopic partial nephrectomy is tested with user studies. Chapter 4 also describes a novel strategy for surface reconstruction with the Pico Lantern. The Pico Lantern was built to make surface reconstruction and projection onto patient available in laparoscopic surgery. As described in the previous paragraphs, a significant amount of work has already been done on projection onto patient technology [52], [53], [55]. However, all of that research has been done in the context of biopsy guidance or open surgery. The Pico Lantern is designed to be small enough for laparoscopic surgery, so it allows the previous research to be applied in the fast-growing field of laparoscopic image-guided surgery.

1.2.1.5 Adapting to Tissue Deformation in Laparoscopic Surgery

In section 1.1.4.5 the challenge of using natural features to track movement and deformation of organs during surgery is explored in detail. Furthermore, in that section it is noted that some researchers side-stepped the difficulty of natural feature tracking by using fiducial-based tracking instead. The use of the Dynamic Augmented Reality Tracker (DART) is described in both Chapter 3 and Chapter 5. The DART provides another example of fiducial-based tracking. Unlike the previously described fiducials ([61], [43]) that are tracked in 3 DOF, the DART is tracked in all 6 DOF. Furthermore, only one DART is required for the operation; since the DART is placed immediately above the kidney tumour it is used to track the tumour location during the execution and dissection stage of the surgery.

Assistance during the execution and dissection of the operation has been identified as one of two stages where AR offers a potential clinical advantage [63]. The DART is developed with the intention of facilitating accurate dissection during tumour resection to ensure both a negative surgical margin and a maximally nephron-sparing operation. The DART is meant to be inserted into the patient directly above the kidney tumour. In a concept similar to the frameless navigation technique described in section 1.1.3.2, the DART is tracked via computer vision techniques. Its coordinate system becomes the tumour-centric coordinate system relative to which the ultrasound transducer images and surgical instruments are tracked. This allows for persistence of the ultrasound scan information even after the ultrasound transducer is removed and throughout the entire surgery.
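This tumour-centric bookkeeping can be illustrated with a short sketch of the transform chain. Assuming 4 × 4 homogeneous transforms obtained from calibration and visual tracking (the names below are hypothetical and are not identifiers from the ARUNS or PARIS code), a tumour point segmented in an ultrasound image is stored in DART coordinates at scan time and later re-rendered using only the current DART pose:

```python
import numpy as np

def homogeneous(p):
    """Append 1 to a 3-vector so that 4x4 transforms can be applied."""
    return np.append(np.asarray(p, dtype=float), 1.0)

# At scan time:
#   T_cam_us:   ultrasound image -> camera (ultrasound calibration + tracking)
#   T_cam_dart: DART -> camera (visual tracking of the DART)
def store_in_dart_frame(p_us, T_cam_us, T_cam_dart):
    p_cam = T_cam_us @ homogeneous(p_us)
    return np.linalg.inv(T_cam_dart) @ p_cam     # tumour-centric coordinates

# Later, with the transducer removed, the stored point follows the kidney:
def current_camera_position(p_dart, T_cam_dart_now):
    return (T_cam_dart_now @ p_dart)[:3]
```

Because the stored point is expressed relative to the DART, any motion of the kidney that the DART follows is automatically compensated; what remains to be verified is the assumption that the DART moves rigidly with the tumour, which is the subject of the tests described next.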
To test the assumption that the DART stays fixed relative to the tumour, a combination of FEM analysis and in vivo porcine experiments was used to characterize the performance of the DART. The DART is an integral part of the ARUNS, presented in Chapter 3, and the PARIS, presented in Chapter 5. The ultimate goal is to replace the DART with robust, real-time, dense and deformable 3D tracking of natural features on the organ surface [59]. However, the DART is a good intermediate step that allows for the exploration of important challenges in augmented reality in laparoscopic surgery, without having to develop or implement a robust natural feature tracking algorithm. In the medium term, the DART eliminates the need to track natural features and run a biomechanical model during the dissection.

1.2.2 Setting a Goal of 5 mm for Accuracy in Augmented Reality

The goal in this thesis is to build surgical navigation tools that are accurate to within 5 mm. The accuracy of augmented reality ultrasound, the ARUNS and the PARIS is objectively measured. However, determining whether or not the system accuracy is “good enough” is subjective. The accuracy target adopted for this thesis is a total system error of < 5 mm. This 5 mm goal is arrived at by taking into consideration both the complexity of the ARUNS and the PARIS systems as well as the clinical guidelines for the partial nephrectomy surgery for which the ARUNS and the PARIS were built. The clinical guideline for small renal masses (< 4 cm) is a partial nephrectomy, where the entire tumour is removed while preserving the maximum amount of kidney tissue [5]. The generally accepted guideline is that a 5 mm negative margin of healthy parenchyma should be left around the entire tumour [8]. As long as the surgeon is aiming for a 5 mm margin and the surgical navigation system is accurate to within 5 mm, the tumour should be successfully removed with negative margins on all sides. It has been shown that as long as there are negative margins around the tumour, the size of those negative margins does not affect the cancer recurrence rate [8]. Since improved health outcomes are attributed to the preservation of the kidney tissue [8], the overarching goal for building the ARUNS and the PARIS systems is to remove all of the cancer tumour while minimizing the amount of healthy tissue removed. Margin status and healthy tissue excised are metrics used to evaluate the success of the simulated laparoscopic partial nephrectomy user studies in Chapter 3 and Chapter 5.

1.2.3 Overview of Error Metrics in Thesis

The previous section (section 1.2.2) explained the rationale behind setting a goal of 5 mm for the accuracy of the ARUNS and PARIS. This section is a comprehensive list of the error metrics that will be presented in this thesis. It is important to understand the meaning of each error metric so that the various errors can be understood in the context of an overall system and the goal of 5 mm accuracy.

In Chapter 2, point reconstruction accuracy and precision with ultrasound are reported as measures of the quality of the ultrasound calibration. The laparoscopic ultrasound transducer is moved to various poses for the purpose of imaging a pinhead in a water bath. The pinhead is the point that is reconstructed. To estimate the pinhead’s location in the camera coordinate system it is segmented from each ultrasound image and its location is transformed to the camera coordinate system.
Its actual location is determined by stereo triangulation of the pinhead location after draining the fluid medium. Point reconstruction accuracy is the Euclidean distance from the average of the estimated pinhead locations to the actual pinhead location. In Chapter 2 point reconstruction precision is defined as the average Euclidean distance from each estimated pinhead location to the centroid of those points. In Chapter 3 and Chapter 5 point reconstruction precision is defined as the root mean square distance from each estimated pinhead location to the centroid of those points.

Figure 9: Diagram to illustrate point reconstruction accuracy and precision. The ultrasound transducer is on top of the water bath (dark blue line) and is imaging the pinhead in the water bath. The double-ended red arrow shows the ultrasound calibration estimate of the physical relationship between the ultrasound linear array and the optical fiducial. The black rectangle is the AR ultrasound overlay. The double-ended blue arrow points at the estimated pinhead location (white dot in ultrasound image) and the actual pinhead location (blue pinhead).

In Chapter 3 the following error metrics are reported: point reconstruction precision with the ultrasound, da Vinci kinematics instrument tracking (dVKIT) error, and total system error. Point reconstruction precision is defined earlier in this section. The dVKIT error is a measure of the difference between tracking a point with the camera and with da Vinci kinematic tracking. As illustrated in Figure 10, it is the Euclidean distance between the locations of the same point (red arrow) as determined by camera tracking and by da Vinci kinematic tracking in the camera coordinate system. In practice, this manifests itself as an offset between the actual instrument and the graphical overlay of the instrument. This is shown as the blue arrow in Figure 10.

Figure 10: Pictures to illustrate the meaning of the da Vinci kinematics instrument tracking (dVKIT) error. Images 1 and 2 (left and center) show a point that is localized via computer vision tracking and da Vinci kinematics. Image 2 shows a da Vinci kinematic instrument that is touching the point of interest. Image 3 (right) shows how the dVKIT error manifests itself as an offset between the instrument and the graphical rendering of the instrument, which is done as a yellow cone.

The total system error is affected by the ultrasound calibration, the camera calibration and the dVKIT error. Total system error is a measure of the accuracy of the tool-to-tumour distance that is reported by the surgical navigation system in Chapter 3. It is the difference between the location of a point as determined via ultrasound imaging and optical tracking (step 1) and via the da Vinci kinematics (step 2).

Figure 11: Diagram to illustrate the process used to calculate total system error in Chapter 3.

In Chapter 5 reprojection error is introduced. The reprojection error is the distance between the detected origin of the DART and its transformed equivalent as projected onto the scene by a projector. This captures error in the tracking of the two KeyDots® and in the laparoscope and projector calibration models. It is important to note that the da Vinci kinematics instrument tracking (dVKIT) error and total system error that are described above are not relevant in Chapter 5 because there is no da Vinci kinematic tracking in Chapter 5.
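The point-based metrics defined above are straightforward to compute. The following minimal NumPy sketch (an illustration, not code from the thesis experiments) evaluates point reconstruction accuracy along with both the Chapter 2 and the Chapter 3/Chapter 5 definitions of precision from a set of estimated pinhead locations:

```python
import numpy as np

def point_reconstruction_metrics(estimates, actual):
    """estimates: (N, 3) pinhead locations estimated via tracked ultrasound;
    actual: (3,) pinhead location from stereo triangulation."""
    centroid = estimates.mean(axis=0)
    accuracy = np.linalg.norm(centroid - actual)   # centroid-to-truth distance
    dists = np.linalg.norm(estimates - centroid, axis=1)
    precision_mean = dists.mean()                  # Chapter 2 definition
    precision_rms = np.sqrt((dists ** 2).mean())   # Chapters 3 and 5 definition
    return accuracy, precision_mean, precision_rms
```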
To calculate an equivalent total system error in Chapter 5, the point reconstruction precision with ultrasound and the reprojection error must be combined.

1.3 Objectives

The long-term goal for the research in this thesis is to contribute to the development of the surgical navigation systems of the operating room of the future. Surgical navigation systems should increase the surgeon’s spatial understanding of the underlying tissue and allow surgeons to be more accurate in their surgical excisions so they can spare as much healthy tissue as possible. Further, better surgical navigation tools will allow more complex cases to be done as MIS instead of open surgery. To make progress towards that goal, the objectives of this thesis are to:

• Objective 1: Create and test the Augmented Reality Ultrasound Navigation System (ARUNS) for laparoscopic surgery
o Measure the accuracy of a novel ultrasound calibration technique
o Measure the total system accuracy of the ARUNS
o Test the hypothesis that the DART can reduce the error of tracking the underlying kidney cancer tumour to less than 1 mm
o Run a feasibility study of the ARUNS with a surgeon to learn about AR visualization strategies
• Objective 2: Create and test the Projector-based Augmented Reality Intracorporeal System (PARIS) for laparoscopic surgery
o Note that the PARIS includes the Pico Lantern, a pick-up projector which is a source of structured light for 3D surface reconstruction and augmented reality in laparoscopic surgery
o Measure the accuracy of the Pico Lantern surface reconstruction
o Measure the Pico Lantern point reprojection error
• Objective 3: Test the hypothesis that the PARIS improves tumour resection accuracy, reduces the amount of healthy tissue excised and improves the surgeon’s spatial understanding of underlying anatomy.

The novel aspect of objective 1 is that the ARUNS provides surgical navigation during the execution stage of the surgery. The novel aspects of objective 2 lie in the Pico Lantern surface reconstruction technique and the fact that the Pico Lantern is the first projector designed for MIS which projects onto the patient from within the patient’s body.

For successful completion of objective 1 it is necessary to have accurate ultrasound calibration and effective visualization strategies, and to account for tissue movement and deformation. Thus, a review of existing ultrasound calibration techniques is undertaken and a novel approach to ultrasound calibration, specifically designed for laparoscopic surgery, is proposed and tested. By reducing the ultrasound calibration error, the whole augmented reality system becomes more effective and useful to the surgeon. The second aim of objective 1 is to explore different strategies for visualization of intraoperative ultrasound information and to leverage the robotic kinematics during the execution part of the surgery. Finally, the challenges of tissue deformation and tissue tracking are addressed.

The motivation for objective 2 came from two sources. Firstly, conversations with surgeons revealed that they would like to be able to do intra-operative imaging and see a display of subsurface blood vessels and tumours. Secondly, a review of the literature showed that during laparoscopic surgery surface reconstruction on the relatively textureless surface of the kidney is difficult.

1.4 Contributions

The work in this thesis is intended to improve computer-assisted and image-guided surgery.
In the course of achieving the thesis objectives listed in section 1.3 the following contributions were made:

• Ultrasound calibration: Developed a wide baseline camera ultrasound N-wire calibration for LUS transducers which resulted in an improvement of point reconstruction accuracy from 3.1 mm to 1.3 mm.
• Static video augmented reality display: In this thesis, the specific combination of the pick-up ultrasound transducer, augmented reality ultrasound and the da Vinci surgical robot was combined to make the ARUNS. The total system error of the ARUNS was 5.1 mm and the surgeon in the user study reported that the most useful guidance cue was a stoplight warning system that alerted him if he came too close to the tumour.
• Tissue tracking and tissue deformation: Developed and tested the Dynamic Augmented Reality Tracker (DART), which is inserted into a kidney during surgery for the purpose of tracking the kidney and underlying tumour. It was determined via FEM simulation that the DART-enabled kidney tumour tracking is accurate to within 1 mm.
• Structured light in surgery: Invented and built the Pico Lantern, a miniature pick-up projector for laparoscopic surgery. It is a novel source of structured light that is dropped into the abdominal cavity through a laparoscopic surgical port and picked up by the surgeon to project patterned light inside the body during MIS.
• Surface reconstruction in surgery: Developed a novel approach for surface reconstruction in laparoscopic surgery. The key innovation is to use a source of structured light that is small enough to be placed and dynamically tracked in the field of view of the laparoscope. This enables accurate and flexible surface reconstruction due to the wide baseline between the camera and projector (patent application pending). The absolute error for surface reconstruction of a plane, cylinder and kidney was 0.8, 0.3 and 1.5 mm, respectively. The surface reconstruction works on surfaces of all textures.
• Intra-operative blood vessel detection: Proposed and tested a novel use of structured light for detecting pulsing blood vessels.
• Projection onto patient augmented reality: Explored novel approaches to augmented reality in laparoscopic surgery by developing the Projector-based Augmented Reality Intracorporeal System (PARIS). The PARIS is a surgical navigation system for laparoscopic surgery which projects the location of blood vessels and kidney tumours onto the surgical scene. The PARIS has a point reprojection accuracy of 0.8 mm and when it was used by surgeons in a surgical user study it led to a statistically significant reduction in healthy tissue excised.

1.5 Thesis Outline

This thesis covers the background literature related to augmented reality ultrasound and structured light in laparoscopic surgery as well as proposed systems in those areas of research and tests to validate those systems. Figure 12 is a pictographic summary of the thesis. A chapter-by-chapter outline of the thesis follows:

Chapter 2 - Calibration and stereo tracking of a laparoscopic ultrasound transducer for augmented reality in surgery

This chapter introduces laparoscopic ultrasound and explains how it is useful in minimally invasive surgery. The utility of laparoscopic ultrasound is further extended via stereo tracking of the ultrasound transducer for the purpose of creating augmented reality ultrasound images. Finally, wide baseline camera ultrasound calibration is introduced and shown to improve the accuracy of ultrasound imaging of a pinhead in 3D space.
Chapter 3 - Augmented Reality Imaging for Robot-Assisted Partial Nephrectomy Surgery

Just like in Chapter 2, developing tools to make laparoscopic ultrasound more effective in surgery is an important focus of this chapter. Here, the focus is on making the ultrasound imaging useful for the entire surgery and testing the system in a surgeon user study. Instead of showing the surgeon a direct augmented reality display of the 2-dimensional ultrasound image that disappears as soon as the surgeon finishes using the ultrasound, in this chapter the ultrasound image is segmented and relevant 3D models of important subsurface anatomy are displayed to the surgeon. To accurately track and show the subsurface anatomy, a novel surgical navigation marker called the Dynamic Augmented Reality Tracker (DART) is developed. The DART and the novel intra-operative ARUNS for robot-assisted minimally invasive surgery are tested by a surgeon in a simulated laparoscopic partial nephrectomy (LPN) procedure. This chapter helped to inform the development of the Pico Lantern and the PARIS as well as the future development plan for the ARUNS.

Chapter 4 - Pico Lantern: Surface Reconstruction and Augmented Reality in Laparoscopic Surgery Using a Pick-Up Laser Projector

This chapter continues to build on the themes of intra-operative imaging and augmented reality image-guided surgery, first outlined in chapters 2 and 3. This chapter describes the Pico Lantern, a miniature projector developed for structured light surface reconstruction, augmented reality and guidance in laparoscopic surgery. During surgery it is dropped into the patient and picked up by a laparoscopic tool. While inside the patient it projects a known coded pattern and images onto the surface of the tissue. The Pico Lantern is visually tracked in the laparoscope’s field of view for the purpose of stereo triangulation between it and the laparoscope. The Pico Lantern was developed to perform accurate surface reconstruction of organs with smooth textureless surfaces and to explore projector-based augmented reality. One of the challenges of computer graphics-based augmented reality is that subsurface objects, when rendered, are sometimes perceived by the user to be floating above the surface. The hope was that the projected image would blend naturally with the organ surface to make the augmented reality scene more intuitive to interpret. The accuracy of the Pico Lantern surface reconstruction is evaluated and a proof-of-concept test done on a human volunteer shows that the pulsatile motion of the tissue overlying a major blood vessel can be detected and displayed in vivo.

Chapter 5 - Follow the light: Projector-based Augmented Reality for Intraoperative Surgical Planning in Minimally Invasive Surgery

This chapter presents a fully-integrated and functional surgical navigation system called the PARIS. The Pico Lantern concept from chapter 4 is significantly extended and it is shown how structured light can be used during laparoscopic surgery to display information from intraoperative ultrasound. It is confirmed that the ultrasound transducer, DART and Pico Lantern all fit within the space available in laparoscopic surgery and are tracked within the field of view of the camera. Furthermore, the system is shown to have a total reprojection error of 0.8 mm RMS.
A user study with two surgeons who conducted 32 simulated laparoscopic partial nephrectomies on kidney phantoms shows that the system resulted in a statistically significant reduction in healthy tissue removed.

Chapter 6 - Conclusion and Future Work

This chapter summarizes the key findings of the thesis and presents several avenues for future work.

Figure 12: Pictographic outline of thesis. For each chapter several pictures are shown that represent the key concepts in those chapters.

Chapter 2 - Calibration and Stereo Tracking of a Laparoscopic Ultrasound Transducer for Augmented Reality in Surgery

2.1 Introduction

As described in detail in section 1.1.2.1, minimally invasive surgery (MIS) offers significant advantages compared to open surgery. For example, incisions are smaller and post-operative recovery time is shorter. However, MIS procedures have disadvantages including: limited view of the surgical field, poor depth perception and reduced surgical dexterity and haptic feedback. Stereo laparoscopes and laparoscopic ultrasound (LUS) are two technologies that promise to overcome some of these disadvantages by improving visualization of subsurface anatomical features. Both technologies were introduced in detail in the introduction in section 1.1.2.1 and section 1.1.4.1 respectively. With regards to stereo laparoscopy, there is growing interest in its use for standard laparoscopy and for tracking tools and instruments as part of an augmented reality system [64].

To improve the accessibility and ease of interpretation of LUS, several research groups have developed augmented reality LUS systems by tracking the position of a LUS transducer. Offline ultrasound calibration must be performed to determine the transformation from the ultrasound image coordinate system to the LUS transducer marker coordinate system. During surgery, the accuracy of the tracking of the LUS transducer is critical and determines the overall accuracy of the augmented reality LUS system. Tracking of the LUS transducer has been achieved by robotic kinematics [65], optical tracking [66][67], electromagnetic tracking [68], and a combination of optical tracking and electromagnetic tracking [69]. An external base coordinate system, which must be used for tracking with robot kinematics, electromagnetic tracking and external optical tracking, makes tracking susceptible to error amplification due to the lever-arm effect between base and tool tip. Maximizing the calibration accuracy is critical to these augmented reality systems.

In this chapter we propose an augmented reality LUS system using a recently developed pick-up LUS transducer [29] and stereo laparoscopy. Pratt et al. developed a similar augmented reality LUS system for mono laparoscopy and a pick-up LUS transducer [67]. They used the laparoscope to track the LUS transducer and eliminated the need for an external base coordinate system. This optical tracking of the LUS transducer offers the potential of higher accuracy due to a reduced lever-arm effect and a direct transformation from the ultrasound image to the camera via visible markers on the LUS transducer [67]. Our proposed augmented reality LUS system also uses optical tracking and eliminates the external base coordinate system. Furthermore, we address the problem that stereo laparoscopes have a narrow baseline (camera spacing of about 5 mm) which results in narrow triangulation and poor accuracy of stereo laparoscope augmented reality systems [70].
Our primary innovation is to use different stereo cameras for ultrasound calibration and LUS transducer tracking. We use a 75 mm baseline stereo camera for ultrasound calibration and an inherently narrow baseline stereo laparoscope for LUS tracking. For both ultrasound calibration and LUS tracking we track the same LUS optical fiducials and use the same tracking method. This approach aims to reduce the ultrasound calibration error. We measure accuracy by using the tracked LUS to estimate the location of a pinhead of known location in the camera coordinate system. To our knowledge, Leven et al. [66] proposed, but did not report, results for direct optical tracking of a LUS with a stereo laparoscope, so as of 2013 this was the first such report. A second aspect of this project is to characterize the accuracy of an augmented reality LUS system as a function of a changing camera focus. We do this to understand the consequences of a surgeon changing the focus of the stereo laparoscope during surgery to optimize the view of the surgical field [71]. In short, the objective and novelty of this chapter is to show how the size of the camera baseline during ultrasound calibration affects the error of an augmented reality LUS system. Our hypothesis is that the wider the baseline during the ultrasound calibration stage, the better the accuracy of the augmented reality LUS.

2.2 Methods

This section describes the apparatus that was used, the calibration and tracking methods, and the experiments. We compared the combination of a wide baseline calibration and narrow baseline tracking (our proposal) to a combination of narrow baseline calibration and narrow baseline tracking (the standard approach of using the same sensor for calibration and tracking). Accuracy and precision of the two proposed augmented reality LUS systems are reported. Henceforth, the stereo laparoscope will be referred to as a narrow baseline camera.

2.2.1 Apparatus, Calibration and Tracking

We used a SonixTouch ultrasound machine (Analogic Corporation, Peabody, Massachusetts, USA) with a 10 MHz LUS transducer (28 mm linear array) [29]. The LUS transducer was designed to take advantage of the dexterity of the da Vinci tools. It can be picked up with the da Vinci ProGrasp™ tool and be moved in 6 DOF. Furthermore, the surgeon at the da Vinci console controls the movement of the LUS transducer, which allows the surgeon's natural hand-eye coordination to aid interpretation of the 3D anatomy from a set of 2D cross-sectional images. All ultrasound images were taken at an ultrasound image depth of 20 mm. All camera images (stereo camera calibration, ultrasound calibration and validation experiments) were taken simultaneously with the two camera systems, allowing for a more controlled comparison of the accuracies of the respective camera combinations. The narrow baseline camera is a wide angle da Vinci stereo laparoscope from the da Vinci Surgical System (Standard model). It has a narrow baseline of 5 mm and a resolution of 720 × 486 pixels. The wide baseline camera system has a baseline of 75 mm and consists of two Flea2 cameras (Point Grey Research, Richmond, Canada) with a resolution of 1280 × 960 pixels. It has previously been observed that a similar difference in camera resolution did not have a significant effect on camera calibration results [67], so the important difference is the baseline.
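The benefit of a wider baseline can be seen from the standard depth-from-disparity model, Z = fB/d, whose first-order depth uncertainty is δZ ≈ (Z²/(fB))·δd. The sketch below evaluates this textbook model; the focal length, range and disparity noise are assumed values chosen only for illustration, and at equal range and focal length the uncertainty ratio reduces to the inverse ratio of the baselines (75/5 = 15).

```python
def depth_sigma(z_mm, f_px, baseline_mm, disp_sigma_px=0.25):
    """First-order depth uncertainty of depth-from-disparity (Z = f*B/d):
    dZ = (Z**2 / (f * B)) * dd, with disparity noise dd in pixels."""
    return (z_mm ** 2 / (f_px * baseline_mm)) * disp_sigma_px

# Assumed working values: 150 mm range, 1000 px focal length, 0.25 px noise.
narrow = depth_sigma(150.0, 1000.0, 5.0)   # 5 mm stereo laparoscope baseline
wide = depth_sigma(150.0, 1000.0, 75.0)    # 75 mm Flea2 pair baseline
print(narrow, wide, narrow / wide)         # ratio is 15, independent of Z and f
```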
The calculation of the intrinsic and extrinsic camera parameters and lens distortion coefficients was done with the Caltech Camera Calibration toolbox [72] using 20 images of unique poses of an 8 × 10 checkerboard with 5 mm squares.

To define the LUS transducer marker coordinate system we used a similar approach to Pratt et al. [67] in which a small checkerboard is mounted onto the LUS. We placed a 6 × 2 and a 7 × 2 checkerboard with 3.175 mm squares on the two flat (9 mm × 27 mm) surfaces on each side of the LUS transducer (Figure 13). Our checkerboard is made of surgical identification tape (Key Surgical Inc., Eden Prairie, Minnesota, USA) which is approved for internal human use and repeated sterilization cycles and is designed to be semi-permanently attached to surgical instruments. Using a camera to track an ultrasound transducer for construction of 3D ultrasound images has been done previously [73].

Figure 13: Picture showing the da Vinci ProGrasp™ tool holding the "pick-up" LUS transducer which has checkerboard markers on it. Right: Same picture as left with addition of a 3D coordinate system overlay showing the axes of the LUS transducer marker coordinate system (T). The z axis and the normal of the ultrasound imaging plane are almost parallel.

We used the triple N-wire ultrasound calibration technique [74]. The triple N-wire phantom was precisely manufactured with the Objet30 desktop 3D printer (Objet Inc., Billerica, Massachusetts, USA) which has 28 micrometer precision. For defining the location of the N-wires in the coordinate system of the phantom we used an Optotrak® Certus optical tracker (Northern Digital Inc., Waterloo, Ontario, Canada) to track four NDI markers on our phantom and an NDI tracked stylus that was used to select the 18 N-wire holes. An Optotrak® is not strictly required for this step; we could have used the known geometry of our CAD model to calculate the same geometric relationships. The phantom bath was filled with distilled water and 9% by volume glycerol [75] to achieve a sound speed of 1540 m/s to match the sound speed expected by the internal ultrasound image formation process.

For the ultrasound calibration and tracking experiments the LUS transducer was placed at a distance of 100 mm from the narrow baseline camera and 150 mm from the wide baseline camera. Figure 14 includes a picture of the experimental setup (left) and a diagram of the four coordinate systems. The coordinate systems are: #1) ultrasound image coordinate system (U), #2) pick-up LUS transducer marker coordinate system (P), #3) camera coordinate system (C) and #4) phantom coordinate system (Ph). The camera coordinate system (C) represents either the coordinate system of the wide baseline or narrow baseline camera.

Figure 14: Two pictures of the experimental setup. Left: The wide baseline and narrow baseline (stereo laparoscope) cameras are in the foreground and the pick-up LUS transducer and triple N-wire phantom are in the background. Right: The LUS transducer, held by the da Vinci ProGrasp™ tool, is directly above the N-wires. The phantom optical fiducials are in the background. The four experimental coordinate systems (U, P, C and Ph) and the transformations between them (PTU, CTP, CTPh) are shown.

Equation 1 shows the transformation from the ultrasound image coordinate system (x, y with units of mm) to the camera coordinate system (a, b, c with units of mm).
The ultrasound calibration matrix, i.e. the fixed 6 DOF transformation from the ultrasound image to the pick-up LUS transducer marker coordinate system (PTU), is the part of that equation that is determined offline prior to LUS imaging during surgery. The transformation from the pick-up LUS transducer marker coordinate system to the camera coordinate system (CTP) is solved by using a corresponding point algorithm between the known location of the 21 saddle points on the transducer checkerboard in the transducer coordinate system and the camera coordinates of those same saddle points as determined by a Harris corner detector and stereo-triangulation [76]. The transformation from the phantom to the camera (CTPh) is solved in the same way except the points are the four centers of the NDI markers and their locations in the camera images are selected manually.

$$\begin{bmatrix} a \\ b \\ c \\ 1 \end{bmatrix}_{C} = {}^{C}T_{P}\; {}^{P}T_{U} \begin{bmatrix} x \\ y \\ 0 \\ 1 \end{bmatrix}_{U} \qquad \text{Equation 1}$$

For each LUS image of the N-wire phantom, the locations in the phantom coordinate system where the wires intersect the ultrasound imaging plane (d, e, f) are calculated by selecting the wires as they appear in the ultrasound image and using the distance between the points and the known geometry of the N-wire phantom. The ultrasound calibration matrix (PTU) is solved by using a corresponding point algorithm [76] between the N-wire points (d, e, f), transformed from the phantom to the pick-up LUS transducer marker coordinate system (see Equation 2), and the same N-wire points (x, y) transformed from the ultrasound image coordinate system to the pick-up LUS transducer marker coordinate system. The selection of the wires in the ultrasound image is done via a semi-automatic algorithm which finds the location of each wire by finding the centroid of the ultrasound image pixels associated with each wire.

$${}^{P}T_{C}\; {}^{C}T_{Ph} \begin{bmatrix} d \\ e \\ f \\ 1 \end{bmatrix}_{Ph} = {}^{P}T_{U} \begin{bmatrix} x \\ y \\ 0 \\ 1 \end{bmatrix}_{U} \qquad \text{Equation 2}$$

In total, 30 LUS transducer poses were captured for calibration. The 30 poses were randomly assigned to ten groups of 10, ten groups of 15 and one group of 30 and the ultrasound calibration matrix for each group was calculated. During ultrasound calibration, the LUS transducer covered an approximately uniform range within a 5 × 5 × 20 mm cuboid and Euler angles of 23°, 11°, and 23° about the x, y and z axes of the LUS transducer marker coordinate system of the first LUS transducer pose (Figure 13). In summary, we built our experimental apparatus so we could compare the combination of a wide baseline camera for ultrasound calibration and a narrow baseline camera for tracking to the combination of a narrow baseline camera for both ultrasound calibration and tracking.

2.3 Experiments

2.3.1 Point Reconstruction Accuracy and Precision

We did point reconstruction with the ultrasound to determine the accuracy and precision of the ultrasound calibration. Point reconstruction accuracy was determined by taking 22 ultrasound images of a pinhead in a water bath. For all 22 ultrasound images, the camera and pinhead were held in a fixed position while the LUS transducer was moved to different poses. Each LUS transducer pose was chosen such that the LUS was in the field of view of both the wide baseline Flea2 stereo cameras and the narrow baseline stereo laparoscope. Additionally, each LUS transducer pose was chosen such that the pinhead was in the ultrasound image. The pinhead was easily identifiable as a bright reflection and was manually segmented.
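As a concrete illustration of how Equations 1 and 2 are used, the sketch below solves the calibration matrix PTU from corresponding N-wire points with an SVD-based corresponding-point (Horn-style) solve, and then maps a segmented ultrasound pixel into the camera frame via Equation 1. The function names, array shapes and pixel scale factors are illustrative assumptions, not the thesis's actual code.

```python
import numpy as np

def rigid_fit(A, B):
    """Least-squares rigid transform (4x4) mapping 3xN points A onto B,
    i.e. a corresponding-point (Horn-style) solve via SVD."""
    a0, b0 = A.mean(1, keepdims=True), B.mean(1, keepdims=True)
    U, _, Vt = np.linalg.svd((B - b0) @ (A - a0).T)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    T = np.eye(4)
    T[:3, :3] = U @ D @ Vt
    T[:3, 3] = (b0 - T[:3, :3] @ a0).ravel()
    return T

def calibrate_PTU(nwire_P, nwire_U_mm):
    """Equation 2: fit PTU from the N-wire intersections expressed in the
    transducer marker frame P (left-hand side) and in the ultrasound image
    frame U in mm (right-hand side). Both arrays are 3xN."""
    return rigid_fit(nwire_U_mm, nwire_P)

def us_pixel_to_camera(px, py, mm_per_px, T_P_U, T_C_P):
    """Equation 1: map a segmented pixel (px, py) to camera coordinates."""
    p_U = np.array([px * mm_per_px[0], py * mm_per_px[1], 0.0, 1.0])
    return (T_C_P @ T_P_U @ p_U)[:3]
```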
The motivation for taking many images of the pinhead with the LUS transducer in different poses is that it allows us to determine how consistent the estimation of the pinhead location is as a function of a moving LUS transducer.

Figure 15: Pictures from the point reconstruction experiment. The red arrows point to the pinhead. Images 1-3 show the LUS transducer in 3 poses (top row). In each pose the pinhead is in the ultrasound image generated by the LUS transducer. Image 4 shows the pinhead. The LUS transducer has been removed and the water drained from the container. This pinhead represents the gold standard pinhead location in the camera coordinate system. Images 5 and 6 are two ultrasound images from the LUS transducer, taken during this experiment. The white dot in the ultrasound image is the pinhead.

To estimate the pinhead's location in the camera coordinate system, it is segmented from each ultrasound image and its location is transformed to the camera coordinate system as shown in Equation 1. Its actual location is determined by stereo triangulation of the pinhead location after draining the fluid medium. Accuracy is the Euclidean distance from the average of the estimated pinhead locations to the actual pinhead location. Precision is the average Euclidean distance from each estimated pinhead location point to the centroid of those points. These measures account for errors in calibration as well as alignment, segmentation, tracking and other errors [77]. However, we kept alignment, segmentation and tracking constant across experiments so the changes in accuracy and precision are primarily due to the different ultrasound calibration matrices. The same 22 LUS transducer poses were used for all point reconstruction experiments. The LUS transducer covered an approximately uniform range within a 6 × 8 × 10 mm cuboid and Euler angles ranged over 22°, 16°, and 28° about the x, y and z axes respectively of the LUS transducer marker coordinate system of the first LUS transducer pose. The pinhead is plastic and has a diameter of 2.5 mm.

2.3.2 Point Reconstruction Accuracy as a Function of Focus

In this experiment the change in accuracy and precision is calculated for a change of focus from 100 mm to 160 mm. The focus of the stereo laparoscope was changed to 160 mm, the LUS transducer was moved to a distance of about 160 mm from the stereo laparoscope and 16 new pinhead reconstruction LUS transducer poses were captured. The location of the LUS transducer was calculated using the 100 mm focus camera calibration parameters and separately with the 160 mm focus camera calibration parameters. Both sets of camera calibration parameters were calculated with 20 images of an 8 × 10 checkerboard and the Caltech Camera Calibration toolbox [72]. The stereo laparoscope is set to a focus of 100 mm or 160 mm by placing a checkerboard perpendicular to the viewing direction at those respective distances and adjusting the focus until the checkerboard is sharply in focus. This approach is necessary because the da Vinci application programming interface (API) does not report the distance from the camera at which an object would be most sharply in focus.

2.4 Results

In this section the results of the point reconstruction tests are presented.
The goal is to understand how the accuracy and precision of pinhead reconstruction with ultrasound are affected by the type of camera used during ultrasound calibration and by changing the focus of the stereo camera without updating the camera calibration model.

2.4.1 Point Reconstruction Accuracy and Precision

The wide baseline approach for calibration improved accuracy (reduced point target localization error) from 3.1 mm to 1.3 mm when all 30 LUS transducer poses were used for calibration (Table 1). A similar trend was seen for the subset selection of 10 and 15 calibration poses. A greater number of poses appears to help the repeatability of the calibration.

Table 1: Point reconstruction accuracy (mm) ± standard deviation for the combination of narrow baseline calibration and tracking and the combination of wide baseline calibration and narrow baseline tracking. 30 LUS transducer poses were captured for calibration and randomly assigned to ten groups of 10, ten groups of 15 and one group of all 30 poses.

| Stereo camera for ultrasound calibration | Stereo camera for tracking LUS | 10 poses | 15 poses | 30 poses |
|---|---|---|---|---|
| Narrow baseline | Narrow baseline | 3.3 ± 1.3 | 3.3 ± 0.9 | 3.1 |
| Wide baseline | Narrow baseline | 1.5 ± 0.4 | 1.4 ± 0.3 | 1.3 |

The wide baseline approach for calibration improved precision by a small amount (Table 2).

Table 2: Point reconstruction precision (mm) ± standard deviation for the combination of narrow baseline calibration and tracking and the combination of wide baseline calibration and narrow baseline tracking. 30 LUS transducer poses were captured for calibration and randomly assigned to ten groups of 10, ten groups of 15 and one group of all 30 poses.

| Stereo camera for ultrasound calibration | Stereo camera for tracking LUS | 10 poses | 15 poses | 30 poses |
|---|---|---|---|---|
| Narrow baseline | Narrow baseline | 1.3 ± 0.2 | 1.4 ± 0.1 | 1.3 |
| Wide baseline | Narrow baseline | 1.2 ± 0.1 | 1.1 ± 0.1 | 1.2 |

2.4.2 Point Reconstruction Accuracy and Precision as a Function of Focus

Table 3 shows how the accuracy and precision of pinhead reconstruction change as a function of camera focus, without and with updating the camera model. The camera model is shorthand for the camera intrinsic parameters determined via camera calibration. In the first two data rows of Table 3 the new camera focus is 160 mm and the camera model used to track the LUS during the imaging of the pinhead is the one that was calculated when the camera was at a focus of 100 mm. In the last two data rows of the table the new camera focus is 160 mm and the camera model used to track the LUS is the one that was calculated by doing a new camera calibration with the camera at a focus of 160 mm. The first two data rows show that when the camera focus is changed and the camera model is not updated, the point reconstruction accuracy decreases (increased point target localization error) to about 20 mm. When the camera model is updated by calibrating the camera at the new camera focus of 160 mm, the point reconstruction accuracy returns to 0.8 mm and 2.6 mm for the wide baseline and narrow baseline camera for ultrasound calibration respectively. These accuracy results are similar to what was observed when the LUS transducer was at a distance of 100 mm and the camera was focused at a distance of 100 mm and calibrated for that focus distance.

Table 3: Point reconstruction results (average ± std) for the LUS transducer at a distance of 160 mm from the narrow baseline camera.
30 LUS transducer poses were captured for calibration and randomly assigned to ten groups of 15.

| Stereo camera for ultrasound calibration | Stereo camera for tracking LUS | Camera model updated after the focus change? (Y/N) | Accuracy (mm) | Precision (mm) |
|---|---|---|---|---|
| Narrow baseline | Narrow baseline | N | 19.2 ± 0.7 | 1.8 ± 0.2 |
| Wide baseline | Narrow baseline | N | 20.2 ± 0.2 | 1.5 ± 0.1 |
| Narrow baseline | Narrow baseline | Y | 2.6 ± 1.0 | 1.8 ± 0.2 |
| Wide baseline | Narrow baseline | Y | 0.8 ± 0.4 | 1.5 ± 0.1 |

2.5 Discussion and Conclusion

We have shown a millimeter level of accuracy for an augmented reality LUS system via direct optical tracking using a stereo laparoscope, suggesting it is a viable option for guidance in minimally invasive surgery. When we implement our proposed method of using a wide baseline (75 mm) stereo camera for ultrasound calibration and a narrow baseline (5 mm) stereo laparoscope for tracking, the accuracy is 1.3 mm (Table 1). When the narrow baseline camera system is used for ultrasound calibration and tracking, an accuracy of 3.1 mm is achieved. This reinforces the need for careful consideration of the ultrasound calibration step.

Most other research groups that developed augmented reality LUS systems used tracking systems that include an external base coordinate system such as optical tracking [66], electromagnetic tracking [68], and a combination of optical tracking and electromagnetic tracking [69]. These groups have reported point reconstruction errors in the approximate range of 1.5 mm to 3 mm. It should be noted that direct comparisons of accuracy results are difficult because of differences in apparatus, tests and definitions of accuracy. The novelty in our work is the use of a different stereo camera system for the ultrasound calibration and the direct optical tracking of the LUS transducer with a stereo laparoscope. The concept of using a different sensor for ultrasound calibration is broadly applicable. With the increasing adoption of MIS, the need for understanding the challenges associated with AR ultrasound using direct optical tracking with a mono or stereo laparoscope will continue to grow. Furthermore, direct optical tracking has an elegant simplicity that minimizes the extra equipment required to implement the system, and electromagnetic field distortion is not a concern. One drawback of optical tracking is the need for a line of sight between the laparoscope and the LUS transducer, but this is naturally provided by the surgeon when placing the LUS transducer over a region of interest. A second drawback of optical tracking is that blood or other fluid may obscure part of the LUS checkerboard optical markers. However, as long as part of the checkerboard remains visible the LUS transducer can still be tracked, albeit with reduced accuracy.

To further understand the effect of camera baseline on accuracy we calculated the accuracy of the combination of wide baseline calibration and tracking and the accuracy of the combination of narrow baseline calibration with wide baseline tracking. The results were 0.6 mm and 2.45 mm respectively. For these experiments we used the same 30 LUS transducer poses that were captured for calibration and the same 22 LUS transducer poses that were captured to determine the accuracy and precision. Thus, the best case accuracy is 0.6 mm and we surmise that using a narrow baseline camera for tracking adds about 0.7 mm of point target localization error, giving the overall accuracy of 1.3 mm (Table 1).
Therefore, we recommend that this wide baseline ultrasound calibration strategy be adopted in all cases where the accuracy of the augmented reality image is important.

Chapter 3 - Augmented Reality Imaging for Robot-Assisted Partial Nephrectomy Surgery

3.1 Introduction

The advantages and drawbacks of MIS and of laparoscopic ultrasound (LUS) as a tool for surgical navigation in MIS were introduced in section 1.1.2.1 and Chapter 2. Furthermore, the focus in Chapter 2 was on increasing the accuracy of the ultrasound calibration for a LUS transducer, a commonly used intra-operative surgical navigation aid in MIS. An accurate ultrasound calibration means that the physical relationship between the ultrasound image coordinate system and the coordinate system of the tracking sensor is known correctly, which allows for more accurate ultrasound volume reconstruction and image-guided surgery. In turn this should help mitigate some of the drawbacks of MIS by allowing the surgeon to better visualize the underlying anatomy and successfully execute a surgery as defined in section 1.1.3. For the same reasons as Chapter 2, the goal of this chapter is also to improve image-guided surgery for MIS. The specific focus is on building a surgical navigation system and testing augmented reality visualization strategies. The surgical navigation system that is built in this chapter is called the augmented reality ultrasound navigation system (ARUNS). The ARUNS is related to Chapter 2 because it needs to have an accurate ultrasound calibration to work well. Instead of only displaying the ultrasound images during the ultrasound scan, the ARUNS displays information from the ultrasound imaging for the duration of the surgery. The ARUNS is evaluated in the context of laparoscopic partial nephrectomy.

As described in section 1.1.4.5, tracking the relatively featureless kidney and measuring and modelling the kidney's deformation during surgery is difficult. As such, an important component of the ARUNS is the Dynamic Augmented Reality Tracker (DART), a custom-designed surgical navigation marker. The DART is useful because it allows the issues of tissue tracking and tissue deformation to be sidestepped. There are well established algorithms for tracking the optical fiducial on the DART, and the DART is placed immediately above the tumour so that deformation between the DART and tumour is minimal, which makes tracking deformation unnecessary.

A narrated description and demonstration of the ARUNS is included in one of the supplementary videos of this thesis. The supplementary video can be found in the metadata associated with this thesis on the University of British Columbia cIRCle website and data repository. It is highly recommended that the reader watch this supplementary video. The steps for using the ARUNS are as follows. The surgeon places the DART (Figure 16) directly above the kidney cancer tumour and performs a freehand ultrasound scan of the kidney and tumour. During this scan, both the DART and LUS are optically tracked. A 3D model of the tumour in the DART coordinate system is generated using optical tracking information and ultrasound segmentation. The positions of the surgical instruments relative to the tumour are displayed to the surgeon as direct AR in two virtual camera viewpoints. Additionally, a tool-to-tumour colour-coded proximity alert system is active that warns the surgeon if his/her instruments are dangerously close to the tumour.
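A minimal sketch of such a proximity check is shown below. Because the tooltip positions and the tumour model are both expressed in the DART coordinate system, the check is a plain Euclidean distance; the 10 mm default threshold and the two-level cue are illustrative assumptions rather than the system's actual parameters.

```python
import numpy as np

def proximity_cue(tooltip_D, tumour_centroid_D, warn_mm=10.0):
    """Stoplight-style alert in the DART frame D: return the cue state and
    the tool-to-tumour distance in mm."""
    d = float(np.linalg.norm(np.asarray(tooltip_D) - np.asarray(tumour_centroid_D)))
    return ("flash_red" if d < warn_mm else "normal"), d
```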
The two orthogonal virtual camera viewpoints, called the top and side views, are displayed to provide the surgeon with a better understanding of the location of the surgical instruments relative to the tumour. Furthermore, a guiding principle in the design of the ARUNS and the introduction of virtual camera viewpoints is that surgeons generally dislike direct graphical overlays that obscure the surgical field and prefer simple stylized graphics placed beside the surgical scene [65]. The ARUNS is broadly applicable to MIS and, in this first iteration, has been designed for robot-assisted laparoscopic partial nephrectomy (RALPN) with the da Vinci S® and Si® surgical systems (Intuitive Surgical, Sunnyvale, California, USA). The DART and the ARUNS were tested through a user study in which an expert surgeon excised a tumour from a phantom model of a kidney tumour.

The main novelties in this chapter are the invention of the DART, the tumour-centric tracking paradigm, and the virtual camera display of the LUS-generated 3D tumour model and the positions of the surgical instruments. The tumour-centric tracking paradigm involves tracking the ultrasound, camera and surgical instruments all relative to the DART in order to maximize the accuracy of the guidance.

3.1.1 Related work

Reviews by Lango et al. [64] and Hughes-Hallett et al. [63] summarize the significant amount of work that has already been done in the field of LUS and image-guided abdominal soft tissue surgery. Noteworthy augmented reality LUS research includes electromagnetically-tracked ultrasound for a kidney phantom model resection [68], optical tracking of the LUS for the first use of registered intra-operative ultrasound overlay in in vivo trans-anal surgery [67] and RALPN [78]. Cheung et al. showed that augmented reality ultrasound shortens planning time [68] and Hughes-Hallett et al. used optically registered LUS to account for intra-operative tissue deformation and displayed freehand 3D reconstruction of the ultrasound image on the operative view [79]. Teber et al. previously developed a real-time augmented reality display of the kidney tumour for the execution phase of laparoscopic partial nephrectomy. They employed landmark-based registration of the preoperative segmented CT and intra-operative field of view and maintained the registration by tracking navigational aids that the surgeon had placed into the kidney [80]. The ARUNS differs from the work of Teber et al. [80] in the following ways: 1) only one surgical navigation marker, the DART, is inserted into the kidney, 2) the augmented reality image displayed is a 3D representation of the tumour generated by the intra-operative LUS scan, and 3) the surgical instruments and the tumour display are presented to the surgeon via two orthogonal virtual camera viewpoints and a direct augmented reality overlay (Figure 20).

3.2 Materials and Methods

Figure 16: The DART with repeatable grasp (left); the DART with KeyDot® marker as it is inserted into an ex vivo porcine kidney (centre); and display of modified DART for total system error analysis (right). The red circle is the centre of the pinhead as determined by ultrasound calibration and KeyDot® tracking. The vertex of the yellow cone is the location of the pinhead as determined by da Vinci surgical instrument kinematics.

The DART (Figure 16) is designed in Solidworks (Waltham, Massachusetts, USA) and 3D printed in stainless steel at a low cost of $26 USD each to enable sterilization by autoclave (Xometry, Gaithersburg, Maryland, USA).
The DART can be inserted via the surgical assistant's 12 mm trocar, has a flat surface for placement of the KeyDot® optical marker [78], and can be picked up in a repeatable manner by the da Vinci ProGrasp™ [29]. One advantage of the repeatable grasp is that there is a fixed transform from the DART to the surgical instrument. This fixed transform means it is theoretically possible to perform da Vinci kinematic calibration by simply grasping the DART and waving it around while it is tracked with standard computer vision techniques. As well, the DART facilitates a unique tumour-centric tracking system for the ARUNS. The accuracy of the generated tumour model displayed to the surgeon relies on the assumptions that the DART is fixed relative to the tumour and that local tissue deformation does not occur. To that end, the DART includes legs with barbed hooks of length 10 mm that are intended to anchor it in a fixed position relative to the tumour. The LUS transducer is designed for robot-assisted minimally invasive surgeries [29] and it is the same LUS that is used in Chapter 2, Chapter 3 and Chapter 5 of this thesis. It has a 10 MHz 28 mm linear array and it is compatible with the Analogic ultrasound machine (Analogic, Richmond, British Columbia, Canada). The KeyDot® optical markers on the LUS transducer and DART are approved for human use (Key Surgical Inc., Eden Prairie, Minnesota, USA). Tracked ultrasound images are recorded during freehand ultrasound scanning, and trilinear interpolation, ITK-Snap [81] and Gmsh [82] are used for volume reconstruction, ultrasound segmentation and model generation respectively. The user study is performed with the da Vinci Si® (Intuitive Surgical, Sunnyvale, California, USA), using the ProGrasp™ instrument and monopolar curved scissors. 10-30 mm spherical inclusions at a depth of approximately 20 mm in cylindrical white PVC phantoms with a curved top surface are created using Super Soft Plastic and white colour (M-F Manufacturing, Fort Worth, Texas, USA). The phantom's elastic modulus is 15 kPa, which is consistent with the reported elastic modulus for human kidneys [83].

3.2.1 Calibration and Accuracy Tests

There are several components in the ARUNS system that require calibration. These include the laparoscope camera, the ultrasound and the da Vinci kinematic chain. The purpose of the ultrasound calibration is to calculate the transformation from the linear array of the ultrasound to the KeyDot® marker, an asymmetrical grid of circular dot patterns [78], on the LUS transducer. The da Vinci kinematic chain calibration corrects for the lack of accuracy and precision in the da Vinci setup joint encoders. To understand ultrasound calibration and da Vinci kinematics calibration it is necessary to define some coordinate systems. As shown in Figure 18, there are pick-up LUS transducer optical marker (P), DART (D), ultrasound image (U), da Vinci surgical instrument (I) and camera (C) coordinate systems. Camera calibration is performed using the Caltech Camera Calibration toolbox [72]. Ultrasound calibration, optical tracking of the KeyDots® on the DART and ultrasound, and 3D ultrasound reconstruction are performed as described previously [78]. The ultrasound calibration determines the transformation from U to P (PTU). Ultrasound calibration accuracy is determined by imaging a pinhead in a water bath from 10 different ultrasound poses.
To estimate the pinhead's location in the camera coordinate system it is segmented from each ultrasound image and its location is transformed to the camera coordinate system as shown in Equation 1. The point reconstruction precision is the root mean square (RMS) of the Euclidean distance from each pinhead point to the centroid of the pinhead points. This is a measure of the quality of the ultrasound calibration. See sections 2.3.1 and 2.4.1 and Figure 15 for a detailed explanation of the experimental setup for measuring point reconstruction accuracy and precision.

Next, the da Vinci kinematics calibration is performed. The da Vinci kinematics calibration is necessary to calculate the fixed offset in the transformation (CTI) that is reported by the da Vinci kinematic chain. This fixed offset changes at the start of every surgery when the da Vinci arm setup joints are reconfigured and locked into place. There is a change in the fixed offset every time the setup joints are locked in place because the setup joints are on a 12-foot-long arm with a 13 DOF kinematic chain. It has been reported that the fixed offset can vary by as much as 50 mm and that, once the setup joints are locked in place, the relative tracking accuracy of the instrument on the end of the da Vinci arm is 1 mm [84].

The da Vinci kinematics calibration and calculation of the fixed offset is achieved by measuring the 3D location of the origin of the DART coordinate system while the DART is in 14 unique poses. The DART coordinate system is defined by the KeyDot® on the DART. In each of the 14 poses, the DART's origin in the camera coordinate system is measured by the standard camera KeyDot® tracking algorithm [78] and by the da Vinci kinematics. To do this, for each pose the DART is held stationary while the standard camera KeyDot® tracking algorithm records its location. Then, with the DART still held stationary, the da Vinci instrument tip is maneuvered so that it touches the origin of the DART and the da Vinci kinematics record the DART location. Once this process is complete there are two sets of point clouds in the camera coordinate system. The transformation matrix which most closely registers the two point clouds to each other is calculated using Horn's algorithm [76]. That transformation matrix accounts for the fixed offset in the transformation (CTI) reported by the da Vinci kinematics and is used to counteract the offset for the duration of each operation.

After the da Vinci kinematic calibration is complete, the da Vinci kinematics instrument tracking (dVKIT) error is calculated. The dVKIT error is a measure of the difference between tracking a point with the camera and with da Vinci kinematic tracking. The dVKIT error is calculated via a leave-one-out error analysis. For each of the 14 DART poses, the other 13 DART poses are used to do a da Vinci kinematics calibration. Then, the Euclidean distance between the DART origin as reported by the camera and the DART origin as reported by the da Vinci kinematic tracking is calculated. Finally, the RMSE of the Euclidean distances from all 14 DART poses with leave-one-out da Vinci kinematic calibration is calculated. This RMSE value is the dVKIT error.
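The leave-one-out computation can be summarized in a short sketch, given the two sets of DART origins. The array shapes and the SVD-based rigid-fit helper (Horn's closed-form solution, as cited above) are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def rigid_fit(A, B):
    """Horn's closed-form rigid registration (R, t) of 3xN points A onto B."""
    a0, b0 = A.mean(1, keepdims=True), B.mean(1, keepdims=True)
    U, _, Vt = np.linalg.svd((B - b0) @ (A - a0).T)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    return R, b0 - R @ a0

def dvkit_error(cam_pts, kin_pts):
    """cam_pts, kin_pts: (14, 3) DART origins from camera tracking and from
    da Vinci kinematics. Calibrate on 13 poses, test on the held-out pose."""
    errs = []
    for i in range(len(cam_pts)):
        keep = np.arange(len(cam_pts)) != i
        R, t = rigid_fit(kin_pts[keep].T, cam_pts[keep].T)
        pred = (R @ kin_pts[i].reshape(3, 1) + t).ravel()
        errs.append(np.linalg.norm(pred - cam_pts[i]))
    return float(np.sqrt(np.mean(np.square(errs))))  # RMSE = dVKIT error
```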
Finally, to characterize the accuracy of the overall system, the total system error is measured. The total system error is affected by ultrasound calibration, camera calibration and da Vinci kinematic calibration. It is a measure of the accuracy of the tool-to-tumour distance that is reported by the ARUNS. As shown in Figure 17, calculating the total system error is a two-step process. The steps are analogous to the ultrasound scanning step and surgical navigation step of the ARUNS. The first step, like the ultrasound scanning of the tumour, involves finding the location of an object in the coordinate system of the DART. In the total system error experiment that object is a pinhead that is rigidly attached to the DART. The second step is finding the location of that same pinhead with the da Vinci kinematics. The difference in the location of the pinhead as reported by the DART and the da Vinci kinematics is the total system error. To measure the total system error a modified DART is designed with a flat top and a 2.5 mm pinhead (to simulate the tumour centre) rigidly attached exactly 25 mm below the DART surface. A model of the pinhead is generated in the DART coordinate system via ultrasound scanning and optical tracking of the ultrasound transducer and DART. Next, the ultrasound is removed, the DART is moved around and the da Vinci surgical instrument picks up the pinhead (Figure 16). The pinhead's location in the DART coordinate system is recorded via the optical tracking of the DART and the da Vinci kinematics. The total system error is the Euclidean distance between the pinhead points. This measure meets the goal of providing user feedback on tool-to-tumour distance. The RMSE of 10 different poses is reported.

Figure 17: Diagram illustrating the two-step process for calculating total system error.

3.2.2 FEM Simulation for DART

To test the assumption that the DART remains fixed relative to the tumour, a finite element method (FEM) simulation was run for a DART in a kidney during an ultrasound scan. The FEM simulation was run using ANSYS simulation software (ANSYS, Pittsburgh, Pennsylvania, USA). Using a calibrated force sensor, the average maximal downward force for three complete kidney tumour ultrasound scans of a kidney phantom was recorded as 0.7 ± 0.3 N. Grenier et al. imaged in vivo pig kidneys with renal ultrasound elastography and reported cortical and medullary elasticity values (Young's modulus) of 15.4 ± 2.5 kPa and 10.8 ± 2.7 kPa respectively [83]. The DART leg length used throughout this thesis is 10 mm. Thus, the input parameters for the DART FEM simulation are applied ultrasound force (0.1, 0.5 and 1.0 N), DART leg length (0, 5 and 10 mm) and kidney elasticity (10.8 kPa and 15.4 kPa). The ANSYS FEM mesh was set to a medium mesh discretization. The kidney tumour was 20 mm in diameter and 20 mm deep and the ultrasound force was applied 10 mm from the edge of the DART. The entire FEM simulation was done in a cube of virtual kidney with dimensions of 50 × 50 × 50 mm with the kidney tumour close to the centre and the DART directly above it. For each FEM simulation and associated input parameters, the magnitude of the distance between the theoretical tumour centroid, which is always 20 mm immediately below the DART, and the actual tumour centroid is calculated.

3.2.3 Theory

When using the ARUNS, the surgeon sees the tumour and tooltips via the direct camera feed and via virtual cameras that appear fixed relative to the real camera. The underlying linear algebra that makes this possible is presented in this section.

Figure 18: System configuration with labeled coordinate frames and components for both phases.
The abbreviations for the coordinate frames of the ARUNS (Figure 18) are listed here: pick-up LUS transducer (P), DART (D), ultrasound image (U), surgical instrument (I), camera (C) and virtual cameras (VC). In the equations in this section, T is a 4 × 4 transformation matrix, the subscript is the initial coordinate frame, the superscript is the resulting coordinate frame, a coordinate frame subscript o indicates that frame at time = 0, and the camera uses the OpenCV coordinate system convention. The ultrasound images and the locations of the da Vinci surgical instrument tooltips are transformed into the DART coordinate system via Equations 3 and 4 respectively:

$${}^{D}T_{U} = {}^{D}T_{C}\; {}^{C}T_{P}\; {}^{P}T_{U} \qquad \text{Equation 3}$$

$${}^{D}T_{I} = {}^{D}T_{C}\; {}^{C}T_{I} \qquad \text{Equation 4}$$

PTU is determined by ultrasound calibration and DTC and CTP are determined by optical tracking of the KeyDots® on the DART and pick-up LUS transducer respectively. CTI is determined via the da Vinci kinematic chain from tooltip to camera. The transformations from the virtual camera coordinate systems to the initial DART coordinate system, DoTVC, are calculated as translational and rotational components. The translations are a pre-set constant that determines the distance between the tumour and the virtual cameras. The rotations are pre-set orthogonal rotations around the y and x axes of the camera. When the DART moves, a new transformation from virtual camera to the DART at time t, DTVC, is calculated as follows:

$${}^{D}T_{D_o} = {}^{D}T_{C}\; {}^{C}T_{C_o}\; {}^{C_o}T_{D_o} \qquad \text{Equation 5}$$

$${}^{D}T_{VC} = {}^{D}T_{D_o}\; {}^{D_o}T_{VC} \qquad \text{Equation 6}$$

3.2.4 Principle of Operation

Figure 19: The surgeon's view during the phases of the surgery. VC1 and VC2 are the orthogonal virtual camera viewpoints for top and side views. Refer to the Figure 18 legend for labels of the components in this image.

The DART placement, tracked ultrasound scan and model generation occur only in the planning phase. The augmented reality step occurs in both planning (phase 1) and execution (phase 2). The surgeon's console view is shown in Figure 19. The step-by-step instructions for the ARUNS' usage are below:

1. DART placement: The DART is placed into the kidney near to the tumour (Figure 16).
2. Tracked ultrasound scan: During the freehand ultrasound scan of the kidney, the LUS transducer and DART KeyDot® markers are optically tracked and synchronised ultrasound images are recorded.
3. Model generation: A 3D model of the kidney tumour is created via manual tumour segmentation of the 3D ultrasound volume [81][85].
4. Augmented reality: In addition to the regular surgical scene view, orthogonal viewpoints and one direct augmented reality image of the operative scene are displayed to the surgeon in real-time. The viewpoints include the rendered tumour and tooltips, shown from the top view and side view, relative to the real camera. The views both face the centroid of the tumour and remain fixed relative to the real camera. The tumour and tooltips are continuously rendered as the DART moves. The rendering also displays the movement of the tumour in the virtual viewpoints.
5. Tumour excision: During the excision of the tumour, if the da Vinci surgical instrument tooltips come within a set threshold distance of the centroid of the tumour, the viewpoints flash red to warn the surgeon that s/he is approaching the tumour. Last, the DART is removed together with the tumour and the surrounding tissue that comes out with it.

3.2.5 Surgeon User Study

One expert urological surgeon versed in robot-assisted partial nephrectomies participated in the study. The goal of the user study was to evaluate the ARUNS in a simulated RALPN surgery. In the first case, the surgeon was only given the LUS transducer. In the second case, the surgeon was given the LUS transducer and the ARUNS. The surgeon spent 20 minutes familiarizing himself with the user interface of the ARUNS system, after which he was given the phantom for resection and the simulated surgery started. The phantoms provided in each case had inclusions that were purposefully unique in shape and location, limiting the surgeon's ability to learn from one case to the other. The augmented reality overlay and orthogonal virtual camera viewpoints are placed at the bottom of the surgeon's screen using TilePro® (Figure 19 and Figure 20). At the end of the user study, the surgeon answered a questionnaire in which he provided feedback about both cases and both systems. The survey included questions regarding usability and helpfulness of each system.

During the planning phase of both the LUS and ARUNS cases, the surgeon marked the phantom's surface with the tip of a permanent marker held by the monopolar curved scissors. This simulated the use of electrocautery to mark the kidney surface in surgery. In both the LUS and ARUNS cases, the surgeon started the execution phase immediately after he finished the planning phase. During the execution phase he used the da Vinci surgical instruments and did not use the LUS. The ARUNS tumour model and orthogonal virtual viewpoints were enabled at the start of the planning stage. This was possible because, for this user study, the tumour was scanned and manually segmented prior to the start of the planning phase.

The volume of excised tissue was recorded after subtracting the tissue between the top of the tumour and the tissue surface. The ratio of excised tissue to tumour volume was also recorded. The excised tissue mass was cut into 10 mm slices to determine margin status and size.

Figure 20: The direct augmented reality (left) and virtual camera viewpoints (middle and right) that are shown to the surgeon using the ARUNS in addition to his/her normal view. The middle pane is the top-down view and the right pane is the side view of the surgical scene.

3.3 Results

3.3.1 Calibration and Accuracy Tests

The point reconstruction precision with the ultrasound was 0.9 mm. Over the course of capturing the 10 ultrasound images of the pinhead, the ultrasound transducer covered a range of 16 × 10 × 19 mm. The dVKIT error was 1.5 mm. The lowest single error was 0.6 mm. The correction factor associated with the 0.6 mm error was used for the rest of the experiment. The correction factor accounts for the fixed offset in the transformation (CTI) that is reported by the da Vinci kinematic chain.

3.3.2 FEM Simulation for DART

The input parameters for the DART FEM simulation are applied ultrasound force (0.1, 0.5 and 1.0 N), DART leg length (0, 5 and 10 mm) and kidney elasticity (10.8 kPa and 15.4 kPa).
For each combination of those input parameters a FEM analysis of one second of deformation was run that gave results like the one shown in Figure 21. Using two displacement probes the displacement between the DART and the centroid of the tumour is measured.    Figure 21:Example of cross-sectional view of FEM simulation.  The colour-coded cross-sectional view shows the amount of displacement at each vertex in the FEM mesh.  The colour corresponds to the colour-coded legend on the left of the image which is in units of mm.  The tumour is the sphere in the center of the image and the DART is the small rectangle on top of the simulated cube of kidney tissue.  The legs of the DART are not visible because the cross-section does not go through them.  The area of largest deformation, in red on the top left of the image, is the place where the ultrasound force was applied over a rectangle the size of the ultrasound linear array.      74  Figure 22: The graphs show the results of some of the FEM simulations. For the simulations shown in this figure, the elasticity of the material was held fixed at 15.4 kPa (left graph) and 10.8 kPa (right graph). The x and y axis in both graphs represent input parameters for the simulation and the z axis is the magnitude of the distance (mm) between the theoretical tumour centroid, which is always 20mm immediately below the DART, and the actual tumour centroid. The numbers beside the data points in the graphs (*) are the z value of each of the data points.  The coloured surface between the data points (*) is generated by connecting the data points along the edges of a graph created by Delaunay triangulation between the data points.      75  Figure 23: The graphs show the results of some of the FEM simulations. For the simulations shown in this figure, the DART leg length was held constant at 10 mm.  The x and y axis in both graphs represent input parameters for the simulation and the z axis is the magnitude of the distance (mm) between the theoretical tumour centroid, which is always 20mm immediately below the DART, and the actual tumour centroid. In this case the x axis is the tissue/kidney elasticity in units of kPa and the y axis is the force exerted by the ultrasound transducer in units of Newtons. The numbers beside the data points in the graphs (*) are the z value of each of the data points.  The coloured surface between the data points (*) is generated by connecting the data points along the edges of a graph created by Delaunay triangulation between the data points.    For all simulations, the magnitude of the distance between the theoretical and actual tumour center never exceeds 1mm.  Thus, the conclusion from this FEM analysis is that assuming that the DART is fixed relative to the tumour will result in an additional error in the estimate of the kidney tumour location that is no greater than 1mm.  Furthermore, the simulation results reveal the following: - Increasing the DART leg length so that the legs end immediately above the tumour (<1mm) results in a 0.25mm reduction in the estimated location of the tumour centroid.    76 This is a negligible change in error. Therefore, DART leg length should be long enough to pierce the outer surface of the kidney, but no longer. - Increasing the ultrasound force increases the deformation and the error of the estimate of the location of the tumour centroid.   - Increasing the elasticity of the kidney decreases the deformation and the error of the estimate of the location of the tumour centroid.    
3.3.3 Surgeon User Study

For the LUS only case, the planning and execution times were 2 minutes and 10 minutes 45 seconds, respectively. The excised tissue volume was 24 cm3 and the volume of the tumour was 4 cm3. Thus, the excised tissue volume to tumour volume ratio was 6:1. There was a grossly positive margin and a separate microscopically (<1 mm) positive margin. The largest negative margin size was 24 mm. For the ARUNS case, the planning and execution times were 1 minute 57 seconds and 7 minutes 30 seconds respectively. The excised tissue volume was 16.5 cm3 and the volume of the tumour was 5.5 cm3. Thus, the excised tissue volume to tumour volume ratio was 3:1. There was a grossly positive margin and a separate microscopically positive margin. The largest negative margin size was 12 mm. For both cases, the tumour was endophytic and the surgeon rated the R.E.N.A.L. nephrometry score [86] as 12. In other words, a difficult surgery was simulated. Furthermore, the surgeon was aggressive in trying to minimize the size of the tumour margin.

After the user study, the surgeon reported that during the planning phase the ARUNS+LUS provided more information for visualization of the tumour than the LUS. During the execution phase, the surgeon preferred the visualization provided by the ARUNS+LUS over no visualization. General comments about the ARUNS+LUS system include that the most useful guidance cue was the tool-to-tumour colour-coded proximity alert system. The system worked by making the screen flash red if an instrument got within a certain distance of the centre of the tumour. The warning aided the surgeon in avoiding the tumour and minimizing the healthy tissue excised. The surgeon found the top-down view easier to interpret than the side view. However, he also reported that surgery is dynamic and it is not intuitive to stop partway through the surgery to take the time to look at the virtual views to orient himself.

3.4 Discussion

The success of image-guided surgical systems is largely dependent on their accuracy, usability and the clinical need for the extra image guidance. Each of those aspects of the ARUNS will be addressed in turn in the discussion. Both the ultrasound pinhead reconstruction precision error of 0.9 mm and the error of 1.5 mm for the da Vinci kinematics were consistent with the errors of 1.2 mm [87] and 1.0 mm [84] respectively that have been reported in the literature for similar experiments. The larger error in the ARUNS may be because the gold standard used was optically tracked KeyDot® markers as opposed to an Optotrak® 3020 stylus (Northern Digital Instruments, Waterloo, ON, Canada), which has a reported tip error of 0.25 mm [84]. Given an ultrasound error of 0.9 mm and a da Vinci kinematics error of 1.5 mm, the measured total system error of the ARUNS of 5.1 mm can possibly be reduced through further refinement and testing. There is still error from optical tracking of both the LUS and the DART, manual pinhead segmentation, and an imprecise technique for touching the pinhead with the surgical instruments. Given that one of the end goals for the ARUNS is to increase the amount of healthy kidney that is spared, it is important to reduce the total system error further. The standard of care recommendation for a kidney tumour resection is to leave a safety margin of 5 mm [9]. In terms of usability, the ARUNS orthogonal virtual camera viewpoint is different to other image guidance systems for abdominal surgery.
The advantage of the orthogonal viewpoints is that they provide the surgeon with a perspective they would not normally have, without occluding the surgeon's view of the operative field. An additional advantage of the virtual viewpoints approach is that the lag, inevitably introduced by an image guidance system with graphical rendering, is much less of a distraction in the orthogonal view as opposed to the direct overlay view. However, further work is required to help the surgeon orient himself or herself when looking at the orthogonal views of the ARUNS. Additional simplistic cues such as rendering the camera, showing the centre line axis of the virtual viewpoints or letting the surgeon set the pose of the virtual viewpoints could help minimize these issues. Using a colour gradient to represent the distance of the instrument to the tumour could improve the warning cue given to the surgeon as well.

The ultimate goal is that the ARUNS will be used for human surgeries. To achieve that goal, the issues of ultrasound segmentation, movement of the kidney after renal artery clamping, blood occlusion and seeding risk will have to be addressed. For simplicity, manual segmentation was performed. In practice, segmentation time can be minimised using (semi-)automatic algorithms that exist or using a bounding sphere approach for complex tumour geometry. However, in vivo automatic segmentation of tumours is more difficult than segmentation of phantoms. For renal artery clamping, the main issue is that, to minimize warm ischemia time, the ultrasound imaging should be performed prior to renal artery clamping. The shape of the kidney and tumour change when the perfusion pressure drops to zero. Insertion of the DART into the kidney yields a potential risk of seeding. In conclusion, the ARUNS is an innovative approach to surgical navigation for minimally invasive surgery and the success of the initial study suggests that further investigation and user studies are warranted.

Chapter 4 - Pico Lantern: Surface Reconstruction and Augmented Reality in Laparoscopic Surgery Using a Pick-Up Laser Projector

4.1 Introduction

Just like Chapters 2 and 3, the aim here is to develop a better surgical navigation aid for minimally invasive surgery (MIS). The intention is that this navigation aid will help surgeons achieve greater success in MIS and that it will compensate for some of the well-known drawbacks of MIS that were listed in the Introduction and Chapter 2. This chapter presents the Pico Lantern, a device built primarily to address the challenges of surface depth recovery for mapping the shape of internal organs and the display of surgical navigation information.

The Pico Lantern is a pick-up projector for laparoscopic surgery that is small enough to be dropped into the abdominal cavity (via a cannula or incision) and picked up therein by the surgeon. It is a source of structured light and, simultaneously, a projector for augmented reality in surgery. In addition to doing surface depth recovery, or surface reconstruction, it detects and highlights subtle surface movements associated with the pulsatile motion of underlying blood vessels. The Pico Lantern is designed as a multi-purpose tool for enhancing laparoscopic surgery. Partial nephrectomy (kidney cancer resection) has been chosen as the first application for the Pico Lantern.
The miniature Pico Lantern components are derived mainly from the consumer electronics industry, where miniature projectors are being incorporated into smartphones. For example, in June 2016 Lenovo released the Moto Z phone, which can be fitted with the Moto Insta-Share projector, a module that attaches directly to the phone, measures 153 × 74 × 11 mm and has a brightness of 50 lumens. These miniature projectors are called pico projectors. We have leveraged the miniaturization of laser-based pico projectors to develop the low-cost ($500 US) Pico Lantern. The Pico Lantern uses laser diodes and a raster scanning laser (a micro-electro-mechanical system (MEMS) scanner mirror: 2.9 × 2.2 × 1 mm) from the Microvision ShowWX+ pico projector (Redmond, Washington, USA). The proposed system is called a Pico Lantern because it illuminates the area of interest after it is dropped into the abdominal cavity and picked up therein. It projects high fidelity images that are in focus at almost all depths because the beam expansion of each single pixel matches the rate of expansion of the projected image.

The Pico Lantern differs from previous devices developed for projection in laparoscopic surgery because the source of structured light is inside the abdomen, it is free to move relative to the laparoscope and no external tracking tool is required. This means that there are fewer calibration and registration steps and a reduced lever arm effect, so the surface reconstruction and augmented reality projections are potentially more accurate.

Augmented reality guidance can be implemented with the Pico Lantern by projecting computer-generated images onto tissue surfaces. Such augmented reality uses the same coordinate system transformations that are already used to calculate the 3D surface reconstruction. A projection of the frequency-filtered displacement of the tissue back onto the tissue surface is demonstrated in this chapter. For colour projection on tissue, solutions to the challenges of projecting onto non-white curved surfaces exist [88]. It is also possible to adjust projected images so that they appear undistorted on curved surfaces [89]. Like a real lantern, the Pico Lantern can be used as a supplementary light source which can be automatically adjusted to reduce bright specular reflections. It can also be used to illuminate surfaces from a shallow angle to detect small protruding features by their long shadows.

One of the main motivations for developing the Pico Lantern is to overcome the reduced depth perception and limited viewpoints in laparoscopic surgery. Further challenges that need to be overcome are to make it easier to register preoperative and intra-operative images, to identify important subsurface anatomy such as blood vessels and to provide a tool for visualizing surgical guidance information. In this chapter, the Pico Lantern data is used for surface reconstruction of objects and organs, detecting blood vessels and creating virtual viewpoints of the surgical scene. Further, the Pico Lantern's augmented reality feature is used to project surgical guidance information about underlying blood vessels onto the surgical scene.

Figure 24: Pictures of the commercially available ShowWX+ projector (left), the internals of the ShowWX+ (centre) and a conceptual diagram of the Pico Lantern in use during laparoscopic surgery, scanning the surface of a kidney (right). Notice that part of the ShowWX+ is within the white Pico Lantern.
This chapter first describes the design and construction of the Pico Lantern. Next, the underlying approaches for tracking, 3D surface reconstruction and augmented reality are described. The results include tests on phantoms, ex vivo and in vivo porcine kidneys, a comparison of mono and stereo surface reconstruction and proof-of-concept testing for detecting in vivo tissue movement. In summary, we demonstrate the feasibility and accuracy of 3D tissue surface reconstruction and augmented reality using the Pico Lantern.

4.2 Materials

This section starts with a general discussion of the three Pico Lantern prototypes, followed by a description of the experimental equipment and a detailed description of the design of Pico Lantern prototype 2. We have built three Pico Lantern prototypes with nearly identical hardware components. Pictures of the prototypes are in Figure 25. The first prototype, used for the experiments in this chapter, is the calibrated ShowWX+ projector. The second prototype (Figure 24, Figure 25 and Figure 26) includes a new housing and cabling which allow the ShowWX+ to be taken apart to separate the integrated photonics module (IPM) and the electronics platform module (EPM). This allows the IPM to be dropped into the patient and attached via a flexible cable to the larger EPM, which is kept outside of the patient. Prototype 2 has the same 3-colour functionality as the ShowWX+ projector, has a diameter of 28 mm and can be placed through the skin incision with the cable beside the trocar. It has the same grasping element as in Schneider et al. [29], so it can be picked up with the ProGrasp™ manipulator of the da Vinci surgical system. The grasping element is designed for repeatable grasping so that the transformation between the grasping element (and hence the Pico Lantern) and robot coordinate systems is constant. Thus, robot kinematics can be incorporated into tracking in future research. The third prototype (Figure 25) has a diameter of 17 mm. It has a single laser diode and the same MEMS scanner mirror. This version was built to demonstrate manufacturability, but the 3-colour version is used for both colour and mono-colour testing. The ultimate Pico Lantern prototype will have a diameter of 12 mm so that it can be inserted through a standard laparoscopic surgery trocar.

A checkerboard with 3.175 mm squares is affixed onto a flat surface of each Pico Lantern prototype. The inner 2 × 6 checks and associated checkerboard corners (suitable for the 12 mm diameter prototype) are used for tracking. The checkerboard is made of surgical identification tape (Key Surgical Inc., Minnesota, USA) that is designed to remain semi-permanently attached to surgical instruments through repeated sterilization cycles, and it is approved for use in humans [87].

A Flea2 camera (Point Grey Research, Richmond, British Columbia, Canada) with a resolution of 1280 × 960 pixels is used for Pico Lantern projector calibration [90] and for determining the transformation between the coordinate systems of the checkerboard on the Pico Lantern and the projector. All tests are done using a da Vinci Si® laparoscope (Intuitive Surgical, Sunnyvale, California, USA) with images of 1280 × 1024 pixels. The Pico Lantern has an HDMI input, a frame rate of 60 Hz, a projection resolution of 848 × 480 pixels and a brightness of 15 lumens. The da Vinci surgical system is an ideal testing platform because it can hold the Pico Lantern steady.
Also, because it has a stereo laparoscope, the Pico Lantern surface reconstruction can be compared to conventional stereo laparoscopic surface reconstruction.

Figure 25: Pictures of Pico Lantern prototypes 1 and 2 projecting a checkerboard pattern onto the surface of ex vivo porcine kidneys (left and middle). Picture of the proposed configuration of the internal components of Pico Lantern prototype 3 (right).

The working Pico Lantern prototype 2 is shown in Figure 25. In Figure 25, the picture of prototype 2 was taken as the da Vinci Si® Surgical System ProGrasp™ was picking up the Pico Lantern and scanning the kidney surface with the projected checkerboard pattern. The Pico Lantern housing was manufactured with the Objet30 desktop 3D printer (Objet Inc., Billerica, Massachusetts, USA), which has 28 μm precision. In the ShowWX+, the integrated photonics module (IPM) and electronics platform module (EPM) are normally connected directly to each other via WP3 series low-profile board-to-board connectors with 20 pins and 0.4 mm pitch spacing (Japan Aviation Electronics Industry Ltd., Shibuya, Japan) and a flexible PCB. In the Pico Lantern, the IPM and EPM are connected via printed circuit boards (PCBs) that were custom designed and manufactured (Sierra Circuits, Sunnyvale, California, USA) and a flat flexible cable with 20 pins and 0.5 mm pitch (Wurth Elektronik, Niedernhall, Germany). A WP3 board-to-board connector and a 20-pin, 0.5 mm pitch zero insertion force surface mount connector (Wurth Elektronik) are soldered to each PCB. The board-to-board connector connects the PCB to the IPM or EPM, and the surface mount connector connects the PCB to the flat flexible cable (Figure 26). There is a direct one-to-one correspondence between the pins of the board-to-board and surface mount connectors via PCB traces of approximately 0.23 mm in width and a maximum of 30 mm in length. Epoxy glue is used to hold the IPM rigidly in place relative to the Pico Lantern housing.

Figure 26: Picture of the Integrated Photonics Module (IPM) from the ShowWX+ projector (left). The IPM is placed inside the Pico Lantern housing and connected to the rest of the ShowWX+ projector via custom designed PCBs and flat flexible cables. The PCBs were custom designed by the author using Altium, a PCB design software (Altium Software Company, San Diego, California, United States). The PCBs were printed by AP Circuits (Calgary, Canada).

Figure 27: Picture of custom-made PCBs for connecting the Pico Lantern Integrated Photonics Module (IPM) to the Electronics Platform Module (EPM). This cabling meant that the battery and other components of the projector could be left outside of the patient. The black board-to-board connectors in the bottom left of the picture are identical to the ones used in the ShowWX+ projector; their model had to be identified by reverse engineering.

Figure 28: Picture of the PCB design for the Pico Lantern.

4.3 Methods

4.3.1 Checkerboard Corner Selection and Checkerboard Tracking

The corners of the checkerboard on the Pico Lantern and of the projected checkerboard are selected manually, and the checkerboard corner detection algorithm [91] then detects the checkerboard corner locations with sub-pixel accuracy (a minimal sketch of this detection step is given below). The projected checkerboard is a 6 × 13 blue and black checkerboard.
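To illustrate, the sub-pixel corner detection step can be sketched with OpenCV, which provides functionality equivalent to the toolbox cited above. This is a minimal sketch under our own naming, not the exact implementation used in this thesis; note that a 6 × 13 checkerboard has 5 × 12 inner corners.

```python
import cv2

def detect_projected_corners(image_bgr, pattern=(12, 5)):
    """Detect the inner corners of the projected 6 x 13 checkerboard
    with sub-pixel refinement. pattern is (columns, rows) of inner
    corners."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(
        gray, pattern,
        flags=cv2.CALIB_CB_ADAPTIVE_THRESH | cv2.CALIB_CB_NORMALIZE_IMAGE)
    if not found:
        return None
    # Refine the integer-pixel corner estimates to sub-pixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    corners = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
    return corners.reshape(-1, 2)  # (N, 2) pixel coordinates
```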
Manual tracking is primarily used here, but an automatic tracking algorithm is available and has been used successfully in previous porcine [67] and human [92] surgeries for tracking an intraoperative ultrasound probe with a checkerboard affixed to it; hence the extra circular targets visible in the left image of Figure 25. Such automatic tracking will help with clinical integration but will not significantly affect the overall accuracy, which is what is explored here.

4.3.2 Validation of 3D Surface Reconstruction

The objects used for validation of 3D surface reconstruction are a white plane, a white cylinder (52.63 ± 0.05 mm diameter) and an ex vivo porcine kidney. For imaging each object, the laparoscope and object are stationary while the Pico Lantern is moved to 5 different poses within the field of view of the laparoscope. The surface data from the 5 poses are then combined. The Pico Lantern surface data points (the corners of the projected checkerboard) are regularly spaced with a density of about 0.2/mm². For each object, the same images are used for evaluating the accuracy of both the mono and the stereo 3D surface reconstruction methods. For all the surface reconstruction tests, the average and standard deviation of the angle between the Pico Lantern and camera axes, the distance from camera to object and the distance from Pico Lantern to object are 61° ± 12°, 166 ± 7 mm and 49 ± 11 mm respectively (Figure 29). The da Vinci Si® laparoscope surgical light is set to a medium brightness (40% of maximum) as a compromise between ambient lighting and projected contrast.

Figure 29: Diagram showing the approximate geometry of the experimental setup for the plane, cylinder and kidney 3D surface reconstruction experiments.

To determine the relative error, the Pico Lantern surface data points are fitted to the known geometric shapes of the plane and cylinder. The relative error is the average distance from the Pico Lantern surface data points to the surface of the plane or cylinder after the fitting process.

To calculate the absolute error, the gold standard surfaces of the objects are measured using a Certus optical tracker stylus (NDI, Waterloo, Ontario, Canada). To minimize tissue deformation from the stylus, the kidney is frozen and only the surface is defrosted to give a normal appearance. An open source surface fitting tool is used to fit a surface to the stylus surface points [93]. The surface is fitted using approximation as opposed to interpolation so that the surface fitting is less sensitive to outliers and noise. The fitted surface behaves like a flexible plate that is attached to the data points via elastic bands, where the plate has a finite, non-zero bending rigidity. Equal weighting is given to the fitting error and the first partial derivative of the surface. The fitted surface is stored as a square grid pattern with square edges of 0.1 mm. The absolute error is the average distance from the Pico Lantern surface points to their respective nearest neighbour points in the fitted 0.1 mm square grid; the grid is the gold standard surface. For each of the objects, the stylus point density is about 1/mm² because 3,000 stylus points are collected over a surface area of about 3,000 mm². The Pico Lantern and Certus optical tracker stylus data for the cylinder are shown in Figure 30.

Figure 30: Two views of the surface reconstruction data for the cylinder. The Certus optical tracker stylus gold standard surface points are black and the Pico Lantern surface points are coloured.
Each colour corresponds to a different Pico Lantern pose and each coloured point represents a corner of the projected checkerboard. The density of gold standard surface data points is approximately 1/mm² and the density of the Pico Lantern points is approximately 0.2/mm².

Additionally, the Pico Lantern is used to map the surface of two organs placed side by side with a clearly identifiable V-shape between them.

4.3.3 Measurement and Augmented Reality Display of Tissue Movement

The Pico Lantern can measure and display dynamic surface motion. Here, the Pico Lantern is used with monovision (method 2, described below) to measure the surface motion of a volunteer's neck. The goal is to capture subtle motion of the skin of the neck that is caused by the underlying blood vessels. The blood vessels in the neck are used as a surrogate for blood vessels such as the superior mesenteric, renal and pudendal arteries, which are of interest to the surgeon. Surface motion is measured by tracking the projected checkerboard at 15 frames per second for 10 seconds. The checkerboard corner tracking between frames is automatic because the distance a checkerboard corner moves between frames is small. In turn, the Pico Lantern projects a tissue motion map directly onto the tissue where the motion occurred. There are challenges in depicting computer-generated features such as vessels, and suitable visual cues have been proposed by others [52]. As a proof-of-concept, we propose to simply display an interpolated colour map in which the known data points are the projected checkerboard corners. For each checkerboard corner, the fast Fourier transform of 10 seconds of displacement data is calculated, and the average of the coefficients in the 0.82-1.1 Hz frequency band is the value of that data point in the interpolated colour map (a minimal sketch of this computation is given below). By rapidly alternating the projections of the checkerboard and the colour map overlays, measurement and depiction can be performed together in real time. In summary, this demonstrates measurement and display of tissue movement caused by underlying blood vessels in the neck of a volunteer. The gold standard locations of the carotid artery and jugular vein are identified by the sternocleidomastoid muscle anatomical landmark and palpation of the carotid arteries.

4.3.4 Virtual Viewpoints of Surgical Scene

Another application of the surface generated by the Pico Lantern is to render the surgical scene so that the surgeon can see it from any virtual camera perspective. To demonstrate the concept, the ex vivo kidney images and Pico Lantern surface data are used to create virtual viewpoints of the kidney surface (Figure 34). The concept of showing the 3D surface reconstruction from an arbitrary position has previously been demonstrated [94]. This is the first time this approach has been implemented in the context of the Pico Lantern.

4.3.5 Proof-of-concept In Vivo Porcine Experiment

The proposed Pico Lantern introduces some unique geometrical constraints during surgery. Thus, an in vivo porcine trial was conducted to qualitatively evaluate how the Pico Lantern would perform in a surgical setting. The trial was conducted at the Jack Bell Animal Research Facility, Vancouver (UBC animal care # A11-0223). The pig was anesthetized and its left kidney mobilized via an open surgical approach in the supine position. The laparoscope surgical light was set to a lower brightness (20% of maximum) to emphasize the projected contrast over the ambient lighting.
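Referring back to the motion analysis of Section 4.3.3, the band-limited motion metric can be sketched as follows. This is a minimal sketch under our own naming, assuming displacement traces sampled at 15 Hz for 10 seconds; subtracting the mean to remove the DC component is our own illustrative choice.

```python
import numpy as np

def band_power(displacement, fs=15.0, band=(0.82, 1.1)):
    """Average FFT coefficient magnitude of one corner's displacement
    trace within a frequency band (default 0.82-1.1 Hz, the cardiac
    range used for the neck experiment).

    displacement: 1-D array, e.g. 150 samples for 10 s at 15 fps.
    """
    trace = displacement - np.mean(displacement)  # remove DC offset
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].mean()
```

The per-corner values computed this way are then interpolated into the colour map that the Pico Lantern projects back onto the tissue.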
The Pico Lantern and da Vinci Si® laparoscope were then used to create a surface map of the kidney. A picture of the projected checkerboard on the in vivo kidney is shown in Figure 35.

4.4 Theory/Calculation

4.4.1 3D Surface Reconstruction

Two surface reconstruction methods, shown in Figure 31, are proposed in the two sections below. The Caltech Camera Calibration Toolbox [91] checkerboard corner detection algorithm and stereo triangulation function are used in both methods.

Figure 31: Overview of the two methods used for surface reconstruction. The red lines show the narrow triangle geometry of method 1 (left) and the blue lines show the wider geometry of method 2 (right).

4.4.1.1 Method 1 - Stereo Laparoscope and Untracked Pico Lantern

Method 1 follows the traditional principles of stereo triangulation using a calibrated stereo laparoscope (hereafter referred to as the camera) [46]. The corresponding points between the left and right images are the corners of the checkerboard pattern that is projected onto the surface by the Pico Lantern. The surface is at the point of intersection of the line-of-sight rays of the corresponding checkerboard corners from the left and right images. The stereo triangulation process can be written as a matrix multiplication for each camera:

\[ m_1 = K_1 \left[\, I \mid \mathbf{0} \,\right] M \qquad \text{and} \qquad m_2 = K_2 \left[\, R \mid t \,\right] M \]   (Equation 7)

where points m_1 and m_2 are homogeneous vectors in pixel coordinates in the camera images, and point M is a homogeneous point in the coordinate system of camera 1. K_1 and K_2 are the intrinsic camera parameters, the matrix R and vector t are the extrinsic camera parameters between the left and right cameras of the stereo laparoscope, and \(\mathbf{0}\) is a homogeneous vector of zeros.

4.4.1.2 Method 2 - Mono Laparoscope and Tracked Pico Lantern

Method 2 is suitable for either a mono or a stereo laparoscope; monovision is used here. The location of the Pico Lantern, in the coordinate system of the laparoscope, is determined by visually tracking the checkerboard that is on the Pico Lantern. This enables surface reconstruction using wide baseline triangulation with the Pico Lantern (P) and camera (C) at two of the vertices of the triangle. The surface is at the intersection of the corresponding rays V and R from the Pico Lantern and camera respectively, implemented as the point of closest approach between the two rays [95] (a numerical sketch of this computation is given after the surface reconstruction results below):

\[ \begin{bmatrix} s R_x \\ s R_y \\ s R_z \\ s \end{bmatrix}_C = {}^{C}T_{P} \begin{bmatrix} u V_x \\ u V_y \\ u V_z \\ u \end{bmatrix}_P \]   (Equation 8)

where s and u are scalars and \({}^{C}T_{P}\) is the transformation from the Pico Lantern projector (P) to the camera (C) coordinate system.

The goal of Pico Lantern calibration is to calculate \({}^{K}T_{P}\), the fixed transformation matrix from the Pico Lantern projector (P) to the Key Surgical checkerboard (K) on the Pico Lantern. It is performed offline and involves two steps with fixed geometry:

1. Calculation of \({}^{C}T_{P}\) with conventional projector calibration.
2. Calculation of \({}^{K}T_{P}\) using the \({}^{C}T_{P}\) that was calculated in step 1.

The first step uses conventional projector calibration [90] and Bouguet's Camera Calibration Toolbox [91], which is based on Zhang's algorithm [25]. The key is to model the projector as a camera in reverse. The intrinsic parameters of the camera are calculated via camera calibration [91].
The second step is best understood via this equation:

\[ {}^{C}T_{P} = {}^{C}T_{K} \; {}^{K}T_{P} \]   (Equation 9)

where \({}^{C}T_{K}\) is the transformation matrix from the Key Surgical checkerboard (K) on the Pico Lantern to the camera (C), calculated from each camera image. Since the camera and Pico Lantern remain stationary during step two of the Pico Lantern calibration, \({}^{C}T_{P}\) and \({}^{C}T_{K}\) are known and constant. \({}^{K}T_{P}\) is the unknown transformation matrix, and it is calculated offline using corresponding points between the known locations of projected checkerboard corners on a plane in the Pico Lantern projector (P) and Key Surgical checkerboard (K) coordinates [90]. The corresponding points come from twelve Flea2 camera images; in each image, the plane onto which the Pico Lantern projects a checkerboard pattern with 98 checkerboard corners is in a different pose.

4.5 Results

Figure 32: Laparoscope view during measurement of motion of the human neck in vivo, with graphs showing 10 seconds of displacement of the checkerboard corners indicated by the tails of the arrows (left). Depiction of the motion of the carotid artery using an interpolated colour map: red corresponds to large motion (right).

The 3D surface reconstruction relative error for method 1 was 1.6 ± 1.6 mm for the plane and 2.4 ± 2.1 mm for the cylinder. The relative error for method 2 was 0.8 ± 0.7 mm for the plane and 0.3 ± 0.3 mm for the cylinder. The absolute error for method 1 was 2.0 ± 1.7 mm for the plane, 3.0 ± 2.9 mm for the cylinder and 5.6 ± 4.9 mm for the kidney. The absolute error for method 2 was 1.4 ± 1.1 mm for the plane, 1.5 ± 0.6 mm for the cylinder and 1.5 ± 0.6 mm for the kidney. During data collection the range covered by the Pico Lantern in the camera coordinate system was 27 × 21 × 49 mm for the plane, 14 × 53 × 36 mm for the cylinder and 20 × 28 × 29 mm for the kidney. The extent of the ex vivo kidney surface that was imaged in the surface reconstruction is shown in Figure 34.

The results of the in vivo human test of the pulsatile motion of the neck near the carotid artery and jugular vein are shown in the right of Figure 32. The colour corresponds to the magnitude of the pulsatile motion in the frequency range of 0.82-1.1 Hz; red and blue represent the most and least motion respectively. In this experiment it was found that the red region of the interpolated colour map corresponds to the path of the carotid artery and jugular vein that run vertically through the image. The top graph in Figure 32 shows the periodic pulsatile displacement of a checkerboard corner, which has a maximum 3D vector magnitude of 0.9 mm. The lower graph shows a point that is about 10 mm away from the carotid artery; it has a maximum displacement of 0.3 mm and is less periodic.

Figure 33: Laparoscope view of two kidneys placed side by side (left). 3D surface reconstruction in the laparoscope coordinate system, as determined by the Pico Lantern (right). Each colour in the graph on the right corresponds to a different Pico Lantern pose and each point corresponds to a corner of the projected checkerboard. The V-shape created by the two organs (kidneys) touching each other can clearly be seen in the left and right images.

To qualitatively validate the accuracy of the Pico Lantern 3D surface reconstruction, a V-shaped surface is created by placing two organs together. Figure 33 shows that these V-shaped surfaces are accurately measured by the Pico Lantern.
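As referenced in Section 4.4.1.2, the wide baseline triangulation of Equation 8 reduces to finding the point of closest approach between the camera ray and the tracked Pico Lantern ray. The following is a minimal numerical sketch; the function name and the midpoint convention are illustrative choices, not the thesis implementation.

```python
import numpy as np

def intersect_rays(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two rays.

    c1, c2: ray origins (e.g. camera and Pico Lantern centres),
            given as NumPy 3-vectors in a common coordinate system.
    d1, d2: unit direction vectors of the corresponding rays.
    Returns the triangulated 3D surface point.
    """
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:            # near-parallel rays
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    p1 = c1 + s * d1                  # closest point on ray 1
    p2 = c2 + t * d2                  # closest point on ray 2
    return 0.5 * (p1 + p2)
```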
4.5.1 Virtual Viewpoints of Surgical Scene

The surface data and an image from the kidney 3D surface reconstruction were used to render the surface of the kidney. Several virtual viewpoints of the kidney are shown in Figure 34. This shows that there is good overlap between Pico Lantern views and that it is possible to combine those views and show a photorealistic surface. The photorealistic rendering is possible because the Pico Lantern is turned off for one frame during the data collection. Giving the surgeon the ability to view the region of interest from arbitrary viewpoints is expected to assist him or her in making an optimal operative plan. It is envisioned that the virtual viewpoint could be shown at a constant offset angle from the actual camera using the da Vinci TilePro® feature, or the surgeon could manipulate the rendered image using a tablet interface [96].

Figure 34: Da Vinci Si® laparoscope view of the ex vivo kidney used for surface reconstruction validation and virtual viewpoint images. Each set of coloured points on the kidney surface indicates the corners of the checkerboards that were projected onto the kidney surface for each Pico Lantern pose (left). Three virtual viewpoints of the part of the kidney surface that was imaged by the Pico Lantern (right).

4.5.2 Proof-of-concept In Vivo Porcine Experiment

The in vivo porcine experiment confirms that it is possible to use the Pico Lantern to image the kidney during open surgery with the da Vinci Si®. It was possible to place the Pico Lantern and laparoscope in an appropriate orientation to collect Pico Lantern surface data (Figure 35).

Figure 35: Da Vinci Si® laparoscope view during the in vivo porcine experiment. The Pico Lantern is projecting a checkerboard pattern onto the surface of the kidney for the purpose of surface reconstruction.

4.6 Conclusions

We have proposed the Pico Lantern, a pick-up laser projector for minimally invasive surgical guidance that is based on low-cost, fast, commercially available technology. In some surface reconstruction experiments it achieves sub-millimetre accuracy; it detects and highlights subsurface blood vessels; and virtual viewpoints of the surgical scene can be rendered from its data. We acknowledge that the subsurface blood vessel detected in this study, the carotid artery, sits outside the general abdominal surgery focus of this work; it would have been preferable to present this concept in conjunction with abdominal anatomy and detection of the renal artery. Nevertheless, this study demonstrated a proof of concept for artery tracking. Finally, an in vivo porcine experiment shows that the Pico Lantern can be used during surgery for surface reconstruction. Future in vivo porcine studies will include experiments to test the Pico Lantern's ability to detect the renal artery. One of the challenging aspects of this future study will be accounting for the respiratory movement of the kidney and renal artery.

In the future, surgeons may use these virtual viewpoints to visualize complex intra-operative surgical scenes. For example, virtual viewpoints would be helpful in determining the distance from the surgical instrument to the tissue or how far a kidney tumour protrudes from the kidney surface.

Surface reconstruction method 1, a stereo laparoscope with an untracked Pico Lantern, achieves an accuracy comparable to other stereo laparoscope results reported for the plane and cylinder [97].
However, it is sensitive to the detection of the checkerboard corners, so its accuracy decreases for the kidney: a simple correspondence method is used, the complex surface of the kidney makes the projected image more blurry, and the stereo laparoscope has a small baseline of 5 mm. Surface reconstruction method 2, a mono laparoscope with a tracked Pico Lantern, was more accurate and more consistent. The wider baseline between the camera/laparoscope and the Pico Lantern and the high contrast checkerboard on the Pico Lantern account for the higher accuracy. Method 2 compares favourably to surface reconstruction techniques in which a single mono laparoscope is used with no additional components.

Advantages of method 2 include:

• easier identification of the structured light features in the laparoscope view, since the Pico Lantern rays can be calculated in camera coordinates;
• mono-vision laparoscopes can be used for 3D surface reconstruction;
• particular effectiveness, compared to other techniques, on tissues with a low density of natural and unique surface features;
• a wide field of view can be covered by stitching surfaces together;
• the surgeon can move the Pico Lantern as close as necessary to achieve the desired accuracy/field-of-view trade-off;
• it is an effective way to add augmented reality, requiring only the laparoscopic video feed and no alteration to the laparoscopic hardware.

Disadvantages of method 2 are:

• the Pico Lantern must be in the field of view of the laparoscope;
• it is an extra piece of hardware that must be picked up and manipulated (the da Vinci third arm may be a solution);
• it is connected via a cable through an additional port (or the cable could be squeezed between an existing trocar and tissue, as suggested for the pick-up ultrasound transducer [29]);
• the brightness is limited.

However, ongoing improvements to pico projector technology will likely provide better accuracy, luminance and resolution in the future.

Chapter 5 - Follow the Light: Projector-based Augmented Reality for Intraoperative Surgical Planning in Minimally Invasive Surgery

5.1 Introduction

In Chapter 4, the Pico Lantern, a miniature projector for MIS, was presented. Its accuracy for surface reconstruction was measured, and it was noted that it had the potential to provide projection-onto-patient augmented reality to improve surgical guidance. In this chapter, the focus is on incorporating the Pico Lantern into a usable surgical navigation tool and testing that tool to see if it improves surgical outcomes. The motivation for the research in this chapter is to make minimally invasive surgery (MIS) easier and safer. This is the same motivation that drove the research in the previous chapters, and it is described in more detail in the introductions of Chapter 2 and Chapter 3.

As noted previously, MIS is gaining in popularity, and laparoscopic ultrasound (LUS) is used increasingly by surgeons in MIS to help them determine the tumour location. In computer-assisted surgery, where a three-dimensional model of the tumour is generated, how best to display the information about the tumour location is still an open question. This is true even for robot-assisted partial nephrectomies with a stereo laparoscope. Thus, it is still challenging for the surgeon to intraoperatively plan the surgery to achieve the ideal excision.
The difficulty in planning is particularly pronounced for endophytic (inward-growing) kidney tumour resections, which have a 47% complication rate, five times higher than that of exophytic (outward-growing) tumours [98]. For endophytic tumours, the ideal approach is to start as close as possible to the tumour and excise straight down from the organ surface along the orthographic projection of the tumour. For spherical tumours, the ideal excision specimen would fit within a cylinder.

As outlined in the Introduction in Chapter 1, the surgeon uses a LUS transducer to visualize the underlying anatomy. The surgeon tracks the LUS transducer pose with limited depth perception, remembers the location of the transducer and tumour, marks the tissue and starts excising. The surgeon has no way to quantitatively measure the tumour or the shape of the organ surface. These limitations could be overcome through the use of augmented reality (AR) for laparoscopic surgery, as explored in previous chapters and by other researchers. In particular, Bernhardt et al. recently published a comprehensive review on the subject [15]. Related research efforts include the development of projectors to display surgical navigation information [54] and to perform surface reconstruction [99][100]; however, those approaches were not applied to laparoscopic surgery. In the context of MIS, Lin et al. and Hayashibe et al. developed structured light for surface reconstruction but did not display any projector-based augmented reality guidance information [83][101]. Teber et al. and Simpfendorfer et al. used video-based augmented reality in human in vivo laparoscopic partial nephrectomy [90][102]. They used intraoperative cone-beam computed tomography and custom-designed radiopaque needle-shaped fiducials, inserted into the kidney and optically tracked in 3 degrees of freedom, to image the kidney and account for kidney movement. However, that system delivers additional ionizing radiation to the patient. Our proposed approach is to use the Pico Lantern (Chapter 4) for AR guidance. To our knowledge, this is the first time a projector-based augmented reality system for laparoscopic surgery in which the projector is inside the patient during the procedure has been explored.

Herein, a Projector-based Augmented Reality Intracorporeal System (PARIS) is developed (Figure 36). The PARIS comprises a miniature projector, a dynamic marker inserted into the kidney and a LUS transducer without extrinsic tracking hardware. The miniature projector is similar to the Pico Lantern that was presented in Chapter 4, and the dynamic marker is the Dynamic Augmented Reality Tracker (DART) that was presented in Chapter 3. As in Chapter 4, the Pico Lantern projector was created by affixing a KeyDot®, a fiducial marker with an asymmetric circles grid pattern, onto a commercially available projector. In Chapter 4 the Pico Lantern was built using the ShowWX+ projector; in this chapter the Pico Lantern projector was built using the PicoPro projector. Both have a similar form factor, but the PicoPro is brighter and has a higher resolution. In this chapter the Pico Lantern projector is used both for surface reconstruction and for direct augmentation of the surgical scene. From a tracked ultrasound scan, a 3D model of the tumour is generated. By tracking this model with the dynamic marker, the projector projects an image of the tumour onto the surgical scene (the coordinate transformations involved are sketched below).
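As an illustration of this tracking chain, the following minimal sketch, under our own naming conventions, maps a tumour model segmented in DART coordinates into projector coordinates using the two vision-based poses estimated from the laparoscope image (camera-to-DART and camera-to-projector). The 4 × 4 homogeneous transforms and function names are assumptions for illustration, not the PARIS implementation.

```python
import numpy as np

def map_tumour_to_projector(pts_dart, T_cam_dart, T_cam_proj):
    """Map tumour model points from DART to projector coordinates.

    pts_dart:   (N, 3) tumour vertices in the DART frame.
    T_cam_dart: 4x4 pose of the DART in camera (laparoscope) coordinates.
    T_cam_proj: 4x4 pose of the projector in camera coordinates.
    Both poses come from vision-based tracking of KeyDot markers.
    """
    # Chain the two tracked poses to get DART-to-projector.
    T_proj_dart = np.linalg.inv(T_cam_proj) @ T_cam_dart
    pts_h = np.hstack([pts_dart, np.ones((len(pts_dart), 1))])
    return (T_proj_dart @ pts_h.T).T[:, :3]
```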
In the projector point of view (P-POV) mode of the PARIS, the tumour is projected in pink and yellow. The pink tumour projection image is the perspective view of the tumour as seen by the projector. The yellow tumour projection image is at the intersection of the surgical scene surface with the orthographic projection of the tumour in the direction of the projector. The multi-coloured perspective and orthographic projection is a minor depth cue; in practice the surgeon mostly looks at the outline of the yellow orthographic projection. In simulated surgery on phantoms, the PARIS in P-POV mode is compared to standalone LUS. The PARIS is used in the P-POV mode for all experiments described in this chapter.

A narrated description and demonstration of the PARIS is included in one of the supplementary videos of this thesis. The supplementary video can be found in the metadata associated with this thesis on the University of British Columbia cIRcle website and data repository. It is highly recommended that the reader watch this supplementary video.

Figure 36: Overview of the Projector-based Augmented Reality Intracorporeal System (PARIS) in projector point of view (P-POV) mode. There is a red perspective and a yellow/brown orthographic projection of the tumour in the projector point of view (P-POV). Dashed blue lines are orthographic lines from the tumour in the direction of the projector. The dynamic marker is the DART, shown as a grey/white object in the conceptual and surgeon's views respectively.

5.2 Methods and Materials

In this section the materials and methods used in this chapter are explained.

5.2.1 Materials

The projector is a PicoPro projector (Celluon Inc., Seoul, Korea) with an attached KeyDot®, a fiducial marker with an asymmetric circles grid pattern (Key Surgical, Eden Prairie, Minnesota, USA). Using circle detection to determine the circle centroids, pose estimation provides a full 6 degree-of-freedom pose of the KeyDot® relative to a mono laparoscope [78]. This removes the need for exogenous tracking hardware. The projector projects images via laser raster scanning, has a large focus range and requires no interposition between the laparoscope and the monitor. Compared to the Pico Lantern hardware described in Chapter 4, which had a lower resolution (848 × 480) and 15 lumens of brightness, the updated hardware used in this chapter has double the resolution (1920 × 1080) and brightness (30 lumens), along with wireless capabilities and Android, iOS and Windows compatibility. As mentioned before, the ultimate use of the Pico Lantern in surgery would require no dedicated port, as it could be placed through the skin incision with a thin cable beside the trocar or controlled wirelessly. Given the significant improvements in resolution and brightness of the PicoPro over the Pico Lantern, the PicoPro was clearly the better projector for the experiments in this chapter, and the unmodified PicoPro was used. It was determined that it would be possible to modify the PicoPro to give it a form factor similar to the Pico Lantern of Chapter 4; however, re-engineering the device was not worthwhile because it would not have changed how the experiments were performed or their outcome.

The DART navigation aid, described in Chapter 3, is unchanged here. To review, the DART is a dynamic marker with barbed legs that embed into the organ surface.
It is made of either plastic or stainless steel, with an attached KeyDot®. At 10 × 10 × 13 mm, the marker can be inserted into a 12 mm trocar and picked up in a repeatable manner by the da Vinci ProGrasp™ (Intuitive Surgical Inc., Sunnyvale, USA). The DART can be sterilized by autoclave and inserted into the organ. By tracking the DART, and assuming minimal local deformation during and after the ultrasound scan, the tumour is tracked. Section 3.3.2 described FEM biomechanical simulations of the kidney and DART with realistic forces and tissue elasticity and showed that the deformation between the DART and the tumour centroid is < 1 mm. The LUS transducer is a 10 MHz, 28 mm linear array designed for robot-assisted minimally invasive surgery [99]. It is used with an Ultrasonix ultrasound machine (Analogic, Peabody, Massachusetts, USA) and has a KeyDot® marker for vision-based tracking.

Figure 37: (a) The pick-up ultrasound transducer with KeyDot®, (b) the plastic 3D printed DART, (c) the metal 3D printed DART, (d) the original Pico Lantern projector, and (e) the Celluon PicoPro used in the experiments.

Similar to Edgcumbe et al. [100], a 3D tumour model is generated by tracking the LUS transducer relative to the DART, using the marching cubes algorithm to reconstruct the LUS volume, and manually segmenting the target inclusions. PVC kidney phantoms are made with Super Soft Plastic (M-F Manufacturing, Fort Worth, Texas, USA). They have an elastic modulus of 15 kPa, consistent with human kidneys [83]. Spherical inclusions with a 10-20 mm diameter are placed at a depth of 20 mm. The PARIS is tested with the da Vinci Si® surgical system (Intuitive Surgical Inc., Sunnyvale, USA).

5.2.2 Augmented Reality Visualizations

After the tumour model is generated, the projector first projects structured light (a checkerboard) to facilitate surface reconstruction. Next, the P-POV mode is displayed by the projector, with the model projected as a dense point set. The projection image for the orthographic display in the P-POV mode is calculated as follows (a minimal sketch is given at the end of this subsection). For each vertex of the tumour model, a ray is generated that runs parallel to the vector between the tumour centroid and the projector. The intersections of the rays from each of the vertices with the surface are determined. Using the projector model, the 3D locations of the intersection points are back-projected to form the projection image. The end result is a projection of the tumour onto the organ surface that preserves the size of the tumour. In other words, the surgeon knows that if s/he starts the excision at the edges of the projected orthographic tumour image and cuts parallel to the projector-to-tumour-centroid vector, then the result will be a negative margin and the amount of healthy tissue excised will be minimized. Note that perspective projections were perceived as less intuitive, so orthographic projections (meaning projection calculations maintain the size of the target at all depths from the organ surface) are the main focus in this chapter. Generally, the angle of the laparoscope is shallow to the surface, and it is unlikely to follow the ideal approach angle: normal to the surface at the point closest to the tumour. In contrast, it is relatively easy to place the tracked projector on this ideal approach angle and provide guidance to the surgeon. This is achieved by aligning the projection image centre with the tumour centroid as seen by the projector.
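The orthographic projection-image calculation described above can be sketched as follows. This is a minimal sketch, assuming the reconstructed organ surface is available as a triangle mesh and using the trimesh library for ray casting; the names, the 100 mm ray back-off and the pinhole projector model are illustrative assumptions, not the PARIS implementation.

```python
import numpy as np
import trimesh

def orthographic_projection_image(tumour_pts, surface_mesh,
                                  proj_centre, K_proj, T_proj_world):
    """Compute projector pixel coordinates of the tumour's orthographic
    'shadow' on the organ surface (P-POV orthographic display).

    tumour_pts:   (N, 3) tumour vertices in world coordinates (mm).
    surface_mesh: trimesh.Trimesh of the reconstructed organ surface.
    proj_centre:  (3,) projector optical centre in world coordinates.
    K_proj:       3x3 projector intrinsics (projector modelled as a
                  camera in reverse).
    T_proj_world: 4x4 transform from world to projector coordinates.
    """
    centroid = tumour_pts.mean(axis=0)
    direction = centroid - proj_centre
    direction /= np.linalg.norm(direction)
    # One ray per tumour vertex, all parallel to the centroid ray;
    # origins are backed off 100 mm so the rays start above the surface.
    origins = tumour_pts - 100.0 * direction
    dirs = np.tile(direction, (len(tumour_pts), 1))
    hits, _, _ = surface_mesh.ray.intersects_location(origins, dirs)
    # Map the surface hit points into projector coordinates and project.
    hits_h = np.hstack([hits, np.ones((len(hits), 1))])
    hits_p = (T_proj_world @ hits_h.T).T[:, :3]
    pix = (K_proj @ hits_p.T).T
    return pix[:, :2] / pix[:, 2:3]  # projector pixel coordinates
```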
During this alignment process, the projector is kept approximately normal to the surface via careful manual positioning. The surgeon can move the projector around the scene and observe the resulting display, effectively being able to "see" from arbitrary poses. This protocol was one of the unexpected preferences learned through the course of preliminary user studies.

5.2.3 Verification and Validation

The laparoscope is calibrated using OpenCV. The projector calibration is performed as in [90]. The ultrasound calibration is done geometrically with the method described previously [78]. The ultrasound calibration determines the LUS image to KeyDot® transformation. The point reconstruction precision is defined as the root mean square error (RMSE) of the Euclidean distance from each pinhead point to the centroid of the pinhead points. The point reconstruction precision is 0.9 mm over 10 ultrasound images covering a working volume of 16 × 10 × 19 mm. The ultrasound calibration matrix used here is the same one used in Chapter 3. See Sections 3.2.1 and 3.3.1 for more information about how this ultrasound calibration was done, and see Sections 2.3.1 and 2.4.1 and Figure 15 for a detailed explanation of the experimental setup for measuring point reconstruction accuracy and precision.

Surface reconstruction is performed via semi-global block matching [102]. Semi-global block matching is a technique for finding corresponding pixels in a pair of stereo images so that 3D reconstruction (or stereo reconstruction) can be performed by triangulation (a minimal sketch is given at the end of this subsection). The surface reconstruction percentage is the percentage of the surface for which stereo matching is successful and surface data is generated. The surface reconstruction percentage of an ex vivo kidney, with and without extra features projected, is compared for 12 unique laparoscope and projector poses. This tests the hypothesis that when the projector projects extra "texture" (visual features) onto the surface of the kidney, stereo surface reconstruction with semi-global block matching will be improved. The quality of the surface reconstruction is measured by the surface reconstruction percentage.

To evaluate the accuracy of the projector's augmentations, both the reprojection error and the tumour location are quantified. The reprojection error is the distance between the detected origin of the DART and its transformed equivalent as projected onto the scene. This captures error in the tracking of the two KeyDots® and in the laparoscope and projector calibration models. To measure it, the projector is moved to 5 poses, and for each pose the DART is placed in 10 poses, approximately 80 mm from the laparoscope. The RMS error is reported.

Figure 38: Picture from data collection during the reprojection error experiment. The black arrow shows the origin of the asymmetric dot pattern and the pink arrow shows the reprojected laser dot, which should be centred on the origin of the dot pattern.

For the phantoms used in the simulated surgeries, the segmented tumour model volumes and radii are compared to the ground truth values measured during phantom construction. Secondly, a phantom is cut in half and the segmented ultrasound volume of the exposed part of the tumour is projected onto the cut surface. The Hausdorff distance and the average RMS distance between the contours of the actual tumour and the projected tumour for five laparoscope and projector poses are reported.
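As referenced above, a minimal sketch of semi-global block matching with OpenCV follows; the parameter values are illustrative assumptions rather than those used in the PARIS, and the helper computing the surface reconstruction percentage is our own naming.

```python
import cv2
import numpy as np

def sgbm_disparity(left_gray, right_gray, max_disp=128, block=5):
    """Disparity map from a rectified stereo pair via OpenCV's
    semi-global block matching (StereoSGBM)."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=max_disp,   # must be divisible by 16
        blockSize=block,
        P1=8 * block * block,      # smoothness penalties
        P2=32 * block * block,
    )
    # compute() returns fixed-point disparities scaled by 16.
    return matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

def reconstruction_percentage(disp):
    """Percentage of pixels with a valid (positive) disparity, analogous
    to the surface reconstruction percentage reported here."""
    return 100.0 * np.count_nonzero(disp > 0) / disp.size
```

The disparity map can then be converted to a 3D surface with cv2.reprojectImageTo3D and the rectification Q matrix.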
Finally, one novice urologist (a 2nd year surgical resident) and one expert urologist (Chris Nguan, with more than 10 years of surgical experience) completed simulated partial nephrectomies on kidney phantoms using the PARIS. Each surgeon also completed the same simulated partial nephrectomies using standalone LUS imaging as the control arm. Each surgeon was given one practice trial for both the PARIS and the standalone LUS imaging technique. After practice, the novice completed 12 simulated surgeries and the expert completed 20 simulated surgeries. The excision times, margin status, margin size and volume ratio of resected tissue to tumour are recorded. To quantify deviation from the ideal excision, the excised specimen is cut at increments of 5 mm and the cross-section with the largest tumour diameter is analyzed. The tumour and the full cross-section are segmented, and the RMS distance between their centroids and the Hausdorff distance between their segmented contours are reported (these metrics are sketched in code below). The surgeons answered a questionnaire and provided open-ended qualitative feedback.

5.3 Results

Projected patterns improved the surface reconstruction percentage by an absolute average of 15.4 ± 8.3%. This yields a surface reconstruction percentage sufficient for determining the projection images. The reprojection error of the DART's KeyDot® origin is 0.8 mm RMS. During the data collection, the projector was moved over a range of 32 × 9 × 11 mm in the laparoscope coordinate frame. The average ground truth and measured tumour volumes were 2.6 ± 0.7 cm³ and 4.2 ± 1.4 cm³ respectively. The difference between the measured and ground truth radii is 1.5 mm RMS. For the projection of the tumour onto the actual tumour, the average Hausdorff distance and RMS distance between the contours are 3.9 mm and 1.7 mm respectively.

Figure 39: Phantom cut in half for the purpose of qualitative validation of reprojection accuracy. (Left) Un-augmented cross-section of the phantom. The phantom was cut in half to expose the black-coloured tumour, which is indicated by a blue arrow. The ultrasound probe is placed so that its imaging plane is just behind and parallel to the surface of the phantom where it was cut. (Centre) Computer graphics overlay of the tumour model. (Right) L-POV perspective projection of the tumour model. The LUS was placed on the phantom's edge and the reconstructed volume was segmented.

Figure 36 is an illustration of the PARIS projection-onto-patient overlay. Figure 39 shows a cross-sectional slice of a phantom with the physical location of the tumour and the projection as seen from the endoscope view. The RENAL nephrometry score is a clinical score used by surgeons that quantifies the complexity and difficulty of a partial nephrectomy; the highest possible RENAL score, representing the most complex and difficult surgery, is 12 [86]. The RENAL score was 12 for all the tumours, indicating a challenging resection task. Quantitative results from the 32 simulated laparoscopic partial nephrectomies are summarized in Table 4. Some highlights are that the novice surgeon had 5/6 negative margins with the PARIS and 5/6 with the LUS alone, and the expert surgeon had 10/10 and 8/10 negative margins with the PARIS and with the LUS alone respectively. Furthermore, both the novice and the expert surgeon excised a statistically significantly smaller amount of healthy tissue when using the PARIS. For both the novice and the expert, the Wilcoxon signed-rank test p-value was < 0.01 for the comparison of healthy tissue excised with the PARIS versus the LUS alone.
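A minimal sketch of this significance test with SciPy follows; the sample values are invented placeholders rather than study data, and pairing by phantom is an assumption made for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired excised-tissue volumes (cm^3) per phantom; placeholder values.
lus_volumes = np.array([26.0, 22.5, 19.0, 24.0, 18.5, 21.0])
paris_volumes = np.array([17.0, 15.5, 13.0, 16.0, 12.5, 14.0])

# Paired, non-parametric comparison of the two guidance conditions.
stat, p_value = wilcoxon(lus_volumes, paris_volumes)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")
```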
Cross sections of the expert surgeon's first four excised specimens with both the PARIS and the LUS alone are shown in Figure 40. The Hausdorff distance for the novice surgeon is 13.3 mm while using the PARIS and 19.3 mm while using the LUS alone. The Hausdorff distance for the expert surgeon is 11.0 mm while using the PARIS and 18.0 mm while using the LUS alone. These Hausdorff distance results indicate that the PARIS results in more consistently tight margins and less healthy tissue removed.

Figure 40: Cross sections of excised specimens from the first four phantoms from each of the Projector POV (top row) and LUS (bottom row) branches of the study with the expert surgeon. The black inclusion is the simulated kidney cancer lesion. Centroids and contours of the tumours and full tissue cross-sections are shown. Note that the 2nd and 4th specimens in the bottom row had positive margins and were excluded from the quantitative analysis and the data shown in Table 4.

Table 4: Quantitative comparison for the user study with simulated partial nephrectomies

                                 Novice Surgeon (n=12)        Expert Surgeon (n=20)
  Metric                         LUS (n=6)     PARIS (n=6)    LUS (n=10)    PARIS (n=10)
  Execution time (s)             579 ± 155     469 ± 152      199 ± 31      207 ± 40
  Tumour volume (cm³)            2.8 ± 0.7     2.6 ± 0.7      2.6 ± 0.8     2.4 ± 0.9
  Negative margins               5/6           5/6            8/10          10/10
  Excised tissue volume* (cm³)   26 ± 3        17 ± 3         20 ± 4        14 ± 4
  Hausdorff dist. (mm)           19.3 ± 0.8    13.3 ± 3.7     18.0 ± 2.2    11.0 ± 1.7
  Centroid dist. (mm)            5.1 ± 1.5     4.1 ± 1.8      4.4 ± 1.9     2.9 ± 1.2

Figure 41: Quantitative comparison of excised tissue volume during the user study. LUS stands for LUS only. When comparing the results for the LUS and the PARIS for both the novice and the expert surgeon, the Wilcoxon signed-rank test p-value was < 0.01.

The surgeons observed that the PARIS generated a clear image that blended well with the phantom surface and that the resection line was depicted naturally and intuitively relative to the organ. Drawbacks included the need for a moderate ambient surgical light intensity, to avoid washing out the projection, and the need for guidance during the excision itself. The surgeon reported that he would use the PARIS over the standalone ultrasound imaging method partly because the augmented reality provides persistent guidance by projecting the tumour outline; conventionally, the surgeon observes a limited cross-section only during the ultrasound scan. After each resection the surgeons answered a questionnaire. The combined questionnaire results for both surgeons, for the LUS and the PARIS, follow. In comparing the LUS to the PARIS, the results indicate that the surgeons felt more confident (3.3 ± 1.1 vs. 5.0 ± 0.0) and had a better spatial understanding (3.5 ± 0.8 vs. 4.6 ± 0.5) when using the PARIS. All these results favour the PARIS over LUS visualization.

5.4 Discussion

This work presents a novel, fully-integrated intracorporeal augmented reality system for intraoperative guidance in soft tissue surgery. Given a 5 mm margin for partial nephrectomy, the errors of the subsystems (1.2 mm RMS for the tumour model geometry and 0.8 mm RMS for reprojection) and the overall tumour localization error (1.7 mm RMS) are small enough to consider the PARIS beneficial for guidance. This study is intended to test the overall concept of the PARIS, and further studies in vivo are required. However, it demonstrated that the integration of the three components, vision-based tracking and projector augmentation is feasible and practical.
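For reference, the Hausdorff and centroid distances reported in Table 4 can be computed per specimen as in the following minimal sketch, using SciPy's directed Hausdorff distance; extraction of the segmented contours from the cross-section images is assumed to have been done already, and the function name is ours.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def contour_metrics(tumour_contour, specimen_contour):
    """Symmetric Hausdorff distance between two 2-D contours and the
    distance between their centroids, per excised specimen.

    Contours are (N, 2) arrays of points in mm.
    """
    h = max(directed_hausdorff(tumour_contour, specimen_contour)[0],
            directed_hausdorff(specimen_contour, tumour_contour)[0])
    centroid_dist = np.linalg.norm(tumour_contour.mean(axis=0)
                                   - specimen_contour.mean(axis=0))
    return h, centroid_dist
```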
Integration of the PARIS with the da Vinci requires only read-only access to the video feed, which eases dissemination. It took each surgeon one practice trial before they went on to use the PARIS effectively for guidance. The projector's augmentation was clear enough on the surface to be useful for planning the excision. The surgeon indicated that, unlike standalone ultrasound imaging, it was helpful to have the projector provide persistent guidance after ultrasound scanning. The trade-off is that segmentation is required. Projection of ultrasound images enhanced to provide high contrast between the tumour and the background is a possible solution. A key discovery is that the P-POV mode of the PARIS is an effective visualization strategy and that an orthographic projection parallel to the direction of excision is a good strategy for augmented reality navigation. The direction of excision and the associated parallel orthographic projection are usually perpendicular to the surface but could theoretically be in any direction that provides a short path and avoids critical anatomy. Such advanced guidance could be implemented by adapting one of the many graphics techniques described in the surgical guidance literature to the projector. In a more general sense, the PARIS P-POV mode is akin to giving the surgeon an "eye in the hand", as explored in other applications, so that the moving projection image gives valuable dynamic visual cues to the surgeon.

The user study size limits the ability to make definitive statements about the superiority of a particular method. The main outcome is that when the surgeons used the PARIS there was a statistically significant reduction in the amount of healthy tissue excised, and the surgeons who used it felt more confident and had a better self-reported spatial understanding of the underlying anatomy. The user study included a novice and an expert surgeon. The novice and expert had comparable negative margin rates. In both the LUS-only and the PARIS case, the novice surgeon excised more healthy tissue than the expert surgeon. However, when the novice surgeon used the PARIS, he excised less healthy tissue than the expert did when using the LUS only. Thus, the PARIS may help novice surgeons reach expert surgical competency, as measured by margin rates and healthy tissue excised, much earlier in their training. The PARIS may also help surgeons better understand how to interpret LUS images, so that even if they go back to operating without the PARIS they will have a better understanding of what lies beneath the tissue surface.

This work concludes that the PARIS is a relatively simple, easily integrated system with the potential to provide valuable guidance with sufficient ease and accuracy in laparoscopic surgery. The DART provides a tumour-centric reference for relative measurements of the LUS transducer and projector, which minimizes errors. The dual use of the projector for additional features and guidance information is feasible. This guidance is an adjunct to, not a replacement for, standard practice. Further study is needed to demonstrate utility in vivo, where the challenges of bleeding, smoke and specular reflections arise.

Chapter 6 - Conclusion

This chapter is an overview of the research and key findings of this thesis. A summary of the research findings is presented and several avenues for future work are proposed.
6.1 Summary of Findings

The goal of this thesis was to advance the field of image-guided surgery in an effort to improve the surgical treatment of patients. In the context of this thesis, the aim is to improve surgical treatment by increasing the accuracy of surgeons and reducing the amount of healthy tissue excised. The specific objectives set to achieve that goal were:

• Objective 1: Create and test the Augmented Reality Ultrasound Navigation System (ARUNS) for laparoscopic surgery.
• Objective 2: Create and test the Projector-based Augmented Reality Intracorporeal System (PARIS) for laparoscopic surgery.
• Objective 3: Test the hypothesis that the PARIS improves tumour resection accuracy, reduces the amount of healthy tissue excised and improves the surgeon's spatial understanding of the underlying anatomy.

To complete those objectives, novel concepts and devices were developed that improved ultrasound imaging accuracy, accounted for tissue deformation and tissue movement during surgery and offered new augmented reality display techniques for laparoscopic surgery. A brief summary of the main conclusions from each chapter follows.

Chapter 2 - Calibration and Stereo Tracking of a Laparoscopic Ultrasound Transducer for Augmented Reality in Surgery

Ultrasound calibration is required for any image-guided intervention that uses tracked ultrasound. Ultrasound calibration is performed to determine the physical relationship between the ultrasound image coordinate system and the coordinate system of the tracking sensor. Thus, optimizing the ultrasound calibration step is critical for successful image-guided interventions with ultrasound. In the context of MIS, the LUS can be tracked by an external sensor or by direct optical tracking with the laparoscope. In MIS, the baseline of the stereo laparoscope, that is, the distance between the two cameras, is limited by the size of the incisions and cannulas used. The narrow baseline makes stereo tracking less accurate. However, given that the optical fiducial on the LUS is rigidly mounted and therefore fixed, ultrasound calibration needs to be done only once. Thus, the key realization that motivated the research in this chapter was that the limitations placed on the diameter of the stereo laparoscope in MIS need not apply during the ultrasound calibration stage. Once that realization had been spelled out, the next step was to test the hypothesis that using wide baseline stereo cameras during ultrasound calibration would improve the quality of the ultrasound calibration. Testing that hypothesis was the focus of this chapter. The conclusion is that using wide baseline cameras during ultrasound calibration improves ultrasound point reconstruction accuracy by 1.8 mm, from 3.1 mm to 1.3 mm.

Chapter 3 - Augmented Reality Imaging for Robot-Assisted Partial Nephrectomy Surgery

In this chapter, the focus shifts from optimizing the ultrasound calibration of a LUS to using the LUS for image-guided surgery. The ultimate goal of this work is that the ARUNS will enable the surgeon to perform an accurate, complete excision of the kidney tumour while preserving as much of the healthy kidney as possible. The surgeon who tested the ARUNS felt that the tool-to-tumour proximity stoplight warning system was the most helpful feature of the ARUNS. The ARUNS also demonstrated that a direct augmentation of the laparoscopic point of view is not intuitive. This informed the decision to focus on the projector point-of-view AR display for the PARIS in Chapter 5.
Chapter 3 - Augmented Reality Imaging for Robot-Assisted Partial Nephrectomy Surgery

In this chapter, the focus shifts from optimizing the ultrasound calibration of a LUS to using the LUS for image-guided surgery. The ultimate goal of this work is for the ARUNS to enable the surgeon to perform an accurate, complete excision of the kidney tumour while preserving as much of the healthy kidney as possible. The surgeon who tested the ARUNS felt that the tool-to-tumour proximity stoplight warning system was its most helpful feature. The ARUNS also demonstrated that a direct augmentation of the laparoscopic point of view is not intuitive. This informed the decision to focus on the projector perspective point-of-view AR display for the PARIS in Chapter 5.

The ARUNS for robot-assisted minimally invasive surgery was built and tested via a simulated partial nephrectomy user study. The standard direct AR ultrasound overlay strategy (i.e., revising the surgeon’s main video display by adding graphic overlays) is forgone in favour of showing the surgeon virtual views of the kidney tumour and of the instruments in separate views that are orthogonal to the surgeon’s main view. The ARUNS includes a novel surgical navigation marker called the Dynamic Augmented Reality Tracker (DART). The DART is inserted into the kidney surface, and the DART and LUS are tracked during an intra-operative freehand ultrasound scan of the tumour. After the ultrasound scan, the system continues to track the DART and displays the segmented 3D tumour and the location of the surgical instruments relative to the tumour throughout the surgery. The point reconstruction precision with the ultrasound was 0.9 mm, the da Vinci kinematics instrument tracking (dVKIT) error was 1.5 mm and the total system error was 5.1 mm. The total system error is a function of the accuracy of the ultrasound calibration, the camera calibration and the da Vinci kinematic instrument tracking. The system was evaluated by an expert surgeon who used the DART and the ARUNS to excise a tumour from a kidney phantom. This work serves as a preliminary evaluation in anticipation of further refinement and validation in vivo.

Chapter 4 - Pico Lantern: Surface Reconstruction and Augmented Reality in Laparoscopic Surgery Using a Pick-Up Laser Projector

In this chapter, the Pico Lantern, a novel device for surface reconstruction and augmented reality, is presented. The Pico Lantern is a miniature projector developed for structured light surface reconstruction, augmented reality and guidance in laparoscopic surgery. It is used to directly illuminate the surgical scene for the purpose of guidance. During surgery it is dropped into the patient and picked up by a laparoscopic tool. While inside the patient it projects a known coded pattern and images onto the surface of the tissue. The Pico Lantern is visually tracked in the laparoscope’s field of view for the purpose of stereo triangulation between it and the laparoscope. In this chapter, the first application is surface reconstruction. Using a stereo laparoscope and an untracked Pico Lantern, the absolute error for surface reconstruction of a plane, a cylinder and an ex vivo kidney is 2.0 mm, 3.0 mm and 5.6 mm respectively. Using a mono laparoscope and a tracked Pico Lantern for the same plane, cylinder and ex vivo kidney, the absolute error is 1.4 mm, 1.5 mm and 1.5 mm respectively. These results confirm the benefit of the wider baseline produced by tracking the Pico Lantern. Virtual viewpoint images are generated from the kidney surface data and an in vivo proof-of-concept porcine trial is reported. Surface reconstruction of the neck of a volunteer shows that the pulsatile motion of the tissue overlying a major blood vessel can be detected and displayed in vivo. Future work will integrate the Pico Lantern into standard and robot-assisted laparoscopic surgery.
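Detecting that pulsatile motion amounts to finding the dominant frequency in a time series of reconstructed surface depths. The following is a minimal sketch on synthetic data; the frame rate, amplitude and simulated heart rate are assumptions, not the values measured in the volunteer experiment.

```python
import numpy as np

# Sketch: detect pulsatile surface motion via the dominant FFT frequency.
fs = 30.0                                     # frames per second (assumed)
t = np.arange(0, 10, 1 / fs)                  # 10 s of surface samples
depth = 0.3 * np.sin(2 * np.pi * 1.2 * t)     # 1.2 Hz (~72 bpm) pulsation, mm
depth += 0.05 * np.random.default_rng(0).normal(size=t.size)  # sensor noise

spectrum = np.abs(np.fft.rfft(depth - depth.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {peak:.2f} Hz (~{60 * peak:.0f} beats per minute)")
```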
Chapter 5 - Follow the Light: Projector-based Augmented Reality for Intraoperative Surgical Planning in Minimally Invasive Surgery

This chapter continues to build on the themes of intra-operative imaging and augmented reality computer-assisted and image-guided surgery from Chapters 2 and 3. The use of the DART and the use of the LUS to image the tumour and create a 3D tumour model remain the same. However, the tumour outline is shown to the surgeon using projection-onto-patient augmented reality instead of static video-display augmented reality. In this chapter, the Pico Lantern concept from Chapter 4 is significantly extended. The novel PARIS includes a miniature tracked projector, a navigation marker called the DART and a LUS. The PARIS displays the orthographic projection of the kidney cancer tumour on the kidney surface. The system accuracy and feasibility of use are evaluated in a user study in which two surgeons performed 16 simulated partial nephrectomies with the PARIS for guidance and 16 simulated partial nephrectomies with a LUS for guidance. With the PARIS there was a statistically significant reduction in the amount of healthy tissue excised, and trends toward a more accurate dissection around the tumour and a higher rate of negative margins. The combined point tracking and reprojection error of the PARIS is 0.8 mm. Qualitative feedback about the PARIS supports the hypothesis that it is an effective surgical navigation tool which improved metrics of simulated laparoscopic partial nephrectomies.

6.2 Limitations

The ARUNS and the PARIS have been tested and characterized with phantoms and ex vivo kidneys. There have also been a few in vivo tests of individual components of the systems. However, no in vivo tests have been done on the complete systems. This is primarily because it is infeasible to obtain Health Canada approval for new hardware and software for use on humans in surgery, and infeasible to redesign the prototypes for sterile use.

A limitation of both the ARUNS and the PARIS is that the ultrasound images are manually segmented to create 3D tumour volumes. After the ultrasound scanning and volume reconstruction of the ultrasound images, it takes about 30 seconds to segment the tumour using a 3D spherical paintbrush segmenting tool. Given that operating room time is very expensive, even a 30-second increase in the length of an operation is a significant consideration.

A limitation of the DART is that it must always be in the field of view. If the DART leaves the field of view or is occluded for some other reason, all surgical navigation functionality is lost.

The PARIS is designed to include the Pico Lantern, and the Pico Lantern is intended to be a miniature projector for MIS. However, the smallest prototype of the Pico Lantern is 28 mm in diameter; to fit through a cannula in MIS it should be no more than 10 mm in diameter. Secondly, the projector used in Chapter 5 is much larger than the cannula opening. No attempt was made to miniaturize it, unlike the Pico Lantern in Chapter 4, which was a miniature projector for surgery. Significant engineering work would be required to make a fully functional Pico Lantern that meets both standard medical device requirements and MIS size requirements.

Lastly, the PARIS currently performs surface reconstruction with semi-global block matching stereo reconstruction. The long-term goal is to replace stereo surface reconstruction with surface reconstruction method #2, which was invented and described in Chapter 4.
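For reference, the semi-global block matching step mentioned above is available off the shelf; the following is a minimal OpenCV sketch. The file names, the source of the Q matrix and all parameter values are placeholders, not the tuned settings used in the PARIS.

```python
import cv2
import numpy as np

# Sketch: semi-global block matching on rectified stereo laparoscope frames.
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)    # placeholder
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)  # placeholder
Q = np.load("Q.npy")  # 4x4 reprojection matrix from cv2.stereoRectify

block = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,         # search range; must be divisible by 16
    blockSize=block,
    P1=8 * block * block,      # smoothness penalties, following the
    P2=32 * block * block,     # rule of thumb in the OpenCV documentation
    uniquenessRatio=10,
)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed point
surface = cv2.reprojectImageTo3D(disparity, Q)  # per-pixel 3D organ surface
```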
6.3 Future Work

In this section the future work for each chapter in this thesis is described.

Chapter 2 - Calibration and Stereo Tracking of a Laparoscopic Ultrasound Transducer for Augmented Reality in Surgery

The next steps for this project include real-time implementation, multi-sensor tracking, on-the-fly camera calibration, clinical validation and further accuracy improvements. The first step for real-time implementation is to replace the offline, manual tracking of the LUS that was used in the original experiments with the real-time tracking of an optical fiducial on the LUS that was developed for Chapters 3-5. Multi-sensor tracking and fusion of the LUS during the ultrasound calibration stage would likely further improve the accuracy of ultrasound tracking and, in turn, the calibration accuracy. The custom-built pick-up LUS [29] used in this chapter and throughout this thesis has a built-in EM sensor and can be grasped repeatedly in the same orientation by the da Vinci surgical system. Thus, optical tracking, EM sensor tracking and da Vinci kinematic tracking are all possible, and the triple tracking results could be combined to make one highly accurate tracking read-out [69].

This chapter showed that a change of focus of the stereo laparoscope dramatically changes the LUS optical tracking accuracy due to the change in the intrinsic parameters of the camera. It is common for the surgeon to change the focus of the laparoscope during an operation, so this is a barrier to augmented reality ultrasound. One potential solution is on-the-fly camera calibration using the optical fiducial on the LUS as the calibration target in place of a checkerboard. Another solution to account for the change in camera focus would be to implement the strategy described by Pratt et al., which is to construct a model of the stereo endoscope over the range of focus settings used by the surgeon. This allows a single view of reference geometry to be used to update the camera calibration and the associated model of the camera intrinsics [103].
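One way such on-the-fly calibration could be prototyped is sketched below using OpenCV’s asymmetric circle-grid detector, since the KeyDot® fiducial is an asymmetric dot pattern. The grid dimensions, dot spacing and image file names are placeholders rather than the real KeyDot® geometry, and this is a sketch, not the thesis’s implementation.

```python
import glob
import cv2
import numpy as np

GRID = (4, 11)   # circles per row and per column (placeholder values)
SPACING = 1.0    # spacing between circle centres in mm (placeholder value)

# Known planar 3D positions of the dots, following the usual asymmetric
# circle-grid layout (every second row offset by half a spacing).
objp = np.array([[(2 * c + r % 2) * SPACING / 2, r * SPACING / 2, 0]
                 for r in range(GRID[1]) for c in range(GRID[0])], np.float32)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("fiducial_view_*.png"):    # placeholder image files
    frame = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = frame.shape[::-1]
    found, centers = cv2.findCirclesGrid(
        frame, GRID, flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
    if found:
        obj_pts.append(objp)
        img_pts.append(centers)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("RMS reprojection error (pixels):", rms)
```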
Chapter 3 - Augmented Reality Imaging for Robot-Assisted Partial Nephrectomy Surgery

This chapter describes the ARUNS, which was designed to assist the surgeon during laparoscopic partial nephrectomy. Future work includes more user studies of the existing system, improving the navigational display interface and information, modifying the design of the DART and its intended use, and exploring the surgeries for which the ARUNS could be used.

This chapter only includes a single user study, in which the surgeon used the ARUNS to perform one simulated laparoscopic partial nephrectomy. Thus, more user studies are needed. More user studies will allow stronger conclusions to be drawn about the ARUNS’s effect on key surgical metrics, including positive margin rates, excision time and the amount of healthy tissue excised. Running more user studies will also provide a deeper understanding of how the surgeon uses the ARUNS and which surgeries it would benefit.

However, before doing more user studies, it is worthwhile to consider several potential changes to the navigation display interface that would give the surgeon even more helpful guidance cues. The first priority is to add the location of the DART, and the surface in its immediate vicinity, to the virtual views. This will be the anchor between the real world and the augmented virtual views seen by the surgeon, and it will help the surgeon stay oriented when looking at the virtual views. The second priority is to also show blood vessels in the augmented reality orthogonal views. The instrument tracking feature could include a function that adds a virtual extension of the tool so the surgeon can see in advance whether the planned tool path is going to intersect the tumour or other critical structures. Surface reconstruction can be done via stereo surface reconstruction facilitated by structured light using, for example, laser-based solutions or projector-based solutions like the Pico Lantern [104]. Furthermore, the reconstructed surface could be used to provide the surgeon with a true top-down view, as opposed to a view that is orthogonal to the camera viewpoint.

In addition to improvements to the navigational display of the ARUNS, there are promising avenues for improvement of the ultrasound segmentation process and the DART. It may be worthwhile to pursue automatic segmentation of the kidney tumour from the ultrasound images; other research groups have shown this is possible for ultrasound images of solid breast tumours [105]. To improve the DART, an omniphobic coating could be added to repel blood that would otherwise stick to the DART and occlude the KeyDot® pattern that is used for tracking [106]. Customised tumour-based DARTs could be created from preoperative imaging prior to surgery to handle tumours of varying geometries. Also, by using several unique DARTs, surgeons could insert them throughout surgery to provide persistent augmented reality and overcome line-of-sight issues. The virtual views are not limited to rendering one tumour mesh and the da Vinci tools. These applications are all enabled by the relative tracking paradigm created by the DART. While small, there is some deformation between the DART and the kidney tumour; a real-time FEM model and a real-time surface reconstruction algorithm could be developed to account for the deformation that does occur.

Finally, there are many other promising applications for the ARUNS. A first possibility is that it could be generalised to provide image guidance for standard non-robotic laparoscopy. Several applications that may be well suited to the ARUNS are guidance during MIS hepatic or renal tumour resections, preoperative CT to intra-operative ultrasound registration, and display of absolute elastography images [107]. It is possible to display quantitative elastography or time-series data with the ARUNS because the pick-up LUS is made with a linear array from Analogic, which gives researchers access to the unfiltered ultrasound data.
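Of the display improvements listed above, the virtual tool extension reduces to a ray-intersection test. The following is a minimal sketch that approximates the tumour as a sphere for illustration; a real system would test against the segmented tumour mesh, and all values are illustrative.

```python
import numpy as np

def tool_path_hits_tumour(tip, direction, centre, radius, max_reach_mm=50.0):
    """Ray-sphere test: does extending the tool tip along `direction`
    intersect a spherical tumour approximation within max_reach_mm?"""
    d = direction / np.linalg.norm(direction)
    oc = tip - centre
    b = oc @ d                         # ray-sphere quadratic: t^2 + 2bt + c = 0
    c = oc @ oc - radius ** 2
    disc = b * b - c
    if disc < 0:
        return False                   # the extended path misses the sphere
    t = -b - np.sqrt(disc)             # distance to the nearest intersection
    return 0.0 <= t <= max_reach_mm

# Example: tool tip 20 mm from the centre of a 10 mm tumour, aimed at it.
print(tool_path_hits_tumour(np.array([0.0, 0.0, 0.0]),
                            np.array([0.0, 0.0, 1.0]),
                            centre=np.array([0.0, 0.0, 20.0]),
                            radius=10.0))   # True: warn the surgeon
```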
Chapter 5 - Follow the Light: Projector-based Augmented Reality for Intraoperative Surgical Planning in Minimally Invasive Surgery

The focus of Chapter 5 was the development and testing of the PARIS. The PARIS user study described in Chapter 5 showed promising results, such as a statistically significant reduction in the amount of healthy tissue excised and a trend towards a lower positive margin rate. However, there is still scope for further work. Possible options include: 1) more user studies on phantoms and on ex vivo kidneys; 2) more augmented reality guidance cues; 3) a comparison of static-video versus projection-onto-patient augmented reality; 4) development of real-time, automatic surface reconstruction using a mono laparoscope and the Pico Lantern; 5) combining the Pico Lantern augmented reality system with an augmented reality system that provides surgical guidance during the execution stage of the surgery [15]; and 6) a study comparing the augmented reality system to fluorescence imaging of blood vessels and tumours in the kidney. For the augmented reality guidance cues, safety margins can be added to the projected outlines to help the surgeons start the dissection with the margin size that they desire.

6.4 Conclusion

This thesis presented work intended to improve augmented reality, computer-assisted and image-guided surgery. The first objective of the thesis was to create and test the ARUNS, and the second objective was to do the same for the PARIS. The motivation for developing the ARUNS and the PARIS was to enable surgeons to more accurately remove cancerous tumours while sparing as much healthy tissue as possible. In 32 ex vivo simulated laparoscopic partial nephrectomy phantom surgeries, the PARIS did just that.

Objective 1 was accomplished via the work described in Chapter 2 and Chapter 3. Concrete conclusions from that research are that the novel wide-baseline ultrasound calibration technique had a pinhead reconstruction error of 1.3 mm, the ARUNS had a total system error of 5.1 mm, FEM analysis showed that the deformation between the DART and the kidney tumour was no greater than 1 mm, and in a preliminary user study the surgeon was enthusiastic about the ARUNS and particularly valued the tumour proximity stoplight warning feature. Objective 2 was accomplished via the work described in Chapter 4 and Chapter 5. Concrete conclusions from that research are that the Pico Lantern surface reconstruction accuracy is approximately 1.5 mm and the Pico Lantern point reprojection error is 0.8 mm. Objective 3 was accomplished when it was shown that the PARIS increased the surgeon’s spatial awareness of the underlying anatomy and resulted in a statistically significant reduction in the healthy tissue excised. Noteworthy characteristics of the ARUNS and the PARIS are that the ARUNS provides surgical guidance during the execution and dissection stage of the surgery and that the goal of creating a navigation system with a total error of less than 5 mm was met. This 5 mm goal is important because, as discussed earlier in the thesis, surgeons aim for a 5 mm margin when dissecting kidney tumours.

This thesis addressed important challenges in the field of image-guided and computer-assisted surgery, including ultrasound calibration, guidance throughout the surgery and intuitive display of information to the surgeon. The work from this thesis will hopefully contribute to improving image-guided and computer-assisted surgery, which will in turn make surgery safer and more successful in the future. This is particularly important for the 50,000 Canadians diagnosed with liver, stomach, pancreatic, kidney, bladder or prostate cancer each year, many of whom are treated surgically.

Bibliography

[1] R. Rohling, P. Edgcumbe, and C. Nguan, “Imagery System,” U.S. Patent Application 15/183,458, 2016.
[2] Canadian Cancer Society’s Advisory Committee on Cancer Statistics, “Canadian Cancer Statistics 2015.”
[3] T. Gudbjartsson, A. Thoroddsen, V. Petursdottir, S. Hardarson, J. Magnusson, and G. V. Einarsson, “Effect of Incidental Detection for Survival of Patients with Renal Cell Carcinoma: Results of Population-Based Study of 701 Patients,” Urology, vol. 66, pp. 1186–1191.
[4] Junqueira’s Basic Histology, 14th ed. McGraw-Hill Education.
[5] J. W. Mashni et al., “New Chronic Kidney Disease and Overall Survival after Nephrectomy for Small Renal Cortical Tumors,” Urology, vol. 86, no. 6, pp. 1137–1143, 2015.
[6] H. Van Poppel et al., “A Prospective, Randomised EORTC Intergroup Phase 3 Study Comparing the Oncologic Outcome of Elective Nephron-Sparing Surgery and Radical Nephrectomy for Low-Stage Renal Cell Carcinoma,” Eur. Urol., vol. 59, no. 5, pp. 543–552, 2010.
[7] W. C. Huang et al., “Chronic kidney disease after nephrectomy in patients with renal cortical tumours: a retrospective cohort study,” Lancet Oncol., vol. 7, no. 9, pp. 735–740, 2006.
[8] S. E. Sutherland, M. I. Resnick, G. T. Maclennan, and H. B. Goldman, “Does the Size of the Surgical Margin in Partial Nephrectomy for Renal Cell Cancer Really Matter?,” J. Urol., vol. 167, pp. 61–64, 2002.
[9] I. S. Gill et al., “Laparoscopic partial nephrectomy for renal tumor: duplicating open surgical techniques,” J. Urol., vol. 167, no. 2, pp. 469–476, 2002.
[10] G. Trottier, “Determining the best warm ischemic time for patients undergoing partial nephrectomy for renal cancer,” Can. Urol. Assoc. J., vol. 5, no. 1, p. 44, 2011.
[11] M. Carini, A. Minervini, L. Masieri, A. Lapini, and S. Serni, “Simple Enucleation for the Treatment of PT1a Renal Cell Carcinoma: Our 20-Year Experience,” Eur. Urol., vol. 50, no. 6, pp. 1263–1271, 2006.
[12] J. McAninch, T. Lue, and D. Smith, Smith & Tanagho’s General Urology. New York, NY, USA: McGraw-Hill Medical, 2013.
[13] I. S. Gill et al., “Comparison of 1,800 Laparoscopic and Open Partial Nephrectomies for Single Renal Tumors,” J. Urol., vol. 178, no. 1, pp. 41–46, 2007.
[14] J. Marescaux and M. Diana, “Next step in minimally invasive surgery: hybrid image-guided surgery,” J. Pediatr. Surg., vol. 50, pp. 30–36, 2015.
[15] S. Bernhardt, S. A. Nicolau, L. Soler, and C. Doignon, “The status of augmented reality in laparoscopic surgery as of 2016,” Med. Image Anal., vol. 37, pp. 66–90, 2017.
[16] J. D. Sammon et al., “Robot-assisted vs. laparoscopic partial nephrectomy: Utilization rates and perioperative outcomes,” Int. Braz J Urol, vol. 39, no. 3, pp. 377–386, 2013.
[17] M. J. Barry, P. M. Gallagher, J. S. Skinner, and F. J. Fowler, “Adverse effects of robotic-assisted laparoscopic versus open retropubic radical prostatectomy among a nationwide random sample of Medicare-age men,” J. Clin. Oncol., vol. 30, no. 5, pp. 513–518, 2012.
[18] U. Mezger, C. Jendrewski, and M. Bartels, “Navigation in surgery,” Langenbecks Arch. Surg., vol. 398, pp. 501–514, 2013.
[19] P. V. Pandharipande et al., “Changes in Physician Decision Making after CT: A Prospective Multicenter Study in Primary Care Settings,” Radiology, vol. 281, no. 3, pp. 835–846, 2016.
[20] I. Rabbi and S. Ullah, “A Survey on Augmented Reality Challenges and Tracking,” Acta Graph., vol. 24, pp. 29–46, 2013.
[21] D. W. Roberts, J. W. Strohbehn, J. F. Hatch, W. Murray, and H. Kettenberger, “A frameless stereotaxic integration of computerized tomographic imaging and the operating microscope,” J. Neurosurg., vol. 65, pp. 545–549, 1986.
[22] J. Wadley, N. Dorward, N. Kitchen, and D. Thomas, “Pre-operative planning and intra-operative guidance in modern neurosurgery: A review of 300 cases,” Ann. R. Coll. Surg. Engl., vol. 81, no. 4, pp. 217–225, 1999.
[23] W. E. L. Grimson et al., “An Automatic Registration Method for Frameless Stereotaxy, Image Guided Surgery, and Enhanced Reality Visualization,” IEEE Trans. Med. Imaging, vol. 15, no. 2, pp. 129–140, 1996.
[24] C. Schneider, C. Nguan, M. Longpre, R. Rohling, and S. Salcudean, “Motion of the kidney between preoperative and intraoperative positioning,” IEEE Trans. Biomed. Eng., vol. 60, no. 6, pp. 1619–1627, Jun. 2013.
[25] Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 11, pp. 1330–1334, 2000.
[26] M. Zijlmans, T. Langø, E. Fagertun Hofstad, C. F. P. Van Swol, and A. Rethy, “Liver shift and deformation due to pneumoperitoneum in an animal model,” Minim. Invasive Ther. Allied Technol., vol. 21, no. 3, pp. 241–248, 2012.
[27] R. Song, A. Tipirneni, P. Johnson, R. B. Loeffler, and C. M. Hillenbrand, “Evaluation of respiratory liver and kidney movements for MRI navigator gating,” J. Magn. Reson. Imaging, vol. 33, no. 1, pp. 143–148, 2011.
[28] C. Våpenstad et al., “Laparoscopic ultrasound: a survey of its current and future use, requirements, and integration with navigation technology,” Surg. Endosc., vol. 24, no. 12, pp. 2944–2953, Dec. 2010.
[29] C. Schneider, J. Guerrero, C. Nguan, R. Rohling, and S. Salcudean, “Intra-operative ‘Pick-Up’ Ultrasound for Robot Assisted Surgery with Vessel Extraction and Registration: A Feasibility Study,” in Information Processing in Computer-Assisted Interventions (IPCAI), vol. 6689, pp. 122–132, 2011.
[30] “Personalized, relevance-based Multimodal Robotic Imaging and augmented reality for Computer Assisted Interventions,” Med. Image Anal., vol. 33, pp. 64–71, 2016.
[31] M. Kersten-Oertel, P. Jannin, and D. L. Collins, “The state of the art of visualization in mixed reality image guided surgery,” Comput. Med. Imaging Graph., vol. 37, pp. 98–112, 2013.
[32] N. C. Buchs et al., “Augmented environments for the targeting of hepatic lesions during image-guided robotic liver surgery,” J. Surg. Res., vol. 184, no. 2, pp. 825–831, 2013.
[33] N. Mahmoud et al., “On-patient see-through augmented reality based on visual SLAM,” Int. J. Comput. Assist. Radiol. Surg., vol. 12, no. 1, pp. 1–11, 2017.
[34] G. Fichtinger et al., “Image overlay guidance for needle insertion in CT scanner,” IEEE Trans. Biomed. Eng., vol. 52, no. 8, pp. 1415–1424, 2005.
[35] C. R. Weiss, D. R. Marker, G. S. Fischer, G. Fichtinger, A. J. Machado, and J. A. Carrino, “Augmented reality visualization using image-overlay for MR-guided interventions: System description, feasibility, and initial evaluation in a spine phantom,” Am. J. Roentgenol., vol. 196, no. 3, pp. 305–307, 2011.
[36] H. Fuchs et al., “Augmented Reality Visualization for Laparoscopic Surgery,” in Proc. First Int. Conf. Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 934–943, 1998.
[37] A. Osorio et al., “Real time planning, guidance and validation of surgical acts using 3D segmentations, augmented reality projections and surgical tools video tracking,” Proc. SPIE, vol. 7625, pp. 762529-1–762529-11, 2010.
[38] B. J. Dixon, M. J. Daly, H. Chan, A. D. Vescan, I. J. Witterick, and J. C. Irish, “Surgeons blinded by enhanced navigation: the effect of augmented reality on attention,” Surg. Endosc., vol. 27, no. 2, pp. 454–461, Feb. 2013.
[39] R. Wang, Z. Geng, Z. Zhang, R. Pei, and X. Meng, “Autostereoscopic augmented reality visualization for depth perception in endoscopic surgery,” Displays, vol. 48, pp. 50–60, 2017.
[40] C. Bichlmeier, T. Sielhorst, S. M. Heining, and N. Navab, “Improving depth perception in medical AR: a virtual vision panel to the inside of the patient,” Inform. aktuell, pp. 217–221, 2007.
[41] O. Shahin, A. Beširević, M. Kleemann, and A. Schlaefer, “Ultrasound-based tumor movement compensation during navigated laparoscopic liver interventions,” Surg. Endosc. Other Interv. Tech., vol. 28, no. 5, pp. 1734–1741, 2014.
[42] G. A. Puerto-Souza, J. A. Cadeddu, and G. Mariottini, “Toward Long-Term and Accurate Augmented-Reality for Monocular Endoscopic Videos,” IEEE Trans. Biomed. Eng., vol. 61, no. 10, pp. 2609–2620, 2014.
[43] T. Simpfendörfer et al., “Intraoperative Computed Tomography Imaging for Navigated Laparoscopic Renal Surgery: First Clinical Experience,” J. Endourol., vol. 30, no. 10, pp. 1105–1111, 2016.
[44] P. Pratt, D. Stoyanov, M. Visentini-Scarzanella, and G.-Z. Yang, “Dynamic guidance for robotic surgery using image-constrained biomechanical models,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2010, Springer, 2010, pp. 77–85.
[45] A. B. Benincasa, L. W. Clements, S. D. Herrell, and R. L. Galloway, “Feasibility study for image-guided kidney surgery: Assessment of required intraoperative surface for accurate physical to image space registrations,” Med. Phys., vol. 35, no. 9, p. 4251, 2008.
[46] L. Maier-Hein et al., “Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery,” Med. Image Anal., vol. 17, no. 8, pp. 974–996, 2013.
[47] M. Hayashibe, N. Suzuki, and Y. Nakamura, “Laser-scan endoscope system for intraoperative geometry acquisition and surgical robot safety management,” Med. Image Anal., vol. 10, no. 4, pp. 509–519, 2006.
[48] X. Maurice, C. Albitar, C. Doignon, and M. de Mathelin, “A structured light-based laparoscope with real-time organs’ surface reconstruction for minimally invasive surgery,” in Engineering in Medicine and Biology Society (EMBC), 2012 Annual International Conference of the IEEE, 2012, pp. 5769–5772.
[49] A. Reiter, A. Sigaras, D. Fowler, and P. K. Allen, “Surgical Structured Light for 3D minimally invasive surgical imaging,” in Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, 2014, pp. 1282–1287.
[50] C. Schmalz, F. Forster, A. Schick, and E. Angelopoulou, “An endoscopic 3D scanner based on structured light,” Med. Image Anal., vol. 16, no. 5, pp. 1063–1072, 2012.
[51] N. T. Clancy, D. Stoyanov, L. Maier-Hein, A. Groch, G.-Z. Yang, and D. S. Elson, “Spectrally encoded fiber-based structured lighting probe for intraoperative 3D imaging,” Biomed. Opt. Express, vol. 2, no. 11, pp. 3119–3128, 2011.
[52] C. Hansen, J. Wieferich, F. Ritter, C. Rieder, and H.-O. Peitgen, “Illustrative visualization of 3D planning models for augmented reality in liver surgery,” Int. J. Comput. Assist. Radiol. Surg., vol. 5, no. 2, pp. 133–141, 2010.
[53] S. A. Nicolau, J. Brenot, L. Goffin, P. Graebling, L. Soler, and J. Marescaux, “A structured light system to guide percutaneous punctures in interventional radiology,” in Photonics Europe, 2008, p. 700016.
[54] K. Gavaghan et al., “Evaluation of a portable image overlay projector for the visualisation of surgical navigation data: phantom studies,” Int. J. Comput. Assist. Radiol. Surg., vol. 7, no. 4, pp. 547–556, Jul. 2012.
[55] C. Hennersperger, J. Manus, and N. Navab, “Mobile Laserprojection in Computer Assisted Neurosurgery,” Lect. Notes Comput. Sci., vol. 9805, pp. 151–162, 2016.
[56] L. W. Clements, P. Dumpuri, W. C. Chapman, B. M. Dawant, R. L. Galloway, and M. I. Miga, “Organ Surface Deformation Measurement and Analysis in Open Hepatic Surgery: Method and Preliminary Results From 12 Clinical Cases,” IEEE Trans. Biomed. Eng., vol. 58, no. 8, 2011.
[57] T. J. Carter, M. Sermesant, D. M. Cash, D. C. Barratt, C. Tanner, and D. J. Hawkes, “Application of soft tissue modelling to image-guided surgery,” Med. Eng. Phys., vol. 27, pp. 893–909, 2005.
[58] M. C. Yip, D. G. Lowe, S. E. Salcudean, R. N. Rohling, and C. Y. Nguan, “Tissue Tracking and Registration for Image-Guided Surgery,” IEEE Trans. Med. Imaging, vol. 31, no. 11, 2012.
[59] T. Collins, A. Bartoli, N. Bourdel, and M. Canis, “Robust, real-time, dense and deformable 3D organ tracking in laparoscopic videos,” Lect. Notes Comput. Sci., vol. 9900, pp. 404–412, 2016.
[60] H. O. Altamar et al., “Kidney deformation and intraprocedural registration: a study of elements of image-guided kidney surgery,” J. Endourol., vol. 25, no. 3, pp. 511–517, 2011.
[61] E. Wild et al., “Robust augmented reality guidance with fluorescent markers in laparoscopic surgery,” Int. J. Comput. Assist. Radiol. Surg., vol. 11, no. 6, pp. 899–907, 2016.
[62] A. Hughes-Hallett, E. K. Mayer, P. Pratt, A. Mottrie, A. Darzi, and J. Vale, “The current and future use of imaging in urological robotic surgery: a survey of the European Association of Robotic Urological Surgeons,” Int. J. Med. Robot., vol. 11, pp. 8–14, 2015.
[63] A. Hughes-Hallett et al., “Augmented Reality Partial Nephrectomy: Examining the Current Status and Future Perspectives,” Urology, vol. 83, no. 2, pp. 266–273, 2014.
[64] T. Langø et al., “Navigated laparoscopic ultrasound in abdominal soft tissue surgery: technological overview and perspectives,” Int. J. Comput. Assist. Radiol. Surg., vol. 7, pp. 585–599, 2012.
[65] C. Schneider, G. Dachs, C. Hasser, M. Choti, S. DiMaio, and R. Taylor, “Robot-assisted laparoscopic ultrasound,” in Information Processing in Computer-Assisted Interventions (IPCAI 2010), pp. 67–80, 2010.
[66] J. Leven et al., “DaVinci Canvas: A telerobotic surgical system with integrated, robot-assisted, laparoscopic ultrasound capability,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2005, Springer, 2005, pp. 811–818.
[67] P. Pratt, A. Di Marco, C. Payne, A. Darzi, and G.-Z. Yang, “Intraoperative ultrasound guidance for transanal endoscopic microsurgery,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2012, Springer, 2012, pp. 463–470.
[68] C. L. Cheung, C. Wedlake, J. Moore, S. E. Pautler, and T. M. Peters, “Fused video and ultrasound images for minimally invasive partial nephrectomy: a phantom study,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2010, Springer, 2010, pp. 408–415.
[69] M. Feuerstein, T. Reichl, J. Vogel, J. Traub, and N. Navab, “Magneto-optical tracking of flexible laparoscopic ultrasound: model-based online detection and correction of magnetic tracking errors,” IEEE Trans. Med. Imaging, vol. 28, no. 6, pp. 951–967, 2009.
[70] D. A. Wang, F. Bello, and A. Darzi, “Augmented reality provision in robotically assisted minimally invasive surgery,” in International Congress Series, 2004, vol. 1268, pp. 527–532.
[71] D. Stoyanov, A. Darzi, and G.-Z. Yang, “Laparoscope self-calibration for robotic assisted minimally invasive surgery,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2005, Springer, 2005, pp. 114–121.
[72] J.-Y. Bouguet, “Camera Calibration Toolbox for Matlab,” 2013. [Online]. Available: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html.
[73] A. Ali and R. Logeswaran, “A visual probe localization and calibration system for cost-effective computer-aided 3D ultrasound,” Comput. Biol. Med., vol. 37, no. 8, pp. 1141–1147, 2007.
[74] L. Mercier, T. Langø, F. Lindseth, D. L. Collins, et al., “A review of calibration techniques for freehand 3-D ultrasound systems,” Ultrasound Med. Biol., vol. 31, no. 2, pp. 143–166, 2005.
[75] C. P. Oates, “Towards an ideal blood analogue for Doppler ultrasound phantoms,” Phys. Med. Biol., vol. 36, no. 11, p. 1433, 1991.
[76] B. K. P. Horn, “Closed-form solution of absolute orientation using unit quaternions,” JOSA A, vol. 4, no. 4, pp. 629–642, 1987.
[77] P.-W. Hsu, G. M. Treece, R. W. Prager, N. E. Houghton, and A. H. Gee, “Comparison of freehand 3-D ultrasound calibration techniques using a stylus,” Ultrasound Med. Biol., vol. 34, no. 10, pp. 1610–1621, 2008.
[78] P. Pratt et al., “Robust ultrasound probe tracking: initial clinical experiences during robot-assisted partial nephrectomy,” Int. J. Comput. Assist. Radiol. Surg., vol. 10, pp. 1905–1913, 2015.
[79] A. Hughes-Hallett, P. Pratt, J. Dilley, J. Vale, A. Darzi, and E. Mayer, “Augmented reality: 3D image-guided surgery,” Cancer Imaging, vol. 15, no. Suppl 1, pp. 5–7, 2015.
[80] D. Teber et al., “Augmented Reality: A New Tool To Improve Surgical Accuracy during Laparoscopic Partial Nephrectomy? Preliminary In Vitro and In Vivo Results,” Eur. Urol., vol. 56, no. 2, pp. 332–338, 2009.
[81] P. A. Yushkevich et al., “User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability,” Neuroimage, vol. 31, no. 3, pp. 1116–1128, 2006.
[82] C. Geuzaine and J.-F. Remacle, “Gmsh: A 3-D finite element mesh generator with built-in pre- and post-processing facilities,” Int. J. Numer. Methods Eng., vol. 79, pp. 1309–1331, 2009.
[83] N. Grenier, J. L. Gennisson, F. Cornelis, Y. Le Bras, and L. Couzi, “Renal ultrasound elastography,” Diagn. Interv. Imaging, vol. 94, no. 5, pp. 545–550, 2013.
[84] D. M. Kwartowitz, S. D. Herrell, and R. L. Galloway, “Toward image-guided robotic surgery: determining intrinsic accuracy of the daVinci robot,” Int. J. Comput. Assist. Radiol. Surg., vol. 1, pp. 157–165, 2006.
[85] M. J. Gooding, S. Kennedy, and J. A. Noble, “Volume Segmentation and Reconstruction from Freehand Three-Dimensional Ultrasound Data with Application to Ovarian Follicle Measurement,” Ultrasound Med. Biol., vol. 34, no. 2, pp. 183–195, 2008.
[86] A. Kutikov and R. G. Uzzo, “The R.E.N.A.L. Nephrometry Score: A Comprehensive Standardized System for Quantitating Renal Tumor Size, Location and Depth,” J. Urol., vol. 182, no. 3, pp. 844–853, 2009.
[87] P. Edgcumbe, C. Nguan, and R. Rohling, “Calibration and Stereo Tracking of a Laparoscopic Ultrasound Transducer for Augmented Reality in Surgery,” in Augmented Reality Environments for Medical Imaging and Computer-Assisted Interventions, Springer, 2013, pp. 258–267.
[88] H. Park, M.-H. Lee, S.-J. Kim, and J.-I. Park, “Surface-independent direct-projected augmented reality,” in Computer Vision – ACCV 2006, Springer, 2006, pp. 892–901.
[89] J.-P. Tardif, S. Roy, and J. Meunier, “Projector-based augmented reality in surgery without calibration,” in Proc. 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2003, vol. 1, pp. 548–551.
[90] G. Falcao, N. Hurtos, J. Massich, and D. Fofi, “Projector-camera calibration toolbox,” 2009.
[91] J.-Y. Bouguet, “Visual methods for three-dimensional modeling,” PhD thesis, California Institute of Technology, 1999.
[92] A. Hughes-Hallett et al., “Intraoperative Ultrasound Overlay in Robot-assisted Partial Nephrectomy: First Clinical Experience,” Eur. Urol., vol. 65, pp. 671–672, 2014.
[93] J. D’Errico, “Surface fitting using gridfit,” MATLAB Central File Exchange, 2005.
[94] P. Pratt et al., “Multimodal reconstruction for image-guided interventions,” 2013, pp. 59–60.
[95] J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics, vol. 3, no. 2, pp. 128–160, 2011.
[96] A. Hughes-Hallett, P. Pratt, E. Mayer, S. Martin, A. Darzi, and J. Vale, “Image Guidance for All: TilePro Display of 3-Dimensionally Reconstructed Images in Robotic Partial Nephrectomy (Reply),” Urology, vol. 84, p. 243, 2014.
[97] S. Röhl et al., “Dense GPU-enhanced surface reconstruction from stereo endoscopic images for intraoperative registration,” Med. Phys., vol. 39, no. 3, pp. 1632–1645, Mar. 2012.
[98] R. Venkatesh et al., “Laparoscopic partial nephrectomy for renal masses: effect of tumor location,” Urology, vol. 67, no. 6, pp. 1169–1174; discussion 1174, Jun. 2006.
[99] C. Schneider, C. Nguan, R. Rohling, and S. Salcudean, “Tracked ‘pick-up’ ultrasound for robot-assisted minimally invasive surgery,” IEEE Trans. Biomed. Eng., vol. 63, no. 2, pp. 260–268, 2016.
[100] P. Edgcumbe, R. Singla, P. Pratt, C. Schneider, C. Nguan, and R. Rohling, “Augmented reality imaging for robot-assisted partial nephrectomy surgery,” Lect. Notes Comput. Sci., vol. 9805, 2016.
[101] H. Park, M. Lee, S. Kim, and J. Park, “Surface-Independent Direct-Projected Augmented Reality,” Lect. Notes Comput. Sci., vol. 3852, pp. 892–901, 2006.
[102] H. Hirschmuller, “Accurate and efficient stereo processing by semi-global matching and mutual information,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), vol. 2, pp. 807–814, 2005.
[103] P. Pratt, C. Bergeles, A. Darzi, and G.-Z. Yang, “Practical intraoperative stereo camera calibration,” in Lecture Notes in Computer Science, 2014, vol. 8674, part 2, pp. 667–675.
[104] P. Edgcumbe, P. Pratt, G.-Z. Yang, C. Nguan, and R. Rohling, “Pico Lantern: Surface reconstruction and augmented reality in laparoscopic surgery using a pick-up laser projector,” Med. Image Anal., vol. 25, no. 1, pp. 95–102, 2015.
[105] R. F. Chang, W. J. Wu, W. K. Moon, and D. R. Chen, “Automatic ultrasound segmentation and morphology based diagnosis of solid breast tumors,” Breast Cancer Res. Treat., vol. 89, no. 2, pp. 179–185, 2005.
[106] D. C. Leslie et al., “A bioinspired omniphobic surface coating on medical devices prevents thrombosis and biofouling,” Nat. Biotechnol., vol. 32, no. 11, pp. 1134–1140, 2014.
[107] C. Schneider, A. Baghani, R. Rohling, and S. Salcudean, “Remote ultrasound palpation for robotic interventions using absolute elastography,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2012, Springer, 2012, pp. 42–49.

Appendix A
A.1 Calculating the Transformation from the DART Coordinate System to the Laparoscopic Surgical Instrument Coordinate System

The DART is introduced in section 3.1 and described in more detail in section 3.2. A unique element of the DART is that it has a repeatable grasp. This means there is a fixed transform from the DART to the surgical instrument. This fixed transform means it is theoretically possible to perform da Vinci kinematic calibration by simply grasping the DART and waving it around while the asymmetric KeyDot® pattern is being tracked in the laparoscopic coordinate system using standard computer vision techniques. In this section, the calculations that were done to determine the transform from the DART coordinate system to the laparoscopic surgical instrument coordinate system are shown.

To do this calculation it is important to define the following three coordinate systems. The DART (D) coordinate system is defined by the asymmetric dot pattern that is either stuck onto or 3D printed onto the DART, with its origin at the origin of the dot pattern. The SolidWorks (SW) coordinate system is defined within the Computer-Aided Design (CAD) software SolidWorks. The Patient Side Manipulator Tip (PSMTip) coordinate system is the coordinate system at the base of the grasping element of the laparoscopic instrument. These coordinate systems are labelled in Figure 42.

Figure 42: Picture of a laparoscopic instrument holding the DART. The labelled coordinate systems are the DART (D), SolidWorks (SW) and Patient Side Manipulator Tip (PSMTip).

The goal is to calculate PSMTipTD, where PSMTipTD = PSMTipTSW * SWTD. PSMTipTSW is calculated by knowing that the SW Y axis and the PSMTip Z axis are parallel and collinear. Furthermore, the angle at which the laparoscopic instrument holds the DART is known, the base of the grasp of the laparoscopic instrument is coincident with the origin of the PSMTip, and the width of the DART is known. Using this information, we calculated that the PSMTip origin was at (0, 21.32 mm, 0) in the SW coordinate system.

PSMTipTSW =
[  0     0    1     0    ]
[ -1     0    0     0    ]
[  0    -1    0    21.32 ]
[  0     0    0     1    ]                                      Equation 10

Next, we calculated SWTD from the coordinates of four dots in the asymmetric dot pattern, expressed in both the SW and D coordinate systems. In the final version of the DART, the asymmetric dot pattern was part of the DART design and was 3D printed directly on the DART, so finding the coordinates of the dots in the SW coordinate system simply involved selecting the dots in the SolidWorks program. The DART coordinate system is defined by the asymmetric dot pattern, so calculating the dot positions in the DART coordinate system was straightforward. Next, we used the coordinates of the dots in the two coordinate systems and Horn's algorithm to calculate the transform between the SW and DART coordinate systems.

SWTD =
[ 0.412   0    0.912   -4.300 ]
[ 0.912   0   -0.412   -3.590 ]
[ 0       1    0       -2.050 ]
[ 0       0    0        1     ]                                 Equation 11

Finally,

PSMTipTD = PSMTipTSW * SWTD =
[  0       1    0       -2.050 ]
[ -0.412   0   -0.912    4.300 ]
[ -0.912   0    0.412   24.91  ]
[  0       0    0        1     ]                                Equation 12
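The following is a minimal sketch of this calculation. It substitutes an SVD-based (Kabsch) solution for the absolute-orientation problem, which solves the same least-squares problem as Horn's quaternion method, and it uses placeholder dot coordinates rather than the real pattern; the grasp transform is the rounded matrix from Equation 10, so the recovered values are only approximate.

```python
import numpy as np

def absolute_orientation(P, Q):
    """Least-squares rigid transform T such that Q ~ R @ P + t, solved via
    SVD (Kabsch); equivalent in result to Horn's quaternion method."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ S @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, Qc - R @ Pc
    return T

# Placeholder dot coordinates in the DART frame (not the real pattern).
dots_D = np.array([[0, 0, 0], [4, 0, 0], [2, 3, 0], [6, 3, 0]], float)

# For the demo, synthesise the SolidWorks coordinates from the (rounded)
# Equation 11 transform; in practice they are selected in SolidWorks.
SW_T_D_true = np.array([[0.412, 0, 0.912, -4.300],
                        [0.912, 0, -0.412, -3.590],
                        [0, 1, 0, -2.050],
                        [0, 0, 0, 1.0]])
dots_SW = (SW_T_D_true[:3, :3] @ dots_D.T).T + SW_T_D_true[:3, 3]

SW_T_D = absolute_orientation(dots_D, dots_SW)      # recover Equation 11
PSMTip_T_SW = np.array([[0, 0, 1, 0],               # Equation 10
                        [-1, 0, 0, 0],
                        [0, -1, 0, 21.32],
                        [0, 0, 0, 1.0]])
PSMTip_T_D = PSMTip_T_SW @ SW_T_D                   # Equation 12's chain
print(np.round(PSMTip_T_D, 3))
```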
Appendix B

B.1 Further Design Considerations for the Pico Lantern

B.1.1 Pico Lantern Electrical Connectors and Size Constraints

In section 4.2 the design of the Pico Lantern was described. In this section, additional information is included which provides further details about how the Pico Lantern prototype was created.

As can be seen in Figure 43, the Integrated Photonics Module (IPM) of the Microvision ShowWX+ is about 4 cm long and 2 cm wide. The IPM is the part of the Microvision ShowWX+ projector which was placed inside the Pico Lantern.

Figure 43: Picture of the Integrated Photonics Module (IPM) of the ShowWX+ projector. The blue circles show the interconnects which were used to connect the IPM to the rest of the projector.

Figure 44 shows how the IPM connected to the rest of the projector. The IPM interconnects are printed on flexible PCB, which meant that it was easier to design custom PCB pieces, solder the interconnects onto them, and still have them connect to the IPM. For connecting to the rigid blue Electronics Control Module, three separate PCBs were created so that the placement of the interconnects onto those PCBs did not have to be perfect.

Figure 44: Picture of the ShowWX+ Electronics Control Module (ECM, left) and Integrated Photonics Module (IPM, right). The coloured circles show how the ECM and IPM connect to each other.

As can be seen in Figure 45, the Pico Lantern housing was custom designed and 3D printed so that it could hold the Microvision ShowWX+ IPM.

Figure 45: Picture of the IPM inside the Pico Lantern housing.

B.1.2 The Pico Lantern's Sensitivity to Tracking Error

Generally, the Pico Lantern was placed at a distance of 50 mm from the object that was being imaged. It is important to consider how the angular tracking error of the optical fiducial on the Pico Lantern affects the reprojection error of the projector; in other words, the accuracy with which the projector can project light onto a point on the object that has been identified by the laparoscopic camera. This theoretical reprojection error can be calculated as follows:

Reprojection Error = D * tan(Θ)                                 Equation 13

In Equation 13, D is the distance from the Pico Lantern to the object that the Pico Lantern is projecting rays onto for the purpose of surface reconstruction or augmented reality, and Θ is the angular tracking error of the optical fiducial on the Pico Lantern. The angle of the optical fiducial is defined relative to an arbitrary universal coordinate system, and the tracking error is the difference between the measured angle of the optical fiducial and its actual angle. In practice, the angular tracking error at 50 mm is about one degree which, based on Equation 13, translates into a reprojection error of 0.87 mm. This is consistent with the 0.8 mm reprojection error that was reported in section 5.2.
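A quick numeric check of Equation 13, using the distance and angular error quoted above:

```python
import numpy as np

# Equation 13: reprojection error = D * tan(theta).
D_mm = 50.0                    # working distance from the text
theta = np.deg2rad(1.0)        # ~1 degree angular tracking error
print(f"{D_mm * np.tan(theta):.2f} mm")  # ~0.87 mm, matching the text

# The error grows linearly with working distance:
for D in (25, 50, 100):
    print(f"D = {D} mm -> {D * np.tan(theta):.2f} mm")
```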
