UBC Theses and Dissertations

Intra-operative ultrasound-based augmented reality for laparoscopic surgical guidance. Singla, Rohit Kumar, 2017.

Full Text

Intra-operative Ultrasound-based Augmented Reality for Laparoscopic Surgical Guidance

by

Rohit Kumar Singla

B.A.Sc., The University of British Columbia, 2015

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Biomedical Engineering)

The University of British Columbia (Vancouver)

July 2017

© Rohit Kumar Singla, 2017

Abstract

Laparoscopic partial nephrectomy involves the complete resection of a kidney tumour, while minimizing the healthy tissue excised, under a time constraint before irreparable kidney damage occurs. The surgeon must complete this operation in a reduced sensory environment with poor depth perception, a limited field of view, and little or no haptic feedback. For endophytic tumours (those that grow inwards), this is particularly difficult. In order to assist the surgeon, augmented reality can provide intra-operative guidance. Intra-operative ultrasound is low cost, non-ionising, and real-time, and therefore has tremendous potential to guide the surgeon. This thesis details the development of three intra-operative augmented reality systems from a single framework, with augmentations all based on intra-operative ultrasound. The systems were all developed on the da Vinci Surgical System®, using it as a development and testing platform. All systems leverage a single fiducial marker, called the Dynamic Augmented Reality Tracker, which can track the local surface and create a tumour-centric paradigm. A 3D ultrasound volume is reconstructed using a tracked ultrasound transducer. A tumour model is then extracted via manual segmentation of the volume. The three systems were developed and evaluated in simulated robot-assisted partial nephrectomies. The first system shows the feasibility of providing continuous ultrasound-based guidance during excision and achieves a system error of 5.1 mm RMS. Improving on this, the second system demonstrates a clinically acceptable system error of 2.5 ± 0.5 mm. The second system significantly reduced the healthy tissue excised, from an average of 30.6 ± 5.5 cm³ to 17.5 ± 2.4 cm³ (p < 0.05), and reduced the depth from the tumour underside to the cut from an average of 10.2 ± 4.1 mm to 3.3 ± 2.3 mm (p < 0.05). The third system is a novel intra-corporeal projector-based system that assists in determining the initial angle of resection. This system is evaluated in a surgeon study with a total of 32 simulated operations and addresses the limitations of conventional augmentations from the laparoscope's point of view. All three systems show their potential benefits in improving laparoscopic surgery with minimal additional hardware. With such image-guidance systems, the widespread adoption of laparoscopic surgery can be facilitated, improving patient care.

Lay Summary

Minimally invasive surgery is rapidly becoming the standard of care for many diseases, including via the partial nephrectomy (the excision of only the tumour in kidney cancer operations). However, the nature of these surgeries requires the surgeon to operate with limitations such as a reduced field of view, poor depth perception, and little or no sense of touch. To overcome these challenges, this thesis proposes the development of three systems based on ultrasound imaging and augmented reality. Each system presents a unique set of augmented reality overlays derived from a tracked ultrasound scan using computer vision. Each system is evaluated in mock robot-assisted partial nephrectomies performed by an expert surgeon. The results indicate the systems have clinically acceptable error and can significantly reduce the amount of healthy tissue excised. This work can improve on and facilitate the widespread adoption of laparoscopic surgery, broadly benefiting patients in numerous surgeries.
Preface

This thesis is primarily based on three manuscripts, one of which has been published and two of which are pending. The manuscripts have been modified and integrated for coherency. This work is the result of an inter-disciplinary and inter-institutional collaboration between the University of British Columbia's Department of Electrical and Computer Engineering, the University of British Columbia's Department of Urological Sciences, Imperial College London's Department of Surgery and Cancer, and Northern Digital Inc.

A modified version of Chapter 3 has been published, where the author is joint first author (denoted by asterisk), as follows:

• Philip Edgcumbe*, Rohit Singla*, Philip Pratt, Caitlin Schneider, Christopher Nguan, and Robert Rohling. "Augmented Reality Imaging for Robot-Assisted Partial Nephrectomy Surgery". In International Conference on Medical Imaging and Virtual Reality, pp. 139-150. Springer International Publishing, 2016.

A modified version of Chapter 3 and Chapter 4 has been submitted to and was accepted in a Special Issue on Augmented Environments for Computer-Assisted Interventions in IET's Healthcare Technology Letters. A modified abstract was submitted to the 11th Annual Lorne D. Sullivan Lectureship and Research Day and was accepted as a podium presentation. It received the Best Clinical Sciences Research Award. The author list and title are as follows:

• Rohit Singla, Philip Edgcumbe, Philip Pratt, Christopher Nguan, and Robert Rohling. Intuitive Intra-operative Ultrasound-based Augmented Reality Guidance for Robot-Assisted Laparoscopic Surgery.

A modified abstract of Chapter 5 was submitted to the 11th Annual Lorne D. Sullivan Lectureship and Research Day and was accepted as a poster presentation. The author list and title are as follows (presenting author denoted by asterisk):

• Philip Edgcumbe, Rohit Singla*, Philip Pratt, Christopher Nguan, and Robert Rohling. Follow the Light: Intra-corporeal Projector-based Augmented Reality for Laparoscopic Surgery.

The author's technical contributions include the design and implementation of the software components for the systems. The author collaborated with Dr. Philip Pratt to create a plug-in framework for Dr. Pratt's software. From there, the author implemented modules to do the following: interface with the Analogic ultrasound machines; track one or multiple fiducial markers; track the projector in real-time and evaluate its projection accuracy; render the virtual viewpoints; render the augmented surgical instruments; render using perspective and orthographic projections; render models as point clouds, convex hulls, and more; display augmentations in the projector point-of-view and laparoscope point-of-view; and display augmentations via a monitor or projector. Furthermore, the author added to the mathematical framework to provide tumour-centric tracking over time and continuously render models as seen by the virtual cameras. The author developed the qualitative metrics and analyzed the results. Other contributions of the author include: characterization of the latency; reconfiguration of the fiducial marker of the projector; and manual segmentation.
Finally, the author led the writing of the manuscripts in Chapter 4; contributed to editing of the manuscripts in Chapter 3 and Chapter 5; and created supplemental videos for all chapters.

Philip Edgcumbe developed and tested the Dynamic Augmented Reality Tracker. This included the initial design, any modifications, and the finite element modeling (FEM) simulations. He performed the geometric ultrasound calibration and the robot-to-camera calibration for Chapter 3, and made the phantom tumour models used in all experiments. He further generated the tumour models used for Chapter 3, and contributed the idea of orthogonal views. For the systems in Chapter 3 and Chapter 4, Philip Edgcumbe's contributions to the theoretical design include the transformation equations for tracking surgical instruments relative to the DART, and determining the pose of the virtual cameras. For Chapter 5, Philip Edgcumbe created and developed the prototype for the Pico Lantern. He evaluated surface reconstruction accuracy, and performed verification and validation experiments.

Dr. Andrew Wiles provided support in the development of the projector-based work. His research team completed the surface reconstruction accuracy and speed testing. Dr. Philip Pratt provided technical support and guidance, expanded the interface of his software as needed, and contributed to the manuscripts. Prof. Robert Rohling and Dr. Christopher Nguan provided technical and clinical guidance, and contributed to the manuscripts.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgments

1 Introduction
  1.1 Minimally Invasive Surgery
  1.2 Robot-Assisted Minimally Invasive Surgery
  1.3 Image Guided Surgery
  1.4 Thesis Objectives
  1.5 Thesis Overview

2 Background and Related Work
  2.1 The Kidney
    2.1.1 Anatomy and Physiology
    2.1.2 Renal Cell Carcinoma
  2.2 The Nephrectomy
    2.2.1 Procedure Overview
    2.2.2 Operation Benefits and Challenges
    2.2.3 Metrics of Evaluation
  2.3 Ultrasound Imaging
  2.4 Augmented Reality in Laparoscopic Surgery
    2.4.1 Ultrasound-based Augmented Reality
  2.5 Challenges of Guidance in Laparoscopy
  2.6 Remaining Needs
3 Intra-operative Ultrasound-Augmented Reality
  3.1 Framework Overview
    3.1.1 Hardware Components
    3.1.2 Dynamic Augmented Reality Tracker
  3.2 Vision-based Tracking
    3.2.1 Pose Estimation
  3.3 Principle of Operation
  3.4 Transformation Theory
    3.4.1 Virtual Cameras and Time
  3.5 Augmented Reality Overlays
  3.6 System Calibration and Accuracy
    3.6.1 Ultrasound Image to KeyDot Transform
    3.6.2 da Vinci Laparoscope to Camera Transform
    3.6.3 Total System Error
  3.7 User Study
  3.8 Results
    3.8.1 Finite Element Simulations
    3.8.2 System Calibration and Accuracy
    3.8.3 User Study
  3.9 Discussion

4 Improvements to NGUAN
  4.1 Intra-operative Validation Tool
  4.2 Refinements to System Accuracy
    4.2.1 da Vinci Laparoscope to Camera Calibration
    4.2.2 Total System Accuracy
  4.3 New Augmented Reality Overlays
  4.4 User Study
  4.5 Results
  4.6 Discussion

5 Projector-based Augmented Reality Intra-corporeal System
  5.1 The Challenge of Resection Angle
  5.2 The Pico Lantern
  5.3 Projector-based Augmented Reality Intra-corporeal System
    5.3.1 Surface Reconstruction
  5.4 Augmented Reality Overlays
    5.4.1 Projector and Laparoscope Point-of-Views
    5.4.2 Orthographic and Perspective Projections
    5.4.3 Overview of Ray-Surface Intersection
    5.4.4 Summary of Augmented Reality Overlays
  5.5 System Calibration and Accuracy
  5.6 User Studies
  5.7 Results
    5.7.1 System Calibration and Accuracy
    5.7.2 First User Study
    5.7.3 Second User Study
  5.8 Discussion

6 Conclusion and Future Work
  6.1 Author's Contributions
    6.1.1 System and Components
    6.1.2 Principle of Operation
    6.1.3 Evaluation
  6.2 Future Work and Recommendations

Bibliography

List of Tables

Table 3.1  Nephrectomy Guidance using Ultrasound-Augmented Navigation (NGUAN) initial feasibility study results. Results of the trials using ultrasound only (US) and the guidance system (NGUAN) are shown.

Table 4.1  Quantitative results of simulated partial nephrectomies. Average and standard deviation (avg ± stdev) of each metric is listed. Results of the trials using ultrasound only (US) and augmented reality (Nephrectomy Guidance Using Ultrasound-Augmented Navigation 2.0 (NGUAN+)) are shown. Bold indicates statistical significance (p < 0.05); a bold asterisk indicates statistical significance (p < 0.05) of augmented reality compared to US only.

Table 4.2  Qualitative metrics with the questions asked about the augmented reality system. Scores reported where 1 = strongly disagree and 5 = strongly agree.

Table 5.1  Quantitative comparison of simulated partial nephrectomies performed in the first Projector-based Augmented Reality Intra-corporeal System (PARIS) study. Average and standard deviations (avg ± stdev) of each metric is listed. Results of the trials using ultrasound only (US), augmented reality from the laparoscope point-of-view (LPOV), and augmented reality from the projector point of view (PPOV) are shown.

Table 5.2  Quantitative comparison for the second PARIS user study. Average and standard deviations (avg ± stdev) of each metric is listed. Results of the trials using ultrasound only (US) and augmented reality from the projector point of view (PPOV) are shown. A bold asterisk indicates statistical significance (p < 0.05).

List of Figures

Figure 1.1  Example of the incision required in open surgery, as seen on a porcine model.

Figure 1.2  Example of the long and rigid laparoscopic instruments used in minimally invasive surgery.

Figure 1.3  The da Vinci S Surgical System®. The surgeon's console (left), the patient-side cart (middle), and vision cart (right). © 2017 Intuitive Surgical, Inc.

Figure 2.1  Illustration of the kidney.

Figure 2.2  Illustration of a nephron, the functional unit of the kidney.

Figure 3.1  System hardware diagram. da Vinci images © 2017 Intuitive Surgical, Inc.

Figure 3.2  The custom "pick-up" US transducer used in this work, from Schneider et al. Adhered is the tracked KeyDot®.

Figure 3.3  The plastic Dynamic Augmented Reality Tracker (DART) with a pattern adhered (left), metal version with scale reference (middle), and the DART as inserted into an ex-vivo porcine kidney (right).

Figure 3.4  Simulated surgery set-up with the DART inserted into a phantom, and tracked US scan performed (top). US images (bottom left) are segmented to create a three dimensional (3D) tumour model (bottom right).
Figure 3.5  Conceptual illustrations of the surgeon's console view in both stages of the robot-assisted partial nephrectomy (RAPN).

Figure 3.6  Coordinate system diagram in each stage of the RAPN using NGUAN.

Figure 3.7  The set of visualizations as presented in TilePro®. Endoscopic view augmented (left) and virtual viewpoints (right). Pink and yellow cones are virtual renderings of the tracked surgical instruments. Red, green, and blue meshes are visualized in each view. No interpolation was performed between segmented slices of the mesh, resulting in the poor mesh visualized.

Figure 3.8  The calibrated camera coordinate system (C) differs from the laparoscope coordinate system of the da Vinci® (L). The two must be registered to one another.

Figure 3.9  The modified DART used for error testing with instrument and pinhead overlaid.

Figure 3.10  FEM simulation of tumour movement as a function of force and leg length using 15.4 kPa stiffness (left) and 10.8 kPa stiffness (right).

Figure 4.1  DART 3D printed in colour (left) and the ballpoint stylus being scanned (right).

Figure 4.2  A comparison of the view without augmented reality (left) and with augmented reality (right). The red mesh model appears within 1 mm of the ground truth ballpoint stylus, and augmented reality overlays appear within 1 mm of ground truth.

Figure 4.3  Left TilePro® feed with the augmented endoscopic view (top). Right TilePro® feed with virtual viewpoint and traffic lights (bottom). Compass overlay in grey, and projected path overlay for each instrument shown.

Figure 4.4  Magnified virtual viewpoint to show how the surgeon uses the guidance when close to the tumour underside. The red sphere indicates a distance within 2.5 mm of the tumour surface.

Figure 4.5  NGUAN+ as seen in the surgeon's console. Augmentations provided using TilePro®.

Figure 4.6  Cross section of a tumour excised with augmented reality guidance. Slice closest to the surface on the left, farthest on the right.

Figure 5.1  The system setup for PARIS. The projector is used to augment the tumour's surface. The scene is viewed by a stereo laparoscope.

Figure 5.2  Coordinate systems used within PARIS. Tracked US scan is performed relative to the DART (top). Tracked and calibrated projector augments the scene with the tumour model (bottom).

Figure 5.3  Ex-vivo kidney seen by the laparoscope with no projection on it, with a relatively featureless surface (top left). The ideal reconstruction would match this image perfectly. A typical surface reconstruction using semi-global block matching on the CPU (SGBM) and no additional features (top right); note the black spots are holes in the reconstruction. The checkerboard pattern projected onto the scene (bottom left). The additional features improve the surface reconstruction by a perceptible amount (bottom right). The two holes in the middle are due to specular reflection and the DART, which also causes reflection.

Figure 5.4  Overview of PARIS. Light green indicates orthographic projection from the LPOV (left). Red indicates projection from the PPOV (right).
Figure 5.5  The PPOV visualization of PARIS. Red indicates perspective projection, and yellow/brown indicates orthographic projection. Both are seen from the projector point of view (POV).

Figure 5.6  Example projection image for LPOV projections (left) and its appearance on ex-vivo kidney. The tumour model is pre-distorted, hence the irregular shape.

Figure 5.7  Un-augmented cross-section of phantom (left). Computer graphics overlay of tumour model (right). LPOV perspective projection of model.

Figure 5.8  Example of a positive margin with both tumour exposed and a portion remaining in the phantom, indicated with blue arrows.

Figure 5.9  Example cross sections of excised specimens using PPOV (top row) and US (bottom row).

Glossary

1D       one dimensional
2D       two dimensional
3D       three dimensional
6-DOF    6 degrees-of-freedom
13-DOF   13 degrees-of-freedom
API      application programming interface
B-MODE   brightness mode
BM       block matching on the CPU
BMGPU    block matching on the GPU
BPGPU    belief propagation on GPU
CSBPGPU  constant space belief propagation on GPU
CT       computed tomography
DART     Dynamic Augmented Reality Tracker
EM       electro-magnetic
FEM      finite element modeling
FRE      fiducial registration error
GFR      glomerular filtration rate
GPU      graphics processing unit
HD       high-definition
IGS      image-guided surgery
LPOV     laparoscope point-of-view
MIS      minimally invasive surgery
MRI      magnetic resonance imaging
NGUAN    Nephrectomy Guidance using Ultrasound-Augmented Navigation
NGUAN+   Nephrectomy Guidance Using Ultrasound-Augmented Navigation 2.0
PARIS    Projector-based Augmented Reality Intra-corporeal System
PNP      Perspective-N-Point Problem
POV      point of view
PPOV     projector point of view
PVC      polyvinyl chloride
RAPN     robot-assisted partial nephrectomy
RCC      renal cell carcinoma
RMS      root mean square
SDI      serial digital interface
SGBM     semi-global block matching on CPU
TRE      target registration error
US       ultrasound
VGH      Vancouver General Hospital

Acknowledgments

First and foremost, I would like to express my sincerest gratitude to my supervisor, Prof. Robert Rohling, for letting me explore the fields of augmented reality, medical imaging, surgical robotics, and cancer surgery — all in the same project. Prof. Rohling has allowed me to use one of the most technologically advanced tools in the da Vinci Surgical System, improve it using one of (if not the) greatest medical imaging modalities in ultrasound, and develop my own contribution of the next major computing modality in augmented reality. Prof. Rohling has tremendous patience, and his expertise was a major factor in how this work was even possible. Thank you for your outstanding technical and emotional intelligence. Thank you for being an outstanding role model who continues to inspire me. Thank you for always having my back and letting me roam — whether it be to London, or Switzerland, or back to my desk.

Thank you to Dr. Christopher Nguan, the clinical champion, for being beyond open-minded to some hair-brained schemes and even pitching some of your own. The passion you have for improving surgery through technology is infectious. Thank you for your occasional 1:00 AM emails. Thank you for your patience and belief. Thank you for being the namesake of this work.

Thank you to Prof. Purang Abolmaesumi for his never-ending support. I am continuously humbled by your brilliance and delighted by your uplifting laughter!
I am grateful for the generous funding from the Natural Sciences and Engineering Research Council (NSERC) through the Canada Graduate Scholarship (Master's Level) and the Collaborative Research and Training Experience (CREATE) program; the Engineers in Scrubs Travel Grant Award; and the UBC Graduate Student Initiative.

I would also like to thank Andrew Wiles and the Advanced Research Team at Northern Digital Inc. for their continued support and feedback. I would further like to thank Prof. Tim Salcudean for infrastructure and support.

Thank you to my lab mates, past and present, who made my time in the lab fun and productive: Alexander Seitel, Angelica Ruskowksi, Corey Kelly, Gregory Allan, Irene Tong, Jeffrey Abeysekra, Jorden Hetherington, Julie Hemily, Julio Lobo, Mehran Petsie, Mohammad Honaravar, Mohammad Najafi, Nathan van Woudeberg, Omid Mohaeriri, Qi Zeng, Saman Nourarian, Samira Soujoudi, Shekoofeh Azizi, Tom Curran, and Yasmin Halwani. Lab mates like you make our lab one of the best in the world.

Thank you to the friends I have met in my graduate school journey for their clever insights and cleverer jokes: Brendan Gribbons, Cameron Stuart, Charlene Leung, Claire da Roza, Jason Spiedel, Juan Pablo Gomez Arrunategui, Liam Sharkey, Luke Haliburton, Michele Touchette, Olivia Paserin, Prashant Pandey, Sampath Satti, Dr. Su Lin Lee, and Tiffany Ngo.

Thank you to the friends beyond academia that kept me balanced whenever I needed it: Amit Anand, Christopher Tan, Elizabeth Wicks, Emily Woehrle, Gorden Larson, James Mackenzie, Jeremy Lord, Jessica Stewart, Lauren Fung, Lucas and Andrea Cahill, Marc Lejay, Michael Ip, Sagar Malhi, Sarah Holdijk, and Vanessa Russell.

To three of the greatest mentors I've ever had, Dr. Caitlin Schneider, Dr. Philip Pratt, and Dr. Philip Edgcumbe: it is beyond my wildest dreams that I ever could have worked with experts in the field such as yourselves. I've been given the privilege to learn from the best, the brightest, and, most importantly, the humblest trio. I am honoured to work with all of you each and every day.

Finally, endless thanks to my mom and dad, Renu and Krishan Singla, and my two sisters, Rubina and Krishma. It is with you that I get to take a step away from my world and go back to being a kid again. Thank you for calling when you haven't heard from me in a while. Thank you for picking me up and taking me home. Thank you for making me endless cups of chai. Thank you for your love, sacrifices, unwavering support, jokes, and everything, big and small, that you've ever done so that I can live in the city I love, do the work I do, and have the life that I have.

Chapter 1

Introduction

Thus (through perspective) every sort of confusion is revealed within us; and this is that weakness of the human mind on which the art of conjuring and of deceiving by light and shadow and other ingenious devices imposes, having an effect upon us like magic... And the arts of measuring and numbering and weighing come to the rescue of the human understanding — there is the beauty of them — and the apparent greater or less, or more or heavier, no longer have the mastery over us, but give way before calculation and measure and weight?

— Plato (380 B.C.)
Little did he know, when Wilhelm Röntgen took the first X-ray image of his wife's hand in 1895, that he would start a revolution in medical imaging and surgery. Since the creation of radiology, imaging modalities such as X-rays, computed tomography (CT), and ultrasound (US) have drastically improved the manner in which surgery is performed. Röntgen's discovery would lead to the development of interventional radiology; combined with the development of endoscopic imaging, it would change the cut-and-see approach of the past to the see-then-cut approach seen today [54]. With the use of imaging prior to the operation (pre-operative) and later during the operation (intra-operative), surgeons gained unprecedented abilities to diagnose and understand human anatomy and the pathologies that their patients faced [54]. With additional advances in hardware and mathematics, the field of image-guided therapy was born. However, despite this enhanced skill set, surgeons face numerous challenges in performing operations, particularly when operating on soft tissues in the abdomen.

This thesis focuses on the creation of new guidance systems with applications to laparoscopic surgery, specifically the robot-assisted partial nephrectomy (RAPN). In order to assist the surgeon further, with no additional harm to the patient, this thesis presents three systems based on ultrasound (US) and computer vision. Each system provides the surgeon with unique augmentations that enhance what he or she can see and how he or she can operate, all with the motivation of improving patient care. While this thesis is applied to the RAPN, its potential is large and can be extended to other organs such as the liver, prostate, and pancreas.

Chapter 1 is organised as follows:

1. Section 1.1 outlines the evolution of minimally invasive surgery,
2. Section 1.2 discusses the development of robot-assisted minimally invasive surgery,
3. Section 1.3 provides a brief overview of key concepts in image guided surgery,
4. Section 1.4 breaks down the objectives of this thesis, and finally
5. Section 1.5 gives a chapter-by-chapter overview of the thesis as a whole.

1.1 Minimally Invasive Surgery

The field of minimally invasive surgery (MIS) has become a relatively standard approach for various abdominal surgeries. Here, rather than one long, morbid incision in the patient, as in Figure 1.1, a set of small incisions is made instead. These small incisions, 0.5-1.5 cm in length, give this type of surgery the nickname "keyhole surgery". The surgeon inserts a camera, known as the laparoscope, and surgical instruments through the various port sites into the abdomen, where a working space has been generated through the insufflation of carbon dioxide gas. The surgeon operates using long, rigid instruments like those in Figure 1.2. With MIS, the patient benefits from shorter recovery time, less post-operative pain, less intra-operative blood loss, and better cosmesis.

Figure 1.1: Example of the incision required in open surgery, as seen on a porcine model.

The cost of MIS comes at the expense of the surgeon, who must now operate in a reduced sensory environment. With regard to visualizing the surgical scene, a surgical assistant must hold a monocular laparoscope inserted in the patient. The video feed is displayed on a monitor. This combination presents a reduced field of view and poor depth perception, with added instability from the manual manipulation of the laparoscope. Stereoscopic laparoscopes, which offer partial recovery of depth perception, are commercially available from vendors including Olympus and Intuitive Surgical, but are not yet ubiquitous. The laparoscope's lack of depth resolution influences the surgeon's spatial understanding of the scene and the relative positioning of structures to one another.
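To make the depth limitation concrete, a stereoscopic laparoscope recovers depth by triangulating between its two cameras. As a back-of-envelope illustration (the numbers below are assumptions for illustration, not measurements from this thesis), for a rectified stereo pair with focal length f in pixels, baseline b, and disparity d in pixels, the depth Z and its sensitivity to a disparity error Δd are

\[
Z = \frac{f\,b}{d}, \qquad \Delta Z \approx \frac{Z^{2}}{f\,b}\,\Delta d .
\]

With an assumed f = 1000 pixels and the short baseline b = 5 mm typical of a laparoscope's form factor, a structure at Z = 100 mm shifts by roughly 2 mm in depth per pixel of disparity error, which is one reason even stereoscopic designs offer only partial recovery of depth perception.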
With regard to the laparoscopic tools, the surgeon's haptic sense is impaired and his or her movement becomes unintuitive. The latter is caused by the fulcrum effect, where inversion is required in moving an instrument by its handle rather than by its tip: essentially, when the surgeon moves his or her hand right, the instrument moves left instead of right. As well, these instruments are not articulated (wristed), producing dexterity challenges. All of these constraints on how the surgeon sees, feels, and thinks lead to increased operation time and increased surgical errors. Tasks requiring fine motor skills or complex manipulation are more difficult. To mitigate these challenges, the field of robot-assisted MIS would emerge.

Figure 1.2: Example of the long and rigid laparoscopic instruments used in minimally invasive surgery.

1.2 Robot-Assisted Minimally Invasive Surgery

Over 30 years ago, research into robot-assisted surgery began, yielding novel systems like the ROBODOC, released by Integrated Surgical Systems in 1992. Since that time, the da Vinci Surgical System® (Intuitive Surgical Inc., Sunnyvale, CA, USA) has become one of the most successful robot-assisted systems used worldwide. In 2015 alone, an estimated 3600 systems completed 650,000 procedures, including gynaecological and urological operations. In this thesis, the term "robot" refers to the da Vinci Surgical System®, which is treated as the exemplar system herein. With robot-assisted surgery, the surgeon is in control at all times, in a master-slave configuration. The robot extends but does not replace the surgeon's abilities and role, and has no autonomous ability. The surgeon, through the use of the robot, regains some of the senses which were reduced when the field moved to laparoscopic surgery.

Figure 1.3: The da Vinci S Surgical System®. The surgeon's console (left), the patient-side cart (middle), and vision cart (right). © 2017 Intuitive Surgical, Inc.

With the da Vinci® (Figure 1.3), the operator sits at a surgeon's console with two "master"-side manipulators, four foot pedals, and a console viewer with a three dimensional (3D) screen instead of a traditional two dimensional (2D) one. The da Vinci's laparoscope has a pair of stereo high-definition (HD) cameras, allowing for the 3D visualization. The console also permits the integration of additional digital data directly into the surgeon's console through the use of the TilePro® function. These could be in the form of pre-operative and intra-operative imaging modalities such as computed tomography (CT) or US, or in the form of navigation and guidance tools.
The console is connected to a vision cart and a patient cart. The patient cart is located at the patient bedside and has a center column with four robotic patient-side manipulators hanging from it. The manipulators are docked to specialised ports inserted into the same incisions in the insufflated abdomen as in traditional MIS. Each manipulator can operate a unique instrument, such as small 8 mm diameter scissors or electrocautery tools. The master-side manipulators' motion is translated to the patient-side manipulators; the da Vinci® is referred to as a teleoperated robotic system because of this master-to-patient manipulator mapping. In addition to real-time teleoperation, the movement is filtered to minimize natural hand tremor, and the motion can be variably scaled to allow for fine movements. The articulated tools themselves give the surgeon back the degrees of freedom he or she lost in conventional laparoscopic surgery. The da Vinci® allows for intuitive movement of the tools, removing the fulcrum effect from the list of challenges the surgeon must face. By design, the robotic manipulators can match the surgeon's range of motion in open surgery. These improvements allow for more complex minimally invasive procedures and also simplify routine laparoscopic operations.
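The master-to-slave mapping just described (smoothing out tremor, then scaling down the smoothed motion) can be sketched in a few lines. The following Python sketch is purely illustrative of the principle and is not Intuitive Surgical's control law; the scale factor, filter constant, and class name are assumed for illustration.

    import numpy as np

    class TeleopMapper:
        """Illustrative master-to-patient motion mapping: low-pass filter
        the master's increments (tremor suppression), then scale them down
        (fine movement). Hypothetical; not the da Vinci's actual control law."""

        def __init__(self, scale=0.2, alpha=0.3):
            self.scale = scale            # 5:1 motion scaling (assumed value)
            self.alpha = alpha            # smoothing constant (assumed value)
            self._smoothed = np.zeros(3)  # filtered increment, in mm

        def step(self, master_delta_mm):
            # An exponential moving average attenuates the fast, involuntary
            # component of hand motion while passing the slow, intentional one.
            self._smoothed = (self.alpha * np.asarray(master_delta_mm, float)
                              + (1.0 - self.alpha) * self._smoothed)
            # Scaling maps large hand motions to small instrument-tip motions.
            return self.scale * self._smoothed

    mapper = TeleopMapper()
    tip_delta = mapper.step([1.0, 0.0, 0.5])  # hand increment -> tip increment

A real controller would also handle orientation and run at the servo rate; this sketch captures only the translation mapping.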
Additionally, the ability of the da Vinci® to localise points in space has been quantified to be 1 mm [37]. This means the da Vinci® is suitable for use in image-guidance systems, with high precision of instrument tracking [37]. It has further been associated with a reduction in mental effort and workload in comparison to traditional laparoscopic tasks [49]. Studies indicate there is likely a benefit to using the robot-assisted approach over conventional laparoscopy to achieve better margin sizes and post-operative function in partial nephrectomies [11, 55, 64].

Nevertheless, there are disadvantages to robot-assisted surgery. With the da Vinci®, a prominent example is the fact that haptic feedback is entirely absent, not just limited as in conventional laparoscopy. This lack of tactile feedback increases the risk of excessive applied force or the clashing of instruments. As well, the da Vinci® requires significant upfront investment, so robot-assisted procedures often cost more to perform. Surgeons and staff must undergo specific training for use of the robot. Overall, the da Vinci® remains a promising avenue for surgery. From a research perspective, it is an excellent development and testing platform. Integrating image guidance can expand the existing benefits of the da Vinci®, especially for the surgeon.

1.3 Image Guided Surgery

With the advent of computers, the field of medicine was revolutionised. This onset of computational power has led to improvements in various parts of medicine, but arguably none more than radiology and surgery. Due to the advances in the imaging modalities available, clinical decision making has been drastically improved. By being able to see inside the patient, with no excision, and understand the underlying anatomy, the choice of whether or not to operate is better informed. When surgery is chosen, pre-operative imaging is vital in developing a patient-specific plan [54]. Through the use of X-ray, CT, magnetic resonance imaging (MRI), or US, the surgeon can understand what the internal structure looks like in each patient. Intra-operatively, through the use of fluoroscopy, cone-beam CT, US, and others, the surgeon can see within and understand the nuances of anatomy in real-time without needing to see the target with his or her own eyes. Medical imaging enhances the surgeon's sight, and the visualization enhances the surgeon's reasoning [84]. The field of using such powerful imaging to assist and navigate during surgery is called image-guided surgery (IGS) [53].

Image guidance can tackle some of the challenges in MIS and, to a degree, those in open surgery as well. For example, it can compensate for the loss of haptic feedback to subsurface patient anatomy [53]. The standard video feed can be complemented by the use of intra-operative imaging, pre-operative imaging, or a combination of both [53]. Tracking surgical tools or anatomical structures, and aligning them in a common coordinate system with the imaging, can be beneficial [53]. Through this, the surgeon can understand where he or she (i.e. the instruments) is spatially in relation to the target. IGS can then lead to better clinical decisions, which in turn may lead to fewer complications, less blood loss, less tissue excised, and the prevention of disorientation [6]. IGS can also reduce the cognitive load on the surgeon [6, 54]. Traditionally, the surgeon must observe and mentally connect what he or she has seen pre-operatively to what he or she views intra-operatively; a challenging task given the existing environmental stressors such as time [54, 84].

The importance and value of image guidance is reflected in the development of the Advanced Multi-modality Image Guided Operating (AMIGO) suite [34]. AMIGO is a state-of-the-art, integrated, three-room design dedicated to allowing the use of MRI, CT, US, and an array of additional imaging modalities intra-operatively. However, IGS's benefits are limited if not correctly implemented. IGS itself encompasses several technical components, including but not limited to imaging, tracking, registration, and display [53]. These are briefly discussed as follows:

• Imaging: The choice of imaging modality is guided by the desired application. MRI, for instance, provides excellent and detailed 3D tomographic images, with excellent contrast of various soft tissues. However, it is not real-time, is subject to motion artefacts, and is difficult to use intra-operatively. CT and X-rays are beneficial because they can be used intra-operatively and potentially in real-time, but both introduce ionising radiation. US is a possible option both intra-operatively and pre-operatively. It is both real-time and non-ionising, but has variable contrast, noise, and resolution, and its image quality is heavily dependent on the user. US also cannot image beyond areas of high acoustic impedance, and has relatively low penetration. Additionally, there is the endoscopic image itself [6]. As this provides an intra-operative view of the scene, other modalities may be registered to it, or the view itself can provide guidance information.

No single modality is superior to another. However, the choice of modality will influence the accuracy and use of the IGS system. Take, for example, an intra-operative US image of a subsurface target. While relatively safe for the patient, the image is often difficult to interpret. The segmentation of the target in a given image may be particularly difficult, and a poor segmentation will limit the entire system's accuracy. One must be careful in the choice of modality, as no one imaging modality is best for all phases of a procedure [54].

• Tracking: In diagnostic and therapeutic procedures, the surgeon uses instruments to observe and manipulate the scene. To integrate these instruments with the imaging information, they must first be tracked. In doing so, they can be brought into a common coordinate system. Several tracking methods exist, including optical tracking, electro-magnetic (EM) tracking, and computer-vision based tracking, described below.
Several tracking meth-ods exist including optical tracking, electro-magnetic (EM) tracking, andcomputer-vision based tracking, described below.– Optical: Optical tracking refers to the use of infrared light to illuminate8reflective or active markers, analyse the illuminated image with a cam-era, and then localise the markers relative to the tracker [53]. This canbe done by controlling when the markers are illuminated (active) or notilluminated (passive). While this is has been shown to be highly accu-rate and precise, it requires a direct line of sight with the markers [53].In the case of laparoscopic surgery, these markers are frequently out-side of the patient and located on the proximal ends of the instruments.Placing the markers far from the distal instrument tip may introduceadditional errors in tracking [53]. Common systems include the Po-laris (Northern Digital Inc., Waterloo, ON, CA) and Certus OptoTrak(Northern Digital Inc., Waterloo, ON, CA).– Electromagnetic: EM tracking uses a field generator to create an elec-tromagnetic field, in which sensor coils on the instruments are tracked[53]. This eliminates the line-of-sight issue of optical tracking withsimilar accuracy to optical tracking [53]. However, the presence of fer-romagnetic material within the operating room will cause distortionsto the EM field, resulting in non-uniform accuracy and noise [53]. Thepresence of such material is quite likely. Common systems used are theAurora (Northern Digital Inc., Waterloo, ON, CA) and the AscensionBird (Ascension Technology Corp, Shelburne, VT, USA).– Computer-vision: Computer-vision based tracking (also called imagebased) analyses the laparoscopic image to track organs and tools. Thismay be through the use of a single 2D laparoscopic image, or a stereo-scopic 3D pair. It may even be through analysis of medical imagesthemselves (ex. in US-guided needle interventions). This has poten-tial in the tracking of soft organs and the instruments with no exoge-nous hardware added, but its robustness and accuracy for use in clinicalpractice remains to be proven. Challenges include deformable objects,foreshortening, occlusions, and the need for concurrency.Similar to the choice of imaging modality, there is no single superior track-ing method. The best approach is likely a combination of each to balancebenefits and drawbacks.9• Registration: in order to be of use, imaging data and tracking data must becombined together. The process of bringing these two data sets together in acommon coordinate system, such that a point in one set and its equivalent inanother set is known, is referred to as registration [53]. A rigid registrationonly requires a rotation and translation between coordinate systems, while anon-rigid registration requires additional parameters [53]. Registration mayoccur between a 3D dataset to another 3D dataset, 2D to 2D, or 2D to 3D andvice versa. Registration may also involve two different or imaging modalities(pre-operative/intra-operative or intra-operative/intra-operative). Regardlessof the method used, the end outcome should be an accurate alignment that isconfirmed using a validated evaluation method [53].• Visualization: the display of registered tracked tools and imaging data isperhaps one of the most significant barriers to broad adoption of IGS[53].Regardless of all the complexities involved in the other aspects, if the endresult of an IGS system cannot be easily understood, then it is hard to envi-sion any benefit. 
Furthermore, the integration of all these aspects into one unified IGS system presents a significant challenge in itself. One must consider costs, practical implementation, physical requirements, accuracy, usability, and clinical utility.

1.4 Thesis Objectives

The primary goal of this thesis is to develop novel intra-operative image guidance systems using US and augmented reality. It does so by presenting three systems that share a common framework and principle of operation. The systems are called NGUAN, NGUAN+, and PARIS. These systems undergo evaluation of their feasibility in improving the RAPN, with the hypothesis that measurable quality metrics of the surgery will be improved. As part of this goal, the following objectives must be met:

• the integration of a US machine, the da Vinci®, the laparoscopic video feed, and additional components as necessary into a unified framework such that image guidance is possible.
• a method to register US information to a surgical scene despite tissue deformation and organ movement.
• a method to provide continuous and real-time guidance with as few constraints on the surgeon as possible.
• the development of augmented and virtual reality visualizations to address specific surgical challenges of the RAPN.
• the thorough evaluation of the overall system and its components for clinically acceptable accuracy.
• the design of a user study or studies to evaluate the utility of the developed systems in a clinical context.

Achieving these will illustrate the feasibility of creating an IGS system using components that are low in cost and can be broadly distributed. Such a system would address existing gaps in present options, and reduce the barrier to performing laparoscopic surgeries.
1.5 Thesis Overview

This thesis is structured as follows:

• Chapter 1 provides an overview of minimally invasive surgery and image-guided surgery, describes the motivation of this thesis, and presents the thesis objectives.
• Chapter 2 presents an overview of the kidney, renal cell carcinoma, and the nephrectomy procedures; describes the use of intra-operative US; and reviews the prior work in the field of image-guided surgery.
• Chapter 3 presents the overall framework used to develop the systems in this thesis, and outlines NGUAN, its evaluation, and its limitations.
• Chapter 4 presents NGUAN+, with improved, clinically acceptable accuracy and intuitive visualizations, and evaluates the new system for its utility and limitations.
• Chapter 5 presents and evaluates PARIS, the third augmented reality system, which uses a novel projection-based intra-corporeal approach, addressing an unmet challenge from the first two systems.
• Chapter 6 concludes the thesis with a summary of the work done and its contributions, highlights limitations, and discusses potential avenues for future work.

Chapter 2

Background and Related Work

2.1 The Kidney

2.1.1 Anatomy and Physiology

The kidney is a vital organ in the human body. It is a bean-shaped organ that, when fully developed in an adult, is approximately 13 cm × 5 cm × 2 cm in size, or approximately the size of a fist [59]. A normal human has a pair of kidneys, which are located in the posterior of the abdominal cavity, caudad to the diaphragm and the liver (the upper back side of the abdomen) [59].

A kidney has a fairly complex structure. The kidney itself is encapsulated in a layer of fascia, perirenal fat, and the renal capsule [59]. This covers the renal cortex, the outermost part of the kidney itself. The cortex is smooth and appears red in colour [59]. The cortex extends from the renal capsule to the bases of the renal pyramids. The cortex and the renal pyramid bases together make up the kidney's parenchyma. Beneath the cortex lies the renal medulla layer, which appears red-brown in colour [59]. The medulla contains the renal pyramids themselves, which are oriented with their apexes inwards to the center of the kidney. The renal pyramids are formed by an aggregation of nephrons and tubules. Within the medulla lies the collecting duct system, composed of minor and major calyxes [59]. Urine passes from the collecting duct into the renal pelvis and finally into the ureter, which leads to the bladder [59]. This layer consists of millions of the kidney's functional units, the nephrons, which are microscopic tubes [59]. Blood supply to the kidney comes through the renal hilum, composed of the renal artery, vein, and pelvis. Through the renal artery and vein, the renal hilum is connected to the aorta and vena cava [59]. A structural example of the kidney can be seen in Figure 2.1.

Figure 2.1: Illustration of the kidney.

The nephron is the functional unit of the kidney and performs blood filtration. Filtration starts in the renal corpuscle, which is composed of the glomerulus (a bundle of capillaries) and the Bowman's capsule, which contains the glomerulus [59]. The afferent arteriole brings blood to the glomerulus, while the efferent arteriole takes blood away [59]. The glomerular filtrate then travels through the proximal convoluted tubule, the Loop of Henle, and the distal convoluted tubule, where further filtration occurs [59]. The distal convoluted tubule ends in a single collecting duct, leading to the renal pelvis. In order to evaluate kidney function and health, the glomerular filtration rate (GFR) is used [59]. The structure of the nephron is illustrated in Figure 2.2.
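GFR is typically estimated rather than measured directly. As one common bedside illustration (the Cockcroft-Gault estimate of creatinine clearance, a widely used stand-in for GFR; this formula is general background knowledge, not taken from the thesis):

\[
\mathrm{CrCl}\;[\text{mL/min}] \;=\; \frac{(140 - \text{age})\times \text{weight}\,[\text{kg}]}{72 \times S_{\mathrm{Cr}}\,[\text{mg/dL}]} \;\times\; (0.85\ \text{if female}).
\]

For example, a 60-year-old, 80 kg man with a serum creatinine of 1.0 mg/dL has an estimated clearance of (140 - 60) × 80 / 72 ≈ 89 mL/min; values substantially below this signal the reduced renal function that nephron-sparing surgery aims to avoid.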
Figure 2.2: Illustration of a nephron, the functional unit of the kidney.

The kidney has a multitude of roles to play in maintaining homeostasis [59]. The kidney's roles include:

• Waste Excretion: filtration and excretion of toxins such as urea.
• Urine Regulation: regulation of the urine volume and of ions such as sodium and potassium.
• Blood Pressure Regulation: maintenance of blood pressure through renin production and vessel constriction, as well as through the concentration of salts and water in the body.
• pH Regulation: maintenance of the balance of hydrogen ions in the blood itself.
• Hormonal Secretion: production of erythropoietin, which causes the creation of blood cells in bone marrow, and activation of vitamin D, which causes absorption of calcium.

Due to the crucial role the kidney plays, renal failure is a significant issue. Renal failure can be treated by either hemodialysis or peritoneal dialysis [59]. With hemodialysis, a machine is used to filter blood, acting as an artificial kidney equivalent. Hemodialysis requires minor surgery to access blood vessels. These treatments take multiple hours and occur multiple times a week. With peritoneal dialysis, a catheter is inserted into the abdomen, which is then filled with a dialysate [59]. The dialysate itself causes waste removal. This approach permits the blood to stay within the vessels themselves. Any form of damage or disease that impedes renal function may cause renal failure [59]. One such example is kidney cancer, of which the most common type is renal cell carcinoma.

2.1.2 Renal Cell Carcinoma

In North America, kidney cancer is estimated to be the sixth most common cancer in men, and the eighth in women. In the United States alone, an estimated 62,700 cases of kidney cancer were diagnosed in 2016, causing 14,420 deaths [65]. Despite a relatively high survival rate, kidney cancer has an increasing incidence rate, commonly due to incidental discovery in medical imaging [65]. Of kidney cancers, renal cell carcinoma (RCC) is the most common type, making up 85% of all cases.

RCC occurs because of the uncontrolled growth of cells within the lining of the kidney tubules [83]. The cause is currently unknown [83]. Warning signs of the onset of kidney cancer include blood in the urine, an unexpected abdominal mass or lump, appetite loss, unexpected weight loss, and pain [83]. RCC can be diagnosed through blood tests, CT or US imaging, or renal mass biopsy. RCC tumours vary significantly. Tumour descriptors include maximal diameter, exophytic and endophytic properties, nearness of the tumour to the collecting system, anterior/posterior location, and location relative to the kidney's polar lines. These descriptors are used to score the kidney using the RENAL nephrometry scoring system [36]. The RENAL score quantifies tumour properties and is used to inform clinical decision making, amongst other factors such as co-morbidities [36]. This nephrometry measure provides meaningful comparisons of RCC from case to case. Moreover, a high RENAL score has been found to be predictive of complications and increased warm ischemia time [25]. Pre-operative CT can also be used to inform the choice of treatment.
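As an illustration of how these descriptors combine into a single score, the sketch below sums the RENAL components, assuming the 1-3 point bands from Kutikov and Uzzo's original description (radius, exophycity, nearness, and polar location each score 1-3, while the (A)nterior/posterior descriptor adds a suffix rather than points). The thresholds and the function itself are illustrative assumptions, not restated from this thesis.

    def renal_score(diameter_cm, pct_exophytic, dist_to_collecting_mm,
                    location, polar):
        """Illustrative RENAL nephrometry sum (assumed bands per
        Kutikov & Uzzo, 2009). Returns (points, suffix): points range
        4-12 (4-6 low, 7-9 moderate, 10-12 high complexity);
        location is 'a' (anterior), 'p' (posterior), or 'x' (neither);
        polar is 'clear' (entirely above/below the polar lines),
        'crosses' (lesion crosses a polar line), or 'central'
        (>50% across a polar line, or entirely between them).
        """
        r = 1 if diameter_cm <= 4 else (2 if diameter_cm < 7 else 3)
        e = 1 if pct_exophytic >= 50 else (2 if pct_exophytic > 0 else 3)
        n = 1 if dist_to_collecting_mm >= 7 else (2 if dist_to_collecting_mm > 4 else 3)
        l = {'clear': 1, 'crosses': 2, 'central': 3}[polar]
        return r + e + n + l, location

    # A 3 cm, fully endophytic, anterior tumour 3 mm from the collecting
    # system that crosses a polar line: 1 + 3 + 3 + 2 = 9, i.e. moderate.
    points, suffix = renal_score(3.0, 0.0, 3.0, 'a', 'crosses')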
Treatment options for RCC include surveillance, ablation, surgery, and radiation. Of these methods, surgery is the only known curative option for RCC. Active surveillance is a reasonable alternative in cases where the patient is unfit for surgery or suffers from co-morbidities [7], or if the tumour is very small at the time of detection. However, the risk of cancer progression remains, and a patient who is not appropriately observed may not be eligible for certain surgical procedures at a later date. Radiofrequency ablation and cryo-ablation are additional modes of therapy which are being developed. Radiation is considered a palliative option. Laparoscopic partial nephrectomy is the surgical treatment of choice for tumours less than 4 cm in diameter. Of these tumours, endophytic tumours (those with a significant volume of tumour below the surface) cause a high rate of complications [78]. The different types of nephrectomy are described in the subsequent section.

2.2 The Nephrectomy

The nephrectomy is the surgical removal of a part or the entirety of a kidney from a patient. It is performed in order to treat kidneys that are injured or diseased, as in patients suffering from RCC. The procedure has several variants: it can be completed as open or laparoscopic surgery, and as a complete or partial procedure.

In the open approach, the surgeon makes a single large incision in the patient's abdomen in order to access the affected renal unit. This incision causes significant post-operative pain and requires lengthy recovery times. However, the surgeon is able to use their tactile senses, see the entirety of the working space, and perform the surgery with full dexterity. The workspace is completely exposed to the surgeon, and he or she can directly access whatever area is needed. In contrast, the laparoscopic approach has the surgeon operate with rigid surgical instruments in the body. Here, the surgeon's sensory experience is reduced, but the patient benefits from reduced pain and a shorter recovery time.

The complete (or radical) nephrectomy involves the removal of the entire kidney from the patient. This reduces post-operative renal function at the trade-off of completely removing the diseased organ. In recent years, the partial approach has gained popularity. With the partial approach, the surgeon aims to minimise the amount of healthy kidney tissue excised while performing a complete resection of the cancerous tumour. Doing so improves post-operative total renal function, as the remaining nephrons can still function independently of what is excised. Because of this, the partial nephrectomy approach is often called kidney-sparing or nephron-sparing surgery. Originally, this was indicated for patients in whom radical nephrectomy of the affected renal unit would result in an anephric state, which would create the need for renal replacement via dialysis. In contemporary times, partial nephrectomy is leveraged to maximize kidney function for all patients, as numerous studies have shown that a global reduction in GFR is associated with poorer quality and quantity of life [40, 73, 74].

Finally, for completeness, there is also the donor nephrectomy. In this variant, a healthy kidney is completely removed from a patient in order to facilitate an organ transplant to a recipient in need. This is in contrast to the radical nephrectomy, where a diseased kidney is removed. The recipient and donor in this approach are assessed for fitness for this surgery. The donor must be healthy enough to undergo the surgery, with no pre-existing renal disease. In addition, the donor must not present significant risk factors for future disease that would impact renal function while having only a single kidney. The donor must be able to consent to the surgery, and have a compatible blood type with the recipient. The recipient must be fit for surgery, and have had their co-morbidities diagnosed, treated, and stabilised.
In addition, the donor must not present significant risk factors for future disease that would impact renal function while having only a single kidney. The donor must be able to consent to the surgery, and have a blood type compatible with the recipient. The recipient must be fit for surgery, and have had their co-morbidities diagnosed, treated, and stabilised.

It is worth noting that tumour enucleation is a new alternative to partial nephrectomy. As RCC compresses the parenchyma, it creates a pseudocapsule around the tumour [38]. The enucleation of the tumour is then possible using this pseudocapsule, achieving cancer survival rates similar to the partial nephrectomy [38]. However, tumour enucleation still requires further clinical studies and is not the method performed at the local hospital site. While this thesis focuses on improving the partial nephrectomy, the systems and principles addressed here are also applicable in the context of RCC enucleation and, indeed, any mode of mechanical intervention for RCC resection.

2.2.1 Procedure Overview

The generalised steps to completing a laparoscopic partial nephrectomy at the local institution of Vancouver General Hospital (VGH) are as follows:

• Tumour Exposure: The peritoneum and the Gerota's fascia must be dissected and mobilised to expose the kidney itself. The Gerota's fascia wraps around and compresses the perinephric fat that surrounds the kidney. Upon mobilization of Gerota's, this fat must be dissected in order to expose the kidney surface. The gonadal vein, ureter and hilum must be exposed. Finally, any additional fat is dissected if needed to identify the tumour of interest.

• Boundary Identification: Upon exposure, the surgeon must identify the bounds of the tumour and will commonly mark the bounds on the surface using electrocautery. To guide their demarcation, the surgeon will frequently use an US transducer. This use of US is described further in Section 2.3. The entirety of this step is referred to as the planning stage in this thesis. This stage is not considered to be under time constraints, as the renal hilum is not clamped and kidney perfusion is nominal.

• Kidney Clamping: After the bounds of the tumour have been identified, the surgeon will cut off blood flow to the kidney by clamping the renal artery and/or vein at the hilum. The interval from clamping of the hilum, through the remainder of the surgery, until hilar unclamping, is known as the warm ischemia time. This is the time in which an organ is cut off from its blood supply but remains at body temperature. The length of the warm ischemia time has the potential to negatively impact the patient, as described in Section 2.2.2. The accepted threshold is 25 minutes.

• Tumour Resection: The tumour resection itself is performed with the surgeon incising into the healthy parenchyma surface of the kidney near the tumour. The surgeon must interpret their marked boundaries, and remember the tumour's subsurface shape and pose from the (now removed) US and pre-operative imaging, to make the initial incision. The surgeon must continue without imaging and complete the resection. This is referred to as the excision stage in this thesis. The excision stage is particularly challenging for endophytic tumours because the ideal approach is to start as close as possible to the tumour and excise straight down from the organ surface along the orthographic projection of the tumour.
For spherical tumours, the ideal excision specimen would fit within a cylinder with diameter equal to that of the tumour. The surgeon commonly takes a surgical margin of healthy tissue around the entirety of the tumour. This is further discussed in Section 2.2.2.

• Kidney Reconstruction: Finally, the surgeon must reconstruct the kidney due to the large defect now created from tumour resection. This involves the time-consuming and meticulous action of sewing vessels, and often performing renorrhaphy (suturing of the kidney). After this reconstruction, the kidney is unclamped and blood flow to the kidney is restored.

Further details, including patient positioning, port placement, and post-operative management, can be found in Zhao et al. [89].

Instead of the conventional approach described above, there is a growing movement toward performing RAPN. Commonly performed with the da Vinci® Surgical System, the RAPN has the surgeon operate in an enhanced environment compared to conventional laparoscopy. The da Vinci® facilitates improved dexterity and precision with its improved ergonomics, filtration of tremors, and articulated instrumentation. It has an additional robotic instrument which the surgeon can use. It does, however, completely remove the haptic feedback of the surgeon, making it difficult to localise subsurface structures like RCC. In a recent study of 65 patients with completely endophytic tumours, it was shown that the use of robotics could result in the safe excision of such tumours [4]. Generally, the RAPN has been shown to be effective for both cystic and solid tumours, favorable in improved renal function, shorter warm ischemia time, and reduced blood loss and learning time [1, 11, 55]. Because facilitating the adoption of the robot-assisted approach can improve the frequency of successful partial nephrectomies [64], the RAPN is used as the exemplary surgery in this thesis.

2.2.2 Operation Benefits and Challenges

The laparoscopic partial nephrectomy yields several advantages over its open and radical counterparts. There is a benefit to preserving kidney tissue, as more retained tissue is likely to reduce the chance of requiring dialysis. Dialysis worsens the patient's lifestyle, limiting their ability to work and increasing the risk of infections and other diseases like cardiovascular disease [44]. It further increases the mortality rate [18]. In comparison to the radical approach, the partial approach's preservation of kidney tissue is directly attributed to improved health outcomes. In a comparison of patients receiving the partial nephrectomy against the radical nephrectomy, results show that those receiving the partial nephrectomy have equivalent long-term oncological outcomes and even an improved overall survival by as much as 10% [40, 51, 71, 73]. Further, the patient is at a reduced risk of developing renal insufficiency and proteinuria (excessive amounts of protein in the urine) [40]. According to the American Urological Association, radical nephrectomy has the potential to increase the risk of kidney disease itself. This is because the removal of one kidney reduces overall global kidney function while increasing the filtration requirements of the single kidney left behind, potentially resulting in kidney insufficiency.

As Zhao et al. succinctly note, "renal function following [laparoscopic partial nephrectomy] is dependent on quality, quantity, and quickness" [89]. Quality refers to the kidney reconstruction, the status of the excision, and the handling of complications.
Quantity refers to the amount of healthy parenchymal tissue remaining in the kidney post-operation. Quickness refers to the length of the warm ischemia time experienced. In comparison to the open approach, the laparoscopic approach is associated with shorter operative time, less blood loss, and reduced hospital stay. However, the laparoscopic partial nephrectomy is also associated with longer ischemia time and more urological post-operative complications such as hemorrhage and urine leakage.

The surgeon must also operate with the constraints imposed by MIS: a reduced field of view, poor depth perception, and reduced haptic feedback (or, in the robot-assisted approach, no haptic feedback at all). Such constraints may cause the surgeon to deviate from the ideal excision plan during the operation, as they impact the ability to localise structures like blood vessels, nerves and tumours. In the case of endophytic tumours, this challenge is reflected in the fact that they have a 47% complication rate, nearly five times that of exophytic tumours [78]. Part of this can be attributed to their depth within the kidney, increasing the risk of the surgeon cutting into the collecting system.

While it is rapidly being adopted, the surgery itself is complex. All surgical steps are fairly involved or time-consuming. A component of this includes the identification and localization of the renal hilum, and correctly clamping it. Should the hilum, or the contained artery and vein, be damaged, severe blood loss occurs. This forces a slow approach as the surgeon "feels" their way around the anatomy. The warm ischemia time threshold of 25 minutes from the clamping of the hilum is also a significant factor [74]. Thompson et al. showed, with statistical significance, that the kidney is damaged by an ischemia time above this threshold.

Finally, there is the consideration of the surgical margin taken. A positive margin is defined as either microscopic (a slight tumour exposure in the specimen) or gross (portions of tumour remaining in the kidney). A negative margin is where the tumour is completely encapsulated in tissue. While small, the laparoscopic partial nephrectomy does have a positive margin occurrence rate of 2.9%, roughly equal to the open partial nephrectomy's rate of 3.3% [38]. To achieve a cancer-negative margin, the surgeon traditionally excises a margin of 10 mm; that is, there should be a 10 mm thick layer of parenchyma encapsulating the tumour completely.

Recent analysis shows that margin size is independent of local tumour recurrence, and that not all positive margins produce recurrent cancer [38, 43, 69]. Instead, a normal renal parenchyma margin of 5 mm or less is recommended [69]. Thus, as margin size does not influence this risk but does influence post-operative renal function, one should minimise the margin as much as possible while maintaining a negative margin. Achieving this all around the tumour, particularly beneath it, is a difficult task. The surgeon should not simply try to avoid a positive margin, but instead optimise post-operative renal function. Doing so in all cases is difficult, and enhancing the surgeon's ability to do so is the goal of this thesis.

2.2.3 Metrics of Evaluation

As the RAPN is simulated in numerous studies in this thesis, the success of the systems in these surgeries must be carefully evaluated and quantified.
To that end, the clinically-relevant metrics of evaluation for the simulated surgeries used in various chapters are as follows:

• Excision Time: the time of completion from the start of kidney clamping through to the end of kidney reconstruction. In the studies performed here, no kidney clamping or reconstruction is simulated, only the excision. This metric corresponds directly to the warm ischemia time.

• Margin Status: whether a positive or negative margin occurred. This impacts whether or not an additional surgery is required. This should be negative. While the positive margin rate is low, these systems should at least illustrate non-inferiority to the conventional method.

• Margin Size: in negative margins, this is the maximum distance between a point on the tumour and a point on the outline of a cross section of the specimen. This measures the excess healthy kidney tissue excised, impacting post-operative renal function.

• Excised Tissue Volume: the volume of the specimen excised, determined by measured weight and known density. This is an additional measure of tissue excised, and similarly impacts post-operative renal function.

• Adjusted Tissue Volume: a corrected version of the excised tissue volume. To account for varying tumour depth, the top layer of parenchyma above the tumour is removed prior to weighing. The specimen's weight and the known density are then used to compute the adjusted volume.

• Specimen to Tumour Volume Ratio: the ratio of the adjusted tissue volume, weighed post-operatively, to the tumour's known volume, measured during construction.

• Depth Beyond Tumour: the distance by which the excised tissue extends beneath the tumour. Determined by US imaging of the excised specimens, this metric corresponds to one of the most challenging components of the partial nephrectomy. It also evaluates the risk of cutting into the kidney's collecting system.

• Cross-sectional Hausdorff Distance: after excision, the specimen is sliced into cross sections of 5 mm thickness. In the cross section that most exposes the tumour, the tumour outline and the cross section perimeter are segmented. The Hausdorff distance is the maximum distance between all closest points on the two contours (a sketch of this computation follows at the end of this section). This metric evaluates the deviation from the ideal excision.

• Cross-sectional Centroid Distance: the centroids of the segmented contours are determined. The Euclidean distance between centroids indicates the discrepancy in alignment from the ideal resection.

There is also the evaluation of the accuracy of the system. The threshold for system accuracy should be feasible to achieve and clinically useful. A high error may cause an increase in tissue excised (the surgeon believes he or she is not safe despite actually being so) and positive margins (the surgeon believes he or she is safe despite not being so). A low error may allow the surgeon to trust the system and operate with confidence, but a certain order of magnitude may not be possible as different components are introduced and their errors accumulate. There is no widely accepted threshold for accuracy. Because of this lack of a value, this thesis uses the recommended size for surgical margins (5 mm) as the threshold. Using this 5 mm value is not related to the mitigation of cancer. It is chosen such that if the surgeon aimed to take a 5 mm margin at a given point using the guidance system, the guidance system would not falsely indicate that there is no tumour present even with the potential error.
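To make the cross-sectional Hausdorff metric concrete, the sketch below computes it for two segmented contours using the standard symmetric definition. This is an illustrative implementation, not the analysis code used in this thesis; the brute-force nearest-point search is adequate for contours of a few thousand points, and the contour coordinates are assumed to have already been scaled from pixels to millimeters.

```cpp
#include <opencv2/core.hpp>
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Largest distance from any point on contour a to its nearest point on b.
static double directedHausdorff(const std::vector<cv::Point2d>& a,
                                const std::vector<cv::Point2d>& b) {
    double worst = 0.0;
    for (const auto& pa : a) {
        double nearest = std::numeric_limits<double>::max();
        for (const auto& pb : b)
            nearest = std::min(nearest, std::hypot(pa.x - pb.x, pa.y - pb.y));
        worst = std::max(worst, nearest);
    }
    return worst;
}

// Symmetric Hausdorff distance: the maximum of the two directed distances.
double hausdorff(const std::vector<cv::Point2d>& tumourOutline,
                 const std::vector<cv::Point2d>& specimenPerimeter) {
    return std::max(directedHausdorff(tumourOutline, specimenPerimeter),
                    directedHausdorff(specimenPerimeter, tumourOutline));
}
```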
2.3 Ultrasound Imaging

Ultrasound imaging is a valuable medical imaging modality. It can be used diagnostically and therapeutically. With US, like other medical imaging, one can see within the patient without the need to cut. US operates on the concept of processing sound and echoes to visualise anatomy within a patient. The acoustic waves are generated from an array of piezoelectric crystals, which convert electrical energy to and from mechanical energy. The arrays can be linear or curved in structure. These arrays are housed in transducers that transmit the sound pulses. By sending electric signals into the transducer, the crystals are made to vibrate, transmitting high-frequency sound into the patient. These waves, with a frequency on the order of MHz, propagate through tissue at a speed of approximately 1540 m/s. Waves are reflected at the boundaries of different structures, creating echoes which are received by the transducer. These reflections occur due to differences in the acoustic impedance of tissue. The amounts of energy reflected and transmitted are determined by the acoustic impedance.

The same array receives the echoes, generating electrical signals which are sent to a computer for processing and image generation. By analysing the time of flight between when a wave is transmitted and when its echo is received, together with the signal intensity of the echo, the depth of a reflection can be determined. The end result is a 2D brightness mode (B-MODE) grayscale image that plots a cross section of the anatomy. This principle of operation requires no ionising radiation or contrast agent, and allows for real-time processing. Each pixel corresponds to the intensity of the echo at that region. US also comes in one-dimensional (1D), 3D, and colour Doppler modes. The transducers are typically hand-held, as frequently seen in obstetric and cardiac imaging, but can also be miniaturised for intra-operative use during laparoscopic surgery. The result is a modality that is both low cost and small in footprint, and that can complement pre-operative imaging.

US does not come without limitations. Inherently, due to the coherent nature of the pulse-echo imaging technique, US images will have speckle noise. Areas of high impedance limit the ability to image through bone or lung. Tissue attenuation also limits the maximum depth of the US image. Image resolution and depth are trade-offs determined mainly by the number of piezoelectric crystals used and the wavelength. Even then, US is strongly dependent on the user's ability to position the transducer and interpret the US image. For the kidney, this requires the transducer to be held perpendicular to the curved surface in order to get an accurate representation of the underlying anatomy. The interpretation requires mentally registering the 2D images and forming a 3D model from them.

US has been used in laparoscopy for decades. Langø et al. provide an overview of the various uses of US to navigate a variety of soft-tissue abdominal laparoscopic procedures [39]. These uses range from providing 2D and 3D guidance, to registration of CT to the intra-operative scene, to image fusion. Specifically, in the context of the laparoscopic partial nephrectomy, intra-operative US is used to localise the tumour [52]. While one may argue the use of pre-operative imaging suffices, Schneider et al. showed that the kidney may move as much as 46.5 mm and rotate as much as 25 degrees between the time of pre-operative imaging and the procedure [61]. This is attributed to the change in patient position.
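To make the time-of-flight relation described above concrete, the depth of a reflector follows from the round-trip echo time and the nominal speed of sound in tissue; the factor of two accounts for the pulse travelling to the reflector and back. The echo time below is an illustrative value, not a figure from this thesis:

\[
d = \frac{c\,t}{2} = \frac{(1540~\mathrm{m/s}) \times (45.5~\mu\mathrm{s})}{2} \approx 35~\mathrm{mm}
\]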
Further, US is used to identify the boundaries of the tumour relative to the healthy parenchyma, and is particularly beneficial in cases where the tumour lies intraparenchymally [38, 52]. Imaging can reveal the lateral bounds, the depth of the tumour inferior to the kidney surface, and the location relative to other structures like the collecting duct or blood vessels [38]. This more precisely informs the site of surgical excision [52].

The use of this imaging information in laparoscopy is limited. In current laparoscopic practice, the surgeon's ability to move the transducer to the ideal pose is restricted. In the robot-assisted approach, control of the transducer often falls to a surgical assistant, whom the surgeon must instruct on how to move it. Current practice has the surgeon view the US image while the transducer is moved (either by a surgical assistant or by themselves), remember what he or she viewed in the image, and mark the tissue with electrocautery. The transducer is then removed and the excision begins. The image information is not present during the excision, arguably one of the most vital components of the partial nephrectomy. While it informs, US does not currently guide. Displaying previously acquired US images requires that the US images be registered to the surface where the images were taken, so that the surgeon sees the images moving with the organ.

That said, US is still beneficial. In fact, a survey of surgeons practicing laparoscopic surgery showed that 84% expect an increased use of US in the future [77]. A separate survey showed that the majority of European urologists performing RAPN use US intra-operatively [29]. If the acquired US imaging data could be used throughout the partial nephrectomy, it would likely assist the surgeon in overcoming the numerous challenges in surgery. One method of doing this is through augmented reality.

2.4 Augmented Reality in Laparoscopic Surgery

One method of providing image guidance during laparoscopy is augmented reality. Milgram et al. describe the reality-virtuality continuum, which incorporates augmented reality and virtual reality displays [46]. On one end lies reality, the real environment that humans perceive and live in. On the other end lies virtuality, a complete virtual environment with no component of the real world. In between lies mixed reality — a mixture of the environments together [46]. Towards reality is the class of displays known as augmented reality [46]. This refers to the augmentation of the real environment with additional computer-generated inputs. These augmentations have the potential to improve what a person perceives and understands about their world. In recent years, computer science has greatly advanced what is possible with augmented reality. Technology now allows people to experience visually compelling and geographically aware augmented reality on hand-held devices, packed with computing power and connected to large computational networks - and all this at the consumer level. With the various challenges a surgeon must face to provide optimal care, it is no surprise that augmented reality with applications in laparoscopic surgery has been an active area of research.

The ability to augment a surgeon's perception with 3D models and spatial information of critical structures can significantly reduce the complexities he or she faces in-vivo. In a recent survey of urologists, 87% felt augmented reality had the potential to be used for navigation and is of interest to the medical community [29].
This is because the use of augmented reality has the potential to enhance the surgeon's abilities, letting him or her "see" beyond what is conventionally available in the laparoscopic view. Further, the "mental" registration required by surgeons to use pre-operative and intra-operative imaging increases mental workload and reduces accuracy. Hughes-Hallett et al. showed that the use of pre-operative imaging is subject to variability in interpretation by the surgeon for intra-operative use [31]. It is insufficient for image-guided surgery to simply present data; it must utilise and display it in meaningful ways. This section presents a brief review of the numerous systems and efforts to use augmented reality in laparoscopic surgery, first discussing the broad efforts made in laparoscopy and then focusing on the use of intra-operative US to provide such guidance. Several surveys exist on the use of augmented reality in laparoscopic surgeries and on the different display devices, tracking, and registration methods [6, 27, 66]. Select publications are described herein.

As early as two decades ago, Fuchs et al. presented an early augmented reality system with the development of a see-through head-mounted device and 3D laparoscope [21]. Ukimura and Gill reported one of the first clinical uses of augmented reality in urology [75]. Their augmented reality system presented 3D visualization of anatomy for both laparoscopic partial nephrectomy and radical prostatectomy. They reported that augmented reality is feasible and improved the surgeon's anatomical understanding. Teber et al. presented a novel real-time surgical guidance tool for the partial nephrectomy. Their use of cone-beam CT for intra-operative imaging, together with multiple radio-opaque navigation aids, allowed them to track the organ in real time. They evaluated their registered guidance in ex-vivo models using agar-based tumours and used manual registration in-vivo [72]. The accuracy between virtual and real models had an error of only 0.5 mm [72]. While a significant step forward in providing guidance for the partial nephrectomy, that work required additional ionising radiation, required multiple aids to be inserted into the organ, and was used for enucleation rather than excision [72]. Furthermore, aid placement was not guided or informed, risking damage to subsurface structures. Their augmentation was also the superposition of segmented data onto the organ in the laparoscopic view [72]. It is unclear how the radio-opaque navigation aids were excised or removed from the organ, and whether or not they introduce unnecessary tissue damage. Regardless, that system was refined and brought to clinical use [67]. Simpfendörfer et al. reported the successful use of their cone-beam CT approach for augmented reality to localise complex and endophytic tumours in-vivo. Fluorescent markers have since been introduced to facilitate automatic registration of pre-operative CT to the intra-operative scene [81]. While such fluorescent markers are promising due to being metabolizable and robust in the face of bleeding and smoke, the steps to clinical use require the development of a clinically-acceptable marker and the injection of a contrast agent into the patient [81]. The need to place multiple markers into the organ is a limitation of both radio-opaque and fluorescent markers.

Using pre-operative 3D CT, Su et al. showed it is feasible to register such imaging to the stereoscopic laparoscopic view [68].
They further showed an accuracy of 1 mm for their registration [68]. However, their work required initial manual alignment and is not real-time. Later, Mohareri et al. presented a novel guidance system using real-time registered MRI-US in robot-assisted laparoscopic radical prostatectomy. That work integrates a combination of different components, including MRI to trans-rectal US biomechanical deformable registration in real time, registration of the US transducer to the da Vinci®, and semi-automatic image segmentation [47]. Further, they show the first ever use of such a system in human patients [47]. That work builds upon significant engineering, and is an excellent illustration of the great effort involved in creating a useful image-guidance system.

Isotani et al. used reconstructed data for pre-operative planning to evaluate renal structures from CT imaging. They used this planning to identify the best approach for their resection in several RAPNs performed in-vivo [32]. Intra-operatively, however, the use of this data was limited to manual manipulation by a surgical assistant, displayed via the da Vinci's TilePro® [32]. Improving on this, Volonté et al. created a software module that provided a stereoscopic rendering of pre-operative reconstructed data, allowing the surgeon to view the model in 3D [79]. The interaction with this model was improved with the installation of a joystick to allow the surgeon to autonomously manipulate the data [79]. The system was considered by surgeons to provide a perceptible benefit in their confidence.

On visualization specifically, the challenges of depth perception and convincing overlays may be considered a sub-field all its own. Hansen et al. presented methods for intra-operative visualization that encoded distances within the texture of the overlays themselves [23]. Despite only being applied to vascular structures, their initial results illustrated visualizations that are expressive and useful, but can unintentionally present too much information [23]. Wang et al. compared different visualizations, including the transparent overlay, a virtual window, and their own depth-aware ghosting method [80]. They noted that a single visualization method may not provide utility for both simple and complex structures, and that the problem is nuanced [80]. Wang et al. also presented an interesting model of considering how the surface, when registered to the camera and tumour, impacts the visualization [80]. A related idea is explored in Chapter 5. Finally, Amir-Khalili et al. explored the value of incorporating uncertainty into the augmentations themselves so as to improve the user's trust in the guidance [3]. Using CT imaging, they overlaid a probabilistic segmentation onto a stereoscopic video feed, resulting in convincing augmentations [3].

In considering display technologies, Bernhardt et al. highlight that the most common method is the static display, which presents a second monitor next to the traditional laparoscopic video feed [6]. Other methods include the projection of augmentations onto the patient's abdomen, head-mounted devices, and silvered mirrors [6]. They note that no work currently exists on the use of a projector within the patient's abdomen [6].

2.4.1 Ultrasound-based Augmented Reality

In laparoscopic partial nephrectomies, the traditional use of augmented reality has been in intra-operative planning. The models and augmentations try to assist the surgeon in understanding the tumour location and the ideal excision prior to excising.
Bajura et al. presented one of the first uses of US to augment the real patient abdomen, while the integration of US for use in robot-assisted procedures was presented over a decade ago in the development of the da Vinci Canvas [5, 42]. The da Vinci Canvas integrated an US transducer, tracked it with the laparoscope's camera, and visualised the imaging. It was evaluated in target finding and US-guided biopsy tasks, but not during tumour resection. That work noted that the display of the US volume is distracting when overlaid onto the scene [42]. Several years later, the same group explored different visualizations of robot-assisted laparoscopic US [63]. They presented a split-screen view of the laparoscopic and US images, a registered wire frame of the US image overlaid onto the scene with a picture-in-picture display of the US, and the registered US on the laparoscopic image itself [63]. They additionally displayed cues to orient the surgeon's transducer and indicated the location of landmarks. They found the use of an integrated robot/US system was received with enthusiasm, and yielded improvements in an array of clinically-relevant tasks, even with simple user interfaces [63].

Closely related is the work of Cheung et al., which presented a visualization platform using fused video and US [10]. The platform used EM tracking of a flexible US transducer to fuse US directly onto the laparoscopic video [10]. They investigated both 2D and 3D visualization, and performed simulated laparoscopic tumour resections on kidney models. While the system accuracy was acceptable between the tracked US and the laparoscope camera (2.38 ± 0.11 mm), they showed no statistically significant improvement in excision time; the use of 3D visualization in fact increased the excision time [10]. The 2D visualization did benefit the planning time, but this stage is untimed as the kidney is unclamped. Cheung et al. do note that the image display is an important consideration, as the direct overlay of the US image is ambiguous in its spatial location relative to the tumour [10].

Pratt et al. presented a navigation system that integrated pre-operative CT- and MRI-based models with semi-automatic registration, presenting models and surgical margins using virtual and augmented reality techniques like inverse realism in real time [57]. The augmentation was validated in both offline and online analysis. While the registration error was as high as 3.16 mm, it shows the feasibility of image guidance for the RAPN procedure, and the value of providing such guidance.

Pratt et al. additionally showed that tracking US intra-operatively, without the use of EM hardware, is practical using computer vision methods [56]. They presented the use and calibration of a checkerboard pattern attached to a micro-surgery US transducer [56]. From this marker, they tracked the US in 6 degrees-of-freedom (6-DOF) and created freehand volumes. However, complex image analysis involving the triangulation of the detected pattern, enforcement of topographical constraints, outlier removal, and parallelization was required. They achieved real-time use but provided a relatively small operating range of 42 mm from the laparoscope [56]. Further, this study did not use the US to perform the surgery and only had surgeons estimate tumour thickness. In a similar vein, Jayarathne et al. and Zhang et al. extended the idea of transducer tracking to non-planar transducers.
Jayarathne et al. use a Gaussian Mixture Model to estimate the pose of a laparoscopic transducer using previously acquired data and the known geometry of the pattern [33]. However, it is not real-time. Zhang et al. combine the circles grid pattern used in this thesis with corner features, and developed a real-time tracking method based on this new pattern, but do not validate the accuracy of its US augmentations [87].

Finally, there is Hughes-Hallett et al.'s presentation of an image-enhanced operating environment built around the RAPN. This study is the largest of its kind, reporting over 60 cases that use image guidance in both the planning and excision stages of the partial nephrectomy. Its planning stage is unique in that a tablet interface is used to visualise pre-operative segmented imaging data, which is not registered automatically to the scene. The excision stage is similar to Pratt et al.'s work in using registered intra-operative US to create and display a reconstructed 3D volume, augmented onto the surgical scene. This was done to assist the surgeon in accounting for tissue deformation. That work does not report quantitative surgical outcomes such as a reduction in tissue volume excised, but does report a subjective benefit. It uses improved marker tracking from Pratt et al., which was shown to be robust in-vivo [58]. Motivated by these works, there remains a need for real-time and intuitive US-based image guidance during the laparoscopic partial nephrectomy, for both its planning and excision stages.

2.5 Challenges of Guidance in Laparoscopy

Despite the significant advances developed over the last few decades, there has not been a widespread adoption of image guidance systems in laparoscopic surgery [54]. There are inherent challenges in providing beneficial augmented reality in an environment as complicated as laparoscopic surgery, and numerous technical challenges that must be tackled, including accuracy and perception [6, 54]. These challenges include:

• Accuracy: the most important criterion for laparoscopic augmented reality; the system must provide high accuracy in order to be useful [6]. The registrations involved, the imaging modalities chosen, and the dynamic scene will all impact the accuracy [6]. A proper assessment of system components and the accumulated errors should be done.

• Organ Motion: the organ of interest may move significantly between the time of imaging and the time of excision [6]. During the operation, mobilization of structures around the organ may shift it significantly, affecting its positioning [6]. Even normal respiratory function or blood pulsation may cause these organs to shift [6]. Schneider et al. reported that significant kidney motion occurred between pre-operative and intra-operative imaging [61].

• Deformation: the soft organs within the abdominal cavity will deform when interrogated by instrumentation. This deformation must be accounted for when providing guidance. Accurate modeling has been shown to be difficult, with inaccuracies of 3-4 mm from actual to modeled deformation [2, 19].

• User Friendliness: image guidance systems should be usable and reduce the cognitive load on the already stressed surgeon [54]. An intuitive interface will improve the surgeon's perception and understanding of the scene [54]. It will also minimize the number of interactions required from the surgeon [6].

• Visualization: the visualization is a vital aspect of the utility of image-guidance systems. Too much or too little information may be presented [23].
Visualizations also risk being difficult to interpret, increasing the cognitive load of the surgeon. Additionally, the accuracy of registration may impact the visualization, and the surgeon's trust in the image guidance.

• Validation: the guidance should be reproducible and consistent [6]. Works that include phantom models for validation still need to consider the nuances of the in-vivo environment and show robustness in it [6]. There is a need for a validation method that is repeatable in all cases, including the guidance for subsurface structures [6].

• Latency and Refresh Rate: the guidance provided must have low latency and a high refresh rate. Doing this for systems with complicated hardware, requiring synchronization of many components, is difficult. For algorithms that are computationally intensive, like dense registration or dynamic renderings, there may be a trade-off between accuracy and computation speed.

Beyond the technical barriers, other challenges include the demonstration of improved or maintained long-term patient outcomes, reduction of operation time, avoidance of additional monitors, and cost-effectiveness [54]. These all require high-volume case studies, which are difficult to achieve [54]. Most of the works have only been done on small volumes of in-vivo cases, evaluated on ex-vivo models, or simulated.

2.6 Remaining Needs

To understand where the proposed systems fit in relation to this previous work, one must consider the stage at which guidance has been used, the imaging modalities, and the point of view of the guidance. Peters and Linte note that the task of understanding the tool-to-target relation is equally as important as that of identifying the target's location [54]. In the context of a partial nephrectomy, it is reasonable to relate the target localization task to the intra-operative planning stage and the instrument-to-target task to the excision stage. Guidance is valuable in both. The majority of work in the field has focused on the planning stage of the partial nephrectomy, and so there remains a need for continuous guidance during the excision. Work that has contributed to the excision stage has used intra-operative CT, adding additional radiation, while the use of US imaging has been limited to the planning stage, despite its potential value in understanding the tumour's depth.

The choice of display method is also important. Numerous works have tried to superimpose or "fuse" the acquired imaging data onto the laparoscopic scene. However, there are concerns about the impact this has on operative inattentional blindness (the failure to recognize an unexpected stimulus), and about the risk of occluding unexpected regions of interest [14, 28, 54]. This thesis tries not to interfere with the surgeon's endoscopic view directly, and explores supplemental displays and, for the first time, projections within the abdomen.

The field to date has, naturally, explored augmentations rendered from the laparoscopic point of view. This is a consequence of using the laparoscope. It has certain limitations, and may in fact not be the ideal choice of point of view. This thesis explores alternative viewpoints for rendering.

Chapter 3

Intra-operative Ultrasound-Augmented Reality

In order to address some of the challenges of performing laparoscopic and robot-assisted laparoscopic partial nephrectomies, this work proposes three novel augmented reality systems using intra-operative ultrasound imaging.
All of these are designed with the overarching goals of reducing the volume of healthy kidney tissue removed and reducing the warm ischemia time. As mentioned in the previous chapter, maximizing retained healthy parenchyma and maintaining a warm ischemia time under 25 minutes will improve post-operative renal function [74]. This chapter introduces the framework that is used throughout all systems in this thesis, and then introduces the first system, called Nephrectomy Guidance using Ultrasound-Augmented Navigation (NGUAN). It additionally introduces the Dynamic Augmented Reality Tracker (DART), a surgical navigation aid that overcomes challenges in tissue deformation. It is important to note that this chapter's purpose is to present the DART and the NGUAN in the context of a RAPN and evaluate their feasibility in a laboratory setting. The main novelties here are the DART, the tumour-centric tracking paradigm, and the augmentations created from a US-based tumour model.

The structure of the chapter is as follows: Section 3.1 introduces the hardware and software components; Section 3.2 describes the computer-vision-based tracking used for pose estimation of the fiducial markers used throughout this thesis; Section 3.3 provides an overview of how the system works to provide guidance; Section 3.4 covers the transformation theory behind the augmented reality overlays; Section 3.5 discusses the augmentations themselves; Section 3.6 discusses the calibration and accuracy testing performed; Section 3.7 describes the single-user single-phantom study performed; and finally, Section 3.8 and Section 3.9 review the results and lessons learned.

3.1 Framework Overview

The augmented reality systems in this thesis are developed from a common framework. This framework was originally created by Dr. Philip Pratt of Imperial College London, and has been developed in conjunction with the author and co-authors at the University of British Columbia over the course of three years. After several iterations and the extension of the framework to support external modules, it now incorporates interfacing with Analogic US machines, tracking fiducial markers in the scene, efficiently reconstructing the 3D US volume, and displaying the augmentations via a display device. These are all done through different C++ modules written by the author that leverage OpenGL and OpenCV, two publicly available programming libraries for graphics and computer vision respectively.

3.1.1 Hardware Components

The framework is built around an HP Z820 PC (Intel Xeon E5-2670 2.6 GHz CPU and 16 GB of RAM). It contains an NVidia Quadro 6000 graphics processing unit (GPU), along with NVidia serial digital interface (SDI) Capture and Output cards (NVidia Corporation, Santa Clara, CA, USA). This hardware allows up to four video feeds into the PC for processing and up to two video feeds out. With this, the da Vinci® Surgical System and the da Vinci Si® Surgical System (Intuitive Surgical, Sunnyvale, CA, USA) feed their stereo video into the PC. These video feeds are 1080i HD, connected using SDI, and are not hardware-synched. The PC can additionally interface with the da Vinci® systems and an US machine (Analogic, Richmond, BC, Canada) via ethernet connections. Both of these machines have their own application programming interface (API) through which data is transmitted.

Figure 3.1: System hardware diagram. da Vinci images © 2017 Intuitive Surgical, Inc.
For the da Vinci®, each patient-side manipulator's tracked pose relative to the endoscopic camera is transmitted. For the US machine, the B-mode image is transmitted. Each of these can be routed into the TilePro® functionality of the da Vinci®, which outputs them on secondary screens in the surgeon's console. While the video feeds can be routed back as the main display for the surgeon, this was not done for this work. That is because initial testing revealed a significant lag in receiving a video feed from the da Vinci®, running the feed through image processing, and returning the feed into the surgeon's console. This lag was not quantified, but was qualitatively large enough to warrant using the TilePro® function. The hardware diagram of the system is illustrated in Figure 3.1.

The US transducer used in all experiments is a custom transducer designed for robot-assisted minimally invasive surgeries by Schneider et al. [62]. It has a 28 mm linear array, contains 128 elements, operates at a centre frequency of 10 MHz, and is compatible with the Ultrasonix machines. It has a unique grasp designed for the Pro-Grasp instrument, which provides autonomy to the surgeon. Schneider et al. showed this transducer has a grasping repeatability within 0.1 mm in all axes, and within 1 degree for roll, pitch and yaw [62]. The US machine is set to a depth of 35 mm, and operated at a frequency of 10 MHz for all experiments. Additionally, the US transducer has a KeyDot® optical marker (Key Surgical Inc., Eden Prairie, MN, USA) on one of its flat faces. These markers have been approved for human use and can be sterilised by autoclave. Further, these markers can be tracked using computer vision, as described in Section 3.2.1 [58]. While this framework interfaces with Analogic US machines, it is extensible to support additional manufacturers as long as a video feed of the machine can be obtained. This means that US machines that provide the US image itself can be supported, even without a research API. The US transducer and the KeyDot® are seen in Figure 3.2.

Figure 3.2: The custom "pick up" US transducer used in this work from Schneider et al. Adhered is the tracked KeyDot®.

As the da Vinci® has a large catalogue of supported surgical instruments, it is important to note that this work primarily supports a subset of those instruments, including the Pro-Grasp, Monopolar Curved Scissors, and Black Diamond Micro Forceps. As the length of every tool is known from Intuitive Surgical's instrument catalogue, the framework can be extended to support any number of instruments.

Each of the augmented reality systems produced from this framework is evaluated in simulated RAPNs. To this end, realistic kidney phantom models are created. Cylindrical polyvinyl chloride (PVC) phantoms are created using Super Soft Plastic (M-F Manufacturing, Fort Worth, TX, USA). The phantoms additionally have a curved surface. In order to accurately represent the kidney, the phantom's elastic modulus is designed to be 15 kPa, consistent with the reported cortical elastic modulus for in-vivo porcine kidneys [22]. Each phantom has a 10-30 mm spherical inclusion at a depth of approximately 20 mm. By design, the tumour appears as a hyperechoic mass in a US image. For the NGUAN, the phantoms are white with transparent inclusions. For the remaining systems, the phantoms are dyed red with black inclusions for ease of post-operative analysis.
Due to the depth and endophytic nature of these inclusions, each phantom achieves a RENAL score of 10x given its size, location and depth [36]. This indicates the phantoms simulate difficult cases, and they provide a significant risk of cutting into the kidney's collecting duct system. The gold standard for the tumour volume is determined by weight during construction, using the known density of the material.

3.1.2 Dynamic Augmented Reality Tracker

Tracking the kidney's surface is difficult in both real and phantom models due to its nearly feature-less texture and its deformable nature. To overcome these challenges, so that one can register US images to the surface, this work proposes the use of the DART, a custom surgical navigation aid. It is designed in Solidworks (Solidworks, Waltham, MA, USA) and can be 3D printed in stainless steel or plastic (Xometry, Gaithersburg, MD, USA). The DART has barbed legs of 10 mm in length and can be inserted into the patient abdomen via a 12 mm trocar. It can also be repeatably picked up in a similar manner as the custom US transducer used. It has a flat face for the placement of a KeyDot® marker. The DART and a metal version of it are seen in Figure 3.3.

Figure 3.3: The plastic DART with a pattern adhered (left), a metal version with scale reference (middle), and the DART as inserted into an ex-vivo porcine kidney (right).

The DART can be inserted in the planning stage of the RAPN, into the kidney's renal cortex above the tumour of interest. As there is a layer of healthy parenchyma above the endophytic tumour which is excised regardless, inserting the DART into this layer allows it to be excised along with the specimen. Due to its barbed legs, it is considered to be rigidly connected relative to the tumour. Thus, the DART can track a local region of the organ's surface relative to the tumour. With this, tracking of the DART creates a tumour-centric tracking paradigm whereby all tracking and guidance is done relative to the tumour. This is useful in order to reconstruct an accurate US volume, improve system accuracy, and provide persistent information over time, even after removal of the US transducer and without the need for surface tracking.

It is noted that the DART is an intermediate solution. While there have been some steps towards deformable tissue tracking, these are often not robust, are too sparse, or do not account for deformation throughout the entirety of the surgery. Examples include Collins et al., who present a promising solution whose evaluation does not include an accuracy measure for the tracking, and Mahmoud et al., whose ORB-SLAM-based approach still achieves a relatively high tracking error of 3 to 4.1 mm RMSE and is far from real-time [12, 45]. Further, such solutions have not been made publicly available for validation [45]. Until this challenge can be solved, the DART allows the exploration of augmented reality in soft-tissue surgery without the need for advanced algorithms. With this single small drop-in tracker, augmented reality and surgical guidance can be created to enable maximal nephron sparing while maintaining a negative margin. Initial experimentation with the DART inserted into an ex-vivo porcine kidney showed that the kidney itself could be lifted by pulling on the DART. To assess the rigid relationship of the DART to a simulated tumour, finite element modeling (FEM) is done in ANSYS (ANSYS, Pittsburgh, PA, USA). This modeled the DART's movement in a kidney during an US scan.
The kidney is modeled as a 50 x 50 x 50 mm cube, treated as linear, elastic, homogeneous, and isotropic. The tumour is 20 mm in diameter and simulated to be 20 mm within the kidney. The DART is placed above the tumour, and the US transducer is placed 10 mm from the DART's edge. A Poisson's ratio of 0.48 is used for both the kidney and the parenchyma. Input parameters to the simulation included the applied US force, the barbed leg length of the DART, and the kidney stiffness. Using a calibrated force sensor, the average maximum downward force for three complete scans of phantoms is 0.7 ± 0.3 N. Thus, forces of 0.1, 0.5, and 1.0 N are evaluated. The leg length is varied between 0, 5 and 10 mm. As Grenier et al. report different cortical and medullary elasticities for in-vivo porcine kidneys (15.4 ± 2.5 kPa and 10.8 ± 2.7 kPa respectively), simulations are done using each average elasticity [22]. To evaluate the simulations, the distance between the theoretical tumour center (20 mm below the aid, regardless of pose) and the actual center is calculated.

3.2 Vision-based Tracking

As described, both the US transducer and the DART have a KeyDot® marker placed on them. These planar markers, illustrated in Figure 3.2 and Figure 3.3, are a grid of black and white circles. The grid is asymmetric in its dimensions, meaning the number of circles per row differs from the number of circles per column, providing rotational invariance. This design allows the system to estimate the pose of the planar grid from a single camera. Further, these fiducials can be tracked in full 6-DOF. In order to discuss the pose estimation, however, camera calibration and centroid detection need to be discussed first.

Camera Calibration

Cameras vary in properties which influence the imaging process itself. These include the focal length, the number and types of lenses used, and even whether or not the pixels are isotropic. Using a pinhole camera model, these cameras can be modeled mathematically. This model can then be used to compensate for lens distortion, measure the size of objects in real-world units instead of pixels, or estimate the pose of the camera in the world. For this work, Zhang's calibration method is used [88]. In this chapter, this is done using the CalTech Camera Calibration toolbox (a software module for MATLAB), and in the later chapters with OpenCV [8]. Both use the same implementation.

The da Vinci's laparoscope has a pair of HD cameras which creates a 3D stereoscopic display for the surgeon to see. Each camera of the stereo endoscope can be calibrated according to a pinhole camera model with intrinsic parameters. These parameters indicate how a 3D point in space is projected onto the camera's 2D imaging plane. The intrinsic parameters include the focal lengths f_x and f_y, the length from the modeled pinhole to the imaging plane, and the coordinates of the camera's principal point c_x and c_y. The principal point is the location of the intersection of a perpendicular line from the modeled pinhole to the imaging plane; it is often not in the center of the camera image. Further, a skew parameter s is included to account for image axes that are not perpendicular to one another. This is represented by the matrix K, seen in Equation 3.1. K can alternatively be considered a combination of a shear (from the skew factor), a scaling (from the focal lengths), and then a translation (from the principal point).
What this means is that a point in the camera's 3D coordinates can be transformed into its 2D imaging plane by Equation 3.2.

\[
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \tag{3.1}
\]

\[
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} r_{xx} & r_{yx} & r_{zx} & t_x \\ r_{xy} & r_{yy} & r_{zy} & t_y \\ r_{xz} & r_{yz} & r_{zz} & t_z \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{3.2}
\]

where (X, Y, Z) is a 3D world point, the position and orientation of the camera are represented by the r and t parameters, (u, v) represents the pixel point, and s is a scale factor. When considering a set of points that all lie on the same plane, this equation can be simplified by treating the Z component as zero. This results in Equation 3.3, where K and the (r, t) parameters can be treated as a single 3-by-3 homography matrix, H. This matrix can then be found via decomposition.

\[
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = H \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} \tag{3.3}
\]

The calibration for a single camera is performed by taking pictures of a calibration target [88]. In this work, the calibration target used is a checkerboard pattern: a 7 × 8 checkerboard with each square being 4 × 4 mm. The checkerboard pattern is moved in the camera's field of view, with images taken of it in varying poses. It is important to include rotational and translational changes in the set of images, as these will impact the quality of the calibration [88]. Each image is converted into grayscale and run through a Harris corner detection algorithm, producing a set of 2D pixel points [88]. These points are refined for sub-pixel accuracy. This set of 2D points then corresponds to a set of 3D points, known through the geometry of the checkerboard target [88].

The 3D points are treated as lying on a single plane, having a Z-coordinate of zero as in Equation 3.3 [88]. This is done to reduce the problem complexity, simplifying the problem to determining a homography, and to provide constraints on the intrinsic parameters [88]. These feature points can then be used to estimate the intrinsic and extrinsic parameters using a closed-form solution [88]. The estimates are refined using a Levenberg-Marquardt least-squares optimization [88]. This algorithm aims to minimise the re-projection error of the calibration parameters. Re-projection error is defined to be the sum of squared differences between the 2D points found by the corner detection and the estimated 2D points created when using the current iteration's calibration parameters. Simply, it is the error between where the calibration thinks the pixels are versus where the pixels actually are. The optimization continues until the improvement in error falls below a threshold. The result is an iteratively obtained set of camera calibration parameters. Throughout this work, the re-projection error is less than 0.4 pixels, sufficient for use.

However, the above model does not incorporate a lens, which is used to increase the number of rays that pass through the sensor. Lenses can cause a distortion that bends the rays near the edge of the image, called radial distortion. There is also tangential distortion, which occurs when the lens is not parallel to the camera sensor itself. The camera model is expanded with a set of seven coefficients to account for these distortions.

Each camera of the stereo pair can additionally be calibrated for its extrinsic relation to the other, creating a transformation from one camera's coordinate system to the other. Doing so is useful for stereo-based surface reconstruction, discussed in Chapter 5, and for potentially incorporating triangulation for improved tracking accuracy.
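The OpenCV implementation used in the later chapters exposes this entire pipeline (corner detection, sub-pixel refinement, and Levenberg-Marquardt refinement) through a few calls. The sketch below is a minimal, hypothetical illustration of that usage, not the thesis's calibration code: the image file names and view count are placeholders, and note that a board of 7 × 8 squares presents a 6 × 7 grid of interior corners to the detector.

```cpp
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main() {
    const cv::Size corners(6, 7);   // interior corners of a 7x8-square board
    const float square = 4.0f;      // square size in millimeters

    // 3D corner positions on the planar target; Z = 0 as in Equation 3.3.
    std::vector<cv::Point3f> board;
    for (int r = 0; r < corners.height; ++r)
        for (int c = 0; c < corners.width; ++c)
            board.emplace_back(c * square, r * square, 0.0f);

    std::vector<std::vector<cv::Point3f>> objectPts;
    std::vector<std::vector<cv::Point2f>> imagePts;
    cv::Size imageSize;

    for (int i = 0; i < 20; ++i) {  // images of the target in varying poses
        cv::Mat img = cv::imread("calib_" + std::to_string(i) + ".png",
                                 cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();

        std::vector<cv::Point2f> pts;
        if (!cv::findChessboardCorners(img, corners, pts)) continue;

        // Refine the detected corners to sub-pixel accuracy.
        cv::cornerSubPix(img, pts, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS +
                                          cv::TermCriteria::COUNT, 30, 0.01));
        objectPts.push_back(board);
        imagePts.push_back(pts);
    }
    if (imagePts.size() < 3) return 1;  // not enough usable views

    // Closed-form estimate refined by Levenberg-Marquardt; the return value
    // is the RMS re-projection error in pixels (this work accepts < 0.4 px).
    cv::Mat K, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPts, imagePts, imageSize,
                                     K, distCoeffs, rvecs, tvecs);
    return rms < 0.4 ? 0 : 1;
}
```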
With a set of calibration parameters for even one of the laparoscope's cameras, one can accurately estimate its pose relative to a known pattern in the scene. This is a fundamental concept in the vision-based tracking used throughout this work.

3.2.1 Pose Estimation

Given a set of 2D/3D point correspondences, it is possible to estimate a calibrated camera's full 6-DOF pose relative to a known pattern by solving the classic computer vision problem called the Perspective-n-Point problem (PNP).

In the case of the circles grid, a corner detection algorithm cannot be used. Instead, the image containing the circles grid is run through a blob detection algorithm, specified to look for regions with a certain convexity. Each region is filtered based on its convexity and the known relation of the circles grid. The end result is a set of 2D points which are the centroids of each circle. This creates the 2D/3D point correspondences, as with the checkerboard pattern. If the entirety of the circles grid is not detected, then the tracking algorithm stops processing the current frame. The same Equation 3.2 can be used here to model PNP. However, unlike camera calibration, the camera parameters are known and the only components requiring estimation are the rotation and translation (r, t) parameters. These values can similarly be estimated using the Levenberg-Marquardt optimization. The result is the pose of the KeyDot® (on either the DART or the US transducer) relative to the camera. A limitation of this approach is that PNP may have multiple solutions as a result of the least-squares algorithm; for tracking, this would manifest as flipped coordinate systems. To mitigate this risk, the previously found transform is used to initialise the least-squares optimization algorithm. This improves the reliability of the tracking, particularly when the pattern is facing perpendicular to the camera which, from experience, often results in multiple solutions.

The process of detecting the circles and computing the pose from them is time-intensive on a full HD resolution image. To improve the tracking speed, a motion estimation algorithm is used to create a region of interest for the next frame. The circle detection then first runs in the region of interest, significantly improving speed, and resorts to the full resolution image if no candidate circles grid is found. This method of tracking was developed by Pratt et al. [58]. Initial in-vivo use was explored and proved to be reliable and accurate. Specifically, the tracking algorithm was found to have an operational envelope of -50.1 to 52.5 degrees of rotation about the X-axis and -52.7 to 57.6 degrees of rotation about the Y-axis, and had a working depth of 13 to 86 mm along the Z-axis. The use of motion estimation reduces the pose estimation to taking only 11 ms, resulting in real-time processing.

Pratt et al. demonstrated that motion estimation with circular pattern tracking outperformed checkerboard tracking [58]. It was shown to track more consistently over a large workspace and across different illumination levels [58]. These appealing aspects of the tracking algorithm warrant its use in the intra-operative systems presented in this work.
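The detection-plus-PNP loop described above maps naturally onto OpenCV, as sketched below. This is an illustrative reimplementation, not the framework's tracking module: the grid dimensions and circle spacing are placeholders rather than the true KeyDot® geometry, and the motion-estimated region of interest is omitted for brevity. The previous pose is reused as the initial guess for the iterative solver, which is the flip-mitigation strategy just described.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

class KeyDotTracker {
public:
    KeyDotTracker(const cv::Mat& K, const cv::Mat& distCoeffs)
        : K_(K), dist_(distCoeffs) {
        // 3D model of a hypothetical 4x11 asymmetric circles grid with 2 mm
        // centre-to-centre spacing, lying in the Z = 0 plane of the marker.
        const float spacing = 2.0f;
        for (int r = 0; r < grid_.height; ++r)
            for (int c = 0; c < grid_.width; ++c)
                model_.emplace_back((2 * c + r % 2) * spacing,
                                    r * spacing, 0.0f);
    }

    // Estimates the marker-to-camera pose; returns false if the full grid
    // is not detected, in which case the frame is skipped.
    bool estimate(const cv::Mat& frame, cv::Mat& rvec, cv::Mat& tvec) {
        std::vector<cv::Point2f> centres;   // blob centroids, grid-ordered
        if (!cv::findCirclesGrid(frame, grid_, centres,
                                 cv::CALIB_CB_ASYMMETRIC_GRID))
            return false;

        // Initialise the iterative (Levenberg-Marquardt) solver with the
        // previous pose, when available, to avoid flipped solutions.
        bool ok = cv::solvePnP(model_, centres, K_, dist_, rvec_, tvec_,
                               havePrev_, cv::SOLVEPNP_ITERATIVE);
        if (ok) {
            havePrev_ = true;
            rvec = rvec_.clone();
            tvec = tvec_.clone();
        }
        return ok;
    }

private:
    cv::Size grid_{4, 11};          // circles per row x number of rows
    cv::Mat K_, dist_, rvec_, tvec_;
    bool havePrev_ = false;
    std::vector<cv::Point3f> model_;
};
```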
3.3 Principle of Operation

There are several components of NGUAN required for its use, and it is important to understand their roles in the overall system. Prior to the steps below, it is assumed that the calibrations have been completed. The principle of operation is as follows:

1. DART Placement: After the kidney surface has been exposed, an untracked US scan is performed. The surgeon uses this scan to estimate the tumour location and identify where he or she would like to insert the DART. In normal practice, this scanning is already done, so little additional time is consumed here.

2. Tracked US Scan: Once the DART is inserted, a freehand tracked US scan is performed. Both the US transducer's and the DART's KeyDot® markers are tracked by the laparoscope, using the method previously described. The US images are recorded relative to the DART. Depending on the DART location, this scan can be performed in an entirely translational manner or include rotation in order to capture the entire tumour. This step is done during the planning stage, where no clamping is done and there is no time limit. The 3D volume is reconstructed from the set of tracked 2D US images. Voxel length and slice thickness are both set to 0.375, determined experimentally to provide a good quality reconstruction. The optimised reconstruction takes less than 30 seconds. Figure 3.4 shows the DART inserted into a phantom model, with a tracked US scan being performed.

Figure 3.4: Simulated surgery set-up with the DART inserted into a phantom, and tracked US scan performed (top). US images (bottom left) are segmented to create a 3D tumour model (bottom right).

3. Tumour Model Generation: Manual segmentation of the volume's cross sections for the tumour is performed for each slice. Segmentation is done using ITK-Snap, a third-party software [86]. Again, this is performed during the untimed planning stage. From the segmentations, surface extraction is performed.

4. Augmented Reality Overlays: In addition to the regular surgical scene view, augmentations are provided to the surgeon in real-time. These augmentations include a virtual rendering of the tracked surgical instruments and the mesh model, as well as an augmented surgical scene. Treating the tumour as a rigid body, as the DART moves, the tumour's movement can be continuously rendered. The surgeon's console view is conceptually shown in Figure 3.5, and the overlays are described in depth in subsequent sections.

5. Guided Tumour Excision: During the excision of the tumour, if the da Vinci® surgical instrument comes within a set threshold distance of the centroid of the tumour, the viewpoints flash red to warn the surgeon that he or she is approaching the tumour. Last, the DART is removed together with the tumour and surrounding tissue.

NGUAN differs from the work of Teber et al. in the following ways: only one surgical navigation marker (the DART) is inserted into the kidney, the tracking provided is 6-DOF, the augmented reality is a 3D representation of the tumour generated by US, and the augmentations involve virtual camera viewpoints.

Figure 3.5: Conceptual illustrations of the surgeon's console view in both stages of the RAPN.

3.4 Transformation Theory

In the equations of this thesis, the notation for transformations is as follows: ^{A}T_B is a 4×4 transformation matrix that rotates and translates coordinate system B into coordinate system A. The notation A_0 indicates the coordinate system A at time = 0. As coordinate systems may move over time, a coordinate transform at time = n would correspondingly be denoted as A_n. The notation for a point in a given coordinate system is p_A.
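To make the ^{A}T_B notation concrete, the following minimal NumPy sketch (illustrative names, not the system's implementation) shows how such transforms are built, inverted, chained, and applied to points:

```python
import numpy as np

def make_transform(R, t):
    """Build the 4x4 homogeneous matrix A_T_B from a 3x3 rotation and 3-vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def apply(T, p):
    """Map a 3D point expressed in frame B into frame A using A_T_B."""
    return (T @ np.append(p, 1.0))[:3]

def invert(T):
    """B_T_A is the inverse of A_T_B: transpose of R, and -R^T t."""
    R, t = T[:3, :3], T[:3, 3]
    return make_transform(R.T, -R.T @ t)

# Chaining follows the super/subscripts: A_T_C = A_T_B @ B_T_C.
A_T_B = make_transform(np.eye(3), np.array([0.0, 0.0, 10.0]))
B_T_C = make_transform(np.eye(3), np.array([5.0, 0.0, 0.0]))
A_T_C = A_T_B @ B_T_C
p_C = np.array([1.0, 2.0, 3.0])
p_A = apply(A_T_C, p_C)   # the same point, expressed in frame A
```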
In order to provide augmented reality overlays, the system must track the surgical instruments relative to the tumour at any given time. Further, for the overlays involving virtual cameras, these virtual cameras must be placed and positioned correctly over time so that they appear fixed relative to the real camera. Each of these has its own set of transformations. The underlying transformation theory for these steps is outlined in this section.

Illustrated in Figure 3.6, the individual coordinate systems, their origins, axes, and units in NGUAN are:

• U: the 2D US image. The origin is the top-left pixel. The X-axis increases laterally from left to right of the image. The Y-axis increases axially from top to bottom of the image. Units are pixels.

• I: the surgical instrument's coordinate system. The origin is located on the da Vinci® instrument's wrist, which is tracked by the API. The Z-axis increases along the length of the tool, towards the tip of the tool. The X-axis and Y-axis are arbitrarily defined. Units are millimeters.

• C: the calibrated camera's 3D coordinate system. The origin is located in 3D space. The X-axis moves left to right on the camera image. The Y-axis moves top to bottom on the camera image. The Z-axis goes into the camera image itself. Units are millimeters.

• L: the laparoscope's coordinate system. The origin is unknown but located somewhere physically within the laparoscope. Determined experimentally, the laparoscope's axes and units follow the same conventions as C.

• V: a virtual camera coordinate system. The origin is defined arbitrarily. The axes and unit conventions are the same as C.

• K: the KeyDot® marker on the US transducer. The origin is in the top-left circle. The X-axis moves up along the columns. The Y-axis moves right along the rows. The Z-axis is into the page of the marker. Units are millimeters.

• D: the KeyDot® marker on the DART itself. Same coordinate description as K.

Figure 3.6: Coordinate system diagram in each stage of the RAPN using NGUAN.

For simplicity, the differences in convention between OpenGL and OpenCV, the intermediate coordinate systems used in OpenGL when rendering, and the projection of a 3D point onto the camera's 2D imaging plane are not covered in this thesis.

3.4.1 Virtual Cameras and Time

When considering the surgeon's normal viewpoint, surgeons have difficulty interpreting two aspects of the tumour: how deep the tumour lies and how far from the laparoscope it extends. To facilitate an easier view of these aspects, two viewpoints orthogonal to their normal view are provided. The first viewpoint is a "bird's eye" view, and the second viewpoint looks in from the side of the scene. These viewpoints are achieved using virtual cameras placed relative to the DART's first detected pose. They are not placed relative to the real camera, as it would create a confusing experience to move the camera and have two additional viewpoints move at the same time. The same is true for moving the DART over time. Placing the virtual cameras relative to the first detected pose lets their positions remain fixed despite a dynamic scene.

Virtual cameras themselves are mathematical models of cameras that do not exist (hence "virtual"). Using the same pinhole camera model as in calibration, it is possible to simulate alternative viewpoints of a 3D scene. As the surgical instruments and tumour are modeled in full 3D, it is then evidently possible to create virtual viewpoints. Doing so requires the instruments, tumour, and virtual cameras to be registered in the same coordinate system.

These virtual cameras are tumour-centric, as their placement is focused around the tumour model.
The transformation from the virtual camera coordinate system V to the initial DART coordinate system, D_0, is decomposed into its rotational and translational components, which are each computed separately. The translations are a pre-defined distance between the tumour and the virtual camera. This is described by Equation 3.4. ^{D}N_C can be any of the three unit column vectors of the transform ^{D}T_C, describing an axis of the real camera in DART coordinates. For the virtual cameras in NGUAN, these are the X and Y axes of the real camera. The constant s is an arbitrary scalar that sets the distance along that real axis. p_D is the 3D location of the tumour centroid in DART coordinates. The result of Equation 3.4 is the translation component, t, of ^{D_0}T_V for each virtual camera.

t_{^{D_0}T_V} = p_D + s \cdot {}^{D}N_C    (3.4)

The rotational component for the virtual cameras is defined as a 90-degree rotation about either the real camera's X or Y axis relative to D_0. Combined with the translation above, this creates ^{D_0}T_V for each camera. This suffices for the first time the DART is found. However, in the description thus far, only the surgical instruments may move. If the DART or the camera moves, these changes in the coordinate systems cannot be accounted for as is. Therefore, when either the camera or the DART moves, the pose of the virtual cameras must be updated. This is done using the relative transform of the DART at t = 0 to t = n, as in Equation 3.5. This can then be applied to ^{D_0}T_V to produce ^{D_n}T_V, as seen in Equation 3.6.

^{D_n}T_{D_0} = {}^{D_n}T_{C_n} \cdot {}^{C_n}T_{C_0} \cdot {}^{C_0}T_{D_0}    (3.5)

^{D_n}T_V = {}^{D_n}T_{D_0} \cdot {}^{D_0}T_V    (3.6)

3.5 Augmented Reality Overlays

Summarizing, the surgical instruments can be continuously tracked relative to a US-based tumour model in 3D. Two virtual cameras can be placed relative to the tumour model over time, with the DART and camera moving freely. This information must be relayed to the surgeon. The surgeon's normal video feed is supplemented with two augmented feeds using TilePro®.

Seen in Figure 3.7, the first is the surgeon's endoscopic view with augmentations (referred to as the "direct overlay"). The second is a split screen of the two virtual viewpoints (top-down and side). The views both face the centroid of the tumour and remain fixed relative to the real camera. The tumour and instruments are continuously rendered as the DART moves. The rendering also displays the movement of the tumour in the virtual viewpoints.

Figure 3.7: The set of visualizations as presented in TilePro®. Endoscopic view augmented (left) and virtual viewpoints (right). Pink and yellow cones are virtual renderings of the tracked surgical instruments. Red, green, and blue meshes are visualized in each view. No interpolation was performed between segmented slices of the mesh, resulting in the poor mesh visualized.

For NGUAN, the surgical instruments are rendered as cones. These cones are centered in I, with a height equal to the current instrument's length. These lengths are obtained from the Intuitive Surgical instrument catalogue. The left instrument is coloured pink and the right instrument is coloured yellow to distinguish between them. The tumour model is coloured red in the direct overlay, green in the top-down view, and blue in the side view. Furthermore, a flashing red warning is given to the surgeon based on his or her instruments' distance to the tumour center and a pre-defined threshold.
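Before moving to calibration, the virtual-camera bookkeeping of Equations 3.4-3.6 can be summarised in a short sketch. The names are illustrative, and the rotational component R_V is assumed to be supplied as the 90-degree rotation described above:

```python
import numpy as np

def place_virtual_camera(p_D, D_T_C, axis_col, s, R_V):
    """Equation 3.4: offset the virtual camera a distance s along one of the
    real camera's axes (a unit column D_N_C of D_T_C), starting from the
    tumour centroid p_D; R_V is the virtual camera's rotation w.r.t. D0."""
    D_N_C = D_T_C[:3, axis_col]
    D0_T_V = np.eye(4)
    D0_T_V[:3, :3] = R_V
    D0_T_V[:3, 3] = p_D + s * D_N_C
    return D0_T_V

def update_virtual_camera(Dn_T_Cn, Cn_T_C0, C0_T_D0, D0_T_V):
    """Equations 3.5 and 3.6: re-express the virtual camera after the DART
    or the real camera has moved."""
    Dn_T_D0 = Dn_T_Cn @ Cn_T_C0 @ C0_T_D0   # Equation 3.5
    return Dn_T_D0 @ D0_T_V                 # Equation 3.6
```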
3.6 System Calibration and Accuracy

There are several components in the NGUAN system that require calibration. The calibration for the US transducer, the da Vinci® to camera transform, and the total system error are described in this section.

3.6.1 Ultrasound Image to KeyDot Transform

To reconstruct a 3D volume from the 2D US images relative to the DART coordinate system, the pixel-to-millimeter scale factor and the US calibration are needed. To convert a pixel value to a physical value, the pixel-to-millimeter scale factor is required. This is determined by imaging a block of known dimensions and observing its US image. Segmenting the block in the lateral and axial dimensions and dividing by the known lengths obtains the scale factor. The US image is assumed to have isotropic pixels, so the same scale factor is expected in both dimensions.

The purpose of the US calibration is to calculate the transformation from the 2D image to the KeyDot® marker on the transducer face. These two coordinate systems are illustrated in Figure 3.6. The unknown transformation between the two systems is denoted as ^{K}T_U. Given that the US transducer's CAD model is available, this transform is determined geometrically.

To assess the US calibration accuracy, the reconstruction accuracy of a pinhead is used. By imaging a pinhead in a water bath from 10 different poses of the tracked transducer, and manually segmenting the US images for their 2D pixel locations, each pinhead point can be transformed into 3D coordinates. The Euclidean distance from each pinhead point to the centroid of all points is calculated, and the root mean square (RMS) error is reported.

With this transform, a tracked US scan relative to the DART can be performed. This is captured in Equation 3.7, where ^{C}T_K is the transform from the KeyDot® to the calibrated camera and ^{D}T_C is the transform from the camera to the DART, both obtained from the pose estimation described previously.

p_D = {}^{D}T_C \cdot {}^{C}T_K \cdot {}^{K}T_U \cdot p_U    (3.7)

3.6.2 da Vinci Laparoscope to Camera Transform

The API from Intuitive Surgical provides tracking information for the instruments' coordinate system I relative to the laparoscope coordinate system L. The da Vinci's instrument is a 12-foot-long, 13 degrees-of-freedom (13-DOF) kinematic chain. This lends itself to an absolute tracking accuracy of approximately 50 mm and a relative tracking accuracy of 1 mm [37]. However, the single camera of the laparoscope used for tracking has a different origin than the laparoscope, as seen in Figure 3.8. Thus, for accurate tracking of the surgical instrument relative to the calibrated camera, the robot needs to be registered with respect to the camera, solving for the unknown transform ^{C}T_L. This registers the surgical instrument to the camera coordinate system, as seen in Equation 3.8. Additionally, the da Vinci's tracking information for a manipulator comes from a combination of a high-resolution encoder, giving relative information, and a low-resolution potentiometer, giving absolute information. Inaccuracies in the encoder and potentiometer must be accounted for. To simplify this, the da Vinci's tracking error is accounted for in the same calibration as solving for ^{C}T_L.

p_D = {}^{D}T_C \cdot {}^{C}T_L \cdot {}^{L}T_I \cdot p_I    (3.8)

Figure 3.8: The calibrated camera coordinate system (C) differs from the laparoscope coordinate system of the da Vinci® (L). The two must be registered to one another.
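Equations 3.7 and 3.8 reduce to simple chains of matrix products. The fragment below is an illustrative rendering of those chains, assuming 4×4 homogeneous transforms and the pixel-to-millimeter scale factor described above; all names are placeholders:

```python
import numpy as np

def us_pixel_to_dart(u, v, scale, K_T_U, C_T_K, D_T_C):
    """Equation 3.7: p_D = D_T_C . C_T_K . K_T_U . p_U.
    The (u, v) pixel is first scaled into millimeters (isotropic pixels)."""
    p_U = np.array([u * scale, v * scale, 0.0, 1.0])
    return (D_T_C @ C_T_K @ K_T_U @ p_U)[:3]

def instrument_to_dart(p_I, L_T_I, C_T_L, D_T_C):
    """Equation 3.8: p_D = D_T_C . C_T_L . L_T_I . p_I."""
    p = np.append(np.asarray(p_I, dtype=float), 1.0)
    return (D_T_C @ C_T_L @ L_T_I @ p)[:3]
```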
For Chapter 3, solving ^{C}T_L is achieved via registration of 14 pairs of points, one in the camera coordinate system (C) and one in the laparoscope coordinate system (L). To generate each pair of points, a KeyDot® is moved to a unique pose, and at each location the surgical instrument tip touches the known origin of the KeyDot®. In turn, a leave-one-out error for each of the 14 pairs is calculated based on a registration of the other 13 pairs using Horn's method. That is, after calibrating on 13 pairs of points, the target registration error (TRE) is calculated using the remaining pair. The RMS of those 14 errors is reported.

Figure 3.9: The modified DART used for error testing, with instrument and pinhead overlaid.

3.6.3 Total System Error

Finally, to characterise the accuracy of the overall system, a modified DART is designed. As seen in Figure 3.9, this modified version has a flat circular top and a 2.5 mm pinhead that extends along the Z-axis of the DART. This pinhead simulates the tumour centre. The extension is 25 mm in length. By taking a tracked US scan, a pinhead model can be generated in the DART coordinate system. With this, the da Vinci® surgical instrument is used to pick up the pinhead itself. The instrument's location in the DART coordinate system is recorded. The error is calculated as the distance between the pinhead centroid and the surgical instrument. The RMS error over 10 different poses is reported.

3.7 User Study

In evaluating NGUAN, simulated RAPNs are performed. One expert urologist versed in performing RAPNs participated. The phantoms provided had inclusions that are purposefully unique in shape and location, limiting the surgeon's ability to learn from one case to another. The tumour models are generated prior to the user study. In each case, the surgeon is instructed to use the US transducer to scan the phantom surface. Using a permanent marker, the surgeon simulated electro-cautery and marked the tumour boundaries. This mimicked the planning stage. After this, the surgeon immediately began the excision itself. In the first case, the surgeon is only given the US transducer during the planning stage, with no additional guidance thereafter. In the second case, the surgeon spent 20 minutes training on the NGUAN system prior to starting the operation. The surgeon is then given NGUAN to operate with during both the planning and execution stages.

After both trials are completed, the surgeon answered a questionnaire in which he provided feedback about both cases and both systems. The survey included questions regarding the usability and helpfulness of each system. The surgeon is also interviewed for open feedback on the system. To capture quantitative benefits, the metrics of excision time, margin status, maximum margin size, adjusted excised tissue volume, and specimen-to-tumour volume ratio are reported for the two cases performed. These metrics are previously described in Section 2.2.3. The excised specimen mass is cut into 10 mm slices to determine margin status and size.

3.8 Results

3.8.1 Finite Element Simulations

For all FEM simulations, the distance between the theoretical and actual tumour centre never exceeded 1 mm. From this, the rigidity assumption for the navigation aid results in an error in kidney tumour location of no greater than 1 mm.
Simulation results are summarised in Figure 3.10.

Figure 3.10: FEM simulation of tumour movement as a function of force and leg length using 15.4 kPa stiffness (left) and 10.8 kPa stiffness (right).

3.8.2 System Calibration and Accuracy

The geometric US calibration's pinhead reconstruction relative accuracy is 0.9 mm RMS. Over the course of capturing the 10 US images of the pinhead, the US transducer covered a range of 16 × 10 × 19 mm. The da Vinci® laparoscope-to-camera calibration TRE is 1.5 mm RMS overall; the single lowest TRE is 0.6 mm. The overall system TRE is 5.1 mm RMS.

3.8.3 User Study

Using only the US, the execution time is 10 minutes and 45 seconds. The tumour volume is 4 cm³ and the adjusted excised tissue volume is 24 cm³; thus the specimen-to-tumour volume ratio is 6:1. The largest negative margin size is 24 mm. Using NGUAN, the execution time is 7 minutes and 30 seconds. The tumour volume is 5.5 cm³ and the adjusted excised tissue volume is 16.5 cm³; thus the specimen-to-tumour volume ratio is 3:1. The largest negative margin is 12 mm. In both cases, there is a gross and a separate microscopic positive margin.

After the user study, the surgeon preferred the use of NGUAN over the US for visualizing the tumour in the execution phase. General comments about the NGUAN system include that the most useful guidance cue is that the screen flashed red once the instruments got within a certain distance of the tumour. The warning aided the surgeon in avoiding the tumour and minimizing the healthy tissue excised. The surgeon found the top-down view easier to interpret than the side view.

Table 3.1: NGUAN initial feasibility study results. Results of the trials using ultrasound only (US) and the guidance system (NGUAN) are shown.

Metrics | US (n=1) | NGUAN (n=1)
Excision Time (min:secs) | 10:45 | 07:30
Margin Status (/1) | 1 gross & micro | 1 gross & micro
Margin Size (mm) | 24 | 12
Known Tumour Volume (cm³) | 4.0 | 5.5
Adjusted Tissue Volume (cm³) | 24 | 16.5
Specimen to Tumour Volume Ratio | 6:1 | 3:1

3.9 Discussion

The success of image-guided surgical systems is largely dependent on their accuracy, their usability, and the clinical need for the image guidance. Each of those aspects of NGUAN is addressed in this discussion. Both the US pinhead reconstruction precision error of 0.9 mm and the da Vinci® calibration error of 1.5 mm are consistent with errors for similar experiments in the literature of 1.2 mm and 1.0 mm, respectively [15, 37]. The larger error in NGUAN may be because the gold standard used is optically tracked KeyDot® markers, as opposed to an Optotrak® 3020 stylus (Northern Digital Inc., Waterloo, ON, Canada), which has a reported tip error of 0.25 mm [37]. The navigation aid was simulated in a finite element analysis and, relative to the tumour, did not deviate more than 1.0 mm from the expected distance. This is adequate for the purposes of providing guidance in the soft kidney. More advanced simulations, such as that of Camara et al., who simulate the kidney's deformation under an ultrasound scan using a particle-based approach with 1-2 mm error [9], may be integrated as well.
The pinhead extension on the modified DART is not designed to be grasped by an instrument, so imprecision in simply grabbing the tool could lead to added error. As well, the accuracy of the manual pinhead segmentation is not evaluated against the ground truth. These aspects are improved upon in the next chapter. As per the goals of the system, tissue sparing will be impacted by system error.

The single-surgeon/single-phantom study is primarily for feasibility. With it, NGUAN can be refined and improved. The DART is used to generate a tumour model and provide guidance without impeding the surgeon. Future studies are required with more trials of the system. This will provide more robust results than the single-surgeon/single-phantom study performed, as well as a clearer understanding of usability and preference. This is addressed in the next chapter.

In terms of usability, the NGUAN orthogonal virtual camera viewpoints are different from other image guidance systems for abdominal surgery. The advantage of the orthogonal viewpoints is that they provide the surgeon a perspective he or she would not normally have, without occluding the surgeon's view of the operative field. As well, because these viewpoints are displayed based on the tracking information, and are not dependent on the video feed itself, no additional lag is introduced. However, further work is required in NGUAN on the positioning of the views, as the surgeon had difficulty orienting himself relative to the given views. Additional simple cues, such as rendering the camera, showing the centre-line axis of the virtual viewpoints, or letting the surgeon set the pose of the virtual viewpoints, could help minimise these issues. Using a colour gradient to represent the distance of the instrument to the tumour could improve the warning cue given to the surgeon as well. These augmented reality overlays are improved upon in the next chapter.

An evident critique of the DART includes the line-of-sight requirement. In order to provide any guidance, the DART must be in the field of view of the laparoscope. This is acceptable during the planning stage and early in the execution stage. However, due to the manner in which the surgeon excises the specimen, he or she will zoom in close to the point of excision. He or she will often also lift the specimen up to try to see underneath it. During these steps, the DART may fall out of view. As well, blood may occlude the DART. This can be mitigated. For example, one could insert an additional DART into the side of the specimen, detect the new reference, and continue tracking. Alternatively, blood occlusion of the DART pattern can be alleviated through an omni-phobic coating to repel blood [41], or by washing it with saline intra-operatively.
Simulation and evaluation of the amount the kidney changes, and incorporating that into the provided guidance, will be required.

The DART and NGUAN offer many interesting avenues for future research. One novel addition would be the incorporation of surface reconstruction. This can be facilitated by structured light using, for example, laser-based or projector-based solutions. A reconstructed surface mesh could be displayed in the orthogonal views to provide further depth cues. Furthermore, the surface could be used to provide the surgeon a true top-down view, as opposed to a view that is orthogonal to their camera viewpoint. Future work could also explore the use of the tumour model in intra-operative planning. These directions are addressed in Chapter 5.

Chapter 4
Improvements to NGUAN

The previous chapter introduced NGUAN, an augmented reality system that combines US, computer vision-based tracking, and kinematics-based tracking to provide continuous real-time guidance during tissue excision. NGUAN is a largely standalone system composed of a surgical navigation aid called the DART and a US transducer, requiring no extrinsic tracking hardware. It leverages the da Vinci® as a development and testing platform. However, the initial iteration of NGUAN had significant shortcomings. Its system error was reported to be 5.1 mm, which is unacceptable given that the standard of care for a margin size is considered to be 5 mm. The virtual viewpoints, despite making use of the 3D modeling and real-time tracking, are hard to interpret in a time-constrained environment. Further, it did not provide guidance for a significant part of the surgery: identifying when to cut underneath the tumour. This challenge is a difficult one as, with an endophytic tumour and the small size of the kidney, the surgeon risks cutting into the collecting duct. Therefore, while NGUAN is promising, improving its error and simplifying its augmentations would yield better utility. To that end, this chapter presents Nephrectomy Guidance Using Ultrasound-Augmented Navigation 2.0 (NGUAN+).

NGUAN+ uses the same principle of operation as NGUAN, but has been refined with four different augmentations: a proximity alert, an orientation cue, a simpler virtual viewpoint, and a projected path of the instruments. Further, this chapter also presents an intra-operative validation tool that can be used to assess augmentation accuracy during surgery. NGUAN+ is similarly evaluated in simulated RAPNs by an expert urologist, but with more trials to achieve statistical significance. This chapter is structured as follows: Section 4.1 discusses the use of an augmented reality validation tool during surgery; Section 4.2 outlines the specific methodology changes to improve calibration and system accuracies; Section 4.3 discusses the new augmentations presented to the surgeon; Section 4.4 outlines the user study used to evaluate NGUAN+; Section 4.5 presents system error and study results; and Section 4.6 discusses the results and future work.

4.1 Intra-operative Validation Tool

The modified DART presented in the previous chapter is only used for system error evaluation. However, its utility can be extended intra-operatively for both calibration and validation. This requires a few refinements to the initial design. For distinction from the DART, this tool is referred to as the ballpoint stylus.

The previous design had a circular face with a 2.5 mm ballpoint that extended from the face. The KeyDot® marker is manually placed on the face.
Because of this, the location of the ballpoint can only be estimated relative to the DART. Any errors in DART placement would propagate to the supposed ground-truth ballpoint location. As well, the circular face is not designed to be easily grasped by the da Vinci® instrument. This limited the ability to assess the total system accuracy. To that end, the ballpoint stylus is entirely 3D printed (Proto3000, Vaughan, ON, Canada), including the circle pattern, which is printed in colour. The ballpoint stylus has known geometry up to the printing precision of the manufacturer. Proto3000 reports using the Stratasys J750 printer, which has a printing resolution of 14 microns. The circular face is replaced with a portion of the DART, including the repeatable grasp design. The ballpoint itself is increased to 3 mm in diameter to ease segmentation, and has slots the same size as a surgical instrument. It is more easily grasped, and when the surgical instrument grasps the ball tip, the instrument tip and ball tip are coincident. The ballpoint stylus and the completely 3D printed DART are seen in Figure 4.1.

Figure 4.1: DART 3D printed in colour (left) and the ballpoint stylus being scanned (right).

4.2 Refinements to System Accuracy

4.2.1 da Vinci Laparoscope to Camera Calibration

Changes in the algorithm and in the design of the calibration stylus resulted in significant improvements to the laparoscope-to-camera calibration. Previously, Horn's algorithm was used. Horn's method gives the transformation parameters between two corresponding sets of points (rotation, translation, and scale) that minimise the mean squared error between the sets. However, Umeyama notes that this method may give an incorrect rotation [76]. Umeyama presents a refinement of Horn's method that, with his closed-form solution, always yields the correct rotation [76]. For solving the laparoscope-to-camera transform, ^{C}T_L, Umeyama's method is used.

The new ballpoint stylus is now used for the calibration of the laparoscope to the camera. The same method for data collection of paired points is used. The stylus is moved to 23 unique poses. At each pose, the stylus' origin in the camera is collected, and then the origin is touched with a surgical instrument and the instrument's pose is collected. Instead of a leave-one-out approach to calculating TRE, a random set of 12 pairs is used to first calculate the ^{C}T_L transform. The fiducial registration error (FRE) for these 12 pairs is reported. The resulting transform is then applied to the remaining 11 pairs, and the TRE is reported. FRE is reported to assess the registration accuracy, and TRE to assess the guidance accuracy. While there is little to no correlation between FRE and TRE [20], both are reported for completeness.

Finally, the previous chapter noted that ^{C}T_L is a combination of the laparoscope-to-camera transform and a correction for errors in tracking the instruments. It assumed that the single transform is valid for each of the patient-side manipulators used. In reality, a separate calibration is required for the left and the right instrument. Both are calibrated prior to the user study.
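For reference, a minimal implementation of the Umeyama alignment (rigid form, without scaling, as needed for ^{C}T_L) together with the FRE/TRE split described above might look as follows; the point arrays are placeholders:

```python
import numpy as np

def umeyama_rigid(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t,
    following Umeyama's closed-form solution (no scale)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # The sign correction below guarantees a proper rotation
    # (determinant +1), addressing the failure mode of Horn's method.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

def rms_error(R, t, src, dst):
    residuals = dst - (src @ R.T + t)
    return np.sqrt((np.linalg.norm(residuals, axis=1) ** 2).mean())

# 23 paired points: 12 to fit the transform (FRE), 11 held out (TRE).
# pts_L, pts_C = ...  # (23, 3) arrays in laparoscope and camera coordinates
# R, t = umeyama_rigid(pts_L[:12], pts_C[:12])
# fre = rms_error(R, t, pts_L[:12], pts_C[:12])
# tre = rms_error(R, t, pts_L[12:], pts_C[12:])
```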
4.2.2 Total System Accuracy

With the improved ballpoint stylus, system error can be better assessed. This error can be determined by comparing the known center of the ballpoint, from the geometry of the CAD model, with the instrument's location when the instrument is grasping the stylus. This is captured in Equation 4.1 and Equation 4.2. Note that Equation 4.1 is the same as Equation 3.7, repeated for convenience, and that the subscript D, which previously represented the DART, here represents the ballpoint stylus; the coordinate systems are interchangeable. With Equation 4.1, the segmented model's centroid can be registered to the ballpoint stylus. Comparing this centroid against the known ground truth assesses the error in vision-based tracking combined with reconstruction and segmentation. Then, with Equation 4.2, the 3D location of the instrument (p_I) is transformed into the laparoscope coordinate system L, then to the camera coordinate system C, and finally into the ballpoint stylus' coordinate system. By comparing the instrument's location to the known ground truth, it is possible to evaluate the error in the combined tracking. Moreover, when the ballpoint stylus is grasped by the instrument and then has its ballpoint reconstructed, the two points p_U and p_I should be equal, both representing the ballpoint's centroid.

p_D = {}^{D}T_C \cdot {}^{C}T_K \cdot {}^{K}T_U \cdot p_U    (4.1)

p_D = {}^{D}T_C \cdot {}^{C}T_L \cdot {}^{L}T_I \cdot p_I    (4.2)

To evaluate these errors, the modified DART is held by an instrument in a water bath at room temperature. The ballpoint stylus is scanned, reconstructed, and segmented. The ballpoint stylus is then moved to 10 poses, still held by the instrument. This makes p_U = p_I. The Euclidean distance between the instrument's location and the ground truth center is calculated in each pose, and the average is reported. An example of the resulting guidance is seen in Figure 4.2.

Figure 4.2: A comparison of the view without augmented reality (left) and with augmented reality (right). The red mesh model appears within 1 mm of the ground truth ballpoint stylus, and the augmented reality overlays appear within 1 mm of ground truth.

4.3 New Augmented Reality Overlays

Recall that the surgeon operates under a time constraint while trying to minimise the tissue excised. Because of this, it is impractical to develop nuanced augmentations that cannot be quickly interpreted. While high-fidelity overlays may be visually appealing, they are limited in utility if not intuitive and informative. Using this design consideration, four simple augmentations are proposed, as seen in Figure 4.3. Augmentations are similarly provided to the surgeon using TilePro®, rather than interrupting or occluding the surgeon's normal video feed. They are as follows:

1. Traffic Light: a colour-coded proximity alert of the instrument's distance to the tumour's surface, provided to the surgeon as blocks of colour. The surgeon sets four ranges of distance from the instrument to the tumour's surface. Based on these ranges, the alert flashes red, yellow, orange, or green. For this thesis, the ranges are: less than 2.5 mm, between 2.5 mm and 3.5 mm, between 3.5 mm and 5.0 mm, and beyond 5.0 mm. A traffic light is provided for each of the two surgical instruments.

2. Compass: a conical overlay orienting the surgeon's surgical instrument to the tumour. As the tumours in this work are endophytic, it is important to know the relative orientation of the tumour to an instrument at any given time, particularly if the instrument is past the tumour. A grey cone pointing from the instrument to the tumour's center is provided, with the cone's height proportional to the instrument-to-tumour distance. The cone is occluded if the surgeon's tool is behind the tumour model.

3. Projected Path: a virtual needle-like extension with spheres of known diameter and spacing, also set by the surgeon.
In Figure 4.3 and Figure 4.4, the spheres are all set 1 mm apart, with 1 mm diameters. The functionality of the traffic lights is combined with the spheres, allowing the surgeon to gauge the distance of his or her instrument to the tumour should he or she continue in the current pose.

4. Surface View: the projected virtual scene from a virtual camera placed 50 mm away from the tracked aid, facing perpendicular to the grid of circles, as seen in Figure 4.3. Treating the aid as a planar approximation of the local surface, the surgeon can then see the tumour's depth from the surface virtually. An example of this is seen in Figure 4.4.

Figure 4.3: Left TilePro® feed with the augmented endoscopic view (top). Right TilePro® feed with virtual viewpoint and traffic lights (bottom). Compass overlay in grey, and projected path overlay shown for each instrument.

Figure 4.4: Magnified virtual viewpoint showing how the surgeon uses the guidance when close to the tumour underside. A red sphere indicates a distance within 2.5 mm of the tumour surface.

The display as seen by the surgeon is captured in Figure 4.5. All four augmentations are given to the surgeon at the same time, and it is up to the surgeon which augmentation to pay attention to.

Naively, determining the distance of the instrument's tip to the tumour's surface would require a comparison of the point to all points on the tumour's model. This would be a computationally expensive task. To provide real-time distance guidance, the augmentations here leverage a pre-computed signed distance field. This field is computed after the tumour model is generated from US, and incorporates a 10 mm margin from the tumour surface along each axis. Using a signed distance field reduces the complex calculation to looking up an indexed value. It further captures irregularities in model topography, allowing for precise augmentations. This is particularly beneficial when the model is complex or contains additional structures.
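A minimal sketch of this lookup is shown below, assuming the signed distance field has already been sampled on a regular grid around the tumour. The names are illustrative, and the assignment of the four colours to the four distance bands is an assumption for the example:

```python
import numpy as np

# Pre-computed signed distance field: negative inside the tumour,
# positive outside, sampled on a regular grid in DART coordinates.
# sdf: (nx, ny, nz) array; origin (mm) and spacing (mm) define the grid.

def distance_to_tumour(p_D, sdf, origin, spacing):
    """Nearest-neighbour SDF lookup for a 3D point in DART coordinates."""
    idx = np.round((p_D - origin) / spacing).astype(int)
    idx = np.clip(idx, 0, np.array(sdf.shape) - 1)  # clamp to the 10 mm margin
    return sdf[tuple(idx)]

def traffic_light(distance_mm):
    """Map instrument-to-surface distance onto the four alert colours
    (band-to-colour assignment assumed for illustration)."""
    if distance_mm < 2.5:
        return "red"
    if distance_mm < 3.5:
        return "orange"
    if distance_mm < 5.0:
        return "yellow"
    return "green"
```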
4.4 User Study

The clinical utility of NGUAN+ is evaluated by having an expert perform simulated RAPNs. The participant is a practising urologist with over 10 years of experience, trained in performing RAPNs. This surgeon completes 9 nephrectomies using only laparoscopic US, as is the conventional method, and 9 with NGUAN+, for a total of 18. The simulated surgeries are performed on mock phantom models with an elastic modulus of 15.4 kPa, similar to that of the human kidney [22]. These models have 10-30 mm diameter black inclusions with graphite in them. This is to improve ultrasound contrast and to facilitate post-operative analysis by visual inspection.

Figure 4.5: NGUAN+ as seen in the surgeon's console. Augmentations provided using TilePro®.

The location and depth of the inclusions are randomised. Prior to the study, each phantom model is scanned and the tumour models are generated. The segmented tumour model volumes and radii are compared against the ground truth from tumour construction to evaluate tumour model accuracy. The surgeon is able to train using the ballpoint stylus. This allows him to interface with the segmented ballpoint model, understand the error in the system visually, and trust the system. The surgeon is given a practice surgery using only US as well. This training period is not timed or included in the results.

In each simulated surgery, the surgeon is instructed to scan the model's surface using the pick-up US transducer and, with a permanent marker, outline the tumour boundaries they observe. The surgeon then begins the excision stage. In the case of US, the surgeon has no additional guidance, while in the case of NGUAN+ the surgeon has image guidance throughout the excision.

For all surgeries performed, the excision times, margin status, excised and adjusted specimen volumes, specimen-to-tumour volume ratio, and the depth beyond the tumour are reported. For qualitative feedback, the surgeon completed a Likert-scale questionnaire adapted from the System Usability Scale after each surgery [60]. After all the surgeries are completed, the surgeon is given open-ended questions to answer about his experience using the AR. A two-tailed paired t-test is performed for statistical significance at a level of 0.05. Holm-Bonferroni correction is used to account for multiple comparisons.

4.5 Results

The average (and standard deviation) known volume of tumours excised under guidance was 1.9 ± 0.4 cm³, compared to the average segmented volume of 2.7 ± 0.7 cm³. The average (and standard deviation) radius of the segmented models was 0.9 ± 0.3 mm greater than the ground truth radius. This indicates that the segmented models were larger than the ground truth by only about a millimeter.

In calibrating the laparoscope to the calibrated camera coordinates, the average and standard deviation FRE of the 12 points used to determine the calibration transform is 0.8 ± 0.3 mm. Evaluating the determined calibration on a separate set of 11 paired points, the average and standard deviation TRE is 1.0 ± 0.4 mm. The working volume covered is 45 × 30 × 50 mm.

Total system error is defined as the Euclidean distance between the tracked instrument's tip and the ground truth center of the ballpoint stylus. This requires the stylus to be scanned, reconstructed, and registered with the instrument. Over 10 poses, the average and standard deviation of the total system error is found to be 2.5 ± 0.5 mm. When comparing the instrument's tip against the segmented model, rather than the ground truth, the average and standard deviation distance between them is 1.4 ± 0.5 mm.

The quantitative results of the surgeries performed are summarised in Table 4.1. These initial results show that, with no statistically significant difference in excision time, the surgeon is able to excise significantly less tissue with NGUAN+ than without. The known tumour volumes excised with US and augmented reality were not significantly different, nor is there a significant difference in positive margin rate. Note, however, that the positive margins with US were gross margins that left visible tumour behind.

Table 4.1: Quantitative results of simulated partial nephrectomies. Averages and standard deviations (avg ± stdev) of each metric are listed. Results of the trials using ultrasound only (US) and augmented reality (NGUAN+) are shown. An asterisk indicates statistical significance (p < 0.05) of augmented reality compared to US only.

Metric (avg ± stdev) | US (n=9) | NGUAN+ (n=9)
Excision Time (secs) | 203 ± 30 | 257 ± 50
Margin Status (/9) | 2 gross | 1 microscopic
Known Tumour Volume (cm³) | 2.4 ± 1.0 | 1.9 ± 0.4
Excised Tissue Volume (cm³) | 30.6 ± 5.5 | 17.5 ± 2.4*
Adjusted Tissue Volume (cm³) | 22.1 ± 5.2 | 10.6 ± 2.1*
Depth Beyond Tumour (mm) | 10.2 ± 4.1 | 3.3 ± 2.3*

Figure 4.6: Cross sections of a tumour excised with augmented reality guidance. Slice closest to the surface on the left, farthest on the right.
The single positive margin achieved with augmented reality is considered microscopic, with a small amount of tumour exposed and no visible tumour left behind. Importantly, with AR, the surgeon is able to significantly reduce the depth cut past the tumour from approximately 10 mm to 3 mm.

Figure 4.6 shows some example cross sections of specimens excised with AR, each approximately 5 mm thick. Table 4.2 summarises the qualitative metrics from the Likert-scale questionnaires. When asked to rank the augmented reality overlays from most to least preferred, the expert indicated he strongly preferred the projected path, then the traffic lights, the compass, and finally the virtual viewpoint.

Table 4.2: Qualitative metrics with the questions asked about the augmented reality system. Scores reported where 1 = strongly disagree and 5 = strongly agree.

Question Asked | Score (avg ± stdev) | Degree of Agreement
I found the system unnecessarily complex. | 1.3 ± 0.5 | Strongly Disagree
I thought the system was easy to use. | 4.8 ± 0.5 | Strongly Agree
I imagine most people would learn to use this system very quickly. | 4.8 ± 0.5 | Strongly Agree
I found this system cumbersome to use. | 1.3 ± 0.5 | Strongly Disagree
I felt very confident using this system. | 4.7 ± 0.7 | Strongly Agree
I needed to learn a lot of things before I could get going with the system. | 1.0 ± 0.0 | Strongly Disagree
I felt I understood where my region of interest was spatially. | 4.8 ± 0.7 | Strongly Agree
I felt I had a good understanding of the relative distance from my tool to the tumour. | 4.8 ± 0.7 | Strongly Agree
I felt I was not at risk of cutting into the tumour. | 4.6 ± 0.7 | Strongly Agree
The system meets my needs. | 4.6 ± 0.7 | Strongly Agree

4.6 Discussion

This chapter presents an improvement to the novel intra-operative US-based augmented reality system, known as NGUAN+. The total system error is significantly reduced to 2.5 ± 0.5 mm, which is acceptable given the 5 mm margin that is the standard of care. This system meets the accuracy requirement to be useful for guidance. Augmented reality is beneficial in this study in resecting the lateral edges of the specimen. It is informative in determining the point at which to cut underneath the tumour, and is considered essential in guiding the deep resection through tissue. The augmented reality is noted as being predictable in when it would and would not appear (due to occlusion of the DART). This is beneficial, as the surgeon could understand why no guidance is presented at times and how to resolve it, but it is also frustrating. This line-of-sight issue could be mitigated with the use of multiple aids added during excision.

Specifically considering the virtual renderings of the instruments themselves, the surgeon noted that it is useful to have them even though the small registration error is noticeable. This misalignment is in fact the approximately one-millimeter error of the laparoscope-to-camera calibration, which is magnified given the laparoscope's field of view and distance to the tools. The surgeon found the renderings useful, as he is able to mentally adjust for the error because he also understood where the physical alignment should be.

Using the projected path and its incorporated traffic light, the surgeon adopted a check-and-go strategy, a minor modification of his traditional approach to excision. With this strategy, he paused during cutting and checked his tool's surroundings. At various points where the spheres were hard to see or his instruments were occluded, the traffic lights were used as a proxy.
Counter-intuitively, this modified strategy did not significantly increase the excision time. Speculatively, this may be in part due to the overall reduced amount of tissue that needed to be excised and improved surgeon confidence. However, further experimentation and testing are required to validate this hypothesis. With respect to the virtual viewpoint, the surgeon elaborated that, although useful in concept, it is difficult to quickly interpret and mentally register to the scene while under a time constraint. In an untimed stage of the surgery, like the planning stage, a virtual viewpoint may be beneficial. The surgeon's perception of depth is still limited with the augmentations provided. For example, the projected path, which copies the laparoscopic view and renders on top of it, is created using a single camera feed, contrary to the surgeon's 3D stereoscopic video feed. This can be improved by using TilePro® to provide a 3D stereo overlay.

While the study is still small, with a single user performing 18 surgeries, it does demonstrate the feasibility of using tracked US to create continuous guidance, with encouraging results. The surgeon is able to use the NGUAN+ system to significantly reduce the amount of healthy tissue excised, at no increase in excision time. It is particularly valuable in reducing the risk of cutting into the collecting system, as noted by the significantly smaller depth cut under the tumour of 3.3 ± 2.3 mm. Of all four augmentations, the surgeon used the projected path the most, as it mimicked his real environment more closely than the others.

The ballpoint stylus is useful during the operation as a validation device for the augmented reality. As Bernhardt et al. note, there is a current need to validate the accuracy of augmented reality during surgery without exposing the physical structure being modeled [6]. For endophytic tumours, validation would require exposing the tumour to see how well the augmented reality aligns - a predicament. That said, with the grasp design, the surgeon can easily drop in and pick up the ballpoint stylus in a reliable manner. The ballpoint's circles pattern can be tracked, and the surgeon can observe the augmented reality with respect to the ballpoint stylus. With the ballpoint rendered, the surgeon can see the error in segmentation. The surgeon can interact with the model and the physical ballpoint, and gain an understanding of the system error. If there appears to be a significant error, re-calibration is warranted, which can be facilitated intra-operatively by the same tool through its trackable pattern and repeatable grasp.

The guidance presented in NGUAN+ is powerful. With it, there is the potential for a truly minimally invasive approach to partial nephrectomy. The surgeon noted that with this guidance, one could preserve the top layer of parenchyma almost entirely. This new approach would start by making a single incision above the tumour, retracting it with the da Vinci's additional arm, and inserting the DART into the gap. The model generation steps are performed as described. Then, the surgeon could leverage the guidance to core out the tumour itself. Rather than the ideal resection being a cylindrical shape, the true ideal resection becomes a teardrop shape. Upon tumour excision, the reconstruction phase of the RAPN would be significantly reduced. Minor reconstruction may still be required, but the large defect of the conventional approach would no longer exist. Reconstruction may become as simple as suturing itself.
This approach would not only further spare nephrons (by preserving a layer of tissue) but also reduce the risk of reaching the 25-minute threshold for warm ischemia time. Investigation into the feasibility of this approach is warranted.

However, while this system is excellent for the excision stage, there remains a need for guidance during the planning stage. The surgeon may use the guidance presented in this chapter as a safety measure, ensuring he or she does not cut into the tumour, but it does not inform the surgeon's initial resection. The ideal excision approach is not easily known to the surgeon. In particular, because of the shallow angle of the laparoscope and its limited range, the surgeon cannot explore their workspace with ease to find the best angle at which to resect. Augmented reality during the planning stage potentially yields additional benefits. As well, since the model is generated during the planning stage, it is a natural extension to provide guidance during it. This challenge is addressed in the following chapter.

Chapter 5
Projector-based Augmented Reality Intra-corporeal System

The previous two chapters introduced the framework of a US-based augmented reality guidance system. They presented two sets of overlays and an initial evaluation of the guidance during the excision stage of a RAPN. However, these works did not assist the surgeon in the planning stage. As the tumour model is generated during this stage, and there is no time constraint, it is natural to provide guidance for intra-operative planning to improve surgical outcomes.

This chapter proposes a novel guidance system called the Projector-based Augmented Reality Intra-corporeal System (PARIS) that provides guidance and addresses the issue of the initial resection angle for the surgeon. PARIS uses a miniaturised projector within the patient's abdomen both for reconstruction of the kidney surface and as a method to augment the surgical scene itself. Then, using the DART, the projector can project the 3D tumour model back onto the scene in a surface-corrected manner. Different visualizations can be performed with the projector, presented from either the laparoscope point-of-view (LPOV) or the projector point-of-view (PPOV).

The chapter is structured as follows: Section 5.1 describes the surgical challenge of the initial resection angle; Section 5.2 describes the miniaturised projector used in PARIS; Section 5.3 covers the principle of operation and transformation theory behind PARIS; Section 5.4 outlines the steps involved in providing projector-based augmented reality and the augmentations in this work; Section 5.5 describes the calibration of the system's components; Section 5.6 outlines the initial and secondary user studies performed; Section 5.7 presents the accuracy and user study results; and finally Section 5.8 recaps the work done and its limitations.

5.1 The Challenge of Resection Angle

For endophytic tumours, the ideal resection approach has two components: location and direction. The ideal location is to begin where the tumour is closest to the organ surface, minimising the layer of healthy parenchyma excised above the tumour. The ideal direction is to cut straight down from this point, specifically down the normal of the surface. The surgeon would like to begin his or her resection by cutting down the orthographic projection of the tumour on the surface.
For a spherical tumour, this ideal excision would be encompassed in a cylinder with diameter equal to the tumour's diameter.

With this in mind, and using the conventional US-only approach, this ideal approach is unlikely to be achieved. That is because the surgeon does not definitively know where the ideal starting location is. US may indicate the depth of the tumour, but it still lacks information on the approach angle. Additionally, the surgeon may inadvertently hold the transducer off from perpendicular to the kidney surface, leading to a misinterpretation of the real starting location. Finally, the US is again only temporary.

Further, when considering augmented reality and providing guidance to the surgeon, the conventional manner has been to augment the laparoscope's video. This creates augmentations in the laparoscope's point of view (LPOV). However, the laparoscope is often not positioned in the ideal manner for resection, directly above the tumour and perpendicular to the surface. The laparoscope is often at an acute angle relative to the surface. While the surgeon may be able to "see through" the surface with augmentations, this differs from the ideal approach. Further, the surgeon's laparoscope is at a different angle than his or her instruments. When considering the set of possible angles that can be achieved by each, the laparoscope's set is smaller than that of the instruments. This makes the mental registration between the two for the purpose of resection difficult. Simply, it is hard to reach a target if your viewpoint is significantly different from the tool you're using to reach it. This leads to the hypothesis that the surgeon's initial resection can be improved by providing augmented reality from a different point of view. With the use of an intra-corporeal projector, the surgeon can potentially explore a wider range of angles from the PPOV. With the same device, one can then augment the scene and obtain beneficial guidance.

5.2 The Pico Lantern

Projector-based augmented reality is an appealing display modality as it augments reality itself. On the reality-virtuality spectrum, projections are considerably closer to reality than their computer graphics-based equivalents. Using a projector for display is an alternative strategy that has yet to be explored within the patient's abdomen. Conventional computer graphics are frequently superimposed onto the laparoscope's video feed, appearing separate and floating on top. These superimposed renderings are not affected by changes in the scene itself, such as lighting conditions, and provide poor depth perception. The use of projections, on the other hand, can provide convincing augmentations that blend naturally with the scene. To that end, the intra-operative and intra-corporeal projector used in this work is called the Pico Lantern, created by Edgcumbe et al. for surface reconstruction and augmented reality in laparoscopic surgery [16]. The terms projector and Pico Lantern are used interchangeably in this work.

This work uses updated hardware over the initial prototype, although the concept is similar. The projector is a modified PicoPro projector (Celluon Inc., Seoul, Korea) with a KeyDot® marker placed on it. Using the same pose estimation as described previously, this projector can be tracked relative to a single camera. No exogenous tracking hardware is needed. Further, the projector requires no interposition between the laparoscope and the surgeon's console. The projector's laser raster scanning enables it to have a large range of focus [16].
The projector’s laserraster scanning enables it to have a large range of focus [16]. This model has morethan double the resolution (1920 ⇥ 1080) and brightness (30 lumens) compared tothe original prototype with 640 ⇥ 480 resolution and 15 lumens [16]. It addition-ally has wireless capabilities, and is compatible for Android, iOS, and Windows.The Pico Lantern requires no dedicated port as it can placed through the skin in-78cision with a thin cable beside the trocar or controlled wirelessly [16]. It, like theDART and US transducer used, has a custom grasp that can be reliably picked upusing the da Vinci R [16].As an improvement to the initial prototype, this prototype has the KeyDot Rmarker which may be perpendicular to laparoscope. The motivation for this camefrom initial experimentation on accuracy and geometry constraints of having a pro-jector in the field of view of the camera. Having the marker on the face of the pro-jector requires the majority of the projector to be in the field of view. By movingit to an extension, the projector itself can be outside the field of view. As well,KeyDot R tracking has limited tracking accuracy when parallel to the laparoscopeimage. By moving the KeyDot R from a parallel to perpendicular arrangement, thetracking stability improves.In order to leverage the projector, additions are made to the framework. First,the TilePro R feeds output from the PC are removed entirely, as the projector isindependent of the surgeon’s console. Then, the projector had to be additionallycalibrated to model its intrinsic parameters, and the theoretical projector origin rel-ative to the KeyDot R on it. The projector can be modeled using the pinhole cameramodel used for the camera calibration, as one can treat the projector as a camerain reverse. While the specifics of projector calibration are outside the scope of thiswork, the projector has a similar set of intrinsic parameters in ( fx, fy,cx,cy) anda set of distortion parameters as a calibrated camera. Additionally, the projectorcalibration results in determining the transformation of the KeyDot R marker co-ordinate system (M) to the projector’s coordinate system (P). The marker and theprojector are seen in Figure 5.1, while M and P are illustrated in Figure 5.2. Thisis done using the Projector-Camera Calibration toolbox [17].5.3 Projector-based Augmented Reality Intra-corporealSystemPARIS follows a similar but expanded principle of operation as NGUAN and NGUAN+.The DART is placed above the tumour using an freehand US scan. A tracked US scanis then taken relative to the DART, and manual segmentation produces a 3D tumourmodel. PARIS differs in creation of the augmented reality overlays, described in79Figure 5.1: The system setup for PARIS. The projector is used to augment thetumour’s surface. The scene is viewed by a stereo laparoscope.the next section. To create the augmented reality overlays, a surface reconstructionmust be performed, described in Section 5.3.1. Note that PARIS does not providecontinuous guidance during the excision itself, unlike NGUAN and NGUAN+. Thisis because of the computational complexity required to calculate the augmenta-tions. The projections created rely on the surface reconstruction of the scene, asdiscussed later, which is not real-time.As the projector is dynamic and can be moved, it is best to move it as close tothe ideal location for the resection angle. 
5.3 Projector-based Augmented Reality Intra-corporeal System

PARIS follows a similar but expanded principle of operation to NGUAN and NGUAN+. The DART is placed above the tumour using a freehand US scan. A tracked US scan is then taken relative to the DART, and manual segmentation produces a 3D tumour model. PARIS differs in the creation of the augmented reality overlays, described in the next section. To create the augmented reality overlays, a surface reconstruction must be performed, described in Section 5.3.1. Note that PARIS does not provide continuous guidance during the excision itself, unlike NGUAN and NGUAN+. This is because of the computational complexity required to calculate the augmentations: the projections rely on the surface reconstruction of the scene which, as discussed later, is not real-time.

Figure 5.1: The system setup for PARIS. The projector is used to augment the tumour's surface. The scene is viewed by a stereo laparoscope.

As the projector is dynamic and can be moved, it is best to move it as close as possible to the ideal location for the resection angle. Placing the tracked projector near this location is relatively easy: the projection image's center (drawn as a dot) is aligned with the tumour's centroid as seen by the projector. During the process, the projector is kept approximately normal to the surface via careful manual positioning. The system setup is seen in Figure 5.1.

In order to project accurate guidance, the different components of PARIS must be registered to each other. As illustrated in Figure 5.2, PARIS uses the following coordinate systems, which carry over from NGUAN and are described in Section 3.4:

• U: the 2D US image.
• K: the KeyDot® marker on the US transducer itself.
• C: the calibrated camera's 3D coordinate system.
• D: the KeyDot® marker on the DART itself. Same coordinate description as K.

Figure 5.2: Coordinate systems used within PARIS. A tracked US scan is performed relative to the DART (top). The tracked and calibrated projector augments the scene with the tumour model (bottom).

PARIS introduces the following additional coordinate systems:

• P: the projector coordinate system. The 3D origin of the projector lies within the device. The axes are the same as those of C, where the X-axis goes left to right of the image, the Y-axis goes top to bottom, and the Z-axis goes out of the projector.
• M: the KeyDot® on the projector. It is placed on the surface of the projector, and shares the same coordinate description as K and D.

The calibration of KTU is done as described previously in Section 3.6.1. The transforms CTK, CTD, and CTM are found through pose estimation as described in Section 3.2.1. The transform PTM is found through projector calibration. With these transforms, it is then possible to create projection-based augmented reality.

5.3.1 Surface Reconstruction

After the tumour model is generated, a surface reconstruction must be done to provide augmented reality. The projected image, when perceived by the laparoscope, must appear accurately. To do this, the projected image must be pre-distorted to account for surface topography. Doing this will cause the projections to appear normal from the perspective of the laparoscope, and therefore the surgeon.

OpenCV has multiple implementations for stereo surface reconstruction: block matching on the CPU (BM), block matching on the GPU (BMGPU), semi-global block matching on the CPU (SGBM), belief propagation on the GPU (BPGPU), and constant space belief propagation on the GPU (CSBPGPU). An evaluation of these algorithms was completed by the Advanced Research Team at Northern Digital Inc. Processing the Middlebury Stereo Datasets with each algorithm showed that BM is the fastest but of poorest quality, CSBPGPU produces irregularities, and SGBM is the best choice considering availability, quality, and speed. However, SGBM is not real-time and is challenging to parallelise on a GPU [26].

In order to use SGBM, stereo cameras must be used [26]. The stereo camera calibration parameters can be used to rectify the camera images. Rectification distorts the images such that matching points in the images lie along corresponding epipolar lines. This simplifies the search for matching points into a 1D search problem. The details of epipolar geometry are not covered in this work. Using the rectified images, SGBM becomes more efficient [26]. Normally, the disparity between matching points can be used to estimate a point's depth in a stereo image pair, but this tends to be noisy. SGBM improves upon this by combining global and local matching methods and matching small regions of the image [26]. The result is an algorithm that sufficiently balances speed and accuracy, with relatively good robustness against noise. However, with narrow-baseline stereo cameras like those in the laparoscope, these surface reconstructions may not be sufficiently dense for guidance. To improve the method, the projector projects a checkerboard pattern to add additional features into the scene. An example comparison is seen in Figure 5.3. The surface improvement is evaluated in Section 5.5.

Figure 5.3: Ex-vivo kidney seen by the laparoscope with no projection on it, with a relatively featureless surface (top left). The ideal reconstruction would match this image perfectly. A typical surface reconstruction using SGBM and no additional features (top right); note the black spots are holes in the reconstruction. The checkerboard pattern projected onto the scene (bottom left). The additional features improve the surface reconstruction by a perceptible amount (bottom right). The two holes in the middle are due to specular reflection and the DART, which also causes reflection.
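The following is a minimal sketch of this rectify-then-match pipeline using OpenCV. The calibration matrices and images are synthetic placeholders standing in for the stereo laparoscope's calibration and video frames, and the SGBM parameters are illustrative rather than tuned values; the final line shows one way the surface density of Section 5.5 could be approximated, as the fraction of pixels receiving a valid disparity:

    import cv2
    import numpy as np

    # Synthetic placeholder inputs: identical intrinsics, zero distortion,
    # a 5 mm baseline, and blank frames. Real values come from calibration.
    image_size = (1280, 720)
    K1 = K2 = np.array([[1000.0, 0.0, 640.0],
                        [0.0, 1000.0, 360.0],
                        [0.0, 0.0, 1.0]])
    d1 = d2 = np.zeros(5)
    R = np.eye(3)
    T = np.array([-5.0, 0.0, 0.0])  # camera-to-camera translation (mm)
    left_img = right_img = np.zeros((720, 1280, 3), np.uint8)

    # Rectify so matching points lie on the same image row (1D search).
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)
    m1x, m1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, image_size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, image_size, cv2.CV_32FC1)
    left_rect = cv2.remap(left_img, m1x, m1y, cv2.INTER_LINEAR)
    right_rect = cv2.remap(right_img, m2x, m2y, cv2.INTER_LINEAR)

    # Semi-global block matching on the rectified pair.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                                 P1=8 * 3 * 5 ** 2, P2=32 * 3 * 5 ** 2)
    disparity = sgbm.compute(left_rect, right_rect).astype(np.float32) / 16.0

    points_3d = cv2.reprojectImageTo3D(disparity, Q)  # per-pixel 3D coordinates
    surface_density = np.mean(disparity > 0)          # crude density proxy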
5.4 Augmented Reality Overlays

5.4.1 Projector and Laparoscope Points-of-View

The augmented reality overlays can be split into two independent categories: projection type and projection point of view (POV). The two types of projection explored in this work are orthographic and perspective. The two POVs explored are the LPOV and the PPOV. All of these are displayed with the projector. Note that the terms perspective and point-of-view refer to distinctly different things, and are not interchangeable in this chapter.

Figure 5.4: Overview of PARIS. Light green indicates orthographic projection from the LPOV (left). Red indicates projection from the PPOV (right).

As mentioned, there are limitations to the laparoscope's possible angles relative to the surface. Using a dynamic and mobile projector makes it more likely that the best approach angle is identified. The two different augmentation POVs are illustrated in Figure 5.4.

Augmentations from the LPOV are straightforward. Every point in the tumour model, which is in the DART coordinate system, pD, is transformed into a point in the camera's coordinate system, pC, as in Equation 5.1. Each pC is then projected onto the camera's imaging plane. The resulting image is displayed by the projector and requires no tracking of the projector. As the projector moves, the projection remains the same. As the DART or camera moves, the tumour model's appearance as seen by the camera changes, which results in an update to the projected image.

pC = CTD · pD    (5.1)

In order to provide augmentations from the PPOV, the tumour model must be transformed into the projector's coordinates. Assuming the US data has already been transformed into the DART coordinate system as a result of the reconstruction and segmentation, the DART must then be registered to the projector. A point in the DART coordinate system, pD, can be transformed into the camera coordinate system by CTD, then to the projector's marker coordinate system by MTC, and finally into the projector's coordinate system by PTM. This results in pP, as captured in Equation 5.2.

pP = PTM · MTC · CTD · pD    (5.2)

From here, the tumour model can be displayed with the projector as if the projector were a camera. Using the intrinsic and distortion parameters, the 3D model viewed by the projector is reduced to its 2D image, and then projected onto the scene. This idea is similar to the virtual camera concept.
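Equation 5.2 is simply a composition of rigid transforms in homogeneous coordinates, with MTC obtained by inverting the pose-estimation result CTM. A minimal sketch with placeholder poses (identity rotations and translations in mm standing in for the tracked values):

    import numpy as np

    def make_transform(R, t):
        """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Placeholders for: C_T_D (DART pose), C_T_M (projector marker pose),
    # and P_T_M (from projector calibration).
    C_T_D = make_transform(np.eye(3), np.array([20.0, 0.0, 80.0]))
    C_T_M = make_transform(np.eye(3), np.array([-30.0, 10.0, 70.0]))
    P_T_M = make_transform(np.eye(3), np.array([0.0, -15.0, 5.0]))

    M_T_C = np.linalg.inv(C_T_M)  # invert the camera-to-marker pose

    # Equation 5.2: a tumour-model vertex from DART to projector coordinates.
    p_D = np.array([1.0, 2.0, 3.0, 1.0])  # homogeneous point in the DART frame
    p_P = P_T_M @ M_T_C @ C_T_D @ p_D

In PARIS the same chain would be applied to every vertex of the tumour model before the pinhole reduction described above.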
As the camera moves, the projection remains the same. As the DART or projector moves, the tumour model's appearance as seen by the projector changes, and the projection image is updated to reflect this.

Figure 5.5: The PPOV visualization of PARIS. Red indicates perspective projection, and yellow/brown indicates orthographic projection. Both are seen from the projector POV.

5.4.2 Orthographic and Perspective Projections

There are different methods of illustrating a 3D model as a 2D planar image. These are primarily organised as parallel projection (of which orthographic projection is a subset) and perspective projection.

With orthographic projection, parallelism between lines is maintained when projected onto the 2D imaging plane of a camera. To perform the orthographic projections, the rays are projected in parallel from the tumour towards the desired POV.

Perspective projection is akin to how humans observe the world. The rays of light converge into the eye, rather than staying parallel. This often results in one or more vanishing points, where parallel lines in 3D appear to intersect. It may also result in foreshortening, where the model appears shorter due to the angle of the viewer, or in uneven scaling.

With orthographic projections, the projected image is determined by the intersection of the rays with the surface. With the LPOV perspective projection, the projection image must be pre-distorted to account for the kidney's non-planar topography. If viewed without this correction, the projection image will only appear correct from the projector and warped from all other viewpoints. Both the orthographic projection and the pre-distortion leverage the same ray-surface intersection algorithm described in the next section.

Figure 5.6: Example projection image for LPOV projections (left) and its appearance on ex-vivo kidney. The tumour model is pre-distorted, hence the irregular shape.

5.4.3 Overview of Ray-Surface Intersection

The tumour model is first transformed from the DART coordinate system into the camera's coordinate system, registering the surface and model together.

For orthographic projections, a set of rays, R, is created. Each ray r in the set begins at one of the tumour model's vertices, and all rays end at either the camera's origin or the projector's origin, depending on the POV. Similarly, for the surface correction for the LPOV perspective projection, R is created; here, each ray r begins at the camera's 3D origin and ends at one of the tumour model's vertices.

The surface mesh is modeled as a set of triangles, S. Each triangle, s, can be used to define a 3D plane A, which has a normal vector nA.

For all rays in R, each r is tested for intersection against each triangle s in the surface's set of triangles S. This is done by first computing whether the ray intersects the triangle's plane at all. The existence of an intersection can be confirmed by computing the dot product of the plane's normal and the ray's direction, dr, as in Equation 5.3. If this inequality holds, then a single point of intersection exists and can be computed. If it does not hold, then the ray either does not intersect the plane or lies completely within it (line intersection); neither case is of interest here.

dr · nA ≠ 0    (5.3)

If the intersection point does exist, it must be determined whether it lies within the given triangle s or not. This can be done using the geometric approach of Möller and Trumbore [48]. Here, the triangle is represented using a two-parameter (u, v) space and its three edges. A point lies within the triangle if its barycentric coordinates fall within the triangle's bounds; details can be found in [48]. This efficiently finds the intersection points between a given ray r and a given triangle s. The process is repeated for all r in R and all s in S. Multi-threading is used to improve the computational speed to under 1 second. In the future, this would be properly translated to leverage the GPU, given the parallelizable nature of the problem.
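For reference, a minimal sketch of the Möller-Trumbore test [48] is given below. It is the standard formulation of the algorithm rather than the exact implementation used in PARIS; the early exit for a near-zero determinant corresponds to the check of Equation 5.3:

    import numpy as np

    def moller_trumbore(origin, direction, v0, v1, v2, eps=1e-9):
        """Return the ray parameter t of the intersection with triangle
        (v0, v1, v2), or None if the ray misses. u and v are the
        barycentric coordinates of the candidate intersection point."""
        e1, e2 = v1 - v0, v2 - v0           # the two triangle edges
        p = np.cross(direction, e2)
        det = np.dot(e1, p)
        if abs(det) < eps:                  # ray parallel to the triangle's plane
            return None
        inv_det = 1.0 / det
        s = origin - v0
        u = np.dot(s, p) * inv_det
        if u < 0.0 or u > 1.0:
            return None
        q = np.cross(s, e1)
        v = np.dot(direction, q) * inv_det
        if v < 0.0 or u + v > 1.0:
            return None
        t = np.dot(e2, q) * inv_det
        return t if t >= 0.0 else None      # intersection must lie along the ray

    # Example: a ray along the z-axis hitting a triangle in the z = 5 plane.
    t = moller_trumbore(np.array([0.2, 0.2, 0.0]), np.array([0.0, 0.0, 1.0]),
                        np.array([0.0, 0.0, 5.0]), np.array([1.0, 0.0, 5.0]),
                        np.array([0.0, 1.0, 5.0]))  # returns t = 5.0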
5.4.4 Summary of Augmented Reality Overlays

In summary, there are four projected images: the perspective and orthographic projections from the LPOV, and the perspective and orthographic projections from the PPOV. In order to create these projection images, a surface reconstruction is performed, which is supplemented by the projector. A conceptual diagram of the orthographic projections in both POVs is seen in Figure 5.4, while a conceptual view of the perspective and orthographic projections is seen in Figure 5.5. An example of the pre-distortion for the LPOV is seen in Figure 5.6.

It should be noted that perspective projection was briefly evaluated and found to give less intuitive guidance. The original intent was to use it in conjunction with the orthographic projection, such that the surgeon could gauge the depth as the relative difference between the two. However, given the small scale, this distance is too minute to interpret. Standalone, the perspective projection presents an incorrect view on the surface, as the tumour model appears smaller than it actually is. In order to use the perspective projection alone, the surgeon would have to cut down in a conical manner, which is confusing. As such, perspective guidance is not explored further. In all augmentations, the model is projected as a dense point set.

Collins et al. presented a similar notion of presenting augmented reality from a different POV, primarily using the instrument's port [13]. This is centered on the port itself, rather than on the ideal position for excision. Furthermore, they specifically use perspective projection, not orthographic, and their chosen POV is fixed and determined geometrically, not variable and dynamically found. Their assessment of resection quality is ambiguous as well, with no quantitative measure of tissue excised.

5.5 System Calibration and Accuracy

While it is possible to perform surface reconstruction with only the stereo laparoscope cameras, a naive algorithm will produce a reconstruction that is not dense enough. To that end, the ability of the Pico Lantern to improve surface density is measured. Surface density is defined as the percentage of the surface in the laparoscope view that is reconstructed. By projecting a simple pattern, not only is the laparoscope's field of view better illuminated, there are also additional features that can be matched. A simple checkerboard pattern is used here, which produces an abundance of corner features. The surface density of an ex-vivo kidney, with and without the extra features projected, is compared for 12 unique laparoscope and projector poses. The average surface density change is reported.

Next, the accuracy of the projector's ability to display is quantified. The re-projection error of the projector here is similar to that of camera calibration. Referring to Equation 5.2, the detected origin of the DART can be transformed into the projector's coordinate system. Using the projector's intrinsic parameters, the DART origin can be projected onto the projector's imaging plane and drawn as a dot. The actual projection of this image onto the scene should result in the dot being perfectly overlaid on the DART origin. The re-projection error is the Euclidean distance between the real origin and the projected origin. This captures the cumulative error in the tracking of the DART, the camera calibration, the tracking of the projector's KeyDot®, and the projector's calibration. To measure this error, the projector is moved to 5 poses, and for each pose the DART is placed in 10 poses, approximately 80 mm from the laparoscope. In each pose combination, a laparoscope image is taken as it views the DART and the projected point. The two points are manually segmented and the RMS error is reported. This experiment is seen in Figure 5.7.
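As a small illustration of how this metric could be aggregated, the sketch below computes the RMS of the Euclidean distances over the 50 pose combinations; the point pairs are randomly generated stand-ins, not the measured data:

    import numpy as np

    def rms_error(real_pts, projected_pts):
        """RMS of the Euclidean distances between paired points (N x 2 arrays)."""
        d = np.linalg.norm(real_pts - projected_pts, axis=1)
        return float(np.sqrt(np.mean(d ** 2)))

    # Hypothetical stand-in data: one point pair per projector/DART pose
    # combination (5 projector poses x 10 DART poses = 50 pairs).
    rng = np.random.default_rng(0)
    real = rng.uniform(0.0, 100.0, size=(50, 2))
    projected = real + rng.normal(0.0, 0.5, size=(50, 2))
    print(rms_error(real, projected))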
Next, the system's ability to localise and project a tumour model accurately must be quantified. Even if the re-projection error described above is small, one must evaluate the cumulative error of creating a tumour model and projecting it with PARIS. To that end, the tumour models themselves are first evaluated. For each phantom used in the study, the segmented tumour model volumes and radii are compared against the ground truth values. These ground truth values are determined during phantom construction.

Figure 5.7: Un-augmented cross-section of phantom (left). Computer graphics overlay of tumour model (right). LPOV perspective projection of model.

Then, a phantom model is cut in half to expose the endophytic tumour. By the RENAL score, these tumours are among the most challenging the surgeon will face due to their depth. Exophytic tumours (those with the majority protruding beyond the surface) are relatively easier to excise, with a lower complication rate [78]. A tracked US scan is taken at the face of the exposed tumour and is segmented. This produces a tumour model that is a single slice, rather than the complete model itself. This is done because it is imprecise to cut the phantom exactly in half. The single-slice tumour model is then reprojected by the projector onto the scene. The average RMS distance and the Hausdorff distance between the contours of the actual tumour and the projected tumour, as seen by the laparoscope, are reported for five laparoscope and projector poses. The Hausdorff distance represents the maximum distance from a point in one contour to a point in the other contour.
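As an illustration, the symmetric Hausdorff distance between two segmented contours could be computed as below; the contours are hypothetical stand-ins (a circle and a shifted, scaled copy):

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def hausdorff(contour_a, contour_b):
        """Symmetric Hausdorff distance between two (N x 2) point sets."""
        d_ab = directed_hausdorff(contour_a, contour_b)[0]
        d_ba = directed_hausdorff(contour_b, contour_a)[0]
        return max(d_ab, d_ba)

    # Hypothetical contours in the same units (e.g. mm after scaling).
    theta = np.linspace(0, 2 * np.pi, 100)
    actual = np.c_[10.0 * np.cos(theta), 10.0 * np.sin(theta)]
    projected = np.c_[10.5 * np.cos(theta) + 1.0, 10.5 * np.sin(theta)]
    print(hausdorff(actual, projected))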
5.6 User Studies

Two user studies were performed to evaluate PARIS and its clinical utility for intra-operative surgical planning. In the first study, one expert urologist completed three simulated partial nephrectomies under each of three visualization modes, for a total of nine simulated operations. The first mode is the conventional US-only approach. The second mode uses orthographic projection from the LPOV plus US imaging. The third uses orthographic projection from the PPOV plus US imaging. The surgeon is given US in all trials because, in clinical use, the surgeon would have it for planning; the surgeon can also validate the visualization using US. The surgeon is given a single practice trial for each visualization, which is not included in the analysis. After the practice is completed, the surgeon performs the nine simulated surgeries in randomised order.

In the second study, a novice surgeon is added. The novice is a second-year urology resident with no training on the da Vinci®. The novice and expert each complete a set of mock surgeries in a one-day-long session. The surgeries are completed using the US-only approach and the orthographic projection from the PPOV. Each surgeon is given a practice trial with both visualizations which are, as above, not included in the analysis. The novice surgeon completed 6 surgeries with each visualization for a total of 12. The expert surgeon completed 10 surgeries with each visualization for a total of 20.

For both studies, the average excision time, margin status and size, and average specimen-to-tumour volume ratio are reported for each visualization mode. To quantify the surgeon's deviation from the ideal excision, the excised specimen is cut into approximately 5 mm thick slices. The tumour outline and the cross-section contour are segmented manually. The RMS distance between the centroids and the Hausdorff distance are reported. Each surgeon also answered a Likert-scale questionnaire, which inquired about confidence and spatial understanding, and provided open feedback at the end of all surgeries.

5.7 Results

5.7.1 System Calibration and Accuracy

The surface density using a projected pattern in the scene improved by an absolute average of 15.4 ± 8.3%. This increase in surface density is sufficient for the calculation of projection images.

The projector's re-projection error onto the DART origin is determined to be 0.8 mm RMS. During the data collection, the projector is moved over a range of 32 × 9 × 11 mm in the laparoscope coordinate frame.

Table 5.1: Quantitative comparison of simulated partial nephrectomies performed in the first PARIS study. Averages and standard deviations (avg ± stdev) of each metric are listed. Results of the trials using ultrasound only (US), augmented reality from the laparoscope point of view (LPOV), and augmented reality from the projector point of view (PPOV) are shown.

Metric (avg ± stdev)            US (n=3)      LPOV (n=3)    PPOV (n=3)
Execution time (secs)           188 ± 10      180 ± 28      178 ± 27
Tumour volume (cm³)             2.9 ± 0.2     2.4 ± 0.4     3.1 ± 0.7
Adjusted excised volume (cm³)   12.3 ± 0.9    11.0 ± 3.4    11.1 ± 3.2
Positive margins (/3)           3             3             2
Hausdorff distance (mm)         11.7          12.6          8.4
Centroid distance (mm)          5.3           5.3           2.4

The average tumour model and ground truth volumes are 2.8 ± 0.7 cm³ and 4.0 ± 1.0 cm³ respectively. The difference between measured and ground truth radii is 1.2 mm RMS. For the projection of the tumour model onto the actual tumour, the average Hausdorff distance and RMS distance between the contours are 3.9 mm and 1.7 mm respectively.

5.7.2 First User Study

The quantitative results of the first study's nine simulated surgeries are summarised in Table 5.1. There are positive margins in all trials except for one instance with the PPOV. The average execution time and specimen-to-tumour volume ratio are consistent across the surgeries. However, the Hausdorff distance of the cross sections improved using the PPOV: with it, the surgeon achieved 8.4 mm, indicating a more consistently tight margin compared to 11.7 mm for US only.

Qualitatively, the surgeon felt PARIS generated a clear, sharp image that blended well with the phantom surface. The depiction of the tumour felt natural and intuitive. Drawbacks included the need for moderate ambient surgical light intensity to avoid poor contrast of the projection, and a need for guidance during the excision itself. The surgeon reported that he would use PARIS over the conventional US approach partly because the augmented reality provides persistent guidance; conventionally, the surgeon observes a limited cross-section only during the US scan.
In comparing the two POVs, the questionnaire indicates the surgeon felt more confident using the PPOV (4.3 ± 1.2 compared to 1 ± 0). He further felt he had a better spatial understanding of tumour depth (3 ± 0 compared to 1.7 ± 1.2). Lastly, he felt that with the LPOV he needed significant additional information. The LPOV mode is dropped for the second user study because the surgeon had difficulty aligning his mental model from the US scan with what he observed in the LPOV. This discrepancy likely caused the gross positive margins, such as the one seen in Figure 5.8.

Figure 5.8: Example of a positive margin with both tumour exposed and a portion remaining in the phantom, indicated with blue arrows.

5.7.3 Second User Study

The quantitative results from the 32 simulated RAPNs are summarised in Table 5.2. The novice surgeon had 1/6 positive margins in both the US and PPOV visualizations. The expert achieved 2/10 and 0/10 for US and PPOV respectively. Both surgeons excised a statistically significant smaller amount of healthy kidney tissue with the PPOV. A Wilcoxon signed-rank test, used to evaluate significance, resulted in p < 0.05.
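A sketch of how such a test could be run with SciPy is shown below. The volumes are hypothetical stand-ins rather than the study's measurements, and per-trial pairing of the two modes is assumed:

    import numpy as np
    from scipy.stats import wilcoxon

    # Hypothetical paired excised-tissue volumes (cm^3) for ten trials per mode.
    us_volumes = np.array([21.0, 18.5, 24.0, 19.5, 22.0,
                           17.0, 20.5, 23.0, 16.5, 19.0])
    ppov_volumes = np.array([14.0, 13.5, 17.0, 12.5, 15.0,
                             13.0, 14.5, 16.0, 11.5, 13.0])

    # Paired, non-parametric test of the difference between the two modes.
    stat, p_value = wilcoxon(us_volumes, ppov_volumes)
    print(f"W = {stat}, p = {p_value:.4f}")  # p < 0.05 would indicate significance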
Further, example cross sections of the excised specimens are shown in Figure 5.9.

Table 5.2: Quantitative comparison for the second PARIS user study. Averages and standard deviations (avg ± stdev) of each metric are listed. Results of the trials using ultrasound only (US) and augmented reality from the projector point of view (PPOV) are shown. A bold asterisk indicates statistical significance (p < 0.05).

Metric (avg ± stdev)           Novice US (n=6)   Novice PPOV (n=6)   Expert US (n=10)   Expert PPOV (n=10)
Execution time (secs)          579 ± 155         469 ± 152           199 ± 31           207 ± 40
Positive margins               1/6               1/6                 2/10               0/10
Known tumour volume (cm³)      2.8 ± 0.7         2.6 ± 0.7           2.6 ± 0.8          2.4 ± 0.9
Excised tissue volume (cm³)    26 ± 3            17 ± 3*             20 ± 4             14 ± 4*
Hausdorff distance (mm)        19.3 ± 0.8        13.3 ± 3.7          18.0 ± 2.2         11.0 ± 1.7
Centroid distance (mm)         5.1 ± 1.5         4.1 ± 1.8           4.4 ± 1.9          2.9 ± 1.2

Figure 5.9: Example cross sections of excised specimens using PPOV (top row) and US (bottom row).

For the novice surgeon, the use of the PPOV improved the Hausdorff distance from 19.3 mm to 13.3 mm. For the expert surgeon, it improved the Hausdorff distance from 18.0 mm to 11.0 mm. This indicates a more consistently tight margin with the use of the PPOV. In comparing US and PPOV questionnaire responses, the surgeons felt more confident (3.3 ± 1.1 to 5.0 ± 0.0) and had a better spatial understanding (3.5 ± 0.8 to 4.6 ± 0.5). These results favour the PPOV over the US visualization.

5.8 Discussion

This work presents a novel, fully-integrated, intra-corporeal augmented reality system for intra-operative guidance in soft tissue surgery. Given a 5 mm margin for partial nephrectomy, the errors of the subsystems (1.2 mm RMS for the tumour model geometry and 0.8 mm RMS for dynamic marker re-projection) and the overall tumour localization error (1.7 mm RMS) are small enough to consider PARIS beneficial for guidance.

This work demonstrates that the integration of the three components (the DART, the US transducer, and the Pico Lantern), vision-based tracking, and PPOV projector augmentation are all feasible and practical. Integration of PARIS with the da Vinci® solely requires read-only access to the video feed, which eases dissemination. With only one practice trial, PARIS could already be successfully used for guidance, showing that the training time is short. The projector's augmentation is clear enough on the surface to be useful in planning the excision. The surgeons indicated that, unlike standalone US imaging, it is helpful to have the projector provide persistent guidance after US scanning. The trade-off is that the projector must be balanced against the surgical light's intensity so as to give sufficient contrast to the projection itself.

Key discoveries include: the clear preference for the PPOV orthographic projection over the LPOV; that the PPOV is an effective visualization; and that orthographic projection parallel to the direction of excision is a good strategy for navigation. With the LPOV, the tumour projection is placed along the surgeon's line of sight, but presents the well-known challenge of depth perception. Moreover, the LPOV is not typically the desired direction for resection. The direction of excision, and the associated parallel orthographic projection, is usually perpendicular to the surface, but could theoretically be in any direction that provides a short path and avoids critical anatomy. Such advanced guidance could be implemented by adapting one of the many graphics techniques described in the surgical guidance literature to the projector. In a more general sense, the PPOV is akin to having an eye in the hand of the surgeon, as explored by others, so that the moving projection image gives valuable dynamic visual cues to the surgeon.

The projector, although adjunct hardware, is a relatively low-cost device that can enhance surface reconstruction density (seen here to be vital in the creation of augmented reality) and that can display realistic augmentations. With the mobile and dynamic projector, the surgeon explores his or her workspace with overlays only possible with the use of a projector. In an in-vivo setting, the effect of the anatomy, such as the peritoneum and perinephric fat, on the projections will have to be explored and compensated for.

The first study has a small scope and size, limiting the ability to draw statistically significant conclusions. However, the main outcome is preliminary evidence that the PPOV orthographic projection aided the surgeon in centering the tumour within the ideal excision boundaries in all excisions performed with that method. Note that in this case, the surgeon was instructed to produce a zero-margin resection in an effort to judge the ultimate accuracy of the different guidance methods. Thus, positive margins are expected and are not indicative of actual surgery using 5 mm margins.

The second study compares two users with different knowledge levels and how PARIS can assist them. Using PARIS, a surgeon is able to achieve a statistically significant reduction in healthy tissue excised regardless of experience. The surgeons also felt more confident and had a better self-reported spatial understanding of the underlying anatomy.

This work concludes that PARIS is a simple, easily integrated system with the potential to provide valuable guidance with sufficient ease and accuracy in laparoscopic surgery. The dynamic marker provides a tumour-centric reference for relative measurements of the US transducer and projector to minimise errors. The dual use of the projector for additional features and guidance information is feasible. This guidance is an adjunct to, not a replacement for, standard practice. Further study is needed to demonstrate utility in-vivo, where the challenges of bleeding, smoke, and specular reflections arise, and to determine whether parallax needs to be accounted for in using the stereo laparoscope.

Chapter 6

Conclusion and Future Work

This chapter reviews the work presented in this thesis.
The contributions, limitations, and future work are described as well.

6.1 Author's Contributions

This thesis has presented three augmented reality systems: NGUAN, NGUAN+, and PARIS. The systems stem from the idea that, with US-based augmented reality, the inherent challenges of laparoscopic surgery can be mitigated intra-operatively and the surgeon can improve upon the current standard of care.

These systems leverage computer-vision-based tracking of a fiducial marker, and operate in the same manner. After inserting the DART, a tracked ultrasound scan is performed, yielding a reconstructed volume. The tumour of interest is segmented from this volume, and the resulting model provides guidance to the surgeon in both the intra-operative planning and excision stages. The systems were evaluated using simulated RAPNs, a type of laparoscopic surgery.

The first system, NGUAN, showed the technical feasibility of using intra-operative ultrasound to generate augmented reality. It showed that the DART concept could fit within the workspace of the surgeon, and that it is possible to provide guidance in spite of tissue deformation. NGUAN+ used a mix of medium- and low-fidelity augmentations to produce significant improvements in surgery: NGUAN+ was able to reduce the depth of the cut beyond the tumour. Then, the novel intra-corporeal projector system PARIS illustrated, for the first time, the use of a projector to augment the kidney itself and inform the surgeon. PARIS was able to position the surgeon in a better spot and improve his or her ability to plan intra-operatively. With these systems, the surgeon gains unprecedented information with which to explore and perceive his or her workspace.

This work required engineering to do the following:

• interface with the ultrasound machines,
• calibrate the ultrasound transducer to its tracked marker,
• design and develop the DART,
• track and register one or multiple fiducials,
• register the kinematics-based tracking of the da Vinci® with the computer-vision-based tracking of the DART,
• track and control a projector,
• render augmentations, including virtual viewpoints, surgical instruments, perspective and orthographic projections, point clouds, and meshes, and
• evaluate the systems, including the design, implementation, and analysis of user studies.

There were limitations. For the systems and their evaluations, these limitations can be categorised into the system and its components, the principle of operation, and the evaluation of the system.

6.1.1 System and Components

While the components of the systems have been thoroughly evaluated, the obvious limitation is that the system evaluation has only been on plastic PVC phantoms. The porcine model, shown to be suitably representative of humans, would be the next step [70].

Considering the components of the systems, it should be noted that while the projector and camera calibrations sufficed, alternative ultrasound calibration methods may improve results. Najafi et al. [50] illustrated an ultrasound calibration technique which produced sub-millimetre pinhead reconstruction accuracy, an order of magnitude more accurate than the geometric calibration performed here [50]. As well, given that a stereo camera pair was available, using triangulation may improve the system accuracy [24, 35].

For the framework to be broadly used, it must be extended to include commonly used ultrasound transducers and machines. Aloka and BK Ultrasound machines are used at our local center at VGH.
Provided that these machines can send a video signal out, of either the ultrasound image itself or the entire screen, they can be supported. Transducers can similarly be supported, but doing so will require generalisation of the pose estimation: with the planar fiducials, only transducers with flat faces can be used. 3D-printing transducer covers or exploring model-based tracking methods may overcome this.

In the case of PARIS, the surface reconstruction algorithm used is not the best available in the field. It is a readily available algorithm that, when compared to a small set of other algorithms, gave the best trade-off between speed and accuracy. Improving the choice of stereo reconstruction algorithm is warranted. Further, in the original Pico Lantern paper, the projector was used to simulate a wide-baseline stereo reconstruction with only a monocular laparoscope and the projector. Making this approach real-time and automatic would be well suited to making PARIS more accessible.

6.1.2 Principle of Operation

The ultimate goal is that the guidance in NGUAN, NGUAN+ and PARIS is used clinically in-vivo. To achieve that goal, the issues of ultrasound scanning and segmentation, tissue deformation, and renal artery clamping must be addressed in the overall framework.

For simplicity, manual segmentation was used in the experiments. This is time consuming and incorporates additional human error, given the difficulty of interpreting ultrasound images. In practice, segmentation time can be reduced by using (semi-)automatic methods. Alternatively, simply approximating the tumour segmentation with a bounding sphere may suffice, and would be highly efficient. That said, in-vivo automatic segmentation of tumours is more difficult than segmentation of phantoms. In regard to reliably scanning the full tumour, knowing the tumour's size from pre-operative imaging may be beneficial. Using this, the surgeon's ultrasound could be augmented with a "bounding circle" to guide him or her in maximizing how much of the tumour is captured.

The guidance used in this work treats the DART and tumour as rigid bodies which, based on FEM simulations, are relatively rigid to one another. However, these simulations did not simulate the kidney during the excision itself, where severe shearing and tissue tearing occur. The forces during excision are likely greater than during planning. To that end, incorporating a deformable model which can register continuously to the laparoscopic image would be beneficial. As well, the DART is also used as a means to circumvent surface tracking of the featureless kidney. However, through the use of the Pico Lantern to project additional features, together with the real-time tissue tracking presented by Yip et al. [85], these challenges may be mitigated.

As with any navigation system, one must consider the characterization of the system itself. While NGUAN and NGUAN+ are both real-time, limited only by the frame rate of the laparoscopic video, PARIS is not real-time. PARIS in fact has a latency of 300 ms, largely attributed to the computation of the ray-surface intersection.

The tracking performed is dependent on the markers. These markers are unobtrusive, safe for use in the patient, and provide full 6-DOF tracking with sufficient accuracy and robustness. However, in all systems, the number of tracked instruments depends on the number of markers in the scene. This work does not explore more than two markers at any given time.
Due to the topographical filtering performed during pose estimation, it remains to be seen whether multiple markers can be tracked at once. Varying the marker size and colour may mitigate this issue. The ability to track flexible instruments also remains an open problem. The working volume of the systems has been shown to be adequate and to simulate a patient's abdomen.

6.1.3 Evaluation

The phantoms presented here contained only a single endophytic tumour. A human kidney contains branches of the renal artery known as segmental arteries. These traverse the kidney, and surgeons simply cut them, which requires reconstruction after tumour excision. Modeling arteries into these phantoms, and observing how the guidance could help the surgeon understand his or her location (or avoid them altogether), would be beneficial.

The qualitative questionnaires used in this thesis were primarily adapted from the System Usability Scale. However, none of the qualitative feedback received considered or measured the mental, temporal, and physical demands on the surgeon, or more situation-specific constraints. To that end, future studies should use the Surgery Task Load Index [82]. This validated measure was developed using the Fundamentals of Laparoscopic Surgery peg-transfer task, and is well suited to evaluating the introduction of interventions into the surgical environment [82].

6.2 Future Work and Recommendations

What was not discussed was the potential of fluorescence imaging in identifying tumour boundaries. Firefly™ is an integrated real-time fluorescence-imaging mode that is available for the da Vinci® system. It uses near-infrared fluorescence imaging after the injection of the indocyanine green contrast agent. Such imaging may be an excellent complement to the guidance described in this work. By design, the images are co-registered with Firefly's camera, simplifying the registration of yet another component. With Firefly, one can assess the presence of blood vessel structures, and whether the kidney has been adequately clamped off from blood flow. In the case of dense perinephric fat, Firefly may have difficulty imaging, but US may be of benefit as it can image through said fat [52]. Regardless, the imaging of blood vessels with either modality may be useful in providing real-time localization and guidance.

While developed and tested using robot-assisted partial nephrectomy as a model procedure, this system and its concepts are extensible to conventional laparoscopic surgery and can be applied to other soft-tissue surgeries such as hepatectomy or prostatectomy. With the exception of the stereo surface reconstruction in Chapter 5, the rest of the instrumentation for these systems translates well to the conventional laparoscopic approach. The addition of fiducial markers to the laparoscopic instruments can replace the need for kinematics-based tracking. The general workflow of the system does not require significant engineering input to perform, and so this work faces a relatively low barrier in going from laboratory settings to clinical usage. Robust evaluation to observe the effect of augmented reality on long-term outcomes, operation time, and usability is needed.

In conclusion, the systems presented here are innovative approaches to surgical navigation for minimally invasive surgery that can be broadly disseminated. Based on the studies here, doing so has the potential to make challenging cases more feasible and to reduce the learning curve for performing the surgery.
The work has the potential to increase the availability of these procedures. This in turn can increase the number of patients who undergo laparoscopic surgery, improving patient care at a large scale.

Bibliography

[1] O. Akca, H. Zargar, R. Autorino, L. F. Brandao, H. Laydner, J. Krishnan, D. Samarasekera, J. Li, G.-P. Haber, R. Stein, et al. Robotic partial nephrectomy for cystic renal masses: a comparative analysis of a matched-paired cohort. Urology, 84(1):93–98, 2014.
[2] H. O. Altamar, R. E. Ong, C. L. Glisson, D. P. Viprakasit, M. I. Miga, S. D. Herrell, and R. L. Galloway. Kidney deformation and intraprocedural registration: a study of elements of image-guided kidney surgery. Journal of Endourology, 25(3):511–517, 2011.
[3] A. Amir-Khalili, M. S. Nosrati, J.-M. Peyrat, G. Hamarneh, and R. Abugharbieh. Uncertainty-encoded augmented reality for robot-assisted partial nephrectomy: A phantom study. In Augmented Reality Environments for Medical Imaging and Computer-Assisted Interventions, pages 182–191. Springer, 2013.
[4] R. Autorino, A. Khalifeh, H. Laydner, D. Samarasekera, E. Rizkala, R. Eyraud, R. J. Stein, G.-P. Haber, and J. H. Kaouk. Robot-assisted partial nephrectomy (RAPN) for completely endophytic renal masses: a single institution experience. BJU International, 113(5):762–768, 2014.
[5] M. Bajura, H. Fuchs, and R. Ohbuchi. Merging virtual objects with the real world: Seeing ultrasound imagery within the patient. In ACM SIGGRAPH Computer Graphics, volume 26, pages 203–210. ACM, 1992.
[6] S. Bernhardt, S. A. Nicolau, L. Soler, and C. Doignon. The status of augmented reality in laparoscopic surgery as of 2016. Medical Image Analysis, 37:66–90, 2017.
[7] M. Borghesi, E. Brunocilla, A. Volpe, H. Dababneh, C. V. Pultrone, V. Vagnoni, G. La Manna, A. Porreca, G. Martorana, and R. Schiavina. Active surveillance for clinically localized renal tumors: an updated review of current indications and clinical outcomes. International Journal of Urology, 22(5):432–438, 2015.
[8] J.-Y. Bouguet. Camera calibration toolbox for Matlab. 2004.
[9] M. Camara, E. Mayer, A. Darzi, and P. Pratt. Soft tissue deformation for surgical simulation: a position-based dynamics approach. International Journal of Computer Assisted Radiology and Surgery, 11(6):919–928, 2016.
[10] C. Cheung, C. Wedlake, J. Moore, S. Pautler, and T. Peters. Fused video and ultrasound images for minimally invasive partial nephrectomy: a phantom study. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2010, pages 408–415, 2010.
[11] J. E. Choi, J. H. You, D. K. Kim, K. H. Rha, and S. H. Lee. Comparison of perioperative outcomes between robotic and laparoscopic partial nephrectomy: a systematic review and meta-analysis. European Urology, 67(5):891–901, 2015.
[12] T. Collins, A. Bartoli, N. Bourdel, and M. Canis. Robust, real-time, dense and deformable 3d organ tracking in laparoscopic videos. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 404–412. Springer, 2016.
[13] T. Collins, P. Chauvet, C. Debize, D. Pizarro, A. Bartoli, M. Canis, and N. Bourdel. A system for augmented reality guided laparoscopic tumour resection with quantitative ex-vivo user evaluation. In Computer-Assisted and Robotic Endoscopy: Third International Workshop, CARE 2016, Held in Conjunction with MICCAI 2016, Athens, Greece, October 17, 2016, Revised Selected Papers, volume 10170, page 114. Springer, 2017.
[14] B. J. Dixon, M. J. Daly, H. Chan, A. D. Vescan, I. J. Witterick, and J. C. Irish. Surgeons blinded by enhanced navigation: the effect of augmented reality on attention. Surgical Endoscopy, 27(2):454–461, 2013.
[15] P. Edgcumbe, C. Nguan, and R. Rohling. Calibration and stereo tracking of a laparoscopic ultrasound transducer for augmented reality in surgery. In Augmented Reality Environments for Medical Imaging and Computer-Assisted Interventions, pages 258–267. Springer, 2013.
[16] P. Edgcumbe, P. Pratt, G.-Z. Yang, C. Nguan, and R. Rohling. Pico Lantern: Surface reconstruction and augmented reality in laparoscopic surgery using a pick-up laser projector. Medical Image Analysis, 25(1):95–102, 2015.
[17] G. Falcao, N. Hurtos, J. Massich, and D. Fofi. Projector-camera calibration toolbox. Erasmus Mundus Masters in Vision and Robotics, 2009.
[18] S. S. Fenton, D. E. Schaubel, M. Desmeules, H. I. Morrison, Y. Mao, P. Copleston, J. R. Jeffery, and C. M. Kjellstrand. Hemodialysis versus peritoneal dialysis: a comparison of adjusted mortality rates. American Journal of Kidney Diseases, 30(3):334–342, 1997.
[19] I. Figueroa-Garcia, J.-M. Peyrat, G. Hamarneh, and R. Abugharbieh. Biomechanical kidney model for predicting tumor displacement in the presence of external pressure load. In Biomedical Imaging (ISBI), 2014 IEEE 11th International Symposium on, pages 810–813. IEEE, 2014.
[20] J. M. Fitzpatrick. Fiducial registration error and target registration error are uncorrelated. In SPIE Medical Imaging, pages 726102–726102. International Society for Optics and Photonics, 2009.
[21] H. Fuchs, M. A. Livingston, R. Raskar, K. Keller, J. R. Crawford, P. Rademacher, S. H. Drake, A. A. Meyer, et al. Augmented reality visualization for laparoscopic surgery. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 934–943. Springer, 1998.
[22] N. Grenier, J.-L. Gennisson, F. Cornelis, Y. Le Bras, and L. Couzi. Renal ultrasound elastography. Diagnostic and Interventional Imaging, 94(5):545–550, 2013.
[23] C. Hansen, J. Wieferich, F. Ritter, C. Rieder, and H.-O. Peitgen. Illustrative visualization of 3d planning models for augmented reality in liver surgery. International Journal of Computer Assisted Radiology and Surgery, 5(2):133–141, 2010.
[24] R. I. Hartley and P. Sturm. Triangulation. Computer Vision and Image Understanding, 68(2):146–157, 1997.
[25] M. Hew, B. Baseskioglu, K. Barwari, P. Axwijk, C. Can, S. Horenblas, A. Bex, J. de la Rosette, and M. L. Pes. Critical appraisal of the PADUA classification and assessment of the RENAL nephrometry score in patients undergoing partial nephrectomy. The Journal of Urology, 186(1):42–46, 2011.
[26] H. Hirschmuller. Stereo processing by semiglobal matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2):328–341, 2008.
[27] A. Hughes-Hallett, E. K. Mayer, H. J. Marcus, T. P. Cundy, P. J. Pratt, A. W. Darzi, and J. A. Vale. Augmented reality partial nephrectomy: examining the current status and future perspectives. Urology, 83(2):266–273, 2014.
[28] A. Hughes-Hallett, E. K. Mayer, H. J. Marcus, P. Pratt, S. Mason, A. W. Darzi, and J. A. Vale. Inattention blindness in surgery. Surgical Endoscopy, 29(11):3184–3189, 2015.
[29] A. Hughes-Hallett, E. K. Mayer, P. Pratt, A. Mottrie, A. Darzi, and J. Vale. The current and future use of imaging in urological robotic surgery: a survey of the European Association of Robotic Urological Surgeons. The International Journal of Medical Robotics and Computer Assisted Surgery, 11(1):8–14, 2015.
[30] A. Hughes-Hallett, P. Pratt, J. Dilley, J. Vale, A. Darzi, and E. Mayer. Augmented reality: 3d image-guided surgery. Cancer Imaging, 15(1):O8, 2015.
[31] A. Hughes-Hallett, P. Pratt, E. Mayer, M. Clark, J. Vale, and A. Darzi. Using preoperative imaging for intraoperative guidance: a case of mistaken identity. 2015.
[32] S. Isotani, H. Shimoyama, I. Yokota, T. China, S.-i. Hisasue, H. Ide, S. Muto, R. Yamaguchi, O. Ukimura, and S. Horie. Feasibility and accuracy of computational robot-assisted partial nephrectomy planning by virtual partial nephrectomy analysis. International Journal of Urology, 22(5):439–446, 2015.
[33] U. L. Jayarathne, E. C. Chen, J. Moore, and T. M. Peters. Freehand 3d-us reconstruction with robust visual tracking with application to ultrasound-augmented laparoscopy. In SPIE Medical Imaging, pages 978617–978617. International Society for Optics and Photonics, 2016.
[34] F. Jolesz. Advanced multi-modality image guided operating (AMIGO) suite. In Proc. Fourth Image-guided Therapy (NCIGT) Workshop, pages 15–6, 2011.
[35] K. Kanatani, Y. Sugaya, and H. Niitsuma. Triangulation from two views revisited: Hartley-Sturm vs. optimal correction. In Practice, 4:5, 2008.
[36] A. Kutikov and R. G. Uzzo. The RENAL nephrometry score: a comprehensive standardized system for quantitating renal tumor size, location and depth. The Journal of Urology, 182(3):844–853, 2009.
[37] D. M. Kwartowitz, S. D. Herrell, and R. L. Galloway. Toward image-guided robotic surgery: determining intrinsic accuracy of the da Vinci robot. International Journal of Computer Assisted Radiology and Surgery, 1(3):157–165, 2006.
[38] J. S. Lam, J. Bergman, A. Breda, and P. G. Schulam. Importance of surgical margins in the management of renal cell carcinoma. Nature Clinical Practice Urology, 5(6):308–317, 2008.
[39] T. Langø, S. Vijayan, A. Rethy, C. Våpenstad, O. V. Solberg, R. Mårvik, G. Johnsen, and T. N. Hernes. Navigated laparoscopic ultrasound in abdominal soft tissue surgery: technological overview and perspectives. International Journal of Computer Assisted Radiology and Surgery, 7(4):585–599, 2012.
[40] W. K. Lau, M. L. Blute, A. L. Weaver, V. E. Torres, and H. Zincke. Matched comparison of radical nephrectomy vs nephron-sparing surgery in patients with unilateral renal cell carcinoma and a normal contralateral kidney. In Mayo Clinic Proceedings, volume 75, pages 1236–1242. Elsevier, 2000.
[41] D. C. Leslie, A. Waterhouse, J. B. Berthet, T. M. Valentin, A. L. Watters, A. Jain, P. Kim, B. D. Hatton, A. Nedder, K. Donovan, et al. A bioinspired omniphobic surface coating on medical devices prevents thrombosis and biofouling. Nature Biotechnology, 32(11):1134–1140, 2014.
[42] J. Leven, D. Burschka, R. Kumar, G. Zhang, S. Blumenkranz, X. Dai, M. Awad, G. Hager, M. Marohn, M. Choti, et al. DaVinci canvas: a telerobotic surgical system with integrated, robot-assisted, laparoscopic ultrasound capability. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2005, pages 811–818, 2005.
[43] Q.-l. Li, H.-w. Guan, F.-p. Wang, T. Jiang, H.-c. Wu, and X.-s. Song. Significance of margin in nephron sparing surgery for renal cell carcinoma of 4 cm or less. Chinese Medical Journal, 121(17):1662–1665, 2008.
[44] G. M. London. The clinical epidemiology of cardiovascular diseases in chronic kidney disease: Cardiovascular disease in chronic renal failure: Pathophysiologic aspects. In Seminars in Dialysis, volume 16, pages 85–94. Wiley Online Library, 2003.
[45] N. Mahmoud, I. Cirauqui, A. Hostettler, C. Doignon, L. Soler, J. Marescaux, and J. Montiel. ORBSLAM-based endoscope tracking and 3d reconstruction. arXiv preprint arXiv:1608.08149, 2016.
[46] P. Milgram, H. Takemura, A. Utsumi, and F. Kishino. Augmented reality: A class of displays on the reality-virtuality continuum. In Photonics for Industrial Applications, pages 282–292. International Society for Optics and Photonics, 1995.
[47] O. Mohareri, G. Nir, J. Lobo, R. Savdie, P. Black, and S. Salcudean. A system for MR-ultrasound guidance during robot-assisted laparoscopic radical prostatectomy. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 497–504. Springer, 2015.
[48] T. Möller and B. Trumbore. Fast, minimum storage ray/triangle intersection. In ACM SIGGRAPH 2005 Courses, page 7. ACM, 2005.
[49] L. J. Moore, M. R. Wilson, J. S. McGrath, E. Waine, R. S. Masters, and S. J. Vine. Surgeons display reduced mental effort and workload while performing robotically assisted surgical tasks, when compared to conventional laparoscopy. Surgical Endoscopy, 29(9):2553–2560, 2015.
[50] M. Najafi, N. Afsham, P. Abolmaesumi, and R. Rohling. A closed-form differential formulation for ultrasound spatial calibration: multi-wedge phantom. Ultrasound in Medicine & Biology, 40(9):2231–2243, 2014.
[51] J.-J. Patard, O. Shvarts, J. S. Lam, A. J. Pantuck, H. L. Kim, V. Ficarra, L. Cindolo, K.-R. Han, A. De La Taille, J. Tostain, et al. Safety and efficacy of partial nephrectomy for all T1 tumors based on an international multicenter experience. The Journal of Urology, 171(6):2181–2185, 2004.
[52] N. Pavan, T. Silvestri, C. Cicero, A. Celia, and E. Belgrano. Intraoperative ultrasound in renal surgery. In Atlas of Ultrasonography in Urology, Andrology, and Nephrology, pages 137–146. Springer, 2017.
[53] T. Peters and K. Cleary. Image-guided interventions: technology and applications. Springer Science & Business Media, 2008.
[54] T. M. Peters and C. A. Linte. Image-guided interventions and computer-integrated therapy: Quo vadis?, 2016.
[55] P. M. Pierorazio, H. D. Patel, T. Feng, J. Yohannan, E. S. Hyams, and M. E. Allaf. Robotic-assisted versus traditional laparoscopic partial nephrectomy: comparison of outcomes and evaluation of learning curve. Urology, 78(4):813–819, 2011.
[56] P. Pratt, A. Di Marco, C. Payne, A. Darzi, and G.-Z. Yang. Intraoperative ultrasound guidance for transanal endoscopic microsurgery. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2012, pages 463–470, 2012.
[57] P. Pratt, E. Mayer, J. Vale, D. Cohen, E. Edwards, A. Darzi, and G.-Z. Yang. An effective visualisation and registration system for image-guided robotic partial nephrectomy. Journal of Robotic Surgery, 6(1):23–31, 2012.
[58] P. Pratt, A. Jaeger, A. Hughes-Hallett, E. Mayer, J. Vale, A. Darzi, T. Peters, and G.-Z. Yang. Robust ultrasound probe tracking: initial clinical experiences during robot-assisted partial nephrectomy. International Journal of Computer Assisted Radiology and Surgery, 10(12):1905–1913, 2015.
[59] D. C. Rizzo. Fundamentals of Anatomy and Physiology. Cengage Learning, 2015.
[60] J. Sauro. Measuring usability with the System Usability Scale (SUS), 2011.
[61] C. Schneider, C. Nguan, M. Longpre, R. Rohling, and S. Salcudean. Motion of the kidney between preoperative and intraoperative positioning. IEEE Transactions on Biomedical Engineering, 60(6):1619–1627, 2013.
[62] C. Schneider, C. Nguan, R. Rohling, and S. Salcudean. Tracked pick-up ultrasound for robot-assisted minimally invasive surgery. IEEE Transactions on Biomedical Engineering, 63(2):260–268, 2016.
[63] C. M. Schneider, G. W. Dachs II, C. J. Hasser, M. A. Choti, S. P. DiMaio, and R. H. Taylor. Robot-assisted laparoscopic ultrasound. In International Conference on Information Processing in Computer-Assisted Interventions, pages 67–80. Springer, 2010.
[64] R. Shiroki, N. Fukami, K. Fukaya, M. Kusaka, T. Natsume, T. Ichihara, and H. Toyama. Robot-assisted partial nephrectomy: Superiority over laparoscopic partial nephrectomy. International Journal of Urology, 23(2):122–131, 2016.
[65] R. L. Siegel, K. D. Miller, and A. Jemal. Cancer statistics, 2016. CA: A Cancer Journal for Clinicians, 66(1):7–30, 2016.
[66] T. Sielhorst, M. Feuerstein, and N. Navab. Advanced medical displays: A literature review of augmented reality. Journal of Display Technology, 4(4):451–467, 2008.
[67] T. Simpfendörfer, C. Gasch, G. Hatiboglu, M. Mueller, L. Maier-Hein, M. Hohenfellner, and D. Teber. Intraoperative computed tomography imaging for navigated laparoscopic renal surgery: First clinical experience. Journal of Endourology, 30(10):1105–1111, 2016.
[68] L.-M. Su, B. P. Vagvolgyi, R. Agarwal, C. E. Reiley, R. H. Taylor, and G. D. Hager. Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3d-CT to stereoscopic video registration. Urology, 73(4):896–900, 2009.
[69] S. E. Sutherland, M. I. Resnick, G. T. Maclennan, and H. B. Goldman. Does the size of the surgical margin in partial nephrectomy for renal cell cancer really matter? The Journal of Urology, 167(1):61–64, 2002.
[70] M. Swindle, A. Makin, A. Herron, F. Clubb Jr, and K. Frazier. Swine as models in biomedical research and toxicology testing. Veterinary Pathology, 49(2):344–356, 2012.
[71] H.-J. Tan, E. C. Norton, Z. Ye, K. S. Hafez, J. L. Gore, and D. C. Miller. Long-term survival following partial vs radical nephrectomy among older patients with early-stage kidney cancer. JAMA, 307(15):1629–1635, 2012.
[72] D. Teber, S. Guven, T. Simpfendörfer, M. Baumhauer, E. O. Güven, F. Yencilek, A. S. Gözen, and J. Rassweiler. Augmented reality: a new tool to improve surgical accuracy during laparoscopic partial nephrectomy? Preliminary in vitro and in vivo results. European Urology, 56(2):332–338, 2009.
[73] R. H. Thompson, S. A. Boorjian, C. M. Lohse, B. C. Leibovich, E. D. Kwon, J. C. Cheville, and M. L. Blute. Radical nephrectomy for pT1a renal masses may be associated with decreased overall survival compared with partial nephrectomy. The Journal of Urology, 179(2):468–473, 2008.
[74] R. H. Thompson, B. R. Lane, C. M. Lohse, B. C. Leibovich, A. Fergany, I. Frank, I. S. Gill, M. L. Blute, and S. C. Campbell. Renal function after partial nephrectomy: effect of warm ischemia relative to quantity and quality of preserved kidney. Urology, 79(2):356–360, 2012.
[75] O. Ukimura and I. S. Gill. Imaging-assisted endoscopic surgery: Cleveland Clinic experience. Journal of Endourology, 22(4):803–810, 2008.
[76] S. Umeyama. Least-squares estimation of transformation parameters between two point patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(4):376–380, 1991.
[77] C. Våpenstad, A. Rethy, T. Langø, T. Selbekk, B. Ystgaard, T. A. N. Hernes, and R. Mårvik. Laparoscopic ultrasound: a survey of its current and future use, requirements, and integration with navigation technology. Surgical Endoscopy, 24(12):2944–2953, 2010.
[78] R. Venkatesh, K. Weld, C. D. Ames, S. R. Figenshau, C. P. Sundaram, G. L. Andriole, R. V. Clayman, and J. Landman. Laparoscopic partial nephrectomy for renal masses: effect of tumor location. Urology, 67(6):1169–1174, 2006.
[79] F. Volonté, N. C. Buchs, F. Pugin, J. Spaltenstein, B. Schiltz, M. Jung, M. Hagen, O. Ratib, and P. Morel. Augmented reality to the rescue of the minimally invasive surgeon. The usefulness of the interposition of stereoscopic images in the da Vinci robotic console. The International Journal of Medical Robotics and Computer Assisted Surgery, 9(3):e34–e38, 2013.
[80] R. Wang, Z. Geng, Z. Zhang, and R. Pei. Visualization techniques for augmented reality in endoscopic surgery. In International Conference on Medical Imaging and Virtual Reality, pages 129–138. Springer, 2016.
[81] E. Wild, D. Teber, D. Schmid, T. Simpfendörfer, M. Müller, A.-C. Baranski, H. Kenngott, K. Kopka, and L. Maier-Hein. Robust augmented reality guidance with fluorescent markers in laparoscopic surgery. International Journal of Computer Assisted Radiology and Surgery, 11(6):899–907, 2016.
[82] M. R. Wilson, J. M. Poolton, N. Malhotra, K. Ngo, E. Bright, and R. S. Masters. Development and validation of a surgical workload measure: the Surgery Task Load Index (SURG-TLX). World Journal of Surgery, 35(9):1961, 2011.
[83] X. Wu and X. Shu. Epidemiology of renal cell carcinoma. In Renal Cell Carcinoma, pages 1–18. Springer, 2017.
[84] Z. Yaniv and K. Cleary. Image-guided procedures: A review. Computer Aided Interventions and Medical Robotics, 3:1–63, 2006.
[85] M. C. Yip, D. G. Lowe, S. E. Salcudean, R. N. Rohling, and C. Y. Nguan. Real-time methods for long-term tissue feature tracking in endoscopic scenes. In International Conference on Information Processing in Computer-Assisted Interventions, pages 33–43. Springer, 2012.
[86] P. A. Yushkevich, J. Piven, H. C. Hazlett, R. G. Smith, S. Ho, J. C. Gee, and G. Gerig. User-guided 3d active contour segmentation of anatomical structures: significantly improved efficiency and reliability. NeuroImage, 31(3):1116–1128, 2006.
[87] L. Zhang, M. Ye, P.-L. Chan, and G.-Z. Yang. Real-time surgical tool tracking and pose estimation using a hybrid cylindrical marker. International Journal of Computer Assisted Radiology and Surgery, pages 1–10, 2017.
[88] Z. Zhang. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330–1334, 2000.
[89] P. T. Zhao, L. Richstone, and L. R. Kavoussi. Laparoscopic partial nephrectomy. International Journal of Surgery, 36:548–553, 2016.
