UBC Theses and Dissertations


A system for intraoperative transrectal ultrasound imaging in robotic-assisted laparoscopic radical prostatectomy Adebar, Troy Kiefert 2011


A System for Intraoperative Transrectal Ultrasound Imaging in Robotic-Assisted Laparoscopic Radical Prostatectomy

by Troy Kiefert Adebar

Bachelor of Applied Science, University of British Columbia, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in THE FACULTY OF GRADUATE STUDIES (Electrical and Computer Engineering)

The University of British Columbia (Vancouver)

August 2011

© Troy Kiefert Adebar, 2011

Abstract

This thesis describes a system for intraoperative transrectal ultrasound imaging in robotic-assisted laparoscopic radical prostatectomy, and related image registration work.

First, a novel method for registering three-dimensional ultrasound data to an external coordinate frame is presented. The method uses a registration tool pressed against an air-tissue boundary to provide common target points in the ultrasound frame and the external frame. This method has two applications in our system: registering the ultrasound data captured by the system to a laparoscopic stereo camera to allow augmented-reality-style overlays in laparoscopic or robotic surgery, and registering the system to the da Vinci Surgical System so the ultrasound imaging arrays can automatically track the da Vinci tools during surgery.

In an initial feasibility study, the method was used to register a mechanical three-dimensional ultrasound transducer to high-disparity stereo cameras through a tissue phantom. Average registration error was found to be 1.69 ± 0.60 mm. Accuracy of localizing ultrasound fiducials pressed against an air-tissue boundary was found to range from 0.54 mm to 1.04 mm.

In a second study, the method was used to register three-dimensional transrectal ultrasound data to a da Vinci stereo endoscope. In this study, fiducials imaged at multiple registration tool positions were incorporated into a single registration.
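The core computation in a fiducial-based registration of this kind, solving for the rigid transform that best maps corresponding points from one frame to another, can be sketched as follows. This is the standard least-squares SVD solution (often attributed to Arun et al.), shown purely as an illustration under my own variable names, not the thesis implementation:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q.

    P, Q: (N, 3) arrays of corresponding 3D points, e.g. fiducial
    locations in the ultrasound frame and in the camera frame.
    Standard SVD solution; illustrative, not the thesis code.
    """
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

def fiducial_registration_error(P, Q, R, t):
    """RMS distance between the transformed P and Q (the FRE)."""
    residuals = (R @ P.T).T + t - Q
    return np.sqrt((residuals ** 2).sum(axis=1).mean())
```

Given matched fiducial locations in both frames, `rigid_register` returns the transform, and the residual RMS distance is the fiducial registration error reported in studies of this kind.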
Registration error imaging through a tissue phantom ranged from 3.85 ± 1.76 mm using one registration tool position to 1.82 ± 1.03 mm using four positions. Registration error imaging through an ex-vivo porcine liver tissue sample ranged from 2.36 ± 1.01 mm using one registration tool position to 1.51 ± 0.70 mm using four positions.

The components of the transrectal ultrasound system, including a robotic probe manipulator, an ultrasound machine with transducer, and control and image processing software, are described in detail. Initial validation testing is also described. The ultrasound system was registered to a da Vinci system using the air-tissue boundary method in order to allow automatic tracking of the da Vinci tools. The average registration error was found to be 0.95 ± 0.38 mm. The ability of the system to capture two-dimensional and three-dimensional B-mode and elastography data was tested using a prostate phantom. Initial patient images were captured using the system in the operating room immediately prior to surgery.

Preface

A main focus area of this thesis is a new method for registering three-dimensional ultrasound to an external coordinate system through an air-tissue boundary. The development of this method, presented in Chapters 2 and 3, was performed in cooperation with Mr. Michael Yip.

The initial feasibility study on this method (Chapter 2) was presented at the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2010 conference, with an article published in the proceedings [59]. Mr. Yip was the primary author, with myself, Dr. Robert Rohling, Dr. Tim Salcudean and Dr. Christopher Nguan as coauthors. Mr. Yip and I cooperated on experiment design, apparatus construction, experimental evaluation and data analysis. Mr. Yip prepared the majority of the manuscript, while I contributed sections and assisted with editing. Dr. Rohling and Dr.
Salcudean formulated the research problem, and assisted with experimental design and manuscript editing. Dr. Nguan provided input as a practicing urologist.

A second, more detailed feasibility study (Chapter 3) was recently completed, with an article to be submitted to the IEEE Transactions on Robotics in August 2011. I was the primary author, with Mr. Yip, Dr. Salcudean, Dr. Rohling, Dr. Nguan and Dr. Larry Goldenberg as coauthors. I constructed most of the apparatus for this test, while Mr. Yip and I cooperated on experimental evaluation and data analysis. I prepared the majority of the manuscript, while Mr. Yip assisted with editing. Dr. Salcudean, Dr. Rohling, Dr. Nguan and Dr. Goldenberg provided input and assisted with editing the manuscript.

The robotic system for performing transrectal ultrasound during prostatectomy described in this thesis (Chapter 4) was also previously presented at the Information Processing in Computer-Assisted Interventions (IPCAI) 2011 conference, again with an article published in the proceedings [1]. I was the primary author of this work, with Dr. Salcudean, Ms. Sara Mahdavi, Dr. Mehdi Moradi, Dr. Nguan and Dr. Goldenberg as coauthors. I performed a literature review, created hardware and software components of the robot, and conceived and performed experiments to evaluate the registration for tracking. I also prepared the manuscript. Dr. Salcudean formulated the research problem, designed and built earlier iterations of the robot, and assisted with editing the manuscript. Ms. Mahdavi and Dr. Moradi collected patient data, assisted with processing elastography data, and assisted with editing the manuscript. Dr. Nguan and Dr. Goldenberg provided input as practicing urologists.

Appendix A describes an initial feasibility study on automatic detection of ultrasound surface fiducials, such as those in the above registration method, using a detector based on boosting of simple classifiers.
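For context, boosting builds a strong detector by combining many weak classifiers, reweighting the training samples toward past mistakes at each round. The sketch below is a generic AdaBoost with single-feature threshold "stumps" and brute-force stump search; it is an illustration under my own assumptions, not the detector of Appendix A:

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=20):
    """Minimal AdaBoost with one-feature threshold stumps.

    X: (N, F) feature scores (e.g. patch correlation scores),
    y: labels in {-1, +1} (fiducial / background).
    Illustrative sketch only.
    """
    y = np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)                  # sample weights
    ensemble = []                            # (feature, thresh, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        # Brute-force search for the lowest weighted-error stump.
        for f in range(X.shape[1]):
            for thresh in np.unique(X[:, f]):
                for polarity in (1, -1):
                    pred = polarity * np.where(X[:, f] >= thresh, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thresh, polarity, pred)
        err, f, thresh, polarity, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)       # upweight misclassified samples
        w /= w.sum()
        ensemble.append((f, thresh, polarity, alpha))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of all stumps; returns labels in {-1, 0, +1}."""
    score = np.zeros(len(X))
    for f, thresh, polarity, alpha in ensemble:
        score += alpha * polarity * np.where(X[:, f] >= thresh, 1, -1)
    return np.sign(score)
```

Each weak rule alone is barely better than chance; the weighted vote of many such rules can be highly accurate, which is what makes boosting attractive for simple image-patch features.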
This study was previously submitted as a course project for Computer Science 525: Image Analysis II at the University of British Columbia in 2010.

Clinical data described in this thesis was collected with UBC Clinical Research Ethics Board approval, study number H08-02696.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgments
1 Introduction
  1.1 Background
    1.1.1 Prostate Anatomy
    1.1.2 Radical Prostatectomy
    1.1.3 RALRP Procedure
  1.2 Motivation
  1.3 Prior Work
    1.3.1 TRUS in ORP
    1.3.2 TRUS in LRP: Ukimura and Gill
    1.3.3 TRUS in MRP
    1.3.4 TRUS in RALRP
    1.3.5 Tandem Robotic TRUS in RALRP
    1.3.6 Summary of Prior Work
  1.4 Research Objectives
  1.5 Thesis Organization
2 Registration through an Air-Tissue Boundary: Feasibility Study
  2.1 Introduction
  2.2 Methods
    2.2.1 Experimental Setup
  2.3 Results
  2.4 Discussion
  2.5 Conclusions
3 Registration through an Air-Tissue Boundary: da Vinci Study
  3.1 Introduction
  3.2 Methods
    3.2.1 Registration Concept
    3.2.2 Apparatus
    3.2.3 Registration Procedure
    3.2.4 Validation Procedure
  3.3 Results
  3.4 Discussion
  3.5 Conclusion
4 Robotic System for Transrectal Ultrasound
  4.1 Introduction
  4.2 Robotic TRUS Imaging System Description
    4.2.1 Robotic Probe Manipulator
    4.2.2 Control and Image Analysis Software
    4.2.3 Imaging Apparatus
    4.2.4 Control Modes
  4.3 Registration Method for Tool Tracking
  4.4 System Validation: Method
    4.4.1 Registration Accuracy Tests
    4.4.2 Phantom Imaging Tests
    4.4.3 Patient Imaging Tests
  4.5 System Validation: Results
    4.5.1 Registration Accuracy Tests
    4.5.2 Phantom Imaging Tests
    4.5.3 Patient Imaging Tests
  4.6 Discussion
  4.7 Conclusion
5 Conclusions and Future Work
  5.1 Contributions
  5.2 Future Work
    5.2.1 3D Ultrasound to Camera Registration
    5.2.2 Robotic System for TRUS in RALRP
Bibliography
A Automatic Detection of Surface Fiducials using Boosting
  A.1 Introduction
  A.2 Detector Concept
    A.2.1 Detection in 2D
    A.2.2 Detection in 3D
  A.3 Method
    A.3.1 Imaging Setup
    A.3.2 Datasets
    A.3.3 Dictionary of Patches
    A.3.4 Training Detector
    A.3.5 Running Detector in 2D
    A.3.6 Grouping Detections
  A.4 Results
    A.4.1 2D Results
    A.4.2 3D Results
  A.5 Discussion
  A.6 Conclusions and Recommendations
B Patient Trial Protocol Outline

List of Tables

Table 1.1 Summary of clinical studies involving TRUS in RP
Table 2.1 Fiducial localization error
Table 2.2 Registration error
Table 3.1 Registration error for phantom
Table 3.2 Registration error for ex-vivo tissue
Table 4.1 Robot motion parameters
Table 4.2 Tracking errors for simulated da Vinci
Table 4.3 Tracking errors for da Vinci

List of Figures

Figure 1.1 Prostate and surrounding anatomy
Figure 1.2 Posterior dissection of the prostate
Figure 1.3 Ligation of the prostatic pedicles and antegrade dissection of the neurovascular bundle
Figure 2.1 Registration concept
Figure 2.2 Experimental setup
Figure 2.3 Registration test equipment
Figure 2.4 Example air-tissue images
Figure 3.1 Updated registration concept
Figure 3.2 Robotic TRUS imaging system
Figure 3.3 Schematic of registration tool
Figure 3.4 Registration test setup
Figure 3.5 Ex-vivo imaging arrangement
Figure 3.6 Schematic of cross wire phantom
Figure 3.7 Surface fiducial images
Figure 3.8 TRUS overlay image
Figure 4.1 TRUS robot components
Figure 4.2 Mechanical components of robot translation stage
Figure 4.3 TRUS robot electronics enclosure
Figure 4.4 TRUS control GUI
Figure 4.5 3D mouse for TRUS control
Figure 4.6 da Vinci to TRUS registration concept
Figure 4.7 2D and 3D US frames for TRUS system
Figure 4.8 Calibration of da Vinci tool tip to optical markers
Figure 4.9 da Vinci tool tip in ultrasound
Figure 4.10 Comparison of elastography and B-mode images of phantom
Figure 4.11 3D B-mode ultrasound of phantom
Figure 4.12 Comparison of elastography and B-mode patient images
Figure 4.13 3D B-mode ultrasound of patient
Figure A.1 Image patches
Figure A.2 Mean correlation scores used for training
Figure A.3 Example 2D detector outputs
Figure A.4 Precision-recall plots
Figure A.5 Example outputs from 3D detector
Figure A.6 Precision/accuracy value frequencies

Glossary

API: application programming interface
TRUS: transrectal ultrasound
MRI: magnetic resonance imaging
CT: computed tomography
AR: augmented reality
PM: positive margins
NVB: neurovascular bundles
ANOVA: analysis of variance
3D: three-dimensional
2D: two-dimensional
RP: radical prostatectomy
LRP: laparoscopic radical prostatectomy
RALRP: robotic-assisted laparoscopic radical prostatectomy
MRP: mini-laparotomy radical prostatectomy
ORP: open radical prostatectomy
PSA: prostate-specific antigen
MUL: membranous urethral length
3DUS: three-dimensional ultrasound
US: ultrasound
OR: operating room
PVC: polyvinyl chloride
FRE: fiducial registration error
TRE: target registration error
GUI: graphical user interface
UBC: the University of British Columbia
RF: radio frequency

Acknowledgments

I would like to thank some of the people whose contributions allowed me to complete this thesis. My supervisor, Dr. Tim Salcudean, for his insightful guidance and kind support. Dr. Robert Rohling for his cosupervision of our registration work. Dr. Mehdi Moradi for his help in the operating room and Sara Mahdavi for her help with the elastography software and data processing. Ramin Sahebjavaher for his mechanical assistance. Caitlin Schneider, John Bartlett, Michael Yip, Raoul Kingma, Jeff Abeysekera and Hedyeh Rafii-Tari (in that exact order) for their considerate and diligent maintenance of my work area while I was away at conferences. My future post-doc, Siavash Khallagi, for his helpful assistance with the dual languages of Farsi and C++. My family for their support, advice and genetic predispositions. My partner Reiko Hoyano for generally putting up with me, and for being great. And most importantly of all, Germany von Bierstadter Hof for her thoughtful advice, wisdom, and tireless review of this manuscript.

Chapter 1

Introduction

Prostate cancer is the most commonly diagnosed cancer in North American men apart from non-melanoma skin cancer.
In 2011, an estimated 25,500 new cases will be diagnosed in Canada, with an estimated 4,100 fatalities [6]. In the United States, an estimated 240,890 new cases will be diagnosed in 2011, with an estimated 33,720 fatalities [2]. Common treatment options for prostate cancer include watchful waiting, radiation therapy and surgical intervention. The surgical removal of the prostate gland and surrounding tissues, radical prostatectomy (RP), is viewed by many as the gold standard treatment for clinically-confined prostate cancer.

1.1 Background

1.1.1 Prostate Anatomy

The prostate is a small gland in the human male reproductive system, whose main function is to secrete a portion of the fluid expelled during ejaculation. A healthy prostate is approximately the size of a walnut. As seen in Figure 1.1, the prostate is located in the lower abdomen inferior to the bladder and anterior to the rectum. Many small nerves and blood vessels run along both lateral aspects of the prostate, forming cohesive plates known as neurovascular bundles (NVB). These bundles are thought to play an important role in both urinary and sexual function. Injury to the NVB during prostate interventions is believed to be a primary factor in post-operative urinary incontinence and sexual impotence. Identifying and avoiding the NVB during prostate cancer interventions is thus a crucial part of preventing these complications.

Figure 1.1: Prostate and surrounding anatomy. Median sagittal section of the pelvis is shown. (Image reproduced from Mangera et al. [34] with permission of Elsevier.)

1.1.2 Radical Prostatectomy

Several variants of RP exist. In traditional open radical prostatectomy (ORP), an incision approximately 20 cm in length is made in the lower abdomen, allowing the surgeon to manually access the prostate. A variant of ORP, mini-laparotomy radical prostatectomy (MRP), requires an incision that is only about half as long. This approach is not commonly applied in North America.
In laparoscopic radical prostatectomy (LRP), a specialized surgical camera (laparoscope) and specialized surgical tools allow the surgeon to access the prostate through a series of much smaller incisions. LRP generally reduces scar tissue formation, blood loss and hospital stay compared to ORP. From the surgeon's perspective, LRP is a more challenging operation because the laparoscopic camera limits visibility and the laparoscopic instruments limit dexterity.

The current state of the art, robotic-assisted laparoscopic radical prostatectomy (RALRP), replaces the laparoscopic instruments and camera used in standard LRP with a teleoperated robotic platform. The da Vinci Surgical System (Intuitive Surgical, Sunnyvale, CA) is currently the only commercially available system for RALRP. When using the da Vinci robot in surgery, the surgeon sits away from the patient at a control console, through which he or she controls laparoscopic surgical instruments mounted on a patient-side cart. These instruments are wristed, giving the surgeon much greater dexterity than standard laparoscopic tools. The da Vinci system also features high-definition stereo laparoscopic cameras, giving the surgeon three-dimensional (3D) depth information and much greater visibility than a standard laparoscope. Because of its many advantages, RALRP has become one of the most popular variants of LRP. Over 70% of RP procedures in the United States are currently performed in this manner.

1.1.3 RALRP Procedure

Between different centers and surgeons, there are many variations in the RALRP procedure. The following surgical steps, based on the procedure described by Gonzales et al. [21], are intended to provide a representative summary of the procedure. The patient is placed in lithotomy position and insufflated (the abdomen is filled with gas to provide a working space). The da Vinci patient cart is docked, and the surgical camera and instruments are installed into small incisions.
An initial dissection reveals the anterior aspect of the prostate. The plane between the bladder and the prostate is then developed, and the bladder neck is dissected from anterior to posterior. The seminal vesicles are identified and dissected, and the posterior rectal plane is developed toward the urethra as seen in Figure 1.2. The prostatic pedicles are ligated, and the NVB are released as shown in Figure 1.3. As much as possible, this step is performed without electrocautery. The prostate apex is dissected, leaving the prostate and seminal vesicles free in the abdominal cavity. A vesicourethral anastomosis is performed to restore urinary continuity. Finally, the prostate is removed, and the abdominal incisions are closed.

Figure 1.2: Posterior dissection of the prostate. The plane of dissection toward the prostate is indicated by the dashed arrow. (Image reproduced from Gonzales et al. [21] with permission of Elsevier.)

1.2 Motivation

The primary objective of RP is the complete removal of all cancerous tissue. This is evaluated by post-operative histology. If the boundaries of the prostate specimen are found to contain cancer cells, it is said to have positive margins (PM). This implies that cancerous tissue was left behind at the surgical boundary, greatly increasing the chance of disease recurrence. Smith et al. found PM rates after RALRP of 9.4% and 50% for pT2 and pT3 patients¹ respectively [25]. Magheli et al. similarly found PM rates after RALRP of 9.3% and 48.5% for pT2 and pT3 respectively [31].

Preserving the patient's post-operative urinary control and erectile function are also important objectives of RP, as these side effects reduce patient quality of life.
This is increasingly true because modern prostate-specific antigen (PSA) screening detects prostate cancer in younger patients, to whom these functional side effects are even more undesirable. In a survey of case series, Coughlin et al. found post-operative sexual potency rates ranging from 43-79% and post-operative urinary continence rates ranging from 89-98% for RALRP [12]. It should be noted that comparisons of functional outcomes between series are difficult due to differences in nerve-sparing approach, patient classification, follow-up period, and metrics for sexual and urinary function.

¹ pT2 and pT3 refer to the TNM staging system used in North America to classify prostate cancer patients based on the progression of their disease. The p indicates that the rating is based on post-operative pathology. The T indicates a primary tumour rather than cancer in the lymphatic system (N) or metastasis (M). T2 indicates a primary tumour confined to the prostate; T3 indicates a primary tumour that has spread beyond the prostate capsule [41].

Figure 1.3: Ligation of the prostatic pedicles and antegrade dissection of the neurovascular bundle. Titanium or Hemolock clips are used for hemostasis to avoid the use of electrocautery. (Image reproduced from Gonzales et al. [21] with permission of Elsevier.)

RP is a technically challenging operation. Successful cancer control requires a complete and accurate dissection of the prostate. Surgeons can experience difficulty in delineating the prostate boundary, particularly when dissecting the prostate apex and base, as visual cues that allow the surgeons to correctly identify their planes of dissection are extremely limited in both these areas. Surgeons must also tailor their surgical margins to avoid the delicate NVB around the periphery of the prostate.
The nerves in these bundles are generally less than 1 mm in diameter, so even with the visual magnification of the da Vinci system it is not possible for the surgeon to visualize them directly.

As most of the challenges of RP involve the intraoperative visualization of periprostatic anatomy, it is logical to conclude that intraoperative imaging could potentially improve surgical outcomes, both in terms of cancer control and functional outcomes. Two commonly applied imaging modalities in prostate imaging are magnetic resonance imaging (MRI) and ultrasound (US); both have advantages and disadvantages pertaining to intraoperative guidance. MRI produces extremely detailed images of the prostate and periprostatic anatomy which allow accurate delineation of the prostate boundaries [54]. Unfortunately, in addition to being expensive, MRI systems generate powerful magnetic fields, so only equipment constructed from non-magnetic materials can be present in an operating room (OR) with an MRI machine. This makes widespread adoption of intraoperative MRI for guidance in RP currently impractical, although MRI-compatible robotic systems for RP have been reported [20]. US images may not show periprostatic anatomy as clearly as MRI, but US is likely the most practical choice for an intraoperative imaging modality because it is inexpensive, non-magnetic, operates in real time and produces no ionizing radiation. The most common US configuration for prostate imaging, transrectal ultrasound (TRUS), is commonly used for guidance in prostate biopsy and prostate brachytherapy. Applying intraoperative TRUS to RALRP in order to improve patient outcomes is the primary focus of this thesis. In the future, it may also be possible to combine the benefits of both MRI and US by performing a preoperative MRI, and registering the data to the intraoperative TRUS. MRI to US registration, however, is a challenging technical problem.
Previous studies related to prostate biopsy have described methods for rigid manual registration [43, 57], deformable manual registration [26], and semi-automatic deformable registration [36].

1.3 Prior Work

In this section, previous studies that have examined the use of intraoperative TRUS for guidance in RP are surveyed, in order to identify specific steps of the RALRP procedure where TRUS might be usefully applied.

1.3.1 TRUS in ORP

Intraoperative TRUS for guidance in RP was first reported in 1997 by Zisman et al. [62]. This group applied TRUS to ORP, and used it primarily to ensure accurate and complete dissection of the prostate apex. Although their study was of very limited size (n = 3), the TRUS guidance was shown to be useful for identifying the boundaries of the prostate at the apex. In one patient, residual prostatic tissue missed by the surgeon was identified by the TRUS and then dissected.

1.3.2 TRUS in LRP: Ukimura and Gill

Several studies describing the use of TRUS in LRP have been published by the research group of Osamu Ukimura and Inderbir S. Gill, formerly of the Cleveland Clinic.

Initial Results

Their first study, published in 2004, described initial results using real-time intraoperative TRUS on twenty-five patients undergoing LRP [51]. In this study, B-mode, power Doppler and 3D imaging modes were applied. B-mode imaging was used for guidance during the operation, with an ultrasound operator manually tracking the laparoscopic tool tip and providing feedback to the laparoscopic surgeon on the tool location relative to critical anatomy.
The authors found TRUS guidance particularly useful in three aspects of the procedure:

• Identifying the correct plane between the bladder and the prostate base,

• Identifying difficult-to-see distal protrusions of the prostate apex posterior to the membranous urethra in some patients,

• Providing visualization of any hypoechoic nodule abutting the prostate capsule, thus alerting the surgeon to perform a wider dissection at that point.

B-mode TRUS was also used to visualize the tented-up rectal wall during rectal-wall release. Cancerous nodules in the prostate were said to be visible as hypoechoic areas in the B-mode TRUS images. Although B-mode TRUS generally has low sensitivity in detecting prostate cancer, the authors theorized that having accurate biopsy data before the operation made the ultrasound operator more confident and increased their effectiveness at locating cancer. Pre-operative and post-operative power Doppler images of the NVB were used to visualize arterial bloodflow in the vessels pre-operatively and later confirm vessel preservation post-operatively. The Doppler images were also used to determine the average distance between the NVB and the lateral edge of the prostate. This distance was found to be (mean ± SD) 1.9 ± 0.9 mm at the prostate apex and 2.5 ± 0.8 mm at the prostate base. TRUS imaging was also used to measure the dimensions of important periprostatic anatomy, including the NVB, prostate apex, membranous urethra, bladder neck, rectal wall, and any cancerous nodules.

TRUS in Lateral Pedicle Control

Ukimura and Gill next described using TRUS in prostatectomy in their description of a completely athermal method for vascular pedicle control [19]. Temporary clamping followed by suturing was used in place of electrocautery to secure the pedicles. Similar to their previous study, TRUS power Doppler imaging was used to confirm bloodflow in the NVB while the clamps were in place and afterwards.
B-mode imaging was also used for navigation, as previously described.

Detailed Clinical Study

In 2006, Ukimura and Gill released the results of a larger clinical study evaluating whether TRUS navigation decreased the incidence of PM [52]. The study compared 77 consecutive patients treated by LRP with TRUS guidance to 217 patients treated previously by LRP without TRUS guidance. A collection of annotated intraoperative images created using their method was also published at the same time [48]. Overall it was found that real-time intraoperative TRUS decreased the incidence of PM from 29% to 9% of patients. The TRUS was effective in reducing incidence of positive margins in both pT2 and pT3 disease. TRUS identified 54 hypoechoic lesions in 41 patients that corresponded to preoperative biopsy-confirmed cancer. Of those 54 lesions, TRUS indicated 31 as likely to have extracapsular extensions, leading to a wider dissection. The overall sensitivity of the TRUS for detecting cancer remained around 50%.

One-Year Energy-Free Potency Outcomes

Following up their description of an energy-free nerve-sparing procedure, Ukimura and Gill released results on the potency of their patients at the one-year point. At that point they had used the technique on 169 patients. The authors showed a strong correlation between superior erectile function recovery and indication of NVB bloodflow in power Doppler images, suggesting power Doppler might be useful in nerve sparing. The authors' energy-free TRUS-guided method was shown to improve speed and rate of recovery of erectile function significantly [18].

1.3.3 TRUS in MRP

A Japanese group that included Osamu Ukimura applied TRUS guidance to MRP [37]. This study investigated the effect of TRUS guidance on PM rates, and used TRUS to measure the post-operative membranous urethral length (MUL) in order to examine its effect on post-operative continence. The use of TRUS for guidance focused on the dissection of the prostate apex.
The study compared 123 patients treated using TRUS guidance with 66 patients treated without TRUS. The TRUS navigation was found to decrease distal PM significantly, from 23% to 12% of patients. The study also found that MUL longer than 12 mm strongly correlated with early recovery of continence, indicating the importance of accurate dissection of the prostate apex.

1.3.4 TRUS in RALRP

The first reported use of TRUS for guidance in robotic-assisted prostate cancer surgery was by van der Poel et al. in 2008 [53]. This group focused primarily on using TRUS to locate the plane between the bladder and the prostate base. Correctly identifying this plane of dissection is generally held to be one of the most difficult steps of the robotic operation, particularly for the novice robotic surgeon. The first 80 robotic operations by two experienced laparoscopic surgeons who were new to the da Vinci system were compared. One surgeon used a TRUS probe integrated with the da Vinci robot; the other did not. The first 30 and last 30 cases from each surgeon were compared separately to monitor the impact of training. In the first 30 cases, the surgeon using TRUS had significantly fewer basal positive margins compared to the surgeon not using TRUS. In the last 30 cases, there was no statistically significant difference. As such, the authors recommended TRUS as a tool for novice surgeons for the dissection of the prostate base. It is interesting to note that the surgeons in this study removed the TRUS probe before opening Denonvilliers' fascia and dissecting the posterior aspect of the prostate, in order to avoid injury to the compressed rectum. If this procedure were followed, it would make guidance during the freeing of the rectal wall, as described in the previous studies, impossible.

1.3.5 Tandem Robotic TRUS in RALRP

In a recent article, Han et al. describe a robotic TRUS probe manipulator to be used for guidance during RALRP [24].
This robotic manipulator overcomes the main limitation of all the previous studies: the need for a dedicated ultrasound assistant. The robotic system allows the surgeon to reposition the probe directly using a computer joystick. Their system also allows for 3D reconstruction of ultrasound images using the TRUS robot's kinematics. The system was used in an initial trial on three subjects.

1.3.6 Summary of Prior Work

Table 1.1 provides a summary of the articles described above. Based on the results of these studies, there seem to be several applications in RALRP where TRUS could be integrated quickly and have definite benefits:

• Identifying the correct plane of dissection between the bladder and prostate base, especially for novice robotic surgeons,

• Ensuring complete and accurate dissection of the prostate apex and preservation of an adequate urethral stump,

• Monitoring surgical tool position relative to the prostate capsule to avoid accidental capsulotomy.

There are also several possible applications where the value of TRUS, or the ability to integrate it, is less clear:

• Identifying the tented-up rectal wall during dissection of the posterior aspect of the prostate,

• Monitoring the location of the cavernosal nerves or vessels, possibly using power Doppler imaging to visualize the vessels,

• Identifying cancerous lesions near the border of the prostate in order to make a wider dissection around them.

While the studies summarized above have suggested a number of ways in which intraoperative TRUS might improve oncological and functional outcomes in RP, intraoperative TRUS in RP has not been widely adopted. This may be because the methods for intraoperative TRUS, with the exception of the robotic system described by Han et al., all suffer from a number of limitations. A dedicated ultrasound technician or assistant is required, which increases the cost of the operation significantly.
The surgeon is unable to position the TRUS imaging planes directly, and must direct the ultrasound assistant. Repositioning the TRUS many times would likely add to the time of the procedure. B-mode ultrasound imaging is used to identify cancer near the prostate boundaries, but B-mode imaging is known to have very low specificity for prostate cancer detection. Finally, access to the patient is limited by the patient cart of the da Vinci system.

The robotic TRUS system described by Han et al. [24] replaces the dedicated ultrasound assistant and allows the surgeon to directly position the TRUS transducer using a joystick. While this is an improvement over an assistant, the surgeon would still be required to reposition the TRUS transducer numerous times throughout the surgery, and the duration of the procedure might still be increased significantly. Furthermore, the system fails to address the fact that standard B-mode ultrasound has low specificity in prostate cancer detection. It may be somewhat possible to visualize cancerous tissue as hypoechoic regions in B-mode images, as Ukimura and Gill suggest [51], but if surgeons are to rely on TRUS for guidance, higher specificity is needed.

Table 1.1: Summary of clinical studies involving TRUS in RP.

1997 | Zisman et al. | ORP | N = 3
Use of TRUS: Guidance in dissection of apex and confirmation of water-tight anastomosis.
Other findings: None.

2004 | Ukimura, Gill et al. | LRP | N = 25
Use of TRUS: Guidance during dissection of bladder neck, dissection of prostate apex, dissection near hypoechoic nodules abutting the prostate capsule, and separation of rectal wall. Power Doppler imaging used to monitor NVB.
Other findings: Measurements of periprostatic anatomy taken using TRUS.

2005 | Gill, Ukimura et al. | LRP | N = 25
Use of TRUS: General guidance as in previous. Power Doppler imaging used to confirm blood flow in NVB during clamping.
Other findings: Energy-free method for control of lateral pedicles.

2006 | Ukimura, Magi-Galuzzi and Gill | LRP | N = 77+217
Use of TRUS: Guidance as previously described. Decreased occurrence of positive surgical margins from 29% to 9% of patients.
Other findings: None.

2007 | Gill and Ukimura | LRP | N = 169
Use of TRUS: Description of detailed potency data using previous method. Recovery of erectile function correlated significantly with power Doppler confirmation of NVB flow post-op.
Other findings: Energy-free pedicle control significantly improved rate and speed of erectile function recovery.

2008 | van der Poel et al. | RALRP | N = 80+80
Use of TRUS: Guidance during dissection of bladder neck. Significantly reduced basal positive margins in first 30 cases; no difference in last 30 of 80 cases.
Other findings: None.

2009 | Okihara et al. | MRP | N = 123+66
Use of TRUS: Guidance during dissection of apex. TRUS significantly improved distal positive margin rates.
Other findings: Membranous urethral length predicts early return to continence.

2011 | Han et al. | RALRP | N = 3
Use of TRUS: Capturing 2D and 3D B-mode and colour Doppler images of the periprostatic anatomy.
Other findings: None.

1.4 Research Objectives

This thesis describes the development of a new robotic system for intraoperative TRUS imaging during RALRP. RALRP was targeted specifically over other variants of RP for two main reasons. First, RALRP is the current state-of-the-art method. Second, the da Vinci system allows for better integration and control of a TRUS robot than a conventional laparoscopic or open surgical arrangement. The following functional requirements were defined for the new intraoperative TRUS system:

• The system must be able to manipulate a biplane lateral-fire TRUS probe so that both parasagittal and transverse imaging arrays can be used to image the prostate and surrounding tissue.

• The system must be able to capture 3D data by moving the probe and capturing two-dimensional (2D) images with position information.

• The system must be capable of ultrasound elastography imaging since, as will be discussed in later sections, elastography has been shown to be superior to B-mode ultrasound for imaging prostate cancer and anatomy.
• The system must have an effective user interface that allows remote control with a human-input device, as in the system of Han et al. [24].

• The system must be able to automatically track the da Vinci surgical tools with the parasagittal imaging plane during surgery, thus eliminating the need for frequent manual repositioning of the TRUS.

• The system could be a modification of an existing commercial system, to reduce difficulties with clinical approval and surgeon training.

1.5 Thesis Organization

As described in the preface, much of the information presented in this thesis has been previously published in or submitted to peer-reviewed publications. The chapters are mostly organized to follow the original articles.

Chapters 2 and 3 focus on a new method for coordinate system registration through an air-tissue boundary. Chapter 2 describes an initial feasibility study on the method; Chapter 3 describes a second set of experiments using more realistic clinical apparatus. These chapters describe using the method to register 3D ultrasound data to the stereo camera frame of the da Vinci stereo endoscope, as part of a future system for augmented reality in surgery. The new registration method can also be applied to register the 3D TRUS robot to the kinematic frame of the da Vinci robot's surgical manipulators, allowing the TRUS robot to automatically track the da Vinci tools.

In Chapter 4, a robotic system for intraoperative TRUS in RALRP is described. The robot's hardware and software components are described in detail. Initial patient imaging results are presented. A system for automatic tracking of da Vinci surgical tools is described and evaluated.

Chapter 2

Registration through an Air-Tissue Boundary: Feasibility Study

2.1 Introduction

Laparoscopic surgery and robotic-assisted laparoscopic surgery have benefits compared to traditional open surgery that include reduced scar tissue formation, blood loss and post-operative hospital stay.
Although stereo cameras are useful for guidance, they do not provide subsurface information. One research concept aimed at improving guidance, augmented reality (AR), is typically implemented as the overlay of medical image data onto the surgeon's camera view. This allows the surgeon to visualize subsurface anatomic features (tumors, vasculature, etc.). Prior studies in AR for surgery have incorporated data from a variety of medical imaging modalities, including X-ray, computed tomography (CT), MRI and US [9–11, 17, 22, 27, 44, 45, 49, 50].

This chapter explores the ability to display a three-dimensional ultrasound (3DUS) volume in a laparoscopic camera view during minimally invasive surgery. Robotic-assisted surgery using the da Vinci Surgical System is an excellent platform for AR because the da Vinci surgeon views the surgical field through a 3D computer display. RALRP is a common application of the da Vinci system. As discussed in Section 1.3, previous studies have found that TRUS is useful in prostatectomy for, among other things, identifying the correct plane of dissection between the prostate base and the bladder, identifying hypoechoic regions that may represent cancerous tissue near the boundary of the prostate, and visualizing bloodflow in the NVB [51–53]. An AR system based on TRUS could thus allow the surgeon to perform a more accurate dissection of the prostate, avoiding PMs while minimizing damage to the sensitive periprostatic anatomy. This could potentially improve both oncological and functional outcomes for the patient.

AR in surgery involves three main technical problems: developing a 3D model of subsurface anatomy based on medical image data, registering the model frame to the camera frame, and rendering the model to the user [50]. The registration of the medical image data to the camera is critical because errors from registration translate directly to errors in anatomic feature localization.
This chapter investigates the problem of US to camera registration.

Existing methods for US to camera registration are generally based on external tracking systems. These methods involve three steps. First, optical trackers, magnetic trackers or robotic manipulators are used to monitor the 3D poses of the US transducer housing and camera housing with respect to a fixed external coordinate system [9, 10, 27, 29]. Second, the US volume is calibrated to the pose of the transducer housing using a calibration phantom [28, 35]. Third, the 3D camera frame is calibrated to the pose of the camera housing [5]. While these registration methods based on external tracking systems have been applied with success in various clinical settings, they have several shortcomings. The external tracking systems themselves are expensive pieces of equipment that occupy significant space in the OR. Other pieces of equipment present in the OR can interfere with their operation, for example by blocking the line of sight of optical tracking systems or disrupting the sensitive fields of magnetic tracking systems. Both of these issues are especially problematic in robotic surgery, where the da Vinci patient-side cart is docked to the OR table, abutting the surgical field. US transducer calibrations are time consuming, and likely need to be repeated before each procedure for maximum accuracy. Given that the US transducer poses are tracked with respect to the housing, errors in pose can be magnified by the lever-arm effect between the housing and the imaging plane. External tracking systems also typically require modifications to existing transducer and camera housings in order to mount magnetic or optical markers.

Direct US to camera registration, without any external tracking system, requires common features that can be identified in both modalities and used as fiducials. Unfortunately, US and camera data only overlap at boundaries between air and tissue.
Therefore, common features must be located at the air-tissue boundary in order to be used as registration targets. This chapter describes a method for registering 3DUS to stereoscopic cameras based on this concept. In our method, a registration tool with three optical markers and three ultrasound fiducials is pressed against the air-tissue boundary so that it can be imaged by both the cameras and the 3DUS, thus providing common points in the two frames. By eliminating the US transducer calibration and the external tracking systems, this method reduces the possible sources of error in the registration. Direct registration of 3DUS to the cameras across the air-tissue boundary is the key innovation of this chapter.

2.2 Methods

This study had two goals: to determine the accuracy of locating US fiducials on an air-tissue boundary, and to determine the feasibility of using these fiducials to register 3DUS to stereoscopic cameras. We first examined the accuracy of localizing spherical fiducials on an air-tissue boundary in US. Air-tissue boundaries exhibit high reflection at their surfaces that may make it difficult to accurately localize fiducials. We considered five variables that could affect the accuracy of fiducial localization: (1) fiducial size, (2) lateral position in the US image, (3) angle of the air-tissue boundary, (4) boundary depth, and (5) stiffness of the tissue. Next, we implemented a direct closed-form registration method between 3DUS and a stereoscopic camera by localizing surface fiducials in both the 3DUS volume and the stereo camera (Figure 2.1). This method is described below.

We define four coordinate systems: the stereo camera system {õ0, C0}, the optical marker system {õ1, C1}, the US fiducial system {õ2, C2} and the 3DUS system {õ3, C3}. The transformation from {õ1, C1} to {õ0, C0}, 0T1, is found by stereo-triangulating the optical markers on the registration tool.

Figure 2.1: Registration concept.

The transformation from
{õ2, C2} to {õ1, C1}, 1T2, is constant and known from the tool geometry. The transformation from {õ3, C3} to {õ2, C2}, 2T3, is found by localizing the three fiducials that define {õ2, C2} in the 3DUS system {õ3, C3}. The three fiducial locations in coordinate system {õ3, C3}, 3x0, 3x1 and 3x2, define two perpendicular vectors with coordinates

3v1 = 3x1 − 3x0,   (2.1)
3v2 = 3x2 − 3x0,   (2.2)

that can be used to define the unit vectors of frame C2 in system {õ3, C3}:

3i2 = 3v1 / ||3v1||,   (2.3)
3j2 = 3v2 / ||3v2||,   (2.4)
3k2 = 3i2 × 3j2.   (2.6)

The origin õ2 has coordinates 3x0 in {õ3, C3}. The homogeneous transformation from the 3DUS system {õ3, C3} to the US fiducial system {õ2, C2}, 3T2, is then

3T2 = [ 3i2  3j2  3k2  3x0 ; 0 0 0 1 ]   (2.7)

and 2T3 = (3T2)^−1. The overall transformation between the stereo camera system {õ0, C0} and the 3DUS system {õ3, C3} is then

0T3 = 0T1 · 1T2 · 2T3.   (2.8)

A homogeneous transformation can then be constructed to register the 3DUS frame to the stereo camera frame. Lastly, with known camera parameters (focal length, image center, distortion coefficients, etc.), the registered US volume in the camera frame can be projected onto the two stereoscopic images.

2.2.1 Experimental Setup

Figure 2.2 shows the experimental setup used in this study. 3DUS volumes were captured using a Sonix RP US machine (Ultrasonix Medical Corp., Richmond, BC) with a mechanical 3D transducer (model 4DC7-3/40). A three-axis mechanical micrometer stage was used for accurate positioning of registration tools relative to the fixed US transducer and stereo cameras.

Surface Fiducial Localization

Sets of steel spherical fiducials arranged in precisely known geometry on stainless steel plates were pressed against tissue-mimicking phantoms, imaged and localized in the 3DUS volumes. The steel plates contained three sets of fiducials spaced 10 cm apart, with each set consisting of a center fiducial and eight surrounding fiducials at a radius of 10 mm (Figure 2.3a).
The fiducials were seated in holes cut into the plate on a water jet cutter with dimensional accuracy of 0.13 mm. Fiducial diameters of 2 mm, 3 mm and 4 mm were imaged through phantoms with thicknesses of 3 cm, 6 cm and 9 cm, stiffnesses of 12 kPa, 21 kPa and 56 kPa, and boundary angles of 0 degrees, 20 degrees and 40 degrees. The phantoms were made from polyvinyl chloride (PVC) using ratios of liquid plastic to softener of 1:1 (12 kPa, low stiffness), 2:1 (21 kPa, medium stiffness), and 1:0 (56 kPa, high stiffness) to create phantoms that mimicked tissue properties [4]. To create fully developed speckle, 1% (by mass) cellulose was added as a scattering agent.

Figure 2.2: Registration accuracy test setup.

The five independent variables were varied independently about a control case (3-mm fiducials, 6 cm depth, 21 kPa stiffness, 0 degree angle, and central location). The surface fiducial plates were pressed lightly into the PVC tissue phantoms, and imaged through the phantoms. The fiducials were then manually localized in the US volume, and the Euclidean distances between the outer fiducials and the center fiducials were compared to the known geometry to determine the accuracy of localization. For every variable level, 10 tests with 8 error measurements were performed (n = 80). The focal depth was set to the boundary depth in all tests.

Figure 2.3: Registration test equipment: (a) fiducial localization test plate, (b) registration tool, (c) registration accuracy test tool.

Registration

For the registration experiments, a Micron Tracker H3-60 optical tracking system (Claron Technology, Toronto, ON) was used as the stereoscopic cameras. This provided a stable and accurate pre-calibrated camera system and allowed the analysis of registration accuracy to focus mainly on localizing the registration tool in US. The registration tool was also built on a steel plate cut on a water jet cutter (Figure 2.3b).
On the top surface are three Micron Tracker markers spaced 20 mm and 15 mm apart, forming an "L" shape; on the bottom surface are three surface fiducials (3 mm) seated in holes cut directly in line with the Micron Tracker markers. Registration accuracy was measured using the test tool shown in Figure 2.3c. The test tool consists of a steel frame, a nylon wire cross which can be accurately localized by US in a water bath, and three Micron Tracker markers which allow the tracker to determine the location of the wire cross in the camera frame.

We first determined the homogeneous transformation relating points in the US frame to the camera frame using the registration tool. We then evaluated the accuracy of the transformation using the test tool placed in a water bath. The cross-wire location in US, registered into the camera frame, was compared to the location of the cross wire in camera coordinates. This was done by saving a US volume of the cross wire in a water bath, and then draining the water to track the optical markers on the test tool in the stereoscopic cameras. Registration error was defined as the Euclidean distance between the position predicted by registration and the tracked position of the cross wire in the cameras. The errors were transformed into the US frame so that they could be specified in the lateral, elevational and axial directions. To ensure accurate scales, the US volumes were generated using the correct speeds of sound for the phantoms and for water.

Figure 2.4: Example images of (a) an air-tissue boundary and (b) a 3-mm fiducial pressed against an air-tissue boundary.

2.3 Results

The results of the fiducial localization experiments are shown in Table 2.1. Hypothesis testing was used to determine the statistical significance of the variables. Given two groups x and y, an unpaired Student t-test determines the probability p that the null hypothesis (µx = µy) is true.
An analysis of variance (ANOVA) is a generalization of the Student t-test to more than two groups, and produces the probability statistics {F, p}. For both the Student t-test and ANOVA, a generally accepted threshold for suggesting statistical significance is p < 0.05.

The ANOVA results showed that the size of the fiducial, the depth of the boundary, and the angle at which the boundary was imaged affect the accuracy of fiducial localization (ANOVA: F_size = 7.34, p_size = 9.96E-04; F_depth = 15.5, p_depth = 1.11E-06; F_angle = 8.49, p_angle = 3.61E-04). However, the tissue stiffness does not significantly change the accuracy of fiducial localization (ANOVA: F_stiffness = 0.0414, p_stiffness = 0.960). T-test results showed that the lateral position of the fiducials on the boundary plays a significant role in the accuracy of localization (t-test: p = 7.10E-18).

In our registration experiment, four unique poses of the registration tool were used for a physical configuration of the camera and US transducer. The registration errors were computed at twelve different locations in the US volume. To test the repeatability of this method, the registration was repeated four times on the same images. Table 2.2 shows that the average error among all the transformed points for all transformations was 1.69 mm, with a minimum error of 1.55 mm and a maximum error of 1.84 mm. The time required to perform a registration was approximately equal to the acquisition time of a 3DUS volume (2 s).

Table 2.1: Mean, standard deviation and median of errors associated with localizing fiducials at air-tissue boundaries. Asterisks indicate a statistically significant difference from the control case.

Variable | Value | Mean ± Std. Dev. (mm) | Median (mm) | RMS Error (mm)
Fiducial Size | 2 mm | 0.94 ± 0.34* | 0.89 | 1.00
Fiducial Size | 3 mm | 0.82 ± 0.28 | 0.78 | 0.87
Fiducial Size | 4 mm | 0.70 ± 0.20 | 0.67 | 0.73
Boundary Depth | Long (9 cm) | 0.54 ± 0.18* | 0.55 | 0.57
Boundary Depth | Med. (6 cm) | 0.82 ± 0.28 | 0.78 | 0.87
Boundary Depth | Short (3 cm) | 0.66 ± 0.20* | 0.64 | 0.69
Tissue Stiffness | Low (12 kPa) | 0.81 ± 0.30 | 0.78 | 0.86
Tissue Stiffness | Med. (21 kPa) | 0.82 ± 0.28 | 0.78 | 0.87
Tissue Stiffness | High (56 kPa) | 0.80 ± 0.19 | 0.80 | 0.82
Boundary Angle | 0° | 0.82 ± 0.28 | 0.78 | 0.87
Boundary Angle | 20° | 0.78 ± 0.28 | 0.75 | 0.83
Boundary Angle | 40° | 1.04 ± 0.35* | 0.97 | 1.10
Lateral Position on Boundary | Center | 0.82 ± 0.28* | 0.78 | 0.87
Lateral Position on Boundary | Offset (10 cm) | 0.60 ± 0.28* | 0.59 | 0.66

Table 2.2: Mean errors (n = 12) between points in a registered 3DUS volume and their locations in the stereo-camera frame.

 | e_Lateral (mm) | e_Elevational (mm) | e_Axial (mm) | e_Total (mm)
Registration 1 | 0.90 ± 0.44 | 0.77 ± 0.33 | 1.08 ± 0.75 | 1.75 ± 0.56
Registration 2 | 1.02 ± 0.45 | 0.60 ± 0.32 | 1.14 ± 0.99 | 1.83 ± 0.74
Registration 3 | 0.65 ± 0.43 | 0.76 ± 0.33 | 1.01 ± 0.63 | 1.55 ± 0.53
Registration 4 | 0.57 ± 0.40 | 0.82 ± 0.30 | 1.03 ± 0.79 | 1.60 ± 0.58
Average | 0.78 ± 0.45 | 0.74 ± 0.32 | 1.07 ± 0.78 | 1.69 ± 0.60

2.4 Discussion

The fiducial localization tests showed that the errors associated with localizing surface fiducials at an air-tissue boundary ranged from 0.54 mm to 1.04 mm. Several variables had a significant effect on accuracy. The smaller fiducials (2 mm) produced higher localization errors, suggesting that the fiducials became lost in the boundary reflection. The larger fiducials presented larger features that were easier to detect. Boundary depths farther away from the US transducer produced lower localization errors, as fiducial centers were more difficult to localize when approaching the near field [39].

Two results from the localization error analysis have practical implications: tissue stiffness does not significantly affect the accuracy of fiducial localization, and only large boundary angles (e.g. 40 degrees) significantly affect the localization accuracy. Our registration method should therefore remain accurate for tissues with a wide variety of stiffnesses and shapes.
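For reference, the one-way ANOVA used in the significance tests above reduces to a ratio of mean squares. The following is a minimal pure-Python sketch (the sample values in the usage line are illustrative only, not the thesis data):

```python
def anova_f(groups):
    """One-way ANOVA F statistic for a list of sample groups:
    F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total sample count
    grand = sum(sum(g) for g in groups) / n      # grand mean
    means = [sum(g) / len(g) for g in groups]    # per-group means
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative only: three groups of localization errors (mm)
print(anova_f([[0.9, 1.0, 0.8], [0.8, 0.7, 0.9], [0.7, 0.6, 0.8]]))
```

The resulting F would then be compared against the F(k−1, n−k) distribution to obtain p, with p < 0.05 taken as suggesting significance, as above.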
The lateral location of the fiducials on the air-tissue boundary, however, was significant to the localization accuracy. The air-tissue boundary exhibited greater specular reflection near the axis of the US transducer, and thus fiducials offset laterally from the axis were less obscured by specular reflection and could be more accurately localized.

The registration experiment showed that using fiducials on an air-tissue boundary for direct registration between 3DUS and stereo cameras is feasible, with an accuracy of 1.69 ± 0.60 mm. The largest errors were in the axial direction, since the tail artifacts of the surface fiducials obscured the true depth at which the fiducials were located in the US volume (Figure 2.4). Repeated registrations on the same data and registrations using different physical locations of the registration tool all gave consistent overall and component errors, suggesting that a model of the reverberation tail could improve localization and registration accuracy further. Nevertheless, based on the overall errors, our registration method is a promising alternative to using tracking equipment, where errors for similar US-to-camera registration systems are within 3.05 ± 0.75 mm [10] for magnetic tracking and 2.83 ± 0.83 mm [27] for optical tracking. It is clear that the main source of error for the new registration method is the localization of the registration tool fiducials, as any localization errors are amplified by a lever-arm effect.

The proposed registration is ideal for situations where the camera and the US transducer are fixed. However, if the US transducer or the camera is moved, a new registration can simply be acquired. Alternatively, in the case of robot-assisted surgery, the robot kinematics can be used to determine the new locations of the camera or the US transducer and maintain continuous registration, to within the accuracy of robot kinematic calculations from joint angle readings.
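Re-acquiring the registration is computationally trivial because it is closed-form: the frame construction of Equations 2.1 through 2.8 amounts to a handful of vector operations. A minimal pure-Python sketch (the fiducial coordinates used in any test are illustrative; the tool geometry guarantees the perpendicularity assumed here):

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def fiducial_frame(x0, x1, x2):
    """3T2 (Eq. 2.7): axes of the US fiducial frame C2, expressed in the
    3DUS frame, from the three localized fiducial centers (Eqs. 2.1-2.6).
    x1 - x0 and x2 - x0 are assumed perpendicular (from the tool geometry)."""
    i2, j2 = unit(sub(x1, x0)), unit(sub(x2, x0))
    k2 = cross(i2, j2)
    T = [[i2[r], j2[r], k2[r], x0[r]] for r in range(3)]
    return T + [[0.0, 0.0, 0.0, 1.0]]

def invert_rigid(T):
    """2T3 = (3T2)^-1 for a rigid transform: transpose R, rotate and negate t."""
    Rt = [[T[c][r] for c in range(3)] for r in range(3)]
    t = [-sum(Rt[r][c] * T[c][3] for c in range(3)) for r in range(3)]
    return [Rt[r] + [t[r]] for r in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

def matmul(A, B):
    """Compose homogeneous transforms, e.g. 0T3 = 0T1 * 1T2 * 2T3 (Eq. 2.8)."""
    return [[sum(A[r][k] * B[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]
```

Chaining `matmul(matmul(T01, T12), invert_rigid(T32))` then yields 0T3 as in Equation 2.8.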
A few practical issues with the proposed registration method should be considered. First, stereo-camera disparity plays a significant role in the accuracy of registration. The registrations presented in this chapter were performed using the Claron Micron Tracker; this represents an ideal case, as the cameras have a large disparity (12 cm) and a tracking error of 0.2 mm. In minimally invasive surgery, laparoscopic stereo cameras having much smaller disparities would be used, possibly resulting in higher errors (although the cameras image a much shallower depth, so the effect of disparity is lessened). This can be compensated for by maximizing the size of the registration tool, producing a well-conditioned system for computing the transformations. Such a registration tool could be designed to fold and fit through a trocar for laparoscopic surgery.

Another way to improve registration accuracy is to introduce redundancy into the registration data. Our registration tool featured only the minimum three fiducials required to extract the six-degree-of-freedom transformation between the US volume and the stereoscopic cameras; with more fiducials on the registration tool, averaging could be used to reduce errors. In addition, higher accuracies can be achieved by considering different poses of the registration tool in both the US and the camera frame [40].

2.5 Conclusions

In this study, we evaluated the accuracy of localizing fiducials pressed against an air-tissue boundary in ultrasound. We have shown that this method can be used to perform 3DUS to stereo camera registration for AR in surgery. This method provides a direct closed-form registration between a 3DUS volume and a stereoscopic camera view, does not require calibration of the US transducer or tracking of cameras or US transducers, and provides improved accuracies over tracking-based
Future work will utilize laparoscopic stereo-cameras in the registration technique, and investigate in-vivo studies to confirm an acceptable level of accu- racy is achieved in an intra-operative setting. 29 Chapter 3 Registration through an Air-Tissue Boundary: da Vinci Study 3.1 Introduction Chapter 2 described an initial feasibility study evaluating the registration of 3DUS to stereo cameras using fiducials at an air-tissue boundary. While the results of this initial study were promising, several elements of the experimental setup were unrealistic for laparoscopic surgery. The stereo cameras used for imaging had a disparity of 12 cm, much greater than any possible laparoscopic camera. The reg- istration tool, which measured 55 mm by 45 mm, was also unrealistically large for laparoscopic or robotic surgery. In Chapter 3 we describe a second feasibility study examining registration through an air-tissue boundary. This second study tested whether our accuracy results from Chapter 2 would transfer to a more realistic clinical system. In this experiment, the stereo endoscope of a da Vinci Surgical System was used for cam- era imaging, and a smaller registration tool was used. Also in the second experi- ment, a different 3DUS imaging system was applied. As discussed in Chapter 2, the new registration method is intended for registration of ultrasound data from a va- 30 riety of imaging configurations. In the first study we tested an external abdominal 3DUS transducer which might be used, for example, for guidance in laparoscopic or robotic-assisted laparoscopic partial nephrectomy. In this study, the robotic TRUS system for guidance in RALRP introduced in Chapter 1 and described in detail in Chapter 4 was used for 3DUS imaging. 3.2 Methods 3.2.1 Registration Concept Figure 3.1 depicts our registration concept, which differs slightly from that de- scribed in Chapter 2. 
We define three coordinate systems: the stereo camera coordinate system {õ0, C0}, the optical marker coordinate system {õ1, C1}, and the 3DUS coordinate system {õ2, C2}. The goal of the registration is to determine the homogeneous transformation 0T2 from {õ0, C0} to {õ2, C2}. The coordinates of the three camera markers in {õ0, C0}, 0xc0, 0xc1, and 0xc2, are determined by stereo triangulation. Likewise, the coordinates of the three US fiducials in {õ2, C2}, 2xus0, 2xus1, and 2xus2, are determined by segmenting the fiducials out of the 3DUS volume. The offset between the camera markers and the ultrasound fiducials, 1vuc, is known from the geometry of the tool. The offset is applied to yield the positions of the ultrasound fiducials in {õ0, C0}, 0xus0, 0xus1, and 0xus2. There are then three common points known in both {õ0, C0} and {õ2, C2}, which means that a standard least squares approach can be used to solve for the transformation 0T2. Multiple positions of the registration tool can be imaged and incorporated to increase the number of fiducials, and thus the accuracy of the registration.

Figure 3.1: Updated registration concept.

3.2.2 Apparatus

Laparoscopic Stereo Cameras

A 12-mm 0-degree da Vinci stereo endoscope was used for camera imaging. A da Vinci Standard model was used for phantom testing, and a da Vinci Si model was used for ex-vivo tissue testing. The stereo camera images were captured using two Matrox Vio cards (Matrox Electronic Systems, Dorval, QC), with the left and right channel DVI outputs from the da Vinci surgical console streamed to separate cards. The capture system ran on an Intel PC with 10 GB of memory running Windows XP 32-bit Edition. The images were captured synchronously using the native Matrox API at 60 frames per second and a resolution of 720 by 486 pixels.
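The least squares solution for 0T2 from corresponding points, mentioned in Section 3.2.1, is a standard absolute-orientation problem. A minimal sketch of one common SVD-based solution (Kabsch/Arun style), assuming NumPy is available; the thesis does not specify the exact solver used, and the points here are placeholders for the triangulated and segmented fiducial coordinates:

```python
import numpy as np

def rigid_register(P_us, P_cam):
    """Least-squares rigid transform 0T2 mapping US-frame fiducial
    coordinates onto their camera-frame counterparts.
    P_us, P_cam: (N, 3) arrays of corresponding points, N >= 3."""
    P_us, P_cam = np.asarray(P_us, float), np.asarray(P_cam, float)
    c_us, c_cam = P_us.mean(axis=0), P_cam.mean(axis=0)
    H = (P_us - c_us).T @ (P_cam - c_cam)          # 3x3 covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against a reflection in the least-squares solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_cam - R @ c_us
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

With fiducials imaged at multiple registration tool positions, the rows of the two point arrays are simply concatenated before solving, which is how the added redundancy improves the estimate.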
Three-dimensional Ultrasound System

All ultrasound data for this study were captured using a biplane parasagittal/transverse TRUS transducer in combination with a PC-based ultrasound console (Sonix RP; Ultrasonix Medical Corp., Richmond, BC). The 128-element linear parasagittal array was used for all imaging, with an imaging depth of 55 mm. Focus depth was adjusted before testing to produce the best possible image, and remained constant thereafter. A robotic TRUS imaging system (shown in Figure 3.2), based on a modified brachytherapy stepper [1], was used to capture 3D data by rotating the TRUS transducer about its axis and recording ultrasound images at angular increments of 0.3 degrees. The range of angles was adjusted according to the position of the registration tool.

Figure 3.2: Robotic TRUS imaging system.

Registration Tool

Figure 3.3 shows the registration tool used in this experiment. It consists of a machined stainless steel plate, with angled handles designed to be grasped by da Vinci needle drivers. Optical markers on the top surface are arranged directly above stainless steel spherical fiducials on the opposite face. The spherical fiducials are 3 mm in diameter. To locate them accurately, the fiducials are seated in 1-mm circular holes machined into the plate by a water-jet cutter with a dimensional accuracy of 0.13 mm. The tool was designed to fit through the 10-mm inner diameter of the da Vinci cannulas. It is approximately 9.5 mm wide, with an overall length of approximately 54 mm. The fiducial diameter was selected based on the fiducial localization accuracy test and the ANOVA described in Chapter 2. In that test, we found that the average localization error for 3-mm fiducials was less than that for 2-mm fiducials by a statistically significant amount. The average fiducial localization error for 4-mm fiducials was less than that for 3-mm fiducials, but the difference was not statistically significant.
Based on this result, the smaller 3-mm fiducials were selected for this experiment. The width of the tool was constrained by the size of the da Vinci cannulas. The length was chosen based on the size of the imaging array used in the experiment (55 mm). The number of fiducials was chosen to maintain consistency with the previous feasibility study.

Figure 3.3: Schematic of registration tool. (Dimensions are in mm.)

3.2.3 Registration Procedure

In this study, we applied our method to two different tissue phantoms. A custom-made PVC prostate phantom was used to register our 3DUS system to a da Vinci Standard system in a research lab at the University of British Columbia (UBC). An ex-vivo porcine liver was used to register our 3DUS system to a da Vinci Si system in a research lab at Vancouver General Hospital. Both tests followed the same experimental procedure, described below.

Figure 3.4 shows an overview of the experimental setup used in this study. The TRUS transducer was installed on a standard operating room table using a brachytherapy positioning arm (Micro Touch; CIVCO Medical Solutions, Kalona, IA). The da Vinci stereo endoscope was positioned to view the parasagittal imaging array of the TRUS transducer. A US imaging phantom was installed over the TRUS probe, with the top surface visible in the da Vinci camera view. The registration tool was applied to the top surface of the phantom using the da Vinci manipulators. The tool was positioned so that the three ultrasound fiducials were visible in the 3DUS volume and the three optical markers were visible in the da Vinci camera view. The left and right camera images and a 3DUS volume were captured. The registration tool was then moved to a new position and reimaged. A total of twelve registration tool positions were imaged for each ultrasound phantom.

Figure 3.4: Registration test setup.
Figure 3.5: 3DUS and da Vinci stereo endoscope arranged to image the ex-vivo air-tissue boundary (a), and da Vinci camera view of the tool pressed against the surface of the porcine liver (b).

A standard stereo camera calibration was performed using Bouguet’s camera calibration toolbox for Matlab [5]. The registration tool’s optical markers were selected in the left and right camera images, and the initial selection was automatically refined to sub-pixel precision using a Harris corner detector. The left and right image points were then used to triangulate the positions of the registration tool’s optical markers in the 3D camera frame. Similarly, the tips of the US fiducials were manually localized in the 3DUS volumes. As described above, the common fiducial points on the registration tool were used to solve for the homogeneous transformation between the camera and ultrasound frames using a standard least-squares algorithm [46] that minimizes the sum of squared distance errors between the common points. Fiducial points from multiple positions of the registration tool were incorporated in order to increase the accuracy of the registration. Between one and four positions of the registration tool were used, with the registration tool translated and rotated at random across the portion of the phantom’s surface that could be imaged by both the TRUS and the stereo endoscope (approximately 30 mm by 30 mm). The fiducial registration error (FRE) was defined as the average residual error between the camera markers and ultrasound fiducials.

3.2.4 Validation Procedure

Figure 3.6 shows a cross-wire phantom used to evaluate the accuracy of our registration method. The phantom was designed to provide points that could be precisely localized by both 3DUS and stereo cameras. It consists of 8 intersection points of 0.2-mm nylon wire arranged in a grid approximately 35 mm by 25 mm by 20 mm. The wire grid is supported by a custom-built stainless steel frame.
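Triangulating a marker from its left and right pixel coordinates can be done with the standard linear (DLT-style) method; a minimal sketch, assuming the 3-by-4 projection matrices of the two calibrated cameras are available from the calibration step:

```python
import numpy as np

def triangulate(P_left, P_right, uv_left, uv_right):
    """Linear (DLT) triangulation of one marker from matched pixel
    coordinates in the left and right images.  P_left and P_right are
    the 3x4 projection matrices obtained from stereo calibration."""
    u_l, v_l = uv_left
    u_r, v_r = uv_right
    # Each pixel observation contributes two homogeneous linear equations.
    A = np.array([
        u_l * P_left[2] - P_left[0],
        v_l * P_left[2] - P_left[1],
        u_r * P_right[2] - P_right[0],
        v_r * P_right[2] - P_right[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space vector = homogeneous 3D point
    return X[:3] / X[3]        # 3D point in the camera frame
```

In practice a nonlinear refinement can follow this linear estimate, but for the small reprojection errors of a sub-pixel corner detector the linear solution is usually adequate.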
Figure 3.6: Schematic of cross-wire phantom. (Dimensions are in mm.)

After the registration was determined, without moving either the US transducer or the stereo endoscope, the ultrasound phantom was removed and the transducer was immersed in a waterbath. The cross-wire phantom was installed in the waterbath and a 3DUS volume of the phantom containing all eight cross-wire points was captured. Again without disturbing any of the apparatus, the water in the bath was drained and left and right camera images of the cross-wire phantom were captured. This process was repeated for a second position of the cross-wire phantom, yielding sixteen target points in all. The cross-wire points were localized in the camera frame using stereo triangulation, and in the US frame by manually localizing the points in the 3DUS volumes. (All ultrasound data were corrected for differences in the speed of sound: $c_{\mathrm{PVC}} = 1520$ m/s, $c_{\mathrm{tissue}} = 1540$ m/s, $c_{\mathrm{water}} = 1480$ m/s.) To determine registration error, we used the previously found registrations to transform the positions of the cross-wire intersections from the 3DUS frame into the stereo camera frame. These transformed points were compared to the triangulated positions of the cross wires, with target registration error (TRE) defined as the distance between the transformed US points and the triangulated camera points. We measured the error for registrations incorporating between one and four positions of the registration tool (i.e., between three and twelve fiducials). We used an ANOVA to determine whether registrations incorporating between two and four positions of the tool were significantly more accurate than in the single-tool case. The registration was also used to create examples of images that could be used by the da Vinci surgeon for guidance during surgery.
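The speed-of-sound correction and the TRE computation amount to a simple rescaling followed by point-to-point distances. A minimal sketch, under two assumptions of ours: that the console beamforms at 1540 m/s, and that the correction can be applied radially about the transducer origin:

```python
import numpy as np

C_ASSUMED = 1540.0  # m/s; beamforming speed assumed by the console (our assumption)

def sound_speed_correct(points_mm, c_actual, c_assumed=C_ASSUMED, origin=None):
    """Rescale ultrasound point positions for the true speed of sound in
    the medium.  Distances reported by the scanner scale by
    c_actual / c_assumed along the beam; as a simplification, the
    correction here is applied radially about the transducer origin."""
    pts = np.asarray(points_mm, dtype=float)
    o = np.zeros(pts.shape[-1]) if origin is None else np.asarray(origin, float)
    return o + (pts - o) * (c_actual / c_assumed)

def tre(points_us, points_cam, R, t):
    """Target registration error: per-point distance between registered
    ultrasound points and the triangulated camera points."""
    mapped = (np.asarray(points_us) @ np.asarray(R).T) + np.asarray(t)
    return np.linalg.norm(mapped - np.asarray(points_cam), axis=1)
```

Reporting the mean and standard deviation of the `tre` array over the sixteen cross-wire targets yields values in the same form as the tables below.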
The da Vinci camera images and a 3DUS volume were captured while the stereo endoscope and the TRUS both imaged a prostate elastography phantom (CIRS, Norfolk, VA). The simulated anatomic features in the phantom were then overlaid in the correct position and orientation on both images.

3.3 Results

Figure 3.7 shows the appearance of spherical fiducials pressed against the surface of a PVC ultrasound phantom and an ex-vivo liver sample.

Figure 3.7: Surface fiducial against an air-tissue boundary and imaged through a PVC phantom (a) and an ex-vivo liver tissue sample (b). Illustration of the method for localizing the fiducial tip (c): the axis of the reverberation is identified, and the tip is selected along that line.

Table 3.1 lists TRE and FRE results when imaging through the PVC prostate phantom. Between one and four positions of the registration tool were used to determine the transformation, so results are averaged over multiple possible combinations (e.g., 12 choose 4 combinations, 12 choose 3 combinations, etc.). Table 3.2 lists registration accuracy results when imaging through the ex-vivo porcine liver.

Figure 3.8 shows an example of an overlay image produced using our registration method. The left da Vinci camera image, captured while the stereo endoscope imaged the prostate elastography phantom, is shown. The segmented simulated urethra and seminal vesicles are shown superimposed in the correct position and orientation on the camera image.

Table 3.1: Registration accuracy results imaging through the PVC phantom. (Asterisks indicate statistically significant improvements over the single-tool result.)

tool poses   fiducials   targets   FRE (mm)      TRE (mm)
1            3           16        0.20 ± 0.09   3.85 ± 1.76
2            6           16        0.75 ± 0.38   2.16 ± 1.16*
3            9           16        0.81 ± 0.55   1.96 ± 1.08*
4            12          16        0.85 ± 0.62   1.82 ± 1.03*

Table 3.2: Registration accuracy results imaging through ex-vivo liver tissue. (Asterisks indicate statistically significant improvements over the single-tool result.)
tool poses   fiducials   targets   FRE (mm)      TRE (mm)
1            3           16        0.54 ± 0.20   2.36 ± 1.01
2            6           16        0.82 ± 0.29   1.67 ± 0.75*
3            9           16        0.91 ± 0.32   1.57 ± 0.72*
4            12          16        0.95 ± 0.34   1.51 ± 0.70*

Figure 3.8: Overlay of TRUS information on the da Vinci camera view based on the registration. The segmented urethra (red) and seminal vesicles (blue) are shown. The image dimensions are 55 mm by 55 mm.

3.4 Discussion

In our previous feasibility study, we found an average TRE of 1.69 ± 0.60 mm using a single registration tool position and imaging through a PVC tissue phantom [59]. In that experiment we used stereo cameras with 120-mm disparity and a relatively large registration tool. In this experiment we used a da Vinci stereo endoscope with 3.8-mm disparity and a smaller registration tool designed to fit through a 10-mm cannula. Given the differences in camera and tool geometry, it is not surprising that we found the equivalent accuracy measure in this study to be greater. In this study, using a single registration tool position, we found an average TRE of 3.85 ± 1.76 mm imaging through PVC and 2.36 ± 1.01 mm imaging through ex-vivo liver.

To improve the results, we compensated for the geometry changes by adding one or more additional positions of the tool to the registration. For the ex-vivo liver testing, two positions of the tool produced an average TRE of 1.67 ± 0.75 mm. This is comparable to our previous result and represents a statistically significant reduction in TRE over a single tool position. Previous studies have reported accuracies of 3.05 ± 0.75 mm based on magnetic tracking [10] and 2.83 ± 0.83 mm based on optical tracking [27], although these studies used different accuracy measures. Adding more registration tool positions further reduced the average TRE. Incorporating four tool positions, for example, produced an average TRE of 1.51 ± 0.70 mm for the ex-vivo liver test.
While the increase from one tool position to two tool positions produced a statistically significant improvement, no additional tool position beyond that produced an incremental improvement that was statistically significant. In this experiment the registration tool was randomly repositioned over a section of the phantom surface approximately 30 mm by 30 mm, so adding a second tool position to the first is roughly equivalent to using a larger registration tool. Incorporating two registration tool positions would likely be the best choice for a clinical system, as this provides accuracy equivalent to that of more tool positions without the time needed for additional ultrasound scans.

Based on the previous studies that have considered intraoperative TRUS in RP, an AR system based on TRUS would potentially be useful for identifying the correct planes of dissection at the prostate base, at the prostate apex, and medial to the neurovascular bundles [51–53]. Identifying the correct plane between the prostate and the NVB is the most critical step, and requires the highest level of accuracy. Ukimura et al. [51] found that the mean distance between the NVB and the lateral edge of the prostate ranged from 1.9 ± 0.8 mm at the prostate apex to 2.5 ± 0.8 mm at the prostate base. This is suggestive of the required accuracy for an AR system in RALRP. Our system approaches this accuracy when incorporating two positions of the registration tool.

In our previous feasibility study we measured the accuracy of manually localizing fiducials to be approximately 1 mm [59]. The appearance of the spherical fiducials in US, and the ability to localize them accurately, clearly has an important effect on the overall accuracy of our method. Because our TRUS system uses sweeps of 2D images to construct 3D data, the resolution is lowest in the elevational direction of the array. Altering the incremental angle between images might thus affect registration accuracy significantly.
Altering the depth of the US focus relative to the fiducials might also affect the overall accuracy, as the boundaries of the fiducials become less clear. The boundaries of the fiducials raise another issue, as it is uncertain whether the high-intensity response at the top of the fiducials represents the actual edge of the sphere or reflections from within the metal sphere. Hacihaliloglu et al. also considered this problem when using a stylus pointer with a spherical edge in US [23]. They imaged a row of spheres with different known diameters, and compared the differences between the edges in the ultrasound images with the known differences in diameter. They concluded that the edge of the high-intensity response was in fact the edge of the sphere, perhaps with a small constant offset. The force applied to hold the registration tool against the air-tissue boundary would also appear to be important to the appearance of the fiducials, but we qualitatively observed that varying the applied force did not greatly affect the appearance of the fiducials, as long as the fiducials were in contact with the phantom or tissue.

As we have discussed, performing a registration currently requires manual segmentation and triangulation of the registration tool fiducials. Once the common points have been identified, finding the transformation itself requires only a simple calculation that takes less than a second on a typical PC. If fiducial detection and localization in both the ultrasound and camera frames could be made automatic, the registration process would take no longer than the time required to capture a 3DUS volume. Automatic detection of markers in a camera image is a well-studied problem in computer vision (many commercial optical tracking systems are based on this), so we do not believe this step would present an obstacle. Automatically detecting and localizing ultrasound surface fiducials has not, to our knowledge, been previously accomplished.
Based on the regular appearance of the fiducials, and the fact that their distinctive comet-tail reverberations (see Figure 3.7) distinguish them from surrounding features, we believe standard computer vision algorithms should be able to automatically detect and localize the fiducials without much alteration. Our initial investigations into this problem suggest that a detection algorithm based on the AdaBoost algorithm [15], similar to common face detectors [55], might be successful in detecting surface fiducials at near-real-time speeds. Boosting algorithms have previously been applied successfully to detect features in ultrasound [7, 30, 42]. Once the fiducials are detected, their tips can be localized using an edge-detection algorithm, producing accurate positions of the fiducials in the ultrasound frame.

It is worth noting that while other methods for ultrasound-to-camera registration provide updated registrations as the transducer or camera is translated or rotated, our method provides a one-time registration that is invalidated by any movement of either the camera or the transducer. For several reasons, we do not believe this is a critical disadvantage. First, because our method could be made to provide registrations very quickly (i.e., in 5 seconds or less), surgeons could simply reapply our registration tool every time they moved either the camera or the transducer. Alternatively, because our main application area is robotic surgery using the da Vinci Surgical System, the kinematics of the robot manipulators could be used to provide updated registrations based on the movement of the da Vinci camera arm. In the case of robotic surgery, if a stepper, positioning arm or robotic system such as our TRUS robot is used to hold the ultrasound transducer in place, the registration might only need to be performed once for the entire surgery, with the robot sensors used to update the registration.
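Updating a one-time registration from camera-arm kinematics reduces to composing homogeneous transforms. A minimal sketch under assumptions of ours: that the robot reports the camera pose in its base frame before and after the motion (the names `T_base_cam_old` and `T_base_cam_new` are hypothetical, for illustration):

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation and 3-vector translation into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def update_registration(T_cam_us, T_base_cam_old, T_base_cam_new):
    """Propagate a one-time ultrasound-to-camera registration after the
    camera arm moves, using the robot's forward kinematics:
    new camera <- base <- old camera <- ultrasound."""
    return np.linalg.inv(T_base_cam_new) @ T_base_cam_old @ T_cam_us
```

If the camera has not moved, the updated registration equals the original; any reported camera motion is simply folded in on the left.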
The registration accuracy results presented in this chapter appear promising. A patient study is planned for further validation. Many aspects of an actual clinical environment may have an important impact on the accuracy of the registration or the usability of the method. The lighting within the surgical field may be too intense or too low for accurate camera marker detection, blood or smoke may obscure the optical markers, and the ultrasound fiducials may appear differently when imaged through living tissue. Follow-up in-vivo tests will be able to answer these questions.

3.5 Conclusion

We have presented a new method for registering 3DUS data to a stereoscopic camera view. Compared to existing methods, our use of a registration tool with both camera and ultrasound fiducials eliminates the need for external tracking systems in the operating room. It also does not require any modifications to existing ultrasound or camera systems. The only additional equipment required is a simple, inexpensive tool which can be made sufficiently compact to fit through a cannula. Validation shows an average TRE of 3.85 ± 1.76 mm and 2.36 ± 1.01 mm when imaging through a PVC phantom and liver tissue respectively, under what are considered ideal conditions. Incorporating two poses of the registration tool significantly improves the TRE to 2.16 ± 1.16 mm and 1.67 ± 0.75 mm for PVC and liver respectively. After further developing methods for automatic ultrasound fiducial localization and optical marker triangulation, we plan to apply our method to augmented reality systems in clinical trials.

Chapter 4

Robotic System for Transrectal Ultrasound

4.1 Introduction

This chapter provides a detailed description of a new robotic system for intra-operative TRUS imaging in radical prostatectomy. The mechanical, electrical and software components of the system are presented.
The system’s imaging capabilities and control modes, including automatic tracking of da Vinci surgical tools, are described. The accuracy of the automatic tracking is tested. Finally, images from initial phantom and patient trials are presented, and plans for future testing are discussed.

4.2 Robotic TRUS Imaging System Description

Our robotic system for intra-operative TRUS imaging in prostatectomy consists of three main parts: a robotic probe manipulator (robot), an ultrasound machine with TRUS probe, and control and image processing software.

4.2.1 Robotic Probe Manipulator

The robot is based on a commercial brachytherapy stepper (EXII; CIVCO Medical Solutions, Kalona, IA). A brachytherapy stepper locks a TRUS probe in a cradle, and allows the user to manually rotate the TRUS probe about its axis and manually translate (step) the TRUS probe along its axis. Mechanical indicators or encoders accurately report the angle and position of the US imaging planes so clinicians can relate the images they view to pre-operative plans. The EXII model has optical encoders attached to the translational and rotational movement assemblies that are read by an attached unit and displayed on an LED readout. In the EXII stepper, the user steps the probe by rotating a knob or drum on the rear of the unit.

In our robot, shown in Figure 4.1, the encoders attached to the rotation and translation stages have been replaced with servomotors that allow automatic rotation and translation of the probe. The probe cradle section of the stepper has also been replaced with a module that can produce controlled radial vibration of the TRUS probe for ultrasound elastography imaging. The probe is held on a linear bearing while a servomotor spinning an eccentric cam vibrates the probe. The frequency of vibration is determined by the rotational speed of the motor shaft, while the amplitude of vibration can be adjusted mechanically.
Robot motion parameters, such as range of motion and resolution, are listed in Table 4.1.

It should be noted that the robotic probe manipulator described here is a modification of an earlier system constructed by personnel at the Robotics and Control Lab at UBC for 3D elastography imaging in prostate brachytherapy. The previous version of the system had active rotation and vibration stages, but the axial translation of the probe was still controlled manually by clinicians. In order to use the system in RALRP, where access to the stepping knob is limited during the surgery by the da Vinci, the active translation stage was added as part of this thesis.

The mechanical components of the active translation stage are shown in Figure 4.2. In the EXII stepper, a lead screw that controls translation of the probe cradle (A) is connected to a dual-input gearbox (B) with one output shaft and two input shafts. One input shaft is connected to the stepping knob (C); the other is connected to the translation stage’s optical encoder. In our modified version, the encoder has been removed, and a motor (D) is connected to the dual-input gearbox instead. Three machined aluminum plates connect to the gearbox housing and provide a mounting position for the translation motor. A flexible shaft coupling (W Series; Helical Products Company, Santa Maria, CA) is used to connect the translation motor shaft to the gearbox shaft.

Figure 4.1: Views of the robotic TRUS probe manipulator and stabilizer arm showing: (A) vibration assembly, (B) roll motor, (C) manual translation drum, (D) translation motor, (E) stabilizer fine adjustment stage, (F) stabilizer gross adjustment clamp.

Figure 4.2: Mechanical components of the robot translation stage (shown with cover removed): (A) base of translation stage lead screw, (B) dual-input gearbox, (C) stepping knob, (D) translation motor, (E) flexible coupling.

Table 4.1: Robot motion parameters.
Parameter                       Roll Stage         Translation Stage   Vibration Stage
Motor model                     Faulhaber 2342     Faulhaber 2342      Maxon 11875
Gearmotor reduction             14:1               3.71:1              1:1
Motion range                    ±45 deg.           ±60 mm              ±2 mm
Response time (center to max)   1027 ms            905 ms              N/A
Resolution                      3.5 × 10⁻³ deg.    5.3 × 10⁻³ mm       variable

The robot’s three DC motors (probe translation, rotation and vibration) are driven by electronics contained within a custom-built enclosure, shown in Figure 4.3. The main components within the enclosure are a medical-grade power supply and three Faulhaber motion controllers (Model 3006S; Micromo, Clearwater, FL), which integrate PD/PID position and velocity controllers with PWM amplifiers. The controllers provide a high-level interface: they receive ASCII commands for position or velocity control through an RS232 interface. In our enclosure, they are connected to the controlling PC through RS232-to-USB adapters.

4.2.2 Control and Image Analysis Software

Software running on the ultrasound console controls the motion of the TRUS robot and the ultrasound imaging. A graphical user interface (GUI) writes ASCII commands to the Faulhaber motion controllers through the RS232 interface. The GUI is shown in Figure 4.4, and is a modified version of a previous program designed for imaging during brachytherapy. A second program interacts with the motion control GUI through shared access to text files, and allows real-time elastography imaging and the capture of 2D image data with position information for processing into 3D volumes. The imaging program was written by students in the Robotics and Control Lab at UBC for previous research studies [61].

Figure 4.3: Exterior and interior views of the TRUS electronics enclosure.

4.2.3 Imaging Apparatus

A parasagittal/transverse biplane TRUS probe is used for imaging in combination with a PC-based ultrasound console (Sonix RP or Sonix Touch; Ultrasonix Medical Corp., Richmond, BC).
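Sending ASCII commands to such a controller over a serial link can be sketched as below. This is illustrative only: the mnemonics shown (EN, LA, M, V) follow the Faulhaber ASCII convention as we understand it, but should be verified against the controller manual, and the wrapper class is ours, not the thesis software:

```python
def frame(cmd):
    """Frame one ASCII command for the RS232 link (carriage-return terminated)."""
    return (cmd + "\r").encode("ascii")

class MotorAxis:
    """Thin wrapper around one motion controller reached through an
    RS232-to-USB adapter.  Command mnemonics are assumptions; check the
    Faulhaber manual before use."""

    def __init__(self, port, baud=9600):
        # pyserial is imported lazily so the framing helper can be used
        # (and tested) without the package or the hardware present.
        import serial
        self.ser = serial.Serial(port, baud, timeout=0.5)

    def _write(self, cmd):
        self.ser.write(frame(cmd))

    def enable(self):
        self._write("EN")            # enable the drive

    def move_to(self, counts):
        self._write(f"LA{counts}")   # load absolute target position (counts)
        self._write("M")             # initiate the move

    def velocity(self, rpm):
        self._write(f"V{rpm}")       # continuous velocity command
```

Usage might look like `roll = MotorAxis("COM3"); roll.enable(); roll.move_to(10240)`, with one `MotorAxis` instance per controller (roll, translation, vibration).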
The ultrasound console is capable of capturing B-mode, power and color Doppler, and raw radio-frequency (RF) data. By capturing and processing RF data from the Ultrasonix research interface, our system is also capable of elastography imaging. Ultrasound elastography is a technique for measuring the mechanical properties of tissue [38]. When compressed, softer tissue experiences larger strain than stiffer tissue. By vibrating the tissue and measuring the amount of tissue displacement or strain from consecutive RF-data images, characteristics such as elasticity or viscosity can be obtained. A variety of algorithms can be used to reconstruct the tissue properties, but a detailed description is outside the scope of this thesis. In this work, axial strain elastography images are presented. To capture elastography data using our system, the TRUS probe is vibrated by the eccentric cam mechanism described above while RF data are acquired. The vibration frequency content can be controlled by the user through the GUI. Image analysis software developed by our group [61] is used to generate 2D ultrasound elastography data either in real time or offline.

4.2.4 Control Modes

Our motion control software allows the user to position the TRUS probe and capture data in several ways, described below.

3D Data Capture

This mode allows automatic capture of 3D US data. The TRUS robot translates the transverse imaging plane or rotates the parasagittal imaging plane automatically, either in a smooth sweep or in incremental steps. For both motions, the hold time, speed, range and/or increment size can be varied through the motion control GUI shown in Figure 4.4. The control software continually writes the current positions from the motor encoders to a file that is read by the image processing software, which in turn saves image and position data to an output file.
During the automatic movement, the vibration of the probe can be activated while RF data is captured, allowing the generation of elastography data.

Figure 4.4: GUI for TRUS robot control through the Ultrasonix console. The program allows control of the vibration, translation and rotation of the TRUS probe.

Remote Manual Control

This mode allows remote positioning of the TRUS probe, for example by the surgeon seated away from the patient at the da Vinci console. In this mode, a 3D mouse (SpaceNavigator; 3Dconnexion, Boston, MA), as shown in Figure 4.5, is used as an input device by the motion control software. Displacement and rotation of the 3D mouse about a single axis are mapped by the GUI to velocity commands for the translation and rotation stages of the robot, respectively.

Figure 4.5: 3Dconnexion SpaceNavigator 3D mouse used for remote manual positioning of the TRUS probe.

Automatic Tool Tracking

In this mode, the parasagittal imaging plane of the TRUS transducer is automatically rotated so that it contains the tip of a selected da Vinci surgical tool. This is intended to provide optimal guidance without any distraction to the surgeon. The translation stage of the robot is not used for tool tracking because of safety concerns. The tool tracking is based on registering the TRUS system to the da Vinci robot using a variation of the air-tissue boundary method described in Chapter 2 and Chapter 3. This registration method is described in the next section.

4.3 Registration Method for Tool Tracking

Tracking a tool tip with the ultrasound requires simply that the position of the tool tip ${}^0x$ in the ultrasound frame $\{\tilde{o}_0, C_0\}$ be known in real time. The da Vinci application programming interface (API) can provide the location of the tool tips ${}^1x$ relative to a fixed coordinate system on the da Vinci patient-side cart, $\{\tilde{o}_1, C_1\}$.
If the homogeneous transformation between the frames, ${}^0T_1$, is known, it can be used to transform the points output by the da Vinci API into the ultrasound frame. Our air-tissue boundary registration method can be used to determine the transformation between the frames.

Figure 4.6: Air-tissue boundary registration concept for automatic tool tracking in RALRP.

To avoid confusion, the difference between the registration described in Chapter 2 and Chapter 3 and the registration described in this chapter should be emphasized. In the previous chapters, the goal was to register a 3DUS system to an external coordinate system using common points at an air-tissue boundary. In those chapters, the motivation was to provide AR overlays in the surgeon’s view, and the external coordinate system was the 3D frame of the stereo cameras. In this chapter, the goal is again to register the TRUS system to an external coordinate system using common points at an air-tissue boundary; however, the external coordinate system is now the kinematic frame of the da Vinci system. The da Vinci cameras themselves are not necessary in this registration, other than for the surgeon positioning the tools as in normal surgery.

A registration is performed as follows. The TRUS system is arranged so that it can image an air-tissue boundary, as at the anterior aspect of the prostate prior to dissection of the bladder neck during RALRP (as shown in Figure 4.6). The tip of a da Vinci surgical tool is pressed against the surface of the tissue. The user then manually rotates the parasagittal imaging plane of the TRUS system using the remote control mode, until the tip of the tool can be clearly distinguished in the US image. Custom software running on the ultrasound console allows the user to select the tip of the tool in the 2D US image.

Figure 4.7: Relationship between the 2D image frame and the 3DUS frame for the TRUS system. The angle $\theta$ is obtained from the rotation motor.
Based on the assumption that the robot rotates the probe about its axis, the software uses the position in the image along with the current rotation motor position to calculate the position of the tool tip in the 3D TRUS robot frame as follows. The tip of the da Vinci tool, point $p_1$, is defined by its components ${}^{im}p_1 = [a_1, b_1]^T$ in the 2D image coordinate system $\{\tilde{o}_{im}, C_{im}\}$ located at the corner of the TRUS image (see Figure 4.7). The goal is to determine ${}^0p_1 = [x_1, y_1, z_1]^T$, the definition of the point in the 3D TRUS coordinate system $\{\tilde{o}_0, C_0\}$. Using the known width of the image (55 mm), the 3D x-component is found as:

$$x_1 = 55 - a_1 \quad (4.1)$$

The 3D y-component and z-component are found using the current rotation angle:

$$y_1 = b_1 \sin\theta \quad (4.2)$$

$$z_1 = b_1 \cos\theta \quad (4.3)$$

The rotation angle is determined from the current encoder reading of the rotation motor, $r_{encoder}$, as follows:

$$\theta = \frac{r_{encoder} \times 360}{2048 \times 14 \times 3.56} \quad (4.4)$$

where 2048 counts per revolution is the encoder resolution of the roll motor, 14 is the rotation motor gearhead reduction, and 3.56 is the gear ratio of the spur gear used to drive the rotation stage.

After the position of the da Vinci tool tip in the 3D ultrasound frame has been determined, the tool is moved to a new position on the air-tissue boundary. The localization process is repeated for at least two additional points, three in total. A standard least-squares algorithm [56] is then used to solve for the transformation ${}^0T_1$ between the two frames by minimizing the sum of squared distances between the US and API points.

4.4 System Validation: Method

This section describes testing that has been conducted to validate the robotic TRUS system for use in RALRP.

4.4.1 Registration Accuracy Tests

Two experiments were carried out to measure the accuracy of our registration method for tracking.
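Equations (4.1)–(4.4) of the tool-tip localization translate directly into code. A minimal sketch (function and constant names are ours; units in mm, with $a_1$, $b_1$ read from the 2D image):

```python
import math

ENCODER_CPR = 2048      # encoder counts per revolution of the roll motor
GEARHEAD = 14           # rotation motor gearhead reduction
SPUR_RATIO = 3.56       # spur gear ratio driving the rotation stage
IMAGE_WIDTH_MM = 55.0   # width of the parasagittal TRUS image

def roll_angle_deg(r_encoder):
    """Equation (4.4): rotation angle from the roll motor encoder count."""
    return (r_encoder * 360.0) / (ENCODER_CPR * GEARHEAD * SPUR_RATIO)

def tool_tip_3d(a1, b1, r_encoder):
    """Equations (4.1)-(4.3): map an image-plane selection (a1, b1), in mm,
    to a point (x1, y1, z1) in the 3D TRUS robot frame."""
    theta = math.radians(roll_angle_deg(r_encoder))
    x1 = IMAGE_WIDTH_MM - a1
    y1 = b1 * math.sin(theta)
    z1 = b1 * math.cos(theta)
    return (x1, y1, z1)
```

Each of the three (or more) tool-tip presses yields one such 3D point, which is then paired with the corresponding da Vinci API point for the least-squares solve.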
In the first experiment, we simulated the da Vinci API using a da Vinci tool (Black Diamond Micro Forceps; Intuitive Surgical, Sunnyvale, CA) and a Micron Tracker optical tracking system (H3-60; Claron Technology, Toronto, ON). Optical markers were affixed to the body of the da Vinci tool, and the tip of the tool was registered to the markers using the Micron Tracker software by pressing the tip against a target point on an included calibration block (as in Figure 4.8). In the second experiment, we used a da Vinci Surgical System (Standard model; Intuitive Surgical, Sunnyvale, CA) with an enabled research interface. The tip of a different da Vinci tool (Large Needle Driver; Intuitive Surgical, Sunnyvale, CA) was tracked using the forward kinematics module of the Intuitive API.

Figure 4.8: The tip of the da Vinci tool is calibrated to optical markers affixed to the tool body, so that the tip can be tracked by the Micron Tracker.

In both experiments, the TRUS robot coordinate frame was registered to the Micron Tracker or da Vinci coordinate frame through the boundary of a PVC tissue phantom using the method described above. Three different positions of the tool tip were used as target points for each registration.

To evaluate the accuracy of the registration, the da Vinci tool tip was used to define additional common points beyond those used for registration. The tool was pressed against the surface of the phantom at 10 and 8 locations for the first and second tests respectively. At each location, the position of the tip in the da Vinci API frame was recorded, and 2D B-mode ultrasound images with position data were recorded and processed into 3D volumes. The tool tip was manually localized in the resulting ultrasound volumes, and the recorded tip positions in the da Vinci frame were transformed into the US frame using the determined registrations.
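The least-squares solution for 0T1 from the paired tip positions, and the transformation of API points into the US frame, can be sketched as follows. The thesis cites only a standard least-squares algorithm [56]; the SVD-based absolute-orientation (Arun-style) solution below is one common choice, not necessarily the exact implementation used:

```python
import numpy as np

def solve_rigid_transform(P_api, P_us):
    """Least-squares rigid transform (R, t) mapping da Vinci API points
    onto TRUS points, minimizing the sum of squared distances between
    the paired points (SVD-based absolute orientation)."""
    assert P_api.shape == P_us.shape and P_api.shape[1] == 3
    ca, cu = P_api.mean(axis=0), P_us.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (P_api - ca).T @ (P_us - cu)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cu - R @ ca
    return R, t

def registration_errors(P_us_manual, P_api, R, t):
    """Euclidean distance between manually localized US tip positions
    and API positions mapped through the registration."""
    predicted = P_api @ R.T + t
    return np.linalg.norm(P_us_manual - predicted, axis=1)
```

With three or more non-collinear tip positions, `solve_rigid_transform` recovers the frame-to-frame transformation exactly on noise-free data; `registration_errors` corresponds to the per-point error metric used in the accuracy tests below.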
Registration error was defined as the Euclidean distance between the localized ultrasound position and the registered position from the da Vinci kinematics. In essence, this error compares the ideal tracking location, found manually in the ultrasound volumes, with the tracking location predicted by our registration method and the da Vinci kinematics.

4.4.2 Phantom Imaging Tests

A specialized prostate elastography phantom (Model 066; CIRS, Norfolk, VA) was imaged. This phantom contains structures that simulate the prostate, urethra, seminal vesicles and rectal wall. The simulated prostate also contains three 1-cm diameter lesions which are designed to be invisible in B-mode images but visible in elastography images. Two-dimensional B-mode and axial strain elastography images of the prostate inclusions were captured in real time using both imaging arrays. B-mode and RF data with position information were also captured using both imaging arrays, and 3D B-mode and 3D elastography data were generated offline. The 3D data were visualized and manually segmented using Stradwin [47].

4.4.3 Patient Imaging Tests

After obtaining informed consent, eight patients with a mean age of 61.4 ± 5.9 years were imaged using our TRUS system over a period of 8 months. (For this patient imaging, the earlier version of the robot designed for prostate brachytherapy was used; the new translation stage was not applied.) Patients were imaged in the operating room immediately prior to RALRP, after sedation and positioning for surgery but before insufflation or the docking of the da Vinci system. A standard brachytherapy stabilizer arm (Micro-Touch 610-911; CIVCO Medical Solutions, Kalona, IA) was mounted to the operating table and the robot was installed on the stabilizer. The stabilizer's coarse and fine positioning mechanisms were used to position the probe relative to the patient. The surgeon then manually inserted the probe into the rectum, and positioned the probe for imaging. Three-dimensional B-mode and RF data were captured by rotating the parasagittal array only. The probe was removed from the patient, and the robot and stabilizer were uninstalled before the surgery commenced. Three-dimensional B-mode and elastography volumes were generated offline. The 3D data were again visualized and manually segmented using Stradwin.

4.5 System Validation: Results

4.5.1 Registration Accuracy Tests

Figure 4.9 shows the appearance in an ultrasound image of a da Vinci tool tip pressed against an air-tissue boundary (top surface of a PVC phantom).

Figure 4.9: Ultrasound image of the tip of a da Vinci tool (Large Needle Drivers) pressed against the air-tissue boundary of a PVC phantom.

Table 4.2 shows measured registration errors using the Micron Tracker to simulate a da Vinci system. Four registrations were performed, with the registration error measured at ten tool positions for each registration. The average error for all four registrations was 2.37 ± 1.15 mm.

Table 4.2: Average error over ten tool tip positions between tool tip location and predicted location based on registration to simulated da Vinci frame (Micron Tracker). Errors are presented in the anatomical frame of the patient assuming positioning for RALRP, along the superior-inferior (eS−I), medial-lateral (eM−L) and anterior-posterior (eA−P) axes.

                 eS−I (mm)    eM−L (mm)    eA−P (mm)    eTotal (mm)
Registration 1   1.01 ± 0.74  0.48 ± 0.28  1.29 ± 1.21  1.99 ± 1.08
Registration 2   0.98 ± 0.51  0.42 ± 0.33  2.26 ± 1.39  2.66 ± 1.29
Registration 3   1.69 ± 1.03  0.43 ± 0.22  1.57 ± 0.89  2.47 ± 1.22
Registration 4   1.83 ± 1.15  0.73 ± 0.41  1.01 ± 0.65  2.38 ± 1.15
Average          1.38 ± 0.97  0.52 ± 0.34  1.53 ± 1.17  2.37 ± 1.15

Table 4.3 shows measured registration errors using the actual da Vinci system. Four registrations were again performed, with the registration error measured at eight tool positions for each registration. The average Euclidean error for all four registrations was 0.95 ± 0.38 mm.

Table 4.3: Average error over eight tool tip positions between tool tip location and predicted location based on registration to actual da Vinci frame. Errors are presented in the anatomical frame of the patient assuming positioning for RALRP, along the superior-inferior (eS−I), medial-lateral (eM−L) and anterior-posterior (eA−P) axes.

                 eS−I (mm)    eM−L (mm)    eA−P (mm)    eTotal (mm)
Registration 1   0.46 ± 0.52  0.42 ± 0.51  0.34 ± 0.15  0.79 ± 0.33
Registration 2   0.29 ± 0.39  0.48 ± 0.47  0.38 ± 0.10  0.75 ± 0.14
Registration 3   0.58 ± 0.68  0.47 ± 0.56  0.53 ± 0.32  1.03 ± 0.37
Registration 4   0.82 ± 0.83  0.55 ± 0.63  0.50 ± 0.17  1.23 ± 0.44
Average          0.54 ± 0.65  0.48 ± 0.52  0.44 ± 0.21  0.95 ± 0.38

4.5.2 Phantom Imaging Tests

Figure 4.10 shows 2D axial strain elastography images and B-mode images of the prostate elastography phantom captured in real time. In the B-mode images, the prostate inclusions cannot be clearly identified. In the elastography images, the inclusions are visible as the low-intensity areas (low strain) within the prostate.

Figure 4.10: Comparison of axial strain elastography (E) and B-mode images (B) of prostate elastography phantom. Arrows indicate inclusions in elastography images.

According to the phantom's documentation, the simulated lesions are approximately three times stiffer than the surrounding simulated prostate tissue. In the strain images shown in Figure 4.10, the mean intensity of the prostate excluding inclusions and the urethra is 16% of maximum in the transverse image and 19% of maximum in the parasagittal image. The mean image intensity of the inclusions is 5.6% of maximum in the transverse image and 8.0% of maximum in the parasagittal image.

Figure 4.11 shows a B-mode volume of the prostate elastography phantom captured by translating the transverse imaging plane. Orthogonal reslices of the volume data are shown, along with the manually segmented prostate, urethra and seminal vesicles.
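As a quick consistency check on the strain values above, stiffer tissue strains less under the same load, so lesions roughly three times stiffer than the background should show roughly one third of the background strain. A small sketch using the reported mean intensities (the uniform-stress approximation here is our simplification, not an analysis from the thesis):

```python
# Reported mean strain-image intensities (fraction of maximum).
background = {"transverse": 0.16, "parasagittal": 0.19}
inclusion = {"transverse": 0.056, "parasagittal": 0.080}

# Under a uniform-stress approximation, the strain ratio is roughly the
# inverse stiffness ratio, so a 3:1 stiffness contrast predicts a ~3:1
# strain contrast.
ratios = {plane: background[plane] / inclusion[plane] for plane in background}
print(ratios)  # roughly 2.9 (transverse) and 2.4 (parasagittal)
```

Both measured ratios are close to the 3:1 stiffness contrast stated in the phantom's documentation.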
4.5.3 Patient Imaging Tests

Figure 4.12 shows axial strain elastography and B-mode patient images. The images shown are transverse-plane slices of the parasagittal volume data. The prostate boundaries are stronger in the elastography images than in the B-mode images.

Figure 4.11: 3D B-mode US of elastography phantom generated using the transverse imaging array. (Rendered using Stradwin [47].)

Figure 4.12: Comparison of axial strain elastography (E) and B-mode images (B) of two patients.

Figure 4.13 shows a B-mode volume generated from the patient data. Orthogonal reslices of the volume data are shown, along with the manually segmented prostate.

Figure 4.13: 3D B-mode US of patient prostate generated using the sagittal imaging array. (Rendered using Stradwin [47].)

4.6 Discussion

Our TRUS system is well suited to intra-operative imaging in RALRP. The robot's large motion ranges (120 mm translation, 90 degrees rotation) allow even very large prostates to be imaged completely. Both the roll and translation stages have fast response times (each takes approximately 1 second to move from the zero position to its maximum range). The robot and stabilizer arm do not interfere with the motion of the da Vinci arms, and because both are based on common clinical systems, they are straightforward for clinicians to install and position. Also, the system maintains the standard stepper's manual interfaces, so that in the event of a failure, or depending on their preference, the surgeon can elect to manipulate the device manually through all its degrees of freedom.

Compared to the system of Han et al., our robot's primary advantages are its fully manual override and its ability to capture ultrasound elastography data. The phantom images shown in Figure 4.10 emphasize the advantage of ultrasound elastography imaging compared to B-mode imaging in prostatectomy.
Although Ukimura and Gill have used B-mode US to identify cancerous tissue near the edge of the prostate [51], the specificity of B-mode ultrasound in prostate cancer detection is known to be only on the order of 50%. This means surgeons using B-mode images for guidance would not be able to reliably tailor their dissections according to intra-operative US. Our system's elastography capability should improve the intra-operative visualization of cancerous tissue. Apart from cancer visualization, ultrasound elastography has also been shown to be more effective than B-mode ultrasound for imaging prostate anatomy in general [32], as seen in Figure 4.12, where the prostate boundaries are much clearer.

Based on the possible control methods described above, we foresee two usage modes for our system: 3D TRUS imaging for surgical planning and 2D TRUS imaging for real-time guidance. The 3D imaging has some remaining technical challenges that will need to be overcome. The surgical planning use of the 3D images requires the registration of pre-operative MRI to intra-operative 3D ultrasound, a complex problem that has yet to be fully solved. Our group has previously reported rigid registration of MRI to TRUS elastography with reasonably small Dice similarity measures [33], but further work in this area, including non-rigid registration, will be needed. It may also be necessary to increase the speed of our 3D volume generation. Capturing 2D data with position information by rotating or translating the probe typically takes 60 to 90 seconds, although this can be adjusted by the user. Processing these data into a scan-converted volume takes approximately 280 seconds for B-mode data and between 8 and 15 minutes for elastography data, depending on the reconstruction algorithm used. (Currently all 3D data are processed off-line.)
It will likely be possible to optimize our processing in the future to near real-time performance, and certainly to a speed that will be practical for use in the OR.

We believe a registration error of 0.95 ± 0.38 mm is acceptable for our tracking application. Since the goal of the tracking is simply to have the tool tips appear in the TRUS images, errors in the axial and lateral ultrasound directions are irrelevant as long as the tool tips are within the image boundaries. And because the thickness of the TRUS beam at the anterior surface of the prostate is on the order of millimeters, small errors in the elevational direction are likely not critical. The registration error using the Micron Tracker to simulate a da Vinci was 2.37 ± 1.15 mm, more than twice the error using the actual da Vinci. The higher error likely results from tracking the tip of the da Vinci forceps using the Micron Tracker. The Micron Tracker locates individual optical markers to within 0.2 mm, but when it is used to track a tool tip with markers on the base, the error is exacerbated by a lever-arm effect. Based on the geometry of our markers and the forceps, the additional tool tip tracking error could be as high as 1 mm.

In this testing, the registration between the TRUS system and the da Vinci robot was performed by a single user, and took only approximately 90 seconds. In an actual operating room environment, positioning and imaging the tools during the registration might require more time, especially since it would likely be more difficult to visually identify the tools through patient anatomy than through a simple PVC phantom. The registration would also likely require an ultrasound operator to assist the surgeon, although with refinements in the software it could be performed entirely by the surgeon.
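The lever-arm amplification mentioned above can be illustrated with a back-of-the-envelope sketch. The marker baseline and marker-to-tip distance below are assumed round numbers chosen for illustration; the thesis does not report the actual marker geometry:

```python
MARKER_ERROR_MM = 0.2   # stated single-marker localization accuracy
BASELINE_MM = 40.0      # assumed spread of the markers on the tool body
LEVER_ARM_MM = 200.0    # assumed distance from the markers to the tool tip

# A localization error across the marker baseline tilts the estimated
# tool axis by roughly err/baseline radians (small-angle approximation);
# the tip then sweeps through lever_arm times that angle.
angle_rad = MARKER_ERROR_MM / BASELINE_MM
tip_error_mm = angle_rad * LEVER_ARM_MM
print(tip_error_mm)  # about 1.0 mm with these assumed dimensions
```

With these (hypothetical) dimensions, the amplified tip error is on the order of 1 mm, which is consistent with the estimate quoted above.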
Even considering the time necessary to perform a registration, automatic tool tracking would greatly reduce the overall time added by intraoperative TRUS, since it would largely eliminate the need for manual adjustment.

4.7 Conclusion

The initial phantom and patient studies we have described have demonstrated the suitability of our new robotic system for intra-operative TRUS in robot-assisted radical prostatectomy. Future work will examine the use of the system's various control modes in surgery.

Chapter 5: Conclusions and Future Work

This chapter summarizes the contributions of this thesis, and discusses potential directions for future work.

5.1 Contributions

Chapter 2 and Chapter 3 described a new method for registering 3DUS data to stereo cameras. This new method uses a registration tool pressed against an air-tissue boundary, with both ultrasound fiducials and camera markers, to define common points for a direct frame-to-frame registration. Previous methods have required tracking the pose of the ultrasound transducer and camera housings using an external tracking system. Testing described in these chapters found the accuracy of the new method to be as high as 1.51 ± 0.70 mm when registering to the stereo endoscope of a da Vinci Surgical System. This is equivalent to or better than previous results using external tracking systems.

Chapter 4 described a new robotic system for intraoperative TRUS imaging in RALRP. This new system improves upon the TRUS robot previously described by Han et al. [24] by enabling vibro-elastography imaging, which has been shown to be superior for imaging prostate cancer and anatomy. Also, using the registration concept described in Chapter 2 and Chapter 3, the new system can be registered to the coordinate frame of the da Vinci Surgical System.
Based on this registration, the system can track the tips of the da Vinci tools with the TRUS imaging arrays, so that they always contain the current area of interest without the need for frequent manual adjustment. The new robotic TRUS system, together with the method for registering it to the da Vinci robot's coordinate frame, is the central contribution of this thesis.

5.2 Future Work

5.2.1 3D Ultrasound to Camera Registration

Our method for 3DUS to stereo camera registration through an air-tissue boundary has not yet been tested in vivo. This is an important next step. The appearance of the surface fiducials may be significantly different when imaged at the boundary of living tissue. Also, new practical considerations, such as marker obscurement due to blood or smoke in the surgical cavity, may come to light during an actual use case.

The revised registration tool described in Chapter 3 still requires some modification before being applied in a clinical study. The ultrasound fiducials are currently secured to the body of the tool using epoxy. They are prone to breaking off, which would be unacceptable in surgery. The optical markers on the top surface of the tool are currently printed onto adhesive paper stickers, which are water soluble. Waterproof markers are needed, since blood and other bodily fluids will regularly come into contact with the markers. To minimize issues with use in surgery, the ideal registration tool would be constructed from a single piece, perhaps using rapid prototyping methods or some form of casting. Alternatively, metal components similar to the current tool could be spot welded and coated in a biocompatible material. It might be preferable to mount the registration tool at the end of a standard laparoscopic tool, so that it could be inserted through an assistant's port and manipulated by the patient-side surgeon.
Each registration accuracy test described in Chapter 2 and Chapter 3 considered only a single relative position of the ultrasound transducer and the camera. The effect of changing the relative position on the overall accuracy of registration should be considered. Large angles between either the TRUS or the camera and the registration tool would likely decrease the accuracy of marker or fiducial localization.

One important direction of future research for this registration method is applying it in a practical on-line system. The actual registration (i.e. solving for the homogeneous transformation that relates the camera and ultrasound frames) has only been performed offline to this point. All image overlays have likewise been performed offline. As discussed in previous chapters, two main problems remain to be solved: automatic real-time localization of ultrasound fiducials and automatic real-time triangulation of camera markers. Automatic detection and localization of ultrasound surface fiducials has not been reported in the literature. Appendix A, previously submitted as a class project for Computer Science 525: Image Analysis II, describes a proof-of-concept test to show that a boosted classifier based on simple features could reliably detect surface fiducials in ultrasound images. Boosted classifiers have previously been applied in ultrasound image analysis, but not to the detection of repeatable features like surface fiducials. A classifier was trained and tested using a database of annotated images drawn from the registration testing described above. Initial results suggest that, with further development, a boosted classifier could be used to automatically detect ultrasound surface fiducials.

There are a number of possible schemes for applying the registration method in a practical surgical system.
In the most basic scheme, surgeons would simply apply the registration tool and image it using both modalities every time they wished to view an AR overlay. The software would automatically detect the fiducials and markers, solve the registration, and produce the AR overlay. This overlay would be invalidated by relative motion of the camera and US transducer. Alternatively, the registration method could be used to generate an initial registration. If the US transducer was fixed, or held using our TRUS system, the da Vinci robot kinematics could then be used to monitor movement of the camera arm and update the registration. Methods for tissue tracking currently under development could also be used to update the registration. Some measure of uncertainty in the updated registration would be monitored, and at a certain level of uncertainty the system would recommend reapplying the registration tool and performing a new registration. It might also be possible to generate an initial guess at the registration before applying the tool, since the da Vinci robot and US system would likely be arranged in a fairly consistent relative position for every surgery.

5.2.2 Robotic System for TRUS in RALRP

Based on the success of the evaluation testing described in Chapter 4, the robotic TRUS system should be applied in a patient trial. The testing should follow the general protocol described in Appendix B. This patient trial will allow further evaluation of our robot in particular, and the results are also likely to contribute to the more general literature concerning the usefulness of TRUS in RP.

As with the registration of 3DUS to the da Vinci stereo endoscope for augmented reality, the registration of the TRUS robot to the da Vinci robot could be improved by increasing the level of automation. An automatic surface fiducial detector, similar to the concept described in Appendix A, would again be useful, in this case for detecting the tool tips in the TRUS images.
Such a detector could reduce the complexity of the registration procedure. Instead of using the 3D mouse to locate each tool position during the registration, the surgeon could simply position the tool and capture a sweep of images. Software would then automatically localize the tool tip. This automation would likely make surgeons more willing to adopt the method.

Some improvements could also be made to the robotic system itself, especially since several components of the system have remained unchanged since an earlier iteration was used for brachytherapy imaging. The system currently requires three USB cords to be connected to the ultrasound console, because the motor controller for each of the three DOFs has a separate connection. The Faulhaber motor controllers can in fact be networked, with multiple controllers communicating with the host PC through a single RS232 connection. This would remove some cabling, which is undesirable in an OR, and might also allow for a smaller electronics enclosure.

It would likely be possible, with only a moderate amount of effort, to create an algorithm for reconstructing the absolute position of the TRUS probe during vibration. This absolute position information is useful in reconstructing the properties of the tissue for elastography. The vibration motor spins a linear cam which drives the TRUS probe up and down on a linear slide. The position of the probe is thus linearly related to the rotational position of the motor. However, a known zero position (i.e. where in the motor range the probe is at the top or bottom of its travel) is needed. By monitoring the torque output of the motor during an initial calibration spin, this zero point could be determined. The eccentricity of the cam means that the location of maximum torque is the bottom of the travel.

A short study examining the vibration modes of the TRUS transducer would be worthwhile.
In addition to the desired linear sliding motion of the transducer, there appears to be a substantial amount of compliance when loading the probe tip. This compliance may alter the actual vibration at the TRUS tip, reducing the accuracy of the elastography reconstruction. If there are undesired modes of vibration, they could likely be greatly reduced by redesigning the mechanism which connects the transducer holder to the slider.

The motor control code should be altered to avoid generating errors from the motion controllers. At some point in its operation, previously written code used in the control GUI software overloads the input buffers of the motion controllers, causing an unexpected error message to be sent back along the RS232 communication channel. This error was not sent in previous versions of the controller firmware, so the error was not recognized. The point in the code where the buffers are overrun should be identified and altered. Currently, a workaround simply ignores the error message.

Finally, the calibration of the translation and rotation stages should be reexamined. Currently, during initial calibration of the system, the user is directed by the software to center both ranges. Rotation is locked into place using a detent screw; translation is aligned with a visual marker on the slide. Both systems could be redesigned to zero at the edge of their ranges. Since the eccentric weight of the vibration motor causes the roll stage to naturally move to the end of its range, calibrating it at this point might greatly simplify the clinical workflow.

Bibliography

[1] T. Adebar, S. E. Salcudean, S. Mahdavi, M. Moradi, C. Nguan, and S. L. Goldenberg. A robotic system for intra-operative trans-rectal ultrasound and ultrasound elastography in radical prostatectomy. In Proceedings of the 2nd International Conference on Information Processing in Computer-Assisted Interventions, volume 6689, pages 79–89, 2011.
→ pages v, 32 [2] American Cancer Society. Cancer facts and figures 2011. Technical report, American Cancer Society, Atlanta, 2011. → pages 1 [3] A. T. B. Russell and W. T. Freeman. Labelme: the open annotation tool, 2005. URL http://labelme.csail.mit.edu/. → pages 80 [4] A. Baghani, H. Eskandari, S. Salcudean, and R. Rohling. Measurement of viscoelastic properties of tissue mimicking material using longitudinal wave excitation. IEEE Transactions on Ultrasonics, Ferroelectronics and Frequency Control, 56(7):1405–1418, 2009. → pages 20 [5] J.-Y. Bouguet. Camera calibration toolbox, 2010. URL http://www.vision.caltech.edu/bouguetj/. → pages 16, 37 [6] Canadian Cancer Society’s Steering Committee. Canadian cancer statistics 2011. Technical report, Canadian Cancer Society, Toronto, 2011. → pages 1 [7] G. Carneiro, B. Georgescu, S. Good, and D. Comaniciu. Automatic fetal measurements in ultrasound using constrained probabilistic boosting tree. In Proceedings of the 10th International Conference on Medical Image Computing and Computer-Assisted Intervention, volume 4792, pages 571–579, 2007. → pages 44 [8] G. Carneiro, B. Georgescu, S. Good, and D. Comaniciu. Detection and measurement of fetal anatomies from ultrasound images using a constrained probabilistic boosting tree. IEEE Transactions on Medical Imaging, 27: 1342–1355, 2008. → pages 80 71 [9] C. Cheung, C. Wedlake, J. Moore, S. Pautler, and T. Peters. Fused video and ultrasound images for minimally invasive partial nephrectomy: A phantom study. In Proceedings of the 13th International Conference on Medical Image Computing and Computer-Assisted Intervention, volume 6363, pages 408–415, 2010. → pages 15, 16 [10] C. L. Cheung, C. Wedlake, J. Moore, S. E. Pautler, A. Ahmad, and T. M. Peters. Fusion of stereoscopic video and laparoscopic ultrasound for minimally invasive partial nephrectomy. In SPIE Medical Imaging 2009: Visualization, Image-Guided Procedures, and Modeling, volume 7261, pages 726109–1–726109–10, 2009. 
→ pages 16, 27, 42 [11] D. Cohen, E. Mayer, D. Chen, A. Anstee, J. Vale, G.-Z. Yang, A. Darzi, and P. Edwards. Augmented reality image guidance in minimally invasive prostatectomy. In International Workshop on Prostate Cancer Imaging: Computer-Aided Diagnosis, Prognosis, and Intervention, volume 6367, pages 101–110, 2010. → pages 15 [12] G. Coughlin, K. Palmer, K. Shah, and V. Patel. Robotic-assisted radical prostatectomy: functional outcomes. Archrivos Espanoles de Urologia, 60 (4):408, 2007. → pages 5 [13] S. Doyle, A. Madabhushi, M. Feldman, and J. Tomaszeweski. A boosting cascade for automated detection of prostate cancer from digitized histology. In Proceedings of the 9th International Conference on Medical Image Computing and Computer-Assisted Intervention, volume 4191, pages 504–511, 2006. → pages 79 [14] L. Fei-Fei, R. Fergus, and A. Torralba. Recognizing and learning object categories. CVPR Short Course, 2007. → pages 80 [15] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Computational Learning Theory, volume 904, pages 23–37, 1995. → pages 44 [16] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, pages 337–374, 2000. → pages 80 [17] H. Fuchs, M. A. Livingston, R. Raskar, A. State, J. R. Crawford, P. Rademacher, S. H. Drake, and A. A. Meyer. Augmented reality visualization for laparoscopic surgery. In Proceedings of the 1st 72 International Conference on Medical Image Computing and Computer Assisted Intervention, volume 5241, pages 934–943, 1998. → pages 15 [18] I. Gill and O. Ukimura. Thermal energy-free laparoscopic nerve-sparing radical prostatectomy: one-year potency outcomes. Urol, 70(2):309–314, 2007. → pages 9 [19] I. Gill, O. Ukimura, M. Rubinstein, A. Finelli, A. Moinzadeh, D. Singh, J. Kaouk, T. Miki, and M. Desai. Lateral pedicle control during laparoscopic radical prostatectomy: refined technique. 
Urol, 65(1):23–27, 2005. → pages 8 [20] A. A. Goldenberg, J. Trachtenberg, W. Kucharczyk, Y. Yang, M. Haider, M. Sussman, L. Ma, and R. Weersink. Robot-assisted MRI-guided prostatic interventions. Robotica, 28(2):215–234, 2010. → pages 6 [21] M. L. Gonzalgo, N. Patil, L.-M. Su, and V. R. Patel. Minimally invasive surgical approaches and management of prostate cancer. Urologic Clinics of North America, 35(3):489–504, 2008. → pages 3, 4, 5 [22] W. E. L. Grimson, M. E. Leventon, G. J. Ettinger, A. Chabrerie, F. Ozlen, S. Nakajima, H. Atsumi, R. Kikinis, and P. Black. Clinical experience with a high precision image-guided neurosurgery system. In Proceedings of the 1st International Conference on Medical Image Computing and Computer-Assisted Intervention, volume 5241, pages 63–73, 1998. → pages 15 [23] I. Hacihaliloglu, R. Abugharbieh, A. Hodgson, P. Guy, and R. Rohling. Bone surface localization in ultrasound using image phase based features. Ultrasound in Medicine & Biology, 35(9):1475–1487, 2009. → pages 43 [24] M. Han, C. Kim, P. Mozer, F. Schafer, S. Badaan, B. Vigaru, K. Tseng, D. Petrisor, B. Trock, and D. Stoianovici. Tandem-robot assisted laparoscopic radical prostatectomy to improve the neurovascular bundle visualization: a feasibility study. Urology, 77(2):502–506, 2011. → pages 10, 11, 13, 66 [25] J. A. S. Jr., R. C. Chan, S. S. Chang, S. D. Herrell, P. E. Clark, R. Baumgartner, and M. S. Cookson. A comparison of the incidence and location of positive surgical margins in robotic assisted laparoscopic radical prostatectomy and open retropubic radical prostatectomy. The Journal of Urology, 178(6):2385–2390, 2007. → pages 4 73 [26] J. Krücker, S. Xu, N. Glossop, P. Guion, P. Choyke, I. Ocak, A. K. Singh, and B. J. Wood. Fusion of realtime transrectal ultrasound with preacquired MRI for multi-modality prostate imaging. In SPIE Medical Imaging 2007: Visualization and Image-Guided Procedures, volume 6509, pages 650912–1–650912–12, 2007. → pages 6 [27] J. Leven, D. 
Burschka, R. Kumar, G. Zhang, S. Blumenkranz, X. Dai, M. Awad, G. Hager, M. Marohn, M. Choti, C. Hasser, and R. Taylor. Davinci canvas: a telerobotic surgical system with integrated, robot-assisted, laparoscopic ultrasound capability. In Proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention, volume 3749, pages 811–818, 2005. → pages 15, 16, 27, 42 [28] F. Lindseth, G. A. Tangen, T. Langø, and J. Bang. Probe calibration for freehand 3-D ultrasound. Ultrasound in Medicine & Biology, 29(11): 1607–1623, 2003. → pages 16 [29] C. A. Linte, J. Moore, A. D. Wiles, C. Wedlake, and T. M. Peters. Virtual reality-enhanced ultrasound guidance: A novel technique for intracardiac interventions. Computer Aided Surgery, 13(2):82–94, 2008. → pages 16 [30] A. Madabhushi, P. Yang, M. Rosen, and S. Weinstein. Distinguishing lesions from posterior acoustic shadowing in breast ultrasound via non-linear dimensionality reduction. In Proceedings of Engineering in Medicine and Biology Society 2006, pages 3070–3073, 2006. → pages 44, 80 [31] A. Magheli, M. L. Gonzalgo, L.-M. Su, T. J. Guzzo, G. Netto, E. B. Humphreys, M. Han, A. W. Partin, and C. P. Pavlovich. Impact of surgical technique (open vs laparoscopic vs robotic-assisted) on pathological and biochemical outcomes following radical prostatectomy: an analysis using propensity score matching. British Journal of Urology International, 107 (12):1956–1962, 2011. → pages 4 [32] S. Mahdavi, M. Moradi, X. Wen, W. Morris, and S. Salcudean. Vibro-elastography for visualization of the prostate region: method evaluation. In Proceedings of the 12th International Conference on Medical Image Computing and Computer-Assisted Intervention, volume 5761, pages 339–347, 2009. → pages 64 [33] S. Mahdavi, W. Morris, I. Spadinger, N. Chng, O. Goksel, and S. Salcudean. 3D prostate segmentation in ultrasound images based on tapered and deformed ellipsoids. 
In Proceedings of the 12th International Conference on Medical Image Computing and Computer-Assisted Intervention, volume 5761, pages 960–967, 2009. → pages 64

[34] A. Mangera, A. K. Patel, and C. R. Chapple. Anatomy of the lower urinary tract. Surgery, 28(7):307–313, 2010. → pages 2

[35] L. Mercier, T. Langø, F. Lindseth, and D. L. Collins. A review of calibration techniques for freehand 3-D ultrasound systems. Ultrasound in Medicine & Biology, 31(4):449–471, 2005. → pages 16

[36] R. Narayanan, J. Kurhanewicz, K. Shinohara, E. Crawford, A. Simoneau, and J. Suri. MRI-ultrasound registration for targeted prostate biopsy. In IEEE Biomedical Imaging: From Nano to Macro, 2009, pages 991–994, 2009. → pages 6

[37] K. Okihara, K. Kamoi, M. Kanazawa, T. Yamada, O. Ukimura, A. Kawauchi, and T. Miki. Transrectal ultrasound navigation during minilaparotomy retropubic radical prostatectomy: impact on positive margin rates and prediction of earlier return to urinary continence. International Journal of Urology, 16(10):820–825, 2009. → pages 9

[38] J. Ophir, I. Cespedes, H. Ponnekanti, Y. Yazdi, and X. Li. Elastography: a quantitative method for imaging the elasticity of biological tissues. Ultrasonic Imaging, 13(2):111–134, 1991. → pages 51

[39] T. Poon and R. Rohling. Tracking a 3-D ultrasound probe with constantly visible fiducials. Ultrasound in Medicine & Biology, 33(1):152–157, 2007. → pages 27, 78

[40] R. Prager, R. Rohling, A. Gee, and L. Berman. Rapid calibration for 3-D freehand ultrasound. Ultrasound in Medicine & Biology, 24:855–869, 1998. → pages 28

[41] Prostate Cancer Canada. Grading and staging prostate cancer, 2011. URL http://www.prostatecancer.ca/PCCN/Prostate-Cancer/diagnosis/clinical-testing-and-the-Gleason-grade. → pages 4

[42] O. Pujol, M. Rosales, P. Radeva, and E. Nofrerias-Fernández. Intravascular ultrasound images vessel characterization using AdaBoost. In Functional Imaging and Modeling of the Heart, volume 2674, pages 242–251.
Springer-Verlag, 2003. → pages 44, 80

[43] C. Reynier, J. Troccaz, P. Fourneret, A. Dusserre, C. Gay-Jeune, J. Descotes, M. Bolla, and J. Giraud. Real-time MRI-ultrasound image guided stereotactic prostate biopsy, preliminary results. Medical Physics, 31:1568–1575, 2004. → pages 6

[44] L.-M. Su, B. P. Vagvolgyi, R. Agarwal, C. E. Reiley, R. H. Taylor, and G. D. Hager. Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3D-CT to stereoscopic video registration. Urology, 73(4):896–900, 2009. → pages 15

[45] D. Teber, S. Guven, T. Simpfendörfer, M. Baumhauer, E. O. Güven, F. Yencilek, A. S. Gözen, and J. Rassweiler. Augmented reality: a new tool to improve surgical accuracy during laparoscopic partial nephrectomy? Preliminary in vitro and in vivo results. European Urology, 56(2):332–338, 2009. → pages 15

[46] The MathWorks, Inc. Matlab Statistics Toolbox User's Guide. 24 Prime Park Way, Natick, MA 01760-1500, 2008. → pages 37

[47] G. Treece, R. Prager, and A. Gee. Regularised marching tetrahedra: improved iso-surface extraction. Computers and Graphics, 23(4):583–598, 1999. → pages 58, 62, 63

[48] O. Ukimura and I. Gill. Real-time transrectal ultrasound guidance during nerve sparing laparoscopic radical prostatectomy: pictorial essay. Journal of Urology, 175(4):1311–1319, 2006. → pages 8

[49] O. Ukimura and I. S. Gill. Imaging-assisted endoscopic surgery: Cleveland Clinic experience. Journal of Endourology, 22(4):803–810, 2008. → pages 15

[50] O. Ukimura and I. S. Gill, editors. Augmented reality for computer-assisted image-guided minimally invasive urology, pages 179–184. Contemporary Interventional Ultrasonography in Urology. Springer-Verlag, 2009. → pages 15, 16

[51] O. Ukimura, I. Gill, M. Desai, A. Steinberg, M. Kilciler, C. Ng, S. Abreu, M. Spaliviero, A. Ramani, J. Kaouk, et al. Real-time transrectal ultrasonography during laparoscopic radical prostatectomy. Journal of Urology, 172(1):112–118, 2004.
→ pages 7, 11, 16, 42, 43, 64

[52] O. Ukimura, C. Magi-Galluzzi, and I. Gill. Real-time transrectal ultrasound guidance during laparoscopic radical prostatectomy: impact on surgical margins. Journal of Urology, 175(4):1304–1310, 2006. → pages 8

[53] H. van der Poel, W. de Blok, A. Bex, W. Meinhardt, and S. Horenblas. Peroperative transrectal ultrasonography-guided bladder neck dissection eases the learning of robot-assisted laparoscopic prostatectomy. British Journal of Urology International, 102(7):849–852, 2008. → pages 9, 16, 42

[54] G. M. Villeirs and G. O. D. Meerleer. Magnetic resonance imaging (MRI) anatomy of the prostate and application of MRI in radiotherapy planning. European Journal of Radiology, 63(3):361–368, 2007. → pages 6

[55] P. Viola and M. J. Jones. Robust real-time face detection. International Journal of Computer Vision, 57(2):137–154, 2004. → pages 44, 79

[56] W. Schroeder, K. Martin, and B. Lorensen. The Visualization Toolkit: An Object-Oriented Approach to 3D Graphics, 2003. → pages 56

[57] S. Xu, J. Kruecker, B. Turkbey, N. Glossop, A. Singh, P. Choyke, P. Pinto, and B. Wood. Real-time MRI-TRUS fusion for guidance of targeted prostate biopsies. Computer Aided Surgery, 13(5):255–264, 2008. → pages 6

[58] X. Xuan and Q. Liao. Statistical structure analysis in MRI brain tumor segmentation. In Proceedings of the 4th International Conference on Image and Graphics, pages 421–426, 2007. → pages 79

[59] M. C. Yip, T. K. Adebar, R. N. Rohling, S. E. Salcudean, and C. Y. Nguan. 3D ultrasound to stereoscopic camera registration through an air-tissue boundary. In Proceedings of the 13th International Conference on Medical Image Computing and Computer-Assisted Intervention, volume 6362, pages 626–634, 2010. → pages iv, 42, 43

[60] X. Yuan and P. Shi. Microcalcification detection based on localized texture comparison. In Proceedings of the International Conference on Image Processing, pages 2953–2956, 2004. → pages 79

[61] R. Zahiri-Azar and S. Salcudean.
Motion estimation in ultrasound images using time domain cross correlation with prior estimates. IEEE Transactions on Biomedical Engineering, 53(10):1990–2000, 2006. → pages 49, 51

[62] A. Zisman, S. Strauss, Y. Siegel, H. Manor, and A. Lindner. Transrectal ultrasonographically assisted radical retropubic prostatectomy. Journal of Ultrasound in Medicine, 16(12):777, 1997. → pages 7

Appendix A: Automatic Detection of Surface Fiducials using Boosting

A.1 Introduction

The new registration method described in Chapters 2 and 3 requires localizing the same fiducials in the 3DUS frame and the camera frame in order to solve for the homogeneous transformation that relates the two frames. In the tests described in those chapters, the fiducials were manually localized in the 3DUS volumes by scrolling through 2D slices of the volumes and selecting the center or tip of each fiducial. While the overall registration concept was a success, the fiducial selection was labor intensive. Also, the interpretation of the fiducial centers or tips was somewhat subjective, and may have varied between users. An automatic fiducial localization algorithm would remove any possible bias generated by the manual selection in future testing. Automatic fiducial localization would also mean future augmented reality systems based on this registration method would require less user interaction, and would thus interfere less with clinical work flow in the OR.

The problem of automatically localizing fiducials in US can be split into two sub-problems: automatically detecting the presence of fiducials, and automatically locating the center or tip of each detected fiducial in the volume frame. Poon and Rohling [39], in a study on calibration of 3D ultrasound probes (a separate and unrelated type of calibration), used the intensity centroid of an image region around a user-supplied location to semi-automatically detect the center of similar fiducials.
This simple concept solves the second sub-problem, but not the first. A necessary next step for our real-time augmented reality system is a method for automatically detecting the approximate location of fiducials in an ultrasound volume. The precise localization can then be handled by an intensity centroid or other method.

Several key factors are relevant to the selection of a detection algorithm for this problem. The detection algorithm must be applied to 2D images, rather than 3D volumes. Although the ultrasound considered in this study is three-dimensional, the volumes are in fact created by an off-line scan conversion. The ultrasound data is only available in real time as a series of 2D images generated by the 3DUS transducer scanning the body. The appearance of the target objects is fairly consistent. Ultrasound images of fiducials pressed against air-tissue boundaries all contain similar features: strong horizontal lines from the air-tissue boundary, circular areas of high intensity from the fiducials themselves, and comet tails of reverberations extending along the axial direction of the transducer away from each fiducial. The scale of the ultrasound volumes is fixed by the spatial resolution of the ultrasound transducer, so scale invariance is not required. Most surgeries will involve fairly consistent relative orientations of transducer and probe, so rotational invariance is likewise not required. Finally, the detection must be very rapid. In an ideal real-time augmented reality system, the medical image data displayed in the surgeon's stereo view would be updated at a rate close to that of normal video (i.e., thirty frames per second). In our system, because the ultrasound probe and camera will often be held fixed after registration and because small movements can be accounted for using the da Vinci robot kinematics, the time constraint is less strict.
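As a concrete illustration of the intensity-centroid localization step mentioned above, the following sketch refines a rough fiducial location to the intensity-weighted center of a small window. It is written in Python rather than the Matlab used in this work, and the function name and window size are assumptions, not the original implementation:

```python
import numpy as np

def intensity_centroid(image, seed, half_window=10):
    """Estimate a fiducial center as the intensity-weighted centroid
    of a small window around an approximate (row, col) seed location.

    image : 2D array of ultrasound intensities
    seed  : (row, col) approximate fiducial location
    """
    r0 = max(seed[0] - half_window, 0)
    c0 = max(seed[1] - half_window, 0)
    patch = image[r0:r0 + 2 * half_window + 1, c0:c0 + 2 * half_window + 1]
    rows, cols = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    total = patch.sum()
    if total == 0:
        return seed  # flat window: fall back to the seed point
    # weight each pixel coordinate by its intensity
    return (r0 + (rows * patch).sum() / total,
            c0 + (cols * patch).sum() / total)
```

Given a user-supplied seed, this behaves like the semi-automatic refinement of Poon and Rohling; an automatic detector would supply the seed instead.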
Still, a detection algorithm that can scan an entire volume in a few seconds is necessary to avoid disrupting the surgical work flow.

Based on these factors, the most logical approach seems to be a detector based on boosting, similar to the classic Viola-Jones face detector [55]. Detection based on boosting of weak classifiers has been previously applied to widely varied problems in medical image analysis. Example applications include cancer detection in prostate histology specimen images [13], brain tumor detection in MR images [58], and early breast cancer detection in mammograms [60]. In several cases, boosting has been applied to detection or segmentation in ultrasound specifically. Pujol et al. [42] used the Adaboost algorithm to separate plaque from lumen in intravascular ultrasound. Madabhushi et al. [30] used the Adaboost algorithm to distinguish lesions from shadows in ultrasound images of the breast. Both these studies used classifiers based on complex descriptors of image texture. Carneiro et al. [8] used boosting to automatically segment fetal anatomy in ultrasound. Unlike all these previous studies, which aimed to detect or segment particular portions of the anatomy, our objective is to detect features from an inanimate and unvarying tool imaged through anatomy, with relatively small and non-textural features. The goal of this study was to apply open-source Matlab code implementing such a detector to determine whether the development of a real-time system is worthwhile.

A.2 Detector Concept

The test program detects fiducials in a 3DUS volume in two main steps. First, 2D slices of the volumes are scanned in sequence using a simple object detector with boosting. Second, nearby detections across slices are grouped, with the assumption being that the grouped detections represent different cross-sections of the same fiducial.
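The slice-scanning half of this two-step flow can be sketched as follows (a Python illustration rather than the original Matlab; the 2D detector is abstracted as a callable, and the step sizes are the seven- and three-slice spacings used in this study):

```python
import numpy as np

def scan_volume(volume, detect_2d, step_far=7, step_near=3):
    """Run a 2D fiducial detector over transverse slices of a 3D volume.

    volume    : 3D array indexed as [slice, row, col]
    detect_2d : callable mapping one 2D slice to a list of (row, col) hits
    Steps quickly (step_far) through empty regions and slowly (step_near)
    once detections are found, since fiducial cross-sections cluster
    across neighboring slices.
    """
    hits = []  # (slice_index, row, col) for every 2D detection
    i = 0
    while i < volume.shape[0]:
        slice_hits = detect_2d(volume[i])
        hits.extend((i, r, c) for r, c in slice_hits)
        i += step_near if slice_hits else step_far
    return np.array(hits)
```

The resulting (slice, row, col) detections are then grouped into three clusters by the k-means step, which is the second half of the pipeline.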
A.2.1 Detection in 2D

The 2D fiducial detector was adapted from demonstration code for a simple object detector using boosting by Li, Fergus and Torralba, available online [14]. This demonstration code uses Gentleboost, a variant of the Adaboost algorithm [16], to train a strong classifier. The code was designed to work exclusively with the LabelMe database created by MIT [3], so some adaptation was necessary.

The detection code does not use simple rectangular differences as its features in the manner of the Viola-Jones face detector. Instead, cross-correlation scores with a dictionary of patches are used. The patches are of random size and are sampled at random from within a preliminary set of manually segmented target objects. Each patch is filtered in four different ways: no filtering, an x-derivative filter, a y-derivative filter, and a Laplacian filter. Using patch correlations as features rather than rectangular difference features, which can be calculated using integral images, increases the computation time beyond the realm of real-time applications. There is still a great deal of similarity between the methods, so applying the online code seemed a worthwhile first exercise.

A.2.2 Detection in 3D

As previously mentioned, the 2D object detector is applied in sequence to transverse slices of the ultrasound volumes. The volumes used in this study contain approximately 100 slices, with each slice having a resolution of approximately 300 × 200 pixels. To reduce computation time, slices at set intervals are scanned, rather than every slice. Once the scanning is complete, the detections are grouped based on their location in the volume using a standard k-means algorithm.

A.3 Method

A.3.1 Imaging Setup

Ultrasound image data was captured using a Sonix RP ultrasound machine (Ultrasonix Medical Corp., Richmond, Canada) with a mechanical 3D transducer (model 4DC7-3/40).
The mechanical probe sweeps a 2D transducer along a curved path, recording single images at positions along the sweep. The volumes were all scan-converted from the original pre-scan data into Cartesian volumes offline before analysis.

A.3.2 Datasets

Before the system could be applied to 3D data, it was necessary to create annotated 2D image datasets for training and evaluation of the 2D detector. Image datasets were created by scrolling through and annotating ultrasound volumes taken from previous studies. The images were annotated in Matlab by dragging a rectangle around each fiducial and its comet tail. The image itself and the parameters of the bounding rectangle were then saved in a .mat file. Two separate datasets were created. First, a dataset of 120 annotated images was created. Some images in this dataset contained multiple fiducials, but only one was labeled in each case. Using this dataset seemed equivalent to training the detector with flawed ground truth, so a second dataset was created with only 52 annotated images, each with all visible fiducials labeled correctly. (The second dataset was merely the first dataset with all images containing unlabeled fiducials removed.) All experiments were performed using both datasets in order to compare the differences. All 2D images were drawn from five 3DUS volumes that were excluded from all other testing. For unbiased results in 2D, the dictionary images, training images and evaluation images for the 2D experiments were also all mutually exclusive sets. The 3D testing was performed using 22 ultrasound volumes.

A.3.3 Dictionary of Patches

To create the dictionary of features, eight images were chosen at random from the dataset. From each of these eight images, twenty patches were extracted at random from within the segmentation of the fiducials. The four previously discussed filters were applied to each sampled patch, for a total of 640 patches in the dictionary.
The patch sizes were randomly chosen between 9 × 9 and 25 × 25 pixels. Figure A.1 shows the patches extracted from an example image and their locations in the image.

A.3.4 Training the Detector

The detector was trained by applying the Gentleboost algorithm to the filtered patch responses at a series of locations with known positive and negative object presence. The positive response locations were naturally the centers of the user-supplied bounding rectangles in each image. The responses at thirty negative locations were also taken from each training image, with the locations chosen as local maximums of the average correlation score of all 640 patches. Figure A.2 shows the image of average correlation scores, and the negative and positive locations sampled from an example training image. The training continued iteratively until a set number of weak classifiers were found, 120 in this study.

A.3.5 Running the Detector in 2D

Although 120 weak classifiers were trained, only thirty classifiers were used in the 3D testing. This choice was validated by results shown in the next section. For each image, the detector produces a set of bounding boxes and the detection score for each box, which is the sum of all the weak classifier outputs. Correctness of each detection is determined by comparing its location to the manually-selected box. By varying a threshold on the detection score from low to high, precision-recall plots for the detectors were created. These plots are also shown in the next section. Precision is defined as the percentage of detections that are correct; recall is defined as the percentage of object instances that are correctly detected.

A.3.6 Grouping Detections

As previously mentioned, the detections across slices are grouped using a standard k-means method. Since the registration tool consists of three fiducials, k is set to three in all cases.
To increase the likelihood of the k-means algorithm finding its global minimum, the grouping is repeated fifty times with different random seeding, and the overall best grouping is taken. The computation time for this grouping, even with repetitions, is negligible compared to the time spent running the detector.

To decrease detector computation time, not all slices of the volume are scanned. The spacing between slices is variable to take advantage of the fact that detections are likely to be close together. If no fiducials are detected in the current slice, the program moves forward a large number of slices. If fiducials are detected in the current slice, the program moves forward only a few slices. Large and small slice spacings of seven slices and three slices respectively were used in this study. Overall processing time using this method was between thirty seconds and 120 seconds per volume.

The true locations used for evaluating the 3D detections were generated by scrolling through each test volume and manually locating the three fiducial centers. In testing the 3D detector, a detection is marked as correct if exactly one true fiducial is within the bounding volume of the detection. A detection score threshold of five was used in an attempt to remove false positives. Because the k-means algorithm always produces three detections and there are always three fiducials in the volume, precision and recall for the 3D testing are always equal.

A.4 Results

A.4.1 2D Results

Figure A.3 shows the outputs from the 2D detector for two example cases. These two cases illustrate the ambiguity that stems from not labelling all object instances. Figure A.4 shows the precision-recall plots generated by the 2D detector using both the 120-image and 52-image datasets with a variable number of weak classifiers. In both cases, only the smallest number of weak classifiers (10) showed any obvious decrease in performance, so a moderate 30 weak classifiers were used for all 3D testing.
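The threshold sweep used to trace the precision-recall plots can be sketched as follows (an illustrative Python fragment, not the original Matlab; the function and argument names are assumptions):

```python
import numpy as np

def precision_recall_sweep(scores, is_correct, n_objects, thresholds):
    """Sweep a threshold over detection scores to trace precision/recall.

    scores     : score of each detection (sum of weak-classifier outputs)
    is_correct : boolean flag per detection (matches a labeled fiducial)
    n_objects  : total number of labeled object instances
    Returns a list of (precision, recall) pairs, one per threshold.
    """
    scores = np.asarray(scores)
    is_correct = np.asarray(is_correct)
    curve = []
    for t in thresholds:
        kept = scores >= t          # detections surviving this threshold
        tp = np.count_nonzero(is_correct & kept)
        n_det = np.count_nonzero(kept)
        precision = tp / n_det if n_det else 1.0
        recall = tp / n_objects
        curve.append((precision, recall))
    return curve
```

Raising the threshold trades recall for precision, which is exactly the trade-off the plots in Figure A.4 visualize.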
A.4.2 3D Results

Figure A.5 shows example detections and grouped outputs from the 3D detector compared to the user-selected fiducial points. These results used large and small slice spacings of 7 slices and 3 slices respectively, and a detection threshold of 5. The figure shows a large number of false positives and several cases where a fiducial was not detected at all. Figure A.6 shows the percentage of fiducials detected correctly using detectors from each dataset. The 3D detector based on the 52-image dataset never found all three fiducials correctly, whereas the detector based on the 120-image dataset found all three a significant portion of the time. In about half the volumes, both detectors produced less than three detections and thus returned no 3D locations (to avoid an error from the k-means algorithm).

A.5 Discussion

It is interesting to contrast the results of the detectors created from the two image datasets. In the 2D detection, the precision-recall plots in Figure A.4 show higher precision from the 52-image dataset, the set with complete labeling. However, because many of the detections produced with the 120-image dataset would have been incorrectly marked as false positives (because the fiducials were not labeled), it is not clear whether there is actually much difference in the behavior of the detectors themselves. In the 3D detection, the detector based on the 120-image dataset performed better than the detector based on the 52-image dataset. The 52-image detector never found all three fiducials correctly, whereas the 120-image detector did this in fifteen percent of the volumes. Qualitatively, the 52-image detector seemed to produce many more false positives than the 120-image detector, while the 120-image detector had more missed detections. Unfortunately, because the k-means algorithm uses three means in every case and assumes equal weighting for all inputs, the 3D grouping is very sensitive to both missed detections and false positives.
In the case of the completely labeled 52-image set, there was a systemic flaw in the objects selected for training. Because of the shape of the tool holding the fiducials, the fiducials generally appear in a similar pattern in the volumes. Two fiducials appear together within a few slices, and the third appears tens of slices later. The adapted training code was not able to handle more than one object instance per training image, so all images with two fiducials were excluded from the 52-image dataset (this accounts for the reduction from 120 to 52 images). In our volumes, one of the two nearby neighbor fiducials is often softer than the other two, with less reverberation due to its position relative to the probe. This different fiducial was excluded from the 52-image dataset.

It is worth noting that all the ultrasound images used in this study were taken through PVC phantoms, which are generally favorable conditions compared to operation in vivo. Although cellulose was added to the PVC to generate realistic scatter, actual tissue will contain more irregular image features that happen to be similar to the fiducials, likely increasing the number of false positive detections. Segmenting the fiducials in the 2D training sets with more detail than a simple rectangle might produce better results. As seen in Figure A.1, many of the patches were extracted from background regions that just happened to be within the rectangle, rather than from the actual fiducial.

A.6 Conclusions and Recommendations

Based on the results of the 2D detector testing, a simple object detector using boosting shows promise as a method for automatically detecting surface fiducials. There are obviously some issues that still need to be addressed.

The most important improvement must be in processing time. Depending on slice spacing and number of weak classifiers used, the Matlab programs used in this study could scan a typical 3D volume in as little as 30 seconds.
The average processing time was significantly higher. A runtime of a few seconds at maximum is necessary for our augmented reality application. This will likely require switching to a feature type that can make use of integral images, and to an implementation outside of Matlab.

The performance of the 2D detector also needs to be improved. It seems very probable that the greatest problem with the 2D detector was insufficient training data. For this study, it was simply too time consuming to annotate more images. Training images should be drawn from a wider variety of volumes; test volumes were captured across different experiments and may have had different background noise. Looping through several scales, something that was tried incidentally, did not improve results significantly, so changing resolution does not appear to be the issue. Given the consistent appearance of the fiducials, an even simpler 2D detection method such as cross-correlation with a single template might be worth investigating as well.

Using the k-means algorithm to group the detections still seems like a reasonable method, although this was difficult to confirm in this study due to the poor detector performance. The k-means will produce very inaccurate groupings if there are many false positive detections or if a fiducial is completely missed, both of which occurred frequently in this study. Including the detection scores as weightings in the k-means calculation, and incorporating knowledge of the relative fiducial locations based on the constant geometry of the fiducial tool, both seem logical. Reducing the spacing between slices would also likely increase the performance of the 3D detector, but without reducing the 2D detector runtime it would also make the overall system even slower.
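The score-weighted grouping suggested above could look something like the following sketch: a plain Lloyd's iteration with the fifty random restarts used in this study, and detection scores as sample weights. This is a Python/NumPy illustration of the proposed extension, not the Matlab implementation actually used:

```python
import numpy as np

def weighted_kmeans(points, weights, k=3, n_restarts=50, n_iter=20, seed=0):
    """Score-weighted k-means: detections with higher classifier scores
    pull the cluster centers more strongly.

    points  : (N, 3) detection locations in volume coordinates
    weights : (N,) detection scores used as sample weights
    Returns the (k, 3) centers from the best of n_restarts random starts.
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    w = np.asarray(weights, float)
    best, best_cost = None, np.inf
    for _ in range(n_restarts):
        # initialize centers on k distinct detections
        centers = pts[rng.choice(len(pts), k, replace=False)]
        for _ in range(n_iter):
            # assign each detection to its nearest center
            d = np.linalg.norm(pts[:, None] - centers[None], axis=2)
            label = d.argmin(axis=1)
            # move each center to the weighted mean of its detections
            for j in range(k):
                m = label == j
                if m.any():
                    centers[j] = np.average(pts[m], axis=0, weights=w[m])
        # weighted cost of the last assignment (exact once converged)
        cost = (w * d[np.arange(len(pts)), label] ** 2).sum()
        if cost < best_cost:
            best, best_cost = centers.copy(), cost
    return best
```

Low-score false positives then perturb the three cluster centers far less than they would under the equal-weight grouping used in this study.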
Figure A.1: Image patches taken from one example image before filtering, and locations of patches in the example image (green) within the user-supplied bounding box with center (red).

Figure A.2: Mean correlation score with all 640 filtered patches. Thirty local maximums of this mean score (green squares) were taken as negative locations for classifier training. The center of the user-supplied rectangle (red diamond) was taken as a positive location for classifier training.

Figure A.3: Detector outputs for three example images showing user segmentation, classifier output, thresholded output, and final detector output. In Case A, two fiducials were correctly detected, although only one fiducial was labeled by the user. In Case B, only the fiducial labeled by the user was detected.

Figure A.4: Precision-recall plots with variable number of weak classifiers for the 2D detector based on (A) the 120-image dataset with some objects unlabeled and (B) the 52-image dataset with all objects labeled.

Figure A.5: Example outputs from the 3D detector in volume coordinates showing (A) one correct detection, (B) two correct detections, and (C) three correct detections.

Figure A.6: Frequency of each possible precision/accuracy value over 22 test volumes containing three fiducials each.

Appendix B: Patient Trial Protocol Outline

1. Initialize the VibroApp and RFImaging software.

2. After the patient has been sedated and secured in a Trendelenburg position for RALRP, install the TRUS system at the foot of the OR table using the CIVCO stabilizer arm and the bed attachment clamps.

3.
Prepare the TRUS probe for insertion into the patient's rectum: apply ultrasound gel and a probe cover from sterile packaging to the probe. Manually step the probe cradle to the distal end of its travel before inserting it into the patient. In this manner it will be certain that system errors or accidental commands will not move the probe to an unsafe depth in the patient's rectum.

4. Allow the surgeon to insert the probe into the patient's rectum using the gross positioning clamp on the CIVCO arm. The array should be positioned axially so that the parasagittal array images as much of the prostate as possible.

5. Capture two to three sweeps of RF and B-mode data with position information by rotating the parasagittal imaging plane as in the previous testing protocol.

6. Capture Doppler images of the prostate's lateral aspects to evaluate localization of the NVB based on Doppler signals in the vasculature.

7. The surgical team should begin the RALRP procedure as usual at this point. Insert the da Vinci trocars under vision, dock the da Vinci robot, and begin initial dissection.

8. Once the anterior aspect of the prostate has been identified, register the TRUS robot to the da Vinci kinematic frame using the air-tissue boundary method: place a da Vinci tool tip against the air-tissue boundary. Use the 3D mouse to rotate the parasagittal TRUS plane until the tip of the tool can be visualized in the parasagittal image. Adjust the imaging focus depth as necessary. Initialize the TransformationFinder software, and select the tool tip in the US image. Repeat this procedure two more times, and export the transformation between the TRUS robot and the da Vinci API frames.

9. After registration, the surgical team should continue the steps of the procedure, while making use of real-time 2D B-mode imaging guidance, either with manual repositioning of the imaging planes or automatic tool tracking at the surgeon's preference.
The 3D mouse should be placed on or near the da Vinci surgeon console so the operating surgeon can control the position of the arrays. Real-time B-mode images should be examined before dissection of the prostate base and apex, and separation of the NVB.

10. Once the prostate has been fully mobilized and placed in a specimen bag, the TRUS transducer should be fully retracted using the remote manual control, in order to avoid negatively affecting the anastomosis.

11. After the completion of the procedure, remove the probe, stepper and stabilizer arm. Store the probe in a specimen bag until it can be sterilized at the BC Cancer Agency.

12. Pathological outcome, procedure time, and continence and sexual function at follow-up should all be measured and compared to control series to evaluate the impact of the TRUS guidance system.

13. Surgeon satisfaction with the system should be measured using a survey after the operation. Confidence in the identification of the correct plane before dissecting the prostate-bladder plane should be assessed.

14. Rectal wall injury and any blood transfusion should be monitored as adverse effects of the system.
