UBC Theses and Dissertations
Intra-operative 'pick-up' ultrasound for guidance and registration to pre-operative imaging. Schneider, Caitlin. 2011.

Intra-operative 'Pick-up' Ultrasound for Guidance and Registration to Pre-operative Imaging

by Caitlin Schneider

Biomedical Engineering, Johns Hopkins University, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in THE FACULTY OF GRADUATE STUDIES (Electrical and Computer Engineering)

The University of British Columbia (Vancouver)

August 2011

© Caitlin Schneider, 2011

Abstract

The integration of ultrasound into robotic laparoscopic surgery can provide the surgeon with navigational guidance that could decrease operating times and increase surgeon confidence during complicated procedures such as partial nephrectomy. Computed Tomography (CT) scans are taken for diagnosis before surgery and offer a wide field of view and high resolution, but they do not provide real-time information during surgery. Ultrasound is an inexpensive, portable, non-invasive imaging modality that can provide the surgeon with real-time information. With accurate registration between CT and ultrasound, the surgeon can be given a wide field-of-view picture of the patient's underlying anatomy that cannot be seen through the laparoscope. Organ motion within the abdomen between the time of diagnostic scanning and intra-operative imaging can affect the accuracy of image registration. CT scans of patients' kidneys in both the supine (diagnostic) and flank (surgical) positions were registered to determine the extent of kidney motion. The center of mass was observed to move between 10 and 75 mm, leading to the recommendation that diagnostic CT scans be performed with the patient in the anticipated surgical position when image registration will be performed. This thesis presents the design of a new intra-abdominal ultrasound transducer, which can be controlled directly by the operating surgeon throughout the procedure.
Initial use of the transducer is targeted at robotic laparoscopic surgery, where the operating surgeon must otherwise rely on a patient-side assistant. Multiple tracking methods have been integrated into the transducer to allow 3D ultrasound volumes to be constructed from sets of 2D slices; these methods include tracking with electromagnetic sensors, optical markers and robotic kinematics.

The vessels of the kidney serve as important landmarks during the surgical procedure and can also be used as features for CT-to-ultrasound registration. A registration method using automatic ultrasound vessel segmentation is proposed and tested in a phantom and in a human model. The root mean square registration error in the phantom was 3.2 mm, which is comparable to other reported registration errors, while the error in the human model was 7.5 mm.

Preface

Material from Chapters 4 and 5 was accepted for publication at the conference on Information Processing in Computer-Assisted Interventions (IPCAI), under the title 'Intra-operative "Pick-up" Ultrasound for Robot Assisted Surgery with Vessel Extraction and Registration: A Feasibility Study'. The proceedings of this conference were published by Springer in the Lecture Notes in Computer Science series (LNCS 6689). The work was co-authored by Julian Guerrero, Christopher Nguan, Robert Rohling and Septimiu Salcudean [1]. The code used for vessel segmentation was originally written by Julian Guerrero using algorithms that he co-developed with Septimiu Salcudean; the algorithms and code were modified by the author for the purposes of these experiments. Christopher Nguan assisted in acquiring intra-operative laparoscopic ultrasound images and provided clinical guidance for the transducer design and prototype testing. Robert Rohling and Septimiu Salcudean played a pivotal role in contributing to the overall design of the transducer as well as revisions to the manuscript.
The author assisted with coordination of patient studies and collection of CT data, and performed all experiments, ultrasound scans and data analysis for the studies described in this thesis. Ultrasound transducer design was facilitated by Robert Rohling, Septimiu Salcudean and Ramin Sahebjavaher; the author contributed to the design, finalized it, coordinated with the transducer manufacturer and tested the ultrasound transducer. CT scans for the study of kidney motion and intra-operative laparoscopic ultrasound scans were collected as part of the study 'Real-time Image Guidance for Robot-Assisted Laparoscopic Partial Nephrectomy'. This study was approved by the University of British Columbia (UBC) Research Ethics Board; the UBC CREB number of the study is H08-02798. Christopher Nguan is the principal investigator for this study and co-investigators include Septimiu Salcudean, David Lowe and Robert Rohling. Steven Tang assisted with patient recruitment and coordination.

[1] C. Schneider, J. Guerrero, C. Nguan, R. Rohling, and S. Salcudean. Intra-operative pick-up ultrasound for robot assisted surgery with vessel extraction and registration: A feasibility study. Information Processing in Computer-Assisted Interventions, pages 122-132, 2011.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgments
1 Introduction
  1.1 Thesis Objectives
  1.2 Thesis Outline
2 Background
  2.1 Intra-operative Ultrasound
  2.2 Image Guidance and Augmented Reality
    2.2.1 Tracking Methods
  2.3 Robotic Surgery with the da Vinci Surgical System
  2.4 Partial Nephrectomy
3 Motion of the Kidney Between Pre-operative and Intra-operative Positioning
  3.1 Introduction
  3.2 Methods
    3.2.1 Patient Overview
    3.2.2 Image Acquisition and Processing
  3.3 Results
  3.4 Discussion and Conclusion
4 Ultrasound Transducer Design and Characteristics
  4.1 Introduction
  4.2 Design
    4.2.1 Requirements
    4.2.2 Construction and Testing
  4.3 Tracking Methods
    4.3.1 Electromagnetic Tracking
    4.3.2 da Vinci Joint Kinematics
    4.3.3 Optical Tracking
  4.4 Conclusions
5 Ultrasound to Computed Tomography Registration using Vessel Extraction
  5.1 Introduction
  5.2 Registration Method
  5.3 Experimental Design
  5.4 Registration Results
  5.5 Conclusions
6 Conclusions
  6.1 Thesis Contributions
  6.2 Future Work
Bibliography

List of Tables

Table 2.1 Summary of the advantages and disadvantages of tracking methods.
Table 3.1 Summary of subjects, image modalities and positions used in the study of kidney motion.
Table 3.2 Average centroid translation for right and left kidneys in each flank position.
Table 4.1 The angles at which the ARToolKitPlus markers can be detected. Two markers are placed on each face of the transducer. Yaw is measured about the X axis and pitch about the Y axis of the marker. All measurements are in degrees.
Table 4.2 The sizes of markers that can be detected by the algorithm, as a function of the percentage of the image that they cover. Consistent: both markers are detected. Variable: both markers are detected about half of the time. Inconsistent: detection is unreliable.
Table 5.1 The fiducial localization error for each fiducial type used during the vessel registration experiments.

List of Figures

Figure 2.1 The da Vinci Surgical System. Image courtesy of Intuitive Surgical Inc.
Figure 2.2 One solution to image guidance while using the da Vinci surgical robot, found at Blank Children's Hospital in Des Moines, Iowa.
Figure 2.3 Example CT of a partial nephrectomy patient. The patient's kidneys are circled in green. A small tumour is located on the lower pole of the right kidney, circled in red.
Figure 2.4 A laparoscopic ultrasound transducer used during surgery. Top: view of the surgical field; the kidney is not easily visible. Bottom: ultrasound image of the kidney.
Figure 2.5 A laparoscopic ultrasound transducer used during surgery. Top: view of the kidney and superficial tumour (circled in green). Bottom: ultrasound image of the kidney; the tumour is visible at the top of the image (circled in green). A CT image of this patient is shown in Figure 2.3.
Figure 3.1 Subject 1 in the supine (left), prone (center) and left flank (right) positions. Slices were taken from the same general area of the abdomen and have been scaled to fit.
Figure 3.2 Kidney surfaces after spine registration and before organ registration. Subject in the flank (green) and supine (blue) positions.
Figure 3.3 Surface motion of the kidney due to the change from flank to supine positions for Subject 1, in the anterior (top) and posterior (bottom) views.
Figure 3.4 Each graph represents the results for one subject, where each kidney was registered individually. Top left: the maximum rotational component.
Top right: the translational distance of the kidney centroid. Bottom: the average motion of the surface points on the kidney model.
Figure 3.5 Iterative Closest Point (ICP) error due to the change from flank to supine positions for Subject 1, in the anterior (top) and posterior (bottom) views.
Figure 3.6 Volume change of the kidney as a percentage for each subject.
Figure 4.1 Left: Aloka UST-5536-7.5 multi-frequency (5-10 MHz) flexible laparoscopic transducer. Right: BK Medical 8666 5-10 MHz flexible laparoscopic transducer.
Figure 4.2 The ProGrasp instrument. Image courtesy of Intuitive Surgical Inc.
Figure 4.3 Rendered images of the Lap-handle.
Figure 4.4 Rendered images of the 'pick-up' transducer. Left: cross-sectional view (blue) of the Lap-handle, a steel section added to the transducer; the angled faces and locking pins can be seen. Right: the tool fits tightly against the angled faces; the practicality of adding visual tracking markers is demonstrated.
Figure 4.5 Final specifications of the transducer design. The Lap-handle is not shown.
Figure 4.6 Photograph of the final transducer prototype.
Figure 4.7 Element response time, sensitivity and frequency response as provided by the transducer manufacturer.
Figure 4.8 Example B-mode image of the carotid artery.
Figure 4.9 Example color Doppler image of the carotid artery.
Figure 4.10 The da Vinci robot can grasp the transducer in a stable and repeatable manner. Markers for camera tracking are placed on the transducer faces.
Figure 4.11 Phantom reconstruction using tracking based on da Vinci tool positions. Left: cross-sectional view of the ultrasound vessel phantom. Right: the 3D reconstruction of the phantom vessel bifurcation.
Figure 4.12 Examples of common optical markers.
Figure 4.13 Four ARToolKitPlus markers were placed on the transducer. They are named to discriminate them from each other and from other markers. 'Ninja' and 'Bicep' were placed on one face of the transducer while 'Jack' and 'Sword' were placed on the other.
Figure 4.14 The view angles for reliable marker tracking. Green sections represent the angles at which both markers are tracked and red sections the angles at which one marker can be tracked. Top: viewing angles for rotation around the Y axis of the marker. Bottom: viewing angles for rotation around the X axis of the marker.
Figure 5.1 The SonixRP machine used for this experiment.
Figure 5.2 Overall flow of the registration method. The top represents the steps using the ultrasound images and the bottom the steps using the CT images. These inputs were registered using an ICP algorithm.
Figure 5.3 The ultrasound flow phantom used for these studies. Top left: power Doppler. Top middle: B-mode images. Top right: CT imaging. Bottom: photograph of the phantom.
Figure 5.4 Example images of the phantom in CT (left) and ultrasound (right).
Figure 5.5 Left: example ultrasound image of the leg. Right: schematic drawing of the ultrasound scan area.
Figure 5.6 Example intra-operative power Doppler images of the kidney vessels. Left: branching of the renal artery. Middle: renal artery and vein.
Right: internal vessels of the kidney.
Figure 5.7 Example of a completed registration. The blue surface model represents the CT model of the vessel phantom and the red points are the fiducial locations. The series of white points represents the ultrasound contour centroids, paired with the black fiducial points (white points appear light blue when inside the blue surface).
Figure 5.8 Final registration of the anterior and posterior tibial arteries and the popliteal artery. The surface model was created from 3D ultrasound.
Figure 5.9 Distribution of registration errors for a series of 12 CT-to-ultrasound vessel registrations using the vessel phantom.

Glossary

CT Computed Tomography
DOF Degree of Freedom
EM Electromagnetic
ICP Iterative Closest Point
MRI Magnetic Resonance Imaging
OR Operating Room
RFA Radio Frequency Ablation
RMS Root Mean Square

Acknowledgments

First, I would like to thank my supervisors, Rob Rohling and Tim Salcudean. You have been my guides throughout this journey and I thank you for your advice, your patience and your encouragement.

I would like to thank all of my collaborators at Vancouver General Hospital: Chris Nguan, for his clinical guidance in every aspect of this project and for always trying to help with data collection and hospital logistics; Steven Tang, for coordinating with patients and CT bookings for the clinical trial; Vickie Lessoway, for always being there to help us collect ultrasound data, even in the most awkward positions possible; and Tom Shrinkas, for keeping the robots running and hanging out in the OR until the late hours of the night. I would also like to thank Neerav Patel for all his help with the software aspects of this thesis and for his willingness to answer my unending questions.
Thanks also to Ramin Sahebjavaher and Leo Stocco for their help in brainstorming designs for the ultrasound transducer.

A special thanks to my past and current labmates: to Jing Xiang for teaching us how everything works and giving good advice over the years, to Troy Adebar for making us laugh, to John Bartlett for answering my random questions about code and for making Vol2Strad, to Hedy Raffi for always being a sounding board and having chocolate, and to Mike Yip, Raoul Kingma and Jeff Abeysekera for being good friends and making lab life that much more enjoyable. For those of you who are leaving, I wish you the best!

Outside of the lab, thanks to my good friends Hannah Gustafson, Adam and Joanne Noel, Dana Hoffmann and Ron Maharik for many dinner parties and late-night games. I would also like to thank my many friends in the Varsity Outdoor Club for showing me the wonders and adventure of British Columbia.

I would like to thank my parents, my step-mom and my friends from home who, though far away, have supported and encouraged me as I made my way through this first adventure in graduate school. A special thanks goes to my mom for spending many hours reading through this thesis.

Finally, I would like to thank the sources of funding for my project and research: the Natural Sciences and Engineering Research Council of Canada, the C.A. Laszlo Chair, and the Canada Foundation for Innovation. Without their support, I would not have TWO da Vinci surgical robots to play with.

Chapter 1
Introduction

Minimally invasive surgery is slowly becoming the standard of care for many procedures and continues to grow in popularity. Minimally invasive operations require several small incisions, of approximately 2 cm each, through the patient's abdominal wall, instead of the 10-15 cm incision required for traditional open surgery. The surgeon works using long, specialized instruments and views the surgical scene through a laparoscopic camera.
This type of procedure decreases the patient's morbidity and shortens hospital stays [44, 89]. Many abdominal, thoracic and pelvic procedures are now completed as minimally invasive, or laparoscopic, operations.

Although there are benefits to the patient, additional physical and cognitive loads are placed on the surgeon [106]. First, the surgeon must work with instruments that lack the degrees of freedom available in the human hand; the majority of laparoscopic instruments have no flexibility at the end-effector. Additionally, during laparoscopic surgery the surgeon must overcome the 'fulcrum effect' caused by the instrument passing through a narrow opening in the mostly rigid abdominal wall: this constraint makes the tip of the instrument move in the opposite direction to the surgeon's hand. The view of the surgical field also becomes considerably more restricted when moving from open to laparoscopic surgery. The surgeon is no longer able to see the entire surgical field at once, but must rely on the view through the laparoscopic camera. The use of a separate camera makes the surgeon dependent on an assistant for camera control while operating, and because the surgeon now views a single camera image, binocular vision, and with it depth perception, is lost.

The limitations of laparoscopic surgery are especially noticeable in more difficult operations, such as laparoscopic partial nephrectomy [83]. Partial nephrectomy is a treatment for kidney cancer. Kidney cancer refers to any tumour occurring in the parenchyma of the kidney, of which approximately 80% are renal cell carcinomas. It is the sixth most commonly diagnosed malignancy in Canadian men and the tenth most common in Canadian women, with an estimated 4,800 new cases and about 1,650 deaths from the disease in 2010. Since the mid-1980s, death rates have decreased by 0.3% per year for males and by 0.7% per year for females.
Despite the decreasing death rates, the incidence of this cancer is increasing by approximately 1.3% per year (Canadian Cancer Statistics). The current standard of care uses Computed Tomography (CT) for diagnosis and for surgical planning in cases where surgical resection is deemed the best course of action. Because partial nephrectomy allows the cancerous tissue to be removed while preserving as much of the healthy kidney tissue as possible, it is preferred over radical nephrectomy for tumours under 4 cm in size [58]. Laparoscopic partial nephrectomy has also gained popularity, as it is minimally invasive yet has been shown to have outcomes comparable to open procedures [48, 83, 93]. A very technically demanding operation requiring multiple suture sites, partial nephrectomy requires the surgeon to be fast and accurate, all while limiting the time during which blood flow to the kidney is stopped to less than 30 minutes [35]. These challenges, combined with the limitations of laparoscopic surgery, make partial nephrectomy an appropriate use case for improvement through the application of new technology.

The pre-operative CT scan provides a complete 3D anatomical map of the patient, and could help the surgeon during delicate dissections, tumour localization and initial orientation to the patient's anatomy. Image registration and fusion techniques allow both the surgical view and the patient's CT scan to be visualized during surgery, which has the potential to improve the efficiency of surgery and surgical outcomes. The use of intra-operative ultrasound could also be pivotal: as a real-time imaging technique, ultrasound has the potential to be used both for registration to pre-operative data and for direct visualization of critical structures.

Medical robotics was first introduced in 1985, when a Puma 560 industrial robot was used to orient a needle for a brain biopsy [30].
Robotics has tremendous potential to improve the performance of surgeons through increased precision and expanded capabilities. Systems in use today do not replace the surgeon, but augment the surgeon's abilities. This thesis uses the unique capacity of robotics for laparoscopic surgery, coupled with improved imaging, to improve surgical navigation. Through the use of intra-operative ultrasound combined with robotic technology, the challenges of laparoscopic surgery and partial nephrectomy can be addressed. These technologies, as well as a detailed description of the surgery, are described in Chapter 2.

1.1 Thesis Objectives

The overall goal of this thesis is to improve surgical navigation through the development of an intra-operative ultrasound transducer and its integration with robot-assisted laparoscopic partial nephrectomy. The specific objectives are:

1. To determine the difference in kidney position between diagnostic CT imaging and intra-operative imaging. Comparisons between CT scans taken in the supine and flank positions will determine the amount of kidney motion that occurs between patient positions. The results of this study could influence the position in which the diagnostic CT scan is taken and quantify the error anticipated between pre-operative and intra-operative images.

2. To design and build an intra-operative 'pick-up' ultrasound transducer that can be reliably and easily grasped and manipulated by the da Vinci robot.

3. To develop a method of registration between ultrasound and CT based on the vasculature of the kidney. This registration will eventually be used to aid in visualization of the CT scan during the procedure.

1.2 Thesis Outline

The outline of the thesis is as follows. Chapter 2 provides a comprehensive background on the technologies used during this project, including intra-operative ultrasound, image guidance and robot-assisted surgery. A detailed description of partial nephrectomy is also presented.
Chapter 3 describes a study to determine the motion of the kidneys when a patient is moved between the diagnostic and operative positions. Knowledge of this motion is useful for future registration between pre-operative and intra-operative datasets, and a large motion confirms the need for intra-operative imaging to update organ location. Indeed, if organ motion between the pre-operative CT and the surgery were small, the best registration method would probably involve a few landmarks and rigid registration; we demonstrate that this is not the case. The specifications, design and characteristics of a new intra-operative ultrasound transducer are described in Chapter 4. The manufactured transducer is presented, along with the results of different tracking methods and preliminary methods of 3D ultrasound reconstruction. Chapter 5 presents the development of a method for ultrasound-to-CT registration; this method employs the vessel structures, which play a major role in partial nephrectomy. Finally, Chapter 6 describes the conclusions and contributions of this thesis as well as future research directions.

Chapter 2
Background

This chapter presents background on the main components of this thesis, including intra-operative ultrasound, image guidance, robotic surgery and a detailed description of the partial nephrectomy procedure. Past and present knowledge is described, as well as how these subjects pertain to the overall thesis objectives.

2.1 Intra-operative Ultrasound

Ultrasound, compared to other imaging modalities such as CT or Magnetic Resonance Imaging (MRI), is inexpensive, modular and portable. It does not involve ionizing radiation or require special environments to be maintained. Ultrasound can easily be used during surgery and allows real-time imaging of the patient. Although ultrasound images are subject to artefacts, shadowing and user dependence, ultrasound provides high-resolution images of patient anatomy.
Intra-operative ultrasound was first introduced in the 1950s with the use of A-mode ultrasound to examine brain tissue [70]. In 1958, the first laparoscopic ultrasound transducer was used during cholecystectomy, but intra-operative ultrasound did not gain widespread usage until B-mode imaging became available in the 1970s, when interpretation became more straightforward: 2D B-mode images could be visualized as a 'picture' rather than a single echo line. In 1979, the first dedicated electronic laparoscopic transducer was produced by Aloka (Tokyo, Japan). This transducer was designed for liver surgery, which is commonly performed in Japanese hospitals. Laparoscopic ultrasound also gained popularity with the spread of minimally invasive surgery, where ultrasound images attempted to replace the haptic feedback that is lost when using laparoscopic tools [70].

Traditional laparoscopic intra-operative ultrasound is currently used for a variety of procedures, including resection of liver cancer [11], gall bladder removal [27] and resection of kidney cancer [82]. It provides high-quality, real-time imaging for assessing tumour margins, guiding around vessels and locating tumour resection planes. Currently, the main producers of laparoscopic ultrasound technology are Aloka (Tokyo, Japan), B&K Medical (Herlev, Denmark), Esaote (Genova, Italy), Gore (Newark, DE, USA), Hitachi (Tokyo, Japan), Philips/ATL (Amsterdam, The Netherlands) and Toshiba (Tokyo, Japan) [107]. Although end-firing transducers do exist, most modern laparoscopic transducers are side-firing, with a broadband transmit frequency in the range of 7.5 to 10 MHz (see Figure 4.1 for two examples of these transducers). This thesis describes the design of a side-firing intra-operative transducer with ultrasound characteristics similar to these traditional laparoscopic transducers; Chapter 4 describes the transducer in detail.
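The choice of that 7.5-10 MHz range reflects a basic trade-off: axial resolution is bounded below by the acoustic wavelength (λ = c/f), while tissue attenuation limits penetration depth. The rough sketch below illustrates this using textbook approximations that are not values from this thesis: a nominal soft-tissue sound speed of 1540 m/s, an attenuation rule of thumb of about 0.5 dB/cm/MHz, and an illustrative 60 dB usable dynamic range.

```python
# Rough imaging trade-off at laparoscopic ultrasound frequencies.
# Assumed values (textbook approximations, not from this thesis):
C_TISSUE_M_S = 1540.0        # nominal soft-tissue speed of sound
ATTEN_DB_PER_CM_MHZ = 0.5    # one-way attenuation rule of thumb
DYNAMIC_RANGE_DB = 60.0      # illustrative usable dynamic range

def wavelength_mm(freq_mhz: float) -> float:
    """Acoustic wavelength in soft tissue (a lower bound on axial resolution)."""
    return C_TISSUE_M_S / (freq_mhz * 1e6) * 1e3

def max_depth_cm(freq_mhz: float) -> float:
    """Depth at which round-trip attenuation uses up the dynamic range."""
    return DYNAMIC_RANGE_DB / (2.0 * ATTEN_DB_PER_CM_MHZ * freq_mhz)

for f in (7.5, 10.0):
    print(f"{f} MHz: wavelength {wavelength_mm(f):.2f} mm, "
          f"max depth ~{max_depth_cm(f):.1f} cm")
```

Higher frequency therefore buys finer resolution at the cost of imaging depth, a trade-off that suits a transducer placed directly on the organ surface, as proposed in this thesis.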
In a recent survey of surgeons [107], 46% said that they used laparoscopic ultrasound, 82% believed that its use will increase in the future and 79% believed that the use of laparoscopic ultrasound combined with navigation will increase. The majority of surgeons who use laparoscopic ultrasound use it for liver procedures and, understandably, many also employ Doppler imaging, mainly to locate the vessels of the liver. From this survey, surgeons seem to believe that ultrasound is a useful tool whose use will expand in the future, but they do not necessarily use it on a regular basis in its current state. Våpenstad et al. note that an important limiting factor to the success of laparoscopic ultrasound is the learning curve associated with using ultrasound as a diagnostic tool. Difficulties mentioned in the study included transducer handling, arising from problems with hand-eye coordination when looking through a laparoscope, the limited field of view of the ultrasound image, and difficulties related to ultrasound image interpretation. A transducer that is easy to use could therefore increase the number of surgeons using laparoscopic ultrasound. It was also noted in the survey that 87% of surgeons manoeuvre the transducer themselves and 92% prefer to use a flexible transducer. The transducer presented in this thesis allows surgeons to manoeuvre the transducer themselves and gives an even greater range of motion than current transducer designs. Combining ultrasound imaging with robotic surgery enables intuitive control of the transducer, eliminating problems arising from hand-eye coordination. In addition, the potential registration with pre-operative imaging and 3D ultrasound volume reconstruction could counter the limitations associated with a small field of view and difficulties with image interpretation.

Doppler ultrasound is a method to visualize and measure flow in ultrasound images.
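The velocity measurement underlying the Doppler modes discussed in the following paragraphs follows the classical Doppler equation, f_d = 2 f0 v cos(θ) / c. A minimal numeric sketch, where the transmit frequency, blood velocity and beam-to-flow angle are illustrative assumptions rather than values from this thesis:

```python
# Classical Doppler equation: f_d = 2 * f0 * v * cos(theta) / c.
# Illustrative assumed values: 5 MHz transmit frequency, 0.5 m/s blood
# velocity, 60-degree beam-to-flow angle, c = 1540 m/s in soft tissue.
import math

C_TISSUE_M_S = 1540.0

def doppler_shift_hz(f0_hz: float, v_m_s: float, angle_deg: float) -> float:
    """Frequency shift of an echo from blood moving at v_m_s."""
    return 2.0 * f0_hz * v_m_s * math.cos(math.radians(angle_deg)) / C_TISSUE_M_S

shift = doppler_shift_hz(5e6, 0.5, 60.0)
print(f"Doppler shift: {shift:.0f} Hz")  # ~1.6 kHz, within the audible range
```

Note the cos(θ) term: a beam perpendicular to the flow (θ = 90°) produces no measurable shift, which is why beam angle matters so much in practice.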
Doppler imaging is used during liver transplant [28], other liver procedures and for liver vessel registration [98]. Like other modes of ultrasound imaging, Doppler uses pulses of sound to create signals and, by measuring the echoes, characterizes the interaction between the pulses and the tissue. In Doppler imaging, the target (the blood cells within a vessel) is moving, and the velocity is measured from the frequency shift of the reflected wave. There are two main types of Doppler ultrasound: continuous wave and pulsed wave. During continuous wave Doppler, the piezoelectric crystal array in the transducer is excited with a continuous sinusoidal wave. In practice, samples are taken along a line through the tissue and information is presented on a time line. The shift in frequency generally falls within the audible range, so flow can be presented to the sonographer in the form of an audible signal. Pulsed wave Doppler works on a similar principle, but several short pulses are sent rather than a continuous wave form. The timing of the echoes then allows the location of the velocity measurements to be known. With both the location and velocity known, the information can be overlaid on to the B-mode image for easy correlation between flow measurements and anatomical structures. This is generally referred to as color Doppler and is computed over a sub-region of the B-mode image due to the increase in time required for image acquisition. Color Doppler information is presented to the sonographer as a color intensity range, for example, from red to blue. In this example red could represent flow away from the transducer and blue, flow towards the transducer. The color hue would then indicate the velocity of the flow in its respective direction. Power Doppler is a variation of color Doppler. In this case the color displayed represents the strength of the Doppler shifts, removing the dependency on direction.
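The velocity estimate described above follows the standard Doppler equation. The sketch below (with hypothetical parameter values, not code from this thesis) shows the relationship between transmit frequency, measured frequency shift, beam-to-flow angle and the estimated blood velocity:

```python
import math

def doppler_velocity(f_transmit_hz, f_shift_hz, angle_deg, c=1540.0):
    """Estimate blood velocity (m/s) from a measured Doppler frequency shift.

    Standard Doppler equation: v = c * df / (2 * f0 * cos(theta)), where c is
    the speed of sound in soft tissue (~1540 m/s) and theta is the angle
    between the ultrasound beam and the flow direction.
    """
    cos_theta = math.cos(math.radians(angle_deg))
    if abs(cos_theta) < 1e-6:
        raise ValueError("beam is perpendicular to flow; velocity is unresolvable")
    return c * f_shift_hz / (2.0 * f_transmit_hz * cos_theta)

# Hypothetical example: a 2.6 kHz shift at a 7.5 MHz transmit frequency,
# with the beam at 60 degrees to the vessel:
v = doppler_velocity(7.5e6, 2.6e3, 60.0)  # roughly 0.53 m/s
```

Note that as the beam approaches 90 degrees to the flow, cos(theta) approaches zero and the velocity estimate becomes unreliable, which is why the beam is angled along the vessel in practice.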
Power Doppler is more sensitive to slow flow and flow in deep or small vessels.

2.2 Image Guidance and Augmented Reality

Over the last decade, the use of image guidance in rigid and soft tissue operations has become more common [13, 37]. Originally used for neurosurgery and orthopaedics, image guidance can improve the localization and visualization of tumours, as well as the avoidance of critical structures. Images and information from several sources (pre-operative CT, MR or ultrasound) can be combined with intra-operative imaging to provide the surgeon with a comprehensive patient model. Registration must be performed to align the pre- and intra-operative images. Some common registration approaches include the use of stereotactic frames, anatomical markers or surface models, as well as image-to-image intensity-based registration using intra-operative imaging systems [2, 5, 69]. Registration aligns the patient's anatomy with pre-operative images and planning data. Patient anatomy and critical structures can then be shown to the surgeon in real-time with respect to the tracked surgical tools being used. At the same time, other pre-operative planning, such as needle trajectories or preferred tumour margins, can be shown [23, 45]. Organ deformation and motion can cause severe challenges when registration is performed between pre-operative and intra-operative images [3, 13]. This aspect, with respect to kidney registration, is discussed in Chapter 3. Owing to the additional challenges associated with performing laparoscopic surgery and the visualization provided by the laparoscopic camera, some groups have targeted laparoscopic surgery for integration with augmented reality [32, 43, 77, 99, 102, 109]. Augmented reality has a long history of use in medicine and a comprehensive review can be found in [97]. Augmented reality refers to the combination of real-world scenes with computer graphics.
In the case of laparoscopic surgery, the scene is the camera view and the computer graphics are often pre- or intra-operative imaging. Several methods for integration of the two images have been proposed, often using 3D visualization methods to counter the loss of depth perception during traditional laparoscopic surgery. Such methods include video see-through head-mounted displays, which provide a 3D scene for the surgeon [43], and alpha compositing for image overlay [6]. The use of a stereo camera during laparoscopic procedures has also been proposed for use with augmented reality [29]. The loss of haptic feedback can be compensated for by using pre-operative and intra-operative imaging to see the anatomical structures below the surface of the organ that is seen through the laparoscope. To provide vision under the surface, pre-operative images or segmented surfaces are overlaid on the surgeon's view of the surgical scene, providing the simulated computer graphics component of augmented reality. Optical tracking of both the patient and the laparoscope [32] can be used to locate critical structures and register the camera view to the patient anatomy. Other approaches to patient-camera registration are to extract a 3D surface model from stereo laparoscope images and match the surface to the pre-operative surface through an iterative closest-point method [102], or to use skin fiducials and an optically tracked pointer for registration to the patient anatomy [71]. There have been many developments in using ultrasound as an intra-operative imaging modality for laparoscopic procedures; the portability and safety of ultrasound have made it a valuable addition to the Operating Room (OR). Augmented reality for ultrasound involves the overlay of the ultrasound image on to the camera frame, which allows the surgeon to view the position and orientation of the ultrasound image in relation to the surrounding anatomy.
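An overlay of this kind reduces to composing rigid transforms from the ultrasound image plane, through the tracked transducer, into the camera frame. A minimal sketch follows; the transform values and function names are hypothetical placeholders, standing in for the tracking and calibration results a real system would supply:

```python
import numpy as np

def transform(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical tracking/calibration results (identity rotations for clarity):
T_cam_marker = transform(np.eye(3), [0.0, 0.0, 100.0])   # marker pose in the camera frame (tracking)
T_marker_image = transform(np.eye(3), [5.0, 0.0, 20.0])  # image origin in the marker frame (calibration)

def pixel_to_camera(u, v, sx, sy):
    """Map an ultrasound pixel (u, v), with pixel spacing (sx, sy) in mm, into the camera frame."""
    p_image = np.array([u * sx, v * sy, 0.0, 1.0])  # the image plane is z = 0 in the image frame
    return (T_cam_marker @ T_marker_image @ p_image)[:3]

p = pixel_to_camera(100, 200, 0.1, 0.1)  # a 3D point in camera coordinates (mm)
```

Once every pixel of the ultrasound image can be mapped into the camera frame this way, the image can be projected into the laparoscopic video for the overlay described above.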
In order to implement augmented reality for ultrasound, the position of the ultrasound image must be known in the camera frame. This requires that the transducer be tracked with respect to the camera, typically by using Electromagnetic (EM) sensors and/or optical tracking [39, 40, 61, 77]. In some cases a mechanical 3D ultrasound transducer with optical tracking is used [64]. Unfortunately, optical tracking methods require that the transducer or markers be seen by the camera(s). A different method of integrating augmented reality was developed in which a full 3D representation of the ultrasound transducer was created, such that the wrist angle and the angle of the ultrasound image could be clearly visualized by the surgeon [94]. This system took advantage of a robotic system and tracked the orientation of the transducer through the forward kinematics of the robotic arm. The ultrasound image was also shown on a separate screen so details of the anatomy could be discriminated.

2.2.1 Tracking Methods

The accurate relation of images and anatomy is a pivotal concern for augmented reality. This accuracy is achieved through image registration and tracking of the different surgical components. The main types of tracking use external cameras, the laparoscopic camera and electromagnetic fields. A summary of the advantages and disadvantages of these methods is presented in Table 2.1.
Table 2.1: Summary of the advantages and disadvantages of tracking methods

Tracking method               | Advantage                                               | Disadvantage
Electromagnetic               | No direct line of sight or physical connection required | Distortion caused by metal and stray magnetic fields
Optical (External Camera)     | High accuracy and repeatability                         | Line of sight required, large equipment
Optical (Laparoscopic Camera) | Tracked with respect to surgeon view                    | Line of sight required
Robotic Kinematics            | High accuracy and repeatability                         | Physical connection between robot and tool required

Tracking systems used in the OR to track surgical tools, or other instruments, are generally one of two types: optical tracking or EM tracking. Optical trackers rely on a line of sight between the cameras and the targets. External cameras can be used, and the targets can be specific patterns, such as those used by the Micron Tracker (Claron, Toronto, Canada), or reflective balls or active light emitting diodes (LEDs), such as those used by the Polaris (Northern Digital Inc., Waterloo, Ontario) and OptiTrack (NaturalPoint, Corvallis, Oregon). The laparoscopic camera can also be used as the tracking system. In this case, the target is tracked in the camera frame directly. The other main tracking system is the EM tracker, which uses a magnetic field generator to track small sensors. Commonly available tracking systems include the Aurora (Northern Digital Inc., Waterloo, Ontario), 3D Guidance (Ascension Tech. Corp., Burlington, Vermont) and the 3Space Fastrak (Polhemus Inc., Colchester, Vermont). These trackers can be used to track laparoscopic tools where a line of sight is not possible, and the sensors can be embedded into small instruments, such as needle tips [62]. EM sensors range from about one millimetre to a centimetre in size, can be unobtrusive, and do not require additional large pieces of equipment in the OR. The only addition to the OR is the relatively small field transmitter, either a flat panel or a cube measuring about 15 cm on a side.
One drawback of EM tracking is distortion in the reported sensor positions caused by nearby metallic objects; the sensors also have a limited range of use, typically from 20 cm to one metre. Accuracies in the range of 1% of the dimension of interest have been reported [52], and EM tracking has been evaluated for obtrusiveness, robustness, accuracy and working volume in clinical environments such as the interventional radiology suite, a CT suite and a pulmonology suite [110, 111]. Examples of tracking systems using external cameras that have been used for augmented reality in a surgical environment include the Polaris (Northern Digital Inc.) [61, 64, 77] and the ARTrack camera system [39, 40]. Augmented reality systems often combine EM and optical tracking to enable tracking of both the laparoscope and a flexible ultrasound transducer. A combined system also allows for the calibration of any distortion in the magnetic field [39, 40, 61], which would otherwise cause errors in the EM sensor position and orientation. Markers attached to the transducer or transducer shaft can be used to track the transducer in the laparoscopic camera view [66], but must be carefully designed and may not always be visible in the field of view of the camera. A further discussion of optical markers and their use with respect to ultrasound transducer tracking can be found in Section 4.3.3. Robotic systems can be used as tracking systems since the joint angles of the arms are often known through potentiometers or encoders. Forward kinematics can then be used to calculate the tool tip positions and orientations. The da Vinci robot (Intuitive Surgical Inc., Sunnyvale, California), described in detail in Section 2.3, has been used to track a robotically controlled ultrasound transducer [94]. In a test of the da Vinci robot's accuracy, it was found that the tool tip could be localized to within 1 mm [63].
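Forward kinematics of this kind can be illustrated with a planar two-link arm. This is a simplified sketch, not the da Vinci's actual kinematic chain; a real robot composes one 4x4 homogeneous transform per joint in the same spirit:

```python
import math

def planar_tool_tip(theta1, theta2, l1, l2):
    """Tool-tip position of a planar two-link arm from its two joint angles.

    Forward kinematics: the tip is the sum of the link vectors, each rotated
    by the accumulated joint angles read from the encoders.
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Hypothetical link lengths (0.3 m and 0.2 m) and encoder readings:
x, y = planar_tool_tip(math.radians(30), math.radians(45), 0.3, 0.2)
```

Because the tip pose is computed directly from encoder readings, no line of sight is needed, which is the advantage listed for robotic kinematics in Table 2.1.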
Another robot used in practice is the Neuromate robot (Integrated Surgical Systems, Davis, California), a system used during stereotactic procedures in neurosurgery. With this robot in a frame-based configuration, the application error was reported as 0.86 ± 0.32 mm [67]. ROBODOC (now owned by Curexo Technology Corporation) is a robotic system designed to address potential errors during cementless total hip replacement by accurately milling out the femur. Using a five-axis arm with a high speed drill as an end-effector, the robot was able to accurately track the drill using forward kinematics and mill out the space in the femur according to a pre-surgical plan [12].

2.3 Robotic Surgery with the da Vinci Surgical System

The introduction of robotic systems such as the da Vinci Surgical System attempts to mitigate the shortcomings of laparoscopic surgery by replicating an environment more similar to that of traditional open surgery. The system consists of three sections: the surgeon's console, the patient side cart and the vision cart [Figure 2.1]. The surgeon is able to sit comfortably and ergonomically at the console (a possible improvement over traditional open surgery). The instruments used during robotic surgery are wristed at the end-effector, allowing the surgeon to regain natural control of the degrees of freedom of the tools. The orientation of the surgeon's hands on the 'master manipulators' is constrained to match the orientation of the end-effector as seen by the surgeon through the cameras. The surgeon uses the 'master manipulators' in the surgeon's console to control all of the Degrees of Freedom (DOF) available to the instrument, generally six or seven. Six-DOF instruments have three translational and three rotational degrees of freedom, while in seven-DOF instruments the surgeon also controls the grasping motion of the instrument. The camera arrangement is also different from laparoscopic procedures.
The da Vinci laparoscope has two camera channels that run along its length. The video feeds are kept separate until viewed by the surgeon through the console. This camera system is stereoscopic, simulating binocular vision, and allows depth information to be provided to the surgeon. This, along with the degrees of freedom of the instruments, makes manipulation of tissue, knot tying, needle passing, and discrimination of layers more effective [24, 51, 114]. There have been numerous studies comparing the efficiency and efficacy of the da Vinci robot to traditional approaches of open or laparoscopic surgery [24, 51, 112, 114]. The results of these papers generally show that the da Vinci is as effective as other methods, and patient outcomes are similar.

Figure 2.1: The da Vinci Surgical System. Image courtesy of Intuitive Surgical Inc.

In particular, the learning curve when operating with the robot is much more favourable for young surgeons than that for laparoscopic surgery [34, 51, 114]. The robot is pushing the boundaries in advanced laparoscopic procedures [7]. The robot has proven effective in a wide variety of surgery types, including prostatectomy [17], pyeloplasty [112], hysterectomy [38], and partial nephrectomy [9, 14, 58, 90]. The benefit of better visualization of tissue and the additional degrees of freedom of its instruments is realized in these more complex laparoscopic procedures. Although the da Vinci robot offers advantages over traditional laparoscopic surgery, some studies have found that experienced laparoscopic surgeons have not seen a significant benefit [68]. On the other hand, the da Vinci robot is a good platform for the integration of additional aids and is not currently being used to its full potential. With additional navigational aids, such as intra-operative imaging and patient-registered surgical planning, we believe that surgery with the da Vinci robot can become faster and more efficient.
The stereoscopic vision allows surgeons to visualize intra-operative and pre-operative imaging [109], and can be used for 3D tissue tracking [101]. Using the joint angles from the robot encoders, the tool position and orientation can be known and tracked within the accuracy of other commonly used surgical localizers [63]. Although there have been groups working on integration of imaging with the da Vinci robot [29, 94, 102] and other stereo camera set-ups, no commercially available and easy to use solution is yet in common use for image guidance during robotic surgery. Because the surgeon must look through the robot console, it is often very inconvenient to look at other monitors in the OR. One solution to this problem, from Blank Children's Hospital in Des Moines, Iowa, is shown in Figure 2.2, but the preferred method would be integration directly within the surgeon's view inside the robotic console.

Figure 2.2: One solution to image guidance while using the da Vinci surgical robot, found at Blank Children's Hospital in Des Moines, Iowa.

2.4 Partial Nephrectomy

Partial nephrectomy is a relatively new procedure that is increasingly being performed, but the complexity of the surgery has limited its widespread acceptance [83]. During the surgery, there is a time limit due to the necessity of clamping the major vessels to the kidney [35]. The majority of the difficult suturing must be completed during the warm ischemia time, the time in which the blood flow to the kidney is cut off. The dexterity and speed of suturing that a robot can provide could be beneficial in reducing this time [51]. The dissection and exposure of the renal hilum and the localization of the ureter is the most time consuming portion of the surgery.
The total procedure time, averaged over 12 patients, was reported as 289.5 minutes (range 145-369 minutes), while the time for resection and suturing was only 35.3 minutes (range 15-49 minutes) [34]; only about 12% of the total surgical time is therefore spent on resection and suturing. Since the exact location of the vessels is not known and damaging these vessels would cause significant blood loss for the patient, surgeons proceed slowly and with much care. The next major hurdle involves finding and exposing the tumour. Laparoscopic ultrasound is used to determine the extent and location of the tumour and helps determine the margins of the resection. In general, surgeons try to leave 5-10 mm margins around the edge of the tumour to ensure that all cancerous tissue has been removed. Reported surgical margins range from 0.5 mm to 9.5 mm [26]. A detailed description of the surgical procedure is given in [100], but the following list outlines the steps in the procedure as experienced at our institution:

1. Insert gas port (some surgeons will place the camera port first)
2. Insert laparoscopic camera
3. Gain initial orientation to the patient anatomy
4. Determine port placement sites and place ports for the da Vinci robot
5. Dock robot to the patient
6. Insert instruments and camera and re-gain orientation
7. Begin dissection of fat surrounding the kidney
8. Locate and dissect the gonadal vein and the ureter; follow these back to the renal pelvis
9. Expose the renal hilum
10. Check for any branching, and determine where clips will be applied
11. Determine the location of and expose the tumour, dissecting additional fat if needed; use CT as a general guidance tool during this step [Figure 2.3]
12. Bring in the laparoscopic ultrasound transducer to find the exact location of the tumour
13. Use ultrasound to determine tumour margins and electrocautery to mark them on the kidney surface
14. Place the clips on the renal hilum (the 30 minute warm ischemia time limit starts now)
15. Begin resection of tumour
16. Remove tumour, checking for clean margins
17. Suture the collecting system, if needed
18. Suture the defect closed, adding SurgiCel (Johnson & Johnson, New Brunswick, New Jersey) or Floseal as needed
19. Release the clamps on the hilum (the 30 minute warm ischemia time limit ends)
20. Check to make sure no bleeds have appeared
21. Undock robot and finish closing using laparoscopic tools

There are a few times during the surgery at which a manoeuvrable intra-operative ultrasound transducer might be used. First, ultrasound could be used during the initial orientation within the patient (Step 3). The patient could be scanned with the transducer to identify the major vessels. From these vessels, the CT could be registered to the patient. This would allow the surgeon to see the internal structures before the tissues are manipulated. An initial orientation would be helpful in visualizing the relative locations of the vessels, kidney and tumour, and would be especially useful for new or inexperienced surgeons. This aspect of registration to CT is discussed in Chapter 5. Second, ultrasound could be useful during the initial exposure of the renal hilum and gonadal vein (Steps 8-10). Knowing the location and the depth of the vessels would allow the surgeons to work with more confidence. The scans should be fast and not disrupt the flow of the surgery. Instead of trying to register the CT volume to the patient, the vessel outlines, acquired from the segmentation, could be 'painted' on the tissue surface as seen through the laparoscopic camera.

Figure 2.3: Example CT of a partial nephrectomy patient. The patient's kidneys are circled in green. A small tumour is located on the lower pole of the right kidney, circled in red.
This would provide the surgeon with information about the vessel's location and depth without obscuring his or her view of the surgical field. The current use of laparoscopic ultrasound to locate the tumour and find the appropriate margins (Step 12) could be augmented in a similar way [Figures 2.4, 2.5]. Currently, surgeons must mentally construct the 3D volume of the tumour and place it within the kidney. After the ultrasound probe has been removed, the surgeon must remember where the images were taken in order to mark the kidney surface properly. In contrast, the intra-operative probe could be used to construct a 3D volume of the tumour and then to 'paint' the edges of the tumour on the kidney surface. The final use of ultrasound would come after the clamp on the hilum has been released. This is not currently performed, but the return of blood flow through the kidney after surgery is very important to the patient outcome. Using Doppler, the flow through the vessels within the kidney could be verified.

Figure 2.4: A laparoscopic ultrasound transducer used during surgery. Top: View of the surgical field; the kidney is not easily visible. Bottom: Ultrasound image of the kidney.

Figure 2.5: A laparoscopic ultrasound transducer used during surgery. Top: View of the kidney and superficial tumour (circled in green). Bottom: Ultrasound image of the kidney; the tumour is visible at the top of the ultrasound image (circled in green). A CT image of this patient is shown in Figure 2.3.

Chapter 3

Motion of the Kidney Between Pre-operative and Intra-operative Positioning

3.1 Introduction

The management of intra-abdominal cancers, such as renal, adrenal and hepatic tumours, is relying increasingly on minimally invasive techniques such as laparoscopic partial resections and laparoscopic Radio Frequency Ablation (RFA).
With the limited surgical access, a lack of tactile feedback and a limited field of view through a narrow laparoscope, surgeons are relying more on pre-operative imaging for intra-operative planning. Furthermore, small tumours, particularly in the kidney and adrenal gland, are obscured by fat, and the margins of the lesions can still be difficult to delineate visually during the operation. When further dissection is required to uncover the tumour and the surrounding vessels, there is a higher likelihood of intra-operative vascular complications and a longer recovery time for the patient. As imaging techniques improve, 3D reconstructions of CT and MRI scans give surgeons detailed visualizations of the anatomy of a tumour and the surrounding structures. These images can be registered to the patient during the operation to provide navigation and guidance. Image-guided surgery, or augmented reality, has become commonplace in certain types of surgery such as neurosurgery, ophthalmology and orthopedics [12, 37, 53, 73, 78]. Image guidance allows the surgeon to identify the boundaries of an anatomical structure and its location relative to other important structures. During image-guided surgery, the pre-operative image needs to be linked to the patient at the time of the operation, requiring image registration. When the organs to be located are relatively rigid (e.g. skull, bones, eye), a rigid point-based registration to fixed external landmarks or fiducials can be used, since little to no organ motion is expected during surgery. However, in surgery involving soft tissue (such as the liver, prostate or kidney), organ shift and deformation cause significant error with current registration techniques [13]. Deformation can occur after clamping vessels or incising the organ, both of which can lead to a change in organ shape [3]. Organ shifts can also occur due to heartbeat, respiration, laparoscopic insufflation, and manipulation of the organ during dissection.
Therefore, many non-rigid registration and intra-operative organ tracking systems have been developed to try to account for these changes in real time [25]. The above-mentioned organ shifts and deformations have been studied (particularly in liver surgery), and mathematical models from in vitro experiments have been developed to account for these changes. However, no previous research has investigated the amount of kidney shift that occurs when moving a patient from the supine position, in which diagnostic scans are taken, to the flank position, in which the surgery is performed, nor how this affects the amount of error in organ shift models. Two common methods used to register 3D volumes involve using landmarks (or artificial fiducials) identified in image volumes, or image features [50, 69]. Landmark registration often requires additional user interaction, but has the advantage of speed and the ability to compensate for large changes in position and orientation. External fiducials may also be used, located using an external tracking device [56]. Feature-based registration generally produces more precise results, at the cost of additional computation time and some initialization steps to place both volumes within the algorithm's capture range. The capture range is the difference in initial position and orientation of two volumes over which the registration will still converge correctly. This can vary from 15 mm to 44 mm in translation and up to 30 degrees of rotation [50, 91], based on the complexity of the shape and the registration algorithm. Shifts in the organs caused by changes in patient position may place the image features outside the typical capture range of some registration algorithms. An additional issue related to the registration between pre-operative and intra-operative data deals directly with the reliability of the data being registered.
It is often the case that the intra-operative field of view is much smaller than that of the pre-operative imaging method, in this case comparing intra-operative ultrasound to pre-operative CT. Only a sub-set of the image volume can be used to register the images, but the entire volume will be used for navigation purposes in order to give the surgeon supplemental information about the surrounding anatomy. If significant shifts in the organ locations have taken place, it is likely that these shifts cannot be corrected for during registration, and the information displayed to the surgeon may be inaccurate. For example, in one registration study only patients imaged in the same pre-operative and surgical position were allowed to participate, because internal organ motion could not be accounted for with external markers [56]. As part of the standard clinical workflow, a patient's diagnostic CT or MRI scan is taken in the supine position, with the patient lying on a flat surface. However, some of these patients will be in a different position during their surgery. In particular, patients undergoing partial or radical nephrectomy will be placed in a flank position on a cantilevered bed during surgery, meaning that there may be significant organ shift between the initial CT scan and the procedure. Figure 3.1 shows example CT images of a patient in each position. In radiation therapy of soft tissue organs, the shifts are estimated and accounted for in the treatment plan to target specific locations in tissues and organs [20, 59]. During radiation therapy, patients are scanned in the same position in which therapy is performed, and respiratory motion is accounted for. A survey of more general organ shifts was reported in [65], but only kidney motion caused by breathing was reported; no motion with respect to patient position was measured.
During a study to evaluate a tracked laparoscopic pointer for navigation, a series of patients were scanned in their surgical position [71]. Using skin fiducials for registration, the tracked pointer combined with a Doppler device was used to locate vessel bifurcations within the retroperitoneum. The combined pointer and Doppler device allowed the anatomical shifts to be monitored visually as misalignments between registered images and the real-time Doppler images. No significant shifts within the retroperitoneum were reported in images of the patient in the same position. The registration errors from this study ranged from 3.84 mm to 9.10 mm, but the errors monitored visually by the experimenters seemed smaller than these numbers indicate.

Figure 3.1: Subject 1 in supine (left), prone (center) and left flank (right) positions. Slices were taken from the same general area of the abdomen and have been scaled to fit.

The issue of organ shift between pre-operative and intra-operative poses has not been studied extensively for laparoscopic surgery, where switching from supine to flank positions is more prevalent. The goal of this study is to determine whether the organ shift is substantial. If the motion is substantial, it might warrant considering whether the protocols for pre-operative imaging should be changed to minimize the difference between pre-operative and intra-operative patient positioning. In the case of partial nephrectomy, this would mean recommending that patients obtain their diagnostic CT in the flank position, as opposed to supine.

3.2 Methods

CT and MRI scans of subjects in supine and flank positions were compared. The position and orientation of the kidney in relation to the spine were found for each subject, and the motion between the two scans was examined.

3.2.1 Patient Overview

Ten subjects participated in this study. Nine subjects were patients slated for partial or radical nephrectomy and one was a healthy volunteer.
For these subjects, signed consent was obtained following approval of the study by the clinical review and ethics board of the hospital. Table 3.1 summarizes the subjects, imaging modalities and subject positions used in this study. CT scans were taken of the subjects enlisted in the study according to current surgical protocol, and MRI was used to image the healthy volunteer. The MRI scans were all taken within the same day, while the two CT scans of each surgical subject were separated by at least a week and up to several months: the first CT scan was taken at the time of diagnosis and the second scan was taken just prior to surgery. The healthy volunteer was included in this study to gain insight into the motion of the kidney of a single subject in both the left and right flank positions, whereas the CT scans were only taken in the appropriate surgical position for each patient. Subject 1 also had a previous CT scan taken in the prone position, which was examined in this study to provide a better overall understanding of kidney and organ motion in the abdomen. Example cross sections of these CT scans are shown in Figure 3.1. The surgical patients involved in this study all had renal masses of varying size and location. Because there was some time between scans, the growth of the tumours had to be taken into account. The range and variety of poses, images and pathology allow this preliminary study to give insight into the issue of kidney motion.

Table 3.1: Summary of subjects, image modalities and positions used in the study of kidney motion. [Columns: Subject (1-10), Modality (CT or MRI) and Position (Supine, Left Flank, Right Flank, Prone). All ten subjects were imaged supine and in at least one flank position; the MRI volunteer was imaged in both flank positions and Subject 1 additionally in the prone position.]

Figure 3.2: Kidney surfaces after spine registration and before organ registration. Subject in the flank (green) and supine (blue) positions.
3.2.2  Image Acquisition and Processing

The CT images were obtained with a pixel size of 0.78 mm within the axial slices and 3 mm between slices. The MRI images were originally obtained at 2 mm within plane and between slices and were then interpolated to 1 mm in each direction. A T1-weighted sequence was used for the MRI images. This provided good contrast for the kidneys while minimizing the chemical shift artefacts from the fat surrounding the kidney. Both the kidneys and the vertebrae were manually segmented using Stradwin [104, 105]. Manual segmentation was used to delineate the locations of organ and bone boundaries. The segmentations were interpolated to create a volume bounded by a triangulated surface of vertices and vertex triplets. These surfaces were then read into Matlab (Mathworks, Natick, MA). The point clouds from the kidney surfaces were registered using the Iterative Closest Point (ICP) method [15]. The spine was used as a pose-independent rigid landmark, and hence the vertebrae were matched first in order to bring the kidneys into the same coordinate frame. The right and left kidneys were then rigidly registered to their respective flank or prone counterparts using the ICP method. The results from this latter registration are reported as the organ motion within the body. Figure 3.2 presents an example of the kidneys after they have been placed in the same coordinate frame using registration of the spine, but before kidney-to-kidney registration.

From these segmentations we were able to examine the motion of the kidneys in several ways. The overall rigid motion of the kidney and the rotation about each of its three primary axes were examined. The vector translation and rotations of the kidney describe how it shifts and rotates around the spine.

Figure 3.3: Surface motion of the kidney due to change from flank to supine positions for Subject 1 in the anterior (top) and posterior (bottom) views.
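The kidney-to-kidney ICP step described above can be sketched in a few lines. The thesis work used Matlab; the following Python sketch (NumPy/SciPy, with illustrative helper names, not the actual implementation) alternates closest-point matching with a least-squares rigid fit:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50, tol=1e-6):
    """Rigidly register point cloud src to dst; returns aligned src, R, t."""
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    prev = np.inf
    cur = src.copy()
    for _ in range(iters):
        d, idx = tree.query(cur)                 # closest-point correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = d.mean()
        if abs(prev - err) < tol:                # stop when the error plateaus
            break
        prev = err
    return cur, R_total, t_total
```

As in the study, ICP assumes a reasonable initial alignment, which is provided here by the prior spine-to-spine registration.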
We also computed the displacement magnitude for each vertex of the surface of the kidney volume. A rigid registration between supine and other patient positions was used for this study. In order to determine whether a deformable registration is warranted, the errors of the ICP registration and the changes in kidney volume were examined. The registration error is reported as the distance from each point to its closest neighbour in the corresponding point cloud (the maximum of these distances being the directed Hausdorff distance). Because much of the possible registration error could be due to the change in overall kidney volume between scans, the changes in total kidney volume were also investigated. This volume change is reported as a percentage of the total kidney volume.

3.3  Results

Figure 3.2 illustrates the kidney position before registration, and an example of the overall motion experienced along the kidney surface is shown in Figure 3.3. Both of these example images were created from the results of Subject 1. The results of the translations, rotations and surface vertex motion from the registrations are shown in Figure 3.4. These figures show that the centroid of each kidney can move from 6.6 mm to 43.6 mm, with an average of 22.3 mm over all kidneys. The average motion for all subjects was 12.9 mm for the right kidney and 24.7 mm for the left kidney. Table 3.2 lists the average centroid motion for each kidney in each flank position. The maximum rotation of all kidneys ranged from 5.9 to 25.6 degrees with an average of 12.7 degrees. The movements of all the points that define the surface of the kidney were also tracked during the registration process. The average motion of these points per kidney ranged from 9.8 mm to 46.5 mm. The left kidney of Subject 9 displayed the largest overall surface motion at 62.15 mm. The greatest motion occurred on the inferior pole of the kidney.
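The registration-error and volume-change metrics described in Section 3.2.2 are straightforward to compute from the segmented meshes. A hedged NumPy/SciPy sketch (function names are illustrative, not from the thesis software):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_distances(pts_a, pts_b):
    """Distance from each point in pts_a to its closest neighbour in pts_b;
    the mean is the residual registration error, the max the directed
    Hausdorff distance."""
    return cKDTree(pts_b).query(pts_a)[0]

def mesh_volume(vertices, triangles):
    """Volume enclosed by a closed, consistently oriented triangle mesh
    (sum of signed tetrahedra spanned by each face and the origin)."""
    v0 = vertices[triangles[:, 0]]
    v1 = vertices[triangles[:, 1]]
    v2 = vertices[triangles[:, 2]]
    return abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0

def percent_volume_change(vol_ref, vol_new):
    """Volume change as a percentage of the reference (supine) volume."""
    return 100.0 * (vol_new - vol_ref) / vol_ref
```

The signed-tetrahedron formula gives the exact enclosed volume for any closed triangulated surface, which suits the interpolated Stradwin segmentations.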
Because the translations and rotations were coupled, the motion was not uniform across the surface of the kidney [Figure 3.3].

Table 3.2: Average centroid translation (mm) for the right and left kidneys in each flank position.

Position       Right Kidney   Left Kidney
Right Flank    25.6           27.6
Left Flank     11.9           21.5

The residual errors after the ICP algorithm had a mean of 3 mm or less for every subject, and a mean across all patients of approximately 2 mm. Figure 3.5 illustrates the distribution of the registration errors mapped to their location on the kidney surface. Although larger errors were present, most of these occurred at the poles of the kidney during kidney registration and around the spinous process during the initial registration of the vertebrae, and are most likely caused by segmentation errors. Considering the non-symmetric, bean-like shape of the kidney, the relatively small overall error implies that the kidneys are well matched to each other after registration and that the rigid registration was successful.

Figure 3.4: Results for each subject, where each kidney was registered individually. Top left: the maximum rotational component. Top right: the translational distance of the kidney centroid. Bottom: the average motion of the surface points on the kidney model.

In order to further understand the registration, the change in kidney volume with respect to the supine scan was calculated for each subject. Figure 3.6 shows this value for all subjects. The change in volume ranged from 23% for Subject 1 to less than 1% for Subjects 4, 8 and 10. The registration error is not entirely due to poor registration, but also due to changes in the total kidney volume over time. Often several months had elapsed between scans, during which time the subject's overall fluid level may have changed, or tumour growth may have affected kidney volume.
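The reported translations, rotations and surface displacements can all be read off the rigid ICP result. An illustrative sketch (our function names; the z-y-x Euler convention is an assumption, as the thesis does not specify one):

```python
import numpy as np

def motion_summary(R, t, vertices):
    """Summarize a rigid kidney motion: centroid translation, per-axis
    rotations (z-y-x Euler angles, in degrees) and mean surface-point motion."""
    moved = vertices @ R.T + t
    centroid_shift = np.linalg.norm(moved.mean(axis=0) - vertices.mean(axis=0))
    # Euler angles extracted from the rotation matrix (R = Rz @ Ry @ Rx)
    ry = np.degrees(np.arcsin(np.clip(-R[2, 0], -1.0, 1.0)))
    rx = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    rz = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    mean_surface_motion = np.linalg.norm(moved - vertices, axis=1).mean()
    return centroid_shift, (rx, ry, rz), mean_surface_motion
```

Because rotation and translation are coupled, the per-vertex displacements vary across the surface even for a single rigid transform, which is why the surface-motion maps in Figure 3.3 are non-uniform.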
Subject 2, the healthy volunteer, was the only subject for which all scans were taken within one day, and had a 3% change in volume.

Figure 3.5: ICP error due to changes from flank to supine positions for Subject 1 in the anterior (top) and posterior (bottom) views.

Figure 3.6: Volume change of the kidney as a percentage for each subject.

3.4  Discussion and Conclusion

Minimally invasive surgical approaches to the management of intra-abdominal cancers such as liver, renal and adrenal tumours are increasing. To improve cancer control and reduce patient morbidity from these operations, surgeons are relying increasingly on imaging. Image guidance has been commonplace in neurosurgery since 2004, but has yet to be used clinically in abdominal soft tissue surgery. This is due to the challenges that an organ which can move and deform poses to image guidance systems. There are four key steps in image-guided surgery: 1) pre-operative imaging, 2) intra-operative imaging, 3) image registration to the patient and 4) tracking of the organ during surgery. The organ shifts that occur during the last three steps in both liver and renal surgery have begun to be investigated, and algorithms to account for these shifts have been proposed [61]. However, the shift that occurs when moving the patient from the supine position, in which the pre-operative images are taken, to the flank position, in which the operation takes place, has not been examined. If the internal organ shift is large, one can postulate that the rest of the measurements will be affected. Our results show a substantial kidney shift, with up to 43.6 mm of translation and 25.6 degrees of rotation as the patient position changes from supine to flank. It is known that the abdominal organs move with respiration, especially the kidneys, liver and spleen, but the motion observed in this study is greater than that reported for respiration.
Respiration motion has been studied for use in radiation therapy, where the location of the organ throughout the respiratory cycle can affect the planning target volume and dose calculations. The range of motion found in these studies [4, 10, 19, 31, 74, 96, 103] was greatest in the superior-inferior (SI) direction, and the average motion of the left kidney in the SI direction was reported to range from 11 mm to 19 mm. In a study examining the bending of the renal artery, kidney motion was examined during passive respiration in both the coronal and axial views. The right kidney was found to have the maximum displacement, 13.2 mm superior and 6.3 mm posterior during expiration with respect to its position during inspiration [36]. The motions in the left-right (medial-lateral) and anterior-posterior directions were reported to be much smaller in all studies. The studies do report a significant inter-patient variability, with organ motion ranging from 0 mm to 27 mm. In our study, the average kidney motion for each subject ranged from 10.2 mm to 33.3 mm. In comparison to previous studies on renal organ motion, we found the overall motion with position change to be greater than that found with passive respiration but similar to the displacement in forced respiration [74]. The displacements during consecutive respiration cycles have been measured, and standard deviations of 2-3 mm were reported [96]. From this relatively small standard deviation, one can infer that an organ's location is repeatable from one respiratory cycle to the next; that is, the organs fall in the same place with each breath. Thus, any motion measured during this study should not contain more than 2-3 mm of position change caused by breath-hold inconsistencies rather than patient positioning. This repetitive shift with respiration has been incorporated into continuous intra-operative navigation models with reliable accuracy.
There was a difference between the motion of the left and right kidneys [Table 3.2], although it was not statistically significant with the current subject population. This difference could be caused by the interaction of the kidneys with other organs such as the liver and spleen. These organs may also shift during changes in patient position and influence the measured movement of the kidneys. In the future, we will examine the other abdominal organs and attempt to quantify this influence. The shifts due to change in patient position would have an effect on the registration of the pre-operative CT to intra-operative modalities, including ultrasound and the camera view. These registrations between pre-operative and intra-operative images are often used for surgical guidance, and the deformation caused by the change in patient position would impair the accuracy of the image guidance. In previous work, it has been found that the organ shift in the retroperitoneum is minimal when the patient stays in the same position over multiple imaging sessions [71]. Based on the results of this study and previous work, it would be preferable to scan the patient in the same position in which the surgery will occur. Although a rigid registration was used throughout this study, the errors after registration were small, averaging 2 mm across all subjects. The acceptable clinical margins created around a tumour during resection range from 5 to 10 mm; a 2 mm registration error still leaves room within these margins, but the surgeon should be aware of the possible errors. The 2 mm error from rigid registration leads us to believe that the extra time and complexity of a deformable registration are not necessary for the registration of organs between different subject poses in the pre-operative setting. Future work will examine how the other abdominal organs affect the displacement of the kidneys.
In particular, the liver, which sits directly superior to the right kidney, could shift significantly and contribute to the kidney's displacement, or be coupled with its motion. With additional study, the motion and deformation of all the abdominal organs can be characterized. The subjects' tumour size may also contribute. In particular, Subject 1 had a large tumour on the superior pole of the left kidney. This tumour grew from approximately 115 ml to 150 ml between the times when the supine and flank CT scans were taken. As this demonstrates, tumour growth should also be taken into account when discerning the cause of organ displacement.

Chapter 4

Ultrasound Transducer Design and Characteristics

4.1  Introduction

Current practice of intra-abdominal ultrasound during robotic surgery generally involves the use of a traditional laparoscopic ultrasound transducer [Figure 4.1]. The laparoscopic transducer is controlled by a patient-side assistant under the verbal direction of the surgeon. In order for the surgeon to see the ultrasound image, the ultrasound machine's video output is fed into the da Vinci console via an S-video cable and displayed below the view of the surgical field in the surgeon's console (see examples in Figures 2.4 and 2.5). On older versions of the robot, this feature is not available, and the surgeon is required to leave the console to look at the ultrasound image. The laparoscopic ultrasound transducer itself is limited in its degrees of freedom, and the viewing angles are further limited by the port placement. Although a variety of transducers can be used, the most common are transducers with a single flexible joint at the end-effector. In addition to requiring control by the patient-side assistant, the transducer occupies a dedicated surgical port while in use, and tool changes cost additional time. Ultrasound has been found to be a very user-dependent modality [54, 81].
This means that the quality of the ultrasound image available to the surgeon depends on the skill of the patient-side assistant, who might have limited experience with the laparoscopic ultrasound instrument.

Figure 4.1: Left: Aloka UST-5536-7.5 multi-frequency (5-10 MHz) flexible laparoscopic transducer. Right: BK Medical 8666 5-10 MHz flexible laparoscopic transducer.

Ultrasound is a real-time, non-invasive imaging method that provides the surgeon with useful and relevant information about the state of the patient and the location of vessels and tumours. If an ultrasound transducer were more accessible to the surgeon, it could be used to provide additional surgical guidance. This guidance could improve three stages of surgery. The first stage in which the transducer would be used involves the initial orientation of the surgeon to the patient's anatomy. The transducer can be used to find the kidney and the vessels, and to register them to the pre-operative CT scan. This would provide the surgeon with a broad view of the abdomen before the dissection begins. The second stage involves the dissection of the major vessels. These vessels should be localized and dissected before the kidney tumour is located. They must be clamped during tumour removal, so having them localized and accessible is paramount to the continuation of the procedure. The third stage involves the removal of the tumour itself. This is the part of the surgery in which ultrasound is currently used most extensively, and involves the localization of tumour margins before the tumour resection. The patient-side assistant controls the laparoscopic transducer, and the surgeon must mentally reconstruct the tumour margins in relation to where the ultrasound transducer is placed and angled on the kidney surface. The resection boundaries are then marked on the surface of the kidney with an electrocautery tool after the ultrasound transducer has been removed.
4.2  Design

4.2.1  Requirements

In order to integrate intra-operative ultrasound into robotic surgery and take full advantage of the degrees of freedom available to the da Vinci tools, an intra-abdominal ultrasound transducer was designed that can be easily picked up and manoeuvred by the da Vinci grasper. Previously, a 13 MHz Aloka mini-transducer (transducer dimensions: 15 x 9 x 6 mm) has been used for characterization of the coronary arteries during thoracic laparoscopic surgery [21, 22]. This transducer was outfitted with a small fin or tube that allowed the da Vinci needle drivers to grasp and manoeuvre it. Unfortunately, the footprint of this transducer is very small and the grasping method does not fit the design parameters described below. An ultrasound transducer has also been integrated directly into the da Vinci as a tool controlled from the surgeon's console [94]. This allows simple tracking of the transducer through the robotic joint angles. Unfortunately, a tool change is still required to access the ultrasound, and the overall length of the instrument limits its range of motion. After speaking with urological surgeons and watching videos of robotic surgery, we propose to construct a ‘pick-up’ transducer that fulfills the following set of desirable characteristics for use during partial nephrectomy.
The ‘pick-up’ transducer should:

• use standard hospital equipment
• be small enough to be manoeuvred inside the patient
  – based on previous experience and the current size of laparoscopic transducers, length should be limited to 50 mm
• have a small enough diameter to fit through a surgical incision
  – in order to fit through a laparoscopic incision and not require additional sutures to prevent loss of insufflation, diameter should be no greater than 15 mm
• have a consistent and repeatable interface with the da Vinci grasper
• have a self-aligning interface with the da Vinci grasper, such that a range of initial alignments between transducer and grasper will still result in correct ‘capture’
• allow interchangeable grasper positioning in relation to the ultrasound image
• produce an ultrasound image that is not obscured or degraded by the da Vinci tool interface
• have no sharp or breakable components
• allow for multiple methods of tracking
• contain no metal in the da Vinci interface that interferes with the embedded electromagnetic tracking device
• not be affected or damaged by repeated grasping with the da Vinci tools
• allow for standard methods of sterilization

4.2.2  Construction and Testing

In order to create such a transducer, Vermon (Tours, France), an ultrasound transducer manufacturer known for building custom-designed ultrasound transducers, was contacted. Working closely with the engineers at the company, we were able to design and define a transducer that would suit our needs. There were several design iterations before a final design was settled upon. Some of the original ideas that were considered included requiring some assembly of the transducer inside the body, designing the ultrasound transducer to be grasped directly and allowing some modularity of the assembly. Some initial designs were larger to increase the area that the tool could grasp, involving a ‘fin’ on the top side of the transducer.
Since this would make the diameter of the transducer too large for a standard incision, assembly within the body was considered briefly. It was soon deemed too complicated, as it introduced additional load on the surgeon, as well as small parts that could break and/or be lost inside the patient. Any small parts also introduced problems during sterilization. Some initial designs also included complicated transducer housing construction, such that the housing could be grasped directly by the tool at various angles. This idea was dismissed due to the potential damage that the da Vinci tools could inflict on the plastic of the transducer housing. The da Vinci tools are very sharp (even the blunt graspers have sharp teeth), and it was determined that repeated grasping would render the housing unusable after a short amount of time. Some complicated housing designs were also dismissed by Vermon due to the limitations of the manufacturer's machining capabilities.

The da Vinci tool selected for use with the ‘pick-up’ transducer is the ProGrasp [Figure 4.2], a general-use grasper that is recommended for many types of urologic procedures, including nephrectomy and prostatectomy. Its slotted construction allows a unique interface with the transducer to be designed. This tool has a high jaw closing force, a published jaw angle of 0-38 degrees and a jaw length of 28 mm. Actual measurements of the tool found that the maximum jaw opening was approximately 12 mm (tip-to-tip) and the effective jaw length, from the pivot to the tool tip, was approximately 20 mm; the published jaw length was measured to the wrist pivot. These measurements determined the size and shape of the area where the transducer would be grasped. The length and width of the slot were measured as 19 mm and 2.4 mm respectively. These measurements were important in determining the locking mechanism's shape and design.
Because of the ProGrasp's high jaw strength and sharp teeth, it was decided that the tool should grasp a metal piece added semi-permanently to the proximal end of the transducer. This provides a constant and non-damaging method of grasping the transducer. After consulting a urological surgeon and examining the general relationship between the tool and the tissue during surgery, we decided to have the tool grasp the transducer at a right angle. In this configuration, the kidney can be imaged at a wide range of angles. The metal add-on, the ‘Lap-handle’, was designed such that it forms a locking grasp with the ProGrasp tool [Figure 4.3]. The ProGrasp has a center slot that was used to lock the tool to the transducer. A groove [Figure 4.3] (A), shown in blue, was built into the Lap-handle to match the width of the ProGrasp. A small pin (B), shown in red, catches in the slot of the tool and prevents the tool from sliding off the Lap-handle. The walls of the groove are angled (C), shown in green, to increase the capture range. One wall is slanted all the way to the floor of the groove, while the other ends just short of it (D). The solid wall (E), shown in yellow, helps to constrain the motion between the transducer and the tool, while the fully slanted wall (C) prevents the tool from ever becoming jammed in the groove. The angle between the two sides of the groove (A) matches that of the ProGrasp when the tool is completely closed onto the Lap-handle. This allows for a fixed transformation between the da Vinci tool and the transducer. In addition, all the degrees of freedom and the range of motion available to the da Vinci tool are now transferred to the transducer. This makes the operation of the transducer very flexible and easier than that of traditional hand-held laparoscopic transducers. A diagram and final photograph of the transducer are shown in Figures 4.5 and 4.6 respectively.

Figure 4.2: The ProGrasp instrument. Image courtesy of Intuitive Surgical Inc.

Figure 4.3: Rendered images of the Lap-handle.

Figure 4.4: Rendered images of the ‘pick-up’ transducer. Left: cross-sectional view (blue) of the Lap-handle, a steel section added to the transducer. The angled faces and locking pins can be seen. Right: the tool fits tightly against the angled faces. The practicality of adding visual tracking markers is demonstrated.

In addition to the Lap-handle at the transducer's proximal end, the transducer shape also contributes to its versatility. Two faces have been cut into the sides of the transducer at an angle similar to that of da Vinci and laparoscopic tool jaws. These faces, in conjunction with a narrow groove at the bottom of each face, allow additional flanges to be snapped over the top of the transducer. The flanges could provide ways for other da Vinci or standard laparoscopic tools to grasp the transducer (different tools and different angles). This increases the number of tools that can be used with the transducer and the applications where the transducer can be useful. Several features were included to facilitate tracking of the transducer. The main addition is an embedded electromagnetic sensor, discussed further in Section 4.3. The faces of the transducer are also designed to make the addition of optical markers easy to implement. The final method of tracking uses the constant transformation between the da Vinci tool and the transducer. An in-depth discussion of each of the tracking methods can be found in Section 4.3.

Figure 4.5: Final specifications of the transducer design. The Lap-handle is not shown.

Figure 4.6: Photograph of the final transducer prototype.

The ultrasound array used for this transducer is similar to those used in standard laparoscopic instruments, since the clinical application is the same. A linear array was chosen. The array is 28 mm long with 128 elements and a center frequency of 10 MHz.
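The trade-off behind the choice of a 10 MHz center frequency can be illustrated with back-of-envelope figures. The values below are generic textbook numbers (in particular, the 0.5 dB/cm/MHz attenuation coefficient is an assumed soft-tissue average, not a measured property of this transducer):

```python
# Back-of-envelope figures for a 10 MHz array in soft tissue
# (illustrative textbook values, not measured transducer properties).
c = 1540.0                      # speed of sound in soft tissue, m/s
f = 10e6                        # center frequency, Hz
wavelength_mm = c / f * 1000.0  # ~0.154 mm: axial resolution is on this order

alpha = 0.5                     # assumed attenuation coefficient, dB/cm/MHz
depth_cm = 6.0                  # approximate depth limit of the array
round_trip_loss_db = 2.0 * depth_cm * alpha * (f / 1e6)  # ~60 dB at 6 cm
```

The short wavelength gives the high resolution noted above, while the roughly 60 dB round-trip attenuation at 6 cm explains why the usable imaging depth is limited at this frequency.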
Element response time, sensitivity and frequency response, as provided by Vermon, are shown in Figure 4.7. The high center frequency allows for high resolution, but limits the imaging depth to approximately 6 cm. This is within the range used during partial and radical nephrectomy. The entire transducer is 15 mm in diameter and approximately 50 mm long, including the added length from the Lap-handle. Example B-mode and Doppler images of the carotid artery scanned with the new transducer are shown in Figures 4.8 and 4.9. The image quality of the transducer is comparable to that of the commercially produced HST15-8 transducer used with Ultrasonix machines (Ultrasonix, Richmond, British Columbia). The transducer and sensor cables are integrated into the housing design to allow standard surgical sterilization protocols to be followed. The transducer can be sterilized using the following agents: Salvanios, Cidex OPA, Cidex 14 day, Cidex Plus, Gigasept, Gigasept AG Forte, Gigasept FF, Alkazyme, Steranios, Mikrozid, Klenzyme, Cidezyme, Bodedex Forte, Korsolex Basic, Korsolex Extra and Bomix Plus. It is also compatible with Sterrad (Irvine, CA) and ethylene oxide (EtO) sterilization methods. The new ‘pick-up’ ultrasound concept has been successfully tested with the da Vinci robot. The Lap-handle allows the transducer to be easily grasped and provides the transducer with the same freedom of motion as the da Vinci tools. The transducer could be easily grasped by the da Vinci tools due to its self-aligning properties. During testing, the capture range was found to be approximately 20 degrees from each axis of the transducer.

Figure 4.7: Element response time, sensitivity and frequency response as provided by the transducer manufacturer.

Figure 4.8: Example B-mode image of the carotid artery.

4.3  Tracking Methods

The transducer was also designed to be tracked in several ways.
The two faces of the transducer are flat, and specialized markers can be placed on them to aid in vision-based tracking [Figure 4.10]. The use of the da Vinci stereo cameras allows all 6 DOF to be determined with a minimal number of markers. An electromagnetic (EM) position and orientation sensor is also embedded in the transducer to aid in tracking. Due to the fixed transformation between the transducer and the da Vinci tool, the robot kinematics can also be used as a tracking method.

Figure 4.9: Example color Doppler image of the carotid artery.

Figure 4.10: The da Vinci robot can grasp the transducer in a stable and repeatable manner. Markers for camera tracking are placed on the transducer faces.

4.3.1  Electromagnetic Tracking

A 6 DOF sensor was permanently embedded into the transducer. It was located inside the transducer as close to the ultrasound image as possible. This minimizes the lever-arm effect of any errors in the ultrasound image calibration. A Model 180 (Ascension Technologies, Burlington, VT) sensor with a 2 mm outer diameter and a 3.3 m cable was used. This sensor was the optimal trade-off between size and working range, and works with the Ascension trakStar or DriveBAY systems. A mid-range pulsed-DC transmitter creates the magnetic field used by the sensor to identify position and orientation, and has a 58 cm range when used in conjunction with this sensor. The EM tracking system allows the position of the transducer to be known even when the transducer is not in view of the camera. Although EM trackers have a limited range in which they are accurate, this volume is within the range of a typical surgical field for kidney surgery. EM tracker readings are also distorted by nearby metal objects, due to changes in the magnetic field used by the sensors, but accuracies have been reported in the range of 1% of the dimension of interest [52].
These systems have been tested in clinical environments [62, 110, 111] and are used in clinical practice, for example in the Ultrasonix GPS system.

4.3.2  da Vinci Joint Kinematics

When the transducer is used with the da Vinci robot, the robot kinematics offer a second method of transducer tracking. The accuracy of the da Vinci robot has been tested, and the robot can be used to localize a point to within one millimetre, which is within the range of other surgical tracking methods [63]. Using the application programming interface (API), the joint angles of the robotic arms are read in real time. Using the joint angles and a calibrated transformation between the robot tool and the transducer, the ultrasound image can be located in space and displayed in the surgeon's view. Joint-angle tracking can be used if the optical markers become obscured during the surgical procedure, causing ‘dead spots’ in which no markers are visible to the cameras, or if the EM field becomes highly distorted. The da Vinci tool tip kinematics have a very high incremental accuracy, but a fairly low absolute accuracy. This means that an initial registration between the transducer and the robot must be completed, for example using the optical markers. This registration can be completed automatically and remains valid throughout the surgical procedure. The method of tracking the transducer with da Vinci joint angles has been tested. The transducer was calibrated using a single-wall phantom [72, 85]. The phantom consists of a water bath with a metal plate in the bottom. The plate creates a distinctive bright echo line in the ultrasound images, which can be automatically detected using a few adjustable parameters and RANSAC line fitting. A series of approximately 40 images was taken in the water bath following the patterns outlined in [85].
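The chain that locates an ultrasound pixel in space, as described above, is a composition of two homogeneous transforms. A minimal sketch (function and parameter names are ours, assuming a 4x4 tool pose from the kinematics and a 4x4 image-to-tool calibration):

```python
import numpy as np

def us_pixel_to_world(u, v, pixel_mm, T_world_tool, T_tool_image):
    """Map an ultrasound pixel (u, v) into world coordinates.

    T_world_tool : 4x4 tool pose from the da Vinci kinematics (via the API).
    T_tool_image : 4x4 tool-to-image transform from the single-wall calibration.
    pixel_mm     : (sx, sy) pixel size in millimetres.
    """
    # Scale the pixel to millimetres; the image plane is z = 0 in image frame.
    p_image = np.array([u * pixel_mm[0], v * pixel_mm[1], 0.0, 1.0])
    return (T_world_tool @ T_tool_image @ p_image)[:3]
```

Applying this mapping to every pixel of every tracked frame is what allows the 3D volume reconstruction described below.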
This software takes into account the temperature of the water and adjusts the final results to be consistent with the speed of sound in soft tissue, 1540m/s. The joint angles of the tool used to hold the transducer were recorded from the robot simultaneously with the ultrasound images. Each of the images was checked manually to determine that the floor of the water bath was correctly located in the image. Using the built-in algorithms, the lines detected in each ultrasound image were then fit to a surface using the Levenberg-Marquardt algorithm. Residual error for this calibration was 2.6 mm. Once successful calibration had taken place, the rigid transformation between the tool position and the ultrasound image is known. While tracking the tool position, a 3D ultrasound volume can be constructed. A sample volume of a vessel phantom was constructed [Figure 4.11]. The vessels in the phantom have been segmented manually for ease of visualization.  46  Figure 4.11: Phantom reconstruction using tracking based on da Vinci tool positions. Left: cross sectional view of the ultrasound vessel phantom. Right: the 3D reconstruction of the phantom vessel bifurcation.  Figure 4.12: Examples of common optical markers.  4.3.3  Optical Tracking  Optical markers have been used for a wide variety of purposes within augmented reality, including but not limited to medical imaging [66, 86], conferencing systems [57], and mobile devices [108]. Yu et al. presents a review of augmented reality, its applications and future direction [115], and Ribo et al. presents a more extensive list of tracking systems [88]. These types of markers rely on camera based image processing techniques to detect corners or specific patterns [Figure 4.12]. Popular systems are ARToolKit, ARToolKitPlus and ARTag. ARToolKitPlus and ARTag, both variations on the original ARToolKit [57] markers, have per47  formed well in studies evaluating the system accuracy [1, 41, 80]. 
ARToolKitPlus markers were chosen for this tracking application due to the simplicity of implementation and the open-source software. These markers consist of a black border on a white background with an interior ID code. For ARToolKitPlus, the black border is detected first using edge detection after thresholding the entire image, followed by a search for quadrangles. Detected areas are checked for size and rejected if either too large or too small. The areas inside the quadrangles are normalized for perspective and then checked against known marker patterns. If detection of the pattern is successful, an iterative pose estimation algorithm is run until the final transformation from the camera to the local marker frame is found. ARToolKitPlus uses ID markers, rather than cross-correlation methods, for marker identification. This removes the need to compare images against large databases and multiple images: ID markers allow the software to 'read' the image ID directly from the marker's binary pattern. The interior image ID relies only on bi-tonal levels, making it more robust against changes in lighting. In addition, ARToolKitPlus implements an automatic thresholding algorithm which uses the median value of the interior marker pixels in one image to set the threshold for the following images. If no marker is detected, a random threshold is chosen for each frame until the next successful marker detection. This software was originally developed for use with mobile devices, but a PC-based version that implements a more robust planar pose tracking algorithm was used [108]. Several aspects of the accuracy of ARToolKitPlus were tested for use with a desktop virtual reality system [33]. The parameters tested in this study included jitter, accuracy (of pose estimation), marker recognition and marker confusion.
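The automatic thresholding heuristic described above (median of the interior marker pixels on success, random fallback on failure) can be sketched as follows; this is an illustrative reimplementation, not the ARToolKitPlus source:

```python
import random


def next_threshold(prev_marker_pixels, rng=None):
    """Choose the binarisation threshold for the next frame.

    prev_marker_pixels: grey levels (0-255) of the interior marker
    pixels from the last frame in which a marker was found, or None
    if detection failed.  On success the median is used; on failure
    a random threshold is tried until a marker is found again.
    """
    if prev_marker_pixels:
        ordered = sorted(prev_marker_pixels)
        return ordered[len(ordered) // 2]
    rng = rng or random.Random(42)
    return rng.randint(0, 255)
```

In a detection loop, the threshold returned here would be applied before the edge-detection and quadrangle-search steps of the next frame.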
Absolute positions with respect to the camera were not measured; instead, the known relative positioning of an array of markers was used to measure the parameters mentioned above. Jitter was found to be greatest in the Z direction, with up to 15 mm reported, which can be explained by the camera resolution (with a single camera, depth is estimated solely from changes in apparent size). Positional accuracy was also better in the X and Y directions than in the Z direction. The average orientation accuracy was found to be 2.9 degrees around the X and Y axes and 3.7 degrees around the Z axis. For marker detection, it was found that down to a size of 22 pixels, marker recognition was close to 100%, while under 15 pixels, markers could not be found. For recognition with respect to orientation, angles up to 75 degrees had 100% recognition.

Figure 4.13: Four ARToolKitPlus markers were placed on the transducer. They are named to distinguish them from one another. 'Ninja' and 'Bicep' were placed on one face of the transducer while 'Jack' and 'Sword' were placed on the other.

The ARToolKitPlus markers were attached to the transducer to evaluate the tracking ability with respect to the maximum angles and sizes of recognition [Figure 4.13]. With two markers placed on each face, the following marker recognition angles were found. Both markers could be reliably tracked on the faces of the transducer over approximately 102-114 degrees around the X and Y axes of the markers. In addition, for about 20 degrees past these points, one marker could be tracked consistently [Figure 4.14]. Facing directly toward the camera, the markers could be tracked up to about 20 cm. Tables 4.1 and 4.2 describe the results of the angle and marker size detection tests. The pixel size of the detection cut-off was found to be between 21 and 24 pixels, which is consistent with the findings from [33].
Because there are four markers on the transducer, there are options for how to combine the information from each marker. Since one marker can be used to track all 6 DOF, multiple markers add redundant information that can be used for more accurate tracking. In this case, one marker determines the transformation for the transducer: the image is searched, all markers are found, and the largest marker is used to find the transducer transformation. This marker is generally closer to the camera, or oriented closer to the camera's line of sight, and thus will generally have a more accurate pose estimate [1]. In the future, the markers could be combined into a single multi-marker set [108], or a more sophisticated decision algorithm could be implemented based on the accuracy function [1].

Figure 4.14: The view angle for reliable marker tracking. Green sections represent the angles in which both markers are tracked and the red sections represent the angles in which one marker can be tracked. Top: Viewing angles for rotation around the Y axis of the marker. Bottom: Viewing angles for rotation around the X axis of the marker.

4.4  Conclusions

A new transducer has been designed specifically for use during robotic surgery. The transducer takes advantage of the degrees of freedom available with the da Vinci surgical system. The design requirements that were originally described have been fulfilled and the final manufactured transducer has been tested with the da Vinci robot.

Table 4.1: The angles at which the ARToolKitPlus markers can be detected. Two markers are placed on each face of the transducer. Yaw is measured about the X axis and Pitch is measured about the Y axis of the marker. All measurements are in degrees.

                           Ninja & Bicep        Jack & Sword
  Marker visibility        Yaw      Pitch       Yaw      Pitch
  0 visible                0        0           0        0
  1 or more visible        0-26     0-21        0-37     0-26
  Both visible             26-138   21-131      37-139   26-136
  1 or more visible        138-180  131-180     139-180  136-180
  0 visible                180      180         180      180
  2-marker view angle      112      110         102      110

Table 4.2: The size of the markers which can be detected by the algorithm, as a function of the percentage of the image that they cover. Consistent: both markers are detected. Variable: both markers are detected about half of the time. Inconsistent: detection is unreliable.

                 Ninja & Bicep           Jack & Sword
  Consistent     >= 0.73%                >= 1.02%
  Variable       0.73% > x >= 0.62%      1.02% > x >= 0.74%
  Inconsistent   < 0.62%                 < 0.74%

Methods of tracking were incorporated into the transducer in order to facilitate the construction of 3D ultrasound volumes. Accurate tracking is also important if the ultrasound image is to be placed accurately in the surgical scene directly, or registered to pre-operative imaging. The next chapter describes a method of using the tracked ultrasound transducer to register pre-operative CT scans using vessels as registration features.

Chapter 5

Ultrasound to Computed Tomography Registration using Vessel Extraction

5.1  Introduction

Vasculature is a predominant feature in many types of surgery. Whether the vessels are to be found, as in partial nephrectomy, or avoided, as in different types of brain surgery, it is important for the surgeon to know where they lie, what they are connected to and where they flow. During partial nephrectomy, the renal artery and vein must be properly located and delicately cleaned of surrounding tissue. These vessels must be clamped during the tumour resection to minimize blood loss as the kidney tissue is cut, so it is paramount that they be clamped properly. The ureter is also located before the tumour resection begins to avoid accidentally harming this important structure during the procedure.
The ureter lies alongside the gonadal (inguinal) vein, and this vein acts as a guide to locate the ureter, which is generally small and difficult to see in the laparoscopic camera. We believe that the vessels can act as a feature for ultrasound to CT registration. This registration can then be used to overlay the pre-operative CT onto the surgeon's view of the abdomen. The CT overlay can be used at the beginning of surgery to gain initial orientation. The surgeon will immediately know where the major structures lie, including the vessels, the kidney and the tumour. The dissection of the vessels and kidney can then be carried out in a more direct manner, saving time in the OR as well as the energy of the surgeon. The pre-operative imaging of patients is generally taken with contrast. The contrast allows the important structures relevant to this procedure to be segmented easily, either manually or semi-automatically. Vasculature has previously been used to register intra-operative data to pre-operative CT and MRI scans [64, 76, 79, 84, 87, 98]. The main applications of vessel-based registration have been the brain [76, 87], liver [64, 79, 84] and carotid arteries [98]. Many of these registration algorithms rely on pre-processed pre-operative data. The cortical vessels of the brain can orient the surgeon during the localization of the tumour, identify eloquent cortices and account for brain deformation during image-guided surgery. One approach was to use video tracking of the vessels [76]. This study compared the accuracy of video-tracked vessel-based registration to skin-to-skin fiducial registration. The pre-operative MRI contrast images were segmented using thresholding-based techniques and manual editing. A rigid transformation was computed from the video data and the pre-processed MRI images. A video overlay was used after registration to provide the surgeon with a 2D projection image of the patient and the critical structures.
In phantom studies the fiducial registration error was reported as 0.9 ± 1.1 mm, and in clinical cases the error was 2.3 ± 1.3 mm. These results were better than the skin-to-skin registration error (11.1 ± 10.5 mm target registration error). In addition, vessel-based registration can also be used to correct for cortical brain shift. Tracked 2D ultrasound was used to register MRI images of the brain [87]. Color was used to segment the ultrasound Doppler signal (indicating blood flow and vessel location) from combined B-mode/Doppler images. Due to the effects of wall motion, the size of the vessel is often overestimated (known as Doppler blooming); to counteract this effect, centerlines were extracted after segmentation. Similar centerline 'skeletons' were extracted from the MRI images. The two sets of centerlines were registered with a modified ICP [15] algorithm that includes outlier rejection. Thin-plate splines were then used to correct for non-rigid transformation parameters. In phantom studies this algorithm was able to correct 7.5 mm of deformation to within 1.6 mm. Image guidance during liver surgery is another application in which vessels are used for registration [64, 79, 84]. The vessels of the liver are very prominent features in tumour resection and surgical navigation. Different types of tracked ultrasound have been implemented: optical tracking [79], a motorized track [84] and a mechanical 3D transducer [64]. Similar to [87], Lange et al. use a modified ICP algorithm in combination with a non-rigid registration method, in this case multi-level B-splines. Pre-operative MRI or CT and intra-operative 3D ultrasound scans are segmented using a region growing method, and centerlines are extracted using the TEASAR algorithm [92]. Manual registration was required to place the models within the capture range of the registration algorithm. A Root Mean Square (RMS) difference of 3.4-5.7 mm was reported between rigid and non-rigid registration.
A comparison of vessel-based registration to a 'bronze standard' combination of ICP and manual alignment has also been undertaken [79]. This study describes a method of registering MRI images to 3D ultrasound (tracked 2D ultrasound using the Polaris optical tracking system). The MRI and ultrasound images are converted into probability maps that describe the probable vessel locations within the image. These volumes are then registered based on cross-correlation. Using liver images of patients, a final target registration error of 3.6 mm was reported, an improvement over the error of 15.4 mm found with manual registration. A more general method of registration of vascular images involves creating a model of the vessels (as tubes) in the source image and registering it directly to the target image, both rigidly [8] and in combination with deformable registration [55]. These methods also use centerline modeling but include other metrics to describe the vessel, such as approximate radius and branching patterns. Sub-voxel errors have been achieved using phantom data, and the algorithms have been tested on patient data of brain and liver vasculature. This chapter presents a vessel-based registration method between pre-operative CT and real-time ultrasound. A surface model of the vessels in CT is created through manual segmentation, and a centerline representation of the ultrasound is created in real time based on a combination of B-mode and Doppler imaging. A rigid registration method based on ICP was used to align the two modalities using both phantom and human data.

5.2  Registration Method

Because this registration method was designed for use during partial nephrectomy, it was first necessary to determine whether the vessels of the kidney could be seen and scanned properly during kidney surgery.
In order to gain insight into the properties of the kidney vessels, ultrasound images were taken from several patients with a traditional laparoscopic ultrasound transducer during radical nephrectomy. Images of both the internal vasculature and renal vessels were recorded. All imaging was performed with default vascular parameters by a surgeon not specialized in ultrasonography, using a traditional laparoscopic transducer and machine (Philips HDI 5000, Philips Healthcare, Andover, MA). For feasibility testing of the registration method, a combination of components was used, including an ultrasound machine, an EM tracker and the associated data and image processing software. A PC-based Ultrasonix SonixRP ultrasound machine (Ultrasonix Corp, Richmond, BC) with a research interface allowed access to the imaging stream and the imaging parameters during testing. This interface allowed separate B-mode and power Doppler images to be streamed directly. For all phantom work, a linear 10 MHz vascular transducer (L12-5) was used. This transducer array has 128 elements and is approximately 38 mm in length. A miniBird sensor (Ascension Technologies, Burlington, VT) was rigidly attached to the transducer. The resolution of the sensor is specified as 0.5 mm and 0.1 degrees for position and orientation, with an accuracy of 1.8 mm and 0.5 degrees. A z-wire calibration [72] was used to find the homogeneous transformation between the sensor and the ultrasound image. All processing was completed using the on-board computer of the ultrasound machine. The method used in the following experiments combines automatic B-mode ultrasound vessel segmentation, manual segmentation of pre-operative scans and ICP registration. Previous work by Julian Guerrero [46, 47] was used as a basis for the semi-automatic segmentation of vessels in B-mode ultrasound.

Figure 5.1: The SonixRP machine used for this experiment.
This work was originally designed to segment the vessels of the lower leg in order to perform evaluation scans for deep vein thrombosis, i.e., to locate blood clots in the veins of the lower leg. In this approach, the transverse vessel is segmented in real time, at a frame rate of 10-16 Hz. Features are tracked over successive frames using a temporal Kalman filter and position and orientation measurements taken from an electromagnetic (EM) sensor rigidly attached to the ultrasound transducer. All image acquisition was performed using the Ultrasonix libraries. The locations of the image plane and image features are known with respect to the frame of the EM transmitter. The vessel segmentation method uses a Star algorithm [42] and a Kalman filter. A seed point inside the vessel is used to initialize the algorithm, and intensity values along radii are used to detect the vessel walls. The distance from the seed point to the detected vessel wall is used as the input for the spatial Kalman filter, which models the radius as a function of the ray angle. An elliptical model is used to describe the vessel contour, instead of the traditional circular model, with the ellipse parameters estimated by an extended Kalman filter. The vessel tracking method allows the original seed point to be reused over many frames and was developed to accommodate feature motion. The seed point is estimated using a temporal Kalman filter that assumes the vein center moves with constant velocity from frame to frame. The position of the seed point in the previous frame is used to predict the location of the seed point in the current frame. In addition, a mask is created by sub-sampling the area around the predicted seed point. The location of minimum brightness is found, corresponding to the center of the vessel, and this location is used as the new seed point.
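The radial wall search at the core of the Star algorithm can be sketched as below. This is a simplified illustration (the real method adds the spatial Kalman filter and the elliptical contour model, and the threshold and names here are assumptions):

```python
import math

import numpy as np


def star_contour(image, seed, n_rays=32, max_radius=60, step_thresh=25):
    """Cast rays from a seed point and mark the vessel wall on each ray.

    image: 2D grey-level array; seed: (x, y) inside the vessel lumen.
    Walks outward along each ray and stops at the first strong
    dark-to-bright intensity step (the lumen is dark, the wall is
    bright).  Returns one (x, y) wall point per ray, or None where no
    step is found within max_radius.
    """
    h, w = image.shape
    sx, sy = seed
    contour = []
    for k in range(n_rays):
        theta = 2.0 * math.pi * k / n_rays
        dx, dy = math.cos(theta), math.sin(theta)
        prev = int(image[int(sy), int(sx)])
        hit = None
        for r in range(1, max_radius):
            x = int(round(sx + r * dx))
            y = int(round(sy + r * dy))
            if not (0 <= x < w and 0 <= y < h):
                break
            cur = int(image[y, x])
            if cur - prev > step_thresh:
                hit = (x, y)
                break
            prev = cur
        contour.append(hit)
    return contour
```

In the thesis method the resulting radius-per-angle measurements feed the spatial Kalman filter; here they are simply returned.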
The contour points are then located in a 3D world frame using the EM sensor on the transducer. All valid contours can be used to create a 3D model of the vessel. A mesh is created between contours to visualize the vessel model. This mesh was not used as part of the registration method, but served as a good indication of how well the algorithm was tracking the vessel. The seed points for the contour identification can be selected manually or determined through the use of the Doppler image data. When the Doppler image is used, the vessel and the seed point can be selected automatically. The Doppler image is also used throughout the scan to guide the segmentation. Multiple vessels can also be tracked in this manner. As the vessel is scanned and segmented, the identified contours are recorded. The centroid of each contour is computed as the average of all the points that form the vessel wall contour; these centroids are the ultrasound-side input to the registration method. The centroids from the contours form a skeleton of the vessel that is used during the final registration with CT. The vessels from the pre-operative scan, usually CT angiography, can be processed before the surgery. For these experiments, all segmentation was completed manually. Stradwin, software developed at Cambridge University, was used for this purpose. From the manual segmentation, a closed surface model of the vessel could be created [104, 105]. The point-based skeleton of the vessel created from intra-operative ultrasound is then registered to the surface formed from the pre-operative scan through an ICP method [15]. ICP is generally used to register two point clouds, ai and bi. In this case, the two point clouds are constructed from the skeleton points defined in ultrasound and the points that form the surface defined in CT. The method can handle the full six DOF.
The goal of the registration is to find the rotation R and translation p that minimize the error between the points of the two clouds, ai and bi. The total error is calculated as

η = ∑i ei · ei,  (5.1)

where the error of each point is represented by

ei = (R · ai + p) − bi.  (5.2)

The first step in the registration matches the centers of mass of the two clouds, where ā and b̄ represent the centers of the point clouds:

p = b̄ − R · ā.  (5.3)

The method iteratively translates and rotates one point cloud until the distance from points in one cloud to the closest points in the other cloud is minimized. An initial guess of R is used, and rotations from subsequent iterations are concatenated until the rotation matrix R that minimizes the errors between ai and bi is found. The final value of R is determined when either a minimum error or a maximum number of iterations is reached. The error in this final measurement can be used as a measure of the success of the registration. Because this method involves the registration of a point skeleton to a surface, it is understood that the remaining error will be high and not a reflection of registration accuracy. Indeed, the method works such that the skeleton will finally lie in the center of the tube formed by the vessel surface, thus minimizing the distance to all the closest points, but the final distance will be close to the radius of the vessel. The registration was implemented using the Visualization Toolkit (Kitware, Clifton Park, New York), and this implementation accepts surface, line and point representations of the data. Figure 5.2 presents the steps of the registration method. This method was used as a preliminary rigid registration. Because the pre-operative scan has a wider field of view, and additional features can be identified and more carefully segmented, it is taken to be close to the ground truth.
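Equations (5.1)-(5.3) can be assembled into a minimal point-to-point ICP. The sketch below uses brute-force closest points and a standard SVD (Kabsch) solution for R at each iteration; it illustrates the idea and stands in for, rather than reproduces, the VTK implementation used here:

```python
import numpy as np


def icp(source, target, n_iters=50, tol=1e-9):
    """Rigid point-to-point ICP following Equations (5.1)-(5.3).

    source: (N, 3) skeleton points (ultrasound side); target: (M, 3)
    surface points (CT side).  Returns (R, p, eta): rotation,
    translation and the final summed squared error.
    """
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)
    R, p = np.eye(3), np.zeros(3)
    eta_prev = np.inf
    eta = np.inf
    for _ in range(n_iters):
        moved = src @ R.T + p
        # Closest target point for every moved source point (brute force).
        d2 = ((moved[:, None, :] - tgt[None, :, :]) ** 2).sum(axis=2)
        matched = tgt[np.argmin(d2, axis=1)]
        eta = ((moved - matched) ** 2).sum()      # Equation (5.1)
        if eta_prev - eta < tol:
            break
        eta_prev = eta
        # Best rigid fit of src onto the matched points (Kabsch / SVD).
        a_bar, b_bar = src.mean(axis=0), matched.mean(axis=0)
        H = (src - a_bar).T @ (matched - b_bar)
        U, _, Vt = np.linalg.svd(H)
        sign = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T
        p = b_bar - R @ a_bar                     # Equation (5.3)
    return R, p, eta
```

As noted above, when the source is a centerline skeleton and the target is a tube surface, the converged eta reflects the vessel radius rather than the registration accuracy.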
The intra-operative image may be subject to deformations caused by the pressure of the transducer, changes in patient position, and insufflation. These deformations may cause the vessel to take a slightly different shape or be diverted slightly. Since the information comes from two different imaging modalities, it is also understood that the vessels will not look identical, since the modalities rely on different tissue properties to form an image. The use of the centroid points from ultrasound minimizes the effect of slight differences between the intra-operative and pre-operative imaging positions and reduces image-based differences. User initialization is needed to place the source and target within the capture range of the registration algorithm, since it will converge monotonically to the nearest local minimum of the distance metric. Robotic kidney surgery has a very predictable set-up, in which the robot is positioned behind the patient, who is placed in the flank position, and the ultrasound machine (and EM transmitter) is next to the robot, towards the patient's head. The expected orientation of the patient with respect to the EM transmitter can be calculated before the initial ultrasound images are taken, and the approximate initial transformation between the two coordinate frames can be determined.

Figure 5.2: Overall flow of the registration method. The top represents the steps using the ultrasound images and the bottom represents the steps using the CT images. These inputs were registered using an ICP algorithm.

Figure 5.3: The ultrasound flow phantom used for these studies. Top left: Power Doppler. Top middle: B-mode images. Top right: CT imaging. Bottom: photograph of the phantom.

5.3  Experimental Design

A set of experiments was undertaken to test the validity of the registration method. Two models were used: one phantom model and one human model.
Both models were chosen to mimic the bifurcations seen in the vessels of the kidney: the point where the renal artery branches from the aorta, and the point where the renal artery bifurcates before entering the kidney proper. A phantom was custom designed for these experiments and constructed by Blue Phantom (Redmond, Washington) [Figure 5.3]. This phantom consists of a single vessel that bifurcates approximately half way through the phantom. The diameter of the vessel entering the phantom is 6 mm. After the bifurcation point, the vessel splits into two vessels, 6 mm and 4 mm in diameter, respectively. In order to simulate flow through the human arteries, a peristaltic pump, the Fisher Scientific (Waltham, Massachusetts) mini pump with variable flow, was used to push artificial blood (a fluid with similar ultrasound properties) through the vessels at 4.0 to 85.0 mL/min. In this way a pulsating Doppler signal was achieved throughout the length of the phantom. The fluid used within the phantom was provided by Blue Phantom and contains scattering material to mimic the ultrasound texture of human blood. A high-resolution CT scan (0.3 mm isotropic resolution) was taken of the phantom before the experiments began. Eighteen 2-mm steel spheres were secured under the phantom and served as visual fiducial landmarks in both the CT scan and the ultrasound image [Figure 5.4], used to quantify the error in registration. All eighteen fiducials were easily located in the CT image. During the ultrasound scans of the phantom, on average, 10 fiducials per volume were identified. These fiducials were then matched to those located in the CT scan. Similar spheres at the air-tissue interface have been employed for fiducial registration [113]. The localization error for these fiducials was found in both ultrasound and CT. The second model used for registration validation was a leg model.

Figure 5.4: Example images of the phantom in CT (left) and ultrasound (right).
The algorithm was tested using the anterior and posterior tibial arteries and the popliteal artery [Figure 5.5]. A surface model of the vessels was created using manual segmentation of a 3D ultrasound scan created with Stradwin [104]. 3D ultrasound was used instead of CT to avoid exposing the volunteer to radiation. The vessels were scanned a second time and segmented in real time using the contour-based Kalman filtering algorithm described above. Branching points in the vessel structures were used as target registration points. An example ultrasound image of the vessels and the scan area can be seen in Figure 5.5. When using the leg model, artificial fiducials would be impractical to implant into a volunteer. Therefore, the fiducials used for this model were bifurcation points of the vessels themselves. The major vessel bifurcation was used, along with two smaller vessel bifurcations that could easily be located in ultrasound images. These three anatomical fiducials were used when calculating the error of the registration, and the localization error for these bifurcations was calculated.

Figure 5.5: Left: Example ultrasound image of the leg. Right: Schematic drawing of the ultrasound scan area.

Table 5.1: The fiducial localization error for each fiducial type used during the vessel registration experiments.

                                   Localization Error (mm)
  CT Fiducials                     0.29
  Ultrasound Fiducials             0.83
  Ultrasound Vessel Bifurcations   1.11

Figure 5.6: Example intra-operative power Doppler images of the kidney vessels. Left: branching of the renal artery. Middle: renal artery and vein. Right: internal vessels of the kidney.

5.4  Registration Results

First, to validate that the kidney vessels could be located and defined in intra-abdominal ultrasound images, scans were collected from five patients undergoing radical nephrectomy.
Sample ultrasound images of the kidney taken with a laparoscopic transducer highlight the high image quality available with this method and the ease of vessel detection using Doppler [Figure 5.6]. During the experiments, a series of twelve ultrasound scans was taken of the vessel phantom [Figure 5.7]. For each volume, approximately ten fiducials were located during the ultrasound scan and used to determine the error of registration. A series of seven scans of the leg model was taken, and the three anatomical fiducials were located in each scan [Figure 5.8]. The RMS distance from the fiducial location in ultrasound to that in the pre-operative scan after registration is reported as the target registration error (TRE). For the phantom model, the RMS error over all landmarks was 3.2 mm. The distribution of these target registration errors ranged from 0.73 to 7.07 mm [Figure 5.9]. The average error along each axis was also found, where the x-axis is in the direction of flow through the vessels and the y-axis is in plane with the vessel branch. The average error for all fiducials was 1.9 ± 1.7 mm, 1.3 ± 0.4 mm and 1.04 ± 0.41 mm in the x, y and z directions, respectively. As part of the experiment, the fiducial localization error was also found from the images: the RMS error for fiducial localization was 0.83 mm in ultrasound and 0.29 mm in CT.

Figure 5.7: Example of the completed registration. The blue surface model represents the CT model of the vessel phantom and the red points are the fiducial locations. The series of white points represents the ultrasound contour centroids, paired with the black fiducial points (white points appear light blue when inside the blue surface).

Figure 5.8: Final registration of the anterior and posterior tibial arteries and the popliteal artery. The surface model was created from 3D ultrasound.
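The reported RMS target registration error is simply the root-mean-square distance between matched fiducial pairs after applying the recovered rigid transform; a sketch (array layout assumed):

```python
import numpy as np


def rms_tre(us_fiducials, ct_fiducials, R, p):
    """RMS target registration error over matched fiducial pairs.

    us_fiducials, ct_fiducials: (N, 3) arrays of corresponding
    fiducial positions; R, p: the rigid transform returned by the
    registration (mapping ultrasound into the CT frame).
    """
    mapped = np.asarray(us_fiducials, dtype=float) @ R.T + p
    err = mapped - np.asarray(ct_fiducials, dtype=float)
    return float(np.sqrt(np.mean((err ** 2).sum(axis=1))))
```

The same expression, applied to repeated localizations of each fiducial, gives the fiducial localization errors quoted above.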
An overall RMS error of 7.5 mm was calculated using the bifurcation points of the vessels during the scans of the leg model. The larger error in these results may be due to patient motion, which was not present in the phantom study. The small vessels are also likely to deform to a greater extent. In addition to these sources of error, the localization of the anatomical fiducials was much more difficult to perform consistently. The fiducial localization error for the vessel bifurcations was 1.11 mm.

Figure 5.9: Distribution of registration errors for a series of 12 CT to ultrasound vessel registrations using the vessel phantom.

5.5  Conclusions

Intra-operative ultrasound scans were taken to determine the visibility of the kidney vasculature. The vessels of the kidney are quite distinct in ultrasound, and their high flow allows high-quality Doppler ultrasound images to be created. A registration method between ultrasound and CT, using automatic vessel segmentation in ultrasound and manually segmented CT scans, was tested using a flow phantom and the vessels of the lower leg. The results of the registration are within the range of other registration methods and should provide the surgeon with useful information regarding the location of critical structures. We propose the use of ultrasound to CT registration for the initial stages of dissection, to give the surgeon a better sense of orientation and direction. This is even more important for surgeons inexperienced with the da Vinci robot.

Chapter 6

Conclusions

Integration of ultrasound into robotic partial nephrectomy has the potential to influence the ease and outcomes of this procedure. Ultrasound is a real-time, noninvasive method to image the patient during the operation.
While it is currently used only minimally during robotic surgery, the expanded usability and the ability to control and manipulate the transducer directly from the surgeon's console will potentially increase the use of ultrasound during partial nephrectomy and other robotic laparoscopic surgeries. Intra-operative guidance can provide valuable information to the surgeon regarding the location of the kidney, the tumour and the vessels that need to be delicately dissected. Both the direct view of the ultrasound image and a wider view of the surgical field could be shown after registration to a patient's pre-operative CT scan. A system for integration of a 'pick-up' intra-abdominal ultrasound transducer for robotic surgery has been proposed, and initial feasibility tests have been completed. With the new ultrasound transducer, intra-abdominal ultrasound is feasible throughout the duration of the procedure. The transducer can be sterilized and has provisions for expanded usability with other laparoscopic tools. The vessels of the kidney are quite distinct in ultrasound, and their high flow allows high-quality Doppler ultrasound images to be created. This section summarizes the contributions and findings of this thesis.

6.1  Thesis Contributions

• A study to determine the kidney motion between the patient's diagnostic CT scan in the supine position and the flank intra-operative position was completed. Ten patients were included in the study and a CT scan was taken in each position. Each kidney was rigidly registered with respect to the spine and the translational and rotational components were examined. After observing the wide range of motion that the kidney could undergo during this change in patient position, it was recommended that patients undergo diagnostic CT in the same position as their potential operation to minimize error in image guidance. To our knowledge this is the first study that examines organ motion caused by differences in patient positioning.
• A new ultrasound transducer was designed, built and tested. This new transducer was designed to interface specifically with the da Vinci robotic system, giving the operating surgeon full control over the ultrasound imaging and increasing the potential of robot-assisted surgery. This is the first transducer that will be available throughout the duration of the surgery without requiring tool changes. Several methods of tracking the transducer are available, including EM tracking, optical tracking and tracking through the use of robotic forward kinematics. Accurate tracking of the transducer will facilitate the construction of 3D ultrasound volumes and ultrasound registration.

• Intra-operative ultrasound was performed on kidney vessels during laparoscopic surgery. The aorta, vena cava, renal vein and renal artery were identified and could be scanned along their length. These ultrasound scans revealed the kidney vessels as distinctive features that are easily identified with Doppler ultrasound. Several patients were scanned and the vessels were found to be consistent throughout.

• A method of ultrasound to CT registration was developed and tested on phantom and human models. This method utilizes the vasculature as a registration feature, since the vessels play a prominent role in the surgery and are important to localize. Automatically segmented ultrasound vessel centerline points are registered to manually segmented CT surfaces. Results of this registration method in both phantom and human models show that the registration based on vessels is feasible and additional testing with intra-operative data is warranted. This registration will be used in the future to accurately overlay pre-operative imaging information onto the surgical scene, providing the surgeon with vital navigational information.

6.2 Future Work

Future work will include patient trials with the new transducer to assess its usability and functionality.
We will be working closely with surgeons throughout the trial to find any possible modifications for a second generation of the transducer. Modifications may include the overall size of the transducer, or the method and angle of grasping. Ultimately the pre-operative images will be displayed to the surgeon in an intuitive manner, and the best method of presenting the registered CT to the surgeon will be determined. Augmented reality, through different types of overlay, has had positive results [29, 102], but the method of display of the information is still a topic that needs additional study. Several groups have come up with methods ranging from alpha compositing [6] to illustrative rendering techniques [49] in order to display an image of a 3D shape. An augmented reality system will be implemented using our new ‘pick-up’ ultrasound concept, targeted at highlighting and visualizing the renal tissue and surrounding vessels, since these are important targets for dissection. In addition to the registration and display of the pre-operative CT in the augmented reality environment, direct visualization of the vessels will be implemented. For real-time vessel localization during and after dissection, the vessels can be ‘painted’ in the surgical scene using the ultrasound images. The vessels in the ultrasound image can be segmented using Doppler, since the coloured pixels can easily be discriminated from the background tissue. These pixels will be displayed to the surgeon to depict the vessel’s current location. Again, the methods of display and visualization must be carefully determined to provide the surgeon with useful information without overwhelming them or blocking their view of critical structures within the surgical scene. The validation of this method will be completed using both phantom and animal models, comparing the ultrasound output to the pre-operative CT scan or to a mechanical 3D ultrasound transducer.
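The Doppler ‘painting’ step described above amounts to isolating coloured (flow) pixels from the grayscale B-mode background. A minimal sketch of that idea, assuming the scan-converted frame arrives as an RGB array; the function name and threshold are illustrative, not the thesis implementation:

```python
import numpy as np

def doppler_vessel_mask(frame_rgb, saturation_threshold=30):
    """Return a boolean mask of coloured (Doppler flow) pixels.

    In a colour Doppler frame the B-mode background is grayscale
    (R == G == B), while flow is rendered in colour, so the spread
    between the channel maximum and minimum separates the two.
    """
    frame = frame_rgb.astype(np.int16)           # avoid uint8 wrap-around
    spread = frame.max(axis=2) - frame.min(axis=2)
    return spread > saturation_threshold

# Synthetic 4x4 frame: gray background with two coloured "flow" pixels.
frame = np.full((4, 4, 3), 80, dtype=np.uint8)
frame[1, 1] = (200, 40, 40)
frame[2, 2] = (40, 40, 200)
mask = doppler_vessel_mask(frame)
print(mask.sum())  # 2 flow pixels detected
```

The resulting mask can then be re-projected into the camera view to depict the vessel at its current location.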
The accuracy of the reconstruction is heavily dependent on the accuracy of the tracking methods for the transducer. An additional consideration when projecting information onto the surgical scene is that after a few moments, the tissue may deform or the camera may move. To keep the navigational information displayed correctly without requiring a new ultrasound scan or an additional registration, the location of the displayed information must be held constant with respect to the tissue. This can be implemented using tissue tracking techniques, where surface features of the tissue are tracked using computer vision algorithms [75]. As the tissue moves with respect to the camera, the features on the surface can be tracked and the display of underlying structures manipulated accordingly. Because the new transducer design incorporates multiple methods of tracking, there are several ways to combine optical tracking, electromagnetic tracking and tracking via da Vinci joint angles. A decision process must be created to determine when a particular tracking method will be used, and how much it can be trusted. Some of the combinations will be binary: optical tracking cannot be used if the transducer is out of view or the markers are obscured, and, in the same vein, tracking via joint angles can only be used when the transducer is being held by the da Vinci instruments. The electromagnetic tracker can be used at all times but is subject to noise and distortion from electrical fields and nearby metallic objects. Feuerstein [40] describes a method of using magneto-optical tracking to find the location of a laparoscopic transducer in the surgical field. Redundant tracking methods are used and a distrust level is assigned to each output based on distortion estimation and other factors. Based on these distrust levels, the final position and orientation of the transducer is determined.
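A decision process of this kind can be sketched as a simple rule-based selector: discard trackers whose binary availability conditions fail, then take the reading from the source with the lowest distrust level. The tracker names, distrust values and availability flags below are illustrative assumptions, not values from the thesis:

```python
def select_pose(readings):
    """Pick the pose from the most trusted available tracking source.

    `readings` maps a tracker name to a dict with:
      pose      -- the reported pose (any representation)
      available -- binary availability (markers visible, tool grasped, ...)
      distrust  -- estimated error level; lower is more trustworthy
    """
    usable = {name: r for name, r in readings.items() if r["available"]}
    if not usable:
        raise RuntimeError("no tracking source available")
    best = min(usable, key=lambda name: usable[name]["distrust"])
    return best, usable[best]["pose"]

readings = {
    "optical":   {"pose": "T_opt", "available": False, "distrust": 0.1},
    "kinematic": {"pose": "T_kin", "available": True,  "distrust": 0.5},
    "em":        {"pose": "T_em",  "available": True,  "distrust": 0.3},
}
print(select_pose(readings))  # ('em', 'T_em')
```

A fuller implementation could blend the surviving readings weighted by inverse distrust rather than taking a single winner, at the cost of having to express all poses in a common frame first.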
If the errors exceed a specified level, this is brought to the surgeon’s attention through a visual warning. Similar processes must be used to reliably combine the data from the three tracking methods available while using the transducer. Several groups [16, 40] have looked into automatic detection of distortion errors in electromagnetic tracking. Both of these methods involve monitoring a known sensor position, either through a static offset between two sensors [16] or through the closed-loop error using an optical tracker [40]. The error in the known position of the sensor can be used to determine the distrust level mentioned above for the electromagnetic tracker. Another issue that must be overcome with any of the tracking methods is synchronizing the acquisition of the position information and the ultrasound image. This discrepancy can be corrected through temporal calibration. Open source software [18] is available for this purpose and will be incorporated into this project for each of the tracking methods, combined with the Ultrasonix image acquisition libraries. In addition to online estimation of the dynamic errors in the electromagnetic tracking system, we would like to be able to calibrate the system for static errors caused by the proximity of the operating room table, the surgical robot and other immobile equipment. Kindratenko provides a comprehensive survey of calibration techniques [60]. We will implement a method that allows for calibration using an unstructured grid that can be calculated in real time. With this method, the EM sensor can quickly and easily be calibrated after the transducer has been introduced into the patient’s abdomen. The optical tracking will be used to calibrate the EM field. Thus, if the markers are obscured later in the surgery, or the transducer is being held in such a way that they are not visible to the camera, accurate position and orientation can still be determined.
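One way to realize calibration over an unstructured grid is scattered-data interpolation: while the optical markers are visible, each paired (EM, optical) measurement contributes a position offset at the EM-reported location, and later EM-only readings are corrected by interpolating the offsets of nearby samples. The inverse-distance weighting below is a minimal sketch of this idea under those assumptions; it is not the calibration method ultimately chosen for the system:

```python
import numpy as np

class EMFieldCorrector:
    """Correct static EM distortion from scattered (EM, optical) sample pairs."""

    def __init__(self):
        self.em_points = []   # EM-reported positions (unstructured grid nodes)
        self.offsets = []     # optical minus EM at those positions

    def add_sample(self, p_em, p_optical):
        p_em, p_optical = np.asarray(p_em, float), np.asarray(p_optical, float)
        self.em_points.append(p_em)
        self.offsets.append(p_optical - p_em)

    def correct(self, p_em, eps=1e-9):
        """Apply an inverse-distance-weighted offset to a new EM reading."""
        p = np.asarray(p_em, float)
        pts = np.array(self.em_points)
        d = np.linalg.norm(pts - p, axis=1)
        if d.min() < eps:                      # reading coincides with a sample
            return p + self.offsets[int(d.argmin())]
        w = 1.0 / d**2
        offset = (w[:, None] * np.array(self.offsets)).sum(axis=0) / w.sum()
        return p + offset

cal = EMFieldCorrector()
cal.add_sample([0, 0, 0], [1, 0, 0])    # field shifts this region by +1 mm in x
cal.add_sample([10, 0, 0], [11, 0, 0])
print(cal.correct([5, 0, 0]))           # interpolated correction, about [6, 0, 0]
```

Because each sample is independent, new grid nodes can be added incrementally whenever the optical markers come back into view, which suits the real-time, in-situ calibration described above.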
Although the registration from ultrasound to CT using vasculature has proven to be feasible, other approaches that use intra-abdominal ultrasound will be investigated in the event that the patient’s vessels cannot provide the necessary information. The intra-abdominal transducer described in this thesis has the potential to create accurate, high resolution 3D reconstructions of the kidney tissue. These sub-volumes of the patient anatomy could be used to register the intra-operative data directly to pre-operative CT, to 3D ultrasound volumes that are taken during surgery through the patient’s back, or to simulated ultrasound that is constructed from pre-operative CT.

Bibliography

[1] D. Abawi, J. Bienwald, and R. Dorner. Accuracy in optical tracking with fiducial markers: an accuracy function for ARToolKit. In Proceedings of the 3rd IEEE/ACM International Symposium on Mixed and Augmented Reality, pages 260–261. IEEE Computer Society, 2004. → pages 48, 50

[2] M. Alp, M. Dujovny, M. Misra, F. Charbel, and J. Ausman. Head registration techniques for image-guided surgery. Neurological Research, 20(1):31, 1998. → pages 8

[3] H. Altamar, R. Ong, C. Glisson, D. Viprakasit, M. Miga, S. Herrell, and R. Galloway. Kidney deformation and intraprocedural registration: A study of elements of image-guided kidney surgery. Journal of Endourology, 25(3):511–517, 2011. → pages 8, 21

[4] T. Aruga, J. Itami, M. Aruga, K. Nakajima, K. Shibata, T. Nojo, S. Yasuda, T. Uno, R. Hara, K. Isobe, et al. Target volume definition for upper abdominal irradiation using CT scans obtained during inhale and exhale phases. International Journal of Radiation Oncology Biology Physics, 48(2):465–469, 2000. → pages 30

[5] M. Audette, F. Ferrie, and T. Peters. An algorithmic overview of surface registration techniques for medical imaging. Medical Image Analysis, 4(3):201–217, 2000. → pages 8

[6] N. Ayache. Epidaure: A research project in medical image analysis, simulation, and robotics at INRIA.
IEEE Transactions on Medical Imaging, 22(10):1185–1201, 2003. → pages 9, 68

[7] A. Ayav, L. Bresler, L. Brunaud, and P. Boissel. Early results of one-year robotic surgery using the da Vinci system to perform advanced laparoscopic procedures. Journal of Gastrointestinal Surgery, 8(6):720–726, 2004. → pages 13

[8] S. Aylward, J. Jomier, S. Weeks, and E. Bullitt. Registration and analysis of vascular images. International Journal of Computer Vision, 55(2):123–138, 2003. → pages 54

[9] K. Badani, F. Muhletaler, M. Fumo, S. Kaul, J. Peabody, M. Bhandari, and M. Menon. Optimizing robotic renal surgery: The lateral camera port placement technique and current results. Journal of Endourology, 22(3):507–510, 2008. → pages 13

[10] J. Balter, R. Ten Haken, T. Lawrence, K. Lam, and J. Robertson. Uncertainties in CT-based radiation therapy treatment planning associated with patient breathing. International Journal of Radiation Oncology Biology Physics, 36(1):167–174, 1996. → pages 30

[11] D. Barbot. Improved staging of liver tumors using laparoscopic intraoperative ultrasound. Journal of Surgical Oncology, 64:63–67, 1997. → pages 6

[12] W. Bargar, A. Bauer, and M. Börner. Primary and revision total hip replacement using the Robodoc (R) system. Clinical Orthopaedics and Related Research, 354:82, 1998. → pages 12, 21

[13] M. Baumhauer, M. Feuerstein, H. Meinzer, and J. Rassweiler. Navigation in endoscopic soft tissue surgery: Perspectives and limitations. Journal of Endourology, 22(4):751–766, 2008. → pages 8, 21

[14] R. Berglund, I. Gill, D. Babineau, M. Desai, and J. Kaouk. A prospective comparison of transperitoneal and retroperitoneal laparoscopic nephrectomy in the extremely obese patient. British Journal of Urology International, 99(4):871–874, 2007. → pages 13

[15] P. Besl and N. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 239–256, 1992. → pages 25, 53, 57

[16] T. Bien, M. Kaiser, and G. Rose.
Conductive distortion detection in AC electromagnetic tracking systems. In Biomedical Engineering, 2011. → pages 69

[17] J. Binder and W. Kramer. Robotically-assisted laparoscopic radical prostatectomy. British Journal of Urology International, 87(4):408–410, 2001. → pages 13

[18] J. Boisvert, D. Gobbi, S. Vikal, R. Rohling, G. Fichtinger, and P. Abolmaesumi. An open-source solution for interactive acquisition, processing and transfer of interventional ultrasound images. In The MIDAS Journal - Systems and Architectures for Computer Assisted Interventions, 2008. → pages 70

[19] E. Brandner, A. Wu, H. Chen, D. Heron, S. Kalnicki, K. Komanduri, K. Gerszten, S. Burton, I. Ahmed, and Z. Shou. Abdominal organ motion measured using 4D CT. International Journal of Radiation Oncology Biology Physics, 65(2):554–560, 2006. → pages 30

[20] K. Brock, M. Sharpe, L. Dawson, S. Kim, and D. Jaffray. Accuracy of finite element model-based multi-organ deformable image registration. Medical Physics, 32:1647, 2005. → pages 22

[21] R. Budde, T. Dessing, R. Meijer, P. Bakker, C. Borst, and P. Grundeman. Robot-assisted 13MHz epicardial ultrasound for endoscopic quality assessment of coronary anastomoses. Interactive Cardiovascular and Thoracic Surgery, 3(4):616, 2004. → pages 35

[22] R. Budde, R. Meijer, P. Bakker, C. Borst, and P. Grundeman. Endoscopic localization and assessment of coronary arteries by 13MHz epicardial ultrasound. The Annals of Thoracic Surgery, 77(5):1586–1592, 2004. → pages 35

[23] T. Butz, S. Warfield, K. Tuncali, S. Silverman, E. van Sonnenberg, F. Jolesz, and R. Kikinis. Pre- and intra-operative planning and simulation of percutaneous tumor ablation. In Medical Image Computing and Computer-Assisted Intervention, pages 395–416, 2000. → pages 8

[24] J. Byrn, S. Schluender, C. Divino, J. Conrad, B. Gurland, E. Shlasko, and A. Szold. Three-dimensional imaging improves surgical performance for both novice and experienced operators using the da Vinci Robot System.
The American Journal of Surgery, 193(4):519–522, 2007. → pages 12

[25] T. Carter, M. Sermesant, D. Cash, D. Barratt, C. Tanner, and D. Hawkes. Application of soft tissue modelling to image-guided surgery. Medical Engineering & Physics, 27(10):893–909, 2005. → pages 21

[26] E. Castilla, L. Liou, N. Abrahams, A. Fergany, L. Rybicki, J. Myles, and A. Novick. Prognostic importance of resection margin width after nephron-sparing surgery for renal cell carcinoma. Urology, 60(6):993–997, 2002. → pages 15

[27] J. Catheline. A comparison of laparoscopic ultrasound versus cholangiography in the evaluation of the biliary tree during laparoscopic cholecystectomy. European Journal of Ultrasound, 10(1):1–9, 1999. → pages 6

[28] Y. Cheng, T. Huang, C. Chen, T. Lee, T. Chen, Y. Chen, P. Liu, Y. Chiang, H. Eng, C. Wang, et al. Intraoperative Doppler ultrasound in liver transplantation. Clinical Transplantation, 12(4):292–299, 1998. → pages 7

[29] C. Cheung, C. Wedlake, J. Moore, S. Pautler, A. Ahmad, and T. Peters. Fusion of stereoscopic video and laparoscopic ultrasound for minimally invasive partial nephrectomy. In Proceedings of SPIE, volume 7261, pages 09–19, 2009. → pages 9, 14, 68

[30] B. Davies. A review of robotics in surgery. Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine, 214(1):129–140, 2000. → pages 3

[31] S. Davies, A. Hill, R. Holmes, M. Halliwell, and P. Jackson. Ultrasound quantitation of respiratory organ motion in the upper abdomen. British Journal of Radiology, 67(803):1096, 1994. → pages 30

[32] S. De Buck, J. Van Cleynenbreugel, I. Geys, T. Koninckx, P. Koninck, and P. Suetens. A system to support laparoscopic surgery by augmented reality visualization. In Medical Image Computing and Computer-Assisted Intervention, pages 691–698, 2001. → pages 8, 9

[33] T. De Kler. Integration of the ARToolKitPlus optical tracker into the Personal Space Station. 2007. → pages 48, 49

[34] L. Deane, H. Lee, G. Box, O.
Melamud, D. Yee, J. Abraham, D. Finley, J. Borin, E. McDougall, R. Clayman, et al. Robotic versus standard laparoscopic partial/wedge nephrectomy: a comparison of intraoperative and perioperative results from a single institution. Journal of Endourology, 22(5):947–952, 2008. → pages 13, 15

[35] M. Desai, I. Gill, A. Ramani, M. Spaliviero, L. Rybicki, and J. Kaouk. The impact of warm ischaemia on renal function after laparoscopic partial nephrectomy. British Journal of Urology International, 95(3):377–383, 2005. → pages 2, 14

[36] M. Draney, C. Zarins, and C. Taylor. Three-dimensional analysis of renal artery bending motion during respiration. Journal of Endovascular Therapy, 12(3):380–386, 2005. → pages 30

[37] R. Ewers, K. Schicho, G. Undt, F. Wanschitz, M. Truppe, R. Seemann, and A. Wagner. Basic research and 12 years of clinical experience in computer-assisted navigation technology: a review. International Journal of Oral and Maxillofacial Surgery, 34(1):1–8, 2005. → pages 8, 21

[38] J. Fanning, B. Fenton, and M. Purohit. Robotic radical hysterectomy. American Journal of Obstetrics and Gynecology, 198(6):649, 2008. → pages 13

[39] M. Feuerstein, T. Reichl, J. Vogel, A. Schneider, H. Feussner, and N. Navab. Magneto-optic tracking of a flexible laparoscopic ultrasound transducer for laparoscope augmentation. In Medical Image Computing and Computer-Assisted Intervention, pages 458–466, 2007. → pages 9, 11

[40] M. Feuerstein, T. Reichl, J. Vogel, J. Traub, and N. Navab. New approaches to online estimation of electromagnetic tracking errors for laparoscopic ultrasonography. Computer Aided Surgery, 13(5):311–323, 2008. → pages 9, 11, 69, 70

[41] M. Fiala. Designing highly reliable fiducial markers. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1317–1324, 2010. → pages 48

[42] N. Friedland and D. Adam. Automatic ventricular cavity boundary detection from sequential ultrasound images using simulated annealing.
IEEE Transactions on Medical Imaging, 8(4):344–353, 1989. → pages 56

[43] H. Fuchs, M. Livingston, R. Raskar, D. Colucci, K. Keller, A. State, J. Crawford, P. Rademacher, S. Drake, and A. Meyer. Augmented reality visualization for laparoscopic surgery. Medical Image Computing and Computer-Assisted Intervention, pages 934–943, 1998. → pages 8

[44] K. Fuchs. Minimally invasive surgery. Endoscopy, 34(2):154–159, 2002. → pages 1

[45] D. Gering, A. Nabavi, R. Kikinis, W. Grimson, N. Hata, P. Everett, F. Jolesz, and W. Wells. An integrated visualization system for surgical planning and guidance using image fusion and interventional imaging. In Medical Image Computing and Computer-Assisted Intervention, pages 809–819, 1999. → pages 8

[46] J. Guerrero, E. Salcudean, A. McEwen, B. Masri, and S. Nicolaou. System for deep venous thrombosis detection using objective compression measures. IEEE Transactions on Biomedical Engineering, 53(5):845–854, 2006. → pages 55

[47] J. Guerrero, S. Salcudean, J. McEwen, B. Masri, and S. Nicolaou. Real-time vessel segmentation and tracking for ultrasound imaging applications. IEEE Transactions on Medical Imaging, 26(8):1079–1090, 2007. ISSN 0278-0062. → pages 55

[48] G. Haber and I. Gill. Laparoscopic partial nephrectomy: Contemporary technique and outcomes. European Urology, 49(4):660–665, 2006. → pages 2

[49] C. Hansen, J. Wieferich, F. Ritter, C. Rieder, and H. Peitgen. Illustrative visualization of 3D planning models for augmented reality in liver surgery. International Journal of Computer Assisted Radiology and Surgery, 5(2):133–141, 2010. → pages 68

[50] D. Hill, P. Batchelor, M. Holden, and D. Hawkes. Medical image registration. Physics in Medicine and Biology, 46:1–45, 2001. → pages 21, 22

[51] G. Hubens, H. Coveliers, L. Balliu, M. Ruppert, and W. Vaneerdeweg. A performance study comparing manual and robotically assisted laparoscopic surgery using the da Vinci system. Surgical Endoscopy, 17(10):1595–1599, 2003.
→ pages 12, 13, 14

[52] S. Hughes, T. D’Arcy, D. Maxwell, W. Chiu, A. Milner, J. Saunders, and R. Sheppard. Volume estimation from multiplanar 2D ultrasound images using a remote electromagnetic position and orientation sensor. Ultrasound in Medicine & Biology, 22(5):561–572, 1996. ISSN 0301-5629. → pages 11, 45

[53] H. Iseki, Y. Masutani, M. Iwahara, T. Tanikawa, Y. Muragaki, T. Taira, T. Dohi, and K. Takakura. Volumegraph: Overlaid three-dimensional image-guided navigation. Stereotactic and Functional Neurosurgery, 68(1-4):18–24, 1997. → pages 21

[54] J. Jakimowicz. Intraoperative ultrasonography in open and laparoscopic abdominal surgery: An overview. Surgical Endoscopy, 20:425–435, 2006. → pages 33

[55] J. Jomier and S. Aylward. Rigid and deformable vasculature-to-image registration: A hierarchical approach. Medical Image Computing and Computer-Assisted Intervention, pages 829–836, 2004. → pages 54

[56] J. Kaspersen, E. Sjølie, J. Wesche, J. Åsland, J. Lundbom, A. Ødegård, F. Lindseth, and T. Nagelhus Hernes. Three-dimensional ultrasound-based navigation combined with preoperative CT during abdominal interventions: A feasibility study. Cardiovascular and Interventional Radiology, 26(4):347–356, 2003. → pages 21, 22

[57] H. Kato and M. Billinghurst. Marker tracking and HMD calibration for a video-based augmented reality conferencing system. In Proceedings of 2nd IEEE and ACM International Workshop on Augmented Reality, pages 85–94, 1999. → pages 47

[58] S. Kaul, R. Laungani, R. Sarle, H. Stricker, J. Peabody, R. Littleton, and M. Menon. da Vinci-assisted robotic partial nephrectomy: Technique and results at a mean of 15 months of follow-up. European Urology, 51(1):186–192, 2007. → pages 2, 13

[59] J. Killoran, H. Kooy, D. Gladstone, F. Welte, and C. Beard. A numerical simulation of organ motion and daily setup uncertainties: Implications for radiation therapy. International Journal of Radiation Oncology Biology Physics, 37(1):213–221, 1997.
→ pages 22

[60] V. Kindratenko. A survey of electromagnetic position tracker calibration techniques. Virtual Reality, 5(3):169–182, 2000. → pages 70

[61] K. Konishi, M. Nakamoto, Y. Kakeji, K. Tanoue, H. Kawanaka, S. Yamaguchi, S. Ieiri, Y. Sato, Y. Maehara, S. Tamura, et al. A real-time navigation system for laparoscopic surgery based on three-dimensional ultrasound using magneto-optic hybrid tracking configuration. International Journal of Computer Assisted Radiology and Surgery, 2(1):1–10, 2007. → pages 9, 11, 30

[62] J. Krücker, S. Xu, N. Glossop, A. Viswanathan, J. Borgert, H. Schulz, and B. Wood. Electromagnetic tracking for thermal ablation and biopsy guidance: Clinical evaluation of spatial accuracy. Journal of Vascular and Interventional Radiology, 18(9):1141–1150, 2007. → pages 11, 45

[63] D. Kwartowitz, S. Herrell, and R. Galloway. Toward image-guided robotic surgery: Determining intrinsic accuracy of the da Vinci robot. International Journal of Computer Assisted Radiology and Surgery, 1(3):157–165, 2006. → pages 11, 13, 46

[64] T. Lange, S. Eulenstein, M. Huenerbein, H. Lamecker, and P. Schlag. Augmenting intraoperative 3D ultrasound with preoperative models for navigation in liver surgery. Medical Image Computing and Computer-Assisted Intervention, pages 534–541, 2004. → pages 9, 11, 53, 54

[65] K. Langen and D. Jones. Organ motion and its management. International Journal of Radiation Oncology Biology Physics, 50(1):265–278, 2001. → pages 22

[66] J. Leven, D. Burschka, R. Kumar, G. Zhang, S. Blumenkranz, X. Dai, M. Awad, G. Hager, M. Marohn, M. Choti, et al. da Vinci canvas: A telerobotic surgical system with integrated, robot-assisted, laparoscopic ultrasound capability. Medical Image Computing and Computer-Assisted Intervention, pages 811–818, 2005. → pages 11, 47

[67] Q. Li, L. Zamorano, A. Pandya, R. Perez, J. Gong, and F. Diaz.
The application accuracy of the NeuroMate robot: A quantitative comparison with frameless and frame-based surgical localization systems. Computer Aided Surgery, 7(2):90–98, 2002. → pages 11

[68] R. Link, S. Bhayani, and L. Kavoussi. A prospective comparison of robotic and laparoscopic pyeloplasty. Annals of Surgery, 243(4):486, 2006. → pages 13

[69] J. Maintz and M. Viergever. An overview of medical image registration methods. In The Symposium of the Belgian Hospital Physicists Association (SBPH-BVZF), 1996. → pages 8, 21

[70] M. Makuuchi, G. Torzilli, and J. Machi. History of intraoperative ultrasound. Ultrasound in Medicine & Biology, 24(9):1229–1242, 1998. → pages 5, 6

[71] R. Mårvik, T. Langø, G. Tangen, J. Andersen, J. Kaspersen, B. Ystgaard, E. Sjølie, R. Fougner, H. Fjøsne, and T. Nagelhus Hernes. Laparoscopic navigation pointer for three-dimensional image-guided surgery. Surgical Endoscopy, 18(8):1242–1248, 2004. → pages 9, 22, 31

[72] L. Mercier, T. Langø, F. Lindseth, and L. Collins. A review of calibration techniques for freehand 3-D ultrasound systems. Ultrasound in Medicine & Biology, 31(2):143–165, 2005. ISSN 0301-5629. → pages 46, 55

[73] P. Merloz, J. Tonetti, A. Eid, C. Faure, S. Lavallee, J. Troccaz, P. Sautot, A. Hamadeh, and P. Cinquin. Computer assisted spine surgery. Clinical Orthopaedics and Related Research, 337:86, 1997. → pages 21

[74] M. Moerland, A. van den Bergh, R. Bhagwandien, W. Janssen, C. Bakker, J. Lagendijk, and J. Battermann. The influence of respiration induced motion of the kidneys on the accuracy of radiotherapy treatment planning: A magnetic resonance imaging study. Radiotherapy and Oncology, 30(2):150–154, 1994. → pages 30, 31

[75] P. Mountney, D. Stoyanov, and G. Yang. Three-dimensional tissue deformation recovery and tracking. IEEE Signal Processing Magazine, 27(4):14–24, 2010. → pages 69

[76] S. Nakajima, H. Atsumi, R. Kikinis, T. Moriarty, D. Metcalf, F. Jolesz, and P. Black.
Use of cortical surface vessel registration for image-guided neurosurgery. Neurosurgery, 40(6):1201, 1997. → pages 53

[77] M. Nakamoto, Y. Sato, M. Miyamoto, Y. Nakajima, K. Konishi, M. Shimada, M. Hashizume, and S. Tamura. 3D ultrasound system using a magneto-optic hybrid tracker for augmented reality visualization in laparoscopic liver surgery. Medical Image Computing and Computer-Assisted Intervention, pages 148–155, 2002. → pages 8, 9, 11

[78] L. Nolte, L. Zamorano, H. Visarius, U. Berlemann, F. Langlotz, E. Arm, and O. Schwarzenbach. Clinical evaluation of a system for precision enhancement in spine surgery. Clinical Biomechanics, 10(6):293–303, 1995. → pages 21

[79] G. Penney, J. Blackall, M. Hamady, T. Sabharwal, A. Adam, and D. Hawkes. Registration of freehand 3D ultrasound and magnetic resonance liver images. Medical Image Analysis, 8(1):81–91, 2004. ISSN 1361-8415. → pages 53, 54

[80] K. Pentenrieder, P. Meier, G. Klinker, et al. Analysis of tracking accuracy for single-camera square-marker-based tracking. In Proceedings of Dritter Workshop Virtuelle und Erweiterte Realität der GI-Fachgruppe VR/AR, Koblenz, Germany. Citeseer, 2006. → pages 48

[81] K. Perry, J. Myers, and D. Deziel. Laparoscopic ultrasound as the primary method for bile duct imaging during cholecystectomy. Surgical Endoscopy, 22(1):208–213, 2008. → pages 33

[82] T. Polascik, M. Meng, J. Epstein, and F. Marshall. Intraoperative sonography for the evaluation and management of renal tumors: Experience with 100 patients. The Journal of Urology, 154(5):1676–1680, 1995. → pages 6

[83] F. Porpiglia, A. Volpe, M. Billia, and R. Scarpa. Laparoscopic versus open partial nephrectomy: analysis of the current literature. European Urology, 53(4):732–743, 2008. → pages 2, 14

[84] B. Porter, D. Rubens, J. Strang, J. Smith, S. Totterman, and K. Parker. Three-dimensional registration and fusion of ultrasound and MRI using major vessels as fiducial markers.
IEEE Transactions on Medical Imaging, 20(4):354–359, 2002. → pages 53, 54

[85] R. Prager, R. Rohling, A. Gee, and L. Berman. Rapid calibration for 3-D freehand ultrasound. Ultrasound in Medicine & Biology, 24(6):855–869, 1998. → pages 46

[86] H. Rafii-Tari, P. Abolmaesumi, and R. Rohling. Panorama ultrasound for guiding epidural anesthesia: A feasibility study. Information Processing in Computer-Assisted Interventions, pages 179–189, 2011. → pages 47

[87] I. Reinertsen, M. Descoteaux, K. Siddiqi, and D. Collins. Validation of vessel-based registration for correction of brain shift. Medical Image Analysis, 11(4):374–388, 2007. → pages 53, 54

[88] M. Ribo. State of the art report on optical tracking. Vienna Univ. Technol., Vienna, Austria, Tech. Rep, 25, 2001. → pages 47

[89] T. Robinson and G. Stiegmann. Minimally invasive surgery. Endoscopy, 36(1):48–51, 2004. → pages 1

[90] C. Rogers, A. Singh, A. Blatt, W. Linehan, and P. Pinto. Robotic partial nephrectomy for complex renal tumors: Surgical technique. European Urology, 53(3):514–523, 2008. → pages 13

[91] H. Rusinek, W. Tsui, A. Levy, M. Noz, and M. de Leon. Principal axes and surface fitting methods for three-dimensional image registration. Journal of Nuclear Medicine, 34(11):2019, 1993. → pages 22

[92] M. Sato, I. Bitter, M. Bender, A. Kaufman, and M. Nakajima. TEASAR: Tree-structure extraction algorithm for accurate and robust skeletons. In Proceedings of The Eighth Pacific Conference on Computer Graphics and Applications, pages 281–449, 2000. → pages 54

[93] J. Schiff, M. Palese, E. Vaughan Jr, R. Sosa, D. Coll, and J. Del Pizzo. Laparoscopic vs open partial nephrectomy in consecutive patients: The Cornell experience. British Journal of Urology International, 96(6):811–814, 2005. → pages 2

[94] C. Schneider, G. Dachs, C. Hasser, M. Choti, S. DiMaio, and R. Taylor. Robot-assisted laparoscopic ultrasound. Information Processing in Computer-Assisted Interventions, pages 67–80, 2010.
→ pages 9, 11, 14, 35

[95] C. Schneider, J. Guerrero, C. Nguan, R. Rohling, and S. Salcudean. Intra-operative pick-up ultrasound for robot assisted surgery with vessel extraction and registration: A feasibility study. Information Processing in Computer-Assisted Interventions, pages 122–132, 2011. → pages

[96] L. Schwartz, J. Richaud, L. Buffat, E. Touboul, and M. Schlienger. Kidney mobility during respiration. Radiotherapy and Oncology, 32(1):84–86, 1994. → pages 30, 31

[97] T. Sielhorst, M. Feuerstein, and N. Navab. Advanced medical displays: A literature review of augmented reality. Journal of Display Technology, 4(4):451–467, 2008. → pages 8

[98] P. Slomka, J. Mandel, D. Downey, and A. Fenster. Evaluation of voxel-based registration of 3-D power Doppler ultrasound and 3-D magnetic resonance angiographic images of carotid arteries. Ultrasound in Medicine & Biology, 27(7):945–955, 2001. → pages 7, 53

[99] L. Soler, S. Nicolau, J. Fasquel, V. Agnus, A. Charnoz, A. Hostettler, J. Moreau, C. Forest, D. Mutter, and J. Marescaux. Virtual reality and augmented reality applied to laparoscopic and NOTES procedures. In 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pages 1399–1402, 2008. → pages 8

[100] M. Spaliviero and I. Gill. Laparoscopic partial nephrectomy. British Journal of Urology International, 99(5b):1313–1328, 2007. → pages 15

[101] D. Stoyanov, G. Mylonas, F. Deligianni, A. Darzi, and G. Yang. Soft-tissue motion tracking and structure estimation for robotic assisted MIS procedures. Medical Image Computing and Computer-Assisted Intervention, pages 139–146, 2005. → pages 13

[102] L. Su, B. Vagvolgyi, R. Agarwal, C. Reiley, R. Taylor, and G. Hager. Augmented reality during robot-assisted laparoscopic partial nephrectomy: Toward real-time 3D-CT to stereoscopic video registration. Urology, 73(4):896–900, 2009. → pages 8, 9, 14, 68

[103] I. Suramo, M. Päivänsalo, and V. Myllylä.
Cranio-caudal movements of the liver, pancreas and kidneys in respiration. Acta Radiologica: Diagnosis, 25(2):129, 1984. → pages 30

[104] G. Treece, R. Prager, and A. Gee. Regularised marching tetrahedra: Improved iso-surface extraction. Computers & Graphics, 23(4):583–598, 1999. → pages 25, 57, 61

[105] G. Treece, R. Prager, A. Gee, and L. Berman. Surface interpolation from sparse cross sections using region correspondence. IEEE Transactions on Medical Imaging, 19(11):1106–1114, 2002. ISSN 0278-0062. → pages 25, 57

[106] M. Van Veelen, E. Nederlof, R. Goossens, C. Schot, and J. Jakimowicz. Ergonomic problems encountered by the medical team related to products used for minimally invasive surgery. Surgical Endoscopy, 17(7):1077–1081, 2003. → pages 1

[107] C. Våpenstad, A. Rethy, T. Langø, T. Selbekk, B. Ystgaard, T. Hernes, and R. Mårvik. Laparoscopic ultrasound: A survey of its current and future use, requirements, and integration with navigation technology. Surgical Endoscopy, pages 1–10, 2010. → pages 6

[108] D. Wagner and D. Schmalstieg. ARToolKitPlus for pose tracking on mobile devices. In Computer Vision Winter Workshop, pages 6–8, 2007. → pages 47, 48, 50

[109] D. Wang, F. Bello, and A. Darzi. Augmented reality provision in robotically assisted minimally invasive surgery. In International Congress Series, volume 1268, pages 527–532, 2004. → pages 8, 13

[110] E. Wilson, Z. Yaniv, D. Lindisch, and K. Cleary. A buyer's guide to electromagnetic tracking systems for clinical applications. In Proceedings of SPIE, volume 6918, pages 69182B–1, 2008. → pages 11, 45

[111] Z. Yaniv, E. Wilson, D. Lindisch, and K. Cleary. Electromagnetic tracking in the clinical environment. Medical Physics, 36:876, 2009. → pages 11, 45

[112] D. Yee, A. Shanberg, B. Duel, E. Rodriguez, L. Eichel, and D. Rajpoot. Initial comparison of robotic-assisted laparoscopic versus open pyeloplasty in children. Urology, 67(3):599–602, 2006. → pages 12, 13

[113] M. Yip, T. Adebar, R.
Rohling, S. Salcudean, and C. Nguan. 3D ultrasound to stereoscopic camera registration through an air-tissue boundary. Medical Image Computing and Computer-Assisted Intervention, pages 626–634, 2010. → pages 61

[114] P. Yohannes, P. Rotariu, P. Pinto, A. Smith, and B. Lee. Comparison of robotic versus laparoscopic skills: Is there a difference in the learning curve? Urology, 60(1):39–45, 2002. → pages 12, 13

[115] D. Yu, J. Jin, S. Luo, W. Lai, and Q. Huang. A useful visualization technique: A literature review for augmented reality and its application, limitation & future direction. Visual Information Communication, pages 311–337, 2010. → pages 47

