UBC Theses and Dissertations

Ultrasound Guidance for Epidural Anesthesia. Ashab, Hussam Al-Deen (2013)

Ultrasound Guidance for Epidural Anesthesia

by

Hussam Al-Deen Ashab

MSc, Biomedical Engineering, Duke University, 2010

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in THE FACULTY OF GRADUATE STUDIES (Electrical and Computer Engineering)

The University of British Columbia (Vancouver)

April 2013

© Hussam Al-Deen Ashab, 2013

Abstract

We propose an augmented reality system to automatically identify lumbar vertebral levels and the lamina region in ultrasound-guided epidural anesthesia. Spinal needle insertion procedures require careful placement of a needle, both to ensure effective therapy delivery and to avoid damaging sensitive tissue such as the spinal cord. An important step in such procedures is the accurate identification of the vertebral levels, which is currently performed using manual palpation with a reported success rate of only 30%. In this thesis, we propose a system using a trinocular camera which tracks an ultrasound transducer during the acquisition of a sequence of B-mode images. The system generates a panorama ultrasound image of the lumbar spine, automatically identifies the lumbar levels in the panorama image, and overlays the identified levels on a live camera view of the patient's back. Several experiments were performed to test the accuracy of vertebral height in panorama images, the accuracy of vertebral level identification in panorama images, the accuracy of vertebral level identification on the skin, and the impact of spine arching on accuracy. The results from 17 subjects demonstrate the feasibility of the approach and the capability of achieving an error within a clinically acceptable range for epidural anesthesia. The overlaid marks on the screen are used to assist in locating the needle puncture site.
Then, an automated slice selection algorithm is used to guide the operator in positioning a 3D transducer such that the best view of the target anatomy is visible in a predefined re-slice of the 3D ultrasound volume. This re-slice is used to observe, in real time, the trajectory of a needle attached to the 3D transducer towards the target. The method is based on Haar-like features and the AdaBoost learning algorithm. We have evaluated the method on a set of 32 volumes acquired from volunteer subjects by placing the 3D transducer on the L1-L2, L2-L3, L3-L4 and L4-L5 interspinous gaps on each side of the lumbar spine. Results show that the needle insertion plane can be identified with a root mean square error of 5.4 mm, accuracy of 99.6%, and precision of 78.7%.

Preface

This thesis was prepared under the supervision of Dr. Purang Abolmaesumi and Dr. Robert Rohling. They introduced the research topics of generating a panorama image of the lumbar spine, automatically identifying the lumbar levels in the panorama image, overlaying the identified levels on a live camera view of the patient's back, and identifying the lamina region from lumbar spine ultrasound volumes to assist in needle insertion. Moreover, they revised the manuscripts of a conference paper, a journal paper and this thesis. A version of Chapter 2 has been published at the IEEE Engineering in Medicine and Biology Society (EMBS) conference, under the title "AREA: An Augmented Reality System for Epidural Anaesthesia", and in the IEEE Transactions on Biomedical Engineering, under the title "An Augmented Reality System for Epidural Anaesthesia (AREA): Pre-Puncture Identification of Vertebrae". The work was co-authored by Victoria A. Lessoway, Siavash Khallaghi, Alexis Cheng, Robert Rohling and Purang Abolmaesumi [6]. Part of the code used for the system was originally written by Alexis Cheng and Siavash Khallaghi.
The author modified and rewrote these parts, and added to the code in order to develop a complete working system. Moreover, the author was responsible for testing the system and performing all of the analysis. Data from subjects were acquired by Victoria A. Lessoway (British Columbia Women's Hospital and Health Centre, Department of Ultrasound). Ethics approval for this study was obtained from the UBC Research Ethics Board (certificate number H07-0691) for the studies in Chapters 2 and 3.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Algorithms
Acknowledgments
1 Introduction
  1.1 Motivation
  1.2 Thesis Objectives
  1.3 Thesis Outline
  1.4 Background
    1.4.1 Lumbar Spine Anatomy
    1.4.2 Regional Analgesia and Anesthesia
  1.5 Ultrasound Guidance for Epidural Analgesia and Anesthesia
    1.5.1 Ultrasound Image Registration and Tracking Methods
    1.5.2 Augmented Reality
    1.5.3 Ultrasound Image Filtering and Vertebral Level Identification
    1.5.4 Automatic Slice Selection
    1.5.5 Summary
2 Lumbar Level Identification
  2.1 Introduction
  2.2 Ultrasound Image Calibration
  2.3 Image Acquisition
  2.4 Panorama Generation
  2.5 Vertebral Identification
  2.6 Visualization
  2.7 Experiments and Results
    2.7.1 Accuracy of Vertebral Height in Panorama Image
    2.7.2 Accuracy of Vertebral Level Identification in Panorama Image
    2.7.3 Accuracy of Vertebral Level Identification on the Skin
    2.7.4 Accuracy of Spine Arching
  2.8 Discussion
    2.8.1 Accuracy of Vertebrae Height in Panorama Images
    2.8.2 Accuracy of Vertebral Levels Identification in Panorama Images
    2.8.3 Accuracy of Vertebral Levels Identification on the Skin
    2.8.4 Accuracy of Spine Arching
3 Insertion Slice Detection
  3.1 Introduction
  3.2 Data Acquisition
  3.3 Feature Extraction and Construction of Weak Classifier
  3.4 Learning Classifiers
    3.4.1 Training Cascade Classifiers
  3.5 Slice Selection
  3.6 Experiments and Results
  3.7 Discussion
4 Conclusion and Future Work
  4.1 Summary of Contributions
  4.2 Future Work
Bibliography
A Additional Results

List of Tables

Table 2.1: Types of experiments performed, gold standard used, and measurements/labels that have been defined in this thesis. Numbers used in the table refer to the measurements defined within the text.
Table 2.2: Mean ± standard deviation of curvilinear vertebral height, panorama vertebral height, and the absolute error calculated as the difference between those two measurements. Units are millimetres; N = 17.
Table 2.3: Mean of the absolute error between AREA and panorama vertebral levels, and between Kerby and panorama vertebral levels. Units are millimetres; N = 17.
Table 2.4: Number of false AREA and Kerby vertebral counts (N = 82). Using the linear transducer, the sonographer could not identify three of the vertebrae because they were fused with a neighbouring vertebra.
Table 2.5: Actual vertebral count, AREA vertebral count and Kerby vertebral count (N = 82). For the actual vertebral count, the sonographer could not identify three of the vertebrae because they were fused with a neighbouring vertebra.
Table 2.6: Mean and standard deviation of the absolute difference between AREA vertebrae labels at the resting position and actual vertebrae labels at the resting position, measured on the subject's back. Units are millimetres; N = 17.
Table 2.7: Comparison of the absolute error of AREA for different spine arching angles. Units are millimetres; N = 17.
Table 3.1: The distances between the spinous process and facet, and between the facet and transverse process, measured from a statistical shape model of the spine. Units are millimetres.
Table 3.2: The RMS error of selecting the optimal slice from the ultrasound volume. Error was calculated as the distance between the optimal slice the sonographer chose and the optimal slice the algorithm chose. Units are millimetres.
Table 3.3: The performance of the method in selecting the optimal slice from the ultrasound volume.

List of Figures

Figure 1.1: Anatomical structure of the lumbar spine showing vertebral levels.
Figure 1.2: Needle insertion into the lumbar region of the spine.
Figure 2.1: Markers used for the calibration of the ultrasound transducer.
Figure 2.2: Stylus calibration.
Figure 2.3: Phantom registration.
Figure 2.4: Segmentation parameter module.
Figure 2.5: Freehand calibration.
Figure 2.6: Workflow of AREA.
Figure 2.7: Ultrasound B-mode images are acquired by placing the transducer in the parasagittal plane 10 mm from the midline. The solid line shows the vertebral level the system identifies and the dashed line shows the imaging plane acquired by the sonographer.
Figure 2.8: Example of two ultrasound panorama images. (a) Panorama obtained in the parasagittal plane, showing L1, L2, L3, L4, L5 and S1 from left to right. (b) The same panorama image showing the automatically identified levels L1, L2, L3, L4 and L5 from left to right.
Figure 2.9: Workflow of panorama image processing, thresholding and vertebral identification, with the final step showing the fusion of the identification results with the original panorama image.
Figure 2.10: The GUI developed using 3D Slicer showing the vertebral levels (black lines) overlaid on the video image of the patient's back.
Figure 2.11: Summary of major factors contributing to the overall error. Each arrow indicates an error contributing to each of the modules in the system.
Figure 3.1: Three slices extracted from a 3D ultrasound volume. The first plane shows the lamina, the second plane shows the facet joints and the third plane shows the transverse process. All three slices have a similar wave-like pattern, which can be confusing.
Figure 3.2: Features extracted from ultrasound volume slices.
Figure 3.3: The value at a point (x, y) in the integral image is the sum of all pixels above and to the left of that point. Figure adapted from Viola et al. [71].
Figure 3.4: An example showing how to calculate the sum of pixels in a rectangle D. The value of the integral image at four locations a, b, c and d is used.
The value at location a is the sum of pixels in region A, the value at location b is the sum of pixels in regions A and B, the value at location c is the sum of pixels in regions A and C, and the value at location d is the sum of pixels in regions A, B, C and D. Therefore, to calculate the sum of pixels in region D we add a and d and subtract b and c: SUM(D) = a + d - b - c. Figure adapted from Viola et al. [71].
Figure 3.5: The images were divided into two classes: a vertebrae class, which contains vertebrae sub-windows, and a non-vertebrae class, which contains other parts of the images that do not correspond to vertebrae sub-windows. The first column corresponds to sub-windows from the first class, and the second and third columns correspond to sub-windows from the second class.
Figure 3.6: An example of the parameter representation of a feature.
Figure 3.7: A series of classifiers applied to each sub-window. The first classifier removes most of the non-vertebrae sub-windows, then the following classifiers remove more of the non-vertebrae sub-windows.
Figure 3.8: Slice selection algorithm workflow.
Figure 3.9: (a) Needle insertion plane the sonographer chose from ultrasound volumes, (b) the distance between the spinous process and facets, and (c) the distance between the facets and transverse process.
Figure 3.10: Examples of six slices from three different volumes. The first column shows images the sonographer chose as optimal and the second column shows images the algorithm chose as optimal.
Figure 3.11: Comparison between the optimal slices the sonographer chose and the optimal slices the algorithm chose (black). The two blue dots are the lower and upper edges of the set of slices the sonographer chose as optimal.
Figure A.1: Comparison between the optimal slices the sonographer chose and the optimal slices the algorithm chose (black). The two blue dots are the lower and upper edges of the set of slices the sonographer chose as optimal.
Figure A.2: Comparison between the optimal slices the sonographer chose and the optimal slices the algorithm chose (black). The two blue dots are the lower and upper edges of the set of slices the sonographer chose as optimal.
Figure A.3: Comparison between the optimal slices the sonographer chose and the optimal slices the algorithm chose (black). The two blue dots are the lower and upper edges of the set of slices the sonographer chose as optimal.
Figure A.4: Comparison between the optimal slices the sonographer chose and the optimal slices the algorithm chose (black). The two blue dots are the lower and upper edges of the set of slices the sonographer chose as optimal.
Figure A.5: Comparison between the optimal slices the sonographer chose and the optimal slices the algorithm chose (black). The two blue dots are the lower and upper edges of the set of slices the sonographer chose as optimal.

List of Algorithms

Algorithm 1: Classifier training using AdaBoost. Algorithm adapted from Viola et al. [71].
Algorithm 2: Algorithm to build a cascade of classifiers. Algorithm adapted from Viola et al. [71].

Acknowledgments

This thesis would not have been possible without the help of several individuals who contributed to the preparation and completion of this study. First and foremost, my utmost gratitude goes to my supervisors, Purang Abolmaesumi and Robert Rohling, for their guidance, patience and encouragement.
I would like to gratefully thank Vickie Lessoway for her assistance in collecting data, her constructive comments and her clinical guidance. I also wish to acknowledge Vicky Earle for the original drawings (before our modification) in Figures 1.1, 1.2, 2.7, 3.1 and 3.9. My special thanks to all my colleagues and friends at the Robotics and Control lab: Siavash Khallaghi, Weiqi Wang, Caitlin Schneider, Arthur Leland Schiro, Abtin Rasoulian, Ramin S. Sahebjavaher, Julio Lobo, Saman Nouranian, Jeff Abeysekera, Philip Edgcumbe and Samira Sojoudi. Thanks for all your help and knowledge. I would also like to thank Mohammad Amir Bino Al-shishany, Ali abed alhamid Al-shishany, Ferdos Shahbaz Al-shishany and Abdallah Ashab Al-shishany for their help. Without your support, I never would have made it here. Finally, I am also very appreciative of all members of my family for giving me inspiration and support during all of these years.

Chapter 1

Introduction

1.1 Motivation

Epidural analgesia and anesthesia is an injection of local anesthetics and anti-inflammatory medication into the epidural space near the spinal canal for pain management; it can also be used as an alternative to general anesthesia [33]. In obstetric anesthesiology, anaesthesiologists commonly use the L2-L3 or L3-L4 interspinous gaps for injection into the lumbar region for pain relief during labour and delivery. To deliver anesthetics and/or anti-inflammatory medication effectively, needle insertion has to be performed successfully in a two-step procedure. First, the puncture site should be correctly selected at the desired intervertebral space. Then, to reach the target, this should be followed by appropriate selection of the needle trajectory. Both of these steps are currently done blindly with manual palpation. Identification of the vertebrae for the first step is only 30% accurate [52]. Therefore, more than 70% of the procedures misidentify the desired intervertebral space by one or more levels.
This is undesirable for two reasons. Firstly, accidental needle overshoot is more likely to result in nerve damage at higher vertebrae. Secondly, the effectiveness of the anesthetics depends on the level [52]. Recent studies have shown that ultrasound can be used to identify the vertebral levels accurately [14, 48, 56, 74]. However, interpreting spinal ultrasound images remains a challenge, especially for novice ultrasound operators (i.e. many anaesthesiologists). To alleviate this issue, our research team has previously proposed two techniques: automatic vertebral level identification from panorama ultrasound images [26], and displaying the level to the anaesthesiologist in an ultrasound guidance system with a camera mounted on the transducer [44]. However, there are remaining challenges that need to be addressed to enable the translation of ultrasound-guided epidural anesthesia into routine clinical practice: 1) A system has to be developed to seamlessly relate the identified vertebral levels to the patient's skin while accommodating some patient motion during the epidural procedure. 2) There should be no disruption to the sterile field, such as modifying the ultrasound transducer by mounting a camera system on it as previously proposed [44]. 3) The system should automatically detect and display the slice passing through the lamina in the ultrasound volume to assist in guiding needle insertion. This research makes three significant contributions. First, a new, efficient, and fully automatic lumbar level identification algorithm for panorama ultrasound images is developed. Second, it proposes an Augmented Reality system for Epidural Anesthesia (hereafter referred to as AREA) that overlays the identified levels on a live video image of the patient's back. Third, an automatic algorithm is used to detect and display the lamina region from the ultrasound volume to assist in selecting the needle insertion plane.
The workflow of AREA is as follows: 1) Initially, a sequence of ultrasound images is acquired from the patient's back before needle insertion. These images are used to generate a panorama image showing a sagittal cross section of the spinal anatomy parallel to the main spinal axis. 2) An automatic image processing technique identifies the lumbar levels in the panorama image. 3) An augmented reality module converts the level locations in the panorama image to positions relative to the camera's view of the patient, and overlays virtual markings of the levels on a live video display of the patient's back. 4) A slice passing through the lamina in the ultrasound volume is automatically detected to assist in guiding needle insertion.

This workflow ensures minimum disruption to the current clinical procedure. Furthermore, since a remote video camera is used, the system does not interfere with the sterile field. In the proposed workflow, a sequence of ultrasound images is acquired from the patient's back before needle insertion. These images are used to generate a panorama image showing a sagittal cross section of the spinal anatomy parallel to the main spinal axis. An automatic image processing technique identifies the lumbar levels in the panorama image, converts the level locations to positions relative to the camera's view of the patient, and overlays virtual markings of the levels on a live video display of the patient's back. Using the projected markings, the operator identifies the desired lumbar levels and selects an approximate puncture site. The final steps of selecting the exact puncture site, needle angle, and depth of insertion can be done by one of the following methods:

1. Palpating surface landmarks on the patient's skin to identify the appropriate intervertebral space and puncture site. The anesthesiologist then uses a needle to reach the target "epidural space".
During needle insertion, the loss-of-resistance technique is used to identify the epidural space and to confirm that the needle tip has reached the target.

2. Registering a multi-vertebrae statistical shape model to 3D ultrasound images. The registration results are then used to identify the mid-sagittal plane for needle insertion [2].

3. Using an automatic technique to identify a re-slice plane through a 3D ultrasound volume such that the re-slice plane includes the target epidural space and the wave-like appearance of the laminae of the vertebrae. This helps the anesthesiologist gain confidence in interpreting the live re-slice plane that is used to select the needle trajectory, angle, and depth of insertion.

1.2 Thesis Objectives

This work has three major objectives:

1. Develop an automatic technique to identify the lumbar levels from a panorama ultrasound image. This is to supplement the manual technique of palpation of the pelvis and vertebrae.

2. Design and integrate an augmented reality system that projects the identified lumbar levels on a live video image of the patient's back. This is to allow the anesthesiologist to select an appropriate puncture site using an interface that does not affect the sterile field on the patient.

3. Develop an automatic technique that identifies a re-slice plane through a 3D ultrasound volume such that the re-slice plane includes the target epidural space and the wave-like appearance of the laminae of the vertebrae. This is to allow the anesthesiologist to gain confidence in interpreting the live re-slice plane that is used to select the needle trajectory. When the live re-slice plane matches the plane correctly identified by the computer, the anesthesiologist can proceed to insert the needle toward the target.

All of these objectives should be achieved for a wide range of patients covering typical variations in vertebrae shape and size.
The techniques should also accommodate the range of noise present in ultrasound images. To these ends, the techniques will be tested on human subjects and compared to gold standards from other techniques and manual identification by an expert sonographer.

1.3 Thesis Outline

The thesis outline is as follows:

Chapter 1 provides a brief overview of previous work on ultrasound image registration and tracking methods, augmented reality technologies, ultrasound image filtering and vertebral level identification, and automated slice selection from ultrasound volumes.

Chapter 2 describes the system components used to acquire images, the registration used to generate the panorama image, vertebral level identification, and visualization. Moreover, it presents the experiments conducted to test the accuracy of the system.

Chapter 3 presents an algorithm proposed to detect a slice passing through the laminae in the ultrasound volume to assist in guiding needle insertion, and results showing the accuracy of the proposed method.

Finally, Chapter 4 summarizes the conclusions and presents future work which could improve the system proposed in this thesis.

1.4 Background

This section provides an overview of lumbar spine anatomy, epidural analgesia and anesthesia, and ultrasound guidance for epidural analgesia and anesthesia. It then describes common approaches used for tracking and registering ultrasound images, different augmented reality technologies, panorama image processing, and automated standardized slice selection from ultrasound volumes.

1.4.1 Lumbar Spine Anatomy

The lumbar region of the spine lies below the cervical and thoracic regions and above the sacrum. This part of the vertebral column consists of five separate vertebrae, labelled L1-L5 as shown in Figure 1.1, which are the largest and strongest of all the spine's vertebrae [66][8]. The lumbar spine has six nerves, which are divided into two groups as follows [66]:

1.
Posterior:

(a) Lateral femoral cutaneous nerve: originates from the L2 and L3 vertebrae.

(b) Femoral nerve: originates from the L2, L3 and L4 vertebrae.

Figure 1.1: Anatomical structure of the lumbar spine showing vertebral levels.

2. Anterior:

(a) Iliohypogastric nerve: originates from the T12 and L1 vertebrae.

(b) Ilioinguinal nerve: originates from the L1 vertebra.

(c) Genitofemoral nerve: originates from the L1 and L2 vertebrae.

(d) Obturator nerve: originates from the L2, L3 and L4 vertebrae.

1.4.2 Regional Analgesia and Anesthesia

Regional analgesia and anesthesia, one of the most commonly performed interventions to manage lower back pain [9], is an injection of local anesthetics and anti-inflammatory medication into the epidural space near the spinal canal or into the spinal column. It blocks nerve impulses from the lumbar spine, which blocks pain from the lower body. There are three types of regional analgesia and anesthesia:

1. Epidural analgesia and anesthesia: a technique to inject local anesthetic drugs through a catheter into the epidural space [61].

2. Spinal analgesia and anesthesia: a technique to inject local anesthetic drugs into the spinal column. This technique takes effect more quickly, but only for a short period of time [61].

3. Combined spinal-epidural (CSE): a technique to inject analgesic and/or local anaesthetic medication into the intrathecal space. This type combines the benefits of both spinal analgesia and epidural analgesia [61].

To perform an epidural, the patient is asked to arch their back forward. The needle puncture site and trajectory are then defined, followed by insertion of the needle into the area surrounding the spinal cord as shown in Figure 1.2. To confirm correct needle placement and reduce the risk of needle overshoot, the loss-of-resistance technique is used to place the needle in the exact location. Then
Then 7  a catheter is passed through the needle to reach the epidural space and deliver medication. Complications associated with inaccurate placement of needle, such as accidental dural puncture, are common (2.5%) and lead to side effects for patients, such as post dural puncture headache in 86% of cases [1, 63, 72]. These complications are even higher for trainees at a university, which has a greater incidence of dural perforation (3-5%) [10, 51]. Selection of puncture site is achieved through manual palpation of the patient’s back. Kopacz et al. [27] showed that experience on 60 patients is required to reach competency of 90%. To improve the accuracy of needle injection, Johnson et al. [24] proposed to use x-ray-guidance (fluoroscopy) to improve safety and efficacy compared to blind techniques, but at the cost of exposure to ionizing radiation.  Figure 1.2: Needle insertion into lumbar region of the spine. 8  1.5  Ultrasound Guidance for Epidural Analgesia and Anesthesia  The current methods of manual palpation and x-ray-guidance techniques for the selection of puncture site, needle trajectory, and depth of insertion still have limitations. One limitation of manual palpation is the lack of accuracy, especially for the obese where landmarks are less easily identified. One limitation of x-ray guidance is the use of ionizing radiation, which is contra-indicated in obstetric anesthesia. Recently, a number of solutions have been proposed to improve needle insertion accuracy, while maintaining and improving patient safety by reducing the amount of ionizing radiation (X-ray) and complication rates. Grau et al. [18] showed that the rate of puncture attempts was significantly reduced using ultrasound for localization of the epidural space for catheter placement. In another study, Grau et al. [19] found that using ultrasound for teaching epidural analgesia and anesthesia in obstetrics improves the success rate. Rapp et al. 
[47] used ultrasound to visualize the epidural space, ligamentum flavum, and dural structures in children. Arzola et al. [5] found agreement between ultrasound depth measurement and needle depth, and a good level of success in the use of ultrasound to select the needle puncture site. Karmakar et al. [25] and Tran et al. [68] successfully combined real-time ultrasound guidance with loss of resistance, with the needle inserted in the plane of the ultrasound beam, for paramedian epidural access. These studies suggest that using ultrasound for the guidance of epidural anesthesia can improve puncture site selection and needle insertion while maintaining patient safety. However, interpreting spinal ultrasound images remains a challenge, especially for novice ultrasound operators.

1.5.1 Ultrasound Image Registration and Tracking Methods

To obtain a wide field of view depicting the lumbar spine laminae for identifying lumbar vertebral levels, the proposed system generates a panorama image. Panorama reconstruction requires the orientation and position of the individual ultrasound images to be tracked and registered together. Several techniques have been developed for tracking and registering ultrasound images for panorama generation, and they can be divided into two categories [57]: sensor-based and image-based. The sensor-based methods use optical, mechanical, or magnetic sensors to acquire images with their pose tracked with respect to a fixed base. For example, Poon et al. [43] suggested a system to generate a panorama using an optically tracked 3D ultrasound transducer. The main advantage of these methods is high tracking accuracy, but the disadvantages are additional cost and complexity. Image-based tracking techniques do not require additional hardware; instead, they register images using the anatomical depictions in the images, much like stitching photographs by aligning the scene depictions.
Image-based techniques have two main types: (1) Pixel-based registration methods, which compare groups of pixels from one image to another using a similarity metric and then find the transformation between images that maximizes the similarity in the overlapping region. For example, Francois et al. [11] used a statistical texture-based similarity metric to register ultrasound volumes. (2) Feature-based registration methods, which extract common features from the images and use them to perform registration. For instance, Moradi et al. [37] used Scale Invariant Feature Transform (SIFT) [35] features and B-splines to register ultrasound data. Recently, SIFT was extended to 3D SIFT by Scovanner et al. [59] for 3D imagery such as MRI data. In another study [38], 3D SIFT and random sample consensus (RANSAC) were used to register ultrasound volumes. Even though the pixel-based method is accurate, it is computationally expensive; and even though the feature-based method is efficient, it fails when the appearance of the anatomy varies significantly in the images, as with ultrasound images of the spine. In this research, sensor-based and image-based methods are combined in an attempt to achieve accurate and robust registration. The transformations among images obtained from an external tracking device are used as the initial guess for registration, then a rigid registration based on normalized cross-correlation is used to fine-tune the results.

1.5.2  Augmented Reality

To relate the identified lumbar levels to the patient's back, an augmented reality (AR) approach is proposed. AR is a variation of a real-world view in which the user sees computer-generated objects superimposed upon the real world. Medical augmented reality is a promising technology for improving the accuracy and efficiency of surgery and other image-guided interventions.
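To make the pixel-based idea concrete, the following is a minimal pure-Python sketch of one commonly used similarity metric, normalized cross-correlation (NCC); the patch values and function name are illustrative, not taken from the thesis implementation.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equally sized intensity lists."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = math.sqrt(sum((x - mean_a) ** 2 for x in a) *
                    sum((y - mean_b) ** 2 for y in b))
    return num / den if den else 0.0

# A linear change of brightness/contrast leaves NCC at 1.0, which is one
# reason NCC is attractive for ultrasound, where gain settings vary:
patch = [10, 20, 30, 40, 50, 60]
rescaled = [2 * v + 5 for v in patch]
print(round(ncc(patch, rescaled), 6))  # 1.0
```

A registration algorithm would evaluate this metric over the overlapping region for candidate transformations and keep the transformation that maximizes it.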
Augmented reality technologies can be divided into five fundamental classes [60]: (1) Head-mounted display-based AR systems, where a display device worn on the head blends the user's view of the world with virtual objects. Sutherland [65] reported the first head-mounted AR system in 1968, which combined real and virtual images. In another study, Wright et al. [73] used augmented reality for educational purposes: they superimposed images of bone on a human model to teach students radiographic positioning of the elbow joint. (2) Augmented optics, which use a semi-transparent mirror to augment the real image by reflecting a virtual image into its optical path. For example, Berger et al. [7] augmented angiographic images onto a biomicroscopic fundus image to guide the treatment of macular diseases. (3) Augmented reality windows, which are semi-transparent mirrors placed between the user and a real-world object, augmenting the real world with virtual objects. Liao et al. [31] used a half-silvered mirror to superimpose a 3D image onto the surgical field. (4) Projections on the patient, where data is augmented directly onto the patient. Glossop et al. [17] projected computer-generated information onto the patient using a visible laser. (5) Augmented monitors, which use an external camera and a monitor to augment real-world objects with virtual images. Nicolau et al. [39] proposed an augmented reality system that superimposed a 3D model of the liver on a video image of the patient for guidance. In another study, Sato et al. [55] used live video images to superimpose a 3D model of a tumor onto the patient's breast. The superimposed 3D model assists the surgeon in locating the exact position of the tumor. The advantage of augmented monitors is that users do not have to wear a head-mounted display or glasses, which would add more equipment and complicate the epidural procedure.
Moreover, using an augmented monitor system does not interfere with the clinical procedure. Therefore, this technique is used in the system proposed in this research.

1.5.3  Ultrasound Image Filtering and Vertebral Level Identification

Ultrasound systems produce noisy images with various artifacts, making the identification of vertebral levels inaccurate. Furthermore, bone and tissue appearance depends strongly on orientation and machine settings. Moreover, in some ultrasound applications, speckle is considered undesirable noise, and several techniques have been developed to suppress it. Therefore, to obtain good lumbar level identification results, the images should be filtered to enhance the desired features. In ultrasound imaging, special filters have to be used due to the signal-dependent nature of speckle intensity, which differs from the assumption of most denoising methods, where an additive white Gaussian noise model is assumed. Loupas et al. [34] used a median filter for speckle reduction. In other studies, Lee's filter [30], Frost's filter [13] and Kuan's filter [28] are commonly used adaptive filters, because they are easy to implement and control, but they have a major limitation in edge preservation. Therefore, speckle reducing anisotropic diffusion (SRAD) [75] was developed, and the results show that SRAD has better performance regarding mean preservation, variance reduction and edge localization. Image processing and computer vision techniques have been applied to ultrasound images for vertebral level identification [26]. It was shown that using simple median filtering, thresholding and parabolic fitting on panorama ultrasound images enables the vertebral levels to be identified with an accuracy of 11.8 mm [26]. Even though this error is large, it is still possible to identify the levels in most cases, since it is smaller than half the typical vertebral height in adults.
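As a concrete illustration of the median filtering mentioned above, the following is a minimal pure-Python 3x3 median filter; edge handling by clamping and the toy image are illustrative choices, and real pipelines would use an optimized library implementation.

```python
def median3x3(img):
    """3x3 median filter on a 2D list of intensities; edges are clamped."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    window.append(img[yy][xx])
            window.sort()
            out[y][x] = window[4]  # median of the 9 samples
    return out

# A single bright speckle-like outlier in a flat region is removed:
img = [[10, 10, 10],
       [10, 99, 10],
       [10, 10, 10]]
print(median3x3(img)[1][1])  # 10
```

Unlike linear smoothing, the median filter removes such impulsive outliers without averaging them into neighbouring pixels, which is why it is a common first step for speckle-corrupted images.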
One drawback of these techniques is the increase in intervening tissue between the ultrasound probe and the lamina in obese patients, which may affect the image quality and the results of lumbar level identification.

1.5.4  Automatic Slice Selection

Applications of three-dimensional ultrasound imaging systems are developing rapidly. For example, 3D ultrasound is nowadays used for fetal biometric measurements, which are crucial in tracking fetal growth and detecting any abnormality. The biometric measurements are taken from standardized ultrasound planes of the fetal head. Another application is 3D echocardiography, which is used to examine cardiac function; it has been applied to assess aortic valve area [15]. In another study, Arai et al. [4] estimated the volume and ejection fraction of the left ventricle for patients with wall motion abnormalities using 3D ultrasound. For epidural analgesia and anesthesia, the use of 3D ultrasound provides more data for vertebra identification than conventional 2D ultrasound. However, interpretation and analysis of the 3D data are more complex and computationally expensive than for conventional 2D ultrasound, and the time it takes to navigate to obtain standardized views similar to a 2D acquisition limits the application of 3D ultrasound to spine-guided navigation. Therefore, automatic detection of spine anatomical structures in 3D ultrasound data is extremely important to guide needle insertion and puncture site selection for epidural anesthesia, and to reduce inter- and intra-observer variability.

1.5.5  Summary

In this chapter an overview of lumbar anatomy, and of epidural analgesia and anesthesia, is provided. This overview explained the main anatomical structures of the lumbar spine, which are used by anesthesiologists to perform epidurals, and by the proposed system in this thesis to identify lumbar levels.
Moreover, a literature review of ultrasound guidance for epidural anesthesia is presented, along with its use in reducing the complications associated with epidural analgesia and anesthesia procedures. After that, ultrasound image registration and tracking methods are explored. Methods to register medical images are divided into two categories: sensor-based and image-based. For the purpose of this research, the two methods are combined in an attempt to achieve accurate and robust registration. In addition, a literature review of augmented reality systems and methods is presented, which shows that using augmented monitors is the best method for the epidural analgesia and anesthesia application. Ultrasound image filtering methods are explored, and their role in aiding lumbar level identification is emphasized. To assist in the guidance of needle insertion, the use of Haar-like features and AdaBoost is suggested to obtain the optimal slice from ultrasound volumes, and several previous works are explored.

Chapter 2  Lumbar Level Identification

This chapter provides details of the different software and equipment used in the design and implementation of the proposed system.

2.1  Introduction

Epidural analgesia and anesthesia are commonly used in obstetrics for labour and cesarean delivery, and for surgery. Epidural procedures are effective alternatives to general anesthesia [33], especially in the parturient patient. In these procedures, selection of the puncture site, which is usually between L2-L3 or L3-L4 for epidurals, and of the needle angle is achieved through manual palpation of the spine. The challenge is to place the needle tip accurately in the targeted epidural space on the first attempt, thus reducing the procedure time and additional pain to the patient. While some of these procedures are performed for non-obstetric indications under fluoroscopic guidance, the majority of them are done blindly with manual palpation.
In particular, procedures on parturient patients, for whom exposure to ionizing radiation is contraindicated, must rely on palpation. This method correctly identifies the vertebral spaces in only approximately 30% of the cases [52]. Recently, a number of solutions have been proposed with the goal of reducing the radiation dose and increasing the needle placement accuracy, while maintaining or improving patient safety and reducing complication rates [14, 25, 74]. An imaging modality that has enjoyed a recent resurgence for guiding spinal procedures is ultrasound, which provides a more accessible, portable and non-ionizing imaging alternative to fluoroscopy. The success rate of conventional ultrasound for the identification of vertebral levels has already been shown to outperform the current standard of care, which is manual palpation (71% vs. 30%, respectively [14]). To further improve the success rate, the acquisition of panorama ultrasound images has been proposed for the purpose of automatically identifying the vertebrae [26]. The authors reported an accuracy of 11.8 mm. While this accuracy could be sufficient for the identification of vertebral levels, robust and fully automatic identification of vertebral levels from panorama ultrasound images has remained a challenge. Furthermore, there is still a disconnect between the location of the identified levels of the panorama on the computer monitor and the patient's back. What is needed is a system that can: (1) automatically identify the vertebrae in the panorama image, (2) allow a small amount of subject motion between the ultrasound scan and the selection of the puncture site for needle insertion, (3) seamlessly relate identified levels to the patient's back, and (4) provide a clinically acceptable level of accuracy with respect to the patient's back, which is half the vertebral height when the purpose is correct identification.
The system proposed in this thesis, referred to as AREA, makes two major contributions: first, it presents a fully automatic and efficient lumbar level identification algorithm for panorama ultrasound images; second, it presents an augmented reality system for epidural anesthesia that overlays the identified levels on a live video image of the patient's back. We demonstrate that AREA can reliably identify and display the lumbar levels relative to the patient's back despite significant shadowing artifacts and variability of the spine's appearance in ultrasound images, as well as unavoidable minor patient movement and changes in spine arching. AREA aims to work within the established clinical workflow and setup, and to increase the confidence of operators in reliably identifying the correct puncture site for epidural injections. It is intended to be easily used by operators with little ultrasound experience (i.e. many anesthesiologists).

2.2  Ultrasound Image Calibration

Ultrasound images were acquired using a SonixTOUCH ultrasound system (Ultrasonix Medical Corp., Richmond, Canada) equipped with a 6.6 MHz linear array transducer (L14-5/38, Ultrasonix Medical Corp., Richmond, Canada). A Claron Technology MicronTracker and an external PC running the Public software Library for Ultrasound imaging research (PLUS) [29] were used to acquire tracking information and ultrasound images.

(a) Stylus Marker  (b) Probe Marker  (c) Reference

Figure 2.1: Markers used for the calibration of the ultrasound transducer.

Calibration of the ultrasound transducer is performed according to the following steps:

Stylus Calibration

This is the first step in the procedure to calibrate the ultrasound transducer. In this step we use a stylus, which is sometimes referred to as a 3D localizer or pointer. On one side it has a marker, which we will refer to as the “Stylus Marker” for the purpose of calibration, and on the other side it has a sharp end.
To calibrate the stylus, it is rotated around its tip while the position of the attached marker is recorded. The tip of the stylus should stay stationary during the process, so that its location in space is fixed. The recorded positions of the Stylus Marker are then used to find the stylus tip location with respect to the Stylus Marker. Stylus calibration using PLUS is shown in Figure 2.2.

Figure 2.2: Stylus Calibration.

Phantom Registration

The second step after calibrating the stylus is referred to as “Phantom Registration”. In this step the stylus is used to locate predefined points on the phantom, which has the reference marker attached to it. After locating these points, a predefined model is registered to them, which results in a calibrated phantom in space. Figure 2.3 shows the model registered to the points in space.

Figure 2.3: Phantom Registration.

Segmentation Parameter Module

In this module, several parameters, such as the threshold, are tuned. These parameters are used to segment the ultrasound images and detect the phantom wires, which have a defined shape and location with respect to the phantom. PLUS allows the user to adjust the parameters and provides feedback on the accuracy of the segmentation and detection, so the user can find the optimal parameters to accurately segment and detect the wires appearing in the ultrasound images. Figure 2.4 shows the tuned parameters and the detected phantom wires in an ultrasound image.

Figure 2.4: Segmentation Parameter Module.

Freehand Calibration

Using the parameters from the previous steps and freehand scanning of the phantom, the ultrasound images are segmented and the phantom wire locations in the ultrasound image are identified. The ultrasound image to probe transformation can then be found, since the phantom marker and probe marker to camera transformations, and the image coordinates, are known.
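Once the image-to-probe transformation is known, an image pixel can be mapped into the camera coordinate system by chaining homogeneous transforms. The sketch below illustrates that chain under stated assumptions: the 4x4 matrices and pixel spacings are illustrative placeholders, not actual calibration output from PLUS.

```python
import numpy as np

def to_camera(u, v, sx, sy, T_image_to_probe, T_probe_to_camera):
    """Map pixel (u, v) with spacings (sx, sy) in mm/pixel into camera space."""
    # The ultrasound image plane is taken as z = 0 in image coordinates.
    p_image = np.array([u * sx, v * sy, 0.0, 1.0])
    return T_probe_to_camera @ T_image_to_probe @ p_image

# Placeholder transforms: identity image-to-probe calibration, and a probe
# marker located 100 mm along the camera's z axis.
T_ip = np.eye(4)
T_pc = np.eye(4)
T_pc[2, 3] = 100.0

p = to_camera(u=50, v=20, sx=0.1, sy=0.1,
              T_image_to_probe=T_ip, T_probe_to_camera=T_pc)
# p[:3] is the pixel's 3D position in camera coordinates: x=5.0, y=2.0, z=100.0
```

In the real system both transforms change every frame (the probe marker is tracked live), so this chain is re-evaluated per acquired image before the frames are stitched into the panorama.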
The ultrasound image to probe transformation is calculated for each acquired ultrasound image, and the average of all these transformations is taken as the final ultrasound image to probe transformation.

Figure 2.5: Freehand Calibration.

2.3  Image Acquisition

AREA consists of a SonixTOUCH ultrasound system (Ultrasonix Medical Corp., Richmond, Canada) equipped with a 6.6 MHz linear array transducer (L14-5/38; imaging parameters: depth 6.0 cm, dynamic range 56 dB, gain 50% and 16 frames per second), and a trinocular MicronTracker motion tracking system (Claron Technology Inc., Toronto, Canada) to track two markers. The first one is placed on the transducer and is referred to as the “Transducer Marker”. The second marker is affixed to the patient's back approximately 50 mm lateral to L3, which is close to the approximate puncture site but outside the sterilized area, and is referred to as the “Patient Marker”. A simplified workflow of spinal needle insertion with guidance from AREA is shown in Figure 2.6.

Figure 2.6: Workflow of AREA.

First, the system checks whether all markers are visible, followed by a prompt for the sonographer to start. The transducer is initially placed in the parasagittal plane, 10 mm away from the midline on the interspinous gap L5-S1. As shown in Figure 2.7, the sonographer moves the transducer superiorly across the laminae from the interspinous gap L5-S1 to the interspinous gap T12-L1 to acquire the set of images.⁵ During the scan, each image is recorded individually by pressing a foot pedal. The sonographer determines the suitability of each image for the subsequent steps, checking that the image contains:

1. A wave-like pattern from the laminae surfaces;
2. Bright echoes from the most superior surface of the laminae;
3. Distinct shadows under the laminae;
4. 50-70% overlap with the previous image.

⁵ N.B.
The sonographer also used a curvilinear transducer (C5-2, Ultrasonix Medical Corp., Richmond, Canada), which was capable of displaying two to three vertebral laminae in a single ultrasound image. These images were used to measure the vertebral height for the purpose of validation, as described in detail in the Experiments and Results section.

These criteria ensure that adjacent images contain similar anatomical features that allow inter-slice registration when generating the panorama of the lumbar spine. Moreover, the 50-70% overlap between images is a compromise between the accuracy of registration (large overlap) and the speed of examination (small overlap).

2.4  Panorama Generation

The position and orientation of the ultrasound images must be known for AREA to identify and display the lumbar levels accurately. Therefore, the N-wire calibration method integrated within PLUS [29] was used to calibrate the ultrasound image to the marker on the transducer. Using this calibrated transducer, a sequence of B-mode ultrasound images is acquired while tracked by the MicronTracker in the camera coordinate system. The stitching process then uses the estimated transformation between images from the tracker and a subsequent rigid registration using the Insight Toolkit (ITK) [21] to automatically register consecutive B-mode images of the vertebrae and create a panorama ultrasound image. To allow a smaller search space for the alignment parameters and a lower likelihood of large misregistration errors, the tracking information provided by the MicronTracker is used as an initial guess for the alignment, followed by a standard normalized cross-correlation with a gradient descent optimizer, linear interpolation of image intensities at non-integer pixel positions, and a translation transformation for the final alignment.
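The tracker-initialized, translation-only refinement can be illustrated with a deliberately simplified 1D sketch (the thesis operates on 2D images with ITK; the signals, window size and function names below are illustrative): the tracker's shift estimate seeds a small search window, and the shift maximizing normalized cross-correlation is kept.

```python
def ncc(a, b):
    """Normalized cross-correlation of two equally sized intensity lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a)
    db = sum((y - mb) ** 2 for y in b)
    return num / (da * db) ** 0.5 if da and db else 0.0

def refine_shift(fixed, moving, tracker_shift, search=3, window=8):
    """Refine the shift of `moving` against `fixed` near the tracker estimate."""
    best = (float("-inf"), tracker_shift)
    for s in range(tracker_shift - search, tracker_shift + search + 1):
        if s < 0 or s + window > len(fixed) or window > len(moving):
            continue
        best = max(best, (ncc(fixed[s:s + window], moving[:window]), s))
    return best[1]

# The tracker says "shift by 6", but the true overlap starts at index 5:
signal = [0, 1, 4, 9, 4, 1, 0, 2, 7, 2, 0, 1, 3, 1, 0, 0]
moving = signal[5:13]
print(refine_shift(signal, moving, tracker_shift=6))  # 5
```

Restricting the search to a few samples around the tracker estimate is what keeps the search space small and makes gross misregistration unlikely, mirroring the rationale given above.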
Even though the MicronTracker has a high reported accuracy (marker tracking error (RMS) of about 0.20 mm), it is sensitive to illumination conditions, specular reflections, and the orientation of a tracked tool, which affects the accuracy of tracking of individual ultrasound frames. These problems were also reported in a previous study by Maier-Hein et al. [36]. Therefore, to improve robustness in a clinical environment, we used an optimization and image registration algorithm to refine the tracking measurements. We used a gradient descent optimizer with normalized cross-correlation as the image similarity measure.

Figure 2.7: Ultrasound B-mode images are acquired by placing the transducer in the parasagittal plane 10 mm from the midline. The solid line shows the vertebral level the system identifies and the dashed line shows the imaging plane acquired by the sonographer.

Our experience with the system in preliminary testing suggested that the choice of the optimizer and similarity measure did not affect the registration outcome. However, omitting the image registration step resulted in small but observable misalignments of the sequential ultrasound images. This registration technique may still be susceptible to errors associated with accidental out-of-plane motions of the transducer, but with reasonable care during scanning, it generates panorama images sufficient for the purpose of vertebral level identification. An example panorama image is shown in Figure 2.8. If the operator is unsatisfied with the quality of the panorama, a new one can be generated, since a panorama can be acquired in less than 2 minutes. This time primarily comprises the time the operator takes to find the proper imaging plane, as it may be challenging in some cases to find a plane that clearly displays the epidural space. The actual panorama generation process, after acquiring all the ultrasound images, takes on the order of seconds.
After the panorama image is generated, an automatic image processing technique identifies the lumbar levels. An identification example is shown in Figure 2.8.

2.5  Vertebral Identification

The challenge of identifying the lumbar levels arises from speckle, low contrast, and shadowing in the ultrasound images. These challenges come from the complex shape of the vertebrae and the presence of multiple ligaments, muscles, and fat, all of which generate echoes dependent on the angle of incidence of the ultrasound beam. Therefore, several processing steps are required for successful vertebral identification [67]. Ultrasound echoes are stronger from specular reflectors, such as bony surfaces, than they are from soft tissues. This suggests that simple thresholding may be sufficient to separate the bone surface from tissue. However, previous research [20, 23] reports the difficulty of segmenting bone and tissue in ultrasound images with such simple thresholding. In AREA, we take advantage of the unique signature of the vertebrae in the ultrasound data (i.e. the shadow that appears under the lamina), and aim to segment this signature from the panorama images. Given the overall image intensity variations inherent between the ultrasound images of different subjects, we use an automatic thresholding technique based on Otsu's method [41]. This method assumes the image contains two classes of pixels: foreground (i.e. the soft tissue and lamina) and background (i.e. the shadow underneath each lamina). Under this assumption, the method calculates the optimum threshold separating these two classes so that the intra-class variances are minimal.

Figure 2.8: Example of two ultrasound panorama images. (a) Panorama obtained in the parasagittal plane, showing L1, L2, L3, L4, L5 and S1 from left to right. (b) The same panorama image showing the automatically identified levels L1, L2, L3, L4 and L5 from left to right.
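Otsu's threshold selection described above can be sketched compactly: the chosen threshold minimizes the intra-class variance, which is equivalent to maximizing the between-class variance of the histogram. This is a generic illustration of the method, not the thesis code; the pixel values are synthetic.

```python
def otsu_threshold(pixels, levels=256):
    """Return the Otsu threshold for a list of integer gray levels."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_bg, sum_bg = 0, -1.0, 0, 0.0
    for t in range(levels):
        w_bg += hist[t]                  # background (shadow) pixel count
        if w_bg == 0:
            continue
        w_fg = total - w_bg              # foreground (tissue/lamina) count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:       # maximize between-class variance
            best_var, best_t = var_between, t
    return best_t

# Two well-separated clusters, mimicking dark shadow vs. bright lamina:
pixels = [10, 12, 11, 13, 10] * 20 + [200, 210, 205, 195, 202] * 20
t = otsu_threshold(pixels)
print(13 <= t < 195)  # True: the threshold falls between the clusters
```

Because the threshold is recomputed per panorama, the segmentation adapts to the gain and brightness differences between subjects noted above.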
Figure 2.9: Work-flow of panorama image processing, thresholding and vertebral identification, with the final step showing the fusion of the identification results with the original panorama image.

The laminae in the lumbar spine appear as wave-like patterns in the panorama image. This signature is used to convert the two-dimensional panorama image to a one-dimensional signal. After thresholding, the ultrasound panorama image is scanned along the echo direction from the bottom of the image to the top. Whenever a value above Otsu's threshold from the previous step is reached, the index value is used as a sampled point in the one-dimensional signal, as shown in Figure 2.9. Then a median filter is applied to the signal, followed by a peak detection technique to identify the peaks in the signal. The peak detection technique proceeds from left to right of the image, and when it detects a maximum followed by a minimum, it assigns a peak label. For each of the peaks, a threshold equal to the height of the vertebrae is used to remove any false peaks. Next, the thresholded peaks are considered to be the middle sections of the laminae in the panorama ultrasound image, as shown in Figure 2.9. Given that the scanning starts at L5-S1, the vertebrae are labelled sequentially from L1 to L5.

2.6  Visualization

The MicronTracker is calibrated to extract coefficients of projection ray equations that convert pixel locations in an image into projection rays in the camera coordinate system. In the camera Software Development Kit (SDK), the back-projection ray is represented by a point in space and its angular orientation from the image axes. The equation coefficients are found by presenting a 3D grid of targets to the camera; the projection coefficients are then tuned to obtain the best match possible between the nominal spatial positions of the targets and the projections representing their locations in the image.
After that, the calibration parameters are stored in a file. To identify the markers, the MicronTracker processes the images and matches them to the descriptors in the marker templates. Marker projections onto each image were found to always exceed a minimum footprint diameter of 9-11 pixels. After identifying a marker in the image, its 3D position is calculated by triangulating the projection rays, obtained from the calibration file, associated with the location in the image where the marker centre is observed. We used a standard volume ray casting method [70] to overlay the identified lumbar levels on the corresponding location in a live video stream of the patient's back from the MicronTracker. A 3D point coordinate in the camera space is transformed to a 2D pixel location in the image plane using the MicronTracker SDK and the calibration file. To label a live video image of the patient's back, the system finds (1) the anchor transform, which is the transform between the location of the first ultrasound image and the camera, and (2) the patient transform, which is the transform between the patient marker and the camera. Using the anchor and patient transforms, the system finds the transform between the anchor position of the panorama image and the patient marker, which we will refer to as the “Image-to-Patient” transform. Then, for each identified vertebra, a line in the coronal plane across the vertebra is transformed to the 3D coordinates of the camera using the Image-to-Patient and patient marker transforms. The intersection of the projection rays with the imaging plane of the MicronTracker centre camera is used to calculate the location where the 3D point will appear in the MicronTracker image, and the identified lines are overlaid on the live camera view [22].

Figure 2.10: The GUI developed using 3D Slicer showing the vertebral levels (black lines) overlaid on the video image of the patient's back.
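The mapping of a 3D camera-space point to a 2D pixel described above can be illustrated with a simple pinhole-camera sketch. Note the assumptions: AREA obtains this mapping from the MicronTracker SDK and its calibration file, whereas the focal lengths and principal point below are hypothetical values chosen only for the example.

```python
def project(point_cam, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project a 3D point (mm, camera coordinates) to a 2D pixel (u, v)."""
    x, y, z = point_cam
    if z <= 0:
        raise ValueError("point is behind the camera")
    # Pinhole model: perspective divide by depth, then shift by the
    # principal point (cx, cy).
    return (fx * x / z + cx, fy * y / z + cy)

# A vertebral-level endpoint 500 mm in front of the camera, 50 mm to the
# right and 25 mm down, lands here on the image plane:
u, v = project((50.0, 25.0, 500.0))
print(u, v)  # 400.0 280.0
```

Projecting both endpoints of each identified vertebral line this way, and redrawing whenever the patient marker moves, gives the live overlay behaviour described in the text.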
Table 2.1: Types of experiments performed, gold standard used, and measurements/labels that have been defined in this thesis. Numbers used in the table refer to the measurements defined within the text.

Experiment | Gold Standard | Measurement
Accuracy of vertebral height in panorama images | 1a) Curvilinear vertebral height | 2d) Panorama vertebral height
Accuracy of vertebral level identification in panorama images | 2a) Panorama vertebral levels; 3c) Actual vertebral count | 2b) AREA vertebral levels; 2c) Kerby vertebral levels; 2e) AREA vertebral count; 2f) Kerby vertebral count
Accuracy of vertebral level identification on skin | 3a) Actual vertebral labels at the resting position | 3d) AREA vertebral labels at the resting position
Accuracy after arching forward | 3b) Actual vertebral labels with 5° and 10° arching forward | 3e) AREA vertebral labels with 5° and 10° arching forward

The image with the labels is transferred in real time to 3D Slicer [16] through OpenIGTLink and displayed to the anesthesiologist on a standard monitor (see Figure 2.10). Given a small range of patient motion, the positions of the overlaid lines are automatically updated as the camera tracks the changing position of the patient's marker.

2.7  Experiments and Results

Experiments were carried out on 17 subjects following informed consent. Ethics approval for this study was obtained from our institution's Research Ethics Board. We conducted four experiments, as shown in Table 2.1, and took the following measurements:

1. Measurements on single 2D ultrasound images:

(a) Curvilinear vertebral height: For each subject, the sonographer measured from the superior margin of one vertebral lamina to the superior margin of the next vertebral lamina on images obtained with a curvilinear transducer (C5-2, Ultrasonix Medical Corp., Richmond, Canada). This transducer was used as it is capable of displaying two to three vertebral laminae in a single ultrasound image.

Table 2.2: Mean ± standard deviation of Curvilinear vertebral height, Panorama vertebral height, and the absolute error calculated as the difference between those two measurements. Units are in millimetres; N=17.

Metric | L1 | L2 | L3 | L4 | L5
Curvilinear vertebral height | 31.5 ±3.3 | 33.4 ±4.2 | 33.5 ±5.3 | 31.8 ±4.1 | 31 ±4
Panorama vertebral height | 28.9 ±4.8 | 32.9 ±5 | 35.1 ±6 | 38.2 ±8.6 | 35.7 ±8.7
Mean absolute error | 5.4 ±3.2 | 5.5 ±4.6 | 5.8 ±5.2 | 10 ±5 | 8.4 ±7.6

Table 2.3: Mean of the absolute error between AREA and Panorama vertebral levels, and between Kerby and Panorama vertebral levels. Units are millimetres; N=17.

Vertebra | L1 | L2 | L3 | L4 | L5
Mean absolute error between AREA and Panorama vertebral levels | 4.2 ±3 | 2.5 ±2.4 | 3 ±2.8 | 3.7 ±4.2 | 2.7 ±2.2
Mean absolute error between Kerby and Panorama vertebral levels | 2.2 ±1.8 | 4.4 ±5.1 | 4.4 ±6.2 | 2.9 ±3.3 | 2.3 ±2

2. Measurements on the panorama image:

(a) Panorama vertebral levels: The sonographer identified each vertebral level on the panorama images. She marked the midpoint of the shadow generated by the posterior surfaces of the laminae.

(b) AREA vertebral levels: AREA identified the vertebral levels on the panorama images.

(c) Kerby vertebral levels: Kerby et al.'s algorithm [26] was used to identify vertebral levels on the panorama images. A brief description of this algorithm is provided below.

(d) Panorama vertebral height: The sonographer determined the height of the vertebrae in the panorama images by measuring from the superior margin of one vertebral lamina to the superior margin of the next lamina.

(e) AREA vertebral count: AREA was used to count the number of vertebrae in the panorama image.

(f) Kerby vertebral count: Kerby et al.'s algorithm was used to count the number of vertebrae in the panorama image.

3.
Measurements on the skin:

(a) Actual vertebral labels at the resting position: These measurements were performed with each subject sitting in a comfortable upright position without purposefully arching the spine forward (i.e. sitting in the “resting” position). The sonographer used the linear array transducer to manually identify the five lumbar vertebrae, either by scanning up from the sacrum or by scanning down from the bottom rib, and labelled the midpoint of each lumbar vertebra on the subject's skin with a felt pen.

(b) Actual vertebral labels with arching forward: The measurements above were repeated while each subject arched forward until the screw angle (see Section III.D for the description of the screw angle) of the patient marker, relative to the resting position, was changed by 5° and 10°, respectively.

(c) Actual vertebral count: The sonographer used the linear array transducer to manually count the five lumbar vertebrae, either by scanning up from the sacrum or by scanning down from the bottom rib.

(d) AREA vertebral labels at the resting position: These measurements were performed with each subject sitting in the resting position. The sonographer acquired a sequence of ultrasound images, and AREA displayed virtual markings of the identified vertebral levels on an augmented video of the subject's back. The vertebral levels, which appear on the monitor, were used to label each subject's back with a felt pen.

(e) AREA vertebral labels with arching forward: The measurements above were repeated while each subject arched forward until the screw angle of the patient marker, relative to the resting position, was changed by 5° and 10°, respectively.

The experiments were divided into the following parts:

2.7.1  Accuracy of Vertebral Height in Panorama Image

This experiment tested the accuracy of the panorama images generated by AREA.
The sonographer obtained the curvilinear vertebral height and panorama vertebral height measurements; the absolute error was then calculated between these two measurements. The vertebral height measured with the curvilinear transducer, the vertebral height measured in the panorama images, and the difference between these two measurements are reported in Table 2.2. In summary, two different transducers and two different scanning methods (single image versus panorama) were used in order to obtain two independent measurements of the vertebral height. We used the nonparametric Mann-Whitney U test to compare the measurements obtained with the two approaches. The test failed to show a statistically significant difference between the two measurements (p > 0.05), except for L4 (p = 0.005).

2.7.2 Accuracy of Vertebral Level Identification in Panorama Image

In order to evaluate the accuracy of the AREA vertebral level identification algorithm, we measured the distance between panorama vertebral levels and AREA vertebral levels. We also compared the performance of the vertebral level identification technique proposed in this paper with a competing method previously developed by Kerby et al. [26]. Error was calculated as the distance between panorama vertebral levels and Kerby vertebral levels identified in the panorama images. Kerby et al.'s algorithm first applies a median filter with a window width and height twice the ultrasound signal wavelength. Then, a linear filter, which operates in the vertical and horizontal directions, is used to highlight bone edges and enhance the periodic nature of the vertebrae. After that, hard thresholding is used to set two-thirds of the pixels to zero, and least-squares parabolas are fit to the image. The minimum of each fitted parabola is identified as a vertebra. More details can be found in Kerby et al.'s paper [26].

Table 2.4: Number of false AREA and Kerby vertebral counts (N = 82). Using the linear transducer, the sonographer could not identify three of the vertebrae because they were fused with a neighbouring vertebra.

Method/Vertebra                      L1  L2  L3  L4  L5
No. of false AREA vertebral levels   0   2   0   1   0
No. of false Kerby vertebral levels  0   1   0   0   0

AREA and Kerby et al.'s algorithms were compared in terms of the mean absolute error, the number of vertebrae identified, and the number of false identifications. The results are reported in Tables 2.3, 2.5 and 2.4. The false identifications reported in Table 2.4 are mostly around the interspinous gaps because of low image intensity at that location, which occasionally results in inaccurate segmentation of the panorama image using Otsu's threshold and, subsequently, wrong identification at the interspinous gaps. The Mann-Whitney U test comparing the absolute errors reported by AREA and Kerby et al.'s algorithms in Table 2.3 failed to show a statistically significant difference between the two measurements (p > 0.05).

Table 2.5: Actual vertebral count, AREA vertebral count and Kerby vertebral count (N = 82). For the actual vertebral count, the sonographer could not identify three of the vertebrae because they were fused with a neighbouring vertebra.

Method                  L1  L2  L3  L4  L5
Actual vertebral count  16  17  17  17  15
AREA vertebral count    16  16  17  16  14
Kerby vertebral count   16  15  14  15  13

2.7.3 Accuracy of Vertebral Level Identification on the Skin

To test the accuracy of identifying and overlaying vertebral levels on a live video stream of the patient's back, the error was defined as the distance between the AREA vertebrae labels at the resting position and the actual vertebral labels at the resting position. This error was measured on the skin of the volunteer, and the mean absolute error and standard deviation of this error are reported in Table 2.6.
Table 2.6: Mean and standard deviation of the absolute difference between the AREA vertebrae labels at the resting position and the actual vertebrae labels at the resting position, measured on the subject's back. Units are millimetres, N=17.

Metric  L1    L2   L3   L4   L5
Mean    13.6  8.1  4.2  4.2  4.2
Std     10.9  7.5  3.5  4.7  4.5

2.7.4 Accuracy of Spine Arching

In spinal needle insertion procedures, the patient is typically asked to arch forward to increase the width of the window to the epidural space. However, the patient may change their arch after being imaged by AREA, thereby changing the location of the vertebral levels with respect to the patient marker. After the system identified the vertebral levels at the resting position, we asked each subject to arch further forward until the screw angle of the marker was changed by 5° and 10°, respectively. We use screw angle measurements to determine the relative rotation of the marker orientation between two arching positions of the patient. All measurements are performed relative to the orientation of the patient marker in the resting position of the subject; the screw angle Φ is calculated as in Equation (2) of Spoor et al. [62]:

    R = [ R11 R12 R13 ]
        [ R21 R22 R23 ]                                               (2.1)
        [ R31 R32 R33 ]

    Φ = sin⁻¹( (1/2) √( (R32 − R23)² + (R13 − R31)² + (R21 − R12)² ) )   (2.2)

where R is the rotation matrix around the screw axis. Each volunteer was asked to arch forward until the screw angle of the marker was changed by 5° with respect to the resting position. Then the distance between the actual vertebrae labels with 5° arching forward and the AREA vertebrae labels with 5° arching forward was measured on the volunteer's back. After that, the volunteer was asked to arch further forward until the screw angle of the marker was changed by 10° with respect to the resting position. Then the distance between the actual vertebrae labels with 10° arching forward and the AREA vertebrae labels with 10° arching forward was measured on the volunteer's back. The mean absolute error of these measurements for each vertebral level is reported in Table 2.7.

Table 2.7: Comparison of the absolute error of AREA for different spine arching angles. Units are millimetres, N=17.

Angle  L1    L2    L3   L4   L5    Total mean absolute error
0°     13.6  8.1   4.2  4.2  4.2   6.9
5°     16.5  7.1   4.7  9.2  11.3  9.7
10°    23.7  10.5  6.3  7.9  11.7  11.9

2.8 Discussion

We presented a novel needle puncture site selection system for reducing the risk associated with lumbar spine needle insertion. The new augmented reality system concept has been successfully tested on 17 subjects, and the results show the accuracy of identifying lumbar vertebral levels and overlaying the information onto a live video stream of the patient's back (mean absolute error 21% of the vertebral height). Figure 2.11 shows a simple transformation chain illustrating the major factors contributing to the total system error. In the following sections, we provide further descriptions of the various sources of error in each step. Note that a vertebral level is considered to be identified correctly if the error is less than half the vertebral height.

2.8.1 Accuracy of Vertebrae Height in Panorama Images

In this experiment, the mean absolute error between the curvilinear vertebral height and the panorama vertebral height was 7.1 mm. This error is nonetheless small enough for the purpose of vertebral level identification (less than half the vertebral height), indicating that the panorama images generated by the proposed system are sufficiently accurate.
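The error criterion used throughout the discussion — the mean absolute error between paired measurements, with a level counted as correctly identified when its error is under half the vertebral height — can be sketched as follows. This is a minimal illustration rather than the thesis code; the sample values are the per-level mean heights from the tables, used purely to exercise the functions (the actual analysis pairs measurements per subject).

```python
def mean_absolute_error(a, b):
    """Mean absolute error between two paired measurement lists (mm)."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def level_identified_correctly(error_mm, vertebral_height_mm):
    """A vertebral level counts as correctly identified when the
    error is less than half the vertebral height."""
    return error_mm < vertebral_height_mm / 2.0

# Per-level mean heights (mm), used here only as sample input
curvilinear = [31.5, 33.4, 33.5, 31.8, 31.0]
panorama = [28.9, 32.9, 35.1, 38.2, 35.7]
mae = mean_absolute_error(curvilinear, panorama)
```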
The major factors contributing to this error include: ultrasound image calibration error (approximately 1 mm), patient motion during the acquisition of tracked ultrasound images, acquisition of images out of plane, MicronTracker localization error, and MicronTracker calibration error.

Figure 2.11: Summary of the major factors contributing to the overall error. Each arrow indicates an error contributing to each of the modules in the system.

Further improvements may be possible by using a 3D transducer and an algorithm to automatically select optimal images from each acquired ultrasound volume. This may improve the generation of the panorama images, which subsequently may improve the overall system performance. Moreover, a more accurate tracking system could be used, but at greater cost.

2.8.2 Accuracy of Vertebral Levels Identification in Panorama Images

We also reported the accuracy of vertebral level identification. This experiment shows the accuracy of AREA in identifying individual vertebrae in the panorama images. As shown in Table 2.3, the mean absolute error of vertebral level identification is 3.2 mm, with an identification rate of 96% and a false identification rate of 3.7% (3 false identifications). The false negatives are due to weak reflection from the posterior surface of the lamina in the panorama images. The false positives are mostly because of low image intensity at the interspinous gaps, which results in inaccurate segmentation of the panorama image using Otsu's threshold. In addition, AREA vertebral level identification was compared to Kerby et al.'s algorithm, as shown in Table 2.3. The mean absolute errors of AREA vertebral level identification (3.2 mm) and Kerby et al.'s algorithm (3.2 mm) are both less than half the vertebral height, which indicates that both methods can be used to identify the levels. However, the AREA identification rate is 96%, while Kerby et al.'s algorithm's identification rate is only 90%.
As shown in Table 2.5, AREA identified the L3 vertebra, the typical injection site, for all subjects, while most of Kerby et al.'s algorithm's misidentifications were at L3. The false identification rate of AREA is 3.7% (3 false identifications), compared to 1.2% (one false identification) for Kerby et al.'s algorithm. This error is not expected to affect the outcome of needle puncture site selection, since the operator can visually detect a false identification from the augmented lines on the video of the patient's back and the generated panorama image. The vertebral level identification error of 3.2 mm may be reduced by registering a statistical shape model [49] of the lumbar spine anatomy to the panorama image, which, in turn, provides more data for vertebra identification but at greater computational expense. Moreover, using a more accurate tracking system will also improve the outcome of the panorama generation and vertebral level identification modules, at higher cost.

2.8.3 Accuracy of Vertebral Levels Identification on the Skin

The overall system accuracy is shown in Table 2.6, with a total mean absolute error of 6.9 mm (shown in Table 2.7). The mean absolute error is less than half the vertebral height except for the L1 vertebra. This higher error for L1 is likely due to the fact that the L1 vertebra has low intensity and a flat shape in ultrasound images; thus it is difficult to define the midpoint of the shadow generated by the posterior surface of the lamina. All the previous modules contributed to the error at this stage. Specifically, this includes the errors from projecting the identified lumbar levels onto a live video of the patient's back.

2.8.4 Accuracy of Spine Arching

In the arching accuracy experiment, given forward arching of the subject up to 10°, the maximum mean absolute error observed was 11.9 mm.
Given that the marker was affixed to the patient's back close to L3, the error was highest for the vertebrae farthest away, i.e. L1 and L5. The mean absolute error increases with forward arching because the distance from the patient marker increases. These errors are less significant because L1 and L5 are farthest from the typical injection site, which is generally close to L3 (L3-L4 or L2-L3). If needed, the error at L1 and L5 could also be reduced by using multiple tracking markers on the skin of the patient that span the sacral, lumbar and thoracic regions. The proposed system introduces less than 2 minutes of overhead to the routine clinical examination process. This includes acquiring tracked ultrasound images, vertebral level identification, and overlaying the identified levels on a live video image of the patient's back. One drawback of this system is that a missed vertebra will result in wrong labelling of the levels. Therefore, the system allows the operator to decide the first and last vertebrae to be imaged, which are thereafter used to count, label, overlay and display the levels. In the future, the system can be further developed to allow the operator to specify missing vertebral labels, which will help in case any vertebra other than L5 and L1 is missing. Another possible drawback is the increase in intervening tissue between the ultrasound probe and the lamina in obese patients, which may affect the image quality and the results of lumbar identification. However, obesity is expected to have a smaller effect on the AREA vertebral level identification algorithm, in which the segmentation and vertebra identification parameters are automatically chosen depending on the properties of the panorama image, than on Kerby et al.'s algorithm, which uses a simple threshold that zeroes two-thirds of the pixels. More tests on obese patients are needed.
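Otsu's threshold, used in the segmentation step discussed above, selects the intensity cut that maximizes the between-class variance of the grey-level histogram. The following is a generic pure-Python sketch of Otsu's method over an 8-bit histogram, not AREA's actual implementation (whose parameters are chosen automatically from the panorama properties):

```python
def otsu_threshold(pixels):
    """Return the 8-bit intensity threshold maximizing the
    between-class variance (Otsu's method)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_bg = 0.0   # running sum of intensities in the background class
    w_bg = 0       # running count of background pixels
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a cleanly bimodal intensity histogram the returned threshold separates the two modes; low-contrast regions such as the interspinous gaps are exactly where this separation becomes unreliable.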
The performance of the panorama generation step will degrade in the presence of large transducer rotations. To analyze the effect of such rotations, we performed an experiment where we scanned a volunteer 10 times from L5-S1 to T12-L1 and successfully generated 10 independent panorama images. We measured the rotation of the scan planes relative to the first scan plane in each generated panorama image. The extent of rotation around the axial axis is 4.1° ± 5°, the extent of rotation around the lateral axis is 0.9° ± 7.7°, and the extent of rotation around the elevation axis is 9.5° ± 10.2°. These rotation angles indicate the range of probe rotations during panorama acquisition while anatomically correct features remain visible in the acquired ultrasound scans. We would like to emphasize that the acceptable range is most likely operator and subject specific.

Chapter 3 Insertion Slice Detection

3.1 Introduction

In Chapter 2, a system was proposed to identify lumbar vertebral levels to assist in needle puncture site selection. After identifying the vertebral levels, the needle should be inserted in a mid-sagittal plane until it reaches the epidural space. Needle insertion is a challenging task and should be performed carefully to avoid overshooting the needle into the spinal cord, which will result in puncture of the dura mater and leakage of cerebrospinal fluid. This common complication leads to side effects such as post-dural-puncture headache. To address this problem, the use of 2D ultrasound has been proposed by Grau et al. [19] to provide information about the optimal needle puncture site, trajectory and depth of insertion. They show that the use of ultrasound imaging in obstetric regional anesthesia has a higher rate of success compared to the loss-of-resistance technique. However, midline needle insertion guidance using a 2D transducer is difficult for three reasons:

1. The 2D ultrasound transducer obscures the puncture site.

2. Placing the transducer in a parasagittal plane makes it impossible to view the needle tip and the target together in the same ultrasound image.

3. Conventional tracking methods require the tracking sensor to be placed either inside the needle close to the tip, which would block the passage of saline during the standard loss-of-resistance procedure, or on the base of the needle, which reduces tracking accuracy due to needle bending.

Rasoulian et al. [2] proposed a solution to guide midline needle insertion by placing a 3D transducer equipped with a needle guide in a paramedian plane, so that the needle is placed in the midline plane. Then a virtual plane (which we will refer to as the re-slice plane) containing the needle path is extracted from ultrasound volumes that depict both the needle and the epidural space. A limitation of this technique is that the ultrasound images are hard to interpret, and the wave-like appearance of the lamina can be confused with the appearance of the facets or the transverse processes, as shown in Figure 3.1. To solve this problem, an automatic technique has been developed that identifies a plane through a 3D ultrasound volume such that the slice plane includes the target epidural space and the wave-like appearance of the laminae of the vertebrae. This allows the anesthesiologist to gain confidence in interpreting the live re-slice plane that is used to select the needle trajectory. When the live re-slice plane matches the plane correctly identified by the algorithm, the anesthesiologist can proceed to insert the needle toward the target.

3.2 Data Acquisition

Thirty-two volumes were acquired from four volunteers using a SonixTOUCH ultrasound system (Ultrasonix Medical Corp., Richmond, Canada) equipped with a 5 MHz transducer (4DC7-3/40, Ultrasonix Medical Corp., Richmond, Canada).
The transducer was placed on a parasagittal plane within 10 mm of the midline and was gently angled away from the parasagittal plane by 5°–10°, as described by Tran et al. [69]. The ultrasound beam was angled to intersect the laminae at the base of the spinous process. Ultrasound volumes were acquired by placing the transducer on the lumbar interspinous gaps on each side of the lumbar spine.

Figure 3.1: Three slices extracted from a 3D ultrasound volume. The first plane shows the laminae, the second plane shows the facet joints and the third plane shows the transverse processes. All three slices have a similar wave-like pattern, which can be confusing.

The sonographer chose the optimal needle insertion planes for training and testing purposes, which met the following criteria:

1. Wave-like pattern from the lamina surfaces;

2. Bright echoes from the most superior surface of the lamina for at least two adjacent vertebrae (L1-L5);

3. Distinct shadows under the laminae;

4. Images show the ligamentum flavum;

5. The best image will show the anterior wall of the spinal canal.

3.3 Feature Extraction and Construction of Weak Classifier

In this study, Haar-like features, which have been extensively studied by Viola et al. [71] and Papageorgiou et al. [42], were used to detect the lamina in ultrasound images. In their study, Viola et al. [71] were able to detect faces in 384 × 288 pixel images at a rate of 15 frames per second, with performance equivalent to the best published work [40, 53, 54, 58, 64] in the field of face detection. In another study, Rahmatullah et al. [45] used Haar-like features to automatically select standard planes from fetal ultrasound volumes with a precision of 76% and a recall of 91%. Figure 3.2 shows the five features extracted from ultrasound slices. Features (a), (b), (c) and (d) detect horizontal and vertical edges, and feature (e) detects diagonal edges in the ultrasound slices. Viola et al.
[71] showed how these features encode relative pixel information and the position of this information in an image. Haar-like features are extracted by subtracting and adding adjacent rectangles (i.e. the sum of pixel values in the black rectangles minus the white rectangles) according to Equation 3.1, adopted from Viola et al. [71]:

    Value = (1/N) ( Σ_{i=1}^{N} Black_i − Σ_{i=1}^{N} White_i )        (3.1)

For fast calculation of these features, the integral image was suggested by Viola et al. [71], which can be defined as a lookup table with the size of the original image. Each element of this lookup table is the sum of all pixels in the up-left region of that pixel, as shown in Figure 3.3:

    I(x, y) = Σ_{x* < x, y* < y} i(x*, y*)                             (3.2)

Therefore, computing the sum of pixels in a rectangle requires only four operations according to Equation 3.3, adopted from Viola et al. [71] and using the notation of Figure 3.4:

    Σ_{a(x) < x* < d(x), a(y) < y* < d(y)} i(x*, y*) = I(a) + I(d) − I(b) − I(c)   (3.3)

During the training process, the width X, length Y and location (XΨ, YΨ) of each feature type (Figure 3.6; the feature types are shown in Figure 3.2) were varied, and the feature value was extracted from the set of positive and negative training examples. Then a simple threshold (Equation 3.4, adopted from Acevedo [3]) is applied for each feature type, width, length and location to find a classifier with a total error of less than 50% and a positive error of less than or equal to 5%, which we will refer to as a "weak classifier". The total error and positive error are calculated as shown in Equations 3.5 and 3.6, adopted from Acevedo [3].

Figure 3.2: Features extracted from ultrasound volume slices.

Figure 3.3: The value at a point (x, y) in the integral image is the sum of all pixels in the up-left region of that pixel. Figure was adopted from Viola et al. [71].

Figure 3.4: An example showing how to calculate the sum of pixels in a rectangle D.
The value of the integral image is used at four locations: a, b, c and d. The value at location a is the sum of pixels in region A, the value at location b is the sum of pixels in regions A & B, the value at location c is the sum of pixels in regions A & C, and the value at location d is the sum of pixels in regions A & B & C & D. Therefore, to calculate the sum of pixels in region D we add a and d and subtract b and c: SUM(D) = a + d − c − b. Figure was adopted from Viola et al. [71].

Figure 3.5: The images were divided into two classes: the Vertebrae class, which contains vertebra sub-windows, and the Non-Vertebrae class, which contains other parts of the images that do not correspond to vertebra sub-windows. The first column corresponds to sub-windows from the first class, and the second and third columns correspond to sub-windows from the second class.

    h_t(x) = 1 if Threshold_1 < f_t(x) < Threshold_2; 0 otherwise      (3.4)

where f_t is the value of the Haar-like feature, and the values of Threshold_1 and Threshold_2 are automatically adjusted to obtain a total error of less than 50% and a positive error of less than or equal to 5%.

    Total error = Σ_{i=1}^{n} w_i |h_t(x_i) − y_i|                     (3.5)

    Positive error = Σ_{j=1}^{p} w_j (1 − h_t(x_j))                    (3.6)

where n is the total number of training images, y_i is 1 for each positive image and 0 for each negative image, w contains the weights for each image in the training set, and p is the number of positive images.

Figure 3.6: An example of the parameter representation of a feature.

3.4 Learning Classifiers

AdaBoost, introduced by Freund and Schapire [12], has been used to detect objects such as faces in natural images [71] and anatomical landmarks in ultrasound images [46]. The reason for using AdaBoost is the large number of Haar-like features that can be extracted from an image, which makes it prohibitively expensive to construct weak classifiers from all of these features.
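The integral-image lookup (Equations 3.2–3.3) and the black-minus-white feature value (Equation 3.1) can be sketched as below. This is a minimal illustration with our own helper names, not the thesis implementation; padding the table with an extra zero row and column is a common convenience not spelled out in the text.

```python
def integral_image(img):
    """I[y][x] = sum of img over rows < y and cols < x.
    The extra zero row/column simplifies the corner lookups."""
    h, w = len(img), len(img[0])
    I = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            I[y + 1][x + 1] = I[y][x + 1] + row_sum
    return I

def rect_sum(I, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left (x, y),
    using four lookups: I(d) + I(a) - I(b) - I(c)."""
    return I[y + h][x + w] + I[y][x] - I[y][x + w] - I[y + h][x]

def two_rect_feature(I, x, y, w, h):
    """Haar-like edge feature: black (left) rectangle minus white
    (right) rectangle, normalized by rectangle area (Equation 3.1)."""
    black = rect_sum(I, x, y, w, h)
    white = rect_sum(I, x + w, y, w, h)
    return (black - white) / (w * h)
```

Whatever the rectangle size, each feature costs a fixed handful of lookups, which is what makes evaluating many candidate features per sub-window affordable.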
Therefore, AdaBoost is used to choose and combine a small number of weak classifiers to form a strong classifier. In each iteration of AdaBoost, a new weak classifier is found and the weights are updated according to the equations shown in Algorithm 1. The weight-updating process gives higher weights to misclassified images and lower weights to correctly classified images. Therefore, in each step of AdaBoost, the purpose of the new classifier is to classify the images misclassified in the previous step.

Algorithm 1: Classifier training using AdaBoost. Algorithm was adopted from Viola et al. [71].

    Input: training examples (x_1, y_1), ..., (x_n, y_n), where y_i ∈ {0, 1} for negative and positive examples respectively.
    1  Set the initial weights W_{1,i} = 1/n, where n is the total number of images.
    2  for t ← 1 to T do
    3      Normalize the weights: W_{t,i} = W_{t,i} / (Σ_{k=1}^{n} W_{t,k})
    4      Find the weak classifier with minimum weighted error: h_t = argmin_{h_j ∈ H} e_j, where e_j = Σ_{i=1}^{n} W_{t,i} [y_i ≠ h_j(x_i)]
    5      Update the weights: W_{t+1,i} = W_{t,i} B_t^{1−e_i}, where B_t = e_t / (1 − e_t) and e_i = 0 if example i is classified correctly, 1 otherwise
    6  The final strong classifier is a weighted sum of all the weak classifiers:
           H(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) > (1/2) Σ_{t=1}^{T} α_t, and 0 otherwise,
       where α_t = log(1/B_t). For the purpose of this research, Σ_{t=1}^{T} α_t h_t(x) will be used as the score of each detected vertebra in ultrasound images.
    7  return H(x)

3.4.1 Training Cascade Classifiers

The design of a cascade classifier is driven by the requirement to achieve a high detection rate with a low false positive rate. This requirement is motivated by the fact that there are few vertebra sub-windows in an image and a large number of non-vertebra sub-windows. For example, in the field of face detection, the desired detection rate is up to 95% with a false positive rate as low as 10⁻⁶, owing to the small number of faces in an image. These detection and false positive rates were adjusted and used in the training process for this work.
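The compounding of per-stage rates across a cascade can be illustrated with hypothetical numbers — 99% detection and at most 50% false positives per stage over ten stages:

```python
# Hypothetical per-stage targets for a 10-stage cascade:
# each stage keeps d = 99% of true positives and lets through
# at most f = 50% of false positives.
k = 10
d, f = 0.99, 0.50

overall_detection = d ** k   # product of per-stage detection rates
overall_false_pos = f ** k   # product of per-stage false positive rates
```

With these numbers the cascade retains roughly 90% of the true positives while passing only about 0.1% of the false positives, which is why each individual stage can afford to be weak.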
For any cascade detector, the false positive rate is calculated as [71]:

    F = Π_{i=1}^{k} f_i                                                (3.7)

where F is the total false positive rate, f_i is the false positive rate of the ith classifier, and k is the number of stages in the cascade. The detection rate is calculated as [71]:

    D = Π_{i=1}^{k} d_i                                                (3.8)

where D is the total detection rate and d_i is the detection rate of the ith classifier. Using AdaBoost alone will only minimise the error, without specifically achieving high detection rates and low false positive rates. Therefore, a more sophisticated algorithm, suggested by Viola et al. [71], is used, as shown in Algorithm 2, to achieve the desired detection and false positive rates. This algorithm was used to train a set of cascade classifiers, using AdaBoost for each stage of the cascade. For the first stage, the algorithm searches for 10 weak classifiers and adjusts the strong classifier threshold to achieve a detection rate of 99% and a false positive rate of at most 50% on the training data. Each subsequent stage starts with a single weak classifier. The algorithm then adds a weak classifier and adjusts the threshold of the perceptron on each iteration to achieve the desired detection and false positive rates for that stage. A cascade of classifiers is constructed as shown in Figure 3.7. The first classifier will have a detection rate of more than 99% with a false positive rate of at most 50%. Even though this classifier is far from acceptable as a vertebra classifier on its own, it reduces the number of sub-windows that need to be processed. The following classifiers remove as many of the remaining negatives as possible, while the positives trigger the subsequent classifiers in the cascade.

Figure 3.7: A series of classifiers applied to each subwindow.
The first classifier removes most of the non-vertebra sub-windows, and the following classifiers remove more of the remaining non-vertebra sub-windows.

3.5 Slice Selection

Each volume was rotated by angles from −10° to 10° around the parasagittal plane. The rotation angles were chosen according to Tran et al. [69] to obtain a visible wave-like pattern from the laminae in the ultrasound volume. The centre of rotation was the interface plane between the transducer and the volunteer's skin, and the volume is regarded as slices of y-axis planes. For each slice, a sliding sub-window is used, which slides across the entire slice; the maximum score of a sub-window in a slice is considered the score for that slice, which we will refer to as the "image score". For the purpose of this research, the score of each sub-window is calculated as:

    Score = Σ_{t=1}^{L} α_t h_t(x)                                     (3.9)

where L is the number of weak classifiers that detected a vertebra in the image and α_t is the weight of each weak classifier.

Algorithm 2: Algorithm to build the cascade of classifiers. Algorithm was adopted from Viola et al. [71].

    Input: the user selects the required overall target accuracy F_target; d: the minimum detection rate per layer; f: the maximum false positive rate per layer; N: set of negative examples; P: set of positive examples; V: a validation set of positive and negative examples.
    1   F_0 = 1.0; D_0 = 1.0
    2   i = 0; n_i = 0
    3   while (F_i > F_target & D_i < d * D_{i−1}) do
    4       n_i ← n_i + 1
    5       Using AdaBoost, train a classifier with n_i features on P and N
    6       Determine F_i and D_i using the validation data set V
    7       Decrease the threshold of the current layer until the detection rate is at least d * D_{i−1}
    8       Clear the negative image data set N
    9       If F_i > F_target, evaluate the current cascade of classifiers on the non-vertebrae images and put any false detections into the set N
    10  return the cascade of classifiers

For each volume, the image score is found, and a weighted mean (Equation 3.10) of all images with a score of more than 85% is used to choose the optimal slice from the volume. The implementation of the method is shown in Figure 3.8.

    Value = (1/N) Σ_{i=1}^{N} SliceNumber(i) × Score(i)                (3.10)

3.6 Experiments and Results

During the training process, eight volumes were retained for testing the cascade of classifiers, and the remaining data (24 volumes) were divided into two groups: 1) 16 volumes were used as training data; 2) eight volumes were used for validation. The validation data are used to adjust the threshold of the perceptron to achieve the desired detection and false positive rates for each stage.

Figure 3.8: Slice selection algorithm workflow.

The trained detector consisted of three stages of classifiers with a total of 90 weak classifiers. The number of weak classifiers and the threshold for the last stage were determined through a trial-and-error process in order to reduce the training time of the detector. Positive and negative examples were generated to train the cascade classifiers. The positive data set was generated by making the ligamentum flavum the centre of a bounding box around the wave-like pattern of the lamina in the ultrasound image, then extracting that part of the image. The negative examples were generated from all images which the sonographer did not choose as optimal slices.
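The slice-selection rule above — keep the slices whose image score clears the 85% cut and take a weighted mean of their indices — can be sketched as follows. This is an illustration with hypothetical scores; we use a conventional score-weighted mean, whose normalization may differ from Equation 3.10, and we interpret the 85% cut relative to the best slice score.

```python
def select_optimal_slice(scores, threshold_ratio=0.85):
    """Score-weighted mean of the indices of slices whose image
    score exceeds threshold_ratio times the best slice score."""
    cutoff = threshold_ratio * max(scores)
    kept = [(i, s) for i, s in enumerate(scores) if s > cutoff]
    total = sum(s for _, s in kept)
    return sum(i * s for i, s in kept) / total

# Hypothetical per-slice image scores across one volume
scores = [0.1, 0.2, 0.8, 1.0, 0.9, 0.3]
optimal = select_optimal_slice(scores)
```

The weighted mean pulls the chosen index toward neighbouring high-scoring slices rather than simply taking the single best slice, which makes the choice less sensitive to noise in any one image score.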
On average, the training process started with approximately 35000 negative sub-windows and approximately 1000 positive sub-windows. We will refer to this data set as the "Initial Training Data". The positive and negative sub-windows were 150 × 500 pixels, rescaled to 26 × 26. This size was chosen empirically, since the smaller sizes used in initial experiments resulted in poor performance. Larger sub-windows will not slow performance but are more difficult to train, since they require more computation. However, larger sub-windows contain more information, which results in earlier rejection of non-optimal slices.

Table 3.1: The distance between spinous process–facet and facet–transverse process, measured from a statistical shape model of the spine. Units are in millimetres.

Vertebrae  Spinous process-Facet  Facet-Transverse process
L2         12.0 ±2.2              15.6 ±3.7
L3         14.0 ±1.45             20.5 ±3.4
L4         14.3 ±2.0              23.2 ±3.9
L5         16.1 ±2.7              21.16 ±4.45
L5         17.4 ±2.9              20.55 ±4.65

The first stage of the cascade classifiers was trained using the Initial Training Data. After that, in each iteration of building the cascade classifiers, a new training data set is constructed as follows:

1. False positives are extracted from the training volumes using the current set of cascade classifiers.

2. False positives are extracted from the training data used to build the last stage of the cascade detector.

These two sets of false positives are concatenated into one set for training the next stage of the cascade classifiers. The first evaluation criterion of the algorithm was based on the distance between the plane the sonographer chose as the optimal needle insertion plane, which is shown in Figure 3.9, and the plane the algorithm found as the optimal slice. A plane is considered optimal if it is in the lamina region shown in Figure 3.9.
To find the gold standard for these measurements, the distances were measured from a statistical model of the spine, as shown in Figure 3.9, which was built using data from 32 volunteers. For the spinous process–facet distance, the measurement was taken from the base of the spinous process to the beginning of the facets; for the facet–transverse process distance, from the middle of the facets to the end of the transverse process. The results are shown in Table 3.1.

Figure 3.9: (a) The needle insertion plane the sonographer chose from the ultrasound volumes, (b) the distance between the spinous process and the facets, and (c) the distance between the facets and the transverse process.

The second evaluation criterion of the algorithm was based on:

1. True Negatives (TN): the number of slices marked as not optimal by the proposed technique when the slices are not optimal.

2. False Negatives (FN): the number of slices marked as not optimal by the proposed technique when the slices are optimal.

3. False Positives (FP): the number of slices marked as optimal by the proposed technique when they are not optimal.

4. True Positives (TP): the number of slices marked as optimal by the proposed technique when the slices are optimal.

5. Precision, recall and accuracy, calculated as shown in Equations (3.11), (3.12) and (3.13):

    Precision = TP / (TP + FP)                                         (3.11)

    Recall = TP / (TP + FN)                                            (3.12)

    Accuracy = (TP + TN) / (TP + TN + FP + FN)                         (3.13)

Table 3.2: The RMS error of selecting the optimal slice from an ultrasound volume. The error was calculated as the distance between the optimal slice the sonographer chose and the algorithm's optimal slice. Units are in millimetres.

Interspinous gap  Error
L1-L2             1.4
L2-L3             0.5
L3-L4             4.0
L4-L5             9.5

Table 3.3: The performance of the method in selecting the optimal slice from an ultrasound volume.
Criteria             Value
True Negative (%)    99.77
False Positive (%)   21.3
False Negative (%)   0.23
True Positive (%)    78.7
Accuracy (%)         99.6
Precision (%)        78.7
Recall (%)           78

Figure 3.10: Examples of six slices from three different volumes. The first column shows the images the sonographer chose as optimal, and the second column shows the images the algorithm chose as optimal.

Figure 3.11: Comparison between the optimal slices the sonographer chose and the optimal slices the algorithm chose (black). The two blue dots are the lower and upper edges of the set of slices the sonographer chose as optimal.

3.7 Discussion

In this research, a novel method to select optimal ultrasound slices from ultrasound volume data is developed. The precision of the method is 78.7% and the recall is 78%. The strict criteria the sonographer imposed when selecting optimal images reduced both precision and recall. Therefore, in future work, we plan to investigate inter-user agreement by including more experts in the optimal image selection, in order to obtain more reliable criteria for evaluating the method.

For the other evaluation criterion, the error was calculated as the distance between the plane the algorithm chose as optimal and the plane the sonographer chose as optimal. The root-mean-square (RMS) error is 5.4 mm. This error is less than the distance between the spinous process and the facets, which indicates that the lamina region is still identifiable using the proposed algorithm. The performance of the detector was degraded when the lamina did not produce a bright echo in the ultrasound images.
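The figures in Table 3.3 follow the standard confusion-matrix definitions of Equations (3.11)-(3.13), and the plane error is an ordinary RMS distance. As a compact reference, both computations can be written as follows; the function and variable names are ours, not the thesis code.

```python
import math

def confusion_metrics(predicted, actual):
    """Precision, recall, and accuracy from per-slice optimal/not-optimal
    labels, following Equations (3.11)-(3.13)."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    tn = sum((not p) and (not a) for p, a in zip(predicted, actual))
    fp = sum(p and (not a) for p, a in zip(predicted, actual))
    fn = sum((not p) and a for p, a in zip(predicted, actual))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, accuracy

def rms_error(algo_planes_mm, expert_planes_mm):
    """RMS distance (mm) between the algorithm-chosen and the
    sonographer-chosen plane positions."""
    sq = [(a - e) ** 2 for a, e in zip(algo_planes_mm, expert_planes_mm)]
    return math.sqrt(sum(sq) / len(sq))
```

Note that with heavily imbalanced classes, as here, accuracy is dominated by the abundant true negatives, which is why a 99.6% accuracy coexists with a 78.7% precision.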
In other cases, structures with a shape similar to the wave-like pattern of the lamina resulted in false detections of vertebrae, which in turn caused the optimal image plane to be missed. These false positives were due to the shadowing artifacts associated with the 3D transducer, and can be mitigated by using more data to train the classifier and by preprocessing the ultrasound volumes.

Chapter 4

Conclusion and Future Work

In this chapter, the conclusions of the thesis are presented, followed by a discussion of future work to improve the proposed system.

In conclusion, AREA provides an objective and consistent measure for identifying the vertebral levels based on a panorama image depicting the entire lumbar region, as opposed to local vertebral identification using single ultrasound image acquisitions at each level. The method also provides an overlay of the plan onto the patient's back, thereby minimizing procedure variability due to the sonographer's interpretation. The accuracy of AREA is within the clinically acceptable range, i.e., less than half the vertebral height, for the L3 vertebra, where most needle insertions are performed.

This system is designed to fit within the established clinical workflow for epidural anesthesia. It could be used prior to performing the needle insertion procedure without requiring special patient preparation. Moreover, AREA is intended to be used by a single operator without disrupting the sterile field, since the only computer interaction is via a foot pedal. This proof of concept is therefore the first step before subsequent testing in clinical practice.

In the feasibility study of insertion slice detection, we have shown that machine learning techniques with the training algorithm suggested by Viola et al. [71] can be used to assist in the detection of the optimal needle insertion plane. Moreover, this thesis presents detailed experiments on one of the difficult applications of object detection in ultrasound images.
The detection of the lamina in ultrasound images is difficult due to the presence of speckle, the low contrast of the lamina, and shadowing in the ultrasound images.

4.1 Summary of Contributions

The ultimate goal of this research is to provide a guidance system for spinal needle insertion, a high-volume clinical procedure used to manage chronic back pain, by providing a robust and affordable image guidance tool that can be readily used in any clinic. The proposed research is expected to increase the accuracy of the injection process, resulting in fewer needle passes and less exposure to ionizing radiation. To this end, an augmented reality system is introduced that can identify the lumbar spinal levels and overlay the identified levels on a video stream of the patient's back. This system is intended to be used prior to performing spinal needle injection to help the anaesthesiologist select the puncture site.

The major contributions of this thesis can be summarized as follows:

1. An efficient and fully automatic lumbar level identification algorithm for panorama ultrasound images.

2. An augmented reality system for epidural anesthesia that overlays the identified levels on a live video image of the patient's back.

3. The design, implementation, and evaluation of an automated standardized plane selection algorithm for ultrasound volumes that features: a) high detection rates of standardized planes, with robustness to inter-patient variations in vertebral shape and to the high level of noise in ultrasound volumes; b) real-time detection, since ultrasound imaging and needle guidance are performed live; and c) demonstrated feasibility of the approach in patient studies.

4.2 Future Work

While AREA was validated on 17 subjects as a proof of concept, it still requires subsequent testing before it is ready for clinical application. The following are a few suggestions for future work:

1.
Focus on improving the accuracy of generating lumbar spine panorama images. Here we could investigate advanced techniques such as 3D SIFT [59] to register 3D ultrasound volumes and generate a volumetric panorama [38]. The use of 3D ultrasound volumes would provide more data for vertebra detection, but at greater computational expense. Furthermore, more accurate tracking tools such as the Optotrak would improve the accuracy of panorama generation, but at a much higher cost.

2. Improve the accuracy of lumbar level detection using Haar-like features and the AdaBoost algorithm. The cascade classifiers can be trained to detect the wave-like appearance of the lamina and use it to estimate vertebra locations in the panorama image. This would likely increase the identification rate and reduce the false positive rate of identifying lumbar levels.

3. Use more than one marker on the patient's back to reduce the effect of patient motion by allowing the patient's back curvature to be estimated, at the cost of increased procedure preparation time.

4. Augment the image overlay with a statistical shape model of the lumbar spine registered to the panorama ultrasound image [50]. While this registration may not be highly accurate, it would provide the clinician with a three-dimensional context of the underlying anatomy and facilitate the interpretation of real-time ultrasound images. The first expected clinical application for the system is epidural injections, followed by facet joint injections for pain management, a procedure that is currently performed under X-ray fluoroscopy or Computed Tomography (CT).

5. Improve the insertion slice detection technique by adding more classifiers to detect the facets and transverse processes. Moreover, the sub-windows could be downsampled to 50 × 50 or 100 × 100 instead of 26 × 26. This would improve classification accuracy, but training would be more computationally expensive.
Moreover, in this research we trained a cascade classifier with only three stages. Adding more stages would reduce the false positive rate and improve the performance of the detector.

6. The set of features used can be extended to include the rotated Haar-like features proposed by Lienhart et al. [32]. In their study, they show that building a cascaded detector with the extended feature set reduced false positives by 10% at a given detection rate. Moreover, the 2D Haar-like features can be extended to 3D Haar-like features, which may yield more accurate weak classifiers; these in turn would improve the overall detection rate.

Bibliography

[1] S. Abdi, S. Datta, and L. Lucas. Role of epidural steroids in the management of chronic spinal pain: A systematic review of effectiveness and complications. Pain Physician, 8:127–143, Jan 2005.

[2] P. Abolmaesumi, R. Rohling, A. Rasoulian, A. Kamani, C. Lo, and V. A. Lessoway. Porcine thoracic epidural depth measurement using 3D ultrasound resliced images. In Canadian Anesthesiologists' Society Annual Meeting, page S51, Montreal, Quebec, Aug 2010.

[3] J. Acevedo. Face detection using MATLAB. Personal communication, September 2012.

[4] K. Arai, T. Hozumi, Y. Matsumura, K. Sugioka, Y. Takemoto, H. Yamagishi, M. Yoshiyama, H. Kasanuki, J. Yoshikawa, et al. Accuracy of measurement of left ventricular volume and ejection fraction by new real-time three-dimensional echocardiography in patients with wall motion abnormalities secondary to myocardial infarction. The American Journal of Cardiology, 94:552, May 2004.

[5] C. Arzola, S. Davies, A. Rofaeel, and J. Carvalho. Ultrasound using the transverse approach to the lumbar spine provides reliable landmarks for labor epidurals. Anesthesia & Analgesia, 104:1188–1192, May 2007.

[6] H. Ashab, V. Lessoway, S. Khallaghi, A. Cheng, R. Rohling, and P. Abolmaesumi. AREA: An augmented reality for epidural anaesthesia.
In IEEE Engineering in Medicine and Biology, pages 2659–2663, San Diego, USA, Aug 2012.

[7] J. Berger and D. Shin. Computer-vision-enabled augmented reality fundus biomicroscopy. Ophthalmology, 106:1935–1941, Oct 1999.

[8] N. Bogduk. Clinical anatomy of the lumbar spine and sacrum. Churchill Livingstone, 2005.

[9] P. Center, L. Covington, and A. Parr. Lumbar interlaminar epidural injections in managing chronic low back and lower extremity pain: A systematic review. Pain Physician, 12:163–188, Jan 2009.

[10] G. de Oliveira Filho. The construction of learning curves for basic skills in anesthetic procedures: an application for the cumulative sum method. Anesthesia & Analgesia, 95:411–416, Aug 2002.

[11] R. François, R. Fablet, and C. Barillot. Robust statistical registration of 3D ultrasound images using texture information. In International Conference on Image Processing (ICIP), volume 1, pages 581–584, Sep 2003.

[12] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Computational Learning Theory, pages 23–37, 1995.

[13] V. Frost, J. Stiles, K. Shanmugan, and J. Holtzman. A model for radar images and its application to adaptive digital filtering of multiplicative noise. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-4:157–166, Mar 1982.

[14] G. Furness, M. Reilly, and S. Kuchi. An evaluation of ultrasound imaging for identification of lumbar intervertebral level. Anaesthesia, 57:277–280, Mar 2002.

[15] S. Ge, J. Warner, T. Abraham, N. Kon, R. Brooker, A. Nomeir, K. Fowle, P. Burgess, and D. Kitzman. Three-dimensional surface area of the aortic valve orifice by three-dimensional echocardiography: clinical validation of a novel index for assessment of aortic stenosis. American Heart Journal, 136:1042–1050, Dec 1998.

[16] D. Gering, A. Nabavi, R. Kikinis, W. Grimson, N. Hata, P. Everett, F. Jolesz, and W. Wells.
An integrated visualization system for surgical planning and guidance using image fusion and interventional imaging. In Medical Image Computing and Computer-Assisted Intervention, pages 809–819, Cambridge, UK, Sep 1999.

[17] N. Glossop, C. Wedlake, J. Moore, T. Peters, and Z. Wang. Laser projection augmented reality system for computer assisted surgery. Pages 239–246, Montréal, Canada, Nov 2003.

[18] T. Grau, R. Leipold, R. Conradi, E. Martin, and J. Motsch. Efficacy of ultrasound imaging in obstetric epidural anesthesia. Journal of Clinical Anesthesia, 14:169–175, May 2002.

[19] T. Grau, E. Bartusseck, R. Conradi, E. Martin, and J. Motsch. Ultrasound imaging improves learning curves in obstetric epidural anesthesia: a preliminary study. Canadian Journal of Anesthesia/Journal canadien d'anesthésie, 50:1047–1050, Oct 2003.

[20] I. Hacihaliloglu, R. Abugharbieh, A. Hodgson, and R. Rohling. Bone surface localization in ultrasound using image phase-based features. Ultrasound in Medicine & Biology, 35:1475–1487, Sep 2009.

[21] L. Ibanez, W. Schroeder, L. Ng, and J. Cates. The ITK Software Guide: The Insight Segmentation and Registration Toolkit. Kitware Inc, 5, 2003.

[22] C. T. Inc. MicronTracker Developers Manual MTC 3.6. 2008.

[23] A. Jain and R. Taylor. Understanding bone responses in B-mode ultrasound images and automatic bone surface extraction using a Bayesian probabilistic framework. In Proc. of SPIE Medical Imaging, pages 131–142, Bellingham, WA, USA, Apr 2004.

[24] B. Johnson, K. Schellhas, and S. Pollei. Epidurography and therapeutic epidural injections: technical considerations and experience with 5334 cases. American Journal of Neuroradiology, 20:697–705, Apr 1999.

[25] M. Karmakar, X. Li, A. Ho, W. Kwok, and P. Chui. Real-time ultrasound-guided paramedian epidural access: evaluation of a novel in-plane technique. British Journal of Anaesthesia, 102:845–854, Apr 2009.

[26] B. Kerby, R. Rohling, V. Nair, and P. Abolmaesumi.
Automatic identification of lumbar level with ultrasound. In IEEE Engineering in Medicine and Biology, pages 2980–2983, Vancouver, BC, Canada, Aug 2008.

[27] D. Kopacz, J. Neal, J. Pollock, et al. The regional anesthesia learning curve: What is the minimum number of epidural and spinal blocks to reach consistency? Regional Anesthesia, 21:182, May 1996.

[28] D. Kuan, A. Sawchuk, T. Strand, and P. Chavel. Adaptive noise smoothing filter for images with signal-dependent noise. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-7:165–177, Mar 1985.

[29] A. Lasso, T. Heffter, C. Pinter, T. Ungi, T. K. Chen, A. Boucharin, and G. Fichtinger. PLUS: An open-source toolkit for developing ultrasound-guided intervention systems. 4th NCIGT and NIH Image Guided Therapy Workshop, 4:103, Oct 2011.

[30] J. Lee. Digital image enhancement and noise filtering by use of local statistics. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-2:165–168, Mar 1980.

[31] H. Liao, T. Inomata, I. Sakuma, and T. Dohi. 3-D augmented reality for MRI-guided surgery using integral videography autostereoscopic image overlay. IEEE Transactions on Biomedical Engineering, 57:1476–1486, Jun 2010.

[32] R. Lienhart and J. Maydt. An extended set of Haar-like features for rapid object detection. In International Conference on Image Processing, pages I–900, New York, USA, Sep 2002.

[33] S. Liu, W. Strodtbeck, J. Richman, and C. Wu. A comparison of regional versus general anesthesia for ambulatory anesthesia: a meta-analysis of randomized controlled trials. Anesthesia & Analgesia, 101:1634, Dec 2005.

[34] T. Loupas, W. McDicken, and P. Allan. An adaptive weighted median filter for speckle suppression in medical ultrasonic images. IEEE Transactions on Circuits and Systems, 36:129–135, Jan 1989.

[35] D. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60:91–110, Nov 2004.

[36] L. Maier-Hein, A. Franz, H.
Meinzer, and I. Wolf. Comparative assessment of optical tracking systems for soft tissue navigation with fiducial needles. In Medical Imaging 2008: Visualization, Image-guided Procedures, and Modeling, page 69181Z, San Diego, CA, Mar 2008.

[37] M. Moradi, P. Abolmaesumi, and P. Mousavi. Deformable registration using scale space keypoints. In Proceedings of SPIE, volume 6144, pages 791–798, Mar 2006.

[38] D. Ni, Y. Qu, X. Yang, Y. Chui, T. Wong, S. Ho, and P. Heng. Volumetric ultrasound panorama based on 3D SIFT. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2008, 5242:52–60, Sep 2008.

[39] S. Nicolau, X. Pennec, L. Soler, and N. Ayache. An accuracy certified augmented reality system for therapy guidance. In Computer Vision-ECCV 2004, pages 79–91, Prague, Czech Republic, May 2004.

[40] E. Osuna, R. Freund, and F. Girosi. Training support vector machines: an application to face detection. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 130–136, Cambridge, MA, USA, Jun 1997.

[41] N. Otsu. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9:62–66, Jan 1979.

[42] C. Papageorgiou, M. Oren, and T. Poggio. A general framework for object detection. In Sixth International Conference on Computer Vision, pages 555–562, Cambridge, MA, USA, Jan 1998.

[43] T. Poon and R. Rohling. Three-dimensional extended field-of-view ultrasound. Ultrasound in Medicine & Biology, 32:357–369, Mar 2006.

[44] H. Rafii-Tari. Panorama ultrasound for navigation and guidance of epidural anesthesia. Master's thesis, University of British Columbia, British Columbia, Canada, 2011.

[45] B. Rahmatullah, A. Papageorghiou, and J. Noble. Automated selection of standardized planes from ultrasound volume. Machine Learning in Medical Imaging, pages 35–42, 2011.

[46] B. Rahmatullah, I. Sarris, A. Papageorghiou, and J. Noble.
Quality control of fetal ultrasound images: Detection of abdomen anatomical landmarks using AdaBoost. In International Symposium on Biomedical Imaging: From Nano to Macro, pages 6–9, Chicago, USA, Mar 2011.

[47] H. Rapp, A. Folger, and T. Grau. Ultrasound-guided epidural catheter insertion in children. Anesthesia & Analgesia, 101:333–339, Aug 2005.

[48] A. Rasoulian, J. Lohser, M. Najafi, H. Rafii-Tari, D. Tran, A. Kamani, V. Lessoway, P. Abolmaesumi, and R. Rohling. Utility of prepuncture ultrasound for localization of the thoracic epidural space. Canadian Journal of Anesthesia/Journal canadien d'anesthésie, 58:1–9, Sep 2011.

[49] A. Rasoulian, R. Rohling, and P. Abolmaesumi. Group-wise registration of point sets for statistical shape models. IEEE Transactions on Medical Imaging, 31:2025–2034, Nov 2012.

[50] A. Rasoulian, R. Rohling, and P. Abolmaesumi. Probabilistic registration of an unbiased statistical shape model to ultrasound images of the spine. In Proc. of SPIE Medical Imaging, volume 8316, page 83161P, Feb 2012.

[51] D. Renfrew, T. Moore, M. Kathol, G. El-Khoury, J. Lemke, and C. Walker. Correct placement of epidural steroid injections: fluoroscopic guidance and contrast administration. American Journal of Neuroradiology, 12:1003–1007, Sep 1991.

[52] F. Reynolds. Logic in the safe practice of spinal anaesthesia. Anaesthesia, 55:1045–1046, Nov 2000.

[53] D. Roth, M. Yang, and N. Ahuja. A SNoW-based face detector. In Advances in Neural Information Processing Systems, pages 855–861, Colorado, USA, Jun 2000.

[54] H. Rowley, S. Baluja, and T. Kanade. Neural network-based face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20:23–38, Jan 1998.

[55] Y. Sato, M. Nakamoto, Y. Tamaki, T. Sasama, I. Sakita, Y. Nakajima, M. Monden, and S. Tamura. Image guidance of breast cancer surgery using 3-D ultrasound images and augmented reality visualization. IEEE Transactions on Medical Imaging, 17:681–693, Oct 1998.

[56] H.
Schlotterbeck, R. Schaeffer, W. Dow, Y. Touret, S. Bailey, and P. Diemunsch. Ultrasonographic control of the puncture level for lumbar neuraxial block in obstetric anaesthesia. British Journal of Anaesthesia, 100:230–234, Feb 2008.

[57] R. J. Schneider, D. P. Perrin, N. V. Vasilyev, G. R. Marx, P. J. del Nido, and R. D. Howe. Real-time image-based rigid registration of three-dimensional ultrasound. Medical Image Analysis, 16(2):402–414, Feb 2012.

[58] H. Schneiderman and T. Kanade. A statistical method for 3D object detection applied to faces and cars. In IEEE Conference on Computer Vision and Pattern Recognition, pages 746–751, Pittsburgh, PA, USA, Jun 2000.

[59] P. Scovanner, S. Ali, and M. Shah. A 3-dimensional SIFT descriptor and its application to action recognition. In Proceedings of the 15th International Conference on Multimedia, pages 357–360, New York, USA, Sep 2007.

[60] T. Sielhorst, M. Feuerstein, and N. Navab. Advanced medical displays: A literature review of augmented reality. Display Technology, 4:451–467, Dec 2008.

[61] S. Simmons, A. Cyna, A. Dennis, and D. Hughes. Combined spinal-epidural versus epidural analgesia in labour. Cochrane Database Syst Rev, 3, Jan 2009.

[62] C. Spoor, F. Veldpaus, et al. Rigid body motion calculated from spatial co-ordinates of markers. Journal of Biomechanics, 13:391–393, Feb 1980.

[63] J. Sprigge and S. Harper. Accidental dural puncture and post dural puncture headache in obstetric anaesthesia: presentation and management: A 23-year survey in a district general hospital. Anaesthesia, 63:36–43, Jan 2007.

[64] K. Sung and T. Poggio. Example-based learning for view-based human face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20:39–51, Jan 1998.

[65] I. Sutherland. A head-mounted three dimensional display. In Proceedings of the December 9-11, 1968, Fall Joint Computer Conference, Part I, pages 757–764, New York, USA, Dec 1968.

[66] G. J. Tortora.
Principles of human anatomy. John Wiley & Sons, 2005.

[67] D. Tran and R. Rohling. Automatic detection of lumbar anatomy in ultrasound images of human subjects. IEEE Transactions on Biomedical Engineering, 57:2248–2256, Sep 2010.

[68] D. Tran, A. Kamani, V. Lessoway, C. Peterson, K. Hor, and R. Rohling. Preinsertion paramedian ultrasound guidance for epidural anesthesia. Anesthesia & Analgesia, 109:661–667, Aug 2009.

[69] D. Tran, A. Kamani, E. Al-Attas, V. Lessoway, S. Massey, and R. Rohling. Single-operator real-time ultrasound-guidance to aim and insert a lumbar epidural needle. Canadian Journal of Anesthesia/Journal canadien d'anesthésie, 57:313–321, Sep 2010.

[70] J. Udupa and G. Herman. 3D Imaging in Medicine. CRC, 1999.

[71] P. Viola and M. Jones. Robust real-time face detection. International Journal of Computer Vision, 57:137–154, 2004.

[72] R. Windsor, S. Storm, and R. Sugar. Prevention and management of complications resulting from common spinal injections. Pain Physician, 6:473–484, Oct 2003.

[73] D. Wright, J. Rolland, and A. Kancherla. Using virtual reality to teach radiographic positioning. Radiologic Technology, 66:233–238, Mar 1995.

[74] M. Yamauchi, E. Honma, M. Mimura, H. Yamamoto, E. Takahashi, and A. Namiki. Identification of the lumbar intervertebral level using ultrasound imaging in a post-laminectomy patient. Journal of Anesthesia, 20:231–233, Aug 2006.

[75] Y. Yu and S. Acton. Speckle reducing anisotropic diffusion. IEEE Transactions on Image Processing, 11:1260–1270, Nov 2002.

Appendix A

Additional Results

Figure A.1: Comparison between the optimal slices the sonographer chose and the optimal slices the algorithm chose (black). The two blue dots are the lower and upper edges of the set of slices the sonographer chose as optimal.

Figure A.2: Comparison between the optimal slices the sonographer chose and the optimal slices the algorithm chose (black).
The two blue dots are the lower and upper edges of the set of slices the sonographer chose as optimal.

Figure A.3: Comparison between the optimal slices the sonographer chose and the optimal slices the algorithm chose (black). The two blue dots are the lower and upper edges of the set of slices the sonographer chose as optimal.

Figure A.4: Comparison between the optimal slices the sonographer chose and the optimal slices the algorithm chose (black). The two blue dots are the lower and upper edges of the set of slices the sonographer chose as optimal.

Figure A.5: Comparison between the optimal slices the sonographer chose and the optimal slices the algorithm chose (black). The two blue dots are the lower and upper edges of the set of slices the sonographer chose as optimal.
