Real-Time Interactive Retinal Vessel Segmentation and Analysis

by

Ryan Dickie

B.A.Sc. (Hons), Simon Fraser University, 2007

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in The Faculty of Graduate Studies (Electrical & Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

January 2011

© Ryan Dickie 2010

Abstract

Vessel analysis is important for a wide range of clinical diagnoses and disease research such as diabetes and malignant brain tumours. Vessel segmentation is a crucial first step in such analysis but is often complicated by structural diversity and pathology. Existing automated techniques have mixed results and difficulties with non-idealities such as imaging artifacts, tiny vessel structures, and regions with bifurcations. Live-Vessel is a novel and intuitive semi-automatic vessel segmentation technique that extends the classic Live-Wire technique from user-guided contours to user-guided paths along vessel centre-lines with automated boundary detection. Live-Vessel achieves this by globally optimizing vessel filter responses over both spatial (x, y) and non-spatial (radius) variables simultaneously. In this thesis I provide three main contributions. First, I bring Live-Vessel into the domain of real-time interactivity. Second, I enhance the objective function for improved contrast and graph search performance by incorporating colour image information, adding penalty terms, utilizing a smaller data type, and increasing the contrast between desirable and undesirable paths. Third, I gather and retain vessel connectivity information and provide post-segmentation analysis tools. I validated this technique using real medical data from the DRIVE, STARE, and REVIEW retina vessel databases. Quantitative results show that, on average, Live-Vessel resulted in a 7.28-times reduction in overall manual segmentation task time at a 95% accuracy level, with most radial and medial errors being under 1 pixel in distance.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Dedication

1 Introduction
  1.1 Motivation
    1.1.1 Techniques
    1.1.2 Live-Wire
  1.2 Blood Vessels
    1.2.1 Biology, Structure, and Function
    1.2.2 Blood Fluid Mechanics
    1.2.3 Branching
    1.2.4 The Venous System
    1.2.5 The Capillary System
  1.3 The Eye
    1.3.1 The Eyeball
    1.3.2 The Retina
  1.4 Ophthalmoscopy
    1.4.1 Vessel Diagnostics
    1.4.2 Background Fundus
    1.4.3 Diabetic Retinopathy
  1.5 Objectives and Contributions
2 Live-Vessel Theory and Extensions
  2.1 Live-Wire
  2.2 The (x,y,r) Search Space
  2.3 User Interaction
  2.4 The Objective Function
    2.4.1 Quaternion Vesselness Cost
    2.4.2 Curvilinear Structure Cost
    2.4.3 Colour Edge Evidence Cost
    2.4.4 Smoothness
  2.5 Performance Considerations
    2.5.1 Memory Requirements
    2.5.2 Search Space
    2.5.3 The Scoreboard
  2.6 Post-Search Smoothing
  2.7 Tree Analysis

3 Experimentation
  3.1 Data Sets
    3.1.1 DRIVE
    3.1.2 STARE
    3.1.3 REVIEW
  3.2 Performance Criteria
    3.2.1 Binary Segmentations
    3.2.2 Medial Axis Error
  3.3 Results
    3.3.1 DRIVE
    3.3.2 STARE
    3.3.3 REVIEW

4 Conclusion

Bibliography

Appendices
A Vesselness
B Results

List of Tables

1.1 External diameters and taper factors. From [33] table 2.2.1.
3.1 Comparisons of state-of-the-art methods with our proposed Live-Vessel approach using the accuracy measure Acc = (TP + TN)/(P + N) on the DRIVE database. These are the values reported by the authors with the precision they reported.
3.2 Comparisons of state-of-the-art methods with our proposed Live-Vessel approach using the accuracy measure Acc = (TP + TN)/(P + N) on the STARE database. The Martinez-Perez results only reported 2 significant digits. These are the values reported by the authors with the precision they reported.
B.1 Complete Live-Vessel results for the DRIVE database. The error measures are described in Sec. 3.2.
B.2 Complete Live-Vessel results for the STARE database. The error measures are described in Sec. 3.2.

List of Figures

1.1 Transverse section through a small artery (A) and small vein (V) from Gray's Anatomy (Fig. 448) [22]. m: muscular wall. e: endothelial membrane. a: unstriped muscle cells.
1.2 Illustration showing the structure of a typical artery. Image released under the Creative Commons license by Stijn A.I. Ghesquiere.
1.3 A graphical representation of the capillary system. This figure was released into the public domain by the US government and is hosted by the Wikimedia Commons.
1.4 An anatomical representation of the human eye. This figure was released into the public domain and is hosted by the Wikimedia Commons.
1.5 This is an ophthalmoscopic image of the retina and its blood vessels from the DRIVE database [51] with important features labelled.
1.6 This is a high resolution image of the optic disc cropped from the REVIEW database [1]. Note the myelination of the nerve ends causing a blurring of the vessel structures. Also note the entrance points for the two vessel networks in the eye.
1.7 The bright yellow spot in the optic disc is a cup. It is a central depression occurring around the optic nerve. Both of the images are from the DRIVE database [51]. Left: A well pronounced optic disc cup. Right: Image showing both retinal and choroidal vessels as well as an enlarged and thinned optic disc.
1.8 These images are cropped from the REVIEW database [1]. Left: A vein wraps around an artery. Right: An artery crosses a vein, with the vein forming an S-shaped kink.
1.9 Arteries appear whiter than usual due to partial sheathing. This is a cropped image from the REVIEW database [1].
1.10 This cropped image from the REVIEW database [1] demonstrates the light reflex in an artery. Note the pixel intensity is quite similar to that of the light reflex for the retina background.
1.11 This image from the DRIVE database [51] demonstrates neovascularization in the retina. The newly formed vessels are tortuous and many appear near the optic disc.
1.12 These cropped images from the REVIEW database [1] demonstrate both hard and soft exudates in the retina.
1.13 Images showing normal colour vision and distorted colour vision due to diabetic retinopathy. Images courtesy of NEI [21].
2.1 Image of Cocoa watching Star Trek: The Next Generation and the associated Live-Wire cost map.
2.2 Live-Vessel's 3D graph search algorithm depicted in 2D (x, y) and 3D (x, y, r). (Left) Medial path sequence shown in a dotted blue dashed line connecting two neighbouring nodes p = (x1, y1, r1) and q = (x2, y2, r2), projected on the xy plane. The arrows perpendicular to the blue line denote the radius r dimension. (Right) Search space for a single node shown in the center (red). The 8-neighbours of the central node in the same radial plane are shown in light green while the 8-neighbours in the adjacent radial plane are shown in dark green.
2.3 Overview of Live-Vessel operation. (a) Vessel boundary wire (blue) and medial axis wire (yellow) from a seed point to the next potential seed point (mouse cursor) waiting for user approval. (b) The approved vessel segment is shown by colour coded discs representing the radius at each point along the vessel path.
2.4 Image a demonstrates a connectivity subgraph representing one of the two vascular networks on image 40 of DRIVE. Each node is numbered with its x-y coordinate and represents a branch point. Image b shows where these nodes are located on the segmentation. The large green dot is the top node located at (505, 274) and the pink nodes are descendants.
2.5 Segmentation artifacts created by a user selecting an incorrect radius. These can be resolved by enabling the automatic endpoint radius detection feature.
2.6 Graphical representation of the individual cost terms employed in Eq. 2.1.
2.7 Demonstration of the Frangi vesselness filter on DRIVE image 40 for 3 different scales.
2.8 These minimal response images demonstrate the edge penalty effect on the vesselness filter on image 40 of DRIVE. Note the medial axis in the bottom image is isolated across all scales. The colour blue has an associated cost of 0 while red has one of 255 and it scales linearly with the transition shown in the legend.
2.9 (a) Original image. (b) Maximal response vesselness filter output with added edges. (c) Magnification of a small region denoted by the rectangle from (b) to show detail. Note that the medial axis (dark) is associated with a low cost and the vessel boundaries with a high cost (bright). This keeps the graph search contained to the vessel center.
2.10 (a) Original image. (b) Maximal response curvilinear filter output. (c) Magnification of a small region denoted by the rectangle from (b) to show detail. Note how this filter is much better at assigning a higher cost (white) to the background pixels than the vesselness filter shown in Fig. 2.9.
2.11 Demonstration of the Koller curvilinear structure filter on DRIVE image 40 at 3 different scales.
2.12 (a) Original image. (b) Maximal response to colour edge filter output Ce(q). (c) Diagram demonstrating how the sampling is done for a point along the medial axis (with … = √2). The white arrow represents the medial axis direction, the central white disc denotes point x, and the small dark circles denote the boundary sampling points from which the gradient will be evaluated.
2.13 The graph search space shown in red between points P and Q is bounded by the intersection between the gray box and the two lines forming a 120° cone. The cone's central axis is aligned with the segment PQ. This cone exists for both PQ and QP. The intermediate point P also has this constraint since the graph search is recursive.
2.14 This figure demonstrates the purpose and function of the post-processing smoothing. The left image shows how the pixel resolution of the output and the noisy cost function cause undesired error whereas the right appears to smooth out the imprecision.
2.15 Tree representation of a small arterial section of the retina of DRIVE image 40. Each node represents the branching point and the label represents the vessel radii. The edge labels represent the segment lengths.
3.1 These are sample images taken from the Hoover collection from the STARE database. These demonstrate the difficulties faced in segmenting retinal blood vessels.
3.2 This image was chosen from the Hoover collection to demonstrate the quality of the segmentations. Note the gaps in the vascular network and the decision not to segment the smallest vessels.
3.3 These are sample images taken from the REVIEW database. Each image is taken from a different set in this collection. These demonstrate the difficulties faced in segmenting retinal blood vessels.
3.4 Sample results for DRIVE image 02. (a) Retinal fundus image. (b) Optimal medial axis (unsmoothed) computed by Live-Vessel (black) overlaid on the original images. (c) Live-Vessel segmentations without image overlay. (d,h,l) Segmentation errors with coloured pixels reflecting correctness or error. Red: Correct. Green: Over segmentation. Blue: Under segmentation.
3.5 Sample results for DRIVE image 14. (a) Retinal fundus image. (b) Optimal medial axis (unsmoothed) computed by Live-Vessel (black) overlaid on the original images. (c) Live-Vessel segmentations without image overlay. (d,h,l) Segmentation errors with coloured pixels reflecting correctness or error. Red: Correct. Green: Over segmentation. Blue: Under segmentation.
3.6 Sample results for DRIVE image 19. (a) Retinal fundus image. (b) Optimal medial axis (unsmoothed) computed by Live-Vessel (black) overlaid on the original images. (c) Live-Vessel segmentations without image overlay. (d,h,l) Segmentation errors with coloured pixels reflecting correctness or error. Red: Correct. Green: Over segmentation. Blue: Under segmentation.
3.7 Example zoomed region in image 19 (Fig. 3.6d) demonstrates the errors when segmenting small vessels which are typical in the DRIVE database. Red: Correctly identified pixels. Green: Over segmented pixels. Blue: Under segmented pixels.
3.8 Boxplot illustrating the accuracy measured using the three metrics on the DRIVE dataset for Live-Vessel. (ACC) The traditional accuracy measure; (DSC) Dice Similarity Coefficient; (HAUS) Modified Hausdorff error (pixels).
3.9 Histogram of distance errors for each medial axis pixel across all DRIVE images. The value listing gives the middle value for the bounded segment.
3.10 This tree demonstrates a partial vascular network for DRIVE image 40 with one graph for each of the arterial and venous networks. The nodes are each numbered with the pixel coordinate of the branch points.
3.11 Segmentation times for 10 images from the DRIVE database. The manual segmentation times were measured by Miranda Poon as part of early Live-Vessel research.
3.12 Sample results for Hoover image 7. (a) Retinal fundus image. (b) Optimal medial axis (unsmoothed) computed by Live-Vessel (black) overlaid on the original images. (c) Live-Vessel segmentations without image overlay. (d) Segmentation errors with coloured pixels reflecting correctness or error. Red: Correct. Green: Over segmentation. Blue: Under segmentation.
3.13 Sample results for Hoover image 13. (a) Retinal fundus image. (b) Optimal medial axis (unsmoothed) computed by Live-Vessel (black) overlaid on the original images. (c) Live-Vessel segmentations without image overlay. (d) Segmentation errors with coloured pixels reflecting correctness or error. Red: Correct. Green: Over segmentation. Blue: Under segmentation.
3.14 Sample results for Hoover image 17. (a) Retinal fundus image. (b) Optimal medial axis (unsmoothed) computed by Live-Vessel (black) overlaid on the original images. (c) Live-Vessel segmentations without image overlay. (d) Segmentation errors with coloured pixels reflecting correctness or error. Red: Correct. Green: Over segmentation. Blue: Under segmentation.
3.15 Example zoomed region in image 17 (Fig. 3.14d) demonstrates the errors when segmenting STARE images due to the inclusion of the vessel wall in the segmentation and blurred film causing weaker edges and scale space confusion. Red: Correctly identified pixels; Green: Over segmented pixels; Blue: Under segmented pixels.
3.16 Boxplot illustrating the accuracy measured using the three metrics on the STARE database. (ACC) The traditional accuracy measure; (DSC) Dice Similarity Coefficient; (HAUS) Modified Hausdorff error (pixels).
3.17 Histogram of distance errors for each medial axis pixel across all STARE images. The value listing gives the middle value for the bounded segment.
3.18 This partial graph demonstrates the vascular network for a single retinal image (DRIVE 40). The nodes are each numbered with the k-value and the edges are labelled with the tortuosity value (vessel length / spatial distance between seeds).
3.19 This histogram demonstrates the radial deviation from the averaged manual segmentations for the three manual and all Live-Vessel segmentations on the HRIS dataset.
3.20 The segmentation overlay for Live-Vessel on image 1 of the HRIS dataset using the manual segments as a reference with N = 1 and N = 8.
3.21 This histogram shows the medial error in pixels for both the manual segmentations and the Live-Vessel (LV) segmentations on the HRIS dataset.
3.22 This histogram demonstrates the radial deviation from the averaged manual segmentations for the three manual and all Live-Vessel segmentations on the KPIS dataset.
3.23 The segmentation overlay for Live-Vessel on image 1 of the KPIS dataset.
3.24 This histogram shows the medial error in pixels for both the manual segmentations and the Live-Vessel (LV) segmentations on the KPIS dataset.
3.25 The segmentation overlay for Live-Vessel on image 1 of the CLRIS dataset.
3.26 This histogram demonstrates the radial deviation from the averaged manual segmentations for the three manual and all Live-Vessel segmentations on the CLRIS dataset.
3.27 This histogram shows the medial error in pixels for both the manual segmentations and the Live-Vessel (LV) segmentations on the CLRIS dataset.
3.28 This histogram demonstrates the radial deviation from the averaged manual segmentations for the three manual and several Live-Vessel segmentations on the VDIS dataset.
3.29 This histogram shows the medial error in pixels for both the manual segmentations and the Live-Vessel (LV) segmentations on the VDIS dataset.
3.30 The segmentation overlay for Live-Vessel on image 3 of the VDIS dataset.
3.31 The segmentation overlay for Live-Vessel on image 4 of the VDIS dataset.
3.32 The segmentation overlay for Live-Vessel on image 7 of the VDIS dataset.
B.1 Segmentation outputs for the first 4 images from DRIVE generated with Live-Vessel.
B.2 Segmentation outputs for DRIVE images 5 to 8 generated with Live-Vessel.
B.3 Segmentation outputs for DRIVE images 9 to 12 generated with Live-Vessel.
B.4 Segmentation outputs for DRIVE images 13 to 16 generated with Live-Vessel.
B.5 Segmentation outputs for DRIVE images 19 to 20 generated with Live-Vessel.
B.6 Segmentation outputs for DRIVE images 21 to 24 generated with Live-Vessel.
B.7 Segmentation outputs for DRIVE images 25 to 28 generated with Live-Vessel.
B.8 Segmentation outputs for DRIVE images 29 to 30 generated with Live-Vessel.
B.9 Segmentation outputs for DRIVE images 33 to 36 generated with Live-Vessel.
B.10 Segmentation outputs for DRIVE images 37 to 40 generated with Live-Vessel.
B.11 Segmentation outputs for image numbers in the Fibonacci sequence from DRIVE. The colour represents the radius and uses the jet colour map from Matlab with r = 0 being blue and r = 4.5 being red.
B.12 Segmentation outputs for the first 4 images from STARE generated with Live-Vessel.
B.13 Segmentation outputs for images 5 to 8 from STARE generated with Live-Vessel.
B.14 Segmentation outputs for images 9 to 12 from STARE generated with Live-Vessel.
B.15 Segmentation outputs for images 13 to 16 from STARE generated with Live-Vessel.
B.16 Segmentation outputs for images 17 to 20 from STARE generated with Live-Vessel.
B.17 Segmentation overlays for image 1 from REVIEW HRIS generated with Live-Vessel. N is the number of endpoints in each segment.
B.18 Segmentation overlays for image 2 from REVIEW HRIS generated with Live-Vessel. N is the number of endpoints in each segment.
B.19 Segmentation overlays for image 3 from REVIEW HRIS generated with Live-Vessel. N is the number of endpoints in each segment.
B.20 Segmentation overlays for image 4 from REVIEW HRIS generated with Live-Vessel. N is the number of endpoints in each segment.
B.21 Segmentation overlays for image 1 from REVIEW CLRIS generated with Live-Vessel. N is the number of endpoints in each segment.
B.22 Segmentation overlays for image 2 from REVIEW CLRIS generated with Live-Vessel. N is the number of endpoints in each segment.
B.23 Segmentation overlays for image 1 from REVIEW KPIS generated with Live-Vessel. N is the number of endpoints in each segment.
B.24 Segmentation overlays for image 2 from REVIEW KPIS generated with Live-Vessel. N is the number of endpoints in each segment.
B.25 Segmentation overlays for image 1 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.
B.26 Segmentation overlays for image 2 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.
B.27 Segmentation overlays for image 3 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.
B.28 Segmentation overlays for image 4 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.
B.29 Segmentation overlays for image 5 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.
B.30 Segmentation overlays for image 6 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.
B.31 Segmentation overlays for image 7 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.
B.32 Segmentation overlays for image 8 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.

Dedication

I dedicate my thesis to my parents for all of their support and advice along the way, my good buddy Will, ma très bonne amie Anick, and of course my good lab mates Josna and Tricia who were great friends as well. I also dedicate this thesis to those who were an integral part of my life over the past two years including Sarah, Brendan, Emilia, Laura, Julian, Sita, Katie, and Cody.

Chapter 1

Introduction

1.1 Motivation

Analyzing vessel structure in two-dimensional and three-dimensional images is important for a wide variety of applications. These range from the study of tumour vasculature [8] to aneurysms [62]. In 2D retinography, for example, vessel analysis can aid in the diagnosis of diabetes and hypertension [66], and accurate vessel segmentation typically constitutes the very first step in any such analysis.
A complete and topologically-correct vascular segmentation is typically required for proper analysis of the overall vessel tree and its branching patterns [8], for facilitating data registration [3], and for vessel boundary extraction [9, 37]. However, achieving this is often quite difficult due to a number of challenging factors including imaging artifacts, image resolution, pathology, and other complications such as those arising in areas of vessel bifurcations [70]. Also, in each retinal image there are two vascular networks: the arterial and the venous. Around the optic disc these vessels cross and overlap significantly, and automated techniques are unable to distinguish between them. This difficulty is also present in any 2D projection imaging. Expert manual tracing remains the most widely used tool for vessel segmentation in practice, but it is extremely time consuming and suffers from large inter-operator and intra-operator variability [59] as well as user fatigue effects [53]. While automatic segmentation schemes are preferred for handling large datasets, they typically require proper parameter tuning, significant training and initialization, manual correction, and human validation.

1.1.1 Techniques

A large number of techniques have been proposed for vessel segmentation, many of which were reviewed by Kirbas and Quek [27]. One main family of automated methods relies on pattern recognition and often requires post-processing steps such as skeletonization and pruning, as well as training, which complicates their applicability to clinical use. For example, Ricci and Perfetti [61] used a line detector coupled with a support vector machine (SVM). Similarly, the filtering method of Soares et al. [67] used Gabor wavelets paired with a Bayesian classifier. Staal et al. [69] employed ridge analysis [18] and a kNN classifier. Lam and Yan [30] used normalized gradient vector fields to locate vessel centerlines and then a Laplacian-based vessel detector to prune the result. The main limitation of the above techniques is that none of them incorporate higher-level geometric models of the vasculature, e.g. vessel connectivity constraints, which typically results in many broken and incomplete vessel networks.

A second family of automated vessel segmentation methods is based on evolving energy-minimizing deformable models [19, 36, 38, 39, 41, 50]. Implicit level-set based approaches [36, 39] and explicit topologically adaptable models, including T-snakes or T-surfaces [44, 45], are able to handle complex vascular topologies. Typical single-seed initialization is not likely to yield a segmentation that includes all distant branches, and while multiple seed points can be used to alleviate this problem, the pitfall is ending up with a disconnected vascular network. Other methods that take advantage of vessel-specific properties include the work of Vasilevskiy et al. [71], which maximizes the gradient vector field flux to optimize the contour or surface, and that of Yan et al. [75], which models the capillary force to segment small vessels. In general, the output from such energy-minimizing approaches is quite unpredictable, and many trials and modifications of the model initialization and parameters may be needed. Also, these methods are not guaranteed to give globally optimal solutions.
Globally optimal energy-minimizing segmentation approaches have been proposed [7, 52]; however, they achieve global optimality by oversimplifying the cost function to ensure convexity and, to the best of our knowledge, none were specifically designed for segmenting tubular branching structures (including vessels). The graph cuts approach [5] is one that yields a globally optimal segmentation given a set of seeds. Hraiech et al. [24] used this technique for segmentation of abdominal aortic aneurysms; however, that work was a direct application of graph cuts without any vessel-specific extensions. Li et al. [34] used graph cuts to extract optimal surfaces in tubular 3D objects, but a volume had to be partitioned into carefully chosen regions in which the boundary surfaces could be unfolded into terrain-like surfaces. In general, such graph cuts results are unpredictable since user interaction is obtained via interior region seeds and background constraints (as opposed to constraints on the segmentation boundary). It is thus quite typical to require user refinement through careful placement of numerous additional seed points to correct the initial segmentation, rendering this technique unsuitable for segmenting the large number of complex boundaries typically present in vascular networks.

Alternative approaches for automated vessel segmentation employ multi-scale detection of curvilinear structures, which is effective in discerning both large and small vessels. In [28, 35, 64], the eigenvector associated with the minimum eigenvalue of the Hessian matrix across all scales was used to estimate the vessel direction at a given pixel. Aylward and Bullit [3] applied this directional information to traverse image ridges and estimate vessel centrelines. Building on [35, 64], Frangi et al. [20] developed a vesselness filter using the Hessian matrix eigenvalues. This was recently expanded to colour [65]. Other approaches combined vesselness with vessel-enhancing diffusion [10, 40], region growing [42], cylindrical vessel models [29], and matched filtering [74]. Sofka et al. [68] later developed another vesselness measure using matched filters instead of Hessian-based techniques. Multi-scale vessel detection proved useful in detecting vessels; however, to the best of our knowledge, only the maximal response across the scales was used at any particular pixel, with two exceptions. The first exception is the Vessel Crawlers approach [46], where an estimate of the scale was derived from the radius of the leading front of the crawler, thus achieving a form of scale regularization but without any global scale optimality condition. The second exception is an application of graph cuts using Frangi's vesselness filter with a smoothness constraint on vessel radii [48]. Cohen and Kimmel [13] explored using a 2D graph search to find medial paths, which was extended to 3D by Deschamps and Cohen in [15] using fast marching front propagation. However, these works did not address the problem of identifying the boundaries of tubular structures. Young et al. [76] employed the vesselness filter of Frangi [20] with a front-propagation based algorithm to segment the vessel and to extract its medial axis using the maximal response across all scales. A key point, which we argue for and propose in this thesis, is that the vesselness scale (i.e. radius) at each pixel should be chosen so that a globally optimal path is found, as opposed to choosing a path determined by the maximal response projection of the radial dimension. Wink et al. [73] applied a graph search approach to the Frangi vesselness filter through a 3D (x, y, r) space, where x and y indicate the vessel spatial coordinates and r indicates radius.
Li and Yezzi [32] took a similar approach to Wink et al. and claimed that their approach is able to find the centerline and boundary more robustly. However, they did not take into account any vessel direction or edge information, which decreases robustness.

1.1.2 Live-Wire

Live-Wire is a popular interactive minimal-path segmentation approach [4, 49]. It traces along boundaries and contours between user-selected seed points using a globally optimal graph search that minimizes some defined cost criteria. The main advantage of Live-Wire is that it updates the proposed solution in real time based on user interaction. The user clicks a seed point and is instantly presented with a wire that follows the mouse cursor and only becomes fixed when the user finally confirms the path by clicking the endpoint. This instant feedback allows the user to precisely control the wire and to achieve the exact desired result while saving a great deal of time compared to manual techniques.

Live-Wire techniques are also being used for neurite tracing and analysis. Neurites are the projections from neuronal cells and may be either axons or dendrites. Neurites appear as thin line-like objects in these images, so their tracing has similarities with blood vessel segmentation. Meijering et al. [47] devised a two-phase method in which they first precompute the likelihood of a pixel belonging to a neurite, followed by a Live-Wire tracing phase. They claim that consecutive pixels with high likelihood values are the most likely to represent the neurite centerlines. Zhang et al. [77] build on this work by efficiently extending it to 3D images and by incorporating certain anatomical constraints to limit the search space.
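To make the mechanics concrete, the Python sketch below shows the core of a Live-Wire-style search: Dijkstra's algorithm run over a 2D grid of precomputed local costs, returning the cheapest 8-connected path from a seed pixel to a free point such as the cursor position. This is an illustrative sketch only; the cost map, the neighbourhood handling, and all function names are assumptions, not the formulation used by the references above or in this thesis.

    import heapq
    import numpy as np

    def live_wire_path(cost, seed, target):
        """Cheapest 8-connected path over a 2D cost map (illustrative sketch only).

        cost   -- 2D array of non-negative local costs (low values along boundaries)
        seed   -- (row, col) of the anchored seed point
        target -- (row, col) of the free point (e.g. the mouse cursor)
        """
        h, w = cost.shape
        dist = np.full((h, w), np.inf)
        prev = {}
        dist[seed] = 0.0
        heap = [(0.0, seed)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == target:
                break
            if d > dist[r, c]:
                continue  # stale entry already superseded by a cheaper path
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w:
                        # weight each step by its Euclidean length (1 or sqrt(2))
                        nd = d + cost[nr, nc] * (dr * dr + dc * dc) ** 0.5
                        if nd < dist[nr, nc]:
                            dist[nr, nc] = nd
                            prev[(nr, nc)] = (r, c)
                            heapq.heappush(heap, (nd, (nr, nc)))
        # walk the predecessor map back from the target to recover the wire
        path, node = [target], target
        while node != seed:
            node = prev[node]
            path.append(node)
        return path[::-1]

In an interactive setting the search is typically run once from the anchored seed over the whole image so that the optimal wire to any cursor position can be read back instantly from the predecessor map; Live-Vessel lifts this same idea into the (x, y, r) search space described in Chapter 2.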
1.2 Blood Vessels

1.2.1 Biology, Structure, and Function

The entire vascular network is designed to supply and meet the metabolic demands of the body and remove waste products. Knowing this purpose, many researchers have been inspired to develop sets of mathematical equations that can accurately describe the anatomy and physiology of this network. For example, if we can estimate how much oxygen is consumed by the major organs, then we can approximate the blood flow required and the minimum vessel diameters. Adding the observation that blood vessels have branching patterns and forms designed to minimize wave reflection and other flow artifacts under pulsatile flow, we can begin narrowing the solution space and deriving precise and accurate models.

This section provides a detailed review of vessel biology and function. The majority of the content described below is well known to medical practitioners, anatomists, and physiologists and was pieced together using the books by Waite and Fine [72], Li [33], and Gray [22].

The vascular network can be divided into three systems: the arterial, capillary, and venous systems. Roughly 20% of the blood is found in the arteries. They are primarily responsible for distributing oxygen-rich blood to the major muscles and organs. The arterial system has elastic walls to regularize blood pressure and allow for an optimal pulsatile flow. Almost 70% of the blood remains in the venous system under low pressure. Veins have much lower flow rates and serve as a reservoir for blood. This is made possible by the sheer number of veins compared to arteries. Veins are far more pliable but less elastic due to their thinner walls and lower elastin density. In fact, they are completely collapsible under normal conditions. Since they operate under far lower pressure than arteries and often work against gravity, they have bicuspid valves to prevent backflow. There are often many more valves in the lower limbs and other extremities. The structure and differences between these two categories are illustrated in Fig. 1.1.

Figure 1.1: Transverse section through a small artery (A) and small vein (V) from Gray's Anatomy (Fig. 448) [22]. m: muscular wall. e: endothelial membrane. a: unstriped muscle cells. (The 1918 edition of Gray's Anatomy [22] is now in the public domain.)

The image in Fig. 1.2 shows the anatomy of a typical artery. All vessels are typically divided into three layers: tunica intima, tunica media, and tunica externa. The lumen is the interior of the vessel through which the blood passes. It is lined by a single layer of cells called the endothelium, which plays an active role in passing nutrients but also has roles in other vascular functions such as contraction and secretion. This layer is surrounded by the tunica media, a layer of elastic fibers and smooth muscle cells that can contract or expand to control pressure and volume. The elastica interna, also known as the internal elastic lamina, is the elastic layer that provides tension and holds form. The tunica adventitia, commonly known as the tunica externa, contains passive and elastic fibers and much stiffer collagenous fibers which help anchor and fix the arteries.

Figure 1.2: Illustration showing the structure of a typical artery. Image released under the Creative Commons Attribution-Share Alike 2.5 Generic licence by Stijn A.I. Ghesquiere (http://en.wikipedia.org/wiki/File:Anatomy artery.png).

There are three main types of arteries. Elastic arteries have the largest diameters of the three categories and have a thicker tunica media compared to other vessels. Muscular arteries are intermediate in size and have a large tunica media composed of smooth muscle nearly 40 cells thick. The arterioles are those which have a diameter of less than 0.5 mm. All of the vessels in the retina are of this nature. These vessels possess only a thin tunica externa and have a muscular tunica media between 1 and 5 cells thick. These vessels are where most of the pressure drop between the heart and the veins occurs.

Vascular stiffness in arteries can be expressed in terms of Young's modulus of elasticity. It is given as the ratio of tensile stress σt to tensile strain εt, E = σt / εt. When this relationship is linear, it is said to obey Hooke's law. This approximation is only valid for cylindrical vessels and when the deformations are relatively small. Stress is proportional to blood pressure and is often given in units of mmHg. The tensile strain εt is split into radial strain and longitudinal strain. When a material's volume does not change under strain, it is said to be incompressible. Arterial walls have been experimentally shown to be nearly incompressible.
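As a trivial numeric illustration of the linear (Hookean) relation above, a sketch follows; the values are hypothetical, not measurements from the literature.

    def youngs_modulus(tensile_stress, tensile_strain):
        """E = sigma_t / epsilon_t under the linear (Hooke's law) approximation."""
        return tensile_stress / tensile_strain

    # Hypothetical example: a wall stress of 100 mmHg producing a strain of 0.1 (10%)
    E = youngs_modulus(100.0, 0.1)   # 1000, in the same pressure units as the stress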
1.2.2 Blood Fluid Mechanics

The modeling of blood flow is based on the Navier-Stokes equations. In the early 19th century, a French physician developed the Poiseuille equation for describing blood flow in the major vessels such as the aorta. The equation is written as

    Q = π r^4 δp / (8 η l)    (1.1)

with Q as the steady flow in ml/s, r as the lumen radius, l as the length through which the blood flows, δp as the pressure drop, and η as the viscosity of blood (0.03 poise). This model therefore shows an approximate r^4 relationship between blood flow and vessel radius in a cylindrical tube. Also, in this tube, we have a parabolic velocity profile where the fluid flow is fastest towards the inside and slowest on the outside. There exist other, more sophisticated models considering the non-linearities and non-Newtonian flow. These models are significantly more complicated and do not provide sufficient insight for our task of segmentation and visualization of blood vessels. However, some of the main observations used in the derivation of these models will be considered.

1.2.3 Branching

Vascular structure exists to provide efficient distribution of oxygen and nutrients as well as efficient removal of waste products. Most branching occurs through bifurcation, which means that most vessels fork into only two children. For most organs and tissues, trifurcation is exceedingly rare. Good correlations have been found between mathematical models and measurements for vessel radii and branching lengths, but not for branching angles [33].

Space-filling is an important aspect of the vascular structure. The greater the surface area, the greater the need for the expansion of the vascular network. This is why many cardiac indices are normalized to the body surface area. One such space-filling model has all branches scaled by a linear coefficient α or β. That is, the first left branch is scaled by α and the right branch is scaled by β. The second left branch is then scaled by α^2 and the right branch of that branch by αβ. This is recursive until the scaling reaches a sufficiently small radius termination condition.

Other models have taken inspiration from the fact that the sum of the cross-sections of the children is larger than that of the parent vessel. There is a well known relation between the vessel diameters of parent and children branch segments, given as

    dp^k = d1^k + d2^k    (1.2)

with 2 ≤ k ≤ 3 being observed in real physiology. In fact, this is known as Murray's equation, and we can derive it for k = 3 under some ideality assumptions using the Poiseuille equation.

There are two main classes of bifurcations: Y-junctions and T-junctions. The aortic, carotid, iliac, and coronary bifurcations are examples of Y-shaped junctions, while the renal, femoral, celiac, and mesenteric are examples of T-shaped ones. The literature does not go into great detail about either type, but the main distinction is that in a T-junction one of the branches remains relatively similar to its parent and the other does not.

Murray's law states that the flow is proportional to the cube of the lumen radius in order for the flow to be optimized. This is largely applicable to larger vessels because in small vessels the majority of resistance comes from the smaller vessels they branch into. Consider a bifurcation in a vessel with length lo and radius ro that splits into (l1, r1) and (l2, r2). By applying Poiseuille's equation and Murray's equation we get a cost function in terms of power, which is minimized and then substituted into Qo = Q1 + Q2 to yield once again Eq. 1.2 but with k = 3. Taking this equation, for the case when r1 = r2 we see

    ro^2 = r1^2 cos(θ) + r2^2 cos(φ)    (1.3)

with θ and φ being the branching angles relative to the mother vessel direction. While vessel radii calculations have been shown to correlate well with reality, even simple cases like the above for branching angles do not correlate as well and are approximate at best.
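The relations above lend themselves to a quick numeric check. The sketch below evaluates Poiseuille flow (Eq. 1.1) and the parent radius implied by Murray's relation (Eq. 1.2 with k = 3); the function names and example values are illustrative assumptions, not part of the cited models.

    import math

    def poiseuille_flow(radius, pressure_drop, length, viscosity=0.03):
        """Steady flow Q = pi r^4 dp / (8 eta l), Eq. 1.1 (blood viscosity ~0.03 poise)."""
        return math.pi * radius ** 4 * pressure_drop / (8.0 * viscosity * length)

    def murray_parent_radius(r1, r2, k=3):
        """Parent radius from dp^k = d1^k + d2^k (Eq. 1.2); radii obey the same relation."""
        return (r1 ** k + r2 ** k) ** (1.0 / k)

    # Hypothetical symmetric bifurcation: each daughter is ~79% of the parent radius
    parent = murray_parent_radius(1.0, 1.0)            # 2**(1/3), roughly 1.26

    # Halving a tube's radius cuts Poiseuille flow by 2^4 = 16 for the same pressure drop
    ratio = poiseuille_flow(0.5, 1.0, 1.0) / poiseuille_flow(1.0, 1.0, 1.0)   # 0.0625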
The arterial system tapers and branches. The lumen is the area inside the vessel where blood flows. The branched daughter vessels always have a smaller lumen compared to that of the mother vessel but combined have a greater cross-sectional area. The geometrical taper of a blood vessel describes the shrinking lumen along the length of the blood vessel as vessels branch from it. There is a roughly exponential relationship describing this:

    A(z) = A(0) e^(−ko z)    (1.4)

In Eq. 1.4, A is the cross-sectional area, z is the distance along the vessel (cm), and ko is the taper factor, which varies between vessels. Typical values of ko for major vessels are given in Table 1.1.

Table 1.1: External diameters and taper factors. From [33] table 2.2.1.

    Vessel             d (cm)    ko (cm^-1)
    Abdominal Aorta    0.777     0.027 ± 0.007
    Iliac Artery       0.413     0.021 ± 0.005
    Femoral Artery     0.342     ± 0.007
    Carotid Artery     0.378     ± 0.004

1.2.4 The Venous System

The venous system is significantly larger and functions quite differently from the arterial system. The venous system acts as a reservoir for blood and operates under significantly lower pressure. The vessels themselves are collapsible, contain bicuspid valves to prevent back flow, and are more affected by the skeletal muscles than arteries are. Venous vessels are often located in close proximity to arteries and are susceptible to interference from the pulsatile flow in the neighbouring vessel. The transmural pressure, which is the pressure across the walls, oscillates close to zero. There are many studies to determine and model the blood pressure of the venous network, but there appear to be none describing structural properties such as branching and radial changes with the same precise mathematical models as for the arteries.

Just as arteries are divided into three categories, so are veins. Venules are those with lumen diameters similar in size to arterioles and are composed of endothelium sitting in a connective tissue layer. Venules feed small veins, which have all three tunics present. Medium and large sized veins are structured quite similarly, with the main difference being that those greater than 2 mm contain one-way valves to prevent back flow due to gravity and low pressure. There are also far more valves in the lower extremities than in the rest of the body.

1.2.5 The Capillary System

Capillaries are the smallest blood vessels found in the body and form the gas and nutrient exchange network where red blood cells travel single file down extremely narrow channels. They connect the arterioles to the venules through a mesh as shown in Fig. 1.3.

Figure 1.3: A graphical representation of the capillary system. This figure was released into the public domain by the US government and is hosted by the Wikimedia Commons (http://upload.wikimedia.org/wikipedia/commons/d/da/Illu capillary.jpg).

The capillary wall consists of a simple endothelial cell layer which facilitates the exchange of gases, nutrients, and waste materials between the tissues and the vascular system. This is the site of the greatest drop in blood pressure in the human body. The blood flow rate into the capillaries is regulated by the precapillary sphincters located at the arteriole branch points where the capillaries begin. Capillaries are roughly between 0.5 mm and 1.0 mm long and are approximately the diameter of a red blood cell (7.5 µm). They do not change size when branching, unlike all other vessels. They are too small to be visible in retinal images, so they were not considered when developing Live-Vessel.
1.3 The Eye

1.3.1 The Eyeball

The eyeball is essentially a fluid-filled sphere consisting of several concentric shells as shown in Fig. 1.4. The vitreous humour is a clear gelatinous gel that fills the hollow interior and provides appropriate pressure to keep proper shape and form. The cornea, pupil, iris, ciliary muscles, and other nearby structures all provide the mechanisms and control for focusing and regulating the amount of light that falls onto the retina.

The sclera is the white elastic outer layer which serves to protect and provide support for the internal components. It is roughly 1 mm thick in the posterior and thins towards the front. In the rear, the optic nerve and blood vessels pass through a single spot, and they can be easily distorted here.

Figure 1.4: An anatomical representation of the human eye. This figure was released into the public domain and is hosted by the Wikimedia Commons (http://en.wikipedia.org/wiki/File:Schematic diagram of the human eye en.svg). Labelled structures include the iris, pupil, cornea, anterior chamber (aqueous humour), posterior chamber, zonular fibres, ciliary muscle, lens, suspensory ligament, retina, choroid, sclera, vitreous humour, hyaloid canal, optic disc, optic nerve, fovea, and retinal blood vessels.

Nestled against the interior of the sclera is the choroid. The choroid is a very thin structure, roughly 0.5 mm thick. The choroidal stroma is a pigmented layer of loose tissue containing, among other things, blood vessels. Choroidal veins contain no valves and are split into four quadrants of the eye. These lay on top of a bed of capillaries, which lay on top of the pigmented epithelium layer. The blood vessels in the choroid have a different entrance and exit point than the retinal vessels, and they also have a different branching pattern. They supply blood to the ciliary muscles and the outer third of the retina.

1.3.2 The Retina

The retina is composed of a single layer of pigmented epithelial cells and three layers of specialized neuronal cells for sensing light. The pigmented epithelium is the outermost layer and is only a single cell thick. The amount of pigment as well as the size and shape of these cells vary throughout the eye, which gives a patchy appearance when viewed under an ophthalmoscope. The retina can be divided into several regions, as marked in Fig. 1.5.

Figure 1.5: This is an ophthalmoscopic image of the retina and its blood vessels from the DRIVE database [51] with important features labelled.

The optic disc is a special region where the optic nerves and blood vessels attach and enter the eye. There are no rods or cones to detect light in this region, and it is aptly known as the blind spot. The optic disc varies quite significantly in healthy patients, but generally all healthy patients have a sharp and clear boundary between the disc and the light-sensing retina. Occasionally, the nerve fibers are myelinated at the surface and cause white feathery patches obfuscating the vessels [6]. The optic disc is shown in high resolution in Fig. 1.6.

Figure 1.6: This is a high resolution image of the optic disc cropped from the REVIEW database [1]. Note the myelination of the nerve ends causing a blurring of the vessel structures. Also note the entrance points for the two vessel networks in the eye.
The macula is a small yellow spot near the center of the eye, but in fundus images it can appear dark orange-red in colour. In the center of the macula is a small pit of cones. This is where central, highly focused vision occurs. It is also the region where most colour vision occurs, since it is dominated by cones rather than rods. This central region is critical to vision, and in some cases of diabetic retinopathy, blood vessels and hemorrhages can cover this region and cause various degrees of blindness.

1.4 Ophthalmoscopy

Ophthalmoscopy is the procedure for imaging the retina by shining a light onto it and magnifying the result. According to Dr. John Parr [54], a good program for examining the eye is to start with the optic disc, follow the retinal vessels to the four quadrants, examine the background in each of these quadrants, and finally inspect the macula and fovea. This is the workflow that will be incorporated into the segmentation tool developed in this thesis.

The optic disc has three properties to search for: the degree of cupping or swelling, the visibility of the disc margin, and the colour. Cupping can be seen as a small bright spot in the middle of the optic disc as in Fig. 1.7. The disc margin is typically well defined and uniform around the whole circumference, with some blurring by glial tissue on the nasal side. The colour of the disc is given by a network of capillaries in the translucent nerve fibers without any layer of pigment to cover it up. The colour varies greatly in healthy patients, but a reduction of colour can indicate a whole number of diseases.

Figure 1.7: The bright yellow spot in the optic disc is a cup. It is a central depression occurring around the optic nerve. Both of the images are from the DRIVE database [51]. Left: A well pronounced optic disc cup. Right: Image showing both retinal and choroidal vessels as well as an enlarged and thinned optic disc.

1.4.1 Vessel Diagnostics

Retinal vessel properties are important markers for diagnosing a whole host of diseases and are important considerations in developing an accurate vessel segmentation tool. This section discusses a number of diseases described by Dr. Parr in Introduction to Ophthalmology [54].

Arterial

Arterial symptoms include abrupt narrowings or terminations, localized widenings, and even rippling along one or both edges. Irregularities do increase with age, but generally such symptoms are due to other diseases such as hypertension or diabetes.

Some diseases can be characterized by the arterio-venous crossings. In normal crossings, the venous vessel is visible right up to the boundary of the artery. There are four classes of changes. The first class is the concealment of the vein. In these cases, the venous column appears to stop before reaching the crossing. This is due to a thickening of the sheaths at the crossing. The second category is called deviation, where the path of the vessel makes a slight turn before or after the crossing. This is usually due to dragging by the vessel above. The third, but uncommon, change is where the vessel becomes thinner at the crossing. The reasons for this are still disputed, and it mainly occurs in older patients. The images in Fig. 1.8 demonstrate both wrapping and kinking.

Emboli are a class of buildup inside the vessels that can be made from deposits of cholesterol, calcium, or platelets. Cholesterol emboli are generally bright, stick to the vessel walls, and do not greatly restrict blood flow.
Calcium emboli are very rare but greatly obstruct blood flow. Platelet emboli also greatly restrict blood flow and appear as a small white blob. They can often move from bifurcation to bifurcation.

Sheathing of the arteries can occur on or near the optic disc by glial tissue as shown in Fig. 1.9. This is not always an indication of pathology but may be due to arterial sclerosis, hypertension, fatty deposits, diabetic retinopathy, or leukemia.

Light reflexes are the reflection properties that vessels and the background have when stimulated by light. In many cases, the central third of a normal artery has a bright strip as shown in Fig. 1.10. These are caused by the interface between the blood and the vessel wall. The importance of alterations is difficult to assess and serves little diagnostic purpose, but they do pose many problems for imaging and automated analysis, and most models do not consider them either directly or indirectly.

Tortuosity is the property of bending and twisting of the vessels along their paths. Normal variation is so great as to drown out pathological cases for automated analysis, but generally vessels tend to be straighter with age and more tortuous with higher blood pressure.

Figure 1.8: These images are cropped from the REVIEW database [1]. Left: A vein wraps around an artery. Right: An artery crosses a vein, with the vein forming an S-shaped kink.

Figure 1.9: Arteries appear whiter than usual due to partial sheathing. This is a cropped image from the REVIEW database [1].

Figure 1.10: This cropped image from the REVIEW database [1] demonstrates the light reflex in an artery. Note the pixel intensity is quite similar to that of the light reflex for the retina background.

Capillary

Capillaries are generally too small to be seen directly with an ophthalmoscope, but in some conditions, such as diabetes and hypertension, the capillaries may become overly dilated and visible. They are often in localized areas of the fundus and show up best when fluorescein is added. It is also possible for these capillaries to appear as small red dots in the case of microaneurysms. These are hard to distinguish from normal hemorrhages and commonly occur in diabetic retinopathy and hypertension.

Venous

Veins can also be used to diagnose or merely indicate a disease's presence. As with arteries, directly assessing the calibre of veins is clinically difficult, but variations in dilation occur alongside many other problems. Neovascularization can occur in or on the retina and is commonly associated with diabetic retinopathy. These new vessels are generally small and form networks with looping and meshing. Fibrous tissue proliferation always occurs with these changes and becomes more pronounced as the disease progresses. These changes are demonstrated in Fig. 1.11.

Figure 1.11: This image from the DRIVE database [51] demonstrates neovascularization in the retina. The newly formed vessels are tortuous and many appear near the optic disc.
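Tortuosity, as used later in this thesis (Fig. 3.18), is quantified as vessel length divided by the straight-line distance between the segment endpoints. A minimal sketch of that ratio for a sampled centreline is given below; the function name and the sample points are illustrative only.

    import math

    def tortuosity(centreline):
        """Arc length of a sampled centreline divided by the distance between its endpoints."""
        arc = sum(math.dist(p, q) for p, q in zip(centreline, centreline[1:]))
        chord = math.dist(centreline[0], centreline[-1])
        return arc / chord

    tortuosity([(0, 0), (1, 0), (2, 0)])          # 1.0 for a perfectly straight segment
    tortuosity([(0, 0), (1, 1), (2, 0), (3, 1)])  # ~1.34, increasing as the path bends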
Pale areas are produced by a loss of transparency in the retina and may even expose the sclera. They can also be changed due to hemorrhaging, retinal and/or choroidal detachment, swelling, infection, or inflammation.

Retinal exudates are patches and spots generally pale in colour. They are divided into two categories: hard and soft. Hard exudates are sharply demarcated spots which occur deep in the retina. Soft exudates are sometimes referred to as cotton wool spots because they are splotchy soft patches where the axons in the nerve fibre layer become swollen. These are both shown in Fig. 1.12.

(a) Soft Exudate  (b) Hard Exudate

Figure 1.12: These cropped images from the REVIEW database [1] demonstrate both hard and soft exudates in the retina.

1.4.3 Diabetic Retinopathy

One of the major degenerative diseases in the eye is diabetic retinopathy. The DRIVE dataset that will be experimented with was created exclusively to study this important disease. This condition adversely affects vision and may eventually progress to cause complete blindness in patients.

(a) Normal Vision  (b) Distorted Vision

Figure 1.13: Images showing normal colour vision and distorted colour vision due to diabetic retinopathy. Images courtesy of NEI [21].

Diabetic retinopathy is a common disease affecting nearly all patients with type I diabetes and 60% of patients who have had type II diabetes for more than ten years [21]. In general, retinopathy progresses from mild non-proliferative abnormalities, including increased vascular permeability, to moderate and severe variations characterized by vascular closure. Proliferative diabetic retinopathy is a particularly damaging stage wherein new blood vessels are grown on the retina and the posterior surface of the vitreous. They emerge from the optic disc region and spread towards the macula where central vision occurs.

Vision loss from diabetic retinopathy has several mechanisms. The macular variation occurs when protein or blood collects on or under the macula. It can be seen quite clearly as staining of the eye. In many cases swelling and strain can occur, further distorting central vision. The formation of new blood vessels can lead to distortion of the retina and strain it, leading to retinal detachment. The formation of these new blood vessels may also cause hemorrhaging.

Diabetic retinal changes can be identified using an ophthalmoscope. The examination consists of searching for microaneurysms, dot and blot hemorrhages, hard and soft exudates, certain changes in arteries and veins, and neovascularization. It is useful to divide diabetic retinopathy into proliferative and non-proliferative kinds. Proliferative retinopathy has neovascularization, and these small vessels are delicate and bleed easily. This often occurs as a later stage following the non-proliferative kind. Non-proliferative retinopathy involves hemorrhaging and the development of soft and hard exudates.

Hypertension and type II diabetes are closely related. Retinal changes from hypertension are clearly recognizable and used for diagnosis. Occasionally, these changes can be reversed with a decrease in blood pressure. These changes include generalized and localized narrowing, arterio-venous crossing changes, alterations to the light reflex, increased or reduced tortuosity, and alterations of the branching angle. While these occur in patients with this disease, these changes are often not sufficient on their own since they can also occur in healthy patients or be caused by other diseases.
1.5 Objectives and Contributions

Live-Vessel began as a project developed by a previous Master's student [57]. It extended Live-Wire by incorporating vessel terms and simultaneously optimizing over (x, y, r), resulting in a medial path along the vessels with symmetric boundaries delineating the vessel walls at radius r away from the medial line. The implementation was limited to 4 different vessel radii and was not capable of real-time interaction due to computation and memory requirements. Rather than having a live wire track between the mouse and an anchored seed point, it functioned by having the user click seed points, waiting a number of seconds for the path to be displayed, and then asking the user to either approve or redo the current segment.

The vessels were obtained by segmenting overlapping strips of vessels and then combining the strips to produce a single binary output. In 2D imaging, there are many overlapping segments where vessels cross or branch, and these are difficult to distinguish by solely examining the binary output. It also did not preserve any of the path and radius data and was made with the intention of only producing a binary image output. The radius and medial data cannot be retrieved or precisely estimated from this type of output since we are probing the limits of image resolution where vessels are between 1 and 3 pixels wide. A significant amount of information is lost due to aliasing in the output, among other things. The majority of techniques for segmenting the retina have also focused on yielding a binary output and fail to capture the necessary information that clinicians and researchers need to analyze diseases in detail. There appears to be little to no utility in having such a binary output.

The main objective function from the original Live-Vessel provided the cost to travel between two pixels p = (x1, y1, r1) and q = (x2, y2, r2) and took the form

Cost(q, p) = w1 Cv(q) + w2 Cd(p, q) + w3 Ce(q) + w4 Cs(p, q) + w5 Cr(p, q).  (1.5)

Each individual cost term C was normalized to lie in the range [0, 1] and the weighting parameters wi were determined empirically. The first cost term Cv is directly from Frangi's vesselness filter [20]. It examines the grayscale curvature and determines if the pixel lies within a vessel-like tubular structure at that scale. The second cost term Cd considers the direction of minimum principal curvature at points p and q and determines if they are in line. The filter was thus defined in terms of normalized minimum principal curvature directions x1 as

Cd(p, q) = (2/π) arccos(|x1(p) · x1(q)|).  (1.6)

The third cost term Ce was called the image evidence cost. This examined the edges of the image, taken as a weighted sum of the Canny edge detection filter, Laplacian of Gaussian filter, and gradient magnitude filter. It was stated that Canny and LoG were chosen because they were less sensitive to noise, but the gradient magnitude filter does involve pre-smoothing. This Ce term is the computed average of N samples taken from this edge image at radius r away from center point q, perpendicular to the direction of minimum curvature. The last two terms in the objective function are the spatial and radial smoothness terms Cs(p, q) and Cr(p, q). Cs(p, q) is simply the Euclidean distance between p and q (either √2 or 1), while Cr(q, p) is simply the absolute difference between the radius values at p and q normalized to a scale of 1.
It was stated that the term Cs was chosen to keep the medial axis shorter and discourage zig-zagging, but this term will not perform that task since it may penalize shorter diagonals and favour longer right-angle turns. The shortest path between points p and q is a path through the 3D space (x, y, r). The idea is that it globally and simultaneously optimizes the medial path and radius. This path was found using Dijkstra's algorithm [17], and the computational explosion from adding a complete third dimension to the graph search is the reason for the 4-radii limitation and the non-real-time wire.

This thesis provides three main contributions. First, Live-Vessel is taken into the domain of real-time interactivity. The graph search and optimization process underwent several necessary changes in order to make this possible. Second, the objective function was modified to incorporate colour information and increase the contrast between desirable and undesirable paths. This allowed the use of a lower-precision but smaller data type to reduce the memory burden. Third, vessel connectivity information was gathered and retained for the post-segmentation analysis tools. The GUI also had several important changes in order to facilitate the capture and maintenance of this data. This technique was validated using real medical data from the DRIVE, STARE, and REVIEW retina vessel databases.

Chapter 2

Live-Vessel Theory and Extensions

Live-Vessel is an interactive technique where the user traces out the medial axis vessel lines while the wire displays the medial and vessel contours. It also captures vessel-specific features such as connectivity and branching in order to yield a topologically correct and expressive vessel tree structure which can be used for further analysis and study. The Live-Vessel objective function differs from the Live-Wire function since it is defined over both radius and medial terms, yielding a single optimal path determined simultaneously through both subspaces. We present and use several improved and enhanced vessel and centerline detection filters. Live-Vessel considers the topology of the anatomical structure and is able to extract detailed information for analysis beyond simple segmentation labels.

2.1 Live-Wire

Live-Wire is a very popular and well understood technique for delineating boundaries and contours in an image [4]. It is a two-stage optimization technique. The first stage is generally precomputed and determines the boundary cost (i.e., the likelihood that the pixel belongs to an edge). The second stage is the optimization process and includes the computation of any path-dependent cost terms to ensure a smooth and regular path. The path is traced along the object boundary and is updated in real-time as the user drags the mouse across the screen. This interactivity is the main advantage of this technique since the user is able to intelligently guide the segmentation without the burden that manual segmentation would impose. The goal is to provide the accuracy of manual segmentation with the speed of automated segmentation without compromising either too severely.

In Live-Wire the image is converted into a graph where each pixel becomes a node and each node carries a cost. The cost of traveling from pixel p to q is the sum of all the nodes it visits along with a path smoothness cost where straighter contours are preferred to zig-zags.
The best contour between two pixels is the optimal path, and this can be determined globally and efficiently using Dijkstra's method. Note that Live-Wire does not provide sub-pixel accuracy under normal conditions since the fundamental unit in the graph is the pixel. An example of a cost map for Live-Wire is given in Fig. 2.1. In this image, the pixel intensity represents the likelihood of not being an edge. Note that this is merely an estimation and that not all boundaries are clearly defined, as is quite typically the case. User interaction, path smoothness, and other regularization parameters are crucial in order to accurately segment the images.

(a) Original  (b) Cost Map

Figure 2.1: Image of Cocoa watching Star Trek: The Next Generation and the associated Live-Wire cost map.

2.2 The (x,y,r) Search Space

Traditional Live-Wire uses Dijkstra's algorithm [17] to calculate the globally optimal path between two seed points. Typically, the costs are inversely proportional to the image edge strengths, with the objective function often including smoothing terms to enable shorter and less jagged paths to overcome image noise and artifacts. Live-Vessel extends Live-Wire into a 3D search space where we simultaneously find the optimal vessel centerline and radius. In this method, rather than delineating a vessel by guiding Live-Wire contours along both sides of the vessel boundary, which is very inefficient and does not capture vessel topology well, the user steers the wire along the centerline (or medial axis) of a 2D vessel and snaps vessel segments together to form the full vascular network. The contour along the medial axis of the vessel is computed as the optimal path between two points in 3D space (x, y, r), where (x, y) are the image spatial coordinates at each medial node and r represents the corresponding radius value at each of those nodes, as illustrated in Fig. 2.2. The search space can be visualized as a series of 2D planes stacked on top of each other with one plane for each radius. The added constraint is that no (x, y) coordinate may appear twice in the optimal path. The optimal path is determined by applying Dijkstra's algorithm [17] between start and endpoint nodes.

Figure 2.2: Live-Vessel's 3D graph search algorithm depicted in 2D (x, y) and 3D (x, y, r). (Left) Medial path sequence shown as a dashed blue line connecting two neighbouring nodes p = (x1, y1, r1) and q = (x2, y2, r2), projected on the xy plane. The arrows perpendicular to the blue line denote the radius r dimension. (Right) Search space for a single node shown in the center (red). The 8-neighbours of the central node in the same radial plane are shown in light green while the 8-neighbours in the adjacent radial plane are shown in dark green.

2.3 User Interaction

The demands of vessel segmentation are quite different from those in general image segmentation. In Live-Wire the user typically traces an object boundary with the aid of a snapping wire. The wire is anchored at seed point locations and the region enclosed by the wire is labelled as segmented. In contrast, in Live-Vessel the user marks points along the center of a blood vessel and the wire traces along the medial axis while two more trace out the parallel contours along the boundary of the vessel. The full segmentation is achieved by connecting these vessel segments together into a tree network.
This provides a radically different user experience and we must consider the human computer interaction elements along with the wealth of specialized information which we wish to extract. 32  The implementation presented in this thesis is primarily designed for segmenting 2D images with enhancements useful for retinography. These 2D images contain many difficulties and properties unique to ophthalmology. In particular, we are dealing with images where vessels in 3D space are projected onto a 2D plane and layers of uneven pigment and tissue all captured using visible light. The 2D projection makes it easier to spot and trace vessels since in 3D we often see only cross-sections taken at various angles as the vessel follows a particular path through space. One complication though, is the fact that in 2D, arteries and veins appear to intersect even though in 3D they merely pass near each other. Though certain pathologies and irregularities such as stenosis and emboli are common to large vessels in many regions throughout the body, each vessel type and region may have their own particular diseases, symptoms, and anatomical traits that one must consider. Many people have developed segmentation techniques that deal with healthy patients and idealized characteristics of healthy patients, but clinicians often need the software for examining diseased tissues with wildly differing traits. It is important to afford the user control where their domain specific knowledge can be incorporated and fully utilized rather than rely on software parameters and terms to be learned and tuned for every possible case. The images in Fig. 2.3 show the process of segmenting a vessel using Live-Vessel. In our system, the user specifies spatial coordinates (x, y) with the cursor and the radius via keyboard hot keys that increase or decrease in tunable increments. Just as in Live-Wire, the interactive wire is drawn in real-time along the medial axis in yellow and the boundaries on either side are shown in blue. The seed points are marked with a cross-hair whose radius is that of the vessel. This is shown in Fig. 2.3a. Once solidified, the medial axis colour changes to white and the boundary contour to black. The vessel is then filled in with a solid colour representing the vessel radius at that point. The spectrum was chosen to emphasize changes in vessel radius in order to quickly spot problems as shown in Fig. 2.3b.  33  (a)  (b)  Figure 2.3: Overview of Live-Vessel operation. (a) Vessel boundary wire (blue) and medial axis wire (yellow) from a seed point to the next potential seed point (mouse cursor) waiting for user approval. (b) The approved vessel segment is shown by colour coded discs representing the radius at each point along the vessel path.  34  Blood vessel segments are connected to form complex and topologically correct vessel trees prime for analysis. There are usually two networks in each segmentation: one arterial network and one venous network. Occasionally, there are other non-connected isolated vessels that can appear due to disease or image cropping. Live-Vessel allows the user to handle bifurcation points by initiating new branches and handles cross-overs by simply passing a wire over the segment. A user will begin a segmentation by clicking a seed at the root of a tree and trace a vessel along a path to its very end. New branches and segments will snap to existing seed points and branch points can be inserted anywhere along a pre-existing path. 
All snapable seed points are marked with black dots and snapable root segments are marked with light gray to distinguish them from all other points. Once the user is satisfied with the segmentation, the collection of seed points and vessel segments are analyzed and converted into vessel tree representations. Live-Vessel was designed to handle any number of independent vessel networks in an image. The user may specify a vessel network root anywhere in the image and can then connect segments and branches to it without limit. The graphical user interface was designed to facilitate this process. The connectivity graph for such a segmentation is shown in Fig. 2.4.  35  (a)  (b)  Figure 2.4: Image a demonstrates a connectivity subgraph representing one of the two vascular networks on image 40 of DRIVE. Each node is numbered with its x-y coordinate and represents a branch point. Image b shows where these nodes are located on the segmentation. The large green dot is the top node located at (505, 274) and the pink nodes are descendants.  36  In Live-Vessel, the primary sources of delay in segmentation are in the visual identification of blood vessels and in the adjustment of the vessel path and radii. While the former is a human bottleneck, this can be partially alleviated by allowing the user to visualize the photographic image in different ways with the press of a button. The ability in Live-Vessel to toggle between display images was added to make it more apparent if a fuzzy blur is in fact a blood vessel. The display images can be anything ranging from contrast enhanced photographic images to vesselness and other cost function images. The second problem was addressed by allowing Live-Vessel to automatically adjust the end point radius to the optimal value. This can be done efficiently by simply accepting the radius value for the first instance when the graph search arrives at the correct (x, y) spatial coordinates rather than searching until it arrives at the proper (x, y, r) coordinate. Further information on the graph search can be found in Sec. 2.5. The radius auto-adjustment feature can be enabled or disabled via the keyboard. There are two main advantages to enabling this feature. The first is that it reduces the number of user adjustments and the second main advantage is that this method can reduce user error. In many cases, even a slightly incorrect radius with an error less than a pixel will create an obvious bulge or stenosis artifact as seen in Fig. 2.5. It is also crucial that optimal values are chosen for the branching nodes since these are critical for the graph analysis utility.  37  (a) False Bulge  (b) False Stenosis  Figure 2.5: Segmentation artifacts created by a user selecting an incorrect radius. These can be resolved by enabling the automatic endpoint radius detection feature. So far, Live-Vessel has been restricted to the domain of 2D projection or photographic imaging and has not been applied to 3D blood vessel imaging. There is a variation of Live-Wire for segmenting 3D images[58]. Since creating a live-surface and having a user navigate a 3D image yields too many unsolved research problems in the fields of human computer interaction and graph theory optimization, it works by having a user segment 2D images in 2 of the 3 orthogonal planes and selects specific points from those contours as seed points for the final 3D plane which is processed completely automatically. 
This approach would be useful for segmenting 3D blood vessels; however, users often find vessels difficult to track when observing 2D planes. Blood vessels will appear as an elliptical bright or dark spot in an image, with the appearance varying from circular to line-like as the vessel travels from slice to slice. The tracking is particularly difficult when there are many vessels passing nearby and when tortuous vessels end up oscillating between slices.

2.4 The Objective Function

The objective function is defined from node p = (x1, y1, r1) to a neighbouring node q = (x2, y2, r2) as

Cost(q, p) = (w1 Cv(q) + w2 Cc(q) + w3 Ce(q)) (w4 Cs(p, q) + w5 Cr(p, q)).  (2.1)

This function is comprised of three pixel-based and path-independent vessel detection cost terms, Cv(q) (vesselness), Cc(q) (curvilinear), and Ce(q) (edge evidence). Each term has a corresponding weighting parameter w1, w2, w3, w4, and w5. This function yields a minimum cost along the vessel's medial axis at the appropriate vessel radius. These terms also vary smoothly around their optimal values, giving some tolerance to noise. This cost function structure differs from the original Live-Vessel in how it multiplies the smoothing terms rather than adding them. The smoothing terms are to be interpreted as spatial edge lengths, and this is why they are multiplied with the position-based node terms.

The term Cv is based on Frangi's multi-scale vesselness filter [20], which we extend here to incorporate colour image cues and edge detection to better isolate the medial axis. The term Cc is a curvilinear structure detector, similar to that of Koller et al. [28], which we modify to incorporate colour information; it replaces another term used in the original Live-Vessel. The term Ce is a measure of the medial node fitness assessed using image edge evidence at a scale-dependent distance and was also modified to search for symmetry and colour edges.

Figure 2.6: Graphical representation of the individual cost terms employed in Eq. 2.1.

Each cost term in Eq. 2.1 can be individually normalized to a [0, 1] range and weighted by parameters wi. The cost terms are illustrated in Fig. 2.6.

2.4.1 Quaternion Vesselness Cost

The cost function used by Frangi et al. [20] attempts to quantify the likelihood of a pixel belonging to a vessel of thickness σ at a scale space image fσ. It was used in the original Live-Vessel but has since been modified in three ways: the colour quaternion version of the filter [65] was used, edge penalties were added to isolate the medial axis, and the square root was taken in order to further increase the contrast between the medial axis and everything else. The derivation is given in detail in 4. One of the original goals of the vesselness filters was to produce a near-binary result where the entire vessel is filtered out, whereas we wish to isolate the medial axis. The vesselness filter response, for a particular scale at point q = (x, y, r), is given by:

Cfv(q) = 0 if λ2 > 0, and Cfv(q) = exp(−Rβ²/(2β²)) (1 − exp(−T²/(2c²))) otherwise,  (2.2)

where Rβ = λ1/λ2 represents the eccentricity of a second order ellipse, T = √(λ1² + λ2²), and β and c control the filter sensitivity to each of the terms. To detect curvilinear structures in 2D at a given scale σ, the image is first convolved with a Gaussian kernel of variance σ².
The ordered eigenvalues |λ1| ≤ |λ2| of the 2 × 2 Hessian matrix Hσ for each pixel can then be used to determine whether a pixel lies in a vessel of that scale [35]. Eq. 2.2 summarizes the eigenvalue conditions for vesselness quantification at a given pixel. Intuitively, this equation attempts to place the pixel either inside or outside a vessel with values either close to 1 or 0. The two exponential terms work together to squeeze the values into this range. Since we validate Live-Vessel on real medical data of optical fundus images that are in full colour, we employ the colour Hessian as defined by Shi et al. [65]. The Hessian matrix for each colour channel is individually computed and the colour Hessian is found by multiplying the red, green, and blue channels with the imaginary numbers i, j, and k respectively. The eigenvalues of this matrix are found through quaternion singular value decomposition (QSVD) [55].

This vessel detection filter was originally designed to work by taking the maximal response across all vessel radii. Its goal was to separate vessel from non-vessel, yielding a nearly binary output. This is a scale space technique where, for each possible vessel radius, the image is convolved with a Gaussian kernel of that size and then forwarded through the filter. The final output is obtained by taking the maximum filter output value for each pixel across all scales. The images in Fig. 2.7 demonstrate the Frangi filter for three different scales: one small, one medium, and one large. Note that the correct scale for each vessel has assigned the medial axis a low cost and the surrounding and incorrect scales a higher cost.

Unfortunately, at the smaller scales this filter produces a positive response near the vessel wall. This response moves smoothly in towards the center of the vessel with increasing sizes until it reaches the correct size at the medial axis. The cost was increased at these incorrect scales by adding the image edges to the filter output for each scale. This increases the penalty at the vessel boundaries and surrounding pixels compared to those forming the centerline, as shown in Fig. 2.8. This also has the beneficial property of providing a bounding wall which greatly reduces the effective search space and hence greatly improves efficiency. Fig. 2.9b shows an example output of the colour quaternion, edge-enhanced vesselness filter.

Figure 2.7: Demonstration of the Frangi vesselness filter on DRIVE image 40 for 3 different scales: (a) small, (b) medium, (c) large.

Figure 2.8: These minimal response images demonstrate the edge penalty effect on the vesselness filter on image 40 of DRIVE: (a, b) no edge penalty, (c, d) edge penalty. Note the medial axis in the bottom image is isolated across all scales. The colour blue has an associated cost of 0 while red has one of 255, and it scales linearly with the transition shown in the legend.

Figure 2.9: (a) Original image. (b) Maximal response vesselness filter output with added edges. (c) Magnification of a small region denoted by the rectangle from (b) to show detail. Note that the medial axis (dark) is associated with a low cost and the vessel boundaries with a high cost (bright). This keeps the graph search contained to the vessel center.

Finally, we use this vesselness filter in a graph search where costs are accumulated along a path. In order to stay along the desired path and avoid shortcutting we need to increase the contrast between the medial axis and everything else. Since the output was normalized to the range [0, 1], the square root of the entire term could be taken to increase the contrast between low values. This yields a final cost function with image I as

Cv(q) = √(1 − Cfv + Edge(I)).  (2.3)
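To make the mapping from Hessian eigenvalues to this cost concrete, the following C++ sketch evaluates Eqs. 2.2 and 2.3 for a single pixel at a single scale. It is only an illustrative sketch, not the thesis implementation: the eigenvalues and normalized edge response are assumed to be supplied by the surrounding pipeline, and the default values shown for β and c are placeholders.

```cpp
#include <algorithm>
#include <cmath>

// Frangi-style vesselness response for one pixel at one scale (Eq. 2.2).
// lambda1 and lambda2 are the Hessian eigenvalues ordered |lambda1| <= |lambda2|;
// beta and c control the sensitivity to eccentricity and overall structureness.
double frangiVesselness(double lambda1, double lambda2,
                        double beta = 0.5, double c = 15.0) {
    if (lambda2 > 0.0) return 0.0;                       // no response per Eq. 2.2
    const double Rb = lambda1 / lambda2;                 // eccentricity term
    const double T  = std::sqrt(lambda1 * lambda1 + lambda2 * lambda2);
    return std::exp(-(Rb * Rb) / (2.0 * beta * beta)) *
           (1.0 - std::exp(-(T * T) / (2.0 * c * c)));
}

// Final vesselness cost (Eq. 2.3): low along the medial axis, high at the
// vessel boundaries.  'edge' is the normalized edge response at this pixel.
double vesselnessCost(double vesselness, double edge) {
    const double inner = std::clamp(1.0 - vesselness + edge, 0.0, 1.0);  // keep in [0, 1]
    return std::sqrt(inner);                             // square root boosts contrast at low values
}
```

In the full pipeline this is evaluated once per scale, and the per-pixel results are later quantized to a smaller data type before the graph search (see Sec. 2.5.1).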
2.4.2 Curvilinear Structure Cost

The curvilinear structure cost term was inspired by the work of Koller et al. [28]. They search for bar-like structures in an image by estimating the direction of such a structure and then applying a filter in that direction. This filter has the desired property of having a maximal response along the medial axis at the correct scale, with a response that slowly drops off until the vessel boundary is reached. Intuitively, the cost is the minimum of two responses taken a distance of scale s from the center of the filter x in the direction d. They estimated the direction d by finding the direction that provides the maximum second derivative of the scaled image. We estimate the direction and edge response using the colour image gradients as described in Sec. 2.4.3, as opposed to the grayscale ones originally proposed. Once again, in order to increase contrast, the square root was computed for the cost. The curvilinear structure filter response is thus generated as:

Ckc = min(Pos(Rl), Pos(Rr))  (2.4)
Pos(x) = max(x, 0)  (2.5)
Cc = √(1 − Ckc)  (2.6)

where Rl and Rr are edge responses given in terms of the space-smoothed image fσ at scale σ as:

Rl = ∇fσ(q + σd) · d  (2.7)
Rr = −∇fσ(q − σd) · d.  (2.8)

This filter was also designed to detect structures in an image using the maximal response approach. An example maximal response cost generated using this filter is shown in Fig. 2.10, and the response at three different scales is given in Fig. 2.11. Notice how this time the filter behaves differently with scale: as the scale approaches the correct radius, the cost drops. The medial axis is also isolated by default.

Figure 2.10: (a) Original image. (b) Maximal response curvilinear filter output. (c) Magnification of a small region denoted by the rectangle from (b) to show detail. Note how this filter is much better at assigning a higher cost (white) to the background pixels than the vesselness filter shown in Fig. 2.9.

Figure 2.11: Demonstration of the Koller curvilinear structure filter on DRIVE image 40 at 3 different scales: (a) small, (b) medium, (c) large.

2.4.3 Colour Edge Evidence Cost

Live-Vessel has an image evidence term making use of edge information to further favour medial nodes located towards the centre of a vessel of radius r. For each node q in the image, we compute edge samples at radius r from point (x, y) in the direction given by the principal eigenvector e1 of the Hessian matrix. For robustness, we collect additional samples nearby along the direction parallel to e2, scaled by a free parameter ε, as shown in Fig. 2.12c. The colour edge evidence cost term is thus defined as:

Ce1(q) = Edge(q ± r · e1 ± {−ε, 0, ε} · e2)  (2.9)

where the operator 'Edge' is the result of applying colour gradient edge detection [12, 63]. This equation is visually demonstrated by Fig. 2.12c. In this approach, the colour gradient edge is defined as the square root of the second eigenvalue of D^T D, where D is the matrix of colour channel derivatives

D = [ Rx Ry ; Gx Gy ; Bx By ]  (2.10)

with R, G, and B the red, green, and blue colour channels respectively.

Figure 2.12: (a) Original image. (b) Maximal response to colour edge filter output Ce(q).
(c) Diagram demonstrating how the sampling is done for a point along the medial axis with ε = √2. The white arrow represents the medial axis direction, the central white disc denotes point x, and the small dark circles denote the boundary sampling points from which the gradient will be evaluated.

The term written in Eq. 2.9 has some undesirable properties. For instance, when the sample is off centered it is possible that one side will feel the edge while the other will not. This will produce a lower cost at an incorrect radius and location, and this is especially troublesome for low-resolution images such as those in DRIVE where the vessel radius is only a few pixels wide. In order to address this issue, the absolute difference between the left and right edges was computed and thresholded. If the difference is significant, then pixel q is not likely to lie along the vessel centerline. Secondly, in some cases it is desired to search for vessel radii that are 1 pixel or less, as in DRIVE. This means that the filter may falsely detect edge pixels as medial pixels when checking these small scales. In order to address this, image edges were subtracted from the final cost computation, thus eliminating any possible low costs assigned to edge pixels. For the case of single-pixel-wide vessels, there is often not a sufficient edge to detect and we rely on the other cost terms in the image. The edge term is weighted by the parameter we, which amplifies the edge penalty. The value we = 4 was found to be good for all images tested. The final cost term is given in Eq. 2.14.

Left := Edge(q + r · e1 ± {−ε, 0, ε} · e2)  (2.11)
Right := Edge(q − r · e1 ± {−ε, 0, ε} · e2)  (2.12)
if |Left − Right| / Ce1(q) > 3/4 then Ce1(q) := 0  (2.13)
Ce(q) = 1 − max(0, Ce1(q) − we · Edge(q))  (2.14)

Fig. 2.12b shows the maximal response output of this filter given in Eq. 2.14.

2.4.4 Smoothness

Spatial regularization terms are incorporated to encourage smoothness and to avoid unnaturally undulating paths. In Live-Vessel, in addition to the spatial smoothing term Cs, which regularizes the path, we include a radial regularization term Cr that penalizes abrupt changes in vessel thickness. We separate the spatial and radial smoothing terms for more precise control since they function rather differently. The spatial term introduces the concept of physical distance and is meant to prevent long jagged paths, while the radial term is meant to keep a smooth radius profile; it also guides the graph search in a spatial sense since the dimensions are naturally coupled.

The spatial cost term Cs is computed between neighbours p and q and is used as a multiplier for the edge weightings to weigh diagonal links more heavily than non-diagonal ones, thus penalizing increased physical distance between adjacent graph nodes. This term is calculated simply as the Euclidean distance and evaluates to √2 for diagonals and 1 for the 4-connected pixel neighbours.

The radial smoothness cost term, on the other hand, is designed to penalize noisy responses obtained from the vessel filters. In almost all cases, the radius does not vary abruptly or with sharp changes between pixels along the length of a branch. We therefore incorporate this characteristic into our graph search using the normalized function:

Cr(q, p) = |rq − rp| / (rmax − rmin)  (2.15)

where rmax and rmin provide the maximum range of radii being searched.
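Putting the pieces of Eq. 2.1 together, the following sketch computes the cost of a single graph edge from node p to a neighbouring node q. It is a simplified illustration under stated assumptions: the per-node terms Cv, Cc, and Ce are taken as precomputed values in [0, 1], and the structure and parameter names are hypothetical.

```cpp
#include <cmath>

struct Node { int x, y; double r; };                     // graph node in (x, y, r)

struct Weights { double w1, w2, w3, w4, w5; };           // weights of Eq. 2.1

// Edge cost between neighbouring nodes p and q (Eq. 2.1).
// cv, cc, ce are the precomputed vesselness, curvilinear, and edge-evidence
// costs at q, each normalized to [0, 1].
double edgeCost(const Node& p, const Node& q,
                double cv, double cc, double ce,
                double rMin, double rMax, const Weights& w) {
    // Spatial smoothness Cs: Euclidean step length, sqrt(2) for diagonals, 1 otherwise.
    const double dx = q.x - p.x, dy = q.y - p.y;
    const double cs = std::sqrt(dx * dx + dy * dy);

    // Radial smoothness Cr (Eq. 2.15): normalized change in radius.
    const double cr = std::fabs(q.r - p.r) / (rMax - rMin);

    // The node terms are scaled by the smoothness terms, which act as edge lengths.
    return (w.w1 * cv + w.w2 * cc + w.w3 * ce) * (w.w4 * cs + w.w5 * cr);
}
```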
It is worth noting that if one expects the radius to change extremely abruptly, e.g. in the case of pathology, while still requiring heavy smoothing throughout the image, then the user can either lower the value of w5 in Eq. 2.1 or increase the number of seed points, thus providing anchors to force this rapid variation.

2.5 Performance Considerations

The 3D graph search is very demanding in terms of memory and processor and required several important modifications and considerations in order to make real-time interactivity possible on a modern desktop computer. The previous version of Live-Vessel was only able to handle four radii on the low-resolution DRIVE images. It did not have a live interactive vessel wire and worked on the basis of approving or rejecting segments in a guess-and-check approach. The user would begin by clicking a seed point, waiting a few seconds for a path to compute and display, and then either approving or rejecting the segment. The code was developed using Matlab, with C++ for performance-critical regions via the Mex interface. Everything was developed from scratch and was able to achieve real-time interactivity on larger datasets with far more vessel radii.

The algorithm requires several large, heavily accessed and modified data structures. The main cost graph in Live-Vessel contains several planes: one plane for each radius. Each pixel is a node containing the cost values for the path-independent terms from Eq. 2.1. The edges of the graph are a regular set of neighbours for each pixel, so those do not require additional storage. The cost terms can be precomputed in a parallel and highly efficient manner in order to relieve CPU pressure during runtime, which would otherwise dominate the computational workload. They are stored in memory as a single 1D array for efficient non-aliased access, but this puts great pressure on the memory allocator and system allocator in order to efficiently create such a large chunk. The graph search must also store a significant number of search states, including the current cost estimates to visit each node, the shortest path to that node, and a scoreboard which tracks the costs and determines which nodes are to be visited next.

2.5.1 Memory Requirements

The first thing to consider is the memory required to compute the graph search. Consider an image from the REVIEW database. These images are 3584 × 2438 pixels and the radii in them range from 3 to 14 pixels. If these costs are computed to the nearest 1/2 pixel for radius, or the nearest full pixel for diameter, this means there would be 22 scales to store information for at each pixel. The natural way to compute and store these values is to use floating point numbers in the range [0, 1]. Using the 8-byte floating point type double (IEEE 754), which is the native type for Matlab and the LAPACK [2] numerical library interface it creates, these images would require 1.5 GB for the precomputed costs alone, compared to 192 MB with a 1-byte type. This issue was addressed by computing the cost terms for the image using type double for each radius and then scaling and casting to unsigned chars, which occupy only 1 byte instead of 8. It was found that there is sufficient contrast and resolution in [0, 255] to distinguish between good and bad paths, especially since the cost function was designed to enhance this contrast. The runtime performance of the graph search is limited by memory access speed since there are many reads and writes compared to any of the remaining minor computations.
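The double-to-byte conversion described above can be sketched as follows. This is an illustrative snippet, not the thesis code; the flat buffer layout and the function name are assumptions.

```cpp
#include <cstdint>
#include <vector>

// Quantize precomputed costs from double in [0, 1] to one byte per node.
// This cuts the memory for the cost planes by a factor of 8 and lets more
// nodes fit in each cache line during the graph search.
std::vector<std::uint8_t> quantizeCosts(const std::vector<double>& costs) {
    std::vector<std::uint8_t> packed(costs.size());
    for (std::size_t i = 0; i < costs.size(); ++i) {
        double c = costs[i];
        if (c < 0.0) c = 0.0;                            // clamp defensively to [0, 1]
        if (c > 1.0) c = 1.0;
        packed[i] = static_cast<std::uint8_t>(c * 255.0 + 0.5);   // round into [0, 255]
    }
    return packed;
}
```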
By reducing the precomputed cost terms to a smaller data type, the L1 and L2 caches were able to be better utilized. These caches are typically the performance bottleneck in a modern computer: an L2 cache hit requires only tens of cycles, while a miss requires hundreds if not thousands to fetch the data from RAM. Due to the random-access nature of this graph search, the cache prediction units are not as effective. By using a smaller data type, we can store 8 times more data in a single cache line, transfer 8 times more data during a line fetch or flush to RAM, and are less likely to require eviction of recently used terms.

2.5.2 Search Space

The graph search algorithm is based on Dijkstra's famous shortest path algorithm [17]. It finds a globally optimal path between any two points by propagating a search wave that expands outward until it finds the destination. It provides a fully general solution, but for this application we can constrain the search space by including biological priors. For instance, vessels generally grow and branch in a specific way and will not follow a completely random path, as could be true in the general case for all graphs. The graph search could be limited to search in only a relatively forward direction with only small changes in radius allowed along the path. Also, for the smallest vessels, the graph search will remain pinned to the smallest radii since vessels of this size will never grow large. The goal is to find a fundamental set of constraints that do not detract from the global optimality of the graph search. This is especially important to consider when dealing with pathology since these vessels often exhibit highly irregular behaviour.

In this implementation, the path of the blood vessels is restricted to lie within a cone. The central axis of the cone is aligned with the seed points and has an aperture angle of 120°. The search path is also prevented from exiting a bounding box defined by the seed points with a 60-pixel pad on all sides. Since the graph search is recursive, this constraint is also applied to every point along the path as if it were an initial seed. This concept is illustrated in Fig. 2.13.

Figure 2.13: The graph search space shown in red between points P and Q is bounded by the intersection between the gray box and the two lines forming a 120° cone. The cone's central axis is aligned with the segment PQ. This cone exists for both PQ and QP. The intermediate point P also has this constraint since the graph search is recursive.

In addition to reducing the search space for spatial coordinates, the radial one must also be reduced. Since blood vessels have smooth radii, the number of radial neighbours can be effectively reduced. This means a pixel can only search one of N allowable radius neighbours. This prevents searching and recording values for biologically impossible paths. For all data tested, N = 1 was found to be sufficient. The final performance enhancements can be made to the cost function itself. By increasing the contrast between the correct path and everything else, we can spend less time exploring invalid regions and restrict ourselves to searching a much smaller space.
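As an illustration of the spatial constraints just described, the sketch below tests whether a candidate pixel lies inside the padded bounding box and inside a 120° cone whose axis points from the current point towards the target seed. The exact form of the test and the function names are assumptions made for illustration; the thesis implementation may differ in detail.

```cpp
#include <algorithm>
#include <cmath>

struct Pt { double x, y; };

// Returns true if 'cand' satisfies the search-space constraints:
// (1) it lies inside the seed-point bounding box padded by 'pad' pixels, and
// (2) it lies inside a cone of full aperture 'apertureDeg' whose axis runs
//     from 'from' towards the target seed 'to'.
bool insideSearchSpace(const Pt& cand, const Pt& from, const Pt& to,
                       double pad = 60.0, double apertureDeg = 120.0) {
    // Padded bounding box defined by the two seed points.
    const double xMin = std::min(from.x, to.x) - pad, xMax = std::max(from.x, to.x) + pad;
    const double yMin = std::min(from.y, to.y) - pad, yMax = std::max(from.y, to.y) + pad;
    if (cand.x < xMin || cand.x > xMax || cand.y < yMin || cand.y > yMax) return false;

    // Cone test: angle between (from -> cand) and the cone axis (from -> to).
    const double ax = to.x - from.x,   ay = to.y - from.y;
    const double cx = cand.x - from.x, cy = cand.y - from.y;
    const double aLen = std::hypot(ax, ay), cLen = std::hypot(cx, cy);
    if (aLen == 0.0 || cLen == 0.0) return true;         // degenerate case: allow
    const double kPi = 3.14159265358979323846;
    const double cosAngle = (ax * cx + ay * cy) / (aLen * cLen);
    return cosAngle >= std::cos(apertureDeg * 0.5 * kPi / 180.0);
}
```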
2.5.3 The Scoreboard

The graph search algorithm must maintain a significant amount of state, including the distances to all pixels at the wave front, the previous pixel in the shortest path to every single pixel, and a structure which determines the next pixels in the wave expansion. The data structure keeping the current cost estimates stores 32-bit unsigned integers; this was the smallest native integral type that would not overflow on long paths.

The current cost estimates can be kept in several different data structures. Initially, a large 1-D array was used where each node has a 32-bit integer. These values would be initialized to the largest value the data type can represent. For a large possible search area this creates a very large array, and the initialization alone can consume a significant amount of time. The data structure presently used is a hashmap with a number of bins equal to the physical (x, y) size of the search space. While this increases the lookup time and can burden the memory allocator, it greatly reduces the amount of memory required and the processor time, since initializing each and every node is no longer required. The C++0x draft version [26] of the data structure called unordered_map was used for this purpose.

The algorithm is required to track the shortest path to each pixel. This can be done efficiently by storing an index for the previous node in the path. Since this also requires a 32-bit integer type (a pixel coordinate rather than a pixel memory address), the same memory issues faced with the current distance estimates must be considered. Since the lookup and allocation times are slower for this type than for a native array, the two traditional data structures were merged into a single one using a value type of std::pair<int, int>. In C++ terms, this is a plain old data (POD) structure and should be 64-bit aligned by an optimizing compiler. This merging cuts the number of lookups and allocations in half.

The wave front for the algorithm must be efficiently expanded. This means that the wave should head in as 'forward' a direction as possible and avoid exploring backwards, sideways, and higher-cost paths where possible. Another consideration is the termination condition: ideally, when the wave front first reaches the end point, the path found is already the globally shortest one, so the search does not need to continue exploring all other possibilities to be certain. In order to do this, a priority queue was used where the priority is the cumulative distance from the starting seed. The standard C++ STL priority_queue implementation was used for ordering. It creates a heap on an array structure in order to keep it efficiently sorted. Testing showed this works significantly faster than my own hand-optimized, link-based data structure, which in turn was faster than the C++ standard ones for this application.
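The scoreboard and wave-front expansion together amount to the search loop sketched below. This is a simplified, illustrative version written against a single linear node index; the neighbour generator and edge-cost callback are assumed to exist, and the real implementation additionally applies the search-space constraints of Sec. 2.5.2.

```cpp
#include <cstdint>
#include <functional>
#include <queue>
#include <unordered_map>
#include <utility>
#include <vector>

// Scoreboard entry: best known cost to reach a node and its predecessor.
using Score = std::pair<std::uint32_t, int>;             // {distance, previous node}

// Globally optimal path from 'start' to 'goal' over linearly indexed nodes.
// 'neighbours(n)' lists reachable nodes and 'cost(a, b)' gives the edge weight.
std::vector<int> shortestPath(int start, int goal,
    const std::function<std::vector<int>(int)>& neighbours,
    const std::function<std::uint32_t(int, int)>& cost) {
    std::unordered_map<int, Score> board;                // sparse: only touched nodes
    using QItem = std::pair<std::uint32_t, int>;         // {distance, node}
    std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> frontier;

    board[start] = {0u, start};
    frontier.push({0u, start});
    while (!frontier.empty()) {
        const auto [dist, node] = frontier.top();
        frontier.pop();
        if (node == goal) break;                         // first pop of the goal is already optimal
        if (dist > board[node].first) continue;          // stale queue entry
        for (int next : neighbours(node)) {
            const std::uint32_t cand = dist + cost(node, next);
            const auto it = board.find(next);
            if (it == board.end() || cand < it->second.first) {
                board[next] = {cand, node};
                frontier.push({cand, next});
            }
        }
    }
    std::vector<int> path;                               // returned from goal back to start
    if (board.find(goal) == board.end()) return path;
    for (int n = goal; n != start; n = board[n].second) path.push_back(n);
    path.push_back(start);
    return path;
}
```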
2.6 Post-Search Smoothing

The graph search results in a medial path and radius components that are only precise to the pixel level. There are a few well-studied and widely used data sets for retinal imaging, including STARE [23] and DRIVE [51]. In these data sets, the vessel radii are on the order of pixels, so artifacts resulting from the discretization become easily apparent. Since we are also interested in determining a smooth path required to compute the vessel boundaries, this problem was addressed by smoothing the optimal path with a Gaussian kernel. This removes the high-frequency zig-zags and averages out the path over several pixels. The optimal width and smoothing amount depend on the size of the image and the number of radii tested. The seed points themselves were not included in this smoothing process in order to keep the user input completely intact. The advantages of this process are shown in Fig. 2.14.

Figure 2.14: This figure demonstrates the purpose and function of the post-processing smoothing. The left image (a, without) shows how the pixel resolution of the output and the noisy cost function cause undesired error, whereas the right image (b, with) smooths out the imprecision.

The DRIVE database we tested against has many vessels with a radius of less than 1 pixel. Such small vessels are very susceptible to noise and also pose problems for our gradient-based filters described in the previous subsections. In order to rectify this problem, the algorithm was modified so that if the endpoints have radii of 1 pixel or smaller, then the graph search stays bounded within that range along the entire path. This solution incorporates the biological prior that very small vessels do not vary in size along their paths [33].

2.7 Tree Analysis

The Live-Vessel output contains a wealth of raw data ready to be analyzed and exploited. A vessel connectivity graph can be extracted where each branching point, root, and termination point is represented by a node. The layout of the graph is generational: each branching node is placed in the same row and level as the rest of its generation. Each node or edge can be labelled with one of many different computed values such as radius, taper, or tortuosity. This makes it easy to see patterns and deviations in the data. The general layout of the graph can be seen in Fig. 2.15.

Figure 2.15: Tree representation of a small arterial section of the retina of DRIVE image 40. Each node represents a branching point and its label gives the vessel radius. The edge labels represent the segment lengths.

There are five different edge labels that are currently generated by Live-Vessel. These properties naturally describe the relation between parent and child nodes. They include spatial distance, vessel length, tortuosity, vessel radius ratio, and vessel taper. The spatial distance represents the physical distance in pixels between nodes. The vessel length label gives the distance generated by the path integral along the medial axis. The tortuosity metric is simply the ratio between the vessel length and the spatial distance. In Sec. 1.4.1 tortuosity was defined as the property of bending and twisting along a vessel path. We define tortuosity to have a minimum possible value of 1 for a perfectly straight vessel, as T in Eq. 2.16, with Lv being the vessel length and Ls being the spatial Euclidean distance between the endpoints of the vessel. The vessel radius ratio is the ratio between the vessel radii at the two end points of the segment. The vessel taper is a logarithmically scaled value relating the vessel radius ratio to length and is the constant k in Eq. 1.4.

T = Lv / Ls  (2.16)
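A minimal sketch of how the per-segment edge labels can be derived from a stored medial path is given below. The structure names are illustrative assumptions, and the taper computation is only a schematic exponential-taper form; the exact definition follows Eq. 1.4 earlier in the thesis.

```cpp
#include <cmath>
#include <vector>

struct MedialPoint { double x, y, r; };                  // one node of the stored medial path

struct SegmentLabels {
    double spatialDistance;   // straight-line distance between the segment endpoints
    double vesselLength;      // length integrated along the medial axis
    double tortuosity;        // vessel length / spatial distance (Eq. 2.16), >= 1
    double radiusRatio;       // radius at the end over radius at the start
    double taper;             // schematic taper constant (see Eq. 1.4 for the thesis form)
};

SegmentLabels labelSegment(const std::vector<MedialPoint>& path) {
    SegmentLabels out{};
    if (path.size() < 2) return out;
    double length = 0.0;
    for (std::size_t i = 1; i < path.size(); ++i)
        length += std::hypot(path[i].x - path[i - 1].x, path[i].y - path[i - 1].y);
    const MedialPoint& a = path.front();
    const MedialPoint& b = path.back();
    out.spatialDistance = std::hypot(b.x - a.x, b.y - a.y);
    out.vesselLength    = length;
    out.tortuosity      = (out.spatialDistance > 0.0) ? length / out.spatialDistance : 1.0;
    out.radiusRatio     = b.r / a.r;
    // Placeholder: assumes r(s) = r0 * exp(-k s); substitute the Eq. 1.4 definition as needed.
    out.taper           = std::log(a.r / b.r) / length;
    return out;
}
```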
In addition to the five different edge labels, there are currently three different node labels: coordinate, radius, and polynomial exponent. The coordinate label simply gives the (x, y) coordinate of the branch point. The radius is the radius at the node location. The polynomial exponent is the constant exponent k in the polynomial of Eq. 1.2, but it includes all children radii in the case of trifurcation or even higher-order branching.

Chapter 3

Experimentation

The performance of Live-Vessel was evaluated on real medical images from the three major publicly available retinal image databases: DRIVE [51], STARE [23], and REVIEW [1]. All three databases vary widely in acquisition process and features and come with different standards for identification of pixel vessel-membership.

3.1 Data Sets

Live-Vessel was evaluated on three leading databases: DRIVE, STARE, and REVIEW. Each database is unique in its acquisition, segmentation, resolution, and patient characteristics.

3.1.1 DRIVE

The images for DRIVE were obtained from a Dutch study on diabetic retinopathy [51], and the database includes 40 images. The first 20 images were segmented by an expert ophthalmologist and form the training set. The second batch of 20 images forms the testing database and was segmented by two 'experts'. Only 7 of the 40 patients whose images were randomly selected to be part of the datasets show signs of diabetic retinopathy. All of the images were captured as JPEG and converted to TIFF, so it is possible there is some high-frequency loss, which critically works against the identification of small vessels and precise boundaries. A more detailed description of the images is available from the website (http://www.isi.uu.nl/Research/Databases/DRIVE/) at the time of this print. Here is an excerpt:

The images were acquired using a Canon CR5 non-mydriatic 3CCD camera with a 45 degree field of view (FOV). Each image was captured using 8 bits per color plane at 768 by 584 pixels. The FOV of each image is circular with a diameter of approximately 540 pixels. For this database, the images have been cropped around the FOV. For each image, a mask image is provided that delineates the FOV.

The segmentation tool used to generate the expert segmentations was a simple paint tool, and the expert users were asked to mark all pixels they were at least 70% confident belonged to a blood vessel. They were not asked to capture the overall structure of the blood vessel nor to guess and connect regions which are obfuscated by myelin, for example. There were many discrepancies between the two users and many minor artifacts due to fatigue.
In some instances, there were major differences including vessels which were present in one but omitted in the other. One of the major difficulties in establishing a gold standard for vessel segmentation is consistency. We found that when the two manual segmentations were compared quantitatively, they yielded on average accuracy value computed using Eq. 3.1 of 0.94. This is rather low considering the vast majority of the image belongs to background pixels. Other authors have also expressed concerns in using these segmentations for validation[42] due to the inconsistency in including or excluding the vessel wall as part of the segmentation.  3.1.2  STARE  The STARE database was created in 1975 by Michael Goldbaum, M.D. at University of California, San Diego. Since then, over 30 specialists and engineers have contributed and for the past 24 years it has been continuously funded by the National Institute of Health. Unlike DRIVE, STARE has images covering a wide variety of diseases and contains a list of 13 potential conditions for diagnosis. There are presently 402 images in the database and each come with a diagnosis. The 13 conditions are: • Hollenhorst Emboli • Branch Retinal Artery Occlusion • Cilio-Retinal Artery Occlusion • Branch-Retinal Vein Occlusion • Central Retinal Vein Occlusion • Hemi-Central Retinal Vein Occlusion • Background Diabetic Retinopathy 60  • Proliferative Diabetic Retinopathy • Arteriosclerotic Retinopathy • Hypertensive Retinopathy • Coat’s • Macroaneurism • Choroidal Neovascularization The Hoover dataset[23] contains 20 randomly selected images with 2 expert segmentations of similar quality to those in DRIVE. The images were acquired using a traditional film camera with a 35o field of view and was scanned to a resolution of 605x700 pixels at 24bpp. Ten of the images are from patients with disease and the other ten are from healthy patients. These images present a wide variation in lighting, quality, and pathology. Here are a few sample images demonstrating this:  61  (a) a  (b) b  (c) c  (d) d  Figure 3.1: These are sample images taken from the Hoover collection from the STARE database. These demonstrate the difficulties faced in segmenting retinal blood vessels. There are two sets of segmentations performed by two different expert users. Martinez-Perez and others [42] have noted that the first user has labelled more vessels than the second. Many of the segmentations do not have a continuous vessel network and include many gaps. This is the case where the vessels do exist but are obfuscated by tissue and blood as shown in Fig. 3.2 or where the field of view of the camera is not sufficient to display the entire retina. Since Live-Vessel is aiming to provide vascular network analysis for disease identification rather than simple demarcation of pixel vessels, these segmentations pose difficulties for comparison.  62  (a) Original  (b) Expert  Figure 3.2: This image was chosen from the Hoover collection to demonstrate the quality of the segmentations. Note the gaps in the vascular network and the decision not to segment the smallest vessels.  63  3.1.3  REVIEW  The REVIEW database consists of four distinct datasets all of which are high resolution and contain partial segmentations with precise diameter measurements. There were three separate observers who marked the boundaries to the nearest 1/4 pixel. The high resolution image set (HRIS) contains four images with differing severe grades of diabetic retinopathy. 
There are four images where 90 segments in total were marked. The vascular disease image set (VDIS) consists of eight images of which 6 are of patients with diabetic retinopathy. This image set is said to be very noisy, contains pathologies, and manifests a greater variance. The third dataset is the central light reflex image set (CLRIS). There are only two images with 21 marked segments. They represent early atherosclerotic changes giving an exaggerated light reflex. The last image set is the kick point image set (KPIS). In this set, there are only portions of two images with three segments and gold standard widths. Sample images from each dataset are shown in Fig. 3.3.  (a) HRIS  (b) VDIS  (c) CLRIS  (d) KPIS  Figure 3.3: These are sample images taken from the REVIEW database. Each image is taken from a different set in this collection. These demonstrate the difficulties faced in segmenting retinal blood vessels.  64  3.2  Performance Criteria  Live-Vessel’s binary segmentations were evaluated using three different accuracy measures and an additional time efficiency measure. The first measure is the traditionally-used accuracy metric calculated using the percentage of true positives and true negatives. This enabled a more fair comparison between Live-Vessel and published articles which mostly report this metric and have not made source or binaries available. The second metric used to measure performance is the Dice similarity coefficient (DSC)[16] which provides a greater contrast than the accuracy metric. The third accuracy metric measured is a variation of the Hausdorff distance metric and is used to determine error in medial axis localization. Finally, efficiency was calculated as the speedup in time by a user to complete the segmentation task using Live-Vessel versus manual tracing. To obtain manual tracing task times, images were segmented using a simple paint tool and were previously reported for the DRIVE database in [51].  3.2.1  Binary Segmentations  Traditionally, the accuracy measure that has been widely used and accepted in retinal fundus image segmentation tasks is defined as: Acc = (TP + TN )/(P + N )  (3.1)  where TP and TN are the number of true positive and negative pixels and P and N are the total number of positive and negative pixels in the reference segmentation. This metric may be the most fair nor reliable metric for the data in DRIVE, STARE, and REVIEW because of the rather large number of background pixels N compared to the actual vessel pixels P which means that even with a TP of 0, one we would observe an artificially high accuracy rate. The DSC aims to penalize false positives and true negatives equally in the image and provides a better contrast. DSC is defined as CDSC = 2|A  B|/(|A| + |B|)  (3.2)  where Tp = |A B| with A and B being the sets of vessel pixels in each of the two images being compared. The main problem with these two metrics, as noted by Martinez-Perez et al. [43], is that the segmentors may decide to include or exclude vessel walls and there is no way to determine the which specific pixels are either included or missing. Another main issue is that the segmentations may include or 65  exclude small vessel segments for various unknown reasons. In many other tissues, the segmentation usually includes a single enclosed region but with blood vessels, they branch out into many different directions and eventually disappear as they branch and shrink beneath the camera resolution. 
3.2.2 Medial Axis Error

In order to quantify the error in our method's medial axis localization with the binary segmentations available in DRIVE and STARE, the available manual segmentations were skeletonized to obtain an approximation to the ground-truth medial pixels. This is only an approximation since the skeleton depends on the boundary pixels, which are often unsmooth and irregular. The REVIEW database, however, contains precise medial axis placement data. None of the datasets provide physical dimensions, so all error is reported in terms of pixels.

The Hausdorff metric, for every point, finds the closest point in the corresponding set and returns the maximum of those distances. The mean metric [78], as given in Eq. 3.3, instead finds the average distance error and is designed to give an overall impression of how closely the skeletons match. In each image there will be branches not included in the other image, and this contributes greatly to the error measure. Each point set X and Y has a different number of points, so comparing X to Y and Y to X yields different results.

d_mh(X, Y) = min( (1/|Y|) Σ_{y∈Y} inf_{x∈X} ||x − y|| , (1/|X|) Σ_{x∈X} inf_{y∈Y} ||x − y|| )    (3.3)

One main difficulty in performing comparisons to the available gold standard was that the expert manual segmentations do not have sub-pixel precision. The output of Live-Vessel, however, will generally have contours that pass through non-integer pixel locations, as vessels do not necessarily cover complete pixels. Thus, in order to perform a comparison with the available manual segmentations, the Live-Vessel segmentation results were thresholded so that a pixel was labelled as a vessel pixel only if more than half of it belonged to a vessel. This in fact degraded our Live-Vessel results but was done to enable comparisons to published work. There are instances where, due to resolution, some authors have chosen to only count false positives as points lying more than 1 pixel away from the nearest correct positive [23]. This has a rather large impact on the reported value, since the vessels are merely a few pixels in diameter and minor variations along the boundary due to the lack of sub-pixel precision are typical along all vessels.
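As a concrete reading of Eq. 3.3, the sketch below computes the mean modified Hausdorff error between two skeleton point sets with a brute-force nearest-neighbour search. It is illustrative only and assumes non-empty point sets; the Point struct and function names are not taken from the thesis code.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <limits>
#include <vector>

struct Point { double x, y; };

// Average of the distances from every point in 'from' to its nearest point in 'to'.
double meanNearestDistance(const std::vector<Point>& from,
                           const std::vector<Point>& to) {
    double sum = 0.0;
    for (const Point& p : from) {
        double best = std::numeric_limits<double>::max();
        for (const Point& q : to)
            best = std::min(best, std::hypot(p.x - q.x, p.y - q.y));
        sum += best;
    }
    return sum / from.size();
}

// Eq. 3.3: the smaller of the two directed mean distances between skeletons X and Y.
double meanModifiedHausdorff(const std::vector<Point>& X,
                             const std::vector<Point>& Y) {
    return std::min(meanNearestDistance(Y, X), meanNearestDistance(X, Y));
}

int main() {
    std::vector<Point> skeletonA = {{0, 0}, {1, 0}, {2, 0}};
    std::vector<Point> skeletonB = {{0, 1}, {1, 1}, {2, 1}, {10, 10}};
    std::cout << "d_mh = " << meanModifiedHausdorff(skeletonA, skeletonB) << "\n";
    return 0;
}
```

The quadratic nearest-neighbour search is adequate for skeletons of a few thousand points; a distance transform of one skeleton would be the natural substitute for whole-database runs.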
3.3 Results

In this section, the results on the three complete retinal image databases are reviewed. Each database was acquired differently, with its own features, and must therefore be analyzed separately. Complete segmentations for each image and full error tables are available in the Appendix.

3.3.1 DRIVE

We ran Live-Vessel on all 20 images of the DRIVE test database and all 20 images in the training database. The weightings for the cost function Eq. 2.1 were determined experimentally to be (w1 = 4/20, w2 = 4/20, w3 = 8/20, w4 = 1, w5 = 256). We used scales on [0.5, 4] with increments of 0.5 pixels. Example qualitative results are shown in Figs. 3.4, 3.5, and 3.6, which show the detected medial axis, the resulting Live-Vessel segmentations, and the locations of segmentation errors. For the most part, the (small) errors were concentrated in tiny vessels with radii between 0.5-1.5 pixels, as demonstrated in Fig. 3.7, and in regions where we did not identify the same vessels for segmentation as the experts.

Figure 3.4: Sample results for DRIVE image 02. (a) Retinal fundus image. (b) Optimal medial axis (unsmoothed) computed by Live-Vessel (black) overlaid on the original image. (c) Live-Vessel segmentation without image overlay. (d) Segmentation errors with coloured pixels reflecting correctness or error. Red: Correct. Green: Over segmentation. Blue: Under segmentation.

Figure 3.5: Sample results for DRIVE image 14. (a) Retinal fundus image. (b) Optimal medial axis (unsmoothed) computed by Live-Vessel (black) overlaid on the original image. (c) Live-Vessel segmentation without image overlay. (d) Segmentation errors with coloured pixels reflecting correctness or error. Red: Correct. Green: Over segmentation. Blue: Under segmentation.

Figure 3.6: Sample results for DRIVE image 19. (a) Retinal fundus image. (b) Optimal medial axis (unsmoothed) computed by Live-Vessel (black) overlaid on the original image. (c) Live-Vessel segmentation without image overlay. (d) Segmentation errors with coloured pixels reflecting correctness or error. Red: Correct. Green: Over segmentation. Blue: Under segmentation.

Figure 3.7: Example zoomed region in image 19 (Fig. 3.6d) demonstrating the errors when segmenting small vessels, which are typical in the DRIVE database. Red: Correctly identified pixels. Green: Over segmented pixels. Blue: Under segmented pixels.

Table 3.1: Comparisons of state-of-the-art methods with our proposed Live-Vessel approach using the accuracy measure Acc = (TP + TN)/(P + N) on the DRIVE database. These are the values reported by the authors, with the precision they reported.

Technique                    Accuracy
Expert Manual                0.9473
Ricci and Perfetti [61]      0.9595
Soares et al. [67]           0.9466
Staal et al. [69]            0.9441
Chang et al. [11]            0.9419
Live-Vessel                  0.9447

The data in Table 3.1 compares Live-Vessel with current state-of-the-art techniques on the same DRIVE database images. This table also includes the comparison between the two sets of expert manual segmentations available in DRIVE. Many of the other reported techniques share similar limitations. For example, the optic disc and small vessels pose great difficulty for automated techniques, and in many cases pathologies themselves can cause problems by breaking the assumptions of the model used. Many of these problems can be seen by examining the result figures in the papers [14, 25, 31, 42, 61, 67]. In all of these cases, trained humans can easily correct these difficulties, and our Live-Vessel technique is able to successfully leverage this, as we see in the results.

Figure 3.8: Boxplot illustrating the accuracy measured using the three metrics on the DRIVE dataset for Live-Vessel. (ACC) The traditional accuracy measure; (DSC) Dice Similarity Coefficient; (HAUS) Modified Hausdorff error (pixels).

Fig. 3.8 details the three different accuracy metrics we measured across the 20 images of the DRIVE testing set. The values varied from image to image, sometimes significantly. In fact, we found large variability even between the two manual segmentation sets in the DRIVE database (CDSC = 0.7879, Accuracy = 0.9473). Other researchers have reported similar difficulties with validation against these manual segmentations [42].
Our modified Hausdorff metric values showed excellent results. Though neither the manual segmentation skeletons nor the graph search were sub-pixel, our Live-Vessel results maintained an average distance error of roughly 0.3 pixels throughout the entire segmentation. This demonstrates that our graph search provides a reliable medial axis output. Fig. 3.9 shows the distance error in histogram form. Most of the distances measured were within 1 pixel. In many cases the diameter is an even number of pixels, and given that most of the vessels are merely a few pixels wide, this becomes a significant contributor to the error. Recall that Live-Vessel is smoothed to sub-pixel precision while the binary segmentations are merely skeletonized to approximate the medial axis.

Figure 3.9: Histogram of distance errors for each medial axis pixel across all DRIVE images. The values listed are the midpoints of the histogram bins.

Live-Vessel is also capable of maintaining important topological information pertaining to the vascular network, including parent-child relationships and branching locations. The tree in Fig. 3.10 demonstrates the potential for data mining and further analysis using our technique. It shows the connectivity graph for both the arteries and veins of a single retinal image.

Figure 3.10: This tree demonstrates a partial vascular network for DRIVE image 40, with one graph each for the arterial and venous networks. The nodes are each numbered with the pixel coordinates of the branch points.

Live-Vessel shares a similar segmentation efficiency gain as Live-Wire, as both techniques are based on a similar interactive segmentation paradigm. Overall, we observed a speedup of 7.28 times across 10 images that we manually segmented using a paint tool (Fig. 3.11). During testing, we found that Live-Vessel allowed extremely rapid segmentation of large and medium size vessels and that the human decision-making process becomes a bottleneck for segmentation of tiny vessels. We found that the segmentation time increased greatly with the number of small vessels required to be segmented. They take a significant portion of time to identify and to guide the wire along, whereas the major vessels are easily and nearly instantly identified and segmented.

Figure 3.11: Segmentation times for 10 images (1, 3, 4, 5, 6, 7, 9, 10, 19, and 20) from the DRIVE database. The manual segmentation times were measured by Miranda Poon as part of early Live-Vessel research.

3.3.2 STARE

We ran Live-Vessel on all 20 images in the Hoover dataset of the STARE database. The weightings for the cost function Eq. 2.1 were determined experimentally to be (w1 = 4/20, w2 = 4/20, w3 = 8/20, w4 = 1, w5 = 256); these were the same as for DRIVE. We used scales on [0.5, 7] with increments of 0.5 pixels.
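The scale set above feeds the vesselness term of the cost function (see Appendix A). As a rough, self-contained illustration of that multi-scale idea only, the sketch below evaluates a single-channel Frangi-style vesselness with a finite-difference Hessian over a discrete scale range and keeps the strongest response; the Live-Vessel cost itself uses the quaternion colour Hessian and additional terms, and the β and c values here are arbitrary.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <utility>
#include <vector>

// Single-channel image with border-clamped access (illustrative helper).
struct Image {
    int w, h;
    std::vector<double> v;
    double at(int x, int y) const {
        x = std::max(0, std::min(w - 1, x));
        y = std::max(0, std::min(h - 1, y));
        return v[static_cast<std::size_t>(y) * w + x];
    }
};

// Separable Gaussian blur with standard deviation sigma (kernel truncated at 3 sigma).
Image gaussianBlur(const Image& in, double sigma) {
    int r = static_cast<int>(std::ceil(3.0 * sigma));
    std::vector<double> k(2 * r + 1);
    double norm = 0.0;
    for (int i = -r; i <= r; ++i)
        norm += (k[i + r] = std::exp(-0.5 * i * i / (sigma * sigma)));
    for (double& w : k) w /= norm;
    Image tmp = in, out = in;
    for (int y = 0; y < in.h; ++y)
        for (int x = 0; x < in.w; ++x) {
            double acc = 0.0;
            for (int i = -r; i <= r; ++i) acc += k[i + r] * in.at(x + i, y);
            tmp.v[static_cast<std::size_t>(y) * in.w + x] = acc;
        }
    for (int y = 0; y < in.h; ++y)
        for (int x = 0; x < in.w; ++x) {
            double acc = 0.0;
            for (int i = -r; i <= r; ++i) acc += k[i + r] * tmp.at(x, y + i);
            out.v[static_cast<std::size_t>(y) * in.w + x] = acc;
        }
    return out;
}

// Frangi-style 2-D vesselness at pixel (x, y) for one scale; beta and c are the
// free parameters (values here are arbitrary for illustration).
double vesselness(const Image& smoothed, int x, int y, double sigma,
                  double beta = 0.5, double c = 0.05) {
    double s2 = sigma * sigma;  // scale normalization of the second derivatives
    double hxx = s2 * (smoothed.at(x + 1, y) - 2 * smoothed.at(x, y) + smoothed.at(x - 1, y));
    double hyy = s2 * (smoothed.at(x, y + 1) - 2 * smoothed.at(x, y) + smoothed.at(x, y - 1));
    double hxy = s2 * (smoothed.at(x + 1, y + 1) - smoothed.at(x + 1, y - 1)
                       - smoothed.at(x - 1, y + 1) + smoothed.at(x - 1, y - 1)) / 4.0;
    // Eigenvalues of the symmetric 2x2 Hessian, ordered so that |l1| <= |l2|.
    double m = 0.5 * (hxx + hyy);
    double d = std::sqrt(0.25 * (hxx - hyy) * (hxx - hyy) + hxy * hxy);
    double l1 = m - d, l2 = m + d;
    if (std::abs(l1) > std::abs(l2)) std::swap(l1, l2);
    if (l2 >= 0.0) return 0.0;  // no response when lambda2 is non-negative
    double Rb = std::abs(l1) / std::abs(l2);
    double S = std::sqrt(l1 * l1 + l2 * l2);
    return std::exp(-Rb * Rb / (2.0 * beta * beta)) * (1.0 - std::exp(-S * S / (2.0 * c * c)));
}

int main() {
    // Tiny synthetic image containing a bright vertical line of width 1.
    Image img{9, 9, std::vector<double>(81, 0.0)};
    for (int y = 0; y < 9; ++y) img.v[y * 9 + 4] = 1.0;
    // Evaluate the response on the line centre over a discrete scale range and
    // keep the maximum, mirroring the per-database scale sets used above.
    double best = 0.0, bestScale = 0.0;
    for (double sigma = 0.5; sigma <= 4.0; sigma += 0.5) {
        Image sm = gaussianBlur(img, sigma);
        double v = vesselness(sm, 4, 4, sigma);
        if (v > best) { best = v; bestScale = sigma; }
    }
    std::cout << "max vesselness " << best << " at scale " << bestScale << "\n";
    return 0;
}
```

The σ² factor is the usual scale normalization that allows responses at different scales to be compared directly, so the scale of the strongest response also serves as a radius estimate.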
Example qualitative results are shown in Figs. 3.12, 3.13, and 3.14, which show the detected medial axis, the resulting Live-Vessel segmentations, and the locations of segmentation errors. For the most part, the (small) errors were concentrated in tiny vessels with radii between 0.5-1.5 pixels, as demonstrated in Fig. 3.15, and in regions where we did not identify the same vessels for segmentation as the experts, just as with DRIVE. The other main errors were caused by blurry film and the inclusion of the vessel wall in the binary segmentations. Image blurring from scanned film will confuse the scale space computations, since the image behaves as if it is being convolved twice: once by the blurring due to the lens and once by Live-Vessel during the computation. As a result, the radius is often underestimated.

Figure 3.12: Sample results for Hoover image 7. (a) Retinal fundus image. (b) Optimal medial axis (unsmoothed) computed by Live-Vessel (black) overlaid on the original image. (c) Live-Vessel segmentation without image overlay. (d) Segmentation errors with coloured pixels reflecting correctness or error. Red: Correct. Green: Over segmentation. Blue: Under segmentation.

Figure 3.13: Sample results for Hoover image 13. (a) Retinal fundus image. (b) Optimal medial axis (unsmoothed) computed by Live-Vessel (black) overlaid on the original image. (c) Live-Vessel segmentation without image overlay. (d) Segmentation errors with coloured pixels reflecting correctness or error. Red: Correct. Green: Over segmentation. Blue: Under segmentation.

Figure 3.14: Sample results for Hoover image 17. (a) Retinal fundus image. (b) Optimal medial axis (unsmoothed) computed by Live-Vessel (black) overlaid on the original image. (c) Live-Vessel segmentation without image overlay. (d) Segmentation errors with coloured pixels reflecting correctness or error. Red: Correct. Green: Over segmentation. Blue: Under segmentation.

Figure 3.15: Example zoomed region in image 17 (Fig. 3.14d) demonstrating the errors in STARE images due to the inclusion of the vessel wall in the segmentation and blurred film causing weaker edges and scale space confusion. Red: Correctly identified pixels; Green: Over segmented pixels; Blue: Under segmented pixels.

Table 3.2: Comparisons of state-of-the-art methods with our proposed Live-Vessel approach using the accuracy measure Acc = (TP + TN)/(P + N) on the STARE database. The Martinez-Perez results were reported to only 2 significant digits. These are the values reported by the authors, with the precision they reported.

Technique                     Accuracy
Expert Manual                 0.9373
Soares et al. [67]            0.9480
Staal et al. [69]             0.9516
Chang et al. [11]             0.9431
Martinez-Perez et al. [56]    0.96
Live-Vessel                   0.9622

The data in Table 3.2 compares Live-Vessel with current state-of-the-art techniques on the same STARE database images. This table also includes the comparison between the two sets of expert manual segmentations available in the Hoover subset of STARE. The types of difficulties encountered by the fully automated techniques were similar to those in DRIVE and were corrected by Live-Vessel. These are mostly issues dealing with connectivity, the optic disc, and regions of pathology, all of which trouble pixel-intensity-based optimization.
Figure 3.16: Boxplot illustrating the accuracy measured using the three metrics on the STARE database. (ACC) The traditional accuracy measure; (DSC) Dice Similarity Coefficient; (HAUS) Modified Hausdorff error (pixels).

Our modified Hausdorff metric values again showed excellent results. Though neither the manual segmentation skeletons nor the graph search were sub-pixel, our Live-Vessel results maintained an average distance error of roughly 0.3 pixels throughout the entire segmentation. This demonstrates that our graph search provides a reliable medial axis output. Fig. 3.17 shows the distance error in histogram form. Most of the errors were within 1 pixel. In many cases the diameter is an even number of pixels, and given that most of the vessels are merely a few pixels wide, this becomes a significant contributor to the error. As stated earlier, the reference medial axis was obtained through skeletonization of the expert segmentations and is thus only an approximation.

Figure 3.17: Histogram of distance errors for each medial axis pixel across all STARE images. The values listed are the midpoints of the histogram bins.

Live-Vessel is also capable of maintaining important topological information pertaining to the vascular network, including parent-child relationships and branching locations. The tree in Fig. 3.18 demonstrates the potential for data mining and further analysis using our technique. It shows the connectivity graph for both the arteries and veins of a single retinal image, with node labels of k-value and edge labels of tortuosity.

Figure 3.18: This partial graph demonstrates the vascular network for a single retinal image (DRIVE 40). The nodes are each labelled with the k-value and the edges are labelled with the tortuosity value (vessel length / spatial distance between seeds).
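The two labels in this graph are straightforward to derive once the medial path and radii are stored. The following C++ sketch is illustrative rather than the Live-Vessel analysis code: tortuosity is the traced vessel length over the chord between its end seeds, and the branch exponent k solves rParent^k = rChild1^k + rChild2^k by bisection (Murray's law predicts k = 3). The function names and sample values are assumptions.

```cpp
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

struct Point { double x, y; };

// Tortuosity of a medial path: traced vessel length divided by the straight-line
// distance between its two end seeds (1.0 for a perfectly straight segment).
double tortuosity(const std::vector<Point>& path) {
    double length = 0.0;
    for (std::size_t i = 1; i < path.size(); ++i)
        length += std::hypot(path[i].x - path[i - 1].x, path[i].y - path[i - 1].y);
    double chord = std::hypot(path.back().x - path.front().x,
                              path.back().y - path.front().y);
    return chord > 0.0 ? length / chord : 1.0;
}

// Branch exponent k solving rParent^k = rChild1^k + rChild2^k (Murray's law
// predicts k = 3). Assumes both child radii are strictly smaller than the parent
// radius, so the normalized residual is monotonically decreasing in k.
double branchExponent(double rParent, double rChild1, double rChild2) {
    double a = rChild1 / rParent, b = rChild2 / rParent;
    double lo = 0.01, hi = 20.0;
    for (int i = 0; i < 100; ++i) {
        double mid = 0.5 * (lo + hi);
        double residual = std::pow(a, mid) + std::pow(b, mid) - 1.0;
        if (residual > 0.0) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}

int main() {
    std::vector<Point> path = {{0, 0}, {1, 1}, {2, 1}, {3, 2}, {4, 2}};
    std::cout << "tortuosity = " << tortuosity(path) << "\n";
    // Two equal children at about 79.4% of the parent radius give k close to 3.
    std::cout << "k = " << branchExponent(1.0, 0.794, 0.794) << "\n";
    return 0;
}
```

Bisection suffices here because, after normalizing by the parent radius, the residual is strictly decreasing in k whenever both children are thinner than the parent.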
3.3.3 REVIEW

The REVIEW dataset consists of four distinct image sets and three expert segmentors giving high-precision medial and radial measurements. These are only partial segments, with vessel regions carefully chosen to expose certain conditions and traits. For comparison and validation purposes, the three manual segmentations were averaged and the error is reported as the difference between the target and the mean of the three expert references. The error was then biased to have a mean of zero by subtracting its own mean from every point. This is desirable since the constant bias could easily be corrected with calibration and may be caused by the inclusion or exclusion of vessel wall and boundary pixels. This error measurement approach was discussed by the database contributors Al-Diri et al. [1].

We segmented all images in this database, including all vessel segments included in the manual segmentations. We used the same weightings on all four datasets but different ranges of radii, since the images are available at different resolutions. The weightings for the cost function Eq. 2.1 were determined experimentally to be (w1 = 4/20, w2 = 7/20, w3 = 9/20, w4 = 1, w5 = 256) and only varied slightly compared to DRIVE. It should be noted that these parameters were set for automated seeding and not for user feedback. For this dataset, completely automated seeding was used. The seed points were based on the average medial axis and radial values of the manual segmentations.

We also ran experiments to determine how increasing the number of seed points used along each segment increases the accuracy and decreases the error. Thus, for each segment, we began with the two endpoints. Then we sub-divided it into N = 2, 4, 8, and 16 chunks by adding additional seed points. For the most part, segments produced desirable results with no sub-divisions (N = 1), but certain regions of difficulty in the images required more seed points to yield an adequate result. This follows intuition and confirms that users should be able to force the wire to produce correct output in difficult regions while interacting minimally in all others.

HRIS

Live-Vessel was run on all four images in this dataset using radii in the range [3, 14], increased in increments of 0.5. The histogram in Fig. 3.19 shows the errors for all 4 images for the manual and Live-Vessel segmentations with the constant bias removed. It shows that Live-Vessel is not as accurate as the manual segmentations, so we examine more closely where the error lies in Fig. 3.20.

Figure 3.19: This histogram demonstrates the radial deviation from the averaged manual segmentations for the three manual and all Live-Vessel segmentations on the HRIS dataset.

The visualization in Fig. 3.20 shows where in the image the error occurs. As can be seen, the error is isolated to a few points, which were corrected with more seed points. The two main difficulties here were exudates and blood from hemorrhaging staining the blood vessels with intensity values similar to the background and in contrast to the vessels.

(a) N=1  (b) N=8

Figure 3.20: The segmentation overlay for Live-Vessel on image 1 of the HRIS dataset using the manual segments as a reference, with N = 1 and N = 8.

The histogram in Fig. 3.21 shows the medial axis distance error for all values of N tested. This demonstrates that most points fall within 1 pixel of the actual vessel centerline.

Figure 3.21: This histogram shows the medial error in pixels for both the manual segmentations and the Live-Vessel (LV) segmentations on the HRIS dataset.

KPIS

The KPIS was created for the sole purpose of providing a very high quality gold standard width.
Live-Vessel was run on both images in this set with radii in the domain of [2, 5] and 0.15 increments. The histogram in Fig. 3.22 show the combined radial error for both images with the mean bias removed. These excellent results show that Live-Vessel is sound and accurate in its approach. 87  Histogram of Radial Error  0.8 0.0  0.2  0.4  0.6  Density  1.0  1.2  1.4  User 1 User 2 User 3 LV−1 LV−2 LV−4 LV−8 LV−16  0.25  0.75  1.2  1.8  Absolute Error (pixels)  Figure 3.22: This histogram demonstrates the radial deviation from the averaged manual segmentations for the three manual and all Live-Vessel segmentations on the KPIS dataset. The visualization in Fig. 3.23 shows the segmentation overlay on top of the image. As you can see, the binary output appears rough and inprecise since the vessels are very small compared to the pixel resolution. This was a problem that plagued the comparison in other datasets such as STARE and DRIVE.  88  (a) N=1  (b) N=8  Figure 3.23: The segmentation overlay for Live-Vessel on image 1 of the KPIS dataset. We utilized the full precision of the KPIS segmentation boundaries to determine a more precise medial axis. The histogram in Fig. 3.24 shows the medial axis distance error for all N tested at a higher resolution that other datasets. It is interesting to see the distribution and notice that the mean error is approximately 0.5 pixels. This is likely due to the fact that the graph search only has pixel resolution and the output can oscillate around the sub-pixel path. The gaussian post-processing path smoothing is able to remove this oscillation to some extent without degrading the quality or optimality of the process.  89  1.5  Histogram of Medial Error  0.0  0.5  Density  1.0  LV−1 LV−2 LV−4 LV−8 LV−16  0.1  0.3  0.5  0.7  0.9  1.1  1.3  1.5  1.7  Error (pixels)  Figure 3.24: This histogram shows the medial error in pixels for both the manual segmentations and the Live-Vessel (LV) segmentations on the KPIS dataset. CLRIS The CLRIS database is the central light reflex and will expose the difficulty of a bright band occurring within the interior of a vessel. Live-Vessel was run on both images with radii on [3, 11.5] and increments of 0.5. The images in Fig. 3.25 demonstrate that Live-Vessel, and the cost terms that comprise it, are sufficiently robust to deal with this issue.  90  (a) N=1  (b) N=8  Figure 3.25: The segmentation overlay for Live-Vessel on image 1 of the CLRIS dataset. 91  The radial and medial histograms in Fig. 3.26 and 3.27 shows the radial and medial axis errors respectively. Once again it is clear that Live-Vessel is sufficiently robust to handle the challenge presented in these images.  Histogram of Radial Error  0.0  0.2  0.4  Density  0.6  0.8  User 1 User 2 User 3 LV−1 LV−2 LV−4 LV−8 LV−16  0.5  1.5  2.5  3.5  4.5  Absolute Error (pixels)  Figure 3.26: This histogram demonstrates the radial deviation from the averaged manual segmentations for the three manual and all Live-Vessel segmentations on the CLRIS dataset.  92  Histogram of Medial Error  0.4 0.0  0.2  Density  0.6  0.8  User 1 User 2 User 3 LV−1 LV−2 LV−4 LV−8 LV−16  0.5  1.5  2.5  3.5  4.5  5.5  6.5  Error (pixels)  Figure 3.27: This histogram shows the medial error in pixels for both the manual segmentations and the Live-Vessel (LV) segmentations on the CLRIS dataset. VDIS The last dataset to be tested is the vascular disease image set. This consists of eight images of which six are patients with diabetic retinopathy. 
The vessel segments were carefully chosen by ophthalmological experts and present some challenging regions to segment. They feature vessels in the optic disc, highly tortuous vessels, new and small vessel growth, hard and soft exudates, and vessels passing in close proximity. We ran Live-Vessel with automated seeding just as with the others over a radius range of [1, 10] with increments of 0.5. The histograms for radial and medial errors are given in Figs. 3.28 and 3.29 respectively.  93  Histogram of Radial Error  0.0  0.2  0.4  Density  0.6  0.8  User 1 User 2 User 3 LV−1 LV−2 LV−4 LV−8 LV−16  0.5  1.5  2.5  3.5  4.5  5.5  Absolute Error (pixels)  Figure 3.28: This histogram demonstrates the radial deviation from the averaged manual segmentations for the three manual and several Live-Vessel segmentations on the VDIS dataset.  94  Histogram of Medial Error  0.4 0.0  0.2  Density  0.6  0.8  User 1 User 2 User 3 LV−1 LV−2 LV−4 LV−8 LV−16  0.5  1.5  2.5  3.5  4.5  5.5  6.5  7.5  Error (pixels)  Figure 3.29: This histogram shows the medial error in pixels for both the manual segmentations and the Live-Vessel (LV) segmentations on the VDIS dataset. The visualization in Figs. 3.30, 3.31, and 3.32 shows where in the images the error occurs in three randomly chosen images. From these images we can see that Live-Vessel is able to efficiently segment most segments with as few seed points as possible but it also shows that additional ones required for some segments. Small vessels with large bends pose the greatest trouble.  95  (a) N=1  (b) N=8  Figure 3.30: The segmentation overlay for Live-Vessel on image 3 of the VDIS dataset.  96  (a) N=1  (b) N=8  Figure 3.31: The segmentation overlay for Live-Vessel on image 4 of the VDIS dataset.  97  (a) N=1  (b) N=8  Figure 3.32: The segmentation overlay for Live-Vessel on image 7 of the VDIS dataset.  98  Chapter 4  Conclusion In this thesis, the second version of Live-Vessel was presented. Current automated techniques necessitate human validation and often require numerous corrections. Our proposed technique incorporates user expertise into the process itself with quantitative results of using Live-Vessel demonstrating greater efficiency compared to manual techniques while providing excellent accuracy for untrained users. The first version of Live-Vessel introduced the concept of performing a Live-Wire like graph search to simultaneously find the vessel medial line and radius through a 3D (x, y, r) graph search. It did not have a real-time interactive wire and the code only supported 4 radii due to the computational complexity involved in such a search. Segmentations were also obtained by drawing strips with the goal to produce a binary segmentation output. None of the medial or radial path data was maintained and there was no notion of connectivity and branching. Throughout this thesis, several new contributions to Live-Vessel were discussed. First, the necessary modifications were made in both algorithm and code to bring Live-Vessel into the realm of real-time interactivity. This required reducing the search space through enhancing the contrast in the cost function and eliminating, through the incorporation of anatomical knowledge, a large subset of possible search paths. Appropriate data structures were chosen in order to reduce the memory requirements and initialization time each time a graph search was performed. It was carefully coded in C++ with performance in mind. 
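To make the shape of that search concrete, the following is a minimal, illustrative C++ sketch of a Dijkstra-style expansion over (x, y, r) nodes using a binary heap, a compact integer cost type, and radius transitions limited to one level between neighbouring pixels. It is a simplified stand-in under those assumptions, not the actual Live-Vessel implementation, which also records predecessor links so the interactive wire can be traced back from the cursor.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// A node in the search space is an (x, y, r) triple: a candidate medial-axis
// pixel together with a discrete radius index.
struct Node { int x, y, r; };

// Minimal Dijkstra-style sketch over the (x, y, r) graph. 'localCost' stands in
// for a precomputed, contrast-enhanced cost volume quantized to a small integer
// type; the radius index may change by at most one level between neighbouring
// pixels, which prunes a large part of the search space.
std::vector<std::uint32_t> shortestCosts(int W, int H, int R, Node seed,
                                         std::uint16_t (*localCost)(int, int, int)) {
    const std::uint32_t INF = std::numeric_limits<std::uint32_t>::max();
    auto index = [&](int x, int y, int r) {
        return (static_cast<std::size_t>(r) * H + y) * W + x;
    };
    std::vector<std::uint32_t> dist(static_cast<std::size_t>(W) * H * R, INF);
    using Entry = std::pair<std::uint32_t, std::uint32_t>;  // (cost so far, node index)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;
    std::size_t s = index(seed.x, seed.y, seed.r);
    dist[s] = 0;
    heap.push({0u, static_cast<std::uint32_t>(s)});
    while (!heap.empty()) {
        auto [cost, at] = heap.top();
        heap.pop();
        if (cost != dist[at]) continue;  // stale heap entry
        int r = static_cast<int>(at / (static_cast<std::uint32_t>(W) * H));
        int y = static_cast<int>((at / W) % H);
        int x = static_cast<int>(at % W);
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx)
                for (int dr = -1; dr <= 1; ++dr) {
                    if (dx == 0 && dy == 0) continue;  // must move spatially
                    int nx = x + dx, ny = y + dy, nr = r + dr;
                    if (nx < 0 || ny < 0 || nr < 0 || nx >= W || ny >= H || nr >= R)
                        continue;
                    std::uint32_t next = cost + localCost(nx, ny, nr);
                    std::size_t ni = index(nx, ny, nr);
                    if (next < dist[ni]) {
                        dist[ni] = next;
                        heap.push({next, static_cast<std::uint32_t>(ni)});
                    }
                }
    }
    return dist;  // cost-to-seed for every (x, y, r); predecessor links omitted for brevity
}

static std::uint16_t flatCost(int, int, int) { return 1; }  // placeholder cost field

int main() {
    std::vector<std::uint32_t> d = shortestCosts(16, 16, 4, Node{0, 0, 0}, &flatCost);
    return d.empty() ? 1 : 0;
}
```

Storing a predecessor index alongside each settled node is all that is needed to recover an optimal medial path and radius profile between two seeds from such a search.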
Second, the Live-Vessel objective function was enhanced to increase contrast between desirable and undesirable paths and incorporate colour information as well as post-processing smoothing to remove the high-frequency pathing artifacts due to discretization. Contrast enhancement was a necessary step for using a lower precision data type to reduce memory load. Third, Live-Vessel was modified to capture and store vessel connectivity information for futher analysis whereas all competing methods simply capture which pixels belong to vessels without considering connectivity or maintaining higher precision medial path and radius measurements. With this data, Live-Vessel is able to produce trees for each vessel network and label it with physiological indicators such as the k-values 99  from Murray’s equation, vessel tortuosity, and vessel taper among others. These properties are described in the anatomy literature and are of great interest in vessel modelling and physiological research. This data is available because we segment the vessel network as a complete vessel network rather than simply having a goal of identifying pixels. We validated our method using real medical data from the DRIVE and STARE databases containing both healthy and diseased patients. Quantitative results show that, on average, Live-Vessel resulted in a reduction of overall segmentation task time of 7.28 compared to manual times while providing a 95% accuracy level for an untrained user. We discussed the difficulties in validation and the downsides to using the traditional accuracy metric. This metric measures binary membership, has poor contrast for images where most pixels are background, and does not provide a way to compare and validate clinically interesting and relevant features such as vessel radius and branching angles. While the DRIVE and STARE datasets are the most frequently cited, the quality of segmentation is low and they contain numerous flaws and discontinuities such as gaps. The two experts often vary significantly and in non-trivial ways. Only the REVIEW dataset provided the necessary information required to analyze specific segments for path and radius validation. Unfortunately, it did not consider branch points and only contains a few high precision segments instead of a complete vascular network. We compared Live-Vessel to state-of-the-art methods and highlighted its important advantages in that it readily provides the correct topology of the vascular tree hierarchy thanks to user interaction as well as the associated vessel medial (skeletal) curves and thickness profiles. We demonstrate the potential for this by creating a tree with edge and node labelings with various measures of interest. We also validated against the REVIEW database which was designed specifically for measuring radial and medial error in images with light reflex, pathology, and other difficulties. We found that LiveVessel performed very well as most error lies within 1 pixel of distance. We also demonstrated on this database that generally Live-Vessel requires few seed points and is able to overcome any problems through additional user interaction. All datasets included both healthy and unhealthy patients. In the future, Live-Vessel could be expanded by improving the quantity and quality of anatomical and physiological information for medical researchers. 
While previous researchers have focused solely in identifying which pixels belong to retinal blood vessels, Live-Vessel was designed to capture the anatomical information available in these images. This idea should be expanded upon by providing vessel branch analysis tools directly 100  into the segmentation display. Presently, the graph analysis is done externally and only once the complete segmentation has been performed. Its output provides a graph that is organized by generation and not by pixel coordinate. It may be useful for clinicians and researchers to see this graph and its data directly on the retinal image while segmenting. It may even be possible for indicators and alerts to appear in certain regions of the image when suspect traits and pathologies are detected. Another important area for future research would be a thorough validation study. The data available at present is neither complete nor fully accurate. The DRIVE and STARE databases are inconsistent and merely measure pixel membership while the REVIEW database does measure radius and path direction precicely but is small and incomplete. It would be useful to create a new dataset with a variety of healthy and diseased patients containing both their left and right eyes along with complete segmentations and markers denoting special regions of interest. With this data we could verify and validate Live-Vessel for its use in producing data to mine and analyze. Once thoroughly validated, Live-Vessel could be used to generate hundreds or thousands of segmentations as part of a large clinical study examining risk factors and disease progression based on extractable information like tortuosity, k-values, new vessel growth, and vessel taper. The weightings parameters for the objective function used in this thesis were determined emperically. It will be important to study the effect of weightings on the optimality and ease of use. Since Live-Vessel is interactive, the user will be provided feedback and make adjustments so it will be important not just to save the seed points and re-run with various parameters but instead have the user do complete segmentations from scratch each time. Also, it may be that certain terms in the objective function provide a more reliable output for smaller vessels than all others. A single set of globally used parameters may not provide the optimal solution and we may wish to adjust parameters depending on the conditions. This idea has been discussed in the context of Live-wire by Josna Rao[60]. Finally, since Live-Vessel is a semi-automated segmentation technique, we rely on the user to identify and grow vessel networks using the wire. Perhaps it would be possible to expediate this process and minimize user involvement by attempting a rough automated segmentation that can be quickly adjusted and corrected by the user. This may be possible by automatically identifying vessel seed points and attempting to grow out and connect them based on biological priors. Once the network has been initialized, the user could then attempt to rework the network by moving, adding, and removing nodes and then connecting incomplete networks. 101  Bibliography [1] B. Al-Diri, A. Hunter, D. Steel, M. Habib, T. Hudaib, and S. Berry. Review - a reference data set for retinal vessel profiles. In Engineering in Medicine and Biology Society, pages 2262–2265, Aug. 2008. [2] E. Anderson, Z. Bai, and C. Bischof. LAPACK Users’ guide. Society for Industrial Mathematics, 1999. [3] S.R. Aylward and E. Bullitt. 
Initialization, noise, singularities, and scale in height ridge traversal for tubular object centerline extraction. IEEE Trans Med Imaging, 21(2):61–75, Feb. 2002. [4] W. A. Barrett and E. N. Mortensen. Interactive live-wire boundary extraction. Med Image Anal, 1(4):331–341, Sep. 1997. [5] Y. Boykov and G. Funka-Lea. Graph cuts and efficient N-D image segmentation. International Journal of Computer Vision, 70(2):109– 131, Nov. 2006. [6] C.A. Bradford. Basic ophthalmology for medical students and primary care residents. American Academy of Ophthalmology, 1999. [7] X. Bresson, S. Esedoglu, P. Vandergheynst, J.P. Thiran, and S. Osher. Fast global minimization of the active contour/snake model. Journal of Mathematical Imaging and Vision, 28(2):151–167, Jun. 2007. [8] E. Bullitt, G. Gerig, S. Aylward, S. Joshi, K. Smith, M. Ewend, and W. Lin. Vascular attributes and malignant brain tumors. In Medical Image Computing and Computer Assisted Intervention, 2003. [9] A. Can, H. Shen, J.N. Turner, H.L. Tanenbaum, and B. Roysam. Rapid automated tracing and feature extraction from retinal fundus images using direct exploratory algorithms. IEEE Transactions on Information Technology in Biomedicine, 3:125–138, 1999. [10] C. Canero and P. Radeva. Vesselness enhancement diffusion. Pattern Recognition Letters, 24(16):3141–3151, 2003. 102  [11] C.-C. Chang, C.-C. Lin, P.-Y. Pai, and Y.-C. Chen. A novel retinal blood vessel segmentation method based on line operator and edge detector. In Intelligent Information Hiding and Multimedia Signal Processing, pages 299–302, Sep. 2009. [12] A. Chodorowski, U. Mattsson, M.L., and G. Hamarneh. Color lesion boundary detection using live wire. In Medical Imaging: Image Processing, pages 1589–1596. SPIE, 2005. [13] L. Cohen and R. Kimmel. Global minimum for active contour models : A minimal path approach, 1997. [14] D. J. Cornforth, H. F. Jelinek, J. J. G. Leandro, J. V. B. Soares, R. M. Cesar-Jr., M. J. Cree, P. Mitchell, and T. R. J. Bossomaier. Evolution of retinal blood vessel segmentation methodology using wavelet transforms for assessment of diabetic retinopathy. Complexity international, 11:171–182, 2005. [15] T. Deschamps and L.D. Cohen. Fast extraction of minimal paths in 3D images and applications to virtual endoscopy, 2001. [16] Lee R. Dice. Measures of the amount of ecologic association between species. Ecology, 26(3):297–302, 1945. [17] E.W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1(1):269–271, Dec. 1959. [18] D. Eberly, R. Gardner, B. Morse, S. Pizer, and C. Scharlach. Ridges for image analysis. Journal of Mathematical Imaging and Vision, 4(4):353– 373, Dec. 1994. [19] A.F. Frangi, W.J. Niessen, R.M. Hoogeveen, T.V. Walsum, and M.A. Viergever. Quantitation of vessel morphology from 3D MRA. In Medical Image Computing and Computer Assisted Intervention, pages 358–367. Springer-Verlag, 1999. [20] A.F. Frangi, W.J. Niessen, K.L. Vincken, and M.A. Viergever. Multiscale vessel enhancement filtering. In Medical Image Computing and Computer-Assisted Interventation, pages 130–137. MICCAI, 1998. [21] US government. National eye institute (http://www.nei.nih.gov), Aug. 2010. [22] H. Gray. Anatomy of the human body. Lea & Febiger, 1918. 103  [23] A. Hoover, V. Kouznetsova, and M.H. Goldbaum. Locating blood vessels in retinal images by piece-wise threshold probing of a matched filter response. IEEE Transactions on Medical Imaging, 19(3):203–210, Mar. 2000. [24] N. Hraiech, M. Carroll, M. Rochette, and J.L. Coatrieux. 
3d vascular shape segmentation for fluid-structure modeling. In Shape Modeling and Applications, pages 226–231. IEEE Computer Society, 2007. [25] D.-S. Huang, X.-P. Zhang, and G.-B. Huang. Locating vessel centerlines in retinal images using wavelet transform: A multilevel approach. In International Conference on Intelligent Computing, Lecture Notes in Computer Science, pages 668–696. Springer, Aug. 2005. [26] J. Jarvi and J. Freeman. C++ lambda expressions and closures. Science of Computer Programming, pages 762–772, 2009. [27] C. Kirbas and F.K.H. Quek. A review of vessel extraction techniques and algorithms. ACM Computing Surveys, 36:81–121, 2004. [28] T. M. Koller, G. Gerig, G. Szekely, and D. Dettwiler. Multiscale detection of curvilinear structures in 2-D and 3-D image data. In Computer Vision, pages 864–869. IEEE Computer Society, 1995. [29] K. Krissian, G. Malandain, N. Ayache, R. Vaillant, and Y. Trousset. Model-based detection of tubular structures in 3D images. Computer Vision and Image Understanding, 80(2):130–171, 2000. [30] B. S. Y. Lam and Hong Yan. A novel vessel segmentation algorithm for pathological retina images based on the divergence of vector fields. IEEE Transactions on Medical Imaging, 27(2):237–246, 2008. [31] B.S.Y. Lam and H. Yan. A novel vessel segmentation algorithm for pathological retina images based on the divergence of vector fields. IEEE Transactions on Medical Imaging, 27(2):237–246, Feb. 2008. [32] H. Li and A. Yezzi. Vessels as 4-D curves: Global minimal 4-D paths to extract 3-D tubular surfaces and centerlines. IEEE Transactions on Medical Imaging, 26:1213–1223, 2007. [33] J.K.J. Li. Dynamics of the vascular system. World Scientific Publishing Company, 2004.  104  [34] K. Li, X. Wu, D.Z. Chen, and M. Sonka. Optimal surface segmentation in volumetric images-a graph-theoretic approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(1):119–134, 2006. [35] C. Lorenz, I.-C. Carlsen, T.M. Buzug, C. Fassnacht, and J. Weese. Multi-scale line segmentation with automatic estimation of width, contrast and tangential direction in 2D and 3D medical images. In Joint Conference on Computer Vision, Virtual Reality and Robotics in Medicine and Medial Robotics and Computer-Assisted Surgery, pages 233–242. Springer-Verlag, 1997. [36] L. M. Lorigo, O. D. Faugeras, W. E. L. Grimson, R. Keriven, R. Kikinis, A. Nabavi, and C.-F. Westin. Curves: Curve evolution for vessel segmentation. Medical Image Analysis, 5:195–206, 2001. [37] J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, and R.L. Kennedy. Measurement of retinal vessel widths from fundus images based on 2-D modeling. IEEE Transactions on Medical Imaging, 23(10):1196–1204, Oct. 2004. [38] H. Luo, Q. Lu, R.S. Acharya, and R. Gaborski. Robust snake model. In Computer Vision and Pattern Recognition, pages 452–457, 2000. [39] R. Manniesing and W.J. Niessen. Local speed functions in level set based vessel segmentation. In Medical Image Computing and ComputerAssisted Intervention, Lecture Notes in Computer Science, pages 475– 482, Sep. 2004. [40] R. Manniesing, M. Viergever, and W. Niessen. Vessel enhancing diffusiona scale space representation of vessel structures. Medical Image Analysis, 10(6):815–825, Dec. 2006. [41] R. Manniesing, M.A. Viergever, and W.J. Niessen. Vessel axis tracking using topology constrained surface evolution. IEEE Transactions on Medical Imaging, 26(3):309–316, Mar. 2007. [42] M.E. Martinez-Perez, A.D. Hughes, S.A. Thom, A.A. Bharath, and K.H. Parker. 
Segmentation of blood vessels from red-free and fluorescein retinal images. Medical Image Analysis, 11(1):47–61, 2007. [43] M.E. Martinez-Perez, A.D. Hughes, S.A. Thom, A.A. Bharath, and K.H. Parker. Segmentation of blood vessels from red-free and fluorescein retinal images. Medical Image Analysis, 11(1):47–61, 2007. 105  [44] T. Mcinerney and D. Terzopoulos. T-snakes: Topology adaptive snakes. In Medical Image Analysis, pages 73–91, 1999. [45] T. Mcinerney and D. Terzopoulos. Topology adaptive deformable surfaces for medical image volume segmentation. IEEE Transactions on Medical Imaging, 18:840–850, 1999. [46] C. Mcintosh and G. Hamarneh. Vessel crawlers: 3D physically-based deformable organisms for vasculature segmentation and analysis. In Computer Vision and Pattern Recognition, pages 1084–1091, 2006. [47] E. Meijering, M. Jacob, J.C.F. Sarria, P. Steiner, H. Hirling, and M. Unser. Design and validation of a tool for neurite tracing and analysis in fluorescence microscopy images. Cytometry Part A, 58(2):167– 176, 2004. [48] H. Mirzaalian and G. Hamarneh. Vessel scale selection using MRF optimization. In IEEE Computer Vision and Pattern Recognition, pages 3273–3279, 2010. [49] E. Mortensen, B. Morse, W. Barrett, and J. Udupa. Adaptive boundary detection using ‘live-wire’ two-dimensional dynamic programming. In Computers in Cardiology, pages 635–638, 1992. [50] D. Nain, A. Yezzi, and G. Turk. Vessel segmentation using a shape driven flow. In Medical Imaging Copmuting and Computer Assisted Intervention, pages 51–59, 2004. [51] M. Niemeijer, J. Staal, B.V. Ginneken, M. Loog, and M.D. Abramoff. Comparative study of retinal vessel segmentation methods on a new publicly available database. In Medical Imaging 2004: Image Processing, volume 5370, pages 648–656. SPIE, 2004. [52] M. Nikolova, S. Esedoglu, and T.F. Chan. Algorithms for finding global minimizers of image segmentation and denoising models. SIAM Journal on Applied Mathematics, 66(5):1632–1648, 2006. [53] S. D. Olabarriaga and A. W. M. Smeulders. Interaction in the segmentation of medical images: A survey. Medical Image Analysis, 5(2):127– 142, Jun. 2001. [54] J. Parr. Introduction to ophthalmology. Oxford University Press, USA, 1989. 106  [55] S.C. Pei, J.-H. Chang, and J.-J. Ding. Quaternion matrix singular value decomposition and its applications for color image processing. In Image Processing, pages 805–808, Sep. 2003. [56] M.E.M. Perez, A.D. Hughes, S.A. Thorn, and K.H. Parker. Improvement of a retinal blood vessel segmentation method using the insight segmentation and registration toolkit (itk). In Engineering in Medicine and Biology Society, pages 892–895, Aug. 2007. [57] M. Poon, G. Hamarneh, and R. Abugharbieh. Live-vessel: Extending livewire for simultaneous extraction of optimal medial and boundary paths in vascular images. In Medical Image Computing and Computer Assisted Intervention, pages 444–451, 2007. [58] M. Poon, G. Hamarneh, and R. Abugharbieh. Segmentation of complex objects with non-spherical topologies from volumetric medical images using 3D livewire. In SPIE, pages 1–10, 2007. [59] J.C. Rajapakse and F. Kruggel. Segmentation of MR images with intensity inhomogeneities. Image and Vision Computing, 16(3):165–180, 1998. [60] Josna Rao, Rafeef Abugharbieh, and Ghassan Hamarneh. Adaptive regularization for image segmentation using local image curvature cues. In Kostas Daniilidis, Petros Maragos, and Nikos Paragios, editors, Computer Vision ECCV 2010, volume 6314 of Lecture Notes in Computer Science, pages 651–665. 
Springer Berlin / Heidelberg, 2010. [61] E. Ricci and R. Perfetti. Retinal blood vessel segmentation using line operators and support vector classification. IEEE Transactions on Medical Imaging, 26(10):1357–1365, Oct. 2007. [62] J.S. Ross, T.J. Masaryk, M.T. Modic, P.M. Ruggieri, E.M. Haacke, and W.R. Selman. Intracranial aneurysms: evaluation by MR angiography. AJNR Am J Neuroradiol, 11(3):159–165, 1990. [63] Guillermo Sapiro. Color snakes. Computer Vision and Image Understanding, 68(2):247–253, 1997. [64] Y. Sato, S. Nakajima, N. Shiraga, H. Atsumi, S. Yoshida, T. Koller, G. Gerig, and R. Kikinis. Three-dimensional multi-scale line filter for segmentation and visualization of curvilinear structures in medical images. Medical Image Analysis, 2(2):213–222, 1998. 107  [65] L. Shi, B. Funt, and G. Hamarneh. Quaternion color curvature. In Color Imaging, pages 338–341, 2008. [66] F. Skovborg, A. V. Nielsen, E. Lauritzen, and O. Hartkopp. Diameters of the retinal vessels in diabetic and normal subjects. Journal of Diabetes, 18(5):292–298, May 1969. [67] J.V.B. Soares, J.J.G. Le, R.M. Cesar, H.F. Jelinek, and M.J. Cree. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. IEEE Transactions on Medical Imaging, 25:1214–1222, 2006. [68] M. Sofka and C.V. Stewart. Retinal vessel centerline extraction using multiscale matched filters, confidence and edge measures. IEEE Transactions on Medical Imaging, 25:1214–1222, 2006. [69] J. Staal, M.D. Abramoff, M. Niemeijer, M.A. Viergever, and B. van Ginneken. Ridge-based vessel segmentation in color images of the retina. IEEE Transactions on Medical Imaging, 23(4):501–509, Apr. 2004. [70] A. Szymczak, A. Stillman, A. Tannenbaum, and K. Mischaikow. Coronary vessel trees from 3D imagery: a topological approach. Medical Image Analysis, pages 548–559, Aug. 2006. [71] A. Vasilevskiy and K. Siddiqi. Flux maximizing geometric flows. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24:1565– 1578, 2002. [72] L. Waite and J.M. Fine. Applied biofluid mechanics. McGraw-Hill Professional, 2007. [73] O. Wink, W.J. Niessen, and M.A. Viergever. Multiscale vessel tracking. IEEE Transactions on Medical Imaging, 23(1):130–133, Jan. 2004. [74] C.-H. Wu, G. Agam, and P. Stanchev. A hybrid filtering approach to retinal vessel segmentation. In International Symposium on Biomedical Imaging, pages 604–607, Apr. 2007. [75] P. Yan and A.A. Kassim. Segmentation of volumetric MRA images by using capillary active contour. Medical Image Analysis, 10(3):317–329, Jun. 2006. [76] S. Young, B. Movassaghi, J. Weese, and V. Rasche. 3D vessel axis extraction using 2D calibrated x-ray projections for coronary modeling. In Medical Imaging: Image Processing, pages 1491–1498. SPIE, 2003. 108  [77] Y. Zhang, X. Zhou, J. Lu, J. Lichtman, D. Adjeroh, and S.T.C. Wong. 3D axon structure extraction and analysis in confocal fluorescence microscopy images. Neural computation, 20(8):1899–1927, 2008. [78] C. Zhao, W. Shi, and Y. Deng. A new Hausdorff distance for image matching. Pattern Recognition Letters, 26(5):581–586, 2005.  109  Appendices  110  Appendix A  Vesselness Given an image I(x, y) : R2 → X where X is over [0, 1], the Hessian for I is defined as the real and symmetric matrix ∂2I ∂x2 ∂2I ∂y∂x  H(I) =  ∂2I ∂x∂y ∂I ∂y 2  .  (A.1)  For a colour image I(x, y) : R2 → X3 , the Hessian can be defined by using quaternions for each of the red, green, and blue colour channels.  
Q = H_R i + H_G j + H_B k    (A.2)

i² = j² = k² = ijk = −1    (A.3)

Quaternion Singular Value Decomposition (QSVD) can be used to decompose it into eigenvalues and eigenvectors.

Q = V_Q^T Λ U_Q    (A.4)

It is not clear how to perform QSVD directly on the above, so we instead form an equivalent complex 4x4 matrix and compute the complex SVD.

Q = H_R i + H_G j + H_B k    (A.5)
A = H_R i    (A.6)
B = H_G + H_B i    (A.7)
Q = A + Bj    (A.8)
Q_eq = [ A   −B̄ ; B   Ā ]    (A.9)

Decomposing Q_eq will yield only 2 unique eigenvalues and eigenvectors. Now that we have computed the eigenvalues, we can describe the tubularness of the structure using Frangi's vesselness filter, which is a scale space technique. Let Q_σ be the result of convolving Q with a Gaussian kernel of size σ, and let q be the (x, y, r) pixel and scale at which vessel membership is tested.

C_fv(q) = 0 if λ2 > 0;  exp(−R_B² / (2β²)) (1 − exp(−S² / (2c²))) otherwise    (A.10)

In the above, β and c are free parameters. The other terms are computed from Q_σ, where

|λ1| ≤ |λ2|    (A.11)
R_B = |λ1| / |λ2|    (A.12)
S = ||H_σ||₂ = √(λ1² + λ2²)    (A.13)

Finally, we wish to penalize edge responses at false scales and enhance the contrast, so the vesselness-based cost term is defined as

C_v(q) = (1 − C_fv(q)) + Edge(I).    (A.14)

Since I is a colour image, Edge(I) can be computed from the Jacobian.

R_x = ∂I_R/∂x    (A.15)
R_y = ∂I_R/∂y    (A.16)
G_x = ∂I_G/∂x    (A.17)
G_y = ∂I_G/∂y    (A.18)
B_x = ∂I_B/∂x    (A.19)
B_y = ∂I_B/∂y    (A.20)

D = [ R_x  R_y ; G_x  G_y ; B_x  B_y ]    (A.21)

D^T D = [ R_x² + G_x² + B_x²   R_x R_y + G_x G_y + B_x B_y ; R_y R_x + G_y G_x + B_y B_x   R_y² + G_y² + B_y² ]    (A.22)

D^T D x = λ x    (A.23)

Edge(I) = λ_max    (A.24)

Appendix B

Results

Figure B.1: Segmentation outputs for the first 4 images from DRIVE generated with Live-Vessel.

Figure B.2: Segmentation outputs for DRIVE images 5 to 8 generated with Live-Vessel.

Figure B.3: Segmentation outputs for DRIVE images 9 to 12 generated with Live-Vessel.

Figure B.4: Segmentation outputs for DRIVE images 13 to 16 generated with Live-Vessel.

Figure B.5: Segmentation outputs for DRIVE images 17 to 20 generated with Live-Vessel.

Figure B.6: Segmentation outputs for DRIVE images 21 to 24 generated with Live-Vessel.

Figure B.7: Segmentation outputs for DRIVE images 25 to 28 generated with Live-Vessel.

Figure B.8: Segmentation outputs for DRIVE images 29 to 32 generated with Live-Vessel.

Figure B.9: Segmentation outputs for DRIVE images 33 to 36 generated with Live-Vessel.

Figure B.10: Segmentation outputs for DRIVE images 37 to 40 generated with Live-Vessel.

Figure B.11: Segmentation outputs for image numbers in the Fibonacci sequence from DRIVE. The colour represents the radius and uses the jet colour map from Matlab, with r = 0 being blue and r = 4.5 being red.

Figure B.12: Segmentation outputs for the first 4 images from STARE generated with Live-Vessel.

Figure B.13: Segmentation outputs for images 5 to 8 from STARE generated with Live-Vessel.

Figure B.14: Segmentation outputs for images 9 to 12 from STARE generated with Live-Vessel.
128  (a) 13  (b) 14  (c) 15  (d) 16  Figure B.15: Segmentation outputs for images 13 to 16 from STARE generated with Live-Vessel.  129  (a) 17  (b) 18  (c) 19  (d) 20  Figure B.16: Segmentation outputs for images 17 to 20 from STARE generated with Live-Vessel.  130  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.17: Segmentation overlays for image 1 from REVIEW HRIS generated with Live-Vessel. N is the number of endpoints in each segment.  131  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.18: Segmentation overlays for image 2 from REVIEW HRIS generated with Live-Vessel. N is the number of endpoints in each segment.  132  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.19: Segmentation overlays for image 3 from REVIEW HRIS generated with Live-Vessel. N is the number of endpoints in each segment.  133  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.20: Segmentation overlays for image 4 from REVIEW HRIS generated with Live-Vessel. N is the number of endpoints in each segment.  134  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.21: Segmentation overlays for image 1 from REVIEW CLRIS generated with Live-Vessel. N is the number of endpoints in each segment.  135  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.22: Segmentation overlays for image 2 from REVIEW CLRIS generated with Live-Vessel. N is the number of endpoints in each segment.  136  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.23: Segmentation overlays for image 1 from REVIEW KPIS generated with Live-Vessel. N is the number of endpoints in each segment.  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.24: Segmentation overlays for image 2 from REVIEW KPIS generated with Live-Vessel. N is the number of endpoints in each segment.  137  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.25: Segmentation overlays for image 1 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.  138  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.26: Segmentation overlays for image 2 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.  139  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.27: Segmentation overlays for image 3 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.  140  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.28: Segmentation overlays for image 4 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.  141  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.29: Segmentation overlays for image 5 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.  142  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.30: Segmentation overlays for image 6 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.  143  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.31: Segmentation overlays for image 7 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.  144  (a) N=1  (b) N=2  (c) N=4  (d) N=8  Figure B.32: Segmentation overlays for image 8 from REVIEW VDIS generated with Live-Vessel. N is the number of endpoints in each segment.  
145  Image 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 39 30 31 32 33 34 35 36 37 38 39 40  ACC 0.94088 0.93913 0.94226 0.94463 0.94210 0.93753 0.93467 0.94333 0.94484 0.94190 0.93707 0.94737 0.93722 0.95062 0.95267 0.94943 0.94639 0.95471 0.95703 0.95665 0.95512 0.93318 0.95879 0.93039 0.93670 0.94898 0.93984 0.93488 0.94054 0.94559 0.95415 0.94902 0.95050 0.93460 0.94654 0.93318 0.95100 0.94927 0.94279 0.95210  HAUS 0.59861 0.56660 0.67220 0.54905 0.60658 0.56857 0.61343 0.65947 0.58748 0.63534 0.59504 0.60553 0.57809 0.54621 0.54985 0.54905 0.65270 0.64307 0.49329 0.63912 0.54235 0.61813 0.61556 0.59593 0.66291 0.61143 0.59760 0.62036 0.60891 0.60693 0.58963 0.60327 0.55953 0.69204 0.56094 0.58734 0.56240 0.61749 0.67917 0.60372  DSC 0.75774 0.76325 0.77288 0.75600 0.74499 0.74208 0.69823 0.73665 0.73435 0.71805 0.72217 0.78008 0.74950 0.76193 0.74162 0.78845 0.77352 0.80000 0.82819 0.79859 0.77909 0.69749 0.75183 0.75272 0.72948 0.75517 0.72151 0.73079 0.70521 0.71898 0.69178 0.76504 0.76206 0.75175 0.75326 0.75368 0.78436 0.80045 0.75212 0.77221  Table B.1: Complete Live-Vessel results for the DRIVE database. The error measures are described in Sec. 3.2.  146  Image 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20  ACC 0.96308 0.96686 0.96963 0.95735 0.94806 0.96862 0.96252 0.96484 0.96385 0.95742 0.95824 0.96487 0.95805 0.95664 0.95697 0.96097 0.95343 0.97462 0.97674 0.96201  HAUS 0.63007 0.62197 0.68092 0.92362 0.67563 0.73360 0.59517 0.54452 0.58270 0.67149 0.48795 0.55775 0.65994 0.65578 0.62541 0.65783 0.52439 0.66193 0.61181 0.86912  DSC 0.82512 0.80895 0.81002 0.76338 0.74544 0.77807 0.81968 0.81884 0.81412 0.79356 0.74873 0.81606 0.81525 0.81437 0.80562 0.85057 0.78339 0.80702 0.79659 0.78917  Table B.2: Complete Live-Vessel results for the STARE database. The error measures are described in Sec. 3.2.  147  
