3D Livewire and Live-Vessel: Minimal Path Methods for Interactive Medical Image Segmentation

by

Miranda Poon

B.A.Sc., The University of British Columbia, 2005

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

Master of Applied Science

in

THE FACULTY OF GRADUATE STUDIES
(Electrical & Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (VANCOUVER)

April 2008

© Miranda Poon, 2008

Abstract

Medical image analysis is a ubiquitous and essential part of modern healthcare. A crucial first step is segmentation, which is often complicated by many factors including subject diversity, pathology, noise corruption, and poor image resolution. Traditionally, manual tracing by experts was done. While considered accurate, this process is time-consuming and tedious, especially when performed slice-by-slice on three-dimensional (3D) images over large datasets, or on two-dimensional (2D) but topologically complicated images such as a retinography. On the other hand, fully-automated methods are typically faster, but work best with data-dependent, carefully tuned parameters and still require user validation and refinement.

This thesis contributes to the field of medical image segmentation by proposing a highly-automated, interactive approach that effectively merges user knowledge and efficient computing. To this end, our work focuses on graph-based methods and offers globally optimal solutions. First, we present a novel method for 3D segmentation based on a 3D Livewire approach. This approach is an extension of the 2D Livewire framework, and is capable of handling objects with large protrusions, concavities, branching, and complex arbitrary topologies. Second, we propose a method for efficiently segmenting 2D vascular networks, called 'Live-Vessel'. Live-Vessel simultaneously extracts vessel centrelines and boundary points, and globally optimizes over both spatial variables and vessel radius.
Both of our proposed methods are validated on synthetic data and real medical data, and are shown to be highly reproducible, accurate, and efficient. They are also shown to be resilient to high amounts of noise and insensitive to internal parameterization.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction and Background
  1.1 Problem Statement and Motivation
  1.2 Overview of Image Segmentation Approaches
    1.2.1 Manual Segmentation
    1.2.2 Fully-automated Segmentation Techniques
    1.2.3 Semi-automated Interactive Techniques
  1.3 Graph-Based Segmentation
    1.3.1 Dynamic Programming
    1.3.2 Dijkstra's Algorithm
  1.4 The Livewire Formulation
    1.4.1 Contour Optimization Process
    1.4.2 Interactive Seeding
2 3D Livewire for Arbitrarily Complex Topologies
  2.1 Interactive 3D Segmentation
  2.2 Extending 2D Livewire to 3D
    2.2.1 Automated Seeding
    2.2.2 Automatic Seedpoint Ordering
    2.2.3 Intersection Point Pruning
    2.2.4 Efficient Graph Search for Pre-determined Seedpoints
    2.2.5 Handling Arbitrary Topology
    2.2.6 Implementation Details
  2.3 Validation and Results
    2.3.1 Synthetic Data Tests
    2.3.2 Real Medical Data
    2.3.3 Analysis of Robustness and Parameter Sensitivity
3 Live-Vessel: Extending Livewire to Vascular Data
  3.1 Vessel Segmentation Overview
  3.2 Livewire to Live-Vessel: Extending Graph Search to (x,y,r) Space
  3.3 Live-Vessel External and Internal Costs
    3.3.1 Vesselness Cost
    3.3.2 Vessel Direction Cost
    3.3.3 Image Evidence Cost
    3.3.4 Spatial and Radius Smoothness Costs
  3.4 Results and Discussions
    3.4.1 Performance Criteria
    3.4.2 Synthetic Data Segmentation
    3.4.3 Retinography Data Segmentation
    3.4.4 Live-Vessel Robustness and Parameter Selection
4 Conclusions
  4.1 Contributions
  4.2 Limitations and Future Directions
Bibliography

List of Tables

Table 1. Reproducibility and accuracy results of our proposed method, on both synthetic and real medical image data.
Each entry in the table is the average over 4 trials with the corresponding standard deviation. For the ventricles, vertebra, and pelvis examples, expert manual segmentations were not available.

Table 2. Task time reduction, in seconds, achieved by our proposed method compared to performing 2DLW on each slice. Each step for the examples is averaged over 4 trials. Standard deviation values between each set of trials are included. (I) User interaction time with our tool. (II) Automatic processing time of our tool. (III) Time required for manual corrections. (IV) Total task time of our tool. (V) Task time using 2DLW on all slices. (VI) Fraction of time (%) required for our tool compared to 2DLW on all slices, (IV)/(V).

Table 3. Required eigenvalue conditions of the Hessian matrix at a given pixel for vessel detection.

Table 4. Summary of Live-Vessel's performance on 3 random manual tracings in the DRIVE database. Results are shown as averages and standard deviations over 2 trials.

Table 5. Summary of the average accuracy and reproducibility results of Live-Vessel (LV) compared to manual traces (MT1 and MT2), reporting averages and standard deviations over two trials. Dice similarity is the reported measure. Image size is 565 x 584 pixels throughout.

Table 6. Efficiency results of our proposed Live-Vessel (LV) compared to a manual trace (MT), reporting averages and standard deviations over two trials. Efficiency was measured as the fraction of the manual task time needed to generate the mask. Results shown are averages of two trials, and are reported in seconds. The image size is 565 x 584 pixels. Average reduction in time is 75.3%.

Table 7. Accuracy comparison of state-of-the-art methods and the proposed method Live-Vessel, using the widely used accuracy assessment method.

Table 8. Effect of parameter selection on Dice similarity (accuracy) between Live-Vessel and each manual tracing (MT1, MT2). Percentage of accuracy change provided for each experiment.
Weighting for vesselness (V), vessel direction consistency (Ev), image evidence (Ie), radius smoothness (R), and spatial smoothness (XY) were raised or lowered (by ±50%).

List of Figures

Figure 1. Although the path from Node 1 to the end node q has the highest optimal cost compared to Nodes 2 and 3, it has the lowest cost to the end node; therefore, the optimal path from the start node p to q passes through Node 1 instead.

Figure 2. In the graph search of Dijkstra's algorithm, the cumulative costs of the neighbouring nodes (dotted squares) of the 'unvisited' node with the lowest cumulative cost (bold square) are updated. Costs of 'visited' nodes (solid black squares) are no longer updated.

Figure 3. In iteratively determining the optimal path from a node with a cumulative cost of 32, the next node on the path is always the neighbouring node with the lowest cumulative cost until the seedpoint (cumulative cost of 0) is reached.

Figure 4. Livewire interactive seeding. (a) Original image. (b) With a set seedpoint (circle), proposed contours from the seedpoint to the user's cursor are displayed in real time. (c) A proposed contour is selected (green), and the process in (b) repeats.

Figure 5. 2D Livewire contour on a caudate nucleus mask. (a) 2D contour (gray) segmenting a slice of the caudate mask. (b) This same contour (black) is shown in the context of the entire object.

Figure 6. Overall steps of this algorithm, shown on a binary image for clarity. (a)-(b) Seedpoints (grey squares) are selected in user-guided Livewire contours on orthogonal slices. User-approved segments are in green, and red contours represent the proposed, 'live' segment during the Livewire task (crosshair denotes cursor location). (c) 3D plot of 11 user-guided contours. (d) Ordering automatically generated seedpoints (gray squares) on a slice in the third orthogonal orientation using our turtle-based algorithm (Section 2.2.2) results in the contour in (e).
(f)-(g) 3D plots of automatically generated contours at mid-task and at task completion (125 contours), respectively. (h) Surface rendering of the segmentation result.

Figure 7. Orthogonal contours in (a) (red and blue) intersect with a slice in the third orientation (green) in 10 different locations, as shown in (b). These intersections on the green slice in (c) become seedpoints on a turtle map (Section 2.2.2).

Figure 8. Example seedpoint map showing outer contour seedpoints (red), inner contour seedpoints (green with 'i' suffix), and a disjoint object's seedpoints (blue). For each contour, the turtle object starts at the first point and follows the tracks (in grey) according to its rules in Section 2.2.2, visiting other seedpoints in the order shown. Track values of '2' denote track intersections.

Figure 9. Possible turtle map intersections resulting from seedpoint locations include (a) the basic '+' junction, (b) the 'T' junction where a seedpoint overlaps a track, and (c) the 'L' junction where two seedpoints overlap. Pixels with values '1' and '2' represent non-seedpoint tracks and track intersections respectively.

Figure 10. Contiguous intersection points. (a) User-guided Livewire contours (black voxels) will intersect with the cube's end slice (gray) in multiple contiguous locations. (b) A contour intersecting with an orthogonal image slice (gray) creates two clusters of contiguous points (black). White arrows denote which pixels are kept after the pruning algorithm (Section 2.2.3).

Figure 11. Graph search required per pre-determined seedpoint. (a) Vertebra. (b) Full cost map needed per seedpoint (circled). Darker areas indicate lower cost. (c)-(d) The abbreviated graph search algorithm terminates when the next pre-determined seedpoint is reached.

Figure 12. Screen capture of the segmentation tool's graphical user interface during a segmentation task.
Completed 2D contours are displayed in green for the three orthogonal views, providing feedback on segmentation accuracy throughout the segmentation task. Yellow lines indicate the current slice indices of the other two orientations.

Figure 13. Results of our proposed segmentation method on synthetic data. (a),(d),(g) Renderings of a left caudate mask, torus, and fork object respectively. (b),(e),(h) 3D plots of user-guided contours (red) and automatically generated contours (light blue). (c),(f),(i) Surface renderings of the segmented synthetic examples above, using our proposed approach.

Figure 14. Results of our proposed segmentation method on real 3D medical data. (a),(d),(g) Original 3D images of a human brain (T1-MRI), spine (CT), and pelvic region (CT) respectively. (b),(e),(h) 3D plots of user-guided contours (red) and automatically generated contours (light blue). Twenty-four (red) used to segment 200 (cyan), 17 to segment 88, and 80 to segment 277 respectively. (c),(f),(i) Surface renderings of the segmented examples above, using our proposed method.

Figure 15. Our proposed method's performance reflected in segmentation accuracy as AWGN is progressively added. (a) Slices of a left caudate mask with increasing noise and PSNR levels of 1 dB, 20.0 dB, 6.0 dB, and 0 dB. (b) Accuracy stays consistently high until very high noise levels occur, obscuring the ability of the user to choose reliable Livewire seedpoints.

Figure 16. Live-Vessel's 3D graph search algorithm depicted in 2D (x,y) and 3D (x,y,r). (a) Medial path sequence (green arrows) with two neighbouring nodes p=(x,y,r) and q=(x',y',r'), projected on the (x,y) plane. Red arrows denote the radius (r) dimension. (b) Alternative directions from p to q. Note that the next node on the path after p=(x,y,r) cannot be q=(x',y',r') if x=x' and y=y'.

Figure 17. Flowchart depicting program operation from the user's point of view.

Figure 18. Overview of Live-Vessel operation. (a) Original image.
(b) Vessel boundary points (yellow) from a seedpoint (green cross) to the next potential seedpoint (red square) are graphically shown to the user for approval. (c) Segmentation mask is created from the boundary points determined in (b).

Figure 19. Graphical representation of the cost terms in Live-Vessel's minimum path search.

Figure 20. Vessel bifurcation and noise pose problems for a maximal response vesselness filter. (a) Original image, magnified for clarity. (b) Maximal response vesselness filter output. Note the filter problems caused by vessel bifurcation or noise. (c) Segmentation result with our proposed method.

Figure 21. Eigenvector consistency cost CE(p,q) going from node p=(x,y,r) to q=(x',y',r') is calculated using angle A, not angle B. Angle A is the smallest angle between ±Ev(p) and ±Ev(q).

Figure 22. Image evidence cost and its effect. (a) Points (o's) parallel to the vessel direction (white arrow) at a scale-dependent distance r are tested for edge detection response. (b) Original retinal image. Red square denotes the close-up area in (c) and (d). (c) Image evidence cost function CIe(p) at a small value of r (darker means less cost). Medial nodes of thin vessels are prominent and medial nodes of large vessels exhibit higher cost. (d) Image evidence cost function CIe(p) at a larger r. Medial axes of larger vessels become prominent, and medials of smaller vessels exhibit higher cost and are dispersed.

Figure 23. Difference in effects of radial and spatial smoothing. Unsmoothed elements are in gray (solid line = medial, dotted line = boundaries), new medials are in red, and new boundaries are in blue. (a) While vessel width was constant, favouring shorter medial axes results in a smoother medial and vessel boundaries. (b) While the medial was already smooth, minimizing change in radius results in smoother boundary contours.

Figure 24. Synthetic data segmentation. (a) Synthetic image of a retina. (b) Segmentation result using Live-Vessel.

Figure 25.
Segmentation of 1-pixel wide vessels using Live-Vessel. (a) Original image. (b) Zoomed (red square in (a)) area containing 1-pixel wide vessels. (c) Computed optimal medial path for one of the vessels in (b). (d) Further zoomed view of a vessel (green square in (c)). The white line denotes the optimal medial axis; black lines indicate the optimal vessel boundaries.

Figure 26. Sample results using our proposed method. For each column: (a) Original retinal image. (b) Optimal medial axis computed by Live-Vessel (white curve) overlaid on the original image. (c) Live-Vessel segmentation mask. (d) Close-up of segmentation mask.

Figure 27. Manual segmentation differences between experts. (a) Observer 1. (b) Observer 2. (c) Difference between (a) and (b). (d)-(e) Close-ups of (a) and (b) respectively. (f) Difference between (d) and (e).

Figure 28. Segmentation of a synthetic image under increasing AWGN noise. Red squares in the first column denote the field of view zoomed in the second column. (a) PSNR = 27.73 dB. (b) PSNR = 10.22 dB. (c) PSNR = 0 dB. (d) PSNR = -8.11 dB. In the right column of (a) and (b), segmentation masks are shown. In the right column of (c) and (d), black contours denote detected boundary points. White contours represent optimal medial axes from a seedpoint (green cross) to the next point (solid red square).

Acknowledgements

I would like to thank my supervisors, Dr. Rafeef Abugharbieh and Dr. Ghassan Hamarneh, for their guidance. I would also like to acknowledge Dr. Martin McKeown for providing T1 MRI data and the U.S. National Library of Medicine for the publicly available Visible Human Project CT data.
Moreover, I would like to acknowledge M. Niemeijer for the DRIVE retinal image database. Lastly, I would like to thank the lab members of BiSICL for their expertise and support.

MIRANDA POON
The University of British Columbia
April 2008

Chapter 1
Introduction and Background

Today, medical imaging is ubiquitous and essential for medical professionals in patient diagnosis, treatment planning, and computer-aided surgery. Since most imaging techniques are non-invasive, the study of anatomical structure and disease progression in vivo is possible. Some examples of three-dimensional (3D) medical imaging are computed tomography (CT) to detect calcified tissue, magnetic resonance imaging (MRI) for imaging soft tissue, and magnetic resonance angiography (MRA) for capturing vasculature. Two-dimensional (2D) medical imaging includes X-ray (on which CT is based) and retinography, which captures the blood vessels in the eye.

1.1 Problem Statement and Motivation

In almost all medical image applications, segmentation is a vital precursor step. Image segmentation in the broadest sense is the division and classification of an image into areas or regions of interest. However, unlike mainstream multimedia graphics, medical imaging has relatively poor image quality due to low resolution and high noise, which poses problems in designing effective segmentation algorithms. Also, since medical imaging is often used to study disease across many patients, biological diversity and pathology hinder algorithm robustness and accuracy. Furthermore, segmentation of 3D objects over large datasets raises design challenges for algorithm extensibility and computational speed. These problems have fuelled interest in finding alternatives to the predominant segmentation choices today: the slow but accurate manual tracing approach and the usually faster but inflexible fully-automated approaches.

The impact of developing successful segmentation algorithms is extremely large; thus much research in medical image processing is focused in this area.
An obvious motivation is that a region of interest (ROI) can be visualized more clearly after segmentation than against the image background. Also, in 3D, embedded objects can only be visualized as a whole after segmentation, rather than viewed slice-by-slice. Segmentation is also necessary for volume and shape analysis in establishing statistical norms and in disease research, for instance, in Parkinson's disease [1] and Alzheimer's disease [2] research, both of which are done on 3D data. Vessel segmentation from medical imaging is also important in studies of brain tumour vasculature [3], intracranial aneurysms [4], and diagnoses and research of diabetes and hypertension [5][6]. Moreover, both rigid body registration and deformable registration techniques benefit from segmentation, which localizes the region of interest to reduce the high computational complexity as well as to increase the accuracy of registration [7].

1.2 Overview of Image Segmentation Approaches

Medical image segmentation techniques are extremely varied. This section focuses on the benefits and drawbacks of manual, automatic, and semi-automatic techniques.

1.2.1 Manual Segmentation

Manual segmentation is the technique that requires the user to label each image unit (pixel or voxel) as foreground or background. This is typically done using a mouse or a graphics tablet. The actual delineation process varies from manually 'painting' each pixel in the object of interest to tracing the segmentation contour around the object by an 'expert', a user familiar with the object being segmented. Since manual segmentation is often considered accurate, it can be used to 'train' the parameters of automatic segmentation techniques (Section 1.2.2). However, while manual segmentation results are often used as the 'gold standard' for benchmarking other techniques' accuracy, they still suffer from inter- and intra-subject variability [8] and user fatigue [9].
Also, for large volumes and large patient datasets, manual segmentation is highly tedious and time-consuming because careful delineation is required for each 2D slice within a volume. Moreover, manually extracted contours viewed from an orthogonal direction typically appear jagged because boundary smoothness is not enforced between slices. Lastly, oftentimes a user may not be able to see structural features due to low image quality or poor computer displays.

1.2.2 Fully-automated Segmentation Techniques

Fully-automated segmentation techniques, on the other side of the segmentation methods spectrum, offer faster segmentation task times and eliminate inter-operator variability. However, they work best when their parameters are carefully tuned for specific image properties and anatomy, which remains a challenge, and they still require user validation and refinement. An additional complicating factor for these approaches is that anatomical structures are typically affected by significant variations due to subject diversity and pathology, which reduces segmentation accuracy, robustness, and consistency between volumes. Under these conditions, an additional design difficulty is locating and identifying the object(s) of interest without user guidance. Additional literature reviews pertaining to automatic 3D segmentation and vessel segmentation are presented in Section 2.1 and Section 3.1 respectively.

One of the most basic forms of segmentation is global or adaptive thresholding, which assigns a binary value to each pixel of the image or sub-image, depending on whether the pixel's intensity is above or below a threshold value [10]. While simple and effective on certain imaging modalities (e.g. segmenting bone from CT), this method does not enforce tissue connectivity and performs poorly in noisy environments.

Another segmentation group, called region growing, uses image intensity information and enforces connectivity of segmented pixels. This method is seeded with a
This method is seeded as asingle point, and a region propagatesfrom this point until stopping conditions are met15based on image intensity change or statistical propertiesof the image [11]. A drawback ofthis property is its tendency to ‘leak’ outside the region of interest since itis not as robustto noise as more sophisticated techniques such as deformablemodels (explained later).Similar to region growing are watershed techniques which viewa 2D image in 3D, withthe third dimension being image intensity [101. Regionswith low pixel intensity are‘flooded’ and as basins begin to meet at higher intensitylevels, ‘dams’ are constructed,which translates into boundary paths between regions.This method suffers from thesame leakage problem as region growing.The deformable models family of segmentation techniquesis much more robustthan global image operators such as in the above. The goalof these techniques is toevolve a closed contour to minimize its energy function, as definedby external forcesderived from image properties such as image intensity andinternal forces such as contoursmoothness [121. The advantages to deformable modelsare that for any particularcontour, pixel connectivity is ensured, and anobject’s boundaries can be defined even ifthere is not enough gradient information in the region.However, they tend to settle intolocal minima [12], are generally quite computational, areunpredictable during contourevolution, and may not produce accurate resultswithout user initialization or postcorrection. The two main classes in this familyare explicit models which evolveexplicitly defined contours [12][13][14] andimplicit curves[15][16][17]. 
The main difference is that explicit models are based on parametric formulations during deformation and are less computationally demanding, while implicit models represent contours as the level set of a higher-dimensional scalar function and can handle topological changes to the object.

1.2.3 Semi-automated Interactive Techniques

Due to the above-mentioned difficulties with both manual and fully automated segmentation techniques, semi-automated methods have drawn wide interest as a way to facilitate computer-based segmentation of 3D anatomical structures using minimal human interaction. Interaction in image segmentation typically involves parameter selection, menu operation, or graphical input. Interactive segmentation types, like segmentation techniques in general, are extremely varied; for in-depth surveys of existing techniques, we refer readers to [9][18]. Also, additional literature reviews specifically pertaining to 3D interactive segmentation and semi-automated vessel segmentation are presented in Section 2.1 and Section 3.1 respectively.

In parameter-tweaking schemes, the user is given rapid feedback on the segmentation result, which is derived from a largely 'black box' algorithm in the tool's backend. Example parameter values include weightings for a deformable model [19], threshold parameters [20], and the number of processing iterations [20]. This interaction scheme is simple to implement, but oftentimes the user has to have a passing knowledge of the computational algorithm, and providing intuitive interaction is a design challenge [9]. However, if real-time feedback is achieved, algorithm understanding becomes less of an issue, as the user can learn through experience.

Menu-based techniques place the largest emphasis on the computational algorithm. The user is guided through the segmentation process by providing descriptive instructions regarding the object's properties using buttons and forms, but otherwise does not have direct control over the segmentation contour or parameterization.
Numerous methods use this scheme to allow users to accept or reject automatically computed segmentations [21], or adopt this as part of the overall interaction task [22]. Other methods use menus in a flowchart manner to let users narrow down the ROI's topological properties [23] or to let users choose templates [24]. While results tend to be more reproducible and efficient, accuracy is not guaranteed, and if the menu system is not able to describe the object exactly, program robustness suffers.

Lastly, graphical input techniques require the user to specify seedpoints or manipulate artificial objects on top of the image itself, which bootstraps the computation algorithm. With direct user control over segmentation regions, robustness is maximized. This interaction scheme tends to be the most intuitive, but its design challenges are to require only minimal interaction and to minimize user-input variability. There are three main types of interactivity. The first is initiation of a deformable model [12][25] and graphically specifying regions for the model to favour or avoid in real time [13][14]. The second type is specification of background and foreground seeds for computation of a quick, but automatically determined, segmentation [22][23]. Lastly, the third type is specification of seedpoints along the contour itself, whether it is the object edge [26] or a tubular structure's medial path [27]. The focus of this thesis is this third type, and we explore the graph-theoretic optimal path finding involved in the next section.

1.3 Graph-Based Segmentation

Images are raster images, meaning that they are represented by pixels and voxels, each carrying an intensity value (greyscale or RGB). While pixels and voxels are not necessarily squares and cubes, they are tiled as an array and can be represented as a graph or matrix. A graph is a collection of nodes and the connections that each node has with other nodes.
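To make the graph view concrete, the following is an illustrative Python sketch (not code from this thesis) of how a pixel's neighbours can be enumerated when an image is treated as a 4- or 8-connected grid graph:

```python
def grid_neighbors(shape, p, connectivity=8):
    """Return the neighbouring pixel coordinates of p = (row, col) on an
    image of the given (rows, cols) shape, using 4- or 8-connectivity."""
    rows, cols = shape
    r, c = p
    if connectivity == 4:
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:
        offsets = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)]
    # Keep only neighbours that fall inside the image bounds.
    return [(r + dr, c + dc) for dr, dc in offsets
            if 0 <= r + dr < rows and 0 <= c + dc < cols]
```

Under 8-connectivity, an interior pixel of a 5x5 image has 8 neighbours, while a corner pixel has only 3; these neighbour lists define the edges over which path costs are accumulated.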
Since a segmentation contour is essentially a path along the graph that delineates a sub-graph (the object or region of interest), graph theory and graph-based approaches are useful and intuitive. While there are numerous graph-based segmentation approaches, such as region growing [11] or Graph-Cuts based methods [22][28][29], we focus on optimal path finding approaches.

1.3.1 Dynamic Programming

Dynamic programming [30] is a method of solving problems which have an incremental, optimal substructure on a graph, and applies to many fields including mathematics, computer science, and economics. Using dynamic programming, the globally optimal path on a graph between arbitrary nodes p and q is found by recursively breaking the problem into sub-problems and finding the optimal path for these simpler cases. In the first step, the lowest cost path to q from all adjacent nodes is determined. While the local path costs from these nodes to q are known, the algorithm is recursively applied to each node that is connected to q to find the globally optimal path and associated cumulative cost. The result of this recursion is that not only is the globally optimal path from p to q computed, but the globally optimal paths from p to all nodes visited during the recursion are found as well.

An example of solving a sub-problem is illustrated in Figure 1, where the globally optimal (lowest cost) paths from the start node p to nodes 1, 2, and 3 were found in earlier iterations to have costs 8, 5, and 6 respectively.
Since the path segment from node 1 to q exhibits the lowest total cost (8+1), this segment becomes part of the globally optimal path.

Figure 1. Although the optimal path from the start node p to Node 1 has the highest cost compared to the paths to Nodes 2 and 3, Node 1 has the lowest cost to the end node q; therefore, the optimal path from p to q passes through Node 1 instead.

1.3.2 Dijkstra's Algorithm

Dijkstra's algorithm [31] is a graph search algorithm that uses the concepts of dynamic programming to find the globally optimal (lowest cumulative path cost) path between a node and every other node on the graph. It is assumed that each node has an associated local cost and/or a path cost between it and each of its neighbours. However, its main difference from traditional dynamic programming is that it uses iteration rather than recursion. In summary, the algorithm is outlined as follows and illustrated in Figure 2:

1. Create a cumulative cost list, a visited list, and a node queue. These are initialized to infinity, 0 (unvisited), and empty respectively. The starting node is given a cumulative cost of 0, is set to 'visited', and is placed on the node queue.

2. The node in the queue with the smallest cumulative cost is set to 'visited' and
processed. For each of this node's unvisited neighbours, the cumulative cost is calculated based on this new path. If this cost is smaller than the previously calculated cost, it is updated in the list. If the neighbour node is not on the node queue, it is inserted into the queue.

3. The node queue is re-sorted according to each node's cumulative cost.

4. Steps 2 and 3 are repeated until all nodes are processed.

Figure 2. In the graph search of Dijkstra's algorithm, the cumulative costs of the neighbouring nodes (dotted squares) of the 'unvisited' node with the lowest cumulative cost (bold square) are updated. Costs of 'visited' nodes (solid black squares) are no longer updated.

Using the above graph search algorithm, the final cumulative cost list is a data structure that stores both the globally optimal cumulative cost and the globally optimal path from a node to all nodes on the graph. The exact path from a starting node to a destination node is determined backwards. Starting at the destination node, the 'next' node along the path is defined as the neighbour with the lowest cumulative cost. This process is repeated until the starting node is reached. Figure 3 illustrates this process.

Figure 3. In iteratively determining the optimal path from a node with a cumulative cost of 32, the next node on the path is always the neighbouring node with the lowest cumulative cost until the seedpoint (cumulative cost of 0) is reached.

A similar method is the Fast Marching method [32]. This is based on Dijkstra's efficient one-pass algorithm (each node is visited at most once), and is used to solve problems in continuous space because of its ability to interpolate between nodes. However, for discrete path-minimizing problems in a discrete space, Dijkstra's algorithm provides a more direct solution than Fast Marching.

1.4 The Livewire Formulation

Since segmentation contours are assumed to be continuous along the outside of the region of interest, graph-based techniques such as Dijkstra's algorithm can be easily applied. The Livewire framework [26] incorporates Dijkstra's algorithm and interactivity for real-time user-guided segmentation. Here, path optimization is based on edge detection measures, and real-time contour feedback is provided by taking advantage of the computationally inexpensive path determination step of Dijkstra's algorithm.

1.4.1 Contour Optimization Process

Ideal segmentation contours lie along the boundaries of objects, which typically exhibit high gradient. Therefore, edge detection is used to assign low local cost to edge pixels.
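This cost assignment can be illustrated with a short sketch (assuming NumPy and SciPy; the weights, the Sobel operator, and the use of the absolute LoG response are illustrative choices, not the thesis' implementation), in which strong edges receive low static cost:

```python
import numpy as np
from scipy import ndimage

def local_cost_map(image, sigma=1.0, w_log=0.4, w_grad=0.6):
    """Per-pixel static cost in [0, 1]: low on edges, high in flat regions.
    Combines a Laplacian-of-Gaussian term with a gradient magnitude term."""
    img = np.asarray(image, dtype=float)
    # LoG response is strong near edges; invert so edges are cheap.
    log_resp = np.abs(ndimage.gaussian_laplace(img, sigma=sigma))
    c_log = 1.0 - log_resp / (log_resp.max() + 1e-12)
    # Gradient magnitude term: C_G = 1 - |grad S| / max|grad S|.
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    grad = np.hypot(gx, gy)
    c_grad = 1.0 - grad / (grad.max() + 1e-12)
    return w_log * c_log + w_grad * c_grad

# A vertical step edge: pixels on the edge should cost less than flat areas.
step = np.zeros((16, 16))
step[:, 8:] = 1.0
cost = local_cost_map(step)
```

Running a minimum-path search over such a map then makes contours prefer to run along object boundaries.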
Initially, only Laplacian of Gaussian (LoG) edge detection [10] was used, but later implementations used multiple measures, including Canny edge detection [33] and image gradient, for higher algorithm robustness. Given that q = (x, y) is a pixel on the image S and p = (x', y') is a neighbouring pixel of q, the Laplacian of Gaussian cost C_LoG(q) is defined as

$C_{LoG}(q) = 1 - \left( LoG_{kernel}(x, y) * S \right)\big|_{(x,y)=q}$

where S is convolved with the LoG kernel

$LoG_{kernel}(x, y) = \frac{x^2 + y^2 - 2\sigma^2}{\sigma^4} \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right).$

The gradient magnitude cost C_G(q) is defined as

$C_G(q) = 1 - \frac{1}{\max(G)} \sqrt{\left(\frac{\partial S(x,y)}{\partial x}\right)^2 + \left(\frac{\partial S(x,y)}{\partial y}\right)^2}\,\Bigg|_{(x,y)=q}$

where max(G) denotes the largest gradient magnitude found in the image. Since object edges are smooth, the gradient direction cost C_GD(p, q) favours pixels that show small differences in the direction of the gradient. This is defined as

$C_{GD}(p, q) = \frac{1}{\pi} \arccos\!\left[\frac{\nabla S(p) \cdot \nabla S(q)}{G(p)\, G(q)}\right]$

where G(p) and G(q) denote the gradient magnitudes (not gradient costs) of the image at pixels p and q respectively. Each of these filters has unique strengths [10], and their aggregation results in a more robust algorithm. For instance, the LoG cost is less sensitive to image noise due to its convolution with a Gaussian kernel, while the gradient magnitude cost is more sensitive in detecting weak structural edges. Lastly, a scalar cost C_d(p, q) proportional to the Euclidean distance $\sqrt{(x - x')^2 + (y - y')^2}$ is added between each node and its neighbours in order for the optimization to favour physically shorter and therefore smoother contours.

1.4.2 Interactive Seeding

Figure 4. Livewire interactive seeding. (a) Original image. (b) With a set seedpoint (circle), proposed contours from the seedpoint to the user's cursor are displayed in real time. (c) A proposed contour is selected (green), and the process in (b) repeats.

In Livewire, the user specifies sparse seedpoints along an object boundary and the object is segmented in this piecewise fashion.
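The graph search behind this piecewise interaction is the one described in Section 1.3.2. The following is a minimal illustrative Python sketch (using a heap-based priority queue instead of the explicitly re-sorted list of the algorithm outline, and storing predecessors directly rather than re-inspecting neighbour costs during backtracking). The example graph reproduces the situation of Figure 1; the costs on the edges from Nodes 2 and 3 to q are assumed for illustration:

```python
import heapq

def dijkstra(nodes, neighbors, cost, start):
    """Globally optimal cumulative costs from `start` to every node.
    `neighbors(p)` lists adjacent nodes; `cost(p, q)` is the local cost."""
    cum = {n: float('inf') for n in nodes}
    cum[start] = 0.0
    prev = {}
    queue = [(0.0, start)]
    visited = set()
    while queue:
        c, p = heapq.heappop(queue)   # unvisited node with lowest cost
        if p in visited:
            continue
        visited.add(p)                # a visited node's cost is final
        for q in neighbors(p):
            if c + cost(p, q) < cum[q]:
                cum[q] = c + cost(p, q)
                prev[q] = p
                heapq.heappush(queue, (cum[q], q))
    return cum, prev

def backtrack(prev, start, dest):
    """Recover the optimal path by walking predecessors from dest to start."""
    path = [dest]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# The situation of Figure 1, with assumed costs on the edges into q:
edges = {('p', '1'): 8, ('p', '2'): 5, ('p', '3'): 6,
         ('1', 'q'): 1, ('2', 'q'): 5, ('3', 'q'): 4}
adj = {}
for (a, b) in edges:
    adj.setdefault(a, []).append(b)
    adj.setdefault(b, []).append(a)
cost_fn = lambda a, b: edges.get((a, b), edges.get((b, a)))
cum, prev = dijkstra(list(adj), lambda n: adj[n], cost_fn, 'p')
```

With these assumed costs, the cheapest route to q runs through Node 1 (total cost 8 + 1 = 9), matching the discussion of Figure 1.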
When the user specifies a seedpoint with the mouse, the 2D version of Dijkstra's algorithm using the above optimization costs is applied, resulting in a 'cost map' that contains the minimal cost from the starting node (pixel) to all pixels on the image. Since the optimal paths from the starting pixel to all other pixels can be quickly computed, the optimal contour from the starting node to the mouse cursor location is displayed as an overlay on the image. This contour is continuously updated as the cursor moves and appears to 'snap' against the object's boundaries and other high-gradient regions. With this real-time feedback, users retain full control over how the contour looks without sacrificing efficiency by manually tracing the contour. Figure 4 illustrates this process.

If the temporary contour looks satisfactory, the contour segment is locked by selecting the current pixel, and this process is repeated until the entire object is segmented. For clearly visible objects, only sparse seedpoints are required, thus gaining efficiency. However, for images of poorer quality, more closely spaced seedpoints are needed in the corrupted image areas in order to retain algorithm accuracy.

To improve the efficiency of Livewire's graph search step, numerous modifications such as LiveLane [34] and Livewire on-the-fly [35][36], which limit the algorithm's graph search space, were proposed. While improvements in technology now allow Livewire to operate in real time without these modifications, automated Livewire methods that emulate user input over multiple image slices in a 3D volume still benefit from an abbreviated graph search implementation [37]. Figure 5 depicts how a 2D Livewire contour is represented in a 3D volume.

Figure 5. 2D Livewire contour on a caudate nucleus mask. (a) 2D contour (gray) segmenting a slice of the caudate mask.
(b) This same contour (black) is shown in the context of the entire object.

Chapter 2
3D Livewire for Arbitrarily Complex Topologies

This chapter first presents a review of existing 3D interactive segmentation techniques (Section 2.1). This is followed by the details of the extension from 2D Livewire to 3D (Section 2.2). The technique's performance is then validated on synthetic data, real medical volumes, and in robustness tests (Section 2.3).

2.1 Interactive 3D Segmentation

The incorporation of interaction in segmentation has been proposed for a large variety of methodology families. One such family that supports, or can be extended to, 3D or user interaction includes parametric, explicit [13][12][38][39][40][41][14] or implicit (e.g. level-set based) [15][16][17] energy-minimizing deformable models. However, these models are prone to convergence to local minima. Active contours that converge to a global minimum have been developed [42][43]; however, these rely on a coarser discretization of the search space, succeeded by graph-theoretic optimization procedures that are less amenable to user interaction. User interaction in level-set approaches [44] is not straightforward because the contours between the constraint points can be unpredictable, and many attempts may be needed to find an acceptable contour. Further, level-set approaches typically require more computation than other deformable models, since contours in 2D images (1D manifolds) and surfaces in 3D images (2D manifolds) are represented using a higher-dimensional embedding (signed distance transform images), thus increasing the complexity of the problem. Also, employing graphics processing units (GPUs) to perform level-set calculations [45] may be needed to achieve interactivity [46], especially for 3D images. Other recent active contour methods that incorporate user intervention include [47][25].
In Yushkevich et al.'s approach, implemented in the tool ITK-SNAP [47], user interaction is used for initializing an evolving 3D active contour, for setting up parameters, and for manual post-processing. However, while the initialization is largely graphical, user knowledge of this method's computational machinery is required for selecting cost parameters, and imprecise initialization parameters can cause the contour to deviate. Furthermore, the user has no steering control during the curve evolution. In McInerney et al.'s 'Sketch Initialized Snakes' [25], the user uses a graphical tablet to quickly initialize a 2D Snake, which is then automatically optimized. However, this approach requires specialized input hardware, and its computational efficiency, reproducibility, and tolerance to user error were not reported.

In addition, while interaction with 2D deformable models is straightforward, incorporating intuitive user guidance and real-time visualization for 3D deformable surfaces or meshes remains a challenging human-computer interface problem. One approach to circumvent this problem is to iterate an interactive 2D algorithm on each slice of the 3D volume [48][49]. However, it is highly inefficient to constantly repeat this task, and without correlating computations between adjacent slices, segmentation results are often jagged (similar to manually delineated ones) when viewed from other orientations. There also exist many automatic surface reconstruction methods [50][51] that require only sparse user-guided contours or points, dramatically reducing interaction time; some can even generate objects with arbitrary topologies [52][53][54]. However, their main drawback is that they do not consider image-based information; rather, they draw point connectivity and object topology conclusions from the locations of the surface points alone. Also, their computation time is considerable and does not allow for user intervention during the task should mistakes occur.
Similarly, Saboo et al.'s 'GeoInterp' method uses sparsely interleaved manual segmentations in one orientation to initialize a geodesic Snake [55]. However, the user input is not actually used as a hard constraint, and the Snake optimizes its shape without considering voxel intensities.

An alternative to deformable models is the 'Graph Cuts' approach, originally proposed by Boykov and Jolly [56] and further developed in [22]. Here, 'cuts', or globally optimal segmentations, are computed using manually specified foreground and background seeds (hard constraints) and boundary/region information (soft constraints). Refinement of the 'cut' can be made using additional user-placed seedpoints. This method offers interaction simplicity, especially in the 2D case; however, its main drawback is that because the seedpoints lie in the region bodies and not on the boundaries, results can be unpredictable along weak edges. Graph Cuts results will also vary depending on the soft constraint weighting and choice of seeds. Similarly, Rother et al. proposed the 'GrabCut' method, an iterative Graph Cut algorithm that decreases the amount of user interaction required [28]. Using the same interaction scheme, Grady proposed the Random Walker method, in which all pixels in an image are assigned to one of the hard-constraint seedpoints based on a probabilistic measure [29]. However, this method still shares the limitations of the original Graph Cuts. Also, 3D visualization during the segmentation task remains a challenge with this interaction scheme because user input is done in 2D but the segmentation task is not broken into 2D modules.

In extending Livewire to 3D, several methods requiring only sparse 2D contours were proposed, but they only consider image slices in one orientation. Souza et al. proposed a hybrid approach between Snakes and Livewire that projects seedpoints from a previous adjacent slice onto the current slice and then refines their locations [57]. Similarly, Schenk et al.
proposed an approach that takes sparsely spaced Livewire contours and interpolates and optimizes the contours in between using minimal cost paths [58]. Also, Malmberg et al. [59] proposed a method to bridge sparsely separated Livewire contours using haptic feedback and the image foresting transform [60]. However, special equipment such as a haptic device and a stereo-capable monitor is required. Moreover, image smoothness in orthogonal directions is not ensured, and medical images often contain objects with complex 3D shapes (e.g. deep concavities, protrusions, non-spherical topologies, branching), which none of these parallel-slice approaches can effectively handle.

An approach was put forth by Falcao et al. to extend 2D Livewire to 3D by utilizing 2D contours on oblique slices to automatically mimic 2D Livewire on all slices in an orthogonal direction [61]. However, considerable user supervision and knowledge of the object's exact topological features are required to break a complex object down into 'slabs': groups of consecutive slices, along the axis of automatic computation, within which the sub-object exhibits constant topology. The restrictions on these initial setup steps and on the selection of slices for 2D Livewire are critical to segmenting each slab properly. Moreover, the intersection of these 2D contours with each slice in each slab generates seedpoints that need to be manually ordered in a clockwise or counter-clockwise fashion before they can be fed into the automated Livewire process. More recent 3D Livewire methods mitigate some of the above setup steps by using orthogonal 2D Livewire contours instead of oblique contours [62][63]. In Lu et al.'s 3D Livewire approach [62], seedpoint ordering is more automated than in [61], but it requires the projection of a manually supplied reference contour onto adjacent slices.
In Hamarneh et al.'s approach [63], seedpoint ordering is automatically computed using an algorithm based on turtle graphics [64] (part of the Logo programming language), without additional image-based or user-supplied information. However, while both of these methods [62][63] avoid the complicated interaction steps of [61], they fail to address the problem of segmenting objects of arbitrary topology. Though these semi-automated methods presented reasonable solutions for certain segmentation tasks, their limitations highlight the need for a robust 3D segmentation approach that can natively handle complex shapes of arbitrary topology, while still offering the advantages of user control, efficiency, accuracy, intuitive operation, and minimal user supervision.

2.2 Extending 2D Livewire to 3D

The user begins the segmentation process by performing sparsely separated 2D Livewire (Section 1.4) segmentations on slices in any two orthogonal orientations. These 2D contours are then used to determine the Livewire seedpoints for the third orthogonal orientation, by intersecting the contours with the unseen orthogonal slices (Section 2.2.1). These intersection points are pre-processed to increase robustness (Section 2.2.3) and are then used to create a 'turtle map', which consists of orthogonal line segments (Section 2.2.2). Our 'turtle' point-ordering algorithm is then applied to this map so that the resulting ordered points mimic the sequence of points a user would select during a Livewire segmentation, but now in a fully automated manner on the unseen slices. Since these new seedpoints are a subset of the contours previously approved by the user, they are a suitable choice of seedpoints for guiding the Livewire segmentation. In this scheme, user-generated contours that are circumscribed inside another contour are automatically flagged, and such flags are used to split and merge sections of the turtle map (Section 2.2.5).
By doing so, multiple closed contours and objects with non-spherical topologies, such as a vertebra (which has a toroidal topology due to the spinal canal), can be processed correctly. Figure 6 and Algorithm 1 summarize this approach.

Algorithm 1. Determining the segmentation for an unvisited slice in xz, given M input contours {C_yz} in yz and N input contours {C_xy} in xy. Note that it is a trivial change to process unseen slices in xy and yz instead.
Require: {C_yz} and {C_xy} on slices S_{xm,y,z} and S_{x,y,zn}, where xm and zn represent arbitrary slice indices in x and z respectively.
Ensure: Closed contours {C_xz} on slice S_{x,y0,z}.
  TurtleMap = 0. TurtleMap is the 2D area on which turtle tracks are placed.
  TurtleMap += Algorithm 2({C_yz}, S_{x,y0,z}), building the horizontal tracks of TurtleMap.
  TurtleMap += Algorithm 2({C_xy}, S_{x,y0,z}), building the vertical tracks of TurtleMap.
  P_a(x, z) = TurtleMap(x, z), ordering seedpoints using the turtle algorithm (Section 2.2.2).
  A = number of separate contours on S_{x,y0,z}, where A = max(a).
  for each P_a, a in (1..A) do
    C_a = Livewire(P_a), using the abbreviated graph search (Section 2.2.4).
  end for
  {C_xz} = {C_a}; all contours with y = y0 are found.

Figure 6. Overall steps of this algorithm, shown on a binary image for clarity. (a)-(b) Seedpoints (grey squares) are selected in user-guided Livewire contours on orthogonal slices. User-approved segments are in green, and red contours represent the proposed, 'live' segment during the Livewire task (crosshair denotes cursor location). (c) 3D plot of 11 user-guided contours. (d) Ordering automatically generated seedpoints (gray squares) on a slice in the third orthogonal orientation using our turtle-based algorithm (Section 2.2.2) results in the contour in (e). (f)-(g) 3D plots of automatically generated contours at mid-task and at task completion (125 contours), respectively. (h) Surface rendering of the segmentation result.

2.2.1 Automated Seeding

To perform an automatic Livewire segmentation, seedpoints need to be determined.
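Concretely, these seedpoints are obtained by intersecting the stored user-guided 2D contours with the unseen orthogonal slice. As a minimal sketch (a hypothetical Python helper of our own; the thesis implementation is in MATLAB), with a contour stored as a set of integer voxel coordinates, the intersection reduces to filtering the contour's voxels on one coordinate:

```python
def intersect_with_slice(contour_voxels, axis, index):
    """Seedpoints contributed by one 2D Livewire contour to an unseen
    orthogonal slice: e.g. axis=1, index=y0 selects the slice S(x, y0, z).
    `contour_voxels` is a set of (x, y, z) tuples along the contour."""
    return {v for v in contour_voxels if v[axis] == index}
```

For a square contour lying in the yz plane at x = 2, intersecting with the slice y = 3 returns the contour voxels whose y coordinate is 3; note that an entire side of the square may intersect the slice as a contiguous run, which is exactly the situation handled by the pruning step of Section 2.2.3.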
These intersection points are simply the intersections between the 2D Livewire contours and the unvisited orthogonal slice on which this automatic step takes place. For example, if 2D Livewire is used to create two contours C_{x0,yz} and C_{xy,z0} on arbitrary yz and xy slices respectively, then given a slice in the xz orientation at index y0, the intersection points I and J between these contours and the slice S_{x,y0,z} can be calculated as

I_{x,y0,z} := C_{x0,yz} ∩ S_{x,y0,z},  J_{x,y0,z} := C_{xy,z0} ∩ S_{x,y0,z}.

Similarly, the following equations define the intersection points I and J if different orientation combinations are chosen:

I_{x,y,z0} := C_{x0,yz} ∩ S_{x,y,z0},  J_{x,y,z0} := C_{x,y0,z} ∩ S_{x,y,z0}
I_{x0,y,z} := C_{x,y0,z} ∩ S_{x0,y,z},  J_{x0,y,z} := C_{xy,z0} ∩ S_{x0,y,z}

Figure 7 illustrates this determination mechanism.

Figure 7. Orthogonal contours in (a) (red and blue) intersect with a slice in the third orientation (green) in 10 different locations, as shown in (b). These intersections on the green slice in (c) become seedpoints on a turtle map (Section 2.2.2).

2.2.2 Automatic Seedpoint Ordering

In order to mimic a user-guided 2D Livewire segmentation task in an automatic fashion, the seedpoints I and J need to be ordered either clockwise or counter-clockwise in the 2D space of slice S. To accomplish this, seedpoints on the 2D slice that belong to the same user-guided Livewire contour are paired and connected by lines (tracks). Since there are seedpoint contributions from two orthogonal orientations, these tracks will themselves be orthogonal on slice S. Algorithm 1 summarizes this step, and Figure 8 illustrates the final result, called a 'turtle map'.

Figure 8. Example seedpoint map showing outer contour seedpoints (red), inner contour seedpoints (green with 'i' suffix), and a disjoint object's seedpoints (blue). For each contour, the turtle object starts at the first point and follows the tracks (in grey) according to its rules in Section 2.2.2, visiting other seedpoints in the order shown.
Track values of '2' denote track intersections.

To traverse the turtle map and determine the correct ordering of seedpoints, an algorithm based on turtle graphics [64] is employed. Turtle graphics comes from the Logo programming language, and its main idea is a directional turtle object that can only move forward and change direction on a graph-based system. Here, the turtle object begins at an arbitrary seedpoint and moves forward along the orthogonal tracks, turning to its left when it encounters a track intersection and reversing direction when it encounters another seedpoint. The sequence in which the seedpoints are visited determines their order (Figure 8). This process is repeated if there are multiple closed contours on the same unseen slice, and the method keeps track of which seedpoints have been visited so that they are not encountered again.

While turtle map tracks usually intersect in a '+'-like shape (Figure 9a), turtle maps can often exhibit 'T' junctions (Figure 9b) and 'L' junctions (Figure 9c), as determined by how the user-guided contours intersect with the unseen slice. Our proposed turtle algorithm detects these situations and resolves them correctly by altering the turtle's movement rules and ensuring these seedpoints are not duplicated in the resulting seedpoint list.

Figure 9. Possible turtle map intersections resulting from seedpoint locations, denoted by 1, include (a) the basic '+' junction, (b) the 'T' junction where a seedpoint overlaps a track, and (c) the 'L' junction where two seedpoints overlap. Pixels with values '1' and '2' represent non-seedpoint tracks and track intersections respectively.

2.2.3 Intersection Point Pruning

Ideally, a 2D contour would intersect with an unvisited orthogonal slice at an even number of locations, described by Hamarneh et al. as 'entering and exiting' the object [63].
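The pruning described below restores this even count when a contour meets the slice in contiguous runs of pixels. A minimal sketch, assuming the colinear intersection coordinates are given as integers (our own Python illustration; the thesis implementation is in MATLAB):

```python
def prune_clusters(coords):
    """Keep only the two endpoints of each run of consecutive coordinates,
    discarding the points in between. A single run (a cusp) likewise keeps
    just its start and end, yielding an even point count."""
    coords = sorted(coords)
    kept, start = [], coords[0]
    for prev, cur in zip(coords, coords[1:]):
        if cur != prev + 1:        # gap found: the previous run ends at `prev`
            kept += [start, prev]
            start = cur
    kept += [start, coords[-1]]    # close the final run
    return kept
```

For two runs {3,4,5} and {9,10} this keeps {3,5,9,10} (four points, as in Figure 10b), and for a single run it keeps only its endpoints.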
However, objects with cusps can cause singular intersection points to exist as well. Also, while a user-guided contour will always be orthogonal to the slice in question, contiguous colinear contour pixels may intersect with this slice [62]. In the extreme example of a cube, a user-guided contour (a square) may be orthogonal to an unseen slice, but their intersection may comprise an entire side of the square (Figure 10a). To combat this, our proposed method assumes intersection points appear in cluster(s) of consecutive pixel locations, such as in Figure 10. Since the intersection points found between each user-guided Livewire contour and the orthogonal unvisited slice always form a horizontal or vertical line on the slice, these colinear points are first sorted in ascending pixel-location order. Next, by traversing these points, cluster boundaries are easily found by detecting non-consecutive pixel-location values. Knowing where clusters start and end allows us to prune the unnecessary points in between. An exception to the rule is when only one cluster is found, which corresponds to a cusp (singular point); in this case, the start and end of the cluster are kept and the middle points are discarded. With the extraneous contiguous points removed, the desired case of an even number of intersection points is achieved. This allows each automatically processed slice to be independent of all other parallel slices, requiring no reference frame [62]. Therefore, shape and topology changes (e.g. branches, cusps, saddle points) not observed in adjacent slices can now be seamlessly detected without further user supervision.

Figure 10. Contiguous intersection points. (a) User-guided Livewire contours (black voxels) will intersect with the cube's end slice (gray) in multiple contiguous locations. (b) A contour intersecting with an orthogonal image slice (gray) creates two clusters of contiguous points (black).
White arrows denote which pixels are kept after the pruning algorithm (Section 2.2.3).

2.2.4 Efficient Graph Search for Pre-determined Seedpoints

Since the seedpoints determined from orthogonal contours are pre-determined, no user interaction is required, and an exhaustive 2D search using Dijkstra's algorithm for each seedpoint is redundant. Our solution is similar to [35]: our modified graph search algorithm terminates after the next target point in the ordered list of seedpoints has been reached. The computational savings originate from the order in which the graph search propagation is done: the propagation algorithm selects the unprocessed pixel with the lowest cumulative cost to be analyzed next [26]. For example, when the graph search propagates from seedpoint q to point r, the cumulative cost of r is C_r. At this point, all pixels p with cumulative cost C_p < C_r would have been found already; thus the path from r back to q is guaranteed to be globally optimal. Figure 11 illustrates the impact of this technique. The graph search algorithm favours propagation along high-gradient edges and largely ignores homogeneous regions, because seedpoints tend to be on or very close to gradient edges. Another advantage is that the computational savings now depend on the distance between seedpoints and not on image resolution.

Figure 11. Graph search required per pre-determined seedpoint. (a) Vertebra. (b) Full cost map needed per seedpoint (circled). Darker areas indicate lower cost. (c)-(d) The abbreviated graph search algorithm terminates when the next pre-determined seedpoint is reached.

2.2.5 Handling Arbitrary Topology

Anatomical structures often exhibit non-spherical topologies, concavities, and protrusions. For convex objects (e.g.
sphere), it is guaranteed that there will only be one or two clusters of seedpoints per input contour in unseen slices; thus, a turtle map can be easily generated using the technique described in our previously proposed framework [63]. However, for objects with concavities or protrusions (e.g. a U-shaped tube), a slice may capture multiple objects, and its turtle map will show multiple disjoint groups accordingly (Figure 8). Since our current approach processes each group independently, multiple objects can be segmented concurrently, such as the left and right ventricles (Figure 14a). More complicated still are objects with non-spherical topology (e.g. a torus), which none of the previous Livewire methods can handle. In order to correctly segment these objects, our method first identifies contours that are circumscribed within another, using pairwise comparisons on all user-guided contours of a given image slice. Let C1 and C2 represent two closed contours on the same slice. C1 and C2 are first converted to binary masks M1(x, y) and M2(x, y) respectively, where pixels inside the contour have a value of 1 and 0 otherwise. If

M1(x, y) ∩ M2(x, y) = M1(x, y)

then C1 is wholly situated inside C2, and if

M1(x, y) ∩ M2(x, y) = M2(x, y)

then C2 is wholly situated inside C1. This step is critical because these 'inner' contours delineate pixels that do not encompass the object of interest, but rather a hole in the object. Consequently, these contours and their derived seedpoints (Section 2.2.1) are flagged as 'negative', whereas the contours and seedpoints that actually delineate the object are flagged as 'positive'. Both 'positive' and 'negative' intersection points are used on the turtle map; however, turtle tracks are only constructed between 'positive' seedpoints. In contrast, 'negative' seedpoints in effect negate a section of an otherwise longer track line, splitting the turtle map into two distinct parts.
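The containment test above can be sketched in a few lines. This is our own illustrative Python version (the thesis implementation is in MATLAB), with binary masks stored as nested lists of 0/1 values:

```python
def contains(mask_outer, mask_inner):
    """True if `mask_inner` lies wholly inside `mask_outer`: the pixelwise
    intersection (logical AND) of the two binary masks equals `mask_inner`."""
    inter = [[a & b for a, b in zip(row_o, row_i)]
             for row_o, row_i in zip(mask_outer, mask_inner)]
    return inter == mask_inner
```

Running the test in both directions on each contour pair distinguishes the 'inner' (negative) contour from the 'outer' (positive) one: if contains(M2, M1) holds, C1 is the hole-delineating inner contour.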
This process is illustrated in Figure 8, where seedpoints 2i and 4i negate the otherwise longer turtle track between seedpoints 3 and 9. A central cavity results, which now correctly represents the toroidal object. This process is outlined in Algorithm 2.

Algorithm 2. Constructs the horizontal tracks in TurtleMap that are used for ordering the seedpoints found in Algorithm 1 for an unvisited slice S_{x,y0,z}. For vertical tracks, or if orthogonal unvisited slices S_{x,y,z0} or S_{x0,y,z} are used, only trivial changes are required.
Require: An empty or incomplete TurtleMap array; G contours {C_yz} on slices S_{xm,y,z}, where xm represents arbitrary slice indices in x.
Ensure: Construction of horizontal TurtleMap tracks.
  for each g in (1..G) do
    I_{x,y0,z} = C^g_yz ∩ S_{x,y0,z}
    Pairs(z1, z2) = result of the pruning algorithm in Section 2.2.3 applied to I_{x,y0,z}, determining B point pairs
    for each pair b in (1..B) do
      (z1, z2) = Pairs_b, extracting the z values from the pair
      Mark TurtleMap(xm, z1) and TurtleMap(xm, z2) as seedpoints.
      Let C be an arbitrary, closed, user-guided Livewire contour in the yz direction.
      if mask(C^g_yz) ∩ mask(C) = mask(C^g_yz) for any C then
        TurtleMap(xm, z1:z2) = TurtleMap(xm, z1:z2) − 1
      else
        TurtleMap(xm, z1:z2) = TurtleMap(xm, z1:z2) + 1
      end if
    end for
  end for

2.2.6 Implementation Details

This proposed framework was developed in MATLAB (The MathWorks Inc., Natick, MA) and offers the standard concurrent orthogonal views of a volume as shown
In our findings, thesefeature-rich areas, if segmented correctly by the user, usually allowfor higher accuracy.Also, this software feature is usefulfor visually judging the accuracy of the delineationresult. Additionally, our user interface is able to display3D plots of contours as well as asurface rendering of the object of interest after the 3DLivewire procedure is completed.In our tool, additional features such as point deletionand automatic contourclosing are available during the user-guided Livewirestage. Also, if the user selects aseedpoint erroneously, the segmentationprocess can be reverted to an earlier state,similar to the ‘undo’ command found in many commonapplications. While our techniqueis flexible and robust, errors arebound to occur due to human error and poor imagequality. Our tool offers the undo operation describedabove, as well as the ability forusers to remove entire automatically generatedcontours for re-computing. From therendered result, users can quicklyidentify problematic areas, if any, and increase thesegmentation accuracy by providing additionaluser-guided contours in these areas andre-running the 3D Livewire algorithm. For isolatedrefinement, users can also choose tooverwrite the automatically generatedcontour(s) using the 2D Livewire procedure.42Figure 12. Screen-capture of the segmentation tool’s graphicaluser interface during a segmentationtask. Completed 2D contours are displayed in green forthe three orthogonal views, providingfeedback on segmentation accuracy throughout the segmentationtask. Yellow lines indicate thecurrent slice indices of the other two orientations.2.3 Validation and ResultsThe proposed method was testedon both synthetic images (Section 2.3.1) and realmedical image data (Section 2.3.2) to demonstrateits capabilities. The application’sperformance during these tasks was quantified basedon the three main recommended43criteria for semi-automatic segmentation [9]. 
Toreport accuracy and reproducibilitymeasurements, Dice similarity (voxel agreement) CDice2vol1,I(volA+VO1B)was used,where CDICe is the Dice similarity coefficient. VOlsimis the sum of the voxels at theintersection between trial A and trial B, and VOlA and VO1Brepresent the sum of the voxelsin trials A and B respectively. The Dice similarity coefficients werethen averaged over alltrials. Since our 3D Livewire method is deterministic and producesidentical results giventhe same input contours, we measured reproducibilitywith different user-guided contoursand seedpoints as input because not all operators willchoose the same slices nor will theychoose the same locations for seedpoints. The orientationof the 3D segmentation waskept constant, but operators with different familiarityof Livewire to were used. Theuntrained operators do not have prior understandingof topology and were trained forapproximately 10 minutes on the segmentation tooland the objects they had to segment.Efficiency was calculated by comparing the time required forour technique to segment a3D volume to the total time needed for performing2D Livewire on each slice. Due topoor image quality or user mistakes, contourerrors may occur with 3D Livewire; thus,the time it takes to correct such errors is included inthe time measurements as well.Finally, algorithm robustness to increasing levelsof additive white Gaussian noise(AWGN) as well as parameter sensitivity was also investigatedin Section 2.3.3.Synthetic data to validate our proposed method includesa mask of a left caudatenucleus (elongated object), a torus (toroidal topologyobject), and a fork-shaped object(branching object). We also demonstrate our methodon real medical data such as the leftand right ventricles from magnetic resonanceimaging (Tl-weighted MRI), a human44vertebra (computer tomography (CT) from theVisible Human Project (VHP) [65]), andboth parts of the human pelvis (CT,also from VHP).2.3.1 Synthetic Data Tests(b)Figure 13. 
Results of our proposed segmentation method on synthetic data. (a),(d),(g) Renderings of a left caudate mask, torus, and fork object respectively. (b),(e),(h) 3D plots of user-guided contours (red) and automatically generated contours (light blue). (c),(f),(i) Surface renderings of the segmented synthetic examples above, using our proposed approach.

For the caudate nucleus (Figure 13(a-c)), very few user-guided contours are required to segment the body, and additional contours at the tail guarantee an accurate delineation. The torus example (Figure 13(d-f)) highlights our technique's ability to segment objects with non-spherical topology. For this scenario, only 8 user-guided Livewire contours are needed; if another orientation is chosen, only 6 would be needed. For the fork-shaped object (Figure 13(g-i)), only 5 user-guided Livewire contours were required to automatically generate 209 contours. The segmentation shows a smooth transition at the branching site. Table 1 summarizes our method's accuracy and reproducibility rates, averaged over multiple trials. Table 2 shows the efficiency of our method for each computing phase. Our results show that our method is able to achieve these complex segmentation tasks in roughly 15% of the time it takes to delineate all slices using 2D Livewire.

We found that the reproducibility rates for these images are high because the input contours were computed using 2D Livewire, which itself has high reproducibility [26]. The minor differences between trials largely depend on the accuracy of the chosen seedpoints, which varies by user. In terms of efficiency, total processing time naturally increases for volumes with high shape complexity. As the number of user-guided contours increases, so does the total number of intersection points found on each unseen slice and, ultimately, computation time.
However, this higher processing time is counterbalanced by the fact that manual tracing of complex objects also requires more user attention and segmentation time for an accurate delineation. We found that scaling a volume did not affect the number of user-guided Livewire contours needed, as the same number of contours can still create valid turtle maps for all slices.

Table 1. Reproducibility and accuracy results of our proposed method, on both synthetic and real medical image data, each averaged over 4 trials with the corresponding standard deviation. For the ventricles, vertebra, and pelvis examples, expert manual segmentations were not available.

Synthetic Data      | Reproducibility % | Average Accuracy %
Left Caudate Mask   | 99.8 ± 0.2        | 98.7 ± 0.1
Torus               | 94.6 ± 2.7        | 95.3 ± 1.1
Fork Object         | 99.7 ± 0.1        | 97.1 ± 0.7

Real Medical Data   | Reproducibility % | Average Accuracy %
L+R Ventricles      | 92.4 ± 3.1        | —
Vertebra            | 95.1 ± 2.4        | —
Pelvis (2 pieces)   | 98.1 ± 0.5        | —

Table 2. Task time reduction (s) achieved by our proposed method compared to performing 2DLW on each slice. Each step averaged over 4 trials; standard deviation values between each set of trials are included. (I) User interaction time with our tool. (II) Automatic processing time of our tool. (III) Time required for manual corrections. (IV) Total task time of our tool. (V) Task time using 2DLW on all slices.
(VI) Fraction of time (%) required for our tool compared to 2DLW on all slices, (IV)/(V).

Synthetic Data    | (I) User       | (II) Auto     | (III) Fix    | (IV) Total     | (V) 2DLW        | (VI) Fraction (%)
Caudate           | 245.8 ± 87.5   | 86.8 ± 44.8   | 0            | 332.5 ± 128.5  | 2197.5 ± 713.7  | 14.9 ± 1.8
Torus             | 144.5 ± 56.6   | 18.3 ± 3.9    | 0            | 162.8 ± 59.2   | 778.8 ± 166     | 20.5 ± 3.5
Fork Object       | 146.5 ± 64.5   | 41.3 ± 3.4    | 0            | 187.8 ± 65.1   | 2310.5 ± 685.3  | 8.2 ± 1.3

Medical Data      | (I) User       | (II) Auto     | (III) Fix    | (IV) Total     | (V) 2DLW        | (VI) Fraction (%)
L+R Ventricles    | 428.3 ± 108.2  | 75.8 ± 14.4   | 0            | 504 ± 119.8    | 3431.8 ± 407.8  | 14.5 ± 1.8
Vertebra          | 684.5 ± 209.3  | 162.5 ± 89.6  | 53.5 ± 5.9   | 900.5 ± 221    | 3974.8 ± 599    | 22.5 ± 2.7
Pelvis (2 pieces) | 1350.5 ± 512.3 | 467.3 ± 180.7 | 167.5 ± 66.3 | 1985.3 ± 653.4 | 6854.5 ± 1469.3 | 28.4 ± 5.3

2.3.2 Real Medical Data

The first example presented here is a pair of ventricles segmented from an MRI volume (Figure 14(a-c)). Both disjoint structures were segmented during the same task, using a total of 24 input contours to automatically segment 105 slices (approximately 200 contours). In order to accurately capture the tail regions, a higher concentration of input contours was provided there. All object features were successfully captured, including the separation. Next, a vertebra was extracted from the male CT scan of the VHP [65] (Figure 14(d-f)). The human vertebra is toroidal, with pronounced protrusions. Also, the volume contains multiple vertebrae, and parts of two vertebrae often appear on the same slice. Here, our proposed method successfully extracted the vertebra using 17 input contours to segment 88 slices. Lastly, the human pelvis (Figure 14(g-i)), also from the CT scan of the VHP, was segmented using 80 input contours to segment 277 slices. For this example, due to the very thin bone at the ilium, the minimal number of input contours needed to create correct turtle maps in this area is difficult to achieve. This results in minor gaps in the segmentation, but these gaps were easily fixed with our tool by using 2D Livewire to overwrite the problematic slices.
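The reproducibility and accuracy percentages in Tables 1 and 2 are based on the Dice coefficient defined earlier in Section 2.3; a minimal sketch of that computation (our own Python illustration, over binary voxel masks stored as flat 0/1 lists):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|)
    for two binary voxel masks of equal length."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))
```

Identical masks yield 1.0, disjoint masks yield 0.0, and partial overlap falls in between, which is why the coefficient is a convenient single-number summary of voxel agreement between two trials.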
The above examples were tested for reproducibility and efficiency over non-expert 2D Livewire done on all slices (Table 1 and Table 2). Similar to the synthetic data experiments, our proposed method is shown to be much more efficient than performing 2D Livewire on every slice in a real medical image, and the segmentation is highly reproducible when provided different input contours.

[Figure 14 panels (a), (d), (g) removed due to copyright permission. (a) MRI of brain from A. Uthama, R. Abugharbieh, A. Traboulsee, and M. McKeown, Invariant SPHARM Shape Descriptors for Complex Geometry in MR Region of Interest Analysis, IEEE EMBS (2007) 1322-1325. (d) CT scan of vertebra and (g) CT scan of pelvis from M. Ackerman, "The visible human project," Proceedings of the IEEE 86, pp. 504-511, March 1998.]

Figure 14. Results of our proposed segmentation method on real 3D medical data. (a),(d),(g) Original 3D images of a human brain (T1-MRI), spine (CT), and pelvic region (CT) respectively. (b),(e),(h) 3D plots of user-guided contours (red) and automatically generated contours (light blue). Twenty-four (red) used to segment 200 (cyan), 17 to segment 88, and 80 to segment 277 respectively. (c),(f),(i) Surface renderings of the segmented examples above, using our proposed method.

2.3.3 Analysis of Robustness and Parameter Sensitivity

Using consistent user-defined seedpoints for the user-guided contours, the caudate mask volume was subjected to incremental levels of AWGN and then segmented using our proposed method. Accuracy levels were then plotted against the peak signal to noise ratio (PSNR) of the volume, defined as PSNR = 20 log10( max(ObjectIntensity) / σ_noise ). The segmentation results are shown in Figure 15a and accuracy results are shown in Figure 15b.
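As a concrete sketch of this robustness test, the PSNR definition above and the AWGN corruption step can be written in a few lines (illustrative Python; the function names are ours, not the thesis implementation):

```python
import numpy as np

def psnr_db(volume, sigma_noise):
    """PSNR as defined above: 20 * log10(max object intensity / noise std)."""
    return 20.0 * np.log10(volume.max() / sigma_noise)

def add_awgn(volume, sigma_noise, rng=None):
    """Corrupt a volume with additive white Gaussian noise of std sigma_noise."""
    rng = np.random.default_rng(rng)
    return volume + rng.normal(0.0, sigma_noise, size=volume.shape)

# Example: peak intensity 255 with noise sigma 25.5 gives 20 dB.
vol = np.full((8, 8, 8), 255.0)
print(round(psnr_db(vol, 25.5), 1))  # -> 20.0
```

A noise-free volume has sigma_noise = 0 and hence infinite PSNR, which is why accuracy curves of this kind start at an unbounded PSNR value.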
As expected, the accuracy level decreases as the increasing amount of noise corrupts the image, but our method is able to recover much of the object even under high amounts of noise.

Parameter selection was not a problem in obtaining accurate results, and our implementation uses equal weighting for each term in our cost function (w(1..5) = 1). Nevertheless, we investigated the effect of parameter sensitivity on the synthetic examples in Figure 13 to determine the change in accuracy when varying each weight value. We found that varying each weight by as much as ±50% did not change the accuracy by more than 3.1% in the test datasets.

Figure 15. Our proposed method's performance reflected in segmentation accuracy as AWGN is progressively added. (a) Slices of a left caudate mask with increasing noise and PSNR levels of ∞ dB, 20.0 dB, 6.0 dB, and 0 dB. (b) Accuracy level stays consistently high until very high noise levels occur, obscuring the ability of the user to choose reliable Livewire seedpoints.

Chapter 3
Live-Vessel: Extending Livewire to Vascular Data

This chapter first presents a review of existing vessel segmentation techniques (Section 3.1). This is followed by the details of the extension from 2D Livewire to Live-Vessel (Sections 3.2 and 3.3). This technique's performance is then validated on synthetic data, real medical images, and in robustness tests (Section 3.4).

3.1 Vessel Segmentation Overview

In addition to the aforementioned difficulties in medical image segmentation, further complications in vascular images include vessel bifurcations and noise corruption of vessel edges and centres, disrupting an otherwise boxcar-shaped vessel profile of an ideal vessel.
Noise can also create what appear to be gaps in vessels [66] that only experts familiar with anatomy can discern. In vessel segmentation, topological conformity ensuring vessel tree connectivity is important for further analysis, such as discovering branching patterns or branch lengths and tortuosity [3]. Many such analyses require the determination of the vessel medial path (as opposed to vessel boundaries only). Vessel medial axis extraction facilitates other forms of shape analysis, data registration [7], and vessel boundary extraction itself [67][68].

Manual tracing by experts for vessel segmentation is considered accurate. However, it is extremely time consuming due to complex vascular networks. Also, long segmentation tasks, particularly needed in vascular images, are commonly affected by user fatigue [9]. Therefore, some level of automation is necessary in this application.

Numerous methods have been proposed for vessel segmentation. Kirbas and Quek [69] provided a survey of several existing vessel segmentation methods. Automated vessel segmentation methods are diverse. One family of methods uses pattern recognition techniques combined with post processing steps such as pruning or pixel classification after preliminary vessel detection. For example, Ricci and Perfetti [70] used a line detector with a rotational interval of 30°, which was coupled with a support vector machine (SVM). Similarly, Soares et al.'s filtering method [71] used Gabor wavelets at rotation intervals of 10° along with a Bayesian classifier. Staal et al. [72] employed ridge analysis [73] and a kNN classifier. Such classifier-based methods, however, require training, which complicates their applicability to clinical use. Lam and Yan [74] used normalized gradient vector fields to locate vessel centerlines, and a Laplacian-based vessel detector was used to prune the result. However, gradient vector fields are highly susceptible to noise, especially when detecting thin vessels.
A bigger limitation is that none of the above methods enforces vessel connectivity constraints. This often results in broken vessels and the inability to determine accurate medials and vessel trees.

Another family of automatic vessel segmentation methods is based on energy-minimizing evolution of deformable models [75][76][77][78][79][80]. Implicit, level-set based approaches (e.g. [75][76]), as well as topologically adaptable explicit models, such as T-snakes [38] or T-surfaces [81], are able to handle the complex vascular topology. However, initializing topologically adaptable models with a single seed may not segment distant branches, and using multiple distributed seeds does not ensure that all parts will merge to produce a correct vessel tree connectivity (having correct topology). Other methods that take advantage of vessel-specific properties include maximizing the gradient vector field flux to optimize the contour or surface [82] and modeling of the capillary force to segment small vessels [83]. In general, results of energy-minimizing approaches can be unpredictable, and many trials and modifications of the model initialization and parameters may be needed. Also, these methods are not guaranteed to give a globally optimal solution. Globally optimal energy minimizing segmentation approaches have been proposed [84][42]. However, such approaches still suffer from restrictions: to the best of our knowledge none were specifically designed for segmenting tubular branching structures (or vessels); they simplify the cost terms to achieve global minima of convex functionals; or they discretize and limit the search space of possible solutions. The Graph Cuts [22] approach achieves a globally optimal segmentation (given a set of foreground and background seeds). Hraiech et al. [85] segmented abdominal aortic aneurysms using Graph Cuts. However, their work is a direct application of Graph Cuts without any vessel-specific extensions. Li et al. [86] used Graph Cuts to extract optimal surfaces in tubular 3D objects.
However, a volume has to be partitioned into carefully chosen regions in which the boundary surfaces can be unfolded into terrain-like surfaces. In general, Graph-Cut contours may also be unpredictable since user interaction is obtained via interior region seeds and boundary constraints, so they typically require user refinement in the form of additional seedpoints; thus this method's ability to segment complex vascular networks, not just major tubular structures, is not yet proven.

Other approaches for automated vessel segmentation employ multiscale detection of curvilinear structures, which is effective in discerning both large and small vessels. In [87][88][89], the eigenvector associated with the minimum eigenvalue of the Hessian matrix across all scales was used to estimate the vessel direction at a given pixel as the direction with the smallest image intensity curvature. Aylward and Bullitt [90] applied this directional information to traverse image ridges and estimate vessel centrelines. Building on [88][89], Frangi et al. [91] developed a "vesselness" filter using the Hessian matrix eigenvalues. Other approaches combined vesselness with vessel enhancing diffusion [92][93], region growing [94], cylindrical vessel models [95], and matched filtering [96]. Sofka et al. [97] later developed another vesselness measure using matched filters instead of Hessian-based techniques. Multiscale vessel detection proved useful in detecting vessels; however, to the best of our knowledge, only the maximal response across the scales was used at any particular pixel, thus lacking regularization of scale along the vessel. An exception is the Vessel Crawlers approach [98], where an estimate of the scale is derived from the radius of the leading front of the crawler, thus achieving a form of scale regularization, but without any global scale optimality condition. Placing seedpoints along a vessel's medial axis can enable the implementation of vessel extraction methods based on path optimization techniques in the image domain.
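The Hessian-based multiscale analysis surveyed above can be sketched in a few lines. The following is an illustrative reimplementation in the spirit of Frangi's 2D filter, not code from the thesis; the function names, the scipy dependency, and the dark-vessel sign convention (vessels darker than background, as in retinal images) are our assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(image, sigma):
    """Scale-normalized Hessian eigenvalues, ordered so |l1| <= |l2|."""
    # Second-order Gaussian derivatives at scale sigma, normalized by sigma^2.
    Ixx = gaussian_filter(image, sigma, order=(0, 2)) * sigma**2
    Iyy = gaussian_filter(image, sigma, order=(2, 0)) * sigma**2
    Ixy = gaussian_filter(image, sigma, order=(1, 1)) * sigma**2
    half_trace = (Ixx + Iyy) / 2.0
    disc = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy**2)
    e1, e2 = half_trace - disc, half_trace + disc
    swap = np.abs(e1) > np.abs(e2)          # order by magnitude
    return np.where(swap, e2, e1), np.where(swap, e1, e2)

def frangi_response(image, sigma, beta=0.5, c=0.3):
    """Frangi-style vesselness at one scale; dark vessels have l2 > 0."""
    l1, l2 = hessian_eigenvalues(image, sigma)
    Rb2 = (l1 / (l2 + 1e-12)) ** 2          # eccentricity (blob vs. tube)
    S2 = l1**2 + l2**2                      # "structureness"
    v = np.exp(-Rb2 / (2 * beta**2)) * (1 - np.exp(-S2 / (2 * c**2)))
    return np.where(l2 > 0, v, 0.0)         # suppress bright structures
```

A maximal-response detector would take max over sigma per pixel; the thesis instead keeps each scale's response separate so the scale can be optimized along the path.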
Finding minimal paths in 2D images was explored in [43]. Deschamps and Cohen [99] extended the approach to 3D, where a fast marching front propagation was implemented to approximate the medial axes of tubular structures in spatial dimensions (x,y,z). However, that work did not address the problem of identifying the boundaries of tubular structures. Young et al. [100] employed the vesselness filter of [91] with a front-propagation based algorithm to segment the vessel and extract its medial axis. However, the scale that achieves the maximum vesselness response for each pixel independently was used, and not the scale that minimizes the total cost of the path. A key point here is that the vesselness scale at each pixel should be chosen such that a globally optimal path is found, and not chosen to maximize the vesselness response at that pixel. Our proposed approach addresses this exact issue. The two vessel tracking methods that relate most to our proposed method are the following. Li and Yezzi [101] optimized for both spatial variables and a 'radius' variable, thus simultaneously determining the globally optimal medial and vessel boundaries. However, this method did not incorporate features of vesselness magnitude, nor did it take advantage of vessel direction. Further, Wink et al. [27] applied Dijkstra's algorithm [31] to find the globally optimal medial and corresponding scale values. They used multiscale vesselness filtering that did not discard non-maximal responses. The cost of selecting a certain scale at a particular pixel was inversely proportional to the vesselness response for that given scale at that pixel. However, this method does not take advantage of the vessel direction information and does not utilize edge information at a scale-dependent distance from the medial.

The Livewire segmentation framework [26] is an optimal path finding algorithm for image segmentation that emphasizes user involvement during the segmentation process.
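The optimal path search at the core of Livewire-style methods can be sketched with Dijkstra's algorithm on an 8-connected pixel graph. This is an illustrative sketch, not the thesis implementation; the per-pixel `cost` array stands in for the combined local cost terms:

```python
import heapq
import numpy as np

def dijkstra_path(cost, seed, target):
    """Globally optimal (minimum-cost) path between two pixels on an
    8-connected image graph. `cost` holds per-pixel local costs
    (e.g. low values on strong edges), seed/target are (row, col) tuples."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, p = heapq.heappop(heap)
        if p == target:
            break
        if d > dist[p]:
            continue                      # stale heap entry
        y, x = p
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                q = (y + dy, x + dx)
                if not (0 <= q[0] < h and 0 <= q[1] < w):
                    continue
                # Diagonal steps are longer, so their cost is scaled.
                nd = d + cost[q] * (1.4142 if dy and dx else 1.0)
                if nd < dist[q]:
                    dist[q] = nd
                    prev[q] = p
                    heapq.heappush(heap, (nd, q))
    path = [target]                       # backtrack from target to seed
    while path[-1] != seed:
        path.append(prev[path[-1]])
    return path[::-1]
```

With a low-cost corridor in the cost image, the returned path snaps to it, which is exactly the behaviour a user exploits when steering a Livewire contour.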
Such an interactive approach, unlike fully automated approaches, can be beneficial to vessel segmentation as it combines the advantages of intuitive user validation along with efficient computational speeds [9][18].

3.2 Livewire to Live-Vessel: Extending Graph Search to (x,y,r) Space

In our proposed method, rather than delineating a vessel by guiding a Livewire contour along the vessel boundary, which is inefficient, our technique enables the user to steer a contour along the centerline (or medial axis) of a 2D vessel. The contour along the medial axis of the vessel is computed as the optimal path between two points in 3D space (x,y,r), where (x,y) are the image spatial coordinates at each medial node and r represents the corresponding radius value at that node. This approach reduces the number of seedpoints needed, since a single optimal path defines three contours (the medial axis and two boundary contours on either side). In 2D Livewire implementations, given an image I(p), where p = (x,y), the only local path choices from node p are to one of the eight neighbouring pixels, q = (x',y'). In Live-Vessel, we optimize the medial axis path with respect to three variables: the two spatial variables x and y, and the vessel radius variable r. This extends the traditional Livewire graph search from 2D to 3D. However, since the medial axis in reality is in 2D space and cannot connect p = (x,y,r) to q = (x,y,r') where r ≠ r' (i.e. only a single radius value can be associated with each medial node), our 3D graph search is restricted accordingly (Figure 16). To accommodate vessels that dilate and constrict rapidly, the radius value can change to any other value (albeit penalized differently, as described in Section 3.3.4). This increases the computational complexity, but the optimal medial path and optimal vessel thickness are then guaranteed.

Figure 16. Live-Vessel's 3D graph search algorithm depicted in 2D (x,y) and 3D (x,y,r).
(a) Medial path sequence (green arrows) with two neighbouring nodes p=(x,y,r) and q=(x',y',r'), projected on the (x,y) plane. Red arrows denote the radius (r) dimension. (b) Alternative directions from p to q. Note that the next node on the path after p=(x,y,r) cannot be q=(x',y',r') if x=x' and y=y'.

While traditional Livewire displays the proposed contour between two points, Live-Vessel displays the proposed medial path as well as its associated boundary contours. In addition to specifying physical coordinates (x,y) with the cursor at seedpoints, the radius (r) is increased/decreased via keyboard arrow keys. The user can then quickly determine the next seedpoint such that the current vessel segment is delineated correctly. One of Livewire's advantages, which Live-Vessel maintains, is the intuitive control of desired segmentation accuracy and efficiency given varying degrees of noise and object complexity. To actually extract the vessel, the medial path is first projected back onto the (x,y) plane. Each medial node has an optimal radius value r, and its preceding and succeeding nodes form a direction vector. These elements are used to determine the two vessel boundary points on either side of the medial node. Repeating this for all medial nodes, all the boundary points of the vessel are found in sequence and converted into a segmentation mask. This process is illustrated in Figure 17 and Figure 18.

Figure 18. Overview of Live-Vessel operation. (a) Original image. (b) Vessel boundary points (yellow) from a seedpoint (green cross) to the next potential seedpoint (red square) are graphically shown to the user for approval. (c) Segmentation mask is created from boundary points determined in (b).

3.3 Live-Vessel External and Internal Costs

We define the incremental cost function, which describes the cost from node p = (x,y,r) to a neighbouring node q = (x',y',r'), as:

Cost(q,p) = w1 Cv(p) + w2 CEv(q,p) + w3 Cie(p) + w4 CR(q,p) + w5 CS(q,p)

Figure 17.
Flowchart depicting program operation from the user's point of view.

Cv is associated with Frangi et al.'s multiscale vesselness filter [91]; however, quite importantly, our vesselness response at p is evaluated at different scales rather than choosing the scale with the maximal response at (x,y) (see Section 3.3.1). CEv refers to the cost of vessel direction change between p and q, and Cie is a measure of the medial node fitness assessed using the image edge evidence at a scale-dependent distance. Two smoothness cost terms (CR and CS) are used to penalize paths where the radius, r, or spatial variables, (x,y), fluctuate rapidly (i.e. the two terms regularize the vessel thickness and medial axis, respectively). Note that evaluating Cv, CEv, Cie, and CR depends on scale, which is a variable being optimized for during our graph search, whereas CS does not depend on scale. Each cost term is normalized to lie in the range [0,1] and is weighted by wi, i ∈ {1..5}. All cost terms are explained in Sections 3.3.1-3.3.4 and summarized in Figure 19.

Figure 19. Graphical representation of the cost terms in Live-Vessel's minimum path search: vesselness over multiple scales, image evidence using edge detection, radial and spatial smoothness, and vessel direction consistency. Bold arrows indicate the medial path sequence; green arrows denote vessel widths.

3.3.1 Vesselness Cost

To detect curvilinear structures in 2D at a given scale σ, the image is first convolved with a Gaussian kernel with variance σ². Then, the ordered eigenvalues |λ1| ≤ |λ2| of the 2×2 Hessian matrix H for each pixel can be used to determine whether a pixel lies on a vessel of that scale [88]. Table 3 summarizes the eigenvalue conditions for vessel detection at a given pixel.

Table 3.
Eigenvalue conditions of the Hessian matrix required at a given pixel for vessel detection.

Eigenvalue condition             Visual interpretation
λ2 < 0, |λ2/λ1| large            Dark background, bright vessel
λ2 > 0, |λ2/λ1| large            Light background, dark vessel

In our implementation, we used the filter proposed by Frangi et al. [91] to quantify the likelihood of a dark vessel pixel into a 'vesselness' image feature. This filter is adopted as a cost term in Live-Vessel's optimization process as follows:

Cv(q) = V(λ1,λ2) = 1,  if λ2 < 0
Cv(q) = V(λ1,λ2) = 1 − exp(−Rβ² / (2β²)) (1 − exp(−T² / (2c²))),  otherwise

Rβ = λ1/λ2 represents the eccentricity of a second order ellipse, T = √(λ1² + λ2²), and β and c affect filter sensitivity and have values 0.5 and 0.3 respectively. These values are similar to those used in other vessel filter studies [91][27].

Previous techniques proposed for detecting curvilinear structures using the Hessian matrix, except Wink et al. [27], chose the highest filter response for each pixel; however, this is susceptible to noise and does not enforce spatial continuity (Figure 20b). Our proposed method, in contrast, stores the results over a range of scales separately, and optimizes for the scale-dependent variable r too. (Defining vessel boundaries as zero crossings of the Laplacian of a Gaussian intensity profile across the vessel yields σ = r, where r is the vessel radius; this is derived by equating the second derivative of the Gaussian kernel to zero.) By combining the results with our other cost terms, we place restrictions on the relationship of neighbouring nodes, hence ensuring robustness to poor image quality and bifurcations (Figure 20c).

The eigenvector Ev(x,y,r) of H, corresponding to λ1, points in the direction of the minimum principal curvature, which estimates the vessel direction [87]. By choosing paths that minimize the change in direction of Ev, we can mitigate local noise and favour vessel directions that change smoothly.

Figure 20. Vessel bifurcation and noise pose problems for a maximal response vesselness filter. (a) Original image, magnified for clarity.
(b) Maximal response vesselness filter output. Note the filter problems caused by vessel bifurcation or noise. (c) Segmentation result with our proposed method.

3.3.2 Vessel Direction Cost

We therefore incorporate the cost term CEv(q,p) into the incremental cost function and define it as:

CEv(q,p) = (2/π) arccos( |Ev(p) · Ev(q)| / (‖Ev(p)‖ ‖Ev(q)‖) )

which evaluates the direction change from neighbouring node q = (x,y,r) to p = (x',y',r'). From the vesselness filter, Ev(q) and Ev(p) (Figure 21) point arbitrarily in either direction of a bidirectional vessel (i.e. ±Ev(q) and ±Ev(p)). The equation above gives the same cost regardless of whether segmentation proceeds in either vessel direction: the medial path cost of a q to p transition is equal to that of a p to q sequence. If an operand of a dot product is inverted, the dot product becomes negative but its magnitude is unchanged; thus we use the absolute value operation above to cancel any potential negative signs and obtain the smaller angle between the vectors (Figure 21).

Figure 21. Eigenvector consistency cost CEv(p,q) going from node p=(x,y,r) to q=(x',y',r') is calculated using angle A, not angle B. Angle A is the smallest angle between ±Ev(p) and ±Ev(q).

3.3.3 Image Evidence Cost

Our proposed Live-Vessel also uses image evidence in the form of edge information to favour medial nodes that are located at the centre of a vessel cross-section of radius r. For this, we employ Canny, Laplacian of Gaussian (LoG), and gradient magnitude based edge detection and average their responses into R(x,y). We chose these filters because Canny and LoG filters are less sensitive to noise, whereas the gradient magnitude filter does not involve pre-smoothing and thus complements the Canny and LoG filters by detecting weak structural edges. Specifically, for each node p = (x,y,r) in the image, we combine the vesselness direction Ev(x,y,r) and R(x,y) to define another measure of node 'medialness'.
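The direction-change cost defined in Section 3.3.2 above reduces to a few lines (illustrative Python; `direction_cost` is our name, not the thesis code):

```python
import numpy as np

def direction_cost(ev_p, ev_q):
    """Vessel-direction consistency cost: the angle between the
    minimum-curvature eigenvectors at p and q, normalized to [0, 1].
    The absolute value makes the cost invariant to the arbitrary +/- sign
    of each eigenvector (a vessel axis is bidirectional)."""
    ev_p = np.asarray(ev_p, dtype=float)
    ev_q = np.asarray(ev_q, dtype=float)
    cos_a = abs(ev_p @ ev_q) / (np.linalg.norm(ev_p) * np.linalg.norm(ev_q))
    # Clip guards against tiny floating-point overshoots beyond 1.0.
    return (2.0 / np.pi) * np.arccos(np.clip(cos_a, 0.0, 1.0))

print(direction_cost([1, 0], [0, 1]))   # orthogonal directions: maximal cost (~1)
print(direction_cost([1, 0], [-1, 0]))  # antiparallel: same vessel axis, cost ~0
```

Note that parallel and antiparallel eigenvectors both yield zero cost, which is exactly the sign-invariance the absolute value provides.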
By finding the two unit vectors that are normal to Ev in the (x,y) plane and scaling them by r, we can determine two locations at which the vessel wall should be located. We thus retrieve the corresponding R(x,y) at these points and their adjacent points p_R = {(x_i, y_i); i = 1,2,...,N} (N is the total number of points considered on both sides) in parallel directions to Ev (Figure 22a). The image evidence cost Cie(p) in the incremental cost function is then defined as

Cie(p) = 1 − (1/N) Σ_{i=1..N} R(p_R(i))

to average a potentially noisy response. This cost term is minimized for a medial node that has equidistant (at distance r) vessel boundaries, which is expected for vessel medial nodes. Figure 22c-d show this cost's effect when other cost terms are not considered.

Figure 22. Image evidence cost and its effect. (a) Points (O's) parallel to the vessel direction (white arrow) at a scale-dependent distance r are tested for edge detection response. (b) Original retinal image. Red square denotes close-up area in (c) and (d). (c) Image evidence cost function Cie(p) at a small value of r (darker means less cost). Medial nodes of thin vessels are prominent and medial nodes of large vessels exhibit higher cost. (d) Image evidence cost function Cie(p) at a larger r. Medial axes of larger vessels become prominent and medials of smaller vessels exhibit higher cost and are dispersed.

3.3.4 Spatial and Radius Smoothness Costs

By encouraging the medial axis to be shorter, medial jaggedness is avoided and spatial smoothness is improved (Figure 23a). To this end, Live-Vessel imposes a cost proportional to the length of the medial axis. An incremental cost for each additional node added to the path (connecting points q = (x,y,r) to p = (x',y',r')) is used, which accumulates during the graph search operation. This cost, CS(q,p) in the incremental cost function, is proportional to √((x − x')² + (y − y')²). Similarly, we penalize medial paths with rapidly changing radius values, for two reasons.
Firstly, the vesselness filter is noise sensitive, and estimating the radius based solely on the filter output is unreliable. Secondly, vessel radii tend not to change rapidly unless branching or abnormalities such as aneurysms occur. By incorporating the cost

CR(q,p) = |r − r'| / (r_max − r_min)

into the incremental cost function, the vessel width in the segmentation result is rendered smoother, with gradually changing radius values (Figure 23b). Here, r_max and r_min are the maximum and minimum values r can take. In cases where aneurysms or other causes of rapid vessel radius change occur, an extra seedpoint can be placed at that location to force the optimal medial's radius to correctly adapt.

Figure 23. Difference in effects of radial and spatial smoothing. Unsmoothed elements are in gray (solid line = medial, dotted line = boundaries), new medials are in red, and new boundaries are in blue. (a) While vessel width was constant, favouring shorter medial axes results in a smoother medial and vessel boundaries. (b) While the medial was already smooth, minimizing change in radius results in smoother boundary contours.

3.4 Results and Discussions

In this section, we validate the performance of Live-Vessel on synthetic data as well as real medical retinal image data from the publicly available Digital Retinal Images for Vessel Extraction (DRIVE) database [102]. We also provide comparisons to other state-of-the-art vessel segmentation techniques, as well as demonstrate Live-Vessel's robustness to parameter sensitivity and increasing amounts of noise.

3.4.1 Performance Criteria

We benchmarked the performance of Live-Vessel using the three main criteria typically used to evaluate segmentation results: accuracy, reproducibility, and efficiency [9]. Reproducibility was measured by performing several segmentation trials of the same task, which differed in seedpoint selection, since, realistically, different seedpoints would be chosen by different users or by the same user during different trials.
Efficiency was calculated as the fraction of the time a user needed to complete the segmentation task using Live-Vessel versus manual tracing. To obtain manual tracing task times, we segmented each image using a simple paint tool, and the task times were similar to those reported in [102].

Accuracy and reproducibility were reported using the Dice similarity metric (which captures the agreement in pixel labels) between the segmentation mask and the golden truth, or between resulting segmentation masks in different trials. In retinal vessel segmentation, blood vessel boundaries are oftentimes unclear, and thin vessels are delineated differently, if at all, by different observers. Therefore, for evaluating the segmentation of real retinal images, we calculated its similarity to each manual tracing and compared that to how similar the manual tracings are to each other. Dice similarity incorporates both true and false positives (TP and FP), and is defined as C_Dice = 2 area_sim / (area_A + area_B), where C_Dice is the Dice similarity coefficient and area_sim is the intersection between segmentation areas of trials A and B (area_A and area_B respectively). Dice similarity coefficients were averaged over multiple trials.

To compare with other current state-of-the-art vessel segmentation techniques, the widely used accuracy measure Acc = (TP + TN) / (P + N), averaged over tested images, was used. TP and TN here are the sums of true positive and true negative pixels in the images' circular field of view (P + N total pixels). While we felt reporting this measure is necessary for comparing Live-Vessel to other reported quantitative results (using the same reporting metric), we believe this does not provide a true measure of accuracy, especially for thin vessels.
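The difference between the two measures can be made concrete with a toy computation (illustrative Python; the function names are ours):

```python
import numpy as np

def dice(a, b):
    """Dice similarity: 2|A ∩ B| / (|A| + |B|) over binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def pixel_accuracy(seg, truth):
    """Acc = (TP + TN) / (P + N): fraction of all pixels labeled correctly."""
    return np.mean(seg.astype(bool) == truth.astype(bool))

# A ground truth where vessels are 10% of pixels: an empty segmentation
# still scores 90% pixel accuracy, while Dice correctly reports 0.
truth = np.zeros((10, 10), dtype=bool)
truth[:, 0] = True                     # one 10-pixel "vessel" column
empty = np.zeros_like(truth)
print(pixel_accuracy(empty, truth))    # -> 0.9
print(dice(empty, truth))              # -> 0.0
```

This is the inflation effect discussed in the text: Acc rewards the dominant background class, whereas Dice scores only the agreement on vessel pixels.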
Since background pixels typically greatly outnumber vessel pixels in an image (by approximately 10 to 1) in the field of view, this accuracy measure is inherently inflated and, we believe, is not the best measure of accuracy in such applications of segmentation (for example, Acc gives 90% accuracy even if not a single pixel is labeled as vessel, in a typical image where only 10% of the pixels are vessel pixels).

We note that since 2D medical images suffer from partial area effects, our proposed method's segmentation result uses a fuzzy membership for each pixel, evaluating the fraction of the pixel inside/outside the vessel. Note that the boundary of the vessel obtained via Live-Vessel is calculated with sub-pixel accuracy, being a distance r away from and normal to the vessel medial (Section 3.2). If crisp binary segmentation is desired, a pixel is assigned to the object if this ratio is at least 0.5.

3.4.2 Synthetic Data Segmentation

The manually-traced segmentation masks provided by the DRIVE database were used as realistic 'noise-free' synthetic data. Varying levels of noise were subsequently added. Three randomly selected manual tracings of observer 2 (images 4, 9, 10) from the DRIVE database were each segmented twice, as illustrated in Figure 24. This synthetic data does not suffer from partial area effects, but we found that binary vessel masks are inherently jagged and the masks themselves possessed numerous flaws. One of these is the presence of slight 'indentations' in the vessels that do not appear in the original data. While these flaws were unintentional, our proposed method resolves these issues by encouraging spatial and radial smoothness (Section 3.3.4). We found that vessels with 1-pixel wide widths were segmented correctly (see also Section 3.4.3). Table 4 summarizes our method's reproducibility and accuracy rates on synthetic data, averaged over 2 trials per image.
We found that the reproducibility rate was consistently high and consistent with traditional Livewire [26], reflecting that the locations of the sparse seedpoints only marginally impact the optimal contour in between. We found the accuracy results were also high and consistent (Table 4).

Table 4. Summary of Live-Vessel's performance on 3 random manual tracings in the DRIVE database. Results are shown as averages and standard deviation over 2 trials.

           Dice similarity   Reproducibility
Image 04   0.875 ± 0.006     0.983
Image 09   0.849 ± 0.010     0.972
Image 10   0.890 ± 0.008     0.966
Average    0.871 ± 0.008     0.974

Figure 24. Synthetic data segmentation. (a) Synthetic image of retina. (b) Segmentation result using Live-Vessel.

3.4.3 Retinography Data Segmentation

Ten randomly selected retinal images from the DRIVE database were segmented using Live-Vessel to demonstrate our method's performance on real medical vascular images. Test images 1, 3, 4, 5, 6, 7, 9, 10, 19, 20 were ultimately chosen. Example qualitative results are shown in Figure 26. Figure 25 illustrates our proposed method's ability to segment 1-pixel wide vessels with sub-pixel boundary accuracy on real vascular image data.

Figure 25. Segmentation of 1-pixel wide vessels using Live-Vessel. (a) Original image. (b) Zoomed (red square in (a)) area containing 1-pixel wide vessels. (c) Computed optimal medial path for one of the vessels in (b). (d) Further zoomed view of vessel (green square in (c)). White line denotes optimal medial axis, black lines indicate the optimal vessel boundaries.

[Figure 26 row (a) images removed due to copyright permission: retinal images #4, #9, and #10 from M. Niemeijer, J.J. Staal, B. van Ginneken, M. Loog, and M.D. Abramoff, Comparative Study of Retinal Vessel Segmentation Methods on a new Publicly Available Database, SPIE Medical Imaging (2004) 648-656.]
Figure 26. Sample results using our proposed method. For each column: (a) Original retinal image. (b) Optimal medial axis computed by Live-Vessel (white curve) overlaid on original image. (c) Live-Vessel segmentation mask. (d) Close-up of segmentation mask.

Table 5. Summary of the average accuracy and reproducibility results of Live-Vessel (LV) compared to manual traces (MT1 and MT2), reporting averages and standard deviation over two trials. Dice similarity is the reported measure. Image size is 565 × 584 pixels throughout.

           MT1 vs. MT2   LV vs. MT1      LV vs. MT2      LV Reproducibility
Image 01   0.804         0.778 ± 0.023   0.810 ± 0.021   0.969
Image 03   0.785         0.784 ± 0.011   0.760 ± 0.008   0.972
Image 04   0.802         0.780 ± 0.021   0.794 ± 0.038   0.984
Image 05   0.790         0.744 ± 0.003   0.762 ± 0.008   0.976
Image 06   0.770         0.789 ± 0.006   0.815 ± 0.012   0.954
Image 07   0.768         0.729 ± 0.004   0.764 ± 0.025   0.966
Image 09   0.800         0.792 ± 0.032   0.786 ± 0.017   0.959
Image 10   0.787         0.779 ± 0.004   0.799 ± 0.024   0.981
Image 19   0.825         0.765 ± 0.005   0.811 ± 0.015   0.967
Image 20   0.770         0.728 ± 0.009   0.821 ± 0.006   0.956
Average    0.790         0.767 ± 0.012   0.792 ± 0.017   0.968

Quantitative results indicated high reproducibility rates, similar to our results on synthetic data. Accuracy was found to be similar to both manual segmentations, and this similarity was comparable to that of the manual tracings to each other. In some cases, Live-Vessel had a higher pixel agreement with either or both manual tracings than between the two manual tracings themselves.
We also found that Live-Vessel had higher similarity to the second manual tracing than to the first in 8 out of 10 images (Table 5). We believe that in addition to user fatigue, there may be perceptive differences between DRIVE's different observers, perhaps due to computer hardware such as display quality. Figure 27 illustrates the differences between these observers, computed using the 'exclusive or' operation. Variability is evident for both thick and thin vessels. Table 6 summarizes our findings from Live-Vessel's efficiency tests. By only requiring the user to guide the medial path of vessels, segmentation using Live-Vessel on average required less than 25% of the time needed for manual tracing (a 75% reduction). Images with more vessels, or with vessels that are less clear, expectedly required more time to segment.

Figure 27. Manual segmentation differences between experts. (a) Observer 1. (b) Observer 2. (c) Difference between (a) and (b). (d-e) Close-up of (a) and (b), respectively. (f) Difference between (d) and (e).

Table 6. Efficiency results of our proposed Live-Vessel (LV) compared to a manual trace (MT), reporting averages and standard deviations over two trials. Efficiency was measured as the fraction of the manual task time needed to generate the mask. Times are reported in seconds. The image size is 565 x 584 pixels. Average reduction in time is 75.3%.

            MT      LV            LV/MT (%)
Image 01    6154    1260 ± 48     20.5 ± 0.8
Image 03    6689    1683 ± 98     25.2 ± 1.5
Image 04    6839    1632 ± 8      23.9 ± 0.1
Image 05    7613    2052 ± 132    26.9 ± 1.7
Image 06    5741    1437 ± 33     25.0 ± 0.6
Image 07    6488    2042 ± 67     31.5 ± 1.0
Image 09    5546    1382 ± 52     24.9 ± 0.9
Image 10    5394    1111 ± 52     20.6 ± 1.0
Image 19    4877    1316 ± 57     27.0 ± 1.2
Image 20    6807    1490 ± 87     21.9 ± 1.3
Average     6215    1541 ± 63     24.7 ± 1.0

Finally, we also quantitatively compared Live-Vessel's performance to the latest reported techniques in 2D retinography segmentation.
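The inter-observer difference images described above (Figure 27) follow directly from the 'exclusive or' operation: a pixel is marked wherever exactly one of the two observers labelled vessel. A minimal NumPy sketch (names are illustrative, not from the thesis code):

```python
import numpy as np

def observer_difference(mask_1, mask_2):
    """Pixels classified as vessel by exactly one of the two observers (XOR)."""
    return np.logical_xor(np.asarray(mask_1, dtype=bool),
                          np.asarray(mask_2, dtype=bool))

def disagreement_rate(mask_1, mask_2):
    """Fraction of pixels on which the two observers disagree."""
    diff = observer_difference(mask_1, mask_2)
    return diff.sum() / diff.size
```

The resulting boolean image, displayed directly, gives difference maps like those in Figure 27(c) and (f).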
In terms of measured accuracy, our proposed technique is quite similar to these methods (Table 7), although, as described previously, we believe this accuracy measure is biased in all cases, since background pixels significantly outnumber vessel pixels in all images. It is important to note that none of the other reported techniques ensures topological integrity, and they may therefore produce many broken vessels and spurious (non-vessel-like) detected structures (e.g., Figure 11a in [70], Figure 6 in [72], Figure 6 in [71], and Figure 15 in [74]). Thin vessels (1-2 pixels wide), which can constitute a large portion of a vessel tree's total length but may not contribute much to the accuracy calculation, are especially problematic in reported methods; however, they are handled well by Live-Vessel. Additionally, Live-Vessel computes the optimal medial axes with radius values along the vessel (in addition to the boundaries and segmentation masks), which simplifies subsequent vessel tree analysis (e.g., by avoiding vessel skeletonization).

Table 7. Accuracy comparison of state-of-the-art methods and the proposed method Live-Vessel, using the widely used accuracy assessment method.

Technique             Accuracy
Jiang et al.          0.9337
Ricci and Perfetti    0.9595
Lam and Yan           0.9474
Soares et al.         0.9466
Staal et al.          0.9441
Live-Vessel           0.9460

3.4.4 Live-Vessel Robustness and Parameter Selection

We performed a detailed sensitivity analysis to determine the extent of change in accuracy (Dice similarity) when varying the weighting parameters used in our incremental cost function. Tests were done on observer 2's manual tracing for image 10 in the DRIVE database. Our implementation uses default equal weighting (w_i = 1) for each cost term in the incremental cost function. Results of changing the weight of each term are shown in Table 8. It can be seen that even with ±50% variability in each parameter, Live-Vessel's accuracy was stable, with an average magnitude of percentage change of 2.44%, a maximum of 7.781%, and a minimum of 0%.
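The sensitivity experiment in Table 8 amounts to perturbing one weight at a time in a weighted-sum cost. Assuming the incremental cost is a plain weighted sum of the five terms (our reading of the text, not the thesis implementation), the perturbation can be sketched as:

```python
def incremental_cost(terms, weights=None):
    """Weighted sum of Live-Vessel cost terms (sketch).

    terms: dict of cost-term values, e.g. keys 'V', 'Ev', 'Ie', 'R', 'XY'.
    weights: per-term weights; defaults to the equal weighting w_i = 1.
    """
    if weights is None:
        weights = {name: 1.0 for name in terms}
    return sum(weights[name] * value for name, value in terms.items())

# Perturb one weight by -50%, as in Table 8 (term values are illustrative).
terms = {'V': 0.2, 'Ev': 0.1, 'Ie': 0.3, 'R': 0.05, 'XY': 0.15}
base_cost = incremental_cost(terms)                 # default w_i = 1
v_lowered = incremental_cost(terms, {**{k: 1.0 for k in terms}, 'V': 0.5})
```

Each row of Table 8 corresponds to one such perturbed weighting, with the resulting Dice similarity measured against both manual tracings.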
This confirms Live-Vessel's insensitivity to significant parameter variations.

Table 8. Effect of parameter selection on Dice similarity (accuracy) between Live-Vessel and each manual tracing (MT1, MT2). Percentage of accuracy change is provided for each experiment. Weights for vesselness (V), vessel direction consistency (Ev), image evidence (Ie), radius smoothness (R), and spatial smoothness (XY) were raised or lowered by ±50% from the default equal weighting (w_i = 1).

Changed         w_i = 0.5                    w_i = 1.5
weight          vs. MT1       vs. MT2        vs. MT1       vs. MT2
No change       0.784         0.760          0.784         0.760
V               0.723         0.738          0.779         0.792
                (-7.781%)     (-2.895%)      (-0.638%)     (+4.211%)
Ev              0.764         0.755          0.795         0.762
                (-2.551%)     (-0.658%)      (+1.403%)     (+0.263%)
Ie              0.751         0.748          0.814         0.759
                (-4.209%)     (-1.579%)      (+3.827%)     (-0.132%)
R               0.735         0.729          0.791         0.760
                (-6.250%)     (-4.079%)      (+0.893%)     (0.000%)
XY              0.775         0.774          0.762         0.773
                (-1.148%)     (+1.842%)      (-2.806%)     (+1.711%)

To test Live-Vessel's robustness to noise, image 10 in the DRIVE database (with observer 2's manual tracing as reference) was subjected to incremental levels of AWGN and then segmented using consistent seedpoints in Live-Vessel, but only on vessels still visible to the observer given the progressing noise levels. Accuracy levels were then plotted against the peak signal-to-noise ratio (PSNR) of the image, defined as PSNR = 20 log10(max(ObjectIntensity) / sigma_noise), where sigma_noise is the standard deviation of the added noise. The segmentation results are illustrated in Figure 28. As expected, we found that as noise increased, segmentation speed naturally decreased because denser sets of seedpoints were needed to maintain accuracy. Under high noise, the eigenvector consistency, radial smoothness, and spatial smoothness costs were especially important, as the vesselness response and image evidence measures were adversely affected. Our tests show that even under high noise (PSNR = -8.11 dB in Figure 28d), our proposed method was still successfully tracking the medial axes of reasonably large vessels.

Figure 28. Segmentation of a synthetic image under increasing AWGN noise. Red squares in the first column denote the field of view zoomed in the second column. (a) PSNR = 27.73 dB. (b) PSNR = 10.22 dB. (c) PSNR = 0 dB. (d) PSNR = -8.11 dB. In the right column of (a) and (b), segmentation masks are shown. In the right column of (c) and (d), black contours denote detected boundary points. White contours represent optimal medial axes from a seedpoint (green cross) to the next point (solid red square).

Chapter 4
Conclusions

Medical imaging poses many problems for structural segmentation. Although much research has been devoted to this field, it remains difficult to solve these issues through computation alone, without incorporating human insight to recognize and locate the object of interest and to differentiate between good and bad segmentation results.

4.1 Contributions

The two highly-automated, interactive frameworks presented here efficiently meld the advantages of both extremes of the fully-automated to manual segmentation spectrum. Our contributions are two interactive, globally optimal techniques [103][104]. Both our 3D Livewire method and Live-Vessel have been submitted as journal papers after further refinement and testing.

1. 2D to 3D Livewire extension for 3D objects:
M. Poon, G. Hamarneh, and R. Abugharbieh, "Segmentation of Complex Objects with Non-spherical Topologies from Volumetric Medical Images using 3D Livewire," SPIE Medical Imaging 6512-31, pp. 1-10, 2007.
• Handles arbitrary topologies, protrusions, concavities, and branching
• Integrated into a user-friendly segmentation tool
• Accurate, highly reproducible, and efficient compared to traditional Livewire
• Robust to high levels of noise and insensitive to parameter variability

2. Live-Vessel for segmenting 2D retinography images:
M. Poon, G. Hamarneh, R.
Abugharbieh, "Live-Vessel: Extending Livewire for Simultaneous Extraction of Optimal Medial and Boundary Paths in Vascular Images," MICCAI (2007) 444-451.
• Simultaneously extracts vascular medial paths and boundaries
• Optimizes over spatial (x,y) and non-spatial (radius) variables
• Optimizes for vesselness, vessel direction consistency, edge information, and radial and spatial smoothness
• Comparable accuracy to manual segmentation; reproducible and highly efficient compared to manual segmentation
• Robust to high levels of noise and insensitive to parameter variability

4.2 Limitations and Future Directions

Despite the proposed methods' benefits, there are several limitations and areas for improvement.

Extensibility to Other Modalities

For instance, their extensibility to other data types is yet to be proven. Modalities such as diffusion tensor imaging, ultrasound, or magnetic resonance angiography pose different problems than MRI or CT, and require implementation specifics that cater to those issues. Also, while 2D and 3D image modalities are common, medical images can have higher dimensionalities, for example, with the incorporation of the time dimension to capture structural deformities or changes in brain function.

Sub-pixel Accuracy

Another limitation of our approaches is that they do not offer sub-pixel accuracy, as the contours in our 3D Livewire approach and the medial paths in Live-Vessel lie on pixel centres. While this is less of an issue in high-resolution images, medical images oftentimes have low resolution in one or more dimensions. One possible remedy is to use refinement subdivision procedures such as [105], or to up-sample the image prior to segmentation. Alternatively, deformable models [12] may be used to refine the segmentation at the expense of user control and predictability.

3D Livewire - Algorithm Robustness to User Errors

With regard to the 3D Livewire method, the more immediate goals include enhancing algorithm robustness to user mistakes.
While Livewire contours are adept at snapping to object edges, incorrect input, even if off by only one or two pixels, creates small protrusions or indentations that can affect the quality of the final result. A possible solution is an option allowing our segmentation tool to search within neighbouring pixels for a more suitable seedpoint. We believe this should remain a user option, because the user should be able to assert full control over seedpoint locations if necessary.

3D Livewire - Parallelization

Another potential improvement: although graph search speed is not an issue with traditional 2D Livewire, our method's need to perform a large number of automated Livewire operations on unseen slices can benefit from speed-up algorithms. One possible method is to parallelize Dijkstra's algorithm on multiple processors [108]. Also, since the segmentation of an unseen slice does not depend on adjacent slices, multiple processors can linearly decrease the duration of the automated phase of our proposed method.

3D Livewire - Relationship Between Topology and Required User-guided Contours

Another limitation is that users currently require initial practice and an understanding of the properties of the structure they wish to segment, and must place user-guided contours on select slices. This is a concern because biological structures vary widely in size, shape, and topology, and we wish to refine this method so that non-experts with minimal training can achieve reasonable segmentation results. Therefore, we believe this project's future direction should include understanding the relationship between the amount and orientation of user-guided contours necessary for a correct segmentation and the shape, size, and topology of the object. A possible approach to this problem is to apply Art Gallery Theorems [106], which apply to complex topology and 3D extensions.
Possible benefits include being able to automatically determine and suggest to the user the slice locations on which to place manually-guided Livewire contours, cutting down on segmentation task time and improving reproducibility.

3D Livewire - Oblique Slices

In addition, we would like to investigate how oblique slices, rather than just orthogonal slices, can be used to initiate the 3D Livewire algorithm, because oblique slices may characterize an object more accurately. The main challenge will be to reformulate the turtle algorithm such that non-orthogonal 'tracks' can be traversed.

3D Livewire - Training Parameters

While we would like to maintain our algorithm's flexibility over as many medical image modalities as possible, we believe that training the weights of the optimization terms to suit a particular modality would benefit the accuracy and usability of the segmentation. This may be achieved by collecting a set of training image-segmentation pairs and optimizing for the weights that give delineations as close as possible to the ground-truth training segmentations.

Livewire for Globally Optimal Surfaces

Lastly, another future direction may be to extend Livewire from finding an optimal 1D contour in 2D (or even 3D) space to finding globally optimal 2D surfaces such as those found in [86]. One idea is to use 1D Livewire contours along the surface of an object in 3D space and iteratively partition the object surface for polygon generation. Another potential method is to define surface 'patches' along an object and use Livewire to determine an optimal path along these 'surface nodes', thus solving a 3D segmentation problem using a 2D graph search. This approach would be similar to [107], but without the need for pre-segmentation.

Live-Vessel Efficiency

For Live-Vessel, the immediate improvements we will pursue include reducing the computation time required for the graph search.
The reason is that our 2D+R graph search is substantially more computationally demanding than a 2D graph search: the search space grows with the extra radius dimension, and each node currently has 32 neighbours compared to 8 in the 2D case. We are again looking into parallelization [108], as well as limiting the graph search operation similarly to [34][35].

Live-Vessel for 3D Vessel Segmentation

We believe Live-Vessel is well suited to 2D retinography images, but the next natural step would be to extend this framework further to (x,y,z,r) for the segmentation of 3D vessels, such as those found using magnetic resonance. One of the design challenges here will be to reduce the graph search as mentioned above, as Dijkstra's algorithm in 4D would be even more computationally demanding. Also, in any interactive 3D segmentation scheme, human-computer interaction, such as visualization and input methods, becomes more complex. The goal here will be to develop an interaction scheme that allows users to see the proposed vessel before validating it. While rendering 3D vessels is not difficult, users oftentimes judge the quality of a segmentation contour based on intensity information from its surrounding pixels/voxels; thus this information will have to be incorporated into the 3D vessel visualization as well.

Further Live-Vessel Validation

Also, in this thesis, we benchmarked our method's accuracy on two different measures; however, adopting a different validation method, such as [109], that evaluates the medial axis itself would benefit the use of Live-Vessel in vessel tree analyses and vessel tracking. Similarly, while the DRIVE database only provides two sets of expert segmentations, it would be beneficial to investigate exactly why these differ and to derive a ground truth from these observers or by using additional experts.

Bibliography

[1] A. Uthama, R. Abugharbieh, A. Travoulsee, and M. McKeown, Invariant SPHARM Shape Descriptors for Complex Geometry in MR Region of Interest Analysis, IEEE EMBS (2007) 1322-1325.
[2] A. Convit, M. Leon, C. Tarshish, S. Santi, W.
Tsui, H. Rusinek, and A. George, Specific Hippocampal Volume Reductions in Individuals at Risk for Alzheimer's Disease, Neurobiology of Aging (1997) 131-138.
[3] E. Bullitt, G. Gerig, S. Aylward, S. Joshi, K. Smith, M. Ewend, and W. Lin, Vascular Attributes and Malignant Brain Tumors, MICCAI (2003) 671-679.
[4] J. Ross, T. Masaryk, M. Modic, P. Ruggieri, E. Haacke, W. Selman, Intracranial Aneurysms: Evaluation by MR Angiography, American Journal of Neuroradiology 11(3) (1990) 449-455.
[5] F. Skovborg, A. Nielsen, E. Lauritzen, and O. Hartkopp, Diameters of the Retinal Vessels in Diabetic and Normal Subjects, Diabetes (1969) 292-298.
[6] T. Wong and R. McIntosh, Hypertensive Retinopathy Signs as Risk Indicators of Cardiovascular Morbidity and Mortality, British Medical Bulletin (2005) 57-70.
[7] S. Aylward, J. Jomier, S. Weeks, and E. Bullitt, Registration and Analysis of Vascular Images, International Journal of Computer Vision (2002) 123-138.
[8] J. Rajapakse and F. Kruggel, "Segmentation of MR images with intensity inhomogeneities," Image and Vision Computing 16, pp. 165-180, 1998.
[9] S. Olabarriaga and A. Smeulders, "Interaction in the segmentation of medical images: A survey," Medical Image Analysis 5, pp. 127-142, 2001.
[10] R. Gonzalez and R. Woods, Digital Image Processing, 2nd Ed., Prentice-Hall Inc., Upper Saddle River, New Jersey, USA, 2002.
[11] T. Yoo (Editor), Insight into Images, A K Peters Ltd., Wellesley, MA, USA, 2004.
[12] T. McInerney and D. Terzopoulos, "Deformable models in medical image analysis: A survey," Medical Image Analysis 1, pp. 91-108, 1996.
[13] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active contour models," International Journal of Computer Vision 1, pp. 321-331, 1988.
[14] J. Liang, T. McInerney, and D. Terzopoulos, "United snakes," Medical Image Analysis 10, pp. 215-233, 2006.
[15] S. Osher and N. Paragios, Geometric Level Set Methods in Imaging, Vision and Graphics, Springer Verlag, 2003.
[16] V. Caselles, R. Kimmel, and G.
Sapiro, "Geodesic active contours," IJCV 22(1), pp. 61-79, 1997.
[17] J. A. Sethian, Level Set Methods: Evolving Interfaces in Geometry, Fluid Mechanics, Cambridge University Press, 1996.
[18] Y. Kang, K. Engelke, and W. Kalender, "Interactive 3D editing tools for image segmentation," Medical Image Analysis 8, pp. 35-46, 2004.
[19] T. Buck, H. Ehricke, W. Strasser, and L. Thurfjel, 3D Segmentation of Medical Structures by Integration of Ray-casting with Anatomic Knowledge, Computers and Graphics (1995) 441-449.
[20] J. Cabral, K. White, Y. Kim, and E. Effmann, Interactive Segmentation of Brain Tumors in MR Images using 3D Region-Growing, SPIE Medical Imaging (1993) 171-181.
[21] J. Udupa, L. Wei, S. Samarasekera, M. Van Buchem, and R. Grossman, Multiple Sclerosis Lesion Quantification using Fuzzy-Connectedness Principles, IEEE TMI (1997) 598-609.
[22] Y. Boykov and G. Funka-Lea, "Graph cuts and efficient N-D image segmentation," International Journal of Computer Vision 70, pp. 109-131, 2006.
[23] W. Higgins, J. Reinhardt, and W. Sharp, Semi-automatic Construction of 3D Medical Image-Segmentation Processes, SPIE Medical Imaging (1994) 59-71.
[24] K. Hinshaw, R. Altman, and J. Brinkley, Shape-based Models for Interactive Segmentation of Medical Images, SPIE Medical Imaging (1995) 771-780.
[25] T. McInerney and S. Akhavan, "Sketch initialized snakes for rapid, accurate and repeatable interactive medical image segmentation," ISBI, pp. 398-401, 2006.
[26] W. Barrett and E. Mortensen, "Interactive live-wire boundary extraction," Medical Image Analysis 1, pp. 331-341, 1997.
[27] O. Wink, W. Niessen, and M. Viergever, Multiscale Vessel Tracking, IEEE TMI (2004) 130-133.
[28] C. Rother, V. Kolmogorov, and A. Blake, "GrabCut: interactive foreground extraction using iterated graph cuts," ACM Transactions on Graphics, pp. 309-314, 2004.
[29] L. Grady, "Random walks for image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence 28, pp. 1768-1783, November 2006.
[30] R.
Bellman, Dynamic Programming, Princeton University Press, 1957.
[31] E. Dijkstra, "A note on two problems in connexion with graphs," Numerische Mathematik 1, pp. 269-270, December 1959.
[32] J. Sethian, Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision, and Materials Science, Cambridge University Press, 1999.
[33] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence 8(6), pp. 679-698, 1986.
[34] A. Falcao, J. Udupa, S. Samarasekera, S. Sharma, B. Hirsch, and R. Lotufo, "User-steered image segmentation paradigms: Live wire and live lane," Graphical Models and Image Processing 60, pp. 233-260, July 1998.
[35] A. Falcao, J. Udupa, and F. Miyazawa, "An ultra-fast user-steered image segmentation paradigm: live wire on the fly," IEEE TMI 19, pp. 55-62, Jan 2000.
[36] H. W. Kang and S. Y. Shin, "Enhanced lane: interactive image segmentation by incremental path map construction," Graphical Models 64(5), pp. 282-303, 2002.
[37] A. Schenk, G. Prause, and H. Peitgen, "Local cost computation for efficient segmentation of 3D objects with live wire," SPIE Medical Imaging, pp. 1357-1364, 2001.
[38] T. McInerney and D. Terzopoulos, "T-snakes: Topology adaptive snakes," Medical Image Analysis 4, pp. 73-91, 2000.
[39] H. Delingette, "General object reconstruction based on simplex meshes," IJCV 32, pp. 111-146, 1999.
[40] J.-O. Lachaud and A. Montanvert, "Deformable meshes with automated topology changes for coarse-to-fine 3D surface extraction," Medical Image Analysis 3(2), pp. 187-207, 1998.
[41] J. Montagnat, H. Delingette, and N. Ayache, "A review of deformable surfaces: Topology, geometry and deformation," 19(14), pp. 1023-1040, 2001.
[42] X. Bresson, S. Esedoglu, P. Vandergheynst, J.-P. Thiran, and S. Osher, "Fast global minimization of the active contour/snake model," Journal of Mathematical Imaging and Vision 28(2), (2007) 151-167.
[43] L. Cohen and R.
Kimmel, "Global minimum for active contour models: A minimal path approach," IJCV 24, pp. 57-78, 1997.
[44] N. Paragios, "User-aided boundary delineation through the propagation of implicit representations," MICCAI, pp. 678-686, 2003.
[45] M. Rumpf and R. Strzodka, "Level set segmentation in graphics hardware," IEEE International Conference on Image Processing, pp. 1103-1106, 2001.
[46] A. Lefohn, J. Cates, and R. Whitaker, "Interactive, GPU-based level sets for 3D segmentation," MICCAI, pp. 564-572, 2003.
[47] P. Yushkevich, J. Piven, H. Hazlett, R. Smith, S. Ho, J. Gee, and G. Gerig, "User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability," NeuroImage 31, pp. 1116-1128, 2006.
[48] M. Urschler, H. Mayer, R. Bolter, and F. Leberl, "The livewire approach for the segmentation of left ventricle electron-beam CT images," In 26th Workshop of the Austrian Association for Pattern Recognition (AGM/AAPR), 2002.
[49] A. Gougoutas, A. Wheaton, A. Borthakur, E. Shapiro, J. Kneeland, J. Udupa, and R. Ravinder, "Cartilage volume quantification via live wire segmentation," Academic Radiology 11, pp. 1389-1395, Jan 2004.
[50] H. Zhang, J. Nosher, and P. Yim, "Surface reconstruction from orthogonal contours," SPIE Medical Imaging, pp. 98-104, 2006.
[51] M. Jackowski, M. Satter, and A. Goshtasby, "Approximating digital 3D shapes by rational gaussian surfaces," IEEE Transactions on Visualization and Computer Graphics 9(1), pp. 56-69, 2003.
[52] H. Zhao, S. Osher, B. Merriman, and M. Kang, "Implicit, nonparametric shape reconstruction from unorganized points using a variational level set method," Computer Vision and Image Understanding 80, pp. 295-319, 2000.
[53] G. Turk and J. O'Brien, "Shape transformation using variational implicit functions," Proceedings of SIGGRAPH 99: Computer Graphics, pp. 335-342, July 1999.
[54] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W.
Stuetzle, "Surface reconstruction from unorganized points," in SIGGRAPH '92: Proceedings of the 19th annual conference on Computer graphics and interactive techniques, pp. 71-78, ACM Press, New York, NY, USA, 1992.
[55] R. Saboo, J. Rosenman, and S. Pizer, "Geointerp: Contour interpolation with geodesic snakes," The Insight Journal, July 2006.
[56] Y. Boykov and M. Jolly, "Interactive organ segmentation using graph cuts," MICCAI, pp. 276-286, 2000.
[57] A. Souza, J. Udupa, G. Grevera, D. O. Y. Sun, N. Sun, and S. M, "Iterative live wire and live snake: New user-steered 3D image segmentation paradigms," SPIE Medical Imaging, pp. 1159-1165, March 2006.
[58] A. Schenk, G. Prause, and H. Peitgen, "Efficient semiautomatic segmentation of 3D objects in medical images," MICCAI, pp. 186-195, 2000.
[59] F. Malmberg, E. Vidholm, and I. Nystrom, A 3D Live-Wire Segmentation Method for Volume Images Using Haptic Interaction.
[60] A. Falcao and F. Bergo, "Interactive volume segmentation with differential image foresting transforms," IEEE TMI 23, pp. 1100-1108, September 2004.
[61] A. Falcao and J. Udupa, "A 3D generalization of user-steered live-wire segmentation," Medical Image Analysis 4, pp. 389-402, 2000.
[62] K. Lu and W. Higgins, "Improved 3D live-wire method with application to 3D CT chest image analysis," SPIE Medical Imaging, pp. 189-203, 2006.
[63] G. Hamarneh, J. Yang, C. McIntosh, and M. Langille, "3D live-wire-based semi-automatic segmentation of medical images," SPIE Medical Imaging, pp. 1597-1603, 2005.
[64] S. Papert, Mindstorms: Children, Computers, and Powerful Ideas, Basic Books, Inc., New York, NY, USA, 1980.
[65] M. Ackerman, "The visible human project," Proceedings of the IEEE 86, pp. 504-511, March 1998.
[66] A. Szymczak, A. Stillman, A. Tannenbaum, and K. Mischaikow, Coronary Vessel Trees from 3D Imagery: A Topological Approach, Medical Image Analysis 10(4) (2004) 548-559.
[67] A. Can, H. Shen, J. N. Turner, H. L. Tanenbaum, and B.
Roysam, Rapid Automated Tracing and Feature Extraction from Live High-resolution Retinal Fundus Images using Direct Exploratory Algorithms, IEEE Transactions on Information Technology in Biomedicine (1999) 125-138.
[68] J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, and R. L. Kennedy, Measurement of Retinal Vessel Widths From Fundus Images Based on 2-D Modeling, IEEE TMI (2004) 1196-1204.
[69] C. Kirbas and F. Quek, A Review of Vessel Extraction Techniques and Algorithms, ACM Computing Surveys (2004) 81-121.
[70] F. Ricci and R. Perfetti, Retinal Blood Vessel Segmentation Using Line Operators and Support Vector Classification, IEEE TMI (2007) 1357-1365.
[71] J. Soares, J. Leandro, R. Cesar, H. Jelinek, and M. Cree, Retinal Vessel Segmentation Using the 2-D Gabor Wavelet and Supervised Classification, IEEE TMI (2006) 1214-1222.
[72] J. Staal, M. Abramoff, M. Niemeijer, M. Viergever, and B. van Ginneken, Ridge-Based Vessel Segmentation in Color Images of the Retina, IEEE TMI (2004) 501-509.
[73] D. Eberly, R. Gardner, B. Morse, S. Pizer, and C. Scharlach, Ridges for Image Analysis, Journal of Mathematical Imaging and Vision (1994) 353-373.
[74] B. Lam and H. Yan, A Novel Vessel Segmentation Algorithm for Pathological Retina Images Based on the Divergence of Vector Fields, IEEE TMI (2008) 237-246.
[75] L. Lorigo, O. D. Faugeras, W. Grimson, R. Keriven, R. Kikinis, A. Nabavi, and C.-F. Westin, CURVES: Curve evolution for vessel segmentation, Medical Image Analysis (2001) 195-206.
[76] R. Manniesing and W. Niessen, Local Speed Functions in Level Set Based Vessel Segmentation, MICCAI (2004) 475-482.
[77] R. Manniesing, M. Viergever, and W. Niessen, Vessel Axis Tracking Using Topology Constrained Surface Evolution, IEEE TMI (2007) 309-316.
[78] D. Nain, A. Yezzi, and G. Turk, Vessel Segmentation Using a Shape Driven Flow, MICCAI (2004) 51-59.
[79] H. Luo, Q. Lu, and R. Acharya, Robust Snake Model, IEEE CVPR (2000) 452-457.
[80] A. Frangi, W. Niessen, R. Hoogeveen, T. van Walsum, and M.
Viergever, Quantitation of vessel morphology from 3D MRA, MICCAI (1999) 358-367.
[81] T. McInerney and D. Terzopoulos, Topology Adaptive Deformable Surfaces for Medical Image Volume Segmentation, IEEE TMI 18(10) (1999) 840-850.
[82] A. Vasilevskiy and K. Siddiqi, Flux Maximizing Geometric Flows, IEEE TPAMI (2002) 1565-1578.
[83] P. Yan and A. Kassim, Segmentation of Volumetric MRA Images by using Capillary Active Contour, Medical Image Analysis (2006) 317-329.
[84] T. Chan, S. Esedoglu, and M. Nikolova, Algorithms for Finding Global Minimizers of Image Segmentation and Denoising Models, SIAM Journal on Applied Mathematics 66(5) (2006) 1632-1648.
[85] N. Hraiech, M. Carroll, M. Rochette, J. Coatrieux, 3D Vascular Shape Segmentation for Fluid-Structure Modeling, SMI (2007) 226-231.
[86] K. Li, X. Wu, D. Chen, and M. Sonka, Optimal Surface Segmentation in Volumetric Images - A Graph-Theoretic Approach, IEEE PAMI 28(1) (2006) 119-134.
[87] T. Koller, G. Gerig, G. Szekely, and D. Dettwiler, Multiscale Detection of Curvilinear Structures in 2-D and 3-D Image Data, ICCV (1995) 864-869.
[88] C. Lorenz, I. Carlsen, T. Buzug, C. Fassnacht, and J. Weese, Multi-scale Line Segmentation With Automatic Estimation of Width, Contrast and Tangential Direction in 2D and 3D Medical Images, CVRMed-MRCAS'97, LNCS (1997) 233-242.
[89] Y. Sato, S. Nakajima, N. Shiraga, H. Atsumi, S. Yoshida, T. Koller, G. Gerig, and R. Kikinis, Three-Dimensional Multi-scale Line Filter for Segmentation and Visualization of Curvilinear Structures in Medical Images, Medical Image Analysis (1998) 143-168.
[90] S. Aylward and E. Bullitt, Initialization, Noise, Singularities and Scale in Height Ridge Traversal for Tubular Object Centerline Extraction, IEEE TMI (2002) 61-75.
[91] A. Frangi, W. Niessen, K. Vincken, and M. Viergever, Multiscale Vessel Enhancement Filtering, MICCAI (1998) 130-137.
[92] C. Canero and P. Radeva, Vesselness Enhancement Diffusion, Pattern Recognition Letters (2003) 3141-3151.
[93] R. Manniesing, M. A. Viergever, and W. J.
Niessen, Vessel Enhancing Diffusion: A Scale Space Representation of Vessel Structures, Medical Image Analysis (2006) 815-825.
[94] M. Martinez-Perez, A. Hughes, S. Thom, A. Bharath, and K. Parker, Segmentation of Blood Vessels from Red-free and Fluorescein Retinal Images, Medical Image Analysis (2007) 47-61.
[95] K. Krissian, G. Malandain, N. Ayache, R. Vaillant, and Y. Trousset, Model Based Detection of Tubular Structures in 3D Images, Computer Vision and Image Understanding (2000) 130-171.
[96] C. Wu, G. Agam, and P. Stanchev, A Hybrid Filtering Approach to Retinal Vessel Segmentation, IEEE ISBI (2007) 604-607.
[97] M. Sofka and C. Stewart, Retinal Vessel Centerline Extraction Using Multiscale Matched Filters, Confidence and Edge Measures, IEEE TMI (2006) 1531-1546.
[98] C. McIntosh and G. Hamarneh, Vessel Crawlers: 3D Physically-based Deformable Organisms for Vasculature Segmentation and Analysis, IEEE CVPR (2006) 1084-1091.
[99] T. Deschamps and L. Cohen, Fast extraction of minimal paths in 3D images and applications to virtual endoscopy, Medical Image Analysis (2001) 281-299.
[100] S. Young, B. Movassaghi, J. Weese, and V. Rasche, 3D Vessel Axis Extraction using 2D Calibrated X-ray Projections for Coronary Modeling, SPIE Medical Imaging (2003) 1491-1498.
[101] H. Li and A. Yezzi, Vessels as 4D Curves: Global Minimal 4D Paths to 3D Tubular Structure Extraction, IEEE MMBIA (2006).
[102] M. Niemeijer, J.J. Staal, B. van Ginneken, M. Loog, and M.D. Abramoff, Comparative Study of Retinal Vessel Segmentation Methods on a new Publicly Available Database, SPIE Medical Imaging (2004) 648-656.
[103] M. Poon, G. Hamarneh, and R. Abugharbieh, Segmentation of Complex Objects with Non-spherical Topologies from Volumetric Medical Images using 3D Livewire, SPIE Medical Imaging (2007) 1-10.
[104] M. Poon, G. Hamarneh, R. Abugharbieh, Live-Vessel: Extending Livewire for Simultaneous Extraction of Optimal Medial and Boundary Paths in Vascular Images, MICCAI (2007) 444-451.
[105] A. Clements, H.
Zhang, Minimum Ratio Contours on Surface Meshes, IEEE SMI (2006) 57-78.
[106] J. O'Rourke, Art Gallery Theorems and Algorithms, Oxford University Press (1987).
[107] S. Konig, J. Hesser, 3D Live-Wires on Pre-Segmented Volume Data, SPIE Medical Imaging (2005) 1674-1681.
[108] A. Crauser, K. Mehlhorn, U. Meyer, and P. Sanders, A Parallelization of Dijkstra's Shortest Path Algorithm, MFCS (1998) 722-731.
[109] J. Jomier, V. LeDigarcher, and S. Aylward, Comparison of Vessel Segmentations using STAPLE, MICCAI (2005) 523-530.
Title | 3D livewire and live-vessel : minimal path methods for interactive medical image segmentation |
Creator |
Poon, Miranda |
Publisher | University of British Columbia |
Date | 2008 |
Date Issued | 2008-10-31 |
Description | Medical image analysis is a ubiquitous and essential part of modern health care. A crucial first step is segmentation, which is often complicated by many factors, including subject diversity, pathology, noise corruption, and poor image resolution. Traditionally, segmentation was performed by experts tracing structures manually. While considered accurate, this process is time consuming and tedious, especially when performed slice-by-slice on three-dimensional (3D) images over large datasets, or on two-dimensional (2D) but topologically complicated images such as a retinography. Fully-automated methods, on the other hand, are typically faster, but work best with data-dependent, carefully tuned parameters and still require user validation and refinement. This thesis contributes to the field of medical image segmentation by proposing a highly-automated, interactive approach that effectively merges user knowledge and efficient computing. To this end, our work focuses on graph-based methods and offers globally optimal solutions. First, we present a novel method for 3D segmentation based on a 3D Livewire approach. This method extends the 2D Livewire framework and is capable of handling objects with large protrusions, concavities, branching, and complex arbitrary topologies. Second, we propose a method for efficiently segmenting 2D vascular networks, called ‘Live-Vessel’. Live-Vessel simultaneously extracts vessel centrelines and boundary points, globally optimizing over both spatial variables and vessel radius. Both proposed methods are validated on synthetic and real medical data, and are shown to be highly reproducible, accurate, and efficient. They are also resilient to high amounts of noise and insensitive to internal parameterization. |
Extent | 2313900 bytes |
Subject |
Medical Imaging Health Care Image Segmentation |
Genre |
Thesis/Dissertation |
Type |
Text |
File Format | application/pdf |
Language | Eng |
Collection |
Electronic Theses and Dissertations (ETDs) 2008+ |
Date Available | 2008-10-31 |
Provider | Vancouver : University of British Columbia Library |
DOI | 10.14288/1.0066786 |
Degree |
Master of Applied Science - MASc |
Program |
Electrical and Computer Engineering |
Affiliation |
Faculty of Applied Science; Department of Electrical and Computer Engineering |
Degree Grantor | University of British Columbia |
Graduation Date | 2008-05 |
Campus |
UBCV |
Scholarly Level | Graduate |
URI | http://hdl.handle.net/2429/2736 |
Aggregated Source Repository | DSpace |
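The abstract above describes graph-based segmentation methods (Livewire and Live-Vessel) that compute globally optimal minimal paths over an image graph. As an illustrative aside only, and not the thesis's actual code, the core minimal-path step can be sketched with Dijkstra's algorithm on a 2D cost grid, where each pixel is a node and each edge is weighted by the cost of entering the neighbouring pixel (the function name `dijkstra_grid` and the 4-connected neighbourhood are assumptions for this sketch):

```python
import heapq

def dijkstra_grid(cost, start, goal):
    """Globally optimal minimal path on a 2D cost grid via Dijkstra's
    algorithm (illustrative sketch of the Livewire-style graph search).
    `cost[r][c]` is the penalty for stepping onto pixel (r, c)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}           # best known cost from start to each node
    prev = {}                     # back-pointers for path reconstruction
    heap = [(0.0, start)]         # min-priority queue of (cost, node)
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break                 # goal settled: its distance is optimal
        if d > dist.get(node, float("inf")):
            continue              # stale queue entry, skip
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # backtrack from goal to start along the stored predecessors
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# The returned path hugs the low-cost pixels, e.g. a cheap corridor
# through a grid of expensive (9) cells:
path = dijkstra_grid([[1, 1, 9],
                      [9, 1, 9],
                      [9, 1, 1]], (0, 0), (2, 2))
```

In Livewire the per-pixel costs come from edge features (gradient magnitude, Laplacian zero-crossings), while Live-Vessel extends the node set from (x, y) to (x, y, r) so the same search also optimizes the vessel radius; this sketch only shows the shared shortest-path machinery.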