AUTOMATED ANALYSIS OF VASCULAR STRUCTURES OF SKIN LESIONS: SEGMENTATION, PATTERN RECOGNITION AND COMPUTER-AIDED DIAGNOSIS

by

Pegah Kharazmi

B.Sc., Shiraz University, 2009
M.Sc., Shiraz University, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Biomedical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April 2018

© Pegah Kharazmi, 2018

Abstract

Skin disorders are among the most common healthcare referrals in Canada, affecting a large population and imposing high healthcare costs. Early detection plays an essential role in efficient management and better outcomes. However, restricted access to dermatologists and limited training of other healthcare professionals pose a major challenge for early detection.

Computer-aided systems have great potential as viable tools to identify early skin abnormalities. The initial clinical diagnosis phase of most skin disorders involves visual inspection of the lesion for specific features associated with certain abnormalities. Among the main cutaneous features are vascular structures, which are significantly involved in the pathogenesis, diagnosis, and treatment outcome of skin abnormalities. The presence and morphology of cutaneous vessels are suggestive clues for specific abnormalities. However, there has been no systematic approach to comprehensive analysis of skin vasculature. In this thesis, we propose a three-level framework to systematically detect, quantify and analyze the characteristics of superficial cutaneous blood vessels.

First, we investigate the vessels at pixel level. We propose novel techniques for detection (absence/presence) and segmentation of vascular structures in pigmented and non-pigmented lesions and evaluate the performance quantitatively. We develop a fully automatic vessel segmentation framework based on decomposing the skin into its component chromophores and accounting for shape. Furthermore, we design a deep learning framework based on stacked sparse autoencoders for detection and localization of skin vasculature. Compared to previous studies, we achieve higher detection performance while preserving clinical feature interpretability.

Next, we analyze the vessels at lesion level. We propose a novel set of architectural, geometrical and topological features to differentiate vascular morphologies. The defined feature set can effectively differentiate four major classes of vascular patterns.

Finally, we investigate the vessels at disease level. We analyze the relationship between vascular characteristics and disease diagnosis. We design and deploy novel features that evaluate the total blood content and vascular characteristics of the lesion to differentiate cancerous lesions from benign ones. We also build a system that integrates the patient's clinical information and the lesion's visual characteristics using deep feature learning, which achieves superior cancer classification performance compared to current techniques without the need for handcrafted high-level features.

Lay Summary

Skin disorders are among the most common healthcare referrals in Canada, affecting a large population and imposing high healthcare costs. Early detection plays an essential role in efficient management and better outcomes of skin disorders. A major diagnostic clue in skin conditions is the characteristics of blood vessels.
Many physicians report that using a simple portable tool, called a dermoscope, can help diagnose skin cancer without biopsy. Mastering the diagnostic technique of dermoscopy requires years of training. Computer-aided systems have great potential to provide a viable tool to identify early stage skin abnormalities. In this thesis, we develop a computer-aided system to detect, quantify and analyze cutaneous blood vessels, which can help with the diagnosis and management of skin disease. This computer tool will help healthcare professionals better use dermoscopy, which will ultimately save lives and reduce healthcare costs. v  Preface  This thesis presents research conducted by Pegah Kharazmi, under the guidance of Dr. Tim K. Lee and Dr. Z. Jane Wang. A list of publications resulting from the work presented in this thesis is provided on the following page. Publications have been modified to make the thesis coherent. A version of the contents of Chapter 2 (Part one and Part two) are published in two journal papers [J1], [J2] and three conference papers [C1], [C2], [C3]. The work in chapter 3 is published in one conference proceeding [C4] and is also ready for submission to a related journal [J3]. The content of Chapter 4 is published in one journal paper [J4] and three conference papers [C5], [C6], [C7]. The work presented in these manuscripts was performed by Pegah Kharazmi, which includes literature review, designing and implementing the proposed algorithms, performing all experiments, analyzing the results and writing the manuscript. The entire work was conducted under the supervision and with editorial input from Dr. Tim K. Lee and Dr. Z. Jane Wang. The thesis was written by Pegah Kharazmi, with editing assistance from Dr. Tim K. Lee and Dr. Z. Jane Wang. The work in this thesis does not involve any studies with human participants or animals performed by the author. vi   [J1] P. Kharazmi, M. I. AlJasser, H. Lui, Z. J. Wang and T. K. Lee, "Automated Detection and Segmentation of Vascular Structures of Skin Lesions Seen in Dermoscopy, With an Application to Basal Cell Carcinoma Classification," in IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 6, pp. 1675-1684, Nov. 2017. (Selected as Journal’s Feature Article). [J2] P. Kharazmi, J. Zheng, H. Lui, Z. J. Wang and T. K. Lee, "A Computer-Aided Decision Support System for Detection and Localization of Cutaneous Vasculature in Dermoscopy Images Via Deep Feature Learning," in Journal of Medical Systems (2018) 42: 33.  [C1] P. Kharazmi, H. Lui, W. V. Stoecker, T. K. Lee, "Automatic detection and segmentation of vascular structures in dermoscopy images using a novel vesselness measure based on pixel redness and tubularness", in Proc. SPIE 9414, Medical Imaging 2015: Computer-Aided Diagnosis 94143M, Orlando, FL, 2015.  [C2] P. Kharazmi, H. Lui, Z. J. Wang and T. K. Lee, "Automatic detection of basal cell carcinoma using vascular-extracted features from dermoscopy images," in 2016 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), Vancouver, BC, 2016, pp. 1-4. [C3] P. Kharazmi, M. AlJasser, H. Lui, W. V. Stoecker, T. K. Lee, "Automatic Segmentation of Tubular Shape Vascular Structures in Dermoscopy Images: A Framework Based on Vascular Erythema and Shape ", in 23rd World Congress of Dermatology, Vancouver, Canada, 2015. [C4] P. Kharazmi, M. AlJasser, H. Lui, W. V. Stoecker, T. K. 
Lee, "Automatic Pattern Classification of Vascular structures in Dermoscopy Based on Vascular Morphology and Architectural Arrangement: A Dermoscopic Diagnostic Clue", in 5th World Congress of Dermoscopy, Vienna, Austria, 2015. vii  [J3] P. Kharazmi, S. Kalia, H. Lui, Z. J. Wang and T. K. Lee, "Computer-aided morphology analysis and architectural pattern recognition of cutaneous vascular structures under dermoscopy examination.", Ready to Submit. [J4] P. Kharazmi, S. Kalia, H. Lui, Z. J. Wang and T. K. Lee, "A feature fusion system for basal cell carcinoma detection through data-driven feature learning and patient profile, " in Skin Research & Technology. 2017; 00:1–9.  [C5] P. Kharazmi, S. Kalia, H. Lui, Z. J. Wang and T. K. Lee, "Computer-aided detection of basal cell carcinoma through blood content analysis in dermoscopy images", in Proc. SPIE 9414, Medical Imaging 2018: Computer-Aided Diagnosis 105750O, Houston, TX, 2018.   [C6] P. Kharazmi, H. Lui, Z.J. Wang, T.K. Lee, “New Trends of Artificial Intelligence in Dermatology: Generalizing Skin Image Analysis”, in International Society of Biophysics and Imaging of Skin, Lisbon, Portugal, 2016. [C7] P. Kharazmi, M. AlJasser, H. Lui, W. V. Stoecker, T. K. Lee, "Development of an automated image analysis technique for vasculature detection with an application to characterization of cutaneous vasculature in basal cell carcinoma", International Society for Biophysics and Imaging of the Skin, Vancouver, Canada, 2015. viii  Table of Contents  Abstract .......................................................................................................................................... ii Lay Summary ............................................................................................................................... iv Preface .............................................................................................................................................v Table of Contents ....................................................................................................................... viii List of Tables .............................................................................................................................. xiv List of Figures ...............................................................................................................................xv List of Abbreviations ................................................................................................................. xix Acknowledgements ......................................................................................................................xx Dedication .................................................................................................................................. xxii Chapter 1: Introduction ................................................................................................................1 1.1 Human Skin Biology....................................................................................................... 4 1.1.1 Anatomy of the Skin ................................................................................................... 4 1.1.2 Epidermis .................................................................................................................... 5 1.1.3 Dermis ......................................................................................................................... 
6 1.1.4 Subcutis (Hypodermis) ............................................................................................... 6 1.1.5 Skin Chromophores .................................................................................................... 6 1.2 Skin Cancer ..................................................................................................................... 8 1.3 Dermoscopy and Clinical Diagnosis ............................................................................. 11 1.4 Vasculature and Skin Disorders .................................................................................... 14 1.5 Clinical Dermoscopy of Vascular Structures ................................................................ 19 1.6 Overview of the Literature ............................................................................................ 20 ix  1.6.1 Previous Work on Detection and Segmentation of Cutaneous Vasculature ............. 21 1.6.2 Previous Work on Morphology and Pattern Recognition of Cutaneous Vasculature ………………………………………………………………………………………24 1.6.3 Previous Work on Computer-aided Diagnosis of Skin Cancer based on Vascular Properties .............................................................................................................................. 25 1.7 Motivation ..................................................................................................................... 27 1.8 Problem Statement ........................................................................................................ 27 1.9 The Proposed Framework ............................................................................................. 29 1.9.1 Pixel-level (microscopic, low-level) ......................................................................... 29 1.9.2 Lesion level (mesoscopic, medium-level) ................................................................ 30 1.9.3 Disease-level (macroscopic, high-level) ................................................................... 30 1.10 Thesis Outline ............................................................................................................... 32 Chapter 2: Analysis of Cutaneous Vasculature at Pixel-level: Detection, Segmentation, Quantification ...............................................................................................................................34 2.1 Part One: Automated Segmentation of Cutaneous Vasculature via Hemoglobin Extraction and Multiscale Shape Analysis ............................................................................... 35 2.1.1 Overview of the Proposed Method ........................................................................... 35 2.1.2 Skin Decomposition .................................................................................................. 38 2.1.3 Erythema Extraction ................................................................................................. 42 2.1.4 Shape Filtering and Vessel Mask Extraction ............................................................ 44 2.1.5 Experimental Results ................................................................................................ 46 2.1.6 Discussion ................................................................................................................. 55 2.1.7 Limitations ................................................................................................................ 
56 x  2.2 Part Two: Detection and Localization of Cutaneous Vasculature in Dermoscopy Images via Deep Feature Learning ........................................................................................................ 59 2.2.1 Overview of the Proposed Method ........................................................................... 62 2.2.2 Unsupervised Feature Learning through SSAE ........................................................ 63 2.2.3 Vasculature Detection through Softmax Classification ............................................ 66 2.2.4 Experimental setup.................................................................................................... 68 2.2.5 Training Set Preparation and Labeling ..................................................................... 68 2.2.6 Parameter Setting ...................................................................................................... 69 2.2.7 Vessel Detection Performance Evaluation ................................................................ 70 2.2.8 Experimental Results ................................................................................................ 70 2.2.9 Vessel Detection Results........................................................................................... 70 2.2.10 Comparison with Supervised Techniques ............................................................. 73 2.2.11 Comparison with Deep Networks (Unsupervised Feature Transfer) .................... 74 2.2.12 Feature Visualization ............................................................................................ 75 2.2.13 Discussion ............................................................................................................. 76 2.2.14 Conclusion ............................................................................................................ 80 Chapter 3: Analysis of Cutaneous Vasculature at Lesion-level: Morphology Analysis and Pattern Recognition .....................................................................................................................82 3.1 Identifying Valid Lesional Vasculature ........................................................................ 84 3.2 Design and Extraction of New Vascular Feature Set.................................................... 85 3.2.1 Geometric Feature Set............................................................................................... 86 3.2.1.1 Vessel Topology ............................................................................................... 86 3.2.1.2 Graph Features .................................................................................................. 90 xi  3.2.2 Orientation and Distribution Feature Set .................................................................. 91 3.2.3 Chromatic Features of the Vessel Segments ............................................................. 93 3.2.4 General Lesion Features ........................................................................................... 93 3.3 Classification................................................................................................................. 93 3.4 Experimental Results .................................................................................................... 94 3.4.1 Dataset....................................................................................................................... 
94 3.4.2 Results and Discussion ............................................................................................. 94 3.5 Conclusion .................................................................................................................... 95 Chapter 4: Analysis of Cutaneous Vasculature at Disease-level: Classification, Assessment, Monitoring ....................................................................................................................................97 4.1 Case Study One: Assessment of Correlation Factor between Basal Cell Carcinoma Size and Lesions’ Vascular Density ................................................................................................. 99 4.1.1 Method ...................................................................................................................... 99 4.1.2 Results ..................................................................................................................... 100 4.1.3 Conclusion .............................................................................................................. 101 4.2 Case Study Two: Computer-Aided Detection of Basal Cell Carcinoma through Total Blood Content Analysis in Dermoscopy Images .................................................................... 101 4.2.1 Preprocessing .......................................................................................................... 102 4.2.2 Blood Content Segmentation .................................................................................. 102 4.2.3 Feature Extraction ................................................................................................... 104 4.2.4 Classification........................................................................................................... 104 4.2.5 Experimental Setup ................................................................................................. 105 4.2.6 Classification Performance ..................................................................................... 105 xii  4.2.7 Conclusion .............................................................................................................. 107 4.3 Case Study Three: Computer-aided Detection of Basal Cell Carcinoma based on Lesions’ Vascular Properties .................................................................................................. 107 4.3.1 Vascular Features .................................................................................................... 108 4.3.2 Experimental Results .............................................................................................. 109 4.3.3 Discussion ............................................................................................................... 110 4.3.4 Conclusion .............................................................................................................. 113 4.4 Case Study Four: A Feature-fusion System for Basal Cell Carcinoma Detection through Data-Driven Feature Learning and Patient Profile ................................................................. 113 4.4.1 Overview of the Method ......................................................................................... 116 4.4.2 Unsupervised Vascular Kernel Learning through Sparse Autoencoder ................. 117 4.4.3 Kernel Application and Feature Maps Generation ................................................. 
119 4.4.4 Pooling and Condensed Feature Maps .................................................................... 121 4.4.5 Feature Fusion for BCC Detection through Softmax Classifier ............................. 121 4.4.6 Experimental Setup ................................................................................................. 123 4.4.6.1 Training Set Preparation and Labeling: .......................................................... 123 4.4.6.2 Parameter Setting: ........................................................................................... 124 4.4.6.3 Patient Profile: ................................................................................................ 124 4.4.7 Results and Discussion ........................................................................................... 124 4.4.8 Conclusion .............................................................................................................. 129 Chapter 5: Conclusions and Future Work ..............................................................................130 5.1 Significance and Potential Application of the Research ............................................. 130 5.2 Summary of Contributions .......................................................................................... 131 xiii  5.3 Directions for Future Work ......................................................................................... 133 5.3.1 Technical Direction ................................................................................................. 133 5.3.2 Clinical Direction .................................................................................................... 134 Bibliography ...............................................................................................................................136  xiv  List of Tables Table 2. 1: Generated reference values of the three clusters. ....................................................... 47 Table 2. 2: Quantitative segmentation results using different channels. ...................................... 54 Table 2. 3: Vascular detection performance ................................................................................. 55 Table 2. 4: Comparison of vessel detection performance ............................................................. 75  Table 3. 1: Classification performance of the proposed method .................................................. 95  Table 4. 1: Average vascular density of the lesions in the dataset. ............................................ 101 Table 4. 2: Blood content features. ............................................................................................. 105 Table 4. 3: BCC classification performance of the proposed method ........................................ 106 Table 4. 4: Feature evaluation and ranking ................................................................................. 106 Table 4. 5: Vascular features ...................................................................................................... 109 Table 4. 6: BCC classification performance of the proposed method ........................................ 110 Table 4. 7: BCC classification performance using only erythema or shape information ........... 111 Table 4. 8: Comparison with previous work ............................................................................... 112 Table 4. 9: BCC classification performance of the proposed framework ................................... 126 Table 4. 
10: Comparison with previous work ............................................................................. 128  xv  List of Figures Figure 1. 1: Examples of skin disorders with vascular involvement. ............................................. 2 Figure 1. 2: Schematic of human skin and its layers. ..................................................................... 4 Figure 1. 3: Diagram of epidermis layers. a) Histology image. b) schematics. .............................. 5 Figure 1. 4: Absorption spectra of major visible-absorbing pigments of human skin. ................... 7 Figure 1. 5: Dermoscopy and clinical image of a malignant melanoma. ....................................... 9 Figure 1. 6: Dermoscopy and clinical image of a basal cell carcinoma. ...................................... 10 Figure 1. 7: Dermoscopy and clinical image of a squamous cell carcinoma. ............................... 11 Figure 1. 8: Analogue and digital dermoscopes............................................................................ 12 Figure 1. 9: Dermoscopic features seen in a pigmented BCC ...................................................... 13 Figure 1. 10: Clinical and dermoscopy image of a BCC .............................................................. 15 Figure 1. 11: A number of vascular morphologies of skin lesions. .............................................. 17 Figure 1. 12: A number of vascular architecture of skin lesions. ................................................. 18 Figure 1. 13: Overview of the proposed computer-aided vascular analysis framework .............. 31 Figure 1. 14: Challenges of the proposed computer-aided vascular analysis framework ............ 32  Figure 2. 1: Dermoscopy of a number of skin lesions with vasculature. ...................................... 36 Figure 2. 2: Framework of the approach. ...................................................................................... 37 Figure 2. 3: Light transmission through a substance. ................................................................... 38 Figure 2. 4: RGB plane distribution of hemoglobin component among the three clusters. ......... 47 Figure 2. 5: Relative hemoglobin densities of two BCC lesions. ................................................. 49 Figure 2. 6: ICA decomposition of lesions. .................................................................................. 50 Figure 2. 7: Vessel shape probability maps. ................................................................................. 51 xvi  Figure 2. 8: Final vessel mask of the lesions (a)-(d) of Figure 2.6. .............................................. 52 Figure 2. 9: Distribution of the three clusters in RGB and LAB color space. .............................. 53 Figure 2. 10: Erythema clustering on a pigmented lesion through LAB values and the proposed method........................................................................................................................................... 53 Figure 2. 11: Erroneous hemoglobin extraction due to the presence of bubbles. ......................... 56 Figure 2. 12: Vessel Segmentation on a multicolor lesion by the proposed method. ................... 58 Figure 2. 13: Placement of excess melanin. .................................................................................. 59 Figure 2. 14: Diagram of the proposed computer-aided framework. ........................................... 63 Figure 2. 
15: The proposed framework of the stacked sparse autoencoder and softmax classifier for detecting vasculature in skin dermoscopic images. ................................................................ 64 Figure 2. 16: Schematic representation of a sparse autoencoder. ................................................. 65 Figure 2. 17: Examples of a) vascular and b) non-vascular patches extracted from the image dataset....................................................................................................................................................... 69 Figure 2. 18: Vessel detection and localization results on four candidate lesions with minimal to low pigmentation. ......................................................................................................................... 71 Figure 2. 19: Vessel detection and localization results on three candidate lesions with moderate to high pigmentation. ........................................................................................................................ 72 Figure 2. 20: Visualization of some of the feature representations. ............................................. 76 Figure 2. 21: Limitations of the proposed technique due to over-exposure. ................................ 77 Figure 2. 22: Examples of images with artifacts (oil immersion bubbles, out of focus, blur, low quality). ......................................................................................................................................... 78  Figure 3. 1: Schematics of different vascular morphology. .......................................................... 83 xvii  Figure 3. 2: Candidate lesions with different vessel patterns and the associated lesion and vascular masks............................................................................................................................................. 85 Figure 3. 3: Width and length Extraction. ..................................................................................... 88 Figure 3. 4: Branch point and axis ratio extraction. ...................................................................... 89 Figure 3. 5: Curvature extraction for two candidate lesions. ........................................................ 90 Figure 3. 6: Central distance calculation. ...................................................................................... 92  Figure 4. 1: Schematic diagram of the proposed approach for BCC vasculature and size correlation study. ........................................................................................................................................... 100 Figure 4. 2: Schematic diagram of the proposed approach for lesional blood content analysis. 102 Figure 4. 3: Decomposition of dermoscopy images for hemoglobin and melanin separation. .. 103 Figure 4. 4:  Extracted hemoglobin and melanin masks for dermoscopy images in Figure 4.2. 104 Figure 4. 5:  BCC classification ROC curves for color method, shape method and the proposed combined method ........................................................................................................................ 111 Figure 4. 6: Examples of vascular structures seen in dermoscopy of BCC. ............................... 114 Figure 4. 7: Simplified diagram of the proposed framework for BCC classification using feature fusion........................................................................................................................................... 117 Figure 4. 
8: Schematic of the proposed SAE. ............................................................................. 118 Figure 4. 9: Feature map generation through convolution of the learned kernels. ..................... 120 Figure 4. 10: Pooling of a sample feature map through average function. ................................. 121 Figure 4. 11: Schematic illustration of the feature fusion framework for BCC classification. .. 122 Figure 4. 12: Patient profile demographics. ................................................................................ 125 xviii  Figure 4. 13: ROC curves for BCC classification based on patient profile, SAE features and integrated framework .................................................................................................................. 126 Figure 4. 14: Visualization of some of the feature representations by SAE ............................... 128  xix  List of Abbreviations ATA                              American Telemedicine Association BCC                               Basal Cell Carcinoma CNN                              Convolutional Neural Network DCNN                           Deep Convolutional Neural Network EANN                            Evolving Artificial Neural Network GA                                 Genetic Algorithm GLCM                           Gray Level Cooccurrence Matrix HoC                               Histogram of Curvature ICA                                Independent Component Analysis PWS                              Port wine stain SAE                               Sparse AutoEncoder SCC                               Squamous Cell Carcinoma SSAE                             Stacked Sparse AutoEncoder   xx  Acknowledgements This thesis would not have been finished without the generous and kind support of many people. First, I want to express my sincere gratitude to my supervisors, Dr. Z. Jane Wang and Dr. Tim K. Lee, for their guidance and support throughout my PhD program. I thank Dr. Lee and Dr. Wang for believing in me, supporting me, encouraging me and guiding me in the last couple of years. I am forever indebted to them. My deepest gratitude goes to Dr. Harvey Lui, for all he has taught me during the years. His creativity, critical thinking, academic professionalism among other things has not only inspired me to grow academically but also as a person. I feel extremely lucky to have him as my mentor and role model. I also thank Dr. Sunil Kalia, who has not only continuously supported and encouraged me, but also contributed substantially in data preparation. I also thank my mentors and supervisors at Photomedicine research group, Dr. David McLean, Dr. Haishan Zeng, Dr. Stella Atkins and Dr. Vincent Richer who have helped me tremendously in my academic journey. With no doubt they are among the best people anyone could wish to work with. I also thank Dr. Rabab Ward for her valuable scientific guidance. I thank Dr. Mohammed AlJasser and Dr. William V. Stoecker for providing parts of the dataset. I also thank my current and former colleagues at the Photomedicine Research group, Dr. Lioudmila Tchvialeva, Dr. Jianhua Zhao, Mr. Zhenguo Wu, Ms. Yunxian Tian, Mr. Daniel Louie, Ms. He Huang and all other colleagues who never ceased to motivate, support and encourage me. I am honored to have been part of this amazing team. Also I would like to thank my labmates at the University of British Columbia, friends and coauthors, Mr. Liang Zou, Mr. Jiannan Zheng, Ms. Nandinee Haq, Mr. Liang Zhao, Ms. Jiayue Cai, Dr. 
Aiping Liu and all other colleagues. Their professional suggestions and personal encouragement were a big help and motivation along my journey. I would like to express my gratitude to Ms. Leanne Li, Karen Ng, Tegan Stusiak and the other staff at the Department of Dermatology and Skin Science as well as the Biomedical Engineering program. I am honored to have been the recipient of scholarships from the Canadian Institutes of Health Research-Skin Research Training Center and Faculty of Graduate Studies awards.

My heartfelt thanks go to my beloved parents, who have always been my biggest inspiration and who have taught me to dream big, be persistent, push hard and always aim for the best. I owe them everything. I also thank my brother, who has always been on my side.

Last but not least, my great gratitude goes to my best friend, my companion and my love, Reza, for standing beside me and supporting me unconditionally through the hardships and joys. Words are not enough to thank him for all the sacrifices he has made. I could never have done it without him.

Dedication

To my parents, for your unconditional love and sacrifices. To my brother, for your energy and support. To Reza, for being by my side every step of this journey.

Chapter 1: Introduction

Skin diseases are among the most common health problems in the United States and Canada [1, 2]. Approximately 85 million Americans received medical attention for at least one skin condition in 2013, exceeding the number of individuals with cardiovascular disease, diabetes or end-stage renal disease [2]. Morbidity and mortality from skin disorders are expected to rise, and the health care costs associated with skin conditions are among the fastest growing of any medical condition [3]. Based on a report by the American Academy of Dermatology on the burden of skin disease, in 2013 skin diseases led to an estimated direct health care cost of $75 billion and an indirect lost-opportunity cost of $11 billion [2].

Skin diseases cover a wide variety of conditions, with symptoms ranging from physical discomfort and hypersensitivity to severe disfigurement, organ damage, and death. Skin diseases can be categorized based on symptoms, physical appearance, cause, or pathologic or histologic examination; however, in practical terms, skin abnormalities can be divided into four major categories of neoplasm, infection, inflammation and developmental disorders based on their physio-pathology and characteristics.

In almost all types of skin abnormalities, cutaneous vessels and vascular structures either directly cause the disease or play a significant role in its physio-pathology. Neoplasms, to begin with, are defined as abnormal growths of skin cells, a category that includes cancerous and pre-cancerous lesions. This abnormal tissue growth creates a higher demand for oxygen and nutrients within the tumor. To address this demand, tumors trigger the growth of new vasculature into their environment (neovascularization), which is a main characteristic of neoplasms.

Skin infections, the second category of disorders, begin with the invasion of organisms into the skin (e.g., bacteria in skin cellulitis), followed by the influx of cells, chemicals, and proteins from the immune system to defend the body. This set of immune reactions is conducted through enlargement of blood vessels, an increase in blood flow and a higher permeability of the vessel walls at the site of infection.
Sometimes this reaction occurs without a specific trigger from outside the body, which creates the separate category of inflammatory skin disorders (e.g., psoriasis). The main hypothesis is that, for reasons that are not well defined, the immune system mounts an unwanted reaction to the normal structures of the skin. Although the external trigger is missing in inflammatory skin disorders, as compared with infections, the process of vasculature involvement is the same.

In addition, many developmental skin disorders such as hemangiomas, birthmarks, and port wine stains are direct results of overgrowth and malformation in the structure of cutaneous vessels. Figure 1.1 demonstrates examples of skin disorders with a vascular component.

Figure 1.1: Examples of skin disorders with vascular involvement. Vessels are a major factor in the physio-pathology, development and clinical appearance of skin disorders. a) Basal cell carcinoma skin cancer [4], b) rosacea [5], c) psoriasis [5], d) port wine stain (photo courtesy of Dr. H. Lui).

Timely detection of skin disorders (and especially skin cancer) not only leads to proper management of the disease but also prevents high costs of care. However, limited availability of and long waiting times to access highly trained dermatologists and expert physicians, together with a lack of access to other health care professionals, pose a major challenge for early detection. Computer-aided diagnostic systems have great potential to provide an inexpensive, fast and viable tool to identify skin abnormalities at an early stage and prevent complicated and disfiguring surgical procedures. Unlike most other abnormalities, the majority of skin disorders are visible on the skin surface. The clinical diagnosis procedure for most skin disorders involves visual inspection of the lesion, where the dermatologist looks for specific visual features and structures that are associated with certain abnormalities and malignancies. By providing better visualization of suspicious skin lesions and developing tools and techniques that mimic the steps of dermatologists' diagnostic approach, advanced computer-aided systems not only serve as clinical decision-support systems for clinicians but also help patients perform self-examination and remote monitoring where access to expert dermatologists is limited, thereby leading to early detection and reliable monitoring of skin disorders.

As discussed above, vascular structures are significantly involved in most skin abnormalities. Systematic detection, quantification, and analysis of skin vasculature can provide critical information towards the diagnosis, understanding and monitoring of the disease. Yet there is no comprehensive study on quantitative and systematic analysis of cutaneous vasculature, and visual inspection, the current technique, lacks precision and reliability. The purpose of this research is to develop techniques and algorithms to systematically detect, quantify, analyze and extract information about the cutaneous vasculature for the purpose of computer-aided diagnosis.

1.1 Human Skin Biology

1.1.1 Anatomy of the Skin

The skin is the largest organ in the human body, responsible not only for serving as a barrier and protecting the body against mechanical injury but also for regulating body temperature, preventing fluid loss and acting as a sensory organ [6]. Skin accounts for about 16% of total body weight, with a surface area of 1.8 m².
Human skin has a multilayered structure, consisting of the epidermis (the top layer), the dermis (the middle layer) and the subcutis or hypodermis (the deepest layer). Figure 1.2 demonstrates a schematic of the skin structure and its layers.

Figure 1.2: Schematic of human skin and its layers. Image is taken with permission from [7].

1.1.2 Epidermis

The outermost layer of human skin is called the epidermis. With an average thickness of 0.1 mm, the epidermis is mainly composed of keratinocytes, cells that produce the protein keratin, which provides the main protective barrier against harmful substances. The other type of cell present in the epidermis is the melanocyte, which produces and distributes melanin, the substance that gives pigment to the skin. There are no blood vessels in this layer; cells are nourished mainly through oxygen diffusion from the surrounding air and by the blood capillaries that extend to the outer part of the dermis [8]. The epidermis is the most visible region of the skin and has its own layered structure. The epidermis and its layers are illustrated in Figure 1.3.

Figure 1.3: Diagram of epidermis layers. a) Histology image. b) Schematics. Image is taken with permission from [9].

1.1.3 Dermis

Connected to and located directly beneath the epidermis is the dermis, a dense matrix of connective tissue [8]. Collagen and elastin fibers, two connective tissues that account for the strength, integrity and elasticity of the skin, are the main building blocks of the dermis. Most specialized structures of the skin, including hair follicles, sebaceous and sweat glands, blood vessels, nerves (sensory bodies) and fibroblasts (which synthesize collagen and elastin), are located in the dermis. The blood vessels of the dermis are responsible for nourishment and waste removal for the dermal and lower epidermal cells.

1.1.4 Subcutis (Hypodermis)

The subcutaneous layer is the deepest layer of the skin and is mainly composed of fat and loose connective tissue. The fat in this layer serves not only as energy storage but also as an insulator for the body, and it regulates body temperature through heat exchange [6]. The hypodermis attaches the skin to the underlying muscles and bones and is meshed with blood vessels for quick nutrient delivery [8].

1.1.5 Skin Chromophores

Human skin color is the result of different skin chromophores. Chromophores exhibit characteristic absorption at specific wavelengths and hence determine the color of a substance. There are several major chromophores in human skin, with absorption spectra across different ranges of wavelengths. These chromophores may be endogenous (occurring naturally within the skin tissue) or exogenous (externally added to the tissue for therapeutic or cosmetic purposes), such as tattoos and inks. Besides normal skin chromophores, other chromophores may be present in the skin in pathological conditions [6, 8].

As depicted in Figure 1.4, among all the different skin chromophores, only a few demonstrate characteristic absorption within the visible range. In the visible range, melanin and hemoglobin are the dominant skin chromophores determining skin color.

Melanin is a skin pigment produced by a special group of cutaneous cells called melanocytes. It is located in the epidermis, within the top 50–100 µm of the skin, and is responsible for the dark brown to brown color of the skin. The amount of melanin produced in normal human skin varies with genetic factors as well as the degree of sun exposure [8, 10].
Hemoglobin is present in the blood and microvascular network of the dermis, typically found 50–500 µm below the epidermal surface. In the visible spectrum, hemoglobin dominates the absorption properties of the dermis and is responsible for the characteristic red to purple color of the skin. Reduced hemoglobin can also result in a pale skin color [8, 10].

Figure 1.4: Absorption spectra of major visible-absorbing pigments of human skin. The visible spectrum is from 390 to 700 nm. Image is reproduced from raw data [11].

1.2 Skin Cancer

Skin cancer, including melanoma, basal cell carcinoma (BCC) and squamous cell carcinoma (SCC), is the most common type of cancer in Canada and a significant burden on the healthcare system, accounting for most skin-related morbidity and mortality [2, 12]. It is estimated that the total economic burden of skin cancers in Canada will rise to $922 million per year by 2031 [13]. Together, skin cancers account for nearly the same number of new cancer cases as the four major cancers combined (lung, breast, colorectal, prostate) [12]. However, skin cancers tend to have a good prognosis if detected early. For example, melanoma, the most fatal type of skin cancer, has an overall 5-year survival rate of about 98 percent if detected early, before the tumor has spread to regional lymph nodes or other organs [14]. This number falls to 62 percent when the disease reaches the lymph nodes, and 16 percent when the disease metastasizes to distant organs [14].

Cancers of the skin may originate from melanocytes, keratinocytes, or other cells. Melanocytic lesions originate from abnormal activity of melanocyte cells and, in most cases, present as pigmented areas. This can reflect either an increase in the number of melanocytes (proliferation), an increase in melanin production without an increase in melanocytes, or relocation and nesting of melanocytes [8]. Non-melanocytic lesions arise from other cells such as keratinocytes and fibroblasts [6, 15]. The clinical appearance of skin cancers may involve pigmentation (increase of melanin), scales (increase of keratin), and abnormal vessels and erythema (increase of vasculature), among other structures. Although much work has been done to enhance the visualization and analysis of irregular pigmentation and lesional structure, cutaneous vessels have not been studied extensively.

Melanoma is the malignant tumor of melanocytes. Melanoma is less common than other skin cancer types; however, at advanced stages it can grow into the dermis, invade deeper skin layers and become fatal [6, 8, 16]. Melanoma can present specific diagnostic features, among which are irregular pigmentation, irregular vascularity, irregular borders, multiple colors, dots and streaks [17]. Figure 1.5 demonstrates a malignant melanoma with clinical features.

Figure 1.5: Dermoscopy and clinical image of a malignant melanoma. Dermoscopic features are annotated on the image. Image is taken with permission from [17].

Keratinocyte carcinomas account for around 30% of all cancers in Canada, which makes them the most common type of cancer among Canadians (and globally) [6, 14], with an incidence rate 18 to 20 times higher than that of melanoma. Sun exposure is recognized as the main contributing factor in these cancers, and therefore the face, ears, neck and hands are the most common areas of occurrence [6]. 99% of tumors in this group are cancers of keratinocytes, namely BCC and SCC [6, 8].
Timely detection of BCC and SCC leads to high cure rates and low chances of tumor recurrence and metastasis [14, 18].

BCC is the most common type of skin cancer, with more than 4 million cases diagnosed in the United States every year [14, 19, 20]. Although BCC rarely metastasizes, it can be highly invasive to the underlying tissue, destroying and disfiguring underlying structures such as bone, cartilage and soft tissue and potentially leading to fatality. Treatment of advanced BCC imposes high surgery costs on the health care system. Advanced BCC can have a huge negative impact on patients' physical well-being while also causing a psychological burden due to visible disfigurement. The tumor initiates from the basal keratinocytes located in the deepest layer of the epidermis and invades the dermis as strands or islands [21]. BCC has several subtypes; however, the most common type presents as small pinkish lesions that show fine branching blood vessels and may have a translucent background. BCC can also include pigmentation and resemble the appearance of moles, which makes the diagnosis more challenging. Figure 1.6 demonstrates a BCC with annotated clinical features.

Figure 1.6: Dermoscopy and clinical image of a basal cell carcinoma. Arborizing vessels are a major biomarker of BCC. Image is taken from [17] with permission.

SCC is another form of keratinocyte carcinoma. Malignant keratinocytes irregularly invade the dermis by destroying the dermo-epidermal junction [22]. At this stage, the invasive SCC is metastatic and can be fatal. Looped vessels at the lesion edge are a characteristic of SCC [15]. Figure 1.7 shows an example of SCC and its clinical features.

Figure 1.7: Dermoscopy and clinical image of a squamous cell carcinoma. Nodular SCC usually has looped vessels at the lesion edge. Polymorphic vessels indicate a rapidly growing tumor. Image is taken from [17] with permission.

1.3 Dermoscopy and Clinical Diagnosis

Dermoscopy is a non-invasive diagnostic technique for in-vivo observation and evaluation of pigmented and non-pigmented skin lesions. It allows better visualization and recognition of morphologic structures of the epidermis, the dermo-epidermal junction, and the papillary dermis that are not visible to the naked eye, thus opening a new dimension of the clinical features of lesions [15, 23].

In the last decade, dermoscopy has become a routine technique in dermatology practice and has made significant contributions to the knowledge of the morphology of numerous cutaneous lesions. It has been stated that dermoscopy improves accuracy in diagnosing pigmented skin lesions (allowing 10-27% higher sensitivity) [15, 23]. A dermoscope is a simplified microscope comprising a high-quality lens providing 10- to 30-times magnification and a lighting system. The advantages of dermoscopy are threefold:

1. Surface magnification: Magnified visualization in dermoscopy allows recognition of structures not visible to the naked eye;

2. Subsurface visualization: Dermoscopy allows visualization of structures of the epidermis, dermo-epidermal junction and superficial dermis that are located within 200 µm of the skin surface;

3. Simplicity and portability: A dermoscope is easy to carry around for use in a clinical setting.

A selection of analogue and digital dermoscopes is demonstrated in Figure 1.8.

Figure 1.8: Analogue and digital dermoscopes. Top left and middle: Dermlite. Bottom left: Handyscope. Bottom middle: Heine. Right: MoleMax.
Dermoscopic assessment of skin lesions is mainly based on two categories of features: global and local. Global features provide a preliminary categorization of the lesion by evaluating the overall lesion morphology prior to more detailed assessment, while local features represent the letters of the dermoscopic alphabet and cues for the final diagnosis [24]. If extracted precisely, local features provide a detailed assessment of the lesion characteristics and focus on specific diagnostic clues [25]. Figure 1.9 demonstrates some of the most important local dermoscopic features. Each local dermoscopic feature correlates with a specific histopathologic structure, many of which are associated with a specific skin condition [25]. Hence, dermoscopic pattern analysis is a critical task in the diagnosis of skin lesions. In the past decade, there has been increasing interest in the field of dermoscopic image analysis in developing automated algorithms for the extraction of dermoscopic features [26-30]. Computer algorithms have been developed in the last few years for automated lesion segmentation [31-35], feature extraction [18, 36-39] and computer-aided diagnosis [40-44].

Figure 1.9: Dermoscopic features seen in a pigmented BCC. Image is taken from [17] with permission.

1.4 Vasculature and Skin Disorders

One of the main local dermoscopic features is the vascular structure of skin lesions. As previously stated, blood vessels are significantly involved in the pathogenesis, diagnosis, and treatment outcome of all categories of skin abnormalities. The presence, morphology and clinical visual characteristics of vessels in skin lesions are suggestive clues for abnormalities and malignancies and can serve as a biomarker for certain skin conditions [45], [46]. Blood vessels are the primary dominant diagnostic feature in non-melanocytic lesions [47]. Moreover, vascular formation and angiogenesis can be an indicator of tumor aggressiveness and growth. Furthermore, many non-cancerous skin disorders, such as psoriasis, which are also very common among Canadians, involve and are related to vascular disorders and vasculature malformations [48]. In addition, the location and distribution pattern of vessels within the lesional area is a distinguishing factor in the differential diagnosis of skin abnormalities. Since accurate evaluation of blood vessels is a critical step in the diagnosis and management of skin diseases, objective and systematic detection, visualization, and localization of cutaneous blood vessels are important tasks in skin lesion analysis, leading to a more accurate diagnosis. The importance of blood vessels in skin lesions can be summarized in three different aspects:

1. Presence: The presence of vascular structures and a vascular network within a lesion can serve as a biomarker for certain skin conditions [45]. For example, the presence of telangiectatic (elongated branching) vessels is a key diagnostic clue of basal cell carcinoma (BCC), the most common type of skin cancer [46]. Figure 1.10 shows the dermoscopy image of a BCC with arborizing vessels over the lesion. In addition, vascular formation and angiogenesis (the creation of new blood vessels) can be an indicator of tumor growth and progression [37, 49]. When cancer cells grow large enough, they become distant from the mainstream blood vessels.
Therefore, in order to receive the supply of nutrients and oxygen they need, tumor cells produce growth factors that stimulate the formation of new blood vessels or the dilation of existing ones to increase blood flow [50]. Since the formation of new blood vessels and the expansion of existing ones are essential steps in tumor progression, the presence and characteristics of excessive blood vessels can be an indicator of tumor stage and depth of invasion.

Figure 1.10: Clinical and dermoscopy image of a BCC. Arborizing vessels are seen in the dermoscopic image. Image is taken from [17] with permission.

2. Morphology: The morphological patterns of vascular structures in skin lesions have high diagnostic significance, especially in non-melanocytic lesions. Different morphologies are associated with different skin conditions, some with a high specificity for certain disorders. There are a variety of morphologies and architectural arrangements of lesional cutaneous blood vessels. Each vascular morphology is associated with specific abnormalities and malignancies, some with a specificity high enough to yield a ready diagnosis for specific disorders [49]. For example, an arborizing (branching) architecture of vessels is a diagnostic feature of basal cell carcinoma (BCC), the most common type of skin cancer [17, 49], whereas dotted vessels are found in melanoma (the most fatal type of skin cancer) and certain inflammatory conditions. Moreover, the morphology of lesional vessels can provide significant insights into the underlying histology [47, 49]. For instance, depending on whether the underlying vessels are located superficially beneath the epidermis or deep in the dermis, and whether they are perpendicular or parallel to the skin surface, they may appear in a variety of shapes and colors, from bright focused red lines to blurred pink dots. Figure 1.11 demonstrates a number of vascular morphologies and patterns observed in a variety of skin disorders, along with their associated descriptions and clinical interpretations.

Figure 1.11: A number of vascular morphologies of skin lesions. Adopted with permission from [49].

3. Architectural Arrangement: Besides the patterns of vascular structures, their arrangement and architecture throughout the lesion are also important diagnostic clues. Vascular morphology and architectural arrangement have such significant diagnostic value that the clinical diagnosis of non-pigmented skin lesions through dermoscopic examination follows a three-step algorithm that considers vascular morphology, architectural arrangement and additional clues [47, 49]. Figure 1.12a shows a few different vasculature arrangements in skin lesions. In some cases, vascular patterns along with their arrangements are so specific that they allow a ready diagnosis. For example, as shown in Figure 1.12b, the presence of crown vessels (linear straight) distributed on the periphery of the lesion and surrounded by whitish/yellowish globular structures is a specific marker for sebaceous hyperplasia [47, 49].

Figure 1.12: A number of vascular architectures of skin lesions. a) Vascular architecture of skin lesions, b) vasculature of a sebaceous hyperplasia. Adopted with permission from [49].

1.5 Clinical Dermoscopy of Vascular Structures

Although dermoscopy was primarily used to study pigmented skin lesions, in 1990 arborizing vessels were introduced as an important dermoscopic feature for the diagnosis of BCC [26].
Since then, there has been a number of clinical studies on the morphology and patterns of vascular structures associated with melanocytic and non-melanocytic skin lesions seen in dermoscopy. In 2010, Zalaudek et al. have discussed different vascular morphology seen in dermoscopy of a variety of skin lesions with an emphasis on the diagnostic significance of vascular structures [47]. Later in that year, the same group have found that dermoscopy of vascular structures for some diseases such as sebaceous hyperplasia, seborrheic keratosis, clear cell acanthoma and Bowen disease are highly specific to the extent that they provide a ready diagnosis in most cases [49]. Chin et al. have studied differences in vascular patterns of basal and squamous cell skin carcinomas in order to explain their differences in clinical behavior [51]. In 2012, Trigoni et al. have studied the vascular patterns of BCC. They have found that there is a significant difference in the presence of different vascular patterns in different types of BCC [39]. In 2008, Pan et al. studied vascular features distinguishing superficial basal cell carcinoma, intraepidermal carcinoma, and psoriasis. They reported that each condition contains characteristic, reproducible vascular patterns, and that when these patterns appear in combination, they yield an extremely accurate clinical diagnosis [52]. All these studies confirm the importance and significant diagnostic role of vasculature and vascular patterns in differential diagnosis of a variety of skin disorders and express the need for a comprehensive tool for disease-specific vascular pattern recognition. Moreover, vasculature has also been an  20  important factor in melanoma studies. In [53], De Giorgi et al. have reported that the difference in frequency of vascular patterns in different thickness categories of melanoma was statistically significant. In [54] Lorentzen et al. have established an analysis for calculating the probability of thick malignant melanoma, in which vascular patterns were one of the most important factors. Although all these studies prove that vascular characteristics of melanoma is important, no studies have yet been conducted in order to demonstrate the exact role of vasculature to predict the Breslow’s depth of melanoma which is the most indicative factor of the cancer’s progression and invasion. These previous studies confirm the critical role of distinctive vascular patterns in differential diagnosis and understanding of the nature of skin disorders and express the need for a comprehensive tool for specific vascular visualization, segmentation, quantification and pattern recognition. 1.6  Overview of the Literature Over the last two decades, there has been an increasing interest towards computer-aided diagnosis of skin abnormalities. Specifically, the field of dermoscopic image analysis and development of new techniques for automated analysis and interpretation of such images has become a popular research problem. As a major step towards building such a system, substantial research has been done on automated detection and categorization of global dermoscopic patterns [55, 56] and local lesional structures [57-61]. There is interesting research on the global and local dermoscopic features extraction, lesion classification as well as lesion pattern recognition for non-vascular patterns. 
However, although vascular structures are significant clues for differential skin diagnosis, automated detection and pattern recognition  21  of cutaneous vasculature has rarely been studied and there are very few studies on this topic in the literature.  1.6.1 Previous Work on Detection and Segmentation of Cutaneous Vasculature There are very few studies related to the automated analysis of cutaneous vasculature. Majority of the previous studies focus on detecting erythema rather than detecting vasculature [62, 63]. Choi et al. have used the a* parameter in CIELAB color space and statistical analysis in order to detect facial erythema from clinical photos [62]. Hames et al. proposed a color space transform and morphological operations to find erythema in clinical photographs with their results being reported in terms of sensitivity and specificity of detecting actinic keratosis [63]. It is noted that both these studies used clinical photographs and addressed the erythema detection problem. Cheng et al. have used a color drop algorithm followed by post-processing noise removal techniques in an adaptive critic design contrast enhancement framework in order to extract telangiectatic vessels from skin images [64]. They have reported the performance of their technique in terms of basal cell carcinoma (BCC) classification using vascular features. Using an artificial neural network, the classification method has achieved an area under the receiver operating characteristic curve of 85%. Although the methodology is interesting, it only applies to telangiectasia, which is one of the many vascular types in skin lesions. Moreover, the endpoint in this analysis is not to segment the vasculature itself, but rather to classify the lesions into two groups of “absence” or “presence” of telangiectasia. Though clinically meaningful, this does not address the delineation, quantification and analysis of skin vasculature in general. It is worth emphasizing that, although the above few methodologies [62-64] are clinically meaningful, they do not specifically address the vascular segmentation problem, do not report  22  the segmentation results quantitatively and hence do not provide quantification and morphological information of skin vasculature. There are also very few studies on the detection of presence of atypical vascular pattern as a step in melanoma diagnosis. Betta et al. proposed a framework for detecting atypical vascular patterns based on the frequency histogram of vessels in the HSL color space [65]. They first derived color membership functions in the HSL color space based on frequency color histograms of blood vessels. Subsequently, they introduced belief levels that indicate whether a pixel belongs to a vascular structure or not. Although this is an interesting ad hoc methodology, it only considers general color information. The authors also pointed that in some cases the algorithm resulted low specificity and wrong detection and needs to be further validated. In a similar work, Di Leo et al. used principal component analysis and texture features to detect atypical vascular pattern for melanoma detection [66]. Wadhawan et al. [67] used texture features from Haar wavelet and local binary patterns along with color features from different color spaces to perform the atypical vascular pattern detection task. 
These are very interesting approaches for the detection of atypical vascular patterns with the final goal of melanoma diagnosis, however they do not address the actual segmentation and quantification of blood vessels. To our knowledge, there is a great need for a framework dedicated to automatic segmentation and quantification of blood vessels in both pigmented and non-pigmented skin lesions. The main aim of our investigation is to fill this gap. Despite the limited work on skin lesion vasculature, blood vessel segmentation has been addressed in other applications with other imaging modalities. Specifically, in retinal imaging of the fundus, the changes of blood vessels are critical in the diagnosis of retinal diseases. So far, there have been five different categories of methodologies for vessel segmentation in  23  retinal images. These approaches are: supervised methods, unsupervised methods, matched filtering, morphological methods and parametric models. Soares et al. have proposed the use of a 2-D Gabor wavelet and supervised classification for retinal vessel segmentation [68]. Ricci and Perfetti demonstrated the application of line operators as feature vector and SVM for pixel classification [69]. You et al. employed the combination of the radial projection and the semi-supervised self-training method using SVM [70]. Tolias and Panas developed a fuzzy C-means (FCM) clustering algorithm using linguistic descriptions (vessel and non-vessel) to extract fundus vessels [71]. The Bayesian image analysis for segmentation of arteries, veins and the fovea in retinal angiograms was applied by Simo and de Ves [72]. In [73] Villalobos-Castaldi et al. used the local entropy information in combination with the gray-level co-occurrence matrix (GLCM) for vessel segmentation. The use of a two-dimensional linear kernel with a Gaussian profile was proposed by Chaudhuri et al. [74]. The amplitude-modified second order Gaussian filter [75] has also been utilized for vessel detection. Adaptive tracking is presented in [76] for detection of vasculature in retinal angiograms, where an initial point within a vessel should be given in order to estimate the local vessel trajectories. Frangi et al. [38] examined the multiscale second order structureness of the image (Hessian) to develop vessel enhancement. The use of the classical snake (active contours) in combination with blood vessel topological properties was proposed by Espona et al. [77].  Although some of the above-mentioned methodologies could potentially be applied in the field of dermoscopy vascular analysis, there are major differences between the vessel segmentation problem in dermoscopic and retinal images:  1. In retinal image analysis, color can be ignored and hence most of the techniques in the retinal domain are developed for monochromatic intensity images, whereas in the case of  24  dermoscopy, color plays a crucial role in the analysis. Furthermore, the presence of skin pigmentation obscures the vessel visibility and further adds to the problem challenges. 2. Retinal blood vessels are of one pattern (branching), where as in skin, we see a variety of vessel patterns and morphologies. 3. Retinal vessels are usually larger and hence more detectable than cutaneous vessels. 4. 
In retinal vessel analysis, the anatomy of the retina and the distribution of the vessels over the image does not vary from image to image and hence this is used as a priori information towards the segmentation problem, whereas in skin, the appearance, pattern and distribution of both the lesion and the blood vessels varies from image to image. 5. There are three major publicly available labeled datasets for retinal vessel segmentation while in dermoscopy the data collection and ground truth has been one of the major challenges of our work. These differences confirm the need for new techniques and algorithms to address such disease-specific challenges. 1.6.2 Previous Work on Morphology and Pattern Recognition of Cutaneous Vasculature Due to diagnostic significance of lesional vasculature, much clinical and observational studies have concentrated on identifying different morphological patterns of blood vessels. In a clinical study, Argenziano et al. investigated the presence of different vascular morphologies in dermoscopic images of a variety of skin lesions and associated specific morphologies to specific skin abnormalities by evaluating the frequency of appearance of each morphology [78]. In two major clinical investigations, Zalaudek et al. identified six major vascular morphologies and architectural arrangements and defined their diagnostic criteria in  25  differentiating and further managing and treating the skin conditions [49]. Rosendahl et al. investigated the clinical criteria of non-pigmented skin tumors where they concluded that blood vessel morphology is a crucial diagnostic clue [79]. In a recent study, Ayhan et al. performed a detailed discussion on definition and diagnostic predictive values of skin vascular morphologies and confirmed the diagnostic significance and the need for further investigation on such structures [80].  Although these clinical studies all confirm the major role of cutaneous vascular morphologies on the diagnosis and management of the disease, there have yet been no studies on automated recognition of vascular morphologies. Previous approaches on skin vasculature only studied the problem of absence/presence of vascular structures for the final goal of melanoma diagnosis and hence did not do any further analysis on quantification and structural characteristics of blood vessels, nor did they investigate the vascular pattern recognition problem. Other methodologies such as [64] only focus on detection of one specific vessel type and therefore do not address the morphological information of skin vasculature and vascular pattern recognition problem. To the best of our knowledge automated morphology analysis of skin vasculature has never been previously studied and we aim to present the first work in this field. 1.6.3 Previous Work on Computer-aided Diagnosis of Skin Cancer based on Vascular Properties In recent years there has been an increasing interest on computer-aided diagnosis of skin cancer. Although many studies focus on dermoscopic feature analysis towards skin cancer diagnosis, vascular structures are among the most understudied dermoscopic features. Betta et al. calculated the ratio of number of vascular pixels to lesion area and compared this measure  26  with a threshold to determine the presence of any vascular pattern within the lesion [65]. They used the absence/presence of vascular pattern as a binary feature towards melanoma classification. Di Leo et al. 
used the same technique to select candidate vessel pixels and further performed a statistical test to verify the irregularity of the vascular pattern and used the presence/absence of irregularity as a binomial feature in their melanoma classification framework [66]. Dwahan et al. used a multispectral imaging modality by a Nevoscope device to acquire several images at different illuminations and different wavelength protocols including white light surface illumination, white light cross-polarizes transillumination, and cross-polarized transillumination images at 510nm, 560 nm and 610 nm [81]. They then used these different wavelength images to calculate the average and standard deviation of blood volume ratio to melanin ration and compare the calculated measures for different lesion types. This is a very interesting study; however, it benefits from a different imaging technology that provided multi-spectral images. Cheng et al. used color properties to discriminate the blood vessels from other lesional structures. They subsequently derived a number of vascular topological properties (including area and length) at different erosions and used these features to classify BCC lesions [82]. This is a valuable study and among the first works in the literature that investigated lesional vasculature; however, their vessel detection method only takes color information into account and the BCC classification framework is only tested on a small dataset (59 BCC and 152 benign lesions). In a later study, the same group expanded the dataset and implemented an adaptive critic design framework for contrast enhancement [64]. Similar to previous works, this study only considers color information. Furthermore, since the endpoint of the analysis is BCC detection, the vessel detection performance is not validated, which  27  affects the classification performance. However, these results provide a solid foundation for cutaneous vasculature studies and confirm the need for further analysis of skin blood vessels.   1.7  Motivation So far, there has been no comprehensive study on quantitative analysis and pattern recognition of skin vascular structures. There is no systematic way to detect, quantify and assess the vasculature in skin lesions and visual inspection, as the only current technique, suffers from subjectivity and lack of precision. These features are small, complex and low contrast which makes the detection a challenging task. In this thesis, we propose to develop a technique to detect cutaneous vascular structures, characterize the vast variety of vascular morphology and analyze the lesion vasculature properties. There are two main goals in this thesis:  1. To use dermoscopy as a platform to develop automated techniques for detecting, segmenting and analyzing small, low contrast, complex color vascular structures. These tools can be applied to other medical or non-medical domains.  2. To develop a comprehensive technique for assessment and analysis of vascularity for further use as an educational and clinical tool. 1.8  Problem Statement  As stated in the previous sections, the study of cutaneous vascular structures is a major step towards diagnosis and assessment of skin abnormalities. Moreover, there is no comprehensive study on quantitative and systematic vascular analysis so far. The purpose of this research is to develop techniques and algorithms to analyze the vasculature in dermoscopic visualizations of skin lesions. 
Triggered by the needs and potentials in clinical practice, we  28  attempt to address the following clinical problems and redefine them as technical problems to be solved: 1. Visualization, segmentation and detection of skin blood vessels provides significant diagnostic information. Furthermore, in order to assess the severity, progression and treatment response of several cutaneous disorders (such as vascular malformations) there is a need to quantitatively measure the vasculature. We map the problem of quantitative assessment of vasculature into the technical problem of automated vessel detection, segmentation, and pixel classification.   2. As previously mentioned, for a better diagnosis, recognizing vascular morphologies is an important step. We model this problem as a multi-class pattern classification which can be done both following the results of the previous step or as a new blind classification problem. 3. As the final point in clinical-decision support systems is the diagnosis, we propose a disease classification problem based on the vascular properties integrated with other lesion information.  These techniques could serve as an analytical tool to deeper understanding of vascular changes in skin disorders, but also as a computer-aided clinical decision support system and as an educational tool, for training purposes. Besides, computer-aided diagnostic techniques can improve self-examining approaches for skin patients in order to monitor skin abnormalities in remote areas and increase prognosis rates. The newly developed techniques could be used in other medical or non-medical applications.  29  1.9  The Proposed Framework  This thesis includes extensive investigation and analysis of vascular structures of cutaneous lesions. It is the first work in the field that proposes systematic frameworks for analysis of skin vasculature by means of dermoscopy. We propose a novel three-level framework to investigate cutaneous vasculature in the following avenues: 1) Pixel-level: to detect, segment and accurately quantify the blood vessels of skin lesions; 2) Lesion-level: to extract architectural and structural properties of the vessels and identify the lesions’ vascular pattern; 3) Disease-level: to associate lesions’ vascular properties with the diagnosis and characteristics of the disease. The proposed three-level analysis framework analyzes cutaneous vasculature at pixel-level, lesion-level and disease-level. In this regard, at each level, specific information is collected and differentiating features are extracted, accordingly. Results of each level prepare a baseline for the analysis at the next level. Following this framework, the following three levels are investigated: 1.9.1 Pixel-level (microscopic, low-level) In the pixel level, the goal is to detect and segment vascular structures within the lesion. We investigate both the detection (absence/presence) and segmentation (pixel classification) of vascular structures in pigmented and non-pigmented lesions. Vascular structures are small, complex and of different shapes and colors which makes the detection a challenging task. Moreover, the presence of skin pigmentation, pigment networks and other dark colors and structures normally obscure the visibility of the vessels and make their detection even more challenging.  30  At this level, we propose a novel framework based on dermoscopic image decomposition to address the interference of lesional pigmentation with blood vessels. 
We also propose a deep feature learning strategy for cutaneous vessel detection. The segmentation results provide the basis for deriving quantitative measures of vasculature.  1.9.2 Lesion level (mesoscopic, medium-level) In the lesion-level, the aim is to extract, describe and classify architectural arrangement, clinical morphology and topographical properties of the vascular structures of different skin conditions based on the results of the previous level. As demonstrated by Figures 1.11 and 1.12, several vascular morphologies associated with skin disorders have been recognized so far. A major challenge for automated cutaneous vascular morphology recognition is first accurate extraction of the vessels and further defining and extracting effective feature sets. Having the extracted vasculature resulted from the previous level, in this level we will subsequently define and introduce new sets of features to address the geometrical and arrangement of blood vessels. We propose a multi-class classification framework to automatically identify lesional vasculature based on their patterns and morphologies. 1.9.3 Disease-level (macroscopic, high-level)  In the disease-level, our aim is to use the vascular information of a lesion towards making a diagnosis or a better understanding of the disease. For this purpose, we are proposing two formats of studies: a) Correlating vascular properties and patterns with skin conditions and b) Integrating vascular information with other lesion properties for disease classification. We investigate four clinical cases:  31  1. Vascular density and basal cell carcinoma size: We evaluate the relationship between density of lesion’s vasculature with the size of the BCC (which is an indication of the progress of the disease). 2. We study the total blood content and vascularization of lesions and define new characteristic measures based on total blood content of the lesions to differentiate cancerous (BCC) versus benign lesions. 3. We define a set of vascular features based on topographical properties of the vessels to distinguish BCC from a set of benign lesions.  4. We also investigate a deep feature learning approach to investigate the BCC classification performance based on deep learned features. For each classification task, sensitivity and specificity is determined. ROC analysis is carried out to examine the performance of differentiating BCC. In the correlation test of vasculature features and BCC size, Wilcoxon rank sum test is applied. Figures 1.13 and 1.14 demonstrate the overview and detailed diagram of the proposed framework.   Figure 1. 13: Overview of the proposed computer-aided vascular analysis framework The proposed analysis is performed at pixel-level, lesion-level, disease-level.  32    Figure 1. 14: Challenges of the proposed computer-aided vascular analysis framework  1.10 Thesis Outline The remainder of this dissertation is as follows: Chapter 2 focuses on pixel-level analysis of blood vessels. We propose a framework for blood vessel segmentation based on skin image decomposition and extraction of melanin and hemoglobin maps and combining them with different shape filters at several scales. We further evaluate both the segmentation and detection performance by comparing to the ground truth vessel masks annotated by expert dermatologists. In this chapter, we also propose a deep network based on stacked sparse autoencoders for detection and localization of cutaneous vasculature. 
We validate the results with reference to ground truth and further compare the performance with other hand-crafted techniques. In chapter 3, vascular structures are analyzed at a lesion level. We propose a multi-class classification framework for pattern and morphology recognition of skin vasculature. Based  33  on the segmentation results of chapter 2, we define and extract structural, geometrical, topographical, textural, chromatic and position features to differentiate different vascular morphologies. The overall performance as well as class specific measures are evaluated.  Chapter 4 investigates the cutaneous vasculature at a disease level. For this purpose, we perform a number of experimental and clinical studies in which we evaluate the correlation of vascular properties with lesion diagnosis and lesion status. Based on the segmentation and topographical features from previous chapters a BCC classification framework is designed. In addition, we propose the vascular features from the deep feature learning network of chapter 2 as kernel operators to perform a BCC classification framework based on learned vascular features. We also investigate the vascular density properties of different BCC lesion sizes. As our last clinical study, we investigate the vascular characteristics of melasma, in conjunction with its pigmentary properties. Finally, chapter 5 summarizes the contributions of this thesis and discusses the future research directions.      34  Chapter 2: Analysis of Cutaneous Vasculature at Pixel-level: Detection, Segmentation, Quantification Pixel-level analysis of cutaneous vasculature, including detection, segmentation, and quantification of cutaneous blood vessels provide critical information towards lesion diagnosis and assessment. In this chapter, we propose approaches for pixel-level analysis of skin vascular structures. As discussed in chapter 1, previous studies on this matter, only investigated erythema (redness) detection or detecting absence/presence of vasculature in the whole lesion as a binary feature for melanoma classification; However, erythema detection only deals with color contents of the lesion thereby leading to low specificity when used for vessel detection. Detection techniques do not provide any information on localization and distribution of the vessels and therefore, do not address the problem of cutaneous vessel segmentation and localization [62-66, 82]. Furthermore, as previously mentioned, due to differences in the nature, anatomy, shape and color variability of skin with other organs, vessel detection algorithms developed for retinal or angiographical studies cannot be directly used for skin.  Besides, such techniques are limited to specific elongated branching vessel type, which restricts their applicability to the wide variety of skin vessel morphologies, necessitating new algorithms for skin vasculature to address problem-specific challenges.   In this chapter, we propose two different frameworks (our published work in  [83] and [84]) to address the detection and segmentation of cutaneous vascular structures in dermoscopy images. To the best of our knowledge, the presented methodologies are the first attempts to develop an automatic skin vessel segmentation and localization framework.   35  2.1 Part One: Automated Segmentation of Cutaneous Vasculature via Hemoglobin Extraction and Multiscale Shape Analysis In part one, we propose a framework that decomposes the skin image into melanin and hemoglobin contents. 
The proposed method incorporates skin color decomposition along with shape filtering and thus accounts for both underlying color components of the skin and the vascular shape. This eliminates the problem of vessel obscuration and further expands the applicability of the method from erythema detection to vessel segmentation. Furthermore, more accurate segmentation allows us to extract more accurate and meaningful vascular features improving the classification accuracy in differentiating BCC from non-BCC lesions.  2.1.1 Overview of the Proposed Method A challenge in dermoscopy vessel segmentation is the presence of pigmentation. Pigmentation can range from yellow and light brown to dense dark brown and black. Figure 2.1 demonstrates the dermoscopy images of a number of skin lesions, including a variety of different vascular patterns and erythema with or without the presence of lesion pigmentation. As it can be seen from Figure 2.1, pigmentation obscures the visibility of blood vessels, interferes with vascular structures and is sometimes mistakenly classified as vasculature. This reduces the sensitivity and specificity of segmentation methods in such cases. As a solution, we propose an approach based on skin decomposition. Human skin is a multi-layered structure with various components contributing to its color. Among those, melanin and hemoglobin are the most dominantly present in the epidermal and dermal layer, respectively. Melanin, produced by melanocytes, is responsible for the characteristic black/brown color of human skin and hemoglobin gives blood its color and its  36  Figure 2. 1: Dermoscopy of a number of skin lesions with vasculature.  The green arrows show some of the vascular structures. (a)-(c) and (e): Presence of various morphologies of vasculature in pigmented lesions. (f) and (g): erythematous lesions with the presence of dotted and linear vascular structures. (d): Clinical image and (h) dermoscopic image of an erythematous lesion with the presence of dotted blood vessels.  circulation within vessels results in the red and purplish color of the skin. Both these components absorb light in the visible spectrum and this triggers the motivation to use skin color information towards understanding the underlying structures. These two components are the most dominant factors in skin color. It is hence a valid assumption to consider that the variations of skin color are mostly caused by these two components. Also, the quantities of melanin and hemoglobin in human skin are mutually independent from each other. These two valid assumptions make the basis for our analytical framework. Following these assumptions, we decompose the skin image into melanin and hemoglobin channels, automatically detect the hemoglobin channel and further analyze it to cluster erythematous areas and segment the vasculature using shape information.   37  ICA was first proposed for facial color image analysis by Tsumura et al. [85]. Later, Madooei et al. adopted the same technique with a different skin model to extract color and texture features for malignancy classification [86]. Our proposed framework, adopts a similar ICA-based idea as [85, 86] in spirit. We build upon the idea in [85] and further extend it by proposing automatic hemoglobin differentiation, skin color cluster design, followed by cluster-based erythema extraction. Madooei’s work [86] uses ICA to remove the effects of camera on the color features of the lesion. 
Therefore, they adopt a model that also accounts for the illuminant characteristics. This provides more robust color features to be used directly for melanoma classification. Our approach, however, uses a simpler skin model in the optical density domain to maintain computational efficiency. Furthermore, unlike [86], we propose a full color reconstruction of each chromophore channel, which subsequently provides the basis for efficient skin color cluster definition and erythema extraction. Figure 2.2 shows the framework of our proposed segmentation approach.

Figure 2.2: Framework of the approach.

2.1.2 Skin Decomposition

Independent Component Analysis (ICA) is a computational technique for separating a multivariate signal into its constituent components; it was first proposed for skin analysis by Tsumura [85]. Since it is valid to treat skin color as a multivariate signal composed of melanin and hemoglobin, ICA can be applied to skin images to extract each of these components. To overcome the problem of skin pigmentation obscuring the appearance of blood vessels, we propose a novel approach based on the extraction of the melanin and hemoglobin components of the skin. Since hemoglobin is responsible for blood color, we further analyze the hemoglobin channel to segment the cutaneous vasculature.

To elaborate on the proposed technique, we first focus on the perception of color. The color we see in a sample arises because certain wavelengths of visible light are absorbed and the remaining wavelengths are transmitted (or reflected) by the sample. The wavelengths that are transmitted (or reflected) reach our eyes and cause us to "see" a certain color. This perceived color is quantified by RGB pixel values. Since the signal value of each color channel is obtained by integrating the spectral intensity over the sensitive spectral range of that channel, each pixel value is a measure of the ratio of the transmitted (or reflected) output intensity to the incident intensity over that range. Figure 2.3 illustrates light transmission through a substance.

Figure 2.3: Light transmission through a substance.

The absorbed light is defined as the negative logarithm of the ratio of transmitted (or reflected) to incident light intensity, as in equation 2.1:

$$A = -\log\left(\frac{P}{P_0}\right) \qquad (2.1)$$

where $A$ is the absorbed light, $P_0$ is the intensity of the incident light and $P$ is the output intensity (transmitted or reflected), as shown in Figure 2.3. The absorbed light is also called the optical density.

The main framework of the proposed skin decomposition is based on the modified Beer-Lambert law. According to this law, the absorbance of light in a sample is directly proportional to the concentration of the absorbing material:

$$A = \varepsilon \, b \, C \qquad (2.2)$$

where $\varepsilon$ is the absorptivity coefficient (which depends on the substance and wavelength), $b$ is the path length and $C$ is the concentration (mol/litre). The total absorbance is therefore a function of the pure absorbance per unit density ($\varepsilon$) and the density ($bC$) of the substance. If several absorbing materials are present, their effects are additive.
Therefore, based on the modified Beer-Lambert law, in the optical density domain the total absorbed light can be expressed as a linear combination of the concentrations (densities) of the underlying substances, as in equation 2.3:

$$A = \varepsilon_1 b_1 C_1 + \varepsilon_2 b_2 C_2 + \dots \qquad (2.3)$$

Following this notation, in our application we define the skin color optical density signal (the skin color vector in the optical density domain) $I_{x,y}$ as in equation 2.4:

$$I_{x,y} = \left[-\log r_{x,y},\; -\log g_{x,y},\; -\log b_{x,y}\right] \qquad (2.4)$$

where $r_{x,y}$, $g_{x,y}$ and $b_{x,y}$ are the normalized color values of the RGB color space. Based on the modified Beer-Lambert law, the skin color optical density can be modeled as a linear combination of the underlying substances. Since the variations of skin color in the visible range are mostly caused by melanin and hemoglobin, the skin color optical density can be expressed as a linear combination of the melanin and hemoglobin quantities:

$$I_{x,y} = c_m q^{m}_{x,y} + c_h q^{h}_{x,y} + \Delta \qquad (2.5)$$

where $c_m$ and $c_h$ are the pure color per unit density of melanin and hemoglobin, respectively; $q^{m}$ and $q^{h}$ are the relative quantities of melanin and hemoglobin at each pixel $(x, y)$; and $\Delta$ is the stationary column vector caused by other skin pigments and structures. The pure color per unit density values $[c_m, c_h]$ are named by analogy with the pure absorbance per unit density ($\varepsilon$) in equation 2.2. This is a constant vector for the given substances that we estimate during our analysis; like the absorptivity per unit density ($\varepsilon$), the pure color density vector depends on the characteristics of melanin and hemoglobin.

In the conventional notation of ICA, $c_m$ and $c_h$ can be considered the mixing signals, $q^{m}$ and $q^{h}$ the source signals, and $I_{x,y}$ the observed signal. To achieve the best results, principal component analysis was first applied to the three-dimensional input. The first two principal components, which preserve more than 99% of the total variance of the data, were chosen for the analysis. Subsequently, ICA was applied to extract the relative quantities of melanin and hemoglobin and to estimate their pure color per unit density vectors $\tilde{C} = [\tilde{c}_m, \tilde{c}_h]$. The estimation process maximizes the non-Gaussianity of the estimated components to ensure their independence. If we assume that the minimum quantity of each of the two components (melanin and hemoglobin) over all pixels is zero, then:

$$\min_{x,y}\left\{\tilde{C}^{-1} I_{x,y}\right\} - \min_{x,y}\left\{\tilde{C}^{-1} \Delta\right\} = 0 \qquad (2.6)$$

where $\tilde{C}$ is the estimated pure color per unit density matrix. We can then define and calculate $E$ as:

$$E = \min_{x,y}\left\{\tilde{C}^{-1} I_{x,y}\right\} \qquad (2.7)$$

The relative quantities of melanin and hemoglobin can then be obtained from:

$$\left[q^{m}_{x,y},\; q^{h}_{x,y}\right]^{t} = \tilde{C}^{-1} I_{x,y} - E \qquad (2.8)$$

The skin decomposition is then obtained by:

$$\acute{I}_{x,y} = \tilde{C}\left(K \left[q^{m}_{x,y},\; q^{h}_{x,y}\right]^{t} + jE\right) + j\Delta \qquad (2.9)$$

where $\acute{I}_{x,y}$ is the synthesized skin color, and $K$ and $j$ are control parameters that control the component quantities and the effect of the stationary signal, respectively. By setting $j = 0$ and choosing $K = \mathrm{diag}(1,0)$ or $K = \mathrm{diag}(0,1)$, the melanin or the hemoglobin component of the color image is extracted, respectively. However, ICA suffers from a permutation ambiguity, meaning that the order of the independent components is not consistent. In other words, it does not automatically determine which of the two components corresponds to each chromophore.
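As a rough illustration of the decomposition pipeline of equations 2.4-2.9 (a sketch only, not the thesis implementation, which was written in MATLAB), the snippet below converts a normalized RGB dermoscopy image to the optical density domain, reduces it to two principal components and applies FastICA to recover two relative-quantity maps. scikit-learn and scikit-image are assumed, and the file name is a placeholder.

```python
import numpy as np
from skimage import io, img_as_float
from sklearn.decomposition import PCA, FastICA

def decompose_skin(rgb, eps=1e-6):
    """Decompose a dermoscopy image into two chromophore quantity maps via ICA.

    rgb : float image in [0, 1] of shape (H, W, 3).
    Returns two (H, W) maps; because of the ICA permutation ambiguity their
    order is arbitrary (one reflects melanin, the other hemoglobin).
    """
    h, w, _ = rgb.shape
    od = -np.log(np.clip(rgb, eps, 1.0))          # optical density, eq. (2.4)
    x = od.reshape(-1, 3)

    x2 = PCA(n_components=2).fit_transform(x)     # keeps > 99% of the variance
    q = FastICA(n_components=2, max_iter=1000).fit_transform(x2)

    q -= q.min(axis=0)                            # zero minimum quantity per
                                                  # component, in the spirit of
                                                  # eqs. (2.6)-(2.8)
    return q[:, 0].reshape(h, w), q[:, 1].reshape(h, w)

if __name__ == "__main__":
    image = img_as_float(io.imread("lesion.jpg"))     # placeholder path
    component_a, component_b = decompose_skin(image)  # order is not fixed
```

Which of the two returned maps corresponds to hemoglobin still has to be decided; this is the permutation ambiguity discussed below.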
Therefore, applying ICA to each dermoscopy image results in two independent components (one being the melanin component and the other the hemoglobin component), but ICA is incapable of automatically determining which of the two resulting components is associated with each skin chromophore. This makes the algorithm semi-automatic, requiring expert input to decide which component represents which chromophore and to select the hemoglobin component of the two. As an intuitive way of automatically differentiating between the two components, we took advantage of prior domain knowledge about skin chromophores. Hemoglobin is the pigment that gives blood its color and is responsible for the reddish-pink color elements of the skin; the hemoglobin component is therefore the component with the higher red color concentration. We also know that in the Lab color space the a* channel represents the red/green opponent colors, with green at negative a* values and red at positive a* values: the higher the a* value, the redder the color. As a result, among the two independent components, the hemoglobin component should have the higher correlation with the a* channel of the Lab color space. We used this as an intuitive rule to automatically identify the hemoglobin component among the two components obtained by ICA, modeled by the following equations:

$$q^{h} = \arg\max_{i}\;\mathrm{corr}(q^{i}, a^{*}), \qquad i \in \{m, h\} \qquad (2.10)$$

$$\mathrm{corr}(q^{i}, a^{*}) = \frac{\mathrm{Cov}(q^{i}, a^{*})}{\sqrt{\mathrm{Var}(q^{i})\,\mathrm{Var}(a^{*})}} \qquad (2.11)$$

2.1.3 Erythema Extraction

In this work, we propose three clusters within skin: normal skin, pigmented skin and erythema. For each cluster, a reference value vector is learned from a set of selected pixels in our images. To learn the reference values for the three clusters, an expert was asked to manually outline regions of normal skin, pigmented skin and vasculature among the selected training pixels to provide a digital gold standard. ICA was applied to the selected images and the hemoglobin channel was extracted. Subsequently, three reference vectors were calculated: the mean values, along with the standard deviations, of the R, G and B channels of the hemoglobin component for normal skin, pigmented skin and erythema, denoted $\bar{I}_n$, $\bar{I}_p$ and $\bar{I}_e$, respectively.

In order to evaluate the choice of the RGB plane for the reference values, we derived the histogram distribution of the R, G and B values of the hemoglobin channel for each cluster (vessel, pigmented, normal skin). A two-sample t-test for unpaired data was performed to test the difference between the means of the three clusters and determine whether they can be differentiated by their means. The results demonstrated a significant difference between the mean of the erythema cluster and those of the other two clusters in each RGB color channel, which justifies using the RGB plane to differentiate erythema from the other two clusters. The results of this verification are presented in section 2.1.5. Once the reference values are obtained and stored as a priori information, a thresholding framework based on the Mahalanobis distance is applied to the hemoglobin component of the skin in order to classify red regions.
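As a rough sketch of this clustering step (the distance measure itself is formalized in equation 2.12 in the next paragraph), the per-pixel labeling of the reconstructed hemoglobin image against the three learned reference clusters might look as follows. The reference means and standard deviations are assumed to have been estimated beforehand from the expert-outlined training pixels, and the variance-normalized distance mirrors equation 2.12.

```python
import numpy as np

def label_hemoglobin_pixels(hemo_rgb, refs):
    """Assign each pixel of the hemoglobin image to its nearest reference cluster.

    hemo_rgb : (H, W, 3) reconstructed hemoglobin component, values in [0, 1].
    refs     : dict of cluster name -> (mean, std), each a length-3 array,
               e.g. {"normal": ..., "pigmented": ..., "erythema": ...}.
    Returns an (H, W) array of cluster labels; pixels labeled "erythema"
    form the red-area mask that is passed on to the shape filters.
    """
    names = list(refs)
    dists = np.stack(
        [np.sqrt((((hemo_rgb - mu) ** 2) / sd).sum(axis=-1))   # cf. eq. (2.12)
         for mu, sd in (refs[n] for n in names)],
        axis=-1)
    return np.array(names)[dists.argmin(axis=-1)]

# usage sketch: red_mask = label_hemoglobin_pixels(hemo, refs) == "erythema"
```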
For each pixel of the image, the Mahalanobis distance from each of the three clusters (normal skin, pigmented skin and blood vessels) is calculated in the hemoglobin component, and the pixel is assigned to the cluster with the smallest distance, as expressed in equation 2.12:

$$E^{i}_{x,y} = \sqrt{\frac{(r_{x,y}-r_i)^2}{\sigma_{r_i}} + \frac{(g_{x,y}-g_i)^2}{\sigma_{g_i}} + \frac{(b_{x,y}-b_i)^2}{\sigma_{b_i}}}, \qquad i \in \{n, p, e\} \qquad (2.12)$$

where $E^{n}_{x,y}$, $E^{p}_{x,y}$ and $E^{e}_{x,y}$ are the Mahalanobis distances of pixel $(x, y)$ from normal skin, pigmented skin and blood vessels, respectively, and $r_i$, $g_i$ and $b_i$ are the RGB values of the reference clusters. If $E^{e}_{x,y} = \min(E^{n}_{x,y}, E^{p}_{x,y}, E^{e}_{x,y})$, the pixel is classified as belonging to the erythematous cluster. By applying the same rule to every pixel of each image, each pixel is assigned to one of the three clusters. As a result, a mask image is produced that segments the red areas of the skin.

2.1.4 Shape Filtering and Vessel Mask Extraction

Detecting redness is an essential step towards cutaneous vessel segmentation. However, multiple factors such as inflammation, pressure (due to contact dermoscopy imaging) and temperature may affect skin redness and interfere with vessels. Therefore, we propose to take shape information into account along with color. For this purpose, we consider two main shape categories for cutaneous vessels: tubular and circular. To measure shape properties, we use and further extend the Frangi measure at 20 different scales. In [38], Frangi et al. proposed that the tubularness of a pixel $X = (x, y)$ at scales $\{s_1, \dots, s_k\}$ can be measured as:

$$V(X, s) = \begin{cases} 0 & \text{if } \lambda_2(X, s) < 0 \\[4pt] e^{-\frac{R^2(X,s)}{2\beta^2}}\left(1 - e^{-\frac{S^2}{2c^2}}\right) & \text{otherwise} \end{cases} \qquad (2.13)$$

$$R = \frac{\lambda_1(X, s)}{\lambda_2(X, s)}, \qquad S = \sqrt{\sum_{i=1}^{2}\lambda_i^2(X, s)}, \qquad |\lambda_1| \le |\lambda_2| \qquad (2.14)$$

Here $R$ and $S$ are measures of blobness and second-order structureness; $\lambda_i(X, s)$, $i = 1, 2$ ($|\lambda_1| \le |\lambda_2|$) are the eigenvalues of the Hessian matrix of the image $I$ calculated at scale $s$; and $\beta$, $c$ are control parameters that set the sensitivity of the filter to the measures $R$ and $S$. $R$ attains its maximum at blob-like structures, while $S$ is low in the background, where no structure is present and the eigenvalues are small. The presence of tubular structures increases the contrast, which in turn makes the norm large, since at least one eigenvalue will be large. This function maps the image into probability-like estimates of tubularness. Note that the condition in equation 2.13 determines the polarity of the target, i.e. the brightness or darkness of the pixels; the condition of the Frangi measure therefore depends on the polarity of the target structure with respect to the background (dark-on-bright vs. bright-on-dark). Skin vessels appear as dark structures on a brighter background (the surrounding skin). We used this prior information, derived from the clinical appearance of vessels, as a consistent condition to prevent erroneous enhancements and to discard structures other than vasculature. Depending on the orientation of blood vessels in the underlying skin layers, some cutaneous blood vessels may appear as circular structures (dots). For a pixel to belong to a circular structure, both eigenvalues of the Hessian matrix should be of the same order of magnitude.
Therefore, we modified the Frangi tubularness measure to define a circularness probability estimate, as given in equation 2.15:

$$V(X, s) = \begin{cases} e^{-\frac{(\lambda_2 - \lambda_1)^2}{\beta^2}}\left(1 - e^{-\frac{S^2}{c^2}}\right) & \text{if } \lambda_2 > \lambda_1 > 0 \\[4pt] 0 & \text{otherwise} \end{cases} \qquad (2.15)$$

The Frangi measures are applied to the extracted red areas of the lesion. In order to capture both tubular and circular vascular structures, both measures (2.13) and (2.15) are computed for each pixel, each measure is maximized across the scales, and the higher of the two responses is retained. Global thresholding based on Otsu's method [87] is then applied to the resulting image to obtain the binary vessel mask.

2.1.5 Experimental Results

The dataset used in this chapter consists of 759 images obtained from three different sources: 1) the Atlas of Dermoscopy by Argenziano [88], comprising 768 x 512 pixel images acquired with the so-called 'wet' dermoscopy approach; 2) the University of Missouri, comprising 1024 x 768 pixel images acquired with 'wet' dermoscopy (which requires application of a gel); and 3) the Vancouver Skin Care Center, comprising 1930 x 1779 pixel images acquired with a Dermlite smartphone dermoscope with polarized light, i.e. 'dry' dermoscopy (without gel application). The diagnosis (BCC or non-BCC) of each lesion was provided along with the images.

The method was implemented in MATLAB 2015a. To obtain the reference values for the three clusters of normal skin, pigmented skin and erythema, a total of 500,000 pixels from the three clusters (172,320 from vasculature, 163,840 from pigmented skin and 163,840 from normal skin) were outlined by an expert in a set of 100 images randomly chosen from the dataset. Figure 2.4 shows the distribution of the sample points of each of the three clusters in the training set in each color channel. A two-sample t-test for unpaired data was performed to test the difference between the means of the three clusters. The results demonstrated a significant difference between the mean of the erythema cluster and those of the other two clusters in each color channel. The reference values (mean and standard deviation) extracted from the training set are given in Table 2.1.

Figure 2.4: RGB plane distribution of the hemoglobin component among the three clusters.

Table 2.1: Generated reference values of the three clusters.

            Pigmented                  Normal                     Vasculature
  Red       μ = 0.5250, σ = 0.1385     μ = 0.8625, σ = 0.0917     μ = 0.7948, σ = 0.1109
  Green     μ = 0.3244, σ = 0.1250     μ = 0.8433, σ = 0.0869     μ = 0.4756, σ = 0.1108
  Blue      μ = 0.2390, σ = 0.0981     μ = 0.7951, σ = 0.0943     μ = 0.5178, σ = 0.1109

The remaining 659 images were used for implementation and testing. From this image set, another 500,000 pixels were outlined by experts (a dermatologist and a dermoscopist) through manual segmentation of cutaneous vasculature regions, providing the ground truth for vessel locations. The selected test set contains both pigmented and non-pigmented lesions, different vascular patterns including tubular, curved, linear and circular vessels, and all three dermoscopy sources. Of the 500,000 test pixels, 287,452 were vasculature and 212,548 were chosen from non-vasculature regions. The control parameters of the Frangi filter were set to β = 0.7 and c = 0.05. The FastICA implementation, based on a fixed-point iteration [89], was used in this work.
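For concreteness, a minimal sketch of the shape-filtering and thresholding stage of section 2.1.4 is given below. It is not the thesis implementation (which used MATLAB) but an illustrative Python version that relies on scikit-image's built-in Frangi filter for the tubularness term and on a direct Hessian-eigenvalue computation for the circularness term of equation 2.15; the scale set, the grayscale input and the availability of the red-area mask are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import frangi, threshold_otsu

def circularness(gray, sigmas, beta=0.7, c=0.05):
    """Dot-vessel (blob) response in the spirit of eq. (2.15), maximized over scales."""
    best = np.zeros_like(gray, dtype=float)
    for s in sigmas:
        # scale-normalized second derivatives (entries of the Hessian)
        hxx = gaussian_filter(gray, s, order=(0, 2)) * s ** 2
        hyy = gaussian_filter(gray, s, order=(2, 0)) * s ** 2
        hxy = gaussian_filter(gray, s, order=(1, 1)) * s ** 2
        # closed-form eigenvalues of the 2x2 symmetric Hessian, lam1 <= lam2
        half_trace = (hxx + hyy) / 2.0
        root = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
        lam1, lam2 = half_trace - root, half_trace + root
        s2 = lam1 ** 2 + lam2 ** 2
        v = np.exp(-((lam2 - lam1) ** 2) / beta ** 2) * (1 - np.exp(-s2 / c ** 2))
        v[lam1 <= 0] = 0.0          # keep dark dots only (lambda_2 > lambda_1 > 0)
        best = np.maximum(best, v)
    return best

def vessel_mask(gray, red_mask, sigmas=(1, 2, 3, 4, 6, 8)):
    """Combine tubular and circular responses inside the red areas, then Otsu."""
    tube = frangi(gray, sigmas=sigmas, black_ridges=True)   # dark tubular vessels
    blob = circularness(gray, sigmas)                       # dark dotted vessels
    response = np.maximum(tube, blob) * red_mask            # restrict to red areas
    return response > threshold_otsu(response[red_mask > 0])
```

Restricting the response to the red-area mask before Otsu thresholding mirrors the order of operations described above: color clustering first, then shape filtering.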
Each lesion is decomposed into melanin and hemoglobin components. Figure 2.5 shows two BCC lesions with their associated relative hemoglobin densities. A number of pigmented and non-pigmented lesions, along with their decomposition into melanin and hemoglobin channels, are shown in Figure 2.6.

We verified the performance of the empirical rule in equation 2.10. The component selected by this rule was compared with the one picked by the expert: among the 659 images, the hemoglobin component was picked correctly 650 times (98.6% of the cases). The 9 cases where the hemoglobin component was not correctly selected were all low-quality contact (oil immersion) dermoscopy images in which large areas of the image were covered with bubbles that interfere with the lesion colors. Using the obtained reference values, k-means clustering was performed on the hemoglobin channel using the Mahalanobis distance, resulting in segmentation of the red areas of the lesions, as shown in Figure 2.6. The tubularness and circularness filters were then applied to the extracted red regions.

Figure 2.5: Relative hemoglobin densities of two BCC lesions. Brighter areas demonstrate higher relative density.

Figure 2.7 shows two sample shape probability maps on the segmented red areas of lesions (a) and (b). The vessel mask is obtained using Otsu's thresholding. Figure 2.8 shows the final vessel mask of the lesions, overlaid on the original image; for better visual comparison, the corresponding ground truth masks are also provided. Segmentation performance was tested on the test set of 500,000 manually outlined pixels, where a sensitivity and specificity of 90% and 86% were achieved, respectively.

Figure 2.6: ICA decomposition of lesions. (a)-(d): Original dermoscopy images of four BCC lesions. (e)-(h): Corresponding melanin channel. (i)-(l): Corresponding hemoglobin channel. (m)-(p): Segmented red areas based on the clustering of the hemoglobin channel.

Figure 2.7: Vessel shape probability maps.

In order to justify the choice of ICA and the proposed decomposition framework, we implemented the whole framework without the melanin/hemoglobin decomposition, using the existing LAB and RGB color planes. This was to verify how the entire framework would perform using the existing color planes (without the decomposition part) and how it would compare to the proposed approach. We extracted the LAB and RGB color values from the training set, derived the histogram for each color plane (shown in Figure 2.9 below) and performed clustering using the reference values from these color planes in order to obtain the erythema mask. The rest of the approach, including Frangi shape filtering and thresholding, was implemented in the same way. Table 2.2 below reports the quantitative segmentation results on the same test set as used in this chapter. The challenging cases are pigmented lesions, where pigmentation obscures the visibility of vessels and is mistakenly segmented as vasculature. As an example, Figure 2.10 below shows a pigmented lesion with the corresponding erythema clustering obtained from the original LAB values and from the proposed method. As can be seen from Figure 2.10, clustering in the original LAB color space is not capable of successfully differentiating lesion pigmentation from vasculature, which consequently results in low specificity. This was the major motivation for using the melanin/hemoglobin decomposition framework.
Figure 2.8: Final vessel mask of the lesions (a)-(d) of Figure 2.6. Left: Vessel mask by the proposed method. Middle: Vessel mask outlined by the expert. Right: Overlay of the two masks on the lesion.

Figure 2.9: Distribution of the three clusters in the RGB and LAB color spaces. Left column (top to bottom): R, G and B values of the Vessel, Pigmented and Normal clusters. Right column (top to bottom): L, a* and b* values of the Vessel, Pigmented and Normal clusters.

Figure 2.10: Erythema clustering on a pigmented lesion using the original LAB values and the proposed method. Using the original LAB values mistakenly classifies some pigmentation as the erythema cluster.

Table 2.2: Quantitative segmentation results using different channels.

                      Sensitivity   Specificity
  RGB                 0.85          0.77
  LAB                 0.86          0.81
  Proposed Method     0.90          0.86

We would like to note that, as mentioned in chapter 1, although there have been a few related studies in the literature, the vascular segmentation problem has not been addressed explicitly, and hence we cannot compare our segmentation results directly with other methods. Therefore, we compare the vessel detection performance of our proposed algorithm with that of previous methods. In chapter 4, we also compare the BCC classification performance obtained with our proposed vascular features against different methods and feature sets.

In order to evaluate the overall detection performance of the proposed method in terms of the absence or presence of vascular patterns within the whole lesion, a detection analysis was performed. The proposed method was applied to the entire set of 659 images, with 451 lesions containing vascular structures (Present) and 208 lesions without vascular structures (Absent). For each image, the number of vascular pixels detected by the proposed method and the size of the lesion (in pixels) were recorded. To classify images as Absent or Present in terms of vascular patterns, a vascular density ratio was defined in a similar way to [65]:

$$\text{Vascular Density} = \frac{\text{Vascular Area}}{\text{Lesion Size}} \qquad (2.16)$$

where Lesion Size is the size of the lesion in pixels. Images with a density ratio higher than a threshold (set to 0.08) are classified as Present and the rest as Absent. The detection results are summarized in Table 2.3. The performance is compared with the method proposed in [65], where a color-based approach using the frequency histograms of vessels in the HSL color space is used for vasculature detection. As can be seen from Table 2.3, the proposed method, with its decomposition framework and shape filters, demonstrates improved detection performance.

Table 2.3: Vascular detection performance.

                      TP Rate   FP Rate   Precision   AUC
  Betta et al. [65]   0.889     0.144     0.930       0.878
  Proposed Method     0.933     0.100     0.952       0.922

2.1.6 Discussion

In this study, skin decomposition was used in combination with shape analysis for vascular segmentation in dermoscopy. Many previous studies on other dermoscopic structures, such as pigment networks, blue-white veils, streaks and globules, set their endpoint on either structure detection or disease classification rather than on structure segmentation.
However, in the case of cutaneous vasculature, there is significant clinical importance in the shape, clinical appearance and quantification of vascular structures and hence, structure segmentation would provide useful clinical information and is a meaningful endpoint. However, since none of the previous studies address this endpoint, in order to evaluate and compare the performance of our method, BCC classification was performed in different scenarios. Each scenario investigates the efficacy and performance of an aspect of the  56  proposed method and is designed to evaluate the method design. These studies are presented later in chapter 4. 2.1.7 Limitations Since the proposed method is based on the assumption that variations of skin color are mainly due to melanin and hemoglobin, the presence of artifacts that interfere with the visibility of true colors of the skin can affect the accuracy of the method. Figure 2.11 demonstrates the dermoscopy of two lesions, where bubbles resulting from gel application, cover parts of the image. As it can be seen, the presence of bubbles results in erroneous decomposition and decreases the accuracy of the extracted hemoglobin component.   Figure 2. 11: Erroneous hemoglobin extraction due to the presence of bubbles.  Left: Original dermoscopy of two lesions. Right: Extracted hemoglobin component. The hemoglobin component is not accurately extracted. Red arrows demonstrate the gel bubbles over the lesions.   57  Another limitation of the proposed approach is that, in this study we only consider three clusters whereas in general up to six color criterions may be present in skin lesions. We do admit that considering six clusters can provide more accurate categorization of lesional colors; however, we believe that this detailed level of pigmentation and color categorization is not necessary for our current study. In our current study, we proposed a simplified categorization of three clusters, since the exact level of pigmentation is not the main target of the proposed algorithm. The target of our current study is to detect vasculature from lesional and peri-lesional skin; therefore, the main focus is to accurately differentiate the vasculature (visible as red or blue) cluster from other colors. In other words, even if the “light brown”, “dark brown” and “black” colors are all simply categorized into the “pigmented skin” cluster or the “white’” color of the regression areas is considered as perilesional normally pigmented skin in our approach, it does not affect the detection accuracy of vasculature cluster, since precise categorization of pigmentation is not the main purpose of the proposed study. Therefore, in order to increase the efficiency and decrease the computational complexity of the algorithm while maintaining the detection accuracy, we proposed a three-cluster scheme. As an example, Figure 2.12 below demonstrates a challenging image with multiple colors (black, brown and light brown pigmentation, white, red, etc), where the proposed method has successfully extracted the erythema and final vessel mask inside the lesion. Although the proposed method has demonstrated superior performance compared to previous techniques, we believe that especially for melanoma studies, considering more color clusters may improve the clustering accuracy.   58   Figure 2. 12: Vessel Segmentation on a multicolor lesion by the proposed method.  Left: Original dermoscopy image of the lesions. Middle: The extracted hemoglobin component. 
Right: The final vessel mask overlaid in white color on the image.  Another limitation of the proposed study is the cases of dermal pigmentation, i.e. excess melanin buried deep down in the skin dermis. Melanin is dominantly contained in the epidermal layer whereas hemoglobin is located in the dermal layer, which is the basis of the proposed decomposition framework. Melanin tends to reflect more light in shorter wavelengths. Considering the fact that the shorter the wavelength the smaller the depth of penetration, epidermal melanin is well visualized using the blue light in the visible range and detected by camera. However, there are certain skin conditions where melanin is concentrated deeper down in the dermis layer. Light brown spots such as birth marks, café au lait spots, age spots, post-injury pigmentation and melasma are among these conditions. While epidermal hyperpigmentation is very well detected by RGB camera, it is difficult to differentiate and detect dermal melanin from only red, green and blue light. As it can be seen from Figure 2.13 below, shorter wavelengths (such as the blue light) tend to vanish as they penetrate into deeper layers, while longer wavelengths that have a better penetration (such as red light) are less absorbed by melanin. Therefore, not only dermal melanin signal is weak, but also the  59  appearance of its color is largely affected and interfered by the hemoglobin reflection signal. This directly affects the decomposition framework and causes the areas of excess pigmentation to be classified as hemoglobin component to a large extent.   Figure 2. 13: Placement of excess melanin.  Left) Melanin-containing melanosomes are indicated by the dots [90]. Right) Skin layers and red, green and blue waves. Light waves vanish as they reach deeper into the skin.    2.2 Part Two: Detection and Localization of Cutaneous Vasculature in Dermoscopy Images via Deep Feature Learning In part one, we presented a novel framework for detection and segmentation of cutaneous vasculature. Although the proposed approach demonstrates superior performance on detecting cutaneous vessels compared to other techniques, it is highly dependent on extensive domain knowledge to design and extract meaningful hand-crafted features and hence requires specific algorithms for every vessel type. Cutaneous vessels are highly variable in shape, size, color, and architecture, which complicate the detection task. Considering the large variability of these structures, conventional vessel detection techniques often lack the generalizability to detect  60  different vessel types and require separate algorithms to be designed for each type. Furthermore, such techniques are highly dependent on precise hand-crafted features which are time-consuming and computationally inefficient. As a solution, in the second part of this chapter, we propose a data-driven deep feature learning framework which incorporates clinically interpretable feature learning, while covering different skin vessel patterns and colors for comprehensive detection of cutaneous vessels. The proposed approach is capable of “learning” the hidden features directly from the data instead of the need for hand-crafting the features. While hand-crafted feature extraction is limited in terms of generalization and dependency on expert knowledge, deep artificial neural networks demonstrated superior results in image recognition tasks [91]. 
Using a hierarchy of processing layers, deep networks model representations of the image from low level information (pixel intensities) to high level features (complicated combinations of edges and shapes), and then analyze the abstracted features [91]. Such high-level features are more robust to artifacts and more powerful in distinguishing complicated visual patterns [92]. Direct learning from the data bypasses the need for hand-crafted features, provides a comprehensive understanding towards the nature of the data and hence demonstrates good generalizability characteristics [93]. Furthermore, deep learning approaches can leverage unlabeled data by performing unsupervised learning [94]. This is a great asset for medical image data analysis, where acquiring image labels and annotations is time-consuming and expensive. Recently, deep networks were successfully employed in a variety of medical image analysis applications [95], including nuclei detection in histopathology images of skin, breast and colon cancer [92, 96, 97], chest pathology detection in X-ray images and melanoma diagnosis [36, 98, 99]. These capabilities of deep networks  61  address major challenges in skin vascular analysis problem and make them a perfect candidate for our application.  There are different types of deep networks according to their architectures. Convolutional neural networks (CNN) are partially connected networks, meaning they perform based on local receptive fields and only learn a set of locally connected neurons. CNNs consist of millions of parameters to be tuned, therefore, in most cases they need to be pre-trained using general image datasets. Stacked sparse autoencoders (SSAE) are another type of deep networks specialized in unsupervised feature learning. SSAE is a fully-connected multilayer neural network which deploys and learns a global weight matrix for feature representation. At each hidden layer, the network will transform the input data into a different representation in the hidden units. Therefore, SSAE is an unsupervised feature learning tool, which can abstract features as an input image propagates through the network and use the extracted high-level features for the recognition task via a final classification layer. We propose the fully connected model of SSAE for our application because SSAEs are capable of learning hierarchical high-level feature representations directly from image pixel intensities which makes them suitable for capturing variable geometrical patterns and colors of skin blood vessels. Also, SSAEs can be trained with unsupervised learning, which enables us to exploit the large amount of unlabeled data for data-driven feature learning. Furthermore, we propose a patching strategy to divide each image into patches and analyze each patch separately. This strategy reduces the dimensionality of the input data and makes it computationally efficient to be employed in a fully connected network architecture.  In this part, we present a fully connected model based on stacked sparse autoencoders (SSAE) for detection of skin vascular structures from dermoscopy images. A multilayer SSAE  62  is designed to learn the hidden representations of the data in hierarchical layers in an unsupervised manner. The high-level learned features are subsequently fed into a classifier which detects and localizes vessels within the lesion. 
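As a rough illustration of the patching strategy mentioned above, the following sketch (illustrative names, not the thesis code) splits a resized dermoscopy image into non-overlapping 32 x 32 patches and flattens each patch into the 3072-dimensional vector consumed by the fully connected network; the 256 x 256 resize and the 32 x 32 patch size follow the settings reported later in this section.

```python
import numpy as np

def extract_patches(image: np.ndarray, patch_size: int = 32):
    """Split an RGB image (H x W x 3) into non-overlapping square patches.

    Returns the stacked patches together with the (row, col) grid position of
    each patch, so that patch-level predictions can later be mapped back onto
    the lesion for localization.
    """
    h, w = image.shape[:2]
    rows, cols = h // patch_size, w // patch_size
    patches, positions = [], []
    for r in range(rows):
        for c in range(cols):
            patch = image[r * patch_size:(r + 1) * patch_size,
                          c * patch_size:(c + 1) * patch_size]
            patches.append(patch)
            positions.append((r, c))
    return np.stack(patches), positions

# A 256 x 256 image yields an 8 x 8 grid of 64 patches, each flattened
# to a 32 * 32 * 3 = 3072-dimensional input vector for the network.
image = np.random.rand(256, 256, 3)
patches, positions = extract_patches(image)
inputs = patches.reshape(len(patches), -1)     # shape (64, 3072)
print(inputs.shape)
```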
We evaluate the detection performance and feature interpretations of the proposed framework in comparison with the method in part one, as well as a pre-trained convolutional neural network (CNN) and conventional feature extraction strategies. By visualizing the learned features of the proposed SSAE, we demonstrate that unlike general texture feature descriptors and pre-trained convolutional neural networks (CNN), the proposed approach yields to clinical interpretability of the features which is a major asset for validation of the proposed computer-aided system.   2.2.1 Overview of the Proposed Method  A simplified diagram of the proposed framework is demonstrated in Figure 2.14. The training dermoscopy image set is first divided into patches of 32 x 32 pixels. As mentioned above, patching decreases the dimensionality of the input so that it could be employed in the fully connected model of SSAE. Lesional skin blood vessels are small structures and under dermoscopic examination each lesion may contain a large number of blood vessels throughout the lesion. Patching enables the deep framework to capture different patterns, orientations and shades of color within every single lesion image and hence provides a comprehensive understanding towards the nature of vascular structures. Therefore, patching is also a good strategy to increase the available training data. Moreover, the proposed framework separately analyzes every single patch of the lesion based on its vascular content, which allows for both detection and localization of the blood vessels within the whole lesion. Each patch is labeled by an expert based on presence or absence of vascular structures. The patches are fed to a  63  SSAE, where hidden representations are learned in a layer-wise manner. A final softmax layer performs the supervised classification on the hidden representations for predicting the presence or absence of vasculature in each patch. A diagram of the proposed approach is given below. Figure 2. 14: Diagram of the proposed computer-aided framework.  2.2.2 Unsupervised Feature Learning through SSAE The proposed cutaneous vessel detection framework is based on the multilayer architecture demonstrated in Figure 2.15. First, patches are extracted from the skin lesion training set and labeled with 𝐿𝜖 {0,1}, so that patches containing a whole or part of a vascular structure are labeled as (L = 1) and patches without any vasculature as (L = 0). The training patches are then fed as the input to the SSAE network. SSAE is a neural network comprised of multiple layers of successive sparse autoencoders, where training is done in a layer-by-layer fashion.   64   Figure 2. 15: The proposed framework of the stacked sparse autoencoder and softmax classifier for detecting vasculature in skin dermoscopic images.  Each layer of the SSAE is a sparse autoencoder. Figure 2.16 illustrates a simple schematic of the autoencoder. Autoencoder is a neural network which attempts to replicate its input at the output. By trying to reproduce the input at the output, the autoencoder discovers interesting characteristics and hidden patterns of the input data. In the training phase, the autoencoder learns a set of neurons’ weight vector W through backpropagation such that the reconstructed output is a copy of the input data [94]. This involves finding optimum weight parameters by minimizing the discrepancy between the input and its reconstruction while preserving sparsity, i.e. allowing only a small number of activated neurons. 
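The following is a minimal sketch of a single sparse autoencoder layer of the kind just described, written in PyTorch for brevity (the thesis implementation used MATLAB and Caffe). The loss combines the reconstruction error, a weight-decay term and a KL sparsity penalty, anticipating the cost function given in Equation 2.17 below; the layer size and the values of lambda, beta and rho follow Section 2.2.6, while the learning rate and the random stand-in patches are placeholders.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """One autoencoder layer: encode -> sigmoid hidden units -> decode."""
    def __init__(self, n_in=3072, n_hidden=400):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))      # hidden representation
        x_hat = torch.sigmoid(self.decoder(h))  # reconstruction of the input
        return x_hat, h

def sparse_ae_loss(x, x_hat, h, weights, lam=0.004, beta=4.0, rho=0.05):
    """Reconstruction error + weight decay + KL sparsity penalty (cf. Eq. 2.17)."""
    mse = ((x - x_hat) ** 2).sum(dim=1).mean()
    weight_decay = lam * sum((w ** 2).sum() for w in weights)
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)   # average activation per hidden unit
    kl = (rho * torch.log(rho / rho_hat)
          + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()
    return mse + weight_decay + beta * kl

# Unsupervised training of the first layer on flattened 32 x 32 x 3 patches
model = SparseAutoencoder()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
patches = torch.rand(256, 3072)                 # stand-in for real patch data in [0, 1]
for epoch in range(10):
    x_hat, h = model(patches)
    loss = sparse_ae_loss(patches, x_hat, h,
                          weights=[model.encoder.weight, model.decoder.weight])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The second layer would be trained in the same way on the hidden representations of the first, before the softmax layer is added, as described in the following subsections.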
Sparsity is a desired characteristic of the model: it enables the model to represent the original data with much less information and provides a minimal, efficient representation.

Figure 2.16: Schematic representation of a sparse autoencoder.

The sparsity characteristic of the model can be enforced by introducing a term in the cost function of the training algorithm that penalizes large activation values for neurons. Equation 2.17 gives the cost function used in the training of the SSAE:

$$E = \frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K}\left(x_{kn} - \hat{x}_{kn}\right)^{2} + \lambda\,\lVert W \rVert^{2} + \beta\sum_{j=1}^{D} \mathrm{KL}\!\left(\rho \,\Vert\, \hat{\rho}_{j}\right) \qquad (2.17)$$

N is the number of observations (in this case, patches), and K is the number of variables (neurons) in the training data. The first term on the right-hand side of the equation is the mean square error between the input and its reconstruction, which forces the model to provide a good reconstruction of the input data at the output. The second term is a regularization term (also called the weight decay) that prevents the magnitude of the neuron weights (W) from increasing dramatically and helps avoid overfitting. λ is the weight decay control parameter. The third term is the sparsity constraint that encourages the activation value of each neuron to be low. For this purpose, a desired activation value ρ is selected, which determines the desired proportion of training examples a neuron reacts to. The goal is to keep the average activation $\hat{\rho}_j$ of neuron j close to the desired activation ρ using the Kullback-Leibler divergence $\mathrm{KL}(\rho\,\Vert\,\hat{\rho}_j)$, which measures the difference between the distributions of $\hat{\rho}_j$ and ρ and returns a large value when the two are distant from each other. β is the control parameter for this term and D is the number of neurons in the hidden layer. At each iteration of training, the weights W of the network are updated through the optimization of the above cost function. The training sequence involves feeding patches into the first layer and obtaining the first hidden feature representations $h_k^{(1)}$ as the result of training. The primary representations are then fed into the second layer, where a higher-level set of features $h_k^{(2)}$ is learned. After successive layers of sparse autoencoders, the high-level features are fed as the input to the softmax classifier for vasculature detection.

2.2.3 Vasculature Detection through Softmax Classification
Unlike the sparse autoencoder layers, where learning is done in an unsupervised manner, the softmax layer acts as a supervised classifier in a conventional neural network. It takes as input the high-level features z of the last layer of the SSAE and produces the probability of patch t belonging to class 1, i.e. the probability of presence of vasculature in the patch. This probability is produced using a logistic function, as shown in equation 2.18:

$$P(t = 1 \mid z) = \frac{1}{1 + e^{-z}} \qquad (2.18)$$

During the training of the softmax layer, the weights of the network are updated such that the discrepancy between the predicted class and the true class is minimized. Training of the softmax classifier is done by minimizing the cost function in equation 2.19:

$$E = -\frac{1}{N}\sum_{j=1}^{N}\sum_{i=1}^{c}\left[\,t_{ij}\ln y_{ij} + \left(1 - t_{ij}\right)\ln\!\left(1 - y_{ij}\right)\right] \qquad (2.19)$$

N is the number of training examples (in this case, the number of patches) and c is the number of classes (c = 2 for the vasculature and non-vasculature classes).
𝑡𝑖𝑗 is the true class label corresponding to the jth patch of the input which is 1 only if the jth patch belongs to ith class. 𝑦𝑖𝑗 is the predicted output probability corresponding to the jth patch, i.e. the probability that the jth input sample belongs to the ith class. The cost function in equation 2.19 attempts to update the weights such that the predicted probability tends to the true class label. The network is trained in a greedy layer-wise manner, i.e. each layer in turn, starting from the first layer. Hidden representations learned through the first layer of training are fed into the second sparse autoencoder. The feature representations of the second layer acts as the input to the softmax layer with the associated labels for supervised learning. Stacking the encoder layers and the softmax layer results in a deep structure which is then fine-tuned over the entire training set to detect vasculature.  68   2.2.4 Experimental setup                                 The dataset used in this part consists of a set of 600 RGB color images obtained from the same three sources as part one: 1) Atlas of dermoscopy by Argenziano [88]; 2) The University of Missouri; and 3) Vancouver Skin Care Center. Lesions covered a variety of different cutaneous conditions including basal cell carcinoma, sebaceous hyperplasia, seborrheic keratosis and dysplastic nevi. The diagnosis of each lesion was given along with the image. All implementations were performed using MATLAB 2015a and Caffe platform [100] on a PC (Intel core i7 with 16GB RAM) and GeForce GTX NVIDIA Graphics Processor Unit. Once the network is trained and weight parameters are tuned, predicting new images is done in real-time.   2.2.5 Training Set Preparation and Labeling 400 images were randomly selected among the dataset for training. Images were resized to standard size of 256x256. From the selected images, non-overlapping patches of size 32x32 pixels were selected. The size of the patches was chosen so that it is large enough to contain part of a vascular structure and small enough for the fully connected autoencoder training. An expert manually labeled each patch based on absence or presence of any vasculature within the patch. In this regard, any patch that contains a vessel or part of a vascular structure of any type is labeled 1 whereas patches that do not contain any vasculature are labeled 0. Since the dataset covers a variety of skin conditions, the vascular structures within the images are of different patterns and types. As for the remaining 200 images for testing, the same procedure was done for labeling. The final training set consists of 3186 patches of vasculature and 6097 patches  69  not containing any vasculature. This set is then used to train the SSAE and softmax classifier. As for the test set, 1062 patches of vasculature and 2033 patches of no vasculature were stored. Figure 2.17 demonstrates a number of extracted candidate patches with presence or absence of vasculature. As it can be seen, the vascular patches cover a range of different shapes, colors and patterns of blood vessels. Figure 2. 17: Examples of a) vascular and b) non-vascular patches extracted from the image dataset  2.2.6 Parameter Setting Since images are of three channel RGB type, the size of the input to the first layer of the SSAE is 32x32x3=3072. After an exhaustive search on the network architecture, the number of hidden units on the successive layers of SSAE is chosen as 𝑑ℎ(1)= 400 and 𝑑ℎ(2)= 100 respectively. 
The regularization and sparsity parameters λ and β in the SSAE cost function as in equation 2.17 are set to λ=0.004 and β=4. The desired activation value ρ is set to ρ=0.05. Weights of the network are randomly initialized and the training is performed using gradient descent.  70  2.2.7 Vessel Detection Performance Evaluation In order to evaluate the performance of the proposed SSAE framework, we investigate and compare the proposed vessel detection performance extensively with both supervised and unsupervised techniques including hand-crafted feature extraction and supervised classification, shape and color spectrum decomposition through independent component analysis and fine-tuning pre-trained deep neural networks. 2.2.8 Experimental Results All implementations were performed using MATLAB 2015a and Caffe platform [100] on a PC (Intel core i7 with 16GB RAM) and GeForce GTX NVIDIA Graphics Processor Unit. Once the network is trained and weight parameters are tuned, predicting new images is done in real-time. 2.2.9 Vessel Detection Results The vasculature detection and localization results of the proposed SSAE model on a number of candidate lesions with different vessel types is shown in Figure 2.18 and Figure 2.19. The demonstrated results are selected from a variety of lesions (from minimal and low to moderate and high degrees of lesional pigmentation, including different vascular architectures and distributions and different erythema color) ranging from easy to challenging cases. Each skin image is divided into small patches and the proposed algorithm detects any vascular structure within each patch. Detection of vasculature in patches demonstrates how the blood vessels are distributed in the lesions and hence localizes the vasculature within the whole lesion. Since the location and distribution of blood vessels in the lesion is a critical diagnostic clue, this step can provide significant information for differential diagnosis. Furthermore, the proposed algorithm can better visualize the density of lesion’s vascular component compared  71  to the lesion area. Since vascular density can be an indicator of tumor growth, this step can contribute to non-invasive systematic assessment of tumor progress. In Figures 2.18 and 2.19, the presence and locations of detected vasculature within the lesion is visualized by green marks overlaid on the lesion. These lesions were chosen to demonstrate the ability of SSAE to detect different varieties of skin vasculature among different lesion types.   Figure 2. 18: Vessel detection and localization results on four candidate lesions with minimal to low pigmentation.  The test image is divided into 32 x 32 patches. Each patch is fed into the SSAE. The final softmax classifier predicts the presence or absence of vasculature within each patch. Green patches demonstrate the locations of vasculature. a) Dermoscopy image. b) Patching of the image. c) Detected vasculature by the proposed SSAE framework marked by green.  72   Figure 2. 19: Vessel detection and localization results on three candidate lesions with moderate to high pigmentation.  a) Dermoscopy image. b) Patching of the image. c) Detected vasculature by the proposed SSAE framework marked by green.  In order to evaluate the proposed framework, vessel detection performance of the algorithm is compared with a number of standard and state-of-the-art techniques. Most common techniques in skin vascular analysis consider shape and color information. 
In this regard, we conducted the performance comparison using common color and texture features in supervised classification, state-of-the-art vessel feature extraction as well as unsupervised deep learning techniques. Details on the comparison framework is presented below.   73   2.2.10 Comparison with Supervised Techniques 1. GLCM+Color: This method classifies the presence and absence of vasculature within a patch using color and Grey Level Co-occurrence Matrix (GLCM) texture features [101] including contrast, correlation, energy, and homogeneity. The gray level co-occurrence matrix is calculated at four different angles [0 45 90 135] degrees and three different pixel distances [1 2 3]. The color features include the mean of L*, a* and b* values in the Lab color space. All these features are fed into a softmax classifier. 2. Gabor+Color: This technique classifies the presence and absence of vasculature within a patch using color and Gabor features. The Gabor filter banks [102] are created at 8 different orientations [0 23 45 68 90 113 135 158] degrees and five different wavelengths [2 4 6 8 10]. The color features are the mean of L*, a* and b*. A softmax classifies is used for the classification task.  3. Skin image decomposition: The detection performance is also compared with the method presented in part one and in [83], where skin color is first decomposed into melanin and hemoglobin channel. The hemoglobin channel is further analyzed for shape filtering by Frangi vesselness measure to extract vascular structures. The color decomposition is performed through independent component analysis. Frangi vesselness measure [38], which is obtained through the second order structureness of the Hessian matrix, is also calculated over the range of scales (1-10) and thresholded by Otsu’s method.    74  2.2.11 Comparison with Deep Networks (Unsupervised Feature Transfer) 4. Pre-trained CNN: Deep networks have recently dominated the unsupervised feature learning field. In order to compare the detection performance of SSAE with a deep learning framework, Alexnet structure [103] is used, which is a deep convolutional neural network with five locally connected convolutional and three fully connected layers. The network is pre-trained on natural images from ImageNet dataset [104] and the learned features are transferred. The final layer of Alexnet is then fine-tuned on our training image set to adapt it to our specific application.  Table 2.4 demonstrates a comparison of the quantitative performance of different techniques in terms of sensitivity, specificity, positive predictive value (precision), negative predictive value and accuracy on 3095 patches from 200 test images. Since this is a detection problem, the performance in terms of positive predictive value (PPV) is of great importance. It can be seen from Table 2.4 that the proposed SSAE framework yields the highest PPV and highest accuracy and hence outperforms the other techniques in detection task. As seen in Table 2.4, the general color and feature extraction methods based on GLCM+color and Gabor+color demonstrate weak detection performance which may be caused by the fact that these features only capture general aspects of vascular structures properties and therefore may misclassify other structures such as hair or skin pigmentation as vasculature. The method based on skin image decomposition [83] is highly dependent on expert knowledge to define and extract meaningful features and therefore may not be suitable for fast screening purposes. 
The pre-trained CNN framework demonstrates good results; however, as will be demonstrated in the next section, it lacks clinical interpretability of the features.

Table 2.4: Comparison of vessel detection performance

                                   Sensitivity   Specificity   PPV     NPV     Accuracy
  GLCM+Color                       0.906         0.845         0.753   0.945   0.866
  Gabor+Color                      0.761         0.963         0.914   0.885   0.893
  Pre-trained CNN                  0.902         0.907         0.809   0.955   0.906
  Skin image decomposition [83]    0.921         0.905         0.835   0.956   0.911
  Proposed SSAE framework          0.917         0.973         0.947   0.957   0.954

2.2.12 Feature Visualization
Figure 2.20 demonstrates a set of feature representations learned through the SSAE, a set of features from the pre-trained CNN and a set of conventional features corresponding to Gabor filter banks. It can be seen that the features learned through the proposed SSAE best capture the visual properties of the lesional dermoscopic data, such as different shades and patterns of red color and pigmentation, dotted patterns, and curved, linear and combined patterns at different orientations, widths and texture profiles. This feature set covers various vascular patterns and colors and hence demonstrates a good generalization ability for detecting a variety of dermoscopic vasculature. A set of Alexnet features is demonstrated in Figure 2.20(b). Although some of the features from the pre-trained Alexnet may potentially be helpful for the vessel detection problem, Alexnet features are strongly biased towards the dataset the network has been trained on. Therefore, Alexnet features tend to represent the visual properties and colors of natural images (animals, trees, objects, etc.) and fail to capture clinically interpretable patterns. The conventional features from the Gabor basis, as in Figure 2.20(c), can only capture general texture properties and are therefore much less specific and comprehensive.

Figure 2.20: Visualization of some of the feature representations. a) SSAE b) pre-trained and fine-tuned Alexnet and c) Gabor filter banks

2.2.13 Discussion
This study proposes a novel framework for automated cutaneous vascular detection and localization, demonstrating superior performance compared to other common techniques. However, in order to achieve the best performance of the proposed framework, there are multiple factors that need to be considered:
Patch Size: In this study, in order to select the proper patch size, four different patch sizes were evaluated (8×8, 16×16, 32×32, 64×64 pixels). While smaller patch sizes fail to represent color variations and complex vascular architectural patterns, leading to a poor detection
To avoid such images, proper dermoscopy image acquisition training is recommended.  Figure 2. 21: Limitations of the proposed technique due to over-exposure.  a) Left: Original dermoscopy image. Middle: Erroneous results and missed vasculature by the proposed framework. Right: Image histogram. b) Left: Histogram-corrected image. Middle: Improved vessel detection. Right: Adjusted image histogram   78  Figure 2. 22: Examples of images with artifacts (oil immersion bubbles, out of focus, blur, low quality).  a) Original dermoscopy image. b) Arrows demonstrating the vasculature not detected by the algorithm  Image Quality: Image quality plays an important role in the performance of the proposed technique. As we demonstrated in the limitations section, factors such as over-exposure, blurriness, presence of artifacts and device specifications, which result in low image quality, can affect the performance of the proposed technique and yield erroneous results. Therefore, we propose that in order to achieve the best performance of the proposed technique, images should meet the following inclusion criteria:  1) Minimum color and spatial resolution requirement as proposed by the American Telemedicine Association [75] guideline [105]: Based on the American Telemedicine Association [75], a minimal image spatial resolution of 75 ppi and an image color resolution of 24 bits is recommended for teledermatology applications [105]. The 2012 ATA guideline for store-and-forward images recommended an image resolution of 800×600 pixels (preferred 1024×768) [106]. Although these guidelines are all for digital camera images and to date, no universally acceptable standards exist for dermoscopy images in remote applications, most  79  commercially available dermoscopy devices, exceed the minimum image quality resolution requirements. 2) Blur and artifact free: In this regard, non-contact dermoscopy is preferred as it does not require application of immersion oil or gel and hence provides a better quality. 3) Images acquired by cross-polarized dermoscopy are preferred over non-polarized as they allow deeper light penetration and provide better visualization of vasculature.  As discussed above, variations in imaging conditions and illumination sources across the dataset can be a major factor affecting the performance of the system. With this regard, one possible solution to enhance the robustness of the system to imaging conditions is the use of color constancy algorithms that aim to minimize the influence of acquisition set up [107]. This is achieved by transforming the images acquired under different lighting conditions, so that they appear similar to colors under a canonical illuminant. A successful example of such techniques was presented in [107], where Barata et al. demonstrated that using color compensation can significantly improve the performance of  CAD systems in classification and analysis of dermoscopy images. It is also worth mentioning that a powerful aspect of deep learning models is their capability to learn variations among the data (including variations in lighting, contrast, field of view, etc.) and further generalize it to new unseen data instances. The capability of deep learning frameworks to learn directly from the data, enables them to account for inter data variability and hence make them a powerful tool in dealing with unstandardized data. From this aspect, deep learning models surpass any conventional technique which suffer drastically from data variations. 
This capability of deep networks can be further enhanced by introducing more data samples and more variations to the training data.  80  That is why in this work, we tried to include datasets from multiple sources, acquired by different devices under different conditions. 2.2.14 Conclusion In this chapter, we investigated the problem of cutaneous vasculature analysis at pixel-level. We proposed two strategies for detection, segmentation, and localization of cutaneous vasculature and evaluated their performance quantitatively. In the first part, we proposed a fully automatic cutaneous blood vessel segmentation framework based on skin image decomposition and shape analysis of the hemoglobin channel. The presented method is the first in the field capable of detecting and segmenting vessels in both pigmented and non-pigmented lesions as a result of the decomposition framework. Compared to previous studies, this study accounts for both shape and color information and demonstrates that combining the two yields better results when segmenting cutaneous telangiectasia. Experimental results show the efficiency of the proposed method in extracting the vascular mask when compared to annotations provided by expert dermatologists. The results of this study show the promise for a potential tool for vasculature quantification, which can be applied, in a broad range of dermatology applications. A drawback of the proposed technique along with other conventional segmentation techniques is their dependency on extensive domain knowledge, computational inefficiency due to hand-crafting the features and restricted generalizability. Motivated by such limitations, in the second part of this chapter, an automated deep learning framework based on stacked sparse autoencoder was presented for detection and localization of skin vascular structures. We demonstrated that the proposed framework is capable of learning hidden representations of the data directly and thereby generalizes to a wide variety of different vascular patterns and shapes.  81  Using a patching strategy, the proposed framework detects the presence and determines the locations of vasculature within skin lesions. We evaluated the detection performance of the proposed framework by comparing it to different strategies based on learning pre-training CNN as well as conventional supervised classification techniques. We further compared the feature interpretability of the proposed framework with other hand-crafted feature extraction and deep learning techniques and demonstrated that unlike other techniques, the proposed framework results in clinically meaningful learned features. However, the proposed framework does not yield smooth segmentation but rather patch-based localization. Since the proposed technique does not depend on high-level hand-crafted feature extraction, it can potentially be used in fast screening, teledermatology and remote diagnostic purposes.  82  Chapter 3: Analysis of Cutaneous Vasculature at Lesion-level: Morphology Analysis and Pattern Recognition In this chapter, we investigate cutaneous vascular structures at lesion-level, studying the morphology and patterns of vasculature. As discussed in Chapter one, morphological patterns, architecture and distribution of vascular structures in skin lesions are significant diagnostic clues, linked to certain abnormalities. 
There are a variety of morphologies and architectural arrangements of lesional cutaneous blood vessels from bright focused red lines to blurred pink dots; some with high specificity that could yield a ready diagnosis or provide significant insights towards the underlying histology. Previous studies confirm the critical role of distinctive vascular patterns in differential diagnosis and understanding of the nature of skin disorders [39, 47, 49, 79] and in this chapter, we develop an automated framework to detect and differentiate significant vascular patterns within the lesion. Based on clinical categorization, cutaneous vessels are divided into three main morphologic classes of red dots, clods and linear vessels [108]. The linear class can be further subcategorized into arborizing, hairpin, comma and linear irregular based on their morphologic properties. Figure 3.1 demonstrates a schematic of these morphologic styles. We will use these clinical definitions later in the chapter to design discriminative features for automated morphology recognition.Automating the detection and morphology recognition of vascular structures would not only facilitate the skin examination process through better visualization and providing a decision support for physicians, it could also contribute to the training and teaching of dermoscopic examination technique to new physicians. To the best of our knowledge, there is yet no study in the literature on automated morphology and pattern analysis  83  Figure 3. 1: Schematics of different vascular morphology.  a) Comma (linear) b) Crown (linear) c) Arborizing d) Dotted e) glomerular f) polymorphous.  of skin vasculature. Considering the importance of cutaneous blood vessels, the main goal of this investigation is to fill this gap.  This chapter presents a novel method for automated analysis of cutaneous vascular morphology. The presented framework builds upon and extends our earlier work on Chapter two on automated segmentation of cutaneous vasculature. From the extracted vessel mask, based on the clinical and mathematical definitions of cutaneous vessel types, a novel set of features is proposed which includes geometric, structural, orientation, distribution, textural and chromatic properties of lesional vessels and classifies the vascular component of the lesion into one of the main categories of Arborizing, Dotted, Linear and Polymorphous. Each category is  84  analyzed based on their constructional features and classification performance is evaluated over a dataset of several vascular patterns.  It should be noted that in clinical terms, there are more vascular categories than the four studied in this chapter. However, some of these vascular morphologies are very rare, and our data sources contained very few to no images of such categories, which made it impossible to develop a reliable computer-aided framework. The four proposed categories in this chapter, are the most clinically significant cutaneous vascular morphologies and have been recognized as biomarkers for and strongly associated with melanoma, basal cell carcinoma, pre-cancerous lesions and inflammatory conditions.    3.1 Identifying Valid Lesional Vasculature The first step in our pattern recognition framework is to segment the valid vascular structures of the lesion and get a vessel mask image. 
For this purpose, we use our proposed method in Chapter 2, part one, which segments valid lesional blood vessels based on ICA decomposition to account for the underlying color components of the skin and applies a multi-scale Frangi filter for vascular shape. Figure 3.2 demonstrates a number of candidate lesions with the associated vessel masks generated by our algorithm. The lesions in Figure 3.2 were chosen to represent each of the morphologic categories. After extracting the vessel mask, we proceed to define and extract the morphologic feature set.

Figure 3.2: Candidate lesions with different vessel patterns and the associated lesion and vascular masks. a) Dotted b) polymorphous c) linear d) arborizing.

3.2 Design and Extraction of New Vascular Feature Set
Based on the clinical and mathematical definitions of lesional vasculature, a novel set of vascular features is proposed which includes geometric, orientation, distribution and chromatic features of the lesional blood vessels. Common color and texture features of the lesion itself are also extracted and used.

3.2.1 Geometric Feature Set
Geometric features are designed based on clinical definitions of cutaneous vessels to capture topological characteristics of the vasculature. This feature set consists of the two subsets described below.
3.2.1.1 Vessel Topology
Topological features include vessel width, length, area, major to minor axis ratio, number of branches, curvature and tortuosity. This feature set is calculated for each vascular segment in the binary vessel mask, and the max, min, mean and standard deviation over the total vessel segments of each image are calculated.
a) Width, Length, Area
For each vessel segment the maximum width is defined as the diameter of the largest disk that can be fit into the vessel segment blob, such that the disk is fully enclosed. For this purpose, the Euclidean distance transform of the complement of the binary vessel mask image is calculated; that is, for each pixel in a vessel blob, the Euclidean distance between that pixel and its nearest nonzero pixel is calculated, and the maximum distance in each blob is derived as follows:

$$MaxWidth = 2 \times \max_{i,j\,\in\,blob}\left(\sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}\right) \qquad (3.1)$$

where $(x_i, y_i)$ denotes a single pixel of a vessel segment in the binary vessel mask. The average width over all vessel segments is also calculated by averaging the distance matrix. To calculate the length of each vessel segment, the vessel mask is first "thinned", i.e. the internal pixels of each vessel segment are removed such that the segment is shrunk to a minimal stroke. The thinning algorithm follows the notation proposed in [109], where a central pixel $p$ with eight neighboring pixels $x_1, x_2, \ldots, x_8$ (denoted by $N(p)$) is deleted if the following conditions are satisfied:

$$X_H(p) = 1 \qquad (3.2)$$

$$2 \le \min\!\big(n_1(p),\, n_2(p)\big) \le 3 \qquad (3.3)$$

$$(x_6 \vee x_7 \vee x_4) \wedge x_5 = 0 \qquad (3.4)$$

where $X_H(p)$ is defined as the number of crossings from a white to a black point and is calculated as:

$$X_H(p) = \sum_{i=1}^{4} b_i \qquad (3.5)$$

$$b_i = \begin{cases} 1 & \text{if } x_{2i-1} = 0 \text{ and } (x_{2i} = 1 \text{ or } x_{2i+1} = 1)\\ 0 & \text{otherwise} \end{cases}$$

and $n_1(p)$, $n_2(p)$ are defined as adjacent pixel crossings:

$$n_1(p) = \sum_{k=1}^{4} x_{2k-1} \vee x_{2k} \qquad (3.6)$$

$$n_2(p) = \sum_{k=1}^{4} x_{2k} \vee x_{2k+1} \qquad (3.7)$$

The length is then calculated as the total number of pixels in each thinned stroke. Area is also calculated as the sum of the number of pixels in each vessel segment blob.
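A minimal sketch of these width, length and area measurements (together with the axis ratio used below) is given here using SciPy and scikit-image. It is only an illustration of the described procedure under the stated definitions, not the thesis implementation; the function name and the toy mask are hypothetical.

```python
import numpy as np
from scipy import ndimage
from skimage.measure import label, regionprops
from skimage.morphology import skeletonize

def vessel_topology_features(vessel_mask: np.ndarray):
    """Per-blob maximum width, length, area and axis ratio from a binary vessel mask.

    Width: twice the largest Euclidean distance-transform value inside the blob,
    i.e. an estimate of the diameter of the largest inscribed disk.
    Length: number of pixels of the blob skeleton.  Area: pixels in the blob.
    """
    dist = ndimage.distance_transform_edt(vessel_mask)  # distance to nearest background pixel
    labels = label(vessel_mask)
    features = []
    for region in regionprops(labels):
        blob = labels == region.label
        features.append({
            "max_width": 2.0 * dist[blob].max(),
            "length": int(skeletonize(blob).sum()),
            "area": int(region.area),
            "axis_ratio": region.major_axis_length / max(region.minor_axis_length, 1e-6),
        })
    return features

# Toy example: a 3-pixel-wide, 40-pixel-long horizontal "vessel"
mask = np.zeros((64, 64), dtype=bool)
mask[30:33, 10:50] = True
print(vessel_topology_features(mask))
```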
Figure 3.3 demonstrates three sample dermoscopy images with the associated vessel masks on which the width and length calculations are performed.

Figure 3.3: Width and Length Extraction. Each blob is processed by removing inner pixels. Blobs are skeletonized for length calculation.

b) Axis Ratio, Branching Points
Another feature that helps differentiate elongated from circular structures is the major to minor axis ratio. For this purpose, the ellipse that has the same normalized second central moments as the vascular blob is derived and the length ratio of its major to minor axis is calculated. To differentiate between arborizing (branching) vessels and linear non-branching architecture, the number of branching points is also calculated for each vessel blob by finding the bifurcation points in the skeletonized mask. Figure 3.4 demonstrates the same three lesions with the axis ratio and number-of-branches calculations.

Figure 3.4: Branch point and axis ratio extraction. Middle row: Branch points are superimposed on the original mask. Bottom row: An ellipse with the same second central moment is fit to each blob and the major to minor axis ratio is calculated.

c) Curvature
A discriminating feature among linear straight and arborizing vessels is the vessel curvature. Curvature is defined as the rate by which the unit tangent vector of a curve changes direction [110]. To calculate the curvature, the boundary of the vessel blobs is extracted. A window is used to select points along the boundary and a polynomial is fit to each segment. Having the derivative of the polynomial, the curvature is calculated at each point on the blob boundary. From the curvature values, the maximum, mean and standard deviation are calculated for each blob and over all blobs. Figure 3.5 demonstrates the curvature calculations for two of the previous lesions.

Figure 3.5: Curvature extraction for two candidate lesions. From left to right: Original image, curvature values shown as green lines on the image boundary, magnified view, color-coded curvature values of a sample vessel segment within the image in three-dimensional view, color-coded overlay of curvature (areas with higher curvature are brighter), histogram of curvature for the selected segment.

3.2.1.2 Graph Features
In order to accurately investigate the inter-regional and structural relationships between the detected vascular pixels, a feature set is derived from the graph model of the vascular network. For this purpose, a graph is first built upon the detected vascular segments. Each vascular segment $v_i$ is converted to $n$ graph nodes based on the branching and centroid points. From the graph, density is extracted.
a) Density
A graph is defined as a set of vertices $v \in V$ and edges $E \subseteq V \times V$. The density of a graph determines how close the number of edges is to the maximal number of edges. Vascular graph density is not only a good differentiating factor between lesions with absence/presence of vascular structures, it is also a good indicator of vessel distribution throughout the lesion. Inspired by [111], vascular graph density is defined as:

$$\text{Vascular density} = \frac{2\,|E|}{|V|\,(|V| - 1)} \qquad (3.8)$$

where $|E|$ and $|V|$ denote the number of edges and vertices in the vascular graph, respectively.

3.2.2 Orientation and Distribution Feature Set
Cutaneous vasculature can have different distribution patterns.
While certain vessels are homogeneously distributed throughout the lesion (dotted vessels in most cases), other vessels demonstrate clustered, peripheral or irregular distribution. A feature set of orientation and distribution properties of lesional vasculature is calculated as follows.
a) Central Distance
In order to investigate the distribution of the vessels, the image is first aligned so that the major axis of the lesion is parallel to the x-axis. A distance measure is then calculated based on the Euclidean distance between the center point of each vessel segment and the lesion's center point. This measure helps differentiate centric vessels from peripheral and eccentric vasculature. Figure 3.6 demonstrates the locations of the centroids of each vessel segment for two of the candidate lesions and the calculation of the central distance.

Figure 3.6: Central distance calculation. The center of mass of the lesion, as well as the center of mass of each vessel segment, is calculated and the distance is derived.

b) Distribution homogeneity
A measure of homogeneity is proposed to evaluate the symmetry of vascular distribution throughout the lesion. For this purpose, the lesion is first divided into arcs of 30 degrees (π/6) and the area of the vessel segments in each bin is calculated. The ratio of vessel area in each bin to the total vessel area of the lesion is then calculated as a value in [0, 1]. If the vessels demonstrate a homogeneous distribution, each lesion slice contains almost the same degree of vascularization and therefore the standard deviation across the bins should be small. As a measure of homogeneity in distribution, the standard deviation of the bins is calculated.
c) Orientation
The orientation of each vascular segment is calculated as the angle between the x-axis and the major axis of the ellipse that has the same second moments as the vascular segment. Orientation is calculated for each vascular segment, and its max, min, mean and standard deviation over the entire lesion are added to the feature set.

3.2.3 Chromatic Features of the Vessel Segments
Vascular structures of the skin may appear in a range of colors depending on their orientation and relative depth from the skin surface. Therefore, color information of vessels can be useful in distinguishing different vascular types. The vascular color features include the mean and standard deviation of the L*, a* and b* values of the Lab color space over the segmented vascular segments.

3.2.4 General Lesion Features
Since vascular structures affect the color and texture of the lesion, lesional color and texture features are also extracted and added to the feature set. Lesion texture information consists of energy, contrast, correlation and homogeneity measures, calculated through the gray level co-occurrence matrix (GLCM) constructed over the entire lesion. Lesion color information is obtained through the mean and standard deviation of the L*, a* and b* channels of the Lab color space.
3.3 Classification
Finally, the proposed set of features is fed into a random forest classifier to assign the lesion's vascular component to one of the four classes of vascular morphology. Classification is done through the inherent multiclass classification capability of the random forest, and the performance is evaluated quantitatively; a sketch of this step is given after the next paragraph.
3.4 Experimental Results
In this section, the proposed feature set is extracted from our multi-class dataset. Results are reported for classifying the four major classes of Arborizing / Linear / Dotted / Polymorphous morphologic patterns.
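As a sketch of the classification step of Section 3.3, assuming the per-lesion feature vectors have already been assembled, a random forest with five-fold cross-validation (matching the evaluation protocol reported below) can be set up as follows; the feature matrix here is a random stand-in, and the class counts mirror the dataset described in the next subsection.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report

# X: one row of geometric / orientation / chromatic / lesion features per lesion
# y: expert label, one of the four morphologic classes
rng = np.random.default_rng(0)
X = rng.normal(size=(277, 40))      # stand-in for the extracted feature matrix
y = rng.choice(["arborizing", "linear", "dotted", "polymorphous"], size=277)

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Five-fold cross-validated predictions over the whole dataset
y_pred = cross_val_predict(clf, X, y, cv=5)
print(classification_report(y, y_pred))
```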
3.4.1 Dataset
The dataset used in this chapter consists of 100 lesions with arborizing vessels, 85 with linear vessels, 47 with dotted patterns and 45 polymorphous lesions. Images were obtained from different sources including: 1) the Atlas of dermoscopy by Argenziano [88]; 2) the University of Missouri dermoscopy set; 3) the Vancouver Skin Care Center; and 4) the Dermofit Image Library from the University of Edinburgh [112]. Images were labeled by an expert based on the pattern of vasculature. The diagnosis of these lesions was given along with the images.
3.4.2 Results and Discussion
Table 3.1 below demonstrates the weighted average performance of the multi-class classification under five-fold cross-validation. As can be seen from the table, the arborizing class yields the best classification results, whereas the polymorphous class seems to be the most difficult to identify. We believe this could be due to a number of reasons. First, among the four classes, the arborizing class has the clearest clinical definition, whereas the polymorphous, linear and dotted classes carry significant ambiguity and subjectivity in the definition of class boundaries. Therefore, even the labels provided by the experts are not entirely "clean" and consistent, which has led to a drop in the classification performance. Furthermore, the arborizing class is also the most populated class in the training dataset, which can partly contribute to its higher performance compared to the rest.
It should be noted that there are very fine clinical lines when differentiating vascular patterns, and in many cases (unless the vascular pattern of the lesion is very clear), a human expert can also mistake different patterns. Furthermore, since the proposed method relies heavily on the results of the segmentation step, any segmentation error can translate into an erroneous feature set and misclassification. In addition, some vascular classes are more frequent than others, and this class imbalance also affects the classification performance.

Table 3.1: Classification performance of the proposed method

                 Arborizing (100)   Linear (85)   Dotted (47)   Polymorphous (45)   Weighted Average
  Sensitivity    0.92               0.88          0.90          0.79                0.88
  Specificity    0.89               0.84          0.83          0.72                0.84

3.5 Conclusion
In this chapter, we investigated the pattern recognition and morphology classification problem of cutaneous blood vessels. This is a very challenging problem because not only are some of these patterns very similar, but segmentation of these structures is itself a challenging problem. We proposed a novel feature set that captures different characteristics of vessels, both individually and as a network. We demonstrated that our proposed feature set can differentiate four major pattern classes of vessels with reasonable performance. As more data becomes available, we plan to extend the technique to include more vessel types. The presented technique could potentially contribute to the quality of care as a clinical decision support system and serve as an educational tool.

Chapter 4: Analysis of Cutaneous Vasculature at Disease-level: Classification, Assessment, Monitoring
In this chapter, we investigate cutaneous vascular structures at disease-level. As discussed in Chapter one, vascular structures have significant diagnostic power in differentiating several skin conditions. Furthermore, vascularization, angiogenesis and the creation of new blood vessels have been strongly associated with the abnormal cellular growth observed in cancerous tumors.
In this chapter, we conduct a set of studies to investigate the link between cutaneous vasculature and disease diagnosis and status (our published work in [83, 113, 114]). It is reported that cancerous lesions demonstrate a higher demand for oxygen and supplies, which not only causes a higher blood perfusion through the lesion, but also leads to the creation of new blood vessels [50]. Considering the link between angiogenesis and cancerous growth, in this chapter, we carry out studies with this focus. In our first study, we investigate whether vascularization density can provide any information on the cancer progression. For this purpose, for each lesion we automatically extract vascular density and lesion size and study the correlation between the two. We expect that BCCs with bigger size (and hence more progression) demonstrate higher densities of blood vessels per lesion area.  Next, we conduct a series of studies where we design and extract clinically meaningful vascular properties towards computer-aided diagnosis of basal cell skin cancer. We tackle this problem via two different strategies: As our first strategy, we study BCC classification using handcrafted vascular features. For this purpose, based on our developed vascular segmentation method in part one chapter 2, we design and extract a set of vascular features from the lesional vessel mask to capture vascular characteristics of the lesion. We then use these features in a  98  classification framework to differentiate cancerous from benign lesions. In our first study of this series, we investigate the total blood content of the lesion (without considering detailed characteristics of the vessels) and evaluate whether the total blood perfusion through the lesion can be used to differentiate cancerous from benign lesions. For this purpose, following the same decomposition framework as chapter 2, a set of clinically inspired features and ratiometric measurements are extracted from each of the melanin and hemoglobin maps and fed into a random forest classifier to differentiate BCC from benign lesions. As previously mentioned, the presence, morphology and characteristics of cutaneous vasculature are considered as biomarkers and indicators of specific skin abnormalities. Therefore, as the next study of this series, we investigate detailed vascular caliber and morphologic properties of the lesion towards classifying BCC from other lesion types. To do so, we design and extract a clinically meaningful vascular feature set that characterizes vascular morphology and incorporate the features for BCC classification. As our second strategy, we investigate BCC classification problem using direct feature learning from image data, eliminating the need for hand-crafted feature design. The proposed method is composed of two parts. First, an unsupervised feature learning framework is proposed which attempts to learn hidden characteristics of the data including vascular patterns directly from the images. This is done through the design of a sparse autoencoder (SAE). After the unsupervised learning, we treat each of the learned kernel weights of the SAE as a filter. Convolving each filter with the lesion image yields a feature map. Feature maps are condensed to reduce the dimensionality and are further integrated with patient profile information. The overall features are then fed into a softmax classifier for BCC classification.    
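A minimal sketch of the second strategy outlined above is given below: each learned autoencoder kernel is treated as a convolution filter, the resulting feature maps are condensed by average pooling, the condensed features are concatenated with the patient profile, and a softmax-type classifier separates BCC from benign lesions. The kernels, profile encoding, pooling size and classifier shown here are illustrative stand-ins, not the implementation evaluated in this chapter.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.linear_model import LogisticRegression

def sae_feature_vector(gray_image, kernels, pool=8):
    """Convolve each learned kernel with the image, condense the feature maps
    by average pooling, and flatten everything into one feature vector."""
    feats = []
    for k in kernels:
        fmap = convolve2d(gray_image, k, mode="valid")
        h = (fmap.shape[0] // pool) * pool
        w = (fmap.shape[1] // pool) * pool
        pooled = fmap[:h, :w].reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
        feats.append(pooled.ravel())
    return np.concatenate(feats)

# Stand-ins: 16 "learned" 8x8 kernels, a 128x128 grayscale lesion image,
# and a small patient profile (e.g. age, sex, lesion site encoded numerically).
rng = np.random.default_rng(0)
kernels = [rng.normal(size=(8, 8)) for _ in range(16)]
lesion = rng.random((128, 128))
profile = np.array([55.0, 1.0, 3.0])

features = np.concatenate([sae_feature_vector(lesion, kernels), profile])

# With one such vector per lesion, a softmax / logistic classifier separates
# BCC (1) from benign (0); two toy lesions are used here for illustration only.
X = np.stack([features, features * 0.9])
y = np.array([1, 0])
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X))
```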
4.1 Case Study One: Assessment of Correlation Factor between Basal Cell Carcinoma Size and Lesions' Vascular Density
In our first case study, we investigate any correlation between the vascularization of a tumor and its progression stage. In clinical evaluation of BCC, the size of the lesion is an important factor indicating tumor progression. BCC relies on its surrounding supportive tissue to grow, causing the tumor to spread by direct extension. The size of a BCC is an important factor in disease prognosis and treatment. In clinical assessments, BCCs larger than 1 cm in size are usually of more concern than those with less than 1 cm diameter. Considering that the formation of telangiectatic blood vessels is a major characteristic of BCC, our aim is to study the possible relation between the size and the vascularization of BCC.
4.1.1 Method
Figure 4.1 demonstrates the framework of the overall approach. In order to study the correlation between vasculature and lesion size, the BCC dataset is split into two groups based on lesion size. Considering the standard risk criteria of BCC size, a size of 1 cm is selected as the threshold. The vessel segmentation technique of chapter two is implemented on each lesion and the corresponding vessel mask is extracted. We define vascular density as the vascularization per lesion area and calculate this ratio for each lesion:

$$\text{Vascular Density} = \frac{\text{Vascular Area in pixels}}{\text{Lesion Size in pixels}} \qquad (4.1)$$

Figure 4.1: Schematic diagram of the proposed approach for the BCC vasculature and size correlation study.

For each BCC group, the average and standard deviation of the vascular density is calculated. A Wilcoxon rank sum test was carried out to compare the vascular density between the two size groups.
4.1.2 Results
In this study, a set of 258 BCC lesions is used. Considering the standard risk criteria of BCC size, we categorized our BCC dataset into two groups based on size: the first group consisting of 74 lesions smaller than 1 cm and the second group consisting of 184 lesions larger than 1 cm. After extracting the vessel mask and calculating the vascular density for each group, the results showed an average of 3.5% vasculature per lesion area in the large lesion group and 1.78% in the small group. Table 4.1 summarizes the results. As demonstrated, the data shows a significant difference in vascular density between the two groups (p = 0.0018).

Table 4.1: Average vascular density of the lesions in the dataset.

  Lesion Size          Average vasculature per lesion area
  Smaller than 1 cm    1.78%
  Larger than 1 cm     3.5%

4.1.3 Conclusion
In this study, we demonstrated that in our dataset the vascular density of larger lesions is statistically significantly higher than that of smaller lesions. BCCs that had grown larger in size expressed a higher density of vascularization compared to smaller lesions. This finding could provide a foundation for building a new prognostic factor for BCC based on the lesion's vascular index.

4.2 Case Study Two: Computer-Aided Detection of Basal Cell Carcinoma through Total Blood Content Analysis in Dermoscopy Images
As mentioned previously, a fundamental characteristic of cancerous tumors is the increase of vascularization and blood flow around and within the tumor [115]. Cancerous tumors demonstrate abnormal, progressive dividing and growing behavior, which demands excess oxygen and nutrients.
4.2 Case Study Two: Computer-Aided Detection of Basal Cell Carcinoma through Total Blood Content Analysis in Dermoscopy Images

As mentioned previously, a fundamental characteristic of cancerous tumors is the increase of vascularization and blood flow around and within the tumor [115]. Cancerous tumors demonstrate abnormal, progressive dividing and growing behavior, which demands excess oxygen and nutrients. Therefore, cancer cells induce vessel growth and new vessel formation [50]. This can be used as a non-invasive biomarker towards developing early detection tools.

In this study, we investigate automated detection of BCC skin cancer based on the extraction and analysis of lesional blood content information. The proposed method builds upon our work in chapter two, through analysis and extraction of new features from skin color components. Following the decomposition framework in chapter two, independent component analysis is first used to decompose the dermoscopy image of the lesion and extract pigment and blood maps. Each of the maps is further analyzed to extract intensity, homogeneity, area and ratiometric features. Considering the excess vascularization in cancerous lesions, the features in this work are defined such that they represent the absolute and relative quantification of the lesion's blood content. The features are then fed into a random forest classifier to differentiate cancerous from benign lesions. A simplified diagram of the proposed approach is demonstrated in Figure 4.2.

Figure 4.2: Schematic diagram of the proposed approach for lesional blood content analysis. The pipeline consists of ICA decomposition of the dermoscopy image into melanin and hemoglobin maps, thresholding of each map into pigment and blood masks, extraction of absolute and relative blood area, blood mask homogeneity, blood-to-pigment area ratio, and blood map intensity and color features, followed by classification.

4.2.1 Preprocessing

All images were filtered by dullrazor [116] to remove the hair. The image data set was also scanned to discard any image containing large areas of gel bubbles, as such artifacts interfere with the decomposition algorithm and may affect the accuracy of melanin and hemoglobin reconstruction. Ground truth lesion segmentations were provided by a dermatologist.

4.2.2 Blood Content Segmentation

The first step is to extract the blood content of the lesion. For this purpose, we follow our ICA-based decomposition framework, as proposed in chapter two, to decompose the skin into melanin and hemoglobin components. As a result of the decomposition, for each lesion we generate the corresponding relative densities of melanin and hemoglobin as well as the reconstructed color channel of each component. Figure 4.3 demonstrates two sample lesions and the corresponding decomposition results.

Figure 4.3: Decomposition of dermoscopy images for hemoglobin and melanin separation. a) Two original dermoscopy images b) Relative density of hemoglobin c) Relative density of melanin d) The corresponding reconstructed hemoglobin channel e) The corresponding reconstructed melanin channel.

The obtained relative densities of melanin and hemoglobin are greyscale intensity images. The relative density images were thresholded using Otsu's adaptive thresholding to generate the total pigment and blood masks. The segmented areas were then used to calculate the total blood area. Lesion masks as provided by the dermatologist were used to extract the blood content inside the lesion. The extracted total and internal blood and pigment masks for the previous two lesions are demonstrated in Figure 4.4.

Figure 4.4: Extracted hemoglobin and melanin masks for the dermoscopy images in Figure 4.3. a) Lesion masks for the original dermoscopy images b) Blood content map c) Blood content inside the lesion d) Pigment content map e) Pigment content inside the lesion.
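As a rough illustration of this decomposition and thresholding step, the sketch below uses scikit-learn's FastICA on log-transformed RGB pixel values followed by Otsu thresholding from scikit-image. The log (optical-density-style) transform, the generic component naming and the random placeholder image are assumptions made for illustration; the actual framework follows the ICA-based skin decomposition of chapter two, and assigning the recovered components to melanin and hemoglobin requires inspection of the mixing matrix.

```python
import numpy as np
from sklearn.decomposition import FastICA
from skimage.filters import threshold_otsu

def decompose_and_mask(rgb_image: np.ndarray):
    """Sketch: ICA on log-transformed RGB pixels -> two component maps -> Otsu masks."""
    h, w, _ = rgb_image.shape
    # Optical-density-style transform of the colour values; epsilon avoids log(0).
    od = -np.log(rgb_image.astype(np.float64) / 255.0 + 1e-6)
    pixels = od.reshape(-1, 3)

    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(pixels)          # (n_pixels, 2) independent components
    maps = sources.reshape(h, w, 2)

    # NOTE: ICA does not fix which component is melanin and which is hemoglobin
    # (nor their signs); in practice the assignment is made from the mixing matrix.
    comp_1, comp_2 = maps[..., 0], maps[..., 1]
    mask_1 = comp_1 > threshold_otsu(comp_1)     # Otsu threshold -> binary mask
    mask_2 = comp_2 > threshold_otsu(comp_2)
    return (comp_1, mask_1), (comp_2, mask_2)

# Example call on a random placeholder image standing in for a dermoscopy image.
dummy = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
(map_1, m1), (map_2, m2) = decompose_and_mask(dummy)
```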
4.2.3 Feature Extraction  A set of absolute and ratiometric features were defined and extracted from the resulting pigment and blood maps and masks. For each mask, the area is calculated through summing up the actual number of pixels in the mask region. Since cancerous lesions demand excess blood flow not only the blood content inside the lesion, but also the total blood content (both inside the lesion and the surrounding normal skin) were considered. The ratio of blood content to the whole lesion and to the pigment content are also calculated. Table 4.2 demonstrates the extracted features along with their description. 4.2.4 Classification The set of twelve features in Table 4.2 are fed into a random forest classifier to differentiate between BCC and non-BCC cancer. At each iteration, a random number of features are chosen to construct the tree.   105  Table 4. 2: Blood content features. Feature Description Blood_Area Total area of the blood content of the image Blood/Lesion_Ratio Ratio of total blood content area to lesion area Blood/Pigment_Ratio Ratio of total blood content area to total pigment content area Internal_Blood _Area  Area of the blood content inside the lesion Internal_Blood/Lesion _Ratio Ratio of blood content area inside the lesion to lesion area Internal_Blood/Pigment _Ratio Ratio of blood content area inside the lesion to pigment content area inside the lesion Mean_Blood Color (3 features) Average value of each of the L, a and b channels of the Lab color space for the hemoglobin channel s.d_Blood Color (3 features) Standard deviation of each of the L, a and b channels of the Lab color space for the hemoglobin channel  4.2.5 Experimental Setup In this study, a dermoscopy dataset consisting of 295 BCC and 369 non-BCC images (including sebaceous hyperplasia, dysplastic nevi and Seborrheic keratosis) of 1024 x 768 pixels was used. The analysis was performed on a PC (Intel core i7 with 16GB RAM) and GeForce GTX NVIDIA Graphics Processor Unit using MATLAB 2015a. The random forest classifier was built using 100 trees, each working on a subset of 4 features from the entire feature set. A 10-fold cross validation was performed to obtain the performance measures. 4.2.6 Classification Performance  Table 4.3 demonstrates the classification results using the set of features from Table 4.2. An area under the ROC curve of 0.83 is achieved on a 10-fold cross validation, for differentiating BCC from non-BCC lesions.     106  Table 4. 3: BCC classification performance of the proposed method  TP RATE FP RATE PRECISION AUC BCC 0.749 0.222 0.729 0.832 Non-BCC 0.778 0.251 0.795 0.832 Weighted Average 0.765 0.236 0.766 0.832  We also evaluated the worth of each of the feature attributes by measuring the Pearson’s correlation between the features and the target class. Features are ranked based on their correlation with the diagnosis class as demonstrated in Table 4.4. As it can be seen from Table 4.4, the ratio of blood surface area inside the lesion to lesion area, along with color values of the hemoglobin channel have the highest correlation with the diagnosis of the lesion. This is aligned with the clinical characteristics of cancerous lesions confirming that cancerous lesions demonstrate a larger and more dense blood content compared to benign ones. Table 4. 
4: Feature evaluation and ranking Order Feature 1 Internal_Blood/Lesion _Ratio 2 Mean_a_value of hemoglobin channel 3 Mean_b_value of hemoglobin channel 4 Blood_Area 5 Blood/Lesion_Ratio 6 Mean_L_value of hemoglobin channel 7 Blood/Pigment_Ratio 8 Internal_Blood/Pigment _Ratio 9 s.d_a_value of hemoglobin channel 10 s.d_b_value of hemoglobin channel 11 s.d_L_value of hemoglobin channel 12 Internal_Blood _Area  107   4.2.7 Conclusion In this study, we investigated the blood contents of skin lesions in order to differentiate the cancerous versus benign tumors. Using a skin decomposition approach, melanin and hemoglobin components of the lesion were extracted and further thresholded to generate blood and pigment masks. Clinical features were defined and extracted from the maps for classification. The presented results demonstrated that lesional blood content features are capable of differentiating the lesions based on their pathology. It should be noted that in this study we only focused on the blood content and blood volume characteristics of skin lesions. In clinical practice, the diagnosis of skin lesions involves thorough examination to evaluate the lesion based on a variety of features and attributes. Considering the fact that the proposed method only considers lesion total blood content and no other lesional features (not even more detailed vascular characteristics) were taken into account, the results are very promising. Such results can be combined with other lesional attributes to improve the detection performance of the computer-aided framework as a decision support system and also for remote screening purposes.  4.3 Case Study Three: Computer-aided Detection of Basal Cell Carcinoma based on Lesions’ Vascular Properties As a follow up to the previous study that looks at total blood content and area of the lesion, in this case study, we propose a computer-aided framework for BCC classification based on detailed vascular properties. As mentioned before, BCC is the most common type of skin cancer, in which timely diagnosis plays an important role in the treatment outcome and reduced  108  morbidity. BCC is strongly associated with the presence of branching telangiectasia (which are small dilated blood vessels near the skin surface) in the lesion. In dermoscopic examination of BCC, vascular structures are key diagnostic clues [49]. However, although BCC is classified as a non-melanocytic skin cancer, there are certain types of BCC that are pigmented. Furthermore, epidermis is a pigmented layer of skin with normal amount of melanin. These pigmentations interfere with vessel visibility and make BCC diagnosis very challenging. Hence, as an application of our vascular segmentation framework, we propose a computer-assisted disease classification to differentiate BCC from benign lesions based on BCC’s unique vascular properties in addition to total blood area. 4.3.1 Vascular Features  To extract lesional vascular features, we built upon our proposed vascular segmentation method in part one chapter two. Using the vessel mask resulted from the segmentation step, we defined and extracted a set of 12 vascular features from each lesion in a dataset of BCC and non-BCC lesions. Features were designed so that they best capture the characteristics of telangiectatic vessels. 
These features include: maximum vessel length, average vessel length, standard deviation of length, maximum vessel area, average vessel area, standard deviation of vessel area, ratio of vessel area to lesion area, maximum vessel width, average vessel width, standard deviation of vessel width, number of vessel branches and ratio of branches to lesion area. Table 4.5 demonstrates the extracted vascular features. Features were then fed to a number of different classifiers (including MLP, decision tree, simple logistic) and the performance of the BCC recognition system was recorded.    109  Table 4. 5: Vascular features Feature Description Max_Length Maximum length of all vessel segments in the lesion Mean_Length Average length of all vessel segments in the lesion s.d._Length Standard deviation of length of all vessel segments in the lesion Max_Width Maximum width of all vessel segments in the lesion Mean_Width Average width of all vessel segments in the lesion s.d._Width Standard deviation of width of all vessel segments in the lesion Max_Area Maximum area of all vessel segments in the lesion Mean_Area Average area of all vessel segments in the lesion Area_Ratio Ratio of vessel area to lesion area Num_Branch Number of vascular branches Branch_Ratio Ratio of number of branches to lesion area  4.3.2 Experimental Results  The dataset used in this study consists of 659 images obtained from our three sources: 1) Atlas of dermoscopy by Argenziano; 2) The University of Missouri and 3) Vancouver Skin Care Center. The diagnosis (BCC and non-BCC) of lesions were given with the images.  The 659 images consist of 299 BCC vs 360 non-BCC lesions. Images are processed with the steps mentioned previously (ICA decomposition, K-means clustering, redness mask extraction, shape filtering, and vessel mask extraction). Twelve vascular features are extracted from the extracted vessel mask images and fed into four different classifiers: simple logistic, Naïve Bayes, MLP and random forest. The same 10-fold cross validation was performed for each of the four classifiers and random forest yielded the best performance. Therefore, classification is performed using a random forest classifier composed of 100 trees, each constructed while considering 4 random features with 10-fold cross validation. The statistics  110  of classification is demonstrated in Table 4.6 the overall weighted sensitivity and specificity for detecting BCC and non-BCC were 90.4% and 89.3%, respectively. The overall accuracy in term of AUC was 96.5%. As for the computational cost, the segmentation, feature extraction and classification were performed in less than 10 seconds, using a regular PC. Table 4. 6: BCC classification performance of the proposed method  TP Rate    FP Rate Precision AUC BCC 0.859 0.061 0.914 0.965 Non-BCC 0.939 0.141 0.898 0.965 Weighted Average 0.904 0.107 0.905 0.965  4.3.3 Discussion In this study, we investigated automated BCC classification by exploring vascular-extracted features. The proposed vascular feature set is derived on the base of vessel mask extracted by our segmentation method. To evaluate and compare the performance of our method, BCC classification was performed in different scenarios. Each scenario investigates the efficacy and performance of an aspect of the proposed method and intends to evaluate the method design. As stated before, previous vessel detection studies mostly focus on erythema detection. 
In order to evaluate the role and necessity of shape information in our design, in this scenario shape information is ignored and only color information is used to segment the vessels. As such, the erythema mask resulting from clustering of the hemoglobin channel is considered as the vessel mask. Similarly, in order to evaluate the importance of color information in the segmentation process, only shape information was taken into consideration, i.e. Frangi filters  111  were directly applied to the original lesion image and thresholded to extract blood vessels. The same twelve vascular features were extracted in each case for classification. Table 4.7 compares the classification results using only vessel color or only shape segmentation by the same Random Forest classifier.   Table 4. 7: BCC classification performance using only erythema or shape information  TP Rate FP Rate Precision AUC Erythema Method  0.631 0.382 0.629 0.667 Shape Method 0.712 0.312 0.712 0.791  As it can be seen from Tables 4.6 and 4.7, combining both color and shape information as presented in this chapter, results in better classification performance. The overall AUC accuracy improved from 66.7% (color only) and 79.1% (shape only) to 96.5% (combining color and shape). Figure 4.5 shows the ROC curves of the three cases where the presented method outperforms the other two.  Figure 4. 5:  BCC classification ROC curves for color method, shape method and the proposed combined method    112  It should be noted that in this work, in order to demonstrate the efficiency of our vessel segmentation method, we have only considered the vascular features of the lesion for classification. Moreover, no preprocessing was performed on the lesions in our study. In general, in order to provide a comprehensive diagnosis, a variety of feature categories are taken into consideration among which are patient personal profile, lesion location, lesion textural, structural and geometrical features. Table 4.8 gives a comparison of our results with previous work on BCC classification in dermoscopy where different feature categories and classifiers were used. As it can be seen from Table 4.8, the proposed method is capable of achieving comparable or superior results compared to previous methods on a larger dataset, with fewer features and using only vascular information.  Table 4. 8: Comparison with previous work   Dataset (BCC vs. Non-BCC) Number of Features Feature Categories Classifier AUC Shimizu et al. [117] 69 vs 692 25 Color, Texture, Sub-region Layered Model 0.896 Cheng et al. [118]  350 vs 350 17 Patient profile, General & specific lesion features, General exam descriptors EANN with GA 0.948 Cheng et al. [82] 59 vs 152 30 Vascular Neural Net 0.955 Proposed Method 299vs 360 12 Vascular Random Forest 0.965  113  We should also emphasize that compared to our previous study, where only total blood area information was considered, the results of this study show that specific structural and topological features of vessels are more powerful in differentiating BCC from non-BCC lesions. The obtained vascular features could be combined with other lesion information towards a more accurate diagnosis.  4.3.4 Conclusion This case study proposes a set of hand-crafted vascular features for computer-aided classification of basal cell carcinoma. Our decomposition-based vessel segmentation method is used as the base for vascular feature extraction towards BCC classification. 
Experimental results show the efficiency of the proposed method in extracting powerful vascular features towards BCC classification. Compared to our previous study, we demonstrated that specific structural vascular features are more powerful in differentiating BCC from other lesions. The results of this study show the promise for a potential tool for vasculature quantification, which can be applied in a broad range of dermatology applications.  4.4 Case Study Four: A Feature-fusion System for Basal Cell Carcinoma Detection through Data-Driven Feature Learning and Patient Profile In this study, we investigate computer-aided BCC classification problem from a different aspect. Instead of handcrafting vascular features from images, this study aims to directly learn what features to look for. The motivation for this study is discussed below. Considering the significant role of vascular patterns in diagnosis and further management of BCC, accurate detection and evaluation of cutaneous blood vessels is a critical step in early  114  BCC detection framework. However, although arborizing vessels are considered as the most characteristic vascular pattern and a biomarker in BCC, several studies have reported that other vascular patterns may also be present in BCC lesions. Micantonio et al. have demonstrated that besides arborizing vessels, fine short telangiectasia, hairpin vessels, dotted vessels, comma vessels and polymorphous patterns may also be detected in BCC lesions [119]. In a retrospective study to evaluate the presence of dermoscopic features of different BCC subtypes, Trigoni et al. found that scattered vascular pattern, atypical red vessels, arborizing vessels, comma vessels, white-red structureless areas, red dots, red globules and telangiectasias are all among vascular patterns of BCC [39]. Figure 4.6 shows a number of BCC lesions with different vascular patterns under dermoscopic examination.   Figure 4. 6: Examples of vascular structures seen in dermoscopy of BCC. Arborizing, hairpin, comma, dotted, glomerular and polymorphous vessels are seen in the BCC lesions shown.   115  This variety in the patterns of BCC vascular structures poses a major challenge for computer-aided early detection techniques. Every different variant of shape, size, color, geometrical structure, orientation and distribution of blood vessels requires different specific features and algorithms to be designed. This limits the feasibility of a comprehensive framework capable of analyzing different vessel types and requires multiple separate platforms to be designed, which in turn restricts the computational efficiency and performance of the overall framework. Furthermore, such techniques require a lot of pre-processing steps to transform the image data into feature space. This includes not only segmentation of the lesion, but also detection and segmentation of vascular structures within the lesion. Considering the variability in geometry and architecture of lesional vasculature, this can be a very challenging task. Because of such challenges, previous studies on computer-aided detection of BCC mostly focus on a single vascular pattern [64-66, 83]. Techniques from non-cutaneous domains such as retinal vasculature are also limited in terms of adaptability to skin since such variety in structure, color and patterns of blood vessels are rarely present in other domains [68, 69, 73, 120]. 
As an attempt to fill this gap, the main purpose of this chapter section is to propose an automated BCC detection framework that builds upon comprehensive vascular feature learning. In recent years, deep artificial neural networks have emerged as promising tools for detecting, categorizing and analyzing complicated patterns within image data [91, 103]. As emphasized by their name, “deep” artificial neural networks are comprised of multiple layers of processing which makes them capable of building and modeling complicated patterns and rich representations directly from image pixel intensities. The main advantage of such approaches compared to conventional machine learning techniques lies in the ability of deep  116  networks to perform unsupervised learning. Conventional techniques require a lot of effort on the design of the features to make sure that what is presented to the network for training, is in a format that allows the network to recognize the intended patterns. Such features are normally limited in terms of generalizability and can only represent a limited aspect of the data. Whereas the processing power of deep networks allows them to learn such features automatically from the data. Therefore, the concept of “feature engineering” in conventional techniques is replaced by “feature learning” in deep networks. This results in more comprehensive and robust features that better capture the natural characteristic of the data. This study proposes a computer-aided BCC detection framework that adopts the feature learning capability of deep networks for BCC diagnosis. Instead of manually designing multiple sets of pattern-specific features for every vascular type, we design a deep learning framework to learn comprehensive BCC vascular features directly from the data. Considering the significant role of patient profile information in the diagnosis process, we also propose a fusion strategy to combine both visual and clinical information for more accurate diagnosis. 4.4.1 Overview of the Method  Clinical examination of cutaneous lesions involves visual evaluation of the lesion as well as assessment of patient history and profile. In an attempt to mimic the dermatologists’ diagnosis process, the proposed BCC classification framework relies on two major sets of features: 1) Visual characteristics including vascular content from the image data 2) Patient’s clinical profile. The proposed network first attempts to learn different lesional colors and patterns of vasculature by being introduced to samples of dermoscopic data. The learned patterns would construct a basis for feature extraction. Visual features are then integrated with patient profile data as an input to the classifier for BCC detection. A simplified diagram of the  117  proposed framework is demonstrated in Figure 4.7. Details on each part of the framework are given below.  Figure 4. 7: Simplified diagram of the proposed framework for BCC classification using feature fusion.  a) Data-driven feature learning is performed through patching and SAE. b) Learned representations of SAE are treated as filters. Each filter is applied to a new set of BCC and non-BCC image set through convolution, resulting in feature maps. Feature maps are condensed through pooling. Condensed feature maps are integrated with patient profile information and then fed into the softmax classifier.  
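The patching step shown in Figure 4.7(a) can be sketched as follows. This is only an approximation: in the proposed framework patches are outlined so that they contain vascular structures and lesion architecture, whereas this hypothetical helper simply tiles an image into 32x32 RGB patches (the size reported later in the experimental setup) and flattens each into a 3072-dimensional vector.

```python
import numpy as np

def extract_patches(image: np.ndarray, patch_size: int = 32, stride: int = 32) -> np.ndarray:
    """Tile an RGB image into patches and flatten each one into a vector
    (a 32x32x3 patch becomes a 3072-dimensional autoencoder input)."""
    h, w, c = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patch = image[y:y + patch_size, x:x + patch_size, :]
            patches.append(patch.reshape(-1))             # flatten to patch_size*patch_size*c
    return np.asarray(patches, dtype=np.float64) / 255.0  # scale intensities to [0, 1]

# A 258x258 placeholder image yields 8x8 = 64 non-overlapping patches of length 3072.
dummy_image = (np.random.rand(258, 258, 3) * 255).astype(np.uint8)
X = extract_patches(dummy_image)
print(X.shape)   # (64, 3072)
```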
4.4.2 Unsupervised Vascular Kernel Learning through Sparse Autoencoder

In order to extract comprehensive vascular properties of the lesions, the first step of the proposed framework involves unsupervised data-driven feature learning. For this purpose, we extract patches of different BCC and non-BCC lesions and feed them as input to our unsupervised deep feature learning platform to directly learn features from the image data. We propose the sparse autoencoder (SAE) as our feature learning tool. As previously discussed in chapter two, a SAE is a self-replicating network that tries to regenerate its input at the output [94]. Therefore, a SAE tries to learn an approximation of the identity function. By imposing size constraints on the network, we can force the network to discover and learn a compact representation of the underlying characteristics of the image data and therefore generate a data-driven feature set based on the nature of the original data. Figure 4.8 demonstrates a sample diagram of the SAE, where a BCC patch is fed as the input and its reconstruction is generated at the output.

Figure 4.8: Schematic of the proposed SAE. The SAE attempts to regenerate its input at the output, thereby discovering and learning hidden characteristics of the underlying data.

In the training phase, the SAE tries to reproduce its input at the output by learning compressed hidden representations and characteristics of the data as the neuron weight vectors. For this purpose, a cost function is designed to minimize the difference between the input and its reconstruction, through finding optimum weight parameters in a layer-wise manner. Equation 4.2 demonstrates the cost function used in the training of the SAE:

E = \frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K}\left(x_{kn} - \hat{x}_{kn}\right)^{2} + \lambda\,\|W\|^{2} + \beta\sum_{j=1}^{D} KL\left(\rho \,\|\, \hat{\rho}_{j}\right) \qquad (4.2)

where N denotes the number of samples, K is the number of variables in the training data and W represents the weights. This cost function attempts to minimize the mean square error between the input and its reconstruction through the first right-hand-side term, while preventing the magnitudes of the weights from increasing drastically through the second term, with λ as the weight decay control parameter. We also include a sparsity restriction through the third term, which guarantees a minimal efficient model by ensuring that only a small number of neurons are activated in response to each part of the input. Sparsity helps avoid overfitting and contributes to the robustness of the model. For this purpose, we used the Kullback-Leibler divergence KL(ρ || ρ̂_j) [121], given in Equation 4.3, which measures and penalizes the difference between the average activation ρ̂_j of neuron j and the desired activation ρ in the third term of Equation 4.2. β is the sparsity control parameter and D is the number of hidden layer neurons.

KL\left(\rho \,\|\, \hat{\rho}_{j}\right) = \rho\,\log\left(\frac{\rho}{\hat{\rho}_{j}}\right) + (1-\rho)\,\log\left(\frac{1-\rho}{1-\hat{\rho}_{j}}\right) \qquad (4.3)

Since no label is required for each image patch, this is an unsupervised learning process. After the training phase, the data-driven feature set is obtained and stored (a brief code sketch of this cost function is given at the end of this subsection).
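A minimal numpy sketch of the cost in Equations 4.2 and 4.3 for a single-hidden-layer autoencoder with sigmoid units is given below, using the reported λ = 0.004 and β = 4 as defaults. The target activation ρ, the sigmoid choice and the random example data are assumptions for illustration, and the training loop (gradient computation and optimization) is omitted; the original implementation was in MATLAB.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sae_cost(X, W1, b1, W2, b2, lam=0.004, beta=4.0, rho=0.05):
    """Cost of Eq. (4.2) for a one-hidden-layer autoencoder.
    X: (N, K) patch vectors; W1: (K, D) and W2: (D, K) weights; rho: target activation."""
    N = X.shape[0]
    H = sigmoid(X @ W1 + b1)            # hidden activations, shape (N, D)
    X_hat = sigmoid(H @ W2 + b2)        # reconstruction, shape (N, K)

    mse = np.sum((X - X_hat) ** 2) / N                          # reconstruction term
    weight_decay = lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))    # L2 penalty on the weights

    rho_hat = H.mean(axis=0)                                    # average activation of each hidden unit
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)                  # numerical safety for the logs
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))  # Eq. (4.3), summed over D units
    return mse + weight_decay + beta * kl

# Tiny example with random data and weights (dimensions only match the thesis setup).
rng = np.random.default_rng(0)
K, D, N = 3072, 400, 16                 # 32x32x3 patches, 400 hidden units
X = rng.random((N, K))
W1, b1 = 0.01 * rng.standard_normal((K, D)), np.zeros(D)
W2, b2 = 0.01 * rng.standard_normal((D, K)), np.zeros(K)
print(sae_cost(X, W1, b1, W2, b2))
```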
4.4.3 Kernel Application and Feature Maps Generation

After the unsupervised learning phase, we employ the trained neuron weights of the SAE as filters or kernels. Each set of neuron weights is treated as a kernel that will be convolved with each image to build feature maps. On a new image dataset of BCC and non-BCC lesions, each kernel is applied to the whole lesion image through convolution. Convolution determines the value of each pixel by multiplying the current pixel value and its neighboring pixels by the values of the kernel matrix, as in Equation 4.4:

f_{c} = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} k_{i,j}\, f_{i,j}}{K} \qquad (4.4)

where f_c is the new central pixel value, f_{i,j} is the image pixel value at position (i, j), k_{i,j} is the kernel coefficient value at the corresponding position and K is the total sum of the kernel coefficients. As demonstrated in Figure 4.9, the kernel slides through every single pixel of the image and new pixel values are calculated. A feature map is generated as a result of each kernel application, representing how each spatial section of the image responds to the pattern of that specific filter. Therefore, each input image yields a set of m feature maps, where m is the total number of learned kernels from the SAE.

Figure 4.9: Feature map generation through convolution of the learned kernels. Left: Learned kernels slide through every single pixel of the original image through convolution. Right: For each kernel, a feature map is generated. Each feature map demonstrates how the image responds to the specific pattern of the kernel.

4.4.4 Pooling and Condensed Feature Maps

Kernel application generates a large number of feature maps to be used as input to the BCC classification framework. Since this process dramatically increases the size of the inputs to the classifier, we use a pooling strategy to reduce the dimensionality of the feature maps. For this purpose, each feature map is divided into spatial regions determined by the pool dimension. Regions are then summarized into a more compact space by averaging the values of the pixels within them. Figure 4.10 demonstrates the pooling operation implemented on a sample feature map (a brief code sketch of the convolution and pooling steps appears later in this section).

Figure 4.10: Pooling of a sample feature map through average function.

4.4.5 Feature Fusion for BCC Detection through Softmax Classifier

The resulting condensed feature maps build the visual feature set for the classification framework. For an inclusive computer-aided diagnosis, this visual feature set is integrated with patient profile information to build a comprehensive feature set. The overall feature set is then fed into a softmax classifier for classifying BCC vs. non-BCC. The softmax classifier is an extended form of logistic regression that takes the feature vectors as the input and generates a value between zero and one, interpreted as the target class probability. The feature fusion framework is demonstrated in Figure 4.11.

Figure 4.11: Schematic illustration of the feature fusion framework for BCC classification. Condensed feature maps are integrated with patient profile information and then fed into the softmax classifier.

4.4.6 Experimental Setup

The dataset used in this chapter consists of 1199 RGB dermoscopy images of BCC and non-BCC lesions, obtained from Vancouver Skin Care Center, The University of Missouri and the Atlas of Dermoscopy by Argenziano [88]. Confirmed diagnosis is given by histopathology. Lesions cover a variety of different cutaneous conditions including basal cell carcinoma, sebaceous hyperplasia, seborrheic keratosis and dysplastic nevi. 300 images were used for data-driven feature learning through the SAE and the remaining 899 images (299 BCC and 600 non-BCC) were employed for BCC classification and performance evaluation. These 899 images were further divided randomly into a training and a test set. The classification results are reported from the test set only.
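The kernel application and average pooling steps can be sketched as follows, assuming the sizes reported in this section (258x258 images, 32x32 kernels, 8x8 pooling windows). For brevity the sketch operates on a single grayscale channel and omits the normalization by the kernel-coefficient sum in Equation 4.4; the learned kernels in the actual framework are 32x32x3, and a colour version would sum the per-channel responses. Cross-correlation is used because it matches the element-wise products in Equation 4.4 (a true convolution would flip the kernel). The pooled maps are subsequently flattened and concatenated with the patient profile variables before the softmax classifier.

```python
import numpy as np
from scipy.signal import correlate2d

def feature_map(image_gray: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide one learned kernel over the image ('valid' mode gives 227x227
    for a 258x258 image and a 32x32 kernel)."""
    return correlate2d(image_gray, kernel, mode="valid")

def average_pool(fmap: np.ndarray, pool: int = 8) -> np.ndarray:
    """Condense a feature map by averaging non-overlapping pool x pool regions;
    rows and columns that do not fill a full window are cropped (227x227 -> 28x28)."""
    h, w = fmap.shape
    h2, w2 = (h // pool) * pool, (w // pool) * pool
    cropped = fmap[:h2, :w2]
    return cropped.reshape(h2 // pool, pool, w2 // pool, pool).mean(axis=(1, 3))

# Toy example: one random stand-in for a learned kernel applied to a random image.
rng = np.random.default_rng(0)
image = rng.random((258, 258))
kernel = rng.random((32, 32))
fmap = feature_map(image, kernel)        # (227, 227)
pooled = average_pool(fmap)              # (28, 28)
print(fmap.shape, pooled.shape)
```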
4.4.6.1 Training Set Preparation and Labeling:  300 images were randomly selected among the dataset for constructing the feature sets. Images were resized to 258x258 pixels to standardize across datasets. For data-driven feature learning, we extracted patches of both BCC and non-BCC images from the training image set. Patches were chosen such that they contain different structural and architectural properties of lesions. Specifically, BCC patches were outlined to include different patterns and types of BCC vascular structures. The size of the patches was set to 32x32 pixels, so that it is large enough to contain part of a vascular structure or lesion architecture while being small enough to avoid computational complexity. The final feature learning set consists of 10345 patches derived from BCC lesions (4248 patches) and non-BCC lesions (6097 patches). This set is then used in the SAE framework for data-driven unsupervised feature learning.  124  4.4.6.2 Parameter Setting: Considering that each image patch consists of three channels (R, G and B), the size of each input patch to the feature learning SAE is 32x32x3=3072. After an exhaustive search on the number of neurons of the hidden layer, the number of hidden units of SAE is chosen as 𝐷 = 400. The control parameters including regularization and sparsity factors λ and β in the SAE cost function are set to λ=0.004 and β=4. A window of 8x8 is selected as the pool size and average pooling is adopted as pooling strategy. Convolving the learned filters of the SAE with new images results in feature maps of 227x227, which are further condensed into 28x28 after pooling is implemented. 4.4.6.3 Patient Profile:  Patient profile information consists of lesion location, lesion size, lesion elevation (a binary variable indicating whether the lesion is flat or elevated) along with age and gender of the patients. Figure 4.12 demonstrates a demographic of patient profile of the dataset. This information is integrated with the visual features of the SAE and fed to the classifier.  4.4.7 Results and Discussion All implementations were performed using MATLAB 2015a on a PC (Intel core i7 with 16GB RAM) and GeForce GTX NVIDIA Graphics Processor Unit. Half of the remaining 899 images is used for training of the softmax (449 images including 149 BCC and 300 non-BCC) randomly selected from the data set and the performance is tested on the remaining 450 images. Once the network is trained and weight parameters are tuned, predicting new images is fast and is done in real-time.  125   Figure 4. 12: Patient profile demographics. In each diagram, the bars show the number of patients in each category.   The learned weights from the SAE are treated as kernel parameters and are convolved with images followed by average pooling. After convolution and average pooling, condensed feature maps are integrated with patient data and fed to the classifier for classification. Table 4.9 demonstrates the BCC classification results based on only patient profile, only SAE features and integrated SAE and patient profile. As it can be seen from Table 4.9, integrating the condensed feature maps with patient information increases the diagnosis accuracy of BCC. The BCC lesions of our dataset are mostly of the nodular type, in which the presence of blood vessels is a significant biomarker. Figure 4.13 shows the receiver operating characteristic (ROC) curve for the proposed integrated BCC classification framework.   126  Table 4. 
9: BCC classification performance of the proposed framework  Sensitivity Specificity PPV NPV Accuracy Patient Profile 0.413 0.927 0.738 0.760 0.756 Softmax (SAE features) 0.753 0.893 0.779 0. 879 0.847 Integrated (Patient profile and SAE features 0.853 0.940 0.877 0.928 0.911  Figure 4. 13: ROC curves for BCC classification based on patient profile, SAE features and integrated framework    127  There are only a few studies on automated and computer-aided BCC classification. The most recent superior performances reported in the literature on different datasets are summarized in Table 4.10, where different feature categories and classifiers were used. The reported techniques are based on design and extraction of high-level hand-crafted features (lesional and sub-lesional texture and chromatic properties along with disease-specific features such as semi-translucency and pink blush). Although an exact comparison cannot be achieved due to the differences in datasets, it can be seen from Table 4.10 that the proposed method can achieve comparable or superior results compared to previous methods by means of automatic feature learning instead of hand-crafted feature extraction. It should be noted that since hand-crafted features benefit from expert human intelligence, they yield high-quality features in terms of clinical interpretation; however, such high-level features are time-consuming to design and require expert knowledge which limits their applicability for fast initial screening purposes. Besides, these techniques are limited in terms of generalizability. That is where the major advantages of automated feature learning techniques lie. Furthermore, such hand-crafted features are only capable of capturing and representing limited aspects of data characteristics, whereas data-driven feature learning allows for a comprehensive feature representation based on the nature of the data. It is also worth mentioning that deep-learning methods require large amounts of data to demonstrate good performance. In our case, we believe that the current sample size is still relatively small for the proposed deep learning based method. However, with more data available, the proposed method is capable of learning more accurate features and demonstrating better performance.   128   Table 4. 10: Comparison with previous work  Sample Size (BCC vs. Non-BCC) Feature Categories Classifier AUC Shimizu et al. [117]  69 vs. 692 Color, Texture, Sub-region Statistics Layered Model 0.869 Kefel et al. [122]    304 vs. 720 Semi-translucency features by adaptable texture-based segmentation  Logistic Regression Analysis 0.877 Proposed Method w/o Patient data 299 vs 600 SAE feature learning Softmax 0.847 Proposed Method 299 vs. 600 Patient Profile and SAE feature learning Softmax 0.911  Figure. 4.14 demonstrates a set of feature representations that is learned by the unsupervised SAE framework. As shown in Figure 4.14, the SAE framework has learned a feature set that represents the visual properties of the lesion. For example, among the learned feature set are different shades of erythema, different spatial orientations and architectures relating to lesion vasculature (dotted pattern, curved lines, branching structures, etc). Whereas in conventional classification techniques, complex computations and algorithms need to be developed in order to define and extract such image features.  Figure 4. 
14: Visualization of some of the feature representations by SAE  129  4.4.8 Conclusion In this study, an automated BCC detection framework was presented that builds upon integrating both patient’s clinical information and lesion’s visual features. Unlike conventional computer-aided techniques where dermoscopic features need to be engineered in mathematical formulations resulting in limited applicability and extensive computational complexity, we proposed a data-driven feature learning framework based on sparse autoencoder to directly learn the hidden characteristics of the data from dermoscopy images. Visualizing the learned representations, confirmed that the learned features address a variety of different vascular patterns and lesional structures. We investigated the clinical value of the learned representations by incorporating them into a BCC classification framework and demonstrated that integrating the learned features of SAE with patient profile information can improve the BCC classification performance. We demonstrated that the proposed framework can achieve similar or superior classification performance compared to other state of the art techniques without the need for handcrafted high-level features. Since the proposed technique does not rely on high-level hand-crafted feature extraction, it can potentially be used in fast screening diagnostic settings.    130  Chapter 5: Conclusions and Future Work 5.1  Significance and Potential Application of the Research Blood vessels are considered an important biomarker in differential diagnosis of skin lesions. These structures are significantly involved in pathogenesis, diagnosis, and treatment outcome of skin abnormalities and systematic analysis of cutaneous vasculature is a major step in any clinical scenario. Our technique on cutaneous vasculature analysis have many clinical applications since previously, there has been no comprehensive approach for systematic and quantitative analysis, assessment, and pattern recognition of skin vascular structures.  Our proposed approach for detection and segmentation of blood vessels can be used as a non-invasive and systematic tool for vasculature quantification and can be applied in a broad range of dermatology applications. A major potential application for our technique is the monitoring of skin conditions both in terms of treatment efficiency and disease progress. This can be used both for remote monitoring of chronic conditions such as psoriasis, as well as remote cancer screening where high-risk patients need to be regularly monitored. These tools can also be adopted to other medical domains such as detection and monitoring of diabetic retinopathy. Our proposed feature set and strategy for pattern recognition of blood vessels can be integrated in a broader computer-aided system as a biomarker for specific skin abnormalities. Our proposed feature set and strategy for disease classification, can be used as a clinical decision-support system for diagnosis and early detection of skin cancer. Moreover, since our proposed BCC detection framework based on deep feature learning does not depend on high-level hand-crafted feature extraction, it can potentially be used in fast screening, teledermatology and remote diagnostic purposes.    131  5.2  Summary of Contributions This thesis includes extensive investigation and analysis of vascular structures of cutaneous lesions. It is the first work in the field that proposes a systematic framework for analysis of skin vasculature by means of dermoscopy. 
The main goals and contributions of the thesis can be summarized in three folds: Systematic detection, segmentation, and quantification of skin vasculature, systematic pattern and morphology analysis of skin vessels and computer-aided diagnosis/assessment based on vascular properties. To achieve that, we proposed a novel three-level framework to investigate cutaneous vasculature in the following avenues: 1) Pixel-level: to detect, segment and accurately quantify the blood vessels of skin lesions; 2) Lesion-level: to extract architectural and structural properties of the vessels and identify the lesions’ vascular pattern; 3) Disease-level: to associate lesions’ vascular properties with the diagnosis and characteristics of the disease. At pixel-level, we proposed two different frameworks to address the detection and segmentation of cutaneous vascular structures and evaluated their performance quantitatively. First, a fully automatic cutaneous blood vessel segmentation framework was proposed based on the biological properties of the skin. Using independent component analysis to decompose the skin into its constructing chromophores and accounting for shape properties, our proposed framework is the first in the field capable of segmenting different vascular structures in both pigmented and non-pigmented lesion. Compared to previous studies, we achieved a higher detection performance while demonstrating good segmentation performance when compared to annotations provided by expert dermatologists.  In order to decrease the dependency of the computer-aided framework on extensive expert domain knowledge, we developed an automated deep learning framework based on stacked  132  sparse autoencoder for detection and localization of skin vascular structures. We demonstrated that due to direct data-driven learning capacity of the proposed framework, it can well generalize to a wide variety of different vascular patterns and shapes. In comparison to different strategies such as pre-trained CNN as well as conventional supervised classification techniques, the SSAE approach appears to be superior to the conventional method based on our data set, while preserving clinical feature interpretability. At lesion-level, we tackled vascular pattern recognition problem which is a major diagnostic clue in differential diagnosis of skin conditions. We proposed a novel feature set comprising of domain-specific architectural, geometrical and graph-based features to differentiate vascular morphologies. We were the first to study vascular morphology systematically and demonstrated that the proposed feature set can effectively differentiate four major classes of cutaneous vascular patterns. At disease-level, we looked at the vascular properties of the lesion at a bigger picture, where we investigated the relationship between the vascular characteristics and diagnosis and status of the disease. We performed four sets of studies, each targeting a specific aspect of the disease. First, we investigated the lesion size with respect to vascular density and systematically demonstrated that larger lesions expressed higher vascular density. Next, we proposed a feature set that evaluates the total blood area of the lesion and deployed these features to differentiate cancerous from benign tumors. Consequently, we designed more topological features from the vascular network to distinguish BCC from benign lesions. 
Finally, we built a system upon integrating both patient’s clinical information and lesion’s visual features using deep feature learning. We demonstrated that integrating the learned features of SAE with patient profile information improves the BCC classification performance.  133  Our proposed technique can achieve similar or superior classification performance compared to other state of the art techniques without the need for handcrafted high-level features. 5.3  Directions for Future Work 5.3.1 Technical Direction 1. Multi-resolution Analysis: In this thesis, we mostly focused on dermoscopy images, which provide high magnification views of the lesion. However, in many applications, instead of high-quality high-magnification dermoscopy, wide view clinical images are acquired and recorded. Therefore, analysis of clinical images can provide significant contribution to the computer-aided framework. Therefore, a potential future extension to the work in this thesis is a computer-aided framework that can process multi-resolution, multi-magnification images. Using transfer-learning capabilities of deep networks, developing a multi-resolution-tract architecture that can learn image semantics across different multiple image resolutions, can not only improve the diagnosis accuracy of the system but also be used in broader applications of teledermatology. 2. Multi-modality Analysis: The presented work only considers dermoscopy as the imaging modality. However, in many clinical applications several imaging modalities are used to acquire different characteristics of the lesion. Each imaging modality best captures certain characteristics of the skin and therefore, combining different modalities can provide a comprehensive understanding of the nature of the disease. Therefore, a potential and very useful future work is to use transfer learning to integrate image features from multiple imaging sources. Such integration can better mimic the actual practice of a dermatologist and hence can improve the clinical performance of the computer-aided system.  134  3. Smooth Segmentation: In this work, we only employed deep learning for patch-based segmentation of the vessels. The next step of our SSAE localization approach would be to develop an end-to-end deep learning framework that can be used for pixel-wise smooth segmentation of the vessels. 5.3.2 Clinical Direction The techniques developed in this thesis provide a systematic and non-invasive tool for assessment and monitoring of a variety of skin conditions. Below we suggest a number of potential future clinical studies that can be built upon the proposed techniques.    1. Melanoma study: One of the most important factors in melanoma assessment and prognosis, is the lesion’s Breslow’s depth [54]. Assessing the depth of penetration for a melanoma before the operation and lesion excision, could have an important impact on the accuracy of the operation, as well as the direction of treatment. Therefore, a future direction of our work is to use and expand our techniques to correlate and find a relationship between the degree, density, sub-type and pattern of vascularity of a melanoma with its depth of invasion as defined by the Breslow’s depth aiming for a pre-operative melanoma thickness estimation.  2. Melasma Study: Melasma is an acquired discoloration of the skin, characterized by irregular brown patches mostly on the face, with a prevalence as high as 75% among pregnant women, causing a huge negative impact on patients’ quality of life [123]. 
Melasma is frustrating for both patients and physicians due to the lack of effective treatment. The pathogenesis of melasma is still unknown and due to cosmetic concerns, taking skin biopsies is impractical. Hence non-invasive techniques are the main method to study this disease. Melasma is believed to primarily be a pigmentary disorder; however,  135  evidence suggest that abnormal cutaneous blood vessels may be involved in the pathogenesis of melasma [124]. The techniques developed in this thesis can be used as non-invasive optical assessment procedure to investigate the vascular characteristics of melasma lesions, the underlying relationship between melasma vasculature and pigmentation, and the role of these factors in the visual appearance of the disease, thereby suggesting appropriate treatment. 3. Port-wine stain study: Port-wine stain (PWS) is the most common type of vascular malformation usually occurring in the face and neck with laser therapy recognized as the most common treatment. Although studies have associated several factors with the treatment response, the treatment outcome remains subjective and unpredictable between individuals and even on multiple sites on the same patient [125]. Therefore, an interesting future direction would be to deploy and extend the techniques in this thesis for objective assessment framework for port-wine stains by vascular quantification and measurement.            136  Bibliography 1. Kalia, S. and M.L. Haiducu, The burden of skin disease in the United States and Canada. Dermatol Clin, 2012. 30(1): p. 5-18, vii. 2. Lim, H.W., et al., The burden of skin disease in the United States. Journal of the American Academy of Dermatology. 76(5): p. 958-972.e2. 3. Thorpe, K.E., C.S. Florence, and P. Joski, Which medical conditions account for the rise in health care spending? Health Aff (Millwood), 2004. Suppl Web Exclusives: p. W4-437-45. 4. International Skin Imaging Collaboration archive. 2016. 5. Dermatology Atlas.  [cited 2018 February 4]; Available from: http://www.atlasdermatologico.com.br/. 6. Lazar, A.J.F.a.M., G. F. , The Skin, in Robbins and Cotran Pathologic Basis of Disease. 2015, Elsevier. p. 1141-1178. 7. contributors, W.C., File:Skin layers.svg. Wikimedia Commons, the free media repository. 8. Gawkrodger, D.J. and M.R. Ardern-Jones, Dermatology : an illustrated colour text. 2012, Edinburgh; Toronto: Churchill Livingstone. 9. Häggström, M., Bensmith, W., Epidermal layers. Wikimedia Commons, the free media repository. 10. Fodor, L., Elman, M., Ullmann, Y., Light Tissue Interactions, in Aesthetic Applications of Intense Pulsed Light. 2011, Springer-Verlag London. 11. Anderson, R.R. and J.A. Parrish, The optics of human skin. J Invest Dermatol, 1981. 77(1): p. 13-9. 12. About Skin Cancer.  [cited 2018 February 4]; Available from: http://www.canadianskincancerfoundation.com/about-skin-cancer.html. 13. Cancer, C.P.A., The Economic Burden of Skin Cancer in Canada: Current and Projected.  Final Report: CPAC. 14. Cancer Facts and Figures. 2017  [cited 2018 February 4]; Available from: http://www.cancer.org/acs/groups/content/@editorial/documents/document/acspc-048738.pdf. 15. Oakley, A. Dermoscopy course. 2011  [cited 2018 February 4]; Available from: https://www.dermnetnz.org/cme/dermoscopy-course/introduction-to-dermoscopy/. 16. Clark, W.H., Jr., et al., The developmental biology of primary human malignant melanomas. Semin Oncol, 1975. 2(2): p. 83-103. 17. McColl. , I. 