UBC Theses and Dissertations

Digitization and analysis of mammographic images for early detection of breast cancer. Aghdasi, Farzin, 1994.


Full Text

DIGITIZATION AND ANALYSIS OF MAMMOGRAPHIC IMAGES FOR EARLY DETECTION OF BREAST CANCER

By

Farzin Aghdasi

B.Sc. (Eng. Hon.), Imperial College of Science and Technology, University of London, London, U.K., 1977
M.B.A., University of Portland, Portland, Oregon, U.S.A., 1978

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, ELECTRICAL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
December 1994
© Farzin Aghdasi, 1994

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Electrical Engineering
The University of British Columbia
Vancouver, Canada

Abstract

X-ray mammography is the proven method for early detection of breast cancer. Digital processing and analysis of mammographic images can potentially assist in improved performance of radiologists in earlier detection and recognition of abnormalities.

In this work a novel image acquisition system based on an area scanning CCD array has been developed for the digitization of mammograms at high spatial and photometric resolutions. The system characteristic parameters were measured. The quality of the resulting images in terms of sharpness and noise content is comparable with that obtained by the more expensive and slower drum laser-scanning microdensitometer.
The clinical application of soft-copy display of digitized images is evaluated.

To further improve the quality of the images, restoration algorithms were applied to free the images from the degrading effects of the system's blur and noise. The performance of three filtering techniques was compared. A new method for the reduction of boundary truncation artifacts in image restoration was proposed and studied.

The process of radiographic image formation was modeled, and two locally adaptive smoothing filters were employed to counter signal-dependent radiographic noise before application of the restoration filters. The restored images show a marked improvement in the detectability of the smallest particles of microcalcifications when judged by a human observer.

Image segmentation routines were developed to separate microcalcifications from the background parenchymal pattern. The performances of two algorithmic approaches to segmentation and two artificial neural networks were compared. Over 100 numerical features were automatically extracted from the clusters of microcalcifications. These features were evaluated for their ability to separate benign and malignant formations. Using a database of 68 digitized mammograms, a discriminant function was calculated.
The sensitivity and specificity of this approach in the recognition of malignant microcalcification clusters is shown to be comparable to that of trained radiologists.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgment

1 Introduction
  1.1 Motivation
  1.2 Human Female Breast
  1.3 Breast Cancer
  1.4 Mammographic Presentation
  1.5 Objectives
  1.6 The Structure of This Thesis
  1.7 Claims to originality

2 State of the Art in Digital Mammogram Processing and Analysis
  2.1 Image enhancement
  2.2 Risk assessment
  2.3 Automated detection of abnormalities
  2.4 Differential Diagnosis

3 Radiographic Film Digitization
  3.1 Image Acquisition Hardware
    3.1.1 Microdensitometers
    3.1.2 Video Cameras
    3.1.3 CCD Sensors
  3.2 Performance Characteristics
  3.3 Noise Reduction
  3.4 Comparison of the Three Film Digitization Systems
  3.5 Image Acquisition at Multiple Scales
  3.6 The Software Environment
    3.6.1 MAMPRO
  3.7 Mammogram Data Base

4 Clinical Evaluation of the Film Digitization System
  4.1 Working Hypothesis
  4.2 Materials and Methods
  4.3 Results
  4.4 Discussion

5 Image Restoration
  5.1 Problem Description
  5.2 Image Processing
  5.3 Image Restoration
  5.4 The System Transfer Function
  5.5 The Wiener Filter
    5.5.1 Iterative Restoration
  5.6 The Constrained Least Squares Filter
  5.7 Results

6 Restoration in the Presence of Signal-Dependent Noise
  6.1 Image Formation System
    6.1.1 The X-ray Screen-Film Imaging System
    6.1.2 The CCD Camera
  6.2 Image Observation Model
  6.3 Image Restoration
    6.3.1 Local adaptive Wiener smoothing filter
    6.3.2 Bayesian smoothing filter
    6.3.3 The Deconvolution Filter
  6.4 Results
  6.5 Summary

7 Reduction of Boundary Artifacts in Image Restoration
  7.1 Image Restoration Artifacts
  7.2 Problem description
    7.2.1 Analysis of errors in using the conventional linear de-convolution
    7.2.2 Restoration by circular de-convolution
  7.3 Restoration by image extension and periodic convolution
    7.3.1 Computational Load
  7.4 Experiments

8 Segmentation
  8.1 Image Analysis
  8.2 Detection and Segmentation of Microcalcification Clusters
  8.3 Local Histogram Thresholding Algorithm
    8.3.1 Object Labeling
  8.4 Edge Detection and Region Growing Algorithm
  8.5 Neural Network Techniques
    8.5.1 The Modified Hopfield Network
    8.5.2 The Feed Forward Neural Network
  8.6 Results and Discussions
  8.7 Summary

9 Classification
  9.1 Data Base
  9.2 Features
  9.3 Analysis
  9.4 Results
  9.5 Summary

10 Conclusions and recommendations
  10.1 Objectives
  10.2 Summary of the work
  10.3 Suggestions for future work

Bibliography

A Mammography
  A.1 Mammographic Image Formation System
  A.2 Sources of Image Blur in Mammography
    A.2.1 Motion Blur
    A.2.2 Geometric Distortion
    A.2.3 The Radiographic Receptor
  A.3 The System Characteristic Parameters
    A.3.1 The Modulation Transfer Function
    A.3.2 Contrast
    A.3.3 Noise
    A.3.4 The X-ray Scatter
    A.3.5 Digitization
  A.4 Mammographic Image Interpretation
  A.5 Radiographically Occult Cancers

B Questionnaire for Clinical Evaluation of Images

C Features
  C.1 Features from Individual Microcalcifications
  C.2 Features from Microcalcification Clusters

List of Tables

3.1 Main menu functions in the software package MAMPRO
3.2 Image restoration functions in the software package MAMPRO
3.3 Other image processing functions in the software package MAMPRO
4.1 Confusion matrix for benign (B) vs. malignant (M) classification by conventional (CMM) and digital (DMM) magnification mammography
4.2 Confusion matrix for benign (B) vs. malignant (M) classification by conventional (CMM) and digital (DMM) magnification mammography; ground truth is radiologists' majority opinion
4.3 Inter-observer (dis)agreements for benign vs. malignant classification in conventional magnification mammography
4.4 Comparison of classification by conventional (CMM) and digital (DMM) magnification mammography for each observer
4.5 Comparison of image features by conventional (CMM) and digital (DMM) magnification mammography for the first observer
5.1 Comparison of restoration filter performances
6.1 Filter performance factors
8.1 Classification matrix for algorithm 1
8.2 Classification matrix for algorithm 2
8.3 Classification matrix for Perceptron
9.1 Discriminant power of features
9.2 Classification performance of computer algorithms on training images
9.3 Jackknife classification performance of computer algorithms
A.1 Radiographic characteristics of breast masses

List of Figures

3.1 The image acquisition system
3.2 The image acquisition hardware block diagram based on a linear CCD
3.3 Spatial distribution of illumination in the light box
3.4 The image acquisition hardware block diagram based on a 2-D CCD
3.5 System Contrast Transfer Function
3.6 System Modulation Transfer Function: (a) in the two frequency dimensions ωx and ωy; (b) a one-dimensional slice along ωx
3.7 Modulation Transfer Function of the sampling aperture of the three digitizers compared with the MTF of the screen-film combination: (a) 2-D CCD; (b) linear CCD; (c) Matrix laser scanner in high resolution mode; (d) Kodak Min-R Ortho M screen-film
3.8 Noise standard deviation for (a) the screen-film scanned by a 70 µm aperture; (b) Matrix laser scanner; (c) one frame of the 2-D CCD; (d) average of 40 frames of the 2-D CCD with correction for fixed pattern noise
5.1 The digital image formation model
5.2 The Modulation Transfer Function of (a) the system; and the restoration filter with (b) noise-to-signal power ratio α = 0.01; and (c) α = 0.001; (d) and (e) show the improvement in MTF due to the application of the restoration filters Hr corresponding to (b) and (c)
5.3 Restoration of test images: f is the latent image, fb is the blurred image, g and g1 are the observed images; all other images are the results of restoration by various filters
5.4 A portion of a standard test phantom imaged at a far distance near the resolution limit of the camera. Results of restoration by the two Wiener filters: Wiener1 with NSR=.01 and Wiener2 with NSR=.001, and enhancement by 3x3 convolutional masks sharp1 and sharp2 as defined in the text
5.5 Restoration of a mammogram: (a) the observed image; (b) restored image using the Wiener filter (with NSR=.001) compensating for the camera MTF; (c) restored image using the Wiener filter (with NSR=.001) compensating for the combined system MTF
6.1 The radiographic image formation model
6.2 The Modulation Transfer Functions
6.3 The digitizing camera image formation model
6.4 A model of radiographic image formation
6.5 A simplified model of radiographic image formation
6.6 Generation of the noise source n2
6.7 Generation of the noise source n1
6.8 Adaptive smoothing and deconvolution of a digitized mammogram: (a) observed image, (b) smoothed by LLMMSE filter, (c) smoothed by MAP filter, (d,e,f) result of Wiener deconvolution of images (a,b,c), respectively
7.1 The image degradation and restoration model in the absence of noise
7.2 Modulation Transfer Function of the digitizing camera
7.3 Restoration of a step edge acquired by a digitizing photographic camera: (a) the observed image of the step edge; (b) the restored image using linear deconvolution; notice the boundary truncation artifact; (c) the restored image of the step edge, where a trapezoidal window was applied to the data prior to its deconvolution; (d) the restored image of the step edge using the proposed approach
7.4 Extension of the image by mirroring in the x and y directions
7.5 The extended signal under the conditions necessary for exact recovery
7.6 Spatial support of the boundary artifacts around the margins of the image for the case of L < M
7.7 Restoration of the blurred image of Bayan: (a) original; (b) a 64x64 section cut from the original larger image; (c) a 64x64 section of the original image blurred by the out-of-focus model of the digitizing camera; (d) result of restoring by linear deconvolution; (e) result of restoring by linear deconvolution after windowing the data by a trapezoidal window; (f) result of restoring using image extension and circular deconvolution; (g) the difference image of (b) and (c); (h) the difference image of (b) and (d); (i) the difference image of (b) and (f)
8.1 Segmentation using the Local Histogram Thresholding algorithm
8.2 The subimage grid displacement
8.3 Effects of area and gradient tests on segmentation
8.4 Examples of segmentation using the Local Histogram Thresholding algorithm
8.5 Comparison of two segmentation algorithms
A.1 X-ray mammography system
A.2 The heel effect
A.3 MTF curves for three Kodak screen-film combinations [131]
A.4 Variation of contrast with photon energy [126]
A.5 Screen-film characteristic curves
A.6 Noise power spectrum: film noise (solid line); quantum noise (dashed line); and total noise [130]
A.7 Ratio of scattered to primary radiation as a function of radiation field [128]
A.8 Contrast sensitivity of the eye [126]

Acknowledgment

I wish to express my gratitude to my research supervisor Dr. Rabab K. Ward for her exemplary and continued support, care and guidance. I owe Dr. Branko Palcic, Professor of Pathology and head of Cancer Imaging at the British Columbia Cancer Research Center, in whose laboratories I spent the last five years, a personal debt of gratitude for his unbounded enthusiasm and foresight throughout the course of this research. His generous financial support is also most gratefully acknowledged. This work would not have been possible without the constant help and encouragement of Dr. Jacqueline Morgan-Parkes, professor of radiology at the British Columbia Cancer Agency.
In all the clinical aspects of this work her collaboration has been indispensable. I wish to acknowledge the support of Cancer Imaging, British Columbia Cancer Agency, and Xillix Technologies Corporation of Vancouver, BC. In particular, thanks are due to chief engineer Mr. Bruno Jaggi, P.Eng., without whose support the scope of this work would have been much limited. I am indebted to my friends and colleagues Dr. Calum MacAulay, Mr. Steven Poon, P.Eng., Mr. Brian Pontifex, P.Eng., Mr. John Fengler, P.Eng., and Mr. Daniel Nesbit for helpful discussions and for the use of hardware and software tools that they had designed. Thanks are also due to many other staff of Cancer Imaging and Xillix for their technical support. The moral support and encouragement of other friends, too many to be mentioned by name here, will always be remembered.

Special thanks to my wife Fariba for her patience, love, faith, and helpful ideas, and to my son Bayan, much of whose companionship has been sacrificed for the completion of this study; to them I dedicate this work.

This study was supported financially by the Canadian Commonwealth Scholarship, the University Graduate Fellowship, the British Columbia Cancer Agency and Xillix Technologies Corporation.

Chapter 1

Introduction

1.1 Motivation

For women in the developed countries, breast cancer is the most frequently diagnosed cancer and a leading cause of cancer deaths. Over 10 percent of all women in North America will develop breast cancer during their lifetime [1]. In Canada [2] breast cancer accounts for approximately 30% of all new cancer cases and for 20% of cancer deaths in women. In British Columbia the incidence rate is about 105 women in 100,000, with a mortality rate of 32 in 100,000 [3]. Breast cancer is therefore a major health threat to women, of epidemic proportions.

The probability of success in curing breast cancer is directly related to the stage at which it is detected.
The earlier the detection, the higher are the chances of a successful treatment. Breast-preserving surgical therapy is only possible at the early stages of the disease.

There are several techniques for the detection of asymptomatic breast cancer. These include regular breast self-examination, clinical examination by a physician, and imaging of the breast. Clinical examination of the breast can detect tumors, but often only once they are bigger than 1 or 2 centimeters in diameter. By then they may have spread to the axillary lymph nodes, requiring systemic as well as local treatment, and, more importantly, the patients may not be cured. Imaging the breast has therefore emerged as an indispensable tool for early detection.

Several types of breast imaging have been investigated by researchers. These include thermography, diaphanography, ultrasonography and mammography.

Thermography is based on the hypothesis that cancerous cells are more active than normal cells. A heat-sensing device is used to detect "hot spots". The clinical reliability of this method is very low and therefore it is rarely used.

In diaphanography the breast is transilluminated with strong light in the visible spectrum. This method is generally thought to be of limited usefulness.

Ultrasonic imaging technology is emerging as a useful tool, though as yet it lacks the necessary resolution. It is most effective in differentiating cystic from solid masses. Magnetic resonance, impedance measurements and several other imaging modalities are also currently under investigation.

X-ray mammography holds the greatest promise of effective early detection of breast cancer. Special low-dose X-ray equipment has been developed, reducing the danger of radiation-induced cancer to negligible levels.

It has been well documented that by screening postmenopausal women using X-ray mammography, the mortality rate can be reduced significantly [4].
Well over one quarter of a million women in the USA have participated in a "Breast Cancer Detection Demonstration Project", which showed a reduced mortality rate for the screened population [4]. Of equal importance is the fact that detecting the cancer at the carcinoma in situ stage, when treatment with minimal surgery followed by ionizing radiation is still possible, results in a much higher quality of life for the patient.

Mass screening of asymptomatic women, however, will generate a vast number of mammograms. This overload is sufficiently large to place a thorough screening program beyond the means of most communities. A helper tool such as a computerized prescreening device, which could aid radiologists in the detection and recognition of subtle signs of abnormality, would greatly speed up this process and make it more economical. Additionally, digital analysis of mammographic images can assist in more objective and therefore more accurate interpretation and diagnosis.

1.2 Human Female Breast

The breast or mammary gland is a modified sweat gland that has the specific function of milk production. An understanding of its basic anatomy, physiology and histology is important in the interpretation of mammography.

Briefly, the adult breast is composed of three basic structures: the skin, the subcutaneous fat, and the breast tissue, which includes the parenchyma and the stroma. The stroma is composed of fat and other connective tissues. The parenchyma is divided into 15 to 20 segments, each drained by a lactiferous duct. The ducts converge beneath the nipple, with 5 to 10 major ducts draining into the nipple. Each duct drains a lobe composed of 20 to 40 lobules. The composition of the breast changes with the life cycle and is a function of both diet and hormonal variation [5].

1.3 Breast Cancer

Although the origin and causes of breast cancer are not yet known, it has been established that early detection is the most effective strategy in its management.
There are over eighty histologically different diseases of the breast, covering the spectrum from mild benign cases to highly aggressive invasive malignancies. There are great differences in the appearance of these abnormalities on a mammogram, with many similarities between the benign and malignant cases. Additionally, the mammographic appearance of normal breasts is quite variable. Many small breast cancers are non-palpable, and some are also mammographically occult. Differential diagnosis of breast cancer is therefore both important and difficult.

1.4 Mammographic Presentation

X-ray mammography is used for two purposes: as a diagnostic tool, and as a mass screening method. In screening centers mammography is used to identify suspicious cases, while as a diagnostic tool it is used for the detection of abnormalities, the recognition of malignant tumors, and the preoperative localization of such tumors.

The increased use of mammography has resulted in an increased incidence of detection of smaller lesions. In British Columbia, for example, 15-20% of all cancers detected at screening are found in the carcinoma in situ stage. The normal presentation before mammography screening was 2-3%. Early detection of pathological tissues depends on the mammographic image quality.

Radiologists typically scan mammograms for any signs of abnormality, which may include 1) clusters of microcalcifications, 2) well-defined and rounded or ill-defined and spiculated masses, and 3) other signs such as skin thickening or architectural distortion of the breast. Radiologists usually compare a mammogram with other views of the same breast, with those of the other breast, or with previous mammograms of the patient to check for asymmetries or developing densities. The radiographic images are quite complex.
Despite a highly evolved human visual system, a radiologist requires many years of training to detect subtle abnormalities in a complex parenchymal pattern.

To aid radiologists in this complex task, researchers have reported different image and vision processing techniques. A summary of this work is given in chapter 2.

Despite these efforts there is much room for improvement, and this remains an active field of research. Computer vision methods have in fact been used successfully in another area of cancer imaging, namely that of cell classification in cervical cancer screening programs [6].

In digital mammography the efforts in applications of digital image processing have so far concentrated on image enhancement for visual interpretation by a radiologist. Image analysis for detection and classification has also been reported. Two important aspects, however, have not received attention. The first is that in the process of X-ray image formation the latent image suffers degradation due to the system blur and noise. The process of digitization of the image further contributes to both the blur and the noise. The first step should therefore be the restoration of the image from these effects. The second aspect is the need for accurate and rapid digitization of the film images if any image processing algorithm is to be used in a clinical setting. In this work these two issues are addressed before the application of pattern recognition techniques for the detection and recognition of mammographic abnormalities.

1.5 Objectives

The objective of this study is to develop a system to assist in the detection and recognition of microcalcifications, which are frequently present in a mammogram. The most subtle signs of abnormality are not visible in the digitized mammograms, and may not be visible in the original films, because of the noise and blur in the image. These are due to the X-ray machine, the camera and the digitizing equipment.
If the characteristics of the noise and the different blur functions are known or obtained, then these effects may be removed or reduced by image restoration techniques. Different image enhancement techniques have been reported in the literature, but image restoration has not yet been applied to mammographic images. While image enhancement has its own advantages, such as removing noise and highlighting some features, it does not compensate for the blur. Thus it would be of great value to restore the image to aid radiologists in their analysis, prior to applying the automatic edge detection, feature extraction and classification routines.

Following image acquisition and restoration, segmentation and recognition of microcalcification clusters will be developed and evaluated.

1.6 The Structure of This Thesis

Chapter 2 contains a review of the state of the art in the processing and analysis of digitized mammograms.

In chapter 3, the hardware and software development of a system for the digitization of mammograms using a Charge Coupled Device (CCD) sensor is described. The acquisition system characteristics are reported in terms of its spatial and photometric response. The Modulation Transfer Function (MTF) of the system is calculated from its Square Wave Response Function (SWRF) and also from its Edge Spread Function (ESF) in both the spatial and frequency domains. It is shown that the combined effects of fixed pattern and random noise can be reduced to within round-off noise, having a variance of 0.25 gray levels over the whole image.

In chapter 4, the clinical evaluation of the film digitization system is reported.
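The MTF-from-ESF measurement described for chapter 3 above can be sketched numerically: differentiate the edge spread function to obtain the line spread function, window it to limit truncation ripple, and take the magnitude of its discrete Fourier transform. The sketch below uses a synthetic Gaussian-blurred edge rather than the thesis's measured data; the function name and parameters are illustrative.

```python
import numpy as np
from math import erf

def mtf_from_esf(esf, dx):
    """Estimate a system MTF from a sampled edge spread function.

    esf: 1-D samples across a sharp edge; dx: sample spacing (e.g. mm).
    Returns (spatial frequencies, MTF normalized to unity at DC).
    """
    lsf = np.gradient(esf, dx)          # line spread function = d(ESF)/dx
    lsf = lsf * np.hanning(lsf.size)    # taper to limit truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                       # normalize at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=dx)
    return freqs, mtf

if __name__ == "__main__":
    # Synthetic edge blurred by a Gaussian of std 0.05 mm, sampled every 0.01 mm.
    dx, sigma = 0.01, 0.05
    x = np.arange(-2.0, 2.0, dx)
    esf = np.array([0.5 * (1.0 + erf(xi / (sigma * np.sqrt(2.0)))) for xi in x])
    freqs, mtf = mtf_from_esf(esf, dx)
    # For Gaussian blur the true MTF is exp(-2*(pi*sigma*f)^2), so the
    # estimate can be checked against the analytic curve at any frequency.
    print(freqs[10], mtf[10])
```

For a real digitizer the edge would come from imaging a sharp opaque edge on the light box; the same pipeline applies, with averaging across rows to suppress noise before differentiation.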
It will be shown that in most cases radiologists can extract essentially the same diagnostic information from electronically magnified images as from the magnification mammography procedure.

In chapter 5, the effects of image restoration on the digitized mammograms are considered. Two simple Wiener image restoration filters, which counter the high-frequency attenuation due to the MTF and sharpen the mammographic image details, are tested. The result of these procedures is the removal, to a great extent, of the blurring effects of the different components of the image formation system and of the generated noise. It has been postulated that this image restoration will aid in the quantitative analysis of mammographic images and assist the radiologists in improving their diagnosis.

Chapter 6 extends the restoration of mammograms to include the effects of radiographic signal-dependent noise. To accomplish this a comprehensive image formation model is presented.

In chapter 7, a novel method of carrying out filtering in the frequency domain using image extension and circular convolution is presented. This approach eliminates much of the boundary artifacts associated with linear, frequency-domain restoration of truncated images. We also present a mathematical analysis of this technique.

Chapter 8 presents algorithms for the automated segmentation of microcalcifications from the background parenchyma of the breast.

Chapter 9 describes the extraction of over 100 quantitative features from the segmented clusters of microcalcifications and their application in the classification of mammographic abnormalities.

Finally, chapter 10 presents conclusions and suggestions for further research.

The appendix gives many details and physical principles of mammographic image formation and interpretation.

1.7 Claims to originality

Materials presented in chapter 2 are gleaned from the relevant published literature.

CCD cameras have been used by others to digitize X-ray films.
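The Wiener filtering of chapter 5 and the mirror-extension idea of chapter 7, both summarized above, can be sketched together in a few lines: mirror-extend the observed image so that the circular convolution implied by the FFT wraps onto matching data, then apply the Wiener estimate in the frequency domain. The Gaussian blur model and the constants below are illustrative, not the thesis's measured system MTF or filter settings.

```python
import numpy as np

def gaussian_otf(shape, sigma):
    """Frequency response (OTF) of a Gaussian blur with std `sigma` pixels."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2.0 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))

def wiener_restore(g, sigma, nsr):
    """Wiener deconvolution after even-symmetric (mirror) image extension.

    Extending the image before the FFT makes the implicit circular
    convolution wrap onto mirrored data, suppressing boundary artifacts.
    """
    M, N = g.shape
    ge = np.pad(g, ((0, M), (0, N)), mode="symmetric")  # 2M x 2N mirrored image
    H = gaussian_otf(ge.shape, sigma)
    G = np.fft.fft2(ge)
    F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)         # Wiener estimate
    return np.real(np.fft.ifft2(F))[:M, :N]             # crop to original size

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = np.zeros((64, 64))
    f[24:40, 24:40] = 1.0                               # toy high-contrast detail
    g = np.real(np.fft.ifft2(np.fft.fft2(f) * gaussian_otf(f.shape, 2.0)))
    g = g + 0.001 * rng.standard_normal(g.shape)        # mild observation noise
    fhat = wiener_restore(g, 2.0, 1e-3)
    # Restoration should reduce the mean absolute error against the latent image.
    print(np.mean(np.abs(g - f)), np.mean(np.abs(fhat - f)))
```

The noise-to-signal ratio `nsr` plays the role of the constant α in the thesis's filter descriptions: larger values suppress noise amplification at the cost of residual blur.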
The system developed in chapter 3, however, is novel in construction and in many of its characteristics, as described in the body of this thesis. The software package MAMPRO is a new research tool that contains all of the algorithms developed in this work.

The clinical investigations reported in chapter 4 are novel.

The restoration filters employed in chapter 5 are well known, but their application to digitized mammographic images is new.

The signal-dependent restoration filters described in chapter 6 are known. Their derivation for the radiographic noise is new, as are the radiographic image formation models.

The image extension technique presented in chapter 7 has been reported in the literature in the context of spatial domain filtering. The analysis of its effects in frequency domain filtering for restoration problems, however, is new.

The first segmentation algorithm described in chapter 8 is taken from the literature. The next two algorithms were developed by me for this work.

Most of the features used in chapter 9 are reported in the literature in other contexts. The evaluation of their utility in the classification of mammograms is new. The database of digitized mammograms and the resulting multivariate discriminant functions are also new.

At present, no practical (commercial) system exists which provides radiologists with means for accurate viewing, interactive manipulation, or objective (automated and quantitative) analysis of mammograms. The radiological workstation described here forms the basis for such a system. It provides a helper tool (and a second opinion) to aid radiologists in their task of primary diagnosis of mammograms.

Chapter 2

State of the Art in Digital Mammogram Processing and Analysis

Most early efforts in the digital analysis of mammograms concentrated on the processing of xeroradiographs or poor quality radiographs, or relied on manual measurement of features.
Since screen-film mammography has only recently produced acceptable radiographic image quality, we will give a partial list of published work in this area since 1980. This is a very brief chapter that merely points to the relevant literature in the field; details of the published studies that have a direct bearing on this work are given in the subsequent chapters. Different attempts to digitally process mammograms have had different goals in mind. These can be divided into the following four groups:

2.1 Image enhancement

Digital image enhancement techniques either employ global manipulation of grey levels or locally adapt such manipulations to image features. The input image is modified using a set of usually heuristic rules to enhance the visibility of certain desired features [7]. In one approach a linear combination of smoothed images and a non-linear contrast transformation is used to obtain enhancement [8]. Alternatively, a global estimate of the background breast structure is employed to bring out pathologic abnormalities [9]. Image neighborhoods that are locally adapted to the spatial extent of image features may be selected; in this way enhancement techniques such as contrast manipulation, histogram equalization, etc. respond to the local image detail [10, 11, 12, 13, 15]. Finally, a non-linear mapping is used to encode the image gray levels in an attempt to equalize the system noise, which is signal-dependent [14].

In other approaches to the smoothing of mammographic images, adaptive order statistic filters and tree-structured wavelet transforms are employed [16, 17]. These filters operate on the mammogram at multiple resolutions and therefore are potentially useful in preserving image features such as edges while smoothing the image [18].

2.2 Risk assessment

It has been suggested that a woman's risk of developing breast cancer can be determined by the pattern of parenchymal densities on the mammogram [133].
The goal of computer-aided risk assessment is to correlate the mammographic parenchymal pattern with the risk of developing breast cancer using objective and repeatable measures, commonly based on texture [19, 20, 21] or density [22, 23].

2.3 Automated detection of abnormalities

Two types of abnormalities have been investigated, namely microcalcifications and masses. Most detection schemes enhance the conspicuity of abnormalities as an intermediate stage, before selecting candidate pixels belonging to abnormalities. The classical method of unsharp masking for the detection of abnormalities is evaluated in [24].

To detect the presence of microcalcifications, linear filtering techniques have been employed. Researchers at the University of Chicago have used matched filtering to enhance the signal, while a box-rim filter is employed to suppress the signal. The presence of microcalcifications is detected from the difference of these signal-enhanced and signal-suppressed images [25, 26, 27, 28, 29]. Neural network techniques are then employed to reduce the number of false positive detections [30, 31, 32].

Many other techniques have been used for the extraction of microcalcification images from the background. These include: i) local area thresholding [35, 36], ii) morphological operations [37, 38, 39], iii) stochastic image models [40, 41], iv) features derived from contour plots for signal peak detection [42, 58], v) multiresolution approaches [44, 45], and vi) wavelet transforms [46, 47]. A battery of tests involving local contrast, shape, size, gradient, and proximity to other microcalcifications has been commonly used to reduce false positive detections of clusters of microcalcifications [33, 34].

Research into the detection and segmentation of masses has also been reported in the literature. Asymmetry between the right and left breasts has been exploited to detect possible masses [49, 50, 51, 52].
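As a concrete illustration of the unsharp-masking idea mentioned above: a smoothed copy of the image is subtracted from the original, so that small, high-frequency details such as microcalcifications stand out against the slowly varying parenchymal background. This is a generic sketch with an arbitrary box kernel and threshold, not the implementation evaluated in [24].

```python
import numpy as np

def box_smooth(img, k):
    """Separable k-by-k mean filter with edge-replicated borders."""
    h = k // 2
    p = np.pad(img, h, mode="edge")
    kern = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, rows)

def unsharp_detail(img, k=15):
    """High-frequency residual: original minus local background estimate."""
    return img - box_smooth(img, k)

if __name__ == "__main__":
    # Smooth background gradient (stand-in for parenchyma) plus one bright speck.
    img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
    img[32, 32] += 0.5
    detail = unsharp_detail(img, k=15)
    # The speck dominates the detail image; the gradient is largely removed,
    # so a simple threshold isolates the candidate pixel.
    candidates = np.argwhere(detail > 0.25)
    print(candidates)  # prints [[32 32]]
```

Real detection schemes follow this step with the contrast, shape, size and proximity tests cited above to prune false positives.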
Template matching has been applied for the detection and segmentation of circumscribed masses [48]. The concept of multiresolution image processing based on fuzzy pyramid linking is used to detect and subsequently classify masses [53, 54]. Scale space filtering is used to extract closed contours associated with mass boundaries [55]. Texture features have also been used to detect stellate lesions [56, 57], and a Markov Random Field image model has been utilized to segment tumors [66].

2.4 Differential Diagnosis

The aim of differential diagnosis is to apply pattern recognition methods to differentiate benign and malignant lesions, or to separate benign or malignant subgroups. In one study radiographs of biopsy specimens were used to extract features and calculate a discriminant function to classify clusters of microcalcifications [59]. Different sets of features were used to approach the same task for screen-film mammograms [60, 61, 62].

In a different approach to the same problem, individual objects were not extracted from the background. Instead, structural features were computed from the image. Subsequently, artificial neural networks were applied to these feature vectors to classify the whole mammogram without the need for the prior segmentation of microcalcifications [67, 68, 69].

Shape features have been calculated for the classification of masses [63, 64, 65, 66]. Fractal dimension has also been used to classify lesions based on image texture [70].

Other related developments are in the areas of film digitization [71, 72] and development of smart workstations for computer aided diagnosis [73, 74, 75, 76, 77]. The most recent advances in almost all of the above areas are reported in the proceedings of the two international workshops that have so far been held in this field [159, 160, 161].

Chapter 3
Radiographic Film Digitization

Accurate digitization of the film is the first essential step to a successful automated analysis.
The quality of digitized images is of fundamental importance as any information lost in the acquisition and digitization process can not be recovered by further processing. Accurate interpretation of digitized X-ray screen-film mammograms has been limited by the lack of a 'specialized' image acquisition system with high speed and accuracy.

Several researchers, e.g. [19], attempted to extract numerical feature descriptions for tumor classification. They report, however, that a major limitation on the success of these efforts is the need for accurately digitized images. The mammographic features of interest range from several centimeters across, as in architectural distortions of the breast, to fine microcalcifications of less than 0.1 mm across. Very high resolutions are therefore required for at least some portions of the image. Digitizing the entire mammogram at the highest possible resolution requires a prohibitive amount of memory. Thus my approach is to develop an acquisition system that facilitates the digitization of mammogram images at the various scales needed for the different stages of analysis. The digitization should be rapid and within the required accuracy. Such a system would be useful both for pre-screening in the mass screening of non-symptomatic women and for the more detailed analysis of suspicious lesions.

Two novel image acquisition systems, based respectively on a linear and an area-scanning scientific grade Charge Coupled Device (CCD) array, were developed. Both systems will be described in this chapter and their performance will be compared. Although both systems are useful, the second system is superior and meets the specified requirements in that it offers a fast method of digitizing mammograms with high spatial and photometric resolutions. The system is capable of acquiring 6 frames per second, where each frame consists of over 1.3 million pixels digitized to 12 bits per pixel.
The fixed pattern and the random noise (of optical and electronic origin) are minimized using background subtraction and signal averaging techniques. The quality of the resulting image in terms of sharpness and noise content is comparable with that obtained by the more expensive and slower drum laser-scanning microdensitometer.

3.1 Image Acquisition Hardware

There are three methods of film digitization, namely using: 1) microdensitometers; 2) video cameras; 3) one-dimensional and two-dimensional CCD digital cameras.

3.1.1 Microdensitometers

X-ray film digitization has traditionally been performed using a microdensitometer. A narrow beam of light is transmitted through the film, and a light sensitive device measures the transmittance, which is then converted to optical density using a calibration curve. Lighting conditions approximate specular reflection. The film is moved past the beam by either a rotating drum or a flat bed mechanism. The sampling aperture of these systems is limited to the spot size. The scan-time is proportional to the number of sampling points or sampling lines depending on the scan mechanism, and may be quite long. Furthermore, the calibration and operation of the system is an involved process requiring a skilled operator. For these reasons microdensitometers have not been widely used to digitize radiographs in clinical settings.

X-ray film digitization using laser scanning microdensitometers has been widely reported in the literature. A comparative study of digitized film radiography systems and the associated diagnostic and operational advantages are given in references [78, 79]. The performance characteristics of laser digitizers are reported in [80, 81, 82, 83]. A recent assessment of these systems for clinical applications concludes that they are generally slow and expensive [84].
3.1.2 Video Cameras

Video cameras have dynamic range and bandwidth limitations that affect their performance at the spatial and photometric resolutions required. As an example, the Panasonic WV-BD400 video camera was evaluated. The video signal was captured and digitized to 512 x 480 pixels by the frame grabber. The contrast transfer function of this camera falls off to 5% at 400 TV lines on standard video resolution charts when the chart-camera distance is set for a full field of view. This corresponds to a spatial resolution of 1.25 line pairs per mm. This is clearly inadequate for detection of submillimeter microcalcifications. Also, the useful gray scale resolution is only about 7 bits [85] and the camera response as a function of gray scale is non-linear with a non-unity gain factor. The analogue output of the camera has to be digitized, resulting in further quantization errors, and the integration times are restricted to less than 1/30 seconds for interlaced frames at 60 Hz mains frequency. Therefore, video cameras were not considered any further.

3.1.3 CCD Sensors

Two new image acquisition systems which take advantage of the recent advances in Charge Coupled Device (CCD) technology were developed. The first system is based on a linear array CCD, and the second system uses a two-dimensional CCD array.

Linear CCD

The first system uses a digital scanner (Datacopy 612, Mountain View, CA) which has a linear array solid state detector (Fairchild CCD 122). There are 1728 pixels spaced contiguously every 13 µm over a 22.5 mm length. Two-dimensional images can be acquired by moving the sensor in 13 µm steps across the image. Scanning 2846 lines per image results in image files of 4.9 Megabytes where each pixel is digitized to 8 bits. The amount of optical aberration at the corners, however, is severe and therefore the image was limited to 2500 lines.
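The sensor length and file size quoted above follow directly from the pixel geometry; the short check below simply reproduces that arithmetic (all figures are the ones stated in the text):

```python
# Sanity check of the linear-CCD scanner figures quoted above.
pixels_per_line = 1728
pixel_pitch_mm = 0.013                   # 13 µm contiguous sensor elements
sensor_length_mm = pixels_per_line * pixel_pitch_mm      # ~22.5 mm

lines_per_image = 2846
# 8 bits per pixel = 1 byte per pixel, so bytes = pixels
file_megabytes = pixels_per_line * lines_per_image / 1e6  # ~4.9 MB
```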
The integration time is fixed at 3.5 ms.

The camera output is fed to the Datacopy image processing interface board model 110 housed in an IBM AT computer. The camera interface board communicates with the frame grabber (FG100AT, Imaging Technology Inc., Woburn, MA) and a digital signal processing board (SKY320, SKY Computer Inc., Boston, MA) via the host computer bus (IBM-AT). The arrangement is shown in Figures 3.1 and 3.2.

The digital signal processing board, based on the TMS320 signal processing chip, is used to speed up the scanning operation. The resulting image file is written onto the disk. The frame grabber board is used to process and display the image on a Sony RGB monitor. Since this board can only handle 512 x 512 image files, the input image was subsampled prior to display. Six mammograms were digitized using this system. Visual inspection of these images shows them to be of inferior quality compared with images obtained from a microdensitometer.

This system provides a large field of view, but the scanning mechanism imposes two limitations on its performance. The first limitation relates to the illumination source and is discussed below.

The second performance limitation of this system is the scan time. This speed limitation is characteristic of all point and line scanning mechanisms such as drum laser scanners and linear array CCD cameras. Nonetheless, the system is typically at least five times faster than a microdensitometer and is considerably less expensive. Several other researchers have also reported the use of a linear array CCD camera [13, 14]. As discussed later, the random noise contributed by the illumination source and the sensor electronics may be reduced by averaging several frames. This approach, however, is impractical for this system due to the scan time, which is several minutes per frame.
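The background subtraction and frame-averaging scheme referred to here (and detailed in section 3.3 for the area-scan system) can be sketched in a few lines; the image size, noise level, and constant scene below are illustrative values, not measurements from the thesis:

```python
import numpy as np

def calibrate(raw, dark, bright, k=255.0):
    """Fixed-pattern correction (cf. eq. 3.4): remove per-pixel shading using
    a dark frame (shutter closed) and a bright frame (no film in place)."""
    return k * (raw - dark) / (bright - dark)

def average_frames(frames):
    """Averaging N frames reduces zero-mean random noise by a factor sqrt(N)."""
    return np.mean(frames, axis=0)

# Illustration: 25 noisy frames of a constant synthetic scene.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 100.0)
frames = scene + rng.normal(0.0, 4.0, size=(25, 64, 64))
averaged = average_frames(frames)   # residual noise ~ 4 / sqrt(25) = 0.8
```

The several-minutes-per-frame scan time of the linear system is exactly what makes the `average_frames` step impractical there, while the 6 frames/s area-scan camera completes it in seconds.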
To further improve the speed and accuracy of the system, the second acquisition system was built.

Illumination

The camera was mounted above a light box transilluminating a mammogram. This light box is a common viewing box used by radiologists. It has two a.c. powered fluorescent lamps under a light diffuser.

Figure 3.1: The image acquisition system

Figure 3.2: The image acquisition hardware based on a linear CCD

Images obtained by the linear CCD displayed a regular pattern of light and dark bands parallel to the CCD array. This artifact was interpreted as being due to light flicker produced by the a.c. source. The 60 Hz ripple in the power supply means that the illumination flux varies over time. Consequently, different lines of the image will have different intensities. For the case of the linear CCD, the camera's fixed integration time of 3.5 ms is not long enough to overcome this problem. This artifact is sufficiently disturbing as to render the image essentially unusable for quantitative work.

This artifact may be minimized by use of a high frequency ballast. The standard ballast, having an output frequency of 60 Hz, was replaced by a Plaser ballast of 600 Hz. The disturbing artifact was reduced but was still perceptible.

Therefore a new light box (Gordon Instruments TVS, Orchard Park, NY) was acquired with four d.c. driven, 400 watt quartz lamps. This unit is air cooled and can transilluminate transparencies of sizes up to 14" x 17". The spatial non-uniformity of light emitted through the top surface of the light box was measured using an optical power meter.
A radiometric filter was coupled to the light power detector to make broad-band measurements in the range of 450-950 nm wavelengths. The incident light power is given in Figure 3.3 as a function of distance from the centre. It can be seen that a hot ring exists about 10 cm from the center. The maximum variation is about 20%. Narrow band radiometric and photometric measurements were also performed which confirmed the existence of hot spots.

The light non-uniformity was also measured using the CCD camera. The background illumination has a variance of 50 gray levels at mid range on an 8-bit scale. However, this 'hot spot' non-uniformity can be accounted for as it is of fixed pattern.

The fluorescent lighting provides a more uniform illumination (variance of 10 on an 8-bit scale). It is also cooler and therefore does not require the fan associated with the d.c. driven source. The problems of flicker and constant (i.e. non-adjustable) output light can be solved by use of variable integration times and area scanning CCD arrays.

Figure 3.3: Spatial distribution of illumination in the light box

2-D CCD

The second system is an adaptation of the detector originally developed for a solid state microscope (SSM) employing the Micro-Imager 1400 (Xillix Technologies Corp., Vancouver, BC, Canada) [86, 87]. This detector uses a two-dimensional CCD array (Kodak KAF-1400) to capture 1320 x 1035 pixels with square sensor elements of 6.8 µm per side. Up to six frames per second can be acquired with variable integration times. The CCD output is digitized to 12 bits, 8 of which can be displayed at any one time. The contents of frame memory are displayed on a high resolution monitor (1280 x 1024 pixels) and the system work station (Apollo DN4500) has access to the frame memory for image transfer and subsequent analysis. The system block diagram is given in Figure 3.4.
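The frame size and peak data rate implied by the figures above are worth spelling out; the arithmetic below uses only numbers stated in the text (and assumes all 12 bits are transferred per pixel):

```python
# Frame size and peak data rate of the 2-D CCD system.
width, height = 1320, 1035              # Kodak KAF-1400 array
pixels_per_frame = width * height       # "over 1.3 million pixels"

bits_per_pixel = 12
frames_per_second = 6
data_rate_mbit = frames_per_second * pixels_per_frame * bits_per_pixel / 1e6
# roughly 98 Mbit/s at the full six frames per second
```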
A highly interactive C program with an easy to operate graphical user interface was developed for mammographic image analysis.

The following is a description of the performance characteristics of this system. The performance characteristics of the light source, the lens, and the transfer functions of the camera are described in section 3.2. Noise reduction is discussed in section 3.3 and a comparison of film digitization systems is presented in section 3.4.

3.2 Performance Characteristics

The following six parameters characterize the quality of an image acquisition system [88]: spatial resolution, photometric resolution, photometric linearity and spectral response of the transducer, spatial distortion due to non-uniformity of illumination and size/shape variations of transducer pixels, and temporal distortion due to illumination fluctuations and electronic noise.

The photometric range of the detector is measured to be between 0 and 3.2 Optical Densities. This dynamic range is sufficient to cover all radiographic densities present in a properly exposed and developed mammogram. The CCD response was measured as a function of the intensity of incident light and was found to be linear. Since the CCD measures the optical transmittance of the mammogram at each pixel location, a logarithmic transformation was programmed in the acquisition system's look-up table (LUT) to enable the display of the image in the optical density domain.

Using a Nikon 55 mm f/2.8 Micro Nikkor lens and the 2-D CCD sensor it is possible to digitize a mammogram with a minimum sampling interval of 12.6 µm. This high density sampling is required for the task of detection of minute microcalcifications which may be present in the vicinity of the primary tumor site.
These "satellite" microcalcifications are indicative of histologic multifocality and are believed to be of prognostic value [89].

The Contrast Transfer Function (CTF) of the camera using the above lens was measured and is given in Fig. 3.5. The limit of resolution was measured to be 770 TV lines when imaging a standard resolution chart.

Figure 3.5: System Contrast Transfer Function

Although the CTF is commonly used to specify the spatial resolution of a camera, it is not in general a linear measure and therefore the overall CTF can not be readily computed from the CTF of cascaded elements. Therefore the Modulation Transfer Function (MTF), which is the Fourier Transform of the Point Spread Function (i.e. blur), was used. Two methods were used in measuring the MTF of the camera: 1) the Square Wave Response Function and 2) the Edge Spread Function.

The Square Wave Response Function (SWRF) has been used by others [90] and can be directly measured using bar pattern test objects. Each sinusoidal component of the square wave with frequency nf will have its amplitude MTF(nf), so that:

S(f) = \frac{4}{\pi} \sum_{n \text{ odd}} \frac{(-1)^{(n-1)/2}}{n} \, MTF(nf)    (3.1)

and we can compute the MTF from

MTF(f) = \frac{\pi}{4} \left[ S(f) + \frac{S(3f)}{3} - \frac{S(5f)}{5} + \cdots \right]    (3.2)

where S is the SWRF.

From equation (3.2) it can be seen that the values of the MTF at high frequencies are dependent on the accuracy of the SWRF measurements at much higher multiples of these frequencies. At these higher frequencies the SWRF values are extremely difficult to measure since they are limited by noise. Thus the results of this method were not further employed in this work.

In the second method a large black test object with a straight edge is imaged. The Edge Spread Function (ESF) is obtained by averaging 128 one-dimensional step edges. The Line Spread Function (LSF) is obtained by differentiating the ESF.
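The edge-gradient calculation just described (differentiate the averaged ESF to get the LSF, then Fourier transform) can be sketched as below. This is a minimal illustration rather than the thesis code, and a synthetic ideal step edge stands in for the 128 averaged measured edges:

```python
import numpy as np

def mtf_from_edge(esf):
    """Edge-gradient MTF estimate: differentiate the (averaged) Edge Spread
    Function to get the Line Spread Function, then take the magnitude of its
    discrete Fourier transform, normalized so that MTF(0) = 1."""
    esf = np.asarray(esf, dtype=float)
    lsf = np.diff(esf)                       # numerical derivative of the ESF
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]

# A perfectly sharp edge has an impulse LSF and therefore a flat MTF;
# any blur makes the estimated MTF roll off with frequency.
ideal_edge = np.concatenate([np.zeros(128), np.ones(128)])
flat_mtf = mtf_from_edge(ideal_edge)
```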
Finally, one slice of the MTF is obtained by the one-dimensional discrete Fourier Transform of the LSF [91]. The differentiation may be done in the space domain (x, y) or, equivalently, in the spatial frequency domain (u, v):

MTF(u, v) \cdot \delta(v) = 2\pi j u \, G(u, v)    (3.3)

where G(u, v) is the Fourier Transform of the ESF, and \delta is the unit impulse function. The two-dimensional MTF is a separable function formed by the product of one-dimensional MTFs in the u and v directions. Furthermore, the Point Spread Function (PSF) of the camera is found by a two-dimensional inverse transform of the MTF. A plot of the MTF is given in Figure 3.6. The results of this method were selected to be further used in this work.

Figure 3.6: System Modulation Transfer Function: (a) in two dimensions; (b) a one-dimensional slice.

3.3 Noise Reduction

The aim here is to correct the noise introduced in the system. Specifically we are concerned with the noise element added by the film digitizing process. This noise is due to the following four sources: a) the illumination source; b) the lens; c) the CCD and its associated amplifier; and d) the analog to digital converter. We combine these four sources and divide the overall noise present in the data into two types: the fixed pattern noise, such as the optical shading of the light source and aberrations of the lens, and random noise of optical and electronic origin.

The fixed pattern noise can be corrected by image calibration using [92]:

I_{cal} = k \, \frac{I_{raw} - I_{dark}}{I_{bright} - I_{dark}}    (3.4)

where I_{raw} is the raw image, I_{dark} is the image with the shutter closed, I_{bright} is the image of the background without the mammogram, and k is a positive constant. For rapid processing, integer arithmetic can be used.
In order to avoid discontinuities created by division by an integer, the following approximate relationship was implemented:

I_{cal} = I_{raw} - I_{bright} + bias    (3.5)

Applying this method to the full image, the noise standard deviation was reduced from 3.4 to 1.7 gray levels on the 8-bit scale at mid-range.

The random noise can be reduced by averaging multiple frames. If N frames are averaged, the standard deviation of the noise will be reduced by a factor of \sqrt{N}. After subtraction of the fixed pattern noise, 25 frames were averaged. This operation takes approximately 10 seconds and reduces the standard deviation of the noise to 0.5 gray levels on the 8-bit scale at mid-range. This low level of noise was found to be independent of the integration time or the size of the image and is primarily due to the arithmetic round-off errors introduced in equation (3.5). After correction for both types of noise, a 1.3 megabyte image with a maximum background variation of ±2 gray levels was obtained.

3.4 Comparison of the Three Film Digitization Systems

A wide range of microdensitometers are in use and their characteristics have been widely reported in the literature. We have presented the above two film digitizers based on a linear CCD and a two-dimensional CCD. We will now compare the performance of these systems with a typical microdensitometer. We will use a 'Matrix Laser Digitizer' as a typical unit of modern design [81].

Pixel Size:

Microdensitometers usually provide the option of a few fixed pixel sizes, in the range of 50-200 µm. Each pixel is circular and its minimum size is limited by the minimum beam spot size. For CCD cameras the pixel is square and therefore congruent pixels cover the image without overlaps. My first system (linear CCD) has 13 µm pixels and the second system (2-D CCD) has 6.8 µm pixels. Additionally, since the two-dimensional CCD has a 100% fill factor, there are no gaps in the light sensitive area of the detector.
This can not be achieved by the circular scanning spot of microdensitometers.

Modulation Transfer Function:

The MTF of each digitizer is the combined effect of the MTF of the lens and the MTF of the sensor. All three detectors can be designed such that the resultant MTF of the sensor is close to the sampling aperture function, with little degradation introduced by the sensor electronics. We can therefore compare the aperture functions of the three systems. Fig. 3.7 gives the plot of the MTFs of the three sensor apertures as compared with the MTF of a high resolution screen-film combination. We have used the Matrix laser digitizer in the high resolution mode [81] and the Kodak Min-R screen with the Ortho-M film. The effect of finer sampling by the CCD in obtaining higher spatial resolutions is evident.

Signal to Noise Ratio:

The temporal noise of the digitizer is not a function of the sampling interval and may be measured as a function of the film density. Fig. 3.8 is a plot of digitizer noise as a function of film density compared with screen-film noise. It can be seen that temporal noise in the microdensitometer and the CCD sensor, after averaging, are comparable and lower than screen-film noise for densities less than 2.5. Since the area scanning CCD grabs an image at least 2 orders of magnitude faster than the other two systems, it is the only digitizer that allows noise reduction by averaging of multiple frames within acceptable time scales. After noise reduction, the system noise is primarily due to quantization noise in the analog to digital (A/D) converter.

Dynamic Range:

The useful density range of a film (defined as the density range with sensitometric slope > 0.5) is between 0.2 and 3.2 optical density units. Therefore a dynamic range of 10^3 in the signal
(transmission) space is required. This can be obtained by all the above digitizers and a 12-bit A/D.

Figure 3.7: Modulation Transfer Function of the sampling aperture of the three digitizers compared with the MTF of the screen-film combination: (a) 2-D CCD; (b) Linear CCD; (c) Matrix Laser Scanner in high resolution mode; (d) Kodak Min-R Ortho-M screen-film.

Figure 3.8: Noise standard deviation for (a) the screen-film scanned by a 70 µm aperture; (b) Matrix Laser scanner; (c) one frame of the 2-D CCD; (d) average of 40 frames of the 2-D CCD with correction for fixed pattern noise.

Image Acquisition Time:

The clinical application of digital radiology requires processing of hundreds of images per day in an average sized radiology department. Film digitization time is therefore significant and should preferably be less than the film processing time of one second per inch. Two-dimensional CCD scanning has a major advantage in this regard and is between a hundred and a thousand times faster than the other two systems.

Summary:

In summary, both our CCD based systems can give higher resolutions than that of the densitometer, and the two-dimensional CCD based system is significantly faster than the other two systems. It should also be mentioned that a CCD based system could be manufactured at much lower cost than the densitometer.

3.5 Image Acquisition at Multiple Scales

It is often necessary to acquire several images at different spatial resolutions from the same mammogram.
To facilitate rapid image acquisition, the camera was mounted on a belt driven vertical linear drive. The mechanism was coupled to a three stack stepper motor. The lens was also connected to a rotary gear system that was driven by a smaller stepper motor. In this way the camera could provide real time automated optical zoom capabilities. The drive controller for the lens subassembly was programmed with a spline corresponding to the required amount of rotation of the focus ring. In this way the focus ring would automatically maintain focus while the camera moves vertically. Since the mammogram inherently contains a two-dimensional image at a fixed location atop the light box, this form of obtaining autofocus is adequate. An x-y stage, controlled via an RS232 port from the computer, could be added to the system to provide pan capability. In initial clinical use, however, practicing radiologists have found that a manual pan of the film under the camera is just as acceptable when the system is being used interactively. This form of zoom and pan provides magnification with increased spatial resolution and is superior to the software zoom and pan provided by pixel replication in commercially available software for imaging boards.

Once a region of interest (ROI) had been identified, either through detection of microcalcification clusters or masses, its center coordinates were marked and new images with much higher resolutions were obtained within a radius of up to 2.5 cm. These newly acquired images were subject to various signal processing algorithms in an effort to identify minute calcification formations associated with "satellite tumor sites" around the central reference tumor.
These satellites are normally mammographically occult to the naked eye, but it has been pathologically demonstrated that the presence or absence of these satellite tumors is highly prognostic [89]. The question which was addressed is whether the provision of higher spatial and photometric resolution by itself is sufficient to enable the detection of these cancers if, in fact, they exist. For this latter analysis a data base of mammograms is required on which subsequent pathological analysis has shown the existence of satellite cancers. This issue will be further investigated in chapter 6.

3.6 The Software Environment

None of the currently available commercial image processing software libraries are of direct use since they are for general purpose use and do not conform to the particular requirements of mammogram analysis such as image size, etc. These systems have not been designed for object detection or pattern recognition work, are of a "turn key" style, and are generally unsuitable for development work. A software environment therefore was developed to facilitate the acquisition, storage and retrieval, processing and display of images of various sizes and formats.

Development work was done on the following three hardware platforms as they became available to me during the course of this project:

1. an IBM PC compatible computer with an 80386 CPU and an FG100AT Imaging Technology monochrome imaging board;
2. an IBM PC compatible computer with an 80486 CPU and a 1280 Matrox colour imaging board;
3. an Apollo 4500 computer with a Univision monochrome frame grabber.

The C programming language was chosen for all algorithm development. It is a medium level language that combines the ease of programming of high level languages with the speed of machine specific languages.

For the PC environment, elementary routines were developed for manipulating image files. The visual display unit available was a 480 x 640 pixel VGA monitor.
Therefore, routines were developed to select subimages of 320 x 320 pixels from the input images. Also, routines for subsampling the original image were written to create smaller image files at lower resolutions. With a subsampling rate of 1:2 in each direction, it takes only 30% more disk space to store all the low resolution ancestors of a given image in a pyramid linked data structure. Routines were also written to enable writing of image data to the video memory for display. Two images of 320 x 320 pixels can be simultaneously displayed on the monitor. This enables rapid visual comparison of input and processed images. Routines were also written for calculating the global histogram and for applying a user defined global threshold to the image gray level.

When the Matrox imaging board became available, a software package named WINMAM was developed for the acquisition and display of images. This set of routines uses the Windows graphical interface, and a secondary large screen (1280 x 1024 pixels) is then used for image display.

3.6.1 MAMPRO

The C programming language operating under UNIX was used in software development for the Apollo workstation. A user friendly, menu driven graphical interface was employed in
Most mammograms were digitized ateither 50 m or 100 m at 12 bits using our 2D CCD sensor. Further details are given in thefollowing chapters.Additionally, the image data base consists of five images of 1728 x 2500 pixels obtainedfrom the Datacopy linear scanner and 39 images of 1024 x 1024 pixels from the Royal MarsdenHospital in London, U.K., digitized on a drum laser scanner.Chapter 3. Radiographic Film Digitization 33[ Main MenuCamera Control Functions Set Input Look Up Table (LUT)Set Integration TimeShutter On/OffScreen Facilities Set Display LUTSet Pseudo ColorGraphics / Text OverlayZoom / PanImage Acquisition ScanAverageSubtract BackgroundImage Storage/Retrieval Main Memory / Frame MemoryFile FormatImage Statistics Intensity ProfileLinear / Log. HistogramImage Processing Image RestorationImage EnhancementEdge DetectionMorphological ProcessingImage ArithmeticImages ArithmeticImage SegmentationFeatures Extraction Label CalcificationsCalculate Object FeaturesCalculate Cluster FeaturesTable 3.1: Main Menu Functions in the Software package IVIAMPROChapter 3. Radiographic Film Digitization 34Add Noise Add Gaussian noiseAdd Poisson noiseAdd photonic noiseAdd radiographic noiseClean Noise Wiener smootherMMSE filterMAP filterAdd Blur Add uniform blurAdd Gaussian blurAdd camera blurAdd radiographic blurForm Pyramid Subsample by /2Power Spectrum Image extensionRestore Camera Spatial / Freq. domainInverse filterWiener filterCbS filterRestore Mammogram Adaptive / Signal-dependentTable 3.2: Image Restoration Functions in the Software package MAMPROChapter 3. Radiographic Film Digitization 35Image Enhancement Image convolutionRank filtersAverage filterSharp filtersUnsharp mask filtersEnhance contrastEqualize histogram_____________________________Optical DensityEdge Detection Sobel filterRoberts filterKirsch filterPrewitt filterLaplacian filterMorphological edgesMarr & HildrethMorphological Proc. 
Dilation; Erosion; Closing; Opening; Morphological edges; Detect salts; Detect peppers; Skeletonizing
  Image Arithmetic (with a constant): Image inversion; Image threshold; Image + - x / and or xor; Image clip/clamp
  Images Arithmetic (between two images): Images + - x / and or xor; Images min/max; Images rms error
  Image Segmentation: Set threshold; Increase SNR; Segment calcifications; Label objects; Create binary masks

Table 3.3: Other Image Processing Functions in the Software package MAMPRO

Chapter 4
Clinical Evaluation of the Film Digitization System

A prototype unit of the film digitization system was evaluated subjectively, from a clinical point of view, by experienced radiologists from the BC Cancer Agency. As a result of their initial feedback it was postulated that such a film digitization unit would be of clinical utility even without any further image processing and analysis software. This chapter reports on a clinical investigation that was designed and implemented to assess this aspect of the device.

Radiologists normally scan a mammogram for a number of abnormalities, including the presence of any microcalcification clusters. Often a hand-held magnifying glass is used to ensure that the very small and faint microcalcifications are detected. If a definite assessment still cannot be made, the patient is recalled and a magnification mammogram is taken. Magnification mammography is a well-established conventional procedure that is used as a diagnostic tool in the evaluation of microcalcifications. It achieves 1.5 to 2 times magnification of a selected portion of the breast image by introducing an air gap between the breast and the screen-film receptor. The air gap also increases the unsharpness of the image, and therefore larger magnification ratios are not practical. An image superior to the conventional mammogram is obtained at the cost of exposing the patient to several times more ionizing radiation.
The noise reduction is primarily due to lower amounts of Poisson-distributed quantum noise at higher X-ray doses in the primary beam. The procedure normally involves patient recall, with the associated cost and anxiety.

The film digitization system also produces magnified images, by means of electro-optics. Digital magnification of mammograms results in images that are superior to the conventional mammogram. Digital magnification is obtained by spatial magnification and photometric mapping via an optimized camera Look Up Table (LUT). The optical arrangement generally produces a demagnification, mapping the field of view to the CCD chip size. For example, for the case of our prototype unit, a square 50 micron sampling interval on the X-ray film maps a 64 mm x 51.2 mm portion of the mammogram to an 8.7 mm x 7 mm CCD area, i.e. a reduction of over 7.3 times. When viewed on an 18" display monitor the 1280 x 1024 image will be 360 x 288 mm in dimension, i.e. a net magnification of over 5.6. This is at least three times the magnification produced by conventional magnification mammography. Although this can almost be achieved using a hand-held magnifying glass, no improvement in contrast or conspicuity is possible in the latter case. It is the combination of magnification with contrast enhancement that provides the unique capabilities of Digital Magnification Mammography (DMM).

For this study we performed no post-processing or enhancement of the acquired images. The acquisition camera was used with a linear LUT and a suitable choice of minimum and maximum grey levels. This effectively provides for an implied adjustment of window and level for the input grey levels. This operation is necessary since the acquisition is in 12 bits but the display is in 8 bits.
This operation, coupled with the inherent contrast properties of the display unit, leads to a pronounced improvement in image contrast and in the conspicuity of microcalcifications.

To assess the clinical utility of DMM, a study was designed and carried out in collaboration with three radiologists from the British Columbia Cancer Agency (BCCA). This was a preliminary clinical investigation of the application of DMM for the evaluation of mammograms and the primary diagnosis of early breast cancer by experienced practicing radiologists.

4.1 Working Hypothesis:

We tested the hypothesis that practicing radiologists can extract essentially the same diagnostic information from the original mammogram with regard to microcalcifications by using DMM in place of obtaining an extra magnification view, thus sparing the patient the added exposure to ionizing radiation.

4.2 Materials and Methods:

Mammograms of patients who had been referred to the British Columbia Cancer Agency (BCCA) for further radiographic follow-up were obtained. Each of the patients had a pair of mammograms (craniocaudal and mediolateral oblique views) and one magnification mammogram performed on her. Subsequently a biopsy was carried out on each of these patients, and we obtained the relevant pathology report on the excised tissue. All cases were reviewed by an experienced radiologist from the BCCA to ensure their suitability for this study. Specifically, each suitable case had to have a visible suspicious cluster of microcalcifications. The films represented hard cases that conventionally would require magnification mammography.

We selected 35 cases (three films per case, i.e. 105 images) involving difficult-to-diagnose microcalcifications (as judged by a radiologist) without associated masses or any other signs of abnormality.
Due to these requirements, we examined and rejected many more cases, which either had a visible lesion present on the film or whose original mammograms were unavailable. Whenever the original mammograms had been performed at an outside laboratory (normally within BC), the films were requested through the BCCA Department of Diagnostic Imaging. Each film was subsequently digitized twice, at 50 µm and at 100 µm sampling intervals. The magnification views each needed only to be digitized at 100 µm. In this way each case comprises 5 images, only one of which was used in the present study.

Three radiologists participated in the study. The study consisted of two parts. In the first part, each radiologist individually reviewed the original mammograms and the digitally magnified images on a monitor. In the second part, each participant again separately reviewed the original mammograms as well as the extra magnification films, as in a conventional reading. In each case a detailed questionnaire, given in Appendix B, was completed by each observer. The questionnaire involved 18 questions in 6 categories pertaining to the features of microcalcifications. It quantifies five attributes of the microcalcifications, namely: number in a cluster, shape, density, margination, and spatial arrangement. The questionnaire also records the overall clinical assessment.

The two parts of the study were kept separate by both a time interval and random shuffling of the cases, so that the observers had no recollection of their previous analysis.
The three observers did not discuss the cases among themselves, and no other clinical data or patient history was supplied to them.

4.3 Results:

Two types of results were obtained, namely a qualitative assessment of the images by the radiologists and a quantitative comparison of the data obtained from the questionnaire.

Qualitative assessment of the images by the radiologists:

All three radiologists reported that they were able to see the details of the microcalcifications better on the monitor than on the original mammograms. Two other experienced radiologists with a special interest in mammography evaluated the digitized images using mammograms that they brought along with them. One of these radiologists is associated with the BCCA, while the other is affiliated with a community hospital. All the clinicians reported that they could see more microcalcifications on the monitor and that they found the soft-copy display easier to use for primary diagnosis than the films. Although the radiologists were free to refer to the original mammograms during the evaluation of the digitized images, they chose not to do so. They would first look at the films to determine the relative location of the microcalcifications (i.e. in which quadrant of the breast they lie) and then use the soft-copy display exclusively for the detailed description of the microcalcifications.

Quantitative comparison of the data:

Comparative data for the three observers are given in table 4.1 in the form of a classification confusion matrix. CMM refers to conventional magnification mammography and DMM refers to digital magnification mammography. The last question in the questionnaire quantifies the overall diagnostic impression of the microcalcifications and can be viewed as a two-
class classification problem, with the two classes labeled as either benign or malignant.

                              Benign     Malignant   Sens.  Spec.  Acc.  Consp.
                              Pathology  Pathology     %      %     %      %
  Observer #1   CMM    B         20          3         67     77    74     49
                       M          6          6
                DMM    B         20          3         67     77    74     49
                       M          6          6
  Observer #2   CMM    B         22          7         22     85    69     37
                       M          4          2
                DMM    B         19          5         44     73    66     31
                       M          7          4
  Observer #3   CMM    B         10          2         78     38    49     -3
                       M         16          7
                DMM    B          7          1         89     27    43    -14
                       M         19          8

Table 4.1: Confusion matrix for Benign (B) vs. Malignant (M) classification by conventional (CMM) and digital (DMM) magnification mammography.

In table 4.1 we have also included calculations of "Sensitivity", "Specificity", and "Accuracy" as used in the literature and reviewed in the Appendix. We also introduce a new metric called "Conspicuity", defined as

C = 2A - 1   (4.1)

where C is the conspicuity and A is the accuracy. The rationale for this metric is this: a random classification for a two-class problem, under the assumption of equal probabilities, gives an accuracy of 50%. If the image of an abnormality is so inconspicuous as to render the classification essentially random, then C = 0. A highly conspicuous abnormality, with C = 1, should ideally result in 100% accuracy in classification.

It is well known that some cancers are pathologically evident but radiologically occult. Therefore, if ground truth, so far as the images are concerned, is taken to be the 2/3 majority of the radiologists' responses, we obtain the results of table 4.2.

Table 4.3 gives the extent of inter-observer agreement for the conventional reading of mammograms. Observer #1 agreed in 77% of cases with observer #2, and in 57% with observer #3. The second and third observers agreed in only 40% of the cases. It can be seen that considerable disagreement exists. This may be a typical example, and a natural consequence, of the subjective and non-quantitative method of interpretation currently in practice.
                              Benign     Malignant   Sens.  Spec.  Acc.  Consp.
                              features   features      %      %     %      %
  Observer #1   CMM    B         23          0        100     96    97     94
                       M          1         11
                DMM    B         23          0        100     96    97     94
                       M          1         11
  Observer #2   CMM    B         23          6         46     96    80     60
                       M          1          5
                DMM    B         20          3         73     83    80     60
                       M          4          8
  Observer #3   CMM    B         11          1         91     46    60     20
                       M         13         10
                DMM    B          8          0        100     33    54      9
                       M         16         11

Table 4.2: Confusion matrix for Benign (B) vs. Malignant (M) classification by conventional (CMM) and digital (DMM) magnification mammography; ground truth is the radiologists' majority opinion.

                              Observer #2            Observer #3
                           Benign   Malignant     Benign   Malignant
  Observer #1  Benign        22         1           10        13
               Malignant      7         5            2        10
  Observer #3  Benign        10         2
               Malignant     19         4

Table 4.3: Inter-observer (dis)agreements for Benign vs. Malignant classification in conventional magnification mammography.

It is instructive to compare the results of conventional and digital magnification for each individual observer. These are given in table 4.4.

       DMM                  CMM: Benign   CMM: Malignant
  Observer #1  Benign           20              3
               Malignant         3              9
  Observer #2  Benign           23              1
               Malignant         6              5
  Observer #3  Benign            3              5
               Malignant         9             18

Table 4.4: Comparison of classification by conventional (CMM) and digital (DMM) magnification mammography for each observer.

The classification problem is carried out in two steps: detection and visualization of the microcalcifications, and determination of the probability of malignancy. Both of these factors affect the final outcome. The second step, however, is not a function of image quality. We should therefore compare the radiologists' responses for the visibility of image features. Typical results from the first observer are given in table 4.5.

4.4 Discussion

Observer #1 had some preliminary exposure to the prototype unit, while the second and third observers had never used soft-copy images for primary diagnosis of mammograms.
It appears from the results that some familiarity with this form of image display improves the outcome. We also observe that in both the conventional and digital arms of the study the first observer was consistently closer to the "truth".

Considering the conventional analysis, we assume that all three radiologists are equally well experienced, and yet their performance in predicting the probability of malignancy is very different. In particular, the third observer is extra cautious and calls far too many cases malignant. The accuracy of this observer is in fact worse than a random classification of 50%, leading to negative values of image conspicuity. It is clear that this observer is influenced by her past experience and the particular group of patients that she normally examines. Our data base is a particularly difficult one to evaluate, and the probability distribution of malignant cases is different from the normal expectations of this observer.

Concentrating on the results of the first observer, we note from tables 4.1 and 4.2 that the hit rate for both the conventional and digital magnification is the same. This indicates that, within the statistical validity of the experiment, the working hypothesis is confirmed.

This result should be interpreted with caution. The conventional magnification procedure results in images that inherently have higher signal-to-noise ratios; there is, however, some loss of sharpness in the magnified images. Digital magnification provides optical magnification and electronic contrast enhancement, but amplifies both the signal and the recorded noise in the original mammogram. This study does not compare these images from a technical point of view, i.e. by signal-to-noise ratio, contrast, etc.,
but rather from a clinical point of view.

As an example, referring to table 4.5, we note that out of the 35 cases we have 9 true positive cases (TP=9), 20 true negative cases (TN=20), 3 false positive cases (FP=3) and 3 false negative cases (FN=3). This gives a false positive ratio of FPR=3/23=13%, or a specificity of 87%, and a false negative ratio of FNR=3/12=25%, or a sensitivity of 75%. The overall accuracy is 29/35=83%. We have defined a conspicuity scale (or C-scale) as (2 x accuracy - 1). Since a random diagnosis is likely to lead to 50% FPR and 50% FNR, it will give a C-value of 0, i.e. the conspicuity of the abnormality has not been increased by the use of DMM. Alternatively, if DMM could completely obviate the use of conventional magnification mammography we would have no false positives or false negatives, an accuracy of 100% and a C-value of 1. Any C-value above zero quantifies the contribution of DMM. In this case C = 0.66, i.e. a 66% increase in conspicuity.

The above analysis can also be carried out in respect of the individual questions in the questionnaire, to quantify the contribution of DMM in increasing the conspicuity of the various features of abnormalities. For the feature in question 1, the number of microcalcifications was classified as few (less than 10 per cm2) or many (equal to or greater than 10 per cm2). Hence, out of the 35 cases, correct estimates were made in 24 cases, giving an accuracy of 69%. In 8 cases DMM underestimated the number of microcalcifications, while in 3 cases it overestimated it. The C-value is 0.37, indicating a 37% increase in conspicuity.

Question 5 relates to the spatial arrangement of microcalcifications and its confusion matrix is given in table 4.5. From table 4.5 we have an accuracy of 77% and a C-value of 0.54, i.e.
a 54% increase in conspicuity.

Question 4 relates to the margination of microcalcifications and classifies them into smooth or irregular, as given in table 4.5, with an accuracy of 57% and a modest increase in conspicuity of 14%.

Question 3 measures the density of the calcifications and classifies them into uniformly dense, or poorly defined and smudgy. The confusion matrix is again given in table 4.5, from which we see an accuracy of 63% and a C-value of 0.26.

Question 2 relates to the shape of microcalcifications and is a seven-class classification problem with multiple labels being allowed. The confusion matrix for this feature has 49 entries, i.e. larger than the total number of cases. For this reason this method of analysis was considered inappropriate.

In summary, the most important of the above results is the number of cases that a radiologist can correctly identify as benign or malignant without the use of conventional magnification mammography. For the three radiologists the average ratio was 26 out of 35 cases, or 74%. For observer #1, who had prior exposure to the system, this ratio was 83%. We have therefore shown that, within the parameters of the experiment, the number of magnification mammographies may be reduced significantly by the use of digital magnification with the proposed image acquisition device.

Chapter 5
Image Restoration

In this chapter we employ filtering techniques in an attempt to reduce the image degradations and improve the detectability of abnormalities in digitized mammograms.

5.1 Problem Description

Many more breast cancers are now detected in earlier stages as a result of the greater participation of women in screening programs. The majority of early carcinomas of the breast are indicated by the presence of one or more clusters of microcalcifications on a mammogram. Detection of
Detection ofsubtle, small, and low contrast microcalcifications, is therefore gaining increased significance.Over 30% of all early carcinomas of the breast are detected solely on the basis of the presence of a cluster of subtle microcalcifications. Usually a biopsy is performed and the presenceof microcalcifications together with the associated malignancy is confirmed pathologically. Aradiograph of the biopsy specimen is often taken employing contact radiography where theparameters are optimized for the best possible image. The specimen radiograph has a higherquality image due to lower noise associated with a much higher X-ray dose, and reduced scattering in the thin specimen. The specimen radiograph almost always shows a larger number ofmicrocalcifications than was visible in the original mammogram. Clearly then the mammography imaging process misses much needed information and introduces image degradations thatmay play a critical role in the detection of early breast cancer.A closely related problem is that mammography usually underestiniates the extent of a lesion. From several correlated pathologic-radiologic studies it has been shown that the smallest46Chapter 5. Image Restoration 47microcalcifications, normally in the periphery of the lesions, are not visible on the mammogram. This problem is particularly significant when multifocality is involved and “satellite”microcalcifications are present in the vicinity of the primary tumor. These satellite microcalcifications can be found as far away as 4 cm from the primary lesion and are believed to behighly prognostic [89].Two factors contribute to this phenomenon namely the observation system noise and thesystem blur. 
Image processing techniques may be used to overcome some of these degradations.

5.2 Image Processing

Image processing can be considered as a preprocessing stage in the digital analysis of mammograms. This preprocessing is used to calibrate the image, to remove or reduce the random and fixed-pattern noise, and to counter the effects of non-linear illumination and camera response. It is also used to correct the degrading effects of the transfer functions of the X-ray source, the imaging path and the screen-film receptor, as well as to correct for the blurring effects of the camera Modulation Transfer Function (MTF).

There is much published work in the literature in the areas of image enhancement and restoration, for example [94], [95], and [96]. We have treated noise reduction, image calibration and some general image enhancement techniques in chapter 3.

While image enhancement techniques have their own advantages, they do not consider the process of image degradation in the derivation of algorithms. We have already noted that the most subtle signs of abnormality are not visible in the raw mammogram or in its digitized version because of the noise and blur in the image. Degradations caused by the blurring effects and noise introduced by the X-ray machine and the screen-film detector appear in the mammogram, and similar degradations of the camera and the digitizing equipment affect the digitized mammogram. We postulate that if the characteristics of the noise and the different blur functions are known or can be obtained, then these effects may be removed or reduced by image restoration techniques. The restored image may then be directly viewed on the monitor,
A prerequisite of restoration is the knowledge of the image formation modeland the transfer functions of the system components as well as the noise model. The overallModulation Transfer Function is considered to be due to both the screen-film receptor and thefilm digitization system. The most common restoration methods use linear filtering techniquessuch as Wiener filtering. In the following chapter We will extend this work to include the effectsof signal-dependent noise.5.3 Image RestorationImage restoration is a mathematical process in which operations are performed on an observedimage so as to estimate the original object that would be observed if no degradations werepresent in the image formation system used. Basically the procedure is to model the imagedegradation effects of the system and then find and perform appropriate operations to ‘undo’these degrading effects. Thus in order to effectively design a digital image restoration procedure,it is necessary to first quantify or characterize the image degradation effects of the physicalimaging system, the image digitizer, and the image display. Due to the statistical nature of thedegradations, ideal restoration is not possible. However> some degree of improvement may befeasible. We seek such improvements in the X-ray imaging of the female breast.To restore an image the blur functions and the noise strengths of the system must first bemeasured. The two components of the overall system are the X-ray imaging system and thetwo-dimensional CCD digitizing system, as shown in Fig. 5.1. These are modeled as a cascadeof linear and shift invariant systems. This siniple model serves as a starting point and hasthe advantage of mathematical tractability. The Modulation Transfer Function (MTF) of thecamera was measured as discussed in chapter 3. The MTF of the screen-film combination isnormally available from the manufacturer. 
The noise components from the camera and the noise recorded on the film are here assumed to be additive, and their strengths are estimated.

Figure 5.1: The digital image formation model

5.4 The System Transfer Function

The relative importance of the camera MTF and the screen-film MTF needs careful attention. The values of the MTF of the best available X-ray screen-film combination at frequencies above 16 cycles/mm are known to be less than 5% of the MTF's maximum value [97]. This implies that image details associated with these frequency components are severely attenuated, resulting in blurring of the image. The mammographic image can therefore be considered to be band-limited. The camera MTF's value is over 50% at this limit (16 cycles/mm), indicating that the screen-film combination (and not the digitizing camera) limits the detection of microcalcifications. This relationship depends, of course, on the geometrical magnification provided by the lens, and holds only when the object and its image are of equal size. If the mammogram is imaged at lower magnifications the effect of the camera MTF increases.

A combined impulse response can be found by convolving the impulse responses of the individual elements in the imaging path. In the Fourier domain this translates into multiplication:

H(fx, fy) = Hsf(fx, fy) Hc(fx, fy)   (5.1)

where H is the overall system modulation transfer function, Hsf is the screen-film MTF and Hc is the digitizing camera MTF.

The system is modeled as

g(x, y) = h(x, y) * f(x, y) + n(x, y)   (5.2)

where * is the convolution operator, h(x, y) is the overall system blur function, n(x, y) is the additive noise, and g(x, y) is the observed image.

5.5 The Wiener Filter

The reconstruction filter is designed to minimize a certain estimation error e, which may be defined in a variety of forms. In the Wiener filter formulation e is defined as:

e = Exp{ ∫∫ [f(x, y) - f̂(x, y)]^2 dx dy }   (5.3)
where Exp is the expectation operator, f(x, y) is the original, undegraded, and unknown image and f̂(x, y) is the estimate of f(x, y). The expectation in equation (5.3) is performed over the entire spatial support of the image.

The restored image F̂(fx, fy) in the transform domain is

F̂(fx, fy) = Hr(fx, fy) G(fx, fy)   (5.4)

where Hr(fx, fy) is the restoration filter, and G(fx, fy) is the Fourier transform of the observed image.

For the assumed linear shift-invariant system, the Wiener filter in the Fourier domain is known to be

Hr(fx, fy) = H*(fx, fy) / ( |H(fx, fy)|^2 + WN(fx, fy)/WF(fx, fy) )   (5.5)

where H is the Fourier transform of h, and WN and WF are the power spectra of the noise and signal respectively. Note that since the acquisition system blur function h(x, y) is commonly symmetric, the transfer function H(fx, fy) is a real function, and thus the above restoration filter is also real.

To calculate equation (5.5) we are faced with the fact that the power spectrum of the latent image, WF, is in practice unknown. In our first attempt to calculate (5.5) we estimate WF from the degraded images as follows. The Wiener filter (5.5) can be re-written as:

Hr = H* WF / ( |H|^2 WF + WN )   (5.6)

where the arguments (fx, fy) are implied. From (5.2) the observed image in the frequency domain is given by:

G = H F + N   (5.7)

Therefore the power spectrum of the observed image, WG, can be calculated as:

WG = |H|^2 WF + WN   (5.8)

and hence, substituting for the unknown WF, we get

Hr = (1/H) (WG - WN)/WG   (5.9)

From equation (5.9) we note that the Wiener filter can be decomposed into a cascade of a smoothing filter, (WG - WN)/WG, and the inverse filter, 1/H.
The computations were carried out using the Discrete Fourier Transform (DFT), and WG was replaced by |G|^2. In this implementation, in order to minimize the effects of the zeros of H, we selected only the first L values of the blur function as being significant and set the rest to a fixed, small but non-zero constant H(L). L can be treated as a design parameter and its value may be chosen empirically. The noise power spectrum was estimated from selected smooth regions of the noisy and blurred images. The spectral energy of the relatively low-contrast soft-tissue images on a mammogram is generally concentrated near zero frequency. The Wiener smoothing filter (WG - WN)/WG was also implemented separately and proved to be quite effective in smoothing mammographic images, as judged visually.

The alternative method of calculating equation (5.5) assumes the noise-to-signal power ratio a = WN/WF to be equal to a constant. The value of this constant is adjusted empirically to obtain the best restored image, as judged visually. The filter takes the form of a sharpening high-pass filter and assists in better visualization of the details of microcalcifications. The filter transfer function is given in Figure 5.2. Although the first method of deriving WF from the observed image has been suggested in the literature, we found that the second method of assuming a constant noise-to-signal power ratio produced better results.

5.5.1 Iterative Restoration

The Wiener filter of equation (5.6) is the optimum linear restoration filter in the Minimum Mean Square Error sense. As stated earlier, the problem is that normally the power spectrum of the latent image, WF, is not known a priori. In section 5.5 above we tried to estimate WF from the power spectrum of the observed image, WG. This resulted in equation (5.9). Alternatively,
we assumed that WF has the same shape as WN and took a = WN/WF to be a constant. In this section we will estimate WF iteratively.

Figure 5.2: The Modulation Transfer Function of (a) the system; and the restoration filter with (b) noise-to-signal power ratio a = 0.01; and (c) a = 0.001; (d) and (e) show the improvement in MTF due to the application of the restoration filters Hr corresponding to (b) and (c).

The first estimate of WF is obtained from the observed image, by:

W�HF(0) = |G|^2   (5.10)

From this we formulate a restoration filter

Hr(k) = H* ŴF(k) / ( |H|^2 ŴF(k) + WN )   (5.11)

This filter is used to obtain an estimate of the restored image

F̂(k) = Hr(k) G   (5.12)

Now a new estimate of WF can be obtained:

ŴF(k+1) = |F̂(k)|^2   (5.13)

The iteration may be stopped based on some convergence criterion. It was found effective enough to arbitrarily stop the procedure after a few iterations.

5.6 The Constrained Least Squares Filter

The classical Constrained Least Squares deconvolution procedure seeks to minimize the objective function:

||Q F̂||^2 + λ ||G - H F̂||^2   (5.14)

where Q is a two-dimensional high-pass filter and λ is the Lagrange multiplier. The first term in the objective function imposes a smoothness criterion by minimizing the high-frequency components of the estimate of the restored image, F̂. The second term aims at a least-squares fit to the observed image G. The Lagrange multiplier, which is related to the extent of the noise in the observation, is treated as a design parameter and adjusted empirically for 'best' results. It is well known that minimization of the above objective function yields the following Constrained Least Squares deconvolution filter [96]:

Hr = H* / ( |H|^2 + γ |Q|^2 )   (5.15)

where γ is a constant inversely proportional to the Lagrange multiplier.
Following the conventional approach, the Laplacian operator, approximated by

q(x, y) =  [ 0   1   0
             1  -4   1
             0   1   0 ]

was chosen to enforce the smoothness constraint. Therefore

Q(wx, wy) = -1 + 0.5 cos(wx) + 0.5 cos(wy)   (5.16)

5.7 Results

In order to compare the performances of the various restoration filters quantitatively, it is necessary to have access to the 'true' latent image. For this purpose a phantom resolution chart was first imaged. The phantom image contains both high and low frequency objects of varying contrasts. This image was considered to be our 'latent' image f(x, y). The image was blurred with the system transfer function, and independent white Gaussian noise of standard deviation σn = 5 or 10 grey levels was added to it, to form the 'observed' blurred and noisy image g(x, y) or g1(x, y) respectively. Each image was then filtered with the inverse filter, two implementations of the Wiener filter, the iterative filter, and the Constrained Least Squares filter. In each case we used the error metric e defined as

e = [ (1/M^2) Σi Σj ( f(i, j) - f̂(i, j) )^2 ]^(1/2)   (5.17)

where the images are of size M x M. While it is true that this metric does not correctly capture the visual quality of the images, it is mathematically simple to implement and is in common use. Table 5.1 gives a summary of the results.

  Filter              Parameter       Restoration Error e
  None                                      36.5
  Inverse                                   98.1
  Wiener              SNR=1                 42.4
                      SNR=80                27.1
                      SNR=100               26.7
                      SNR=1000              53
  Wiener2             σn = 5                73.3
  Iterative Wiener    σn = 5                44.2
  Constrained         γ = 0.001             45.2
  Least Squares       γ = 0.01              24.7
                      γ = 0.1               31.8
                      γ = 1                 37.6

Table 5.1: Comparison of restoration filter performances

In this table Wiener2 refers to the cascade of a Wiener smoothing filter (WG - WN)/WG and an inverse filter, as described in section 5.5 above.

A sample of images is given in Figure 5.3. In this figure Wiener3 refers to the iterative implementation of the Wiener filter. It can be seen that the restoration filters can be optimized to sharpen the image detail, with little degradation due to the amplified noise.
Since a large portion of the phantom consists of flat areas with nearly constant grey levels, the amplification of the noise causes the error e to be larger in some cases than the error in the unrestored image. Judging visually, five of the output images show considerable improvement in visualization of image detail, with the Constrained Least Squares filter with γ = 0.01 producing the best result. This observation is consistent with the numerical results of Table 5.1.

Figure 5.3: Restoration of test images: f is the latent image, fb is the blurred image, g and g1 are the observed images; all other images are the results of restoration by various filters.

Figure 5.4 shows the effect of the Wiener filters on the actual image of a test phantom obtained by the two-dimensional CCD digitizer. The object is placed at a distance far from the camera such that the optical image formed at the CCD plane contains substantial energy at frequencies near the limit of the resolution capability of the camera. A blurred image is obtained in this way while the camera is maintained at its best focus setting. No additional degradation was introduced this time, and the impact of processing the image with the Wiener filter was tested. A fixed noise to signal ratio (NSR) is assumed in the design of the Wiener filters.
The value of this constant was adjusted empirically to obtain the best restored image, as judged visually. In Figure 5.4 the outputs labeled Wiener1 and Wiener2 are produced by application of Wiener filters with constant noise to signal ratios of NSR = 0.01 and NSR = 0.001 respectively.

To compare the performance of the deconvolution filter with enhancement filters we include the effects of two high pass filters shown in Figure 5.4. Both of these filters are 3x3 convolutional filters with the following mask values:

            0  -1   0                -1  -1  -1
sharp1 =   -1   5  -1  ;  sharp2 =   -1   9  -1
            0  -1   0                -1  -1  -1

These edge sharpening filters were implemented in the spatial domain and therefore were computationally faster and more economical in their storage requirements. It can be seen that the Wiener filter with NSR = 0.001 produces the sharpest image, in which the four separate lines at the top half of the image are clearly distinguished.

The same filters were used on a database of 30 mammograms selected from diagnostic films of patients referred to the British Columbia Cancer Agency's Vancouver clinic. The mammograms represent typical cases containing clusters of low-contrast microcalcifications present in the normal breast parenchyma. The films were digitized with contiguous square pixels of 100 µm x 100 µm. Figure 5.5 shows the effect of the Wiener filter with NSR = 0.001 on a mammogram. In Figure 5.5a the observed digitized image is shown. Figure 5.5b gives the result of the application of a different Wiener filter which restores the image from the effects of the digitizing system only, i.e. only the MTF of the camera is used in the derivation of the filter. Figure 5.5c is the result of a Wiener filter which restores the image from the combined effects of the digitizing system and the screen-film combination. The sharpening effect of the filter is clearly visible, as are the characteristic textured patterns in the background due to the amplification of the system noise.
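The two masks can be applied with a plain spatial-domain convolution, as sketched below. The edge-replication border handling is an assumption for illustration; the thesis does not specify how the borders were treated.

```python
import numpy as np

# The 3x3 edge-sharpening masks sharp1 and sharp2 from the text.
SHARP1 = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=float)
SHARP2 = np.array([[-1, -1, -1],
                   [-1,  9, -1],
                   [-1, -1, -1]], dtype=float)

def convolve3x3(img, mask):
    """Spatial-domain 3x3 convolution with edge-replication padding
    (an assumed border policy). Both masks are symmetric, so
    convolution and correlation coincide here."""
    p = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += mask[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out
```

Both masks sum to one, so flat regions pass through unchanged while grey-level transitions are amplified; this is why the filters sharpen edges without shifting the overall density level.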
The microcalcifications associated with the primary tumor can be resolved better, and several 'satellite' microcalcifications are now visible. It can be seen from Figure 5.5 that the effects of the camera MTF at the sampling interval of 100 µm cannot be neglected.

Figure 5.4: A portion of a standard test phantom imaged at a far distance near the resolution limit of the camera. Results of restoration by the two Wiener filters: Wiener1 with NSR=0.01 and Wiener2 with NSR=0.001, and enhancement by 3x3 convolutional masks sharp1 and sharp2 as defined in the text.

Figure 5.5: Restoration of a mammogram: (a) The observed image; (b) Restored image using a Wiener filter (with NSR=0.001) compensating for the camera MTF; (c) Restored image using a Wiener filter (with NSR=0.001) compensating for the combined system MTF.

In summary, in this chapter we have reported on our implementation of a number of restoration algorithms. The performances of these filters with various parameters were compared. The effects of both the screen-film combination and the digitizing equipment were considered. The restored images are sharper and assist in better visualization of the image detail.

Chapter 6

Restoration in the Presence of Signal-Dependent Noise

The work described in Chapter 5 assumes a globally stationary image with signal-independent additive noise. In this chapter the restoration of mammographic images is extended to include the signal-dependent nature of radiographic noise.

We consider a non-stationary image model and signal-dependent noise of photonic and film-grain origins. Both the camera blur and the MTF of the screen-film combination are considered. The camera noise is minimized through averaging and background subtraction as described in Chapter 3.
The signal-dependent nature of the radiographic noise is modeled by a linear shift-invariant system, and the relative strengths of the various noise sources are compared. We investigate the application of two locally adaptive image smoothing filters to improve the signal to noise ratio of digitized mammogram images. To minimize the effects of the system blur, a deconvolution filter is then applied in conjunction with these smoothing filters, resulting in better visualization of image details.

The deconvolution filter is based on the Minimum Mean Squared Error (MMSE) criterion, while the smoothing filters utilize the Bayesian and the Wiener criteria. Of the two smoothing filters, the Bayesian estimator is found to outperform the adaptive Wiener filter. The filters are implemented in a real-time processing environment as part of our MAMPRO mammographic image acquisition and analysis system.

The objective of this approach is to facilitate the detection of microcalcifications at the earliest stage of their formation. In section 6.1 we characterize the image formation system, and in section 6.2 we derive an image observation model. In section 6.3 different image restoration procedures are discussed and designed so as to minimize or reduce the effects of the different blurs and noise degradations.

6.1 Image Formation System

The system degradations are considered to originate from the cascade of two imaging systems, as described in Chapter 5. In Chapter 5 we assumed a globally stationary image with additive Gaussian noise independent of the signal. In this chapter we study and develop a model to describe the formation of digitized mammographic images with special attention to the signal-dependent nature of the noise. In order to find a more realistic image degradation model, the different components of the image formation system are first examined.
First there is the X-ray image formation system, which produces the image recorded on the film. The X-ray system is followed by the digitizing optoelectronic camera, which produces the digital image matrix. We will first consider these two systems separately and then, in section 6.2, combine them to formulate an overall image observation model.

6.1.1 The X-ray Screen-Film Imaging System

Radiographic images suffer from a number of degradations which are inherent in the image formation system. A block diagram of radiographic image formation is given in Figure 6.1. These degradations may be broadly divided into the two categories of blur or unsharpness, and noise or statistical fluctuations in image intensity.

The Image Blur:

The image blur is due to the following four sources: a) the X-ray source; b) the geometry of imaging; c) the beam scattering by the subject; and d) the image detection and display components. The illumination received at the screen is non-uniform due to the X-ray source and the geometry of the imaging. The largest amount of exposure is received along the central beam, with the quantum fluence decreasing in proportion to cos³θ, where θ is the angle of inclination from the normal [98]. This effect is modulated by the 'heel' effect of the anode.

Figure 6.1: The radiographic image formation model

Additionally, the focal spot on the anode of the X-ray tube has a finite size, creating penumbral shadows in the image. The extent of this shadow depends on the object-film distance, and it affects the spatial resolution limit of the image. It is also this effect that limits the extent of useful magnification views to about two times only [99]. We can model this effect by a two-dimensional convolution of the point spread function corresponding to the effective focal spot aperture with the image.

The X-ray scatter in the breast reduces the contrast of the image. This is a major source of degradation.
Breast compression devices and vibrating anti-scatter grids are used to reduce this effect. Nevertheless, X-ray scatter remains a major limitation. The overall effect of these limitations can be partially evaluated in practice by comparing a pre-operative mammogram of a patient who has undergone biopsy with the specimen radiograph. In specimen radiography the object-film distance is reduced and much of the breast mass responsible for beam scatter is absent. Additionally, higher doses are employed, leading to better contrast and reduced input quantum noise. The specimen is also centrally located, reducing the effects of geometrical distortions.

The screen-film combination as a detector has been studied extensively [97]. The intensifying fluorescent screen absorbs the X-ray photons and radiates many more photons in the visible range of wavelengths, which expose the film. The X-ray absorption and re-radiation efficiency of the screen η1, as well as the photon absorption efficiency of the film η2, contribute to the overall contrast for a given dose to the patient. The amplification and scattering mechanisms of the screen are stochastic in nature. Rabbani [100] has shown that the uncorrelated component of the quantum noise passes through the screen unaltered, while the correlated component is filtered by the system contrast transfer function. The amplification m is described by its probability distribution function Pr(m), having mean m̄ and variance σm². The scattering process can be modeled by a two-dimensional linear convolution operation and forms the major component of the screen-film Modulation Transfer Function (MTF).

For contrasts below about 6%, the system noise is the limiting factor in the visibility of the image details, while the system MTF has little effect on it [101]. For higher contrasts, however, the MTF contributes significantly to these limitations.
This fact indicates that in digitized mammograms, restoring the image from the effects of the system MTF may result in better visualization of smaller objects.

Finally, the response of the film to the incident photons is non-linear. This non-linearity is described by the D − log E characteristic curve. Although this is commonly written as D = γ log E + β, we note that both γ and β are functions of the exposure level E. The contrast transfer function of the screen-film combination is thus a function of both exposure (due to the characteristic curve) and frequency (due to the MTF).

Image Noise

The process of X-ray image formation is also associated with noise, which is generated by four sources. These noise sources are: a) the quantum noise due to the discrete nature of the X-ray photons; b) the screen mottle due to the stochastic nature of amplification and scattering; c) the screen structure noise due to its inhomogeneous phosphor coating; and d) the film grain noise of the emulsion coating. The screen structure noise is generally considered to contribute less than 2% to the overall noise [97].

In Figure 6.1, the X-ray quantum fluence Qij in the (i,j)th pixel contains spatial non-uniformities due to the geometry of imaging and the inherent photon noise, which has a Poisson distribution. The effect of focal spot size is modeled by convolution with its aperture function, and that of the X-ray scatter is represented as an effective low pass filter. The effects due to the cos³θ term are quite small, since for the typical geometries involved θ is less than 5°.

We use the Kodak Min-R screen together with a Kodak Ortho-M film. The densitometric data, MTF, and other parameters of this screen-film combination have been reported by Bunch [97]. Details of this model are further described in section 6.2.

6.1.2 The CCD Camera

In addition to the X-ray system, the digitizing camera also blurs the image.
The finite size of each pixel gives the aperture function and provides the theoretical limit to the MTF of the camera. The lens contribution to the blur can at best be limited to the diffraction properties of the optical components employed. As described in Chapter 3, we use a two-dimensional CCD (Kodak KAF-1400) with 100% fill factor and 1035 x 1320 square pixels of 6.8 µm per side. This gives an MTF of the form sinc(ωx)sinc(ωy). This function is multiplied by the MTF of the lens. We used a Nikon Nikkor 55 mm lens which, under best focus conditions, has a minimum MTF of 0.6 at the Nyquist sampling frequency of our sensor.

The MTFs of the camera and of the screen-film combination are shown in the same plot in Figure 6.2. The spatial frequency axis refers to the frequency content of images formed on the film and on the CCD planes, respectively. When comparing these two MTFs we note that the two curves should refer to the same imaging plane. If the geometry of imaging and the focal length of the lens are chosen such that a 'life size' image is formed at the CCD, then the two MTF curves are directly comparable. Under these conditions we note that the screen-film combination (and not the digitizing camera) is the limiting factor in spatial resolution. The field of view is then restricted to the area of the CCD, i.e. a mere 7 mm x 9 mm. As we increase the field of view, the effective camera MTF referred to the object (mammogram) plane deteriorates.

In addition to the noise present on the developed film, the digitizing system adds another noise element. This noise element is due to the following four sources: a) the illumination source; b) the lens; c) the CCD and its associated amplifier; and d) the analogue to digital converter. Figure 6.3 gives a block diagram of the CCD image formation from the mammogram.
We combine these four sources and divide the overall noise present in the data into two types: the fixed pattern noise, such as the optical shading of the light source and the aberrations of the lens, and the random noise of optical and electronic origin. The fixed pattern noise can generally be corrected by image calibration, while the random component of the camera noise may be reduced by averaging. The final calibrated image therefore will contain a small amount of observation noise, which is due to the camera and uncorrelated with the image, and a more significant radiographic noise which is signal-dependent.

Figure 6.2: The Modulation Transfer Functions

6.2 Image Observation Model

It is customary in the image restoration literature to consider the image observation model to be linear. In a commonly used model the observed image, g, is considered to be the result of linear convolution of the latent image, f, with a blur function, h, and the addition of independent, zero-mean, white, Gaussian noise. While this model has the advantage of mathematical tractability, it is an over-simplification of the actual situation.

We have employed this model and have shown in Chapter 5 that improvement in the appearance of digitized mammographic images is possible. In particular, the blur and noise contributed by the digitizing camera can be modeled in this form. The X-ray image formation, however, cannot be modeled in this way. Specifically, radiographic noise is strongly signal-dependent, and the film density vs. log exposure characteristic curve introduces non-linearity in the convolution term.

We formulated an image observation model for the X-ray system, as shown in Figure 6.4, based on the physical description of the system given in Figure 6.1.
In this model the weakening of the off-axis rays may be compensated for by a pointwise normalization of each pixel value by the cos³θ term. The input is the number of X-ray quanta received at the screen per pixel, and the output is gij, the observed optical density of each pixel. We have included three sources of blur in this model: h_focalspot, h_scatter, and h_screen. There are also three scaling factors: m̄ is the mean screen amplification ratio, and η1 and η2 are the absorption efficiencies of the screen and the film respectively. The three noise sources represent n1, the correlated, and n2, the uncorrelated, components of the input quantum noise, and n3, the associated film-grain noise.

This model may be simplified by making the following observations. The scalar factors may be taken out of the system block diagram and reflected at the input. The blur due to the focal spot size is a function of the relative distance of the lesion of interest and the screen. This is of course not known a priori, and for the cases where the object of interest is in contact with the screen there are no blur contributions from the focal spot. Finally, we assume that the X-ray scatter in the subject is largely absorbed by the vibrating grid.

Figure 6.3: The digitizing camera image formation model

Figure 6.4: A model of radiographic image formation
An effective grid system ensures that the scattered photons are not absorbed by the screen, and therefore the scatter can be modeled by a gain factor independent of spatial frequency.

Based on these considerations, a simplified version of the above model is given in Figure 6.5. In this model we have ignored the effects of the weakening of the off-axis rays, the focal spot size, and the X-ray scatter in the subject.

In Figure 6.5 the 'ideal' image is the number of light quanta, qij, absorbed by the film in each pixel (i,j):

f = qij = η m̄ Qij   (6.1)

where Qij is the number of X-ray quanta received at the screen per pixel, m̄ is the mean screen amplification ratio, and η = η1 η2, i.e. the combined absorption efficiency of the screen-film combination. Note that in Figure 6.5 the images f, f1, and f2 are in the exposure domain while f3 and g are in the optical density domain. P is the non-linear, D − log E, characteristic function of the film.

The noise sources n1, n2, and n3 in Figure 6.5 are zero-mean additive signal-dependent white noise sources uncorrelated with each other. The effects of the scalar factors of Figure 6.4 are now incorporated in the magnitudes of these sources. Specifically, n2 is the uncorrelated component of the Poisson noise of the X-ray source and is generated as in Figure 6.6:

n2 = sqrt(f1) · n′   (6.2)

where n′ is a zero-mean unit-variance Gaussian random variable, and f1 is a random variable with Poisson probability distribution whose expected value is f. The noise component n1 is the amplified noise due to the screen amplification fluctuations:

n1 = k1 sqrt(f) · n′   (6.3)

Figure 6.5: A simplified model of radiographic image formation

It is generated according to the block diagram of Figure 6.7. The gain factor is

k1 = sqrt( m̄ (1 + ε/m̄) )   (6.4)

where ε is the excess Poisson noise.
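The generation of the two quantum noise terms (Figures 6.6 and 6.7) can be sketched as follows. This is an illustrative helper, not the thesis code: it samples n1 and n2 for an exposure-domain image f, approximating the Poisson variance by the mean, with the screen parameters m̄ = 284 and ε = 112 of our screen-film combination [97] as defaults.

```python
import numpy as np

def quantum_noise(f, m_bar=284.0, eps=112.0, rng=None):
    """Sample the uncorrelated quantum noise n2 (eq. 6.2, Figure 6.6)
    and the amplified screen mottle n1 (eqs. 6.3-6.4, Figure 6.7).

    f     : 'ideal' exposure-domain image (expected photon counts)
    m_bar : mean screen amplification ratio
    eps   : excess Poisson noise of the screen
    """
    rng = np.random.default_rng(rng)
    # n2: Poisson fluctuations approximated as sqrt(mean) * Gaussian
    n2 = np.sqrt(f) * rng.standard_normal(f.shape)
    # n1: the same fluctuations amplified by the screen gain k1 (6.4)
    k1 = np.sqrt(m_bar * (1.0 + eps / m_bar))
    n1 = k1 * np.sqrt(f) * rng.standard_normal(f.shape)
    return n1, n2
```

Because k1 is on the order of 20 for these parameters, the amplified mottle n1 dominates n2 by more than an order of magnitude, which anticipates the simplification made below.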
The film-grain noise is represented by n3:

n3 = k3 f^β n′   (6.5)

with

k3 = sqrt(a/Δ²) · log10 e   (6.6)

where a is the average film-grain area and Δ is the sampling interval, i.e. the pixel size. For optimally exposed film, β is taken to be 0.5.

From Figure 6.5 we can write the following image observation equations:

g = P[ h_sf * (f + n1) + n2 ] + n3   (6.7)

g = P[ h_sf * f + (h_sf * n1 + n2) ] + n3   (6.8)

where h_sf is the point spread function of the screen-film combination and * signifies linear convolution. Using the constant parameters for our screen-film combination (η ≈ 0.58, m̄ = 284, ε = 112, and the MTF and densitometric data as published in [97]) we note that n2 is at least two orders of magnitude smaller than either n1 or n3. The latter two quantities are of comparable magnitudes. We will therefore ignore n2 from now on and write:

g = P[ h_sf * (f + n1) ] + n3   (6.9)

The problem of film non-linearity may be handled in any one of the following three ways: i) the function P may be explicitly incorporated in the restoration filter; ii) the processing may be done in the exposure domain, where a linear relationship exists; or iii) a small-signal model may be used to derive a linear equation. Since mammographic images are of very low contrast, we will use the small-signal analysis assumption.

Figure 6.6: Generation of the noise source n2

Figure 6.7: Generation of the noise source n1

If a linear approximation to the above non-linear
n4 is now the total radiographic noise.This relation can be combined with the linear convolution model of digitization by theCCD camera. We therefore consider that first the ‘ideal’ image, f(i,j), is blurred by thesystem impulse response, h(i, j), and a small amount of uncorrelated camera observation noise,flc(,j) is added to it to produce the blurred image fb(i,j). Subsequently the signal-dependentradiographic noise n4(i,j) is added to it, which results in the final observed image9(i,j):f&(i,j) = h(i,j) * f(i,j) + n(i,) (6.12)g(i,j) = f&(i,j) +n4(i,j) (6.13)The system impulse response h is due to the combined effect of the camera and the screen-filmsystem, i.e.:h(i,j) =h3f(i,j) * h(i,j) (6.14)Note that the additive noise model above is not restrictive since any multiplicative noise can bereformulated as additive signal-dependent noise. We consider n, to be a white noise field withzero-mean Gaussian distribution.6.3 Image RestorationWe will use the image formation model of equations (6.12) and (6.13). According to thismodel the observed image was formed in two steps. The latent image f, was first blurred(with the addition of n) to form fb, and then contaminated by signal-dependent noise n4. Wetherefore divide the restoration problem into two steps. In the first step we apply smoothingChapter 6. Restoration in the Presence of Signal-Dependent Noise 78techniques using local statistics to ‘clean-up’ the image from the signal-dependent noise n4. Inthe second step we apply a classical Wiener filter to deblur the image from the combined effectsof the system’s Modulation Transfer Function and the noise n. The following two criteriaof optimality are used in the first step in deriving two smoothing filters: the minimum meansquared error (MMSE), and the maximum a posteriori (MAP) joint probability.The required image smoothing may be achieved using either the global image or the localimage approach. 
In the first approach the image is assumed to be a stationary random process. In Chapter 5 a Wiener smoothing filter was employed, assuming the system noise is a white Gaussian process independent of the image grey levels. This is a non-adaptive global approach in which the MMSE criterion of optimality is applied over the whole image.

In this chapter we study the more realistic of the two approaches, i.e. the one which uses local processing. We postulate that the breast image is non-stationary due to the presence of structure in the parenchymal pattern. This is particularly true of mammograms containing microcalcifications or masses. Additionally, the blur process is a local operation, and therefore it is reasonable to expect that the restoration should also be performed locally.

The grey level histogram of a complete mammogram is commonly not Gaussian. If we subtract the local mean from each pixel, however, the resulting image grey levels have a nearly normal distribution. Therefore in this work we consider the image model to be Gaussian with non-stationary mean and non-stationary variance (NMNV).

6.3.1 Local Adaptive Wiener Smoothing Filter

For the NMNV model [102] the image is considered to be Gaussian only locally, in small neighborhoods. Using the MMSE criterion locally leads to an adaptive local linear minimum mean square error (LLMMSE) filter. The estimated pixel value f̂b [103] is:

f̂b = ḡ + [ (σg² − σn4²) / σg² ] (g − ḡ)   (6.15)

where ḡ and σg² are the local mean and variance of the observed image, and σn4² is the noise variance. The required local statistics are estimated from the observed image, and a priori
At the otherextreme if all of the observed power is due to noise (i.e. u u4) then the best estimate ofthe signal is the local average.The noise power is estimated from a knowledge of the noise model. The total radiographicnoise n4 is the sum of the film-grain noise n3, and the noise due to the quantum mottle. Thenoise power U4 can therefore be readily estimated for each pixel.For the quantum mottle n1, we observe that the underlying noise process is due to thediscrete nature of the X-ray photons and therefore has a Poisson probability distribution. Thegrey level fb(i,j) for the pixel (i,j) is related to the number of photons incident on it. The greylevel has a code value between zero and 255 which is a linear quantization of the film densityD(i,j), and the film density is a known function of the exposure. Therefore the number ofincident photons can be calculated for each pixel. Since the mean and variance of a Poissondistribution are equal, the exposure va’ue will directly determine the noise power a1.Finally, since the two components of the radiographic noise are independent of each otherthe combined noise power can be obtained asu1 MTF 2 (6.16)6.3.2 Bayesian smoothing filterThe maximum a posteviori (MAP) filter for the above case of the non-stationary mean andnon-stationary variance (NMNV) image model and Poisson noise has the form [104]:-1fb= 2 (6.7)where the average power of the blurred image u6 is obtained from the observed imageChapter 6. Restoration in the Presence of Signal-Dependent Noise 80= max[(u— 4), 0]. (6.18)6.3.3 The Deconvolution FilterAfter the application of one of the above smoothing filters, the resultant image f, is an estimateof the blurred image f& (equation 6.13). A modified Wiener restoration filter was selected toobtain f, an estimate of the ideal image:P(w,.,w) = H,.(w,w).Fb(w,w71 (6.19)II*(w w)H,.(w,LJ) I H(w,w,) (6.20)where F6(w,w), and H(w,w) are Fourier Transforms off, fb, and h respectively,and * is the complex conjugation. H,. 
is the reconstruction filter in the frequency domain andis a measure of the noise to signal power ratio.6.4 ResultsWe implemented both the LLMMSE filter (equation 6.15) and the MAP filter (equation 6.17),followed by the deconvolution filter (equation 6.20). We note that the application of these filtersmay or may not be required depending on our operating conditions ( such as the samplinginterval ) and also depending on the application in mind. For example in any reading of amammogram where attention to fine spatial and photometric details is not required, imagedeconvolution will not be necessary. An example of this is when mammograms of the left andright breast of the same woman are being examined for the detection of bilateral asymmetry.In such cases large pixel sizes ( e.g. 200 m per side ) may be used which leads to smaller, andtherefore more manageable images. The radiographic noise will also be sniafler in these imagesobviating the need for image smoothing.We evaluated each of the above three filters individually and also as combinations. Thetwo smoothing filters may be used individually in cases where radiographic noise is judgedChapter 6. Restoration in the Presence of Signal-Dependent Noise 81to be the limiting factor in interpreting the films. The deconvolution filter may be employedalone in cases where the image blur is the principal consideration and oniy a small amount ofuncorrelated noise is present. This would be the case when a relatively large sampling interval(i.e. pixel size ) is utilized. The cascade of adaptive smoothing and deblurring filters shouldbe applied in cases where both radiographic noise and image blur are present.We have chosen visual assessment of processed images to determine image quality. Variousquantitative measures, such as MMSE have also been proposed in the literature. 
Calculation of the MMSE, however, requires knowledge of the ideal image and may be performed using simulated degradations.

To evaluate the effectiveness of each of the two smoothing filters, we considered a portion of a digitized mammogram containing a cluster of microcalcifications suspicious of malignancy. Radiographic noise was then added to this image. Although this exaggerates the total amount of noise present in a mammogram, it enables us to assess the performance of the smoothing filters more readily. It also represents a mammogram obtained under less than ideal imaging conditions. A 5x5 square window was used to calculate the local statistics. The performance factor P for each filter is defined as the square root of the ratio of the average noise power before smoothing to the average noise power after smoothing [104]:

P = σb / σa   (6.21)

σb² = (1/M²) Σi,j ( g(i,j) − fb(i,j) )²   (6.22)

σa² = (1/M²) Σi,j ( f̂b(i,j) − fb(i,j) )²   (6.23)

Table 6.1 shows the advantages of our noise compensation procedure and gives a summary of the performance factor for each filter. The extent of the noise was controlled by a constant multiplier factor, λ, in these experiments. The cases of λ = 1, 0.5, and 0 correspond to 'severe', 'moderate', and no noise respectively.

Finally, to examine the effect of the combined operation, we processed 30 images from our database. Following smoothing, each smoothed image was deblurred by the Wiener filter. In this implementation we again considered the signal to noise ratio to be a constant independent of the spatial frequency of the image. We used a signal to noise power ratio of 20 in the deconvolution filter.

The images were displayed on a high resolution (1024 x 1280 pixels, colour) monitor, and a radiologist and an image processing engineer reviewed the images. In all cases the processed images were sharper and revealed greater amounts of image detail. Generally the noise content of the images was also increased.
This, however, did not interfere with the identification of microcalcification clusters. The opinion of the reviewers was that the application of these processing steps normally assisted in better visualization of image detail.

Figure 6.8 presents a typical image at various processing stages. Here the mammogram contains 'real' (i.e. not simulated) radiographic noise. The results of the deconvolution both before and after smoothing of the mammograms are shown. Figure 6.8a is the observed noisy and blurred mammogram, and Figure 6.8d is its restoration using the Wiener filter but without any prior smoothing. The amplification of the noise obscures much of the detail of this image. Figure 6.8b is the result of processing the image of Figure 6.8a with the LLMMSE smoothing filter, and Figure 6.8e is its restoration using the Wiener filter. Clearly the image detail has been sharpened without any significant gain in the noise. Figure 6.8c is the result of smoothing of Figure 6.8a with the MAP filter, and Figure 6.8f is the deblurring of Figure 6.8c with the Wiener filter. The beneficial effects of these noise smoothing and detail sharpening filters are clearly visible. The visual appearance of these images is consistent with the measured values of the filter performance factors. Since the MAP filter shows higher performance factors, this filter was implemented as part of the real-time mammographic image acquisition and analysis system.

Table 6.1: Filter performance factors

            Moderate Noise   Severe Noise
LLMMSE          1.63             1.47
MAP             1.76             1.63

6.5 Summary

In summary, radiographic images suffer from both signal-dependent noise and system blur. We have designed and implemented locally adaptive smoothing filters to reduce the effect of the noise. The smoothed images are then subjected to a deblurring algorithm to reduce the effects of system blur.
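The locally adaptive smoothing and the performance factor of equation 6.21 can be sketched as follows. This is an illustrative implementation, not the thesis code: the gain is computed from 5x5 local statistics as in section 6.4, but the signal-dependent radiographic noise model is approximated here by a constant noise variance, and the function names are our own.

```python
import numpy as np

def llmmse_smooth(g, noise_var, win=5):
    """LLMMSE-style adaptive smoothing: shrink each pixel toward its local
    mean by a gain derived from local variance estimates.
    Assumes noise_var > 0; the true filter uses signal-dependent noise."""
    r = win // 2
    gp = np.pad(g, r, mode='reflect')
    out = np.empty_like(g, dtype=float)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            w = gp[i:i + win, j:j + win]
            sig_var = max(w.var() - noise_var, 0.0)   # estimated signal variance
            gain = sig_var / (sig_var + noise_var)
            out[i, j] = w.mean() + gain * (g[i, j] - w.mean())
    return out

def performance_factor(noisy, smoothed, clean):
    """Eq. 6.21: sqrt of (noise power before smoothing / noise power after),
    computable here because noise was added to a known reference image."""
    before = np.mean((noisy - clean) ** 2)
    after = np.mean((smoothed - clean) ** 2)
    return np.sqrt(before / after)
```

A performance factor greater than one indicates that smoothing reduced the average noise power, as in Table 6.1.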
The resulting images improve the visibility of subtle signs of abnormality and thus help the earlier detection of breast cancer.

Figure 6.8: Adaptive smoothing and deconvolution of a digitized mammogram: (a) observed image, (b) smoothed by LLMMSE filter, (c) smoothed by MAP filter, (d,e,f) result of Wiener deconvolution of images (a,b,c), respectively.

Chapter 7

Reduction of Boundary Artifacts in Image Restoration

In this chapter we investigate the problem of restoration of large images. When a mammogram is digitized to high resolutions, it becomes computationally very expensive to process the whole image at once. When the image is subdivided into smaller images, artifacts are introduced due to boundary truncation effects. We analyse these effects and suggest a simple novel approach to overcome them.

The abrupt boundary truncation of an image introduces artifacts in the restored image that may be visually objectionable. These artifacts are particularly severe when the restoration filter contains significant high frequency components, which is usually the case. The traditional solution is to smooth the image data using special window functions, such as Hamming or trapezoidal windows, before applying the restoration filter. This method improves the results but still distorts the image, especially at the margins. Instead of the customary 'linear' convolution of the image with the restoration filter we examine a different procedure. This procedure is simple and exploits the natural property of 'circular' or periodic convolution of the Discrete Fourier Transform. Instead of padding the image by zeros, it is padded by a reflected version of itself. This is followed by 'circular' convolution with the restoration filter. This procedure is shown to lead to much better restoration results than the windowing techniques.
The computational effort is also improved since our method requires half the number of computations required by the conventional linear deconvolution method.

7.1 Image Restoration Artifacts

To remove certain well characterized degradations from images, image restoration techniques are employed. These however generally introduce various artifacts of their own in the image. Several authors have studied these artifacts. Tekalp and Sezan have identified and detailed four types of image restoration artifacts, namely the filtered noise, the filter deviation, the Point Spread Function error and the boundary truncation artifacts [105]. In particular, the artifacts associated with the image boundary truncation can dominate the restored image under certain conditions. Woods was the first to discuss the boundary truncation artifact [106]. White and Brzakovic have considered the problem of image extension to minimize errors in convolution of images with spatial masks [107]. Tan, Lim and Tan have discussed the boundary artifacts in image restoration, but they have not considered extending the image to reduce these artifacts [108, 109]. We analyze the boundary artifact problem in section 7.2, and proceed to suggest a solution to it in section 7.3. In section 7.2.1 we discuss the problems arising when linear deconvolution is used, and in section 7.2.2 we discuss the related problem of aliasing due to periodic convolutions. Section 7.3.1 considers the computational load of the new approach and we report the experimental results in section 7.4.

The image degradation model is usually assumed to be that of a linear convolution and additive noise. In this chapter we do not consider the noise and concentrate on the deblurring. Even though the omission of the noise may not hold in actual situations, this analysis serves as an appropriate starting point due to its mathematical tractability.
This image restoration model is schematically depicted in Figure 7.1.

Consider an image of a scene obtained by a photographic camera. The observed image g1(x, y) is cut from the background due to the finite aperture of the image acquisition device. Since the reconstruction filter is usually a convolutional filter, and not a point-wise operation, the restored image f̂(x, y) will contain artifacts due to the abrupt truncated boundaries in g1(x, y).

Figure 7.1: The image degradation and restoration model in the absence of noise.

This problem is of special relevance in the restoration of large images, where a large image is subdivided into subimages and each subimage is restored separately. It is well known that the distribution of intensities in a large image is generally not Gaussian. Typical images of common scenes have non-stationary mean and variance. It has been suggested that the removal of the local mean from the image will render the image Gaussian [112]. It is for this reason that the observed image is divided into smaller size subimages and then each of these subimages is restored separately. Additionally, the computational load of restoration is reduced by image subdivision. The boundaries of each subimage introduced by dividing the large image lead to undesirable artifacts.

The traditional solution to the image truncation problem has been to employ special windowing functions, such as Hamming or Hanning, to smooth the effect of truncation. This is followed by zero padding the observed image to the length of the restoration filter, and DFT operations are then used for deconvolution.
The zero padding is necessary to achieve linear deconvolution (convolution) since such DFT processes are by nature circular or periodic ones. Here we examine the effects of some simple measures to minimize the effects of these boundary truncation artifacts. We show that for this problem, the natural circular deconvolution (convolution) process of a DFT can be used to advantage in lessening the image truncation effects. We find that when the image is globally stationary, then padding with the image itself (instead of zeros) leads to better expected results. We then relax the condition of global stationarity to that of local stationarity and propose a simple padding method by which the results are further improved. Whenever convenient, and without loss of generality, we will illustrate our approach using one-dimensional images.

7.2 Problem description

Consider a long one-dimensional signal f(x) whose length is much greater than M, and its blurred version g(x). Let g1(x) be the observed truncated section of g(x). g1(n), the discretized version of g1(x), is limited to a width of M pixels. Assume the impulse response of the reconstruction filter h_r(n) has L_r non-zero terms. After processing g1(n) by h_r(n) the resulting image f̂(n) is of length L_r + M − 1. The first and last L_r − 1 terms of f̂(n) are affected by the boundary truncation artifact and only the middle M − L_r + 1 terms are true estimates of f(n). If L_r << M and if h_r is a smoothing function, i.e. a low pass filter, then the boundary artifact has the effect of smoothing a band L_r − 1 pixels wide around the image. This effect is usually of little visual objection.

It is common however for the degrading function to be a blurring one, i.e. to be a low pass function, and therefore h_r(n) tends to be a high pass filter, i.e. the transfer function corresponding to h_r(n) will contain high frequency components.
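The role of the zero padding mentioned above, and the inherently circular nature of DFT-based convolution, can be checked numerically. The sketch below uses arbitrary example signals; with zero padding to length M + L − 1 the DFT product reproduces the linear convolution, and without it the wraparound appears.

```python
import numpy as np

g = np.array([4.0, 1.0, 3.0, 2.0, 5.0])      # observed samples (M = 5)
h = np.array([0.25, 0.5, 0.25])              # example blur kernel (L = 3)
N = len(g) + len(h) - 1                      # pad both signals to M + L - 1

linear = np.convolve(g, h)                   # true linear convolution
# DFT multiplication performs circular convolution; zero padding to length N
# removes the wraparound, so the result equals the linear convolution.
circular_padded = np.real(np.fft.ifft(np.fft.fft(g, N) * np.fft.fft(h, N)))

# Without padding, periodic wraparound corrupts the margins of the result.
circular_unpadded = np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(h, len(g))))
```
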
Moreover, the reconstruction filter is commonly designed and applied in the frequency domain, such as in inverse filtering, Wiener filtering and other associated techniques. Under these circumstances, and particularly due to sharp transitions in the transfer function of the reconstruction filter, the length L_r of the impulse response h_r(n) is likely to be large. The effects of a high pass filter on an image with sharp boundaries are the introduction of highly objectionable artifacts. If L_r ≥ M + 1 these artifacts spread over the whole image. Fig. 7.3 gives an example of this effect.

In Fig. 7.3 the degraded image is obtained by imaging a step edge with a CCD camera using a square aperture. The camera sensor has 1317 x 1035 pixels with 100% fill factor such that there are no gaps between adjacent pixels. Each pixel is 6.8 μm x 6.8 μm, giving a Nyquist bandwidth of 73 cycles/mm. The aperture function sets the theoretical limit of the Modulation Transfer Function (MTF). A further reduction to the camera bandwidth is contributed by the lens. We have determined the MTF of the digitizing camera experimentally and this is shown in Fig. 7.2. This MTF has positive non-zero values at all frequencies below the Nyquist limit (64 pixels in our case).

The reconstruction filter is a modified smooth Wiener filter which was designed from the knowledge of the camera Modulation Transfer Function. We used the following filter

H_r = H / ( H² + c )                                             (7.1)

where c = 0.01 gave the visually best restored image (Fig. 7.3b). The ringing artifacts produced by frequency domain filtering are clearly visible in the restored image of Fig. 7.3b.

Figure 7.2: Modulation Transfer Function of the digitizing camera.
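A filter of the form (7.1) can be applied in the DFT domain as sketched below. The Gaussian transfer function here merely stands in for the measured MTF of Fig. 7.2, and a periodic test signal is used so that no boundary effects arise; only the constant c = 0.01 is taken from the text.

```python
import numpy as np

N = 256
n = np.arange(N)
# smooth periodic test signal (two low-frequency harmonics)
f = np.sin(2 * np.pi * 3 * n / N) + 0.5 * np.sin(2 * np.pi * 7 * n / N)

k = np.fft.fftfreq(N)                        # normalized frequencies
H = np.exp(-(k / 0.05) ** 2)                 # illustrative real, positive MTF
g = np.real(np.fft.ifft(np.fft.fft(f) * H))  # circularly blurred observation

c = 0.01                                     # regularizing constant of eq. (7.1)
Hr = H / (H ** 2 + c)                        # modified smooth Wiener filter
f_hat = np.real(np.fft.ifft(np.fft.fft(g) * Hr))
```

As c tends to zero the filter tends to the inverse filter 1/H; the small constant limits the high-frequency gain, which matters once noise is present.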
Figure 7.3: Restoration of a step edge acquired by a digitizing photographic camera: a) the observed image of the step edge; b) the restored image using linear deconvolution, notice the boundary truncation artifact; c) the restored image of the step edge, a trapezoidal window was applied to the data prior to its deconvolution; d) the restored image of the step edge using the proposed approach.

7.2.1 Analysis of errors in using the conventional linear deconvolution

In this section we investigate the sources and nature of errors arising from the abrupt truncation of the image. To do that we will first consider the restoration of a blurred image in the absence of picture truncation. Assume there is no boundary truncation and the object f(n) is surrounded by a background of grey levels equal to zero:

{..., 0, 0, 0, f_0, ..., f_{R−1}, 0, 0, 0, ...}

Assume f(n), the finite impulse response h(n), and thus the resulting g(n), are of finite lengths R, L and M respectively (M = R + L − 1). Assume that

g(n) = {..., 0, 0, 0, g_0, ..., g_{R−1}, g_R, ..., g_{M−1}, 0, 0, 0, ...}

is completely observed; then

g(n) = h(n) * f(n)    0 ≤ n ≤ M − 1                              (7.2)
     = 0              otherwise

where * represents the linear convolution operation. To obtain f(n) we shall operate on g(n) by h_r(n) whose length is L_r. Since the length of g(n) is M, L_r is usually chosen to be M. In the rest of this chapter we assume L_r ≤ M.

f̂(n) = h_r(n) * g(n)    0 ≤ n < M + L_r − 1                      (7.3)

f̂(n) is of length M + L_r − 1 ≤ 2M − 1.

If we perform our computation using a DFT then we pad h, f, g and h_r by zeros (for example, as recommended by [110]) and take the DFT of size (M + L_r − 1) points. Without loss of generality, from now on we will take the N-point DFT, where N = 2M. We get

G(k) = H(k) · F(k)                                               (7.4)

Applying H_r(k) we get

F̂(k) = H_r(k) · H(k) · F(k)                                      (7.5)

and choosing H_r(k) as the inverse filter (assuming H(k) ≠ 0 ∀k) we obtain

F̂(k) = F(k)                                                      (7.6)
i.e. F(k) is recovered exactly.

In the above we studied the case when f(n) is of length R and g(n) is of length M = R + L − 1, and we took the N-point DFT (N = 2M). Suppose we now increase the length of f(n) but still apply N-point DFTs. We have two cases:

Case 1: Length of f(n) is > R but ≤ N − L + 1: here the length of g(n) is ≤ N and taking the N-point DFT will result in equation (7.4) as above.

Case 2: Length of f(n) is > N − L + 1. The resulting image g(n) is of length > N. Let us truncate g(n) = {0, 0, 0, g_0, ..., g_{N−1}, g_N, ...} and denote the segment of g(n) whose length is N by g_N(n) = {g_0, ..., g_{N−1}}. Let us denote the corresponding N pixels of f(n) by f_N(n) = {f_0, ..., f_{N−1}}. For 0 ≤ n ≤ N − 1 we get

g_N(n) = h(n) * f_N(n)    0 ≤ n ≤ N − 1                          (7.7)

Let the N-point DFTs of g_N(n) and f_N(n) be G_N(k) and F_N(k) respectively. We do not get

G_N(k) = H(k) · F_N(k)                                           (7.8)

but rather

G_N(k) = H(k) · F_N(k) + E_N(k)                                  (7.9)

where E_N(k) is an error sequence. Although the image formations of g(n) in (7.2) and (7.7) are similar, their N-point DFTs (7.4) and (7.9) are not the same because the length of g(n) is now greater than N. This is the first problem which we encounter when we truncate the image g(n).

The second error arises from circular convolution as follows. The finite aperture of the image acquisition device modulates the observed image. Since the aperture window w(n) is of size M (< N), the observed truncated image g1(n) is

g1(n) = g(n) · w(n)                                              (7.10)
      = g_N(n) · w(n)

where g1(n) is of length M (< N).

To restore the observed g1(n) we apply the restoration filter, h_r(n), of size L_r = M. To apply h_r(n), we pad each of g1(n) and h_r(n) by zeros and use the N-point (N = 2M) DFT. In what follows we let G1(k) and W(k) be the N-point DFTs of the zero-padded g1(n) and w(n) respectively. Since g_N(n) is of length N (> M), the N-point DFT of (7.10) results in the circular convolution

G1(k) = G_N(k) ⊛ W(k)                                            (7.11)

where ⊛ is the periodic (circular) convolution operator.
Thus, from (7.9) and (7.11) we conclude that in general

G1(k) = [ H(k) · F_N(k) + E_N(k) ] ⊛ W(k)                        (7.12)

The application of the inverse filter to G1(k) in (7.12) will not result in F̂(k) = F(k) as in (7.6), due to E_N(k) and the circular convolution with W(k). Even if E_N(k) = 0 (case 1 above), we will still have a circular convolution with W(k). Thus, it is clear from (7.12) that the exact recovery of F(k) via linear deconvolution is usually impossible.

The classical approach to the boundary problem has been to employ different smoothing windows, i.e. multiplying g1(n) in (7.10) by w(n)'s of different shapes so as to lessen the effects of the abrupt boundaries, and the literature has many examples of such windows [111]. Although this approach restricts the influence of the boundary effects, the results may still be objectionable.

In Fig. 7.3c we illustrate the use of a trapezoidal window. The same Wiener filter as in Fig. 7.3b is employed to restore the observed noisy step edge. The original image was multiplied by a trapezoidal window such that a band 16 pixels wide around the image was attenuated linearly while the central region of 32 x 32 pixels was unaltered. Clearly the step edge itself becomes sharper, reversing the blurring effect of the camera; however, the margins of the image are severely distorted.

So far we have analyzed the sources of errors associated with image truncation in restoration. The first problem is that of f(n) being too long, resulting in the length of g(n) being greater than N. When the N-point DFT is used this results in the model (7.9). The second problem is that of circular convolution (7.11) with W(k). In what follows we model the problem differently so that the effects of the second problem are avoided.

Specifically, let us select the M points {f_0, f_1, ..., f_{M−1}} from the long sequence f(n) and assume that the observed g1(n) (of size M) is due to the convolution of h(n) with {f_0, ..., f_{M−1}} plus the modeling errors.
Let us pad the M selected points of f(n) with zeros to length N to obtain the sequence f1(n); thus

f(n) − f1(n) = {..., f_{−2}, f_{−1}, 0, 0, 0, ..., f_M, f_{M+1}, ...}     (7.13)

Padding g1(n) with zeros to size N, the resulting g1(n) is

g1(n) = h(n) * f1(n) + e1(n)    for 0 ≤ n ≤ N − 1                (7.14)

Taking the N-point DFT we get

G1(k) = H(k) · F1(k) + E1(k)                                     (7.15)

Thus, unlike (7.12), by using the present model utilizing f1(n) of (7.13), we now have a simple additive error sequence E1(k). We show in appendix 7A that

E1(k) = Σ_{j=1}^{L−1} h(j) W_N^{jk} Σ_{n=1}^{j} [ f(−n) − (−1)^k f(M − n) ] W_N^{−nk}     (7.16)

for 0 ≤ k ≤ N − 1, where W_N = e^{−j2π/N}. In the space domain the error sequence e1(n) is

e1(n) = Σ_{j=0}^{L−1} h(j) [ f(n − j) − f1(n − j) ]    for 0 ≤ n ≤ M − 1     (7.17)
e1(n) = − Σ_{j=0}^{L−1} h(j) f1(n − j)                 for M ≤ n ≤ N − 1

Please note that in the above we assumed h(n) to be of the form {h_0, h_1, ..., h_{L−1}}. If however h(n) is symmetric and is represented in the form {h_{−(L−1)/2}, ..., h_0, ..., h_{(L−1)/2}} then equation (7.17) becomes

e1(n) = Σ_{j=−(L−1)/2}^{(L−1)/2} h(j) [ f(n − j) − f1(n − j) ]    for 0 ≤ n ≤ M − 1
e1(n) = − Σ_{j=−(L−1)/2}^{(L−1)/2} h(j) f1(n − j)                 for n < 0 or n ≥ M

Applying the restoration filter H_r(k) = 1/H(k) results in

F̂1(k) = H_r(k) · G1(k)                                           (7.18)
      = F1(k) + H_r(k) · E1(k)                                   (7.19)

And the error in the restoration is

E1^res(k) = F̂1(k) − F1(k)                                        (7.20)
          = H_r(k) · E1(k)                                       (7.21)

i.e. the error in restoration is given by (7.21) and (7.16).

7.2.2 Restoration by circular deconvolution

Consider again the degraded image g(n). Truncating g(n) to length M to obtain the observed g1(n) and padding the latter by zeros creates severe edges within the signal g1(n), which causes the errors in (7.12) or (7.14). Thus, instead of assuming the observed g1(n) is a truncated signal, let us assume g1(n) is a period in a periodic signal and investigate the effects of the periodic or circular deconvolution of this period with the reconstruction filter h_r(n). Let us denote two periods of g1(n) as g2(n).
g2(n) is thus the N-sample sequence

g2(n) = {g_0, ..., g_{M−1}, g_0, ..., g_{M−1}}

i.e.

g2(n) = g1(n) + g1(n − M)    0 ≤ n ≤ N − 1                        (7.22)

This model implies that the object resulting in g2(n) (compare with (7.13)) is formed of the two periods

f2(n) = {f_0, f_1, ..., f_{M−1}, f_0, f_1, ..., f_{M−1}}          (7.23)

i.e.

f(n) − f2(n) = {..., f_{−2} − f_{M−2}, f_{−1} − f_{M−1}, 0, 0, 0, ..., f_M − f_0, f_{M+1} − f_1, ...}     (7.24)

i.e. we assume g2(n) is the result of the circular convolution of f2(n) with h(n) plus errors

g2(n) = h(n) ⊛ f2(n) + e2(n)    0 ≤ n ≤ N − 1                     (7.25)

where ⊛ is the circular convolution operator.

Since g was assumed periodic with period M, studying two periods instead of one will have no effect on the results of computations, but it facilitates the comparison of the error sequences, which are now all of size N as in section 7.2.1. Applying the restoration filter h_r(n) whose length is L_r ≤ M we get

f̂2(n) = h_r(n) ⊛ g2(n)    0 ≤ n ≤ N − 1                           (7.26)

Since g(n) was originally formed by linear convolution of f and h, and f̂2 in (7.26) is produced by circular deconvolution, for the considered pixels

f̂2(n) ≠ f(n)    0 ≤ n ≤ M − 1                                     (7.27)

in general, due to the 'wraparound' effect of the periodic convolution.

Let G2(k) be the N-point DFT of g2(n); then from (7.25) we obtain

G2(k) = H(k) · F2(k) + E2(k)    0 ≤ k ≤ N − 1                     (7.28)

and applying the inverse filter H_r(k) we obtain

F̂2(k) = H_r(k) · G2(k)    0 ≤ k ≤ N − 1                           (7.29)
      = F2(k) + H_r(k) · E2(k)                                    (7.30)

and thus the error in the restoration is

E2^res(k) = F̂2(k) − F2(k)                                         (7.31)
          = H_r(k) · E2(k)                                        (7.32)

i.e. it is exactly the same as (7.21) except that here the term E1(k) is replaced by E2(k). Thus if E2(k) < E1(k) then we expect the error in restoration for this circular deconvolution case to be less than that of the previous linear deconvolution case.
Appendix 7B shows that

E2(k) = Σ_{j=1}^{L−1} h(j) W_N^{jk} Σ_{n=1}^{j} [ f(−n) − f(M − n) ] [ 1 + (−1)^k ] W_N^{−nk}     (7.33)

for 0 ≤ k ≤ N − 1, and in the space domain we can show that e2(n) is

e2(n) = Σ_{j=0}^{L−1} h(j) [ f(n − j) − f2(n − j) ]    for 0 ≤ n ≤ M − 1     (7.34)
e2(n) = e2(n − M)                                      for M ≤ n ≤ N − 1

Comparing (7.33) and (7.16) we note that in this case the error E2(k) is a function of the quantity

b2(n, k) = [ f(−n) − f(M − n) ] [ 1 + (−1)^k ]                    (7.35)

while in the case of linear convolution the error E1(k) (7.16) is the same function as in (7.33) but of the quantity

b1(n, k) = [ f(−n) − (−1)^k f(M − n) ]                            (7.36)

We restate (7.36) and (7.35) as

E1(k) is a function of:  f(−n) − f(M − n)    for k even
                         f(−n) + f(M − n)    for k odd

and

E2(k) is a function of:  2 [ f(−n) − f(M − n) ]    for k even
                         0                         for k odd

For an image of constant grey level, note that E1(k) = 0 only when k is even, but E2(k) = 0 ∀k. In this special case E2(k) ≤ E1(k) ∀k. Also, if the image is "globally stationary", i.e.

Exp{f(n)} = K    ∀n                                               (7.37)

where K, the image mean, is a constant and Exp represents the expected value, then Exp{E1(k)} = 0 only when k is even, but not when k is odd. However, from (7.33), Exp{E2(k)} = 0 ∀k. In this case we also have Exp{E2(k)} ≤ Exp{E1(k)} ∀k. The same results also hold when the grey levels around the boundary of the image are all equal to the same constant.

An alternative way to see this is to note that e2(n) in (7.34) is a function of f(n) − f2(n), which is given in (7.24), and e1(n) is a function of f1(n) or f(n) − f1(n) given in (7.13). In the latter, since Exp{f1(n)} or Exp{f(n) − f1(n)} are never equal to zero, Exp{e1(n)} is never zero. However Exp{f(n) − f2(n)} may be equal to zero when the areas surrounding the picture's boundaries are of similar grey level values. For these cases we therefore expect the error in restoration to be smaller for the case of circular convolution than for that of the linear convolution. In particular, if the image is truly periodic with period M then E2(k) = 0 ∀k, i.e. the error vanishes when circular convolution is used.
This fact suggests how structural features within the image can be exploited to reduce boundary truncation artifacts.

In this section we have modeled the degradation process by a circular convolution process and therefore, as is well known, it leads to wraparound errors which are included in e2(n). The question now is how we can use these errors of the circular convolution to our advantage so that they lessen the effects of the boundary truncation artifacts. In the next section we propose a simple procedure which enables the use of periodic convolution and reduces the impact of boundary truncation artifacts.

7.3 Restoration by image extension and periodic convolution

Our proposed solution to the boundary problem is to form a new image g3(x, y) as in Fig. 7.4. The new image is N x N (N = 2M) pixels. The top left quadrant of g3(x, y) is the observed M x M image g1(x, y) and the other three quadrants are extensions of g1(x, y). The top right quadrant is the mirror image of g1(x, y) about the AA axis and the bottom half is the mirror image of the top half about the BB axis. The method of extending an image at the boundaries, albeit by reflecting only a few of the rows and columns, has been suggested earlier for performing convolutions of the image with spatial masks (such as in smoothing or edge detection applications) [107]. The extension of the image by reflecting all its rows and columns and its application with circular deconvolution in restoration problems are new.

If it is desired to completely remove the boundary effect in the most general case we need to apply signal prediction techniques to extend an M x M image by M/2 pixels on each side.
This is clearly a non-trivial task and our proposed method of image extension by mirror imaging effectively achieves a first order prediction in a straightforward manner.

We have seen in section 7.2 that using circular convolution, via the N x N DFT, results in smaller expected errors than using linear convolution when the image is globally stationary. We therefore use circular convolution here also, i.e. we shall not pad the newly formed N x N image by zeros before applying the restoration filter h_r(x, y) in the transform domain. Considering again the untruncated one-dimensional image g(n),

g1(n) = g(n) · w(n)    0 ≤ n ≤ M − 1

Figure 7.4: Extension of the image by mirroring in the x and y directions.

The newly formed image in the one-dimensional case is

g3(n) = {g_0, g_1, ..., g_{M−1}, g_{M−1}, ..., g_0}               (7.38)

i.e.

g3(n) = g1(n) + g1(N − n − 1)    0 ≤ n ≤ N − 1                    (7.39)

We assume that the point spread function is symmetric and that the image g3(n) is formed by a circular convolution of the symmetric h(n) with f3(n), where f3(n) is given by

f3(n) = {f_0, f_1, ..., f_{M−1}, f_{M−1}, ..., f_0}               (7.40)

thus

f(n) − f3(n) = {..., f_{−2} − f_1, f_{−1} − f_0, 0, 0, 0, ..., f_M − f_{M−1}, f_{M+1} − f_{M−2}, ...}     (7.41)

In the frequency domain, using the N-point DFT and circular convolution, we have

G3(k) = H(k) · F3(k) + E3(k)                                      (7.42)

We now perform the circular convolution of the extended image g3(n) with the reconstruction filter h_r(n) in the DFT domain

F̂3(k) = H_r(k) · G3(k)    0 ≤ k ≤ N − 1
      = F3(k) + E3^res(k)                                         (7.43)

where

E3^res(k) = H_r(k) · E3(k)                                        (7.44)

Note that (7.44) is the same as (7.21) and (7.32) except for the E3(k) term.

We have shown in the previous section that when the image is globally stationary the expected error using circular deconvolution is less than that using linear deconvolution. For the present case we show below that the expected error (using mirroring and circular convolution)
is less than that using linear deconvolution if the image is locally stationary. Local stationarity is a more relaxed condition than global stationarity [112].

We show in Appendices 7A and 7C that for symmetric h(n) the error term E1(k) is given by

E1(k) = Σ_{j=1}^{(L−1)/2} h(j) W_N^{jk} [ Σ_{n=−j}^{−1} f(n) W_N^{nk} − Σ_{n=M−j}^{M−1} f(n) W_N^{nk} ]
      + Σ_{j=1}^{(L−1)/2} h(j) W_N^{−jk} [ − Σ_{n=0}^{j−1} f(n) W_N^{nk} + Σ_{n=M}^{M−1+j} f(n) W_N^{nk} ]     (7.45)

for 0 ≤ k ≤ N − 1, while the error term E3(k) is given by

E3(k) = E1(k) + W_N^{−k} E1(−k)                                   (7.46)

and hence

E3(k) = Σ_{j=1}^{(L−1)/2} h(j) W_N^{jk} [ Σ_{n=−j}^{−1} { f(n) − f(−n − 1) } W_N^{nk} − Σ_{n=M−j}^{M−1} { f(n) − f(N − n − 1) } W_N^{nk} ]
      + Σ_{j=1}^{(L−1)/2} h(j) W_N^{−jk} [ − Σ_{n=0}^{j−1} { f(n) − f(−n − 1) } W_N^{nk} + Σ_{n=M}^{M−1+j} { f(n) − f(N − n − 1) } W_N^{nk} ]     (7.47)

for 0 ≤ k ≤ N − 1. In the space domain e3(n) is

e3(n) = Σ_j h(j) [ f(n − j) − f3(n − j) ]    for 0 ≤ n ≤ M − 1     (7.48)
e3(n) = e3(N − n − 1)                        for M ≤ n ≤ N − 1

Comparing the expressions for E1(k) (7.45) and E3(k) (7.47) we note that they have the same form except that in E3(k) every pixel value f(n) has been replaced by the difference of two pixel values, where the two pixels lie in the same local neighbourhood. For example, inside the first term in (7.45) and (7.47), f(n) in (7.45) is replaced by the difference of f(n) and f(−n − 1), where the possible values of n lie between −1 and −(L − 1)/2 and where L is the width of the blurring function. Thus the maximum distance between f(n) and f(−n − 1) is less than L. The same applies for the other three terms in (7.45) and (7.47). Thus when the image is locally stationary at the truncation boundaries then Exp{E3(k)} = 0 ∀k and thus

Exp{E3} ≤ Exp{E1}                                                 (7.49)

We therefore conclude that our proposed method of extension of the observed image by reflection followed by circular deconvolution leads to smaller expected errors in the restored image when the truncated image is locally stationary at each of its boundaries.

Also, as illustrated in Fig. 7.5, if the grey levels in the neighbourhood of the boundary f(0), up to L_r
pixels, are all equal to a constant K1, and those from M − L_r − 1 up to the boundary f(M − 1) are all equal to another constant K2, then E3(k) = 0 ∀k. In the case of circular deconvolution (section 7.2.2), E2(k) = 0 under the added condition that K1 = K2.

In the two-dimensional case, if the restoration function h_r(n, m) has L_r non-zero terms in both directions then the wrap-around effects due to circular convolution affect a band, (L_r − 1) pixels wide, along the margins of the image. Referring to Fig. 7.6 we observe that if the objects of interest fall inside region A then they will be free from distortion due to wraparound, but the margins are not. If this margin is a smooth background region in which the energy is concentrated at a d.c. level, then the wrap-around errors introduced by the circular convolution operation are such that they will maintain this d.c. level exactly and the object is exactly recovered.

We also note that if the support of the blur function is along one axis only (as for example in motion blur) then, to obtain exact recovery, the grey levels of the margins of different lines of g1(x, y) need not all be equal; additionally, the grey levels may be different for the right and left L_r − 1 pixels of the same line.

Figure 7.5: The extended signal under the conditions necessary for exact recovery.

Figure 7.6: Spatial support of the boundary artifacts around the margins of the image for the case of L_r < M.

7.3.1 Computational Load

For our proposed method (section 7.3) we use the N (= 2M)-point DFT. In the two-dimensional case we can show that the computational load compared to the circular convolution (i.e. without any zero padding) will increase from O(M² log₂ M) to O(4M² log₂ 2M).
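The image extension of Fig. 7.4 is straightforward to form directly; the sketch below builds g3 from g1 by reflecting about the AA and BB axes (the function name is ours).

```python
import numpy as np

def mirror_extend(g1):
    """Form the N x N (N = 2M) image of Fig. 7.4: the observed image in the
    top-left quadrant, its mirror about the right edge in the top-right
    quadrant, and the mirror of that top half about its bottom edge below."""
    top = np.hstack([g1, g1[:, ::-1]])       # reflect about the AA axis
    return np.vstack([top, top[::-1, :]])    # reflect about the BB axis
```

The restoration filter is then applied to the extended image in the DFT domain without any zero padding, and the top-left M x M quadrant of the result is retained.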
However, since the resulting extended image g3(n) is real and symmetric, considerable gains can be made. Taking the N-point DFT of (7.39), after a spatial shift of one half a pixel, we get

G3(k) = 2 Re{ G1(k) }                                             (7.50)
      = 2 Re{ Σ_{n=0}^{M−1} g1(n) e^{−j2πkn/N} }    for 0 ≤ k ≤ N − 1     (7.51)
      = 2 Σ_{n=0}^{M−1} g1(n) cos(πkn/M)                          (7.52)

Therefore we only need to compute the real DFT of an M-point signal. The transformation (7.52) is the real part of the Discrete Fourier Transform and FFT algorithms can be adopted in its computation.

For conventional linear deconvolution, to calculate G1(k) we need to calculate the real part as well as the imaginary part of G1(k). The same applies to the inverse DFT computations. Our proposed technique therefore requires half the computational load of the traditional approach.

7.4 Experiments

Example: Consider a one-dimensional image M pixels long, g1(n). We extend this image to N = 2M pixels by reflecting it about a point half a pixel away from the last pixel to form g1(N − n − 1). We now form a periodic function ḡ2(n), one period of which is g2(n) = g1(n) + g1(N − n − 1). Let the first L_r − 1 values of g1(n) all be equal to a, and its last L_r − 1 values all be equal to b. The signal g2(n) is shown in Figure 7.5.

Assume that the reconstruction filter has L_r (< M − 1) non-zero values h_r(0), h_r(1), ..., h_r(L_r − 1) and that these values are normalized to have a sum of 1. Now perform a circular convolution of g2(n) and h_r(n) to compute f̂(n):

f̂(n) = a · Σ_p h_r(p) = a    for 0 ≤ n ≤ L_r − 1
f̂(n) = b · Σ_p h_r(p) = b    for N − L_r + 1 ≤ n ≤ N − 1

Thus the aliasing error cancels the boundary truncation error and therefore the margins remain undistorted.

In the above example we obtained the exact result because we made the aliases compensate for the truncation errors. To understand why this is so, let us consider the case of g(n) = C for 0 ≤ n ≤ 4, i.e. a constant grey level, and h_r = {a, b, c}.
Using the linear deconvolution, the recovered image is

f̂(n) = C{a, a+b, 1, 1, 1, b+c, c},  0 ≤ n < M + L − 1

where we have normalized the area a + b + c under h_r to be equal to 1. If we use image extension, the length of g becomes 2M = 10 and, because of circular convolution, the new f̂(n) is composed of the sum of the restoration results of the signal plus its aliases. Thus,

f̂(n) = C{a, a+b, 1, 1, 1, 1, 1, 1, b+c, c} + C{b+c, c, 0, 0, 0, 0, 0, 0, 0, 0} + C{0, 0, 0, 0, 0, 0, 0, 0, a, a+b}

where the second and third quantities are due to the aliases to the left and right of the signal. Thus,

f̂(n) = C{1, 1, 1, 1, 1, 1, 1, 1, 1, 1}

and after deleting the second half of f̂(n) we obtain

f̂(n) = C{1, 1, 1, 1, 1}

Thus we have used the aliases to our advantage.

Fig. 7.3d illustrates the effect of this technique in restoring the image of the blurred step edge. In the general case, i.e. for any image, this method of image extension preserves a first-order continuity at the image boundaries and therefore greatly reduces the undesirable effects of boundary discontinuity artifacts. The only requirement is that within each horizontal or vertical line the margins should be reasonably smooth.

Figure 7.7: Restoration of the blurred image of Bayan: a) original; b) a 64x64 section cut from the original larger image; c) a 64x64 section of the original image blurred by the out-of-focus model of the digitizing camera; d) result of restoring by linear deconvolution; e) result of restoring by linear deconvolution after windowing the data by a trapezoidal window; f) result of restoring using image extension and circular deconvolution; g) the difference image of b) and c); h) the difference image of b) and d); i) the difference image of b) and f).

To show how the new method performs on any image, even if the boundaries are not smooth, we apply it to the 64x64 pixel image of a smiling child, Bayan.
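The alias-cancellation effect in the constant-signal example above can be checked numerically. The sketch below (Python with NumPy; the concrete values C = 100 and h = {0.2, 0.5, 0.3} are arbitrary illustrative choices, not values from the thesis experiments) compares linear deconvolution of the zero-padded signal with circular deconvolution of the extended signal:

```python
import numpy as np

C, M = 100.0, 5
g = np.full(M, C)                 # constant observed signal, M pixels
h = np.array([0.2, 0.5, 0.3])     # reconstruction filter, a + b + c = 1

# Linear convolution (zero padding): the margins are distorted
lin = np.convolve(g, h)           # C{a, a+b, 1, 1, 1, b+c, c} = [20, 70, 100, 100, 100, 80, 30]

# Image extension to N = 2M followed by circular convolution via the DFT
g2 = np.concatenate([g, g[::-1]])                # mirrored extension
circ = np.real(np.fft.ifft(np.fft.fft(g2) * np.fft.fft(h, 2 * M)))
restored = circ[:M]               # keep first M samples: [100, 100, 100, 100, 100]
```

Only the first M samples are retained, mirroring the deletion of the second half of f̂(n) in the example above; the constant margins survive exactly because each alias supplies precisely the terms truncated at the boundary.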
Fig. 7.7a shows the 128x128 pixel image of Bayan and Fig. 7.7b is a 64x64 section of his face. Fig. 7.7c is the appropriate 64x64 section of the result of the linear convolution of Fig. 7.7a with a 64x64 blurring mask h(n). The blur function of Fig. 7.2 was used in this experiment. Fig. 7.7d is the result of linear deconvolution of the inverse restoration filter with the blurred and zero-padded image of Fig. 7.7c. The effects of the boundary artifacts are clearly visible. Fig. 7.7e was obtained by first applying a two-dimensional trapezoidal window to Fig. 7.7c in the space domain before zero-padding and linear convolution with the same restoration filter. Fig. 7.7f is the result of the application of the restoration filter according to our proposed method. Figs. 7.7g, 7.7h, and 7.7i are the difference images of the original image in Fig. 7.7b and Figs. 7.7c, 7.7d, and 7.7f respectively. A bias of 128 grey levels has been added for display purposes.

Summary

We have discussed the problem of image truncation in image deconvolution. The commonly practiced method of frequency-domain image filtering first involves the application of a window (such as Hamming, trapezoidal, etc.) in the space domain to smooth the effects of truncation. Since, by its nature, the DFT performs circular deconvolution (convolution), after applying the window the image is zero-padded to the required DFT size so that linear deconvolution is achieved. This approach partially corrects for the boundary truncation artifacts, but at the cost of reducing the useful part of the image.

In this chapter we have exploited the natural circular deconvolution process to our advantage. We first show that if the blurred image is globally stationary, then using circular deconvolution (i.e. instead of padding with zeros we pad with the image itself) leads to reduced errors arising from boundary truncation effects. We have then proposed a simple image extension technique by which the observed image is mirrored in both spatial directions.
We have shown that when the blur is a symmetric function, applying circular deconvolution on the mirrored image leads to reduced errors when each area around the boundaries of the image is only locally stationary. Our proposed method thus leads to much improved restored images. This technique is also of advantage in the restoration of large blurred images, and requires half the computational effort of the traditional linear deconvolution approach.

Appendix 7A

In this appendix we will derive an expression for the boundary truncation error associated with the use of linear convolution. Consider a long image f and its linear convolution with an L (≤ M)-point unit sample response h to produce the blurred image g. We observe M points of g, denoting the observed sequence g1:

g1(i) = g(i),  0 ≤ i ≤ M − 1

We take the N (= 2M)-point DFT of g1:

G1(k) = Σ_{i=0}^{M−1} g1(i) W_N^{ik},  0 ≤ k ≤ N − 1

G1(k) = Σ_{i=0}^{M−1} Σ_{j=0}^{L−1} h(j) f(i − j) W_N^{ik}
      = Σ_{j=0}^{L−1} h(j) W_N^{jk} Σ_{i=0}^{M−1} f(i − j) W_N^{(i−j)k}

Now let n = i − j:

G1(k) = Σ_{j=0}^{L−1} h(j) W_N^{jk} Σ_{n=−j}^{M−1−j} f(n) W_N^{nk}
      = Σ_{j=0}^{L−1} h(j) W_N^{jk} ( Σ_{n=0}^{M−1} f(n) W_N^{nk} + Σ_{n=−j}^{−1} f(n) W_N^{nk} − Σ_{n=M−j}^{M−1} f(n) W_N^{nk} )
      = Σ_{j=0}^{L−1} h(j) W_N^{jk} ( Σ_{n=0}^{M−1} f(n) W_N^{nk} + Σ_{n=1}^{j} f(−n) W_N^{−nk} − Σ_{n=1}^{j} f(M − n) W_N^{(M−n)k} )

G1(k) = H(k) F1(k) + E1(k)

where, since W_N^{Mk} = (−1)^k,

E1(k) = Σ_{j=0}^{L−1} h(j) W_N^{jk} Σ_{n=1}^{j} ( f(−n) − (−1)^k f(M − n) ) W_N^{−nk}

If we assume a symmetric blur function h(n) in the form {h_{−(L−1)/2}, ..., h_0, ..., h_{(L−1)/2}} then we have

E1(k) = Σ_{j=1}^{(L−1)/2} h(j) W_N^{jk} ( Σ_{n=−j}^{−1} f(n) W_N^{nk} − Σ_{n=M−j}^{M−1} f(n) W_N^{nk} )
      + Σ_{j=1}^{(L−1)/2} h(−j) W_N^{−jk} ( − Σ_{n=0}^{j−1} f(n) W_N^{nk} + Σ_{n=M}^{M−1+j} f(n) W_N^{nk} )

Appendix 7B

In this appendix we will derive an expression for the boundary truncation error associated with the use of circular convolution. We will use two periods of g1(n) in order to obtain an N-sample sequence.
We have

g2(n) = {g0, ..., g_{M−1}, g0, ..., g_{M−1}} = g1(n) + g1(n − M)

We also define the sequence

f2(n) = {f0, ..., f_{M−1}, f0, ..., f_{M−1}} = f1(n) + f1(n − M)

Taking the N-point DFT we have

G2(k) = (1 + (−1)^k) G1(k)

and

F2(k) = (1 + (−1)^k) F1(k)

In terms of the image formation model we have

G1(k) = H(k) F1(k) + E1(k)
G2(k) = H(k) F2(k) + E2(k)

and hence

E2(k) = (1 + (−1)^k) E1(k)

E2(k) = Σ_{j=0}^{L−1} h(j) W_N^{jk} Σ_{n=1}^{j} ( f(−n) − f(M − n) ) (1 + (−1)^k) W_N^{−nk}

Appendix 7C

In this appendix we will derive an expression for the boundary truncation error associated with the use of image extension and circular convolution. The original image is assumed to be

f3(n) = {f0, f1, ..., f_{M−1}, f_{M−1}, ..., f0}

and the corresponding observed image after extension is

g3(n) = {g0, g1, ..., g_{M−1}, g_{M−1}, ..., g0}

We will assume a symmetric blur function h(n) in the form {h_{−(L−1)/2}, ..., h_0, ..., h_{(L−1)/2}}; then we have

g3(i) = Σ_{j=−(L−1)/2}^{(L−1)/2} h(j) f3(i − j),  0 ≤ i ≤ N − 1

We take the N (= 2M)-point DFT of g3:

G3(k) = Σ_{i=0}^{N−1} g3(i) W_N^{ik},  0 ≤ k ≤ N − 1

We also have

g3(n) = g1(n) + g1(N − n − 1)

giving

G3(k) = G1(k) + W_N^{−k} G1(−k)

G3(k) = H(k) F3(k) + E3(k)
G1(k) = H(k) F1(k) + E1(k)

We conclude that

E3(k) = E1(k) + W_N^{−k} E1(−k)
The expression for E1(k) for the case of a symmetric h(n) is given in Appendix 7A above. Substituting for E1(k) we have

E3(k) = Σ_{j=1}^{(L−1)/2} h(j) W_N^{jk} [ Σ_{n=−j}^{−1} f(n) W_N^{nk} − Σ_{n=M−j}^{M−1} f(n) W_N^{nk} ]
      + Σ_{j=1}^{(L−1)/2} h(j) W_N^{−jk} [ Σ_{n=−j}^{−1} f(n) W_N^{−nk} − Σ_{n=M−j}^{M−1} f(n) W_N^{−nk} ] W_N^{−k}
      + Σ_{j=1}^{(L−1)/2} h(j) W_N^{−jk} [ − Σ_{n=0}^{j−1} f(n) W_N^{nk} + Σ_{n=M}^{M−1+j} f(n) W_N^{nk} ]
      + Σ_{j=1}^{(L−1)/2} h(j) W_N^{jk} [ − Σ_{n=0}^{j−1} f(n) W_N^{−nk} + Σ_{n=M}^{M−1+j} f(n) W_N^{−nk} ] W_N^{−k}

Substituting the dummy variable n′ = n + 1 in the second and fourth square brackets we get

E3(k) = Σ_{j=1}^{(L−1)/2} h(j) W_N^{jk} [ Σ_{n=−j}^{−1} f(n) W_N^{nk} − Σ_{n=M−j}^{M−1} f(n) W_N^{nk} ]
      + Σ_{j=1}^{(L−1)/2} h(j) W_N^{−jk} [ Σ_{n′=−j+1}^{0} f(n′ − 1) W_N^{−n′k} − Σ_{n′=M−j+1}^{M} f(n′ − 1) W_N^{−n′k} ]
      + Σ_{j=1}^{(L−1)/2} h(j) W_N^{−jk} [ − Σ_{n=0}^{j−1} f(n) W_N^{nk} + Σ_{n=M}^{M−1+j} f(n) W_N^{nk} ]
      + Σ_{j=1}^{(L−1)/2} h(j) W_N^{jk} [ − Σ_{n′=1}^{j} f(n′ − 1) W_N^{−n′k} + Σ_{n′=M+1}^{M+j} f(n′ − 1) W_N^{−n′k} ]

With new dummy substitutions n = N − n′ or n = −n′, as appropriate, in the same terms we obtain

E3(k) = Σ_{j=1}^{(L−1)/2} h(j) W_N^{jk} [ Σ_{n=−j}^{−1} f(n) W_N^{nk} − Σ_{n=M−j}^{M−1} f(n) W_N^{nk} ]
      + Σ_{j=1}^{(L−1)/2} h(j) W_N^{−jk} [ Σ_{n=0}^{j−1} f(−n − 1) W_N^{nk} − Σ_{n=M}^{M−1+j} f(N − n − 1) W_N^{nk} ]
      + Σ_{j=1}^{(L−1)/2} h(j) W_N^{−jk} [ − Σ_{n=0}^{j−1} f(n) W_N^{nk} + Σ_{n=M}^{M−1+j} f(n) W_N^{nk} ]
      + Σ_{j=1}^{(L−1)/2} h(j) W_N^{jk} [ − Σ_{n=−j}^{−1} f(−n − 1) W_N^{nk} + Σ_{n=M−j}^{M−1} f(N − n − 1) W_N^{nk} ]

We can now combine the terms in the first and fourth square brackets together, and similarly the terms in the second and third square brackets, to obtain the final expression

E3(k) = Σ_{j=1}^{(L−1)/2} h(j) W_N^{jk} [ Σ_{n=−j}^{−1} { f(n) − f(−n − 1) } W_N^{nk} − Σ_{n=M−j}^{M−1} { f(n) − f(N − n − 1) } W_N^{nk} ]
      + Σ_{j=1}^{(L−1)/2} h(j) W_N^{−jk} [ Σ_{n=0}^{j−1} { f(−n − 1) − f(n) } W_N^{nk} + Σ_{n=M}^{M−1+j} { f(n) − f(N − n − 1) } W_N^{nk} ]

Chapter 8

Segmentation

This chapter describes several approaches to the segmentation of microcalcifications from the background parenchyma. Segmentation is a prerequisite step in the analysis of mammographic images.

8.1 Image Analysis

Image analysis is concerned with the extraction of semantic descriptions from images. It differs from image processing in that its output is not another image.
Image analysis techniques can be employed in the detection and localization of abnormalities and in extracting quantitative features. Pattern recognition methods are then used for recognition and classification of abnormalities, as described in chapter 9.

Common approaches in image analysis include matching, segmentation, shape analysis and description [113].

Matching normally involves template matching using cross-correlation techniques or matched filtering. These techniques may be applied in a transform domain to render the match invariant to size, rotation or translation [94]. Template matching is a common technique in machine vision for segmentation, for example, of manufactured parts. Due to the great variability in shapes and sizes of microcalcifications, however, this approach is not likely to be useful for the detection of abnormalities.

Segmentation methods involve thresholding, edge detection and texture-based segmentation [114]. Thresholding may be applied to grey level or some other property such as the first- or second-order histogram [115].

The edge detection methods are well developed and are largely based on derivative operators or best-fit models. The more successful algorithms model the computational aspects of the human visual system and lead to operators that extract edges at various scales or resolutions. Many such operators can be approximated by the difference of Gaussians [116].

Texture segmentation uses variations of split-and-merge algorithms. The concept of fractal dimension [117] is also used to segment textured images. These algorithms may be applied to segment masses from a mammogram. They are not, however, of much use in the case of microcalcifications that are only a few pixels in size.

Shape analysis employs a description of the object boundaries and may involve morphological processing such as thinning, etc. [118].
These processes are followed by the extraction of numerical features which are then employed to classify the observed abnormalities.

8.2 Detection and Segmentation of Microcalcification Clusters

The first step in automated mammographic image analysis is the detection of calcifications, if present. The criteria commonly used by radiologists for recognition of calcifications must first be quantified. This was done, based on a large number of cases, in collaboration with an experienced radiologist. Features for recognition of calcifications include intensity level, local contrast measured by offset or ratio of the pixel to the average within a window, shape and size tests, gradient tests, etc.

In this chapter we describe three methods for the detection and segmentation of microcalcification clusters. Mammograms will typically include identification marks. To exclude these from the analysis, a simple routine was applied to separate breast from non-breast areas.

8.3 Local Histogram Thresholding Algorithm

Thresholding of the image grey-level histogram is commonly used for object segmentation. We implemented a modified version of a locally adaptive method described in [36]. Each image is
The input image is a 32mm x 32mm section of a mammogram containing microcalcifications, shown in Figure 8.la and the processed output is given in Figure 8.lb. Clearly thechoice of sub-region size and boundary affects the results and a large portion of the backgroundbreast structure is misclassified as calcifications. To correct for the effect of boundaries, a grid,as suggested by Davies et al. 1351, was placed in five overlapping positions as shown in Figure8.2. The subregion of concern is the shaded area. In this way there are five independentlydetermined local threshold values for each sub region. Figure 8.lc shows the result of acceptingas calcification any pixel that has a gray level higher than any of its associated thresholds.Figures 8.ld, 8.le and 8.lf are the results of progressively raising the requirements of pixel graylevel to be higher than 2, 3 or all five of threshold values obtained from the five grids. Theamount of debris is clearly decreasing but few pixels which, when assessed visually, may bepotential candidates are also eliminated.The resulting segmented objects are subjected to further tests involving size of the areaand gradient values. The size test was imposed using the heuristic knowledge that significantmicrocalcifications are rarely greater than 1.6mm x 1.6mm in area. The introduction of thisconstraint removes most of the remaining artifacts as shown in Figure 8.3e. Figures 8.3a-d arereproduced from Figure 8.1 for ease of comparison.To further reduce the false positive rate, the following gradient test is applied. The RobertsChapter 8. Segmentation 121(a) (b) (c)(d) (e) (f)Figure 8.1: Segmentation using the Local Histogram Thresholding algorithmChapter 8. Segmentation 122Figure 8.2: The subimage grid displacementChapter 8. Segmentation 123(a) (b) (c)(d) (e) (f)Figure 8.3: Effects of area and gradient tests on segmentationChapter 8. 
gradient operator for one edge of the pixel F(i, j) is given as:

R = √(R1² + R2²)
R1 = F(i, j) − F(i−1, j−1)
R2 = F(i−1, j) − F(i, j−1)

For computational simplicity we use a modified operator R = |R1| + |R2|. The average edge strength of each pixel in the perimeter of each object is measured and weak edges are removed.

Figure 8.3f shows that the application of this edge constraint removes several scattered debris. It cannot be known whether the remaining borderline cases are truly microcalcifications or not without a correlated pathologic study. Figure 8.4 shows the results of processing two other mammograms. It can be seen that the algorithm produces satisfactory results when calcifications are of good contrast. However, when calcifications appear superimposed on other densities, the algorithm segments the entire mass with its associated calcifications. The algorithm fails to correctly detect or segment low-contrast microcalcifications.

Figure 8.4: Examples of segmentation using the Local Histogram Thresholding algorithm

8.3.1 Object Labeling

The above algorithm is computationally expensive due to the repeated shifting of the subregion-forming grid. To reduce the computational effort, an alternative method which uses a two-pass approach is taken. In the first pass the grid is fixed and the local histogram is used to determine a threshold for each pixel as described above. Potential microcalcification pixels are marked. Each object is then labeled using 6-connected neighbours; object boundaries are then marked but they are not segmented from the background. These object boundaries will involve discontinuities at the edges of each subregion, since the thresholds change at these locations. This is corrected for in the second pass as follows. The mean grey level of each object is computed and a unique threshold is allocated to each object. This threshold value lies halfway
between the object mean grey level and the background, and is used for extracting the object. This method produces results that are comparable with the previous method, but with lower computational effort.

8.4 Edge Detection and Region Growing Algorithm

The above algorithm fails to correctly detect microcalcifications of low contrast. Thus another algorithm is employed which identifies the pixels that may potentially belong to microcalcifications using an edge detection technique, instead of obtaining thresholds from the local histograms. Eight different edge-detecting algorithms were implemented for the gradient test and their performance compared. These included the following four gradient-based edge detectors, namely the Roberts, Sobel, Kirsch, and Prewitt operators. These operators are simple to implement, and they have short execution times. However, they are all sensitive to noise, with comparable performances. Of these four convolutional masks, the Roberts operator gives the thinnest, i.e. the best localized, edges, while the Kirsch operator gives the thickest edges, with about 4 pixels marked for a one-pixel-wide edge.

Two approximations to the second-order derivative Laplacian operator were also implemented. The edges are located by identifying the zero-crossings of the output from the Laplacian operators. These operators are quite sensitive to noise, with many spurious edges detected. To overcome this problem, we implemented the Marr-Hildreth edge detector, which is the Laplacian of the Gaussian operator with a variable width of the Gaussian smoother. We found this edge detector to be essentially useless for this application. The reason is that microcalcifications are small objects with pronounced edges. The Gaussian smoother tends to eliminate the smaller and fainter microcalcifications and leads to many false negatives.

The last edge detector that we tested was based on morphological operations.
Essentially, if an eroded version of the image is subtracted from the original scene, the resulting output may be thresholded to give an edge map. We found this morphological edge detector to be quite useful for larger microcalcifications; however, single-pixel calcifications are once again eliminated by the erosion operation despite their high contrast. For these reasons we selected the Roberts operator as the edge detector of choice for the initial detection of edges. Spurious outputs from this operator were then eliminated in the subsequent steps of the algorithm.

After the identification of boundary pixels, region growing techniques were employed to grow the calcifications. A local threshold was calculated based on the grey levels of each edge pixel, its neighbouring pixels, and the direction of the steepest gradient. This threshold was used in growing additional pixels at the boundaries of each object. If the seed pixel was not a true edge of a connected object, the object normally grows to be unrealistically large. This criterion was used to eliminate false detections. Finally, the resulting segmented objects were subjected to tests involving shape, size and gradient at the boundaries.

Figure 8.5 shows sections of three mammograms containing clusters of microcalcifications and the output of the various segmentation routines. The left column is the original digitized mammograms. The top image is a degenerating fibroadenoma with benign microcalcifications. The two lower images are malignant formations. The central column is the output of the local area thresholding algorithm (algorithm 1), and the right column is the output of the edge detection and region growing algorithm (algorithm 2). It can be seen that in all three examples the second algorithm outperforms the first one.

8.5 Neural Network Techniques

Various neural network architectures have been used in object detection and image segmentation problems.
Figure 8.5: Comparison of two segmentation algorithms

The basic idea is to treat image segmentation as a pixel classification problem. If m distinct regions exist in an image, then each pixel can have one of m different labels. The network dynamics, in a supervised or unsupervised fashion, should then lead the network to a stable state such that each pixel has the desired label. In [119] a constraint satisfaction neural network (CSNN) is developed and applied to segmentation of CT, PET, and MRI images.

In [31] an algorithmic approach is taken towards detection of clusters of microcalcifications in digitized mammograms. A multilayered Perceptron is then used to reduce the false positive rate and increase the specificity of the detection algorithm with only a moderate decrease in its sensitivity. In [32] the performance of three different neural networks is compared in the reduction of false positive detection of microcalcification clusters. The input to the network consisted of seven features extracted from the segmented objects, and the network output would indicate if they represent a true cluster.

In this section we examine the ability of two neural networks to operate directly on the raw image and segment the microcalcifications by assigning one of two labels to each pixel.

8.5.1 The Modified Hopfield Network:

The first neural network considered is a modified version of a Hopfield network. This network has self-organizing properties that favor the formation of compact regions, and therefore a training set of images is not required. The derivation of this network is given in [120] in detail. We provide here a summary description of it.

One neuron is assigned to each pixel in a digitized mammogram. Each neuron is connected only locally to the neurons of the pixels in a pre-defined neighbourhood.
This neighbourhood may be defined arbitrarily as either the four-connected adjacent pixels (referred to as N2) or the eight-connected pixels (referred to as N3). The choice of neighbourhood is treated as a design parameter and will have an impact on the performance of the system.

The input U_i to each neuron i comes from two sources, namely the local neighbourhood and a bias input. Thus the input U_i is such that

U_i = Σ_j W_ij V_j + I_i    (8.1)

where W_ij are the synaptic connections between neuron i and neuron j, V_j is the output of neuron j, and I_i is the bias input to neuron i. We let the bias input be a normalized version of the pixel grey levels in the raw input mammogram, so that

I_i = 2g_i/L − 1    (8.2)

where g_i is the grey level of the pixel i in the observed image, and L is the maximum number of grey levels. We will assume symmetric weights, i.e.

W_ij = W_ji    (8.3)

We constrain W_ij to be in the range {0, 1}. The choice of values of W_ij dictates the influence of the neighbourhood. For example, if we set W_ij = 0 for all i and j, all neurons are decoupled from each other and no segmentation takes place beyond the initial estimate. If W_ij = 1 for the closest 8 neighbours, and W_ij = 0 for all other neurons, then these neighbouring pixels will have a strong effect on the classification of the central pixel under consideration, while all others will only have an indirect effect. This is similar to a morphological operation that fills holes in the segmentation mask.

The energy function for this network also has two components, as follows. If a pixel belongs to the background (and not microcalcifications), then there is a high probability that its neighbours also belong to the background. Therefore if a pair of adjacent pixels has similar values, then the energy contribution of this pair should be small. One of the terms in the energy function therefore is

−(1/2) Σ_i Σ_j W_ij V_i V_j

Also, if the grey level of a pixel is very close to that of the background parenchymal pattern (i.e. relatively darker), or to that of the object (i.e.
relatively brighter than its neighbourhood), then it is highly likely that in the stable state the corresponding neuron will be labeled as background or object respectively. Since we have chosen the input bias to be proportional to the image grey levels, the product I_i V_i should contribute less towards the total energy value. The second part of the energy expression therefore is

− Σ_i I_i V_i

Finally, the network energy is given by

E = −(1/2) Σ_i Σ_j W_ij V_i V_j − Σ_i I_i V_i    (8.4)

The Discrete Model:

In this formulation we use bi-state neurons. Each neuron i has two possible states, corresponding to the two labels of its associated pixel. The neural output V_i may take the values of +1 or −1, corresponding to the ON or OFF states of the neuron. The output of each neuron is derived from its input by a simple threshold logic:

V_i = +1 if U_i > θ_i
V_i = −1 if U_i < θ_i

where θ_i is a local estimate of the threshold.

A change in the output of the neuron i, ΔV_i, will result in a change in the energy of the network, ΔE, such that

ΔE = −ΔV_i ( Σ_j W_ij V_j + I_i )    (8.5)

In the above equation it can be seen that ΔV_i has the same sign as the expression in the bracket, and therefore ΔE < 0. Hence the network dynamics tend to obtain the minimum energy.

The Continuous Model:

In this formulation the neural response is graded and is an approximation of the sigmoidal function. This function is similar to the 'S' function used in fuzzy sets and can be written as

V = −1             if U ≤ −1
V = (U + 1)² − 1   if −1 < U ≤ 0
V = 1 − (1 − U)²   if 0 < U ≤ 1
V = +1             if U > 1

Once again the network time evolution leads it to a minimum energy state. The dynamics of the network are governed by simultaneous solutions to the differential equations

dU_i/dt = −∂E/∂V_i    (8.6)
By Euler approximation we have

U_i(t + Δt) = U_i(t) + Δt ( Σ_j W_ij V_j(t) + I_i − U_i(t) )    (8.7)

It can be shown that the energy function of such a system is monotonically decreasing, and therefore the network dynamics lead the system to a stable output representing the required binarized mask of pixels belonging to the microcalcifications.

8.5.2 The Feed-Forward Neural Network:

The second neural network is a three-layer Perceptron. The input layer receives the grey-level values of each pixel and the output layer flags the presence of microcalcifications. A hidden layer is added to ensure that a piecewise linear discrimination between the two classes may also be accommodated. The back-propagation training algorithm is used for the adjustment of the synaptic connections, and the training and test sets of images are kept separate.

It is well known that this network architecture does not have shift-invariant properties. It follows that the network needs to be trained on all the shifted versions of a given input pattern. This requires a prohibitively long training time, and it is unlikely that successful training can be accomplished for images of realistic size and complexity. To overcome this restriction we formulated the problem differently, as follows.

The detection of microcalcifications is essentially a local operation. A single microcalcification is detected by a human observer by comparison of its features with those of its immediate vicinity. We can therefore examine a window around each pixel to decide whether it should be labeled as a microcalcification or not. In this way the network becomes shift-invariant.

Based on the above analysis we chose a three-layer Perceptron with 81 neurons in the input layer, 4 neurons in the middle layer, and one neuron in the output layer. The input consists of grey levels in a 9 x 9 window centered on a pixel and the output is the label assigned to this pixel.
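The windowed formulation can be sketched as follows. This is an illustrative Python/NumPy forward pass of the 81-4-1 architecture with hypothetical, untrained random weights (in practice the weights would come from back-propagation training, and the grey-level normalization shown is our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(81, 4)), np.zeros(4)   # input -> hidden (4 neurons)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # hidden -> output (1 neuron)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def label_pixel(window):
    """Label the centre pixel of a 9x9 grey-level window."""
    x = window.reshape(81) / 255.0               # normalized grey levels
    h = sigmoid(x @ W1 + b1)                     # hidden layer
    y = sigmoid(h @ W2 + b2)[0]                  # output neuron
    return y > 0.5                               # True = microcalcification

# Slide the window over a (synthetic) image; the same weights are applied
# at every position, which is what makes the scheme shift-invariant.
image = rng.integers(0, 256, size=(64, 64)).astype(float)
mask = np.zeros(image.shape, dtype=bool)
for i in range(4, 60):
    for j in range(4, 60):
        mask[i, j] = label_pixel(image[i - 4:i + 5, j - 4:j + 5])
```

Shift invariance here is by construction: the classifier sees only the window contents, never the absolute pixel position.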
Sections from four mammograms containing numerous microcalcifications of various sizes and shapes were chosen as the training set. Training pattern vectors were generated by a moving 9 x 9 window.

8.6 Results and Discussions

The performance of each of the two neural networks and the two above algorithms was tested using selected images from our database of 68 images. The Local Histogram Thresholding algorithm is referred to as algorithm 1, and the Edge Detection and Region Growing algorithm is referred to as algorithm 2. We have chosen to compare the results on a pixel-by-pixel basis instead of on an object-by-object basis, since both the detection and the accurate segmentation of the shape of microcalcifications are clinically significant. Table 8.1 gives a typical classification matrix for the first algorithm. In this particular case a 320 x 320 pixel square window from a mammogram was segmented manually and also using the first algorithm.

Table 8.1: Classification matrix for algorithm 1

                          Manual segmentation
Automated segmentation    Object    Background
Object                       364           371
Background                   161        101504

A total of 525 pixels were labeled manually as belonging to microcalcifications, and the balance of 101875 pixels as background parenchyma. The algorithm correctly labeled 364 pixels as being microcalcifications, missing the other 161 pixels. It also falsely labeled 371 pixels as being microcalcifications. In this image there were 11 individual calcifications, nine of which were detected by the algorithm. The shapes of the calcifications, however, were not segmented accurately, leading to pixel misclassifications. We use the standard definitions of

Sensitivity = TP / (TP + FN)    (8.8)

Specificity = TN / (TN + FP)    (8.9)

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (8.10)
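These definitions can be checked against the counts in Table 8.1 with a short helper (Python; the function name is ours, for illustration):

```python
def metrics(tp, tn, fp, fn):
    """Sensitivity, specificity and accuracy from pixel counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Counts from Table 8.1 (algorithm 1) on the 320 x 320 = 102400-pixel window
sens, spec, acc = metrics(tp=364, tn=101504, fp=371, fn=161)
print(round(sens, 2), round(spec, 3), round(acc, 3))   # 0.69 0.996 0.995
```

The output reproduces the values quoted in the text, including the inflated accuracy caused by the very large number of background pixels.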
where TP, TN, FP, FN are the number of true positive, true negative, false positive and false negative cases respectively.

For this typical case the Specificity is 0.996, the Sensitivity is 0.69, and the accuracy is 0.995. The high value of accuracy is due to the very large number of background pixels. In such cases care should be exercised in the interpretation of the results.

The classification matrix for our second algorithm and the same input image is given in Table 8.2. 507 of the 525 microcalcification pixels were correctly labeled and 26 pixels of the background were misclassified. We note from this table that both Sensitivity and Specificity have increased considerably. This second algorithm has consistently performed better than the first, indicating the superiority of an edge-based technique over thresholding of the local histogram.

Table 8.2: Classification matrix for algorithm 2

                          Manual segmentation
Automated segmentation    Object    Background
Object                       507            26
Background                    18        101849

The modified Hopfield network was also tested with the same input image. No manual segmentation was performed, as this network has self-organizing properties. We found that this network can be tuned for a given set of input images to achieve acceptable segmentation. In particular, the network dynamics are such that segmented microcalcifications are morphologically compact, i.e. without holes in them. Unfortunately, the choice of neural response, the various gain factors, and the choice of neighbourhood size and shape are problem dependent. This limits the application and generalization of this network to other data sets. Further investigation into the proper choice of these parameters is required.

The multilayer Perceptron was tested with the same image database. We first verified the shift-invariant property of the network by training it on a subset of images and testing its performance on shifted versions of the same images. A typical classification matrix for this network is given in Table 8.3.
Table 8.3: Classification matrix for the Perceptron

                          Manual segmentation
Automated segmentation    Object    Background
Object                       350            87
Background                   175        101788

We notice a comparable performance with the above algorithmic approaches.

Using the proposed techniques, all of the microcalcification clusters in the original set of mammograms were correctly detected. Additionally, other minute calcifications were segmented which were not readily visible on the original mammograms.

These results indicate that both algorithmic and neural network based approaches can segment microcalcifications from the background parenchymal pattern, and that a combination of approaches may be used for improved sensitivity and specificity. These techniques could prove of assistance for the objective classification of some malignant breast lesions.

The computational complexity of these routines depends on the contents of each image. In our experience all calculations can be performed within a few seconds on an Apollo 4500 workstation. This time is only slightly longer than image digitization and is sufficiently rapid to make this system usable in a clinical setting to provide a second (objective) opinion to the radiologist's interpretation of the mammogram.

8.7 Summary

Automatic detection and segmentation of microcalcifications may be achieved by the application of algorithmic techniques or by the use of artificial neural networks. We have developed two algorithmic approaches, using firstly a local area thresholding method and secondly a technique based on edge detection followed by region growing, to segment microcalcifications from the background parenchyma. We also selected two neural network architectures and implemented object detection techniques on them. In this chapter we reported on the performance and comparative merits of each one of these four systems.
Of these four systems the approach based on a modified Hopfield network has been the least successful, in that it requires manual specification of parameters which are dependent on the contents of each image. The perceptron performs similarly to the first algorithm. The second algorithm, based on edge detection and region growing, outperforms the first algorithm, which was based on local histogram thresholding. The second algorithm was therefore chosen for use in the next chapter.

The objective of the work in this chapter and the following chapter is to develop a system for the automatic recognition of microcalcifications, which are the most common signs of abnormality in a mammogram. At present, no practical system is available to provide radiologists with the means for the computer-assisted detection and quantitative analysis of mammographic images, which would likely lead to improved diagnosis.

Chapter 9

Classification

Microcalcifications are non-specific indicators of breast carcinoma in conventional mammographic examinations. Currently, radiologists inspect mammograms with a viewing box and, based on the subjective appearance of microcalcifications, determine the presence or absence of a suspicious lesion.

The objective of the work described in this chapter is to extract various numerical features from microcalcifications in a digitized mammographic image and to use this information to construct a piecewise linear discriminant function for classification of images into two groups: clearly benign or possibly malignant. Recent advances in digital image acquisition and analysis have made it possible to consider automated diagnosis of mammographic images. Towards this end, a good selection of a small number of features would be helpful for a successful discrimination of mammograms [121].

We therefore evaluated over 100 numerical features extracted from individual as well as clusters of microcalcifications in digitized mammographic images.
These features quantify the number, size, shape, roughness, and configuration of clusters of microcalcifications. The features are then examined individually and also in various combinations to test their potential in discriminating between benign and malignant microcalcification patterns.

Pattern Recognition Methods

Pattern recognition methodologies can be divided into two broad categories: supervised and unsupervised classification. Supervised learning may be further divided into statistical decision making and syntactic or structural methods.

The image represents a point in N-dimensional feature space. In the training phase the feature vector is considered as the input and the desired classification as the output. A discriminant function is then computed to best separate the feature space into the various classes. This function may be linear, quadratic, or generally non-linear. Piecewise linear discriminant functions are the most popular in image pattern recognition, since most practical feature vector spaces are not linearly separable and general non-linear discriminant functions are not mathematically tractable. In computing the discriminant function, if a priori knowledge of the class statistics is unobtainable, then a heuristic distribution-free approach is followed. One such decision rule is based on the minimum distance of the observed vector from the class centroid. Alternatively, k-nearest-neighbour classifiers may be used.

The statistical pattern recognition methods commonly use Bayes' rule and require knowledge of the a priori probabilities of each class. These may be parametric, such as Gaussian distributions, or non-parametric. Unsupervised learning techniques such as clustering [122] and fuzzy set reasoning [123] have also been used in pattern recognition tasks.

9.1 Data Base

We have examined over 400 mammograms from patients who have recently undergone breast biopsies at the Vancouver Clinic of the British Columbia Cancer Agency (BCCA).
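The two distribution-free decision rules mentioned above (minimum distance to the class centroid, and k-nearest-neighbour voting) can be sketched in a few lines. This is an illustrative sketch, not the thesis's implementation; the function names are ours:

```python
import numpy as np

def centroid_classifier(train_X, train_y, x):
    """Minimum-distance rule: assign x to the class whose training
    centroid is nearest in Euclidean distance."""
    labels = np.unique(train_y)
    centroids = {c: train_X[train_y == c].mean(axis=0) for c in labels}
    return min(labels, key=lambda c: np.linalg.norm(x - centroids[c]))

def knn_classifier(train_X, train_y, x, k=3):
    """k-nearest-neighbour rule: majority vote among the k closest
    training samples."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]
```

Both rules need no estimate of the class-conditional densities, which is why they are attractive when, as here, the class statistics are unknown.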
From this group 68 typical images were selected. These images contain isolated fine and coarse calcifications as well as clusters of microcalcifications that may be clearly benign, clearly malignant or suspicious of malignancy. Participating radiologists from the Clinic have performed a blind diagnosis of the images without access to other relevant clinical or pathological information. The images were then digitized with a 100 µm sampling interval in both spatial directions using the two-dimensional CCD device described in chapter 3.

Each image was processed by the "edge detection and region growing" segmentation routine described in chapter 8. Each image was associated with a binary mask representing the pixels that belong to microcalcifications. Each image and its associated mask was checked manually and, if necessary, the segmentation mask was corrected. This step was included to ensure that any errors in automatic segmentation do not affect the pattern classification stage. Quantitative features were then extracted from each image.

9.2 Features

My approach was to extract features from microcalcifications individually as well as collectively from all the microcalcifications present in the image. Screen-film mammograms were imaged and discretized to 1280x1024 pixels with 12 bit accuracy using the system described in chapter 3. Each image was then subjected to a segmentation algorithm based on edge detection and region growing as explained in chapter 8. A binary mask created in this way tags the pixels that are calcified. In order to eliminate the effect of errors in automated segmentation on the classification results, each binary mask was examined and corrected if necessary. For each microcalcification the following groups of features were computed:
1. Photometric variables:

(a) Mean and variance of intensity of the object (microcalcification);

(b) Contrast of the object over the background parenchyma;

(c) The Optical Density (OD) of each pixel was measured as

    OD(x,y) = -2.0 \log_{10} T(x,y)    (9.1)

where T is the local optical transmittance of the mammogram for the pixel at coordinates (x, y).

(d) A histogram of OD was constructed for each object, and the minimum, maximum, range, mean and variance of OD were calculated.

(e) The skewness of the OD histogram is the third moment of its distribution normalized by its second moment, and was calculated from

    OD_{skew} = \frac{\sum (OD(x,y) - OD_{mean})^3}{(n-1)\, OD_{var}^{3/2}}    (9.2)

where n is the total number of pixels in the object, OD_{var} is the variance of OD normalized by OD_{mean}, and OD_{skew} is a measure of asymmetry in the OD distribution.

(f) The long-tailed nature of the optical density distribution was measured by its kurtosis, being its fourth moment normalized by the square of its second moment:

    OD_{kurt} = \frac{\sum (OD(x,y) - OD_{mean})^4}{(n-1)\, OD_{var}^{2}}    (9.3)

2. Size variables:

(a) Area, being n, the number of connected pixels forming the microcalcification.

(b) Perimeter, using 4-connectedness (i.e. only edge-adjacent pixels are considered neighbours) and appropriate corrections for square tessellations:

    P = n_1 + \sqrt{2}\, n_2 + 2.0\, n_3    (9.4)

where n_1, n_2, and n_3 are the number of edge pixels in the object with 1, 2, and 3 non-object neighbours respectively.

(c) Mean radius, being the mean distance from the object centroid to the object edge.

3. Shape variables:

(a) Compactness, calculated as the normalized ratio of area to perimeter squared:

    C = \frac{4\pi n}{P^2}    (9.5)

where n is the total number of pixels in the object.

(b) Moment of inertia of the mask considered as a uniform laminar object, normalized by the mask area squared:

    Inert = \frac{2\pi}{n^2} \sum d^2(x,y)    (9.6)

where d(x, y) is the distance of the pixel at (x, y) from the object centroid.
(c) Variance of the object radii;

(d) Sphericity, defined as the ratio of the radii of two concentric circles enclosing the object boundary;

(e) Eccentricity, defined as the ratio of the eigenvalues of the matrix of second moments of the object; i.e. let M_2 be such a matrix,

    M_2 = \begin{pmatrix} I_{xx} & I_{xy} \\ I_{yx} & I_{yy} \end{pmatrix}

where I_{xy} = I_{yx} = \sum (x - \bar{x})(y - \bar{y}), and I_{xx} and I_{yy} are the variances of the x-coordinate and y-coordinate values. If the eigenvalues of M_2 are denoted \lambda_1 and \lambda_2 with \lambda_1 \geq \lambda_2, then

    Eccent = \lambda_1 / \lambda_2    (9.7)

(f) Elongation, defined as the ratio of the major axis to the minor axis of the least squares best fit of the object boundary to an ellipse.

4. Roughness variables: coarse and fine roughness, measured as the energy content of selected frequency bands in the spectrum of the object radii. The coordinates (x, y) of the object boundary are first expressed in polar coordinates (r, \theta). Using Fourier coefficients we can write

    r(\theta) = a_0 + \sum_i \left( a_i \cos(i\theta) + b_i \sin(i\theta) \right)    (9.8)

Then the mean radius is a_0; (a_1, b_1) determine the least squares best fit of the object boundary to a circle; and (a_2, b_2) determine the least squares best fit to an ellipse. The major and minor axes of this ellipse are given by

    a_0 \pm \sqrt{a_2^2 + b_2^2}    (9.9)

Finally, the energy content within a given frequency band i_1 to i_2 is calculated from

    E = \sum_{i=i_1}^{i_2} \left( a_i^2 + b_i^2 \right)    (9.10)

We used the parameters (i_1 = 3, i_2 = 10) to measure coarse boundary variation and (i_1 = 11, i_2 = 31) to estimate fine boundary variation.

Features of all individual microcalcifications in an image were combined to derive overall characteristic features for each image. Thus the minimum, maximum, range, mean, variance and normalized standard deviation were calculated for each feature of individual microcalcifications. The center of mass of all masks within an image was identified and simple statistical data were computed for the distance of each object to the cluster center.
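Several of these per-object features reduce to short computations on the OD values and boundary radii. A minimal NumPy sketch of the moment features of Eqs. 9.2-9.3 (using the plain sample variance as a simplification) and of the band-energy roughness measure of Eq. 9.10, assuming the boundary radii are sampled uniformly in θ; the function names are ours:

```python
import numpy as np

def od_moments(od):
    """Skewness and kurtosis of the optical-density values of one
    object, with the (n-1) normalization used in Eqs. 9.2-9.3."""
    od = np.asarray(od, dtype=float)
    n = od.size
    mean = od.mean()
    var = ((od - mean) ** 2).sum() / (n - 1)          # sample variance
    skew = ((od - mean) ** 3).sum() / ((n - 1) * var ** 1.5)
    kurt = ((od - mean) ** 4).sum() / ((n - 1) * var ** 2)
    return skew, kurt

def band_energy(radii, i1, i2):
    """Energy in Fourier bands i1..i2 of the boundary radius signal
    r(theta), sampled uniformly in theta (Eq. 9.10)."""
    r = np.asarray(radii, dtype=float)
    n = r.size
    spec = np.fft.rfft(r)
    a = 2.0 * spec.real / n      # Fourier-series cosine coefficients
    b = -2.0 * spec.imag / n     # Fourier-series sine coefficients
    return float(np.sum(a[i1:i2 + 1] ** 2 + b[i1:i2 + 1] ** 2))

# Coarse and fine roughness as used in the text:
# coarse = band_energy(r, 3, 10); fine = band_energy(r, 11, 31)
```

A boundary that is a pure low-order harmonic, e.g. r(θ) = a₀ + 0.5 cos(4θ), contributes all of its energy (0.25) to the coarse band and none to the fine band.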
To estimate the degree of scatteredness of calcifications, the object-to-cluster-center distance was weighted by object area or object perimeter and similar statistical metrics were calculated. The mean and variance of the distance between pairs of microcalcifications were used as further features to estimate scatteredness.

Additionally, a convex polygon enclosing each microcalcification cluster was calculated and several features relating to its size and shape were computed.

Other features measured relate to the degree of alignment of calcifications. Radiologists often consider a cluster as being suspicious if the major axes of individual microcalcifications are aligned in a linear or branching fashion. We used two methods of calculating alignment, as follows [59]:

1. For each object i, we measure the degree to which the major axes of all other objects in the scene are aligned with it:

    Align_i = \sum_{j \neq i}^{k} \left( 1 - \frac{1}{Ar_j} \right) \left| \hat{M}_i \cdot \hat{M}_j \right|    (9.11)

where k is the total number of microcalcifications in the image, Ar is the aspect ratio, \hat{M} is a unit vector along the major axis of each object, and \hat{M}_i \cdot \hat{M}_j implies a dot product. Note that long and thin objects are weighted more than round and compact objects. Simple statistical data from the distribution of alignment values were used to describe the whole cluster.

2. Alternatively, the direction of the major axis of the convex polygon enclosing the cluster was first identified. The orientation of each microcalcification was then compared to this vector to derive alignment features.

Many other common features from the literature [124] (e.g. normalized central moments, Fourier coefficients, and other invariant features for shape recognition) were also implemented. A complete list of features is given in Appendix C.

9.3 Analysis

A commercially available statistical analysis package (Bio-Medical Data Processing) was used to evaluate the feature vectors.
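The convex polygon enclosing a cluster, mentioned in section 9.2, can be computed from the microcalcification centroids with a standard hull algorithm, and its area is then a cluster-size feature. A minimal sketch using Andrew's monotone-chain hull and the shoelace area formula (an illustrative stand-in for the thesis's own routine):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of a set of 2-D points.
    Returns the hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(vertices):
    """Shoelace formula for the area of a simple polygon."""
    n = len(vertices)
    s = sum(vertices[i][0] * vertices[(i + 1) % n][1]
            - vertices[(i + 1) % n][0] * vertices[i][1] for i in range(n))
    return abs(s) / 2.0
```

Compactness and elongation features of the cluster polygon follow from the same vertex list.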
The Fisher statistic for each individual feature was calculated. Individual histograms were also computed for each feature.

The features were grouped together based on what attribute of the cluster was being measured. From each group of features, one or two representative ones were selected based on their discriminating power. These features were used in the calculation of a piecewise linear discriminant function. The jackknife method was used to ensure that the training set and the test cases were kept separate.

9.4 Results

Based on a data base of 68 images we have determined the most effective features when used individually, as listed in Table 9.1 with their relative discriminating value. Nineteen features showed F-values higher than 4 and were considered significant in classification. It can be seen from Table 9.1 that the most important feature is the number of microcalcifications in a cluster. This is confirmed by the current practice of radiologists, who consider the presence of more than 10 microcalcifications per square centimeter as highly suspicious for malignancy. Various features measuring alignment, compactness and inertia were also found to be significant.

Combining a few selected features resulted in a two-class discriminant function that correctly classified 64 of the 68 selected mammograms, i.e. with an overall accuracy of 94.1%.

Feature                                Fisher Value   Classification Accuracy (%)
Number of Microcalcifications          73.30          89.7
Range of Alignment I                   46.94          91.2
Max. of Alignment I                    44.57          91.2
Var. of Alignment I                    27.11          88.2
Mean of Alignment I                    26.57          85.3
Minimum Inertia                        21.95          79.4
Minimum Compactness                    16.47          79.4
Range of Alignment II                  11.8           82.4
Max. of Alignment II                   11.52          80.9
Mean of Alignment II                   10.61          80.9
Normalized Std. Dev. of Area           7.06           69.1
Min. Mean Radius                       6.92           67.7
Min. Perimeter                         6.6            64.7
Min. Distance to Center                5.36           55.9
Var. of Alignment II                   5.28           85.3
Normalized Std. Dev. of Perimeter      4.9            66.2
Min. Pair Distance                     4.87           48.5
Average Mean Radius                    4.35           51.5
Average Pair Distance                  4.16           66.2

Table 9.1: Discriminant Power of Features

The three "best" features taken as a combination are:

X_3  = Number of microcalcifications in the cluster
X_75 = Area of the convex polygon enclosing the cluster
X_59 = Variance of mutual alignment of microcalcifications

X_3, the number of microcalcifications in the cluster, is the feature with the most discriminating power. The next best feature, when taken together with X_3, is X_75, which measures the compactness of the cluster. The third best feature, when taken together with X_3 and X_75, is X_59, which measures the degree of mutual alignment of the individual microcalcifications within a cluster. The addition of other features did not significantly improve the classification results. The automatic selection of these features by the software package for the calculation of the discriminant function confirms and quantifies the experience of radiologists in separating the benign and malignant classes.

The resulting discriminant functions for the benign and malignant groups are:

    f_{benign}    = 0.57932 X_3 - 0.00004 X_59 - 6.10094 X_75 - 1.92331      (9.12)
    f_{malignant} = 1.78940 X_3 - 0.00028 X_59 - 15.05314 X_75 - 11.96746    (9.13)

The classification matrix for the entire data set is given in Table 9.2. This table shows that even with the use of over 100 features the two classes in the training set cannot be separated with 100% accuracy. 53 of the 55 benign cases and 11 of the 13 malignant cases were automatically recognized. Given the fact that in current practice only two out of five biopsies prove to be malignant, the sensitivity (85%), specificity (96%), and accuracy (94.1%) figures derived from Table 9.2 represent a major improvement over current clinical practice. Table 9.3 presents the jackknife classification matrix for this data set. The performance accuracy of the algorithm drops to 92.6%.
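Applying the two discriminant functions of Eqs. 9.12-9.13 amounts to evaluating both linear scores and assigning the sample to the class with the larger score (the standard decision rule for such functions, stated here as an assumption). A sketch with illustrative feature values; setting X_59 and X_75 to zero isolates the effect of the microcalcification count X_3:

```python
def classify(x3, x59, x75):
    """Evaluate the two linear discriminant scores (Eqs. 9.12-9.13)
    and return the class with the larger score."""
    f_benign = 0.57932 * x3 - 0.00004 * x59 - 6.10094 * x75 - 1.92331
    f_malignant = 1.78940 * x3 - 0.00028 * x59 - 15.05314 * x75 - 11.96746
    return "malignant" if f_malignant > f_benign else "benign"
```

With the other two features held at zero, the decision flips to "malignant" once X_3 exceeds about 8, consistent with the radiological rule of thumb that clusters of roughly 10 or more microcalcifications are highly suspicious.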
                     "Truth"
Algorithmic          Benign   Malignant
Benign               53       2
Malignant            2        11

Table 9.2: Classification performance of computer algorithms on training images.

                     "Truth"
Algorithmic          Benign   Malignant
Benign               53       3
Malignant            2        10

Table 9.3: Jackknife classification performance of computer algorithms.

9.5 Summary

We evaluated the potential of over 100 features in discriminating benign from malignant microcalcification clusters. The results indicate that a few features, when taken in combination, are capable of successfully discriminating the two classes of mammograms, with a success rate of over 92%. The computational complexity of the feature calculation routines depends on the contents of each image, and in our experience all calculations can be performed within a few seconds on an Apollo 4500 workstation. This time is only slightly longer than image digitization and is sufficiently rapid to make this system usable in a clinical setting to provide a second (objective) opinion to the radiologist's interpretation of the mammogram.

Chapter 10

Conclusions, and recommendations for future work

10.1 Objectives

The objective of this research has been to investigate the following two hypotheses:

1. Using image restoration techniques in digitized mammograms, it is possible to improve the visibility of minute microcalcifications which are common signs of early breast cancer. In some forms of breast cancer these microcalcifications may be present in the vicinity of the primary tumors and are believed to be highly prognostic.

2. Using quantitative image features extracted from microcalcifications it is possible to classify a cluster as benign or malignant.

10.2 Summary of the work

In order to carry out this investigation a series of hardware and software tools were developed. Chapter 3 describes the development and characterization of a film digitization device based on a two-dimensional CCD array sensor.
It is shown that this device can provide the required spatial and photometric resolution necessary for digitizing mammograms.

We have described the derivations of system parameters and noise characteristics and have also implemented measures to reproduce the original image in digital form with a high degree of fidelity.

The distinguishing features of the newly developed system are: i) a fast method of digitizing mammograms and ii) acquisition of images with a high spatial and photometric resolution. Each frame contains over 1.3 million pixels digitized to 12 bits per pixel. The sampling interval is 6.8 µm without optical magnification.

The fixed pattern and the random noise are minimized using background subtraction and signal averaging techniques. The resulting image appears comparable with that obtained by a laser-scanning microdensitometer, at a fraction of the time required.

Chapter 4 reports on the clinical evaluation of digitized images and concludes that the use of the conventional magnification mammography procedure may be substantially reduced.

It is shown in chapter 5 that application of known restoration techniques to digitized mammograms can greatly increase the visibility of image details.
Locally adaptive filters are used in chapter 6 to improve the results of image restoration in the presence of signal-dependent radiographic noise.

The results of the image restoration algorithms on the data base of 30 images show a marked improvement in the detectability of the smallest particles of microcalcifications when judged by a human observer.

In chapter 7, we have given mathematical derivations to show the benefits of using image extension and circular deconvolution in frequency domain image restoration via the DFT.

To test the second hypothesis, we have developed image segmentation routines, including the evaluation of neural network techniques, for the automated detection and extraction of microcalcifications.

Automatic detection and segmentation of microcalcifications may be achieved by the application of algorithmic techniques or by the use of artificial neural networks. We selected two neural network architectures and implemented object detection techniques on them.

The first neural network considered is a modified version of a Hopfield network. One neuron is assigned to each pixel in a digitized mammogram. Each neuron is connected only locally, to a pre-defined neighbourhood. The network dynamics favor the formation of compact regions and lead the system to a stable output representing the required binarized mask of pixels belonging to the microcalcifications.

The second neural network is the classical three-layer perceptron with an error back-propagation scheme for training. The input layer receives the gray level values of the normalized image and the output layer flags the presence of microcalcifications.

Further, we have developed two algorithmic approaches to segment microcalcifications. In the first algorithm, thresholding of the local image grey level histogram is used for object segmentation. In the first pass, each object is labeled and object boundaries are marked, but the objects are not yet segmented from the background.
In the second pass, the discontinuities due to region boundaries are corrected by allocating a unique threshold value to each object, commensurate with the local background.

In an alternative algorithm we employ edge detection to identify the pixels that may potentially belong to microcalcifications. Region growing techniques are then applied and the resulting segmented objects are subjected to tests involving shape, size and gradient.

We have examined over 400 mammograms of patients with biopsy-proven benign or malignant abnormalities. Participating radiologists have performed a blind diagnosis of the images without access to other relevant clinical or pathological information. The images were then digitized at 12 bits with 50 µm or 100 µm sampling intervals. Preliminary results indicate that both algorithmic and neural network based approaches can segment microcalcifications from the background parenchymal pattern, and that a combination of approaches may be used for improved sensitivity and specificity.

Finally, in chapter 9 we have calculated over 100 photometric and morphological features from different microcalcification patterns in 68 digitized mammographic images. These feature vectors were used in the computation of a discriminant function that separates the benign and malignant classes with overall accuracies better than can be obtained subjectively by experienced radiologists.

10.3 Suggestions for future work

The problem of automatic detection and recognition of several abnormalities in mammograms, with clinically acceptable sensitivity and specificity, is as yet unsolved.

New film digitization devices should be designed and built with larger sensors so that all or most of a mammogram can be digitized with pixels of 25 µm.
This is necessary so that image detail up to 20 lp/mm can be captured.

New developments in CCD and associated technologies are making it possible to acquire whole-breast digital images directly from the patient without the use of radiographic film. This development will open new avenues for characterization of the physical imaging system. Application of image restoration techniques to this new direct-digital modality needs to be investigated.

Enhancement of mammographic images to increase the conspicuity of abnormalities is another area of on-going research.

Differentiation of calcifications from dense background needs to be further investigated.

Since only about 50% of malignant diseases of the breast manifest microcalcifications, the next step in the analysis of mammograms will be the detection of masses. Soft tissue image processing algorithms will have to be developed for segmenting masses from the normal breast background. A number of segmentation algorithms need to be evaluated for their effectiveness in discriminating poorly defined masses from normal breast parenchyma. Measurements based on texture will be the major criteria here. Various features of the segmented masses will have to be measured in order to classify them as benign or suspicious for malignancy.

The third step in the analysis of mammograms will be the detection of secondary signs of cancer. This includes comparison of images of both breasts of the same subject. Although the parenchymal pattern and size of the two breasts may not be identical, radiologists consider a lack of symmetry between the two images as suspicious. Some measure of asymmetry can be computed even though exact registration of the two images is often not possible.

Another secondary sign of cancer is the dilation of a single duct. Linear or non-linear edge detection algorithms may be applied to segment the prominent ductal patterns from the background.
The ductal size can then be measured and used as evidence of abnormality.

Skin thickness can be measured and any skin retraction can be located by changes in curvature in unexpected places. Any architectural distortions of the breast will have to be identified.

A degree-of-confidence measure may be assigned to any and all measurements. In this way the combined effects of primary and secondary signs of cancer may be weighed for a final classification of the mammogram as normal or suspicious of malignancy.

Bibliography

[1] L. W. Bassett and R. H. Gold, "Breast Cancer Detection," Grune & Stratton, 1987.

[2] National Cancer Institute of Canada, "Canadian Cancer Statistics," 1992.

[3] Cancer Control Agency of B.C., "B.C. Cancer Statistics," Vancouver: 1988-89.

[4] M. Moskowitz, "Mammography to screen asymptomatic women for breast cancer," American J. Roentgenology, Vol. 143, pp. 457-459, Sept. 1984.

[5] E. S. Paredes, "Atlas of Film-Screen Mammography," Baltimore: Urban & Schwarzenberg, 1989.

[6] J. Tucker and B. Stenkvist, "Whatever happened to cervical cytology automation?," Anal. Cell. Pathol., Vol. 2, pp. 259-266, 1990.

[7] J. Chen, M. J. Flynn, and M. Rebner, "Regional contrast enhancement and data compression for digital mammographic images," in Proc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp. 752-758, 1993.

[8] P. C. Tahoces, J. Correa, M. Souto, C. Gonzalez, L. Gomez, and J. J. Vidal, "Enhancement of chest and breast radiographs by automatic spatial filtering," IEEE Trans. Med. Imag., Vol. 10, No. 3, pp. 330-335, Sept. 1991.

[9] S. Yabashi, M. Hata, K. Kubo, and T. Ishikawa, "Image processing for recognition of tumor on mammography," in Proc. Int. Symp. on Noise and Clutter Rejection in Radars and Imaging Sensors, Japan, Nov. 1989.

[10] R. Gordon and R. M. Rangayyan, "Feature enhancement of film mammograms using fixed and adaptive neighborhoods," Applied Optics, Vol. 23, No. 4, pp. 560-564, 15 Feb. 1984.

[11] A. P.
Dhawan, C. Buelloni, and R. Gordon “Enhancement of mammographic features byoptimal adaptive neighborhood image processing,” IEEE Trans. Med. Image., Vol. 5, No.1, pp. 8-15, March 1986.[12] A. P. Dhawan and E. L. Royer, “Mammographic feature enhancement by computerizedimage processing,” Computer Methods and Programs in Bio-medicine, Vol. 27, pp. 23-35,1988.[13] W. M. Morrow, R. M. Rangayyan, and J. E. L. Desautels “Feature adaptive enhancementand analysis of high-resolution digitized mammograms,” in Proc. Annual mt. Conf. ofIEEE Eng. in Med. and Biol., Vol. 12, No. 1, pp. 165-166, 1990.[14] N. Karssemeijer and L. V. Erning, “Iso-precision scaling of digitized mammograms tofacilitate image analysis,” in Proc. SPIE, Image Processing, Vol. 1445, pp. 166-177, 1991.Bibliography 154[15] W. M. Morrow, R. B. Paranjape, R. M. Rangayyan, and J. E. L. Desautels “Region-basedcontrast enhancement of mammograms,” IEEE Trans. Med. Image., Vol. 11, No. 3, pp.392-406, Sep. 1992.[16] W. Qian, L. P. Clarke, M. Callergi, H. D. Li, R. A. Clark, and L. M. Silbiger, “Adaptiveorder statistic filtering and wavelet transform for feature enhancement in mammography,”in Proc. Annual mt. Conf. of IEEE Eng. in Med. and Biol., Vol. 15, No. 1, pp. 62-63,1993.[17] W. Qian, L. P. Clarke, M. Callergi, H. D. Li, and R. A. Clark, “Tree-structured nonlinearfilters in digital mammography,” IEEE Trans. Med. Image., Vol. 13, No. 1, pp. 25-36,March 1994.[18] A. Lame, S. Song, J. Fan, W. Huda, J. Honeyrnan, and B. Steinbach, “Adaptive mutiscaleprocessing for contrast enhancement,” in Proc. SPIE, Biomedical Image Processing andBiomedical Visualization, Vol. 1905, pp. 521-532, 1993.[19] I. E. Magnin, F. Cluzeau, C. L. Odet, and A. Bremond “Mammographic texture analysis:an evaluation of risk for developing breast cancer,” Optical Engineering, Vol. 25 No. 6,pp. 780-784, June 1986.[20] C. B. Caidwell, S. J. Stapleton, and D. W. 
Holdsworth “Characterization of mammographic parenchymal pattern by fractal dimension,” in Proc. SPIE, Vol. 1092, pp. 10-16,1989.[21] C. B. Caidwell, S. J. Stapleton, D. W. Holdsworth, R. A. Jong, W. J. Weiser, G. Cooke,and M. J. Yaffe “Characterization of mammographic parenchymal pattern by fractaldimension,” Phys. Med. Biol., Vol. 35, No. 2, pp. 235-247, 1990.[22] M. Kallergi, K. Woods, L. P. Clarke, W. Qian, and R. A. Clark, “Image segmentation indigital mammography: comparison of local thresholding and region growing algorithms,”Computerized Medical Imaging and Graphics, Vol. 16, No. 5, pp. 323-331, 1992.[23] S. Hajnal, P. Taylor, M. H. Dilhuydy, B. Barreau, and J. Fox, “Classifying mammograms by density: rationale and preliminary results,” in Proc. SPIE, Biomedical ImageProcessing and Biomedical Visualization, Vol. 1905, pp. 478-489, 1993.[24] H. P. Chan, C. J. Vyborny, H. MacMahon, C. E. Metz, K. Doi, and E. A. Sickles,“Evaluation of digital unsharp-mask filtering for the detection of subtle mammographicmicrocalcifications,” in Proc. SPIE, Medicine XIV / FAGS IV, Vol. 626, pp. 347-348,1986.[25] H. P. Chan, K. Doi, C. J. Vyborny, C. E. Metz, H. MacMahon, P. M. Jokich, and S.Galliorta, “Digital mammography: development of a computer-aided system for detectionof microcalcifications,” in Proc. SPIE, Medical Imaging, Vol. 767, pp. 367-370, 1987.Bibliography 155[26] H. P. Chan, K. Doi, S. Caihorta, C. J. Vyborny, H. MacMahon, and P. M. Jokich,“Image feature analysis and computer aided diagnosis in digital radiography: automateddetection of microcalcifications in mammography,” Medical Physics, Vol. 14, No. 4, pp.538-548, July/Aug. 1987.[27] H. P. Chan, K. Doi, K. L. Lam, C. J. Vyborny, R. A. Schmidt, and C. E. Metz, “Digitalcharacterization of clinical mammographic microcalcifications: applications in computer-aided detection,” in Proc. SPIE, Medical Imaging II, Vol. 914, pp. 591-593, 1988.[28] H. P. Chan, K. Doi, C. J. Vyborny, K. L. Lam, and R. A. 
Schmidt, “Computer-aideddetection of microcalcifications in mammograms, methodology and preliminary clinicalstudies,” Invst. Radiology, Vol. 23, pp.664-71 , 1988.[29] H. P. Chan, K. Doi, C. J. Vyborny, R. A. Schmidt, C. E. Metz, K. L. Lam, T. Ogura, Y.Wu, and H. Macmahon, “Improvement in radiologists’ detection of clustered microcalcifications on mammograms,” Invst. Radiology, Vol. 25, pp. 1102-1110 , 1990.[30] R. M. Nishikawa, M. L. Giger, K. Doi, F. F. Yin, Y. Wu, Y. Jiang, W. Zhang, Z. Huo,U. Bick, C. J. Vyborny, and R. A. Schmidt, “Computer-aided detection and diagnosisof masses and clustered microcalcifications from digital mammograms,” in Proc. SPIE,Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp. 422-432, 1993.[31] Y. Wu, K. Doi, M. L. Giger, and K. M. Nishikawa, “Computerized detection of clusteredmicrocalcifications in digital mammograms: application of artificial neural networks,”Medical Physics, Vol. 19, No. 3, pp. 555-560, May/June 1992.[32] K. S. Woods, J. L. Solka, C. E. Priebe, C. C. Doss, K. W. Bowyer, and L. P. Clarke,“Comparative evaluation of pattern recognition techniques for detection of microcalcifications,” in Proc. of SPIE, Biomedical Image Processing and Biomedical Visualization,Vol. 1905, pp. 841-52, 1993.[33] B. W. Fam, S. L. Olson, P. F. Winter, and F. J. Scholz, “The detection of calcificationclusters in film screen mammograms: a detailed algorithmic approach,” in Proc. SPIE,Med. Imaging II, Vol. 914, pp. 620-634, 1988.[34] B. W. Fam, S. L. Olson, P. F. Winter, and F. J. Scholz “Algorithm for the detectionof fine clustered calcifications on film mammograms,” Radiology, Vol. 169, pp. 333-337,1988.[35] D. H. Davies, D. K. Dance and C. H. Jones, “Automatic detection of clusters of microcalcifications in digital mammograms using local area thresholding techniques,” in Proc.SPIE, Medical Imaging III: Image Processing, Vol. 1092, pp. 153-157, 1989.[36] D. H. Davies, D. K. Dance and C. H. 
Jones, “Automatic detection of microcalcificationsin digital mammograms,” in Proc. SPIE, Medical Imaging IV: Image Processing, Vol.1233, pp. 185-191, 1990.Bibliography 156[37] D. Zhao, M. Shridhar, and D. 0. Daut, “Morphology on detection of calcifications inmammograms,” iii Proc. IEEE mt. Conf. Acous. Speech and Signal Processing, part III,pp. 129-132, 1992.[38] D. Zhao, M. Shridhar, and D. C. Daut, “Rule-based morphological feature extraction ofmicrocalcifications in mammograms,” in Proc. SPIE, Biomedical Image Processing andBiomedical Visualization, Vol. 1.905, pp. 702-715, 1993.[39] A. Strauss, A. Sebbar, S. Desarnaud, and P. Mouillard, “Automatic detection and segmentation of microcalcifications on digitized mammograms,” in Proc. Annual mt. Conf.of IEEE Eng. in Med. and Biol. Vol. 14, No. 5, pp. 1938-1939, 1992.[40] N. Karssemeijer, “A stochastic model for automated detection of calcifications in mammograms,” in Proc. XIIth Conf. on Information Processing in Medical Imaging, WyeCollege, Kent, 1991.[41] N. Karssemeijer, “Recognition of clustered microcalcifications using a random fieldmodel,” in Proc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol.1905, pp. 776-786, 1993.[42] I. N. Bankman, W. A. Christens-Barry, D. W. Kim, I. N. Weinberg, 0. B. Gatewood, andW. R. Brody, “Automated recognition of microcalcification clusters in mammograms,” inProc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp.731-738, 1993.[43] I. N. Bankman, W. A. Christens-Barry, D. W. Kim, I. N. Weinberg, 0. B. Gatewood, andW. R. Brody, “Algorithm for detection of microcalcification clusters in mammograms,”in Proc. Annual mt. Conf. of IEEE Eng. in Med. and Biol. Vol. 15, No. 1, pp. 52-53,1993.[44] J. Dengler, S. Behrens, and J. F. Desaga, “Segmentation of microcalcifications in mammograms,” IEEE Trans. Med. Imag. Vol. 12, No. 4, pp. 634-42, Dec. 1993.[45] H. Barman and G. 
Granlund, “Hierarchical feature extraction for computer-aided analysis of mammograms,” in Proc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp. 492-503, 1993.
[46] W. Qian, L. P. Clarke, M. Kallergi, H. D. Li, R. Velthuizen, R. A. Clark, and M. L. Silbiger, “Tree-structured nonlinear filter and wavelet transform for microcalcification segmentation in mammography,” in Proc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp. 509-520, 1993.
[47] W. B. Richardson Jr., “Wavelet packets applied to mammograms,” in Proc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp. 504-508, 1993.
[48] S. M. Lai, X. Li, and W. F. Bischof, “On techniques for detecting circumscribed masses in mammograms,” IEEE Trans. on Med. Imaging, Vol. 8, No. 4, pp. 377-386, Dec. 1989.
[49] T. K. Lau and W. Bischof, “Automatic detection of breast tumors using the asymmetry approach,” Computers and Biomedical Research, Vol. 24, pp. 273-295, 1991.
[50] F. F. Yin, M. L. Giger, K. Doi, C. E. Metz, C. J. Vyborny, and R. Schmidt, “Computerized detection of masses in digital mammograms: analysis of bilateral subtraction images,” Medical Physics, Vol. 18, No. 5, pp. 955-963, Sep/Oct 1991.
[51] M. L. Giger, F. F. Yin, K. Doi, C. E. Metz, R. Schmidt, and C. J. Vyborny, “Investigation of methods for the computerized detection and analysis of mammographic masses,” in Proc. SPIE, Medical Imaging IV: Image Processing, Vol. 1233, pp. 183-184, 1990.
[52] P. Miller and S. Astley, “Detection of breast asymmetry using anatomical features,” in Proc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp. 433-442, 1993.
[53] D. Brzakovic, X. M. Luo, and P. Brzakovic, “An approach to automated detection of tumors in mammograms,” IEEE Trans. on Medical Imaging, Vol. 9, No. 3, pp. 233-241, Sept. 1990.
[54] D. Brzakovic, P. Brzakovic, and M. Neskovic, “An approach to automated screening of mammograms,” in Proc.
SPIE, Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp. 690-701, 1993.
[55] M. Terauchi and Y. Takeshita, “An approach toward automatic diagnosis of breast cancer from mammography,” in Proc. IEEE Pacific Rim Conf. on Comm. Computers and Sig. Proc., Victoria, BC, Vol. 2, pp. 594-597, May 1993.
[56] W. P. Kegelmeyer Jr., “Computer detection of stellate lesions in mammograms,” in Proc. SPIE, Biomedical Image Processing and Three-Dimensional Microscopy, Vol. 1660, pp. 446-453, 1992.
[57] W. P. Kegelmeyer Jr., “Evaluation of stellate lesion detection in a standard mammogram data set,” in Proc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp. 787-798, 1993.
[58] H. D. Li, M. Kallergi, L. P. Clarke, W. Qian, and R. A. Clark, “Markov random field model for mammogram segmentation,” in Proc. Annual Int. Conf. of IEEE Eng. in Med. and Biol., Vol. 15, No. 1, pp. 54-55, 1993.
[59] S. H. Fox, U. M. Pujare, W. G. Wee, M. Moskowitz, and R. V. P. Hutter, “A computer analysis of mammographic microcalcifications: global approach,” in Proc. IEEE 5th Int. Conf. on Pattern Recognition, pp. 624-631, 1980.
[60] I. E. Magnin, M. ElAlaoui, and A. Bremond, “Automatic microcalcifications pattern recognition from x-ray mammographies,” in Proc. SPIE, Science and Engineering of Medical Imaging, Vol. 1137, pp. 170-175, 1989.
[61] L. Shen, R. M. Rangayyan, and J. E. Leo Desautels, “An automatic detection and classification system for calcifications in mammograms,” in Proc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp. 799-805, 1993.
[62] J. Parker, D. R. Dance, D. H. Davies, L. J. Yeoman, M. J. Michell, and S. Humphreys, “Classification of ductal carcinoma in-situ by image analysis of calcifications from mammograms,” in Proc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp. 832-840, 1993.
[63] G. L. Cole and M. D.
Fox, “Differentiation of mammographic tumor types using the binary joint transform correlator,” in Proc. Annual Int. Conf. of IEEE Eng. in Med. and Biol., Vol. 11, pp. 358-360, 1989.
[64] J. Kilday, F. Palmieri, and M. D. Fox, “Linear discriminant based mammographic tumor classification using shape descriptors,” in Proc. Annual Int. Conf. of IEEE Eng. in Med. and Biol., Vol. 12, No. 1, pp. 434-435, 1990.
[65] J. Kilday, F. Palmieri, and M. D. Fox, “Classifying mammographic lesions using computerized image analysis,” IEEE Trans. Med. Imag., Vol. 12, No. 4, pp. 664-669, Dec. 1993.
[66] K. M. Allain and G. L. Cote, “Computer classification of stellate carcinomas and fibroadenomas using digitized mammograms,” in Proc. Annual Int. Conf. of IEEE Eng. in Med. and Biol., Vol. 15, No. 1, pp. 56-57, 1993.
[67] A. P. Dhawan, Y. S. Chitre, M. Moskowitz, and E. Gruenstein, “Classification of mammographic microcalcification and structural features using an artificial neural network,” in Proc. Annual Int. Conf. of IEEE Eng. in Med. and Biol., Vol. 13, No. 3, pp. 1105-1106, 1991.
[68] A. P. Dhawan, Y. S. Chitre, and M. Moskowitz, “Artificial neural network based classification of mammographic microcalcifications using image structure features,” in Proc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp. 820-831, 1993.
[69] Y. Chitre, A. P. Dhawan, and M. Moskowitz, “Artificial neural network based classification of mammographic microcalcifications using image structure features,” in Proc. Annual Int. Conf. of IEEE Eng. in Med. and Biol., Vol. 15, No. 1, pp. 50-51, 1993.
[70] C. J. Burdett, H. G. Longbotham, M. Desai, W. B. Richardson, and J. F. Stoll, “Nonlinear indicators of malignancy,” in Proc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp. 853-860, 1993.
[71] C. Kimme-Smith, S. Dardashti, and L. W. Bassett, “Glandular tissue contrast in CCD digitized mammograms,” in Proc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol.
1905, pp. 465-470, 1993.
[72] E. T. Y. Chen, J. S. J. Lee, and A. C. Nelson, “The comparison of mammogram images using different quantization methods,” in Proc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp. 465-470, 1993.
[73] S. Astley, I. Hutt, S. Adamson, P. Miller, P. Rose, C. Boggis, C. Taylor, T. Valentine, J. Davies, and J. Armstrong, “Automation in mammography: computer vision and human perception,” in Proc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp. 716-730, 1993.
[74] M. L. Giger, R. M. Nishikawa, K. Doi, F. F. Yin, C. J. Vyborny, R. A. Schmidt, C. E. Metz, Y. Wu, H. MacMahon, and H. Yoshimura, “Development of a smart workstation for use in mammography,” in Proc. SPIE, Image Processing, Vol. 1445, pp. 101-103, 1991.
[75] D. V. Beard, R. E. Johnston, E. Pisano, B. Hemminger, and S. M. Pizer, “A radiology workstation for mammography: preliminary observations, eyetracker studies, and design,” in Proc. SPIE, Medical Imaging V: PACS Design and Evaluation, Vol. 1446, pp. 289-296, 1991.
[76] M. L. Giger, “Future of breast imaging, computer aided diagnosis,” in Syllabus: Categorical Course in Physics, Technical Aspects of Breast Imaging, ed. A. Haus and M. J. Yaffe, Radiological Society of North America, Oak Brook, IL, pp. 257-270, 1992.
[77] A. Cues, A. R. Cowen, and C. J. S. Parkin, “A clinical workstation for digital mammography,” in Proc. SPIE, Biomedical Image Processing and Biomedical Visualization, Vol. 1905, pp. 806-817, 1993.
[78] L. T. Cook, M. L. Giger, L. H. Wetzel, M. D. Murphy, and S. Batnitzky, “Digitized film radiography,” Invest. Radiol., Vol. 24, No. 11, pp. 910-916, 1981.
[79] R. M. Hanson, “Digital radiography,” Phys. Med. Biol., Vol. 33, No. 7, pp. 751-784, 1988.
[80] P. Bunch and R. Vanmeter, “Noise characterization and reduction in a scanning microdensitometer,” in Proc. SPIE, Medical Imaging, Vol. 767, pp. 236-249, 1987.
[81] J. Vranckx and B.
Strul, “Performance evaluation of a laser digitizer,” in Proc. SPIE, Medical Imaging, Vol. 767, pp. 524-528, 1987.
[82] B. R. Whitting, E. Muka, T. E. Kocher, and M. J. Flynn, “High resolution, high performance radiographic scanner,” in Proc. SPIE, Medical Imaging IV: Image Formation, Vol. 1231, pp. 295-305, 1990.
[83] K. Yip, T. E. Kocher, E. Muka, B. R. Whitting, and A. R. Lubinsky, “A performance analysis of medical x-ray film digitizers,” in Proc. SPIE, Medical Imaging IV: Image Formation, Vol. 1231, pp. 508-525, 1990.
[84] S. Batnitzky, S. J. Rosenthal, E. L. Siegel, L. H. Wetzel, M. D. Murphy, C. C. Cox, J. H. McMillan, A. W. Templeton, and S. J. Dwyer III, “Teleradiology: an assessment,” Radiology, Vol. 177, pp. 11-17, 1990.
[85] B. Jaggi, “Design of a quantitative microscope for image cytometry using a solid state detector in the primary image plane,” M.Sc. Thesis, Simon Fraser University, Vancouver, B.C., Canada, 1989.
[86] B. Jaggi, M. J. Deen, and B. Palcic, “Design of a solid state microscope,” Optical Engineering, Vol. 28, No. 6, pp. 675-682, June 1989.
[87] B. Jaggi, S. S. S. Poon, B. Pontifex, J. J. P. Fengler, and B. Palcic, “Evaluation of a quantitative microscope for image cytometry,” in Proc. Society for Analytical Cytology Meeting, Asheville, N.C., March 1990.
[88] B. Palcic, B. Jaggi, and C. MacAulay, “The importance of image quality for computing texture features in biomedical specimens,” in Proc. SPIE, OR laser, Los Angeles, 1990.
[89] R. Holland, S. H. J. Veling, M. Mravunac, and J. H. C. L. Hendriks, “Histologic multifocality of Tis, T1-2 breast carcinomas: implications for clinical trials of breast-conserving surgery,” Cancer, Vol. 56, pp. 979-990, September 1985.
[90] G. T.
Barnes, “The use of bar pattern test objects in assessing the resolution of film/screen systems,” in The Physics of Medical Imaging: Recording System Measurements and Techniques, A. G. Haus, ed., Collected papers of American Association of Physicists in Medicine Summer School, North Carolina, pp. 138-151, July 1979.
[91] K. R. Castleman, “Digital Image Processing,” p. 181, Prentice Hall, N.J., 1979.
[92] B. Jaggi, S. S. S. Poon, C. MacAulay, and B. Palcic, “Imaging systems for morphometric assessment of absorption or fluorescence in stained cells,” Cytometry, Vol. 9, pp. 566-572, 1988.
[93] B. Jaggi, B. Pontifex, J. Swanson, and S. S. Poon, “Performance evaluation of a 12-bit, 8 Mpel/s digital camera,” in Proc. SPIE, Cameras, Scanners, and Image Acquisition Systems, Symposium on Electronic Imaging Science and Technology, San Jose, Jan. 31-Feb. 4, 1993.
[94] A. K. Jain, “Fundamentals of Digital Image Processing,” Prentice-Hall, NY, 1989.
[95] R. Gonzalez, “Digital Image Processing,” 1987.
[96] A. Rosenfeld and A. Kak, “Digital Picture Processing,” 1986.
[97] P. C. Bunch, “Detective quantum efficiency of selected mammographic screen-film combinations,” in Proc. SPIE, Medical Imaging III: Image Formation, Vol. 1090, 1989.
[98] A. G. Haus, Ed., “The Physics of Medical Imaging,” 1979.
[99] D. R. Jacobson and C. R. Wilson, “Focal spot size, magnification and sharpness in mammography,” Med. Phys., Vol. 19, No. 3, p. 832, May/June 1992.
[100] M. Rabbani and R. Shaw, “Detective quantum efficiency of imaging systems with amplifying and scattering mechanisms,” J. of Optical Soc. of America, Vol. 4, p. 495, May 1987.
[101] J. W. Motz and M. Danos, “Quantum noise-limited images in screen-film systems,” in Proc. SPIE, Application of Optical Instrumentation in Medicine X, Vol. 347, pp. 62-66, 1982.
[102] B. R. Hunt and T. M. Cannon, “Nonstationary assumptions for Gaussian models of images,” IEEE Trans. Syst., Man, Cybern., Vol. 6, pp. 876-881, 1976.
[103] D. T. Kuan, A. A. Sawchuk, T. C.
Strand, and P. Chavel, “Adaptive noise smoothing filter for images with signal dependent noise,” IEEE Trans. Pattern Anal. Machine Intell., Vol. 7, pp. 165-177, 1985.
[104] M. Rabbani, “Bayesian filtering of Poisson noise using local statistics,” IEEE Trans. Acous. Speech & Sig. Proc., Vol. 36, No. 6, June 1988.
[105] A. M. Tekalp and M. I. Sezan, “Quantitative analysis of artifacts in linear space-invariant image restoration,” Multidimensional Systems and Signal Processing, Vol. 1, pp. 143-177, 1990.
[106] J. W. Woods, J. Biemond, and A. M. Tekalp, “Boundary value problem in image restoration,” in Proc. sixth Int. Conf. Acous. Speech and Signal Proc., pp. 18.11.1-18.11.4, 1985.
[107] B. White and D. Brzakovic, “Two methods of image extension,” Computer Vision, Graphics and Image Processing, Vol. 50, pp. 342-352, 1990.
[108] H. Lim, K. C. Tan, and B. T. G. Tan, “Edge errors in inverse and Wiener filter restoration of motion-blurred images and their windowing treatment,” CVGIP: Graphical Models and Image Processing, Vol. 53, No. 2, pp. 186-195, March 1991.
[109] H. C. Tan, H. Lim, and B. T. G. Tan, “Windowing techniques for image restoration,” CVGIP: Graphical Models and Image Processing, Vol. 53, No. 5, pp. 491-500, Sept. 1991.
[110] A. K. Jain, “Fundamentals of Digital Image Processing,” Prentice Hall, NY, p. 283, 1989.
[111] F. J. Harris, “On the use of windows for harmonic analysis with the discrete Fourier transform,” Proc. of the IEEE, Vol. 66, No. 1, Jan. 1978.
[112] B. R. Hunt and T. M. Cannon, “Nonstationary assumptions for Gaussian models of images,” IEEE Trans. Acous. Speech & Sig. Proc., Vol. 36, No. 6, June 1988.
[113] T. Y. Young and K. S. Fu, “Handbook of Pattern Recognition and Image Processing,” Academic Press Inc., 1986.
[114] R. Vistnes, “Texture models and image measures for texture discrimination,” Int. J. Computer Vision, Vol. 3, pp. 313-336, 1989.
[115] S. K. Pal and N. R.
Pal, “Segmentation based on measures of contrast, homogeneity and region size,” IEEE Trans. Sys. Man. Cyber., Vol. SMC-17, No. 5, Sept./Oct. 1987.
[116] D. Marr and E. C. Hildreth, “Theory of edge detection,” Proc. of the Royal Society of London, Part B, Vol. 207, pp. 187-217, 1980.
[117] B. B. Mandelbrot, “The Fractal Geometry of Nature,” Freeman, San Francisco, Ca., 1982.
[118] R. M. Haralick, S. R. Sternberg, and X. Zhuang, “Image analysis using mathematical morphology,” IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. PAMI-9, No. 4, July 1987.
[119] W. C. Lin, E. C. K. Tsao, C. T. Chen, and Y. J. Feng, “Neural networks for medical image segmentation,” in Proc. SPIE, Image Processing, Vol. 1905, pp. 376-385, 1991.
[120] A. Ghosh, N. R. Pal, and S. K. Pal, “Object background classification using Hopfield type neural network,” Int. J. Pattern Recognition and Artificial Intelligence, Vol. 6, No. 5, pp. 989-1008, 1992.
[121] M. Lanyi, “Morphologic analysis of microcalcifications,” in Early Breast Cancer, Ed. J. Zander and J. Baltzer, Springer-Verlag, 1985.
[122] C. Chen, “Statistical Pattern Recognition,” Spartan, 1973.
[123] S. K. Pal and A. Rosenfeld, “Image enhancement and thresholding by optimization of fuzzy compactness,” Pattern Recognition Letters, Vol. 7, pp. 77-86, 1988.
[124] R. Y. Wong and E. L. Hall, “Scene matching with invariant moments,” Computer Graphics and Image Processing, Vol. 8, pp. 16-24, 1978.
[125] Kodak, “Image Insight: geometry of image formation,” 1988.
[126] W. Webb, Ed., “The Physics of Medical Imaging,” 1988.
[127] Kodak, “Image Insight: screen imaging,” 1988.
[128] V. L. Newhouse, “Progress in Medical Imaging,” 1988.
[129] A. G. Haus, Ed., “The Physics of Medical Imaging,” 1979.
[130] R. M. Nishikawa and M. J. Yaffe, “Signal to noise properties of mammographic film-screen systems,” Med. Phys., Vol. 12, No. 1, June 1985.
[131] A. G. Haus, “Evaluation of image blur in medical imaging,” Medical Radiography and Photography, Vol. 61, Nos. 1-2, 1985.
[132] F. S.
Alcorn, “The protocol and results of training non-radiologists to scan mammograms,” Radiology, Vol. 99, pp. 523-529, June 1971.
[133] J. N. Wolfe, “Risk for breast cancer development determined by mammographic parenchymal pattern,” Cancer, Vol. 37, pp. 2486-2492, 1976.
[134] R. G. Payster et al., “Mammographic parenchymal patterns and the prevalence of breast cancer,” Radiology, Vol. 125, pp. 387-391, Nov. 1977.
[135] R. L. Egan et al., “Mammographic parenchymal patterns and risk of breast cancer,” Radiology, Vol. 133, pp. 65-70, Oct. 1979.
[136] P. M. Krook et al., “Mammographic parenchymal patterns as a risk indicator for prevalent and incident cancer,” Cancer, Vol. 41, pp. 1093-1097, 1978.
[137] D. F. Rideout et al., “Patterns of breast parenchyma on mammography,” Journal of the Canadian Association of Radiologists, Vol. 28, Dec. 1977.
[138] N. F. Boyd et al., “Observer variation in the interpretation of xero-mammograms,” JNCI, Vol. 68, No. 3, March 1982.
[139] A. G. Gale et al., “Computer aids to mammographic diagnosis,” The British J. of Radiology, Vol. 60, pp. 887-891, 1979.
[140] M. Lanyi, “Morphologic analysis of microcalcifications,” in Early Breast Cancer, Ed. J. Zander and J. Baltzer, Springer-Verlag, 1985.
[141] E. A. Sickles, “Mammographic features of early breast cancer,” American Journal of Roentgenology, Vol. 143, pp. 461-464, Sept. 1984.
[142] E. J. Baines, D. V. McFarlane, A. B. Miller, and collaborating radiologists, “Sensitivity and specificity of first screen mammography in 15 NBSS centres,” Journal of Canadian Association of Radiologists, Vol. 39, pp. 273-276, 1988.
[143] S. Edeiken, “Mammography and palpable cancer of the breast,” Cancer, Vol. 61, pp. 263-265, 1988.
[144] P. E. Burns, M. G. A. Grace, A. W. Lees, and C. May, “False negative mammograms causing delay in breast cancer diagnosis,” J. Can. Assoc. Radiol., Vol. 30, pp. 74-76, 1979.
[145] B. D. Mann, A. E. Guiliano, L. W. Bassett, M. S. Barber, W. Hallauer, and D. L.
Morton, “Delayed diagnosis of breast cancer as a result of normal mammograms,” Arch. Surg., Vol. 118, pp. 23-24, 1983.
[146] O. J. Walker, V. Gebski, and A. O. Langlands, “The misuse of mammography in the management of breast cancer revisited,” Med. J. Australia, Vol. 151, pp. 509-512, 1989.
[147] J. B. Buchanan, J. S. Spratt, and L. S. Heuser, “Tumor growth, doubling times and the inability of the radiologist to diagnose certain cancers,” Radiol. Clin. North America, Vol. 21, pp. 115-126, 1983.
[148] J. B. Martin, M. Moskowitz, and J. R. Milbrath, “Breast cancer missed by mammography,” American J. Roentgenol., Vol. 132, pp. 737-739, 1979.
[149] L. Kalisher, “Factors influencing false negative rates in xeromammography,” Radiology, Vol. 133, pp. 297-301, 1979.
[150] J. N. Wolfe, K. A. Buck, M. Salane, and N. J. Parekh, “Xeroradiography of the breast: overview of 21,057 consecutive cases,” Radiology, Vol. 165, pp. 305-311, 1987.
[151] S. A. Feig, G. S. Shaber, A. Patchefsky, C. F. Schwartz, J. Edeiken, H. I. Libshitz, R. Nerlinger, R. F. Curley, and J. D. Wallace, “Analysis of clinically occult and mammographically occult breast tumors,” Am. J. Roentgenol., Vol. 128, pp. 403-408, 1977.
[152] L. L. Fajardo, B. J. Hillman, and C. Frey, “Correlation between breast parenchymal patterns and mammographers’ certainty of diagnosis,” Invest. Radiol., Vol. 23, pp. 505-508, 1988.
[153] R. Holland, J. H. C. Hendriks, and M. Mravunac, “Mammographically occult breast cancer: a pathologic and radiologic study,” Cancer, Vol. 52, pp. 1810-1819, 1983.
[154] D. Ikeda and I. Andersson, “Ductal carcinoma in situ: atypical mammographic appearances,” Radiology, Vol. 172, pp. 661-666, 1989.
[155] P. H. M. Peeters, A. L. M. Verbeek, J. H. C. L. Hendriks, R. Holland, M. Mravunac, and G. P. Vooijs, “The occurrence of interval cancers in the Nijmegen screening programme,” Br. J. Cancer, Vol. 59, pp. 929-932, 1989.
[156] B. Moskovic, H. D. Sinnett, and C. A.
Parsons, “The accuracy of mammographic diagnosis in surgically occult breast lesions,” Clinical Radiology, Vol. 41, pp. 344-346, 1990.
[157] Council on Scientific Affairs, “Council report: Early Detection of Breast Cancer,” Journal of American Medical Association, Vol. 252, No. 21, Dec. 1984.
[158] M. L. Giger and K. Doi, “Investigation of basic imaging properties in digital radiography,” Medical Physics, Vol. 11, No. 3, pp. 287-295, May/June 1984.
[159] D. Goldgof and R. Acharya, Eds., Biomedical Image Processing and Biomedical Visualization, Proc. of SPIE Biomedical Image Processing Conf., Vol. 1905, San Jose, Ca., 1993.
[160] A. G. Gale, S. M. Astley, D. R. Dance, and A. Y. Cairns, Eds., Digital Mammography, Proc. of the 2nd Int. Workshop on Digital Mammography, York, England, Elsevier: NY, 1994.
[161] K. W. Bowyer and S. Astley, Eds., State of the Art in Digital Mammographic Image Analysis, World Scientific: London, 1994.

Appendix A

Mammography

In this Appendix I will describe the process of mammographic image formation and review the factors affecting primary diagnosis of breast cancer from mammographic images.

A.1 Mammographic Image Formation System

The components of a typical X-ray imaging system are shown in Figure A.1. The photons emitted by the X-ray tube enter the patient, where they may be scattered, absorbed or transmitted without interaction. The primary photons recorded by the image receptor form the image, but the scattered photons create a background signal which reduces contrast.

The receptor is a combination of a fluorescent screen and the radiographic film. The screen is added to increase the detective efficiency of the receptor. After chemical development, the film is illuminated and viewed by a radiologist at a distance and magnification appropriate to the detail in the image. The film is viewed as a transparency to provide a greater dynamic range of intensity.
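The transparency's dynamic-range advantage can be made concrete with the definition of optical density used later in this appendix, OD = -log10(T), where T is the transmission ratio. A minimal sketch; the sample values are illustrative only:

```python
import math

def optical_density(transmission: float) -> float:
    """Optical density OD = -log10(T), with T the fraction of light transmitted."""
    return -math.log10(transmission)

def transmission(od: float) -> float:
    """Inverse relation: T = 10**(-OD)."""
    return 10.0 ** (-od)

# A film viewed as a transparency reaches densities above 3, i.e. its darkest
# regions pass less than a thousandth of the viewbox light; a photographic
# print tops out near OD 2 (roughly a hundredth of the incident light).
print(optical_density(0.001))   # OD of a region transmitting 0.1% of the light
print(transmission(2.0))        # transmission ratio at OD 2
```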
The maximum optical density of a film is over 3, while that of a photographic print is only 2 on a logarithmic scale.

In a variation of screen-film mammography, a dry non-silver photographic system known as Xeroradiography is used. In a process similar to photocopying, the photoconductive properties of amorphous selenium powder are exploited. The resulting image has better spatial resolution and is edge enhanced. This technique has poor broad area contrast and is slower than screen-film receptors, thus producing 5 to 10 times more radiation exposure to the patient.

Interpretation of a mammogram directly depends on the quality of the mammographic image. The intensity patterns in a mammogram are composed of the image of the breast degraded by the system transfer function and embedded in noise. The breast image is made up of the image of any abnormality superimposed on the image of the background anatomic structures. These abnormalities must be observed, differentiated and classified as benign or malignant.

Figure A.1: X-ray mammography system. (Focal spot, beam limiter, compressed breast with a mass/density, grid, photocell, and screen-film receptor; the penumbra arises from the finite focal spot.)

The perception of the sharpness of the signal is influenced by two factors: contrast and extent of blurring. Both of these factors are affected by noise. I will first consider various components along the imaging path and characterize their overall transfer functions, then consider sources of noise and numerically specify a typical screening mammography unit. Effects of digitization are also considered. The numerical measurements given are for the CGR X-ray mammography unit at the B.C. Breast Screening Centre.

A.2 Sources of Image Blur in Mammography

Image blur is caused by (a) motion; (b) geometric distortion; and (c) the screen-film receptor.

A.2.1 Motion Blur

Motion blur may be due to patient movement or involuntary organ movement.
The degree of motion blur is proportional to exposure time. For a given electron beam voltage setting (in kV), exposure time cannot be reduced below a minimum without adversely affecting the contrast. This minimum exposure time is a characteristic of the screen-film combination and is automatically controlled by a photocell measuring the exposure. The use of a breast compression device reduces motion blur to negligible levels.

A.2.2 Geometric Distortion

Geometric distortion is a function of (a) effective focal spot size; (b) focal spot to object distance; and (c) object to film distance [125]. The effective focal spot size is the actual focal spot size foreshortened by the anode angle. The anode angle cannot be smaller than about 10° due to the so-called "heel" effect. The heel effect is the absorption of x-ray photons by the anode itself. This absorption is strongest for rays that are almost parallel to the anode surface. The heel effect contributes to the non-uniformity of the x-ray field, weakening the exposure on the anode side of the film (point C in Figure A.2). This non-uniformity is over and above the non-uniformity due to the inverse square law, which causes the central ray (point A in Figure A.2) to be stronger than the peripheral rays (points B and C). For a given kV setting the size of the focal spot is limited by the heat dissipation capacity of the anode. Rotating anodes achieve much smaller focal sizes [126]. The effective focal spot size for screening mammography is 0.3 mm. For minimum distortion the subject should be imaged parallel to the film and perpendicular to the central ray. Higher focal-spot to subject distances provide more uniform radiation and reduce the penumbra shadow but require increased exposure time or higher kV settings. High subject-film distances increase magnification but also increase the blur.

In dedicated mammographic units the anode to receptor distance is set to 65 cm and the breast is in contact with the receptor.
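The penumbra and magnification figures quoted in the next paragraph follow from similar triangles, using the 0.3 mm effective focal spot and the 65 cm anode-to-receptor distance given above. A short sketch; the 4.5 cm breast thickness is the average value the text assumes:

```python
def penumbra_mm(focal_spot_mm: float, source_to_film_cm: float,
                object_to_film_cm: float) -> float:
    """Geometric unsharpness by similar triangles: penumbra = f * d2 / d1,
    where d1 is the focal-spot-to-object distance and d2 the
    object-to-film distance."""
    d1 = source_to_film_cm - object_to_film_cm
    return focal_spot_mm * object_to_film_cm / d1

def magnification(source_to_film_cm: float, object_to_film_cm: float) -> float:
    """Magnification ratio: source-to-film over source-to-object distance."""
    return source_to_film_cm / (source_to_film_cm - object_to_film_cm)

# 0.3 mm effective focal spot, 65 cm anode-to-receptor distance, and a
# structure at the top of an average 4.5 cm compressed breast (the worst
# case for both quantities):
p_um = penumbra_mm(0.3, 65.0, 4.5) * 1000.0   # just over 22 µm, i.e. below 23 µm
m = magnification(65.0, 4.5)                  # about 1.07
print(f"penumbra = {p_um:.1f} µm, magnification = {m:.3f}")
```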
If the average breast thickness is taken as 4.5 cm, the penumbra shadow will thus be less than 23 µm. The magnification ratio of an object 4.5 cm from the film will be 1.07.

Figure A.2: The heel effect. (Point A, on the central ray, receives 100% intensity; along AB the intensity rises slightly above 100% and then generally decreases toward B; along AC, on the anode side, the intensity decreases considerably toward C.)

A.2.3 The Radiographic Receptor

The receptor is a screen-film combination. The screen is a layer of fluorescent material that absorbs almost 90% of x-ray energy and fluoresces with hundreds of times more photons, usually in the blue range. Use of intensifying screens reduces patient dose by an order of magnitude. Double emulsion films are usually sandwiched between two screens. They reduce exposure requirements but contribute to blur. For this reason only single emulsion films are used in mammography. The main source of blur in x-ray mammography, other than the focal spot size, is the screen [127].

Factors affecting receptor blur are:

• Distance of phosphor particles in the screen from the film (i.e. layer thickness) and any air gap: thinner layers produce lower blur but are also less efficient in X-ray photon
absorption and conversion.
• Phosphor particle size: larger particles produce more exposure and more blur.
• Presence of a reflector layer: increases the photon absorption efficiency of the film, and the effective distance of fluorescent particles to the film.
• Presence of light absorbing pigments or dyes in the screen: reduces scatter.
• Cross-over ratio of unabsorbed x-ray photons: increases blur on double emulsion films.
• Film emulsion type, density and grain size.

High quality screen-film combinations allow:

• Lower patient exposure through a lower kV setting and/or lower exposure time.
• Faster response and hence less motion blur.
• Longer tube life through lower kV.

Factors affecting exposure time are:

• Inherent film speed.
• Absorption efficiency of the screen, i.e. screen thickness, packing density, phosphor type and size.
• Spectral match of the film and screen.
• Fluorescent efficiency of the screen, i.e. ratio of absorbed x-ray to light produced.
• Presence of a reflecting layer in the screen.
• Presence of light absorbing dyes in the screen.

The above parameters are used by the manufacturers of screen-film systems to obtain acceptable images at the lowest possible exposure to the patient. We will need to consider these factors to understand the process of image formation and subsequently develop algorithms for image restoration of mammograms.

A.3 The System Characteristic Parameters

A.3.1 The Modulation Transfer Function

The overall degradation of images can be measured by test objects [125]. Neglecting the effects of noise and poor contrast, we can measure or calculate the system point spread function (PSF), the line spread function (LSF), the square wave response function (SWRF), the edge response function (ERF) or the modulation transfer function (MTF). The MTF is the magnitude of the optical transfer function (OTF). The MTFs of three popular screen-film combinations are given in Figure A.3.

In the screening mammography program in B.C.
the electron beam energy utilized is usually about 27 kV and the cathode current is set to the 100 to 250 mAs range. The fluorescent screen is a single Kodak Min-R medium screen with an Ortho-M single emulsion film. This arrangement gives a spatial resolution of 16 cycles/mm when the MTF is 4% of its peak value. The films are processed on an X-omatic processor on extended cycle.

Figure A.3: MTF curves for three Kodak screen-film combinations [131]. (Min-R/Ortho M, Lanex Medium/T-MAT G, and Lanex Regular/T-MAT G, over spatial frequencies up to 15 cycles/mm.)

A.3.2 Contrast

Contrast is a function of both inherent subject contrast and characteristics of the screen-film combination. The subject contrast, S, is defined as the ratio of the difference in x-ray fluence incident on the receptor between an image point and an arbitrary reference point to the mean value of the two fluences.

Since S ranges from 0 to 2, a percent contrast is often quoted, given by 100(S/2). The subject contrast is affected by the scatter-to-primary ratio R = S'/P, the thickness of the target tissue and the difference in linear attenuation coefficients of the normal tissue and any
Thedynamic range for screening mammography is over 3 optical densities. The lowest densities arelimited by base (the natural tint of the film) and fog (background exposure).Let C(u, v) be the Fourier Transform of the x-ray fluence distribution after passing throughthe subject. If we consider a point object of contrast S then the x-ray fluence is S5(x,y)and C(u, v) = 5, a constant. If we assume the focal spot to be negligibly small and take theModulation Transfer Function of the film to be 1, then the MTF of the system is due to thatof the screen MTF3(u,v). The resulting image contrast isG . log10e . MTF3(u, v). C(u, v) (A.1)where G is the gradient of the film characteristic curve at optical density of 1 [130]. In generalthe point slope of the characteristic curve q(Q) should be used instead of G, since G is aconstant and does not reflect the non-linearity of the film, where Q is the number of incidentquanta/mm2.The contrast transfer function, (CTF), is defined asCTF(u, v, q) = MTF(u, v) . q(Q) (A.2)Detectability of an object, of course, depends not only on the contrast but also on the size.The minimum dose required to visualize an object increases as the inverse fourth power of thesize of the object [126].Appendix A. Mammography 174VARIATION OF CONTRASTWITH PHOTON ENERGY1— — I0.1Ca5(P04)30H -C0.01- Gland.0.001 I. I.10 20 30 - 40 50Energy (keV)Figure A.4: Variation of contrast with photon energy [126]Appendix A. Mammography 175SCREEN-FILMCHARACTERISTIC CURVESA. Characteristics of two screen film combinationsKodak Ortho.G Filna/Lanex Screens Kodak Oitho-G Film/Lanex ScreensBase+FogDensity=O.19 Base+FogDensity=O.19__0t=2.65Log Relative ExposureB. Comparison of characteristic curves of film (A) withscreen-film (B)________________3.0I I I I I0 0.5 1.0 1.5 2.0 2.5Log (relative exposure)Figure A.5: Screen-film characteristic curvesAppendix A. Mammography 176A.3.3 NoiseNoise in radiography is due to artifacts and mottle. 
There are three sources of mottle: (1) film graininess; (2) quantum mottle, the random distribution of absorbed x-ray quanta; and (3) structure mottle, microscopic non-uniformities of the screen. The signal to noise ratio (SNR) is defined as

    SNR = ΔD / sqrt(W(u, v))    (A.3)

where ΔD is the optical density difference produced by the object and W(u, v) is the Wiener noise power spectrum. In general W(u, v, Q) is a function of the exposure Q. Substituting from equation A.1 above and taking the subject contrast to be S, we get

    SNR(u, v) = G · log10(e) · S · MTF_s(u, v) / sqrt(W(u, v))    (A.4)

    SNR(u, v) = S · sqrt(NEQ)    (A.5)

where NEQ, the noise equivalent quanta, is a measure of performance. The effect of the receptor on the SNR is given by the detective quantum efficiency

    DQE = SNR_out^2 / SNR_in^2    (A.6)

The ideal detector, one that counts every photon, would have a DQE of 1 [130]. Figure A.6 gives the quantum noise, film noise and total noise for the Min-R screen with Ortho-M film combination [130]. Clearly, at high spatial frequencies the film noise dominates. The noise power due to screen mottle is only about 0.2% of the total noise power. The ability of the film to record quantum mottle decreases to negligible levels beyond 10 cycles/mm due to the effects of the system MTF. Therefore the detection of small objects is limited by the film granularity.

Figure A.6: Noise power spectrum; film noise (solid line); quantum noise (dash line); and total noise [130]

A.3.4 The X-ray Scatter

During passage through the subject, x-rays are scattered. The thicker or denser portions of the breast produce more scatter [128]. Use of a compression device reduces the scatter. Figure A.7
gives a plot of the scattered to primary radiation ratio as a function of the radiation field and the thickness of the breast. As the field size increases, typically above 8 cm, the ratio increases only slowly because the x-rays are absorbed within the breast. The receptor usually has a higher efficiency for scattered radiation, worsening the ratio.

A grid is often employed to reduce the incidence of scattered radiation on the screen-film. Grids may be stationary and focused, with absorption ratios of 3 to 15. The shadow of this grid may be a disturbing artifact on the film. Moving grids are employed in screening mammography units to reduce the effect of this shadow. The grid has 44 lines per centimeter and achieves a grid ratio of 5:1. A scatter degradation factor may be defined as SDF = P/(P + S'), where P is the primary radiation intensity and S' is the scattered radiation intensity in a given local neighborhood. For a unit with an S'/P ratio of 7, typical of chest radiography, we have SDF = 0.125. If a 12:1 grid is now employed, the measured value of SDF improves to 0.5 [128]. The primary beam is also attenuated by the grid. The primary transmission factor Tp for this grid is 0.62 and hence the overall detective quantum efficiency due to scatter and grid is DQE = Tp · SDF = 0.31. Use of a grid requires an increased exposure by a factor of about 2 to 8 and gives a contrast improvement of 1.5 to 3.5 [126].

A.3.5 Digitization

The analogue x-ray image is sampled and quantized to form a digital image. The digital image is then processed and displayed. Following the work of Giger et al. [158] I assume a square sampling grid and a finite sampling aperture. In general, the sampling interval may not be equal to the aperture. This is the case for 2-D CCD cameras with a fill factor less than unity. When they are equal, we can refer to congruent pixels that completely cover the scene. The analogue image is modulated by the sampling aperture to form the pre-sampling image. This image is then operated on by a two-dimensional comb function. The resulting optical transfer function is:
Figure A.7: Ratio of scattered to primary radiation as a function of radiation field [128] (phantoms 6 cm and 8 cm thick; abscissa: diameter of circular radiation field, cm)

    OTF(u, v) = {[OTF_A(u, v) · OTF_S(u, v)] * comb(u, v, 1/Δx, 1/Δy)} · OTF_F(u, v) · OTF_D(u, v)

where A, S, F and D refer to the analogue image, the sampling aperture, the linear filter and the display unit respectively, and * denotes convolution. If the image preprocessing filter is non-linear the analysis will be more complex. If the apertures for sampling and display are squares of widths W_S and W_D, then

    MTF(u, v) = {[MTF_A(u, v) · sinc(π·W_S·u) · sinc(π·W_S·v)] * comb(u, v, 1/Δx, 1/Δy)} · MTF_F(u, v) · sinc(π·W_D·u) · sinc(π·W_D·v)

where Δx and Δy are the sampling intervals. In our case Δx = Δy = W_S = 0.1 mm. The display aperture on a 640 x 480 VGA monitor is 0.375 mm x 0.4375 mm. The digital MTF shows a 'false' increase over the analogue MTF due to aliasing. To minimize the effects of aliasing, the sampling and display intervals should be decreased. The Nyquist rate for a resolution of 16 cycles/mm (4% MTF of the analogue screen-film) corresponds to pixel sizes of 31 μm.

The digitization process also affects the noise. Assuming square sampling and display pixels of widths W_S and W_D respectively:

    Total noise = {[WS_A(u, v) · sinc^2(π·W_S·u) · sinc^2(π·W_S·v)] * comb(u, v, 1/Δx, 1/Δy)} · MTF_F^2(u, v) · sinc^2(π·W_D·u) · sinc^2(π·W_D·v) + WS_E(u, v)

where WS is the Wiener noise power spectrum and E refers to the electronic noise. The effect of digitization on the signal to noise ratio and the threshold contrast can thus be determined.

A.4 Mammographic Image Interpretation

Reading of mammograms is a highly skilled task requiring long training periods [132]. In screening applications, the films are classified into two groups, "normal" or "suspicious", the latter requiring further investigation.
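Returning to the digitization analysis of section A.3.5, the Nyquist-rate calculation can be made explicit. This is a minimal sketch; the helper name is mine.

```python
def nyquist_pixel_size_um(f_max_cycles_per_mm):
    """Largest sampling interval that avoids aliasing for a signal
    band-limited to f_max cycles/mm: 1/(2*f_max), converted to micrometres."""
    return 1000.0 / (2.0 * f_max_cycles_per_mm)

# The 16 cycles/mm screen-film limit (4% MTF) calls for ~31 um pixels,
# compared with the 100 um sampling interval used in this work.
print(nyquist_pixel_size_um(16.0))   # 31.25
```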
For diagnostic purposes the type and extent of the abnormality is also determined, leading to a recommendation for a treatment modality. For biopsy, excision or other surgery, the location of the abnormality must also be specified.

It has been claimed that the mammographic parenchymal pattern is an indicator of the risk of breast cancer. Wolfe [133] demonstrated a strong relationship between density patterns and the risk of cancer. In current radiological practice, mammograms are classified into one of four "Wolfe grades" and closer observation at shorter time intervals is recommended for the high risk group. These grades, in order of increasing risk, are:

• N1: Breasts composed primarily of fat

• P1: Prominent ducts in the subareolar region involving approximately one third of the breast

• P2: Prominent duct pattern involving the major portion of the breast

• DY: Considerable amount of collagen or dysplasia with or without identified ducts.

Although several researchers [134, 135, 136, 137] have reported results contradicting Wolfe's findings, the Wolfe classification scheme is still used.

There is a good deal of variability in classifying mammograms. The variations are not only inter-observer but also intra-observer, for the same observer at different time intervals [138]. Individual radiologists in general rely on their experience and do not employ quantitative measurements in their line of reasoning for classification. Several studies have attempted to quantify these mammographic features and relate them either to the risk of cancer or to pathologically proven cases. In an early attempt, a computer aided package called MAMMCAD [139] was developed as an expert system to aid the radiologist.
The common signs of abnormality can be divided into three categories, namely calcifications, masses and secondary signs of cancer.

Clusters of microcalcifications are present in over 50% of malignant diseases of the breast. Because of their radio-opacity, they constitute the best clue for early detection, and over half of clinically occult breast malignancies are discovered on the basis of microcalcifications alone [140]. Microcalcifications are typically smaller than 0.5 mm each, and a group of at least 3 within 1 cm2 constitutes a cluster [141]. 75%-80% of clustered calcifications prove to be within benign lesions, but biopsy is performed in most cases in order to detect the 20%-25% that are cancerous. Malignant clustered calcifications have thin linear, curvilinear and branching shapes, while round or oval shapes indicate benign lesions. Consistent use of shape for differential diagnosis is difficult, mainly due to the small size of microcalcifications. Malignant masses are invasive of the surrounding tissue and consequently are often spiculated and ill defined. The absence of a well-defined edge makes detection difficult, especially in dense fibroglandular tissue. It also makes differential diagnosis of benign and malignant masses difficult. Table A.1 gives common features of images of benign and malignant lesions.

The most difficult of early cancers to find by mammography are those that contain no calcification and are also surrounded by isodense tissue, impairing delineation of their tumor masses. These can only be detected by secondary signs of cancer, which include a single dilated duct, architectural distortion, asymmetry between the right and left breasts, and a developing density as compared to a previous mammogram.

A.5 Radiographically Occult Cancers

Perception of an object in a radiographic image depends on its size, the amount of illumination, the sharpness of object boundaries, the degree of contrast and the presence of noise.
The mammalian visual system has evolved to be responsive to very low fluxes of photons. Since a nerve pulse involves the movement of millions of atoms or ions, and since the energy of a single photon is only able to disturb a single atom or molecule, the visual system is a highly efficient photomultiplier. The central region of the human retina, the fovea, which at the focal plane of the lens subtends an angle of about 1.5°, is lined almost entirely with closely packed cones. Within the fovea, the spacing of the cones is sufficiently close (about 10 μm) to enable a grating resolution of about 60 cycles per degree.

If a film is viewed at a distance of 30 cm, one degree of fovea images about 5 mm of the film. This corresponds to a limiting spatial resolution of 12 cycles per mm. This limit is attainable only for sufficiently high levels of illumination and low levels of image noise.

Table A.1: Radiographic characteristics of breast masses

Feature              | Benign                                  | Malignant
Relative density     | Slight to marked increase               | Always definite increase, denser than benign
Character of density | Homogeneous                             | Non-homogeneous, center densest
Shape                | Round, oval, lobulated                  | Tentacled, ragged, spiculated, variable; spicules heavier toward nipple
Borders              | Well-circumscribed, regular and smooth; thin layer of surrounding fat | Poorly circumscribed, irregular, fuzzy or halo
Surrounding tissues  | Not invaded; displaced trabeculae pushed aside smoothly; no increased vascularity | Infiltrated; trabeculae retracted irregularly and thickened; increased vascularity
Calcifications       | Coarse, isolated, few and countable, not punctate, more apt to have polarity, widely scattered, similar in density, may be in periphery of lesion | Numerous, punctate, uncountable, variable density, confined to a measurable area, less polarity, diffuse in lesion, more central
Relative size        | Same size or larger than clinical measurement | Smaller than clinical measurement, often by a factor of 2-4
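The viewing-geometry argument above (a 60 cycles/degree foveal limit viewed at 30 cm) can be checked numerically. This is a small sketch with names of my own choosing.

```python
import math

def film_resolution_cycles_per_mm(eye_cycles_per_degree, viewing_distance_mm):
    """Map the eye's angular resolution onto the film plane: one degree of
    visual angle covers about d*tan(1 degree) of film at viewing distance d."""
    mm_per_degree = viewing_distance_mm * math.tan(math.radians(1.0))
    return eye_cycles_per_degree / mm_per_degree

# 60 cycles/degree viewed at 30 cm: roughly 11 to 12 cycles/mm on the film,
# consistent with the limiting resolution of about 12 cycles/mm quoted above.
print(film_resolution_cycles_per_mm(60.0, 300.0))
```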
Very low spatial frequencies, below 0.2 cycles per degree, are also difficult to perceive, as can be seen from Figure A.8 [126]. The spatial contrast sensitivity of the eye is tuned to sharp edges of about 1 to 3 cycles per degree and is also a function of luminance. In general, there is an inverse relationship between the size of the object and the minimum contrast required for visibility. These limitations of the human visual system contribute to the misclassification of mammograms.

If it is assumed that a "true" answer exists to the question of whether an abnormality is present, for example from subsequent pathological examination of excised tissue, then the radiological decision can be classified into four categories: true positive (TP), false positive (FP), true negative (TN) and false negative (FN).

                      Proven malignancy
Radiologic decision     yes     no
        yes             TP      FP
        no              FN      TN

The following three standard definitions are used in the literature:

    Sensitivity = (number of correct positive assessments) / (number of truly positive cases) = TP / (TP + FN)

    Specificity = (number of correct negative assessments) / (number of truly negative cases) = TN / (TN + FP)

    Accuracy = (number of correct assessments) / (total number of cases) = (TP + TN) / (TP + TN + FN + FP)

The sensitivity measures the ability of a radiologist to catch the cancers (positives) and the specificity measures his or her ability to reject the negative cases.

Figure A.8: Contrast sensitivity of the eye [126] (spatial frequency in cycles/degree, at several luminance levels)

The Canadian National Breast Screening Study reported an overall sensitivity across 15 screening centers of 75% [142], that is, 25% false negative mammography. With symptomatic women, false negative rates as high as 44% among women aged less than 51 years have been reported [143].
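The three standard screening measures defined above translate directly into code. This is a minimal sketch; the counts are invented for illustration, chosen only so that the sensitivity reproduces the 75% figure quoted from the Canadian study.

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard measures computed from the radiologic decision table."""
    return {
        "sensitivity": tp / (tp + fn),          # ability to catch cancers
        "specificity": tn / (tn + fp),          # ability to reject negatives
        "accuracy": (tp + tn) / (tp + tn + fn + fp),
    }

# Hypothetical counts giving 75% sensitivity (75 of 100 cancers caught).
m = screening_metrics(tp=75, fp=10, tn=890, fn=25)
print(m["sensitivity"])   # 0.75
```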
False negative mammography, by providing false reassurance, may cause further delay in diagnosis and treatment. Burns et al. [144] found that 42% of 50 patients with a delayed diagnosis due to negative mammograms had metastatic tumor involvement in their axillary lymph nodes at diagnosis. Mann et al. [145] also observed that 58% of their patients with false negative mammograms had positive axillary nodes at delayed diagnosis. Walker et al. [146] attributed the more advanced stage at presentation in women with initially negative mammograms to the delay in their definitive treatment following the first falsely negative mammographic report.

Approximately 5% of false negative mammograms are attributable to poor mammographic technique, and an additional 30% are thought to result from observer oversight due to rushed interpretation, heavy caseload, and eye fatigue [147]. Both of these factors are potentially correctable. However, an unavoidable cause of false negative mammograms is the radiographic density of the breast [148, 149, 150]. A significant correlation between decreasing diagnostic certainty and increasing complexity of the mammographic breast parenchyma pattern has been shown in two independent studies [151, 152].

Tumor-related phenomena, such as the tumor growth pattern and a lack of tumor calcifications, may also hamper the visualization of a clinically evident lesion on the mammogram. Roland et al. [153] observed that 5 of their 15 mammographically occult cases were invasive lobular carcinomas, and four of these were situated in dense to very dense breasts. Intraductal carcinomas are also easy to miss, especially when they are noncalcified and have only subtle radiological signs [154]. In a Nijmegen breast screening program, twenty-four radiologically occult cancers were located in dense breast parenchyma and approximately half of these were either lobular invasive or ductal non-invasive cancers [155].

Studies on the subject of false negative mammography have generally not contrasted false
negative mammograms with those that were truly positive. In addition, mammograms have generally been read for the purposes of review with the knowledge that they contained breast cancer.

There are biological explanations for the non-visualization of certain breast cancers. Biological characteristics such as extensive radiological density (a predominant component of sheet-like, nodular or linear densities), histologic types such as infiltrative lobular or noninfiltrative cancers, as well as a gross tumor size of less than 2 cm, have been found to associate significantly and independently with false negative mammograms.

False positive diagnosis is also undesirable, although it is less damaging. In one study [156] of 69 surgically occult but mammographically apparent cases, 75% of the breast biopsies performed proved to be benign. The problem in this case is not detection but recognition (of features) and classification. A high rate of false positives is obviously wasteful of resources, in addition to its undesirable effects on patients.

Appendix B

Questionnaire For Clinical Evaluation of Images

Observer Number _______________
Case Number _______________

Parameters (Mark with "X")

1. Number of Microcalcifications
(a) < 10/cm2
(b) 10 or more per cm2

2. Shape of Microcalcifications
(a) semi-lunar (tea-cup)
(b) linear
(c) round
(d) oval / rod shaped
(e) curvilinear
(f) branching

3. Density of Microcalcifications
(a) uniformly dense
(b) eggshell
(c) poorly defined (smudgy)

4. Margination of Microcalcifications
(a) smooth borders
(b) less sharp to irregular

5. Spatial Arrangement
(a) scattered (may have polarity)
(b) clustered or confined to one area

6. Relation to Mass
(a) no associated mass
(b) concentrated in core of mass
(c) concentrated in periphery of mass
(d) scattered evenly throughout mass

7. Overall Clinical Assessment of Microcalcifications
(a) benign
(b) malignant

Appendix C

Features
C.1 Features from Individual Microcalcifications:

• Geometrical: microcalcification identification (id); coordinates of the enclosing box (x1, y1, x2, y2); coordinates of the center of mass (xm, ym); intensity of the background.

• Photometric: gray level intensity of the microcalcification (mean-intensity, var-intensity); contrast (depth).

• Statistics from the optical density histogram: mean (odmean); minimum (odmin); maximum (odmax); range (odrange); variance (odvar); skewness (odskew); kurtosis (odkurt).

• Morphological: area; perimeter; mean-radius; var-radius; compactness; inertia; elongation; coarse and fine measures of boundary roughness (bdycrc, bdycrc2); distance-to-cluster-center; aspect-ratio; angle-of-major-axis; alignmentI; alignmentII; alignment-with-cluster-axis.

C.2 Features from Microcalcification Clusters:

Photometric Features:

min-contrast, max-contrast, range-contrast, mean-contrast, var-contrast

Morphological Features:

• total area of microcalcifications (total-area); area of the smallest microcalcification (min-area); area of the largest microcalcification (max-area); range, average, variance and
Features 191normalized standard deviation of area (range-area, mean-area, var-area, nrstdv-area);• Simple statistics from the perimeter (mm- perimeter, max-perimeter, range-perimeter,mean-perimeter, var-perimeter, nrstdv-perimeter) and the average radii (mm-mean-radius,max-mean-radius, range-mean-radius, mean-mean-radius, varmean-radius) of each calcificat ion.• Descriptions of variance of radii of each microcalcification (mm-var-radius, max-var-radius, range-var-radius, mean-var-radius, var-var-radius).• Statistics of compactness (mm-compact, max-compact, range-compact, mean-compact,var-compact); and inertia (mm-inertia, max-inertia, range-inertia, mean-inertia, var-inertia).• Coordinates of the cluster center (cluster-center-x, cluster-center-y).• Distance of each microcalcification from the cluster center: mm-distance-to-center, max-distance-to-center, range-distance-to-center, mean-distance-to-center, var-distance-to-center• Distance of each microcalcification from the cluster center weighted by the area of the microcalcification: mm- area-x- distance, max-area-x-dist ance, range-area-x-distance, meanarea-x-distance, var-area-x-distance.• Distance of each micro calcification from the cluster center weighted by the circumferenceof the microcalcification: min-perimeter-x- distance, max-perimeter-x-dist ance, range-perimeterx- distance, mean-perimeter-x-dist ance, var-perimeter-x- distance• Distance between each pair of microcalcifications: mm-pair-distance, max-pair-distance,range-pair-distance, mean-pair- distan Ce, var-pair-distance.• Descriptions of the convex polygon enclosing the cluster: polygon-area, polygon-perimeter,polygon-compactness, polygon-aspect-ratio, mm-polygon-radii, max- polygon- radii, range-polygon-radii, mean-polygon-radii, var-polygon-radiiAppendix C. 
• Ratio of the area of the convex polygon filled by microcalcifications: polygon-calc-area-ratio.

• Alignment measures: alignment-field-intensity, magnetic-potential, min-alignmentI, max-alignmentI, range-alignmentI, mean-alignmentI, var-alignmentI, min-alignmentII, max-alignmentII, range-alignmentII, mean-alignmentII, var-alignmentII, mean-alignment-with-cluster-axis.

• Moments: moment20, moment02, moment11, moment30, moment03, moment21, moment12.

• Normalized shape invariant moments: invariant-shape1, invariant-shape2, invariant-shape3, invariant-shape4, invariant-shape5, invariant-shape6.
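A few of the cluster features listed above can be sketched to make the definitions concrete. This is a minimal illustration only: the thesis does not give formulas here, so the compactness normalization (perimeter squared over 4*pi*area, equal to 1 for a circle) and the centroid-based cluster center are my assumptions, and the input coordinates are invented.

```python
import math

def compactness(perimeter, area):
    """Assumed compactness measure: perimeter^2 / (4*pi*area); equals 1.0
    for a perfect circle and grows for irregular outlines."""
    return perimeter ** 2 / (4.0 * math.pi * area)

def cluster_center(centroids):
    """Cluster center taken as the mean of the microcalcification centroids."""
    xs, ys = zip(*centroids)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def distances_to_center(centroids):
    """Per-microcalcification distance-to-cluster-center values, from which
    min-, max-, range-, mean- and var-distance-to-center would be derived."""
    cx, cy = cluster_center(centroids)
    return [math.hypot(x - cx, y - cy) for (x, y) in centroids]

# Three hypothetical microcalcification centroids (pixel coordinates).
pts = [(10.0, 10.0), (14.0, 10.0), (12.0, 14.0)]
d = distances_to_center(pts)
print(min(d), max(d))   # min-distance-to-center, max-distance-to-center
```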

