FEATURE ENHANCEMENT IN AVHRR IMAGERY VIA PROBABILISTIC RELAXATION LABELING METHODS

by

CARL SZCZECHOWSKI

Bachelor of Science, University of Michigan, 1981

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (Oceanography)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
APRIL 1986
© CARL SZCZECHOWSKI, 1986

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Oceanography
The University of British Columbia
2075 Wesbrook Place
Vancouver, Canada V6T 1W5

Abstract.

A set of algorithms is described which results in the detection, enhancement, extraction, and identification of features in NOAA-9 AVHRR imagery. Sea ice leads (cracks) in ice images from the Beaufort Sea / Amundsen Gulf area are modelled as "lines" in the image-processing sense. Thermal gradients on the ocean surface are modelled as edges. Emphasis is given to enhancing the output of local line/edge detectors in order to provide improved input for line/edge tracking algorithms.
As a result the identification scheme operates on a segmented version of the image rather than on a pixel-by-pixel basis, thereby providing a less noisy classification. Line/edge enhancement is achieved using the non-linear probabilistic relaxation model of Rosenfeld et al (1976). Results from the relaxation of line detector output suggest that an expanded label (i.e. line orientation) set is preferable to the smaller set suggested by previous studies. Also, a modified form of the original non-linear model (suggested by Peleg and Rosenfeld, 1978) was found to speed up the convergence rate significantly with no degradation in enhancement. A unique set of subjectively derived compatibility coefficients was introduced into the relaxation with encouraging results.

Edge relaxation using the 3x3 template matching edge operators resulted in a relatively poor enhancement due primarily to the vagaries of the edge operator. Improved results were achieved using the "crack" edge representation. A unique relaxation model consisting of a combination of the relaxation models of Prager (1980) and Hanson et al (1980) provided a good enhancement.

Classification (i.e. identification) results were excellent for the sea ice lead application. Line brightness and original line detector strength seemed sufficient for differentiating between "strong" leads and "weak" leads as well as discriminating leads from clouds. The classification results were not as good in the edge case, indicating that simple characteristics such as those used in the line case are not sufficient for classification purposes.

Table of contents.

Abstract.
Table of contents.
List of figures.
Acknowledgements.
I. Introduction.
II. A traditional method for line/edge detection and extraction.
   A. Local edge detectors.
   B. Local line detectors.
   C. Linking line/edge elements into segments.
III. Weaknesses in line/edge tracking methods.
IV. A review of relaxation labeling processes.
   A. Literature review.
   B. Selection of a label set and local detector.
   C. Initial label probability estimates.
   D. Definition of neighbourhood.
   E. Update rule (relaxation model).
      1. First order models.
      2. A second order model.
      3. A higher order model.
   F. Evaluation parameters.
V. Enhancement of sea ice leads via relaxation methods.
   A. Defining the relaxation process.
      1. Label set.
      2. Local line detector.
      3. Initial label probability estimates.
      4. Definition of neighbourhood.
      5. Update rule (relaxation model).
   B. Relaxation with a 9 label set.
      1. Mode comparison with Δp⁰ = .0001.
      2. Mode comparison with Δp⁰ = .01.
      3. Overview of results from the 9 label set.
   C. Relaxation with a 13 label set.
      1. The advantages of a 13 label set.
      2. 13 label set with empirical compatibility coefficients.
      3. 13 label set with "heuristic" compatibility coefficients.
      4. Overview of results from the 13 label set.
VI. Enhancement of sea surface thermal features via relaxation methods.
   A. Considerations specific to the edge domain.
      1. Filtering.
      2. Approaches for dealing with spatially large intensity changes.
   B. Edge relaxation using the template matching edge operators.
      1. Label set and edge templates.
      2. Non-maximum suppression as a preprocessing procedure.
      3. Initial probability estimates.
      4. Relaxation model (update rule).
      5. Definition of neighbourhood.
      6. Implementation.
      7. Overview of edge relaxation results using the template edge operators.
   C. Edge relaxation using the crack edge representation.
      1. A review of relaxation models using the crack edge representation.
      2. Defining the relaxation model.
         a. Edge operator.
         b. Non-maxima absorption.
         c. Initial label probability estimates.
         d. Relaxation model (update rule).
         e. Determination of the edge strength "threshold".
         f. Neighbourhood edge pattern support.
         g. Δp_i determination.
         h. Determination of the suppression parameter, C.
         i. Evaluation parameters.
      3. Implementation.
         a. Input parameters and evaluation technique.
         b. Δp_i comparisons.
         c. Comparison of results, variable suppression parameter (C).
         d. Comparison of results, variable initial absorption scheme.
         e. Overview of relaxation results using the crack edge representation.
VII. Extraction and identification of line/edge segments in AVHRR imagery.
   A. Extraction and identification of line segments in ice imagery.
      1. Line tracking.
      2. Segment characteristic extraction.
      3. Identification scheme.
      4. Implementation.
   B. Extraction and identification of sea surface thermal features.
      1. Edge tracking.
      2. Segment characteristic extraction.
      3. Identification scheme.
      4. Implementation.
   C. Overview of the extraction and labeling results.
VIII. Conclusions and recommendations.
References.

List of figures.

Fig. 1 Intensity profiles of a line, step edge, and ramp edge.
Fig. 2 Prewitt and Sobel differential edge masks.
Fig. 3 Template matching edge operators.
Fig. 4 Horizontal and vertical crack edge masks.
Fig. 5 A full set of line detector templates.
Fig. 6 Evaluation parameter curves (9 label set, Δp⁰ = .0001).
Fig. 7 Evaluation parameter curves (9 label set, Δp⁰ = .01).
Fig. 8 Sea ice image with relaxation results.
Fig. 9 Symbols used in displaying relaxation results (9 label set).
Fig. 10 Adjacent line orientation configurations (9 label set).
Fig. 11 Evaluation parameter curves (13 labels, two coefficient sets).
Fig. 12 Adjacent line orientation configurations (13 label set).
Fig. 13 Adjacent line orientation configurations (positive coefficients).
Fig. 14 Adjacent line orientation configurations (null coefficients).
Fig. 15 Adjacent line orientation configurations (negative coefficients).
Fig. 16 Evaluation parameter curves (13 labels, heuristic).
Fig. 17 Sea surface image with relaxation results.
Fig. 18 Another sea surface image with relaxation results.
Fig. 19 Evaluation parameter curves (template matching edge operators).
Fig. 20 Crack edge neighbourhoods.
Fig. 21 Possible edge patterns for the crack edge neighbourhood.
Fig. 22 Location weighting functions used in absorption scheme.
Fig. 23 Possible edge patterns for the general class 1-1.
Fig. 24 Evaluation parameter curves (crack edge operator).
Fig. 25 Histograms of characteristics for line segment identification.
Fig. 26 Results of line identification process for an ice image.
Fig. 27 Another ice image with relaxation and identification results.
Fig. 28 Cloudy ice image with relaxation and identification results.
Fig. 29 Another cloudy ice image with relaxation and identification results.
Fig. 30 Histograms of characteristics for edge segment identification.
Fig. 31 Results of edge identification process for a sea surface image.
Fig. 32 Cloudy sea surface image with relaxation and identification results.

Acknowledgements.

Financial support for the first eight months was received solely from the United States Naval Oceanographic Office via their long term training program. Funding for the final twelve months was received in part from an NSERC grant and a grant from the U.S. Naval Oceanographic Research and Development Activity.

The author would like to personally acknowledge the tremendous support of the graduate students and other personnel associated with the satellite data processing system of the Department of Oceanography, University of British Columbia, under the direction of Dr. William Emery. In particular the author would like to thank Dr. Emery for helpful suggestions made during the work, Michael Collins for many useful suggestions and much needed constructive criticism, and last but not least, my wife Jeanne for her extensive work on the graphics and persistent spiritual support.
I. Introduction.

Image and scene analysis techniques have been used a great deal for automated interpretation or enhancement of digital imagery obtained from airborne or satellite sensors. The techniques used have been as varied as the types of imagery analyzed. A common analysis, usually associated with multi-spectral data (primarily from the Landsat series of satellites), has been the use of statistical techniques for the automated classification of farm crops, forest cover, etc. (cf. Lintz and Simonett, 1976). Similarly, infrared imagery of the sea surface from meteorological satellites has been analyzed using Bayesian decision techniques to classify the sea surface into various oceanic water types (Coulter, 1983).

Another area of study has been the use of line and edge detection techniques to detect or enhance line-like features or intensity gradients. The line-like features sought include rivers, roads, bridges, and geological lineaments. These studies include the utilization of simple line detectors for the extraction of line-like features (Vanderbrug, 1976), more elaborate systems which identify linear features based on geometric and/or spectral characteristics (Bajcsy and Tavakoli, 1976, and Tavakoli and Rosenfeld, 1982), and the use of multi-spectral (i.e. multi-dimensional) smoothing techniques combined with edge and line detectors (Chittineni, 1983).

This paper will be of the latter type. It will address the enhancement, extraction, and identification of lines and edges in NOAA-9 AVHRR infrared imagery. The emphasis will be on the use of probabilistic relaxation labeling techniques for the enhancement of the output of simple line and edge detectors.

The traditional definitions of line and edge are used in this paper. A line is defined as a thin region of pixels darker (or lighter) than the pixels to either side of this thin region. An intensity profile across a line is shown in fig. 1a.
In this paper the lines to be detected are the leads (cracks) in sea ice in the Beaufort Sea / Amundsen Gulf area.

Fig. 1 Intensity profiles of a line (a), step edge (b), and ramp edge (c). (From Ballard and Brown, 1982.)

Fig. 2 (a) Prewitt and (b) Sobel differential edge masks. (From Ballard and Brown, 1982.)

    (a) Prewitt:   Δx:  -1  0  1     Δy:   1  1  1
                        -1  0  1           0  0  0
                        -1  0  1          -1 -1 -1

    (b) Sobel:     Δx:  -1  0  1     Δy:   1  2  1
                        -2  0  2           0  0  0
                        -1  0  1          -1 -2 -1

Edges in image analysis parlance are regions in which consistent intensity discontinuities occur. There are two basic types of edges, the step edge and the ramp edge. In cross section the step edge is an abrupt change in intensity whereas the ramp edge is a more subtle increase (or decrease) in intensity. Step and ramp edges are shown in figs. 1b and 1c respectively. The edges to be detected in this study are representative of thermal gradients on the ocean surface. Thermal gradients may be indicative of upwelling areas, the boundaries of warm or cold eddies, or the boundaries of strong currents differing in temperature from the surrounding water.

The paper will be structured as follows. Initially a review of a traditional method for detecting and extracting line and edge segments will be presented. This will be followed by an analysis of the weaknesses in the above method and a description of how relaxation labeling techniques can reduce or eliminate these weaknesses. A general overview of the theory of relaxation labeling will follow, with emphasis given to its application to line and edge enhancement. The application of relaxation labeling to the detection and enhancement of sea ice leads will be presented, followed by its application to detecting and enhancing sea surface thermal gradients. Emphasis will be given to achieving the optimum enhancement of the output of local line/edge detectors in order to provide improved input for tracking algorithms. The issue of extraction and identification of the lines and edges found in the imagery will be addressed.
This section is not meant to be a dissertation on multi-class identification techniques but instead is meant to show that feature identification need not be done solely on a pixel by pixel basis but can be accomplished using groups of pixels (i.e. segments of individual line/edge elements produced by tracking algorithms). Final remarks and recommendations will conclude the paper.

II. A traditional method for line/edge detection and extraction.

A common approach for detecting lines and edges in imagery is to first convolve the image with local detectors or masks. These masks operate on a small region about a pixel, and the output of the application of a mask gives an indication of how likely a line or edge (at some orientation) is present in the neighbourhood of the pixel.

A. Local edge detectors.

Many different edge masks can be found in the literature. The size of the masks varies, but three by three pixel masks are often used due to their simplicity. Edge masks fall into two main categories: those which result in values for the magnitude and orientation of the intensity gradient in the vicinity of a pixel (differential operators) and those which output the magnitude of edges in discrete orientations (template matching operators). Both involve a straightforward discrete convolution of a mask with the image intensity values.

Examples of the first type are shown in figs. 2a and 2b and are attributed to Prewitt and Sobel respectively (Ballard and Brown, 1982). The masks on the left are used to determine the gradient in the horizontal (Δx) while the ones on the right correspond to the vertical gradient (Δy). For the Prewitt operator the magnitude is computed as

    magnitude = (Δx² + Δy²)^(1/2)

and the orientation as

    orientation = tan⁻¹(Δy/Δx).

Examples of the second type of edge operator are shown in Fig. 3. In practice, the maximum response of the application of the eight masks is chosen to represent the magnitude and orientation of the edge.
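As a concrete illustration of the differential operators just described, here is a minimal Python sketch (not from the thesis; the function names and the 4x4 test image are invented for illustration) that applies the Sobel masks at a single interior pixel and computes the gradient magnitude and orientation:

```python
import math

# Sobel masks (fig. 2b): the left mask responds to the horizontal
# gradient (delta-x), the right mask to the vertical gradient (delta-y).
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
SOBEL_Y = [[ 1,  2,  1],
           [ 0,  0,  0],
           [-1, -2, -1]]

def convolve_at(image, row, col, mask):
    """Apply a 3x3 mask to the neighbourhood centred on (row, col);
    interior pixels only (no boundary handling)."""
    return sum(mask[m][n] * image[row - 1 + m][col - 1 + n]
               for m in range(3) for n in range(3))

def sobel_edge(image, row, col):
    """Return (magnitude, orientation in radians) of the gradient,
    i.e. (dx**2 + dy**2)**0.5 and arctan(dy/dx)."""
    dx = convolve_at(image, row, col, SOBEL_X)
    dy = convolve_at(image, row, col, SOBEL_Y)
    return math.hypot(dx, dy), math.atan2(dy, dx)

# A vertical step edge: dark columns on the left, bright on the right.
img = [[0, 0, 10, 10],
       [0, 0, 10, 10],
       [0, 0, 10, 10],
       [0, 0, 10, 10]]
mag, theta = sobel_edge(img, 1, 1)  # strong response, orientation 0
```

A template matching operator would instead apply all eight directional masks at the pixel and keep the maximum response.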
Fig. 3 Template matching edge operators (Prewitt, Kirsch, three-level simple, and five-level simple masks for each of the eight compass directions). (From Robinson, 1977.)

Fig. 4 Horizontal and vertical crack edge masks. (From Prager et al, 1977.)

Fig. 5 A full set of line detector templates. (From Vanderbrug, 1976.)

A different sort of edge operator is the "crack" edge operator shown in Fig. 4. The magnitude of the edge is simply the difference between adjacent pixel values, and the orientation is implicit in that the edge magnitude is assigned to the pixels' common boundary. Variations of the crack edge operator are discussed by Prager et al (1977).

B. Local line detectors.

There are three basic types of local line detectors: the linear, the non-linear, and the semi-linear.
All three types are similar to the second general category of edge detectors discussed above in that a set of masks is used to output line likelihoods in discrete orientations. Except for the linear line detector, however, the output of a line detector is not a simple convolution of the mask with the image values.

Let us compare the three types of line detectors for the vertical orientation case. First we define the pixel neighbourhood as

    a1 b1 c1
    a2 b2 c2
    a3 b3 c3

The linear detector outputs (for a line of higher intensity than its surroundings)

    Σ_{i=1..3} b_i − (1/2) Σ_{j=1..3} a_j − (1/2) Σ_{j=1..3} c_j .

That is, the output is simply the convolution of the image with the mask

    -1/2  1  -1/2
    -1/2  1  -1/2
    -1/2  1  -1/2

The non-linear detector is much more stringent. It requires (for a line of higher intensity than its surroundings)

    b_i > a_i  and  b_i > c_i ,   i = 1, 2, 3 .

If all the conditions hold, then the output is the same as that of the linear detector. Otherwise the output is zero.

The semi-linear detector is less stringent than the non-linear. Its requirements (again for a line of higher intensity than its surroundings) are

    Σ_{i=1..3} b_i > Σ_{j=1..3} a_j  and  Σ_{i=1..3} b_i > Σ_{j=1..3} c_j .

Again, if both conditions hold, the output is that of the linear detector. If both conditions are not satisfied, the output is zero.

A complete set of masks used for the various orientations is shown in Fig. 5. As with edges, common practice is to take the orientation associated with the largest output to represent the orientation of the line at the pixel under consideration. Note too that line orientations are symmetric modulo π whereas edge orientations are symmetric modulo 2π.

C. Linking line/edge elements into segments.

Once the line/edge detectors have operated on an image, we are essentially left with another "image" of line/edge elements.
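Before turning to the linking step, the three line detector types defined in section II.B can be sketched as follows (a Python illustration, not from the thesis; the column lists a, b, c hold the pixels a_i, b_i, c_i and the example values are invented):

```python
def linear_response(a, b, c):
    """Linear vertical line detector: sum of the centre column minus
    half the sum of each side column (the -1/2, 1, -1/2 mask)."""
    return sum(b) - 0.5 * (sum(a) + sum(c))

def nonlinear_response(a, b, c):
    """Non-linear detector: every centre pixel must exceed both of
    its horizontal neighbours (b_i > a_i and b_i > c_i, i = 1, 2, 3);
    otherwise the output is zero."""
    if all(bi > ai and bi > ci for ai, bi, ci in zip(a, b, c)):
        return linear_response(a, b, c)
    return 0.0

def semilinear_response(a, b, c):
    """Semi-linear detector: the centre column sum must exceed each
    side column sum; otherwise the output is zero."""
    if sum(b) > sum(a) and sum(b) > sum(c):
        return linear_response(a, b, c)
    return 0.0

# A bright vertical line whose middle pixel has been knocked down by
# noise: the linear and semi-linear detectors still respond, but the
# stringent non-linear detector outputs zero.
a, b, c = [2, 2, 2], [9, 1, 9], [2, 2, 2]
```

The noisy example illustrates why the non-linear detector is described as the most stringent of the three.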
A further transformation of the data must be done before any meaningful information can be derived for further processing; the large number of elements must be reduced to a smaller number of extended segments. This is often accomplished through the utilization of tracking algorithms (cf. Persoon, 1976; Vanderbrug and Rosenfeld, 1978; Ballard and Brown, 1982). Tracking algorithms connect contiguous line/edge elements into continuous segments, using as primary constraints the relative orientations and strengths of line/edge elements to determine how the "linking" process should proceed.

Typically the process begins at a point having a high line/edge magnitude. A search is then made in a direction determined by the orientation of the element to find a suitable candidate for linking. This candidate becomes part of the segment if its strength is above a certain threshold and its orientation is compatible with the original point. The search process then repeats itself from the new point.

The constraints vary. One may bias the relative orientation requirement towards producing straight lines or curves of a certain curvature. Instead of a threshold on line/edge strength one may require tracking along elements of approximately equal strength (Riseman and Arbib, 1977). In addition, the capability of jumping gaps between elements is often built into tracking algorithms.

The segments created from the line/edge elements are usually labeled in some manner, and characteristics such as length, curvature, and average strength (i.e. local detector response) are computed. This information is then supplied to higher level processes such as classification (i.e. identification) algorithms.

III. Weaknesses in line/edge tracking methods.

If real world images were perfect, local edge (line) detectors would only respond in areas where extended edges (lines) occur. The results of applying local detectors would show gap-free continuous segments on a homogeneous background.
However, imagery from remote sensors (and in particular satellite sensors) is inherently noisy, with noise being contributed by the sensor, the digitization process, transmission, etc. Because of noise and other environmental and physical factors, images are far from perfect and the output of local detectors becomes unreliable. Thus techniques such as line tracking algorithms which use such detector output are adversely affected. Zucker et al (1977) have enumerated the effects of imperfect data on line tracking and other algorithms which attempt to create segments from line/edge elements.

1) Noise effects produce strong detector responses in relatively homogeneous regions of the image.
2) Noise effects may also weaken or eliminate detector responses at points along extended segments.
3) Due to noise or other natural factors, the detector responses along a segment vary in strength. Algorithms which use thresholding or require tracking along elements of roughly equal strength will be adversely affected.
4) The practice of choosing the maximum response of a set of masks corresponding to various orientations may give rise to orientation anomalies along extended segments. Again this weakens the effectiveness of algorithms which rely heavily on orientation constraints.
5) Algorithms which have the capability of bridging gaps also lean heavily on orientation information and will suffer due to orientation anomalies.

In light of the above, an intermediate process is called for to "clean up" the raw detector responses and make them more reliable before moving on to a segment forming process. One technique that has shown some promise in this area is relaxation labeling.

Relaxation labeling is an effective technique for line/edge enhancement because it uses contextual information to bring a point's interpretation more into consistency with its surroundings. In the case of lines and edges this interpretation consists of line/edge strength and orientation.
Relaxation labeling processes use knowledge of how contiguous line/edge elements smoothly continue one another in order to enhance detector outputs at points belonging to extended lines/edges and to weaken or eliminate spurious detector responses (Schachter et al, 1977). The general goals of enhancement follow.

1) The contrast (i.e. line/edge strength) of line/edge elements along extended segments should be evened out. This severely reduces the aforementioned weaknesses of linking algorithms using line/edge strength as a prime constraint.
2) Any orientation anomalies that may occur along extended segments should be rectified, again to alleviate problems in linking elements into segments.
3) Small gaps in extended segments should be filled.
4) Spurious detector responses should be weakened or eliminated.
5) Parallel indications of a line/edge should be suppressed to avoid "thickening".

Once the raw detector responses have been enhanced, tracking algorithms, etc. can operate much more efficiently.

IV. A review of relaxation labeling processes.

A. Literature review.

What, then, is relaxation labeling? Various authors describe it in different ways. (See for instance Rosenfeld et al, 1976; Davis and Rosenfeld, 1977; Prager et al, 1977; and Zucker et al, 1978.) In short, relaxation labeling is an iterative parallel process which uses local context to bring the interpretation (labeling) of each point in an image into consistency with its surroundings. It utilizes heuristic knowledge (natural relationships among labels) to eliminate unlikely interpretations, to bolster consistent ones, and to reduce ambiguity in general. At each iteration local evidence is collected and evaluated, resulting in a change in state (update) at all points in parallel. Interpretations change based on local support or inhibition, with the goal being a state at which no interpretations change (i.e. convergence, hopefully to a globally acceptable interpretation).
The pioneering paper adapting relaxation techniques to image and scene analysis was that of Rosenfeld et al (1976). In this mainly theoretical paper they described different models for relaxation labeling processes, including the discrete model, the fuzzy model, a linear probabilistic model, and a non-linear probabilistic model. A prophetic statement concluded the paper: "The authors plan to explore nontrivial applications of these processes, and they hope that other investigators will do likewise."

A flood of applications followed. These included line and curve enhancement (Zucker et al, 1977), edge reinforcement (Schachter et al, 1977), curve segmentation (Davis and Rosenfeld, 1977), image filtering (Lev et al, 1977), shape matching (Davis, 1979), multi-spectral pixel classification (Eklundh et al, 1980), blob detection (Danker and Rosenfeld, 1981), and image thresholding (Rosenfeld and Smith, 1981).

This study will incorporate probabilistic relaxation labeling as a means of enhancing line/edge detector output. (For a description of the discrete and fuzzy relaxation labeling models see Rosenfeld et al, 1976.) The above cited applications, all using probabilistic relaxation, attest to the fact that a great deal of flexibility exists within the framework of probabilistic relaxation labeling. All, however, had the following aspects in common.

1) Definition of a label set.
2) Selection of a local operator (detector) to provide input to the relaxation process.
3) A procedure to transform local operator responses to initial label probability estimates.
4) Definition of the neighbourhood.
5) Specification of an update rule (relaxation model).

B. Selection of a label set and local detector.

Probabilistic relaxation labeling schemes operate on label probability vectors assigned to each point in an image. Each component of a probability vector, p, corresponds to the probability that one of a set of labels is the correct label at the point in question.
The labels comprising the set should be mutually exclusive and exhaustive (Hanson et al, 1980). In the case of lines, the label set Λ invariably is defined to be a set of labels corresponding to line elements at various orientations along with a "no-line" label (cf. Zucker et al, 1977). For edges the label set can be similarly defined (cf. Danker and Rosenfeld, 1981), but there are alternatives. For input to their relaxation scheme, Schachter et al (1977) used a differential edge detector. Their labels simply corresponded to "edge" and "no-edge", but they were left with the additional problem of having to adjust the orientation of the edge as part of a separate update formula. The crack edge representation has been utilized by Prager et al (1977), Riseman and Arbib (1977), Prager (1980), and Hanson et al (1980). The labels corresponded to "edge" and "no edge" with orientation remaining implicit.

C. Initial label probability estimates.

The output of the local line/edge detectors is used to make an estimate of the initial label probability distribution. The formulas invariably used (Zucker et al, 1977; Danker and Rosenfeld, 1981; Schachter et al, 1977) are:

    p_i^0(λ_k) = (e_ki / Σ_{n=1..K−1} e_ni) · (max_{n=1..K−1} e_ni / e_max) ,   k = 1, 2, ..., K−1   (1)

and

    p_i^0(λ_K) = 1 − (max_{n=1..K−1} e_ni) / e_max   (2)

where p_i^0(λ_k) is the initial (superscript 0) probability for the existence of label λ_k at point i, e_ni is the detector response at point i for the orientation corresponding to label λ_n, K is the number of labels in the label set (the last label being the "no line" label), and e_max is the largest detector response for the whole image (for all orientations), that is

    e_max = max_{k=1..K−1} max_{i=1..I} e_ki

where I is the number of points in the image. These formulas assure us that

    Σ_{k=1..K} p_i^0(λ_k) = 1 ,   (3)

the condition that the sum of the label probabilities at a point must equal unity, and that

    0 ≤ p_i^0(λ_k) ≤ 1 ,   k = 1, 2, ..., K

(i.e.
all probabilities are guaranteed to be non-negative with one as an upper bound).

Note that this formulation scales everything with respect to the largest detector response in the image. It effectively assumes that this detector response is reliable. Also note that some initial probabilities will be set to zero. Since many of the relaxation models (update rules) are of the form

    p_i^{t+1}(λ) = (neighbourhood factor) · p_i^t(λ)

where t is the iteration number, zero initial probabilities must be avoided. This can be overcome by adding a small (constant) correction factor in equations 1 and 2 followed by renormalization (to satisfy eqn. 3), or by selectively bringing initial probabilities of zero up to some level.

D. Definition of neighbourhood.

Due to their iterative nature, relaxation processes allow local information to propagate from each point through the image. It does not make sense, then, to deal with a large neighbourhood about a point at each iteration, not only because this increases the computational cost but because each iteration effectively enlarges the neighbourhood about a point (Haralick, 1983). Therefore, the smallest possible neighbourhood is invariably used. When dealing with line/edge information derived from 3 x 3 line/edge masks, the neighbourhood of the point is comprised of the eight pixels surrounding the point, although a four pixel neighbourhood (the four pixels sharing a common boundary with the central pixel) is sometimes used. In this study the eight pixel neighbourhood will be used exclusively.

Edges derived from the crack edge operator lie on the interpixel boundary, so a different sort of neighbourhood is defined. The discussion of this will be deferred until section VI.

E. Update rule (relaxation model).

At the heart of the relaxation process is the update rule. This rule determines, at each iteration, how label probability vectors change based on contextual information.
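Before specifying the update rule, the initial assignment of section IV.C (equations 1 and 2, plus the zero-avoidance correction described there) can be sketched as follows (a hypothetical Python helper, not from the thesis; the eps floor is only one possible form of the small constant correction):

```python
def initial_probabilities(responses, e_max, eps=1e-4):
    """Initial label probability vector at one point.

    responses : the K-1 detector outputs at the point, one per
                orientation (at least one assumed non-zero)
    e_max     : largest detector response in the whole image
    eps       : small floor added to every probability, followed by
                renormalisation, so that no probability is zero
    Returns a list of K probabilities; the last entry is "no line".
    """
    strongest = max(responses)
    total = sum(responses)
    # Equation (1): the orientation labels share strongest/e_max in
    # proportion to their individual responses.
    p = [(e / total) * (strongest / e_max) for e in responses]
    # Equation (2): the "no line" label takes the remainder.
    p.append(1.0 - strongest / e_max)
    # Zero-avoidance correction, then renormalise (equation 3).
    p = [v + eps for v in p]
    s = sum(p)
    return [v / s for v in p]
```

For example, responses of [4, 2, 2] with e_max = 8 yield a vector close to [0.25, 0.125, 0.125, 0.5], nudged slightly by the eps correction.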
The following discussion follows closely that of Rosenfeld et al (1976) and Hummel and Rosenfeld (1978). Let F represent the updating operator. Any probabilistic relaxation process can then be described as

    P_i^{t+1} = F(P_1^t, P_2^t, ..., P_I^t)

where t and I are as previously defined. The updating rule should have the following characteristics.

1) F should map the probability vector field, P, onto itself; that is,

    Σ_{k=1..K} p_i^{t+1}(λ_k) = 1   (4)

must be satisfied for all i and t.
2) The change in a label's probability should be proportional to the magnitude of the local evidence for support or inhibition.
3) If there exists evidence neither for support nor inhibition, then a label's probability should not change.
4) The overall change for a probability vector at each iteration should be small to reduce errors (i.e. convergence to an inappropriate labeling).
5) The entropy (i.e. ambiguity) of the probability vector field should decrease at each iteration. That is, at any point some label probabilities should be strengthened while others are weakened or eliminated. In most cases convergence to an unambiguous labeling is desirable: one label of probability one with all others zero.
6) The process should converge to a stable fixed state, P^∞.

1. First order models.

The first probabilistic relaxation model proposed by Rosenfeld et al (1976) was termed the linear probabilistic model. It was theoretically well founded in that the effect of a label at a neighbouring point upon a label at the point being updated was expressed via a conditional probability. The convergence properties of the update rule were also directly establishable. However, it was shown that if the process converged, it would converge to a state solely dependent on the conditional probability distribution (i.e. with no dependence on the initial probability vector estimates). An update rule which had such a dependence was called for.

A second probabilistic model was presented in the same paper: the non-linear probabilistic model.
Although not as theoretically grounded as the linear model, this model had more of a heuristic flavor. First of all, a dependence on the initial probability estimates was expressed. Secondly, conditional probabilities were dispensed with in favor of "compatibility coefficients" as a means of expressing relationships between labels at neighbouring points. These two concepts are summarized in the table below (from Rosenfeld et al, 1976).

                         Compatibility of λ' with λ:
                              High      Low
    Probability    High        +         −
    of λ'.         Low         0         0

where λ' is the label at the neighbouring point, λ is the label at the point being updated, + implies that p_i(λ) should increase, − implies that p_i(λ) should decrease, and 0 implies that p_i(λ) should remain relatively unchanged. The table simply states that the magnitude of the change in p_i(λ) should be proportional to the probabilities of the labels in its neighbourhood, and that the direction of change (i.e. decrease or increase) of p_i(λ) should be based on the compatibility of the label being updated with its surroundings. The compatibility coefficients can be viewed as correlations between labels at neighbouring points in that they are allowed to take on both positive and negative values. Thus, unlike the linear model, both positive and negative influences are allowed to operate within the relaxation process. Compatibility coefficients are denoted by r_ij(λ, λ') where i is the point being updated, j is the neighbouring point, and λ and λ' are defined as above. Values for the compatibility coefficients should follow these guidelines.

1) If λ' frequently co-occurs with (or is subjectively compatible with) λ, then r_ij(λ, λ') > 0.

2) If λ' rarely co-occurs with (or is subjectively incompatible with) λ, then r_ij(λ, λ') < 0.

3) If their occurrences are independent (one puts no constraint on the other), r_ij(λ, λ') = 0.

4) The magnitude of r_ij(λ, λ') should reflect the strength of the compatibility, observing the constraint −1 ≤ r_ij(λ, λ') ≤ 1.
Compatibility coefficient values can be estimated from subjective analysis of how labels can interact. For line and edge applications they have been estimated based on continuity and orientation criteria (Zucker et al, 1977; Schachter et al, 1977) and have also been estimated empirically using formulas consistent with statistical correlation and mutual information measures (Peleg and Rosenfeld, 1978; Yamamoto, 1979). The label probabilities at neighbouring pixels act through the compatibility coefficients to determine the change in each label probability at the point being updated. This "correction factor" for the label λ at the point being updated, q_i(λ), behaves as the table given above.

$$q_i(\lambda) = \sum_j d_{ij} \left[ \sum_{\lambda'} r_{ij}(\lambda, \lambda')\, p_j(\lambda') \right] \qquad (5)$$

where $\sum_j$ denotes summation over all the points in the neighbourhood, and $\sum_{\lambda'}$ denotes summation over all labels at a neighbouring point. The d_ij weight the influence of the neighbouring point j on i (d_ij ≥ 0). Constraining the weighting factors to sum to one ensures that −1 ≤ q_i(λ) ≤ 1. In some applications the d_ij are replaced by 1/n where n is the number of points defined as the neighbourhood. At each iteration the new label probability values are computed using the general formula

$$p_i^{t+1}(\lambda) = \frac{p_i^t(\lambda)\,[1 + m\,q_i^t(\lambda)]^{\alpha}}{\sum_{\lambda'} p_i^t(\lambda')\,[1 + m\,q_i^t(\lambda')]^{\alpha}} \qquad (6)$$

Note that the denominator is simply a normalization factor used to ensure that eqn. 4 remains satisfied. With α = 1 and m = 1, the above formula corresponds to the original model as given by Rosenfeld et al (1976). Because iterative processes are by their nature slow, modifications of the original formula were suggested to speed up the relaxation process. Setting α = 1 and m = n (n again being the number of points in the neighbourhood) corresponds to the form recommended by Peleg and Rosenfeld (1978). In this case one must be careful that the term 1 + m · q_i(λ) remains non-negative. The formula with α > 1 and m = 1 was suggested by Zucker et al (1978).
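The correction factor of eqn. 5 (with the d_ij replaced by 1/n) and the general update of eqn. 6 can be sketched as follows. This is a minimal pure-Python illustration; the nested-list data layout and the clamping of 1 + m·q at zero are our own choices, not the thesis's implementation.

```python
# Sketch of the non-linear probabilistic update (eqns. 5 and 6), with
# d_ij = 1/n. "probs" lists the label-probability vectors of the neighbours;
# "r" gives the compatibility coefficients as r[j][lam][lam_prime].

def correction(r, probs, lam):
    """q_i(lambda): neighbourhood-averaged, r-weighted support (eqn. 5)."""
    n = len(probs)
    total = 0.0
    for j in range(n):
        total += sum(r[j][lam][lp] * probs[j][lp] for lp in range(len(probs[j])))
    return total / n

def update(p_i, r, probs, m=1, alpha=1):
    """One application of eqn. 6 to the probability vector p_i."""
    raw = []
    for lam in range(len(p_i)):
        q = correction(r, probs, lam)
        factor = max(0.0, 1.0 + m * q)   # guard: 1 + m*q must stay non-negative
        raw.append(p_i[lam] * factor ** alpha)
    total = sum(raw)
    return [v / total for v in raw]      # normalize so eqn. 4 holds
```

For example, with one neighbour strongly favouring a compatible label, an ambiguous vector [0.5, 0.5] is pulled sharply toward that label in a single iteration.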
In this paper the authors showed that the convergence rate could be sped up geometrically using this form of the update rule. All three forms of the non-linear probabilistic model will be examined in this study. The convergence properties of the non-linear update rule were studied by Zucker et al (1978). They showed that the original form (α = 1 and m = 1) converged, but not necessarily to a UIM (unambiguous interpretation matrix). That is, the final probability vector space P^∞ will not necessarily be composed entirely of unit vectors (i.e. an unambiguous labeling at each point). Furthermore, their proof of convergence did not depend on how the q_i(λ) term was calculated. This would indicate that the process with α = 1 and m = n would result in similar final states provided that m · q_i(λ) > −1 at all times. For α > 1 they also proved that the process converged. The final first-order model to be discussed is one proposed by Peleg (1980). Like the linear probabilistic model, it is grounded in Bayesian theory. Because of its analytical basis, the updating rule and the compatibility coefficients are explicitly defined, eliminating the "guesswork" involved in the non-linear probabilistic model. Studies have shown that this model and the non-linear model yield comparable results (Peleg, 1980; Danker and Rosenfeld, 1981). However, the work of Fekete et al (1981) indicated that the non-linear probabilistic rule was superior in reducing the entropy of their system. Due to its similarity to the non-linear update rule and the aforementioned weakness relative to it, Peleg's update rule will not be examined in this study.

2. A second order model.

The first order models discussed above have been termed "homogeneous" models by Prager et al (1977).
This term is due to the fact that the contributions from each of the labels at each neighbour to the term q_i(λ) are collected one by one, with no regard to the interrelationships which exist between labels at different neighbouring points. Update rules which take these interrelationships into account are called higher order models (Hummel and Rosenfeld, 1978). First we shall look at what can be loosely termed a second order model. Eklundh and Rosenfeld (1981) introduced the "triple-pixel method" as an attempt to improve upon the homogeneous models used exclusively up to that point. The updating rule incorporates the information from two neighbouring points at a time. However, not all the possible combinations of neighbouring points are utilized. For a given neighbouring point, only the three locations approximately opposite the point (in an eight pixel neighbourhood) are paired with it. The compatibility coefficients are of the form r_ijk(λ, λ', λ'') where i is the point being updated, j and k are the neighbouring points, λ is the label at the update point, and λ' and λ'' are the labels at the neighbouring points. These compatibility coefficients are computed from the initial label probability estimates. The updating rule takes the form

$$p_i^{t+1}(\lambda) = \frac{p_i^t(\lambda) \sum_j \sum_{k \in N(j)} \sum_{\lambda'} \sum_{\lambda''} r_{ijk}(\lambda, \lambda', \lambda'')\, p_j^t(\lambda')\, p_k^t(\lambda'')}{T}, \quad \forall \lambda \in \Lambda$$

where $\sum_j$ denotes summation over all the neighbouring points, N(j) is the set of points paired with point j, T is a normalizing factor analogous to that in equation 6, and λ', λ'', j, and k are defined as above. Test applications of this updating rule included a pixel classification problem and line enhancement. As would be expected, the method was shown to be an improvement over the homogeneous model only for the latter task. The improvement, however, lay not in a better enhancement but in a faster convergence rate (i.e. fewer iterations to accomplish a comparable enhancement). This was verified by the work of Fekete et al (1981).
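The joint (second-order) support term above might be sketched as below. The pairing structure and coefficient layout here are hypothetical illustrations of the idea only, not the method as implemented by Eklundh and Rosenfeld.

```python
# Hypothetical sketch of the triple-pixel support term: for each neighbour j,
# only the points k paired with it contribute, and label pairs at (j, k) act
# jointly through r_ijk. Data layouts are our own illustrative choices.

def second_order_support(r3, probs, pairs, lam):
    """Support for label lam at the update point.
    r3[j][k][lam][lp][lpp] = r_ijk(lam, lam', lam'');
    pairs[j] = list of neighbour indices paired with j."""
    s = 0.0
    for j, ks in pairs.items():
        for k in ks:
            for lp, pj in enumerate(probs[j]):
                for lpp, pk in enumerate(probs[k]):
                    s += r3[j][k][lam][lp][lpp] * pj * pk
    return s
```

Note the quadruple loop over label pairs: this is the source of the extra computational cost that, as discussed next, outweighs the per-iteration gain.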
Although the convergence rate in terms of iterations was sped up, the increased computational complexity of the method actually resulted in it being slower computationally than the homogeneous model. Because this model yields results comparable to those of the homogeneous model and takes longer in "real time", it will not be implemented in this study.

3. A higher order model.

Ideally, one would like a model which can examine the neighbourhood of a point as a "snapshot". That is, it would be desirable not to look at parts of the neighbourhood and average the various effects, but to get the "big picture" as to how the labeling at a pixel should change to make itself more consistent with its surroundings. A method has been developed which attempts to do just this. This model, introduced by Prager et al (1977), Hanson and Riseman (1978), Prager (1980), and Hanson et al (1980), utilizes the crack edge operator, and the basis of the relaxation model is intimately linked with the concept of boundary (i.e. edge segment) continuity. Since the method is specific to the edge domain, a full discussion of these models will be deferred until section VI.

F. Evaluation parameters.

This study will implement the non-linear probabilistic rule of equation 6 exclusively for ice lead enhancement (section V). In section VI (sea surface thermal feature enhancement) this update rule will also be used. It was discussed earlier that this update rule converges. The convergence properties of the higher order model to be used in section VI have not been formally established (Hanson et al, 1980), but experimentally the process has been shown to converge to some fixed point. In practice we must establish some measure of convergence. In general, how can one evaluate what the relaxation process is doing from one iteration to the next, and most importantly, when should the iteration process halt?
Various authors have suggested parameters calculated from the probability vector field which quantify the behaviour of relaxation processes. See for instance Fekete et al (1981), Peleg and Rosenfeld (1981), and Hanson et al (1980). The most common parameters are entropy, rate of change, and drift. The rate of change parameter measures the average point-by-point Euclidean distance between the probability vector fields at two successive iterations:

$$C^t = \frac{1}{J} \sum_{i=1}^{J} \|\mathbf{p}_i^{t+1} - \mathbf{p}_i^t\| \qquad (7)$$

where J is the number of probability vectors in the image, and

$$\|\mathbf{p}_i^{t+1} - \mathbf{p}_i^t\| = \left[ \sum_{\lambda} \left( p_i^{t+1}(\lambda) - p_i^t(\lambda) \right)^2 \right]^{1/2} \qquad (8)$$

This parameter measures how much the probability vector field has changed from one iteration to the next and therefore is an indicator of the stability of the process (i.e. oscillating values of this parameter indicate instability) and a measure of labeling consistency (i.e. a relatively small value implies that the labeling at each point is becoming more consistent with its neighbours). More elaborate consistency measures are discussed by O'Leary and Peleg (1983). Since relaxation processes reduce ambiguity, a measure of the amount of ambiguity in the probability vector field has been defined. Entropy is given by

$$H^t = \frac{1}{J} \sum_{i=1}^{J} \left[ -\sum_{\lambda} p_i^t(\lambda) \ln p_i^t(\lambda) \right] \qquad (9)$$

For a given probability vector, the entropy is zero when the labeling is unambiguous ($\mathbf{p}_i$ is a unit vector) and a maximum when it is most ambiguous,

$$p_i(\lambda) = \frac{1}{L}, \quad \forall \lambda \in \Lambda$$

where L is the number of labels. To avoid the computation involved in using the logarithm function, an alternative way of defining the entropy is

$$H^t = \frac{1}{J} \sum_{i=1}^{J} \sum_{\lambda} p_i^t(\lambda)\,\left[1 - p_i^t(\lambda)\right]. \qquad (10)$$

The labeling results of the later iterations should bear some resemblance to the original labelings from which they were derived. The "drift" from the original labelings can be quantified using the formula

$$D^t = \frac{1}{J} \sum_{i=1}^{J} \|\mathbf{p}_i^t - \mathbf{p}_i^0\| \qquad (11)$$

The norm used is that of equation 8. A low drift value indicates a high degree of similarity to the initial probabilistic labeling.
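The three evaluation parameters might be computed as follows, using the Euclidean norm of eqn. 8 applied to a list of probability vectors; the function and symbol names are our own.

```python
# Sketch of the evaluation parameters (eqns. 7-11). A "field" is a list of
# probability vectors, one per image point.
import math

def change(field_a, field_b):
    """Eqn. 7: mean point-by-point Euclidean distance between two fields."""
    J = len(field_a)
    return sum(math.sqrt(sum((a - b) ** 2 for a, b in zip(pa, pb)))
               for pa, pb in zip(field_a, field_b)) / J

def entropy(field):
    """Eqn. 9: mean Shannon entropy; zero for an unambiguous labeling."""
    J = len(field)
    return sum(-sum(p * math.log(p) for p in vec if p > 0) for vec in field) / J

def drift(field_t, field_0):
    """Eqn. 11: mean distance from the initial labeling (same norm)."""
    return change(field_t, field_0)
```

A unit vector gives zero entropy; a two-label vector [0.5, 0.5] gives the maximum, ln 2.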
It has been stressed by Peleg and Rosenfeld (1981) and Hanson et al (1980) that none of the above parameters by itself can be relied upon as a stopping criterion.

1) The entropy by itself is not a good indicator because a probability vector field of unit vectors may be a function of how the initial label probabilities were estimated, and a given relaxation model may modify the probabilities to achieve its idea of consistency as embodied in the compatibility coefficients.

2) An individual value of the change parameter is not totally reliable because one does not know if the values have been oscillating about some mean value.

3) The drift metric is minimised by the initial probability labeling, so it of course cannot be used as a sole stopping criterion.

Some combination of the above parameters must be used to arrive at a stopping criterion which will optimize the desirable aspects of the relaxation process described by each. Peleg and Rosenfeld (1981) tested some formulas combining the parameters in various ways but came up with little. In this paper the change and entropy parameters will be used in both sections V and VI. Additional criteria will be used in the latter part of section VI and will be discussed at that time.

V. Enhancement of sea ice leads via relaxation methods.

This section will discuss the implementation of relaxation processes for the enhancement of linear features in NOAA-9 AVHRR Band 5 imagery. The imagery used is from the Beaufort Sea / Amundsen Gulf area, and the linear features represent sea ice leads (cracks).

A. Defining the relaxation process.

Section IV made clear the vast amount of flexibility one has when applying relaxation processes. Attempting to test all the various combinations of label sets, update rules, local detectors, etc. is impractical, for one is faced with combinatorial explosion. Based on prior studies and analyses of some preliminary results, however, one can reduce the number of combinations substantially.
For sea ice lead enhancement the combinations studied resulted from the following considerations.

1. Label set.

Two label sets were considered. The first label set was suggested by Zucker et al (1977) and is comprised of nine labels. Eight of the labels represent line orientations at 22.5° intervals, the ninth representing the "no line" label. Using such a label set, however, brings up an interesting issue regarding which line detector templates correspond to the orientations at 22.5°, 67.5°, 112.5°, and 157.5° (measured clockwise from the vertical). Let us look at the specific case of the 22.5° orientation (analogous arguments apply to the other three). Zucker et al (1977) used the following template for this orientation.

(22.5° template of Zucker et al; see fig. 5)

This template can be seen to be a mixture of the vertical and 45° templates. However, Vanderbrug (1977) pointed out that the following template represents a similar mix.

(alternative 22.5° template of Vanderbrug; see fig. 5)

For the sake of completeness, both line detector templates will be implemented, with the maximum output of the two templates used to represent the line strength at this orientation. (It should be noted that when using the non-linear detector both templates cannot have a non-zero response at the same point.) Another point should be made regarding the orientations at 45° and 135°. As can be seen from fig. 5, there also exist two templates each for these orientations. Again, the maximum response of the pair of templates will be chosen to represent the line strength at these orientations. Therefore, for the nine label case, the full set of fourteen templates in fig. 5 will be used to provide the line strength values to the relaxation process. In addition, a label set comprised of thirteen labels will also be examined (twelve line labels plus the "no line" label). Here each of the templates in fig. 5 will correspond to a distinct label except for the 45° and 135° cases. These will be treated as in the nine label case.
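The "maximum of paired templates" rule can be sketched generically. The template weights in the example below are illustrative placeholders only, not the fig. 5 masks of the thesis.

```python
# Sketch: line strength at an orientation with two candidate 3x3 templates
# is taken as the maximum of the two template responses.

def template_response(window, template):
    """Correlate a 3x3 image window with a 3x3 template."""
    return sum(window[r][c] * template[r][c] for r in range(3) for c in range(3))

def line_strength(window, templates):
    """Maximum response over the templates assigned to one orientation."""
    return max(template_response(window, t) for t in templates)
```

For a window containing a vertical bright line, a (placeholder) vertical template responds strongly while a diagonal one does not, and the maximum picks out the former.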
2. Local line detector.

Vanderbrug (1976) has compared and outlined the advantages and disadvantages of the various line detectors described in section II. His major conclusions follow.

1) The linear detector responds to edges and "smears" noise points.

2) The semi-linear detector does not respond to edges but is susceptible to noise within an image.

3) The non-linear detector does not respond to edges and, of the three types of line detectors, is preferable for use in noisy images.

For the purposes of this paper a line detector which responds to edges (i.e. the linear line detector) is surely not a good choice. In their application of relaxation processes to line enhancement, Zucker et al (1977) used the non-linear detector. Vanderbrug (1977) also used it in his iterative line enhancement scheme. Due to its good performance in noisy images, the non-linear line detector will be used exclusively in this study.

3. Initial label probability estimates.

Equations 1 and 2 will be used to transform the non-linear detector responses to initial probability vector estimates. However, as mentioned in section IV, a problem arises with detector responses of zero. To avoid setting any of the initial probabilities to zero, one could add a small correction factor (Δp⁰) to the initial probability estimates and renormalize to satisfy equation 3, or selectively bring detector responses of zero up to some low level of probability (p⁰_min) without renormalization. Initial tests on ice imagery comparing the two procedures with Δp⁰ = p⁰_min showed that for the small values of the parameters used (.0001 and .01), the second method (no normalization) effectively delays the normalization to the first iteration of the relaxation process. This could be seen by examining the entropy and change parameter curves for the two methods at the early iterations. With the parameters set to .0001, the entropy and change curves for the two methods were nearly identical over 30 iterations.
A visual comparison of the enhanced line images at the thirtieth iteration showed no difference in enhancement. The above observations were true for all three modes of application (corresponding to convergence rates) of the non-linear probabilistic update rule. The similarities are obviously due to the fact that the low value used made the initial probability vector fields nearly identical for the two methods. However, with the parameters set to .01, the entropy and change parameter curves diverged after the initial iterations. A visual inspection at the thirtieth iteration, though, revealed only a very slight difference in enhancement. This could be attributable to the fact that the "empirical" compatibility coefficients (to be discussed shortly) used in these tests were derived from the initial probability vector field. The slight variations in enhancement were probably due to the minor differences in the compatibility coefficients. Due to the minor (or nonexistent) enhancement differences after such a relatively high number of iterations, and also because the entropy and change evaluation parameter curves are initially smoother for the first method, the second method was removed from further consideration.

4. Definition of neighbourhood.

An argument for a small neighbourhood within the context of relaxation processes was presented in section IV. In this study the eight pixel neighbourhood will be used exclusively.

5. Update rule (relaxation model).

The non-linear probabilistic relaxation rule of equation 6 will be used exclusively in the enhancement of ice leads. As mentioned in section IV, there are three modes under which the model can run. These will be referred to as Mode 1 (slow convergence mode, m = 1 and α = 1), Mode 2 (fast convergence mode, m = n and α = 1), and Mode 3 (fastest convergence mode, m = 1 and α = 2). All three modes will be examined.
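The three modes are simply parameter settings (m, α) in the general update formula of eqn. 6. A small sketch of the per-label scale factor, with the Mode 2 non-negativity guard; the dictionary layout is our own.

```python
# The three modes as (m, alpha) choices in eqn. 6, for an 8-pixel
# neighbourhood (n = 8).

MODES = {
    1: {"m": 1, "alpha": 1},   # slow convergence (original rule)
    2: {"m": 8, "alpha": 1},   # fast convergence (Peleg and Rosenfeld, 1978)
    3: {"m": 1, "alpha": 2},   # fastest convergence (Zucker et al, 1978)
}

def scale_factor(q, mode):
    """Per-label multiplicative factor [1 + m*q]^alpha before normalization.
    In Mode 2 the term 1 + m*q must be kept non-negative, hence the clamp."""
    m, alpha = MODES[mode]["m"], MODES[mode]["alpha"]
    return max(0.0, 1.0 + m * q) ** alpha
```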
To lessen the number of coefficients to deal with, the neighbourhood weighting coefficients (the d_ij's) will be dispensed with in favor of using 1/n in equation 5. It should also be pointed out that the update rule as implemented in this study only takes into account the label probabilities at the neighbouring points and does not explicitly include those at the update point itself. This has been termed an exclusive neighbourhood by Richards et al (1981). The compatibility coefficients (the r_ij's) provide the sole means for incorporating heuristic knowledge into the relaxation process (Zucker et al, 1978). Therefore how one determines their values is of utmost importance. For line enhancement, Zucker et al (1977) derived their compatibility coefficients based on a subjective view of how labels (line orientations) at adjoining pixels smoothly continue one another. Later relaxation studies, both those dealing with line/edge enhancement and otherwise, utilized compatibility coefficients derived from the initial probability vector field. Henceforth these shall be referred to as empirical compatibility coefficients. Peleg and Rosenfeld (1978) and Yamamoto (1979) introduced methods of determining compatibility coefficients based on statistical correlations, mutual information measures, etc. In this study, empirical compatibility coefficients will be calculated as statistical correlations because the formulation is straightforward, they are directly applicable to the non-linear update rule (that is, −1 ≤ r_ij ≤ 1), and compatibility coefficients derived as statistical correlations have been shown to be as good as or better than the mutual information coefficients when implemented in relaxation processes (Peleg and Rosenfeld, 1978; Yamamoto, 1979). The formula for deriving the compatibility coefficients is straightforward (Peleg and Rosenfeld, 1978).
$$R_{ij}(\lambda, \lambda') = \frac{\sum_{x,y} \left[p_{x,y}(\lambda) - \bar{p}(\lambda)\right] \left[p_{x+i,y+j}(\lambda') - \bar{p}(\lambda')\right]}{J\,\sigma(\lambda)\,\sigma(\lambda')} \qquad (12)$$

where R_ij(λ, λ') denotes the compatibility coefficient of the label λ at the central pixel of the neighbourhood with respect to the label λ' at a pixel i pixels away in the horizontal direction and j pixels away in the vertical, $\sum_{x,y}$ denotes summation over all the pixels in the image, $\bar{p}(\lambda)$ is the average of $p_{x,y}(\lambda)$ for the whole image, and σ(λ) is the standard deviation of $p_{x,y}(\lambda)$ for the whole image. Peleg and Rosenfeld (1978) found that when using this formulation the dominance of the "no line" label resulted in high correlation values between all the orientation labels and the "no line" label. When these coefficients were subsequently used in the relaxation process, all the lines in the image were wiped out because the "no line" label was consistently the most strongly supported label. They recommended that the coefficients be weighted as follows.

$$R'_{ij}(\lambda, \lambda') = \left[1 - \bar{p}(\lambda)\right] \left[1 - \bar{p}(\lambda')\right] R_{ij}(\lambda, \lambda') \qquad (13)$$

The statistical correlation is weighted inversely by the image-wide prevalence of the two labels concerned. This study will use this formulation, but the coefficients will simply be referred to as empirical compatibility coefficients.

B. Relaxation with a 9 label set.

1. Mode comparison with Δp⁰ = .0001.

Three ice images from January 1985 were each run through the relaxation process in the three modes described earlier. Δp⁰ was set to .0001. Fig. 6 shows the resulting entropy and change parameter curves for one of the images. (The curves had the same characteristics for the other two images.) Equations 7 and 9 were used to derive the evaluation parameter values. First note the vast difference between the entropy and change curves for Mode 3 (dashed lines) as compared to the other modes. The Mode 3 curves indicate convergence around the tenth iteration. A visual inspection of the enhanced line images revealed that this mode, although very fast, converged to an unacceptable result.
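The correlation-and-weighting computation of eqns. 12 and 13 might be sketched for a single displacement (i, j) as below. This is a pure-Python illustration; skipping pixels whose displaced partner falls outside the image, and the field layout prob[λ][y][x], are assumptions of ours.

```python
# Sketch of one empirical compatibility coefficient (eqns. 12 and 13) from an
# initial probability field prob[lam][y][x]. Constant fields (zero variance)
# would need guarding in a real implementation.

def empirical_r(prob, lam, lam_p, di, dj):
    a, b = prob[lam], prob[lam_p]
    h, w = len(a), len(a[0])
    flat_a = [v for row in a for v in row]
    flat_b = [v for row in b for v in row]
    J = len(flat_a)
    mean_a, mean_b = sum(flat_a) / J, sum(flat_b) / J
    var_a = sum((v - mean_a) ** 2 for v in flat_a) / J
    var_b = sum((v - mean_b) ** 2 for v in flat_b) / J
    cov, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            yy, xx = y + dj, x + di
            if 0 <= yy < h and 0 <= xx < w:      # skip out-of-image partners
                cov += (a[y][x] - mean_a) * (b[yy][xx] - mean_b)
                count += 1
    corr = (cov / count) / ((var_a * var_b) ** 0.5)   # eqn. 12 (correlation)
    return (1 - mean_a) * (1 - mean_b) * corr          # eqn. 13 weighting
```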
The only line elements left after enhancement were those which initially had a very high probability. This mode acted more as a thresholding process than an enhancement process. This result brings out two points. The first is that the entropy and change parameter values do not indicate the "goodness" of a relaxation process's result. Secondly, this speedup technique plainly converges too quickly to a fixed point. Zucker et al (1978) warned of this latter problem but suggested that this mode may have some use; that is, the relaxation could initially be run in one of the slower modes and, when convergence is near, a switch could be made to Mode 3. The entropy curves for Modes 1 and 2 are far more interesting.

Fig. 6 Evaluation parameter curves from the line relaxation using the 9 label set with Δp⁰ = .0001. The curves correspond to the different modes: Mode 1 (solid), Mode 2 (dotted), and Mode 3 (dashed).

As can be seen from fig. 6, the Mode 2 curve (dotted line) reaches the same entropy level at the fifth iteration as the Mode 1 curve (solid line) reaches at the thirtieth. Note also that the Mode 2 curve up to the fifth iteration preserves the shape of the Mode 1 curve. All three images showed these characteristics. A visual comparison of the enhanced line images at the thirtieth iteration (Mode 1) and iteration five (Mode 2) not only revealed a similar enhancement but showed that the Mode 2 enhancement was slightly better. This indicates that the Mode 2 process converges about six times as fast with no ill effects. The change parameter curves of fig. 6 also give an indication of the speedup rate for Mode 2 as compared to Mode 1.
The Mode 1 change curve reaches a maximum at iteration twenty while the Mode 2 curve peaks at the fourth iteration. Change parameter curves for the other two images had maxima at iterations 22 and 10, respectively, for Mode 1 and maxima at iterations 4 and 2, respectively, for Mode 2. A speedup factor of about 5 or 6 is indicated by these values, supporting the estimate suggested by the entropy values. A visual examination of the enhanced line images derived from the first image indeed showed that the enhancement for the fourth iteration (Mode 2) was slightly better than that for iteration 21 (Mode 1). Another interesting feature of both the entropy and change curves for all three images is that while the Mode 1 curves reach no minima, the Mode 2 curves "bottom out" followed by a consistent upturn. In terms of the entropy, the three images' curves hit their minima at iterations 17, 19, and 21 respectively. On the other hand, the change parameter curves showed minima at iterations 15, 17, and 19 respectively. The enhanced line images were visually analyzed to subjectively determine what was actually happening around the iterations indicated by the minima in the entropy and change curves. Line thickening was the culprit, and it seemed to start around iterations 18, 19, and 20 for the three images, respectively. In the line enhancement study of Zucker et al (1977) it was noted that at higher iterations thickening of the lines became a problem. This is indicative of the degradation phase that most relaxation processes inevitably encounter (Kalayeh and Landgrebe, 1984). What this means, then, is that the onset of the degradation phase was indicated by the minima in the entropy and change parameter curves, with the change parameter consistently giving an earlier "warning" than the entropy parameter.

2. Mode comparison with Δp⁰ = .01.
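The observation above suggests a practical stopping rule: halt near the minimum of the change curve, before thickening sets in. A sketch of such a rule; the "patience" window is our own extrapolation, not a criterion used in the thesis.

```python
# Sketch: detect the first local minimum of the change-parameter curve that
# is followed by a consistent upturn (a hypothetical early-stopping rule).

def stop_iteration(change_values, patience=2):
    """Return the index of the first local minimum of the change curve that
    is followed by `patience` consecutive increases, or None if no such
    minimum occurs."""
    for t in range(1, len(change_values) - patience):
        if change_values[t] < change_values[t - 1] and all(
            change_values[t + s + 1] > change_values[t + s] for s in range(patience)
        ):
            return t
    return None
```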
Assuming that the relationships between the Mode 1 and Mode 2 cases would be independent of the value of Δp⁰, only one image was run through all the modes for Δp⁰ = .01. The entropy and change parameter curves are shown in fig. 7.

Fig. 7 Evaluation parameter curves from the line relaxation using the 9 label set with Δp⁰ = .01. The curves correspond to the different modes: Mode 1 (solid), Mode 2 (dotted), and Mode 3 (dashed).

Consistent with the previous section's results, the Mode 3 run converged very quickly with the same undesirable aspects as mentioned before. Also consistent with the previous section, the entropy curve for Mode 2 reached the same level as that for iteration 30 of the Mode 1 case in far fewer iterations, in this case 4. Again the shape of the Mode 2 curve up to the fourth iteration preserved the shape of the Mode 1 curve, and a visual comparison of the enhanced line images showed that the Mode 2 (iteration 4) enhancement was slightly better than that for the Mode 1 (iteration 30) case. Analogous to the previous section, the change parameter curves also provide an estimate for the speedup rate. The Mode 1 curve reached its maximum level at iteration 19 while the Mode 2 curve peaked at iteration 3. However, the similarities end there. First of all, the entropy curve for the Mode 2 case does not reach a minimum within 30 iterations. Furthermore, the change parameter curve does not display a pronounced minimum followed by a consistent upturn, although there is a weak minimum at iteration 26. The other two images used in the previous section were put through the Mode 2 relaxation process with Δp⁰ = .01 to see if any distinct minima could be observed in their evaluation parameter curves. The entropy curves for these two images showed the same monotonic decrease in entropy as observed for the first input image. However, the change parameter curves for these two images showed distinct minima at iterations 10 and 16 respectively. Recall that the same three images were run through the relaxation process
with Δp⁰ = .0001 and that the minima in their change parameter curves coincided within a small iteration range, 15 to 19. The change parameter minima are definitely much more spread out for Δp⁰ = .01. The enhanced line images for each of the three input ice images were analyzed in an attempt to make sense of the situation. For all three input images it was observed that thickening started to occur around the tenth iteration. Except for the one image having a change parameter minimum at iteration 10, the entropy and change parameter curves give no clear indication of thickening in terms of minima in the evaluation parameter curves. Is there any indicator of the onset of thickening? A subtle one was observable in the entropy curves for all three input images. Note in fig. 7 that the Mode 2 entropy curve drops off linearly in the earlier iterations and then exhibits a non-linear dropoff in the later iterations. For the three ice images this departure from linearity occurred around iterations 7, 8, and 7 respectively. These values give somewhat of an early warning of the degradation phase, but this departure from linearity was really the only consistent characteristic observable. There is a plausible explanation for the nature of the entropy curves with Δp⁰ = .01. First of all, this value of the initial probability adjustment raises the entropy of the initial probability vector field a great deal. For the three images analyzed, the initial entropy was an order of magnitude greater for Δp⁰ = .01 than for Δp⁰ = .0001.
Because the entropy is initially much higher, the relaxation process persistently tries to lower the probabilities of those labels whose initial probabilities were adjusted to a non-zero value but which clearly get no support within the relaxation. Before thickening sets in, the relaxation process has not had enough time to finish the task (i.e. the entropy curve reaches no minimum). The departure from linearity of the entropy curve can be explained if one views the entropy curve as describing a superposition of two processes. The first process is that described in the previous paragraph, which tends to lower the entropy overall. As was seen with Δp⁰ = .0001, thickening was indicated by a general increase in entropy (the upturn of the entropy curve). For the Δp⁰ = .01 case, the increase in entropy due to thickening is superimposed on the decreasing trend of the first process. Thus we see a decrease in the rate of dropoff when thickening begins to occur. It is difficult to explain the variability of the change parameter curves with Δp⁰ = .01. Possibly the variability is due to some input image dependence.

3. Overview of results from the 9 label set.

A major result of the 9 label set runs is that the Mode 2 relaxation process is a reliable speedup technique. It produces an enhancement comparable to the Mode 1 process and speeds up the relaxation process by at least a factor of five. Another major result is that the Mode 3 runs converge too quickly to an unacceptable result. Figures 8a through 8c show an input image, the initial probability vector field displayed symbolically, and one of the (pre-thickening) enhanced line images (also displayed symbolically). The symbolic representation follows that of Peleg and Rosenfeld (1978) and is actually a maximum likelihood representation of the probability vector field. That is, at every point the label with the maximum probability is determined.
If the maximum probability corresponds to a line orientation, a 3 x 3 pixel symbol is centred on the point with its brightness proportional to the probability value. The symbols used are shown in fig. 9. Note that for the 22.5°, 67.5°, 112.5°, and 157.5° orientations the symbols used are combinations of the two line detector templates corresponding to each of these orientations. If the "no line" label has the greatest probability at a point, a 3 x 3 square is centred on the position with an intensity proportional to 1 - p(no line). Thus, if the "no line" probability is the largest but not necessarily a high value (an indication that a line orientation label has some possibility of eventually claiming the position), the square will be bright. Furthermore, the high "no line" probability positions provide a dark background. Fig. 8 (a) AVHRR band 5 ice image of 30 January, 1985 and (b) the initial probability estimates for the line relaxation using the 9 label set and Δp° = .0001. Fig. 8 (cont'd) (c) Result of the nineteenth iteration of the line relaxation using the 9 label set and Δp° = .0001. (d) Result of the thirtieth iteration of the line relaxation using the 13 label set and empirical compatibility coefficients. Fig. 8 (cont'd) (e) Result of the thirtieth iteration of the line relaxation using the 13 label set and heuristic compatibility coefficients. Fig. 9 Symbols used in displaying the maximum likelihood interpretation for the line relaxation using the 9 label set. Fig. 10 The adjacent line orientation configurations corresponding to one of the compatibility coefficients for the 9 label case. The two symbols (i.e. boxes and crosses) each correspond to one of the orientations. The probabilities of line orientation labels along extended segments have been strengthened to a high degree. Note that the squares along extended segments in fig. 8b have been replaced by line orientation symbols in fig.
8c. This can best be seen in the areas where all initial line orientation probabilities were relatively weak. Specifically note the area between the very distinct cracks in the top centre of the images. Note however that spurious detector responses throughout the image have not been removed. This is probably due to the fact that the "no line" / line compatibility coefficients have been severely reduced via equation 13. Thus the relaxation sub-process which should weaken or eliminate spurious responses has been reduced to an ineffective level (cf. Zucker et al, 1977). However these spurious responses haven't been reinforced, which is desirable. A major disappointment is that "gap filling" was virtually non-existent in the enhanced line images. The few occasions where it seemed to happen occurred long after thickening had set in. Furthermore it was hoped that the higher value of Δp°, .01, would give line orientation labels a better chance of being enhanced in areas where gaps occurred along extended segments. However the higher value simply seemed to hasten the onset of thickening. The results showed that thickening was representative of the degradation phase characteristic of many relaxation processes. The onset of thickening was indicated by minima in the evaluation parameter curves for Δp° = .0001 and by a departure from linearity in the entropy curves when Δp° = .01. The fact that the entropy curves were more consistent as a means of detecting the onset of thickening than the change parameter curves supports the contention of section IV that more than one evaluation parameter should be utilized to evaluate the behaviour of a relaxation process. It also supports the idea that the evaluation parameters measure different aspects of the relaxation process. C. Relaxation with a 13 label set. 1. The advantages of a 13 label set.
The two greatest problems with the 9 label set were the thickening problem and the fact that thickening occurred before any small gaps could be filled along extended segments. Visual analyses of the enhanced line images derived from the image of fig. 8a indicated that thickening seemed to occur in areas where the 22.5° line orientation was most prevalent. An inspection of the image-wide initial probability averages and the empirical compatibility coefficient array gave some insight as to why the thickening was associated with this orientation. First of all, for this input image the 22.5° orientation label had the highest image-wide initial probability average among the orientations 22.5°, 67.5°, 112.5°, and 157.5° and the second highest average probability among all line orientation labels. Now let's look specifically at the empirical compatibility coefficients calculated from this input image for the 22.5° orientation label with respect to the 22.5° orientation labels in the eight-pixel neighbourhood.

    -.014  +.242  +.289
    -.014    .    -.014
    +.289  +.242  -.014

The locations of the values in the above array correspond to the positions of the neighbours for which the value applies. For example, the compatibility coefficient for the 22.5° label at the central pixel with the 22.5° label at the pixel immediately above (and below in this case) the central pixel equals 0.242. The thickening problem lies in the fact that the 22.5° orientation label corresponds to both of the following two line orientations.

    .  .  X        .  X  .
    .  X  .        .  X  .
    .  X  .        X  .  .

Because the 22.5° label is a mixture of two distinct line orientations, the nature of the compatibility coefficients involving this label (and the labels for the 67.5°, 112.5°, and 157.5° cases, which also are mixtures of distinct line orientations) is different from that of those involving the "pure" labels (i.e. the vertical, horizontal, and two diagonal line orientation labels).
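The empirical coefficients in the array above are image-wide statistics derived from the initial probability field. As a hedged illustration only (a correlation-style estimator in the spirit of Peleg and Rosenfeld, not necessarily the thesis's exact formula; all names are invented), such a coefficient between one label at a pixel and another label at a fixed neighbour displacement can be computed as follows:

```python
import numpy as np

def empirical_compatibility(p, label_a, label_b, dy, dx):
    """Correlation-style compatibility between label_a at each pixel and
    label_b at the neighbour displaced by (dy, dx).

    p: (rows, cols, n_labels) initial label probability field.
    Positive values mean the pair co-occurs more often than chance,
    as with the +.242 / +.289 entries quoted above.
    """
    a = p[:, :, label_a]
    b = np.roll(np.roll(p[:, :, label_b], -dy, axis=0), -dx, axis=1)
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).mean() * (b * b).mean())
    return float((a * b).mean() / denom) if denom > 0 else 0.0

# Synthetic field: a vertical line of label 0 down column 3, label 1 elsewhere.
p = np.zeros((8, 8, 2))
p[:, 3, 0] = 1.0
p[..., 1] = 1.0 - p[..., 0]
```

On this synthetic field the label correlates positively with itself in the vertical direction (the line continues above and below) and negatively in the horizontal, mirroring the sign pattern of the array quoted in the text.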
The vertical label for example has its highest positive compatibility coefficients associated with the two neighbours where a vertical line centred at the update position can be smoothly continued, the neighbouring locations directly above and below the update position. Since the 22.5° label is a mixture of two distinct line orientations, four neighbours in the eight-pixel neighbourhood are associated with high positive coefficients. (In the above example these are the two values of .242 and the two values of .289.) That is, the empirical compatibility coefficients reflect the fact that the initial label probabilities for the 22.5° label are calculated from line detector responses from two distinct line orientation templates. (See section V.A.1.) The coefficients reflect the following continuity aspects for the two distinct line orientations making up the 22.5° label. The first line orientation,

    .  .  X
    .  X  .
    .  X  .

can be smoothly continued by a suitable line orientation centred at the top right corner position and/or an appropriate line orientation centred at the middle position of the bottom row in the eight-pixel neighbourhood. The other line orientation associated with the 22.5° label,

    .  X  .
    .  X  .
    X  .  .

can be smoothly continued by a suitable line orientation centred at the middle position of the top row of the eight-pixel neighbourhood and/or an appropriate line orientation centred at the bottom left corner position. Thus the empirical compatibility coefficient array displayed above shows positive coefficients at four locations in the eight-pixel neighbourhood (i.e. two positions for each of the two line orientations making up the 22.5° label). The location and value of compatibility coefficients within the eight-pixel neighbourhood should be indicative of how well adjacent line orientations can smoothly continue one another. However for the 22.5° label the interpretation of the compatibility coefficients becomes convoluted.
For example, the centre value of the top row of the above coefficient array gives the compatibility coefficient for the four adjacent line orientation configurations shown in fig. 10. (The crosses correspond to one of the 22.5° line orientations centred at the update position while the boxes correspond to one of the 22.5° line orientations centred at the neighbour directly above the update position.) It can be easily seen that only one of these four configurations should be associated with a positive compatibility coefficient, the third from the left. This configuration embodies the idea of positive compatibility in that the adjacent line orientations smoothly continue one another. However, using the nine label set dictates that all four configurations have the same positive compatibility coefficient value. The use of these coefficients within the relaxation results in positive support being received from the 22.5° orientation label at four of the neighbouring locations. Recall though that any line orientation centred at the update position can only be smoothly continued at two of the neighbouring positions. In addition, the labels at those neighbouring locations associated with positive coefficients receive positive support within the relaxation from the 22.5° label at the update position. Since the nine label set results in positive support being received from (and given to) four neighbours instead of two, it is highly likely that this situation accounts for the thickening problem. Analogous arguments apply for the orientations at 67.5°, 112.5°, and 157.5°, but because the 22.5° orientation is so prevalent in the image, the thickening is most observable in the areas of the image where it predominates. The idea of using a label set made up of 13 labels was the direct result of the argument above.
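The support argument above can be made concrete with a small sketch. In the standard non-linear model the support a label receives is a compatibility-weighted sum of the neighbours' label probabilities; with invented toy coefficients, a mixed label carrying positive coefficients at four neighbour positions accrues twice the support of a pure label carrying them at two:

```python
import numpy as np

def neighbourhood_support(r, p_neighbours):
    """Support q for a label at the update position: the compatibility-
    weighted average of label probabilities over the 8 neighbours.

    r: (8, n_labels) coefficients for this label vs. each neighbour's labels.
    p_neighbours: (8, n_labels) current probabilities at the neighbours.
    A sketch of the usual Rosenfeld et al. (1976) support term; the names
    and the uniform weighting are illustrative, not from the thesis.
    """
    return float((r * p_neighbours).sum() / r.shape[0])

# Neighbour positions indexed 0..7 row-major around the centre:
#   0 1 2
#   3 . 4
#   5 6 7
# Pure vertical label: positive coefficients only above (1) and below (6).
r_pure = np.zeros((8, 1))
r_pure[[1, 6], 0] = 0.5
# Mixed 22.5deg-style label: positives at top-mid, top-right,
# bottom-left, bottom-mid, as in the array quoted earlier.
r_mixed = np.zeros((8, 1))
r_mixed[[1, 2, 5, 6], 0] = 0.5
# Suppose the supporting label is fully present at every neighbour:
p_nb = np.ones((8, 1))
```

Running `neighbourhood_support` on these arrays gives the mixed label double the support of the pure one, which is the mechanism proposed above for the thickening.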
With each of the labels 22.5°, 67.5°, 112.5°, and 157.5° split into two distinct labels, the above problems with the empirical compatibility coefficients will be eliminated in that only the adjacent line orientations which smoothly continue one another will be associated with positive coefficients. 2. 13 label set with empirical compatibility coefficients. Since the Mode 2 runs provided the best results for the nine label case, the relaxation process for the 13 label case was run exclusively in Mode 2. Also, since gap filling is a desirable aspect of the enhancement process, it was felt that a Δp° value of .01 would give the line orientation labels a better chance of survival in areas where gaps occur along extended segments. This value was used exclusively. The three images used in the previous sections were run through the relaxation process. Fig. 8d shows the enhanced line image for the input image of fig. 8a. Fig. 11 shows the evaluation parameter curves (solid lines) for one of the images. (The other two images had similar curves.) Notice that the curves show no minima. Since the evaluation parameter curves gave no indication of bottoming out up to the thirtieth iteration, the enhanced line images at iteration 30 (for all the input images) were compared with the best (pre-thickening) results from the 9 label set runs. An itemization of the significant observed differences follows. (Compare figures 8c and 8d.) 1) Most importantly, for the 13 label case no thickening was apparent in the enhanced line images even at such a relatively high iteration value. 2) Gap filling was evident in areas where the initial label probability image (fig. 8b) indicated no sign of a line orientation being even remotely probable. This can be seen best along the very distinct leads. The gaps filled were as long as three pixels in length. 3) Spurious responses were weakened to a higher degree and in some cases eliminated.
4) More prevalent was line probability strengthening along extended segments, although this is probably due to the fact that the enhanced line images from the thirteen label runs were compared to enhanced images from the nine label runs which had run ten or more iterations less. The results support the hypothesis that thickening in the nine label case is attributable to the sharing of labels by two distinct line orientations. The empirical compatibility coefficients clearly show why the thirteen label case is such an improvement. Let's look specifically at the coefficients between the two line orientations

    .  .  X        .  X  .
    .  X  .        .  X  .
    .  X  .        X  .  .

which earlier were lumped together to represent the 22.5° orientation. The empirical compatibility coefficients between the first orientation above at the central location and the second orientation above at the neighbouring pixels were

    -.007  -.007  +.589
    -.007    .    -.007
    -.007  +.497  -.007

The adjacent line orientation configuration corresponding to the value 0.589 is shown to the left in fig. 12 and that for the value 0.497 to the right in the same figure. Clearly these configurations are desirable and should be associated with positive coefficients. Note too that these values are much higher than the values in the nine label case. The higher values Fig. 12 The adjacent line orientation configurations corresponding to one of the compatibility coefficients for the 13 label case. The two symbols (i.e. boxes and crosses) each correspond to one of the orientations. Fig. 13 Examples of the seven classes of adjacent line orientation configurations associated with positive heuristic compatibility coefficients. The two symbols (i.e. boxes and crosses) each correspond to one of the orientations. Fig. 14 Examples of the four classes of adjacent line orientation configurations associated with heuristic compatibility coefficient values of zero. The two symbols (i.e. boxes and crosses) each correspond to one of the orientations.
Fig. 15 Examples of adjacent line orientation configurations associated with negative heuristic compatibility coefficients. The two symbols (i.e. boxes and crosses) each correspond to one of the orientations. for the compatibility coefficients should speed up the convergence rate for the relaxation (Richards et al, 1981). An inspection of the empirical compatibility coefficient array revealed that all such "natural" continuations were associated with positive values. This may explain why gap filling was accomplished so well. The coefficients also show that the "no line" / line (negative) values are higher in magnitude for the thirteen label case. This would account for the increased effectiveness in weakening spurious responses. The evaluation parameter curves surely do not indicate the onset of a degradation phase, at least within thirty iterations. The change parameter curve seems to be decreasing at a fairly constant rate at the higher iterations, and the entropy curve, although decreasing in slope, still shows a steady decrease. This is unfortunate because two undesirable effects could be seen in the enhanced images at the later iterations. The first is that line elements at the ends of extended line segments seem to grow "into the void" (i.e. they grow into areas where no linear features are apparent) at the higher iterations. Another problem that doesn't occur nearly as often is a "reaching around" effect. That is, a spur seems to grow off an extended segment and then links back up with it, creating a closed loop along the segment. These two effects are much more sporadic than the thickening problem, so they are not reflected in the evaluation parameter curves. The drift evaluation parameter of equation 11 could possibly show some potential in indicating the presence of these "data creating" phenomena.
Note that the argument of the previous section regarding the departure from linearity of the entropy curve, although plausible when thickening occurs, could be rejected on the grounds that the entropy curves in the present case also deviate from linearity somewhat while thickening does not occur. The departure from linearity criterion is a weak one and surely not robust. One must know a priori whether the evaluation curves are the product of the nine label case. 3. 13 label set with "heuristic" compatibility coefficients. Since the set of 13 labels resulted in "natural" continuations of adjacent line labels being positively supported, an attempt was made to estimate a set of compatibility coefficients based on "natural" continuation and related criteria. These will be referred to as heuristic compatibility coefficients. At first glance it would seem to be an awesome task to estimate the vast number of coefficients for the 13 label, 8 neighbour case; there are 13² · 8 values to be estimated. However rotational and left / right symmetries reduce the number quite a bit. The fundamental principles underlying the heuristic coefficients are that 1) the non-linear line detector cannot possibly give a non-zero response, in many cases, to both of a pair of adjacent line orientations, and 2) by using the non-linear detector and 13 label set the resulting extended line segments become composed of "links" which correspond to the "natural" continuations referred to earlier. The first point can be clarified easily. Suppose the detector is operating in the following neighbourhood.

    a1  b1  c1
    a2  b2  c2
    a3  b3  c3

Further suppose that the non-linear detector centred at b2 gives a non-zero response for the vertical orientation. Among other things, this implies that b2 < a2 and b2 < c2 (for a dark line on a bright background).
When the detector is centred at either a2 or c2 it is impossible for the non-linear detector to give a non-zero response for the vertical orientation (or any of the other orientation templates, seven in all, which result in a pixel value check to the left and right), for this implies that b2 > a2 for the detector centred at a2 and b2 > c2 when c2 is the centre of the neighbourhood. The second point can be as easily explained. Suppose again that a vertical line is indicated centred at b2. The following conditions have thus been satisfied.

    b1 < a1 and b1 < c1
    b2 < a2 and b2 < c2
    b3 < a3 and b3 < c3

When the detector centres itself at b3, two conditions have already been satisfied (the latter two above) for the possible existence of lines at the following three orientations.

    .  X  .      .  X  .      .  X  .
    .  X  .      .  X  .      .  X  .
    .  X  .      X  .  .      .  .  X

These are the "natural" continuations for the vertical label in the downward direction. Following is a summary of the manner in which the heuristic compatibility coefficients were set. 1) Excluding the 45° and 135° orientations, independence was assumed between any of the labels whose line detector templates check pixel values in the horizontal direction and any labels whose checks are done in the vertical direction (i.e. r_ij(λ, λ') = 0). 2) Positive coefficients were associated with any contiguous line orientation pair which have at least two pixels in common (i.e. both have at least two line detector conditions identically satisfied). Taking into account rotational and left / right symmetries, only seven unique configurations are possible (fig. 13). In these cases, r_ij(λ, λ') = 1. 3) There are cases where only one line detector condition is identically satisfied yet there exists no conflict for any other conditions. These resemble branching or junction situations and there are four unique configurations possible (taking into account rotational and left / right symmetries). See fig. 14.
Providing support for such situations may cause thickening at junctions, but inhibiting these situations may eliminate junctions. The label orientations involved in these situations were left to get support elsewhere: r_ij(λ, λ') = 0. 4) If two contiguous orientations are impossible vis-à-vis how the non-linear detector operates, then these are associated with r_ij(λ, λ') = -1. 5) The 45° and 135° orientations were left to act as a bridge between the labels whose pixel value checks are done in the vertical and those whose checks are in the horizontal. For the sake of consistency, the configurations in fig. 13 in which these orientations occur will have r_ij(λ, λ') = 1 and the configurations in fig. 14 in which they occur will have r_ij(λ, λ') = 0. 6) The 45° and 135° orientations cause some problems in defining negative coefficients because each has two line detector templates performing pixel value checks in orthogonal directions. Therefore the conflicts introduced via how the non-linear detector functions are not straightforward. The decision was made to inhibit those configurations which resembled thickening situations such as those shown in fig. 15. Note that the leftmost configuration in fig. 15 thickens in two places in the vertical direction and that the middle configuration thickens in two places in the horizontal. The rightmost does so simultaneously in both directions. All configurations involving the 45° and 135° orientations which thickened in at least two places in either the horizontal or vertical direction (or both) were associated with r_ij(λ, λ') = -1. 7) The (negative) "no line" / line coefficients were set in the following manner. Negative coefficients (r_ij(λ, λ') = -1) were assigned to the neighbourhood locations where each of the line orientations had a definite preference for continuation.
For example, the "no line" / vertical orientation negative coefficients were assigned to the middle locations of both the top and bottom row of the compatibility coefficient array for this label pair. These are the locations at which the vertical label seeks possible continuation. The three input images used in the previous sections were used in a Mode 2 relaxation utilizing the heuristic compatibility coefficients. Δp° was set to .01. The enhanced line image corresponding to the input image of fig. 8a is shown in fig. 8e. Fig. 11 shows the evaluation parameter curves (dotted lines) for the image of fig. 8e. (Again the other two images resulted in similar curves.) Recall that the solid lines on these graphs represent the 13 label runs with empirical coefficients. Note that the heuristic curves indicate a faster convergence rate. This is due to the fact that the values of the heuristic coefficients are in general larger than the empirical coefficients, resulting in a faster convergence (Richards et al, 1981). Unlike the curves for the empirical coefficients (13 label case), the change parameter curves for the heuristic coefficients displayed distinct minima (iterations 28, 21, and 20 for the three images, respectively) and one image, corresponding to the third value above, had a minimum in its entropy curve at iteration 21. These minima in the evaluation parameter curves may indicate the onset of the degradation phase. The enhanced line images from the later iterations of the heuristic runs were compared to those from iteration 30 of the runs using empirical coefficients for all the input images. As expected, thickening of the lines was not a problem with the heuristic coefficients, but in general the results for the empirical coefficient runs were superior. The enhanced line images from the heuristic coefficient applications were worse than those from the empirical coefficient runs in the following ways. (Compare figs. 8d and 8e.)
1) Gap filling was not as prevalent. 2) The line elements at the ends of extended segments grew "into the void" more often. 3) Line probability strengthening along extended segments was somewhat inferior in the areas where the line probabilities comprising the segments were all initially weak. A plausible explanation for the above weaknesses is that the values of the heuristic coefficients are too large to be implemented in a Mode 2 relaxation process and, furthermore, there are too many negative coefficients as compared to the number of positive ones. These observations were deduced from looking at certain parameter values output after each iteration of the relaxation. These values consisted of the maximum and minimum q values (see equation 5) derived during the iteration and the number of times the 1 + m · q > 0 condition (see equation 6) was violated. As mentioned in section IV, using m = 1 (Mode 1 process) guarantees that no probability value will become negative during the relaxation. However when using the Mode 2 process there exists the danger of violating this condition. For the 9 label runs and for the 13 label case using empirical coefficients this condition was never violated and the Mode 2 process served as a reliable speedup technique. In addition, the maximum (positive) q value was consistently greater in magnitude than the largest negative q value. However the Mode 2 runs with heuristic coefficients resulted in many violations of the 1 + m · q > 0 condition. In fact after the first iteration alone the percentage of violations relative to the number of updated probabilities ranged from around 3 percent (for two of the input images) to about 5 percent for the other input image. (Subsequent iterations also showed violations of this condition but not nearly as often as for the first iteration.) Furthermore the largest positive q value was consistently a great deal smaller in magnitude than the largest negative q value over all the iterations.
Anticipating the potential for the Mode 2 process to create negative probability values, the relaxation algorithm for the Mode 2 process checked the value of q to determine if a violation had occurred. If the check was positive, the label probability being updated was set to zero. The explanation of the previous few paragraphs accounts for the weaknesses observed in the enhanced line images derived using the heuristic coefficients. The relaxation subprocess acting to weaken the line orientation probabilities via the negative coefficients was much stronger than the strengthening subprocess. Thus, the line label probabilities in the areas where gaps occurred were zeroed out before the relaxation process could strengthen them. Furthermore, extended segments comprised of generally low line orientation probabilities were eliminated due to the dominance of the line weakening subprocess. To determine the extent of the degradation related to the zeroing out of label probabilities, the heuristic coefficients were used in a Mode 1 process. The same three input images were used. Fig. 16 shows the evaluation parameter curves for the Mode 1 process (solid lines) along with the curves from the Mode 2 process (dotted lines) for one of the images. (Once again the Mode 1 evaluation parameter curves were similar for the other two input images.) It can be seen that the entropy curve for the Mode 2 process at the third iteration reaches the same level as the Mode 1 curve does at the thirtieth. Comparisons of the Mode 1 enhanced images at iteration 30 were made with those from the Mode 2 (iteration 3) runs for all three input images. All the comparisons revealed that the Mode 1 enhancements were substantially better. This implies that the Mode 2 process is not a reliable speedup technique when using the heuristic compatibility coefficients.
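The Mode 2 safeguard just described can be sketched as follows. The update rule p' proportional to p(1 + m · q) follows the form of equations 5 and 6 in spirit; the function and variable names are illustrative, not taken from the thesis software:

```python
import numpy as np

def update_probabilities(p, q, m=1.0):
    """One relaxation update at a pixel: p' proportional to p * (1 + m*q).

    p: (n_labels,) probability vector; q: matching support values in [-1, 1].
    With m = 1 (Mode 1) the factor 1 + m*q is never negative. A larger m
    (Mode 2 speedup) can violate 1 + m*q > 0; as in the thesis's algorithm,
    the offending label probability is then set to zero.
    """
    factor = 1.0 + m * q
    factor = np.where(factor > 0.0, factor, 0.0)   # zero out violations
    new_p = p * factor
    s = new_p.sum()
    return new_p / s if s > 0 else p

# One weakly supported and one inhibited label:
p = np.array([0.5, 0.5])
q = np.array([0.5, -0.3])
out1 = update_probabilities(p, q, m=1.0)   # Mode 1: both labels survive
out2 = update_probabilities(p, q, m=5.0)   # speedup: 1 + 5*(-0.3) < 0
```

With the large multiplier the second label is zeroed outright, which is exactly the premature elimination of weak line probabilities blamed above for the missing gap filling.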
The heuristic compatibility coefficient results indicate that some "tweaking" of the coefficients is called for to enable the process to run in Mode 2. The coefficients also must be adjusted so that the two relaxation subprocesses work well together. Overall, though, the results using the heuristic coefficients show some merit regarding the possibility of defining compatibility coefficients subjectively. The isotropic nature of the coefficients has its advantages and disadvantages. As pointed out by Richards et al (1981), the empirical compatibility coefficients, representing image-wide correlations, cannot take into account the various line orientation biases, etc. specific to different areas of the image. The compatibility coefficients for the image shown in fig. 8a, for example, are biased against the horizontal or near horizontal line labels because the predominant leads are in the vertical or near vertical orientations. A comparison of figs. 8d and 8e shows that the heuristic coefficients have resulted in an enhanced horizontal line segment towards the middle of the image, whereas in the enhanced line image corresponding to the empirical coefficients this line has been enhanced very little. On the negative side, the heuristic coefficients have resulted in many lines becoming attached via horizontal lines which appear not to be all that desirable. The compromise solution as suggested by Richards et al (1981) would be to derive compatibility coefficients for smaller windows of the image and therefore use location-dependent values within the relaxation. 4. Overview of results from the 13 label set. The results from the 13 label case suggest that all the desirable aspects of the nine label case are preserved but with two additional features. Gap filling was more readily accomplished. Also, no thickening was evident even when using the higher value for Δp° (.01). A few undesirable features were observed in the later iterations.
There were sporadic cases of line segments growing into areas where no line segments should occur. Even less often, spurs were seen to grow off extended line segments and to ultimately rejoin with the segment. These occurrences were definitely not as widespread as the thickening problem was in the 9 label case, so the evaluation parameter curves did not reflect any problems with the relaxation. A subjectively derived set of compatibility coefficients worked fairly well, but some adjustment of the coefficient values is called for to balance out the line strengthening and line weakening subprocesses. VI. Enhancement of sea surface thermal features via relaxation methods. A. Considerations specific to the edge domain. So as not to cloud the many similarities between the enhancement of lines and edges via relaxation processes, some edge-specific details were left to this section. These relate to pre-processing of the image intensity data (i.e. filtering) and the need to deal with intensity changes which are spatially large. 1. Filtering. Since the linear features in the ice imagery were initially detected using an operator which is not noise sensitive, the pre-processing of the ice imagery intensity data was minimal. The only pre-processing of the images involved removing obvious noise points invariably caused by "dropouts", an effect associated with the transmission of the data from the satellite. To deal with the dropout problem, histograms of the image intensity values were produced and analyzed to find outliers from the main bulk of the histogram. A simple filter was then applied to the images. Its input consisted of the acceptable range of intensity values. Upon detecting an intensity value outside of this range, the filter replaced the value with an average of the pixel values in the eight pixel neighbourhood which lay within the range. Nothing was done to the pixel values lying within the acceptable range.
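A minimal sketch of the dropout filter just described follows (the thesis gives no code; border handling by clipping the window is an assumption, as are all names):

```python
import numpy as np

def dropout_filter(img, lo, hi):
    """Replace out-of-range ("dropout") pixels with the mean of the
    in-range pixels in their 8-neighbourhood; in-range pixels are untouched.

    The acceptable range [lo, hi] plays the role of the histogram-derived
    range described in the text. At image borders the window is clipped.
    """
    out = img.astype(float).copy()
    rows, cols = img.shape
    ok = (img >= lo) & (img <= hi)
    for i, j in zip(*np.where(~ok)):
        i0, i1 = max(i - 1, 0), min(i + 2, rows)
        j0, j1 = max(j - 1, 0), min(j + 2, cols)
        win_ok = ok[i0:i1, j0:j1].copy()
        win_ok[i - i0, j - j0] = False          # exclude the centre itself
        vals = img[i0:i1, j0:j1][win_ok]
        if vals.size:
            out[i, j] = vals.mean()
    return out

# A single dropout spike in an otherwise uniform patch:
img = np.full((3, 3), 10.0)
img[1, 1] = 255.0
clean = dropout_filter(img, lo=0.0, hi=100.0)
```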
The reasoning behind the use of this so-called "dropout filter" was that in the event that a dropout point's value were included in the computation of a line strength, it might have resulted in an abnormally high line response. Since the initial label probability estimates are scaled to the largest line detector response in the image, the potential for this situation had to be minimized. On the other hand, edge operators, implementing a straightforward convolution of a spatial mask with the image intensity field, are highly susceptible to image noise. Because of this, a great deal of the image processing literature has dealt with finding filters which "clean up" imagery without deleting or smearing edges (cf. Lev et al, 1977). A simple filter which achieves this is the median filter. This filter works in parallel and simply replaces each pixel value with the median of the pixel value itself and the pixel values in the eight pixel neighbourhood. Therefore, the sea surface infrared imagery was first dropout filtered and then median filtered once. 2. Approaches for dealing with spatially large intensity changes. Dealing with intensity changes occurring over many pixels (i.e. gradient or ramp edges) is much more difficult (cf. Eberlein, 1976). Edge processing techniques, relaxation related and otherwise, invariably use one of two schemes to deal with this problem: non-maxima suppression or non-maxima absorption. Non-maxima suppression entails zeroing out all edge strength values of a set of parallel edge indications (of the same sign) except for the maximum value(s). To get a more accurate estimate of the edge strength in a given area of an image, non-maxima absorption is preferable. The edge values of a set of parallel edges are summed, with the total being assigned to one (or a few) of the edge locations of the set, often the location(s) corresponding to the maximum edge response(s) (Prager et al, 1977).
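The difference between the two schemes can be sketched on a one-dimensional run of parallel edge responses across a ramp edge (the response values below are invented for illustration):

```python
import numpy as np

def suppress(strengths):
    """Non-maxima suppression across a run of parallel edge responses:
    keep only the maximum response, zero the rest."""
    out = np.zeros_like(strengths, dtype=float)
    out[np.argmax(strengths)] = strengths.max()
    return out

def absorb(strengths):
    """Non-maxima absorption: sum the run and assign the total to the
    location of the maximum response (after Prager et al., 1977)."""
    out = np.zeros_like(strengths, dtype=float)
    out[np.argmax(strengths)] = strengths.sum()
    return out

# Edge responses across a ramp edge spanning several pixels:
ramp = np.array([0.1, 0.4, 0.9, 0.5, 0.2])
s = suppress(ramp)
a = absorb(ramp)
```

Suppression keeps only the 0.9 peak and discards the rest of the ramp's evidence, while absorption concentrates the full summed strength at the peak location, which is why absorption gives the more accurate local edge strength estimate noted above.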
Both suppression and absorption have the drawback that edge elements along extended edges may become misaligned, making edge tracking difficult and resulting in edge segment fragmentation. In this study both techniques will be implemented. Following the lead of the edge relaxation study of Danker and Rosenfeld (1981), suppression will be utilized where the edge values are derived from the template matching edge operators. An absorption scheme associated with the crack edge representation proposed by Hanson and Riseman (1978) and Hanson et al (1980) will be implemented in the section which uses the crack edge representation. It should be pointed out that the above implementations will be pre-processing steps in the sense that they will be used after the edge operators have acted on the image but before the initial label probabilities are estimated. However, Riseman and Arbib (1977), Hanson and Riseman (1978), and Hanson et al (1980) have suggested that an edge relaxation process may be able to perform both non-maxima suppression and edge enhancement geared toward edge segment continuity. This point will be discussed in more detail in the following sections.

B. Edge relaxation using the template matching edge operators.

The work of Danker and Rosenfeld (1981) provides the basis for the edge enhancement scheme used in this section. Unlike the earlier edge relaxation study of Schachter et al (1977), which used the output of a differential edge operator as input to a relaxation process, this scheme uses the template matching edge operators as input to a relaxation process analogous to that utilized in section V of this paper.

1. Label set and edge templates.

The label set is composed of eight edge labels, each corresponding to one of the directional templates in fig. 3, plus a "no edge" label. Abdou and Pratt (1979) and Kitchen and Rosenfeld (1981) showed that the 3-level templates of fig.
3 were as good as or superior to the other template matching operators in almost all respects, so the 3-level templates will be used exclusively in this section. Henceforth, the term "vertical edge" refers to an edge which results from intensity change in the horizontal. (The same argument applies to the other edge orientations.) This terminology is used to keep consistent with the symbols used to output the edge relaxation results.

2. Non-maxima suppression as a preprocessing procedure.

As mentioned earlier, some authors have suggested that an edge relaxation process may be able to simultaneously suppress parallel indications of an edge while enhancing edge elements in such a way as to produce continuous extended edge segments. However, using empirical compatibility coefficients to guide the relaxation (as done by Danker and Rosenfeld, 1981) precludes this capability. If the edge strengths derived from the template matching edge operators (presumably calculated from an image which contains spatially large intensity changes) were to be converted directly to initial label probability estimates, the empirical compatibility coefficients subsequently derived from the initial probability vector field would reflect the presence of the ramp edges. For example, the compatibility coefficients for one of the vertical orientation labels with respect to itself will be almost guaranteed to have positive coefficients associated with the neighbouring locations directly to the right and left of the central (update) position. A relaxation process run with these coefficients would result in thick edges even in areas where thick edges originally were not evident. Clearly either non-maxima suppression or absorption must be implemented before the initial label probabilities are estimated. Keeping consistent with the study of Danker and Rosenfeld (1981), this study will implement non-maxima suppression.
Obviously, edges parallel to a given edge orientation must be suppressed in the direction perpendicular to the edge, but it became evident that edges should also be suppressed in directions along 45° diagonals to the edge direction. This is due to the nature of the 3x3 template matching operators. For example, vertical edges should be suppressed not only in the horizontal direction but also along the 45° and 135° diagonals. Some experiments were performed on idealized edges to determine if a preferred order existed in which to perform the suppression in the various directions. It was found that thinning the edges first along the directions diagonal to the edge direction, followed by thinning perpendicular to the edge, achieved the best results.

3. Initial probability estimates.

Equations 1 and 2 were used to estimate the initial label probabilities. Again, to avoid setting any probabilities to zero, the scheme found to work best in the line relaxation study was used. That is, a constant value, Δp⁰, was added to all the initial label probabilities, followed by renormalization at each point. A value of Δp⁰ = .01 was used throughout.

4. Relaxation model (update rule).

The update rule used was the non-linear probabilistic rule of equation 6, the same relaxation model used in section V. Section V demonstrated that the Mode 2 process (fast convergence mode) preserved the enhancement produced by the Mode 1 process with a substantial increase in the rate of convergence. It was assumed that the tasks of edge and line enhancement are similar enough that the above observation would hold true for the edge enhancement case also. Thus only the Mode 2 case was implemented. The empirical compatibility coefficients were calculated as statistical correlations according to equations 12 and 13. As in the line relaxation case, the entropy and change parameters of section IV will be used to monitor the relaxation process.

5. Definition of neighbourhood.
An eight pixel neighbourhood was used exclusively within the context of the relaxation process.

6. Implementation.

Three calibrated Band 4 (infrared) AVHRR images of the sea surface off Vancouver Island were used in the study. Two of the images are shown in figs. 17a and 18a. Figs. 17b and 17c show the maximum likelihood interpretations for the initial probability field and the twentieth iteration of the relaxation process for the image of fig. 17a. The fifteenth iteration of the relaxation process for the image of fig. 18a is displayed in fig. 18b. The symbolic representation is similar to that used for the line relaxation but with an additional twist. Since edges are defined modulo 2π, it is not sufficient to display a vertical edge as a vertical line in the symbolic representation. An additional pixel is placed perpendicular to the edge to indicate the direction of higher image intensity. Fig. 19 shows the entropy and change parameter curves for the three input images. Notice that for all the images the entropy curves monotonically decrease while the change parameter curves consistently rise (except for a very slight dip in one of the change parameter curves).

Fig. 17 (a) One of the calibrated band 4 sea surface images used in the edge relaxation. (b) Initial probability estimates for the edge relaxation using the template matching edge operators. (c) The result of the twentieth iteration of the edge relaxation using the template matching edge operators. (d) The result of the tenth iteration of the edge relaxation using the crack edge operators.

Fig. 18 (a) Another of the calibrated band 4 sea surface images used in the edge relaxation. (b) The result of the fifteenth iteration of the edge relaxation using the template matching edge operators. (c) The result of the tenth iteration of the edge relaxation using the crack edge operators.

Fig. 19 Evaluation parameter curves from the edge relaxation using the template matching edge operators. The solid, dotted, and dashed lines correspond to the three input images used in the relaxation.

Needless to say this is quite an unusual situation. Examination of the maximum likelihood interpretations of the initial probability vector field and of the relaxation results, along with an examination of the empirical compatibility coefficient arrays, provided some explanations for the nature of the evaluation parameter curves and also indicated why the relaxation results were in general poor. Unless otherwise stated, the following comments apply to the relaxation runs on all three input images. The images corresponding to the initial probability estimates show that the "no edge" label predominates in the initial probability estimates. This may be due in part to the fact that the initial edge label probabilities are scaled to the maximum edge response in the image and that only a few edge locations had high detector responses. In areas where the edge activity was high, however, the "no edge" probabilities, although the maximum over all the labels, were relatively low. (Note the abundance of the light grey to white squares in fig. 17b.) In effect this constitutes a highly ambiguous situation, therefore the initial entropy of the system would tend to be high. Note too that the initial entropy is further increased by using Δp⁰ = .01. Another piece of evidence supporting a high degree of initial ambiguity (entropy) is due to the nature of the 3x3 template matching operators. Assume, for example, that we have a perfect vertical edge in an eight pixel neighbourhood. The following intensity values are illustrative.

1 2 2
1 2 2
1 2 2

The template corresponding to one of the vertical orientations, template 6 in fig. 3, outputs the value 3. However, two other templates, templates 5 and 7 in fig.
3, also output a nonzero response, in this case a value of 2 for both templates. When equations 1 and 2 are used to estimate the initial label probabilities for this location, the probability values will become fairly evenly distributed over these three orientations. Clearly this type of situation (analogous ones apply to the other orientations) supports the fact that high initial entropy values are inevitable when using the 3x3 template matching operators. The monotonic decrease in entropy over 30 iterations may be due in large part to the fact that situations such as the above are being rectified (the vertical label in our example will hopefully gain positive support relative to the others) little by little at each iteration. This may also account for the change parameter rising consistently over 30 iterations. Another factor may account for both the behaviour of the evaluation parameter curves and the poorness of the relaxation enhancement. This problem is associated with the suppression algorithm used. The suppression technique leaves intact the maximum response(s) of a set of parallel edges. However, if two or more edges in the set have the same (maximum) response, all edges having this value are left intact. In the case where the edges left intact are adjacent, we are left with an initially thick edge. This situation must occur often enough to have an effect on the empirical compatibility coefficients because an inspection of the coefficient arrays revealed positive coefficients where negative ones are definitely called for. In the vertical edge case, for example, a properly functioning non-maxima suppression should result in compatibility coefficients for a vertical label with respect to itself of the form

-  +  -
-     -
-  +  -

where the position of the sign in the above array corresponds to the compatibility coefficient between the vertical label at that neighbouring position and a vertical label at the central position.
The negative values imply that the suppression should remove all parallel indications of vertical edges along both diagonals and in the horizontal. (The positive values correspond to an obviously desirable situation, a vertical edge continuing another vertical edge.) However, for the three input images used, the compatibility coefficient arrays often showed positive (albeit relatively small) values where a negative value is desirable. These contradictions were rarely found at the locations perpendicular to the edge direction but rather along the directions diagonal to the edge direction (i.e. the corner values in the above example). This did not occur only for the vertical / vertical case but for analogous situations involving the other labels. This problem with the empirical compatibility coefficients results in an interleaving effect which can easily be seen in figs. 17c and 18b, usually associated with the vertical and horizontal edges. Since these parallel edges have some initial probability associated with them, when they are slowly enhanced by the relaxation process a general decrease in entropy will result, accompanied by an addition to the change parameter value. Thus this phenomenon supports the behaviour of the evaluation parameter curves also. Another problem associated with the vagaries of the template matching edge operator is evident in figs. 17c and 18b. Suppose we have the intensity values below in the input image.

1 1 2 2
1 1 2 2
0 1 2 2

When the edge operator is centred on the second value of the second row, one of the vertical templates, template 6 in fig. 3, outputs a value of 4. Now when the edge operator centres itself on the third value of the same row, the vertical template outputs a value of 3 and two of the templates for diagonal edge orientations (templates 5 and 7 in fig. 3) output values of 2.
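The two worked examples above can be reproduced numerically. Since fig. 3 is not reproduced here, Prewitt-style 3-level compass masks are assumed for "template 6" and the two adjacent diagonal templates; these masks give exactly the response values quoted in the text:

```python
import numpy as np

# Assumed Prewitt-style 3-level compass masks (fig. 3 is not shown here)
VERT  = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])   # "template 6"
DIAG1 = np.array([[ 0, 1, 1], [-1, 0, 1], [-1, -1, 0]])  # "template 5"
DIAG2 = np.array([[-1, -1, 0], [-1, 0, 1], [ 0, 1, 1]])  # "template 7"

def response(window, template):
    # template matching response: elementwise product summed over the window
    return int((np.asarray(window) * template).sum())

# the perfect vertical edge of the first example
perfect = [[1, 2, 2], [1, 2, 2], [1, 2, 2]]

# the 3x4 image of the second example, windowed at columns 1 and 2
img = np.array([[1, 1, 2, 2], [1, 1, 2, 2], [0, 1, 2, 2]])
win_col1, win_col2 = img[:, 0:3], img[:, 1:4]

print(response(perfect, VERT),   # 3: vertical template on the perfect edge
      response(perfect, DIAG1),  # 2: diagonal template on the perfect edge
      response(win_col1, VERT),  # 4: vertical template at the second column
      response(win_col2, VERT))  # 3: vertical template at the third column
```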
The non-maxima suppressor will zero out the vertical edge value of 3 as being a parallel vertical edge with a lower strength than that of its neighbour. Suppose further that the situation is such that one or both of the diagonal edge values survive the non-maxima suppression process. It is very likely one or both of these diagonal edges will receive positive support during the relaxation process due to the compatibility of these diagonal edge orientations with nearby vertical (high intensity to the right) edges. In figs. 17c and 18b there are many instances where a horizontal or vertical extended edge segment can be seen to have a spur growing at 45° angles to it. By the same token there are cases evident where diagonal edges can be seen to have horizontal or vertical spurs growing from them. Again, since these spur orientations had some initial probability associated with them, a general drop in entropy and rise in the change parameter will be associated with their enhancement by the relaxation process. Once the interleaving effects and spur edges gain strength, a cascade of other problems becomes evident in the enhanced edge images. All types of thickening start to occur and the relaxation process goes deeply into the degradation phase.

7. Overview of edge relaxation results using the template edge operators.

This relaxation procedure surely does not qualify as a viable edge enhancement technique based on the goals of line and edge enhancement outlined in section III. Thick edges are a problem and orientation anomalies (i.e. spurs) are clearly evident. The only positive aspect of the results is that edge probabilities were strengthened along some fairly long extended segments. However, these segments often have spurs associated with them. Reflecting on the few problems discussed above, the degradation phase is actually embedded in the initial probability vector field via the poor performance of the template matching edge operators and the suppression algorithm.
The above mentioned problems with the edge operator and the suppression algorithm are probably just a few among many. A more complex non-maxima suppression technique is called for, such as the one hinted at by Rosenfeld and Thurston (1971). The suppressor must not only remove parallel edges of the same orientation type but must also look at edges in the neighbourhood whose orientations differ by 45°.

C. Edge relaxation using the crack edge representation.

1. A review of relaxation models using the crack edge representation.

This section was delayed until now because the relaxation methods to be reviewed here are specific to the edge domain and furthermore are quite different from the relaxation models discussed in section IV. The studies of Riseman and Arbib (1977) and Hanson and Riseman (1978) emphasized the need for an edge operator which would produce not only an accurate estimate of the magnitude and orientation of a local edge but would unambiguously define its location. They outlined the following desired characteristics of an edge operator.
1) An edge should not pass through (be defined at) a pixel.
2) The location and orientation of edges relative to the pixels should be unambiguous.
3) A single edge should not give rise to multiple indications of its presence, a problem with 3x3 edge masks.
4) The edge representation should lend itself simply to processes such as tracking algorithms.
The crack edge operator was found to satisfy these requirements. We will look at the relaxation processes of Riseman and Arbib (1977), Hanson and Riseman (1978), Prager et al (1977), Prager (1980), and Hanson et al (1980). All of these studies had in common the following characteristics.
1) The 1x2 crack edge operator of fig. 4 is the operator supplying the initial information to the relaxation process.
2) The crack edge operator leads to a simple label set - "edge" and "no-edge".
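The 1x2 crack edge operator lends itself to a very short sketch: signed differences between adjacent pixels, so that each edge lies between two pixels rather than on one (a sketch consistent with the characteristics listed above; fig. 4 is not reproduced here):

```python
import numpy as np

def crack_edges(img):
    """1x2 crack edge operator: signed differences between horizontally
    and vertically adjacent pixels.  The sign indicates the direction of
    higher intensity and is preserved for the relaxation."""
    img = np.asarray(img, dtype=float)
    horiz = img[:, 1:] - img[:, :-1]   # vertical cracks, between column pairs
    vert  = img[1:, :] - img[:-1, :]   # horizontal cracks, between row pairs
    return horiz, vert
```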
Earlier it was pointed out that when dealing with line/edge relaxation processes the local neighbourhood of a point should be as small as possible. There exists a slight variation among the studies in terms of defining this neighbourhood for a crack edge. In the studies of Prager et al (1977) and Prager (1980), boundary continuity was the major goal of the relaxation process, while in the other studies non-maxima suppression was an additional goal. For the crack edge representation, the smallest neighbourhood of an edge which contains sufficient information regarding possible edge continuations is shown in fig. 20a. In order for the process to act as a non-maxima suppressor, the two edge locations parallel to the edge being updated are included in the neighbourhood (fig. 20b). Among the various studies a major difference was the form of the update rule. Riseman and Arbib (1977) devised a relaxation scheme which they hoped would act both as a non-maxima suppressor and would bind local edge elements. Using the non-linear probabilistic model, they associated compatibility coefficients with each of the individual neighbouring edges in fig. 20b. The results were disappointing. Hanson and Riseman (1978) described the flaws in the above method and showed that a weighted sum of the neighbourhood edge (and "no-edge") probabilities could never achieve suitable results. They contended that the way to approach the problem was to not weight the individual edges but to weight the different possible patterns of edges in the neighbourhood. A slightly modified form of the non-linear probabilistic relaxation rule was used, but the substantial difference lay in the computation of the neighbourhood "correction factor" (designated as Δp_i):

Δp_i = Σ_{k=1}^{n} w_k P_k    (14)

where n is the number of patterns used, the w_k are weighting factors comparable to the r_ij(λ, λ') of equation 5, and the P_k are the probabilities for the existence of the various patterns. For the edge continuity task, the patterns are those shown in fig. 21. The nomenclature for the symbols used is displayed in the same figure.

Fig. 20 (a) Crack edge neighbourhood for the task of enhancing edge continuity. (b) Crack edge neighbourhood for the tasks of enhancing edge continuity and suppressing parallel edges. (From Hanson and Riseman, 1978.)

Fig. 21 Possible edge patterns for the crack edge neighbourhood of fig. 20a. Note that the "edges" in the 1-1 case, for example, could be in the vertical edge locations. (From Prager et al, 1977.)

The following should make more clear the computation of the pattern probabilities.
1) Each edge, vertical or horizontal, has two endpoints, each with three edges incident upon them. (The edge being updated is excluded.)
2) A probability is calculated for each of four "vertex types" according to the relative strengths of the edges incident upon each endpoint. The vertex type simply denotes the number of relatively strong edges incident at the endpoint. Also, no distinction is made based on relative orientation. (i.e. A vertex type of one could represent an edge in the horizontal or vertical.)
3) The vertex types at each endpoint determine the pattern. For example, a vertex type of one at both endpoints gives the pattern 1-1.
The computation of vertex type probabilities is based on a probabilistic scheme to be addressed more fully later. The pattern probability is given by the product of the probability of the vertex types at each endpoint. Now the compatibility coefficients, w_k, must be specified. At this point the concept of boundary continuity comes into play.
For a particular pattern, it must be judged whether the pattern supports the existence of the central edge, whether the central edge should not be there, or whether no evidence for reward or punishment is evident. A summation of the choice of compatibility coefficients will not be given just yet. In order to implement non-maxima suppression, Hanson and Riseman (1978) chose the following scheme. Choose the maximum probability of the two parallel edges having the same gradient sign as the central edge. Weight it negatively in order to activate suppression in equation 14. Prager et al (1977) and Prager (1980) used a simpler variant of the edge pattern scheme just discussed. To avoid the use of numerous coefficients they extracted only the most probable pattern from the neighbourhood of edges. The most probable pattern is determined by the most probable vertex type at each endpoint of the central edge. Finding the most probable vertex type is done in the following manner.
1) First the "edge" probabilities of the three incident edges are ordered so that a > b > c. (i.e. The edge with the highest probability corresponds to a, etc.)
2) Since a vertex type of 0 is allowable (i.e. no relatively strong edges incident at the vertex), a type of threshold needs to be defined which will guarantee that vertex type 0 will be the most probable if all incident edge probabilities are very small. This "threshold" is denoted by q.
3) Assuming independence of the edges incident at the vertex (admittedly a bad assumption), the following set of equations was defined.

m = max(a, q)    (15)
Pr(vertex type 0) = (m - a)(m - b)(m - c)    (16)
Pr(vertex type 1) = a · (m - b)(m - c)    (17)
Pr(vertex type 2) = a · b · (m - c)    (18)
Pr(vertex type 3) = a · b · c    (19)

4) The vertex type i is selected to represent the vertex such that

Pr(vertex type i) = max_j [Pr(vertex type j)].    (20)

Performing the above computations for both endpoints of the edge being updated determines the most probable pattern.
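Equations 15 through 20 translate directly into code; a sketch, using q = .1 as in Prager (1980):

```python
def most_probable_vertex_type(a, b, c, q=0.1):
    """Vertex-type selection of equations 15-20: a, b, c are the "edge"
    probabilities of the three incident edges (ordered internally so
    a >= b >= c), and q keeps vertex type 0 most probable when all three
    incident edges are weak."""
    a, b, c = sorted((a, b, c), reverse=True)
    m = max(a, q)                               # eq. 15
    pr = [(m - a) * (m - b) * (m - c),          # vertex type 0, eq. 16
          a * (m - b) * (m - c),                # vertex type 1, eq. 17
          a * b * (m - c),                      # vertex type 2, eq. 18
          a * b * c]                            # vertex type 3, eq. 19
    return max(range(4), key=lambda i: pr[i])   # eq. 20
```

Note how q works: with one strong edge (.9, .05, .05) the result is type 1, with three weak edges (.01, .01, .01) the q term forces type 0, and with three strong edges (.9, .9, .9) the result is type 3.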
It should be stressed that the parameter q is not a threshold in the usual sense because in cases where all the edge values are much greater than q, the most probable vertex type is not necessarily vertex type three. The parameter q only behaves as a threshold in the usual sense when all the edges are weak. Although this technique requires no heuristic estimates for a set of coefficients such as the w_k's of equation 14, one must still judge which patterns should enhance, weaken, or leave alone the probability of the central edge. Once this is done the update procedure is straightforward in that a constant value (Δp_i) is added to (subtracted from) the central edge probability based on whether the neighbourhood pattern supports (denies) the central pixel's existence.

increment case : p^(t+1) = min(1, p^t + Δp_i)    (21)
decrement case : p^(t+1) = max(0, p^t - Δp_i)    (22)
uncertain case : p^(t+1) = p^t    (23)

In order to put the crack edge relaxation models discussed in the previous few sections onto a firmer theoretical foundation, Hanson et al (1980) developed various update rules from a Bayesian viewpoint. They investigated a model using the neighbourhood of fig. 20a (i.e. no consideration given to non-maxima suppression), a model incorporating non-maxima suppression, and a model utilizing the information from the central pixel of the neighbourhood. The various models required heuristic estimates for conditional probabilities of the form Pr(central edge exists | neighbourhood pattern). A thorough discussion of the various models is beyond the scope of this paper, but one important concept is of interest.
The final formulation of their model incorporating non-maxima suppression was of the form

Pr(central edge exists) = [C · Pr(S_on) + Pr(S_off)] · Σ_{k=1}^{n} Pr(central edge exists | pattern k) Pr(pattern k)    (24)

where C is a constant in the range [0,1] inversely proportional to the amount of non-maxima suppression desired, Pr(S_on) is the probability of parallel edges of the same sign in the neighbourhood, and Pr(S_off) is the probability that no parallel edges of the same sign exist in the neighbourhood. This equation effectively decomposes the tasks of enhancing boundary continuity and non-maxima suppression into product terms, where the summation represents the former task and the bracketed term the latter. Note that if there is strong evidence of parallel edges in the neighbourhood, the amount of suppression is controlled by the parameter C.

2. Defining the relaxation model.

a. Edge operator.

The crack edge operator of fig. 4 is used to derive the initial edge strengths. The sign of the edge (indicating the direction of higher intensity) is preserved for use by the relaxation model.

b. Non-maxima absorption.

Although the relaxation model to be used will attempt to simultaneously perform non-maxima suppression and to enhance edge segment continuity, Hanson and Riseman (1978) and Hanson et al (1980) have suggested that the suppression can be aided by a preprocessing step, an absorption scheme which distributes the total edge strength in a set of parallel (similarly signed) edges over the edges comprising the set. The scheme recognizes the potential for absorption techniques to misalign edges, so instead of summing all the edge strength values and assigning the total to one or a few edge positions within the set of parallel edge locations, their technique distributes the total edge strength over a number of edge locations.
The distribution is based on three factors - the ratio of the original edge strength of each edge in the set to the total of all the edge values, the sum total of all the edge values in the set, and a location factor. Multiplying these three factors determines the new (distributed) edge strength values. The location factor is simply a function of the distance of the original edge locations from the centre of gravity of the set of parallel edges. Fig. 22 shows the function (solid line) used in Hanson et al (1980). It can be seen that any edge four or more crack edge positions away from the centre of gravity is essentially zeroed out.

Fig. 22 Location weighting functions plotted against distance from the centre of gravity.

Fig. 23 Possible edge patterns for the general class 1-1.

Preliminary results of some relaxation runs using this distance function indicated that this function may be too "generous" in some cases. That is, the distance function should drop off more rapidly from the centre of gravity. Therefore another function, shown in fig. 22 (dotted line), was designed. It corresponds to the equation

location weighting function = 1 / (1 + 2d)

where d is the distance of the edge from the centre of gravity. Note that this function also does not set an arbitrary distance of four as a cutoff point in the distribution scheme.

c. Initial label probability estimates.

Equations 1 and 2 are once again used to estimate the initial label probability estimates. Note that the absolute values of the crack edge strengths are used.

d. Relaxation model (update rule).

A combination of the Prager et al (1977) and Prager (1980) relaxation model and that of Hanson et al (1980) will be implemented as the relaxation model in this study. The former technique, corresponding to equations 15 through 23, will serve to enhance edge segment continuity and will replace the second term in equation 24. The first (i.e. suppression) term of equation 24 will remain intact.
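The absorption scheme of subsection b, with the steeper location weighting 1/(1 + 2d), can be sketched for a one-dimensional run of parallel edges. A strength-weighted centre of gravity is an illustrative assumption (the text does not specify how the centre of gravity is computed):

```python
import numpy as np

def absorb_parallel_edges(strengths):
    """Three-factor non-maxima absorption (a sketch): each edge gets
    (strength / total) * total * location_weight, i.e. its own strength
    scaled by 1/(1 + 2d), d being its distance from the centre of
    gravity of the run of parallel, similarly signed edges."""
    s = np.asarray(strengths, dtype=float)
    total = s.sum()
    if total == 0:
        return s
    pos = np.arange(len(s))
    cog = (pos * s).sum() / total                  # assumed strength-weighted
    weight = 1.0 / (1.0 + 2.0 * np.abs(pos - cog))
    return (s / total) * total * weight            # the three factors
```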
To enhance edge segment continuity one must determine a value for q for use in equation 15, a value for Δp_i in equations 21 through 23, and one must determine which neighbourhood patterns should support, weaken, or leave alone the value of the central edge. To activate the suppression subprocess, a value of C in equation 24 must be set.

e. Determination of the edge strength "threshold".

Preliminary results indicated that the value of q should be kept relatively low. Higher values resulted in extremely fragmented edge segments. The value suggested by Prager (1980), q = .1, worked well and was held constant for all the relaxation runs.

f. Neighbourhood edge pattern support.

Since sea surface temperature fields are essentially contoured, edge segments derived from them should not intersect. Thus the only neighbourhood edge pattern which should provide support within the relaxation process is the 1-1 pattern. Recall, though, that the 1-1 pattern can correspond to any one of the configurations shown in fig. 23. If the configuration corresponded to any of the first seven in fig. 23, the central edge probability was incremented. If the configuration corresponded to one of the last two in fig. 23, further analysis was performed. If the signs of the (parallel) edges were the same, the central edge probability was decremented because these parallel edges probably correspond to parallel edges within a ramp edge and definitely should not be connected. However, if the signs were different, this was regarded as an acceptable situation and the central edge probability was incremented. Three of the neighbourhood edge patterns were associated with weakening the central edge probability. The 0-0 case corresponds to an isolated edge, so the decision to weaken was straightforward. Incrementing the central edge probability in the 0-2 case would correspond to enhancing an intersection of contours. Thus this pattern was associated with a decrementation.
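Taken together with the weakening, leave-alone, and uncertain cases treated in the remainder of this subsection, the pattern decisions and the update of equations 21 through 23 can be sketched as follows (the pattern strings and sign flag are illustrative; "uncertain" covers both the leave-alone and the type-3 patterns since both leave the probability unchanged):

```python
def pattern_action(pattern, parallel_signs_equal=False):
    """Decision table used in this study (a sketch).  'pattern' is the
    vertex-type pair; parallel_signs_equal flags the two 1-1
    configurations whose edges are parallel to the central edge."""
    if pattern == "1-1":
        # same-sign parallel edges belong to a ramp edge: weaken instead
        return "decrement" if parallel_signs_equal else "increment"
    if pattern in ("0-0", "0-2", "2-2"):
        return "decrement"   # isolated edge, or contour intersection/linking
    return "uncertain"       # 0-1, 1-2, and all patterns with a type-3 vertex

def update(p, action, dp=0.15):
    """Equations 21-23: nudge the central edge probability by dp."""
    if action == "increment":
        return min(1.0, p + dp)
    if action == "decrement":
        return max(0.0, p - dp)
    return p
```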
The last case for weakening the central edge probability is the 2-2 pattern. Strengthening the central edge would correspond to a possible linking of two contours. Therefore the central edge probability was decremented for the 2-2 pattern. Some of the patterns dictated leaving the central edge value alone (for many of the same reasons cited by Prager et al, 1977 and Prager, 1980). In the 0-1 case, for example, incrementing the central edge value would extend segments "into the void". However, decrementing the central edge value would result, over many iterations, in progressively eliminating the end edge elements of extended segments. In the 1-2 case, incrementing would cause an intersection of contours, but decrementing may eliminate terminating edge elements of extended segments as described in the previous paragraph. This obviously pertains to the edge element corresponding to the vertex type 1 in this pattern. All the other patterns - 0-3, 1-3, 2-3, and 3-3 - were treated as uncertain cases. Future iterations at locations where these patterns occur may result in a changed neighbourhood pattern which may then give a clearer indication of what should be done. The effect of each of the neighbourhood edge patterns on the central edge probability was kept constant throughout all the runs of the relaxation.

g. Δp_i determination.

Prager (1980) stated that values for Δp_i in the range [.15, .20] worked best in his study. Since his relaxation model did not include a suppression aspect as does this one, it was thought that there may be some interplay between the value of this parameter and the activity of the suppression subprocess. (Without a suppression aspect within the relaxation, the value of Δp_i simply affects the rate of convergence.) To test whether some interplay between this parameter and the suppression subprocess indeed existed, two different values for this parameter were used, .15 and .30.

h. Determination of the suppression parameter, C.
The value of C in equation 24 determines the amount of suppression desired within the relaxation process. Preliminary results indicated that a value of .5 (suggested by Hanson et al., 1980) may not provide enough suppression in some cases. Therefore a value of C = 0. (maximum suppression) was also tested. Pr(S_on) in equation 24 was set to the maximum probability of the two edges parallel to the central edge having the same sign as the central edge (Hanson and Riseman, 1978). In all cases Pr(S_off) = 1 - Pr(S_on).

i. Evaluation parameters.

The change and entropy parameters defined in section IV were used to monitor the relaxation process. Rather than defining the entropy parameter using equation 9, the second form, equation 10, was used. This entropy formula lends itself well to a two-label set problem. Since the relaxation algorithm in this section is computationally much faster than the algorithms associated with the models implemented thus far (a good deal of the calculations are done in integer arithmetic), two other evaluation parameters were defined. After each iteration an edge tracking algorithm (explained fully in section VII) was run. Among other things it output the length of the longest segment formed and the number of segments formed. These served as the additional evaluation criteria. The second criterion is an especially good one because it reflects the congealing of shorter segments into longer ones, an obviously desirable aspect of the relaxation enhancement.

3. Implementation.

a. Input parameters and evaluation technique.

The same three sea surface infrared images used in section VI.B were used in this section. Each of the images was put through the relaxation process a total of eight times, each run corresponding to a different combination of input parameters. The set of parameters consisted of 1) an indicator as to which of the two location weighting functions in fig.
22 was to be used for the initial non-maxima absorption, 2) a value for Δp_i (.15 or .30), and 3) a value for the suppression parameter, C (.5 or 0.). All eight (2³) possible combinations of these parameters were used as input. The evaluation parameters from preliminary runs indicated that ten iterations were probably sufficient for enhancement purposes. Note the tailing off of all the evaluation parameter curves in figs. 24a through 24d. The results from the eight runs were visually compared in a systematic fashion. That is, each pair of runs which differed by only one parameter in their respective input parameter sets was compared. Conclusions as to the effect of each input parameter could thus be made.

Fig. 24 Evaluation parameter curves for the crack edge relaxation using the image of fig. 17a as input — (a) entropy and (b) change parameter. The solid, dotted, dashed, and chain-dotted curves correspond to different input parameter sets.

Fig. 24 (cont'd) Evaluation parameter curves for the crack edge relaxation using the image of fig. 17a as input — (c) longest segment formed and (d) number of segments formed. The solid, dotted, dashed, and chain-dotted curves correspond to different input parameter sets.

It should be mentioned that the visual comparisons were done on edited versions of the relaxation results. This was due to the fact that the low value of q resulted in a plethora of enhanced edges which made it difficult in general to compare results. First, edge segments whose lengths were five edge elements or less were removed. Since one of the goals of the relaxation is to produce extended edge segments, length thresholding seemed appropriate for comparative purposes. The edge segments were also thresholded using average (original) edge strength. If an edge element within a segment was originally part of a ramp edge, the total (original) edge strength of the ramp edge was used.
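The editing step just described can be sketched as follows. This is a hypothetical helper, not code from the thesis; each segment is represented by the list of original (absolute) edge strengths of its edge elements, with ramp-edge elements already carrying the total ramp strength. The strength cutoff of twelve is the value derived from the edge strength histograms discussed below.

```python
def edit_segments(segments, max_short_len=5, min_avg_strength=12):
    """Sketch of the pre-comparison editing: drop segments of five or
    fewer edge elements, then drop segments whose average original
    edge strength falls below the threshold."""
    kept = []
    for strengths in segments:
        if len(strengths) <= max_short_len:
            continue                            # length thresholding
        if sum(strengths) / len(strengths) < min_avg_strength:
            continue                            # strength thresholding
        kept.append(strengths)
    return kept
```

Only segments passing both tests survive, which keeps the comparison focused on extended, strong edge structures.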
This resulted in segments being preserved in the most interesting parts of the image — the areas where strong edges occurred or where ramp edges were prevalent. Histograms of (absolute) edge strength values indicated definite peaks at values of six and eight followed by a gap for the edge strength range [9,11]. (These observations are artifacts of the calibration process producing the input images.) Therefore edge segments with average edge strengths less than twelve were removed. Figs. 17d and 18c show the thresholded results from the tenth iteration of the relaxation process for the images of figs. 17a and 18a. (Both resulted from the same set of input parameters.) In the figures all edges surviving the thresholding were assigned the same intensity. The evaluation parameter curves (for the image of fig. 17a) shown in fig. 24 correspond to four sets of input parameters. The solid line corresponds to the set with Δp_i = .15, C = 0., and the location weighting function which absorbs more towards the centre of gravity. The other three curves each correspond to varying one of these parameters.

b. Δp_i comparisons.

The comparisons between relaxation results where the input parameters differed only by their Δp_i value revealed that the lower value, .15, was preferable. For the runs with moderate suppression (C = .5), the higher value of Δp_i, .30, resulted in many parallel edges being preserved in the areas where ramp edges were dominant. What may be happening is that the higher value of Δp_i is causing the relaxation process overall to converge too quickly; the suppression subprocess has not had enough time to work. The runs with a higher degree of suppression (C = 0.) revealed that the parallel edge problem mentioned above was not evident with the larger value of Δp_i. However, in the areas where the ramp edges were dominant, the higher value of Δp_i resulted in edge segment fragmentation.
This indicates that the faster convergence did not allow the two subprocesses to interact well. Overall, the larger value of Δp_i resulted in longer segments in areas where ramp edges did not occur. The dotted lines in figs. 24c and 24d correspond to the higher value of Δp_i. Note that at the higher iterations the number of segments is decreasing and the longest segment has been formed. This is probably due in part to the congealing of short segments into longer ones in the areas where ramp edges did not occur. However, the fragmentation of edge segments in ramp edge areas would tend to increase the number of segments. Thus the evaluation curve of fig. 24d is a superposition of the fragmentation and congealing effects. The "number of segments" evaluation criterion is therefore not very useful in terms of judging the overall results of the relaxation. Fig. 24b gives an idea of the faster rate of convergence for Δp_i = .30. The change parameter drops off rapidly from a very high level. The similarities between the shape of this curve and those in figs. 6 and 7 (Mode 3 line relaxation) are striking and represent the same effect, a rapid convergence to an unacceptable result.

c. Comparison of results, variable suppression parameter (C).

The comparisons between relaxation results where the input parameters differed only by the value of C revealed that the lower value, C = 0. (i.e. high degree of suppression), produced better overall results. With C = 0. the resulting segments were longer in general and displayed better continuity (less fragmentation). This applied to the results using both of the initial absorption (location weighting function) schemes. The dashed lines in fig. 24 correspond to C = .5. Fig. 24d reflects the observed increase in fragmentation with this degree of suppression. (Compare the solid and dashed lines.) That is, the curve consistently shows higher values for the number of segments formed.
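The two tracking-based evaluation parameters discussed above (longest segment formed and number of segments formed) are simple to compute once the tracker has produced its segment list; a minimal sketch with a hypothetical function name:

```python
def tracking_criteria(segments):
    """Return (length of longest segment, number of segments) for a
    list of tracked segments; both are zero for an empty list."""
    if not segments:
        return (0, 0)
    return (max(len(s) for s in segments), len(segments))
```

As noted above, the second value superposes two opposing effects (congealing lowers it, fragmentation raises it), which is why it is hard to interpret on its own.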
Since the suppression constant has no effect in areas where ramp edges are not found, this parameter accurately reflects the relative quality of the two suppression schemes in areas where ramp edges occur. The higher degree of suppression generally resulted in a better outlining of the significant thermal features within the input images. However, the relaxation results from one of the input images revealed a few instances where the C = .5 runs did a better job at outlining the significant thermal features. This seems to indicate that different amounts of suppression work better in varying contexts.

d. Comparison of results, variable initial absorption scheme.

The comparisons between relaxation results where the input parameters differed only by the location weighting function used in the absorption pre-processing step revealed that the performance of the relaxation process depended on image context. The results from the input image of fig. 17a indicated that the second absorption scheme (i.e. the one which concentrates the absorption more to the centre of gravity) worked better overall. However, the results from another input image showed that although the second absorption scheme worked well, the absorption scheme of Hanson et al. (1980) seemed to produce better results in certain instances. The evaluation parameter curves in fig. 24 (corresponding to the image of fig. 17a) reflect the better overall results for the second absorption scheme. The chain-dotted curves represent the relaxation implemented using the location weighting function of Hanson et al. (1980). Note that the entropy curve stays consistently higher.

e. Overview of relaxation results using the crack edge representation.

The results shown in figs. 17d and 18c correspond to the same input parameter set — Δp_i = .15, C = 0., and the second absorption scheme (high absorption to the centre of gravity).
Although the conclusions were somewhat tenuous regarding the last two parameters (the last in particular), this is the input parameter set which seemed to produce the best overall results. As an edge enhancement process, the relaxation scheme using the crack representation has its good and bad points. 1) It produces some fairly long extended segments, especially in areas where ramp edges do not occur, but as can be seen from the figures the outlining of the significant thermal features is quite fragmented. 2) The crack edge results do not seem to reflect the problems with spurs and other orientation anomalies associated with the relaxation model using the template matching edge operator. 3) With the low value of q = .1, many weak edge segments had to be removed before an enhancement of the more interesting thermal features became evident. 4) The non-maximum absorption / suppression combination scheme shows some potential in that "thick" edges were not evident after relaxation enhancement and in some cases the segments in ramp edge areas were well formed. However, the weakest aspects of the relaxation results were in the areas where ramp edges occur (excessive fragmentation, for example).

VII. Extraction and identification of line/edge segments in AVHRR imagery.

The approach taken for extracting and identifying line segments produced by the relaxation models of section V is similar to the procedure used for achieving the same ends in the edge segment case. First a tracking algorithm produces the segments and then various characteristics are derived for each of the segments. These characteristics are subsequently used to identify (i.e. label) each segment.

A. Extraction and identification of line segments in ice imagery.

1. Line tracking.

The thirteen label, empirical compatibility coefficient runs of the line relaxation process of section V provide the input to the line tracking algorithm.
The thirtieth iteration of the relaxation process is used exclusively for each of the images. The initial step of the line tracking process is to determine, at each location, the label with the highest probability. These labels are stored in an integer array. The line tracking algorithm works solely on this array. Tracking the line segments is straightforward. First the algorithm (systematically) searches for a location where a line label is the most probable. Once found, the orientation of this line label determines 1) the two locations in its eight pixel neighbourhood where the line can possibly be continued, and 2) the sets of line orientations (labels) which can "naturally" continue the line at each of the two locations, one set corresponding to each location. The term "natural" is used on purpose because the tracking algorithm is wholly based on the concept of "natural" continuations. In fact, the locations of the positive coefficients within the heuristic compatibility coefficient array determine both 1) and 2) above. Consider the following line orientation for example. X X X The heuristic compatibility coefficient array shows only two locations in the eight pixel neighbourhood where this orientation can possibly continue, the top right corner position and the bottom center position. For the top right corner, the heuristic coefficient array has the following two labels associated with positive coefficients: X X X X X X For the bottom center position, positive coefficients are associated with the following orientations. X X X X X X X X X These label sets are thus the candidates for linking at the two search positions. Note that when defining the heuristic coefficients, independence was assumed between all line orientations whose line detector templates resulted in a check in the horizontal direction and those line orientations whose checks were done in the vertical.
This biases the relaxation in the sense that configurations corresponding to rapid changes in direction are not enhanced. Therefore the tracking algorithm has this same inherent bias. In the above example the line could be continued at the top right corner position by the line at the following orientation. XX X However, the resulting configuration of this line orientation pair would look like the following. XX X X Thus the line tracking algorithm is biased against rapid changes in direction such as the above. Starting from the original location, the line is first tracked in one direction until the segment terminates (no linking candidates are found). The search procedure then goes back to the original starting position and performs the search in the other direction. Once this segment has been fully tracked, the algorithm systematically searches for another location where a line orientation label is most probable and the procedure is repeated.

2. Segment characteristic extraction.

Many characteristics can be derived from the segments, both spectral and geometric (cf. Prager, 1980). In this study four characteristics were extracted: 1) segment length, 2) infrared intensity of the line elements comprising the segment (obtained from the input image), 3) original line strength (i.e. line detector response) of the line elements comprising the segment, and 4) the absolute difference between the pixel intensity values (obtained again from the input image) to the right and left (relative to the direction of tracking) of each line segment. This is termed the "homogeneity parameter". A low value denotes a high degree of "homogeneity".

3. Identification scheme.

The four derived characteristics were used in a simple two-class identification (labeling) problem. For the three ice images used in section V, the classes correspond to "weak lead" and "strong lead".
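Characteristic 4, the "homogeneity parameter" listed above, can be sketched as follows. The thesis does not give an explicit formula, so the per-element absolute difference averaged over the segment is an assumption, and the function name is hypothetical.

```python
def homogeneity_parameter(left, right):
    """Sketch of the "homogeneity parameter": mean absolute difference
    between the pixel intensities to the left and right of each line
    element (relative to the tracking direction).  A low value denotes
    homogeneous surroundings, as expected for a crack in solid ice."""
    diffs = [abs(r - l) for l, r in zip(left, right)]
    return sum(diffs) / len(diffs)
```

The inputs are the two intensity profiles sampled alongside the tracked segment, one value per line element.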
That is, it was hoped that the derived characteristics could be used to differentiate between the very dark, distinct leads in the ice imagery and those linear features which were visibly more tenuous. The identification scheme for this two-class problem was based on the following assumptions. 1) Relatively short segments are low confidence segments. 2) "Strong leads" can be differentiated from "weak leads" based on the infrared intensity of the line elements; "strong leads" are generally darker. 3) The original line detector responses for "strong leads" should be higher than those for "weak leads". 4) Since leads are cracks in a once solid mass of ice, the infrared image intensity should be similar for the pixels to either side of the lead. Thus it was assumed that the "homogeneity parameter" would be small for "strong" leads and larger for the more tenuous linear features in the image (i.e. "weak leads"). Two ice images which contained clouds were used in another two-class labeling problem. The classes here correspond to "ice lead" and "cloud". This labeling problem was introduced first of all to see whether the line detector used in the relaxation would respond to linear features in cloud areas and, if so, whether these linear features could be discriminated from ice leads on the surface. The identification scheme for the second two-class problem was based on similar assumptions. 1) Again, short segments are regarded as low confidence segments, but in this two-class problem segment length takes on added significance. It was thought that if the line detector used in the relaxation responded to linear features within the cloud field, these responses would be sporadic and any segments in the cloud areas derived from the relaxation-enhanced images would be short. Therefore it was assumed that "lead" segments would in general be longer than any segments found in the cloudy areas.
2) Since clouds in the input images appear as high intensity features, the ice leads should generally have lower intensities. 3) It was thought that not only would the line detector respond sporadically in the cloud areas but that, in the cases where it did, the response would be weak. Thus leads should have a higher original line detector response than the linear features found in the clouds. 4) The "homogeneity parameter" should be a good discriminator because there is little reason to believe that the pixel intensities to either side of a linear feature in the clouds will be as similar as the pixel intensities to either side of a lead. The two identification problems are similar in that both the "strong lead" label in the first problem and the "lead" label in the second are distinguished from their respective alternative labels by longer segment lengths, lower intensity, stronger original line strength, and lower values for the "homogeneity parameter". This similarity not only made the algorithm writing simpler but also proved useful in the labelings of one of the cloudy images. Since such a small sample of images was used and since primarily the utility of the different characteristics was being tested, the identification scheme was kept simple. 1) A straightforward thresholding scheme was used to differentiate between the classes. For example, if a line segment's average infrared intensity was below a certain threshold, it was labelled "strong lead" in the first two-class labeling problem. 2) The labelings were done on a characteristic by characteristic basis. No elaborate clustering or multi-dimensional thresholding was performed. 3) The thresholds for the various characteristics were allowed to vary for each input image analyzed. 4) Determination of the thresholds was accomplished via a subjective analysis of the histograms of the derived characteristics. Again, no automated histogram analysis algorithms, etc. were used.

4. Implementation.
The same procedure was used for all five ice images examined. First a histogram of segment lengths was produced and a peak was looked for near its lower end. Except for one image, the segment length histograms for all five input images showed absolute maxima in the range [4,6]. (For the image of fig. 8a there were equal numbers of segments for segment lengths of two and five. To be consistent with the other images, the threshold in this case was chosen to be five.) The histogram maximum for each image was chosen to be the threshold for the segment length characteristic. Fig. 25a shows the histogram of segment lengths for the image in fig. 8a. After removal of all segments whose lengths were below or equal to the threshold, histograms for the other three characteristics were produced. These histograms did not represent the segment-averaged characteristics but were representative of the characteristics for each of the line elements comprising the segments. Figs. 25b through 25d show the line (infrared) intensity, the original line strength, and the homogeneity parameter histograms for the image in fig. 8a. It should be noted that the histogram for line strength revealed a great number of line elements with an initial line strength of zero. This is partly due to the line elements which resulted from gap filling within the relaxation, but a good number of these resulted from line elements growing from the ends of extended segments in the later iterations of the relaxation. For display purposes the zeroth position of the histogram was zeroed out. Since a two-label case was being considered, some hint of bimodality was looked for in the histograms for the three remaining characteristics. The histograms in figs. 25b through 25d are representative of the histograms produced for the other four input images.
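The segment-length threshold choice just described can be sketched as below (a hypothetical helper): take the absolute maximum of the segment-length histogram, breaking ties toward the larger length as was done for the image of fig. 8a.

```python
from collections import Counter

def length_threshold(segment_lengths):
    """Pick the segment-length threshold as the most frequent length
    in the histogram; ties are broken toward the larger length."""
    counts = Counter(segment_lengths)
    peak = max(counts.values())
    return max(length for length, c in counts.items() if c == peak)
```

Segments whose lengths fall at or below the returned value are then removed before the remaining characteristic histograms are built.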
The homogeneity parameter histograms consistently did not display any bimodality, even for the cloudy images where this parameter was expected to be useful. The line intensity and line strength histograms in general showed some bimodality.

Fig. 25 (a) Segment length histogram for the image of fig. 8a. (b) Line (infrared) intensity histogram.

The thresholds for the line intensity and line strength characteristics were chosen at the minimum value between the two "peaks" of the histogram. For example, the threshold for the line intensity histogram of fig. 25b was chosen to be 170. The threshold for the line strength characteristic of fig. 25c was chosen to be 48. Since the homogeneity parameter showed no bimodality, another method of defining a threshold had to be used. It was found that the histograms for this characteristic usually displayed an early maximum followed by a sharp dropoff. A kink or decline in the rate of dropoff was usually found in the range [3,6]. The location of this kink or decline in the dropoff rate was chosen to be the threshold. In the case of fig. 25d this threshold was chosen to be 4. The thresholding was performed on the segment-averaged values for each of the characteristics. It should be pointed out that the characteristics from line elements with an original line strength of zero were not included in the averages for any of the characteristics. This was done to avoid averaging in the characteristics for those line elements which grew from the ends of extended segments in the later iterations of the relaxation. The results of the identification scheme for the image of fig. 8a are shown in figs. 26a through 26c. Fig. 26a shows the result for the labeling based on segment-averaged line intensity. A comparison with fig.
8a reveals that the very distinct leads were almost all correctly identified as "strong leads" (blue lines), except for the upper parts of the two main leads to the left of the image. The result for the labeling based on segment-averaged original line strength (fig. 26b) shows that all the strong leads were correctly identified. Note that for both the line intensity and line strength characteristics, none of the more tenuous leads were mislabeled. Fig. 26c shows the result using the homogeneity parameter. The results are disappointing, but recall that choosing the threshold for this parameter was difficult.

Fig. 26 Results of line identification process based on (a) line intensity and (b) original line detector response for the image of fig. 8a.

Fig. 26 (cont'd) (c) Results of line identification process based on "homogeneity parameter".

Fig. 27a shows another of the ice images used in section V while fig. 27b shows the result of the thirtieth iteration of the relaxation process.

Fig. 27 (a) Band 5 ice image of 29 January, 1985 and (b) result of the thirtieth iteration of the relaxation process.

Fig. 27 (cont'd) (c) Line labeling based on line intensity and (d) original line detector response for the image of fig. 27a.

Fig. 27 (cont'd) (e) Line labeling based on "homogeneity parameter" for the image of fig. 27a.

Note in fig. 27b that many of the line segments in the left part of the image (where the ice leads are not as distinct) are very short and subsequently do not survive the length thresholding. Fig. 27c corresponds to the labeling based on line intensity. Many of the fairly distinct leads have been mislabeled. On the other hand, the labeling based on line strength (fig. 27d) resulted in an acceptable interpretation. All of the leads in the far left portion of the image were labeled "weak leads". The few scattered "weak leads" in the right portion of the image seem appropriately labeled.
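The characteristic-by-characteristic thresholding that produces labelings of this kind can be sketched as follows, using the thresholds derived above for the image of fig. 8a (170 for line intensity, 48 for line strength). The function name and the exact comparison directions are illustrative only; the thesis varies the thresholds per image.

```python
def label_by_characteristic(avg_intensity, avg_strength,
                            intensity_thresh=170, strength_thresh=48):
    """Independent two-class labelings of one line segment: darker
    segments are "strong lead" by intensity, and segments with a
    stronger original line detector response are "strong lead" by
    strength.  The two labelings are not combined (no clustering or
    multi-dimensional thresholding was performed)."""
    by_intensity = "strong lead" if avg_intensity < intensity_thresh else "weak lead"
    by_strength = "strong lead" if avg_strength > strength_thresh else "weak lead"
    return by_intensity, by_strength
```

Each characteristic yields its own labeling of the image, which is why figs. 26a and 26b can disagree for individual segments.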
The homogeneity parameter again produced uninteresting and undesirable results (fig. 27e). One of the cloudy images is shown in fig. 28a. Note that there are not many leads in the image but that there is a cloud pattern sweeping across the image. Fig. 28b shows the relaxation result (thirtieth iteration). The interesting aspect of the relaxation result is that the line detector appears to have responded sporadically, and for the most part not at all, in the cloudy areas of the image. The labeling results based on the various characteristics (figs. 28c through 28e) indicate that where the line detector did respond in the cloud regions, the resulting segments were very short and did not survive the length thresholding. Because the line detector did not respond in the cloud areas, the labeling results actually revert to a "strong lead" / "weak lead" labeling problem. (Recall that the attributes of a "weak lead" are analogous to those attributed to clouds.) Once again the labeling based on line strength (fig. 28d) seems to be the best.

Fig. 28 (a) Band 5 ice image of 25 January, 1984 and (b) result of the thirtieth iteration of the relaxation process.

Fig. 28 (cont'd) (c) Line labeling based on line intensity and (d) original line detector response for the image of fig. 28a.

Fig. 29a shows the other cloudy image analyzed and fig. 29b shows the result of the thirtieth iteration of the relaxation. Note that in this case it appears that the striated nature of the clouds has resulted in the line detector responding a great deal in the cloud areas. The labeled line segments are displayed in figs. 29c through 29e. They indicate that quite a few of the segments in the cloud region survived the length thresholding.

Fig. 29 (a) Band 5 ice image of 23 January, 1985 and (b) result of the thirtieth iteration of the relaxation process.

Fig. 29 (cont'd) (c) Line labeling based on line intensity and (d) original line detector response for the image of fig. 29a.

Fig. 29 (cont'd) (e) Line labeling based on "homogeneity parameter" for the image of fig. 29a.

Unlike the results for the other input images, the labeling based on line intensity seems to be preferable to that based on line strength. For instance, many of the leads in the top right corner have been mislabeled in the labeling based on line strength (fig. 29d) whereas the labeling results based on line intensity (fig. 29c) correctly label these leads. In general, too many leads are mislabeled based on the line strength characteristic. The result based on the line intensity characteristic underestimates the number of line segments which should be labeled "cloud" but does not mislabel any of the ice leads. The homogeneity parameter (fig. 29e) once again proved to be of no use in terms of labeling the segments. The results from this last image are significant, for they indicate that even though the line detector response was relatively high in the cloud areas, the line segments in the cloudy areas could be identified based on infrared intensity.

B. Extraction and identification of sea surface thermal features.

1. Edge tracking.

Since the edge relaxation using the crack edge representation produced somewhat better results than the relaxation utilizing the template matching operators, and since the crack edge representation leads to a simple tracking algorithm, only the crack edge results will be tracked and identified. A crack edge "image" is composed of vertical and horizontal edges meeting at vertices (two of each at each vertex). Because of this, edge segment tracking is straightforward and efficient (Prager, 1980). (Recall that it was performed after every iteration of the relaxation.) After the edges have been labeled "edge" and "no edge" based on a probability threshold of .5, all the tracking algorithm has to do is start at an "edge" location and analyze the vertex at one of its endpoints.
If the vertex has only two "edges" incident upon it, this represents an unambiguous continuation and the tracking algorithm continues from the next (vertical or horizontal) edge location. If there are three or four edges incident at a vertex, this is an ambiguous situation and this vertex becomes a termination point for all segments that ultimately impinge upon it.

2. Segment characteristic extraction.

As the tracking algorithm proceeded, it extracted the following characteristics for each edge element. 1) The infrared image intensity (obtained from the original image) was extracted for the pixels to the right and left of the crack edge element in the direction of tracking. Note that as the segment changes direction it is important to define right and left relative to the direction of tracking. This was important for further processing. 2) The original edge strength for each element was extracted. If the edge element was originally part of a ramp edge, the total edge strength for the ramp edge was used as the original edge strength (as was done in section VI when thresholding out "weak" edges). Note also that the sign of the edge was kept intact. In the line tracking and identification sections, no higher order statistics were derived from the line element characteristics. In the edge case two higher order statistics and another parameter were derived from the first order characteristics. 1) The standard deviation of the image intensity values both to the left and to the right of each segment was derived. These standard deviations act as an indicator of the homogeneity of the areas to the left and right of the edge segment (cf. Prager, 1980). 2) The sign of each edge element within a segment was used in the following way. A parameter was derived which would indicate whether an edge segment was consistent in terms of keeping higher intensity to one side of itself.
The formula used was: Consistency parameter = max(N_pos, N_neg) / (N_pos + N_neg), where N_pos is the number of positively signed edges in the segment and N_neg is the number of negatively signed edges in the segment. Note that a perfectly consistent edge segment will have a consistency parameter of 1. Further note that the minimum possible value is .5. When creating the histograms, the values in the range [0.5,1.0] were assigned to histogram bins which were 0.05 units wide. It should be noted that, as before, the above segment-derived characteristics did not include the characteristics from edge elements whose original edge strengths were zero.

3. Identification scheme.

The characteristics explained above were used in a simple two-class identification problem. Six images were analyzed, three of which were used in section VI (cloud free) plus three images which contained clouds. The labels, then, corresponded to "surface feature" and "cloud". The identification scheme was based on the following assumptions. 1) Sea surface thermal features should be generally warmer than thermal features in cloud areas. 2) The edge strengths associated with the perimeter of cloudy areas should be stronger in general than the edge strengths associated with surface thermal features. 3) It was believed that cloudy areas would exhibit a higher degree of texture, resulting in relatively haphazard edge segments. Therefore it was thought that edge segments enhanced within the cloud areas would display larger standard deviations for the pixel values to the left and right of the edge segments and would also have a low value for the "consistency" parameter. As done in the line segment case, a simple thresholding on a characteristic by characteristic basis was performed. These thresholds again were subjectively derived from the histograms of the various characteristics. However, the thresholds were kept constant over the images analyzed.
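The consistency parameter defined above reduces to a few lines of code; a minimal sketch, representing each edge element's sign as +1 or -1:

```python
def consistency_parameter(signs):
    """max(N_pos, N_neg) / (N_pos + N_neg) for the signed edges of a
    segment: 1.0 for a perfectly consistent segment (higher intensity
    always on the same side), .5 at minimum."""
    n_pos = sum(1 for s in signs if s > 0)
    n_neg = sum(1 for s in signs if s < 0)
    return max(n_pos, n_neg) / (n_pos + n_neg)
```

Values computed this way fall in [0.5, 1.0] and can be binned directly into the 0.05-wide histogram bins described above.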
It was believed that since the images containing clouds also contained sea surface thermal features, only the histograms from the cloudy images needed to be analyzed. Furthermore, composite histograms were produced based on the characteristics from all three cloudy images.

4. Implementation.

The histograms for the various characteristics were produced after short and weak segments had been removed from consideration. Fig. 30a shows the histogram of segment lengths for the composite of cloudy images. Since no peaks could be observed at the lower end of any individual image's histogram, or of the composite, an arbitrary threshold of length two was set. "Weak" edge segments were thresholded out using the same criteria as in section VI. Fig. 30b shows the histogram of the pixel intensities to the left of each edge element comprising the segments. (The histogram for the intensities to the right was similar.) The threshold in this case was set at 227. Thus if the segment-averaged intensity to the right (or to the left) of a segment was greater than the threshold, the segment was labeled "cloud". Once again the averaging did not include characteristics from edge elements whose original edge strength was zero. Note the "spiky" nature of the intensity histogram. This is an artifact of the calibration process used to produce the input images, and it also results in a "spiky" histogram for the edge strengths. The composite histogram of original edge strengths is shown in fig. 30c. This histogram certainly does not suggest strong bimodality; nevertheless, a threshold was chosen at edge strength 48. Fig. 30d shows the standard deviation histogram for the pixel intensities to the left of the segments. It looks much like the histogram for the "homogeneity" parameter used in the line identification problem. The histogram of standard deviations for the pixel intensities to the right of the segments was similar.
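The characteristic-by-characteristic thresholding just described can be sketched as below. The numeric thresholds (minimum segment length two, intensity 227, edge strength 48) are taken from the text; the function names and the exact comparison conventions (strict inequalities, the handling of short segments) are illustrative assumptions, not the thesis's code.

```python
# Illustrative sketch of the two-class labeling by simple thresholding.
# Threshold values come from the histograms discussed in the text; the
# names and comparison details are assumptions for illustration only.

MIN_SEGMENT_LENGTH = 2      # segments at or below this length are removed
INTENSITY_THRESHOLD = 227   # segment-averaged side intensity -> "cloud"
EDGE_STRENGTH_THRESHOLD = 48

def label_by_intensity(length, mean_left, mean_right):
    """Label a segment from its length and side-averaged intensities."""
    if length <= MIN_SEGMENT_LENGTH:
        return None  # short segment: removed from consideration
    if mean_left > INTENSITY_THRESHOLD or mean_right > INTENSITY_THRESHOLD:
        return "cloud"
    return "surface feature"

def label_by_edge_strength(length, mean_strength):
    """Alternative labeling from segment-averaged original edge strength."""
    if length <= MIN_SEGMENT_LENGTH:
        return None
    if abs(mean_strength) > EDGE_STRENGTH_THRESHOLD:
        return "cloud"
    return "surface feature"
```

Each characteristic is thresholded independently, mirroring the separate intensity-based and edge-strength-based labelings compared in the results that follow.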
The decision was made not to use this characteristic as an identification parameter because no clear way of setting a threshold was observable. The same argument applies to the "consistency" parameter (fig. 30e). Histograms of this characteristic for each individual image showed no significant differences between the histograms corresponding to cloudless images and those from cloudy images.

Fig. 30 (a) Histogram of segment lengths for the composite of cloudy images and (b) histogram of input image pixel intensities to the left of each edge segment.

[Figure pages for fig. 30 (c) and (d) (histogram plots); only axis labels such as "NUMBER OF OCCURENCES" are recoverable from the text extraction.]

Fig. 30 (cont'd) (e) Histogram of the "consistency parameter" values for the composite of cloudy images.

Figs. 31a and 31b are representative of the labeling results obtained from the cloud-free images. (These results correspond to the image of fig. 17a.) The labeling based on image intensity shows all edge segments properly labeled as "surface feature" (blue lines). The labeling based on edge strength showed mislabelings in the areas of significant thermal gradients. These results indicate that the two characteristics need to be combined in a more elaborate fashion. However, the determination of the edge strength threshold was difficult, and this may have contributed to the mislabelings. Figs.
32a through 32d show one of the cloudy images, the result of the tenth iteration of the crack edge relaxation, and the labelings based on image intensity and edge strength. Although neither labeling is good, the labeling based on edge strength seems clearly inferior to that based on image intensity. It appears, however, that the threshold for the intensity characteristic may have been set too high.

An alternate view (the term enhancement is purposely not used, to avoid confusion) of the cloudy image of fig. 32a is shown in fig. 32e. Comparison of fig. 32e with the labeling based on edge strength points out a flaw in the initial assumptions. Note in fig. 32d that no edge segments appear in the interior of the cloud regions. Fig. 32e does not show a great deal of texture within the cloud interior but instead reveals fairly consistent weak intensity changes. The edge segments enhanced within the interior were thresholded out due to low average edge strength. Thus the assumption about cloud texture is not wholly valid.

C. Overview of the extraction and labeling results.

The extraction of line/edge segments from the results of the relaxation processes of sections V and VI was simple and straightforward. The only notable weakness in the line-tracking algorithm was that it did not accommodate rapid changes in direction. Where leads intersect this is advantageous, but where an individual lead changes direction quickly the tracking algorithm fragments the segment. (Note that this also applies to the enhancement provided by the relaxation process utilizing the heuristic compatibility coefficients.)

Fig. 31 (a) Labeling results based on intensity for the image of fig. 17a and (b) the labeling results based on original edge strength.

Fig. 32 (a) One of the cloudy images used in the labeling scheme and (b) the result from the tenth iteration of the crack edge relaxation.

Fig. 32 (cont'd) (c) Labeling result based on intensity for the image of fig. 32a and (d) the labeling result based on original edge strength.

Fig. 32 (cont'd) (e) An alternate view of the image of fig. 32a. The intensity structure within the clouds has been made more evident.

The line/edge segment labeling schemes provided some enlightening results. Note that these studies were not meant to be taken as dissertations on multi-class labeling techniques. Rather, the studies were designed with two aims in mind. The first was to show that one is not necessarily restricted to labeling features in imagery on a pixel-by-pixel basis. The second was to test some characteristics derived for the segmented features in terms of their utility in labeling schemes. It was clearly shown that some derived parameters, such as the homogeneity parameter in the line labeling case and the second order statistics and "consistency" parameter in the edge labeling case, were of little use when used independently. Perhaps they would prove useful in a clustering or multi-dimensional thresholding scheme. The simple parameters, the image intensity of a feature and the local detector response for the feature, proved to be good characteristics on their own and would certainly increase in value in tandem or combined with other characteristics. The results from the line labeling schemes showed that a histogram analysis approach for determining thresholds was useful for two of the characteristics and less useful for the homogeneity parameter. The histograms for the edge characteristics, on the other hand, proved more difficult to work with, most likely due to the dominance of the clouds within the images used.

VIII. Conclusions and recommendations.

The problem of enhancing line-like features in AVHRR imagery was found to be a tractable one. The major conclusions follow.

1) Line thickening was an inevitable problem in the 9 label case.
This problem was representative of the degradation phase which most relaxation labeling techniques enter in later iterations. The problem manifested itself before many desirable aspects of the enhancement were achieved.

2) The 13 label set was superior to the 9 label set in all regards. No thickening was evident over thirty iterations, and when using empirical compatibility coefficients an additional desirable aspect, gap filling, was prevalent. The expanded label set is an improvement upon the label sets used in previous relaxation applications. (Vanderbrug (1977) suggested its use in his iterative enhancement scheme.)

3) Acceptable results were achieved with a set of heuristic (i.e. subjective) compatibility coefficients unique to this study. Some minor "tweaking" was needed to balance the two relaxation subprocesses and to allow the process to run in Mode 2 (i.e. fast mode).

4) A speedup technique for the non-linear probabilistic model was found to be reliable when empirical compatibility coefficients were used within the relaxation.

5) The evaluation parameters for the relaxation, the change and entropy parameters, were useful in interpreting the results of the relaxation, especially when thickening occurred in the 9 label case.

6) In the 13 label case the evaluation parameters did not serve well as stopping criteria. Perhaps the "drift" parameter would be of some use in indicating the onset of the degradation phase. Subjectively speaking, 15 to 20 iterations were found to be sufficient for a good pre-degradation enhancement.

Note that the line detector used to provide input to the relaxation is restricted to detecting lines roughly one pixel wide. Although the leads in the images studied were enhanced well, preliminary results indicated that using the average of 2 x 2 pixel blocks in the line detection algorithm resulted in the detection of wider leads.

The identification (i.e.
labeling) of the line segments produced by the relaxation processes provided some useful results.

1) It was shown that the two simplest segment characteristics may provide enough information for an accurate labeling. These characteristics were original line (infrared) intensity and original line detector response.

2) Histogram analysis techniques showed some promise in determining thresholds for the separation of the label classes.

The edge problem was found to be much more complex, and the results were in general not very good.

1) Using the 3 x 3 template matching edge operators as input to the relaxation proved to be troublesome. The non-maxima suppression algorithm used to "thin" the detector output could not deal with the various problems associated with their use, so a more complex suppression scheme is definitely called for. It should be noted, though, that the complexity of such an algorithm would greatly increase the computational time of the pre-processing. Furthermore, the additional computational time devoted to the relaxation enhancement would seem to make the process as a whole prohibitively slow.

2) A combination of non-maxima absorption (as a pre-processing step) and non-maxima suppression (as a relaxation subprocess) was found to work well when using the crack edge representation. The results, though, were far from totally acceptable. Edge segments resulting from the relaxation process were often fragmented in areas where ramp edges predominated. Unfortunately these areas often coincided with the thermal features of most interest to oceanographers.

3) In the crack edge relaxation case, no set of input parameters was found to be superior overall. A slow rate of convergence was definitely preferable; however, the quality of the relaxation result using different values for the absorption and suppression parameters seemed to be context-dependent.
The edge relaxation techniques used here emphasize non-maxima suppression as one of the relaxation subprocesses. It may be possible to implement non-maxima absorption as a subprocess by using a technique such as that suggested by Eberlein (1976) within a relaxation framework. His technique involves analyzing edge strength cross sections to smooth out peaks and valleys caused by noise and to absorb non-maximum responses into the significant peaks. Using the crack edge representation, one would have to analyze cross sections only in the vertical and horizontal directions. By putting these two analyses into a relaxation framework it may be possible to have the processes in the orthogonal directions monitor each other to avoid edge segment fragmentation.

The edge segment labeling results were poor. For the two segment characteristics found to be useful in line labeling (i.e. initial local detector response and original image intensity), the edge segment histograms showed weak (or non-existent) bimodality. It was clearly shown, however, that the higher order statistics and the more elaborate "consistency" characteristic are of little practical use, whether used independently or in conjunction with other characteristics.

References.

Abdou, Ikram E. and William K. Pratt, "Quantitative design and evaluation of enhancement / thresholding edge detectors," Proceedings of the IEEE, vol. 67, pp. 753-763, 1979.

Bajcsy, Ruzena and Mohamad Tavakoli, "Computer recognition of roads from satellite pictures," IEEE Trans. Syst., Man, Cybern., vol. SMC-6, pp. 623-637, 1976.

Ballard, Dana H. and Christopher M. Brown. Computer Vision. Englewood Cliffs, NJ: Prentice-Hall, 1982.

Chittineni, C.B., "Edge and line detection in multidimensional noisy imagery data," IEEE Trans. Geoscience and Remote Sensing, vol. GE-21, pp. 163-174, 1983.

Coulter, Robert E., "Preliminary report on the use of Bayesian Decision Theory for automatic water mass classification from satellite radiometer data," U.S.
Naval Oceanographic Office, Bay St. Louis, Mississippi, Tech. Note TN9100-13-82, 1983.

Danker, Alan J. and Azriel Rosenfeld, "Blob detection by relaxation," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-3, pp. 79-92, 1981.

Davis, Larry S., "Shape matching using relaxation techniques," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-1, pp. 60-72, 1979.

Davis, Larry S. and Azriel Rosenfeld, "Curve segmentation by relaxation labeling," IEEE Trans. Comput., vol. C-26, pp. 1053-1057, 1977.

Eberlein, Robert B., "An iterative gradient edge detection algorithm," Computer Graphics and Image Processing, vol. 5, pp. 245-253, 1976.

Eklundh, Jan-Olof and Azriel Rosenfeld, "Some relaxation experiments using triples of pixels," IEEE Trans. Syst., Man, Cybern., vol. SMC-10, pp. 150-153, 1980.

Eklundh, Jan-Olof, Hiromichi Yamamoto, and Azriel Rosenfeld, "A relaxation method for multispectral pixel classification," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-2, pp. 72-75, 1980.

Fekete, Gyorgy, Jan-Olof Eklundh, and Azriel Rosenfeld, "Relaxation: evaluation and applications," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-3, pp. 459-469, 1981.

Hanson, Allen R. and Edward M. Riseman, "Segmentation of natural scenes," in Computer Vision Systems, A. Hanson and E. Riseman, Eds. New York: Academic Press, pp. 129-163, 1978.

Hanson, Allen R., Edward M. Riseman, and Frank C. Glazer, "Edge relaxation and boundary continuity," Dep. Comput. Inform. Sci., Univ. of Massachusetts, Amherst, COINS Tech. Rep. 80-11, 1980.

Haralick, Robert M., "An interpretation for probabilistic relaxation," Computer Graphics and Image Processing, vol. 22, pp. 388-395, 1983.

Hummel, Robert A. and Azriel Rosenfeld, "Relaxation processes for scene labeling," IEEE Trans. Syst., Man, Cybern., vol. SMC-8, pp. 765-768, 1978.

Kalayeh, H.M. and D.A. Landgrebe, "Adaptive relaxation labeling," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6, pp. 369-372, 1984.
Kitchen, Les and Azriel Rosenfeld, "Edge evaluation using local edge coherence," IEEE Trans. Syst., Man, Cybern., vol. SMC-11, pp. 597-605, 1981.

Lev, Amos, Steven W. Zucker, and Azriel Rosenfeld, "Iterative enhancement of noisy images," IEEE Trans. Syst., Man, Cybern., vol. SMC-7, pp. 435-442, 1977.

Lintz, Joseph Jr. and David S. Simonett (Eds.). Remote Sensing of Environment. Reading, Massachusetts: Addison-Wesley, 1976.

O'Leary, Dianne P. and Shmuel Peleg, "Analysis of relaxation processes: the two-node two-label case," IEEE Trans. Syst., Man, Cybern., vol. SMC-13, pp. 618-623, 1983.

Peleg, Shmuel, "A new probabilistic relaxation scheme," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-2, pp. 362-369, 1980.

Peleg, Shmuel and Azriel Rosenfeld, "Determining compatibility coefficients for curve enhancement relaxation processes," IEEE Trans. Syst., Man, Cybern., vol. SMC-8, pp. 548-555, 1978.

Peleg, Shmuel and Azriel Rosenfeld, "A note on the evaluation of probabilistic labelings," IEEE Trans. Syst., Man, Cybern., vol. SMC-11, pp. 176-179, 1981.

Persoon, Eric, "A new edge detection algorithm and its applications in picture processing," Computer Graphics and Image Processing, vol. 5, pp. 425-446, 1976.

Prager, John M., "Extracting and labeling boundary segments in natural scenes," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-2, pp. 16-27, 1980.

Prager, John M., Allen R. Hanson, and Edward M. Riseman, "Extracting and labeling boundary segments in natural scenes," Dep. Comput. Inform. Sci., Univ. of Massachusetts, Amherst, COINS Tech. Rep. 77-7, 1977.

Richards, J.A., D.A. Landgrebe, and P.H. Swain, "On the accuracy of pixel relaxation labeling," IEEE Trans. Syst., Man, Cybern., vol. SMC-11, pp. 303-309, 1981.

Riseman, Edward M. and Michael A. Arbib, "Computational techniques in the visual segmentation of static scenes," Computer Graphics and Image Processing, vol. 6, pp. 221-276, 1977.
Robinson, Guner S., "Edge detection by compass gradient masks," Computer Graphics and Image Processing, vol. 6, pp. 492-501, 1977.

Rosenfeld, Azriel and Mark Thurston, "Edge and curve detection for visual scene analysis," IEEE Trans. Comput., vol. C-20, pp. 562-569, 1971.

Rosenfeld, Azriel, Robert A. Hummel, and Steven W. Zucker, "Scene labeling by relaxation operations," IEEE Trans. Syst., Man, Cybern., vol. SMC-6, pp. 420-433, 1976.

Rosenfeld, Azriel and Russel C. Smith, "Thresholding using relaxation," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-3, pp. 598-606, 1981.

Schachter, Bruce J., Amos Lev, Steven W. Zucker, and Azriel Rosenfeld, "An application of relaxation methods to edge reinforcement," IEEE Trans. Syst., Man, Cybern., vol. SMC-7, pp. 813-816, 1977.

Tavakoli, Mohamad and Azriel Rosenfeld, "Building and road extraction from aerial photographs," IEEE Trans. Syst., Man, Cybern., vol. SMC-12, pp. 84-91, 1982.

Vanderbrug, Gordon J., "Line detection in satellite imagery," IEEE Trans. Geoscience Electronics, vol. GE-14, pp. 37-44, 1976.

Vanderbrug, Gordon J., "Experiments in iterative enhancement of linear features," Computer Graphics and Image Processing, vol. 6, pp. 25-42, 1977.

Vanderbrug, Gordon J. and Azriel Rosenfeld, "Linear feature mapping," IEEE Trans. Syst., Man, Cybern., vol. SMC-8, pp. 768-774, 1978.

Yamamoto, Hiromichi, "A method of deriving compatibility coefficients for relaxation operators," Computer Graphics and Image Processing, vol. 10, pp. 256-271, 1979.

Zucker, Steven W., Robert A. Hummel, and Azriel Rosenfeld, "An application of relaxation labeling to line and curve enhancement," IEEE Trans. Comput., vol. C-26, pp. 394-403, 922-929, 1977.

Zucker, Steven W., E.V. Krishnamurty, and Robert L. Haar, "Relaxation processes for scene labeling: convergence, speed, and stability," IEEE Trans. Syst., Man, Cybern., vol. SMC-8, pp. 41-48, 1978.