The Application of Discrete Wavelet Transforms to SAR Image Processing

by

Zhaohui Zeng
B.A.Sc, Tsinghua University, Beijing, China, 1993
M.A.Sc, Peking University, Beijing, China, 1996

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in THE FACULTY OF GRADUATE STUDIES (Department of Electrical and Computer Engineering)

We accept this thesis as conforming to the required standard

The University of British Columbia
June 1999
© Zhaohui Zeng, 1999

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Abstract

Synthetic aperture radar (SAR) is a very efficient instrument for obtaining a better understanding of the earth's environment. SAR data represents an important source of information for a large variety of scientists around the world. However, the acquisition mechanism of SAR is quite different from that of other sensors, such as optical sensors. It gives SAR image data some unique properties, which mean that conventional image processing techniques may fail to obtain satisfactory results, or must be modified for SAR image data. The objective of this thesis work is to investigate the potential of discrete wavelet transforms (DWT) for SAR image processing.
The emphasis is placed on speckle noise reduction and SAR image compression, which are two of the most popular applications of the DWT to image processing in the current literature. Two new algorithms for speckle reduction have been developed: a Bayesian method based on a statistical model, and a wavelet extrema method based on the local properties of the wavelet coefficients. Both have been applied to airborne and spaceborne SAR images. Comparison of their results to some existing well-known methods shows their advantages both visually and numerically. In addition, simultaneous speckle reduction and data compression can significantly improve the compressibility of SAR images. A modified SPIHT algorithm has been applied to SAR image coding. The effectiveness of this strategy has been demonstrated by comparison to the same method without speckle reduction and to classic efficient wavelet compression algorithms.

Contents

Abstract ii
Contents iv
List of Tables vii
List of Figures viii
Acronyms xi
Acknowledgements xiii

1 Introduction 1
1.1 Background 1
1.2 Outline of the Thesis 3

2 SAR Image Processing 8
2.1 Basic Principles of Synthetic Aperture Radar 8
2.2 Fundamental Properties of SAR Images 13
2.2.1 Single-Channel Image Types 14
2.2.2 The Multiplicative Noise Model for Speckle 19
2.3 The Promising SAR Image Processing Technique - Discrete Wavelet Transform 21

3 Discrete Wavelet Transform 25
3.1 Introduction 25
3.2 Wavelet Transform Versus Fourier Transform 26
3.3 Dissimilarities Between Fourier and Wavelet Transform 27
3.4 Wavelet Analysis 30
3.4.1 The Discrete Wavelet Transform 30
3.4.2 The Fast Wavelet Transform 32
3.4.3 Wavelet Packet Transform 32
3.5 Application of the DWT 33
3.5.1 Zero-Tree Based Compression 34

4 Speckle Reduction of SAR Images 41
4.1 Introduction 41
4.2 Soft Thresholding 43
4.3 Bayesian Method 44
4.4 Wavelet Extrema Methods 47
4.5 Experimental Results 50

5 SAR Image Compression 66
5.1 Texture Analysis 67
5.2 Homogeneity Map 69
5.3 Speckle Reduction 72
5.4 Modified SPIHT Coding Scheme 73
5.5 A New Encoding Scheme 76
5.6 Experimental Results 78

6 Conclusions 92

Bibliography 94

List of Tables

2.1 Radar Parameters of ERS-1 SAR 9
4.1 s/m for clutter data 54
4.2 log-std for clutter data 54
4.3 Target-to-clutter ratio (t/c) (dB) 55
5.1 Comparison of target recognition ability 79
5.2 PSQNR (dB) for reconstructed SAR images 79
5.3 ASQNR (dB) for reconstructed SAR images 80

List of Figures

1.1 X-band airborne 1-look SAR image of Bedfordshire in England (Provided by N. A. Software Ltd, http://www.nasoftware.co.uk/) 6
1.2 Spaceborne RADARSAT SAR image of Vancouver (Provided by CCRS, http://www.ccrs.nrcan.gc.ca/ccrs/, scale: 1:800000) 7
2.1 The geometry model of SAR 10
2.2 Different representations of the complex SAR image using RADARSAT SLC imagery 23
2.3 Schematic representation of a distributed scatterer as a layer of discrete point scatterers 24
3.1 Time-frequency plane covered by Fourier transform 28
3.2 Time-frequency plane covered by wavelet transform 29
3.3 Wavelet decomposition of image: woman 38
3.4 Embedded zero tree 39
3.5 Scanning of embedded zero tree 40
4.1 Histogram of second-level detail wavelet coefficients (see text) 46
4.2 Square image, noise corrupted image and restored image 50
4.3 Wavelet transform and modulus of squares image 51
4.4 Wavelet transform and modulus of speckle noise corrupted squares image 52
4.5 Airborne radar image samples used in the test 58
4.6 Speckle reduction on X-band image with field texture 59
4.7 Speckle reduction on X-band image with village texture 60
4.8 Speckle reduction on X-band image with lake texture 61
4.9 Speckle reduction by soft threshold 62
4.10 Speckle reduction by Bayesian wavelet 63
4.11 Speckle reduction by wavelet extrema 64
4.12 Speckle reduction by simulated annealing optimization 65
5.1 Tree structured analysis of airborne radar image (scene 1) 70
5.2 Tree structured analysis of airborne radar image (scene 2) 71
5.3 Tree-structured analysis of single-look airborne radar image 81
5.4 Examples of parent-offspring dependencies in the spatial-orientation tree 82
5.5 Block diagram of the modified SPIHT coding scheme 83
5.6 Modified SPIHT without speckle reduction 84
5.7 Modified SPIHT with speckle reduction 85
5.8 Conventional SPIHT encoding results 86
5.9 JPEG encoding results 87
5.10 RADARSAT image of Vancouver at 0.5 bpp by modified SPIHT with speckle reduction 88
5.11 RADARSAT image of Vancouver at 0.5 bpp by modified SPIHT without speckle reduction 89
5.12 RADARSAT image of Vancouver at 0.5 bpp by conventional SPIHT 90
5.13 RADARSAT image of Vancouver at 0.5 bpp by JPEG 91

Acronyms

ADC analog-to-digital converter
ASQNR Average Signal to Quantization Noise Ratio
CCRS Canada Centre for Remote Sensing
DCT Discrete Cosine Transform
DEM Digital Elevation Model
DFT Discrete Fourier Transform
DLR Deutsches Zentrum für Luft- und Raumfahrt
DWT Discrete Wavelet Transform
EOB End of Block
ERS Earth Remote Sensing satellite
EZW Embedded Zero-tree Wavelet
FFT Fast Fourier Transform
IDWT Inverse Discrete Wavelet Transform
InSAR Interferometric Synthetic Aperture Radar
JPEG Joint Photographic Experts Group
JPL Jet Propulsion Laboratory
MSPIHT Modified Set Partitioning in Hierarchical Trees
PDF Probability Density Function
PSF Point Spread Function
PSQNR Peak Signal to Quantization Noise Ratio
QMF Quadrature Mirror Filter
RADARSAT Canadian radar remote sensing satellite
RCS Radar Cross Section
SAR Synthetic Aperture Radar
SPIHT Set Partitioning in Hierarchical Trees
SLC Single Look Complex-valued (image)

Acknowledgements

I am very grateful to my supervisor, Dr. I. G. Cumming, for providing effective supervision and academic guidance, as well as an opportunity to work in the field of synthetic aperture radar processing.
I also would like to thank MacDonald Dettwiler, the BC Advanced Systems Institute and the BC Science Council for funding this research. Finally, I would like to thank all of the members of the Radar Remote Sensing Group at UBC for providing a pleasant and effective work environment.

ZHAOHUI ZENG

Chapter 1

Introduction

1.1 Background

Synthetic aperture radar (SAR) is a very efficient instrument for obtaining a better understanding of the earth's environment. SAR data represents an important source of information for a large variety of scientists around the world. It can produce high resolution images of terrain and targets. Due to its all-time, all-weather sensing capability, SAR systems are especially useful for monitoring areas of persistent cloud cover, determining wave conditions under storm systems, disaster monitoring, and monitoring temporal change. In addition, SAR data contains phase information of the observed area, which is essential for SAR interferometry to derive digital elevation models (DEMs). This cannot be achieved by other imaging systems such as optical and infrared systems. In the near future, the amount of data collected by SAR each year is expected to increase substantially. In addition to the increasing number of remote sensing satellites, there is a trend towards placing more sensors on each satellite, and for each sensor to have a higher data rate. Thus, storing SAR data and extracting useful information from it is becoming a pressing issue in both research and industrial environments.

However, the acquisition mechanism of SAR is quite different from that of other sensors, such as optical sensors. It gives SAR image data some unique properties, which mean that conventional image processing techniques may fail to obtain satisfactory results, or must be modified for SAR image data. Firstly, the SAR data is collected in holographic form, and must be correlated to obtain a useful image.
The SAR data can be transmitted and stored in either its original form (called raw data or signal data), or in its correlated form (called image data). Signal data and image data have quite different statistical properties, and so their processing must be considered separately. For the time being, SAR data correlation is done on the ground, and so the on-board storage and downlink transmission is done with SAR signal data. However, when it becomes feasible to do SAR correlation on-board, there will be the need to store SAR image data on-board and to transmit both signal and image data to the ground.

Secondly, the statistical properties of SAR image data are unlike other image data. Since SAR is a coherent imaging process, it suffers from speckle, which introduces 100% multiplicative noise. In addition, the resolution of SAR systems is usually comparable with many of the objects of interest within the scene, e.g. houses, trees or vehicles. This means that, unlike with many optical images, there is very little redundancy in the image, and any image interpretation must retain all the information in the original data.

Thirdly, due to the presence of coherent speckle, SAR image data has a lower signal to noise ratio, or lower radiometric resolution, than optical data, yet it can have a very high dynamic range (up to 80 dB). This means that image algorithms designed for optical data may not be appropriate for SAR data, or at least that the optical algorithms have to be re-designed and re-evaluated for SAR data.

1.2 Outline of the Thesis

The objective of this research is the investigation of the potential of discrete wavelet transforms (DWT) for SAR image processing. The emphasis will be placed on speckle noise reduction and SAR image compression, which are two of the most popular applications of the DWT to image processing in the current literature.

Chapter 2 reviews the basic principles and theory related to SAR imaging systems.
Radar images have properties that are unfamiliar to those more used to optical data and photographs. Those connected with the sensor-scene geometry and the signal processing involved in image formation are described in this chapter. The multiplicative noise model is also introduced.

The DWT is a relatively new signal processing technique which has received significant attention because its multi-resolution decomposition allows efficient image analysis, and it has been widely used in analysis, noise reduction and data compression of SAR images. In chapter 3, we briefly introduce the background of the wavelet transform through a comparison of the DWT and the FFT, and the application of the DWT in image processing.

Chapters 4 and 5 are the core of this dissertation. Chapter 4 focuses on speckle noise reduction. Three speckle reduction methods based on the DWT are introduced: soft thresholding, Bayesian estimation and local maxima representation. The validity of these algorithms is verified and evaluated on both airborne (X-band) and spaceborne (RADARSAT) SAR image data. Chapter 5 is devoted to DWT-based SAR image compression, which is the combination of texture analysis, speckle reduction and a modified set partitioning in hierarchical trees (SPIHT) coding scheme. The SAR image can be progressively compressed and transmitted with relatively independent control of speckle reduction, based on the user's expectation of image quality. Finally, chapter 6 gives a summary of the research results achieved in this thesis, as well as future work to improve it.

Two data sets are used in this thesis to verify the algorithms experimentally and evaluate the results for both speckle reduction and image compression.
An X-band high-resolution airborne 1-look SAR image (Figure 1.1) showing part of Bedfordshire in south-east England displays many of the features that typically appear in airborne imagery:

• The River Nene runs across the top of the image with tributaries and other water features (dark areas in the top of the image).

• The built-up area in the lower center is the village of Stanwick.

• The dark stripe which crosses the image from left to right just above the village is a major road (A605).

• The remainder of the image is largely composed of isolated trees and a patchwork of fields bordered by hedges.

The specific features of village, water and field are shown in Figure 4.5.

A RADARSAT image showing the city of Vancouver surrounded by mountains and ocean is also tested. Three obvious features are displayed in Figure 1.2: city, mountain, and ocean. Interestingly, UBC is located at the tip of the peninsula shown near the upper left corner.

Figure 1.1: X-band airborne 1-look SAR image of Bedfordshire in England (Provided by N. A. Software Ltd, http://www.nasoftware.co.uk/).

Figure 1.2: Spaceborne RADARSAT SAR image of Vancouver (Provided by CCRS, http://www.ccrs.nrcan.gc.ca/ccrs/, scale: 1:800000).

Chapter 2

SAR Image Processing

2.1 Basic Principles of Synthetic Aperture Radar

Synthetic Aperture Radar (SAR) is a coherent microwave imaging system, which was first introduced by Carl Wiley of Goodyear Aerospace Co. in 1951 [3]. The main mechanism of SAR is to improve the azimuth resolution by analyzing and processing the Doppler properties of the received signal. In the last few years, high quality images of the Earth produced by synthetic aperture radar (SAR) systems carried on a variety of airborne and spaceborne platforms have become increasingly available.
Two major leaps forward were provided by the launch of the ERS-1 satellite by the European Space Agency in 1991 and the advent of very flexible airborne systems carrying multi-frequency polarimetric SARs, of which the NASA/JPL AirSAR provides perhaps the single most influential example. These systems ushered in a new era of civilian radar remote sensing because of their emphasis on SAR as a measurement device, with great attention being paid to data quality and calibration. This emphasis continues to play a major part in the development, deployment, and application of current systems.

The imaging geometry for a SAR system is shown in Figure 2.1. From this figure, we can see that SAR is a side-looking sensor with an imaged swath width from 10-70 kilometers for airborne systems to 50-500 kilometers for spaceborne systems. The swath is the area scanned by the radar beam along the moving direction, called azimuth. The parameters of a typical satellite-borne SAR can be represented by those of the current ERS-1 satellite, which are approximately given by:

Parameter                   Value    Unit
Altitude H                  780      km
Orbit period                98       min
Effective velocity Vr       7050     m/s
Nominal slant range R       850      km
Slant range swath           40       km
Ground range swath          100      km
Range bandwidth             17       MHz
Chirp length                34       us
Range FM rate Kr            0.5      MHz/us
Radar wavelength λ          0.0566   m
PRF                         1700     Hz
Antenna length              10       m
Doppler bandwidth           1410     Hz
Azimuth FM rate             2052     Hz/s

Table 2.1: Radar Parameters of ERS-1 SAR

The antenna transmits chirp pulses in the slant range direction to obtain high range resolution. The chirp signal, modulated by a high carrier frequency f_0, has a large bandwidth, which is inversely proportional to the resolution.
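As a quick numerical check of this inverse relation, the standard pulse-compression result ΔR = c/(2B) applied to the Table 2.1 numbers gives the achievable slant-range resolution (a small sketch using the rounded value c = 3 × 10^8 m/s):

```python
# Slant-range resolution implied by the ERS-1 chirp parameters of Table 2.1.
# delta_R = c / (2 * B) is the standard pulse-compression result; the factor
# of 2 accounts for the two-way travel of the pulse.
c = 3.0e8              # speed of light (m/s), rounded
B = 17e6               # range bandwidth (Hz)
Kr = 0.5e12            # range FM rate (Hz/s), i.e. 0.5 MHz/us
tau_l = 34e-6          # chirp duration (s)

swept = Kr * tau_l     # swept bandwidth = FM rate x duration = 17 MHz
delta_R = c / (2 * B)  # about 8.8 m slant-range resolution
```

Note that the swept bandwidth (FM rate times duration) reproduces the tabulated 17 MHz, confirming the internal consistency of the table.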
Figure 2.1: The geometry model of SAR

The equation of the transmitted signal is:

s_t(η, τ) = P(τ) cos(2π f_0 τ + π K_r (τ − τ_l/2)^2),  τ ∈ [0, τ_l]   (2.1)

where

• P(τ) is the envelope of the range pulse (usually a rectangular function)

• τ is range time (s)

• K_r is the range FM rate of the chirp (Hz/s)

• τ_l is the time duration of the chirp (s)

• f_0 is the radar carrier frequency (Hz)

• η is azimuth time (s)

The equation of the ideal received signal after coherent demodulation to baseband can be expressed as a complex signal which contains both magnitude and phase information of the reflector:

s_d(η, τ) = A(η − η_c) P(τ − τ_d) exp(−j 2π f_0 τ_d + j π K_r (τ − τ_l/2 − τ_d)^2),   (2.2)

where τ is in the interval [τ_d − τ_l/2, τ_d + τ_l/2]; η is in [η_c − η_l/2, η_c + η_l/2]; j^2 = −1, and

τ_d = 2 R(η) / c   (2.3)

R(η)^2 = R_0^2 + V_r^2 (η − η_0)^2   (2.4)

A(η) ≈ sinc^2(k φ)   (2.5)

where

• τ_d is the time delay of the echo from the reflector (s)

• R(η) is the range from antenna to reflector at azimuth time η (m)

• c is the speed of light (m/s)

• λ is the radar wavelength (m)

• R_0 is the range to the reflector at the time of closest approach (m)

• V_r is the effective platform or radar velocity (m/s)

• η_c is the azimuth time at which the beam center crosses the target (s)

• η_0 is the azimuth time at which the antenna is closest to the target (s)

• η_l is the synthetic aperture duration (s)

• A(η) is the two-way azimuth antenna gain pattern

• φ is the azimuth angle measured from the beam center (radians)

• k is the beam width factor

From equation (2.2), the phase of the baseband received signal contains two parts, j π K_r (τ − τ_l/2 − τ_d)^2 and −j 2π f_0 τ_d. Because η changes much more slowly than τ, we can treat τ_d as approximately constant in j π K_r (τ − τ_l/2 − τ_d)^2, which represents a linear FM signal in the range direction. Considering the phase term −j 2π f_0 τ_d, and expressing the range equation approximately as a second-order polynomial in η, we can see that the azimuth signal is also a linear FM signal, with a negative FM rate.
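The linear FM structure can be made concrete with a short sketch: it generates the baseband chirp of equation (2.1) (the carrier f_0 removed, as after demodulation) and recovers a point target's delay by matched filtering. All parameter values here are illustrative, not those of any real system:

```python
import numpy as np

# Baseband linear FM chirp of equation (2.1) and matched-filter pulse
# compression of a delayed point-target echo. Illustrative parameters only.
Kr = 0.5e12                       # range FM rate (Hz/s)
tau_l = 10e-6                     # chirp duration (s)
fs = 20e6                         # complex sampling rate (Hz)

n_samp = int(round(tau_l * fs))   # 200 samples across the pulse
tau = np.arange(n_samp) / fs
replica = np.exp(1j * np.pi * Kr * (tau - tau_l / 2) ** 2)

# Received echo: the chirp delayed by 100 samples in a longer window.
echo = np.zeros(512, dtype=complex)
echo[100:100 + n_samp] = replica

# Pulse compression: cross-correlation with the conjugate replica, via FFTs.
n = echo.size + n_samp - 1
compressed = np.fft.ifft(np.fft.fft(echo, n) * np.conj(np.fft.fft(replica, n)))

peak = int(np.argmax(np.abs(compressed)))   # index of the compressed pulse
```

The correlation peak lands at the 100-sample delay with magnitude equal to the pulse energy, which is the compression gain exploited by all the processing algorithms discussed next.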
Thus, the SAR signal data can be viewed as a two-dimensional array of linear FM signals, indexed by slant range and azimuth time.

Usually the major processing occurs on the ground after the received signal has been down-linked to Earth. So far, several different techniques have been developed to process the SAR raw data into an image, such as Range/Doppler, Chirp Scaling, and SPECAN. In all techniques, matched filtering (or pulse compression) operations are applied in both the azimuth and slant range directions. Pulse compression in the range direction is used in traditional radar systems. Pulse compression in the azimuth direction is achieved by matched filtering of the azimuth phase information, using the phase history which a pixel on the ground has experienced as it is swept by the radar beam. The output of the SAR processor is a two-dimensional complex image corresponding to the radar reflectivity of the observed area on the ground.

2.2 Fundamental Properties of SAR Images

SAR is a linear measurement system providing an estimate of the complex reflectivity at each pixel. There are then various ways of representing the SAR data, all based on the complex image. These different image types all contain different manifestations of the phenomenon known as speckle, which results from interference between many random scatterers within a resolution cell. Speckle influences our ability to estimate image properties and thus is central to information retrieval from individual SAR images.

For many purposes the phase inherent in the single-look complex imagery carries no useful information and can be discarded. The information at each pixel is then carried by the radar cross section (RCS) or backscattering coefficient σ. For distributed targets, where resolution is not a critical issue, discarding the phase allows the incoherent averaging of pixels or independent images to provide better estimates of σ.
2.2.1 Single-Channel Image Types

The primary geophysical quantity determining the SAR data is the complex radar reflectivity of the scene, and the SAR provides a measurement of this reflectivity. Qualitatively, this concept expresses the fact that when an electromagnetic wave scatters from position (x, y) on the Earth's surface, the physical properties of the terrain cause changes in both the phase, φ(x, y), and amplitude, A(x, y), of the wave. The SAR, in fact, measures the number pair (A cos φ, A sin φ) in the in-phase and quadrature channels of the receiver, weighted by the SAR point spread function (PSF). This estimate of the local reflectivity at each pixel can also be represented by the complex number A e^(jφ); in this form, the SAR data are known as the complex image.

From the complex image a variety of other products can be formed, for example images of the real part A cos φ (the in-phase component), the imaginary part A sin φ (the quadrature component), the amplitude A, the phase φ, the intensity I = A^2, or the log intensity log I. The use of the word "intensity" is by analogy with measurements at optical wavelengths and is synonymous with power or energy. The log image is also sometimes referred to as the "dB" image, since for calibrated data each pixel corresponds to a linearly scaled estimate of the backscattering coefficient σ, in dBs. The scaling will be unity only when the log image is formed by taking 10 log_10 of each pixel in the calibrated intensity image.

For a properly calibrated system, these are all true measurements of the scattering properties of the Earth's surface, but visually they produce quite different representations, as illustrated in Figure 2.2. Typically, the real and imaginary images show some structure but appear extremely noisy, and the phase image is noiselike and shows no structure, while the amplitude, intensity, and log images, though noisy, are clearly easier to interpret.
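All of these products follow from the complex image with a few lines of code. The sketch below uses a simulated unit-power circular Gaussian "image" as an assumed stand-in for real SLC data over a uniform target:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "complex image": unit-power circular Gaussian pixels, an assumed
# stand-in for real single-look complex (SLC) data over a uniform target.
z = (rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))) / np.sqrt(2)

in_phase   = z.real                    # A cos(phi)
quadrature = z.imag                    # A sin(phi)
amplitude  = np.abs(z)                 # A
phase      = np.angle(z)               # phi, in (-pi, pi]
intensity  = amplitude ** 2            # I = A^2
log_int    = 10 * np.log10(intensity)  # the "dB" image
```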
Of these latter three image types, the amplitude and log images are often preferred, since the large dynamic range of the intensity image can reduce the perception of detail. The noiselike quality characteristic of these types of images is known as speckle.

Given that the SAR is making true measurements of the Earth's scattering properties, why do such effects arise? A simple model of how the world might appear to the radar suggests an answer. In distributed targets we can think of each resolution cell as containing a number of discrete scatterers (Figure 2.3). As the wave interacts with the target, each scatterer contributes a backscattered wave with phase and amplitude changes, so the total returned modulation of the incident wave is

A e^(jφ) = Σ_{k=1}^{N} A_k e^(jφ_k)   (2.6)

This summation is over the number of scatterers illuminated by the beam; it includes the weighting imposed by the SAR PSF and any attenuation effects caused by the scattering and propagation process. The individual scattering amplitudes A_k and phases φ_k are unobservable, because the individual scatterers are on much smaller scales than the resolution of the SAR, and there are normally many such scatterers per resolution cell [18].

An immediate conclusion from equation (2.6) is that the observed signal will be affected by interference effects as a consequence of the phase differences between scatterers. In fact, speckle can be understood as an interference phenomenon in which the principal source of the noiselike quality of the observed data is the distribution of the phase terms. To infer this distribution, note that the slant range resolution is typically many wavelengths across. Hence scatterers at different parts of the resolution cell will contribute very different phases to the return, even if their scattering behavior is identical. As a result, we can in practice think of the phase φ_k
as being uniformly distributed in [−π, π] and independent of the amplitude A_k. The sum then looks like a random walk in the complex plane, where each step of length A_k is in a completely random direction. For large numbers of statistically identical scatterers, the analysis [11] reveals that:

1. The observed in-phase and quadrature components, z_1 = A cos φ and z_2 = A sin φ, will be independent identically distributed Gaussian random variables with zero mean, whose variance σ/2 is determined by the scattering amplitudes A_k; that is, they will have a joint probability density function (PDF) given by

P(z_1, z_2) = (1/(π σ)) exp(−(z_1^2 + z_2^2)/σ)   (2.7)

2. The observed phase φ will be uniformly distributed over [−π, π].

3. The amplitude A will have a Rayleigh distribution

P_A(A) = (2A/σ) exp(−A^2/σ),  A ≥ 0   (2.8)

with mean value √(πσ)/2 and standard deviation √((1 − π/4)σ).

4. The observed intensity or power I = A^2 will have a negative exponential distribution

P_I(I) = (1/σ) exp(−I/σ),  I ≥ 0   (2.9)

5. The log intensity D = ln I has a Fisher-Tippett distribution

P_D(D) = (e^D/σ) exp(−e^D/σ)   (2.10)

Distributions 1 to 5 are of fundamental importance in handling SAR data. Notice that, with the exception of the phase distribution, they are completely characterized by a single parameter σ, which carries all the available information about the target.

These distributions apply to each speckled pixel in the scene. Hence, a comparison of theory with observations would require many realizations of the scene, each with an independent speckle pattern. Since these are rarely available, and often we have just single images, we are instead obliged to test the theory by examining distributed targets assumed to have constant backscattering coefficient. Only very seldom do we have enough a priori knowledge to assert with confidence the uniformity of a target. Nonetheless, many studies support the consistency of this speckle model with observed data.
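A short Monte Carlo experiment supports these results: summing unit-amplitude phasors with uniform random phases, as in equation (2.6), reproduces the Rayleigh amplitude and exponential intensity statistics. The scatterer count and number of cells below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random-walk model of equation (2.6): each resolution cell sums N
# unit-amplitude scatterers (A_k = 1) with independent uniform phases.
N, cells = 256, 20000
phases = rng.uniform(-np.pi, np.pi, size=(cells, N))
field = np.sum(np.exp(1j * phases), axis=1)   # A e^{j phi} per cell

z1, z2 = field.real, field.imag               # in-phase / quadrature
A = np.abs(field)                             # ~ Rayleigh amplitude
I = A ** 2                                    # ~ exponential intensity

sigma = I.mean()                              # theory: sigma = N = 256
mean_A = A.mean()                             # theory: sqrt(pi*sigma)/2
cv_A = A.std() / A.mean()                     # ~ sqrt(4/pi - 1) = 0.52
cv_I = I.std() / I.mean()                     # ~ 1
```

The two coefficients of variation computed here are exactly the 0.52 (amplitude) and 1 (intensity) figures quoted later in the comparison of image types.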
It is now clear why the different representations of the data shown in Figure 2.2 convey different amounts of information about distributed targets in the scene. Phase supplies nothing, because its distribution is target independent. The real and imaginary channels supply information, but require the eye to estimate the local standard deviation to detect differences between regions. Twice as many samples are needed to estimate the standard deviation with the same standard error as the mean, and the eye may also be better at seeing mean differences than changes in standard deviation. The amplitude data carries the information in its mean value, though noisily, because the Rayleigh distribution is quite wide. The intensity data has a wider dynamic range than the amplitude data and appears noisier than amplitude because the coefficient of variation is higher for the exponential distribution than for the Rayleigh distribution; for amplitude data it is 0.52, while for intensity data it is 1. The log data also has a reduced dynamic range compared with the intensity.

The implications of this speckle model are important. If our interest is in single images of distributed targets and, hence, phase provides no information, we can discard it and use only amplitude, log, or intensity data. The observation that many distributed targets obey the same speckle model allows us to provide a partial answer to our question about whether an image pixel carries information. Considered in isolation, each pixel supplies a real measurement of backscattered power or amplitude. When the pixel is made up of many elementary scatterers, the observed power is an estimate of an underlying radar cross section (RCS) whose true value is being masked by interference effects. If the pixel is part of a uniform distributed target, the information per pixel is low, since the collection of pixels in the target can supply only one useful number, σ.
All that greater numbers of pixels do is reduce the estimation error in σ when averaging pixels. For nonuniform but textured targets, this conclusion needs only a slight modification; parameters other than the mean may be needed to characterize the region, but typically their number will be few. The information content of a pixel only becomes high when it is not speckled, or when the distributed target is made up of objects large enough in size and with sufficient contrast to be resolved.

2.2.2 The Multiplicative Noise Model for Speckle

The discrete scatterer model for distributed targets implies that all the information at each pixel in single-channel data is carried by the mean incoherent power, σ, appropriate to its particular set of scatterers. The observed intensity at each pixel then has the conditional probability

P_I(I | σ) = (1/σ) e^(−I/σ)   (2.11)

Making the change of variable

I = σ n   (2.12)

leads to the PDF

P_n(n) = e^(−n)   (2.13)

Hence, the observed intensity at each point can be regarded as a deterministic RCS value multiplied by unit-mean exponentially distributed speckle. Equivalently, the complex reflectivity at each pixel can be written as

S = S_1 + i S_2 = √σ (m_1 + i m_2)   (2.14)

where the (voltage) speckle contribution m = m_1 + i m_2 has PDF

P(m_1, m_2) = (1/π) exp(−m_1^2 − m_2^2)   (2.15)

that is, the in-phase and quadrature speckle components are independent zero-mean Gaussian random variables, each with variance 1/2. Notice that we have not included a deterministic phase term in the formulation of the complex reflectivity, because it would be completely randomized by the uniform phase of the speckle term. Again, we see that the observed signal can be regarded as the product of an underlying RCS term and a speckle term. Similarly, for L-look data the same decomposition is possible, where n is now a unit-mean gamma distributed variable with order parameter L.
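A minimal sketch of the multiplicative model I = σn, using an assumed toy scene of two RCS levels: for L looks, the unit-mean gamma speckle has coefficient of variation 1/√L, against 1 for single-look data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Multiplicative model of equations (2.12)-(2.13) on an assumed toy "scene":
# 5000 pixels of RCS 1 next to 5000 pixels of RCS 4.
scene = np.where(np.arange(10000) < 5000, 1.0, 4.0)

# Single-look speckle n is unit-mean exponential; L-look speckle is
# unit-mean gamma with order parameter L (shape L, scale 1/L).
one_look = scene * rng.exponential(1.0, scene.size)
L = 4
four_look = scene * rng.gamma(L, 1.0 / L, scene.size)

# Multi-looking preserves the mean RCS but cuts the speckle coefficient
# of variation from 1 (one look) to 1/sqrt(L).
cv1 = one_look[:5000].std() / one_look[:5000].mean()
cv4 = four_look[:5000].std() / four_look[:5000].mean()
```

This trade of resolution for radiometric stability is exactly what speckle filters (Chapter 4) try to achieve without the resolution loss.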
This formulation of the SAR image as a deterministic RCS modulating a random stationary speckle process has led to the description of speckle as multiplicative noise, even though it is a repeatable interference phenomenon. Although the RCS underlying the observed values is essentially deterministic, its magnitude is unknown and normally is most usefully described by a probabilistic model.

2.3 The Promising SAR Image Processing Technique - Discrete Wavelet Transform

Wavelet transforms have received significant attention because their multi-resolution decomposition allows efficient image analysis. They have been used in image analysis and compression, including SAR images [17] [14] [13] [16]. In fact, the wavelet transform offers several advantages for SAR image processing:

• The decomposition of an image into a representation selective in both frequency and spatial orientation allows allocation of processing power to "important" components.

• Algorithms for performing a wavelet decomposition and reconstruction are recursive and computationally efficient. The computation of a wavelet transform as described by Mallat has a computational burden of O(N), compared with O(N log N) for the FFT.

• The decomposition of an image into a pyramid of detail images results in a set of coefficients that have reduced dynamic range compared to the original image. This is particularly important for compression of SAR imagery, due to its very large dynamic range.

• A wavelet multi-scale decomposition of an image is convenient for progressive image analysis and transmission. A coarse resolution image can be selected from a sensor or database, and then a) browsed at coarse resolution for data quality, data validation, correct location, etc., and/or processed by algorithms which require only coarse resolution input data; or b) a subset of the image can be transmitted and displayed at the finest available resolution.
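The pyramid decomposition can be illustrated with one level of a 2-D Haar DWT written directly in numpy (the Haar filter is the simplest stand-in here; practical coders use longer QMF filters). For a smooth image, nearly all the energy concentrates in the low-pass band, which is what makes the detail coefficients cheap to code:

```python
import numpy as np

def haar2d(x):
    """One level of a 2-D orthonormal Haar DWT (a minimal stand-in for the
    Mallat filter-bank algorithm; real codecs use longer QMF filters)."""
    # filter/decimate along rows
    lo = (x[:, ::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, ::2] - x[:, 1::2]) / np.sqrt(2)
    # then along columns, giving the four subbands
    ll = (lo[::2] + lo[1::2]) / np.sqrt(2)   # approximation
    lh = (lo[::2] - lo[1::2]) / np.sqrt(2)   # horizontal detail
    hl = (hi[::2] + hi[1::2]) / np.sqrt(2)   # vertical detail
    hh = (hi[::2] - hi[1::2]) / np.sqrt(2)   # diagonal detail
    return ll, lh, hl, hh

# A smooth ramp "image": almost all energy lands in the LL band.
img = np.add.outer(np.arange(16.0), np.arange(16.0))
ll, lh, hl, hh = haar2d(img)

energy = lambda b: np.sum(b ** 2)
total = sum(energy(b) for b in (ll, lh, hl, hh))
```

Because the transform is orthonormal, the subband energies sum exactly to the image energy; recursing on `ll` would build the multi-level pyramid described above.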
In the following chapters, the basic principles of the discrete wavelet transform are introduced, and its applications to speckle noise reduction and SAR image compression are investigated respectively.

Figure 2.2: Different representations of the complex SAR image using RADARSAT SLC imagery: (a) in-phase (real) component, (b) quadrature (imaginary) component, (c) phase, (d) amplitude, (e) intensity, (f) log intensity

Figure 2.3: Schematic representation of a distributed scatterer as a layer of discrete point scatterers

Chapter 3 Discrete Wavelet Transform

3.1 Introduction

Wavelets are functions that satisfy certain mathematical requirements and are used in representing data or other functions. This idea is not new: approximation using superposition of functions has existed since the early 1800's, when Joseph Fourier discovered that he could superpose sines and cosines to represent other functions. However, in wavelet analysis, the scale at which the data is examined plays a special role. Wavelet algorithms process data at different scales, or resolutions. If a signal is viewed through a large "window", gross features are noticed; through a small "window", small features are noticed. The result of wavelet analysis is to see both the forest and the trees, so to speak.

The wavelet analysis procedure starts by adopting a wavelet prototype function, called an analyzing wavelet or mother wavelet. Temporal analysis is performed with a contracted, high-frequency version of the same wavelet. Because the original signal or function can be represented in terms of a wavelet expansion (using coefficients in a linear combination of the wavelet functions), data operations can be performed using just the corresponding wavelet coefficients; and by choosing the best wavelets adapted to the data, or by truncating the coefficients below a threshold, the data can be sparsely represented.
This sparse coding makes wavelets an excellent tool in the field of data compression.

3.2 Wavelet Transform Versus Fourier Transform

The fast Fourier transform (FFT) and the discrete wavelet transform (DWT) are both linear operations that generate a data structure containing log2(n) segments of various lengths, usually filling it and transforming it into a different data vector of length 2^n. The mathematical properties of the matrices involved in the transforms are similar as well. The inverse transform matrix for both the FFT and the DWT is the transpose of the original. As a result, both transforms can be viewed as a rotation in function space to a different domain. For the FFT, this new domain contains basis functions that are sines and cosines. For the wavelet transform, this new domain contains more complicated basis functions called wavelets, mother wavelets, or analyzing wavelets.

Both transforms have another similarity. The basis functions are localized in frequency, making mathematical tools such as power spectra (how much power is contained in a frequency interval) useful at picking out frequencies and calculating power distributions.

3.3 Dissimilarities Between Fourier and Wavelet Transform

The most interesting dissimilarity between these two kinds of transforms is that individual wavelet functions are localized in space; Fourier sine and cosine functions are not. This localization feature, along with the wavelet's localization of frequency, makes many functions and operations using wavelets "sparse" when transformed into the wavelet domain. This sparseness, in turn, results in a number of useful applications such as data compression, feature detection in images, and the removal of noise from time series.

One way to see the time-frequency resolution differences between the Fourier transform and the wavelet transform is to look at the basis function coverage of the time-frequency plane.
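The "inverse equals transpose" property is easy to check for a small orthonormal transform matrix. The sketch below (illustrative only; a 4-point single-level Haar matrix is chosen for brevity, and the test vector is invented) verifies both orthogonality and perfect inversion:

```python
import numpy as np

# Orthonormal 4x4 Haar transform matrix: the first two rows average
# adjacent pairs (smooth), the last two difference them (detail).
s = 1 / np.sqrt(2)
H = np.array([
    [s,  s,  0,  0],
    [0,  0,  s,  s],
    [s, -s,  0,  0],
    [0,  0,  s, -s],
])

x = np.array([4.0, 6.0, 10.0, 12.0])
X = H @ x           # forward transform: a "rotation" into the wavelet domain
x_rec = H.T @ X     # the inverse transform is simply the transpose

print(np.allclose(x, x_rec))             # True: H.T inverts H
print(np.allclose(H @ H.T, np.eye(4)))   # True: H is orthogonal
```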
Figure 3.1 shows a windowed Fourier transform, where the window is simply a square wave. The square wave window truncates the sine or cosine function to fit a window of a particular width. Because a single window is used for all frequencies in the FFT, the resolution of the analysis is the same at all locations in the time-frequency plane.

Figure 3.1: Time-frequency plane covered by Fourier transform

An advantage of wavelet transforms is that the window size varies. In order to isolate signal discontinuities, one would like to have some very short basis functions. A way to achieve this is to have short high-frequency basis functions and long low-frequency ones. This happy medium is achieved with wavelet transforms. Figure 3.2 shows the coverage of the time-frequency plane with one wavelet function, the Daubechies wavelet.

Wavelet transforms do not have a single set of basis functions like the Fourier transform, which utilizes just the sine and cosine functions. Instead, wavelet transforms have an infinite set of possible basis functions which can be selected to extract specific features of interest. Thus wavelet analysis provides immediate access to information that can be obscured by other time-frequency methods such as Fourier analysis.

Figure 3.2: Time-frequency plane covered by wavelet transform

3.4 Wavelet Analysis

3.4.1 The Discrete Wavelet Transform

Dilations and translations of the "mother function", or "analyzing wavelet", φ(x), define an orthogonal basis, our wavelet basis:

φ_(s,l)(x) = 2^(-s/2) φ(2^(-s) x - l)   (3.1)

The variables s and l are integers that scale and dilate the mother function φ to generate wavelets, such as the Daubechies wavelet family. The scale index s indicates the width of the wavelet, and the location index l gives its position. Notice that the mother functions are rescaled, or "dilated", by powers of two, and translated by integers.
What makes wavelet bases especially interesting is the self-similarity caused by the scales and dilations. Once the mother function is known, everything about the basis is known. To span the data domain at different resolutions, the analyzing wavelet is used in a scaling equation:

W(x) = Σ_(k=-1)^(N-2) (-1)^k c_(k+1) φ(2x + k)   (3.2)

where W(x) is the scaling function for the mother function φ, and the c_k are the wavelet coefficients. The wavelet coefficients must satisfy linear and quadratic constraints of the form

Σ_(k=0)^(N-1) c_k = 2,   Σ_(k=0)^(N-1) c_k c_(k+2l) = 2δ_(l,0)   (3.3)

where δ is the delta function and l is the location index.

One of the most useful features of wavelets is the ease with which an engineer can choose the defining coefficients for a given wavelet system to be adapted for a given problem. It is helpful to think of the coefficients c_0, ..., c_(N-1) as filters. The filters or coefficients are placed in a transformation matrix, which is applied to a raw data vector. The coefficients are ordered using two dominant patterns: one that works as a smoothing filter (like a moving average), and one that works to bring out the data's "detail" information. These two orderings of the coefficients are called a quadrature mirror filter (QMF) pair in signal processing vocabulary. A more detailed description of the transformation matrix can be found elsewhere [12].

To complete the discussion of the DWT, consider how the wavelet coefficient matrix is applied to a data vector. The matrix is applied in a hierarchical algorithm, sometimes called a pyramidal algorithm. The wavelet coefficients are arranged so that odd rows contain an ordering of wavelet coefficients with different signs that act to bring out the data detail. The matrix is first applied to the original, full-length vector. Then the vector is smoothed and halved, and the matrix applied once more. This process continues until a trivial number of "smooth" data remain.
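As a concrete check of the constraints in equation (3.3), the four Daubechies-4 coefficients can be verified numerically (a short sketch; the closed-form values below are the standard DB4 coefficients):

```python
import numpy as np

s3 = np.sqrt(3.0)
# Daubechies-4 wavelet coefficients c_0 .. c_3 (N = 4)
c = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 4.0

# Linear constraint:  sum_k c_k = 2
print(c.sum())              # equals 2 (up to floating point)

# Quadratic constraints:  sum_k c_k c_{k+2l} = 2 * delta_{l,0}
print(np.dot(c, c))         # l = 0: equals 2
print(np.dot(c[:2], c[2:])) # l = 1: c_0*c_2 + c_1*c_3, vanishes
```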
That is, each matrix application brings out a higher resolution of the data while at the same time smoothing the remaining data. The output of the DWT consists of the remaining "smooth" components and all of the accumulated "detail" components.

3.4.2 The Fast Wavelet Transform

The DWT matrix is not sparse in general, so we face the same complexity issues that we previously faced for the discrete Fourier transform. It is solved in the same way as the FFT, by factoring the DWT into a product of a few sparse matrices using self-similarity properties. The result is an algorithm that requires only order n operations to transform an n-sample vector. This is the "fast" DWT of Mallat and Daubechies [4].

3.4.3 Wavelet Packet Transform

The wavelet transform is actually a subset of a far more versatile transform, the wavelet packet transform [4]. Wavelet packets are particular linear combinations of wavelets. They form bases which retain many of the orthogonality, smoothness, and localization properties of their parent wavelets. The coefficients in the linear combinations are computed by a recursive algorithm, making each newly computed wavelet packet coefficient sequence the root of its own analysis tree.

Wavelet packets represent a generalization of the method of multi-resolution decomposition, and comprise the entire family of subband coded decompositions. The library of wavelet packet bases organizes itself into a homogeneous tree, which can be efficiently searched for a "best basis" under some optimality criterion. The entire wavelet packet tree can be obtained by recursive decomposition of both the low-pass subband and the high-pass subband using the quadrature mirror filter (QMF).

3.5 Application of the DWT

Wavelets and similar methods (e.g., sub-band coding, multi-resolution analysis) have been widely used in image processing for noise reduction, image analysis and image compression.
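The pyramidal algorithm can be sketched for the simplest case, the Haar pair, whose smoothing and detail filters are (1/√2)(1, 1) and (1/√2)(1, -1). The function names below are our own, and this is a sketch of the scheme rather than an optimized implementation. Since each pass works on half the data of the previous one, the total work is n + n/2 + n/4 + ... < 2n, i.e., order n:

```python
import numpy as np

def haar_dwt(x):
    """Pyramidal (Mallat) Haar DWT of a length-2^k signal.

    Each pass halves the 'smooth' part; detail coefficients accumulate.
    Returns the final smooth component and the details, coarse to fine.
    """
    x = np.asarray(x, dtype=float)
    details = []
    while len(x) > 1:
        smooth = (x[0::2] + x[1::2]) / np.sqrt(2)   # moving-average filter
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail filter
        details.append(detail)
        x = smooth
    return x, details[::-1]

def haar_idwt(smooth, details):
    """Invert haar_dwt by unrolling the pyramid, coarse to fine."""
    x = np.asarray(smooth, dtype=float)
    for d in details:
        up = np.empty(2 * len(x))
        up[0::2] = (x + d) / np.sqrt(2)
        up[1::2] = (x - d) / np.sqrt(2)
        x = up
    return x

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
smooth, details = haar_dwt(signal)
print(np.allclose(haar_idwt(smooth, details), signal))  # True: perfect reconstruction
```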
An important feature of these techniques is their successive approximation property: as higher frequencies are added (which is equivalent to more bands in sub-band coding, or difference signals in pyramids), higher-resolution images are obtained. Note that multi-resolution successive approximation corresponds to the behavior of the human visual system. Particularly in image compression, transform coding also has a successive approximation property and is thus part of this broad class of techniques which are characterized by multi-resolution approximations. In short, besides good compression capabilities, these schemes allow partial decoding of the coded version, which leads to usable sub-resolution approximations. Here, instead of standard coding algorithms (e.g., DCT, pyramid coding), a successive approximation method is described for image coding using sub-band/wavelet trees [14], and is the basis of the coding scheme to be discussed later.

3.5.1 Zero-Tree Based Compression

Figure 3.3 illustrates the 3-level wavelet decomposition of the image "woman". The lower right sub-image of Figure 3.3 is the reconstructed image of each subband coefficient set. The upper right is a zoom of the reconstructed approximation coefficients of level 3. The synthesized image shown in the lower left can be perfectly reconstructed from all coefficients, compared to the original image shown in the upper left. It is clear in the lower right sub-image of Figure 3.3 that there are some dependencies left among the bands, as well as within the bands. Also, for natural images with decaying spectra, it is unusual to find significant high-frequency energy if there is little low-frequency energy in the same spatial location. These observations lead to the development of an entropy coding method specifically tailored to octave-band or wavelet coding. It is based on a data structure called a zerotree, which is analogous to the zigzag scanning and end-of-block (EOB) symbol used in the DCT.
A new data structure called a zerotree is defined. A wavelet coefficient x is said to be insignificant with respect to a given threshold T if |x| < T. The zerotree is based on the hypothesis that if a wavelet coefficient at a coarse scale is insignificant with respect to a given threshold T, then all wavelet coefficients of the same orientation in the same spatial location at finer scales are likely to be insignificant with respect to T. An element of a zerotree for threshold T is a zerotree root if it is not the descendant of a previously found zerotree root for threshold T, i.e., it is not predictably insignificant from the discovery of a zerotree root at a coarser scale at the same threshold.

In a hierarchical system, with the exception of the highest frequency subbands, every coefficient at a given scale can be related to a set of coefficients at the next finer scale of similar orientation. The coefficient at the coarser scale is called the parent, and all coefficients corresponding to the same spatial location at the next finer scale of similar orientation are called children. For a given parent, the set of all coefficients at all finer scales of similar orientation corresponding to the same location are called descendants. Similarly, for a given child, the set of coefficients at all coarser scales of similar orientation corresponding to the same location are called ancestors. A wavelet tree descending from a coefficient in subband HH3 is shown in Figure 3.4. For a QMF-pyramid subband decomposition, the parent-child dependencies are shown in Figure 3.4. With the exception of the lowest frequency subband, all parents have four children. For the lowest frequency subband, the parent-child relationship is defined such that each parent node has three children. A scanning of the coefficients is performed in such a way that no child node is scanned before its parent.
For an N-scale transform, the scan begins at the lowest frequency subband, denoted LL_N, and scans subbands HL_N, LH_N, and HH_N, at which point it moves on to scale N-1, etc. The scanning pattern for a 3-scale QMF-pyramid is shown in Figure 3.5. Given a threshold level T to determine whether or not a coefficient is significant, a coefficient x is said to be an element of a zerotree for threshold T if it and all of its descendants are insignificant with respect to T. An element of a zerotree for threshold T is a zerotree root if it is not the descendant of a previously found zerotree root for threshold T, i.e., it is not predictably insignificant from the discovery of a zerotree root at a coarser scale at the same threshold. A zerotree root is encoded with a special symbol indicating that the insignificance of the coefficients at finer scales is completely predictable.

The significance map can be efficiently represented as a string of symbols from a 3-symbol alphabet which is then entropy-coded. The three symbols used are 1) zerotree root; 2) isolated zero, which means that the coefficient is insignificant but has some significant descendant; and 3) significant. When encoding the finest scale coefficients, since the coefficients have no children, the symbols in the string come from a 2-symbol alphabet, whereby the zerotree symbol is not used.

Zerotree coding reduces the cost of encoding the significance map using self-similarity. Even though the image has been transformed using a decorrelating transform, the occurrences of insignificant coefficients are not independent events. More traditional techniques employ some form of run-length encoding [15]. Unlike the zerotree symbol, which is a single "terminating" symbol and applies to all tree depths, run-length encoding requires a symbol for each run length which must be encoded.
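The three-symbol classification can be sketched as a small function (names are our own; this illustrates the rule, not a full EZW coder):

```python
def classify(coeff, descendants, T):
    """Classify a wavelet coefficient for the zerotree significance map.

    `descendants` holds all coefficients of the same orientation at the same
    spatial location at every finer scale. Returns one of:
    'significant', 'zerotree_root', 'isolated_zero'.
    """
    if abs(coeff) >= T:
        return 'significant'
    if all(abs(d) < T for d in descendants):
        # insignificance of the whole tree is predictable from this root
        return 'zerotree_root'
    # insignificant itself, but some descendant is significant
    return 'isolated_zero'

T = 16
print(classify(30, [1, 2, 3], T))   # significant
print(classify(5, [1, 2, 3], T))    # zerotree_root
print(classify(5, [1, 40, 3], T))   # isolated_zero (a descendant is significant)
```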
A technique that is closer in spirit to the zerotrees is the end-of-block (EOB) symbol used in JPEG [15], which is also a "terminating" symbol indicating that all remaining DCT coefficients in the block are quantized to zero. To see why zerotrees may provide an advantage over EOB symbols, consider that a zerotree represents the insignificance information in a given orientation over an approximately square spatial area at all finer scales up to and including the scale of the zerotree root. Because the wavelet transform is a hierarchical representation, varying the scale at which a zerotree root occurs automatically adapts the spatial area over which insignificance is represented. The EOB symbol, however, always represents insignificance over the same spatial area, although the number of frequency bands within that spatial area varies. Given a fixed block size, such as 8 x 8, there is exactly one scale in the wavelet transform at which, if a zerotree root is found at that scale, it corresponds to the same spatial area as a block of the DCT. If a zerotree root can be identified at a coarser scale, then insignificance pertaining to that orientation can be predicted over a larger area. Similarly, if the zerotree root does not occur at this scale, then looking for zerotrees at finer scales represents a hierarchical divide-and-conquer approach to searching for one or more smaller areas of insignificance over the same spatial regions as the DCT block size. Thus, many more coefficients can be predicted in smooth areas where a root typically occurs at a coarse scale. Furthermore, the zerotree approach can isolate interesting non-zero details by immediately eliminating large insignificant regions from consideration.

Figure 3.3: Wavelet decomposition of image: woman

Figure 3.4: Embedded zero tree
Figure 3.5: Scanning of embedded zero tree

Chapter 4 Speckle Reduction of SAR Images

4.1 Introduction

When details in the SAR image are important, speckle causes degradation of the image. Hence, speckle reduction is a necessary procedure before automatic and efficient class discrimination can be performed. As discussed in [1], speckle can typically be modeled as multiplicative independent identically distributed (i.i.d.) noise. Logarithmic transformation of a SAR image converts the multiplicative noise model to an additive noise model. For a digitized SAR image, we define z(j, k) as the gray level (or the observed image amplitude) of the (j, k)th pixel of the image. Hence, the pixel level of a SAR image can be written as

z = x e   (4.1)

where x is the desired texture information and e is the multiplicative noise. Arsenault et al. [1] showed that for a logarithmically transformed SAR image, the speckle is approximately additive Gaussian noise,

y = x + e   (4.2)

where now y = ln(|z|), x = ln(|x|), and e = ln(|e|) denote the log-transformed quantities. If W is the multi-level DWT, then a multi-resolution representation is given by the equation:

Wy = Wx + We (or Y = X + E)   (4.3)

where Y = Wy, X = Wx, E = We. The noise level (standard deviation), σ, in the multi-resolution representation is not known in advance and has to be estimated from the data.

In the following sections of this chapter, three noise reduction methods used in the wavelet domain are discussed: a classical method, soft thresholding, and two new algorithms based on statistics and on local maxima of wavelet coefficients, respectively. In the case of the two new algorithms, we are the first to apply them to SAR images.

4.2 Soft Thresholding

In [5], Donoho proposed a non-linear method for reconstruction of an unknown signal from noisy measurements. The method is based on shrinkage or thresholding in the orthogonal wavelet domain.
If the signal of interest, x, has been corrupted by measurement noise (or any other noise), then the observed signal y is

y = x + σe   (4.4)

where e is unit-variance, zero-mean Gaussian white noise and σ is the noise level. The recovery of the unknown signal x, such that the mean square error between the estimate x̂ and x itself, E[|x̂ - x|²], is minimized, can simply be done by thresholding in the wavelet domain. This method was extended to SAR images [16] and can be described in three steps:

1. Logarithmically transform the SAR image, then apply the orthogonal DWT to get the wavelet coefficients corresponding to the noisy data, y.

2. Choose a threshold t = γσ (where γ is a constant to be chosen) and apply the soft thresholding

η(y) = y - t for y > t;  0 for |y| ≤ t;  y + t for y < -t

to obtain the enhanced wavelet coefficients, ŷ.

3. Apply the inverse orthogonal DWT and the exponential transformation to get the denoised signal, x̂.

As a widely accepted noise reduction technique applied to wavelet transform coefficients, the soft thresholding method can efficiently remove additive noise. However, the simultaneous presence of rich texture information and severe speckle noise in SAR images prevents soft thresholding from being as successful as it is on normal optical images. If the threshold is too large, it risks losing useful information; if too small, it fails to remove speckle noise. Two approaches are considered as solutions to this dilemma. The first is to estimate the image feature from its statistical distribution based on Bayes' rule, then change the distribution to match that of a noise-cleaned image; the second is to detect the location of texture information from the local extrema of its wavelet transform, then enhance these local extrema in order to keep that information untouched.

4.3 Bayesian Method

Our method is to estimate the noise-cleaned wavelet coefficient X given the measured Y.
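Step 2 is the core of the method. A minimal sketch of the shrinkage rule follows (the coefficient values, σ and γ are invented for illustration; a real implementation would apply this to the DWT coefficients of the log-image and then invert both transforms):

```python
import numpy as np

def soft_threshold(y, t):
    """Donoho soft thresholding: shrink toward zero by t, zero inside [-t, t]."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# Toy wavelet coefficients: a few large "signal" values in small "noise".
coeffs = np.array([0.3, -0.5, 8.0, 0.2, -7.5, 0.4, -0.1, 6.1])

sigma = 0.35   # noise level, estimated from the data in practice
gamma = 3.0    # user-chosen constant
t = gamma * sigma

# Small coefficients are set to 0; large ones are shrunk toward zero by t.
print(soft_threshold(coeffs, t))
```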
The mean of the posterior distribution provides an unbiased least-squares estimate X̂ of the variable X. Bayes' rule allows us to write this in terms of the probability densities of the noise and signal:

X̂(Y) = ∫ P_(X|Y)(X|Y) X dX
     = ∫ P_(Y|X)(Y|X) P_X(X) X dX / ∫ P_(Y|X)(Y|X) P_X(X) dX
     = ∫ P_E(Y - X) P_X(X) X dX / ∫ P_E(Y - X) P_X(X) dX   (4.5)

where P_E denotes the probability density function of the noise, and P_X the prior probability density function of the signal. The denominator is the probability density function (PDF) of the noisy observation, computed via convolution of the noise and signal PDFs. In order to use this equation to estimate the original value X, both of these probability density functions must be known. In [9], Mallat described wavelet coefficients (without speckle noise) by a two-parameter generalized Laplacian distribution:

P_X(X) = e^(-|X/s|^p) / N(s, p)   (4.6)

where N(s, p) = 2 s Γ(1/p) / p, and Γ(x) = ∫_0^∞ t^(x-1) e^(-t) dt is the well known gamma function. The distribution is zero-mean and symmetric, and the parameters {s, p} are directly related to the second and fourth moments of X:

σ² = s² Γ(3/p) / Γ(1/p),   κ = Γ(1/p) Γ(5/p) / Γ²(3/p)   (4.7)

where κ is the kurtosis (fourth moment divided by squared variance).

Figure 4.1 gives the histogram of the wavelet coefficients of the second-level detail sub-band image. The plot shows the histograms of: a) the coefficients of the original image; b) the coefficients of the denoised image; and c) the coefficients given by the Laplacian distribution. This shows that the denoised image has much less high frequency energy than the original, and also that the coefficients of the denoised image fit the Laplacian model well.

Figure 4.1: Histogram of second-level detail wavelet coefficients (see text).
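The posterior-mean estimator above can be evaluated by brute-force numerical integration. The sketch below (parameter values invented; a real implementation would use the {s, p} estimated per sub-band) combines the generalized Laplacian prior with a Gaussian noise PDF; note that the normalizing constants cancel in the ratio:

```python
import numpy as np

def bayes_estimate(Y, s, p, sigma_n):
    """Posterior-mean estimate of a clean coefficient X from its noisy value Y.

    Numerator and denominator of the Bayesian estimator are evaluated by
    quadrature; constant factors of the prior and noise PDFs cancel.
    """
    X = np.linspace(-50.0, 50.0, 4001)
    prior = np.exp(-np.abs(X / s) ** p)                     # generalized Laplacian
    noise = np.exp(-((Y - X) ** 2) / (2.0 * sigma_n ** 2))  # Gaussian P_E(Y - X)
    w = noise * prior
    return np.sum(w * X) / np.sum(w)

# The estimator acts as a smooth shrinkage rule: small (noise-like) values
# are pulled strongly toward zero, large (signal-like) values barely move.
for Y in (0.5, 2.0, 10.0):
    print(Y, round(bayes_estimate(Y, s=2.0, p=0.7, sigma_n=1.0), 3))
```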
Also, we can estimate {s, p} from the noisy observation Y. The second and fourth moments of a generalized Laplacian signal corrupted by additive Gaussian white noise are:

m_2 = σ_n² + s² Γ(3/p) / Γ(1/p)   (4.8)

m_4 = 3σ_n⁴ + 6σ_n² s² Γ(3/p) / Γ(1/p) + s⁴ Γ(5/p) / Γ(1/p)   (4.9)

Assuming σ_n is known, the measurements of these two moments of the noisy data are sufficient to estimate the model PDF parameters.

4.4 Wavelet Extrema Methods

In images, the most important features are often the sharply varying points of the intensity, also called edge points. This is well illustrated by our ability to recognize an object from a drawing that outlines the edges. The classical technique used to remove white noise from a signal is to convolve the signal with a Gaussian filter. As a result, a part of the noise is removed, but the high frequencies are also removed and thus the signal singularities are smoothed. Fortunately, the value of a wavelet transform maximum created by a signal feature increases or remains constant when the scale increases. On the other hand, the value of a wavelet transform local maximum created by noise decreases, on average, when the scale increases. We use this property to remove the part of the wavelet maxima created by the noise. We then reconstruct a signal from the remaining maxima; as a result, much of the noise is removed.

The wavelet extrema follow from the definition of the modulus and angle images at each scale 2^j:

M_(2^j) f(x, y) = sqrt( |W¹_(2^j) f(x, y)|² + |W²_(2^j) f(x, y)|² )   (4.10)

A_(2^j) f(x, y) = arctan( W²_(2^j) f(x, y) / W¹_(2^j) f(x, y) )   (4.11)

M_(2^j) f(x, y) can be interpreted as the modulus of the gradient vector, whereas A_(2^j) f(x, y) gives the angle of this gradient vector. The sharper variation points of f(x, y) smoothed at the scale 2^j correspond to the maxima of M_(2^j) f(x, y) along the gradient direction. For each of these maxima, detection is essentially equivalent to Canny's non-maxima suppression [8].
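The modulus and angle images above can be sketched directly. In the toy fragment below (our own construction; the wavelet components W¹ and W² are approximated by simple finite differences of an artificial edge image, and np.arctan2 is used so the angle is resolved in all four quadrants):

```python
import numpy as np

# Toy image with a single vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 10.0

# Horizontal (W1) and vertical (W2) components at one scale, approximated
# here by first differences of the image.
W1 = np.zeros_like(img)
W2 = np.zeros_like(img)
W1[:, :-1] = img[:, 1:] - img[:, :-1]
W2[:-1, :] = img[1:, :] - img[:-1, :]

M = np.sqrt(W1 ** 2 + W2 ** 2)   # modulus image (eq. 4.10)
A = np.arctan2(W2, W1)           # gradient angle  (eq. 4.11)

# The modulus is maximal along the edge, and the angle there points
# across it (0 radians: a purely horizontal gradient).
print(M[4, 3], A[4, 3])   # 10.0 0.0
```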
In order to evaluate the behavior of the wavelet maxima across scales, we need to make a correspondence between the maxima that appear at different scales 2^j. We say that a maximum at a scale 2^j propagates to another maximum at the coarser scale 2^(j+1) if both maxima belong to the same image feature. In [8], Hwang et al. prove that for white noise, on average, the number of maxima decreases by a factor of 2 when the scale increases by 2: half of the maxima do not propagate from the scale 2^j to the scale 2^(j+1). In order to find which maxima propagate to the next scale, one should compute the wavelet transform on a dense sequence of scales. However, with a simple ad-hoc algorithm, one can still find which maxima propagate to the next scale from their values and positions with respect to other maxima at the next scale. This ad-hoc algorithm is not exact, but it saves computation since the wavelet transform does not have to be computed at any other scale. To remove the white noise components, we suppress all the maxima that do not propagate across enough scales, or whose average amplitude increases when the scale decreases. The original signal also includes smooth variations between each discontinuity. The energy of these smooth variations dominates the white noise at scales larger than 2⁴. Hence, we only compute the wavelet transform maxima up to the scale 2⁴.

In order to test the algorithm, we consider a simple image which demonstrates its operation. Figure 4.2, Figure 4.3 and Figure 4.4 show an image of overlapping squares; the image with speckle noise and the noise-cleaned image obtained using this algorithm are also shown in Figure 4.2. It is clear that the wavelet transform operates horizontally and vertically at varying scales. The wavelet modulus shows the combination of the components. In Figure 4.3 and Figure 4.4, each row represents a given wavelet scale, from 2⁰ to 2⁴.
The first column is the horizontal wavelet transform, the second column is the vertical wavelet transform, and the third column is the wavelet modulus. There are several interesting observations. First, as the scale increases, there is a perception that there are two objects. That perception does not provide useful information for extracting edges, although a segmentation analysis might use it as a starting estimate for deciding about the objects contained in the image. Second, as the scale increases, the singularities at coarser scales have singular descendants at finer scales. This provides the ability to distinguish the singularities resulting from edges from those resulting from noise points. This simple algorithm shows the feasibility of discriminating a signal from its noise with an analysis of the local maxima behavior across scales. Much better strategies for selecting the maxima can certainly be developed depending upon the application.

At each scale 2^j, the local maxima of the wavelet transform are the points (x, y) where the modulus image M_(2^j) f(x, y) is locally maximum along the gradient direction given by A_(2^j) f(x, y). We record the position and value of each of these local maxima. In Figure 4.3, the local maxima of the original image are at the border of the squares. In this particular example, the values of the local maxima remain constant at all scales 2^j. The local maxima of the noise-corrupted image are shown in Figure 4.4. Both the original image and the noise-corrupted image are decomposed to 4 levels.

Figure 4.2: Square image, noise corrupted image and restored image

4.5 Experimental Results

In our experiments, Daubechies 4 (DB4) wavelet transforms (quadrature mirror filter pairs) are employed. Both airborne and spaceborne data have been tested.

Figure 4.3: Wavelet transform and modulus of squares image

Figure 4.4: Wavelet transform and modulus of speckle noise corrupted squares image

Three types of clutter regions in the airborne image are chosen to examine our algorithms: water features, a field with isolated trees, and a village. In addition, the results from the wavelet methods are also compared to traditional soft thresholding and the simulated annealing optimization method introduced in [11]. The four methods are:

1. Soft thresholding: we choose the standard deviation in the highest scale at the diagonal orientation as the estimate of σ.

2. Bayesian estimator: for each wavelet sub-band, we estimate the model PDF parameters s and p, and numerically compute an estimate via equation (4.5). After applying this estimator, we reconstruct x̂ from the inverse discrete wavelet transform (IDWT) of X̂ without further coefficient quantization.

3. Local wavelet maxima: a wavelet coefficient local maximum at a coarse scale is identified as an edge point if it has the same singularities in its descendants, while a local maximum at the finer scale is identified as an edge point if it has the same singularities at its spatial neighbors.

4. Simulated annealing optimization: takes a single image and estimates the underlying surface cross-section using a variant of the simulated annealing optimization method. It approximates the data using a locally flat model of the cross-section [11].

Some statistical metrics are used to evaluate the performance of our algorithms:

• Standard-deviation-to-mean ratio (s/m): the quantity s/m (both in power) is a measure of image speckle in homogeneous regions [7].

• Log standard deviation (log-std): the standard deviation of the clutter data (in dB).
This is an important quantity that directly affects the target detection performance of a standard two-parameter constant-false-alarm-rate (CFAR) algorithm [10].

• Target-to-clutter ratio (t/c): the difference between the target and clutter means (in dB). It measures how well the target stands out from the surrounding clutter.

          Original  Soft-          Bayesian   Local    Simulated
          Image     thresholding   estimator  maxima   annealing
Field     0.269     0.113          0.158      0.133    0.095
Lake      0.247     0.109          0.134      0.120    0.085
Village   0.312     0.147          0.153      0.141    0.101

Table 4.1: s/m for clutter data

          Original  Soft-          Bayesian   Local    Simulated
          Image     thresholding   estimator  maxima   annealing
Field     1.44      1.07           1.23       1.21     1.07
Lake      1.28      0.95           1.12       1.08     0.94
Village   1.79      1.25           1.37       1.30     1.21

Table 4.2: log-std for clutter data

Three types of clutter regions in the airborne image are chosen to examine our algorithms: water features, isolated trees, and a village (Figure 4.5). The spaceborne RADARSAT image is also tested (Figure 1.2).

          Original  Soft-          Bayesian   Local    Simulated
          Image     thresholding   estimator  maxima   annealing
Field     41.7      40.0           41.8       42.0     26.5
Lake      43.9      42.8           43.5       44.6     29.8
Village   36.6      35.2           36.4       37.2     24.4

Table 4.3: Target-to-clutter ratio (t/c) (dB)

The perceptual results are shown for the airborne data in Figure 4.6, Figure 4.7 and Figure 4.8, and for the spaceborne data in Figure 4.9, Figure 4.10, Figure 4.11 and Figure 4.12. Tables 4.1, 4.2 and 4.3 show the numerical results of the three metrics for the original and processed images in three typical regions.

In assessing the performance of the different despeckling algorithms described in this chapter, there are two main considerations: image quality and execution time. There is little point in implementing a fast algorithm that fails to extract useful information from the image data. On the other hand, it is inadvisable to implement a meticulous reconstruction if the algorithm is so slow that the application becomes impractical.
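The three metrics are straightforward to compute. The sketch below (function names and the synthetic 4-look clutter and point-target values are our own, for illustration) reproduces the definitions used above:

```python
import numpy as np

def s_over_m(region):
    """Standard-deviation-to-mean ratio of a homogeneous region (in power)."""
    return region.std() / region.mean()

def log_std(region):
    """Standard deviation of the clutter data expressed in dB."""
    return np.std(10.0 * np.log10(region))

def target_to_clutter(target, clutter):
    """Difference between target and clutter means, in dB."""
    return 10.0 * np.log10(target.mean()) - 10.0 * np.log10(clutter.mean())

rng = np.random.default_rng(1)
clutter = 10.0 * rng.gamma(4.0, 1.0 / 4.0, size=10_000)  # 4-look speckled clutter
target = np.full(25, 1000.0)                             # bright point target

print(round(s_over_m(clutter), 2))                   # ~0.5 for 4-look data
print(round(target_to_clutter(target, clutter), 1))  # ~20 dB
```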
The compromise between these requirements depends on the specific application. Here, we consider both image quality and execution time and identify a good compromise solution yielding high quality reconstructions with reasonable execution times. It should be kept in mind that execution speeds will increase as computing technology advances; however, the effect of an unsuitable initial choice of algorithm cannot be subsequently rectified.

1. Image Quality: Comparing the original images (Figure 4.5) with the despeckled images (Figures 4.6, 4.7 and 4.8), it is obvious that all four methods significantly reduce the speckle noise and give a visually satisfactory reconstruction. This shows that despeckling methods based on the discrete wavelet transform are valid and acceptable. Taking a closer look at Figure 4.6, we find that in flat regions simulated annealing optimization reduces speckle the most, soft thresholding is second, followed by the local maxima and Bayesian methods, though the last two remove most of the speckle noise as well. However, simulated annealing and soft thresholding lose some ability to preserve edge and target information, while the Bayesian and wavelet extrema methods not only retain this important information but even enhance it compared to the original. This is quite useful in specific applications such as target recognition. The numerical comparison of the standard-deviation-to-mean ratio and log standard deviation for the four methods in Tables 4.1, 4.2 and 4.3 further verifies our analysis based on the visual results. In particular, the target-to-clutter ratio data suggest that simulated annealing is not appropriate for reconstructing small features: for the scene in Figure 4.6, the result in Table 4.3 from simulated annealing is 26.5 dB compared to 41.7 dB for the original, which is much lower than that from the three wavelet methods.
On the other hand, the wavelet extrema and Bayesian methods can even increase the target-to-clutter ratio compared to the original, and have an advantage over soft thresholding on this point.

2. Execution Time: Soft thresholding is the fastest of the four methods. For the 256 x 256 image shown in Figure 4.6 on a Sun Sparc 10 system, its execution time is 1 minute; the wavelet extrema method takes 1.5 minutes and the Bayesian method 5 minutes, much slower than the first two. The execution time of simulated annealing is proportional to the number of iterations, at about 50 seconds per iteration. Since the result from one iteration is unacceptable and it typically requires about 50 to 100 iterations to achieve an acceptable result, it is very slow.

3. The annoying side effects of simulated annealing optimization and soft thresholding are more obvious in spaceborne images than in airborne images. In Figure 4.9 and Figure 4.12, the small features in the city of Vancouver are lost, but the wavelet extrema (Figure 4.11) and Bayesian (Figure 4.10) methods keep some city detail.

Thus, we can draw some conclusions:

• If image quality, particularly small feature information, is most important, use the wavelet extrema and Bayesian methods. If most of the original image information is made up of flat regions and big objects, simulated annealing and soft thresholding are suitable choices.

• If speed is the overriding consideration, adopt soft thresholding or wavelet extrema.

• If both image quality and speed are considered, wavelet extrema is the best choice in our experiments.
Figure 4.5: Airborne radar image samples used in the test: (a) field, (b) village, (c) lake.
Figure 4.6: Speckle reduction on X-band image with field texture: (a) soft thresholding, (b) Bayesian, (c) local maxima, (d) simulated annealing.
Figure 4.7: Speckle reduction on X-band image with village texture: (a) soft thresholding, (b) Bayesian, (c) local maxima, (d) simulated annealing.
Figure 4.8: Speckle reduction on X-band image with lake texture: (a) soft thresholding, (b) Bayesian, (c) local maxima, (d) simulated annealing.
Figure 4.9: Speckle reduction by soft thresholding.
Figure 4.10: Speckle reduction by Bayesian wavelet.
Figure 4.11: Speckle reduction by wavelet extrema.
Figure 4.12: Speckle reduction by simulated annealing optimization.

Chapter 5

SAR Image Compression

Modern spaceborne SAR systems have on-board hardware consisting basically of a transmitting and receiving unit and an analog-to-digital converter (ADC), followed by a real-time data downlink and/or a storage facility. However, while the volume of data collected is increasing rapidly, the ability to transmit it to the ground, or to store it, is not increasing as fast. This is because the data rate of channels is limited by the signal-to-noise ratio and by the bandwidth allocated, and there is more and more competition for bandwidth allocations. Also, while storage densities on archiving media are improving with technological developments, our ability to generate new data is increasing even faster. Thus, there is a strong interest in developing data encoding and decoding algorithms which can obtain higher compression ratios while keeping the encoding/decoding error at an acceptable level (the compression ratio is the volume of data before encoding divided by the volume after encoding). However, the encoding of SAR data presents some special problems compared with other data sets, such as optical data.
In the following sections, a SAR image compression algorithm developed in this thesis is introduced, which combines texture analysis, speckle reduction, homogeneous decomposition, and a modified zero-tree coding scheme.

5.1 Texture Analysis

Using the DWT, every coefficient at a given scale can be related to a set of coefficients at the next finer scale of similar orientation. As described in section 3.5.1, the coefficient at the coarse scale is called the parent, and all coefficients corresponding to the same spatial location at the next finer scale of similar orientation are called the children. For a given parent, the set of all coefficients at all finer scales of similar orientation corresponding to the same location are called descendants. The traditional pyramid-type wavelet transform recursively decomposes subsignals in the low frequency channels. However, since the most significant information of a texture often appears in the middle frequency channels, further decomposition of the low frequency region alone, as in the conventional wavelet transform, may not help for a SAR image that contains rich texture information. Thus, an appropriate way to perform the wavelet transform for textures is to detect the significant frequency channels and then to decompose them further. Here, this problem is approached by analyzing SAR images with the tree-structured wavelet transform used in [2]. The key point is that the decomposition is no longer simply applied to the low frequency subsignals recursively. Instead, the transform can be applied to the output of any of the filters h_{LL}, h_{LH}, h_{HL}, or h_{HH}, and can be described by the following steps:

1. Decompose a given textured image using a 2-D two-scale wavelet transform into 4 subimages, which can be viewed as the parent and children nodes in a tree.

2. Calculate the energy of each decomposed image x(m,n) (the children nodes):

e = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} |x(m,n)|^2    (5.1)

3.
If the energy of a subimage is significantly smaller than the others, we stop the decomposition in this region since it contains less information. This step can be achieved by comparing the energy with the largest energy value in the same scale: if e < C e_max, stop decomposing this region, where C is a constant less than 1.

4. If the energy of a subimage is significantly larger, we apply the above decomposition procedure to the subimage once again.

In practice, the size of the smallest subimages should also be used as a stopping criterion for further decomposition. If the decomposed channel has a very narrow size, the location and the energy value of the feature may vary widely from sample to sample, so the feature may not be robust. In our experience, decomposition should stop once a subimage would become smaller than 16 x 16. It is also worth pointing out that the above tree-structured wavelet transform provides a non-redundant representation: it takes no more space to store the wavelet coefficients than to store the original image.

This method is analogous to computing the complete wavelet packet transform to a certain maximum level and then pruning the branches from top to bottom using the above energy threshold. For each tree-structured wavelet transform, we calculate the energy at its leaves and obtain an energy function defined on the spatial/frequency domain, called the energy map, which will be used for texture classification. Two textured images of size 256 x 256 and their 3-level tree-structured wavelet decompositions are shown in Figure 5.1 and Figure 5.2. It is clear that different texture scenes have different energy distributions, and thus different wavelet decompositions.
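The energy-guided decomposition in steps 1-4 above can be sketched as follows. This is a minimal illustration rather than the thesis implementation: a one-level Haar transform stands in for the DB4 filters used in the thesis, and the constant `C`, the `min_size` rule and the dictionary output are illustrative choices.

```python
import numpy as np

def dwt2_haar(x):
    """One level of a 2-D Haar wavelet transform (even-sized input assumed)."""
    lo = (x[0::2] + x[1::2]) / 2.0          # row-wise low pass
    hi = (x[0::2] - x[1::2]) / 2.0          # row-wise high pass
    return {'LL': (lo[:, 0::2] + lo[:, 1::2]) / 2.0,
            'LH': (lo[:, 0::2] - lo[:, 1::2]) / 2.0,
            'HL': (hi[:, 0::2] + hi[:, 1::2]) / 2.0,
            'HH': (hi[:, 0::2] - hi[:, 1::2]) / 2.0}

def energy(x):
    """Equation (5.1): mean squared magnitude of a subimage."""
    return float(np.mean(np.abs(x) ** 2))

def tree_decompose(image, C=0.1, min_size=16, path=''):
    """Decompose a subband further only while its energy is significant."""
    if min(image.shape) < 2 * min_size:      # smallest-subimage stopping rule
        return {path or 'root': energy(image)}
    bands = dwt2_haar(image)
    e_max = max(energy(b) for b in bands.values())
    leaves = {}
    for name, band in bands.items():
        if energy(band) < C * e_max:         # low energy: stop (little information)
            leaves[path + name] = energy(band)
        else:                                # significant channel: decompose again
            leaves.update(tree_decompose(band, C, min_size, path + name))
    return leaves
```

The returned leaf energies form exactly the "energy map" described below, defined on the spatial/frequency domain and usable for texture classification.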
5.2 Homogeneity Map

An image map which specifies the degree of homogeneity of image texture and features can help achieve higher compression gain, because fewer bits can be allocated to homogeneous regions while more bits are allocated to regions containing more detail and sharp features. Here, we apply a very simple segmentation scheme based on quadtree decomposition of the image at the lowest scale. Quadtree decomposition is an analysis technique that involves subdividing an image into blocks that are more homogeneous than the whole image itself. It starts by decomposing the whole image into four equal-sized blocks, and then testing each block to see if it meets some criterion of homogeneity (e.g. whether all the pixels in the block are within a specific dynamic range). If a block meets the homogeneity criterion, it is not divided any further. If it does not meet the criterion, it is subdivided again into four blocks, and the test criterion is applied to those blocks. This process is repeated iteratively until each block meets the criterion.

Figure 5.1: Tree-structured analysis of airborne radar image (scene 1)
Figure 5.2: Tree-structured analysis of airborne radar image (scene 2)

The homogeneous decomposition is applicable to SAR images because they typically contain clutter features which are clearly separated, such as ocean, forest and field. After quadtree decomposition, two sets are obtained: a homogeneous set and a target set. The homogeneous set consists of the relatively homogeneous regions; each homogeneous region is represented by the coordinates of the pixel at its upper-left corner and the size of the region. The target set consists of the single-component regions, represented by their coordinates.
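A sketch of the recursive quadtree split is shown below. It is illustrative only: the dynamic-range test and the (row, column, size) representation follow the text, while the treatment of isolated pixels as target-set entries is an assumption about how the single-component regions arise.

```python
import numpy as np

def quadtree(block, t_h, origin=(0, 0)):
    """Split a square block while its dynamic range exceeds t_h.
    Returns (homogeneous, targets): homogeneous regions as (row, col, size)
    tuples, single-component pixels as (row, col) tuples."""
    homogeneous, targets = [], []
    r0, c0 = origin
    n = block.shape[0]
    if n == 1:                                   # isolated pixel -> target set
        targets.append((r0, c0))
    elif block.max() - block.min() <= t_h:       # homogeneity criterion met
        homogeneous.append((r0, c0, n))
    else:                                        # split into four quadrants
        h = n // 2
        for dr in (0, h):
            for dc in (0, h):
                sub_h, sub_t = quadtree(block[dr:dr + h, dc:dc + h], t_h,
                                        (r0 + dr, c0 + dc))
                homogeneous += sub_h
                targets += sub_t
    return homogeneous, targets
```

Every pixel of the input ends up in exactly one of the two sets, so the decomposition is a partition of the image.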
The test criterion we choose is to split a block if the maximum of the block elements minus the minimum of the block elements is greater than the threshold t_h:

t_h = \gamma \sigma_d    (5.2)

where \sigma_d is the standard deviation of the wavelet coefficients at the highest frequency band in the diagonal direction, and \gamma is a selected constant.

5.3 Speckle Reduction

All of the algorithms discussed in Chapter 4 can be applied to reduce speckle at this phase. Two candidates are used: soft thresholding, because of its relative computational simplicity, and the local maxima approach, because of its relatively good balance between perceptual results and processing time.

• Soft thresholding: the value of the threshold t_l used in the soft-thresholding scheme is given by the formula:

t_l = F_l \sigma_l \sqrt{2 \log(n_l)}    (5.3)

where n_l is the number of pixels in each frequency band, \sigma_l is the same as \sigma_d, and F_l is the feature factor at the corresponding frequency channel.

• Local maxima: the criterion for the wavelet extrema is stricter in texture-sparse regions than in texture-rich regions.

The other important point at this phase is that we can explicitly control the degree of speckle reduction. The extent of reduction depends on the feature factor: less reduction is applied in texture-rich channels, more in texture-sparse channels. Soft thresholding is not applied to the low frequency channel, but only to some high frequency channels with small feature factors.

5.4 Modified SPIHT Coding Scheme

The embedded zerotree wavelet coding (EZW) [14] is a very effective and computationally simple technique for image compression. The "set partitioning in hierarchical trees" method (SPIHT) [13] improves the performance of EZW based on three concepts:

1. partial ordering of the transformed image elements by magnitude, with transmission of order by a subset partitioning algorithm that is duplicated at the decoder,

2.
ordered bit plane transmission of refinement bits, and

3. exploitation of the self-similarity of the image wavelet transform across different scales.

Here, we modify SPIHT by soft thresholding, the use of a homogeneity map, and feature factors, so that it is more appropriate for SAR images. Normally, most of an image's energy is concentrated in the low frequency components. Consequently, the variance decreases as we move from the highest to the lowest levels of the subband pyramid. Furthermore, it has been observed that there is a spatial self-similarity between subbands, and the coefficients are expected to be better magnitude-ordered if we move downward in the pyramid following the same spatial orientation. For instance, large low-activity areas are expected to be identified in the highest levels of the pyramid, and they are replicated in the lower levels at the same spatial locations. A tree structure called the spatial orientation tree, which differs from the zerotree described in section 3.5.1 but is used in SPIHT, naturally defines the spatial relationship on the hierarchical pyramid. Figure 5.4 shows how our spatial orientation tree is defined in a pyramid constructed with recursive four-subband splitting. Each node of the tree corresponds to a pixel and is identified by the pixel coordinate. Its direct descendants (offspring) correspond to the pixels of the same spatial orientation in the next finer level of the pyramid. The tree is defined in such a way that each node has either no offspring (the leaves) or four offspring, which always form a group of 2 x 2 adjacent pixels. In Figure 5.4, the arrows are oriented from the parent node to its four offspring. The pixels in the highest level of the pyramid are the tree roots and are also grouped in 2 x 2 adjacent pixels. As introduced in section 3.5.1, a wavelet coefficient x is said to be insignificant with respect to a given threshold T if |x| < T.
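The parent-offspring grouping and the bit-plane significance test just described can be sketched as follows. This is a hedged illustration with names chosen here, not the thesis implementation: `offspring` maps a pixel to its 2 x 2 group of same-orientation children one level finer, and `significant` implements the set test of equation (5.4) below.

```python
import numpy as np

def offspring(i, j):
    """The four same-orientation children of pixel (i, j) one level down."""
    return [(2 * i, 2 * j), (2 * i, 2 * j + 1),
            (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]

def significant(coeffs, coords, n):
    """Return 1 if any |c_ij| over the coordinate set reaches 2**n, else 0."""
    return int(max(abs(coeffs[i, j]) for i, j in coords) >= 2 ** n)
```

During progressive transmission, `significant(coeffs, coords, n)` is evaluated once per bit plane n, from the most significant plane downward.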
The zerotree structure is based on the hypothesis that if a wavelet coefficient at a coarse scale is insignificant with respect to a given threshold T, then all wavelet coefficients of the same orientation in the same spatial location at finer scales are likely to be insignificant with respect to T. In the progressive transmission mode, the nth bit of the wavelet coefficients is transmitted if 2^n \le |c_{i,j}| < 2^{n+1}. To make the relationship between magnitude comparisons and message bits clear, we use the following function to indicate the significance of a set of coordinates \tau:

S_n(\tau) = 1 if \max_{(i,j) \in \tau} |c_{i,j}| \ge 2^n, and 0 otherwise.    (5.4)

Our method differs from the conventional EZW and SPIHT methods in the following ways:

1. Different parents may have the same children, because the tree-structured wavelet transform allows the size of the children to be smaller than that of the parent, unlike the pyramid structure used in the SPIHT method. For example, in Figure 5.3 the coefficient in the scale branch (3,2) has descendants in (3,8), (3,9), (3,10), (3,11), (2,8), (2,9), (2,10), (2,11).

2. If descendants have more than one parent, they are regarded as significant once they satisfy the threshold condition T_i of any one of their parents; they are insignificant only when they satisfy the conditions of the thresholds {T_1, T_2, ..., T_k} of all of their parents.

3. The condition (5.4) for S_n is changed to \max_{(i,j) \in \tau} |c_{i,j}| \ge F_l 2^n, where F_l is a texture factor. Thus, texture information has higher priority in bit allocation.

4. The two coding sets, the homogeneous set and the target set, are encoded separately.

5.5 A New Encoding Scheme

Because the homogeneity map of the lowest scale has been created by the quadtree decomposition, we can reduce the number of bits by encoding the homogeneity map compactly while allocating more bits to areas of greater detail.
To further improve the coding efficiency, soft thresholding is applied before coding at the other scales. The residual from soft thresholding and coding is kept, losslessly compressed using arithmetic coding, and transmitted to the decoder on the user's demand. The whole coding scheme is illustrated in Figure 5.5, where the separate steps of speckle reduction and homogeneity analysis are noted. The features of the new method are:

• Control of speckle reduction: speckle reduction is based on the requirements of the user, and can be skipped if not needed.

• Speckle reduction: soft thresholding is applied to all wavelet coefficients except those at the lowest scale. After quantization, the length of the significant bits of the wavelet coefficients is shortened. The residual part is kept and transmitted on the user's demand.

• Encoding of the homogeneity map: for the components in the homogeneous list, we send the decoder the average value, the dynamic range and the size of the homogeneous region, and then use this average value as the threshold to decide the significant coefficients at the next finer scale. We then compute the difference between the value of every component in the homogeneous region and the average value, quantize these differences, and transmit them by bit plane.

• Tree-structured coding: the wavelet coefficients in the target list are quantized and transmitted using the modified SPIHT algorithm. At every scale branch, three candidate encoding schemes are tested: raw data (no encoding), arithmetic coding, and run-length coding. The candidate with the best performance is chosen as the encoding scheme for that scale branch.

• Residual coding: efficient compression of the residuals is essential to the efficiency of the algorithm as a whole, because this part of the coefficients possesses a rapidly decaying exponential distribution with a large number of zero coefficients.
This skewed distribution makes arithmetic coding the best choice for lossless compression.

5.6 Experimental Results

In order to assess the effectiveness of our compression algorithm on SAR imagery (both spaceborne and airborne), three schemes are applied: modified SPIHT (MSPIHT), conventional SPIHT, and JPEG. The DB4 wavelet with a 3-level decomposition is applied in each of the wavelet-based methods. In addition, compression with and without speckle reduction is examined in order to assess the impact of speckle on the efficiency of SAR image compression.

Some measurements are used to evaluate the performance of the compression schemes. Because the contribution of speckle noise reduction to SAR image compression is being investigated, two classes of measurements are used:

1. Peak Signal to Quantization Noise Ratio (PSQNR), defined as the ratio, in dB, of the peak signal to the MSE, and Average Signal to Quantization Noise Ratio (ASQNR), defined as the ratio, in dB, of the average signal energy to the MSE. The MSE measure gives the total encoding error between the original and reconstructed data sets in an absolute sense. PSQNR and ASQNR give relative measures useful for comparing encoding results between data sets with differing peak and mean values, respectively [6].

2. Target-to-clutter ratio (t/c): the measurement used in Section 4.5 is used to compare the visual quality of the compressed images.

The results of these three methods are shown for the airborne image in Figures 5.6, 5.7, 5.8 and 5.9, and for the spaceborne image in Figures 5.10, 5.11, 5.12 and 5.13.
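The two quantization-noise measures defined above can be written out as follows; this is a sketch under the stated definitions (peak or average signal energy over MSE, in dB), with function and argument names chosen here for illustration.

```python
import numpy as np

def psqnr(original, reconstructed):
    """Peak signal to quantization noise ratio, in dB."""
    mse = np.mean((original - reconstructed) ** 2)
    return 10.0 * np.log10(original.max() ** 2 / mse)

def asqnr(original, reconstructed):
    """Average signal energy to quantization noise ratio, in dB."""
    mse = np.mean((original - reconstructed) ** 2)
    return 10.0 * np.log10(np.mean(original ** 2) / mse)
```

Since the peak energy is at least the average energy, PSQNR is always at least as large as ASQNR for the same image pair, which is consistent with Tables 5.2 and 5.3.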
Method (all at 0.2 bpp)        | t/c (dB)
Original image                 | 41.7
De-noised image (local maxima) | 42.0
MSPIHT without denoising       | 31.8
MSPIHT with denoising          | 34.1
Conventional SPIHT             | 32.2
JPEG                           | unacceptable

Table 5.1: Comparison of target recognition ability

Bit rate | MSPIHT without denoising | MSPIHT with denoising | Conventional SPIHT | JPEG
2 bpp    | 40.2                     | 42.3                  | 39.8               | 48.7
1 bpp    | 39.3                     | 41.5                  | 38.5               | 45.4
0.5 bpp  | 36.5                     | 38.2                  | 35.4               | 32.3
0.2 bpp  | 33.8                     | 35.4                  | 32.1               | 23.1

Table 5.2: PSQNR (dB) for reconstructed SAR images

Bit rate | MSPIHT without denoising | MSPIHT with denoising | Conventional SPIHT | JPEG
2 bpp    | 38.8                     | 40.7                  | 37.8               | 46.4
1 bpp    | 37.3                     | 39.5                  | 36.5               | 40.3
0.5 bpp  | 34.5                     | 36.2                  | 33.4               | 29.1
0.2 bpp  | 31.8                     | 33.4                  | 30.1               | 20.3

Table 5.3: ASQNR (dB) for reconstructed SAR images

From these images, we can see that the wavelet methods (both MSPIHT and conventional SPIHT) outperform JPEG, particularly at high compression ratios. This is verified by both the visual results and the numerical values (PSQNR and ASQNR). The results with speckle reduction are better than those without at the same compression ratio, particularly when the compression ratio is high: more texture information is kept by the algorithm when speckle reduction is used. It is clear that speckle reduction itself necessarily results in some compression. Comparing both the visual and numerical results of MSPIHT with and without speckle reduction, we can easily show that the compressibility of the despeckled image is greatly improved. For example, even at a very high compression ratio (e.g. 0.5 bpp), the despeckled MSPIHT keeps most of the important information of the original image, while MSPIHT without despeckling and conventional SPIHT cannot. The compression results for the spaceborne SAR image of Vancouver obtained by the different methods are shown in Figures 5.10, 5.11, 5.12 and 5.13.
Figure 5.3: Tree-structured analysis of a single-look airborne radar image.
Figure 5.4: Examples of parent-offspring dependencies in the spatial orientation tree.
Figure 5.5: Block diagram of the modified SPIHT coding scheme (tree-structured wavelet analysis with texture factors, speckle reduction, quadtree decomposition into homogeneous and target sets, the modified SPIHT encoder, and residual data for lossless encoding).
Figure 5.6: Modified SPIHT without speckle reduction: (a) 2.0 bpp, (b) 1.0 bpp, (c) 0.5 bpp, (d) 0.2 bpp.
Figure 5.7: Modified SPIHT with speckle reduction: (a) 2.0 bpp, (b) 1.0 bpp, (c) 0.5 bpp, (d) 0.2 bpp.
Figure 5.8: Conventional SPIHT encoding results: (a) 2.0 bpp, (b) 1.0 bpp, (c) 0.5 bpp, (d) 0.2 bpp.
Figure 5.9: JPEG encoding results: (a) 2.0 bpp, (b) 1.0 bpp, (c) 0.5 bpp, (d) 0.2 bpp.
Figure 5.10: RADARSAT image at Vancouver at 0.5 bpp by modified SPIHT with speckle reduction.
Figure 5.11: RADARSAT image at Vancouver at 0.5 bpp by modified SPIHT without speckle reduction.
Figure 5.12: RADARSAT image at Vancouver at 0.5 bpp by conventional SPIHT.
Figure 5.13: RADARSAT image at Vancouver at 0.5 bpp by JPEG.

Chapter 6

Conclusions

The objective of this thesis has been the application of the DWT to SAR image processing. The following is a summary of the contributions of this thesis:

• The discrete wavelet transform has been shown to be efficient for SAR image processing. The DWT allows us to analyze images in different frequency bands and spatial orientations based on the image features. The advantages of DWT analysis apply even more to SAR images than to other imagery because of the rich texture information contained in SAR imagery.
The benefits of the DWT have been demonstrated throughout this thesis, particularly for speckle reduction and image compression.

• Speckle noise is an annoying characteristic which hinders the application of conventional image processing techniques to SAR images. Speckle reduction in the wavelet domain is an effective strategy, widely accepted in the research field of SAR image processing. We have developed two new algorithms: a Bayesian method based on a statistical model [20] and a wavelet extrema method based on the local properties of wavelet coefficients, and applied them to both airborne and spaceborne SAR images. Comparisons of their results with some existing well-known methods (e.g., soft thresholding and simulated annealing optimization) show their advantages both visually and numerically.

• Simultaneous speckle reduction and data compression can significantly improve the compressibility of SAR images. The modified SPIHT algorithm has been applied to SAR image coding [21, 22]. The effectiveness of this strategy has been proven by comparison to the method without speckle reduction and to classic efficient wavelet compression algorithms (e.g., SPIHT).

• The relationships of wavelet coefficients across different frequency bands and spatial orientations can provide more efficient ways to describe the properties of SAR images and are worthy of future investigation.

Bibliography

[1] H. H. Arsenault and G. April, "Properties of speckle integrated with a finite aperture and logarithmically transformed", Journal of the Optical Society of America, vol. 66, p. 1160, 1976.

[2] T. Chang and C. Kuo, "Texture analysis and classification with tree-structured wavelet transform", IEEE Trans. Image Processing, vol. 2, no. 4, pp. 429-441, October 1993.

[3] J. C. Curlander and R. N. McDonough, Synthetic Aperture Radar Systems and Signal Processing. Wiley Series in Remote Sensing, John Wiley and Sons, Inc., 1991.

[4] I. Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, PA, 1992.

[5] D.
Donoho, "De-noising by soft-thresholding", IEEE Trans. Inform. Theory, vol. 41, pp. 613-627, May 1995.

[6] M. Dutkiewicz and I. Cumming, "Evaluation of the effects of encoding on SAR data", Photogrammetric Engineering and Remote Sensing, vol. 60, pp. 895-904, July 1994.

[7] J. W. Goodman, "Some fundamental properties of speckle", Journal of the Optical Society of America, vol. 66, pp. 1145-1150, November 1976.

[8] W. Hwang and S. Mallat, "Singularity detection and processing with wavelets", IEEE Trans. Information Theory, vol. 38, pp. 617-643, May 1992.

[9] S. G. Mallat, "A theory for multi-resolution signal decomposition", IEEE Trans. Pattern Anal. Machine Intell., vol. 11, pp. 673-697, November 1989.

[10] L. Novak, M. Burl and W. Irving, "Optimal polarimetric processing for enhanced target detection", IEEE Trans. on AES, vol. 29, pp. 234-244, January 1993.

[11] C. J. Oliver, Understanding Synthetic Aperture Radar Images. Artech House Inc., 1998.

[12] W. Press et al., Numerical Recipes in Fortran, Cambridge University Press, 1992.

[13] A. Said and W. Pearlman, "A new, fast, and efficient image codec based on set partitioning in hierarchical trees", IEEE Trans. Circuits and Systems for Video Technology, vol. 6, pp. 243-250, June 1996.

[14] J. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients", IEEE Trans. Signal Processing, vol. 41, pp. 3445-3462, May 1993.

[15] G. Wallace, "The JPEG still picture compression standard", Commun. ACM, vol. 34, pp. 30-44, April 1991.

[16] D. Wei, H. Guo, J. Odegard, M. Lang, and C. Burrus, "Simultaneous speckle reduction and data compression using best wavelet packet bases with application to SAR based ATD/R", SPIE Conference on Wavelet Applications, vol. 2491, Orlando, FL, April 1995.

[17] S. A. Werness, S. C. Wei, and R. Carpinella, "Experiments with wavelets for compression of SAR data", IEEE Trans. Geoscience and Remote Sensing, vol. 32, pp. 197-201, January 1994.

[18] M. Whitt, F. Ulaby and K.
Sarabandi, "Polarimetric scatterometer systems and measurements", Radar Polarimetry for Geoscience Applications, pp. 191-272, 1990.

[19] C. Yanasse, S. Quegan and R. Martin, "Inferences on spatial and temporal variability of the backscatter from growing crops using AgriSAR data", Int. J. Remote Sensing, vol. 13, pp. 493-507, 1992.

[20] Zhaohui Zeng and Ian Cumming, "Bayesian speckle noise reduction using the discrete wavelet transform", Proceedings of the International Geoscience and Remote Sensing Symposium, IGARSS'98, pp. 7-9, Seattle, July 1998.

[21] Zhaohui Zeng and Ian Cumming, "SAR image compression based on the wavelet transform", Proceedings of the Fourth International Conference on Signal Processing, ICSP'98, pp. 787-791, Beijing, October 1998.

[22] Zhaohui Zeng and Ian Cumming, "Modified SPIHT on SAR image compression", Data Compression Conference, DCC '99, Snowbird, Utah, March 29-31, 1999.