@prefix vivo: <http://vivoweb.org/ontology/core#> .
@prefix edm: <http://www.europeana.eu/schemas/edm/> .
@prefix ns0: .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
vivo:departmentOrSchool "Science, Faculty of"@en, "Computer Science, Department of"@en ;
edm:dataProvider "DSpace"@en ;
ns0:degreeCampus "UBCV"@en ;
dcterms:creator "Lee, Tim Kam"@en ;
dcterms:issued "2010-04-20T23:42:50Z"@en, "1983"@en ;
vivo:relatedDegree "Master of Science - MSc"@en ;
ns0:degreeGrantor "University of British Columbia"@en ;
dcterms:description "The values recorded by the Landsat series of sensors are influenced by local topography and atmosphere, as well as by ground cover. In order to determine the albedo at a point in the satellite image, effects due to topography and atmosphere must be removed. Topographic effects can be predicted by a reflectance map. Atmospheric effects are modeled mathematically in terms of solar irradiance, sky irradiance, path radiance and optical depth. There are six unknown parameters in the model. These are estimated with the help of digital elevation data. Finally, the albedo map is generated."@en ;
edm:aggregatedCHO "https://circle.library.ubc.ca/rest/handle/2429/23963?expand=metadata"@en ;
skos:note "MODELING AND ESTIMATING ATMOSPHERIC EFFECTS IN LANDSAT IMAGERY, by Tim Kam Lee, B.Sc., The University of British Columbia, 1980. A thesis submitted in partial fulfilment of the requirements for the degree of Master of Science in the Faculty of Graduate Studies (Department of Computer Science). We accept this thesis as conforming to the required standard. THE UNIVERSITY OF BRITISH COLUMBIA, April 1983. © Tim Kam Lee, 1983.

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. Department of Computer Science, The University of British Columbia, 1956 Main Mall, Vancouver, Canada V6T 1Y3. Date: 26 April, 1983.

Abstract

The values recorded by the Landsat series of sensors are influenced by local topography and atmosphere, as well as by ground cover. In order to determine the albedo at a point in the satellite image, effects due to topography and atmosphere must be removed. Topographic effects can be predicted by a reflectance map. Atmospheric effects are modeled mathematically in terms of solar irradiance, sky irradiance, path radiance and optical depth. There are six unknown parameters in the model.
These are estimated with the help of digital elevation data. Finally, the albedo map is generated.

Table of Contents: 1. Introduction. 2. Atmospheric Effects (2.1 Problems in Remote Sensing; 2.2 Nature of Atmospheric Effects; 2.3 Some Correction Methods). 3. A Mathematical Model (3.1 The Reflectance Map; 3.2 Components of Atmospheric Effects; 3.3 Image Formation Equation; 3.4 Landsat Imagery; 3.5 Auxiliary Data). 4. An Experiment (4.1 Introduction; 4.2 Path Radiance; 4.3 Sky Irradiance and Optical Depth; 4.4 Albedo Map). 5. Conclusion. Bibliography. Appendix I. Principal Components Transformation. Appendix II. Calibration of Landsat Data.

List of Tables: Table I. Mean of Intensity Values Across the Shadow Boundaries. Table II. Values for Ltop. Table III. Values for p0 and Hp. Table IV. Values for s0, Hs, τ0 and Hτ (method 1). Table V. Values for s0, Hs, τ0 and Hτ (method 2). Table VI. Values for s0, Hs, τ0 and Hτ (method 3). Table VII. Parameters for Albedo Maps. Table VIII. Parameters for Path Radiance. Table IX. Rmax and Rmin.

List of Figures: Figure 1. Landsat MSS Images of St. Mary Lake. Figure 2. Synthetic Images of St. Mary Lake. Figure 3. Shadow Regions for St. Mary Lake. Figure 4. Perspective Projection and Orthographic Projection. Figure 5. Definition of i, e and g. Figure 6. Components of Atmospheric Effects. Figure 7. Tu and Td. Figure 8. The Emergent Angle. Figure 9. Definition of the Position of the Sun. Figure 10. Digital Elevation Map. Figure 11. Brightness vs. Elevation. Figure 12. Minimum Intensities. Figure 13. Linearized Equation for Path Radiance. Figure 14. Path Radiance. Figure 15. Albedo Maps. Figure 16. Principal Components of St. Mary Lake. Figure 17. Landsat MSS Images of St. Mary Lake. Figure 18. Reconstruction of Band 4. Figure 19. PC2 of St. Mary Lake - September 17, 1979.

Acknowledgement

I am grateful to my supervisor, Dr.
Robert Woodham, for his guidance, support and encouragement throughout the course of my research. I would like to thank Dr. Alan Mackworth, Jim Little and Peter Chung for their suggestions and comments, which led to the improvement of this thesis. I am thankful to William Kwa for printing the pictures in the thesis. Thanks are extended to Richard Lee, Anda Li, Voon Siaw, Wendy Moore, and many other friends for their help in various ways.

CHAPTER 1. Introduction

With earth-observing satellites, it is possible to inventory natural resources in an economical way. For example, a Landsat image covers a much larger area, approximately 185 km by 185 km, than an aerial photograph. But, unlike aerial photography, which can be carefully planned for the best time of day, season and weather conditions, satellite-based remote sensing has less control over imaging conditions. An optimum image is rarely obtained. Landsat acquires images at about 9:30 a.m. local solar time. In winter, the sun may be low in the sky. Many areas, particularly in mountainous regions, will lie in shadow. Extracting whatever information is provided by the image is made more difficult. Identifying surface material is a prerequisite to natural resources management. This prerequisite can be achieved by classifying the reflectance factor (albedo) of each picture element (pixel) of an image. Pixels with similar albedo are grouped together. In order to compute precise albedo values, atmospheric effects on the image must be understood and removed. They depend on local weather and topography; hence, they may vary significantly between pixels, especially in rugged terrain. In this thesis, a mathematical model is presented to account for how direct solar radiance and diffuse sky radiance reach the ground and how the energy is reflected into the sensor. Atmospheric effects are divided into components of solar irradiance, sky irradiance, path radiance and optical depth.
Solar irradiance is modeled with a reflectance map, while the latter three terms are modeled with simple exponential functions of elevation. The parameters of the model are then estimated. Finally, an albedo for each pixel is computed. This thesis is divided into five chapters. Chapter 2 takes a closer look at the problem of atmospheric effects and reviews some early correction methods. Chapter 3 describes the image formation process and presents the mathematical model. In chapter 4, the unknown parameters of the model are estimated. Finally, chapter 5 presents a summary and conclusions.

CHAPTER 2. Atmospheric Effects

2.1. Problems in Remote Sensing

Work on computer-based image understanding requires us to model the image formation process explicitly. Woodham (1980b) pointed out that there are four major factors to consider: surface irradiance, surface orientation, properties of the surface material and properties of the medium through which energy is transmitted. In remote sensing, with Landsat imagery, these four factors are determined by topography, ground cover, direct solar illumination and atmospheric effects. Topography is the shape of the terrain. Ground cover relates to reflectance properties of the surface material and its microstructure. Atmospheric effects arise due to the interaction of direct solar illumination with constituents of the atmosphere. One of the major applications of remote sensing is to identify the ground material from a satellite image. This can be achieved by first computing a reflectance factor (albedo) at every picture element (pixel) and then by classifying the albedos so determined. Precise albedo values at each pixel are needed. Unfortunately, the intensity values recorded in an image are not direct measurements of albedo because they are influenced by local topography and atmosphere, in addition to ground cover. All Landsat images are acquired around 9:30 a.m. local solar time, when the sun is relatively low in the sky.
Topographic effects can be the dominant factor, particularly in areas of rugged terrain. Absorption and scattering in the atmosphere also perturb the measurements recorded at the sensor. Therefore, a prerequisite to determining precise albedo is to understand the effects produced by topography and atmosphere. Topographic effects can be predicted using the reflectance map, introduced by Horn (1977). A reflectance map incorporates a fixed illumination, surface material and viewing geometry into a single model. A reflectance map determines a mathematical expression for image irradiance in terms of surface orientation. Horn used this model to determine the shape of imaged objects. One can also generate a synthetic image that predicts image irradiance from object orientation. For Landsat imagery, synthetic images can be produced if the terrain elevation is known. Once the real satellite image is registered with the synthetic image (Horn and Bachman (1978); Little (1980)), topographic effects can be estimated. In figures 1 and 2, we compare Landsat images for St. Mary Lake, B.C. and the corresponding synthetic images. After topographic effects are estimated, it remains to correct for atmospheric effects. In this thesis, we try to understand and to estimate the atmospheric problems using Landsat images.

2.2. Nature of Atmospheric Effects

Looking at the synthetic images, especially the winter scene, we can observe that shadow regions are shown in black. (In figure 3, the shadow regions are explicitly shown.)

[Figure 1. Landsat MSS Images of St. Mary Lake (Band 4, January 8, 1979, 17:58 GMT). Figure 2. Synthetic Images of St. Mary Lake (January 8, 1979). Figure 3. Shadow Regions for St. Mary Lake (September 17, 1979); a binary image in which the shadow areas are shown in black.]

In the synthetic images only one light source, the sun, is considered. But, diffuse sky light is also an important source of illumination.
It affects the whole scene, particularly the shadow regions. For instance, in the January image, the entire lake can be seen even though half of it lies in shadow. Similarly, one can observe the harvested areas east of the lake. In other words, certain ground cover details are still visible in shadow regions. Woodham (1980b) did an experiment: working with the Landsat images of St. Mary Lake (January 8, 1979), he compared the intensity values across the shadow boundaries for a flat area of conifer forest and for the lake, which is flat and covered by snow. (See Table I.) He made two observations for the band 4 and band 5 images. First, the snow in shadow, on average, is brighter than the conifer forest in direct sunlight. Second, the difference between the forest in shadow and in sunlit areas is small, about 0.024 mW/cm2 for band 4 and 0.041 mW/cm2 for band 5.

Table I. The mean values of Landsat intensities across the shadow boundaries for a flat area of conifer forest and the lake (Woodham (1980b)). The unit is mW/cm2.
  Ground cover        Band 4  Band 5  Band 6  Band 7
  snow (in shadow)    0.287   0.181   0.108   0.063
  snow (in sun)       0.849   0.892   0.875   0.803
  forest (in shadow)  0.155   0.081   0.056   0.024
  forest (in sun)     0.178   0.122   0.194   0.229

Another effect caused by the atmosphere is that its particles absorb and scatter radiant energy. Some of the reflected energy from the ground target will be absorbed or scattered and cannot be received by the sensor. Some radiant energy is recorded because it is scattered into the sensor, but not from the target. The former property is called optical depth, and the latter is called path radiance. These scattering and absorbing properties depend on the content of the air particles and local weather conditions, so they vary from day to day and from location to location. They also depend on terrain elevation, which indicates the depth of the air column between the surface target and the sensor.
Thus, the atmosphere will affect the intensity values recorded by Landsat. But, with the diffuse sky light, areas lying in shadow can be observed and studied.

2.3. Some Correction Methods

In remote sensing, the problem of atmospheric effects has been treated in different ways. In some applications, the presence of the atmosphere has been ignored: \"raw\" sensor values are used directly. For example, Taylor and Langham (1975), Donker and Mulder (1976) and Lodwick (1981) attempted to use the Principal Components Transformation to analyze the four Landsat bands. (See Appendix I.) Taylor used the transformed data to compose pseudo-color images in order to study water quality. Donker and Mulder claimed that the first principal component relates to the overall brightness; the second principal component separates the vegetative and the non-vegetative cover, and the last two components contain mainly noise. Lodwick further associated the first principal component with the slope and aspect of the terrain. Atmospheric effects such as path radiance and optical depth vary considerably with changing weather state and elevation. Henderson (1975) discovered that path radiance can change by up to 37% for different weather conditions. So, if atmospheric effects are not removed, there will be a severe noise problem in image classification systems. Some simple methods have been used to address the problem. One of them is the band ratioing method (Crane (1971)), which tries to eliminate the scene irradiance, mainly the solar light, by taking the ratio of two adjacent spectral bands. Another common method is to subtract the smallest intensity value of the image, a rough approximation of path radiance, from the entire image. These methods are inadequate, particularly in mountainous regions where atmospheric effects are complicated by the surface terrain. Scene irradiance and path radiance may vary from one target point to another.
In 1972, Turner and Spencer presented a radiative transfer model for computing the components of atmospheric effects. This model has been used to correct Apollo photographic imagery. But there are two disadvantages: the model is hard to compute, and it requires a lot of ground truth information, such as the average background albedo of the local terrain and the horizontal visual range. Sjoberg (1981) proposed a model which is based on a similar radiative transfer model. He chose the parameters for his model by trial and error. In this thesis, we apply Sjoberg's model. We, however, try to find a systematic way to estimate the parameters.

CHAPTER 3. A Mathematical Model

3.1. The Reflectance Map

Imaging geometry is described by a perspective projection. Notation is simplified if the viewing direction is aligned with the z-axis, as in figure 4(a). Furthermore, if the viewed object is relatively small compared to the viewing distance, or if the focal length of the sensing device approaches infinity, then the geometry can be approximated by orthographic projection. (See figure 4(b).) With an orthographic projection, the mapping between object space and image space becomes trivial. A point (x,y,z) in the object coordinates can be considered to map into point (u,v) in the image plane, where x = u and y = v. Therefore, the coordinates (x,y) and (u,v) can be used interchangeably. If the object surface is of the form

z = f(x,y) (1)

then a surface normal vector is

[∂f(x,y)/∂x, ∂f(x,y)/∂y, -1].

Let us call the first element of this normal vector p, and the second element q, i.e.,

p = ∂f(x,y)/∂x and q = ∂f(x,y)/∂y,

so the surface normal vector can be rewritten as [p, q, -1]. The pair (p,q) is called the gradient of the function f(x,y). The two-dimensional space of (p,q), the gradient space, is used to represent surface orientation.

[Figure 4. Perspective Projection and Orthographic Projection — (a) perspective projection; (b) orthographic projection.]
Many investigators have used the gradient space in image analysis and scene analysis (Mackworth (1973); Horn (1977); Woodham (1977)). The reflectance of many surfaces depends only on surface orientation. Surface reflectance can be written as a function of the incident, emergent and phase angles. Throughout this thesis, these three angles are abbreviated by i, e and g respectively. (See figure 5.) For orthographic projection, the phase angle g is constant. With i, e and g, some simple models of reflectance can be established. One of them is for a perfect diffuse reflector (a Lambertian surface), which \"appears\" equally bright from all viewing directions. The reflectance function φ(i,e,g) is given by

φ(i,e,g) = (ρ/π)·cos(i), if 0 ≤ i < π/2; 0 otherwise (2)

where ρ is the albedo. If a single distant light source is in the direction given by the gradient (ps, qs), then cos(i) can be expressed in terms of p and q by the normalized dot product of the surface normal vector [p, q, -1] and the incident ray vector [ps, qs, -1]:

cos(i) = (1 + p·ps + q·qs) / (√(1 + p² + q²)·√(1 + ps² + qs²)) (3)

[Figure 5. Definition of i, e and g. The diagram shows the incident angle (i), the emergent angle (e) and the phase angle (g).]

The reflectance map for the function φ(i,e,g) given in equation (2) is defined as

R(p,q) = (ρ/π)·(1 + p·ps + q·qs) / (√(1 + p² + q²)·√(1 + ps² + qs²)) (4)

so that we can write an image irradiance equation of the form

I(x,y) = R(p,q) (5)

where I(x,y), the intensity of point (x,y), is determined by R(p,q). There are many different reflectance maps and they can be generated systematically (Horn and Sjoberg (1979)), but the reflectance map for a Lambertian surface is considered sufficient for this study. Horn (1977) and Woodham (1977; 1980a) have used it to determine an object's shape. It can also be used to generate synthetic images if the surface orientation is known (Horn and Bachman (1978); Little (1980)).
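The Lambertian reflectance map of equations (2)-(4) is simple to evaluate over a gradient grid. The following is a minimal sketch, not the thesis's code; the function names and the numpy finite-difference gradient are my own choices:

```python
import numpy as np

def dem_gradient(dem, spacing=1.0):
    """Estimate the gradient (p, q) = (df/dx, df/dy) of an elevation
    grid z = f(x, y) by central finite differences."""
    dz_drow, dz_dcol = np.gradient(dem, spacing)
    return dz_dcol, dz_drow  # columns vary with x, rows with y

def reflectance_map(p, q, ps, qs, albedo=1.0):
    """Equations (2)-(4): Lambertian reflectance map
    R(p,q) = (rho/pi)(1 + p*ps + q*qs) /
             (sqrt(1 + p^2 + q^2) * sqrt(1 + ps^2 + qs^2)),
    clamped to 0 where the incident angle exceeds 90 degrees
    (the 'otherwise' case of equation (2))."""
    cos_i = (1.0 + p * ps + q * qs) / (
        np.sqrt(1.0 + p**2 + q**2) * np.sqrt(1.0 + ps**2 + qs**2))
    return (albedo / np.pi) * np.maximum(cos_i, 0.0)
```

A synthetic image then follows directly from equation (5): I(x,y) = R(p(x,y), q(x,y)) evaluated over the registered elevation grid.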
In this thesis, the reflectance map is applied to remote sensing and is extended to incorporate the effects due to the atmosphere.

3.2. Components of Atmospheric Effects

The radiant intensity recorded by Landsat differs from the scene radiance because of the presence of the atmosphere. Interpretation of the satellite imagery, hence, is complicated. If one wants to lay out a reasonable mathematical model to correct for atmospheric effects, one has to understand the entire imaging process: how photons reach the target (a surface point of the earth observed by the multispectral scanner), and how they are reflected into the sensor. For the earth, the most important light source is the sun. When penetrating through the atmosphere, sun rays will be absorbed and scattered by air particles, water molecules, clouds or dust. They turn the sky into a secondary diffuse light source. Radiant energy may reach the target point on the ground directly or indirectly. (See figure 6.) Direct energy is called solar irradiance or sky irradiance, depending on its origin. Indirect energy arises from light that may have hit and been reflected by some other ground areas before reaching the target. This latter phenomenon is called mutual illumination, which can occur in a valley surrounded by mountains. The target will reflect the incident rays in all directions. Some scene radiance will go directly upward and be received by the multispectral scanner. Of course, this scene radiance is attenuated by the atmosphere. Meanwhile, the satellite will receive additional radiance. One source is energy reflected from areas outside the target. Because of this phenomenon, dark objects appear brighter when surrounded by white objects. Another source is path radiance: solar radiance may be scattered into the scanner before it reaches the ground. In general, Landsat receives three sources of energy: two of them are reflected from the surface and one by the air column.

[Figure 6. Components of Atmospheric Effects]
That is,

Lm = Ltar + Lsur + Epath (6)

where Lm is image irradiance, Ltar is energy from the target, Lsur is energy from the surrounding areas, and Epath is path radiance.

3.3. Image Formation Equation

Before we go further to construct the mathematical model, we should state all our simplifying assumptions. (1) The distances of the sun and the satellite are assumed infinite, so orthographic projection can be applied. (2) At any time, the Landsat MSS scans a relatively small portion of the ground surface because of the small viewing angle; therefore, we can neglect the curvature of the earth's surface. (3) The content and density of the air particles vary within the atmosphere, so that the atmosphere appears to have different layers. We will assume that the atmosphere consists of only one layer, with identical absorbing and reflecting properties throughout. (4) The ground objects are Lambertian reflectors. (5) There is no cloud in our studied imagery. (6) Mutual illumination is not considered. (7) Polarization effects are not considered. (8) There are no additional light sources on the surface. (9) Scene radiance from the surrounding areas of the target is ignored. With these assumptions, equation (6) can be simplified as:

Lm = Ltar + Epath (7)

We now proceed to model atmospheric effects. From equation (7), the term Ltar, image irradiance due to energy reflected from the target, can be expanded as:

Lm = (ρ/π)·Tu·Ltot + Epath (8)

where Ltot is the total scene irradiance of the target, ρ is the albedo, and Tu, upward transmission, is the percentage of the total energy reaching the sensor after penetrating upward through the air column. The term Ltot can be decomposed further as:

Ltot = Lsun + Lsky (9)

where Lsun is scene irradiance due to the sun, and Lsky is scene irradiance due to the sky.
For the term Lsun, we obtain:

Lsun = 0, if the target f(x,y) is in a shadow area; Td·Ltop·cos(i), if 0 ≤ i < π/2 (10)

where Ltop is solar irradiance at the top of the atmosphere, i is the incident angle, and Td is downward transmission, the percentage of the total energy reaching the surface after penetrating downward through the air column. Throughout the year, Ltop varies insignificantly, so it is taken as a constant. Many researchers have performed tests to estimate its value. (See Table II.) In this thesis, the values published by Thekaekara are used. Therefore, equation (7) is further expanded as

Lm = (ρ/π)·Tu·Lsky + Epath, if the target is in a shadow area; (ρ/π)·Tu·(Td·Ltop·cos(i) + Lsky) + Epath, otherwise. (11)

In order to correct for atmospheric effects, the terms Epath, Tu, Td and Lsky must be modeled. An exponential model will be applied to each. First, path radiance may be defined as a function of the air column length between the target and the satellite, in other words, the target's elevation:

Epath = p0·e^(−z/Hp) (12)

where z is the target elevation, p0 is path radiance at sea level, and Hp is the scale height.

Table II. Values for Ltop, solar irradiance measured at the top of the atmosphere in the direction of the solar beams. The unit is mW/cm2.
  Source           Band 4  Band 5  Band 6  Band 7
  Valley 1965      19.30   16.30   12.80   24.80
  Thekaekara 1970  17.70   15.15   12.37   24.88
  Rogers 1973      18.65   15.17   12.33   25.17

Second, atmospheric transmission is the percentage of the energy reaching its destination after penetrating through the atmosphere, so upward and downward transmission are functions of the target elevation. Again, an exponential function is used:

Tu = e^(−τ) (13)

Td = e^(−τ/cos(g)) (14)

where

τ = τ0·e^(−z/Hτ) (15)

Here τ is optical depth, z is the target elevation, g is the phase angle, τ0 is optical depth at sea level, and Hτ is the scale height. The cosine of the phase angle in downward transmission accounts for the slant pathway between the sun and the target.
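The exponential components of equations (12)-(15) are each one-liners; a hedged sketch (function and parameter names are mine, and the τ0 = 0.26185, Hτ = 2529.4 m figures from the U.S. Standard Atmosphere quoted later are used here only as example values):

```python
import math

def path_radiance(z, p0, hp):
    """Equation (12): Epath = p0 * exp(-z / Hp)."""
    return p0 * math.exp(-z / hp)

def optical_depth(z, tau0, h_tau):
    """Equation (15): tau = tau0 * exp(-z / H_tau)."""
    return tau0 * math.exp(-z / h_tau)

def transmissions(z, tau0, h_tau, g):
    """Equations (13)-(14): Tu = exp(-tau) and Td = exp(-tau / cos(g)),
    where g (radians) is the phase angle accounting for the slant
    path between the sun and the target."""
    tau = optical_depth(z, tau0, h_tau)
    return math.exp(-tau), math.exp(-tau / math.cos(g))
```

Both transmissions increase toward 1 with elevation, as figure 7 describes, because the remaining air column above the target thins.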
Furthermore, optical depth is the amount of radiant energy lost when traveling through the air. Thus Tu and Td are increasing functions of the target elevation z: Tu = e^(−τ0) when z = 0 and Tu → 1 as z → ∞; Td = e^(−τ0/cos(g)) when z = 0 and Td → 1 as z → ∞. (See figure 7.)

[Figure 7. Tu and Td. The diagram shows upward and downward transmission as functions of elevation.]

Third, sky irradiance is the amount of sky light received by the target. Intuitively, it should depend on the target elevation and orientation. The target orientation, which can be described by the emergent angle e, relates to the amount of sky to which the target is exposed. (See figure 8.) Lsky is defined by

Lsky = h·s0·e^(−z/Hs) (16)

[Figure 8. The Emergent Angle. cos(e) = 0 if e = 90°, i.e., the surface is vertical; cos(e) = 1 if e = 0°, i.e., the surface is horizontal.]

Finally, after collecting equations (11) to (16), we obtain:

Lm = (ρ/π)·e^(−τ)·h·s0·e^(−z/Hs) + p0·e^(−z/Hp) (17)

if the target is in shadow, or

Lm = (ρ/π)·e^(−τ)·(e^(−τ/cos(g))·Ltop·cos(i) + h·s0·e^(−z/Hs)) + p0·e^(−z/Hp) (18)

otherwise, where τ = τ0·e^(−z/Hτ) and h = (1 + cos(e))/2. From the above two equations, there are three pairs of unknown parameters: τ0 and Hτ, s0 and Hs, and finally p0 and Hp, associated with optical depth, sky irradiance and path radiance, respectively. The task is to estimate these six parameters from the satellite images.

3.4. Landsat Imagery

In this study, an area of 21.6 km by 30.36 km surrounding St. Mary Lake, British Columbia, Canada (latitude N 49 36 30, longitude W 116 11 30) is used as a test site. This area has been used for many different projects at the University of British Columbia (Woodham (1980b); Little (1980); Majka (1982)). It is a rugged mountainous region. The altitude of the area ranges from 944 m to 2684 m. St. Mary River runs from northwest to east, passing through St. Mary Lake, which is located in the middle of the area.
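Collecting equations (12)-(18) gives a single forward model that predicts the sensor measurement from a hypothesized albedo. A self-contained sketch (parameter names are mine, not the thesis's):

```python
import math

def predicted_irradiance(albedo, z, cos_i, cos_e, in_shadow,
                         l_top, s0, hs, tau0, h_tau, p0, hp, g):
    """Equations (17)-(18): irradiance Lm recorded at the sensor for a
    target with the given albedo, elevation z (m), incident-angle
    cosine cos_i and emergent-angle cosine cos_e; g is the phase
    angle in radians."""
    tau = tau0 * math.exp(-z / h_tau)            # equation (15)
    t_up = math.exp(-tau)                        # equation (13), Tu
    t_down = math.exp(-tau / math.cos(g))        # equation (14), Td
    h = (1.0 + cos_e) / 2.0                      # exposed-sky fraction
    l_sky = h * s0 * math.exp(-z / hs)           # equation (16)
    l_sun = 0.0 if in_shadow else t_down * l_top * max(cos_i, 0.0)
    e_path = p0 * math.exp(-z / hp)              # equation (12)
    return (albedo / math.pi) * t_up * (l_sun + l_sky) + e_path
```

In shadow (equation (17)) the solar term drops out, leaving only diffuse sky light attenuated on the way up, plus path radiance.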
Landsat passes over this area and takes pictures around 9:30 a.m. local solar time. The MSS images are received at a ground station where calibration is performed. (See appendix II.)

[Figure 9. Definition of the Position of the Sun. The position of the sun is defined by the elevation angle and the azimuth angle.]

[Figure 10. Digital Elevation Map. High elevations are represented by light tones, and valleys by dark tones.]

Band 4 imagery (0.5 µm to 0.6 µm) is used in the analysis to follow.

3.5. Auxiliary Data

Two more things are needed: the position of the sun and a digital elevation model (DEM) for the test site. The position of the sun, given as an elevation angle and an azimuth angle (see figure 9), can be determined by tables or standard algorithms if the date, time and position on the earth's surface are given (Horn 1978). For example, the sun is at elevation 13.84° and azimuth 153.05° for January 8, 1979, 9:58 a.m. Pacific standard time (17:58 Greenwich mean time (GMT)). And for September 17, 1979, 10:56 a.m. Pacific daylight time (17:56 GMT), the sun is at elevation 37.82° and azimuth 146.55°. The digital elevation model is a 180 by 253 array of elevations (120 m grid). It was produced by digitizing the 1:50,000 Canadian National Topographic System (NTS) map sheet 82 F/9 (by James Little at The University of British Columbia). The DEM is shown in figure 10. Elevation is represented by brightness; a higher elevation is encoded by a brighter intensity. Therefore, valleys are represented by dark tones, and summits by light tones. With this information, the Landsat images can be registered with the DEM (Horn and Bachman (1978); Little (1980)), and the shadow regions can be derived (Woodham 1980b). See figure 3 for the shadow maps.

CHAPTER 4. An Experiment

4.1.
Introduction

In order to determine the atmospheric effects, the six unknown parameters p0, Hp, s0, Hs, τ0 and Hτ of the exponential models for path radiance, sky irradiance and optical depth must be estimated. Once they are determined, the albedo ρ of point (x,y) in the image can be obtained by modifying the corresponding image irradiance Lm, first to remove the path radiance, and then to adjust for the scene irradiance and upward transmission:

ρ = π·(Lm − p0·e^(−z/Hp)) / (e^(−τ)·(e^(−τ/cos(g))·Ltop·cos(i) + h·s0·e^(−z/Hs))) (19)

where e^(−τ/cos(g))·Ltop·cos(i) = 0 if point (x,y) is in shadow, τ = τ0·e^(−z/Hτ), and h = (1 + cos(e))/2. Using this method, the albedo of the entire image can be computed. The result is called an albedo map, which can be used as the input data for classification programs. In this chapter, we will describe methods for estimating the six parameters, and then we will generate the albedo map.

4.2. Path Radiance

By plotting image irradiance along the vertical axis against elevation along the horizontal axis, the dependency of path radiance can be examined. In figure 11(b), the result for the summer scene, image irradiance decreases with increasing elevation except at high altitudes where the ground is covered by snow. At the low elevation end, several altitude levels contain a wide range of intensity values. These levels correspond to the flat valleys where a wide variety of ground cover, such as lakes, roads, valley vegetation, forest, etc., can be found. The plot for the winter scene, in figure 11(a), reveals that the decreasing behavior of the scene irradiance is not as pronounced as in the summer. The plot is more spread out at high altitudes, and the intensity values are lower. Perhaps, because of the cold and dry winter climate, more areas are covered by snow, and there are fewer water molecules in the air and hence less scattering. Also, the minimum brightness value for each elevation level is plotted.
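Equation (19) can be applied pixel by pixel once the six parameters are known. A hedged sketch, consistent term by term with equations (17)-(18) (names are mine):

```python
import math

def estimate_albedo(lm, z, cos_i, cos_e, in_shadow,
                    l_top, s0, hs, tau0, h_tau, p0, hp, g):
    """Equation (19): subtract path radiance from the measured
    irradiance lm, then divide out upward transmission and total
    scene irradiance to recover the albedo rho."""
    tau = tau0 * math.exp(-z / h_tau)
    h = (1.0 + cos_e) / 2.0
    l_sun = 0.0 if in_shadow else (
        math.exp(-tau / math.cos(g)) * l_top * max(cos_i, 0.0))
    l_sky = h * s0 * math.exp(-z / hs)
    corrected = lm - p0 * math.exp(-z / hp)      # remove path radiance
    return math.pi * corrected / (math.exp(-tau) * (l_sun + l_sky))
```

Running this over every pixel of the registered image and DEM yields the albedo map used as input to classification.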
Some isolated points, which may be caused by sensor error, have been excluded. As in figures 12(a) and 12(b) (for the winter and summer scenes respectively), the above theory still holds; minimum intensity values decrease with increasing elevation except at high elevations. With these two plots, it is easy to observe that the path radiance for each altitude level must lie below the minimum point. By assuming that the minimum intensities are caused by some low-reflectance materials whose albedos are close to zero, we can estimate the path radiance function (equation (12)) by fitting an exponential curve under all these intensities.

[Figure 11. Brightness vs. Elevation — (a) result for January 8, 1979; (b) result for September 17, 1979.]

[Figure 12. Minimum Intensities — (a) result for January 8, 1979; (b) result for September 17, 1979.]

If path radiance for each elevation level is greater than zero, equation (12) can be linearized as:

R = X − z·Y (20)

where R = ln(Epath), X = ln(p0), and Y = 1/Hp. The problem is simplified: to fit a straight line under all the logarithmic minimum intensities. (See figure 13.) Here, several variables need to be defined. The variable ri is the logarithmic minimum intensity for an elevation level zi, and di is the distance between the point (ri, zi) and the desired straight line, i.e.,

di = ri − X + zi·Y (21)

The problem is translated into an optimization task:

min Σ(i=1..n) (ri − X + zi·Y), s.t. ri − X + zi·Y ≥ 0 for i = 1, ..., n; X, Y ≥ 0. (22)

After a little bit of work, the formulation given in (22) becomes:

max n·X − Σ(i=1..n) zi·Y, s.t. X − zi·Y ≤ ri for i = 1, ..., n; X, Y ≥ 0. (23)

[Figure 13.
Formulation (23) is a linear programming problem; the values of X and Y are solved for by the simplex method. In turn, the values of p0 and H_P, and thus the exponential curve for path radiance, can be computed. The results are listed in table III and shown in figures 14(a) and 14(b). With the help of the DEM, we can estimate and remove the path radiance at each pixel of the entire image.

Table III. Values for p0 and H_P. Estimated values for the parameters of path radiance, E_path = p0·e^(-z/H_P).

  Scene                p0 (mW/cm2 sr)   H_P (m)
  January 8, 1979      0.173            1591.6
  September 17, 1979   0.521            3405.9

[Figure 14. Path Radiance: (a) result for January 8, 1979; (b) result for September 17, 1979]

4.3. Sky Irradiance and Optical Depth

After removing the effect of path radiance, the image formation equation becomes

L_m' = (ρ/π)·e^(-τ)·( e^(-τ/cos(θ_s))·L_top·cos(i) + h·s0·e^(-z/H_S) )    (24)

where L_m' = L_m - E_path is the corrected image irradiance, e^(-τ/cos(θ_s))·L_top·cos(i) = 0 if the target is in shadow, τ = τ0·e^(-z/H_T), and h = (1 + cos(e)) / 2.

The remaining four parameters, s0, H_S, τ0 and H_T, and the albedo ρ are strongly coupled. To solve for the four parameters at each pixel, ρ must be known beforehand; unfortunately, ρ is exactly the value we want to determine. Thus it is not easy to compute the effects due to sky irradiance and optical depth without additional information. Here, three methods are presented to demonstrate how the remaining parameters can be estimated.

The first method is to apply results from other sources. For example, Sjoberg (1981) tried 3.0 mW/cm2 for s0 and set H_S to be the same as the scale height for path radiance, H_P. For optical depth, he applied the results from the U.S.
Standard Atmosphere (Valley 1965). The parameters for this method are summarized in table IV.

Table IV. Values for s0, H_S, τ0 and H_T - method 1.

  Source                     Parameters
  U.S. Standard Atmosphere   τ0 = 0.26185, H_T = 2529.4 m
  Sjoberg                    s0 = 3.0 mW/cm2, H_S same as H_P

If the four parameters are to be estimated from the original image, additional knowledge must be used. Shadows provide some useful information; however, this kind of information cannot be applied to the summer scene because only a very small portion of the area lies in shadow.

In the second method, areas across the shadow boundary, in the direction of the sun's azimuth angle, are examined. Two such neighboring areas are considered to have the same albedo, the same sky irradiance and the same optical depth if the following criteria hold: they are relatively flat, lie within 200 m of each other, and the orientations of the surfaces are about the same, within 10°. For the area on the shadow side of the boundary, the image irradiance after path radiance correction has only one component (i.e., the solar irradiance L_sun is zero; refer to equation (11)):

L_sh' = (ρ/π)·T_u·L_sky    (25)

where L_sh' is the image irradiance, after path radiance correction, from a target in shadow. On the sunlit side, the amount of image irradiance due to solar irradiance can be obtained by subtracting the portion due to sky irradiance from the corrected image irradiance:

L_sun' - L_sh' = (ρ/π)·T_u·(L_sun + L_sky) - (ρ/π)·T_u·L_sky = (ρ/π)·T_u·L_sun    (26)

where L_sun' is the corrected image irradiance from a target under the sun, and L_sun = e^(-τ/cos(θ_s))·L_top·cos(i).

Taking the ratio of equations (25) and (26), the term (ρ/π)·T_u can be eliminated:

L_sh' / (L_sun' - L_sh') = L_sky / L_sun = h·s0·e^(-z/H_S) / ( e^(-τ/cos(θ_s))·L_top·cos(i) )    (27)

where τ = τ0·e^(-z/H_T). The remaining four parameters can now be computed. The values of L_sh' and (L_sun' - L_sh') are collected from all areas across the shadow boundary where the above criteria are satisfied.
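As a concrete sketch of this fitting step, equation (27) can be handed to a general nonlinear least-squares routine. The snippet below is illustrative only: the sample arrays are synthetic, generated from the method-2 parameter values (table V) so the fit has a known target, and scipy.optimize.curve_fit stands in for whatever fitting algorithm was actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

L_top = 17.7                                 # mW/cm2, from table VII
cos_ts = np.cos(np.deg2rad(90.0 - 13.84))    # January sun elevation 13.84 deg

def ratio(xdata, s0, Hs, tau0, Ht):
    # equation (27): L_sh' / (L_sun' - L_sh')
    z, h, cos_i = xdata
    tau = tau0 * np.exp(-z / Ht)
    return h * s0 * np.exp(-z / Hs) / (np.exp(-tau / cos_ts) * L_top * cos_i)

# synthetic shadow-boundary samples (elevation, sky factor h, cos of incidence)
z = np.linspace(500.0, 2500.0, 40)
h = 0.85 + 0.10 * np.cos(np.linspace(0.0, 3.0, 40))
cos_i = 0.75 + 0.20 * np.sin(np.linspace(0.0, 3.0, 40))
# observations generated from the table V parameters, noise-free
y = ratio((z, h, cos_i), 3.659, 1129.1, 0.494, 9195.8)

popt, _ = curve_fit(ratio, (z, h, cos_i), y,
                    p0=[3.0, 1500.0, 0.4, 8000.0], maxfev=20000)
print(popt)
```

Because the four parameters are strongly coupled (as noted above), the recovered values can trade off against one another with noisy data; with real shadow-boundary samples the fit should be judged by its residual, not by any single parameter.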
Then equation (27) is fitted to the data by a non-linear data fitting algorithm. The results are given in table V.

Table V. Values for s0, H_S, τ0 and H_T - method 2.

  s0 = 3.659 mW/cm2    H_S = 1129.1 m
  τ0 = 0.494           H_T = 9195.8 m

Finally, the last method (method 3) makes use of the shadow regions as control areas. It is well known that snow is highly reflective; its albedo is approximately 0.95 in band 4 (Dozier and Frew (1981)). So ρ in equation (25) is known for snow areas in shadow:

L_sh' = (0.95/π)·e^(-τ)·h·s0·e^(-z/H_S)    (28)

where τ = τ0·e^(-z/H_T). In the January image, most areas are covered by snow, especially at elevations above 2300 m and on the lake. If equation (28) is fitted using the data collected from shadows at elevations above 2300 m and on the lake, the parameters s0, H_S, τ0 and H_T can be estimated. They are tabulated in table VI.

Table VI. Values for s0, H_S, τ0 and H_T - method 3.

  s0 = 1.207 mW/cm2    H_S = 9838.3 m
  τ0 = 0.365           H_T = 2000.0 m

4.4. Albedo Map

Now the albedo map can be generated. There are three sets of parameters for sky irradiance and optical depth. The first set, generated by method 1, is applied to both the winter and the summer scene, while the last two sets are applied only to the winter scene. Four albedo maps, Model Is, Model Iw, Model II and Model III, are obtained. They are shown in figure 15 and their parameters are summarized in table VII.

Table VII. Parameters for Albedo Maps. Units: p0 in mW/cm2 sr; H_P, H_S and H_T in m; s0 and L_top in mW/cm2.

  Model   p0      H_P      s0      H_S      τ0      H_T      L_top
  Is      0.521   3405.9   3.0     3405.9   0.262   2529.4   17.7
  Iw      0.173   1591.6   3.0     1591.6   0.262   2529.4   17.7
  II      0.173   1591.6   3.659   1129.1   0.494   9195.8   17.7
  III     0.173   1591.6   1.207   9838.3   0.365   2000.0   17.7

[Figure 15. Albedo Maps]

The albedo map of Model Is, method 1 with the summer scene, shows some remarkable detail. The albedo of the lake is very low, close to zero, so the entire lake is rendered black. The rivers and the cut-over areas are clearly shown. The vegetation along the valleys has a different tone from the ground cover on the summits. On the other hand, the albedo map of Model Iw, applying method 1 to the winter scene, is not as good as Model Is. Some shadow effects can be spotted, for example around the lake. Along the valleys, the dynamic range is small, so the cut-over areas and the rivers do not appear as clearly as in the summer map. Probably the parameters of these models, which were published by other researchers, were derived from clear summer days.

The albedo map of Model II encounters the same problem as that of Model Iw. The reason may be that the neighboring areas across the shadow boundary have different types of ground cover, in spite of the fact that they satisfy the criteria in the previous section. In this rugged terrain, the type of ground cover can vary significantly over a small area. Moreover, the Landsat image and the DEM are relatively coarse. The Landsat image is re-sampled to a 120 m grid, the same width as the DEM, even though each original pixel is about 79 m wide. A more precise image and DEM might therefore improve the result. Model III (whose sky irradiance and optical depth parameters are obtained from method 3) shows a reasonably good image, although some shadow effects can still be found around the lake. The cut-over areas and the rivers stand out from the valley vegetation. The summits and some slopes of the mountains are covered by snow.

When the four albedo maps are compared, two points can be observed. First, the areas of valley vegetation and the summits agree reasonably well among all maps. Second, the summits in Models Iw, II and III appear brighter than those in Model Is because all the high elevations are covered by snow in winter.
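To make equation (19) concrete, here is a per-pixel sketch of the albedo computation using the winter path radiance parameters (table III) together with the method-3 sky and optical depth parameters (table VI). The tiny DEM, geometry, shadow and image arrays are invented, and treating the upward transmission as e^(-τ) (nadir viewing) is an assumption made here.

```python
import numpy as np

p0, Hp = 0.173, 1591.6          # path radiance, January (table III)
s0, Hs = 1.207, 9838.3          # sky irradiance, method 3 (table VI)
tau0, Ht = 0.365, 2000.0        # optical depth, method 3 (table VI)
L_top = 17.7                    # mW/cm2 (table VII)
cos_ts = np.cos(np.deg2rad(90.0 - 13.84))   # January sun elevation 13.84 deg

def albedo(Lm, z, cos_i, cos_e, shadow):
    tau = tau0 * np.exp(-z / Ht)
    h = (1.0 + cos_e) / 2.0
    # solar term is zero for pixels in shadow, as in equation (19)
    sun = np.where(shadow, 0.0, np.exp(-tau / cos_ts) * L_top * cos_i)
    sky = h * s0 * np.exp(-z / Hs)
    # remove path radiance, then divide out scene irradiance and the
    # (assumed nadir) upward transmission exp(-tau)
    return np.pi * (Lm - p0 * np.exp(-z / Hp)) / (np.exp(-tau) * (sun + sky))

# toy 2x2 scene: elevations, surface geometry, shadow mask, image irradiance
z = np.array([[300.0, 900.0], [1500.0, 2300.0]])
cos_i = np.array([[0.90, 0.70], [0.80, 0.95]])
cos_e = np.array([[0.95, 0.90], [0.85, 0.98]])
shadow = np.array([[False, False], [True, False]])
Lm = np.array([[0.60, 0.50], [0.25, 1.10]])      # mW/cm2 sr, invented
print(albedo(Lm, z, cos_i, cos_e, shadow))
```

Applied over the full DEM-registered image, the same function would produce the albedo maps discussed above.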
CHAPTER 5

Conclusion

When analyzing satellite images, especially during classification, atmospheric effects must be removed. In some early systems, simple correction procedures were applied. They may be suitable for flat regions; however, in mountainous areas with rugged terrain and shadows, the effects are much more complicated, so a more sophisticated method is needed.

After considering the imaging situation and the components of the atmosphere, a relatively simple model was presented. This model breaks the atmospheric effects down into three components: path radiance, sky irradiance and optical depth. Each component is further modeled by an exponential function. With the help of the DEM, the shadow map and the sun position, the six unknown parameters of the three exponential functions are determined.

Path radiance is estimated by using the minimum image irradiance of each elevation level. With this technique, the minimum intensities are assumed to correspond to non-reflective ground cover. For example, the reflectance factor of the lake is usually very low, close to zero, during summer. But sometimes this assumption may not be valid. The parameters of path radiance are very hard to verify since they have no standard values. In table VIII, they are compared to the values published by others.

Table VIII. Parameters for Path Radiance. Units: path radiance (E_path) in mW/cm2 sr; p0 in mW/cm2 sr; H_P in m.

  Source           p0      H_P      E_path at 290 m   Time of year   Sun elevation angle
  January          0.173   1591.6   0.144             Jan.           13.84°
  September        0.521   3405.9   0.479             Sept.          37.82°
  Sjoberg (1981)   0.33    4714     0.310             Oct.           34.2°
  Rogers (1973)    -       -        0.268             -              42.0°

Note: Ahern et al. (1977b) published a list of E_path values ranging from 0.186 mW/cm2 sr to 0.469 mW/cm2 sr (the data were collected from May to September 1976).

The four parameters for sky irradiance and optical depth are strongly coupled.
They are not easy to estimate unless there are some control areas where extra information, such as average albedo or visual range, is known. Without this kind of control area, shadows may provide some useful information; almost paradoxically, shadows usually cause problems for most remote sensing systems. Using shadows, the four parameters can be determined in two different ways. One method is to examine areas across the shadow boundary where the ground cover and the atmospheric effects are similar. The other method applies knowledge of the albedo of snow, which is about 0.95 for band 4 images. Unfortunately, these methods have a disadvantage: they cannot be applied to summer scenes because only a very small portion of the area lies in shadow.

Once the atmospheric effects are estimated, the albedo map can be produced. The map is hard to verify since no ground truth information is available, so it has to be analyzed subjectively. In a true albedo map, shadow effects should be removed. The lake, the rivers, the harvested areas and the snow on the summits also provide useful grounds for judging the results. Our experiments have produced some \"reasonable\" albedo maps, even though shadow effects are not removed completely.

Bibliography

Ahern, F.J., Goodenough, D.G., Jain, S.C., Rao, V.R. and Rochon, G. (1977a). \"Landsat Atmospheric Corrections at CCRS\", Proceedings of the Fourth Canadian Symposium on Remote Sensing, Quebec City, May.

Ahern, F.J., Goodenough, D.G., Jain, S.C., Rao, V.R. and Rochon, G. (1977b). \"Use of Clear Lakes as Standard Reflectors for Atmospheric Measurements\", Proceedings of the Eleventh International Symposium on Remote Sensing of Environment, p731-755.

Ahern, F.J. and Murphy, J. (1978). \"Radiometric Calibration and Correction of Landsat 1, 2, and 3 MSS Data\", research report 78-4, Canada Centre for Remote Sensing, November.

Crane, R.B. (1971).
\"Preprocessing Techniques to Reduce Atmospheric and Sensor Variability in Multispectral Scanner Data\", Proceedings of the Seventh Inter-national Symposium on Remote Sensing of Environment, May. Daultrey, S. (1976). Principal Components Analysis, Series of Concept and Tech-niques in Modern Geography, No. 8. Donker, N.H.W. and Mulder, N.J. (1976). \"Analysis of MSS Digital Imagery with the Aid of Principal Component Transform\", ISP Commission VTI presented paper. Dozier, J. and Frew, J. (1981). \"Atmospheric Corrections to Satellite Radiometric Data over Rugged Terrain\", Remote Sensing of Environment 11, pl91-205. Gnanadesikan, R. (1977). Methods for Statistical Data Analysis of Multivariate Observations, Wiley Series Probability and Mathematical Statistics. Guindon, B., Goodenough, D.G. and Teillet, P.M. (1981/1982). \"The Role of Digital Terrain Models in the Remote Sensing of Forests\", research report, Canada Centre for Remote Sensing. Henderson, R.G. (1975). \"Signature Extension Using the MASC Algorithm\", Sym-posium on Machine Processing of Remote Sensed Data, June. Holkenbrink, P.F. (1978). \"Manual on Characteristics of Landsat Computer-Compatible Tapes Produced by the EROS Data Center Digital Image Pro-cessing System\", U.S. Geological Survey, Washington, December. 47 48 Horn, B.K.P. (1977). \"Understanding Image Intensities\", Artificial Intelligence, Vol. 8, p201-231. Horn, B.K.P. (1978). \"The position of the Sun\", AI-WP-162, Artificial Intelligence Laboratory, M.I.T., Cambridge, Mass. Horn, B.K.P. and Bachman, B.L. (1978). \"Using Synthetic Images to Register Real Images with Surface Models\", Communications ACM, November. Horn, B.K.P. and Sjoberg, R.W. (1979). \"Calculating the Reflectance Map\", Applied Optics 18:11, pl770-1779. Kirchner, J.A., Youkhana, S. and Smith, J.A. (1982). \"Influence of Sky Radiance Distribution on the Ratio Technique for Estimating Bidirectional Reflec-tance\", Photogrammetric Engineering and Remote Sensing, Vol. 
48, No. 6, June.

Little, J.J. (1980). \"Automatic Rectification of Landsat Images Using Features Derived From Digital Terrain Models\", Computer Science technical report, University of British Columbia.

Lodwick, G.D. (1981). \"Topographic Mapping Using Landsat Data\", Proceedings of the Fifteenth International Symposium on Remote Sensing of Environment, Ann Arbor, MI., May.

Mackworth, A.K. (1973). \"Interpreting Pictures of Polyhedral Scenes\", Artificial Intelligence, Vol. 4, p121-137.

Majka, M. (1982). \"Reasoning about Spatial Relationships in the Primal Sketch\", Master's thesis, University of British Columbia.

Murphy, J. (1979). \"Format Specifications for Canadian Landsat MSS System Corrected Computer Compatible Tape\", research report 79-2, Canada Centre for Remote Sensing, August.

Robinove, C.J. (1982). \"Computation with Physical Values from Landsat Digital Data\", Photogrammetric Engineering and Remote Sensing, Vol. 48, No. 5, May.

Rogers, R.H. (1973). \"Investigation of Techniques for Correcting ERTS Data for Solar and Atmospheric Effects\", Interim Report, NASA-CR-132860.

Sjoberg, R.W. (1981). \"Atmospheric Effects in Satellite Imaging of Mountainous Terrain\", Master's thesis, MIT.

Taylor, M.M. (1974). \"Principal Components Colour Display of ERTS Imagery\", Proceedings of the Second Canadian Symposium on Remote Sensing.

Taylor, M.M. and Langham, E.J. (1975). \"The Use of Maximum Information Colour Enhancements in Water Quality Studies\", Proceedings of the Third Canadian Symposium on Remote Sensing.

Teillet, P.M., Guindon, B. and Goodenough, D.G. (1981/1982). \"On the Slope-Aspect Correction of Multispectral Scanner Data\", research report, Canada Centre for Remote Sensing.

Thekaekara, M.P. (1970). \"Proposed Standard Values of the Solar Constant and the Solar Spectrum\", The Journal of Environmental Sciences, Vol. 13, No. 5, p6-9, September/October.

Turner, R.E. and Spencer, M.M. (1972).
\"Atmospheric Model for Correction of Spacecraft Data\", Proceedings of the Eighth International Symposium on Remote Sensing of Environment, p895-934. Valley, S.L. (1965). Handbook of Geophysics and Space Environments, New York, McGraw Hill. Woodham, R.J. (1977). \"A Cooperative Algorithm for Determining Surface Orien-tation from a Single View\", Proceedings of International Joint Conference on Artificial Intelligence, p635-641, Cambridge, Mass. Woodham, R.J. (1980a). \"Photometric Method for Determining Surface Orienta-tion from Multiple Images\", Optical Engineering 19:1, pl39-144, January/February. Woodham, R.J. (1980b). \"Using Digital Terrain Data to Model Image Formation in Remote Sensing\", Image Processing for Missile Guidance. Appendix I Principle Components Transformation 1. P C T Principle Components Transformation (PCT) is a statistical technique for analyzing a set of data, i.e., the four bands of Landsat MSS images. In a data set, there will usually be many variables, which are also called axes, dimensions or components; for example, each Landsat data band can be treated as a vari-able. For each variable, there is a variance associated with it and the variance may correlate with other variables. The total variance of the whole data set is the sum of the individual variances. PCT is a linear transformation; the new axes are the linear combination of the original axes. After the transformation, the number of axes and the total variance remain unchanged. However, the first principal component will pick up the most of the total variance; the second component, which is uncorrected with the first one, will account for as much of the remaining variance as possible; the third component, which is uncorrected with the first two, will, again, gather as much of the remaining variance as possible, and so on. There are two important properties of the resulting components. They are uncorrected. 
Moreover, most of the total variance is concentrated in the first several axes, while a very small portion goes into the last few. We may discard the last few components and focus our attention on the \"important\" dimensions. Therefore, PCT can be used to reduce the dimensionality of a data set while keeping most of the \"significant\" information.

2. How to perform the transformation

The procedure of the transformation is illustrated step by step, using the January 8, 1979 Landsat images.

(a) First, the variance-covariance matrix of the data set is needed. For the Landsat images, it is a 4x4 matrix whose (i,j)th entry is the covariance between band i and band j. Thus the matrix is real symmetric:

  0.186  0.218  0.228  0.223
  0.218  0.258  0.272  0.267
  0.228  0.272  0.291  0.288
  0.223  0.267  0.288  0.288

(b) We then find all four eigenvalues and unit eigenvectors of the matrix. The sum of the eigenvalues is the total variance of the data set. The eigenvalues of our example are 1.012, 0.00987, 0.00114 and 0.000449. The corresponding eigenvectors are [0.424 0.503 0.535 0.530], [-0.617 -0.408 0.256 0.623], [-0.488 -0.313 0.617 -0.532] and [-0.449 0.695 -0.516 0.221].

(c) The eigenvector corresponding to the largest eigenvalue is the \"loading\" for the first principal component; the eigenvector corresponding to the smallest eigenvalue is the \"loading\" for the last principal component. The first principal component is computed by

PC1 = 0.424·I4 + 0.503·I5 + 0.535·I6 + 0.530·I7

where PC1 is the first principal component and I4, I5, I6 and I7 are the corresponding intensities from bands 4, 5, 6 and 7. In other words, the four principal components can be described by the matrix equation PC = A × I, where A is the 4x4 matrix formed by the unit eigenvectors as row vectors, arranged by their eigenvalues in descending order, I is the vector of band intensities, and PC is the vector of principal components.

(d) If imagery is wanted, the principal components can be scaled and digitized.
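Steps (a)-(c) amount to an eigen-decomposition of the band covariance matrix; here is a sketch in numpy using the matrix from step (a). The pixel intensity vector is invented, and the eigenvector signs may differ from the ones listed in step (b) (sign is arbitrary for eigenvectors).

```python
import numpy as np

# variance-covariance matrix of the four MSS bands, from step (a)
C = np.array([[0.186, 0.218, 0.228, 0.223],
              [0.218, 0.258, 0.272, 0.267],
              [0.228, 0.272, 0.291, 0.288],
              [0.223, 0.267, 0.288, 0.288]])

w, V = np.linalg.eigh(C)        # eigh: ascending eigenvalues, orthonormal columns
w, A = w[::-1], V[:, ::-1].T    # descending order; rows of A are the loadings

I = np.array([0.8, 0.9, 1.0, 1.0])   # hypothetical band 4..7 pixel intensities
PC = A @ I                           # forward transform, PC = A x I

# inverse PCT (section 3 below): A is orthogonal, so its inverse is A.T
I_back = A.T @ PC
print(w)    # leading eigenvalue should be near the 1.012 reported in step (b)
```

Discarding a component before the inverse transform amounts to zeroing the corresponding entry of PC before multiplying by A.T.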
The four principal components of our example are shown in figure 16, and the original Landsat images are shown in figure 17.

3. Inverse PCT

By inverting the transformation, the original images can be reconstructed. Within this process, if some principal component is discarded, two useful effects can be achieved. First, we can find out what information a particular component contains by comparing the reconstructed images with the original images. Second, the original images are enhanced when the discarded component contains mainly noise, say PC3 or PC4 in our case.

[Figure 16. Principal Components of St. Mary Lake]

[Figure 17. Landsat MSS Images of St. Mary Lake]

The following steps are used to perform the inverse transformation:

(a) Assign any arbitrary value to the discarded components. Usually, zero is used.

(b) Assume that the unit eigenvector matrix A is known. Since the variance-covariance matrix is real symmetric (see section 2, point (a)), A is orthogonal, and the inverse of an orthogonal matrix is its transpose. Therefore, the original intensities are recovered by I = A' × PC, where A' is the transpose of the matrix A (A, PC and I are defined as in section 2).

Figure 18(a) shows the reconstructed band 4 using PC1 and PC2. Figure 18(b) shows the reconstructed band 4 when PC2 is discarded.

4. Using PCT to analyze MSS images

Several researchers have used PCT to analyze MSS images. Donker and Mulder (1976) and Lodwick (1981) tried to interpret the transformed images. They discovered that the last two principal components (PC3 and PC4), containing a very small portion of the total variance, are mostly noise. PC1, which is a weighted sum of all the bands, is a general overall intensity of the scene.
Lodwick further claimed that PC1 is a function of topography and ground cover. Therefore, it can be used to predict the slopes of an area if the slopes of some sample area with similar ground cover are known. Finally, for PC2, they found that it relates to the type of ground cover: it separates vegetation from non-vegetation. For example, in figure 19, PC2 for September 17, 1979, the non-vegetation, such as the lake, the cut-over areas and the summits, is represented by white, and the vegetation along the valleys is represented by black. But PC2 for January 8, 1979 (see figure 16) is at the other extreme: the non-vegetated areas appear in dark tones while the vegetated areas appear in light tones. In spite of the fact that the transformed values lie at different extremes for winter and summer, PC2 seems to separate the non-vegetated areas from the vegetated areas.

[Figure 18. Reconstruction of Band 4: (a) only the first two principal components, PC1 and PC2, are used; (b) the second principal component is discarded]

[Figure 19. PC2 of St. Mary Lake - September 17, 1979]

There is another interesting observation. Since this method has no knowledge about shadow, shadow effects can be found in PC2 for winter. However, snow in shadow and snow under the sun are transformed to a similar tone. For instance, the entire lake, which is covered by snow, is transformed to a similar tone, even though half of it lies in shadow. PCT is a useful tool for converting a data set into another with fewer dimensions. By applying the transformation, Taylor (1974; 1975) tried to create a pseudo-colored Landsat image that would closely match the color space of our visual system.
The human visual system has three dimensions, described as \"brightness\", \"red/green\" and \"blue/yellow\". If the four original Landsat images are mapped into our color space, one of them, which contains some \"important\" information, must be discarded. But when the transformed data is used, the amount of information lost can be minimized.

Appendix II

Calibration of Landsat Data

On Landsat's platform there is a multispectral scanner system (MSS) which has four spectral bands with six detectors in each band. The spectral bands are commonly called band 4 (green, 0.5 µm to 0.6 µm), band 5 (red, 0.6 µm to 0.7 µm), band 6 (near infrared, 0.7 µm to 0.8 µm) and band 7 (near infrared, 0.8 µm to 1.1 µm). When the MSS acquires an image, the six detectors of each band obtain data for six adjacent track lines. Each track line is about 185 km long and contains about 3240 pixels. Each pixel is digitized into 64 grey levels, with logarithmic compression for bands 4 to 6 but a linear scale for band 7. The images are transmitted to a ground receiving station where further processing can be performed.

On the ground, the images are corrected for radiometric and other effects. Before any correction is performed, bands 4 to 6 are decompressed so that all bands are in linear mode. A reference detector is chosen for each band. The data of the reference detector is first calibrated to the absolute scale. The other detectors' data are then calibrated against the reference detector of their band to eliminate the radiometric striping problem. At the same time, all data are expanded linearly to 8 bits (i.e., 256 grey levels). Furthermore, since the detectors do not acquire data simultaneously, but rather sequentially, band-to-band registration and detector-to-detector resampling procedures have to be performed. The data may also be corrected for the effects of earth rotation, mirror velocity, and earth curvature.
After all these procedures, the final values are stored on a high density tape. The absolute radiance can be calculated from the digitized 8-bit value by the relation

R = (V/255)·(R_max - R_min) + R_min    (29)

where R is the absolute radiance, in mW/cm2 sr, and V is the digitized value in the range 0 to 255. R_min and R_max are chosen by NASA if the CAL2 option is applied during calibration (see table IX). In this case R_max and R_min may not correspond to the real saturation levels at the dark and bright ends; CAL2 forces the calibrated signal into saturation before any of the actual detector signals reaches the saturation level. The Canada Centre for Remote Sensing offers another option, CAL3, which provides a wider dynamic range. With CAL3, the bands of the three Landsats are placed on the same scale, except for band 4 of Landsat 3. The R_max and R_min values are listed in table IX. Furthermore, under CAL3 R_min is always 0, so equation (29) simplifies to a scaling operation:

R = R_max·V/255    (30)

There is an alternative way to obtain the absolute radiance (for both CAL2 and CAL3). One may use the equation

R' = A0×10^E0 + V·A1×10^E1    (31)

where R' is the absolute radiance in W/cm2 sr and V is the 8-bit digital value.

Table IX. R_min and R_max, in mW/cm2 sr. These values are published by Ahern and Murphy (1978). (n.c. = not calibrated.)

CAL2:
            Landsat 1        Landsat 2        Landsat 3
            R_max   R_min    R_max   R_min    R_max   R_min
  Band 4    2.48    0.0      2.63    0.08     2.50    0.04
  Band 5    2.00    0.0      1.76    0.06     2.00    0.03
  Band 6    1.76    0.0      1.52    0.06     1.65    0.03
  Band 7    n.c.    0.0      3.91    0.11     4.50    0.03

CAL3:
            Landsat 1        Landsat 2        Landsat 3
            R_max   R_min    R_max   R_min    R_max   R_min
  Band 4    3.00    0.0      3.00    0.0      2.50    0.0
  Band 5    2.00    0.0      2.00    0.0      2.00    0.0
  Band 6    1.75    0.0      1.75    0.0      1.75    0.0
  Band 7    n.c.    0.0      4.00    0.0      4.00    0.0

The constants A0, A1, E0 and E1 are stored in the Universal Header File of the high density tape (Ahern and Murphy (1978); Murphy (1979)).
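Equations (29) and (30) are a simple linear rescaling; a short sketch follows, using the CAL2 Landsat 2 band-5 limits from table IX as an example input (the digital value 128 is arbitrary).

```python
# digital value -> absolute radiance, equation (29); with Rmin = 0 (CAL3)
# this reduces to the pure scaling of equation (30)
def radiance(V, Rmax, Rmin=0.0):
    return (V / 255.0) * (Rmax - Rmin) + Rmin

r = radiance(128, Rmax=1.76, Rmin=0.06)   # CAL2, Landsat 2, band 5 (table IX)
print(round(r, 4))                        # prints 0.9133 (mW/cm2 sr)
```

The endpoints behave as expected: V = 0 maps to R_min and V = 255 maps to R_max.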
Converting the digital values to absolute radiance is important because absolute radiance can be compared with other data, in particular with the parameters of atmospheric models. "@en ;
edm:hasType "Thesis/Dissertation"@en ;
edm:isShownAt "10.14288/1.0051836"@en ;
dcterms:language "eng"@en ;
ns0:degreeDiscipline "Computer Science"@en ;
edm:provider "Vancouver : University of British Columbia Library"@en ;
dcterms:publisher "University of British Columbia"@en ;
dcterms:rights "For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use."@en ;
ns0:scholarLevel "Graduate"@en ;
dcterms:title "Modeling and estimating atmospheric effects in Landsat imagery"@en ;
dcterms:type "Text"@en ;
ns0:identifierURI "http://hdl.handle.net/2429/23963"@en .