"Arts, Faculty of"@en . "Geography, Department of"@en . "DSpace"@en . "UBCV"@en . "Adderley, Christopher David"@en . "2013-01-09T22:11:00Z"@en . "2012"@en . "Master of Science - MSc"@en . "University of British Columbia"@en . "This study aimed to determine the complete surface temperature T0,C of an urban system and quantify the bias of sensors with preferential views (satellite, hemispherical) on sensed urban surface temperature. A thermal camera emplaced on a tower in a street canyon in Vancouver, Canada recorded panoramas of surface temperature. Techniques from micrometeorology, and computer vision/graphics were combined to project this data onto a 3D model of the urban structure surrounding the tower, which was derived from photogrammetry and airborne LiDaR data, then correct sensed brightness temperatures for atmospheric and emissivity effects. Facets of the 3D model not seen from the tower were gap filled statistically with data from other areas. Roofs were the warmest in the daytime and coolest at night, while the ground followed an opposite trend. Wall surface temperatures fell in-between with variation dependent on orientation and material. T0,C followed the trends in Voogt & Oke (1997), lower than roof and ground surface temperatures but higher than wall surface temperatures during the daytime, with an opposite trend at night.\nA narrow field of view sensor and a hemispherical radiometer were simulated at varying locations in and above the canyon. Recovered simulated brightness temperatures showed that T0,C was exceeded during the daytime by simulated at-nadir brightness temperatures TB,nad by up to 2.2 K and undervalued by up to 1.6 K at night due to emissivity and anisotropic effects (-0.7 K and +2.9 K respectively). This difference is due to anisotropic effects which were up to 3.5 K for this urban structure, following street canyon orientation.\nSimulations of radiometers with hemispherical views showed that the horizontal placement of a radiometer measuring L\u00E2\u0086\u0091 in and above a street canyon causes the recovered flux to vary by up to 120 W m-2. Horizontal variation was reduced when the radiometer is placed at sufficient height above the ground surface (5 times building height), where simulated L\u00E2\u0086\u0091 measurements converge to a single value. Surface temperatures determined from L\u00E2\u0086\u0091 are in approximate agreement with T0,C but showed an RMSE of 1.8 K which is larger than the RMSE between T0,C and TB,nad (1.4 K)."@en . "https://circle.library.ubc.ca/rest/handle/2429/43830?expand=metadata"@en . " THE EFFECT OF PREFERENTIAL VIEW DIRECTION ON MEASURED URBAN SURFACE TEMPERATURE by CHRISTOPHER DAVID ADDERLEY B.Sc., The University of British Columbia, 2009 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE STUDIES (Geography) THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver) December 2012 \u00C2\u00A9 Christopher David Adderley, 2012 ii Abstract This study aimed to determine the complete surface temperature T0,C of an urban system and quantify the bias of sensors with preferential views (satellite, hemispherical) on sensed urban surface temperature. A thermal camera emplaced on a tower in a street canyon in Vancouver, Canada recorded panoramas of surface temperature. 
Techniques from micrometeorology and computer vision/graphics were combined to project these data onto a 3D model of the urban structure surrounding the tower, derived from photogrammetry and airborne LiDaR data, and to correct sensed brightness temperatures for atmospheric and emissivity effects. Facets of the 3D model not seen from the tower were gap-filled statistically with data from other areas. Roofs were the warmest in the daytime and coolest at night, while the ground followed an opposite trend. Wall surface temperatures fell in-between, with variation dependent on orientation and material. T0,C followed the trends in Voogt & Oke (1997), lower than roof and ground surface temperatures but higher than wall surface temperatures during the daytime, with an opposite trend at night.

A narrow field of view sensor and a hemispherical radiometer were simulated at varying locations in and above the canyon. Recovered simulated brightness temperatures showed that T0,C was exceeded during the daytime by simulated at-nadir brightness temperatures TB,nad by up to 2.2 K and undervalued by up to 1.6 K at night due to emissivity and anisotropic effects (-0.7 K and +2.9 K respectively). This difference is due to anisotropic effects, which were up to 3.5 K for this urban structure, following street canyon orientation.

Simulations of radiometers with hemispherical views showed that the horizontal placement of a radiometer measuring L↑ in and above a street canyon causes the recovered flux to vary by up to 120 W m-2. Horizontal variation was reduced when the radiometer is placed at sufficient height above the ground surface (5 times building height), where simulated L↑ measurements converge to a single value. Surface temperatures determined from L↑ are in approximate agreement with T0,C but showed an RMSE of 1.8 K, which is larger than the RMSE between T0,C and TB,nad (1.4 K).

Table of Contents

Abstract .................... ii Table of Contents .................... iii List of Tables .................... vi List of Figures .................... vii List of Symbols and Abbreviations .................... xi Acknowledgments .................... xvi 1 Introduction .................... 1 1.1 The Urban Surface .................... 1 1.2 The Urban Energy Balance .................... 2 1.3 Defining and Measuring Urban Surface Temperatures .................... 6 1.3.1 Defining Urban Surface Temperatures ....................
6 1.3.2 In-Situ Measurement ......................................................................................... 7 1.3.3 Remote Sensing Measurements ........................................................................ 7 1.3.4 Instrumentation ................................................................................................. 8 1.3.5 Thermal Anisotropy .......................................................................................... 9 1.4 Research Objectives ..................................................................................................... 10 1.5 Organization of Thesis ................................................................................................. 11 2 Data Collection ........................................................................................................................ 12 2.1 Study Area .................................................................................................................... 12 2.1.1 Vancouver-Sunset Neighbourhood ................................................................. 12 2.1.2 Elgin Street Canyon ........................................................................................ 13 2.2 Measurement Campaign............................................................................................... 15 2.2.1 Instrumentation ............................................................................................... 15 2.2.2 Thermal Data Capture ..................................................................................... 16 2.2.3 Additional Instrumentation ............................................................................. 21 iv 2.2.4 Local Meteorological Conditions .................................................................... 21 2.3 3D Structure Recovery ................................................................................................. 23 2.3.1 DGPS Data Collection .................................................................................... 24 2.3.2 LiDaR Data Collection .................................................................................... 26 2.3.3 Photogrammetric Data Collection ................................................................... 29 3 Data Processing ....................................................................................................................... 34 3.1 3D Model Construction ...................................................................................................... 34 3.1.1 Photogrammetric Model Construction ............................................................ 34 3.1.2 3D Model Assembly ....................................................................................... 37 3.2 Thermal Scanner Position Recovery ............................................................................ 42 3.2.1 Thermal Scanner Lens Calibration.................................................................. 42 3.2.2 Thermal Scanner Position Recovery Algorithm ............................................. 42 3.2.3 Position Recovery Implementation ................................................................. 47 3.3 3D Model Properties .................................................................................................... 51 3.3.1 Geometric Information .................................................................................... 
52 3.3.2 Surface Information ........................................................................................ 58 3.3.3 Sky View Factor Simulation ........................................................................... 61 3.4 Data Projection ............................................................................................................ 63 3.4.1 Panorama Frame Orientation .......................................................................... 64 3.4.2 Projection ........................................................................................................ 67 3.5 Correction and Gap Filling ........................................................................................... 70 3.5.1 Atmospheric Correction .................................................................................. 70 3.5.2 Emissivity Correction ..................................................................................... 71 3.5.3 Gap Filling ...................................................................................................... 73 4 Analysis of Canyon Surface Temperatures .......................................................................... 76 4.1 Elgin Street Canyon Morphometry .............................................................................. 76 v 4.2 Observed Surface Temperatures .................................................................................. 80 5 Modeling of Preferentially Viewed Surface Temperatures ................................................. 91 5.1 Simulation Setup .......................................................................................................... 91 5.2 Planar Sensor View ...................................................................................................... 94 5.2.1 Setup and Rendering ....................................................................................... 94 5.2.2 Comparison of Nadir Temperatures ................................................................ 97 5.2.3 Comparison of Planar Sensor Temperatures ................................................... 99 5.3 Hemispherical View ................................................................................................... 106 5.3.1 Setup and Rendering ..................................................................................... 106 5.3.2 Afternoon (13:30) ......................................................................................... 109 5.3.3 Evening Transition (18:30) ........................................................................... 110 5.3.4 Late Night (00:30) ......................................................................................... 112 5.3.5 Early Morning Transition (6:30) ................................................................... 113 5.3.6 Impact on Radiometer Placement ................................................................. 115 5.3.7 Comparison to In-Situ Measurements ........................................................... 117 6 Conclusions ....................................................................................................................... 119 6.1 Key Findings and Contributions ................................................................................ 119 6.2 Future Work and Outlook .......................................................................................... 121 6.3 Methodological Improvements .................................................................................. 
122 Works Cited ......................................................................................................................................... 124 vi List of Tables 1.1: Thermal properties of selected urban materials .......................................................................... 4 2.1: Properties of the urban surface in the Vancouver-Sunset neighborhood calculated in a 500 m radius from Sunset Tower ....................................................................................................... 13 2.2: Meteorological conditions during the experiment from in-canyon measurements and from Sunset Tower. ............................................................................................................................ 22 2.3: Calibration parameters for Nikon D100 .................................................................................... 30 3.1: Photogrammetry total point errors ............................................................................................ 36 3.2: Summarized accuracy measures for 3D model ......................................................................... 40 3.3: Inventory of materials found in Elgin Street canyon with emissivities .................................... 59 3.4: Variables and intervals used in MODTRAN runs..................................................................... 71 3.5: Maximum, minimum and mean atmospheric correction values for the entire model area ....... 72 3.6: Maximum and mean emissivity correction values for the entire model area. ........................... 74 4.1: Calculated urban geometric measures for the modeled Elgin Street canyon ............................ 78 4.2: Comparison of selected parameters from the Elgin Street 3D model to parameters defined for the entire Vancouver-Sunset neighborhood. ............................................................................. 79 4.3: Average sky view factors of facet types in the Elgin Street canyon ......................................... 79 4.4: Abundances of materials in the Elgin Street canyon, divided by facet type. ............................ 80 4.5: 24-hour averaged surface temperatures for all facet types in the canyon ................................. 85 4.6: Summarized area-weighted mean maximum and minimum surface temperatures with calculated mean diurnal amplitude for canyon materials, divided by facet type and orientation ........................................................................................................................... 90 5.1: Comparison between canyon subsets used in Sections 4.1 and 5.1. ......................................... 94 5.2: Values of TB,nad and T0,C as calculated from the 3D model and the simulated sensor views. .... 98 5.3: Heights of convergence zC for all profiles and timesteps ........................................................ 115 5.4: RMSE of selected horizontal positions for increase radiometer altitude. .............................. 115 vii List of Figures 1.1: Schematic representation of an urban energy balance ................................................................ 2 1.2: Geometric modification of radiation exchanges in a street canyon ............................................ 5 1.3: Schematic representations of various urban surface temperature definitions. ............................ 6 1.4: Difference TB - T0 for combinations of \u00CE\u00B5 and differences Ta - T0 ................................................ 
8 1.5: Examples of different simulated views of an idealized urban surface ..................................... 10 2.1: Map showing instruments emplaced in the Sunset study area .................................................. 13 2.2: Typical form of two-story residences in the Elgin Street canyon ............................................. 14 2.3: Typical spectral response curve for Thermovision A40 scanner .............................................. 16 2.4: Experimental setup in the Elgin Street canyon ........................................................................ 17 2.5: First frame of panorama at 13:30 with reflective tape labeled. ................................................. 17 2.6: Approximate location of tower and other instruments during the experiment .......................... 18 2.7: Rotation of thermal scanner ...................................................................................................... 19 2.8: Examples of frames from the thermal panoramas ..................................................................... 20 2.9: Meteorological conditions on September 14 and 15, 2008 ....................................................... 23 2.10: Locations of collected DGPS points ......................................................................................... 25 2.11: TIN generated from LiDaR point cloud .................................................................................... 27 2.12: Illustration of TIN error patterns ............................................................................................... 28 2.13: Nikon D100 with 28 mm lens ................................................................................................... 29 2.14: Top view of schematic example house with idealized locations of ground-level camera stations for a single house. ..................................................................................................................... 31 2.15: Locations at which the hydraulic tower was emplaced for visible photography ...................... 32 2.16: Examples of street canyon photography ................................................................................. 33 3.1: Steps for assembling photogrammetric model of House ID 01 ................................................ 35 3.2: House ID 14 from the east, showing bushes and trees making precise photogrammetry difficult ...................................................................................................................................... 37 viii 3.3: Flowchart describing steps for the assembly of the canyon 3D model ..................................... 37 3.4: Imported House ID 01 showing three common import errors, Shaded model of House ID 01 38 3.5: Examples of techniques used for extrapolating missing triangles in house models, example of interpolating missing triangles or vertices for a window on the same house ........................ 39 3.6: Translated LiDaR ground surface with RTK points shown as blue dots ................................. 40 3.7: Completed Elgin Street 3D model ............................................................................................ 41 3.8: Flowchart describing steps for the determination of the thermal scanner position ................... 44 3.9: and defined in the camera's image coordinate system ................................................... 
45 3.10: Definition of angles in the camera location algorithm .............................................................. 47 3.11: Photos selected from the first panorama for position recovery around the tower location ...................................................................................................................................... 48 3.12: Iteration results for fixed x = 1.35, y = -8.635, varying angle and z position ........................ 49 3.13: Flowchart describing surface and geometric property derivation from 3D model ................... 51 3.14: Conceptual image of UVW unwrapping ................................................................................... 52 3.15: Examples of triangulated House ID 01 and ground surface models and their UVW texture maps .......................................................................................................................................... 53 3.16: Texture maps showing pixel xyz coordinates and pixel area for House ID 01 .......................... 54 3.17: Texture maps showing pixel azimuth for House ID 01 and ground surface ............................. 55 3.18: Texture maps showing pixel facet type for House ID 01 and ground surface .......................... 55 3.19: Texture maps showing pixel spherical coordinates for House ID 01 and ground surface ....................................................................................................................................... 56 3.20: Texture maps showing pixel to scanner distance for House ID 01 and ground surface ........... 57 3.21: Texture maps showing pixel occlusion for House ID 01 and ground surface ........................... 58 3.22: Texture maps showing material classification for House ID 01 and ground surface ................ 59 3.23: Texture maps showing pixel emissivity for House ID 01 and ground surface ......................... 60 3.24: Set up for SVF simulation in 3D Studio Max ........................................................................... 61 3.25: Simulated SVF values for each of the basins compared to the analytical curve predicted for a cylindrical basin. ....................................................................................................................... 61 ix 3.26: 3D model with added simple geometric objects representing dense vegetation. ...................... 62 3.27: Texture maps showing pixel sky view factor for House ID 01 and ground surface ................. 63 3.28: Keypoints generated for SIFT algorithm for Panorama 1 ......................................................... 64 3.29: Panorama 1, Frames 7 and 8 with both image centers (derived from SIFT and RANSAC) marked on each image. .............................................................................................................. 66 3.30: Frame \u00CE\u00B1F angle as calculated by SIFT without keyframes ........................................................ 67 3.31: Conceptual diagram of pixel projection .................................................................................... 68 3.32: Projected brightness temperature maps for House ID 01 and ground surface .......................... 69 3.33: Clipped brightness temperature maps for House ID 01 and ground surface ............................. 70 3.34: Magnitude of atmospheric correction for House ID 01 and ground surface ............................. 
71 3.35: Magnitude of emissivity correction for House ID 01 and ground surface ................................ 73 3.36: Analysis masks for House ID 01 and ground surface ............................................................... 75 3.37: Gap-filled thermal texture sheets for House ID 01 and ground surface .................................... 76 4.1: View of the Elgin Street 3D model showing the subset of the canyon used in calculations in Chapter 4 ................................................................................................................................... 77 4.2: Area-weighted average facet temperatures over the course of the experiment as well as the complete surface temperature.................................................................................................... 81 4.3: Area-weighted wall temperatures divided by facet orientation ................................................. 83 4.4: Difference of area-weighted mean wall temperatures separated by facet orientation from 4- facet mean wall temperature. .................................................................................................... 84 4.5: Area-weighted average temperatures of the two most common roof material types ................ 86 4.6: Area-weighted temperatures of the ground surface, divided into concrete and grass surfaces. ..................................................................................................................................... 86 4.7: Area-weighted temperature of canyon vegetated surfaces compared to air temperature measured at 28 cm ..................................................................................................................... 87 4.8: Area-weighted wall temperatures, divided by material type for the most common wall materials. ................................................................................................................................... 88 5.1: Examples of exported indexed Lpx maps for House ID 01 and the ground surface ................... 92 5.2: Canyon 3D model, modified to be repeatable ........................................................................... 93 x 5.3: Schematic representations of view directions for the planar sensor view ................................. 95 5.4: Examples of planar sensor renderings for two different timesteps ........................................... 96 5.5: Diurnal course of T0,C compared to TB,nad. ................................................................................. 97 5.6: Polar plots of simulated airborne sensor temperature versus computed complete surface temperature from 13:30 PST to 18:30 PST on September 14th ................................................ 99 5.7: Polar plots of simulated airborne sensor temperature versus computed complete surface temperature from 19:30 PST on September 14th to 0:30 PST on September 15th ................. 101 5.8: Polar plots of simulated airborne sensor temperature versus computed complete surface temperature from 1:30 PST to 6:30 PST on September 15th .................................................. 103 5.9: Polar plots of simulated airborne sensor temperature versus computed complete surface temperature from 7:30 PST to 12:30 PST on September 15th. ............................................... 104 5.10: Deviation of all simulated view direction temperatures from T0,C by timestep ....................... 
105 5.11: Anisotropy plotted over the entire diurnal course ................................................................... 106 5.12: Schematic representations of view directions for a hemispherical sensor .............................. 107 5.13: Examples of hemispherical sensor renderings for two different timesteps ............................. 108 5.14: Recovered Lh from a simulated hemispherical sensor at various locations for 13:30, September 14 th ........................................................................................................................ 109 5.15: Recovered Lh from a simulated hemispherical sensor at various locations for 18:30, September 14 th ......................................................................................................................... 111 5.16: Recovered Lh from a simulated hemispherical sensor at various locations for 00:30, September 15 th ......................................................................................................................... 112 5.17: Recovered Lh from a simulated hemispherical sensor at various locations for 6:30, September 15 th ......................................................................................................................... 114 5.18: Comparison of T0,nad, T0,h, and T0,C for the four simulated timesteps. ...................................... 116 5.19: Comparison between longwave radiation Lh incident at simulated sensor with simultaneous measurements of L\u00E2\u0086\u0091 from Sunset Tower. ................................................................................ 117 xi List of Symbols and Abbreviations Symbol Units Description AT,f m 2 Total surface area covered by a facet type AT,m m 2 Total surface area covered by a material type Apx m 2 Area of a pixel B degrees Thermal scanner tilt angle C J m -2 K -1 Heat capacity Ca J m -3 K -1 Heat capacity of air Cxyz m Thermal scanner location d m Texture sheet scanner-pixel line of sight distance dimg pixels Euclidean distance of pixel from image centre dx pixels Pixel change from the centre of first image to centre of second (x) dxang degrees Angular change from centre of first image to centre of second (longitude, global spherical coordinate system) dy pixels Pixel change from the centre of first image to centre of second (y) dyang degrees Angular change from centre of first image to centre of second (latitude, global spherical coordinate system) hb m Height of a simulated basin k W m -2 K -1 Thermal conductivity K\u00E2\u0086\u0091 W m-2 Upward short-wave radiation flux K\u00E2\u0086\u0093 W m-2 Downward short-wave radiation flux L\u00E2\u0086\u0091 W m-2 Upward long-wave radiation flux xii L\u00E2\u0086\u0093 W m-2 Downward long-wave radiation flux W m -2 Downward longwave flux at Sunset Tower Lh W m -2 Simulated hemispherical sensor incident long-wave radiation Lp W m -2 Simulated planar sensor incident long-wave radiation Lpx W m -2 Long-wave radiation emitted by a pixel Lpx,h W m -2 Long-wave radiation recovered by a simulated hemispherical sensor pixel Lpx,p W m -2 Long-wave radiation recovered by a simulated planar sensor pixel Ly m Characteristic along-canyon inter-building spacing p1 pixels Pixel location of a point p2 pixels Pixel location of a point degrees Pixel latitude and longitude (image polar coordinate system) degrees Pixel latitude and longitude (image polar coordinate system) W m-2 Net radiation flux W m -2 Latent heat flux W m -2 Anthropogenic heat flux W m -2 Sensible heat flux ra s 
m -1 Aerodynamic resistance for sensible heat rb m Radius of a simulated basin RH % Relative humidity T0 K Surface temperature T0,C K Complete surface temperature T0,f K Area-weighted temperature of a facet type xiii T0,m K Area-weighted temperature of a material class T0,px K Surface temperature of a pixel Ta K Air temperature TB K Brightness temperature TB,a,px K Brightness temperature of a pixel, corrected for atmospheric effects TB,h K Simulated hemispherical sensor brightness temperature TB,nad K Simulated planar sensor brightness temperature at nadir TB,p K Simulated planar sensor brightness temperature TB,px K Brightness temperature of a pixel Tcanyon K Average surface temperature of the canyon Tcorr K Brightness temperature of a pixel, corrected for atmospheric effects and emissivity effects. V m 3 m -2 Normalized building volume wi - Simulated hemispherical sensor pixel weight Wx m Characteristic canyon width x,y,z m Pixel x,y,z coordinates (global Cartesian coordinate system) xR,yR,zR m Rotated pixel x,y,z coordinates (global Cartesian coordinate system) zC m Height of convergence of Lh curves for a timestep zH m Area-weighted building height zH/Wx - Canyon height to width ratio \u00CE\u00B1 - Albedo \u00CE\u00B1F degrees Scanner frame centre pixel azimuth angle xiv \u00CE\u00B1p degrees Planar sensor image azimuth degrees Pixel longitude (global spherical coordinate system) \u00CE\u00B2 - Bowen ratio \u00CE\u00B2p degrees Planar sensor image off-nadir angle \u00CE\u00B2F degrees Scanner frame centre pixel tilt angle degrees Pixel latitude (global spherical coordinate system) degrees Rotation about a horizontal axis degrees Theoretical point angular separation (longitude, global spherical coordinate system) degrees Actual point angular separation (longitude, global spherical coordinate system) degrees Theoretical point angular separation (latitude, global spherical coordinate system) degrees Actual point angular separation (latitude, global spherical coordinate system) W m -2 Advected heat flux W m -2 Storage heat flux \u00CE\u00B5 - Blackbody emissivity \u00CE\u00B5px - Emissivity of a pixel degrees Texture sheet pixel elevation angle (latitude, global spherical coordinate system) \u00CE\u00B8i degrees Pixel longitude (image polar coordinate system) \u00CE\u00BBb m 2 m -2 Plan area ratio of buildings \u00CE\u00BBc m 2 m -2 Complete aspect ratio xv \u00CE\u00BBi m 2 m -2 Plan area ratio of impervious ground surfaces \u00CE\u00BBj m 2 m -2 Plan area ratio of impervious ground and building surfaces \u00CE\u00BBv m 2 m -2 Plan area ratio of vegetation \u00C2\u00B5 J m -2 s -1/2 K -1 Thermal admittance \u00CF\u0081 degrees Off-nadir pixel angle (image polar coordinate system) \u00CF\u0083 J s-1 m-2 K-4 Stefan-Boltzmann constant degrees Texture sheet pixel azimuth angle (longitude, global spherical coordinate system) \u00CF\u0086i degrees Pixel latitude (image polar coordinate system) \u00CE\u00A8 - Sky view factor xvi Acknowledgements This work would not have been possible without the encouragement and generosity of my supervisor, Dr. Andreas Christen. Throughout this work, Andreas was supportive and understanding and was instrumental in keeping me on track and motivated through some of the more frustrating aspects of the study. I would also like to that my supervisory committee members, Dr. Brett Eaton (UBC Geography) and Bob Woodham (UBC Computer Science), who provided insights, technical assistance and sounding boards for ideas. 
In addition, I would like to thank James Voogt (University of Western Ontario) for providing assistance in defining my research goals and for his work on anisotropy in general, which was instrumental in framing this thesis. Fred Meier (TU Berlin) provided MODTRAN runs for atmospheric corrections. This study was supported by the Canadian Foundation for Climate and Atmospheric Sciences (CFCAS) as part of the Network Grant 'Environmental Prediction in Canadian Cities' (EPiCC) and by NSERC Discovery Grants (A. Christen, J.A. Voogt). Selected equipment was provided by Environment Canada and M. Church (UBC Geography). I also thank N. Coops (UBC Forestry) and his research group for providing and processing the LiDaR data. I would also like to thank Frederic Chagnon, Ben Crawford, Adrian Jones, Rick Ketler, Ivan Liu, Kate Liss, Tim Oke, Chad Siemens, and Derek van der Kamp for assistance with the experimental portions of the study (both in 2008 and 2011). A special thanks goes out to my lab-mate Carmen Emmel for putting up with my endless rants about MATLAB and 3D geometry.

The support of my friends and family cannot be overstated; without it this thesis might not have been completed. In particular, I thank my loving fiancée Grace for enduring endless predictions of doom for the experiment and patiently listening as I explained why a particular graph was either a complete failure or a perfect key result.

Chapter 1
Introduction

1.1 The Urban Surface

Cities and other dense settlements now house the great majority of the world's population. Canada, for example, has seen a significant shift - since the start of the 20th century, the percentage of people living in urban areas has increased from 37 to 87% (Statistics Canada, 2009). Because of this ongoing urbanization, it is important to understand the impacts of cities on the environment, with a focus on the atmosphere. Furthermore, it is crucial to predict how the atmospheric environment affects health, safety and operations in cities during extreme events as well as in typical day-to-day situations. Climate has an immediate and visible impact on human health, which is perceived by the urban population as thermal comfort. Climate has an impact on heating and cooling demand, and is therefore relevant for energy production, load prediction and energy conservation measures in cities. Indirectly, the urban atmosphere also controls the dispersion of air pollutants, the lighting environment, and the urban hydrology (droughts, floods). Therefore, consideration of urban climate effects - in particular the exchange of energy between an urban surface and the atmosphere - is relevant to resilient and sustainable urban design.

It has long been recognized that the urban climate is markedly different from its rural equivalent. Luke Howard was the first to demonstrate that air temperature can be significantly warmer in a city than in the surrounding countryside, thus effectively launching the field of urban climatology (Howard, 1833). Since then, our understanding of cities and their effects on the atmosphere has increased greatly. The urban heat island concept in particular has been heavily documented and studied, and several types of heat island have been defined. We now differentiate between the surface urban heat island (SUHI), which refers to surface temperatures, and the canopy layer urban heat island (CLUHI), which refers to air temperatures (Oke, 1982).
Other effects of urban areas on the environment include increases in downwind precipitation (Lowry, 1998) and storm initiation (Masson, 2005), to say nothing of the large body of literature dedicated to urban effects on air quality and pollution. In particular, the issue of pollution quantification has led to several important measurement campaigns, such as METROMEX near St. Louis (Changnon, 1978), MEDCAPHOTTRACE over Athens (Klemm, 1995), ESQUIF and ESCOMPTE over Paris and Marseille (Cros et al., 2003; Menut et al., 2000), BUBBLE in Basel (Rotach et al., 2005), and Joint Urban 2003 in Oklahoma City (Allwine et al., 2004), many of which have yielded important results. The present study was part of the Environmental Prediction in Canadian Cities (EPiCC) Network, which was funded from 2007 to 2011 by the Canadian Foundation for Climate and Atmospheric Sciences (Voogt et al., 2007).

1.2 The Urban Energy Balance

The differences in canopy-layer climates in cities are driven primarily by changes in the components of the urban energy balance (Oke, 1988), shown schematically in Figure 1.1 and represented by Equation 1.1 below (all fluxes in W m-2). Net input from radiation Q* and the anthropogenic heat flux QF is partitioned into the sensible heat flux QH, the latent heat flux QE, the storage heat flux ∆QS and the advected heat flux ∆QA.

Q* + QF = QH + QE + ∆QS + ∆QA    (1.1)

Figure 1.1: Schematic representation of an urban energy balance (based on Oke, 1987).

The two terms QH and QE are referred to as the turbulent fluxes; assessing them requires dealing with turbulence and other properties that vary rapidly both in time and in space, particularly in the urban environment. The partitioning of the available energy into QH and QE is described by the Bowen ratio β (Equation 1.2).

β = QH / QE    (1.2)

A low β is observed with sufficient water availability, which allows evapotranspiration. In urban areas, vegetation cover is frequently reduced and impermeable surfaces dominate - this generally results in a higher β and therefore reduces QE (Oke, 1987). Due to this effect, more energy is channeled into sensible heat, resulting in a larger QH. QH is frequently modeled using a resistive approach as in Equation 1.3, where QH is linked to the surface temperature T0, the air temperature Ta, the heat capacity of air Ca and the aerodynamic resistance for sensible heat transfer ra.

QH = Ca (T0 - Ta) / ra    (1.3)

∆QS is the energy associated with change in heat storage, i.e. change in energy stored and released by the urban canopy. The thermal properties of the urban fabric are key controls on ∆QS, some of which are summarized in Table 1.1. The heat capacity C characterizes the amount of heat needed to change a material's temperature, and is generally higher in common urban materials such as concrete, asphalt and metals. Materials with higher heat capacity take longer to warm over the course of a day but store additional energy once warmed. The thermal conductivity k of a material quantifies the ability of a material to conduct heat, and varies across urban materials, though it is generally higher than in non-urban situations. Increased conductivity allows heat to travel through materials more rapidly, increasing the amount of energy storage that is available. The thermal admittance µ is calculated from k and C (as the square root of their product) and quantifies the ability of a material to accept and release heat.
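As a quick numerical illustration of this relationship, the short sketch below evaluates µ = (kC)^1/2 for a few of the materials listed in Table 1.1. The helper function and the choice of materials are illustrative only and are not part of the processing chain used in this thesis.

```python
import math

def thermal_admittance(k, c):
    """Thermal admittance mu = sqrt(k * C).

    k : thermal conductivity (W m-1 K-1)
    c : volumetric heat capacity (J m-3 K-1)
    Returns mu in J m-2 s-1/2 K-1.
    """
    return math.sqrt(k * c)

# k and C values as listed in Table 1.1 (based on Oke, 1987)
materials = {
    "asphalt":      (0.75, 1.94e6),
    "stone":        (2.19, 2.25e6),
    "wood (dense)": (0.19, 1.52e6),
}

for name, (k, c) in materials.items():
    print(f"{name:12s} mu = {thermal_admittance(k, c):6.0f} J m-2 s-1/2 K-1")
```

For these three materials the computed values reproduce the admittances tabulated in Table 1.1 to within rounding.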
µ for urban materials is higher, indicating that they warm and cool more slowly and store more heat. The material differences have a significant effect on the patterns of heat uptake and release in cities and modulate the urban surface temperature. A study in Mexico City (Oke, Jauregui, & Grimmond, 1999) found that the urban surface's energy uptake was up to 60% of Q*. It has been found that ∆QS can be parameterized by measuring or calculating the thermal inertia (calculated from the thermal conductivity, density and heat capacity of a material; Xue & Cracknell, 1995) - high values lead to slow temperature change and lower values lead to faster temperature change. Thermal inertia, however, does not measure the maximum storage of a system and therefore needs to be coupled with estimations of urban canopy heat storage to be effective (Xue & Cracknell, 1995).

Table 1.1: Thermal properties of selected urban materials (based on Oke, 1987).

Material        Heat capacity C         Thermal conductivity k   Thermal admittance µ
                (J m-3 K-1 x 10^6)      (W m-1 K-1)              (J m-2 s-1/2 K-1)
Asphalt         1.94                    0.75                     1205
Concrete        2.11                    0.28                     1785
Stone           2.25                    2.19                     2220
Brick           1.37                    0.83                     1065
Metal (steel)   3.93                    53.3                     14475
Wood (dense)    1.52                    0.19                     535
Glass           1.66                    0.74                     1110

∆QA, the advective heat flux, is a term representing any addition or removal of energy to the urban system, as transported by the mean wind or by turbulence, from the neighboring areas. Usually, this is assumed to be negligible (Oke, 1987). However, the urban system's complex morphometry can channel or redirect winds, which creates unexpected mixing patterns (Oke, 1987) that can move heat in ways that are distinct to the urban setting.

Of the two energy input terms, QF is unique to cities, representing the release of energy to the urban environment from the combustion of fuels, building heating and other human processes. QF makes an important addition to the energy budget in dense cities at certain times of year (potentially exceeding the net value of Q*) (Oke, 1988); it has a distinct metabolic pattern driven by time of day and seasonality. The yearly average QF for the study area in question was estimated at 12.2 W m-2 (van der Laan et al., 2010).

The final input term Q*, the net radiation flux, is the most significant input term, and is defined in Equation 1.4, where K↓ is the short-wave irradiance on the urban surface, K↑ is the short-wave reflectance, L↓ is the long-wave irradiance and L↑ is the sum of long-wave emittance and reflectance (all fluxes in W m-2).

Q* = K↓ - K↑ + L↓ - L↑    (1.4)

Attenuation of K↓ in cities due to airborne pollutants decreases the direct-beam component of K↓, though this has been shown to be mostly recovered by the urban surface as diffuse radiation (Sprigg & Reifsnyter, 1972). Studies have found that there exists a large variance in total attenuation of K↓ over cities (Oke, 1988), where incoming radiation can be reduced by up to 20% (in large industrialized cities) or by a negligible amount (in well-mixed mesoscale situations). Recent work in Vancouver, B.C., Canada (where the current study takes place) has shown that K↓ over the city is reduced by about 5% compared to a rural reference site.
However, most of this effect has been attributed to increased cloud cover over the city, as under clear-sky conditions the attenuation has been quantified as less than 1% (Christen, Oke, & Voogt, 2011).

Figure 1.2: Geometric modification of radiation exchanges in a street canyon. a) Shortwave, showing multiple opportunities for ray absorption due to canyon trapping. b) Long-wave, showing increased view factors of canyon objects contributing to ray re-absorption.

Though attenuation in some situations can be important, ray path modification by the urban canopy has by far the largest influence on the receipt and distribution of K↓. Shortwave radiation has more chances for absorption when reflected repeatedly in an urban canyon (Figure 1.2a). This effectively decreases the albedo of an urban surface significantly, with most cities converging on a value of 0.15 for the entire surface (Oke, 1988; Small, 2005). Shading of surfaces also modifies the vertical and horizontal distribution of shortwave in the canyon. This complex geometry influences outgoing longwave radiation emission in a similar fashion, with additional trapping of emitted photons causing an increase in Q* (Figure 1.2b). The radiation balance of an urban surface can be rewritten as Equation 1.5 to show all energy inputs to the surface, taking into account the surface albedo α and using the Stefan-Boltzmann law to calculate L↑. ε is the emissivity of the ground surface and T0 the surface temperature.

Q* = (1 - α) K↓ + L↓ - εσT0^4 - (1 - ε) L↓    (1.5)

It is therefore evident that the surface temperature T0 is closely controlled by the radiation and energy balance. In essence, all physically based micro- and macro-scale urban climate models use a T0 for the urban surface as a parameter or boundary condition to model longwave emittance L↑ (via Equation 1.5), heat storage in the urban fabric ∆QS, and the turbulent heat flux QH (Equation 1.3). Therefore, T0 is a key parameter needed to model the urban energy balance.

1.3 Defining and Measuring Urban Surface Temperatures

1.3.1 Defining Urban Surface Temperatures

There is no strong agreement in urban climatology about how to define the urban surface temperature due to the strong heterogeneity of the surface, both in materials and form (Voogt & Oke, 1997). Possible approaches are shown in Figure 1.3, defining many different surface temperatures. Of particular interest is the complete surface temperature T0,C, originally defined by Voogt & Oke (1997), an area-weighted measure that takes in all facets of an urban surface (Figure 1.3a). T0,C is useful as it integrates individual surface temperatures over all facets, something that the other temperature definitions in Figure 1.3 do not, as they omit various facets or parts of the system. However, as a detailed description of the temperature of all facets in a canyon is required for its computation, the complete surface temperature is not often available (measured) in detail.

Figure 1.3: Schematic representations of various urban surface temperature definitions. a) Complete surface temperature. b) Ground-level surface temperature. c) Roof-level surface temperature. d) Plan-view surface temperature (based on Voogt & Oke, 1997).

Many models use a single temperature value for the urban surface, while others use more complex formulations such as three temperatures (roof, wall and ground), for example the TEB (Town Energy Balance) model (Masson, 2000).
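Because T0,C is referred to throughout the following chapters, a minimal sketch of the area weighting it implies is given below. The facet classes, areas and temperatures are hypothetical placeholders used only to show the bookkeeping, not values measured in this study.

```python
# Complete surface temperature as an area-weighted mean over all facets:
# T0,C = sum(A_i * T_i) / sum(A_i), with the sum running over roofs, walls and ground.
facets = [
    # (facet type, area in m2, mean surface temperature in K) -- illustrative only
    ("roof",    850.0, 305.2),
    ("wall",   1400.0, 296.8),
    ("ground", 1200.0, 301.5),
]

total_area = sum(area for _, area, _ in facets)
t0_complete = sum(area * temp for _, area, temp in facets) / total_area
print(f"T0,C = {t0_complete:.1f} K over {total_area:.0f} m2 of complete surface area")
```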
Other models extend this approach by separating wall temperatures into compass directions (north-, south-, east- and west-facing walls), for example Martilli et al. (2002). In all cases, a temperature value is required to set initial conditions, which is often acquired from measurements of the surface temperature for a particular urban setting, or can be initialized via a first guess.

1.3.2 In-Situ Measurement

To measure the urban surface temperature, thermocouples or thermistors can be used, but they are limited to point measurements, which causes issues as the urban fabric is strongly heterogeneous. Additionally, the introduction of a sensor composed of different materials makes such measurements prone to large errors.

1.3.3 Remote Sensing Measurements

The problem of heterogeneity can be overcome by using remote sensing approaches that rely on measurements of the electromagnetic energy incident on a sensor. Objects emit radiation according to Planck's Law, which describes the spectral distribution of energy radiated by an ideal blackbody as a function of wavelength. The wavelength of emission varies with the temperature of the object; warmer objects emit more energy at shorter wavelengths. By integrating Planck's Law over all wavelengths and solid angles, the total amount of energy emitted by a blackbody can be found (the Stefan-Boltzmann law, Equation 1.6). Therefore, if the amount of radiation incident on a sensor is measured, the temperature of the object can be inferred. The presence of the atmosphere complicates this; a sensor will be unable to precisely measure radiation emitted from an object due to absorption and reemission by the intervening air. Therefore, a remote sensor typically measures the radiance in a very narrow band where the atmosphere is highly transparent (8-13 µm, the atmospheric window). This is then scaled upward assuming that emission from the surface follows the Planck Law (Aronoff, 2005).

L↑ = σT0^4    (1.6)

However, most materials are 'grey' bodies and do not perfectly emit and absorb radiation - this adds complexity and means that measurement of L↑ alone does not allow retrieval of correct surface temperatures. Knowledge of the surface's emissivity ε and the downward long-wave irradiance L↓ are also required. In this situation L↑ can be decomposed as in Equation 1.7; Equation 1.6 can then be rearranged to form Equation 1.8.

L↑ = εσT0^4 + (1 - ε) L↓    (1.7)

T0 = [(L↑ - (1 - ε) L↓) / (εσ)]^(1/4)    (1.8)

ε and L↓ are frequently unavailable, so a brightness temperature TB is defined as the theoretical equivalent temperature for the measured L↑, assuming that the surface is a perfect blackbody (Equation 1.9).

TB = (L↑ / σ)^(1/4)    (1.9)

For situations where the surface material is a blackbody, T0 = TB, but as ε decreases and in situations where Ta and T0 are significantly different, the two temperatures are not the same. For many materials, ε is between 0.9 and 1.0 (FLIR Systems, 2004), and with low cloud cover TB and T0 are roughly similar (Figure 1.4). This is not the case for low-emissivity materials or clear sky conditions where T0 >> Ta.

Figure 1.4: Difference TB - T0 for combinations of ε and differences Ta - T0. Deviation is low where T0 = Ta but increases as the difference between Ta and T0 increases.

1.3.4 Instrumentation

In most cases, remote sensing measurements are the simplest way to recover temperature from an urban surface, as such sensors have an inherent ability to average temperatures across their fields of view (solving the primary issue with point measurements by thermocouples).
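The retrieval described by Equations 1.6 to 1.9 can be condensed into a few lines. The sketch below recovers T0 from a measured upwelling long-wave flux given an assumed emissivity and downward irradiance, and compares it with the brightness temperature obtained under the blackbody assumption; the input numbers are illustrative, not observations from this study.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m-2 K-4

def surface_temperature(l_up, emissivity, l_down):
    """Invert L_up = eps * sigma * T0^4 + (1 - eps) * L_down for T0 (Eqs. 1.7-1.8)."""
    return ((l_up - (1.0 - emissivity) * l_down) / (emissivity * SIGMA)) ** 0.25

def brightness_temperature(l_up):
    """Blackbody-equivalent temperature for the measured flux (Eq. 1.9)."""
    return (l_up / SIGMA) ** 0.25

# Illustrative values for a grey surface under a cool sky
l_up, eps, l_down = 465.0, 0.95, 320.0  # W m-2, -, W m-2
t0 = surface_temperature(l_up, eps, l_down)
tb = brightness_temperature(l_up)
print(f"T0 = {t0:.1f} K, TB = {tb:.1f} K, TB - T0 = {tb - t0:.1f} K")
```

With these inputs TB falls roughly a degree below T0; the deviation grows as ε decreases or as the surface and sky radiative temperatures diverge, which is the behaviour summarized in Figure 1.4.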
Radiation from ground surfaces emitted in the 8 to 13 µm region of the atmospheric window is related to brightness temperature via the Stefan-Boltzmann law (refer to Equations 1.6 to 1.9 above). With the advent of frequent measurements from satellite platforms (for example, the ASTER sensor on the Terra satellite and the ETM+ sensor on the Landsat series of satellites), low-resolution brightness temperature data are readily available for most cities on the globe, albeit at low temporal resolution (Aronoff, 2005). For higher temporal and spatial resolutions, thermal cameras mounted on helicopter platforms or fixed-wing aircraft are often used for select urban areas.

Thermal remote sensing measurements have a number of inherent issues. As indicated in Section 1.3.3, properly recovering surface temperature from brightness temperature is dependent on knowledge of ε and L↓, which can vary widely in an urban canopy (for example, asphalt has an emissivity of 0.95 while white painted materials have an emissivity of 0.9 (FLIR Systems, 2004), and sky view factor variations cause L↓ to vary). A detailed database is needed to make these corrections, which can be up to 7 K (Voogt & Oke, 1998). Additionally, users of satellite and airborne data products must be wary of atmospheric effects, where absorption and reemission of longwave radiation by the air between sensor and ground surface can bias the measurement (Meier, Scherer, Richters, & Christen, 2011). This affects satellite sensors to a larger extent than airborne sensors due to the increased volume of air between sensor platform and ground surface.

Additionally, we can consider another type of instrument used to measure long-wave radiation, a downward-facing hemispherical radiometer. In contrast to IRTs and thermal cameras, such instruments are sensitive to the entire long-wave range (from 3 to 100 µm). Though such instruments are not frequently used to infer temperature measurements, the longwave radiation received depends on the surface temperatures in the field of view, the reflected radiation, as well as the emittance of the intervening atmosphere (an effect which is more relevant outside the atmospheric window). In hemispherical radiometers, the actual sensor is planar, which creates a cosine response when long-wave radiation is incident on the sensor (rays which are perpendicular, e.g. from surfaces directly below the radiometer, are more heavily weighted than rays whose angle of incidence is large (Oke, 1987)).

1.3.5 Thermal Anisotropy

The largest challenge facing remote sensing measurements is the geometric nature of the urban surface. Due to variations in shading by buildings and vegetation, and variations in material properties, the surface viewed by a remote sensor can vary significantly depending on view direction. Thermal anisotropy affects both airborne and satellite sensors as well as hemispherical radiometers (see Figure 1.5). Viewing different surfaces affects the recovered surface temperature. Voogt and Oke (1998) found that measurement error due to thermal anisotropy exceeds that introduced by
At these angles, sensors see a different preferential view of the three-dimensional surfaces: more walls, less roofs and ground surfaces than in the nadir view (Figure 1.5b). As wall temperature varies with orientation, a remote sensor may observe different temperatures as both off-nadir angle and azimuth change. Figure 1.5: Examples of different simulated views of an idealized urban surface. a) from an airborne platform at nadir. b) from an airborne platform at an off-nadir angle. c) from a downward facing hemispherical radiometer. In this way, the parameters defining a preferential view (azimuth, off-nadir angle, and field of view) inherently defined by a remote sensor are detrimental to its ability to approximate the complete surface temperature T0,C and make comparisons between sensors. Many studies have attempted to quantify this effect via models or experimental measurements, for example (Lagourade et al., 2004, 2010). However, few studies have taken place with high resolution instrumentation placed at low altitudes near the average building height, resolving in detail the features of a street canyon. 1.4 Research Objectives The remotely sensed brightness temperature depends not only on the sensor view geometry (azimuth, off-nadir angle, field of view, height) but also on the thermal properties and the complex three- dimensional structure of the urban surface. The complete surface temperature of a canyon may not be well-represented by brightness temperatures recovered by remote sensing measurements, as it takes into account potentially unseen surfaces. The overall goal of this thesis is to quantify the bias in sensing surface temperature of a real urban surface under different remote sensor geometries (without considering atmospheric effects). To achieve this goal, the following steps are proposed: 11 1. Construct a detailed 3D surface model of an existing urban canyon with the aim of describing the materials and form of the canyon. 2. Develop and employ a methodology for projecting measured brightness temperature data recorded with a thermal scanner onto the 3D model to compute the complete urban surface temperature. 3. Simulate the field of view of a narrow field of view sensor as a function of azimuth and off-nadir angle observing the canyon and compare the values to the complete surface temperature. 4. Simulate the signal of a downward-facing hemispherical radiometer for various horizontal locations and heights above the urban surface and determine the bias relative to measurements. The study will aim to quantify the difference between complete surface temperature and the surface temperatures recovered by the simulated instruments, and attempt to make recommendations for the placement of these instruments in order to accurately quantify surface temperatures and upwelling long- wave radiation in the urban canyon. 1.5 Organization of Thesis This thesis is divided into six chapters. This chapter has introduced the background and rationale for the study, and defined objectives that will be examined in the thesis. Chapter Two provides details and background on the study area, as well as a detailed description of the methodology used during data collection with the thermal camera and for input into the 3D surface model. Chapter Three describes the steps taken to process data and transform it into a form suitable for view direction simulation. Chapter Four describes observational results derived from the processed data. 
Chapter Five describes the simulations of view directions: the methods used and the results of the simulations. Conclusions and potential improvements to the methodology are described in Chapter Six, as well as possibilities for future work.

Chapter 2 Data Collection

2.1 Study Area

2.1.1 Vancouver-Sunset Neighbourhood

The general study area is located in Vancouver, BC, Canada, an urban agglomeration located on the west coast of North America with a dense urban core and large suburban areas. The study takes place in the Sunset neighbourhood in the southeastern portion of the city, an urban hydrometeorological research area containing various long-term and short-term instrumentation for urban climate monitoring (Christen, Oke, Grimmond, Steyn, & Roth, 2013). In particular, the Vancouver-Sunset climate tower (123.0784° W, 49.2261° N, 78 m above sea level) has provided semi-continuous data on the urban surface since 1978 and has contributed greatly to our understanding of the exchange processes between the urban surface and the atmosphere (for example, Cleugh & Oke, 1986; Oke, 1982; Schmid, Cleugh, Grimmond, & Oke, 1991; Voogt & Oke, 1997, 1998, among others). A schematic of the instrumentation emplaced in the Vancouver-Sunset neighbourhood is shown in Figure 2.1.

Figure 2.1: Map showing instruments emplaced in the Sunset study area (based on Christen, Johnson, McKendry, & Oke, 2012).

The Vancouver-Sunset neighbourhood is characterized by a residential land use pattern following an orthogonal street grid layout, with some larger buildings and parks. Most residential buildings are single detached homes which can be divided by construction date: pre-1965 (1717 buildings), 1965-1990 (1544 buildings) and post-1990 (1084 buildings). Mean building volume for these types ranges from 580 to 879 m³ with mean floor area ranging from 149 to 243 m² (van der Laan et al., 2012). Useful statistics on the urban surface in the neighbourhood are presented in Table 2.1.

Table 2.1: Properties of the urban surface in the Vancouver-Sunset neighbourhood calculated in a 500 m radius from Sunset Tower (from van der Laan et al., 2012).

Symbol  Parameter  Value
λb  plan area ratio of buildings  29%
λi  plan area fraction of impervious ground  37%
zH  mean building height, m  5.13
-  Population density, persons ha⁻¹  63.1
-  Building density, buildings ha⁻¹  12.80

2.1.2 Elgin Street Canyon

The specific area chosen for this study was Elgin Street, between 45th and 46th Avenues in the 6100 block (see Figure 2.1). This location was chosen to fit specific criteria. Firstly, it is close to the Sunset meteorological tower, which provides the study with neighbourhood-scale meteorological data and incoming radiative fluxes, useful for data intercomparisons and corrections. Secondly, the area is a good example of the typical suburban form in the Sunset area (see Section 4.1 for direct morphometric comparisons), with large houses and significant yard space in the front and back. Thirdly, the canyon in question has a uniform north-south orientation, which is important as it minimizes complex solar irradiance interactions. Fourthly, it has little tall vegetation, which would complicate the proposed research. Most of the houses in the canyon are similar, with a rectangular footprint with the elongated axis perpendicular to the street and low-pitched roofs with slopes from 10° to 15° (examples in Figure 2.2).
These types of houses are common in the suburban areas of Vancouver, colloquially termed 'Vancouver Specials', and have construction dates between 1960 and 1990 (van der Laan et al., 2012). Building heights vary from 6.2 to 7.1 m in a two-floor configuration, with total building volumes varying from 520 to 750 m³. Most buildings have extensive back porches, often including a carport. Sometimes porches are enclosed or covered – this in general creates the variation in volume. The relative uniformity of the structures simplifies the complex geometry somewhat and proved helpful when creating a detailed 3D model of the study area.

Figure 2.2: Typical form of two-story residences in the Elgin Street canyon, showing simple, boxy form and low-pitched roofs (Photos: C. Adderley, June 15, 2011).

In contrast to the structural similarity, the houses in the canyon exhibit a large variety of fabrics (materials). Roofs vary between aluminum and asphalt/ceramic tile coverings, while walls show painted wood surfaces, some brick facades and wood/aluminum siding types. This heterogeneity lends itself well to an in-depth, three-dimensional analysis as the various materials will have distinct spatial and temporal thermal signatures. Front lawns are large, extending up to 9 m from the house facades, with mostly well-kept grass (height from 2-6 cm) as surface cover. Impermeable concrete sidewalks occupy a varying but small portion of the canyon. The road is of average width 9.5 m, with a surface consisting of concrete with some asphalt patches. Behind the houses, small backyards exist with varying surface cover, ranging from fully impermeable concrete/asphalt to gardens. The lane area (7 m in width) is generally impermeable asphalt, but is patchy in areas (gravel, dirt sides and potholes).

The street canyon also has few street and yard trees. This is not typical of the Vancouver-Sunset neighbourhood, where trees are frequent features. This choice was made to further simplify the complex structure of the area, as the geometric nature of trees makes them difficult to treat in a detailed 3D model. In addition, it would be practically impossible to reproduce the exact growth stages and leaf cover for the time of the experiment, and movement with wind in the canyon would prohibit precise georeferencing.

2.2 Measurement Campaign

The experimental campaign was divided into two sections: one taking place in September 2008 and another taking place in 2010-2011. The thermal imaging took place in 2008, and the three-dimensional survey was completed in 2010-2011.

2.2.1 Instrumentation

In order to measure surface temperatures, a Thermovision A40M infrared camera (FLIR Systems, Wilsonville, Oregon, USA) was used. This camera uses an uncooled microbolometer sensor to retrieve brightness temperatures from measurements of incoming longwave radiation. The device consists of an array of detectors, each composed of an infrared-sensitive material with a small current traversing it. Incident radiation from a surface changes the electrical conductivity of the element, which is translated into a radiant temperature using the Stefan-Boltzmann law. In the case of this instrument, the microbolometer is sensitive to radiation in the 7.5 to 15 µm region (FLIR Systems, 2004). The spectral response curve is shown in Figure 2.3, and shows that the spectral response of the camera is primarily concentrated between 8 and 13 µm.
This corresponds with the peak of the blackbody emission curve for terrestrial temperatures (Aronoff, 2005). The detectors are arrayed in a grid of 320 by 240 elements, with each sensor outputting a single temperature value. This creates images of 320 by 240 pixels, with a radiometric resolution of 16 bits/pixel. Each pixel has an angular resolution (instantaneous field of view) of 1.3 milliradians. At ambient temperatures of near 300 K, a sensitivity of 0.08 K is achievable, with a published ideal measurement accuracy of ±2 K for each pixel (FLIR Systems, 2004).

However, the resulting temperature values are brightness temperatures and are not necessarily representative of the actual surface temperature. They will need correction for atmospheric effects, in which the intervening atmosphere between camera and surface both absorbs and emits additional radiation, creating inaccuracy in the reading at the sensor. The magnitude of such a correction depends on the atmospheric path length to each scanner pixel. Using a geometric model of the area, the path lengths can be combined with similarly timed measurements of air temperature and humidity, and input to routines such as MODTRAN to approximate atmospheric effects (as in Meier et al., 2011).

Figure 2.3: Typical spectral response curve for a Thermovision A40M. Some variation between instruments is expected (FLIR Systems, 2011).

The emissivity of the various materials in the street canyon will also cause variations, as few objects will be perfect blackbody emitters. The user manual for the A40M scanner contains a detailed database of material emissivities that can be used to correct temperatures. Lens vignetting will also be an issue, as the wide-angle lens used concentrates radiation towards the centre of the detector array, potentially reducing radiation input and thus reported temperatures near the borders of the captured images. For the experiment, the A40M was equipped with a wide-angle lens (45 degree field of view) to capture as much of the street canyon as possible. Calibration data was collected to assist in this correction. In the laboratory, a calibration grid was set up on a wall using tape with differing thermal properties. The wall was then imaged with the A40M, and the situational geometry measured, so the location of the camera with respect to the calibration grid's points was known. This would later be used to create a model of the A40M's lens (see Section 3.2.1).

2.2.2 Thermal Data Capture

From September 14 (Sunday) to 15 (Monday), 2008, a mobile tower was emplaced in the Elgin Street canyon. The Thermovision A40M scanner was mounted atop this tower on a pan/tilt device at a height of approximately 15 metres above ground level (Figure 2.4a). The tower was emplaced such that the majority of the street canyon would be visible to the scanner (Figure 2.6).

Figure 2.4: Experimental setup in the Elgin Street canyon. a) Tower with A40M mounted at approximately 15 m viewed from the north-east. b) Eddy covariance system emplaced on lawns (Photos: A. Christen, September 14, 2008).

Data collection began at 13:15 PST on September 14, 2008 (all times in this thesis are given in PST). Every 30 minutes the scanner was rotated in a 360° panorama by manual remote control at two different tilt angles – one at approximately 65° off-nadir, another at approximately 45°.
At the start of each panorama, the scanner was oriented so that a pattern of reflective tape applied to the road surface was just visible in frame (Figure 2.5).

Figure 2.5: First frame of panorama at 13:30 with reflective tape labeled.

This ensured a degree of consistency across the panoramas. At this point, recording began, with the A40M capturing images at 1 Hz. The scanner began facing true north (0°), then was rotated clockwise to the limit of the pan/tilt device (approximately 70°). It was then rotated counter-clockwise, passing again through north, until reaching the limit in the opposite direction (95°), at which point the scanner was rotated back to 0°. The scanner was then tilted to 45° off-nadir, and rotated clockwise until reaching the pan/tilt limit, at which point data capture stopped. Rotation rate was approximately 5° s⁻¹. A schematic of the pattern of rotation is shown in Figure 2.7.

Figure 2.6: Approximate location of tower and other instruments during the experiment. The scanner has full view of the facades of the majority of the houses on the east and west sides of the street. Green icons represent trees of varying heights roughly indicated by the size of the icon. Numeric identifiers on houses refer to ID numbers that reference each house throughout the thesis.

Figure 2.7: Rotation of thermal scanner. Red line shows the rotation at approximately 68° from nadir, blue line shows rotation at approximately 45° from nadir. The start of each panorama level is marked with a circular dot.

The pattern of observation produced two panoramic image sequences per half-hour interval for the duration of the experiment. Each sequence comprised between 230 and 250 frames depending on small variations in the rate of rotation and small changes in the start of recording (see examples of multiple panoramas and frames in Figure 2.8). The final panoramas began at approximately 16:00 on September 15, 2008. During the interval between panoramas, the scanner observed a representative cross-section of the street canyon to measure high-frequency variations in surface temperature. The results of this portion of the experiment are not considered in this thesis and are part of a study investigating the temperature fluctuations and inferred flow structures in the canyon (see Christen & Voogt, 2009, 2010).

Figure 2.8: Examples of frames from the thermal panoramas. From left to right, frames 1, 45 and 65 at six hour intervals from the start of the experiment.

2.2.3 Additional Instrumentation

Two tripods mounted with meteorological stations were placed to the north of the tower on lawns to the east and west of the street (see Figure 2.6 for locations). Each station contained a sonic anemometer (CSAT3 3D sonic, Campbell Scientific Inc., Logan, UT, USA), a net radiometer (NR-Lite, Kipp & Zonen, Delft, Netherlands), a temperature probe (HMP-35, Campbell Scientific Inc., Logan, UT, USA) measuring air temperature using a thermistor and relative humidity with a capacitor, and an infrared thermometer (Apogee IRTS, Campbell Scientific Inc., Logan, UT, USA). Instruments were mounted at approximately 30 cm above ground level. These instrumentation tripods were operated for the entire duration of the experiment. This provided additional microscale meteorological data: net radiation and local brightness/air temperature and humidity values.
Though primarily collected for the purposes of the high-frequency surface temperature variation study, the data collected from these instruments are useful for correcting surface temperature errors created by atmospheric effects. Additionally, at Sunset Tower, a CNR1 four-component radiometer (Kipp & Zonen, Delft, Netherlands), a CSAT3 sonic anemometer and an open-path infrared gas analyzer (LI-COR Li-7500, LI-COR Biosciences, Lincoln, NE, USA) were emplaced and operated at 10 Hz. See Crawford, Christen and Ketler (2010) for additional information on emplacement and quality control for these instruments.

2.2.4 Local Meteorological Conditions

The experiment was performed during a period of clear-sky conditions. Generally, very few to no clouds were observed in the sky at this time (light cloud was observed during the early morning, though not over the study area), producing a near-ideal set of radiation fluxes (Figure 2.9a) that are typical for a clear-sky day. A single anomalous event is present in L↓ at 16:00, which is likely an anomalous record and is filtered out of future calculations. Longwave fluxes otherwise show expected behaviour, with an increase in L↑ during the day due to a likely increase in surface temperature. A corresponding small increase in L↓ is also observed during the daytime due to increased air temperature. Air temperatures range between 284.7 K and 298.5 K in the canyon and between 286.0 K and 297.0 K at Sunset Tower. A pattern of cooling with height is visible during the daytime, with a night-time inversion keeping cool air in the canyon.

Surface conditions on the first day were generally dry, with dewfall beginning at 20:00 on the 14th. Dew persisted in the shade on the 15th until the end of the experiment, but had mostly disappeared from sunlit lawns by 14:00. This is consistent with the behaviour of QE during the experiment on the 15th (highest in the morning and decreasing to lower values by 15:00). QH values are highest near solar noon throughout both days, and increase when QE decreases. Table 2.2 presents a summary of meteorological values at Sunset Tower and in the Elgin Street canyon.

Table 2.2: Meteorological conditions during the experiment from in-canyon measurements and from Sunset Tower.

In-canyon measurements (0.3 m):
Average Ta (K)  291.4
Maximum Ta (K)  298.5
Minimum Ta (K)  284.7
RH range (%)  52 - 90

Sunset Tower measurements (28 m):
Average Ta (K)  291.7
Maximum Ta (K)  297.0
Minimum Ta (K)  286.7
RH range (%)  49 - 90
Q* (MJ m⁻² day⁻¹)  16.2
L↑ (MJ m⁻² day⁻¹)  37.1
L↓ (MJ m⁻² day⁻¹)  29.4
K↑ (MJ m⁻² day⁻¹)  4.9
K↓ (MJ m⁻² day⁻¹)  31.9
QH (MJ m⁻² day⁻¹)  7.0
QE (MJ m⁻² day⁻¹)  2.1

Computation methods and additional details for the Sunset Tower measurements can be found in Crawford, Christen and Ketler (2010).

Figure 2.9: Meteorological conditions on September 14 and 15, 2008. a) Radiation fluxes at Sunset Tower, showing typical clear-sky conditions on both days. b) Air temperature at Sunset Tower and from instrumentation in the canyon, showing higher temperatures close to the ground during the day and cooler temperatures at night. c) Energy balance calculated from measurements at Sunset Tower, showing a larger sensible heat flux on the 14th (for information on computation of energy balance terms, see Crawford et al., 2010).
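As a rough consistency check on the daily totals in Table 2.2, the short sketch below converts the daily upwelling longwave total into an equivalent mean radiative surface temperature via the Stefan-Boltzmann law. This assumes a unit-emissivity surface and ignores reflected longwave, so it is only an order-of-magnitude illustration and not a value reported in the thesis.

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m-2 K-4
L_up_daily = 37.1e6  # daily upwelling longwave total from Table 2.2, J m-2 day-1

L_up_mean = L_up_daily / 86400.0      # mean flux density over 24 h, ~429 W m-2
T_rad = (L_up_mean / SIGMA) ** 0.25   # blackbody-equivalent temperature, ~295 K
print(f"Mean L_up = {L_up_mean:.0f} W m-2, equivalent T = {T_rad:.1f} K")
```

The result (roughly 295 K) is of the same order as the measured mean air temperatures near 291 K, which is the expected behaviour for a clear-sky period with daytime surface heating.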
2.3 3D Structure Recovery

The 3D structure of the Elgin Street area was recovered through a combination of three techniques, and this work was performed in 2011, three years after the thermal survey. However, as the 3D structure of the street canyon showed few changes, this should not be problematic (no changes or replacement of houses, no road construction). The main errors introduced by this temporal disconnect relate primarily to vegetation cover, as street plants/trees may have grown or been trimmed in the intervening time. In addition, seasonal variation creates changes in leaf cover.

During the 3D survey work, a specific accuracy goal for the final 3D model was set. As the A40M's IFOV is published at 1.3 milliradians, we can estimate the maximum size of a pixel element on the ground. Given a tower height of 15 metres and a camera tilt of 45°, and assuming a flat ground surface, an average pixel will cover approximately 9.0 cm² (if generalized to a square pixel). As most of the area in the camera's field of view will be further away than this, a spatial resolution of approximately 3.0 cm is the best that can be achieved. Therefore, our ideal goal would be to construct a 3D model accurate to approximately 3 cm.

2.3.1 DGPS Data Collection

Basic positional information was collected with a Trimble R7 differential GPS (DGPS) unit (Trimble, Sunnyvale, California, USA), comprising a base station (Trimble 5700) and a rover unit (Trimble 5800), on October 22, 2010. The base station antenna was initially mounted on a tripod and associated tribrach, and operated in continuous recording mode for approximately 20 minutes over a distinct, invariant surface point (corner of a sidewalk). This provided a stable averaged base position for further survey points in the area. The unit was then operated in real-time kinematic (RTK) correction mode, where the base station and rover unit function in tandem. The base station receives a position from visible satellites, which is compared to the position derived from the 20 minute integration, generating a correction factor. This correction is transmitted via radio to the rover unit, which applies it to positions collected at that same time. This greatly increases the accuracy of collected points, down to the centimetre level (Trimble Navigation, 2003).

Figure 2.10: Locations of collected DGPS points. Generally, sidewalks, sidewalk intersections and surface material boundaries were chosen. Some vegetation, sidewalk cracks and street signs were also marked. Colours of points are scaled to the deviation of LiDaR surface elevations from DGPS elevations. All elevations have a consistent positive offset, and many erroneous points are located near road surfaces. Consistently accurate points are located on flat ground away from possible transient error sources (cars) and vegetation.

The rover unit was used to take positions of distinct points along Elgin Street public property. Generally, corner boundaries between material types were used, for example the intersection of two concrete sidewalks. Such a material boundary creates a strongly visible corner or line in the thermal panoramic images and other visible imagery of the area (shown in Figure 2.10). For each point measured, a 10 second integration was performed, where GPS positions were collected at 1 Hz and averaged to generate a final location.
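A minimal sketch of this averaging step is shown below: a 10 s burst of 1 Hz RTK fixes is reduced to a single averaged point and a position RMSE, which is the quantity checked against the 0.02 m threshold in the list that follows. The array name and coordinate values are invented purely for illustration.

```python
import numpy as np

# Ten 1 Hz RTK fixes for one survey point (easting, northing, elevation in m);
# values are illustrative only.
fixes = np.array([[491823.512, 5452110.321, 78.114],
                  [491823.509, 5452110.318, 78.120],
                  [491823.515, 5452110.324, 78.110],
                  [491823.511, 5452110.320, 78.117],
                  [491823.513, 5452110.319, 78.112],
                  [491823.510, 5452110.322, 78.119],
                  [491823.514, 5452110.321, 78.115],
                  [491823.512, 5452110.317, 78.113],
                  [491823.508, 5452110.323, 78.118],
                  [491823.513, 5452110.320, 78.116]])

mean_position = fixes.mean(axis=0)
# RMSE of the individual fixes about the averaged position (3D)
rmse = np.sqrt(np.mean(np.sum((fixes - mean_position) ** 2, axis=1)))
print(mean_position, f"RMSE = {rmse:.3f} m")  # accept the point only if RMSE < 0.02 m
```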
To ensure point quality, no data was collected unless certain GPS thresholds were met, as recommended in the device manual for maximum accuracy:

• The number of satellites visible to both base station and rover was always above 8.
• The HDOP (horizontal dilution of precision) and VDOP (vertical dilution of precision) values were always below 3.0, where 1.0 is ideal. This is a measure of satellite geometry quality.
• The RMSE of averaged positions was below 0.02 m.

GPS horizontal positional error in this case is very low, providing x-y positions to the sub-centimetre level. Vertical precision is also very good, showing values less than 2 cm, which is in the aforementioned target accuracy range.

2.3.2 LiDaR Data Collection

The DGPS data was supplemented with Light Detection and Ranging (LiDaR) data flown during March 2007 by a fixed-wing aircraft operating a TRSI Mark II discrete-return sensor. This data was received as a georeferenced, classified point cloud with positions approximately every 0.7 metres, though irregularly spaced. Points were classified as either ground or non-ground (generally including trees and houses) with associated reflected intensity and an order of return. For the purposes of this project, only the ground surface was needed, as the resolution of the point cloud is insufficient to model the houses, and vegetation recorded by the LiDaR does not necessarily reflect the situation in the canyon during the experiment due to its collection date. Therefore, only ground points were selected. These data (provided in comma-separated values (CSV) format) were imported into ArcGIS 10 (ESRI, Redlands, CA, USA), a GIS package with the capabilities to deal with 3D LiDaR data. A triangulated irregular network (TIN) was generated from these points in order to transform them into a 3D surface for use in the canyon model (Figure 2.11a). A georeferenced orthophoto (provided by the City of Vancouver) was also overlaid atop the data (Figure 2.11b).

Figure 2.11: a) TIN generated from LiDaR point cloud (ground vertices). b) TIN with orthophoto drape.

To assess the accuracy of the LiDaR data, the DGPS points taken earlier were compared with elevations linearly interpolated from the TIN. It was found that the TIN did not match the GPS points very well in many areas – a systematic overestimation of approximately 0.15 m was found. This is likely related to errors in the absolute positioning of the aircraft during the flyover. However, errors increase up to 0.3 m in some cases, which cannot be explained through positioning alone (a map of errors by location is shown in Figure 2.10). Closer examination of the TIN points indicates that large error deviations are located in areas whose elevations may change frequently: in particular, along the street where parked cars may have been located during the flyover. Errors are also higher near vegetation objects, as might be expected. The processing method of the TIN creates spikes where an anomalous node is linked to nearby nodes whose elevations are correct, producing erroneous values in the entire surrounding area (Figure 2.12). In the case of cars and unfiltered vegetation, we would expect these errors to present as an increase in elevation, which is apparent.

Figure 2.12: Illustration of TIN error patterns. The two nodes that are anomalous influence a much greater area due to linear interpolation between the nodes and their neighbours.
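The LiDaR-versus-DGPS comparison described above amounts to interpolating the TIN at each DGPS location and differencing the elevations. The sketch below uses SciPy's linear interpolator over the ground returns as a stand-in for the ArcGIS TIN; the function and array names are invented for illustration, not taken from the thesis workflow.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def tin_elevation_bias(lidar_xyz, dgps_xyz):
    """Interpolate the LiDaR ground surface at each DGPS point and return
    the per-point elevation differences (TIN minus DGPS).

    lidar_xyz: (N, 3) classified ground returns (x, y, z)
    dgps_xyz:  (M, 3) RTK survey points (x, y, z)
    """
    tin = LinearNDInterpolator(lidar_xyz[:, :2], lidar_xyz[:, 2])
    z_tin = tin(dgps_xyz[:, :2])
    return z_tin - dgps_xyz[:, 2]

# Tiny synthetic example: four LiDaR ground points and one DGPS check point
lidar = np.array([[0.0, 0.0, 1.0], [10.0, 0.0, 1.2], [0.0, 10.0, 1.1], [10.0, 10.0, 1.3]])
dgps = np.array([[5.0, 5.0, 1.0]])
diff = tin_elevation_bias(lidar, dgps)
print(diff, diff.mean())  # about +0.15 m for these illustrative numbers
```

The mean of these differences gives the systematic offset (about +0.15 m in this study), which can then be subtracted from the TIN before model assembly.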
Without a higher resolution flyover, there is little that can be done to alleviate this problem. It should suffice, however, to treat the GPS points as accurate and introduce an offset into the TIN elevations, given that the subsequent steps in this methodology will be performed in a local coordinate system. The TIN was negatively offset by 0.15 m across the entire domain in order to remove the systematic difference between LiDaR and DGPS points. The deviation is then limited to a maximum of 0.17 m. This is still poor compared to our goal of 3 cm accuracy, so the utility of the LiDaR data beyond basic surface structure is limited.

2.3.3 Photogrammetric Data Collection

In general, the height data from the LiDaR dataset did not adequately capture the more complex structure of the buildings in the street canyon, as the resolution was inadequate and the data are of no use for vertical surfaces. A reflectorless total station's ranging laser (Leica Geosystems, Heerbrugg, Switzerland) was tested for surveying building features, but did not perform very well, giving large random errors due to varying material types modifying the reflection of the beam. Close-range photogrammetry was chosen as the final method. In close-range photogrammetry, multiple images of the same subject taken from varying directions are correlated to solve for the 3D positions of points and lines. These features are then assembled into polygonal representations of objects.

The images collected for photogrammetry were taken using a Nikon D100 digital single-lens-reflex (SLR) camera (Nikon, Tokyo, Japan), equipped with a 28 mm Nikkor lens (Figure 2.13a). The camera collects images at a maximum resolution of 6 megapixels (3008 by 2000 pixels).

Figure 2.13: a) Nikon D100 with 28 mm lens and remote IR receiver. b) D100 mounted on top of tilt mount at 55 degrees from nadir.

The software used to perform the photogrammetric analysis was Eos Photomodeler (Version 6.2, Eos Systems, Vancouver, British Columbia, Canada), an industry program with precise measurement capabilities and, critically, the ability to be used effectively with consumer-grade digital cameras. This software has been used previously for modeling of urban surfaces with good performance (Nikiforiadis & Pitts, 2003). The software includes its own automated calibration suite to solve lens distortion and camera geometry, parameters that must be known before photogrammetric analysis can occur. Before any fieldwork was done, this calibration was performed in the laboratory. The camera was mounted on a tripod in a well-lit room and used to photograph Photomodeler's provided calibration targets from varying orientations and roll angles. The results of the calibration for Brown's distortion model are shown in Table 2.3. For additional information on the parameters, see Brown's paper on basic geometric lens distortion (1966).
Table 2.3: Calibration parameters for Nikon D100

Published focal length (mm)  28
Effective focal length (mm)  29.622
Format size (mm, w by h)  24.003 by 15.9574
Image resolution (pixels, w by h)  3008 by 2000
Principal point (mm, x, y)  11.9284, 7.9943
Lens K1 coefficient  1.41×10⁻⁴
Lens K2 coefficient  -1.54×10⁻⁷
Lens K3 coefficient  0.00
Lens P1 coefficient  -9.07×10⁻⁷
Lens P2 coefficient  1.09×10⁻⁵
Calibration RMSE (pixels)  0.0627
Maximum calibration residual (pixels)  0.1655
Photo coverage (%)  89

The results are well matched with Photomodeler's recommended calibration quality for high accuracy projects; the photo coverage is higher than 85% and the error results show that points marked on these photos can have better than 1:3000 accuracy (for a 1 metre object, points will be within 0.0003 metres) (Eos Systems, 2011).

To simplify collection of images, the street canyon was broken down into its component features, with each house being treated individually, and then integrated together to form a final canyon 3D model. Each house was assigned a numeric identifier (refer to Figure 2.6) and photographed using the Nikon D100. This ID number will be used in the rest of this thesis to refer to each house. To accurately capture the structure of a single building, photos of all surfaces from many angles were required. The larger the angle between two photos of a subject, the stronger the solution found between them. The ideal photographic pattern from ground level for each house is shown in Figure 2.14 below. This pattern maximizes angles for house facades and minimizes intrusion onto private property. Additional images were sometimes taken, depending on the structure of the house. In some cases, overhangs and protrusions were present, making these supplementary images a necessity.

Figure 2.14: Top view of schematic example house with idealized locations of ground-level camera stations for a single house. All points are visible in at least three images.

Roofs were a significant structural component of the 2008 experiment's dataset, and are not captured well by the ground-level photographic pattern presented in Figure 2.14, as the vertical angular separation of photos is low and roofs are not completely visible from the ground. To get stronger angles, the camera was also mounted on a tiltable platform atop a hydraulic tower (Figure 2.13b). The tower was then erected at two different locations in the street canyon (locations shown in Figure 2.15). An infrared remote was used to operate the camera from the ground while the tower was rotated 360 degrees in 15 degree increments. This produced visible-spectrum panoramas from two different locations, giving sufficiently strong angles of the roof surfaces for photogrammetric purposes. Photos were taken on two separate days, June 15, 2011 and August 22, 2011. For each photo, the shutter speed and aperture size of the camera were varied for maximum contrast while holding the sensor sensitivity (ISO setting) constant at the camera's minimum value, for images with minimal noise. All image quality enhancements were disabled and the focal distance was maintained at the maximum value, as changes to either of these settings would interfere with the software calibration performed previously. Images were captured in TIFF format in order to preserve image dynamic range and eliminate compression artifacts that exist with other formats such as JPEG.
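For reference, the radial (K) and decentering (P) terms of Brown's distortion model reported in Table 2.3 can be applied as in the sketch below. Normalization conventions (units of the image coordinates, sign of the correction) vary between implementations, including Photomodeler's, so this is only an illustrative form rather than a reproduction of the software's internal calculation.

```python
def brown_distortion(x, y, k1, k2, k3, p1, p2):
    """Apply Brown's radial + tangential (decentering) distortion model.

    x, y are image coordinates relative to the principal point (here in mm,
    matching the calibration report); returns the distorted coordinates.
    """
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# Coefficients from Table 2.3 (Nikon D100 with 28 mm lens)
K1, K2, K3 = 1.41e-4, -1.54e-7, 0.0
P1, P2 = -9.07e-7, 1.09e-5
print(brown_distortion(5.0, 3.0, K1, K2, K3, P1, P2))  # a point 5 mm right, 3 mm up
```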
Examples of images taken from ground level are shown in Figure 2.16.

Figure 2.15: Locations at which the hydraulic tower was emplaced for visible photography. Images a) and b) show images from the northern position towards the southwest and southeast respectively, c) shows an image from the southerly position towards the northwest (Photos: C. Adderley, June 15, 2011).

Figure 2.16: Examples of street canyon photography. a) south-east from north of canyon b) detail of House ID 15 facade c) front view of House ID 16 d) view of the back of House ID 10 from lane (Photos: C. Adderley, August 22, 2011).

Chapter 3 Data Processing

3.1 3D Model Construction

3.1.1 Photogrammetric Model Construction

To construct the 3D model, Photomodeler (Version 6.2, Eos Systems, Vancouver, BC, Canada) was used as the primary tool. Before any photogrammetry was performed, all of the visible images taken during the experiment were examined and categorized on a per-house basis. Image contrast and brightness were also adjusted in some cases to improve the definition of surface or geometric features. The set of images of a single house was input into the program, and common points across images were marked. The software then determined the orientation and location of the camera for each image, given a sufficient number of points. Strong, easily visible points were used initially to orient the images. These points included window corners, house edges and similar geometric and material features (example in Figure 3.1a). Once these had been marked and the camera locations recovered, less distinct points were marked. The end result of this was a point cloud of 3D positions. It was essential to also attempt to locate a few positions on house lawns and surrounding houses. In particular, points that were surveyed with the RTK DGPS were prioritized, as the accuracy of the LiDaR data varies (DGPS points were noted and care was taken to ensure they were marked as points on relevant photos). These tended to include street signs and sidewalk points (examples clear in Figure 3.1b). In this way, the individual houses can be matched with each other and the ground surface – this simplifies assembly in later steps.

Figure 3.1: Steps for assembling photogrammetric model of House ID 01. a) Point cloud generated by point matching. b) Point cloud with connecting edges added. c) Edges connected to create surfaces.

Once a number of points were marked, a polygonal model could be constructed. This was done in Photomodeler as well, assembling the 3D positions into sets of lines defining polygons that would make up a house. Once common lines had been marked, they were used as the edges of these polygons, creating the actual model of the object (Figure 3.1c). In many cases, the entire house could not be constructed using photogrammetry. Some areas on the houses were shadowed, blocked or were visible in too few photos for accurate 3D points to be recovered. In some cases, this resulted in models with holes or missing polygons in important areas. Rather than deal with very uncertain points, these areas were left blank and were reconstructed later during the assembly of the model. At this point, House ID 18 was dropped from the model, as it is barely visible in the thermal images and has poor photo coverage from the camera towers.

As neither the camera model itself nor the point marking can be considered error-free, there are some issues associated with the creation of house models in this way.
Inaccurately positioned marked points can lead to incorrectly determined camera positions, which in turn lead to inaccurately located 3D points. This error can be minimized by choosing good quality points on images with a large angular separation. Even so, small errors accumulate to give potentially large positional uncertainty; these errors are summarized in Table 3.1.

Table 3.1: Photogrammetry total point errors. Overall precision and maximum error are the total possible uncertainties over all points in position.

House ID  Overall RMSE (pixels)  Maximum RMSE (pixels)  Maximum residual (pixels)  Overall precision (m)  Maximum error (m)  Notes
1  2.071  7.299  9.501  0.0323  0.131
2  2.323  5.776  10.17  0.0361  0.112
3  2.651  23.721  32.836  0.069  0.251  Single anomalous point
4  1.652  2.093  5.67  0.0621  0.0627
5  1.525  5.978  8.757  0.045  0.142
6  1.59  6.147  7.686  0.0312  0.066
7  2.123  5.876  7.583  0.0337  0.0594
8  1.969  8.115  9.365  0.034  0.0827
9  1.66  4.722  8.163  0.00291  0.00678
10  1.873  4.473  7.057  0.0448  0.0802  Very poor coverage due to large tree
11  1.763  7.387  9.003  0.0353  0.0804
12  2.145  6.373  9.266  0.0298  0.0706
13  1.853  6.152  8.102  0.0301  0.0903
14  2.179  6.444  9.989  0.0428  0.114
15  1.542  5.676  5.92  0.0286  0.0803
16  1.47  3.409  5.142  0.0266  0.0598
17  1.419  4.588  7.767  0.0272  0.0419
19  1.576  4.344  7.52  0.0251  0.0611

In most cases, point errors are low, coming in below the 3 cm threshold for most houses. Some houses, particularly IDs 01, 02, 03 and 14, have large errors compared to the others. In the case of IDs 01-03, the error causes were traced to anomalous points marked incorrectly on sidewalks, which were removed during model assembly. House ID 14 has poor photo coverage due to significant vegetation growing over the front of the house (ivy, trees, large bushes seen in Figure 3.2), which reduced the number of points that could be located and therefore the quality of the model. Following the accuracy assessment, two houses (IDs 09 and 10) were removed from the analysis due to poor coverage as well as their noncritical status; they are not visible from the position of the thermal scanner.

Figure 3.2: House ID 14 from the east, showing bushes and trees making precise photogrammetry difficult (Photo: C. Adderley, August 22, 2011).

3.1.2 3D Model Assembly

Figure 3.3: Flowchart describing steps for the assembly of the canyon 3D model.

Each house was individually exported in Autocad DXF2008 format from Photomodeler, as it is the most universally compatible format available (import and export for this format were available in all programs used). The files were imported into 3D Studio Max software (Version 8, student license, Autodesk, San Rafael, CA, USA) to be combined. 3D Studio Max is a computer assisted design (CAD) suite used for many purposes including architectural design, computer animation and digital effects. Its high precision and graphics rendering capabilities make it suitable for assembling the model of Elgin Street.

Figure 3.4: a) Imported House ID 01 showing three common import errors: missing triangles, incorrectly oriented triangles and overlapping triangles. b) Shaded model of House ID 01 with all triangle overlaps repaired and missing components added.

After the DXF files were imported from Photomodeler, reconstruction of some areas of some house models was required, due to the aforementioned lack of coverage of some house facets and a variety of DXF import errors (see Figure 3.4a). Generally, the latter were limited to overlapping triangles and incorrectly oriented triangles.
The triangles with orientation issues were flipped to the correct orientation in the 3D software, while overlapping triangles were modified and rearranged so that the topology prevented overlaps, without changing the geometric shape of the area. Missing polygons were reconstructed. In some cases this was simple – extending a wall polygon down to ground level or constructing a polygon across an evident gap (Figure 3.5a). In other cases a window or similar feature might have three marked points, an insufficient number to create the entire surface, so the fourth vertex would have to be interpolated from the others (Figure 3.5b).

Figure 3.5: a) Examples of techniques used for extrapolating missing triangles in house models. b) Example of interpolating missing triangles or vertices for a window on the same house.

The LiDaR TIN surface created earlier was exported to VRML format, as ArcGIS's export options are limited (the DXF format available failed to import reliably). It was similarly imported into 3D Studio Max along with the attached orthophoto and moved from a Universal Transverse Mercator Zone 10 coordinate system to a local coordinate system via translation. In the local coordinate system, the surface was centered at coordinates (0,0) and translated vertically such that the lowest point in the model was at z = 0. The RTK GPS points were also exported in a similar fashion with an identical translation applied. These two datasets formed the base onto which the photogrammetric models were to be placed.

Figure 3.6: Translated LiDaR ground surface with RTK points shown as blue dots.

Each house model was then imported into the scene and added to the dataset by matching RTK GPS positions with 3D photogrammetric points. This generally involved matching sidewalk corners and other invariant points that were easily located in the RTK dataset and on the photogrammetric models. The orthophoto was also used to provide additional information on the basic positioning of each house. The completed canyon model is shown in Figure 3.7. The detailed house models were oriented to the ground surface with the exception of House IDs 9, 10 and 18, as mentioned previously. As photogrammetric models were matched exactly to RTK points to within machine precision, the accuracy of the resulting canyon model is directly related to the accuracies of those two datasets. As all future data processing will be done in a local coordinate system, the RTK points are taken as perfectly accurate, with uncertainty derived solely from the RMSE of collected GPS positions. Additional uncertainty comes from the rotation of the house models, which occurred in a couple of instances (rotation was done via the orthophoto, whose pixel resolution is only 5 cm). Summarized accuracy measures are presented in Table 3.2 below on a per-component basis, with notes on which models have rotation uncertainty.

Table 3.2: Summarized accuracy measures for the 3D model. Orientation error is the maximum RMSE for the RTK points used to orient that house (or in the case of the ground surface, the maximum RMSE over all points).
Model component  Maximum photogrammetric error (m)  Orientation error (m)  Notes
House ID 1  0.131  0.016
House ID 2  0.112  0.015  Rotation occurred
House ID 3  0.251  0.021
House ID 4  0.0627  0.026
House ID 5  0.142  0.017
House ID 6  0.066  0.016
House ID 7  0.0594  0.024
House ID 8  0.0827  0.021
House ID 11  0.0804  0.015
House ID 12  0.0706  0.020
House ID 13  0.0903  0.018
House ID 14  0.114  0.011
House ID 15  0.0803  0.011
House ID 16  0.0598  0.025  Rotation occurred
House ID 17  0.0419  0.022  Rotation occurred
House ID 19  0.0611  0.019
Ground surface  -  0.026

Figure 3.7: Completed Elgin Street 3D model. Note the omission of House IDs 9, 10 and 18 from the completed model.

3.2 Thermal Scanner Position Recovery

3.2.1 Thermal Scanner Lens Calibration

The large FOV lens used during the experiment introduced significant barrel distortion due to the nature of the mapping of the spherical lens surface to the rectangular image plane (Szeliski, 2011). A map of lens distortion had to be created to account for the distortion and allow pixels to be associated with angles. This was done using calibration images that were collected in the laboratory in 2008, where the camera was aimed directly perpendicular to a whiteboard that was gridded using reflective tape. The calibration data consisted of grid points (with known distance from the camera and known distance on the board relative to the centerline of the camera), matched with their camera pixel locations. The angles of each grid point φi (latitude) and θi (longitude) with respect to the image centre were then calculated, providing a set of angular positions across the entire image. As lens distortion is mathematically quadratic (Szeliski, 2011) and visual inspection of the images revealed only barrel distortion, a 4th degree polynomial fit was created for both φi and θi dependent on their pixel locations yi and xi (Equations 3.1 and 3.2):

φi = a0 + a1 yi + a2 yi² + a3 yi³ + a4 yi⁴   (3.1)
θi = b0 + b1 xi + b2 xi² + b3 xi³ + b4 xi⁴   (3.2)

The RMSE for the φi fit was 0.020 degrees and that for the θi fit was 0.015 degrees. The comparable magnitudes of the coefficients between Equations 3.1 and 3.2 indicate that lens distortion does not vary significantly between the image axes, which indicates that the lens has a high degree of radial symmetry. A set of pixel values i from 1 to 320 and a set of values j from 1 to 240 was then generated; each combination of i and j was input into Equations 3.1 and 3.2 to compute the image φ and θ for each pixel. The data was then gridded to form a 320 by 240 matrix (the resolution of the thermal scanner). This defines a local camera-centric polar coordinate system for the thermal scanner's sensor, with (0,0) being the centre of the image. Any pixel can be sampled in order to find its local φ and θ coordinates.

3.2.2 Thermal Scanner Position Recovery Algorithm

In order to project the thermal data taken in 2008 onto the 3D model, the location of the thermal scanner must be known. As this position was unfortunately not recorded to any accuracy in the original experiment, it must be reconstructed before projection can occur. In addition, the pan and tilt angles for every frame of the panoramas must be known. Fortunately, several points can assist in this:

• The rough location of the scanner (within approximately 1 m) can be determined from observing the thermal images themselves, from photos and from survey data taken during the experiment (in particular, distances to the reflective tape position).
• The rough height of the scanner above ground level is also known, as the scanner tower was raised to approximately 15 m during the experiment.
• By measurement of the mobile tower trailer's dimensions and examination of photos taken during the experiment, the tower's east-west location can be deduced to within approximately 0.2 metres.
• Comparison of the thermal images with the reconstructed 3D model can constrain the tilt angle of the scanner to a region of about 10 degrees in size centered on 65 degrees from the vertical.

However, this does not provide a location that is accurate enough for immediate reprojection. Instead, an iterative approach to recovering the camera position was taken. Several off-the-shelf mathematical models for camera location do exist, but are generally intended for use in the laboratory with specialized calibration grids, or with higher resolution visible spectrum images (recovery algorithms tested included those developed by Tsai (1987) and Heikkilä & Silven (1997)). Tests with these algorithms did not provide useful results or failed entirely – instead we chose to develop an iterative algorithm tailored to the specific geometry of the situation. This process is diagrammed in Figure 3.8.

Figure 3.8: Flowchart describing steps for the determination of the thermal scanner position.

First, the scanner location is defined as C = (Cx, Cy, Cz), and its tilt angle B as the angular distance of the image centre from the vertical (so the horizon has a value of 90 degrees). Cx, Cy, Cz and B must be optimized so that they converge to the actual location and tilt. The camera roll angle is assumed to be zero, and the pan angle is considered later. We consider two points visible in a thermal image, P1 and P2. They are distinct enough that their 3D positions can be determined from the 3D model of the street canyon. Their angular separation as seen from the camera is dependent on the camera location; if, for example, the camera is closer, a larger angular separation will result. The algorithm computes the angular separation of these two points on the image and compares it to the angular separation predicted by an arbitrarily chosen camera position.

The actual angular separation of points P1 and P2 can be found by sampling the A40M calibration matrix containing angular values for each pixel, described in Section 3.2.1. Sampling the matrix at P1's and P2's pixel coordinates gives us the latitudes and longitudes of P1 and P2 in the camera's coordinate system (φ1, θ1 and φ2, θ2). The differences between these are the angular separations.

Figure 3.9: P1 and P2 defined in the camera's image coordinate system. Also note the variation in position of the horizon in the schematic image due to scanner tilt.

However, because the camera was tilted downward during its panoramic sweep, the camera coordinate system defined above does not correspond exactly to the world coordinate system. Note that when tilted, the horizon does not appear at the same latitude across the camera image; it varies with longitudinal distance from the image centre (Figure 3.9). A transformation is therefore applied to each angle φ and θ for P1 and P2. As MATLAB was unable to calculate a spherical rotation of the coordinate system directly, and only a single-axis rotation about the horizontal axis (the horizon of the global geometry) was required, the rotation was performed in a Cartesian coordinate system.
First a transformation to Cartesian coordinates is applied to φ and θ (Equations 3.3 to 3.5) with a radius r of 1:

x = r cos φ cos θ   (3.3)
y = r cos φ sin θ   (3.4)
z = r sin φ   (3.5)

The rotation about the horizontal axis is then applied via Equations 3.6 to 3.8, using a rotation angle equivalent to the camera tilt angle B:

x' = x   (3.6)
y' = y cos B - z sin B   (3.7)
z' = y sin B + z cos B   (3.8)

Then the Cartesian coordinates are returned to a spherical coordinate system via Equations 3.9 to 3.11:

r = √(x'² + y'² + z'²)   (3.9)
φg = arcsin(z'/r)   (3.10)
θg = arctan(y'/x')   (3.11)

This transforms φ and θ into global angles for this point, now called φg and θg, respectively the global latitude and global longitude. If this operation is performed for both points P1 and P2, we can calculate the difference between φg,1 and φg,2 as well as between θg,1 and θg,2. These differences Δφg and Δθg are the actual angular separations between P1 and P2 in a global spherical coordinate system.

The coordinate transformation above requires the camera tilt angle B, which is not known. It is selected iteratively along with Cx, Cy and Cz. If we supply all the elements of C as well as B, we can find the theoretical angular separations Δφc and Δθc for points P1 and P2 for that value of C and B. First we project P1 and P2 onto the xy plane, and choose a position R such that R is the projection of the centre of the image (Figure 3.10). Then, we compute the theoretical latitude and longitude of each point as seen from C – this can be accomplished using basic trigonometry (Equations 3.12 and 3.13):

θc,i = arctan((yi - Cy)/(xi - Cx))   (3.12)
φc,i = arctan((zi - Cz)/√((xi - Cx)² + (yi - Cy)²))   (3.13)

Figure 3.10: Definition of angles in the camera location algorithm. Both points P1 and P2 are projected onto the xy plane and the θc and φc angles are calculated via trigonometric relations from R and C. a) is a collapsed view of b).

The difference between Δφg and Δφc, and between Δθg and Δθc, is a measure of the accuracy of C and B. If they are close to the actual position and tilt, then the difference will be low. We can use this to iteratively examine many points and tilt angles in order to select the one with the lowest difference.

3.2.3 Position Recovery Implementation

Several thermal scanner frames were selected for use in position recovery. A single panorama was used (the first), and photos selected at several points along the scan such that various views would have a good angular separation (Figure 3.11). Points were then selected on each image to form a matrix of image positions and corresponding 3D locations as extracted from the canyon model. Each point was chosen to be reasonably precise (located at a sharp boundary in the thermal image) and matched preferably to a DGPS or photogrammetric point on the 3D model.

Figure 3.11: Photos selected from the first panorama for position recovery around the tower location. These frames were chosen as they have good coverage of the area, and each has a large number of distinct points that can be matched with points on the 3D model.

The coordinate system used was that of the 3D model as defined in Section 3.1.2 (centered on the approximate centre of the canyon). In this coordinate system, the x axis runs from east to west, the y axis from south to north and the z coordinate represents the elevation. Scanner positions were chosen based on the constraints defined in Section 3.2.2. Boxes were generated centered on coordinates x, y, z = (1.2, -8.65, 18.5). Initially, to test the validity of the algorithm, a very large box (20 m by 20 m by 10 m) was used. Scanner positions were chosen every 2.0 metres inside this box. The scanner angle was initialized at 65° from inspection of the photos, and iterated every 2° from 55 to 75°. For all the selected points, the algorithm was run, generating an error value for each point pair and associated camera position.
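A compact sketch of how one such error value might be computed for a single point pair and candidate scanner position is given below. The function names are invented, the rotation convention mirrors the single-axis rotation of Equations 3.3 to 3.11 above, and this is an illustration of the idea rather than the author's MATLAB implementation; the full procedure additionally weights each pair's error and takes the median over many pairs, as described next.

```python
import numpy as np

def observed_global_angles(phi, theta, tilt_B):
    """Rotate camera-frame angles (from the calibration matrix) about the
    horizontal axis by the tilt angle B to obtain global latitude/longitude."""
    x, y, z = np.cos(phi) * np.cos(theta), np.cos(phi) * np.sin(theta), np.sin(phi)
    y_r = y * np.cos(tilt_B) - z * np.sin(tilt_B)
    z_r = y * np.sin(tilt_B) + z * np.cos(tilt_B)
    return np.arcsin(z_r), np.arctan2(y_r, x)

def predicted_angles(point_xyz, cam_xyz):
    """Theoretical latitude/longitude of a known 3D point as seen from a
    candidate scanner position C."""
    dx, dy, dz = np.asarray(point_xyz) - np.asarray(cam_xyz)
    return np.arctan2(dz, np.hypot(dx, dy)), np.arctan2(dy, dx)

def pair_error(p1_xyz, p2_xyz, img_angles1, img_angles2, cam_xyz, tilt_B):
    """Squared mismatch between observed and predicted angular separations of
    one point pair; one value per pair and per candidate (C, B)."""
    obs = np.subtract(observed_global_angles(*img_angles1, tilt_B),
                      observed_global_angles(*img_angles2, tilt_B))
    pred = np.subtract(predicted_angles(p1_xyz, cam_xyz),
                       predicted_angles(p2_xyz, cam_xyz))
    return float(np.sum((obs - pred) ** 2))

# Example: one pair of canyon points and one candidate position/tilt (values invented)
err = pair_error((10.0, -20.0, 2.5), (-8.0, 15.0, 6.0),
                 (np.radians(-40.0), np.radians(10.0)),
                 (np.radians(-35.0), np.radians(-25.0)),
                 cam_xyz=(1.2, -8.65, 18.5), tilt_B=np.radians(65.0))
```

Because only angular differences are compared, the unknown pan angle cancels out of each pair, which is why it can be deferred in the optimization.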
Error values were weighted by the angular separation of the two points, as points with larger separation would be less sensitive to marking errors. The median squared error over all point pairs was used to measure the accuracy of all point angles for a single iterated camera position and angle. Convergence was successful, pinning down an area where scanner locations had much lower median squared error. This process was then repeated for iteratively smaller boxes. The point with minimum median squared error from the previous step was used as the box centre. The box size was reduced by 50% with each repetition in order to slowly narrow down the position. With sizes of 5 m by 5 m by 2 m, the x and y positions of the scanner stabilized near x = 1.35 m and y = -8.635 m, though significant variation remained in the scanner tilt angle and z position. Additional runs showed that no distinct convergence could be found: within the region of B = 62 to 68° and z = 18 m to 20 m, any error introduced by an incorrect z position could be compensated for by a change in angle, and vice versa (Figure 3.12).

Figure 3.12: Iteration results for fixed x = 1.35 m, y = -8.635 m, varying angle and z position. Note the lack of a defined convergence; the median squared error does not show a distinct minimum, but instead a large region of low error centered around z = 18.25 m, B = 66°.

This issue occurs due to the relatively coplanar nature of the points. Though the points exist in 3D space, their separation in the z direction is small compared to the height of the scanner above ground. The scanner is located above z = 17 m, while all points in the model are below z = 8 m. This limits the quality of the convergence quite significantly. Different iteration parameters and refinements of initial conditions were attempted to solve this problem, but met with no success. A method of visual inspection was instead used. Holding x and y constant, the thermal data were projected (see Section 3.4.2) and the result examined. A best-fit approximation was chosen visually through this approach: the final scanner coordinates were C = (1.35 m, -8.59 m, 17.95 m), with a tilt angle of 66.2°.

3.3 3D Model Properties

Figure 3.13: Flowchart describing steps involved in surface and geometric property derivation from the 3D model.

3.3.1 Geometric Information

As most data processing would be done in MATLAB, a method for translating the 3D model and any useful surface information to an appropriate format was needed. The simplest method would be to use UVW texture unwrapping (often shortened to UV mapping), a technique used in computer graphics to apply planar bitmap images to 3D models (Shirley, Marschner, & Ashikhmin, 2009). Conceptually, a 3D object can be decomposed into its triangle components, and a planar image applied to each triangle. This is stored in a UVW map, where each vertex (x,y,z) comprising each triangle of the model is assigned a texture coordinate (u,v,w ranging from 0.0 to 1.0), which represents that vertex's position on a bitmap texture. A conceptual image of this process is shown in Figure 3.14.

Figure 3.14: Conceptual image of UVW mapping. a) shows a 3D pseudohouse in xyz space with several vertices A, B and C. b) shows the image's corresponding UVW map – each polygon of the 3D object is separated and flattened onto the 2D UV plane. Vertices A, B and C show the mapping of 3D positions to 2D positions on the UV plane.
Note that a single vertex in 3D space can have several positions on the UV plane.

The 3D Studio Max software provides a built-in function for unwrapping objects in this manner (Automatic UV Flattening). This tool has a variety of settings, all of which were configured to provide the lowest amount of distortion to the unwrapped model. The unwrapped maps for House ID 01 and the ground surface are shown in Figure 3.15 below. The distribution of the triangular faces is somewhat chaotic, but free of distortion.

Figure 3.15: a) Example of triangulated House ID 01. b) UVW texture map generated for House ID 01. c) Triangulated ground surface. d) UVW texture map generated for ground surface.

The entire street canyon was exported to Autodesk's ASCII Scene Export (.ase) format, a basic text-based format that describes geometric information for all objects in a scene. It preserves vertex (x,y,z) position, vertex (u,v,w) position, triangle topological information, UVW triangle topological information, and triangle facing (normal vector) information. A custom import script was written in MATLAB to take this ASCII file and extract all of these properties. During extraction, another property was derived: the triangle area, via basic geometry. These data were then plotted according to their vertex UVW coordinates, producing UVW texture maps storing the geometric information. They were then rasterized to a 1024 by 1024 grid, producing three two-dimensional arrays, with cell values describing x coordinates, y coordinates and z coordinates (Figure 3.16a-3.16c). During this process, the triangle area was divided by the number of pixels per rasterized triangle, converting triangle area to pixel area (Figure 3.16d).

Figure 3.16: Texture maps for House ID 01. a) pixel x-coordinate. b) pixel y-coordinate. c) pixel z-coordinate. d) pixel area.

From the UVW texture maps, additional useful geometric properties can be derived. The pixel normal vector can be used to derive facet compass orientation by converting the x and y components of the vector to an angle, and offsetting this so that 0° is defined as north-facing, matching the coordinate system used in the rest of the analysis (Figure 3.17). The normal vector can also be used to automatically perform a facet classification operation in conjunction with the z-position. This relatively coarse classification assumed all pixels below z = 2.3 m (the maximum height of the ground surface) with slope angle less than 45° to be ground. All pixels above this height with slope angles less than 75° are considered roof. All other pixels are considered walls (Figure 3.18).

Figure 3.17: Pixel azimuth maps for a) House ID 01 and b) ground surface.

Figure 3.18: Pixel facet type maps for a) House ID 01 and b) ground surface.

With the thermal scanner location that was found earlier, we can compute the spherical coordinates of all pixels on the texture maps with relation to the scanner position. This is done via a coordinate transformation. First, the pixel (x,y,z) coordinates are translated by the scanner position so that the scanner is at the origin. Then, a Cartesian to spherical transformation is applied (Equations 3.14 to 3.16):

d = √(x² + y² + z²)   (3.14)
θ = arctan(y/x)   (3.15)
φ = arcsin(z/d)   (3.16)

where x, y, z are the (translated) coordinates of a pixel, d is the distance from the pixel to the camera (Figure 3.20) and the θ/φ values are the spherical coordinates of that pixel with relation to the camera (Figures 3.19a/b and 3.19c/d). These values are added to the texture maps for each house.
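The per-pixel transformation of Equations 3.14 to 3.16 amounts to a translation followed by a standard Cartesian-to-spherical conversion, which vectorizes naturally over the texture-map arrays. The sketch below is illustrative only: the function and array names are invented and it is not the author's MATLAB script.

```python
import numpy as np

def scanner_spherical_maps(x_map, y_map, z_map, scanner_xyz):
    """Convert per-pixel world coordinates into scanner-centred spherical
    coordinates: line-of-sight distance d, longitude theta and latitude phi.
    x_map, y_map, z_map are the 1024 x 1024 texture maps of pixel positions."""
    dx = x_map - scanner_xyz[0]
    dy = y_map - scanner_xyz[1]
    dz = z_map - scanner_xyz[2]
    d = np.sqrt(dx**2 + dy**2 + dz**2)   # distance, as in Equation 3.14
    theta = np.arctan2(dy, dx)           # longitude, as in Equation 3.15
    phi = np.arcsin(np.divide(dz, d, out=np.zeros_like(d), where=d > 0))  # latitude, Eq. 3.16
    return d, theta, phi

# Example with the recovered scanner position from Section 3.2.3 and tiny test arrays
scanner = (1.35, -8.59, 17.95)
x = np.array([[0.0, 10.0]]); y = np.array([[0.0, -20.0]]); z = np.array([[0.5, 6.0]])
d, theta, phi = scanner_spherical_maps(x, y, z, scanner)
```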
Figure 3.19: Pixel spherical coordinates. a) House ID 01, pixel \u00CE\u00B8. b) ground surface, pixel \u00CF\u0086. c) House ID 01, \u00CE\u00B8. d) ground surface, pixel \u00CF\u0086. 57 Figure 3.20: Pixel to scanner line-of-sight distance for a) House ID 01 and b) ground surface. Additionally, as the thermal scanner is at a fixed location, it is evident that not all pixels in the texture maps can be seen by the scanner. It will be useful to know which pixels are visible and which are not. This is accomplished by returning to 3D Studio Max and positioning a omnidirectional raycast light source at the scanner location. This type of light source uses a Monte Carlo random method to cast rays in all directions, and was set up to assign a brightness value of 1.0 (the maximum) to any pixel that was hit with the raycast. Pixels that are not hit are assigned a brightness of 0.0. 3D Studio Max's Render To Texture function was used to associate the brightness value assigned to each triangle to the correct position on the texture map. This function creates a bitmap image of the UVW map whose RGB pixel values correspond to a designated property (for example light intensity, material ID integer, shadow intensity). This produced a 1024x1024 pixel bitmap image, which was saved in Portable Network Graphic (PNG) format, chosen for its lack of compression artifacts and compatibility with MATLAB's image import options. This image was imported into MATLAB, associating RGB intensity values from the bitmap with a boolean value of pixel visibility to scanner: false for invisible, true for visible (Figure 3.21). 58 Figure 3.21: Pixel occlusion maps for a) House ID 01 and b) ground surface. 3.3.2 Surface Information At this point, geometric information about the Elgin Street canyon had been compiled. However, additional surface properties are highly relevant to the study of surface temperatures. Specifically, the material type is important, as it defines the emissivity of a surface (Oke, 1987) and other underlying material properties (thermal admittance). A survey of the ground-level photos and orthophotography was undertaken, creating an inventory of all material types in the canyon. These are summarized in Table 3.3. These materials were manually marked on a per-polygon basis in 3D Studio Max. The software has the capacity to assign an integer-based material identifier to each triangle making up a 3D object - an integer value was assigned to each of the materials in Table 3.3 and associated with each triangle. Where material boundaries did not fit the topology of the model, additional triangle sides were added to the house models in order to correctly attribute materials. A distinct colour was then associated to each material type and the Render to Texture function was used to export a UVW texture map in PNG format. This file was imported into MATLAB and the colours in the file associated with the correct material identifiers (Figure 3.22). 59 Table 3.3: Inventory of materials found in the Elgin Street canyon with emissivities (based on FLIR Systems, 2004). 
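The conversion of the rendered occlusion bitmap into a logical visibility mask can be sketched in a few lines of MATLAB; the filename and threshold are illustrative:

% Convert the rendered occlusion bitmap into a boolean visibility mask.
img     = imread('house01_occlusion.png');      % 1024x1024 render from 3D Studio Max
visible = mean(double(img), 3) / 255 > 0.5;     % true where the raycast light reached the pixel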
Material Code   Material           Emissivity   Notes
1               Aluminum (bare)    0.70         Generally aluminum roofs, some siding
2               Asphalt            0.967
3               Brick              0.93         Ceramic bricks
4               Concrete           0.92
5               Stucco             0.91
6               Wood               0.85         Where visible
7               Paint (white)      0.93
8               Paint (grey)       0.93
9               Paint (blue)       0.93
10              Mixed materials    0.50         Unused in final analysis, low value for easy marking
11              Paint (dark)       0.93         Comprises many darker paints
12              Paint (brown)      0.93
13              Paint (light)      0.93         Comprises many lighter paints
14              Rock               0.82         Rock facades of houses
15              Glass window       0.80
16              Paint (red)        0.93
17              Grass              0.95

Figure 3.22: Pixel Material ID texture maps for a) House ID 01 and b) ground surface. Colour indices map to the Material ID numbers in Table 3.3.

From the texture maps of materials, the emissivity can also be calculated. A comprehensive table of emissivities is available in the manual for the A40M scanner. Each material in the canyon was assigned an emissivity by consulting these tables. In the case of materials with multiple available emissivities, an average was taken. Emissivity values are summarized in Table 3.3. The pixel material texture map was then indexed by this table, producing another texture map of pixel emissivities (Figure 3.23).

Figure 3.23: Pixel emissivity texture maps for a) House ID 01 and b) ground surface.

3.3.3 Sky View Factor Simulation

3D Studio Max also provides a function that allows the simulation of ambient light and shadow. This technique is generally known in 3D graphics as Ambient Occlusion, and effectively simulates a sky view factor calculation. In ambient occlusion simulations, each pixel on a 3D model is the source for a Monte Carlo raycasting simulation, where sampling rays are cast in all directions from the pixel. Broadly speaking, rays that hit objects earlier influence the colour of the originating pixel towards black, while rays that hit later influence the colour towards white. To test the viability of this as a method of numerically determining the sky view factor in complex geometry, several idealized 3D situations in which the sky view factor could be calculated geometrically were prepared in 3D Studio Max (see Figure 3.24). A cylindrical basin situation was used, where the sky view factor can be calculated using Equation 3.17 (Oke, 1987), where hb is the height of the basin and rb is its radius.

Ψ = rb^2 / (rb^2 + hb^2)    (3.17)

Figure 3.24: Set up for SVF simulation in 3D Studio Max. Cylindrical objects matching basin geometry are created with hb values from 2.5 m to 100 m. The spacing is denser near hb = 2.5 m.

A constant rb of 20 m was chosen and hb varied between 2.5 m and 100 m. The ambient occlusion simulation was performed for each basin and the brightness of the centre pixel compared to the expected curve. The result (Figure 3.25) shows good correspondence between the two curves except where hb/rb is between 2.5 and 4, where there is an overestimation of up to 0.02, or 35% of the value (though the curve is again matched by hb/rb = 5), indicating that the ambient occlusion method will perform well for computing the sky view factor.

Figure 3.25: Simulated SVF values for each of the basins compared to the analytical curve predicted for a cylindrical basin.

Figure 3.26: 3D model with added simple geometric objects representing dense vegetation. Arrows connect photography of the vegetation objects to their geometric representation (Photo: C. Adderley, June/August 2011).

All the houses and the ground surface were simulated with this method.
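A quick numerical check of Equation 3.17 against the basin simulations can be sketched as follows; psiSim is a hypothetical vector of simulated centre-pixel brightness values (one per basin), and the hb spacing shown is illustrative:

% Analytical sky view factor at the centre of a cylindrical basin (Eq. 3.17)
% compared against the simulated ambient-occlusion brightness values.
rb = 20;                              % basin radius (m)
hb = [2.5 5 10 20 40 60 80 100];      % basin wall heights (m) - illustrative spacing
psiAnalytical = rb^2 ./ (rb^2 + hb.^2);

plot(hb./rb, psiAnalytical, '-', hb./rb, psiSim, 'o');
xlabel('h_b / r_b'); ylabel('\psi');
legend('Analytical (Eq. 3.17)', 'Ambient occlusion');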
Before doing so however, additional rough objects were added to approximate some objects, such as vegetation, which had not been modeled photogrammatically. In general, spheres or elongated cones were used to approximate these, though some hedges were also added with box type objects in some cases (Figure 3.26). Again, the Render To Texture function was used to create texture maps of sky view factor, where a brightness value of 1.0 was equal to a sky view factor of 1.0. These were exported as PNG files and imported into MATLAB (Figure 3.27). 63 Figure 3.27: Pixel sky view factor texture maps for a) House ID 01 b) ground surface. At this point, the data structure for storing geometric and surface area for each house was complete. In summary, for each element on the surface of the 3D model, we have its position in Cartesian coordinates (x,y,z), spherical coordinates relative to the camera (\u00CE\u00B8,\u00CE\u00A6,r), its facet type, material, emissivity and sky view factor. An empty texture map for storing projected thermal data was also added. 3.4 Data Projection To project the thermal scanner data onto the 3D model, a search algorithm was used. As each house\u00E2\u0080\u0099s texture map now includes an array defining the spherical coordinates of each pixel with reference to the scanner location, if the spherical coordinates of each pixel on the thermal scanner\u00E2\u0080\u0099s images are known, they can be matched to their coordinates on the texture map, assigning a temperature value to each texture map pixel. Several optimizations were made here due to computational time constraints. Despite the availability of a panorama every 30 minutes, it was decided to only use every second panorama, i.e. every hour. In order to have a 24 hour period for analysis, only panoramas 1 through 48 were projected. The high panorama at approximately 68 o from nadir (approximately frames 1 to 80) was the only one used as using the lower panorama would have introduced significantly more computational time for a minimal increase in viewed area. Additionally, though all of the texture maps were generated at 1024x1024 pixels, they were resampled by a factor of 2 x 2 (nearest neighbor interpolation) to 512x512 pixels to significantly reduce computation time per image. 64 3.4.1 Panorama Frame Orientation Projection of each frame requires that the \u00CE\u00B1F and \u00CE\u00B2F (frame centre pixel azimuth and tilt angles) of each frame in every panorama be known. The approximate angle of each frame is known from the iterative process above, but it is insufficient for these purposes due to jitter in the camera position. A technique known as local image matching was used to recover the exact \u00CE\u00B1F and \u00CE\u00B2F of each frame. In this technique, significant locations on an image are matched to similar locations on another image, and the pixel shift of these locations is found. Trigonometry can then be used to calculate the shift in \u00CE\u00B1F and \u00CE\u00B2F between the first and second images. The method used to match images is the Scale Invariant Feature Transform (SIFT) algorithm, developed by David Lowe (2004). This algorithm first locates keypoints on two images which are deemed scale and rotation invariant; that is, they do not vary when an image is scaled uniformly or rotated. These keypoints are assigned a descriptor vector which contains a statistical description of the point. 
By matching these descriptor vectors between the two, one can identify points that are similar on both images (Figure 3.28). For a more detailed description of the functioning of the SIFT algorithm, see (Lowe, 2004) Figure 3.28: Keypoints generated by SIFT for Panorama 1. a) Frame 7. b) Frame 8. The vectors originating from each keypoint are a visual representation of the keypoint - if two keypoint vectors appear similar, they are likely the same point. An implementation of the SIFT algorithm is available on David Lowe's website for educational use. It also contains a MATLAB interface to the code, which was modified so that the thermal scanner images would be suitable for use. This involved converting the thermal scanner data from temperature values to a 0.0 to 1.0 intensity format, which was done via minimum-maximum scaling on a per- panorama basis. Each panorama\u00E2\u0080\u0099s maximum and minimum temperatures were found and assigned to 1.0 and 0.0 respectively: each image was scaled in this way. 65 An initial starting orientation at frame 1 of each panorama was found by manually locating the centre point of the frame on the 3D model and extracting the coordinates. This point was converted to a spherical coordinate system matching that of the 3D model. SIFT keypoints were then generated for frame 1 and frame 2, and the two sets of keypoints were matched by comparing their descriptor vectors. For every keypoint in the first image, we compute the dot product between the keypoint's descriptor vector and all keypoint vectors in the second image. These are sorted by magnitude of the angle of the dot product. We then define a threshold for selecting a match (in this case, 0.75, found through experimentation). If ratio of the two points with the lowest angular difference is less than the threshold, we consider these two points a match. Once all keypoints are computed, the matches are sorted by their ratio, with ratios closer to zero being preferable. As a thresholding operation does not necessarily produce perfect results for all frames, a further check must be performed. As the set of matches may contain outliers but is assumed to be generally correct, RANSAC (RANdom SAmple Consensus) was applied in order to reduce the number of outliers. For the RANSAC implementation, a match is randomly selected from the full set of point matches. The change in scale and the change in angle between the first and second image are then computed based on this single match. All other matches' change in angle and scale are then computed and compared to this point. Two thresholds are used to assess the comparison: if it falls below the defined threshold, the match is deemed to be consistent with the randomly selected match. The process of randomly selecting a matching pair is then repeated for 25% of all matches. The match with the largest number of consistent matches is then considered the best match. The matches were then sorted by number of consistent matches and the upper 5% input into MATLAB's imtransform (image transformation) routine. This routine finds the transformation from frame 1's pixels to frame 2's pixels given several point pairs. It returns a transformation matrix which can be applied to frame 1's centre pixel in order to find dx and dy, the pixel change from the centre of image 1 to image 2 (Figure 3.29). By sampling the thermal scanner's calibration matrix at image centre (i,j) + (dx,dy), the dxang and dyang, the angular change from frame 1 to frame 2 is found. 
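The descriptor matching with the 0.75 ratio threshold described above can be sketched in MATLAB as follows; des1 and des2 are assumed to be matrices of unit-length 128-element descriptor rows as returned by the SIFT implementation, and the variable names are illustrative:

% Ratio-test matching of SIFT descriptors between two frames (sketch).
ratioThresh = 0.75;
matches = zeros(size(des1,1), 1);          % index into des2, 0 = no match
ratios  = ones(size(des1,1), 1);           % stored for later sorting of matches

for k = 1:size(des1,1)
    ang = acos(des1(k,:) * des2');         % angles between descriptor vectors
    [vals, idx] = sort(ang);               % smallest angle = most similar descriptor
    if vals(1) < ratioThresh * vals(2)     % best match much closer than the second best
        matches(k) = idx(1);
        ratios(k)  = vals(1) / vals(2);    % ratios near zero are preferable
    end
end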
We assume that the roll angle of the camera does not change. This process can be repeated along an entire panorama to acquire frame angular change and thus frame \u00CE\u00B1F and \u00CE\u00B2F. 66 Figure 3.29: Panorama 1, Frames 7 and 8 with both image centers (derived from SIFT and RANSAC) marked on each image. There is no vertical shift dy in this image, but dx is marked in red. By sampling Frame 7's calibration matrix at C2, the orientation of Frame 1's centre pixel can be found. In this method, however, any errors between frames accumulate and cause significant error at the tail end of the panorama (up to 8\u00C2\u00B0 of \u00CE\u00B1 error). A bidirectional keyframe based approach was used to counter this error. Several keyframes (KF1-8) were chosen approximately every 10 frames in the panorama and their centre \u00CE\u00B1F and \u00CE\u00B2F were retrieved manually from the 3D model (via location in the model, then transformation to spherical coordinates). The SIFT matching and RANSAC refinement was then performed for each image pair from KF1 to KF2 (for example, frames 1-12), then in reverse, from KF2 to KF1. The angles from the forward matching were then combined with the angles from the reverse matching, with each angle weighted by its frame number distance from the initial keyframe. This ensured that directional matching bias was not a factor in frame location errors and ensures that every 10 frames, a correct azimuth/elevation angle pair is introduced. This process was repeated for the frames between KF2-3, KF3-4 and so on for each pair of keyframes. The results are shown in Figure 3.30. 67 Figure 3.30: Frame \u00CE\u00B1F angle as calculated by SIFT without keyframes. Keyframes are indicated with black crosses: the correction generated for each segment is shown in blue. The algorithm was applied to all panoramas with good results. In some cases, keyframes were more difficult to find so only 6 or 7 were used. In all cases the SIFT matching performs well, except near frame numbers 30-60 where the scanner faces west. This is to be expected as houses are closest to the camera in this region, and thus will take up more of the frame. Therefore, more significant points will be located on the boundaries of the frame \u00E2\u0080\u0093 areas which have a large degree of lens distortion. 3.4.2 Projection To match temperatures to their surface locations for a single panorama, each frame in the panorama was assigned correct spherical coordinates for each pixel from the centre coordinates found via image matching and the angular distribution of pixels found during scanner lens calibration. Firstly, the angular calibration data for a frame was loaded, in a camera-centric coordinate system with (0,0) as the frame center. The methodology defined in Section 3.2.2 (Equations 3.3-3.11) was employed to again rotate the image coordinate system. Data projection was done in a pixel-wise fashion for each pixel except the first and last row/column of the image. Each pixel P\u00E2\u0080\u0099s four corner locations (I,J,K,L) were first found by finding the midpoint between each pixel and its neighbor to the left, right, bottom or top. The texture sheets containing pixel spherical coordinates for all houses and the ground surface in the canyon were then loaded. The sheets were then searched for areas that fell within the quadrilateral defined by I, J, K and L (Figure 3.31). 
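A minimal sketch of this quadrilateral search for a single thermal-image pixel is given below. The sheet variable names are illustrative; thetaQ/phiQ hold the spherical coordinates of the pixel corners I, J, K, L, and sheetDimg is assumed to be initialised to Inf:

% Project one thermal-image pixel onto a house's texture sheets (sketch).
in = inpolygon(sheetTheta, sheetPhi, thetaQ, phiQ);   % texture pixels inside the quadrilateral

dimg = hypot(i - ic, j - jc);            % distance of the image pixel from the image centre
replace = in & (dimg < sheetDimg);       % keep the value closest to the image centre
sheetT(replace)    = Tpixel;             % assign the brightness temperature
sheetDimg(replace) = dimg;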
The temperature value for the image pixel was then assigned to all these pixels on a new UVW texture sheet reserved for projected temperature values. In addition, a value representing the Euclidean distance of the pixel (dimg) from the centre of the image was also assigned to the area (Equation 3.18).

dimg = sqrt((i - ic)^2 + (j - jc)^2)    (3.18)

where (i, j) are the pixel's image coordinates and (ic, jc) is the image centre.

Figure 3.31: Conceptual diagram of pixel projection. A square pixel P with corners I,J,K,L is selected in the thermal frame. Corner coordinates are known relative to the orientation of the frame (θ, Φ) from the projection point at the scanner position. They are projected onto a 3D surface as I',J',K',L', forming a quadrilateral.

This was repeated for every pixel in the frame, and then for every frame in the panorama. In many cases, frames overlapped and temperatures were assigned to cells already containing temperatures. In these situations, the dimg values were compared and the temperature value with the lower associated dimg was taken to be correct, in order to avoid vignetting effects on the edges of images.

Figure 3.32: Projected brightness temperature maps for a) House ID 01 and b) ground surface.

The process was repeated for every second panorama, filling all of the empty texture sheets reserved for thermal data earlier (Figure 3.32). Note that there are still inconsistencies and mis-projected areas on these texture sheets, so additional data processing is required. Most evidently, the projection stage above did not take into account line-of-sight constraints, projecting temperatures onto areas of the 3D model that would in reality be invisible to the scanner (i.e. behind another object). These areas are incorrectly attributed and therefore meaningless to the analysis. A clip operation was performed by assigning a No Data value to all temperature sheet pixels where the value of the corresponding occlusion texture sheet was a boolean false (Figure 3.33).

Figure 3.33: Clipped brightness temperature maps for a) House ID 01 and b) ground surface.

3.5 Correction and Gap Filling

Two corrections need to be applied in order to translate the projected brightness temperature sheets into meaningful surface temperatures: an atmospheric correction to compensate for atmospheric absorption and emission, and an emissivity correction to compensate for the different emissive qualities of various materials.

3.5.1 Atmospheric Correction

Atmospheric correction was performed using a set of MODTRAN simulations run for the camera's spectral range, kindly provided by Dr. F. Meier at TU Berlin following the procedure outlined in Meier et al. (2011). These corrections are based on path length, local air temperature, sensor-measured surface temperature and relative humidity. Air temperature and relative humidity were measured in situ by instruments mounted in the canyon during the experiment (see Section 2.2.3). Values were averaged over the duration of each panorama (generally 70-80 seconds). Path length and sensor-measured uncorrected brightness temperature were available from the texture sheets. Due to the large number of surface temperature and path length values, it was impractical to run a MODTRAN simulation for each combination of variables. Therefore, MODTRAN was run for the combinations of values listed in Table 3.4, and an atmospheric correction in K (the difference between the uncorrected brightness temperature and the model output) was computed for each one. The ranges and intervals for each property reflect the distribution of these values in the experimental dataset.
Table 3.4: Variables and intervals used in MODTRAN runs.

Variable                         Range         Interval
Surface brightness temperature   278 - 323 K   5 K
Air temperature                  273 - 303 K   2 K
Relative humidity                40 - 80%      2%
Path length                      15 - 75 m     10 m

For a given pixel, the path length and uncorrected brightness temperature were extracted, and the corresponding atmospheric data compiled. The nearest values were selected from the table of corrections and an interpolation was performed (the variation of the atmospheric correction was assumed to be linear over the intervals, which were reasonably small). At this point, areas of the model with a path length of over 75 m were also masked out of the model. The magnitude of this correction is shown in Figure 3.34 for two texture sheets. Table 3.5 shows the maximum, minimum and mean corrections across the entire domain for each timestep.

Figure 3.34: Magnitude of atmospheric correction for a) House ID 01 and b) ground surface.

Table 3.5: Maximum, minimum and mean atmospheric correction values for the entire model area.

Timestep   Mean Correction (K)   Minimum Correction (K)   Maximum Correction (K)
13:30      3.7                   1.8                      7.4
14:30      4.2                   2.1                      8.4
15:30      4.3                   2.1                      8.6
16:30      4.2                   2.1                      8.2
17:30      3.7                   1.8                      7.4
18:30      3.2                   1.6                      6.5
19:30      2.8                   1.4                      5.6
20:30      2.5                   1.3                      5.0
21:30      2.5                   1.3                      5.0
22:30      2.5                   1.3                      5.0
23:30      2.2                   1.1                      4.4
00:30      2.2                   1.1                      4.4
01:30      2.1                   1.1                      4.2
02:30      2.0                   1.0                      4.1
03:30      1.6                   0.80                     3.3
04:30      1.7                   0.82                     3.4
05:30      1.6                   0.83                     3.3
06:30      1.5                   0.70                     3.0
07:30      1.7                   0.82                     3.3
08:30      1.9                   1.0                      3.9
09:30      2.0                   1.0                      3.9
10:30      2.8                   1.4                      5.5
11:30      3.0                   1.5                      6.0
12:30      3.8                   1.9                      7.4

3.5.2 Emissivity Correction

Additionally, an emissivity correction was performed for each pixel. This involved solving a surface longwave radiation budget. We define a pixel's incoming longwave radiation L↓ in Equation 3.19 as the sum of the longwave radiation from the sky, weighted by the pixel's sky view factor, and the longwave radiation from the canyon, weighted by the complement of the pixel's sky view factor to one.

L↓ = Ψ Lsky + (1 - Ψ) σ Tcanyon^4    (3.19)

where Ψ is the pixel's sky view factor and σ is the Stefan-Boltzmann constant. In this equation we take Tcanyon to be the average temperature over the entire canyon for simplicity. The Lsky value was taken from measurements of longwave radiation at Sunset Tower (at h = 30 m, see Section 2.2.3). We define the outgoing longwave radiation L↑ as the difference between the radiation observed from the thermal scanner (calculated from TB,px, the temperature observed at the scanner) and L↓ scaled by the complement of the pixel's emissivity (Equation 3.20).

L↑ = σ TB,px^4 - (1 - ε) L↓    (3.20)

where ε is the pixel emissivity. Therefore, the surface temperature Tcorr, corrected for emissivity and sky effects, can be found by Equation 3.21.

Tcorr = (L↑ / (ε σ))^(1/4)    (3.21)

This correction was applied to all pixels on the thermal texture sheets once they had been corrected for atmospheric effects. This correction is iterative, as applying the temperature correction to the texture sheet changes the value of Tcanyon. Therefore, the correction was performed repeatedly until the corrected surface temperature Tcorr for one iteration was not significantly different from Tcorr for the previous iteration (difference of less than 0.25 K). This was accomplished within 5 iterations for most pixels. The magnitude of this correction is shown in Figure 3.35. A summary of emissivity corrections over all timesteps is shown in Table 3.6.

Figure 3.35: Magnitude of emissivity corrections for a) House ID 01 and b) ground surface.
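A minimal MATLAB sketch of the iterative emissivity correction (Equations 3.19-3.21) for one texture sheet follows. Tb, emis and svf are the atmospherically corrected brightness temperature, emissivity and sky view factor sheets, and Lsky the measured incoming longwave; a global convergence test is used here for brevity, whereas the original check was applied per pixel:

% Iterative emissivity correction (Equations 3.19-3.21), sketch.
sigma = 5.67e-8;                        % Stefan-Boltzmann constant (W m^-2 K^-4)
Tcorr = Tb;                             % first guess: no correction
for it = 1:10
    Tcanyon = mean(Tcorr(:), 'omitnan');                        % canyon-average temperature
    Ldown   = svf .* Lsky + (1 - svf) .* sigma .* Tcanyon.^4;   % Eq. 3.19
    Lup     = sigma .* Tb.^4 - (1 - emis) .* Ldown;             % Eq. 3.20
    Tnew    = (Lup ./ (emis .* sigma)).^0.25;                   % Eq. 3.21
    if max(abs(Tnew(:) - Tcorr(:))) < 0.25                      % converged to < 0.25 K
        Tcorr = Tnew;
        break
    end
    Tcorr = Tnew;
end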
74 Table 3.6: Maximum and mean emissivity correction values for the entire model area. All minimum corrections are near zero in this case; there was at minimum one pixel with correction lower 1.5x10-5 K at each timestep. Timestep Mean Correction (K) Maximum Correction (K) 13:30 -0.034 9.6 14:30 0.078 9.4 15:30 0.040 7.7 16:30 0.059 9.8 17:30 -0.034 9.6 18:30 -0.098 9.8 19 :30 -0.093 5.9 20:30 -0.11 5.1 21:30 -0.110 3.4 22:30 -0.13 3.5 23:30 -0.080 4.5 00:30 -0.11 3.1 01:30 -0.10 3.6 02:30 -0.11 3.1 03:30 -0.091 6.0 04:30 -0.093 3.6 05:30 -0.11 2.40 06:30 -0.11 3.1 07:30 -0.099 10.3 08:30 -0.053 11.8 09:30 0.050 12.5 10:30 0.15 10.3 11:30 0.16 11.1 12:30 0.26 8.7 With the emissivity correction and the atmospheric corrections performed, the projected brightness temperature values have now been translated to relevant surface temperatures. 3.5.3 Gap-Filling As previously indicated, many areas in the street canyon were not visible from the thermal scanner. Additionally, some temperature measurements were incorrectly projected due to corner effects, mixed pixels and an inaccurate camera position (some examples visible on Figure 3.35). For the purposes of constructing a complete model of surface temperatures, it is necessary that these areas be correct. Therefore, these areas needed to be filled in some fashion. It was decided to fill these gaps with other, similar values from the rest of the dataset. 75 Firstly, to remove incorrectly projected areas, a texture sheet acting as an analysis mask was created for each house and the ground surface, delineating areas where data was seen as correct by visual inspection. Using an image editing suite (Adobe Photoshop CS2), the analysis mask was manually created, with areas deemed to be incorrect marked in black and correct areas marked in white (Figure 3.36). A PNG image was created for each texture sheet for each timestep, which was imported into MATLAB as another UVW texture sheet. Figure 3.36: Analysis masks for a) House ID 01 and b) the ground surface. Areas that were evidently incorrect are masked in black. On the houses, these are generally misattributed temperatures (roof, bottom left area). On the ground surface, cars and large vegetation objects are masked out. By comparing the analysis mask with other texture sheets, it was possible to determine which areas of the sheets needed to be interpolated. The gap filling was based on an adaptive search algorithm. For each pixel requiring interpolation in a texture sheet, four indices were extracted: the pixel facet type, pixel material type, pixel orientation and pixel sky view factor. Each of these factors strongly affects the temperature of a surface. A search algorithm then looked at all other pixels in the same texture sheet which were not masked out by the analysis mask. Within these areas, pixel attributes were examined for values matching those belonging to the pixel to be filled. Located pixels had to have identical material and facet type to the originating pixel as the thermal behaviour of the surface is heavily modulated by these two properties. Sky view factor and orientation had variable thresholds: the orientation threshold was based on the slope of the pixel. Pixels with a high slope required an orientation value that was very close (within 5\u00C2\u00B0), while pixels with a low slope had a more relaxed threshold (15\u00C2\u00B0). The threshold for sky view factor was 0.03. 76 If a similar pixel could not be found on the texture sheet (i.e. 
the same house), the sheets for other houses and the ground surface were examined. This was key in areas where only one side of the house was visible to the thermal scanner, as pixels for the opposite side would all be unattributed. This assumes that the houses behave thermally similar and if a wall is missing, the same wall orientation is visible on the other side of the scanner (i.e. panorama) and houses with pixels matching the orientation could be found. For example, pixels on the south-facing wall of House ID 01, invisible to the scanner, could potentially be found on the south facing walls of House IDs 04, 05, 06, 11, 12 or 13. If a similar pixel was still not found, the sky view factor and orientation thresholds were relaxed by 5%, and the search performed again. In this way, almost all pixels were matched: the maximum number of unmatched pixels on any single sheet was 404 (ground surface sheet, pixels located underneath some houses). Figure 3.37: Gap-filled texture sheets for a) House ID 01 and b) the ground surface. This completed the texture maps, which at this point are fully gap-filled and corrected for atmospheric and emissivity effects (Figure 3.37). Brightness temperatures TB have been fully transformed to surface temperatures T0. 77 Chapter 4 Analysis of Canyon Surface Temperatures 4.1 Elgin Street Canyon Morphometry The completed Elgin Street 3D model can provide not only data about surface temperatures and energetics, but also useful data about the properties of the urban fabric and three dimensional structure (morphometry) of the area. However, the 3D model is not complete: House IDs 9, 10, 18 and 19 are either incomplete or were not projected. These houses were removed from the study and are not considered in Chapter 4. Their footprints (defined by the perimeters of the lots) are also removed from the sheet prior to computation of any canyon morphometric parameters (see Figure 4.1). Figure 4.1: View of the Elgin Street 3D model showing the subset of the canyon used in calculations in Chapter 4. 78 Various measures of structure often used to define urban surfaces in models and can be computed to further define the study area. These are summarized in Table 4.1. Table 4.1: Calculated urban geometric measures for the modeled Elgin Street canyon. Symbol Parameter Value \u00CE\u00BBb plan area ratio of buildings (m 2 m -2 ) 0.32 \u00CE\u00BBi plan area ratio of impervious ground surfaces (m 2 m -2 ) 0.19 \u00CE\u00BBj plan area ratio of impervious ground and building surfaces (m 2 m -2 ) 0.51 \u00CE\u00BBv plan area ratio of vegetation (m 2 m -2 ) 0.49 V normalized building volume 2 (m 3 m -2 ) 2.78 \u00CE\u00BBc complete aspect ratio (m 2 m -2 ) 2.77 zH area-weighted building height (m) 6.30 Wx characteristic canyon width (m) 33.3 zH/Wx canyon height to width ratio 0.18 Ly characteristic along-canyon inter-building spacing (m) 3.23 These parameters are typical for a suburban neighborhood, with the exception of the complete aspect ratio, which is rather high. This is likely as the complete building surface area is very high for the microscale case, as the 3D model includes many surfaces not usually mapped, such as overhangs and porches. This causes a larger value for \u00CE\u00BBc than in previous studies, where buildings are approximated in less detail (\u00CE\u00BBc is scale dependant). The values can be compared favorably to measures derived for the entire Vancouver-Sunset study area (Table 4.2). 
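Two of the measures in Table 4.1 could be computed from the model in the following way; this is only a sketch, and the variables bldgFootprint (per-lot building footprint areas), bldgHeight (corresponding roof heights) and Aplan (total plan area of the subset) are hypothetical names, not those of the actual analysis code:

% Illustrative computation of two Table 4.1 measures (sketch).
lambdaB = sum(bldgFootprint) / Aplan;                              % plan area ratio of buildings
zH      = sum(bldgFootprint .* bldgHeight) / sum(bldgFootprint);   % area-weighted building height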
We see that in the case of \u00CE\u00BBb the Elgin street canyon has a higher proportion of buildings, this is to be expected as we do not see either large parks or major arterial roads in the study area. These regions are, however, present in the entire Vancouver- Sunset neighborhood. We also see that the plan area ratio of impervious surfaces is much lower in Elgin street, despite the large street and expansive houses. This is potentially due to misclassification errors in the LiDaR and remote sensing derived ground cover in van der Laan et al. (2012), see Goodwin et al. (2009) for the original methodology used. The value obtained for Elgin Street compares more favorably with the value found by Grimmond & Oke (2002). 2 Does not include vegetation structure. 79 Table 4.2: Comparison of selected parameters from the Elgin Street 3D model to parameters defined for the entire Vancouver- Sunset neighborhood. Symbol Parameter Elgin Street Vancouver-Sunset \u00CE\u00BBb plan area ratio of buildings (m 2 m -2 ) 0.32 0.29 (van der Laan et al., 2012) \u00CE\u00BBi plan area ratio of impervious ground surfaces (m 2 m -2 ) 0.19 0.37 (van der Laan et al., 2012) 0.23 (Grimmond & Oke, 2002) zH area-weighted building height (m) 6.30 5.13 (van der Laan et al., 2012) The derived sky view factor can also be computed for various facet types (Table 4.3). Roofs have the highest mean sky view factor (0.87) with a low variance within the class. Ground sky view factors are lower (centered at 0.58), but show the highest variance within the class. The walls in general have the lowest sky view factors, which can be further subdivided by facet orientation. As expected due to the canyon geometry, sky view factors of north and south facing walls are lower than east and west facing as the latter face out on the lane or the canyon. It is interesting to note the lower sky view factor of north facing walls and of east facing walls. In an idealized canyon, these should be equal to the south and west facing view factors. This variation is caused by the large variety of backyard structure configurations per lot. The distribution of porches, garages and overhangs can significantly vary the sky view factor. Table 4.3: Average sky view factors of facet types in the Elgin Street canyon. Facet Mean \u00CE\u00A8 Standard deviation of \u00CE\u00A8 Roofs 0.87 0.036 Ground 0.58 0.13 Walls (all) 0.26 0.037 Walls (North facing) 0.19 0.020 Walls (South facing) 0.22 0.015 Walls (East facing) 0.36 0.031 Walls (West facing) 0.32 0.055 To further characterize the canyon, we can calculate the abundances of materials present in the canyon from the data in the model. A table of materials separated by facet type is shown in Table 4.4. Roof material is quite evenly split between metal and asphalt, with a small area of brick (chimneys). Errors in the automatic facet type classification are evident: during material marking, concrete, rock and paint surfaces were not marked on roofs \u00E2\u0080\u0093 these elements were classed as roofs by the classifier. Fortunately, these values are very small. Walls are varied, but are dominated by painted surfaces, particularly white paint. The ground surface is divided between concrete and grass surfaces only. 80 Table 4.4: Abundances of materials in the Elgin Street canyon, divided by facet type. 
Facet, Material      Surface Area (m²)   Percent (%)
Roofs
  Aluminum           1748                45.8
  Asphalt            1870                49.0
  Brick              8                   0.20
  Concrete           2                   0.051
  Rock               0.1                 0.00027
  Stucco             0.2                 0.00044
  Wood               0.1                 0.00037
  Paint (all)        181                 4.8
  Paint (white)      137                 3.6
  Paint (grey)       0.7                 0.019
  Paint (blueish)    4                   0.11
  Paint (red)        0.08                0.00022
  Paint (dark)       26                  0.68
  Paint (brown)      12                  0.31
  Paint (light)      0.9                 0.025
  Glass windows      0.3                 0.00085
  Semitransparent    3                   0.089
Walls
  Aluminum           146                 2.3
  Asphalt            1                   0.020
  Brick              217                 3.4
  Concrete           34                  0.52
  Rock               107                 1.7
  Stucco             495                 7.7
  Wood               10                  0.16
  Paint (all)        4682                72.6
  Paint (white)      3719                57.6
  Paint (grey)       37                  0.58
  Paint (blueish)    318                 4.9
  Paint (red)        90                  1.4
  Paint (dark)       296                 4.6
  Paint (brown)      193                 3.0
  Paint (light)      31                  0.47
  Glass windows      345                 5.3
  Semitransparent    414                 6.4
Ground
  Concrete           2359                15.8
  Grass              12558               84.2

4.2 Observed Surface Temperatures

The completed canyon 3D model allows a detailed analysis of surface temperatures with respect to the morphometry and view factors in the canyon. As in Section 4.1, a subset of the 3D model is used for computation of all surface temperatures in this section, following Figure 4.1.

Figure 4.2: a) Area-weighted average facet surface temperatures over the course of the experiment as well as the complete surface temperature T0,C. b) Boxplots of wall surface temperatures, showing minimums, maximums and quartiles for each timestep. The coloured curve is the area-weighted wall surface temperature. c) As in b), but with roof surface temperatures. d) As in b), but with ground surface temperatures.

Area-weighted surface temperatures of various facets over the entire diurnal course are relatively trivial to calculate with this dataset due to the large amount of information encoded in each pixel (areas are associated with every temperature value). Figure 4.2a shows the area-weighted surface temperatures of all surfaces in the street canyon divided by facet type. These temperatures were computed by first calculating the total area of the facet type in question (by summing the areas of all pixels with matching facet type attributes), then weighting all temperature values with the correct facet type attribute by their pixel area attributes as per Equation 4.1.

T0,facet = ( Σi Ai T0,i ) / ( Σi Ai )    (4.1)

T0,C = ( Σi Ai T0,i ) / ( Σi Ai )    (4.2)

where Ai is the pixel area and T0,i the corrected pixel surface temperature; the sums in Equation 4.1 run over all pixels i of the facet type in question, while the sums in Equation 4.2 run over all pixels i of the complete three-dimensional surface.

Figures 4.2b-4.2d display detailed plots of the surface temperature distribution for each facet type, showing quartiles and minimum/maximum values for each timestep. Additionally, the area-weighted surface temperature from Figure 4.2a is plotted for each facet type. Figure 4.2a also displays the complete surface temperature T0,C. T0,C is calculated via the method defined in Voogt and Oke (1997), though adapted to function with pixels instead of generalized areas (Equation 4.2).

The plot of area-weighted surface temperatures shows the expected behavior for a northern hemisphere urban system located in the mid-latitudes. Roof surface temperatures display a distinct trend of higher values in the daytime, with a spatial mean of 320.8 K (± 9.1 K)³ at 13:30, cooling down to the lowest temperatures at night, with a spatial mean of 280.8 K (± 2.5 K) at 6:30. Roofs will receive the most shortwave irradiance in the daytime due to the lack of shading, and will cool rapidly by longwave emission at night due to their high sky view factor and limited heat storage capabilities. Roof temperatures show very large variation within the class (Figure 4.2c), which is due to the varying materials present (see Figure 4.5 and associated discussion). Class variation decreases at night.
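A minimal MATLAB sketch of Equations 4.1 and 4.2 is given below; T0, A and facet are assumed to be vectors of corrected pixel surface temperatures, pixel areas and facet type codes pooled from all texture sheets, with the facet codes (1 = ground, 2 = roof, 3 = wall) illustrative:

% Area-weighted facet temperatures and complete surface temperature (sketch).
T0facet = zeros(3,1);
for f = 1:3
    k = facet == f & ~isnan(T0);
    T0facet(f) = sum(A(k) .* T0(k)) / sum(A(k));   % Eq. 4.1, one facet type
end
k   = ~isnan(T0);
T0C = sum(A(k) .* T0(k)) / sum(A(k));              % Eq. 4.2, complete 3D surface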
The ground surface shows opposite behavior, with lowest surface temperature of all facet classes during the day with a maximum spatial mean of 311.5 K ( 3.35 K) at 14:30 and highest mean minimum surface temperature of all facets at night with 286.4 K ( 1.7 K) at 6:30. In-class variation is more limited than the roofs and is more constant over the entire dataset, with minimal decrease at night (Figure 4.2d). The ground surface has distinct material differences from roofs, being composed of concrete and grass. Grass will not heat as effectively in the daytime (evapotranspiration) and will also cool rapidly at night (low thermal admittance). There may also be significant spatial variability in the grass temperature, as some areas will be shaded by houses and some may be irrigated. 3 In this portion of the thesis, values of the form 4.0 K refer to the variation of the temperature values (one standard deviation). 83 The concrete areas (road) tend to have reasonably high sky view factor due to their canyon locations and will also cool down. However, the lower sky view factor of the canyon on average will retard cooling due to reception of emitted longwave from house wall surfaces. Hence, we see warmer surface temperatures at night on the canyon floor compared to roofs. An interesting feature is the slower warming of the ground surface in the morning. This is partly due to shading, but also due to the fact that available energy is channeled into evaporation. Dewfall occurred in the study at night on the grassy areas, and was evaporated in the morning hours, reducing the warming rate of the lawn portion of the ground surface. Walls exhibit a surface temperature somewhere between ground and roofs in Figure 4.3a with a spatially averaged maximum of 314.4 K ( 7.8 K) at 14:30 and a minimum of 283.7 K ( 2.4 K) at 6:30. As wall temperatures will vary considerably by orientation (note large variations within the class in Figure 4.3b), they are detailed additionally in Figure 4.3a and 4.3b, where wall surface temperatures are split by facet orientation. This graph again shows expected behavior. Notably, south facing walls achieve warmer 24 hour temperatures (298.5 K ( 4.0 K) ) than others due to their sun-facing aspect. Interestingly, they achieve cooler temperatures at night. This is possible as the average sky view factor of south facing walls is slightly higher than that of the north facing walls (see Table 4.3). This would allow an increase in heat loss at night. Excluding the night-time situation, the north-facing walls are cooler than the south-facing walls by an average of 5.6 K ( 3.6 K), which is to be expected considering the location of the sun, leaving the north facing walls in shade. Figure 4.3: Area-weighted wall temperatures divided by facet orientation. Error bars show one standard deviation. a) North and South facing walls. b) East and West facing walls. 84 Differences of area-weighted wall surface temperatures split by orientation from the 4-facet average temperature are shown in Figure 4.4. Figure 4.4: Difference of area-weighted mean wall surface temperatures separated by facet orientation from 4-facet mean wall temperature. East and west facing walls show opposite patterns of heating and cooling. West-facing walls are warmer in the late afternoon (West: 307.0 K ( 4.1 K) vs. East: 305.1 K ( 4.8 K) from 15:30 to 20:30) and evening as the setting sun faces them. 
East-facing walls are warmer in the morning as the rising sun presents in the east, shading the west-facing walls (West: 291.6 K ( 3.8 K) vs. East: 296.6 K ( 5.5 K) from 6:30 to 11:30). There is also a notable lag time present between the peak of surface temperature of the south facing walls (near 13:30) and the west-facing walls (near 14:30).The 24-hour averages of each facet type and orientation are shown in Table 4.5. As the highest differences between building walls were present in the near-noon hours, the results shown here are consistent with Chudnovsky et al. (2004) and Hoyano et al. (1999) also using time sequence thermography. The results can also be compared to observations made of a scale model by Roberts (2010), where similar patterns between walls orientation are observed, though the magnitudes of temperature differences are different. These plots can be further decomposed using the material information embedded in the 3D model. In a similar fashion to Equation 4.1, it is possible to calculate T0,m, the area-weighted surface temperature of all surfaces with a given material (Equation 4.3). 85 (4.3) Table 4.5: 24-hour averaged temperatures for all facet types in the canyon. Facet Type 24-hour average temperature (K) Roofs 297.5 Ground 297.2 Walls (all) 297.3 Walls (North facing) 295.8 Walls (South facing) 298.5 Walls (East facing) 298.1 Walls (West facing) 296.4 Firstly, Figure 4.5 shows surface temperature plotted for the two roof types in the study area: asphalt/asphalt tiles and metals. Though Table 4.3 shows more materials than this, theses two are strongly dominant. The roof surface temperatures show similar diurnal courses, but large differences in magnitude. The 24 hour average of asphalt surfaces is much higher at all times by approximately 30 K, which seems unrealistic. This is certainly due to the thermal and radiative properties of metals, with a low emissivity. This also suggests that the emissivity values (0.7) used for metal roofs in the study was too high, as a lower emissivity would have resulted in a more reasonable surface temperature. Metal roofs in the canyon also exhibited varying degrees of longwave reflectivity and were in some cases partly covered with detritus and rust (see examples in Chapter 2, Figure 2.8). This would have had a significant effect on error of surface temperatures for those particular roofs which may have adversely affected the means shown in Figure 4.5 (note that the variation within the metal class is at all times higher than the variation within the asphalt class). The plots of ground materials are more interesting and show distinct courses (Figure 4.6). As expected, grass surfaces are cooler than concrete sidewalk and road surfaces (Grass: 309.3 K ( 3.1 K) vs. concrete: 317.6 K ( 5.3 K) at the time of the maximum at 14:30), except for in the morning where they are roughly equal to the road/sidewalk temperatures. The concrete materials have higher heat capacities and higher thermal admittance, which slows release of heat at night and keeps the concrete warmer (Grass: 286.7 K ( 1.2 K) vs. concrete: 289.4 K ( 2.0 K) at time of minimum nighttime difference at 3:30). In a similar fashion, this also retards warming in the morning, which allows the grass temperature to be closer to the concrete temperature. Additionally, evaporation of dewfall from the grass and transpiration from the grass prevents its surface temperature from deviating too much from air temperature. 
Near 12:00, the separation between the two materials has reasserted itself, as the concrete 86 continues to warm. These results agree with those found by Chudnovsky et al. (2004), who found that surface temperatures of concrete ground surfaces exceeded vegetation temperatures at all times in the diurnal course. Figure 4.5: Area-weighted surface temperatures of the two most common roof material types (asphalt/asphalt tiles and metal surface). Error bars show one standard deviation. Figure 4.6: Area-weighted surface temperatures of the ground surface, divided into concrete and grass surfaces. Error bars show one standard deviation. 87 It is expected that vegetation temperature should closely follow local air temperature, as found by Meier and Scherer (2012), where maximum deviations of tree temperature from air temperatures were 5.6 K and mean deviations were below 0.7 K. Figure 4.7 shows the area-weighted vegetation temperature compared to air temperature measured in the canyon (average of temperatures measured at East and West instrumentation tripods). At all times except from 5:30 to 8:30, the air temperature is lower than the area- weighted vegetation temperature in the canyon. A maximum difference of -10.8 K is observed at 17:30, with an average difference over the experiment of 5.3 K. This does not follow the expected results in Meier and Scherer for tree canopies. The vegetation in this study is short grass and therefore might be expected to follow a different behaviour: however, Chudnovsky et al. (2004) found using time sequence thermography that short grass temperatures were cooler by 4.0 K than tree canopy surfaces. This would indicate that the short grass temperature actually underestimates vegetation temperature, and that a hypothetical tree canopy could show warmer temperatures if this was true (it should be noted that Chudnovsky et al. took place in Tel Aviv where the climate is significantly different from Vancouver). Figure 4.7: Area-weighted surface temperature of canyon vegetated surfaces compared to air temperature measured at 28 cm. Error bars show one standard deviation. Wall surface temperatures by material are much more varied, as shown in Figure 4.8. In general, the same trend is visible across all materials, but with differences in absolute magnitude and magnitude of variation. As would be expected, rock facades were generally warmer due to high thermal admittance (maximum of 308.9 K ( 5.7 K) at 13:30, minimum of 284.6 K ( 1.46 K) at 6:30), and remained warm throughout the day and night. These surfaces have low sky view factors (located below balconies) and so are not expected to cool rapidly during the night. Because of their limited solar exposure, however, they 88 never reach very high surface temperatures, staying at least 5 K lower than concrete surfaces on the road and sidewalks. The same can be said of brick surfaces, though they tend to have lower surface temperature than rock surfaces (maximum of 286.0 K ( 8.0 K) at 14:30, minimum of 263.1 K ( 2.5 K) at 6:30). This is partly because brick surfaces in the model include chimneys, which are classified as a wall by the automatic facet type classifier. Chimneys have a very high sky view factor and a high surface area to volume ratio and so will cool off significantly in the night unless being used. They also have a high thermal admittance, so will take more energy to warm up than most other wall surfaces. The minimum is however very low, exhibiting minimums comparable to those of aluminum rooftop surfaces. 
This points towards data projection error, where some chimneys may have been incorrectly assigned temperature pixels belonging to aluminum roofs. Painted surfaces show the highest variability of all materials due to the large number of paint colours with varying albedos, warming to very high surface temperatures in the daytime and low surface temperatures at night (maximum of 308.1 K ( 5.3 K) at 14:30, minimum of 275.0 K ( 1.5 K) at 6:30). Painted surfaces are very common in the canyon (refer to Table 4.3), and are present at a wide variety of sky view factors and orientations \u00E2\u0080\u0093 this drives the variation in surface temperature. The underlying materials of painted surfaces can also vary, ranging from wood to aluminum siding, also driving variability within the category. The lower heat capacity and moderate thermal admittance of these materials does show in the ability of the surfaces to change reasonably rapidly and strongly from day to night-time. Figure 4.8: Area-weighted wall surface temperatures, divided by material type for the most common wall materials. All paint colours are shown averaged in this case. Error bars show one standard deviation. 89 Unpainted wood, though rather uncommon in the canyon, shows the highest wall temperatures (maximum of 314.1 K ( 8.4 K) at 15:30, minimum of 283.4 K ( 2.1 K) at 6:30), exceeded only by rock surfaces at night. Wood surfaces tend to support decks in back lanes, and therefore have reasonably low sky view factors and high view factors of houses. This is likely a driving factor for their higher temperatures: in addition, wood has a low thermal admittance, allowing it to cool through the night. Wood's thermal properties also depend heavily on water content, which may vary significantly considering the varied locations of wood surfaces. This will be a driver for the very high variation within the class in the daytime. Table 4.6 summarizes the behaviour of canyon facets divided by material composition. In general, it is expected that the mean diurnal amplitude of the surface temperature should be inversely related to the thermal admittance (see Chapter 1, Table 1.1) of the surface in question: low thermal admittance, larger mean diurnal amplitude (Oke, 1987). This is notable in roof surfaces which show a large mean diurnal amplitude (40 K). Roof surfaces, though made of materials with high thermal admittance, have limited heat storage capacities due to their low rate of conduction to the building interior. Ground surfaces have reasonably high thermal admittances and reflect this in a low mean diurnal amplitude (25.1 K). Wall surfaces vary greatly in amplitudes however, if broken down on a per- orientation basis few patterns are visible. The relation between thermal admittance and amplitude is not evident: for example rock surfaces have a very high (2220 J m -2 s -1/2 K -1 ) thermal admittance, but do not consistently show the lowest amplitude (see Table 4.6, wall materials). Wood, by contrast, does show this relation, with a low thermal admittance (535 J m -2 s -1/2 K -1 ) and consistently large mean diurnal amplitudes. This variation is present because the mean diurnal amplitude is also related to the orientation of the surface, due to shading effects (Voogt & Oke, 1998). This is strongly evident in wall surface temperatures, with walls that receive the least variation in shading (north-facing) having the lowest mean diurnal amplitude (29.1 K). 
The largest variation (south facing) shows a very large mean diurnal amplitude (36.5 K), and the east and west facing walls, which also have large variations in shading, show identical mean diurnal amplitudes (31.7 K). 90 Table 4.6: Summarized area-weighted mean maximum and minimum surface temperatures with calculated mean diurnal amplitude for canyon materials, divided by facet type and orientation. Values in brackets show standard deviations. Facet type (material) Mean maximum temperature (K) Mean minimum temperature (K) Mean diurnal amplitude (K) Roofs (all) 320.8 ( 9.1) 280.8 ( 2.5) 40.0 Roofs (asphalt) 321.0 ( 6.2) 280.9 ( 1.8) 40.1 Roofs (metal) 294.8 ( 10.9) 257.5 ( 2.3) 37.3 Ground (all) 311.5 ( 3.35) 286.4 ( 1.7) 25.1 Ground (grass) 309.3 ( 3.1) 284.6 ( 2.0) 24.7 Ground (concrete) 317.6 ( 3.1) 287.8 ( 1.6 ) 29.8 Walls (all) 314.4 ( 7.8) 283.7 ( 2.4) 30.7 North-facing walls (all) 312.7 ( 6.0) 283.6 ( 1.7) 29.1 North-facing walls (stucco) 295.2 ( 1.2) 273.0 ( 0.47) 22.2 North-facing walls (rock) 267.4 ( 0.93) 251.4 ( 0.43) 16.0 North-facing walls (brick) 277.6 ( 0.78) 260.3 ( 0.32) 17.3 North-facing walls (wood) 306.6 ( 0.61) 285.9 ( 0.30) 20.7 North-facing walls (paint) 302.3 ( 1.4) 269.6 ( 0.56) 32.7 South-facing walls (all) 319.0 ( 8.3) 282.5 ( 2.0) 36.5 South-facing walls (stucco) 315.4 ( 2.0) 285.0 ( 0.84) 30.4 South-facing walls (rock) 316.3 ( 3.3) 285.2 ( 0.80) 31.1 South-facing walls (brick) 313.6 ( 2.0) 285.4 ( 0.87) 28.3 South-facing walls (wood) 316.4 ( 1.5) 283.7 ( 0.40) 32.7 South-facing walls (paint) 313.1 ( 3.4) 279.9 ( 1.1) 33.2 East-facing walls (all) 316.3 ( 6.0) 284.6 ( 2.3) 31.7 East-facing walls (stucco) 320.8 ( 0.35) 287.6 ( 0.15) 33.2 East-facing walls (rock) 314.2 ( 2.5) 287.5 ( 0.76) 26.7 East-facing walls (brick) 278.5 ( 3.5) 254.9 ( 1.1) 23.6 East-facing walls (wood) 325.9 ( 0.15) 284.8 ( 0.30) 41.1 East-facing walls (paint) 305.2 ( 4.5) 272.9 ( 1.1) 32.3 West-facing walls (all) 316.0 ( 5.7) 284.3 ( 1.7) 31.7 West-facing walls (stucco) 314.4 ( 3.03) 281.9 ( 0.71) 32.5 West-facing walls (rock) 324.6 ( 0.57) 282.0 ( 0.41) 42.6 West-facing walls (brick) 273.7 ( 2.8) 241.9 ( 1.5) 31.8 West-facing walls (wood) 326.6 ( 0.94) 280.6 ( 0.40) 46.0 West-facing walls (paint) 316.0 ( 2.9) 277.0 ( 1.2) 39.0 91 Chapter 5 Modeling of Preferentially Viewed Surface Temperatures 5.1 Simulation Setup In order to examine the effect of various view directions on observed brightness temperature TB, the surface temperature data, now encoded in a 3D form, will be rendered using different angles and camera models. Specifically, two sensor types will be modeled: a simulation of a bird\u00E2\u0080\u0099s eye view from an airborne or satellite platform with a narrow field of view and a simulation of a hemispherical downward- facing radiometer with a field of view of 2\u00CF\u0080. Again, 3D Studio Max was chosen as the tool for view direction simulation, as it can model both view direction types using raytracing simulations. As remote sensors record incoming radiation then transform it into temperatures, it was necessary to run the simulations using longwave flux data as opposed to temperature data. This required changes to the temperature texture sheets. The emissivity correction done to transform TB to T0 was unnecessary, as a sensor would observe TB, with emissivity and reflection effects included. The atmospheric correction is not considered, as simulations would involve a sensor at different distances than the correction accounted for (100m versus up to 60 m). 
However, it was decided to simplify the situation and treat the atmosphere as fully transparent for the purposes of the simulations. Therefore, a new texture sheet was created, containing the projected temperature values corrected for atmospheric effects, but prior to emissivity corrections. This texture sheet was then gap-filled using a method identical to that defined in Section 3.5.3. This created a gap-filled, atmospherically corrected texture sheet with temperature values TB,a,px. This sheet was then processed using the Stefan-Boltzmann law (Equation 5.1) to create longwave flux values for all pixels.

Lpx = σ TB,a,px^4    (5.1)

Importing the pixel longwave emission data from MATLAB data structures into the program was done by encoding the maps into 32-bit truecolor images, which could be imported into the program. This is necessary as the number of data values exceeds the range available using a traditional greyscale map (256 values). With a 32-bit red, green, blue and transparency (RGBA) image, the number of possible data values is extended to 4,294,967,295, which is easily sufficient for the dataset. The number of unique Lpx values present in the entire street canyon for a given timestep was counted and indexed, with each value assigned an RGBA value. An image was created by matching the RGBA values to the Lpx values for each map. These were exported by MATLAB's imwrite function into PNG images (example in Figure 5.1) for each timestep, producing 16 RGBA images that are compatible with a wide variety of programs. The indexing data containing the transformation from Lpx to colours was also saved.

Figure 5.1: Examples of exported indexed Lpx maps for a) House ID 01 and b) ground surface. Note the lower use of colour space due to the irregular spacing of values.

These PNG images were then imported and assigned to the correct sections of the 3D model in 3D Studio Max. This converted the model, split by texture maps, into a 3D polygonal structure which could interact with the software's lighting and raytracing engines for view direction simulations. However, if a simulation of a sensor at high altitude is desired, a single block of street canyon is insufficient. The canyon therefore had to be repeated many times in the x and y directions, creating an effectively infinite suburban residential area. In order to do this, the 3D model had to be modified to be repeatable. Firstly, House IDs 7-10 and 17-19 were not used and were removed from the model. The ground surface was also truncated to create a rectangular area that enclosed the remaining houses (Figure 5.2a). In addition, to create a fully repeatable element, a simulated version of the lane was created by duplicating the road surface. Though not strictly an ideal representation of the lane (due to different materials), it allows a representative repeatable element to be created (Figure 5.2b).

Figure 5.2: a) Canyon 3D model, modified to be repeatable. Light red areas are removed, red houses are removed, the orange road surface is duplicated and moved to the area indicated in blue. b) Shaded view of finalized repeatable element.

To verify whether this subset of the model is representative of the area, the repeatable element can be compared to the similar canyon subset used in Chapter 4 (Figure 4.1). Table 5.1 shows this comparison, indicating that the subset defined in Figure 5.2 is very similar morphometrically to the subset defined in Section 4.1.
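Returning to the flux-encoding step described above, a minimal MATLAB sketch is given below. The exact bit-packing scheme and filenames are illustrative assumptions; for brevity only the 24-bit RGB channels are packed here, whereas the original workflow also used the transparency channel to reach 32 bits:

% Sketch of the flux encoding (Equation 5.1 and the indexed-colour export).
sigma = 5.67e-8;                              % Stefan-Boltzmann constant (W m^-2 K^-4)
Lpx   = sigma .* Tbapx.^4;                    % Equation 5.1

[Lvals, ~, idx] = unique(Lpx(:));             % index the unique flux values
idx = uint32(reshape(idx, size(Lpx)));        % one integer index per texture pixel
R = uint8(bitand(bitshift(idx, -16), uint32(255)));   % pack the index into 8-bit channels
G = uint8(bitand(bitshift(idx,  -8), uint32(255)));
B = uint8(bitand(idx, uint32(255)));
imwrite(cat(3, R, G, B), 'house01_L.png');    % lossless PNG for import into 3D Studio Max
save('L_index.mat', 'Lvals');                 % index-to-flux lookup, needed on re-import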
The resulting element could then be repeated approximately 50 times in each horizontal direction (about 45,000 houses). Though not completely infinite, it approximates an infinite plane for the view directions simulated. In addition, at this point the number of polygons present in the scene begins to approach the program's memory limits. The tiled surface is not completely representative of a city: it has no east-west streets, but it serves adequately as an idealized suburban surface.

Table 5.1: Comparison between canyon subsets used in Sections 4.1 and 5.1.

Symbol    Parameter                                                          Canyon Subset (Section 4.1)   Canyon Subset (Section 5.1)
λb        plan area ratio of buildings (m² m⁻²)                              0.32                          0.34
λi        plan area ratio of impervious ground surfaces (m² m⁻²)             0.19                          0.21
λj        plan area ratio of impervious ground and building surfaces (m² m⁻²) 0.51                         0.55
λv        plan area ratio of vegetation (m² m⁻²)                             0.49                          0.45
V         normalized building volume (m³ m⁻²) *                              2.78                          2.65
λc        complete aspect ratio (m² m⁻²)                                     2.77                          3.61
zH        area-weighted building height (m)                                  6.30                          6.23
Wx        characteristic canyon width (m)                                    33.3                          33.3
zH/Wx     canyon height to width ratio                                       0.18                          0.18
Ly        characteristic along-canyon inter-building spacing (m)             3.23                          3.23

* Does not include vegetation structure.

5.2 Planar Sensor View

A planar sensor is a basic simulation of an airborne or satellite sensor, approximating a geometric pinhole camera model. In this simulation we ignore lens distortion and atmospheric effects, concentrating instead on the effect of multiple view directions. By simulating a sensor at multiple locations and orientations, the effect of thermal anisotropy on the measurements of TB by the sensor can be examined in great detail.

5.2.1 Setup and Rendering

In 3D Studio Max, a pinhole-type camera was placed facing downward at 10 000 m above the tiled surface. A field of view of 1° was used for the camera, with rendered pixel dimensions of 256 by 256 pixels. These parameters approximate a sensor pointing down at nadir with a spatial resolution of 30 m. The camera's position was then modified using frame-based animation so that each frame of a rendered animation corresponded to a different view direction. The camera was angled so that each frame faced the same centre point of the tiled surface, with differing positions. Camera-centre azimuths (αp) from 0° to 360° at an interval of 30° were used (Figure 5.3b), with off-nadir angles (βp) of 0° to 70° at intervals of 10° (Figure 5.3a). This resulted in a total of 96 different view directions.

Figure 5.3: Schematic representations of view directions for the planar sensor view. a) looking down the canyon, showing off-nadir angles. b) from above, showing varying azimuths.

For rendering, all image quality enhancements (antialiasing, anisotropic filtering, sub-pixel interpolation) were disabled in order to prevent the generation of artifacts and erroneous flux values from smoothing. Each frame was rendered to a separate RGBA PNG image (examples in Figure 5.4), which was then re-imported into MATLAB using the same indexing table as was used for export. In this way the irradiance values recorded at the simulated sensor, Lpx,p, could be recovered from the colour data for each pixel of the rendered images.
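The recovery of per-pixel irradiance from a rendered frame, together with the averaging and inversion to a brightness temperature described in the next paragraph (Equation 5.2), could be sketched in MATLAB roughly as follows. The file name, bit packing and lookup-table format are hypothetical and mirror the export sketch above; they are not documented details of the thesis workflow.

    % Minimal sketch (hypothetical names): recover per-pixel irradiance from a
    % rendered RGBA frame via the export lookup table, average it with equal
    % pixel weighting, and invert the Stefan-Boltzmann law (Equation 5.2).
    sigma = 5.67e-8;
    S = load('Lpx_lookup.mat');                    % vals: index -> flux (W m-2)

    [rgb, ~, alpha] = imread('frame_013.png');     % one rendered view direction
    idx32 = bitshift(uint32(rgb(:,:,1)), 24) + ...
            bitshift(uint32(rgb(:,:,2)), 16) + ...
            bitshift(uint32(rgb(:,:,3)),  8) + uint32(alpha);   % alpha assumed uint8

    L_px_p = S.vals(idx32);                        % per-pixel irradiance map
    L_p    = mean(L_px_p(:));                      % equal pixel weighting
    T_B_p  = (L_p / sigma)^(1/4);                  % brightness temperature (K)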
This process was repeated for each timestep: radiance texture maps were exported from MATLAB and imported into 3D Studio Max, frames were rendered, and images were re-imported into MATLAB and converted to Lpx,p values. This resulted in a considerable database of camera views for multiple view directions over the entire diurnal course. Each of the images was then averaged (with equal pixel weighting) to recover an average radiance value Lp for each view direction (and each timestep). This radiance is then converted to a brightness temperature TB,p using Equation 5.2:

$T_{B,p} = \left( L_{p} / \sigma \right)^{1/4}$   (5.2)

Figure 5.4: Examples of planar sensor renderings for two different timesteps. Colours are relatively meaningless as they are not linearly mapped.

5.2.2 Comparison of Nadir Temperatures

We can compare TB,p to the complete surface temperature T0,C, as computed in Chapter 4 for each timestep. Figure 5.5 compares the value of TB,p measured by the sensor at nadir (TB,nad) to T0,C.

Figure 5.5: Diurnal course of T0,C compared to TB,nad.

In comparing T0,C with TB,nad, we find results following those in Voogt & Oke (1997). That is, T0,C in the daytime is lower than TB,nad, and is higher following sunset in a situation that favours strong radiative cooling (many surfaces with high sky view factors and clear skies, both of which are present in this situation). The largest magnitude difference is present near solar noon (solar altitude 43°); at this point the sensor sees mostly horizontal surfaces, roofs and ground surfaces, which tend to be warmer at this time due to material differences (see Chapter 4). As these facets make up the great majority of those captured by a sensor at nadir, the complete surface temperature will be lower because it also includes vertical facets. The reverse is true at night, where wall surface temperatures are likely to be higher (see Chapter 4, Figures 4.1, 4.3) than horizontal surface temperatures, in particular those of roofs, due to the low thermal admittance of roofs resulting from their poor conductance to the rest of the building structure. Again, these make up a large proportion of the sensor's view, while walls are accounted for in T0,C.

Figure 5.5 is reproduced in tabular form in Table 5.2. A maximum difference of -2.2 K is observed at 13:30, and a minimum deviation of -0.042 K is observed at 10:30. The root mean squared error (RMSE) of the deviations is 1.2 K. The results can also be compared to observations of Roberts' (2010) scale model, which also compared T0,C to TB,nad over a diurnal course (canyon zH/Wx of 0.48). The maximum T0,C - TB,nad difference is found at solar noon and has a value of around -2.5 K, which is similar to this study (-2.2 K). T0,C - TB,nad differences in Roberts are lower than +1 K in the night-time situation, which is similar to values found here (+0.9 to 1.5 K).

Table 5.2: Values of TB,nad and T0,C as calculated from the 3D model and the simulated sensor views.
Time     TB,nad (K)   T0,C (K)   T0,C - TB,nad (K)
13:30    315.1        312.5      -2.2
14:30    313.9        313.2      -0.68
15:30    312.8        312.4      -0.43
16:30    311.0        310.8      -0.15
17:30    306.8        307.1       0.29
18:30    300.7        302.0       1.2
19:30    294.7        295.9       1.2
20:30    291.7        293.1       1.4
21:30    290.4        291.9       1.4
22:30    289.6        291.0       1.5
23:30    289.1        290.4       1.3
00:30    287.8        289.2       1.5
01:30    287.7        288.9       1.4
02:30    286.8        288.2       1.4
03:30    286.2        287.1       0.92
04:30    285.2        286.5       1.3
05:30    284.0        285.3       1.3
06:30    283.4        284.9       1.5
07:30    284.6        286.1       1.5
08:30    290.4        292.0       1.6
09:30    295.8        296.7       0.87
10:30    303.3        303.2      -0.42
11:30    308.3        308.0      -0.32
12:30    312.6        311.6      -1.1

The large number of view directions simulated allows the T0,C - TB,p difference to be examined in additional detail from various sensor positions. This deviation is plotted in polar form in Figures 5.6 to 5.9. In these plots, each pixel represents a temperature value for a view direction as plotted by azimuth from geographic north (αp) and off-nadir angle (βp) of the sensor. At the centre of the plot lies the value at nadir. Values lying between the 96 rendered view directions per graph are interpolated linearly.

5.2.3 Comparisons of Planar Sensor Temperatures

Figure 5.6: Polar plots of simulated airborne sensor brightness temperature versus computed complete surface temperature from 13:30 PST to 18:30 PST on September 14th. Each pixel in the plot represents a temperature difference value for a specific combination of sensor βp and αp. The position of the sun is represented by a cross when above the horizon.

In the daytime situation overall, TB,p is higher than T0,C, particularly around solar noon. The effects of anisotropy are very visible, with a hotspot of higher TB,p following the solar position. When the position of the sun is close to the direction of the sensor, the highest deviations are observed (the solar position is represented by a cross in the figure). In the plot for 13:30 the highest deviation is found at αp = 245°, βp = 60°, where the maximum number of higher temperature surfaces (roofs and road surfaces) is visible to the sensor. As roofs are the warmest at this point, any view direction that decreases the view factor of roofs will decrease the measured temperature. We see this behaviour at αp = 30°, 150°, 210° and 330°, where the sensor sees the area between buildings. Warmer temperatures are observed from true north and south as the view factor of roads is very high in these regions: they are not as warm as roofs, but are warmer than north- and south-facing walls. From the east, temperatures are lowest as the view is primarily composed of east-facing walls, which are shaded due to the solar angle.

This pattern of higher visible TB,p from the south persists until the sun moves below the horizon (by 17:30), at which point the difference between western and eastern facets begins to be reduced. The highest temperature differences are found when looking from the south, with recently heated southern walls, south-facing roofs and warm roads comprising most of the view. At 17:30, the value of TB,p approaches T0,C throughout most of the modeled view. By 18:30, most modeled views show a TB,p lower than T0,C. The south-facing region remains warm due to residual heat from illumination, while the distribution of TB,p values from other view directions remains reasonably constant, approximately 1.2 K cooler than T0,C.
Another warm zone is visible from the north, where the view factor of road surfaces is large. As these surfaces are warmer in the evening, this will increase observed TB,p from these directions.

Moving into the nighttime situation from 19:30 to 00:30 (Figure 5.7), the continued decay of the daytime hotspot is evident until 21:30. Until this point, views from the south continue to be consistently warmer, underestimating T0,C by only 0.5-1.0 K. In most other regions of the plots, there is a consistent pattern of underestimation of T0,C by TB,p of between 1.0 and 2.2 K. This is to be expected as anisotropy will be lower at night. After the hotspot disappears, most of the view directions show values in this range. There are one or two anomalies during this period. The highest temperature differences (coolest values of TB,p) are seen when looking from the east and west at low elevations. In these regions the sensor has significantly reduced view factors of roads and warmer ground areas, and sees almost nothing but walls and roofs. Because of the low temperatures of rooftops at night (see Chapter 4, Figures 4.2, 4.3), the sensor will record lower temperatures at these points (up to 2.1 K lower at 22:30). The persistent effects of warmer concrete road surfaces are also visible as hotspots when view directions are near αp = 0° and 180°, slightly increasing TB,p.

Figure 5.7: Polar plots of simulated airborne sensor brightness temperature versus computed complete surface temperature from 19:30 PST on September 14th to 0:30 PST on September 15th. Each pixel in the plot represents a temperature difference value for a specific combination of sensor βp and αp.

Another anomaly occurs at 23:30, when temperature differences suddenly decrease over the plot. This can also be seen in Figure 5.5: TB,nad shows a change in its rate of decrease here. Examining the detailed breakdowns of surface temperatures in Chapter 4 can shed some light on this. The increase in nadir-region temperature at 23:30 appears to be correlated with small spikes in Twall,E, Twall,S, and Troof. Of these, the roof temperature is likely most significant as it comprises most of the view near nadir. This change manifests itself primarily on the metal roofs as well, suggesting some type of variation in the cooling behavior of metal roofs in the canyon. This is supported by the shape of the hotspot – the decrease in TB,p - T0,C is largest to the east and west, regions where roofs dominate the sensor view.

Late-night situations show a similar pattern to the previous six hours. Deviation from T0,C remains lower than 2 K in most situations and some remnants of the hotspot anomaly continue to persist. At 3:30 there is a significant decrease in the deviation between TB,p and T0,C, particularly in simulations made near nadir and to the east and north, in a similar fashion to the anomaly at 23:30. From Figure 5.5 it can be seen that this is not due to any significant change in TB,p, but appears to be because of a sudden dip in T0,C, which reduces the difference between the two. Examining Figure 4.6 for this timestep reveals a sudden drop in road surface temperature at 3:30; as the road surfaces are a major component of T0,C, this is likely to be the prime candidate for the drop in T0,C, though the reason for the drop itself is unclear.
Moving on to the morning situation, we see another very uniform hour at 7:30, which is very similar to the late-night plots. However, the beginning of the daytime hotspot is just visible when looking from the east at east-facing surfaces, and from the west to a lesser extent. This hotspot increases in intensity by 8:30 and causes very large TB,p to T0,C differences of -2.5 K when looking from the west. From 9:30 to 12:30 the daytime hotspot makes a reappearance with a slight lag from the sun position (approximately 1 hour). The reappearance of distinct cooler and warmer features at various αp (30°/150°/210°/330° and 0°/90°/180°/270°) is also observed as roofs and road surfaces begin to warm to higher temperatures than walls and other surfaces. Overall, by 11:30 most areas of the plot show that TB,p is now overestimating T0,C: a similar situation to the afternoon. Nearing solar noon, this overestimation is greater than 1.0 K.

Figure 5.8: Polar plots of simulated airborne sensor brightness temperature versus computed complete surface temperature from 1:30 PST to 6:30 PST on September 15th. Each pixel in the plot represents a temperature difference value for a specific combination of sensor βp and αp.

Figure 5.9: Polar plots of simulated airborne sensor brightness temperature versus computed complete surface temperature from 7:30 PST to 12:30 PST on September 15th. Each pixel in the plot represents a temperature difference value for a specific combination of sensor βp and αp. The position of the sun is represented by a cross when above the horizon.

Throughout all simulations, we find overestimations of T0,C by TB,p of up to +2.9 K (17:30) and underestimations by up to -2.7 K (8:30). These are plotted in Figure 5.10.

Figure 5.10: Deviation of all simulated view direction temperatures from T0,C by timestep. Boxplot shows minimum, maximum and 95% intervals.

The data can also be used to compute the anisotropy (the maximum temperature difference over each simulation), shown in Figure 5.11. This plot shows expected behaviour following observational results from Voogt & Oke (1997), with high anisotropy in the daytime (up to 3.5 K) and low anisotropy at night (< 1.0 K). The trend also follows that of the residential neighborhood in Voogt & Oke (1998), which showed higher differences in measured brightness temperatures from differing view directions in morning and evening situations (10:00 and 5:00) compared to the midday situation (14:00). Magnitudes are similar as well, though the simulated Elgin Street anisotropy is lower by approximately 1.5 K at 10:00 and 5:00. Both of these studies took place in Vancouver with similar urban structure, so agreement is expected and supports the data in this study. Additional measurements from Marseille, France in Lagourade et al. (2004) also follow this trend, with maximum anisotropy during the morning (8:00 to 10:00). However, anisotropy is much larger (up to 10.5 K) in that study, likely due to the different urban form and fabrics as well as different thermal sensor types. The deviation plotted in Figure 5.10 follows the pattern in Figure 5.11 quite closely, further indicating that large deviations between T0,C and TB,p are due to anisotropic effects.

Figure 5.11: Anisotropy (maximum difference between temperature values over all simulations for a timestep) plotted over the entire diurnal course.
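The per-timestep anisotropy (Figure 5.11) and the deviation envelope (Figure 5.10) follow directly from the simulated view-direction temperatures; a minimal MATLAB sketch is given below, with hypothetical array names and an assumed layout of twelve azimuths by eight off-nadir angles by 24 timesteps.

    % Minimal sketch (hypothetical names): anisotropy and deviation statistics.
    % TB_p : 12 x 8 x 24 array of brightness temperatures (azimuth x off-nadir x time)
    % T0C  : 1 x 24 vector of complete surface temperatures
    nT = size(TB_p, 3);
    anisotropy = zeros(1, nT);
    devRange   = zeros(2, nT);
    for t = 1:nT
        views = TB_p(:,:,t);
        anisotropy(t) = max(views(:)) - min(views(:));            % as in Figure 5.11
        devRange(:,t) = [min(views(:)); max(views(:))] - T0C(t);  % envelope, Figure 5.10
    end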
5.3 Hemispherical View

A downward-facing radiometer, such as might be used to measure the upward longwave radiation flux L↑, defines a hemispherical view direction, affected by radiation from a wide field of view extending to the horizon with a cosine response (Oke, 1987). It is geometrically equivalent to a pinhole camera with an extremely wide-angle lens providing a 180° field of view.

5.3.1 Setup and Rendering

In 3D Studio Max, another pinhole-type camera with a field of view of 45° was placed facing a reflective hemispherical surface. This simulates a hemispherical sensor with a spherical lens and was set to have rendered pixel dimensions of 256 by 256, approximating a downward-facing radiometer. The camera was initially placed above the ground surface in the centre of the canyon (above the road surface), between house IDs 01 and 16. The camera's position was again modified using frame-based animation so that each frame of a rendered animation corresponded to a different position. A variety of positions were used to sample the repeated array of houses from the view of the simulated radiometer. The camera was positioned directly above several points in transects from west to east across the canyon: the road surface, the western house's (ID 16) lawn, the western house, the lane surface, the eastern house's (ID 01) lawn and the eastern house (schematic in Figure 5.12b). For each position, varying heights were chosen (Figure 5.12a), from 2 m to 10 m (at 2 m intervals) and from 10 m to 100 m at 4 m intervals (up to approximately z/zH = 10). This produced a cross-section of the representative canyon with many radiometer positions. Two additional cross-sections were also added, between house IDs 14 and 03 and IDs 05 and 12, with identical camera positions and heights (Figure 5.12b). A total of 270 positions were rendered per timestep. For these simulations, only four timesteps were used: 13:30 and 18:30 on the 14th, and 00:30 and 6:30 on the 15th. These 6-hour intervals give a good assessment of the canyon patterns over time and limit computational time.

Figure 5.12: Schematic representations of view directions for a hemispherical sensor. a) looking down the canyon, various heights above the ground surface. b) from above, showing varying locations along the repeated element.

As with the planar sensor simulations, all rendered visual quality enhancements were disabled in order to reduce the effects of any post-filtering. Each position was rendered to a single frame (examples in Figure 5.13) and exported to a PNG image. These were imported into MATLAB and the colours converted to pixel longwave values Lpx,h through the use of the original lookup table. A 256 by 256 array of Lpx,h values was then available for analysis for all 270 positions for all four timesteps. The imported Lpx,h arrays must then be corrected for the angle-of-incidence (cosine) response of a radiometer. An image-centric polar coordinate system was defined such that the angle from nadir, ρ, was zero at the centre of the image and increased to 90° towards the image edge.
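The cosine weighting described in the following paragraph (Equations 5.4 and 5.5) could be sketched in MATLAB roughly as follows. The linear mapping from image radius to ρ and the zero weight given to corner pixels lying outside the hemispherical image are assumptions of this sketch, not documented details.

    % Minimal sketch (hypothetical names): cosine weighting of a rendered
    % 256 x 256 hemispherical image of per-pixel irradiance L_px_h.
    n = 256;
    [c, r] = meshgrid(1:n, 1:n);
    rho = 90 * hypot(c - (n+1)/2, r - (n+1)/2) / ((n-1)/2);  % angle from nadir (deg)
    w = cosd(rho);
    w(rho > 90) = 0;                 % corner pixels outside the hemisphere (assumption)
    w = w / sum(w(:));               % Equation 5.4: normalized cosine weights
    L_h = sum(w(:) .* L_px_h(:));    % Equation 5.5: weighted hemispherical flux (W m-2)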
The angle from nadir was then gridded to a 256 by 256 pixel array (matching the size of the rendered sensor images), and cell weights wi were calculated by dividing the cosine of each cell by the sum of all cosine values (Equation 5.4):

$w_{i} = \cos\rho_{i} \Big/ \sum_{j} \cos\rho_{j}$   (5.4)

Each pixel was then weighted by the corresponding wi value and the entire array summed to recover a single value Lh for each simulated radiometer position (Equation 5.5):

$L_{h} = \sum_{i} w_{i}\, L_{px,h,i}$   (5.5)

The simulated Lh values can then be plotted against the simulation's coordinates (Figures 5.14 to 5.17).

Figure 5.13: Examples of hemispherical sensor renderings for two different timesteps. Colours are relatively meaningless as they are not mapped to particular values (note the apparent circular anomaly in the centre of the figure at 0:30).

5.3.2 Afternoon (13:30)

Figure 5.14: Recovered Lh from a simulated hemispherical sensor at various locations for 13:30, September 14th. Each plot represents a different slice along the canyon and each colour represents a different x-location.

At 13:30 there is a strong variation in Lh with horizontal location. Throughout the canyon, lawns start at lower temperatures, causing overall lower Lh over the lawns. The western lawn is likely warmer at y = 30 m and y = -30 m due to local anomalies, such as warmer nearby house walls or trees. For the eastern lawn, Lh increases with height, particularly after z/zH = 1.0, as at this point roofs begin to contribute to the sensor view; being warmer, they increase the measured Lh. Lh then begins to level out, converging near 500 W m⁻² at z/zH = 6. The western lawn shows a different pattern, with rapidly increasing Lh until z/zH = 2, at which point Lh begins to decrease with height until convergence near z/zH = 6.

The road and lane surfaces are of similar temperatures, with the road having slightly increased temperatures due to a larger surface area of asphalt being present and the higher sky view factor of this position. This causes Lh at low z/zH to be larger. The lane's lower sky view factor will reduce heating due to lower solar irradiance (shadowing). In addition, surfaces visible in the lane include recessed areas such as garages and porches which receive very low irradiance throughout the entire day; we would expect these to reduce Lh. Both road and lane locations show a decrease in Lh as z/zH increases until convergence near 500 W m⁻² at z/zH = 5.0. As with the lawn surfaces, this pattern is repeated at all three canyon slices with only minor differences.

Directly above the roof surfaces, Lh varies greatly but is often high, as the view factor comprises mostly hot roofs, which have been shown to be warmer throughout the experiment in the daytime. Profiles of Lh decrease rapidly when moving up, and again converge with the other profiles by z/zH = 7. The above-roof simulations show a large variability of Lh with location in the canyon (y = -30, 0, 30 m) compared to the other surface patches (lawns, street). A particular example can be seen over the western house at y = -30 m, where the profile's shape is reversed. This is due to varying roof materials from house to house, with asphalt roofs giving consistently higher Lh than metal roofs with low emissivities. The spatial variability of Lh profiles is significant at 13:30.
To measure a consistent temperature regardless of location, a hemispherical radiometer would have to be placed at z/zH = 7, or approximately 40 m, in order to remove the effects of heterogeneity and retrieve an upward longwave flux that is consistent with horizontal location. If locations are restricted, convergence to 500 W m⁻² occurs relatively rapidly above locations that are either road or lawn; a radiometer positioned at z/zH = 5 would record a flux within 1.5 W m⁻² of the convergent value.

5.3.3 Evening Transition (18:30)

Figure 5.15 shows temperature traces for 18:30 on the 14th. Above-lawn Lh values at this point are significantly lower compared to the afternoon as nighttime cooling begins; western lawns are almost 10 K cooler than eastern lawns. This is likely due to the shadowing effect of the setting sun, which would place western lawns in shadow first, starting the nighttime cooling process earlier. Above the western lawns an interesting pattern of Lh increase with height is observed, with Lh increasing until z/zH = 1.5, as roofs and walls remain warm at this point and have moved into the radiometer's field of view. Past this point, Lh is reduced as the sensor begins to see more cool surfaces (roofs) in its central, highly weighted field of view. The eastern lawns do not show this effect. Both curves converge between z/zH = 4 and 5 around 465 W m⁻², depending on position along the canyon.

Figure 5.15: Recovered Lh from a simulated hemispherical sensor at various locations for 18:30, September 14th. Each plot represents a different slice along the canyon and each colour represents a different x-location.

Above road and lane surfaces, behavior matches that at 13:30, though the increase in Lh with height is more pronounced and persists for longer before converging. It is interesting to note that the view from above the lane does not converge until z/zH = 6.5, surpassing above-road Lh near z/zH = 3. This indicates that the areas around the lane (backs of houses, porch areas) stay warmer in general than areas in the centre of the canyon, likely due to the decreased sky view factor of these areas – they will cool slightly slower in the evening. This pattern repeats itself with varying y-coordinate.

Above-roof Lh values start out relatively high, though lower than at 13:30 as nighttime cooling has already begun. The variance between Lh above the east and west roofs is again strong as roof material varies. Depending on position in the canyon, significant changes in Lh with height can be observed. At y = 0 m, Lh over both roof surfaces decreases with height as ground surfaces enter the field of view, then increases again as height increases and the larger view of the radiometer averages many surfaces. This is in contrast to y = -30 m, where Lh over both roof surfaces increases consistently with height until it converges at 463 W m⁻². In this situation the roofs are consistently cooler than other surfaces, so any increase in height will bring other surfaces into view, increasing Lh. Convergence of all above-roof Lh values occurs in all cases at z/zH = 5.

In general, at 18:30, the height for optimal radiometer placement changes slightly. Convergence is achieved only near z/zH = 7 when taking all curves into account. However, convergence occurs as low as z/zH = 5 (32 m, with Lh = 464 W m⁻²) when certain curves are removed (lane in all cases, roofs at y = 0 m).
5.3.4 Late Night (00:30)

Figure 5.16: Recovered Lh from a simulated hemispherical sensor at various locations for 00:30, September 15th. Each plot represents a different slice along the canyon and each colour represents a different x-location.

At 00:30 on the 15th, the vertical profiles of Lh (Figure 5.16) show a similar shape to those at 13:30 and 18:30. Modeled above-lawn Lh remains the lowest of all surfaces, and the western lawn has distinctly lower above-lawn Lh. Modeled Lh increases immediately and rapidly with height above the two lawn positions as warmer night-time walls come into the field of view of the radiometer. For both profiles, Lh decreases with height above z/zH = 1.5 as roofs come to dominate the field of view. It is, however, interesting to note the variation between the profiles above the east and west lawns – though both profiles achieve peaks in Lh near z/zH = 1.5, above the eastern lawn the profile of Lh decreases more slowly with height. This is likely a remnant of afternoon heating delivering irradiance to the eastern row of houses for longer in the evening. Cooling of house façades and lawns on the western side of the canyon has a head start, so Lh over western lawn locations can decrease faster. Even with this difference, by z/zH = 6.0 the curves converge near 387 W m⁻². There is no significant variation in the observed profiles over lawns at the different horizontal slices.

Above roads and lanes, the vertical profiles of Lh differ from 18:30. Above-lane profiles no longer exhibit a slower decrease of Lh with height and stay lower than above-road profiles. The pattern of decrease remains similar to that at 18:30, with Lh increasing until z/zH = 1.0 and then decreasing to converge with the other profiles at z/zH = 7.0 and Lh = 387 W m⁻². There is no significant variation in the observed profiles over roads and lanes at the different horizontal slices.

As usual, above-roof Lh shows a varying course depending on position in the canyon. In general, however, the trend is less variable than during the daytime: differences between profiles over roofs of different materials are lower. Profiles of Lh above western houses show an increase with height until convergence with the other profiles near z/zH = 6.0 (32 m). The profile of Lh above the west house at y = -30 m is the only exception to this; it shows a decrease in Lh until z/zH = 3, then an increase until convergence near z/zH = 8.0. For this timestep, any of the profile locations at z/zH > 6.0 would be a strong candidate for radiometer placement if the effect of surface heterogeneity is to be negated.

5.3.5 Early Morning Transition (6:30)

The final timestep (6:30, Figure 5.17) shows similar trends to the two previous times. Above-lawn Lh profiles begin with the lowest Lh near the ground but increase rapidly with height towards roof level, then decrease until convergence near z/zH = 5 at Lh = 368 W m⁻². As with Figure 5.16, the profile of Lh above the eastern lawn decreases and converges more slowly at all canyon slices except y = 30 m. An identical pattern to 00:30 is observed when looking at the profiles of Lh above the road and above the lane. They begin at lower Lh, which then increases rapidly with height, peaking near z/zH = 1.25, then decreases slowly until z/zH = 6, at which point they converge with the other curves.
As with all other timesteps, roofs show a large variance with location depending on material. In this case, they converge rapidly by z/zH = 5.2 regardless of their starting point. For this timestep, radiometer placement for elimination of heterogeneity should again avoid the lane and be placed above any other surface, ideally above z/zH = 5.5.

Figure 5.17: Recovered Lh from a simulated hemispherical sensor at various locations for 6:30, September 15th. Each plot represents a different slice along the canyon and each colour represents a different x-location.

5.3.6 Impact on Radiometer Placement

Table 5.3: Heights of convergence zC for all profiles and timesteps. Profiles are averaged over all along-canyon y positions.

Location            13:30   18:30   00:30   6:30
Road centre          6.8     4.9     5.5    5.5
Lane centre          1.5     6.8     6.8    6.8
West lawn            3.7     4.3     4.9    4.9
West house roof      7.1     5.8     4.0    5.2
East lawn            5.5     3.7     3.7    3.7
East house roof      4.0     5.2     5.2    5.2

All profiles from all four timesteps show convergence of Lh with height. zC is defined as the height at which Lh converges with the value of Lh at z/zH = 9, averaged over all profiles for a timestep. Convergence in this case indicates that a particular profile's value is within 1.0 W m⁻² of the value at z/zH = 9.0. Table 5.3 summarizes the convergence heights relative to mean building height. In general we see that in order for all curves to converge at all timesteps, a height of 7.1 times zH is required. However, if a restriction is applied and the sensor is placed above a lawn, a maximum of 6 times zH is required and often a value of 5 suffices. The energy balance and geometric structure of these urban surfaces appear to create a more representative sample halfway in between the extreme canyon/lane and roof locations.

Table 5.4: RMSE of selected horizontal positions for increasing radiometer altitude. Road positions are not included.

z (m)   RMSE at 13:30 (W m⁻²)   RMSE at 18:30 (W m⁻²)   RMSE at 00:30 (W m⁻²)   RMSE at 6:30 (W m⁻²)
8       20.2                    14.9                    11.2                    9.2
12      14.9                     9.3                    11.1                   10.8
16       9.6                     6.5                     8.5                    8.4
20       6.2                     3.8                     5.8                    5.7
24       4.4                     2.6                     3.8                    3.8
28       3.4                     1.8                     2.5                    2.5
32       2.8                     1.5                     1.6                    1.7
36       2.3                     1.1                     1.1                    1.2
40       1.8                     0.83                    0.78                   0.86
44       1.4                     0.56                    0.62                   0.68
48       1.1                     0.46                    0.44                   0.52
52       0.85                    0.36                    0.34                   0.41
58       0.53                    0.32                    0.28                   0.34

The RMSE of all Lh values for selected horizontal positions at varying altitudes can also be computed (Table 5.4, excluding over-road locations due to the low feasibility of placing a radiometer there). The RMSE decreases to below 1.0 W m⁻² at all timesteps by 52 m (z/zH = 8). In the night-time situation, this occurs by 40 m (z/zH = 6.1). These values are similar to those found by looking at the height of profile convergence, but they do not take into account preferential selection of curves (selecting the least variable and most rapidly converging profiles). The differences between horizontal positions shown here are much larger than Roberts (2010) found using a scale model of the canyon. This indicates that the large variation present in upwelling longwave radiation with horizontal location is strongly driven by material and surface types, as the scale model had low material variation. A simulation done by Hénon et al. (2011) using the SOLENE model for a small urban fragment in Marseille with increased detail found large differences (up to 20% of the value of L↑) between horizontal locations at z/zH = 1.5. That study found that these differences were insignificant at z/zH = 2.5.
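For reference, the convergence-height and RMSE calculations summarized in Tables 5.3 and 5.4 can be sketched in MATLAB as follows; the variable names are hypothetical, and the choice of the per-height cross-position mean as the RMSE reference is an assumption rather than a documented detail.

    % Minimal sketch (hypothetical names): convergence height z_C and spread of
    % L_h across horizontal positions at each altitude.
    % Lh : nZ x nPos matrix of simulated fluxes; z : nZ x 1 heights (m); zH = 6.23
    ref = Lh(end, :);                               % values near z/zH = 9
    zC  = nan(1, size(Lh, 2));
    for p = 1:size(Lh, 2)
        within = abs(Lh(:, p) - ref(p)) <= 1.0;     % within 1 W m-2 of the top value
        k = find(~within, 1, 'last');               % last height still outside the band
        if isempty(k), zC(p) = z(1)/zH; else, zC(p) = z(k+1)/zH; end
    end
    rmseByHeight = std(Lh, 1, 2);                   % spread across positions per height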
The z/zH = 2.5 reported by Hénon et al. (2011) is a lower altitude than that found for the modeled Elgin Street canyon, and the difference may be due to the greatly increased building density of Marseille compared to the study area as well as material differences (no large lawns). Increased density reduces the view factor of walls, and the lack of lawns changes the thermal properties of the ground surface significantly.

Figure 5.18: Comparison of TB,nad, TB,h, and T0,C for the four simulated timesteps.

It is helpful to express the modeled Lh as an apparent brightness temperature (using the Stefan-Boltzmann law with an emissivity of 1.0) and compare this hemispherical brightness temperature TB,h at zC with T0,C and TB,nad for the four timesteps examined (with TB,h being averaged at zC over all 18 profiles). In general, TB,nad appears to be a better estimator of T0,C than TB,h. The RMSE for TB,nad - T0,C is computed as 1.4 K, while the RMSE for TB,h - T0,C is higher at 1.8 K (Figure 5.18). It should be noted that over the entire time series (24 hourly values), the RMSE for TB,nad - T0,C was lower at 1.2 K, so it is possible that the RMSE for TB,h - T0,C would also be lower if calculated for all 24 timesteps.

5.3.7 Comparison to In-Situ Measurements

Figure 5.19: Comparison between longwave radiation Lh incident at the simulated sensor and simultaneous measurements of L↑ from Sunset Tower. Lh values are averaged over all y-positions for above-lawn x-positions.

Taking the modeled Lh at the least variable positions (that is, above the east and west lawns, averaged over all y-positions), the received longwave radiation can be calculated for a height of 28 m. This matches the height of Sunset Tower, where direct measurements of L↑ were made during the field experiment (see Section 2.2.3). This allows a comparison between simulated values and independent measurements, although in a different location in the same neighborhood (Figure 5.19). Though the modeled Lh data show a pattern matching the upward radiation measured at Sunset Tower, the magnitude of Lh is larger than L↑ during the daytime (by 22 W m⁻²). At night, the values are closer, particularly at 00:30, where the deviation is only -3.5 W m⁻².

There are several possible reasons for the deviations between the actual measurements of L↑ on the tower and the modeled Lh. No atmospheric correction was performed here; at distances of 60 m in the canyon, the MODTRAN simulations performed in Chapter 3 indicate a possible change of up to 8 K when the surface temperature is high, as would be the case during the day. Though we expect shorter distances here, an effect is still possible. The wavelength sensitivity of the pyrgeometer used at Sunset Tower is significantly wider than that of the thermal scanner, and it would therefore show increased sensitivity to atmospheric effects, due in particular to water vapour absorption. This would reduce the L↑ measured at the instrument and account for part of the difference between modeled and actual values. Another possible explanation for the difference, however, lies with the very nature of the urban surface: its heterogeneity. It is likely that the urban form and materials around the base of Sunset Tower do not match the exact configuration of Elgin Street (for example, Sunset Tower is in the corner of a rectangular power substation, so approximately 25% of the radiative source area is a gravel surface). This would cause changes in the emitted longwave radiation, as gravel is not represented at all in the Elgin Street canyon.
Additionally, the decreased visible roof area would lower surface temperatures and therefore L↑ emissions in the daytime near Sunset Tower as compared to Elgin Street, which could also account for the difference.

Chapter 6
Conclusions

6.1 Key Findings and Contributions

In this thesis, a novel methodology was developed and successfully applied to simulate the measurement bias of different remote sensor systems with preferential views (nadir, hemispherical, oblique) when inferring surface temperatures of a convoluted, three-dimensional urban surface. Detailed surface temperature measurements were recorded over a 24 h cycle using a thermal camera. The camera was located on a tower in an urban street canyon in Vancouver, BC, Canada and scanned from 0° to 360° horizontally and at 66.2° and 45° vertically using a pan and tilt device. The observations were used to generate a panoramic time series of thermal images of the street canyon. Aspects of micrometeorology, computer vision and computer graphics were combined to project the panoramic thermography data onto a detailed, photogrammetrically derived 3D model of the urban structure surrounding the tower, which was then corrected for atmospheric and emissivity effects. Facets of the 3D model that were not seen by the thermal camera were statistically gap-filled with data from other areas based on selected predictors (sky view factor, material, orientation of facet). The resulting three-dimensional model allowed the computation of the complete surface temperature in great detail and its evolution over the 24 h cycle. Additionally, ensemble surface temperatures of various facet types and materials were computed.

Overall, facet surface temperatures followed expected patterns in space and time, and the complete surface temperature followed the trends demonstrated by Voogt & Oke (1997). The surface temperatures of roofs were the highest of all facets in the daytime and the lowest at night; the ground surface followed an opposite trend. The wall temperatures of buildings in the canyon fell in between, with variations within the category dependent on facet orientation. These variations were as expected considering the study area's location, with south-facing wall temperatures highest during the day, and west- and east-facing walls highest during the evening and morning respectively. Complete surface temperatures were approximated most closely by average wall temperatures, falling generally between roof and ground temperatures. Some variation with material was also observed: as expected, materials with higher thermal admittance, such as walls, took longer to warm up and retained their heat for longer periods of time.

Based on the 3D dataset, views from an airborne or satellite sensor and from a hemispherical radiometer were successfully simulated at varying locations and orientations in and above the canyon. Simulated brightness temperatures for preferential views were recovered and compared to the complete surface temperature T0,C. Simulated brightness temperature measurements from a sensor with a narrow field of view at nadir, TB,nad, were compared to the computed complete surface temperature over the diurnal course, and results consistent with Voogt & Oke (1998) were found: the complete temperature was exceeded during the day and the opposite was found at night. Deviations between -2.2 K (day) and +1.6 K (night) were found between TB,nad and T0,C.
For simulated off-nadir view directions, the deviation ranged from +2.9 to -2.7 K. This increase is due to anisotropic effects, which were found to be highest in the daytime (particularly at sunrise and sunset, up to 3.5 K), consistent with the literature. Thermal anisotropy was similar in form to, but lower in magnitude than, that measured over a residential area in Vancouver in Voogt & Oke (1998) (near 8 K in their study). The same pattern of strong east-west anisotropy following the street canyon orientation was reproduced in the current study.

The hemispherical sensor simulations showed that the placement of a hemispherical radiometer strongly influences the measured outgoing longwave radiation flux density L↑ and also the recovered brightness temperature. Variation between horizontally separated locations inside and above the simulated canyon was very large (up to 120 W m⁻²). However, it was found that the horizontal variation was reduced when the radiometer is placed at a sufficient height above the ground surface, where L↑ measured at different horizontal locations converges at approximately 5 times the building height. Placement of radiometers at 5 times building height would eliminate variation with horizontal position for this canyon. Additionally, the simulations showed that the temperature inferred from L↑ by a radiometer at the convergence level is a poorer predictor of the complete urban surface temperature than the temperature derived from an airborne sensor at nadir (RMSE of 1.8 K as compared to 1.4 K). This suggests that radiometer measurements should not be used for estimating the complete surface temperature in an urban situation.

6.2 Future Work and Outlook

The approach taken in this thesis could easily be extended to other urban morphometries, geographic locations and situations. The equipment needed is relatively simple: a thermal camera and a DSLR camera comprise the two primary instruments. The most exotic data source is the LiDaR point cloud, though cities increasingly perform regular LiDaR flyovers, improving its availability. Supporting meteorological data (air temperature, relative humidity, longwave radiation flux) are readily available for many large cities or can easily be collected with basic instrumentation available at standard climate stations. This allows the extension of the general method to other locations in order to examine the surface temperature distributions in different urban morphometries and quantify the anisotropy. With additional sites and geometries, it could be possible to develop correction factors to allow estimation of T0,C from TB,p. However, it is likely that these factors would be unique to a particular geometry and might only be applicable in the neighborhood/city for which they were created. The methodology can also be applied to thermal imagery at coarser resolutions, such as from airborne platforms. The lower resolution also implies that less detail would be required in the 3D model of the urban surface, with LiDaR-level detail being potentially sufficient. The process of simulating new view directions using computer graphics techniques is neither particularly difficult nor computationally demanding, so any sensor or view geometry could be simulated.
The basic technique of modeling unseen building facets with data from visible facets and a detailed 3D model could also potentially be extended to other regions of the electromagnetic spectrum: for example, hyperspectral and visible regions for determination of building albedo in 3D.

The dataset can also be used to evaluate and assess current single-layer or multi-layer urban canopy models, and specifically the microscale surface temperatures of the ground, wall and roof facets. The comprehensive nature of the dataset, covering all facet types in a canyon and many different materials, allows for detailed intercomparisons of ensemble temperatures at a level not possible to date. Additionally, the reasonably simple (compared to many urban surfaces) geometry of the Elgin Street canyon makes conceptual model simulations simpler, as there is less need to deal with very complex urban structures and vegetation. TUF-3D/TUF-2D (Krayenhoff & Voogt, 2007) is an ideal candidate for this type of comparison as it is designed specifically for sub-facet temperature simulations. Other possible models include OSIrIS (Poglio, Mathieu-Marni, Ranchin, Savaria, & Wald, 2006), which is designed to simulate thermal images of buildings, and DART (Gastellu-Etchegorry, Martin, & Gascon, 2004), designed to simulate satellite thermal images. The current dataset from Elgin Street allows for greater detail and higher resolutions than most airborne surface temperature scans or in-situ measurements with IRTs such as those performed during BUBBLE (Rotach et al., 2005).

Additionally, the detailed material, geometric and temperature databases could be used to examine heat storage in additional detail. It would be possible to calculate the diurnal temperature range of each pixel in the canyon and link it to sky view factor, material and thermal admittance. Such a dataset could be valuable for the study and modeling of heat storage and the energy balance in an urban environment.

6.3 Methodological Improvements

The primary difficulty encountered during execution of the methodology was the lack of knowledge of the exact camera position. This contributed greatly to error in the projection of temperatures onto the 3D surface model, and is a key candidate for improvement. The single most useful change to the experiment would have been the inclusion of an inclinometer and a GPS antenna on the tower itself. By using a DGPS with a simultaneously operated base station, the position of the camera could be known to within a small (< 0.05 m) window of precision, and swaying of the tower under changing wind directions could be determined. This assumes results similar to those acquired with the Trimble DGPS unit used in this study (see Chapter 2). The attached inclinometer would have allowed the camera tilt angle to be reported for each frame. Knowing these two parameters would have eliminated the camera position interpolation stage, which causes much of the projection error. The only parameter remaining to be estimated would be the frame pan angle, which was calculated reasonably well using computer vision methods. With projection error reduced, the amount of incorrectly attributed area would also be reduced, minimizing the areas that needed to be interpolated to create the finished model. Though the photogrammetric method used for building model construction was effective, some additional improvements could have been made.
By collecting the photogrammetric data at the same time as the thermal data, modeling of vegetation could have taken place. Small changes to structure and vegetation in the canyon over time would have been nonexistent, and the model would therefore require less post-processing and interpolation. This would improve accuracy in general. Additionally, Photomodeler's texture mapping functions could have been used, which allow the projection of the visible images used in photogrammetry onto the 3D surfaces created, producing a visible spectrum texture map. This would have greatly improved the precision and detail available for material attribution and thus increased the precision of the emissivity corrections.

An important source of uncertainty for the 3D model was the imprecision of the ground surface, acquired via airborne LiDaR. This data was shown to have both systematic and random error (see Chapter 2), and is not easily improved without a lower altitude flyover. Improving the accuracy of the data would improve the accuracy of building placement in the canyon and the accuracy of any thermal camera data projected onto the ground surface. Using DGPS surveys combined with photogrammetry, a more accurate model of the ground surface could have been created. However, this would involve a considerable time investment, as many more DGPS points would have to be collected, and it would also likely involve additional requirements to access private property to locate specific features. More photogrammetric points would also have to be defined, and additional photos at strong angles to the ground surface would have to be acquired. As strong angles to the ground surface are most easily acquired from above, this would mean more tower-mounted photo positions.

For additional 3D precision, a ground-based LiDaR instrument could have been used to generate the 3D model instead of photogrammetry. This type of instrument functions in a similar fashion to an airborne LiDaR, but is mounted in a rotating configuration on a tripod or similar platform. This allows collection of laser reflections from many more surfaces than airborne sensors due to the orientation of the scanner. Reflections collected in this way include detailed vegetation surfaces and the ground surface, and show increases in precision over airborne coverage due to the lower surface-to-sensor distances (Aronoff, 2005). Such data is generally paired directly with a visible spectrum camera to automatically generate textured 3D models. Some reconstruction would likely still be necessary due to the LiDaR unit's limited field of view; even mounted on a moving vehicle, some sections of the street would still be invisible to the LiDaR unit. Such a model would, however, likely be more detailed and more accurate than the combination of photogrammetric and other methods used in this study.

Works Cited

Allwine, K. J., Leach, M. J., Stockham, L. W., Shinn, J. S., Hosker, R. P., Bowers, J. F., & Pace, J. C. (2004). Overview of Joint Urban 2003 – an atmospheric dispersion study in Oklahoma City. In Preprints, Symp. on Planning, Nowcasting and Forecasting in the Urban Zone, Seattle, WA, Amer. Meteor. Soc., CD-ROM J (Vol. 7). Retrieved from https://ams.confex.com/ams/84Annual/techprogram/paper_74349.htm

Aronoff, S. (2005). Remote Sensing for GIS Managers. Redlands, CA: ESRI Press.

Brown, D. C. (1966). Decentering distortion of lenses. Photogrammetric Engineering, 7, 444–462.
Changnon, S. A. (1978). METROMEX issue. Journal of Applied Meteorology, 17, 565–716.

Christen, A., Johnson, M., McKendry, I., & Oke, T. R. (2012). Instrumented set-up and water monitoring in the neighborhood of the tower. Vancouver-Sunset Urban Climate Research Tower. Retrieved from http://www.geog.ubc.ca/~achristn/infrastructure/sunset.html

Christen, A., Oke, T. R., Grimmond, C. S. B., Steyn, D. G., & Roth, M. (2013). 35 years of urban climate research at the "Vancouver-Sunset" flux tower. Invited article in FluxLetter – Newsletter of Fluxnet, 5(2).

Christen, A., Oke, T. R., & Voogt, J. A. (2011). Seasonal controls on urban-rural differences of the surface radiation budget. Presented at the CMOS Congress 2011, Victoria, Canada.

Christen, A., & Voogt, J. A. (2009). Linking atmospheric turbulence and surface temperature fluctuations in a street canyon. Presented at the 7th International Conference on Urban Climate, Yokohama, Japan.

Christen, A., & Voogt, J. A. (2010). Inferring turbulent exchange processes in an urban street canyon from high-frequency thermography. Presented at the 19th Symposium on Boundary Layers and Turbulence, Keystone, Colorado, USA.

Chudnovsky, A., Ben-Dor, E., & Saaroni, H. (2004). Diurnal thermal behavior of selected urban objects using remote sensing measurements. Energy and Buildings, 36(11), 1063–1074.

Cleugh, H., & Oke, T. R. (1986). Suburban-rural energy balance comparisons in summer for Vancouver, B.C. Boundary-Layer Meteorology, 36, 351–369.

Crawford, B., Christen, A., & Ketler, R. (2010). Processing and quality control procedures of turbulent flux measurements during the Vancouver EPiCC experiment (No. 1). University of British Columbia.

Cros, B., Durand, P., Frejafon, E., Kottmeer, C., Perros, P.-E., Peuch, V.-H., … Wortham, H. (2003). The ESCOMPTE Program: an overview. Atmospheric Research, 69(3-4), 241–279.

Eos Systems. (2011). PhotoModeler Product Manual.

FLIR Systems. (2004, October). ThermoVision A40M operator's manual.

FLIR Systems. (2011, May 10). A40 Spectral response curve.

Gastellu-Etchegorry, J. P., Martin, E., & Gascon, F. (2004). DART: a 3D model for simulating satellite images and studying surface radiation budget. International Journal of Remote Sensing, 25(1), 73–96. doi:10.1080/0143116031000115166

Goodwin, N. R., Coops, N., Tooke, R., Christen, A., & Voogt, J. A. (2009). Characterising urban surface cover and structure with airborne LiDAR technology. Canadian Journal of Remote Sensing, 35(3), 297–309.

Grimmond, C. S. B., & Oke, T. R. (2002). Turbulent heat fluxes in urban areas: Observations and a local-scale urban meteorological parameterization scheme (LUMPS). Journal of Applied Meteorology, 41(7), 792–810.

Heikkilä, J., & Silvén, O. (1997). A four-step camera calibration procedure with implicit image correction (pp. 1106–1112). Presented at the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico.

Hénon, A., Mestayer, P. G., Groleau, D., & Voogt, J. (2011). High resolution thermo-radiative modeling of an urban fragment in Marseilles city center during the UBL-ESCOMPTE campaign. Building and Environment, 46(9), 1747–1764. doi:10.1016/j.buildenv.2011.02.001
Howard, L. (1833). The Climate of London: Deduced From Meteorological Observations (Vols. 1-3, Vol. 1). London.

Hoyano, A., Asano, K., & Kanamaru, T. (1999). Analysis of the sensible heat flux from the exterior surface of buildings using time sequential thermography. Atmospheric Environment, 33, 3941–3951.

Klemm, O. (1995). Transport and distribution of air pollutants in Athens (Greece): aircraft measurements during MEDCAPHOT-TRACE. In H. Power, C. A. Moussiopoulos, & C. A. Brebbia (Eds.), Air Pollution III (Vol. 3). Computational Mechanics Publications.

Krayenhoff, S. E., & Voogt, J. A. (2007). A microscale three-dimensional urban energy balance model for studying surface temperatures. Boundary-Layer Meteorology, 123(3), 433–461. doi:10.1007/s10546-006-9153-6

Lagourade, J.-P., Henon, A., Moreau, P., Irvine, M., Voogt, J. A., & Mestayer, P. (2010). Modelling daytime thermal infrared directional anisotropy over Toulouse city centre. Remote Sensing of Environment, 114, 87–105.

Lagourade, J.-P., Moreau, P., Bonnefond, J.-M., Voogt, J. A., Solliec, F., & Irvine, M. (2004). Airborne experimental measurements of the angular variations in surface temperature over urban areas: case study of Marseille (France). Remote Sensing of Environment, 93, 443–462.

Lowe, D. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91–110.

Lowry, W. P. (1998). Urban effects on precipitation amount. Progress in Physical Geography, 22(4), 477–520. doi:10.1177/030913339802200403

Martilli, A., Clappier, A., & Rotach, M. (2002). An urban surface exchange parameterization for mesoscale models. Boundary-Layer Meteorology, 104, 261–304.

Masson, V. (2000). A physically-based scheme for the urban energy budget in atmospheric models. Boundary-Layer Meteorology, 94(3), 357–397.

Masson, V. (2005). Urban surface modeling and the meso-scale impact of cities. Theoretical and Applied Climatology, 84(1-3), 35–45. doi:10.1007/s00704-005-0142-3

Meier, F., & Scherer, D. (2012). Spatial and temporal variability of urban tree canopy temperature during summer 2010 in Berlin, Germany. Theoretical and Applied Climatology. doi:10.1007/s00704-012-0631-0

Meier, F., Scherer, D., Richters, J., & Christen, A. (2011). Atmospheric correction of thermal-infrared imagery of the 3-D urban environment acquired in oblique viewing geometry. Atmospheric Measurement Techniques, 4, 909–922.

Menut, L., Vautard, R., Flamant, C., Abonnel, C., Beekman, M., Chazette, P., … Toupance, G. (2000). Measurement and modeling of atmospheric pollution over the Paris area: the ESQUIF Project. Annales Geophysicae, 18, 1467–1481.

Nikiforiadis, F., & Pitts, A. (2003). 3D digital geometric reconstruction of the urban environment for daylight simulation studies (Vol. 8). Presented at the Eighth International Building Simulation Conference, Eindhoven, Netherlands.

Oke, T. R. (1982). The energetic basis of the urban heat island. Quarterly Journal of the Royal Meteorological Society, 108(455), 1–24.

Oke, T. R. (1987). Boundary Layer Climates (2nd ed.). New York: Routledge.

Oke, T. R. (1988). The urban energy balance. Progress in Physical Geography, 12(4), 471–508. doi:10.1177/030913338801200401

Oke, T. R., Jauregui, E., & Grimmond, C. (1999). The energy balance of central Mexico City during the dry season. Atmospheric Environment, 33(24-25), 3919–3930.
Poglio, T., Mathieu-Marni, S., Ranchin, T., Savaria, E., & Wald, L. (2006). OSIrIS: a physically based simulation tool to improve training in thermal infrared remote sensing over urban areas at high spatial resolution. Remote Sensing of Environment, 104, 238–246.
Roberts, S. M. (2010). Three-dimensional radiation flux source areas in urban areas (Ph.D. thesis). University of British Columbia, Vancouver.
Rotach, M. W., Vogt, R., Bernhofer, C., Batchvarova, E., Christen, A., Clappier, A., … Voogt, J. A. (2005). BUBBLE – an urban boundary layer meteorology project. Theoretical and Applied Climatology, 81(3-4), 231–261. doi:10.1007/s00704-004-0117-9
Schmid, H., Cleugh, H., Grimmond, C. S. B., & Oke, T. R. (1991). Spatial variability of energy fluxes in suburban terrain. Boundary-Layer Meteorology, 54, 249–276.
Shirley, P., Marschner, S., & Ashikhmin, M. (2009). Fundamentals of Computer Graphics. Wellesley, MA: AK Peters.
Small, C. (2005). A global analysis of urban reflectance. International Journal of Remote Sensing, 26(4), 661–681. doi:10.1080/01431160310001654950
Sprigg, W. A., & Reifsnyder, W. E. (1972). Solar-radiation attenuation through the lowest 100 meters of an urban atmosphere. Journal of Geophysical Research, 77(33), 6499–6508.
Statistics Canada. (2009, September 22). Population urban and rural, by province and territory. Retrieved December 9, 2010, from http://www40.statcan.gc.ca/l01/cst01/demo62a-eng.htm
Szeliski, R. (2011). Computer Vision: Algorithms and Applications. New York, USA: Springer.
Trimble Navigation. (2003, September). 5700/5800 GPS Receiver User Guide.
Tsai, R. Y. (1987). A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, RA-3(4), 323–344.
van der Laan, M., Tooke, R., Christen, A., Coops, N., Heyman, E., & Olchovski, I. (2012). Statistics on the built infrastructure at the Vancouver EPiCC experimental sites (No. 4). University of British Columbia.
van der Laan, M., Tooke, R., Christen, A., Coops, N., Kellet, R., & Oke, T. R. (2010). A LIDAR-based approach for anthropogenic heat release modeling by buildings at the urban neighborhood-scale. Presented at the 44th Annual CMOS Congress / 36th Annual Scientific Meeting of CGU, Ottawa, Canada.
Voogt, J. A., & Oke, T. R. (1997). Complete urban surface temperatures. Journal of Applied Meteorology, 36(9), 1117–1132.
Voogt, J. A., & Oke, T. R. (1998). Effects of urban surface geometry on remotely-sensed surface temperature. International Journal of Remote Sensing, 19(5), 895–920.
Voogt, J. A., Oke, T. R., Belair, S., Benjamin, M., Christen, A., Coops, N., … Wang, J. (2007). The Environmental Prediction in Canadian Cities (EPiCC) Network. Presented at the AMS Seventh Symposium on the Urban Environment, San Diego, USA.
Xue, Y., & Cracknell, A. P. (1995). Advanced thermal inertia modelling. International Journal of Remote Sensing, 16(3), 431–446. doi:10.1080/01431169508954411
"Graduate"@en . "The effect of preferential view direction on measured urban surface temperature"@en . "Text"@en . "http://hdl.handle.net/2429/43830"@en .