
ON THE USE OF MULTIPLE PLANETARY BOUNDARY LAYER PARAMETERIZATION SCHEMES FOR FORECASTING TEMPERATURE AND PRECIPITATION IN COMPLEX TERRAIN

by

TIMOTHY CHUN-YIU CHUI

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF BACHELOR OF SCIENCE (HONOURS) in THE FACULTY OF SCIENCE (ATMOSPHERIC SCIENCE)

This thesis conforms to the required standard

Dr. Roland Stull

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

MARCH 2016

© Timothy Chui, 2016

ABSTRACT

Turbulent exchanges of heat and moisture between the Earth and atmosphere are approximated in modern numerical weather prediction (NWP) models by planetary boundary layer (PBL) parameterizations. These parameterizations were originally formulated for flat terrain, even though they are used in weather models that can be applied anywhere in the world. A comparison of the deterministic forecast performance of different parameterizations has not previously been conducted in the complex, mountainous terrain of western Canada.

The focus of this research was to evaluate eight commonly used PBL parameterizations by verifying their point forecasts of hourly temperature and daily accumulated precipitation against surface observations at 31 stations in British Columbia and Alberta for one year. The verification showed that choosing the optimum parameterization depends mostly on the forecast variable and on the station, not on the complexity of the scheme's formulation. For the raw forecasts, the Mellor-Yamada-Janjić scheme (MYJ) had the best verification statistics on average for temperature, while the Medium-Range Forecast scheme (MRF) had the best verification on average for precipitation.

The forecasts were also combined using a variety of linear methods. An ensemble of PBL schemes inversely weighted by their root mean square error (RMSE) produced hourly temperature forecasts that were just as good as the best individual bias-corrected member at each station, while an equally weighted ensemble of linearly regressed precipitation forecasts had only a slightly larger error on average than the best individual precipitation forecast. Daily forecasts of temperature and precipitation, which are variables of interest to both the general public and industry, can thus be improved over the raw model output by the careful selection and combination of PBL parameterizations.

ACKNOWLEDGEMENTS

This bachelor's thesis was, to put it lightly, a lot of work, and one of the most challenging but rewarding things I have accomplished so far in my life. I would not have been able to get as far as I did without such an amazing support group, who put blood, sweat and tears into propping me up and guiding me on this incredible journey.

My gratitude goes first and foremost to my supervisor, Roland Stull. I could not have asked for a better supervisor. To this day, I am still not entirely sure what he saw in me to take me in as a research assistant, but I am forever grateful for his wonderful mentorship and his faith in my abilities as a developing scientist. Many thanks as well go to David Siuta, who provided me with the forecast output, inspiration and incredible guidance for this project. He taught me everything I know about forecast verification, ensemble forecasting and PBL schemes. However, I would not even understand what a PBL is without the tutelage of Phil Austin. I owe my newfound love for atmospheric physics (and programming!) to him.
I am also very grateful to everyone at the UBC Geophysical Disaster Computational Fluid Dynamics Centre for taking me under their wing and shaping me into the scientist I am today. In particular, I would like to thank Greg West for helping me obtain the observation data; Roland Schigas for teaching me how to work with the database (and for his amazing IT support in general); Thomas Nipen for his verification software COMPS; and Zhiying Li for introducing me to the concept of weighted ensembles and member selection.

Of course, this entire process would not have been possible without Mary Lou Bevier, whose amazing organizational abilities and attention to detail have made this thesis course a lot less painful than it could have been! Any leftover pain was divided among the thesis students, with whom I have had the honour of sharing this experience. Thanks as well to the Department of Earth, Ocean and Atmospheric Sciences for the monetary support, and for allowing me to be a part of its wonderful academic community. Finally, my family and friends: they have seen my ups, downs and turn-arounds throughout this past year. It is through their undying love, support and encouragement that I am where I am today. Words cannot describe all that I owe them.

TABLE OF CONTENTS

ABSTRACT
ACKNOWLEDGEMENTS
LIST OF ABBREVIATIONS
LIST OF FIGURES
LIST OF TABLES
1. INTRODUCTION
2. COMPARISON OF PBL SCHEMES
   2.1 Local Closure
      2.1.1 Mellor-Yamada-Janjić (MYJ)
      2.1.2 Mellor-Yamada-Nakanishi-Niino (MYNN)
      2.1.3 Grenier-Bretherton-McCaa (GBM)
      2.1.4 University of Washington (UW)
      2.1.5 Quasi-Normal Scale Elimination (QNSE)
   2.2 Nonlocal Closure
      2.2.1 Medium-Range Forecast (MRF)
      2.2.2 Yonsei University (YSU)
      2.2.3 Asymmetrical Convective Model Version 2 (ACM2)
3. METHODOLOGY
   3.1 Forecast Description
   3.2 Deterministic Verification Metrics
      3.2.1 Mean Absolute Error (MAE)
      3.2.2 Root Mean-Squared Error (RMSE)
      3.2.3 Correlation Coefficient
   3.3 Ensemble Construction Methods
      3.3.1 Equal Weighting (EW)
      3.3.2 Simple Linear Regression (SLR)
      3.3.3 Inverse-Error Weighting (IEW)
      3.3.4 Multiple Linear Regression (MLR)
4. RESULTS AND DISCUSSION
   4.1 Verification of PBL Schemes
   4.2 Evaluation of PBL Ensembles
5. CONCLUSION
REFERENCES
APPENDIX: LIST OF OBSERVATION STATIONS

LIST OF ABBREVIATIONS

ACM2   Asymmetrical Convective Model version 2
ARW    Advanced Research WRF
EW     Equal weighting
GBM    Grenier-Bretherton-McCaa
GFS    Global Forecast System
IEW    Inverse-error weighting
MAE    Mean absolute error
MLR    Multiple linear regression
MRF    Medium-Range Forecast
MYJ    Mellor-Yamada-Janjić
MYNN   Mellor-Yamada-Nakanishi-Niino Level 2.5
NCEP   National Centers for Environmental Prediction
NWP    Numerical weather prediction
PBL    Planetary boundary layer
QNSE   Quasi-Normal Scale Elimination
RMSE   Root mean square error
SCBL   Stratocumulus-capped boundary layer
SLR    Simple linear regression
TKE    Turbulence kinetic energy
UW     University of Washington
WRF    Weather Research and Forecasting Model
YSU    Yonsei University

LIST OF FIGURES

Fig. 1: Map of the forecast domains (Siuta et al. 2016).
Fig. 2: Zoomed-in map of western Canada, with the stations marked as red dots.
Fig. 3: Flowchart to apply the SLR method to the forecasts for each day.
Fig. 4: Flowchart to apply the IEW method to the forecasts for each day.
Fig. 5: Flowchart to apply the MLR method to the forecasts for each day, using the forward selection algorithm.
Fig. 6: Bar charts of (a) MAE, (b) RMSE and (c) correlation coefficient rankings for the raw temperature forecasts across all stations.
Fig. 7: Bar charts of (a) MAE, (b) RMSE and (c) correlation coefficient rankings for the raw accumulated precipitation forecasts across all stations.
Fig. 8: Time series of EW ensemble temperature forecasts and observations at Holland Rock, B.C.
Fig. 9: Time series of EW precipitation forecasts and observations at Holland Rock.
Fig. 10: Same as in Fig. 9, but the EW-SLR method is purely multiplicative.
Fig. 11: Same as in Fig. 8, but with the IEW temperature forecasts.
Fig. 12: Same as in Fig. 10, but with the IEW precipitation forecasts.
Fig. 13: Average weights for the IEW forecasts across all stations.
Fig. 14: Temperature forecasts for Holland Rock using MLR with all 16 members.
Fig. 15: Like Fig. 14, except the forward selection algorithm was applied to reduce over-fitting.
Fig. 16: Cumulative frequency of inclusion into temperature MLR forecasts, across all stations.
Fig. 17: Cumulative frequency of inclusion into precipitation MLR forecasts, across all stations.

LIST OF TABLES

Table 1: Tabulated summary of the prognostic equations for the first three statistical moments of momentum.
Table 2: Tabulated average temperature MAE and RMSE skill scores for each SLR forecast and domain, using the corresponding raw forecasts as reference.
Table 3: Tabulated average precipitation MAE and RMSE skill scores for each SLR forecast and domain, using the corresponding raw forecasts as reference.
Table 4: Best raw and SLR members for hourly temperature and accumulated precipitation at each station.
Table 5: Tabulated average temperature MAE and RMSE skill scores for each forecast combination method, using the best individual SLR forecasts at each station as reference.
Table 6: Tabulated average precipitation MAE and RMSE skill scores for each forecast combination method, using the best individual SLR forecasts at each station as reference.
1. INTRODUCTION

Accurate and timely weather forecasts are important for the general public and industry, who rely on weather information to make informed decisions (Katz and Ehrendorfer 2006; Miao and Russell 2015; Stull 2015). Effective decision-making can result in the mitigation of human and economic losses due to storm events (Thompson and Zucchini 1990), as well as the optimization of power production in green energy sectors such as the solar power industry (Alessandrini et al. 2015). One of the most common techniques used in weather prediction is ensemble forecasting, where output from multiple model runs is combined to forecast the state of the atmosphere and its uncertainty (Toth and Kalnay 1997; Cheung 2001; Wallace and Hobbs 2006). The uncertainty information allows decision-makers to set thresholds for action in anticipation of severe-weather scenarios (Thompson and Zucchini 1990; Stull 2015).

Errors in weather forecasts arise from many sources. One is poor description of the initial atmospheric state (Lorenz 1963). Another is the choice of physics parameterizations that are used to estimate variables that occur on scales smaller than the model grid spacing (Jacobson 2005; Skamarock et al. 2008; Stull 2015). One type of parameterization involves the planetary boundary layer (PBL), where atmospheric flows interact with Earth's surface (Stull 1988, 2015; Jacobson 2005). Although the PBL consists of only the bottom 10 m to 2 km of the troposphere, turbulence parameterizations used in the PBL are also used for turbulent transport elsewhere in the atmosphere (Skamarock et al. 2008).

PBL schemes have been formulated and tested mainly by analyzing atmospheric profiles over uniform, flat terrain (Stull 1988; Shin and Hong 2011). Unfortunately, these same PBL schemes are also used in numerical weather prediction (NWP) models for complex, mountainous terrain. Because each PBL parameterization is formulated differently, one can expect them to perform differently under different atmospheric and terrain conditions (Holt and Raman 1988; Hu et al. 2010, 2013; Shin and Hong 2011). In particular, some of the assumptions made about PBL behavior may not apply over the complex terrain of western Canada, a region where few studies to date address the performance of weather forecasts made using different PBL parameterizations. Given the impact that the parameterization of turbulent exchanges has on the behavior of a weather model, it is important that forecasters choose the best scheme for the right atmospheric and terrain conditions, and evaluate the forecast uncertainty associated with that scheme.

An alternative to choosing a single best scheme for forecasting is to run multiple NWP models with different PBL parameterizations, making an ensemble. Studies have shown that the deterministic forecast produced from the ensemble average of a collection of members performs better than a single forecast initialized with the best estimate of the initial conditions (Epstein 1969; Toth and Kalnay 1997). One way to create an ensemble is by perturbing the initial conditions of a single model (Cheung 2001; Wilks 2005), although this is skillful only for medium- and extended-range forecasts (Cheung 2001; Arribas et al. 2005). Skillful forecasts are needed in the short range for extreme events, but at shorter time scales the spread of a multi-initial-condition ensemble has not grown enough to justify the use of an ensemble mean (Cheung 2001; Arribas et al. 2005).
Ensembles have also been constructed by combining different numerical models (Atger 1999; Arribas et al. 2005; Casanova and Ahrens 2009) and different physics parameterizations (Vich and Romero 2010; Tapiador et al. 2012; Watanabe et al. 2012; Jerez et al. 2013), in an attempt to produce useful short-range ensemble forecasts. Tapiador et al. (2012) showed that multi-physics ensembles produce higher spread in the short range than do initial-condition-perturbation ensembles, and Atger (1999) showed that "poor man's" multi-model ensembles had higher forecast skill than perturbation ensembles created at national weather centers, such as the American National Centers for Environmental Prediction (NCEP).

A multi-PBL-scheme ensemble is an alternative to the traditional perturbation method of creating ensembles. It can provide helpful probabilistic information for decision-makers while at the same time producing a good deterministic forecast via the ensemble mean. But because the PBL schemes are formulated differently, some may perform worse than others, and the worst schemes may negatively affect the deterministic forecast. An ideal ensemble would consist of an infinite number of members, with each member performing as well as any other (Van Den Dool and Rukhovets 1994). Realistically, with a finite number of members, no two members would perform equally well in all situations, especially with different physics parameterizations (Vich and Romero 2010; Jerez et al. 2013).

Previous studies have also shown that the deterministic forecast of an ensemble of equally weighted members performs less well than an ensemble with members weighted based on how well they perform (Casanova and Ahrens 2009; Du and Zhou 2011), because the arithmetic mean is sensitive to errors that do not cancel out between highly correlated forecasts (Adhikari and Agrawal 2012). By constructing an ensemble mean composed of members weighted inversely to their previous error, the result is influenced more by the better members of the ensemble.

The purpose of this thesis was to compare the performance of the individual PBL schemes with the performance of ensembles constructed from various combinations of those individual members. Deterministic metrics were used to verify the performance of the individuals and the ensembles. An attempt was made to reconcile the verification results with how the PBL parameterizations were formulated, and to examine how weighted ensembles of those schemes would improve the short-range forecast of two common weather variables, temperature and daily accumulated precipitation.

Determining the viability of linearly weighted ensembles of PBL schemes for expanding ensemble sizes may suggest revisions to the way ensemble forecasts are produced at forecasting centers in Canada. More fundamentally, the comparison of PBL schemes provides insight into which parameterizations perform best at forecasting hydrometeorological variables in complex terrain in northwestern North America. As with all studies in statistical forecasting and numerical weather prediction, the goal of this research was to improve upon existing forecast methods and provide the public with even better weather information (Wilks 2005; Stull 2015).

2. COMPARISON OF PBL SCHEMES

The PBL includes the bottom several kilometres of the troposphere, which is directly influenced by forcings from the Earth's surface with response times of about 30 minutes (Stull 1988).
Turbulence, due to mechanical and thermal processes, plays an important role in the transport of variables such as heat and moisture. Because turbulence is complex, deterministic descriptions of turbulence are computationally too expensive to be useful in weather forecasts. Instead, turbulence must be described statistically. The standard way to describe turbulence is by Reynolds averaging:

$\xi = \bar{\xi} + \xi'$    (1)

where $\bar{\xi}$ is the average of a variable $\xi$, and $\xi'$ is the perturbation of the quantity due to turbulence.

If a quantity such as temperature or wind velocity is forecast, both the mean and turbulent parts must be forecast. However, the set of unknown terms for those quantities is larger than the corresponding set of prognostic equations, leading to what is called the closure problem (Stull 1988). Such unknowns involve the covariances of variables, including the turbulent components of wind velocity (Table 1). As shown in Table 1, the prognostic equation for a quantity with moment of order N requires derivatives involving moments of order N+1. The purpose of PBL parameterizations is to "close" the equations, by approximating unknown terms of order N+1 using known quantities. If moments of order N are forecast, and a corresponding set of N+1 moments is parameterized, the parameterization is called Nth-order closure.

Turbulence closure is generally performed in one of two ways: local and nonlocal closure. The eight PBL schemes used in this study can be broadly classified into one or the other of these two methods.

Table 1: Tabulated summary of the prognostic equations for the first three statistical moments of momentum, where the indices i, j, k, m = 1, 2, 3 represent the three Cartesian directions (Stull 1988). (The prognostic equations themselves are omitted here.)

Moment (order)    Number of eqs.    Number of unknowns
1st               3                 6
2nd               6                 10
3rd               10                15

2.1 Local Closure

PBL schemes that involve local closure attempt to parameterize unknown quantities by using gradients or values of known quantities at a single point. The assumption is that turbulence at a grid point in a numerical model acts similarly to molecular diffusion, and is unable to transport quantities beyond adjacent grid points. Local closure is based on a closure approximation called K-theory:

$\overline{u_j'\xi'} = -K_\xi \dfrac{\partial \bar{\xi}}{\partial x_j}$    (2)

where j = 1, 2, 3 corresponds to the x, y, z directions, respectively; $\overline{u_j'\xi'}$ is the flux of a quantity $\xi$ in the j direction, and $K_\xi$ is the eddy diffusivity of $\xi$. Local closure involves finding ways to determine $K_\xi$, so that terms like $\overline{u_j'\xi'}$ can be expressed as being proportional to the gradient of $\bar{\xi}$ at a point.

Five of the PBL schemes examined in this thesis use local closure, as described next.

2.1.1 Mellor-Yamada-Janjić (MYJ)

The MYJ scheme is a 1.5-order local closure scheme (Mellor and Yamada 1974, 1982; Janjić 1990; Skamarock et al. 2008; Hu et al. 2010; Shin and Hong 2011). 1.5-order closure, which is also called turbulence kinetic energy (TKE) closure, involves expressing the eddy diffusivity as a function of the TKE:

$K_\xi = S_\xi\, l\, \sqrt{e}$    (3)

where l is the mixing length (which represents the characteristic length of a turbulent eddy, determined empirically for the MYJ), $S_\xi$ is a proportionality constant, and e is the TKE:

$e = \tfrac{1}{2}\left(\overline{u'^2} + \overline{v'^2} + \overline{w'^2}\right)$    (4)

Hence, TKE schemes commonly require a prognostic equation for $\bar{e}$, on top of the prognostic equations for other mean quantities.
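To make the closure chain concrete, here is a minimal numerical sketch (not from the thesis) of how a 1.5-order scheme turns a TKE profile into an eddy diffusivity via (3) and a down-gradient heat flux via K-theory (2). All profile values, the mixing length l and the constant S_xi are illustrative placeholders; operational schemes diagnose these quantities internally.

```python
import numpy as np

# Illustrative vertical grid and mean potential-temperature profile (K).
z = np.array([10.0, 50.0, 100.0, 200.0, 400.0])        # heights (m)
theta_bar = np.array([300.0, 300.2, 300.5, 301.0, 302.0])

# Hypothetical TKE profile e (m^2 s^-2), mixing length l (m), and
# proportionality constant S_xi; all are placeholder values.
e = np.array([0.8, 0.6, 0.4, 0.2, 0.05])
l = 30.0
S_xi = 0.5

# 1.5-order (TKE) closure, eq. (3): eddy diffusivity from the TKE.
K_xi = S_xi * l * np.sqrt(e)

# K-theory, eq. (2): turbulent flux proportional to the local gradient.
dtheta_dz = np.gradient(theta_bar, z)
flux = -K_xi * dtheta_dz        # w'theta' (K m s^-1), down-gradient only

print(np.round(flux, 4))
```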
The MYJ scheme produces the vertical structure of the atmosphere based on how the TKE changes under different stability regimes. In particular, the depth of the PBL is determined from the TKE, and hence turbulent mixing within the PBL is based on the prognosis of the TKE. Because MYJ is a local scheme, it under-mixes the PBL for locations upstream of convection, resulting in lapse rates that are weaker than observed (Shin and Hong 2011; Coniglio et al. 2013; Cohen et al. 2015).

2.1.2 Mellor-Yamada-Nakanishi-Niino (MYNN)

The Mellor-Yamada-Nakanishi-Niino 1.5-order scheme (officially MYNN2, hereafter written as MYNN for clarity) is similar to the MYJ scheme in that it uses a prognostic equation for TKE to diagnose $K_\xi$ (Nakanishi 2001; Cohen et al. 2015). Whereas the MYJ determines stability and mixing length empirically from test observations, the MYNN obtains its parameters from large-eddy simulations in prior modeling runs.

Both the MYJ and the MYNN fail to model the nonlocal convective activity associated with larger eddies (Coniglio et al. 2013; Cohen et al. 2015). However, Coniglio et al. (2013) found that the MYNN performs similarly to the nonlocal schemes in terms of determining the PBL height, moisture and potential temperature. This may be because the scheme allows the mixing length to vary across the stability spectrum more flexibly than the MYJ does. However, the extra terms in the MYNN add to the computational cost of this scheme.

2.1.3 Grenier-Bretherton-McCaa (GBM)

The GBM is a 1.5-order scheme which was developed specifically for handling stratocumulus-capped boundary layers (SCBL) (Grenier and Bretherton 2001; Cohen et al. 2015). At high spatial resolutions, the scheme properly represents the thinning of stratocumulus clouds during the day due to vertical mixing. Given that the presence of such clouds greatly affects the structure of the PBL, the GBM is quite effective for dealing with marine boundary layers in coastal regions.

Like the other local schemes, the GBM is unable to deal with large eddies that mix quantities beyond adjacent grid points, and therefore tends to under-mix the PBL.

2.1.4 University of Washington (UW)

The UW scheme was formulated as an improvement over the GBM, and was also designed to model SCBLs (Bretherton and Park 2009; Cohen et al. 2015). Unlike the previous three schemes, the UW scheme diagnoses the TKE instead of forecasting it. It also deals explicitly with the entrainment of free-atmosphere air into the PBL. The UW scheme is unique in that in stable conditions the TKE diagnosis is dropped and the scheme reduces to a 1st-order scheme.

At night, the UW scheme reproduces an SCBL accurately. However, like the other local schemes, the UW scheme tends to under-mix the PBL in convective scenarios.

2.1.5 Quasi-Normal Scale Elimination (QNSE)

The QNSE scheme is another 1.5-order scheme, created to model stable boundary layers (Sukoriansky et al. 2005; Shin and Hong 2011; Cohen et al. 2015). Whereas all of the aforementioned schemes work with discretized grid points in physical space, the QNSE scheme employs spectral theory; that is, it applies the Fourier transform to the temperature and velocity fields, to account for waves present in stably stratified layers by working in frequency space. The scheme reproduces the PBL accurately under stable conditions (Cohen et al. 2015). However, in unstable environments, the QNSE under-predicts the temperature and depth of the PBL, and over-predicts the moisture content.
2.2 Nonlocal Closure

PBL schemes formulated with nonlocal closure can model larger eddies that transport quantities across distances greater than individual grid cells (Stull 1988, 1993). Advective-like formulations of closure allow better representation of convective processes, such as thermals.

The Medium-Range Forecast and Yonsei University schemes used in this study are nonlocal closure schemes, while the Asymmetrical Convective Model version 2 is a hybrid nonlocal-local closure scheme. All three schemes use 1st-order closure.

2.2.1 Medium-Range Forecast (MRF)

The MRF scheme adds a counter-gradient correction term to the vertical component (w = vertical velocity) of the general K-theory formulation in (2) (Hong and Pan 1996; Skamarock et al. 2008; Cohen et al. 2015):

$\overline{w'\xi'} = -K_\xi \left( \dfrac{\partial \bar{\xi}}{\partial z} - \gamma_\xi \right)$    (5)

where $\gamma_\xi$ is the counter-gradient correction term. Its purpose is to account for large eddies that can transport quantities against the local gradient, which is not considered in local schemes.

The MRF simulates a convective, unstable PBL better than local schemes, because it better represents the entrainment of warm air aloft into the PBL. However, in high-wind conditions during the night, the MRF is known to over-mix, resulting in deeper PBLs than observed.

2.2.2 Yonsei University (YSU)

The YSU scheme improves on the MRF by including an explicit treatment of entrainment at the top of the PBL (Hong et al. 2006; Skamarock et al. 2008; Hu et al. 2010, 2013; Shin and Hong 2011; Cohen et al. 2015). The explicit treatment is formulated by adding an additional flux term to (5):

$\overline{w'\xi'} = -K_\xi \left( \dfrac{\partial \bar{\xi}}{\partial z} - \gamma_\xi \right) + \overline{w'\xi'}\Big|_h \left( \dfrac{z}{h} \right)^3$    (6)

where $\overline{w'\xi'}\big|_h$ is the vertical flux of $\xi$ at the PBL height h.

Addition of the entrainment term results in a more accurate simulation of convective PBLs than the MRF, and reduces the amount of mixing in high-wind situations at night.

2.2.3 Asymmetrical Convective Model Version 2 (ACM2)

The ACM2 is a hybrid nonlocal-local scheme (Holtslag and Boville 1993; Pleim 2007; Skamarock et al. 2008; Hu et al. 2010; Cohen et al. 2015). It is hybrid because in unstable conditions, surface fluxes are transported across all of the grid cells aloft, but each grid cell also transports quantities upwards through diffusion into its adjacent cell. Downward fluxes are computed locally between each cell. In stable conditions, the scheme transitions smoothly into a fully local scheme.

Pleim (2007) notes that the ACM2 represents the temperature and velocity profiles of the convective PBL more accurately than pure local schemes or nonlocal schemes with a counter-gradient correction. However, Coniglio et al. (2013) show that, like the other nonlocal schemes, the ACM2 is prone to representing the nighttime PBL with convection that is deeper than observed.
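To summarize the contrast between the two closure families, the sketch below (illustrative values only, not any scheme's actual code) computes the vertical heat flux three ways: pure local K-theory as in (2), with an MRF-style counter-gradient term as in (5), and with a YSU-style entrainment flux added as in (6). The values of K, gamma, flux_h and h are placeholder assumptions.

```python
import numpy as np

z = np.linspace(10.0, 1000.0, 100)          # heights (m)
theta_bar = 300.0 + 2.0e-3 * z              # weakly stable mean profile (K)
K = 50.0 * np.ones_like(z)                  # eddy diffusivity (m^2 s^-1), placeholder
dtheta_dz = np.gradient(theta_bar, z)

h = 1000.0        # assumed PBL depth (m)
gamma = 3.0e-3    # counter-gradient correction (K m^-1), placeholder
flux_h = -0.02    # entrainment flux at the PBL top (K m s^-1), placeholder

# Local closure, eq. (2): strictly down-gradient.
flux_local = -K * dtheta_dz

# MRF-style nonlocal closure, eq. (5): the counter-gradient term allows
# upward transport even where the local gradient alone would forbid it.
flux_mrf = -K * (dtheta_dz - gamma)

# YSU-style closure, eq. (6): adds an explicit entrainment flux that
# grows with height as (z/h)^3 toward the PBL top.
flux_ysu = flux_mrf + flux_h * (z / h) ** 3

print(flux_local[0], flux_mrf[0], flux_ysu[-1])
```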
3. METHODOLOGY

3.1 Forecast Description

The focus of this thesis was to investigate how well different PBL schemes can forecast temperature and accumulated precipitation in the complex terrain of western Canada. To evaluate the performance of those schemes, forecasts were made over British Columbia and Alberta (Fig. 1), using the Weather Research and Forecasting Model – Advanced Research WRF (WRF-ARW). The WRF-ARW was chosen because it is a publicly accessible and easily configurable atmospheric modelling system, allowing easy selection of PBL schemes (Skamarock et al. 2008).

The forecast domains in Fig. 1 were chosen to coincide with a concurrent study using PBL schemes to forecast wind speeds at wind farms (Siuta et al. 2016). The outer frame of Fig. 1 encompasses the 36-km domain, where the horizontal distance between neighbouring grid points is 36 km (Jacobson 2005). The frame over British Columbia and Alberta (labelled d02) represents the 12-km domain. Nested within d02 are the 4-km domains (d03 and d04). Forecasts were made at every grid point in all four domains, at spatial intervals dependent on the grid sizes of the domains. All of these grids have 62 terrain-following levels in the vertical, of which 15 levels are in the bottom 1 km of the atmosphere.

Fig. 1: Map of the forecast domains (Siuta et al. 2016). The outer frame represents the 36-km domain; d02 is the 12-km domain; d03 and d04 are the 4-km domains.

Hourly temperature and daily accumulated precipitation observations were obtained from a database of weather-station data to verify the forecasts (Fig. 2; a full list of the stations can be found in the Appendix). Within the 36-km forecast domain are 31 stations across British Columbia and Alberta. Of these, 27 lie within the 12-km domain. There were only a handful of stations within the 4-km domains, since d03 and d04 were originally created to be centered over wind farms, where the observation network is sparse. Because of the lack of observations within the 4-km domains, the analysis focused only on the verification of the 36-km and 12-km forecasts.

Fig. 2: Zoomed-in map of western Canada, with the stations marked as red dots. 27 stations are within the black frame representing the border of the 12-km domain, and four stations in southeast B.C. are outside the border.

Forecast and observation values were taken from June 1, 2014 to June 7, 2015, spanning all four seasons. The forecasts for a 24-hour forecast horizon were extracted using the nearest-neighbour method of finding the closest grid point to the longitude and latitude of the corresponding station. All forecasts were initialized with gridded output from the Global Forecast System (GFS), with the initial conditions downscaled to the forecast domains via the WRF-ARW's preprocessor (Skamarock et al. 2008). In total, 16 different one-day forecasts were produced for each location, with eight PBL schemes in each of the 36- and 12-km domains.
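The nearest-neighbour extraction described above can be sketched as follows. This is a simplification under stated assumptions: the grid arrays and the station coordinates below are made up for illustration, and a real implementation would use the WRF map projection or great-circle distances rather than raw latitude/longitude differences.

```python
import numpy as np

# Hypothetical model grid (2-D latitude/longitude arrays, as in WRF output)
# and a station location; values are invented for illustration.
grid_lat = np.linspace(48.0, 60.0, 50)[:, None] * np.ones((1, 60))
grid_lon = np.ones((50, 1)) * np.linspace(-135.0, -110.0, 60)[None, :]
station_lat, station_lon = 54.17, -130.36   # roughly near Holland Rock, B.C.

# Nearest-neighbour lookup: pick the grid point minimizing the squared
# coordinate distance (a great-circle distance would be more precise).
dist2 = (grid_lat - station_lat) ** 2 + (grid_lon - station_lon) ** 2
j, i = np.unravel_index(np.argmin(dist2), dist2.shape)
print(j, i, grid_lat[j, i], grid_lon[j, i])
```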
3.2 Deterministic Verification Metrics

The verification of the individual PBL schemes, and of the ensembles constructed from those individuals, was deterministic for this study: a forecast was considered to perform well if the numerical output at a location matched closely with the observations at the corresponding surface station. Probabilistic measures, such as the hit rate for precipitation forecasts, were not considered for this study (Wilks 2005). The three commonly used deterministic metrics are described below.

3.2.1 Mean Absolute Error (MAE)

The MAE is defined as follows:

$\mathrm{MAE} = \dfrac{1}{N} \sum_{k=1}^{N} |f_k - o_k|$    (7)

where $f_k$ and $o_k$ are the k-th forecast and observation for a single member at a location, and N is the total number of forecast-observation pairs (Wilks 2005). It can be interpreted as the average magnitude of the forecast error for a set of observations. A lower MAE indicates a smaller deviation between forecasts and observations, and hence a better deterministic forecast.

3.2.2 Root Mean-Squared Error (RMSE)

The RMSE is defined as follows:

$\mathrm{RMSE} = \sqrt{\dfrac{1}{N} \sum_{k=1}^{N} (f_k - o_k)^2}$    (8)

It is similar to the MAE, except that the forecast errors are squared (Wilks 2005). By squaring the errors, the RMSE is more sensitive to large errors than the MAE.

To illustrate the importance of using both the MAE and RMSE, consider the following two sets of forecast errors: a = (1, 6, 1) and b = (3, 3, 3), where each element in a set corresponds to a forecast error. By the MAE, a would seem to contain the better forecasts, since MAE(a) = 2.67 < MAE(b) = 3. However, this conclusion contradicts the one drawn with the RMSE, since RMSE(a) = 3.56 > RMSE(b) = 3. Even though a has the lower errors on average when compared to b, a suffers from a single excessively large error which causes its RMSE to be higher than that of b. This can occur if a forecast member performs well in general, but fails to predict high-impact scenarios such as thunderstorms, leading to very large errors during those events.

3.2.3 Correlation Coefficient

The Pearson product-moment coefficient of linear correlation, abbreviated in this thesis to the correlation coefficient, is the nondimensionalized covariance of the forecasts and the observations (Wilks 2005). It is defined by equation (9):

$r = \dfrac{\overline{(f - \bar{f})(o - \bar{o})}}{\sqrt{\overline{(f - \bar{f})^2}}\,\sqrt{\overline{(o - \bar{o})^2}}}$    (9)

where the overbars denote means, as in Section 2, such that:

$\bar{f} = \dfrac{1}{N} \sum_{k=1}^{N} f_k$    (10)

$\bar{o} = \dfrac{1}{N} \sum_{k=1}^{N} o_k$    (11)

The correlation coefficient is a measure of the linear association between the forecasts and observations. It contains no information about the accuracy of the forecasts; hence a forecast that is highly correlated with the observations can still have a large error. However, it can be used as an indicator of whether or not a forecast is able to fluctuate and evolve similarly to the observations over a reasonable timeframe. A forecast that is highly correlated with the observations, but with a consistent error, may be improved by applying a bias correction to future forecasts.

In contrast, a forecast with little or even negative correlation with the observations may indicate that the NWP model is unable to realistically represent the atmospheric state, even if the forecast error is small; the small error may simply be a numerical coincidence. Consider two forecast sets, f1 = (1, 1, 1, 1, 1) and f2 = (5, 6, 4, 2, 3), and an observation set, o = (2, 3, 1, -1, 0). f1 would have a much lower MAE and RMSE than f2 when verified with o. However, f1 has little correlation with o, while f2 has a correlation coefficient of 1 with o. A closer examination shows that f2 would match o exactly if 3 were subtracted from all of its values.

Hence, there are ways to improve the raw NWP outputs by post-processing the forecasts (Wilks 2005; Stull 2015). Post-processing uses statistical methods to make the numerical output of an NWP model match the observations more closely. Various linear ensemble construction methods, which are forms of post-processing, were applied to the PBL schemes in this study.
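A minimal numpy sketch of the three metrics, reproducing the two worked examples above (this is illustrative only, not the verification software actually used in the study):

```python
import numpy as np

def mae(f, o):
    # Mean absolute error, eq. (7).
    return np.mean(np.abs(np.asarray(f) - np.asarray(o)))

def rmse(f, o):
    # Root mean-squared error, eq. (8).
    return np.sqrt(np.mean((np.asarray(f) - np.asarray(o)) ** 2))

def corr(f, o):
    # Pearson correlation coefficient, eq. (9).
    return np.corrcoef(f, o)[0, 1]

# The MAE/RMSE example from Section 3.2.2: a and b are forecast errors,
# so they are verified against zero "observations".
a, b, zeros = [1, 6, 1], [3, 3, 3], [0, 0, 0]
print(mae(a, zeros), mae(b, zeros))      # 2.67 < 3.0
print(rmse(a, zeros), rmse(b, zeros))    # 3.56 > 3.0

# The correlation example from Section 3.2.3: f2 is perfectly correlated
# with o despite its larger error; subtracting 3 would match o exactly.
f2 = [5, 6, 4, 2, 3]
o = [2, 3, 1, -1, 0]
print(corr(f2, o))                       # 1.0
```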
3.3 Ensemble Construction Methods

3.3.1 Equal Weighting (EW)

The simplest way to produce a deterministic ensemble forecast is to average the raw output of all of the ensemble members (EW-Raw). For an ensemble of n members, the deterministic forecast via the EW-Raw method at a time k is just the arithmetic mean of the members:

$\hat{f}_k = \dfrac{1}{n} \sum_{i=1}^{n} f_{k,i}$    (12)

where $f_{k,i}$ represents the i-th member of the ensemble. The number of members depends on the ensemble of interest; for this study, the three main ensemble sizes were n = 16, for all eight PBL schemes across both domains, and n = 8, for the PBL schemes in the 36- and 12-km domains separately.

The EW-Raw method of creating a deterministic forecast from a collection of members is easy to implement and computationally inexpensive. However, EW-Raw only works well for variables with Gaussian probability distributions, such as temperature, where forecasts with positive errors (values larger than the observations) have a chance of cancelling out those with negative errors (values smaller than the observations) (Wilks 2005). For asymmetric probability distributions with a finite probability at a certain point, like 0 mm for accumulated precipitation, the errors of individual forecasts may not cancel out when averaged.

Even for symmetric probability distributions, if the ensemble is uncalibrated (i.e. the spread of the ensemble members is too narrow) and there is a consistent bias across all of the members, the ensemble mean may not perform better than the best individual member (Cheung 2001; Adhikari and Agrawal 2012). Hence, it is always necessary to correct the bias before averaging to produce a better deterministic forecast. The linear regression technique was used in this study to bias-correct before averaging.

3.3.2 Simple Linear Regression (SLR)

The SLR is a method to describe the linear relationship between a predictor and its predictand (Wilks 2005). Taking a forecast set f as the predictor and an observation set o as the predictand, the SLR equation via ordinary least-squares regression can be written as follows:

$\hat{f}_k = a_0 + a_1 f_k$    (13)

where $\hat{f}_k$ is the linearly regressed k-th forecast of f. The parameters $a_0$ and $a_1$ are the intercept and slope parameters, respectively, chosen such that the sum of the squared residuals

$\sum_{k=1}^{N} \left( \hat{f}_k - o_k \right)^2$    (14)

is minimized. Equivalently, this corresponds to minimizing the RMSE between the linearly regressed forecasts and the observations. Hence, the SLR method can serve as a bias correction to the forecasts, by combining an additive ($a_0$) and a multiplicative ($a_1$) correction to the original forecasts f.

The SLR method can be implemented practically by finding the regression coefficients $a_0$ and $a_1$ on a developmental dataset and applying them to a future independent verification set. For the hourly temperature forecasts, the parameters to post-process the next day's forecast were calculated using the forecasts and observations from the previous day, so that N = 24. For accumulated precipitation forecasts, the next day's forecast was post-processed using the parameters from the regression model involving the previous thirty days, so that N = 30. The regressed forecasts were then averaged, by computing their arithmetic mean, to produce an equally weighted SLR ensemble (EW-SLR):

$\hat{f}_k = \dfrac{1}{n} \sum_{i=1}^{n} \hat{f}_{k,i}$    (15)

A flowchart of the algorithm to compute SLR forecasts and ensembles for a single day is shown in Fig. 3.

Fig. 3: Flowchart to apply the SLR method to the forecasts for each day. For each member, regression parameters are computed from a development set of raw forecasts and observations, and tomorrow's forecast is post-processed using today's parameters; once all members have been regressed, the regressed forecasts are averaged to obtain the SLR ensemble mean (EW-SLR). The development set referred to in the green block corresponds to the hourly observations of the current day for the hourly temperature forecasts, and to the previous thirty days for the accumulated precipitation forecasts.
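A sketch of the SLR bias correction, assuming a hypothetical 24-hour development day with a constant +2 K forecast bias; all data below are synthetic. The zero-intercept option anticipates the purely multiplicative variant adopted for precipitation in Section 4.2.

```python
import numpy as np

def fit_slr(f_dev, o_dev, intercept=True):
    """Ordinary least-squares fit of o = a0 + a1*f on a development set.
    With intercept=False the fit is purely multiplicative (a0 = 0), the
    variant used for precipitation in Section 4.2."""
    f_dev, o_dev = np.asarray(f_dev, float), np.asarray(o_dev, float)
    if intercept:
        a1, a0 = np.polyfit(f_dev, o_dev, 1)
    else:
        a0, a1 = 0.0, np.sum(f_dev * o_dev) / np.sum(f_dev ** 2)
    return a0, a1

# Hypothetical development day: 24 hourly temperature observations and a
# forecast with a +2 K bias plus noise.
rng = np.random.default_rng(0)
obs = 10.0 + 5.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 24))
fcst = obs + 2.0 + rng.normal(0.0, 0.5, 24)

a0, a1 = fit_slr(fcst, obs)
tomorrow_raw = obs + 2.0                  # stand-in for the next day's forecast
tomorrow_slr = a0 + a1 * tomorrow_raw     # eq. (13) applied out of sample
print(round(a0, 2), round(a1, 2))         # roughly -2 and 1 for this bias
```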
3.3.3 Inverse-Error Weighting (IEW)

The previous ensemble construction methods, EW-Raw and EW-SLR, give equal weight to all members, regardless of their previous performance. Even after application of the SLR, the EW ensembles may be negatively impacted by members with very high RMSEs when verified against their development set, because the arithmetic mean is not robust and does not take into account the relative performance of each member (Casanova and Ahrens 2009; Du and Zhou 2011; Adhikari and Agrawal 2012). Weighting the members by their errors before linearly combining them allows the better members of the ensemble to influence the final deterministic forecast more, and the poorer members less (Van Den Dool and Rukhovets 1994; Li and Tkacz 2001; Casanova and Ahrens 2009; Du and Zhou 2011; Miao and Russell 2015).

The IEW method used in this thesis weighted each ensemble member by the inverse of its RMSE:

$\hat{f}_k = \sum_{i=1}^{n} \alpha_i f_{k,i}$    (16)

where $\alpha_i$ is the weight factor applied to ensemble member i:

$\alpha_i = \dfrac{\beta}{\mathrm{RMSE}_i}$    (17)

The weight factor is the normalized inverse RMSE of the forecast member from the developmental set, where the RMSE can be calculated from either the raw forecasts or the SLR forecasts verified against the developmental observations. The normalization factor $\beta$ ensures that the sum of the weights is 1, to produce an unbiased linear combination:

$\beta = \left( \sum_{i=1}^{n} \dfrac{1}{\mathrm{RMSE}_i} \right)^{-1}$    (18)

Hence, ensemble members with lower RMSEs have more influence on the final deterministic forecast than members with higher RMSEs. Ensemble forecasts created from the linear combination of weighted raw forecasts (IEW-Raw) and weighted SLR forecasts (IEW-SLR) were tested in this study. A flowchart of the algorithm to compute IEW forecasts and ensembles for a single day is shown in Fig. 4.

Fig. 4: Flowchart to apply the IEW method to the forecasts for each day. Weights are calculated using (16), (17) and (18) from either the raw forecasts of the developmental set (IEW-Raw) or the SLR forecasts (IEW-SLR; the SLR method is described by the left decision branch in Fig. 3), with the next day's forecasts linearly combined using the normalized inverse RMSEs from the developmental set.
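A minimal sketch of the IEW combination for a hypothetical four-member ensemble; the development-set RMSEs are invented to show how the normalized inverse-RMSE weights of (16)-(18) favour the lower-error members.

```python
import numpy as np

def iew_weights(rmse_dev):
    # Normalized inverse-RMSE weights, eqs. (16)-(18); the implicit
    # normalization plays the role of beta in eq. (18).
    inv = 1.0 / np.asarray(rmse_dev, float)
    return inv / inv.sum()

# Hypothetical development-set RMSEs for four members, and the members'
# forecasts for one future time.
rmse_dev = [1.0, 2.0, 2.0, 4.0]
f_next = np.array([10.0, 12.0, 11.0, 15.0])

w = iew_weights(rmse_dev)
print(np.round(w, 3))    # [0.444 0.222 0.222 0.111]; the weights sum to 1
print(w @ f_next)        # weighted deterministic forecast, eq. (16)
```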
3.3.4 Multiple Linear Regression (MLR)

The MLR is a generalized linear regression, which allows multiple predictor variables for a single predictand (Wilks 2005). For n members in an ensemble, the regressed forecast value can be computed as follows:

$\hat{f}_k = b_0 + \sum_{i=1}^{n} b_i f_{k,i}$    (19)

where $b_i$ is the regression coefficient for the i-th member of the ensemble, and $b_0$ is the regression constant. Similar to the SLR method outlined in 3.3.2, the MLR method used here is an ordinary least-squares regression, with the parameters found by minimizing the sum of squared residuals between the regressed forecasts and the observations.

One disadvantage of the MLR method is that it is prone to overfitting, and can produce poor deterministic forecasts when applied to independent data that are not part of the original development set. Initial testing of the MLR with all 16 members resulted in long computational times, and grossly unrealistic forecasts for both temperature and precipitation. Instead, the predictors must be screened before being included in the MLR, via an algorithm called forward selection (Glahn and Lowry 1972; Wilks 2005).

Forward selection involves adding forecast members to a regression model one at a time, choosing at each step the member that improves the RMSE the most for a given developmental data set, until a threshold criterion is reached. Consider n members, all of which have been linearly regressed using the SLR method (13) with a development set. By initially selecting the member whose regression has the lowest RMSE in the development set, we have:

$\hat{f}^{(1)} = b_0 + b_1 f_1$    (20)

where the superscript denotes the step of the forward selection process (here the first step), and $f_1$ denotes the ensemble member producing this regression. The subscript k is dropped for readability. In the next step, an MLR is performed for each of the remaining forecast members alongside $f_1$, to find the regression model which further reduces the RMSE the most. The member whose MLR produces the lowest RMSE results in a new model, involving $f_2$:

$\hat{f}^{(2)} = b_0 + b_1 f_1 + b_2 f_2$    (21)

There are now three parameters to the regression, with $b_2$ scaling the new member $f_2$. However, $b_0$ and $b_1$ in (21) differ from those in (20), since the regression has changed to accommodate the new predictor.

The selection process continues until the RMSE approaches a minimum value. The final MLR, with m members, is:

$\hat{f}^{(m)} = b_0 + \sum_{i=1}^{m} b_i f_i$    (22)

Equation (22) is, in essence, the MLR with the smallest number of members that produces a nearly error-free regression on the development set. The parameters $b_1$ to $b_m$ were then applied to post-process the next day's forecast by scaling the corresponding forecast members $f_1$ to $f_m$, with the intercept $b_0$ added to the scaled members. Ensemble members not included in the regression for that day have zero weight in the post-processed MLR forecast, since those members were deemed to have relatively high error based on the development set. Hence, the MLR method of ensemble construction considers the previous performance history of each potential member, unlike the EW method of assigning equal weight to each member, and unlike the IEW method of including all members.

A flowchart of the algorithm to compute MLR forecasts and ensembles for a single day is shown in Fig. 5.

Fig. 5: Flowchart to apply the MLR method to the forecasts for each day, using the forward selection algorithm. An SLR is first performed for all members (the left decision branch in Fig. 3, minus the actual post-processing of the next day's forecast); members are then added one at a time, choosing the member with the lowest RMSE after each regression, until the minimum-RMSE criterion is reached, and tomorrow's forecast is post-processed using today's regression parameters.
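A sketch of the forward selection procedure in Fig. 5, under simplifying assumptions: synthetic development data, a fixed improvement tolerance tol standing in for the minimum-RMSE criterion, and a cap on the member count.

```python
import numpy as np

def forward_select(F_dev, o_dev, tol=1e-3, max_members=5):
    """Greedy forward selection for an MLR ensemble, as in Fig. 5.
    F_dev: (N, n) development forecasts; o_dev: (N,) observations.
    Members are added while they reduce the development RMSE by > tol."""
    N, n = F_dev.shape
    chosen, best_rmse = [], np.inf
    while len(chosen) < max_members:
        trial_rmse, trial_member = np.inf, None
        for i in set(range(n)) - set(chosen):
            X = np.column_stack([np.ones(N)] + [F_dev[:, j] for j in chosen + [i]])
            b, *_ = np.linalg.lstsq(X, o_dev, rcond=None)
            r = np.sqrt(np.mean((X @ b - o_dev) ** 2))
            if r < trial_rmse:
                trial_rmse, trial_member = r, i
        if best_rmse - trial_rmse <= tol:
            break                        # no member improves the RMSE enough
        chosen.append(trial_member)
        best_rmse = trial_rmse
    # Refit on the final member set and return coefficients b0..bm, eq. (22).
    X = np.column_stack([np.ones(N)] + [F_dev[:, j] for j in chosen])
    b, *_ = np.linalg.lstsq(X, o_dev, rcond=None)
    return chosen, b

# Hypothetical development set: 24 hours, 4 members, of which members 0
# and 2 carry most of the signal.
rng = np.random.default_rng(1)
o = rng.normal(10.0, 3.0, 24)
F = np.column_stack([o + rng.normal(0.0, s, 24) for s in (0.5, 3.0, 1.0, 3.0)])
members, coeffs = forward_select(F, o)
print(members, np.round(coeffs, 2))
```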
4. RESULTS AND DISCUSSION

4.1 Verification of PBL Schemes

The eight PBL schemes used in this study were verified against observations across the 27 stations within both the 36- and 12-km domains, and the 4 stations within the 36-km domain only. The MAE, RMSE and linear correlation coefficients for each ensemble member were computed for the entire year. To visualize the relative deterministic performance of the ensemble members, the deterministic metrics were ranked by station, with a lower rank value corresponding to a relatively better forecast. A good forecast in this context means a lower MAE and RMSE, and a higher correlation coefficient, relative to the other ensemble members. The ranks for each ensemble member were then averaged across all of the available stations and plotted as bar charts, making comparison between schemes easier. The top chart of each metric shows the ranks across all ensembles (ranked 1-16), and the bottom chart shows the ranks within the individual domains (ranked 1-8). The ranks for temperature are plotted in Fig. 6.

Fig. 6: Bar charts of (a) MAE, (b) RMSE and (c) correlation coefficient rankings for the raw temperature forecasts across all stations. The upper charts are the ranks for all 16 members, while the lower charts are the rankings separated by domain (36-km domain in blue, 12-km domain in red).

Fig. 6 shows that the domain size differences had little impact on the relative performance of the schemes, with the order of ranks generally preserved between and within both domains. The MYJ scheme had the best overall forecast for all three deterministic metrics. The QNSE scheme had the worst MAE and RMSE rankings but performed well in terms of the linear correlation coefficient, for which the MRF had the worst ranking. The MAE and RMSE rankings were generally identical.

Fig. 7 shows the bar charts for precipitation, in the same format as Fig. 6 for temperature.

Fig. 7: Bar charts of (a) MAE, (b) RMSE and (c) correlation coefficient rankings for the raw accumulated precipitation forecasts across all stations. The upper charts are the ranks for all 16 members, while the lower charts are the rankings separated by domain (36-km domain in blue, 12-km domain in red).

Fig. 7 shows that the domain size differences had a greater impact on the performance of the PBL schemes for precipitation forecasts than for temperature forecasts. The MRF and UW schemes performed the best in general, although the roughly uniform distribution of rankings in the correlation chart of Fig. 7(c) indicates that all of the schemes performed comparably in terms of varying with the observations. In fact, the precipitation forecasts generally had poor correlation coefficients, ranging between 0 and 0.68 depending on the scheme and station. The MAE and RMSE rankings did match in general, indicating that members with larger errors tended to perform worse on average, although the MRF and UW did not perform as distinctively well in terms of the RMSE.
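The station-wise ranking and averaging described at the start of this section can be sketched as follows, with made-up MAE values for three stations and four members (for the correlation coefficient, the ordering would be reversed so that higher values receive better ranks):

```python
import numpy as np

# Hypothetical MAE values: rows are stations, columns are ensemble members.
mae = np.array([[2.1, 2.5, 2.0, 3.0],
                [1.8, 1.6, 1.9, 2.2],
                [3.0, 2.8, 2.7, 3.5]])

# Rank members within each station (rank 1 = lowest MAE = best); the
# double argsort assigns ranks, ignoring ties for this illustration.
ranks = mae.argsort(axis=1).argsort(axis=1) + 1

# Average each member's rank across stations, as plotted in Figs. 6-7.
print(ranks.mean(axis=0))   # lower average rank = relatively better member
```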
In summary, the original formulation of the PBL schemes did not have a predictable effect on the final rank order of the schemes. Even though many stations are located in coastal regions dominated by marine boundary layers, the rankings showed no preference for the UW and GBM schemes at forecasting temperature compared with the other schemes. Although the UW scheme performed well at forecasting precipitation, the MRF scheme performed the best overall, even though it is an older nonlocal scheme that does not explicitly model marine boundary layers (Hong and Pan 1996; Skamarock et al. 2008; Cohen et al. 2015). Despite the sophisticated spectral formulation of the QNSE (Sukoriansky et al. 2005; Shin and Hong 2011; Cohen et al. 2015), its consistently poor rankings at forecasting both variables indicate that its raw forecast output may be a less desirable ensemble member than the other PBL schemes, even when compared to the simpler schemes.

4.2 Evaluation of PBL Ensembles

Before the ensembles were constructed, the SLR-corrected forecasts were examined to determine whether they would improve the performance of each individual member. A quantitative measure of the improvement of a forecast f over a reference forecast ref is the skill score, defined in (23):

$\mathrm{SS} = 1 - \dfrac{\mathrm{MAE}_f}{\mathrm{MAE}_{\mathrm{ref}}}$    (23)

The equation in (23) can be adapted for the RMSE, or any continuous positive-definite metric (Wilks 2005). Since the correlation coefficient does not meet this criterion, the skill score is not used for that metric. For the MAE and RMSE, the skill score is 1 for a perfect forecast ($\mathrm{MAE}_f$ = 0) and can approach negative infinity for a poor forecast. Table 2 lists the average MAE and RMSE skill scores for the SLR temperature forecasts, with the raw forecasts as the reference.

Table 2: Tabulated average temperature MAE and RMSE skill scores for each SLR forecast and domain, using the corresponding raw forecasts as reference. A value of 1 is best.

             ACM2    GBM     MRF     MYJ     MYNN    QNSE    UW      YSU
SSMAE (36)   0.197   0.201   0.207   0.188   0.206   0.236   0.206   0.202
SSMAE (12)   0.174   0.178   0.192   0.167   0.175   0.207   0.179   0.172
SSRMSE (36)  0.150   0.149   0.154   0.105   0.150   0.135   0.163   0.120
SSRMSE (12)  0.103   0.111   0.109   0.055   0.102   0.041   0.115   0.065

Table 2 shows positive average skill scores for all PBL members, indicating that the SLR method was successful in reducing the errors associated with each member. In particular, the forecasts from the 36-km domain exhibited a greater improvement in the error after application of the SLR than the forecasts from the 12-km domain. The observations and forecasts at Holland Rock, B.C. were used to visualize how the SLR method reduces the average MAE and RMSE relative to the raw forecasts (Fig. 8).

Fig. 8: Time series of EW ensemble temperature forecasts and observations at Holland Rock, B.C. Inset is a zoomed-in plot of the last 30 days, delineated by the red rectangle.

When combined using the EW method, the EW-SLR ensemble forecasts had greater variance than the EW-Raw ensemble forecasts, more closely matching the variance of the observations. Using the coastal station from Fig. 8 as an example, the EW-SLR forecast did sometimes greatly over-predict or under-predict the temperature, but overall it matched the observations more closely than the EW-Raw forecast. The inset in the figure shows how the EW-Raw forecast over-predicted the temperature for several weeks from May to June, whereas the EW-SLR forecast was able to correct for that bias, contributing to better error statistics.

The SLR method, however, did not perform as well in reducing the error for the precipitation forecasts. In particular, the EW-SLR forecasts struggled with no-precipitation days (Fig. 9).

Fig. 9: Time series of EW precipitation forecasts and observations at Holland Rock, as in Fig. 8. Inset is a zoomed-in plot of autumn 2014, delineated by the red rectangle. The purple curve often predicted precipitation on days which did not have precipitation.

Although the SLR forecasts did perform better than the raw forecasts in general, the inability to properly forecast zero-precipitation days negatively affects the usability of such a method.
This problem was particularly acute during the autumn, as shown by the inset, in which the EW-SLR ensemble constantly forecasted precipitation on dry days. However, this problem can be fixed by performing an SLR with $a_0$ in (13) set to 0. The SLR then becomes a purely multiplicative bias correction, which retains the zero-precipitation forecasts from the raw model output (Fig. 10).

Fig. 10: Same as in Fig. 9, but the EW-SLR method is purely multiplicative. Inset is a zoomed-in plot of autumn 2014, delineated by the red rectangle. The purple curve, under the multiplicative bias correction method, had greater success in predicting dry days than the one in Fig. 9.

Removing the intercept term resulted in an even greater improvement to the precipitation forecasts. However, both Fig. 9 and 10 show that although the raw forecasts had a tendency to over-forecast the accumulated precipitation, they were better able to predict extreme situations. The extreme precipitation event in mid-January in Fig. 9 and 10 was predicted better by the EW-Raw forecast, while the EW-SLR forecasts were more conservative and better predicted the precipitation events in the autumn of 2014. SLR forecasts for precipitation will hereafter refer to equation (13) without $a_0$. The skill scores for the individual members are summarized in Table 3.

Table 3: Tabulated average precipitation MAE and RMSE skill scores for each SLR forecast and domain, using the corresponding raw forecasts as reference. The SLR is taken to exclude the intercept term $a_0$. A value of 1 is best.

             ACM2    GBM     MRF     MYJ     MYNN    QNSE    UW      YSU
SSMAE (36)   0.578   0.578   0.558   0.573   0.580   0.583   0.561   0.574
SSMAE (12)   0.534   0.534   0.501   0.531   0.540   0.540   0.514   0.530
SSRMSE (36)  0.494   0.486   0.474   0.486   0.494   0.496   0.473   0.480
SSRMSE (12)  0.484   0.479   0.447   0.475   0.495   0.492   0.465   0.467

Similar to the temperature forecasts, the precipitation forecasts also showed a consistent improvement after application of the SLR, with the 36-km domain forecasts improving more than the 12-km domain forecasts. Hence, the SLR method of bias correction was able to improve the forecasts of each individual member, as well as the EW ensembles. The best PBL schemes, when averaged by ranking across all three deterministic metrics, are summarized in Table 4.
The skill scores for the individual members are summarized in Table 3.

Table 3: Tabulated average precipitation MAE and RMSE skill scores for each SLR forecast and domain, using the corresponding raw forecasts as reference. The SLR excludes the intercept term a0. A value of 1 is best.

                 ACM2    GBM     MRF     MYJ     MYNN    QNSE    UW      YSU
SS_MAE (36)      0.578   0.578   0.558   0.573   0.580   0.583   0.561   0.574
SS_MAE (12)      0.534   0.534   0.501   0.531   0.540   0.540   0.514   0.530
SS_RMSE (36)     0.494   0.486   0.474   0.486   0.494   0.496   0.473   0.480
SS_RMSE (12)     0.484   0.479   0.447   0.475   0.495   0.492   0.465   0.467

Similar to the temperature forecasts, the precipitation forecasts showed a consistent improvement after application of the SLR, with the 36-km domain forecasts again improving more than the 12-km domain forecasts. Hence, the SLR method of bias correction was able to improve the forecasts of each individual member, as well as the EW ensembles. The best PBL schemes, when averaged by ranking across all three deterministic metrics, are summarized in Table 4.

Table 4: Best raw and SLR members for hourly temperature and accumulated precipitation at each station. The member with the lowest average rank across the MAE, RMSE and correlation coefficient metrics at each station is declared the best at that station.

Station                 Temp. Raw   Temp. SLR   Precip. Raw   Precip. SLR
Alert Bay               QNSE_36     MRF_36      UW_12         QNSE_36
Ballenas Island         YSU_36      QNSE_36     UW_12         ACM2_36
Banff                   MYJ_36      MRF_12      MRF_36        MRF_36
Burns Lake              MYJ_12      GBM_12      UW_12         UW_36
Cape St. James          MYNN_12     UW_12       MRF_12        MRF_12
Creston^1               MRF_36      MYJ_36      MRF_36        GBM_36
Dawson Creek Airport    UW_12       GBM_36      N/A^2         N/A
Dawson Creek Auto       UW_12       MYNN_36     N/A           N/A
Entrance Island         MYJ_36      QNSE_36     YSU_36        ACM2_36
Estevan Point           MYJ_12      QNSE_12     MRF_12        MYJ_36
Fort St. John           MYJ_36      ACM2_12     MRF_12        GBM_36
Grand Prairie           ACM2_12     MYJ_12      ACM2_12       UW_36
Herbert Island          QNSE_36     UW_36       MRF_12        MRF_36
Holland Rock            UW_36       QNSE_36     MYJ_12        UW_36
Kindakun Rocks          GBM_12      MYJ_36      MYJ_12        YSU_12
Langara                 MYJ_12      QNSE_12     MYJ_12        UW_12
Nakusp                  MYJ_12      MRF_12      UW_36         QNSE_36
Nelson                  MYJ_36      UW_36       MRF_36        MYJ_36
Osoyoos                 MYJ_36      GBM_36      MRF_36        MYNN_36
Peace River             ACM2_12     GBM_12      MRF_36        MYJ_36
Princeton               MYJ_12      QNSE_12     MRF_36        MRF_12
Salmon Arm              MYJ_12      MYNN_12     MRF_12        GBM_12
Sand Heads              MYJ_36      QNSE_36     N/A           N/A
Saturna Island          MYJ_36      MYJ_36      YSU_12        GBM_36
Sisters Island          MYJ_36      QNSE_36     N/A           N/A
Solander Island         UW_36       ACM2_36     MRF_12        ACM2_36
Sparwood                MRF_36      UW_36       MRF_36        ACM2_36
Spirit River            MYJ_12      MYJ_12      MRF_36        QNSE_12
Summerland              MYJ_12      GBM_12      MRF_12        YSU_12
Vernon                  MYJ_12      YSU_12      GBM_12        YSU_12
Yoho                    MYJ_12      GBM_12      MRF_12        YSU_36

^1 Stations so marked are within the 36-km domain, but outside of the 12-km domain.
^2 Cells with N/A denote stations missing precipitation observations.

The best individual raw schemes at all stations generally agreed with the bar charts in Fig. 6 and 7, with the MYJ scheme of either domain most frequently the best member for temperature, and the MRF of the 12-km domain most frequently the best member for precipitation. However, after application of the SLR for bias correction, the frequency with which a PBL scheme is the best member is distributed more evenly across all of the schemes. In particular, the QNSE appears under the SLR columns for both temperature and precipitation, whereas it appears only twice under the raw column for temperature, and never under the raw column for precipitation. The 36-km QNSE scheme is the best member at five stations for the SLR temperature forecasts, more often than any other scheme. Of the 27 stations with sufficient precipitation observations, the 36-km ACM2 scheme is the best member most frequently (at four stations) for the SLR precipitation forecasts, although one of those stations, Sparwood, only had 36-km forecasts available.

The forecasts were then combined via the IEW method to produce weighted deterministic forecasts, in contrast to the EW method, in which all of the forecast members are weighted equally (Fig. 11 for temperature, Fig. 12 for precipitation). The temperature time series at Holland Rock shows that, qualitatively, the IEW-Raw and IEW-SLR forecasts were very similar to each other, and both match the variance of the observations well. However, the IEW-Raw and IEW-SLR precipitation forecasts appear very different from each other. Like the EW forecasts, the IEW still struggled with high-intensity rainfall events, especially the spike in January. A measure of member performance can be obtained by averaging the weights throughout the entire year, across all stations, for a single ensemble member. Since the weights are normalized, members with higher average weights can be interpreted as better ensemble members, having lower RMSE when verified against a development set. Averaging the weights therefore allows a fair comparison of the relative influence each ensemble member would have on the overall weighted deterministic forecast (Fig. 13).

Fig. 11: Same as in Fig. 8, but with the IEW temperature forecasts.

Fig. 12: Same as in Fig. 10, but with the IEW precipitation forecasts. The forecasts from the first thirty days, which were used for training, are omitted.
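Before examining the averaged weights in Fig. 13, the IEW combination itself can be made concrete. Below is a minimal sketch assuming NumPy and that each member's RMSE over a development window has already been computed; the array names are illustrative. Setting all development errors equal recovers the EW ensemble as a special case.

    import numpy as np

    def iew_weights(dev_rmse):
        """Normalized inverse-RMSE weights: members with lower RMSE on the
        development set receive proportionally more weight."""
        inv = 1.0 / np.asarray(dev_rmse, dtype=float)
        return inv / inv.sum()

    def iew_forecast(member_fcsts, dev_rmse):
        """Weighted deterministic forecast from an (n_members, n_times)
        array of member forecasts."""
        w = iew_weights(dev_rmse)
        return w @ np.asarray(member_fcsts, dtype=float)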
Fig. 13: Average weights for the IEW forecasts across all stations for a year. Higher weights indicate a greater influence on the ensemble across all stations for that variable.

The weights in Fig. 13 have greater spread between members for the IEW-Raw precipitation forecasts than for the IEW-SLR forecasts, but the opposite is true for the temperature forecasts. Although the order of the IEW-Raw forecast weights matches the rankings from the bar charts earlier in the section, the order of the IEW-SLR forecast weights does not. For temperature, the 36-km MYJ scheme had the most influence on the weighted ensemble for both the raw and SLR forecasts, but the 36-km QNSE had the second-highest average weight for the IEW-SLR. Likewise, the 36-km precipitation forecasts appeared to perform consistently better on a daily basis than the 12-km forecasts after application of the SLR, whereas the opposite was true for the raw forecasts. Hence, the relative performance of the individual schemes on a daily basis can be influenced by the application of a bias correction before verification or ensemble construction, as well as by the domain used for forecasting.

Finally, the MLR method was applied to the individual members using the forward selection algorithm. When the MLR was attempted on all members at once, the resulting ensemble was grossly over-fit, producing unphysically large and small values for both temperature and precipitation and requiring considerably more computation time than the other methods (Fig. 14). After application of the forward selection algorithm, the fitting was constrained, leading to more physical values, although over- and under-prediction still occurred (Fig. 15).

Since the forward selection algorithm retains only the best several forecast members from the training set for the regression, members with a higher frequency of inclusion in the regression have a lower RMSE more often than members included less frequently, and consequently influence the output of the regression more often (Fig. 16 for temperature, Fig. 17 for precipitation). For both temperature and precipitation, the MYJ and MRF were included most frequently in the MLR, whereas the YSU and MYNN were often excluded. Although Fig. 6 and 7 indicate that the YSU and the MYNN were never the worst-performing members, they were mediocre and generally had below-average performance, which may explain their infrequent inclusion in the daily regression process.

Fig. 14: Temperature forecasts for Holland Rock using MLR with all 16 members.

Fig. 15: Like Fig. 14, except the forward selection algorithm was applied to reduce over-fitting.

Fig. 16: Cumulative frequency of inclusion into temperature MLR forecasts, across all stations.

Fig. 17: Cumulative frequency of inclusion into precipitation MLR forecasts, across all stations.

The curves in Fig. 16 and 17 also cross each other throughout the year, indicating that certain ensemble members are included in the MLR more frequently than others depending on the time of year. The crossings in Fig. 16 for temperature are subtle, and mainly involve the members with lower frequencies of inclusion. However, the crossings in Fig. 17 for precipitation are much more apparent, especially during the autumn of 2014, and involve all of the ensemble members. Hence, plots of inclusion frequencies for the MLR contain information on the average daily and seasonal performances of the schemes, information that would not otherwise be available from EW ensembles.
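A minimal sketch of the greedy forward-selection step is given below, assuming NumPy and a least-squares fit with an intercept. The cap on the number of retained members and the improvement-based stopping rule are illustrative assumptions, not the exact criteria used in this study.

    import numpy as np

    def fit_rmse(X, y):
        """Training RMSE of a least-squares fit of y on X (with intercept)."""
        A = np.column_stack([np.ones(len(y)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.sqrt(np.mean((A @ coef - y) ** 2))

    def forward_select(members, obs, max_members=4):
        """Greedy forward selection: repeatedly add the member that most
        reduces the training RMSE of the multiple linear regression.
        `members` has shape (n_members, n_times); returns chosen indices."""
        chosen, remaining, best_err = [], list(range(members.shape[0])), np.inf
        while remaining and len(chosen) < max_members:
            err, j = min((fit_rmse(members[chosen + [k]].T, obs), k)
                         for k in remaining)
            if err >= best_err:   # stop once no candidate improves the fit
                break
            chosen.append(j)
            remaining.remove(j)
            best_err = err
        return chosen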
The overall annual performance of all forecast combination methods is summarized in Table 5 for temperature and in Table 6 for precipitation. The skill scores for the MAE and RMSE were computed with reference to the best-performing individual SLR member at each station across the entire year, and the skill scores were then averaged. The SLR individual forecasts were used as the reference because of their objective decrease in error compared to the raw individual forecasts, as summarized in Tables 2 and 3. Hence, a good forecast combination is one that performs similarly to, or outperforms, the best individual SLR forecast.

Table 5: Tabulated average temperature MAE and RMSE skill scores for each forecast combination method, using the best individual SLR forecasts at each station as reference. A value of 1 is best. The MLR column uses forward selection, irrespective of domain.

                   EW-Raw    EW-SLR    IEW-Raw   IEW-SLR   MLR
SS_MAE (All)       -0.409    -0.004    -0.000    -0.007    -0.174
SS_MAE (36-km)     -0.432    -0.015    -0.427    -0.018
SS_MAE (12-km)     -0.424    -0.039    -0.418    -0.043
SS_RMSE (All)      -0.282    -0.051    -0.045    -0.060    -0.183
SS_RMSE (36-km)    -0.303    -0.060    -0.299    -0.067
SS_RMSE (12-km)    -0.294    -0.094    -0.290    -0.105

Table 6: Tabulated average precipitation MAE and RMSE skill scores for each forecast combination method, using the best individual SLR forecasts at each station as reference. A value of 1 is best. The MLR column uses forward selection, irrespective of domain.

                   EW-Raw    EW-SLR    IEW-Raw   IEW-SLR   MLR
SS_MAE (All)       -18.102   -0.054    -17.962   -0.848    -0.404
SS_MAE (36-km)     -18.226   -0.055    -18.185   -0.839
SS_MAE (12-km)     -18.008   -0.061    -17.934   -0.865
SS_RMSE (All)      -3.303    -0.038    -3.281    -0.232    -0.177
SS_RMSE (36-km)    -3.403    -0.034    -3.400    -0.227
SS_RMSE (12-km)    -3.278    -0.052    -3.269    -0.250

The prevalence of negative skill scores indicates that, for all forecast combinations and for both temperature and precipitation, the best individual SLR forecast outperformed the ensemble. The EW-Raw had the worst deterministic forecasts overall, but there was little difference between the IEW-Raw and the best individual member for the temperature forecasts when all 16 members were used. The EW-SLR and the IEW-SLR were very similar in performance as well, with the 36-km ensembles verifying nearly as well as the full ensembles. For precipitation, only the EW-SLR ensembles verified nearly as well as the best individual precipitation forecast. However, the better precipitation RMSE skill scores relative to the MAE skill scores indicate that the ensembles may not necessarily struggle with extremes, but instead carry a consistent non-zero bias even during dry periods, leading to a buildup of error over time.

In general, if the best-performing bias-corrected member is known for a certain location, then computation time can be saved by simply using that one member to produce the forecast. However, if the best-performing member is unknown, then the IEW-Raw ensemble with 16 PBL schemes would perform just as well for hourly temperature forecasts, and the EW-SLR ensemble would perform nearly as well for daily accumulated precipitation forecasts.
5. CONCLUSION

In this study, the performance of 16 PBL schemes was compared across 31 stations in western Canada. The schemes were verified against surface hourly temperature and daily accumulated precipitation observations using the MAE, RMSE and the correlation coefficient. For the raw forecasts, the MYJ was the best member at forecasting temperature, while the MRF was the best at forecasting precipitation. After application of the SLR, there was greater variability in the best-performing member at a station. In general, the formulation of the PBL schemes had little impact on the final rankings of the schemes for forecasting temperature and precipitation in the complex terrain of western Canada, with older and less sophisticated schemes still performing relatively well. The rankings within one domain size were generally similar to the rankings within the other. After the application of an SLR bias correction, the differences between domains and schemes were smoothed out. Compared to the raw forecasts, the individual SLR forecasts had consistently lower MAE and RMSE, with the error reduced by half for the precipitation forecasts. Plotting the average weights from the IEW ensembles, as well as the frequency of inclusion in the regression for the MLR forecasts, gave an indication of the daily performance of each ensemble member.

The use of certain ensemble construction methods produced deterministic forecasts with error statistics comparable, on average, to those of the best individual SLR forecast at each station. In particular, the IEW-Raw ensemble with 16 members performed just as well as the best SLR member at each station for the temperature forecasts. For precipitation forecasts, the EW-SLR ensemble had only marginally poorer MAE and RMSE than the best SLR member at each station. Because of the relative simplicity of these ensemble construction methods, creating an ensemble for an entire year across 27 stations with 16 members can be completed within several minutes on a personal laptop. Creating an ensemble forecast at a single point for a single day, given an available developmental data set, would require even less time. Hence, if the best bias-corrected individual forecast at a location is not known a priori, then the use of an IEW-Raw ensemble for hourly temperature forecasts, and an EW-SLR ensemble for accumulated precipitation forecasts, would produce reasonable and computationally feasible deterministic forecasts. If the best member is known, however, then the computation time for ensemble construction can be saved, and all that is required to produce a good deterministic forecast at a single location is to apply a bias correction.

Although this thesis focused only on deterministic metrics for verification, the probabilistic characteristics of the forecasts are just as important. Future work will include a study of forecast calibration, comparing the variance among the ensemble members to the variance of the observations (Wilks 2005). Examining the probabilistic properties of an ensemble would allow a forecaster to make decisions in anticipation of high-impact scenarios (Raftery et al. 2005; Casanova and Ahrens 2009; Miao and Russell 2015).

Ensemble construction techniques can also include nonlinear combination methods, which are more sophisticated than the linear methods used in this thesis. Examples of nonlinear combinations include artificial neural networks (Donaldson and Kamstra 1996; Adhikari and Agrawal 2012; Krasnopolsky and Lin 2012) and gene-expression programming (Bakhshaii and Stull 2009).
Weighting by the use of probability density functions is also a valid method, such as Bayesian model averaging (Raftery et al. 2005).

These verification and ensemble construction methods can be extended to other physics parameterizations as well, such as microphysics and surface layer schemes, and to other forecast variables like wind speed and sea-level pressure. By experimenting with different post-processing and ensemble construction methods, meteorologists can produce a more skillful prediction of the future atmospheric state using simple statistics, at a fraction of the computational cost of running the models themselves. Ultimately, better and timelier weather forecasts will have positive economic and social ramifications for both the general public and industry.

REFERENCES

Adhikari, R., and R. K. Agrawal, 2012: A Novel Weighted Ensemble Technique for Time Series Forecasting. Advances in Knowledge Discovery and Data Mining, P. N. Tan, S. Chawla, C. K. Ho, and J. Bailey, Eds., Springer, 38-49, doi: 10.1007/978-3-642-30217-6_4.

Alessandrini, S., L. D. Monache, S. Sperati, and G. Cervone, 2015: An Analog Ensemble for Short-Term Probabilistic Solar Power Forecast. Applied Energy, 157, 95-110, doi: 10.1016/j.apenergy.2015.08.011.

Arribas, A., K. B. Robertson, and K. R. Mylne, 2005: Test of a Poor Man's Ensemble Prediction System for Short-Range Probability Forecasting. Mon. Wea. Rev., 133, 1825-1839, doi: 10.1175/MWR2911.1.

Atger, F., 1999: The Skill of Ensemble Prediction Systems. Mon. Wea. Rev., 127, 1941-1953, doi: 10.1175/1520-0493(1999)127<1941:TSOEPS>2.0.CO;2.

Bakhshaii, A., and R. B. Stull, 2009: Deterministic Ensemble Forecasts Using Gene-Expression Programming. Wea. Forecasting, 24, 1431-1451, doi: 10.1175/2009WAF2222192.1.

Bretherton, C. S., and S. Park, 2009: A New Moist Turbulence Parameterization in the Community Atmosphere Model. J. Clim., 22, 3422-3448, doi: 10.1175/2008JCLI2556.1.

Casanova, S., and B. Ahrens, 2009: On the Weighting of Multimodel Ensembles in Seasonal and Short-Range Weather Forecasting. Mon. Wea. Rev., 137, 3811-3822, doi: 10.1175/2009MWR2893.1.

Cheung, K. K. W., 2001: A Review of Ensemble Forecasting Techniques with a Focus on Tropical Cyclone Forecasting. Meteorol. Appl., 8, 315-332, doi: 10.1017/S1350482701003073.

Cohen, A. E., S. M. Cavallo, M. C. Coniglio, and H. E. Brooks, 2015: A Review of Planetary Boundary Layer Parameterization Schemes and Their Sensitivity in Simulating Southeastern U.S. Cold Season Severe Weather Environments. Wea. Forecasting, 30, 591-612, doi: 10.1175/WAF-D-14-00105.1.

Coniglio, M. C., J. Correia, P. T. Marsh, and F. Kong, 2013: Verification of Convection-Allowing WRF Model Forecasts of the Planetary Boundary Layer Using Sounding Observations. Wea. Forecasting, 28, 842-862, doi: 10.1175/WAF-D-12-00103.1.

Donaldson, R. G., and M. Kamstra, 1996: Forecast Combining with Neural Networks. Journal of Forecasting, 15, 49-61, doi: 10.1002/(SICI)1099-131X(199601)15:1<49::AID-FOR604>3.0.CO;2-2.

Du, J., and B. Zhou, 2011: A Dynamical Performance-Ranking Method for Predicting Individual Ensemble Member Performance and Its Application to Ensemble Averaging. Mon. Wea. Rev., 139, 3284-3303, doi: 10.1175/MWR-D-10-05007.1.

Epstein, E. S., 1969: The Role of Uncertainties in Prediction. J. Appl. Meteorol., 8, 190-198, doi: 10.1175/1520-0450(1969)008<0190:TROIUI>2.0.CO;2.

Glahn, H. R., and D. A. Lowry, 1972: The Use of Model Output Statistics (MOS) in Objective Weather Forecasting. J. Appl. Meteorol., 11, 1203-1211, doi: 10.1175/1520-0450(1972)011<1203:TUOMOS>2.0.CO;2.

Grenier, H., and C. Bretherton, 2001: A Moist PBL Parameterization for Large-Scale Models and Its Application to Subtropical Cloud-Topped Marine Boundary Layers. Mon. Wea. Rev., 129, 357-377, doi: 10.1175/1520-0493(2001)129<0357:AMPPFL>2.0.CO;2.

Holt, T., and S. Raman, 1988: A Review and Comparative Evaluation of Multilevel Boundary Layer Parameterizations for First-Order Turbulent Kinetic Energy Closure Schemes. Reviews of Geophysics, 26, 761-780, doi: 10.1029/RG026i004p00761.

Holtslag, A. A. M., and B. A. Boville, 1993: Local Versus Nonlocal Boundary-Layer Diffusion in a Global Climate Model. J. Clim., 6, 1825-1842, doi: 10.1175/1520-0442(1993)006<1825:LVNBLD>2.0.CO;2.

Hong, S. Y., and H. L. Pan, 1996: Nonlocal Boundary Layer Vertical Diffusion in a Medium-Range Forecast Model. Mon. Wea. Rev., 124, 2322-2339, doi: 10.1175/1520-0493(1996)124<2322:NBLVDI>2.0.CO;2.

−−−, Y. Noh, and J. Dudhia, 2006: A New Vertical Diffusion Package with an Explicit Treatment of Entrainment Processes. Mon. Wea. Rev., 134, 2318-2341, doi: 10.1175/MWR3199.1.

Hu, X. M., J. W. Nielsen-Gammon, and F. Zhang, 2010: Evaluation of Three Planetary Boundary Layer Schemes in the WRF Model. J. Appl. Meteorol. Climatol., 49, 1831-1844, doi: 10.1175/2010JAMC2432.1.

−−−, P. M. Klein, and M. Xue, 2013: Evaluation of the Updated YSU Planetary Boundary Layer Scheme within WRF for Wind Resource and Air Quality Assessments. Journal of Geophysical Research, 118, 10490-10505, doi: 10.1002/jgrd.50823.

Jacobson, M. Z., 2005: Fundamentals of Atmospheric Modeling. Cambridge University Press, 828 pp. ISBN-13: 978-0-521-54865-6.

Janjić, Z. I., 1990: The Step-Mountain Coordinate: Physical Package. Mon. Wea. Rev., 118, 1429-1443, doi: 10.1175/1520-0493(1990)118<1429:TSMCPP>2.0.CO;2.

Jerez, S., J. P. Montavez, P. Jimenez-Guerrero, J. J. Gomez-Navarro, R. Lorente-Plazas, and E. Zorita, 2013: A Multi-Physics Ensemble of Present-Day Climate Regional Simulations over the Iberian Peninsula. Clim. Dyn., 41, 1749-1768, doi: 10.1007/s00382-012-1551-5.

Katz, R. W., and E. Martin, 2006: Bayesian Approach to Decision Making Using Ensemble Weather Forecasts. Wea. Forecasting, 21, 220-231, doi: 10.1175/WAF913.1.

Krasnopolsky, V. M., and Y. Lin, 2012: A Neural Network Nonlinear Multimodel Ensemble to Improve Precipitation Forecasts over Continental US. Advances in Meteor., 2012, 11 pp., doi: 10.1155/2012/649450.

Li, F., and G. Tkacz, 2001: Evaluating Linear and Non-Linear Time-Varying Forecast-Combination Methods. Working Papers 01-12, Bank of Canada. [Available online at http://www.bankofcanada.ca/wp-content/uploads/2010/02/wp01-12.pdf.]

Lorenz, E. N., 1963: Deterministic Nonperiodic Flow. J. Atmos. Sci., 20, 130-141, doi: 10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2.

Mellor, G. L., and T. Yamada, 1974: A Hierarchy of Turbulence Closure Models for Planetary Boundary Layers. J. Atmos. Sci., 31, 1791-1806, doi: 10.1175/1520-0469(1974)031<1791:AHOTCM>2.0.CO;2.

−−−, and −−−, 1982: Development of a Turbulence Closure Model for Geophysical Fluid Problems. Reviews of Geophysics, 20, 851-875, doi: 10.1029/RG020i004p00851.

Miao, M., and I. Russell, 2015: Access Forecast Uncertainties from a Multi-Model Post-Processing System. 49th CMOS Congress and 13th AMS Conf. on Polar Meteor. and Oceanogr., Whistler, BC, Canadian Meteorological and Oceanographic Society, 187. [Available online at http://cmos.in1touch.org/uploaded/web/congress/Files/CMOS%202015%20Abstract%20Book.pdf.]

Nakanishi, M., 2001: Improvement of the Mellor-Yamada Turbulence Closure Model Based on Large-Eddy Simulation Data. Boundary-Layer Meteorology, 99, 349-378, doi: 10.1023/A:1018915827400.

Pleim, J. E., 2007: A Combined Local and Nonlocal Closure Model for the Atmospheric Boundary Layer. Part I: Model Description and Testing. J. Appl. Meteorol. Climatol., 46, 1383-1395, doi: 10.1175/JAM2539.1.

Raftery, A. E., T. Gneiting, F. Balabdaoui, and M. Polakowski, 2005: Using Bayesian Model Averaging to Calibrate Forecast Ensembles. Mon. Wea. Rev., 133, 1155-1174, doi: 10.1175/MWR2906.1.

Shin, H. H., and S. Y. Hong, 2011: Intercomparison of Planetary Boundary-Layer Parametrizations in the WRF Model for a Single Day from CASES-99. Boundary-Layer Meteorol., 139, 261-281, doi: 10.1007/s10546-010-9583-z.

Siuta, D. M., G. L. West, T. Nipen, and R. B. Stull, 2016: Calibrated Hub-Height Wind Speed Forecasts in Complex Terrain from a Multi-Initial Condition, Multi-PBL Scheme WRF Ensemble. 23rd Conference on Probability and Statistics in the Atmospheric Sciences, New Orleans, LA, Amer. Meteor. Soc.

Skamarock, W. C., and Coauthors, 2008: A Description of the Advanced Research WRF Version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp., doi: 10.5065/D68S4MVH.

Stull, R. B., 1988: An Introduction to Boundary Layer Meteorology. Springer Netherlands, 670 pp. ISBN-13: 978-90-277-2768-8.

−−−, 1993: Review of Non-Local Mixing in Turbulent Atmospheres: Transilient Turbulence Theory. Boundary-Layer Meteorol., 62, 21-96, doi: 10.1007/BF00705546.

−−−, 2015: Practical Meteorology: An Algebra-Based Survey of Atmospheric Science. University of British Columbia, 924 pp. ISBN-13: 978-0-88865-176-1.

Sukoriansky, S., B. Galperin, and I. Staroselsky, 2005: A Quasinormal Scale Elimination Model of Turbulent Flows with Stable Stratification. Phys. Fluids, 17, 1-28, doi: 10.1063/1.2009010.

Tapiador, F. J., W. K. Tao, J. J. Shi, C. F. Angelis, M. A. Martinez, C. Marcos, A. Rodriguez, and A. Hou, 2012: A Comparison of Perturbed Initial Conditions and Multiphysics Ensembles in a Severe Weather Episode in Spain. J. Appl. Meteorol. Climatol., 51, 489-504, doi: 10.1175/JAMC-D-11-041.1.

Thompson, M. L., and W. Zucchini, 1990: Assessing the Value of Probability Forecasts. Mon. Wea. Rev., 118, 2696-2706, doi: 10.1175/1520-0493(1990)118<2696:ATVOPF>2.0.CO;2.

Toth, Z., and E. Kalnay, 1997: Ensemble Forecasting at NCEP and the Breeding Method. Mon. Wea. Rev., 125, 3297-3319, doi: 10.1175/1520-0493(1997)125<3297:EFANAT>2.0.CO;2.

Van Den Dool, H. M., and L. Rukhovets, 1994: On the Weights for an Ensemble-Averaged 6-10-Day Forecast. Wea. Forecasting, 9, 457-465, doi: 10.1175/1520-0434(1994)009<0457:OTWFAE>2.0.CO;2.

Vich, M., and R. Romero, 2010: Multiphysics Superensemble Forecast Applied to Mediterranean Heavy Precipitation Situations. Nat. Hazards Earth Syst. Sci., 10, 2371-2377, doi: 10.5194/nhess-10-2371-2010.

Wallace, J. M., and P. V. Hobbs, 2006: Atmospheric Science: An Introductory Survey. Academic Press, 504 pp. ISBN-13: 978-0-12-732951-2.

Watanabe, M., and Coauthors, 2012: Using a Multiphysics Ensemble for Exploring Diversity in Cloud-Shortwave Feedback in GCMs. J. Clim., 25, 5416-5431, doi: 10.1175/JCLI-D-11-00564.1.

Wilks, D. S., 2005: Statistical Methods in the Atmospheric Sciences. Academic Press, 648 pp. ISBN-13: 978-0-12-751966-1.
APPENDIX: LIST OF OBSERVATION STATIONS

Station                 Latitude (°)   Longitude (°)   Elevation (m)
Alert Bay               50.579722      -127            59
Ballenas Island         49.35          -124            5
Banff                   51.179722      -116            1383
Burns Lake              54.38333       -126            713
Campbell River          49.95          -125            106
Cape St. James          51.929722      -131            92
Creston                 49.08333       -117            646
Dawson Creek Airport    55.75          -120            655
Dawson Creek Auto       55.75          -120            655
Entrance Island         49.21667       -124            5
Estevan Point           49.38333       -127            7
Fort St. John           56.23333       -121            695
Grand Prairie           55.179722      -119            669
Herbert Island          50.93333       -128            17
Holland Rock            54.16667       -130            5
Kindakun Rocks          53.32          -133            14
Langara                 54.25          -133            41
Nakusp                  50.26667       -118            524
Nelson                  49.5           -117            535
Osoyoos                 49.03          -119            283
Peace River             56.229722      -117            571
Princeton               50.6           -121            700
Salmon Arm              50.7           -119            351
Sand Heads              49.1           -123            1
Sartine Island          50.82          -129            111
Saturna Island          48.78333       -123            24
Sisters Island          49.479722      -124            5
Solander Island         50.11667       -128            99
Sparwood                49.75          -115            1137
Spirit River            55.69528       -119            1015
Summerland              49.56256       -120            457
Vernon                  50.229722      -119            556
Yoho                    51.45          -116            1615
