APPLIED AUTOMATED NUMERICAL AVALANCHE FORECASTING USING ELECTRONIC WEATHER SENSOR DATA

by

PAUL DAVID CORDY
Hon. B.Sc., The University of British Columbia, 1999
B.Ed., The University of British Columbia, 2002

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (Geography)

THE UNIVERSITY OF BRITISH COLUMBIA

April 2007

© Paul David Cordy, 2007

Abstract

Numerical avalanche prediction was used for Canadian highways avalanche forecasting for ten years before changes in information technology infrastructure rendered the original numerical avalanche forecasting model incompatible and therefore obsolete. Now these efforts are being renewed with greater automation through the use of electronic weather sensor data. Use of these data presents several challenges and opportunities. Automated hourly observations generate large datasets that require systems for filtering historic and current data, as well as fitness testing routines that dynamically extract independent validation samples from serially correlated datasets. These weather sensor data manipulation systems offer several advantages over traditional avalanche prediction models that are based on manually observed weather and surface snow information. Rapid dataset generation enables spatial scaling of predictions, easy generation and testing of memory variables, model comparison, and visual verification of predicted avalanche probability time series. These features will facilitate operational implementation of avalanche forecasting models for applied computer assisted avalanche forecasting in highways avalanche control programs across British Columbia, Canada. In the winter of 2006/7, the Avalanche Forecast System (AFS) was applied in two avalanche prone transportation corridors. The AFS uses only electronic weather sensor data and incorporates all of the aforementioned capabilities.
A nearest neighbour analysis is used to generate avalanche probabilities; however, the AFS data management systems could also be made to operate with classical linear and modern non-linear statistical prediction methods. Automated filters eliminate erroneous data dynamically, permit investigation of various prediction targets (such as natural avalanche occurrences, or avalanches of different size classes), and a jackknife cross-validation routine generates fitness statistics by selecting test cases that are not temporally autocorrelated. The AFS was applied operationally in Kootenay Pass, near Salmo, BC, and at Bear Pass, near Stewart, BC, where accuracies of 76% ±2% and 71% ±2% were achieved, respectively.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Introduction
1.1 Physical processes of snow avalanche formation
1.2 Description of study sites
1.2.1 Kootenay Pass
1.2.2 Bear Pass
1.3 Applied avalanche forecasting in a transportation context
1.3.1 Information and entropy
1.4 Description of data
1.4.1 Electronic data
1.4.2 Kootenay Pass data
1.4.3 Bear Pass data
1.5 Literature review: numerical avalanche prediction models
1.5.1 Linear discriminant analysis
1.5.2 Nearest neighbours
1.5.3 Binary classification trees
1.5.4 Overfitting
1.5.5 The curse of dimensionality
1.6 Summary
2 Avalanche Forecast System (AFS)
2.1 Operational motivations
2.2 Automated data manipulation
2.2.1 Response binning
2.2.2 Response filtration
2.2.3 Normalization
2.2.4 Memory variables
2.2.5 Dynamic filtration of erroneous data
2.2.6 Variable selection
2.3 Prediction algorithms
2.3.1 Nearest neighbour algorithm
2.3.2 Threshold sum model
2.4 Bayesian updating
2.5 Fitness metrics
2.6 Autocorrelation window masking jackknife cross-validation
2.7 AFS Summary
3 Results
3.1 Normalization of input variables
3.2 Model performance time series
3.3 Sensitivity testing of AFS parameters
3.3.1 Cross-validation parameters
3.3.2 Observation and forecast periods
3.3.3 Nearest Neighbour parameters
3.3.4 Memory variables
3.3.5 Missing Variables
3.3.6 Occurrence filters
3.4 Results summary
4 Conclusions
5 Future directions
5.1 True forecasts using numerical weather prediction output
5.2 Avalanche activity index
5.3 Algorithm choice
5.4 Integration of variables from different weather stations
5.5 Time series visualization
5.6 Relation of AFS output to the hazard scale
5.7 Dynamic automated algorithm selection
5.8 Memory variable generators
5.9 Previous interval recall
5.10 Expanded filtration options
5.11 User activity logging system
5.12 Java Interfaces for importing datasets and algorithms
5.13 Spatial AFS display
5.14 Future directions summary
6 References

List of Tables

Table 1: Comparison of variables used in previous studies
Table 2: Contingency table for comparison of forecasts and observations
Table 3: Contingency table as above, using plain language
Table 4: Fitness statistics for the optimal models at both sites
Table 5: Raw prediction results for both sites
Table 6: Variables used in the analysis at both field sites
Table 7: Effect of transformation of precipitation variables
Table 8: Optimal parameter settings for models at both study sites

List of Figures

Figure 1: Map showing locations of study sites
Figure 2: Illustration of the curse of dimensionality for nearest neighbour analysis
Figure 3: Kootenay Pass avalanche prediction time series, winter 98-99
Figure 4: Kootenay Pass avalanche prediction time series, Nov-Dec 98
Figure 5: Kootenay Pass avalanche prediction time series, Jan-Feb 99
Figure 6: Kootenay Pass avalanche prediction time series, Mar-Apr 99
Figure 7: Bear Pass avalanche prediction time series, winter 03-04
Figure 8: Bear Pass avalanche prediction time series, Nov-Dec 03
Figure 9: Bear Pass avalanche prediction time series, Jan-Feb 04
Figure 10: Bear Pass avalanche prediction time series, Mar-Apr 04
Figure 11: Fitness plots for variations in cross-validation mask at Kootenay Pass
Figure 12: Fitness plots for variations in cross-validation period at Kootenay Pass
Figure 13: Fitness plots for variations in cross-validation period at Bear Pass
Figure 14: Fitness plots for various observation periods
Figure 15: Fitness plots for various forecast periods
Figure 16: Fitness plots for variations in the threshold
Figure 17: Fitness plot for k, the number of nearest neighbours at Kootenay Pass
Figure 18: Fitness plot for k, the number of nearest neighbours at Bear Pass
Figure 19: Fitness plots for variations in precipitation lag period at Kootenay Pass
Figure 20: Fitness plot for variation of the precipitation lag period at Bear Pass
Figure 21: Fitness plots for variation of the wind speed lag variable at Bear Pass
Figure 22: Fitness of various models where variables were omitted at Kootenay Pass
Figure 23: Fitness values of models using the optimal variable set at Kootenay Pass
Figure 24: Fitness of various models where sensor data were omitted at Bear Pass
Figure 25: Fitness values for models using the optimal variable set at Bear Pass
Figure 26: Fitness values for various targets at Kootenay Pass
Figure 27: Fitness values for various targets at Bear Pass
Figure 28: Model fitness for select sub-regions of Bear Pass

Acknowledgements

This thesis was made possible by the guidance and assistance of Dr. David McClung of the UBC Avalanche Research Group, Dr. Roland Stull of UBC Earth and Ocean Science, as well as Ted Weick, John Tweedy, and Kevin Maloney of the British Columbia Ministry of Transportation. CJ Hawkins worked far in excess of the hours he was paid to code the Avalanche Forecast System and make improvements throughout the winter at my request. The patient editorial help of Nikkie Cordy and John Tweedy was indispensable. John Tweedy generously included me in all aspects of the Kootenay Pass avalanche control and forecasting operation, and this greatly enriched my understanding of the operational issues in highways avalanche forecasting. Stagleap Provincial Park provided an excellent area in which to contemplate snow and avalanche issues on skis. This work has been made possible by generous contributions from Canadian Mountain Holidays, the Natural Sciences and Engineering Research Council, and the British Columbia Ministry of Transportation.
For Mom and Dad,
Thank you for always supporting me, even though I tread a winding path.

1 Introduction

The first operational numerical avalanche prediction model in Canada was used in Kootenay Pass between 1993 and 1998 (McClung and Tweedy 1994). Avalanche technicians there found it to be useful; the model predicted avalanche size and moisture class, and recalled information about avalanche activity during similar weather conditions in the past. Eventually, changes in the information technology infrastructure at the British Columbia Ministry of Transportation Avalanche and Weather Program (hereafter: MoT) rendered the original avalanche forecasting model (AFM) unusable. Given that the effects of changes in explosive avalanche control methods on the model's accuracy were unknown, MoT officials could not justify re-coding the AFM, nor could they support multiple database platforms just to maintain it. This study addresses all of the above issues. The new avalanche forecast system (AFS) is a generalized automated data manipulation system that gives the user greater control over the input variables and prediction targets. Furthermore, the AFS code is written in Java (which is widely compatible). It can be used in conjunction with any type of database, and it uses electronic weather sensor data. This means that the AFS can easily be adapted to new areas and to future changes in information technology. This study also shows that reasonably accurate numerical forecasting is possible even in areas where extensive remote explosive avalanche control is used. The avalanche technicians at Kootenay Pass consider the AFS to be a useful tool, as it supports their decision process and provides information about avalanche occurrences on past occasions that are similar with respect to weather conditions.
1.1 Physical processes of snow avalanche formation

There are three main modes of avalanche formation that are of interest in the context of highways avalanche forecasting: wet, moist, and dry slab avalanches. Dry slab avalanches result when a well bonded surface layer of snow, often consolidated by wind, solar, or other effects, rests on a softer weak layer. Imperfections in the underlying weak layer concentrate stresses that can propagate underneath the slab if triggered, resulting in a tensile crown fracture on the uphill margin of the slab. This releases the slab, which can entrain more snow as it careens down the slope (McClung and Schaerer 2006). Moist avalanches are produced when the snowpack warms to 0°C; failure is caused by loss of internal cohesion of the snow. Wet avalanches are similar in that they occur in response to warming and high humidity, and therefore the two have been lumped together as a single avalanche type in previous numerical prediction efforts (McClung and Tweedy 1994, Floyer and McClung 2004). Though the potential for avalanches can span a large proportion of a season, they occur only during a few hours of each winter. Their sudden initiation results from a complex interaction among many meteorological, terrain, and snowpack structure effects that act on scales of several orders of magnitude (Haegli and McClung 2000, McClung 2002). In a transportation context, clearing of avalanche debris and the potential destruction caused by avalanches are more persistent effects of this intermittent phenomenon. Periods of avalanche activity are termed "cycles", and these are closely linked with synoptic cycles. A cycle in a transportation context begins when the forecasting technician posts a moderate hazard and lasts until the hazard rating is dropped to low (John Tweedy, personal communication 2007). See the discussion section for more on the relations among instability, numerical forecast probabilities, and highways avalanche hazard ratings.
1.2 Description of study sites

The new Avalanche Forecasting System (hereafter: AFS) was applied to two distinct and widely separated transportation corridors: Kootenay Pass on Highway #3 in southeast British Columbia, and Bear Pass on Highway #37A near Stewart, BC. Both areas are subject to high snowfall and frequent avalanches that pose a hazard to traffic on the road. Therefore the British Columbia Ministry of Transportation (hereafter: MoT) maintains active avalanche forecasting and control programs at each site.

Figure 1: Map showing locations of study sites

1.2.1 Kootenay Pass

Kootenay Pass is located on the Crowsnest Highway between Salmo and Creston in the Selkirk Mountains. It was the site of the first operational numerical avalanche forecast model in Canada (McClung and Tweedy 1994). Kootenay Pass has a transitional snow climate (Haegli 2004), meaning that its average temperatures, snowfall, and new snow densities are intermediate between those of maritime and continental climates. Unlike maritime climates, transitional snow climates produce widespread and spatially variable persistent weak layers such as surface hoar (Jamieson 1995). These are angular, planar, feathery crystals that are deposited directly on the snow surface from clouds as a frozen analog of dew. The high rate of growth of these crystals makes them unstable; they are strong in compression (thereby able to support a load up to a certain threshold) and weak in shear (and therefore prone to sudden catastrophic failure under increased loads or stresses). Avalanche activity at Kootenay Pass can alternate between moist and dry avalanches at any time of the season, thus adding complexity to the problem of numerical forecasting there. There are on average 300-400 avalanches per season, the average annual snowfall is 908 cm/year, and the average climax snow depth is 282 cm (K.J.
Maloney, personal communication 2007) at the snow study plot at 1774 metres above sea level.

The highway through Kootenay Pass cuts across the lower tracks of numerous avalanche paths, so that even small avalanches can potentially affect the road. Despite the relatively small elevation differences between the highway and avalanche start zones (500-600 metres), Kootenay Pass has produced size four avalanches, as measured according to the Canadian avalanche size classification scheme (Canadian Avalanche Association 1995). More recently, however, the ability to carry out explosive control remotely has greatly reduced the potential for such large avalanches. There has been a long and varied history of avalanche control and meteorological measurement technology at Kootenay Pass, and this is manifest in the historic data as changes in the character of avalanche activity and weather observations. Originally, explosive avalanche control was done using "Avalaunchers" (Brennan et al. 2006), but this was highly inaccurate due to the slow speed of the projectile and thus the high influence of wind. Later, a 105 mm recoilless rifle was used, but this created new problems, as repeated impacts on the same targets caused an increased rockfall hazard (J. Tweedy, personal communication 2007). The persistent nature of avalanche risk along the highway and the discrete nature of most of the regularly performing avalanche paths eventually prompted the installation of a series of 21 fixed Gas-Ex exploders that can be fired remotely in any weather at any time of day or night during dry and moist avalanche periods. This has had a remarkable impact on the frequency, character, and most importantly, the size of avalanches produced at Kootenay Pass. Avalanche technicians are no longer constrained by visibility and daylight, and avalanche control therefore proceeds whenever there is potential instability.
Technicians can now manage the snow accumulations and instabilities in avalanche paths so that they release more frequently, producing smaller avalanche deposits on the road. As a result, highway closure times have been reduced by half (John Tweedy, personal communication 2007) and there are far fewer avalanches of size 3.5 or greater. The ability to perform 24-hour avalanche risk mitigation is the main operational characteristic that differentiates Kootenay and Bear Passes.

1.2.2 Bear Pass

Bear Pass is located between Stewart and Meziadin Junction on the northwest coast of British Columbia. The average annual snowfall rate is 805 cm/year with an average climax snowpack height of 172 cm measured at the road level. One can expect significantly greater accumulations in the start zones high above the highway, and on average 500-600 avalanches in a normal season. There are also considerable physical differences between the two sites. Bear Pass has a maritime climate (Floyer and McClung 2003), and elevation differences between the highway and start zones of up to 1900 metres. This means that there can be very significant differences between meteorological conditions at the weather plot and in the start zones, and it vastly increases the potential for extremely large avalanches due to high snowfall rates and significant entrainment of material as avalanches proceed from start zone to valley floor. Limited avalanche control is possible in inclement weather for some frequently problematic paths by using a fixed (rotating) Howitzer cannon located near the summit of the pass. Otherwise, helicopters are the only way to deliver explosives to the start zones, and therefore the road must be closed more often and for longer periods during persistent storms. This allows unstable deposits to accumulate, resulting in frequent large avalanches (size 3.5 and greater) that are triggered naturally, or by helicopter bombing when skies clear sufficiently.
1.3 Applied avalanche forecasting in a transportation context

Highways avalanche technicians make predictions of current and future snow instability in space and time with respect to natural and explosive triggers. The goal of avalanche forecasting in a highways context is to minimize three sources of uncertainty about the state of instability of the snow cover: spatial and temporal variability of the snow cover, incremental changes from snow and weather conditions, and the variability of human perception (McClung 2002a). This recursive updating of the human perception of instability and the associated risk to road traffic is a process that continues through every day and night of the snow season. It is essential to maintaining safe operation of the road. The frequency of information updating varies in concert with changes in weather and recent avalanche activity. Avalanche technicians minimize uncertainty with respect to incremental changes in snow and weather conditions by keeping abreast of current and forecasted weather and snow conditions across the forecast region, and by clearing unstable portions of the snow cover using explosives while the road is temporarily closed. Avalanches that hit the road while it is open can result in loss of life and damage to vehicles or infrastructure. The consequence of excessive conservatism is tied to the high economic cost of highway closure. Closures lasting longer than 2 hours incur losses of $45,000/hour for Kootenay Pass, and $250,000/hour for the Trans-Canada Highway (John Tweedy, personal communication 2007). Though they can only partially represent reality, computer assisted avalanche forecasting programs can assist in providing access to relevant historic information and avalanche probability estimates that are as free as possible from human bias (McClung 2002a). This is the primary motivation for the work presented here.
Avalanche forecasting is an evolutionary process that draws on information about the state of instability of the snow cover that is accumulated over time (McClung 2000). The human inductive reasoning process begins with the first snowfall, and forecasts are continually revised as the snowpack changes and more information is collected. In the forecaster's mind, old forecasts provide prior information: this is the perception of the state of instability before considering new information. The posterior probability is the state of future instability, which is sought by combining the prior with the current or future likelihood of avalanches given new data such as sudden intense warming, statistical prediction model output, or relevant new evidence of instability. Bayesian revision provides a handy approximation of this process of dynamic integration of information about instability (Pearl 1988), and Bayes' rule (posterior probability ∝ prior probability × likelihood) provides a convenient mathematical framework for combining avalanche probabilities from different information sources (McClung and Tweedy 1994).

1.3.1 Information and entropy

There are two types of information used in avalanche forecasting: singular and distributional data (McClung 2002b). Singular or case information includes information specific to the case in question and includes, but is not limited to, recent avalanche occurrences and signs of instability such as shooting cracks. Distributional data consist of information about similar situations in the past, and as such provide the basis of numerical forecasting. One of the greatest benefits of the nearest neighbour algorithm is the way in which it reveals distributional data to the user. Both information types implicitly include information about terrain, trigger level, and incremental changes in snow and weather conditions.
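As a minimal illustration of the Bayesian revision described above (a generic sketch, not the AFS or the McClung and Tweedy implementation; the function name and numbers are hypothetical), the proportional form of Bayes' rule can be normalized over the two possible outcomes, avalanche and non-avalanche:

```python
def bayes_revision(prior, likelihood_av, likelihood_no_av):
    """Normalize Bayes' rule (posterior ~ prior x likelihood) over the
    two outcomes: avalanche day and non-avalanche day."""
    posterior_av = prior * likelihood_av
    posterior_no = (1.0 - prior) * likelihood_no_av
    return posterior_av / (posterior_av + posterior_no)

# e.g. a prior of 0.3 revised by new evidence that is four times more
# likely on avalanche days than on non-avalanche days
p = bayes_revision(0.3, 0.8, 0.2)  # 0.24 / 0.38, about 0.63
```

Evidence that is more likely under instability raises the probability above the prior; evidence that is equally likely under both outcomes leaves the prior unchanged.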
Using Bayes' rule, the likelihood calculated from distributional data is combined with a prior probability that is input by the forecaster according to singular information, such as recent explosive control work or snowpack structure information that cannot be included in the numerical model (McClung and Tweedy 1994). It is important to include both data types; distributional data help to reduce human bias, whereas singular data help to avoid errors and boost the accuracy of forecasts (McClung 2002b). Data that are relevant to avalanche prediction are divided into three classes by LaChapelle (1980) and McClung and Schaerer (2006). These classes are numbered in order of decreasing informational entropy, which is defined as the relevance and ease of interpretation of a given piece of evidence. Informational entropy is therefore related to the human perception of instability. Class III (high entropy) data include meteorological measurements as well as snow surface information. In Bear Pass and Kootenay Pass these are taken hourly by automated electronic weather stations at various locations within each forecast region. These data lend themselves well to numerical prediction, as they are mostly numerical in nature (as opposed to nominal, ordinal, or categorical) and they are taken at regular intervals. Although weather strongly influences avalanche formation, Class III data can be difficult to interpret directly. For instance, wind contributes to the formation of slabs that propagate cracks easily, but this effect is highly spatially variable. It may require interaction with other Class III factors to produce avalanches, and the snow associated with a slab forming wind event may be deposited several days before being triggered. Furthermore, it is difficult to distinguish interaction effects from inter-correlation among Class III variables without the aid of computers.
By contrast, Class I data are the most immediately relevant and easiest to interpret. If one observes shooting cracks or natural avalanches, one can state with reasonable certainty that there is a high level of instability on that day. Class II data are intermediate in terms of informational entropy, and include measurements taken below the snow surface such as layer densities, thicknesses, crystal forms, and temperature gradients. The processes that one hopes to represent by these measurements are highly spatially variable, as they are strongly affected by terrain features, elevation, and microclimates. In Kootenay Pass, a single set of Class III inputs (from a manual weather station and a ridgetop anemometer) was used to predict avalanches on the scale of the entire pass twice daily (McClung and Tweedy 1994). Since the model was reasonably accurate, we can assume that this is an appropriate scale for prediction using these variables. Also, a single manual station powered a prediction model in the much larger Bear Pass with similar accuracy. McClung and Floyer (2003) showed that spatial down-scaling of avalanche predictions is possible, and accuracy was improved when Bear Pass was divided into subgroups of avalanche paths that share a common response to meteorological forcing. The north aspect and south aspect sub-regions extend across the entire forecast region, but some sub-regions are much smaller and more discrete.

1.4 Description of data

1.4.1 Electronic data

Use of electronic data presents new challenges and offers new opportunities. No longer is the forecaster forced to manually enter information in order to obtain a prediction. As most of these data are now taken by remote sensors, the AFS was designed to automatically query this information and input it into the prediction algorithm.
The data are hourly records of each variable, and each variable has an associated "confidence value" that indicates whether or not that particular sensor is or was providing quality data at the time of observation.

There are several sites in each pass where electronic sensors take current weather information, and each includes different sensor configurations with different data. In the pilot phase of the AFS, which is presented here, it was expedient to use the station with the greatest number of variables, but the next generation of AFS will allow the user to sample information from the most relevant sites. Comparison of the predictive accuracy of different combinations of variables from different sensors can help support and guide these decisions. Among the most significant variables used in the original Kootenay Pass avalanche forecast model were ram penetration and new snow density. In Bear Pass, foot penetration was among the optimal variables used in the discriminant functions. Ram penetration values are taken using a ram penetrometer: a large weighted tube with a cone shaped tip that the user holds and releases above the snow (McClung and Schaerer 2006). Foot penetration is taken by stepping into the new snow, placing all of the observer's weight on that foot and measuring the depth of the resulting hole in the snow. Both ram and foot penetrations are a function of new snow density. These measures are not generated by numerical weather forecasts, so Roeger (2001) developed an exponential function for Kootenay Pass that approximates new snow density based on the present temperature and the temperature 12 hours previous. That study found that the relationship was optimal when the current air temperature was averaged with the observation taken 12 hours previous in a 2:1 ratio: T* = (2T1 + T2)/3, where T* is the time averaged temperature, T1 is the present temperature, and T2 is the temperature recorded roughly 12 hours ago, all measured in °C.
This averaged temperature was then entered into the following power function: ρ = b(m^(T*/T0) + c), where b = 130 kg/m³, T0 = 1°C, m = 1.2, and c = 30 (m and c are dimensionless). Though these relationships were originally intended to replace important manually observed variables when applying numerical weather prediction inputs to the original avalanche forecasting model, they could also be used with current electronic weather sensor data. Both master datasets were pre-filtered manually to remove data from summer months and erroneous data.

1.4.2 Kootenay Pass data

The AFS currently takes data which are collected by two weather stations in Kootenay Pass. The historical data record consists of six of the winters (November to April) from 1997 to 2004. An anemometer situated at the summit of Stagleap Mountain provides wind speed and direction, and all other variables are derived from sensors at the weather plot near the highway at the summit of the pass (1774 metres). These sensors include a precipitation gauge, a sonic ranging snow height sensor, and a digital thermometer. The next generation of AFS will allow any combination of variables from any weather station. Presently, the data from both stations are combined into one master dataset whose variables are given below. Note that the standard observations are data which are collected at 0600 and 1800 hrs as defined by the MoT.
o Present temperature
o Maximum and minimum temperature since the last standard observation
o New snowfall since the last standard observation
o Snowpack height: total snow accumulation over the entire season so far
o Hourly precipitation
o Hourly snowfall
o New precipitation: water equivalent of new snow and rain since the last standard observation
o Mean wind speed (calculated by wind velocity vector summation over the last hour)
o Mean wind direction (calculated by wind vector summation over the last hour)
o Maximum gust: the highest instantaneous wind speed in the last hour of observation
o Standard deviation of the wind direction, which is a measure of the variability of the wind direction

The data are collected and reported hourly through the Ministry's Snow Avalanche and Weather System (SAWS). Data for the AFS were obtained through web based tools running queries against the SAWS database. The signal of the sonic ranging snow height sensor in the weather plot can penetrate through low density snow, and therefore some new snowfall is under-reported. The precipitation gauge is much more accurate, and this provides a reliable measure of the water equivalent of newly fallen snow.

1.4.3 Bear Pass data

All information at Bear Pass comes from the Endgoal Station weather plot at 410 metres in the forest on the east end of the pass. Electronic weather records from this site span four winters from November to April, 2001 through 2005. The variables available in the master data are the same as listed above for Kootenay Pass. Unlike at Kootenay Pass, the anemometer is located in the weather plot along with the rain gauge, snow height sensor, and thermometer.

1.5 Literature review: numerical avalanche prediction models

Considerable attention has been given to the problem of numerical avalanche forecasting, using various linear discriminant (McClung and Tweedy 1994), physical dynamical (Durand et al.
1999), and nonlinear classification (Davis et al. 1999, Purves et al. 2003) techniques. Most of these studies make use of current information regarding weather and properties of the surface snow layers. In practice, this information is often replete with missing values, measurement error, and mixtures of categorical, ordinal, and numerical variables. Studies differ in their methods for data preprocessing, such as interpolation, deletion or replacement of erroneous values, binning of avalanche occurrences into response periods, variable selection, variable transformations, and generation of memory variables. Generally, preprocessing is a procedure that remains external to the numerical model, so users are constrained by fixed datasets unless they have the technical skills to create their own. The AFS attempts to lift this constraint.

Weather and snow measurements taken in snow study plots may not closely represent conditions in the avalanche start zones, and historical records often contain misclassification errors in the occurrence data. Misclassifications can arise in many ways. For example, storms and low visibility may preclude observations of avalanches until after storms have passed, and conditions that normally would result in periods of high avalanche activity are sometimes quiescent because recent natural or explosively triggered avalanches have already cleared out most potential instabilities. All of these factors, together with the residual error associated with prediction of systems that are linked to chaotic processes like weather (McClung 2000), conspire to frustrate attempts at achieving prediction accuracy values greater than 75-80%. High misclassification rates suggest that performance might be improved by the use of robust loss criteria, because squared error loss can exaggerate the effects of gross misclassification and exponentially reward fortuitous classification of non-events (Hastie et al. 2001).
Two types of numerical model are discussed here: linear and non-linear. They differ in terms of their structure, preprocessing requirements, assumptions, limitations, and interpretability. Linear methods, which extract "global" trends in data, are superior when datasets are small with a high signal-to-noise ratio and there is a simple direct relationship between predictors and response. Non-linear methods, which tend to extract more "local" variation within the dataset, are more flexible, require fewer assumptions (such as constraints of multivariate normality), and are able to detect important non-linear behaviour in the data (such as thresholds or patchiness). The consequence of this ability to detect local variation is a greater propensity for over-fitting, and this manifests itself in many ways. Generally, nonlinear models trained on small datasets tend to be inferior to their linear counterparts when applied to the same data (Hastie et al. 2001).

1.5.1 Linear discriminant analysis

Linear analytical methods are constrained by the assumption that all features, whether these are raw variables or linear combinations thereof, are independent and uncorrelated (or "orthogonal" in statistical parlance). In linear discriminant analysis (LDA) and canonical discriminant analysis (CDA), correlations among features are minimized to encode variable interactions in a system of linear equations solved by eigen analysis. The previous LDA model at Kootenay Pass (McClung and Tweedy 1994) used a two-stage analysis, where the first stage classified the forecast period as a dry or moist/wet avalanche period, and the second stage used the optimal LDA model (i.e. the optimal variables and eigenvalues) to predict the type of avalanche chosen in the first stage. McClung and Tweedy (1994) reported overall accuracy of 77% for their model; dry avalanche prediction accuracy was 79% and moist/wet avalanche prediction accuracy was 72%.
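As an illustration of the general technique, and not of the McClung and Tweedy (1994) two-stage model, a minimal Fisher linear discriminant on invented two-predictor data might look like this:

```python
# Fisher linear discriminant on toy avalanche / non-avalanche data.
# A generic sketch of LDA, not the McClung and Tweedy (1994) model;
# all numbers are invented for illustration.
import numpy as np

# Predictors: [new snow (cm), present temperature (C)]
non_av = np.array([[2.0, -12.0], [4.0, -9.0], [3.0, -11.0], [5.0, -10.0]])
av = np.array([[20.0, -4.0], [25.0, -2.0], [22.0, -5.0], [18.0, -3.0]])

m0, m1 = non_av.mean(axis=0), av.mean(axis=0)
# Pooled within-class covariance (the equal-covariance / normality
# assumption that constrains linear methods, as noted above).
pooled = (np.cov(non_av, rowvar=False) + np.cov(av, rowvar=False)) / 2.0
w = np.linalg.solve(pooled, m1 - m0)   # discriminant direction
c = w @ (m0 + m1) / 2.0                # midpoint decision threshold

def classify(x):
    """1 = avalanche class, 0 = non-avalanche class."""
    return int(w @ np.asarray(x) > c)

print(classify([21.0, -3.0]))   # 1: heavy new snow, warm
print(classify([3.0, -11.0]))   # 0: little snow, cold
```

A probabilistic output, as in the operational model, would follow from evaluating class-conditional Gaussian densities along `w` rather than thresholding.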
Floyer and McClung (2003) achieved overall accuracy of 72% at Bear Pass using CDA, and greater accuracy resulted when Bear Pass was subdivided into sub-regions and separate models were fit to each sub-region. Since CDA and LDA are part of the same family of linear discriminant techniques, LDA will be used hereafter to refer to both.

1.5.2 Nearest neighbours

McClung's linear model gave probabilistic avalanche predictions, suggested the relative importance of variables, and was accompanied by a nearest neighbour analysis. This was done to generate distributional data regarding avalanche activity on similar days. The nearest neighbours algorithm chooses days in the historical database that are geometrically nearest to the forecast day in an n-dimensional predictor space where each dimension corresponds to a different input variable (such as new snow height, temperature, total snow depth, wind speed, etc.). Nearest neighbour distance is sometimes measured with the Euclidean distance metric (Buser et al. 1983). As in LDA, the data are normalized to equalize the ranges of each variable and centre their variance around zero. In the "Cornice" model, the variable weights that reflect the relative importance of variables are chosen by genetic algorithm (Purves 2002). Friedman (1994) describes a flexible metric for improving the accuracy of the nearest neighbour analysis in which the neighbourhood is stretched along the predictor dimensions that have greater significance in prediction and shrunk along the dimensions (variables) that have less predictive power. This accounts for the possibility that the structure in the predictor space may actually lie in a lower dimensional manifold within the predictor space while preserving the majority of the variance of the system. This perhaps explains the accuracy improvements made by the genetic-algorithm-optimized weights used by Purves (2002).
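A minimal sketch of a weighted-Euclidean nearest neighbour forecast in this spirit (toy data and weights invented; not the Cornice or AFS code):

```python
# Weighted Euclidean nearest-neighbour forecast. The weights stretch or
# shrink each predictor dimension; the avalanche probability is the
# fraction of the k nearest historic days that were avalanche days.
import math

def weighted_distance(a, b, weights):
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def knn_avalanche_probability(history, labels, query, weights, k):
    dists = sorted(
        (weighted_distance(row, query, weights), lab)
        for row, lab in zip(history, labels)
    )
    nearest = [lab for _, lab in dists[:k]]
    return sum(nearest) / k

# Toy normalized data: [new snow, wind speed]; label 1 = avalanche day
history = [(0.1, 0.2), (0.2, 0.1), (1.5, 1.2), (1.8, 1.0), (1.6, 1.4), (0.0, 0.3)]
labels = [0, 0, 1, 1, 1, 0]
p = knn_avalanche_probability(history, labels, (1.4, 1.1), weights=(1.0, 0.5), k=3)
print(p)  # 1.0: all three nearest days were avalanche days
```

Down-weighting wind speed here (0.5 versus 1.0) is exactly the kind of dimension-shrinking that a genetic algorithm could tune automatically.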
A genetic algorithm is a stochastic optimization technique based on the principle of evolution. The weights of the nearest neighbour algorithm are treated as "genes." The genetic algorithm generates many individuals with different weights (genes) and chooses a few that are the best predictors of avalanches according to a selected fitness metric. These chosen individuals are replicated with mutation and mixing of genes to create a new generation, and the process is repeated until the model reaches a maximum in predictive fitness or a predetermined number of generations. Accuracy of the genetic-algorithm-optimized nearest neighbour analysis of avalanches in Bear Pass was shown to be only 1% less than that achieved using canonical discriminant analysis (Floyer and McClung 2003). Another factor that might explain the relative success of Cornice is that it accounts for trends in the input variables by choosing nearest neighbour days not only based on similarity to the forecast day but also according to similarity of the 3 days before the forecast day. The contribution of each previous day to the nearest neighbour distance is inversely weighted by time, with the weight decreasing by half for each successive period (Purves 2002). This method requires infilling the historic data to ensure that one can always access three previous periods for each historic datum. Instead of the genetic-algorithm-optimized Euclidean distance used in Cornice, McClung and Tweedy (1994) and Floyer and McClung (2003) used the Mahalanobis distance metric (which makes use of the data covariance matrix) to account for inter-correlations among the variables. They also used a logarithmic transform to remove positive skewness from the data and satisfy the normality assumptions that are necessary for linear methods. One advantage over LDA is that nearest neighbour analysis offers the forecaster distributional data.
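The Mahalanobis metric itself is compact to express; the following is a generic numpy sketch on invented toy data, not the code of the cited studies:

```python
# Mahalanobis distance between a forecast day and a historic day,
# using the inverse covariance of the historic data to account for
# inter-correlated predictors. Toy data invented for illustration.
import numpy as np

def mahalanobis(x, y, cov_inv):
    d = np.asarray(x) - np.asarray(y)
    return float(np.sqrt(d @ cov_inv @ d))

# Toy historic data: two correlated predictors, e.g. new snow depth
# and water equivalent of new precipitation.
hist = np.array([[1.0, 0.9], [2.0, 2.1], [3.0, 2.8], [4.0, 4.2], [5.0, 4.9]])
cov_inv = np.linalg.inv(np.cov(hist, rowvar=False))

d = mahalanobis([2.5, 2.4], [3.0, 2.8], cov_inv)
print(d > 0.0)  # True
```

Because the covariance matrix rescales correlated directions, a pair of days that differ only along a strongly correlated predictor pair is counted as closer than the raw Euclidean distance would suggest.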
The nearest neighbour summary shows all weather and avalanche occurrence data for the historic periods that were similar to the current conditions. This can supplement the seasoned forecaster's memory of distributional data, and provides a tool that new personnel can use to more rapidly develop an intuitive sense of the relation between weather and avalanches. McClung and Tweedy (1994) showed that a conditional probability of avalanches could be generated by the ratio of neighbours that are associated with avalanches to the total number of neighbours. The threshold number of nearest neighbours that indicated an avalanche period was set to 6, and the total number of nearest neighbours (k) used in each prediction was k = 30. Accuracy of this model, using the Mahalanobis distance metric instead of a weighted Euclidean metric as in Purves (2002), was within 3% of the LDA accuracy. Floyer (2001) gives an excellent analysis of the sensitivity of the nearest neighbour method to changes in k, threshold, and the number of variables used. He showed that for unequal numbers of avalanche and non-avalanche days the optimal values were k = 30 and threshold = 7. Floyer also showed that the classification rate declined steeply when more than 7 variables were used in the analysis, but Purves (2002) managed 81% unweighted average accuracy using 10 variables, k = 10, and threshold = 3.

1.5.3 Binary classification trees

Classification and regression trees (CART) have been used for classifying snow profiles into stable and unstable classes (Schweizer and Jamieson 2003), and for avalanche prediction based on meteorological conditions (Davis et al. 1999, Kronholm et al. 2006) with varying degrees of success.
There is good reason to be enthusiastic about tree-based classification models: they are easily interpreted; they highlight variable interactions; they are robust to missing data, outliers, misclassification errors, and deviations from statistical normality; and they can be trained on a mixture of numerical, categorical, and ordinal variables. However, on their own they can be misleading if they are not tested on an independent sample, and they are generally limited in terms of accuracy. Kronholm et al. (2006) provide a notable exception to this, as the misclassification rate for avalanches along a 10 km stretch of highway in Norway was 15%. The simplicity of the optimal classification tree model could allow it to be applied to a remote weather sensor network when optimized separately for each station and the surrounding avalanche paths. CART (Breiman et al. 1984) is a recursive binary partitioning algorithm that is parameterized using a greedy stepwise optimization routine based on squared error loss (regression) or misclassification rate (classification). During optimization, each partition of the data is defined by a single split value of one predictor. The split is chosen for maximal separation of the data into their target classes, or to maximize the difference between the mean values of the resulting sub-regions, or "nodes". These sub-regions are then partitioned repeatedly in the same way until some stopping rule is satisfied (Denison and Malik 2002). The composition of these terminal nodes governs classification. During operational use, the dominant class of the terminal node corresponding to the current weather conditions determines the classification of the period in question.
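The greedy single-variable split at the heart of CART can be sketched as follows (toy data invented for illustration):

```python
# One greedy CART split: scan every candidate threshold on every
# predictor and keep the split with the fewest misclassifications
# when each resulting node predicts its majority class.

def best_split(rows, labels):
    """rows: list of predictor tuples; labels: 0/1.
    Returns (variable index, threshold, misclassification count)."""
    best = None
    for j in range(len(rows[0])):
        for threshold in sorted({r[j] for r in rows}):
            left = [lab for r, lab in zip(rows, labels) if r[j] <= threshold]
            right = [lab for r, lab in zip(rows, labels) if r[j] > threshold]
            # Each node predicts its majority class; count the errors.
            errors = sum(
                min(part.count(0), part.count(1)) for part in (left, right) if part
            )
            if best is None or errors < best[2]:
                best = (j, threshold, errors)
    return best

# Toy data: [daily snowfall (cm), wind speed (km/h)]; 1 = avalanche period
rows = [(5, 10), (10, 20), (30, 40), (40, 30), (8, 15), (28, 50)]
labels = [0, 0, 1, 1, 0, 1]
print(best_split(rows, labels))  # (0, 10, 0): split snowfall at 10 cm
```

The full algorithm applies this step recursively to each resulting node until a stopping rule is met; the split thresholds it produces are what make the trees directly interpretable as physical thresholds.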
CART is known for its ease of use and interpretability, as partitions made on single variables approximate major thresholds that contribute to avalanche formation, such as rates of daily snowfall exceeding 25 cm or differences in hardness between snow layers exceeding 2 hand-hardness levels. Linear methods use eigen-analysis to seek orthogonal structure, thereby extracting the portion of the total variability of the system that is accounted for by each individual variable (Hastie et al. 2001), whereas CART exploits variable interactions through increasingly local partitioning of the predictor space using different variables and thresholds.

1.5.4 Overfitting

Unfortunately, CART is better known in the statistical community as a weak classifier with a strong tendency for over-fitting. Schweizer and Jamieson (2003) report classification accuracy values ranging from 50-70% when applying CART to snow profile characteristics of "stable" and "unstable" profiles; these values were obtained by leave-one-out cross-validation and no independent test sample was used. Davis et al. (1999) trained CART on avalanche occurrence, avalanche size, and crown height datasets gathered from 19 winter seasons of data. They grew very large trees (65 and 74 terminal nodes) using the larger avalanche occurrence dataset, and achieved 35% and 43% misclassification rates respectively. Much to their surprise, smaller trees (13 and 17 terminal nodes) trained on the smaller crown height and avalanche size datasets yielded significantly lower misclassification rates, 24% and 28% respectively, even though the training dataset was smaller. In their conclusions they mention the instability of their models' optimal split parameters when training on slightly different subsets of the same data, and puzzle at the poor accuracy of larger trees: "Sensitive dependence on initial conditions is the classic definition of chaos.
Chaotic behaviour would account for the failure of the complete, much larger learning sample to produce decision trees of greater overall accuracy." (Davis et al. 1999) While it can certainly be argued that chaos is at work in snow dynamics, there may be a simpler explanation for the failure of trees trained on the larger dataset: the trees grown on these datasets were simply too large, and thus badly over-fit. Due to their inflexibility, over-fitting is not a concern when using linear analytical methods, and therefore they are often superior when trained on smaller datasets (Hastie et al. 2001). By contrast, nearest neighbour accuracy depends on data density. When the feature space is only sparsely populated, neighbourhoods span large portions of the data space and include dissimilar neighbours. This produces spurious results. Variations in data density in high dimensional nearest neighbour analyses lead to other problems as well, such as the curse of dimensionality.

1.5.5 The curse of dimensionality

There are many predictors that could be used in analysis, but many are irrelevant or strongly correlated with others. Inclusion of extraneous and suboptimal predictors can degrade the classification skill of statistical analyses, as they may dilute the power of significant predictors. This problem is commonly referred to as the "curse of dimensionality." Discriminant analysis is fairly robust to this curse, and is therefore a reasonable method for choosing optimal predictor sets from among the global set of variables. Floyer and McClung (2003) suggest canonical discriminant analysis (CDA) as an appropriate tool for choosing variables for nearest neighbours by taking those that have the highest canonical coefficients. In nearest neighbours analysis, each predictor adds an extra "dimension" to the predictor space, and if the total number of data points contained in that space remains fixed, then the data become more sparse.
Sparse data lead to large neighbourhoods, and this means that the nearest neighbours may be more dissimilar to the current datum along the dimensions that carry more predictive information. Figure 2 illustrates this point for a nearest neighbour analysis where k = 3. Note that if only one dimension (one predictor) is used, the average radius of the neighbourhood is small. If two predictors are used (leading to a two-dimensional predictor space), then the data become much more spread out and the radius of the neighbourhood increases. It not only encloses a circle (or a hypersphere in higher dimensions) of greater radius, but the neighbourhood also becomes an exponentially larger region of the predictor space. Note that the nearest neighbours themselves are different than in the 1-D case. As the region encompassed by the neighbourhood increases exponentially, so the number of training cases must increase exponentially in order to preserve the representativeness of the class distributions within the neighbourhoods (Hastie et al. 2001).

[Figure omitted.] Figure 2: Illustration of the curse of dimensionality for nearest neighbour analysis. The grey dot is the current datum for which the algorithm finds the nearest neighbours, the rectangular boxes enclose the neighbourhood in the single dimension case, and the circle encloses the two-dimensional neighbourhood. Increasing the number of dimensions from one to two dramatically changes the neighbourhood composition such that the neighbours are on average farther away (which, in physical terms, means that they are more dissimilar with respect to meteorological conditions). Adapted from Hastie et al. (2001).

Since there are a finite number of training cases, problems of dimensionality can be mitigated by judicious choice of predictor variables and by adjusting the relative weight of each predictor (Friedman 1994).
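A quick Monte Carlo sketch makes the point concrete: with a fixed number of historic days drawn uniformly from a unit hypercube, the distance to the nearest neighbour grows as predictor dimensions are added.

```python
# Monte Carlo illustration of the curse of dimensionality: with a
# fixed number of points in a unit hypercube, the mean distance to the
# nearest neighbour of a random query grows with the dimension.
import random

random.seed(1)

def mean_nn_distance(n_points, dims, trials=200):
    total = 0.0
    for _ in range(trials):
        pts = [[random.random() for _ in range(dims)] for _ in range(n_points)]
        query = [random.random() for _ in range(dims)]
        total += min(
            sum((p, q) == (p, q) and (p - q) ** 2 for p, q in zip(pt, query)) ** 0.5
            for pt in pts
        )
    return total / trials

for d in (1, 2, 5, 10):
    print(d, round(mean_nn_distance(100, d), 3))
```

With 100 "historic days" the nearest neighbour of a query point moves steadily farther away as dimensions are added, which is why each extra predictor demands exponentially more training data to keep neighbourhoods representative.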
1.6 Summary

Numerical avalanche prediction can benefit MoT avalanche technicians by providing greater access to distributional data with respect to the present set of meteorological conditions, and by aiding in the interpretation of class III weather data. The weather conditions on the nearest neighbour days can be reviewed by the technician in order to judge the propensity of the current weather for producing avalanches. Weather data are of high informational entropy, meaning that they are more difficult to interpret directly than class I or II data. Frequent reference to the nearest neighbour report by avalanche technicians in training can help them learn the patterns in weather that produce avalanches (McClung 2002a). Bear Pass and Kootenay Pass are very different sites in terms of climate, geography, and the explosive control measures used. However, the same electronic weather variables are available at both sites, and this has facilitated the development of avalanche prediction software for both avalanche programs simultaneously. Also, it was previously shown that the predictor variables are nearly the same (McClung and Tweedy 1994, Floyer and McClung 2003). Nearest neighbour analysis is chosen for this study because it has been shown to work for both sites using manual data, and it is of most use to the forecaster. This affords the opportunity for direct comparison of numerical avalanche predictions using electronic data with those using manual data. Linear discriminant analysis and CART are both promising algorithms that could be applied to this problem. However, the nearest neighbour method was chosen here because of its programming simplicity and flexibility. It is robust to missing data, it can be optimized by using a genetic algorithm or the Mahalanobis distance metric (among other possibilities) to improve accuracy, and it offers the user a useful learning tool.
The nearest neighbour method is prone to overfitting if the training dataset is too small or if too many irrelevant or correlated predictors are used, but it can achieve predictive accuracy near that of other algorithms. McClung and Tweedy (1994) and Floyer and McClung (2003) present thorough explorations of manual observation datasets that can guide selection of the most appropriate variables for greatest predictive accuracy. Nevertheless, electronic and manual data are not exactly equivalent, and the Avalanche Forecast System (AFS) has been designed to enable testing of different variables and parameter settings for model optimization.

2 Avalanche Forecast System (AFS)

The Avalanche Forecast System application was designed for maximum flexibility. The goal was to give the user freedom to choose variables, algorithms, datasets, filters, and otherwise adjust the model in every way possible. This was done to facilitate application of the software to any region where sufficient data exist, regardless of the modeling goals or snow climate. Ongoing development of the AFS will further facilitate application of the program to new regions as well as to industrial and recreational backcountry ski avalanche forecasting. The AFS development was an iterative, evolutionary process using very limited programming funds. Therefore the features included in the AFS are as simple and generalized as possible within the limits of operational usefulness. There may be more optimal approaches to various functions, such as the density proxy. However, fitness statistics reported in the results section show that the current AFS is sufficiently accurate to make the system operationally useful. Numerous alterations and extensions of the current code that could possibly boost accuracy are discussed in the "further work" section of the conclusion.
2.1 Operational motivations

The AFS was designed to accommodate hourly electronic weather information dynamically and automatically make new predictions without adding to the workload of the avalanche technicians. The MoT Snow Avalanche and Weather System (SAWS) collects, stores, and distributes all weather and avalanche information taken by sensors and personnel. All AFS inputs are received directly from SAWS. With this database infrastructure in place, there is a desire to develop tools for data visualization and forecast assistance in order to provide technicians with greater access to relevant information. The AFS is one such tool, based on earlier work done at Kootenay Pass (McClung and Tweedy 1994) and in Scotland (Purves et al. 2002). The AFS differs from these in that it is based on hourly electronic sensor data instead of twice-daily manual data. This distinction necessitated the invention of novel automated data manipulation and model validation routines. Furthermore, earlier attempts at numerical avalanche forecasting have relied on pre-processed master datasets for which missing and erroneous data were discarded or replaced using paper records. In the original Kootenay Pass model, the researcher found the optimal predictors and parameters and manipulated the data. In this study, automated dataset creation routines in the AFS allow the user to vary the model inputs and parameters. This is important when applying the AFS to new regions, though the choice of optimal settings must still be governed by statistical analysis to avoid introducing human bias. Missing and erroneous data are automatically rejected during dataset creation and forecast generation, thus largely eliminating the task of manual data pre-processing. This enables both researcher and user to rapidly generate new datasets and facilitates testing of different combinations of parameters and predictors.
Thus, the AFS has the potential to greatly speed the development and validation processes for new numerical avalanche prediction models. Originally, the aforementioned flexibility was incorporated to allow for easy adaptation of the AFS to changes in observational instrumentation and deployment of the program to other MoT avalanche forecasting regions. The code was written so that the data processing routines are independent of the nearest neighbour algorithm, and they would function with modern prediction algorithms. Hopefully, the next generation of AFS will allow insertion of different prediction algorithms. The AFS queries SAWS hourly or on demand to retrieve current weather data from sensors, and uses this information to make predictions automatically based on a schedule specified in advance by the user. The user can schedule predictions at any desired time, and each schedule can make forecasts using several different models.

2.2 Automated data manipulation

The AFS accepts raw data which are then filtered and pre-processed internally by the following functions: response binning and filtration, memory variable generation, and removal of erroneous data. All these functions are active during creation of new datasets upon which predictive models are built, and the last two also act on current data queried from SAWS. The AFS contains raw master datasets containing weather observations and avalanche occurrences. During dataset creation, the AFS scans through each row of the chosen master data files and passes this information through the user-defined pre-processing functions. The result is a new dataset that is free from missing and erroneous values and that meets the prediction targets specified by the user. Only variables chosen by the user are included in these datasets.

2.2.1 Response binning

The response period is the time period in which avalanche activity is linked to a particular set of meteorological measurements in the historic data.
When building a dataset, the program scans over the master data and each hourly datum is labeled true if avalanches occurred within the response period of that datum. The default response period is 12 hours. The resulting dataset is composed of a matrix of meteorological values and an associated response column that carries the values "True" and "False" depending on whether or not that row was associated with avalanches that meet the user-specified filtration criteria (as explained in section 2.2.2). The response variable is binary to suit the requirements of the nearest neighbour algorithm, but future generations of AFS will include the moisture index and avalanche activity index (McClung and Tweedy 1994) for use as targets for modern non-linear prediction algorithms that are capable of regression as well as classification.

2.2.2 Response filtration

Forecasters expressed a desire to move beyond simple predictions of "avalanche day" or "non-avalanche day", and probabilities thereof. In addition to classification of avalanches into wet and dry groups, they were interested in the probability of natural avalanches, avalanches exceeding a certain size, or those which hit the road. When generating a dataset, the AFS filters the occurrence data according to the restriction parameters. For example, one can create a filter that only accepts natural avalanche occurrences of size 2 or greater. Furthermore, it was shown to be advantageous to parse large forecast regions such as Bear Pass by aspect or into subregions in which there was a uniform response to meteorological forcing (Floyer and McClung 2003). Therefore one can also set filters that only accept avalanche occurrences from certain avalanche paths. In this way, the predictions are spatially scalable within the limitations of data density.
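Response binning and filtration as described above can be sketched as follows; the field names and filter defaults are illustrative assumptions, not the actual SAWS schema or AFS code.

```python
# Label each hourly weather row True if any avalanche that passes the
# user's filter occurred within the response period following that
# hour. Field names ("size", "trigger", "path") are invented.

RESPONSE_PERIOD_HOURS = 12  # the AFS default response period

def passes_filter(avalanche, min_size=2.0, natural_only=True, paths=None):
    if avalanche["size"] < min_size:
        return False
    if natural_only and avalanche["trigger"] != "natural":
        return False
    if paths is not None and avalanche["path"] not in paths:
        return False
    return True

def bin_responses(weather_hours, avalanches):
    """weather_hours: iterable of hour indices; avalanches: dicts with
    an 'hour' key. Returns {hour: True/False} response labels."""
    hits = [a["hour"] for a in avalanches if passes_filter(a)]
    return {
        h: any(h <= hit < h + RESPONSE_PERIOD_HOURS for hit in hits)
        for h in weather_hours
    }

avalanches = [
    {"hour": 5, "size": 2.5, "trigger": "natural", "path": "A"},
    {"hour": 30, "size": 1.0, "trigger": "natural", "path": "B"},  # too small
]
labels = bin_responses(range(0, 48, 6), avalanches)
print(labels[0], labels[30])  # True False
```

Restricting `paths` to a subset of avalanche paths is what makes the predictions spatially scalable, exactly as the section describes.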
2.2.3 Normalization

The historic and current data for each variable are normalized by subtracting the mean value from each observation and dividing by the standard deviation. Precipitation variables at both sites are positively skewed (McClung and Tweedy 1994, Floyer and McClung 2003), and so all precipitation variables were transformed using Xj → ln(Xj + 1) (Bovis 1976) in the master data. Hereafter, this transformation is referred to as log-normalization. Master datasets with and without log-normalized precipitation values were tested at both sites.

2.2.4 Memory variables

Often the rate of change or persistence of weather variables, such as spiking temperatures or repeated heavy snowfalls, is an important predictor of avalanche activity (McClung and Schaerer 2006), so it is often beneficial to incorporate this information into predictive models. The AFS can generate memory variables by summing values in a certain predictor column or by choosing the maximum or minimum value of a variable within a certain observation period. When the dataset is created, a new column is added for each memory variable. In the case of a "log" variable, this column will contain the maximum or minimum value recorded within the specified observation period for each hour of the historic data. For example, if the memory variable is a log of maximum temperature within the 24-hour observation period, then each row (hour) in the database will have a new value which is the maximum recorded temperature during the 24 hours preceding the time of that row. The purpose of these extra columns is to maintain consistency in the predictive information. Maximum and minimum air temperatures supplied by SAWS are those recorded at the last standard observation time (0600 and 1800 hours). The discrete changes in the values of these variables at standard observation times are a procedural artifact and do not reflect reality.
In contrast to this, the additional lag and log variable columns computed by the AFS shift hourly in a smooth manner. The aim of these modifications is to satisfy the desire of forecasters to choose the hour of the prediction while maintaining physical consistency of the predictive data. The maximum wind speed log variable records the maximum wind speed observed in a period specified by the user. Unlike the temperature log variables, the wind log period can differ from the observation period. Though absent from previous studies at Kootenay and Bear Passes, this variable was tested in this analysis because I believe it to be more physically valid than the wind speed recorded over the past hour (as in the wind speed value given by SAWS). I justify this because wind slabs accumulate over time in sustained winds. Ideally, one would use the average wind speed over a certain period; however, the maximum value logging procedure had already been coded for temperature values. Therefore, adding the wind log variable was trivial from a programming perspective, and this was important given the limited development funds available for the AFS pilot software. The AFS also creates new columns of lagged precipitation variables. McClung and Tweedy (1994) found that in some cases variables whose values were averaged over several observation periods were stronger predictors than the same value from the current time period only. The AFS takes a different approach to lagged variables. In the case of new snowfall, the user can set the interval over which the new snowfall amount is to be measured. The accuracy of the electronic snow height sensor is not sufficient to simply cumulate the hourly snowfall (sensitivity of this instrument varies with the density of new snow), and the "new snow" variable in SAWS jumps discretely at standard observations just as the maximum and minimum temperatures do.
The new snowfall lag is computed by subtracting the height of the snowpack at some specified period of time previous to the current observation from the current snowpack height. The water equivalent of new snowfall is also an important predictor, as it is a measure of the new load on the snowpack. The precipitation gauge data are used to generate the new precipitation lag as a measure of the water equivalent of new snow. The measurement resolution of the precipitation gauge sensor is sufficient to permit simple summation of hourly values. New snow density, ram penetration, and foot penetration are highly statistically significant predictors in Kootenay Pass and Bear Pass, but there are no density, ram, or foot penetration measurements in the electronic dataset for either place. For comparison and programming simplicity, only Roeger's (2001) temperature average was applied as a proxy of density. Using optimal relationships for density proxies could improve the AFS's accuracy, but this is unlikely to have a large impact given that the accuracy achieved here is similar to that found in previous studies. Memory variable generation algorithms operate on the historic and current data alike so that predictors in the historic dataset are consistent with new values computed using current data queried from SAWS. One drawback of the current AFS is that when creating datasets, one must include the variable that is used to generate the memory variable. In other words, since the hourly precipitation given by the rain gauge is used when computing the new precipitation lag, both variables must be included in the nearest neighbour analysis. Removing this limitation was not possible given the constraints on programming funds, and will be done in the next generation of AFS.
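The log and lag computations described in this section can be sketched generically (this is an illustration of the idea, not the AFS implementation):

```python
# Memory-variable sketch: a rolling maximum "log" (e.g. maximum
# temperature over the preceding 24 hours) and a new-snow "lag"
# computed as the snowpack height difference over a user-set interval.

def rolling_max(values, window):
    """values[i] is the hourly observation; returns, for each hour,
    the maximum over the preceding `window` hours (current inclusive)."""
    out = []
    for i in range(len(values)):
        out.append(max(values[max(0, i - window + 1): i + 1]))
    return out

def new_snow_lag(snow_height, interval):
    """Snowpack height difference over `interval` hours; settlement
    (negative differences) is clipped to zero."""
    return [
        max(0, snow_height[i] - snow_height[max(0, i - interval)])
        for i in range(len(snow_height))
    ]

temps = [-8, -6, -5, -7, -9, -4]
print(rolling_max(temps, 3))  # [-8, -6, -5, -5, -5, -4]

heights = [100, 102, 105, 110, 109, 112]
print(new_snow_lag(heights, 3))  # [0, 2, 5, 10, 7, 7]
```

Unlike the SAWS maximum/minimum temperatures, which jump at the 0600 and 1800 standard observations, both derived columns shift smoothly hour by hour, which is the consistency property the section emphasizes.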
2.2.5 Dynamic filtration of erroneous data

Occasionally there are missing values in the historical data, as electronic sensors sometimes record erroneous or poor quality observations, or a sensor simply ceases collecting data. When generating a dataset, the AFS rejects any row of data for which there is a missing value or a value of low confidence (as given by an adjacent column in SAWS that flags periods when a sensor is not functioning properly). In spite of these deletions, the MoT master data are sufficiently complete that one can still generate datasets large enough to achieve desirable predictive accuracy in high frequency avalanche areas. In the event that the current observation for a given predictor is missing or has a low confidence value at prediction time, the nearest neighbour algorithm will only calculate distance in predictor space based on the available values. Since nearest neighbour distances are all relative, predictions can still be computed in the absence of some predictors. However, the prediction accuracy may be degraded; this effect is explored in the results section. This is a significant advantage over linear predictive methods, which require that all predictors be input for each prediction, otherwise the algorithm cannot function. The user can toggle the AFS's current data filtration algorithms during operation. Filters on each individual variable can be set to "on," "off," or "auto." The automatic setting uses the SAWS confidence values to determine whether the observation for each variable is valid. Extreme wind values, which result when the wind sensor rimes up, are not flagged as low-confidence by SAWS for Kootenay Pass. Therefore, the automatic filtration setting in the AFS ignores wind speeds in excess of 170 km/h. The "on" setting is used when a sensor is reporting reasonable values but SAWS nevertheless assigns low-confidence values to the data.
The "off" setting is used when sensors report erroneous data but SAWS does not flag these data with low-confidence values. The AFS report files log the history of predicted probabilities, parameter settings, and a binary code that indicates which variables were used in each prediction and which variables were missing.

2.2.6 Variable selection

McClung and Tweedy (1994) and Floyer and McClung (2003) give thorough explorations of the predictive power of manual variables at Kootenay and Bear Passes respectively. The table below summarizes the variables used to build the discriminant functions in both studies and compares them with those used in the AFS. At Kootenay Pass, several discriminant functions were trained on subsets of the original data that were stratified by moisture type and avalanche activity index. Each discriminant function employed a subset of the variables listed below, and these variable subsets differed for each moisture class and avalanche activity index class. At Bear Pass, all listed variables were used in the discriminant functions for the pass as a whole as well as for individual sub-regions.
McClung and Tweedy (1994), Kootenay Pass:
new snow depth; storm board total; weight of new snow; snowfall rate; water equivalent of new precipitation; density of new snow; total snow depth; ram penetration; wind speed; maximum temperature (in the previous 12 hours); present temperature; minimum temperature (in the previous 12 hours); maximum temperature trend; present temperature trend.

Floyer and McClung (2003), Bear Pass:
new precipitation; total snow depth; foot penetration; present temperature; present temperature trend.

AFS:
new snow lag; hourly precipitation; new precipitation lag; density proxy; total snow depth; wind speed; maximum temperature (in the previous 24 hours); present temperature; minimum temperature (in the previous 24 hours); maximum wind speed log.

Table 1: Comparison of variables used in previous linear discriminant models at both sites with those used in the AFS. Note that the variables in the first two lists include all variables for prediction of wet and dry avalanches. See the results section for the variables retained in the final model at each site.

It is not possible to replicate these variable sets exactly using electronic data, though by pre-processing the electronic data one can construct variables that account for most of the physical properties described above. The AFS new snow lag is more similar to the storm board total because it is a measure of the new snowfall, including settlement, during a period specified by the user. Hourly precipitation is used to replace snowfall rate, as the precipitation gauge is more accurate at hourly intervals than the snow height sensor. Furthermore, using new snowfall and new precipitation values lagged over the same period is roughly equivalent to a measure of the density of snow that accumulated during that period. This is not an exact measure of density, as the snow height sensor underestimates the height of low density snow.
The density proxy variable adapted from Roeger (2001) is also included in the AFS to compensate for this shortcoming. Only the time averaged temperature relationship was used, and not the power law function found by Roeger. This was done for simplicity and generality, as no work has been done to find the power law parameters that suit Bear Pass. Temperature maxima and minima in previous studies are taken within 12 hours. However, accuracy was improved in the AFS at Kootenay Pass when temperature maxima and minima were logged over the previous 24 hours (see the results section). The 24 hour log variables implicitly include some of the information contained in the temperature trend variables. Taken together, maximum and minimum temperature during the last 24 hours indicate trends in temperature change over the diurnal period. However, none of the AFS temperature log variables improve predictive accuracy at Bear Pass (and therefore they are omitted from the Bear Pass model), even though present temperature trend was found to be significant by Floyer and McClung (2003). Temperature trends are perhaps not captured sufficiently by the current AFS. The variables with which prediction models have been built using the AFS, shown in the table above, differ from those that were found to be optimal by previous studies. However, they encode as much of the same information as is possible using only electronic sensors.

2.3 Prediction algorithms

2.3.1 Nearest neighbour algorithm

The observed dataset D = {y_i, x_i}, i = 1, ..., n, contains the historic record, where y_i is the class label for observation x_i. Let d_i = ||x - x_i|| be the Euclidean distance between the i-th predictor vector and the new vector x for which we wish to assign a class. The nearest neighbour algorithm takes the k "neighbours" that have the smallest d_i and classifies the target datum as the dominant class among the k neighbours. Probabilities are generated using the ratio of avalanche days to k.
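A minimal Python sketch of this classification step follows. The data layout and names are illustrative, not taken from the AFS source; the one-neighbour-per-day rule discussed in the next paragraph is included here via day labels:

```python
# Sketch of the nearest-neighbour probability step, assuming the historic
# record is a list of (day_id, predictor_vector, is_avalanche) tuples.
import math

def knn_avalanche_probability(history, x_new, k):
    """Return the fraction of avalanche periods among the (up to) k
    nearest neighbours, admitting at most one neighbour per historic day."""
    # Euclidean distance to every historic predictor vector.
    scored = sorted(
        (math.dist(x_new, x_i), day, y_i) for day, x_i, y_i in history
    )
    neighbours, seen_days = [], set()
    for d, day, y_i in scored:
        if day in seen_days:        # one neighbour per day only
            continue
        seen_days.add(day)
        neighbours.append(y_i)
        if len(neighbours) == k:
            break
    return sum(neighbours) / len(neighbours)

# Tiny worked example: three historic days, two predictors each.
history = [
    ("d1", (1.0, 0.0), 1),   # avalanche period
    ("d1", (1.1, 0.1), 1),   # same day: at most one of these may count
    ("d2", (0.0, 0.0), 0),
    ("d3", (5.0, 5.0), 0),
]
print(knn_avalanche_probability(history, (1.0, 0.2), k=2))  # → 0.5
```

In the example, the two closest rows both belong to day "d1", so only one of them contributes to the neighbourhood; the second neighbour comes from day "d2".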
It could be argued that use of the Euclidean distance metric is inappropriate for avalanche forecasting, since it does not account for the fact that the variables are correlated. However, the success of previous models calculated using the Euclidean metric (Purves 2003, Buser 1983) shows that this metric works sufficiently well. Nearest neighbour analysis is computationally inexpensive, despite the fact that the entire dataset must be retained in order to make a prediction. By contrast, linear models encode prediction information in model parameters and the data are discarded after training. One adaptation of this algorithm is necessary in order to apply it to hourly data: only one neighbour per day is allowed to be included in the k nearest neighbours. It is easy to imagine that if a given day in the historic record contains a row that is similar to current conditions, several nearest neighbours would be drawn from the same day. This would have the effect of giving greater weight to certain historic days that contribute several neighbours to the analysis. Therefore the AFS only accepts one neighbour per day when choosing neighbours. All variables used in the nearest neighbour analysis are equally weighted. Purves' genetic-algorithm-optimized weighted Euclidean distance metric was considered for this work, but limited funds and the promising preliminary accuracy of the unweighted nearest neighbour algorithm delayed implementation of stochastic optimization pending further study.

2.3.2 Threshold sum model

This model is based on work done by Schweizer and Jamieson (2003), in which they used CART to establish a threshold sum model based on snow profile observations gathered from several hundred avalanche accident sites and non-avalanche days in Canada and Switzerland.
Their model ranked the most significant variables, found a set of thresholds on each variable that distinguished avalanche days from non-avalanche days, and found a set of weights on those variables that reflect the relative importance of each variable to the stability evaluation. The threshold sum model (TSM) was designed with the above research in mind. One can input the Schweizer and Jamieson (2003) threshold sum parameters in this algorithm and use it to generate a probability of avalanching based on snowpack information. This probability is calculated by dividing the threshold sum score by the sum of all threshold weights. Most importantly, the TSM provides a structural precedent that will allow for future substitution of other models based on information from different data classes at a variety of spatial and temporal scales of observation. Probabilities generated by different models are combined using recursive Bayesian updating (Pearl 1988). The TSM feature has not yet been tested, and therefore it is not discussed further in this study.

2.4 Bayesian updating

Bayesian inference updating provides a mathematical formalism that approximates the process of human reasoning in avalanche forecasting (McClung 2002b), and thus it is used in the AFS to update probabilities of avalanching given different stability information. McClung and Tweedy (1994) employed a Bayesian framework to refine numerical forecasts using the forecaster's prior knowledge of the current state of instability. The forecasters entered their estimate of avalanche probability as a Bayesian "prior" in Bayes' Rule. Bayes' Rule states that the posterior probability (the forecast) is equal to the likelihood (based on distributional data) multiplied by the prior (based on singular data), and evaluated over the evidence (the sum of products of distributional likelihoods of avalanche occurrence and non-avalanche occurrence). This is written as

p(y_i | D) = p(D | y_i) p(y_i) / [ p(D | y_1) p(y_1) + p(D | y_0) p(y_0) ]
where y_i is the target class (i = 1 for avalanche periods, i = 0 for non-avalanche periods), and D stands for the historic data. The likelihood p(D | y_1) is the probability that the new datum is a member of class y_1 based on similar periods in the historic record, and in the current AFS this is given by the proportion of avalanche periods among the nearest neighbours. The prior probability p(y_1) includes information about the current state of instability, and is estimated by the forecaster and/or given by the threshold sum model. The evidence is the sum of all likelihoods of avalanche occurrence as a product of the prior probability that each piece of evidence indicates that avalanches are likely. For example, there is a high prior probability that recent avalanche activity indicates likely instability, whereas the presence of a surface hoar layer is associated with a lower prior probability of avalanches, as only some surface hoar layers produce avalanches. This example shows the relation between Bayes' Rule and avalanche forecasting: information from higher data classes (weak layers fall in class II and recent avalanches are class I) carries greater uncertainty as evidence of instability. The AFS uses a recursive inferential process, which combines the nearest neighbour likelihood and the threshold sum model prior to give a posterior probability of avalanches. The posterior of this first step becomes the likelihood in the second round of inference, where the forecaster's prior is incorporated to give an overall prediction of avalanche probability given all available information.

2.5 Fitness metrics

Fitness metrics are a means of judging the ability of a model to predict avalanches, and each metric shows a different quality of the predictions. There are four fitness metrics used here: accuracy, unweighted average accuracy, bias, and the Hanssen and Kuipers discriminant (Purves 2002).
Discussion of these fitness metrics refers to the following contingency tables and equations (Roeger et al. 2003):

                         Observation
Forecast                 Avalanche              No Avalanche
Avalanche                True Positive (TP)     False Positive (FP)
No Avalanche             False Negative (FN)    True Negative (TN)

Table 2: Contingency table for comparison of forecasts and observations.

Or, alternatively:

                         Observation
Forecast                 Avalanche              No Avalanche
Avalanche                Hit                    False alarm
No Avalanche             Miss                   Correct negative

Table 3: Contingency table as above, using plain language.

Accuracy: This reflects the traditional concept of accuracy, whereby one judges a model by the proportion of correct classifications to the total number of test predictions. This is given by:

Accuracy = (TP + TN) / (TP + TN + FP + FN) = (hits + correct negatives) / total

where a value of 1 would mean a perfect forecast and 0 is the worst possible score.

Unweighted average accuracy (UAA): This is a better measure for rare events, as high predictive accuracy of non-events will not skew this metric to the same degree as the accuracy metric. Scores range from 0 to 1, where 1 is a perfect score. Unweighted average accuracy is given by

UAA = 0.5 [ TP / (TP + FN) + TN / (FP + TN) ]
    = 0.5 [ hits / (hits + misses) + correct negatives / (false alarms + correct negatives) ]

Bias: Bias is not a measure of model accuracy; instead it allows the forecaster to see whether the model falsely predicts positive events (avalanches) more or less frequently than negative events (non-avalanches). If the model over-predicts avalanches, the bias will be greater than 1. A bias of less than 1 indicates that the model more frequently predicts avalanche days as non-avalanche days, whereas 1 is a perfect score.

Bias = (TP + FP) / (TP + FN) = (hits + false alarms) / (hits + misses)

Hanssen and Kuipers discriminant (HK): This is a measure of the model's ability to properly distinguish between events and non-events. It is also known as the true skill score.
Note that a perfect score is 1 and 0 indicates no skill, but the score will generally be lower than the accuracy value. This reflects the fact that this is a more rigorous measure of model performance, and of the four options described here, it is perhaps the best metric to use for stochastic optimization by a genetic algorithm (Purves 2002).

HK = TP / (TP + FN) - FP / (FP + TN)
   = hits / (hits + misses) - false alarms / (false alarms + correct negatives)

Probability of detection (POD): This fitness metric shows the percentage of avalanche occurrences that are correctly classified. It is analogous to the classification rate for avalanche events as in Floyer and McClung (2003). It is not included in the AFS program, but it is listed in the results section for thoroughness.

POD = TP / (TP + FN) = hits / (hits + misses)

2.6 Autocorrelation window masking jackknife cross-validation

There was a risk that fitness values determined by the traditional batch test algorithm adapted from Purves et al. (2002) were inflated. In the traditional batch test, 20% of historic data rows, in chronological order, are reserved for testing. When using hourly interval data, each row is associated with avalanches that happened within 12 hours of the time of that row, and therefore the next row is associated with as many as 11 hours of the same avalanche activity. In reality adjacent rows overlap by fewer than 11 hours, as rows with missing data are rejected. However, any overlap is problematic. One can argue that the batch test is effectively classifying far fewer independent avalanche periods than is reflected by the number of rows in the test batch. The following cross-validation procedure allows the user to:

1. Use only one datum per 12 hour period (or whatever time period is specified by the user).
2. For each datum chosen for testing, exclude all adjacent rows within a user defined serial correlation masking period during testing and training.
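These two selection rules can be sketched as a minimal Python routine. The data layout, names, and toy predictor are hypothetical, not the AFS implementation:

```python
# Sketch of the masking jackknife, assuming hourly rows carry an
# integer hour index. Names and data layout are illustrative only.

def masked_jackknife(rows, predict, mask_hours, sample_period):
    """rows: list of (hour_index, predictors, label) in time order.
    predict(train, predictors) -> predicted label.
    Every `sample_period`-th hour is tested; all rows within
    +/- `mask_hours` of the test hour are excluded from training."""
    results = []
    by_hour = {h: (x, y) for h, x, y in rows}
    for h in range(rows[0][0], rows[-1][0] + 1, sample_period):
        if h not in by_hour:          # rows with missing data were rejected
            continue
        x_test, y_test = by_hour[h]
        # Mask out every row inside the serial-correlation window.
        train = [(x, y) for hh, x, y in rows if abs(hh - h) > mask_hours]
        results.append((predict(train, x_test), y_test))
    return results

# Toy example: 1-nearest-neighbour prediction on scalar predictors.
def nn1(train, x):
    return min(train, key=lambda t: abs(t[0] - x))[1]

rows = [(h, float(h), int(h >= 5)) for h in range(10)]
print(masked_jackknife(rows, nn1, mask_hours=1, sample_period=3))
# → [(0, 0), (0, 0), (0, 1), (1, 1)]
```

Each returned pair is (predicted, observed) for one independent test hour, from which the contingency counts of section 2.5 can be tallied.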
This ensures that no neighbours from the same synoptic period as the test datum are used in predictions. The following pseudo-code describes the autocorrelation window masking cross-validation routine:

let cp = cross-validation sampling period: the number of hours to skip ahead when choosing the validation dataset.
let a = serial correlation mask value: a number set by the user which specifies the maximum number of hourly observations that may be correlated with each other, and that will therefore be omitted from the prediction for the current datum under consideration.
let t_i = current observation row, indexed by i.

Leave-one-out cross-validation (jackknifing):
1. For row t_i, make a training dataset that excludes all data within +/- a hours of time t_i.
2. Make a prediction for row t_i.
3. Iterate with t_{i+1} = t_i + cp.

This cross-validation algorithm automatically selects a set of independent test samples from serially autocorrelated datasets. When each sample is tested, all samples that could be correlated with that observation are rejected from the nearest neighbour analysis. Furthermore, by slightly varying the cross-validation period, one can generate different subsets of the test data. This allows the user to test the robustness of the fitness values generated by different subsets of the original dataset.

2.7 AFS Summary

The AFS automates the process of building datasets and memory variables, and of generating accuracy statistics. It uses only electronic weather sensor data, but includes the capability to incorporate the forecaster's prior knowledge about the instability of the snow cover, or to incorporate output from a model based on snow structure information (such as the threshold sum model). This information is combined using Bayesian statistics. Currently, predictions based on distributional information are given by nearest neighbour analysis. Hourly observation intervals are matched to avalanche occurrences by binning them in 12 hour blocks.
Each hour is associated with the avalanche activity that occurs in the 12 hours following that hour. A jackknife cross-validation routine evades the problem of temporal autocorrelation of hourly interval observations. Stochastic optimization of the variable weights in the nearest neighbour analysis was not applied in this study, but remains a promising avenue for future development.

3 Results

This section summarizes the fitness of the operational avalanche forecasting models built using the AFS, and presents a thorough exploration of the AFS parameters, predictors, and their effect on accuracy. The following tables summarize the optimal performance of, and variables used by, the AFS at both Bear and Kootenay Passes for prediction of all avalanche types together (wet, moist and dry).

Area            Acc    UAA    POD    HK     Bias   Number of rows   # of rows associated with avalanches
Bear Pass       0.66   0.71   0.81   0.43   2.30   15423            3083
Kootenay Pass   0.79   0.76   0.72   0.52   1.99   47501            3649

Table 4: Fitness statistics for the optimal models at both sites, including data volumes and proportions of avalanches in each dataset.

Area            True positives   False positives   True negatives   False negatives   Number of independent test rows   # of test rows associated with avalanches
Bear Pass       145              266               436              34                881                               179
Kootenay Pass   130              228               928              50                1336                              170

Table 5: Raw prediction results for both sites, including numbers of independent test samples and the proportion of these that are avalanche periods. Test rows are chosen using a cross-validation period of 17.

Kootenay Pass variables: new snowfall lag; hourly precipitation; new precipitation lag; total snow depth; wind speed; maximum temperature (in the previous 24 hours); present temperature.
Bear Pass variables: hourly precipitation; new precipitation lag; total snow depth; wind speed; present temperature; maximum wind speed log.

Table 6: Variables used in the analysis at both field sites.
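As a consistency check, the fitness metrics of section 2.5 can be applied to the raw counts in Table 5 to reproduce the summary statistics in Table 4. This is a sketch with illustrative names, not AFS code:

```python
# Fitness metrics from section 2.5, applied to the Table 5 contingency
# counts as a cross-check against Table 4.

def fitness(tp, fp, tn, fn):
    return {
        "Acc": (tp + tn) / (tp + fp + tn + fn),
        "UAA": 0.5 * (tp / (tp + fn) + tn / (fp + tn)),
        "POD": tp / (tp + fn),
        "HK": tp / (tp + fn) - fp / (fp + tn),
        "Bias": (tp + fp) / (tp + fn),
    }

# Table 5 counts in the order (TP, FP, TN, FN).
bear = fitness(145, 266, 436, 34)
kootenay = fitness(130, 228, 928, 50)

print({k: round(v, 2) for k, v in kootenay.items()})
# → {'Acc': 0.79, 'UAA': 0.76, 'POD': 0.72, 'HK': 0.52, 'Bias': 1.99}
```

The Kootenay Pass values match Table 4 exactly, and the Bear Pass counts reproduce Table 4 to within rounding.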
3.1 Normalization of input variables

Normalization of variables was shown to improve predictive accuracy in nearest neighbour analysis by Purves (2002), and it is an essential data pre-processing step in parametric methods such as LDA. Use of raw variables in nearest neighbour analysis effectively gives greater weight to predictors that have a higher variance and larger mean values; therefore, the initial hypothesis was that normalization would improve classification accuracy. However, since nearest neighbour analysis is a non-parametric technique (meaning that it makes no assumptions of normality), it was hypothesized that log-normalization of precipitation variables as in Bovis (1976) would not significantly affect model performance. The table below shows that the latter hypothesis proved false in the case of Bear Pass. Further research is necessary to determine the reason that log-normalization improves the POD at Bear Pass, and decreases it at Kootenay Pass. In this study, log-normalized precipitation values are used at Bear Pass, but not at Kootenay Pass.

Area            Transformation       Acc    UAA    POD    HK     Bias
Bear Pass       Log-normalized       0.66   0.71   0.81   0.43   2.30
Bear Pass       Not log-normalized   0.72   0.71   0.69   0.42   1.77
Kootenay Pass   Log-normalized       0.87   0.78   0.68   0.57   1.89
Kootenay Pass   Not log-normalized   0.79   0.76   0.72   0.52   1.99

Table 7: Comparison of models using optimal parameters with and without transformation of precipitation variables.

3.2 Model performance time series

[Figure: Kootenay Pass, 1998/1999 AFS prediction output, Dec 1 to Apr 1]

Figure 3: Time series showing predicted avalanche probabilities and timing of observed avalanches. X denotes one or more occurrences in that hour, and the gray line marks the warning level, or threshold number of nearest neighbours at which the period is classified as an avalanche period. Gaps in the probability curve are periods where the
computer stopped working or SAWS failed to report any weather information.

Figure 2 shows an entire season of AFS predictions and observed avalanche occurrences at Kootenay Pass. At this resolution it is clear that the AFS can recognize major avalanche periods, though there are a significant number of misses. In particular, the AFS cannot identify a large proportion of wet avalanche periods, such as in the month of May in figure 2. At daily resolution, shown below, it appears that the highest probabilities (>0.5) given by the AFS are almost always associated with avalanche occurrences. Probabilities less than 0.4 are much more likely to be false alarms (where avalanches are predicted but none occur) than when the predicted probabilities are greater than 0.5.

[Figure: Kootenay Pass, November/December 1998 AFS prediction output]

Figure 4: Time series showing predicted avalanche probabilities and timing of observed avalanches. X denotes avalanche occurrences. See figure 2 for details.

[Figure: Kootenay Pass, January/February 1999 AFS prediction output]

Figure 5: Time series showing predicted avalanche probabilities and timing of observed avalanches. X denotes avalanche occurrences. See figure 2 for details.

[Figure: Kootenay Pass, March/April 1999 AFS prediction output]

Figure 6: Time series showing predicted avalanche probabilities and timing of observed avalanches. X denotes avalanche occurrences. See figure 2 for details.

For Bear Pass, the overview of the 2003-2004 winter season shows that the predictions approach unity and zero more frequently than the predictions for Kootenay Pass.
The time series also appears noisier, especially when one looks at the daily resolution.

[Figure: Bear Pass, 2003/2004 AFS prediction output, Dec 1 to Apr 1]

Figure 7: Time series showing predicted avalanche probabilities and timing of observed avalanches. X denotes avalanche occurrences. See figure 2 for details.

[Figure: Bear Pass, November/December 2003 AFS prediction output]

Figure 8: Time series showing predicted avalanche probabilities and timing of observed avalanches. X denotes avalanche occurrences. See figure 2 for details.

[Figure: Bear Pass, January/February 2004 AFS prediction output]

Figure 9: Time series showing predicted avalanche probabilities and timing of observed avalanches. X denotes avalanche occurrences. See figure 2 for details.

[Figure: Bear Pass, March/April 2004 AFS prediction output]

Figure 10: Time series showing predicted avalanche probabilities and timing of observed avalanches. X denotes avalanche occurrences. See figure 2 for details.

Bear Pass predictions are noisier, perhaps because there are fewer test cases (fewer years of data) than at Kootenay Pass. Each neighbourhood upon which predictions are based at Bear Pass may therefore encompass a larger region of predictor space on average. This means that the more distant neighbours are less similar to the datum in question, and are therefore more likely to result in spurious predictions. Therefore, as the data density continues to increase through the years, this noisy performance should settle down and accuracy could increase.
3.3 Sensitivity testing of AFS parameters

One of the primary goals in the development of the AFS was to create a system in which all parameters, such as the observation and forecast periods, predictor choices, and lag periods, can be varied. This will facilitate application of the AFS to new forecast areas without extensively re-coding the program. Here, the effect of varying these parameters on model fitness is explored. First, a preliminary exploration of these parameters was done to find the optimal parameter values and predictors. Then, using these parameters and predictors, each individual parameter was varied separately. The resulting fitness values are shown in this section. All of the plots in this section were obtained by using the optimal values shown in the table below while varying each parameter independently. Note that some of the following graphs plot bias - 1 so that the bias term fits on the same scale as the other fitness metrics. Given that temperature log variables were not found to be significant at Bear Pass, and that the observation period only operates on temperature log variables, there is no optimal value for the observation period at Bear Pass. The wind speed log did not improve accuracy at Kootenay Pass, so it was also omitted.

Parameter                          Kootenay Pass    Bear Pass
Observation period                 24 h             N/A
Forecast period                    12 h             12 h
Cross-validation mask              66 h             66 h
Cross-validation sampling period   17 h             17 h
Threshold k                        8 neighbours     4 neighbours
k                                  40 neighbours    20 neighbours
Precipitation lag period           24 h             30 h
Wind speed log period              N/A              18 h

Table 8: Optimal parameter settings for models at both study sites.

3.3.1 Cross-validation parameters

The cross-validation parameters were often similar across a wide range of values; therefore, physically reasonable values were chosen.
[Figure: Fitness vs. cross-validation mask, 0 to 72 h, Kootenay Pass]

Figure 11: Fitness plots for variations in cross-validation mask at Kootenay Pass. Similar results were obtained for Bear Pass. Perfect scores for UAA and HK are 1, and -1 for bias - 1.

The cross-validation mask eliminates the problem of temporally autocorrelated neighbours artificially improving model accuracy by their inclusion in predictions for periods from the same day or synoptic event. The "one neighbour per day" constraint seems also to have largely dealt with the problem of serially correlated data, though slightly inflated accuracy values still result when the mask is set to less than 3 hours. This confirms that the mask is a necessary feature. However, it is perhaps surprising that the 66 hour mask yields much the same result as the 18 hour mask. This may be attributable to the structure of the nearest neighbour analysis, in which a single temporally autocorrelated neighbour only accounts for 1/k of the information upon which predictions are based. All other studies shown below use a cross-validation mask of 66 hours, because this ensures that no nearest neighbours from the same synoptic cycle as the test case are used in generating predictions for that test case. This is perhaps excessive. However, the data show no significant reduction in model fitness for mask values greater than 18 hours. The figure above, in which the cross-validation period is equal for each iteration, also shows that when the same subset of the data is tested multiple times, fitness statistics remain stable. The cross-validation sampling period is the number of hours that separate test cases that are used by the cross-validation routine. The goal is to choose effectively independent samples for use in fitness testing.
The minimum physically reasonable value for this parameter is equal to the forecast period (12 hours in this study), as this ensures that there is no overlap in the set of avalanche occurrences that define positive test cases. In other words, this ensures that fitness values are not artificially inflated by multiple correct predictions of the same avalanches. Furthermore, small variations in the cross-validation period produce different subsets of test data, and this accounts for the instability of the bias when this parameter is varied. In spite of this instability, it is evident that values less than 12 do not seem to impact accuracy significantly. The cross-validation sampling value used in the rest of this study is 17. This number was chosen because it generates a large number of independent samples, it is out of phase with diurnal cycles, and the resulting test samples are a mix of observations from all parts of the day.

[Figure: Kootenay Pass, fitness vs. cross-validation sampling period, 0 to 48 h]

Figure 12: Fitness plots for variations in cross-validation period at Kootenay Pass. Perfect scores for UAA and HK are 1, and -1 for bias - 1.

This plot shows the degree of variability of the various fitness statistics that can be expected when fitness testing is performed on slightly different subsets of the same data. This variability should be kept in mind when choosing optimal values. Clearly, the bias is the most unstable metric of all, and it is more so for the smaller datasets chosen by larger cross-validation period values. The jitter in all fitness values can be used to estimate the model error that results from data selection. Any variations in fitness due to changes in model parameters which are greater than these estimates are considered to be significant.
Deviations from the mean of UAA, HK, and bias are +/-0.02, +/-0.05, and +/-0.20 respectively. Bias varies most when applied to slightly different subsets of the data, and for cross-validation period values greater than 20 this effect is amplified.

[Figure: Bear Pass, fitness vs. cross-validation sampling period, 6 to 48 h]

Figure 13: Fitness plots for variations in cross-validation period at Bear Pass. Perfect scores for UAA and HK are 1, and -1 for bias - 1.

A cross-validation period of 17 is the best value to use, as the next prime number is close to a significant jump in bias variability for both Bear and Kootenay Passes.

3.3.2 Observation and forecast periods

When creating datasets using the AFS, the default observation and forecast periods are 24 and 12 hours respectively. The observation period is the time window in which the AFS records the maximum and minimum temperature log variables. The variability of this parameter is not greater than the expected error from varying the dataset, so 24 hours was chosen because it matches the diurnal period.

[Figure: Fitness vs. observation period, 12 to 48 h]

Figure 14: Fitness plots for various observation periods. Perfect scores for UAA and HK are 1, and -1 for bias - 1.

[Figure: Fitness vs. forecast period, 3 to 24 h]

Figure 15: Fitness plots for various forecast periods. Perfect scores for UAA and HK are 1, and -1 for bias - 1.

The optimal value for the forecast period is 18 hours; however, this study uses a 12 hour forecast period because it matches the operational objectives of the MoT without significant loss of accuracy.
3.3.3 Nearest neighbour parameters

McClung and Tweedy (1994) and Floyer (2003) both found that the optimal ratio of nearest neighbours that identifies an avalanche day is near 1/5.

[Figure: Fitness vs. threshold k, for k = 30]

Figure 16: Fitness plots for variations in the threshold number of nearest neighbours at which test data are labeled an avalanche period, with k = 30 total nearest neighbours. Perfect scores for UAA and HK are 1, and -1 for bias - 1.

This plot, in which k is 30, confirms that a threshold of k/5 is appropriate. At Kootenay (Bear) Pass, k = 40 (20) was chosen, though for values of k greater than 20 (15) the results do not vary significantly with changes to k.

[Figure: Kootenay Pass, fitness vs. k, 10 to 60]

Figure 17: Fitness plot for k, the number of nearest neighbours, at Kootenay Pass. Perfect scores for UAA and HK are 1, and -1 for bias - 1.

[Figure: Bear Pass, fitness vs. k, 10 to 60]

Figure 18: Fitness plot for k, the number of nearest neighbours, at Bear Pass. Perfect scores for UAA and HK are 1, and -1 for bias - 1.

3.3.4 Memory variables

The following series of graphs justify the choice of precipitation lag periods and wind speed log periods. The optimal precipitation lag for Kootenay Pass is 24 hours, and for Bear Pass it is 30 hours. The wind speed log value was not significant at Kootenay Pass, and at Bear Pass the optimal value is 18 hours.

[Figure: Kootenay Pass, fitness vs. precipitation lag period, 6 to 60 h]

Figure 19: Fitness plots for variations in precipitation lag period at Kootenay Pass. The same value is used for both the new snowfall lag and the precipitation (water equivalent of new snowfall) lag.
Perfect scores for UAA and HK are 1, and -1 for bias-1.

Figure 20: Fitness plot for variation of the precipitation lag period at Bear Pass. The same value is used for both new snowfall lag and precipitation (water equivalent of new snowfall) lag. Perfect scores for UAA and HK are 1, and -1 for bias-1.

At Bear Pass, there is a curious spike in fitness for a lag period of 36 hours, but this value is not used because it could be a spurious random variation leading to overestimation of the expected accuracy of this model.

Figure 21: Fitness plots for variation of the maximum wind speed log variable at Bear Pass. Perfect scores for UAA, HK, and POD are 1.

There is no strongly favoured value for the wind log; however, this variable is maintained because it consistently adds 1-3% accuracy, even when the test dataset is changed or other variables are dropped or added. When POD is considered, there does appear to be an optimal value at 18 hours that coincides with the maximum value of UAA.

3.3.5 Missing variables

Given that the AFS makes predictions even if one or more input predictors are missing, the effect of removing variables one at a time was tested. It shows that model accuracy is lower, but still reasonable, if one sensor is missing from the current data, as will occasionally happen during operational use.

Figure 22: Fitness of various models where different variables were omitted. "None missing" refers to the model where no sensor data has been omitted. 
All other abscissa labels indicate the sensor dropped from the analysis. The bias is omitted here because it exceeds the range of the other fitness metrics and shows little useful discrimination power among models with missing variables. Perfect scores for UAA, HK, and POD are 1.

Figure 23: Fitness values of models using the optimal variable set, each with a different single missing variable. Perfect scores for UAA, HK, and POD are 1.

The first graph shows that all sensors carry important information at Kootenay Pass, as fitness drops significantly when each is removed from the analysis. Figure 23 shows that the wind log, density proxy, and minimum temperature are not significant. When all three of these variables are dropped from the analysis, fitness slightly improves, though only within the range of variability due to changes in cross-validation period.

Figure 24: Fitness of various models where different sensor data were omitted at Bear Pass. "None missing" refers to the model where no sensor data has been omitted. Perfect scores for UAA, HK, and POD are 1.

This graph shows that all sensors are significant at Bear Pass, with the possible exception of present temperature. This variable is retained in the optimal model because it is measured by a different sensor than all the other variables at Bear Pass, and it could be advantageous to keep it in the event that another sensor is temporarily inoperative. Also, Floyer (2001) showed it to be a significant predictor at Bear Pass. 
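The fitness metrics compared throughout these figures can be written down compactly. The sketch below uses standard contingency-table definitions (the thesis code itself is not reproduced here): POD is the probability of detection, UAA averages the accuracies on avalanche and non-avalanche periods, and HK is the Hanssen-Kuipers skill score.

```python
# Standard verification metrics from a 2x2 contingency table:
# tp = avalanche periods correctly predicted, fp = false alarms,
# fn = missed avalanche periods, tn = correct nulls.
def fitness(tp, fp, fn, tn):
    pod = tp / (tp + fn)          # probability of detection
    pon = tn / (tn + fp)          # probability of null detection
    uaa = (pod + pon) / 2         # unweighted average accuracy
    hk = pod + pon - 1            # Hanssen-Kuipers skill score
    bias = (tp + fp) / (tp + fn)  # forecast bias (ratio of forecasts to events)
    return uaa, hk, bias, pod
```

UAA weights the rare avalanche periods equally with the abundant non-avalanche periods, which is why it is a more demanding score than overall accuracy on these imbalanced datasets.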
Figure 25: Fitness values for models using the optimal variable set, each with a different single missing or added variable. Perfect scores for UAA, HK, and POD are 1.

Addition of minimum or maximum temperature or density proxy reduces accuracy at Bear Pass, as does removal of the wind log, precipitation and snow lags. Therefore the former variables are omitted and the latter are maintained in the optimal model.

3.3.6 Occurrence filters

Filtration of the avalanche occurrences that are included in a given dataset allows the user to target specific subsets of avalanche occurrence types or avalanche paths. In the case of avalanche size and moisture type restrictions for Kootenay Pass, the small percentage of positive target data in the filtered dataset results in low accuracy, because nearest neighbour analysis functions poorly with small and sparse datasets. This is evident where overall accuracy is high but unweighted average accuracy (the accuracies for avalanche and non-avalanche periods added and divided by two) is low: negative test cases so dominate the data that overall accuracy stays high even though there are too few positive test cases to predict well. In this section, "no filters" refers to models that predict all avalanche types together. Each filtering restriction was tested individually. For instance, "natural" avalanches refers to all moisture and size classes of natural avalanche, and "moist" refers to moist avalanches of all size classes and trigger types.

Comparison of natural avalanche prediction at Bear and Kootenay Passes illustrates this problem. In Kootenay Pass, accuracy at predicting explosive controlled avalanches is reasonably good. 
However, the Gaz-Ex explosive control system allows forecasters there to largely eliminate large natural avalanches, and therefore the proportion of positive test cases in the dataset is too small to produce an accurate model.

Figure 26: Fitness values for various datasets that target specific occurrences at Kootenay Pass. "No filters" denotes the optimal model where no occurrences were filtered out during dataset creation. Perfect scores for UAA, HK, and POD are 1.

Figure 27: Fitness values for various datasets that target specific occurrences at Bear Pass. "No filters" denotes the optimal model where no occurrences were filtered out during dataset creation. Size 2 and 3 filters target only avalanche occurrences that are greater than or equal to the given size. Perfect scores for UAA, HK, and POD are 1.

In contrast, Bear Pass still produces frequent natural avalanches, and this is reflected in the accuracy of the model that targets natural avalanches versus that which targets controlled avalanches. Fitness for natural avalanche prediction is equal to that which targets all avalanche types (no filters) at Bear Pass. This is likely because there are many more natural avalanches at Bear Pass than at Kootenay Pass, and the timing of explosive control is dependent on human factors and on clearing weather that permits helicopter bombing. The model that targets moist and wet avalanches is also reasonably accurate at Bear Pass, where the maritime climate produces much more frequent moist and wet avalanches. At Kootenay Pass, moist and wet avalanches are too infrequent to create an accurate model using nearest neighbours. 
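The occurrence filters discussed in this section can be sketched as a simple record filter. The field names below are assumptions for illustration, not the SAWS schema: each restriction (trigger type, moisture class, minimum size) narrows the set of occurrences used to label the prediction target.

```python
# Hypothetical sketch of occurrence filtering (field names assumed).
# moisture, when given, is a set of acceptable moisture classes.
def filter_occurrences(occurrences, trigger=None, moisture=None, min_size=None):
    out = []
    for occ in occurrences:
        if trigger is not None and occ["trigger"] != trigger:
            continue  # e.g. keep only "natural" triggers
        if moisture is not None and occ["moisture"] not in moisture:
            continue  # e.g. keep only moist/wet avalanches
        if min_size is not None and occ["size"] < min_size:
            continue  # e.g. size >= 2 filter
        out.append(occ)
    return out
```

As the section notes, the tighter the filter, the sparser the positive class becomes, and nearest neighbour fitness degrades accordingly.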
Models that target large subregions of Bear Pass are also somewhat inaccurate, and accuracy in predicting avalanches for smaller avalanche path groupings such as Summit Sluffs is considerably poorer.

Figure 28: Model fitness for select sub-regions of Bear Pass (no filters, north aspect, south aspect, summit sluffs, little bear). Perfect scores for UAA, HK, and POD are 1.

This is contrary to the findings of Floyer and McClung (2004), who achieved equal or better predictive accuracy for these small sub-regions of Bear Pass compared with models trained on the full set of avalanche paths at that location. This discrepancy is likely explained by the fact that Floyer and McClung had longer historical records with which to train their models.

3.4 Results summary

The optimal parameters and optimal variable sets for Bear and Kootenay Passes are given at the beginning of this section. Accuracy at Bear Pass is 71% +/-2%, and it is 76% +/-2% at Kootenay Pass, as determined using the AFS cross-validation sampling technique. These expected error values are estimated by varying the cross-validation period. Generally speaking, AFS accuracy is good for both Kootenay and Bear Passes when all avalanche paths and types are considered. Moist/wet avalanche prediction is reasonably reliable at Bear Pass, but useless in Kootenay Pass due to the small number of positive test cases. This is true of all filtered datasets where the proportion of positive test cases is too small, such as datasets filtered by avalanche size class greater than three, or by natural avalanches at Kootenay Pass. Accuracy for downscaled predictions for subregions of Bear Pass could increase as more historical data is recorded. 
Predictive accuracy for some of these smaller datasets might be improved by using linear discriminant analysis, variable weights optimized by genetic algorithm, or some modern machine learning algorithm. Previous studies used as many as ten winters of data to train predictive models at Kootenay and Bear Passes. Having achieved reasonable performance and accuracy for both sites using half the input data or less suggests that the model will improve with each new season of data collected over the next several years. Predictions at Bear Pass are likely to vary more smoothly, with fewer spurious high and low probabilities, as the number of test cases increases.

4 Conclusions

Judicious choice of predictors and data pre-processing parameters optimizes the AFS so that unweighted average accuracies of 71% +/-2% and 76% +/-2% are achieved for Bear and Kootenay Passes respectively. Operational testing at Kootenay Pass showed that this is a promising tool that merits integration of the AFS with the MoT's systems. Numerical avalanche prediction can support and improve the human inductive reasoning process in conventional avalanche forecasting (McClung 2002b). The goal of this study is to create an automated system for numerical avalanche prediction using hourly electronic weather sensor data. The AFS allows the user to generate new datasets easily, to choose different prediction targets, and to assess the accuracy of the resulting algorithms that are based on these datasets and targets. It also enables scenario testing to show the accuracy of the model when some input variables are missing, or to test the effect of changes in current or forecast weather conditions. Past studies have shown the degree of accuracy that can be expected when applying numerical avalanche prediction in Bear Pass (Floyer and McClung 2003), at Kootenay Pass (McClung and Tweedy 1994), and using numerical weather prediction as inputs for extension of predictions into the future (Roeger et al. 2003). 
Accuracy of the current AFS might be improved at Kootenay Pass by applying the exact new snow density proxy found by Roeger et al. (2001), and at Bear Pass by conducting a similar analysis for that location. It may be helpful to build the next generation of AFS to allow for easier addition of algorithms for prediction and missing variable simulation, as well as addition of new proxy variables. Further studies are necessary to determine optimal density proxy relationships for each region, or to replace them altogether by applying non-linear multivariate statistical techniques. Accuracy could also be improved at both sites by adding weights to the predictor variables (Friedman 1994) and optimizing them stochastically (Purves 2002). Further improvement in model fitness might be achieved by including information about weather from recent time periods, rates of change of variables, or class I and II data. Specific plans for such developments are outlined in the following section. The most certain means of improving the performance of the AFS would be to increase the size of the dataset upon which predictions are based. Application of the AFS to new regions could be successful if sufficient data exists (greater than three winters), and if avalanche occurrences in these regions are as frequent as dry avalanches are in Kootenay Pass. The results show that there is promise for the use of hourly interval electronic weather sensor information for avalanche forecasting in different climatic regions across British Columbia. Weather information (class III data) is difficult to interpret in terms of the impact of incremental changes in weather on the instability of the snow cover. The AFS can help human forecasters minimize this uncertainty and increase the accuracy of human perception with regard to instability. 
This could be especially useful for MoT avalanche technicians in training, and for experienced technicians who are responsible for widely separated avalanche areas which are difficult to visit daily because of their number and the distance between them. In the current AFS, Bayesian revision allows the user to input their prior knowledge about the state of instability of the snow cover obtained through class I and II data. The Bayesian inference updating algorithm included in the AFS could in the future be expanded to automatically include information from lower entropy data classes in the numerical inference process, by using a threshold sum model or other algorithms such as Multiple Additive Regression Trees or the Snow Profile Assistant expert system developed for use at Bear Pass (McClung 1995). The autocorrelation window masking jackknife cross-validation function is vital for generating defensible accuracy values when considering hourly predictions. This function ensures that all test cases are independent of each other and that all periods from the same synoptic event as the test case are excluded from the training dataset. Furthermore, it greatly speeds generation of accuracy statistics, and allows the user to test variations in model fitness generated using different subsets of the training data. The nearest neighbour algorithm functions poorly when applied to sparse datasets. Therefore, down-scaling of predictions for different moisture classes and some sub-regions of the forecast areas might require use of different algorithms, such as linear discriminant analysis. However, without an increase in the size of the training dataset, linear methods might still not improve accuracy for some predictive targets. The Avalanche Forecast System created for the MoT has potential to be a useful addition to the suite of tools provided to the avalanche technicians. 
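The Bayesian revision mentioned above can be sketched as a simple odds update. This is an assumed form for illustration only, not the AFS's actual algorithm: the forecaster's class I and II knowledge is expressed as a likelihood ratio that revises the weather-based probability.

```python
# Hypothetical sketch of Bayesian revision of an avalanche probability.
# likelihood_ratio > 1 means class I/II observations favour instability;
# < 1 means they favour stability (the encoding is an assumption).
def bayesian_update(prior_p, likelihood_ratio):
    prior_odds = prior_p / (1.0 - prior_p)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)
```

For example, a weather-based probability of 0.2 combined with strongly unstable snowpack observations (likelihood ratio 4) is revised upward to 0.5.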
During operations, technicians at Kootenay Pass refer to the predicted probabilities to support their inductive reasoning regarding the state of instability of the snow cover. They use the nearest neighbour distances to judge the degree of uncertainty of the numerical prediction, and they review the avalanche occurrences listed in the nearest neighbour report to confirm and improve their intuition regarding patterns of weather that produce avalanches (Kevin Maloney, personal communication, 2007). Therefore, in spite of the fact that the predictions of the AFS are noisy due to the small training datasets used, the AFS is operationally useful and will likely improve over time.

5 Future directions

MoT avalanche technicians and management consider the pilot AFS to be a successful demonstration that numerical prediction tools can usefully integrate with current and future MoT information technology and avalanche forecasting operations. The prospects of future funding for expansion of the AFS as a standard decision support tool alongside SAWS are promising. Below is a list of features that will be considered for inclusion in the next generation of AFS, should funding be approved for this project. This is as complete a list as is possible to compile at this point. However, as happened with the pilot AFS, some items will be altered or postponed for future generations depending on time, operational needs, resources, and results of cost-benefit analysis.

5.1 True forecasts using numerical weather prediction output

Roeger et al. (2003) explicitly detail and verify methods for application of numerical weather prediction values to avalanche forecasting models. They showed that at Kootenay Pass, accuracy of avalanche prediction using 24 hour advance weather forecasts compares favourably with traditional now-casts using current weather conditions. This work will guide the incorporation of numerical weather prediction in the next generation of AFS. 
5.2 Avalanche activity index

Previous studies using LDA (McClung and Tweedy 1994) included the severity and magnitude of the avalanche potential in the prediction output, and the next generation will also include this capability. The avalanche activity index (AAI: the sum of sizes of all avalanches that occurred during a given period) could potentially be predicted using modern multivariate non-linear regression methods, and perhaps approximated by adapting the current nearest neighbour output structures. This was done by taking the average AAI of the k nearest neighbours (Roeger 2001).

5.3 Algorithm choice

All numerical models require that input data be preprocessed and organized in some fashion, and this is often a very time consuming and challenging aspect of the modeling process. Most often, the predictive algorithm itself exists as a body of code that need only be fed inputs, run, and interpreted. Given that the AFS preprocesses and feeds data automatically to the nearest neighbour algorithm, implementing new prediction algorithms will be easy once the code allows for substitution of new models. A generalized framework for substituting numerical algorithms could be implemented to allow the user a choice of prediction and simulation algorithms wherever they are needed: in avalanche prediction, memory variable generation, and missing value simulation. This functionality would enable rapid development and testing of new numerical prediction models applied to the problem of avalanche forecasting at smaller scales. The proposed framework requires three extensions of the current code: algorithm and loss criterion selection interfaces (including several choices of classical and recent loss criteria to suit modern statistical techniques), optimization controls, and output matching. 
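The nearest neighbour vote of Section 3.3.3 and the AAI approximation described in Section 5.2 can be combined in one short sketch. All names here are illustrative assumptions, not the AFS code: a period is flagged as an avalanche period when at least k/5 of its k nearest historical neighbours were avalanche periods, and the AAI is approximated by the neighbours' mean AAI, following the approach attributed above to Roeger (2001).

```python
import math

# Illustrative nearest-neighbour sketch (assumed names, not AFS code).
# history: list of feature vectors; labels: 1 = avalanche period, 0 = not;
# aai: avalanche activity index recorded for each historical period.
def nn_report(current, history, labels, aai, k=30, threshold_ratio=0.2):
    ranked = sorted(zip((math.dist(current, h) for h in history), labels, aai))
    top = ranked[:k]
    hits = sum(lab for _, lab, _ in top)
    prob = hits / k                              # fraction of avalanche neighbours
    avalanche_period = hits >= threshold_ratio * k  # the ~k/5 threshold
    predicted_aai = sum(a for _, _, a in top) / k   # mean AAI of neighbours
    return prob, avalanche_period, predicted_aai
```

`math.dist` (Python 3.8+) gives a plain Euclidean distance; the AFS presumably works on normalized variables so that no predictor dominates the distance.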
Modern statistical prediction algorithms often use more robust loss criteria than squared error loss, and modern analytical parameter optimization techniques (notably, gradient descent) demand more continuous loss functions than misclassification error (Hastie et al. 2001). Therefore, along with the capability to add new prediction algorithms to the AFS, there must also be a capability to add different loss criteria. Furthermore, each prediction algorithm has different manually set optimization hyper-parameters. Therefore, there needs to be a framework for adding new code for interfaces where the user can choose the model parameters for, and view output from, different prediction algorithms. Integration of different algorithm types through the Bayesian inference updating structures, using different classes of data that operate at different spatial and temporal scales, would allow downscaling of predictions without breaking physical and scaling laws in avalanche prediction. Modern classification and regression algorithms also permit the use of ordinal, categorical and nominal as well as numerical data, and this could potentially enable the use of new variables that have never been applied to operational numerical avalanche prediction. The AFS's flexible dataset creation routines would allow testing of these data with modern machine learning algorithms.

5.4 Integration of variables from different weather stations

The problem of downscaling predictions will require the use of all available reliable weather sensors. The next generation of AFS should allow the user to sample information from the most relevant sites according to their experience or according to results of statistical analysis. Comparison of the predictive accuracy of different combinations of variables from different sensors, using the dataset creation and cross-validation routines of the current AFS, could help support and guide these decisions. 
Significant increases in accuracy could result, such as in Kootenay Pass where temperatures are taken by a sensor on Stagleap Mountain at the elevation of most of the start zones. Given that all of the problematic avalanche paths at Kootenay Pass are south facing and often experience temperature inversions, the Stagleap temperatures would be a superior alternative to temperatures taken at the shady and forested weather plot. The ability to incorporate any or all variables from any or all weather stations will also enable the use of multivariate statistical techniques using a greater number of inputs. This will be useful for filling data gaps in the historic record as well as predicting the value that should be given by a currently inoperative sensor. Furthermore, this will allow fragmentation of the avalanche predictions into separate predictions for subregions of an area using inputs from the nearest and most relevant weather sensor data. This is important given the spatial variability of avalanche occurrences with respect to aspect, elevation and snow structure. Snowpack information is more spatially variable at the meso-scale than weather data; it may vary from one group of avalanche paths to another; and it can vary substantially relative to aspect and elevation. However, since class II (snowpack) factors are of lower informational entropy, they have potential to significantly improve predictions if they can be matched to weather-based predictions at the appropriate scale. For a large area such as Bear Pass, or for forecasting programs that are responsible for many widely separated avalanche paths (such as that in Revelstoke), proper matching of class II data is especially critical. The current AFS contains a framework for downscaling of predictions (by focusing on specific groups of avalanche paths) and combination of snowpack factors (by Bayesian inference updating). 
Substituting more locally relevant weather information would ensure proper matching of data at the same scale. The time series visualization function applied to historical data can be used to analyze the effectiveness of these combinations at a higher temporal resolution than simply by looking at fitness statistics.

5.5 Time series visualization

The current AFS retroactively predicts avalanches for a specified historic time period, and generates time series data showing avalanche occurrences, AAI, hazard ratings, and predicted probabilities. Future generations of AFS will display this information in graphs, and this will facilitate analysis of the relationship between predicted probabilities, the instability scale, and the hazard rating scale.

5.6 Relation of AFS output to the hazard scale

Further research is necessary to define appropriate systems for adjustment of probabilities predicted using different datasets and algorithms. Relation of predicted avalanche probabilities to the hazard rating system used by the MoT should be possible using the aforementioned time series visualization feature. This will be a useful exercise, as hazard rating levels are directly linked to the type of risk mitigation required: each level implies specific actions to be taken, such as beginning road patrols, removing road crew personnel from exposure to avalanches, or beginning active avalanche control. Linking the avalanche probabilities given by the numerical prediction models to the hazard rating scale used by the MoT would mean that the AFS probabilities would suggest a course of action in addition to indicating the level of instability due to meteorological forcing.

5.7 Dynamic automated algorithm selection

This is an extension of the dynamic algorithm selection routine in the original avalanche forecasting model at Kootenay Pass from McClung and Tweedy (1994). 
In their model, the first round of classification predicted whether the current period was a wet or dry avalanche period, and chose the best algorithm and input variables based on this prediction. In the second round of classification, the optimal algorithm and variables were used to generate a probability of avalanches. The next generation will include similar tiered classification procedures.

5.8 Memory variable generators

Currently, all memory variables such as snowfall and water equivalent of new snow are hard-coded into the AFS. If the AFS is to be transplantable to any operation, generation of optimal memory variables, which may vary from location to location, must be done using functions within the AFS. The user should be able to choose the raw variable(s) with which each memory variable will be generated, as well as the type of memory function to be applied to that variable. Currently, the AFS has four hard-coded memory functions which should be generalized in the next generation of AFS: the interval difference function, the density proxy, the hourly cumulation function, and the maximum/minimum loggers. An interval average function could be added to these. For cumulative variables such as the total snowpack height, the AFS uses an interval difference calculator. This function subtracts the value at some time in the past from the current value. For example, interval snowpack height changes indicate new snowfall. For precise high frequency measurements such as the rain gauge sensor, hourly accumulation of changes can be used. For example, the AFS computes water equivalent of new snowfall in this manner. Those algorithms do not require updating in the next generation of AFS. However, the density proxy variable should be generalized to allow optimization of this variable for different sites. 
Since it is unknown which kind of function will work best in any given region, the next generation of AFS will include a framework that allows externally proven proxy functions to be added to the AFS code with ease. See the above section on "algorithm choice" for more information. The generalized maximum and minimum log variable generator would allow specification of separate observation periods for each variable in which maxima and minima are computed. The maximum wind speed log variable that was hard-coded as a trial in the late stages of the pilot AFS development returns gust speeds, and therefore may not accurately reflect average wind speeds for the chosen observation period. Therefore, a function that takes an average of all values within the specified observation period might be a more relevant way to encode wind speed and direction history, and may be usefully applied to other variables as well. For example, precipitation rate is an important variable that is not included in the present AFS except at hourly intervals. Average water equivalent precipitation rate over several hours could replace the manually observed precipitation rate variable used in McClung and Tweedy (1994). In summary, to ensure maximal flexibility and widespread applicability, five memory variable generators will be considered for implementation in the next generation of AFS: the interval difference, hourly accumulation, extrema logger, interval average calculator, and a system for substituting new proxy variable functions. Some means of computing temperature trends would also be useful, perhaps using simple regression over a specified interval. More research is necessary in this respect.

5.9 Previous interval recall

The history of weather in an area governs stability, and explicit memory variables partially account for this. Modern multivariate statistical analysis techniques for time series data (such as SSA (Ghil et al. 
2002) and NLSSA (Hsieh 2004)) often take a different approach: they effectively include all interval observations of all predictors as separate variables. This way, memory information is encoded in the models without arbitrarily specifying each proxy function or memory variable lag period. Modern methods can illuminate multivariate correlation time scales and mine out memory effects from among all variables. The previous interval recall algorithm would query the historic and current information for values of each predictor taken at one or more time intervals in the past. For algorithms such as Neural Networks, Support Vector Machines, or Multiple Additive Regression Trees that can handle high dimensional datasets, one could include all variable values from the current observation as well as all those taken at t hour intervals over the past T hours (for example, 6 hour intervals over 36 hours). This would be far too many variables to use in the nearest neighbour analysis. However, the previous interval recall function could still be used in a slightly different way that eliminates the need for arbitrarily calculated memory terms such as the snowfall lag, and at the same time extracts potential memory value inherent in all input variables. In the regular nearest neighbour analysis, distances among data are calculated by comparing the current set of predictor values to each set of historical values. The previous interval recall function could be applied to an altered nearest neighbour distance computation so that, relative to the current forecast period and several previous periods, distances from each historic period and their corresponding preceding periods are computed. For example, if periods are 12 hours in length and memory information from 48 hours is relevant to the predictions, then nearest neighbour distances would be calculated for the current period and three previous periods. 
Obviously, the current data are most important, and each successive day would be less relevant to the calculation. Therefore a flexible algorithm for down-weighting successive days' contributions to the nearest neighbour distance is proposed here. It is represented mathematically by a weighting function in which d is the nearest neighbour distance and p indexes the periods in sequence, beginning at zero (for four periods, p = 3). The index of p begins at zero to ensure that the current period always has a weight of one. m is the decay factor (larger m leads to more rapid decay of the influence of previous periods), and n governs the amount of decay at each p (large values of n attribute more similar influence to successive periods). To implement this requires input data at regular intervals antecedent to each current and historic datum; yet missing data could introduce null values into the inverse weighted nearest neighbour calculations. Therefore, the proposed interval recall algorithm would look up the closest data (in terms of time) to the data in question, and if there is too much missing data to accomplish this, then that neighbour would be ignored in the nearest neighbour analysis.

5.10 Expanded filtration options

The occurrence filter function will allow the user to filter by all occurrence observation features in SAWS. Filters on input variables will be added to create restrictions that omit user defined time periods of data from created datasets (for example, data from summer months), as well as excluding extrema outside of a user specified range (to automatically reject erroneous data such as -6999 or new snowfall of 256 cm). Furthermore, some modern prediction algorithms are insensitive to erroneous and missing inputs, and therefore the next generation of AFS will allow the user to toggle all filters, including the historic missing data row rejection filter. 
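The down-weighting proposed in Section 5.9 might be sketched as follows. The thesis equation itself did not survive digitization, so the weight form w_p = exp(-m·p/n) used here is an assumption chosen only to satisfy the stated properties: the current period (p = 0) always has weight one, larger m decays the influence of previous periods faster, and larger n gives successive periods more similar influence.

```python
import math

# Hypothetical down-weighted nearest-neighbour distance (assumed form,
# not the thesis equation): current_periods[p] and historic_periods[p]
# are feature vectors, with p = 0 the present period and p increasing
# into the past.
def weighted_nn_distance(current_periods, historic_periods, m=1.0, n=2.0):
    return sum(math.exp(-m * p / n) * math.dist(c, h)
               for p, (c, h) in enumerate(zip(current_periods, historic_periods)))
```

Any weight family with these three properties would serve equally well in a sketch; the m and n defaults here are arbitrary placeholders.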
5.11 User activity logging system

This function would track the frequency of use and the manner in which the user interacted with the AFS, as well as logging alterations to datasets, schedules, filters, and algorithms.

5.12 Java interfaces for importing datasets and algorithms

This would make it easy to add new algorithms and historic datasets to the AFS as needed, such as when applying the AFS to new highways corridors or the ski industry. It would also include the ability to import snow profile information exported from SnoPro (copyright: Gassman Industries). This flexibility might make the AFS much more powerful as a research tool and more extensible for the MoT.

5.13 Spatial AFS display

The simplest display of this information would be to show a flag in the avalanche paths that ran on a given nearest neighbour event. Each flag could be clicked to show detailed information on that avalanche. The flags would need some kind of colour or symbol designation to show which neighbour they are associated with, so that one can correlate the avalanche occurrences to their associated neighbours. Also, along with the colour coding, one could arrange selected information about the avalanches in a circle around the occurrence flag. Each piece of information would have a designated location relative to the flag (above and to the left, down and to the right, etc.), such as is done with weather data for electronic weather stations displayed on a map. This way the user could quickly reference the information that interests them. One could also locate the flag in the avalanche path at the point corresponding to the maximum toe mass distance, to show in space, relative to where the avalanche initiated, where the main deposit of the avalanche came to rest. Many more creative solutions are possible.

5.14 Future directions summary

Many of the improvements listed in this section could increase the accuracy of the AFS, but would also add complexity.
Therefore, a cost-benefit analysis of new features is important to ensure that the accuracy of the more complex system is significantly better than that of simple ones. Testing new features in trial software, as was done in the AFS pilot program, and through separate statistical analyses, will inform such a cost-benefit analysis.

6 References

Brennan, J. 2006. The evolution of the Avalauncher. Proceedings of the International Snow Science Workshop 2006. Telluride, CO. pp. 572-575.

Breiman, L., J.H. Friedman, R.A. Olshen, and C.J. Stone. 1984. Classification and Regression Trees. Wadsworth, Belmont, CA. pp. 326.

Bovis, M.J. 1976. Statistical forecasting of snow avalanches, San Juan Mountains, southern Colorado. Journal of Glaciology. 18 (78), pp. 83-130.

Buser, O. 1983. Avalanche forecasting with the method of nearest neighbours: an interactive approach. Cold Regions Science and Technology. 8(2), pp. 155-163.

CAA. 1995. Canadian Avalanche Association Observation Guidelines and Recording Standards for Weather, Snowpack, and Avalanches. CAA, Revelstoke, BC, Canada. pp. 99.

Davis, R.E., K. Elder, D. Howlett and E. Bouzaglou. 1999. Relating storm and weather factors to dry slab activity at Alta, Utah and Mammoth Mountain, California, using classification and regression trees. Cold Regions Science and Technology. 30, pp. 79-86.

Denison and Malik. 2002. Nonlinear Methods for Classification and Regression. Wiley, West Sussex, England. pp. 227.

Durand, Y., Giraud, G., Brun, E., Merindol, L., Martin, E. 1999. A computer based system simulating snowpack structures as a tool for regional avalanche forecasting. Journal of Glaciology. 45 (151), pp. 469-484.

Floyer, J. 2003. Statistical avalanche forecasting using meteorological data from Bear Pass, British Columbia, Canada. M.Sc. Thesis, Dept. of Geography, University of British Columbia.

Floyer, J.A. and D.M. McClung. 2003. Numerical avalanche prediction: Bear Pass, British Columbia, Canada.
Cold Regions Science and Technology. 37, pp. 333-342.

Friedman, J.H. 1994. Flexible metric nearest neighbour classification. Stanford University, Department of Statistics, Technical Report. http://www-stat.stanford.edu/~jhf/ftp/flexNN.pdf

Ghil, M., M.R. Allen, M.D. Dettinger, K. Ide, D. Kondrashov, M.E. Mann, A.W. Robertson, A. Saunders, Y. Tian, F. Varadi, and P. Yiou. 2002. Advanced spectral methods for climatic time series. Reviews of Geophysics. 40(1), 1003, doi:10.1029/2000RG000092.

Haegli, P. 2004. Scale analysis of avalanche activity on persistent snowpack weaknesses with respect to large scale avalanche forecasting. PhD Thesis, Dept. of Earth and Ocean Sciences, University of British Columbia. pp. 254.

Haegli, P. and D.M. McClung. 2000. A new perspective on computer assisted avalanche forecasting: scale and scale issues. Proceedings of the International Snow Science Workshop 2000. Bozeman, MT, American Avalanche Association. pp. 66-73.

Hastie, T., Tibshirani, R., Friedman, J. 2001. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag, New York. pp. 520.

Hsieh, W. 2004. Nonlinear multivariate and time series analysis by neural network methods. Reviews of Geophysics. 42, RG1003, doi:10.1029/2002RG000112. pp. 25.

Jamieson, J.B. 1995. Avalanche prediction for persistent snow slabs. PhD Thesis, University of Calgary. pp. 258.

Kronholm, K., D. Vikhamar-Schuler, C. Jaedicke, K. Isaksen, A. Sorteberg, and K. Kristensen. 2006. Forecasting snow avalanche days from meteorological data using classification trees; Grasdalen, Western Norway. In: Gleason, A. (Ed.). Proceedings of the International Snow Science Workshop 2006. Telluride, CO. pp. 786-795.

LaChapelle, E.R. 1980. The fundamental processes in conventional avalanche forecasting. Journal of Glaciology. 26 (94), pp. 75-84.

McClung, D.M. 2000. Predictions in avalanche forecasting. Annals of Glaciology. 31, pp. 377-381.

McClung, D.M. 2002a.
The Elements of Applied Avalanche Forecasting, Part I: The Human Issues. Natural Hazards. Springer, Netherlands. 25, pp. 111-129.

McClung, D.M. 2002b. The Elements of Applied Avalanche Forecasting, Part II: The Physical Issues and the Rules of Applied Avalanche Forecasting. Natural Hazards. Springer, Netherlands. 25, pp. 131-146.

McClung, D.M. 1995. Use of expert knowledge in avalanche forecasting. Defence Science Journal. 45(2), pp. 117-123.

McClung, D.M. and P. Schaerer. 2006. The Avalanche Handbook, 3rd Edition. The Mountaineers, Seattle.

McClung, D.M. and Tweedy, J. 1994. Numerical avalanche prediction: Kootenay Pass, British Columbia. Journal of Glaciology. 40, pp. 350-358.

Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, San Mateo, CA. pp. 522.

Purves, R., J. Barton, and D. Wright. 2002. Cornice - development of a nearest neighbours model applied in backcountry avalanche forecasting in Scotland. In: Stevens, J.R. (Ed.), Proceedings of the International Snow Science Workshop, Penticton, BC, Canada. pp. 117-122.

Purves, R., K. Morrison, G. Moss and B. Wright. 2003. Nearest neighbours for avalanche forecasting in Scotland - development, verification and optimisation of a model. Cold Regions Science and Technology. 37, pp. 343-355.

Roeger, C. 2001. Verification of numerical weather prediction and avalanche forecasting. M.Sc. Thesis, Dept. of Earth and Ocean Sciences, University of British Columbia. pp. 146.

Roeger, C., McClung, D. and R. Stull. 2003. Verified combination of numerical weather and avalanche prediction models at Kootenay Pass, British Columbia, Canada. Annals of Glaciology. 38, pp. 334-346.

Schweizer, J. and B. Jamieson. 2003. A threshold sum approach to stability evaluation of manual snow profiles. Cold Regions Science and Technology. 37, pp. 233-241.

Stull, R. 2000. Meteorology for Scientists and Engineers, 2nd Ed. Brooks/Cole Thomson Learning. pp. 502.
Tweedy, J., J. Wylie, and K.J. Malone. 2007. [Personal communication. Kootenay Pass avalanche forecasting program, B.C. Ministry of Transportation and Highways, Nelson, B.C. Email: john.tweedy@gov.bc.ca.]