UBC Theses and Dissertations


Real-time flow forecasting Assaf, Hamed 1991

REAL-TIME FLOW FORECASTING

By HAMED ASSAF
B. Eng. (Civil Engineering), Kuwait University

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES, CIVIL ENGINEERING

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
June 1991
© HAMED ASSAF, 1991

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of ______
The University of British Columbia
Vancouver, Canada

Abstract

The main objective of this research is to develop techniques for updating deterministic river flow forecasts using feedback of real-time (on-line) flow and snowpack data. To meet this objective, previous updating methods have been reviewed and evaluated, and typical error patterns in flow forecasts have been analyzed using standard techniques. In addition, a new criterion based on the coefficient of determination and the coefficient of efficiency has been introduced to evaluate systematic errors in flow forecasts. Moreover, lagged linear regression has been suggested as a method for detecting and estimating timing errors. Arising from this initial work, two different updating procedures have been developed. Further work has shown that these two independent procedures can be usefully combined, leading to further improvement of the forecasts.
Arising from these methods, two additional approaches have been formulated: one for correcting timing errors and one for updating snowpack estimation parameters from flow measurements.

The first of the updating methods consists of a flow updating model which was developed to update the flow forecasts of the UBC watershed model using the most recent flow measurement. The updating process is achieved using the Kalman filter technique. The performance of the updating model is mainly controlled by the relative values of two parameters of the Kalman filter technique: the measurement variance and the state variance. It is found that the measurement variance is best selected as the square of a percentage of the flow. The updating model has been applied to the Illecillewaet river basin in British Columbia, and a significant improvement in flow forecasts has been observed.

The second method has been developed to update parameters of an energy budget snowpack model using on-line snowpack measurements. The updating procedure is based on calculating the value of a snowpack parameter that yields a perfect correspondence between measured and calculated snowpacks. The updated value is then used in the snowpack model to enhance its future forecasts with feedback from previous snowpack measurements. The snowmelts generated by the updated snowpack model are then routed to produce flow forecasts. Applying this model to the snowpack measured at Mt. Fidelity in the upper Columbia River Basin in British Columbia showed that both the snowpack forecasts and the flow forecasts generated from these updated snowpack forecasts were greatly improved.

Because the above two updating methods operate independently, they can be applied in combination whenever an appropriate measurement is available. The combined application of these two methods to data from the Illecillewaet river basin showed an additional improvement in flow forecasts.
As a further development, the snowpack estimation model has been adapted so that a Kalman filter approach can be used to update snowpack estimation parameters from flow measurements.

It is finally concluded that flow forecast updating requires the application of several methods, rather than one simple approach, because errors arise from various sources. In addition, updating procedures may prove useful in achieving a better calibration for watershed models.

Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgement

1 INTRODUCTION

2 Analysis of Errors in Flow Forecasts
  2.1 Sources of Errors
    2.1.1 Model Structure
    2.1.2 Model Parameters
    2.1.3 Input Data
    2.1.4 Reference Data Error
  2.2 Error Analysis Methods
    2.2.1 Graphical Presentations
    2.2.2 Least Squares Criteria
    2.2.3 The Autocorrelation and the Partial Autocorrelation Functions
  2.3 Summary and Conclusion

3 Approaches to Real-Time Flow Forecasting
  3.1 Error Prediction Models
  3.2 Flow Forecast Correction Models
  3.3 Transfer Function Models
  3.4 Conceptual Models
  3.5 Summary and Conclusion

4 The Kalman Filter
  4.1 Example of a Kalman Filter Application
    4.1.1 Conditions
    4.1.2 The State-Space Model
    4.1.3 Model Algorithm
  4.2 The Kalman Filter as a Bayesian Approach
  4.3 The Roles and the Selection of the State Noise Variance and the Measurement Variance
  4.4 Adaptive Kalman Filtering
  4.5 Extended Kalman Filtering
  4.6 Summary and Conclusion

5 A Flow Updating Model
  5.1 The UBC Watershed Model
  5.2 Modifying the UBC Watershed Model
  5.3 The State-Space Format of the Flow Updating Model
  5.4 Model Algorithm
  5.5 Selection of the State Noise Variance and the Measurement Variance
    5.5.1 Fixed Measurement Variance
    5.5.2 Percentage Measurement Variance
    5.5.3 An Adaptive Kalman Approach
  5.6 Application
    5.6.1 Overall Performance
    5.6.2 Examining Progressive Model Performance
    5.6.3 Detection of Linearity in the Flow Forecast Residuals
    5.6.4 Summary and Conclusion

6 A Snowpack Updating Model
  6.1 The Energy Budget of the Snowpack
    6.1.1 The Shortwave Heat Influx
    6.1.2 The Longwave Radiation
    6.1.3 The Convective and Advective Heat Transfer
    6.1.4 The Rain Melt
  6.2 Calculating the Snowpack in the UBC Watershed Model
  6.3 Sensitivity Analysis of the Snowpack Parameters
    6.3.1 The Cloud Cover
    6.3.2 The Snow Albedo
    6.3.3 The Snow Formation Function
  6.4 The Snowpack Updating Approach
  6.5 The Snowpack as a Function of the Net Energy Fluxes and the Snowfalls
  6.6 The Snowpack Parameters as a Function of the Snowpack
    6.6.1 The Cloud Cover Correction Factor ηc
    6.6.2 The Cloud Cover Parameter ΔTs
    6.6.3 The Snow Albedo Correction Factor ηa
    6.6.4 The Snow Formation Parameter Tr
  6.7 Algorithm of the Snowpack Updating Model
  6.8 Application of the Snowpack Updating Model
  6.9 Combined Application of the Snowpack and the Flow Updating Models
  6.10 Updating Parameters of the Snowpack Using Flow Measurements
  6.11 Summary and Conclusion

7 Discussion and Conclusions
  7.1 Sources and Analysis of Forecast Residuals
  7.2 Literature Review of the Real-Time Forecasting Models
  7.3 The Kalman Filter
  7.4 The Proposed Updating Procedures
    7.4.1 Direct Updating of the Flow Forecasts
    7.4.2 A Snowpack Updating Model
  7.5 Updating Snowpack Parameters Using Flow Measurements
  7.6 Detection and Correction of Timing Errors
  7.7 Combined Use of Updating Techniques
  7.8 General

REFERENCES

APPENDICES
A The Illecillewaet Watershed
B Detailed Algorithm of the Flow Updating Model
  B.1 Prediction
    B.1.1 The Initial Conditions
    B.1.2 For any time t > 0.0
  B.2 Updating
    B.2.1 The Kalman Gain
    B.2.2 The Weighting Factors
    B.2.3 The Covariance Matrix
  B.3 A Sample Calculation
    B.3.1 Prediction
    B.3.2 Updating
    B.3.3 Prediction for the Following Day
C Derivation of the Snowpack Function
D Derivation of the Flow-Snowpack Updating Model
  D.1 Development of the Basic Model
    D.1.1 Updating ηc
    D.1.2 Updating ΔTs
    D.1.3 Updating ηa

List of Tables

Evaluation of the flow forecasts in terms of several criteria

List of Figures

2.1a Observed flows and their estimates and residuals from a misrepresentative model
2.1b Observed flow vs. forecast from a misrepresentative model
2.2 Examples of plots of observed flows, forecast flows, residuals, and accumulated errors
2.3a Example of a linear plot of observed flow vs. forecast
2.3b Example of a logarithmic plot of observed flow vs. forecast
2.4 Example of a plot of residuals vs. residuals one day before
2.5 Hydrographs of the observed, the UBC and the timing error-updated flows and their residuals for a forecast lead of five days
2.6 Theoretical autocorrelation function of AR(1)
2.7 Theoretical autocorrelation function of AR(2)
2.8 Example of an autocorrelation function of flow forecast residuals
2.9 Example of a partial autocorrelation function of flow forecast residuals
3.1 Flow chart of a basic conceptual model
4.1a Effect of a good measurement and a poorly structured state equation on updating
4.1b Effect of a bad measurement and a well-structured state equation on updating
4.2 The Kalman filter as a Bayesian approach
4.3 Actual and measured flows and measurement errors (simulated data)
4.4 Actual and predicted states for a forecast lead of one day
4.5 Actual and predicted states for a forecast lead of five days
4.6 Actual and predicted states for state noise variances of 0.01 and 0.5
4.7 Actual states and the states estimated by the measurement only
4.8 The values of the gain S for two state noise variances
4.9 Actual and predicted states for a slowly time-varying state
5.1 A general flow chart of the UBC watershed model
5.2 The coefficient of efficiency, E, vs. fixed measurement variance (0 to 1000 (m³/s)²)
5.3 The coefficient of efficiency, E, vs. fixed measurement variance (0 to 20,000 (m³/s)²)
5.4 The coefficient of efficiency, E, vs. percentage-measurement variance
5.5 Hydrographs of the observed, UBC and updated flows and residuals for a forecast lead of one day
5.6 Hydrographs of the observed, UBC and updated flows and residuals for a forecast lead of 10 days
5.7 Hydrographs of the observed, UBC and updated flows and residuals for a forecast lead of 20 days
5.8 Hydrographs of the observed, UBC and updated flows and residuals for a forecast lead of 30 days
5.9 Successive daily updating of flow forecasts for forecast leads of 1, 2, ..., 10 days (R = 200 (m³/s)²)
5.10a The calculated flow components
5.10b The daily estimates of the weighting factors for a fixed measurement variance of 200 (m³/s)²
5.10c The daily estimates of the weighting factors for a percentage-measurement variance of 15%
5.11 Successive daily updating of flow forecasts for forecast leads of 1, 2, ..., 10 days (R = (15% flow)²)
5.12 The key figure for Figs. 5.9 and 5.11
6.1 Flow chart of the UBC snowpack model
6.2 Sensitivity of the snowpack to the cloud cover parameter ηc
6.3 The effect of under- or over-estimating the cloud cover on the snowpack
6.4 Sensitivity of the snowpack to the cloud cover parameter ΔTs
6.5 Sensitivity of the snowpack to the snow albedo parameter ηa
6.6 Sensitivity of the snowpack to the snow formation parameter Tr
6.7 The measured, un-updated and updated snowpacks for 1969-1970
6.8 The updated values of the cloud cover parameter ηc for 1969-1970
6.9 Hydrographs of the observed, un-updated and updated flow forecasts and their residuals for 1970
6.10 The measured, un-updated and updated snowpacks for 1974-1976
6.11 The updated values of the cloud cover parameter ηc for 1974-1976
6.12 Hydrographs of the observed, un-updated and updated flow forecasts and their residuals for 1975
6.13 Hydrographs of the observed, un-updated and updated flow forecasts and their residuals for 1976
6.14 Hydrographs of the observed, updated (snowpack) and updated (snowpack and flow) flow forecasts and their residuals for 1970
6.15 Hydrographs of the observed, updated (snowpack) and updated (snowpack and flow) flow forecasts and their residuals for 1975
6.16 Hydrographs of the observed, updated (snowpack) and updated (snowpack and flow) flow forecasts and their residuals for 1976

Acknowledgement

Great thanks and appreciation to my parents, Shafe and Fiaza, for their undying support throughout these long years in school. I greatly appreciate the support and advice of my research supervisor, Dr. Michael Quick, throughout my research and in preparing this thesis. I also thank the supervisory committee members, Dr. Denis Russell, Dr. Bill Caselton and Dr. Al Freeze, for their valuable comments on my thesis. Special thanks to my wife, Emily Mulleda, for her support and help in preparing this thesis.

This research was funded for the last two years by the B.C. Relief Fund administered by the Vancouver Foundation. This funding supports ongoing research in the Department of Civil Engineering on flood problems of importance to British Columbia. This research was also funded by a graduate fellowship awarded by the University of British Columbia and by the Earl R. Peterson Memorial Fellowship.

Chapter 1

INTRODUCTION

Operation and management of a watershed system requires accurate and reliable flow forecasts.
The accuracy of short-term flow forecasts is particularly important for effective flood control measures. Traditionally, flow forecasts were generated from physically-based models that simulate the behavior of the watershed system based on "presumably" error-free input of meteorological data (precipitation and temperature). The flow measurements were only used as reference data for calibrating model parameters. However, advances in processing hydrological data made it possible to have fast and reliable access to flow measurements. This encouraged hydrologists to develop real-time forecasting models, i.e. models that use real-time (on-line) hydrological data, as well as hydrometeorological data, in calculating estimates of future flows. The first generation of these models was based only on simple input-output relationships between meteorological inputs and measured flows, with no consideration of the underlying physical processes. Although these models do well for very short forecast leads, their lack of structure makes them unreliable for longer forecast leads. Later, more emphasis was placed on developing physically-based real-time forecasting models. The characteristics of both types of models, with examples from the literature, are discussed in Chap. 3.

The main objective of this research is to develop techniques for incorporating feedback of real-time data (flow, snowpack) in upgrading non-real-time conceptual watershed models without altering their main structures. This approach is important since most conceptual watershed models are not designed to operate in the real-time mode and have highly complex nonlinear structures that are difficult to adjust to operate in this mode. The UBC watershed model developed by M. C. Quick and A. Pipes (1989b) is a typical example of these models. It has been selected as a study object for this research. Study data are selected from the Illecillewaet basin in British Columbia.
Since the main objective of the updating procedure for any flow forecast is to reduce the residuals, or errors, in these forecasts, it became necessary to study the characteristics of these errors and to review existing evaluation techniques for flow forecasts. These issues, and the introduction of a new least squares criterion for detecting linear errors in flow forecasts, will be presented and discussed in Chap. 2.

Reviewing the updating procedures used in many of the real-time flow forecasting models showed that most of these procedures were based on the Kalman filter technique. The theoretical and practical aspects of this technique will be presented and discussed in Chap. 4.

The Kalman filter technique was used to develop a procedure for updating the flow forecasts from the UBC watershed model using real-time flow measurements. The updating procedure was designed on the assumption that the flow is equal to a weighted sum of the flow components calculated by the UBC watershed model. The selection of the updating parameters and the application of this updating method to data from the Illecillewaet basin in British Columbia will be presented and discussed in Chap. 5.

The extreme flows in mountainous watershed systems, of which the Illecillewaet basin is an example, are mainly generated by snowmelt during the warm spring and summer seasons. Snowmelt forecasting is therefore different from rainfall-induced flow forecasting, because most of the precipitation input and the snowpack are already known at the start of the season and can be updated during the melt season. Forecasts can therefore be made using historical records of temperature and precipitation for previous years, so that, by using a range of such historical records, it is possible to estimate a range of possible flows for the whole forecast snowmelt season, say from April to September.
To achieve these longer term forecasts, it is very important to have a well calibrated model; in addition, the ability to update the flow from the latest flow and snowpack information is likely to be very useful. For this reason, and because of the long residence time of the snow input, the quality of long-term forecasts of high flows is expected to be sensitive to the estimation of snowmelt in the watershed model. Based on this conclusion, a method has been designed to update parameters of the snowpack routine of the UBC watershed model using real-time snowpack measurements. This method was also applied to update the cloud cover estimates at the Mount Fidelity station, located on the upper part of the Illecillewaet basin. The updated cloud cover estimates were then used in the UBC watershed model to calculate flow forecasts for the Illecillewaet basin. Details of the development of this method, its algorithm and the results from its application will be presented and discussed in Chap. 6. Due to the interaction between the above two updating techniques, the combined use of both methods was studied through a combined application to data from the Illecillewaet basin.

Chapter 2

Analysis of Errors in Flow Forecasts

The purpose of any flow forecasting model is to simulate a given watershed system so that the flow at the basin outlet can be estimated from input data on precipitation and temperature. These models range from comprehensive watershed models to input-output black box models. For many reasons, to be discussed later, flow forecasts can never be perfect. The quantity and quality of such imperfections, or errors, vary considerably from one situation to another and can have very substantial implications for the management and the design of water resources projects and flood control measures.
Studying the characteristics of these errors is essential for developing updating techniques for flow forecasts and for improving the designs and calibrations of flow forecasting models. This chapter starts with a discussion of the main sources of these errors. It follows with a discussion of the methods used in analyzing model performance and forecast errors.

2.1 Sources of Errors

Residuals in flow forecasts can be attributed to errors in the structure of the flow forecasting model, errors in the model parameters, and errors in the input and the reference data. These sources of errors are discussed in the following sections.

2.1.1 Model Structure

Watershed systems are highly complex and diverse, and much is unknown about the underlying physical processes. This makes it necessary to use approximate lumped representations of these systems. Stochastic representations are used to simulate unknown processes. Time and budget constraints on data collection and operation further limit the ability of a flow forecasting model to simulate watershed behavior. However, flow forecasts may be substantially improved by applying some modifications to model structure.

Model structure plays an important part in the reliability of flow forecasts. Conceptual models, which are based on a physical representation of watershed systems, are observed to be superior to black box models, which are based on input-output relations, especially for long forecast leads (see Kitanidis & Bras, 1980c; Moore, 1986; and Georgakakos, 1986b). Flow forecast errors caused by a misrepresentative model are usually large and do not follow identifiable patterns (see the example shown in Fig. 2.1a). The adequacy of a model structure can be simply tested by making a linear regression between the observed hydrograph and the calculated hydrograph. A very small coefficient of determination D, to be defined later in section 2.2.2 (see Fig.
2.1b) means that the two hydrographs are only weakly correlated, thus indicating a serious problem in model structure.

Model over-parameterization and under-parameterization are problems that can vary dramatically in their complexity. In general, a well experienced model designer can detect and correct these problems. A simple example of model under-parameterization is a missing fast flow component in a watershed model, resulting in underestimation of the flow peaks.

[Fig. 2.1a Observed flows and their estimates and residuals from a misrepresentative model.]
[Fig. 2.1b Observed flow vs. forecast from a misrepresentative model.]

2.1.2 Model Parameters

Model parameters can be poorly calibrated or specified for one or more of the following reasons:

1. Errors in the model structure.

2. Inadequacy of the input data for the following reasons:

• Measurement errors.

• Inability of the input data to excite all the watershed system modes (i.e., its responses to the possible conditions in the river basin). For example, precipitation data collected during a dry season is useless for calibrating the fast rainfall-runoff unit hydrograph, since most of the rain infiltrates into the soil and into the groundwater system.

• Insufficient data record length. Model parameters calibrated from a short data record are unreliable. On the other hand, long data records provide the opportunity to detect the effects of systematic errors in the data.

• Insufficient number and poor distribution of data collection stations.
Troutman (1982) noted that if erroneous input data are used for calibrating model parameters, then input data with the same type of error produce better results in the forecasting period than those produced by more accurate data.

3. Parameter correlation: highly correlated parameters are difficult to calibrate, since a change in one parameter affects the other parameter(s). This is known as the identifiability problem.

4. Reference data errors: these are mainly caused by an insufficient data record, measurement errors and the use of erroneous runoff-stage rating curves.

2.1.3 Input Data

Input data may undermine the quality of flow forecasts for one or more of the following reasons:

• Inferior measurement quality due to human and/or equipment errors;
• Misrepresentative data stations; and
• Poorly distributed data stations.

If there are consistent input measurement errors, the result will be biased flow forecasts. If these errors are reasonably linear, they may be reduced using a method for updating flow forecasts based on the Kalman filter technique. An example is the flow updating model presented in Chap. 5.

Misrepresentative data stations may result in very poor flow forecasts with inconsistent errors, similar to those due to a misrepresentative model structure (see Fig. 2.1). These errors are difficult, if not impossible, to predict, so it may be worthwhile disregarding these stations and using other sources of input data.

The effect of the density and number of precipitation stations on flow forecasts depends mainly on the time-distribution of the precipitation in the watershed, the topographical and hydrological characteristics of the watershed, the variability and the time scale of the flows, and the criterion used in evaluating the flow forecasts.
Relatively uniformly distributed precipitation may be sufficiently estimated by one precipitation station, while for some events the average precipitation can be in serious error. For example, thunderstorm precipitation estimated from a single station may be either underestimated, if the station is located on the edge of the storm, or overestimated, if the station is located near the center of the storm. A relatively large number of precipitation stations is needed for a highly complex watershed. For highly variable flows, precipitation has to be measured more frequently. If the flow forecasts are evaluated by an average criterion, e.g. volume error, a smaller number of precipitation stations can be sufficient; if a more precise characteristic of the hydrograph is needed (e.g. the flow peak), a large number of precipitation stations may be needed.

2.1.4 Reference Data Error

High quality reference data (for example, flow measurements) are mainly used for model calibration and evaluation. In addition, they can be used as feedback for real-time flow forecasting models. Equipment and human factors are not the only causes of flow measurement errors, since flow rating curves may not represent all of the flow conditions, especially during high flood events.

2.2 Error Analysis Methods

Several forecast error analysis techniques are presented in this section. Each provides a distinctive view of the differences between measured and forecast hydrographs. Results of these analyses are important in evaluating the practical impacts of flow forecast errors, improving model calibration, and designing real-time flow forecasting models.

2.2.1 Graphical Presentations

Many of the characteristics of flow forecast errors can be easily detected through graphical presentations. Following are some of these plots:

1. Observed and forecast hydrographs (Fig. 2.2).

2. Error signal (Fig.
2.2), which helps in examining the distribution of forecast residuals and in spotting extreme values.

3. Accumulated errors (Fig. 2.2), which are important for evaluating the long-term effects of forecast residuals.

4. Observed flow vs. forecast (Fig. 2.3), which is especially important in detecting errors in flow rating curves. Linear plots help in detecting linear relationships between observed and forecast flows. For example, Fig. 2.3a shows that, in general, very small and very large flows are overestimated, while medium-range ones are underestimated. Logarithmic plots (Fig. 2.3b) may reveal the presence of nonlinear relationships.

5. Flow forecast residuals versus previous residuals, which is useful for detecting and estimating correlations between consecutive residuals. It can be clearly seen from Fig. 2.4 that a residual is highly correlated with its value one day before.

[Fig. 2.2 Examples of plots of observed flows, forecast flows, residuals, and accumulated errors.]
[Fig. 2.3a Example of a linear plot of observed flow vs. forecast.]
[Fig. 2.3b Example of a logarithmic plot of observed flow vs. forecast.]
[Fig. 2.4 Example of a plot of residuals vs. residuals one day before.]

In other words, the residuals can be expressed as follows¹:

r(t) = a r(t-1) + w(t-1)    (2.1)

where r is the forecast residual, a is a correlation coefficient, and w is a random noise component. The autocorrelation and the partial autocorrelation functions, to be discussed later, are more comprehensive methods for evaluating these lagged correlations.

2.2.2 Least Squares Criteria

The overall performance of a flow forecasting model can be evaluated against a reference model based on the sum of the squares of the residuals.
Several criteria can be formulated depending on the type of the reference model, and the following are some of the most important criteria:

¹This expression describes an autoregressive process of order one, designated AR(1).

The Coefficient of Efficiency E

The coefficient of efficiency, E, can be expressed as follows (Nash and Sutcliffe, 1970):

E = (S_0 - S) / S_0    (2.2)

where

S = \sum_{t=1}^{n} (Q_o(t) - Q_f(t))^2
S_0 = \sum_{t=1}^{n} (Q_o(t) - \bar{Q}_o)^2

and where Q_o is the observed flow, \bar{Q}_o is the mean of the observed flows for the given record, and Q_f is the forecast flow.

The quantity S is a direct measurement of the discrepancy between the observed hydrograph and the forecast hydrograph. A high value of S indicates a large discrepancy and therefore a poor forecast, while a small value of S indicates the opposite. S_0 is the value of S if the forecast hydrograph is a horizontal line equal to the mean observed flow. The coefficient of efficiency, E, is the difference between S_0 and S normalized to S_0. Therefore, E is a non-dimensional measure of the quality of the flow forecasts. A value of E close to unity indicates a good flow forecast, while a bad flow forecast is characterized by an E value much less than unity.

Choosing S_0 as the normalizing factor is based on the well known coefficient of determination, D, which will be presented in the next section. Using the same normalizing factor in both E and D makes it possible to compare the two coefficients, which will be proven, in a later section, to be useful in estimating systematic errors in the flow forecasts.
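The coefficient of efficiency is easy to compute directly from paired observed and forecast flows. The following Python sketch implements Eq. 2.2; the flow values are illustrative, not data from the thesis:

```python
def coefficient_of_efficiency(observed, forecast):
    """Coefficient of efficiency E = (S0 - S)/S0 (Eq. 2.2)."""
    mean_obs = sum(observed) / len(observed)
    # S: sum of squared residuals of the forecast
    s = sum((qo - qf) ** 2 for qo, qf in zip(observed, forecast))
    # S0: sum of squared deviations of the observations from their mean
    s0 = sum((qo - mean_obs) ** 2 for qo in observed)
    return (s0 - s) / s0

# Illustrative daily flows in m3/s (hypothetical values).
q_obs = [10.0, 14.0, 30.0, 55.0, 42.0, 25.0]
q_fc = [12.0, 15.0, 28.0, 50.0, 45.0, 24.0]
print(round(coefficient_of_efficiency(q_obs, q_fc), 3))  # close to 1: a good forecast
```

A perfect forecast gives E = 1, while a forecast no better than the mean observed flow gives E = 0.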
The Coefficient of Determination D

The coefficient of determination, D, can be expressed as follows:

D = (S_0 - S_r) / S_0    (2.3)

where

S_r = \sum_{t=1}^{n} (Q_o(t) - Q_r(t))^2

and where Q_r is the flow calculated from linear regression of the observed flow against the forecast, so that the criterion D measures the performance of the flow forecasts after removing the residuals linearly related to the forecast flow (usually called linear errors). This leads to the conclusion that the difference between the values of E and D is equal to the normalized sum of squares of the linear errors, which leads to the development of a very useful least squares criterion, PLE (percentage of linear errors), presented in the next section.

Percentage of Linear Errors PLE

The percentage of linear errors, PLE, is defined as follows:

PLE = \frac{D - E}{1 - E} \times 100    (2.4)

Therefore, PLE is the percentage ratio of the sum of squares of the linear errors to the sum of squares of all residuals. In other words, PLE measures the improvement in a flow forecast brought about by using linear regression. PLE is not limited to this regression model of Q_o vs. Q_f, but can also be used for testing other regression models of observed
A Cp value greater than zero means that the tested model performs better than the persistence model, while a value less than zero indicates the opposite. Consecutive short-term measured flows are generally highly correlated, leading to a good performance of the persistence model for short-term flow forecasts. Therefore, Cp is useful in evaluating short-term flow forecasts. On the other hand, sparsely measured flows are weakly correlated, causing the persistence model to be quite inferior for long-term flow forecasts. For this reason, Cp is impractical for evaluating long-term flow forecasts.

²An example is linear regression of Q_o vs. Q_s, Q_r and Q_g (the snowmelt, the rain and the groundwater flow components, respectively), where Q_f = Q_s + Q_r + Q_g.

The Coefficient of Extrapolation Ce

The coefficient of extrapolation, Ce, is defined as follows (Kitanidis & Bras, 1980c):

Ce = (S_e - S) / S_e    (2.6)

where

S_e = Σ_{i=1}^{N} (Q_o(i) - Q_e(i))^2

where Q_e(i) is the flow calculated by extrapolating the two most recent observed flows. A positive Ce value indicates that the tested model performs better than the extrapolation model, while a negative value indicates the opposite. Similar to the discussion regarding the coefficient of persistence Cp, high correlations among consecutive short-term measured flows make Ce useful for evaluating short-term flow forecasts, while weak correlations among consecutive long-term measured flows make Ce inefficient for evaluating long-term flow forecasts.

Selection of the Least Squares Criteria

The quantity S defined in equation 2.2 is a direct measure of the discrepancy between the observed hydrograph and the forecast hydrograph. However, the coefficient of efficiency E is a more suitable quantity for evaluating flow forecasts, since it is non-dimensional and high values of E are associated with good flow forecasts.
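The two naive-model criteria defined above can be computed directly; a sketch with hypothetical flows and a measurement interval of one index step:

```python
def coefficient_of_persistence(q_obs, q_fcst):
    """Cp = (S_p - S) / S_p; the reference forecast is the most recent
    measured flow, so sums start at the second record."""
    s = sum((o - f) ** 2 for o, f in zip(q_obs[1:], q_fcst[1:]))
    sp = sum((q_obs[i] - q_obs[i - 1]) ** 2 for i in range(1, len(q_obs)))
    return (sp - s) / sp

def coefficient_of_extrapolation(q_obs, q_fcst):
    """Ce = (S_e - S) / S_e; the reference forecast is a straight-line
    extrapolation of the two most recent observed flows."""
    q_ext = [2.0 * q_obs[i - 1] - q_obs[i - 2] for i in range(2, len(q_obs))]
    s = sum((o - f) ** 2 for o, f in zip(q_obs[2:], q_fcst[2:]))
    se = sum((o - e) ** 2 for o, e in zip(q_obs[2:], q_ext))
    return (se - s) / se
```

A perfect forecast gives Cp = Ce = 1, while a forecast identical to the persistence model gives Cp = 0.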
Choosing S_0 as the normalizing factor for E, as compared with S_p and S_e for the coefficient of persistence Cp and the coefficient of extrapolation Ce, respectively, has two main advantages:

1. S_0 is a fixed value regardless of the forecast lead, and

2. E can be compared with the coefficient of determination D to evaluate systematic errors in the flow forecasts.

The coefficient of persistence Cp and the coefficient of extrapolation Ce measure the performances of the persistence model and the extrapolation model, respectively. Since both models are quite powerful for short-term flow forecasts, Cp and Ce are useful in evaluating short-term flow forecasts. However, both criteria are inefficient for the evaluation of long-term flow forecasts, since both the persistence model and the extrapolation model perform very badly for long-term flow forecasts and therefore cannot be used as bases for comparison.

For applications with varying forecast leads, the use of Cp and Ce can be quite confusing, since the normalizing factors S_p and S_e, respectively, differ between forecast leads, leading to changes in the values of Cp and Ce even if the flow forecasts have not changed.

It is interesting to note that Cp is proportional to the difference between the coefficient of efficiency calculated for the persistence model and the coefficient of efficiency calculated for the flow forecasting model to be evaluated. This can be proven as follows:

• using Eq. 2.2, the coefficient of efficiency for the persistence model is:

E_p = (S_0 - S_p) / S_0    (2.7)

• from Eqs. 2.7, 2.2 and 2.5 it can be seen that:

Cp = (E - E_p) S_0 / S_p    (2.8)

The above indicates that using Cp in evaluating the flow forecasts is equivalent to comparing the values of E and E_p. It can be proven, similar to the above discussion, that for the coefficient of extrapolation:

Ce = (E - E_e) S_0 / S_e    (2.9)
where

E_e = (S_0 - S_e) / S_0

The approach of replacing the coefficients of persistence and extrapolation with the quantities E - E_p and E - E_e, respectively, has two advantages:

1. It uses one criterion, i.e. the coefficient of efficiency E, in evaluating the flow forecasts.

2. It makes it possible to evaluate flow forecasts against naive models other than the persistence and the extrapolation models.

Based on the discussion in this section, it can be concluded that the coefficient of efficiency E is the most suitable least squares criterion for evaluating flow forecasts. Therefore, E is chosen as the main criterion in this research.

Detection and Correction of Timing Errors in the Flow Forecasts

Observed flows can be regressed against lagged calculated flows to detect and estimate timing errors. Values of the coefficient of determination for several lags are compared, and the timing error is estimated as the lag corresponding to the highest value of the coefficient of determination. Lagged regression can also be applied between observed flows and calculated flow components. Based on this approach, a simple updating model can be used to detect, estimate and correct for timing errors in the flow forecasts. The algorithm of this model is as follows:

1. Lagged Regression

Multiple linear regression is applied for the past 10 observed flows, Q_o, vs. the lagged snowmelt flow components, Q_s, and the groundwater flow components, Q_g, calculated by the UBC watershed model. Lags of 3, 2, 1, 0, -1, -2 and -3 days are used. The timing error, Δ, is estimated equal to the lag that produces the highest coefficient of determination.

2.
Prediction

For the next 5 days the flow is estimated equal to a weighted lagged sum of the snowmelt and the groundwater flow components:

Q_u(t) = α_s Q_s(t - Δ) + α_g Q_g(t - Δ)    (2.10)

where Δ is the estimated timing error calculated from the previous step; and α_s and α_g are the coefficients of the regression that yielded the highest coefficient of determination.

3. Updating

After obtaining the flow measurements for the next five days, step (1) is repeated to update the estimates of the timing error and the coefficients α_s and α_g.

The timing error updating model was applied to the flow forecasts generated by the UBC watershed model³ for the Illecillewaet basin in British Columbia⁴ for the period April 15 to July 15, 1970. The measured flows, the non-updated flow forecasts, the updated flow forecasts, and their corresponding residuals are shown in Fig. 2.5. Applying the updating model resulted in a general improvement in the flow forecasts, as the coefficient of efficiency, E, increased from 0.74 to 0.82. The improvement in the flow forecasts is limited due to the high instability of the magnitude and the phase of the timing errors.

Although the timing error updating model did not substantially improve the flow forecasts for this particular event, it does provide a simple and logical approach for detecting and correcting timing errors. Moreover, this model is a good calibration tool for estimating timing errors in historical data.

³The UBC model is described in Chap. 5.
⁴Detailed information on the Illecillewaet basin is presented in Appendix A.

Fig. 2.5 Hydrographs of the observed, the UBC and the timing error-updated flows and their residuals for a forecast lead of five days.
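A simplified sketch of the lag-scanning idea, using a single regressor (the total calculated flow) rather than the two-component regression of step 1; the data and lag range are hypothetical:

```python
def _r_squared(xs, ys):
    """Coefficient of determination of a simple linear regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    if sxx == 0.0 or syy == 0.0:
        return 0.0
    return (sxy * sxy) / (sxx * syy)

def estimate_timing_error(q_obs, q_calc, max_lag=3):
    """Return the lag (in days) whose lagged regression of observed
    against calculated flow gives the highest R^2."""
    best_lag, best_r2 = 0, -1.0
    n = len(q_obs)
    for lag in range(-max_lag, max_lag + 1):
        pairs = [(q_calc[t - lag], q_obs[t])
                 for t in range(n) if 0 <= t - lag < n]
        r2 = _r_squared([p[0] for p in pairs], [p[1] for p in pairs])
        if r2 > best_r2:
            best_lag, best_r2 = lag, r2
    return best_lag
```

If the observed hydrograph arrives two days behind the model output, the estimated timing error is 2 days.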
2.2.3 The Autocorrelation and the Partial Autocorrelation Functions

The Autocorrelation Function

The autocorrelation function ρ(τ) measures the correlation between a stochastic variable x(t) and its value τ time steps ahead, x(t + τ). ρ(τ) is defined as follows (Priestley, 1987):

ρ(τ) = COV{x(t), x(t + τ)} / [VAR{x(t)} VAR{x(t + τ)}]^(1/2)    (2.11)

where COV{x(t), x(t + τ)} is the covariance of x(t) and x(t + τ); and VAR{x(t)} is the variance of x(t). Since covariances and variances are unknown quantities, the autocorrelation function can be estimated from a given sample as follows (IMSL, 1987):

ρ̂(τ) = Σ_{t=1}^{N-τ} (x(t) - x̄)(x(t + τ) - x̄) / Σ_{t=1}^{N} (x(t) - x̄)^2    (2.12)

where x̄ is the mean; N is the record length; and the hat ˆ indicates an estimate. ρ̂(τ) is usually called the sample autocorrelation function, to differentiate it from the theoretical autocorrelation function ρ(τ) described by Eq. 2.11.

Eq. 2.12 can be used to estimate the correlation between flow forecast residuals r(t) at different lags as follows:

ρ̂(τ) = Σ_{t=1}^{N-τ} (r(t) - r̄)(r(t + τ) - r̄) / Σ_{t=1}^{N} (r(t) - r̄)^2    (2.13)

The plot of the autocorrelation function vs. the lag is useful in identifying the lags at which the residuals are highly correlated. For example, if the residuals follow an autoregressive process of order one (AR(1)), as described below:

r(t) = α r(t - 1) + w(t - 1)    (2.14)

the corresponding autocorrelation function is as shown in Fig. 2.6. On the other hand, the autocorrelation function for an autoregressive process of order two (AR(2)), described by the following equation:

r(t) = α r(t - 1) + β r(t - 2) + w(t - 1)    (2.15)

is as shown in Fig. 2.7. Using the residuals of Fig. 2.2 as an example, the sample autocorrelation function is calculated and plotted in Fig. 2.8. The similarity between Fig. 2.8 and Fig. 2.6 indicates that these residuals follow, in general, an autoregressive process of order one. The bump in Fig. 2.8 indicates the presence of relatively small correlations between residuals at lags between 10 and 16 days.
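Eq. 2.12 translates directly into code; a sketch, applied to a synthetic residual series:

```python
def sample_acf(x, max_lag):
    """Sample autocorrelation function, Eq. 2.12 style: the denominator
    is the sum of squared deviations over the whole record."""
    n = len(x)
    mean = sum(x) / n
    denom = sum((v - mean) ** 2 for v in x)
    return [sum((x[t] - mean) * (x[t + tau] - mean)
                for t in range(n - tau)) / denom
            for tau in range(1, max_lag + 1)]
```

For a strictly alternating series the lag-one estimate is strongly negative and the lag-two estimate strongly positive, as expected.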
This is more evident in the partial autocorrelation function, to be discussed below.

It should be emphasized that the autocorrelation function cannot be used to compare the performances of two different flow forecasting models, since it only evaluates correlations among residuals without any consideration of the magnitudes of these residuals. This can be seen from Eq. 2.13, which shows that the autocorrelation function is non-dimensional and is not related to the absolute values of the residuals.

Fig. 2.6 Theoretical autocorrelation function of AR(1).
Fig. 2.7 Theoretical autocorrelation function of AR(2).
Fig. 2.8 Example of an autocorrelation function of forecast residuals.
Fig. 2.9 Example of a partial autocorrelation function of forecast residuals.

The Partial Autocorrelation Function

While the autocorrelation function is equal to the correlation between two lagged values of a variable, the partial autocorrelation function is defined as the correlation between these two lagged values less the portion of correlation caused by the influence of correlations among the intervening values. For example, in Fig. 2.6 the correlations for lags longer than one day are caused by the influence of the correlation at a lag of one day. Removing this influence results in zero correlations for lags longer than one day, and therefore the partial autocorrelation function for an autoregressive process of order one, AR(1), is equal to a single value at a lag of one day and zero for longer lags. Therefore, the partial autocorrelation function is a direct method for identifying the order of an autoregressive process.
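A sketch of estimating the partial autocorrelation function from a list of sample autocorrelations ρ̂(1), ..., ρ̂(K), using the standard Durbin (1960) recursion; the input values below are hypothetical:

```python
def partial_acf(rho, K):
    """Partial autocorrelations phi(k, k), k = 1..K, via the Durbin
    recursion; rho[tau - 1] is the sample autocorrelation at lag tau."""
    phi = [[0.0] * (K + 1) for _ in range(K + 1)]
    phi[1][1] = rho[0]
    for k in range(2, K + 1):
        num = rho[k - 1] - sum(phi[k - 1][j] * rho[k - 1 - j] for j in range(1, k))
        den = 1.0 - sum(phi[k - 1][j] * rho[j - 1] for j in range(1, k))
        phi[k][k] = num / den
        for j in range(1, k):
            phi[k][j] = phi[k - 1][j] - phi[k][k] * phi[k - 1][k - j]
    return [phi[k][k] for k in range(1, K + 1)]
```

For an AR(1) process with ρ(τ) = a^τ, only the lag-one partial autocorrelation is nonzero.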
The partial autocorrelation function can be estimated as the diagonal elements φ̂(k, k) of the matrix φ̂ defined by the following recursion (Durbin, 1960). For k = 1, ..., K, where K is the maximum possible order of the autoregressive process:

φ̂(1, 1) = ρ̂(1)

φ̂(k, k) = [ρ̂(k) - Σ_{j=1}^{k-1} φ̂(k-1, j) ρ̂(k-j)] / [1 - Σ_{j=1}^{k-1} φ̂(k-1, j) ρ̂(j)]

φ̂(k, j) = φ̂(k-1, j) - φ̂(k, k) φ̂(k-1, k-j),   j = 1, ..., k-1

The order of an autoregressive process is estimated as the argument m of φ̂(m, m) beyond which the partial autocorrelation vanishes. Using the residuals of Fig. 2.2 as an example, the partial autocorrelation function is calculated and plotted in Fig. 2.9. It is clear from Fig. 2.9 that the values of the partial autocorrelation function are insignificant beyond a lag of one; hence these residuals follow an autoregressive process of order one.

2.3 Summary and Conclusion

Misrepresentative models, miscalibrated parameters, erroneous input and reference data are the four main causes of errors in flow forecasting. Much of the character of these errors can be revealed through various plots of observed and forecast hydrographs, flow forecast residuals, accumulated errors, observed vs. forecast flows, and residuals vs. their lagged values.

Performances of flow forecasting models can also be evaluated by several least squares criteria. The coefficient of efficiency E measures the overall performance of the model, while the coefficient of determination D evaluates the model performance after removing linear errors. Therefore, the difference between these two criteria, or more specifically the newly introduced criterion PLE, is a direct measure of "potentially removable" linear errors. The coefficient of persistence Cp and the coefficient of extrapolation Ce compare the model performance to those of naive models. Based on the discussion in section 2.2.2, the coefficient of efficiency is selected as the primary least squares criterion in this research.
Based on lagged linear regression, a simple updating model was developed to detect, estimate and correct for timing errors in flow forecasts. Applying this updating model to flow forecasts generated by the UBC watershed model resulted in limited improvement in the flow forecasts. The inability of the updating model to significantly improve the flow forecasts is due to the high variability of the timing errors. However, this updating method is a useful tool for detecting and estimating timing errors in historical data.

The degree and persistence of correlation among residuals can be estimated using the autocorrelation function. The partial autocorrelation function is a more direct method for estimating the order of an autoregressive process. In spite of being excellent tools for analyzing error signals, these functions do not take into consideration the size and distribution of the residuals and therefore cannot be used for comparing the performances of different models.

Chapter 3

Approaches to Real-Time Flow Forecasting: A Literature Review

As has been stated earlier, the main objective of this research is to design methods for updating the flow forecasts of watershed models using real-time (on-line) data. These methods belong to the group of real-time flow forecasting models, which are defined as models that use real-time reference data (for example, flow or snowpack measurements) as well as input data in calculating flow forecasts. The updating procedures in real-time flow forecasting models follow two main approaches:

1. Updating flow forecasts generated from a time-invariant (off-line) model; and

2. Adjusting variables (parameters, water storages, etc.) of a time-variant (on-line) flow forecasting model to achieve a better correspondence between measured and forecast hydrographs.
The real-time models adopting the first approach are not flow forecasting models per se, since they are basically procedures designed to correct the flow forecasts of given flow forecasting models. Their application does not necessitate altering the structures of the corresponding flow forecasting models. These real-time models can be classified into two types:

• Error prediction models (section 3.1), and

• Flow forecast correction models (section 3.2).

The real-time models of the second approach are dynamic models specifically designed for the purpose of updating their variables from real-time data. These models can be classified according to the nature of their structures as follows:

• Transfer function models (section 3.3); and

• Conceptual models (section 3.4).

In the following sections the above four types of real-time models are discussed, with more emphasis on the last two types (i.e., the transfer function and the conceptual models).

3.1 Error Prediction Models

In many flow forecasting applications, residuals tend to be highly correlated with their past values. Using techniques established in the time series analysis literature (Box and Jenkins, 1970), the magnitudes and extents of these correlations can be evaluated. Corresponding algorithms can then be designed to simulate the relationships between consecutive residuals, and these can be used in estimating future residuals. Examples of these models can be seen in Szollosi-Nagy (1984), Moore (1982), Jones (1980) and Marivoet (1980). A very common example is the autoregressive (AR) model (Moore, 1986):

r(t) = -φ_1 r(t - 1) - φ_2 r(t - 2) - ... - φ_z r(t - z) + w(t - 1)    (3.1)

where r(t) = Q_f(t) - Q_o(t) is the forecast residual at time t, and Q_f(t) and Q_o(t) are the forecast and observed flows, respectively; φ_1, φ_2, ..., φ_z are model parameters; and w is a random noise of zero mean and known variance σ_w^2.
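A sketch of using Eq. 3.1 for one-step residual prediction and forecast correction; the parameter values are hypothetical:

```python
def predict_residual(phi, past_residuals):
    """One-step forecast of r(t) from Eq. 3.1 with the noise term w set
    to its mean (zero); past_residuals[0] is r(t-1), [1] is r(t-2), ..."""
    return -sum(p * r for p, r in zip(phi, past_residuals))

def updated_forecast(q_fcst, phi, past_residuals):
    """Since r = Q_f - Q_o, the corrected forecast is Q_f minus the
    predicted residual."""
    return q_fcst - predict_residual(phi, past_residuals)
```

With φ_1 = -0.8, residuals decay as r(t) = 0.8 r(t-1), so a recent residual of +10 m^3/s lowers a 100 m^3/s forecast to 92 m^3/s.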
At time t - 1, the future residual r(t) can be estimated as the right-hand side of Eq. 3.1 less w(t - 1), which is best estimated as zero¹. Model parameters can be estimated through calibration using historical data. In simple applications these estimates are kept fixed during the forecasting period. However, a more realistic approach is to update these estimates as new data become available, using one of the many recursive estimation schemes available in the literature (e.g., recursive regression, maximum likelihood).

Error prediction models are flexible and easy to apply. They are especially useful in providing feedback of flow measurements to highly complex off-line flow forecasting models. However, their performance is highly dependent on the persistence of forecast residuals. As noted by Moore (1986), this persistence is least in the vicinity of the hydrograph rising limb, while it is very strong on the falling limb. Unfortunately, this situation results in poor performance where it is needed the most (i.e., on the rising limb) and good performance where it matters the least (i.e., on the falling limb). As will be seen, this problem is also observed by other researchers working on other types of real-time forecasting models (i.e., the transfer function and the conceptual models).

Timing errors are another serious obstacle in applying error prediction models. A simple method for detecting and correcting timing errors was presented in section 2.2.2.

¹Since its mean is equal to zero.

3.2 Flow Forecast Correction Models

A comparative analysis between measured and forecast hydrographs (through graphical plots or regression analysis) may reveal certain relationships between the measured and the forecast flows. These may take the form of a bias or even a nonlinear relationship. Based on the results of these analyses, a model can be designed to update the flow
forecasts using real-time flow data. A simple example is the flow updating model that will be presented in Chap. 4, in which the updated flow is calculated as a linear function of the flow forecast calculated from an off-line flow forecasting model.

The forecast flow can also be updated as a function of outputs from an off-line flow forecasting model. An example is the flow updating model to be presented in Chap. 5, in which the flow is calculated as a weighted sum of the calculated flow components (snowmelt, rainfall and groundwater). The weighting factors are recursively updated from on-line flow measurements using the Kalman filter, which will be discussed in Chap. 4.

3.3 Transfer Function Models

Frequently referred to as black box or input-output models, transfer function models became popular after the advent of the text "Time Series Analysis" by Box and Jenkins (1970). In these models the flow is estimated as a weighted sum of the past observed flows and the input data (precipitation, temperature, etc.). An example is the following model (Moore, 1986):

Q_p(t + 1) = α_0 Q_o(t) + β_0 p_r(t) + β_1 p_r(t - 1) + β_2 p_r(t - 2)    (3.2)

where Q_p is the predicted flow; Q_o is the observed flow; p_r is the effective rainfall; and α_0, β_0, β_1 and β_2 are weighting factors. The above model has been proven (Moore, 1986) to be equivalent to the following unit hydrograph model:

Q_p(t) = v_0 p_r(t) + v_1 p_r(t - 1) + v_2 p_r(t - 2) + ...    (3.3)

where v_0, v_1, ..., v_n are the ordinates of the unit hydrograph. The principle of continuity requires that:

β_0 = 1 - (α_0 + β_1 + β_2)    (3.4)

Moore (1986) points out that the accuracy of estimating the effective rainfall is very critical to the success of the transfer function model. He also states that this problem is equivalent to choosing an appropriate loss model in the unit hydrograph model.
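A sketch of Eqs. 3.2 and 3.4, with β_0 recovered from the continuity condition; the weights and data are hypothetical:

```python
def tf_forecast(q_obs_t, p_r, a0, b1, b2):
    """One-step transfer-function forecast, Eq. 3.2 style.
    p_r holds the effective rainfalls [p_r(t), p_r(t-1), p_r(t-2)];
    b0 is fixed by the continuity condition b0 = 1 - (a0 + b1 + b2)."""
    b0 = 1.0 - (a0 + b1 + b2)
    return a0 * q_obs_t + b0 * p_r[0] + b1 * p_r[1] + b2 * p_r[2]
```

With constant effective rainfall and the flow already equal to it, continuity keeps the forecast at that equilibrium value.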
Most transfer function models are formulated in a state-space format to which the Kalman filter can be applied. Following are presentations and discussions of two transfer function models selected from the literature:

1. (Todini, 1978b):

Q_p(t + 1) = α_0 Q_o(t) + α_1 Q_o(t - 1) + β_0 p_d(t) + β_1 p_d(t - 1) + β_2 p_d(t - 2) + γ_0 p_w(t) + γ_1 p_w(t - 1) + γ_2 p_w(t - 2)    (3.5)

where, for dry soil conditions:

p_d(t) = p_p(t);  p_w(t) = 0

and for wet soil conditions:

p_d(t) = 0;  p_w(t) = p_p(t)

where p_p is the total precipitation. The soil moisture condition is determined by the antecedent precipitation index (API), calculated as a function of time:

API(t) = θ API(t - 1) + p_p(t)    (3.6)

where θ is a constant. If the API is less than a threshold value T, the soil is considered dry; otherwise the soil is considered wet. The model takes into consideration the effect of soil conditions on the rainfall-runoff relationship by assigning small values to the parameters β_0, β_1 and β_2 and larger values to the parameters γ_0, γ_1 and γ_2. This causes the precipitation under dry conditions, p_d(t), to be passed to the flow more slowly than the precipitation under wet conditions, p_w(t).

The flow is estimated by applying the mutually interactive state/parameter estimation (MISP) technique (Todini, 1976) to a state-space format of Eq. 3.5. The MISP is basically an adaptive Kalman filter method². The model shows a satisfactory performance for a forecast lead of two hours. Close examination of the measured and the forecast hydrographs shows that although the flow forecast of the falling limb is good up to a forecast lead of six hours, model performance on the rising limb is quite poor beyond a forecast lead of four hours. This comes as no surprise, since Eq. 3.5 shows that the forecast flow is dependent on the correlation of the flow with the two most recent flows. This correlation is strong and consistent on the falling limb, while it is weakest on the rising limb.
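The API switching of Eqs. 3.5 and 3.6 can be sketched as below; θ, the threshold, and the initial API of zero are hypothetical calibration choices:

```python
def split_precipitation(p_total, theta=0.9, threshold=25.0):
    """Split total precipitation into the dry-condition series p_d and
    the wet-condition series p_w using the antecedent precipitation
    index API(t) = theta * API(t-1) + p_p(t), Eq. 3.6."""
    api, p_dry, p_wet = 0.0, [], []
    for p in p_total:
        api = theta * api + p
        if api < threshold:          # dry soil: precipitation routed slowly
            p_dry.append(p)
            p_wet.append(0.0)
        else:                        # wet soil: precipitation routed quickly
            p_dry.append(0.0)
            p_wet.append(p)
    return p_dry, p_wet
```

A steady 10-unit rainfall starts in the dry series and crosses to the wet series once the API exceeds the threshold.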
This problem is shared by all transfer function models.

2. (Burn, 1985):

Q_p(t + 1) = α_0 Q_o(t) + α_1 Q_o(t - 1) + β_0 p_p(t) + β_1 p_p(t - 1) + β_2 p_p(t - 2) + β_3 p_p(t - 3) + β_4 p_p(t - 4) + γ_0 T(t) + γ_1 T(t - 1) + γ_2 T(t - 2)    (3.7)

²Adaptive Kalman filtering will be discussed in section 4.4.

where T(t) is the temperature index for snowpack melting, which enters the snowmelt relation:

m(t) = d(t) T(t) SCA(t)    (3.8)

where m(t) is the snowmelt at time t; d(t) is a degree-day factor relating the temperature index to the depth of the water column produced on that day; and SCA(t) is the snow-covered area as a percentage of the entire basin area.

The MISP technique, mentioned in the previous example, has been applied to a state-space format of Eq. 3.7. In a later work (Burn, 1988), the above model was modified by including a time index and precipitation forecasts. In both versions the model performance was acceptable only for short forecast leads (one day or less). Longer forecast leads (two and three days) show a dramatic deterioration in flow forecasts. This is typical of transfer function models.

Both of the transfer function models presented above share the following properties, which can be generalized to this class of real-time flow forecasting models:

1. Their dependence on data feedback gives them the ability to perform quite well for very short forecast leads, while their lack of a physically based structure eliminates their use for long or even medium forecast leads.

2. While the dependence on the correlation among consecutive flows is the main factor in their exceptional performance on the hydrograph falling limb, it is also the reason for inferior forecasts on the rising limb.

3. The simplicity of these models makes them easy and flexible to apply. In addition, less time and effort are needed in calibration and operation.
3.4 Conceptual Models

The term conceptual describes those models designed deterministically to simulate physical processes of the watershed system. In these models the soil is visualized as a network of water storages. Inputs of rainfall and/or snowmelt, less losses, are split among some of these storages according to a priority system. Excesses of water from high priority storages are passed down to lower priority storages, and so on. The outflow from each of these storages is assumed to be a function of the storage itself (Q = F(S)). These outflows may then be passed directly to the basin outlet (fast flow) or may be routed through a series of storages before reaching the basin outlet (e.g., ground water). Fig. 3.1 shows the structure of a basic conceptual model.

The nonlinear nature of conceptual models and the presence of a large number of variables pose a difficult problem in adapting these models to the real-time mode. This problem can be solved by linearizing the model equations (e.g., through Taylor expansion) and selecting only sensitive variables for updating. Two examples of real-time conceptual models are discussed below:

1. (Kitanidis and Bras, 1980 (b & c))

The National Weather Service Catchment Model (NWSC) has been linearized using the method of statistical linearization (Gelb, 1986) for threshold-type equations and Taylor expansion for the rest. The Kalman filter is applied to a state-space format of the linearized model to update eight variables (states): five soil moisture storage elements, impervious area, channel storage, and the coefficient of the linear reservoir routing scheme. From reviewing the results of several runs of this model, the following has been observed and concluded:

Fig.
3.1 Flow chart of a basic conceptual model.

• The model is essentially sensitive to only three variables: the lower zone secondary soil storage (fast base flow), the channel storage, and the coefficient of the linear reservoir routing scheme. The model performance was greatly improved when the state noise variances associated with these variables were increased.

• Supplementing the model with a scheme for detecting isolated errors in variables (developed by Kitanidis and Bras (1980a)) resulted in slightly inferior performance for short forecast leads (up to 18 hours) and marginal improvement in performance for longer forecast leads (up to 36 hours).

• Comparing the performance of this conceptual model with those of naive models (persistence and extrapolation) and a black box model shows that although there are no essential differences in performance for very short forecast leads (six hours), the conceptual model is much superior to the other models, especially the extrapolation and black box models, for longer forecast leads.

• Model performance deteriorates rapidly for longer forecast leads. The coefficient of efficiency E (defined in Chap. 2) lost about 30% of its value as the forecast lead was increased from 6 hours to 36 hours. This raises some doubts about the validity of this model for use in long-term flow forecasting.

• The authors raise the most serious problem in real-time forecasting: i.e., predicting the beginning of the rising limb and the magnitude and timing of the peak. The rising limb is driven by the storage in the upper zone (surface runoff), which is unobservable until it starts to occur, by which time there is little time left for the model to update its states before the flow reaches its peak.

• Timing errors may lead to erroneous updating of the variables.

• It is noted that the period used for testing model performance is the same one
used for selecting the state noise variance. Therefore, the results shown do not represent a "true" real-time test of the model.

2. (Georgakakos and Bras, 1986 (a & b))

Three physically based models have been integrated into a single state-space model. A precipitation model (Georgakakos, 1984 (a & b)) yields real-time precipitation forecasts from inputs of temperature, pressure and dew point temperature. Then, the precipitation input and flow measurements are processed through a soil moisture accounting model (Kitanidis and Bras, 1980 (a & b)) to estimate the flow at the basin outlet. Lastly, the flow is routed through a nonlinear channel routing model (Georgakakos, 1982) to calculate downstream flows. This model is thus basically a more developed version of the model discussed in the previous section (Kitanidis and Bras, 1980 (b & c)).

Although the precipitation forecasts generated by this model are found to be quite poor, they had little impact on the quality of the flow forecasts, indicating that errors in precipitation may have been offset by adjusting the soil moisture storages.

In spite of the model's physical background and comprehensiveness, it was outperformed by naive models (persistence and extrapolation) for forecast leads of 6 and 12 hours. The conceptual model did better than the naive models for longer forecast leads (18, 24 and 30 hours); however, its performance deteriorated very rapidly as the forecast lead was increased. So, in general, this model is not expected to be reliable for longer forecast leads.

3.5 Summary and Conclusion

Real-time flow forecasting models are classified into four types. Error prediction and flow forecast correction models are based on adjusting flow forecasts generated from off-line flow forecasting models. They are simple and flexible approaches for integrating real-time data into complex and rigid flow models.
Transfer function and conceptual models are designed, mostly in a state-space format, to exploit real-time data in updating some of the model variables. Transfer function models are based on the input-output relationship without consideration of the driving forces behind that relationship, while conceptual models are designed to simulate the physical processes in the watershed system.

Transfer function models are suitable for very short forecast leads. However, their dependence on feedback makes them unreliable for longer forecast leads. In such situations conceptual models are considered to be better alternatives. Unfortunately, the performances of conceptual models deteriorate rapidly for relatively long forecast leads (beyond two days³).

All types of real-time flow forecasting models are observed to be superior on the falling limb of the hydrograph. Unfortunately, forecasting on the rising limb is generally very poor. For the transfer function models this is attributed to the correlation between consecutive flows, which is strong on the falling limb and weak on the rising limb. For conceptual models, the falling limb is controlled by the water storage of the base flow, which changes very slowly and is therefore easier to update; the rising limb is driven by the water storages of the surface flow and the interflow, which are usually too variable for the updating routine to keep track of the changes.

In addition to the errors discussed, the real-time models are also susceptible to timing errors. All the above models lack any scheme to detect such errors. A simple method to solve this problem was presented in section 2.2.2.

³As observed from the applications of the two models discussed in this chapter.

Chapter 4

The Kalman Filter

As has been noted in Chap. 3, most of the updating procedures used in real-time flow forecasting models are based on the Kalman filter technique.
The popularity of this technique, not only in hydrological forecasting but in many other engineering applications, is due to its ability to provide feedback from on-line data to update the model's estimates in an easily applicable recursive approach. The main features of the Kalman filter, and an example of its application to updating flow forecasts, are presented and discussed in this chapter.

4.1 Example of a Kalman Filter Application

An attempt to read the original work of Kalman (1960, 1961), or many of the publications concerning the theoretical background of the Kalman filter, may discourage many hydrologists from pursuing a full understanding of this approach and its potential applications in hydrology. This situation stems from the unfamiliar terminology and the relatively advanced statistical concepts used in the literature. To avoid this problem, the Kalman filter is presented here through a simple example of its application in updating flow forecasts. The example sheds light on some of the practical features of the Kalman filter, leading the way to a discussion of important aspects and problems in applying this technique.

In this example, daily flow at a basin outlet is being forecast by a watershed model, and a Kalman filter-based procedure is designed to update these flow forecasts for different forecast leads.

4.1.1 Conditions

The Kalman filter was developed based on the assumption that the variable to be updated, referred to as the state, is normally distributed and is equal to a linear function of its past values plus a normally distributed random error. In addition, the measurement is assumed to be a linear function of the state plus a normally distributed random error. Based on these assumptions, the following conditions are set for the application of the flow updating model.

1.
The observed flow is a linear function of the flow calculated by the watershed model:

Q_o(t) = a(t) Q_e(t) + v(t)   (4.1)

where Q_o is the observed flow; Q_e is the flow estimated by the watershed model; a is a correction factor, referred to as the state; and v is a random noise of zero mean and variance σ_v² (referred to as the measurement variance). Eq. 4.1 is referred to as the measurement equation, since it describes the relationship between the measurement and the state. The flow measurements are available at intervals of Δt.

2. The state a is related to its past value as follows:

a(t_k) = a(t_{k-1}) + w(t_{k-1})   (4.2)

where t_k − t_{k-1} = Δt; and w is a random noise of zero mean and variance equal to σ_w². The variance σ_w² is referred to as the state noise variance. Eq. 4.2 is referred to as the state equation, since it describes the structure of the state.

3. The noises v and w are uncorrelated. This is a reasonable assumption, since the method of measuring the flow, and therefore the accuracy of the flow measurement, is independent of the method used in estimating the flow forecasts.

4.1.2 The State-Space Model

The combined set of the state and the measurement equations (Eqs. 4.2 and 4.1, respectively) is usually referred to as the state-space model, which can be written in the following conventional matrix format of the Kalman filter literature:

x_k = x_{k-1} + w_{k-1}   (State equation)
z_k = H_k x_k + v_k       (Measurement equation)   (4.3)

where x_k = [a(t_k)]; w_k = [w(t_k)]; z_k = [Q_o(t_k)]; H_k = [Q_e(t_k)]; and v_k = [v(t_k)]. The state-space format is written in matrix form since, in general, the state vector x_k may be composed of several states, as in the flow updating model presented in Chap. 5, and z_k can be a vector of several measurements (e.g., rain, flow, snowpack, etc.).
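Before the filter algorithm is presented, the state-space pair above can be made concrete by simulating it directly. The following Python sketch is not part of the thesis: the model flow series, the initial state and the noise variances are invented for illustration only.

```python
import random

random.seed(1)

def simulate_state_space(q_est, a0=1.0, state_var=0.01, meas_var=400.0):
    """Simulate the scalar state-space model of Eqs. 4.1-4.2:
       state (Eq. 4.2):       a(t_k) = a(t_{k-1}) + w(t_{k-1}),  w ~ N(0, state_var)
       measurement (Eq. 4.1): Qo(t_k) = a(t_k) Qe(t_k) + v(t_k), v ~ N(0, meas_var)
    q_est is the series of flows Qe produced by the watershed model."""
    a = a0
    states, observed = [], []
    for qe in q_est:
        a = a + random.gauss(0.0, state_var ** 0.5)   # state equation: random walk
        states.append(a)
        # measurement equation: scaled model flow plus measurement noise
        observed.append(a * qe + random.gauss(0.0, meas_var ** 0.5))
    return states, observed

# Invented model flows for five days; the "observed" flows drift away from
# them as the correction factor a(t) wanders.
states, observed = simulate_state_space([40.0, 45.0, 50.0, 60.0, 55.0])
```

Setting both variances to zero recovers the deterministic case Q_o = a_0 Q_e, which is a convenient sanity check on the simulation.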
The state-space model is a convenient format for the application of the Kalman filter for the following reasons:

• It explicitly presents the two sources from which the state is estimated, i.e. its past values and the measurements.
• Its time-domain format is suitable for the application of recursive approaches like the Kalman filter.
• It explicitly presents the stochastic characteristics of the measurement and the state.

4.1.3 Model Algorithm

The following is the algorithm of the model for updating flow forecasts:

1. Prediction before a measurement is available. The state x_k can be estimated from the state equation (Eq. 4.3) as follows:

x̂_k(-) = x̂_{k-1}(+)   (4.4)

where the hat "^" indicates that the quantity is an estimate; the negative sign "-" means that the estimate is calculated before the measurement is available; and the positive sign "+" means that the estimate is calculated after the measurement becomes available. The state noise w_{k-1} disappears from Eq. 4.4, since its best estimate is its mean, which is equal to zero. Another important quantity associated with the state is its variance σ_x², which is estimated using the following equation:

P_k(-) = P_{k-1}(+) + G_{k-1}   (4.5)

where P_k = [σ_x²(t_k)] and G_k = [σ_w²(t_k)]. The state variance P_k(-) is important for estimating the confidence limits of x̂_k(-).

2. Updating. Once a new flow measurement becomes available (at time t_k), the state x_k can be estimated from the measurement equation (Eq. 4.3) as follows:

H_k x_k° = z_k   (4.6)

leading to:

x_k° = Q_o(t_k) / Q_e(t_k)

where the small circle "°" indicates that the estimate is obtained from the measurement only. The measurement noise v_k does not appear in the above equation since its best estimate is its mean, which equals zero. x̂_k(+) can then be estimated as a weighted sum of the estimates from Eqs. 4.4 and 4.6 as follows:

x̂_k(+) = [I − S_k] x̂_k(-) + S_k x_k°   (4.7)
The Kalman filter is based on deriving the value of the gain S_k that optimizes the estimate x̂_k. An optimal estimate is one that is unbiased [1] and yields the minimum sum of squares of forecast residuals. Under these conditions the gain S_k can be expressed as follows:

S_k = K_k H_k   (4.8)

where K_k is the Kalman gain, defined as follows:

K_k = P_k(-) H_kᵀ [H_k P_k(-) H_kᵀ + R_k]⁻¹   (4.9)

where R_k = [σ_v²(t_k)]. In the literature of the Kalman filter, the Kalman gain K_k is used instead of the gain S_k, since Eq. 4.7 is usually expressed in this literature as follows:

x̂_k(+) = x̂_k(-) + K_k (z_k − H_k x̂_k(-)) = x̂_k(-) + K_k r_k   (4.10)

where H_k x̂_k(-) is the estimate of the measurement z_k (see the measurement equation, Eq. 4.3) and r_k is the forecast residual. Eq. 4.10 has a very important computational significance, since for multiple-state applications (for example, the flow updating model presented in Chap. 5) the state estimate from the measurement only, x_k° (Eq. 4.6), cannot be uniquely determined, and therefore Eq. 4.7 cannot be used. However, Eq. 4.7 can be applied to single-state applications (for example, the flow updating model presented in this chapter), and is particularly useful in illustrating explicitly that the state is estimated from two independent sources of information, namely the state equation and the measurement equation. Moreover, the gain S_k is a non-dimensional matrix, whereas the Kalman gain K_k has units related to the given application; for the example presented in this chapter, the Kalman gain has units of the reciprocal of flow.

[1] An unbiased estimate x̂_k(+) is one that converges to the actual value as more measurements become available.

The estimate of the state variance P_k is updated by the new measurement as follows:

P_k(+) = [I − S_k] P_k(-)   (4.11)

3. Prediction. The state value for the next time interval, Δt, can be estimated using Eq.
4.4 as follows:

x̂_{k+1}(-) = x̂_k(+)   (4.12)

The state variance for the next interval, Δt, can be estimated using Eq. 4.5 as follows:

P_{k+1}(-) = P_k(+) + G_k   (4.13)

The measurement z_{k+1} can be estimated using the measurement equation as follows:

ẑ_{k+1} = H_{k+1} x̂_{k+1}(-) = [Q_u(t_{k+1})]   (4.14)

where Q_u(t_{k+1}) = Q_e(t_{k+1}) a(t_{k+1}) is the updated flow forecast for time t_{k+1}. The variance of ẑ_{k+1} is estimated as follows:

VAR(z_{k+1}) = H_{k+1} P_{k+1}(-) H_{k+1}ᵀ   (4.15)

It is interesting to note from Eqs. 4.8 and 4.15 that:

H_k K_k = VAR(z_k) [VAR(z_k) + R_k]⁻¹   (4.16)

The above equation shows that K_k is a function of the ratio of the measurement prediction variance VAR(z_k) to the sum of the measurement prediction variance and the measurement noise variance R_k. Applying this equation to the flow updating model discussed here shows that the gain S_k is equal to this ratio [2]:

S_k = VAR[Q_u(t_k)] / (VAR[Q_u(t_k)] + σ_v²)   (4.17)

Since the variance is a measure of accuracy, it can be seen that the contributions of the state and the measurement equations in estimating the state are weighted according to their relative accuracies. A good measurement (small R_k) compared to a poorly structured state equation (large G_k and therefore large VAR(z_k)) results in a large S_k, giving more weight to the measurement in estimating x_k (see Eq. 4.7). The opposite situation results in a small S_k and therefore a small contribution of the measurement in estimating x_k (see Fig. 4.1).

4.2 The Kalman Filter as a Bayesian Approach

The Kalman filter can be looked upon as a Bayesian approach which updates the normal probability density function (expressed by its mean and variance [3]) of a state using relevant measurements.
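The complete prediction-updating cycle above (Eqs. 4.4-4.13) reduces, for this single-state example, to a few lines. The following Python sketch is mine, not the thesis's algorithm listing, and the numbers in the usage line are invented:

```python
def kalman_cycle(x, P, q_est, q_obs, G, R):
    """One cycle of the scalar Kalman filter of Sec. 4.1.3.
    x, P  : state estimate a and its variance from the previous step
    q_est : flow from the watershed model (H_k in Eq. 4.3)
    q_obs : measured flow z_k, or None if no measurement is available
    G, R  : state noise variance and measurement variance."""
    # Prediction (Eqs. 4.4-4.5): the random-walk state is unchanged,
    # but its variance grows by the state noise variance.
    x_minus, P_minus = x, P + G
    if q_obs is None:
        return x_minus, P_minus
    # Updating (Eqs. 4.9-4.11).
    K = P_minus * q_est / (q_est * P_minus * q_est + R)  # Kalman gain, Eq. 4.9
    r = q_obs - q_est * x_minus                          # forecast residual
    x_plus = x_minus + K * r                             # Eq. 4.10
    P_plus = (1.0 - K * q_est) * P_minus                 # Eq. 4.11, with S_k = K_k H_k
    return x_plus, P_plus

# Model flow 50, observed flow 60: the correction factor is nudged above 1,
# but only slightly, because R = 400 makes the measurement relatively poor.
x, P = kalman_cycle(1.0, 0.01, 50.0, 60.0, G=0.01, R=400.0)
```

With R = 0 (a perfect measurement) the update collapses to x = z/H, i.e. the state is taken entirely from the measurement, which is the limiting case of the ratio in Eq. 4.17.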
The mean, x̂_k(-), and the variance, P_k(-), of the a priori probability density function of the state x_k are updated to become the mean, x̂_k(+), and the variance, P_k(+), of the a posteriori probability density function, using Eqs. 4.7 and 4.11, respectively (see Fig. 4.2). As in any Bayesian approach, the extent of updating an estimate depends on the accuracy of the measurement. A good measurement results in significant updating, while a bad one results in minimal updating. This statement parallels the one at the end of the previous section.

[2] This is only true for applications with a single state and a single measurement. For multiple-state and/or multiple-measurement applications, S_k is a matrix linearly related to this ratio.
[3] A normal probability density function is comprehensively described by its mean and variance.

[Fig. 4.1a Effect of a good measurement and a poorly structured state equation on updating.]
[Fig. 4.1b Effect of a bad measurement and a well-structured state equation on updating.]
[Fig. 4.2 The Kalman filter as a Bayesian approach.]

4.3 The Roles and the Selection of the State Noise Variance and the Measurement Variance

Eq. 4.8 shows that the gain S_k is a function of the relative values of the state variance P_k(-) and the measurement variance R_k. Since, according to Eq. 4.5, P_k(-) is the sum of P_{k-1}(+) and the state noise variance G_{k-1}, it can be concluded that the gain S_k is a function of the relative values of P_{k-1}(+), R_k and G_{k-1}. From the recursive Eqs.
4.5 and 4.11, it can be seen that the state variance P_k(+) is a function of the initial state variance P_0, the state noise variance and the measurement variance from time t = 0 to t = t_k. From the above discussion, it can finally be concluded that the updating process is controlled by the relative values of the initial state variance, the state noise variance and the measurement variance. Once the effects of the initial conditions become insignificant, the updating process is controlled only by the relative values of the state noise variance and the measurement variance. Optimal values of these variances can be estimated from sensitivity analysis of the historical record. This procedure has been applied for the flow updating model presented in Chap. 5.

A simulation study based on the example presented in the previous sections shows that three factors have to be considered in selecting the measurement variance and the state noise variance, namely: the rate of change of the state x, the measurement quality and the forecast lead. In this study, the actual flows are assumed to be the ones represented by the solid line in Fig. 4.3, and the measured flows are simulated according to Eq. 4.1 using a fixed measurement variance σ_v² of 400 in squared flow units (based on the assumption that the flow measurement is within a range of ±20 flow units). These simulated flows and the corresponding errors (measurement errors) are also shown in Fig. 4.3. The state a(t) is selected [4] as follows (see Fig. 4.4):

a(t) = 1.5 for 1 ≤ t ≤ 20, 41 ≤ t ≤ 60 and 81 ≤ t ≤ 92
a(t) = 3.0 for 21 ≤ t ≤ 40 and 61 ≤ t ≤ 80

The state values are assumed to be unknown and are to be estimated by the Kalman filter applied to the state-space model described by Eq. 4.3. Fig. 4.4 shows the results of two runs with different values of the state noise variance for a forecast lead of one day.
For a very small state noise variance (σ_w² = 1E-6), the model was slow in responding to the changes in the state value, while for a larger state noise variance (σ_w² = 0.01), model performance was greatly improved due to the increased flexibility in tracking the changes in the state. However, running the model with the latter state noise variance of 0.01, but for a longer forecast lead of 5 days, resulted in severe deterioration in model performance (Fig. 4.5). This problem was solved by increasing the state noise variance to 0.1 (Fig. 4.5).

[4] This example is similar to the example presented by Ljung and Soderstrom (1987, pp. 275-278).

[Fig. 4.3 Actual and measured flows and measurement errors. (Simulated data)]
[Fig. 4.4 Actual and predicted states for a forecast lead of one day.]

These results can be explained as follows: since quite few measurements were available for updating for a forecast lead of 5 days (19, as compared to a total of 92), the chances of updating the state estimates were greatly reduced, resulting in the poor performance of the run with a state noise variance of 0.01. Increasing the state noise variance to 0.1 results in a greater degree of updating for every forecast increment, and therefore improves model performance.

A guide for selecting the state noise variance with respect to the forecast lead was developed as follows. For a forecast lead of one day, Δt = 1, the state variance for time t_{k+1} = t + 1 can be estimated using Eq. 4.13 as follows:

P_{k+1}(-) = P_k(+) + G_k   (4.18)

If no measurement is available at time t + 1, the state variance is not updated, i.e.:

P_{k+1}(+) = P_{k+1}(-)   (4.19)

The state variance for time t + 2 is estimated using Eq. 4.13 as follows:
σ_x²(t+2)(-) = σ_x²(t+1)(+) + σ_w²(t+1) = σ_x²(t+1)(-) + σ_w²(t+1)   (4.20)

If no measurement is available for the next n days, then:

σ_x²(t+n)(-) = σ_x²(t)(+) + Σ_{i=0}^{n-1} σ_w²(t+i)_(Δt=1)   (4.21)

[Fig. 4.5 Actual and predicted states for a forecast lead of five days.]

For a forecast lead of n days, Δt = n, Eq. 4.13 can be written in the following scalar format:

σ_x²(t+n)(-) = σ_x²(t)(+) + σ_w²(t)_(Δt=n)   (4.22)

Comparing Eq. 4.22 with Eq. 4.21, it can be seen that:

σ_w²(t)_(Δt=n) = Σ_{i=0}^{n-1} σ_w²(t+i)_(Δt=1)   (4.23)

So, for the example given above, if σ_w²_(Δt=1) = 0.01, then the state noise variance for a forecast lead of five days is selected as:

σ_w²_(Δt=5) = Σ_{i=0}^{4} σ_w²(t+i)_(Δt=1) = 5(0.01) = 0.05

Caution should be taken, however, in selecting a very large value for the state noise variance, since a very large state noise variance may cause extreme fluctuations in the state estimates. This can be seen in Fig. 4.6, where running the model with a very large state noise variance of 0.5 results in large fluctuations in the state estimates, especially over the first third of the run period. It is interesting to note, however, that over the second half of the run these fluctuations diminished and the state estimates were very close to the ones produced for a much smaller variance of 0.01. These results can be explained as follows: since the measured flows were simulated based on a fixed measurement variance σ_v² equal to 400 in squared flow units, which is equivalent to a standard deviation σ_v of 20 flow units, the measurement errors of the small flows (below 50 flow units) were proportionately much larger than those of the much larger flows. Since the first third of the forecast period is characterized by small flows, the state estimates based on the measurement only, i.e.
x_k° (Eq. 4.6), were quite poor for the early stage of the forecast period, while they were much better for the following interval of high flows (see Fig. 4.7).

[Fig. 4.6 Actual and predicted states for state noise variances of 0.01 and 0.5.]

However, the measurement's contribution to the net estimate of the state (i.e., from both the measurement and the previous estimate of the state) is controlled by the gain S_k according to Eq. 4.7: the larger the value of S_k, the greater the contribution of the measurement, and vice versa. The effects of the early poor measurements on the state estimates for the run with a state noise variance of 0.01 were therefore minimal, since the corresponding values of the gain S_k were small (see Fig. 4.8); whereas the effects of these poor measurements on the states for the run with a state noise variance of 0.5 were quite significant, since the corresponding values of the gain S_k were quite large (see also Fig. 4.8).

The above discussion shows the importance of checking the quality of the measurements if large state noise variances are to be used.

Small state noise variances are preferred for slowly time-varying states. For example, if the actual states have a fixed value of 2.25 (see Fig. 4.9), running the model with a small state noise variance of 0.01 yields good results. Further reduction of the state noise variance to a much smaller value of 1E-6 gives more consistent results beyond the zone of influence of the initial conditions.

From the above results, the following can be concluded about the selection of the ratio of the state noise variance to the measurement variance:

• It should be high enough to track the variability of the state.
• Relatively larger values are preferable for longer forecast leads. Eq. 4.23 can be used as a guide for selecting the state noise variance.
• Very large values should be avoided, unless the measurements are of a very high quality. This is so since a very large state noise variance causes the updating model to be highly sensitive to errors in the measurements.
• Small values are recommended for slowly time-varying states.
• The key point in selecting this ratio is to keep a balance between the model's ability to track changes in the state (model alertness) and minimizing model instability.

[Fig. 4.7 Actual states and the states estimated by the measurement only.]
[Fig. 4.8 The values of the gain S for two state noise variances.]
[Fig. 4.9 Actual and predicted states for a slowly time-varying state.]

From the simulation results it can be argued that the relative values of the measurement and the state noise variances can be treated as design parameters that control the updating process in the Kalman filter. This involves selecting the optimal values that enable the model to be "alert" to true changes in state values, while ignoring changes caused by random errors. Since the states in many flow forecasting applications are unobservable, it becomes necessary to test model performance according to a criterion based on the measurement forecasts. Hence, the state noise variance and the measurement variance can be selected through sensitivity analysis of the measurement forecasts.
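The alertness-versus-stability trade-off described in this section can be reproduced with a small numerical experiment in the spirit of the simulation study above. Everything in the sketch below is invented (the flow series, the seed, the error metric); it is intended only to show that a very small state noise variance makes the filter too sluggish to track a piecewise-constant state:

```python
import random

random.seed(0)

def kf_cycle(x, P, H, z, G, R):
    """One predict-update cycle of the scalar Kalman filter (Sec. 4.1.3)."""
    P = P + G                        # Eq. 4.5: prior variance grows by G
    K = P * H / (H * P * H + R)      # Eq. 4.9: Kalman gain
    x = x + K * (z - H * x)          # Eq. 4.10: correct with the residual
    P = (1.0 - K * H) * P            # Eq. 4.11
    return x, P

# Piecewise-constant "actual" state, alternating between 1.5 and 3.0 in
# 20-day blocks over 92 days, loosely following the experiment of Fig. 4.4.
true_a = [1.5 if (t // 20) % 2 == 0 else 3.0 for t in range(92)]
flows = [50.0 + 30.0 * (t % 10) for t in range(92)]  # invented model flows H
R = 400.0  # measurement variance (standard deviation of 20 flow units)

def mean_tracking_error(G):
    x, P, err = 1.0, 0.01, 0.0
    for a, H in zip(true_a, flows):
        z = a * H + random.gauss(0.0, R ** 0.5)  # simulated measurement
        x, P = kf_cycle(x, P, H, z, G, R)
        err += abs(x - a)
    return err / len(true_a)

sluggish = mean_tracking_error(1e-6)  # too small to follow the jumps in a(t)
alert = mean_tracking_error(0.01)     # tracks the jumps far more closely
```

The ratio G/R acts as the design parameter: raising it makes the filter more alert to true state changes but, as the text warns, also more sensitive to bad measurements.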
4.4 Adaptive Kalman Filtering

In the previous sections, the state noise variance and the measurement variance were treated as fixed quantities that have to be selected before applying the Kalman filter; in other words, there is no feedback from the measurements to update these variances. Adaptive Kalman filtering designates the family of Kalman filter schemes that contain this kind of feedback. Many of these adaptive methods are based on adjusting the state noise variance and the measurement variance to fulfill the Kalman filter's theoretical assumption that the forecast residuals are uncorrelated. Examples of adaptive Kalman filter methods can be seen in Mehra (1970, 1972), Brewer (1976) and Todini (1976).

Judging from reviewing these works, and from the experience gained through testing the method of Kitanidis and Bras (1978) on the flow updating model presented in Chap. 5, several drawbacks are observed:

1. Many of these methods are based on strict conditions that are difficult to meet in practice. For example, the model suggested by Mehra (1970) requires a fixed value for the matrix H in Eq. 4.3. Therefore, this model cannot be applied to many applications, not even to the simple example presented in this chapter.

2. Most of these methods require the additional estimation of new parameters. One typical example is the work by Kitanidis and Bras (1978). In their model, the state noise variance is treated as a stochastic variable which requires knowledge of its own variance. So, the problem of selecting the state noise variance for the non-adaptive Kalman filter has been replaced by the problem of selecting a variance for the state noise variance. Not only is the adaptive model more difficult than the non-adaptive one, but the dramatic increase in computational cost does not materialize in practical improvement of model performance.

3.
The benefits from these methods are quite questionable, since they usually update the variances within small ranges, and therefore result in no significant changes in the performance of the updating models.

4. The application of these adaptive methods usually requires more measurements than the standard Kalman filter, which is a serious problem for most hydrological applications.

Although adaptive Kalman filtering is not recommended, the final decision should be based on comparative studies between adaptive and non-adaptive options.

4.5 Extended Kalman Filtering

As mentioned before, the Kalman filter can only be applied to linear systems. However, this problem may be bypassed by linearizing nonlinear model equations using a Taylor expansion (Gelb, 1986). But this approach is not suitable for threshold-type relationships, which are very common in hydrological applications. These highly nonlinear threshold-type relationships can be linearized using the method of statistical linearization (Gelb, 1986). Examples of extended Kalman filtering applications can be found in the works of Kitanidis and Bras (1980c) and Georgakakos (1986).

4.6 Summary and Conclusion

This chapter started by presenting a simple application of the Kalman filter in updating flow forecasts. The algorithm of this application is presented in the general state-space format suitable for the Kalman filter.

The Kalman filter is shown to be a Bayesian approach in which an a priori normal probability density function is updated by a new measurement to yield an a posteriori probability density function.

The state noise and the measurement variances play a key role in controlling the updating process in the Kalman filter. The degree of updating is a function of the relative values of these variances.
A large state noise variance and a small measurement variance result in significant updating of the states, while a small state noise variance and a large measurement variance result in poor updating. These variances should be treated as design parameters, selected such that there is a balance between the model's ability to track changes in the states and its insensitivity to random noises. Three main factors should be considered in selecting these variances: the variability of the state, the quality of the measurements and the forecast lead. A relatively large state noise variance and a small measurement variance are recommended for a highly variable system, good measurements and long forecast leads, while a small state noise variance and a large measurement variance are more suitable in the opposite situation. Very large state noise variances should be avoided, unless the measurements are of a very high quality.

The adaptive Kalman filter methods that have been reviewed are found unsuitable for practical applications. This is mainly due to the strong conditions required for their application, their complex structures and their inconsistent performance. However, comparative studies should be carried out before making a final decision on using them.

Although the Kalman filter can only be applied to linear systems, the problem can be solved by linearizing nonlinear equations. Taylor expansion is a very common linearizing technique, but it is not suitable for threshold-type relationships, which must be linearized using the statistical linearization technique.

Chapter 5. A Flow Updating Model

Most conceptual flow forecasting models are highly complex and nonlinear, making it difficult to adjust the structures of these models for real-time operation. As has been mentioned in Chap. 3, error prediction and flow regression approaches offer simple and practical solutions to this problem.
In this chapter a flow regression-type model is presented, in which on-line flow measurements are used to update flow forecasts generated from the conceptual UBC watershed model. The updating is achieved using the Kalman filter.

A general description of the UBC watershed model, with emphasis on the features relevant to the flow updating model, is presented in section 5.1. This is followed by the main equation of the flow updating model (section 5.2). The state-space format and the algorithm of the flow updating model are presented in sections 5.3 and 5.4, respectively. Methods of selecting the state noise variance and the measurement variance for applying the updating model to data from the Illecillewaet basin in British Columbia are discussed in section 5.5. The results of this application are presented and discussed in section 5.6.

5.1 The UBC Watershed Model

The UBC watershed model was designed by Quick and Pipes (1989b) to forecast streamflows from mountainous watershed systems mainly driven by snowmelt. The model requires daily inputs of maximum and minimum temperatures and precipitation, which are usually selected as historical data of an average hydrological year. The model calculates, on a daily basis, the time distribution of the streamflows leaving the catchment as a result of snowmelt and rainfall. A general flow chart of the UBC model is shown in Fig. 5.1.

The average temperature is used to estimate the ratio of snowfall to rainfall in the total precipitation. Melting of the snowpack is simulated by a heat transfer model using data on the maximum and minimum temperatures [1]. The snowmelt and the rainfall are then distributed, according to a soil moisture deficit model, into evapotranspiration losses and inputs to watershed routing functions to generate runoffs at the basin outlet.
These runoffs can be classified according to their origins into the following three flow components:

• the snowmelt flow component Q_s, generated from fast routing of the snowmelt;
• the rain flow component Q_r, generated from fast routing of the rainfall; and
• the groundwater flow component Q_g, generated from slow routing of both the rainfall and the snowmelt.

Finally, the flow at the basin outlet, Q_ubc, is estimated as the sum of these three flow components, i.e.:

Q_ubc(t) = Q_s(t) + Q_r(t) + Q_g(t)   (5.1)

[1] This snowmelt model is presented in detail in Chap. 6.

[Fig. 5.1 A general flow chart of the UBC watershed model: maximum and minimum temperatures and precipitation are split into rain and snow; the snowmelt and soil moisture models feed the snow, groundwater and rain components, which sum to the calculated flow.]

5.2 Modifying the UBC Watershed Model

The flow updating model is developed by modifying Eq. 5.1, estimating the flow as a weighted sum of the three flow components Q_s, Q_r and Q_g, i.e.:

Q_e(t) = a_s(t) Q_s(t) + a_r(t) Q_r(t) + a_g(t) Q_g(t)   (5.2)
The above flow updating model is based on the assumption that a forecast residual from the UBC watershed model is linearly related to the corresponding flow components. The validity of this assumption is very important for the updating model to be effective, especially for long forecast leads. This assumption has been proved to be valid for the data record of the Illecillewaet basin, British Columbia, using the least square criterion PLE, defined in Chap. 2. Chapter 5. A Flow Updating Model 69 5.3 The State-Space Format of the Flow Updating Model Eqs. 5.2 and 5.3 are formulated into the following state-space model which is necessary for the application of the Kalman filter: where Xk = x k_i + wk_! Zk = H k x k + v k State Eq. Measurement Eq. (5.4) w3{tk) Xk = OCritk) ; w k = wr(tk) OCg(tk) Wg{tk) _ ; z k = [Qo(ifc)], where Q0 is the observed flow; ; and v k = [v(tk)} Qs(h) Qr(tk) where v is the measurement noise of zero mean and a known variance c 2 5.4 Model Algorithm Let, G k = als{tk) 0.0 0.0 0.0 aUh) 0.0 0.0 0.0 . o-2wg(tk) <rl.(tk) °2„(tk) <r2ag(tk) olritk) o&U) a2g{tk) °29(h) <r2rg(tk) <T2xg(h) ; and R k = [a2(ijt)]. Diagonal elements of P are the state variances and the nondiagonal elements are the covariances between pairs of the weighting factors. For example, cr2a is the variance of the state as, and a2r is the covariance between the snowmelt and the rain weighting factors, ee3 and orr, respectively. Chapter 5. A Flow Updating Model 70 In the off-line UBC watershed model the weighting factors are implicitly given fixed values of unity. 
Therefore, in the flow updating model the initial values of the weighting factors are set equal to unity, i.e.:

x_0 = [1.0, 1.0, 1.0]ᵀ

The initial value of each weighting factor variance is selected equal to the square of 10% of the corresponding weighting factor, while the initial values of the covariances are set equal to zero, i.e.:

P_0 = | 0.01  0.0   0.0  |
      | 0.0   0.01  0.0  |
      | 0.0   0.0   0.01 |

The algorithm of the flow updating model is the same as the one presented in section 4.1.3. The algorithm is presented in detail, supplemented with sample calculations, in Appendix B.

5.5 Selection of the State Noise Variance and the Measurement Variance

From the discussion in Chap. 4 it has been shown that the Kalman filter is mainly controlled by the state noise variance and the measurement variance. Best model performance can therefore be achieved by selecting optimal values of these variances. Since it is the relative, not the absolute, values of these variances that influence the updating process [2], it is sufficient to optimize model performance, through sensitivity analysis, against either the state noise variance or the measurement variance while keeping the other variance fixed.

[2] This has been proven in section 4.3.

In the application presented in this chapter the measurement variance
Evaluating model performance by the coefficient of efficiency, E, defined in Chap. 2, sensitivity analysis can be applied for two types of measurement variances: a fixed value during the evaluation period (Figs. 5.2 and 5.3), or the square of a fixed percentage of the observed flow (Fig. 5.4). Sensitivity analysis, for each type of measurement variance, is performed for several forecast leads (1, 5, 10, 20 and 30 days). The state noise variances have been fixed at values equal to 0.01, calculated as the square of 10% of the initial values of the corresponding weighting factors, which are equal to unity, i.e.:

    G_0 = [ 0.01  0.00  0.00
            0.00  0.01  0.00
            0.00  0.00  0.01 ]

5.5.1 Fixed Measurement Variance

Fig. 5.2 shows that the criterion E is relatively insensitive to fixed measurement variances in the range of 0 to 1000 (m³/s)², with the exception of the forecast lead of 20 days, where the model performance is noticed to be very low for very small measurement variances (near 0 (m³/s)²). For significantly larger measurement variances (up to 20,000 (m³/s)²), Fig. 5.3 shows that model performance deteriorates with increasing measurement variance, with rates dependent on the forecast lead.

³ See the description of the hydrology and the climate of this basin in Appendix A.

Fig. 5.2 The coefficient of efficiency, E, vs. fixed measurement variance (0 to 1000 (m³/s)²).
Fig. 5.3 The coefficient of efficiency, E, vs. fixed measurement variance (0 to 20,000 (m³/s)²).
Fig. 5.4 The coefficient of efficiency, E, vs. percentage measurement variance.
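The sensitivity sweep can be outlined as below. The coefficient of efficiency E is assumed here to take its usual form, one minus the ratio of the residual sum of squares to the total sum of squares about the observed mean; the flow series are invented illustrative numbers.

```python
# Coefficient of efficiency E (assumed Nash-Sutcliffe-type form) and the
# skeleton of the sensitivity sweep over candidate measurement variances.
def coeff_of_efficiency(observed, forecast):
    mean_o = sum(observed) / len(observed)
    ss_res = sum((o - f) ** 2 for o, f in zip(observed, forecast))
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

observed = [100.0, 150.0, 220.0, 180.0, 120.0]   # invented flows, m^3/s
forecast = [ 90.0, 160.0, 200.0, 190.0, 110.0]
E = coeff_of_efficiency(observed, forecast)
print(round(E, 3))

# Sweep outline: rerun the updating model once per candidate R (fixed
# values here; percentage variances would use (p * flow)^2 instead) and
# keep the E of each run. run_updating_model is a hypothetical stand-in
# for a full filter run and is not called here.
candidate_R = [0.0, 200.0, 500.0, 1000.0, 5000.0, 20000.0]   # (m^3/s)^2
# results = {R: coeff_of_efficiency(observed, run_updating_model(R))
#            for R in candidate_R}
```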
Short-term forecasts (especially for one day) are minimally affected by increasing the measurement variance, while long-term forecasts (in particular those of 20 and 30 days) deteriorate to levels closer to the off-line forecasts for large measurement variances. The effect of the measurement variance on the updating process and the frequency of updating explain these results. A large measurement variance makes the updating model less "responsive" to new measurements, which results in less updating of the weighting factors. The availability of a large number of measurements for short forecast leads (92 measurements for a forecast lead of one day) counterbalances the effect of a large measurement variance by increasing the number of times the weighting factors are updated. In contrast, the weighting factors are updated much less frequently for long forecast leads (3 times for a forecast lead of 30 days), resulting in minimal updating of the forecasts.

Although these observations suggest, in general, that the measurement variance should be selected on the smaller side for long forecast leads, very small measurement variances should be avoided, since a very small measurement variance causes the updating model to be very sensitive to errors in the measurements⁴. Therefore, a poorly measured flow may cause the model to erroneously alter the weighting factors. The effect of this "bad" measurement is particularly harmful for long-term forecasts, since only a small number of future measurements are available to bring the weighting factors back on the "correct" course. This explains the inferior flow forecasts observed for the forecast lead of 20 days when very small measurement variances are used (see Fig. 5.2).

Since the effect of the measurement variance is the opposite of that of the state noise variance, i.e. the effect of a large measurement variance is similar to that of a small state noise variance and vice versa, the discussion above is equivalent to that in section 4.3.
⁴ A measurement variance of zero causes the weighting factors to be updated solely on the basis of the measurements, i.e., the updating model completely "trusts" the measurements.

5.5.2 Percentage Measurement Variance

The measurement variance can be selected as the square of a certain percentage of the flow. This reflects what has been reported by many researchers, that higher flow measurement errors are associated with higher flows (see Quick 1990). Fig. 5.4 shows plots of E vs. measurement variance as the square of a percentage of the flow, ranging from 0% to 100%, for forecast leads of 1, 5, 10, 20 and 30 days. It is interesting to notice the similarity between Fig. 5.4 and Fig. 5.3. However, the peaks of the percentage variance curves are flatter and start later than those of the fixed variance curves. This is because the percentage axis in Fig. 5.4 is not linear in terms of (m³/s)² units. For example, for a flow of 100 m³/s a 20% variance equals [(0.2)(100)]² = 400 (m³/s)², while a 40% variance equals [(0.4)(100)]² = 1600 (m³/s)².

Fig. 5.4 shows that the updating model attains its highest levels of performance for measurement variances between 10% and 20%. Comparing Figs. 5.3 and 5.4 shows no practical difference between selecting the measurement variance as a fixed value or as the square of a percentage of the flow. This is only true based on the average performance of the updating model as measured by the coefficient of efficiency. As will be demonstrated later in this chapter, a more detailed look at the model performance for both types of variances shows an important difference between the two approaches.

5.5.3 An Adaptive Kalman Approach

An adaptive Kalman filter model developed by Kitanidis and Bras (1978b) has been tested on the updating model. No significant differences have been noticed between the adaptive and the non-adaptive approaches.
This however was not unexpected, since the adaptive algorithm is designed for slowly time-varying variances, resulting in relatively insignificant changes in the variances. It is also doubtful that these changes in the variances will always improve flow forecasts. In addition, the adaptive model is quite complex, time consuming, and requires the knowledge of additional parameters. In general the adaptive approach is not recommended for the current flow updating model or similar models (see Chap. 4).

5.6 Application

Fig. 5.2 shows that the highest levels of performance of the updating model are those for measurement variances between 200 and 500 (m³/s)². Therefore, a measurement variance of 200 (m³/s)² is selected to update flow forecasts.

5.6.1 Overall Performance

The observed flows, the off-line flow forecasts and the updated flow forecasts for forecast leads of 1, 10, 20 and 30 days, and their corresponding residuals, are shown in Figs. 5.5, 5.6, 5.7 and 5.8, respectively. Updating flow forecasts on a daily basis resulted in much better correspondence between measured and calculated hydrographs (see Fig. 5.5). This is reflected in the great improvement in the value of E from 0.74 to 0.95 (a 27% increase) (see Table 5.1). The updating model performs even better than the powerful but naive short-term persistence and extrapolation models, by 31% and 40%, respectively (see Table 5.1). This is rarely observed in the real-time forecasting literature (see Chap. 3). Accumulated errors were also dramatically reduced from 1672 to 38 m³/s.

Updating flow forecasts every 10 days did not result in any change in the flow forecasts for the first 40 days (see the period from April 15 to May 15 in Fig. 5.6), since only 3 measurements were available and forecast residuals were small. However, flow forecasts were greatly improved for the following period.
This resulted in a remarkably high E value of 0.93. The updating model did also better than the persistence and extrapolation models. This is expected, since the persistence and extrapolation models are solely dependent on the correlations between consecutive flows, which are very weak for long forecast leads. Accumulated errors were reduced down to -389 m³/s.

Fig. 5.5 Hydrographs of the observed, UBC and updated flows and residuals for a forecast lead of one day.
Fig. 5.6 Hydrographs of the observed, UBC and updated flows and residuals for a forecast lead of 10 days.
Fig. 5.7 Hydrographs of the observed, UBC and updated flows and residuals for a forecast lead of 20 days.
Fig. 5.8 Hydrographs of the observed, UBC and updated flows and residuals for a forecast lead of 30 days.

Table 5.1 Evaluation of the flow forecasts in terms of several criteria.

Model     Coefficient of   Coefficient of     Coefficient of     Coefficient of      Accumulated
          Efficiency E     Determination D    Persistence C_P    Extrapolation C_E   Error (m³/s)
Q (UBC)   0.74             0.94               -2.26 (Δt = 1)     -1.83 (Δt = 1)      1672
                                              0.62 (Δt = 10)     0.97 (Δt = 10)
                                              0.76 (Δt = 20)     0.99 (Δt = 20)
                                              0.82 (Δt = 30)     1.00 (Δt = 30)
Q_UP1     0.95             0.95               0.31               0.40                38
Q_UP10    0.93             0.93               0.89               0.99                -389
Q_UP20    0.92             0.92               0.93               1.00                -369
Q_UP30    0.81             0.88               0.87               1.00                470

Updating flow forecasts every 20 days (Fig. 5.7) resulted in improved forecasts after May 25. A high E value of 0.92 has been achieved.
The ability of the updating model to greatly improve flow forecasts even for this relatively long forecast lead is attributed to the strength of the underlying off-line UBC watershed model and the validity of the assumption that the flow forecast residuals of the UBC watershed model are highly linear. It should be emphasized that the role of the flow updating model is to remove linear errors from the flow forecasts of the UBC model; therefore, if the UBC model is not representative of the watershed system, the flow forecasts are not expected to improve by using the flow updating model, especially for long forecast leads.

Updating flow forecasts every 30 days showed improvement only after June 14. E increased to 0.81 and accumulated errors were reduced down to 470 m³/s. This is an important improvement, since long-term volume forecasts are important for reservoir operation.

5.6.2 Examining Progressive Model Performance

Fig. 5.9 shows successive daily updating of the flow forecasts for forecast leads from one to 10 days for the period starting May 25 and ending June 15, 1970. The observed hydrograph, updated forecasts and residuals are shown in the unshaded area. The shaded area to the left contains past observed flows, while the shaded area to the right shows future observed flows not covered by the updating model. The measurement variance has been selected equal to a fixed value of 200 (m³/s)². Fig. 5.10a shows the computed flow components Q_s, Q_r and Q_g for the same period. Fig. 5.10b shows the daily updated estimates of the weighting factors that were used to generate the flow forecasts shown in Fig. 5.9a.

As a result of underestimating the flow on May 26 (see Fig. 5.9a), all the weighting factors were increased (see Fig. 5.10b) to compensate for this negative flow forecast residual.
Using these weighting factors for updating flow forecasts for the next ten days resulted in overestimating low flows and improving high flow forecasts, especially at the hydrograph peak on June 4 (see Fig. 5.9b). The flow forecast residual on May 27 was quite small, leading to an insignificant change in the weighting factors (see Fig. 5.10b). On May 28 a positive flow forecast residual was encountered. The updating model responded by decreasing the groundwater weighting factor and, to a lesser extent, the rain weighting factor. However, the snowmelt weighting factor was increased. This is due to the highly negative correlation between the groundwater and the snowmelt weighting

Fig. 5.9(a,b,c,d,e&f) Successive daily updating of flow forecasts for forecast leads of 1, 2, ..., 10 days (fixed R = 200 (m³/s)²). See key in Fig. 5.12.
Fig. 5.9(g,h,i,j,k&l) Successive daily updating of flow forecasts for forecast leads of 1, 2, ..., 10 days (fixed R = 200 (m³/s)²). See key in Fig. 5.12.
Fig. 5.10a The calculated flow components.
Fig. 5.10b Weighting factors (fixed R = 200 (m³/s)²).
Fig.
5.10c Weighting factors (R = (15% flow)²).

factors and the dominance of the groundwater flow component⁵ during this period (see Fig. 5.10a). The reduction in the groundwater weighting factor resulted in improving low flow forecasts, while the increase in the snowmelt weighting factor improved high flow forecasts, particularly the peak (see Fig. 5.9d). The trend of decreasing the contribution of the groundwater⁶ and increasing that of the snowmelt continued on May 29 (Fig. 5.9e), resulting in more improvement in the flow forecasts.

⁵ This is explained in detail in Appendix B.
⁶ The rain component is insignificant, as can be seen from Fig. 5.10a.

The flow forecasts updated on June 2 (two days before the peak) (see Fig. 5.9i) show a significant improvement and a very close estimation of the maximum flow. This is an important result, since most real-time flow forecasting models experience serious problems on the rising limb (see Chap. 3). However, the high negative residual observed on the next day, June 3 (Fig. 5.9i), caused the snowmelt weighting factor to be increased dramatically to reach a value close to unity (see the peak in Fig. 5.10b), resulting in overestimated flow forecasts, especially for the peak (Fig. 5.9j). This deterioration in model performance is mainly due to the high variability of flows on the rising limb, complicated by the difficulty in accurately measuring high flows (see Quick (1990)). This problem can be successfully controlled by setting the measurement variance as a percentage of the flow, causing the updating model to be less sensitive to measurement errors or rapid changes in high flows, while still being able to extract useful information from the more accurate low flows.

Fig. 5.11 is similar to Fig. 5.9 except that the measurement variance is selected as the square of 15% of the flow. A significant improvement in the flow forecasts is noticed for the hydrographs of Figs. 5.11i and 5.11j, especially the hydrograph peak on June 3 (compare with Figs.
5.9i and 5.9j), without affecting, if not improving, the performance of the updating model on other days. Selecting a percentage-wise measurement variance gave more control of the changes in the weighting factors, as can be seen from comparing Fig. 5.10c with Fig. 5.10b. In particular, the peak in the snowmelt weighting factor (on June 3) is reduced. These results support the approach of selecting the measurement variance as a percentage of the flow rather than a fixed value. It is interesting to notice that model performance is best on the hydrograph falling limb, as can be seen from plots (k) and (l) of Figs. 5.9 and 5.11.

Fig. 5.11(a,b,c,d,e&f) Successive daily updating of flow forecasts for forecast leads of 1, 2, ..., 10 days (R = (15% flow)²). See key in Fig. 5.12.
Fig. 5.11(g,h,i,j,k&l) Successive daily updating of flow forecasts for forecast leads of 1, 2, ..., 10 days (R = (15% flow)²). See key in Fig. 5.12.
Fig. 5.12 The key figure for Figs. 5.9 and 5.11.
5.6.3 Detection of Linearity in the Flow Forecast Residuals

The flow updating model presented in this chapter is based on the assumption that residuals of the flow forecasts generated by the UBC watershed model are linearly related to the prefinal outputs, namely the flow components Q_s, Q_r and Q_g. This assumption can be satisfactorily tested by calculating the least squares criterion PLE defined in Chap. 2. A high value of PLE indicates highly linear residuals, and vice versa. Since the Kalman filter is based on minimizing the sum of squares of errors, PLE can be used to test the possibility of improving flow forecasts using the Kalman filter.

In the present work PLE is calculated from the E value of the UBC watershed model and D from linear regression of Q_o on Q_s, Q_r and Q_g. E is equal to 0.74, while D is equal to 0.94, so

    PLE = (0.94 - 0.74) / (1 - 0.74) = 0.77

This means that 77% of the errors are linearly related to Q_s, Q_r and Q_g and there is a great potential for improving the flow forecasts using the flow updating model. This is shown in the improvement of the E value from 0.74 to 0.95 (for a forecast lead of one day), which results in removing

    (0.95 - 0.74) / (1 - 0.74) = 81%

of the errors.

The PLE can be used in the process of selecting the optimal state-space structure of the flow updating model, thereby avoiding the time-consuming and expensive process of implementing each model. This can be done by calculating the PLE value of each model and then selecting the model with the highest PLE value.

5.6.4 Summary and Conclusion

A flow forecast updating model has been designed based on the assumption that the residuals of the flow forecasts generated by the UBC watershed model are linearly related to the calculated flow components Q_s, Q_r and Q_g. The flow is estimated as a weighted sum of these flow components. Feedback of flow measurements is used to update the corresponding weighting factors using the Kalman filter technique.
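The linearity test referred to above, the PLE criterion of section 5.6.3, reduces to a one-line computation. The form PLE = (D - E)/(1 - E) is assumed here; it is consistent with both worked numbers in the text (0.77 and the 81% of errors removed).

```python
# Proportion of linear errors, assumed form PLE = (D - E) / (1 - E).
def ple(D, E):
    return (D - E) / (1.0 - E)

print(round(ple(D=0.94, E=0.74), 2))   # 0.77: share of errors that are linear
print(round(ple(D=0.95, E=0.74), 2))   # 0.81: share actually removed by updating
```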
Sensitivity analysis based on data from the Illecillewaet basin shows that optimal measurement variances range between 200 and 500 (m³/s)² for fixed variances, and between 10% and 20% for percentage variances. These values were obtained for state noise variances equal to 0.01. Using a scheme developed by Kitanidis and Bras (1978) for updating the state noise variance results in no improvement to the flow forecasts.

The flow updating model has been applied to the Illecillewaet basin for forecast leads of 1, 10, 20 and 30 days. A significant improvement in flow forecasts has been observed even for a long forecast lead of 20 days. The high performance of the flow updating model for long forecast leads is due to two reasons: the ability of the underlying UBC watershed model to simulate the Illecillewaet basin, and the validity of the assumption that the flow forecast residuals are linear. The assumption of linearity is tested using the least squares criterion PLE.

Although the average performance of the flow updating model is not affected by the type of measurement variance, selecting the measurement variance as a squared percentage of the flow improves flow forecasts in the vicinity of high flows. This can be attributed to the observation by many researchers that the quality of flow measurements deteriorates as the flow increases; selecting the measurement variance as a squared percentage of the flow therefore reduces the sensitivity of the flow updating model to the high errors in the high flows.

The flow updating model provides an effective and flexible means of using feedback of flow measurements to improve the flow forecasts of a highly complex watershed model. It can also be used in conjunction with the snowpack model presented in Chap. 6 to further enhance the flow forecasts of the UBC watershed model.
Chapter 6

A Snowpack Updating Model

In mountainous watershed systems like those of the Columbia, Peace and Fraser rivers in British Columbia and those feeding the Indus River system, snowmelt constitutes the major input to streamflows during the warm spring and summer seasons, which are characterized by the most extreme streamflows. Forecasting seasonal streamflows therefore depends on accurate assessment of the snowpack water equivalents stored in the watershed. The most effective method of updating streamflow forecasts is therefore likely to depend on an updating procedure for correcting these snowpack estimates.

In this study a model has been developed to update parameters of an energy budget snowpack model (section 6.2) using real-time snowpack measurements. This energy budget snowpack model is a sub-model of the UBC watershed model. The updating procedure is based on calculating the value of a snowpack parameter that yields a perfect correspondence between measured and calculated snowpacks. The updated snowpack parameter is then used in calculating future snowpack values and streamflow forecasts. The updating procedure is repeated whenever a new snowpack measurement becomes available.

Updating a snowpack parameter from snowpack measurements necessitates that this parameter is expressed as a function of two consecutive snowpack measurements and the accumulated snow fallen during the time period between these measurements. If the snowpack is measured on a daily basis (or, in general, at the same intervals as the flow forecasts), then the snowpack updating function is linear and can be easily formulated. However, if snowpack measurements are sparse (e.g., available on a monthly basis), then the development of the snowpack has to be tracked on a daily basis from the last snowpack measurement back to the previous one. Much more effort is therefore needed to develop the snowpack updating function.
The resulting function can still be linear as long as there are no changes in the thermal condition of the snowpack (i.e., from frozen to melting condition or vice versa) during the interval between two consecutive snowpack measurements. Since the thermal condition of the snowpack may vary, especially as the snowpack reaches its peak, a more comprehensive snowpack updating function has been developed. First, the snowpack updating function has been formulated separately for each of the three possible thermal conditions of the snowpack (the frozen, partially melting and completely melting conditions); then the final updating function is expressed as the sum of these three functions, each multiplied by a factor (named a tag) that is equal to unity if the snowpack is in the corresponding condition and equal to zero otherwise. An iterative method is used to determine the correct values of these tags: the tags are first set equal to the initial or the previous values, then new snowpack values are calculated using these tags. New values of the tags are then determined, and this iterative procedure is repeated until the last set of tag values matches that of the previous iterative step.

This chapter starts by discussing the snowpack energy budget approach and the snowpack routine of the UBC watershed model, which was developed based on this approach. Then, the results of studying the effects of the cloud cover, the snow albedo and the snowfall on the snowpack are presented and discussed. The approach used in developing the snowpack updating model and its algorithm are then presented. Finally, the results from the application of the snowpack updating model to the Mount Fidelity data station and the Illecillewaet basin in British Columbia are presented and discussed.
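The iterative tag procedure described above can be sketched as follows. classify() is a deliberately simplified stand-in for the real thermal-condition test, and all numbers are illustrative; the point is only the fixed-point iteration on the tags.

```python
# Iterate the 0/1 tags (frozen, partially melting, completely melting)
# until two successive iterations agree, as described in the text.
def classify(energy_surplus, snowpack):
    """Simplified thermal-condition test returning one-hot tags."""
    if energy_surplus <= 0.0:
        return (1, 0, 0)               # frozen
    if energy_surplus < snowpack:
        return (0, 1, 0)               # partially melting
    return (0, 0, 1)                   # completely melting

def iterate_tags(energy_surplus, snowpack0, melt_per_condition, max_iter=20):
    tags = (1, 0, 0)                   # start from the previous (frozen) state
    snowpack = snowpack0
    for _ in range(max_iter):
        # Updating function: sum of condition-specific terms, each
        # switched on or off by its tag.
        melt = sum(t * m for t, m in zip(tags, melt_per_condition))
        snowpack = max(snowpack0 - melt, 0.0)
        new_tags = classify(energy_surplus, snowpack)
        if new_tags == tags:           # converged: tags repeat
            break
        tags = new_tags
    return tags, snowpack

tags, swe = iterate_tags(energy_surplus=5.0, snowpack0=100.0,
                         melt_per_condition=(0.0, 5.0, 100.0))
print(tags, swe)                       # settles on the partially melting tag
```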
6.1 The Energy Budget of the Snowpack

The snowpack can be treated as a thermodynamic system which receives and/or loses heat through shortwave, longwave, convective, advective and rain melt heat fluxes. The net loss from the snowpack is calculated as the snowmelt, estimated as the water column melted by the heat surplus to the snowpack, less the snowfall. The heat surplus to the snowpack can be calculated with the following equation:

    H_t = H_s + H_l + H_c + H_a + H_r    (6.1)

where H_t is the sum of heat fluxes; H_s is the shortwave heat component; H_l is the longwave heat component; H_c is the convective heat component; H_a is the advective heat component; and H_r is the advective heat transfer by rain.

The snowmelt, m, is determined according to H_t and the water equivalent of the snowpack, d_m, as follows:

    For H_t ≤ 0:    m = 0.0
    For H_t > 0:    m = H_t  if d_m ≥ H_t;    m = d_m  if d_m < H_t

The shortwave heat component is a function of the snow albedo and the cloud cover (section 6.1.1), while the longwave heat component is a function of the cloud cover and the temperature (section 6.1.2). The convective and advective heat components are functions of the wind speed and the temperature (section 6.1.3). In general, the snow albedo, the cloud cover and the wind speed are rarely measured, and so these quantities are estimated as functions of the temperature (Quick and Pipes, 1989a). The parameters of these functions are usually treated as fixed quantities. However, in the snowpack updating model, to be presented later in detail, the most sensitive of these parameters are considered to be dynamic quantities that are updated as functions of the snowpack measurements.
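The heat balance of Eq. 6.1 and the melt rule that follows it translate directly into code (all quantities in mm/day of water equivalent; the input values are illustrative):

```python
# Eq. 6.1: total heat flux as the sum of the five components.
def total_heat(Hs, Hl, Hc, Ha, Hr):
    return Hs + Hl + Hc + Ha + Hr

# Melt rule: no melt under a heat deficit; otherwise melt is capped by
# the available snowpack water equivalent dm.
def snowmelt(Ht, dm):
    if Ht <= 0.0:
        return 0.0
    return min(Ht, dm)

Ht = total_heat(Hs=8.0, Hl=-3.0, Hc=2.0, Ha=1.0, Hr=0.5)   # illustrative
print(Ht)                         # 8.5 mm/day surplus
print(snowmelt(Ht, dm=100.0))     # deep pack: melt equals the surplus, 8.5
print(snowmelt(Ht, dm=5.0))       # shallow pack: melt limited to 5.0
```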
6.1.1 The Shortwave Heat Influx

The shortwave heat influx can be expressed as follows (Quick and Pipes, 1989a):

    H_s(t) = I_s(t) [1 - C(t)] [1 - A(t)]    in mm/day    (6.2)

where C is the fraction of cloud cover; A is the snow albedo; and I_s is the incident solar radiation for 35° North, estimated as follows:

    I_s(t) = 54 - 29 cos(2πN/365)    in mm/day    (6.3)

where N is the number of days starting from the first of January.

6.1.2 The Longwave Radiation

The longwave radiation of a melting snowpack can be expressed as follows (Quick and Pipes, 1989a):

    H_l(t) = [-20 + 0.94 T_a(t)] [1 - C(t)] + 1.24 T_c(t) C(t)    (6.4)

where T_a is the average temperature in °C; and T_c is the cloud temperature (assumed equal to the minimum temperature).

6.1.3 The Convective and Advective Heat Transfer

The convective heat transfer is estimated as follows (Quick and Pipes, 1989a):

    H_c(t) = 0.113 R_m (P/101) v_w T_a(t)    in mm/day    (6.5)

where P is the atmospheric pressure in kN/m² for the elevation under consideration; v_w is the wind speed in kilometers per hour; and R_m = 1 - 7.7 R_i, where R_i is the bulk Richardson number, which can be approximated as 0.095 T_a(t)/v_w².

The advective heat transfer can be expressed as follows (Quick and Pipes, 1989a):

    H_a(t) = 0.44 R_m T_d(t) v_w    (6.6)

where T_d is the dew point, which can be approximated by the minimum air temperature.

6.1.4 The Rain Melt

Rain melt, H_r, can be estimated as follows (Quick and Pipes, 1989a):

    H_r(t) = K T_n(t) p_r(t)    in mm/day    (6.7)

where K is a constant set equal to 0.01249; p_r is the rainfall; and T_n is the minimum air temperature.

6.2 Calculating the Snowpack in the UBC Watershed Model

In the UBC watershed model the snowpack is calculated according to a routine based on the energy budget approach presented in the previous section. The algorithm of this routine is presented below and its flow diagram is shown in Fig. 6.1:

1. The average temperature is calculated as the mean of the measured maximum, T_x, and minimum, T_n, temperatures, i.e.:

    T_a(t) = [T_x(t) + T_n(t)] / 2    (6.8)
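The radiation components, as reconstructed in Eqs. 6.2 to 6.4 above, can be written as small functions; the mid-June inputs below are illustrative values, not data from the study:

```python
import math

# Shortwave and longwave components of section 6.1, in mm/day of melt.
def incident_solar(N):
    """Eq. 6.3: incident solar radiation for 35 degrees North;
    N is the day number counted from the first of January."""
    return 54.0 - 29.0 * math.cos(2.0 * math.pi * N / 365.0)

def shortwave(N, cloud, albedo):
    """Eq. 6.2: H_s = I_s (1 - C)(1 - A)."""
    return incident_solar(N) * (1.0 - cloud) * (1.0 - albedo)

def longwave(Ta, Tc, cloud):
    """Eq. 6.4: H_l = (-20 + 0.94 Ta)(1 - C) + 1.24 Tc C."""
    return (-20.0 + 0.94 * Ta) * (1.0 - cloud) + 1.24 * Tc * cloud

# Mid-June (N = 166), half cloud cover, an albedo of 0.7 (illustrative):
Hs = shortwave(166, cloud=0.5, albedo=0.7)
Hl = longwave(Ta=10.0, Tc=5.0, cloud=0.5)
print(round(Hs, 2), round(Hl, 2))
```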
(6.6) Hr = KTnpT in mm/day (6.7) Ut) = Tx(t) + Tn(t) (6.8) 2 Chapter 6. A Snowpack Updating Model RAINMELT Y E S | COMPLETELY MELTED SNOWPACK m(t) = dJt-1) + P8(t) dm(t). o.O HJO = 0.0 Fig. 6.1 Flow chart of the UBC snowpack model. Chapter 6. A Snowpack Updating Model 100 2. The snowfall, ps, is estimated from the total precipitation according to the routine presented in section 6.3.3. 3. The shortwave, the longwave, the convective, the advective and rain melt heat fluxes are calculated according to the equations in section 6.1. 4. The energy surplus from the snowpack, Hei is calculated as the sum of Ht (Eq. 6.1) and the previous cold content1 of the snowpack, Hcc(t — 1), after being reduced by a factor A to account for the warming of the frozen snowpack during the day, i.e: where 0.0 < A < 1.0. 5. The snowmelt, m(i), the water equivalent of the snowpack, dm (t), and the new cold content of the snowpack, Hcc(t), are estimated as follows: • For melting snowpack, He{t) > 0.0: He(t) = \Hcc(t - 1) + Ht(t) m(t) = He(t) dm{t) = dm(t - 1) - m(t) + ps{t) > if He(t) < dm(t - 1) + ps(t) Hcc{t) = 0.0 m(t) = dm(t - 1) + p3{t) dm{t) = 0.0 Hcc{t) = 0.0 .. • if He(t) > dm(t - I) + Ps(t) xCold content of the snowpack is denned as the amount of heat required to raise the temperature of the snowpack to 0.0°C, so the cold content is always less or equal to zero. Chapter 6. A Snowpack Updating Model 101 • For frozen snowpack, He(t) < 0.0: m(«) = 0.0; dm(t) = dm(t - 1) + p3(t); and Hcc(t) = He(t) 6.3 Sensitivity Analysis of the Snowpack Parameters Since the changes in the snowpack are calculated according to the balance between the input of snowfall and the output of the snowmelt, the estimated snowpack is controlled by the parameters of the snowmelt and the snow formation functions. However these are only a few of these parameters to which the snowpack is significantly sensitive. Therefore, only these parameters are considered for the snowpack updating model. 
In the following sections four parameters are presented and their effects on the snowpack are studied and discussed.

6.3.1 The Cloud Cover

The cloud cover affects snowmelt through its influence on the shortwave and longwave heat transfer components (see Eqs. 6.2 and 6.4). In most cases the cloud cover is not observed, especially in the remote but important regions of mountainous watersheds. However, the cloud cover can be estimated as a function of the daily temperature range as follows (Quick and Pipes, 1989a):

    C(t) = 1 - [T_x(t) - T_n(t)] / ΔT_s    (6.9)

where ΔT_s is the daily temperature range for open sky conditions. Eq. 6.9 is modified to include a correction factor, η_c, as follows:

    C(t) = η_c [1 - (T_x(t) - T_n(t)) / ΔT_s]    (6.10)

Snowpack sensitivity to the cloud cover can therefore be tested by comparing the snowpacks calculated using different values of the cloud cover correction factor η_c. Fig. 6.2 shows the snowpacks calculated for η_c values of 0.5, 0.75, 1.0, 1.25 and 1.50. It can be seen from this figure that increasing the value of η_c, i.e. increasing the cloud cover, results in slightly slowing the rise of the snowpack during the accumulation period (when snowfall is larger than snowmelt) and significantly slowing the meltdown of the snowpack during the depletion period (when snowfall is less than the snowmelt).

The above observations are attributed to the differences between the conditions that prevail during the snowpack accumulation period and those of the depletion period. During the accumulation period the average daily temperatures are low (mostly below 0°C) and snowfalls are intense. Low temperatures result in negative² longwave, convective and advective heat transfers (see Eqs. 6.4, 6.5 and 6.6). Fresh snowfalls increase the snow albedo and thus reduce the shortwave heat component (see Eq. 6.2). These combined effects produce a negative heat balance for the snowpack, causing it to freeze.
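Eq. 6.10 can be written as a small function; clipping the result to the range [0, 1] is an assumption added here for safety and is not stated explicitly in the text.

```python
# Cloud cover estimate of Eqs. 6.9-6.10, with the default open-sky
# range dTs = 20 C taken from the sensitivity analysis in the text.
def cloud_cover(Tx, Tn, eta_c=1.0, dTs=20.0):
    c = eta_c * (1.0 - (Tx - Tn) / dTs)
    return min(max(c, 0.0), 1.0)     # clip to [0, 1] (added assumption)

print(cloud_cover(Tx=15.0, Tn=5.0))             # range 10 C gives C = 0.5
print(cloud_cover(Tx=15.0, Tn=5.0, eta_c=1.5))  # larger eta_c, more cloud
```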
Increasing the cloud cover decreases the negative longwave component and may therefore lead to a minimal snowmelt, which is in turn reduced by the decrease in the positive shortwave component. The total outcome of an increase in the cloud cover is thus a slight reduction of the snowpack (which is difficult to observe in Fig. 6.2). On the other hand, decreasing the cloud cover increases the negative longwave component and at the same time increases the shortwave component. The net result is a further decrease in the energy content of the snowpack, causing more freezing of the snowpack but no change in its water equivalent.

During the snowpack depletion period average temperatures are relatively high and snowfall is much less than during the accumulation period. High temperatures result in positive longwave, convective and advective heat transfers. Old snowcover has a small albedo value and therefore absorbs more shortwave energy. The net heat transfer to the snowpack is therefore positive, resulting in snowmelt.

²Negative heat transfer indicates a loss of the snowpack energy content.

Fig. 6.2 Sensitivity of the snowpack to the cloud cover parameter ηc.

Increasing cloud cover reduces the positive shortwave and longwave components, resulting in less snowmelt. This causes an increase in the snowpack and subsequently a delay in snowpack depletion. On the other hand, decreasing cloud cover increases the shortwave and longwave components, resulting in more snowmelt that speeds snowpack depletion.

From the above discussion it can be concluded that an error in estimating the cloud cover causes a timing error in the snowpack that is much more pronounced during the snowpack depletion stage than during the accumulation stage (see Fig. 6.3).
So, for a given time during the depletion stage, increasing the cloud cover results in a thicker snowpack (see the arrow on the right-hand side of Fig. 6.3), while for a given time during the accumulation period, increasing the cloud cover results in the opposite (see the arrow on the left-hand side of Fig. 6.3).

Sensitivity of the snowpack to the cloud cover can also be tested using the parameter ΔTs (see Eq. 6.9). Taking ΔTs = 20°C as a standard value, the snowpack is calculated for ΔTs values 25% and 50% higher than the standard (25 and 30°C, respectively) and 25% and 50% lower than the standard (15 and 10°C, respectively). It can be seen from Fig. 6.4 that increasing ΔTs results in an increase in the total volume of the snowpack and in an increase and a delay of the peak of the snowpack, which are attributed to the reduction of snowmelt during the depletion stage. A very small effect can be noticed during the accumulation stage. The effect of ΔTs on the snowpack is similar to that of ηc, although it is stronger during the accumulation stage.

From the above discussion, it can be concluded that the snowpack is sensitive to the parameters ηc and ΔTs only during its depletion stage. Therefore, only measurements taken during that stage are valuable as feedback for updating these parameters.

Fig. 6.3 The effect of under- or over-estimating the cloud cover on the snowpack.

Fig. 6.4 Sensitivity of the snowpack to the cloud cover parameter ΔTs.

6.3.2 The Snow Albedo

Albedo of a surface is defined as the ratio of the reflected shortwave to the incident shortwave energies.
Fresh snow has a high albedo (0.90) that decreases exponentially with time to as low as 0.3. If the new snowfall is thick enough to completely cover the old snow, the albedo can rise back to its maximum value. Eq. 6.2 is modified as follows to include a correction factor, ηa:

Hs(t) = Is(t)·[1 − C(t)]·[1 − ηa·A(t)] in mm/day   (6.11)

Fig. 6.5 shows the snowpacks calculated for five ηa values of 0.5, 0.75, 1.0, 1.25 and 1.50. From this figure it can be seen that increasing the snow albedo correction factor, ηa, above the standard value of one results in delaying the melting and increasing the volume of the snowpack during the depletion stage, without significantly affecting the snowpack during the accumulation stage. Reducing ηa below a value of unity results in a total reduction of the size of the snowpack.

Fig. 6.5 Sensitivity of the snowpack to the snow albedo parameter ηa.

These observations can be attributed to the effect of the snow albedo on the shortwave component. During the accumulation stage the snowpack is mostly frozen. Increasing the snow albedo results in a decrease in the shortwave component and therefore more freezing of the already frozen snowpack, so no change in the snowpack size is expected. Decreasing the snow albedo, on the other hand, increases the shortwave component which, if large enough, causes some snowmelt and a reduction in the snowpack.

During the snowpack depletion stage, when the snow is melting, increasing the snow albedo decreases the shortwave component and therefore reduces the snowmelt, causing a delay in snow depletion. Conversely, decreasing the snow albedo increases the shortwave component, leading to more snowmelt.

In general, the effect of ηa on the snowpack is simpler than that of ηc and ΔTs. This is because ηa only affects the shortwave energy component, while the other two parameters affect both the shortwave and longwave components.
It is interesting to notice from Eq. 6.2 that, although the cloud cover and the snow albedo are two different physical phenomena, their effects on the shortwave heat transfer to the snowpack are similar. The cloud cover reduces the amount of shortwave energy received by the snowpack by absorbing a portion of this energy, and the unabsorbed portion is later reduced by reflection from the snow surface. The interaction between these two processes makes it difficult to determine the source(s) of errors in the snowmelt estimates. For example, an error in the snowmelt that can be corrected by adjusting the parameter ηc may also be corrected through adjusting the parameter ηa. The inability to determine the actual source(s) of errors in the snowmelt may lead to an erroneous adjustment of a snowpack parameter, which may degrade future snowpack forecasts. The sources of errors in the snowmelt estimates, or in the snowpack forecasts in general, may be best identified by studying the effect of synthesized errors in the snowpack parameters on the snowpack forecasts, as has been done in this section and in sections 6.3.1 and 6.3.3.

6.3.3 The Snow Formation Function

In most precipitation gauges, the precipitation form (snow or rain) is unknown. Since the form of the precipitation is important for the routing of the water through the watershed, a simple routine is used in the UBC watershed model to estimate the ratio of snowfall, ps, to rainfall, pr, in the total precipitation (Quick and Pipes, 1989a):

• If the average temperature, Ta, is less than 0.0°C, then ps(t) = pp(t), where pp(t) is the precipitation at time t.

• If Ta > Tr, where Tr is the temperature above which all precipitation is rain, then ps(t) = 0.0 and pr(t) = pp(t).

• If the average temperature is greater than 0.0°C and less than Tr, then

ps(t) = [1 − Ta(t)/Tr]·pp(t)   (6.12)
In this study, however, Tr is treated as a parameter that can be updated using snowpack measurements. Fig. 6.6 shows the snowpacks calculated for TT values of 1.0, 1.5, 2.0, 2.5 and 3.0 °C. As can be seen from this figure large TT values causes increases in the snowfalls leading to thicker snowpack. Small values of Tr result in the opposite. 6.4 The Snowpack Updating Approach In the UBC watershed model the snowpack parameters are selected through a calibration process using historical data and their values are fixed during the operation stage. In this study, however, the snowpack parameters, r?c, AT S , and j]a are considered dynamic quan-tities that can be updated using on-line snowpack measurements. The general outline of the updating model has been presented at the beginning of this chapter. The approach followed in developing this model can be summarized in the following three steps: 1. Deriving an explicit function expressing the change in the snowpack in terms of the sum of net energy fluxes Ht and the sum of snowfall over the interval between two consecutive snowpack measurements, i.e: dm(t)-dm(t-k) = F( J2 Ht(i), £ Ps(i)) (6-13) j ' = t - f c + l i=t-k+l -- where k is the interval between the snowpack measurements in days. This step is presented in section 6.5 2. The second step is to derive a function that expresses the sum of the net energy fluxes J2 Ht and the sum of snowfall YlPs m terms of the parameter to be updated. This step is presented in section 6.6 for the parameters r/c, AT S , r)a, and Tr. 1 - Ta(t) pp(t) (6.12) E E < O a. O z CO 1400 1200 1000 800 600 400 200 OCT1.69 OCT31.69 NOV30.69 DEC30.69 JAN29.70 FEB28.70 MAR30.70 APR29.70 MAY29.70 JUN28.70 JUL28.70 AUG27.70 SEP26.70 DATE Fig. 6.6 Sensitivity of the snowpack to the snow formation parameter T r . Chapter 6. A Snowpack Updating Model 112 3. 
Finally, the equation obtained from step (1) is combined with the one from step (2) to obtain an explicit function of the parameter in terms of two consecutive snowpack measurements. This step is also presented in section 6.6.

6.5 The Snowpack as a Function of the Net Energy Fluxes and the Snowfalls

In the UBC snowpack routine presented in section 6.2 the snowpack is calculated by a recursive method: the snowpack can only be calculated from the previous snowpack, the net energy flux and the snowfall of the previous day. The recursive method is computationally convenient and efficient, since there is no need to store previous information and its computation equations are less complex. However, it cannot be used for updating the snowpack parameters from snowpack measurements if the measurement interval is longer than one day. So, to accomplish the first step presented in the previous section, a relationship has been developed in which the snowpack is expressed in terms of the snowfalls, the net heat fluxes, and the cold content between two consecutive snowpack measurements. This relationship can be presented as follows:

• Let k be the measurement interval for two consecutive snowpack measurements dm(t) and dm(t−k), and let κf(i), κp(i) and κa(i) for t−k ≤ i ≤ t be the snowpack thermal condition tags, defined as follows:

— For a frozen snowpack, i.e. He(i) < 0.0: κf(i) = 1.0 and κp(i) = κa(i) = 0.0

— For a melting snowpack, He(i) > 0.0:

κp(i) = 1.0 and κa(i) = 0.0, for He(i) ≤ dm(i−1) + ps(i)

κp(i) = 0.0 and κa(i) = 1.0, for He(i) > dm(i−1) + ps(i)

— Let κfp(i) = κf(i) + κp(i), so that κfp(i) = 1.0 means that the snowpack is either frozen or melting.
• Let θ1(i), θ2(i), θ3(i), θ4(i), θ5(i), θ6(i), θ7(t) and θ8(t) be defined as follows:

θ1(i) = κfp(i) for i = t
θ1(i) = κfp(i)·θ1(i+1) for t−k+1 ≤ i ≤ t−1

θ2(i) = κp(i) for i = t
θ2(i) = θ1(i+1)·κp(i) for t−k+1 ≤ i ≤ t−1

θ3(i) = 0.0 for i = t
θ3(i) = λ·κf(i)·κp(i+1) for i = t−1
θ3(i) = λ·κf(i)·θ3(i+1) for t−k+1 ≤ i ≤ t−2

θ4(i) = 0.0 for i = t and i = t−1
θ4(i) = λ·θ1(i+2)·κp(i+1)·κf(i) for t−k+1 ≤ i ≤ t−2

θ5(i) = 0.0 for i = t, t−1 and t−2
θ5(i) = λ·κf(i)·[θ5(i+1) + λ·κf(i+1)·κp(i+2)·θ1(i+3)] for t−k+1 ≤ i ≤ t−3

θ6(i) = θ2(i) + θ3(i) + θ4(i) + θ5(i) for t−k+1 ≤ i ≤ t

θ7(t) = λ·κf(t−k)·θ6(t−k+1)

θ8(t) = θ1(t−k+1)

• The snowpack at time i = t can then be expressed as follows:

dm(t) = Σ_{i=t−k+1}^{t} θ1(i)·ps(i) − Σ_{i=t−k+1}^{t} θ6(i)·Ht(i) − θ7(t)·Hcc(t−k) + θ8(t)·dm(t−k)   (6.14)

The derivation of this relationship has been presented in detail in Appendix C.

6.6 The Snowpack Parameters as a Function of the Snowpack

In this section the snowpack parameters ηc, ΔTs, ηa and Tr are expressed individually as functions of the snowpack.

6.6.1 The Cloud Cover Correction Factor ηc

Substituting equations 6.2 and 6.4 into equation 6.1 yields:

Ht(t) = Is(t)·[1 − C(t)]·[1 − A(t)] + [−20 + 0.94·Ta(t)]·[1 − C(t)] + 1.24·Tn(t)·C(t) + Hn   (6.15)

where Hn = Hc + Ha + Hr. Rearranging the above equation to obtain Ht in terms of the cloud cover, C, yields:

Ht(t) = α(t)·C(t) + β(t)   (6.16)

where α(t) = 1.24·Tn(t) + 20 − 0.94·Ta(t) − Is(t)·[1 − A(t)], and β(t) = Is(t)·[1 − A(t)] − 20 + 0.94·Ta(t) + Hn.

Modifying the cloud cover by the correction factor ηc results in:

Ht(t) = ηc·α(t)·C(t) + β(t)   (6.17)
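The rearrangement of Eq. 6.15 into the linear form of Eq. 6.16 can be checked numerically; the sketch below evaluates both forms for arbitrary (invented) inputs and confirms they agree.

```python
def ht_direct(Is, C, A, Ta, Tn, Hn):
    """Net heat input from Eq. 6.15, in mm/day of potential melt."""
    return Is*(1 - C)*(1 - A) + (-20 + 0.94*Ta)*(1 - C) + 1.24*Tn*C + Hn

def ht_linear_in_c(Is, C, A, Ta, Tn, Hn):
    """The same quantity written as alpha(t)*C(t) + beta(t) (Eq. 6.16)."""
    alpha = 1.24*Tn + 20 - 0.94*Ta - Is*(1 - A)
    beta = Is*(1 - A) - 20 + 0.94*Ta + Hn
    return alpha*C + beta

# The two forms agree for any choice of inputs (values here are made up):
vals = (12.0, 0.4, 0.7, 3.0, -1.0, 2.5)
assert abs(ht_direct(*vals) - ht_linear_in_c(*vals)) < 1e-9
```

Since Ht is linear in C, scaling the cloud cover by ηc gives Eq. 6.17 directly, which is what makes the closed-form update of Eq. 6.18 possible.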
6.18 with the correspondent measured snowpacks dm(t) and dm(th) yields the rjc value that if was used in calculating the snowpack from time t — k would have produced calculated snowpack values equal to the measured. In other words, Eq. 6.18 can be used to calculate the value of the parameter rjc that corrects the calculated snowpacks to match the measured. The problem, however, is that the course of development of the snowpack thermal condition may change by changing the value of r]c, resulting in different set of values of the tags (« / , « p and na) for different values of r/c. This problem can be overcome by applying an iterative routine to determine the correct set of tags. Fortunately, the iterative procedure is usually quite short (maximum of three steps), and is not needed for many situations, for example if the snowpack is melting where KV = 1 and «/ = KA = 0. It should be emphasized that adjusting the value of nc alone may not be sufficient to match the calculated snowpacks with the measured ones. This happens, for example, if the snowpack is frozen and the calculated snowpack is found to be less than the measured. In such situation the snowpack cannot be increased by changing the cloud cover. Chapter 6. A Snowpack Updating Model 6.6.2 The Cloud Cover Parameter A T a Substituting equation 6.9 into equation 6.16 yields: Ht(t) = am-Tx{t)^n{S+m = -a(t)(Tx(t) - Tn(t))~r + a{t) + ftt) = <x*(t)(±r) + f3*(t) where a*(t) = -a(t)(Tx{t) - T„(i)); and 0*(t) = a(t) + B(t) Substituting equation 6.19 into equation 6.14 yields: dm(t) = £ 01{i)p.(i)-[ J2 06(i)a*(i)] 1 A T i=t-k+l i=t-k+l 3 - J2 06(i)f(i)-O7(t)Hcc(t-k) + 98(t)dm(t i=t-k + l Rearranging the above equation yields: -e7(t)Hjt-k)+ J2 »i (»)?.(•') i=t-k+l - J2 'e(O0*(O] i=t-k+l The discussion on the application of Eq. 6.18 applies also to Eq. 6.20. 6.6.3 The Snow Albedo Correction Factor r/a Rearranging equation 6.15 Ht{t) = a°(t)AL{t) + B°{t) Chapter 6. 
where α°(t) = −Is(t)·[1 − C(t)], and β°(t) = [Is(t) − 20 + 0.94·Ta(t)]·[1 − C(t)] + 1.24·Tn(t)·C(t) + Hc + Ha + Hr.

Modifying the snow albedo by the correction factor ηa results in:

Ht(t) = ηa·α°(t)·A(t) + β°(t)   (6.21)

Substituting equation 6.21 in equation 6.14 results in:

dm(t) = Σ θ1(i)·ps(i) − [Σ θ6(i)·α°(i)·A(i)]·ηa − Σ θ6(i)·β°(i) − θ7(t)·Hcc(t−k) + θ8(t)·dm(t−k)

Rearranging the above equation yields:

ηa = [ θ8(t)·dm(t−k) − dm(t) − θ7(t)·Hcc(t−k) + Σ θ1(i)·ps(i) − Σ θ6(i)·β°(i) ] / [ Σ θ6(i)·α°(i)·A(i) ]   (6.22)

Eq. 6.22 can be used, similarly to Eq. 6.18, to correct the calculated snowpacks to match the measured ones by adjusting the snow albedo correction factor, ηa.

6.6.4 The Snow Formation Parameter Tr

Substituting equation 6.12 into equation 6.14 results in:

dm(t) = Σ θ1(i)·[1 − Ta(i)/Tr]·pp(i) − Σ θ6(i)·Ht(i) − θ7(t)·Hcc(t−k) + θ8(t)·dm(t−k)

Rearranging the above equation results in:

1/Tr = [ θ8(t)·dm(t−k) − dm(t) − θ7(t)·Hcc(t−k) − Σ θ6(i)·Ht(i) + Σ θ1(i)·pp(i) ] / [ Σ θ1(i)·Ta(i)·pp(i) ]   (6.23)

Equation 6.23 has more limitations than equations 6.18, 6.20 and 6.22, since it can only be used when 0.0 < Ta < Tr. This problem can be overcome by using an iterative method.

6.7 Algorithm of the Snowpack Updating Model

The procedure used in updating a snowpack parameter from on-line snowpack measurements is presented below:

1. During the course of calculating the snowpack, using the routine of the UBC watershed model (section 6.2), the snowpack thermal condition tags (κf, κp and κa) are determined on a daily basis.

2. Once a new snowpack measurement becomes available (at time t), the variables θ1, θ2, θ3, θ4, θ5, θ6, θ7 and θ8 are back-calculated, according to section 6.5, from time t to the time, t−k, of the previous snowpack measurement.

3. One of the snowpack parameters, ηc, ΔTs, ηa or Tr, can then be updated using one of Eqs. 6.18, 6.20, 6.22 or 6.23, respectively.

4.
The new value of the snowpack parameter is then used in recalculating the snowpack, starting from time t−k and ending at time t, and step 1 is repeated. If the new set of tag values matches the previous one, the next step is executed; otherwise steps 2, 3 and 4 are repeated.

5. The updated snowpack parameter is then used in the UBC watershed model to calculate the flow forecasts.

6. The previous steps are repeated whenever a new snowpack measurement becomes available.

6.8 Application of the Snowpack Updating Model

The snowpack updating model was used to update the cloud cover correction factor, ηc, using on-line snowpack measurements from the Mount Fidelity data station, British Columbia. The updated ηc was then used in the UBC watershed model to forecast flows from the Illecillewaet basin. During the period Oct. 1, 1969 to Sep. 30, 1970 the snowpack at Mount Fidelity was measured six times (see Fig. 6.7). Only the last three measurements were useful for updating the parameter ηc, since the first three were collected during the accumulation stage, when the snowmelt is insignificant and the cloud cover therefore has little effect on the snowpack. However, using even this limited number of snowpack measurements in updating dramatically improved the snowpack forecasts, as can be seen from Fig. 6.7. The values of ηc for this period are shown in Fig. 6.8, from which it can be concluded that the cloud cover in the off-line UBC watershed model was consistently underestimated. Using the updated values of ηc for calculating river flows from the Illecillewaet basin resulted in a general improvement of the flow forecasts, as can be seen from Fig. 6.9, and an increase in the value of the coefficient of efficiency, E, from 0.42 for the un-updated flow forecasts³ to 0.58 for the updated flow forecasts.
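The inversion at the heart of this procedure (Eqs. 6.14, 6.17 and 6.18) can be illustrated with synthetic numbers. Because the forecast snowpack is linear in ηc, a single measurement determines the corrective value. All arrays below are invented, and the thermal-condition tags are held fixed; in practice, step 4 of the algorithm repeats the update when the tags change.

```python
def snowpack_forecast(eta_c, th1, th6, th7, th8, ps, alpha, C, beta, hcc_prev, dm_prev):
    """Eq. 6.14 with the heat input of Eq. 6.17 substituted in.

    th1, th6, ps, alpha, C, beta are daily sequences over the k days between
    measurements; th7 and th8 are the scalar coefficients theta_7(t), theta_8(t).
    """
    return (sum(t1 * p for t1, p in zip(th1, ps))
            - eta_c * sum(t6 * a * c for t6, a, c in zip(th6, alpha, C))
            - sum(t6 * b for t6, b in zip(th6, beta))
            - th7 * hcc_prev + th8 * dm_prev)

def update_eta_c(dm_meas, th1, th6, th7, th8, ps, alpha, C, beta, hcc_prev, dm_prev):
    """Eq. 6.18: the eta_c that makes the forecast reproduce the measurement dm_meas."""
    numer = (th8 * dm_prev - dm_meas - th7 * hcc_prev
             + sum(t1 * p for t1, p in zip(th1, ps))
             - sum(t6 * b for t6, b in zip(th6, beta)))
    denom = sum(t6 * a * c for t6, a, c in zip(th6, alpha, C))
    return numer / denom
```

Generating a "measurement" from the forward model with a known ηc and then inverting it recovers the parameter exactly, which is the self-consistency the algorithm of section 6.7 relies on.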
However, using the updated values of the parameter ηc resulted in significant underestimation of the high flows between the end of May and the middle of June. This is attributed to the fact that increasing the cloud cover to accommodate the underestimation of the snowpack at the Fidelity station, which is located in the upper part of the Illecillewaet basin, resulted in the underestimation of the snowmelt in the lower parts of the Illecillewaet basin, which are the main sources of the flow at the beginning of the melting season.

³These forecasts were calculated using a version of the UBC watershed model different from the one used in Chap. 5.

Fig. 6.7 The measured, un-updated and updated snowpacks for 1969-1970.

Fig. 6.8 The updated values of the cloud cover parameter ηc for 1969-1970.

Fig. 6.9 Hydrographs of the observed, un-updated and updated flow forecasts and their residuals for 1970.

The snowpack updating model was also used to update the parameter ηc for the period Oct. 1, 1974 to Sep. 30, 1976. The first four snowpack measurements were collected during the accumulation stage and therefore were not used in updating. Fig.
6.10 shows great improvement in the snowpack forecasts after using the updated cloud cover correction factors shown in Fig. 6.11. Using the updated factors in calculating the flow forecasts for the periods April 15 to Sep. 30, 1975 and April 15 to Sep. 30, 1976 resulted in a significant improvement in the flow forecasts, as can be seen from Figs. 6.12 and 6.13, respectively. Evaluating these flow forecasts using the coefficient of efficiency, E, showed improvements in the value of the coefficient E from 0.41 to 0.79 for the year 1975 and from 0.25 to 0.61 for the year 1976.

Fig. 6.10 The measured, un-updated and updated snowpacks for 1974-1976.

Fig. 6.11 The updated values of the cloud cover parameter ηc for 1974-1976.

6.9 Combined Application of the Snowpack and the Flow Updating Models

The flow updating model presented in Chap. 5 was applied to update, on a daily basis, the flow forecasts calculated from the updated cloud cover correction factors ηc. Figs. 6.14, 6.15 and 6.16 show the updated flow forecasts using the snowpack updating alone, the flow forecasts updated by both the snowpack and the flow updating models, and their corresponding residuals for the periods April 15 to Sep. 30 of the years 1970, 1975 and 1976, respectively. All three figures show great improvements in flow forecasts due to the additional application of the flow updating model. Evaluating these flow forecasts
Fig. 6.12 Hydrographs of the observed flows, un-updated and updated flow forecasts and their residuals for 1975.

Fig. 6.13 Hydrographs of the observed flows, un-updated and updated flow forecasts and their residuals for 1976.

Fig. 6.14 Hydrographs of the observed flows, updated (snowpack only) and updated (snowpack and flow) flow forecasts and their residuals for 1970.

Fig. 6.15 Hydrographs of the observed flows, updated (snowpack only) and updated (snowpack and flow) flow forecasts and their residuals for 1975.

Fig. 6.16 Hydrographs of the observed flows, updated (snowpack only) and updated (snowpack and flow) flow forecasts and their residuals for 1976.

using the coefficient of efficiency shows improvements in the values of E from 0.58 to 0.92 for the year 1970, from 0.79 to 0.93 for the year 1975, and from 0.61 to 0.88 for the year 1976.
Updating the cloud cover parameter ηc may result in significant changes in the flow forecasts, which may in turn lead to dramatic changes in the weighting factors of the flow updating model. Therefore, it is necessary to update the weighting factors of the flow updating model whenever the cloud cover parameter ηc is updated.

6.10 Updating Parameters of the Snowpack Using Flow Measurements

The scarcity, if not the absence, of snowpack measurements in many watersheds is a major obstacle to the application of the above snowpack updating model. For this reason, the theoretical structure of a model for updating snowpack parameters using flow measurements was laid out. In this model, on-line flow measurements are processed through a state-space model to update single parameters of the snowpack routine using the Kalman filter technique. The basic structure of this updating model and the derivation method are presented in Appendix D.

6.11 Summary and Conclusion

In this chapter, a study of the behavior of the snowpack, and the development of a method for updating the parameters of an energy budget snowpack model from on-line snowpack measurements, have been presented. The snowpack was observed to pass through two distinctive stages: the accumulation stage and the depletion stage.

The snowpack accumulation stage is characterized by intense snowfalls, which result in high snow albedos and therefore reduce the shortwave energy inputs to the snowpack, and by very low temperatures, which result in longwave, convective and advective heat losses from the snowpack. The net outcome is a freezing snowpack and a consistent increase in the snowpack.

During the snowpack depletion stage, the average temperatures are relatively high, leading to longwave, convective and advective heat transfers to the snowpack, and the snowfalls are lower in frequency and intensity, resulting in a lower snow albedo and therefore more shortwave heat transfer to the snowpack.
The combined effect of these conditions results in a melting, and therefore depleting, snowpack.

The cloud cover affects both the shortwave and the longwave heat transfers to the snowpack, while the snow albedo affects the shortwave heat transfer only. The snowmelt, which is induced by the net heat transfer to the snowpack, is therefore a function of both the cloud cover and the snow albedo. Since the snowmelt is significant only during the snowpack depletion stage, only the snowpack measurements collected during this stage can be used to update the estimates of the cloud cover and the snow albedo. The snowpack measurements collected during the accumulation stage are nevertheless important for updating the estimates of the snowfalls.

The effect of the cloud cover on the snowpack may not be distinguishable from that of the snow albedo, since both affect the shortwave heat transfer in a similar way. This may lead to difficulty in determining the source(s) of errors in the snowpack forecasts. Identifying the source(s) of the errors is important, since adjusting the value of a snowpack parameter to reduce a current snowpack forecast error may degrade the quality of future snowpack forecasts. An efficient method for identifying the sources of errors in the snowpack forecasts is to analyze the effect of synthesized errors in the snowpack parameters on the snowpack.

A model was developed to update the cloud cover estimates from snowpack measurements. The updating model was used on data from the Mount Fidelity station, and significant improvements were noticed for the three years under study.

The updated cloud cover estimates were then used in the UBC watershed model to calculate the flow forecasts for the Illecillewaet basin. Significant improvements in the flow forecasts were achieved in two of the years. In the other year, a general improvement was noticed, with the exception of the high flows at the beginning of the melting season.
This was attributed to the overestimation of the cloud cover, and therefore the underestimation of the snowmelt, in the lower parts of the Illecillewaet basin, which are the main sources of the flow at the beginning of the melting season.

Both the cloud cover updating model and the flow updating model presented in Chap. 5 were applied simultaneously to the flow forecasts, resulting in significant improvements in the flow forecasts for the three years of study. Since updating the cloud cover estimates results in changes in the flow forecasts, and therefore in the weighting factors of the flow updating model, it is necessary to update the weighting factors immediately after updating the cloud cover.

Chapter 7

Discussion and Conclusions

The main goal of this research is to develop techniques for supplying conceptual watershed models with feedback of real-time (on-line) hydrological data (e.g. flow and snowpack measurements) to update flow forecasts. Since the main objective of any updating procedure is to reduce the residuals, or the errors, of the flow forecasts, the nature of these errors has been studied, and techniques for analyzing these errors have been reviewed and discussed. In particular, a least squares criterion has been developed to test the linearity of forecast errors.

A review of many real-time flow forecasting models showed that most of the updating procedures used in these models are based on the Kalman filter technique. Theoretical and practical aspects of this technique have been investigated by using the method to update simulated flow forecasts, which were generated from a watershed model. This review of the Kalman filter method showed that the technique is efficient in reducing linear errors. This important characteristic was exploited in developing a method for updating the flow forecasts of the UBC watershed model, which was then applied successfully to data from the Illecillewaet basin in British Columbia.
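As a toy illustration of the kind of linear-error correction referred to here, a scalar Kalman filter can track a constant multiplicative bias in a sequence of forecasts. The random-walk state model and the noise variances below are my own minimal setup for illustration, not the formulation used in the thesis.

```python
def kalman_bias_track(forecasts, observations, q=1e-4, r=4.0):
    """Track the bias b in obs ~ b * forecast with a scalar Kalman filter.

    State: the bias b, modelled as a random walk with process variance q.
    Measurement: obs(t) = b(t) * fcst(t) + noise of variance r.
    Returns the sequence of filtered bias estimates.
    """
    b, P = 1.0, 1.0                    # initial estimate and its variance
    estimates = []
    for f, y in zip(forecasts, observations):
        P += q                         # predict: random-walk growth of uncertainty
        K = P * f / (f * f * P + r)    # Kalman gain for the observation model H = f
        b += K * (y - b * f)           # correct the state with the innovation
        P *= (1.0 - K * f)             # updated state variance
        estimates.append(b)
    return estimates
```

With a persistent 20% overshoot in the observations relative to the forecasts, the filtered bias settles near 1.2 within a few steps; this is the sense in which a Kalman filter is "efficient in reducing linear errors."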
The success of this method rests on the validity of the assumption of error linearity and on the strength of the underlying UBC watershed model.

The above updating model aims at reducing the forecast errors after they have been generated, i.e. it does not affect the source of these errors. Since snowmelt constitutes the major input to the flows in the Illecillewaet basin, and in mountainous watersheds in general, errors in estimating the snowpack may result in large flow forecast errors that last for a long period of time, due to the long residence time of the snow. For this reason, a method was developed to update parameters of the UBC snowpack routine using real-time snowpack measurements, and then to use these updated parameters in calculating flow forecasts. The updating method was applied successfully to meteorological data from the Mount Fidelity station, located at a high elevation in the Illecillewaet basin.

Both of the above-mentioned updating methods were used in combination on data from the Illecillewaet basin. The results showed additional improvement in the flow forecasts. However, careful consideration should be given to the interaction between the two updating methods when they are used in combination. The following is a discussion of the main findings of the thesis and of the conclusions on the proposed methods, including a discussion of the combined use of these updating procedures.

7.1 Sources and Analysis of Forecast Residuals

The nature of the forecast errors, or residuals, must be identified before a decision can be made on the type of updating procedure to be used. These forecast residuals can only be determined from historical data sets, for which the forecast flows have been observed. Residuals of the flow forecasts are caused by one or a combination of the following: a misrepresentative model structure, miscalibrated model parameters, errors in the input data, and errors in the reference data.
Residuals caused by a misrepresentative model structure are usually large and are best treated by redesigning or replacing the model. Forecast residuals due to miscalibration are better treated by recalibrating the model rather than by designing a flow forecast updating procedure. However, forecast residuals caused by errors in the model variables (e.g. the water storages, the snow albedo) may be effectively reduced using a model that updates the values of these variables. The effect of linear errors in the input data or the reference data can be effectively reduced using flow forecast updating models. On the other hand, misrepresentative data may produce forecast residuals which are difficult to correct, or even to analyze.

Graphical presentation is an important means of evaluating the quality of the flow forecasts. Plots of the observed and forecast hydrographs and of the residuals show the general performance of the forecasting model and the distribution of the residuals, and may reveal certain characteristics of the forecasts such as timing errors or extreme errors. Errors in the flow rating curves may be evaluated by analyzing plots of the observed flow vs. the forecast, as has been shown in section 2.2.1. Correlation between consecutive residuals can be detected and evaluated by examining plots of the forecast residuals against previous residuals (see section 2.2.1) and, more accurately, by examining plots of the autocorrelation and partial autocorrelation functions, as shown in section 2.2.3.

The quality of the flow forecasts can be evaluated using several least squares criteria. The coefficient of efficiency, E, measures the overall accuracy of the flow forecasts, while the coefficient of determination, D, measures the overall accuracy of the flow forecasts after removing linear errors.
Based on these two coefficients, a least squares criterion PLE, defined in section 2.2.2, has been developed for evaluating linear errors in the flow forecasts. This combined use of the coefficients E and D appears to be a new criterion and, although simple, is a very powerful tool. A high value of PLE indicates that the flow forecast errors are highly linear, and vice versa. The linearity of errors is an important condition for the application of the flow updating model presented in Chap. 5, as it is for many of the updating techniques found in the literature.

For water resource applications that require accurate, very short term flow forecasts, it is important to test the performance of the flow forecasting model against those of naive models, since the naive models are quite powerful at very short term forecasting and require very few calculations. Two naive models were reviewed in this research: the persistence model, in which the flow is taken equal to the most recent measured flow, and the extrapolation model, in which the flow is estimated by linear extrapolation of the last two measured flows. The performance of a flow forecasting model is tested against these two naive models using two least squares criteria: the coefficient of persistence Cp and the coefficient of extrapolation Ce, respectively. Higher values of these criteria indicate that the flow forecasting model is doing better than the naive models, and vice versa. Both criteria have been discussed in Chap. 2. It is important to note that since the persistence and extrapolation models perform badly for long forecast leads, Cp and Ce are not suitable for evaluating long-term flow forecasts.
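These least squares criteria can be sketched in a few lines of code. E, Cp and Ce below follow their standard definitions, and D is computed as the efficiency remaining after the best linear correction of the forecasts. The PLE formula shown is only one plausible combination of E and D (the fraction of total error that a linear correction would remove); the thesis's exact definition is in its section 2.2.2 and may differ.

```python
# Least squares criteria for evaluating flow forecasts (sketch).
# E, Cp and Ce follow their standard definitions; PLE below is an
# ASSUMED combination of E and D, not necessarily the thesis's form.

def coeff_efficiency(obs, fc):
    """Coefficient of efficiency E: 1 - SSE / SStot."""
    mean = sum(obs) / len(obs)
    sse = sum((o - f) ** 2 for o, f in zip(obs, fc))
    sstot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - sse / sstot

def coeff_determination(obs, fc):
    """Coefficient of determination D: the efficiency remaining after
    the best linear correction a + b*fc has removed linear errors."""
    n = len(obs)
    mo, mf = sum(obs) / n, sum(fc) / n
    b = sum((f - mf) * (o - mo) for o, f in zip(obs, fc)) / \
        sum((f - mf) ** 2 for f in fc)
    a = mo - b * mf
    return coeff_efficiency(obs, [a + b * f for f in fc])

def prop_linear_error(obs, fc):
    """Assumed PLE form: the fraction of total forecast error that a
    linear correction would remove (1.0 means purely linear errors)."""
    e, d = coeff_efficiency(obs, fc), coeff_determination(obs, fc)
    return (d - e) / (1.0 - e)

def coeff_persistence(obs, fc):
    """Cp: model error relative to the naive persistence forecast."""
    sse = sum((o - f) ** 2 for o, f in zip(obs[1:], fc[1:]))
    ssp = sum((obs[t] - obs[t - 1]) ** 2 for t in range(1, len(obs)))
    return 1.0 - sse / ssp

def coeff_extrapolation(obs, fc):
    """Ce: model error relative to linear extrapolation of the last
    two observed flows."""
    sse = sum((obs[t] - fc[t]) ** 2 for t in range(2, len(obs)))
    sso = sum((obs[t] - (2 * obs[t - 1] - obs[t - 2])) ** 2
              for t in range(2, len(obs)))
    return 1.0 - sse / sso
```

For a forecast whose only defect is a fixed linear bias, D equals 1 and the assumed PLE equals 1, flagging the errors as fully removable by a linear updating scheme.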
The discussion in section 2.2.2 regarding the characteristics of the coefficient of efficiency, the coefficient of persistence and the coefficient of extrapolation showed that the coefficient of efficiency is the most suitable for evaluating flow forecasts. The above least squares criteria can only be applied to historical data, which are not the basis for updating. However, these criteria indicate the nature of the errors, can be used to evaluate the quality of previous flow forecasts, and are, hopefully, a measure of future forecast accuracy.

7.2 Literature Review of Real-Time Forecasting Models

A review of many of the existing real-time forecasting models showed that they can be classified into three types:

1. Error prediction models;
2. Transfer function models; and
3. Real-time conceptual models.

The error prediction models are not flow forecasting models per se, since they are basically procedures for estimating and correcting the errors in the flow forecasts of given flow forecasting models. In error prediction methods, the forecast residual is assumed to be related to its past values according to a certain time-series relationship (e.g. an autoregressive process of order one, AR(1)), as has been presented in section 3.1. This relationship is then used to estimate future values of the errors, which are in turn used to adjust the flow forecasts. The performance of an error prediction method is very dependent on the correlations among consecutive errors. For this reason, the method is quite effective on the falling limb of the hydrograph, where correlations among consecutive errors are high, while being unpredictable on the rising limb, where error correlations are weak. The performance of the error prediction method is also dependent on the forecast lead.
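The error prediction idea just described can be sketched with a first-order autoregressive residual model. This is an illustrative reconstruction, not the exact scheme of section 3.1: residuals are assumed to follow e_t = phi * e_{t-1} + noise, and the correction applied to a forecast `lead` steps ahead is phi**lead times the latest residual.

```python
# Sketch of AR(1) error prediction (illustrative only): fit phi to
# past residuals, then correct a future model forecast by the
# predicted residual.

def fit_ar1(residuals):
    """Least squares estimate of phi from consecutive residual pairs."""
    num = sum(residuals[t] * residuals[t - 1]
              for t in range(1, len(residuals)))
    den = sum(e * e for e in residuals[:-1])
    return num / den

def corrected_forecast(fc_future, last_residual, phi, lead):
    """Adjust a model forecast `lead` steps ahead by the AR(1)
    prediction of its residual; the correction decays as phi**lead,
    which is why long leads gain little from error prediction."""
    return fc_future + (phi ** lead) * last_residual
```

The phi**lead decay makes the method's behavior explicit: with strong residual correlation (phi near 1, as on the falling limb) the correction stays useful; with weak correlation or long leads it fades to nothing.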
Errors of short term flow forecasts are highly correlated, and therefore are more accurately estimated by the error prediction method, while errors of long term flow forecasts are only weakly correlated, leading to a weak performance of the error prediction method.

In the transfer function models, presented in section 3.3, the flow is estimated as a linear function of the input data as well as of some of the previously measured flows. The coefficients of this function are either fixed values determined beforehand by calibration on historical data, or variables that can be recursively estimated using an optimization technique (e.g. the Kalman filter or the maximum likelihood technique). These models are sometimes referred to as black box or input-output models, since they simulate the relationship between the meteorological data as an input and the flow data as an output with little consideration of the underlying physical processes. The performance of this type of model depends on the validity of the assumed linear relationship, which is generally based on the correlation among consecutive flows. These correlations are strongest on the falling limb of the hydrograph and for short measurement intervals, while being weak on the rising limb and for long measurement intervals. The performance of a transfer function model is therefore expected to be superior on the falling limb and for short forecast leads, and unreliable on the rising limb and for long forecast leads. Two examples of this type of model are presented in section 3.3.

In the conceptual models, presented in section 3.4, the watershed system is visualized as a network of water storages, representing different soil layers, which receive input of rainfall and/or snowmelt, less losses, according to a priority system.
In this system, the water storages representing the lower soil zones and the impervious areas have the highest priorities for the input, and the excess of the input is then transferred to the water storages representing the subsurface layer. The outflows from all water storages are assumed to be functions of the corresponding storages. These outflows can then be routed through a series of storages, and the sum of these routed flows is taken as the estimate of the flow at the basin outlet. The flow chart of a basic conceptual model is shown in Fig. 3.1. The parameters of a conceptual model can be treated as variables that can be updated from real-time data using the Kalman filter technique. The performance of a conceptual model is generally better than that of a transfer function or an error prediction model, since it is based on a more realistic simulation of the watershed system. However, conceptual models are usually complex and more expensive to operate. Like the other types of real-time models, the conceptual models perform best on the falling limb of the hydrograph and poorly on the rising limb. However, this behavior arises for different reasons: the rising limb of the hydrograph is mainly controlled by the upper water storages, which are highly variable and therefore difficult to predict, while the falling limb is driven by the lower water storages, which change slowly and therefore are easier to estimate. (Describing a forecast lead as short or long is based on the time of concentration of the watershed, which is a function of the size and topography of the watershed, the nature of the precipitation and the climate. In general, small, steep, rainfall-induced watersheds are characterized by short concentration times, while large, less steep, snowmelt-induced watersheds have long concentration times.) Two examples of this type of model are shown in section 3.4.
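The storage network just described can be illustrated with a minimal two-storage sketch. The storage names, capacity and recession coefficients below are invented for illustration and do not come from the UBC watershed model or any model in the thesis.

```python
# Minimal two-storage conceptual model (illustrative only). Input
# first fills the upper (fast) storage up to its remaining capacity;
# the excess percolates to the lower (slow) storage. Each storage
# drains as a linear reservoir, q = k * S, and the basin outflow is
# the sum of the routed components.

def step(upper, lower, inflow, k_fast=0.5, k_slow=0.05, upper_cap=20.0):
    """Advance the model one time step; returns new storages and outflow."""
    room = max(upper_cap - upper, 0.0)
    to_upper = min(inflow, room)      # upper zone has priority for input
    to_lower = inflow - to_upper      # excess goes to the slow storage
    upper += to_upper
    lower += to_lower
    q_fast, q_slow = k_fast * upper, k_slow * lower
    upper -= q_fast
    lower -= q_slow
    return upper, lower, q_fast + q_slow

def simulate(inflows, upper=0.0, lower=0.0):
    """Run the model over a sequence of inputs, returning the outflows."""
    flows = []
    for x in inflows:
        upper, lower, q = step(upper, lower, x)
        flows.append(q)
    return flows
```

Even this toy version reproduces the behavior discussed above: the fast upper storage shapes the rising limb and reacts sharply to input, while the slow lower storage produces a long, smooth recession that is far easier to predict.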
The main characteristics of the above models can be summarized as follows:

• All the models perform well on the falling limb of the hydrograph.
• The performance of all models deteriorates rapidly on the rising limb.
• The error prediction models are very dependent on the correlation among consecutive flow forecast residuals, while the transfer function models are dependent on the correlation among consecutive flows.
• The transfer function models do well only for short forecast leads.
• The conceptual models tend to out-perform the other two types of models as the forecast lead increases.
• The timing error is not taken into consideration in any of the above models. Such errors may lead the real-time model to degrade, rather than improve, the flow forecasts.

7.3 The Kalman Filter

If the error analysis discussed in Chap. 2 and summarized in section 7.1 shows that there are significant linear errors, then two approaches are possible for improving the flow forecasts: a) recalibration, and b) linear updating of the flow forecasts, which can be done effectively using the Kalman filter technique, widely used in real-time flow forecasting models. The Kalman filter is a recursive technique that updates the estimate of an unknown quantity, usually referred to as a state, using real-time measurements. The state estimate is determined such that it produces the least sum of squares of errors and is unbiased, conditions necessary to obtain an optimal estimate under the assumption that the state is a stochastic, normally distributed variable.
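The recursion can be written out for the simplest case of a scalar state that behaves as a random walk. This is the textbook scalar form, not the multivariate filter actually used in Chap. 5, but it makes the role of the two variances concrete.

```python
# Scalar Kalman filter update (textbook form, illustrative). `q` is
# the state-noise variance and `r` the measurement variance; the gain
# depends only on their relative sizes, which is the relative-variance
# property discussed below.

def kalman_step(x, p, z, q, r):
    """One predict/update cycle for a random-walk state.
    x, p : prior state estimate and its variance
    z    : new real-time measurement
    q, r : state-noise and measurement variances"""
    p = p + q               # predict: uncertainty grows by the state noise
    k = p / (p + r)         # Kalman gain (the weighting factor)
    x = x + k * (z - x)     # update: weighted blend of model and data
    p = (1.0 - k) * p       # posterior variance shrinks after the update
    return x, p
```

Scaling q, r and the initial p by the same constant leaves the state estimates unchanged, which is exactly the relative-variance result established in Chap. 4: a large q/r ratio trusts the measurements (large updates), a small ratio trusts the model.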
From this point of view, the Kalman filter can be seen as a Bayesian approach in which the estimates of the mean and the variance of the normal probability density function of the state are updated whenever a relevant measurement is available, with the mean being the optimal estimate of the state and the variance determining the confidence limits of this optimal estimate. The updating procedure in the Kalman filter calculates the updated estimate as a weighted sum of two estimates: one calculated as a linear function of the previous values, usually referred to as the state equation, and the other calculated as a linear function of the measurement, usually referred to as the measurement equation. The weighting factors are determined such that the conditions of least sum of squares of errors and unbiasedness of the estimates, discussed above, are satisfied. These weighting factors are functions of two important quantities in Kalman filter theory, namely the state noise variance and the measurement variance. These variances determine the accuracies of the estimates obtained from the state and the measurement equations, respectively, such that smaller variances represent higher quality estimates, and vice versa. The updating process in the Kalman filter is therefore dependent on two factors: the accuracy of estimating the state from its previous estimate, and the quality of the measurement, such that the better the quality of the measurement, the greater the updating of the state estimate, and vice versa. (A recursive technique is one that does not need to store past measurements; an unbiased estimate is one that converges to the true value if a large number of measurements is available.)

In this study, it has been shown, as presented in Chap. 4, that the updating process in the Kalman filter is a function of the relative, rather than the absolute, values of the above-mentioned state noise and measurement variances. (This is true in the region beyond the influence of the initial conditions, or if the initial state variance, discussed in Chap. 4, is a function of the state noise variance.) Therefore, in applying sensitivity analysis for selecting the optimal values of these variances, it is sufficient to test the updating model performance against one variance only, while keeping the other fixed.

Based on a simulation study, presented in Chap. 4, in which linear errors were introduced in the flow forecasts, it was found that the state variability, the measurement quality and the forecast lead are the main factors to be considered in selecting the optimal values of the state noise variance and the measurement variance. In general, a large state noise variance and a small measurement variance are recommended for a highly variable system, good measurements and a long forecast lead, while a small state noise variance and a large measurement variance are recommended for a slowly varying system, poor measurements and a short forecast lead. However, very large state noise variances should be avoided, since they may cause very large fluctuations in the state estimates as a result of poor measurements.

The adaptive Kalman filter approach is based on updating the state noise variance and the measurement variance using real-time data. Based on a review of some of the methods using this approach, and on the author's experience in applying one of them, adaptive Kalman filtering is not recommended, for the following reasons:

1. Most of the adaptive methods assume that the state noise and measurement variances change very slowly. Using these methods would therefore result in insignificant changes in these variances, and hence minimal, if any, improvements in the flow forecasts.
2. Most of the adaptive models require strict conditions that cannot be satisfied even for simple hydrological applications.
3. A larger number of measurements is needed than for the nonadaptive approach, which is a particularly serious problem for many hydrological applications.
4. Additional parameters are introduced, which need to be assigned, creating additional sources of uncertainty in the forecasts.
5. The number of computations increases significantly.

As stated above, a fundamental condition for applying the Kalman filter is linearity of the system. However, certain linearization techniques can be used to linearize nonlinear relationships. The Taylor expansion is a popular technique, but statistical linearization is a better and more consistent alternative, especially for highly nonlinear relationships (e.g. threshold-type ones).

7.4 The Proposed Updating Procedures

Based on analyzing the errors in the flow forecasts of the UBC watershed model and studying their main sources, two updating procedures were developed: a) in one, the flow forecasts of the UBC watershed model are directly updated from real-time flow measurements; and b) in the other, parameters of the snowpack routine in the UBC watershed model are updated using real-time snowpack measurements, and the updated parameters are then used in the UBC watershed model to upgrade the flow forecasts. In addition, both procedures can be used together in updating the flow forecasts. In the following sections, these two updating procedures and their single and combined use are briefly presented and discussed.
7.4.1 Direct Updating of the Flow Forecasts

Analysis of the errors of the flow forecasts of the Illecillewaet basin generated by the UBC watershed model showed that they are mostly linear in structure. Since, as mentioned before, the Kalman filter is very effective in reducing linear errors, it was used in developing a procedure for updating the flow forecasts of the Illecillewaet basin. This updating procedure was based on the assumption that the flow can be estimated as a weighted sum of the flow components calculated by the UBC watershed model, with the weighting factors being the states updated by the Kalman filter using real-time flow measurements. The main equations and the algorithm of this procedure are discussed in detail in Chap. 5.

As mentioned in section 7.3, the two most important parameters of the Kalman filter are the state noise variance and the measurement variance. For the current flow updating procedure, the state noise variance is a matrix, with its diagonal elements constituting the variances of the random noises associated with each weighting factor and its non-diagonal elements being the covariances between the noises of the weighting factors, while the measurement variance is simply the variance of a flow measurement. (The state noise and measurement variances of this updating procedure are presented in detail in Chap. 5.) The optimal values of these variances were selected from a sensitivity analysis of the quality of the updated flow forecasts, evaluated in terms of the coefficient of efficiency E, versus the measurement variance while fixing the state noise variance. Using one variance as the basis of the sensitivity analysis, while fixing the other, was sufficient since, as mentioned before, the updating procedure in the Kalman filter is mainly a function of the relative, not the absolute, values of the state noise variance and the measurement variance. Two assumptions were tested: one was to treat the state noise variance and the measurement variance as fixed quantities throughout the updating period; the other was to take the state noise variance as the square of a certain percentage of the corresponding state, and the measurement variance as the square of a certain percentage of the most recent flow measurement. In general, the percentage form was found to be more efficient in updating high flow forecasts. This supports the observations of many researchers that higher measurement errors are associated with higher flows, and that the measurement variance is therefore better selected as a direct function of the flow measurement. The sensitivity analysis showed that the measurement variance is best selected to lie between the squares of 10% and 20% of the flow measurement, and that the state noise variance components gave the best results when selected equal to the square of 10% of their corresponding weighting factors.

The updating procedure was applied to the flow forecasts of the Illecillewaet basin for the period April 15 to July 15, 1970, which is characterized by high flows due to snowmelt. The flow forecasts were updated for forecast leads of 1, 5, 10, 20, and 30 days. Significant improvements in the flow forecasts, measured by the coefficient of efficiency E, were achieved even for forecast leads up to 20 days. The limited improvement for a forecast lead of 30 days is due to the lack of sufficient measurements for updating (4 flow measurements over the three-month period).
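The procedure can be sketched as a Kalman-style update of the weighting factors, using the percentage-form variances that the sensitivity analysis favoured. The covariance is kept diagonal here for brevity (the full procedure in Chap. 5 carries the complete matrix), and all numerical values are illustrative.

```python
# Sketch of the weighted-sum flow updating idea: the observed flow is
# modelled as z = sum(w_i * h_i) + noise, where h_i are the model's
# flow components and the weights w_i are the Kalman states.
# Percentage-form variances are used: measurement variance
# (0.15*z)**2 and state-noise variance (0.10*w_i)**2. Diagonal
# covariance is an approximation made here for brevity.

def update_weights(w, P, components, z, meas_pct=0.15, noise_pct=0.10):
    """One Kalman update of the weighting factors.
    w          : list of weights (the states)
    P          : list of state variances (diagonal covariance assumed)
    components : flow components h_i from the watershed model
    z          : real-time flow measurement"""
    # predict: weights persist, variances grow by the state noise
    P = [p + (noise_pct * wi) ** 2 for p, wi in zip(P, w)]
    r = (meas_pct * z) ** 2                     # measurement variance
    q_fc = sum(wi * h for wi, h in zip(w, components))
    s = sum(p * h * h for p, h in zip(P, components)) + r  # innovation var
    gains = [p * h / s for p, h in zip(P, components)]
    w = [wi + k * (z - q_fc) for wi, k in zip(w, gains)]
    P = [p * (1.0 - k * h) for p, k, h in zip(P, gains, components)]
    return w, P
```

After each update, the corrected forecast is simply the new weighted sum of the components; because the measurement variance scales with the flow, a large observed flow is not over-trusted.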
The ability of the updating model to improve the flow forecasts even for a forecast lead as long as 20 days is mainly attributed to two factors: one is the validity of the assumption that the errors in the flow forecasts of the UBC watershed model are highly linear and of the form described by the updating procedure; the other is the strength of the underlying UBC watershed model.

Using the adaptive method of Kitanidis and Bras (1978b) to update the state noise variance for the above updating procedure resulted in no significant difference in the flow forecasts. This is because the adaptive method was designed on the assumption that the state noise variance changes slowly, and therefore the state noise variance is not changed significantly enough to cause significant changes in the flow forecasts. The application of this adaptive method required significant increases in the complexity and the number of computations, and an additional effort for calibrating new parameters. For all these reasons, this adaptive method is not recommended for the current, or similar, updating procedures. In fact, based on a review of other forms of adaptive Kalman filtering, adaptive methods are generally not recommended, for the several reasons discussed in section 7.3. However, the final decision on whether to use such methods for a particular application should be based on testing the application with several of these methods.

7.4.2 A Snowpack Updating Model

In the previous updating procedure, the flow forecasts were directly updated using real-time flow measurements, without affecting the source of the errors in the watershed model. Since snowmelt constitutes the main source of high flows for mountainous watershed systems, as in the Illecillewaet basin, it is expected that a high percentage of the errors in the flow forecasts will be caused by errors in estimating the snowmelt.
In addition, errors in the snowpack estimates are expected to result in long term errors in the flow forecasts, due to the long residence time of the snow. For these reasons, an updating procedure was designed to update parameters of the snowpack routine of the UBC watershed model using real-time snowpack measurements, and then to use the updated snowpack parameters in the UBC watershed model to calculate flow forecasts.

As a preliminary step in the design of the snowpack updating procedure, snowpack sensitivity to several snowpack parameters was analyzed. Testing two parameters associated with the estimation of the cloud cover showed that underestimating the cloud cover results in an insignificant increase in the thickness of the snowpack during its accumulation stage but a significant increase in the rate of snowmelt during the melting season. This is an important result, since mis-estimating the rate of snowmelt results in timing errors and generally poor forecasts of the high flows, and the remaining unmelted snowpack can be subject to accumulation errors. Studying the effect of the snow albedo on the snowpack showed that underestimation of the snow albedo results in overestimation of the snowmelt rate, and vice versa. The similarity between the effects of the cloud cover and the snow albedo on the snowpack is attributed to the practically similar, yet physically different, roles of each factor in controlling the amount of shortwave energy absorbed by the snowpack, which contributes to the snowmelt. The percentage of cloud cover determines the percentage of the shortwave radiation which reaches the snowpack, while the snow albedo determines the percentage of the shortwave radiation reflected by the snow. However, the cloud cover also controls the amount of longwave energy leaving the snowpack.
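The parallel roles of the two factors can be made explicit with a simplified radiation balance. The cloud-attenuation coefficient and all magnitudes below are illustrative stand-ins, not the coefficients of the UBC snowpack routine.

```python
# Simplified snowpack radiation terms (illustrative coefficients).
# Cloud cover scales the incoming shortwave and albedo sets the
# reflected fraction, so both act multiplicatively on the absorbed
# shortwave -- which is why their errors look alike. Cloud cover
# additionally suppresses the net longwave loss from the snowpack.

def absorbed_shortwave(q_clear, cloud, albedo, k_cloud=0.65):
    """Shortwave absorbed by the snowpack (same units as q_clear).
    cloud, albedo in [0, 1]; k_cloud is an assumed attenuation factor."""
    return q_clear * (1.0 - k_cloud * cloud) * (1.0 - albedo)

def net_longwave_loss(l_clear_loss, cloud):
    """Net longwave loss from the pack, reduced under overcast skies."""
    return l_clear_loss * (1.0 - cloud)
```

Lowering either the cloud fraction or the albedo raises the absorbed shortwave in the same way, so an underestimate of either parameter inflates the computed melt rate, matching the similarity discussed above; only the longwave term separates the two.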
The similarity between the effects of the cloud cover and the snow albedo may cause a problem in identifying the sources of errors in the snowpack estimates. This problem can be significant, since it may lead to an erroneous adjustment of a parameter as a result of an error caused by another parameter, which may degrade, rather than upgrade, future forecasts. The problem of identifying the source of errors in the snowpack forecasts may be solved by analyzing the effects of synthesized errors in individual snowpack parameters on the snowpack estimates, as has been presented in Chap. 6, and also by using measured data, such as the US Corps data.

The snowpack updating procedure was applied to update the cloud cover estimates using snowpack data from the Mount Fidelity station, located in the upper region of the Illecillewaet basin, and the updated cloud cover estimates were then used in updating the snowpack estimates. Significant improvement in the snowpack estimates was achieved, although only a very small number of snowpack measurements (6 to 8 a year) was available. In addition, only the snowpack measurements collected during the snowmelt season, which make up only half of the measurements, were useful in updating the cloud cover. This is because the cloud cover only influences the snowmelt and not the snowfall, and therefore is only significant during the melting season. However, the snowpack measurements taken during the snowpack accumulation stage are important for updating the snowfall estimates. The updated cloud cover estimates showed that the cloud cover was probably being consistently underestimated in the UBC watershed model. This information can be useful for improving the estimation of the cloud cover in the UBC watershed model, and shows how the snowpack updating method can be used as a calibration tool for the watershed model.
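A single-parameter adjustment of this kind can be sketched as a capped root-finding step: nudge a cloud-cover scale factor until the simulated snow water equivalent matches the measurement, with the per-measurement step capped so that one bad measurement cannot swing the parameter wildly. This is only an illustrative reconstruction, and `simulate_swe` is a hypothetical stand-in for a run of the snowpack routine.

```python
# Sketch of single-parameter snowpack updating (illustrative, not the
# Chap. 6 algorithm): bisect on a cloud-cover scale factor c so the
# simulated snow water equivalent matches a real-time measurement.
# The search window is capped around the current value to avoid
# extreme adjustments from a single (possibly bad) measurement.

def update_parameter(c, swe_measured, simulate_swe, max_step=0.1,
                     iters=20, lo=0.0, hi=2.0):
    """Return an adjusted parameter; assumes simulated SWE increases
    with c (more cloud -> less melt -> deeper pack)."""
    lo = max(lo, c - max_step)   # cap the adjustment per measurement
    hi = min(hi, c + max_step)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if simulate_swe(mid) < swe_measured:
            lo = mid             # too little snow -> raise cloud cover
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The cap embodies the caution stated above: a measurement implying a parameter far outside the window moves the estimate only by `max_step`, so several consistent measurements are needed before the parameter drifts substantially.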
Using the updated estimates of the cloud cover in the UBC watershed model to estimate the streamflows from the Illecillewaet basin for three years resulted in a significant improvement in the flow forecasts for two of those years. However, the high flows at the beginning of the snow melting season of the third year were underestimated. This is attributed to the overestimation of the cloud cover, and therefore the underestimation of the snowmelt, in the lower parts of the Illecillewaet basin, which are the main sources of the flow at the beginning of the melting season and are not truly represented by the Mount Fidelity station located in the upper regions of the basin. However, the flow forecasts for the rest of the snow melting season were improved, since they are driven by the snowmelt in the upper parts of the basin, which are more closely represented by the Mount Fidelity station.

The current snowpack updating model is only the first step towards developing a more comprehensive updating model. The current model only updates individual parameters, i.e. it cannot update more than one parameter from a single snowpack measurement. On-going research is aimed at defining a set of criteria that can be used to determine the percentage of updating in different parameters as a result of an error in the snowpack forecast. The updating model is also based on the assumption that the measurement is perfectly correct, i.e. it does not take into account the measurement error. Since very few snowpack measurements are usually available for updating, the negative effect of bad measurements on the parameter estimates can be serious. In the current updating model, this problem can best be avoided by carefully studying the changes in the parameters due to new measurements and avoiding extreme adjustments of these parameters.
However, additional research is needed to upgrade the present updating model and to take measurement errors into consideration.

7.5 Updating Snowpack Parameters Using Flow Measurements

As mentioned above, snowpack measurements in most watersheds are quite scarce, which creates a problem for applying the above snowpack updating model. For this reason, the main structure of a model for updating the snowpack parameters from real-time flow measurements has been laid out in Appendix D. This updating model also takes into account the errors in the measurement, since it is based on the Kalman filter technique. This updating model is currently under development.

7.6 Detection and Correction of Timing Errors

In none of the reviewed real-time forecasting models were timing errors taken into account. For this reason, a procedure was developed based on using lagged linear regression to estimate and correct the timing errors in the flow forecasts, as discussed in section 2.2.2. Applying this procedure resulted in only a little improvement in the flow forecasts. This limitation is attributed to the high variability of the timing errors. However, this procedure can be a useful tool for calibration.

7.7 Combined Use of Updating Techniques

The snowpack updating model discussed in Chap. 6 and the flow updating model discussed in Chap. 5 were used in conjunction to update the flow forecasts. The snowpack updating model was used to update the cloud cover estimates, which were then used to calculate the flow forecasts. A further updating of the flow forecasts was then made using the flow updating model. This resulted in additional improvement in the flow forecasts. Since adjusting the estimates of the cloud cover affects the weighting factors of the flow updating model, it is important to apply the flow updating model whenever a snowpack parameter is updated.
This is not a significant problem, since flow measurements are available more frequently than snowpack measurements, and therefore a flow measurement is almost always available when a snowpack measurement is collected.

7.8 General

This research is based on the premise that real-time flow forecasting, or flow forecasting in general, should be based on a simulation of the physical processes which control the behavior of the watershed system; such models are often referred to as conceptual. In contrast to the physically based models, the success of the transfer function models is solely dependent on the validity of the statistical relationships on which they are based. Since these relationships are short-lived, the transfer function models are not expected to perform well beyond very short forecast leads. It can even be argued that the naive models, like the persistence model, are better alternatives for very short forecast leads. Although conceptual models are generally better alternatives than the transfer function models, due to their physically based structures, they are usually complex and require the calibration of a large set of parameters. To avoid this problem, and since many watershed models are not designed to operate in real-time mode, this research was aimed at developing techniques for updating the flow forecasts of such watershed models without introducing major changes into their structures.

As a first step toward developing updating techniques, methods for analyzing flow forecast error signals were studied, and an apparently new criterion, PLE, was introduced to detect and evaluate linear errors in the flow forecasts. This is particularly important, since linearity of errors is a basic assumption in many updating techniques. Then, the Kalman filter technique, used in many real-time forecasting models, was studied and presented using simulated data.
The results from this study showed that the updating process in the Kalman filter is mainly a function of the relative, rather than the absolute, values of the state noise variance and the measurement variance. The optimal values of these variances were found to be functions of the variability of the parameters to be updated, the measurement quality and the forecast lead.

Because of its capability to reduce linear errors, the Kalman filter was the first approach used to develop a method for updating the flow forecasts of the UBC watershed model. This updating method was applied successfully to data from the Illecillewaet basin. The high performance of this updating method, even for long forecast leads, is mainly attributed to the high linearity of the flow forecast errors and the strength of the underlying UBC watershed model.

Since high flows in mountainous basins are mainly driven by snowmelt, estimation of the snowpack establishes the volume of snowmelt runoff available during the season, and estimation of the melt rate determines the time distribution of the runoff. Therefore, an updating procedure was developed to update parameters of the snowpack routine of the UBC watershed model using real-time snowpack measurements. This updating method was applied to data from the Mount Fidelity station, and the updated parameters were then used in the UBC watershed model to upgrade the snowpack and flow forecasts. The snowpack forecasts were significantly improved and the flow forecasts were generally improved, with the exception of one year in which the flow was underestimated at the beginning of the melting season. This was attributed to the underestimation of the snowmelt in the lower regions of the Illecillewaet basin, which were not truly represented by the Mount Fidelity station located in the upper region of the basin.
Future upgrading of the snowpack updating procedure would include updating more than one parameter at a time, and taking into consideration errors in the snowpack measurements.

Both of the above updating techniques were used in combination, resulting in additional improvement in the flow forecasts. Since updating the snowpack parameters affects the weighting factors in the flow updating model, the flow updating model should always be applied after any updating of the snowpack parameters.

To overcome the scarcity of the snowpack measurements, an updating procedure is suggested for updating parameters using real-time flow measurements. This updating procedure is at an early stage of development.

Since timing errors were virtually ignored in all reviewed updating procedures, a simple method for detecting and correcting timing errors based on lagged linear regression was suggested. This method has been applied with limited success on data from the Illecillewaet basin.

REFERENCES

Ambrosino, G., P. Bolzern, M. Ferrario and G. Fronza (1978), "Real-Time Predictors of Flood Events: A Case Study," Ricerche di Automatica, Vol. 9, No. 2, pp. 184-200.

Becker, A., M. Melcher and G. Kose (1982), "Updating of Discharge Rating Curves by Means of Mathematical Models," Advances in Hydrometry (Proceedings of the Exeter Symposium), IAHS Publ. No. 134, pp. 37-48.

Bergman, M. J. and J. W. Delleur (1985a), "Kalman Filter Estimation and Prediction of Daily Stream Flows: I. Review, Algorithm, and Simulation Experiments," Water Resources Bulletin, American Water Resources Association, Vol. 21, No. 5, pp. 815-825.

Bergman, M. J. and J. W. Delleur (1985b), "Kalman Filter Estimation and Prediction of Daily Stream Flows: II. Application to the Potomac River," Water Resources Bulletin, American Water Resources Association, Vol. 21, No. 5, pp. 827-832.

Box, G. E. P. and G. M. Jenkins (1970), "Time Series Analysis: Forecasting and Control," Holden-Day.

Brewer, H. W.
(1976), "Identification of the Noise Characteristics in a Kalman Filter," Control and Dynamic Systems, Advances in Theory and Applications (C. T. Leondes, ed.), 12, pp. 491-581.

Burn, D. H. (1985), "River Flow Forecasting Model for Sturgeon River," Journal of Hydraulic Engineering, ASCE, Vol. 111, No. 2, pp. 316-333.

Burn, D. H. (1988), "River Flow Forecasting for Multiple Time Periods," Canadian Journal of Civil Engineering, Vol. 15, pp. 58-65.

Durbin, J. (1960), "The Fitting of Time Series Models," Revue de l'Institut International de Statistique, 28, pp. 233-243.

Gelb, A. (1986), "Applied Optimal Estimation," M.I.T. Press.

Georgakakos, K. P. and R. L. Bras (1982), "Real-Time, Statistically Linearized, Adaptive Flood Routing," Water Resources Research, Vol. 18, No. 3, pp. 513-524.

Georgakakos, K. P. and R. L. Bras (1984a), "A Hydrologically Useful Station Precipitation Model: 1. Formulation," Water Resources Research, Vol. 20, No. 11, pp. 1585-1596.

Georgakakos, K. P. and R. L. Bras (1984b), "A Hydrologically Useful Station Precipitation Model: 2. Case Studies," Water Resources Research, Vol. 20, No. 11, pp. 1597-1610.

Georgakakos, K. P. (1986a), "A Generalized Stochastic Hydrometeorological Model for Flood and Flash-Flood Forecasting, 1. Formulation," Water Resources Research, Vol. 22, No. 13, pp. 2083-2095.

Georgakakos, K. P. (1986b), "A Generalized Stochastic Hydrometeorological Model for Flood and Flash-Flood Forecasting, 2. Case Studies," Water Resources Research, Vol. 22, No. 13, pp. 2096-2106.

Harvey, A. C. (1981), "Time Series Models," John Wiley & Sons, New York.

IMSL (1987), "User's Manual," Problem Solving Software Systems, IMSL.

Jones, D. A. and R. J. Moore (1980), "A Simple Channel Flow Routing Model for Real-Time Use," Hydrological Forecasting - Proceedings of the Oxford Symposium, IAHS-AISH Publ. No. 129, pp. 397-408.

Kalman, R. E.
(1960), "A New Approach to Linear Filtering and Prediction Problems," Transactions of the ASME, Journal of Basic Engineering, Series D, Vol. 82, pp. 34-45.

Kalman, R. E. and R. S. Bucy (1961), "New Results in Linear Filtering and Prediction Theory," Transactions of the ASME, Journal of Basic Engineering, Series D, Vol. 83, pp. 95-108.

Kitanidis, P. K. and R. L. Bras (1978), "Error Identification in Conceptual Hydrologic Models," Applications of Kalman Filter to Hydrology, Hydraulics and Water Resources (C. L. Chiu, ed.), University of Pittsburgh Press, Pittsburgh, pp. 599-635.

Kitanidis, P. K. and R. L. Bras (1980a), "Adaptive Filtering Through Detection of Isolated Transient Errors in Rainfall-Runoff Models," Water Resources Research, Vol. 16(4), pp. 740-748.

Kitanidis, P. K. and R. L. Bras (1980b), "Real-Time Forecasting with a Conceptual Hydrologic Model: 1. Analysis of Uncertainty," Water Resources Research, Vol. 16(6), pp. 1025-1033.

Kitanidis, P. K. and R. L. Bras (1980c), "Real-Time Forecasting with a Conceptual Hydrologic Model: 2. Applications and Results," Water Resources Research, Vol. 16(6), pp. 1034-1044.

Ljung, L. (1983), "System Identification: Theory for the User," Prentice-Hall, Information and System Sciences Series.

Ljung, L. and T. Soderstrom (1987), "Theory and Practice of Recursive Identification," The M.I.T. Press.

Marivoet, J. L. and G. L. Vandewiele (1980), "A Real-Time Rainfall-Runoff Model," Hydrological Forecasting - Proceedings of the Oxford Symposium, IAHS-AISH Publ. No. 129, pp. 409-418.

Mehra, R. K. (1970), "On the Identification of Variances and Adaptive Kalman Filtering," IEEE Transactions on Automatic Control, AC-15, pp. 175-184.

Mehra, R. K. (1972), "Approaches to Adaptive Filtering," IEEE Transactions on Automatic Control, AC-17, pp. 693-698.

Moore, R. J.
(1982), "Transfer Functions, Noise Predictors, and the Forecasting of Flood Events in Real-Time," Statistical Analysis of Rainfall and Runoff, Water Resources Publications, pp. 229-250.

Moore, R. J. (1986), "Advances in Real-Time Flood Forecasting Practice," Symposium on Flood Warning Systems, Institution of Water Engineers and Scientists.

Nash, J. E. and J. V. Sutcliffe (1970), "River Flow Forecasting Through Conceptual Models, I. A Discussion of Principles," Journal of Hydrology, Amsterdam, Vol. 10, pp. 282-290.

Priestley, M. B. (1987), "Spectral Analysis and Time Series," Academic Press Ltd.

Quick, M. C. and A. Pipes (1989a), "High Mountain Snowmelt and Applications to Runoff Forecasting," Proceedings of the Annual Conference and the 9th Canadian Hydrotechnical Conference, Canadian Society for Civil Engineering, Vol. II A, pp. 232-247.

Quick, M. C. and A. Pipes (1989b), "Manual of the UBC Watershed Model," University of British Columbia.

Quick, M. C. (1990), "The Reliability of Flood Discharge Estimates," Proceedings of the Annual Conference and the 10th Canadian Hydrotechnical Conference, Canadian Society for Civil Engineering, Vol. V, pp. 246-259.

Szollosi-Nagy, A., S. Z. Ambrus, P. Bartha, K. Harkanyi and E. Meki (1984), "On the Use of Recursive Algorithms in Real-Time River Flow Forecasting - Hungarian Experiences," IFAC 9th Triennial World Congress, Hungary, pp. 3213-3218.

Todini, E., A. Szollosi-Nagy and E. F. Wood (1976), "Adaptive State-Parameter Estimation Algorithms for Real-Time Hydrological Forecasting: A Case Study," Proc. of the IIASA-WMO Workshop on the Recent Developments in Real Time Forecasting/Control of Water Resources Systems, Laxenburg (Austria).

Todini, E. (1978), "Mutually Interactive State-Parameter (MISP) Estimation," Proc. Chapman Conf. on Application of Kalman Filter to Hydrology and Water Resources, Pittsburgh, pp. 135-151.

Todini, E., and J. R.
Wallis (1978), "A Real-Time Rainfall-Runoff Model for an On-Line Flood Warning System," Proc. Chapman Conf. on Application of Kalman Filter to Hydrology and Water Resources, Pittsburgh, pp. 355-368.

Troutman, B. M. (1982), "An Analysis of Input Errors in Precipitation-Runoff Models Using Regression with Errors in the Independent Variables," Water Resources Research, Vol. 18, No. 4, pp. 947-964.

Tucci, C. E. M. and R. T. Clarke (1980), "Adaptive Forecasting with a Conceptual Rainfall-Runoff Model," Hydrological Forecasting - Proceedings of the Oxford Symposium, IAHS-AISH Publ. No. 129, pp. 445-454.

Appendix A

Hydrological and Climatic Characteristics of the Illecillewaet Watershed

The Illecillewaet river is a typical tributary to the upper Columbia river. The following paragraphs give a brief description of the hydrology and climate of this upper Columbia region.

The Columbia River flows to the northwest within a wide glaciated valley, known as the Rocky Mountain Trench, from its source in Columbia Lake for a distance of approximately 200 miles. It then makes an abrupt turn and flows in a southerly direction for about 100 miles to Revelstoke. The Canoe River, a major tributary, flows in a south-easterly direction within the Rocky Mountain Trench to meet the Columbia River just upstream of the site of Mica Dam. With the completion of Mica Dam and subsequent filling of the reservoir, McNaughton Lake, the Rocky Mountain Trench will be flooded from about Valemount to Donald, a distance of 130 miles.

The drainage boundary is formed by the Rocky Mountain chain in the east and by the Cariboo and Monashee Mountains in the west. The Purcell and Selkirk Mountains separate the northerly flowing section of the Columbia River from the section flowing to the south from Mica Dam. Several of the mountain peaks within the basin exceed El. 10,000 and rise more than 8000 feet above the valley floor.
Some 10 percent of the drainage area, approximately 1000 square miles, is covered with permanent snowfields and glaciers.

The Columbia River basin between Revelstoke and Golden is densely forested with prominent stands of cedar and hemlock at the lower elevations and spruce and fir at the higher elevations. Vegetation thins above 5000 feet and the tree-line lies in the range of 6000 to 7000 feet. In the area just upstream of Golden, a medium to heavy Douglas fir and pine-grass vegetation cover is dominant. Further south, the forest cover changes to a lighter and more open mixture of Douglas fir, white spruce, yellow pine, larch, lodgepole pine and aspen in varying proportions.

The occurrence of several major lakes in the Columbia River system reflects the extent of major glaciation in the past. The only natural lakes of any significant size remaining upstream of Revelstoke are Columbia Lake and Windermere Lake in the headwater section of the Columbia River.

CLIMATE

The Columbia River basin, located some 300 to 400 miles inland from the Coast Mountains, is influenced primarily by the maritime air masses and by weather systems moving eastwards across the basin.

The temperature regime is strongly dependent on elevation, but temperatures recorded at several long-term meteorological stations located in the valleys indicate a fairly uniform pattern of temperature across the basin. Average monthly temperatures recorded at Revelstoke and several meteorological stations in the Rocky Mountain Trench are quite similar, ranging from lows of 10 to 15°F in January to highs of 60 to 65°F in July. The average diurnal temperature range at the valley stations displays a seasonal trend varying from about 15°F during the winter to 30 to 35°F in July. Extreme maximum temperatures in excess of 100°F have been recorded at several stations within the basin.

Long-term meteorological records at Glacier (El.
4100) provide information on the temperatures in the middle elevation ranges in the basin and on the temperature lapse rates. The mean monthly temperatures in July are in the 55 to 60°F range, and in the winter are about 10°F.

The rates of change of temperature with elevation (lapse rates), based on a comparison of daily maximum and minimum temperatures recorded at stations at different elevations, show greater variability in the winter than in the summer. In the summer, the lapse rates in the maximum daily temperatures are consistently high, averaging about 4°F per 1000 feet, but lapse rates in the minimum daily temperatures tend to be lower and more erratic because at night the cooler air often drains into the valleys. During the winter months, lapse rates in both the daily maximum and minimum temperatures are low (averaging 1-2°F per 1000 feet) and variable, as arctic air frequently becomes trapped in the valleys with warmer maritime air circulating aloft.

Substantial snowpacks develop during the winter at all elevations in the basin. The snowpack in the valley bottom at Revelstoke is usually depleted by the end of April, but permanent snowfields exist at the highest elevations. The rapid snowpack depletion in the lower and middle elevation ranges during the late spring is the dominant influence on the hydrology of the region. The magnitude of the resulting runoff depends not only on the warming temperature pattern, but also on the distribution of the snowpack throughout the basin.

The eastward movement of moisture-laden maritime air masses across the Selkirk and Purcell Mountains produces considerably more precipitation in the drainage area downstream of Mica than in the upstream area to the east of the mountains in the Rocky Mountain Trench.
Precipitation data indicates that the meteorological stations in the Columbia River valley downstream of Mica Dam not only receive significantly more precipitation than the stations in the Rocky Mountain Trench, but also a greater proportion of that precipitation falls as snow. The long-term average annual precipitation ranges from approximately 43 inches at Revelstoke (El. 1500) to 60 inches at Glacier (El. 4100) and to 16 inches at Canal Flats (El. 2650). The water equivalents of the average annual snowfall for the same stations are 16.2 inches, 38.2 inches and 4.4 inches respectively. Measurements of snowpacks above El. 6000 indicate that the water equivalent of the winter accumulation is in the order of 50 to 80 inches.

Appendix B

Detailed Algorithm of the Flow Updating Model

The algorithm of the flow updating model, presented in Chap. 5, is presented in detail and sample calculations are provided.

B.1 Prediction

B.1.1 The Initial Conditions

Initially, the weighting factors and the covariance matrix are given the following values:

    x(0)_ = | 1.0 |    and    P(0)_ = | 0.01  0.0   0.0  |
            | 1.0 |                   | 0.0   0.01  0.0  |
            | 1.0 |                   | 0.0   0.0   0.01 |

These initial values are selected based on no previous knowledge of the weighting factors. The variances (diagonal elements) are simply taken as the square of 10% of the corresponding initial values of the weighting factors. Setting the covariances (the nondiagonal elements) equal to zero is based on the assumption that there are no initial correlations between different weighting factors. Although this assumption is not true, its influence on the model performance is insignificant¹ since the model updates the covariance matrix as new flow measurements become available. The effect of the initial conditions can be dramatically reduced by running the updating model on a previous data record.

¹Based on sensitivity analysis of the initial conditions.
B.1.2 For Any Time t > 0

The weighting factors and the covariance matrix are calculated from their previous values as follows:

    x_{k+1}(-) = x_k(+)                                                        (B.1)

    P_{k+1}(-) = P_k(+) + G_k                                                  (B.2)

Eq. B.2 can be expanded as follows:

    P_{k+1}(-) = | σ²_xs(t_k)+ + σ²_ws   σ_sr(t_k)+            σ_sg(t_k)+           |
                 | σ_sr(t_k)+            σ²_xr(t_k)+ + σ²_wr   σ_rg(t_k)+           |   (B.3)
                 | σ_sg(t_k)+            σ_rg(t_k)+            σ²_xg(t_k)+ + σ²_wg  |

The flow for the time t_{k+1} is then estimated as follows:

    Q_e(t_{k+1}) = α_s(t_{k+1})_ Q_s(t_{k+1}) + α_r(t_{k+1})_ Q_r(t_{k+1}) + α_g(t_{k+1})_ Q_g(t_{k+1})

B.2 Updating

Once a new flow measurement becomes available, the weighting factors and the covariance matrix are updated according to the following:

B.2.1 The Kalman Gain

The Kalman gain K(t_{k+1}) is calculated as follows:

    K_{k+1} = P_{k+1}(-) H^T_{k+1} [H_{k+1} P_{k+1}(-) H^T_{k+1} + R]^{-1}     (B.4)

The term P_{k+1}(-) H^T_{k+1} can be expanded as follows²:

    P_{k+1}(-) H^T_{k+1} = | Q_s σ²_xs_ + Q_r σ_sr_ + Q_g σ_sg_ |
                           | Q_s σ_sr_ + Q_r σ²_xr_ + Q_g σ_rg_ |              (B.5)
                           | Q_s σ_sg_ + Q_r σ_rg_ + Q_g σ²_xg_ |

²The arguments of time on the right-hand side, which are not shown due to space limitations, are the same as those on the left-hand side; the same applies to the following equations.

The denominator in Eq. B.4 can be expressed as follows:

    D = Q²_s(t_{k+1}) σ²_xs(t_{k+1})_ + Q²_r(t_{k+1}) σ²_xr(t_{k+1})_ + Q²_g(t_{k+1}) σ²_xg(t_{k+1})_
        + 2[Q_s(t_{k+1}) Q_r(t_{k+1}) σ_sr(t_{k+1})_ + Q_s(t_{k+1}) Q_g(t_{k+1}) σ_sg(t_{k+1})_
        + Q_r(t_{k+1}) Q_g(t_{k+1}) σ_rg(t_{k+1})_] + σ²_v                     (B.6)

Substituting Eqs. B.5 and B.6 into Eq. B.4 yields:

    K_{k+1} = (1/D) | Q_s σ²_xs_ + Q_r σ_sr_ + Q_g σ_sg_ |
                    | Q_s σ_sr_ + Q_r σ²_xr_ + Q_g σ_rg_ |                     (B.7)
                    | Q_s σ_sg_ + Q_r σ_rg_ + Q_g σ²_xg_ |

From the above equation it can be seen that the Kalman gain is a function not only of the variances of each weighting factor but also of the covariances. This is an important fact that will be discussed in the following sections.

B.2.2 The Weighting Factors

The weighting factors are updated as follows:

    x_{k+1}(+) = x_{k+1}(-) - K_{k+1} r_{k+1}                                  (B.8)

From Eq. B.8, it can be seen that the adjustment in a given weighting factor due to a new measurement is a function of its corresponding variance and covariances, the
forecast residual and the flow components. The degree of this adjustment increases as the corresponding variance and flow component, and the forecast residual, increase, and vice versa. On the other hand, the effect of the other flow components is dependent on the corresponding covariances. If this given weighting factor is negatively correlated (negative covariance) with another weighting factor, then the higher the flow component of the latter weighting factor is, the less the updating is in the former weighting factor, and vice versa. This explains why in some cases the response of a certain weighting factor to the forecast residual is much less or even opposite to what is expected. For example, an overestimation in the flow forecast is expected to result in an overall decrease of the weighting factors. However, as will be seen later in an example, high negative covariances between the weighting factors may lead to increases, rather than decreases, in the weighting factors of the less dominant flow components. The role of the covariances in the updating process is important to avoid over-adjusting the weighting factors.

B.2.3 The Covariance Matrix

The covariance matrix P is updated as follows:

    P_{k+1}(+) = [I - S_{k+1}] P_{k+1}(-)                                      (B.9)

where S_{k+1} is a non-dimensional gain that can be expressed as follows:

    S_{k+1} = K_{k+1} H_{k+1}                                                  (B.10)

Substituting Eq. B.7 into Eq. B.10 yields the following equation:

    S_{k+1} = | K_1 Q_s   K_1 Q_r   K_1 Q_g |
              | K_2 Q_s   K_2 Q_r   K_2 Q_g |                                  (B.11)
              | K_3 Q_s   K_3 Q_r   K_3 Q_g |

Substituting Eq. B.11 into Eq. B.9 results in:

    P_{k+1}(+) = | 1 - K_1 Q_s   - K_1 Q_r     - K_1 Q_g   |   | σ²_xs_   σ_sr_    σ_sg_  |
                 | - K_2 Q_s     1 - K_2 Q_r   - K_2 Q_g   |   | σ_sr_    σ²_xr_   σ_rg_  |   (B.12)
                 | - K_3 Q_s     - K_3 Q_r     1 - K_3 Q_g |   | σ_sg_    σ_rg_    σ²_xg_ |

where K_1, K_2 and K_3 are the elements of the Kalman gain matrix K. Eq.
B.12 shows that the variances and covariances of a given weighting factor are updated, in a similar fashion to the weighting factors, as a function of the residual, their previous variance and covariances, and the flow components.

B.3 A Sample Calculation

The following example presents the procedures for estimating the flow on May 28, 1970 and then updating the weighting factors and the covariance matrix once the flow measurement becomes available.

B.3.1 Prediction

For calculating an estimate of the flow on May 28, 1970, the following weighting factors (calculated from the previous day) are used:

    x(May 28)_ = | α_s_ |   | 0.457 |
                 | α_r_ | = | 0.843 |
                 | α_g_ |   | 1.199 |

The corresponding covariance matrix is:

    P(May 28)_ = |  0.1164   -0.0490   -0.0827 |
                 | -0.0490    0.3830   -0.0149 |
                 | -0.0827   -0.0149    0.1007 |

The flow components calculated by the UBC watershed model for May 28, 1970 are as follows:

    H^T(May 28) = | Q_s |   | 39.57 |
                  | Q_r | = |  9.91 |
                  | Q_g |   | 71.32 |

The flow for May 28, 1970 is estimated as follows (see Fig. 5.9c):

    Q_e(May 28) = H(May 28) x(May 28)_
                = (0.457)(39.57) + (0.843)(9.91) + (1.199)(71.32)
                = 111.9                                                        (B.13)

B.3.2 Updating

• The Forecast Residual

On May 28 the flow is measured and found to be equal to 96.8. So, the forecast residual is calculated as follows:

    r(May 28) = Q_e(May 28) - Q_o(May 28) = 111.9 - 96.8 = 15.1

• The Kalman Gain

The denominator in Eq. B.7, D, is calculated using Eq. B.6 as follows:

    D = (39.57)²(0.1164) + (9.91)²(0.3830) + (71.32)²(0.1007)
        + 2[(39.57)(9.91)(-0.0490) + (39.57)(71.32)(-0.0827)
        + (9.91)(71.32)(-0.0149)] + 200
      = 405.81                                                                 (B.14)

The Kalman gain matrix is calculated from Eq.
B.7 as follows:

    K = (1/405.81) | (39.57)(0.1164) + (9.91)(-0.0490) + (71.32)(-0.0827) |
                   | (39.57)(-0.0490) + (9.91)(0.3830) + (71.32)(-0.0149) |
                   | (39.57)(-0.0827) + (9.91)(-0.0149) + (71.32)(0.1007) |

this yields:

    K = | -4.38E-3 |
        |  1.96E-3 |
        |  9.18E-3 |

The above calculations show that the Kalman gain element corresponding to the snowmelt weighting factor is negative as a result of the combined effect of the relatively large negative covariance between the snowmelt and groundwater weighting factors and the dominance of the groundwater flow component. Since the Kalman gain elements for the rain and the groundwater weighting factors are positive, these factors will be updated, as illustrated below, in the opposite direction of the snowmelt weighting factor.

• Updating the Weighting Factors

The weighting factors are updated according to Eq. B.8 as follows:

    | α_s+ |   | 0.457 |   | -4.38E-3 |
    | α_r+ | = | 0.843 | - |  1.96E-3 | [15.1]                                 (B.15)
    | α_g+ |   | 1.199 |   |  9.18E-3 |

The above equation yields:

    | α_s+ |   | 0.523 |
    | α_r+ | = | 0.813 |                                                       (B.16)
    | α_g+ |   | 1.060 |

So, overestimating the flow resulted in the reduction of the rain and the groundwater weighting factors. However, the snowmelt weighting factor was increased, mainly due to the influence of the groundwater weighting factor. This was found to improve the forecast of the hydrograph peak (see Fig. 5.9d.)

• Updating the Covariance Matrix

The covariance matrix is updated according to Eq.
B.12 as follows:

    S(May 28) = | -0.1733   -0.0434   -0.3124 |
                |  0.0776    0.0194    0.1398 |
                |  0.3633    0.0910    0.6547 |

    P(May 28)+ = |  1.1733    0.0434    0.3124 |   |  0.1164   -0.0490   -0.0827 |
                 | -0.0776    0.9806   -0.1398 |   | -0.0490    0.3830   -0.0149 |
                 | -0.3633   -0.0910    0.3453 |   | -0.0827   -0.0149    0.1007 |

Multiplying the above matrices results in:

    P(May 28)+ = |  0.1086   -0.0455   -0.0663 |
                 | -0.0455    0.3815   -0.0222 |
                 | -0.0663   -0.0222    0.0662 |

It can be seen that the variances of the weighting factors after the measurement has become available (diagonal elements of matrix P+) are smaller than those before the measurement was available (diagonal elements of matrix P_). This shows the increase in confidence in estimating the weighting factors as a result of the additional information obtained from the new measurement, which is a basic characteristic of the Bayesian approach³.

³See section 4.2.

B.3.3 Prediction for the Following Day

• The Weighting Factors

The weighting factors are assumed, according to Eq. B.1, equal to the updated values in the previous day. So, the weighting factors for May 29, 1970 are predicted as follows:

    x(May 29)_ = x(May 28)+

So, for May 29:

    x(May 29)_ = | 0.523 |
                 | 0.813 |
                 | 1.060 |

• The Covariance Matrix

The covariance matrix is updated according to Eq. B.2 as follows:

    P(May 29)_ = P(May 28)+ + G

Substituting the values of P(May 28)+ and G in the above equation yields:

    P(May 29)_ = |  0.1086   -0.0455   -0.0663 |   | 0.01  0.0   0.0  |
                 | -0.0455    0.3815   -0.0222 | + | 0.0   0.01  0.0  |
                 | -0.0663   -0.0222    0.0662 |   | 0.0   0.0   0.01 |

This results in:

    P(May 29)_ = |  0.1186   -0.0455   -0.0663 |
                 | -0.0455    0.3915   -0.0222 |
                 | -0.0663   -0.0222    0.0762 |

As can be seen from the above equation, the prediction step results in increasing the variances of the weighting factors, which is interpreted as an increase in the uncertainty of the future estimates of the weighting factors.
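The cycle of Eqs. B.1-B.12 and the sample calculation above can be checked mechanically. The script below is an illustrative sketch (the function names are invented; the measurement variance R = 200 is taken from the constant appearing as the last term of Eq. B.14) that implements the prediction and updating steps and reproduces the May 28, 1970 numbers to within the rounding of the text:

```python
import numpy as np

def predict(x, P, G):
    """Eqs. B.1-B.2: the weighting factors persist; uncertainty grows by G."""
    return x.copy(), P + G

def update(x, P, H, q_obs, R):
    """Eqs. B.4-B.12: correct the weighting factors with a flow measurement."""
    D = float(H @ P @ H.T + R)             # scalar denominator, Eq. B.6
    K = (P @ H.T) / D                      # Kalman gain, Eq. B.7
    r = float(H @ x) - q_obs               # forecast residual (estimate - observed)
    x_new = x - K.flatten() * r            # Eq. B.8
    P_new = (np.eye(len(x)) - K @ H) @ P   # Eqs. B.9-B.12
    return x_new, P_new

# May 28, 1970 values quoted in the sample calculation:
x = np.array([0.457, 0.843, 1.199])        # alpha_s, alpha_r, alpha_g
P = np.array([[ 0.1164, -0.0490, -0.0827],
              [-0.0490,  0.3830, -0.0149],
              [-0.0827, -0.0149,  0.1007]])
H = np.array([[39.57, 9.91, 71.32]])       # flow components Qs, Qr, Qg

q_est = float(H @ x)                       # flow estimate, Eq. B.13
x_upd, P_upd = update(x, P, H, q_obs=96.8, R=200.0)
x_next, P_next = predict(x_upd, P_upd, np.eye(3) * 0.01)   # May 29 prediction
```

The run gives a flow estimate near 111.9, updated factors near (0.523, 0.813, 1.060), a shrunken covariance diagonal after the update, and diagonal growth of 0.01 in the May 29 prediction, matching the matrices printed above; the first gain element is negative, reproducing the counter-intuitive increase of the snowmelt factor.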
As a new flow measurement becomes available, the weighting factors are updated as shown previously and their corresponding variances are then decreased (Eq. B.12), indicating an increase in the confidence of the estimates as a result of acquiring the new information. So, the change in a given weighting factor variance is a balance between the gain during the updating stage and the loss during the prediction stage. This balance is controlled by the measurement and the state noise variances, R and G, respectively. The higher the quality of the measurement, the less is the measurement variance and the more are the decreases in the variances of the weighting factors during the updating stage, and vice versa. During the prediction stage, the larger the state noise variances the larger are the increases in the variances of the weighting factors, and vice versa.

Appendix C

Derivation of the Snowpack Function

The detailed derivation of the snowpack function used in Chap. 6 is presented. This function expresses the snowpack in terms of the sum of the energy fluxes and the snowfalls for the period between snowpack measurements. The change in the observed snowpack between times t - k and t can be expressed as follows:

    d_m(t) - d_m(t-k) = F( Σ_{i=t-k+1}^{t} H_t(i), Σ_{i=t-k+1}^{t} p_s(i) )

Let the snowpack condition tags κ_f, κ_p and κ_a be as defined in section 6.5; then the snowpack energy content, the snowpack water equivalent and the snowmelt can be expressed as follows:

Snowpack Energy Content

    H_cc(t) = κ_f(t) H_e(t) + κ_p(t)(0.0) + κ_a(t)(0.0) = κ_f(t) H_e(t)        (C.1)

Snowpack

    d_m(t) = κ_f(t)[d_m(t-1) + p_s(t)] + κ_p(t)[d_m(t-1) + p_s(t) - H_e(t)] + κ_a(t)(0.0)

This results in:

    d_m(t) = κ_fp(t) p_s(t) - κ_p(t) H_e(t) + κ_fp(t) d_m(t-1)                 (C.2)

where

    κ_fp(t) = κ_f(t) + κ_p(t)

The Snowmelt

    m(t) = κ_f(t)(0.0) + κ_p(t) H_e(t) + κ_a(t)[p_s(t) + d_m(t-1)]

which can be written as follows:

    m(t) = κ_p(t) H_e(t) + κ_a(t)[p_s(t) + d_m(t-1)]                           (C.3)

From Eq.
C.2, for k = 1:

    d_m(t) = κ_fp(t) p_s(t) - κ_p(t)[λ κ_f(t-1) H_cc(t-1) + H_t(t)] + κ_fp(t) d_m(t-1)      (C.4)

Replacing the argument t with t - 1 in Eq. C.4 results in:

    d_m(t-1) = κ_fp(t-1) p_s(t-1) - κ_p(t-1)[λ κ_f(t-2) H_cc(t-2) + H_t(t-1)] + κ_fp(t-1) d_m(t-2)

Substituting the above equation into Eq. C.4 yields:

For k = 2

    d_m(t) = κ_fp(t) p_s(t) + κ_fp(t) κ_fp(t-1) p_s(t-1)
             - κ_p(t) H_t(t) - κ_fp(t) κ_p(t-1) H_t(t-1)
             - λ κ_p(t) κ_f(t-1) H_cc(t-1) - λ κ_fp(t) κ_p(t-1) κ_f(t-2) H_cc(t-2)
             + κ_fp(t) κ_fp(t-1) d_m(t-2)                                      (C.5)

Eq. C.5 can be generalized for k > 2:

    d_m(t) = Σ_{i=t-k+1}^{t} ( Π_{j=i}^{t} κ_fp(j) ) p_s(i) - κ_p(t) H_t(t)
             - Σ_{i=t-k+1}^{t-1} ( Π_{j=i+1}^{t} κ_fp(j) ) κ_p(i) H_t(i)
             - λ κ_p(t) κ_f(t-1) H_cc(t-1)
             - Σ_{i=t-k}^{t-2} λ ( Π_{j=i+2}^{t} κ_fp(j) ) κ_p(i+1) κ_f(i) H_cc(i)
             + ( Π_{i=t-k+1}^{t} κ_fp(i) ) d_m(t-k)                            (C.6)

From Eq. C.1:

    H_cc(t) = κ_f(t) H_e(t) = κ_f(t)[λ κ_f(t-1) H_cc(t-1) + H_t(t)]            (C.7)

Replacing t with t - 1 in Eq. C.7 yields:

    H_cc(t-1) = κ_f(t-1)[λ κ_f(t-2) H_cc(t-2) + H_t(t-1)]                      (C.8)

Substituting Eq. C.8 into Eq. C.7 results in:

    H_cc(t) = λ² κ_f(t) κ_f(t-1)² κ_f(t-2) H_cc(t-2) + κ_f(t) H_t(t) + λ κ_f(t) κ_f(t-1)² H_t(t-1)    (C.9)

For t - k + 1 ≤ i ≤ t:

    H_cc(i) = λ² κ_f(i) κ_f(i-1)² κ_f(i-2) H_cc(i-2) + κ_f(i) H_t(i) + λ κ_f(i) κ_f(i-1)² H_t(i-1)

H_cc(i) can be expressed in terms of the snowpack variables from time i down to time i - 3 as follows:

    H_cc(i) = λ³ κ_f(i) κ_f(i-1)² κ_f(i-2)² κ_f(i-3) H_cc(i-3) + κ_f(i) H_t(i)
              + λ κ_f(i) κ_f(i-1)² H_t(i-1) + λ² κ_f(i) κ_f(i-1)² κ_f(i-2)² H_t(i-2)

In general, H_cc(i) can be expressed in terms of the snowpack variables from time i down to time t - k. Since κ_f = 0.0 or 1.0, κ_f(i)² = κ_f(i), so the general expression can be written as follows:

    H_cc(i) = λ^{i-t+k} ( Π_{j=t-k}^{i} κ_f(j) ) H_cc(t-k)
              + κ_f(i)[ H_t(i) + Σ_{j=t-k+1}^{i-1} λ^{i-j} ( Π_{l=j}^{i-1} κ_f(l) ) H_t(j) ]          (C.10)

Substituting Eq.
C.10 into the fourth term of the right-hand side of Eq. C.6 (for i = t - 1) yields:

For k > 2

    λ κ_p(t) κ_f(t-1) H_cc(t-1) = λ^k κ_p(t) ( Π_{i=t-k}^{t-1} κ_f(i) ) H_cc(t-k)
        + λ κ_p(t) κ_f(t-1) [ H_t(t-1) + Σ_{i=t-k+1}^{t-2} λ^{t-1-i} ( Π_{j=i}^{t-2} κ_f(j) ) H_t(i) ]    (C.11)

For k = 2

    λ κ_p(t) κ_f(t-1) H_cc(t-1) = λ² κ_p(t) κ_f(t-1) κ_f(t-2) H_cc(t-2) + λ κ_p(t) κ_f(t-1) H_t(t-1)      (C.12)

Substituting Eq. C.10 into the fifth term of the right-hand side of Eq. C.6 yields:

For k > 3

    Σ_{i=t-k}^{t-2} λ ( Π_{j=i+2}^{t} κ_fp(j) ) κ_p(i+1) κ_f(i) H_cc(i) =
        [ λ ( Π_{i=t-k+2}^{t} κ_fp(i) ) κ_p(t-k+1) κ_f(t-k)
        + λ² ( Π_{i=t-k+3}^{t} κ_fp(i) ) κ_p(t-k+2) κ_f(t-k+1) κ_f(t-k)
        + Σ_{i=t-k+2}^{t-2} λ^{i-t+k+1} ( Π_{j=i+2}^{t} κ_fp(j) ) κ_p(i+1) ( Π_{j=t-k}^{i} κ_f(j) ) ] H_cc(t-k)
        + λ ( Π_{i=t-k+3}^{t} κ_fp(i) ) κ_p(t-k+2) κ_f(t-k+1) H_t(t-k+1)
        + Σ_{i=t-k+2}^{t-2} λ ( Π_{j=i+2}^{t} κ_fp(j) ) κ_p(i+1) κ_f(i) [ H_t(i)
        + Σ_{j=t-k+1}^{i-1} λ^{i-j} ( Π_{l=j}^{i-1} κ_f(l) ) H_t(j) ]                                      (C.13)

For k = 3

    Σ_{i=t-3}^{t-2} λ ( Π_{j=i+2}^{t} κ_fp(j) ) κ_p(i+1) κ_f(i) H_cc(i) =
        [ λ ( Π_{i=t-1}^{t} κ_fp(i) ) κ_p(t-2) κ_f(t-3)
        + λ² κ_fp(t) κ_p(t-1) κ_f(t-2) κ_f(t-3) ] H_cc(t-3)
        + λ κ_fp(t) κ_p(t-1) κ_f(t-2) H_t(t-2)                                                             (C.14)

For k = 2

    Σ_{i=t-2}^{t-2} λ ( Π_{j=i+2}^{t} κ_fp(j) ) κ_p(i+1) κ_f(i) H_cc(i) = λ κ_fp(t) κ_p(t-1) κ_f(t-2) H_cc(t-2)    (C.15)

Substituting Eqs. C.12 and C.15 into Eq. C.6 yields for k = 2:

    d_m(t) = Σ_{i=t-1}^{t} ( Π_{j=i}^{t} κ_fp(j) ) p_s(i) - κ_p(t) H_t(t)
             - [ κ_fp(t) κ_p(t-1) + λ κ_f(t-1) κ_p(t) ] H_t(t-1)
             - λ κ_f(t-2) [ λ κ_p(t) κ_f(t-1) + κ_fp(t) κ_p(t-1) ] H_cc(t-2)
             + ( Π_{i=t-1}^{t} κ_fp(i) ) d_m(t-2)                              (C.16)

Substituting Eqs. C.11 and C.14 into Eq.
C.6 yields for k = 3:

    d_m(t) = Σ_{i=t-2}^{t} ( Π_{j=i}^{t} κ_fp(j) ) p_s(i) - κ_p(t) H_t(t)
             - [ λ κ_f(t-1) κ_p(t) + κ_fp(t) κ_p(t-1) ] H_t(t-1)
             - [ λ² ( Π_{i=t-2}^{t-1} κ_f(i) ) κ_p(t) + λ κ_fp(t) κ_f(t-2) κ_p(t-1)
             + ( Π_{i=t-1}^{t} κ_fp(i) ) κ_p(t-2) ] H_t(t-2)
             - λ κ_f(t-3) [ λ² ( Π_{i=t-2}^{t-1} κ_f(i) ) κ_p(t) + λ κ_fp(t) κ_f(t-2) κ_p(t-1)
             + ( Π_{i=t-1}^{t} κ_fp(i) ) κ_p(t-2) ] H_cc(t-3)
             + ( Π_{i=t-2}^{t} κ_fp(i) ) d_m(t-3)                              (C.17)

Substituting Eqs. C.11 and C.13 into Eq. C.6 and rearranging yields for k ≥ 4:

    d_m(t) = Σ_{i=t-k+1}^{t} ( Π_{j=i}^{t} κ_fp(j) ) p_s(i) - κ_p(t) H_t(t)
             - [ κ_fp(t) κ_p(t-1) + λ κ_p(t) κ_f(t-1) ] H_t(t-1)
             - [ ( Π_{i=t-1}^{t} κ_fp(i) ) κ_p(t-2) + λ² κ_p(t) ( Π_{i=t-2}^{t-1} κ_f(i) )
             + λ κ_fp(t) κ_p(t-1) κ_f(t-2) ] H_t(t-2)
             - Σ_{i=t-k+1}^{t-3} [ ( Π_{j=i+1}^{t} κ_fp(j) ) κ_p(i) + λ^{t-i} κ_p(t) ( Π_{j=i}^{t-1} κ_f(j) )
             + λ ( Π_{j=i+2}^{t} κ_fp(j) ) κ_p(i+1) κ_f(i)
             + Σ_{m=i+1}^{t-2} λ^{m-i+1} ( Π_{j=m+2}^{t} κ_fp(j) ) κ_p(m+1) ( Π_{j=i}^{m} κ_f(j) ) ] H_t(i)
             - λ κ_f(t-k) [ ( Π_{i=t-k+2}^{t} κ_fp(i) ) κ_p(t-k+1) + λ^{k-1} κ_p(t) ( Π_{i=t-k+1}^{t-1} κ_f(i) )
             + λ ( Π_{i=t-k+3}^{t} κ_fp(i) ) κ_p(t-k+2) κ_f(t-k+1)
             + Σ_{i=t-k+2}^{t-2} λ^{i-t+k} ( Π_{j=i+2}^{t} κ_fp(j) ) κ_p(i+1) ( Π_{j=t-k+1}^{i} κ_f(j) ) ] H_cc(t-k)
             + ( Π_{i=t-k+1}^{t} κ_fp(i) ) d_m(t-k)                            (C.18)

Let θ_2(t), θ_3(t), θ_4(t), θ_5(t), θ_6(t), θ_7(t) and θ_8(t) be defined as in section 6.5. Eq. C.18 can be expressed as follows:

    d_m(t) = Σ_{i=t-k+1}^{t} θ_1(i) p_s(i) - Σ_{i=t-k+1}^{t} θ_2(i) H_t(i)
             - θ_7(t) H_cc(t-k) + θ_8(t) d_m(t-k)                              (C.19)

Eq. C.19 is the same as Eq. 6.14 in section 6.5.
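The unrolling used in this derivation is easy to verify numerically. The sketch below is illustrative only (λ and all input values are arbitrary, and κ_f, κ_p are the 0-1 condition tags): it applies the one-step recursions of Eqs. C.4 and C.7 twice and checks the result against the closed two-step form of Eq. C.5.

```python
# Verify that two applications of the one-step recursions (Eqs. C.4 and C.7)
# reproduce the closed k = 2 form of Eq. C.5. Tags kf, kp are 0-1 condition
# indicators; lam stands for lambda; all values are arbitrary illustrative numbers.

lam = 0.6

def step(kf, kp, ps, ht, kf_prev, dm_prev, hcc_prev):
    """One time step of the snowpack recursion; returns updated (dm, hcc)."""
    kfp = kf + kp
    he = lam * kf_prev * hcc_prev + ht          # energy available for melt
    dm = kfp * ps - kp * he + kfp * dm_prev     # Eq. C.4
    hcc = kf * he                               # Eq. C.7
    return dm, hcc

# State at t-2 and inputs for days t-1 and t (partially melting both days):
kf2, dm2, hcc2 = 1, 120.0, 4.0                  # frozen at t-2
kf1, kp1, ps1, ht1 = 0, 1, 5.0, 8.0             # day t-1
kf0, kp0, ps0, ht0 = 0, 1, 3.0, 10.0            # day t ("0" suffix)

dm1, hcc1 = step(kf1, kp1, ps1, ht1, kf2, dm2, hcc2)
dm_t, _ = step(kf0, kp0, ps0, ht0, kf1, dm1, hcc1)

# Closed k = 2 form (Eq. C.5), which eliminates dm(t-1):
kfp0, kfp1 = kf0 + kp0, kf1 + kp1
dm_closed = (kfp0 * ps0 + kfp0 * kfp1 * ps1
             - kp0 * ht0 - kfp0 * kp1 * ht1
             - lam * kp0 * kf1 * hcc1
             - lam * kfp0 * kp1 * kf2 * hcc2
             + kfp0 * kfp1 * dm2)
```

Both routes give the same snowpack water equivalent (107.6 in this example), confirming that the substitution eliminates d_m(t-1) without changing the result.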
Appendix D

Derivation of the Flow-Snowpack Updating Model

D.1 Development of the Basic Model

In situations where the snowmelt constitutes the major input to the streamflow and the rainfall-generated flow component is insignificant, or can be easily separated, real-time flow measurements can be utilized to update the parameters of the snowmelt function. The approach is based on the following convolution equation:

    Q_s(t) = Σ_{i=1}^{k} γ(t+1-i) m(t+1-i) q_s(i)                              (D.1)

where Q_s is the snowmelt flow component; q_s(1), q_s(2), ..., q_s(k) are the indices of the snowmelt unit hydrograph; m(i) is the snowmelt at day i; and γ is the percentage of the snowmelt that is routed to the snowmelt flow component (the rest is routed to the groundwater.)

Selecting the percentage of cloud cover as the main variable to be updated, the snowmelt for a given day, t, can be estimated¹ as follows:

    m(t) = α(t)C(t) + β(t),   for a partially melting snowpack;
         = d_m,               for a completely melting snowpack; or            (D.2)
         = 0.0,               for a frozen snowpack

¹See Chap. 6.

where α, β and d_m are as defined in section 6.6.1. Assuming that the snowpack is either frozen or partially melting², and substituting Eq. D.2 into Eq. D.1:

    Q_s(t) = Σ_{i=1}^{k} γ(t+1-i)[α(t+1-i)C(t+1-i) + β(t+1-i)] q_s(i)
           = Σ_{i=1}^{k} γ(t+1-i) α(t+1-i) C(t+1-i) q_s(i)
             + Σ_{i=1}^{k} γ(t+1-i) β(t+1-i) q_s(i)                            (D.3)

²This is an acceptable assumption since for a completely melted snowpack the snowmelt is insignificant.

Since the rainfall is assumed negligible, the snowmelt flow component can be estimated as equal to the total observed flow less the baseflow. This "observed" snowmelt flow component is referred to as Q_so to distinguish it from the calculated snowmelt flow component Q_s. It also is assumed that:

    Q_so(t) = Q_s(t) + v(t)                                                    (D.4)

where v(t) is a random error of zero mean and variance σ²_v. Substituting Eq. D.3 into Eq. D.4 and rearranging results in:

    Q_so(t) - Σ_{i=1}^{k} γ(t+1-i) β(t+1-i) q_s(i) =
        Σ_{i=1}^{k} γ(t+1-i) α(t+1-i) C(t+1-i) q_s(i) + v(t)                   (D.5)

Eq. D.5 constitutes the basic equation for updating the snowmelt parameters.
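The convolution in Eq. D.1 is the standard unit-hydrograph operation and can be sketched with numpy (illustrative values only; a constant γ is assumed here, whereas Eq. D.1 allows it to vary in time):

```python
import numpy as np

# Sketch of Eq. D.1: the snowmelt flow component is the snowmelt series
# convolved with the snowmelt unit hydrograph, scaled by the fraction gamma
# routed to the snowmelt flow component. All numbers are illustrative.

gamma = 0.7                                   # fraction of melt routed to Qs
q_uh = np.array([0.1, 0.3, 0.4, 0.2])         # unit-hydrograph indices q_s(i)
melt = np.array([0.0, 5.0, 12.0, 8.0, 3.0])   # daily snowmelt m(t)

# Qs(t) = sum_i gamma * m(t+1-i) * q_s(i): a discrete convolution.
qs = gamma * np.convolve(melt, q_uh)[: len(melt)]
```

Because the ordinates sum to one, the full convolution conserves mass: the total routed volume equals γ times the total melt, which is a useful sanity check on any implementation of Eq. D.1.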
The development of the updating model for the cloud cover factor nc is presented in the following section. The basic equations of the updating models for the other snowmelt-parameters AT, and.r)a are presented in sections D.l.2 and D.l.3, respectively. 2This is an acceptable assumption since for a completely melted snowpack the snowmelt is insignificant. Appendix D. Derivation of the Flow-Snowpack Updating Model 183 D . l . l Updat ing rjc Eq. D.5 can be modified to include the parameter T)C as a correction factor for the past "fc" cloud covers, as follows: k k Vc(t) £ 7 ( * + 1 - i)a(t + 1 - i)C(t + 1 - i)q(i) + v(t) (D.6) i = l Eq. D.6 can be then used to calculate the value of rjc that yield the exact value of the measured snowmelt flow component, as follows: „ m = Qso(t) - E J U l(t + 1 - i)P(t + 1 - i)g(i) ( , Using Eq. D.7 in updating does not take into consideration the effect of errors in the flow measurements, which can be significant in the vicinity of the high flows. To account for these errors, r/c is assumed to be a stochastic variable that is related to its previous value as follows: nc(t) = nc(t - I) + wc(t - 1) (D.8) where wc(t) is a random error of zero mean and variance cr2. Eqs. D.8 and D.6 are the state and measurement equations, respectively, that consti-tute the state-space model. The Kalman filter can be applied on this state-space model to update the parameter nc using real-time flow measurements. D. l .2 Updat ing A T , Using the same procedure applied for the parameter nc, the following state-space model is developed for updating the cloud cover parameter ATS: ATs(t) = AT3(t- 1) + wA(t - 1) State Equation (D.9) Appendix D. Derivation of the Flow-Snowpack Updating Model 184 Qso(t) - JZi{t +1 - + 1 - 'MO = i = l 1 k ^2^f(t + 1 — i)a*(t + 1 — i)q(i) + v(t) Measurement Equation(D.lO) where w& is a random noise of zero mean and variance o\; and a* and /?* are as defined in section 6.6.2. D. 
l .3 Updat ing rja The state-space model for updating the snow albedo correction factor r)a can be expressed as follows: »7a(*) = Va(t ~ 1) + wa(t - 1) State Equation (D-U) k Qso(t) - ^ 7 ( i + 1 - i)P°{t + 1 - i)q(i) = i=l k rja(t)^2A(t + 1 - i)j{t + 1 - i)a°(t + 1 - i)q(i) + v(t) Meas. Equation(D.12) i=l where wa is random noise of variance cr\{t); and a° and j3° are as defined in section 6.6.3. 

Cite

Citation Scheme:

        

Citations by CSL (citeproc-js)

Usage Statistics

Share

Embed

Customize your widget with the following options, then copy and paste the code below into the HTML of your page to embed this item in your website.
                        
                            <div id="ubcOpenCollectionsWidgetDisplay">
                            <script id="ubcOpenCollectionsWidget"
                            src="{[{embed.src}]}"
                            data-item="{[{embed.item}]}"
                            data-collection="{[{embed.collection}]}"
                            data-metadata="{[{embed.showMetadata}]}"
                            data-width="{[{embed.width}]}"
                            async >
                            </script>
                            </div>
                        
                    
IIIF logo Our image viewer uses the IIIF 2.0 standard. To load this item in other compatible viewers, use this url:
https://iiif.library.ubc.ca/presentation/dsp.831.1-0050452/manifest

Comment

Related Items