WEATHER AND FORECASTING, VOLUME 18, APRIL 2003

An Adaptive Nonlinear MOS Scheme for Precipitation Forecasts Using Neural Networks

YUVAL AND WILLIAM W. HSIEH
Department of Earth and Ocean Sciences, University of British Columbia, Vancouver, British Columbia, Canada

(Manuscript received 1 August 2002, in final form 3 December 2002)

© 2003 American Meteorological Society

ABSTRACT

A novel neural network (NN)–based scheme performs nonlinear model output statistics (MOS) for generating precipitation forecasts from numerical weather prediction (NWP) model output. Data records from the past few weeks are sufficient for establishing an initial MOS connection, which then adapts itself to the ongoing changes and modifications in the NWP model. The technical feasibility of the algorithm is demonstrated in three numerical experiments using the NCEP reanalysis data in the Alaskan panhandle and the coastal region of British Columbia. Its performance is compared with that of a conventional NN-based nonadaptive scheme. When the new adaptive method is employed, the degradation in the precipitation forecast skills due to changes in the NWP model is small, and much less than the degradation in the performance of the conventional nonadaptive scheme.

1. Introduction

Model output statistics (MOS) is a process by which a statistical relationship between the output of a numerical weather prediction (NWP) model and observations is established in order to improve forecasts. This process is most often applied to forecast problems where the variable to be forecast is not produced by the NWP model, or for downscaling where the spatial resolution of the NWP model is too coarse.
Rough terrain, lack of observations, and as yet inadequate understanding of various physical processes are additional problems that contribute to reduced predictability of the NWP and call for additional processing by MOS.

(Corresponding author address: William Hsieh, Dept. of Earth and Ocean Sciences, University of British Columbia, 6339 Stores Rd., Vancouver, BC V6T 1Z4, Canada. E-mail: whsieh@eos.ubc.ca)

A major obstacle in implementing an MOS prediction system is the ongoing modification of NWP models. Improvements in the dynamic and data assimilation schemes, changes in the observation system, and refinements of the temporal and spatial resolution of the numerical solutions all contribute to changes in the NWP model characteristics. The weather system itself is also changing on various timescales. No doubt, then, that the connection between the NWP output and weather variables such as precipitation must be changing as well, and that calls for MOS prediction schemes that adapt themselves accordingly. In a recent paper, Wilson and Vallée (2002) described the updateable MOS (Ross 1987) system of the Meteorological Service of Canada. This system is based on updating the dataset from which the linear MOS empirical relationships are developed. A direct estimation and update of the parameters of a linear MOS relationship from sequential data can be carried out by Kalman filtering (Grewal 1993). The authors are not aware of an existing technique to continuously update a nonlinear MOS relationship. An interesting alternative for nonlinear MOS forecasts of precipitation was introduced by Xia and Chen (1999). Their model output dynamics scheme is based on the modification of the model vertical velocity given the last rain observations.
Factors like topography cannot be incorporated in this method, and thus its capability is limited in mountainous areas like British Columbia (BC).

The importance of short-term localized precipitation forecasts for flood control, transportation safety, landslide and avalanche prediction, and for the general public's interest puts them in the focus of many research efforts (Ebert 2001; Mao et al. 2000; Hall et al. 1999; Koizumi 1999; Xia and Chen 1999; Kuligowski and Barros 1998a,b; Krzysztofowicz 1998). Prediction of precipitation is notoriously difficult and is a prime example where NWP models fail very often, calling for the establishment of MOS schemes. The relationship between the NWP variables and the true precipitation cannot by any means be assumed to be linear. Thus, several studies (Hall et al. 1999; Koizumi 1999; Kuligowski and Barros 1998a,b) employed neural networks (NNs) to form the statistical connection. In all of these cases the NWP model, and the MOS scheme, were frozen in time.

This paper proposes an NN-based MOS process that adapts itself continuously. At each time point, an NN is trained to form the connection between the 6-h accumulated precipitation at a set of measuring stations and the corresponding NWP forecasts valid for that time. This NN model is a modification of an existing reference model trained using all the data, at all the stations, during a short period of time sometime before the present. The updated model is constructed to best fit the new observed data (using predictors of a possibly modified NWP model) while keeping the desirable characteristics provided by the reference. The training process is completely automatic, and the decisions about the level of modification are made according to a well-defined mathematical criterion (Golub et al. 1979).

Providing precipitation forecasts in the coastal region of the North American Pacific Northwest is a challenging problem because of the acute topographical variations and the large Pacific data void, which does not enable proper initialization of NWP models. The current precipitation forecasts for the region are not satisfactory, and we hope this work will serve as a basis for the future establishment of a MOS precipitation forecasting system at the observation stations participating in the Emergency Weather Network of BC (currently under construction), using output of the high-resolution NWP models run by the Atmospheric Sciences Programme at the University of British Columbia. In this paper, data from the National Centers for Environmental Prediction–National Center for Atmospheric Research (NCEP–NCAR) reanalysis set (Kalnay et al. 1996) are used to demonstrate the technical feasibility of establishing the proposed scheme. The expected differences and difficulties in its application in an operational mode are discussed.

2. The adaptive NN

a. The modeling process

A system of N precipitation observation stations is considered. A set of K NWP variables are believed to be associated with the precipitation at each of these stations and are used as predictors. The accumulated observed precipitation between t_{i-1} and t_i at the N observation stations is provided by y_i^obs. The corresponding K NWP predictor variables are given in the K × N matrix X_i, and each of its N columns contains the NWP values associated with one observation station at time t_i. The NWP predictors can be variables from grid points in the vicinity of that station, valid at various time points (e.g., t_i, t_{i-1}, t_{i-2}, t_{i-3}, . . .). The choice of predictors is problem dependent and requires some exploration. Additional quantities like local topographical information can also be included. Note that the prediction lead time of the process was not mentioned.
It equals the time elapsed between production of the NWP variables at t_{i-Δ} and the prediction time t_i.

The number of observation stations N is not constrained to be a constant, and its value can change from one time point to another if the number of stations that actually report precipitation varies. This is important for operational systems where, more likely than not, a few of the stations fail to report at any given time point.

The NN relationship connecting the precipitation values and the NWP model output is described by

    y_i^cal = F[X_i, w_i],    (1)

where y_i^cal is a vector of current precipitation values calculated by the NN model whose parameters are stored in the vector w_i, using the information provided by X_i. A detailed description of F will be given in the next subsection.

The values w_i are the ones that minimize the cost function

    φ = |y_i^obs − y_i^cal|² + β |w_i − w_ref|²,    (2)

where w_ref contains the reference NN model parameters and β is a parameter determining the trade-off between the data-fit constraint in the first term of φ and the requirement that the model be close to the reference one, expressed in the second term. The value of β is chosen automatically by simultaneously minimizing φ and the generalized cross-validation (GCV) function (Haber and Oldenburg 2000; Yuval 2000). Minimizing the GCV function ensures that the NN model is optimally tuned for the prediction of data points not used in the model's development, and for the avoidance of overfitting (Haber and Oldenburg 2000; Golub et al. 1979). A natural candidate for the reference model w_ref is the previous day's model, w_{i-1}. However, we found it more beneficial to use as a reference a model based on the data accumulated over a slightly longer period of time.
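As a minimal illustration of the penalized fit in Eq. (2) — not the authors' code: the function names, the gradient-descent optimizer, and the linear stand-in for F are assumptions, and β is fixed here rather than GCV-selected as in the paper — the cost pulls the updated weights toward the data while penalizing distance from the reference model:

```python
import numpy as np

def penalized_cost(w, w_ref, X, y_obs, forward, beta):
    """Eq. (2): squared data misfit plus beta times the squared
    distance of the weights from the reference model w_ref."""
    y_cal = forward(X, w)
    return np.sum((y_obs - y_cal) ** 2) + beta * np.sum((w - w_ref) ** 2)

def update_weights(w_ref, X, y_obs, forward, beta, lr=0.01, steps=500):
    """Minimize Eq. (2) by plain gradient descent, starting from the
    reference model, using a forward-difference numerical gradient.
    (The paper chooses beta by minimizing the GCV function; here it
    is simply fixed.)"""
    w, eps = w_ref.copy(), 1e-6
    for _ in range(steps):
        f0 = penalized_cost(w, w_ref, X, y_obs, forward, beta)
        grad = np.zeros_like(w)
        for j in range(w.size):
            w_try = w.copy()
            w_try[j] += eps
            grad[j] = (penalized_cost(w_try, w_ref, X, y_obs, forward, beta) - f0) / eps
        w -= lr * grad
    return w

# Toy check with a linear stand-in for the NN forward operator F[X, w]
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 5))          # K=3 predictors, N=5 stations
w_true = np.array([1.0, -2.0, 0.5])  # the "new" relationship to be learned
forward = lambda X, w: w @ X         # one calculated value per station
y_obs = forward(X, w_true)
w_ref = w_true + 0.3                 # slightly stale reference model
w_new = update_weights(w_ref, X, y_obs, forward, beta=1e-3)
```

With a large β the minimizer stays near w_ref; with a small β, as chosen when the NWP–precipitation relationship has changed, the fit to the new data dominates.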
Using the latter option, it is advisable, although not imperative, to periodically update the reference model as more data are accumulated.

The spatial distribution of the precipitation at the next time point, t_{i+1}, is predicted by

    y_{i+1}^pre = F[X_{i+1}, w_i].    (3)

When the observations y_{i+1}^obs become available, they can be compared to y_{i+1}^pre in order to evaluate the accuracy with which the NN model predicted the observations. These new observations, and the corresponding NWP values X_{i+1}, are then used to produce the updated NN parameters for the prediction at the next time point. The process of updating the model elements and predicting future precipitation values is repeated continuously. Small, or no, changes to the model parameters are needed if no significant changes have occurred in the system since the last update of the reference model. In that case, a reasonable fit between y^obs and y^cal can be achieved by a model close to the reference, and the GCV-controlled training is likely to choose a large value for β in Eq. (2). However, at times of seasonal changes, or following modifications in the NWP, the relationship between the NWP output X and the observed precipitation y^obs does not remain the same. In this case, the reference model parameters cannot provide an adequate fit between y^obs and y^cal. The chosen value of β will be appropriately small, enabling a large deviation of the model from the reference to adapt it to the new relationship.

b. The neural network model

The relationship between the NWP output variables and the spatial precipitation values is modeled by

    F[X, w] = F[X, W₁, B₁, w₂, b₂] = w₂ [Φ(W₁X + B₁)] + b₂,    (4)

where Φ is the hyperbolic tangent function, W₁ is an L × K matrix, B₁ is an L × N matrix with identical columns, w₂ is a 1 × L row vector, and b₂ is a 1 × N row vector with identical elements. In the NN literature, F is usually called a two-layer feedforward NN.
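Equation (4) can be evaluated directly. A minimal numpy sketch (the function and variable names are assumptions; the bias vectors stand in for the identical-column matrices B₁ and b₂ via broadcasting):

```python
import numpy as np

def nn_forward(X, W1, b1, w2, b2):
    """Two-layer feedforward NN of Eq. (4): y = w2 tanh(W1 X + B1) + b2.

    X  : (K, N) NWP predictors for N stations
    W1 : (L, K) hidden-layer weights (L hidden neurons)
    b1 : (L,)   hidden-layer bias; broadcasting reproduces the L x N
                matrix B1 with identical columns
    w2 : (L,)   output-layer weights
    b2 : float  output-layer bias (the identical elements of the 1 x N row)

    Returns a length-N vector of calculated precipitation values."""
    hidden = np.tanh(W1 @ X + b1[:, None])  # (L, N) hidden-layer output
    return w2 @ hidden + b2                 # (N,)

# Toy dimensions matching the paper: K=16 predictors, N=31 stations
rng = np.random.default_rng(1)
K, N, L = 16, 31, 4
X = rng.normal(size=(K, N))
W1, b1 = rng.normal(size=(L, K)), rng.normal(size=L)
w2, b2 = rng.normal(size=L), 0.0
y_cal = nn_forward(X, W1, b1, w2, b2)  # one calculated value per station
```

Because the same weights are applied to every column of X, one model produces the precipitation at all N stations at once, as the paper's modeling assumption requires.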
The manipulations of X by W₁ and B₁ are referred to as the NN's first, or hidden, layer, and Φ is the hidden layer's transfer function. The value of L is called the number of hidden neurons; the larger it is, the more complex the NN model. It is convenient to store all the elements of W₁, B₁, w₂, and b₂ in one vector of NN model parameters, w.

It has been shown by Cybenko (1989), Hornik et al. (1989), and Funahashi (1989) that a two-layer feedforward NN can approximate arbitrarily well any continuous nonlinear function given a set of inputs X and a sufficient number of hidden neurons L. The NN is thus assured to be able to sufficiently simulate the desired relationship at any given time, no matter how complex and nonlinear this relationship might be. The problem is to find the model parameters that enable the model to capture the actual relationship between the NWP output and the observed values while avoiding fitting to noise (from both measurements and calculation artifacts).

The assumption of our methodology is that, at a given time point, the physical processes governing the relationship between the predictor variables and precipitation are the same at all the stations. The same NN model [w_i of Eq. (1)] simulates this relationship, and the predicted spatial differences in the precipitation are due to the differences in the corresponding NWP predictors at the various locations. This assumption certainly does not hold in the very general case. Many different processes like large synoptic troughs, small-scale turbulence, and orographic lifting can lead to precipitation, and the relationship between NWP predictors and the precipitation is not necessarily the same in all these cases.
For example, it was not surprising to find that the NN model that works well for predicting large-scale synoptic precipitation along the BC coast is not suitable for predictions of precipitation in the interior BC Peace River region, where most of the precipitation comes from local summer thunderstorms. Thus the MOS system should include only stations where the precipitation is predominantly generated by similar physical processes. Different systems can be constructed for different regions. In this paper only stations along the BC coast and the Alaska panhandle were included.

3. Data

The data in this study are from the NCEP–NCAR reanalysis project (Kalnay et al. 1996), which uses a state-of-the-art global assimilation model to create a comprehensive dataset that is as complete as possible. The output variables are calculated on two different grids, a Gaussian T62 grid with 192 × 94 points (about 1.9° × 1.9°), and a 2.5° × 2.5° latitude–longitude grid. The 6-hourly data (four times a day) from 1 January 2000 to 31 December 2001 were used in this paper.

This study considers the data of the coastal Pacific Northwest, from northern Washington State (47.5°N) to the BC–Yukon Territories border (60.0°N), including the Alaska panhandle (Fig. 1). The precipitation in this area is given at 31 Gaussian grid points. Sixteen atmospheric variables, believed to be related to the precipitation rate, were chosen as predictors. A deliberate choice was made to use only predictor variables given on the latitude–longitude grid. Their values had to be interpolated to the precipitation grid locations, imitating the real-life situation where the NWP grid points usually do not coincide with the locations of the precipitation measurement stations.
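This station-to-grid matching can be sketched as a nearest-gridpoint lookup. A hedged illustration (function and variable names, and the toy grid and station coordinates, are assumptions, not the authors' code):

```python
import numpy as np

def nearest_gridpoint_predictors(station_lat, station_lon, grid_lat, grid_lon, values):
    """Assign to each station the predictor values at the closest
    latitude-longitude grid point (crude nearest-neighbor 'interpolation').

    grid_lat, grid_lon : 1D arrays defining a regular lat-lon grid
    values             : (K, n_lat, n_lon) predictor fields

    Returns a (K, N) predictor matrix X for the N stations."""
    X = np.empty((values.shape[0], len(station_lat)))
    for n, (lat, lon) in enumerate(zip(station_lat, station_lon)):
        i = np.argmin(np.abs(grid_lat - lat))  # closest grid latitude
        j = np.argmin(np.abs(grid_lon - lon))  # closest grid longitude
        X[:, n] = values[:, i, j]
    return X

# Toy 2.5-degree grid spanning the study region (47.5-60.0N)
grid_lat = np.arange(47.5, 60.1, 2.5)
grid_lon = np.arange(-140.0, -119.9, 2.5)
values = np.arange(2 * len(grid_lat) * len(grid_lon), dtype=float).reshape(
    2, len(grid_lat), len(grid_lon))  # K=2 dummy predictor fields
X = nearest_gridpoint_predictors([49.3, 58.4], [-123.1, -134.4],
                                 grid_lat, grid_lon, values)
```

This deliberately reproduces the crude, error-prone association the paper describes next, rather than a proper interpolation.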
The interpolation was crude: each precipitation grid point was associated with the predictor values at the closest latitude–longitude grid point. This simulates the expected inaccurate interpolation of NWP output values over the rough terrain of the Coastal Mountains region.

The predictor variables are 1000-, 850-, and 500-mb air temperature (K); 1000-, 850-, and 500-mb geopotential height (m); 1000-, 850-, 700-, and 500-mb vertical velocity (m s⁻¹); 1000-, 850-, and 500-mb relative humidity (%); and 1000-, 850-, and 500-mb specific humidity (kg kg⁻¹). Values of these predictors were available four times a day at 0000, 0600, 1200, and 1800 UTC. The 6-hourly accumulated precipitation rate [mm (6 h)⁻¹] in between these hours (i.e., 0000–0600, 0600–1200 UTC, etc.) at the 31 stations is the predictand. It must be noted that the NCEP precipitation is purely model based, so the NN models only simulate the dynamic model that generates it. Neither are the temporal errors in the NWP predictors simulated in this study. This should not affect the conclusions about the effectiveness of the adaptive process, as its performance is compared to that of a nonadaptive scheme benefiting from the same advantage. For simplicity, only predictor variables valid at the prediction time of the MOS process were used. For an operating MOS system, NWP variables valid for previous time points should also be considered and can help correct consistent temporal errors in the NWP predictors.

[FIG. 1. Map of the study area. Locations of the NCEP–NCAR precipitation values are marked with asterisks.]

4. Results

The suggested methodology is based on updating an NN-based MOS connection established using short data records of many stations simultaneously. Its results are demonstrated in this section and compared to those achieved by a conventional (e.g., Kuligowski and Barros 1998b) NN-based MOS connection that is developed individually for each station using long data records, but with no updates to the model.
The comparison method is referred to henceforth as the benchmark.

For the benchmark, data of 1 yr (1460 time points) were used to develop an individual model at each station connecting the 16 predictor variables, simulating NWP output, to the precipitation predictand. The models were developed using the MATLAB Levenberg–Marquardt routine (Demuth and Beale 2000). The performance of these models was tested in the following year, during which the predictors were modified in various ways to simulate the modifications that occur in NWP output.

A reference model is needed in order to apply the adaptive MOS approach. This reference model was developed using only the data of the last month of the first year. The same modified predictors were used to update the adaptive model and predict the precipitation during the second year. Testing the predictions in this case was carried out by using the model, updated by the new data of a 6-h period, to predict the precipitation of the corresponding 6-h period on the next day. The frequent updating ensures prompt adaptation to possible changes in the NWP, but is not necessary if the changes are known to occur on a less regular basis.

Three numerical experiments were carried out. In the first one, no modifications were applied to the data in the testing year, to compare the performance of the MOS schemes in the case where a "frozen" NWP model is used. In practice, NWP models are never frozen, and two other experiments compared the performance of the two schemes in scenarios where the testing period predictors were modified. In one experiment the modifications were carried out according to V = V + ν sgn(V)|V|, where V is a predictor value; sgn(V) = −1, 0, 1 for V < 0, V = 0, V > 0, respectively; and |·| denotes taking an absolute value. This results in a modification that is linearly proportional to the magnitudes of the predictor values, with the parameter ν controlling the amount of modification.
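Both perturbation schemes used in the experiments — the linear scaling just described, and the sign-preserving power-law modification applied in the third experiment — are one-liners. A sketch (function names are assumptions):

```python
import numpy as np

def linear_mod(V, nu):
    """V' = V + nu * sgn(V) * |V|, i.e. every value scaled by (1 + nu)."""
    return V + nu * np.sign(V) * np.abs(V)

def power_mod(V, nu):
    """V' = sgn(V) * |V|**nu, a sign-preserving power-law (nonlinear)
    modification. Predictors are assumed normalized first, as in the
    paper, so that units do not dictate the amount of modification."""
    return np.sign(V) * np.abs(V) ** nu

V = np.array([-2.0, 0.0, 0.5, 4.0])  # toy normalized predictor values
V_lin = linear_mod(V, 0.05)          # 5% inflation of every magnitude
V_pow = power_mod(V, 7 / 6)          # one of the exponents used in the paper
```

Since sgn(V)|V| = V, the linear scheme is exactly multiplication by (1 + ν), which makes its "linearly proportional" character explicit.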
The testing year was divided into 10 equal periods with ν = 0.01, 0.05, 0.10, 0.05, 0.01, −0.01, −0.05, −0.1, −0.05, and −0.01. Thus the level of the linear modification changed every 146 time points (about 5 weeks). The second type of modification was carried out according to V = sgn(V)|V|^ν, with ν = 7/6, 6/5, 5/4, 6/5, 7/6, 6/7, 5/6, 4/5, 5/6, and 6/7 in the 10 segments of the testing year. The modified predictor values in this case are proportional to powers of the original values, resulting in a nonlinear modification. The predictor series were all normalized prior to the modification so that only the relative magnitudes, not the units of the variables, dictate the amount of modification. Additional experimentation with combined linear and nonlinear predictor modifications, different divisions of the testing period, and different ranges for the value of ν led to conclusions similar to those extracted from the results of the three experiments presented below.

Figure 2 shows scatterplots of precipitation predictions against the observations at all the stations during the period of the testing year using the unmodified data. The predictions in Fig. 2a were produced by models updated at every time point according to the method proposed in this paper. Figure 2b shows the corresponding plot for the benchmark, where models were trained separately for each station using the full data record of the training year. The reduced scatter in Fig. 2b compared to that in Fig. 2a, and the better corresponding correlation and root-mean-square error (rmse) scores given in Table 2, show a clear advantage for the benchmark method. This is not surprising bearing in mind that, with no modifications, the data in both the training and testing periods in this case are produced by the same frozen NCEP data assimilation model.

[FIG. 2. Scatterplots of observed and predicted precipitation in the case of the nonmodified NWP predictors. The solid line is the perfect one-to-one fit. The dashed line is the least squares fit to the data. The least squares parameters are given in Table 1. The predictions are by (a) the adaptive MOS scheme and (b) the benchmark conventional nonadaptive MOS scheme.]

TABLE 1. The slope and intercept values of the least squares fit lines in Figs. 2, 3, and 5 (tests 1, 2, and 3, respectively).

                   Test 1              Test 2              Test 3
                Slope  Intercept    Slope  Intercept    Slope  Intercept
Adaptive         0.74    0.57        0.72    0.82        0.78    0.60
Nonadaptive      0.73    0.31        0.69    1.76        0.83    0.51

The results in Fig. 2 are what we expect to see in the ideal case of an MOS scheme developed for an NWP model that never changes. The benchmark, using training on much longer data records and a model tailored for each station, results in superior predictions and would be the choice for practical use in the case of an MOS system using a frozen NWP model. Unfortunately, NWP models invariably change, so a more realistic comparison is that of the performance of the two methods using testing predictor data that were somewhat modified.

Figure 3 shows the scatterplots resulting from the second experiment, where the predictors in the testing period were linearly modified. Both plots show more scatter than the plots in Fig. 2, but while Fig. 3a, showing results of the adaptive method, is quite similar to the corresponding plot in Fig. 2a, the scatterplot from the benchmark (Fig. 3b) shows much greater scatter than does Fig. 2b. The correlation and rmse skills achieved by the benchmark in this case (Table 2) are significantly worse than those for the nonmodified data and are inferior to those achieved by the adaptive method.

Figure 4 compares the probability of detection (POD), the false-alarm rate (FAR), and the threat score (TS) (Wilks 1995) of the results achieved by the adaptive method and the benchmark. These three measures are more suitable than the correlation and rmse skills for ranking predictions of events of interest, which are less likely to occur than not.
Precipitation forecasts, especially of heavy precipitation events, are thus better ranked using these measures. The POD is the ratio of the number of events (i.e., precipitation above a certain threshold) that were both predicted and observed to the number of observations of these events. The FAR is the ratio of predicted events that did not materialize to the total number of predictions of such events. The TS, the most common of the accuracy measures in precipitation prediction studies, is the ratio of materialized forecasts to the total number of occasions on which an event was forecast and/or observed. The range of the three measures is [0, 1], with the POD and TS having positive orientation (a score of unity is best) and the FAR having negative orientation. Readers are advised to consult Wilks (1995) for a more complete discussion of these measures and the contingency table from which they are derived.

[FIG. 3. Same as Fig. 2 but for the case of linearly modified NWP predictors.]

TABLE 2. The correlation (corr) and rmse skills of the results obtained by the adaptive and nonadaptive schemes using the nonmodified testing data (test 1), the testing data with linearly modified predictors (test 2), and the testing data with nonlinearly modified predictors (test 3).

                   Test 1            Test 2            Test 3
                 Corr   Rmse       Corr   Rmse       Corr   Rmse
Adaptive         0.74   0.73       0.59   1.05       0.69   0.86
Nonadaptive      0.83   0.56       0.25   2.76       0.52   1.38

[FIG. 4. Plots of the POD, FAR, and TS scores. The × symbols denote results obtained by the adaptive MOS scheme; circles denote the benchmark. Solid lines are the results for the case of the nonmodified NWP predictors, and dashed lines are results for the linearly modified NWP predictors.]

The POD values of the adaptive method in Fig. 4 are in general slightly higher than those of the benchmark. With linearly modified predictors, a little degradation is noticed in most of the scores relative to those achieved using nonmodified predictors. The better POD values of the adaptive method, especially at higher precipitation thresholds, signify a better capability in forecasting large precipitation events than the benchmark. This capability is, unfortunately but not surprisingly, accompanied by a higher rate of false alarms when using the nonmodified data. However, the false-alarm rate of the benchmark degrades much more, and far exceeds that of the adaptive method, when using the linearly modified data. The relatively small degradation in the FAR scores obtained in this case by the adaptive method suggests its possible advantage in reducing false alarms when the NWP outputs are occasionally biased up or down. A similar advantage is demonstrated by inspecting the TS results, which take into account both the ability to predict events and the ability to avoid falsely calling for their occurrence. The TS results of the benchmark are slightly higher than those of the adaptive method for the nonmodified data, but the degradation resulting from the use of modified predictors pushes the benchmark TS curve well below that of the adaptive method.

Figures 5 and 6 compare the results achieved in the third experiment, where the predictors simulating NWP output were nonlinearly modified. Figure 5a is a scatterplot obtained using the adaptive method. Only a little additional scatter is detected in this plot compared to that in Fig. 2a, where nonmodified data were used, and the correlation and rmse skills are only slightly degraded. The scatter of the predictions by the benchmark in Fig. 5b is clearly worse in this case compared to the scatter in Fig. 2b, and the corresponding correlation and rmse scores are obviously not as good. The POD, FAR, and TS results shown in Fig. 6 convey similar information to that provided by Fig. 4 in the case of the linearly modified predictors.
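The three categorical scores compared in Figs. 4 and 6 follow directly from the 2 × 2 contingency table. A minimal sketch (not the authors' code; the function name and the toy numbers are illustrative):

```python
import numpy as np

def categorical_scores(forecast, observed, threshold):
    """POD, FAR, and TS from the 2x2 contingency table of an 'event'
    defined as precipitation >= threshold.

    hits         : event forecast and observed
    false_alarms : event forecast but not observed
    misses       : event observed but not forecast"""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    pod = hits / (hits + misses)                # probability of detection
    far = false_alarms / (hits + false_alarms)  # false-alarm rate
    ts = hits / (hits + false_alarms + misses)  # threat score
    return pod, far, ts

# Toy example: 6 forecast/observation pairs, event threshold 1 mm (6 h)^-1
pod, far, ts = categorical_scores([0.0, 2.1, 1.5, 0.2, 3.0, 0.9],
                                  [0.1, 1.8, 0.0, 1.2, 2.5, 0.0],
                                  threshold=1.0)
```

For the toy pairs there are 2 hits, 1 false alarm, and 1 miss, so POD = 2/3, FAR = 1/3, and TS = 1/2 — POD and TS reward detections while FAR penalizes spurious forecasts, exactly the trade-off discussed above.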
In other words, more accurate predictions were achieved by the benchmark when frozen NWP output was used, but the deleterious effects of alterations in the predictors affected the adaptive MOS method significantly less.

[FIG. 5. Same as Fig. 2 but for the case of nonlinearly modified NWP predictors.]

[FIG. 6. Same as Fig. 4 but dashed lines are for the case of the nonlinearly modified NWP predictors.]

5. Discussion and conclusions

This paper proposes a method to adapt an NN-based MOS prediction of precipitation as new NWP output and observations arrive. Its main advantages are the relatively short length of data record required to establish the MOS connection, and the ability to adapt this connection to changes in the NWP model and/or observations. The method assumes that the physical processes that lead to the precipitation at the different observation locations are similar, and that the differences in the MOS connection are only due to differences in the predictors. The performance of the proposed method was demonstrated in three numerical experiments and was compared to that of a benchmark nonadaptive NN-based MOS scheme. A nonadaptive scheme benefits from the information provided by longer data records (in case they are available) to establish the MOS connection, and can be tailored separately for each location. The advantage of longer data records was evident in the superior prediction achieved by the benchmark while using data produced by a frozen dynamic model. Modifying the MOS predictors, to simulate changes in the NWP model, resulted in severe degradation of the MOS prediction by the benchmark.
The performance of the adaptive scheme was not as severely affected by the modification in the NWP output, suggesting it might be of use for improving operational NWP predictions when frequent modifications in the NWP operation prevent the establishment of a conventional MOS scheme.

The study described in this paper used NCEP reanalysis records of atmospheric variables as the NWP predictors, and the purely model-based NCEP precipitation record as the predictand. Being purely model based, the NCEP precipitation does not necessarily agree with any observed precipitation values. It is rather tuned to agree with related variables like vertical velocity, specific humidity, and latent heat flux, which are smoothed over the NCEP analysis grid. Thus the spatial and temporal distributions of the precipitation values are not expected to closely resemble the real ones. Our comparison of the NCEP precipitation records at selected locations where rain gauge observations were available revealed substantial differences on a value-by-value basis. However, with the exception of the lack of extreme values [above 25 mm (6 h)⁻¹] in the range, the irregularity of the series, and the shape of the probability density function, the NCEP precipitation records in the study region are quite similar to the observed ones. The typical difficulties in predicting observed precipitation also appear while predicting the values produced by the NCEP model.

We thus believe that the use of NCEP precipitation to demonstrate the technical feasibility of establishing an adaptive NN-based MOS scheme for predicting precipitation is justified. Testing on data used for operational forecasts is needed in order to evaluate this method's worth for practical predictions.
A main concern is the prediction capability of the NWP output, on which the overall performance of the MOS scheme depends. Using NCEP records as simulators of NWP output eliminated that concern in this study, but it remains to be tested how well the recently developed high-resolution models (1.0-km grid and below) will be capable of capturing the small-scale, but important, phenomena that are responsible for much of the variability in precipitation over rugged terrain like that of the Pacific Northwest.

To apply the nonlinear adaptive MOS scheme in an operational mode will also require extensive additional exploration to tailor the algorithm to the specific region under consideration. The most important issues to consider are the following. (a) The division of the locations in the forecast area into groups with similar precipitation patterns: the more homogeneous the groups the better, but from our experience, a minimum of about 30 stations is needed for each group. (b) An exploration for the best predictors, especially additional ones that convey information not included in the NWP output, like typical local wind patterns, local topography (below the NWP resolution), etc. We believe that by properly addressing these issues, and using reasonably accurate NWP outputs, the algorithm presented in this paper is a viable tool for improving precipitation forecasts.

Acknowledgments. The need for an adaptive nonlinear MOS scheme was first brought to our attention through discussions with Professor Roland Stull. We thank three anonymous reviewers for their helpful comments. This work was supported by research and strategic grants to William Hsieh from the Natural Sciences and Engineering Research Council of Canada.

REFERENCES

Cybenko, G., 1989: Approximation by superpositions of a sigmoidal function. Math. Control, Signal, Syst., 2, 303–314.
Demuth, H., and M. Beale, 2000: Neural Network Toolbox. Version 4, The Math Works, 846 pp.
Ebert, E.
E., 2001: Ability of a poor man's ensemble to predict the probability and distribution of precipitation. Mon. Wea. Rev., 129, 2461–2480.
Funahashi, K., 1989: On the approximate realization of continuous mappings by neural networks. Neural Networks, 2, 183–192.
Golub, G. H., M. Heath, and G. Wahba, 1979: Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics, 21, 215–223.
Grewal, M. S., 1993: Kalman Filtering: Theory and Practice. Prentice Hall Information and System Science Series, Prentice Hall, 381 pp.
Haber, E., and D. W. Oldenburg, 2000: A GCV based method for nonlinear ill-posed problems. Comput. Geosci., 4, 41–63.
Hall, T., H. E. Brooks, and C. A. Doswell III, 1999: Precipitation forecasting using a neural network. Wea. Forecasting, 14, 338–345.
Hornik, K., M. Stinchcombe, and H. White, 1989: Multilayer feedforward networks are universal approximators. Neural Networks, 2, 359–366.
Kalnay, E., and Coauthors, 1996: The NCEP/NCAR 40-Year Reanalysis Project. Bull. Amer. Meteor. Soc., 77, 437–471.
Koizumi, K., 1999: An objective method to modify numerical model forecasts with newly given weather data using an artificial neural network. Wea. Forecasting, 14, 109–118.
Krzysztofowicz, R., 1998: Probabilistic hydrometeorological forecasts: Toward a new era in operational forecasting. Bull. Amer. Meteor. Soc., 79, 243–251.
Kuligowski, R. J., and A. P. Barros, 1998a: Experiments in short-term precipitation forecasting using artificial neural networks. Mon. Wea. Rev., 126, 470–482.
——, and ——, 1998b: Localized precipitation forecasts from a numerical weather prediction model using artificial neural networks. Wea. Forecasting, 13, 1194–1204.
Mao, Q., S. F. Mueller, and H. H. Juang, 2000: Quantitative precipitation forecasting for the Tennessee and Cumberland River watershed using the NCEP regional spectral model. Wea. Forecasting, 15, 29–45.
Ross, G.
H., 1987: An updateable model output statistics scheme. Programme on Short- and Medium Range Weather Prediction, PSMP Rep. Series, No. 25, World Meteorological Organization, 25–28.
Wilks, D. S., 1995: Statistical Methods in the Atmospheric Sciences. Academic Press, 467 pp.
Wilson, L. J., and M. Vallée, 2002: The Canadian Updateable Model Output Statistics (UMOS) system: Design and development test. Wea. Forecasting, 17, 206–222.
Xia, J., and A. Chen, 1999: An objective approach for making rainfall forecasts based on numerical model output and the latest observation. Wea. Forecasting, 14, 49–52.
Yuval, 2000: Neural network training for prediction of climatological time series, regularized by minimization of the generalized cross-validation function. Mon. Wea. Rev., 128, 1456–1473.

