ASSESSMENT OF THE QUALITY FOR THE NADP/NTN DATA BASED ON THEIR PREDICTABILITY

By

SAULATI KOKU KOMUNGOMA
B.Sc., University of Dar-es-Salaam, 1980

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF STATISTICS

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April 1992
© Saulati Koku Komungoma, 1992

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

(Signature)
Department of Statistics
The University of British Columbia
Vancouver, Canada

Abstract

Three methods are used to predict the ion concentrations of a particular station using the concentrations of the other stations, for the data produced by the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) during the period 1983-86. We relate the degree of predictability to the quality of the data. Stations are ranked in the order in which they would be dropped if the network were, hypothetically, to be reduced in size. The agreement of the ranks given by different methods is assessed. Our study uses monthly volume-weighted mean concentrations for each of the three selected ions, investigated one at a time. Since there is a large number of stations (86 for hydrogen, 81 for each of the remaining ions) and only 48 months, the analyses were carried out on clusters of stations.
It was not possible to perform an ordinary regression analysis with so much missing data, so the analysis is done with missing values replaced by their estimates.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
1 Background
  1.1 Introduction
  1.2 Effects of Surface Water Chemistry on Biota
  1.3 Network and Data Description
  1.4 Review of the Entropy Approach
2 Methods for Predicting One Station's Records From the Others
  2.1 Introduction
  2.2 Estimation of Missing Observations
  2.3 Ordinary Regression Using Cross-Validatory Assessment
  2.4 Regression Using a Bayesian Approach
  2.5 Stone's Procedure
3 Assessment of the Network and the Agreement of the Methods
4 Discussion and Conclusion
References
Appendix

List of Tables

Table A 1.1(a) Average Squared Prediction Errors and Ranks for Cluster 1 of Hydrogen
Table A 1.1(b) Average Squared Prediction Errors and Ranks for Cluster 2 of Hydrogen
Table A 1.1(c) Average Squared Prediction Errors and Ranks for Cluster 3 of Hydrogen
Table A 1.1(d) Average Squared Prediction Errors and Ranks for Cluster 4 of Hydrogen
Table A 1.1(e) Average Squared Prediction Errors and Ranks for Cluster 5 of Hydrogen
Table A 1.1(f) Average Squared Prediction Errors and Ranks for Cluster 6 of Hydrogen
Table A 1.2(a) Average Squared Prediction Errors and Ranks for Cluster 1 of Sulfate
Table A 1.2(b) Average Squared Prediction Errors and Ranks for Cluster 2 of Sulfate
Table A 1.2(c) Average Squared Prediction Errors and Ranks for Cluster 3 of Sulfate
Table A 1.3(a) Average Squared Prediction Errors and Ranks for Cluster 1 of Nitrate
Table A 1.3(b) Average Squared Prediction Errors and Ranks for Cluster 2 of Nitrate
Table A 1.3(c) Average Squared Prediction Errors and Ranks for Cluster 3 of Nitrate
Table A 2.1(a) Association Measure for Cluster 1 of Hydrogen
Table A 2.1(b) Association Measure for Cluster 2 of Hydrogen
Table A 2.1(c) Association Measure for Cluster 3 of Hydrogen
Table A 2.1(d) Association Measure for Cluster 4 of Hydrogen
Table A 2.1(e) Association Measure for Cluster 5 of Hydrogen
Table A 2.1(f) Association Measure for Cluster 6 of Hydrogen
Table A 2.2(a) Association Measure for Cluster 1 of Sulfate
Table A 2.2(b) Association Measure for Cluster 2 of Sulfate
Table A 2.2(c) Association Measure for Cluster 3 of Sulfate
Table A 2.3(a) Association Measure for Cluster 1 of Nitrate
Table A 2.3(b) Association Measure for Cluster 2 of Nitrate
Table A 2.3(c) Association Measure for Cluster 3 of Nitrate
Table A 3 Names and Identification Codes for Sites Included in the Study

List of Figures

Figure A 1.1(a) Boxplots of Observed and Predicted Values for Cluster 1 of Hydrogen
Figure A 1.1(b) Boxplots of Observed and Predicted Values for Cluster 2 of Hydrogen
Figure A 1.1(c) Boxplots of Observed and Predicted Values for Cluster 3 of Hydrogen
Figure A 1.1(d) Boxplots of Observed and Predicted Values for Cluster 4 of Hydrogen
Figure A 1.1(e) Boxplots of Observed and Predicted Values for Cluster 5 of Hydrogen
Figure A 1.1(f) Boxplots of Observed and Predicted Values for Cluster 6 of Hydrogen
Figure A 1.2(a) Boxplots of Observed and Predicted Values for Cluster 1 of Sulfate
Figure A 1.2(b) Boxplots of Observed and Predicted Values for Cluster 2 of Sulfate
Figure A 1.2(c) Boxplots of Observed and Predicted Values for Cluster 3 of Sulfate
Figure A 1.3(a) Boxplots of Observed and Predicted Values for Cluster 1 of Nitrate
Figure A 1.3(b) Boxplots of Observed and Predicted Values for Cluster 2 of Nitrate
Figure A 1.3(c) Boxplots of Observed and Predicted Values for Cluster 3 of Nitrate
Figure A 2.1(a) Boxplots of Prediction Errors for Cluster 1 of Hydrogen
Figure A 2.1(b) Boxplots of Prediction Errors for Cluster 2 of Hydrogen
Figure A 2.1(c) Boxplots of Prediction Errors for Cluster 3 of Hydrogen
Figure A 2.1(d) Boxplots of Prediction Errors for Cluster 4 of Hydrogen
Figure A 2.1(e) Boxplots of Prediction Errors for Cluster 5 of Hydrogen
Figure A 2.1(f) Boxplots of Prediction Errors for Cluster 6 of Hydrogen
Figure A 2.2(a) Boxplots of Prediction Errors for Cluster 1 of Sulfate
Figure A 2.2(b) Boxplots of Prediction Errors for Cluster 2 of Sulfate
Figure A 2.2(c) Boxplots of Prediction Errors for Cluster 3 of Sulfate
Figure A 2.3(a) Boxplots of Prediction Errors for Cluster 1 of Nitrate
Figure A 2.3(b) Boxplots of Prediction Errors for Cluster 2 of Nitrate
Figure A 2.3(c) Boxplots of Prediction Errors for Cluster 3 of Nitrate
Figure A 3.1(a) Relative Measure of Agreement for Hydrogen
Figure A 3.1(b) P-Value for Tests of Agreement for Hydrogen
Figure A 3.2(a) Relative Measure of Agreement for Sulfate
Figure A 3.2(b) P-Value for Tests of Agreement for Sulfate
Figure A 3.3(a) Relative Measure of Agreement for Nitrate
Figure A 3.3(b) P-Value for Tests of Agreement for Nitrate
Figure A 4.1(a) Average Prediction Errors for Hydrogen (Cluster 1) in Ascending Order
Figure A 4.1(b) Average Prediction Errors for Hydrogen (Cluster 2) in Ascending Order
Figure A 4.1(c) Average Prediction Errors for Hydrogen (Cluster 3) in Ascending Order
Figure A 4.1(d) Average Prediction Errors for Hydrogen (Cluster 4) in Ascending Order
Figure A 4.1(e) Average Prediction Errors for Hydrogen (Cluster 5) in Ascending Order
Figure A 4.1(f) Average Prediction Errors for Hydrogen (Cluster 6) in Ascending Order
Figure A 4.2(a) Average Prediction Errors for Sulfate (Cluster 1) in Ascending Order
Figure A 4.2(b) Average Prediction Errors for Sulfate (Cluster 2) in Ascending Order
Figure A 4.2(c) Average Prediction Errors for Sulfate (Cluster 3) in Ascending Order
Figure A 4.3(a) Average Prediction Errors for Nitrate (Cluster 1) in Ascending Order
Figure A 4.3(b) Average Prediction Errors for Nitrate (Cluster 2) in Ascending Order
Figure A 4.3(c) Average Prediction Errors for Nitrate (Cluster 3) in Ascending Order
Acknowledgements

I wish to thank members of the Department of Statistics and the Faculty of Graduate Studies at the University of British Columbia for their continued assistance throughout my studies. In particular I wish to thank my thesis supervisors, Professor J. Zidek and Professor N. Heckman, for their help and continued support during my research and thesis write-up. My special thanks go to Professor M. Stone, who suggested the idea of cross-validatory assessment in network design to my supervisor, Professor J.V. Zidek.

I would also like to thank my sponsors, the African Women 2000 Award, who supported me financially.

It is my pleasure to acknowledge the kindness and assistance extended to me by my husband, Mr Ayub Komungoma. I do not forget others who gave me moral support.

CHAPTER 1

BACKGROUND

1.1 Introduction

The National Atmospheric Deposition Program/National Trends Network (NADP/NTN) is one of the networks which collect rainfall chemistry data at different locations in the U.S. Each location is called a station and is identified by a station code. Details of the network and the data are given in Section 1.3. The goal of this study is to assess how well the rainfall chemistry of a particular station can be predicted from the chemistries of other stations. This information might be used to reduce the size of the network, if the need arises, by dropping from the network a station whose rainfall chemistry is satisfactorily predicted from other stations' rainfall chemistries. The rainfall chemistries studied are the concentrations of 3 ions, namely hydrogen, sulfate and nitrate. In the following section, we discuss the nature of acidic deposition and its effect on biota.

Various methods of predicting one station's chemistry from the others are considered, namely ordinary regression, regression using a Bayesian approach and Stone's cross-validatory procedure. Descriptions of each method are included in the next chapter. Each month's rainfall chemistry at a particular station is predicted from the rest using each method, one at a time. This is, in turn, used to get a prediction error for each month, one at a time, for that particular station. The average squared prediction error at each station is used to assess each station's predictability. The smaller its average squared prediction error, the easier the prediction of a station. Stations are ranked by the above criterion. In this way we can rank the stations in the order in which they might hypothetically be dropped from the network, if necessary. Finally, the rankings from the three different methods plus the rankings of Wu and Zidek (1992) from an entropy based approach are compared. A brief review of this entropy approach is given in Section 1.4. For more details, see Wu and Zidek (1992).

1.2 Effects of Surface Water Chemistry on Biota

The two main negatively charged ions that play a major role in the process of acidic deposition are sulfate and nitrate. These ions combine with hydrogen ions to form acidifying chemical compounds (sulfuric and/or nitric acid).

Acidic deposition causes surface water to lose its acid neutralizing capacity (ANC), which results in increased acidity (lower pH) and increased inorganic aluminum, which is toxic to aquatic organisms.

The extent to which acidic deposition causes surface water acidification is determined by the processes occurring in the surrounding watershed. When water moves through the watershed, various processes change its chemical composition. The most prominent processes that take place are those that neutralize acids and release base cations (positively charged ions such as calcium and magnesium).
One of these processes is mineral weathering, in which minerals gradually dissolve with the passage of time. The other is a reaction in which ions are exchanged in the soil; that is, an acidic hydrogen ion entering the soil is absorbed in the soil, replacing absorbed base cations, which in turn are released to the water.

As most surface waters are well buffered, with pH values between 6.5 and 8.0, waters in which acid neutralizing and acid generating processes are nearly in balance are most likely to be affected by acidic deposition.

Acidic deposition on surface water increases sulfate. The trend in many areas is that sulfate concentration increases as the rate of acidic deposition increases. However, nitrate remains low; although nitrate is a very important compound in acidic deposition, most watersheds retain it efficiently because of its importance in plant nutrition. On the other hand, as acid inputs to a watershed increase, there is a nearly universal response of an increase in the magnitude of acid neutralizing reactions that produce base cations. In most watersheds almost all of the acid input is neutralized, with no change in pH or ANC of the surface waters. Even in the most sensitive waters, a substantial fraction of the acid input is neutralized by the processes that release base cations. As ANC and pH decrease, aluminum increases. When pH declines, aluminum, which is found in nearly all soil minerals, is leached from the soils, causing concentration levels in lakes and streams to rise. Dissolved organic carbon tends to decrease as acid input increases. This process reduces the decline in ANC and pH, thus partially making up for increased acidity.

The harmful effects on aquatic life are not caused by ANC change alone, because aquatic organisms respond to many factors. Other factors affecting aquatic organisms are the change in pH and the release of calcium caused by acidification. The change in pH is the main variable that affects aquatic life.

Aluminum concentrations are always low in non-acidic waters. When the pH value decreases below 5.5, the concentration of aluminum increases, very often to toxic levels. Both a decrease in pH and an increase in aluminum can make acidified water toxic to fish and other biota.

In very dilute systems, low calcium levels could be stressful to fish, although in these waters the concentration of base cations increases in response to acidic deposition. Elevated base cations, especially calcium, may partially mitigate the toxic effects of low pH and high aluminum. Therefore, as surface water acidifies, the resulting combination of hydrogen ions, aluminum, and calcium determines the biological effects.

Some types of organisms are sensitive to the chemical changes that accompany acidification and thus cannot grow, survive, or reproduce in acidified water. As acidity increases, these acid-sensitive species perish, resulting in a decline of species richness.

The phenomenon of surface water chemistry and its effect on biota has been of much concern to researchers. Data are collected all over the world to enable researchers to come up with definite conclusions about the response of aquatic life to acidification. Details on acidic deposition are found in the National Acid Precipitation Assessment Program, 1990 Integrated Assessment Report.

1.3 Network and Data Description

The NADP/NTN is one of the networks in the U.S. which collect data on acidic deposition. This network collects weekly wet precipitation samples at more than 200 stations.
These precipitation chemistry samples are analyzed by the Central Analytical Laboratory at the Illinois State Water Survey in Chicago, where ion concentrations are measured. Finally, the data are transferred to the Acidic Deposition System (ADS) data base. This data base was established by the U.S. Environmental Protection Agency at the Northwest Laboratory in the U.S. to provide an integrated, centralized data base for the data collected by atmospheric deposition networks in North America. For more details about the NADP/NTN network and ADS, refer to Olsen and Slavish (1986).

Our study uses monthly volume-weighted mean ion concentrations rather than weekly means since there is a large number of weeks with no precipitation. The analysis is done on one ion at a time using stations with fewer than five missing monthly volume-weighted means. As a result, only 86 stations are used for the hydrogen ion analysis, and 81 stations in the analysis of the remaining ions. The data included in this study were collected between 1983 and 1986 inclusive, so a total of 48 monthly volume-weighted means are used in the analysis. Because of the small number of monthly volume-weighted means relative to the number of stations, it is necessary to restrict our analysis to small clusters of stations and proceed from cluster to cluster. The clusters given by Wu and Zidek (1992) are used. Clustering was done for each ion separately using the k-means algorithm of Hartigan and Wong (1979). Given k, the number of clusters, the method seeks to find k clusters so that the within-cluster sums of squares are minimized. The method proposed by Krzanowski and Lai (1988) was used to select k, the number of clusters, for each ion. The numbers, k, selected by the method in this study for the different ions range from three to six. The cluster sizes range from 2 to 47 stations. A logarithmic transformation of the data was performed prior to the analysis. In certain clusters with more than 20 stations the number of complete records over all stations is less than the number of stations in the cluster. For example, the number of complete records in one of the sulfate clusters with 36 stations is 19. So, for the analyses presented in this thesis, the missing values are replaced by their estimates. More about the estimation of missing values appears in Section 2 of Chapter 2.
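The clusters themselves are taken from Wu and Zidek (1992), but the grouping step is easy to reproduce in outline. The sketch below is a minimal illustration rather than the original computation: it assumes a completed stations-by-months array of log-transformed monthly volume-weighted means, uses scikit-learn's k-means in place of the Hartigan-Wong implementation, and takes k as given instead of selecting it by the Krzanowski-Lai criterion; the data in the usage example are random stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_stations(X, k, seed=0):
    """Group stations by the similarity of their monthly concentration profiles.

    X : (n_stations, n_months) array of log-transformed monthly
        volume-weighted mean concentrations with no missing entries.
    k : number of clusters (selected per ion in the thesis; supplied here).
    Returns an integer cluster label for each station.
    """
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    return km.fit_predict(X)

# Hypothetical usage: 81 stations, 48 months, 3 clusters (as for sulfate).
rng = np.random.default_rng(0)
X = rng.normal(size=(81, 48))              # stand-in for the real data matrix
labels = cluster_stations(X, k=3)
print([int((labels == c).sum()) for c in range(3)])   # cluster sizes
```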
1.4 Review of the Entropy Approach

The purpose of an environmental monitoring network may be difficult to specify precisely. This presents a dilemma for the designer of such a network. Caselton and Zidek (1984) argue that the purposes of any network are in essence the reduction of uncertainty about some aspect of the world. They conclude that a rational design must minimize entropy, a measure of uncertainty.

The theory of entropy and its potential role in assessing the quality of the data generated by an existing network are described by Caselton, Kan and Zidek (1990). If X is an uncertain, i.e. random, quantity or vector of such quantities, and f is the probability density function of X, then the uncertainty about X is expressible by the entropy of X's distribution, i.e. by

H(X) = E[ -log{ f(X)/h(X) } ],

where, according to Jaynes (1963), h is a "measure" representing complete ignorance. The inclusion of h in this definition of entropy makes this index of uncertainty satisfy the natural requirement of being invariant under one-to-one transformations of the scale of X. Although the uncertainty about X is regarded as being of primary interest, often its distribution is determined by the conditional density of X, f(· | θ), given an unspecified parameter, θ, which is of interest in its own right. In this case the total uncertainty of (X, θ) must be indexed. Conditional on the available data, the total uncertainty is then indexed by the total entropy, defined by

H(X, θ) = -E[ log{ f(X, θ | data) / h(X, θ) } | data ],   (1)

where the expectation is taken over both X and θ, and

f(X, θ | data) = f(X | θ, data) π(θ | data),   (2)

π(θ | data) being the posterior distribution of θ.

To assess the performance of stations in the network using the entropy approach, it is supposed that, hypothetically, a specified number of stations, u, is to be dropped from the network and only g coordinates of X will be measured in the future, with u + g = p, where p is the number of stations. After rearranging subscripts as necessary, let X = (U, G), where U and G denote, respectively, the u and g dimensional vectors of values corresponding to the stations which are to be "ungauged" and "gauged". The process of measurement will eliminate the uncertainty about G, assuming the measurement error to be negligible. The amount of uncertainty so eliminated would be MEAS, defined by

MEAS = -E[ log{ f(G | data) / h(G) } | data ].

The set of g stations would be chosen to maximize this entropy (MEAS). It can be shown that this same set of g stations can be found by minimizing PRED + MODEL, the residual uncertainty remaining after G is observed, where

PRED = -E[ log{ f(U | θ, G, data) / h(U) } | data ]   and   MODEL = -E[ log{ f(θ | G, data) / h(θ) } | data ].

The g stations that maximize MEAS are considered to produce high quality data. On the other hand, the u stations that maximize PRED + MODEL (the residual uncertainty after G is observed) are regarded as producing low quality data.

Wu and Zidek (1992) apply this theory to assess the quality of the same data set as in our study. The analysis was done one ion at a time, since there are ion-to-ion differences in data quality. The stations were clustered and the entropy analysis done within each cluster. This was necessary because the number of stations is greater than the number of observations (48 volume-weighted ion concentration means). If the size of the network were, hypothetically, to be reduced, then only the selected g stations would be retained.

In the implementation of the entropy theory, Caselton et al (1992) and Wu and Zidek (1992) found it was not computationally feasible to find the best subset of g gauged stations among all p stations in a cluster. Thus a stepwise suboptimization procedure was used. The first step consisted of finding the p − 1 stations which maximize MEAS. The station left out would be the first one to be dropped from the network, hypothetically speaking. The next step was to find the p − 2 stations, among the p − 1 selected in the first step, which maximize MEAS, and this yielded a second station which might hypothetically be terminated. This process continued until just one station was hypothetically left in the network. This exercise left the stations in ranked order, starting with the station having the lowest quality data and finishing with the station having the highest quality data.
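The exact MEAS computation in Caselton, Kan and Zidek (1990) and Wu and Zidek (1992) is based on a Bayesian predictive distribution; the sketch below is only a hedged illustration of the stepwise suboptimization itself. It uses the simpler fact that a Gaussian vector with covariance Σ has entropy equal to ½ log det Σ plus a constant, and repeatedly drops the station whose removal leaves the largest log-determinant (i.e. the largest MEAS) among the retained stations. The covariance input and station labels are assumptions for illustration, and this is not the algorithm actually used in the thesis.

```python
import numpy as np

def entropy_style_ranking(S, names):
    """Stepwise ranking of stations in the spirit of the MEAS criterion.

    S     : (p, p) covariance matrix of the stations' log concentrations.
    names : list of p station labels.
    At each step the station whose deletion leaves the largest
    log-determinant (entropy) among the retained stations is dropped,
    so the returned list runs from lowest to highest data quality.
    """
    retained = list(range(S.shape[0]))
    order = []
    while len(retained) > 1:
        best_drop, best_logdet = None, -np.inf
        for j in retained:
            keep = [i for i in retained if i != j]
            _, logdet = np.linalg.slogdet(S[np.ix_(keep, keep)])
            if logdet > best_logdet:
                best_drop, best_logdet = j, logdet
        retained.remove(best_drop)
        order.append(best_drop)
    order.extend(retained)              # the last station kept ranks highest
    return [names[i] for i in order]
```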
Looking at the rank order within clusters, the station identified as having the highest quality data is, in most cases, geographically isolated from the rest of the stations in the cluster. This seems reasonable, since gauging such a station would be expected to substantially reduce the uncertainty. We may conclude that the entropy approach is promising as a way of assessing the quality of the data. But this approach has shortcomings. Entropy is a complex measure of uncertainty and therefore unintuitive. In addition, we do not know how outliers might affect the ranking of the stations. Since the entropy approach is not based on a predictive model, it does not yield a method which could actually be used to predict the ion concentrations of the stations which would be dropped from the network.

To address these shortcomings, a different approach is taken here to determine the relative quality of the environmental data. In this approach, a station is considered to yield high quality data if its observations are difficult to predict from the observations of other stations. We do not mean to suggest that we are discrediting the entropy approach, but rather we are aiming at developing a better understanding of it.

CHAPTER 2

METHODS FOR PREDICTING ONE STATION'S RECORDS FROM THE OTHERS

2.1 Introduction

A multiplicity of plausible objectives can be foreseen for any data collection network, and at the same time some important future uses of the data may not be foreseen. This poses a dilemma, as quality represents fitness for intended use (see Caselton, Kan and Zidek, 1990). As noted in the Introduction, to circumvent this difficulty, Wu and Zidek (1992) use an entropy based approach to assess the relative quality of the data produced by each station in a data gathering network, specifically the network which is the subject of our study.

The goal of the present study is similar to that of Wu and Zidek (1992) except that we use a concept other than entropy to define data quality. Like Wu and Zidek (1992), we look at the data quality for each station as the amount of additional information provided by the data from that station. However, we define the amount of information as the extent to which the data from a particular station can be predicted from those of the other stations. A station whose data are hard to predict is considered as adding much information to the network. We would interpret such a station's data as being of high quality. By a similar argument, stations whose data are easily predicted are thought to add little information, which we interpret to mean the data are of low quality. We could argue that, if there is a need to reduce the size of the network, the stations with low quality data should be dropped from the network first.

The methods used in this study to predict one station's rainfall chemistry records from the others are ordinary regression, regression using the Bayesian approach and Stone's cross-validatory assessment method (see Stone, 1973). Cross-validation is used to assess the performance of all the methods.

In all three methods, regression models are constructed to predict the ion concentration of station j from the ion concentrations of the remaining stations in a cluster. Months are considered as replicates. For instance, the ordinary regression method results in a regression model for each month i = 1, ..., 48 and each station j = 1, ..., p, where p is the size of the cluster. For fixed i and j the regression model is constructed using the remaining 47 months as replicates. Thus, there are p parameters to be estimated from 47 "replicates". The values of p range from 2 to 47 and, in some of the larger clusters, p is close to and sometimes even greater than the number of months with complete observations.
For example, in one of the sulfate clusters with p = 36, there are only 19 months for which all the station values are available. It is not possible to perform an ordinary regression analysis when there are only 19 − 1 = 18 cases available to estimate 36 parameters. So our analysis is done with missing observations replaced by their estimates at the outset: for each cluster, we produce an X matrix of logarithmically transformed ion concentrations with no missing entries. These completed data sets are used for all methods to achieve consistency.

Our strategy for estimating missing observations is discussed in the next section. The last three sections of this chapter discuss in detail the three methods used to predict a station's rainfall chemistry from the others.

2.2 Estimation of Missing Observations

Various methods for estimating missing observations have been proposed by different authors. Afifi and Elashof (1961) suggest filling in the missing observations for each variable by that variable's mean. Another method uses regression instead. A variable with missing observations is regressed on the other variables in the study, using only complete cases. Regression is done separately for each variable with missing values. The estimated regression model is used to impute the missing values. This method is discussed by Buck (1960) as well as Afifi and Elashof (1961). Stein and Shen (1991) regressed the logarithm of sulfate concentrations on the amount of precipitation and the month of the year when precipitation occurred. The model fitted by least squares was used to impute the missing sulfate values.

A modified regression strategy was used in this study to impute the missing values. Each station with missing values was regressed on the other stations using only complete records. But, as we saw above, in some of the clusters the number of complete records was less than the number of parameters to be estimated. So the regression method needed some modifications before it could be applied to our data. These modifications are found in a BMDP program which estimates missing values. The program is called "Description and estimation of missing data" and is abbreviated as "PAM".

This method uses stepwise regression to select the variables to be used. First, the variable most correlated with the variable with missing values is chosen to enter the regression equation. If the chosen variable meets the "F-to-enter" criterion (explained below), then the next variable is chosen. The variable chosen next is the one with the highest partial correlation with the variable with missing values, with the partial correlation conditional on the variable already used in the equation. This variable must also meet the "F-to-enter" criterion. Additional variables are chosen in the same manner until all variables which meet the "F-to-enter" criterion have been used. If, during stepwise regression, no variables satisfy the criterion for admission into the regression, the mean of the variable with missing values is used to fill in that variable's missing values.

The "F-to-enter" criterion is motivated by an approximate test of the coefficient of any predictor variable. That is, the square of the ratio of the predictor's regression coefficient to its standard error is approximately distributed as an F statistic with one degree of freedom for the numerator. The square of this ratio is compared with the pre-set "F-to-enter" limit. If the square is greater than this limit, the variable has satisfied the "F-to-enter" criterion. For further details on this program, refer to BMDP Statistical Software, 1983 edition, page 217, and Frane (1978b).
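BMDP's PAM is not reproduced here, but the mechanism it applies can be sketched. The function below is a simplified stand-in, not the BMDP algorithm: for one station it admits predictors among the other stations one at a time, largest F-to-enter first, using complete months only, stops when no candidate exceeds the limit, and falls back to the station mean when no predictor qualifies. The F-to-enter limit of 4.0 and the assumption that the predictor stations are observed in the months being filled are illustrative simplifications.

```python
import numpy as np

def _rss(y, Z):
    """Residual sum of squares and coefficients from an OLS fit of y on Z."""
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    return float(resid @ resid), beta

def impute_station(X, j, f_limit=4.0):
    """Fill the missing months of station j by stepwise regression on the other stations.

    X : (n_months, n_stations) array with np.nan marking missing values.
    """
    y_all = X[:, j]
    miss = np.isnan(y_all)
    if not miss.any():
        return y_all.copy()
    complete = ~np.isnan(X).any(axis=1)        # months where every station is observed
    y = y_all[complete]
    n = len(y)
    chosen = []
    remaining = [k for k in range(X.shape[1]) if k != j]
    while remaining:
        Z0 = np.column_stack([np.ones(n)] + [X[complete, k] for k in chosen])
        rss0, _ = _rss(y, Z0)
        best_k, best_F = None, -np.inf
        for k in remaining:
            Z1 = np.column_stack([Z0, X[complete, k]])
            rss1, _ = _rss(y, Z1)
            df = n - Z1.shape[1]
            F = (rss0 - rss1) / (rss1 / df) if df > 0 and rss1 > 0 else -np.inf
            if F > best_F:
                best_k, best_F = k, F
        if best_F < f_limit:
            break
        chosen.append(best_k)
        remaining.remove(best_k)
    filled = y_all.copy()
    if not chosen:                             # nothing met the criterion: use the mean
        filled[miss] = np.nanmean(y_all)
        return filled
    Z = np.column_stack([np.ones(n)] + [X[complete, k] for k in chosen])
    _, beta = _rss(y, Z)
    Zmiss = np.column_stack([np.ones(int(miss.sum()))] + [X[miss, k] for k in chosen])
    filled[miss] = Zmiss @ beta                # assumes predictors observed in those months
    return filled
```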
2.3 Ordinary Regression Using Cross-Validatory Assessment

Ordinary regression using cross-validatory assessment is one of the methods used in this study to assess how well each station's rainfall chemistry can be predicted from the chemistry records of the other stations. In this study the sample is divided into two parts. The first part is used for estimation while the second part is for assessment. The size of the estimation subsample is taken to be n − 1 (with n = 48) and that of the prediction subsample to be 1. These are the same subsample sizes used by Stone (1973).

We proceed as follows. One station is selected and its data values designated as the predictands. The remaining station records provide the predictors. Once a station is fixed, we set aside one month's record from among the n = 48 monthly records, and treat this as a "future" month for prediction. Now the data for the remaining 47 months are used as a "training set"; a linear model is fitted by ordinary least squares, using the 47 months as replicates. The estimated regression equation is then used to predict the ion concentration of the future month for the designated station, with the remaining stations' concentrations for that month as predictors. The selected month is now replaced by another and the process repeated until all 48 months have played the role of the future month. In this way we obtain 48 predictions and 48 prediction errors for the designated station. That station is now replaced by another from the cluster and the whole exercise repeated. A new station now provides the predictands, and this continues until all stations in the cluster have played the designated role. At this point we can assess the efficacy of ordinary regression and the relative difficulty of predicting the records of the various stations from the others.

To state our procedures more precisely, let p be the number of stations in a cluster and n = 48 the number of months in the study. Further let:

x_{ij} be the logarithmically transformed ion concentration for the ith month at the jth station,
X = (x_{ij}), i = 1, 2, ..., n, j = 1, 2, ..., p, be the n × p observation matrix,
X_{i.} = (x_{i1}, ..., x_{ip}), i = 1, ..., 48, and X_{.j} = (x_{1j}, ..., x_{nj})^t, j = 1, ..., p,
X^{-j} be the matrix X with column vector X_{.j} deleted,
X^{-i} be the matrix X with row vector X_{i.} deleted,
X^{-ij} be the matrix X with both column vector X_{.j} and row vector X_{i.} deleted,
X_{.j}^{-i} be the column vector X_{.j} with the ith month deleted,
X_{i.}^{-j} be the row vector X_{i.} with the jth station deleted,
X^{-ij,1} be the matrix X^{-ij} with a column of ones added as the first column,
X_{i.}^{-j,1} be the row vector X_{i.}^{-j} with a one added as its first element.

For fixed j and i = 1, 2, ..., 48 the dependence of X_{.j}^{-i} on X^{-ij,1} is modeled as

X_{.j}^{-i} = X^{-ij,1} β + U,   (3)

where X_{.j}^{-i} is a 47 × 1 column vector, X^{-ij,1} is a 47 × p matrix, β is the p × 1 vector of regression coefficients and U is a 47 × 1 column vector of random errors. Denote the estimated regression coefficients (which depend on i and j) and the predicted value of x_{ij} by β̂ and XEST_{ij}, respectively. Then X_{i.}^{-j,1} is multiplied by β̂ to get XEST_{ij}. The prediction error made in predicting x_{ij} is given by x_{ij} − XEST_{ij} and is denoted by PE_{ij}. Both XEST_{ij} and PE_{ij} are on a logarithmically transformed scale. Each station yields a vector of 48 elements for both predicted values and prediction errors, one element for each month. The average of the squared prediction errors for each station is used to assess how well each station's rainfall chemistry can be predicted from that of the other stations. For a fixed station, j, let I_j be the set of indices for which x_{ij} was not imputed, that is, the set of i's for which x_{ij} was not missing in the original data set. Then the average of the squared prediction errors for station j is calculated using prediction errors only for months i in I_j. Since the x_{ij} values for the months with i ∉ I_j were imputed using regression, the prediction errors from these months are omitted from the average to avoid misleadingly small estimates of the prediction error. Now, if we denote the average squared prediction error of station j by APE_j, then

APE_j = ( Σ_{i ∈ I_j} PE_{ij}² ) / (48 − k_j),   (4)

where k_j is the number of months for which station j had missing data. Stations are ranked using APE. Within a cluster the station with the minimum value of APE is ranked first and has the lowest quality data according to the criterion described above. The station with the highest quality data is the one with maximum APE and is ranked last.
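The whole leave-one-month-out exercise for one cluster can be written compactly. The sketch below is an illustration under the notation above, assuming X is the completed 48 × p matrix of log concentrations for a cluster and `observed` is a boolean matrix that is True where a value was not imputed, so that imputed months are excluded from the average exactly as in Equation (4); the variable names are ours, not from the thesis.

```python
import numpy as np

def ordinary_regression_ape(X, observed):
    """Average squared leave-one-out prediction error (APE) for each station in a cluster.

    X        : (n_months, p) completed matrix of log-transformed concentrations.
    observed : boolean array of the same shape, True where the value was not imputed.
    For fixed station j and month i, station j is regressed (with intercept) on
    the other stations using the remaining 47 months, the fitted model predicts
    x_ij, and squared errors are averaged over the non-imputed months only.
    """
    n, p = X.shape
    ape = np.full(p, np.nan)
    for j in range(p):
        y = X[:, j]
        T = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        sq_errors = []
        for i in range(n):
            if not observed[i, j]:
                continue                               # month i was imputed for station j
            keep = np.arange(n) != i
            beta, *_ = np.linalg.lstsq(T[keep], y[keep], rcond=None)
            sq_errors.append((y[i] - T[i] @ beta) ** 2)
        if sq_errors:
            ape[j] = float(np.mean(sq_errors))
    return ape

# Stations are then ranked from smallest APE (lowest quality, dropped first)
# to largest APE (highest quality), e.g. ranks = ape.argsort().argsort() + 1.
```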
The average of the squared prediction errors for each station is used toassess how well each station's rainfall chemistry can be predicted from that of the otherstations. For fixed station, j, let .1a be the set of indices for which xij was not imputed,that is, the set of i's for which xij was not missing in the original data set. Then,the average of the squared prediction errors for station j is calculated using predictionerrors only for months i in I. Since the xi; values for the months with i .13 wereimputed using regression, the prediction errors from these months are omitted from theaverage to avoid misleadingly small estimates of the prediction error. Now if we denotethe average squared prediction error of station j by APE; , thenAPE.; = (E PEij)/(48 — kj),	 (4)JET,where ki is the number of months for which station j had missing data. Stations areranked using APE. Within a cluster the station with the minimum value of APE isranked first and has the lowest quality data according to the criterion described above.The station with the highest quality data is the one with maximum APE and is rankedlast.2.4 Regression Using a Bayesian ApproachIn the previous section, a least squares approach is used to estimate the unknownregression coefficients in predicting one station's ion concentrations from the others. Butif prior information about the regression coefficients is available, then this informationshould be exploited to find improved regression coefficient estimates. In this section, weassume that such prior information exists. Specifically, when station j and month i arefixed, the modelX cor = X cor r + (I,	 (5)14is assumed, where X,,,„ denotes the matrix of data values centred about station averages.Prior knowledge on the regression coefficients, (flu , , Pp') is expressed in terms of adensity which is exchangeable, where p' = p — 1. That is, prior knowledge of theregression coefficients would be unaltered by any permutation of the suffixes. Thisimplies that our opinion of p; is the same as that of pg .This idea of exchangeability derives from the work of de Finetti (1964). Lindley andSmith (1971) apply this idea and give explicit expressions for the Bayesian estimates.They use the mixture approach to construct an exchangeable prior distribution of theparameters of interest. This mixture approach is supported by Hewitt and Savage(1955). To give a simple example from Lindley and Smith (1971), suppose E(y110) = 0,and E(01 ) = II, with var(y2 10) = cr 2 and var(Oi ) = T 2 . If it is assumed that the O's areexchangeable, then, P, the prior distribution on 0, is of the formnPO f) = P(19t1/1)d0(11), (6),=iwhere P(•111) and OH are arbitrary measures. In the language of Lindley and Smith(1971), this example is a two stage-model. If in turn, depends on unspecified "hyper-parameters", then we can have a three stage-model. The choice of the number of stagesto be used depends on the individual.Lindley and Smith (1971) consider the linear regression model of the form,Y=Ted-U, (7)where the prior opinion on Q is exchangeable. We modify their approach by first "cen-tering" the data about their mean to eliminate the need for an intercept coefficient inEquation 7. Because of the exchangeability assumption, we can not include the interceptin the regression, since its inclusion would make the exchangeability assumption unre-alistic. To be precise, after setting aside the values for the "future month", we subtract15each station's mean (based on 47 months) from each station's concentrations. 
Thenwe work with these corrected values. This forces the regression line to pass throughthe origin. We apply Lindley and Smith's method (1971) with the regression model ofEquation 5, where = X t — 1(E koi x k3 )/47 and 1 is a 47 x 1 vector all of whosejelements are 1. Each station's values in X;oirjr are obtained from the corresponding sta-tion's values in X -ij in the same way as X.,0171 r.i . The resulting Bayesian estimate of 13,,Q*, is then used to calculate the prediction of station j during month i, by,ii; = (E x0/47 +	 (8)koiwhere Xc7rri. is obtained from	 by subtracting from each element of X27-3 the appro-priate station average.Suppose, a- 2 , cr,3	N (. X ;,,?r , 0-2	 (9)where In, is the n' x n' identity matrix and n' = 47. Assume Ot = (01 , ... , 9) isexchangeable and that,3:71, 4	 N(, 4).	 (10)Assuming vague prior knowledge for	 Lindley and Smith (1971) give an explicit ex-pression for the Bayesian estimate asO.* =	 + k((X cot4) t X jr) - 1 ( 1.p,	(ip')where k = Q2 1 , ,3 is the usual least squares estimate, p' is the number of parameters,and Jp, is a matrix of dimension p' x p', all of whose elements are 1.In practice a2 and cr,i will be unknown, and they can be estimated from the data.To estimate a 2 and 4, it is now assumed that a 2 and ai23 , which are independent of 0and are independently distributed asv /0.2	 xv2	 (12)16	voAo/a20	 x2, ,	 ( 13)(see Lindley and Smith, 1971), where v, A, vo and Ao are prior parameters. The jointdistribution of X -i j) e.14 a-2 and ap2 is given bycorr. (0.2)-1/2nexpf _(xc—oirr. —	 dy(xc-oirr.; — xcT,i7.3,.,#)1(20-2 )} x	(4) -112Pexp{—(g —	 — 1)1(24)} x(0-2 ) - 2 (P+2) exp{—vA/(2a2 )} x (4) - 1("+2) ex p{ — v	 I (2a-P}	 (14)Integrating the joint distribution with respect to e, one calculates the posteriordistribution of 0, u 2 and •20 , which is proportional to	(a2) _1/2(.+,+2) exp[_ { , A pc_07.2 ,4 —	 i3)t(x_i — x.c_02,,,,q)}1(20.2)]	XCOTT	 COT T.3101	x(o.i2,3),./*+,,A-n exp[_ {,„A, E (ok — #.) 2 1/(2a 20 )],	 (15)k=1where 0.	 (E7;:_i 13k) (p'). Using the results in Equation 11 and the modal equationsfor a2 and ar23 from the posterior distribution, we get the Bayes estimates of 13, •2 and20"	0.* =	 + k*((x;o1r37-) tXc—otrjr ) -1	— (41 )/P1)}:0)S 2	=	 vA (X;oirr.i — X;oirjritT) t (X;;irr.i — X	 ,3*)} I (n' v + 2),7112 = { VO APL^ (Qk 13.' ) 2 } API VO 1)7	 (16)k=1where k* = s 2 /s4. Lindley and Smith (1971, p17) suggest that the parameters v and vomay well be small positive numbers and that, in any case, the solution is very insensitiveto changes in these numbers, so that they may be set to zero. In the example in Lindleyand Smith (1971), the values of v and vo were set to zero as well as was the startingvalue of k*. The resulting equations were solved iteratively, starting with initial valuesfor s 2 and 4 or by setting the starting value of k* to zero. The initial value of 0* was17found via (16) and then used to find the next values of s 2 and 32 , and so on. However,if v and vv are set to zero and if during iteration, the values of the estimates of theA's become close to each other, then .5 20 becomes very close to zero. Hence k* = ,2 /,2becomes very big. This can result in an overflow in the computer calculations. From(16), it can be seen that /3* 0 as k* oo. Thus if a small estimate of /3 is usedin our prediction model, Equation (8), the prediction will be close to the the station'saverage. Because of the problem of overflow, we suggest two alternatives of estimating8. 
The first alternative is to set ν_β λ_β in the last equation of (16) to a small number such that ν_β is negligible and can be ignored in the denominator of the same expression. This modification keeps the estimate, s_β², bounded away from zero in the iterations. The second alternative is not to iterate, but rather to pick a value of k* and use it in the first equation of (16) to calculate the estimate, β*.

Because we do not have grounds to prefer one of the alternatives over the other, both are used. In the first alternative, which we will call Bayesian regression alternative (1), the initial value of k* is chosen to be zero, as in Lindley and Smith (1971). In subsequent iterations, ν_β λ_β is set equal to 0.1, with ν_β negligible, so that

s_β² = { 0.1 + Σ_{k=1}^{p'} (β*_k − β̄*)² } / (p' + 1).

In the second, non-iterative alternative, which we will call Bayesian regression alternative (2), we take

s² = (X_{corr.j}^{-i} − X_{corr}^{-ij} β̂)^t (X_{corr.j}^{-i} − X_{corr}^{-ij} β̂) / (n' + 2),
s_β² = Σ_{k=1}^{p'} (β̂_k − β̂̄)² / (p' − 1),

where β̂̄ is the average of the β̂_k. However, for a cluster of size two the second alternative cannot work, because after fixing a station there is only one station as a predictor, and thus p' = 1 and s_β² is undefined.

These two approaches are applied to our data to give the Bayesian estimates of the regression coefficients for each data set constructed by fixing station j and month i, and using the remaining 47 months to construct a model for the prediction of the ion concentration of station j. For fixed station j and month i, we get the Bayesian estimate β*. This β* is used to calculate the predicted value of x_{ij}, denoted by x̂_{ij}. We get the prediction error made in predicting x_{ij}, namely x_{ij} − x̂_{ij}. For each station j, we obtain the vector of 48 predicted values and prediction errors, one for each month. The average of the squared prediction errors for station j, calculated using only the months which were not imputed, is used to give the assessment of the station's predictability. For each cluster, stations are ranked using the average of the squared prediction errors. The station with the minimum average squared prediction error is ranked first, and is considered as having the lowest data quality. Similarly, the station with the maximum average squared prediction error is ranked last and considered as having the highest data quality. The results are included in the Appendix and the discussion is in Chapter 3.
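For a single (station j, month i) data set, the two alternatives can be sketched as follows. Here `Xc` is assumed to be the 47 × p' matrix of the other stations' concentrations centred at their 47-month means, and `yc` the corresponding centred vector for station j; the iteration count is arbitrary, ν and λ are taken as zero, and ν_β λ_β = 0.1 as described above. This is an illustrative reading of Equation (16), not a transcription of the original program.

```python
import numpy as np

def bayes_beta(Xc, yc, alternative=1, n_iter=50):
    """Exchangeable-prior (Lindley-Smith style) estimate of the regression coefficients.

    Xc : (n', p') centred predictor matrix (n' = 47 months, p' = p - 1 stations).
    yc : (n',) centred response vector for the station being predicted.
    Alternative 1 iterates Equation (16); alternative 2 plugs least squares
    quantities into k* without iterating (it needs p' > 1).
    """
    n, pp = Xc.shape
    XtX = Xc.T @ Xc
    beta_hat, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    M = np.eye(pp) - np.ones((pp, pp)) / pp            # I - J/p'

    def shrink(k):
        # First line of Equation (16): beta* = {I + k (X'X)^{-1}(I - J/p')}^{-1} beta_hat
        A = np.eye(pp) + k * np.linalg.solve(XtX, M)
        return np.linalg.solve(A, beta_hat)

    if alternative == 2:
        r = yc - Xc @ beta_hat
        s2 = float(r @ r) / (n + 2)
        s2b = float(((beta_hat - beta_hat.mean()) ** 2).sum()) / (pp - 1)
        return shrink(s2 / s2b)

    beta = shrink(0.0)                                 # starting value k* = 0
    for _ in range(n_iter):
        r = yc - Xc @ beta
        s2 = float(r @ r) / (n + 2)                    # nu and lambda set to zero
        s2b = (0.1 + float(((beta - beta.mean()) ** 2).sum())) / (pp + 1)
        beta = shrink(s2 / s2b)
    return beta
```

The prediction for the held-out month is then the station's 47-month mean plus the centred predictor row multiplied by the returned coefficient vector, as in Equation (8).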
2.5 Stone's Procedure

Both the ordinary regression method and regression based on a Bayesian procedure, for predicting the rainfall chemistry of a particular station from the chemistries of the other stations, were discussed in the previous two sections. These methods were assessed by cross-validation. If it is accepted in advance that cross-validation will be used in this way, then the ordinary least squares and the Bayesian predictors can be modified to do well in the intended assessment. The resulting predictor differs markedly from those produced by the previous two methods. It is due to Stone (1973) and we include it in our study as a competitor to the first two methods.

Stone's procedure begins by choosing a statistical predictor, which is a function of the data, (y_i, t_i), i = 1, ..., n, and a parameter denoted by α. Here t_i^t = (t_{i1}, ..., t_{iq}), with q predictor variables. The parameter, α, is estimated from the data by cross-validation, using a loss function. This loss function is also used to calculate C+, an assessment of the prediction efficiency. The score is constructed by calculating ŷ_i, a statistical predictor of y_i based on the data set (y_l, t_l), l = 1, ..., n, l ≠ i. The assessment score, C+, is the average of the losses incurred from estimating y_i by ŷ_i.

We apply Stone's method to each station in a cluster. That is, for each fixed station j, we take n = 48, q = p − 1, y_i = x_{ij} and t_i^t = X_{i.}^{-j}, the vector of ion concentrations of the remaining stations in month i. Stone's procedure yields a linear model for predicting station j's ion concentrations from the other stations', and a score, C_j^+, which assesses the predictability of station j. These scores are then used to rank the stations within a cluster.

In Section 3 of his paper, Stone (1973) gives a number of statistical predictors which can be used in different problems. In the case of the present study, the statistical predictor of Stone's Example 3.3 is appropriate. Letting s = {1, ..., n}, the statistical predictor based on (y_i, t_i), i ∈ s, is given by

ŷ(t; α, s) = α ȳ + (1 − α)( ȳ + Σ_k b_k (t_k − t̄_k) ),   (17)

where ȳ = n^{-1} Σ_i y_i, t̄_k = n^{-1} Σ_i t_{ik}, and the b_k's are the estimated regression coefficients in the least squares multiple regression of y on t. The parameter, α, in the statistical predictor is estimated via cross-validation, using squared error loss. That is, α is chosen to minimize

C(α) = n^{-1} Σ_{i=1}^{n} ( y_i − ŷ(t_i; α, s^{-i}) )²,   (18)

where s^{-i} = {1, ..., n} \ {i}. The value of α so obtained is denoted by α+(s), and the resulting model is Equation 17 with α = α+(s).

Stone gives the explicit form of α+(s) corresponding to the selected statistical predictor and loss function as

α+(s) = Σ_i [ r_i²/(1 − A_{ii})² − n r_i (y_i − ȳ) / ((n − 1)(1 − A_{ii})) ] / Σ_i [ r_i/(1 − A_{ii}) − n (y_i − ȳ)/(n − 1) ]²,   (19)

where r_i is the ith residual in the least squares multiple regression using the data (y_i, t_i), i ∈ s, and

A = T (T^t T)^{-1} T^t,

where T is the design matrix corresponding to this regression.

The cross-validatory assessment employs C+, which is calculated as follows. For each i = 1, ..., n, the statistical predictor of Equation 17 is constructed as described above, but using the reduced data set (y_k, t_k), k ∈ s^{-i}. Thus, for each i, we have a cross-validatory choice of α, α+(s^{-i}), and an estimate of y_i, ŷ(t_i; α+(s^{-i}), s^{-i}). The assessment of predictability is given by

C+ = n^{-1} Σ_{i=1}^{n} ( y_i − ŷ(t_i; α+(s^{-i}), s^{-i}) )².   (20)

This statistical predictor and assessment procedure are applied in the present study to each station. The station whose predictability score, C_j^+, is lowest is the most easily predicted, so it is considered to have low quality data according to this assessment procedure. Similarly, a station with a larger value of C_j^+ is hard to predict, and thus is considered to be producing data of high quality. Stations are ranked from low to high according to this definition of their data quality. Discussion of the results is given in Chapter 3. The rankings are given in the Appendix.
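Stone's predictor and its double cross-validatory assessment can be sketched directly. Rather than evaluating the closed form (19), the sketch below recomputes the leave-one-out quantities and minimizes the quadratic criterion (18) in α analytically, which yields the same α+(s); `y` is assumed to hold one station's 48 values and `T` the other stations' values for the same months. Function names and structure are illustrative.

```python
import numpy as np

def _loo_components(y, T):
    """Leave-one-out mean of y and leave-one-out regression prediction at each t_i."""
    n = len(y)
    D = np.column_stack([np.ones(n), T])             # design matrix with intercept
    mean_loo, reg_loo = np.empty(n), np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        mean_loo[i] = y[keep].mean()
        b, *_ = np.linalg.lstsq(D[keep], y[keep], rcond=None)
        reg_loo[i] = D[i] @ b
    return mean_loo, reg_loo

def alpha_plus(y, T):
    """Cross-validatory choice of alpha minimizing Equation (18)."""
    mean_loo, reg_loo = _loo_components(y, T)
    r = y - reg_loo                                  # error of the pure regression predictor
    d = mean_loo - reg_loo
    return float(r @ d) / float(d @ d)

def stone_C_plus(y, T):
    """C+ of Equation (20): alpha is re-chosen on each reduced data set s^{-i}."""
    n = len(y)
    sq_errors = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        yk, Tk = y[keep], T[keep]
        a = alpha_plus(yk, Tk)
        b, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(yk)), Tk]), yk, rcond=None)
        reg_pred = np.concatenate(([1.0], T[i])) @ b
        pred = a * yk.mean() + (1 - a) * reg_pred
        sq_errors[i] = (y[i] - pred) ** 2
    return float(sq_errors.mean())
```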
After fillingin the missing values by their estimates, we have a total of 48 volume weighted monthlyaverage concentrations for each station and ion, and these average concentrations areused in all the methods.In Section 2 of Chapter 1, we discussed the nature and the effect of acid deposition.Our study involves the concentrations of three ions, namely: hydrogen, sulfate andnitrate. The data were logarithmically transformed, to achieve a more nearly Gaussiandata distribution and to be consistent with the earlier work done on the same data set.The clusters from the study by Wu and Zidek (1992) are used.Our results for sulfate ion concentrations will be discussed in detail to illustrate ourfindings. This ion is selected for detailed consideration, so that one can compare ourresults with those from the entropy based analysis of Wu and Zidek (1992) where thesame ion was selected for detailed discussion. Alongside this focused discussion we shallbe commenting generally on all the ions and their clusters.Data for sulfate ion concentrations yield 3 station clusters with 37, 36 and 8 stations.Tables A1.2(a)-A1.2(c) in the Appendix give the ranks of the stations for each cluster22determined by the methods used in our study and those given by entropy analysis. Alsoincluded in the tables are the average squared prediction errors, an index of quality usedto rank stations. The corresponding measure used to rank stations in entropy analysisis not included in the tables, because it is not comparable to average prediction error.To focus our discussion, consider the third sulfate cluster which contains 8 stations,with identification codes 037a, 059a, 061a, 074a, 078a, 271a, 281a, and 354a. The cor-responding station names can be found in the Appendix. Table 3.1 below (identicalto Table A1.2(a) in the Appendix, but reproduced here for convenience) contains theranks and average prediction errors for the 8 stations in the cluster. Since this clustercontains 8 stations, a rank of 8 corresponds to the best station, a rank of 7 to the secondbest station, and so on. Four of the stations in the cluster (half the cluster size) aregiven the same ranks by all the methods. The station with identification code 037a isranked first by the three methods used here, as well as by the entropy analysis. For thisstation, the average squared prediction errors produced by ordinary regression, regres-sion using the Bayesian approach and Stone's cross-validatory assessment method arerespectively, 0.1484, 0.2242 and 0.1483. This means that this station (Glacier NationalPark, Montana) has the lowest quality (most easily predicted) data according to thefour methods. If there were a need to reduce the size of the network, this station mightbe considered first for possible termination. Two other stations in the cluster competefor "best". Identification codes for these stations are 281a and 354a, with respectivenames Bull Run, Oregon, and St. Mary Ranger Station, Montana. Bull Run, Oregon,is ranked best by the ordinary regression method and Stone's procedure, while boththe Bayesian alternative approaches and entropy rank it second best. St. 
St. Mary Ranger Station, Montana, is ranked best by both Bayesian alternative approaches and entropy, while ordinary regression and Stone's procedure rank it second best.

Table 3.1: Average Squared Prediction Errors and Ranks in One of the Sulfate Clusters

station   Regression      Bayesian 1      Bayesian 2      Stone           Entropy
code      APE      rank   APE      rank   APE      rank   APE      rank   rank
037a      0.1484   1      0.1343   1      0.1334   1      0.1483   1      1
059a      0.3091   5      0.2683   5      0.2711   5      0.2836   5      4
061a      0.2537   4      0.2354   4      0.2403   4      0.2279   4      5
074a      0.4518   6      0.3693   6      0.3812   6      0.4096   6      6
078a      0.1796   2      0.1635   2      0.1659   2      0.1610   2      2
271a      0.2498   3      0.1979   3      0.1983   3      0.2276   3      3
281a      0.5258   8      0.4044   7      0.4604   7      0.5398   8      7
354a      0.5053   7      0.4773   8      0.4767   8      0.4472   7      8

In general, the four different methods do not give the same best station. However, as can be seen from the table above, the difference is not very important, since a station ranked best by one method is ranked best, second best or third best by the other methods. Similarly, the ranks for the intermediate stations do not differ greatly. There is no cluster where a station is identified as the best by one method and the poorest by the others. If it were literally necessary to select the best single station, it would be necessary to pay careful attention to its selection. One might well face a decision problem, and other nonstatistical issues might be invoked to resolve the conflict. For example, the geographical positions of the stations might be taken into consideration.

It is not unusual for different methods, adopted for a single purpose, to give different results, or for different judges to give different ranks to various contestants. But it is important that there be a reasonably strong association between the ranks given by the different methods. Below we use association measures to give a more precise assessment of the agreement of the ranks from the five methods.

We use association measures to assess the degree of agreement between pairs of ranking methods and among all five ranking methods. This is done within each cluster. The ten pairwise associations we consider are: ordinary regression with Bayesian regression alternative (1); ordinary regression with Bayesian regression alternative (2); ordinary regression with Stone's procedure; ordinary regression with the entropy approach; Bayesian regression alternative (1) with Bayesian regression alternative (2); Bayesian regression alternative (1) with Stone's procedure; Bayesian regression alternative (1) with the entropy approach; Bayesian regression alternative (2) with Stone's procedure; Bayesian regression alternative (2) with the entropy approach; and Stone's procedure with the entropy approach. We test the null hypothesis that the rankings are random against the alternative hypothesis that a direct association exists between the ranking methods.

For pairwise associations, Spearman's coefficient of rank correlation is used. The test statistic is

R = 1 − 6 Σ_i D_i² / (p³ − p),

where D_i is the difference between the two ranks given to the ith station, and p is the number of stations in the cluster. The value of R lies between -1 and 1, where a value of -1 means perfect disagreement or inverse association and a value of 1 means perfect agreement or direct association, which is of interest in our study. A value of zero indicates there is no association, that is, neither agreement nor disagreement.
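Both this pairwise measure and the concordance measure defined just below (Equations (21)-(23)) are easy to compute directly. The following is a minimal sketch, assuming `ranks` is a k × p array holding one row of within-cluster ranks per ranking method; scipy's spearmanr reports R together with a one-sided p-value for the direct-association alternative, and the chi-square approximation for Q is the one described in the next few paragraphs.

```python
import numpy as np
from scipy import stats

def pairwise_spearman(r1, r2):
    """Spearman's R between two ranking methods, one-sided test for direct association."""
    rho, p_one_sided = stats.spearmanr(r1, r2, alternative="greater")
    return rho, p_one_sided

def kendalls_w(ranks):
    """Kendall's coefficient of concordance W, the statistic Q, and its chi-square p-value.

    ranks : (k, p) array, one row of ranks (1..p) per ranking method.
    """
    k, p = ranks.shape
    C = ranks.sum(axis=0)                                # column sums C_j
    S = float(((C - k * (p + 1) / 2.0) ** 2).sum())      # Equation (21)
    S_star = k ** 2 * (p ** 3 - p) / 12.0                # value of S under perfect agreement
    W = S / S_star
    Q = k * (p - 1) * W                                  # Equation (23); equals 12S/(kp(p+1))
    return W, Q, float(stats.chi2.sf(Q, df=p - 1))

# For the cluster of Table 3.2 (k = 5 methods, p = 8 stations), W = 0.981 gives
# Q = 5 * 7 * 0.981 = 34.3, matching the value reported there.
```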
For p < 30 exact p-values for the calculatedvalues of R are given in Table I of Gibbons (1976) while for larger values of p , p> 30,an approximate normal distribution is used to calculate the p-values. (R-/p —1 isapproximately standard normal under the null hypothesis)For the five different ranking methods, Kendall's coefficient of concordance is used to25test the same hypothesis of no association. Three different test statistics are available.The first of these, denoted by S, measures the departure from lack of agreement and isgiven by,S = 7E[C; — k(p + 1)/21 2 ,	 (21)J.1where Ci is the sum of rankings of the jth station and k = 5 is the number of rankingmethods. This quantity is expected to be small if there is no agreement and big if apositive association exists. But it is difficult to have an intuition about the size of S, so asecond test statistic denoted by W is used. W is a relative measure of association, definedby W S/S*, a ratio of the observed measure of departure from lack of agreement, S,to S*, where S* is given byS* = lE[jk k(p + 1)121 2 .	 (22)S* is the value of S under perfect agreement. The value of W lies between 0 and1, where a value of 0 means no association and a value of 1 means perfect agreement.Accordingly, large values of W call for rejection of the null hypothesis in favor of thealternative. Since the test statistic, S, is a monotonically increasing function of W, andis large when W is large and zero when W is zero, the appropriate p-values are the righttail probabilities. Table K of Gibbons (1976) gives exact right-tail probabilities for Sfor p = 3, k < 8, and p = 4, k < 5.For combinations of p and k that are not covered by that table, an equivalent teststatistic to S and W, denoted by Q, is used. Either of the following two expressions canbe used to calculate Q:Q	 k(p —1)W,Q = 12S I kp(p + 1).	 (23)The distribution of this test statistic, Q, can be approximated by the chi-square distri-26bution with p — 1 degrees of freedom for large k. As with the test statistic S, Q is alsoa monotonically increasing function of W, so the appropriate p-values are also the righttail probabilities. Details of the association test procedures for both pairwise and anynumber, k, of ranking methods is found in Gibbons (1976).The association measures for all the clusters and all the ions are given in Tables A2.1-A2.3 in the Appendix. The entries in the table are R for pairwise associations or Wfor all five ranking methods, Q or S for all four ranking methods, depending on whichstatistic is used to calculate the p-value and, the p-values. Table 3.2 below (identical toTable A2.2(c) in the Appendix but given a different label to be consistent within thischapter) contains the association measures for the third sulfate cluster with 8 stations.Table 3.2: Association measures for the third sulfate clustermethods being compared R or W Q p-valueregression versus Bayesian 1 0.976 - 0.000regression versus Bayesian 2 0.976 - 0.000regression versus Stone 1.000 - 0.000regression versus entropy 0.952 - 0.001Bayesian 1 versus Bayesian 2 1.000 - 0.000Bayesian 1 versus Stone 0.976 - 0.0000Bayesian 1 versus entropy 0.976 - 0.000Bayesian 2 versus Stone 0.976 - 0.000Bayesian 2 versus entropy 0.976 - 0.000Stone versus entropy 0.952 0.001all methods 0.981 34.3333 0.000The values of R and W for this cluster are greater than 0.9. Also the p-values areless than or equal to 0.001. These results give evidence for the rejection of the null27hypothesis of random ranking in favor of the alternative hypothesis. 
Looking at the association measure W for all rankings in all the clusters, we see that, for most of the clusters, W is greater than 0.8 and the p-value is less than 0.001. This indicates good agreement of the ranks from all the methods. Results for nitrate show a different pattern: the association measures for the nitrate clusters are relatively low. For example, for one of the nitrate clusters, with 41 stations, W is 0.5. But the p-value for this cluster is small (0.00001), so this gives grounds to reject the random ranking hypothesis in favor of the alternative. Generally, small clusters of size less than five have large association measures but relatively large p-values. This cannot be used to support the null hypothesis of random ranking, since these clusters have inadequate sample sizes to yield a reliable conclusion.

This study was motivated by the results of the entropy analysis. The entropy approach seems to work well in ranking stations on the basis of data quality, but it is not intuitive. So we seek an intuitive method which gives ranks close to those from the entropy analysis; it could be used in combination with the entropy approach to get a better understanding of the data. Scatterplots of the p-values against cluster size and of the relative measures R against cluster size are used to find out which of the methods used here is in closest agreement with the entropy analysis, and to see if agreement depends on cluster size. The plots are done ion by ion. Scatterplots for all the ions are included in the Appendix. Figures 3.1(a) and 3.1(b) below are scatterplots of, respectively, the relative measure of agreement R against cluster size and the p-values against cluster size for the sulfate ion.

From Figure 3.1(a) we see that the line corresponding to the Bayesian alternative (1) and entropy is above the other lines, and the next highest line is the one corresponding to Stone's procedure and entropy. This means that the agreement, as measured by R, between the Bayesian alternative (1) and entropy is higher than for the others. In addition, the corresponding line in Figure 3.1(b) is below the other lines, indicating that the p-values for this comparison are smaller than for the others. These observations suggest that the ranks produced by the Bayesian alternative (1) approach agree with those produced by entropy more often than the others. The same phenomenon is observed with the hydrogen ion rankings, except for one cluster in which the ordinary regression and entropy comparison has a larger value of R and a lower p-value. But this one cluster among the six clusters of hydrogen does not change our conclusion; that is, even for hydrogen, the Bayesian alternative (1) approach and Stone's procedure agree strongly with entropy. Nitrate gives exceptional results: for this ion Stone's procedure seems to rank last in agreement with entropy, but the Bayesian alternative (1) once again shows strong agreement with entropy.

The scatterplots indicate that there is no consistent relationship between agreement and cluster size.

The need for diagnostic checking is one of the reasons which motivated this study of various approaches to assessing data quality. We need to know more about the predicted values and the prediction errors.
We acquire this knowledge by comparing the boxplots of the predicted values to those of the observed values, and by studying the boxplots of the prediction errors.

For each cluster, we look at two sets of boxplots. One set shows the observed values and the predicted values from all four methods used in this study. For easy reading, we frame the five boxes for each station separately, with the station identification code above the frame. The first boxplot in each frame is for the observed values, while the second, third, fourth and fifth represent the predicted values from, respectively, ordinary regression, regression using the Bayesian alternative (1) approach, regression using the Bayesian alternative (2) approach, and Stone's procedure. The second set of boxplots shows the prediction errors from the four methods. The same format as above is repeated: the four consecutive boxes in each small frame represent the prediction errors for a given station, where the first boxplot in a group is for the prediction errors from the ordinary regression method, the second and third boxes are, respectively, for alternatives 1 and 2 of the Bayesian approach, and the last boxplot represents the prediction errors from Stone's procedure.

Boxplots for all the clusters are included in the Appendix. Both sets for the third sulfate cluster are included in this chapter for easy reference, and are labelled 3.2(a) and 3.2(b) for consistency within the chapter. Figure 3.2(a) gives the boxplots of the observed and predicted values, while Figure 3.2(b) gives the boxplots of the prediction errors.

From Figure 3.2(a) we see that the dispersion of the predicted values is less than that of the observed values. Among the predicted values from the different methods, those from Stone's procedure have the smallest dispersion, followed by those from the Bayesian alternative (1) approach. A careful examination of the plot indicates that the predictions of Stone's procedure are pulled toward the center of the data; in other words, Stone's procedure shrinks the predictions towards the center of the data. This phenomenon is strong in other clusters as well. Also, in other clusters, the Bayesian alternative (1) approach reveals the same shrinkage behavior. On the other hand, the predicted values from the ordinary regression method have the widest dispersion, sometimes wider than that of the observed values. This might be caused by outliers.

From Figure 3.2(b), the boxplots of the prediction errors, we see that the prediction errors from the ordinary regression method have the widest dispersion, while the other three methods produce prediction errors with almost the same dispersion. But in other clusters there are some features which are not present in the cluster we have highlighted in our discussion. In particular, the dispersions of the prediction errors from the Bayesian alternative (1) are sometimes smaller than those from Stone's procedure. In general, however, the Bayesian alternative (1) approach and Stone's procedure compete for the distinction of having prediction errors with the smallest dispersion.
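Grouped boxplots of this kind are straightforward to produce. The sketch below is illustrative only (it is not the code used for the figures) and assumes a dictionary `values` mapping each station code to its observed monthly series and to the predicted series from the four methods; it also assumes at least two stations in the cluster.

```python
# Illustrative sketch of the per-station boxplot layout described above.
import matplotlib.pyplot as plt

def station_boxplots(values, keys=("obs", "reg", "bay1", "bay2", "st")):
    """values: {station code: {key: list of monthly values}} (assumed layout)."""
    stations = sorted(values)
    fig, axes = plt.subplots(1, len(stations), sharey=True,
                             figsize=(2.5 * len(stations), 4))
    for ax, code in zip(axes, stations):
        # One box for the observed series, then one per prediction method.
        ax.boxplot([values[code][k] for k in keys], labels=list(keys))
        ax.set_title(code)   # station identification code above the frame
    axes[0].set_ylabel("monthly volume-weighted mean concentration")
    fig.tight_layout()
    return fig
```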
[Figure 3.1(a): Relative Measure of Agreement for Sulfate. Figure 3.1(b): P-values for Tests of Agreement for Sulfate. Both plot the measures against cluster size. Legend: rgent = regression with entropy, bay1ent = Bayesian alternative (1) with entropy, bay2ent = Bayesian alternative (2) with entropy, stent = Stone's procedure with entropy.]

CHAPTER 4

DISCUSSION AND CONCLUSION

In the analysis of the monthly ion concentrations of stations in the NADP/NTN, we attempted to determine which station's ion concentrations were most accurately predicted from the other stations' concentrations. However, our analyses did not answer the question completely, for three reasons. First, the predictability of stations was ranked within clusters, rather than across the entire network. Secondly, our analyses were conducted ion by ion. Thirdly, our different methods of prediction (ordinary regression, Bayes regression, and Stone's cross-validatory regression) resulted in different rankings. It may not be possible to completely resolve these three issues with this data set. However, we have obtained a clear picture of the relationship between the entropy approach and our prediction methods.

For each ion the analysis was done in clusters, each of size less than 48. This analysis by cluster makes the comparison of stations in different clusters impossible. But we argue that, since ion concentrations in different clusters are statistically different, we would not expect stations in different clusters to do well in predicting each other. However, we cannot determine a single worst station for the network, since each cluster gives its own worst station. We suggest a tentative solution to this problem: to take the worst station from a larger cluster to be the overall network worst station. Since a station to be dropped should be redundant, it seems logical to think of a redundant station coming from a larger cluster. Alternatively, one might use geographical knowledge to make the decision. If one could accurately estimate missing values in the original data set, then one could use the weekly average concentrations with missing values replaced by their estimates. This would give us more than the 48 months as replicates, allowing analysis of the entire network, and hence give only one worst station for each ion. We think this idea is feasible, since Stein, Shen and Styer (1991) have used multiple regression to impute missing daily sulfate concentrations. The same problem would arise if we wanted the overall network best station. We do not consider this problem to be of as much concern as the first one, since the need to retain only one station in a network is not realistic. But if there are reasons to identify the overall network best station, then other issues like geographical positions might be invoked in selecting the best station.
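As an illustration of the kind of regression-based imputation suggested here (a sketch under our own assumptions, not the procedure of Stein, Shen and Styer, nor the missing-value method of Chapter 2), each missing value of a station could be predicted from the most strongly correlated station:

```python
# Illustrative regression imputation: fill each station's missing months from
# its most correlated neighbour.  `data` is an assumed (months x stations)
# array with NaN marking missing observations.
import numpy as np

def impute_by_regression(data):
    data = np.asarray(data, dtype=float)
    filled = data.copy()
    n_months, n_stations = data.shape
    for j in range(n_stations):
        miss = np.isnan(data[:, j])
        if not miss.any():
            continue
        best, best_r = None, 0.0
        for m in range(n_stations):
            if m == j:
                continue
            both = ~np.isnan(data[:, j]) & ~np.isnan(data[:, m])
            if both.sum() < 3:
                continue
            r = abs(np.corrcoef(data[both, j], data[both, m])[0, 1])
            if r > best_r:
                best, best_r = m, r
        if best is None:                     # no usable predictor station
            filled[miss, j] = np.nanmean(data[:, j])
            continue
        both = ~np.isnan(data[:, j]) & ~np.isnan(data[:, best])
        slope, intercept = np.polyfit(data[both, best], data[both, j], 1)
        usable = miss & ~np.isnan(data[:, best])
        filled[usable, j] = intercept + slope * data[usable, best]
    return filled
```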
We did not carry out a formal analysis to compare each station's ranking across the different ions, so we cannot say that, for instance, all of a particular station's ion concentrations are difficult to predict. So we cannot make a strong statement, but we have traced the worst and best stations for all sulfate clusters and have given their ranks for each ion using the different prediction methods. Table 4.1 contains the ranks of the selected stations for the different ions. Since the clustering depends upon the ion under study, when looking at the rank of a station we should note the cluster size in order to judge whether the station is towards the worst or the best position. From this table we see that the worst station as given by one ion is not necessarily the worst for another ion, and similarly for the best station. But some of the stations are put in the same category for either two of the ions or all three ions. By category, we mean a ranking towards either the worst or the best position. For example, the station with identification code 025a is towards the worst position in sulfate and nitrate, while the station with identification code 163a is in the same category (towards the worst) in all three ions. Wu and Zidek (1992) found ion-to-ion differences in station clustering and station ranking. They pointed out that those differences might be informative and so resisted the use of a multivariate analysis at this stage. However, a multivariate analysis might give some indication of the simultaneous predictability of all of a station's ion concentrations.

Table 4.1: The Ranks of the Selected Stations in Different Ions

                               station identification code
ion        method             025a   249a   068a   281a   163a   037a
sulfate    regression         1      36     37     8      1      1
           Stone              2      36     36     7      1      1
           Bayesian 1         1      36     29     7      5      1
           Bayesian 2         1      36     37     7      1      1
           cluster size       36     36     37     8      37     8
hydrogen   regression         27     33     18     20     6      1
           Stone              32     26     19     5      2      4
           Bayesian 1         31     24     19     6      2      4
           Bayesian 2         32     24     19     6      2      4
           cluster size       33     33     20     20     18     20
nitrate    regression         10     7      20     30     11     27
           Stone              2      28     20     31     4      19
           Bayesian 1         8      14     20     30     6      16
           Bayesian 2         7      4      19     31     5      26
           cluster size       41     41     31     31     31     31

For the analysis of each ion within a cluster, the different prediction methods' rankings did not always agree. In Chapter 3 we discussed the agreement of the station rankings from different methods in each cluster for each ion. It was found that the degree of agreement is reasonably high. We are interested in knowing which of the three ranking methods used in our study agrees most strongly with the entropy based analysis of Wu and Zidek (1992). We found in Chapter 3 that the ranks from regression using the Bayesian alternative (1) approach agree with those from the entropy based approach more often than the others. This implies that if, by using the entropy approach, a station found to be producing data of low quality is terminated, then in the future, regression using the Bayesian alternative (1) approach can be used to predict the ion concentrations of the deleted station.

The close agreement of the ranking based on the entropy analysis and the rankings from the methods used in our study reflects the relationship between the degree of a station's predictability and the amount of uncertainty reduced by the inclusion of that station in the network. That is, a station which is easily predicted will generally not noticeably reduce uncertainty when added to the network; such a station is considered as producing data of low quality. On the other hand, a station which is hard to predict will generally greatly reduce uncertainty when added to the network; such a station is considered as producing data of high quality.

This exercise of ranking stations, from the worst station to the best station, does not mean that there is a worst station in an absolute sense. When we look at the trend of the average squared prediction errors (APE), we see that there is not much difference among the APEs for the first few worst stations. But the difference becomes sharper as we move towards the best station.
Figure 4.1, a scatterplot of the average prediction errors (for cluster 1 of sulfate) against the ordered rankings, supports this fact. Another fact revealed by this figure is the close agreement of the average prediction errors from Stone's procedure and from regression using the Bayesian alternative (1) approach; these two methods have low average prediction errors when compared with the other methods. Scatterplots for the other clusters for all three ions are included in the Appendix. This ranking exercise, as pointed out by Wu and Zidek (1992), can only be used to suggest the station which could be closed if there were budgetary constraints.

[Figure 4.1: Average Prediction Errors for Cluster 1 of Sulfate in Ascending Order, plotted against the ordered ranks. Legend: reg = regression, bay1 = Bayesian alternative (1), bay2 = Bayesian alternative (2), stone = Stone's procedure.]

References

[1] Afifi, A.A. and Elashoff, R.M. (1961). Missing Observations in Multivariate Statistics. Journal of the American Statistical Association, 61, 595-604.

[2] Buck, S.F. (1960). A Method of Estimation of Missing Values in Multivariate Data Suitable for Use with an Electronic Computer. Journal of the Royal Statistical Society, B, 22, 302-307.

[3] Caselton, W.F., Kan, L. and Zidek, J.V. (1990). Quality Data Network Designs Based on Entropy. Unpublished manuscript.

[4] Caselton, W.F. and Zidek, J.V. (1984). Optimal Monitoring Network Designs. Statistics and Probability Letters, 2, 223-227.

[5] De Finetti, B. (1964). Foresight: its Logical Laws, its Subjective Sources. In Studies in Subjective Probability (H.E. Kyburg Jr. and H.E. Smokler, Eds.), pp. 93-158. New York: Wiley.

[6] Frane, J.W. (1976). Missing Data and BMDP: Some Pragmatic Approaches. Technical Report No. 45, Department of Biomathematics, UCLA.

[7] Gibbons, J.D. (1976). Nonparametric Methods for Quantitative Analysis. Holt, Rinehart and Winston.

[8] Hartigan, J.A. and Wong, M.A. (1979). A K-Means Clustering Algorithm. Applied Statistics, 28, 101-108.

[9] Hewitt, E. and Savage, L.J. (1955). Symmetric Measures on Cartesian Products. Trans. Amer. Math. Soc., 80, 470-501.

[10] Krzanowski, W.J. and Lai, Y.T. (1988). A Criterion for Determining the Number of Groups in a Data Set Using Sum-of-Squares Clustering. Biometrics, 44, 23-34.

[11] Lindley, D.V. and Smith, A.F.M. (1972). Bayes Estimation for the Linear Model (with discussion). Journal of the Royal Statistical Society, B, 34, 1-41.

[12] National Acid Precipitation Assessment Program (1990). Integrated Assessment Report.

[13] Olsen, A.R. and Slavich, A.L. (1986). Acid Precipitation in North America: 1984 Annual Data Summary from Acid Deposition System Data Base. U.S. Environmental Protection Agency, Research Triangle Park, NC. EPA/600/4-86/033.

[14] Stein, M.L., Shen, X. and Styer, P.E. (1991). Applications of a Simple Regression Model to Acid Rain Data. Technical Report No. 276, Department of Statistics, The University of Chicago, Chicago, IL.

[15] Stone, M. (1973). Cross-Validatory Choice and Assessment of Statistical Prediction. Journal of the Royal Statistical Society, B, 36, 111-132.

[16] Styer, P.E. and Stein, M.L. (1991). Acid Deposition Models for Detecting the Effects of Changes in Emissions. Technical Report No. 331, Department of Statistics, University of Chicago, Chicago, IL.

[17] Wu, S. and Zidek, J.V. (1992). An Entropy Based Analysis of Data from Selected NADP/NTN Network Sites for 1983-86.
Unpublished Manuscript.39APPENDIXTable A1.1(a) Average Squared Prediction Errors and Ranks for the Stationsin Cluster 1 of Hydrogenstationcoderegression Bayesian 1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank015a 1.9848 16 1.0338 15 1.1952 15 1.0514 15 15017a 0.6188 8 0.3383 6 0.3962 7 0.3787 7 5024a 2.23042 17 1.2851 17 1.4310 17 1.2975 17 17030a 0.37024 3 0.2929 4 0.3007 4 0.3143 4 7036a 0.5956 7 0.4432 8 0.4952 9 0.4828 9 8049a 0.3846 4 0.2732 3 0.2858 3 0.2965 3 3051a 1.1721 13 0.6348 13 8179 13 0.7802 13 13052a 0.5551 6 0.4543 9 0.4610 8 0.4527 8 10053a 0.7862 11 0.5383 10 0.5945 11 0.6120 10 11076a 0.9970 12 0.5514 11 0.7170 12 0.6206 11 9077a 1.7739 15 1.1356 16 1.2522 16 1.1314 16 16163a 0.3159 1 0.2261 2 2425 2 0.2370 2 2252a 0.4828 5 0.3616 7 0.3893 6 0.3747 6 6253a 0.3353 2 0.2160 1 0.2363 1 0.1979 1 1258a 0.6336 9 0.3220 5 0.3720 5 0.3331 5 4268a 1.3925 14 0.7729 14 0.9564 14 0.9725 14 14283a 0.7388 10 0.5531 12 0.5669 10 0.6377 12 12339a 2.5667 18 1.9549 18 1.8939 18 1.6876 18 1840Table A1.1(b) Average Squared Prediction Errors and Ranks for the Stationsin Cluster 2 of Hydrogenstationcoderegression Bayesian 1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank010a 1.0896 15 0.6209 13 o.7394 13 0.8512 14 14011a 0.9865 12 0.5561 12 0.6864 12 0.7569 13 10012a 1.7235 17 1.0377 16 1.3725 17 1.2563 17 16016a 1.0326 14 0.7130 15 0.7983 15 0.7188 12 13037a 0.3457 5 0.2518 4 0.2723 4 0.2668 4 4038a 2.3707 19 1.4686 18 1.8701 18 1.4813 18 18059a 0.6713 8 0.3935 7 0.4654 7 0.3520 7 6061a 0.1679 1 0.1211 1 0.1266 1 0.1220 2 1062a 3.4879 20 1.6470 20 2.2785 20 1.9624 20 20068a 2.2189 18 1.6361 19 1.9070 19 1.6308 19 19074a 0.3380 4 0.2929 5 0.2939 5 0.2885 6 8078a 1.0004 13 0.5332 11 0.6751 11 0.6500 11 11172a 0.1815 2 0.1264 2 0.1282 2 0.1001 1 2173a 0.7678 10 0.4809 9 0.5937 10 0.5093 9 9255a 1.3043 16 1.0602 17 1.1346 16 1.0255 16 17271a 0.1905 3 0.1457 3 0.1520 3 0.1532 3 3279a 0.7196 9 0.4438 8 0.5664 9 0.6342 10 7280a 0.8923 11 0.6490 14 0.7683 14 0.9509 15 15281a 0.3760 6 0.3082 6 0.3118 6 0.2765 5 5354a 0.5384 7 0.5012 10 0.4830 8 0.4112 8 1241Table A1.1(c) Average Squared Prediction Errors and Ranks for the Stationsin Cluster 3 of Hydrogenstationcoderegression Bayesian 1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank007a035a070a1.17112.07741.21631321.16961.59341.15392311.18371.57121.17602311.12781.81861.1095231132Table A1.1(d) Average Squared Prediction Errors and Ranks for the Stationsin Cluster 4 of Hydrogenstationcoderegression Bayesian 1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank004a 0.5734 1 0.4583 1 0.4688 1 0.5149 1 1029a 3.5054 10 2.2386 10 2.6179 10 2.3558 9 9034a 0.8194 6 0.7349 8 0.7374 7 0.7552 7 8071a 1.7075 9 1.4480 9 1.5129 9 2.4643 10 10166a 0.7058 4 0.6485 5 0.6552 5 0.5889 2 2254a 0.8472 7 0.7255 7 0.7419 8 0.7550 6 4273a 0.7375 5 0.6436 4 0.6527 4 0.6293 3 6275a 0.7041 3 0.6008 2 0.6105 2 0.6944 5 3282a 0.6755 2 0.6275 3 0.6359 3 0.6629 4 7349a 0.8792 8 0.6991 6 0.7233 6 0.8074 8 542Table A1.1(e) Average Squared Prediction Errors and Ranks for the Stationsin Cluster 5 of Hydrogenstationcoderegression Bayesian 1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank020a 0.4309 15 0.1250 10 0.1601 7 0.1851 12 12021a 1.2228 30 0.6873 33 0.7766 33 0.5594 31 31022a 0.5742 19 0.2189 16 0.3526 14 0.2292 16 22023a 0.3755 13 0.1311 12 0.1838 10 0.1721 9 9025a 0.9739 27 0.5283 31 0.7410 32 0.6173 32 26028a 1.1945 29 0.4046 28 0.6830 27 0.5428 29 32031a 0.7658 24 0.1811 14 0.3620 15 0.1566 7 
20032a 0.5223 18 0.1411 13 0.1724 8 0.1700 8 13033a 0.6182 21 0.1816 15 0.3987 18 0.1735 10 16039a 1.7682 32 0.2804 23 0.4872 20 0.4494 28 30040a 0.3632 11 0.1001 7 0.1820 9 0.2355 17 10041a 1.0391 28 0.2714 22 0.4540 19 0.28484 21 23046a 0.2554 5 0.0919 5 0.1358 5 0.1256 4 2047a 0.3441 10 0.1139 8 0.1947 11 0.17474 11 6055a 0.2656 6 0.0948 6 0.2010 12 0.0984 3 7056a 0.1266 2 0.0892 4 0.1034 3 0.0972 2 3058a 0.4510 16 0.0849 3 0.1181 4 0.1857 13 4063a 0.3636 12 0.1302 11 0.2541 13 0.2176 15 14064a 0.1625 3 0.0806 2 0.0923 2 0.1259 5 843Table A1.1(e) Continuedstationcoderegression Bayesian  1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank065b 0.1201 1 0.0645 1 0.0905 1 0.0961 1 1073a 1.7031 31 0.2712 21 0.7115 29 0.5487 30 27075a 0.3038 8 0.1238 9 0.1473 6 0.1546 6 11161a 0.6085 20 0.5617 32 0.6372 25 0.7598 33 33164a 0.7928 25 0.2387 18 0.3891 16 0.4198 27 24168a 0.3324 9 0.4531 29 0.6983 28 0.3153 22 19171a 0.6740 23 0.5213 30 0.6705 26 0.2420 18 17249a 2.2977 33 0.2988 24 0.5887 24 0.4195 26 22251a 0.4256 14 0.3009 25 0.5016 22 0.3832 25 28257a 0.9505 26 0.2470 20 0.4876 21 0.3260 24 18272a 0.6684 22 0.2424 19 0.5166 23 0.2503 20 21277a 0.2901 7 0.3835 27 0.7221 30 0.1886 14 5285a 0.2542 4 0.2363 17 0.3905 17 0.2461 19 15350a 0.4782 17 0.3247 26 0.7305 31 0.3239 23 25Table A1.1(f) Average Squared Prediction Errors and Ranks for the Stationsin Cluster 6 of Hydrogenstation regression Bayes'anl Stone entropycode APE rank APE rank APE rank rank160a 1.5016 1 1.5016 1 1.4745 1 1278a 1.9126 2 1.9125 2 1.8781 2 244Table A1.2(a): Average Squared prediction Errors and Ranks for the Stationsin Cluster 1 of Sulfatestationcoderegression Bayesian 1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank004a 0.4139 6 0.1514 3 0.2577 7 0.1512 8 5010a 0.5927 12 0.3185 25 0.3822 14 0.2303 15 2001la 0.8626 21 0.2037 16 0.5076 21 0.4108 30 16012a 0.9378 24 0.4955 34 0.6559 27 0.4509 33 33029a 0.6013 13 0.2853 22 0.4086 16 0.2763 20 22030a 0.4442 7 0.3079 24 0.3234 11 0.2680 18 24034a 0.4524 8 0.1286 2 0.2102 3 0.1356 4 7035a 0.7034 16 0. 3696 30 0.4618 18 0.3128 23 25036a 0.5708 10 0.1943 13 0.3089 9 0.1424 6 9038a 0.6547 15 0.1985 15 0.3573 13 0.2309 16 17039a 0.9386 25 0.4242 32 0.6640 28 0.4877 35 36052a 0.7357 17 0.2816 21 0.4071 15 0.3000 22 21068a 3.1738 37 0.3626 29 1.5558 37 0.5189 36 31070a 1.7942 35 0.4356 33 0.9571 35 0.4019 29 35071a 0.8449 20 0.3428 28 0.6376 25 0.3743 28 32076a 0.9870 26 0.1917 12 0.5618 24 0.2728 19 14077a 1.1916 31 0.3854 31 0.6642 29 0.3134 31 27160a 1.2444 32 0.6412 35 0.7963 33 0.7200 37 37163a 0.1773 1 0.1577 5 0.1482 1 0.1185 1 1045Table A1.2(a) Continuedstationcoderegression Bayesian 1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank164a 0.3223 5 0.1818 11 0.2514 6 0.2022 10 13166a 0.6382 14 0.2902 23 0.4827 20 0.2271 14 8172a 0.7576 18 0.2284 17 0.3448 12 0.2814 21 29173a 0.3154 3 0.1681 7 0.2182 4 0.1789 9 2252a 0.3208 4 0. 
1746 9 0.2260 5 0.1285 2 6253a 0.8422 19 0.1529 4 0.4661 19 0.1419 5 11254a 1.3770 34 0.2710 20 0.6783 30 0.3467 26 23255a 1.1435 30 0.6489 37 0.7348 31 0.4369 32 30257a 0.5785 11 0.1604 6 0.3167 10 0.2579 17 4258a 0.1.0605 27 0.1800 8 0.5520 22 0.2261 12 18268a 1.1218 29 0.6447 36 0.9270 34 0.4865 34 34273a 0.2449 2 0.1973 14 0.1844 2 0.1504 7 3275a 0.8560 22 0.2527 18 0.6461 26 0.2268 13 12278 1.3028 33 0.3208 26 0.7546 32 0.3410 24 28279a 2.3086 36 0.2603 19 1.1182 36 0.3448 25 26280a 0.9371 23 0.3369 27 0.5617 23 0.3590 27 19282a 1.1098 28 0.1778 10 0.4526 17 0.2030 11 15349a 0.4750 9 0.1192 1 0.2893 8 0.1321 3 146Table A1.2(b): Average Squared Prediction Errors and Ranks for the Stationsin Cluster 2 of Sulfatestationcoderegression Bayesian 1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank017a 1.3291 34 0.1449 23 0.5271 34 0.2528 28 29020a 0.1888 4 0.0608 2 0.1078 4 0.0843 1 3021a 0.3333 14 0.1646 26 0.1762 12 0.1181 7 21022a 0.5234 22 0.1127 17 0.2933 23 0.1222 9 19023a 0.4187 17 0.1028 12 0.2672 19 0.1168 6 14024a 0.6716 28 0.1069 14 2899 21 0.1224 10 17025a 0.1418 1 0.0576 1 0.0901 1 0.0849 2 4028a 0.3180 12 0.2334 32 0.2201 16 0.2571 29 34031a 1.3176 33 0.1814 29 0.4555 32 0.2687 31 28032a 0.6416 27 0.1403 22 0.2983 25 0.1907 22 23033a 0.4842 19 0.0987 10 0.2341 18 0.1457 13 9040a 0.4650 18 0.0824 8 0.2211 17 0.2070 23 32041a 0.3411 16 0.1044 13 0.1637 8 0.1551 15 12046a 0.1753 2 0.0724 4 0.1288 6 0.1203 8 5047a 0.2546 7 0.1173 18 0.1725 11 0.1658 20 13049a 1.1425 32 0.2818 35 0.3762 28 0.3556 33 33051a 0.6200 26 0.1074 15 0.3843 30 0.2287 26 15053a 1.3871 35 0.2607 34 0.6844 35 0.2801 32 3147Table A1.2(b) Continuedstationcoderegression Bayesian 1 Bayesian  2 Stone entropyAPE rank APE rank APE rank APE rank rank055a 0.1759 3 0.0710 3 0.1091 5 0.1058 3 2056a 0.3197 13 0.0774 6 0.2069 15 0.1463 14 6058a 0.5563 24 0.1076 16 0.3542 27 0.1588 17 11063a 0.2831 10 0.1369 21 0.1696 10 0.2126 24 16064a 0.2331 6 0.0800 7 0.0904 2 0.1114 5 20065b 0.2114 5 0.0877 9 0.1069 3 0.1113 4 1073a 0.4927 20 0.1788 27 0.3248 26 0.3566 34 26075a 0.3340 15 0.0737 5 0.1616 7 0.1614 19 7161a 0.3083 11 0.1519 25 0.1645 9 0.1581 16 27168a 0.9466 31 0.2448 33 0.4810 33 0.3853 35 35171a 0.5840 25 0.1354 20 0.3917 31 0.1766 21 22249a 1.9059 36 0.4026 36 0.8033 36 0.5058 36 36251a 0.9152 30 0.1861 30 0.2904 22 0.2146 25 30272a 0.2783 9 0.1025 11 0.1894 13 0.1289 11 8277a 0.2635 8 0.1335 19 0.1986 14 0.1449 12 10283a 0.5486 23 0.2327 31 0.3832 29 0.1608 18 25285a 0.5149 21 0.1798 28 0.2947 24 0.2379 27 24350a 0.8198 29 0.1493 24 0.2854 20 0.2680 30 1848Table A1.2(c): Average Squared Prediction Errors and Ranks for the Stationsin Cluster 3 of SulfatestationcodeRegression Bayesian 1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank037a 0.1484 1 0.1343 1 0.1334 1 0.1483 1 1059a 0.3091 5 0.2683 5 0.2711 5 0.2836 5 4061a 0.2537 4 0.2354 4 0.2403 4 0.2279 4 5074a 0.4518 6 0.3693 6 0.3812 6 0.4096 6 6078a 0.1796 2 0.1635 2 0.1659 2 0.1610 2 2271a 0.2498 3 0.1979 3 0.1983 3 0.2276 3 3281a 0.5258 8 0.4044 7 0.4604 7 0.5398 8 7354a 0.5053 7 0.4773 8 0.4767 8 0.4472 7 849Table A1.3(a): Average Squared Prediction Errors and Ranks for the Stationsin Cluster 1 of Nitratestationcoderegression Bayesian 1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank004a 3.044 40 0.1882 9 0.1.2349 38 0.182 15 29011a 1.999 33 0.3615 26 0.6625 28 0.217 24 32012a 0.752 14 0.6384 39 0.6911 29 0.326 36 39020a 2.109 34 0.2576 18 0.7614 34 0.157 8 3021a 0.999 20 0.2444 15 0.7079 30 0.261 30 17022a 1.815 
30 0.4309 32 0.5707 24 0.210 20 11023a 0.660 11 0.2570 17 0.3090 12 0.161 9 7024a 0.782 15 0.4638 34 0.4864 17 0.298 33 37025a 0.613 10 0.1856 8 0.2672 7 0.095 2 2031a 1.738 29 0.3347 24 0.4893 18 0.192 17 30032a 0.461 8 0.2451 16 0.2904 9 0.235 27 20033a 0.555 9 0.1725 7 0.3006 10 0. 155 6 12038a 1.471 26 0.6027 37 0.5710 25 0.217 23 21040a 2.632 37 0.1124 3 0.7176 31 0.162 10 33041a 0.701 13 0.3007 21 0.3036 11 0.211 21 18046a 1.522 27 0.1496 4 0.3389 13 0.132 3 5047a 0.850 17 0.2017 11 0.5010 21 0.171 11 6051a 1.823 31 0.5791 36 0.7359 33 0.296 32 2250Table A1.3(a) Continuedstationcoderegression Bayesian 1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank053a 1.173 21 0.4834 35 0.8777 35 0.417 39 40055a 0.393 4 0.1682 5 0.2056 5 0.151 5 4056a 0.952 19 0.1925 10 0.4903 19 0.178 13 10058a 0.401 5 0.2761 20 0.2590 6 0.172 12 8063a 0.246 3 0.2021 12 0.1865 3 0.156 7 14064a 1.462 25 0.1712 6 0.5427 23 0.186 16 13065a 0.227 2 0.1086 2 0.1122 2 0.134 4 1076a 0.860 18 0.3605 25 0.4248 16 0.210 19 15077a 2.716 38 0.7553 40 1.8189 40 0.343 37 38161a 0.435 6 0.2675 19 0.2726 8 0.234 26 25166a 0.844 16 0.3775 29 0.5725 26 0.219 25 23168a 2.435 36 0.4186 31 0.8832 36 0.314 35 31171a 2.760 39 0.3222 22 0.6364 27 0.201 18 27249a 0.436 7 0.2332 14 0.2053 4 0.241 28 35251a 1.265 23 0.3682 27 0.5335 22 0.472 40 36252a 1.705 28 0.6175 38 0.7186 32 0.284 31 28253a 2.374 35 0.3954 30 0.9132 37 0.211 22 24258a 1.263 22 0.2126 13 0.3455 14 0.181 14 1651Table A1.3(a) Continuedstationcoderegression Bayesian 1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank272a 1.437 24 0.3252 23 0.4922 20 0.255 29 19273a 1.920 32 0.8116 41 1.4461 39 0.298 34 34277a 3.651 41 0.3754 28 1.8964 41 0.512 41 41283a 0.106 1 0.0525 1 0.0539 1 0.094 1 9285a 0.684 12 0.4368 33 0.3922 15 0.370 38 26Table A1.3(b) Average Squared Prediction Error and Ranks for the Stationsin Cluster 2 of Nitratestationcoderegression Bayesian 1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank010a 0.777 4 0.2596 8 0.4208 8 0.299 5 8017a 0.341 1 0.2267 2 0.2502 1 0.235 1 3028a 1.511 16 0.8503 26 1.0421 18 0.477 14 10029a 0.791 5 0.2202 1 0.3050 4 0.465 13 2030a 2.351 24 0.3922 13 0.9411 16 1.133 27 21036a 1.201 12 0.3111 9 0.5561 10 0.368 7 6037a 3.059 27 0.4473 16 1.5074 26 0.659 19 20049a 1.400 14 0.9118 27 1.2118 25 0.745 21 27052a 0.569 3 0.2420 5 0.2613 2 0.251 2 5068a 1.910 20 0.5580 20 1.044 19 0.704 20 26070a 2.112 22 0.4465 15 0.9874 17 0.394 9 24071a 0.866 8 0.3748 11 0.4883 9 0.390 8 1452Table A1.3 (b) Continuedstationcoderegression Bayesian 1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank073a 1.626 17 0.5946 21 0.8578 14 0.502 15 15078a 1.421 15 0.4677 17 0.5850 12 0.644 18 13160a 1.670 18 1.0074 29 1.1883 23 1.023 25 28163a 1.164 11 0.2510 6 0.3503 5 0.280 4 11164a 0.393 2 0.2272 3 0.2668 3 0.310 6 7173a 1.104 9 0.2545 7 0.5617 11 0.403 10 1254a 1.728 19 0.4967 18 0.8702 15 0.587 17 12255a 2.831 26 0.3751 12 1.1103 21 0.412 11 16257a 0.850 7 0.3524 10 0.4157 7 0.453 12 17268a 2.190 23 1.9298 31 1.9351 29 1.628 30 29271a 4.333 31 0.3952 14 1.6762 27 0.516 16 18275a 1.338 13 0.6996 24 1.1334 22 1.077 26 22278a 2.420 25 0.9904 28 1.2061 24 1.249 29 31279a 3.426 28 0.7868 25 1.7865 28 1.209 28 19280a 3.694 29 0.5129 19 2.1104 30 0.872 22 23281a 4.231 30 1.0187 30 2.4678 31 1.850 31 30282a 0.811 6 0.2357 4 0.0.4108 6 0.278 3 4349a 1.921 21 0.6867 23 1.0605 20 0.887 23 9354a 1.155 10 0.6291 22 0.7954 13 0.997 24 2553Table A1.3(c) Average Squared Prediction Errors and Ranks for the 
Stationsin Cluster 3 of Nitratestationcoderegression Bayesian 1 Bayesian 2 Stone entropyAPE rank APE rank APE rank APE rank rank059a 0.982 4 0.9444 4 0.9478 4 1.005 4 4061a 0.911 3 0.8603 3 0.8442 3 0.858 3 3074a 0.653 2 0.6563 2 0.6572 2 0.737 2 2172a 0.400 1 0.3996 1 0.4045 1 0.371 1 154Table A2.1(a): Association Measures for Cluster 1 of Hydrogenmethods being compared R or W Q p-valueregression versus Bayesian 1 0.952 - 0.00004regression versus Bayesian 2 0.967 - 0.00003regression versus Stone 0.961 - 0.00004regression versus entropy 0.911 - 0.00008Bayesian 1 versus Bayesian 2 0.989 - 0.00002Bayesian 1 versus Stone 0.996 - 0.00002Bayesian 1 versus entropy 0.981 - 0.00003Bayesian 2 versus Stone 0.994 - 0.00002Bayesian 2 versus entropy 0.967 - 0.00003Stone versus entropy 0.975 - 0.00003all methods 0.975 82.923 0.000Table A2.1(b): Association Measures for Cluster 2 of Hydrogenmethods being compared R or W Q p-valueregression versus Bayesian 1 0.973 - 0.00001regression versus Bayesian 2 0.982 - 0.0.000009regression versus Stone 0.970 - 0.00001regression versus entropy 0.938 - 0.00002Bayesian 1 versus Bayesian 2 0.994 - 0.00000755Table A2.1(b): Continuedmethods being compared R or W Q p-valueBayesian 1 versus Stone 0.980 - 0.000009Bayesian 1 versus entropy 0.980 - 0.000009Bayesian 2 versus Stone 0.986 - 0.000008Bayesian 2 versus entropy 0.967 - 0.00001Stone versus entropy 0.967 - 0.00001all methods 0.979 93.011 0.0000Table A2.1(c): Association Measures for Cluster 3 of Hydrogenmethods being compared R or W S p-valueregression versus Bayesian 1 0.50 - 0.500regression versus Bayesian 2 0.50 - 0.500regression versus Stone 0.50 - 0.500regression versus entropy 1.00 - 0.167Bayesian 1 versus Bayesian 2 1.00 0.167Bayesian 1 versus Stone 1.000 0.167Bayesian 1 versus entropy 0.50 - 0.500Bayesian 2 versus Stone 1.00 - 0.167Bayesian 2 versus entropy 0.50 - 0.500Stone versus entropy 0.50 - 0.500all methods 0.76 38 0.02456Table A2.1(d): Association Measures for Cluster 4 of Hydrogenmethods being compared R or W Q p-valueregression versus Bayesian 1 0.927 - 0.000regression versus Bayesian 2 0.939 - 0.000regression versus Stone 0.879 - 0.001regression versus entropy 0.673 - 0.019Bayesian 1 versus Bayesian 2 0.988 - 0.000Bayesian 1 versus Stone 0.830 - 0.002Bayesian 1 versus entropy 0.745 - 0.009Bayesian 2 versus Stone 0.818 - 0.003Bayesian 2 versus entropy 0.697 - 0.015Stone versus entropy 0.782 - 0.005all methods 0.862 38.804 0.000012Table A2.1(e): Association Measures for Cluster 5 of Hydrogenmethods being compared R or W Q p-valueregression versus Bayesian 1 0.637 - 0.00016regression versus Bayesian 2 0.618 - 0.00024regression versus Stone 0.739 - 0.000015regression versus entropy 0.810 - 0.000002Bayesian 1 versus Bayesian 2 0.945 - 0.000057Table A2.1(e): ContinuedBayesian 1 versus Stone 0.839 - 0.0000011Bayesian 1 versus entropy 0.811 - 0.0000022Bayesian 2 versus Stone 0.813 - 0.0000021Bayesian 2 versus entropy 0.753 - 0.00001Stone versus entropy 0.881 - 0.00000029all methods 0.828 132.42 0.0000Table A2.1(f): Association Measures for Cluster 6 of Hydrogenmethods being compared R or W S p-valueregression versus Bayesian 1 1.0 - 0.5regression versus Bayesian 2 1.0 - 0.5regression versus Stone 1.0 - 0.5regression versus entropy 1.0 - 0.5Bayesian 1 versus Bayesian 2 1.0 - 0.5Bayesian 1 versus Stone 1.0 - 0.5Bayesian 1 versus entropy 1.0 - 0.5Bayesian 2 versus Stone 1.0 - 0.5Bayesian 2 versus entropy 1.0 0.5Stone versus entropy 1.0 0.5all methods 1.0 8 -58Table A2.2(a): Association Measures for Cluster 1 of 
Sulfatemethods being compared R or W Q p-valueregression versus Bayesian 1 0.705 - 0.000015regression versus Bayesian 2 0.919 - 0.000regression versus Stone 0.752 - 0.000004regression versus entropy 0.713 - 0.000012Bayesian 1 versus Bayesian 2 0.726 - 0.0000086Bayesian 1 versus Stone 0.793 - 0.0000014Bayesian 1 versus entropy 0.847 - 0.0000002Bayesian 2 versus Stone 0.755 - 0.0000039Bayesian 2 versus entropy 0.697 0.000018Stone versus entropy 0.780 - 0.0000019all methods 0.815 142.61 0.0000Table A2.2(b): Association Measures for Cluster 2 of Sulfatemethods being compared R or W S p-valueregression versus Bayesian 1 0.973 - 0.000011regression versus Bayesian 2 0.982 - 0.0000093regression versus Stone 0.970 - 0.000012regression versus entropy 0.938 - 0.000022Bayesian 1 versus Bayesian 2 0.994 - 0.000007459Table A2.2(b) : ContinuedBayesian 1 versus Stone 0.980 - 0.0000096Bayesian 1 versus entropy 0.980 - 0.0000096Bayesian 2 versus Stone 0.986 - 0.0000086Bayesian 2 versus entropy 0.967 - 0.000012Stone versus entropy 0.967 - 0.000012all methods 0.979 93.011 0.0000Table A2.2(c): Association Measures for Cluster 3 of Sulfatemethods being compared R or W Q p-valueregression versus Bayesian 1 0.976 - 0.000regression versus Bayesian 2 0.976 - 0.000regression versus Stone 1.000 - 0.000regression versus entropy 0.952 - 0.001Bayesian 1 versus Bayesian 2 1.000 - 0.000Bayesian 1 versus Stone 0.976 - 0.0000Bayesian 1 versus entropy 0.976 - 0.000Bayesian 2 versus Stone 0.976 - 0.000Bayesian 2 versus entropy 0.976 0.000Stone versus entropy 0.952 0.001all methods 0.981 34.3333 0.00060Table A2.3(a): Association Measures for Cluster 1 of Nitratemethods being compared R or W Q p-valueregression versus Bayesian 1 0.394 - 0.0064regression versus Bayesian 2 0.877 0.0000regression versus Stone 0.082 - 0.3regression versus entropy 0.455 - 0.002Bayesian 1 versus Bayesian 2 0.587 - 0.00010Bayesian 1 versus Stone 0.033 - 0.4Bayesian 1 versus entropy 0.638 - 0.000027Bayesian 2 versus Stone 0.108 - 0.2Bayesian 2 versus entropy 0.572 - 0.00015Stone versus entropy 0.241 - 0.06all methods 0.514 102.73 0.0000Table A2.3(b): Association Measures for Cluster 2 of Nitratemethods being compared R or W Q p-valueregression versus Bayesian 1 0.65 - 0.00018regression versus Bayesian 2 0.897 - 0.00000047regression versus Stone 0.0411 - 0.4regression versus entropy 0.664 - 0.00014Bayesian 1 versus Bayesian 2 0.829 - 0.000002961Table A2.3(b): ContinuedBayesian 1 versus Stone 0.002 - 0.49Bayesian 1 versus entropy 0.799 - 0.0000059Bayesian 2 versus Stone 0.083 - 0.32Bayesian 2 versus entropy 0.772 - 0.000012Stone versus entropy 0.102 - 0.28all methods 0.567 85.059 0.0000Table A2.3(c): Association Measures for Cluster 3 of Nitratemethods being compared R or W S p-valueregression versus Bayesian 1 1.0 0.042regression versus Bayesian 2 1.0 - 0.042regression versus Stone 1.0 - 0.042regression versus entropy 1.0 - 0.042Bayesian 1 versus Bayesian 2 1.0 - 0.042Bayesian 1 versus Stone 1.0 - 0.042Bayesian 1 versus entropy 1.0 - 0.042Bayesian 2 versus Stone 1.0 - 0.042Bayesian 2 versus entropy 1.0 - 0.042Stone versus entropy 1.0 - 0.042all methods 1.0 125 0.001862Table A3: Names and Identification Codes for the Sites Included in the Studysite ID site name site ID site name004a Fayetteville, Arkansas 070a K-Bar, Texas007a Hopland (Ukiah), California 071a Victoria, Texas010a Rocky Mt. 
Net park, colorado 073a Horton's Station, Virginia011a Manitou, Colorado 074a Olympic Nat.park, Washington012a Pawnee, Colorado 075a Parsons, West Virginia015a Bradford Forest, Florida 076a Trout Lake, Wisconsin016a Everglades Nat.Pa, Florida 077a Spooner, Wisconsin017a Georgia Station, Georgia 078a Yellowstone, Wyoming020a Bondville, Illinois 160a Alamosa, Colorado021a Argonne, Illinois 161a Salem, Illinois022a Southern Ill U, Illinois 163a Caribou (a), Maine023a Dixon Springs Illinois 164a Bridgton, Maine024a NIARC, Illinois 166a Fernberg, Minnesota025a Idiaana Dunes, Indiana 168a Huntington, New York028a Elkmont, Tennessee 171a WalkerBranch, Tennessee029a Mesa Verde, Colorado 172a American Samoa, American Samoa030a Greenville Station, Maine 173a Sand Spring, Colorado031a Douglas Lake, Michigan 249a Bennington, Vermont032a Kellogg, Micigan 251a NACL, Massachusetts033a Wellston, Michigan 252a Ashland, Missouri034a Marcell, Minnesota 253a University Forest, Missouri035a Lamberton, Minnesota 254a Forest Seed Ctr, Texas63Table A3: Continued036a Meridian, Mississippi 255a Newcastie, Wyoming037a Glacier Nat. Park, Montana 257a Acadia > 11/81, Maine038a Mead, Nebraska 258 Chassell, Michigan039a Hubbard Brook, New Hampshire 268a Warren ZWSW, Arkansas040a Aurora, New York 271a Headquarters, Idaho041a Chautauqua, New York 272a Purdue U Ag Farm, Indiana046a Bennett Bridge, New York 273a Konza Prairie, Kansas047a Jasper, New York 275 Iberia, Louisiana049a Lewiston, North Carolina 277a East, Massachusetts051a Piedmont Station, North Carolina 278a Give Out Morgan, Montana052a Clinton Station, North Carolina 279a Bandelier, New Mexico053a Finley (a), North Calorina 280 Cuba, New Mexico055a Delaware, Ohio 281a Bull Run, Oregon056a Caldwell, Ohio 282a Longview, Texas058a Wooster, Ohio 283a Lake Bubay, Wisconsin059a Alsea, Oregon 285a Washington Xing, New Jersey061a H.J. Andrews, Oregon 339a Bellville, Georgia062a Teddy Roosevelt NP, North Dakota 349a Southeast, Louisiana063a Kane, Pennsylvania 350 Wye, Maryland064a Leading Ridge, Pennsylvania 354a St. 
Mary Ranger St, Montana
065b    Penn State, Pennsylvania
068a    Grand Canyon, Arizona

[Figures A1.1(a)-(f): Boxplots of Observed and Predicted Values for Clusters 1-6 of Hydrogen. Figures A1.2(a)-(c): Boxplots of Observed and Predicted Values for Clusters 1-3 of Sulfate. Figures A1.3(a)-(c): Boxplots of Observed and Predicted Values for Clusters 1-3 of Nitrate. Figures A2.1(a)-(f): Boxplots of Prediction Errors for Clusters 1-6 of Hydrogen. Figures A2.2(a)-(b): Boxplots of Prediction Errors for Clusters 1-2 of Sulfate. Each figure shows one frame per station, labelled by station identification code. Legend for all figures: obs = observed, reg = ordinary regression, bay1 = regression using a Bayesian alternative (1) approach, bay2 = regression using a Bayesian alternative (2) approach, st = Stone's procedure.]
bar! bay2 st rep bays bay2 rep bay bay2CN0rfmethod034am othod035aTIT		 li   	 I ^	T 1	 1method036aN0rep beyl bay2 St rag beyl bay2 mg bay 1 bay2method method methodLegend:reg=ordinary regression, bay1=regression using a Bayesian alternative (1) approach,bay2= regression using a Bayesian alternative (2) approach, st= Stone's procedureI 0 0T1baylrag bay2 bay2rep bayl4:4reg bayl bay2 atFigure A2.2(b): Continued038a	 039a	 052areg bayl bay2 at rag bay I bay2 rep beyl bay2 atmethod	 method	 method068a	 070a	 071aN to0method076a0	 . T	T__	 }3I	I.method077amethod160arT 	 11	T	11	 I EE:Ei1	1	 1	 •reg	 bayl	 bay2	 at	 reg	 bayl	 bay2	 at	 rag	 bayl	 bay2	 atmethod	 method	 methodLegend:reg=ordinary regression, bayl =regression using a Bayesian alternative (1) approach,bay2= regression using a Bayesian alternative (2) approach, st= Stone's procedureT I	1 1	 1 1=PI 10 0T II T	 I TTFigure A2.2(b): Continued163a	 164a	 166arag bayl bay2 at reg bays bay2 bayt bay2 Itmethod172amethod173amethod252a1 1rag bay' bay2 It rag bay' bay2 to reg bays bay2 atmethod	 method	 method253a	 254a	 255a0 0rag bays bsY2 at reg bayl bay2 at rag bayl bay2method	 method	 methodLegend:reg=ordinary regression, bayl =regression using a Bayesian alternative (1) approach,bay2= regression using a Bayesian alternative (2) approach, st= Stone's procedureFigure A2.2(b): Continued257a	 258a	 268a	 273aNON0 Orag bayt bay2 at rag bayl bay2 at reg bayl bay2 at rag bayl bay2 atmethod	 method	 method	 method275a	 278a	 279a	 280a0 E] Ella 41 rag bayl bay2 at reg bayl bay2 t reg bayl bay2 at rag Dart bay2 atmethod	 method	 method	 method282a	 349aO13rNOrag	 bay t	 bay2	 at	 reg	 bayt	 bay2	 atmethod	 methodLegend:reg=ordinary regression, bayl =regression using a Bayesian alternative (1) approach,bay2= regression using a Bayesian alternative (2) approach, st= Stone's procedure0 •Oreg bay', bay2Figure A2.2(c): Boxplots of Prediction Errors for Cluster 3 of Sulfate037a	 059a	 061a	 074aN0 0ilil	I-11 IL 1 l yNO [43EiJ* E1E' reg	 beyl	 bay2	 at	 rag	 bays	 bay2	 at	 tog	 bayt	 bay2	 $1	 reg	 bayl	 bay2	 It9method078amethodmethod271amethodmethod281amethodmethod354aFT,	1E+9 +E13.matodrig bayt bay2 at rag bayt bay2 at tog bay2OatLegend:reg=ordinary regression, bay1=regression using a Bayesian alternative (1) approach,bay2= regression using a Bayesian alternative (2) approach, st= Stone's procedureFigure A2.3(a): Boxplots of Prediction Errors for Cluster 1 of Nitrate004a	 011a	 012a	 020areg	 tart	 bay2	 et	 reg	 beyl	 bay2	 et	 Sag	 bayt	 bay2	 at	 rag	 bayt	 bay2	 atOmethod021aI I ÷ I I 14aI	 Imethod022amethod023aEf3method024at	 1El]	 [iA q=nig bayt bay2 St rag bayt bay2 at rag bayt bay2 at rag bayl bay2 almethod	 method	 method	 method025a	 031a	 032a	 033aNNO 4E1	1E1E3 nag	 bayt	 bay2	 St	 reg	 bayt	 bay2	 at	 reg	 bayt	 bay2	 It	 reg	 bayl	 bay2	 Itmethod	 method	 method	 methodLegend:reg=ordinary regression, bayl =regression using a Bayesian alternative (1) approach,bay2= regression using a Bayesian alternative (2) approach, st= Stone's procedureFigure A2.3(a): Continued038a	 040a	 041a	 046aO•0 EepO El=a 14=g rag	 Day 1	 bay2	 St	 rep	 bayt	 bay2	 at	 reg	 bay/	 bay2	 at	 reg	 bayl	 bay2	 atmethod	 method	 method	 method047a	 051a	 053a	 055aOOrag bayl bay2 st rag bayt bay2 st reg twirl bay2 at reg bayl bay2 atmethod	 method	 method	 method056a	 058a	 063a	 064aOreg bayi bay2 st rag bayt bay2 at rag bayl bay2 st rag bayt bay2 Stmethod	 method	 method	 methodLegend:reg=ordinary regression, bay1=regression using a Bayesian alternative (1) 
approach,bay2= regression using a Bayesian alternative (2) approach, st= Stone's procedureo . E 1 EN 00atrag bay I bay2Ejl+ E=D+atbey2bey 1tO9methodEi;3 14a EO E 4 4 sreg bayl	 bay2methodat rag bay	 biy2methodmethodN0•NOFigure A2.3(a): Continued065b	 076a	 077a	 161aI Ti	I If [ = Eif O E = +NO rag	 bayl	 bay2	 reg	 bay 1	 bay2	 It	 reg	 bayl	 bay2	 at	 reg	 bays	 bay2	 atmethod	 method	 method	 method166a	 168a	 171a	 249areg bayl bey2 at rag bey 1 bay2 at nag bayl bey2 at rag bayl bay2 ormethod	 method	 method	 method251a	 252a	 253a	 258aLegend:reg=ordinary regression, bayl =regression using a Bayesian alternative (1) approach,bay2= regression using a Bayesian alternative (2) approach, st= Stone's procedure+G;PEI=J 4Figure A2.3(a): Continued272a	 273a	 277a	 283a.,i 1 IEElat16+31	 : Fi] E4=J Ejq÷ ElE t* *{Igrag	 bayt	 bay2	 or	 reg	 bayl	 bay2	 st	 reg	 bey1	 bay2	 at	 rag	 bayl	 bay2	 atmethod	 method	 method	 method285a[4=]Eig*[+]rag	 bays	 bay2	 atmethodLegend:reg=ordinary regression, bayl =regression using a Bayesian alternative (1) approach,bay2= regression using a Bayesian alternative (2) approach, st= Stone's procedureFigure A2.3(b): Boxplots of Prediction Errors for Cluster 2 of Nitrate010a	 017a	 028a 029a O0O r+a 	rag beyl bay2 It rag beyl bay2 st bayl bay2 st reg beyl bay2 stmethod	 method	 method	 method030a	 036a	 037a	 049aEi="1=3 11E3 ÷ EiEONO0Orig beyt bay2 st rag bays bay2 st reg bayl bay2 rag bayl bay2 stmethod052amethod068amettod070amethod071aOONOrag bays bay2 It rag bayl bay2 st rag bayl bay2 st bays bay2 Itmethod method method methodLegend:reg=ordinary regression, bayl =regression using a Bayesian alternative (1) approach,bay2= regression using a Bayesian alternative (2) approach, st= Stone's procedure$1 at atst regrag ragreg beyl bayl1340 bayl bay2bay2 bay2 bay2at at atregrag reg ragbaylbayl bayl baytbay2 bay2bay2 bay2atst at regreg rag ragbeylbayl baytbay 1 bay2bay2bay2 bay2method matted methodmethodFigure A2.3(b): Continued073a	 078a	 160a 163amethod255amethod254a;	 •I i 	1 11, 1 1I 1	 1 i El=;EEitE49 142÷E=Ine==G#I ET3 14S3Omethod257amertmd268at*3 14ataa÷method271amethod275aOCSLegend:reg=ordinary regression, bayl =regression using a Bayesian alternative (1) approach,bay2= regression using a Bayesian alternative (2) approach, st= Stone's procedure[4]E144] ONO1Tmethod164amethod173aNOFigure A2.3(b): Continued278a	 279a	 280a	 281a00OOtog bays bay2 it rag bayt bay2 at rig bay t bay2 at rag bayt bay2Mratmethod	 method	 method	 method282a	 349a	 354aNMrco •nag bayt bay2 st rag bayt bay2 it rag bay 1 bay2method	 method	 methodLegend:reg=ordinary regression, bayl =regression using a Bayesian alternative (1) approach,bay2= regression using a Bayesian alternative (2) approach, st= Stone's procedureNOaaO2T1Tst stragbay2bayl bayl regbay2 bay2bayt togmethodmethod methodFigure A2.3(c): Boxplots of Prediction Errors for Cluster 3 of Nitrate059a	 061a	 074a	 172atog bay1	 bay2m oOtodLegend:reg=ordinary regression, bayl =regression using a Bayesian alternative (1) approach,bay2= regression using a Bayesian alternative (2) approach, st= Stone's procedureSI5	 10	 15	 20	 25	 30  1rgent  2bay1ent---  3bay2ent-- 4stentL00co.0a)a)a_  1rgent  2bay1 ent---  3bay2ent-- 4stentN.0T".co0O5	 10	 15	 20	 25	 30Figure A3.1 (a): Relative Measure of	 Figure A3.1 (b): P-value for Tests ofAgreement for Hydrogen	 Agreement for Hydrogencluster size cluster sizeLegend:rgent = regression with entropy, bayl ent= Bayesian alternative (1) with entropybayl ent= Bayesian alternative (2) with entropy, stent 
= Stone's procedure with entropyFigure A3.2(a): Relative Measure of	 Figure A3.2(b): P-value for Tests ofAgreement for Sulfate Agreement for Sulfate000)0  1rgent  2bay1ent---  3bay2ent-- 4stentu")r-- _00N _ci10	 15	 20	 25	 30	 35cluster size10	 15	 20	 25	 30	 35cluster sizeLegend:rgent = regression with entropy, bayl ent= Bayesian alternative (1) with entropybay1ent= Bayesian alternative (2) with entropy, stent = Stone's procedure with entropy0.T....CO010 20 30 40 10 20 30 400cv _//4d/////////1rgent--------	 2bay1ent--- 3bay2ent-- 4stent0.od0ciFigure A3.3(a): Relative Measure of	 Figure A3.3(b): P-value for Tests ofAgreement for Nitrate Agreement for Nitratecluster size	 cluster sizeLegend:rgent = regression with entropy, bayl ent= Bayesian alternative (1) with entropybayl ent= Bayesian alternative (2) with entropy, stent = Stone's procedure with entropy00.a)rn>0 1 reg  2bay13bay24stone13/11 0/1 3/.	 / /1 -1-1 4-4z4,2,1 -	/2-2,1food11 reg	  2bay13bay24stoneAverage Prediction Errors for Hydrogen in Ascending OrderFigure A4.1 (a): Cluster 1	 Figure A4.1 (b): Cluster 2co(.1o5	 10	 15	 5	 10	 15	 20ordered ranks	 ordered ranksLegend:reg=regression,bayl = Bayesian (alternative)), bay2= Bayesian (alternative2)Stone = Stone's procedureCOT--(0.,0Cn1NTAverage Prediction Errors for Hydrogen in Ascending OrderFigure A4.1(c): Cluster 3	 Figure A4.1 (d): Cluster 41.0	 1.5	 2.0	 2.5	 3.0	 2	 4	 6	 8	 10ordered ranks	 ordered ranksLegend:reg=regression, bay1= Bayesian alternative (1), bay2= Bayesian alternative (2)Stone = Stone's procedure1 11.0	 1.2	 1.4	 1.6	 1.8	 2.0Tin.N.T.COT0.,-I i11 11 11 .3 333333332)111133333	,44 40211 11 111313iiiMiliffliftgggag4W22  1 reg  2bay1--- 3bay2-- 4stone1/11/1a0cv0	 5	 10	 15	 20	 25	 30Average Prediction Errors for Hydrogen in Ascending OrderFigure A4.1(e): Cluster 5	 Figure A4.1(f): Cluster 6ordered ranks	 ordered ranksLegend:reg=regression, bay1= Bayesian alternative (1), bay2= Bayesian alternative (2)Stone = Stone's procedure1 reg  2bay13bay24stone111 1 1/'i4H°°°°WP122?3333333333333 444?3.333 /4111 11 reg  2bay13bay24stone131 1 1	11 11 11111 1111	 333333333 ,3333333333333:31 31124333111 1;111 11444AAAA4A4d444AA44 4f14 ' 4111110000o - 	Oa)rnE. 
B.tr?0r)0.10Average Prediction Errors for Sulfate in Ascending OrderFigure A4.2(a): Cluster 1	 Figure A4.2(b): Cluster 20	 10	 20	 30	 0	 10	 20	 30ordered ranks	 ordered ranksLegend:reg=regression, bay1= Bayesian alternative (1), bay2= Bayesian alternative (2)Stone = Stone's procedure2 4 6 8(N0Average Prediction Errors for Sulfate in Ascending OrderFigure A4.2(c): Cluster 3ordered ranksLegend:reg=regression, bay1= Bayesian alternative (1), bay2= Bayesian alternative (2)Stone = Stone's procedure1 reg  2bay13bay24stone 1,1 1111111 1	 ,3/11	 31 1	 .33	 4/3	 /3 3 3 3:- 3334 43 34 4 24 ‘2 42 2A222222221-0	 5	 10	 15	 20	 25	 300	 10	 20	30	 40- 1 reg	  2bay13bay24stone1 1/1111	33/1111	 i;1 11 3/31 1 	I1111	3333	 2233333333333333333 222223,0222V2222222222222	 4444"4"14"44444444444444440 -T".Average Prediction Errors for Nitrate in Ascending OrderFigure A4.3(a): Cluster 1	 Figure A4.3(b): Cluster 2ordered ranks	 ordered ranksLegend:reg=regression, bay1= Bayesian alternative (1), bay2= Bayesian alternative (2)Stone = Stone's procedure0.Tcr?01.0	 1.5	 2.0	 2.5	 3.0	 3.5	 4.0Average Prediction Errors for Nitrate in Ascending OrderFigure A4.3(c): Cluster 3ordered ranksLegend:reg=regression, bay1= Bayesian alternative (1), bay2= Bayesian alternative (2)Stone = Stone's procedureBIOGRAPHICAL INFORMATION NAME:	 Iko tau tac.,,Ot\A )MAILING ADDRESS:	 1--4P 'cm	 '	 t`"1 STLC-5thi-k.1/4‘) et-a-5	 0	 s ft-L.h	 itt4P -	f=t7X	5o4	 G ft-L. ft-A-I,Arti-V1.ft+al ptPLACE AND DATE OF BIRTH: ri• mot-iv—CV z ke E4t-r-V2A-1,0EDUCATION (Colleges and Universities attended, dates, and degrees):V NI	 12-511Y	 er -F	 A-vs-6-s S	 Prisik4	 Cyl,	 c=z-h c-ttac^tv c-QS tI`f	 voe% its vt co Lult, •st	 teki 2_	 MSc_POSITIONS HELD:SPUBLICATIONS (if necessary, use a second sheet):AWARDS:Complete one biographical form for each copy of a thesis presentedto the Special Collections Division, University Library.DE-5
