SMOOTHING LOCALLY REGULAR PROCESSES BY BAYESIAN NONPARAMETRIC METHODS, WITH APPLICATIONS TO ACID RAIN DATA ANALYSIS

by

HON WAI MA

B.Sc., Concordia University, 1984

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, Department of Statistics

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

September 1986

© HON WAI MA, 1986

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Statistics, The University of British Columbia, 1956 Main Mall, Vancouver, Canada V6T 1Y3

ABSTRACT

We consider the problem of recovering an unknown smooth function from the data, using the Bayesian nonparametric approach proposed by Weerahandi and Zidek (1985). Selected nonparametric smoothing methods are reviewed and compared with this new method. At each value of the independent variable, the smooth function is assumed to be expandable in a Taylor series to the pth order. Two methods, cross-validation and "backfitting", are used to estimate the a priori unspecified hyperparameters. Moreover, a data-based procedure is introduced to select the appropriate order p. Finally, an analysis of an acid-rain, wet-deposition time series is included to indicate the efficacy of the proposed methods.

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS
1. INTRODUCTION
2. SURVEY OF SELECTED NONPARAMETRIC SMOOTHING METHODS
   2.1 THE KERNEL METHOD
   2.2 THE SPLINE SMOOTHING METHOD
   2.3 THE EXPONENTIAL SMOOTHING METHOD
   2.4 SOME ROBUST SMOOTHING METHODS
   2.5 THE MULTIVARIATE EXTENSION
   2.6 DISCUSSION
3. THEORY OF BAYESIAN NONPARAMETRIC APPROACH FOR SMOOTHING LOCALLY REGULAR PROCESSES
   3.1 THE GENERAL THEORY
   3.2 ESTIMATING THE "SMOOTHING PARAMETER" C FOR LOCALLY REGULAR STRUCTURAL MODELS
       (I) CROSS-VALIDATION METHOD
       (II) "BACKFITTING" METHOD
       (III) CONCLUSIONS
   3.3 CHOOSING THE "OPTIMAL" ORDER P
4. APPLICATIONS
   4.1 INTRODUCTION
   4.2 COMPARISON OF THE CROSS-VALIDATION AND THE BACKFITTING METHODS
   4.3 RESULTS OF CHOOSING THE "OPTIMAL" ORDER P
5. CONCLUSIONS
BIBLIOGRAPHY
APPENDIX

LIST OF TABLES

TABLE I: Estimated values of XV(c) for a locally constant fit to the pH77 and M.A. pH data
TABLE II: Estimated values of c_p for the pH77 data
TABLE III: Estimated values of c_p for the M.A. pH data
TABLE IV: Estimated values of PSE(p) for the pH77 data (predict one observation ahead)
TABLE V: Estimated values of PSE(p) for the pH77 data (predict the average of 10 observations ahead)
TABLE VI: Estimated values of PSE(p) for the Lobster data: predict one observation ahead (top) and predict the average of 10 observations ahead (bottom)
TABLE VII: Estimated values of PSE(p) for the sinusoid (y = 3cos x - 5sin x + 6) data: predict one observation ahead (top) and predict the average of 10 observations ahead (bottom)

LIST OF FIGURES

FIG. 1 (A-C): A comparison of the locally polynomial fits of different order p to the pH77 data (using the cross-validation method)
FIG. 2 (A-C): A comparison of the locally polynomial fits of different order p to the pH77 data (using the backfitting method)
FIG. 3: A comparison of the locally constant fits (top) and quadratic fits (bottom) using the cross-validation method (solid line) and the backfitting method (dotted line) for the pH77 data
FIG. 4 (A-C): A comparison of the locally polynomial fits of different order p to the M.A. pH data (using the cross-validation method)
FIG. 5 (A-C): A comparison of the locally polynomial fits of different order p to the M.A. pH data (using the backfitting method)
FIG. 6: A comparison of the locally constant fits (top) and quartic fits (bottom) using the cross-validation method (solid line) and the backfitting method (dotted line) for the M.A. pH data
FIG. 7: A comparison of the credibility intervals for locally quadratic (solid line) and cubic (broken line) fits for the pH77 (top) and M.A. pH (bottom) data (using the cross-validation method)
FIG. 8A: A comparison of the locally constant (broken line), linear (solid line), and quadratic (dotted line) fits for the Lobster data (using the backfitting method)
FIG. 8B: Locally linear fit with the credibility intervals for the Lobster data (using the backfitting method)
FIG. 9A: A comparison of the locally linear (broken line), 5th order polynomial (solid line) and 10th order polynomial (dotted line) fits for the sinusoid (y = 3cos x - 5sin x + 6) data (using the cross-validation method)
FIG. 9B: A comparison of the locally 5th order polynomial (solid line), 7th order polynomial (broken line) and 9th order polynomial (dotted line) fits for the sinusoid (y = 3cos x - 5sin x + 6) data (using the backfitting method)
FIG. 10: Locally constant fit with the credibility intervals for the difference of the M.A. pH data (using the backfitting method)

ACKNOWLEDGEMENTS

I would like to thank Dr. Harry Joe for his guidance and assistance in the production of this thesis. I am indebted to Dr. James Zidek for his helpful suggestions, encouragement, and for his careful reading of this thesis. I also appreciate the assistance of Brian Leroux in translating the article of Collomb (1981). The financial support of the Department of Statistics of the University of British Columbia is gratefully acknowledged.

1. INTRODUCTION.

This thesis is concerned with a Bayesian nonparametric method for estimating an unknown smooth function given some observed data with noise. Consider the model

$$Y(x_i) = S(x_i) + \epsilon_i, \qquad i = 1, \dots, n,$$

where $S(\cdot)$ is an unknown smooth function and the random variables $\epsilon_i$ are uncorrelated with zero mean and constant variance $\sigma^2$. In recent years, many smoothing methods have been developed to estimate S nonparametrically from the observations on $Y(x_i)$, $i = 1, \dots, n$.
A survey of selected nonparametric smoothing methods is presented in section 2. Included are the kernel method, the spline smoothing method, the exponential smoothing method and some robust smoothing methods.

Weerahandi and Zidek (1985) propose a Bayesian nonparametric method to tackle such problems. The method is flexible and easy to use. Its derivation is technically elementary and can be extended to the analysis of space-time series and the regression problem. S is assumed to be locally regular, that is, expandable in a Taylor series about $x = x_{n+1}$ to the pth term. The degree of local regularity is reflected in the size of p. The objects of inference are an estimate of S(x) and a corresponding credibility interval, where x may lie outside the range of the data. The general theory is described in section 3, and a brief comparison of their approach to those mentioned in section 2 is also included.

This thesis is a supplement to the work of Weerahandi and Zidek (ibid). Besides using the cross-validation method to estimate the a priori unspecified hyperparameters, a new "backfitting" method is proposed as an alternative. The asymptotic behaviour of the cross-validated sum of squared errors is also examined. A data-based procedure is developed to choose the appropriate order p. Section 3 gives detailed discussions of these proposed methods. Finally, in section 4, all the results are tested on the data of an acid-rain, wet-deposition time series. The calculations were performed using Fortran subroutines, which are included in the Appendix.

2. SURVEY OF SELECTED NONPARAMETRIC SMOOTHING METHODS.

Summary: Suppose we are given observations $\{(x_i, Y(x_i)),\ i = 1, \dots, n\}$ satisfying the model

$$Y(x_i) = s(x_i) + \epsilon_i, \qquad i = 1, \dots, n, \qquad (2.1.1)$$

where the $\{\epsilon_i\}$ are independent and identically distributed (i.i.d.) random variables with zero mean and constant variance $\sigma^2$. The abscissae $\{x_i\}$ are assumed to be known without error, and to satisfy $x_1 \le x_2 \le \dots \le x_n$. Here $s(\cdot)$ is a fixed but unknown smooth function.

We consider the problem of recovering $s(\cdot)$ when only discrete, noisy measurements of it are available. In recent years, many nonparametric smoothing methods have been developed to tackle this problem. The extensive literature in this area makes a detailed review of each paper infeasible. We aim instead for a brief summary of the principal nonparametric smoothing methods, including the kernel method, the spline smoothing method and the exponential smoothing method, the latter being used quite often in the analysis of time-series data. Finally, some robust smoothing methods are also considered. This survey will help to put the developments of section 3 into perspective.

2.1 The Kernel (Moving Window) Method

Watson (1964) and Nadaraya (1964) independently and simultaneously proposed the kernel estimator of $s(\cdot)$:

$$\hat{s}(x) = \frac{\sum_{j=1}^{n} Y(x_j)\, K((x - x_j)/h)}{\sum_{j=1}^{n} K((x - x_j)/h)},$$

where the kernel K(x) is a certain density function satisfying the conditions

$$\sup_{-\infty < x < \infty} |K(x)| < \infty, \qquad \int_{-\infty}^{\infty} |K(x)|\, dx < \infty, \qquad \lim_{x \to \pm\infty} |x K(x)| = 0,$$

and h = h(n) (the bandwidth of the kernel) is a sequence of positive numbers converging to zero, which specifies the degree of smoothness of the estimator. This is analogous to the kernel density estimator introduced by Rosenblatt (1956), which can be written as

$$\hat{f}(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right),$$

where K is the kernel and $x_1, \dots, x_n$ are assumed to be i.i.d. with the unknown density $f(\cdot)$.
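The ratio form of the Watson-Nadaraya estimator is easy to see in code. The following is a minimal sketch, not the thesis's Fortran implementation; the Gaussian kernel, the bandwidth value and the test function are illustrative assumptions.

```python
# Sketch of the Watson-Nadaraya kernel estimator with a Gaussian kernel.
# The bandwidth h is assumed given here; its data-based choice is discussed below.
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """s_hat(x) = sum_j y_j K((x - x_j)/h) / sum_j K((x - x_j)/h)."""
    u = (x_grid[:, None] - x[None, :]) / h   # pairwise scaled distances
    K = np.exp(-0.5 * u**2)                  # Gaussian kernel; the normalizing
                                             # constant cancels in the ratio
    return (K * y).sum(axis=1) / K.sum(axis=1)

# Example: noisy observations of a smooth function (the sinusoid of section 4)
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 80))
y = 3 * np.cos(x) - 5 * np.sin(x) + 6 + rng.normal(0.0, 0.5, x.size)
s_hat = nadaraya_watson(np.linspace(0.0, 10.0, 200), x, y, h=0.4)
```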
Referring back to the Watson-Nadaraya estimator, Watson (ibid), using some Monte Carlo experiments, concluded that it is h, and not K, which is important in the estimate. Schuster (1972) obtained asymptotic normality as a correction of a result of the same type given by Nadaraya.

A related estimator was introduced by Priestley and Chao (1972):

$$\hat{s}(x) = \frac{1}{h} \sum_{i=1}^{n} (x_i - x_{i-1})\, Y(x_i)\, K\!\left(\frac{x - x_i}{h}\right),$$

where K is the kernel, satisfying the conditions

$$K(u) > 0 \ \text{for all } u, \qquad \int_{-\infty}^{\infty} K^2(u)\, du < \infty, \qquad K(u) = K(-u).$$

Stone (1977) considered a general form of estimator,

$$\hat{s}(x) = \sum_{i=1}^{n} w_{ni}(x)\, Y(x_i),$$

where $w_{ni}(x) = w_{ni}(x; x_1, \dots, x_n)$, $1 \le i \le n$, depends more heavily on those $x_i$ close to x. This formulation includes the Watson-Nadaraya and the Priestley-Chao estimators as special cases. Stone (ibid) also derived sufficient conditions on $\{w_{ni}\}$ for the consistency of $\hat{s}(x)$.

Clark (1977) proposed a convolution-based estimator,

$$\hat{s}(x) = \int_{-\infty}^{\infty} Y(u)\, \frac{1}{h} K\!\left(\frac{x - u}{h}\right) du,$$

where $Y(\cdot)$ is a continuous version of the data obtained by interpolation, with $Y(u) = Y_1$ for $u < x_1$ and $Y(u) = Y_n$ for $u > x_n$. He claimed that such estimators have some advantages over the Watson-Nadaraya and the Priestley-Chao estimators in small samples. Firstly, $\hat{s}(x)$ is smoother. Secondly, the Watson-Nadaraya and the Priestley-Chao estimators are not necessarily invariant under linear transformations, and they do not have the shape-preserving properties of $\hat{s}(x)$.

It is well known [see Table 1 of Rosenblatt (1971)] that the choice of the kernel function K is of relative unimportance. Usually, a kernel is chosen for computational convenience. While the choice of kernel does not seem to be an issue, at least for reasonably moderate sample sizes, the choice of bandwidth is quite a different matter. In practice, the estimate is often formed by computing it for several h's and selecting the one with the most pleasing appearance. If h is too small, the estimate will appear too wiggly; if too large, it will begin to be too flat.

Clark (1977) estimated h from the data by a minor modification of the cross-validation technique described by Wahba and Wold (1975). A simulation experiment demonstrated that this approach was satisfactory in determining the appropriate degree of smoothness. Similarly, Hardle and Marron (1985) estimated h of the Watson-Nadaraya estimator by minimizing

$$CV(h) = n^{-1} \sum_{i=1}^{n} \left( Y(x_i) - \hat{s}_i(x_i) \right)^2,$$

where $\hat{s}_i$ is the cross-validated (leave-one-out) estimator given by

$$\hat{s}_i(x) = \frac{\sum_{j \ne i} Y(x_j)\, K((x - x_j)/h)}{\sum_{j \ne i} K((x - x_j)/h)}.$$

They also showed that such an $\hat{h}$ is asymptotically optimal with respect to the distances

ASE (Averaged Squared Error) $= n^{-1} \sum_{i=1}^{n} (\hat{s}(x_i) - s(x_i))^2$,
ISE (Integrated Squared Error) $= \int (\hat{s}(x) - s(x))^2 f(x)\, dx$, where f(x) is the marginal density of x, and
CMISE (Conditional Mean Integrated Squared Error) $= E[\,\mathrm{ISE} \mid x_1, \dots, x_n\,]$.

We may be interested in constructing confidence intervals for s(x) or confidence bands for $s(\cdot)$. However, besides some theoretical results mentioned in Bickel and Rosenblatt (1973) and Collomb (1981), no practical method has been reported.
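The Hardle-Marron criterion is straightforward to implement. The sketch below is an illustration under the same Gaussian-kernel assumption as above; the candidate grid of bandwidths is an assumption, not part of their method.

```python
# Sketch of leave-one-out bandwidth selection: minimize
# CV(h) = n^{-1} sum_i (Y(x_i) - s_hat_i(x_i))^2 over a grid of h values.
import numpy as np

def cv_score(h, x, y):
    u = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * u**2)
    np.fill_diagonal(K, 0.0)        # drop the i-th point: leave-one-out
    fitted = (K * y).sum(axis=1) / K.sum(axis=1)
    return np.mean((y - fitted) ** 2)

def select_bandwidth(x, y, grid):
    return min(grid, key=lambda h: cv_score(h, x, y))

# e.g. h_hat = select_bandwidth(x, y, np.linspace(0.1, 2.0, 40))
```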
2.2 The Spline Smoothing Method

For the model in equation (2.1.1), the spline smoothing estimate $\hat{s}_\alpha$ of s is defined to be the curve that minimizes

$$n^{-1} \sum_{i=1}^{n} (Y(x_i) - s(x_i))^2 + \alpha \int s''(x)^2\, dx \qquad (2.2.1)$$

over the class of all functions s for which s and s' are absolutely continuous and s'' is square-integrable. The idea of (2.2.1) is to trade off fidelity to the data (small sum of squared residuals) against smoothness (low value of the integrated squared second derivative).

The smoothing parameter $\alpha$ determines the rate of exchange between these quantities and hence determines the amount by which the data are smoothed to produce the estimate $\hat{s}_\alpha$. As $\alpha \to \infty$, $\hat{s}_\alpha$ becomes linear, and the limiting function $\hat{s}_\infty$ is the least squares straight line through the data. As $\alpha \to 0$, $\hat{s}_\alpha$ tracks the data ever more closely until, in the limit, $\hat{s}_0$ passes through the data; $\hat{s}_0$ is the natural cubic spline interpolator of the data, which is discussed below.

The idea underlying this estimate can be traced back to Whittaker (1923), who proposed a discrete version using finite differences in place of derivatives in (2.2.1). Schoenberg (1964) proposed the replacement of the finite differences in Whittaker's objective function by derivatives. Reinsch (1967) showed that $\hat{s}_\alpha$ is a cubic spline with the following properties: (i) it is a cubic polynomial in each interval $(x_i, x_{i+1})$; (ii) at each $x_i$, the curve and its first two derivatives are continuous, but there may be a discontinuity in the third derivative; (iii) in each of the ranges $(-\infty, x_1)$ and $(x_n, \infty)$ the second derivative is zero, so that $\hat{s}_\alpha$ is linear outside the range of the data.

The method of cross-validation (see, for example, Stone (1974)) has been suggested for choosing the smoothing parameter $\alpha$ from the data. Let $\hat{s}_\alpha^{-i}(x_i)$ be the solution to problem (2.2.1) with the ith data point, $(x_i, Y(x_i))$, omitted. Of course, $\hat{s}_\alpha^{-i}(x_i)$ may be regarded as an estimator of $Y(x_i)$, and $\alpha$ as appropriately chosen if $\hat{s}_\alpha^{-i}(x_i)$ is a good estimator of $Y(x_i)$. To measure the goodness of fit, we choose the average squared error,

$$XVSC(\alpha) = n^{-1} \sum_{i=1}^{n} \left( Y(x_i) - \hat{s}_\alpha^{-i}(x_i) \right)^2, \qquad (2.2.2)$$

which is called the cross-validation score. Let $A(\alpha)$ be the matrix such that

$$\hat{s}_\alpha(x_i) = \sum_j A_{ij}(\alpha)\, Y(x_j).$$

Craven and Wahba (1979) showed that (2.2.2) has the easier computational form

$$XVSC(\alpha) = n^{-1} \sum_{i=1}^{n} \left( \frac{Y(x_i) - \hat{s}_\alpha(x_i)}{1 - A_{ii}(\alpha)} \right)^2. \qquad (2.2.3)$$

They also suggested the use of a related criterion, called generalized cross-validation, obtained from (2.2.3) by replacing $A_{ii}(\alpha)$ by its average value, $n^{-1} \mathrm{tr}\, A(\alpha)$. This gives the score

$$GXVSC(\alpha) = n^{-1} RSS(\alpha) / (1 - n^{-1} \mathrm{tr}\, A(\alpha))^2, \qquad (2.2.4)$$

where $RSS(\alpha)$ is the residual sum of squares $\sum_{i=1}^{n} (Y(x_i) - \hat{s}_\alpha(x_i))^2$. Craven and Wahba gave fuller details and justification of the preceding derivation. Their arguments suggest that generalized cross-validation should yield approximately the optimal value of $\alpha$, in the sense of minimizing the average squared error $n^{-1} \sum_{i=1}^{n} (\hat{s}_\alpha(x_i) - s(x_i))^2$. (A Fortran implementation can be found in the IMSL Library (1980).)

To use the formula (2.2.4) one still needs to find $\mathrm{tr}\, A(\alpha)$. Silverman (1984) proposed a way to derive an approximation. Let $\hat{f}(x)$ be an estimate of the local density of the $x_i$, calculated using the fast algorithm of Silverman (1982) on the range [a, b] just containing the design points; with a constant $c_0$ depending on $\hat{f}$, it can then be shown that

$$\mathrm{tr}\, A(\alpha) \approx 2 + \sum_{i=3}^{n} \left( 1 + c_0\, \alpha (i - 1.5)^4 \right)^{-1}. \qquad (2.2.5)$$

Substituting (2.2.5) into (2.2.4) gives a score function called the asymptotic generalized cross-validation (AGXV) score, which can then be minimized to give an automatic choice of smoothing parameter. A simulation study carried out in Silverman (1984) shows the practical performance of AGXV to be even better than that of generalized cross-validation, in that, for non-uniformly spaced data, the proportion of cases giving a bad value of the smoothing parameter is substantially reduced.
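The GCV score (2.2.4) applies to any linear smoother with a hat matrix. The sketch below illustrates it; since building the exact spline hat matrix is beyond a short example, the smoother shown is a discrete second-difference penalty in the spirit of Whittaker (1923), a stand-in assumption rather than the cubic smoothing spline itself.

```python
# Sketch of the generalized cross-validation score (2.2.4) for a linear
# smoother y_hat = A(alpha) y. A(alpha) here comes from a Whittaker-type
# discrete analogue of (2.2.1): minimize ||y - s||^2 + alpha ||D2 s||^2.
import numpy as np

def hat_matrix(n, alpha):
    D2 = np.diff(np.eye(n), n=2, axis=0)            # second-difference operator
    return np.linalg.solve(np.eye(n) + alpha * D2.T @ D2, np.eye(n))

def gcv_score(y, A):
    n = y.size
    rss = np.sum((y - A @ y) ** 2)
    return (rss / n) / (1.0 - np.trace(A) / n) ** 2

# choose alpha on a grid, e.g.
# best = min(10.0 ** np.arange(-3, 4),
#            key=lambda a: gcv_score(y, hat_matrix(y.size, a)))
```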
In constructing confidence intervals, Wahba (1983) exploited a Bayesian interpretation of the smoothing spline, namely that it is a Bayes estimate with respect to a certain zero mean Gaussian prior. She was able to develop credibility intervals for the $s(x_i)$. However, it should be emphasized that these are not simultaneous credibility interval estimates. Some extensions and modifications of Wahba's work can be found in Silverman (1985).

2.3 The Exponential Smoothing Method

The exponential smoothing method was developed in the early 1950s. Since then it has become a popular method of forecasting, because the model formulations are relatively simple and only limited data storage and computational effort are required. Perhaps the most important reason for the popularity of exponential smoothing is the surprising accuracy that can be obtained with minimal effort in model identification. Makridakis et al. (1982) reported that the exponential smoothing method proved to be at least as accurate for forecasting as more complex approaches such as the Box-Jenkins autoregressive integrated moving average (ARIMA) technique.

In essence, exponential smoothing applies an unequal set of weights to past data. These weights decay exponentially from the most recent data value to the most distant. It is also assumed that the $\{x_i\}$ are equally spaced and that $Y(x_t) = Y_t$ is a time series. The main use is to forecast $Y_{t+1}$ given $Y_1, \dots, Y_t$, although this method can also be used to smooth $Y_1, \dots, Y_n$ (for example, see Fig. 1 in Brown and Meyer (1961)). However, it should be pointed out that this method depends just on the previous data. So as a smoothing method it is probably not as good as the kernel and spline smoothing methods, which use the data in a neighbourhood of a point.

Here are some commonly used model formulations:

(i) Simple exponential smoothing (no trend in the series)

The general equation for the forecast value can be written as

$$F_{t+1} = \alpha Y_t + (1 - \alpha) F_t, \qquad (2.3.1)$$

where $F_t$ is the forecast at time t, $Y_t$ is the observed value of the series at time t, and $\alpha$ is a smoothing constant between 0 and 1. A large value of $\alpha$, say .9, gives little smoothing in the forecast, whereas a small value, say .1, gives considerable smoothing. Most often $\alpha$ is chosen by trial and error. An optimal value of $\alpha$ may be chosen by minimizing the MSE of the forecast. The initial value (as of time 0), $F_0$, can be estimated simply using the mean of the data. Alternatively, backcasting is suggested to obtain $F_0$: invert the data series and start the estimation procedure from the latest (most recent) value, finishing with the first (oldest) value. Doing this provides forecasts or parameter estimates for the beginning of the data, and these can be used as initial values when the data are forecast in the usual sequence, that is, from beginning to end.

Trigg and Leach (1967) suggest an adaptive response rate approach for the automatic specification of $\alpha$. It allows $\alpha$ to vary from one period to another, so that the forecast can respond to the pattern of the data as closely as possible. The adaptive response rate approach is based on (2.3.1), but $\alpha$ varies and is calculated according to

$$\alpha_{t+1} = |E_t / M_t|,$$

where $E_t$ (smoothed error) $= \beta e_t + (1 - \beta) E_{t-1}$, $M_t$ (smoothed absolute error) $= \beta |e_t| + (1 - \beta) M_{t-1}$, $e_t = Y_t - F_t$, and $\beta$ is a constant usually set at 0.1 or 0.2. We can let the initial value $E_1 = 0$ and let $M_1$ equal an estimate of the mean absolute deviation of the data. The value of $\alpha_t$ will fluctuate between 0 (perfect forecasts) and 1 (extreme bias in the forecasts).
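A compact sketch of (2.3.1) with the Trigg-Leach adaptive response rate follows. The starting value of alpha and the mean-based initializations of F_0 and M_1 are illustrative assumptions consistent with the text above.

```python
# Sketch of simple exponential smoothing with the Trigg-Leach adaptive alpha:
# alpha_{t+1} = |E_t / M_t|, with beta = 0.1 or 0.2 as suggested in the text.
import numpy as np

def adaptive_ses(y, beta=0.2):
    n = len(y)
    F = np.empty(n + 1)
    F[0] = np.mean(y)                       # F_0: mean of the data
    E = 0.0                                 # smoothed error, E_1 = 0
    M = np.mean(np.abs(y - np.mean(y)))     # M_1: mean absolute deviation
    alpha = 0.2                             # assumed starting response rate
    for t in range(n):
        F[t + 1] = alpha * y[t] + (1 - alpha) * F[t]
        e = y[t] - F[t]
        E = beta * e + (1 - beta) * E
        M = beta * abs(e) + (1 - beta) * M
        if M > 0:
            alpha = abs(E / M)              # adaptive response rate
    return F                                # F[t+1] is the forecast of y[t+1]
```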
(ii) Winters' (1960) linear exponential smoothing (linear trend in the series)

A smoothed estimate of the trend is given by

$$T_t = \beta (S_t - S_{t-1}) + (1 - \beta) T_{t-1}, \qquad (2.3.2)$$

where $S_t$ is the equivalent of the simple exponentially smoothed value and $\beta$ is a smoothing constant analogous to $\alpha$. The initial values $S_0$ and $T_0$ can be obtained by backcasting. Alternatively, they are found by using ordinary least squares: solve the equation for a straight line to obtain the intercept and the slope, and use these as initial parameter values. The basic principle underlying (2.3.2) is the same as that of (2.3.1): the most recent trend, $(S_t - S_{t-1})$, is weighted by $\beta$, and the last smoothed trend, $T_{t-1}$, is weighted by $(1 - \beta)$. This smoothed trend is combined with the standard smoothing equation to obtain

$$S_t = \alpha Y_t + (1 - \alpha)(S_{t-1} + T_{t-1}). \qquad (2.3.3)$$

In order to use the smoothed series values, S, and the smoothed trend component, T, to prepare a forecast, the trend component must be added to the basic smoothed value for the number of periods m ahead to be forecast. The forecast is given by

$$F_{t+m} = S_t + m T_t.$$

(iii) Winters' (1960) linear and seasonal exponential smoothing (linear trend and seasonality in the series)

This model is based on three equations, each of which smooths a factor associated with one of the three components of the pattern: randomness, linear trend, and seasonality. These three equations are as follows:

$$S_t = \alpha Y_t / I_{t-L} + (1 - \alpha)(S_{t-1} + T_{t-1}),$$
$$T_t = \beta (S_t - S_{t-1}) + (1 - \beta) T_{t-1},$$
$$I_t = \gamma Y_t / S_t + (1 - \gamma) I_{t-L},$$

where S is a smoothed value of the deseasonalized series, T is a smoothed value of the trend, I is a smoothed value of the seasonal factor, and L is the length of seasonality (e.g., the number of months or quarters in a year). The initial values of the smoothed seasonal factors I can be found by the classical time series decomposition methods (for example, see Makridakis and Wheelwright (1978)).

The equation for I is comparable to a seasonality index. It must be remembered that $Y_t$ includes any randomness in the series. In order to smooth this randomness, the equation for I weights the newly computed seasonal factor $(Y_t/S_t)$ by $\gamma$ and the most recent seasonal number corresponding to the same season, $I_{t-L}$, by $(1 - \gamma)$. The equation for $T_t$ is given in (2.3.2). In the equation for $S_t$, $Y_t$ is divided by the seasonal factor $I_{t-L}$; this is done to deseasonalize (eliminate the seasonal fluctuations of) $Y_t$. $I_{t-L}$ is used because $I_t$ cannot be calculated until $S_t$ is known. The forecast is given by

$$F_{t+m} = (S_t + m T_t)\, I_{t-L+m}.$$

For models (ii) and (iii), the best values of $\alpha$, $\beta$ and $\gamma$ are found by trying various combinations of values and choosing the set that minimizes the MSE of the forecast.

Only exponential smoothing models with simple patterns are described in the above discussion. Smoothing models for other, more complicated patterns are described by Makridakis and Wheelwright (1978), Makridakis et al. (1982) and Gardner (1985).
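Model (ii) is equally short in code. This sketch uses the ordinary least squares initialization for S_0 and T_0 mentioned above; fixed values of alpha and beta are assumed rather than optimized over the MSE of the forecast.

```python
# Sketch of Winters' linear exponential smoothing, equations (2.3.2)-(2.3.3),
# with least-squares initial values S_0 (intercept) and T_0 (slope).
import numpy as np

def holt_forecast(y, alpha, beta, m):
    t = np.arange(len(y))
    T, S = np.polyfit(t, y, 1)              # slope, intercept as T_0, S_0
    for yt in y:
        S_new = alpha * yt + (1 - alpha) * (S + T)   # (2.3.3)
        T = beta * (S_new - S) + (1 - beta) * T      # (2.3.2)
        S = S_new
    return S + m * T                        # F_{t+m} = S_t + m T_t
```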
2.4 Some Robust Smoothing Methods

For the model in (2.1.1), Cleveland (1979) proposed the use of a robust weighted regression procedure for smoothing scatterplots. Firstly, for each i compute the estimates $\hat{\beta}_j(x_i)$, $j = 0, 1, \dots, d$, of the parameters in a polynomial regression of degree d of $Y(x_k)$ on $x_k$, which minimize

$$\sum_{k=1}^{n} w_k(x_i) \left( Y(x_k) - \beta_0 - \beta_1 x_k - \dots - \beta_d x_k^d \right)^2,$$

where $w_k(x_i) = W(h_i^{-1}(x_k - x_i))$ and $h_i$ is the rth smallest number among $|x_i - x_j|$, for $j = 1, \dots, n$. W is some given weight function and r = fn, where f is the parameter used to determine the amount of smoothing. Thus $\hat{Y}(x_i)$, the fitted value of the regression at $x_i$, is $\sum_{j=0}^{d} \hat{\beta}_j(x_i)\, x_i^j$.

Let $\delta_k$ be some robustness weights. Then compute a new $\hat{Y}(x_i)$ for each i by fitting a dth degree polynomial using weighted least squares with weight $\delta_k w_k(x_i)$ at $(x_k, Y(x_k))$. Now repeat this procedure successively, computing new $\delta_k$ (by the procedure indicated below, for example) and $\hat{Y}(x_i)$, t times; the final $\hat{Y}(x_i)$ are robust locally weighted regression fitted values. Lastly, linear interpolation of these fitted values is used to obtain the robust smooth curve. Cleveland chose $\delta_k = B(e_k/6s)$, where B, the bisquare weight function, is defined by

$$B(x) = \begin{cases} (1 - x^2)^2 & \text{for } |x| < 1, \\ 0 & \text{for } |x| \ge 1, \end{cases}$$

$e_k = Y(x_k) - \hat{Y}(x_k)$, and s is the median of the $|e_k|$. He also gave some guidelines for choosing f, d, t and W in practice. (Cleveland's procedure is available in S [Becker and Chambers (1984)] through the command "lowess".)

Friedman (1984) constructed a variable span smoother based on locally linear fits. It is very fast to compute, and the value of the parameter that controls the amount of smoothing is automatically optimized locally (through cross-validation), allowing it to adapt to the response over the range of abscissa values.

In the context of robust spline smoothing, we can replace the sum of squared errors in the objective function (2.2.1) by a different function $\rho$ of the errors, to give

$$n^{-1} \sum_{i=1}^{n} \rho(Y(x_i) - s(x_i)) + \alpha \int s''(x)^2\, dx. \qquad (2.4.1)$$

Here $\rho$ is a convex, less rapidly increasing function with bounded derivative. Huber (1979) chose

$$\rho(x) = \begin{cases} x^2/2 & \text{for } |x| \le c, \\ c|x| - c^2/2 & \text{for } |x| > c. \end{cases}$$

The constant c regulates the degree of robustness, and good choices for c are between 1 and 2 times the standard deviation of the individual observations $Y(x_i)$. Lenth (1977) and Huber (1979) use iteratively reweighted least squares procedures to minimize (2.4.1). Utreras (1981) gives a numerical procedure based on the Newton minimization algorithm for computing the minimizer of (2.4.1).
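The core of Cleveland's scheme, downweighting large residuals with the bisquare and refitting, can be sketched briefly. For brevity the local fit below is the kernel smoother of section 2.1 rather than Cleveland's local polynomial, so this is an illustration of the robustness iteration only, not of lowess itself; the number of iterations and the small guard constant are assumptions.

```python
# Sketch of bisquare robustness reweighting in the spirit of Cleveland (1979):
# delta_k = B(e_k / 6s) with s = median |residual|, then a reweighted refit.
import numpy as np

def bisquare(u):
    w = np.zeros_like(u)
    inside = np.abs(u) < 1
    w[inside] = (1 - u[inside] ** 2) ** 2
    return w

def robust_kernel_smooth(x, y, h, iters=2):
    delta = np.ones_like(y)                      # robustness weights
    for _ in range(iters + 1):
        u = (x[:, None] - x[None, :]) / h
        K = np.exp(-0.5 * u**2) * delta[None, :]  # kernel times robustness weights
        fitted = (K * y).sum(axis=1) / K.sum(axis=1)
        e = y - fitted
        s = np.median(np.abs(e)) + 1e-12          # guard against zero residuals
        delta = bisquare(e / (6 * s))
    return fitted
```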
2.5 The Multivariate Extension

What we have discussed so far are nonparametric smoothing methods for a univariate function $s(\cdot)$. If $s(\cdot)$ is now a function of a vector variable x, then the extension of the above smoothing methods is straightforward. Here is a brief description:

(i) the kernel method

Using the Watson-Nadaraya estimator as an example, now $h = (h_1, \dots, h_p) \in R_+^p$, the collection of p-tuples of positive numbers; $x = (t_1, \dots, t_p) \in R^p$; and the kernel K is a certain density function on $R^p$. We can use the approach of Hardle and Marron (1985), described in section 2.1, to estimate h.

Bierens (1983) proposed a new class of kernel estimators constructed through a Sample Moments Integrating Normal Kernel (SMINK) estimator. The SMINK estimator is itself a density, and satisfies the condition that its first and second moment integrals equal the first and second sample moments, respectively. For p = 2, the kernel estimator of $s(\cdot)$ is given by

$$\hat{s}(x) = \frac{\int_{-\infty}^{+\infty} y\, \hat{f}_1(y, x \mid h_1)\, dy}{\hat{f}_2(x \mid h_2)},$$

where $\hat{f}_1(y, x \mid h_1)$ and $\hat{f}_2(x \mid h_2)$ are SMINK estimators of the densities f(y, x) and $f_2(x)$, respectively, and $h_1$ and $h_2$ are kernel bandwidth parameters. A two-step procedure is used to estimate $h_1$ and $h_2$, and satisfactory performance of $\hat{s}(x)$ is revealed in a numerical example.

(ii) the spline smoothing method

For the model in (2.1.1), s is now a function of a p-dimensional vector variable $x = (t_1, \dots, t_p)$. The smoothness penalty function $\int s''(u)^2\, du$ in (2.2.1) is replaced by a penalty $J_m(s)$ on the partial derivatives of total order m. In the particular case p = m = 2, the smoothness penalty function becomes

$$J_2(s) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \left( \frac{\partial^2 s}{\partial t_1^2} \right)^2 + 2 \left( \frac{\partial^2 s}{\partial t_1 \partial t_2} \right)^2 + \left( \frac{\partial^2 s}{\partial t_2^2} \right)^2 dt_1\, dt_2.$$

We wish to find the multivariate spline smoothing estimate, $\hat{s}_\alpha$, of s, which minimizes

$$n^{-1} \sum_{i=1}^{n} (Y(x_i) - s(x_i))^2 + \alpha J_m(s). \qquad (2.5.1)$$

For p = m = 2, (2.5.1) becomes

$$n^{-1} \sum_{i=1}^{n} (Y(x_i) - s(x_i))^2 + \alpha J_2(s). \qquad (2.5.2)$$

The minimizer of (2.5.2) is one of the "thin plate splines", so called because $J_2(s)$ is the bending energy of a thin plate. Wahba and Wendelberger (1980) and Wahba (1980) have shown how to use generalized cross-validation (see Craven and Wahba (1979)) to choose $\alpha$ from noisy data, and have given a computational algorithm for p = 2. Utreras (1979) has also given computational procedures. In the general case, i.e., estimating the $\hat{s}_\alpha$ which minimizes (2.5.1), Wahba and Wendelberger (1980) and Wahba (1984a,b) do provide solutions.

It may be noteworthy that in the mining industry, a class of two-dimensional smoothing methods known as "kriging" is closely related to the thin-plate splines. Comparisons of these two smoothing methods are found in several sources; interested readers are referred to Dubrule (1983, 1984) and Watson (1984).

(iii) the exponential smoothing method

The simple exponential smoothing method has been generalized to multivariate forecasting by Jones (1966) and Enns et al. (1982). The generalization is straightforward: we replace the scalars in equation (2.3.1) to obtain

$$F_{t+1} = \alpha Y_t + (1 - \alpha) F_t,$$

where, if there are p series, the dimensions of $F_t$, $F_{t+1}$ and $Y_t$ are p x 1, the smoothing matrix $\alpha$ has dimension p x p, and $1 - \alpha$ is read as $I - \alpha$. Jones (ibid) proposes a recursive algorithm to estimate the smoothing matrix $\alpha$. Enns et al. (ibid) introduce a class of multiple exponential smoothing models as an alternative to simple univariate exponential smoothing, and Trigg and Leach (1967) suggest adaptive models. Such models are adaptive in that the smoothing matrix $\alpha$, which is a generalization of the smoothing constant of univariate models, changes from period to period. The model parameters, including the full variance-covariance structure of the multiple time series as well as the smoothing matrix $\alpha$, are estimated by maximum likelihood. Harvey (1984) further simplified the computations of Enns et al. (ibid) so that one can use univariate smoothing models to forecast related time series. Moreover, Harvey's results hold for smoothing models containing polynomial trends and seasonal components as well.

(iv) the robust smoothing methods

To our knowledge, no generalization to the multivariate case has been reported up to now.
2.6 Discussion

It should be emphasized that in the above survey, we have been guided in our choice of material by concerns related to the applicability of the methodology rather than the theoretical properties of different kinds of smoothers. For discussion related to the latter, and comprehensive lists of references on this subject, please refer to Stone (1977), Wegman (1980), Collomb (1981), Wegman and Wright (1983) and Silverman (1985).

From this survey, we can discover some shortcomings of the described smoothing methods. For instance, the spline smoothing and the robust smoothing methods are used for smoothing rather than forecasting. The exponential smoothing method is mainly used for forecasting but not for smoothing. Also, explicit forms of confidence intervals for the smooth curve estimate are hard to derive. However, Weerahandi and Zidek (1985) provide an alternative approach to solve these problems. The general theory is presented in detail in the following section.

3. THEORY OF BAYESIAN NONPARAMETRIC APPROACH FOR SMOOTHING LOCALLY REGULAR PROCESSES.

3.1 The General Theory

Weerahandi and Zidek (1985) (abbreviated hereafter by WZ) propose a Bayesian nonparametric approach for the smoothing of stochastic processes. The stochastic processes of concern are of the form R = S + N, where S is a smooth function and N is an independent noise process. In this thesis we take the domain to be the real line, and may think of the stochastic process as a time series. Assume R has been observed at a sequence of time-values, $t_i$, and let $r_i = R(t_i)$, $i = 1, \dots, n$. The objects of inference are an estimate of $\beta_0 = S(t_{n+1})$, the interpolant or predictand, and a corresponding credibility interval. S is assumed to be locally regular, that is, expandable in a Taylor series to the pth term about $t = t_{n+1}$. A local parametrization of S is thereby achieved, where the parameters are $\beta_i = D^i S(t_{n+1})$, D denoting the differentiation operator. An a priori structural model for the data is

$$R = X\beta + \epsilon. \qquad (3.1.1)$$

Here $R = (R_1, \dots, R_n)^T$, $R_i = R(t_i)$; $\beta = (\beta_0, \dots, \beta_p)^T$; X, an n x (p+1) matrix, is given by $X = (1, X_1, \dots, X_p)$, where 1 is an n-vector of 1's and $X_j^T = ([t_1 - t_{n+1}]^j/j!, \dots, [t_n - t_{n+1}]^j/j!)$, $j = 1, \dots, p$; $\epsilon = \eta + N$, $N^T = (N_1, \dots, N_n)$, $N_i = N(t_i)$, $\eta^T = (\eta_1, \dots, \eta_n)$, and $\eta_i$ is the remainder of the Taylor expansion of $S(t_i)$, that is,

$$\eta_i = [t_i - t_{n+1}]^{p+1} D^{p+1} S(\theta_i)/(p+1)!,$$

where $\theta_i$ is a point in the interval joining $t_i$ and $t_{n+1}$. The degree of local regularity is reflected in the size of p. Although R is observed, $\beta$ in equation (3.1.1) and $\epsilon$ are not.

The second assumption in WZ's approach is that the expansion errors and all other a priori uncertainty about R, $\beta$ and $\epsilon$ have a joint multivariate normal distribution. Furthermore, the $\eta_i$'s are assumed to be independent with mean zero, and the variances of the $\eta_i$'s are of the form $\delta^2 |t_i - t_{n+1}|^{2p+2}$, where $\delta^2$ represents uncertainty about the size of $D^{p+1}S/(p+1)!$ in the remainder term of the Taylor expansion of S.

In order to estimate $\beta_0 = S(t_{n+1})$, the interpolant or predictand, assume that equation (3.1.1) holds with $\mathrm{Cov}(\epsilon) = \sigma^2 H$, where $\sigma^2 = \mathrm{Var}(N_i)$ is the variance of the noise,

$$H = \mathrm{diag}\{(1 + c|t_1 - t_{n+1}|^{2p+2}), \dots, (1 + c|t_n - t_{n+1}|^{2p+2})\}, \qquad c = \delta^2/\sigma^2,$$

and $\mathrm{Cov}(\beta) = \Sigma$. Then the inference problem can be solved by using standard results (see Lindley and Smith (1972)). Let "U|V" denote the conditional distribution of U given V. Then

$$R \mid \Sigma, \sigma^2, H \sim N(X\beta^0,\ X\Sigma X^T + \sigma^2 H)$$

and

$$\beta \mid R, \Sigma, \sigma^2, H \sim N\!\left( F(\Sigma^{-1}\beta^0 + \sigma^{-2} X^T H^{-1} R),\ F \right),$$

where $F = (\Sigma^{-1} + \sigma^{-2} X^T H^{-1} X)^{-1}$. Then, using a diffuse prior for $\beta$, i.e., letting $\Sigma^{-1} \to 0$, $E(\beta \mid R)$ converges to $(X^T H^{-1} X)^{-1} X^T H^{-1} R$, the generalized least squares estimator of $\beta$, with a posteriori covariance matrix $\sigma^2 (X^T H^{-1} X)^{-1}$.
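The diffuse-prior estimate is a single generalized least squares solve per target point. The following is a minimal re-implementation sketch, not the thesis's Fortran subroutines; the function name and arguments are illustrative.

```python
# Sketch of the WZ interpolant under a diffuse prior: build X from the
# Taylor design (t_i - t*)^j / j!, H^{-1} from the diagonal weights
# (1 + c|t_i - t*|^{2p+2})^{-1}, and return beta_hat_0 as S_hat(t*).
import numpy as np
from math import factorial

def wz_smooth(t, r, t_star, p, c):
    d = t - t_star
    X = np.column_stack([d**j / factorial(j) for j in range(p + 1)])
    h_inv = 1.0 / (1.0 + c * np.abs(d) ** (2 * p + 2))   # H^{-1} is diagonal
    XtHi = X.T * h_inv                                   # X^T H^{-1}
    beta = np.linalg.solve(XtHi @ X, XtHi @ r)           # GLS estimate of beta
    return beta[0]                                       # estimate of S(t*)
```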
It is noteworthy that c serves as the parameter controlling the appropriate degree of smoothing. If c is too small, $\hat{r}$ will appear to be too smooth, and if too large, $\hat{r}$ will be too rough. One approach to estimating c is cross-validation (Stone (1974)). Successively, $t_{n+1} = t_i$ is chosen, and then $r_i$ is dropped from the sample and interpolated using $\hat{r}_i = \hat{\beta}_0(i)$, obtained by the above generalized least squares procedure. Lastly, $c = \hat{c}$ is chosen to be the value which minimizes the cross-validated sum of squared errors,

$$XV(c) = \sum_i (r_i - \hat{r}_i)^2.$$

Alternatively, maximum likelihood estimation may be used to find c. However, in the acid rain example (see section 4), $\hat{c} = 0$ was obtained, so this method is ineffective and hence rejected. A "backfitting" method described below can also be used to estimate c. It is shown to be computationally efficient, and it coheres with the general theory. A detailed description of this method is given in section 3.2 (II).

For $t_{n+1} = t_i$, the a priori local structural model is $R_i = \alpha_i + \sigma Z_i$, $i = 1, \dots, n$, where $\alpha_i = S(t_i)$ and $Z_i$ has mean 0 and variance 1. Given $c = \hat{c}$, $\alpha_i$ and $\mathrm{Var}(\hat{\alpha}_i) = c_i \sigma^2$ can be estimated by the above generalized least squares procedure. Thus $E(R_i - \hat{\alpha}_i)^2 = \mathrm{Var}(\hat{\alpha}_i) + \sigma^2 = \sigma^2(1 + c_i)$ when the noise is regarded as independent of the smooth. From this, an unbiased estimator of $\sigma^2$ is given by

$$\hat{\sigma}^2 = n^{-1} \sum_i (R_i - \hat{\alpha}_i)^2/(1 + c_i).$$

Consequently, since a posteriori $\hat{\alpha}$ has a normal distribution with mean $\alpha$, the approximate 95% credibility intervals for $\alpha$ are given by $\hat{\alpha} \pm 1.96\, \widehat{s.e.}(\hat{\alpha})$.

The above methods were tested on the data from an acid rain, wet-deposition time series, and very encouraging results were obtained. In the cases considered in WZ, only locally constant, linear and quadratic (i.e., p <= 2) structural models are considered for illustration.

In general, the WZ approach has some advantages over the other nonparametric smoothing methods described in section 2. Firstly, the methods mentioned in section 2 share the shortcoming that explicit forms of the estimates of $s(\cdot)$, especially for extrapolants, and confidence bands are hard to derive. For example, Silverman (1985) points out that in spline smoothing only an implicit solution can be obtained, except for large samples, when an approximate explicit expression may be derived. In contrast, the WZ approach does yield explicit estimates of $s(\cdot)$ with corresponding explicit credibility bands. Secondly, the prior that Wahba (1983) uses is essentially unique, so that the applicability of the spline smoothing is severely limited. The exponential smoothing method suffers from the lack of an objective procedure for model identification. In contrast, in the WZ approach, every aspect of the analysis is guided by the local structural model and the Bayesian paradigm itself. Prior knowledge may be incorporated directly into the analysis, and "borrowing from strength" is feasible. The "backfitting" method for estimating the smoothing parameter c is simple and computationally fast compared with other procedures. Furthermore, the WZ approach can be extended to the analysis of multiple, multivariate space-time series and the regression problem. Weerahandi and Zidek (1986) have successfully extended the WZ approach to the analysis of multiple independent time series. In the latter case, exact credibility bands for growth curves are generated. Their work on spatio-temporal analysis is in progress, and some promising unpublished preliminary results have been obtained.

3.2 Estimating the "smoothing parameter" c for locally regular structural models.

(I) Cross-validation method

In the cases considered in WZ, $\hat{c}$ is found by minimizing XV(c), which depends on $\sigma^2$ only through c. The results of WZ suggest that quite satisfactory estimates of c are obtained in the locally constant, linear and quadratic structural models.
So, it is tempting to try the same approach to estimate $c = \hat{c}$ for the other pth order locally regular structural models, when S is assumed to have p+1 derivatives. It is then necessary to find out whether $\hat{c}$ is really the global minimum, rather than a local minimum, of the function XV(c). Two ways which help to determine this are:

(i) Plot XV(c) against c, and examine the shape of XV(c) for large c. In this case, we need to know the asymptotic behaviour of XV(c) as $c \to \infty$. Let $t_1 < \dots < t_n$; let $x_j = t_j - t_i$ for all $j = 1, \dots, n$ excluding $j = i$; let $\hat{\alpha}_{(i)}(c)$ denote the cross-validated estimate of $\alpha$ at $t_i$, and $XV(\infty)$ the value of XV(c) at $c = \infty$; and let a prime on $\sum$ denote summation over all $j = 1, \dots, n$ excluding $j = i$.

In the case of a locally constant fit (i.e., p = 0), $\hat{\alpha}_{(i)}(c)$ can be obtained by the generalized least squares procedure described in section 3.1. It is given by

$$\hat{\alpha}_{(i)}(c) = \frac{\sum' r_j (1 + c x_j^2)^{-1}}{\sum' (1 + c x_j^2)^{-1}}.$$

It follows that

$$\hat{\alpha}_{(i)}(c) \to \frac{\sum' r_j x_j^{-2}}{\sum' x_j^{-2}} \overset{\mathrm{def}}{=} \hat{\alpha}_{(i)}(\infty) \quad \text{as } c \to \infty.$$

Therefore

$$XV(\infty) \approx \sum_{i=1}^{n} (r_i - \hat{\alpha}_{(i)}(\infty))^2.$$

Similarly, in the case of a locally linear fit (i.e., p = 1), the generalized least squares weights become $(1 + c x_j^4)^{-1}$, and it follows that

$$\hat{\alpha}_{(i)}(c) \to \frac{\sum' r_j x_j^{-4} \sum' x_j^{-2} - \sum' r_j x_j^{-3} \sum' x_j^{-3}}{\sum' x_j^{-4} \sum' x_j^{-2} - \left( \sum' x_j^{-3} \right)^2} \overset{\mathrm{def}}{=} \hat{\alpha}_{(i)}(\infty) \quad \text{as } c \to \infty,$$

so that again $XV(\infty) \approx \sum_{i=1}^{n} (r_i - \hat{\alpha}_{(i)}(\infty))^2$. By using the same technique, we can find $XV(\infty)$ for higher order locally regular structural models.

(ii) Examine the asymptotic behaviour of XV(c) for large c. If XV(c) is less than $XV(\infty)$ for large c, then we will have increased confidence that there is not a local minimum at infinity.

In the locally constant case, expanding $(1 + c x_j^2)^{-1} \approx c^{-1} x_j^{-2} - c^{-2} x_j^{-4}$ for large c gives

$$r_i - \hat{\alpha}_{(i)}(c) = r_i - \frac{\sum' r_j (c^{-1} x_j^{-2} - c^{-2} x_j^{-4})}{\sum' (c^{-1} x_j^{-2} - c^{-2} x_j^{-4})}.$$

Let $b_i = \sum' r_j x_j^{-2}$, $d_i = \sum' r_j x_j^{-4}$, $e_i = \sum' x_j^{-2}$ and $g_i = \sum' x_j^{-4}$. Then, for large c,

$$\hat{\alpha}_{(i)}(c) = \frac{b_i - d_i/c}{e_i - g_i/c} \approx \hat{\alpha}_{(i)}(\infty) - \frac{f_i}{c}, \qquad f_i = \frac{d_i - \hat{\alpha}_{(i)}(\infty)\, g_i}{e_i},$$

so that $r_i - \hat{\alpha}_{(i)}(c) \approx r_i - \hat{\alpha}_{(i)}(\infty) + f_i/c$. So

$$XV(c) = \sum_i (r_i - \hat{\alpha}_{(i)}(c))^2 \approx \sum_i (r_i - \hat{\alpha}_{(i)}(\infty))^2 + \frac{2}{c} \sum_i f_i (r_i - \hat{\alpha}_{(i)}(\infty)). \qquad (3.2.1)$$

Differentiating (3.2.1) with respect to c, we obtain

$$XV'(c) \approx -\frac{2}{c^2} \sum_i f_i (r_i - \hat{\alpha}_{(i)}(\infty))$$

for large c. Similarly, by using the same technique, we can estimate XV'(c) for higher order locally regular structural models.

(II) "Backfitting" method

Recall from equation (3.1.1) that if S has p+1 derivatives, then the a priori model is of the form

$$R_i = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_p x_p + \beta_{p+1} x_{p+1} + \epsilon, \qquad (3.2.2)$$

where $\beta_0 = S(t_{n+1})$, $\beta_i = D^i S(t_{n+1})$, $x_i = (t_i - t_{n+1})^i/i!$, $i = 1, \dots, p+1$, and $\epsilon = \eta + N$ as defined in section 3.1. Recall that $\delta^2$ represents the uncertainty about the size of $D^{p+1}S/(p+1)!$ in the remainder term of the Taylor expansion of S, i.e., $\delta^2$ is the a priori variance of $D^{p+1}S/(p+1)!$. Hence $t_{n+1} = t_i$ may be chosen successively, and $\delta^2$ may be estimated by

$$\hat{\delta}^2 = \text{sample variance of } (\hat{\beta}_{p+1}/(p+1)!) = \frac{\text{sample variance of } (\hat{\beta}_{p+1})}{((p+1)!)^2}.$$

We also note that $c = \delta^2/\sigma^2$, so the value of c for the order p, $c_p$ say, is given by

$$\hat{c}_p = \frac{\hat{\delta}^2}{\hat{\sigma}^2} = \frac{\text{sample variance of } (\hat{\beta}_{p+1})}{\hat{\sigma}^2 ((p+1)!)^2} = \frac{(n-1)^{-1} \sum_{i=1}^{n} (\hat{\beta}_{p+1}(i) - \bar{\beta}_{p+1})^2}{\hat{\sigma}^2 ((p+1)!)^2}, \qquad (3.2.3)$$

where $\hat{\beta}_{p+1}(i)$ is the coefficient of $x_{p+1}$ in the fit centred at $t_i$, $\bar{\beta}_{p+1}$ is the sample mean of the $\hat{\beta}_{p+1}$'s, and $\hat{\sigma}^2$ can be found by the generalized least squares procedure described in section 3.1.

Thus, if (3.2.2) holds, by the same argument we can use (3.2.3) to estimate the values of $c_j$ for any order $j = 0, \dots, p$. However, to use the formula (3.2.3) we need $c_{p+1}$. We may use the cross-validatory approach mentioned in section 3.1 to estimate $c_{p+1}$, from which we can obtain estimates of $\beta_i$, $\delta^2$ and $\sigma^2$.
Alternatively, we may simply put $c_{p+1} = 0$, assuming p is large. The reason is that at a certain stage, the uncertainty about the size of $D^{p+1}S/(p+1)!$ will be reduced approximately to zero for sufficiently large p. This yields $\delta^2 \to 0$, and hence $c_{p+1} \to 0$. In summary, if (3.2.2) holds and $c_{p+1}$ is fixed, we may estimate the $c_j$ of the lower orders $j \le p$ by using (3.2.3). We refer to such an approach as the "backfitting" procedure.

(III) Conclusions

Comparing methods (I) and (II), we prefer the "backfitting" approach to cross-validation for the following reasons. Firstly, it is computationally more efficient, since we do not need to compute $\hat{r}_i$ for each p and c, and we can estimate $c_j$ for all $j \le p$ within one run once $c_{p+1}$ is fixed (a code sketch of this estimate follows). Secondly, this approach coheres with the theory proposed in section 3.1, and we expect the resulting curves to be comparatively smoother. In consequence, the smoothers obtained for different orders of local regularity become comparable.
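The sketch below illustrates the backfitting estimate (3.2.3). It reuses the Taylor design of the wz_smooth sketch in section 3.1; passing sigma2 in as a known quantity is a simplifying assumption (in practice it is estimated as in section 3.1), and c_next = 0 corresponds to fixing c at the next order.

```python
# Sketch of the backfitting estimate (3.2.3): fit order p+1 with c_{p+1}
# fixed (e.g. 0), collect beta_{p+1} across target points t_i, and set
# c_p = sample variance of beta_{p+1} / (sigma2 * ((p+1)!)^2).
import numpy as np
from math import factorial

def backfit_c(t, r, p, c_next=0.0, sigma2=1.0):
    betas = []
    for i in range(len(t)):
        d = t - t[i]
        X = np.column_stack([d**j / factorial(j) for j in range(p + 2)])
        h_inv = 1.0 / (1.0 + c_next * np.abs(d) ** (2 * p + 4))
        XtHi = X.T * h_inv
        beta = np.linalg.solve(XtHi @ X, XtHi @ r)
        betas.append(beta[p + 1])           # coefficient of the (p+1)st term
    var = np.var(betas, ddof=1)             # sample variance of beta_{p+1}
    return var / (sigma2 * factorial(p + 1) ** 2)
```

Applied repeatedly with decreasing order, this yields estimates of c_j for every j <= p in a single pass over the orders, which is the computational advantage claimed above.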
(iii) In step 7, for each < p, replace PSE(j) by 1 w i=i where to 1 • r, 1 M w k=l Finally, follow the same steps 1 to 8 in the original procedure with the above modifications. In section 4 , both types of predictions are considered for illus- trations. 31 4. APPLICATION. 4.1 Introduction In this section, the methods described in the last section are tested on the data used in WZ. The data were obtained from one of the nine stations of the MAP3S/PCN monitoring network which is found in the Northeastern part of the United States and has provided event-based chemical measurements of v/et deposition events since about 1976. A detailed description of the data set can be found in Gentleman, Zidek and Olsen (1985). For illustrative purposes, our analyses are confined to two subcases: (i) the field pH values which were measured at a station located in Pennsylvania State University during the pre- cipitation events of 1977 ( 80 observations were recorded ), and (ii) the monthly average pH-values over the 1976-82 period at the same station ( 71 observations were recorded ). Hereafter (i) is referred to as the pH77 data and (ii) as the M.A. pH data. Throughout this section time is measured in days starting with January 1, 1976 which is regarded as the origin. Recall that in equation (3.1.1), cP = 52/cr2. However, 52, the variance of / ? p + i / ( p + 1)!, is extremely small. So u;,- = — tn+i is rescaled by a factor 10~2, t = l , . . . ,n , in estimating cp numerically. In this way, all the {cy} obtained are enlarged by a factor of 10 4 j + 4 for all j = 0, ...,p. The results reported below are obtained by the methods described in section 3; all the calculations were performed by using Fortran subroutines which are included in the Appendix. 4.2 Comparison of the cross-validation and the backfitting methods. 32 Before making any comparisons of these two methods, we examined the asymptotic behaviour of XV(c) for large c, to confirm that the cross-validatory estimate of cp being found is really the global minimum of XV(c). Only the locally constant fit is considered as an illustration. Observe in Table I that the XV(c) values increase as c tends to infinity for both the pH77 and the M.A. pH data. Also, for large c, XV(c)' « 0.2738c"2 and 0.000372c-2 for the pH77 and the M.A. pH data respectively. Therefore, we are sure that the global minimum of XV(c), cp, is finite. In applications, cp , the cross-validatory value of cp, is found by using an IMSL (1982) optimization subroutine to minimize XV(c). The estimated values of cP for the pH77 data are presented in Table II and the corresponding locally polynomial fits are displayed in Figure l(A-C). Figure 1 shows that the locally quadratic fit is smoother than the other polynomial fits in the sense that it cannot make the quick turns so it tends to round off the corners of the dips of the data. This may due to the fact that c 2 is comparatively small so a greater degree of smoothness is revealed in the resulting curve. As we have pointed out in section 3.1 62 is decreasing as p is increasing, so cp should be decreasing too. However, as p increases, cp become relatively large and the resulting curves look quite wiggly. Nevertheless, cio = 0 is obtained. It is noteworthy that unlike what happens in regression, the fit here is found to be fairly robust with respect to the degree of the polynomial within the range of the data. When Cio = 0, we can apply the backfitting procedure descibed in section 3.2 to obtain cp , the backfitted values of cp, p = 0, ...,9. 
The results are summa- rized in Table n and the corresponding locally polynomial fits are portrayed in 33 TABLE I: Estimated values of XV(c) for a l o c a l l y constant f i t to the pH77 and M.A. pH data. c XV(c) (pH77) XV(c) (M.A. pH) 0 6.0485 3.5161 3.2 * 4.5944 3.0132 128.0 ** 3.8339 3.2376 1 . OE+3 4.0742 3.2611 1.OE+4 4.5993 3.2642 1.0E+5 4.7845 3.2644 1.0E+6 4.8083 3.2646 1.OE+7 4.8108 3.2647 1.OE+8 4.8110 3.2648 1.0E+9 4.8111 3.2648 1.0E+10 4.8111 3.2648 o o 4.8112 3.2648 cross-validatory estimate of c for the M.A. pH data cross-validatory estimate of c for the pH77 data * ** 33a TABLE I I : E s t i m a t e d v a l u e s of Cp f o r the pH77 d a t a . Order (p) A Cp * rt Cp ** 0 128 45 1 1750 4034 2 1028 75721 3 350000 279977 4 1715000 452421 5 2486000 171752 6 616000 471 19 7 46300 1 989 8 4700 1 07 9 400 .002 1 0 0 A * Cp : Cp o b t a i n e d by the c r o s s - v a l i d a t i o n method * Cp : Cp o b t a i n e d by the b a c k f i t t i n g method 33b FIG. 1: A comparison of the l o c a l l y polynomial f i t s of di f f e r e n t order p to the pH77 data, (using the cross-validation method) B, p=2 ( s o l i d line) p=3 (broken line) p=4 (dotted line) „ p=4 ( s o l i d l ine) p=7 (broken line) p=1u(dotted line) 300.0 350.0 « a . o 450.0 500 0 TIME (DATI 550.0 600.0 650.0 700.0 800.0 33c FIG. 2 : A comparison of the l o c a l l y polynomial f i t s of di f f e r e n t order p to the pH77 data, (using the ba c k f i t t i n g method) B. p=2 ( s o l i d line) p=3 (broken line) p=4 (dotted line) C. p=4 ( s o l i d line) p=7 (broken line) p=9 (dotted line) 300.0 350.0 4)0 .0 450.0 500.0 TIME lOflTI 550.0 600.0 700.0 750.0 8 0 0 0 33d Figure 2(A-C). In general, the estimated values of cp are comparatively smaller than the corresponding cp except in the cases of locally linear and quadratic fits. The locally constant and quadratic fits obtained by these two methods are compared in Figure 3. The resulting curves do not change drastically unless the difference between cp and cp is greater than a factor of 10. As a final note about Figures 1 and 2, the locally polynomial fits emerge from the data at either extremity with a marked upward or downward trend as the order p increases. This is inevitable; as WZ point out , for large the a,'s become the least squares polynomial estimates. For for the M.A. pH data, the estimated values of cp and cp are given in Table III and the corresponding locally polynomial fits are displayed in Figure 4(A-C) and 5(A-C). Since the estimated cp's are relatively smaller, the corre- sponding locally polynomial fits obtained by the backfitting method are compar- atively smoother than those obtained by the cross-validation method. In Figure 6 are contrasted the results of the locally constant and quartic fits obtained by these two methods. However, little difference is observed. The credibility intervals for the locally quadratic and cubic fits are portrayed in Figure 7. We must be careful in interpreting those credibility intervals. The bands in Figure 7 are not simultaneous interval estimates. Those in Figure 7 merely indicate the pointwise credibility interval for 5 ( f n + i ) at each tn+i. It is clear that the bands are blowing up at either end of the data. This is consistent with the unsatisfactory predictions revealed in Figures 1 and 2. Using the backfitting method, similar results are obtained though they are not presented here. 
[FIG. 3: A comparison of the locally constant fits (top) and quadratic fits (bottom) using the cross-validation method (solid line) and the backfitting method (dotted line) for the pH77 data. Horizontal axis: TIME (DAY).]

TABLE III: Estimated values of c_p for the M.A. pH data.

    Order (p)    c_p (cross-validation)    c_p (backfitting)
    0            3.20629                   .94601
    1            1.65739                   .39114
    2            .17786                    .02716
    3            .06700                    .00179
    4            .08250                    .00018
    5            0                         0
    6            0                         0
    7            0                         0
    8            0                         0
    9            0                         0
    10           0                         (fixed at 0)

[FIG. 4 (A-C): Locally polynomial fits of different order p to the M.A. pH data, using the cross-validation method. Panel A: p = 0 (solid), p = 1 (broken), p = 2 (dotted); panel B: p = 2 (solid), p = 3 (broken), p = 4 (dotted); panel C: p = 4 (solid), p = 7 (broken), p = 10 (dotted). Horizontal axis: TIME (DAY).]

[FIG. 5 (A-C): Locally polynomial fits of different order p to the M.A. pH data, using the backfitting method. Panel A: p = 0 (solid), p = 1 (broken), p = 2 (dotted); panel B: p = 2 (solid), p = 3 (broken), p = 4 (dotted); panel C: p = 4 (solid), p = 7 (broken), p = 9 (dotted). Horizontal axis: TIME (DAY).]

[FIG. 6: A comparison of the locally constant fits (top) and quartic fits (bottom) using the cross-validation method (solid line) and the backfitting method (dotted line) for the M.A. pH data. Horizontal axis: TIME (DAY).]

[FIG. 7: A comparison of the credibility intervals for the locally quadratic (solid line) and cubic (broken line) fits for the pH77 (top) and the M.A. pH (bottom) data, using the cross-validation method. Horizontal axis: TIME (DAY).]
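The pointwise bands of Figure 7 follow directly from the posterior of section 3.1. A minimal sketch, again reusing the Taylor design of wz_smooth and assuming sigma2 is supplied rather than estimated:

```python
# Sketch of the pointwise 95% credibility interval: a posteriori beta_0 is
# normal with variance sigma2 * [(X^T H^{-1} X)^{-1}]_{00}, giving
# alpha_hat +/- 1.96 * s.e.(alpha_hat) at each target point t*.
import numpy as np
from math import factorial

def wz_interval(t, r, t_star, p, c, sigma2):
    d = t - t_star
    X = np.column_stack([d**j / factorial(j) for j in range(p + 1)])
    h_inv = 1.0 / (1.0 + c * np.abs(d) ** (2 * p + 2))
    F = np.linalg.inv((X.T * h_inv) @ X)     # a posteriori covariance / sigma2
    alpha = (F @ ((X.T * h_inv) @ r))[0]
    se = np.sqrt(sigma2 * F[0, 0])
    return alpha - 1.96 * se, alpha + 1.96 * se
```

Evaluating this at target points outside the data range reproduces the widening of the bands at either extremity noted above, since F[0, 0] grows as all the |t_i - t*| become large.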
However, the width of the credibility interval for the locally linear fit increases in the gap in accordance with our expectations. 4.3 R e s u l t s of choosing t h e "optimal" order p. Applying the procedure described in section 3.3, the estimated values of PSE(p) for the pH77 data are obtained and presented in Tables IV and V . T h e results clearly indicate that the optimal order is p = 0 since the locally con- stant fit attains the minimum PSE for all of the spans of observations, m , we used. Because the data are not monotone and the noise is significant, the best fit is simply obtained by taking the weighted average of the observations in the span. At the same time, we have compared our results with the simplest smoother, the moving mean, where f, is estimated by the average of the observations in the span. T h e comparisons are reported in Tables IV and V . Although the 35 TABLE IV: E s t i m a t e d v a l u e s of PSE(p) f o r the pH77 d a t a , ( p r e d i c t one o b s e r v a t i o n ahead) p m 15 20 30 40 50 60 70 0 .0738 .0773 .0639 .0615 .0718 .0851 .0393 1 .1298 . 1 305 . 1 009 .0906 .0795 .0986 .0492 2 .3804 .3484 .2300 . 1 657 .1470 . 1 540 .0586 3 5.089 3 . 7 1 9 .5573 .4074 .3671 .2573 .0533 4 39.08 19.21 1 .764 1 .330 .7226 .3866 .0709 5 381 .6 1 02.8 7.905 5.760 1 .795 .471 9 .1126 6 1768 .8 461 .8 18.21 22.68 1 .994 .5999 . 1 643 7 63212 3159.3 46.04 27.64 2.818 .7892 .2380 8 2769231 81 920 378.2 3 1.22 4.956 .9524 . 2700 9 221 31 30632 68.36 5.792 1 .021 .4001 . 1 977 moving mean .0673 .0762 .0848 .1026 .0940 .1064 .0678 35a TABLE V: E s t i m a t e d v a l u e s of PSE(p) f o r the pH77 d a t a . ( p r e d i c t the average of 10 o b s e r v a t i o n s ahead) m P 1 5 20 30 40 50 60 0 .0335 .0328 .0289 ' .0311 .0387 .01 23 1 . 1 250 . 1 004 .0693 .0746 .0822 .091 9 2 1 .491 1 . 1 05 .7047 .5908 .5588 .7217 3 38.67 18.06 9.541 8.317 6.819 8.919 4 1176.4 430.9 137.4 113.7 63.56 64.48 5 39996 9323.5 1938.8 1408.3 537.2 333 .4 6 1.4E+6 2.5E+5 3751 0 1 9741 4108.5 1504.1 7 7.9E+7 5.4E+6 6.3E+5 2.2E+5 18543 3038.3 8 1 .2E + 9 7.7E+8 9.5E+6 1.2E+6 75790 5949.6 9 3.5E+9 2.4E+10 2.4E+6 25596 1841.4 362 . 1 moving mean .0428 .0507 .0582 .0497 .0342 .01 94 35b moving mean has a simple model formulation, it is not capable of giving a better prediction than the locally constant fit. For the M.A. pH data, there is smaller variation in the pH values over the range of time, and again the locally constant fit is best though the results are not shown here. We have tested the efficacy of this procedure on two other data sets. Data set 1: We apply the procedure to data on the annual catch of lobsters by Maine fishermen taken from Morrison (1983) ( abbreviated hereafter by Lobster data ). The data consists of the number of registered lobsters traps used by Maine fishermen over the years 1932-75. The correlation coefficient between these two variables is close to 0.9 and this strongly suggests a linear dependence of the number of traps on time. This is supported by the results given in Table VI. The locally constant, linear and quadratic fits are compared in Figure 8A. These three curves are quite similar to each other even though the locally linear fit minimizes the PSE. The credibility intervals of the locally linear fit are portrayed in Figure 8B. As we have pointed out above such b a n d 3 are only pointwise interval estimates. Data set 2: One hundred equally spaced points were generated from the sinusoid y = 2 cos x — 5 sin z + 6 on the interval [0,10]. 
We apply the same procedure to the first 80 points and then compare the predictions of the last 20 points using different locally polynomial fits. As the results in Table VII show, the PSE is minimized by different orders p, depending on the size of the span m. For interpolation, all the locally polynomial fits appear to pass through the data points equally well, since no noise exists in the data. These situations are illustrated in Figures 9A and 9B. Figure 9B indicates that the locally 9th order polynomial fit gives the best prediction of the last 20 data points. From these two examples, we can see that the proposed procedure is capable of selecting the locally polynomial fit which captures the underlying pattern of the data.

TABLE VI: Estimated values of PSE(p) for the Lobster data: predict one observation ahead (top) and predict the average of 10 observations ahead (bottom).

                    m
   p           20         30
   0       3.4E+4     5.8E+4
   1       3.3E+4     5.6E+4
   2       8.1E+4     1.3E+5
   3       2.0E+5     3.1E+5
   4       4.9E+5     7.1E+5
   5       1.1E+6     1.5E+6
   6       2.6E+6     2.7E+6
   7       6.4E+6     3.6E+6
   8       1.7E+7     3.3E+6
   9       2.8E+6     1.6E+6
   moving
   mean    1.7E+5     3.5E+5

                    m
   p           20         30
   0       7.7E+4     1.4E+5
   1       1.3E+4     2.2E+4
   2       6.7E+4     5.1E+4
   3       4.3E+5     2.3E+5
   4       4.0E+6     1.6E+6
   5       4.7E+7     1.2E+7
   6       6.1E+8     9.5E+7
   7       9.4E+9     1.4E+9
   8      1.7E+11    2.5E+10
   9      4.9E+12    4.8E+10
   moving
   mean    1.5E+5     3.3E+5

FIG. 8A: A comparison of the locally constant (broken line), linear (solid line) and quadratic (dotted line) fits for the Lobster data (using the backfitting method). [Plot not reproduced; horizontal axis: TIME (YEAR), 1932-1975.]

FIG. 8B: Locally linear fit with the credibility intervals for the Lobster data (using the backfitting method).

TABLE VII: Estimated values of PSE(p) for the sinusoid (y = 3cos x - 5sin x + 6): predict one observation ahead (top) and predict the average of 10 observations ahead (bottom).

                            m
   p           20         40         60
   0       7.8E-1     9.8E-1     1.1E0
   1       1.5E-2     1.8E-2     1.6E-2
   2       2.4E-4     5.2E-4     6.4E-4
   3       3.3E-6     6.6E-6     1.0E-5
   4       4.3E-8     1.7E-7     2.4E-7
   5       2.0E-9     3.2E-9     1.0E-8
   6       4.5E-9     1.5E-9     1.0E-9
   7       1.4E-8     1.3E-9     6.0E-10
   8       4.2E-8     1.3E-9     4.0E-10
   9       1.5E-8     4.0E-10    1.2E-9

                            m
   p           20         40         60
   0       1.2E+1     1.2E+1     1.2E+1
   1       4.6E0      6.3E0      9.3E0
   2       6.8E-1     1.7E0      6.3E-1
   3       1.0E-1     3.0E-1     9.0E-1
   4       6.5E-3     3.7E-2     1.5E-2
   5       5.6E-4     3.4E-3     1.6E-2
   6       4.0E-5     2.4E-4     1.8E-4
   7       2.9E-4     5.0E-5     1.1E-4
   8       1.9E-2     3.5E-4     3.0E-5
   9       1.6E+6     9.0E-5     1.0E-5

To conclude this section, it should be added that we have examined the behaviour of the yearly differences of the M.A. pH data over the range of time, D(t_{n+1}) = R(t_{n+1}) - R(t_{n+1} - 1 year). This is done to see if any seasonality existed over the years. Figure 10 is a plot of the locally constant fit with credibility intervals. No apparent seasonality is observed in the picture. However, we can see that D(t_{n+1}) increases and decreases in alternating years; such a yearly cycle in the change of pH is clearly shown. Of course, related subject knowledge would be needed to explain this phenomenon.
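The differencing just described can be sketched as follows (an illustration only, not one of the thesis programs; pairing each observation with one recorded exactly 365 days earlier is our simplifying assumption, since the wet-deposition events are irregularly spaced in time):

      SUBROUTINE YDIFF(Y,L,M,D,LD,ND)
C     SKETCH ONLY: YEARLY DIFFERENCES D(T) = R(T) - R(T - 1 YEAR).
C     FOR EACH TIME L(J), LOOK FOR AN OBSERVATION RECORDED EXACTLY
C     365 DAYS EARLIER; THE EXACT-MATCH REQUIREMENT IS A
C     SIMPLIFYING ASSUMPTION.
      INTEGER M,ND,I,J,L(M),LD(M)
      REAL*8 Y(M),D(M)
      ND=0
      DO 20 J=1,M
      DO 10 I=1,J
      IF (L(J)-L(I).NE.365) GO TO 10
      ND=ND+1
      D(ND)=Y(J)-Y(I)
      LD(ND)=L(J)
   10 CONTINUE
   20 CONTINUE
      RETURN
      END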
FIG. 9A: A comparison of the locally linear (broken line), 5th order polynomial (solid line) and 10th order polynomial (dotted line) fits for the sinusoid (y = 3cos x - 5sin x + 6) data (using the cross-validation method). [Plot not reproduced; horizontal axis: X, 0-9.9.]

FIG. 9B: A comparison of the locally 5th order polynomial (solid line), 7th order polynomial (broken line) and 9th order polynomial (dotted line) fits for the sinusoid (y = 3cos x - 5sin x + 6) data (using the backfitting method). [Plot not reproduced; horizontal axis: X, 0-9.9.]

FIG. 10: Locally constant fit with the credibility intervals for the difference of the M.A. pH data (using the backfitting method).

5. CONCLUSIONS.

In section 2, we survey some of the popular nonparametric smoothing methods. A brief comparison of these methods with the WZ-approach is made.

As mentioned in section 3.2, through the examination of the asymptotic behaviour of XV(c) we can be sure that the cross-validatory estimate of c is finite. Unfortunately, the cross-validatory estimates of c are relatively large and the resulting curves are wiggly. The problem is that we allow the data to "speak freely" for themselves but neglect the formulated model. As an alternative, the backfitting method is more coherent with the theory in section 3.1, and the resulting curves are comparatively smoother in the sense that fewer turns are generated. Also, it is computationally fast, and we can estimate c_j for all j <= p within one run once c_{p+1} is fixed.

In section 4, the efficacy of a data-based procedure for choosing the "optimal" order p was demonstrated. Satisfactory results were achieved in the two other examples too. It would be interesting to repeat these tests on data generated by a smooth function plus noise, to see if the smooth function would be recovered. In practice, we can arbitrarily set c_{p+1} = 0 for some high order p and then apply the procedure to choose the appropriate order of locally polynomial fit. Once the order is determined, we can use the backfitting estimate of c to generate the interpolants or predictands with corresponding pointwise credibility intervals (this recipe is sketched at the end of this section). Nevertheless, as we have pointed out above, the fit is fairly robust with respect to the order of the polynomial: a slight difference in the order p would not change the appearance of the resulting curve very much.

On the whole, the proposed approach is flexible and simple to implement. In general, it is easy to generate the locally polynomial fit of any order if it is needed, and an appropriate degree of smoothness can be obtained through the backfitting method. Moreover, this approach does provide explicit estimates which are much different in form from those derived by classical methods.

In conclusion, we suggest that further research be carried out, including

1. a proper Bayesian analysis for controlling the extrapolation of S(t), and

2. the development of the theory for multivariate space-time series and the regression problem.
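To make this recipe concrete, the following driver (a sketch only, not one of the thesis programs; the number of observations, the scaling factor and the selected order are assumptions, and the Appendix routines BFIT and SMOOTH, together with INV and DGMATV from UBC MATRIX, are presumed linked in) fixes c_{p+1} = 0 at the high order p = 10, backfits c_1, ..., c_10 in a single call to BFIT, and then, supposing the PSE criterion of section 3.3 selects the locally linear fit, smooths with the backfitting estimate c_1:

      PROGRAM DRIVER
C     SKETCH OF THE RECIPE IN THE CONCLUSIONS.  ASSUMED: M=80
C     OBSERVATIONS READ FROM UNIT 5, SCALING FACTOR SC=0.01D0.
      IMPLICIT REAL*8 (A-H,O-Z)
      REAL*8 Y(80),B(11),ATA(11,11),ATY(11),PROD(22),
     *ATAI(11,11),TT(10),TTT(10),CC(10),TA(80,10),
     *A(80),TEM(80),UL(80),VL(80)
      INTEGER L(80),IPERM(22)
      M=80
      SC=0.01D0
      READ(5,*) (L(I),Y(I),I=1,M)
C     STEP 1: SET C(P+1)=0 AT THE HIGH ORDER P=10 (NP=11) AND
C     BACKFIT CC(1),...,CC(10) WITHIN ONE RUN.
      C=0.0D0
      CALL BFIT(Y,L,B,ATA,ATY,PROD,ATAI,IPERM,TT,TTT,CC,TA,
     *M,11,SC,C,22,10)
C     STEP 2: SUPPOSING THE PSE CRITERION SELECTS THE LOCALLY
C     LINEAR FIT (NP=2), SMOOTH WITH THE BACKFITTING ESTIMATE
C     CC(1).  KK=0 REQUESTS INTERPOLANTS ONLY, NO PREDICTANDS.
      CALL SMOOTH(Y,L,M,M,2,SC,CC(1),0,UL,VL,PROD,ATY,TEM,A,
     *ATA,ATAI,IPERM,3,4)
      WRITE(6,99) (L(I),A(I),VL(I),UL(I),I=1,M)
   99 FORMAT(I8,3F12.4)
      STOP
      END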
BIBLIOGRAPHY

Becker, R. A. and Chambers, J. M. (1984): S: An Interactive Environment for Data Analysis and Graphics. California: Wadsworth.

Bickel, P. J. and Rosenblatt, M. (1973): On some global measures of the deviations of density function estimates. Ann. Statist. 1 1071-1095.

Bierens, H. J. (1983): Sample moments integrating normal kernel estimators of multivariate density and regression functions. Sankhya B 45 160-192.

Brown, R. G. and Meyer, R. F. (1961): The fundamental theorem of exponential smoothing. Operations Research 9 673-685.

Clark, R. M. (1977): Nonparametric estimation of a smooth regression function. J. R. Statist. Soc. B 39 107-113.

Cleveland, W. S. (1979): Robust locally weighted regression and smoothing scatterplots. J. Amer. Statist. Assoc. 74 829-836.

Collomb, G. (1981): Estimation non paramétrique de la régression: revue bibliographique. Inter. Statist. Review 49 75-93.

Craven, P. and Wahba, G. (1979): Smoothing noisy data with spline functions. Numer. Math. 31 377-403.

Dubrule, O. (1983): Two methods with different objectives: splines and kriging. Math. Geology 15 245-257.

Dubrule, O. (1984): Comparing splines and kriging. Computers and Geosciences 10 327-338.

Enns, P. G., Machak, J. A., Spivey, W. A. and Wrobleski, W. J. (1982): Forecasting applications of an adaptive multiple exponential smoothing model. Management Science 28 1035-1044.

Friedman, J. H. (1984): A variable span smoother. Stanford University LCS Technical Report No. 5.

Gardner, E. S. (1985): Exponential smoothing: the state of the art. J. Forecasting 4 1-28.

Gentleman, R., Zidek, J. and Olsen, R. (1985): MAP3S/PCN acid rain precipitation monitoring data base. University of British Columbia Statistics Department Technical Report No. 19.

Härdle, W. and Marron, J. S. (1985): Optimal bandwidth selection in nonparametric regression function estimation. Ann. Statist. 13 1465-1481.

Harvey, A. C. (1984): Analysis and generalization of a multivariate exponential smoothing model. LSE Econometrics Programme Discussion Paper No. A.41.

Huber, P. J. (1979): Robust smoothing. In Robustness in Statistics (R. L. Launer and G. N. Wilkinson, eds.). New York: Academic Press.

IMSL (1980): International Mathematical and Statistical Libraries, Inc., Subroutine ICSSCV.

IMSL (1982): International Mathematical and Statistical Libraries, Inc., Subroutine ZXLSF.

Jones, R. H. (1966): Exponential smoothing for multivariate time series. J. R. Statist. Soc. B 28 241-251.

Lenth, R. V. (1977): Robust splines. Comm. in Statist. A 6 847-854.

Lindley, D. V. and Smith, A. F. M. (1972): Bayes estimates for the linear model. J. R. Statist. Soc. B 34 1-18.

Makridakis, S., Andersen, A., Carbone, R., Fildes, R., Hibon, M., Newton, J., Lewandowski, R., Parzen, E. and Winkler, R. (1982): The accuracy of extrapolation (time series) methods: results of a forecasting competition. J. of Forecasting 1 111-153.

Makridakis, S. and Wheelwright, S. C. (1978): Interactive Forecasting: Univariate and Multivariate Methods (2nd ed.). San Francisco: Holden-Day.

Morrison, D. F. (1983): Applied Linear Statistical Methods. Prentice-Hall.

Nadaraya, E. A. (1964): On estimating regression. Theory Probab. Appl. 9 141-142.

Priestley, M. B. and Chao, M. T. (1972): Nonparametric function fitting. J. R. Statist. Soc. B 34 385-392.

Reinsch, C. H. (1967): Smoothing by spline functions. Numer. Math. 10 177-183.

Rosenblatt, M. (1956): Remarks on some nonparametric estimates of a density function. Ann. Math. Statist. 27 832-837.

Rosenblatt, M. (1971): Curve estimates. Ann. Math. Statist. 42 1815-1842.

Schoenberg, I. J. (1964): Spline functions and the problem of graduation. Proc. Nat. Acad. Sci. USA 52 947-950.

Schuster, E. F. (1972): Joint asymptotic distribution of the estimated regression function at a finite number of distinct points. Ann. Math. Statist. 43 84-88.
Silverman, B. W. (1982): Kernel density estimation using the fast Fourier transform. Appl. Statist. 31 93-99.

Silverman, B. W. (1984): A fast and efficient cross-validation method for smoothing parameter choice in spline regression. J. Amer. Statist. Assoc. 79 584-589.

Silverman, B. W. (1985): Some aspects of the spline smoothing approach to nonparametric regression curve fitting. J. R. Statist. Soc. B 47 1-21.

Stone, C. J. (1977): Consistent nonparametric regression. Ann. Statist. 5 595-645.

Stone, M. (1974): Cross-validatory choice and assessment of statistical predictions. J. R. Statist. Soc. B 36 111-133.

Trigg, D. W. and Leach, A. G. (1967): Exponential smoothing with an adaptive response rate. Opl. Res. Q. 18 53-63.

UBC MATRIX (1982): A guide to solving matrix problems. University of British Columbia Computing documentation.

Utreras, F. I. (1979): Cross-validation techniques for smoothing spline functions in one or two dimensions. In Smoothing Techniques for Curve Estimation (T. Gasser and M. Rosenblatt, eds.). Lecture Notes in Mathematics, No. 757. Springer-Verlag, Berlin.

Utreras, F. I. (1981): On computing robust splines and applications. SIAM J. Sci. Stat. Comput. 2 153-163.

Wahba, G. (1980): Spline bases, regularization, and generalized cross-validation for solving approximation problems with large quantities of noisy data. In Approximation Theory III (E. W. Cheney, ed.). New York: Academic Press.

Wahba, G. (1983): Bayesian "confidence intervals" for the cross-validated smoothing spline. J. R. Statist. Soc. B 45 133-150.

Wahba, G. (1984a): Cross-validated spline methods for the estimation of multivariate functions from data on functionals. In Statistics: An Appraisal (H. A. David and H. T. David, eds.). The Iowa State University Press.

Wahba, G. (1984b): Surface fitting with scattered noisy data on Euclidean d-space and on the sphere. Rocky Mountain Journal of Mathematics 14 281-299.

Wahba, G. and Wendelberger, J. (1980): Some new mathematical methods for variational objective analysis using splines and cross-validation. Monthly Weather Review 108 1122-1143.

Wahba, G. and Wold, S. (1975): A completely automatic French curve: fitting spline functions by cross-validation. Comm. in Statist. 4 1-17.

Watson, G. S. (1964): Smooth regression analysis. Sankhya A 26 359-372.

Watson, G. S. (1984): Smoothing and interpolation by kriging and with splines. Math. Geology 16 601-615.

Weerahandi, S. and Zidek, J. (1985): Smoothing locally smooth processes by Bayesian nonparametric methods. University of British Columbia Statistics Department Technical Report No. 26.

Weerahandi, S. and Zidek, J. (1986): Analysis of multiple time series by Bayesian and empirical Bayesian nonparametric methods. University of British Columbia Statistics Department Technical Report No. 33.

Wegman, E. J. (1980): Two approaches to nonparametric regression: splines and isotonic inference. In Recent Developments in Statistical Inference and Data Analysis (K. Matusita, ed.). North-Holland.

Wegman, E. J. and Wright, I. W. (1983): Splines in statistics. J. Amer. Statist. Assoc. 78 351-365.

Whittaker, E. T. (1923): On a new method of graduation. Proc. Edinburgh Math. Soc. 41 63-75.

Winters, P. R. (1960): Forecasting sales by exponentially weighted moving averages. Management Sci. 324-342.
APPENDIX

This appendix contains Fortran subprograms for the computations of (i) the smoothing parameter c by cross-validation, (ii) the smoothing parameter c by "backfitting", (iii) the smooth curve estimates with corresponding credibility intervals, and (iv) the values of PSE.

      DOUBLE PRECISION FUNCTION F(C)
C
C     THIS IS A ROUTINE FOR COMPUTING THE SMOOTHING
C     PARAMETER C BY USING THE CROSS-VALIDATION METHOD
C     THROUGH THE IMSL MINIMIZING ROUTINE ZXLSF, WHICH
C     CAN BE REPLACED BY ANOTHER MINIMIZING ROUTINE.
C     THE CALLING PROGRAM MUST INCLUDE THE STATEMENT
C     "CALL ZXLSF(F,X,STEP,BOUND,XACC,MAXFN,IER)".
C     DOUBLE PRECISION IS USED TO IMPROVE THE ACCURACY
C     OF ESTIMATION.
C
C     INPUT VARIABLES ARE:
C     Y    = ARRAY OF OBSERVED VALUES (ORDERED IN TIME)
C     L    = ARRAY OF TIME-VALUES (ORDERED IN TIME)
C     M    = DIM. OF ARRAYS Y AND L
C     NP   = THE ORDER OF THE LOCALLY POLYNOMIAL FIT PLUS 1;
C            IN THIS CASE, NP=1, THAT IS, LOCALLY CONSTANT FIT
C     NP2  = 2 x NP
C     NP1  = NP2 - 1
C     SC   = THE SCALING FACTOR
C     PROD = WORKING ARRAY OF DIM. NP2
C     ATY  = WORKING ARRAY OF DIM. NP
C     ARRAYS ATA AND ATAI OF DIM. NP x NP, AND WORKING ARRAY
C     IPERM OF DIM. NP2, ARE USED IN THE SUBROUTINE INV
C     (ALL THE ABOVE VARIABLES MUST BE DEFINED IN THE CALLING
C     PROGRAM)
C
C     OUTPUT VARIABLES ARE:
C     C    = ESTIMATE OF THE SMOOTHING PARAMETER
C     SUM  = CROSS-VALIDATED SUM OF SQUARED ERRORS
C
C     SUBROUTINES CALLED BY F ARE:
C     INV (CAN BE FOUND IN THE DOCUMENTATION: UBC MATRIX).
C     INV CAN BE REPLACED BY ANOTHER MATRIX INVERSE ROUTINE
C     WHERE ATA IS INPUT AND ATAI IS OUTPUT.
C
      REAL*8 ATA(1,1),ATAI(1,1),ATY(1),PROD(2),Y(80),
     *L(80),IPERM(2)
C
C     THE DIMENSIONS OF THE ABOVE ARRAYS MUST BE SPECIFIED;
C     IN THIS CASE, NP=1, THAT IS, LOCALLY CONSTANT FIT.
C
      REAL*8 SC,Z,C,SUM,TEMP,V,W,P,YY,BB
      COMMON Y,L,M,NP,SC,NP1,NP2
C
C     THE ABOVE "COMMON" STATEMENT IS USED IN ORDER TO PASS
C     THE PARAMETERS FROM THE CALLING ROUTINE TO THIS
C     ROUTINE. THE CALLING PROGRAM MUST INCLUDE THE SAME
C     "COMMON" STATEMENT.
C
      Z=DABS(C)
      SUM=0.D0
      DO 25 J=1,M
      BB=0.D0
      DO 5 JJ=1,NP
    5 ATY(JJ)=0.D0
      DO 10 II=1,NP1
   10 PROD(II)=0.D0
      NT=L(J)
      DO 20 I=1,M
      IF(I.EQ.J) GO TO 20
      P=SC*(L(I)-NT)
      W=1.D0/(1.D0+Z*(P**NP2))
      V=W
      DO 12 II=1,NP1
      PROD(II)=PROD(II)+V
      V=V*P
   12 CONTINUE
      V=W
      YY=Y(I)
      DO 14 JJ=1,NP
      ATY(JJ)=ATY(JJ)+V*YY
      V=V*P
   14 CONTINUE
   20 CONTINUE
      DO 22 II=1,NP
      DO 22 JJ=1,NP
      ATA(II,JJ)=PROD(II+JJ-1)
   22 CONTINUE
C
C     COMPUTE THE REGRESSION COEFFICIENTS
C
      CALL INV(NP,NP,ATA,IPERM,NP,ATAI,DET,JEXP,COND)
      DO 30 I=1,NP
      BB=BB+ATAI(1,I)*ATY(I)
C
C     BB IS THE CROSS-VALIDATORY ESTIMATE OF THE SMOOTH
C
   30 CONTINUE
      SUM=SUM+(Y(J)-BB)**2
   25 CONTINUE
      WRITE(6,99) Z,SUM
   99 FORMAT(E18.7,2X,E15.8)
C
C     Z IS THE ABSOLUTE VALUE OF C AND SUM IS THE CROSS-
C     VALIDATED SUM OF SQUARED ERRORS. THEY ARE OUTPUT SO
C     THAT THE VALUE OF C WHICH MINIMIZES SUM CAN BE
C     DETERMINED.
C
      F=SUM
      RETURN
      END

      SUBROUTINE BFIT(Y,L,B,ATA,ATY,PROD,ATAI,IPERM,TT,TTT,
     *CC,TA,M,NP,SC,C,NP2,N)
C
C     THIS IS A ROUTINE FOR COMPUTING THE SMOOTHING
C     PARAMETER C BY USING THE BACKFITTING METHOD.
C
C     INPUT VARIABLES ARE:
C     Y    = ARRAY OF OBSERVED VALUES (ORDERED IN TIME)
C     L    = ARRAY OF TIME-VALUES (ORDERED IN TIME)
C     M    = DIM. OF ARRAYS Y AND L
C     NP   = THE ORDER OF THE LOCALLY POLYNOMIAL FIT PLUS 1;
C            IN THIS CASE, NP=11
C     SC   = THE SCALING FACTOR
C     C    = THE SMOOTHING PARAMETER OF THE 10TH ORDER LOCALLY
C            POLYNOMIAL FIT; IN THIS CASE, C=0.0D0
C     TA   = ARRAY OF THE DERIVATIVES OF THE SMOOTH OF
C            DIM. M x N
C     NP2  = 2 x NP
C     N    = NP - 1
C     PROD = WORKING ARRAY OF DIM. NP2
C     B, TT, TTT ARE WORKING ARRAYS OF DIM. NP
C     ATA AND ATAI OF DIM. NP x NP, ATY OF DIM. NP AND
C     IPERM OF DIM. NP2 ARE ARRAYS USED IN THE SUBROUTINES
C     INV AND DGMATV
C     (ALL THE ABOVE VARIABLES MUST BE DEFINED IN THE
C     CALLING PROGRAM.)
C
C     OUTPUT VARIABLES ARE:
C     CC(K) = SMOOTHING PARAMETER OF THE KTH ORDER LOCALLY
C             POLYNOMIAL FIT, K=1,...,N
C
C     SUBROUTINES CALLED BY BFIT ARE:
C     INV AND DGMATV (CAN BE FOUND IN THE DOCUMENTATION:
C     UBC MATRIX). INV CAN BE REPLACED BY ANOTHER MATRIX
C     INVERSE ROUTINE WHERE ATA IS INPUT AND ATAI IS
C     OUTPUT. DGMATV CAN BE REPLACED BY ANOTHER MATRIX
C     MULTIPLICATION ROUTINE WHERE ATAI AND ATY ARE INPUTS
C     AND B IS OUTPUT.
C
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION Y(M),L(M),B(NP),ATA(NP,NP),ATY(NP),
     *PROD(NP2),ATAI(NP,NP),IPERM(NP2),TT(N),TTT(N),CC(N),
     *TA(M,N)
      NP1=NP2-1
      DD=0.0D0
C
C     COMPUTE THE SAMPLE VARIANCE OF THE NOISE DD AND
C     THE DERIVATIVES OF THE SMOOTH TA
C
      DO 2 I=1,N
      TT(I)=0.0D0
    2 TTT(I)=0.0D0
      DO 25 J=1,M
      DO 5 JJ=1,NP
    5 ATY(JJ)=0.D0
      DO 10 II=1,NP1
   10 PROD(II)=0.D0
      NT=L(J)
      DO 19 I=1,M
      P=SC*(L(I)-NT)
      W=1.D0/(1.D0+C*(P**NP2))
      V=W
      DO 12 II=1,NP1
      PROD(II)=PROD(II)+V
      V=V*P
   12 CONTINUE
      V=W
      YY=Y(I)
      DO 14 JJ=1,NP
      ATY(JJ)=ATY(JJ)+V*YY
      V=V*P
   14 CONTINUE
   19 CONTINUE
      DO 22 II=1,NP
      DO 22 JJ=1,NP
      ATA(II,JJ)=PROD(II+JJ-1)
   22 CONTINUE
      CALL INV(NP,NP,ATA,IPERM,NP,ATAI,DET,JEXP,COND)
      CALL DGMATV(ATAI,ATY,B,NP,NP,NP)
      TEMP=ATAI(1,1)
      DD=DD+(Y(J)-B(1))**2/(1.D0+TEMP)
      DO 21 K=1,N
      TA(J,K)=B(K+1)
   21 TT(K)=TT(K)+TA(J,K)
   25 CONTINUE
      DD=DD/(M-NP)
C
C     COMPUTE THE SMOOTHING PARAMETERS CC(K), K=1,...,N
C
      DO 44 I=1,N
   44 TT(I)=TT(I)/M
      DO 55 I=1,N
      DO 66 K=1,M
   66 TTT(I)=TTT(I)+(TA(K,I)-TT(I))**2
   55 CONTINUE
      DO 77 I=1,N
   77 TTT(I)=TTT(I)/(M-1)
      DO 88 K=1,N
      CC(K)=TTT(K)/DD
   88 CONTINUE
      RETURN
      END

      SUBROUTINE SMOOTH(Y,L,MM,M,NP,SC,C,KK,UL,VL,
     *PROD,ATY,TEM,A,ATA,ATAI,IPERM,NP1,NP2)
C
C     THIS IS THE ROUTINE FOR COMPUTING THE INTERPOLANTS
C     AND THE PREDICTANDS OF THE SMOOTH CURVE WITH THE
C     CORRESPONDING CREDIBILITY INTERVALS.
C
C     INPUT VARIABLES ARE:
C     Y    = ARRAY OF OBSERVED VALUES (ORDERED IN TIME)
C     L    = ARRAY OF TIME-VALUES (ORDERED IN TIME)
C     MM   = THE TOTAL NUMBER OF THE INTERPOLANTS AND THE
C            PREDICTANDS OF THE SMOOTH CURVE
C     M    = TOTAL NUMBER OF OBSERVATIONS OF Y
C     NP   = THE ORDER OF THE LOCALLY POLYNOMIAL FIT PLUS 1
C     SC   = THE SCALING FACTOR
C     C    = THE CROSS-VALIDATORY OR THE BACKFITTING
C            SMOOTHING PARAMETER
C     NP2  = 2 x NP
C     NP1  = NP2 - 1
C     KK   = THE NUMBER OF PREDICTANDS BEYOND EITHER
C            SIDE OF THE RANGE OF DATA
C     PROD = WORKING ARRAY OF DIM. NP2
C     ATY  = WORKING ARRAY OF DIM. NP1
C     TEM  = WORKING ARRAY OF DIM. MM
C     ARRAYS ATA AND ATAI OF DIM. NP x NP AND WORKING
C     ARRAY IPERM OF DIM. NP2 ARE USED IN THE SUBROUTINE INV
C     (ALL THE ABOVE VARIABLES MUST BE DEFINED IN THE
C     CALLING PROGRAM.)
C
C     OUTPUT VARIABLES ARE:
C     A    = ARRAY OF DIM. MM OF THE INTERPOLANTS OR THE
C            PREDICTANDS OF THE SMOOTH CURVE
C     UL   = ARRAY OF DIM. MM OF THE UPPER LIMITS OF THE
C            CREDIBILITY INTERVALS
C     VL   = ARRAY OF DIM. MM OF THE LOWER LIMITS OF THE
C            CREDIBILITY INTERVALS
C     SE   = STANDARD ERROR OF THE INTERPOLANTS OR THE
C            PREDICTANDS
C
C     SUBROUTINES CALLED BY SMOOTH ARE:
C     INV (CAN BE FOUND IN THE DOCUMENTATION: UBC MATRIX).
C     INV CAN BE REPLACED BY ANOTHER MATRIX INVERSE
C     ROUTINE WHERE ATA IS INPUT AND ATAI IS OUTPUT.
C
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION Y(MM),L(MM),ATA(NP,NP),ATY(NP),PROD(NP2),
     *ATAI(NP,NP),IPERM(NP2),A(MM),TEM(MM),UL(MM),VL(MM)
      N=NP-1
      K1=KK+1
      K2=KK+M
      DD=0.D0
      DO 2 I=1,KK
      L(I)=5*(I-1)+300
    2 L(I+92)=5*(I-1)+730
C
C     COMPUTE THE SAMPLE VARIANCE OF THE NOISE DD
C
      DO 25 J=K1,K2
      BB=0.D0
      DO 5 JJ=1,NP
    5 ATY(JJ)=0.D0
      DO 10 II=1,NP1
   10 PROD(II)=0.D0
      NT=L(J)
      DO 19 I=K1,K2
      P=SC*(L(I)-NT)
      W=1.D0/(1.D0+C*(P**NP2))
      V=W
      DO 12 II=1,NP1
      PROD(II)=PROD(II)+V
      V=V*P
   12 CONTINUE
      V=W
      YY=Y(I)
      DO 14 JJ=1,NP
      ATY(JJ)=ATY(JJ)+V*YY
      V=V*P
   14 CONTINUE
   19 CONTINUE
      DO 22 II=1,NP
      DO 22 JJ=1,NP
      ATA(II,JJ)=PROD(II+JJ-1)
   22 CONTINUE
      CALL INV(NP,NP,ATA,IPERM,NP,ATAI,DET,JEXP,COND)
      DO 30 I=1,NP
      BB=BB+ATAI(1,I)*ATY(I)
   30 CONTINUE
      TEMP=ATAI(1,1)
      DD=DD+(Y(J)-BB)**2/(1.D0+TEMP)
   25 CONTINUE
      DD=DD/(M-NP)
C
C     COMPUTE THE STANDARD ERROR SE
C
      DO 29 J=1,MM
      BB=0.D0
      DO 6 JJ=1,NP
    6 ATY(JJ)=0.D0
      DO 16 II=1,NP1
   16 PROD(II)=0.D0
      NT=L(J)
      DO 49 I=K1,K2
      P=SC*(L(I)-NT)
      W=1.D0/(1.D0+C*(P**NP2))
      V=W
      DO 27 II=1,NP1
      PROD(II)=PROD(II)+V
      V=V*P
   27 CONTINUE
      V=W
      YY=Y(I)
      DO 74 JJ=1,NP
      ATY(JJ)=ATY(JJ)+V*YY
      V=V*P
   74 CONTINUE
   49 CONTINUE
      DO 82 II=1,NP
      DO 82 JJ=1,NP
      ATA(II,JJ)=PROD(II+JJ-1)
   82 CONTINUE
      CALL INV(NP,NP,ATA,IPERM,NP,ATAI,DET,JEXP,COND)
      DO 39 I=1,NP
      BB=BB+ATAI(1,I)*ATY(I)
   39 CONTINUE
      TEM(J)=ATAI(1,1)
      A(J)=BB
   29 CONTINUE
      DO 101 I=1,MM
      SE=DSQRT(DD*TEM(I))
C
C     COMPUTE THE CREDIBILITY INTERVALS
C
      UL(I)=A(I)+1.96*SE
      VL(I)=A(I)-1.96*SE
  101 CONTINUE
      RETURN
      END

      SUBROUTINE PSE(SC,N1,N2,C,M,N4,NP,NP1,NP2,N,Y,L,CC,
     *ATA,ATY,ATAI,PROD,B,IPERM,TT,TTT,TA,VAR,R,AA)
C
C     THIS IS A ROUTINE FOR COMPUTING THE VALUES OF PSE
C     (PREDICT ONE OBSERVATION AHEAD).
C
C     INPUT VARIABLES ARE:
C     Y     = ARRAY OF OBSERVED VALUES (ORDERED IN TIME)
C     L     = ARRAY OF TIME-VALUES (ORDERED IN TIME)
C     M     = DIM. OF THE ARRAYS Y AND L
C     NP    = THE ORDER OF THE LOCALLY POLYNOMIAL FIT PLUS 1
C     C     = THE SMOOTHING PARAMETER OF THE 10TH ORDER
C             LOCALLY POLYNOMIAL FIT; IN THIS CASE, C=0.0D0
C     SC    = THE SCALING FACTOR
C     N1,N2 = THE STARTING LIMITS OF THE SPAN OF OBSERVATIONS
C     N     = NP - 1
C     NP2   = 2 x NP
C     NP1   = NP2 - 1
C     N4    = M - N2
C     B     = ARRAY OF THE DERIVATIVES OF THE SMOOTH
C     CC    = ARRAY OF THE SMOOTHING PARAMETERS OF DIFFERENT
C             ORDER LOCALLY POLYNOMIAL FITS
C     AA    = ARRAY OF THE ESTIMATES OF FUTURE OBSERVATIONS
C     VAR   = WORKING ARRAY OF DIM. M
C     PROD  = WORKING ARRAY OF DIM. NP2
C     TA    = WORKING ARRAY OF DIM. N2 x N
C     TT,TTT = WORKING ARRAYS OF DIM. N
C     ARRAYS ATA AND ATAI OF DIM. NP x NP, ARRAY ATY OF DIM.
C     NP AND ARRAY IPERM OF DIM. NP2 ARE USED IN THE
C     ROUTINES INV AND DGMATV (CAN BE FOUND IN THE
C     DOCUMENTATION: UBC MATRIX). INV CAN BE REPLACED BY
C     ANOTHER MATRIX INVERSE ROUTINE WHERE ATA IS INPUT AND
C     ATAI IS OUTPUT. DGMATV CAN BE REPLACED BY ANOTHER
C     MATRIX MULTIPLICATION ROUTINE WHERE ATAI AND ATY ARE
C     INPUTS AND B IS OUTPUT.
C     (ALL THE ABOVE VARIABLES MUST BE DEFINED IN THE
C     CALLING PROGRAM)
C
C     OUTPUT VARIABLES ARE:
C     R(I) = ESTIMATED VALUE OF PSE OF THE ITH ORDER LOCALLY
C            POLYNOMIAL FIT, I=1,...,N
C
C     SUBROUTINES CALLED BY PSE ARE:
C     BACK AND PRED (ATTACHED TO THIS ROUTINE)
C
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION Y(M),L(M),AA(N,N4),CC(N,N4),ATA(NP,NP),
     *ATY(NP),ATAI(NP,NP),PROD(NP2),B(NP),IPERM(NP2),TT(N4),
     *TTT(N4),TA(N2,N),VAR(M),R(N)
C
C     COMPUTE THE BACKFITTING VALUES OF C AND THE SAMPLE
C     VARIANCE OF THE NOISE
C
      CALL BACK(SC,N1,N2,C,M,N4,NP,NP1,NP2,N,Y,L,CC,ATA,
     *ATY,ATAI,PROD,B,IPERM,TT,TTT,TA,VAR)
C
C     COMPUTE THE PREDICTIVE VALUES OF FUTURE OBSERVATIONS
C
      CALL PRED(SC,NP,NP1,NP2,N,M,N1,N2,N4,Y,L,CC,ATA,ATY,
     *ATAI,PROD,IPERM,AA)
C
C     COMPUTE THE ESTIMATED VALUES OF PSE
C
      DO 100 I=1,N
      R(I)=0.0
      DO 101 J=1,N4
      R(I)=R(I)+(Y(N2+J)-AA(I,J))**2
  101 CONTINUE
      R(I)=R(I)/N4
  100 CONTINUE
      RETURN
      END
C
C
      SUBROUTINE BACK(SC,N1,N2,C,M,N4,NP,NP1,NP2,N,Y,L,CC,
     *ATA,ATY,ATAI,PROD,B,IPERM,TT,TTT,TA,VAR)
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION Y(M),L(M),B(NP),ATA(NP,NP),PROD(NP2),
     *ATAI(NP,NP),IPERM(NP2),TT(N),TTT(N),CC(N,N4),TA(N2,N),
     *ATY(NP),VAR(M)
      K1=N1
      K2=N2
      DO 18 NN=1,N4
      DD=0.D0
      DO 2 I=1,N
      DO 3 J=1,N2
      TA(J,I)=0.D0
    3 CONTINUE
      TT(I)=0.D0
      TTT(I)=0.D0
    2 CONTINUE
      DO 25 J=K1,K2
      DO 5 JJ=1,NP
    5 ATY(JJ)=0.D0
      DO 10 II=1,NP1
   10 PROD(II)=0.D0
      NT=L(J)
      DO 19 I=K1,K2
      P=SC*(L(I)-NT)
      W=1.D0/(1.D0+C*(P**NP2))
      V=W
      DO 12 II=1,NP1
      PROD(II)=PROD(II)+V
      V=V*P
   12 CONTINUE
      V=W
      YY=Y(I)
      DO 14 JJ=1,NP
      ATY(JJ)=ATY(JJ)+V*YY
      V=V*P
   14 CONTINUE
   19 CONTINUE
      DO 22 II=1,NP
      DO 22 JJ=1,NP
      ATA(II,JJ)=PROD(II+JJ-1)
   22 CONTINUE
      CALL INV(NP,NP,ATA,IPERM,NP,ATAI,DET,JEXP,COND)
      CALL DGMATV(ATAI,ATY,B,NP,NP,NP)
      TEMP=ATAI(1,1)
      DD=DD+(Y(J)-B(1))**2/(1.D0+TEMP)
      DO 21 K=1,N
      TA(J,K)=B(K+1)
   21 TT(K)=TT(K)+TA(J,K)
   25 CONTINUE
      DD=DD/(N2-NP)
      DO 44 I=1,N
   44 TT(I)=TT(I)/N2
      DO 55 I=1,N
      DO 66 K=K1,K2
   66 TTT(I)=TTT(I)+(TA(K,I)-TT(I))**2
   55 CONTINUE
      DO 47 I=1,N
   47 TTT(I)=TTT(I)/(N2-1)
      DO 88 K=1,N
      CC(K,NN)=TTT(K)/DD
   88 CONTINUE
      VAR(NN)=DD
      K1=K1+1
      K2=K2+1
   18 CONTINUE
      RETURN
      END
C
C
      SUBROUTINE PRED(SC,NP,NP1,NP2,N,M,N1,N2,N4,Y,L,CC,ATA,
     *ATY,ATAI,PROD,IPERM,AA)
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION Y(M),L(M),ATA(NP,NP),PROD(NP2),AA(N,N4),
     *ATAI(NP,NP),IPERM(NP2),CC(N,N4),ATY(NP)
      K1=N1
      K2=N2
      DO 71 IP=1,N4
      DO 72 JP=1,N
      NQ=JP
      NQ2=2*JP
      NQ1=NQ2-1
      BB=0.D0
      DO 74 JJ=1,NQ
   74 ATY(JJ)=0.D0
      DO 75 II=1,NQ1
   75 PROD(II)=0.D0
      KT=L(N2+IP)
      DO 76 J=K1,K2
      P=SC*(L(J)-KT)
      W=1.D0/(1.D0+CC(JP,IP)*(P**NQ2))
      V=W
      DO 77 II=1,NQ1
      PROD(II)=PROD(II)+V
      V=V*P
   77 CONTINUE
      V=W
      YY=Y(J)
      DO 78 JJ=1,NQ
      ATY(JJ)=ATY(JJ)+V*YY
      V=V*P
   78 CONTINUE
   76 CONTINUE
      DO 79 II=1,NQ
      DO 79 JJ=1,NQ
      ATA(II,JJ)=PROD(II+JJ-1)
   79 CONTINUE
      CALL INV(NQ,NP,ATA,IPERM,NP,ATAI,DET,JEXP,COND)
      DO 80 I=1,NQ
      BB=BB+ATAI(1,I)*ATY(I)
   80 CONTINUE
      AA(JP,IP)=BB
   72 CONTINUE
      K1=K1+1
      K2=K2+1
   71 CONTINUE
      RETURN
      END
