OVERDISPERSION IN POISSON REGRESSION

By W. Brad McNeney
B.Sc. (Mathematics) University of British Columbia

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF STATISTICS

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
1992
© W. Brad McNeney, 1992

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Statistics
The University of British Columbia
Vancouver, Canada

Abstract

Investigation of a possible relationship between air quality and human health in the community of Prince George, British Columbia was undertaken after a public opinion poll in 1972 discovered that poor air quality was the number one concern of the residents of Prince George. An analysis which attempted to identify such relationships using a data set including air quality measurements and hospital admissions for the period April 1, 1984 to March 31, 1986 is discussed in Knight, Leroux, Millar, and Petkau (1988). A similar analysis using emergency room visits during the same period rather than hospital admissions is described in Knight, Leroux, Millar, and Petkau (1989). The data set described here was collected to carry out a follow-up study to the emergency room visits analysis. The main part of the analyses carried out involved the use of Poisson regression models with a minor extension to account for over-dispersion in the data.
The results of the analysis were not consistent with either the earlier study or with the expectations of the investigators. For example, higher levels of one of the air quality variables were found to be associated with a decrease in the number of emergency room visits for respiratory disease in the winter, but an increase in emergency room visits for respiratory disease in the fall. A mechanism to explain such effects is difficult to imagine. These counter-intuitive results motivated a simulation study to assess the methods used in the analysis and to compare them to other possible estimators and test statistics that can be employed in the analysis of over-dispersed Poisson data.

Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgement

1 The Prince George Study
  1.1 Data Set Description
    1.1.1 Emergency Room Visits Data
    1.1.2 Air Pollution Data
    1.1.3 Meteorological Data
  1.2 Methodology for the Prince George Study
    1.2.1 Poisson Regression
    1.2.2 Accounting for Over-Dispersion
  1.3 Results of the Data Analysis
    1.3.1 Temporal and Meteorological Effects
    1.3.2 Respiratory Visits and TRS
    1.3.3 Other Analyses of Pollution Covariates
  1.4 Questions Raised by the Analysis

2 Methodology
  2.1 Quasi-Likelihood in Generalized Linear Models
    2.1.1 The Estimating Equations
    2.1.2 Test Statistics for Hypotheses Regarding Regression Parameters
  2.2 Test Statistics for Hypotheses in Over-Dispersed Poisson Regression

3 A Simulation Study Related to the Prince George Study
  3.1 The General Simulation Procedure
    3.1.1 Generating the Data
    3.1.2 Fitting the Full Model
    3.1.3 Fitting the Reduced Model
  3.2 The Specifics of the Simulation Study

4 Results of the Simulation Study
  4.1 Estimating the Dispersion Parameter φ
  4.2 The Simple Case with a Single Series
  4.3 The Simple Case with Three Series
  4.4 The More Complicated Case with a Single Series
  4.5 The More Complicated Case with Three Series

5 Discussion

Bibliography

List of Tables

Table 1. Mean Values of Estimates (± Standard Deviation) in 1000 Simulated Data Sets.
Table 2. Mean Values of Estimates of φ.
Table 3. Standard Deviations of Estimates of β and Mean Values of Estimated Standard Errors (± s.d.).
Table 4. Observed Rejection Probabilities Under Reduced Model.
Table 5. Mean Values of Estimated Parameters (± s.d.) Under Full Model.
Table 6. Observed Rejection Probabilities Under Full Model.
Table 7. Mean Values of Parameter Estimates.
Table 8. Standard Deviations of Estimates and Mean Values of Estimated Standard Errors (± s.d.).
Table 9. Observed Rejection Probabilities: Common Pollution to No Pollution Model.
Table 10. Mean Values of Parameter Estimates.
Table 11. Standard Deviations of Estimates and Mean Values of Estimated Standard Errors (± s.d.).
Table 12. Observed Rejection Probabilities: Separate to Common.
Table 13. Observed Rejection Probabilities: Common to Null.
Table 14. Mean Values of Parameter Estimates.
Table 15. Standard Deviations of Estimates and Mean Values of Estimated Standard Errors (± s.d.).
Table 16. Observed Rejection Probabilities: Separate to Common.
Table 17. Mean Values of Estimated Parameters Under Reduced Model.
Table 18. Standard Deviations of Estimates and Mean Values of Estimated Standard Errors (± s.d.).
Table 19. Observed Rejection Probabilities Under Null Hypothesis.
Table 20. Mean Values of Estimated Parameters Under Alternative Hypothesis.
Table 21. Standard Deviations of Estimates Under Alternative Hypothesis.
Table 22. Observed Rejection Probabilities Under Alternative Hypothesis.
Table 23. Mean Values of Parameter Estimates.
Table 24. Standard Deviations of Estimates and Mean Values of Estimated Standard Errors (± s.d.).
Table 25. Observed Rejection Probabilities: Common to Null Model.
Table 26. Mean Values of Parameter Estimates.
Table 27. Standard Deviations of Estimates and Mean Values of Estimated Standard Errors (± s.d.).
Table 28. Observed Rejection Probabilities: Separate to Common.
Table 29. Observed Rejection Probabilities: Common to Null.
Table 30. Mean Values of Parameter Estimates.
Table 31. Standard Deviations of Estimates and Mean Values of Estimated Standard Errors (± s.d.).
Table 32. Observed Rejection Probabilities: Separate to Common.

List of Figures

Figure 1. Comparison of Estimators of Phi: Correct Variance Simulations (Phi = 1 or 1.4)
Figure 2. Comparison of Estimators of Phi: Correct Variance Simulations (Phi = 2 or 3)
Figure 3. Comparison of Estimators of Phi: Misspecified Variance Simulations (Phi = 1 or 1.4)
Figure 4. Comparison of Estimators of Phi: Misspecified Variance Simulations (Phi = 2 or 3)

Acknowledgement

I would like to express my gratitude to my advisor, Professor A. John Petkau, for his support, advice and patience throughout the writing of this thesis. I would also like to thank Professor Nancy E. Heckman for reading the manuscript and providing many helpful comments.

Chapter 1

The Prince George Study

Motivation for this work was provided by the analysis of a data set collected in Prince George, British Columbia, to study the possibility of associations between the ambient levels of air pollution and human health as measured by the rates of emergency room visits for respiratory illnesses. Investigation of such a possible relationship was undertaken after a public opinion poll in 1972 discovered that poor air quality was the number one concern of the residents of Prince George, ahead of such issues as crime, alcohol abuse and recreation facilities.
Although monitoring of air quality parameters has been carried out at several monitoring stations in Prince George since 1980, most attempts to study this issue have compared hospital admissions rates and/or mortality rates in Prince George to those in other communities in British Columbia rather than attempting to identify an association between ambient levels of air pollution in Prince George and human health. An analysis which did attempt to identify such relationships using a data set including air quality measurements and hospital admissions for respiratory illnesses for the period April 1, 1984 to March 31, 1986 is discussed in Knight, Leroux, Millar, and Petkau (1988). A similar analysis for the same period based on emergency room visits rather than hospital admissions is described in Knight, Leroux, Millar, and Petkau (1989). The data set described here was collected to carry out a follow-up study to the emergency room visits analysis.

1.1 Data Set Description

Three measures of air quality were monitored and the daily numbers of visits to the emergency room (ER) for respiratory related illnesses were recorded during the period April 1, 1986 to March 31, 1988. In addition, meteorological data in the form of temperature and maximum relative humidity were collected. Description of the relationship between the pollution variables and the ER visits was the goal of the study. However, meteorological and temporal information was also included as potential explanatory factors so that any relationships discovered with pollution variables could reasonably be attributed to the pollution itself and not to some other factor that might underlie both pollution and ER visits.
1.1.1 Emergency Room Visits Data

Each visit to the emergency room at Prince George Regional Hospital by a resident of Prince George over the two year period from April 1, 1986 to March 31, 1988 was originally classified according to 35 diagnostic categories potentially related to respiratory illness. These 35 categories were grouped into four broad categories which we will refer to as Asthma, Bronchitis+, Ear, and Others. Only the classification according to these four broad categories was available in the data set provided to us. A total of 8,079 visits were included in the study, and there was at least one visit in at least one of the four respiratory categories on every day except January 15, 1987.

1.1.2 Air Pollution Data

Data on three air quality parameters were available:

TRS — measured via a continuous monitor with readings subsequently converted to hourly averages. The average of these hourly averages becomes the daily average, and this was the summary used.

TSP — measured via a 24 hour vacuum sample once every six days.

SO2 — measured via a continuous monitor in the same fashion as TRS. The daily average was the summary used.

1.1.3 Meteorological Data

The meteorological data consisted of daily summaries for each of the following parameters:

Temperature — the average of 24 hourly readings in degrees Celsius.

Maximum Relative Humidity — the largest of 24 hourly readings.

1.2 Methodology for the Prince George Study

The main part of the analyses carried out involved the use of Poisson regression models (or log-linear models). We give a brief description of these models and then describe an extension of Poisson regression models which was required for the modelling of emergency room visits.
1.2.1 Poisson Regression

The Poisson regression model is a special case of a larger class of models referred to as generalized linear models (GLMs), the theory of which is derived largely from the properties of the exponential family of distributions. Thus before we describe the Poisson regression model in detail, a brief discussion of the exponential family and GLMs seems appropriate (see McCullagh and Nelder, 1989, for a more detailed description).

For a random variable Y in the exponential family with canonical parameter θ and (known) nuisance parameter φ, we may write its density function as

f(y; θ) = exp{(yθ − b(θ))/a(φ) + c(y; φ)}.

Denote the mean value of Y as μ, which is related to the parameter θ by the function θ = g(μ). We will refer to the function g(·) as the link function. For making inference about θ we introduce the log-likelihood function, the log of the above density function, which we denote l(θ). In the case of n independent observations Y_i (i = 1, ..., n), with possibly different canonical parameters, the joint log-likelihood is given by l(θ) = Σ_i l_i(θ_i). In GLMs a linear model is specified for the parameters θ_i of the form θ_i = g(μ_i) = X_i β (where β is a vector of p < n parameters and X_i is the i-th row of an n × p design matrix). It is then possible to estimate β via maximum likelihood by taking the derivative of the joint log-likelihood with respect to each of the parameters, equating the resulting p equations to zero and solving for the parameters of interest. These estimators have several nice characteristics, including a limiting multivariate normal distribution with mean vector equal to β*, the true value of the vector of parameters, and covariance matrix equal to the inverse of the Fisher information matrix corresponding to the joint distribution of the Y_i (i = 1, ..., n).

Now consider a Poisson random variable Y. We write the density function of Y as

f(y; μ) = e^{−μ} μ^y / y!
or equivalently as

f(y; μ) = exp{y log μ − μ − log y!}.

The latter form allows us to identify the Poisson distribution as a member of the exponential family with θ = log μ (and therefore link function given by g(·) = log(·)), b(θ) = e^θ, a(φ) = 1, and c(y; φ) = −log y!.

The motivation behind using the Poisson regression model for the Prince George study lies in the intuitively reasonable assumption that the daily numbers of emergency room visits follow Poisson distributions with possibly different means. The mean number of visits on a given day will be described by a function of the available covariate information up to and including that day, which includes temporal variables such as day of the week and month, the meteorological variables, daily temperature and maximum humidity, and the pollution variables, TSP, TRS and SO2. The use of Poisson regression depends on the following basic assumptions:

(a) the expected number of visits on a given day depends explicitly only on time, meteorology and pollution, and not on the number of visits on past days;

(b) daily visits are conditionally independent given the temporal, meteorological and pollution covariates.

The assumption that the expected number of visits does not depend explicitly on past visits seems to be a reasonable starting point in this situation. However, more general models in which the mean depends explicitly on past visits could be considered as well. Suppose now that t_i is a vector of temporal covariates, m_i is a vector of meteorological covariates and s_i is a vector of pollution covariates for day i; s_i and m_i can include values of pollution or meteorology prior to day i as well as those values on day i.
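Covariate vectors containing lagged values can be assembled mechanically. As a small illustration (the helper name and the use of NumPy are ours, purely hypothetical; this is not code from the original analysis), a block of lag-0 to lag-3 values of a daily series can be built as:

```python
import numpy as np

def lagged_design(x, max_lag):
    """Columns are x at lags 0, 1, ..., max_lag; the first max_lag days
    are dropped, so row k corresponds to day i = max_lag + k + 1."""
    n = len(x)
    cols = [x[max_lag - l : n - l] for l in range(max_lag + 1)]
    return np.column_stack(cols)

# e.g. ten days of a daily pollution summary:
x = np.arange(10.0)
D = lagged_design(x, 3)   # shape (7, 4); row 0 is [3., 2., 1., 0.]
```

Dropping the first max_lag days mirrors the restriction to i = 4, ..., 731 used later when lags of 0 to 3 days are considered.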
If y_i is the number of emergency room visits on day i (i = 1, 2, ..., 731) for a particular diagnostic class, the log-linear model assumes that y_i has a Poisson distribution with mean μ_i, where

log μ_i = α + t_i'τ + m_i'δ + s_i'γ

with α, τ, δ and γ unknown parameters (τ, δ and γ are vectors having the same length as t_i, m_i and s_i respectively). The components of the parameter γ give the relative changes in expected visits on a given day (that is, with temporal and meteorological covariates fixed) associated with changes in the pollution variables. In particular, for small changes in the pollution variables, from s_i to s_i + Δ_i,

exp(Δ_i'γ) − 1 ≈ Δ_i'γ

gives the percentage change in the corresponding expected visits. In order to account for possibly different patterns of emergency room visits for the different diagnostic categories, the basic model is generalized by allowing different parameters α, τ, δ and γ for the different diagnostic categories, and thus describing the expected number of visits for category j (j = 1, 2, 3, 4) on day i by the equation

log μ_ij = α_j + t_i'τ_j + m_i'δ_j + s_i'γ_j.

To fit the above model into the framework described for GLMs, let β = (α', τ', δ', γ')', where here α = (α_1, α_2, α_3, α_4)', τ = (τ_1', τ_2', τ_3', τ_4')', and so on. Then, if X is the appropriate design matrix containing all of the covariate information, our model can be expressed as log μ = Xβ. In addition to assuming the observations to be conditionally independent given the covariates, we assume that data from different diagnostic categories are independent, so that the joint likelihood is simply the product of the marginal Poisson distributions. Thus the joint log-likelihood is:

l(β) = Σ_{j=1}^{4} Σ_{i=1}^{731} [y_ij log μ_ij − μ_ij − log y_ij!].

For ease of notation, it will be more convenient to write the above log-likelihood using one subscript. We order the y_ij's as y_1, ..., y_2924, with the μ_i's numbered correspondingly. Now express the joint log-likelihood as:

l(β) = Σ_{i=1}^{2924} [y_i log μ_i − μ_i − log y_i!].   (1.1)

To obtain maximum likelihood estimating equations for the parameters in β (suppose there are p parameters in β), we differentiate the expression in (1.1) with respect to each of the p components of β and equate the resulting derivatives to zero to arrive at:

∂l/∂β_j = Σ_{i=1}^{2924} ((y_i − μ_i)/μ_i) ∂μ_i/∂β_j = 0;  j = 1, ..., p.   (1.2)

Note that our model assumes μ_i = exp{X_i β}, or log μ_i = X_i β. Thus, (1.2) can be rewritten as

∂l/∂β_j = Σ_{i=1}^{2924} (y_i − μ_i) X_ij = 0;  j = 1, ..., p.   (1.3)

The resulting estimators, β̂, are asymptotically normal, with mean vector given by β*, the true value of the vector of parameters, and covariance matrix equal to I⁻¹, where in this case

I = −E(∂²l/∂β ∂β') = Σ_{i=1}^{2924} μ_i X_i' X_i.

Given the solution to (1.3), the fitted values, μ̂_i = exp{X_i β̂}, may be substituted in the above expression to obtain an estimate of the asymptotic covariance matrix. Using these fitted values, the deviance is then defined to be −2 times the log-likelihood ratio comparing the fitted model to the saturated model, or

dev. = 2 Σ_{i=1}^{2924} [y_i log(y_i/μ̂_i) − (y_i − μ̂_i)].

For a fitted model with p parameters and a joint likelihood from n independent observations, the degrees of freedom associated with the deviance is n − p. By taking the difference in the deviances of two fitted models, one of which is nested within the other, we may form a likelihood ratio statistic that can be compared to its limiting chi-square distribution to examine if the smaller model constitutes a significant deterioration in fit compared to the larger model.

The Pearson residuals are defined to be (y_i − μ̂_i)/√μ̂_i. These quantities are often used for diagnostic purposes in the same way the residuals from an ordinary regression (assuming normality) are used. We may also use these residuals to form a goodness-of-fit statistic, the familiar Pearson X², by summing the squares of these Pearson residuals.
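As a concrete sketch of the fitting procedure (our own Python/NumPy illustration, not code from the original analysis), the score equations (1.3) can be solved by Newton-Raphson, which for the canonical log link coincides with Fisher scoring; the deviance and Pearson X² then follow from the fitted means:

```python
import numpy as np

def fit_poisson(X, y, n_iter=50, tol=1e-10):
    """Solve the score equations sum_i (y_i - mu_i) X_ij = 0 for the
    log-linear model mu_i = exp(X_i beta) by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        score = X.T @ (y - mu)           # equation (1.3)
        info = X.T @ (mu[:, None] * X)   # Fisher information, sum_i mu_i X_i' X_i
        step = np.linalg.solve(info, score)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    mu = np.exp(X @ beta)
    cov = np.linalg.inv(X.T @ (mu[:, None] * X))  # estimated asymptotic covariance
    # deviance 2 * sum[y log(y/mu) - (y - mu)], taking y log y = 0 when y = 0
    ylogy = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0) / mu), 0.0)
    dev = 2.0 * np.sum(ylogy - (y - mu))
    pearson = np.sum((y - mu) ** 2 / mu)          # Pearson X^2 statistic
    return beta, cov, dev, pearson
```

The same routine, applied to the full 2924-observation stacked data, yields the deviances and Pearson X² statistics used throughout this chapter.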
Another potential use of the Pearson X² emerges in the following section dealing with over-dispersion.

1.2.2 Accounting for Over-Dispersion

The extension of the Poisson regression model to be described concerns accounting for over-dispersion. Suppose that the variance of the daily visits given the temporal, meteorological and pollution variables is greater than the mean; in this situation the emergency room visits data are said to be "over-dispersed". Over-dispersion represents a violation of the Poisson assumption underlying the Poisson regression model, which requires that the variance and mean be equal. One way to handle over-dispersed data is to introduce a dispersion parameter φ > 1 and assume that the variance of the visits on day i is φμ_i.

We now describe the procedure for statistical inference in this new model. The parameter vector β which appears in the specification of the daily mean visit rates is estimated exactly as for the model without a dispersion parameter, i.e. by maximizing the Poisson log-likelihood function of (1.1). However, when assessing the statistical significance of estimated parameters it is necessary to take into account the dispersion parameter. To describe how this is done in the context of model selection (the dispersion parameter must also be taken into account when evaluating the standard errors of estimated parameters), assume that we begin with a model containing a number of parameters (the full model) and wish to test the viability of another model which is nested within the full model (the reduced model). Under our independence assumptions, the difference in the deviances for the two models is approximately distributed as φ times a χ² random variable with degrees of freedom given by the difference in the degrees of freedom associated with the two deviances.
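For intuition about where the variance φμ might come from (one standard mechanism; the thesis's own simulation design is described in Chapter 3), counts with mean μ and variance φμ can be generated as a gamma-mixed Poisson. This is a sketch under that assumption, not the thesis's code:

```python
import numpy as np

def overdispersed_poisson(mu, phi, rng):
    """Counts with E(Y) = mu and Var(Y) = phi * mu for phi > 1:
    draw lambda ~ Gamma(shape = mu/(phi-1), scale = phi-1), so that
    E(lambda) = mu and Var(lambda) = (phi-1)*mu, then Y ~ Poisson(lambda);
    Var(Y) = E(lambda) + Var(lambda) = phi * mu."""
    mu = np.asarray(mu, dtype=float)
    lam = rng.gamma(shape=mu / (phi - 1.0), scale=phi - 1.0)
    return rng.poisson(lam)
```

Mixing Poisson distributions in this way also produces the pattern noted below for the visit series: more very small and very large counts than a single Poisson distribution would give.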
Thus the difference in deviances must be divided by an estimate of the dispersion parameter before it can be compared to the χ² distribution to assess whether reduction from the full model to the reduced model is permissible. One choice of an estimate of the dispersion parameter is the ratio of the sum of squared Pearson residuals, the Pearson X² statistic, to its degrees of freedom (McCullagh and Nelder, 1989, p. 200), both determined under the full model. Another estimate, which is often very close to the previous one, is the ratio of the deviance to its degrees of freedom, also under the full model. For these data the estimate based on the deviance was used; this leads to the following statistic for testing whether reduction from the full model to the reduced model is permissible:

Δdev. = [deviance(reduced) − deviance(full)] / [deviance(full)/df(full)];

this statistic is compared to the χ² distribution with degrees of freedom given by Δdf = df(reduced) − df(full).

The Poisson regression model with unspecified dispersion parameter is a particular case of a class of models called "quasi-likelihood" models, so named because the likelihood which is used to obtain estimates, the Poisson likelihood in our case, is not the true likelihood for the data but plays the role of a likelihood and so is called a quasi-likelihood. Quasi-likelihood models are discussed in more detail in Chapter 2. We now describe the results of the analysis of the Prince George data using the methods described above.

1.3 Results of the Data Analysis

The possibility of over-dispersion in a series of counts can be judged by the index of dispersion:

X² = (n − 1) × (sample variance)/(sample mean),

where n is the sample size, 731 in this case. Under sampling from a Poisson distribution, X² has an approximate chi-squared distribution with n − 1 degrees of freedom and can be converted to an approximate standard normal variable by

z = √(2X²) − √(2(n − 1) − 1).
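In code, the index of dispersion and its normal approximation amount to the following (a hypothetical helper using only the two formulas above):

```python
import math

def dispersion_index(counts):
    """Index of dispersion X^2 = (n - 1) * variance / mean and its
    normal approximation z = sqrt(2 X^2) - sqrt(2(n - 1) - 1)."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    X2 = (n - 1) * var / mean
    z = math.sqrt(2.0 * X2) - math.sqrt(2.0 * (n - 1) - 1.0)
    return X2, z
```

For example, plugging in the Bronchitis+ summary that follows (n = 731, mean 1.54, variance 3.36) reproduces X² ≈ 1593 and z ≈ 18.2.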
The results for the emergency room visits data are presented in the following table:

              Asthma   Bronchitis+    Ear   Other
mean            0.76          1.54   3.62    5.13
variance        0.80          3.36   6.92    14.2
X²               763          1593   1395    2027
z               0.87          18.2   14.6    25.5

Except for the Asthma category, these exhibit a large degree of over-dispersion relative to the Poisson distribution; the values of z given in the above table clearly reflect extreme over-dispersion for the categories Bronchitis+, Ear and Other. In fact, these distributions exhibit the pattern of a mixture of Poisson distributions, having higher than expected numbers of very small and very large counts, at the expense of moderate counts. This over-dispersion must be taken into account in the data analysis.

As mentioned previously, we must allow for the possibility that emergency room visits depend on factors other than just pollution, such as temporal and meteorological effects. These covariates were considered first, and model reductions were attempted to find a model which parsimoniously describes these effects. The pollution covariates were then added to the model and model reductions were attempted to identify the pollution covariates which had a substantial effect on emergency room visits for respiratory illness.

1.3.1 Temporal and Meteorological Effects

The first step of the analysis involved the incorporation of adjustments for temporal effects. The full model for the log of the visits rate included separate effects for each of the 12 months and each of the 7 days of the week for each diagnostic category. Attempted model reductions to common month or common day-of-the-week effects, that is, common across all four diagnostic categories, resulted in unacceptably large changes in adjusted deviance, and therefore the full collection of separate effects for each diagnosis was retained as the temporal model.
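The adjusted-deviance criterion used for these model reductions amounts to the following computation (a sketch; the statistic is then referred to a χ² table on Δdf degrees of freedom):

```python
def adjusted_deviance(dev_full, df_full, dev_reduced, df_reduced):
    """Adjusted deviance statistic for over-dispersed Poisson model reduction.
    phi is estimated by deviance(full)/df(full); the alternative estimate,
    Pearson X^2(full)/df(full), could be substituted for phi_hat below."""
    phi_hat = dev_full / df_full
    stat = (dev_reduced - dev_full) / phi_hat
    ddf = df_reduced - df_full
    return stat, ddf  # compare stat to chi-square on ddf degrees of freedom
```

With the TRS fits reported later in this chapter (deviance 3827 on 2804 df for the separate-season model, 3852 on 2816 df for the common-effects model), this gives Δdev. ≈ 18.3 on 12 degrees of freedom, in line with the 18.4 quoted there.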
Next, the meteorological covariates (temperature and maximum relative humidity) were added to the temporal model at lags of 0 to 3 days; that is, for any i = 4, ..., 731 we allow for the possibility that meteorology at times i, i − 1, i − 2 and i − 3 might affect the rate of emergency room visits at time i. We first allow for separate effects at each lag for each of the four seasons, winter (December to February), spring (March to May), summer (June to August) and fall (September to November), but with each of these effects common to all four diagnostic categories. Based on our model reduction criterion, these separate season effects were found to be unimportant compared to common effects across seasons for each lag. When separate meteorological effects at each lag for each diagnostic category were considered, we found unacceptable increases in adjusted deviance when attempting to simplify the separate temperature effects to common effects. On the other hand, only common maximum humidity effects were needed. Thus our model which adjusts for possible temporal and meteorological effects included separate diagnosis effects for month, day-of-the-week and temperature, and common maximum humidity effects.

1.3.2 Respiratory Visits and TRS

We next investigated a model which expresses the daily emergency room visit rates for any particular diagnosis category as

log visit rate on day i = temporal effects + meteorological effects + γ_0 log TRS_i + γ_1 log TRS_{i−1} + γ_2 log TRS_{i−2} + γ_3 log TRS_{i−3}.

Here the effects of each of the TRS values are common across diagnosis categories; we also considered models involving separate effects for the four diagnosis categories or separate effects for the four seasons. For all models fitted, the temporal and meteorological effects had the same structure as previously identified. In what follows, we will abbreviate this structure as the tem/met model.

We started with separate effects for each diagnostic category, but found the reduction to common effects acceptable; furthermore, removal of the TRS effects altogether resulted in
We started with separate effects for each diagnostic category but found the reduction to common effects acceptable and furthermore removal of the TRS effects altogether resulted in 13 Chapter 1. The Prince George Study ^ only a negligible deterioration in the fit. The following table examines separate TRS effects for each season, where as before the separate season effects are common to all diagnostic categories: model terms tem/met effects + separate season TRS effects tem/met effects + common TRS effects tem/met effects only deviance 3827 3852 3855 d.f. 2804 2816 2820 A' dev. 18.4 1.7 A d.f. 12 4 Reduction to common effects leads to an increase in adjusted deviance of 18.4 versus a decrease of 12 in the number of parameters which is not so easily dismissed (p=0.11). Unfortunately the estimated effects from the full model present a confusing picture as we see in the following table which presents those estimates which are greater than one standard error in magnitude: effect Winter Lag 0 Spring Lag 0 Summer Lag 0 Winter Lag 2 Summer Lag 2 coefficient -0.026 -0.027 0.043 -0.035 -0.035 s.e. 0.020 0.026 0.028 0.023 0.029 Looking at the effects at each lag we get the suggestion there may be some TRS effect at lags of 0 and 2 days. Unfortunately the effects from one lag to the other are not consistent and furthermore, it is difficult to understand why higher levels of TRS should be associated with higher numbers of emergency room visits at some times of the year and lower numbers at other times of the year. These results may not be particularly surprising since the overall reduction in adjusted deviance for going from the full model to one with the tem/met effects only (p=0.20) indicates that the separate effects in the above table could be large simply by chance. However, we could also examine the effects grouped by season as in the following table: 14 Chapter 1. 
         Winter   Spring   Summer   Fall
Lag 0    -0.026   -0.027    0.043     -
Lag 1       -        -        -       -
Lag 2    -0.035      -     -0.035     -
Lag 3       -        -        -       -

where the dashes represent estimated effects which are less than their standard error. Looking down the columns of this table, we see the possibility of a negative cumulative effect in Winter. Accordingly, we now present results for a model which includes a single cumulative effect (an average of the logarithm of the lag 0 to lag 3 pollution measurements) for each of the four seasons:

model terms                                      deviance   d.f.   Δdev.   Δd.f.
tem/met effects + separate season TRS effects        3827   2804
tem/met effects + cumulative TRS effects             3837   2816     6.7      12
tem/met effects only                                 3855   2820    13.4       4

While there is little difference in the fit of the separate season effects model and the cumulative effects model, the cumulative TRS effects constitute a substantial improvement in the fit over the tem/met only model (Δdev. = 13.4, Δd.f. = 4, p = 0.001). The estimates and standard errors of these cumulative effects are presented in the following table:

effect         Winter   Spring   Summer    Fall
coefficient    -0.060   -0.002    0.009   0.075
s.e.            0.023    0.033    0.034   0.030

These effects suggest that an increase in the level of TRS is associated with a decrease in the number of emergency room visits for respiratory disease in the winter, but an increase in the number of emergency room visits for respiratory disease in the fall. This model provides a more parsimonious description of the effect of TRS on emergency room visits for respiratory disease than the season effects model, yet a mechanism to explain such effects is difficult to imagine.

1.3.3 Other Analyses of Pollution Covariates

Similar analyses were carried out for SO2 (as well as TRS and SO2 simultaneously) and for TSP. Details of the model reductions and results can be found in McNeney and Petkau (1991).
These analyses proceeded in the same fashion as the TRS analysis described above, and similar problems were encountered; that is, the parameter estimates obtained in the final models were such that no plausible explanation for their structure came to mind.

1.4 Questions Raised by the Analysis

The results of the data analysis described above make it difficult to come up with a reasonable and coherent explanation of the effects of air pollution on human health based on this data set. There are many possible reasons for our results, such as: (1) the air pollution data do not accurately represent the exposure of the residents of Prince George to pollutants, (2) emergency room visits for respiratory illnesses are poor indicators of the effects of air pollution on human health, (3) some other important factor was not considered in the analysis, or (4) the method of analysis is potentially misleading. With respect to the last item, the following questions come to mind:

(a) Is the estimate of the scale parameter φ used in this analysis, namely deviance(full)/d.f.(full), a good estimate, or should we have used Pearson X²(full)/d.f.(full)?

(b) Is the change in adjusted deviance criterion appropriate for model reduction in this situation?

(c) Are statistics other than the adjusted deviance better for testing hypotheses concerning the regression parameters when over-dispersion is present?

(d) Given the amounts of over-dispersion in the emergency room visits series, are the estimates of the model parameters and standard errors accurate?

These questions motivated the simulation study we describe in Chapters 3 and 4. Before proceeding to the simulation study, Chapter 2 presents a more detailed description of quasi-likelihood theory and the estimating equations obtained from this theory for estimating regression and dispersion parameters.
We also discuss some of the test statistics available for testing the significance of regression parameters. Finally, over-dispersed Poisson regression is discussed as a special case of quasi-likelihood.

Chapter 2

Methodology

In this section we describe the aspects of maximum likelihood theory (in the context of GLMs) which motivate the theory of quasi-likelihood first proposed by Wedderburn (1974) and also discussed by McCullagh and Nelder (1989, Chapter 9). In particular, we are interested in the estimating equations and test statistics derived from this theory. We also discuss the application of quasi-likelihood to the analysis of the Prince George data.

2.1 Quasi-Likelihood in Generalized Linear Models

Recall the log-likelihood for a single random variable in the exponential family

    l(\theta) = \frac{y\theta - b(\theta)}{a(\phi)} + c(y; \phi).

Maximum likelihood theory assumes sufficient regularity conditions to ensure that the following relations hold:

    E\left[\frac{\partial l}{\partial\theta}\right] = 0
    \quad\text{and}\quad
    E\left[\frac{\partial^2 l}{\partial\theta^2}\right] + E\left[\left(\frac{\partial l}{\partial\theta}\right)^2\right] = 0.

In this discussion we will assume these conditions hold. The derivative of the log-likelihood is

    \frac{\partial l}{\partial\theta} = \frac{y - b'(\theta)}{a(\phi)},

therefore E(∂l/∂θ) = 0 implies that E(Y) = b'(θ). We also have

    \frac{\partial^2 l}{\partial\theta^2} = -\frac{b''(\theta)}{a(\phi)}
    \quad\text{so that}\quad
    -E\left[\frac{\partial^2 l}{\partial\theta^2}\right] = \frac{b''(\theta)}{a(\phi)}.

Finally,

    E\left[\left(\frac{\partial l}{\partial\theta}\right)^2\right]
    = \frac{1}{a^2(\phi)} E\left[(Y - b'(\theta))^2\right]
    = \frac{\mathrm{Var}(Y)}{a^2(\phi)},

which leads to the expression Var(Y) = b''(θ)a(φ). Rewriting the log-likelihood in terms of the mean μ, we have l(θ) = l(θ(μ)) and

    \frac{\partial l}{\partial\mu} = \frac{\partial l}{\partial\theta}\,\frac{\partial\theta}{\partial\mu},

so that

    \frac{\partial l}{\partial\mu}
    = \left(\frac{y - b'(\theta)}{a(\phi)}\right)\frac{\partial\theta}{\partial\mu}
    = \frac{y - b'(\theta)}{a(\phi)\,b''(\theta)},

since μ = b'(θ) implies ∂μ/∂θ = b''(θ). Thus the score function S(μ) = ∂l/∂μ can be rewritten as

    S(\mu) = \frac{y - E(Y)}{\mathrm{Var}(Y)}.

Note the following properties of the score function:

1. E(S) = 0,
2. Var(S) = E(S²) = E(Y − μ)²/Var²(Y) = 1/Var(Y),

3. −E(∂S/∂μ) = 1/Var(Y).

Now consider a random variable Y for which we do not specify a distribution but only suppose it has mean value μ and a variance that is related to μ by some function V(μ; φ), where φ is an unknown parameter. Then if V(μ; φ) correctly specifies the variance, the function

    U^*(Y; \mu, \phi) = \frac{Y - \mu}{V(\mu; \phi)}

has similar properties to a score function resulting from a log-likelihood in that:

1. E(U*) = 0,

2. Var(U*) = E(U*²) = E(Y − μ)²/V²(μ; φ) = 1/V(μ; φ),

3. −E(∂U*/∂μ) = E[ 1/V(μ; φ) + (Y − μ)(∂V/∂μ)/V²(μ; φ) ] = 1/V(μ; φ).

Since most asymptotic likelihood theory is based on the above three properties, we might expect the integral of U* to behave like a log-likelihood. We refer to the integral

    Q(y; \mu, \phi) = \int^{\mu} U^*(y; t, \phi)\,dt = \int_y^{\mu} \frac{y - t}{V(t; \phi)}\,dt,

if it exists, as the quasi-log-likelihood or simply the quasi-likelihood. For n independent observations having the same unknown parameter φ we have the quasi-likelihood

    Q_n(y; \mu, \phi) = \sum_{i=1}^n Q(y_i; \mu_i, \phi),

where now y and μ are vectors of length n. As in GLMs, the mean vector μ is assumed to be related to the regression parameters by the link function g(μ) = Xβ.

2.1.1 The Estimating Equations

As in the case where we have a true likelihood, the approach with a quasi-likelihood is to differentiate Q_n with respect to each of the p components of β and equate the resulting p derivatives to zero to obtain maximum quasi-likelihood estimators for the parameters. If φ were known, then under the assumption that μ_i = exp{X_i β}, we would solve the following estimating equations:

    U_j(\beta, \phi) = \frac{\partial Q_n}{\partial\beta_j}
    = \sum_{i=1}^n \frac{y_i - \mu_i}{V(\mu_i; \phi)}\,\frac{\partial\mu_i}{\partial\beta_j} = 0,
    \qquad j = 1, \dots, p.   (2.4)

We will refer to the p-dimensional vector (U_1, ..., U_p)' as U^β. Because φ is unknown, we must introduce an additional estimating equation. This equation is based on the obvious notion that the variance function V(μ_i; φ) should reflect the true variances of the observed y_i (denote these σ_i²).
Thus, φ should be estimated so that V(μ_i; φ), the model-based variance, agrees well, in some sense, with the true variance σ_i². Only estimates of the true variances are available from the data, so a sensible estimate of φ is provided by the solution to the "moment" equation

    U^{\phi} = \frac{1}{n}\sum_{i=1}^n \left[ \frac{(y_i - \mu_i)^2}{V(\mu_i; \phi)} - 1 \right] = 0.   (2.5)

To correct for the bias resulting from the simultaneous estimation of the p regression parameters, Breslow (1984) suggests modifying this equation slightly to

    U^{\phi} = \frac{1}{n}\left[ \sum_{i=1}^n \frac{(y_i - \mu_i)^2}{V(\mu_i; \phi)} - (n - p) \right] = 0   (2.6)

in place of (2.5). Given an initial estimate of φ we may use (2.4) to obtain an estimate of β. This estimate of β can then be used in (2.6) to estimate φ. This procedure is iterated to obtain the joint solution (β̂, φ̂) to the estimating equations (2.4) and (2.6). Subject to regularity conditions (Inagaki, 1973; White, 1982) this joint solution converges in probability, as the number of observations increases, to (β*, φ*), where β* is the true value of β and φ* is the value of φ which satisfies the limit equation

    \lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^n \frac{\sigma_i^2}{V(\mu_i; \phi)} = 1.

To motivate the limiting distribution of the estimator (β̂, φ̂), consider the following. Let θ = (βᵗ, φ)ᵗ and let U(θ) = (U^βᵗ, U^φ)ᵗ, where U^β is the vector form of the equations given by (2.4) and U^φ is given in (2.6). While (2.6) is not a maximum quasi-likelihood equation, we may consider (2.4) and (2.6) together as moment-type estimating equations. The derivation of the asymptotic properties of (β̂, φ̂) will proceed along the same lines as the derivation of the properties of maximum likelihood estimators, although some modifications are required. The fact that θ̂ is the solution to U(θ̂) = 0, along with the Mean Value Theorem, allows us to write the Taylor series expansion of U(θ̂) about the point θ* as

    0 = U(\hat\theta) = U(\theta^*) + \left.\frac{\partial U}{\partial\theta^t}\right|_{\theta_m} (\hat\theta - \theta^*),

where each of the p + 1 elements of θ_m is between the corresponding elements of θ̂ and θ*. Re-arrangement allows us to write this as

    \sqrt{n}\,(\hat\theta - \theta^*)
    = \left[ -\frac{1}{n}\left.\frac{\partial U}{\partial\theta^t}\right|_{\theta_m} \right]^{-1}
      \frac{1}{\sqrt{n}}\,U(\theta^*).
(2.7)

Under mild regularity conditions, θ̂ will converge in probability to θ* (see Moore 1986, Theorem 1, p. 586). Then, since θ_m is between θ̂ and θ* and ∂U/∂θᵗ is a sum of n independent components, we have

    -\frac{1}{n}\left.\frac{\partial U}{\partial\theta^t}\right|_{\theta_m}
    = -\frac{1}{n}\sum_{i=1}^n \left.\frac{\partial U_i}{\partial\theta^t}\right|_{\theta_m}
    \;\longrightarrow\; \lim_{n\to\infty} -\frac{1}{n}\sum_{i=1}^n E\left[\frac{\partial U_i(\theta^*)}{\partial\theta^t}\right],

where here the subscript i refers to the contribution of the i-th observation, i = 1, ..., n, and the expectation is taken with respect to the true value θ* under the assumption that μ = exp{Xβ} correctly specifies the mean. Now −E[∂U(θ*)/∂θᵗ] is a (p + 1) × (p + 1) matrix of the form

    \begin{bmatrix} A_n & b_n \\ c_n^t & d_n \end{bmatrix},

where

    A_n = -E\left[\frac{\partial U^{\beta}}{\partial\beta^t}\right]
        = \sum_{i=1}^n \frac{1}{V(\mu_i; \phi^*)}\,
          \frac{\partial\mu_i}{\partial\beta}\frac{\partial\mu_i}{\partial\beta^t}\bigg|_{\theta^*}

(the terms involving y_i − μ_i have expectation zero when the mean is correctly specified),

    b_n = -E\left[\frac{\partial U^{\beta}}{\partial\phi}\right]
        = \sum_{i=1}^n E\left[ \frac{y_i - \mu_i}{V^2(\mu_i; \phi^*)}\,\frac{\partial V}{\partial\phi} \right]
          \frac{\partial\mu_i}{\partial\beta}\bigg|_{\theta^*} = 0,

    c_n = -E\left[\frac{\partial U^{\phi}}{\partial\beta}\right]
        = \frac{1}{n}\sum_{i=1}^n \frac{\sigma_i^2}{V^2(\mu_i; \phi^*)}\,
          \frac{\partial V}{\partial\mu_i}\,\frac{\partial\mu_i}{\partial\beta}\bigg|_{\theta^*},

and

    d_n = -E\left[\frac{\partial U^{\phi}}{\partial\phi}\right]
        = \frac{1}{n}\sum_{i=1}^n \frac{\sigma_i^2}{V^2(\mu_i; \phi^*)}\,
          \frac{\partial V}{\partial\phi}\bigg|_{\theta^*}.

Now consider the other random vector entering (2.7):

    \frac{1}{\sqrt{n}}\,U(\theta^*) = \frac{1}{\sqrt{n}}\sum_{i=1}^n U_i(\theta^*).

Assuming the regression model correctly specifies the mean, we have E[U^β(θ*)] = 0. By the definition of φ* and provided regularity conditions hold (see Moore 1986, Lemma 1, p. 585), it follows that this random vector has a limiting multivariate normal distribution

    N\left( 0, \begin{bmatrix} B & e \\ e^t & f \end{bmatrix} \right),

where B = lim_{n→∞} (1/n) B_n is the normed limit of

    B_n = E\left( U^{\beta} U^{\beta t} \right).

Because the observations are assumed to be independent, this reduces to

    B_n = \sum_{i=1}^n \frac{\sigma_i^2}{V^2(\mu_i; \phi^*)}\,
          \frac{\partial\mu_i}{\partial\beta}\frac{\partial\mu_i}{\partial\beta^t}\bigg|_{\theta^*}.

For our purposes, the forms of e and f are of no consequence.
Combining these results with Slutsky's theorem we have

    \sqrt{n}\,(\hat\theta - \theta^*) \;\longrightarrow\;
    N\left( 0,\;
    \begin{bmatrix} A & 0 \\ c^t & d \end{bmatrix}^{-1}
    \begin{bmatrix} B & e \\ e^t & f \end{bmatrix}
    \begin{bmatrix} A & 0 \\ c^t & d \end{bmatrix}^{-t}
    \right),

where A = lim_{n→∞} (1/n) A_n, c = lim_{n→∞} c_n and d = lim_{n→∞} d_n. As

    \begin{bmatrix} A & 0 \\ c^t & d \end{bmatrix}^{-1}
    = \begin{bmatrix} A^{-1} & 0 \\ -d^{-1} c^t A^{-1} & d^{-1} \end{bmatrix},

the asymptotic variance of √n(θ̂ − θ*) is

    \begin{bmatrix} A^{-1} & 0 \\ -d^{-1} c^t A^{-1} & d^{-1} \end{bmatrix}
    \begin{bmatrix} B & e \\ e^t & f \end{bmatrix}
    \begin{bmatrix} A^{-1} & -A^{-1} c\, d^{-1} \\ 0 & d^{-1} \end{bmatrix}.

The asymptotic covariance matrix for √n(β̂ − β*) is the upper left-hand p × p sub-matrix of the above matrix, namely A⁻¹BA⁻¹. Note that this is exactly the covariance matrix one would obtain for the estimation of β based on the estimating equations (2.4), with φ fixed at φ* rather than estimated; the asymptotic covariance for β̂ is unaffected by the estimation of φ. This occurs despite the fact that β̂ and φ̂ are, in general, correlated (the entries of the upper right-hand p × 1 vector of the above matrix need not be zero); it is a result of the fact that the p × 1 vector b_n has all p elements identically equal to 0, that is, E[∂U^β/∂φ] = 0. Note further that if the function V(μ_i; φ) correctly specifies the variances σ_i², then A = B and √n(β̂ − β*) has asymptotic covariance matrix given by A⁻¹.

Thus we can consider two estimates of the asymptotic covariance of β̂. The first is model-based and assumes that V(μ_i; φ) correctly specifies the variances; denote this estimate Σ̂_M = Â_n⁻¹, where Â_n is A_n with μ_i replaced by the fitted values μ̂_i = μ_i(X_i; β̂) and φ* replaced by φ̂. The second estimate we refer to as the empirical estimate; denote this Σ̂_E = Â_n⁻¹ B̂_n Â_n⁻¹, where

    \hat B_n = \sum_{i=1}^n \frac{(y_i - \hat\mu_i)^2}{V^2(\hat\mu_i; \hat\phi)}\,
               \frac{\partial\mu_i}{\partial\beta}\frac{\partial\mu_i}{\partial\beta^t}\bigg|_{\hat\beta}.

The idea of using an empirical covariance matrix in connection with misspecified models has been developed by numerous authors, including White (1982), Royall (1986), Liang and Zeger (1986), and Carroll and Ruppert (1988, Section 4.3.2). We now have estimating equations for β and φ which yield consistent estimators, and we have an expression for the asymptotic covariance of √n(β̂ − β*).
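To make the joint solution of (2.4) and (2.6) concrete, the following sketch solves both equations by alternation for a variance function in which φ does not factor out (the quadratic form used later in Chapter 3), so that genuine iteration is required: (2.4) is solved by Fisher scoring and (2.6) by bisection in φ. This is illustrative only; the thesis's computations were done in C, and the data, names and tolerances here are our own.

```python
import math

def V(mu, phi):
    """Quadratic variance function mu + (phi - 1) * mu^2."""
    return mu + (phi - 1.0) * mu * mu

def fit_beta(y, x, phi, b0, b1, steps=50):
    """Fisher-scoring solution of (2.4) for mu_i = exp(b0 + b1*x_i), phi fixed."""
    for _ in range(steps):
        U0 = U1 = a00 = a01 = a11 = 0.0
        for yi, xi in zip(y, x):
            mu = math.exp(b0 + b1 * xi)
            r = (yi - mu) / V(mu, phi)    # quasi-score residual
            w = mu * mu / V(mu, phi)      # Fisher weight: (dmu/dbeta)^2 / V
            U0 += r * mu
            U1 += r * mu * xi
            a00 += w; a01 += w * xi; a11 += w * xi * xi
        det = a00 * a11 - a01 * a01
        b0 += (a11 * U0 - a01 * U1) / det
        b1 += (a00 * U1 - a01 * U0) / det
    return b0, b1

def fit_phi(y, mus, p=2):
    """Solve (2.6) for phi by bisection (restricted to phi >= 1 so V > 0)."""
    def g(phi):  # decreasing in phi
        return sum((yi - m) ** 2 / V(m, phi) for yi, m in zip(y, mus)) - (len(y) - p)
    lo, hi = 1.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# made-up over-dispersed counts and a covariate
y = [0, 1, 3, 2, 6, 1, 4, 9, 2, 5, 0, 7]
x = [-1.0, -0.8, -0.5, -0.3, 0.0, 0.1, 0.3, 0.5, 0.6, 0.8, 0.9, 1.0]

b0, b1, phi = math.log(sum(y) / len(y)), 0.0, 2.0
for _ in range(100):    # alternate (2.4) and (2.6) to the joint solution
    b0, b1 = fit_beta(y, x, phi, b0, b1)
    phi = fit_phi(y, [math.exp(b0 + b1 * xi) for xi in x])
```

At convergence the pair (b0, b1, phi) satisfies both estimating equations simultaneously; for the variance function φμ used in the Prince George analysis this alternation is unnecessary, since φ factors out of (2.4), as noted in Section 2.2.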
In the next sub-section, we use this information to form statistics to test the significance of the regression parameters.

2.1.2 Test Statistics for Hypotheses Regarding Regression Parameters

Consider the p-dimensional vector β partitioned into β₁ and β₂, of dimensions p₁ and p₂ respectively. Here the point is to think of β₂ as parameters which have fixed values β₂⁰ (usually zero) under some null hypothesis H₀. We will refer to the model where all of the parameters β = (β₁ᵗ, β₂ᵗ)ᵗ are estimated as the full model, and the model where β₁ is estimated while β₂ is set to its hypothesized value β₂⁰ as the reduced model. Under H₀, the random vector √n(β̂₂ − β₂⁰) has a limiting multivariate normal distribution with mean 0 and covariance Σ₂₂, which is estimated by Σ̂₂₂, the lower right-hand p₂ × p₂ sub-matrix of either Σ̂_M or Σ̂_E. We therefore have model-based and empirical versions of a test statistic we refer to as the Wald test, where we compare

    (\hat\beta_2 - \beta_2^0)^t\, \hat\Sigma_{22}^{-1}\, (\hat\beta_2 - \beta_2^0)

to the χ²(p₂) distribution.

Another possible type of test, one that does not require fitting of the full model, is the score test. Recall from Section 2.1.1 that E[∂U_j^β/∂φ] = 0, j = 1, ..., p. With sufficient regularity conditions, this implies that for any sequence of estimates φ̃ converging to φ*, the statistics U^β(β̂, φ̃) and U^β(β̂, φ*) are asymptotically equivalent. This can be seen by comparing the following two asymptotic expansions:

    \frac{1}{\sqrt{n}} U^{\beta}(\hat\beta, \tilde\phi)
    = \frac{1}{\sqrt{n}} U^{\beta}(\beta^*, \phi^*)
    + \frac{1}{n}\left.\frac{\partial U^{\beta}}{\partial\beta^t}\right|_{\beta_{m1},\,\phi_{m1}} \sqrt{n}\,(\hat\beta - \beta^*)
    + \frac{1}{n}\left.\frac{\partial U^{\beta}}{\partial\phi}\right|_{\beta_{m1},\,\phi_{m1}} \sqrt{n}\,(\tilde\phi - \phi^*)   (2.8)

and

    \frac{1}{\sqrt{n}} U^{\beta}(\hat\beta, \phi^*)
    = \frac{1}{\sqrt{n}} U^{\beta}(\beta^*, \phi^*)
    + \frac{1}{n}\left.\frac{\partial U^{\beta}}{\partial\beta^t}\right|_{\beta_{m2},\,\phi^*} \sqrt{n}\,(\hat\beta - \beta^*)   (2.9)

for some β_mi (i = 1, 2) between β̂ and β*, and similarly for some φ_m1 between φ̃ and φ*. The right-hand sides of these equations are of the same form except for the final term in (2.8). But, as n gets large, √n(φ̃ − φ*) converges in distribution while (1/n) ∂U^β/∂φ, which is an average of independent components, converges to its expectation, which is 0.
Also, recall from Section 2.1.1 that the asymptotic covariance matrix of √n(β̂ − β*) does not depend on how, or even if, φ is estimated, so that the distributions of the random vectors √n(β̂ − β*) are the same in both (2.8) and (2.9). This shows that the random vectors defined by the left-hand sides of (2.8) and (2.9) have the same asymptotic mean and variance. Therefore, in what follows we may suppose that the estimator of β is based on the estimating equations (2.4) alone, with φ fixed at its limiting value φ*.

Now we consider the maximum quasi-likelihood estimates under the reduced model. With U^β partitioned into components U^β1 and U^β2 of dimension p₁ and p₂ respectively, let β̂_red denote this estimator, namely the solution to U^β1[(β̂₁ᵗ, β₂⁰ᵗ)ᵗ] = 0. We wish to determine the asymptotic distribution under the null hypothesis of the score statistic for testing H₀: β₂ = β₂⁰. To accomplish this, first examine the expansion

    0 = \frac{1}{\sqrt{n}} U^{\beta 1}(\hat\beta_{red})
    = \frac{1}{\sqrt{n}} U^{\beta 1}(\beta^*)
    + \frac{1}{n}\left.\frac{\partial U^{\beta 1}}{\partial\beta_1^t}\right|_{\beta_{m1}} \sqrt{n}\,(\hat\beta_1 - \beta_1^*)   (2.10)

and the corresponding expression for the score statistic

    \frac{1}{\sqrt{n}} U^{\beta 2}(\hat\beta_{red})
    = \frac{1}{\sqrt{n}} U^{\beta 2}(\beta^*)
    + \frac{1}{n}\left.\frac{\partial U^{\beta 2}}{\partial\beta_1^t}\right|_{\beta_{m2}} \sqrt{n}\,(\hat\beta_1 - \beta_1^*)   (2.11)

where β_mi (i = 1, 2) are between β̂₁ and β₁*. Rearranging (2.10) we get

    \sqrt{n}\,(\hat\beta_1 - \beta_1^*)
    = -\left[ \frac{1}{n}\left.\frac{\partial U^{\beta 1}}{\partial\beta_1^t}\right|_{\beta_{m1}} \right]^{-1}
      \frac{1}{\sqrt{n}} U^{\beta 1}(\beta^*),

which, substituted into (2.11), allows us to write

    \frac{1}{\sqrt{n}} U^{\beta 2}(\hat\beta_{red})
    = \frac{1}{\sqrt{n}} U^{\beta 2}(\beta^*)
    - \frac{1}{n}\left.\frac{\partial U^{\beta 2}}{\partial\beta_1^t}\right|_{\beta_{m2}}
      \left[ \frac{1}{n}\left.\frac{\partial U^{\beta 1}}{\partial\beta_1^t}\right|_{\beta_{m1}} \right]^{-1}
      \frac{1}{\sqrt{n}} U^{\beta 1}(\beta^*).

As n → ∞ we have

    \frac{1}{\sqrt{n}} U^{\beta 2}(\beta^*) \longrightarrow N(0, B_{22}),
    \qquad
    \frac{1}{\sqrt{n}} U^{\beta 1}(\beta^*) \longrightarrow N(0, B_{11}),

and

    -\frac{1}{n}\left.\frac{\partial U^{\beta 2}}{\partial\beta_1^t}\right|_{\beta_{m2}} \longrightarrow A_{21},
    \qquad
    -\frac{1}{n}\left.\frac{\partial U^{\beta 1}}{\partial\beta_1^t}\right|_{\beta_{m1}} \longrightarrow A_{11},

where the matrices A₁₁, A₂₁, B₁₁ and B₂₂ are the appropriate sub-matrices of the matrices A and B. It follows that the asymptotic distribution of this score statistic is multivariate normal with mean 0 and covariance matrix

    \mathrm{Var}\left[ \frac{1}{\sqrt{n}} U^{\beta 2}(\hat\beta_{red}) \right]
    = B_{22} - A_{21}A_{11}^{-1}B_{12} - B_{21}A_{11}^{-1}A_{12}
      + A_{21}A_{11}^{-1}B_{11}A_{11}^{-1}A_{12}.   (2.12)

If the variance function correctly specifies the variances, then A = B and (2.12) reduces to the usual expression

    \mathrm{Var}\left[ \frac{1}{\sqrt{n}} U^{\beta 2}(\hat\beta_{red}) \right]
    = A_{22} - A_{21}A_{11}^{-1}A_{12}.
(2.13)

Therefore we can obtain a model-based estimate of the asymptotic covariance matrix by substituting Â_n for A in (2.13), or an empirical estimate by substituting Â_n for A and B̂_n for B in (2.12). The score test for testing H₀: β₂ = β₂⁰ is given by comparing

    \frac{1}{\sqrt{n}} U^{\beta 2}(\hat\beta_{red})^t\, \hat\Sigma^{-1}\, \frac{1}{\sqrt{n}} U^{\beta 2}(\hat\beta_{red})

to the χ²(p₂) distribution, where Σ̂ is our particular choice of the covariance matrix estimate.

The final type of test statistic we will consider is a deviance or likelihood ratio statistic. Recall the quasi-log-likelihood Q_n(y; μ, φ) = Σᵢ Q(yᵢ; μᵢ, φ). Define the deviance under the full and reduced models as −2Q_n(y; μ̂(X, β̂_full), φ̂) and −2Q_n(y; μ̂(X, β̂_red), φ̂) respectively, where the estimate φ̂ is obtained under the full model in both cases. The deviance statistic is then

    \Delta\mathrm{dev.} = 2\left[ Q_n(y; \hat\mu(X, \hat\beta_{full}), \hat\phi)
                        - Q_n(y; \hat\mu(X, \hat\beta_{red}), \hat\phi) \right],

which should behave like a log-likelihood ratio and thus have an approximate χ²(p₂) distribution (McCullagh and Nelder, 1989, p. 471). In the next section, we will examine the specifics of applying these tests for the significance of regression parameters to the case where we have over-dispersed Poisson data.

2.2 Test Statistics for Hypotheses in Over-Dispersed Poisson Regression

In Section 1.2 the estimating equations and scaled deviance test statistic for over-dispersed Poisson regression were presented as an extension of Poisson regression. This extension was necessary because all inference which requires estimates of variability, such as tests of hypotheses, will be in error if the over-dispersion is not taken into account, even though reasonable amounts of over-dispersion have very little effect on estimation of the regression parameters by ordinary Poisson regression (Cox, 1983). We saw that an easy way to accomplish this was to introduce a dispersion parameter φ to model this extra variability.
In quasi-likelihood this is equivalent to specifying the variance of the observed data to be described by V(μᵢ; φ) = φμᵢ for the i-th observation. In this case, for the link function g(μ) = log μ, equations (2.4) and (2.6) of Section 2.1.1 become

    U_j(\beta, \phi) = \sum_{i=1}^n \frac{y_i - \mu_i}{\phi}\, X_{ij} = 0,
    \qquad j = 1, \dots, p,   (2.14)

    U^{\phi}(\beta, \phi) = \frac{1}{n}\left[ \sum_{i=1}^n \frac{(y_i - \mu_i)^2}{\phi\,\mu_i} - (n - p) \right] = 0,   (2.15)

where μᵢ = exp{Xᵢβ}. Note that φ may be factored out of (2.14) and that the estimate of φ obtained from (2.15) is Pearson's X² statistic (using the Pearson residuals defined in Section 1.2) divided by its degrees of freedom. An alternate estimate of φ which is thought to be very similar is to replace X² by the deviance given by the joint Poisson likelihood under the full model, which for this discussion we denote G². For this formulation, it is immediate that the estimates of β do not depend on the estimated value of φ; in other words, we may use the usual Poisson estimating equations for β and estimate φ afterwards with either of the two methods described above. Clearly this approach has considerable appeal over an approach involving specification of a variance function in which φ cannot be factored out.

With the choices V(μᵢ; φ) = φμᵢ and μᵢ = exp{Xᵢβ}, then from Section 2.1.1 the model-based estimate of the covariance matrix of β̂ is given by Σ̂_M = Â_n⁻¹, where

    A_n = \sum_{i=1}^n \frac{1}{V(\mu_i; \phi)}\,
          \frac{\partial\mu_i}{\partial\beta}\frac{\partial\mu_i}{\partial\beta^t}
        = \phi^{-1} \sum_{i=1}^n \mu_i\, X_i^t X_i,

where the Xᵢ are the row vectors corresponding to the i-th row of X. Note that φ may be factored out of A_n, and the resulting covariance matrix is exactly φ̂ times that obtained from Poisson regression. Alternately, the empirical estimate of the covariance matrix of β̂ is given by Σ̂_E = Â_n⁻¹ B̂_n Â_n⁻¹, where

    B_n = \sum_{i=1}^n \frac{(y_i - \mu_i)^2}{V^2(\mu_i; \phi)}\,
          \frac{\partial\mu_i}{\partial\beta}\frac{\partial\mu_i}{\partial\beta^t}
        = \phi^{-2} \sum_{i=1}^n (y_i - \mu_i)^2\, X_i^t X_i.

Note that the factors φ appearing in Â_n and B̂_n cancel out in the expression for Σ̂_E.
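The two covariance estimates can be written out directly. The following fragment (illustrative only: the y, X and mu values are invented, and in practice mu would come from the Poisson fit) computes Σ̂_M and Σ̂_E for a two-parameter model; note that the empirical version never touches φ̂, which makes the cancellation concrete:

```python
def inv2(m):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(m1, m2):
    return [[sum(m1[i][k] * m2[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def covariances(y, X, mu, p=2):
    n = len(y)
    phi_hat = sum((yi - mi) ** 2 / mi for yi, mi in zip(y, mu)) / (n - p)  # X^2/d.f.
    A = [[0.0, 0.0], [0.0, 0.0]]   # sum_i mu_i X_i^t X_i   (phi factored out)
    B = [[0.0, 0.0], [0.0, 0.0]]   # sum_i (y_i - mu_i)^2 X_i^t X_i
    for yi, Xi, mi in zip(y, X, mu):
        r2 = (yi - mi) ** 2
        for a in range(2):
            for b in range(2):
                A[a][b] += mi * Xi[a] * Xi[b]
                B[a][b] += r2 * Xi[a] * Xi[b]
    Ainv = inv2(A)
    model_cov = [[phi_hat * Ainv[i][j] for j in range(2)] for i in range(2)]
    empirical_cov = matmul2(matmul2(Ainv, B), Ainv)   # phi cancels here
    return phi_hat, model_cov, empirical_cov

# invented data: intercept-plus-covariate design and fitted means
X  = [[1.0, -1.0], [1.0, -0.5], [1.0, 0.0], [1.0, 0.5], [1.0, 1.0]]
y  = [1, 0, 4, 2, 8]
mu = [1.1, 1.6, 2.4, 3.5, 5.2]
phi_hat, model_cov, empirical_cov = covariances(y, X, mu)
```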
Considering these two estimates of the covariance matrix and using either φ̂ = X²_full/d.f._full or φ̂ = G²_full/d.f._full leads to three possible versions of the Wald test: two model-based and one empirical. We can similarly examine three versions of the score test. The two model-based versions use the expression given in equation (2.13) for the variance, with Â_n appropriately partitioned and the score vector as given by the p₂ equations of the form (2.14) which correspond to parameters with some hypothesized value under H₀. The resulting statistic could use either of the estimates of φ described above. The empirical score test, on the other hand, uses the empirical covariance matrix of the form described by (2.12), with Â_n and B̂_n appropriately partitioned. Examination of the expressions involved in this statistic reveals that the φ terms disappear here also due to cancellation.

Finally, we may consider two versions of the deviance test. The expression for the quasi-likelihood is simply the Poisson log-likelihood divided by an estimate of φ. Therefore our deviance tests, often referred to as scaled deviance tests, are given by

    \Delta\mathrm{dev.} = \frac{2}{\hat\phi}\sum_i \left[ y_i \log\frac{\hat\mu_i}{\tilde\mu_i} - (\hat\mu_i - \tilde\mu_i) \right],

where μ̂ᵢ are the fitted values under the full model and μ̃ᵢ are the fitted values under the reduced model. Note that the best estimate of φ (using either X²/d.f. or G²/d.f.) will be the one obtained under the full model, since this model has the best estimates of μᵢ. Now that we have discussed some of the methods available for modelling and making inference about parameters in over-dispersed Poisson data, we wish to address the questions raised in Section 1.4 and examine the performance of these procedures in a context similar to that encountered in the Prince George study. A simulation study designed to answer these questions is described in Chapter 3.
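The scaled deviance statistic just defined reduces to a one-line computation once the two sets of fitted values and φ̂ are available. A minimal sketch, with invented counts and fitted values (not the Prince George data):

```python
import math

def scaled_deviance_stat(y, mu_full, mu_red, phi_hat):
    """(2/phi) * sum[ y*log(mu_full/mu_red) - (mu_full - mu_red) ]."""
    return (2.0 / phi_hat) * sum(
        yi * math.log(mf / mr) - (mf - mr)
        for yi, mf, mr in zip(y, mu_full, mu_red))

y       = [2, 0, 5, 3, 1, 4]
mu_full = [1.7, 0.9, 4.2, 3.1, 1.4, 3.7]
mu_red  = [2.5, 2.5, 2.5, 2.5, 2.5, 2.5]   # e.g. an intercept-only fit
phi_hat = 1.4                               # X^2/d.f. under the full model
delta_dev = scaled_deviance_stat(y, mu_full, mu_red, phi_hat)
# delta_dev would be compared to a chi-squared distribution with p2 d.f.
```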
Chapter 3

A Simulation Study Related to the Prince George Study

Recall the methodological issues which arose in the Prince George study as described in Section 1.4:

(a) Is the estimate of the scale parameter φ used in this analysis, namely deviance(full)/d.f.(full), a good estimate, or should we have used Pearson X²(full)/d.f.(full)?

(b) Is the change in adjusted deviance criterion appropriate for model reduction in this situation?

(c) Are statistics other than the adjusted deviance statistic better for testing hypotheses concerning the regression parameters when over-dispersion is present?

(d) Given the amounts of over-dispersion in the emergency room visits series, are the estimates of the model parameters and standard errors accurate?

In this chapter we outline the purpose and procedures of a simulation study designed to address these questions. The primary goal of the simulation study was to examine the behavior of the model reduction test statistics described in Section 2.2 in contexts similar to the Prince George study (items (b) and (c)). Secondary goals were to examine the performance of estimates of over-dispersion parameters as well as of estimates of regression coefficients and their standard errors (item (d)). Of particular concern were the effects of varying amounts of over-dispersion and misspecification of the variance function on this performance. In particular, we will:

1. Compare the scaled deviance, score and Wald tests for model reduction.

2. Compare the parameter estimates to the simulated values.

3. Compare empirical and model-based estimated standard errors to the standard deviations of the estimated parameters.

However, as an essential preliminary to the main thrust of our simulation study, we must first consider the question of how the over-dispersion parameter should be estimated (item (a)). As already mentioned, two possibilities are via G²/d.f.
(as was done in the Prince George study) and X²/d.f. Results presented in Section 4.1 provide a clear indication that the latter is preferred, and this estimate is therefore employed in all subsequent work. In particular, the only version of the scaled deviance model reduction test considered is the one where φ is estimated by X²/d.f. Both empirical and model-based versions of the Wald statistic will be evaluated, but only the empirical score test will be evaluated; the latter test is reported to perform well even in the presence of over-dispersion and with misspecified variances (see Breslow, 1990).

To carry out these comparisons, a variety of data configurations, all intended to be qualitatively similar to the context of the Prince George study, will be simulated. Different data configurations will correspond to different log-linear models for the mean levels of the observed counts and different amounts of over-dispersion. In all cases, the model fit to the simulated data will be that of over-dispersed Poisson regression; that is, estimation of the regression coefficients will be based on the estimating equations (2.14), which result from an assumed variance function of the form V(μᵢ; φ) = φμᵢ.

The simulation study will not be concerned with the possible misspecification of the log-linear model for μᵢ (the regression function), but will examine the effects of misspecification of the variance function. For this purpose we define the "correct" variance to be V(μᵢ; φ) = φμᵢ, the model to be used in the fitting, and the "misspecified" variance to be V(μᵢ; φ) = μᵢ + (φ − 1)μᵢ².
The latter variance function arises from thinking of the datum yᵢ as being sampled from a distribution which is, conditional on the value of an unobserved variable λᵢ, Poisson with mean λᵢ; if λᵢ is considered to be sampled from a distribution with mean μᵢ and variance (φ − 1)μᵢ², then the marginal distribution of yᵢ has mean μᵢ and variance μᵢ + (φ − 1)μᵢ².

This simulation study differs in several respects from a study carried out in a different context by Breslow (1990). Firstly, Breslow's study only considered simulated data with mean values no smaller than 2. In the Prince George study the Asthma series had many zero values and a mean value of only about 0.75. In the current study we wish to investigate the possibility that such a series could lead to poor performance of the estimators or test statistics. Another difference is that Breslow was concerned with a 3x4 factorial design with n replicates per cell (and a single continuous covariate), whereas our study will be concerned with one or more long data series similar to the series available for the Prince George study. This difference is perhaps not as great as it first appears, because all the data in Breslow's simulations were generated under the hypothesis that the column factor of the 3x4 design had no effect. The result is that, in effect, three data series of length 4n are being considered, and for n = 144 (the largest value Breslow considered) the length of the series is comparable with the length we will employ (700). The last key difference is that the current study is only concerned with test statistics and estimating equations based on an assumed variance function V(μᵢ; φ) = φμᵢ.
We will examine the performance of these test statistics and estimating equations using data generated under "correct" and "misspecified" variance functions, both for a single data series and for three series with differing amounts of over-dispersion (to represent the separate diagnostic categories), keeping in mind that the methods used will provide only a single overall estimate of φ. In contrast, Breslow generated each of his data sets under the variance function V(μᵢ; φ) = μᵢ + (φ − 1)μᵢ² and then examined the performance of the estimating equations and test statistics based on this variance function and also based on V(μᵢ; φ) = φμᵢ.

3.1 The General Simulation Procedure

The simulation procedure, or sampling experiment, consists of three phases: generating the data, fitting a full model, and fitting one or more reduced models. All calculations are carried out in the programming language C.

3.1.1 Generating the Data

The data generated are to represent the main qualitative features of the data analyzed in the Prince George study. To accomplish this we will generate time series of 700 independent over-dispersed Poisson observations. We may choose to generate a single series (Sections 4.2 and 4.4) or we may generate three separate series (Sections 4.3 and 4.5). Each data set to be generated requires a choice of a log-linear model for the mean μᵢ of the form log μᵢ = Xᵢβ, a choice of the variance function to be used in the simulation, and a choice of the value for the over-dispersion parameter φ. Over-dispersion in the data is introduced through gamma-mixtures of Poisson random variables. To generate the datum yᵢ, it is most convenient to think of our situation as follows.
Let log μᵢ = Xᵢβ and let there be a random variable νᵢ which has a gamma distribution with mean 1 and variance Vᵢ. Then for each i = 1, ..., 700 we first generate a realization of the gamma random variable νᵢ and then generate a realization of a Poisson random variable with parameter νᵢμᵢ. The resulting observations are negative binomial, with mean μᵢ and variance μᵢ + Vᵢμᵢ². The choice Vᵢ = (φ − 1)/μᵢ leads to simulated data with the "correct" variance function φμᵢ, whereas the choice Vᵢ = (φ − 1) leads to simulated data with the "misspecified" variance function μᵢ + (φ − 1)μᵢ². Note that both variance functions reduce to the usual Poisson variance when φ = 1, and in this case the Poisson variates were generated directly.

3.1.2 Fitting the Full Model

We fit the full model to obtain estimates of the parameters β̂_full using an iterative method for solving the estimating equations (2.14) of Chapter 2. We then estimate φ using the Pearson X² statistic divided by its degrees of freedom. In contrast to Breslow (1990), when we obtain an estimate of φ which is less than one, we do not adjust that estimate to be equal to one. In our situation the dispersion parameter is only a nuisance parameter used to account for dispersion. It is not necessarily thought to arise from the variance of a mixing distribution (which would require values of φ > 1); that is to say, under-dispersion is not ruled out. Using this estimate, both the empirical and the model-based estimates of the covariance matrix of β̂_full are calculated (allowing the empirical and model-based Wald tests to be evaluated), as well as the scaled deviance under the full model.

3.1.3 Fitting the Reduced Model

Next, the reduced model is fit to obtain estimates of the parameters β̂_red. The empirical estimate of the covariance matrix of β̂_red is calculated and used in the calculation of the score statistic.
We also calculate the scaled deviance using the estimate of φ from the full model, so that the scaled deviance statistic for testing the model reduction may be computed.

The general procedure described above of generating data, fitting full and reduced models, calculating estimates and standard errors, and calculating the model-based Wald, empirical Wald, empirical score and scaled deviance test statistics is repeated 1000 times for each particular specification of the log-linear model, choice of V(μᵢ; φ) and value of φ. In the following section the specific log-linear models, the choices for the parameters in these models, and the levels of over-dispersion utilized in this simulation study are discussed.

3.2 The Specifics of the Simulation Study

As mentioned in Section 3.1.1, we may choose to simulate either a single series of data, or three separate series to be modelled simultaneously. The single series simulations are small enough, in terms of the computing time required to generate the data and fit a model, to allow an investigation of the effects on the estimators and test statistics of varying both the level of dispersion and the simulated mean values. The three series simulations recreate an important feature of the data analysed in the Prince George study, namely separate series with possibly different amounts of dispersion (we chose to use three series here instead of four as in the Prince George study simply to reduce the computing time). The overall mean values of the series and the levels of over-dispersion to be considered in these simulations (whether in a single series or in three) were based on the data series from the Prince George study.
To recap the approximate mean values of those series and to get a rough idea of how much dispersion was present, we have the following table:

                 Asthma   Bronchitis+   Ear    Other
mean              0.76       1.54       3.62    5.13
variance          0.80       3.36       6.92   14.2
variance/mean     1.05       2.18       1.91    2.77

From the above, the values 0.75, exp{0.7} (≈ 2.01) and exp{1.6} (≈ 4.95) were chosen as possible overall mean values for the series to be simulated. In the single series simulations we will carry out sampling experiments using each of these values, and we will use the three values together as the overall mean levels in the three series simulations. Based on the last row of the above table, the values 1, 2 and 3 seem to be reasonable levels of over-dispersion to consider, although the estimates of φ for each series would be smaller than the ratio of the variance to the mean (the estimate φ̂ = Σᵢ(yᵢ − μ̂ᵢ)²/μ̂ᵢ / (n − p) is, in general, less than variance/mean = Σᵢ(yᵢ − ȳ)²/((n − 1)ȳ), since the fitted model accounts for part of the variation about the overall mean). We also wish to simulate the value 1.4, since this is very close to the overall estimate of the dispersion parameter from the Prince George study, and a simulated value φ = 5 is of possible interest as an "extreme" amount of over-dispersion. In the single series simulations, sampling experiments using each of these levels of dispersion are carried out at each possible mean level, with the exception that φ = 5 is only simulated when the mean level is exp{1.6} (using φ = 5 with lower mean values would result in series with mostly zeros and only a few large values; this situation is too extreme and is not of particular interest in this thesis). In the three series simulations we chose two different combinations of dispersion parameters. The first is the triplet (1,2,3), which we refer to as combination I. These values seem like a plausible representation of the level of over-dispersion encountered in the Prince George study.
We also consider the triplet (1.4, 2, 5), which we refer to as combination II, in order to examine the performance of the estimators and test statistics in a situation with a large amount of dispersion (relative to the amounts encountered in the Prince George study). Thus sampling experiments will be carried out using the following choices of overall mean values and dispersion parameters:

Single Series Simulations

                            φ
  Mean Value      1    1.4    2    3    5
  0.75            ✓     ✓     ✓    ✓    x
  exp{0.7}        ✓     ✓     ✓    ✓    x
  exp{1.6}        ✓     ✓     ✓    ✓    ✓

  x = no sampling experiment carried out

Three Series Simulations

                                   Combination of φ
  Mean Values                   I = (1, 2, 3)   II = (1.4, 2, 5)
  (0.75, exp{0.7}, exp{1.6})          ✓                ✓

The models used for generating the data should, in some sense, reproduce the main features of the data encountered in the Prince George study. We first describe some very simple models that are used to generate single series of data before we attempt to reproduce the more complicated features of the Prince George study data. The simplest models to be considered are for a single series with the mean value of the jth observation given by

    μ_j = exp{β + γx_j},    j = 1, ..., 700,    (3.16)

where β is the overall effect and γ is the coefficient corresponding to the covariate x_j = sin(4πj/700), used to simulate a pollution variable with a regular annual cycle. To simulate data under the reduced model, one with no pollution, the choices for the simulated coefficients are β = log 0.75, 0.7, or 1.6 (with γ = 0), corresponding to series with overall mean levels 0.75, exp{0.7}, and exp{1.6} respectively. To simulate data under the full model, one with pollution included, only β = log 0.75 is used but three different values of γ are considered (γ = 0.05, 0.10 or 0.15). The simulations using models of the form (3.16) to generate data are referred to as the one series simple case.
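As a concrete illustration of model (3.16), the sketch below generates one series with over-dispersed counts. The thesis' own generating mechanism is the one described in Section 3.1.1; here a negative binomial parameterized so that Var(y_j) = φμ_j is assumed as one convenient choice, and the function name is illustrative.

```python
import numpy as np

def simulate_series(beta, gamma, phi, n=700, rng=None):
    """Simulate one series under log mu_j = beta + gamma * x_j, where
    x_j = sin(4*pi*j/n) is the artificial pollution covariate.
    Over-dispersion (Var = phi * mu) is introduced via a negative
    binomial; this is one convenient mechanism, assumed for illustration."""
    rng = rng or np.random.default_rng()
    j = np.arange(1, n + 1)
    x = np.sin(4 * np.pi * j / n)
    mu = np.exp(beta + gamma * x)
    if phi == 1.0:
        return x, rng.poisson(mu)
    # NB(r, p) with p = 1/phi and r = mu/(phi - 1) has mean mu, variance phi*mu.
    r = mu / (phi - 1.0)
    return x, rng.negative_binomial(r, 1.0 / phi)

x, y = simulate_series(beta=np.log(0.75), gamma=0.1, phi=2.0)
```

With n = 700 the sample mean of such a series sits near 0.75 and its variance/mean ratio near the simulated φ, matching the design of the sampling experiments.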
These simple models are also extended to a three series simple case with the mean of the jth observation in the ith series given by

    μ_ij = exp{β_i + γ_0 x_j + γ_i x_j},    i = 1, 2, 3;  j = 1, ..., 700,    (3.17)

where β_i is the overall effect for the ith diagnostic category, γ_0 is the common pollution effect and γ_i is the pollution effect specific to the ith diagnosis (with γ_3 = 0 for identifiability). This parameterization with common and separate pollution effects is utilized to facilitate the score test by allowing the hypothesis of common pollution effects to be stated as H: γ_i = 0 for i = 1, 2, 3. For the alternate parameterization γ_i* = γ_0 + γ_i for i = 1, 2, 3, the same hypothesis would take the more usual form of H: γ_1* = γ_2* = γ_3* (which can be tested by the Wald or scaled deviance tests). In these simulations the simulated values of the β_i are β_1 = log 0.75, β_2 = 0.7 and β_3 = 1.6. The null model (no pollution effects) follows from setting γ_0 = γ_1 = γ_2 = γ_3 = 0. We may also simulate data sets with a common pollution effect (we use γ_0 = 0.1, γ_1 = γ_2 = γ_3 = 0), or with separate pollution effects (we use γ_0 = 0.1, γ_1 = 0.1, γ_2 = −0.2 and γ_3 = 0).

The jth observation in the more complicated models with a single series has mean described by

    μ_j = exp{β + Σ_{k=1}^{4} τ_k s_k + γx_j},    (3.18)

where the s_k are indicators designed to represent four "seasons" of equal length (±1 day) within each of our two "years" of 350 observations, and x_j is the sinusoidal pollution covariate just as in the simple case. The seasonal structure is the only attempt made in this thesis to recreate the temporal structure of the data in the Prince George study, and no attempt is made to recreate the meteorological structure. For the purpose of simulating data under the reduced (or null) model, the values τ_1 = 0.25, τ_2 = 0.25 and τ_3 = −0.5 (with τ_4 = 0 for identifiability) were used with one of β = log 0.75, 0.7, or 1.6.
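The seasonal indicators s_k of model (3.18) can be constructed as follows. The exact season boundaries (the "±1 day") are not specified above, so equal blocks within each 350-day year are assumed here, with any leftover days absorbed into the last season.

```python
import numpy as np

def seasonal_design(n=700, n_years=2, n_seasons=4):
    """Indicator covariates s_1..s_3 for four "seasons" within each year of
    350 observations, as in model (3.18). Season 4 is dropped for
    identifiability; equal-length blocks are an assumption."""
    per_year = n // n_years
    season_len = per_year // n_seasons           # 87 days; remainder goes to season 4
    day_of_year = np.arange(n) % per_year
    season = np.minimum(day_of_year // season_len, n_seasons - 1)
    # One indicator column per season, omitting the last season.
    S = np.zeros((n, n_seasons - 1))
    for k in range(n_seasons - 1):
        S[:, k] = (season == k).astype(float)
    return S

S = seasonal_design()
```

Stacking these three columns beside the intercept and the sinusoidal covariate x_j gives the full design matrix for the one series complicated case.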
For the full model (one with pollution included) only β = log 0.75 is used with one of γ = 0.05, 0.10 or 0.15. The simulations based on models described by (3.18) include seasonal effects and are referred to as the one series complicated case. As with the simple simulations, the one series complicated models are extended to a three series complicated case with the mean for the jth observation in the ith series given by

    μ_ij = exp{β_i + Σ_{k=1}^{4} τ_ik s_k + γ_0 x_j + γ_i x_j},    (3.19)

where β_i is the overall effect for the ith diagnostic category, τ_ik is the effect of "season" k in diagnosis i, γ_0 is the common pollution effect, and γ_i is the pollution effect specific to the ith diagnostic category. The values of the coefficients used in the simulations are:

  β_1 = log 0.75    τ_11 = 0.2    τ_12 = 0.2    τ_13 = −0.2    τ_14 = 0
  β_2 = 0.7         τ_21 = 0.2    τ_22 = 0.2    τ_23 = −0.4    τ_24 = 0
  β_3 = 1.6         τ_31 = 0.2    τ_32 = 0.3    τ_33 = −0.4    τ_34 = 0

There were no compelling reasons for the above choices for the season effects other than to make them separate effects; that is, different for each series. As in the three series simple case, the null model (no pollution) corresponds to γ_0 = γ_1 = γ_2 = γ_3 = 0, while the common pollution effect model uses simulated values γ_0 = 0.1, γ_1 = γ_2 = γ_3 = 0, and the separate pollution effects model uses γ_0 = 0.1, γ_1 = 0.1, γ_2 = −0.2 and γ_3 = 0.

The results of fitting the models from each of these four cases (with the various possible choices of simulated coefficients, variance function, and dispersion parameter φ) are presented in Chapter 4. We begin Chapter 4 with a comparison of the possible estimators of φ (as the remainder of the study depends on a choice of estimator of φ). For this comparison, only the one series simple case models were needed, as these provide a strong indication that the estimator X²/d.f. is preferable to G²/d.f.
Chapter 4

Results of the Simulation Study

In this chapter we describe the results of the investigation of the performance of the estimators and test statistics. We begin with the comparison of possible estimators for φ.

4.1 Estimating the Dispersion Parameter φ

Of particular interest in this comparison are any shortcomings in the performance of the estimator φ̂_G = G²/d.f. used in the Prince George study and the possibility that use of the alternate estimator φ̂_X = X²/d.f. in the scaled deviance model reduction test statistic might have led to different conclusions than those sketched in Chapter 1. For this investigation single series of data were generated according to the simple model given by (3.16) using one of the three possible values of β. Only the reduced model (set γ = 0) was used to generate data, so that μ_j does not depend on j. As we shall see, the results in even this most simple of cases indicate that the estimator φ̂_X = X²/d.f. is preferable.

Table 1 presents the mean values and standard deviations of the estimates of φ obtained from the two estimators based on the results of fitting the reduced model. Part A of the table contains the results when the correct variance function is used to simulate the data, while part B presents the results for the misspecified variance function, along with the values φ_s, where φ_s μ_i = μ_i + (φ − 1)μ_i². The values φ_s are the estimated values of φ we should see if our incorrectly specified model is still correctly accounting for the extra variability; that is, φ_s is the amount of dispersion in the data in terms of the variance function V(μ_i; φ) = φμ_i used in the estimating equations.

Table 1. Mean Values of Estimates (± Standard Deviation) in 1000 Simulated Data Sets.

A. Correct Variance Function.
                                         Simulated φ
  Simulated β  Estimator    1             1.4           2             3             5
  log 0.75     φ̂_X      1.002 (.055)  1.401 (.096)  2.001 (.174)  2.996 (.337)
               φ̂_G      1.108 (.039)  1.349 (.057)  1.642 (.086)  2.018 (.136)
  0.70         φ̂_X      0.998 (.056)  1.400 (.084)  2.002 (.134)  2.983 (.235)
               φ̂_G      1.137 (.059)  1.515 (.074)  2.010 (.097)  2.700 (.139)
  1.60         φ̂_X      0.998 (.052)  1.399 (.076)  2.006 (.118)  3.001 (.208)  4.985 (.378)
               φ̂_G      1.046 (.056)  1.447 (.075)  2.035 (.108)  2.938 (.162)  4.537 (.246)

B. Misspecified Variance Function.

                                         Simulated φ
  Simulated β  Estimator    1             1.4           2             3              5
  log 0.75     φ_s          1             1.3           1.75          2.5
               φ̂_X      1.000 (.053)  1.300 (.086)  1.749 (.135)  2.497 (.255)
               φ̂_G      1.107 (.038)  1.292 (.051)  1.526 (.072)  1.844 (.112)
  0.70         φ_s          1             1.806         3.014         5.028
               φ̂_X      1.005 (.054)  1.799 (.114)  2.992 (.248)  5.041 (.535)
               φ̂_G      1.143 (.057)  1.854 (.088)  2.705 (.147)  3.823 (.244)
  1.60         φ_s          1             2.981         5.953         10.906         20.812
               φ̂_X      1.002 (.054)  2.979 (.201)  5.948 (.503)  10.867 (1.14)  20.66 (2.95)
               φ̂_G      1.050 (.058)  2.919 (.159)  5.226 (.310)  8.150 (.552)   12.17 (1.08)

In part A of the table the estimator φ̂_X provides good estimates of φ, on average, at all levels of dispersion. The estimator φ̂_G appears to slightly overestimate φ at low levels of dispersion and to underestimate φ in the presence of large amounts of dispersion. The largest differences between the estimators occur when the level of dispersion is very high relative to the mean value (see, for example, the simulations with β = log 0.75 and φ = 3). In part B the estimator φ̂_X yields good estimates of φ_s, on average, whereas φ̂_G produces very poor estimates when the level of over-dispersion is high. Note that in both parts of Table 1, except perhaps when φ = 1, the estimator φ̂_X generally appears substantially more variable than φ̂_G. This suggests that for some of the simulated data sets φ̂_X may differ noticeably from the simulated value of φ (and may be worse than φ̂_G), although on average it is clearly superior to φ̂_G.
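For reference, the two estimators compared in Table 1 can be computed as below. This is a minimal sketch on hypothetical Poisson data fitted with the intercept-only (reduced) model, not the thesis' code.

```python
import numpy as np

def phi_estimates(y, mu_hat, n_params):
    """Pearson (moment) and deviance-based estimates of the dispersion
    parameter: phi_X = X^2/d.f. and phi_G = G^2/d.f., for a Poisson fit
    with fitted means mu_hat and n_params estimated coefficients."""
    df = len(y) - n_params
    X2 = np.sum((y - mu_hat) ** 2 / mu_hat)                  # Pearson chi-square
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(y > 0, y * np.log(y / mu_hat), 0.0)  # 0*log(0) := 0
    G2 = 2.0 * np.sum(term - (y - mu_hat))                   # deviance
    return X2 / df, G2 / df

# Under the reduced model log mu = beta, the fitted mean is just ybar.
rng = np.random.default_rng(1)
y = rng.poisson(0.75, size=700)
phi_x, phi_g = phi_estimates(y, np.full(700, y.mean()), n_params=1)
```

Run on true Poisson data with mean 0.75, φ̂_X falls near 1 while φ̂_G sits somewhat above it, the same small-mean bias visible in the first row of Table 1.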
To answer the key question of how these differences affect the model reduction test statistics, we use the above data sets (generated under the simple model log μ_j = β) to evaluate the scaled deviance statistic using either φ̂_G or φ̂_X based on the full model given by (3.16). This model reduction, a test of the hypothesis H: γ = 0, is intended to be an over-simplified representation of the model reductions tested in the Prince George study. The plots in Figures 1, 2, 3 and 4 present the values of three test statistics (the scaled deviance using φ̂_G, the scaled deviance using φ̂_X, and the empirical score test, which does not depend on an estimate of φ; see Section 2.2) versus the quantiles of the chi-squared distribution with 1 degree of freedom. Results are presented only for a simulated mean of 0.75 and for φ = 1, 1.4, 2 and 3; the results for the other combinations of β and φ are qualitatively similar. These plots are intended to judge how well the test statistics agree with their predicted asymptotic distribution. The score test is included to allow comparison with a test statistic that has been reported (in a somewhat different context) to perform reasonably well even in the presence of over-dispersion (Breslow 1990). This statistic is also useful for comparison in this situation because it does not involve an estimate of the dispersion parameter.

Figure 1. Comparison of Estimators of Phi: Correct Variance Simulations (Phi=1 or 1.4). [Quantile-quantile plots of the G² estimator, X² estimator and score test statistics against chi-square(1) quantiles, for mean 0.75.]

Figure 2. Comparison of Estimators of Phi: Correct Variance Simulations (Phi=2 or 3). [Corresponding plots for phi = 2 or 3.]

Figure 3. Comparison of Estimators of Phi: Misspecified Variance Simulations (Phi=1 or 1.4). [Corresponding plots under the misspecified variance function.]

Figure 4. Comparison of Estimators of Phi: Misspecified Variance Simulations (Phi=2 or 3). [Corresponding plots for phi = 2 or 3.]

We see that for data sets generated under either the correct variance function (Figures 1 and 2) or the misspecified variance function (Figures 3 and 4), the use of the estimator φ̂_G causes the scaled deviance test statistic to reject too often at higher levels of over-dispersion. The scaled deviance test using the estimator φ̂_X and the empirical score test agree reasonably well with their predicted asymptotic distributions, except for obvious departures at the most extreme values. Similar plots for the other values of β show the same departures of the scaled deviance statistic using φ̂_G from its predicted asymptotic distribution, while the scaled deviance using φ̂_X and the empirical score test perform well.

Recall that the estimator φ̂_X is the more natural estimator of the two, arising as the moment estimate of φ, whereas φ̂_G was used because it was more convenient and thought to generally agree closely with φ̂_X. The limited results presented above indicate that φ̂_G may not agree well with φ̂_X and that the latter is preferable, especially when substantial amounts of dispersion are present in the data. Therefore, in the simulations which follow, when an estimate of φ is required we will always use φ̂_X.

With reference to the Prince George study, the question that arises is: would this improved estimator have led to different conclusions? It appears that use of φ̂_X would not have changed the nature of the results outlined in Chapter 1.
For the pollutant TRS, in the situations where we rejected the null hypothesis in favor of a more complex alternative, φ̂_X was evaluated and found to be smaller than φ̂_G (although the results of Table 1 would suggest that φ̂_X is generally larger than φ̂_G), thus the null hypothesis would still be rejected. It appears quite likely the model reduction process would have led to the same final models and very similar final sets of estimates.

We now begin to examine the performance of the estimators for parameters and their standard errors, and the test statistics outlined in Section 2.2.

4.2 The Simple Case with a Single Series

For a single series we generate data under the reduced model (γ = 0) corresponding to equation (3.16) at the different possible mean levels and amounts of over-dispersion to be considered. Table 2 presents the mean values of the estimated regression coefficient under the reduced model.

Table 2. Mean Values of Estimates of β (note log 0.75 ≈ −0.288).

A. Correct Variance Function.

                               Simulated φ
  Simulated β      1        1.4       2         3         5
  log 0.75      −0.287   −0.288   −0.288    −0.289
  0.7            0.699    0.700    0.699     0.696
  1.6            1.601    1.600    1.600     1.599     1.598

B. Misspecified Variance Function.

                               Simulated φ
  Simulated β      1        1.4       2         3         5
  log 0.75      −0.288   −0.289   −0.290    −0.287
  0.7            0.700    0.698    0.695     0.702
  1.6            1.600    1.601    1.600     1.598     1.592

For the simulations under the correct variance function, the estimates of β are very close to the simulated values, even in the presence of substantial amounts of dispersion; the same is true in the misspecified variance simulations. This may not be surprising since in this situation the estimate of μ = exp{β} is just an average of the observed data.

Table 3 summarizes the estimates of the variability of the estimates of β under the reduced model. In this table, "Simulated" standard error refers to the standard deviation of the 1000 parameter estimates. The details of parts A and B of Table 3 are very similar.
In all cases the empirical and model-based estimates are identical to three decimal places, even for extreme amounts of over-dispersion. Nor is any difference in the variability of these estimates apparent. Further, these estimated standard errors accurately reflect the actual standard deviations, even when the variance function is misspecified. The agreement seen here between the simulated, empirical and model-based estimates of variability is somewhat surprising. It was expected that the model-based estimators of the covariance matrices would begin to yield poor estimates as the dispersion increased, but this is not the case. It appears that in this simple situation the over-dispersion in the data is being adequately accounted for, even when the variance function is misspecified; the quantity φ_s presented in part B of Table 1 was estimated very well and serves the purpose of accounting for the extra variability.

Table 3. Standard Deviations of Estimates of β and Mean Values of Estimated Standard Errors (± s.d.).

A. Correct Variance Function

                                         Simulated φ
  Simulated β  Method        1            1.4          2            3            5
  log 0.75     Simulated   .043         .051         .062         .077
               Model       .044 (.002)  .052 (.002)  .062 (.003)  .075 (.004)
               Empirical   .044 (.002)  .052 (.002)  .062 (.003)  .075 (.004)
  0.70         Simulated   .026         .031         .037         .046
               Model       .027 (.001)  .032 (.001)  .038 (.001)  .046 (.002)
               Empirical   .027 (.001)  .032 (.001)  .038 (.001)  .046 (.002)
  1.60         Simulated   .017         .021         .024         .030         .039
               Model       .017 (.001)  .020 (.001)  .024 (.001)  .029 (.001)  .038 (.001)
               Empirical   .017 (.001)  .020 (.001)  .024 (.001)  .029 (.001)  .038 (.001)

Table 3 B.
Misspecified Variance Function

                                         Simulated φ
  Simulated β  Method        1            1.4          2            3            5
  log 0.75     Simulated   .042         .050         .059         .069
               Model       .044 (.002)  .050 (.002)  .058 (.002)  .069 (.003)
               Empirical   .044 (.002)  .050 (.002)  .058 (.002)  .069 (.003)
  0.70         Simulated   .026         .036         .048         .057
               Model       .027 (.008)  .036 (.001)  .046 (.002)  .060 (.003)
               Empirical   .027 (.008)  .036 (.001)  .046 (.002)  .060 (.003)
  1.60         Simulated   .017         .031         .041         .056         .081
               Model       .017 (.001)  .029 (.001)  .041 (.002)  .056 (.003)  .077 (.005)
               Empirical   .017 (.001)  .029 (.001)  .041 (.002)  .056 (.003)  .077 (.005)

We now turn to the performance of the test statistics for the hypothesis H: γ = 0, summarized in Table 4. The observed rejection probabilities presented in this table can be considered as averages of 1000 Bernoulli trials with success probability p, which we may approximate by the nominal level. Thus the standard errors of the observed probabilities are approximately √(p(1 − p)/n); for nominal levels of 0.10, 0.05 and 0.01 we may expect standard errors of approximately .009, .007 and .003.

For the results in part A of Table 4, keeping the precision of these observed rejection probabilities in mind, we find that they agree quite well with the nominal levels at all mean values and even at higher levels of dispersion. We also note the observed rejection probabilities are very similar across the four tests for each choice of mean and dispersion, suggesting that any deviations of the observed rejection probabilities from the nominal levels are a reflection of the particular 1000 data sets generated. The agreement between model and empirical Wald tests was to be expected given the agreement of the two estimators of the variances seen in Table 3. In general, the agreement among the four statistics may not be particularly surprising either. Conventional wisdom would suggest that, over-dispersion aside,
each of the tests should perform well with a sample size of 700 (although there was some question whether this was true when the series consist of a large number of zeros, such as in the simulations using β = log 0.75). Thus, in light of how effectively the over-dispersion is accounted for, we should not expect any of the test statistics to do poorly.

The results in the misspecified variance simulations are similar. In these simulations the agreement between the four tests may seem particularly surprising at first; we might expect the model-based Wald and the scaled deviance tests to do poorly since they rely on the particular choice of variance function used to fit the data. However, considering how the choice of variance function does not seem to matter in this simple case (as noted for Table 3), the above agreement should not be surprising.

Table 4. Observed Rejection Probabilities Under Reduced Model.

A. Correct Variance Function

                                      φ = 1             φ = 1.4           φ = 2
  Simulated β  Test/Method      0.10 0.05 0.01    0.10 0.05 0.01    0.10 0.05 0.01
  log 0.75     Wald/Model       .105 .041 .010    .096 .045 .007    .106 .053 .015
               Wald/Empirical   .105 .040 .010    .097 .046 .005    .114 .056 .016
               Score/Empirical  .102 .040 .010    .095 .045 .005    .111 .056 .016
               Deviance/Scaled  .106 .041 .010    .096 .045 .008    .108 .054 .015
  0.70         Wald/Model       .103 .056 .005    .082 .046 .012    .110 .056 .011
               Wald/Empirical   .104 .055 .006    .083 .046 .011    .111 .057 .012
               Score/Empirical  .102 .054 .005    .082 .046 .011    .110 .057 .012
               Deviance/Scaled  .103 .056 .005    .082 .046 .013    .110 .056 .011
  1.60         Wald/Model       .112 .062 .017    .087 .039 .007    .095 .043 .004
               Wald/Empirical   .110 .061 .017    .091 .038 .008    .097 .043 .004
               Score/Empirical  .108 .060 .014    .090 .038 .007    .090 .038 .007
               Deviance/Scaled  .112 .062 .017    .087 .039 .007    .087 .039 .007

                                      φ = 3             φ = 5
  Simulated β  Test/Method      0.10 0.05 0.01    0.10 0.05 0.01
  log 0.75     Wald/Model       .119 .056 .009
               Wald/Empirical   .122 .059 .011
               Score/Empirical  .117 .054 .009
               Deviance/Scaled  .121 .056 .011
  0.70         Wald/Model       .114 .061 .013
               Wald/Empirical   .122 .065 .012
               Score/Empirical  .119 .063 .011
               Deviance/Scaled  .114 .061 .013
  1.60         Wald/Model       .103 .043 .010    .092 .043 .009
               Wald/Empirical   .096 .048 .013    .090 .043 .009
               Score/Empirical  .095 .045 .010    .088 .043 .008
               Deviance/Scaled  .103 .043 .010    .092 .043 .009

Table 4 B. Misspecified Variance Function

                                      φ = 1             φ = 1.4           φ = 2
  Simulated β  Test/Method      0.10 0.05 0.01    0.10 0.05 0.01    0.10 0.05 0.01
  log 0.75     Wald/Model       .094 .045 .009    .094 .044 .006    .091 .045 .009
               Wald/Empirical   .095 .047 .009    .095 .046 .007    .092 .050 .012
               Score/Empirical  .095 .046 .009    .094 .046 .006    .091 .047 .010
               Deviance/Scaled  .094 .045 .009    .095 .044 .006    .093 .046 .009
  0.70         Wald/Model       .091 .050 .016    .095 .048 .012    .107 .046 .010
               Wald/Empirical   .091 .051 .016    .091 .047 .012    .108 .046 .009
               Score/Empirical  .092 .051 .015    .091 .046 .011    .108 .043 .007
               Deviance/Scaled  .091 .050 .017    .095 .048 .012    .109 .047 .010
  1.60         Wald/Model       .097 .041 .007    .103 .052 .009    .083 .033 .004
               Wald/Empirical   .097 .039 .009    .105 .051 .010    .081 .034 .005
               Score/Empirical  .096 .039 .007    .103 .050 .009    .079 .033 .004
               Deviance/Scaled  .097 .041 .007    .103 .052 .010    .083 .034 .004

                                      φ = 3             φ = 5
  Simulated β  Test/Method      0.10 0.05 0.01    0.10 0.05 0.01
  log 0.75     Wald/Model       .112 .063 .014
               Wald/Empirical   .114 .066 .017
               Score/Empirical  .110 .062 .015
               Deviance/Scaled  .112 .064 .015
  0.70         Wald/Model       .101 .044 .010
               Wald/Empirical   .102 .048 .011
               Score/Empirical  .099 .042 .009
               Deviance/Scaled  .101 .045 .010
  1.60         Wald/Model       .111 .059 .014    .102 .056 .011
               Wald/Empirical   .114 .066 .016    .104 .053 .012
               Score/Empirical  .108 .059 .013    .100 .047 .006
               Deviance/Scaled  .111 .059 .014    .102 .056 .012

In summary, for the simulation results when the data are generated under the reduced model there is no clear difference between the empirical and model-based estimates of variances.
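The approximate standard errors used to judge the rejection probabilities in Table 4 (.009, .007 and .003 at nominal levels 0.10, 0.05 and 0.01) follow directly from the binomial formula:

```python
import math

def mc_se(p, n=1000):
    """Approximate standard error of an observed rejection probability
    based on n simulated data sets, treated as a binomial proportion."""
    return math.sqrt(p * (1.0 - p) / n)

for level in (0.10, 0.05, 0.01):
    print(f"nominal {level:.2f}: s.e. ~ {mc_se(level):.3f}")
```

Observed probabilities within roughly two such standard errors of the nominal level are therefore consistent with the test achieving its nominal size.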
Nor are there any differences among the four model reduction test statistics considered. We will look to the three series case in Section 4.3 to provide more insight into possible differences in the estimators and test statistics than this simple situation can offer.

A limited number of simulations were also carried out using the full model corresponding to (3.16) to generate the data. In particular, 1000 data sets were generated according to this model with β = log 0.75 and one of γ = 0.05, 0.10, or 0.15. The estimation of φ, β and γ was of interest, as well as the power for testing the hypothesis H: γ = 0. The mean values of the parameter estimates are summarized in Table 5.

First consider part A of the table, corresponding to simulations where the data were generated with the correct variance function. In all cases (that is, all combinations of simulated φ and γ) the estimators yield good estimates. The mean values of both model and empirical versions of the estimated standard errors (not presented here) are in close agreement with the displayed standard deviations. The standard errors of the estimates increase as the dispersion increases, as would be expected. Notice that the estimates of φ (and their standard deviations) are almost identical to the corresponding quantities for φ̂_X in part A of Table 1 (which were based on data generated using the reduced model). In part B, we see that β and γ are estimated very well at all levels of dispersion considered. To determine what the estimates of φ should be, first note that in these misspecified variance simulations the amount of extra variability varies as the covariate x_j, and thus the mean values μ_j, vary. In the present situation, the averages of the φ_sj (where φ_sj μ_j = μ_j + (φ − 1)μ_j²) for each cell in part B of Table 5 are the same to two decimals as the values φ_s from part B of Table 1.
It is therefore reasonable that the estimates of φ are close to the estimates in part B of Table 1, and again φ̂_X is accounting for the extra variability very well.

Table 5. Mean Values of Estimated Parameters (± s.d.) Under Full Model.

A. Correct Variance Function

                                            Simulated φ
  Simulated γ  Parameter     1             1.4           2             3
  0.05         φ          1.00 (.055)   1.40 (.095)   2.00 (.178)   2.97 (.335)
               β          −.288 (.044)  −.289 (.050)  −.292 (.064)  −.292 (.076)
               γ           .049 (.061)   .051 (.075)   .050 (.087)   .053 (.109)
  0.10         φ          1.00 (.054)   1.40 (.093)   1.99 (.179)   2.97 (.323)
               β          −.288 (.042)  −.290 (.051)  −.291 (.063)  −.293 (.076)
               γ           .100 (.063)   .098 (.073)   .099 (.086)   .102 (.110)
  0.15         φ          1.00 (.054)   1.40 (.096)   2.00 (.171)   2.98 (.323)
               β          −.288 (.043)  −.289 (.051)  −.288 (.063)  −.293 (.075)
               γ           .151 (.062)   .149 (.072)   .150 (.085)   .153 (.109)

B. Misspecified Variance Function

                                            Simulated φ
  Simulated γ  Parameter     1             1.4           2             3
  0.05         φ          1.00 (.055)   1.30 (.083)   1.75 (.133)   2.49 (.260)
               β          −.288 (.044)  −.290 (.050)  −.292 (.059)  −.292 (.069)
               γ           .049 (.061)   .051 (.069)   .055 (.084)   .053 (.098)
  0.10         φ          1.00 (.054)   1.30 (.082)   1.75 (.134)   2.49 (.258)
               β          −.288 (.042)  −.290 (.050)  −.290 (.056)  −.294 (.069)
               γ           .100 (.063)   .102 (.068)   .099 (.083)   .104 (.102)
  0.15         φ          1.00 (.054)   1.30 (.084)   1.75 (.137)   2.50 (.255)
               β          −.288 (.043)  −.291 (.049)  −.291 (.058)  −.291 (.070)
               γ           .151 (.062)   .151 (.071)   .150 (.081)   .151 (.100)

The observed rejection probabilities under the alternative hypothesis are shown in Table 6 below. The results presented in part A show no difference in the power of the four statistics. For a given value of γ, the observed rejection probabilities decrease as the amount of dispersion increases (the increased variability in the data makes it more difficult to detect an effect). The results in part B are very similar. In general, for given values of γ and φ > 1, the rejection probabilities are slightly higher in the misspecified variance simulations compared to the correct variance simulations.
Note that for values of μ_j < 1, such as in these simulations using β = log 0.75 and γ no larger than 0.15, the value of μ_j + (φ − 1)μ_j² (the variance under the misspecified formulation) is less than the value of φμ_j (the variance under the correct formulation). Therefore, from our earlier observation that the power decreases as the amount of variability in the data increases, we should expect the rejection probabilities in the misspecified variance simulations to be higher than in the correct variance simulations for these values of γ and φ.

In summary, from the results of the simulations under the full model (Tables 5 and 6) the estimates of φ appear reasonable and the regression parameter estimates are close, on average, to the simulated values. There are also no indications of any differences in the power of the four test statistics considered. Further, estimated standard errors from the model and empirical estimators (not presented) were similar, both being very close, on average, to the simulated values.

Table 6. Observed Rejection Probabilities Under Full Model.

A. Correct Variance Function

                            φ = 1             φ = 1.4           φ = 2             φ = 3
  γ    Test           .10  .05  .01     .10  .05  .01     .10  .05  .01     .10  .05  .01
  .05  Wald/Mod.     .205 .117 .033    .194 .114 .041    .152 .090 .025    .152 .089 .025
       Wald/Emp.     .204 .117 .035    .197 .119 .040    .155 .090 .027    .151 .086 .025
       Score/Emp.    .202 .116 .034    .197 .118 .038    .154 .086 .024    .147 .079 .024
       Dev/Scaled    .205 .117 .033    .194 .117 .041    .153 .092 .026    .156 .092 .025
  .10  Wald/Mod.     .490 .374 .164    .348 .247 .116    .295 .205 .070    .253 .165 .059
       Wald/Emp.     .489 .372 .171    .351 .253 .119    .298 .207 .076    .266 .159 .065
       Score/Emp.    .487 .371 .164    .353 .249 .111    .296 .203 .072    .260 .156 .060
       Dev/Scaled    .490 .374 .164    .349 .247 .117    .296 .207 .074    .253 .166 .060
  .15  Wald/Mod.     .786 .685 .445    .661 .522 .296    .529 .404 .186    .391 .274 .135
       Wald/Emp.     .788 .687 .445    .663 .521 .297    .527 .411 .187    .402 .274 .136
       Score/Emp.    .788 .683 .439    .662 .518 .287    .527 .408 .182    .395 .269 .128
       Dev/Scaled    .786 .685 .448    .663 .523 .300    .531 .405 .190    .391 .277 .136

B. Misspecified Variance Function

                            φ = 1             φ = 1.4           φ = 2             φ = 3
  γ    Test           .10  .05  .01     .10  .05  .01     .10  .05  .01     .10  .05  .01
  .05  Wald/Mod.     .205 .117 .033    .193 .111 .024    .176 .108 .030    .165 .090 .021
       Wald/Emp.     .204 .117 .035    .195 .111 .028    .174 .112 .032    .161 .087 .020
       Score/Emp.    .202 .116 .034    .189 .107 .026    .169 .109 .030    .157 .082 .018
       Dev/Scaled    .205 .117 .033    .194 .111 .025    .177 .110 .032    .165 .093 .021
  .10  Wald/Mod.     .490 .374 .164    .407 .291 .123    .336 .236 .087    .291 .195 .073
       Wald/Emp.     .489 .372 .171    .410 .299 .131    .333 .241 .097    .294 .203 .074
       Score/Emp.    .487 .371 .164    .409 .300 .122    .328 .233 .088    .291 .192 .066
       Dev/Scaled    .490 .374 .164    .410 .292 .123    .336 .237 .088    .291 .195 .073
  .15  Wald/Mod.     .786 .685 .445    .680 .579 .337    .585 .455 .218    .470 .340 .158
       Wald/Emp.     .788 .687 .445    .683 .574 .342    .587 .455 .222    .481 .342 .162
       Score/Emp.    .788 .683 .439    .681 .565 .333    .584 .450 .209    .473 .333 .148
       Dev/Scaled    .786 .685 .448    .680 .580 .339    .586 .456 .219    .472 .342 .160

4.3 The Simple Case with Three Series

The results discussed in this section are for three separate series of data generated according to log-linear models given by (3.17) with one of the two combinations of dispersion parameters described in Section 3.2 (combination I or II) and one of the two possible variance functions. For the null model (γ_0 = γ_1 = γ_2 = γ_3 = 0) the parameter values β_1 = log 0.75 (≈ −0.288), β_2 = 0.7 and β_3 = 1.6 are used to generate the data. The resulting data series are then fit using this same log-linear model.
The mean values of the estimated parameters from 1000 simulated data sets generated under each of the four possible scenarios (2 combinations of φ × 2 variance formulations) are presented in Table 7.

Table 7. Mean Values of Parameter Estimates.

                          Variance Function
                  A. Correct             B. Misspecified
                  Combination of φ       Combination of φ
  Parameter       I          II          I          II
  φ             1.997      2.799       4.964      8.333
  β_1          −0.288     −0.289      −0.289     −0.287
  β_2           0.700      0.699       0.701      0.699
  β_3           1.594      1.599       1.599      1.598

Intuitively the estimates of φ should be approximately equal to the average of the three dispersion values associated with the three generated data series in each of the 1000 data sets leading to the results in each of the four scenarios. These averages are 2.0 (combination I, correct variance), 2.8 (combination II, correct variance), 4.97 (combination I, misspecified variance) and 8.37 (combination II, misspecified variance). We can see that in each of the four scenarios, the estimates of φ are very close to these corresponding averages. The estimates of the regression parameters are also excellent in all four scenarios.

Table 8 presents the standard deviations of the parameter estimates and the mean values of the estimated standard errors for the regression parameters. In each column of this table the empirical estimates agree very well with the simulated standard deviations, but the model-based estimates are most often quite different. The latter are much too large for the parameter β_1, corresponding to the series where the simulated dispersion is lower than the average over the three series, and too small for β_3, corresponding to the series where the simulated dispersion is larger than the average over the three series. For the regression parameter β_2, first consider the correct variance simulations.
We find that with combination I of φ's the standard errors for β2 are about right; this appears to be because the simulated φ for this series is equal to the average of the φ's for all three series. However, using combination II of φ's, the simulated φ for series 2 is smaller than the average of the φ's for all three series, and the model-based standard error is distinctly larger than the standard deviation of the parameter estimates. In the misspecified variance simulations, we also find the model-based standard error to be larger than the standard deviation of the parameter estimates, because the simulated φ for series 2 is smaller than the average of the φ's in these cases. Clearly the empirical estimator is better in each of these cases (for both combinations of φ and for both variance formulations) and accurately reflects the true variability of these parameter estimates, even with the misspecified variance function.

Table 8. Standard Deviations of Estimates and Mean Values of Estimated Standard Errors (±s.d.).

                        Variance Function
                        A. Correct                   B. Misspecified
                        Combination of φ             Combination of φ
Parameter  Method       I            II              I            II
φ          Simulated    .084         .144            .399         .976
β1         Simulated    .042         .052            .042         .050
           Model        .062 (.002)  .073 (.003)     .097 (.004)  .126 (.008)
           Empirical    .044 (.002)  .052 (.002)     .044 (.002)  .050 (.002)
β2         Simulated    .038         .038            .046         .047
           Model        .038 (.001)  .045 (.001)     .059 (.003)  .077 (.005)
           Empirical    .038 (.001)  .038 (.001)     .046 (.002)  .046 (.002)
β3         Simulated    .029         .037            .054         .079
           Model        .024 (.001)  .028 (.001)     .038 (.001)  .049 (.002)
           Empirical    .029 (.001)  .038 (.001)     .056 (.003)  .077 (.004)

Using the data generated under the reduced model, the levels achieved by the four test statistics under study are examined by considering an alternative model that allows for a common "pollution" effect (γ1 = γ2 = γ3 = 0, but γ0 possibly not zero). The hypothesis to be tested is H: γ0 = 0.
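The contrast in Table 8 between the two standard-error estimators can be illustrated in the simplest possible setting, an intercept-only log-linear fit to a single series: the model-based variance rescales the inverse Fisher information by the pooled φ estimate, while the empirical (sandwich) variance replaces it with the series' own sum of squared residuals. A sketch of this standard construction, with toy data rather than thesis code:

```python
import math

def intercept_ses(y, phi_pooled):
    """SEs of beta_hat = log(mean(y)) for an intercept-only log-linear model.

    Model-based: phi_pooled / (n * ybar), where n * ybar is the estimated
    Poisson information.  Empirical sandwich: B / A^2 with A = n * ybar
    and B = sum of squared residuals from this series alone.
    """
    n = len(y)
    ybar = sum(y) / n
    a = n * ybar
    model_var = phi_pooled / a
    emp_var = sum((yi - ybar) ** 2 for yi in y) / a ** 2
    return math.sqrt(model_var), math.sqrt(emp_var)

# A series whose own dispersion (about 3) exceeds a pooled phi of 2:
# the model-based SE comes out too small, the sandwich SE larger.
y = [0, 6, 1, 7, 0, 8, 2, 6, 0, 5]
model_se, emp_se = intercept_ses(y, 2.0)
```

Reversing the situation (a series with less dispersion than the pooled φ) makes the model-based standard error too large, which is exactly the pattern seen for β1 and β3 in Table 8.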
Table 9 presents the observed rejection probabilities of each of the four test statistics. The precision of the observed rejection probabilities, which should be kept in mind when interpreting these results, is the same as that reported in Section 4.2. Consider first the correct variance simulations. In the model reductions for the correct variance simulations, the model-based tests (model Wald and scaled deviance) clearly reject too often, while the rejection probabilities for the empirical tests (empirical score and empirical Wald) agree quite well with the nominal levels based on the χ² critical values. A similar pattern is apparent in the misspecified variance simulations, although the departure of the model-based tests from the nominal levels is considerably more exaggerated than in the correct variance simulations.

Table 9. Observed Rejection Probabilities: Common Pollution to No Pollution Model.

            Variance Function
            A. Correct                                B. Misspecified
            Combination of φ                          Combination of φ
            I                  II                     I                  II
Test        0.10  0.05  0.01   0.10  0.05  0.01      0.10  0.05  0.01   0.10  0.05  0.01
Wald/Mod    .159  .099  .019   .173  .102  .039      .180  .104  .045   .226  .139  .060
Wald/Emp    .114  .057  .004   .110  .058  .014      .089  .054  .016   .108  .062  .012
Score/Emp   .114  .056  .004   .109  .058  .014      .088  .053  .012   .102  .057  .011
Dev/Scaled  .159  .099  .018   .172  .102  .039      .180  .104  .044   .227  .139  .060

Thus in this multiple series situation (without any pollution effects at this time) we find a clear difference between the empirical and model-based estimates of variance, with the empirical estimator being preferable. Correspondingly, we find that the model-based test statistics are inferior to the empirical test statistics. Given the differences in the variance estimates, the fact that the empirical Wald test is preferable to the model Wald test is not surprising.
Nor is it surprising that the empirical score test performs well in all cases, since it performed well even under misspecified variances in the study described in Breslow (1990). What is interesting is the very close agreement between the model Wald and scaled deviance tests, and between the empirical Wald and empirical score tests. Conventional wisdom would suggest the Wald tests might be inferior to the other two tests, but the model Wald gives observed rejection probabilities that are almost exactly the same as the scaled deviance test, and the empirical Wald gives observed rejection probabilities that are almost exactly the same as the empirical score test. It seems that the test statistics which rely on a dispersion parameter to account for extra variability perform poorly, while those that estimate the extra variability empirically perform much better.

A set of simulations with data generated according to the model with a common pollution effect (γ1 = γ2 = γ3 = 0, but γ0 non-zero in (3.17)) was also carried out. Recall that the value chosen for γ0 is 0.10. Table 10 presents the mean values of the parameter estimates for the data sets generated in this fashion. The estimates of the βi are very good in each column of the table, as are the estimates of γ0. The estimates of φ are close to those seen in Table 7, as we would expect given that in each case the average of the dispersions is close to the averages encountered in the simulations under the null hypothesis.

Table 10. Mean Values of Parameter Estimates.

            Variance Function
            A. Correct          B. Misspecified
            Combination of φ    Combination of φ
Parameter   I        II         I        II
φ           1.998    2.799      4.970    8.325
β1          -0.289   -0.292     -0.288   -0.292
β2          0.699    0.701      0.699    0.699
β3          1.599    1.599      1.597    1.591
γ0          0.100    0.101      0.098    0.098

The standard deviations of the above estimates and the mean values of the estimated standard errors for the regression parameters are given in Table 11. The model-based estimates of the standard errors for the βi show the same behavior as in Table 8, with the estimate being too large when the parameter corresponds to a series with simulated φ lower than the average dispersion and too small when the parameter corresponds to a series with simulated φ larger than the average. The empirical estimator, on the other hand, agrees very well with the simulated standard deviations. Notice that the model-based estimates of the standard error of γ0 are too low throughout the table. Recall that the rejection probabilities were too high for the model-based test statistics in Table 9. This suggests that the standard errors of the estimates of γ0 were lower than expected in the simulations under the null hypothesis as well.

Table 11. Standard Deviations of Estimates and Mean Values of Estimated Standard Errors (±s.d.).

                        Variance Function
                        A. Correct                   B. Misspecified
                        Combination of φ             Combination of φ
Parameter  Method       I            II              I            II
φ          Simulated    .086         .144            .395         .967
β1         Simulated    .043         .053            .040         .052
           Model        .062 (.002)  .073 (.003)     .097 (.004)  .126 (.008)
           Empirical    .044 (.002)  .052 (.002)     .044 (.002)  .050 (.002)
β2         Simulated    .038         .038            .046         .049
           Model        .038 (.001)  .045 (.001)     .059 (.003)  .077 (.005)
           Empirical    .038 (.001)  .038 (.001)     .046 (.002)  .046 (.002)
β3         Simulated    .029         .037            .058         .079
           Model        .024 (.001)  .028 (.001)     .038 (.001)  .049 (.002)
           Empirical    .029 (.001)  .038 (.001)     .056 (.003)  .077 (.004)
γ0         Simulated    .030         .038            .055         .076
           Model        .027 (.001)  .032 (.001)     .043 (.002)  .056 (.003)
           Empirical    .031 (.001)  .038 (.002)     .054 (.002)  .072 (.006)

We now examine the results of evaluating the four test statistics based on an alternative hypothesis that includes both common and separate "pollution" effects for each series. These model reductions, from the model with separate effects to the one with a common pollution effect (which was used to generate the data), represent the separate-to-common effects reductions described in the Prince George study. Note that these tests, the results of which are summarized in Table 12, are the first discussed in this thesis involving reduction by more than one parameter (and are thus the first that involve estimates of covariance in the Wald and score test statistics).

In the correct variance simulations the empirical test statistics achieve levels close to the nominal rejection probabilities with combination I of φ, and still quite close with combination II (keeping in mind the precision of these observed rejection probabilities discussed above). The levels of the model-based test statistics are clearly too low under both combinations of φ. Even with the misspecified variance formulation, the empirical test statistics achieve levels close to nominal; while the rejection probabilities may be slightly too large at the 0.10 level under combination II, overall the agreement is good.
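The separate-to-common reductions just described are the first that test more than one parameter at once; the Wald statistic is then the quadratic form γ̂'V̂⁻¹γ̂, referred to a χ² distribution with q degrees of freedom, where V̂ is either the model-based or the empirical covariance estimate. A minimal sketch for q = 2 (the numbers are hypothetical, not thesis output):

```python
def wald_statistic(g, v):
    """Quadratic form g' V^{-1} g for a 2-parameter hypothesis (q = 2),
    inverting the 2x2 covariance matrix V directly."""
    (a, b), (c, d) = v
    det = a * d - b * c
    vinv = [[d / det, -b / det], [-c / det, a / det]]
    return sum(g[i] * vinv[i][j] * g[j] for i in range(2) for j in range(2))

# Hypothetical estimates gamma_hat = (0.10, -0.20) with a hypothetical
# covariance estimate; compare to the chi-square(2) 5% critical value 5.99.
w = wald_statistic([0.10, -0.20], [[0.0060, 0.0010], [0.0010, 0.0055]])
```

The model-based and empirical versions of the test differ only in which covariance matrix V̂ is plugged into this quadratic form.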
The levels of the model-based test statistics are far too low at the 0.10 and 0.05 nominal levels under both combinations of φ, although it is not clear how they do at the 0.01 level.

Table 12. Observed Rejection Probabilities: Separate to Common.

            Variance Function
            A. Correct                                B. Misspecified
            Combination of φ                          Combination of φ
            I                  II                     I                  II
Test        0.10  0.05  0.01   0.10  0.05  0.01      0.10  0.05  0.01   0.10  0.05  0.01
Wald/Mod    .076  .035  .006   .062  .026  .001      .067  .036  .005   .056  .029  .009
Wald/Emp    .100  .051  .011   .092  .049  .010      .104  .057  .006   .117  .052  .014
Score/Emp   .099  .049  .008   .088  .049  .008      .100  .055  .006   .113  .051  .010
Dev/Scaled  .077  .035  .006   .062  .027  .001      .067  .036  .006   .056  .030  .009

We also examine the power of the four tests in this multiple series situation in Table 13, by considering the reduction from the common pollution effect to no pollution effects (the null model). The powers of the two model-based tests are virtually identical throughout the table, and the two empirical tests achieve similar power to each other. However, the rejection probabilities for the model-based tests are higher than for the empirical tests, presumably as a result of the standard error of γ0 being underestimated in the former tests (see Table 11).

Table 13. Observed Rejection Probabilities: Common to Null.

            Variance Function
            A. Correct                                B. Misspecified
            Combination of φ                          Combination of φ
            I                  II                     I                  II
Test        0.10  0.05  0.01   0.10  0.05  0.01      0.10  0.05  0.01   0.10  0.05  0.01
Wald/Mod    .958  .925  .834   .891  .827  .687      .695  .598  .404   .542  .447  .271
Wald/Emp    .941  .890  .751   .837  .767  .537      .570  .445  .226   .399  .293  .111
Score/Emp   .941  .891  .751   .837  .765  .535      .567  .436  .213   .394  .283  .100
Dev/Scaled  .958  .925  .835   .891  .828  .688      .695  .599  .406   .542  .447  .274

The final model to be considered for generating data in this three-series simple case is the model with separate pollution effects. Recall that the value chosen for γ0 is again 0.10, while γ1 = 0.10, γ2 = -0.20 and γ3 = 0.
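Observed rejection probabilities such as those in Tables 9, 12 and 13 are Monte Carlo estimates from 1000 data sets, so their precision is governed by binomial sampling error; this short sketch (plain binomial arithmetic, not a thesis formula) gives the approximate two-standard-error margins to keep in mind when comparing table entries with the nominal levels:

```python
import math

def mc_se(p, n_sims=1000):
    """Binomial standard error of an observed rejection probability
    estimated from n_sims independent simulated data sets."""
    return math.sqrt(p * (1 - p) / n_sims)

# Approximate two-standard-error margins at the three nominal levels:
# roughly +/- 0.019, 0.014 and 0.006 at levels 0.10, 0.05 and 0.01.
margins = {p: round(2 * mc_se(p), 4) for p in (0.10, 0.05, 0.01)}
```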
Results of fitting the full model to these data sets are summarized in Tables 14, 15 and 16, which follow. The mean values of the parameter estimates (see Table 14 below) are very close to the correct values, even in the misspecified case with substantial amounts of dispersion. The estimates of φ are very close to the values seen in Tables 7 and 10, which is still to be expected because, even in the misspecified variance simulations, the average of the dispersion values over the entire 2100 observations is approximately equal to the averages in the corresponding simulations under either the null or the common pollution effect model.

Table 14. Mean Values of Parameter Estimates.

            Variance Function
            A. Correct          B. Misspecified
            Combination of φ    Combination of φ
Parameter   I        II         I        II
φ           1.997    2.799      4.962    8.326
β1          -0.290   -0.289     -0.290   -0.289
β2          0.700    0.698      0.699    0.700
β3          1.598    1.597      1.595    1.593
γ0          0.098    0.102      0.102    0.096
γ1          0.102    0.101      0.100    0.096
γ2          -0.195   -0.200     -0.197   -0.195

The estimates of the standard errors are summarized in Table 15. In each of the columns of this table the empirical estimator is very close, on average, to the simulated standard deviations. Again the model-based estimator overestimates the standard errors for parameters which correspond to the series with dispersion lower than the average (most notably β1 and γ1) and underestimates the standard errors corresponding to the series with dispersion larger than the average (most notably β3). It would seem that the low rejection probabilities in Table 12 for the model-based test statistics can be explained by the general overestimation of the standard error of the estimate of γ1.

Table 15. Standard Deviations of Estimates and Mean Values of Estimated Standard Errors (±s.d.).

                        Variance Function
                        A. Correct                   B. Misspecified
                        Combination of φ             Combination of φ
Parameter  Method       I            II              I            II
φ          Simulated    .085         .142            .393         .966
β1         Simulated    .044         .054            .042         .050
           Model        .062 (.002)  .073 (.003)     .098 (.004)  .127 (.008)
           Empirical    .044 (.002)  .052 (.002)     .044 (.002)  .050 (.002)
β2         Simulated    .039         .037            .047         .047
           Model        .038 (.001)  .045 (.001)     .059 (.003)  .077 (.005)
           Empirical    .038 (.002)  .038 (.001)     .046 (.002)  .046 (.002)
β3         Simulated    .029         .039            .056         .076
           Model        .024 (.001)  .028 (.001)     .038 (.001)  .049 (.002)
           Empirical    .029 (.001)  .038 (.001)     .056 (.003)  .077 (.005)
γ0         Simulated    .042         .054            .078         .113
           Model        .034 (.001)  .040 (.001)     .054 (.002)  .069 (.007)
           Empirical    .041 (.002)  .054 (.003)     .079 (.005)  .109 (.011)
γ1         Simulated    .074         .090            .098         .133
           Model        .094 (.003)  .111 (.004)     .148 (.006)  .191 (.011)
           Empirical    .075 (.003)  .090 (.003)     .100 (.004)  .130 (.009)
γ2         Simulated    .071         .076            .102         .131
           Model        .063 (.002)  .075 (.002)     .100 (.004)  .129 (.007)
           Empirical    .067 (.002)  .076 (.003)     .102 (.005)  .127 (.012)

Table 16. Observed Rejection Probabilities: Separate to Common.

            Variance Function
            A. Correct                                B. Misspecified
            Combination of φ                          Combination of φ
            I                  II                     I                  II
Test        0.10  0.05  0.01   0.10  0.05  0.01      0.10  0.05  0.01   0.10  0.05  0.01
Wald/Mod    .936  .890  .741   .870  .786  .558      .621  .465  .222   .340  .218  .088
Wald/Emp    .962  .932  .822   .944  .889  .747      .911  .858  .685   .880  .791  .593
Score/Emp   .962  .929  .818   .945  .888  .745      .909  .854  .671   .879  .786  .578
Dev/Scaled  .936  .891  .743   .870  .789  .560      .623  .468  .227   .341  .218  .088

In Table 16 we examine the power of the four test statistics under consideration in testing the reduction from the separate pollution effects simulated in these data sets to a model with a common pollution effect. The model-based test statistics have lower power than the empirical tests in all of the above cases.
The difference is most noticeable in the cases where the standard errors of the estimates of γ1 were seriously overestimated, such as the misspecified variance simulations. In all of the simulations in this three-series simple case, in contrast to the one-series simulations, it is clear that the empirical estimate of the covariance matrix of the regression parameter estimates is superior to the model-based estimate. As for the observed levels of the four test statistics under consideration, we find that the model-based test statistics reject too often in some instances and not as often as predicted by theory in others, whereas the empirical test statistics always achieve levels close to the nominal rates. Throughout, we have found very close agreement between the model Wald and scaled deviance tests and between the empirical score and empirical Wald tests. It appears that, in these simple simulations, the two model-based test statistics that rely on a dispersion parameter to account for the extra variability perform poorly, while the empirical test statistics (which do not rely on the correctness of a specified model to estimate the variance-covariance matrix) perform much better.

4.4 The More Complicated Case with a Single Series

We now wish to study the performance of the estimators and test statistics when we generate data using the more complicated model for a single series, described by (3.18), which includes season effects. For the purpose of simulating data under the reduced model, the values τ1 = τ2 = 0.25, τ3 = -0.5 and τ4 = 0 (with γ = 0) were used. Table 17 presents the mean values of the parameter estimates from fitting the reduced model to each of the 1000 simulated data sets.

Table 17. Mean Values of Estimated Parameters Under Reduced Model.

A. Correct Variance

                          Simulated φ
Simulated β  Parameter  1       1.4     2       3       5
log 0.75     φ          1.00    1.39    1.98    2.96    -
             β          -.291   -.286   -.295   -.292   -
             τ1         .253    .245    .253    .247    -
             τ2         .252    .253    .238    .250    -
             τ3         -.498   -.513   -.513   -.520   -
0.70         φ          .997    1.40    1.99    2.98    -
             β          .699    .697    .698    .695    -
             τ1         .246    .247    .252    .250    -
             τ2         .250    .253    .250    .250    -
             τ3         -.499   -.501   -.501   -.508   -
1.60         φ          .997    1.40    2.01    2.98    4.97
             β          1.60    1.60    1.60    1.60    1.60
             τ1         .251    .247    .251    .249    .246
             τ2         .250    .250    .248    .249    .251
             τ3         -.503   -.501   -.502   -.504   -.499

Table 17 B. Misspecified Variance

                          Simulated φ
Simulated β  Parameter  1       1.4     2       3       5
log 0.75     φ          1.00    1.31    1.77    2.54    -
             β          -.287   -.292   -.297   -.296   -
             τ1         .245    .246    .256    .248    -
             τ2         .246    .251    .258    .256    -
             τ3         -.507   -.503   -.503   -.507   -
0.70         φ          1.00    1.83    3.07    5.15    -
             β          .702    .695    .699    .698    -
             τ1         .247    .253    .244    .246    -
             τ2         .245    .254    .244    .248    -
             τ3         -.505   -.501   -.506   -.502   -
1.60         φ          1.00    3.06    6.15    11.22   21.09
             β          1.60    1.60    1.60    1.59    1.58
             τ1         .249    .252    .249    .252    .263
             τ2         .248    .249    .248    .255    .258
             τ3         -.500   -.500   -.505   -.497   -.495

In part A of the table, the correct variance simulations, the estimates of φ are very close to the simulated values in each column of the table. The regression parameter estimates (β and the τi) are also quite good (correct to 1 decimal) even when the dispersion is high relative to the mean. There are cases where the estimates appear to be a bit off (for example, the simulations with β = log 0.75 and φ > 1.4), but the regression parameter estimators have substantially higher variances than in the simple simulations of the previous sections (compare the standard deviations of these estimates provided in Table 18 below with those in Table 3), so it is not unreasonable that the mean value from 1000 simulated data sets differs slightly from the simulated value.

We now consider the misspecified variance results in part B of the above table. Here the values of φ are roughly what they were in the one-series simple case. This follows because the four seasons are like four short series with different mean levels and therefore different levels of dispersion.
Results from the three-series simple simulations (Section 4.3) suggest the average of those levels of dispersion is what is being estimated by the overall estimator of φ. Because the τi were chosen so that Σi τi = 0, the average is approximately the same as the amount of dispersion in the one-series simple simulations of Section 4.2. (It is only slightly different because the mean levels of the four "series" do not quite average out to the overall mean level: by convexity, (1/4) Σi exp{β + τi} exceeds exp{β + (1/4) Σi τi} = exp{β}.) Thus the estimates of φ in the above table are as we would expect. More importantly, the mean values of the regression parameter estimates are quite close to the simulated values at all levels of dispersion and for all choices of β.

The standard deviations of the parameter estimates and the mean values of the standard errors of the regression parameters are summarized in Table 18. In the correct variance simulations (part A of the table), the empirical and model-based estimators are very close to each other in all cases, and agree well with the actual standard deviations of the parameter estimates. There is one instance (β = log 0.75 and φ = 3) where the mean values of the estimated standard errors for all the regression parameters are slightly lower than the standard deviations. This could suggest that the estimators underestimate the standard errors when there is a large amount of dispersion and a low overall mean level, but it could also just be chance. In any case, there is no strong evidence that the empirical estimator particularly underestimates the standard errors in general, as was expected a priori (based on conventional wisdom), or at least not any more so than the model-based estimator. Overall there is no real difference in these estimated standard errors.
Note, however, that the model-based estimated standard error is uniformly less variable than that based on the empirical estimator (although the difference is small in this particular situation).

In the misspecified variance simulations there may be a problem with the model-based estimator similar to the problems in the three-series simple case simulations of Section 4.3. If we think of the four seasons as four short series, then under the misspecified variance formulation the four series, with different mean levels, have different amounts of dispersion. In particular, the simulated dispersion for the data in the third season will be quite a bit lower than for the other seasons. The "overall" estimate of φ will be larger than the amount of dispersion simulated for this season, which might explain why the model-based estimates of the standard errors for the estimate of τ3 are too high on average. This same argument would suggest that the standard errors for the estimates of τ1 and τ2 would be too low, since these "series" have the highest means and therefore the most dispersion, so that the overall estimate of φ will be lower than the amount of dispersion simulated in these series. There is some suggestion of this as well in part B of Table 18. As for the empirical estimator, overall it appears to estimate the standard deviations of the parameter estimates much better than the model-based estimator. There is no clear evidence that this estimator consistently underestimates the standard errors of the regression parameters, although whenever the average of the empirical estimates differs to any appreciable extent from a simulated standard deviation, this average does appear to be too low.

Table 18. Standard Deviations of Estimates and Mean Values of Estimated Standard Errors (±s.d.).

A. Correct Variance Function

                                      Simulated φ
Simulated β  Parameter  Method      1            1.4          2            3            5
log 0.75     φ          Simulated   .056         .097         .175         .326         -
             β          Simulated   .086         .103         .124         .153         -
                        Model       .088 (.004)  .103 (.006)  .123 (.009)  .150 (.012)  -
                        Empirical   .087 (.006)  .103 (.008)  .123 (.011)  .150 (.015)  -
             τ1         Simulated   .118         .135         .165         .206         -
                        Model       .117 (.005)  .138 (.006)  .165 (.009)  .201 (.013)  -
                        Empirical   .116 (.006)  .137 (.007)  .164 (.010)  .200 (.014)  -
             τ2         Simulated   .115         .135         .164         .205         -
                        Model       .117 (.005)  .138 (.006)  .165 (.009)  .201 (.013)  -
                        Empirical   .116 (.006)  .137 (.007)  .164 (.010)  .200 (.014)  -
             τ3         Simulated   .142         .166         .205         .258         -
                        Model       .143 (.007)  .168 (.009)  .202 (.012)  .247 (.019)  -
                        Empirical   .142 (.008)  .168 (.010)  .201 (.014)  .244 (.021)  -
0.70         φ          Simulated   .056         .087         .136         .245         -
             β          Simulated   .055         .064         .077         .093         -
                        Model       .053 (.002)  .063 (.003)  .075 (.004)  .092 (.005)  -
                        Empirical   .053 (.003)  .063 (.004)  .075 (.005)  .092 (.007)  -
             τ1         Simulated   .074         .087         .097         .121         -
                        Model       .071 (.002)  .084 (.003)  .100 (.004)  .123 (.005)  -
                        Empirical   .071 (.003)  .084 (.004)  .100 (.005)  .122 (.006)  -
             τ2         Simulated   .072         .085         .101         .123         -
                        Model       .071 (.002)  .084 (.003)  .100 (.004)  .123 (.006)  -
                        Empirical   .071 (.003)  .084 (.004)  .100 (.005)  .122 (.006)  -
             τ3         Simulated   .089         .103         .123         .153         -
                        Model       .087 (.003)  .103 (.004)  .123 (.005)  .150 (.007)  -
                        Empirical   .086 (.004)  .102 (.005)  .122 (.006)  .149 (.009)  -
1.60         φ          Simulated   .054         .078         .122         .200         .388
             β          Simulated   .034         .042         .047         .060         .077
                        Model       .034 (.001)  .040 (.001)  .048 (.002)  .059 (.002)  .076 (.004)
                        Empirical   .034 (.002)  .040 (.002)  .048 (.003)  .059 (.004)  .076 (.005)
             τ1         Simulated   .046         .056         .062         .081         .105
                        Model       .045 (.001)  .054 (.002)  .064 (.002)  .078 (.003)  .101 (.004)
                        Empirical   .045 (.002)  .053 (.002)  .064 (.003)  .078 (.004)  .101 (.005)
             τ2         Simulated   .045         .054         .063         .079         .102
                        Model       .045 (.001)  .054 (.002)  .064 (.002)  .078 (.003)  .101 (.004)
                        Empirical   .045 (.002)  .053 (.002)  .064 (.003)  .078 (.004)  .101 (.005)
             τ3         Simulated   .057         .067         .078         .100         .126
                        Model       .055 (.002)  .065 (.002)  .078 (.003)  .096 (.004)  .123 (.006)
                        Empirical   .055 (.002)  .065 (.003)  .078 (.003)  .095 (.005)  .123 (.007)
Table 18 B. Misspecified Variance Function

                                      Simulated φ
Simulated β  Parameter  Method      1            1.4          2            3            5
log 0.75     φ          Simulated   .052         .087         .140         .253         -
             β          Simulated   .088         .101         .116         .139         -
                        Model       .087 (.005)  .100 (.006)  .117 (.008)  .140 (.011)  -
                        Empirical   .087 (.006)  .099 (.008)  .115 (.009)  .137 (.013)  -
             τ1         Simulated   .119         .137         .162         .196         -
                        Model       .117 (.005)  .134 (.006)  .156 (.008)  .186 (.011)  -
                        Empirical   .116 (.006)  .135 (.007)  .157 (.009)  .190 (.012)  -
             τ2         Simulated   .118         .136         .165         .192         -
                        Model       .117 (.005)  .134 (.006)  .156 (.008)  .186 (.011)  -
                        Empirical   .116 (.006)  .134 (.007)  .157 (.009)  .189 (.012)  -
             τ3         Simulated   .140         .157         .185         .208         -
                        Model       .143 (.007)  .163 (.009)  .191 (.011)  .228 (.016)  -
                        Empirical   .142 (.008)  .158 (.010)  .178 (.011)  .207 (.015)  -
0.70         φ          Simulated   .054         .120         .261         .525         -
             β          Simulated   .055         .072         .094         .125         -
                        Model       .053 (.002)  .072 (.003)  .093 (.005)  .121 (.009)  -
                        Empirical   .053 (.003)  .071 (.005)  .091 (.007)  .118 (.011)  -
             τ1         Simulated   .073         .100         .132         .172         -
                        Model       .071 (.002)  .096 (.004)  .125 (.006)  .161 (.009)  -
                        Empirical   .071 (.003)  .097 (.005)  .128 (.007)  .165 (.010)  -
             τ2         Simulated   .071         .099         .130         .165         -
                        Model       .071 (.002)  .096 (.004)  .125 (.006)  .161 (.009)  -
                        Empirical   .071 (.003)  .098 (.004)  .127 (.007)  .165 (.011)  -
             τ3         Simulated   .090         .111         .146         .177         -
                        Model       .087 (.003)  .118 (.005)  .152 (.008)  .197 (.012)  -
                        Empirical   .087 (.004)  .109 (.005)  .136 (.008)  .173 (.011)  -
1.60         φ          Simulated   .057         .209         .525         1.23         2.96
             β          Simulated   .034         .059         .084         .118         .155
                        Model       .034 (.001)  .059 (.002)  .084 (.005)  .114 (.008)  .157 (.014)
                        Empirical   .034 (.002)  .058 (.004)  .082 (.006)  .110 (.010)  .152 (.017)
             τ1         Simulated   .047         .083         .118         .165         .217
                        Model       .045 (.001)  .079 (.003)  .112 (.005)  .152 (.008)  .210 (.015)
                        Empirical   .045 (.002)  .081 (.004)  .115 (.006)  .156 (.010)  .215 (.017)
             τ2         Simulated   .046         .081         .119         .159         .214
                        Model       .045 (.001)  .079 (.003)  .112 (.005)  .152 (.008)  .210 (.015)
                        Empirical   .045 (.002)  .081 (.004)  .115 (.006)  .156 (.010)  .215 (.018)
             τ3         Simulated   .055         .090         .122         .164         .221
                        Model       .055 (.002)  .097 (.004)  .137 (.007)  .186 (.013)  .256 (.020)
                        Empirical   .055 (.002)  .087 (.004)  .120 (.006)  .159 (.012)  .218 (.018)

In the next table we present the observed rejection probabilities for the four test statistics under study for each of the combinations of β and φ considered above. Here the alternative model (to be fit to the data sets generated under the reduced model) includes the sinusoidal "pollution" covariate. In the correct variance simulations the model Wald and scaled deviance tests give almost identical results in all simulations. The empirical tests are not as similar (either to each other or to the model-based tests), but overall there appears to be little to distinguish any of the tests. The observed rejection probabilities are reasonably close to the nominal levels in almost all cases. Possible exceptions are the simulations with β = 1.60 and either φ = 1 or φ = 3, but these are likely just chance deviations from the nominal levels. In the misspecified variance simulations we also find the model Wald and scaled deviance tests to be almost identical in their observed rejection probabilities. In general the score test appears most conservative, and it seems to get more conservative (relative to the other tests) as the amount of dispersion increases. This makes it look like it is performing better than the others when the others reject too often, and worse when the others reject at about the nominal rate. As a side note, based on the standard errors in Table 18, we might look carefully at the empirical Wald test to see if it rejects too often, since the empirical estimates of the standard errors of some of the τi appeared to be too small, which might imply that the standard error of the coefficient of the pollution covariate could be underestimated.
The empirical Wald test does reject more often than the score test in general, but it occasionally rejects less often than the model-based tests, so that no general conclusion regarding its performance can be reached from these results.

Table 19. Observed Rejection Probabilities Under Null Hypothesis.

A. Correct Variance Function

                              φ = 1              φ = 1.4            φ = 2
Simulated β  Test/Method      0.10  0.05  0.01   0.10  0.05  0.01   0.10  0.05  0.01
log 0.75     Wald/Model       .093  .060  .011   .112  .056  .012   .110  .052  .014
             Wald/Empirical   .098  .063  .013   .114  .057  .010   .115  .053  .012
             Score/Empirical  .098  .062  .012   .112  .055  .009   .110  .047  .011
             Deviance/Scaled  .093  .060  .011   .111  .058  .012   .110  .053  .014
0.70         Wald/Model       .100  .052  .014   .099  .045  .008   .098  .052  .013
             Wald/Empirical   .101  .049  .012   .102  .044  .010   .101  .056  .011
             Score/Empirical  .095  .047  .012   .099  .044  .010   .097  .053  .011
             Deviance/Scaled  .100  .052  .014   .099  .044  .008   .098  .052  .013
1.60         Wald/Model       .115  .054  .011   .092  .040  .011   .109  .053  .009
             Wald/Empirical   .121  .055  .011   .091  .041  .012   .111  .053  .010
             Score/Empirical  .119  .053  .010   .091  .041  .010   .109  .053  .008
             Deviance/Scaled  .115  .055  .011   .092  .040  .011   .109  .054  .009

                              φ = 3              φ = 5
Simulated β  Test/Method      0.10  0.05  0.01   0.10  0.05  0.01
log 0.75     Wald/Model             .062  .012
             Wald/Empirical         .066  .016
             Score/Empirical        .059  .011
             Deviance/Scaled        .062  .012
0.70         Wald/Model       .099  .048  .009
             Wald/Empirical   .107  .048  .013
             Score/Empirical  .101  .045  .007
             Deviance/Scaled  .100  .047  .009
1.60         Wald/Model       .121  .062  .014   .105  .048  .010
             Wald/Empirical   .125  .063  .015   .108  .057  .012
             Score/Empirical  .123  .057  .014   .106  .051  .009
             Deviance/Scaled  .121  .062  .014   .106  .049  .010

Table 19 B. Misspecified Variance Function

                              φ = 1              φ = 1.4            φ = 2
Simulated β  Test/Method      0.10  0.05  0.01   0.10  0.05  0.01   0.10  0.05  0.01
log 0.75     Wald/Model       .110  .051  .005   .112  .049  .004   .111  .058  .010
             Wald/Empirical   .113  .055  .005   .111  .047  .004   .112  .059  .007
             Score/Empirical  .111  .052  .005   .110  .042  .004   .109  .055  .005
             Deviance/Scaled  .110  .052  .005   .112  .049  .004   .111  .059  .010
0.70         Wald/Model       .095  .049  .009   .105  .049  .010   .109  .060  .013
             Wald/Empirical   .098  .051  .013   .103  .048  .008   .099  .055  .015
             Score/Empirical  .095  .051  .011   .100  .046  .007   .094  .057  .013
             Deviance/Scaled  .095  .049  .009   .105  .048  .009   .109  .060  .013
1.60         Wald/Model       .101  .051  .010   .102  .053  .013   .105  .052  .010
             Wald/Empirical   .102  .054  .011   .102  .046  .011   .098  .046  .008
             Score/Empirical  .102  .053  .010   .099  .045  .010   .092  .045  .008
             Deviance/Scaled  .101  .051  .010   .103  .053  .013   .104  .053  .011

                              φ = 3              φ = 5
Simulated β  Test/Method      0.10  0.05  0.01   0.10  0.05  0.01
log 0.75     Wald/Model       .113  .059  .010
             Wald/Empirical   .110  .056  .010
             Score/Empirical  .102  .052  .007
             Deviance/Scaled  .113  .060  .011
0.70         Wald/Model       .102  .052  .018
             Wald/Empirical   .093  .049  .014
             Score/Empirical  .087  .046  .013
             Deviance/Scaled  .103  .053  .018
1.60         Wald/Model       .108  .056  .016   .112  .056  .018
             Wald/Empirical   .104  .050  .017   .110  .056  .018
             Score/Empirical  .100  .046  .013   .103  .049  .015
             Deviance/Scaled  .108  .057  .016   .114  .057  .018

We now consider the simulations under the full model (with pollution included). As in the simple case simulations, we chose to simulate data using only β = log 0.75, but with one of γ = 0.05, 0.10 and 0.15. Table 20 presents the mean values of the estimated parameters from the simulated data sets generated under the above model.

Table 20. Mean Values of Estimated Parameters Under Alternative Hypothesis.

A. Correct Variance Function

                          Simulated φ
Simulated γ  Parameter  1       1.4     2       3
0.05         φ          1.00    1.40    1.97    2.94
             β          -.293   -.295   -.311   -.303
             τ1         .254    .253    .274    .256
             τ2         .249    .251    .276    .254
             τ3         -.497   -.503   -.503   -.513
             γ          .054    .051    .038    .055
0.10         φ          1.00    1.39    1.97    2.94
             β          -.289   -.287   -.304   -.306
             τ1         .246    .259    .237    .258
             τ2         .247    .244    .260    .259
             τ3         -.506   -.511   -.510   -.515
             γ          .103    .097    .104    .098
0.15         φ          1.00    1.40    1.98    2.94
             β          -.289   -.304   -.300   -.299
             τ1         .248    .271    .260    .252
             τ2         .247    .272    .259    .259
             τ3         -.498   -.511   -.515   -.532
             γ          .154    .147    .132    .144

Table 20 B. Misspecified Variance Function

                          Simulated φ
Simulated γ  Parameter  1
0.05         φ          1.00
             β          -.293
             τ1         .254
             τ2         .249
             τ3         -.497
             γ          .054
0.10         φ          1.00
             β          -.289
             τ1         .246
             τ2         .247
             τ3         -.506
             γ          .103
0.15         φ          1.00
             β          -.289
             τ1         .248
             τ2         .247
             τ3         -.498
             γ          .154

Before interpreting these results, note that there is non-negligible collinearity between the continuous pollution covariate and the seasonal indicators. If, in our simulations, two explanatory variables have a substantial positive correlation, one of the corresponding regression parameters could be estimated to be larger than its simulated (or true) value while the other is estimated to be smaller, with little effect on the goodness-of-fit. Similarly, both could be estimated to be larger than the simulated values if the variables are negatively correlated. Such deviations from the true values, beyond the normal variation we would expect, lead to more variability from one set of fitted parameters to another than if the predictors were not collinear.
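The collinearity just described can be checked directly. A sketch, assuming a sinusoidal pollution covariate with a one-year period over a two-year window of daily observations and a 91-day first-season indicator (these design details are assumptions for illustration, not taken from the thesis):

```python
import math

def corr(x, y):
    """Pearson sample correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

days = range(730)  # two years of daily observations
pollution = [math.sin(2 * math.pi * t / 365) for t in days]   # assumed covariate
season1 = [1.0 if (t % 365) < 91 else 0.0 for t in days]      # hypothetical indicator
r = corr(pollution, season1)
```

Under these assumptions the correlation comes out close to 0.5; an indicator for a season where the sinusoid is negative correlates negatively by the same argument.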
The sample correlations are approximately 0.52 between the pollution variable and each of the indicators for the first and second season, and -0.52 between the pollution variable and the indicator for the third season. While such correlations are not a major concern, they must be kept in mind when examining the results of these simulations. For the correct variance simulations, the mean values of the regression parameter estimates are generally close to their simulated values, given the size of the standard deviations of the parameter estimates (and thus the standard error of the average of the parameter estimates as an estimate of the true mean; see Table 21). However, in the cases γ = 0.05, φ = 2 and γ = 0.15, φ = 1.4, the averages of the estimates of τ1 and τ2 exceed the simulated value by more than two standard errors (while the estimate of γ is too small). In the case γ = 0.15, φ = 3, the estimate of τ3 is too small (a large negative value that differs from the true value by more than two standard errors), and the estimate of γ is slightly too small, although it differs from the true value by less than one standard error. Note how the pattern of over- and underestimation agrees with what we might expect when the pollution variable is positively correlated with the first two season indicators and negatively correlated with the third. But there is also a hint of a pattern in the estimates of τ3 which has no such plausible explanation; in general, as φ increases across the table, τ3 appears to decrease (become more negative). The observations in the third season have the lowest mean values, and as the amount of dispersion increases, there will be more and more zero values among these observations.
We might infer that for large enough amounts of over-dispersion, this could cause the estimator to underestimate the season effect, but with the low degree of precision in the averages of the parameter estimates we cannot determine whether this is the case or whether the deviations from the simulated values are just chance. In the misspecified variance simulations, the regression parameter estimates are very good; in fact, they appear somewhat better than in the correct variance case. The estimates of φ seen in parts A and B of this table are similar to those in Table 17. The standard deviations of the parameter estimates in the 1000 simulations and the mean values of the standard errors of the regression parameters are summarized in Table 21. As in part A of Table 18, under the correct variance formulation the model and empirical estimates are very similar. Both estimators are very close to the standard deviations of the parameter estimates when the dispersion is 1 or 1.4. For simulated dispersions of 2 or 3, the averages of the estimates are not as close to the simulated values, but the differences can be explained by the variability of the observed standard deviations and the estimated standard errors.

Table 21. Standard Deviations of Estimates Under Alternative Hypothesis and Mean Values of Estimated Standard Errors (±s.d.).

A.
Correct Variance Function

                                         Simulated φ
Simulated γ  Parameter  Method      1            1.4          2            3
0.05         φ    Simulated       .054         .099         .166         .326
             β    Simulated       .122         .149         .177         .221
                  Model           .125(.005)   .148(.007)   .176(.009)   .215(.014)
                  Empirical       .124(.007)   .147(.010)   .175(.015)   .214(.024)
             τ1   Simulated       .209         .255         .300         .374
                  Model           .212(.007)   .251(.010)   .298(.013)   .364(.020)
                  Empirical       .211(.012)   .249(.017)   .296(.026)   .362(.041)
             τ2   Simulated       .210         .246         .305         .373
                  Model           .213(.007)   .251(.010)   .298(.013)   .365(.020)
                  Empirical       .211(.012)   .249(.018)   .296(.026)   .364(.042)
             τ3   Simulated       .143         .171         .194         .261
                  Model           .145(.007)   .172(.009)   .205(.012)   .251(.019)
                  Empirical       .144(.008)   .171(.011)   .203(.014)   .247(.021)
             γ    Simulated       .138         .166         .203         .243
                  Model           .139(.005)   .164(.007)   .205(.012)   .238(.013)
                  Empirical       .138(.007)   .163(.009)   .203(.014)   .238(.021)
0.10         φ    Simulated       .054         .096         .173         .321
             β    Simulated       .126         .151         .172         .224
                  Model           .126(.005)   .148(.007)   .177(.009)   .216(.014)
                  Empirical       .125(.008)   .147(.011)   .175(.015)   .214(.023)
             τ1   Simulated       .210         .255         .287         .380
                  Model           .212(.008)   .250(.010)   .298(.013)   .364(.019)
                  Empirical       .210(.012)   .248(.018)   .295(.026)   .361(.040)
             τ2   Simulated       .217         .250         .289         .385
                  Model           .212(.008)   .250(.010)   .299(.013)   .364(.019)
                  Empirical       .211(.012)   .249(.018)   .295(.026)   .360(.041)
             τ3   Simulated       .144         .174         .215         .261
                  Model           .147(.007)   .174(.009)   .209(.013)   .255(.020)
                  Empirical       .147(.008)   .173(.011)   .207(.015)   .251(.021)
             γ    Simulated       .139         .166         .188         .249
                  Model           .138(.005)   .163(.006)   .195(.009)   .237(.012)
                  Empirical       .138(.007)   .162(.010)   .193(.014)   .236(.022)
0.15         φ    Simulated       .054         .096         .178         .332
             β    Simulated       .128         .150         .174         .223
                  Model           .126(.005)   .149(.007)   .177(.010)   .216(.015)
                  Empirical       .125(.008)   .148(.011)   .176(.016)   .215(.024)
             τ1   Simulated       .220         .251         .284         .372
                  Model           .211(.007)   .250(.010)   .298(.013)   .363(.020)
                  Empirical       .210(.012)   .248(.017)   .296(.027)   .361(.040)
             τ2   Simulated       .216         .248         .292         .374
                  Model           .211(.007)   .250(.010)   .298(.013)   .363(.020)
                  Empirical       .211(.012)   .248(.018)   .297(.026)   .361(.040)
             τ3   Simulated       .152         .175         .222         .266
                  Model           .149(.007)   .177(.010)   .212(.014)   .259(.020)
                  Empirical       .149(.008)   .176(.011)   .210(.015)   .255(.022)
             γ    Simulated       .141         .161         .186         .243
                  Model           .138(.005)   .163(.006)   .194(.008)   .236(.013)
                  Empirical       .137(.007)   .162(.010)   .193(.014)   .235(.021)

B. Misspecified Variance Function

                                         Simulated φ
Simulated γ  Parameter  Method      1            1.4          2            3
0.05         φ    Simulated       .053         .086         .144         .259
             β    Simulated       .122         .145         .169         .197
                  Model           .125(.005)   .143(.007)   .167(.008)   .199(.012)
                  Empirical       .124(.007)   .142(.010)   .166(.013)   .199(.020)
             τ1   Simulated       .209         .245         .292         .351
                  Model           .212(.007)   .243(.009)   .282(.012)   .338(.017)
                  Empirical       .211(.012)   .243(.017)   .285(.023)   .343(.037)
             τ2   Simulated       .210         .253         .290         .345
                  Model           .213(.007)   .243(.010)   .283(.012)   .338(.017)
                  Empirical       .211(.012)   .243(.017)   .286(.024)   .343(.037)
             τ3   Simulated       .143         .163         .189         .212
                  Model           .145(.007)   .166(.009)   .194(.011)   .232(.016)
                  Empirical       .144(.008)   .159(.009)   .179(.011)   .208(.014)
             γ    Simulated       .138         .167         .191         .227
                  Model           .139(.005)   .159(.006)   .185(.008)   .221(.011)
                  Empirical       .138(.007)   .159(.009)   .186(.013)   .225(.020)
0.10         φ    Simulated       .054         .085         .143         .264
             β    Simulated       .126         .145         .172         .203
                  Model           .126(.005)   .144(.007)   .168(.009)   .201(.012)
                  Empirical       .125(.008)   .143(.010)   .167(.014)   .199(.020)
             τ1   Simulated       .210         .244         .295         .350
                  Model           .212(.008)   .243(.009)   .283(.012)   .339(.016)
                  Empirical       .210(.012)   .244(.017)   .285(.024)   .343(.037)
             τ2   Simulated       .217         .247         .295         .349
                  Model           .212(.008)   .243(.009)   .283(.012)   .339(.017)
                  Empirical       .211(.012)   .243(.017)   .286(.025)   .342(.037)
             τ3   Simulated       .145         .171         .190         .220
                  Model           .147(.007)   .169(.009)   .197(.012)   .237(.017)
                  Empirical       .147(.008)   .162(.010)   .182(.010)   .210(.015)
             γ    Simulated       .139         .160         .193         .227
                  Model           .138(.005)   .159(.006)   .184(.008)   .221(.010)
                  Empirical       .138(.007)   .159(.009)   .187(.013)   .225(.019)
0.15         φ    Simulated       .054         .084         .146         .259
             β    Simulated       .120         .148         .170         .207
                  Model           .126(.005)   .145(.007)   .168(.009)   .203(.012)
                  Empirical       .125(.008)   .144(.010)   .166(.013)   .199(.019)
             τ1   Simulated       .202         .252         .289         .348
                  Model           .212(.008)   .243(.009)   .283(.012)   .340(.017)
                  Empirical       .210(.012)   .244(.017)   .284(.023)   .342(.035)
             τ2   Simulated       .204         .251         .296         .346
                  Model           .212(.008)   .243(.009)   .283(.012)   .340(.017)
                  Empirical       .211(.012)   .244(.017)   .285(.024)   .343(.035)
             τ3   Simulated       .148         .168         .183         .226
                  Model           .150(.007)   .172(.009)   .200(.013)   .241(.017)
                  Empirical       .149(.008)   .164(.010)   .183(.012)   .211(.014)
             γ    Simulated       .136         .165         .194         .224
                  Model           .138(.005)   .158(.006)   .184(.008)   .221(.011)
                  Empirical       .137(.007)   .159(.009)   .187(.013)   .226(.019)

To estimate the variance of the observed standard deviations, note the following. Because regression parameter estimates are asymptotically normal, if θi is a regression parameter estimate from the ith simulated data set and θ̄ is the average of these estimates, then σ⁻² Σi (θi − θ̄)², where σ² represents Var(θi), should have approximately a χ² distribution with n − 1 degrees of freedom. It follows that the sample variance of these estimates should have variance roughly equal to 2σ⁴/(n − 1). Using the delta method we arrive at the approximation Var(s) ≈ σ²/[2(n − 1)]. Using this approximation, observed standard deviations of s = 0.3 and s = 0.2 would have standard errors of approximately 0.0067 and 0.0045 respectively. The mean values of the estimated standard errors will have approximate standard errors given by their standard deviations (enclosed in brackets in the table) divided by √1000. When φ is 2 this can be as large as .027/√1000 ≈ 0.0009. With these precisions in mind, the observed differences between the variance estimates and the standard deviations are not too large. Another approach to determining whether the model-based and empirical standard errors approximate the simulated standard errors can be based on the increased precision provided by generating more data sets for a given choice of γ and φ.
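The approximation Var(s) ≈ σ²/[2(n − 1)] quoted above is easy to check by simulation. The following sketch (not part of the original analysis) compares the observed standard deviation of s over repeated normal samples of size n = 1000 with the delta-method value σ/√(2(n − 1)).

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, sigma = 1000, 4000, 0.3

# Sample standard deviation s from each of `reps` samples of size n.
s = np.array([rng.normal(0.0, sigma, n).std(ddof=1) for _ in range(reps)])

observed = s.std(ddof=1)               # Monte Carlo s.d. of s
delta = sigma / np.sqrt(2 * (n - 1))   # delta-method: sqrt(sigma^2 / [2(n-1)])
print(observed, delta)                 # both about 0.0067, as in the text
```

With σ = 0.3 and n = 1000 the delta-method value is 0.3/√1998 ≈ 0.0067, matching the standard error quoted for an observed standard deviation of 0.3.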
The following table summarizes the results of the simulations for the case γ = 0.10 and φ = 2 displayed in part A of Table 21, together with the results of five repeats of the simulation experiment using the same γ and φ.

Parameter  Method      original  repeat 1  repeat 2  repeat 3  repeat 4  repeat 5  average
β          Simulated     .172      .169      .182      .180      .177      .179      .177
           Model         .177      .177      .177      .177      .177      .177      .177
           Empirical     .175      .175      .176      .176      .176      .175      .176
τ1         Simulated     .287      .293      .311      .307      .296      .298      .299
           Model         .298      .299      .299      .298      .298      .298      .298
           Empirical     .295      .296      .296      .297      .295      .296      .296
τ2         Simulated     .289      .289      .304      .299      .297      .300      .296
           Model         .299      .299      .299      .299      .298      .299      .299
           Empirical     .295      .296      .297      .297      .297      .296      .296
τ3         Simulated     .215      .206      .209      .215      .207      .215      .211
           Model         .209      .209      .209      .209      .208      .209      .209
           Empirical     .207      .207      .207      .207      .207      .207      .207
γ          Simulated     .188      .196      .196      .197      .194      .192      .194
           Model         .195      .195      .195      .195      .194      .195      .195
           Empirical     .193      .194      .194      .193      .194      .194      .194

From the above table we note that in each simulation experiment, the mean values of the model and empirical estimates of the standard errors are very close to the average of the simulated standard deviations across the columns of the table. This confirms that the model and empirical estimators do a good job of estimating the standard deviations of the parameter estimates. We now examine part B of Table 21, corresponding to the simulations using the misspecified variance formulation to generate the data. Here we seem to be seeing the same pattern as in the simulations using data generated with the reduced model (part B of Table 18). The model-based estimates appear too large for regression parameters which correspond to "seasons" with lower than average dispersion and too small for parameters which correspond to "seasons" with larger than average dispersion.
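For reference, the two estimators being compared can be written out explicitly. For a log-linear fit with estimating equations Σ xi(yi − μi) = 0, the model-based covariance is φ̂ A⁻¹ with A = Σ μi xi xiᵀ and φ̂ the Pearson estimate of the dispersion, while the empirical ("sandwich") covariance is A⁻¹ B A⁻¹ with B = Σ (yi − μi)² xi xiᵀ. The sketch below is a schematic rendering of these formulas, not the code used in the thesis.

```python
import numpy as np

def poisson_se(X, y, iters=25):
    """Model-based and empirical (sandwich) standard errors for a
    log-linear Poisson regression; a schematic version of the two
    estimators compared in this chapter."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):                    # IRLS
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    mu = np.exp(X @ beta)
    A = X.T @ (mu[:, None] * X)               # sum of mu_i x_i x_i'
    phi = np.sum((y - mu) ** 2 / mu) / (len(y) - X.shape[1])   # Pearson
    model = phi * np.linalg.inv(A)
    B = X.T @ (((y - mu) ** 2)[:, None] * X)  # sum of (y_i - mu_i)^2 x_i x_i'
    empirical = np.linalg.inv(A) @ B @ np.linalg.inv(A)
    return np.sqrt(np.diag(model)), np.sqrt(np.diag(empirical))

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.poisson(np.exp(0.5 + 0.2 * X[:, 1]))
model_se, emp_se = poisson_se(X, y)
print(model_se, emp_se)   # close to each other when the variance is correct
```

When the working variance φμ is correct the two estimators agree, as in part A of Table 21; the sandwich form remains consistent when the variance is misspecified, which is why the empirical estimator holds up better in part B.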
In this situation, with a single pollution covariate, there is no clear picture of under- or overestimation of the standard errors of γ. However, one might speculate that if the pollution covariate were associated with one of the seasons (such as through a season × pollution interaction) the standard errors of the corresponding regression parameter might be under- or overestimated, depending upon which season the covariate was related to, and this could lead to unexpected results. As for the empirical estimator, the results suggest the possibility that the standard errors of the regression parameters tend to be underestimated in these simulations. Of course, when making these observations we should keep in mind the possibility that the observed standard deviations of the parameter estimates may differ somewhat from the true standard deviations. A similar exercise to that carried out in the correct variance simulations was carried out for these simulations under the misspecified variance, for the same combination γ = 0.10 and φ = 2. The final column of the table which follows shows the averages of the six simulation experiments.

Parameter  Method      original  repeat 1  repeat 2  repeat 3  repeat 4  repeat 5  average
β          Simulated     .172      .165      .164      .171      .168      .167      .168
           Model         .168      .167      .168      .168      .168      .167      .168
           Empirical     .167      .166      .166      .167      .166      .166      .166
τ1         Simulated     .295      .289      .287      .288      .282      .291      .289
           Model         .283      .282      .283      .282      .282      .282      .282
           Empirical     .285      .285      .284      .286      .285      .285      .285
τ2         Simulated     .295      .288      .286      .293      .286      .289      .290
           Model         .283      .283      .283      .283      .283      .283      .283
           Empirical     .286      .285      .285      .286      .285      .285      .285
τ3         Simulated     .190      .189      .185      .182      .188      .190      .187
           Model         .197      .196      .197      .197      .197      .197      .197
           Empirical     .182      .180      .181      .181      .181      .181      .181
γ          Simulated     .193      .193      .192      .185      .190      .195      .191
           Model         .185      .184      .184      .184      .184      .184      .184
           Empirical     .187      .187      .187      .187      .187      .187      .187

The final column of this table suggests the model-based estimator does underestimate the standard errors for τ1 and τ2, and overestimate the standard error for τ3. We may also note that the standard error of γ is underestimated by the model-based estimator in these simulations. This part of the table is also consistent with the observation that the empirical estimator appears to slightly underestimate the standard errors. Whether this is true for other combinations of γ and φ remains speculation. In summary, for the correct variance simulations, both the model-based and the empirical estimators of standard errors perform well. However, in the misspecified variance simulations, there is some evidence that the model-based estimator does not perform well and that the empirical estimator may underestimate the standard errors slightly.

Table 22. Observed Rejection Probabilities Under Alternative Hypothesis.

A. Correct Variance Function

                    φ = 1             φ = 1.4           φ = 2             φ = 3
γ     Test       .10  .05  .01     .10  .05  .01     .10  .05  .01     .10  .05  .01
.05   Wald/Mod. .121 .065 .014    .116 .062 .014    .121 .066 .012    .115 .063 .020
      Wald/Emp. .124 .069 .015    .120 .062 .012    .127 .072 .014    .113 .066 .020
      Score/Emp..124 .070 .014    .116 .063 .012    .126 .067 .013    .110 .063 .017
      Dev/Scaled.121 .065 .014    .116 .062 .014    .122 .067 .013    .115 .064 .022
.10   Wald/Mod. .185 .104 .027    .170 .102 .030    .130 .060 .016    .150 .087 .018
      Wald/Emp. .196 .110 .030    .171 .103 .034    .129 .068 .018    .159 .085 .019
      Score/Emp..196 .108 .028    .167 .100 .032    .130 .068 .016    .156 .081 .017
      Dev/Scaled.185 .108 .027    .176 .104 .030    .131 .061 .017    .151 .089 .020
.15   Wald/Mod. .298 .191 .076    .201 .126 .038    .195 .111 .019    .165 .087 .027
      Wald/Emp. .300 .189 .081    .205 .134 .039    .188 .115 .027    .161 .095 .025
      Score/Emp..299 .190 .080    .206 .131 .040    .190 .114 .027    .159 .093 .023
      Dev/Scaled.299 .192 .076    .206 .128 .040    .196 .113 .019    .167 .092 .028

B. Misspecified Variance Function

                    φ = 1             φ = 1.4           φ = 2             φ = 3
γ     Test       .10  .05  .01     .10  .05  .01     .10  .05  .01     .10  .05  .01
.05   Wald/Mod. .121 .065 .014    .136 .075 .019    .120 .062 .018    .122 .052 .012
      Wald/Emp. .124 .069 .015    .129 .079 .023    .117 .062 .018    .118 .047 .011
      Score/Emp..124 .070 .014    .129 .076 .020    .116 .058 .016    .111 .042 .007
      Dev/Scaled.121 .065 .014    .137 .076 .019    .122 .063 .018    .121 .053 .013
.10   Wald/Mod. .185 .104 .027    .165 .092 .031    .167 .096 .024    .142 .078 .016
      Wald/Emp. .196 .110 .030    .159 .090 .034    .165 .099 .020    .135 .074 .012
      Score/Emp..196 .108 .028    .159 .089 .032    .161 .097 .019    .129 .068 .012
      Dev/Scaled.185 .108 .027    .165 .095 .031    .168 .097 .024    .145 .080 .018
.15   Wald/Mod. .298 .191 .076    .253 .166 .055    .219 .136 .047    .190 .106 .027
      Wald/Emp. .300 .189 .081    .253 .166 .055    .212 .137 .048    .184 .105 .024
      Score/Emp..299 .190 .080    .250 .163 .052    .207 .133 .046    .179 .104 .022
      Dev/Scaled.299 .192 .076    .254 .167 .056    .221 .138 .048    .191 .109 .028

The power of the four test statistics under study to detect the simulated pollution effect is summarized in Table 22; this table is similar in nature to Table 6 for the one-series simple simulations. The results of the correct variance simulations suggest very low power for all test statistics in this more complicated case (only marginally higher than the nominal levels for the smallest simulated value of γ), but this is not surprising given the variability of the estimates of γ compared to their simulated values. As in Table 6, the power of the tests tends to decrease as the amount of dispersion increases. The test statistics show similar rejection probabilities in the misspecified variance simulations. As in Table 6, the rejection probabilities are, in general, slightly higher in the misspecified variance simulations than in the correct variance simulations because for each simulated value of φ there is less dispersion in the data under the misspecified variance formulation. Thus, except for the fact that the rejection probabilities are lower in Table 22 than in Table 6 due to the larger standard errors of γ in these more complicated simulations, the test statistics perform similarly in the simple and complicated simulations with one series. We now summarize the findings of these simulations using the more complicated model for a single series of data. Estimates of φ and of the regression parameters are relatively unaffected by the presence of over-dispersion, although the estimators of these parameters became more variable, as expected.
Standard errors of the regression parameters were accurately estimated by either the model-based estimator or the empirical estimator when the data were generated using the correct variance formulation, but there is some evidence the model-based estimator performs poorly when the misspecified variance formulation is used to generate data. The empirical estimator appeared to do better in the misspecified variance simulations, but it seemed to underestimate the standard errors when the level of dispersion was large relative to the overall mean level (this was not so apparent in the simulations under the reduced model reported in Table 19, but was more readily seen in the simulations under the full model reported in Table 21). In these simulations the problems with estimating the standard errors did not translate into problems with the model-based test statistics. This is in contrast to the simple case simulations with three series, where poor performance of the model-based estimator of variances appeared to result in poor performance of the model-based test statistics. There were no detectable differences among any of the four test statistics under study.

4.5 The More Complicated Case with Three Series

The last case to be considered is the simulations based on (3.19) describing the more complicated case with three series. Recall that the simulated values for the mean and temporal parameters were:

β1 = log 0.75   τ11 = 0.2   τ12 = 0.2   τ13 = -0.2
β2 = 0.70       τ21 = 0.2   τ22 = 0.2   τ23 = -0.4
β3 = 1.6        τ31 = 0.2   τ32 = 0.3   τ33 = -0.4
                τ41 = 0     τ42 = 0     τ43 = 0

The first simulations to be considered use the null model (γ0 = γ1 = γ2 = γ3 = 0) to generate data. The mean values of the parameter estimates from fitting the null model to these data sets are summarized in Table 23. For the correct variance simulations, we see that in both combinations, φ̂ is again estimated to be the average of the dispersions. The mean values of the regression parameter estimates are close to the simulated values.

Table 23. Mean Values of Parameter Estimates.

                    A. Correct           B. Misspecified
                    Combination of φ     Combination of φ
Parameter           I        II          I        II
φ                   1.998    2.785       5.157    8.676
β1                 -0.294   -0.292      -0.291   -0.300
β2                  0.695    0.697       0.692    0.700
β3                  1.597    1.594       1.595    1.575
τ11                 0.204    0.203       0.197    0.205
τ12                 0.204    0.198       0.197    0.211
τ13                -0.198   -0.199      -0.195   -0.193
τ21                 0.205    0.198       0.205    0.203
τ22                 0.204    0.201       0.206    0.200
τ23                -0.397   -0.397      -0.397   -0.407
τ31                 0.202    0.199       0.198    0.217
τ32                 0.305    0.304       0.302    0.321
τ33                -0.396   -0.398      -0.395   -0.396

In the misspecified variance simulations, the estimates of φ are in the same neighborhood as the estimates in the three-series simple simulations. This is expected because the average of the levels of dispersion over all seasons in all series in these simulations is very close to the average of the three levels of dispersion in the three-series simple simulations. The mean values of the regression parameter estimates are fairly close to the simulated values. Any deviations from the simulated values that we see here are likely random error due to the substantial amount of variability in the data sets, especially in the simulations using combination II. We now turn our attention to the standard deviations of the above estimates and the mean values of the estimated standard errors provided in Table 24. As in the three-series simple case, in the correct variance simulations, the model-based standard errors for the parameters associated with the first series (β1 and the τ1j) are too large. This is a result of using the estimate φ̂ (estimated using data from all three series) to calculate the standard errors for parameters corresponding to a series where the simulated dispersion is lower than φ̂. Analogously, we see that the standard errors for β3 and the τ3j are too small.
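The behaviour of the pooled dispersion estimate is easy to reproduce. The sketch below uses a gamma-Poisson construction chosen purely for illustration (Section 3.1 describes the thesis's actual generating mechanism): it draws three series with dispersions 1, 2 and 3 and checks that a pooled Pearson-type estimate of φ lands near their average.

```python
import numpy as np

def overdispersed_counts(mu, phi, size, rng):
    """Counts with mean mu and variance phi*mu from a gamma-Poisson
    (negative binomial) mixture; one standard construction, not
    necessarily the one used in the thesis."""
    if phi == 1.0:
        return rng.poisson(mu, size)
    # Gamma mean mu, variance mu*(phi - 1)  =>  mixture variance phi*mu.
    lam = rng.gamma(mu / (phi - 1.0), phi - 1.0, size)
    return rng.poisson(lam)

rng = np.random.default_rng(3)
mu, n = 5.0, 10000
y = np.concatenate([overdispersed_counts(mu, p, n, rng) for p in (1.0, 2.0, 3.0)])

# Pooled Pearson-type estimate of phi (mean treated as known for simplicity):
phi_hat = np.mean((y - mu) ** 2 / mu)
print(phi_hat)   # near 2.0, the average of the three series' dispersions
```

A single pooled φ̂ of this kind is what makes the model-based standard errors too large for the low-dispersion series and too small for the high-dispersion series, as observed in Table 24.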
For the regression parameters corresponding to series 2, we also find predictable results. Using combination I, the standard errors for β2 and the τ2j are about right because the simulated φ for this series is equal to the average of the φ for all three series. However, using combination II, the simulated φ for series 2 is smaller than φ̂ and we see that the model-based standard errors are larger than the standard deviations of the parameter estimates.

Table 24. Standard Deviations of Estimates and Mean Values of Estimated Standard Errors (±s.d.).

                              A. Correct                      B. Misspecified
                              Combination of φ                Combination of φ
Parameter  Method             I             II                I             II
φ          Simulated          .084          .140              .407          .973
β1         Simulated          .085          .105              .081          .103
           Model              .124 (.006)   .146 (.009)       .198 (.011)   .258 (.019)
           Empirical          .087 (.006)   .103 (.008)       .087 (.006)   .100 (.008)
β2         Simulated          .076          .074              .092          .091
           Model              .075 (.003)   .089 (.004)       .121 (.007)   .156 (.011)
           Empirical          .075 (.005)   .075 (.005)       .092 (.007)   .092 (.007)
β3         Simulated          .058          .074              .111          .157
           Model              .048 (.002)   .057 (.002)       .077 (.005)   .101 (.009)
           Empirical          .058 (.004)   .075 (.005)       .111 (.010)   .153 (.018)
τ11        Simulated          .115          .141              .107          .136
           Model              .167 (.006)   .197 (.009)       .268 (.013)   .348 (.023)
           Empirical          .118 (.006)   .139 (.007)       .118 (.006)   .136 (.007)
τ12        Simulated          .111          .141              .113          .140
           Model              .167 (.006)   .197 (.009)       .268 (.013)   .347 (.023)
           Empirical          .118 (.006)   .139 (.007)       .118 (.006)   .135 (.007)
τ13        Simulated          .130          .154              .125          .150
           Model              .145 (.007)   .218 (.010)       .295 (.015)   .384 (.025)
           Empirical          .130 (.006)   .154 (.008)       .130 (.007)   .147 (.008)
τ21        Simulated          .106          .105              .133          .124
           Model              .102 (.003)   .120 (.004)       .164 (.008)   .211 (.014)
           Empirical          .101 (.005)   .101 (.005)       .128 (.007)   .128 (.007)
τ22        Simulated          .101          .102              .130          .125
           Model              .102 (.003)   .120 (.004)       .164 (.008)   .211 (.013)
           Empirical          .101 (.005)   .101 (.005)       .128 (.007)   .128 (.007)
τ23        Simulated          .115          .114              .138          .138
           Model              .119 (.004)   .141 (.006)       .192 (.010)   .248 (.016)
           Empirical          .119 (.006)   .118 (.006)       .135 (.007)   .135 (.008)
τ31        Simulated          .081          .099              .163          .216
           Model              .065 (.002)   .077 (.002)       .104 (.005)   .136 (.009)
           Empirical          .079 (.004)   .102 (.005)       .156 (.010)   .216 (.018)
τ32        Simulated          .077          .098              .158          .212
           Model              .063 (.002)   .075 (.002)       .102 (.005)   .133 (.008)
           Empirical          .077 (.003)   .100 (.005)       .156 (.010)   .216 (.018)
τ33        Simulated          .092          .119              .166          .225
           Model              .076 (.002)   .090 (.003)       .122 (.006)   .160 (.011)
           Empirical          .092 (.004)   .119 (.007)       .159 (.010)   .218 (.018)

The empirical estimates, on the other hand, are very good for all parameters (and at both levels of dispersion). There are only small differences between the empirical estimates and the observed standard deviations, and there is no indication here that this estimator consistently underestimates the standard errors. Given the precision of the observed standard deviations and the averages of the standard errors (discussed in the previous section), the observed differences can be explained by chance. In the misspecified variance simulations, it is even more apparent that the model-based estimator of the standard errors performs poorly while the empirical estimates are quite good. One could not say that the empirical estimates are consistently too low in these simulations. Table 25 summarizes the observed rejection probabilities of hypothesis tests for a common pollution covariate (H: γ0 = 0) using the data generated under the reduced model.

Table 25. Observed Rejection Probabilities: Common to Null Model.

             A. Correct                             B. Misspecified
             Combination of φ                       Combination of φ
             I                  II                  I                  II
Test         0.10 0.05 0.01     0.10 0.05 0.01      0.10 0.05 0.01     0.10 0.05 0.01
Wald/Mod     .135 .081 .016     .180 .108 .033      .206 .147 .056     .247 .153 .058
Wald/Emp     .101 .047 .006     .112 .055 .014      .116 .058 .021     .108 .056 .017
Score/Emp    .099 .045 .006     .112 .054 .010      .112 .057 .018     .102 .053 .016
Dev/Scaled   .135 .081 .016     .180 .108 .033      .206 .146 .056     .246 .153 .058

In the correct variance simulations we find that with both combinations of φ, the empirical test statistics do reasonably well, achieving levels close to the nominal rejection probabilities. The model-based tests reject too often in the simulations using the first combination of φ and even more so with the second combination. Notice again how very similar the results are for the two model-based tests. As in Table 24, this overall pattern is even more clear in the misspecified variance simulations. Thus in this three-series complicated case we find that the parameter estimates are adequate, as are the empirical estimates of the standard errors. However, as in the three-series simple case, we find that the model-based estimator of standard errors performs poorly, presumably as a result of using a single estimate of φ when in fact different amounts of dispersion are simulated in each series. This poor performance of the model-based estimator appears to carry through to the model-based test statistics. This should be expected in the case of the model-based Wald test, but it is interesting that it is also true of the scaled deviance test. We now consider simulations using a model which includes the pollution covariate common to all three series; the simulated value is γ0 = 0.1. The results summarized in Table 26 indicate that using either of the variance formulations, the overall effects and season effects are estimated just as well as they were in the null simulations.
The mean values of the estimates of the pollution parameter are also close to the simulated value of 0.1.

Table 26. Mean Values of Parameter Estimates.

                    A. Correct           B. Misspecified
                    Combination of φ     Combination of φ
Parameter           I        II          I        II
φ                   1.997    2.789       5.202    8.686
β1                 -0.294   -0.299      -0.293   -0.299
β2                  0.694    0.695       0.694    0.695
β3                  1.593    1.597       1.589    1.601
τ11                 0.212    0.202       0.203    0.205
τ12                 0.201    0.202       0.208    0.205
τ13                -0.195   -0.193      -0.198   -0.192
τ21                 0.204    0.200       0.203    0.201
τ22                 0.199    0.201       0.200    0.200
τ23                -0.397   -0.400      -0.402   -0.402
τ31                 0.198    0.200       0.192    0.196
τ32                 0.297    0.302       0.301    0.289
τ33                -0.407   -0.402      -0.406   -0.400
γ0                  0.101    0.098       0.103    0.103

A summary of the standard deviations of the above parameter estimates as well as the mean values of the estimated standard errors of the regression parameters is presented in Table 27. For the correct variance simulations, as in Table 24, the model-based standard errors for β1 and the τ1j are overestimated while those for β3 and the τ3j are underestimated. Using combination I of φ, the model-based standard errors for β2 and the τ2j are about right (as expected; see above). With combination II of φ, the model-based standard errors for β2 are only slightly too large and, somewhat surprisingly, only that for τ23 is too large (recall that in Table 24 the standard errors for τ21, τ22 and τ23 were all overestimated on average). We also see that the model-based standard errors for γ0 are underestimated. This is consistent with the larger than expected rejection probabilities in Table 25 for the model-based Wald test. The empirical estimates, on the other hand, are very good for all regression parameters.

Table 27. Standard Deviations of Estimates and Mean Values of Estimated Standard Errors (±s.d.).

                              A. Correct                      B. Misspecified
                              Combination of φ                Combination of φ
Parameter  Method             I             II                I             II
φ          Simulated          .083          .140              .423          .987
β1         Simulated          .100          .124              .119          .152
           Model              .133 (.006)   .158 (.009)       .215 (.012)   .279 (.021)
           Empirical          .100 (.006)   .119 (.007)       .121 (.006)   .148 (.011)
β2         Simulated          .087          .097              .122          .149
           Model              .087 (.003)   .103 (.004)       .140 (.008)   .181 (.012)
           Empirical          .089 (.005)   .094 (.005)       .123 (.007)   .143 (.011)
β3         Simulated          .076          .095              .136          .192
           Model              .063 (.002)   .074 (.002)       .102 (.005)   .132 (.008)
           Empirical          .075 (.005)   .095 (.006)       .138 (.012)   .188 (.022)
τ11        Simulated          .156          .186              .200          .266
           Model              .185 (.006)   .218 (.008)       .297 (.014)   .385 (.024)
           Empirical          .147 (.005)   .176 (.007)       .201 (.011)   .257 (.020)
τ12        Simulated          .150          .180              .196          .266
           Model              .185 (.006)   .217 (.008)       .298 (.014)   .385 (.024)
           Empirical          .147 (.005)   .176 (.007)       .201 (.011)   .257 (.020)
τ13        Simulated          .130          .159              .130          .161
           Model              .190 (.007)   .226 (.010)       .307 (.016)   .397 (.027)
           Empirical          .134 (.007)   .159 (.009)       .134 (.007)   .150 (.008)
τ21        Simulated          .133          .149              .206          .256
           Model              .128 (.003)   .151 (.004)       .207 (.009)   .267 (.015)
           Empirical          .134 (.005)   .148 (.006)       .207 (.012)   .253 (.021)
τ22        Simulated          .132          .150              .209          .264
           Model              .129 (.003)   .152 (.004)       .208 (.009)   .268 (.015)
           Empirical          .134 (.005)   .149 (.006)       .208 (.012)   .254 (.021)
τ23        Simulated          .119          .120              .136          .140
           Model              .123 (.004)   .145 (.006)       .199 (.010)   .257 (.017)
           Empirical          .122 (.006)   .122 (.006)       .137 (.007)   .137 (.007)
τ31        Simulated          .120          .151              .219          .311
           Model              .101 (.002)   .120 (.003)       .163 (.006)   .212 (.011)
           Empirical          .118 (.007)   .148 (.010)       .224 (.020)   .303 (.038)
τ32        Simulated          .121          .149              .225          .309
           Model              .101 (.002)   .120 (.003)       .163 (.006)   .212 (.011)
           Empirical          .118 (.007)   .148 (.010)       .225 (.021)   .306 (.040)
τ33        Simulated          .094          .120              .162          .228
           Model              .078 (.002)   .093 (.003)       .127 (.006)   .164 (.011)
           Empirical          .096 (.005)   .123 (.007)       .160 (.010)   .217 (.017)
γ0         Simulated          .070          .086              .127          .179
           Model              .061 (.001)   .072 (.002)       .098 (.004)   .127 (.006)
           Empirical          .068 (.003)   .084 (.004)       .127 (.010)   .171 (.018)
In the misspecified variance simulations, the model-based standard errors for β1 and the τ1j are overestimated while those for β3 and the τ3j are underestimated. The model-based standard errors for β2 are overestimated with both combinations of φ, while the standard errors for τ21 and τ22 appear quite good compared with the standard error of τ23, which is clearly overestimated. These results are similar to the correct variance results but are again different from the results of the simulations using the null model as summarized in Table 24. As for γ0, the standard error for this parameter appears to be underestimated on average, just as it was in the correct variance simulations. The empirical estimates are close to the simulated values with combination I, but there is a hint the estimates are too low on average with combination II (although keeping the precisions of the estimates and standard deviations in mind, the differences seen here could be just chance). The empirical estimates seem to approximate the simulated standard deviations better here than they did in the single-series complicated simulations (for data generated with the model including the pollution effect; see Table 21). The simplest explanation for the poorer results in the one-series complicated case would be that in those simulations we considered much higher levels of dispersion relative to the overall mean value (β = log 0.75) than in this three-series case, and these higher levels of dispersion contribute to underestimation of the standard errors. The levels of the four test statistics in a model reduction from separate pollution effects for each series to a common effect, using the data generated with a common pollution effect, are summarized in Table 28.
Under both variance formulations and for both combinations of φ, the empirical test statistics adhere reasonably well to the nominal levels, but the model-based tests do not reject as often as they should. For the model-based Wald test this would suggest the standard errors for the separate pollution effects are overestimated (for at least one of the parameters). Again the model Wald and scaled deviance tests perform almost identically.

Table 28. Observed Rejection Probabilities: Separate to Common.

The powers of these tests to detect the common pollution effect represented by the simulated value of γ0 = 0.10 are summarized in Table 29. In both variance formulations the model-based tests reject more often than the empirical tests (as expected given the underestimation of the standard errors of γ0 and the consistent similarity of performance of the model Wald and scaled deviance tests). Predictably, the rejection probabilities decrease as φ increases.

Table 29. Observed Rejection Probabilities: Common to Null.
             A. Correct                          B. Misspecified
             Combination I      Combination II   Combination I      Combination II
Test         0.10  0.05  0.01   0.10  0.05  0.01 0.10  0.05  0.01   0.10  0.05  0.01
Wald/Mod     .522  .417  .224   .433  .334  .168 .335  .233  .102   .295  .219  .115
Wald/Emp     .451  .340  .147   .338  .233  .083 .205  .112  .043   .181  .118  .047
Score/Emp    .451  .340  .143   .338  .230  .082 .203  .108  .042   .171  .113  .038
Dev/Scaled   .522  .418  .225   .434  .334  .168 .336  .234  .103   .298  .219  .115

The final model we will consider for the more complicated simulations with three series will be the most general one, which includes separate pollution effects for each series. The simulated values were γ0=0.1, γ1=0.1, γ2=-0.2 and γ3=0. We begin as always with a table summarizing the mean values of the parameter estimates from 1000 simulated data sets generated with the above model.

Table 30. Mean Values of Parameter Estimates.

             B. Misspecified                 A. Correct
Parameter    Combination I  Combination II  Combination I  Combination II
φ             8.664          5.187           1.994          2.789
β1           -0.289         -0.295          -0.299         -0.289
β2            0.694          0.695           0.695          0.698
β3            1.595          1.595           1.569          1.601
τ11           0.188          0.204           0.198          0.203
τ12           0.205          0.193           0.203          0.202
τ13          -0.191         -0.202          -0.209         -0.201
τ21           0.196          0.195           0.203          0.205
τ22           0.192          0.203           0.203          0.203
τ23          -0.393         -0.400          -0.392         -0.399
τ31           0.228          0.193           0.200          0.200
τ32           0.298          0.323           0.299          0.302
τ33          -0.405         -0.401          -0.407         -0.402
γ0            0.100          0.087           0.100          0.101
γ1            0.099          0.103           0.106          0.124
γ2           -0.195         -0.186          -0.200         -0.202

The estimates of the βi and τij are very similar to those in Tables 23 and 26 for both correct and misspecified variance formulations. In the correct variance simulations, the estimates are very good for γ0, γ1 and γ2, as they are in the misspecified variance simulations using combination I of φ.
However, the estimates of these pollution parameters differ noticeably from the true values in the simulations using combination II of φ, although the differences are not too great considering the amount of variability in the simulated data, which is reflected in the standard deviations of these parameter estimates as shown in Table 31. Note that the predictor corresponding to the overall pollution effect is not orthogonal to the predictors corresponding to the separate effects. The mean values of the parameter estimates in a reparameterized model with orthogonal predictors representing three separate pollution effects, as opposed to a main effect and two separate effects, would generally be closer to the simulated values (although the standard errors of those estimates would be smaller). In that case the interpretation of the table would remain the same as above; that is, the mean values of the estimates differ from the simulated values but are within acceptable limits considering the precisions of these mean values. In the correct variance simulations, the mean values of the model-based standard errors for the βi and τij provided in Table 31 are very similar to those observed in Table 24, as one would expect. Recall that following the simulations with a common pollution effect (Table 27) there was some question about the model-based standard errors for τ21 and τ22 (they were noticeably overestimated for combination II in Table 24 but did not appear to be overestimated in Table 27). The results in the above table and Table 24 agree with the conclusion that standard errors are underestimated (overestimated) on average for parameters that correspond to data series with more (less) dispersion than the estimate φ̂. We note that the estimated model-based standard error for the parameter γ1 appears quite a bit larger than it should be.
This would explain the large observed rejection probabilities for the model-based Wald tests (relative to the empirical) seen in Table 29. The empirical estimates of the standard errors for the pollution parameters look very good in these correct variance simulations.

Table 31. Standard Deviations of Estimates and Mean Values of Estimated Standard Errors (±s.d.).

A. Correct variance function:

Parameter  Method      Combination I   Combination II
φ          Simulated   .084            .143
β1         Simulated   .127            .145
           Model       .178 (.006)     .211 (.010)
           Empirical   .125 (.008)     .148 (.010)
β2         Simulated   .108            .108
           Model       .108 (.003)     .127 (.004)
           Empirical   .107 (.007)     .107 (.007)
β3         Simulated   .084            .103
           Model       .069 (.002)     .082 (.003)
           Empirical   .084 (.005)     .108 (.007)
τ11        Simulated   .213            .244
           Model       .297 (.009)     .352 (.013)
           Empirical   .210 (.013)     .247 (.017)
τ12        Simulated   .212            .250
           Model       .298 (.009)     .353 (.013)
           Empirical   .210 (.012)     .248 (.017)
τ13        Simulated   .165            .134
           Model       .196 (.008)     .233 (.013)
           Empirical   .138 (.007)     .164 (.010)
τ21        Simulated   .185            .188
           Model       .185 (.005)     .218 (.007)
           Empirical   .184 (.012)     .183 (.013)
τ22        Simulated   .189            .186
           Model       .186 (.005)     .220 (.007)
           Empirical   .184 (.012)     .184 (.013)
τ23        Simulated   .109            .115
           Model       .115 (.004)     .136 (.005)
           Empirical   .115 (.006)     .114 (.006)
τ31        Simulated   .144            .171
           Model       .116 (.003)     .137 (.004)
           Empirical   .141 (.009)     .182 (.013)
τ32        Simulated   .142            .170
           Model       .116 (.003)     .138 (.004)
           Empirical   .141 (.009)     .182 (.013)
τ33        Simulated   .095            .124
           Model       .078 (.002)     .093 (.003)
           Empirical   .095 (.005)     .123 (.007)
γ0         Simulated   .092            .113
           Model       .075 (.002)     .089 (.002)
           Empirical   .092 (.005)     .119 (.007)
γ1         Simulated   .165            .195
           Model       .207 (.005)     .245 (.008)
           Empirical   .164 (.006)     .199 (.008)
γ2         Simulated   .156            .165
           Model       .143 (.003)     .169 (.004)
           Empirical   .152 (.006)     .169 (.007)

B. Misspecified variance function:

Parameter  Method      Combination I   Combination II
φ          Simulated   .416            1.05
β1         Simulated   .122            .142
           Model       .287 (.015)     .370 (.026)
           Empirical   .126 (.008)     .143 (.010)
β2         Simulated   .139            .133
           Model       .174 (.009)     .224 (.015)
           Empirical   .133 (.010)     .134 (.011)
β3         Simulated   .165            .232
           Model       .112 (.005)     .145 (.009)
           Empirical   .164 (.016)     .224 (.026)
τ11        Simulated   .207            .242
           Model       .480 (.022)     .620 (.040)
           Empirical   .210 (.012)     .242 (.016)
τ12        Simulated   .207            .241
           Model       .481 (.022)     .621 (.040)
           Empirical   .210 (.012)     .242 (.016)
τ13        Simulated   .139            .154
           Model       .317 (.016)     .409 (.029)
           Empirical   .139 (.007)     .154 (.009)
τ21        Simulated   .236            .232
           Model       .298 (.014)     .385 (.024)
           Empirical   .230 (.019)     .231 (.020)
τ22        Simulated   .238            .230
           Model       .300 (.013)     .387 (.025)
           Empirical   .231 (.019)     .232 (.020)
τ23        Simulated   .136            .140
           Model       .186 (.010)     .240 (.017)
           Empirical   .134 (.007)     .134 (.007)
τ31        Simulated   .284            .400
           Model       .188 (.007)     .243 (.012)
           Empirical   .316 (.032)     .387 (.052)
τ32        Simulated   .289            .404
           Model       .188 (.007)     .244 (.012)
           Empirical   .287 (.031)     .392 (.051)
τ33        Simulated   .159            .231
           Model       .127 (.006)     .165 (.012)
           Empirical   .160 (.010)     .217 (.017)
γ0         Simulated   .181            .266
           Model       .122 (.004)     .157 (.008)
           Empirical   .187 (.016)     .256 (.029)
γ1         Simulated   .220            .313
           Model       .334 (.014)     .431 (.026)
           Empirical   .301 (.025)     .231 (.014)
γ2         Simulated   .235            .303
           Model       .230 (.009)     .298 (.017)
           Empirical   .240 (.014)     .297 (.026)

With the misspecified variance simulations we again find the model-based standard errors for the βi and τij to be very similar to the corresponding standard errors in Table 24. As in the correct variance case, the model-based standard error for γ1 is overestimated. The empirical estimates are also quite good, even in the simulations using combination II of φ. There are some parameters for which the empirical estimator produces estimated standard errors which are on average too low, but there is no clear indication of underestimation in this situation. Finally, in Table 32, we present the observed rejection probabilities to evaluate the power of the four test statistics in the multiple parameter reduction from separate pollution effects to a common effect.

Table 32. Observed Rejection Probabilities: Separate to Common.
             A. Correct                          B. Misspecified
             Combination I      Combination II   Combination I      Combination II
Test         0.10  0.05  0.01   0.10  0.05  0.01 0.10  0.05  0.01   0.10  0.05  0.01
Wald/Mod     .358  .236  .102   .274  .168  .045 .120  .064  .012   .086  .047  .014
Wald/Emp     .441  .319  .144   .396  .290  .121 .350  .234  .076   .336  .226  .080
Score/Emp    .441  .315  .134   .390  .286  .115 .340  .224  .072   .325  .219  .071
Dev/Scaled   .359  .238  .102   .275  .168  .045 .120  .064  .012   .086  .047  .014

The model-based test statistics have lower power than the empirical test statistics in this table. Recall that the model-based estimator overestimated the standard errors for γ1 (while the standard errors for γ2 were about right). This overestimation would explain why the rejection probabilities for the model-based Wald test are lower than for the empirical tests and, as usual, the scaled deviance test gives rejection probabilities that are almost identical to those for the model-based Wald test.

The results of the simulations under the alternative hypotheses in this more complicated case with three series are qualitatively very similar to the results of the simple simulations with three series. We found that the estimator of φ performed adequately in estimating the average level of dispersion and that the regression parameter estimator did well for all parameters, including the "pollution" parameters that were of most interest. We also found the empirical estimator of the variances to perform well in these simulations. The underestimation of standard errors apparent in the more complicated case with a single series was not so clearly apparent in these three series simulations, possibly because the levels of dispersion considered in this section were not as large relative to the overall mean levels as they were in the single series simulations where, in the simulations using the full model to generate data, only β0 = log 0.75 was considered, with values of φ as large as 2 or 3.
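The kind of data generation used throughout these simulations can be sketched concretely. The snippet below is an illustrative reconstruction, not the thesis's actual simulation code: it assumes NumPy, and uses a gamma-mixed Poisson draw to produce counts whose variance is φ times the mean; the means, φ values, and series sizes are made-up stand-ins for the simulation settings described above.

```python
import numpy as np

def overdispersed_poisson(mu, phi, size, rng):
    """Draw counts with mean mu and variance phi*mu via a gamma-mixed Poisson.

    The gamma multiplier has mean 1 and variance (phi - 1)/mu, so
    Var(y) = mu + mu**2 * (phi - 1)/mu = phi * mu.
    """
    if phi <= 1.0:                 # phi = 1 reduces to the ordinary Poisson
        return rng.poisson(mu, size)
    shape = mu / (phi - 1.0)       # gamma shape k, scale 1/k: mean 1, variance 1/k
    eps = rng.gamma(shape, 1.0 / shape, size)
    return rng.poisson(mu * eps)

rng = np.random.default_rng(42)
# Three series with a common mean but different dispersion levels phi,
# mimicking the "correct variance" three-series setting.
mu, phis, n = 5.0, [1.0, 2.0, 4.0], 20000
series = [overdispersed_poisson(mu, phi, n, rng) for phi in phis]
for phi, y in zip(phis, series):
    print(f"phi={phi}: var/mean = {y.var() / y.mean():.2f}")
```

Fitting a model with a single common φ to series generated this way reproduces the pattern noted above: the averaged dispersion estimate understates the variability of the most dispersed series and overstates that of the least dispersed one.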
The model-based estimator of variance performed very badly in these three series simulations because the models did not allow for the possibility that different series may have different amounts of dispersion. Thus, in a sense, the variance was actually misspecified in all the simulations (even those labelled "correct variance"). The model-based test statistics considered also seemed to suffer from the misspecification of the variance function. When considering the levels of the model-based statistics, they rejected too often in some instances and less often than they should have in others. The empirical tests, even the empirical Wald test, achieved levels much closer to the nominal levels.

Chapter 5. Discussion

The analysis of relationships between air pollution and human health data from Prince George provided the motivation for carrying out a simulation study designed to evaluate the performance of possible variance estimators and test statistics available for making inference in over-dispersed Poisson models. Moderate amounts of over-dispersion are reported to have little effect on the estimation of regression parameters (Cox, 1983); however, the extra variability must be taken into account when estimating variances or testing hypotheses. In the Prince George study, over-dispersion was accounted for via a dispersion parameter φ (common to all of the data series), which was estimated by the deviance, G², divided by its degrees of freedom. To adjust for this over-dispersion, standard errors of the regression parameters were multiplied by the square root of the estimate φ̂G = G²/d.f., and the usual likelihood ratio test (based on a Poisson likelihood), or deviance statistic, for testing the viability of a reduced model nested within a larger model, was divided by φ̂G. Proceeding with this methodology, models were fit to the emergency room visits data that included temporal, meteorological and the pollution parameters of primary interest.
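The adjustment just described can be sketched in a few lines. The following is an illustrative sketch, not the thesis's code: it assumes NumPy, invents a small Poisson data set purely for illustration, fits a log-linear model by Newton-Raphson, forms φ̂G = G²/d.f., inflates the model-based standard errors by the square root of φ̂G, and divides a deviance difference by φ̂G as in the scaled deviance test.

```python
import numpy as np

def fit_poisson(X, y, n_iter=25):
    """Fit a Poisson log-linear model by Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        info = X.T @ (mu[:, None] * X)          # Fisher information X'WX, W = diag(mu)
        beta = beta + np.linalg.solve(info, X.T @ (y - mu))
    mu = np.exp(X @ beta)
    cov = np.linalg.inv(X.T @ (mu[:, None] * X))
    return beta, mu, cov

def deviance(y, mu):
    """Poisson deviance G^2 = 2 * sum[y log(y/mu) - (y - mu)]."""
    term = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0) / mu), 0.0)
    return 2.0 * np.sum(term - (y - mu))

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
y = rng.poisson(np.exp(0.5 + 0.3 * X[:, 1]))            # hypothetical counts

beta, mu, cov = fit_poisson(X, y)
G2 = deviance(y, mu)
phi_G = G2 / (n - X.shape[1])                  # dispersion estimate: deviance / d.f.
se_adj = np.sqrt(phi_G * np.diag(cov))         # SEs inflated by sqrt(phi_G)

# Scaled deviance test of the reduced (intercept-only) model:
beta0, mu0, _ = fit_poisson(X[:, :1], y)
scaled_dev = (deviance(y, mu0) - G2) / phi_G   # refer to a chi-square(1) distribution
print(beta, phi_G, se_adj, scaled_dev)
```

With genuinely Poisson data, as here, φ̂G should sit near one and the adjustment changes little; with over-dispersed data it inflates the standard errors and deflates the deviance statistic.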
The final models resulting from the model reduction procedures (for example, the final model for the pollutant TRS) included parameter estimates whose interpretation suggested that higher levels of pollution were related to lower numbers of emergency room visits for respiratory illness. Such counter-intuitive results raised questions about the appropriateness of the methodology and motivated a simulation study. This study was designed to investigate the possibility that the amounts of over-dispersion encountered in the Prince George study were large enough (relative to the mean levels of the data series) to affect the estimation of regression parameters, and to examine the performance of other possible variance estimators and test statistics that could be employed in the analysis of such over-dispersed Poisson data. An alternate estimator of the dispersion parameter involving the Pearson X² statistic, φ̂X = X²/d.f., was discussed and was found to be superior to the estimator φ̂G. In addition to the model-based estimator of the covariance matrix of the estimated regression parameters described in the methodology for the Prince George study, an empirical estimator, which does not rely on the correct specification of the variance function, was described. The two estimators of the covariance matrix give rise to two versions of the Wald test as alternatives to the scaled deviance test used in the Prince George study for determining if certain parameters (such as pollution effects) contribute to the fit of the model. An empirical score statistic, suggested in Breslow (1990), which uses the estimating equations themselves for inference in over-dispersed Poisson models, was also considered. In the simple simulations described first in this thesis, the regression parameter estimates were all very close to the simulated values, even when data were simulated with large amounts of over-dispersion and with a misspecified variance function.
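Both of these alternatives can be sketched directly. The snippet below is an illustrative reconstruction, not the thesis's code, and assumes NumPy: it fits a Poisson log-linear model to hypothetical over-dispersed counts, forms the Pearson-based dispersion estimate φ̂X = X²/d.f., and contrasts the model-based covariance φ̂X (X'WX)⁻¹ with an empirical "sandwich" covariance that replaces the middle of the product with the observed squared residuals.

```python
import numpy as np

def fit_poisson(X, y, n_iter=25):
    """Fit a Poisson log-linear model by Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        beta = beta + np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    return beta

rng = np.random.default_rng(7)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
mu_true = np.exp(1.0 + 0.2 * X[:, 1])
k = mu_true / 2.0                       # gamma-mixed Poisson: Var(y) = 3 * mu
y = rng.poisson(mu_true * rng.gamma(k, 1.0 / k))

beta = fit_poisson(X, y)
mu = np.exp(X @ beta)
resid = y - mu

# Pearson-based dispersion estimate: phi_X = X^2 / d.f.
phi_X = np.sum(resid**2 / mu) / (n - X.shape[1])

# Model-based covariance: phi_X * (X'WX)^{-1}, with W = diag(mu).
A_inv = np.linalg.inv(X.T @ (mu[:, None] * X))
cov_model = phi_X * A_inv

# Empirical ("sandwich") covariance: A^{-1} B A^{-1}, B = X' diag((y-mu)^2) X.
B = X.T @ ((resid**2)[:, None] * X)
cov_emp = A_inv @ B @ A_inv

print(np.sqrt(np.diag(cov_model)))
print(np.sqrt(np.diag(cov_emp)))
```

Here the variance really is proportional to the mean, so the two covariance estimates roughly agree; when the variance function is misspecified, only the sandwich form remains trustworthy, which is the property exploited by the empirical Wald and score tests.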
The simple simulations with only one series of data did not show any clear differences between the estimators of the variances nor between the test statistics. With the three series simple simulations, the variance function used to fit the data assumed a common φ whereas a separate φ applied in the simulation of each series. Thus the variance function was always misspecified, even for the simulations labelled "correct variance". It appears that in this situation the model-based estimator overestimates (underestimates) the standard errors for regression parameters corresponding to series with simulated dispersion smaller (larger) than the estimated dispersion (which is the average of the simulated values from the three series). We found that, when considering the levels of the test statistics, the observed rejection probabilities of the model-based test statistics often did not approximate the nominal rates well, whereas the empirical test statistics always achieved levels close to nominal. We found an interesting agreement between the model Wald and scaled deviance tests, as well as between the empirical score and empirical Wald tests. Using the more complicated model for a single series of data, estimates of φ and of the regression parameters were relatively unaffected by the presence of over-dispersion, although the estimators of these parameters became more variable and may have been affected by collinearity between some of the predictors. Standard errors of the regression parameters were accurately estimated with either the model-based estimator or the empirical estimator when the data were generated using the correct variance formulation, but there was some evidence the model-based estimator performs poorly when the misspecified variance formulation was used to generate data. It was hypothesized that the problem with the model-based estimator was similar to the problem in the three series simple case.
The observations in different seasons of the one series complicated case had different mean values and therefore, under the misspecified variance formulation, had different amounts of dispersion (a situation similar to four separate series generated with the correct variance function). The empirical estimator appeared to do better in the misspecified variance simulations but seemed to underestimate the standard errors when the level of dispersion was large relative to the overall mean level. In these one series complicated simulations, the problems with the model-based standard errors did not translate into poor performance of the model-based test statistics; this is in contrast to the simple case simulations with three series. There were no detectable differences among any of the four test statistics under study in this one series complicated case. The results of the simulations in the more complicated case with three series were very similar to the results of the simple simulations with three series. The estimator of φ performed adequately in estimating the average level of dispersion and the regression parameter estimators did well for all parameters, including the "pollution" parameters that were of primary interest. The empirical estimator of the variances also performed well in these simulations. The underestimation of standard errors apparent in the more complicated case with a single series was not so clearly apparent in the three series simulations, possibly because the levels of dispersion considered were not as large relative to the overall mean levels. The model-based estimator of variance performed very badly in the three series simulations because, again, the models used to fit the data did not allow for the possibility of differing amounts of over-dispersion in the different series. The model-based test statistics considered also seemed to suffer from the misspecification of the variance function.
When considering the levels of the statistics, these tests did not achieve the nominal rates. The empirical tests, even the empirical Wald test, achieved levels much closer to the nominal levels. Overall, it appears solutions of the score equations produce good estimates of regression parameters in a situation with levels of over-dispersion similar to those in the Prince George study. The empirical estimator of the covariance matrix of the regression parameters appears preferable to the model-based estimator, which relies on correct specification of the variance function. It appears that in situations where the variance function is misspecified, the two model-based test statistics, which rely on a single estimated over-dispersion parameter, may perform poorly. This is in sharp contrast to the empirical test statistics, which always performed well in this study. An interesting side note to the investigation of the test statistics was the very close agreement between the two model-based tests throughout the simulation study. Only brief speculations were made regarding the possibility that different, and perhaps more appropriate, methods would have led to different conclusions in the Prince George study. It appears that, on average, the estimators of the regression parameters would yield correct estimates of the effects of pollution on emergency room visits, but it is unclear whether empirical estimates of the standard errors or an empirical test statistic would have led to a different set of final parameters in the models.

Bibliography

[1] Breslow, N. (1984). Extra-Poisson Variation in Log-Linear Models. Applied Statistics, 33, 38-44.
[2] Breslow, N. (1990). Tests of Hypotheses in Overdispersed Poisson Regression and Other Quasi-Likelihood Models. Journal of the American Statistical Association, 85, 565-571.
[3] Carroll, R. J. and Ruppert, D. (1988). Transformation and Weighting in Regression. New York: Chapman and Hall.
[4] Cox, D. R. (1983).
Some Remarks on Overdispersion. Biometrika, 70, 269-274.
[5] Inagaki, N. (1973). Asymptotic Relations Between the Likelihood Estimating Function and the Maximum Likelihood Estimator. Annals of the Institute of Statistical Mathematics, 25, 1-26.
[6] Knight, K., Leroux, B., Millar, J., and Petkau, A. J. (1988). Air Pollution and Human Health: A Study Based on Hospital Admissions Data from Prince George, British Columbia. Report prepared under contract with the Health Protection Branch, Department of National Health and Welfare. Also issued as SIMS Technical Report 128, Department of Statistics, University of British Columbia (February 1989).
[7] Knight, K., Leroux, B., Millar, J., and Petkau, A. J. (1989). Air Pollution and Human Health: A Study Based on Emergency Room Visits Data from Prince George, British Columbia. Report prepared under contract with the Health Protection Branch, Department of National Health and Welfare. Also issued as SIMS Technical Report 136, Department of Statistics, University of British Columbia (June 1989).
[8] Liang, K. Y. and Zeger, S. L. (1986). Longitudinal Data Analysis Using Generalized Linear Models. Biometrika, 73, 13-22.
[9] McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman and Hall.
[10] McNeney, B. and Petkau, A. J. (1991). Air Pollution and Human Health: A Follow-up Study Based on Emergency Room Visits Data from Prince George, British Columbia. Report prepared under contract with the Ministry of Health, Province of British Columbia.
[11] Moore, D. F. (1986). Asymptotic Properties of Moment Estimators for Overdispersed Counts and Proportions. Biometrika, 73, 583-588.
[12] Royall, R. M. (1986). Model Robust Confidence Intervals. International Statistical Review, 54, 199-214.
[13] Wedderburn, R. W. M. (1974). Quasi-Likelihood Functions, Generalized Linear Models, and the Gauss-Newton Method. Biometrika, 61, 439-447.
[14] White, H. (1982).
Maximum Likelihood Estimation of Misspecified Models. Econometrica, 50, 1-25.